---
abstract: |
  We consider Steklov eigenvalues of nearly hyperspherical domains in $\mathbb{R}^{d + 1}$ with $d\ge 3$. In previous work, treating such domains as perturbations of the ball, we proved that the Steklov eigenvalues are analytic functions of the domain perturbation parameter. Here, we compute the first-order term of the asymptotic expansion and show that the first-order perturbations are eigenvalues of a Hermitian matrix, whose entries can be written explicitly in terms of the Pochhammer symbol and Wigner $3j$-symbols. We analyse the asymptotic expansion and show the following isoperimetric results among domains with fixed volume: (1) for an infinite subset of Steklov eigenvalues, the ball is not optimal, and (2) for a different infinite subset of Steklov eigenvalues, the ball is a stationary point.
address:
- Department of Mathematics, Wake Forest University, Winston-Salem, NC
- Department of Mathematics, Denison University, Granville, OH
author:
- Chee Han Tan
- Robert Viator
bibliography:
- Steklov-Asymptotics.bib
title: Steklov Eigenvalues of Nearly Hyperspherical Domains
---

# Introduction

Let $\Omega\subset\mathbb{R}^{d + 1}$ be a bounded domain with $d\ge 1$. The Steklov eigenvalue problem for $(\lambda, u)$ on $\Omega$ is given by $$\begin{aligned} {2} \label{eq:Steklov1} \Delta u & = 0 && \ \ \textnormal{ in } \Omega, \\ \label{eq:Steklov2} \partial_\mathbf{n}u & = \lambda u && \ \ \textnormal{ on } \partial\Omega, \end{aligned}$$ where $\Delta$ is the Laplacian acting on $H^1(\Omega)$, $\partial_\mathbf{n}u = \nabla u\cdot \mathbf{n}$ is the unit outward normal derivative on the boundary $\partial\Omega$, and $\lambda$ is the spectral parameter. It is well-known that the Steklov spectrum is discrete as long as the trace operator $T\colon H^1(\Omega)\to L^2(\partial\Omega)$ is compact [@girouard2017].
Moreover, the eigenvalues are real and we enumerate them, counting multiplicity, in increasing order $$0 = \lambda_0(\Omega) < \lambda_1(\Omega)\le \lambda_2(\Omega)\le \dots \nearrow\infty.$$ The Steklov eigenvalue problem has received considerable attention in the literature; see the survey papers [@girouard2017; @colbois2023] and references therein. The Steklov eigenvalue problem was first introduced by Vladimir Steklov in [@stekloff1902] to describe the stationary heat distribution in a body $\Omega$ whose heat flux through the boundary is proportional to the temperature. For planar domains, the Steklov eigenvalues are the squares of the natural frequencies of a vibrating free membrane with all its mass concentrated along the boundary [@Bandle p. 95]. Steklov eigenvalues also have applications in optimal material design for both electromagnetism and torsional rigidity [@lipton1998a; @lipton1998b]. Recently, Cakoni et al. [@cakoni2016] used Steklov eigenvalues in nondestructive testing, where they established a crucial relationship between small changes in the (possibly complex-valued) refractive index of a scattering object and the corresponding change in the eigenvalue of a modified Steklov problem. For this problem, numerical results in [@cakoni2016] revealed that a localised defect of the refractive index in a disc perturbs only a small number of modified Steklov eigenvalues. Isoperimetric inequalities for Steklov eigenvalues have been explored since the mid-twentieth century. The first major result was obtained by Weinstock in his seminal 1954 paper [@weinstock1954], where he showed that the disc uniquely maximises the first nontrivial perimeter-normalised Steklov eigenvalue $\lambda_1(\Omega)\lvert{\partial\Omega}\rvert$ among all bounded simply connected planar domains with smooth boundary.
For higher eigenvalues, Girouard and Polterovich showed that the $n$th perimeter-normalised Steklov eigenvalue is maximised in the limit by a sequence of simply connected planar domains degenerating to the disjoint union of $n$ identical discs. At the same time, it is known that Weinstock's result fails for non-simply-connected planar domains [@girouard2017 Example 4.2.5]. In dimension 3 or higher, Fraser and Schoen [@fraser2019] showed that Weinstock's result fails for general contractible domains, but Bucur et al. [@bucur2021 Theorem 3.1] showed that Weinstock's result holds for all bounded convex domains with Lipschitz boundary. While it is natural to consider the maximisation of Steklov eigenvalues with prescribed perimeter (because the spectral parameter $\lambda$ appears on the boundary $\partial\Omega$), in this paper we focus on the maximisation of Steklov eigenvalues with prescribed volume. For $\Omega\subset\mathbb{R}^{d + 1}$, let $\Lambda(\Omega)\coloneqq \lambda(\Omega)\cdot\lvert{\Omega}\rvert^{\frac{1}{d + 1}}$ denote the volume-normalised Steklov eigenvalue. Brock proved that the ball uniquely maximises $\Lambda_1(\Omega)$ for bounded Lipschitz domains $\Omega\subset\mathbb{R}^{d + 1}$ in all dimensions. For higher eigenvalues, Bogosel, Bucur, and Giacomini [@bogosel2017] obtained existence and regularity results for the shape optimiser for $\Lambda_n(\Omega)$, $n\ge 2$, on bounded Lipschitz domains. In dimension 2, numerical results from [@bogosel2016; @bogosel2017; @akhmetgaliyev2017] suggested that the optimal domain is unique (up to dilations and rigid transformations), has $n$-fold symmetry, and has at least one axis of symmetry. In particular, the ball is not a maximiser for an infinite subset of Steklov eigenvalues for planar domains; this was confirmed for reflection-symmetric domains in [@viator2018].
Motivated by the asymptotic work of Lord Rayleigh [@Rayleigh] and Wolf and Keller [@wolf1994] on the minimisation of Laplace-Dirichlet eigenvalues on planar domains, Viator and Osting adopted their perturbative approach to study Steklov eigenvalues on reflection-symmetric nearly circular planar domains [@viator2018] and nearly spherical domains [@viator2022]. In dimension 3, Viator and Osting [@viator2022 Theorem 1.1] proved that for $n = 1, 2, \dots$, $\Lambda_{(n + 1)^2 - 1}$ is not maximised by the ball but $\Lambda_{n^2}$ is stationary for a ball, suggesting that the ball is a natural candidate for maximiser of $\Lambda_{n^2}$. However, recent numerical results from Antunes [@antunes2021] suggest that the ball maximises $\Lambda_4$ but not $\Lambda_9$ and $\Lambda_{16}$. The same numerical results also suggest that the optimal domain for $\Lambda_n$ has $n$ "buds" and that some of the optimal domains have symmetries related to Platonic solids. Tuning of mixed Steklov-Neumann boundary conditions has also recently been studied by Ammari, Imeri, and Nigam [@ammari2020], who designed an algorithm that generates the mixed boundary conditions necessary to obtain desired resonance effects. Besides shape optimisation and isoperimetric results, there have been numerous recent results connecting Steklov eigenvalues to free-boundary minimal surfaces, inverse problems, and more; see [@colbois2023] for an extensive, though not exhaustive, review of recent work on Steklov eigenvalues.
## Nearly hyperspherical domains

Given $d\ge 3$, we consider the Steklov eigenvalue problem on a *nearly hyperspherical domain* $\Omega_\varepsilon\subset\mathbb{R}^{d + 1}$ in hyperspherical coordinates $(r, \hat\theta)$, where $\Omega_\varepsilon$ has the form $$\label{eq:Domain} \Omega_\varepsilon= \left\{(r, \hat\theta): 0\le r\le 1+ \varepsilon\rho(\hat\theta), \, \hat\theta \in S^d \right\}, \ \ \rho(\hat\theta) = \sum_{p = 0}^\infty\sum_{q = 1}^{N(d, p)} A_{p, q} Y_{p, q}(\hat\theta).$$ Here, $\varepsilon\ge 0$ is a small perturbation parameter, $\rho\in C^1(S^d)$ is a perturbation function which we expand in the basis of real hyperspherical harmonics (see Section [2.2](#sec:Harmonics){reference-type="ref" reference="sec:Harmonics"}), and $S^d\subset\mathbb{R}^{d + 1}$ is the $d$-dimensional unit sphere. For $\varepsilon= 0$, $\Omega_0$ is the $(d + 1)$-dimensional unit ball $B\subset\mathbb{R}^{d + 1}$ and the eigenvalues are the nonnegative integers $\lambda_{\ell, m} = \ell$, with multiplicity $N(d, \ell)$ given by $$\label{eq:Steklov_multiplicity} N(d, 0) = 1 \ \ \textnormal{ and } \ \ N(d, \ell) = \binom{d + \ell}{d} - \binom{d + \ell - 2}{d}, \ \ \ell\ge 1.$$ The corresponding eigenfunctions in hyperspherical coordinates $(r, \hat\theta)$ are given by $$u_{\ell, m}(r, \hat\theta) = r^\ell Y_\ell^m(\hat\theta), \ \ \ell\in\mathbb{N}= \{0, 1, 2, \dots\}, \ \ 1\le m\le N(d, \ell),$$ where $Y_\ell^m(\hat\theta)$ is a *complex hyperspherical harmonic* of degree $\ell$ on $S^d$. Viator and Osting proved that the Steklov eigenvalues $\lambda^\varepsilon$ of nearly circular ($d = 1$) and nearly spherical ($d = 2$) domains are analytic with respect to $\varepsilon$ [@viator2020]. This analyticity result was recently extended to nearly hyperspherical domains [@tan2023].
The proof relies on the fact that the Steklov eigenvalues can be interpreted as the eigenvalues of the Dirichlet-to-Neumann map $G_{\rho, \varepsilon}\colon H^{1/2}(\partial\Omega_\varepsilon)\to H^{-1/2}(\partial\Omega_\varepsilon)$.

## Main results

In previous work [@viator2018; @viator2022], Viator and Osting used perturbation methods to study the asymptotic expansion of Steklov eigenvalues $\lambda^\varepsilon$ for reflection-symmetric nearly circular domains and nearly spherical domains. Moreover, these asymptotic results were used to establish local versions of the isoperimetric inequalities for certain Steklov eigenvalues. In this paper we extend their results to nearly hyperspherical domains. Given $d\ge 3$, we recall the volume-normalised Steklov eigenvalue $\Lambda(\Omega)\coloneqq \lambda(\Omega)\cdot\lvert{\Omega}\rvert^{\frac{1}{d + 1}}$. For $k\in\mathbb{Z}^+$, define the index $N_{d, k} = \sum_{\ell = 1}^{k - 1} N(d, \ell)$.

**Theorem 1**. *Let $d\ge 3$ and $k\in\mathbb{Z}^+$. Then $\Lambda_{1 + N_{d, k}}$ is stationary in the sense that, for every perturbation function $\rho\in C^2(S^d)$, the map $\varepsilon\mapsto \Lambda_{1 + N_{d, k}}(\Omega_\varepsilon)$ is nonincreasing in $\lvert{\varepsilon}\rvert$ for $\lvert{\varepsilon}\rvert$ sufficiently small.*

Theorem [Theorem 1](#thm:Iso_stat){reference-type="ref" reference="thm:Iso_stat"} suggests that the ball is a natural candidate for maximiser of $\Lambda_{1 + N_{d, k}}$ in dimensions $d + 1\ge 4$. Indeed, the recent numerical results from Antunes for $d + 1 = 4$ suggest that $\Lambda_{1 + N_{3, 2}} = \Lambda_5$ is maximised by the ball. On the other hand, we show that the $(d + 1)$-dimensional ball does not maximise another infinite subset of Steklov eigenvalues.

**Theorem 2**. *Let $d\ge 3$ and $k\in\mathbb{Z}^+$. Then $\Lambda_{N_{d, k + 1}}$ is not maximised by the $(d + 1)$-dimensional ball.*

## Outline

This paper is organised as follows.
We begin by reviewing hyperspherical coordinates and hyperspherical harmonics, and by computing the first-order asymptotic expansions for geometric quantities related to $\Omega_\varepsilon$, in Section [2](#sec:prelim){reference-type="ref" reference="sec:prelim"}. In Section [3](#sec:asymptotic){reference-type="ref" reference="sec:asymptotic"}, we derive the first-order asymptotic expansion for Steklov eigenvalues of $\Omega_\varepsilon$; see Theorem [Theorem 7](#thm:Oeps_matrix){reference-type="ref" reference="thm:Oeps_matrix"}. Section [4](#sec:trace){reference-type="ref" reference="sec:trace"} and Section [5](#sec:Wigner){reference-type="ref" reference="sec:Wigner"} are devoted to proving Theorem [Theorem 2](#thm:Iso_notball){reference-type="ref" reference="thm:Iso_notball"} and Theorem [Theorem 1](#thm:Iso_stat){reference-type="ref" reference="thm:Iso_stat"}, respectively. In Section [5](#sec:Wigner){reference-type="ref" reference="sec:Wigner"}, we also include the asymptotic result for the special case where the domain perturbation function is given by $\rho = Y_{p, q}(\hat\theta)$; see Theorem [Theorem 14](#thm:Ypq){reference-type="ref" reference="thm:Ypq"}.

# Preliminaries {#sec:prelim}

In this section, we first review vector calculus in hyperspherical coordinates. We then define and record several important properties of the hyperspherical harmonics. In particular, we derive the addition theorem for the derivatives of hyperspherical harmonics, which will be crucial in proving Theorem [Theorem 2](#thm:Iso_notball){reference-type="ref" reference="thm:Iso_notball"}. Finally, we compute the first-order asymptotic expansions for the volume of $\Omega_\varepsilon$ and the unit outward normal vector $\mathbf{n}_{\rho, \varepsilon}$ to $\partial\Omega_\varepsilon$.

## Hyperspherical coordinates in $\mathbb{R}^{d + 1}$ {#sec:Coordinates}

Let $(x_1, x_2, \dots, x_{d + 1})$ denote the $(d + 1)$-dimensional Cartesian coordinates.
The $(d + 1)$-dimensional hyperspherical coordinates $(r, \theta_1, \theta_2, \dots, \theta_d)$ are defined by the following equations: $$\begin{aligned} x_{d + 1} & = r\cos\theta_d, \\ x_d & = r\sin\theta_d\cos\theta_{d - 1}, \\ x_{d - 1} & = r\sin\theta_d\sin\theta_{d - 1}\cos\theta_{d - 2}, \\ \vdots & \qquad \ \ \vdots \qquad \ \ \vdots \qquad \ \ \vdots \\ x_3 & = r\sin\theta_d\sin\theta_{d - 1}\dots\cos\theta_2, \\ x_2 & = r\sin\theta_d\sin\theta_{d - 1}\dots\sin\theta_2\sin\theta_1, \\ x_1 & = r\sin\theta_d\sin\theta_{d - 1}\dots\sin\theta_2\cos\theta_1, \end{aligned}$$ where the azimuth $0\le \theta_1 = \phi< 2\pi$ and the inclinations $0\le \theta_2, \theta_3, \dots, \theta_d\le\pi$ parametrise the $d$-dimensional sphere of radius $r\ge 0$ in $\mathbb{R}^{d + 1}$. The hyperspherical coordinates are an orthogonal curvilinear coordinate system in $\mathbb{R}^{d + 1}$. Define $\theta_0\coloneqq r$. The associated metric tensor $g$ is diagonal with components $$\label{eq:SCmetric} g_{ij} = \sum_{k = 1}^{d + 1} \frac{\partial x_k}{\partial\theta_i}\frac{\partial x_k}{\partial\theta_j} = h_i^2\delta_{i,j}, \ \ 0\le i, j\le d,$$ where the scale factors are given by $h_0 = 1$ and $h_i = r\prod_{k = i + 1}^d \sin\theta_k$ for $i = 1, 2, \dots, d$; the latter includes the empty product, which gives $h_d = r$. Here, $\delta_{i, j}$ denotes the usual Kronecker delta. Let $\hat{\boldsymbol{r}}$, $\hat{\boldsymbol{\theta}}_1$, $\hat{\boldsymbol{\theta}}_2$, ..., $\hat{\boldsymbol{\theta}}_d$ be the orthonormal hyperspherical basis vectors and define $\eta_j = h_j/r$ for $j = 1, 2, \dots, d$.
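Since the coordinate formulas above are easy to mis-transcribe, the following short numerical check (our own sketch, not part of the paper's argument; all function names are ours) verifies for a sample dimension $d = 4$ that the map lands on the sphere of radius $r$ and that the finite-difference metric is diagonal with the stated scale factors.

```python
import numpy as np

def to_cartesian(r, theta):
    """Map hyperspherical coordinates (r, theta_1, ..., theta_d) to R^{d+1}."""
    d = len(theta)
    x = np.empty(d + 1)
    s = r
    for i in range(d + 1, 2, -1):          # x_{d+1}, x_d, ..., x_3
        x[i - 1] = s * np.cos(theta[i - 2])
        s *= np.sin(theta[i - 2])
    x[1] = s * np.sin(theta[0])            # x_2 carries sin(theta_1)
    x[0] = s * np.cos(theta[0])            # x_1 carries cos(theta_1)
    return x

def metric(r, theta, eps=1e-6):
    """Central-difference metric g_ij = sum_k (dx_k/dtheta_i)(dx_k/dtheta_j), i, j >= 1."""
    d = len(theta)
    J = np.empty((d, d + 1))               # row j holds dx/dtheta_{j+1}
    for j in range(d):
        tp, tm = theta.copy(), theta.copy()
        tp[j] += eps
        tm[j] -= eps
        J[j] = (to_cartesian(r, tp) - to_cartesian(r, tm)) / (2 * eps)
    return J @ J.T

r, theta = 1.3, np.array([0.7, 1.1, 0.5, 2.0])   # d = 4, so Omega lives in R^5
g = metric(r, theta)
# scale factors h_j = r * prod_{k = j+1}^{d} sin(theta_k), with empty product for j = d
h = np.array([r * np.prod(np.sin(theta[j + 1:])) for j in range(len(theta))])
```

With this sketch, `np.linalg.norm(to_cartesian(r, theta))` returns $r$ up to rounding, and `g` agrees with `np.diag(h**2)` to finite-difference accuracy.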
The gradient operator in hyperspherical coordinates is given by $$\nabla = \frac{\partial}{\partial r}\hat{\boldsymbol{r}} + \frac{1}{r}\nabla_{S^d},$$ where $\nabla_{S^d}$ is the gradient on $S^d$: $$\label{eq:SCgrad} \nabla_{S^d} = \sum_{j = 1}^d \frac{1}{\eta_j}\frac{\partial}{\partial\theta_j}\hat{\boldsymbol{\theta}}_j.$$ The Laplacian in hyperspherical coordinates is given by $$\Delta = \frac{1}{r^d}\frac{\partial}{\partial r}\left(r^d\frac{\partial}{\partial r}\right) + \frac{1}{r^2}\Delta_{S^d},$$ where $\Delta_{S^d}$ is the spherical Laplacian (Laplace-Beltrami operator) on $S^d$: $$\label{eq:SCbeltrami} \Delta_{S^d} = \sum_{j = 1}^d \frac{1}{\eta_j^2\sin^{j - 1}(\theta_j)}\frac{\partial}{\partial\theta_j}\left(\sin^{j - 1}(\theta_j)\frac{\partial}{\partial\theta_j}\right).$$ The volume element in hyperspherical coordinates is given by $dV = r^d dr\, d\sigma_d$, where $d\sigma_d$ is the surface element over $S^d$: $$\label{eq:SCsurface} d\sigma_d(\hat\theta) = \left(\prod_{j = 2}^d \sin^{j - 1}(\theta_j)\right) d\theta_1\, d\theta_2\dots d\theta_d.$$

*Remark 3*. Throughout this paper, we denote by $\partial_r$ and $\partial_j$ the partial derivatives with respect to $r$ and $\theta_j$, respectively, for $j = 1, 2, \dots, d$.

## Hyperspherical harmonics on $S^d$ {#sec:Harmonics}

For $\ell\in\mathbb{N}= \{0, 1, 2, \dots\}$, let $\mathbf{H}_\ell^d$ denote the space of all hyperspherical harmonics of degree $\ell$ on $S^d$. The dimension of $\mathbf{H}_\ell^d$ is the same as the multiplicity $N(d, \ell)$ of the Steklov eigenvalue $\lambda = \ell$ of the $(d + 1)$-dimensional unit ball $B$. Let $\{Y_\ell^m\}_{m = 1}^{N(d, \ell)}$ be an orthonormal basis of $\mathbf{H}_\ell^d$ with respect to the complex inner product on $L^2(S^d)$.
The spaces $\mathbf{H}_\ell^d$ are pairwise orthogonal in $L^2(S^d)$, *i.e.,* $$\int_{S^d} Y_\ell^m(\hat\theta)\overline{Y_k^n(\hat\theta)}\, d\sigma_d = \delta_{\ell, k}\delta_{m, n},$$ and the family $\{Y_\ell^m\}_{\ell \in \mathbb{N}, 1 \leq m \leq N(d,\ell)}$ forms a complete orthonormal basis of $L^2(S^d)$. It is well-known that each $Y_\ell^m$ is an eigenfunction of the spherical Laplacian $\Delta_{S^d}$ corresponding to the eigenvalue $-\ell(\ell + d - 1)$, *i.e.,* $$\label{eq:SH_eigs} \Delta_{S^d}Y_\ell^m(\hat\theta) = -\ell(\ell + d - 1)Y_\ell^m(\hat\theta), \ \ 1\le m\le N(d, \ell).$$ Multiplying [\[eq:SH_eigs\]](#eq:SH_eigs){reference-type="eqref" reference="eq:SH_eigs"} by $\overline{Y_\ell^n}$ and integrating by parts over $S^d$, we obtain the following integral identity: $$\label{eq:SH_eigs_grad} \int_{S^d} \nabla_{S^d}Y_\ell^m(\hat\theta)\cdot \nabla_{S^d}\overline{Y_\ell^n(\hat\theta)}\, d\sigma_d = \ell(\ell + d - 1)\delta_{m, n}, \ \ 1\le m, n\le N(d, \ell).$$ Let $P_n^\alpha$ and $C_n^{(\alpha)}$ denote the *associated Legendre polynomial* and the *Gegenbauer (ultraspherical) polynomial* of degree $n$, respectively, which can be defined through the Rodrigues formulas (see [@NIST Table 18.5.1] and [@Williams Eqs. 6.27 & 6.29]): $$\begin{aligned} P_n^\alpha(z) & = \frac{(-1)^\alpha}{2^n n!}(1 - z^2)^{\alpha/2}\frac{d^{n + \alpha}}{dz^{n + \alpha}}\left(z^2 - 1\right)^n, \\ C_n^{(\alpha)}(z) & = \frac{(2\alpha)_n}{(-2)^n\left(\alpha + \frac{1}{2}\right)_n n!}\left(1 - z^2\right)^{-\alpha + \frac{1}{2}}\frac{d^n}{dz^n}\left(1 - z^2\right)^{n + \alpha - \frac{1}{2}}, \ \ \alpha > - \frac{1}{2}, \, \alpha\neq 0, \end{aligned}$$ where $(z)_n = \Gamma(z + n)/\Gamma(z)$ is the Pochhammer symbol; see [@NIST Eq. 5.2.5].
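The Rodrigues formula for the Gegenbauer polynomials can be cross-checked against a computer algebra system. The snippet below is our own sanity check (not part of the paper): it compares the formula with SymPy's built-in `gegenbauer`, using the rising factorial `rf` for the Pochhammer symbol, for the half-integer parameters $\alpha = (d - 1)/2$ that occur in the hyperspherical harmonics.

```python
import sympy as sp

z = sp.symbols('z')

def gegenbauer_rodrigues(n, a):
    """Gegenbauer polynomial C_n^{(a)} built from the Rodrigues formula quoted above."""
    pref = sp.rf(2 * a, n) / ((-2) ** n * sp.rf(a + sp.Rational(1, 2), n) * sp.factorial(n))
    return pref * (1 - z**2) ** (-a + sp.Rational(1, 2)) \
                * sp.diff((1 - z**2) ** (n + a - sp.Rational(1, 2)), z, n)

def check(n, a, pts=(-sp.Rational(1, 2), sp.Rational(1, 3), sp.Rational(4, 5))):
    """Compare the Rodrigues construction with sympy.gegenbauer at sample points."""
    ref = sp.gegenbauer(n, a, z)
    rod = gegenbauer_rodrigues(n, a)
    return all(sp.simplify(rod.subs(z, p) - ref.subs(z, p)) == 0 for p in pts)

# alpha = (d - 1)/2 for d = 3, 4, 5, as in the hyperspherical harmonics
ok = all(check(n, sp.Rational(d - 1, 2)) for n in range(4) for d in (3, 4, 5))

# endpoint value C_n^{(alpha)}(1) = (2 alpha)_n / n!, which enters K(d, l) below
ok_endpoint = all(
    sp.simplify(sp.gegenbauer(n, sp.Rational(d - 1, 2), z).subs(z, 1)
                - sp.rf(d - 1, n) / sp.factorial(n)) == 0
    for n in range(5) for d in (3, 4, 5)
)
```

Both flags come out `True`, confirming the normalisation and sign conventions of the quoted formula.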
We define the complex hyperspherical harmonics $Y_\ell^m$ of degree $\ell$ on $S^d$ as $$Y_\ell^m(\hat\theta) = \widetilde Y_{m_2}^{m_1}(\phi, \theta_2)\prod_{j = 3}^d Y(\theta_j; m_{j - 1}, m_j),$$ where $m\coloneqq \left(m_1, m_2, \cdots, m_{d - 1}\right)$ is any $(d - 1)$-tuple satisfying the inequality $$\label{eq:SH_tuple} 0\le \lvert{m_1}\rvert\le m_2\le \dots\le m_{d - 1}\le m_d\coloneqq\ell,$$ and $\widetilde Y_{m_2}^{m_1}$ is the three-dimensional complex spherical harmonic $$\widetilde Y_{m_2}^{m_1}(\phi, \theta_2) = \sqrt{\frac{(2m_2 + 1)}{4\pi}\frac{(m_2 - m_1)!}{(m_2 + m_1)!}}\, e^{im_1\phi}P_{m_2}^{m_1}(\cos\theta_2).$$ The functions $Y(\theta_j; m_{j - 1}, m_j)$ are real-valued and defined by (see [@wen1985 Section II]) $$Y(\theta_j; m_{j - 1}, m_j) = \frac{1}{\mu_j}\left(\sin\theta_j\right)^{m_{j - 1}}C_{m_j - m_{j - 1}}^{\left(m_{j - 1} + \frac{j - 1}{2}\right)}(\cos\theta_j), \ \ j = 3, 4, \dots, d,$$ where $\mu_j$ is the normalisation constant of $Y(\theta_j; m_{j - 1}, m_j)$ (with respect to the measure $\sin^{j - 1}(\theta_j)\, d\theta_j$) satisfying $$\mu_j^2 = \frac{4\pi\Gamma(m_j + m_{j - 1} + j - 1)}{2^{2m_{j - 1} + j}(m_j - m_{j - 1})!\left(m_j + \frac{j - 1}{2}\right)\Gamma^2\left(m_{j - 1} + \frac{j - 1}{2}\right)}.$$ Here, $\Gamma(z)$ is the standard Gamma function and we define $\Gamma^2(z) = \left[\Gamma(z)\right]^2$.
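The normalisation constant $\mu_j$ can be tested by direct quadrature: for sample indices, $\int_0^\pi Y(\theta_j; m_{j - 1}, m_j)^2 \sin^{j - 1}(\theta_j)\, d\theta_j$ should equal $1$. The snippet below is our own verification sketch (function names ours), using SciPy's `eval_gegenbauer` and `quad`.

```python
import math
from scipy.special import eval_gegenbauer
from scipy.integrate import quad

def mu_squared(j, m_prev, m_j):
    """mu_j^2 as in the normalisation formula above, with m_prev = m_{j-1}."""
    return (4 * math.pi * math.gamma(m_j + m_prev + j - 1)
            / (2 ** (2 * m_prev + j) * math.factorial(m_j - m_prev)
               * (m_j + (j - 1) / 2) * math.gamma(m_prev + (j - 1) / 2) ** 2))

def norm_squared(j, m_prev, m_j):
    """Integral of [sin^{m_prev}(t) C_{m_j - m_prev}^{(m_prev + (j-1)/2)}(cos t)]^2 sin^{j-1}(t) dt."""
    alpha = m_prev + (j - 1) / 2
    f = lambda t: (math.sin(t) ** (2 * m_prev)
                   * eval_gegenbauer(m_j - m_prev, alpha, math.cos(t)) ** 2
                   * math.sin(t) ** (j - 1))
    val, _ = quad(f, 0.0, math.pi)
    return val

checks = [(3, 0, 0), (3, 1, 2), (4, 2, 3), (5, 0, 2)]
ok = all(abs(norm_squared(j, mp, mj) / mu_squared(j, mp, mj) - 1.0) < 1e-8
         for j, mp, mj in checks)
```

For instance, $\mu_3^2 = \pi/2$ for $m_2 = m_3 = 0$, matching $\int_0^\pi \sin^2\theta_3\, d\theta_3$.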
The real hyperspherical harmonics $Y_{\ell, m}$ of degree $\ell$ on $S^d$ can be defined in the same way as the three-dimensional real spherical harmonics $\widetilde Y_{m_2, m_1}$, *i.e.,* $$Y_{\ell, m}(\hat\theta) = \widetilde Y_{m_2, m_1}(\phi, \theta_2)\prod_{j = 3}^d Y(\theta_j; m_{j - 1}, m_j),$$ where $$\label{eq:SH_real} \widetilde Y_{m_2, m_1}(\phi, \theta_2) = \begin{dcases} \, \frac{i}{\sqrt{2}}\left[\widetilde Y_{m_2}^{m_1}(\phi, \theta_2) - (-1)^{m_1}\widetilde Y_{m_2}^{-m_1}(\phi, \theta_2)\right] & \ \ \textnormal{ if } m_1 < 0, \\ \, \widetilde Y_{m_2}^{0}(\phi, \theta_2) & \ \ \textnormal{ if } m_1 = 0, \\ \, \frac{1}{\sqrt{2}}\left[\widetilde Y_{m_2}^{-m_1}(\phi, \theta_2) + (-1)^{m_1}\widetilde Y_{m_2}^{m_1}(\phi, \theta_2)\right] & \ \ \textnormal{ if } m_1 > 0. \end{dcases}$$ It is straightforward to verify that the real hyperspherical harmonics are pairwise orthonormal in $L^2(S^d)$. For notational simplicity, throughout this paper we will suppress the dependence of $\rho$ and of the hyperspherical harmonics on $\hat\theta$ whenever appropriate.

*Remark 4*. Whenever we count all possible hyperspherical harmonics as $1\le m\le N(d, \ell)$ for a fixed degree $\ell\in\mathbb{N}$, this should be understood as counting over all tuples $m = \left(m_1, m_2, \dots, m_{d - 1}\right)$ satisfying the condition [\[eq:SH_tuple\]](#eq:SH_tuple){reference-type="eqref" reference="eq:SH_tuple"}. Take for instance $d = 3$ and $\ell = 2$. We then have $1\le m\le N(3, 2) = 9$, and the $9$ possible tuples $(m_1, m_2)$ satisfying $0\le \lvert{m_1}\rvert\le m_2\le m_3 = \ell = 2$ are $$\begin{aligned} (0, 0), \, (0, 1), \, (0, 2), \, (1, 1), \, (1, 2), \, (-1, 1), \, (-1, 2), \, (2, 2), \, (-2, 2). \end{aligned}$$

*Remark 5*. For any $\ell\in\mathbb{N}$, we will assume that the index $m = 1$ corresponds to the trivial tuple $(0, 0, \dots, 0)$. With this convention, the constant hyperspherical harmonic is $Y_0^1 = Y_{0, 1} = \lvert{S^d}\rvert^{-1/2}$.
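The multiplicity formula [\[eq:Steklov_multiplicity\]](#eq:Steklov_multiplicity){reference-type="eqref" reference="eq:Steklov_multiplicity"} can be checked against a brute-force count of the admissible tuples in [\[eq:SH_tuple\]](#eq:SH_tuple){reference-type="eqref" reference="eq:SH_tuple"}; the following is our own sketch (for $d\ge 3$), not part of the paper.

```python
from itertools import combinations_with_replacement
from math import comb

def N(d, l):
    """Multiplicity of the Steklov eigenvalue l on the unit ball in R^{d+1}."""
    return 1 if l == 0 else comb(d + l, d) - comb(d + l - 2, d)

def count_tuples(d, l):
    """Count tuples (m_1, ..., m_{d-1}) with 0 <= |m_1| <= m_2 <= ... <= m_{d-1} <= l (d >= 3)."""
    total = 0
    # nondecreasing chains (m_2, ..., m_{d-1}) with entries in {0, ..., l}
    for chain in combinations_with_replacement(range(l + 1), d - 2):
        total += 2 * chain[0] + 1          # choices of m_1 in {-m_2, ..., m_2}
    return total
```

In particular `N(3, 2)` returns `9`, matching the nine tuples listed in Remark 4, and for $d = 2$ the formula recovers the familiar count $2\ell + 1$.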
Another important result about hyperspherical harmonics is the addition theorem (see [@wen1985 Eq. 51]), which states that $$\label{eq:SH_addition} \sum_{m = 1}^{N(d, \ell)} Y_\ell^m(\hat\theta)\overline{Y_\ell^m(\hat\theta')} = K(d, \ell)C_\ell^{\left(\frac{d - 1}{2}\right)}(\hat{\boldsymbol{u}}\cdot\hat{\boldsymbol{u}}'), \ \ K(d, \ell)\coloneqq \frac{N(d, \ell)}{\lvert{S^d}\rvert\, C_\ell^{\left(\frac{d - 1}{2}\right)}(1)},$$ for any unit vectors $\hat{\boldsymbol{u}}, \hat{\boldsymbol{u}}'\in S^d$ with corresponding angular coordinates $\hat\theta, \hat\theta'$. Setting $\hat{\boldsymbol{u}} = \hat{\boldsymbol{u}}'$, we have $\hat{\boldsymbol{u}}\cdot\hat{\boldsymbol{u}} = 1$ and $$\label{eq:SH_addition_same} \sum_{m = 1}^{N(d, \ell)} \lvert{Y_\ell^m(\hat\theta)}\rvert^2 = \frac{N(d, \ell)}{\lvert{S^d}\rvert}.$$ We now establish the addition theorem for the partial derivatives of hyperspherical harmonics when $\hat{\boldsymbol{u}} = \hat{\boldsymbol{u}}'$. Our proof is inspired by [@winch1995].

**Theorem 6**. *Let $d\ge 3$ and $\ell\in\mathbb{N}$. For all $j = 1, 2, \dots, d$, we have $$\sum_{m = 1}^{N(d, \ell)} \frac{1}{\eta_j^2}\lvert{\partial_jY_\ell^m(\hat\theta)}\rvert^2 = (d - 1)K(d, \ell)C_{\ell - 1}^{\left(\frac{d + 1}{2}\right)}(1).$$*

*Proof.* For simplicity of notation, we write $C_\ell^{\left(\frac{d - 1}{2}\right)}(\hat{\boldsymbol{u}}\cdot\hat{\boldsymbol{u}}') = C(\hat{\boldsymbol{u}}\cdot\hat{\boldsymbol{u}}') = C(z)$.
For any fixed $j = 1, 2, \dots, d$, differentiating [\[eq:SH_addition\]](#eq:SH_addition){reference-type="eqref" reference="eq:SH_addition"} with respect to $\theta_j$ first and then $\theta_j'$ yields $$\label{eq:SH_addition_grad1} \sum_{m = 1}^{N(d, \ell)} \partial_jY_\ell^m(\hat\theta)\,\partial_{j'}\overline{Y_\ell^m(\hat\theta')} = K(d, \ell)\left[\frac{d^2C}{dz^2}\left[\hat{\boldsymbol{u}}\cdot\partial_{j'}\hat{\boldsymbol{u}}'\right]\left[\partial_j\hat{\boldsymbol{u}}\cdot\hat{\boldsymbol{u}}'\right] + \frac{dC}{dz}\left[\partial_j\hat{\boldsymbol{u}}\cdot \partial_{j'}\hat{\boldsymbol{u}}'\right]\right].$$ In the case of $\hat{\boldsymbol{u}} = \hat{\boldsymbol{u}}'$, we know that $z = \hat{\boldsymbol{u}}\cdot\hat{\boldsymbol{u}} = 1$ and this implies $\hat{\boldsymbol{u}}\cdot\partial_j\hat{\boldsymbol{u}} = 0$. Since $\hat{\boldsymbol{u}}\in S^d$ can be written as $\hat{\boldsymbol{u}} = \left(x_1, x_2, \dots, x_{d + 1}\right)/r$, computing $\lvert{\partial_j\hat{\boldsymbol{u}}}\rvert^2$ gives $$\lvert{\partial_j\hat{\boldsymbol{u}}}\rvert^2 = \frac{1}{r^2}\sum_{k = 1}^{d + 1} \left(\frac{\partial x_k}{\partial\theta_j}\right)^2 = \frac{h_j^2}{r^2} = \eta_j^2,$$ thanks to [\[eq:SCmetric\]](#eq:SCmetric){reference-type="eqref" reference="eq:SCmetric"}. Consequently, setting $\hat{\boldsymbol{u}} = \hat{\boldsymbol{u}}'$ in [\[eq:SH_addition_grad1\]](#eq:SH_addition_grad1){reference-type="eqref" reference="eq:SH_addition_grad1"} and rearranging yields $$\begin{aligned} \sum_{m = 1}^{N(d, \ell)} \frac{1}{\eta_j^2}\lvert{\partial_jY_\ell^m(\hat\theta)}\rvert^2 & = K(d, \ell)\frac{dC}{dz}\bigg|_{z = 1} = K(d, \ell)\cdot (d - 1)C_{\ell - 1}^{\left(\frac{d + 1}{2}\right)}(1), \end{aligned}$$ where we use the derivative formula [@NIST Eq. 18.9.19]. The desired result now follows.
◻

## Asymptotic expansions for geometric quantities

Let $\lvert{S^d}\rvert$ and $\lvert{B}\rvert = \lvert{S^d}\rvert/(d + 1)$ denote the surface area of $S^d$ and the volume of the $(d + 1)$-dimensional unit ball $B$, respectively. Using the orthogonality of hyperspherical harmonics, we see from [\[eq:Domain\]](#eq:Domain){reference-type="eqref" reference="eq:Domain"} that $$\int_{S^d} \rho(\hat\theta)\, d\sigma_d = \int_{S^d} A_{0, 1}Y_{0, 1}\, d\sigma_d = A_{0, 1}\lvert{S^d}\rvert^{1/2}.$$ Thus, an asymptotic expansion for the volume of $\Omega_\varepsilon$ is given by $$\begin{aligned} \lvert{\Omega_\varepsilon}\rvert = \int_{S^d} \int_0^{1 + \varepsilon\rho(\hat\theta)} r^d\, dr\, d\sigma_d & = \frac{1}{d + 1}\int_{S^d} \left(1 + \varepsilon\rho(\hat\theta)\right)^{d + 1}\, d\sigma_d \\ & = \frac{\lvert{S^d}\rvert}{d + 1} + \varepsilon\int_{S^d} \rho(\hat\theta)\, d\sigma_d + O(\varepsilon^2) \\ & = \lvert{B}\rvert + \varepsilon A_{0, 1}\lvert{S^d}\rvert^{1/2} + O(\varepsilon^2). \end{aligned}$$ In particular, we have that $$\label{eq:Asymp_volume} \begin{aligned} \lvert{\Omega_\varepsilon}\rvert^{\frac{1}{d + 1}} & = \lvert{B}\rvert^{\frac{1}{d + 1}} + \varepsilon\left(\frac{1}{d + 1}\lvert{B}\rvert^{\frac{1}{d + 1} - 1}A_{0, 1}\lvert{S^d}\rvert^{1/2}\right) + O(\varepsilon^2) \\ & = \lvert{B}\rvert^{\frac{1}{d + 1}} + \varepsilon\left(\frac{A_{0, 1}\lvert{B}\rvert^{\frac{1}{d + 1}}}{\lvert{S^d}\rvert^{1/2}}\right) + O(\varepsilon^2). \end{aligned}$$ Next we find an asymptotic expansion for the unit outward normal vector $\mathbf{n}_{\rho, \varepsilon}$ to $\partial\Omega_\varepsilon$.
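Before turning to the normal vector, we record a concrete numerical test (our own sketch, not part of the paper's argument) of the harmonic construction of Section [2.2](#sec:Harmonics){reference-type="ref" reference="sec:Harmonics"}: we assemble the nine degree-2 harmonics on $S^3$ from Remark 4, with SciPy supplying the three-dimensional spherical harmonic factor (its phase convention drops out since only $\lvert Y\rvert^2$ enters), and check the addition theorem [\[eq:SH_addition_same\]](#eq:SH_addition_same){reference-type="eqref" reference="eq:SH_addition_same"} pointwise, using $\lvert S^3\rvert = 2\pi^2$.

```python
import math
from scipy.special import eval_gegenbauer

try:
    from scipy.special import sph_harm_y           # SciPy >= 1.15
    def Y3(m, n, az, pol):
        return sph_harm_y(n, m, pol, az)
except ImportError:                                # older SciPy
    from scipy.special import sph_harm
    def Y3(m, n, az, pol):
        return sph_harm(m, n, az, pol)

def mu_sq(j, m_prev, m_j):
    """Normalisation constant mu_j^2 from Section 2.2."""
    return (4 * math.pi * math.gamma(m_j + m_prev + j - 1)
            / (2 ** (2 * m_prev + j) * math.factorial(m_j - m_prev)
               * (m_j + (j - 1) / 2) * math.gamma(m_prev + (j - 1) / 2) ** 2))

def Y_s3(m1, m2, ell, phi, th2, th3):
    """Complex hyperspherical harmonic Y_ell^{(m1, m2)} on S^3 (d = 3)."""
    rad = (math.sin(th3) ** m2
           * eval_gegenbauer(ell - m2, m2 + 1.0, math.cos(th3))
           / math.sqrt(mu_sq(3, m2, ell)))
    return Y3(m1, m2, phi, th2) * rad

# sum over the nine tuples of Remark 4 at an arbitrary point of S^3:
# the result should be the constant N(3, 2)/|S^3| = 9/(2 pi^2)
phi, th2, th3 = 0.7, 1.1, 2.0
total = sum(abs(Y_s3(m1, m2, 2, phi, th2, th3)) ** 2
            for m2 in range(3) for m1 in range(-m2, m2 + 1))
```

The sum is independent of the chosen point, as the addition theorem predicts.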
By identifying $\partial\Omega_\varepsilon$ as the zero level set of an implicit function, it can be shown that (see [@tan2023 Section 5]) $$\mathbf{n}_{\rho, \varepsilon} = \Big((1 + \varepsilon\rho)^2 + \varepsilon^2\lvert{\nabla_{S^d}\rho}\rvert^2\Big)^{-1/2}\Big[(1 + \varepsilon\rho)\hat{\boldsymbol{r}} - \varepsilon\nabla_{S^d}\rho\Big].$$ It follows that $$\label{eq:Asymp_normal} \mathbf{n}_{\rho, \varepsilon} = \Big(1 - \varepsilon\rho + O(\varepsilon^2)\Big)\Big[\hat{\boldsymbol{r}} + \varepsilon\rho\, \hat{\boldsymbol{r}} - \varepsilon\nabla_{S^d}\rho\Big] = \hat{\boldsymbol{r}} - \varepsilon\nabla_{S^d}\rho + O(\varepsilon^2).$$

# An Asymptotic Expansion for Steklov Eigenvalues of Nearly Hyperspherical Domains {#sec:asymptotic}

In this section, we derive an asymptotic expansion for the Steklov eigenvalues $\lambda(\varepsilon)\coloneqq \lambda^\varepsilon$ on a nearly hyperspherical domain $\Omega_\varepsilon$ of the form in [\[eq:Domain\]](#eq:Domain){reference-type="eqref" reference="eq:Domain"}. Recall that the unperturbed eigenvalues of $\Omega_0 = B$ are the nonnegative integers $\ell\in\mathbb{N}$, with corresponding eigenfunctions $r^\ell Y_\ell^m(\hat\theta)$, $1\le m\le N(d, \ell)$. Following [@viator2022], for a fixed positive integer $k$, we make the following perturbation ansatz in $\varepsilon$ for a Steklov eigenpair $(\lambda_k^\varepsilon, u_k^\varepsilon)$ of $\Omega_\varepsilon$ (not counting multiplicity): [\[eq:Pert_ansatz\]]{#eq:Pert_ansatz label="eq:Pert_ansatz"} $$\begin{aligned} \label{eq:Pert_ansatz1} \lambda_k^\varepsilon& = k + \varepsilon\lambda_k^{(1)} + O(\varepsilon^2), \\ \label{eq:Pert_ansatz2} u_k^\varepsilon(r, \hat\theta) & = \sum_{\ell = 0}^\infty\sum_{m = 1}^{N(d, \ell)} \Big(\delta_{\ell, k}\alpha_m + \varepsilon\beta_{\ell, m} + O(\varepsilon^2)\Big)r^\ell Y_\ell^m(\hat\theta).
\end{aligned}$$ Note that we cannot a priori determine the coefficients $\alpha_m$ that will select the $O(1)$ eigenfunction from the $N(d, k)$-dimensional eigenspace. The ansatz [\[eq:Pert_ansatz2\]](#eq:Pert_ansatz2){reference-type="eqref" reference="eq:Pert_ansatz2"} satisfies [\[eq:Steklov1\]](#eq:Steklov1){reference-type="eqref" reference="eq:Steklov1"} exactly, and we will determine the eigenvalue perturbation $\lambda_k^{(1)}$ and the coefficients $\alpha_m$ and $\beta_{\ell, m}$ so that the boundary condition [\[eq:Steklov2\]](#eq:Steklov2){reference-type="eqref" reference="eq:Steklov2"} is satisfied. Using the gradient [\[eq:SCgrad\]](#eq:SCgrad){reference-type="eqref" reference="eq:SCgrad"} in hyperspherical coordinates, we have that $$\label{eq:Pert_grad1} \nabla u_k^\varepsilon(r, \hat\theta) = \sum_{\ell = 0}^\infty\sum_{m = 1}^{N(d, \ell)} \Big(\delta_{\ell, k}\alpha_m + \varepsilon\beta_{\ell, m} + O(\varepsilon^2)\Big) r^{\ell - 1}\, \hat{\boldsymbol{v}}_{\ell, m},$$ where $$\label{eq:Pert_grad2} \hat{\boldsymbol{v}}_{\ell, m} = \ell Y_\ell^m\hat{\boldsymbol{r}} + \nabla_{S^d}Y_\ell^m.$$ The boundary condition [\[eq:Steklov2\]](#eq:Steklov2){reference-type="eqref" reference="eq:Steklov2"} reads $$\label{eq:Pert_BC} \nabla u_k^\varepsilon\cdot \mathbf{n}_{\rho, \varepsilon} = \lambda_k^\varepsilon u_k^\varepsilon\ \ \textnormal{ on } r = 1 + \varepsilon\rho(\hat\theta).$$ Substituting [\[eq:Pert_grad1\]](#eq:Pert_grad1){reference-type="eqref" reference="eq:Pert_grad1"} and the asymptotic expansion [\[eq:Asymp_normal\]](#eq:Asymp_normal){reference-type="eqref" reference="eq:Asymp_normal"} for $\mathbf{n}_{\rho, \varepsilon}$ into the left-hand side (LHS) of [\[eq:Pert_BC\]](#eq:Pert_BC){reference-type="eqref" reference="eq:Pert_BC"} and collecting terms in powers of $\varepsilon$, we obtain $$\nabla u_k^\varepsilon\cdot\mathbf{n}_{\rho, \varepsilon} = \left(\sum_{m = 1}^{N(d, k)} k\alpha_m Y_k^m\right) + \varepsilon L_1 + O(\varepsilon^2),$$ where
$$\begin{aligned} L_1 & = \sum_{\ell = 0}^\infty\sum_{m = 1}^{N(d, \ell)} \Big(\delta_{\ell, k}\alpha_m\left((\ell - 1)\rho\, \hat{\boldsymbol{r}} - \nabla_{S^d}\rho\right) + \beta_{\ell, m}\hat{\boldsymbol{r}}\Big)\cdot\hat{\boldsymbol{v}}_{\ell, m} \\ % & = \sum_{m = 1}^{N(d, k)} \alpha_m\Big((k - 1)\rho\, \vec{r}\cdot\vec{v}_{k, m} - \nabla_{S^d}\rho\cdot\vec{v}_{k, m}\Big) + \sum_{\ell = 0}^\infty\sum_{m = 1}^{N(d, \ell)} \beta_{\ell, m}\vec{r}\cdot \vec{v}_{\ell, m} \\ & \stackrel{\eqref{eq:Pert_grad2}}{=} \sum_{m = 1}^{N(d, k)} \alpha_m\Big(k(k - 1)\rho Y_k^m - \nabla_{S^d}\rho\cdot\nabla_{S^d}Y_k^m\Big) + \sum_{\ell = 0}^\infty\sum_{m = 1}^{N(d, \ell)} \ell \beta_{\ell, m}Y_\ell^m. \end{aligned}$$ Substituting the perturbation ansatz [\[eq:Pert_ansatz\]](#eq:Pert_ansatz){reference-type="eqref" reference="eq:Pert_ansatz"} into the right-hand side (RHS) of [\[eq:Pert_BC\]](#eq:Pert_BC){reference-type="eqref" reference="eq:Pert_BC"} and collecting terms in powers of $\varepsilon$, we obtain $$\lambda_k^\varepsilon u_k^\varepsilon= \left(\sum_{m = 1}^{N(d, k)} k\alpha_m Y_k^m\right) + \varepsilon R_1 + O(\varepsilon^2),$$ where $$\begin{aligned} R_1 & = \sum_{\ell = 0}^\infty\sum_{m = 1}^{N(d, \ell)} \Big(k\left(\beta_{\ell, m} + \delta_{\ell, k}\alpha_m\ell\rho\right) + \lambda_k^{(1)}\delta_{\ell, k}\alpha_m\Big)Y_\ell^m \\ & = \sum_{\ell = 0}^\infty\sum_{m = 1}^{N(d, \ell)} k\beta_{\ell, m}Y_\ell^m + \sum_{m = 1}^{N(d, k)} \Big(k^2\rho + \lambda_k^{(1)}\Big)\alpha_m Y_k^m. \end{aligned}$$ The $O(1)$ terms in the LHS and RHS of [\[eq:Pert_BC\]](#eq:Pert_BC){reference-type="eqref" reference="eq:Pert_BC"} coincide, as expected. 
Rearranging the $O(\varepsilon)$ equation $L_1 = R_1$, we obtain $$\label{eq:Pert_Oeps} \sum_{m = 1}^{N(d, k)} \lambda_k^{(1)}\alpha_mY_k^m = -\sum_{m = 1}^{N(d, k)} \alpha_m\Big(k\rho Y_k^m + \nabla_{S^d}\rho\cdot \nabla_{S^d}Y_k^m\Big) + \sum_{\ell = 0}^\infty\sum_{m = 1}^{N(d, \ell)} (\ell - k)\beta_{\ell, m}Y_\ell^m.$$ If we now multiply [\[eq:Pert_Oeps\]](#eq:Pert_Oeps){reference-type="eqref" reference="eq:Pert_Oeps"} by $\overline{Y_k^n}$ for $1\le n\le N(d, k)$, integrate over $S^d$ with respect to $d\sigma_d$, and use the pairwise orthonormality of the hyperspherical harmonics, we see that only the $m = n$ term survives on the left-hand side, while the last sum vanishes: its terms with $\ell\neq k$ drop out by orthogonality, and its terms with $\ell = k$ carry the factor $\ell - k = 0$. This yields $$\lambda_k^{(1)}\alpha_n = \sum_{m = 1}^{N(d, k)} M_{m, n}^{(d, k)}\alpha_m, \ \ 1\le n\le N(d, k),$$ or more succinctly, $M^{(d, k)}\hat{\boldsymbol{\alpha}} = \lambda_k^{(1)}\hat{\boldsymbol{\alpha}}$, where the complex matrix $M^{(d, k)}\in\mathbb{C}^{N(d, k)\times N(d, k)}$ has entries given by $$\begin{gathered} \label{eq:Oeps_matrix} \begin{aligned} M_{m, n}^{(d, k)} & = -\int_{S^d} k\rho Y_k^m\overline{Y_k^n}\, d\sigma_d - \int_{S^d} \left(\nabla_{S^d}\rho\cdot \nabla_{S^d}Y_k^m\right)\overline{Y_k^n}\, d\sigma_d \\ % & \stackrel{\eqref{eq:SCgrad}}{=} -\int_{S^d} k\rho Y_k^m\cc{Y_k^n}\, d\sigma_d - \sum_{j = 1}^d \int_{S^d} \frac{\del_j\rho}{\eta_j^2}\del_jY_k^m\cdot \cc{Y_k^n}\, d\sigma_d. \end{aligned} \end{gathered}$$ This shows that the first-order perturbations of these $N(d, k)$ eigenvalues are characterised by the eigenvalues of $M^{(d, k)}$. Since $M^{(d, k)}$ is Hermitian, there are $N(d, k)$ real eigenvalues $\lambda_{k, j}^{(1)}$, $1\le j\le N(d, k)$, which we enumerate in increasing order.
Moreover, the components of the corresponding eigenvectors $\hat{\boldsymbol{\alpha}}_j$ are the coefficients of the $O(1)$ eigenfunctions in $u_{k, j}^\varepsilon$, *i.e.,* $$\begin{aligned} u_{k, j}^{(0)}(r, \hat\theta) = \sum_{m = 1}^{N(d, k)} \left(\hat{\boldsymbol{\alpha}}_j\right)_m r^kY_k^m(\hat\theta), \ \ 1\le j\le N(d, k). \end{aligned}$$ We summarise these results and the analyticity result from [@tan2023] in the following theorem. **Theorem 7**. *Given $d\ge 3$ and $k\in\mathbb{Z}^+$, define $N_k = \sum_{\ell = 1}^{k - 1} N(d, \ell)$, where $N(d, \ell)$ is defined by [\[eq:Steklov_multiplicity\]](#eq:Steklov_multiplicity){reference-type="eqref" reference="eq:Steklov_multiplicity"}. For $N_k + 1\le n\le N_{k + 1}$, the Steklov eigenvalues $\lambda_n(\varepsilon)$ of a nearly hyperspherical domain $\Omega_\varepsilon$ of the form [\[eq:Domain\]](#eq:Domain){reference-type="eqref" reference="eq:Domain"} consist of at most $N(d, k)$ branches of analytic functions which have at most algebraic singularities near $\varepsilon= 0$. At first-order in $\varepsilon$, the perturbation is given by the real eigenvalues of the Hermitian matrix $M^{(d, k)}$ of size $N(d, k)$, whose entries are given by [\[eq:Oeps_matrix\]](#eq:Oeps_matrix){reference-type="eqref" reference="eq:Oeps_matrix"}.* # Analysis of $M^{(d, k)}$ {#sec:trace} This section is dedicated to the proof of Theorem [Theorem 2](#thm:Iso_notball){reference-type="ref" reference="thm:Iso_notball"}. Following [@viator2022 Theorem 1.1], the crux of the proof lies in showing that the trace of $M^{(d, k)}$ is proportional to $\int_{S^d} \rho\, d\sigma_d$, the mean of the domain perturbation function $\rho$. This is achieved by rewriting $M^{(d,k)}$ as the sum of a scalar multiple of the identity and a trace-zero Hermitian matrix. 
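The decomposition strategy just described can be sketched numerically: any Hermitian matrix splits as a scalar multiple of the identity plus a trace-free Hermitian part, and its eigenvalues are the common shift plus the eigenvalues of the trace-free part. The following minimal `numpy` sketch is illustrative only (the matrix is random, not an actual $M^{(d,k)}$):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
M = (A + A.conj().T) / 2      # Hermitian stand-in for M^{(d,k)}

c = np.trace(M).real / N      # scalar part, analogous to -k A_{0,1} / |S^d|^{1/2}
E = M - c * np.eye(N)         # Hermitian and trace-free, analogous to E^{(d,k)}

assert abs(np.trace(E)) < 1e-12
# eigenvalues of M are the shift c plus the eigenvalues of E
assert np.allclose(np.linalg.eigvalsh(M), c + np.linalg.eigvalsh(E))
```

The point exploited later is that a trace-free Hermitian matrix that is not identically zero must have both a negative smallest and a positive largest eigenvalue.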
For notational simplicity, we write the surface element over $S^d$ as $d\sigma_d = \sin^{j - 1}(\theta_j)\, d\theta_j\, d\sigma_{d - 1, j}$, where $$d\sigma_{d - 1, j} = \prod_{i = 2, i\neq j}^d \sin^{i - 1}(\theta_i)\prod_{i = 1, i\neq j}^d d\theta_i.$$ **Lemma 8**. *Given $d\ge 3$ and $k\in\mathbb{Z}^+$, let $M^{(d, k)}$ be the Hermitian matrix defined in Theorem [Theorem 7](#thm:Oeps_matrix){reference-type="ref" reference="thm:Oeps_matrix"}. The entries of $M^{(d, k)}$ can be written as $$\label{eq:Oeps_matrix1} M_{m, n}^{(d, k)} = \int_{S^d} \rho(\hat\theta)\left(-k(k + d)Y_k^m(\hat\theta)\overline{Y_k^n(\hat\theta)} + \nabla_{S^d}Y_k^m(\hat\theta)\cdot\nabla_{S^d}\overline{Y_k^n(\hat\theta)}\right) d\sigma_d.$$* *Proof.* From Theorem [Theorem 7](#thm:Oeps_matrix){reference-type="ref" reference="thm:Oeps_matrix"} and [\[eq:SCgrad\]](#eq:SCgrad){reference-type="eqref" reference="eq:SCgrad"}, we have $$\label{eq:Oeps_matrix1a} M_{m, n}^{(d, k)} = -\int_{S^d} k\rho Y_k^m\overline{Y_k^n}\, d\sigma_d - \sum_{j = 1}^d \int_{S^d} \frac{\partial_j\rho}{\eta_j^2}\partial_jY_k^m\cdot \overline{Y_k^n}\, d\sigma_d.$$ For each integral from the sum above, we integrate by parts with respect to $\theta_j$. Note crucially that $\eta_j$ is independent of $\theta_j$. For the case $j = 1$, the determinant of the Jacobian in $d\sigma_d$ is independent of $\theta_1$ and the boundary term vanishes due to the $2\pi$-periodicity of $\rho, \partial_1Y_k^m, Y_k^n$ with respect to $\theta_1$. This yields $$\begin{aligned} -\int_{S^d} \frac{\partial_1\rho}{\eta_1^2}\partial_1Y_k^m\cdot \overline{Y_k^n}\, d\sigma_d & = \int_{S^d} \frac{\rho}{\eta_1^2}\partial_1\left(\partial_1Y_k^m\cdot \overline{Y_k^n}\right)d\sigma_d \\ & = \int_{S^d} \frac{\rho}{\eta_1^2}\Big[\partial_1Y_k^m\cdot \partial_1\overline{Y_k^n} + \partial_1^2Y_k^m\cdot \overline{Y_k^n}\Big]\, d\sigma_d. 
\end{aligned}$$ For the case $j = 2, 3, \dots, d$, the boundary term vanishes because $\sin^{j - 1}(0) = \sin^{j - 1}(\pi) = 0$ for $j\ge 2$. This yields $$\begin{aligned} & -\int_{S^{d - 1}}\int_{\theta_j} \frac{\partial_j\rho}{\eta_j^2}\partial_jY_k^m\cdot \overline{Y_k^n}\sin^{j - 1}(\theta_j)\, d\theta_j\, d\sigma_{d - 1, j} \\ & = \int_{S^{d - 1}}\int_{\theta_j} \frac{\rho}{\eta_j^2}\partial_j\Big(\partial_jY_k^m\cdot \overline{Y_k^n}\sin^{j - 1}(\theta_j)\Big) d\theta_j\, d\sigma_{d - 1, j} \\ & = \int_{S^d} \frac{\rho}{\eta_j^2} \left[\partial_jY_k^m\cdot \partial_j\overline{Y_k^n} + \frac{\partial_j\left(\sin^{j - 1}(\theta_j)\partial_jY_k^m\right)}{\sin^{j - 1}(\theta_j)}\, \overline{Y_k^n}\right] d\sigma_d. \end{aligned}$$ Summing over all $j = 1, 2, \dots, d$ and recalling the gradient [\[eq:SCgrad\]](#eq:SCgrad){reference-type="eqref" reference="eq:SCgrad"} and the spherical Laplacian [\[eq:SCbeltrami\]](#eq:SCbeltrami){reference-type="eqref" reference="eq:SCbeltrami"} on $S^d$, we obtain $$\begin{gathered} \label{eq:Oeps_matrix1b} \begin{aligned} & -\sum_{j = 1}^d \int_{S^d} \frac{\partial_j\rho}{\eta_j^2}\partial_jY_k^m\cdot \overline{Y_k^n}\, d\sigma_d \\ & = \sum_{j = 1}^d \int_{S^d} \rho\frac{\partial_jY_k^m}{\eta_j}\cdot \frac{\partial_j\overline{Y_k^n}}{\eta_j}\, d\sigma_d + \sum_{j = 1}^d \int_{S^d} \rho \left(\frac{\partial_j\left(\sin^{j - 1}(\theta_j)\partial_jY_k^m\right)}{\eta_j^2\sin^{j - 1}(\theta_j)}\right) \overline{Y_k^n}\, d\sigma_d \\ & = \int_{S^d} \rho\nabla_{S^d}Y_k^m\cdot \nabla_{S^d}\overline{Y_k^n}\, d\sigma_d + \int_{S^d} \rho\left(\Delta_{S^d}Y_k^m\right)\overline{Y_k^n}\, d\sigma_d \\ & \stackrel{\eqref{eq:SH_eigs}}{=} \int_{S^d} \rho\nabla_{S^d}Y_k^m\cdot \nabla_{S^d}\overline{Y_k^n}\, d\sigma_d - \int_{S^d} k(k + d - 1)\rho Y_k^m\overline{Y_k^n}\, d\sigma_d.
\end{aligned} \end{gathered}$$ Substituting [\[eq:Oeps_matrix1b\]](#eq:Oeps_matrix1b){reference-type="eqref" reference="eq:Oeps_matrix1b"} into [\[eq:Oeps_matrix1a\]](#eq:Oeps_matrix1a){reference-type="eqref" reference="eq:Oeps_matrix1a"} and rearranging gives the desired expression [\[eq:Oeps_matrix1\]](#eq:Oeps_matrix1){reference-type="eqref" reference="eq:Oeps_matrix1"} for $M_{m, n}^{(d, k)}$. ◻ We are now ready to prove that the trace of $M^{(d, k)}$ is proportional to the mean of $\rho$. **Lemma 9**. *Given $d\ge 3$ and $k\in\mathbb{Z}^+$, let $M^{(d, k)}$ and $\rho$ be the Hermitian matrix and the perturbation function defined in Theorem [Theorem 7](#thm:Oeps_matrix){reference-type="ref" reference="thm:Oeps_matrix"} and [\[eq:Domain\]](#eq:Domain){reference-type="eqref" reference="eq:Domain"}, respectively. The trace of $M^{(d, k)}$ is given by $$\operatorname{tr}\left(M^{(d, k)}\right) = -\frac{k N(d, k)}{\lvert{S^d}\rvert}\int_{S^d} \rho \, d\sigma_d = -\frac{kA_{0, 1}N(d, k)}{\lvert{S^d}\rvert^{1/2}}.$$* *Proof.* From Lemma [Lemma 8](#thm:Oeps_matrix1){reference-type="ref" reference="thm:Oeps_matrix1"}, we have $$\operatorname{tr}\left(M^{(d, k)}\right) = \sum_{m = 1}^{N(d, k)} M_{m, m}^{(d, k)} = \sum_{m = 1}^{N(d, k)} \int_{S^d} \rho\Big(-k(k + d)Y_k^m\overline{Y_k^m} + \nabla_{S^d}Y_k^m\cdot\nabla_{S^d}\overline{Y_k^m}\Big)\, d\sigma_d.$$ The lemma is a direct consequence of the addition theorem for hyperspherical harmonics and its gradient; see [\[eq:SH_addition\]](#eq:SH_addition){reference-type="eqref" reference="eq:SH_addition"}, [\[eq:SH_addition_same\]](#eq:SH_addition_same){reference-type="eqref" reference="eq:SH_addition_same"}, and Theorem [Theorem 6](#thm:SH_addition_grad){reference-type="ref" reference="thm:SH_addition_grad"}.
Indeed, $$\begin{aligned} \operatorname{tr}\left(M^{(d, k)}\right) & = \int_{S^d} \rho\left(-\frac{k(k + d)N(d, k)}{\lvert{S^d}\rvert} + d(d - 1)K(d, k)C_{k - 1}^{\left(\frac{d + 1}{2}\right)}(1)\right) d\sigma_d \\ & = -\frac{N(d, k)}{\lvert{S^d}\rvert}\left[k(k + d) - d(d - 1)\frac{C_{k - 1}^{\left(\frac{d + 1}{2}\right)}(1)}{C_k^{\left(\frac{d - 1}{2}\right)}(1)}\right] \int_{S^d}\rho\, d\sigma_d. \end{aligned}$$ We need only show that the expression in the bracket above is equal to $k$. From [@NIST Table 18.6.1] and the definition of Pochhammer's symbol [@NIST Eq. 5.2.5], we have that $$C_n^{(\alpha)}(1) = \frac{(2\alpha)_n}{n!} = \frac{\Gamma(2\alpha + n)}{n!\Gamma(2\alpha)}.$$ Consequently, $$\begin{aligned} d(d - 1)\cdot \frac{C_{k - 1}^{\left(\frac{d + 1}{2}\right)}(1)}{C_k^{\left(\frac{d - 1}{2}\right)}(1)} & = \frac{\Gamma(d + k)}{\Gamma(d + 1)(k - 1)!}\cdot \frac{d(d - 1)\Gamma(d - 1)k!}{\Gamma(d + k - 1)} = k(d + k - 1), \end{aligned}$$ where we use the fact that $\Gamma(z + 1) = z\Gamma(z)$ for any $z > 0$. Hence the bracket equals $k(k + d) - k(k + d - 1) = k$, and the claim follows. ◻ **Corollary 10**. *Given $d\ge 3$ and $k\in\mathbb{Z}^+$, let $M^{(d, k)}$ and $\rho$ be the Hermitian matrix and the perturbation function defined in Theorem [Theorem 7](#thm:Oeps_matrix){reference-type="ref" reference="thm:Oeps_matrix"} and [\[eq:Domain\]](#eq:Domain){reference-type="eqref" reference="eq:Domain"}, respectively.
We have $$M^{(d, k)} = -\frac{kA_{0, 1}}{\lvert{S^d}\rvert^{1/2}}I_{N(d, k)} + E^{(d, k)},$$ where $I_{N(d, k)}$ is the identity matrix of size $N(d, k)$ and $E^{(d, k)}$ is a Hermitian, zero-trace matrix, whose entries are given by $$E_{m, n}^{(d, k)} = \sum_{p = 1}^\infty\sum_{q = 1}^{N(d, p)} \int_{S^d} A_{p, q}Y_{p, q}\left(-k(k + d)Y_k^m(\hat\theta)\overline{Y_k^n(\hat\theta)} + \nabla_{S^d}Y_k^m(\hat\theta)\cdot\nabla_{S^d}\overline{Y_k^n(\hat\theta)}\right) d\sigma_d.$$* *Proof.* We begin by substituting the expression for $\rho$ (see [\[eq:Domain\]](#eq:Domain){reference-type="eqref" reference="eq:Domain"}) into [\[eq:Oeps_matrix1\]](#eq:Oeps_matrix1){reference-type="eqref" reference="eq:Oeps_matrix1"} to obtain $$M_{m, n}^{(d, k)} = \sum_{p = 0}^\infty\sum_{q = 1}^{N(d, p)} \int_{S^d} A_{p, q}Y_{p, q}\left(-k(k + d)Y_k^m(\hat\theta)\overline{Y_k^n(\hat\theta)} + \nabla_{S^d}Y_k^m(\hat\theta)\cdot\nabla_{S^d}\overline{Y_k^n(\hat\theta)}\right) d\sigma_d.$$ Separating the infinite sum into $p = 0$ and $p > 0$, we may write $M_{m, n}^{(d, k)} = D_{m, n}^{(d, k)} + E_{m, n}^{(d, k)}$, where $E_{m, n}^{(d, k)}$ has the desired expression and $$D_{m, n}^{(d, k)} = \int_{S^d} A_{0, 1}Y_{0, 1}\left(-k(k + d)Y_k^m(\hat\theta)\overline{Y_k^n(\hat\theta)} + \nabla_{S^d}Y_k^m(\hat\theta)\cdot\nabla_{S^d}\overline{Y_k^n(\hat\theta)}\right) d\sigma_d.$$ Using the integral identity [\[eq:SH_eigs_grad\]](#eq:SH_eigs_grad){reference-type="eqref" reference="eq:SH_eigs_grad"} and the orthonormality of the hyperspherical harmonics, we deduce that the matrix $D^{(d, k)}$ is diagonal. Moreover, $$\begin{aligned} D_{m, m}^{(d, k)} & = A_{0, 1}Y_{0, 1}\Big(-k(k + d) + k(k + d - 1)\Big) = -\frac{kA_{0, 1}}{\lvert{S^d}\rvert^{1/2}}, \ \ 1\le m\le N(d, k), \end{aligned}$$ as desired. Thanks to Lemma [Lemma 9](#thm:Matrix_trace){reference-type="ref" reference="thm:Matrix_trace"}, we see that $E^{(d, k)}$ has zero trace since the trace is linear.
◻ We are now ready to prove Theorem [Theorem 1](#thm:Iso_stat){reference-type="ref" reference="thm:Iso_stat"}. *Proof of Theorem [Theorem 1](#thm:Iso_stat){reference-type="ref" reference="thm:Iso_stat"}.* We recall the volume-normalised Steklov eigenvalue $$\Lambda_{k, j}(\Omega_\varepsilon)= \lambda_{k, j}^{\varepsilon}\lvert{\Omega_\varepsilon}\rvert^{\frac{1}{d + 1}}.$$ Substituting the ansatz [\[eq:Pert_ansatz1\]](#eq:Pert_ansatz1){reference-type="eqref" reference="eq:Pert_ansatz1"} for $\lambda_{k, j}^\varepsilon$ and the asymptotic expansion [\[eq:Asymp_volume\]](#eq:Asymp_volume){reference-type="eqref" reference="eq:Asymp_volume"} for $\lvert{\Omega_\varepsilon}\rvert^{\frac{1}{d + 1}}$, we obtain $$\begin{aligned} \Lambda_{k, j}(\Omega_\varepsilon) & = \left(k + \varepsilon\lambda_{k, j}^{(1)} + O(\varepsilon^2)\right)\left(\lvert{B}\rvert^{\frac{1}{d + 1}} + \varepsilon\left(\frac{A_{0, 1}\lvert{B}\rvert^{\frac{1}{d + 1}}}{\lvert{S^d}\rvert^{1/2}}\right) + O(\varepsilon^2)\right) \\ & = k\lvert{B}\rvert^{\frac{1}{d + 1}} + \varepsilon\left(\lambda_{k, j}^{(1)}\lvert{B}\rvert^{\frac{1}{d + 1}} + \frac{kA_{0, 1}\lvert{B}\rvert^{\frac{1}{d + 1}}}{\lvert{S^d}\rvert^{1/2}}\right) + O(\varepsilon^2). \end{aligned}$$ From Corollary [Corollary 10](#thm:Matrix_sum){reference-type="ref" reference="thm:Matrix_sum"}, we have that $$\lambda_{k, j}^{(1)} = -\frac{kA_{0, 1}}{\lvert{S^d}\rvert^{1/2}} + e_{k, j},$$ where $e_{k, j}$ is the $j$th eigenvalue (in increasing order) of the Hermitian matrix $E^{(d, k)}$, and is therefore real. It follows that $$\begin{aligned} \Lambda_{k, j}(\Omega_\varepsilon) = k\lvert{B}\rvert^{\frac{1}{d + 1}} + \varepsilon\left(e_{k, j}\lvert{B}\rvert^{\frac{1}{d + 1}}\right) + O(\varepsilon^2). \end{aligned}$$ Since $E^{(d, k)}$ is Hermitian with zero trace, either $e_{k, j} = 0$ for all $j = 1, 2, \dots, N(d, k)$ or $e_{k, 1} < 0$. In either case, $e_{k, 1}\le 0$, and this completes the proof since $\Lambda_{k, 1} = \Lambda_{1 + N_k}$.
◻ # $M^{(d, k)}$ and the Wigner $3j$-symbols {#sec:Wigner} To prove Theorem [Theorem 2](#thm:Iso_notball){reference-type="ref" reference="thm:Iso_notball"}, it suffices to find a perturbation function $\rho$ such that the corresponding matrix $M^{(d, k)}$ has at least one positive eigenvalue. The first step is to express $M^{(d, k)}$ in terms of the integral of the triple product of hyperspherical harmonics. **Lemma 11**. *Given $d\ge 3$ and $k\in\mathbb{Z}^+$, let $M^{(d, k)}$ and $\rho$ be the Hermitian matrix and the perturbation function defined in Theorem [Theorem 7](#thm:Oeps_matrix){reference-type="ref" reference="thm:Oeps_matrix"} and [\[eq:Domain\]](#eq:Domain){reference-type="eqref" reference="eq:Domain"}, respectively. Then the entries of $M^{(d, k)}$ can be written as $$\label{eq:Matrix_triple} M_{m, n}^{(d, k)} = -\frac{1}{2}\sum_{p = 0}^\infty\sum_{q = 1}^{N(d, p)} A_{p, q}\Big(p(p + d - 1) + 2k\Big)W_{q, m, n}^{p, k},$$ where $$W_{q, m, n}^{p, k} = \int_{S^d} Y_{p, q}(\hat\theta)Y_k^m(\hat\theta)\overline{Y_k^n(\hat\theta)}\, d\sigma_d.$$* *Proof.* From Theorem [Theorem 7](#thm:Oeps_matrix){reference-type="ref" reference="thm:Oeps_matrix"} and [\[eq:SCgrad\]](#eq:SCgrad){reference-type="eqref" reference="eq:SCgrad"}, we have $$\label{eq:Matrix_triple1} M_{m, n}^{(d, k)} \stackrel{\eqref{eq:Oeps_matrix1}}{=} -\int_{S^d} k\rho Y_k^m\overline{Y_k^n}\, d\sigma_d - \sum_{j = 1}^d\int_{S^d} \frac{\partial_j\rho}{\eta_j^2}\partial_jY_k^m\cdot \overline{Y_k^n}\, d\sigma_d.$$ Integrating by parts with respect to $\theta_j$ and noting that $\eta_j$ is independent of $\theta_j$, we have that $$\begin{gathered} \begin{aligned} \label{eq:Matrix_triple2} -\sum_{j = 1}^d \int_{S^d} \frac{\partial_j\rho}{\eta_j^2}\partial_jY_k^m\cdot \overline{Y_k^n}\, d\sigma_d & = -\sum_{j = 1}^d \int_{S^{d - 1}}\int_{\theta_j} \frac{\partial_jY_k^m}{\eta_j^2}\Big(\sin^{j- 1}(\theta_j)\partial_j\rho\cdot \overline{Y_k^n}\Big)\, d\theta_j\, d\sigma_{d - 1, j} \\ & = \sum_{j = 1}^d
\int_{S^{d - 1}}\int_{\theta_j} \frac{Y_k^m}{\eta_j^2}\partial_j\Big(\sin^{j- 1}(\theta_j)\partial_j\rho\cdot \overline{Y_k^n}\Big)\, d\theta_j\, d\sigma_{d - 1, j} \\ & = \sum_{j = 1}^d \int_{S^d} \frac{Y_k^m}{\eta_j^2}\left[\partial_j\rho\cdot\partial_j\overline{Y_k^n} + \frac{\partial_j\left(\sin^{j - 1}(\theta_j)\partial_j\rho\right)}{\sin^{j - 1}(\theta_j)}\overline{Y_k^n}\right] d\sigma_d \\ & \stackrel{\eqref{eq:SCbeltrami}}{=} \left(\sum_{j = 1}^d\int_{S^d} \frac{\partial_j\rho}{\eta_j^2} Y_k^m\partial_j\overline{Y_k^n}\, d\sigma_d\right) + \int_{S^d} \left(\Delta_{S^d}\rho\right) Y_k^m\overline{Y_k^n}\, d\sigma_d, \end{aligned} \end{gathered}$$ where the boundary terms vanish because (1) for $j = 1$, $\partial_1\rho$ and the hyperspherical harmonics are $2\pi$-periodic with respect to $\theta_1$, and (2) for $j = 2, 3, \dots, d$, $\sin^{j - 1}(0) = \sin^{j - 1}(\pi) = 0$. On the other hand, we deduce from [\[eq:Oeps_matrix1b\]](#eq:Oeps_matrix1b){reference-type="eqref" reference="eq:Oeps_matrix1b"} in the proof of Lemma [Lemma 8](#thm:Oeps_matrix1){reference-type="ref" reference="thm:Oeps_matrix1"}, whose right-hand side is symmetric under exchanging the roles of $Y_k^m$ and $\overline{Y_k^n}$, that $$\label{eq:Matrix_triple3} -\sum_{j = 1}^d \int_{S^d} \frac{\partial_j\rho}{\eta_j^2}\partial_jY_k^m\cdot \overline{Y_k^n}\, d\sigma_d = -\sum_{j = 1}^d\int_{S^d} \frac{\partial_j\rho}{\eta_j^2} Y_k^m\partial_j\overline{Y_k^n}\, d\sigma_d.$$ Taking the average of [\[eq:Matrix_triple2\]](#eq:Matrix_triple2){reference-type="eqref" reference="eq:Matrix_triple2"} and [\[eq:Matrix_triple3\]](#eq:Matrix_triple3){reference-type="eqref" reference="eq:Matrix_triple3"}, it follows that $$\label{eq:Matrix_triple4} -\sum_{j = 1}^d \int_{S^d} \frac{\partial_j\rho}{\eta_j^2}\left(\partial_jY_k^m\right)\overline{Y_k^n}\, d\sigma_d = \frac{1}{2}\int_{S^d} \left(\Delta_{S^d}\rho\right) Y_k^m\overline{Y_k^n}\, d\sigma_d.$$ Finally, we substitute [\[eq:Matrix_triple4\]](#eq:Matrix_triple4){reference-type="eqref" reference="eq:Matrix_triple4"} and the expansion
[\[eq:Domain\]](#eq:Domain){reference-type="eqref" reference="eq:Domain"} for $\rho$ into [\[eq:Matrix_triple1\]](#eq:Matrix_triple1){reference-type="eqref" reference="eq:Matrix_triple1"} to obtain $$\begin{aligned} M_{m, n}^{(d, k)} & = -\frac{1}{2}\int_{S^d} \left(-\Delta_{S^d}\rho + 2k\rho\right)Y_k^m\overline{Y_k^n}\, d\sigma_d \\ & = -\frac{1}{2}\sum_{p = 0}^\infty \sum_{q = 1}^{N(d, p)} A_{p, q}\int_{S^d} \left(-\Delta_{S^d}Y_{p, q} + 2kY_{p, q}\right)Y_k^m\overline{Y_k^n}\, d\sigma_d \\ & \stackrel{\eqref{eq:SH_eigs}}{=} -\frac{1}{2}\sum_{p = 0}^\infty \sum_{q = 1}^{N(d, p)} A_{p, q}\Big(p(p + d - 1) + 2k\Big)\int_{S^d} Y_{p, q}Y_k^m\overline{Y_k^n}\, d\sigma_d, % & = -\frac{1}{2}\sum_{p = 0}^\infty \sum_{q = 1}^{N(d, p)} A_{p, q}\Big(p(p + d - 1) + 2k\Big)W_{q, m, n}^{p, k}, \end{aligned}$$ which gives the desired result. ◻ In order to use Lemma [Lemma 11](#thm:Matrix_triple){reference-type="ref" reference="thm:Matrix_triple"}, we require the evaluation of $W^{p, k}_{q, m, n}$. We introduce additional notation that simplifies our presentation in deriving the explicit expression for $W^{p, k}_{q, m, n}$. For any $(d - 1)$-tuples $q, m, n$ satisfying [\[eq:SH_tuple\]](#eq:SH_tuple){reference-type="eqref" reference="eq:SH_tuple"} with $q_d = p$, $m_d = n_d = k$, we define the 3-tuple $T_j = (t_j^1, t_j^2, t_j^3)\coloneqq (q_j, m_j, n_j)$ and introduce the following variables for $j = 1, 2, \dots, d$ (the differences $\mathrm{diff}_j^i$ are defined for $j = 2, 3, \dots, d$): $$\begin{aligned} s_j = q_j + m_j + n_j, \qquad \mathrm{diff}_j^i = t_j^i - t_{j - 1}^i, \qquad \nu_j = \frac{j - 1}{2}.
\end{aligned}$$ Since the real and complex hyperspherical harmonics are both separable in hyperspherical coordinates (see Section [2.2](#sec:Harmonics){reference-type="ref" reference="sec:Harmonics"}), it follows that $W^{p, k}_{q, m, n}$ can be written as a product of integrals $$\label{eq:Matrix_3j_prod} W^{p, k}_{q, m, n} = I(T_1, T_2)\prod_{j = 3}^d I(T_{j - 1}, T_j), \ \ 1\le q\le N(d, p), \, 1\le m, n\le N(d, k),$$ where the integrals $I(T_1, T_2)$ and $I(T_{j - 1}, T_j)$, $j = 3, 4, \dots, d$, are given by $$\begin{aligned} \notag I(T_1, T_2) & = \int_0^{2\pi}\int_0^{\pi} \widetilde Y_{q_2, q_1}(\phi, \theta_2)\widetilde Y_{m_2}^{m_1}(\phi, \theta_2)\overline{\widetilde Y_{n_2}^{n_1}(\phi, \theta_2)}\sin(\theta_2)\, d\theta_2\, d\phi, \\ \notag I(T_{j - 1}, T_j) & = \int_0^{\pi} \left(\prod_{i = 1}^3 Y(\theta_j; t_{j - 1}^i, t_j^i)\right) \sin^{j - 1}(\theta_j)\, d\theta_j \\ \notag & = \left(\prod_{i = 1}^3 \mu_j^{(i)}\right)^{-1} \int_0^\pi \left(\prod_{i = 1}^3 C_{\mathrm{diff}_j^i}^{\left(t_{j - 1}^i + \nu_j\right)}(\cos\theta_j)\right) (\sin\theta_j)^{s_{j - 1} + 2\nu_j}\, d\theta_j \\ \label{eq:Matrix_3j_int} & = \left(\prod_{i = 1}^3 \mu_j^{(i)}\right)^{-1} \int_{-1}^1 (1 - z^2)^{\frac{s_{j - 1}}{2} + \nu_j - \frac{1}{2}} \prod_{i = 1}^3 C_{\mathrm{diff}_j^i}^{\left(t_{j - 1}^i + \nu_j\right)}(z)\, dz. \end{aligned}$$ The constant $\mu_j^{(i)}$ for $i = 1, 2, 3$ and $j = 3, 4, \dots, d$ satisfies $$\begin{aligned} \left(\mu_j^{(i)}\right)^2 & = \frac{4\pi\Gamma(t_j^i + t_{j - 1}^i + 2\nu_j)}{2^{2t_{j - 1}^i + j}\left(\mathrm{diff}_j^i\right)!\left(t_j^i + \nu_j\right)\Gamma^2(t_{j - 1}^i + \nu_j)}. \end{aligned}$$ Our next theorem provides an explicit expression for these $(d - 1)$ integrals above involving the Pochhammer's symbol and the Wigner $3j$-symbol $\begin{pmatrix} j_1 & j_2 & j_3 \\ m_1 & m_2 & m_3 \end{pmatrix}$; see [@NIST Chapter 34] for the definition of Wigner $3j$-symbols. 
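The Wigner $3j$-symbols entering these formulas can be evaluated exactly in software. The following hedged sketch (assuming `sympy.physics.wigner.wigner_3j` is available; it is not part of the paper's argument) computes one symbol in closed form and confirms the vanishing patterns used repeatedly below:

```python
from sympy import Rational, simplify, sqrt
from sympy.physics.wigner import wigner_3j

# Nonzero symbol: the triangle condition holds, the bottom row sums to zero,
# and j1 + j2 + j3 = 6 is even; the classical value is -sqrt(2/35).
w = wigner_3j(2, 2, 2, 0, 0, 0)
assert simplify(w + sqrt(Rational(2, 35))) == 0

# Zero bottom row with odd j1 + j2 + j3 = 5: the symbol vanishes.
assert wigner_3j(2, 2, 1, 0, 0, 0) == 0

# Bottom row not summing to zero: the symbol vanishes.
assert wigner_3j(2, 2, 2, 1, 0, 0) == 0
```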
A crucial property of the Wigner $3j$-symbol is the following: If $\begin{pmatrix} j_1 & j_2 & j_3 \\ m_1 & m_2 & m_3 \end{pmatrix}\neq 0$, then all the following *selection rules* must be satisfied: 1. $m_i\in\{-j_i, -j_i + 1, -j_i + 2, \dots, j_i\}$ for $i = 1, 2, 3$. 2. $m_1 + m_2 + m_3 = 0$. 3. The triangle conditions $\lvert{j_1 - j_2}\rvert\le j_3\le j_1 + j_2$. 4. $(j_1 + j_2 + j_3)\ge 0$ is an integer (and, moreover, an even integer if $m_1 = m_2 = m_3 = 0$). **Theorem 12**. *Fix $d\ge 3$, $p\in\mathbb{N}$, and $k\in\mathbb{Z}^+$. Write $c_{T_2} = \sqrt{\frac{(2q_2 + 1)(2m_2 + 1)(2n_2 + 1)}{4\pi}}$ and $Q(q_1) = \begin{pmatrix} q_2 & m_2 & n_2 \\ q_1 & m_1 & -n_1 \end{pmatrix}$. We have $I(T_1, T_2) = c_{T_2}\begin{pmatrix} q_2 & m_2 & n_2 \\ 0 & 0 & 0 \end{pmatrix}Q(T_1, T_2)$, where $$Q(T_1, T_2) = \begin{dcases} \, (-1)^{m_1}\delta_{m_1, n_1}\begin{pmatrix} q_2 & m_2 & n_2 \\ 0 & m_1 & -m_1 \end{pmatrix} & \ \ \textnormal{ if } q_1 = 0, \\ \, \frac{(-1)^{n_1}}{\sqrt{2}}\Big[Q(-q_1) + (-1)^{q_1}Q(q_1)\Big] & \ \ \textnormal{ if } q_1 > 0, \\ \, \frac{i(-1)^{n_1}}{\sqrt{2}}\Big[Q(q_1) - (-1)^{q_1}Q(-q_1)\Big] & \ \ \textnormal{ if } q_1 < 0. \end{dcases}$$ For $j = 3, 4, \dots, d$, we have $I(T_{j - 1}, T_j) = \left(\mu_j^{(1)}\mu_j^{(2)}\mu_j^{(3)}\right)^{-1}H(T_{j - 1}, T_j)$, where $$\begin{aligned} H(T_{j - 1}, T_j) & = \sum_{\substack{0\le \ell_1\le \lfloor \mathrm{diff}_j^1/2\rfloor \\ 0\le \ell_2\le \lfloor \mathrm{diff}_j^2/2\rfloor \\ 0\le \ell_3\le \lfloor \mathrm{diff}_j^3/2\rfloor}} \prod_{i = 1}^3 V(\mathrm{diff}_j^i, t_{j - 1}^i + \nu_j, \ell_i)\sum_{\substack{\tau_1, \tau_2\in\mathbb{N}\\\ \tau_2 \textnormal{ even}}} (2\tau_1 + 1)(2\tau_2 + 1) L(s_{j - 1}, \nu_j, \tau_2) \\ & \hspace{2.5cm} \times \begin{pmatrix} \mathrm{diff}_j^2 - 2\ell_2 & \mathrm{diff}_j^3 - 2\ell_3 & \tau_1 \\ 0 & 0 & 0 \end{pmatrix}^2\begin{pmatrix} \mathrm{diff}_j^1 - 2\ell_1 & \tau_1 & \tau_2 \\ 0 & 0 & 0 \end{pmatrix}^2. 
\end{aligned}$$ The constants $V$ and $L$ are defined by $$\begin{aligned} V(\beta, \alpha, \ell) & = (1 + 2\beta - 4\ell)\frac{(\alpha)_{\beta - \ell}}{\left(\frac{3}{2}\right)_{\beta - \ell}}\frac{(\alpha - \frac{1}{2})_\ell}{\ell!}, \\ L(s_{j - 1}, \nu_j, \tau_2) & = \frac{\pi\Gamma^2\left(\frac{s_{j - 1}}{2} + \nu_j + \frac{1}{2}\right)}{\Gamma\left(\frac{s_{j - 1}}{2} + \nu_j + 1 + \frac{\tau_2}{2}\right)\Gamma\left(\frac{s_{j - 1}}{2} + \nu_j + \frac{1}{2} - \frac{\tau_2}{2}\right)\Gamma\left(\frac{\tau_2}{2} + 1\right)\Gamma\left(-\frac{\tau_2}{2} + \frac{1}{2}\right)}. \end{aligned}$$ Here, $\lfloor\cdot \rfloor$ and $(\alpha)_n = \frac{\Gamma(\alpha + n)}{\Gamma(\alpha)}$ denote the floor function and the Pochhammer's symbol, respectively.* *Proof.* Write $d\sigma_2 = \sin\theta_2\, d\theta_2\, d\phi$ with $0\le \phi < 2\pi$ and $0\le \theta_2\le \pi$. Recall that the integral over $S^2$ of the product of three complex three-dimensional spherical harmonics can be written in terms of the Wigner $3j$-symbols by [@NIST Eq. 34.3.22]: $$\int_{S^2(\phi, \theta_2)} \widetilde Y_{a_2}^{a_1}\widetilde Y_{b_2}^{b_1}\widetilde Y_{c_2}^{c_1}\, d\sigma_2 = \sqrt{\frac{(2a_2 + 1)(2b_2 + 1)(2c_2 + 1)}{4\pi}}\begin{pmatrix} a_2 & b_2 & c_2 \\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} a_2 & b_2 & c_2 \\ a_1 & b_1 & c_1 \end{pmatrix}.$$ Now, using the complex conjugate formula for the normalised three-dimensional complex spherical harmonics, we have that $$\begin{aligned} I(T_1, T_2) = (-1)^{n_1}\int_{S^2(\phi, \theta_2)} \widetilde Y_{q_2, q_1}\widetilde Y_{m_2}^{m_1}\widetilde Y_{n_2}^{-n_1}\, d\sigma_2.
\end{aligned}$$ If $q_1 = 0$, then $$\begin{aligned} I(T_1, T_2) & = (-1)^{n_1}\int_{S^2} \widetilde Y_{q_2}^0\widetilde Y_{m_2}^{m_1}\widetilde Y_{n_2}^{-n_1}\, d\sigma_2 \\ & = (-1)^{n_1}c_{T_2}\begin{pmatrix} q_2 & m_2 & n_2 \\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} q_2 & m_2 & n_2 \\ 0 & m_1 & -n_1 \end{pmatrix} \\ & = (-1)^{m_1}\delta_{m_1, n_1}c_{T_2}\begin{pmatrix} q_2 & m_2 & n_2 \\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} q_2 & m_2 & n_2 \\ 0 & m_1 & -m_1 \end{pmatrix}. \end{aligned}$$ If $q_1 > 0$, then $$\begin{aligned} I(T_1, T_2) & = \frac{(-1)^{n_1}}{\sqrt{2}}\int_{S^2} \Big[\widetilde Y_{q_2}^{-q_1}\widetilde Y_{m_2}^{m_1}\widetilde Y_{n_2}^{-n_1} + (-1)^{q_1}\widetilde Y_{q_2}^{q_1}\widetilde Y_{m_2}^{m_1}\widetilde Y_{n_2}^{-n_1}\Big]\, d\sigma_2 \\ & = \frac{(-1)^{n_1}}{\sqrt{2}}c_{T_2}\begin{pmatrix} q_2 & m_2 & n_2 \\ 0 & 0 & 0 \end{pmatrix}\left[\begin{pmatrix} q_2 & m_2 & n_2 \\ -q_1 & m_1 & -n_1 \end{pmatrix} + (-1)^{q_1}\begin{pmatrix} q_2 & m_2 & n_2 \\ q_1 & m_1 & -n_1 \end{pmatrix}\right]. \end{aligned}$$ If $q_1 < 0$, then $$\begin{aligned} I(T_1, T_2) & = \frac{i(-1)^{n_1}}{\sqrt{2}}\int_{S^2} \Big[\widetilde Y_{q_2}^{q_1}\widetilde Y_{m_2}^{m_1}\widetilde Y_{n_2}^{-n_1} - (-1)^{q_1}\widetilde Y_{q_2}^{-q_1}\widetilde Y_{m_2}^{m_1}\widetilde Y_{n_2}^{-n_1}\Big]\, d\sigma_2 \\ & = \frac{i(-1)^{n_1}}{\sqrt{2}}c_{T_2}\begin{pmatrix} q_2 & m_2 & n_2 \\ 0 & 0 & 0 \end{pmatrix}\left[\begin{pmatrix} q_2 & m_2 & n_2 \\ q_1 & m_1 & -n_1 \end{pmatrix} - (-1)^{q_1}\begin{pmatrix} q_2 & m_2 & n_2 \\ -q_1 & m_1 & -n_1 \end{pmatrix}\right]. \end{aligned}$$ This completes the proof for $I(T_1, T_2)$. To establish the formula for $I(T_{j - 1}, T_j)$, we need only show the integral in [\[eq:Matrix_3j_int\]](#eq:Matrix_3j_int){reference-type="eqref" reference="eq:Matrix_3j_int"} is equal to $H(T_{j - 1}, T_j)$. We follow the proof in [@wen1985 Section V]. 
Let $P_\beta(z)$ and $(\alpha)_\beta = \Gamma(\alpha + \beta)/\Gamma(\alpha)$ denote the Legendre polynomial of degree $\beta$ and the Pochhammer's symbol, respectively. The first step is to combine the connection formula [@NIST Eq. 18.18.16] for Gegenbauer polynomials with $\lambda = 1/2$ together with the fact that $C_\beta^{(1/2)}(z) = P_\beta(z)$ [@NIST Eq. 18.7.8]: $$C_\beta^{(\alpha)}(z) = \sum_{\ell = 0}^{\lfloor \beta/2\rfloor} V(\beta, \alpha, \ell)P_{\beta - 2\ell}(z), \ \ \textnormal{ where } V(\beta, \alpha, \ell)\coloneqq (1 + 2\beta - 4\ell)\frac{(\alpha)_{\beta - \ell}}{(\frac{3}{2})_{\beta - \ell}}\frac{(\alpha - \frac{1}{2})_\ell}{\ell!}.$$ Fix $j\in\{3, 4, \dots, d\}$ and choose $\beta_j^i = \mathrm{diff}_j^i$ and $\alpha_j^i = t_{j - 1}^i + \nu_j$. The integral in [\[eq:Matrix_3j_int\]](#eq:Matrix_3j_int){reference-type="eqref" reference="eq:Matrix_3j_int"} is equal to $$\begin{gathered} \label{eq:Matrix_3j1} \begin{aligned} \int_{-1}^1 & (1 - z^2)^{\frac{s_{j - 1}}{2} + \nu_j - \frac{1}{2}}\left(\prod_{i = 1}^3 \sum_{\ell_i = 0}^{\lfloor \beta_j^i/2\rfloor} V(\beta_j^i, \alpha_j^i, \ell_i)P_{\beta_j^i - 2\ell_i}(z)\right) dz \\ & = \sum_{\substack{0\le \ell_1\le \lfloor \beta_j^1/2\rfloor \\ 0\le \ell_2\le \lfloor \beta_j^2/2\rfloor \\ 0\le \ell_3\le \lfloor \beta_j^3/2\rfloor}} \prod_{i = 1}^3 V(\beta_j^i, \alpha_j^i, \ell_i)\int_{-1}^1 (1 - z^2)^{\frac{s_{j - 1}}{2} + \nu_j - \frac{1}{2}} \prod_{i = 1}^3 P_{\beta_j^i - 2\ell_i}(z)\, dz. \end{aligned} \end{gathered}$$ To evaluate the integral involving the triple product of Legendre polynomials, we use the fact that the product of Legendre polynomials can be written in terms of the Wigner $3j$-symbol by [@NIST Eq. 34.3.19] $$\begin{aligned} P_{b_1}(z)P_{b_2}(z) = \sum_{\tau_1\in\mathbb{N}} (2\tau_1 + 1)\begin{pmatrix} b_1 & b_2 & \tau_1 \\ 0 & 0 & 0 \end{pmatrix}^2P_{\tau_1}(z).
\end{aligned}$$ Applying this identity twice and using the fact that odd permutations of columns of Wigner $3j$-symbols produce a phase factor [@NIST Eq. 34.3.9], we obtain $$\begin{gathered} \label{eq:Matrix_3j2} \begin{aligned} & \prod_{i = 1}^3 P_{\beta_j^i - 2\ell_i}(z) = P_{\beta_j^1 - 2\ell_1}(z) \sum_{\tau_1\in\mathbb{N}} (2\tau_1 + 1)\begin{pmatrix} \beta_j^2 - 2\ell_2 & \beta_j^3 - 2\ell_3 & \tau_1 \\ 0 & 0 & 0 \end{pmatrix}^2 P_{\tau_1}(z) \\ & = \sum_{\tau_1, \tau_2\in\mathbb{N}} (2\tau_1 + 1)(2\tau_2 + 1)\begin{pmatrix} \beta_j^2 - 2\ell_2 & \beta_j^3 - 2\ell_3 & \tau_1 \\ 0 & 0 & 0 \end{pmatrix}^2\begin{pmatrix} \beta_j^1 - 2\ell_1 & \tau_1 & \tau_2 \\ 0 & 0 & 0 \end{pmatrix}^2 P_{\tau_2}(z). \end{aligned} \end{gathered}$$ Substituting [\[eq:Matrix_3j2\]](#eq:Matrix_3j2){reference-type="eqref" reference="eq:Matrix_3j2"} into [\[eq:Matrix_3j1\]](#eq:Matrix_3j1){reference-type="eqref" reference="eq:Matrix_3j1"} and comparing the resulting expression with the given expression for $H(T_{j - 1}, T_j)$, we need only show $$\begin{aligned} \int_{-1}^1 (1 - z^2)^{\frac{s_{j - 1}}{2} + \nu_j - \frac{1}{2}}P_{\tau_2}(z)\, dz = L(s_{j - 1}, \nu_j, \tau_2). \end{aligned}$$ This follows from applying the following integration formula for Legendre polynomials with $\gamma = (s_{j - 1} + 2\nu_j + 1)/2$ [@Gradshteyn Eq. 7.132.1 with $\mu = 0$]: $$\begin{aligned} \int_{-1}^1 (1 - z^2)^{\gamma - 1}P_{\tau_2}(z)\, dz = \frac{\pi\Gamma^2(\gamma)}{\Gamma\left(\gamma + \frac{\tau_2}{2} + \frac{1}{2}\right)\Gamma\left(\gamma - \frac{\tau_2}{2}\right)\Gamma\left(\frac{\tau_2}{2} + 1\right)\Gamma\left(-\frac{\tau_2}{2} + \frac{1}{2}\right)}, \ \ \mathrm{Re}(\gamma) > 0. \end{aligned}$$ Finally, we may assume that $\tau_2$ is even, since otherwise $(1 - z^2)^{(s_{j - 1} + 2\nu_j - 1)/2}P_{\tau_2}(z)$ is an odd function which results in $L(s_{j - 1}, \nu_j, \tau_2) = 0$. 
◻ We now combine Lemma [Lemma 11](#thm:Matrix_triple){reference-type="ref" reference="thm:Matrix_triple"} and Theorem [Theorem 12](#thm:Matrix_3j){reference-type="ref" reference="thm:Matrix_3j"} to prove Theorem [Theorem 2](#thm:Iso_notball){reference-type="ref" reference="thm:Iso_notball"}. *Proof of Theorem [Theorem 2](#thm:Iso_notball){reference-type="ref" reference="thm:Iso_notball"}.* Choose the perturbation function $\rho = Y_{2, q}$ with the tuple $q = (0, 2, \dots, 2)$. With this choice of $\rho$, Lemma [Lemma 9](#thm:Matrix_trace){reference-type="ref" reference="thm:Matrix_trace"} tells us that the trace of $M^{(d, k)}$ is 0, since $A_{0, 1} = 0$ for this choice of $\rho$. Since $M^{(d, k)}$ is Hermitian, and hence diagonalisable with real eigenvalues summing to zero, it suffices to show that $M^{(d, k)}$ has a nonzero entry. In that case, we must have $\lambda_{k, N(d, k)}^{(1)} = \lambda_{N_{k + 1}}^{(1)} > 0$ since we defined it to be the largest eigenvalue of $M^{(d, k)}$. We claim that the diagonal entry of $M^{(d, k)}$ corresponding to the tuple $m = (k, k, \dots, k)$ is one such nonzero entry. From Lemma [Lemma 11](#thm:Matrix_triple){reference-type="ref" reference="thm:Matrix_triple"}, we need only show that $W^{2, k}_{q, m, m}\neq 0$ with our choice of tuples. Recall the definition of the 3-tuple $T_j = (q_j, m_j, n_j)$ for $j = 1, 2, \dots, d$. From Theorem [Theorem 12](#thm:Matrix_3j){reference-type="ref" reference="thm:Matrix_3j"}, we have $c_{T_2} = \sqrt{\frac{5}{4\pi}}(2k + 1) \neq 0$ and it follows from [@NIST Eqs. 34.3.5 & 34.3.7] that $$\begin{aligned} I(T_1, T_2) & = (-1)^k c_{T_2}\begin{pmatrix} 2 & k & k \\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} 2 & k & k \\ 0 & k & -k \end{pmatrix} \neq 0.
\end{aligned}$$ Next, observe that for each $j = 3, 4, \dots, d$, we have $$\mathrm{diff}_j^i = t_j^i - t_{j - 1}^i = 0, \ \ i = 1, 2, 3, \ \ \textnormal{ and } \ \ s_{j - 1} = q_{j - 1} + m_{j - 1} + n_{j - 1} = 2 + 2k.$$ For a fixed $j\in\{3, 4, \dots, d\}$, Theorem [Theorem 12](#thm:Matrix_3j){reference-type="ref" reference="thm:Matrix_3j"} gives $$\begin{aligned} I(T_{j - 1}, T_j) & = \frac{V(0, 2 + \nu_j, 0)V(0, k + \nu_j, 0)^2}{\mu_j^{(1)}\mu_j^{(2)}\mu_j^{(3)}}\sum_{\substack{\tau_1, \tau_2\in\mathbb{N}\\\ \tau_2 \textnormal{ even}}} (2\tau_1 + 1)(2\tau_2 + 1)L(2 + 2k, \nu_j, \tau_2) \\ & \hspace{5cm} \times \begin{pmatrix} 0 & 0 & \tau_1 \\ 0 & 0 & 0 \end{pmatrix}^2\begin{pmatrix} 0 & \tau_1 & \tau_2 \\ 0 & 0 & 0 \end{pmatrix}^2. \end{aligned}$$ Since the Pochhammer's symbol satisfies $(\alpha)_0 =1$ for any $\alpha\ge 0$, we see that $V(0, 2 + \nu_j, 0)$ and $V(0, k + \nu_j, 0)$ are both 1. Next, using the triangle conditions for Wigner $3j$-symbols, we must have $\tau_1 = 0$ and subsequently $\tau_2 = 0$. Recalling $\nu_j = (j - 1)/2$ and using the fact that $\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} = 1$ (see [@NIST Eq. 34.3.1]), we find $$\begin{aligned} I(T_{j - 1}, T_j) & = \frac{L(2 + 2k, \nu_j, 0)}{\mu_j^{(1)}\mu_j^{(2)}\mu_j^{(3)}}\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}^4 = \frac{\sqrt{\pi}\Gamma\left(k + 1 + \frac{j}{2}\right)}{\mu_j^{(1)}\mu_j^{(2)}\mu_j^{(3)}\Gamma\left(k + \frac{j + 3}{2}\right)} > 0. \end{aligned}$$ This completes the proof since $W^{2, k}_{q, m, m}$ is a product of $I(T_1, T_2)$ and $\{I(T_{j - 1}, T_j)\}_{j = 3}^d$. ◻ Following [@viator2022 Corollary 2.3], we wish to establish the first-order behaviour of the Steklov eigenvalues for a nearly hyperspherical domain $\Omega_\varepsilon$ of the form [\[eq:Domain\]](#eq:Domain){reference-type="eqref" reference="eq:Domain"} with $\rho = Y_{p, q}(\hat\theta)$, for both $p\in\mathbb{N}$ and the $(d - 1)$-tuple $q$ fixed. 
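The closed form $L(2 + 2k, \nu_j, 0) = \sqrt{\pi}\,\Gamma(k + 1 + j/2)/\Gamma(k + (j + 3)/2)$ used above comes from the Gradshteyn--Ryzhik integration formula for Legendre polynomials. As a hedged sanity check (a `sympy` sketch for a few small parameter values, not part of the proof), the formula and this particular value can be verified symbolically:

```python
import sympy as sp

z = sp.symbols('z')

def L_formula(gamma, tau):
    # Gradshteyn-Ryzhik 7.132.1 with mu = 0, matching L(s_{j-1}, nu_j, tau_2)
    return sp.pi * sp.gamma(gamma)**2 / (
        sp.gamma(gamma + sp.Rational(tau, 2) + sp.Rational(1, 2))
        * sp.gamma(gamma - sp.Rational(tau, 2))
        * sp.gamma(sp.Rational(tau, 2) + 1)
        * sp.gamma(sp.Rational(1, 2) - sp.Rational(tau, 2)))

# Check the formula against direct integration for small integer parameters.
for gamma in [2, 3]:
    for tau in [0, 2]:
        lhs = sp.integrate((1 - z**2)**(gamma - 1) * sp.legendre(tau, z), (z, -1, 1))
        assert sp.simplify(lhs - L_formula(gamma, tau)) == 0

# tau_2 = 0 and gamma = k + 1 + j/2 reproduce the value used in the proof above.
k, j = 2, 3
gamma = sp.Rational(2 * k + 2 + j, 2)
val = sp.sqrt(sp.pi) * sp.gamma(k + 1 + sp.Rational(j, 2)) / sp.gamma(k + sp.Rational(j + 3, 2))
assert sp.simplify(L_formula(gamma, 0) - val) == 0
```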
The proof in [@viator2022] is based on a delicate analysis of the integral $W^{p, k}_{q, m, n}$ using the selection rules for Wigner $3j$-symbols, where the authors proved that the entry [\[eq:Matrix_triple\]](#eq:Matrix_triple){reference-type="eqref" reference="eq:Matrix_triple"} for $M^{(d, k)}$ involving the infinite sum reduces to a finite sum over the set $\{p\le 2k, p\textnormal{ even}\}$. **Lemma 13**. *Given $d\ge 3$ and $k\in\mathbb{Z}^+$, let $M^{(d, k)}$ and $\rho$ be the Hermitian matrix and the perturbation function defined in Theorem [Theorem 7](#thm:Oeps_matrix){reference-type="ref" reference="thm:Oeps_matrix"} and [\[eq:Domain\]](#eq:Domain){reference-type="eqref" reference="eq:Domain"}, respectively. Then the entries of $M^{(d, k)}$ can be written as $$M_{m, n}^{(d, k)} = -\frac{1}{2}\sum_{\substack{p = 0 \\ p \textnormal{ even}}}^\infty\sum_{q = 1}^{N(d, k)} A_{p, q}\Big(p(p + d - 1) + 2k\Big)W_{q, m, n}^{p, k}.$$* *Proof.* Fix $p\in\mathbb{N}$ and $k\in\mathbb{Z}^+$. We claim that if $p$ is odd, then $W^{p, k}_{q, m, n} = 0$ for all $q, m, n$ satisfying [\[eq:SH_tuple\]](#eq:SH_tuple){reference-type="eqref" reference="eq:SH_tuple"}. Recall that $s_j = q_j + m_j + n_j$ for $j = 1, 2, \dots, d$. Looking at the formula for $I(T_1, T_2)$ in Theorem [Theorem 12](#thm:Matrix_3j){reference-type="ref" reference="thm:Matrix_3j"}, we see that $W^{p, k}_{q, m, n} = 0$ if $\begin{pmatrix} q_2 & m_2 & n_2 \\ 0 & 0 & 0 \end{pmatrix}$ is zero. Thus, we may assume $\begin{pmatrix} q_2 & m_2 & n_2 \\ 0 & 0 & 0 \end{pmatrix} \neq 0$ without loss of generality. By the fourth selection rule for Wigner $3j$-symbols, we must have $s_2 = q_2 + m_2 + n_2$ even. Next, we turn our attention to $I(T_2, T_3)$. Recall that $\mathrm{diff}_j^i = t_j^i - t_{j - 1}^i$ and $\tau_2$ is even.
The Wigner $3j$ symbols appearing in $I(T_2, T_3)$ have the form $$\begin{pmatrix} \mathrm{diff}_3^2 - 2\ell_2 & \mathrm{diff}_3^3 - 2\ell_3 & \tau_1 \\ 0 & 0 & 0 \end{pmatrix}^2\begin{pmatrix} \mathrm{diff}_3^1 - 2\ell_1 & \tau_1 & \tau_2 \\ 0 & 0 & 0 \end{pmatrix}^2.$$ Looking at the Wigner $3j$-symbols from $I(T_2, T_3)$, the fourth selection rule tells us that we may assume both $\mathrm{diff}_3^1 + \tau_1$ and $(\mathrm{diff}_3^2 - 2\ell_2 + \mathrm{diff}_3^3 - 2\ell_3 + \tau_1)$ are even. In this case, $$\begin{aligned} \mathrm{diff}_3^1 + \tau_1 & = q_3 - q_2 + \tau_1 \\ \mathrm{diff}_3^2 - 2\ell_2 + \mathrm{diff}_3^3 - 2\ell_3 + \tau_1 & = m_3 - m_2 - 2\ell_2 + n_3 - n_2 - 2\ell_3 + \tau_1 \\ & = (s_3 - q_3) - (s_2 - q_2) + \tau_1 - 2\ell_2 - 2\ell_3 \\ & = (s_3 - s_2) - (\mathrm{diff}_3^1 - \tau_1) - 2\ell_2 - 2\ell_3.\end{aligned}$$ Since $\mathrm{diff}_3^1 - \tau_1 = \mathrm{diff}_3^1 + \tau_1 - 2\tau_1$ and $s_2$ are both even, we may conclude that $s_3$ is even as well. Continuing inductively, we conclude that $\mathrm{diff}_j^1 + \tau_1$ and $s_j$ are even (so that $I(T_{j - 1}, T_j)$ is nonzero) for each $j = 3, 4, \dots, d - 1$. We now consider $I(T_{d-1}, T_d)$. By the arguments above, we can assume that $\mathrm{diff}_d^1 + \tau_1$ is even, which in turn implies that $\mathrm{diff}_d^1 - \tau_1$ is also even. Thus, we compute $$\begin{aligned} \mathrm{diff}_d^2 - 2\ell_{d-1} + \mathrm{diff}_d^3 - 2\ell_d + \tau_1 & = m_d - m_{d-1} - 2\ell_{d-1} + n_d - n_{d-1} - 2\ell_d + \tau_1 \\ & = (s_d - q_d) - (s_{d-1} - q_{d-1}) + \tau_1 - 2\ell_{d-1} - 2\ell_d \\ & = (s_d - s_{d-1}) - (\mathrm{diff}_d^1 - \tau_1) - 2\ell_{d-1} - 2\ell_d \\ & = p + 2k - s_{d-1} - (\mathrm{diff}_d^1 - \tau_1) - 2\ell_{d-1} - 2\ell_d ,\end{aligned}$$ where we have used that $(t_d^1 , t_d^2 , t_d^3) = (q_d, m_d, n_d) = (p, k ,k)$. But if $p$ is odd, then $\mathrm{diff}_d^2 - 2\ell_{d-1} + \mathrm{diff}_d^3 - 2\ell_d + \tau_1$ must be odd as well.
By the fourth selection rule, we conclude that $I(T_{d-1},T_d)=0$, so that $W^{p,k}_{q,m,n} = 0$ as well, completing the proof. ◻ The following theorem is immediate from Corollary [Corollary 10](#thm:Matrix_sum){reference-type="ref" reference="thm:Matrix_sum"} and Lemma [Lemma 13](#thm:Matrix_triple_finite){reference-type="ref" reference="thm:Matrix_triple_finite"}. **Theorem 14**. *Given $d\ge 3$, consider a nearly hyperspherical domain $\Omega_\varepsilon$ of the form [\[eq:Domain\]](#eq:Domain){reference-type="eqref" reference="eq:Domain"} with $A_{p, q} = \delta_{p, p'}\delta_{q, q'}$. For any $k\in\mathbb{Z}^+$ and any $j = 1, 2, \dots, N(d, k)$,* 1. *if $p' = 0$ and $q' = (0, 0, \dots, 0)$ is the trivial $(d - 1)$-tuple, then $\lambda_{k, j}^{(1)} = -k\lvert{S^d}\rvert^{-1/2}$.* 2. *if $p'$ is odd, then the Steklov eigenvalue $\lambda_{k, j}^\varepsilon$ is unperturbed at first-order in $\varepsilon$.* Reducing the infinite sum [\[eq:Matrix_triple\]](#eq:Matrix_triple){reference-type="eqref" reference="eq:Matrix_triple"} to the case where $p\le 2k$ is intractable for $d\ge 3$. More specifically, we compute the entry of the matrix $M^{(d, k)}$ using [\[eq:Matrix_triple\]](#eq:Matrix_triple){reference-type="eqref" reference="eq:Matrix_triple"} for $p = 2k + 2\xi$ with $\xi\in\mathbb{Z}^+$, $q = (0, 2k, \dots, 2k)$, $m = n = (k, k, \dots, k)$, and we see numerically that this particular entry is zero due to cancellations of Wigner $3j$-symbols appearing in the formula for $W^{p, k}_{q, m, n}$ (see Theorem [Theorem 12](#thm:Matrix_3j){reference-type="ref" reference="thm:Matrix_3j"}). ### Acknowledgements. {#acknowledgements. .unnumbered} We would like to express our gratitude to Chiu-Yen Kao and Nathan Schroeder for sharing their helpful insights into the Steklov eigenvalue problem in arbitrary dimensions.
We would also like to thank Yerim Kone, Lucas Alland, and Amy Liu for their work in 2D Steklov shape perturbation, and Swarthmore College for funding their summer work.
--- abstract: | In this note we describe the polynomial identities of degree 4 for a certain subspace of the Weyl algebra $\mathsf{A}_1$ over an infinite field of arbitrary characteristic. polynomial identities, matrix identities, Weyl algebra, positive characteristic. 16R10; 16S32. address: - | Artem Lopatin\ State University of Campinas, SPbU - | Carlos Arturo Rodriguez Palma\ Universidad Industrial de Santander author: - Artem Lopatin - Carlos Arturo Rodriguez Palma title: Identities for subspaces of the Weyl algebra --- [^1] # Introduction {#section_intro} Assume that $\mathbb{F}$ is an infinite field of arbitrary characteristic $p=\mathop{\rm char}\mathbb{F}\geq0$. All vector spaces and algebras are over $\mathbb{F}$ and all algebras are associative, unless stated otherwise. We write $\mathbb{F}\langle x_1,\ldots,x_n\rangle$ for the free $\mathbb{F}$-algebra with free generators $x_1,\ldots,x_n$. In case the free generators are $x_1,x_2,\dotsc$ the corresponding free algebra is denoted by $\mathbb{F}\langle X\rangle$. ## Witt algebra $\mathsf{W}_1$ The *Weyl algebra* $\mathsf{A}_1$ is the unital associative algebra over $\mathbb{F}$ generated by letters $x$, $y$ subject to the defining relation $yx=xy+1$ (equivalently, $[y,x]=1$, where $[y,x]=yx-xy$), i.e., $$\mathsf{A}_1=\mathbb{F}\langle x,y\rangle/{{\rm id}\{{yx-xy-1}\}}.$$ For $s>0$ denote by $\mathsf{A}_1^{(-,s)}$ the $\mathbb{F}$-span of $a y^s$ in $\mathsf{A}_1$ over all $a\in \mathbb{F}[x]$. It is easy to see that the space $\mathsf{A}_1^{(-,s)}$ is closed with respect to the Lie bracket $[\,\cdot,\cdot\,]$ and that the Lie bracket $[\,\cdot,\cdot\,]$ is not identically zero on $\mathsf{A}_1^{(-,s)}$ if and only if $s=1$ (for example, see Corollary 3.5 of [@Lopatin_Rodriguez_II]). Note that in case $p=0$ the space $\mathsf{A}_1^{(-,1)}$ together with the multiplication given by the Lie bracket is the Witt algebra $\mathsf{W}_1$, which is a simple infinite-dimensional Lie algebra.
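In characteristic $0$ the Weyl algebra has its familiar faithful model on $\mathbb{F}[x]$, with $x$ acting by multiplication and $y$ by $d/dx$. A minimal sketch of this action (our code, not part of the paper; polynomials stored as coefficient lists) checks the defining relation $[y,x]=1$:

```python
# Characteristic-0 model of A_1 (a sketch of ours): polynomials in F[x]
# stored as coefficient lists [a_0, a_1, ...]; x acts by multiplication,
# y acts by d/dx.

def X(a):  # multiplication by x
    return [0] + list(a)

def Y(a):  # d/dx
    return [i * c for i, c in enumerate(a)][1:] or [0]

def sub(a, b):  # a - b, padding with zeros
    n = max(len(a), len(b))
    a = list(a) + [0] * (n - len(a))
    b = list(b) + [0] * (n - len(b))
    return [u - v for u, v in zip(a, b)]

def trim(a):  # drop trailing zero coefficients
    while len(a) > 1 and a[-1] == 0:
        a = a[:-1]
    return a

# (yx - xy) a = a for every polynomial a, i.e. [y, x] = 1:
for a in ([3, 0, 5, 0, 1], [1, 2], [7]):
    assert trim(sub(Y(X(a)), X(Y(a)))) == trim(a)
```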
Similarly, considering $n$ pairs $\{x_i,y_i\}$ $(1\leq i\leq n)$ instead of $\{x,y\}$ we can define the $n^{\rm th}$ Witt algebra $\mathsf{W}_n$, which is also a simple infinite-dimensional Lie algebra. ## Polynomial identities {#section_intro_PI} A polynomial identity for a unital $\mathbb{F}$-algebra $\mathcal{A}$ is an element $f(x_1,\ldots,x_m)$ of $\mathbb{F}\langle X\rangle$ such that $f(a_1,\ldots,a_m)=0$ in $\mathcal{A}$ for all $a_1,\ldots,a_m\in \mathcal{A}$. The set ${{\rm Id}_{\mathbb{F}}({\mathcal{A}})}={{\rm Id}({\mathcal{A}})}$ of all polynomial identities for $\mathcal{A}$ is a T-ideal, i.e., ${{\rm Id}({\mathcal{A}})}$ is an ideal of $\mathbb{F}\langle X\rangle$ such that $\phi({{\rm Id}({\mathcal{A}})})\subset {{\rm Id}({\mathcal{A}})}$ for every endomorphism $\phi$ of $\mathbb{F}\langle X\rangle$. An algebra that satisfies a nontrivial polynomial identity is called a PI-algebra. A T-ideal $I$ of $\mathbb{F}\langle X\rangle$ generated by $f_1,\ldots,f_k\in \mathbb{F}\langle X\rangle$ is the minimal T-ideal of $\mathbb{F}\langle X\rangle$ that contains $f_1,\ldots,f_k$. Denote $I={{\rm Id}({f_1,\ldots,f_k})}$. We say that $f\in \mathbb{F}\langle X\rangle$ is a consequence of $f_1,\ldots,f_k$ if $f\in I$. Given a monomial $w\in \mathbb{F}\langle x_1,\ldots,x_m\rangle$, we write $\deg_{x_i}(w)$ for the number of letters $x_i$ in $w$ and $\mathop{\rm mdeg}(w)\in\mathbb{N}^m$ for the multidegree $(\deg_{x_1}(w),\ldots,\deg_{x_m}(w))$ of $w$, where $\mathbb{N}=\{0,1,2,\ldots\}$. An element $f\in\mathbb{F}\langle X\rangle$ is called (multi)homogeneous if it is a linear combination of monomials of the same (multi)degree. Given $f=f(x_1,\ldots,x_m)$ of $\mathbb{F}\langle X\rangle$, we write $f=\sum_{{\underline{\delta}} \in \mathbb{N}^m} f_{{\underline{\delta}} }$ for multihomogeneous components $f_{{\underline{\delta}} }$ of $f$ with $\mathop{\rm mdeg}{f_{{\underline{\delta}} }}={\underline{\delta}}$.
For ${\underline{\delta}} =(\delta_1,\ldots,\delta_m)$ we denote $|{\underline{\delta}} |=\delta_1+\cdots+\delta_m$. We say that algebras $\mathcal{A}$, $\mathcal{B}$ are PI-equivalent and write $\mathcal{A}\sim_{\rm PI}\mathcal{B}$ if ${{\rm Id}({\mathcal{A}})} ={{\rm Id}({\mathcal{B}})}$. Given an $\mathbb{F}$-subspace $\mathcal{V}\subset \mathcal{A}$, we write ${{\rm Id}_{\mathbb{F}}({\mathcal{V}})}={{\rm Id}({\mathcal{V}})}$ for the ideal of all polynomial identities for $\mathcal{V}$. Note that $\phi({{\rm Id}({\mathcal{V}})})\subset {{\rm Id}({\mathcal{V}})}$ for every linear endomorphism $\phi$ of $\mathbb{F}\langle X\rangle$, but ${{\rm Id}({\mathcal{V}})}$ is not a T-ideal in general. Assume that $p=0$. It is well-known that the algebra $\mathsf{A}_1$ does not have nontrivial polynomial identities. Nevertheless, some subspaces of $\mathsf{A}_1$ satisfy certain polynomial identities. As an example, Dzhumadil'daev proved that the standard polynomial $${\rm St}_N(x_{1},\ldots,x_{N})=\sum_{\sigma\in S_{N}}(-1)^{\sigma}x_{\sigma(1)}\cdots x_{\sigma(N)}$$ is a polynomial identity for $\mathsf{A}_1^{(-,s)}$ if and only if $N>2s$ (Theorem 1 of [@Askar_2014]). More results on polynomial identities for some subspaces of the $n^{\rm th}$ Weyl algebra were obtained in [@Askar_2004; @Askar_Yeliussizov_2015]. The polynomial Lie identities for the $n^{\rm th}$ Witt algebra $\mathsf{W}_n$ were studied by Mishchenko [@Mishchenko_1989], Razmyslov [@Razmyslov_book] and others. The well-known open conjecture claims that all polynomial identities for $\mathsf{W}_1$ follow from the standard Lie identity $$\sum_{\sigma\in S_{4}}(-1)^{\sigma}[[[[x_0,x_{\sigma(1)}],x_{\sigma(2)}],x_{\sigma(3)}],x_{\sigma(4)}].$$ The situation is drastically different in case $p>0$. Namely, $\mathsf{A}_1$ is PI-equivalent to the algebra $M_p$ of all $p\times p$ matrices over $\mathbb{F}$.
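The PI-equivalence with $M_p$ can be made concrete through the standard $p$-dimensional representation of $\mathsf{A}_1$ on $\mathbb{F}[x]/(x^p)$, where $x$ acts by multiplication and $y$ by $d/dx$. The following sketch (ours, not the paper's argument) checks that the resulting $p\times p$ matrices satisfy the defining relation $YX - XY = I$ modulo $p$:

```python
# Characteristic-p check (a sketch of ours): A_1 acts on F_p[x]/(x^p) with
# x acting by multiplication and y by d/dx; the resulting p x p matrices
# satisfy Y X - X Y = I mod p, realizing x, y inside M_p(F_p).

def weyl_pair(p):
    # X[i][j] = coeff of x^i in x * x^j;  Y[i][j] = coeff of x^i in (x^j)'
    X = [[1 if i == j + 1 else 0 for j in range(p)] for i in range(p)]
    Y = [[j % p if i == j - 1 else 0 for j in range(p)] for i in range(p)]
    return X, Y

def matmul(A, B, p):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p for j in range(n)]
            for i in range(n)]

for p in (2, 3, 5, 7):
    X, Y = weyl_pair(p)
    comm = [[(a - b) % p for a, b in zip(ra, rb)]
            for ra, rb in zip(matmul(Y, X, p), matmul(X, Y, p))]
    assert comm == [[1 if i == j else 0 for j in range(p)] for i in range(p)]
```

Of course this only witnesses the defining relation inside $M_p(\mathbb{F}_p)$; the PI-equivalence itself is the nontrivial statement.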
Moreover, the Weyl algebra $\mathsf{A}_1$ over an arbitrary associative (but possibly non-commutative) $\mathbb{F}$-algebra $\mathsf{B}$ is PI-equivalent to the algebra $M_p(\mathsf{B})$ of all $p\times p$ matrices over $\mathsf{B}$ (see Theorem 4.9 of [@Lopatin_Rodriguez_2022] for a more general result). Over a field of arbitrary characteristic, minimal polynomial identities for 1. $\mathsf{A}_1^{(-,1)}$ for an arbitrary $p$, 2. $\mathsf{A}_1^{(-,s)}$ for $p=2$, were described in [@Lopatin_Rodriguez_II] as a consequence of more general results for so-called parametric Weyl algebras. ## Results In this paper we describe all polynomial identities for $\mathsf{A}_1^{(-,1)}$ of degree 4. Namely, polynomial identities of multidegree 1. (3,1) were considered in Proposition [Proposition 5](#prop_deg31){reference-type="ref" reference="prop_deg31"}; 2. (2,2) were considered in Proposition [Proposition 7](#prop_deg22){reference-type="ref" reference="prop_deg22"}; 3. (2,1,1) were considered in Proposition [Proposition 9](#prop_id211){reference-type="ref" reference="prop_id211"}; 4. (1,1,1,1) were considered in Proposition [Proposition 14](#prop_id1111_p2){reference-type="ref" reference="prop_id1111_p2"}. # Auxiliaries ## Properties of $\mathsf{A}_1$ {#section_A1} Given $a\in \mathbb{F}[x]$, we write $a'$ for the usual derivative of a polynomial $a$ with respect to the variable $x$.
Using the linearity of the derivative and induction on the degree of $a\in\mathbb{F}[x]$ it is easy to see that $$\label{eq0} [y,a]=a' \text{ holds in }\mathsf{A}_1 \text{ for all }a\in \mathbb{F}[x].$$ Thus for all $i,j\geq0$ the associative multiplication and the Lie bracket on $\mathsf{A}_1^{(-,1)}$ are given by $$\label{eq1} x^i y\, x^j y = x^{i+j}y^2 + j x^{i+j-1}y \;\text{ and }\; [x^i y,x^j y] = (j-i)x^{i+j-1}y,$$ where we use the convention that $$x^i=0 \text{ in }\mathsf{A}_1 \text{ in case }i\in\mathbb{Z}\text{ is negative}.$$ The following properties are well-known (for example, see [@Benkart_Lopes_Ondrus_I]): **Proposition 1**. 1. *$\{x^{i}y^{j} \ | \ i,j\geq 0\}$ and $\{y^{j}x^{i} \ | \ i,j\geq 0\}$ are $\mathbb{F}$-bases for $\mathsf{A}_1$. In particular, $\{x^{i}y \ | \ i\geq 0\}$ is an $\mathbb{F}$-basis for $\mathsf{A}_1^{(-,1)}$.* 2. *If $p=0$, then the center ${\rm Z}(\mathsf{A}_1)$ of $\mathsf{A}_1$ is $\mathbb{F}$; if $p>0$, then ${\rm Z}(\mathsf{A}_1)=\mathbb{F}[x^{p},y^{p}]$.* 3. *If $p>0$, then $\mathsf{A}_1$ is a free module over ${\rm Z}(\mathsf{A}_1)$ and the set $\{x^{i}y^{j} \ | \ 0\leq i,j<p\}$ is a basis.* 4. *The algebra $\mathsf{A}_1$ is simple if and only if $p=0$.* Theorem 5.4 of [@Lopatin_Rodriguez_II] implies the following statement: **Proposition 2**. 1. *A minimal polynomial identity for $\mathsf{A}_1^{(-,1)}$ has degree $3$.* 2. *Every homogeneous polynomial identity for $\mathsf{A}_1^{(-,1)}$ of degree $3$ is equal to $\xi\, {\rm St}_3$ for some $\xi\in\mathbb{F}$.* ## Partial linearizations Assume $f\in \mathbb{F}\langle X\rangle$ is multihomogeneous of multidegree ${\underline{\delta}} \in\mathbb{N}^m$.
Given $1\leq i\leq m$ and ${\underline{\gamma}} \in\mathbb{N}^k$ for some $k>0$ with $|{\underline{\gamma}} |=\delta_i$, the *partial linearization* $\mathop{\rm lin}_{x_i}^{{\underline{\gamma}} }(f)$ of $f$ of multidegree ${\underline{\gamma}}$ with respect to $x_i$ is the multihomogeneous component of $$f(x_1,\ldots,x_{i-1},x_{i}+\cdots+x_{i+k-1},x_{i+k},\ldots,x_{m+k-1})$$ of multidegree $(\delta_1,\ldots,\delta_{i-1},\gamma_{1},\ldots,\gamma_{k},\delta_{i+1},\ldots,\delta_{m})$. As an example, $$\rm{lin}_{x_2}^{(1,1)}(x_1x_2^2x_3^3) = x_1 (x_2 x_3 + x_3 x_2)x_4^3.$$ The result of subsequent applications of partial linearizations to $f$ is also called a partial linearization of $f$. The *complete linearization* $\mathop{\rm lin}(f)$ of $f$ is the result of subsequent applications of $\mathop{\rm lin}_{x_1}^{1^{\delta_1}},\ldots, \mathop{\rm lin}_{x_m}^{1^{\delta_m}}$ to $f$, where $1^k$ stands for $(1,\ldots,1)$ ($k$ times). Assume $\mathcal{A}$ is a unital $\mathbb{F}$-algebra and $\mathcal{V}\subset \mathcal{A}$ is an $\mathbb{F}$-subspace. Since $\mathbb{F}$ is infinite, it is well-known that if $f$ is a polynomial identity for $\mathcal{A}$, then all partial linearizations of $f$ are also polynomial identities for $\mathcal{V}$. The following lemma is folklore. **Lemma 3**. *Assume that all partial linearizations of a multihomogeneous element $f$ of $\mathbb{F}\langle X\rangle$ are equal to zero over some basis of $\mathcal{V}$. Then $f$ is a polynomial identity for $\mathcal{V}$.* *Proof.* Let $\{v_j\}$ be a basis for $\mathcal{V}$. Note that $f(\sum_j \alpha_{1j} v_j, \ldots, \sum_j \alpha_{mj} v_j)$ is a linear combination of partial linearizations of $f$ evaluated on the basis $\{v_j\}$, where $\alpha_{1j},\ldots,\alpha_{mj}\in\mathbb{F}$ for all $j$. Therefore, the required is proven. ◻ # Non-multilinear identities for $\mathsf{A}_1^{(-,1)}$ of degree four {#section_deg4} In this section we denote $c_i=x^i y$ for $i\geq0$.
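The normal-ordered computations in this section can be mechanised. The sketch below (our code, not the authors' program; elements of $\mathsf{A}_1$ are dictionaries over the basis $\{x^i y^j\}$) multiplies via the normal-ordering rule $y^j x^k=\sum_{m}\binom{j}{m}\frac{k!}{(k-m)!}\,x^{k-m}y^{j-m}$, and checks the multiplication rule ([\[eq1\]](#eq1){reference-type="ref" reference="eq1"}) together with the vanishing of ${\rm St}_3$ on the elements $c_i$:

```python
# Arithmetic in A_1 over the PBW basis {x^i y^j} (a sketch of ours):
# an element is a dict {(i, j): coefficient}, and products are
# normal-ordered via y^j x^k = sum_m C(j,m) * k!/(k-m)! * x^(k-m) y^(j-m).
from math import comb, perm

def mul(u, v):
    w = {}
    for (i, j), a in u.items():
        for (k, l), b in v.items():
            for m in range(min(j, k) + 1):
                key = (i + k - m, j + l - m)
                w[key] = w.get(key, 0) + a * b * comb(j, m) * perm(k, m)
    return {key: val for key, val in w.items() if val != 0}

def c(i):  # c_i = x^i y
    return {(i, 1): 1}

# the multiplication rule (eq1): x^i y x^j y = x^(i+j) y^2 + j x^(i+j-1) y
for i in range(4):
    for j in range(1, 4):
        assert mul(c(i), c(j)) == {(i + j, 2): 1, (i + j - 1, 1): j}

# St_3 vanishes on the basis of A_1^(-,1), in line with Proposition 2
PERMS = [((0, 1, 2), 1), ((1, 2, 0), 1), ((2, 0, 1), 1),
         ((0, 2, 1), -1), ((1, 0, 2), -1), ((2, 1, 0), -1)]

def st3(i, j, k):
    v, total = (i, j, k), {}
    for order, s in PERMS:
        term = mul(mul(c(v[order[0]]), c(v[order[1]])), c(v[order[2]]))
        for key, val in term.items():
            total[key] = total.get(key, 0) + s * val
    return {key: val for key, val in total.items() if val != 0}

assert all(st3(i, j, k) == {}
           for i in range(4) for j in range(4) for k in range(4))

# spot-check of the coefficients of c_i c_j c_k c_l for (i,j,k,l) = (2,3,1,2)
i, j, k, l = 2, 3, 1, 2
e = i + j + k + l
prod = mul(mul(c(i), c(j)), mul(c(k), c(l)))
assert prod == {(e, 4): 1, (e - 1, 3): j + 2*k + 3*l,
                (e - 2, 2): (k + 2*l)*(j + k + l - 1) + l*(k + l - 1),
                (e - 3, 1): l*(k + l - 1)*(j + k + l - 2)}
```

The final assertion matches the coefficient formulas recorded in Lemma 4 below; the integer arithmetic here is characteristic $0$, so vanishing statements reduce modulo any prime.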
Note that in the presentation $c_i c_j c_k c_l=\sum_{r,t\geq0}\beta_{rt} x^r y^t$ in $\mathsf{A}_1$ the coefficient $\beta_{rt}\in\mathbb{F}$ is unique for all $r,t\geq0$ by part (a) of Proposition [Proposition 1](#prop_basis1){reference-type="ref" reference="prop_basis1"}. Consider the following multihomogeneous elements of $\mathbb{F}\langle X\rangle$ of degree $4$: $$\Phi_{22}= x_1^2 x_2^2 - 3 x_1 x_2 x_1 x_2 + 2 x_1 x_2^2 x_1 + 2 x_2 x_1^2 x_2 - 3 x_2 x_1 x_2 x_1 + x_2^2 x_1^2,$$ $$\Psi = x_2[x_1,x_4]x_3 + x_3[x_1,x_4]x_2,$$ $$\Psi_{211} = x_1[x_1,x_2]x_3 + x_3[x_1,x_2]x_1 = \Psi(x_1,x_1,x_3,x_2).$$ Denote $\Phi_{211} = \rm{lin}_{x_2}^{(1,1)}\Phi_{22}$, $\Phi_{\rm lin} = \rm{lin}(\Phi_{22})$, $\Psi_{\rm lin} = \rm{lin}(\Psi_{211})$. **Lemma 4**. *Given $i,j,k,l\geq0$, we denote $e=i+j+k+l$. Then in the presentation of $c_i c_j c_k c_l$ as the linear combination of basis elements $\{x^r y^t\,|\,r,t\geq0\}$ of $\mathsf{A}_1$ the coefficient of* 1. *$x^e y^4$ is $1$;* 2. *$x^{e-1} y^3$ is $j+2k+3l$, in case $e\geq1$;* 3. *$x^{e-2} y^2$ is $(k+2l)(j+k+l-1) + l(k+l-1)$, in case $e\geq2$;* 4. *$x^{e-3}y$ is $l(k+l-1)(j+k+l-2)$, in case $e\geq3$.* *The rest of the coefficients are zero.* *Proof.* Assume $i,j,k,l\geq1$. We apply equality ([\[eq0\]](#eq0){reference-type="ref" reference="eq0"}) to obtain that $$\begin{array}{rcl} c_jc_kc_l & = & x^j y x^k (x^l y + l x^{l-1}) y \\ & = & x^j( y x^{k+l}) y^2 + l x^j ( y x^{k+l-1})y\\ & = & x^{j+k+l} y^3 + (k+2l) x^{j+k+l-1} y^2 + l (k+l-1) x^{j+k+l-2} y \;\;\;\text{ in }\;\;\mathsf{A}_1. \end{array}$$ Applying the obtained formula to $c_i c_j c_k c_l$ we conclude the proof. Note that in these calculations $x^k$ with negative $k\in\mathbb{Z}$ always has zero coefficient. Then these calculations are valid for $i,j,k,l\geq0$ and the required is proven. ◻ **Proposition 5**.
*There are no non-trivial multi-homogeneous polynomial identities for $\mathsf{A}_1^{(-,1)}$ of multidegree $(3,1)$.* *Proof.* Assume that $f(x_1,x_2) = \alpha_1 x_1^3 x_2 + \alpha_2 x_1^2 x_2 x_1 + \alpha_3 x_1 x_2 x_1^2 + \alpha_4 x_2 x_1^3$ is a polynomial identity for $\mathsf{A}_1^{(-,1)}$, where $\alpha_1,\ldots,\alpha_4\in\mathbb{F}$. We will show that $f=0$, i.e., the identity is trivial. For all $i,j\geq1$ we have $f(c_i,c_j)=0$ in $\mathsf{A}_1$. Applying parts (a)--(d), respectively, of Lemma [Lemma 4](#lemma_decomposition){reference-type="ref" reference="lemma_decomposition"}, we obtain that $$\label{eqA} \alpha_1+\alpha_2+\alpha_3+\alpha_4=0,$$ $$\label{eqB} (3i+3j)\alpha_1 + (4i+2j)\alpha_2 + (5i+j)\alpha_3 + 6i\alpha_4 = 0,$$ $$\label{eqC} \begin{array}{c} (2i^2 + 6ij + 3j^2 - i - 3j)\alpha_1 + (5i^2 + 5ij + j^2 -3i - j)\alpha_2 +\\ (8i^2 + 3ij - 4i)\alpha_3 + (11i^2-4i)\alpha_4 = 0,\\ \end{array}$$ $$\label{eqD} \begin{array}{c} j(i+j-1)(2i+j-2)\alpha_1 + i(i+j-1)(2i+j-2)\alpha_2 + \\ i(2i-1)(2i+j-2)\alpha_3 + i(2i-1)(3i-2)\alpha_4 = 0,\\ \end{array}$$ respectively. We can rewrite formula ([\[eqB\]](#eqB){reference-type="ref" reference="eqB"}) as $$\label{eqB2} (3\alpha_1+4\alpha_2+5\alpha_3+6\alpha_4)i + (3\alpha_1+ 2\alpha_2 +\alpha_3)j = 0.$$ If $p=0$, then equality ([\[eqB2\]](#eqB2){reference-type="ref" reference="eqB2"}) implies that $3\alpha_1+ 2\alpha_2 +\alpha_3=0$, since $\mathbb{F}$ is infinite. In case $p>0$ we consider $i=p$, $j=1$ in equality ([\[eqB2\]](#eqB2){reference-type="ref" reference="eqB2"}) to obtain that $3\alpha_1+ 2\alpha_2 +\alpha_3=0$. Thus it follows from equality ([\[eqA\]](#eqA){reference-type="ref" reference="eqA"}) that for an arbitrary $p$ we have $\alpha_3=-3\alpha_1 - 2\alpha_2$ and $\alpha_4=2\alpha_1+\alpha_2$. Taking $i=1$, $j=2$ and $i=1$, $j=3$ in equality ([\[eqD\]](#eqD){reference-type="ref" reference="eqD"}) we obtain that $\alpha_2=-4\alpha_1$ and $4\alpha_1=0$. Thus $f=0$ in case $p\neq2$. Assume $p=2$.
We have $\alpha_2=\alpha_4=0$ and $\alpha_3=\alpha_1$. Considering $i=1$, $j=2$ in equality ([\[eqC\]](#eqC){reference-type="ref" reference="eqC"}) we can see that $\alpha_1=0$. The required is proven. ◻ **Lemma 6**. *The elements $\Phi_{22}$, $\Phi_{211}$, $\Phi_{\rm lin}$ are polynomial identities for $\mathsf{A}_1^{(-,1)}$. In case $p=2$ the elements $\Psi_{211}$, $\Psi$, $\Delta$ and $[[x_1,x_2],[x_3,x_4]]$ are polynomial identities for $\mathsf{A}_1^{(-,1)}$.* *Proof.* Using Lemma [Lemma 4](#lemma_decomposition){reference-type="ref" reference="lemma_decomposition"}, straightforward calculations (by means of a computer program) show that $\Phi_{22}$ and its partial linearizations $\mathop{\rm lin}_{x_1}^{(1,1)}(\Phi_{22})$, $\mathop{\rm lin}_{x_2}^{(1,1)}(\Phi_{22})$ and the complete linearization $\mathop{\rm lin}(\Phi_{22})$ are equal to zero over the set $\{c_i\, |\, i\geq0\}$, which is a basis of $\mathsf{A}_1^{(-,1)}$. Lemma [Lemma 3](#lemma_id){reference-type="ref" reference="lemma_id"} concludes the proof for $\Phi_{22}$. Similarly, we prove this lemma for $\Psi$, $\Delta$ and $[[x_1,x_2],[x_3,x_4]]$. ◻ **Proposition 7**. *Assume $f$ is a multi-homogeneous polynomial identity for $\mathsf{A}_1^{(-,1)}$ of multidegree $(2,2)$. Then $f=\alpha\, \Phi_{22}$ for some $\alpha\in \mathbb{F}$.* *Proof.* Assume that $f(x_1,x_2) = \alpha_1 x_1^2 x_2^2 + \alpha_2 x_1 x_2 x_1 x_2 +\alpha_3 x_1 x_2^2 x_1 + \alpha_4 x_2 x_1^2 x_2 + \alpha_5 x_2 x_1 x_2 x_1 + \alpha_6 x_2^2 x_1^2$ is a polynomial identity for $\mathsf{A}_1^{(-,1)}$, where $\alpha_1,\ldots,\alpha_6\in\mathbb{F}$. Hence $f(c_i,c_j)=0$ in $\mathsf{A}_1$ for all $i,j\geq0$.
Applying parts (a)--(d), respectively, of Lemma [Lemma 4](#lemma_decomposition){reference-type="ref" reference="lemma_decomposition"}, we obtain that $$\label{eq22A} \alpha_1+\alpha_2+\alpha_3+\alpha_4+\alpha_5+\alpha_6=0,$$ $$\label{eq22B} \begin{array}{c} (i+5j)\alpha_1 + (2i+4j)\alpha_2 + (3i+3j)\alpha_3 + \\ (3i+3j)\alpha_4 + (4i+2j)\alpha_5 + (5i+j)\alpha_6=0, \;\;\text{ if }\;\; i+j\geq1, \\ \end{array}$$ $$\label{eq22C} \begin{array}{c} (3ij + 8j^2 - 4j)\alpha_1 + (i^2 + 5ij + 5j^2 -i - 3j)\alpha_2 + \\ (3i^2 + 6ij + 2j^2 - 3i - j)\alpha_3 + (2i^2 + 6ij + 3j^2 - i - 3j)\alpha_4 + \\ (5i^2 + 5ij + j^2 - 3i - j)\alpha_5 + (8i^2 + 3 ij - 4i)\alpha_6=0, \;\;\text{ if }\;\; i+j\geq1, \\ \end{array}$$ $$\label{eq22D} \begin{array}{c} j(2j-1)(i+2j-2)\alpha_1 + j(i+j-1)(i+2j-2)\alpha_2 + \\ i(i+j-1)(i+2j-2)\alpha_3 + j(i+j-1)(2i+j-2)\alpha_4 + \\ i(i+j-1)(2i+j-2)\alpha_5 + i(2i-1)(2i+j-2)\alpha_6=0, \;\;\text{ if }\;\; i+j\geq2.\\ \end{array}$$ Taking $i=0$, $j=1$ in equality ([\[eq22B\]](#eq22B){reference-type="ref" reference="eq22B"}) we obtain $$5\alpha_1 + 4\alpha_2 + 3\alpha_3 + 3\alpha_4 + 2\alpha_5 + \alpha_6=0.$$ Considering $i=0$, $j=1$ and $i=1$, $j=0$ in equality ([\[eq22C\]](#eq22C){reference-type="ref" reference="eq22C"}) we obtain that $$\label{eq_new1} 4\alpha_1 + 2\alpha_2 +\alpha_3 = 0\quad \text{and}\quad \alpha_4 + 2\alpha_5 + 4\alpha_6 = 0,$$ respectively. Let $p\neq2$. Considering $i=0$, $j=2$ and $i=2$, $j=0$ in equality ([\[eq22D\]](#eq22D){reference-type="ref" reference="eq22D"}) we obtain that $3\alpha_1 + \alpha_2 = 0$ and $\alpha_5 + 3\alpha_6=0$, respectively. It is easy to see that the above five equalities imply that $$\label{eq22g1} \alpha_2=\alpha_5=-3\alpha_1,\;\; \alpha_3=\alpha_4=2\alpha_1 \;\text{ and }\; \alpha_6=\alpha_1.$$ Hence, $f=\alpha_1 \Phi_{22}$. Let $p=2$. Equality ([\[eq22B\]](#eq22B){reference-type="ref" reference="eq22B"}) implies that $(\alpha_1+\alpha_3+\alpha_4+\alpha_6)(i+j)=0$ for all $i,j\geq0$ with $i+j\geq1$.
Considering $i=1$, $j=0$ we obtain that $\alpha_1+\alpha_3+\alpha_4+\alpha_6=0$. Similarly, equality ([\[eq22C\]](#eq22C){reference-type="ref" reference="eq22C"}) implies that $ij(\alpha_1+\alpha_2+\alpha_5+\alpha_6) + j\alpha_3 + i\alpha_4 =0$ for all $i,j\geq0$ with $i+j\geq1$. Equalities ([\[eq_new1\]](#eq_new1){reference-type="ref" reference="eq_new1"}) imply that $\alpha_3=\alpha_4=0$. Applying equality ([\[eq22A\]](#eq22A){reference-type="ref" reference="eq22A"}) we can see that $$\label{eq22g2} \alpha_3=\alpha_4=0, \;\; \alpha_5=\alpha_2 \;\text{ and }\; \alpha_6=\alpha_1.$$ In other words, $f=\alpha_1 [x_1^2,x_2^2] + \alpha_2 (x_1 x_2 x_1 x_2 + x_2 x_1 x_2 x_1)$. By Lemma [Lemma 4](#lemma_decomposition){reference-type="ref" reference="lemma_decomposition"} we can see that $$\label{eq_lin} 0=\mathop{\rm lin}\nolimits_{x_2}^{(1,1)}(f)(c_1,c_1,c_2)=(\alpha_1+\alpha_2) x^2y.$$ Thus $\alpha_1=\alpha_2$ and the proof is completed. ◻ Let us recall that for $i_1,\ldots,i_k,j_1,\ldots,j_k\in\mathbb{Z}$ two *multisets* $\{i_1,\ldots,i_k\}_{\rm m}$ and $\{j_1,\ldots,j_k\}_{\rm m}$ are equal if for every $l\in\mathbb{Z}$ we have $|\{ 1\leq t\leq k \ | \ i_t=l \}| = |\{ 1\leq t\leq k \ | \ j_t=l \}|$. **Lemma 8**. *Assume $\mathcal{A}$ is an associative algebra and $\mathcal{V}\subset\mathcal{A}$ is an $\mathbb{F}$-subspace. Let* 1. *any polynomial identity of $\mathcal{V}$ of multidegree $(3,1)$ be trivial;* 2. 
*any polynomial identity of $\mathcal{V}$ of multidegree $(2,2)$ be equal to $\xi \Phi_{22}$ for some $\xi\in \mathbb{F}$.* *Then every polynomial identity $f$ of $\mathcal{V}$ of multidegree $(2,1,1)$ is equal to $$\alpha x_1 {\rm St}_3(x_1,x_2,x_3) -\beta\, [[x_1,x_2],[x_1,x_3]] + \gamma\, {\rm St}_3(x_1,x_2,x_3)x_1 + \xi h(x_1,x_2,x_3)$$ for some $\alpha,\beta,\gamma,\xi\in\mathbb{F}$ and $h(x_1,x_2,x_3) =$ $$x_1^2 x_3 x_2 + 2 x_3 x_1^2 x_2 + x_3 x_2 x_1^2 - x_1 x_2 x_3 x_1 + 3 x_1 x_3 x_2 x_1 -3 x_1 x_3 x_1 x_2 - 3 x_3 x_1 x_2 x_1.$$* *Proof.* We have that $f(x_1,x_2,x_3)=\sum {\alpha_{ijkl}}x_i x_j x_k x_l$, where the sum ranges over all $1\leq i,j,k,l\leq 3$ with $\{i,j,k,l\}_{\rm m}=\{1,1,2,3\}_{\rm m}$ and $\alpha_{ijkl}\in\mathbb{F}$. For short, we write $\alpha_{1^223}$ for $\alpha_{1123}$, etc. Applying part (a) to $f(x_1,x_1,x_2)=0$ we obtain that $$\begin{array}{rcl} \alpha_{1^223} + \alpha_{21^23} + \alpha_{1213} & = & 0,\\ \alpha_{231^2} + \alpha_{1312} + \alpha_{1321} & = & 0,\\ \alpha_{31^22} + \alpha_{321^2} + \alpha_{3121} & = & 0.\\ \end{array}$$ Similarly, applying part (b) to $f(x_1,x_2,x_2)=0$ we obtain that $$\begin{array}{rcl} \alpha_{1^223} + \alpha_{1^232} & = & \xi, \\ \alpha_{231^2} + \alpha_{321^2} & = & \xi,\\ \alpha_{1213} + \alpha_{1312} & = & -3\xi,\\ \alpha_{2131} + \alpha_{3121} & = & -3\xi,\\ \alpha_{1231} + \alpha_{1321} & = & 2\xi,\\ \alpha_{21^23} + \alpha_{31^22} & = & 2\xi.\\ \end{array}$$ These equations imply the required equality for $f$ for $\alpha= \alpha_{1^223}$, $\beta=\alpha_{21^23}$, $\gamma=\alpha_{231^2}$. ◻ **Proposition 9**. *The following set is an $\mathbb{F}$-basis of the space of all polynomial identities for $\mathsf{A}_1^{(-,1)}$ of multidegree $(2,1,1)$:* 1. *$x_1 {\rm St}_3(x_1,x_2,x_3)$, ${\rm St}_3(x_1,x_2,x_3)x_1$, $\Phi_{211}$, in case $p\neq 2$;* 2. 
*$x_1 {\rm St}_3(x_1,x_2,x_3)$, ${\rm St}_3(x_1,x_2,x_3)x_1$, $\Psi_{211}$, $[[x_1,x_2],[x_1,x_3]]$, in case $p= 2$.* *Proof.* By Proposition [Proposition 2](#theo_PI_min){reference-type="ref" reference="theo_PI_min"} and Lemma [Lemma 6](#lemma_deg22_id){reference-type="ref" reference="lemma_deg22_id"} all elements from the formulation of the proposition are identities. First, we will show that any polynomial identity $f\in\mathbb{F}\langle X\rangle$ for $\mathsf{A}_1^{(-,1)}$ of multidegree $(2,1,1)$ is an $\mathbb{F}$-linear combination of elements from the formulation of the proposition. Since $x_1 {\rm St}_3(x_1,x_2,x_3)$ and ${\rm St}_3(x_1,x_2,x_3)x_1$ are polynomial identities for $\mathsf{A}_1^{(-,1)}$, by Lemma [Lemma 8](#lemma_deg211){reference-type="ref" reference="lemma_deg211"} it is enough to complete the proof for $$f(x_1,x_2,x_3) = -\beta\, [[x_1,x_2],[x_1,x_3]] + \xi h(x_1,x_2,x_3),$$ where $\beta,\xi\in\mathbb{F}$. Assume $p\neq2$. Since $0=f(c_3,c_2,c_1) = 2 (\beta-\xi)x^{6} y$ by Lemma [Lemma 4](#lemma_decomposition){reference-type="ref" reference="lemma_decomposition"}, we obtain that $\xi=\beta$. By straightforward calculations we can see that $$[[x_1,x_2],[x_1,x_3]] - h(x_1,x_2,x_3) = \frac{1}{2}\left(x_1 {\rm St}_3(x_1,x_2,x_3) + {\rm St}_3(x_1,x_2,x_3)x_1 + \Phi_{211}\right).$$ The required is proven. Assume that $p=2$. By straightforward calculations we can see that $h(x_1,x_2,x_3)= x_1 {\rm St}_3(x_1,x_2,x_3) + \Psi_{211}(x_1,x_2,x_3)$. Note that $$\Phi_{211}= x_1 {\rm St}_3(x_1,x_2,x_3) + {\rm St}_3(x_1,x_2,x_3) x_1.$$ The required is proven. To show that elements from the formulation of the proposition are linearly independent in case $p\neq2$, we consider $$\alpha\, x_1 {\rm St}_3(x_1,x_2,x_3) + \beta\, {\rm St}_3(x_1,x_2,x_3)x_1 + \gamma\, \Phi_{211}(x_1,x_2,x_3) = 0$$ for some $\alpha,\beta,\gamma\in\mathbb{F}$. Taking the coefficients of $x_1^2x_2x_3$ and $x_1^2x_3x_2$ we obtain $\alpha+\gamma=-\alpha+\gamma=0$.
Thus $\alpha=\gamma=0$, and then $\beta\, {\rm St}_3(x_1,x_2,x_3)x_1=0$ forces $\beta=0$; the required is proven. Similarly, for $p=2$ we consider $$\alpha\, x_1 {\rm St}_3(x_1,x_2,x_3) + \beta\, {\rm St}_3(x_1,x_2,x_3)x_1 + \gamma\, \Psi_{211} +\delta[[x_1,x_2],[x_1,x_3]] =0$$ for some $\alpha,\beta,\gamma,\delta\in\mathbb{F}$. Taking the coefficients of $x_1^2x_3x_2$, $x_2x_3x_1^2$, $x_1x_3x_2x_1$ we obtain $-\alpha=\beta=-\alpha-\beta+\delta=0$. Thus $\alpha=\beta=\delta=0$, and then $\gamma=0$ as well; the required is proven. ◻ # Multilinear identities for $\mathsf{A}_1^{(-,1)}$ of degree four {#section_mult4} As in Section [3](#section_deg4){reference-type="ref" reference="section_deg4"} we denote $c_i=x^i y$ for $i\geq0$. Consider the following multilinear elements of $\mathbb{F}\langle X\rangle$ of degree $4$: $$\Gamma= -x_ 1 x_2 x_3 x_4 + 2 x_1 x_2 x_4 x_3 + x_1 x_3 x_4 x_2 - 2 x_1 x_4 x_2 x_3$$ $$+ 2 x_2 x_1 x_3 x_4 - 2 x_2 x_1 x_4 x_3 - 2 x_2 x_3 x_1 x_4 + x_2 x_3 x_4 x_1 + x_2 x_4 x_1 x_3$$ $$+ x_3 x_1 x_2 x_4 - 2 x_3 x_1 x_4 x_2 + x_3 x_4 x_1 x_2 + x_4 x_1 x_2 x_3 - x_4 x_2 x_3 x_1,$$ $$\Lambda= -3 x_1 x_2 x_3 x_4 + 3 x_1 x_2 x_4 x_3 + 2 x_1 x_3 x_2 x_4 - 2 x_1 x_4 x_2 x_3$$ $$+ 3 x_2 x_1 x_3 x_4 - 3 x_2 x_1 x_4 x_3 - 2 x_2 x_3 x_1 x_4 + 2 x_2 x_4 x_1 x_3$$ $$- x_3 x_1 x_4 x_2 + x_3 x_2 x_4 x_1 + x_4 x_1 x_3 x_2 - x_4 x_2 x_3 x_1,$$ $$\Delta=x_2x_1x_3x_4 + x_2x_4x_1x_3 + x_3x_1x_2x_4 + x_3x_4x_1x_2 + x_4x_1x_2x_3 + x_4x_1x_3x_2.$$ A monomial from $\mathbb{F}\langle X\rangle$ of multidegree $(1,1,1,1)$ is called *reduced* if it does not belong to the following list: $$\label{eq_non_reduced} x_1x_4x_3x_2,\; x_2x_4x_3x_1,\; x_3x_4x_2x_1, \; x_4x_3x_2x_1,\; x_4x_3x_1x_2,\; x_4x_2x_1x_3,\; x_3x_2x_1x_4.$$ An element from $\mathbb{F}\langle X\rangle$ of multidegree $(1,1,1,1)$ is called *reduced* if it is a linear combination of reduced monomials. **Lemma 10**. *For every homogeneous $f\in\mathbb{F}\langle X\rangle$ of multidegree $(1,1,1,1)$ there exist multilinear $f_1,f_2\in \mathbb{F}\langle X\rangle$ of degree 4 such that $f=f_1+f_2$,* 1.
*$f_1$ is reduced;* 2. *$f_2$ is a linear combination of $x_i {\rm St}_3(x_j,x_k,x_l)$, ${\rm St}_3(x_i,x_j,x_k)x_l$ for $\{i,j,k,l\}=\{1,2,3,4\}$.* *Proof.* Consider the usual lexicographical order on the set of all monomials from $\mathbb{F}\langle X\rangle$ of multidegree $(1,1,1,1)$. Denote by $L$ the subspace of $\mathbb{F}\langle X\rangle$ generated by $x_i {\rm St}_3(x_j,x_k,x_l)$, ${\rm St}_3(x_i,x_j,x_k)x_l$ for $\{i,j,k,l\}=\{1,2,3,4\}$. Given a monomial $w\in\mathbb{F}\langle X\rangle$ of multidegree $(1,1,1,1)$, we write $w\equiv 0$, if $w-h\in L$ for some $h\in\mathbb{F}\langle X\rangle$ such that all monomials of $h$ are less than $w$. Since $x_1{\rm St}_3(x_2,x_3,x_4)\in L$, we obtain that $x_1 x_4 x_3 x_2\equiv0$. Similarly, considering $x_i{\rm St}_3(x_j,x_k,x_l)\in L$ for $2\leq i\leq 4$ with $\{i,j,k,l\}=\{1,2,3,4\}$, we obtain that $$x_2x_4x_3x_1\equiv0,\;\; x_3x_4x_2x_1\equiv0, \;\; x_4x_3x_2x_1\equiv0.$$ Moreover, since ${\rm St}_3(x_1,x_3,x_4)x_2\in L$, ${\rm St}_3(x_1,x_2,x_4)x_3\in L$, ${\rm St}_3(x_1,x_2,x_3)x_4\in L$, respectively, we can see that $$x_4x_3x_1x_2\equiv 0,\;\; x_4x_2x_1x_3\equiv0,\;\; x_3x_2x_1x_4\equiv0,$$ respectively. Consecutively applying the obtained equivalences to $f$, it is easy to see that the claim holds. ◻ **Lemma 11**. *The elements $\Gamma$, $\Delta$ are polynomial identities for $\mathsf{A}_1^{(-,1)}$.* *Proof.* Using Lemma [Lemma 4](#lemma_decomposition){reference-type="ref" reference="lemma_decomposition"}, straightforward calculations (by means of a computer program) show that $\Gamma$, $\Delta$ are equal to zero over the set $\{c_i\, |\, i\geq0\}$, which is a basis of $\mathsf{A}_1^{(-,1)}$. Lemma [Lemma 3](#lemma_id){reference-type="ref" reference="lemma_id"} concludes the proof. ◻ The following remark can be verified by straightforward calculations. **Remark 12**. The following equalities hold in $\mathbb{F}\langle X\rangle$: 1.
$4\, \Gamma- 2\,\Lambda= \Phi_{\rm lin} + x_1 {\rm St}_3(x_2,x_3,x_4) + x_2{\rm St}_3(x_1,x_3,x_4) + 2\, x_3 {\rm St}_3(x_1,x_2,x_4) +{\rm St}_3(x_2,x_3,x_4)x_1 + {\rm St}_3(x_1,x_3,x_4)x_2 + 2\, {\rm St}_3(x_1,x_2,x_4)x_3$. 2. $x_1 {\rm St}_3(x_2,x_3,x_4) - x_2 {\rm St}_3(x_1,x_3,x_4) + x_3 {\rm St}_3(x_1,x_2,x_4) - x_4 {\rm St}_3(x_1,x_2,x_3) + {\rm St}_3(x_2,x_3,x_4)x_1 - {\rm St}_3(x_1,x_3,x_4)x_2 + {\rm St}_3(x_1,x_2,x_4)x_3 - {\rm St}_3(x_1,x_2,x_3)x_4 = 0$. **Proposition 13**. *The following set is an $\mathbb{F}$-basis of the space of all polynomial identities for $\mathsf{A}_1^{(-,1)}$ of multidegree $(1,1,1,1)$ in the case $p\neq 2$: $$\Gamma,\; \Phi_{\rm lin}, \; x_1{\rm St}_3(x_2,x_3,x_4), \; x_2{\rm St}_3(x_1,x_3,x_4), \; x_3{\rm St}_3(x_1,x_2,x_4), \; x_4{\rm St}_3(x_1,x_2,x_3),$$ $${\rm St}_3(x_2,x_3,x_4)x_1,\; {\rm St}_3(x_1,x_3,x_4)x_2,\; {\rm St}_3(x_1,x_2,x_4)x_3.$$* *Proof.* By Proposition [Proposition 2](#theo_PI_min){reference-type="ref" reference="theo_PI_min"} and Lemmas [Lemma 6](#lemma_deg22_id){reference-type="ref" reference="lemma_deg22_id"} and [Lemma 11](#lemma_deg1111_id){reference-type="ref" reference="lemma_deg1111_id"}, all elements from the formulation of the proposition are identities. First, we show that any polynomial identity $f\in\mathbb{F}\langle X\rangle$ for $\mathsf{A}_1^{(-,1)}$ of multidegree $(1,1,1,1)$ is an $\mathbb{F}$-linear combination of elements from the formulation of the proposition. By Lemma [Lemma 10](#lemma_ireduced){reference-type="ref" reference="lemma_ireduced"} we can assume that $f$ is reduced, i.e., $f=\sum \alpha_w w$, where $w$ is a reduced monomial of multidegree $(1,1,1,1)$ and $\alpha_w\in\mathbb{F}$.
Applying Lemma [Lemma 4](#lemma_decomposition){reference-type="ref" reference="lemma_decomposition"}, we consider the equations $f(c_i,c_j,c_k,c_l)=0$ for all $i,j,k,l\in\{0,1\}$ together with the equations $$f(c_2,c_1,c_0,c_0)=0,\; f(c_2,c_0,c_1,c_0)=0,\; f(c_2,c_0,c_0,c_1)=0,$$ in the variables $\{\alpha_w\}\backslash\{\alpha_{x_3x_1x_2x_4},\;\alpha_{x_4x_1x_3x_2}\}$; solving these, we obtain that $$f=2\, \alpha_{x_3x_1x_2x_4} \Gamma+ 2\, \alpha_{x_4x_1x_3x_2} \Lambda.$$ Parts 1 and 2 of Remark [Remark 12](#remark_equality){reference-type="ref" reference="remark_equality"} imply the required claim. Linear independence follows from straightforward calculations. ◻ **Proposition 14**. *Assume $p=2$. Then any polynomial identity for $\mathsf{A}_1^{(-,1)}$ of multidegree $(1,1,1,1)$ is a linear combination of the following polynomial identities: $$\Gamma,\; \Psi,\;\Delta, \; x_1{\rm St}_3(x_2,x_3,x_4), \; x_2{\rm St}_3(x_1,x_3,x_4), \; x_3{\rm St}_3(x_1,x_2,x_4), \; x_4{\rm St}_3(x_1,x_2,x_3),$$ $${\rm St}_3(x_2,x_3,x_4)x_1,\; {\rm St}_3(x_1,x_3,x_4)x_2,\; {\rm St}_3(x_1,x_2,x_4)x_3.$$* *Proof.* The proof is similar to the proof of Proposition [Proposition 13](#prop_id1111){reference-type="ref" reference="prop_id1111"}. ◻ G. Benkart, S.A. Lopes, M. Ondrus, *A parametric family of subalgebras of the Weyl algebra I. Structure and automorphisms*, Trans. Amer. Math. Soc. **367** (2015), no. 3, 1993--2021. A.S. Dzhumadil'daev, *$N$-commutators*, Comment. Math. Helv. **79** (2004), no. 3, 516--553. A.S. Dzhumadil'daev, *$2p$-commutator on differential operators of order $p$*, Lett. Math. Phys. **104** (2014), no. 7, 849--869. A.S. Dzhumadil'daev, D. Yeliussizov, *Path decompositions of digraphs and their applications to Weyl algebra*, Adv. in Appl. Math. **67** (2015), 36--54. A. Lopatin, C.A. Rodriguez Palma, *Identities for a parametric Weyl algebra over a ring*, J. Algebra **595** (2022), 279--296. A. Lopatin, C.A.
Rodriguez Palma, *Identities for subspaces of a parametric Weyl algebra*, Linear Algebra and its Applications **654** (2022), 250--266. S.P. Mishchenko, *Solvable subvarieties of a variety generated by a Witt algebra*, Math. USSR Sb. **64** (1989), no. 2, 415--426. Yu. Razmyslov, *Identities of algebras and their representations*, Transl. Math. Monogr., vol. 138, Amer. Math. Soc., Providence, RI, 1994. H. Strade, R. Farnsteiner, *Modular Lie algebras and their representations*, Monographs and Textbooks in Pure and Applied Mathematics, **116**, Marcel Dekker, Inc., New York, 1988. x+301 pp. Y. Tsuchimoto, *Preliminaries on Dixmier conjecture*, Mem. Fac. Sci. Kochi Univ. Ser. A Math. **24** (2003), 43--59. [^1]: The work was supported by RSF 22-11-00081
--- abstract: | This paper uses reconstruction algebras to construct simultaneous resolution of determinantal surfaces. The main new difference from the classical case is that, in addition to the quiver of the reconstruction algebra, certain noncommutative relations, namely those of the canonical algebra of Ringel, are required. All the relations of the reconstruction algebra except the canonical relation are then deformed, and these deformed relations together with variation of the GIT quotient achieve the simultaneous resolution. address: Brian Makonzi, Department of Mathematics, Makerere University, P.O. Box 7062 Kampala, Uganda & The Mathematics and Statistics Building, University of Glasgow, University Place, Glasgow, G12 8QQ, UK author: - Brian Makonzi title: Deformations and Simultaneous Resolutions of Determinantal Surfaces --- # introduction Rational surface singularities play a fundamental role in various areas of mathematics, such as algebraic geometry and singularity theory. In this paper, we specifically focus on constructing simultaneous resolution using noncommutative techniques, in the case of determinantal singularities. These are both non-Gorenstein and non-toric. The noncommutative framework we employ enables us to explore a wider spectrum of rational surface singularities, facilitating a deeper understanding of their properties and paving the way for future investigations. ## Motivation and Background Grothendieck and Brieskorn [@Resolution] [@defspace] made significant contributions to the construction of the deformation space for Kleinian singularities $\mathbb{C}^2/H$ with $H\leq\mathrm{SL}(2,\mathbb{C}).$ They established a connection between these singularities and the Weyl group $W$ of the corresponding simple simply--connected complex Lie group.
Brieskorn [@defspace] successfully constructed the versal deformation $D \to \mathfrak{h}_\mathbb{C}/W$ for a rational double point, and after base change via the action of the Weyl group as in the diagram below, the resulting space $Art$ resolves simultaneously [@defspace]. $$\begin{tikzpicture} \node (A) at (0,0) {$Art$}; \node (B) at (2,0) {$D$}; \node (C) at (-1,1) {$X$}; \node (a) at (0,-1) {$\mathfrak{h}_\mathbb{C}$}; \node (b) at (2,-1) {$\mathfrak{h}_\mathbb{C}/W$}; \draw[->] (A)--(B); \draw[->] (A)--(a); \draw[->] (a)--(b); \draw[->] (B)--(b); \draw[->] (C)--(A); \draw[densely dotted,->] (C)--(a); \end{tikzpicture}$$ Kronheimer [@kronheimer] and Cassens-Slodowy [@OnKleinianSing §3] introduced the use of the McKay quiver to construct the semiuniversal deformation of Kleinian singularities, along with their simultaneous resolutions, belonging to types $A_n,~D_n,~E_6,~E_7$ and $E_8$. Building upon this, Crawley-Boevey and Holland [@crawley1998noncommutative] later provided a reinterpretation of this approach in the context of the deformed preprojective algebra. However, for non-Gorenstein surface quotient singularities, namely those $\mathbb{C}^2/H$ for small finite groups $H\leq\mathrm{GL}(2,\mathbb{C})$ that are not inside $\mathrm{SL}(2,\mathbb{C})$, the situation poses a more formidable challenge. Artin [@ArtinComp] revealed that within the deformation space of non-Gorenstein singularities, there exists a distinct component (the Artin component), which is irreducible and exhibits the intriguing property of admitting simultaneous resolution, again after a finite base change by some appropriate Weyl group $W$.
$$\begin{tikzpicture} \node (A) at (0,0) {$Art$}; \node (B) at (2,0) {$D$}; \node (C) at (-1,1) {$X$}; \node (a) at (0,-1) {$\mathrm{H}^1_\mathbb{C}$}; \node (b) at (2,-1) {$\mathrm{H}^1_\mathbb{C}/W$}; \draw[->] (A)--(B); \draw[->] (A)--(a); \draw[->] (a)--(b); \draw[->] (B)--(b); \draw[->] (C)--(A); \draw[densely dotted,->] (C)--(a); \end{tikzpicture}$$ Riemenschneider [@deformRational] computed the Artin component $Art$ for cyclic quotient singularities, then later in [@RieCyclic §5] he used the McKay quiver and special representations as described by Wunram [@Wunram] to give an alternative description. However, simultaneous resolution is again not obtained from the McKay quiver perspective. Our previous work [@Makonzi] solves this by using reconstruction algebras to construct simultaneous resolution for cyclic groups.\ Building upon this success, this paper extends these techniques to determinantal surface singularities, thereby broadening the scope of known examples with simultaneous resolutions. ## Main Result {#main result} This paper considers a class of rational surface singularities called determinantal surface singularities.
Consider $R,$ the determinantal singularity given as the quotient of $\mathbb{C}[v, w_1, w_2, w_3]$ by the $2 \times 2$ minors of the matrix $$\left( \begin{array}{ccccc} {w}_2&{w}_3 &{v^{p_2}}\\ {v^{p_1}}&{{w}_3+{v^{p_3}}}&{w}_1 \end{array} \right).$$ The dual graph of the minimal resolution $X \rightarrow \text{Spec}~ R$ is the star-shaped graph in ([\[s Veron dual graph\]](#s Veron dual graph){reference-type="ref" reference="s Veron dual graph"}).\ The quiver of the reconstruction algebra corresponding to Spec$R$ is recalled in §[2](#Preliminaries){reference-type="ref" reference="Preliminaries"}, and will be written $Q.$ For dimension vector $\updelta=(1,\hdots,1),$ consider $\EuScript{R}\colonequals \mathbb{C}[\mathop{\mathrm{\mathrm{Rep}}}(\mathbb{C}Q,\updelta)]/I$ and crucially $I$ is the ideal generated by Ringel's [@RingelCanAlgebras] canonical relation (see ([\[ReconAlgebraQuiver\]](#ReconAlgebraQuiver){reference-type="ref" reference="ReconAlgebraQuiver"})). This carries a natural action of $G \colonequals \textstyle \prod_{q \in Q_0} \mathbb{C}^{\ast}$ where $Q_0$ denotes the set of vertices of $Q.$ As shown in §[4.2](#ReconAlg){reference-type="ref" reference="ReconAlg"}, $\EuScript{R}^G$ is generated by cycles. These generate a $\mathbb{C}$--algebra $\mathbb{C}[\mathsf{w} , \mathsf{v}],$ and they further satisfy determinantal relations (recalled in §[5.1](#QDetform){reference-type="ref" reference="QDetform"}). Simultaneous resolution is then achieved by introducing the deformed reconstruction algebra (see [3.3](#Deformed){reference-type="ref" reference="Deformed"}), which generalises the work of Crawley-Boevey--Holland [@crawley1998noncommutative] on deformed preprojective algebras. In §[4.3](#SimRes){reference-type="ref" reference="SimRes"}, we construct a map $\uppi\colon\EuScript{R}^G \rightarrow \Delta,$ where $\Delta$ is an affine space defined in Notation [Notation 7](#Notationrel){reference-type="ref" reference="Notationrel"}.
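The three $2\times 2$ minors of the matrix above can be listed directly. As a quick sanity check, the following sympy computation prints the defining equations of $\text{Spec}~R$ for the hypothetical choice $p_1=p_2=p_3=2$ (these values are not from the paper and are chosen only for concreteness; for general $p_i$ the same minors appear with $v^{p_i}$ in place of $v^2$):

```python
import sympy as sp

v, w1, w2, w3 = sp.symbols('v w1 w2 w3')
# hypothetical arm parameters, for illustration only
p1 = p2 = p3 = 2

# the 2 x 3 matrix whose 2 x 2 minors cut out Spec R
M = sp.Matrix([[w2, w3, v**p2],
               [v**p1, w3 + v**p3, w1]])

# the three 2 x 2 minors, one for each pair of columns
minors = [M.extract([0, 1], [i, j]).det()
          for (i, j) in [(0, 1), (0, 2), (1, 2)]]
for m in minors:
    print(sp.expand(m))
```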
The fibre above the origin is the determinantal singularity Spec$R$ above (see Remark [Remark 17](#fibreabovegamma){reference-type="ref" reference="fibreabovegamma"} later). The following is our main result, where $\upvartheta_0$ is a *particular* choice of stability condition explained in §[3.2](#ModuliofReconAlg){reference-type="ref" reference="ModuliofReconAlg"}. **Theorem 1** ([Theorem 18](#thm: main){reference-type="ref" reference="thm: main"}). *The diagram $$\begin{tikzpicture} \node (A) at (0,0) {$\mathop{\mathrm{\mathrm{Rep}}}(\mathbb{C}Q/I,~\updelta)~ /\!\!\!\!/_{\upvartheta_0} \mathrm{GL}$}; \node (B) at (4,0) {$\mathcal{\EuScript{R}}^G$}; \node (b) at (4,-2) {$\Delta$}; \draw[->] (A)-- node[above] {} (B); \draw[densely dotted,->] (A)-- node[below] {$\upphi$} (b); \draw[->] (B)-- node[right] {$\uppi$} (b); \end{tikzpicture}$$ is a simultaneous resolution of singularities in the sense that the morphism $\upphi$ is smooth, and $\uppi$ is flat.* The smoothness of the fibres is achieved using moduli spaces of the deformed reconstruction algebra $\Lambda_{\boldsymbol{\upgamma}}.$ These are introduced in §[3.3](#Deformed){reference-type="ref" reference="Deformed"}, and may be of independent interest. Note that this is significantly more difficult than for cyclic quotients, the main difficulty being to show that $\EuScript{M}_{\upvartheta_0} \neq \emptyset$ (see Remark [Remark 9](#gammainDelta){reference-type="ref" reference="gammainDelta"}). This paper is organised as follows. Section [2](#Preliminaries){reference-type="ref" reference="Preliminaries"} recalls the reconstruction algebra of star-shaped graphs. Section [3](#SecSimRes){reference-type="ref" reference="SecSimRes"} introduces the deformed reconstruction algebra, and uses this in section [4](#SimResSection){reference-type="ref" reference="SimResSection"} to achieve simultaneous resolution.
Section [5](#repvar){reference-type="ref" reference="repvar"} gives a conjectural presentation of $\mathcal{\EuScript{R}}^G,$ although this is not required for simultaneous resolution.\ ## Conventions {#conventions .unnumbered} Throughout we work over the complex numbers $\mathbb{C}.$ If $a$ and $b$ are arrows in a quiver, $ab$ denotes $a$ followed by $b,$ $t(a)$ is the tail of $a$ and $h(a)$ is the head of $a.$ ## Acknowledgements {#acknowledgements .unnumbered} The author would like to thank the GRAID program for sponsoring his PhD studies, and the ERC Consolidator Grant 101001227 (MMiMMa) for funding his visit to Glasgow. Furthermore, he would like to extend special thanks to his supervisors Michael Wemyss and David Ssevviiri for their exceptional guidance and encouragement, the Department of Mathematics, Makerere University and the School of Mathematics & Statistics, University of Glasgow for providing a welcoming and supportive environment. The hospitality and conducive atmosphere have greatly facilitated the author's studies and research. # preliminaries {#Preliminaries} This section recalls the reconstruction algebra of star-shaped graphs following [@IyamaWemyss]. Consider the following star-shaped dual graph, where there are $n \geq 3$ arms.
$$\begin{array}{c} \begin{tikzpicture}[xscale=1,yscale=0.8] \node (0) at (0,0) [vertex] {}; \node (A1) at (-3,1) [vertex]{}; \node (A2) at (-3,2) [vertex] {}; \node (A3) at (-3,3) [vertex] {}; \node (A4) at (-3,4) [vertex] {}; \node (B1) at (-1.5,1) [vertex] {}; \node (B2) at (-1.5,2) [vertex] {}; \node (B3) at (-1.5,3) [vertex] {}; \node (B4) at (-1.5,4) [vertex] {}; \node (C1) at (0,1) [vertex] {}; \node (C2) at (0,2) [vertex] {}; \node (C3) at (0,3) [vertex] {}; \node (C4) at (0,4) [vertex] {}; \node (n1) at (2,1) [vertex] {}; \node (n2) at (2,2) [vertex] {}; \node (n3) at (2,3) [vertex] {}; \node (n4) at (2,4) [vertex] {}; \node at (-3,2.6) {$\vdots$}; \node at (-1.5,2.6) {$\vdots$}; \node at (0,2.6) {$\vdots$}; \node at (2,2.6) {$\vdots$}; \node at (1,3.5) {$\hdots$}; \node at (1,1.5) {$\hdots$}; \node (T) at (0,4.25) {}; %\node at (0,-0.2) {$\scriptstyle -\upbeta$}; \node at (-2.6,1) {$\scriptstyle -2$}; \node at (-2.6,2) {$\scriptstyle -2$}; \node at (-2.6,3) {$\scriptstyle -2$}; \node at (-2.6,4) {$\scriptstyle -2$}; \node at (-1.1,1) {$\scriptstyle -2$}; \node at (-1.1,2) {$\scriptstyle -2$}; \node at (-1.1,3) {$\scriptstyle -2$}; \node at (-1.1,4) {$\scriptstyle -2$}; \node at (0.4,1) {$\scriptstyle -2$}; \node at (0.4,2) {$\scriptstyle -2$}; \node at (0.4,3) {$\scriptstyle -2$}; \node at (0.4,4) {$\scriptstyle -2$}; \node at (2.45,1) {$\scriptstyle -2$}; \node at (2.45,2) {$\scriptstyle -2$}; \node at (2.45,3) {$\scriptstyle -2$}; \node at (2.45,4) {$\scriptstyle -2$}; \node at (0.4,-0.1) {$\scriptstyle -n$}; \draw (A1) -- (0); \draw (B1) -- (0); \draw (C1) -- (0); \draw (n1) -- (0); \draw (A2) -- (A1); \draw (B2) -- (B1); \draw (C2) -- (C1); \draw (n2) -- (n1); \draw (A4) -- (A3); \draw (B4) -- (B3); \draw (C4) -- (C3); \draw (n4) -- (n3); \end{tikzpicture} \end{array}$$ Below, we restrict to the case $n = 3.$ Note that all techniques below generalise, at the cost of additional significant notation. 
Thus our dual graph is $$\label{s Veron dual graph} \begin{array}{c} \begin{tikzpicture}[xscale=0.9,yscale=0.9] \node (0) at (0,0) [vertex] {}; %\node (A1) at (-3,1) [vertex] {}; %\node (A2) at (-3,2) [vertex] {}; %\node (A3) at (-3,3) [vertex] {}; %\node (A4) at (-3,4) [vertex] {}; \node (B1) at (-2,1) [vertex] {}; \node (B2) at (-2,2) [vertex] {}; \node (B3) at (-2,3) [vertex] {}; \node (B4) at (-2,4) [vertex] {}; \node (C1) at (0,1) [vertex] {}; \node (C2) at (0,2) [vertex] {}; \node (C3) at (0,3) [vertex] {}; \node (C4) at (0,4) [vertex] {}; \node (n1) at (2,1) [vertex] {}; \node (n2) at (2,2) [vertex] {}; \node (n3) at (2,3) [vertex] {}; \node (n4) at (2,4) [vertex] {}; %\node at (-3,2.6) {$\vdots$}; \node at (-2,2.6) {$\vdots$}; \node at (0,2.6) {$\vdots$}; \node at (2,2.6) {$\vdots$}; %\node at (1,3.5) {$\hdots$}; %\node at (1,1.5) {$\hdots$}; \node (T) at (0,4.25) {}; %\node at (0,-0.2) {$\scriptstyle -n-a$}; %\node at (-2.7,1) {$\scriptstyle -2$}; %\node at (-2.7,2) {$\scriptstyle -2$}; %\node at (-2.7,3) {$\scriptstyle -2$}; %\node at (-2.7,4) {$\scriptstyle -2$}; \node at (-1.7,1) {$\scriptstyle -2$}; \node at (-1.7,2) {$\scriptstyle -2$}; \node at (-1.7,3) {$\scriptstyle -2$}; \node at (-1.7,4) {$\scriptstyle -2$}; \node at (0.3,1) {$\scriptstyle -2$}; \node at (0.3,2) {$\scriptstyle -2$}; \node at (0.3,3) {$\scriptstyle -2$}; \node at (0.3,4) {$\scriptstyle -2$}; \node at (2.3,1) {$\scriptstyle -2$}; \node at (2.3,2) {$\scriptstyle -2$}; \node at (2.3,3) {$\scriptstyle -2$}; \node at (2.3,4) {$\scriptstyle -2$}; \node at (0.4,-0.1) {$\scriptstyle -3$}; %\draw (A1) -- (0); \draw (B1) -- (0); \draw (C1) -- (0); \draw (n1) -- (0); %\draw (A2) -- (A1); \draw (B2) -- (B1); \draw (C2) -- (C1); \draw (n2) -- (n1); %\draw (A4) -- (A3); \draw (B4) -- (B3); \draw (C4) -- (C3); \draw (n4) -- (n3); \draw [decorate,decoration={brace,amplitude=5pt},xshift=-4pt,yshift=0pt] (2,1) -- (2,4) node [black,midway,xshift=-0.55cm] {$\scriptstyle p_3-1$}; \draw 
[decorate,decoration={brace,amplitude=5pt},xshift=-4pt,yshift=0pt] (0,1) -- (0,4) node [black,midway,xshift=-0.55cm] {$\scriptstyle p_2-1$}; \draw [decorate,decoration={brace,amplitude=5pt},xshift=-4pt,yshift=0pt] (-2,1) -- (-2,4) node [black,midway,xshift=-0.55cm] {$\scriptstyle p_1-1$}; \end{tikzpicture} \end{array}$$ where there are $3$ arms, and the number of vertices on arm $i$ is $p_i-1$. Contracting these curves gives the singularity Spec$R$ in §[1.2](#main result){reference-type="ref" reference="main result"}. To ([\[s Veron dual graph\]](#s Veron dual graph){reference-type="ref" reference="s Veron dual graph"}) we associate the extended quiver, and the extended vertex is called the $0^{th}$ vertex. The resulting quiver is as shown below $$\begin{array}{cc} Q^{\prime} \colonequals & \begin{array}{c} \\ \begin{tikzpicture}[xscale=0.9,yscale=0.9,bend angle=40, looseness=1] \node (0) at (0,0) [vertex] {}; % \node (A1) at (-3,1) [vertex] {}; % \node (A2) at (-3,2) [vertex] {}; % \node (A3) at (-3,3) [vertex] {}; % \node (A4) at (-3,4) [vertex] {}; \node (B1) at (-2,1) [vertex] {}; \node (B2) at (-2,2) [vertex] {}; \node (B3) at (-2,3) [vertex] {}; \node (B4) at (-2,4) [vertex] {}; \node (C1) at (0,1) [vertex] {}; \node (C2) at (0,2) [vertex] {}; \node (C3) at (0,3) [vertex] {}; \node (C4) at (0,4) [vertex] {}; \node (n1) at (2,1) [vertex] {}; \node (n2) at (2,2) [vertex] {}; \node (n3) at (2,3) [vertex] {}; \node (n4) at (2,4) [vertex] {}; % \node at (-3,2.6) {$\vdots$}; \node at (-2,2.6) {$\vdots$}; \node at (0,2.6) {$\vdots$}; \node at (2,2.6) {$\vdots$}; % \node at (1.2,2.5) {$\hdots$}; \node (T) at (0,5) [cvertex] {}; % \draw [->] (A1)+(-30:4.5pt) -- node[gap]{$\scriptstyle d_{1,p_1}$} ($(0) + (190:4.5pt)$); \draw [->] (B1) --node[above,pos=0.5]{$\scriptstyle d_{1p_1}$} (0); \draw [->] (C1) --node[right,pos=0.2]{$\scriptstyle d_{2p_2}$} (0); \draw [->] (n1) --node[below,pos=0.2]{$\scriptstyle d_{3p_3}$} (0); % \draw [->] (A2) 
--node[right,pos=0.5]{$\scriptstyle d_{1,p_1-1}$} (A1); \draw [->] (B2) --node[right,pos=0.5]{$\scriptstyle d_{1p_1-1}$}(B1); \draw [->] (C2) --node[right,pos=0.5]{$\scriptstyle d_{2p_2-1}$}(C1); \draw [->] (n2) --node[right,pos=0.5]{$\scriptstyle d_{3p_3-1}$} (n1); % \draw [->] (A4) --node[right,pos=0.5]{$\scriptstyle d_{1,2}$} (A3); \draw [->] (B4) --node[right,pos=0.5]{$\scriptstyle d_{12}$}(B3); \draw [->] (C4) --node[right,pos=0.5]{$\scriptstyle d_{22}$}(C3); \draw [->] (n4) -- node[right,pos=0.5]{$\scriptstyle d_{32}$} (n3); % \draw [->] (T)+(-190:4.5pt) -- node[gap,pos=0.5]{$\scriptstyle d_{1,1}$}($(A4) + (30:4.5pt)$); \draw [->] (T) -- node[below,pos=0.4]{$\scriptstyle d_{11}$}(B4); \draw [->] (T) -- node[right,pos=0.6]{$\scriptstyle d_{21}$}(C4); \draw [->] (T) -- node[gap,pos=0.5]{$\scriptstyle d_{31}$}(n4); %back=red arrows % \draw [bend right, bend angle=10, looseness=0.5, <-, red] (A1)+(-50:4.5pt) to node[gap,pos=0.5]{$\scriptstyle u_{1,p_1}$}($(0) + (200:4.5pt)$); %\draw [bend right, bend angle=10, looseness=0.5, <-, red] (B1)+(-50:4.5pt) to node[left,pos=0.2]{$\scriptstyle u_{1,p_1}$}($(0) + (160:4.5pt)$); %\draw [bend right, <-, red](C1) to node[gap,pos=0.5]{$\hspace{-0.2em}\scriptstyle u_{2,p_2}$} (0); %\draw [bend right, bend angle=10, looseness=0.5, <-, red](n1) to node[gap,pos=0.5]{$\scriptstyle u_{3,p_3}$} (0); % \draw [bend right, <-, red] (A2) to node[left,pos=0.5]{$\scriptstyle u_{1,p_1-1}$}(A1); %\draw [bend right, <-, red] (B2) to node[gap,pos=0.3]{$\scriptstyle u_{1,p_1-1}$}(B1); %\draw [bend right, <-, red] (C2) to node[gap,pos=0.3]{$\scriptstyle u_{2,p_2-1}$}(C1); %\draw [bend right, <-, red] (n2) to node[left,pos=0.5]{$\scriptstyle u_{3,p_3-1}$}(n1); % \draw [bend right, <-, red] (A4) to node[left,pos=0.5]{$\scriptstyle u_{1,2}$}(A3); %\draw [bend right, <-, red] (B4) to node[gap,pos=0.5]{$\scriptstyle u_{1,2}$}(B3); %\draw [bend right, <-, red] (C4) to node[gap,pos=0.5]{$\scriptstyle u_{2,2}$}(C3); %\draw [bend right, <-, red] (n4) to 
node[left,pos=0.5]{$\scriptstyle u_{3,2}$}(n3); % \draw [bend right, bend angle=10, looseness=0.5, <-, red] (T)+(-200:4.5pt) to node[gap,pos=0.5]{$\scriptstyle u_{1,1}$}($(A4) + (50:4.5pt)$); %\draw [bend right, bend angle=10, looseness=0.5, <-, red] (T)+(-155:4.5pt) to node[gap,pos=0.6]{$\scriptstyle u_{1,1}$}($(B4) + (60:4.5pt)$); %\draw [bend right, <-, red] (T) to node[gap,pos=0.5]{$\scriptstyle u_{2,1}$}(C4); %\draw [bend right, bend angle=10, looseness=0.5, <-, red] (T) to node[gap,pos=0.5]{$\scriptstyle u_{3,1}$}(n4); %%%braces %\draw [decorate,decoration={brace,amplitude=5pt,mirror},xshift=4pt,yshift=0pt] %(2,1) -- (2,4) node [black,midway,xshift=0.55cm] %{$\scriptstyle m_n$}; %\draw [decorate,decoration={brace,amplitude=5pt,mirror},xshift=4pt,yshift=0pt] %(0,1) -- (0,4) node [black,midway,xshift=0.55cm] %{$\scriptstyle m_3$}; %\draw [decorate,decoration={brace,amplitude=5pt,mirror},xshift=4pt,yshift=0pt] %(-1.5,1) -- (-1.5,4) node [black,midway,xshift=0.55cm] %{$\scriptstyle m_2$}; %\draw [decorate,decoration={brace,amplitude=5pt,mirror},xshift=4pt,yshift=0pt] %(-3,1) -- (-3,4) node [black,midway,xshift=0.55cm] %{$\scriptstyle m_1$}; \end{tikzpicture} \end{array} \end{array}$$ The notation $d_{12}$ is read 'the second downward arrow along arm 1'.\ From this, Ringel [@RingelCanAlgebras] introduces the canonical algebra, as the path algebra of the quiver $Q^{\prime}$ subject to the relation $$I = \left<d_{11}\hdots d_{1p_1} - d_{21}\hdots d_{2p_2} + d_{31}\hdots d_{3p_3}\right>.$$ The double quiver of the above is written as follows $$\label{ReconAlgebraQuiver} \begin{array}{cc} Q \colonequals & \begin{array}{c} \\ \begin{tikzpicture}[xscale=1.3,yscale=1.2,bend angle=40, looseness=1] \node (0) at (0,0) [vertex] {}; % \node (A1) at (-3,1) [vertex] {}; % \node (A2) at (-3,2) [vertex] {}; % \node (A3) at (-3,3) [vertex] {}; % \node (A4) at (-3,4) [vertex] {}; \node (B1) at (-2,1) [vertex] {}; \node (B2) at (-2,2) [vertex] {}; \node (B3) at (-2,3) [vertex] {}; 
\node (B4) at (-2,4) [vertex] {}; \node (C1) at (0,1) [vertex] {}; \node (C2) at (0,2) [vertex] {}; \node (C3) at (0,3) [vertex] {}; \node (C4) at (0,4) [vertex] {}; \node (n1) at (2,1) [vertex] {}; \node (n2) at (2,2) [vertex] {}; \node (n3) at (2,3) [vertex] {}; \node (n4) at (2,4) [vertex] {}; % \node at (-3,2.6) {$\vdots$}; \node at (-2,2.6) {$\vdots$}; \node at (0,2.6) {$\vdots$}; \node at (2,2.6) {$\vdots$}; % \node at (1.2,2.5) {$\hdots$}; \node (T) at (0,5) [cvertex] {}; % \draw [->] (A1)+(-30:4.5pt) -- node[gap]{$\scriptstyle d_{1,p_1}$} ($(0) + (190:4.5pt)$); \draw [->] (B1) --node[above,pos=0.5]{$\scriptstyle d_{1p_1}$} (0); \draw [->] (C1) --node[right,pos=0.2]{$\scriptstyle d_{2p_2}$} (0); \draw [->] (n1) --node[gap,pos=0.6]{$\scriptstyle d_{3p_3}$} (0); % \draw [->] (A2) --node[right,pos=0.5]{$\scriptstyle d_{1,p_1-1}$} (A1); \draw [->] (B2) --node[right,pos=0.5]{$\scriptstyle d_{1p_1-1}$}(B1); \draw [->] (C2) --node[right,pos=0.5]{$\scriptstyle d_{2p_2-1}$}(C1); \draw [->] (n2) --node[right,pos=0.5]{$\scriptstyle d_{3p_3-1}$} (n1); % \draw [->] (A4) --node[right,pos=0.5]{$\scriptstyle d_{1,2}$} (A3); \draw [->] (B4) --node[right,pos=0.5]{$\scriptstyle d_{12}$}(B3); \draw [->] (C4) --node[right,pos=0.5]{$\scriptstyle d_{22}$}(C3); \draw [->] (n4) -- node[right,pos=0.5]{$\scriptstyle d_{32}$} (n3); % \draw [->] (T)+(-190:4.5pt) -- node[gap,pos=0.5]{$\scriptstyle d_{1,1}$}($(A4) + (30:4.5pt)$); \draw [->] (T) -- node[below,pos=0.4]{$\scriptstyle d_{11}$}(B4); \draw [->] (T) -- node[right,pos=0.6]{$\scriptstyle d_{21}$}(C4); \draw [->] (T) -- node[gap,pos=0.5]{$\scriptstyle d_{31}$}(n4); %back=red arrows % \draw [bend right, bend angle=10, looseness=0.5, <-, red] (A1)+(-50:4.5pt) to node[gap,pos=0.5]{$\scriptstyle u_{1,p_1}$}($(0) + (200:4.5pt)$); \draw [bend right, bend angle=10, looseness=0.5, <-, red] (B1)+(-50:4.5pt) to node[left,pos=0.2]{$\scriptstyle u_{1p_1}$}($(0) + (160:4.5pt)$); \draw [bend right, <-, red](C1) to 
node[gap,pos=0.5]{$\hspace{-0.2em}\scriptstyle u_{2p_2}$} (0); \draw [bend right, bend angle=10, looseness=0.5, <-, red](n1) to node[gap,pos=0.5]{$\scriptstyle u_{3p_3}$} (0); % \draw [bend right, <-, red] (A2) to node[left,pos=0.5]{$\scriptstyle u_{1,p_1-1}$}(A1); \draw [bend right, <-, red] (B2) to node[gap,pos=0.3]{$\scriptstyle u_{1p_1-1}$}(B1); \draw [bend right, <-, red] (C2) to node[gap,pos=0.3]{$\scriptstyle u_{2p_2-1}$}(C1); \draw [bend right, <-, red] (n2) to node[left,pos=0.5]{$\scriptstyle u_{3p_3-1}$}(n1); % \draw [bend right, <-, red] (A4) to node[left,pos=0.5]{$\scriptstyle u_{1,2}$}(A3); \draw [bend right, <-, red] (B4) to node[gap,pos=0.5]{$\scriptstyle u_{12}$}(B3); \draw [bend right, <-, red] (C4) to node[gap,pos=0.5]{$\scriptstyle u_{22}$}(C3); \draw [bend right, <-, red] (n4) to node[left,pos=0.5]{$\scriptstyle u_{32}$}(n3); % \draw [bend right, bend angle=10, looseness=0.5, <-, red] (T)+(-200:4.5pt) to node[gap,pos=0.5]{$\scriptstyle u_{1,1}$}($(A4) + (50:4.5pt)$); \draw [bend right, bend angle=10, looseness=0.5, <-, red] (T)+(-155:4.5pt) to node[gap,pos=0.6]{$\scriptstyle u_{11}$}($(B4) + (60:4.5pt)$); \draw [bend right, <-, red] (T) to node[gap,pos=0.5]{$\scriptstyle u_{21}$}(C4); \draw [bend right, bend angle=10, looseness=0.5, <-, red] (T) to node[gap,pos=0.5]{$\scriptstyle u_{31}$}(n4); %%%braces %\draw [decorate,decoration={brace,amplitude=5pt,mirror},xshift=4pt,yshift=0pt] %(2,1) -- (2,4) node [black,midway,xshift=0.55cm] %{$\scriptstyle m_n$}; %\draw [decorate,decoration={brace,amplitude=5pt,mirror},xshift=4pt,yshift=0pt] %(0,1) -- (0,4) node [black,midway,xshift=0.55cm] %{$\scriptstyle m_3$}; %\draw [decorate,decoration={brace,amplitude=5pt,mirror},xshift=4pt,yshift=0pt] %(-1.5,1) -- (-1.5,4) node [black,midway,xshift=0.55cm] %{$\scriptstyle m_2$}; %\draw [decorate,decoration={brace,amplitude=5pt,mirror},xshift=4pt,yshift=0pt] %(-3,1) -- (-3,4) node [black,midway,xshift=0.55cm] %{$\scriptstyle m_1$}; \end{tikzpicture} \end{array} 
\end{array}$$ Set $D_i=d_{i1}\hdots d_{ip_i},$ $U_i=u_{ip_i}\hdots u_{i1}$, which are the paths down, respectively up, arm $i$. **Definition 2**. *The reconstruction algebra is defined to be the path algebra of $Q,$ subject to the relations induced by the canonical relation, and at every vertex, all 2-cycles that exist at that vertex are equal. Thus, the reconstruction algebra can be presented explicitly by the following relations. $$\begin{aligned} u_{1i}d_{1i} &= d_{1i+1}u_{1i+1} ~ \text{for all}~ 1 \leq i \leq p_1-1\\ u_{2i}d_{2i} &= d_{2i+1}u_{2i+1} ~\text{for all}~ 1 \leq i \leq p_2-1\\ u_{3i}d_{3i} &= d_{3i+1}u_{3i+1} ~ \text{for all}~ 1 \leq i \leq p_3-1\\ d_{21}u_{21} &= d_{11}u_{11} \\ d_{21}u_{21} &= d_{31}u_{31} \\ u_{1p_1}d_{1p_1} &= u_{2p_2}d_{2p_2}\\ u_{3p_3}d_{3p_3} &= u_{2p_2}d_{2p_2}\\ D_1 - D_2 &+ D_3 = 0. \end{aligned}$$* # The Moduli Space {#SecSimRes} In this section, the moduli space of both the reconstruction algebra and the deformed reconstruction algebra is computed. This will be used to achieve simultaneous resolution in §[4](#SimResSection){reference-type="ref" reference="SimResSection"}. ## Generalities {#Generalities} Consider the dimension vector $\updelta=(1,\hdots,1)$ and the representation variety $\mathop{\mathrm{\mathrm{Rep}}}(\mathbb{C}Q,\updelta)$, where $Q$ is the quiver in [\[ReconAlgebraQuiver\]](#ReconAlgebraQuiver){reference-type="eqref" reference="ReconAlgebraQuiver"}. We consider $\mathop{\mathrm{\mathrm{Rep}}}(\mathbb{C}Q,\updelta)$ as an affine space, and write $$\label{eqnRep} \EuScript{R}\colonequals \frac{\mathbb{C}[\mathop{\mathrm{\mathrm{Rep}}}(\mathbb{C}Q,\updelta)]}{\left<D_1 - D_2 + D_3 \right>},$$ which we identify with the polynomial ring in the arrow variables subject to the relation $$I = \left<D_1 - D_2 + D_3 \right>.$$ The co-ordinate ring carries a natural action of $G \colonequals \textstyle \prod_{q \in Q_0} \mathbb{C}^{\ast}$ where $Q_0$ denotes the set of vertices of $Q$.
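Since $\updelta=(1,\hdots,1)$, a point of $\mathop{\mathrm{\mathrm{Rep}}}(\mathbb{C}Q,\updelta)$ assigns a single scalar to each arrow, and an element $\upmu\in G$ rescales an arrow $a$ by the factor $\upmu_{h(a)}/\upmu_{t(a)}$. Around any cycle these factors telescope to $1$, so every cycle is $G$-invariant; that the cycles in fact generate $\EuScript{R}^G$ is shown in §4.2. The telescoping can be sketched numerically as follows (on a hypothetical $3$-cycle, not on the quiver $Q$ itself):

```python
import random

random.seed(1)

# a hypothetical 3-cycle 0 -> 1 -> 2 -> 0; with dimension vector
# (1,...,1) each arrow is a single scalar
tails, heads = [0, 1, 2], [1, 2, 0]
arrows = [random.uniform(0.5, 2.0) for _ in range(3)]
mu = [random.uniform(0.5, 2.0) for _ in range(3)]  # a torus element (mu_q)_q

# the torus rescales an arrow a by mu[h(a)] / mu[t(a)]
acted = [a * mu[h] / mu[t] for a, t, h in zip(arrows, tails, heads)]

cycle = arrows[0] * arrows[1] * arrows[2]
acted_cycle = acted[0] * acted[1] * acted[2]

# the rescaling factors cancel around the cycle
print(abs(cycle - acted_cycle) < 1e-12)
```

The same cancellation applies to the cycles of $Q$, such as the $2$-cycles $u_{1i}d_{1i}$ along the arms and the long cycles $D_iU_i$ based at the extended vertex.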
The action is via conjugation, namely $\upmu \in G = \mathbb{C}^\ast \times \hdots \times \mathbb{C}^\ast$ acts on $p + I \in \EuScript{R}$ as $\upmu \cdot (p + I) \colonequals \upmu \cdot p + I = \upmu^{-1}_{t(p)} p \upmu_{h(p)} + I$. ## Moduli of Quiver with one relation {#ModuliofReconAlg} With respect to the ordering of the vertices as in Section [2](#Preliminaries){reference-type="ref" reference="Preliminaries"}, this subsection computes the moduli space of the quiver $Q$ in §[2](#Preliminaries){reference-type="ref" reference="Preliminaries"} subject to the single relation $I = \left<D_1 - D_2 + D_3 \right>,$ under the dimension vector $\updelta = (1,1, \hdots, 1),$ and generic King stability condition $\upvartheta_0 = (-(m_1+m_2+m_3 +1),1, \hdots, 1).$\ Set $\EuScript{M}_{\upvartheta_0} := \EuScript{R}~ /\!\!\!\!/_{\upvartheta_0} \mathrm{GL}$ where $\EuScript{R}$ is defined in [\[eqnRep\]](#eqnRep){reference-type="eqref" reference="eqnRep"}, and consider the following open subsets of $\EuScript{M}_{\upvartheta_0}$ with either $D_1, ~ D_2~ \text{or} ~D_3 \neq 0$ where $D_i \coloneqq d_{i1} \hdots d_{ip_i},$ $$V^1_{i,j} = \left\{ \begin{array}{cccc} D_1 \neq 0, & d_{21} \hdots d_{2i} \neq 0, & u_{2p_2} \hdots u_{2{i+2}} \neq 0 \\ &d_{31} \hdots d_{3j} \neq 0, & u_{3p_3} \hdots u_{3{j+2}} \neq 0 \end{array} \right\} 0 \leq i \leq p_2-1,~ 0 \leq j \leq p_3-1,$$ $$V^2_{i,j} = \left\{ \begin{array}{cccc} D_2 \neq 0, & d_{11} \hdots d_{1i} \neq 0, & u_{1p_1} \hdots u_{1{i+2}} \neq 0 \\ &d_{31} \hdots d_{3j} \neq 0, & u_{3p_3} \hdots u_{3{j+2}} \neq 0 \\ \end{array} \right\} 0 \leq i \leq p_1-1,~0 \leq j \leq p_3-1,$$ $$V^3_{i,j} = \left\{ \begin{array}{cccc} D_3 \neq 0, & d_{11} \hdots d_{1i} \neq 0, & u_{1p_1} \hdots u_{1{i+2}} \neq 0 \\ &d_{21} \hdots d_{2j} \neq 0, & u_{2p_2} \hdots u_{2{j+2}} \neq 0 \\ \end{array} \right\} 0 \leq i \leq p_1-1,~0 \leq j \leq p_2-1,$$ where by convention, if a product does not exist (e.g $d_{21}\hdots d_{2i}$ with $i=0,$ or 
$u_{3p_3}\hdots u_{3j+2}$ with $j=p_3-1$), then that condition is empty. **Lemma 3**. *With notation as above, the open sets $V^1_{i,j}, V^2_{i,j}, V^3_{i,j}$ completely cover the moduli space $\EuScript{M}_{\upvartheta_0}$.* *Proof.* By [@NCCR Remark 3.10], a $\mathbb{C}Q/I$ module $M$ of dimension vector $(1,1, \hdots, 1)$ is $\upvartheta_0$-stable if and only if for every vertex in the quiver representation of $M$, there exists a non-zero path from the zero vertex to that vertex. So, if we consider the bottom vertex of $M,$ we must have either $D_1\neq 0, ~ D_2\neq 0$ or $D_3 \neq 0.$ By symmetry, suppose that $D_1\neq 0.$ For convenience in the argument below, label the arrows and vertices of $Q$ as follows $$\begin{array}{cc} & \begin{array}{c} \\ \begin{tikzpicture}[xscale=1.3,yscale=1.2,bend angle=40, looseness=1] \node (0) at (0,0) [vertex] {}; % \node (A1) at (-3,1) [vertex] {}; % \node (A2) at (-3,2) [vertex] {}; % \node (A3) at (-3,3) [vertex] {}; % \node (A4) at (-3,4) [vertex] {}; \node (B1) at (-2,1) [vertex] {}; \node (B2) at (-2,2) [vertex] {}; \node (B3) at (-2,3) [vertex] {}; \node (B4) at (-2,4) [vertex] {}; \node (C1) at (0,1) [vertex] {}; \node (C2) at (0,2) [vertex] {}; \node (C3) at (0,3) [vertex] {}; \node (C4) at (0,4) [vertex] {}; \node (n1) at (2,1) [vertex] {}; \node (n2) at (2,2) [vertex] {}; \node (n3) at (2,3) [vertex] {}; \node (n4) at (2,4) [vertex] {}; \node at (-1.6,1) {$\scriptstyle (11)$}; \node at (-1.6,2) {$\scriptstyle (12)$}; \node at (-1.4,3) {$\scriptstyle (1m_1-1)$}; \node at (-1.5,4) {$\scriptstyle (1m_1)$}; \node at (0.4,1) {$\scriptstyle (21)$}; \node at (0.4,2) {$\scriptstyle (22)$}; \node at (0.6,3) {$\scriptstyle (2m_2-1)$}; \node at (0.5,4) {$\scriptstyle (2m_2)$}; \node at (2.4,1) {$\scriptstyle (31)$}; \node at (2.4,2) {$\scriptstyle (32)$}; \node at (2.6,3) {$\scriptstyle (3m_3-1)$}; \node at (2.5,4) {$\scriptstyle (3m_3)$}; % \node at (-3,2.6) {$\vdots$}; \node at (-2,2.6) {$\vdots$}; \node at (0,2.6) 
{$\vdots$}; \node at (2,2.6) {$\vdots$}; % \node at (1.2,2.5) {$\hdots$}; \node (T) at (0,5) [cvertex] {}; % \draw [->] (A1)+(-30:4.5pt) -- node[gap]{$\scriptstyle d_{1,p_1}$} ($(0) + (190:4.5pt)$); \draw [->] (B1) --node[above,pos=0.5]{$\scriptstyle d_{1p_1}$} (0); \draw [->] (C1) --node[right,pos=0.2]{$\scriptstyle d_{2p_2}$} (0); \draw [->] (n1) --node[gap,pos=0.6]{$\scriptstyle d_{3p_3}$} (0); % \draw [->] (A2) --node[right,pos=0.5]{$\scriptstyle d_{1,p_1-1}$} (A1); \draw [->] (B2) --node[right,pos=0.5]{$\scriptstyle d_{1p_1-1}$}(B1); \draw [->] (C2) --node[right,pos=0.5]{$\scriptstyle d_{2p_2-1}$}(C1); \draw [->] (n2) --node[right,pos=0.5]{$\scriptstyle d_{3p_3-1}$} (n1); % \draw [->] (A4) --node[right,pos=0.5]{$\scriptstyle d_{1,2}$} (A3); \draw [->] (B4) --node[right,pos=0.5]{$\scriptstyle d_{12}$}(B3); \draw [->] (C4) --node[right,pos=0.5]{$\scriptstyle d_{22}$}(C3); \draw [->] (n4) -- node[right,pos=0.5]{$\scriptstyle d_{32}$} (n3); % \draw [->] (T)+(-190:4.5pt) -- node[gap,pos=0.5]{$\scriptstyle d_{1,1}$}($(A4) + (30:4.5pt)$); \draw [->] (T) -- node[below,pos=0.4]{$\scriptstyle d_{11}$}(B4); \draw [->] (T) -- node[right,pos=0.6]{$\scriptstyle d_{21}$}(C4); \draw [->] (T) -- node[gap,pos=0.5]{$\scriptstyle d_{31}$}(n4); %back=red arrows % \draw [bend right, bend angle=10, looseness=0.5, <-, red] (A1)+(-50:4.5pt) to node[gap,pos=0.5]{$\scriptstyle u_{1,p_1}$}($(0) + (200:4.5pt)$); \draw [bend right, bend angle=10, looseness=0.5, <-, red] (B1)+(-50:4.5pt) to node[left,pos=0.2]{$\scriptstyle u_{1p_1}$}($(0) + (160:4.5pt)$); \draw [bend right, <-, red](C1) to node[gap,pos=0.5]{$\hspace{-0.2em}\scriptstyle u_{2p_2}$} (0); \draw [bend right, bend angle=10, looseness=0.5, <-, red](n1) to node[gap,pos=0.5]{$\scriptstyle u_{3p_3}$} (0); % \draw [bend right, <-, red] (A2) to node[left,pos=0.5]{$\scriptstyle u_{1,p_1-1}$}(A1); \draw [bend right, <-, red] (B2) to node[gap,pos=0.3]{$\scriptstyle u_{1p_1-1}$}(B1); \draw [bend right, <-, red] (C2) to 
node[gap,pos=0.3]{$\scriptstyle u_{2p_2-1}$}(C1); \draw [bend right, <-, red] (n2) to node[left,pos=0.5]{$\scriptstyle u_{3p_3-1}$}(n1); % \draw [bend right, <-, red] (A4) to node[left,pos=0.5]{$\scriptstyle u_{1,2}$}(A3); \draw [bend right, <-, red] (B4) to node[gap,pos=0.5]{$\scriptstyle u_{12}$}(B3); \draw [bend right, <-, red] (C4) to node[gap,pos=0.5]{$\scriptstyle u_{22}$}(C3); \draw [bend right, <-, red] (n4) to node[left,pos=0.5]{$\scriptstyle u_{32}$}(n3); % \draw [bend right, bend angle=10, looseness=0.5, <-, red] (T)+(-200:4.5pt) to node[gap,pos=0.5]{$\scriptstyle u_{1,1}$}($(A4) + (50:4.5pt)$); \draw [bend right, bend angle=10, looseness=0.5, <-, red] (T)+(-155:4.5pt) to node[gap,pos=0.6]{$\scriptstyle u_{11}$}($(B4) + (60:4.5pt)$); \draw [bend right, <-, red] (T) to node[gap,pos=0.5]{$\scriptstyle u_{21}$}(C4); \draw [bend right, bend angle=10, looseness=0.5, <-, red] (T) to node[gap,pos=0.5]{$\scriptstyle u_{31}$}(n4); %%%braces %\draw [decorate,decoration={brace,amplitude=5pt,mirror},xshift=4pt,yshift=0pt] %(2,1) -- (2,4) node [black,midway,xshift=0.55cm] %{$\scriptstyle m_n$}; %\draw [decorate,decoration={brace,amplitude=5pt,mirror},xshift=4pt,yshift=0pt] %(0,1) -- (0,4) node [black,midway,xshift=0.55cm] %{$\scriptstyle m_3$}; %\draw [decorate,decoration={brace,amplitude=5pt,mirror},xshift=4pt,yshift=0pt] %(-1.5,1) -- (-1.5,4) node [black,midway,xshift=0.55cm] %{$\scriptstyle m_2$}; %\draw [decorate,decoration={brace,amplitude=5pt,mirror},xshift=4pt,yshift=0pt] %(-3,1) -- (-3,4) node [black,midway,xshift=0.55cm] %{$\scriptstyle m_1$}; \end{tikzpicture} \end{array} \end{array}$$ If $d_{21} = 0$ and $d_{31} = 0$ then the only way a non-zero path can reach vertex $(2m_2)$ and vertex $(3m_3)$ is if $D_1u_{2p_2} \hdots u_{22} \neq 0$ and $D_1u_{3p_3} \hdots u_{32} \neq 0$ respectively. 
In this case $M$ belongs to $V^1_{0,0}.$ If $d_{21} \hdots d_{2p_2-1} \neq 0$ and $d_{31} \hdots d_{3p_3-1} \neq 0$ then by a similar argument $M$ is in $V^1_{p_2-1,p_3-1}.$ Hence we can assume that some $d$ on arm 2 and some $d$ on arm 3 are equal to zero. In each arm, consider the zero $d$ closest to the top vertex: thus $$\begin{aligned} d_{21} \hdots d_{2i} \neq 0 &~\text{and}~ d_{2i+1} = 0\\ d_{31} \hdots d_{3j} \neq 0 &~\text{and} ~ d_{3j+1} = 0, \end{aligned}$$ and therefore $M$ is in $V^1_{i,j}.$ ◻ From now on, to ease notation set $\mathbb{C}[U^k_{i,j}] = \mathbb{C}[V^k_{i-1,j-1}]$ for $k = 1, 2, 3.$ **Proposition 4**. *With notation as above, the following statements hold. $$\begin{aligned} \mathbb{C}[U^1_{i,j}] \cong \frac{\mathbb{C}[u_{11}, \hdots, u_{1p_1},u_{21}, \hdots, u_{2i-1},u_{2i},d_{2i}, \hdots , d_{2p_2},u_{31}, \hdots, u_{3j-1},u_{3j},d_{3j}, \hdots , d_{3p_3}]}{\left( 1 - d_{2i} \hdots d_{2p_2} + d_{3j} \hdots d_{3p_3}\right)},\\ \mathbb{C}[U^2_{i,j}] \cong \frac{\mathbb{C}[u_{21}, \hdots, u_{2p_2},u_{11}, \hdots, u_{1i-1},u_{1i},d_{1i}, \hdots , d_{1p_1},u_{31}, \hdots, u_{3j-1},u_{3j},d_{3j}, \hdots , d_{3p_3}]}{\left( d_{1i} \hdots d_{1p_1} - 1 + d_{3j} \hdots d_{3p_3}\right)},\\ \mathbb{C}[U^3_{i,j}] \cong \frac{\mathbb{C}[u_{31}, \hdots, u_{3p_3},u_{11}, \hdots, u_{1i-1},u_{1i},d_{1i}, \hdots , d_{1p_1},u_{21}, \hdots, u_{2j-1},u_{2j},d_{2j}, \hdots , d_{2p_2}]}{\left( d_{1i} \hdots d_{1p_1} - d_{2j} \hdots d_{2p_2} + 1\right)}. \end{aligned}$$* *Proof.* *Case 1.* For $U^1_{i,j},$ after changing basis we may set the specified non-zero arrows to be 1.
Therefore, the only possible non-identity arrows on arm 1 are $$u_{11}, \hdots, u_{1p_1},$$ the only possible non-identity arrows on arm 2 are $$u_{21}, \hdots, u_{2i-1},u_{2i},d_{2i},d_{2i+1}, \hdots, d_{2p_2},$$ and the only possible non-identity arrows on arm 3 are $$u_{31}, \hdots, u_{3j-1},u_{3j},d_{3j},d_{3j+1}, \hdots , d_{3p_3}.$$ Since $D_1=1,~D_2=d_{2i}d_{2i+1} \hdots d_{2p_2},~D_3=d_{3j}d_{3j+1} \hdots d_{3p_3},$ the only relation is $1 - d_{2i} \hdots d_{2p_2} + d_{3j} \hdots d_{3p_3},$ as claimed. *Case 2.* The proofs for $\mathbb{C}[U^2_{i,j}]$ and $\mathbb{C}[U^3_{i,j}]$ are similar to that of $\mathbb{C}[U^1_{i,j}]$ above. ◻ **Corollary 5**. *$\EuScript{M}_{\upvartheta_0}$ is smooth.* *Proof.* By Lemma [Lemma 3](#Cover){reference-type="ref" reference="Cover"}, $\EuScript{M}_{\upvartheta_0}$ is covered by the open sets $U^1_{i,j}, U^2_{i,j}, U^3_{i,j},$ so it suffices to prove that each of these is smooth. By Proposition [Proposition 4](#hypersurfaces){reference-type="ref" reference="hypersurfaces"}, set $f =1 - d_{2i} \hdots d_{2p_2} + d_{3j} \hdots d_{3p_3}$ and consider the ideal $$K= \left(f,~\frac{\partial f}{\partial v}\mid v~ \text{is a variable in}~ \mathbb{C}[U^1_{i,j}]\right).$$ Since $1 = f - d_{2i}\frac{\partial f}{\partial d_{2i}} - d_{3j}\frac{\partial f}{\partial d_{3j}} \in K,$ as is standard [@Greuel Algorithm 5.7.6], this implies that $\mathbb{C}[U^1_{i,j}]$ is smooth. The proof for $\mathbb{C}[U^2_{i,j}]$ and $\mathbb{C}[U^3_{i,j}]$ is identical. Thus, $\EuScript{M}_{\upvartheta_0}$ is smooth. ◻ ## The Deformed Reconstruction Algebras {#Deformed} In this subsection we introduce a deformed version of the reconstruction algebra, and compute its moduli space. **Definition 6**.
*Given scalars $\boldsymbol{\upgamma} \in \mathbb{C}^{\oplus p_1-1} \oplus \mathbb{C}^{\oplus p_2 -1 } \oplus \mathbb{C}^{\oplus p_3 -1 } \oplus \mathbb{C}^{\oplus 4 },$ write $\boldsymbol{\upgamma} = (\boldsymbol{\upgamma}_{1}, \boldsymbol{\upgamma}_{2}, \boldsymbol{\upgamma}_{3},$ $A, B, a, b) ~\text{where}~ \boldsymbol{\upgamma}_{1} = (\upgamma_{11}, \hdots,\upgamma_{1p_1-1} ), \boldsymbol{\upgamma}_{2} = (\upgamma_{21}, \hdots,\upgamma_{2p_2-1} ), \boldsymbol{\upgamma}_{3} = (\upgamma_{31}, \hdots,\upgamma_{3p_3-1} ).$ Then the deformed reconstruction algebra $\Lambda_{\boldsymbol{\upgamma}}$ is defined to be the path algebra of the quiver $Q,$ subject to the following relations.* $$\begin{array}{ccccc} (1)&~u_{1i}d_{1i} - d_{1i+1}u_{1i+1} &= &\upgamma_{1i} &~ \text{for all}~ 1 \leq i \leq p_1-1\\ (2)&~u_{2i}d_{2i} - d_{2i+1}u_{2i+1} &= &\upgamma_{2i} &~ \text{for all}~ 1 \leq i \leq p_2-1\\ (3)&~u_{3i}d_{3i} - d_{3i+1}u_{3i+1} &= &\upgamma_{3i}&~ \text{for all}~ 1 \leq i \leq p_3-1\\ (a)&~d_{21}u_{21} - d_{11}u_{11} &= &a&\\ (b)&~d_{21}u_{21} - d_{31}u_{31} &= &b&\\ (c)&~u_{1p_1}d_{1p_1} - u_{2p_2}d_{2p_2} &= &A&\\ (d)&~u_{3p_3}d_{3p_3} - u_{2p_2}d_{2p_2} &= &B&\\ (x)&~d_{11} \hdots d_{1p_1} - d_{21} \hdots d_{2p_2} + d_{31} \hdots d_{3p_3}&= &0.& \end{array}$$ Note that $\Lambda_{\boldsymbol{0}}$ is the reconstruction algebra defined in §[2](#Preliminaries){reference-type="ref" reference="Preliminaries"} earlier. **Notation 7**. Set $$\Delta := \{\boldsymbol{\upgamma} \in \mathbb{C}^{\oplus p_1 + p_2 + p_3 + 1} \mid \sum_{i=1}^{p_{1}-1} \upgamma_{1i} - \sum_{i=1}^{p_{2}-1} \upgamma_{2i} + A + a =0 ~\text{and}~ \sum_{i=1}^{p_{3}-1} \upgamma_{3i} - \sum_{i=1}^{p_{2}-1} \upgamma_{2i} + B + b =0\}\label{def:Delta}.$$ **Lemma 8**. 
*If $\boldsymbol{\upgamma} \in \Delta,$ then $\sum_{i=1}^{p_{1}-1} \upgamma_{1i} - \sum_{i=1}^{p_{3}-1} \upgamma_{3i} + (A-B) + (a-b) =0.$* *Proof.* For $\boldsymbol{\upgamma} \in \Delta,$ $$\sum_{i=1}^{p_{1}-1} \upgamma_{1i} - \sum_{i=1}^{p_{2}-1} \upgamma_{2i} + A + a= \sum_{i=1}^{p_{3}-1} \upgamma_{3i} - \sum_{i=1}^{p_{2}-1} \upgamma_{2i} + B + b =0$$ and hence the result follows. ◻ **Remark 9**. If $\boldsymbol{\upgamma} \notin \Delta,$ then $\mathop{\mathrm{\mathrm{Rep}}}(\Lambda_{\boldsymbol{\upgamma}},~\updelta) = \emptyset.$ Indeed, given $\boldsymbol{\upgamma} \notin \Delta,$ either $\sum_{i=1}^{p_{1}-1} \upgamma_{1i} - \sum_{i=1}^{p_{2}-1} \upgamma_{2i} + A + a \neq 0 ~\text{or}~ \sum_{i=1}^{p_{3}-1} \upgamma_{3i} - \sum_{i=1}^{p_{2}-1} \upgamma_{2i} + B + b \neq 0.$ If $M \in \mathop{\mathrm{\mathrm{Rep}}}(\Lambda_{\boldsymbol{\upgamma}},~\updelta),$ then its linear maps between vertices are scalars, which satisfy the relations of $\Lambda_{\boldsymbol{\upgamma}}.$ Now these scalars commute, so summing the relations $(1) - (2) + (a) + (c)$ gives $\sum_{i=1}^{p_{1}-1} \upgamma_{1i} - \sum_{i=1}^{p_{2}-1} \upgamma_{2i} + A + a = 0$ and summing the relations $(3) - (2) + (b) + (d)$ gives $\sum_{i=1}^{p_{3}-1} \upgamma_{3i} - \sum_{i=1}^{p_{2}-1} \upgamma_{2i} + B + b = 0,$ which is a contradiction. This is why below we always assume that $\boldsymbol{\upgamma} \in \Delta.$ Now consider $\EuScript{M}_{\upvartheta_0}(\Lambda_{\boldsymbol{\upgamma}},~\updelta) \coloneqq \mathop{\mathrm{\mathrm{Rep}}}(\Lambda_{\boldsymbol{\upgamma}},~\updelta)~ /\!\!\!\!/_{\upvartheta_0}~ \mathrm{GL},$ and define open subsets $V^1_{i,j}, V^2_{i,j}, V^3_{i,j}$ in an identical manner to §[3.2](#ModuliofReconAlg){reference-type="ref" reference="ModuliofReconAlg"}. **Proposition 10**.
*If $\boldsymbol{\upgamma} \in \Delta,$ then the following open sets $U^1_{i,j}, U^2_{i,j}, U^3_{i,j},$ cover the moduli space $\EuScript{M}_{\upvartheta_0}(\Lambda_{\boldsymbol{\upgamma}},~\updelta)$ $$\mathbb{C}[U^1_{i,j}] \cong \frac{\mathbb{C}[d_{2i}, u_{2i}, d_{3j},u_{3j}]}{\left(\begin{array}{c} d_{2i} u_{2i} + \displaystyle\sum_{l=1}^{i-1} \upgamma_{2l} - d_{3j} u_{3j} - \displaystyle\sum_{l=1}^{j-1} \upgamma_{3l} - b,\\ 1 - d_{2i}(d_{2i}u_{2i} - \upgamma_{2i}) \hdots (d_{2i}u_{2i} - \displaystyle\sum_{l=i}^{p_2-1}\upgamma_{2l}) + d_{3j}(d_{3j}u_{3j} - \upgamma_{3j}) \hdots (d_{3j}u_{3j} - \displaystyle\sum_{l=j}^{p_3-1}\upgamma_{3l}) \end{array}\right)},$$* *$$\mathbb{C}[U^2_{i,j}] \cong \frac{\mathbb{C}[d_{1i}, u_{1i}, d_{3j},u_{3j}]}{\left(\begin{array}{c} d_{1i} u_{1i} + \displaystyle\sum_{l=1}^{i-1} \upgamma_{1l} - d_{3j} u_{3j} - \displaystyle\sum_{l=1}^{j-1} \upgamma_{3l} - (b-a),\\ d_{1i}(d_{1i}u_{1i} - \upgamma_{1i}) \hdots (d_{1i}u_{1i} - \displaystyle\sum_{l=i}^{p_1-1}\upgamma_{1l}) - 1 + d_{3j}(d_{3j}u_{3j} - \upgamma_{3j}) \hdots (d_{3j}u_{3j} - \displaystyle\sum_{l=j}^{p_3-1}\upgamma_{3l}) \end{array}\right)},$$* *$$\mathbb{C}[U^3 _{i,j}] \cong \frac{\mathbb{C}[d_{1i}, u_{1i}, d_{2j},u_{2j}]}{\left(\begin{array}{c} d_{2j} u_{2j} + \displaystyle\sum_{l=1}^{j-1} \upgamma_{2l} - d_{1i} u_{1i} - \displaystyle\sum_{l=1}^{i-1} \upgamma_{1l} - a,\\ d_{1i}(d_{1i}u_{1i} - \upgamma_{1i}) \hdots (d_{1i}u_{1i} - \displaystyle\sum_{l=i}^{p_1-1}\upgamma_{1l}) - d_{2j}(d_{2j}u_{2j} - \upgamma_{2j}) \hdots (d_{2j}u_{2j} - \displaystyle\sum_{l=j}^{p_2-1}\upgamma_{2l}) + 1 \end{array}\right)}.$$* *Proof.* The fact these cover the moduli space is identical to Lemma [Lemma 3](#Cover){reference-type="ref" reference="Cover"}. To compute these opens for $\mathbb{C}[U^1_{i,j}],$ consider the relations in Definition [Definition 6](#DeformedReconAlgbera){reference-type="ref" reference="DeformedReconAlgbera"}. 
Then the relations $(1)$ become $$u_{1i} - u_{1i+1} = \upgamma_{1i},\\ $$ which implies that $$\begin{aligned} u_{12} &= u_{11} - \upgamma_{11}\\ u_{13} &= u_{11}- \upgamma_{11}- \upgamma_{12}\\ &\vdots\\ u_{1p_1} &= u_{11} - \displaystyle\sum_{l=1}^{p_1-1}\upgamma_{1l}. \end{aligned}$$ Therefore, the arrows $u_{1i}$ can be expressed in terms of $u_{11}.$ The relations $(2)$ become $$\begin{aligned} u_{21} - u_{22} &= \upgamma_{21}\\ u_{22} - u_{23} &= \upgamma_{22}\\ &\vdots\\ u_{2i}d_{2i} - d_{2i+1} &= \upgamma_{2i}\\ &\vdots\\ d_{2p_2-2} - d_{2p_2-1} &= \upgamma_{2p_2-2}\\ d_{2p_2-1} - d_{2p_2} &= \upgamma_{2p_2-1}. \end{aligned}$$ Re-writing the middle as $d_{2i+1} = u_{2i}d_{2i} - \upgamma_{2i},$ working up and down gives $$\begin{aligned} u_{21} &= u_{2i}d_{2i} + \displaystyle\sum_{l=1}^{i-1}\upgamma_{2l}\\ &\vdots\\ u_{2i-1} &= u_{2i}d_{2i} + \upgamma_{2i-1}\\ d_{2i+1} &= u_{2i}d_{2i} - \upgamma_{2i}\\ d_{2i+2} &= u_{2i}d_{2i} - \upgamma_{2i}- \upgamma_{2i+1}\\ &\vdots\\ d_{2p_2} &= u_{2i}d_{2i} - \displaystyle\sum_{l=i}^{p_2-1}\upgamma_{2l}, \end{aligned}$$ and so all the arrows in arm 2 are determined by $(u_{2i}, d_{2i}).$ In a similar way, the relations $(3)$ become $$\begin{aligned} u_{31} &= u_{3j}d_{3j} + \displaystyle\sum_{l=1}^{j-1}\upgamma_{3l}\\ &\vdots\\ u_{3j-1} &= u_{3j}d_{3j} + \upgamma_{3j-1}\\ d_{3j+1} &= u_{3j}d_{3j} - \upgamma_{3j}\\ d_{3j+2} &= u_{3j}d_{3j} - \upgamma_{3j}- \upgamma_{3j+1}\\ &\vdots\\ d_{3p_3} &= u_{3j}d_{3j} - \displaystyle\sum_{l=j}^{p_3-1}\upgamma_{3l}, \end{aligned}$$ and all the arrows in arm 3 are determined by $(u_{3j}, d_{3j}).$ The remaining relations $$\begin{aligned} d_{21}u_{21} - d_{11}u_{11}& = a\\ d_{21}u_{21} - d_{31}u_{31}& = b\\ u_{1p_1}d_{1p_1} - u_{2p_2}d_{2p_2}& = A\\ u_{3p_3}d_{3p_3} - u_{2p_2}d_{2p_2}& = B\\ d_{11} \hdots d_{1p_1} - d_{21} \hdots d_{2p_2}& + d_{31} \hdots d_{3p_3}= 0, \end{aligned}$$ then become $$\begin{aligned} u_{2i}d_{2i} + \displaystyle\sum_{l=1}^{i-1} \upgamma_{2l} - u_{11}& = 
a \hspace{3cm} \hfill{(a)}\\ (u_{2i}d_{2i} + \displaystyle\sum_{l=1}^{i-1} \upgamma_{2l}) - (u_{3j}d_{3j} + \displaystyle\sum_{l=1}^{j-1} \upgamma_{3l})& = b\hspace{3.05cm} \hfill{(b)}\\ (u_{11} - \displaystyle\sum_{l=1}^{p_1-1}\upgamma_{1l}) - (u_{2i}d_{2i} - \displaystyle\sum_{l=i}^{p_2-1} \upgamma_{2l})& = A \hspace{3cm} \hfill{(c)}\\ (u_{3j}d_{3j} - \displaystyle\sum_{l=j}^{p_3-1} \upgamma_{3l}) - (u_{2i}d_{2i} - \displaystyle\sum_{l=i}^{p_2-1} \upgamma_{2l})& = B \hspace{3cm} \hfill{(d)} \end{aligned}$$ $$1 - d_{2i}(d_{2i}u_{2i} - \upgamma_{2i}) \hdots (d_{2i}u_{2i} - \displaystyle\sum_{l=i}^{p_2-1}\upgamma_{2l}) + d_{3j}(d_{3j}u_{3j} - \upgamma_{3j}) \hdots (d_{3j}u_{3j} - \displaystyle\sum_{l=j}^{p_3-1}\upgamma_{3l})= 0. \hfill{(x)}$$ Since $\boldsymbol{\upgamma} \in \Delta,$ $(a) + (c) = 0$ and $(b) + (d) = 0.$ Thus $(a)$ and $(b)$ are satisfied automatically from $(c)$ and $(d).$ By Lemma 8, $(c)-(d) = 0,$ so from $(b)$ and $(x)$ all relations are satisfied. The proofs for $\mathbb{C}[U^2_{i,j}]$ and $\mathbb{C}[U^3_{i,j}]$ are similar to that of $\mathbb{C}[U^1_{i,j}]$ above. ◻ The following two results will be used in showing that $\EuScript{M}_{\upvartheta_0}(\Lambda_{\boldsymbol{\upgamma}},~\updelta)$ is smooth. **Lemma 11**. *In $\mathbb{C}[x, y],$ for any $f = x (xy - \upalpha_{1}) \hdots (xy - \upalpha_{n})$ with $\upalpha_{i} \in \mathbb{C},$ $$x \frac{\partial f}{\partial x} - y \frac{\partial f}{\partial y} = f(x, y).$$* *Proof.* Set $h_i = (xy - \upalpha_{i})$ for $i=1, 2, \hdots, n,$ and $$g \coloneqq (xy - \upalpha_{1}) \hdots (xy - \upalpha_{n}) = h_1 \hdots h_n,$$ so that $f = xg.$ Using the product rule, $$\begin{aligned} \frac{\partial f}{\partial x} = g + x \cdot \frac{\partial g}{\partial x} & \quad \text{and} \quad \frac{\partial f}{\partial y} = x \cdot \frac{\partial g}{\partial y}.
\end{aligned}$$ Thus the statement follows if $x^2 \cdot \frac{\partial g}{\partial x} - xy \cdot \frac{\partial g}{\partial y} =0,$ which holds provided that $x \cdot \frac{\partial g}{\partial x} - y \cdot \frac{\partial g}{\partial y} =0.$ Now, again by the product rule $$\begin{aligned} x \cdot\frac{\partial g}{\partial x} &= x \cdot(y h_2 \hdots h_n + h_1 y h_3 \hdots h_n + \hdots + h_1 \hdots h_{n-1} y)\\ y \cdot\frac{\partial g}{\partial y} &= y \cdot(x h_2 \hdots h_n + h_1 x h_3 \hdots h_n + \hdots + h_1 \hdots h_{n-1} x) \end{aligned}$$ which clearly satisfies $x \cdot \frac{\partial g}{\partial x} - y \cdot \frac{\partial g}{\partial y} =0.$ ◻ **Lemma 12**. *For any scalars $\upalpha_1, \upalpha_2 \hdots, \upalpha_n, \upbeta_2, \hdots, \upbeta_m,$ set $f_1 \coloneqq ab - xy + \upalpha_1$ and $f_2 = 1 - a(ab - \upalpha_2) \hdots (ab - \upalpha_n) + x(xy - \upbeta_2) \hdots (xy - \upbeta_m).$ Then $\mathbb{C}[a, b, x, y]/(f_1,~f_2) \neq 0.$* *Proof.* Set $\mathcal{A}= \mathbb{C}[a, b, x, y]/(f_1,~f_2),$ then we claim that $\mathcal{A}/(a-1) \neq 0.$ Indeed, $$\begin{aligned} \frac{\mathcal{A}}{(a-1)} &= \frac{\mathbb{C}[b, x, y]}{\left(\begin{array}{c} b = xy - \upalpha_1,\\ 1 - (b - \upalpha_2) \hdots (b - \upalpha_n) + x(xy - \upbeta_2) \hdots (xy - \upbeta_m) \end{array}\right)}\\ &= \frac{\mathbb{C}[x, y]}{ (1 - (xy - (\upalpha_1 + \upalpha_2)) \hdots (xy - (\upalpha_1 +\upalpha_n)) + x(xy - \upbeta_2) \hdots (xy - \upbeta_m)) }, \end{aligned}$$ which is nonzero since the highest degree term is either $x^{n-1}y^{n-1}$ or $x^{m}y^{m-1},$ and these can never cancel. ◻ The following is the main result of this subsection. **Corollary 13**. 
*$\EuScript{M}_{\upvartheta_0}(\Lambda_{\boldsymbol{\upgamma}},~\updelta)$ is two-dimensional and smooth.* *Proof.* By Proposition [Proposition 10](#modulicoverDeformed){reference-type="ref" reference="modulicoverDeformed"}, $\mathop{\mathrm{\mathrm{Rep}}}(\Lambda_{\boldsymbol{\upgamma}},~\updelta)~ /\!\!\!\!/_{\upvartheta_0} \mathrm{GL}$ is covered by the open sets $U^1_{i,j}, U^2_{i,j}, U^3_{i,j},$ so it suffices to prove that each of these is two-dimensional and smooth.\ *Case 1.* Note that by Proposition [Proposition 10](#modulicoverDeformed){reference-type="ref" reference="modulicoverDeformed"}, $\mathbb{C}[U^1_{i,j}] \cong \mathbb{C}[d_{2i}, u_{2i}, d_{3j}, u_{3j}]/(f_1, f_2)$ where $$\begin{aligned} f_1 & \coloneqq d_{2i} u_{2i} + \displaystyle\sum_{l=1}^{i-1} \upgamma_{2l} - d_{3j} u_{3j} - \displaystyle\sum_{l=1}^{j-1} \upgamma_{3l} - b ,\\ f_2 & \coloneqq 1 - f_{21} + f_{22},\\ f_{21} & \coloneqq d_{2i}(d_{2i}u_{2i} - \upgamma_{2i}) \hdots (d_{2i}u_{2i} - \displaystyle\sum_{l=i}^{p_2-1}\upgamma_{2l}),\\ f_{22} & \coloneqq d_{3j}(d_{3j}u_{3j} - \upgamma_{3j}) \hdots (d_{3j}u_{3j} - \displaystyle\sum_{l=j}^{p_3-1}\upgamma_{3l}). \end{aligned}$$ Since $\mathbb{C}[d_{2i}, u_{2i}, d_{3j},u_{3j}]$ is an affine domain and $f_1$ is not a unit, it follows from [@Eisenbud Corollary 13.11] that $S \coloneqq \mathbb{C}[d_{2i}, u_{2i}, d_{3j}, u_{3j}]/(f_1)$ is a $3$-dimensional domain. Now $f_2$ is not a unit in $S$: if it were, then $$0 = S/ (f_2) \cong \mathbb{C}[d_{2i}, u_{2i}, d_{3j}, u_{3j}]/(f_1, f_2)$$ which would contradict Lemma [Lemma 12](#f2_not_aunit){reference-type="ref" reference="f2_not_aunit"}.
Thus, since $S$ is an affine domain, again [@Eisenbud Corollary 13.11] asserts that $$\text{dim}~ \mathbb{C}[U^1_{i,j}] = \text{dim}~ S - 1 = 2.$$ Therefore, $\text{dim}~ \EuScript{M}_{\upvartheta_0}(\Lambda_{\boldsymbol{\upgamma}},~\updelta) = 2.$\ Now set $K = (f_1,~ f_2),$ and consider the Jacobian matrix $$\mathcal{J} = \begin{pmatrix}\label{Jacobianminors} \frac{\partial f_1}{\partial u_{2i}}& &\frac{\partial f_1}{\partial d_{2i}}& &\frac{\partial f_1}{\partial u_{3j}}& &\frac{\partial f_1}{\partial d_{3j}} \\ &&&&&&\\ \frac{\partial f_2}{\partial u_{2i}}& &\frac{\partial f_2}{\partial d_{2i}}& &\frac{\partial f_2}{\partial u_{3j}}& &\frac{\partial f_2}{\partial d_{3j}} \end{pmatrix} = \begin{pmatrix} d_{2i}& &u_{2i}& & -d_{3j}& & -u_{3j} \\ &&&&&&\\ \frac{\partial f_2}{\partial u_{2i}}& &\frac{\partial f_2}{\partial d_{2i}}& &\frac{\partial f_2}{\partial u_{3j}}& &\frac{\partial f_2}{\partial d_{3j}} \end{pmatrix}.$$ Write $\mathrm {J}$ for the ideal generated by the $2 \times 2$ minors of $\mathcal{J}$, together with $K.$ Then $$\begin{aligned} -f_{21} &= -\left(d_{2i} \cdot \frac{\partial f_{21}}{\partial d_{2i}} - u_{2i} \cdot \frac{\partial f_{21}}{\partial u_{2i}}\right) \tag{by Lemma \ref{Jacobianpr}}\\ &= d_{2i} \cdot \frac{\partial f_2}{\partial d_{2i}} - u_{2i} \cdot \frac{\partial f_2}{\partial u_{2i}} \tag{since $f_2 = 1 - f_{21} + f_{22}$ and $f_{22}$ does not involve $d_{2i}, u_{2i}$}\\ &= \frac{\partial f_1}{\partial u_{2i}} \cdot \frac{\partial f_2}{\partial d_{2i}} - \frac{\partial f_1}{\partial d_{2i}} \cdot \frac{\partial f_2}{\partial u_{2i}}, \end{aligned}$$ and thus $f_{21} \in \mathrm {J}.$\ With a similar argument as above, using the last $2 \times 2$ minor $$\frac{\partial f_1}{\partial u_{3j}} \cdot \frac{\partial f_2}{\partial d_{3j}} - \frac{\partial f_1}{\partial d_{3j}} \cdot \frac{\partial f_2}{\partial u_{3j}} = \left( -d_{3j} \right) \cdot \frac{\partial f_{22}}{\partial d_{3j}} - \left( -u_{3j} \right) \cdot \frac{\partial f_{22}}{\partial u_{3j}},$$ together with Lemma [Lemma 11](#Jacobianpr){reference-type="ref" reference="Jacobianpr"}, $$f_{22} = d_{3j} \cdot \frac{\partial f_{22}}{\partial d_{3j}} - u_{3j} \cdot \frac{\partial f_{22}}{\partial u_{3j}} \in \mathrm {J}.$$ Thus, since $f_2 = 1 - f_{21} + f_{22}$ is in $\mathrm {J},$ it follows that $1 \in \mathrm {J}.$ As is standard, see e.g. [@Greuel Algorithm 5.7.6], this implies that $\EuScript{M}_{\upvartheta_0}(\Lambda_{\boldsymbol{\upgamma}},~\updelta)$ is smooth. *Case 2.* The proofs for $\mathbb{C}[U^2_{i,j}]$ and $\mathbb{C}[U^3_{i,j}]$ are similar to that of $\mathbb{C}[U^1_{i,j}]$ above. ◻ # Simultaneous Resolution {#SimResSection} This section considers the invariant representation variety associated to the quiver of the reconstruction algebra in §[4.2](#ReconAlg){reference-type="ref" reference="ReconAlg"}, and finds its generators in terms of cycles; the deformed reconstruction algebra introduced in §[3.3](#Deformed){reference-type="ref" reference="Deformed"} is then used to achieve simultaneous resolution. ## Representation Variety Below, we say that arrows $p_1,\hdots,p_n$ are *composable* if $h(p_i) = t(p_{i+1}) ~\text{for all}~ i=1, \hdots , n-1.$ **Lemma 14**. *Let $Q$ be the quiver in [[\[ReconAlgebraQuiver\]](#ReconAlgebraQuiver){reference-type="eqref" reference="ReconAlgebraQuiver"}]{.nodecor}. With notation as in §[[3.1](#Generalities){reference-type="ref" reference="Generalities"}]{.nodecor}, $\mathcal{\EuScript{R}}^G$ is generated by $p + I$ where $p$ is a cycle in $Q$.* *Proof.* Choose a monomial $p = p_1 \hdots p_n \in \EuScript{R},$ where the $p_i$ are arrows. We claim that $\upmu \cdot (p + I) = p~+I ~ \text{for all} ~ \upmu \Leftrightarrow p$ is a cycle. First observe that $\upmu \cdot (p + I) = \upmu \cdot p + I = (\upmu_{t(p_1)} \hdots \upmu_{t(p_n)})^{-1} p (\upmu_{h(p_1)} \hdots \upmu_{h(p_n)}) + I$. ($\Leftarrow$) If $p$ is a cycle, in particular it is composable.
Thus for all $\upmu \in G,$ $$\begin{aligned} \upmu \cdot (p + I) &= \upmu \cdot p + I\\ &= \upmu_{t(p_1)}^{-1} p_1\upmu_{h(p_1)}\upmu_{t(p_2)}^{-1}p_2\upmu_{h(p_2)} \hdots \upmu_{t(p_n)}^{-1}p_n \upmu_{h(p_n)}+ I\\ &= \upmu_{t(p_1)}^{-1} \upmu_{h(p_n)} p_1 p_2 \hdots p_n + I\\ &= \upmu_{t(p_1)}^{-1} \upmu_{h(p_n)} p + I\\ &= p + I. \tag{since $t(p_1) = h(p_n)$} \end{aligned}$$ Hence $p + I\in \EuScript{R}^G$. ($\Rightarrow$) Suppose that $p + I \in \EuScript{R}^G$, so that $\upmu \cdot (p + I) = \upmu \cdot p + I = p~+I$ for all $\upmu$. Then $\upmu_{h(p_1)}$ must cancel $\upmu_{t(p_i)}^{-1}$ for some $i$, so $h(p_1) = t(p_i).$ Now consider $\upmu_{h(p_i)}.$ It must cancel $\upmu_{t(p_j)}^{-1}$ for some $j$, so $h(p_i) = t(p_j).$ Continuing in this way, we can assume $p = p_1p_ip_j \hdots p_m$ where $p_1p_ip_j \hdots p_m$ is composable. But then $\upmu \cdot (p + I) = \upmu \cdot p + I = \upmu_{t(p_1)}^{-1} \cdot p \cdot \upmu_{h(p_m)} + I$ and so, since $\upmu \cdot (p + I)= p + I$, $t(p_1) = h(p_m),$ and $p$ is a cycle. ◻ ## Reconstruction Algebras {#ReconAlg} By Lemma [Lemma 14](#lemcyclesgen){reference-type="ref" reference="lemcyclesgen"}, $\EuScript{R}^G$ is generated by cycles. This subsection finds a finite generating set by considering $$\EuScript{S}_1 \coloneqq \{ 2\textnormal{-cycles}\}\bigcup \left\{ \begin{array}{cccc} D_1U_1, &D_1U_2, &D_1U_3\\ D_2U_1, &D_2U_2, &D_2U_3\\ D_3U_1, &D_3U_2, &D_3U_3 \end{array} \right\}.$$ **Proposition 15**. *$\EuScript{R}^G$ is generated as a $\mathbb{C}$--algebra by the set $\EuScript{S}_1.$* *Proof.* For any vertex $v$, consider a non-trivial cycle $p$ at $v$; it must leave the vertex. According to the quiver, there are two options: *Case 1.* The path $p$ starts with a downward arrow ($p = d_{ij}p^{\prime})$ from vertex $v$ along the $i^{th}$ arm.
Since $p$ is a cycle, $p^{\prime} \colon h(d_{ij}) \rightarrow v.$ If $p^{\prime}$ continues downwards, then at some stage it stops travelling downwards, and we can write $p = d_{ij}d_{ij+1}\hdots d_{ik} p^\prime$ for some $p^{\prime}\colon a \rightarrow v.$ If $a \neq b,$ where $b$ is the bottom vertex, then $p^\prime$ starts upwards, so $$p = d_{ij} \hdots \underbrace{(d_{ik}u_{ik})}_{z}p^{\prime\prime} \sim z \cdot (\text{a cycle of length smaller than } p),$$ thus by induction, $p \in \left<\EuScript{S}_1\right>$. Hence we can assume $a = b,$ and $p^{\prime}$ must travel up one of the arms. According to the quiver $Q$ in §[2](#Preliminaries){reference-type="ref" reference="Preliminaries"}, there are two cases. 1. $p^{\prime}$ travels up the $i^{th}$ arm. Again $p = d_{ij} \hdots (d_{ip_i}u_{ip_i})p^{\prime\prime},$ and by induction we are done. 2. $p^{\prime}$ travels up one of the other $2$ arms. If it doubles back on itself before reaching the top ($0^{th}$ vertex), we are done by induction since $\EuScript{S}_1$ contains all $2$-cycles. Hence we can assume $p^\prime$ reaches the top, so $p= d_{ij} \hdots d_{ip_i}U_lp^{\prime\prime}$ for some $l \neq i.$ Repeating, either $p^{\prime\prime}$ travels down the $i^{th}$ arm without doubling back, in which case $p \sim D_iU_l p^{\prime\prime\prime},$ or $p^{\prime\prime}$ travels down the $k^{th}$ arm $(k \neq i)$ without doubling back, in which case $p \sim(d_{ij} \hdots d_{ip_i}) (U_lD_k) p^{\prime\prime}.$ In either case, by induction we are done. *Case 2.* The path $p$ starts with an upward arrow. This is very similar to Case 1, after interchanging the downward and the upward arrows.
◻ Now, set $$\begin{aligned} \EuScript{S}_2 & \coloneqq \{ 2\textnormal{-cycles}\}\bigcup \left\{ \begin{array}{ccccc} &D_1U_2, &D_1U_3\\ D_2U_1, &&D_2U_3\\ %D_3U_1, &D_3U_2, &\hdots &D_3U_n\\ %&\vdots&\\ %D_nU_1, &D_nU_2, &\hdots &D_nU_n \end{array} \right\}, \\ \EuScript{S}_3 & \coloneqq \{ 2\textnormal{-cycles}\}\bigcup\,\, \{D_1U_2,~D_2U_1,~D_2U_3\}. \end{aligned}$$ **Lemma 16**. *$\left<\EuScript{S}_3\right> = \left<\EuScript{S}_2\right> = \left<\EuScript{S}_1\right>.$* *Proof.* Since $\EuScript{S}_3 \subseteq \EuScript{S}_2 \subseteq \EuScript{S}_1,$ it suffices to prove that $\left<\EuScript{S}_1\right> \subseteq \left<\EuScript{S}_2\right> \subseteq \left<\EuScript{S}_3\right>.$ For $\left<\EuScript{S}_1\right> \subseteq \left<\EuScript{S}_2\right>,$ multiplying the relation $D_3 = D_2 - D_1$ by $U_1, U_2, U_3$ shows that all elements in $\EuScript{S}_1$ can be generated by the elements in $\EuScript{S}_2$ (note that each $D_iU_i$ is a product of $2$-cycles). For $\left<\EuScript{S}_2\right> \subseteq \left<\EuScript{S}_3\right>,$ the only element of $\EuScript{S}_2$ not in $\EuScript{S}_3$ is $D_1U_3.$ Since $D_1 U_3 = D_2 U_3 - D_3U_3,$ where $D_3U_3$ is a product of $2$-cycles, $D_1 U_3$ can be expressed in terms of elements of $\EuScript{S}_3.$ ◻ ## Simultaneous Resolution {#SimRes} Recall from ([\[eqnRep\]](#eqnRep){reference-type="ref" reference="eqnRep"}) that $\mathcal{\EuScript{R}} = \mathbb{C}[\mathop{\mathrm{\mathrm{Rep}}}(\mathbb{C}Q,\updelta)]/I,$ and $G = \mathrm{GL}$ acts on $\mathcal{\EuScript{R}},$ so we can form $\mathcal{\EuScript{R}}^G.$ By §[4.2](#ReconAlg){reference-type="ref" reference="ReconAlg"}, $\mathcal{\EuScript{R}}^G$ is generated by $$\begin{aligned} \mathsf{w}_1 &\coloneqq D_1U_2 + I\\ \mathsf{w}_2 &\coloneqq ~~D_2U_1 + I\\ \mathsf{w}_3 &\coloneqq -D_2U_3 + I\\ \mathsf{v}_{i,j} &\coloneqq ~~d_{ij}u_{ij} + I.
\end{aligned}$$ for $i=1, 2, 3 ~\text{and}~ 1 \leq j \leq p_i.$\ Write $(\upbeta_{1}, \upbeta_{2}, \upbeta_{3}, \upalpha_{1,1},\hdots, \upalpha_{1,p_1}, \upalpha_{2,1},\hdots, \upalpha_{2,p_2}, \upalpha_{3,1},\hdots,$ $\upalpha_{3,p_3})$ for the point in Spec $\mathcal{\EuScript{R}}^G$ corresponding to the maximal ideal $(\mathsf{w}_1 -\upbeta_{1}, \mathsf{w}_2 -\upbeta_{2}, \mathsf{w}_3 -\upbeta_{3}, \mathsf{v}_{1,1}-\upalpha_{1,1},\hdots,\mathsf{v}_{3,p_3}-\upalpha_{3,p_3}).$ Let $Q$ be the quiver of the reconstruction algebra, and consider the map $$\uppi \colon \mathrm{Spec}\, \mathcal{\EuScript{R}}^G = \mathop{\mathrm{\mathrm{Rep}}}(\mathbb{C}Q/I,~\updelta)~ /\!\!\!\!/~ \mathrm{GL} \rightarrow \Delta,$$ defined by taking $$\begin{tikzpicture} \node (A) at (0,0) {$(\upbeta_{1}, \upbeta_{2}, \upbeta_{3}, \upalpha_{1,1},\hdots, \upalpha_{1,p_1}, \upalpha_{2,1},\hdots, \upalpha_{2,p_2}, \upalpha_{3,1},\hdots, \upalpha_{3,p_3})$}; \scriptsize \node (a) at (0,-2) {$((\upalpha_{1,i} - \upalpha_{1,i+1})^{p_1-1}_{i=1},(\upalpha_{2,i} - \upalpha_{2,i+1})^{p_2-1}_{i=1},(\upalpha_{3,i} - \upalpha_{3,i+1})^{p_3-1}_{i=1}, \upalpha_{2,1} - \upalpha_{1,1}, \upalpha_{2,1} - \upalpha_{3,1}, \upalpha_{1,p_1} - \upalpha_{2,p_2}, \upalpha_{3,p_3} - \upalpha_{2,p_2}) .$}; \draw[|->] (A)--(a); \end{tikzpicture}$$ **Remark 17**.
The fibre above a point $\boldsymbol{\upgamma} \in \Delta$ is precisely $\mathop{\mathrm{\mathrm{Rep}}}(\Lambda_{\boldsymbol{\upgamma}},~\updelta) /\!\!/ \mathrm{GL}.$ Indeed, the fibre above $\boldsymbol{\upgamma} \in \Delta$ is the zero locus of $$\begin{array}{ccccc} (1)&~\mathsf{v}_{1,i} - \mathsf{v}_{1,i+1} &= &\upgamma_{1,i} &~ \text{for all}~ 1 \leq i \leq p_1-1\\ (2)&~\mathsf{v}_{2,i} - \mathsf{v}_{2,i+1} &= &\upgamma_{2,i} &~ \text{for all}~ 1 \leq i \leq p_2-1\\ (3)&~\mathsf{v}_{3,i} - \mathsf{v}_{3,i+1} &= &\upgamma_{3,i}&~ \text{for all}~ 1 \leq i \leq p_3-1\\ (a)&~\mathsf{v}_{2,1} - \mathsf{v}_{1,1} &= &a&\\ (b)&~\mathsf{v}_{2,1} - \mathsf{v}_{3,1} &= &b&\\ (c)&~\mathsf{v}_{1,p_1} - \mathsf{v}_{2,p_2} &= &A&\\ (d)&~\mathsf{v}_{3,p_3} - \mathsf{v}_{2,p_2} &= &B& \end{array}$$ By Definition [Definition 6](#DeformedReconAlgbera){reference-type="ref" reference="DeformedReconAlgbera"}, this equals $\mathop{\mathrm{\mathrm{Rep}}}(\Lambda_{\boldsymbol{\upgamma}},~\updelta) /\!\!/ \mathrm{GL}.$ In particular, the fibre above the origin is $\mathop{\mathrm{\mathrm{Rep}}}(\Lambda_{\boldsymbol{0}},~\updelta) /\!\!/ \mathrm{GL}.$ Since $\Lambda_{\boldsymbol{0}}$ is the (undeformed) reconstruction algebra, this is known to be the determinantal singularity corresponding to ([\[s Veron dual graph\]](#s Veron dual graph){reference-type="ref" reference="s Veron dual graph"}). The following is the main result of this paper. **Theorem 18**. 
*The diagram $$\begin{tikzpicture} \node (A) at (0,0) {$\mathop{\mathrm{\mathrm{Rep}}}(\mathbb{C}Q/I,~\updelta)~ /\!\!\!\!/_{\upvartheta_0} \mathrm{GL}$}; \node (B) at (4,0) {$\mathop{\mathrm{\mathrm{Rep}}}(\mathbb{C}Q/I,~\updelta)~ /\!\!\!\!/~ \mathrm{GL}$}; \node (b) at (4,-2) {$\Delta$}; \draw[->] (A)-- node[above] {} (B); \draw[densely dotted,->] (A)-- node[below] {$\upphi$} (b); \draw[->] (B)-- node[right] {$\uppi$} (b); \end{tikzpicture}$$ is a simultaneous resolution of singularities, in the sense that the morphism $\upphi$ is smooth and $\uppi$ is flat.* *Proof.* Write $\upphi$ for the composition $$Y= \mathop{\mathrm{\mathrm{Rep}}}(\mathbb{C}Q/I,~\updelta)~ /\!\!\!\!/_{\upvartheta_0} \mathrm{GL} \rightarrow \mathop{\mathrm{\mathrm{Rep}}}(\mathbb{C}Q/I,~\updelta)~ /\!\!\!\!/~ \mathrm{GL} \rightarrow \Delta.$$ We first claim that $\upphi$ is flat. Since $(1)~\Delta$ is regular, $(2)~Y$ is regular (so Cohen-Macaulay) by [Corollary 5](#ModuliSmooth){reference-type="ref" reference="ModuliSmooth"}, $(3)~\mathbb{C}$ is algebraically closed, so $\upphi$ takes closed points of $Y$ to closed points of $\Delta,$ and $(4)$ for every closed point $\boldsymbol{\upgamma} \in \Delta$ the fibre $\upphi^{-1}(\boldsymbol{\upgamma})$ is, for the same reason as in Remark [Remark 17](#fibreabovegamma){reference-type="ref" reference="fibreabovegamma"}, $\mathop{\mathrm{\mathrm{Rep}}}(\Lambda_{\boldsymbol{\upgamma}},~\updelta)~ /\!\!\!\!/_{\upvartheta_0} \mathrm{GL},$ which is always two-dimensional by Corollary [Corollary 13](#ModuliDeformedReconSmooth){reference-type="ref" reference="ModuliDeformedReconSmooth"}, it follows from [@Matsumura Corollary to 23.1] that $\upphi$ is flat.
Now, as in [@Liu 3.35], to show that $\upphi$ is smooth we just require smoothness (equivalently regularity, as we are working over $\mathbb{C}$) at closed points of fibres above closed points $\boldsymbol{\upgamma} \in \Delta.$ But, as above, $\upphi^{-1}(\boldsymbol{\upgamma})$ is $\mathop{\mathrm{\mathrm{Rep}}}(\Lambda_{\boldsymbol{\upgamma}},~\updelta)~ /\!\!\!\!/_{\upvartheta_0} \mathrm{GL},$ which is regular at all closed points by Corollary [Corollary 13](#ModuliDeformedReconSmooth){reference-type="ref" reference="ModuliDeformedReconSmooth"}. Thus $\upphi$ is a smooth morphism, as required. Finally, the above can be adapted to show that $\uppi$ is flat. We have that $\uppi^{-1}(\boldsymbol{\upgamma})=\mathop{\mathrm{\mathrm{Rep}}}(\Lambda_{\boldsymbol{\upgamma}},~\updelta) /\!\!/ \mathrm{GL},$ which is always two-dimensional as a consequence of the resolution of its singularities computed in Corollary [Corollary 13](#ModuliDeformedReconSmooth){reference-type="ref" reference="ModuliDeformedReconSmooth"}. Thus we can still appeal to [@Matsumura Corollary to 23.1]. ◻ # The Representation Variety in Determinantal Form {#repvar} This section gives a conjectural explicit presentation of the high-dimensional invariant representation variety $\mathcal{\EuScript{R}}^G = \mathop{\mathrm{\mathrm{Rep}}}(\mathbb{C}Q/I,~\updelta)/\!\!/ \mathrm{GL}$ from §[3](#SecSimRes){reference-type="ref" reference="SecSimRes"}. Although not strictly needed for the simultaneous resolution in §[4](#SimResSection){reference-type="ref" reference="SimResSection"}, it links to other work in the literature.
## Determinantal form {#QDetform} Consider a $2 \times n$ matrix $$X=\begin{pmatrix} a_1& &a_2& &\hdots& & a_n \\ b_1& &b_2& &\hdots& & b_n \\ \end{pmatrix}.$$ Following Stevens [@DeformationsofSing §12], consider the $2 \times 2$ *minors* of this $2 \times n$ *matrix*, which, for all $i < j$, are defined to be $$a_i \cdot b_j - b_i \cdot a_j.$$ Write $\mathrm{Det}(X)$ for the set of all $2 \times 2$ *minors* of $X$.\ Consider the natural homomorphism $$\mathbb{C}[\mathsf{w} , \mathsf{v}] \xrightarrow{\varphi} \EuScript{R}^G = \mathop{\mathrm{\mathrm{Rep}}}(\mathbb{C}Q/I,~\updelta)/\!\!/ \mathrm{GL},$$ defined by taking $$\begin{aligned} \mathsf{w}_1 &\mapsto D_1U_2 + I\\ \mathsf{w}_2 &\mapsto ~~D_2U_1 + I\\ \mathsf{w}_3 &\mapsto -D_2U_3 + I\\ \mathsf{v}_{i,j} &\mapsto ~~d_{ij}u_{ij} + I. \end{aligned}$$ for $i=1, 2, 3 ~\text{and}~ 1 <j \leq p_i.$ **Proposition 19**. *The homomorphism $\mathbb{C}[\mathsf{w} , \mathsf{v}] \xrightarrow{\varphi} \EuScript{R}^G$ is surjective, and the $2 \times 2$ minors of the matrix $$\label{R matrix} \left( \begin{array}{ccccc} {\mathsf w}_2&{\mathsf w}_3 &{v_{2,1}\hdots v_{2,{p_2}}}\\ {v_{1,1}\hdots v_{1,{p_1}}}&{\mathsf w}_3+{v_{3,1}\hdots v_{3,{p_3}}}&{\mathsf w}_1 \end{array} \right)$$ are all sent to zero.* *Proof.* Surjectivity follows from Lemma [Lemma 16](#lemmainvargens){reference-type="ref" reference="lemmainvargens"}. The $2 \times 2$ minor of the outer columns gives $$\varphi (\mathsf{w}_2 \mathsf{w}_1 - (\mathsf{v}_{1,1}\hdots \mathsf{v}_{1,{p_1}}) (\mathsf{v}_{2,1}\hdots \mathsf{v}_{2,{p_2}}))= D_2U_1D_1U_2 - D_1U_1D_2U_2 =0.$$ Using $D_1U_i = D_2 U_i - D_3 U_i$ from the relation $x_1^{p_1} = x_2^{p_2} - x_3^{p_3},$ we further have $$\begin{aligned} \varphi ( \mathsf{w}_1\mathsf{w}_3) &= (D_1U_2)(-D_2U_3)\\ &= -(D_1U_3)(D_2U_2)\\ &= (-D_2U_3 +D_3U_3)D_2U_2\\ &= \varphi ((\mathsf{w}_3 + \mathsf{v}_{3,1}\hdots \mathsf{v}_{3,p_3})(\mathsf{v}_{2,1}\hdots \mathsf{v}_{2,p_2})).
\end{aligned}$$ Thus the $2 \times 2$ minor consisting of the middle column and the last one is sent to zero.\ Finally, $$\begin{aligned} \varphi (\mathsf{w}_2(\mathsf{w}_3 + \mathsf{v}_{3,1}\hdots \mathsf{v}_{3,p_3})) &= (D_2U_1)\,\varphi( \mathsf{w}_3 + \mathsf{v}_{3,1}\hdots \mathsf{v}_{3,p_3})\\ &= (D_2U_1)(- D_2U_3 +D_3U_3)\\ &= -(D_2U_1)(D_1U_3)\\ &= (D_1U_1)(-D_2U_3)\\ &= \varphi ((\mathsf{v}_{1,1}\hdots \mathsf{v}_{1,p_1})\mathsf{w}_3).\\ \end{aligned}$$ This shows that the $2 \times 2$ minors of the matrix belong to the kernel of $\varphi,$ as required. ◻ **Conjecture 20**. *The ring homomorphism $\varphi$ is an isomorphism.* Calculations in Magma [@magma] confirm this in small cases, but the Gröbner approach seems hard in general. M. Artin, *Algebraic construction of Brieskorn's resolutions*, J. Algebra **29** (1974), 330--348. E. Brieskorn, *Die Auflösung der rationalen Singularitäten holomorpher Abbildungen. (German)*, Math. Ann. **178** (1968), 255--270. E. Brieskorn, *Singular elements of semi-simple algebraic groups*, Actes du Congrès International des Mathématiciens (Nice, 1970), Tome 2, (1971), 279--284. W. Bosma, J. Cannon, C. Playoust, *The Magma algebra system. I. The user language*, J. Symbolic Comput. **24** (1997), 235--265. http://www.magma.usyd.edu W. Crawley-Boevey, M. P. Holland, *Noncommutative deformations of Kleinian singularities*, Duke Math. J. **92** (1998), no. 3, 605--635. H. Cassens, P. Slodowy, *On Kleinian singularities and quivers*, Progr. Math. **162** (1998), no. 4, 263--288. D. Eisenbud, *Commutative algebra. With a view toward algebraic geometry*, Graduate Texts in Mathematics, **150**. Springer-Verlag, New York (1995). G. Greuel, G. Pfister, *A **Singular** introduction to commutative algebra*, Springer-Verlag, Berlin (2002). O. Iyama, M. Wemyss, *Weighted Projective Lines and Rational Surface Singularities*, Épijournal de Géométrie Algébrique, **3** (2020). P. B.
Kronheimer, *The construction of ALE spaces as hyper-Kähler quotients*, J. Differential Geom. **29** (1989), no. 3, 665--683. Q. Liu, *Algebraic geometry and arithmetic curves*, Oxford Graduate Texts in Mathematics, Oxford University Press, Oxford, (2002), no. 6,  xvi+576 pp. ISBN: 0-19-850284-2. B. Makonzi, *The Artin Component and Simultaneous Resolution via Reconstruction Algebras of Type $A$*, arXiv preprint arXiv:2208.11966 (2022). H. Matsumura, *Commutative ring theory. Translated from the Japanese by M. Reid. Second edition*, Cambridge Studies in Advanced Mathematics, Cambridge University Press, Cambridge, (1989), no. 8,  xiv+320 pp. ISBN: 0-521-36764-6 13-01. O. Riemenschneider, *Deformations of rational singularities and their resolutions*, Rice Univ. Stud. **59** (1973), no. 1, 119--130. O. Riemenschneider, *Cyclic quotients surface singularities: constructing the Artin Component via the McKay-Quiver*, Singularities and complex analytic geometry (Japanese) (Kyoto, 1997). Surikaisekikenkyusho Kokyuroku, (1998), no. 1033, 163--171. C. M. Ringel, *Tame algebras and integral quadratic forms*, Lecture Notes in Mathematics, **1099**. Springer-Verlag, Berlin (1984). J. Stevens, *Deformations of singularities*, Lecture Notes in Mathematics, **1811**. Springer-Verlag, Berlin (2003). M. Wemyss, *Noncommutative resolutions*, Math. Sci. Res. Inst. Publ. **64** (2016), 239--306. J. Wunram, *Reflexive modules on quotient surface singularities*, Math. Ann. **279** (1988), no. 4, 583--598.
--- abstract: | In this document we define a method of proof that we call proof by dichotomy. Its field of application is any proposition on the set of natural numbers $\mathbb{N}$. It consists in the repetition of a single step. Each step proves the proposition for half of the members of an infinite subset $U$ of $\mathbb{N}$, the set of members for which we do not yet know whether the proposition is verified or not. We particularly study the case where the elements of $U$ are separated according to the parity of the quotient of the Euclidean division by $2^k$. In this case, we prove that if a natural number $n$ does not verify the proposition, then it is unique. address: EPOC - UMR CNRS 5805 - Equipe PROMESS, Université de Bordeaux - Bordeaux-INP, Avenue des Facultés, CS60099, 33400 Talence, France author: - Laurent Fallot date: Revision 2.2 title: Notes on proof by dichotomy --- # Acknowledgement While working on the Collatz problem, we introduced this method of proof and decided to write some personal notes on it. Having left this field of interest a long time ago, we do not have an up-to-date bibliography on this area of mathematics. Unsure whether the method is already known, we chose to share these informal notes with the research community. We apologize if this work is well known; we do not intend to claim anyone else's work. # Introduction Proof by induction is commonly used in many sciences. It is a very well-known method of proof, so we only recall its principle. This method is used to validate a given proposition $P$ on the set of natural numbers $\mathbb{N}$. It starts with the proof of a base case, often $0$ but not necessarily. This done, one builds the induction step, which consists in proving $P(N)$, independently of the value of $N$, from the hypothesis that $P(n)$ holds for all $n<N$. To this day, every attempt to use this method of proof to settle, for instance, the Collatz problem has failed. Many reasons for these failures may be given.
Mainly, and to simplify, it seems that for any $N$ one may find a new record beyond $N$. Today, many publications about that problem aim to improve the density of solved cases in $\mathbb{N}$. Suppose that in this way we reach a density of $1$. Can we conclude that the Collatz problem is solved? This is the question we try to answer. In this document, we introduce a new method of proof that we call proof by dichotomy. We start with a subset of $\mathbb{N}$ denoted by $U_0(P)$. This set is composed of all the natural numbers for which we want to prove the proposition $P$. As in a proof by induction, a proof by dichotomy is an infinite repetition of a single step. But it differs from a proof by induction in that during the $k^{th}$ step we split $U_{k-1}(P)$ into two equally distributed parts and prove that $P$ is verified on exactly one of these parts; for the other one, $P$ stays unsolved. So, along the proof we maintain two sets: $U_k(P)$, which contains all the numbers for which we do not yet know whether they verify $P$ or not, and $S_k(P)$, the set of numbers $n$ for which we have proved $P(n)$. Note that during step $k$ we can use the property that separates $U_{k-1}(P)$ into two parts, together with the fact that $P(n)$ is true for any $n\in S_{k-1}(P)$, to validate $P$ on one of the two parts. A quick statistical argument suggests that if we have a proof of a proposition $P$ on the set $\mathbb{N}$ obtained in this way, then $P$ is verified for every natural number. But a closer look shows that this is false in the general case. We particularly study an approach driven by the normalisation of any number as $2^k q + r$ where $r<2^k$. We prove that there can exist a natural number that does not verify the property $P$ we are trying to prove, but that this number is unique in this case. Further, we give indications on when this counter-example of $P$ exists and what it can be.
# Position of the problem The term proposition denotes any map defined from $\mathbb{N}$ into the Boolean algebra $\{True, False\}$. Let $n$ be a natural number and $P$ a proposition. We say that - $n$ verifies or satisfies $P$ when $P(n)=True$, - $n$ denies or refutes $P$ when $P(n)=False$ and - $n$ is an unsolved case for $P$ when we do not know the value of $P(n)$ at a given step of a proof. A counter-example of a proposition $P$ is a natural number that refutes $P$. We now define what we call a \"proof by dichotomy\" on the set of natural numbers $\mathbb{N}$. **Definition 1** (Proof by dichotomy). *Let $P$ be a proposition we want to prove on the set of natural numbers $\mathbb{N}$ or on one of its infinite subsets. The proof by dichotomy method is an infinite application of a single step. This step consists in splitting the set of unsolved cases into two equally distributed parts and in proving, from the solved cases, that the proposition is verified on exactly one of these parts. The second part is then the set of remaining unsolved cases, given as the input of the next step.* In fact, it is easy to see that this method of proof maintains two sequences of sets: a sequence $\left ( U_k(P) \right ) _{k\in\mathbb{N}}$ : each of these $U_k(P)$ contains all the unsolved cases of $P$ after step $k$; it starts with $U_0(P)$, the infinite subset of $\mathbb{N}$ on which we want to prove $P$. a sequence $\left ( S_k(P) \right ) _{k\in\mathbb{N}}$ : each of these $S_k(P)$ is composed of the cases solved during the $k$ first steps; we start with the empty set for $S_0(P)$. At the $k^{th}$ step, we divide $U_{k-1}(P)$ into two equally distributed subsets: $U_{k,1}(P)$ and $U_{k,2}(P)$. This partition of $U_{k-1}(P)$ is chosen so that we can prove that $P$ is valid on exactly one of these subsets. The members of the second one remain unsolved cases.
This is because, if we were able either to prove $P$ on both parts or to prove $P$ on one of them and its negation on the other part, the proof would be complete and out of interest here. Let us say, without loss of generality, that we can prove that $P$ is valid on $U_{k,1}(P)$. Then we know the result of $P(n)$ for every $n\in U_{k,1}(P)$, but we still do not when $n\in U_{k,2}(P)$. In this way, we can define $U_k(P) = U_{k,2}(P)$ and build $S_k(P) = S_{k-1}(P)\cup U_{k,1}(P)$. More precisely, it is easy to see that any member of $S_k(P)$ verifies $P$, because we add to this set only the numbers for which we proved that they verify $P$. So, we can use this property of $S_{k-1}(P)$ to justify that $P$ is valid on $U_{k,1}(P)$. This method of proof is not so far from a proof by induction, but the induction step differs. Instead of proving that \"$P(n)$ for all $n<n_0$ implies $P(n_0)$\", whatever $n_0$ is, we have to prove that from a set of unsolved natural numbers we can extract half of them by the use of the solved cases. Suppose then that we start with $\mathbb{N}$ as the first set of unsolved numbers for a given proposition $P$ to be proved. After a first step, we know that one out of two of them verifies $P$, but we do not know the answer for the other half. A second step leaves only a quarter of the numbers unsolved for $P$, and so on, so that after the application of $k$ steps only a fraction $1/2^k$ of the natural numbers stays unsolved for $P$. Let us call this number the density of unsolved cases at step $k$. An infinite application of this step seems to prove that $P$ is verified for all the natural numbers. Indeed, the density of unsolved cases tends to $\lim\limits_{k \rightarrow \infty} 1/2^k = 0$. But what happens in fact? Consider that we have an initial infinite set of unsolved cases. Dividing it in two parts still gives an infinity of unsolved cases.
This means that whatever the number of steps we apply, we always get an infinity of numbers that potentially deny $P$. Further, if we try to count the final number of cases that may refute $P$, we get $\lim\limits_{k \rightarrow \infty} \infty / 2^k = \infty/\infty$, which is undefined. So, the number of potential counter-examples may be anything: $0$, any finite number or infinity. Let us study a particular case of this method of proof, driven by the binary code of a natural number. #### Remark Before going further, note that we focus this method of proof on infinite subsets because this is the most complex case. For finite sets, if $N$ is the initial number of unsolved cases, this finite number is halved at each step. The sequence obtained is a strictly decreasing sequence of positive numbers starting from a finite positive integer. So, in the worst case, we get at most one potential counter-example after roughly $\log_2(N)$ steps, where $\log_2$ denotes the base-2 logarithm. # Proof by dichotomy driven by the Euclidean division of natural numbers by $2^k$ ## Notations and conventions In what follows, we use the notation $a\,/\,b$, where $a,b \in \mathbb{N},\; b\neq 0$, to denote the quotient of the Euclidean division of $a$ by $b$. In the same way, $a\,\%\,b$ denotes the remainder of this division, also called the remainder of $a$ modulo $b$. By the definition of the Euclidean division, for any $k \in \mathbb{N}$ we can write any natural number $n$ as $n=2^k \cdot q_{k,n} + r_{k,n}$ where $q_{k,n} = n\,/\,2^k$ and $r_{k,n} = n\,\%\,2^k$. This normalization is unique. We use this property to build the dichotomy method we discuss further. ## Construction Let $P$ be a proposition we want to prove on $\mathbb{N}$. Let $U_0(P)=\mathbb{N}$ be the initial set of unsolved natural numbers. Note that we can also start with any infinite subset of $\mathbb{N}$ if we need to prove $P$ only on this subset.
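The normalization and the halving of the density of unsolved cases are easy to check numerically. Here is a minimal sketch in plain Python (the function and variable names are ours, not part of the text), simulating on a finite range a run in which, at each step, the odd-quotient half is the half that gets solved:

```python
# Illustrative sketch (our own naming): the quotient n / 2**k and
# remainder n % 2**k of the Euclidean division, plus a toy simulation
# of k dichotomy steps on the finite range [0, N).

def normalize(n, k):
    q, r = divmod(n, 2 ** k)      # q = n / 2**k, r = n % 2**k in the notation above
    assert n == 2 ** k * q + r and 0 <= r < 2 ** k   # the normalization is unique
    return q, r

def unsolved_after(k, N):
    """Unsolved cases left after k steps, assuming at every step j the
    odd-quotient half is the one proved to satisfy P."""
    U = range(N)
    for j in range(1, k + 1):
        U = [n for n in U if normalize(n, j)[0] % 2 == 0]  # even quotients stay
    return U

N = 1024
for k in range(1, 6):
    print(k, len(unsolved_after(k, N)) / N)   # densities 1/2, 1/4, 1/8, ...
```

On $[0, 1024)$ the density of unsolved cases after $k$ steps is exactly $1/2^k$, in line with the discussion above, yet run over all of $\mathbb{N}$ every $U_k(P)$ would remain infinite.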
At step $k>0$, we divide $U_{k-1}(P)$ into two parts according to the parity of $n\,/\,2^k$ for $n\in U_{k-1}(P)$: on one side, the $n$ having odd quotients; on the other side, the $n$ having even ones. We suppose that we can prove $P$ on exactly one of these two subsets of $U_{k-1}(P)$. This constructs a proof by dichotomy driven by the Euclidean division of natural numbers by $2^k$. We now prove that there cannot exist more than one counter-example of $P$. ## Validity of a proof by dichotomy driven by the Euclidean division of natural numbers by $2^k$ We claim the following theorem. **Theorem 1**. *Let $P$ be a proposition on $\mathbb{N}$. If there exists a proof of $P$ by dichotomy driven by the Euclidean division of natural numbers by $2^k$, then $P$ is satisfied by every natural number but at most one.* Before we introduce the proof of theorem [Theorem 1](#thm:general){reference-type="ref" reference="thm:general"}, recall that we maintain two sequences of sets along such a proof: $\left ( U_k(P)\right)_{k\in\mathbb{N}}$, the sets of unsolved cases, and $\left ( S_k(P)\right)_{k\in\mathbb{N}}$, the sets of solved cases. Furthermore, for any $k$, the pair $(U_k(P),S_k(P))$ is a partition of the starting set $U_0(P)$. Note now that any member $n$ of $U_k(P)$ migrates to $S_{k+1}(P)$ if and only if we can prove that $n$ satisfies $P$. So, any member of $S_k(P)$ verifies $P$ whatever $k$ is. Let us now prove theorem [Theorem 1](#thm:general){reference-type="ref" reference="thm:general"}. *Proof.* Under the conditions of theorem [Theorem 1](#thm:general){reference-type="ref" reference="thm:general"}, let us suppose that there exists a finite natural number $n$ that refutes $P$, and let us prove that $n$ is the only one. As $n$ denies $P$, $n$ cannot be a member of any $S_k(P)$, because the members of these sets verify $P$ by construction. This implies that $n\in U_k(P)$ for any $k\in \mathbb{N}$.
It is clear that for any natural number $m\neq n$, there exists at least one $k\in \mathbb{N}$ such that $m = 2^k q_{k,m} + r_{k,m}$ and $n=2^k q_{k,n} + r_{k,n}$ with different parities of $q_{k,m}$ and $q_{k,n}$. Consider the smallest of these $k$; then we have $r_{k,n} = r_{k,m}$. The natural number $n$ is supposed finite, so we can define $k_0$ such that $2^{k_0}>n\geq 2^{k_0-1}$. That is, $n = 2^{k_0-1} + r$ where $r=n\,\%\,2^{k_0-1}$. Let us consider two cases: $k\leq k_0$ and $k > k_0$. #### Case $k\leq k_0$ We know that $n\in U_{k-1}(P)$, otherwise we would have proved that $n$ verifies $P$. In the same way, it is easy to see that $m\in U_{k-1}(P)$, as it shares the same remainder modulo $2^k$ with $n$. But $n\in U_{k}(P)$, which by hypothesis implies that we can prove $P$ for any member of $U_{k-1}(P)$ whose quotient parity differs from that of $q_{k,n}$. As $q_{k,m}$ and $q_{k,n}$ have different parities, and as we are not able to conclude whether $n$ verifies $P$ or not at this step, we get that $m$ verifies $P$ by the definition of a proof by dichotomy. #### Case $k > k_0$ As in the previous case, $n\in U_{k-1}(P)$ and $m\in U_{k-1}(P)$ for the same reasons. The defined $k_0$ verifies that $n = 2^{k_0-1} + r_{k_0-1,n}$. This implies that $q_{k,n} = 0$, and thus that $q_{k,n}$ is necessarily even. Furthermore, as $n$ refutes $P$, we are not able to prove that $P$ is satisfied for even quotients at any step $k>k_0$. By deduction from the hypothesis, $P$ is then satisfied by any natural number with an odd quotient by the Euclidean division by $2^k$. As $q_{k,n}$ and $q_{k,m}$ have opposite parities, $q_{k,m}$ is odd and $m$ satisfies $P$. In conclusion, whatever the case is, if a natural number $m$ differs from $n$, it satisfies $P$, and $n$ is the only unsolved natural number for $P$. So, if a number $n$ refutes $P$ under the conditions of theorem [Theorem 1](#thm:general){reference-type="ref" reference="thm:general"}, it is the only one.
Finally, $P$ is verified for every natural number but at most one. ◻ Theorem [Theorem 1](#thm:general){reference-type="ref" reference="thm:general"} claims that we cannot have more than one natural number that refutes $P$ once a proof of $P$ by dichotomy driven by the Euclidean division of natural numbers by $2^k$ is established. In fact, the proof does not ensure the existence of a counter-example. It just claims that there exists at most one unsolved natural number. Let us introduce a first corollary describing the consequences of the existence of a counter-example. **Corollary 1**. *Under the hypothesis of theorem [Theorem 1](#thm:general){reference-type="ref" reference="thm:general"}, the existence of a counter-example implies that there exists $K\in \mathbb{N}$ such that for any $k>K$ we are only able to prove $P$ when the quotient by the Euclidean division by $2^k$ is odd.* This corollary may easily be deduced from the proof of theorem [Theorem 1](#thm:general){reference-type="ref" reference="thm:general"}. *Proof.* From the second case of the proof of theorem [Theorem 1](#thm:general){reference-type="ref" reference="thm:general"}, we get that if there exists an $n$ refuting $P$ and it is of the form $n=2^{k_0}+(n\,\%\,2^{k_0})$, then for any $k>k_0,~n\,/\,2^k = 0$, so it is even. As $n$ cannot be in any $S_k(P)$, this implies that we are able to prove $P$ for any natural number $m$ such that $m\,/\,2^k$ is odd, and this whatever $k>k_0$ is. ◻ From the proof of theorem [Theorem 1](#thm:general){reference-type="ref" reference="thm:general"}, especially from the second case, it is easy to understand which number $n$ is unsolved. Indeed, we can construct the binary code of $n$ by writing down, from right to left, a $0$ when we can prove $P$ for odd quotients and a $1$ otherwise, while running the steps from the first one onwards. The number $n$ is supposed finite, so let $k_0$ be defined such that $n=2^{k_0} + (n\,\%\,2^{k_0})$.
Corollary [Corollary 1](#cor:counter_example_exist){reference-type="ref" reference="cor:counter_example_exist"} claims that for every $k>k_0$, $P$ can be proved only for odd quotients. This corresponds to the leading zeroes we can put on the binary code of $n$. So, we just have to follow the steps of the proof from the first one to the $k_0^{th}$ one. Thus, we can conclude the proof of $P$ mainly in two ways. On the one hand, we may be satisfied with a unique possible counter-example, for instance because it is out of interest. In this case, we only need to know the value of $n$; the method to obtain it is described above. On the other hand, if we need to finish the proof of $P$ on $U_0(P)$, we just have to rule out the existence of a counter-example. For instance, we can proceed either by identifying this unsolved natural number and proving that it does in fact verify $P$, or by refuting the existence of a single counter-example in any other way. We now present some more corollaries of theorem [Theorem 1](#thm:general){reference-type="ref" reference="thm:general"}. The first one may be perceived as a contrapositive of corollary [Corollary 1](#cor:counter_example_exist){reference-type="ref" reference="cor:counter_example_exist"}. **Corollary 2**. *Under the hypothesis of theorem [Theorem 1](#thm:general){reference-type="ref" reference="thm:general"}, if for any $K\in\mathbb{N}$ there exists at least one $k>K$ such that we can prove $P$ on even quotients of the division by $2^k$, then every finite natural number verifies $P$. That is, we can conclude that $P$ is verified over $U_0(P)$.* *Proof.* The proof of this corollary is easy to get.
Indeed, note that the hypothesis of corollary [Corollary 2](#cor:no_counter_example){reference-type="ref" reference="cor:no_counter_example"} denies the conclusion of corollary [Corollary 1](#cor:counter_example_exist){reference-type="ref" reference="cor:counter_example_exist"}, thereby refuting the hypothesis of that corollary, namely the existence of a finite counter-example of $P$. So, we cannot have a counter-example under the hypothesis of corollary [Corollary 2](#cor:no_counter_example){reference-type="ref" reference="cor:no_counter_example"}. ◻ The following corollary may be seen as the converse of corollary [Corollary 1](#cor:counter_example_exist){reference-type="ref" reference="cor:counter_example_exist"}, in the sense that it gives a sufficient condition to ensure the existence of a finite unsolved case. **Corollary 3**. *Under the hypothesis of theorem [Theorem 1](#thm:general){reference-type="ref" reference="thm:general"}, if we can find a $K\in \mathbb{N}$ such that for any $k>K$ we are only able to prove $P$ when the quotient by the Euclidean division by $2^k$ is odd, then there exists a unique unsolved case $n$.* *Proof.* By following the steps from the first to the $K^{th}$, we saw previously that we can build a finite unsolved $n$ for $P$. In fact, $U_K(P) = \{2^{K+1} a + n, a\in \mathbb{N}\}$. The further steps progressively remove from $U_K(P)$ all of its elements having $a\neq 0$. In conclusion, after the infinite application of the step, only $n$ remains. This validates corollary [Corollary 3](#cor:may_be_counter_example){reference-type="ref" reference="cor:may_be_counter_example"}. ◻ Recall that the existence of an unsolved case does not imply the existence of a counter-example. So, under the conditions of corollary [Corollary 3](#cor:may_be_counter_example){reference-type="ref" reference="cor:may_be_counter_example"} alone, we cannot affirm the existence of a counter-example.
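The construction of the binary code of the single unsolved candidate can be sketched in plain Python. The code below is our own illustration, under the reading in which the choice made at step $j$ contributes bit $j-1$ of the candidate (so that step $1$ is the even/odd split); the names `candidate` and `choices` are ours:

```python
# Hypothetical sketch (our own naming): rebuild the single unsolved
# candidate from the per-step choices, writing bit j-1 = 0 when step j
# proved P on odd quotients and bit j-1 = 1 when it proved P on even ones.

def candidate(choices):
    """choices[j] is 'odd' or 'even': the side on which P was proved
    at step j+1. Bits beyond len(choices) are the leading zeroes
    described in Corollary 1."""
    n = 0
    for j, side in enumerate(choices):
        if side == 'even':
            n |= 1 << j       # the unsolved half had an odd quotient here
    return n

print(candidate(['odd'] * 8))              # only 'odd' choices: candidate 0
print(candidate(['even', 'odd', 'even']))  # binary 101: candidate 5
```

With `['odd'] * 8` every step proves $P$ on odd quotients and the candidate is $0$, matching Corollary 4.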
A particular case of corollary [Corollary 3](#cor:may_be_counter_example){reference-type="ref" reference="cor:may_be_counter_example"} is the following. **Corollary 4**. *Under the hypothesis of theorem [Theorem 1](#thm:general){reference-type="ref" reference="thm:general"}, suppose that at any step $k$ we can only prove $P$ when $n\,/\,2^k$ is odd. In such a case, the only possible counter-example for $P$ is $0$. If $0$ is not a member of the initial set, then $P$ is verified on $U_0(P)$.* *Proof.* The proof of corollary [Corollary 4](#cor:only_odd){reference-type="ref" reference="cor:only_odd"} is immediate from corollary [Corollary 3](#cor:may_be_counter_example){reference-type="ref" reference="cor:may_be_counter_example"}. In fact, it is corollary [Corollary 3](#cor:may_be_counter_example){reference-type="ref" reference="cor:may_be_counter_example"} with $K=0$, so the natural number $n$ built in its proof is $0$, which yields corollary [Corollary 4](#cor:only_odd){reference-type="ref" reference="cor:only_odd"}. ◻ ## Conclusion of the validation We claimed that a proof by dichotomy driven by the Euclidean division by $2^k$ can give at most one counter-example of $P$. On the one hand, if, along the infinite application of the step, we can regularly prove that $P$ is satisfied for an even quotient, then this counter-example does not exist. On the other hand, if from a given $K$ onwards we are only able to prove $P$ for numbers having an odd quotient, we may have at most one unsolved case. The path followed during the $K$ first applications of the step gives an indication of the potential counter-example. If there exists an unsolved case $n$, we are free to ignore $n$, to prove or refute $P(n)$ separately, or to use another way to rule out the existence of a single counter-example. Let us take the Collatz problem as an example of usage of this method of proof.
# Example: application to the Collatz problem We warn the reader that we do not intend to give either a proof or a refutation of the Collatz problem. We just point out that if someone could build a proof of this problem by dichotomy driven by the Euclidean division by $2^k$, then it would be solved for any natural number different from $0$. ## Definition of the problem The Collatz problem is known under many different names. Some call it \"the $3x+1$ problem\", the \"Syracuse problem\" or the \"Ulam problem\", and so on. This problem is easy to define and looks like an easy one, but although many scientists have proposed other formulations of it, expecting that a proof would come from such a model, none has led to a positive result, nor to a negative one. So it is still an open problem. First, let us recall it. Collatz defined a function on natural numbers that we shall denote by $C(n)$. We can define it by $C(n) = n/2$ if $n$ is even and $C(n) = 3n+1$ if $n$ is odd. There exist some different formulations of this function, but this is not important for our purpose. Collatz then looked at successive applications of $C$ to any natural number $n$. In this way, one can build a sequence $(n, C(n), C^2(n), \cdots)$ issued from any nonzero natural number. It seems that all these sequences contain the natural number $1$ at least once. Numerous numerical tests have failed to find a counter-example to this day. ## Such a proof by dichotomy of Collatz problem would validate it On the one hand, let us suppose that someone establishes a proof of the Collatz problem by dichotomy driven by the parity of the quotient of the Euclidean division by $2^k$. Theorem [Theorem 1](#thm:general){reference-type="ref" reference="thm:general"} states that at most one natural number $n$ does not satisfy $P$. As a first approach to this case, it is easy to verify that $0$ denies the Collatz problem. Indeed, $C(0)=0/2 = 0$ as $0$ is even, so the sequence issued from $0$ never contains $1$.
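The map $C$ and these two observations (trajectories of small nonzero $n$ reach $1$, while the trajectory of $0$ never does) are easy to check numerically. Here is a minimal sketch in plain Python, with names of our choosing:

```python
# The Collatz map C defined above, with a numerical check that every
# n in [1, 10000] reaches 1, while 0 is a fixed point of C.

def C(n):
    return n // 2 if n % 2 == 0 else 3 * n + 1

def reaches_one(n, max_steps=1000):
    """True if the sequence n, C(n), C(C(n)), ... hits 1 within max_steps."""
    for _ in range(max_steps):
        if n == 1:
            return True
        n = C(n)
    return False

assert C(0) == 0                                   # 0 denies the problem
assert all(reaches_one(n) for n in range(1, 10001))
```

The bound `max_steps=1000` is ample for this range; of course such a finite check proves nothing about the problem itself.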
Then the Collatz problem would be positively solved for every nonzero natural number, under the condition that one could establish such a proof by dichotomy. Another way to prove the Collatz problem from the supposed existence of a proof by dichotomy driven by Euclidean division by $2^k$ consists in supposing that there exists a counter-example $n \neq 0$ that does not satisfy the Collatz problem. Theorem [Theorem 1](#thm:general){reference-type="ref" reference="thm:general"} states that it would be the only one. But the sub-sequence starting from $C(n)$ would not contain 1 either. As $C(n)$ cannot be equal to $n$ when $n \neq 0$, by definition of $C$, there would exist at least two natural numbers denying the Collatz problem. This contradicts Theorem [Theorem 1](#thm:general){reference-type="ref" reference="thm:general"}, thereby refuting the existence of such an $n$. A more precise look at what such a proof would say shows that 0 would be the only counter-example. As a consequence, one would have to prove at the first step that the Collatz problem is verified for every odd number. This statement has been well known for a long time and has not brought any solution to date. So, we do not expect the existence of such a proof to solve the Collatz problem. Let us simply say that these two deductions from the existence of such a proof are samples of how to use this method of proof, rather than attempts to solve the Collatz problem this way.

# Conclusion and perspectives

We defined what we mean by a proof of a proposition $P$ by dichotomy driven by Euclidean division by $2^k$. We showed that at most one natural number may deny $P$, under the condition that such a proof exists for $P$. This method of proof may therefore be used to establish the validity of some propositions on $\mathbb{N}$. If such a proof existed for the Collatz problem, the problem would be valid for every nonzero natural number.
Our last remark on this example does not seem to offer a solution to the Collatz problem. A further remark we can make at this point is that this method of proof can be extended quite easily to any countably infinite set. Indeed, the definition of a countable set $S$ guarantees that there exists at least one one-to-one mapping between $\mathbb{N}$ and $S$. Let us take one; say this mapping is $f$, defined on $\mathbb{N}$ with values in $S$. To prove $P(s)$ for every $s \in S$, it is sufficient to prove $P(f(n))$ for every $n \in \mathbb{N}$. Since $P(f(n))$ is in fact a proposition on $\mathbb{N}$, a proof by dichotomy driven by Euclidean division by $2^k$ may solve the problem, under the hypothesis that such a proof exists. At the beginning of this paper we defined a proof by dichotomy, and we then studied one particular case. Very probably there exist other ways to partition $U_k(P)$ that give the same single possibility of a counter-example; we did not explore this possibility deeply. We therefore wonder whether it is possible to find other partitions and to generalize this type of proof. Further, a deeper look at the proof of the theorem shows that we never used the fact that the set of unsolved cases is divided into two equally distributed parts, so there may exist other ways to obtain a proof that admits no more than one counter-example.
{ "id": "2310.04109", "title": "Notes on proof by dichotomy", "authors": "Laurent Fallot (EPOC)", "categories": "math.LO", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We prove the existence of solutions to a non-linear, non-local, degenerate equation which was previously derived as the formal hydrodynamic limit of an active Brownian particle system, where the particles are endowed with a position and an orientation. This equation incorporates diffusion in both the spatial and angular coordinates, as well as a non-linear non-local drift term, which depends on the angle-independent density. The spatial diffusion is non-linear degenerate and also comprises diffusion of the angle-independent density, which one may interpret as cross-diffusion with infinitely many species. Our proof relies on interpreting the equation as the perturbation of a gradient flow in a Wasserstein-type space. It generalizes the boundedness-by-entropy method to this setting and makes use of a gain of integrability due to the angular diffusion. For this latter step, we adapt a classical interpolation lemma for function spaces depending on time. We also prove uniqueness in the particular case where the non-local drift term is null, and provide existence and uniqueness results for stationary equilibrium solutions. Keywords: boundedness-by-entropy; active particles; cross-diffusion; gradient flow. MSC: 35K65; 35K55; 76M30; 35Q92. address: - Computational Imaging Group and Helmholtz Imaging, Deutsches Elektronen-Synchrotron (DESY), Notkestr.
85, 22607 Hamburg, Germany, and Fachbereich Mathematik, Universität Hamburg, Bundesstraße 55, 20146 Hamburg, Germany - Scuola Normale Superiore, Centro di Ricerca Matematica Ennio De Giorgi, P.zza dei Cavalieri, 3, 56126 Pisa, Italy author: - Martin Burger - Simon Schulz bibliography: - active.bib title: Well-posedness and stationary states for a crowded active Brownian system with size-exclusion ---

# Introduction {#sec:intro}

We study the following non-local non-linear degenerate equation, which was derived in [@bbesModel] as the formal hydrodynamic limit of a many-particle system modeling the interaction of active Brownian particles endowed with a two-dimensional position $x \in \Omega = (0,2\pi)^2$ and orientation $\theta \in (0,2\pi)$: $$\label{eq: model 4} \partial_t f + \mathrm{Pe}\mathop{\mathrm{div}}\big(f(1-\rho) {\bf e}(\theta)\big) = D_e\mathop{\mathrm{div}}\big((1-\rho)\nabla f + f \nabla \rho\big) + \partial^2_\theta f,$$ where $\rho(t,x) = \int_0^{2\pi} f(t,x,\theta) \, \mathrm{d} \theta$ is the *angle-independent density*, ${\bf e}(\theta)=(\cos\theta,\sin\theta)$, and the real constant parameters $\mathrm{Pe}$ and $D_e>0$ are respectively the Péclet number and the spatial diffusion coefficient. The equation is supplemented with initial data $f_0$ and periodic boundary conditions in both the angle and space variables; the periodic cell for the space-angle coordinates is denoted by $\Upsilon := \Omega \times (0,2\pi)$. We prove the existence of weak solutions to [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"}, show uniqueness of such solutions in the case of zero Péclet number, and provide existence and uniqueness results for stationary states under various assumptions on the Péclet number.
Active Brownian systems model the motion of self-propelled particles and have an abundance of applications; for instance in biology, pedestrian and traffic flows, and the modeling of collective behaviour in general (*cf. e.g. *[@Romanczuk:2012iz; @Cates:2013ia; @Redner.2013; @Cates:2014tr; @Stenhammar:2015ex; @Yeomans:2015dt; @Speck:2015um]). Much like kinetic models, active Brownian systems depict the *mesoscopic* dynamics of a group of interacting agents and, indeed, equation [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"} presents similarities with kinetic Fokker--Planck equations as well as non-linear degenerate parabolic systems. At first glance, [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"} appears similar to the cross-diffusion system studied in [@BoundEntropy]: $$\label{eq:martin marco cross diff} \begin{aligned} &\partial_t r = \mathop{\mathrm{div}}\big( (1-\sigma)\nabla r + r \nabla \sigma + r(1-\sigma)\nabla V \big), \\ &\partial_t b = \mathop{\mathrm{div}}\big( (1-\sigma)\nabla b + b \nabla \sigma + b(1-\sigma)\nabla W \big), \end{aligned}$$ where $\sigma = r + b$ and $V,W$ are given external potentials. Equation [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"} may be interpreted as the analog of this system with infinitely-many species; the diffusive term $\mathop{\mathrm{div}}f \nabla \rho$ encodes the diffusion of the uncountable family of angles in the interval $(0,2\pi)$. However, unlike the system in [@BoundEntropy], the equation [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"} is non-local; both in its diffusive and drift terms. This non-locality is a feature shared by certain kinetic models, in particular the Vlasov--Benney equation [@vlasovBenney1; @vlasovBenney2] and the recent contribution [@BriantMeunier]. 
Moreover, the angle-independent density, which formally satisfies the strongly parabolic drift-diffusion equation $$\partial_t \rho + \mathrm{Pe}\mathop{\mathrm{div}}\big( (1-\rho) {\bf p}\big) = D_e\Delta \rho,$$ where ${\bf p}= \int_0^{2\pi} f {\bf e}(\theta) \, \mathrm{d} \theta$, is expected to be more regular than $f$ as its governing equation is no longer degenerate. This mechanism is evocative of velocity averaging lemmas (see, *e.g.*, [@PerthameKinetic]), illustrating another parallel between kinetic theory and active Brownian systems. The zones of degeneracy $\{\rho=1\}$ and $\{f = 0\}$ contained in [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"} make its analysis delicate as one expects the formation of discontinuities and other losses of regularity along these interfaces; this first degenerate region is all the more troublesome because one can only infer information about the local behaviour of $\rho$, not $f$. In turn, there exist relatively few known results in the literature for such equations. To the authors' knowledge, there exists only one prior contribution studying equations similar to [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"} (except the authors' own work [@bbes], which has a different diffusion model and thus less degeneracy), namely the result of Erignoux [@Erignoux:2016un Theorem 2.3.3]. Therein, the author relies solely on probabilistic techniques to compute the hydrodynamic limit of the underlying particle system, thereby establishing the existence of a very weak solution; defined only as a measure. The present contribution provides an entirely deterministic PDE-theoretic proof, and we show the existence of a Sobolev weak solution with stronger regularity properties by means of the boundedness-by-entropy principle and a vanishing diffusivity method.
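We record, for the reader's convenience, the standard purely formal computation behind the drift-diffusion equation for $\rho$ stated above. Integrating [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"} over $\theta \in (0,2\pi)$ and using that $\rho$ does not depend on $\theta$, the drift term gives $$\mathrm{Pe}\int_0^{2\pi} \mathop{\mathrm{div}}\big( f(1-\rho){\bf e}(\theta) \big) \, \mathrm{d} \theta = \mathrm{Pe}\mathop{\mathrm{div}}\big( (1-\rho){\bf p}\big),$$ the angular diffusion integrates to zero by periodicity, $\int_0^{2\pi} \partial^2_\theta f \, \mathrm{d} \theta = 0$, and the spatial diffusion collapses to a linear one: $$D_e \int_0^{2\pi} \mathop{\mathrm{div}}\big( (1-\rho)\nabla f + f \nabla \rho \big) \, \mathrm{d} \theta = D_e \mathop{\mathrm{div}}\big( (1-\rho)\nabla \rho + \rho \nabla \rho \big) = D_e \Delta \rho,$$ which together yield the drift-diffusion equation satisfied by $\rho$.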
Our strategy relies on using the linear diffusion in angle to upgrade the integrability of $f$ and obtain improved compactness properties; for this we adapt a classical interpolation lemma (*cf.* Appendix [7](#sec:appendix interpolation){reference-type="ref" reference="sec:appendix interpolation"}). We also prove uniqueness of our solution within the stronger class of admissible weak solutions when the Péclet number is assumed to be null, and provide existence and uniqueness results for stationary solutions. At the core of the aforementioned boundedness-by-entropy method (originally developed in [@BoundEntropy]) is the notion that equation [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"} may be interpreted as a perturbation of a Wasserstein gradient flow associated to the entropy functional $$\label{eq:entropy functional} E[f] = \int_\Upsilon \big( f \log f + (1-\rho)\log(1-\rho) \big) \, \mathrm{d} \boldsymbol{\xi};$$ see §[2](#sec:calc var){reference-type="ref" reference="sec:calc var"} for further details. One is able to obtain additional *a priori* estimates on the equation by considering the natural dual problem from the point of view of the underlying variational evolution. That is to say, we consider the first variation $u = E'[f]$ and rewrite [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"} as an equation on $u$, which reads as a perturbation of a gradient flow associated to the convex conjugate functional $E^*[u]$. By rewriting the original unknown $f$ in terms of $u$, one is able to immediately determine the non-negativity of $f$ and the *a priori* bounds $0 \leq \rho \leq 1$. This generalizes the boundedness-by-entropy method developed for the related cross-diffusion system [\[eq:martin marco cross diff\]](#eq:martin marco cross diff){reference-type="eqref" reference="eq:martin marco cross diff"} in [@BoundEntropy], later extended in [@jungel2015boundedness].
The underlying philosophy is analogous to the passage from Lagrangian to Hamiltonian mechanics, now written in the context of Wasserstein gradient flows with nontrivial mobility. The rest of the paper is organised as follows. In §[1.1](#sec:main results){reference-type="ref" reference="sec:main results"} we state our main results precisely, and outline our strategy in §[1.2](#sec:setup){reference-type="ref" reference="sec:setup"}. §[2](#sec:calc var){reference-type="ref" reference="sec:calc var"} introduces the boundedness-by-entropy method, which we use to construct a sequence of suitable transformed regularised problems related to [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"}. In §[3](#sec:galerkin){reference-type="ref" reference="sec:galerkin"}, we prove the existence of the Galerkin approximation to the aforementioned transformed regularised problems and obtain uniform estimates. In §[4](#sec:proof of existence result){reference-type="ref" reference="sec:proof of existence result"}, we give the proof of our main existence result. In §[5](#sec:uniqueness zero pe){reference-type="ref" reference="sec:uniqueness zero pe"}, we prove uniqueness of the solution obtained in §[4](#sec:proof of existence result){reference-type="ref" reference="sec:proof of existence result"} under the assumption that the Péclet number is null. Finally, in §[6](#sec:stat states){reference-type="ref" reference="sec:stat states"}, we prove the existence and uniqueness of stationary states under particular assumptions on the Péclet number.
Appendix [7](#sec:appendix interpolation){reference-type="ref" reference="sec:appendix interpolation"} contains an interpolation lemma pertaining to the uniform estimates of §[3.2](#sec:unif est){reference-type="ref" reference="sec:unif est"}, while Appendix [8](#sec:appendix initial data){reference-type="ref" reference="sec:appendix initial data"} contains technical results related to the sequence of regularised initial data used for the existence proof. **Notation:** Throughout this work, the letter $C$ will denote a positive constant independent of the parameters $\varepsilon,n$, which may change from line to line. Unless stated explicitly otherwise, the symbols $\mathop{\mathrm{div}},\nabla,\Delta$ denote divergences, gradients, and Laplacians taken with respect to the space variable $x$, while the symbols $\nabla_{\boldsymbol{\xi}},\Delta_{\boldsymbol{\xi}}$ denote gradients and Laplacians with respect to the concatenated space-angle variable $\boldsymbol{\xi}=(x,\theta)$. The domain on which the equation is posed will be denoted by $\Upsilon_T := (0,T)\times\Upsilon$, the closure of which we denote by $\overline{\Upsilon}_T := [0,T]\times\overline{\Upsilon}$; the sets $\Omega_T$ and $\overline{\Omega}_T$ are defined analogously. In later estimates and weak formulations, we shall make use of the families of admissible test functions $\mathcal{A},\mathcal{A}_s$, defined by $$\label{eq:admissible test functions} \begin{aligned} &\mathcal{A} := \big\{ \psi \in C^\infty([0,T]\times\mathbb{R}^3) : \, \psi(t,\cdot) \text{ triply } 2\pi \text{-periodic in }x,\theta, \, \psi(0,\cdot)=\psi(T,\cdot)=0 \big\}, \\ &\mathcal{A}_s := \big\{ \psi \in C^\infty([0,T]\times\mathbb{R}^2) : \, \psi(t,\cdot) \text{ doubly } 2\pi\text{-periodic in }x, \, \psi(0,\cdot)=\psi(T,\cdot)=0 \big\}, \end{aligned}$$ and test functions will always be denoted by the letter $\psi$. 
We define the function spaces $$\begin{aligned} &X := L^6(0,T;W^{1,6}_{\mathrm{per}}(\Upsilon)), \quad Y := L^2(0,T;H^1_{\mathrm{per}}(\Omega)), \quad Z:= W^{1,6}_{\mathrm{per}}(\Upsilon), \\ &\begin{aligned}\mathcal{X} := \Big\{ f \in & L^3(0,T;L^3_{\mathrm{per}}(\Upsilon)): f \geq 0 \text{ a.e.}, \partial_\theta \sqrt{f} \in L^2(\Upsilon_T), \partial_t f \in X', \\ &\rho = {\int_0^{2\pi}} f \, \mathrm{d} \theta \in [0,1] \text{ a.e.}, \partial_t \rho \in Y', \nabla\sqrt{1-\rho} \in L^2(\Omega_T), \sqrt{1-\rho}\nabla \sqrt{f} \in L^{2}(\Upsilon_T)\Big\}, \end{aligned} \end{aligned}$$ and, in what follows, the notation $\langle \cdot , \cdot \rangle$ will always refer to the obvious underlying duality product where there is no confusion, unless stated explicitly otherwise. In the previous definitions and in what follows, the function spaces denoted by $Z_{\mathrm{per}}(S)$ for $S \in \{(0,2\pi)^d\}_{d=1}^3$ are understood to mean $$Z_{\mathrm{per}}(S) := \big\{ g:\mathbb{R}^d \to \mathbb{R}: \, \Vert g \Vert_{Z(S)} < \infty, \text{ and } g(y+2\pi {\bf e}_i) = g(y) \, \forall y \in \mathbb{R}^d, i \in \{1,\dots,d\} \big\},$$ where $\{{\bf e}_i\}_{i=1}^d$ is the standard basis of $\mathbb{R}^d$. The case $d=2$ corresponds to functions depending on $x$ and $d=3$ to functions depending on $x,\theta$.

## Main theorems {#sec:main results}

In order to state our main existence result, we begin by introducing our notion of solution. **Definition 1** (Weak solution).
We say that $f \in \mathcal{X}$ is a *weak solution* of [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"} with periodic boundary conditions if, for all $\psi \in X$, there holds $$\label{eq:weak sol} \begin{aligned} -\langle \partial_t f, \psi \rangle + \mathrm{Pe}\int_{\Upsilon_T} (1-\rho)& f {\bf e}(\theta) \cdot \nabla \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t \\ &=\int_{\Upsilon_T} \Big( D_e\big((1-\rho)\nabla f + f \nabla \rho \big) \cdot \nabla \psi + \partial_\theta f\partial_\theta \psi \Big) \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t. \end{aligned}$$ We note that all integrals in the above weak formulation are well-defined since, using the Jensen and Hölder inequalities, $$\label{eq:weak form is well defined in intro} \begin{aligned} &\partial_\theta f \partial_\theta \psi = 2 \underbrace{\sqrt{f}}_{\in L^6} \underbrace{\partial_\theta \sqrt{f}}_{\in L^2} \underbrace{\partial_\theta \psi}_{\in L^6} \in L^{\frac{6}{5}}(\Upsilon_T) \subset L^1(\Upsilon_T) \\ & f \nabla \rho \cdot \nabla \psi = -2 \underbrace{f}_{\in L^3}\underbrace{\sqrt{1-\rho}}_{\in L^\infty}\underbrace{\nabla\sqrt{1-\rho}}_{\in L^2} \cdot \underbrace{\nabla \psi}_{\in L^6} \in L^1(\Upsilon_T) \\ & (1-\rho)\nabla f \cdot \nabla \psi = 2\underbrace{\sqrt{1-\rho}}_{\in L^\infty} \underbrace{\sqrt{f}}_{\in L^6} \underbrace{\sqrt{1-\rho}\nabla \sqrt{f}}_{\in L^2} \cdot \underbrace{\nabla \psi}_{\in L^6} \in L^{\frac{6}{5}}(\Upsilon_T) \subset L^1(\Upsilon_T).
\end{aligned}$$ Note that one may identify the duality product with $$\langle \partial_t f , \psi \rangle = \int_0^T \langle \partial_t f(t,\cdot), \psi(t,\cdot) \rangle_{Z'\times Z} \, \mathrm{d} t,$$ whence a standard argument (*cf.* [@AGS08]) shows that the weak formulation of Definition [Definition 1](#def:weak sol){reference-type="ref" reference="def:weak sol"} is equivalent to requiring that, for all $\psi \in \mathcal{A}$, there holds $$\label{eq:alt dt on test function} \begin{aligned} \int_{\Upsilon_T} f \partial_t \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t + \mathrm{Pe}\int_{\Upsilon_T} (1-\rho)& f {\bf e}(\theta) \cdot \nabla \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t \\ &=\int_{\Upsilon_T} \Big( D_e\big((1-\rho)\nabla f + f \nabla \rho \big) \cdot \nabla \psi + \partial_\theta f\partial_\theta \psi \Big) \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t. \end{aligned}$$ Our main existence result is the following. **Theorem 1** (Existence of weak solution). *Let $T>0$ be arbitrary. Let $f_0$ be measurable non-negative initial data such that $f_0 \in L^p(\Upsilon)$ for some $p>1$ and $\rho_0 = \int_0^{2\pi} f_0 \, \mathrm{d} \theta \in [0,1]$ a.e. in $\Omega$. Then, there exists $f \in \mathcal{X}$ a weak solution of [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"} in the sense of Definition [Definition 1](#def:weak sol){reference-type="ref" reference="def:weak sol"}. Furthermore, the initial data is attained in the sense $\lim_{t \to 0^+}\Vert f(t,\cdot) - f_0 \Vert_{Z'} = 0$.* The essential requirement on $f_0$ in the above theorem is the finiteness of the entropy functional $E[f_0]$. This is of course satisfied by imposing the stronger condition $f_0 \in L^p(\Upsilon)$ for some $p>1$. For clarity of exposition, we choose this latter requirement in order to frame the problem in Lebesgue spaces instead of Orlicz spaces. 
We note in passing that the weak solution obtained in Theorem [Theorem 1](#thm:main existence result){reference-type="ref" reference="thm:main existence result"} also satisfies an alternative notion of weak solution which is more convenient when studying regularity (*cf.* [@regularity]); this alternative weak formulation is written at the end of §[4](#sec:proof of existence result){reference-type="ref" reference="sec:proof of existence result"}. *Remark 2* (Attainment of initial data). Note that $\mathcal{X} \subset W^{1,\frac{6}{5}}(0,T;Z')$, whence (see, *e.g.*, [@evans Theorem 2, §5.9.2]) $f \in C([0,T];Z')$ and $\Vert f(t,\cdot) - f(s,\cdot) \Vert_{Z'} \leq |t-s|^{1/6}C( \Vert \partial_t f \Vert_{Y'})$ for a.e. $0 < s < t < T$. Hence, the attainment of the initial data is phrased in the sense of $Z'$. Our main uniqueness result is the following, and focuses on the case of zero Péclet number. **Theorem 2** (Uniqueness for null Péclet number). *If $\mathrm{Pe}=0$, then the weak solution provided by Theorem [Theorem 1](#thm:main existence result){reference-type="ref" reference="thm:main existence result"} is unique in $\mathcal{X}$.* Finally, we study the existence and uniqueness of stationary states of [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"} in §[6](#sec:stat states){reference-type="ref" reference="sec:stat states"}. We mostly concentrate on uniqueness, which we divide into two parts: uniqueness for zero Péclet number, and uniqueness of strong solutions near *constant* stationary states for general Péclet number. We delay the statements of these results to §[6](#sec:stat states){reference-type="ref" reference="sec:stat states"}. 
## Strategy and set-up {#sec:setup}

In this section, we describe our strategy of proof for the main existence result Theorem [Theorem 1](#thm:main existence result){reference-type="ref" reference="thm:main existence result"} in §[1.2.1](#sec:strategy){reference-type="ref" reference="sec:strategy"}, and then describe our mollification procedure for regularising the initial data in §[1.2.2](#sec:setup initial){reference-type="ref" reference="sec:setup initial"}.

### Strategy {#sec:strategy}

We briefly outline our strategy of proof. As already mentioned, the equation [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"} may be interpreted as a perturbation of a Wasserstein gradient flow associated to the entropy functional $E$. It is therefore natural to reframe the problem in terms of the first variation of $E[f]$, which we denote by $u = E'[f]$; see §[2](#sec:calc var){reference-type="ref" reference="sec:calc var"} for more details. The resulting equation for $u$ is again non-linear, non-local, and degenerate-parabolic. We therefore add the regularisation term $\varepsilon \Delta_{\boldsymbol{\xi}} u$ to this equation and compute Galerkin approximations of its solution, which we denote by $u^\varepsilon_n$, where $n$ indexes the Galerkin approximation. These approximations give rise to approximations $f^\varepsilon_n$ for the solution of [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"} by a simple explicit formula.
After proving the required uniform estimates for the families of functions $\{u^\varepsilon_n,f^\varepsilon_n,\rho^\varepsilon_n\}_{n,\varepsilon}$, which are independent of $n$ and $\varepsilon$, we perform a diagonal procedure so as to simultaneously take the limit $n\to\infty$ and $\varepsilon\to 0$, and we obtain a weak solution of [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"} in the sense of Definition [Definition 1](#def:weak sol){reference-type="ref" reference="def:weak sol"}. ### Set-up for initial data {#sec:setup initial} We shall approximate the initial data with smooth strictly positive functions. We consider the product mollifier sequence $$\eta_\varepsilon(x,\theta) := \alpha_\varepsilon(x)\beta_\varepsilon(\theta),$$ where $\alpha,\beta$ are compactly supported smooth non-negative bump functions with unit integral; $\alpha_\varepsilon(x) := \varepsilon^{-2} \alpha(x/\varepsilon)$, $\beta_\varepsilon(\theta) = \varepsilon^{-\gamma} \beta(\theta/\varepsilon^\gamma)$ for $\gamma>2$. Define, for $\varepsilon \in (0,1)$, the strictly positive periodic functions $$\begin{aligned} &f_0^\varepsilon := \frac{\varepsilon}{4\pi} + (1-\varepsilon)\eta_\varepsilon*f_0 \in C^\infty_{\mathrm{per}}(\overline{\Upsilon}), \\ &\rho_0^\varepsilon := \int_0^{2\pi} f_0^\varepsilon(\cdot,\theta) \, \mathrm{d} \theta = \frac{\varepsilon}{2} + (1-\varepsilon)\alpha_\varepsilon*\rho_0 \in C^\infty_{\mathrm{per}}(\overline{\Omega}), \end{aligned}$$ where the convolutions are understood in the sense $\eta_\varepsilon*f_0 = \int_{\mathbb{R}^3} \eta_\varepsilon(\cdot-\boldsymbol{\xi}) f_0(\boldsymbol{\xi}) \, \mathrm{d} \boldsymbol{\xi}$ and $\alpha_\varepsilon*\rho_0 = \int_{\mathbb{R}^2} \alpha_\varepsilon(\cdot-x)\rho_0(x) \, \mathrm{d} x$. 
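The separation of $\rho_0^\varepsilon$ from the degenerate values 0 and 1 can be illustrated numerically. The sketch below uses a discrete periodic grid with a generic non-negative unit-mass kernel as stand-ins for $\alpha_\varepsilon$ and $\rho_0$; all names and data are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.1

# Discrete stand-in for rho_0: arbitrary values in [0, 1] on a periodic grid.
rho0 = rng.uniform(0.0, 1.0, size=512)

# Discrete stand-in for alpha_eps: a non-negative kernel with unit mass, so
# convolution acts as a convex combination and preserves the range [0, 1].
kernel = np.exp(-np.linspace(-3.0, 3.0, 31) ** 2)
kernel /= kernel.sum()
# Tile three periods and keep the middle one to emulate periodic convolution.
smoothed = np.convolve(np.tile(rho0, 3), kernel, mode="same")[512:1024]

# rho_0^eps = eps/2 + (1 - eps) * (alpha_eps * rho_0)
rho0_eps = eps / 2.0 + (1.0 - eps) * smoothed

# Strict separation from the degenerate values: eps/2 <= rho_0^eps <= 1 - eps/2.
assert rho0_eps.min() >= eps / 2.0 - 1e-12
assert rho0_eps.max() <= 1.0 - eps / 2.0 + 1e-12
```

The assertions reflect the elementary bound $\rho_0^\varepsilon \in [\varepsilon/2, 1-\varepsilon/2]$, which follows from $\alpha_\varepsilon * \rho_0 \in [0,1]$.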
A straightforward computation (*cf.* Appendix [8](#sec:appendix initial data){reference-type="ref" reference="sec:appendix initial data"}) shows that $0<f^\varepsilon_0$ and $0<\rho^\varepsilon_0 < 1$, whence the function $$\begin{aligned} u_0^\varepsilon := \log\Big( \frac{f^\varepsilon_0}{1-\rho^\varepsilon_0} \Big) \in C^\infty_{\mathrm{per}}(\overline{\Upsilon}) \end{aligned}$$ is also well-defined. The next lemma summarises the key properties of the regularised initial data. The proof is delayed to Appendix [8](#sec:appendix initial data){reference-type="ref" reference="sec:appendix initial data"}. **Lemma 3** (Properties of the regularised initial data). *There exists a subsequence of the regularised initial data sequence $\{f^\varepsilon_0,\rho^\varepsilon_0,u^\varepsilon_0\}_\varepsilon$, which we do not relabel, such that $$f^\varepsilon_0 \to f_0 \text{ strongly in } L^p(\Upsilon), \quad \rho^\varepsilon_0 \to \rho_0 \text{ strongly in } L^q(\Omega) \text{ for all } q \in [1,\infty),$$ and for which there exists a positive constant $C=C(p,\Upsilon)$, independent of $\varepsilon$, such that $$\begin{aligned} E[f^\varepsilon_0] + \varepsilon \int_\Upsilon |u^\varepsilon_0|^2 \, \mathrm{d} \boldsymbol{\xi}\leq C\big(1+\Vert f_0 \Vert_{L^p(\Upsilon)} \big). \end{aligned}$$* # Entropy Variable and Transformed Problem {#sec:calc var} In the following we will perform a transformation of the problem [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"} into entropy variables, which diagonalizes the diffusion part at the cost of coupling the time derivatives of $f$ and $\rho$. 
This transformation allows for a suitable approximation via the Galerkin method, as it did for the cross-diffusion system [\[eq:martin marco cross diff\]](#eq:martin marco cross diff){reference-type="eqref" reference="eq:martin marco cross diff"} considered in [@BoundEntropy], and an application of the boundedness-by-entropy principle, also developed in [@BoundEntropy] and subsequently generalised in [@jungel2015boundedness]. To this end, we consider the aforementioned *entropy functional* of [\[eq:entropy functional\]](#eq:entropy functional){reference-type="eqref" reference="eq:entropy functional"}, *i.e.*, $$E[f] = \int_\Upsilon \big( f \log f + (1-\rho) \log(1-\rho) \big) \, \mathrm{d} x \, \mathrm{d} \theta,$$ where $\rho = \int_0^{2\pi} f \, \mathrm{d} \theta$. Note that $E[f]$ is bounded from below by a fixed constant for all $f$, provided $0 \leq \rho \leq 1$ and $f \geq 0$ a.e. We can define $E$ on a dense subspace of $L^p(\Upsilon)$ for any $p > 1$, *e.g.*, smooth functions, and extend it uniquely to the whole space. Note that $E$ is a convex functional, and we may define its convex conjugate, or equivalently its Legendre transform, $E^*: L^{p'}(\Upsilon) \to \mathbb{R}$ via $$E^*[u] := \sup\big\{ \langle u, f \rangle - E[f]: f \in L^p(\Upsilon)\big\},$$ which is again a convex functional. In fact, we may compute $E^*$ explicitly as $$E^*[u] = \int_\Omega \log \left( 1 + \int_0^{2\pi} e^u \, \mathrm{d} \theta\right) \, \mathrm{d} x.$$ As usual we define the *entropy variable* (or dual variable) related to $f$ via the functional derivative $$\label{eq:entropy variable def} u := E'[f] = \log f - \log (1-\rho).$$ We therefore write $f$ and $\rho$ as functions of $u:\Upsilon_T \to \mathbb{R}$ via the expressions: $$\begin{aligned} f = (1-\rho)e^{u}, \qquad \rho = (1-\rho) \int_0^{2\pi} e^{u} \, \mathrm{d} \theta.
\end{aligned}$$ It follows that $$\label{eq:rho in terms of entropy variable} \rho(t,x) = \frac{\int_0^{2\pi} e^{u(t,x,\theta)} \, \mathrm{d} \theta}{1+\int_0^{2\pi} e^{u(t,x,\theta)} \, \mathrm{d} \theta}$$ automatically satisfies the bound $0 \leq \rho \leq 1$. In turn, we obtain $$f(t,x,\theta) = \frac{e^{u(t,x,\theta)}}{1+\int_0^{2\pi} e^{u(t,x,\theta)} \, \mathrm{d} \theta}.$$ Note that this latter formula makes the relation $(E^*)' = (E')^{-1}$ explicit, *i.e.*, $$u = E'[f] , \quad f = (E^*)'[u].$$ The main idea to analyze the system is as follows: the original problem [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"} is considered as a perturbation of the gradient flow $$\partial_t f = \nabla_{\boldsymbol{\xi}} \cdot \left( M(f,\rho) (\nabla_{\boldsymbol{\xi}} E'[f] + V) \right)$$ with mobility $M$ and perturbative vector field $V$, independent of $f$, given by $$M (f,\rho) = \left( \begin{array}{cc} D_e f (1-\rho) I & 0 \\ 0 & f\end{array}\right), \qquad V(x,\theta) = \left(\begin{array}{c} - \mathrm{Pe}~{\bf e}(\theta) \\0 \end{array} \right).$$ Transforming to the entropy variable we arrive at $$\label{eq:grad flow perturbation in terms of u} \partial_t ((E^*)'[u]) = \nabla_{\boldsymbol{\xi}} \cdot \left( \tilde M(u) (\nabla_{\boldsymbol{\xi}} u + V) \right),$$ where $\tilde M(u) = M(f[u],\rho[u])$ is given by $$\label{eq:tilde M def} \tilde{M}(u) = \left( \begin{matrix} \frac{e^u}{(1+\int_0^{2\pi} e^u \, \mathrm{d} \theta)^2} D_e I & 0 \\ 0 & \frac{e^u}{1+\int_0^{2\pi} e^u \, \mathrm{d} \theta} \end{matrix}\right).$$ This dual formulation has several advantages. First of all, the boundedness-by-entropy principle applies, *i.e.* if we find a weak solution $u$ of the above, it is apparent that after transformation $f \geq 0$ and $0 \leq \rho \leq 1$ hold almost everywhere. Moreover, the transformed system can be approximated more easily, *e.g.* by regularising with a multiple of the heat operator. 
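As a sanity check on these formulas, one can verify numerically, at a single spatial point and for an arbitrary smooth $u(\theta)$, that the inversion $u \mapsto (f,\rho)$ is consistent with $u = \log f - \log(1-\rho)$ and $\rho = \int_0^{2\pi} f \, \mathrm{d}\theta$, and that the explicit Hessian action $(E^*)''[u][v]$ matches a finite-difference derivative of $(E^*)'[u]$. The snippet is purely illustrative; the choices of $u$ and $v$ are arbitrary:

```python
import numpy as np

# Periodic grid in theta; a uniform Riemann sum is spectrally accurate here.
theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
dtheta = theta[1] - theta[0]
integrate = lambda g: g.sum() * dtheta

def f_of(u):
    """(E*)'[u]: recover f from the entropy variable u at one spatial point."""
    return np.exp(u) / (1.0 + integrate(np.exp(u)))

u = 0.5 * np.sin(theta)          # arbitrary smooth entropy variable
f = f_of(u)
rho = integrate(f)

# Consistency of the inversion formulas.
assert 0.0 < rho < 1.0
assert np.allclose(u, np.log(f) - np.log(1.0 - rho))

# Hessian action A(u)[v] from the explicit formula, checked against a
# central finite difference of (E*)' in the direction v.
v = np.cos(2.0 * theta) + 0.2    # arbitrary direction
I = integrate(np.exp(u))
A_v = np.exp(u) / (1.0 + I) * v \
    - np.exp(u) / (1.0 + I) ** 2 * integrate(np.exp(u) * v)

h = 1e-6
fd = (f_of(u + h * v) - f_of(u - h * v)) / (2.0 * h)
assert np.allclose(fd, A_v, atol=1e-8)
```

With these identities checked, we return to the approximation of the transformed system by regularising with a multiple of the heat operator.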
This yields a problem of the form $$\label{eq:eps eqn} (\varepsilon \mathrm{id}+ A(u^\varepsilon)) [\partial_t u^\varepsilon] = \nabla_{\boldsymbol{\xi}} \cdot \left( (\varepsilon I + \tilde M(u^\varepsilon)) \nabla_{\boldsymbol{\xi}} u^\varepsilon + \tilde M(u^\varepsilon) V \right),$$ where, for all $u \in L^{p'}(\Upsilon)$, the quantity $A(u)$ is a linear operator which corresponds to the second functional derivative of $E^*$ evaluated on the element $u$; furthermore, since $E^*$ is convex, this quantity is a positive semi-definite linear operator. In fact, one can explicitly write $$\label{eq:explicit Hessian} \begin{aligned} A(u^\varepsilon)[\partial_t u^\varepsilon] = \frac{e^{u^\varepsilon}}{1+\int_0^{2\pi} e^{u^\varepsilon} \, \mathrm{d} \theta} \partial_t u^\varepsilon - \frac{e^{u^\varepsilon}}{\big( 1+\int_0^{2\pi} e^{u^\varepsilon} \, \mathrm{d} \theta \big)^2}\int_0^{2\pi} e^{u^{\varepsilon}} \partial_t u^\varepsilon \, \mathrm{d} \theta. \end{aligned}$$ In principle, one can now write a proof of existence and uniqueness including some regularity for the regularised problem [\[eq:eps eqn\]](#eq:eps eqn){reference-type="eqref" reference="eq:eps eqn"}; this is the content of §[3](#sec:galerkin){reference-type="ref" reference="sec:galerkin"}. Moreover, testing the equation with $u$ preserves *a priori* estimates for the entropy, noticing that $$\label{eq:switching from duality bracket to entropy dissipation} \frac{\mathrm{d}}{\mathrm{d}t} E[f] = \langle E'(f), \partial_t f \rangle_\Upsilon = \langle u, \partial_t ((E^*)'[u]) \rangle_{\Upsilon}.$$

# Galerkin Approximation for the Transformed Regularised System {#sec:galerkin}

The main result of this section is as follows.
We shall prove it in several steps: first, we obtain the existence of the Galerkin approximations to the regularised problem [\[eq:eps eqn\]](#eq:eps eqn){reference-type="eqref" reference="eq:eps eqn"} in §[3.1](#sec:existence galerkin){reference-type="ref" reference="sec:existence galerkin"}, and then we obtain the desired uniform estimates in §[3.2](#sec:unif est){reference-type="ref" reference="sec:unif est"}. **Proposition 4** (Existence and uniform estimates for Galerkin approximations). *Let $\varepsilon \in (0,1)$ and $n \in \mathbb{N}$. There exist functions $u^\varepsilon_n,f^\varepsilon_n,\rho^\varepsilon_n \in C^1(\overline{\Upsilon}_T)$ satisfying the relations $$\label{eq:ae rel for reg sys} f^\varepsilon_n(t,x,\theta) = \frac{e^{u^\varepsilon_n(t,x,\theta)}}{1+\int_0^{2\pi} e^{u^\varepsilon_n(t,x,\theta)} \, \mathrm{d} \theta}, \qquad \rho^\varepsilon_n(t,x) = \int_0^{2\pi} f^\varepsilon_n(t,x,\theta) \, \mathrm{d} \theta \in [0,1]$$ in $\Upsilon_T$, such that, for all $\psi \in \mathcal{A}$, there holds $$\label{eq:weak form reg sys} \begin{aligned} - \langle \partial_t (\varepsilon u^\varepsilon_n + &f^\varepsilon_n) , \psi \rangle + \mathrm{Pe}\int_{\Upsilon_T} (1-\rho^\varepsilon_n)f^\varepsilon_n {\bf e}(\theta) \cdot \nabla \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t \\ =&\int_{\Upsilon_T} \Big( \varepsilon \nabla_{\boldsymbol{\xi}}u^\varepsilon_n \cdot \nabla_{\boldsymbol{\xi}}\psi + D_e\big((1-\rho^\varepsilon_n)\nabla f^\varepsilon_n + f^\varepsilon_n \nabla \rho^\varepsilon_n \big) \cdot \nabla \psi + \partial_\theta f^\varepsilon_n \partial_\theta \psi \Big) \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t. \end{aligned}$$ Furthermore, there exists a positive constant $C$, independent of $\varepsilon$ and $n$, such that $$\label{eq:main est reg i} \begin{aligned} \mathop{\mathrm{ess \, sup}}_{t \in [0,T]} E[f^\varepsilon_n](t) + \mathop{\mathrm{ess \, sup}}_{t \in [0,T]}\int_\Upsilon |\sqrt{\varepsilon} u^\varepsilon_n(t)|^2 \,
\mathrm{d} \boldsymbol{\xi}+ \int_{\Upsilon_T} |\nabla_{\boldsymbol{\xi}} \sqrt{\varepsilon} u^\varepsilon_n|^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t& \\ + \int_{\Upsilon_T} (1-\rho^\varepsilon_n)|\nabla_{\boldsymbol{\xi}} \sqrt{f^\varepsilon_n} |^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t + \int_{\Omega_T} |\nabla \sqrt{1-\rho^\varepsilon_n}|^2 \, \mathrm{d} x \, \mathrm{d} t& \\ + \int_{\Upsilon_T} |\partial_\theta \sqrt{f^\varepsilon_n}|^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t + \int_{\Omega_T} |\nabla \rho^\varepsilon_n|^2 \, \mathrm{d} x \, \mathrm{d} t & \leq C, \end{aligned}$$ as well as $$\begin{aligned} \Vert f^\varepsilon_n \Vert_{L^3(\Upsilon_T)} \leq C, \label{eq:sqrt func di ben reg} \\ \Vert \sqrt{(1-\rho^\varepsilon_n)f^\varepsilon_n} \Vert_{L^{\frac{10}{3}}(\Upsilon_T)} \leq C, \label{eq:gradient of f sqrt 1- rh rego} \\ \Vert f^\varepsilon_n \sqrt{1-\rho^\varepsilon_n} \Vert_{L^{\frac{6}{5}}(0,T;W^{1,\frac{6}{5}}(\Upsilon))} \leq C, \label{eq:bound on f sqrt 1-rho main one for limit reg} \end{aligned}$$ and the time-derivative bounds $$\begin{aligned} \Vert \partial_t(\varepsilon u^\varepsilon_n + f^\varepsilon_n) \Vert_{X'} \leq C, \label{eq:main time deriv reg i} \\ \Vert \partial_t(\varepsilon U^\varepsilon_n + \rho^\varepsilon_n ) \Vert_{Y'} \leq C, \label{eq:main time deriv reg ii}\end{aligned}$$ where $U^\varepsilon_n(t,x) := \int_0^{2\pi} u^\varepsilon_n(t,x,\theta) \, \mathrm{d} \theta$ also satisfies $$\label{eq:Ueps unif bound} \Vert \sqrt{\varepsilon} U^\varepsilon_n \Vert_{L^\infty(0,T;L^2(\Omega))} + \Vert \sqrt{\varepsilon} U^\varepsilon_n \Vert_{L^2(0,T;H^1(\Omega))} \leq C.$$* ## Existence of Galerkin approximants {#sec:existence galerkin} In what follows, we use the fact that the biharmonic operator $\Delta_{\boldsymbol{\xi}}^2$, considered with periodic boundary conditions in space and angle on $\Upsilon$, is a coercive, bounded, linear self-adjoint operator with respect to the inner product of
$H^2_{\mathrm{per}}(\Upsilon)$, where $$\langle f, g \rangle_{H^2_{\mathrm{per}}(\Upsilon)} = \int_\Upsilon \Delta_{\boldsymbol{\xi}}f \Delta_{\boldsymbol{\xi}} g \, \mathrm{d} \boldsymbol{\xi}.$$ It follows from the classical spectral theorem on Hilbert spaces that $\Delta_{\boldsymbol{\xi}}^2$ admits a discrete spectrum $0 < \lambda_1 \le \lambda_2 \le \dots$, counted with multiplicity, with eigenfunctions $\{\varphi_j\}_{j=1}^\infty$ such that there holds $\Delta_{\boldsymbol{\xi}}^2 \varphi_j = \lambda_j \varphi_j$ for all $j$; a classical bootstrapping procedure ensures that $\varphi_j \in C^\infty_{\mathrm{per}}(\overline{\Upsilon})$ for all $j$, *cf.* *e.g.* [@AllaireCours Chapter 7]. Furthermore, $\{\varphi_j\}_{j=1}^\infty$ is an orthonormal basis of $H^2_{\mathrm{per}}(\Upsilon)$. We note in passing that our choice of functional setting $H^2_{\mathrm{per}}(\Upsilon)$ is motivated by the fact that we must control the $L^\infty$ norm of the sequence of Galerkin approximations $\{u_n^\varepsilon(0,\cdot)\}_n$ in order to estimate the approximations $\{f_n^\varepsilon(0,\cdot)\}_n$ via the entropy variable formula [\[eq:ae rel for reg sys\]](#eq:ae rel for reg sys){reference-type="eqref" reference="eq:ae rel for reg sys"}; no $L^p$ control for $p < \infty$ suffices. This is made possible by approximating $u^\varepsilon(0,\cdot)$ in $H^2_{\mathrm{per}}(\Upsilon)$ and using Morrey's inequality over the three-dimensional open domain $\Upsilon$; see Step 1 of the proof of Lemma [Lemma 6](#lem:unif est for galerkin){reference-type="ref" reference="lem:unif est for galerkin"} for details. For each fixed $n\in\mathbb{N}$ we define the closed subspace $\mathcal{V}_n := \overline{\mathrm{span}}\{\varphi_j\}_{j=1}^n$. Note from our previous discussion that $\bigcup_n \mathcal{V}_n$ is dense in $H^2_{\mathrm{per}}(\Upsilon)$, $\mathcal{V}_n \subset \mathcal{V}_{n+1}$ for all $n$, and, for each $n$, the finite subset $\{\varphi_j\}_{j=1}^n$ is an orthonormal basis of $\mathcal{V}_n$.
We now define the $n$-th approximant to the entropy variable $u^\varepsilon$, to be given by the explicit formula $$\label{eq:ansatz for galerkin} u^\varepsilon_n(t,x,\theta) := \sum_{j=1}^n \alpha_j^{\varepsilon,n}(t) \varphi_j(x,\theta),$$ where the coefficients $\alpha^{\varepsilon,n}_j$ are to be determined. This construction is such that, for large $n$, the function $u^\varepsilon_n$ ought to be a suitable approximation of $u^\varepsilon$, namely a weak solution in $H^1_{\mathrm{per}}(\Upsilon)$ of [\[eq:eps eqn\]](#eq:eps eqn){reference-type="eqref" reference="eq:eps eqn"}. One can see, by plugging in $u^\varepsilon_n$ in place of $u^\varepsilon$ in [\[eq:eps eqn\]](#eq:eps eqn){reference-type="eqref" reference="eq:eps eqn"}, that the coefficients $\alpha^{\varepsilon,n}_j$ will be determined from requiring $$\label{eq:galerkin how to get coeffs} (\varepsilon \mathrm{id}+ A(u^\varepsilon_n)) \Big[\sum_{j=1}^n\frac{\mathrm{d}\alpha^{\varepsilon,n}_j}{\mathrm{d}t} \varphi_j \Big] = \nabla_{\boldsymbol{\xi}} \cdot \Big( (\varepsilon I + \tilde M(u^\varepsilon_n)) \sum_{j=1}^n \alpha^{\varepsilon,n}_j \nabla_{\boldsymbol{\xi}} \varphi_j + \tilde M(u^\varepsilon_n) V \Big)$$ in the weak sense in $\mathcal{V}_n$, *i.e.*, for all $\phi \in \mathcal{V}_n$, there holds $$\begin{aligned} \sum_{j=1}^n \frac{\mathrm{d}\alpha^{\varepsilon,n}_j}{\mathrm{d}t} \int_\Upsilon (\varepsilon \mathrm{id}+ & A(u^\varepsilon_n)) [\varphi_j] \phi \, \mathrm{d} \boldsymbol{\xi}\\ &= -\sum_{j=1}^n \alpha^{\varepsilon,n}_j\int_\Upsilon \nabla_{\boldsymbol{\xi}} \phi \cdot (\varepsilon I + \tilde M(u^\varepsilon_n)) \nabla_{\boldsymbol{\xi}} \varphi_j \, \mathrm{d} \boldsymbol{\xi}- \int_\Upsilon \nabla_{\boldsymbol{\xi}} \phi \cdot \tilde M(u^\varepsilon_n) V \, \mathrm{d} \boldsymbol{\xi}, \end{aligned}$$ for all $t \in (0,T)$, where we used linearity and $A(u^\varepsilon_n)[\frac{\mathrm{d}}{\mathrm{d}t}\alpha_j^{\varepsilon,n}(t)\varphi_j] = \frac{\mathrm{d}}{\mathrm{d}t}\alpha_j^{\varepsilon,n}(t)A(u^\varepsilon_n)[\varphi_j]$, which can be seen directly
from the formula [\[eq:explicit Hessian\]](#eq:explicit Hessian){reference-type="eqref" reference="eq:explicit Hessian"}. Using the decomposition of $\phi$ into the basis functions $\{\varphi_k\}_{k=1}^n$, the previous relation is equivalent to requiring: for all $k\in\{1,\dots,n\}$, there holds $$\begin{aligned} \sum_{j=1}^n \frac{\mathrm{d}\alpha^{\varepsilon,n}_j}{\mathrm{d}t} \int_\Upsilon (\varepsilon \mathrm{id}+ & A(u^\varepsilon_n)) [\varphi_j] \varphi_k \, \mathrm{d} \boldsymbol{\xi}\\ &= -\sum_{j=1}^n \alpha^{\varepsilon,n}_j\int_\Upsilon \nabla_{\boldsymbol{\xi}} \varphi_k \cdot (\varepsilon I + \tilde M(u^\varepsilon_n)) \nabla_{\boldsymbol{\xi}} \varphi_j \, \mathrm{d} \boldsymbol{\xi}- \int_\Upsilon \nabla_{\boldsymbol{\xi}} \varphi_k \cdot \tilde M(u^\varepsilon_n) V \, \mathrm{d} \boldsymbol{\xi}, \end{aligned}$$ for all $t \in (0,T)$. Note that, provided the coefficients $\alpha^{\varepsilon,n}_j$ belong to $C^1([0,T])$, all terms in the previous equality are well-defined; in particular, the integrals are finite due to the boundedness of the functions $\varphi_j \in C^\infty_{\mathrm{per}}(\overline{\Upsilon})$.
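The structure of the left-hand side above can be inspected in a toy setting. The sketch below (an angle-only analogue with a small trigonometric basis — an illustrative assumption, not the eigenbasis $\{\varphi_j\}$ of the paper) assembles the matrix with entries $\int (\varepsilon\,\mathrm{id} + A(u))[\varphi_j]\varphi_k$ and checks that it is symmetric and positive definite, hence invertible, which is what allows one to solve for the time derivatives of the coefficients.

```python
import numpy as np

# Toy, angle-only analogue: trigonometric basis on (0, 2*pi) instead of the
# biharmonic eigenbasis on Upsilon (an assumption for illustration only).
eps = 1e-2
theta = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
dtheta = theta[1] - theta[0]
basis = [np.ones_like(theta), np.cos(theta), np.sin(theta),
         np.cos(2 * theta), np.sin(2 * theta)]

u = 0.7 * np.sin(theta)                      # arbitrary entropy-variable sample
Z = 1.0 + np.sum(np.exp(u)) * dtheta

def A(v):
    # Hessian of E^* at u, applied to the direction v (cf. the explicit formula).
    return np.exp(u) * v / Z - np.exp(u) / Z**2 * np.sum(np.exp(u) * v) * dtheta

n = len(basis)
M = np.empty((n, n))
for k in range(n):
    for j in range(n):
        M[k, j] = np.sum((eps * basis[j] + A(basis[j])) * basis[k]) * dtheta

assert np.allclose(M, M.T, atol=1e-10)          # symmetric
assert np.min(np.linalg.eigvalsh(M)) > 0.0      # positive definite, so invertible
```

The $\varepsilon$-term contributes a positive definite Gram matrix, while the $A$-part is positive semi-definite by the Cauchy--Schwarz inequality, mirroring the invertibility argument for $\mathcal{M}$.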
The above can be recast as a quasilinear system of ODEs of the form $$\label{eq:ODE sys galerkin} \left\lbrace\begin{aligned} & \mathcal{M}(\gamma^{\varepsilon,n}(t)) \frac{\mathrm{d}\gamma^{\varepsilon,n}}{\mathrm{d}t}(t) = \mathcal{R}(\gamma^{\varepsilon,n}(t)), \\ & \gamma^{\varepsilon,n}(0) = \gamma^{\varepsilon,n}_0, \end{aligned}\right.$$ where we have defined the vector function $\gamma^{\varepsilon,n} = (\alpha^{\varepsilon,n}_1, \dots, \alpha^{\varepsilon,n}_n)$ and the entries of the matrix $\mathcal{M}$ and of the vector $\mathcal{R}$ are given explicitly by $$\label{eq:matrices for ODE sys galerkin} \begin{aligned} &\mathcal{M}(\gamma^{\varepsilon,n}(t))_{k j} = \int_\Upsilon (\varepsilon \mathrm{id}+ A(u^\varepsilon_n)) [\varphi_j] \varphi_k \, \mathrm{d} \boldsymbol{\xi}, \\ & \mathcal{R}(\gamma^{\varepsilon,n}(t))_{k} = -\sum_{j=1}^n \alpha^{\varepsilon,n}_j \int_\Upsilon \nabla_{\boldsymbol{\xi}} \varphi_k \cdot (\varepsilon I + \tilde M(u^\varepsilon_n)) \nabla_{\boldsymbol{\xi}} \varphi_j \, \mathrm{d} \boldsymbol{\xi}- \int_\Upsilon \nabla_{\boldsymbol{\xi}} \varphi_k \cdot \tilde M(u^\varepsilon_n) V \, \mathrm{d} \boldsymbol{\xi}, \end{aligned}$$ where $k,j$ range over all possible values in $\{1,\dots,n\}$, and the initial data is given explicitly by $$\label{eq:initial data for galerkin sys} \alpha_j^{\varepsilon,n}(0) = \langle u^\varepsilon_0, \varphi_j \rangle_{H^2_{\mathrm{per}}(\Upsilon)},$$ such that, with $u^\varepsilon_n(0) = u^\varepsilon_{n,0} \equiv \sum_{j=1}^n \alpha^{\varepsilon,n}_j(0) \varphi_j$, there holds the strong convergence $$\label{eq:strong convergence initial data galerkin} u^{\varepsilon}_n(0) \to u^\varepsilon_0 \quad \text{in } H^2_{\mathrm{per}}(\Upsilon).$$ It follows from the positivity $\varepsilon>0$ and the positive semidefiniteness of the operator $A$ that the matrix $\mathcal{M}$ is invertible, and furthermore the product $\mathcal{M}^{-1}\mathcal{R}$ is locally Lipschitz; indeed, this quantity and its derivatives involve only exponential functions of the $\alpha^{\varepsilon,n}_j$ and their products, weighted by the reciprocal of a determinant which is
bounded strictly away from zero for each $\varepsilon>0$. In turn, the Cauchy--Lipschitz Theorem for ODE systems implies the existence and uniqueness of $\gamma^{\varepsilon,n} \in C^1([0,T])$ solving [\[eq:ODE sys galerkin\]](#eq:ODE sys galerkin){reference-type="eqref" reference="eq:ODE sys galerkin"}. It follows that we have proved the following result, which concludes this subsection. **Lemma 5**. *Let $\varepsilon \in (0,1)$, $n\in\mathbb{N}$, and $u^\varepsilon_{n,0} \in H^2_{\mathrm{per}}(\Upsilon)$ be given by $$u^\varepsilon_{n,0} = \sum_{j=1}^n \varphi_j \langle u^\varepsilon_0, \varphi_j \rangle_{H^2_{\mathrm{per}}(\Upsilon)}.$$ There exists $u^\varepsilon_n \in C^1(\overline{\Upsilon}_T)$ such that $u^\varepsilon_n(t,\cdot) \in \mathcal{V}_n$ for all $t \in [0,T]$, $u^\varepsilon_n(0) = u^\varepsilon_{n,0}$ and, for all $\psi \in \mathcal{V}_n$, there holds for all $t \in (0,T)$ $$\label{eq:galerkin regularised weak form} \int_{\Upsilon}(\varepsilon \mathrm{id}+ A(u^\varepsilon_n)) [\partial_t u^{\varepsilon}_n] \psi \, \mathrm{d} \boldsymbol{\xi}= - \int_\Upsilon \nabla_{\boldsymbol{\xi}}\psi \cdot \left( (\varepsilon I + \tilde M(u^\varepsilon_n)) \nabla_{\boldsymbol{\xi}} u^\varepsilon_n + \tilde M(u^\varepsilon_n) V \right) \, \mathrm{d} \boldsymbol{\xi},$$ *i.e.*, in the weak sense with respect to $\mathcal{V}_n$, $$\label{eq:galerkin regularised} (\varepsilon \mathrm{id}+ A(u^\varepsilon_n)) [\partial_t u^{\varepsilon}_n] = \nabla_{\boldsymbol{\xi}} \cdot \left( (\varepsilon I + \tilde M(u^\varepsilon_n)) \nabla_{\boldsymbol{\xi}} u^\varepsilon_n + \tilde M(u^\varepsilon_n) V \right).$$* We then define $f^\varepsilon_n \in C^1([0,T]\times\overline{\Upsilon})$ and its angle-independent density $\rho^\varepsilon_n$ by $$\label{eq:f eps n in terms of u eps n pre galerkin} f^\varepsilon_n := (E^*)'[u^\varepsilon_n] = \frac{e^{u^\varepsilon_n}}{1+\int_0^{2\pi} e^{u^\varepsilon_n} \, \mathrm{d} \theta}, \quad \rho^\varepsilon_n := \int_0^{2\pi} 
f^\varepsilon_n \, \mathrm{d} \theta = \frac{\int_0^{2\pi} e^{u^\varepsilon_n} \, \mathrm{d} \theta}{1+\int_0^{2\pi} e^{u^\varepsilon_n} \, \mathrm{d} \theta},$$ whence $$(\varepsilon \mathrm{id}+ A(u^\varepsilon_n))[\partial_t u^\varepsilon_n] = \varepsilon \partial_t u^\varepsilon_n + \partial_t((E^*)'[u^\varepsilon_n]) = \varepsilon \partial_t u^\varepsilon_n + \partial_t f^\varepsilon_n,$$ and from which we obtain, using also [\[eq:switching from duality bracket to entropy dissipation\]](#eq:switching from duality bracket to entropy dissipation){reference-type="eqref" reference="eq:switching from duality bracket to entropy dissipation"}, $$\label{eq:ent diss rel} \int_\Upsilon (\varepsilon \mathrm{id}+ A(u^\varepsilon_n))[\partial_t u^\varepsilon_n]u^\varepsilon_n \, \mathrm{d} \boldsymbol{\xi}= \frac{\varepsilon}{2} \frac{\mathrm{d}}{\mathrm{d}t} \int_\Upsilon |u^\varepsilon_n|^2 \, \mathrm{d} \boldsymbol{\xi}+ \frac{\mathrm{d}}{\mathrm{d}t} E[f^\varepsilon_n].$$ At this point, we record the relations $$\label{eq:convenient expansion} \begin{aligned} &f^\varepsilon_n \partial_\theta u^\varepsilon_n = \partial_\theta f^\varepsilon_n = 2\sqrt{f^\varepsilon_n} \partial_\theta \sqrt{f^\varepsilon_n}, \\ &(1-\rho^\varepsilon_n)f^\varepsilon_n \nabla u^\varepsilon_n = (1-\rho^\varepsilon_n) \nabla f^\varepsilon_n + f^\varepsilon_n \nabla \rho^\varepsilon_n = 2\sqrt{(1-\rho^\varepsilon_n)f^\varepsilon_n} \nabla \sqrt{(1-\rho^\varepsilon_n)f^\varepsilon_n} + 2 f^\varepsilon_n \nabla \rho^\varepsilon_n, \end{aligned}$$ whence the equation [\[eq:galerkin regularised\]](#eq:galerkin regularised){reference-type="eqref" reference="eq:galerkin regularised"} may be rewritten as $$\label{eq:galerkin reg better} \varepsilon \partial_t u^\varepsilon_n + \partial_t f^\varepsilon_n + \mathrm{Pe}\mathop{\mathrm{div}}((1-\rho^\varepsilon_n) f^\varepsilon_n {\bf e}(\theta)) = \varepsilon \Delta_{\boldsymbol{\xi}}u^\varepsilon_n + D_e \mathop{\mathrm{div}}( (1-\rho^\varepsilon_n) 
\nabla f^\varepsilon_n + f^\varepsilon_n \nabla \rho^\varepsilon_n ) + \partial^2_\theta f^\varepsilon_n.$$ It follows from an integration by parts that [\[eq:weak form reg sys\]](#eq:weak form reg sys){reference-type="eqref" reference="eq:weak form reg sys"} is verified. We have therefore proved the existence part of Proposition [Proposition 4](#prop:transformed reg sys existence and est){reference-type="ref" reference="prop:transformed reg sys existence and est"}. It remains to obtain the uniform estimates, which is the focus of the next section. ## Uniform estimates for the Galerkin approximations {#sec:unif est} In this section, we perform the main entropy-dissipation estimate, and obtain bounds uniform in $n$ and $\varepsilon$ for both the entropy variable and the original unknown. After this, we establish similar estimates for their respective time derivatives. We first remark that, for all fixed $t,x$ there holds $\Vert f^\varepsilon_n (t,x,\cdot) \Vert_{L^1((0,2\pi))} \leq 1$ as well as $f^\varepsilon_n \geq 0$ and $0 \leq \rho^\varepsilon_n \leq 1$, from which we get $$\label{eq:unif L1 on f eps n} \Vert f^\varepsilon_n \Vert_{L^\infty(\Omega_T;L^1(0,2\pi))} \leq 1.$$ This bound will be used in the proof of the main result of this subsection, which we now state. **Lemma 6** (Uniform estimates for Galerkin approximation). *Let $\varepsilon,n,u^\varepsilon_n$ be as in Lemma [Lemma 5](#lem:existence of galerkin coeffs){reference-type="ref" reference="lem:existence of galerkin coeffs"} and $f^\varepsilon_n,\rho^\varepsilon_n$ be as in [\[eq:f eps n in terms of u eps n pre galerkin\]](#eq:f eps n in terms of u eps n pre galerkin){reference-type="eqref" reference="eq:f eps n in terms of u eps n pre galerkin"}. 
There exists a positive constant $C$, independent of $\varepsilon,n$, such that the following uniform estimates hold: $$\label{eq:main est gal i} \begin{aligned} \sup_{t \in [0,T]} E[f^\varepsilon_n](t) + \sup_{t \in [0,T]}\int_\Upsilon |\sqrt{\varepsilon} u^\varepsilon_n(t)|^2 \, \mathrm{d} \boldsymbol{\xi}+ \int_{\Upsilon_T} |\nabla_{\boldsymbol{\xi}} \sqrt{\varepsilon} u^\varepsilon_n|^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t& \\ + \int_{\Upsilon_T} (1-\rho^\varepsilon_n)|\nabla_{\boldsymbol{\xi}} \sqrt{f^\varepsilon_n} |^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t + \int_{\Omega_T} |\nabla \sqrt{1-\rho^\varepsilon_n}|^2 \, \mathrm{d} x \, \mathrm{d} t& \\ + \int_{\Upsilon_T} |\partial_\theta \sqrt{f^\varepsilon_n}|^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t + \int_{\Omega_T} |\nabla \rho^\varepsilon_n|^2 \, \mathrm{d} x \, \mathrm{d} t &\leq C, \end{aligned}$$ and $$\begin{aligned} \Vert f^\varepsilon_n \Vert_{L^3(\Upsilon_T)} \leq C, \label{eq:sqrt func di ben} \\ \Vert \sqrt{(1-\rho^\varepsilon_n)f^\varepsilon_n} \Vert_{L^{\frac{10}{3}}(\Upsilon_T)} \leq C, \label{eq:gradient of f sqrt 1- rho} \\ \Vert f^\varepsilon_n \sqrt{1-\rho^\varepsilon_n} \Vert_{L^{\frac{6}{5}}(0,T;W^{1,\frac{6}{5}}(\Upsilon))} \leq C. \label{eq:bound on f sqrt 1-rho main one for limit}\end{aligned}$$* *Proof.*
*Entropy dissipation*: By testing [\[eq:galerkin regularised\]](#eq:galerkin regularised){reference-type="eqref" reference="eq:galerkin regularised"} with $u^\varepsilon_n$, which is an admissible test function since one can directly verify $u^{\varepsilon}_n, \partial_t u^\varepsilon_n \in \mathcal{V}_n$, we compute, using [\[eq:ent diss rel\]](#eq:ent diss rel){reference-type="eqref" reference="eq:ent diss rel"}, the entropy dissipation $$\begin{aligned} \frac{\varepsilon}{2} \frac{\mathrm{d}}{\mathrm{d}t} \int_\Upsilon |u^\varepsilon_n|^2 \, \mathrm{d} x \, \mathrm{d} \theta + \frac{\mathrm{d}}{\mathrm{d}t} E[f^\varepsilon_n] + \int_\Upsilon (\varepsilon I + \tilde{M}(u^\varepsilon_n) )\nabla_{\boldsymbol{\xi}} u^\varepsilon_n \cdot \nabla_{\boldsymbol{\xi}} u^\varepsilon_n \, \mathrm{d} \boldsymbol{\xi}= - \int_\Upsilon \tilde{M}(u^\varepsilon_n) V \cdot \nabla_{\boldsymbol{\xi}} u^\varepsilon_n \, \mathrm{d} \boldsymbol{\xi}. \end{aligned}$$ Again, we emphasise that, since $\varphi_j \in C^\infty_{\mathrm{per}}(\overline{\Upsilon})$ for all $j$, the integrands in the previous line are composed of functions belonging to $C^1(\overline{\Upsilon}_T)$, whence the integrals are well-defined. Recall that the matrix $\tilde{M}$ is positive semi-definite and symmetric. 
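Since $\tilde M$ is diagonal with non-negative entries, its positive semi-definite square root is obtained entrywise; the pointwise sketch below (the sample values of $D_e$, $f$, $\rho$ are arbitrary admissible choices) checks this factorisation and the elementary Young-type bound it enables.

```python
import numpy as np

# One point (t, x, theta): mobility block diag(De*f*(1-rho)*I_2, f).
De, f, rho = 1.3, 0.2, 0.6                    # arbitrary admissible samples
M = np.diag([De * f * (1 - rho), De * f * (1 - rho), f])
N = np.sqrt(M)                                # entrywise sqrt of a diagonal psd matrix
assert np.allclose(N.T @ N, M)                # N^T N = M

# Young: |Na . Nb| <= (1/2)|Na|^2 + (1/2)|Nb|^2 = (1/2) Ma.a + (1/2) Mb.b.
rng = np.random.default_rng(1)
a, b = rng.standard_normal(3), rng.standard_normal(3)
lhs = abs((N @ a) @ (N @ b))
rhs = 0.5 * ((M @ a) @ a) + 0.5 * ((M @ b) @ b)
assert lhs <= rhs + 1e-12
```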
It follows that there exists a unique positive semi-definite square-root matrix $\tilde{N}$ of the same dimension such that $\tilde{N}^T \tilde{N} = \tilde{M}$, namely $$\tilde{N}(u) = N(f[u]) = \left( \begin{matrix} \sqrt{D_e f(1-\rho)}I & 0 \\ 0 & \sqrt{f} \end{matrix} \right).$$ In turn, there holds, for all $t \in [0,T]$, $$\label{eq:penult step main dissip galerkin} \begin{aligned} E[f^\varepsilon_n](t) + \frac{\varepsilon}{2} \int_\Upsilon &|u^\varepsilon_n(t)|^2 \, \mathrm{d} \boldsymbol{\xi}+ \varepsilon \int_0^t \int_\Upsilon |\nabla_{\boldsymbol{\xi}} u^\varepsilon_n|^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} \tau + \int_0^t \int_\Upsilon \underbrace{\tilde{M}(u^\varepsilon_n) \nabla_{\boldsymbol{\xi}} u^\varepsilon_n \cdot \nabla_{\boldsymbol{\xi}} u^\varepsilon_n}_{\geq 0} \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} \tau \\ &= E[f^\varepsilon_n](0) + \frac{\varepsilon}{2} \int_\Upsilon |u^\varepsilon_n(0)|^2 \, \mathrm{d} \boldsymbol{\xi}- \int_0^t \int_\Upsilon \tilde{N}(u^\varepsilon_n) \nabla_{\boldsymbol{\xi}} u^\varepsilon_n \cdot\tilde{N}(u^\varepsilon_n) V \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} \tau, \end{aligned}$$ and observe that the final term on the right-hand side may be bounded by means of the Young inequality as follows $$\label{eq:cauchy young penult galerkin} \begin{aligned} |\tilde{N}(u^\varepsilon_n) \nabla_{\boldsymbol{\xi}} u^\varepsilon_n \cdot\tilde{N}(u^\varepsilon_n) V| \leq& \frac{1}{2}|\tilde{N}(u^\varepsilon_n)\nabla_{\boldsymbol{\xi}} u^\varepsilon_n|^2 + \frac{1}{2}|\tilde{N}(u^\varepsilon_n) V|^2 \\ =& \frac{1}{2}\tilde{M}(u^\varepsilon_n)\nabla_{\boldsymbol{\xi}} u^\varepsilon_n \cdot \nabla_{\boldsymbol{\xi}} u^\varepsilon_n + \frac{1}{2}\tilde{M}(u^\varepsilon_n)V \cdot V \\ \leq & \frac{1}{2}\tilde{M}(u^\varepsilon_n)\nabla_{\boldsymbol{\xi}} u^\varepsilon_n \cdot \nabla_{\boldsymbol{\xi}} u^\varepsilon_n + \frac{D_e}{2} \Vert V \Vert_{L^\infty(\Upsilon_T)}^2|f^\varepsilon_n|, \end{aligned}$$ where we used the boundedness of
$\rho^\varepsilon_n$; *cf.* [\[eq:unif L1 on f eps n\]](#eq:unif L1 on f eps n){reference-type="eqref" reference="eq:unif L1 on f eps n"}. Recall from [\[eq:strong convergence initial data galerkin\]](#eq:strong convergence initial data galerkin){reference-type="eqref" reference="eq:strong convergence initial data galerkin"} that $u^\varepsilon_n(0) \to u^\varepsilon_0$ strongly in $H^2_{\mathrm{per}}(\Upsilon)$, and hence, along a subsequence, almost everywhere, which, with a slight abuse of notation, we still label as $\{u^\varepsilon_n\}_{n}$. In view of the Morrey embedding $H^2(\Upsilon) \hookrightarrow C^{0,1/2}(\Upsilon)$, we deduce $$\Vert u^\varepsilon_n(0) - u^\varepsilon_0 \Vert_{L^\infty(\Upsilon)} \to 0,$$ and hence $$\sup_n \Vert u^\varepsilon_n(0) \Vert_{L^\infty(\Upsilon)} < \infty.$$ It follows from the continuity of the exponential and the relations [\[eq:f eps n in terms of u eps n pre galerkin\]](#eq:f eps n in terms of u eps n pre galerkin){reference-type="eqref" reference="eq:f eps n in terms of u eps n pre galerkin"} that $f^\varepsilon_n(0) \to f^\varepsilon_0$ almost everywhere, and $$0 < f^\varepsilon_n(0) \leq \exp \big( \sup_n {\Vert u^\varepsilon_n(0) \Vert_{L^\infty(\Upsilon)}}\big) < \infty,$$ and we remark that $\exp(\sup_n{\Vert u^\varepsilon_n(0) \Vert_{L^\infty(\Upsilon)}})$ is constant and is therefore an integrable dominating function on the bounded domain $\Upsilon$.
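The domination argument admits a direct numerical illustration: whenever $u_n \to u_0$ uniformly, the transformed densities remain below the single constant $\exp(\sup_n \Vert u_n \Vert_{L^\infty})$ while converging pointwise. The sequence $u_n$ below is an arbitrary illustrative choice, not the Galerkin sequence itself.

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
dtheta = theta[1] - theta[0]
u0 = np.sin(theta) + 0.3 * np.cos(2 * theta)

def transform(u):
    # f = e^u / (1 + int e^u dtheta), evaluated on the grid
    return np.exp(u) / (1.0 + np.sum(np.exp(u)) * dtheta)

f0 = transform(u0)
dominating = np.exp(np.max(np.abs(u0)) + 1.0)  # exp(sup_n ||u_n||_inf): |u_n - u0| <= 1
errs = []
for n in (1, 2, 4, 8):
    un = u0 + np.cos(5 * theta) / n            # u_n -> u0 uniformly
    fn = transform(un)
    assert np.all(0.0 < fn) and np.all(fn <= dominating)
    errs.append(np.max(np.abs(fn - f0)))

assert errs[-1] < errs[0]                      # pointwise convergence kicks in
```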
Using the previous estimate, an application of the Dominated Convergence Theorem yields, for all $q \in [1,\infty)$, $$\label{eq:strong conv in Lq for initial f eps n} f^\varepsilon_n(0) \to f^\varepsilon_0 \quad \text{strongly in }L^q(\Upsilon) \text{ as } n \to \infty,$$ and a similar argument using also [\[eq:unif L1 on f eps n\]](#eq:unif L1 on f eps n){reference-type="eqref" reference="eq:unif L1 on f eps n"}, *cf.* *e.g.* Appendix [8](#sec:appendix initial data){reference-type="ref" reference="sec:appendix initial data"}, yields $\lim_{n \to \infty }E[f^\varepsilon_n(0)] = E[f^\varepsilon_0]$ in the sense of real numbers. It therefore follows, using also [\[eq:unif L1 on f eps n\]](#eq:unif L1 on f eps n){reference-type="eqref" reference="eq:unif L1 on f eps n"}, [\[eq:penult step main dissip galerkin\]](#eq:penult step main dissip galerkin){reference-type="eqref" reference="eq:penult step main dissip galerkin"}, and [\[eq:cauchy young penult galerkin\]](#eq:cauchy young penult galerkin){reference-type="eqref" reference="eq:cauchy young penult galerkin"}, that $$\begin{aligned} \sup_{t \in [0,T]} E[f^\varepsilon_n](t) +& \frac{\varepsilon}{2} \bigg( \sup_{t \in [0,T]}\int_\Upsilon |u^\varepsilon_n(t)|^2 \, \mathrm{d} \boldsymbol{\xi}+ \int_{\Upsilon_T} |\nabla_{\boldsymbol{\xi}} u^\varepsilon_n|^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t \bigg) \\ +& \frac{1}{2}\int_{\Upsilon_T} \tilde{M}(u^\varepsilon_n) \nabla_{\boldsymbol{\xi}} u^\varepsilon_n \cdot \nabla_{\boldsymbol{\xi}} u^\varepsilon_n \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t \leq C \bigg( 1+ E[f^\varepsilon_0] + \varepsilon\int_\Upsilon |u^\varepsilon_0|^2 \, \mathrm{d} \boldsymbol{\xi}\bigg), \end{aligned}$$ for some positive constant $C=C(\mathrm{Pe},D_e,T,\Upsilon)$ independent of $\varepsilon,n$. 
Our choice of regularised initial data (*cf.* Lemma [Lemma 3](#lem:regularised initial data){reference-type="ref" reference="lem:regularised initial data"}) implies that the entire right-hand side of the above is bounded independently of $\varepsilon,n$. By rewriting the final term on the left-hand side in terms of $f^\varepsilon_n,\rho^\varepsilon_n$ using [\[eq:convenient expansion\]](#eq:convenient expansion){reference-type="eqref" reference="eq:convenient expansion"}, we get $$\begin{aligned} \sup_{t \in [0,T]} E[f^\varepsilon_n](t) +& \frac{\varepsilon}{2} \bigg( \sup_{t \in [0,T]}\int_\Upsilon |u^\varepsilon_n(t)|^2 \, \mathrm{d} \boldsymbol{\xi}+ \int_{\Upsilon_T} |\nabla_{\boldsymbol{\xi}} u^\varepsilon_n|^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t \bigg) \\ +& \frac{1}{2}\int_{\Upsilon_T} \Big( D_e f^\varepsilon_n(1-\rho^\varepsilon_n) \big|\nabla \log\big( \frac{f^\varepsilon_n}{1-\rho^\varepsilon_n}\big)\big|^2 + f^\varepsilon_n \big|\partial_\theta \log\big( \frac{f^\varepsilon_n}{1-\rho^\varepsilon_n}\big)\big|^2 \Big) \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t \leq C, \end{aligned}$$ *i.e.*, $$\label{eq:unif in n est for galerkin} \begin{aligned} \sup_{t \in [0,T]} E[f^\varepsilon_n](t) + \sup_{t \in [0,T]}\int_\Upsilon |\sqrt{\varepsilon} u^\varepsilon_n(t)|^2 \, \mathrm{d} \boldsymbol{\xi}+ \int_{\Upsilon_T} |\nabla_{\boldsymbol{\xi}} \sqrt{\varepsilon} u^\varepsilon_n|^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t& \\ + \int_{\Upsilon_T} (1-\rho^\varepsilon_n)|\nabla \sqrt{f^\varepsilon_n} |^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t + \int_{\Omega_T} \rho^\varepsilon_n |\nabla \sqrt{1-\rho^\varepsilon_n}|^2 \, \mathrm{d} x \, \mathrm{d} t& \\ + \int_{\Upsilon_T} |\partial_\theta \sqrt{f^\varepsilon_n}|^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t + \int_{\Omega_T} |\nabla \rho^\varepsilon_n|^2 \, \mathrm{d} x \, \mathrm{d} t &\leq C, \end{aligned}$$ where we used the Fubini--Tonelli Theorem and the commutativity of the derivative 
in $x$ and integral in $\theta$ to rewrite $\int_{\Upsilon_T} \nabla f^\varepsilon_n \cdot \nabla \rho^\varepsilon_n \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t = \int_{\Omega_T} |\nabla \rho^\varepsilon_n|^2 \, \mathrm{d} x \, \mathrm{d} t$ for the final term on the left-hand side of [\[eq:unif in n est for galerkin\]](#eq:unif in n est for galerkin){reference-type="eqref" reference="eq:unif in n est for galerkin"}. Observe that, using the bounds in the previous estimate, $$\begin{aligned} \int_{\Omega_T} |\nabla \sqrt{1-\rho^\varepsilon_n}|^2 \, \mathrm{d} x \, \mathrm{d} t = \frac{1}{4}\int_{\Omega_T} |\nabla \rho^\varepsilon_n|^2 \, \mathrm{d} x \, \mathrm{d} t + \int_{\Omega_T} \rho^\varepsilon_n |\nabla \sqrt{1-\rho^\varepsilon_n}|^2\, \mathrm{d} x \, \mathrm{d} t \leq C, \end{aligned}$$ and similarly $$\label{eq:partial bound i} \begin{aligned} \int_{\Upsilon_T} (1-\rho^\varepsilon_n)|\nabla_{\boldsymbol{\xi}} \sqrt{f^\varepsilon_n} |^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t \leq \int_{\Upsilon_T} (1-\rho^\varepsilon_n)|\nabla \sqrt{f^\varepsilon_n} |^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t + \int_{\Upsilon_T} |\partial_\theta \sqrt{f^\varepsilon_n} |^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t \leq C. \end{aligned}$$ Collating these latter two bounds with [\[eq:unif in n est for galerkin\]](#eq:unif in n est for galerkin){reference-type="eqref" reference="eq:unif in n est for galerkin"} yields the estimate [\[eq:main est gal i\]](#eq:main est gal i){reference-type="eqref" reference="eq:main est gal i"}.
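The commutation step used here can also be checked at the discrete level, where the centred $x$-difference commutes with the $\theta$-sum exactly by linearity. The periodic sample $f$ and the second-order differences below are illustrative assumptions.

```python
import numpy as np

nx, nt = 64, 48
x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
th = np.linspace(0.0, 2.0 * np.pi, nt, endpoint=False)
dx, dth = x[1] - x[0], th[1] - th[0]

X, TH = np.meshgrid(x, th, indexing="ij")
f = np.exp(np.sin(X) * np.cos(TH)) / 10.0     # arbitrary smooth periodic sample
rho = f.sum(axis=1) * dth                     # rho(x) = int f dtheta

# Centred periodic differences in x.
dfdx = (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2.0 * dx)
drho = (np.roll(rho, -1) - np.roll(rho, 1)) / (2.0 * dx)

lhs = np.sum(dfdx * drho[:, None]) * dx * dth  # int grad f . grad rho dxi dt analogue
rhs = np.sum(drho ** 2) * dx                   # int |grad rho|^2 dx analogue
assert np.isclose(lhs, rhs)
```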
*Interpolated bounds on $\sqrt{f}$ and $\sqrt{(1-\rho)f}$*: Using the product rule and the estimates [\[eq:partial bound i\]](#eq:partial bound i){reference-type="eqref" reference="eq:partial bound i"} and [\[eq:unif in n est for galerkin\]](#eq:unif in n est for galerkin){reference-type="eqref" reference="eq:unif in n est for galerkin"}, we find $$%\label{eq:unif est for nabla sqrt((1-rho)f)} \begin{aligned} \int_{\Upsilon_T} |\nabla_{\boldsymbol{\xi}} \sqrt{(1-\rho^\varepsilon_n) f^\varepsilon_n} |^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t &\leq C\bigg( \int_{\Upsilon_T} (1-\rho^\varepsilon_n)|\nabla_{\boldsymbol{\xi}} \sqrt{f^\varepsilon_n} |^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t + \int_{\Omega_T} \rho^\varepsilon_n |\nabla \sqrt{1-\rho^\varepsilon_n}|^2 \, \mathrm{d} x \, \mathrm{d} t \bigg) \\ &\leq C. \end{aligned}$$ Meanwhile, the estimate [\[eq:unif L1 on f eps n\]](#eq:unif L1 on f eps n){reference-type="eqref" reference="eq:unif L1 on f eps n"} implies $\Vert \sqrt{(1-\rho^\varepsilon_n)f^\varepsilon_n}\Vert_{L^\infty(0,T;L^2(\Upsilon))} \leq C$, whence we deduce $$\Vert \sqrt{(1-\rho^\varepsilon_n)f^\varepsilon_n}\Vert_{L^\infty(0,T;L^2(\Upsilon))} + \Vert \sqrt{(1-\rho^\varepsilon_n)f^\varepsilon_n}\Vert_{L^2(0,T;H^1(\Upsilon))} \leq C.$$ In turn, the classical interpolation lemma [@DiBenedetto §1, Proposition 3.2] implies $$\Vert \sqrt{(1-\rho^\varepsilon_n)f^\varepsilon_n} \Vert_{L^{\frac{10}{3}}(\Upsilon_T)} \leq C,$$ and thus [\[eq:gradient of f sqrt 1- rho\]](#eq:gradient of f sqrt 1- rho){reference-type="eqref" reference="eq:gradient of f sqrt 1- rho"} is proved. 
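For reference, the interpolation exponents used in this proof all come from the same rule $q = 2(d+2)/d$ for the space $L^\infty(0,T;L^2) \cap L^2(0,T;H^1)$ in $d$ dimensions, as in the cited lemma; a one-line arithmetic check, with $d=3$ for the bound just obtained and $d=1$ for the angular bound used in the next step:

```python
from fractions import Fraction

def interpolation_exponent(d):
    # L^infty(0,T; L^2) intersect L^2(0,T; H^1) in d dimensions embeds
    # into L^q of space-time with q = 2(d+2)/d.
    return 2 * (Fraction(d) + 2) / Fraction(d)

assert interpolation_exponent(3) == Fraction(10, 3)  # three-dimensional Upsilon
assert interpolation_exponent(1) == 6                # one-dimensional angle
```

In the angular case this gives $\sqrt{f^\varepsilon_n} \in L^6$, and hence $\Vert f^\varepsilon_n \Vert_{L^3} = \Vert \sqrt{f^\varepsilon_n} \Vert_{L^6}^2 \leq C$.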
Our objective is then to make use of the *linear* diffusion in angle to upgrade the integrability of $f^\varepsilon_n$; to do so, we use an adapted version of the aforementioned interpolation lemma to the space $L^\infty(\Omega_T;L^2(0,2\pi)) \cap L^2(\Omega_T;H^1(0,2\pi))$, *cf.* Appendix [7](#sec:appendix interpolation){reference-type="ref" reference="sec:appendix interpolation"}. Using both [\[eq:unif L1 on f eps n\]](#eq:unif L1 on f eps n){reference-type="eqref" reference="eq:unif L1 on f eps n"} and the estimate on $\partial_\theta \sqrt{f^\varepsilon_n}$ from [\[eq:unif in n est for galerkin\]](#eq:unif in n est for galerkin){reference-type="eqref" reference="eq:unif in n est for galerkin"}, we remark that $$\Vert \sqrt{f^\varepsilon_n} \Vert_{L^\infty(\Omega_T;L^2(0,2\pi))} + \Vert \sqrt{f^\varepsilon_n} \Vert_{L^2(\Omega_T;H^1(0,2\pi))} \leq C.$$ A direct application of Lemma [Lemma 12](#lem:our dibenedetto){reference-type="ref" reference="lem:our dibenedetto"} then implies $$\Vert \sqrt{f^\varepsilon_n} \Vert_{L^6(\Upsilon_T)} \leq C,$$ and [\[eq:sqrt func di ben\]](#eq:sqrt func di ben){reference-type="eqref" reference="eq:sqrt func di ben"} follows. . *Estimate on $f\sqrt{1-\rho}$*: We write $$\begin{aligned} \sqrt{1-\rho^\varepsilon_n} \nabla_{\boldsymbol{\xi}} f^\varepsilon_n = 2\sqrt{f^\varepsilon_n} \sqrt{1-\rho^\varepsilon_n} \nabla_{\boldsymbol{\xi}} \sqrt{f^\varepsilon_n}, \end{aligned}$$ whence Hölder's inequality yields $$\Vert \sqrt{1-\rho^\varepsilon_n} \nabla_{\boldsymbol{\xi}} f^\varepsilon_n \Vert_{L^{\frac{3}{2}}(\Upsilon_T)} \leq 2 \Vert \sqrt{f^\varepsilon_n} \Vert_{L^6(\Upsilon_T)} \Vert \sqrt{1-\rho^\varepsilon_n}\nabla_{\boldsymbol{\xi}} \sqrt{f^\varepsilon_n} \Vert_{L^2(\Upsilon_T)} \leq C,$$ where we used [\[eq:main est gal i\]](#eq:main est gal i){reference-type="eqref" reference="eq:main est gal i"} and [\[eq:sqrt func di ben\]](#eq:sqrt func di ben){reference-type="eqref" reference="eq:sqrt func di ben"}. 
In turn, using the Hölder, Minkowski, and Jensen inequalities, we get $$\begin{aligned} \Vert \nabla_{\boldsymbol{\xi}}\big( f^\varepsilon_n \sqrt{1-\rho^\varepsilon_n} \big) \Vert_{L^{\frac{6}{5}}(\Upsilon_T)} &\leq \Vert \sqrt{1-\rho^\varepsilon_n} \nabla_{\boldsymbol{\xi}} f^\varepsilon_n \Vert_{L^{\frac{6}{5}}(\Upsilon_T)} + \Vert f^\varepsilon_n \nabla \sqrt{1-\rho^\varepsilon_n} \Vert_{L^{\frac{6}{5}}(\Upsilon_T)} \\ &\leq C\Vert \sqrt{1-\rho^\varepsilon_n} \nabla_{\boldsymbol{\xi}} f^\varepsilon_n \Vert_{L^{\frac{3}{2}}(\Upsilon_T)} + \Vert f^\varepsilon_n \Vert_{L^3(\Upsilon_T)} \Vert \nabla \sqrt{1-\rho^\varepsilon_n}\Vert_{L^2(\Omega_T)} \\ &\leq C, \end{aligned}$$ where we again used [\[eq:main est gal i\]](#eq:main est gal i){reference-type="eqref" reference="eq:main est gal i"} and [\[eq:sqrt func di ben\]](#eq:sqrt func di ben){reference-type="eqref" reference="eq:sqrt func di ben"}. Therefore, using again Jensen's inequality and [\[eq:sqrt func di ben\]](#eq:sqrt func di ben){reference-type="eqref" reference="eq:sqrt func di ben"} to bound $\Vert f^\varepsilon_n \sqrt{1-\rho^\varepsilon_n}\Vert_{L^{\frac{6}{5}}(\Upsilon_T)}$ uniformly in $n$ and $\varepsilon$, we obtain the estimate [\[eq:bound on f sqrt 1-rho main one for limit\]](#eq:bound on f sqrt 1-rho main one for limit){reference-type="eqref" reference="eq:bound on f sqrt 1-rho main one for limit"}. ◻ In what follows, we estimate the time derivatives of the transformed variable and original unknowns; these bounds will then permit us to apply the Aubin--Lions Lemma in the proof of Theorem [Theorem 1](#thm:main existence result){reference-type="ref" reference="thm:main existence result"}, *cf.* §[4](#sec:proof of existence result){reference-type="ref" reference="sec:proof of existence result"}.
To this end, we recall the angle-independent integrated entropy variable $$U^\varepsilon_n(t,x) = \int_0^{2\pi} u^\varepsilon_n(t,x,\theta) \, \mathrm{d} \theta,$$ which was defined at the end of the statement of Proposition [Proposition 4](#prop:transformed reg sys existence and est){reference-type="ref" reference="prop:transformed reg sys existence and est"}. Observe that, by virtue of Jensen's inequality, the entropy estimate [\[eq:main est gal i\]](#eq:main est gal i){reference-type="eqref" reference="eq:main est gal i"} yields $$\label{eq:est on U eps n} \sup_{t \in [0,T]} \int_\Omega |\sqrt{\varepsilon}U^\varepsilon_n(t)|^2 \, \mathrm{d} x + \int_{\Omega_T} |\nabla \sqrt{\varepsilon} U^\varepsilon_n|^2 \, \mathrm{d} x \, \mathrm{d} t \leq C.$$ Furthermore, integrating [\[eq:galerkin reg better\]](#eq:galerkin reg better){reference-type="eqref" reference="eq:galerkin reg better"} with respect to the angle variable yields the evolution equation for the angle-independent quantities: $$\label{eq:rho eqn in galerkin} \partial_t(\varepsilon U^\varepsilon_n + \rho^\varepsilon_n) + \mathrm{Pe}\mathop{\mathrm{div}}((1-\rho^\varepsilon_n){\bf p}^\varepsilon_n) = \varepsilon \Delta U^\varepsilon_n + D_e \Delta \rho^\varepsilon_n$$ in the weak sense, where $${\bf p}^\varepsilon_n(t,x) = \int_0^{2\pi} f^\varepsilon_n(t,x,\theta) {\bf e}(\theta) \, \mathrm{d} \theta$$ satisfies, using the non-negativity of $f^\varepsilon_n$ and the boundedness of $\rho^\varepsilon_n$, the uniform estimate $$\Vert {\bf p}^\varepsilon_n \Vert_{L^\infty(\Omega_T)} \leq 1.$$ **Corollary 7** (Uniform estimates on time derivatives).
*Let $\varepsilon,n,u^\varepsilon_n$ be as in Lemma [Lemma 5](#lem:existence of galerkin coeffs){reference-type="ref" reference="lem:existence of galerkin coeffs"} and $f^\varepsilon_n,\rho^\varepsilon_n$ be as in [\[eq:f eps n in terms of u eps n pre galerkin\]](#eq:f eps n in terms of u eps n pre galerkin){reference-type="eqref" reference="eq:f eps n in terms of u eps n pre galerkin"}. There exists a positive constant $C$, independent of $\varepsilon,n$, such that the following uniform estimates hold: $$\begin{aligned} \Vert \partial_t(\varepsilon u^\varepsilon_n + f^\varepsilon_n) \Vert_{X'} \leq C, \label{eq:main time deriv i} \\ \Vert \partial_t(\varepsilon U^\varepsilon_n + \rho^\varepsilon_n ) \Vert_{Y'} \leq C. \label{eq:main time deriv ii} \end{aligned}$$* *Proof.* In view of the estimates of Lemma [Lemma 6](#lem:unif est for galerkin){reference-type="ref" reference="lem:unif est for galerkin"}, a standard density argument shows that the weak formulation [\[eq:galerkin regularised weak form\]](#eq:galerkin regularised weak form){reference-type="eqref" reference="eq:galerkin regularised weak form"}, *cf.* [\[eq:galerkin reg better\]](#eq:galerkin reg better){reference-type="eqref" reference="eq:galerkin reg better"}, generalises to the space of test functions $\mathcal{A}$ defined in [\[eq:admissible test functions\]](#eq:admissible test functions){reference-type="eqref" reference="eq:admissible test functions"}, *i.e.*, for all $\psi \in \mathcal{A}$, there holds $$\label{eq:galerkin regularised weak form for time deriv est} \begin{aligned} - \langle \partial_t (\varepsilon u^\varepsilon_n &+ f^\varepsilon_n) , \psi \rangle + \mathrm{Pe}\int_{\Upsilon_T} (1-\rho^\varepsilon_n)f^\varepsilon_n {\bf e}(\theta) \cdot \nabla \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t \\ =&\int_{\Upsilon_T} \Big( \varepsilon \nabla_{\boldsymbol{\xi}}u^\varepsilon_n \cdot \nabla_{\boldsymbol{\xi}}\psi + D_e \big((1-\rho^\varepsilon_n)\nabla f^\varepsilon_n + f^\varepsilon_n
\nabla \rho^\varepsilon_n \big) \cdot \nabla \psi + \partial_\theta f^\varepsilon_n \partial_\theta \psi \Big) \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t. \end{aligned}$$ *Estimate on $\partial_t(\varepsilon u + f)$*: The weak formulation [\[eq:galerkin regularised weak form for time deriv est\]](#eq:galerkin regularised weak form for time deriv est){reference-type="eqref" reference="eq:galerkin regularised weak form for time deriv est"} implies that, for all $\psi \in \mathcal{A}$, there holds $$\label{eq:first exp for time deriv est in galerkin} \begin{aligned} |\langle \partial_t(\varepsilon u^\varepsilon_n + f^\varepsilon_n), \psi \rangle| \leq& D_e \bigg| \int_{\Upsilon_T} (1-\rho^\varepsilon_n) \nabla f^\varepsilon_n \cdot \nabla \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t \bigg| + D_e \bigg| \int_{\Upsilon_T} f^\varepsilon_n \nabla \rho^\varepsilon_n \cdot \nabla \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t \bigg| \\ &+ \sqrt{\varepsilon} \bigg| \int_{\Upsilon_T} \nabla_{\boldsymbol{\xi}}\sqrt{\varepsilon}u^\varepsilon_n \cdot \nabla_{\boldsymbol{\xi}}\psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t \bigg| + \bigg|\int_{\Upsilon_T} \partial_\theta f^\varepsilon_n \partial_\theta \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t \bigg| \\ &+ \mathrm{Pe}\bigg| \int_{\Upsilon_T} (1-\rho^\varepsilon_n)f^\varepsilon_n {\bf e}(\theta) \cdot \nabla \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t \bigg| \\ =:& \sum_{j=1}^5 I_j. \end{aligned}$$ Using the estimates of Lemma [Lemma 6](#lem:unif est for galerkin){reference-type="ref" reference="lem:unif est for galerkin"}, we bound each term on the right-hand side of [\[eq:first exp for time deriv est in galerkin\]](#eq:first exp for time deriv est in galerkin){reference-type="eqref" reference="eq:first exp for time deriv est in galerkin"} individually.
To begin with, Hölder's inequality yields $$I_2 \leq \Vert f^\varepsilon_n \Vert_{L^3(\Upsilon_T)} \Vert \nabla \rho^\varepsilon_n \Vert_{L^2(\Upsilon_T)} \Vert \nabla_{\boldsymbol{\xi}} \psi \Vert_{L^{6}(\Upsilon_T)} \leq C\Vert \nabla_{\boldsymbol{\xi}} \psi \Vert_{L^{6}(\Upsilon_T)}.$$ Next, we use the second relation in [\[eq:convenient expansion\]](#eq:convenient expansion){reference-type="eqref" reference="eq:convenient expansion"}, the previous bound, and Jensen's inequality to write $$\begin{aligned} I_1 \leq C\Big(\Vert \sqrt{(1-\rho^\varepsilon_n)f^\varepsilon_n} \Vert_{L^{\frac{10}{3}}(\Upsilon_T)} \Vert \nabla_{\boldsymbol{\xi}} \sqrt{(1-\rho^\varepsilon_n)f^\varepsilon_n} \Vert_{L^{2}(\Upsilon_T)} \Vert \nabla_{\boldsymbol{\xi}} \psi \Vert_{L^{5}(\Upsilon_T)} \!+ \!I_2 \Big)\! \leq C \Vert \nabla_{\boldsymbol{\xi}} \psi \Vert_{L^{6}(\Upsilon_T)}. \end{aligned}$$ Similarly, using the first relation in [\[eq:convenient expansion\]](#eq:convenient expansion){reference-type="eqref" reference="eq:convenient expansion"}, $$\begin{aligned} I_4 \leq \Vert \sqrt{f^\varepsilon_n} \Vert_{L^6(\Upsilon_T)} \Vert \partial_\theta \sqrt{f^\varepsilon_n} \Vert_{L^2(\Upsilon_T)} \Vert \partial_\theta \psi \Vert_{L^{3}(\Upsilon_T)} \leq C \Vert \partial_\theta \psi \Vert_{L^{3}(\Upsilon_T)}. \end{aligned}$$ For the third term, we have $$I_3 \leq \sqrt{\varepsilon} \Vert \nabla_{\boldsymbol{\xi}} \sqrt{\varepsilon} u^\varepsilon_n \Vert_{L^2(\Upsilon_T)} \Vert \nabla_{\boldsymbol{\xi}} \psi \Vert_{L^2(\Upsilon_T)} \leq C \sqrt{\varepsilon}\Vert \nabla_{\boldsymbol{\xi}} \psi \Vert_{L^2(\Upsilon_T)}.$$ For the final integral, the uniform boundedness of $\rho^\varepsilon_n$ and Jensen's inequality imply $$\begin{aligned} I_5 \leq & \Vert f^\varepsilon_n \Vert_{L^{3}(\Upsilon_T)} \Vert \nabla_{\boldsymbol{\xi}} \psi \Vert_{L^{\frac{3}{2}}(\Upsilon_T)} \leq C \Vert \nabla_{\boldsymbol{\xi}} \psi \Vert_{L^{\frac{3}{2}}(\Upsilon_T)}.
\end{aligned}$$ By combining the previous bounds and using Jensen's inequality, we deduce $$|\langle \partial_t(\varepsilon u^\varepsilon_n + f^\varepsilon_n) , \psi \rangle| \leq C \Vert \nabla_{\boldsymbol{\xi}} \psi \Vert_{L^6(\Upsilon_T)},$$ and [\[eq:main time deriv i\]](#eq:main time deriv i){reference-type="eqref" reference="eq:main time deriv i"} follows. *Estimate on $\partial_t (\varepsilon U + \rho)$*: Testing [\[eq:rho eqn in galerkin\]](#eq:rho eqn in galerkin){reference-type="eqref" reference="eq:rho eqn in galerkin"} against $\psi \in \mathcal{A}_s$, we obtain $$\begin{aligned} \langle \partial_t(\varepsilon U^\varepsilon_n + \rho^\varepsilon_n) , \psi \rangle = &\mathrm{Pe}\int_{\Omega_T} (1-\rho^\varepsilon_n) {\bf p}^\varepsilon_n \cdot \nabla \psi \, \mathrm{d} x \, \mathrm{d} t \\ &- \sqrt{\varepsilon} \int_{\Omega_T} \nabla \sqrt{\varepsilon}U^\varepsilon_n \cdot \nabla \psi \, \mathrm{d} x \, \mathrm{d} t - D_e \int_{\Omega_T} \nabla \rho^\varepsilon_n \cdot \nabla \psi \, \mathrm{d} x \, \mathrm{d} t, \end{aligned}$$ from which we deduce, using [\[eq:est on U eps n\]](#eq:est on U eps n){reference-type="eqref" reference="eq:est on U eps n"}, the uniform boundedness of ${\bf p}^\varepsilon_n$, and Jensen's inequality, $$\begin{aligned} |\langle \partial_t(\varepsilon U^\varepsilon_n &+ \rho^\varepsilon_n) , \psi \rangle| \\ \leq & C \Big( \Vert \nabla \psi \Vert_{L^1(\Omega_T)} + \sqrt{\varepsilon} \Vert \nabla \sqrt{\varepsilon} U^\varepsilon_n \Vert_{L^2(\Omega_T)} \Vert \nabla \psi \Vert_{L^2(\Omega_T)} + \Vert \nabla \rho^\varepsilon_n \Vert_{L^2(\Omega_T)}\Vert \nabla \psi \Vert_{L^2(\Omega_T)} \Big) \\ \leq & C \Vert \nabla \psi \Vert_{L^2(\Omega_T)} , \end{aligned}$$ whence we deduce [\[eq:main time deriv ii\]](#eq:main time deriv ii){reference-type="eqref" reference="eq:main time deriv ii"}.
◻ The results of Lemmas [Lemma 5](#lem:existence of galerkin coeffs){reference-type="ref" reference="lem:existence of galerkin coeffs"}-[Lemma 6](#lem:unif est for galerkin){reference-type="ref" reference="lem:unif est for galerkin"} and Corollary [Corollary 7](#cor:time deriv est gal){reference-type="ref" reference="cor:time deriv est gal"} prove Proposition [Proposition 4](#prop:transformed reg sys existence and est){reference-type="ref" reference="prop:transformed reg sys existence and est"}. # Proof of the Main Existence Theorem {#sec:proof of existence result} In this section, we study the convergence of the approximate solutions and obtain the existence of a weak solution to [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"}. We shall essentially take the double-limit as $\varepsilon \to 0$ in the regularisation parameter and $n\to\infty$ in the Galerkin approximation simultaneously, which will entail the use of a diagonal argument. We explain why we proceed in this manner in the next paragraph. The reason for employing a double-limit strategy is that the time-derivative estimates of Corollary [Corollary 7](#cor:time deriv est gal){reference-type="ref" reference="cor:time deriv est gal"} do not decouple into separate estimates on $\partial_t \varepsilon U^\varepsilon_n$ and $\partial_t \rho^\varepsilon_n$ individually. This means that the Aubin--Lions Lemma yields the strong convergence of only the sum $\varepsilon U^\varepsilon_n + \rho^\varepsilon_n$ as $n\to\infty$ for fixed $\varepsilon$. It is then, to the authors' knowledge, not possible to obtain the strong convergence $\rho^\varepsilon_n \to \rho^\varepsilon$ for fixed $\varepsilon$ without having already determined that $\varepsilon U^\varepsilon_n$ converges strongly. 
While the classical theory of [@ladyzhenskaya] implies that the linear heat term $\varepsilon(\partial_t u^\varepsilon_n - \Delta_{\boldsymbol{\xi}} u^\varepsilon_n)$ ought to yield $H^2$-regularity of $u^\varepsilon_n$ and hence $L^2$-boundedness of $\partial_t u^\varepsilon_n$ (from which one could infer strong convergence of $u^\varepsilon_n$ by another application of Aubin--Lions), the presence of the non-local operator $A$ and the non-local matrix $\tilde{M}$ in the equation [\[eq:galerkin regularised\]](#eq:galerkin regularised){reference-type="eqref" reference="eq:galerkin regularised"} makes *quantitative estimates* rather intractable. In turn, we are not able to deduce that $U^\varepsilon_n$ converges strongly as $n\to\infty$ while keeping $\varepsilon$ fixed. Instead, we rely on the uniform bound on $\Vert \sqrt{\varepsilon}U^\varepsilon_n \Vert_{L^2(\Omega_T)}$ to deduce that $\varepsilon U^\varepsilon_n \to 0$ strongly as $\varepsilon\to0$; whence the aforementioned strong convergence of the sum $\varepsilon U^\varepsilon_n + \rho^\varepsilon_n$ yields the desired strong convergence of $\rho^\varepsilon_n$. The main difficulty in implementing this strategy lies in making sure that the sequence of regularised initial data converges appropriately. 
This is all the more subtle since the initial functions $\{f^\varepsilon_n(0),\rho^\varepsilon_n(0)\}_{\varepsilon,n}$ are determined via the usual non-linear transformation from $u^\varepsilon_n(0)$, the $n$-th Galerkin approximation of the initial data $u^\varepsilon_0$, *i.e.*, $$\label{eq:initial values for f eps and co} u^\varepsilon_n(0) = \sum_{j=1}^{n} \langle u^\varepsilon_0,\varphi_j \rangle_{H^2_{\mathrm{per}}(\Upsilon)} \varphi_j, \quad f^\varepsilon_n(0) = \frac{e^{u^\varepsilon_n(0)}}{1+\int_0^{2\pi}e^{u^\varepsilon_n(0)} \, \mathrm{d} \theta}, \quad \rho^\varepsilon_n(0) = \int_0^{2\pi} f^\varepsilon_n(0) \, \mathrm{d} \theta,$$ where we recall that $\{\varphi_j \}_j$ is the orthonormal basis of $H^2_{\mathrm{per}}(\Upsilon)$ used in §[3.1](#sec:existence galerkin){reference-type="ref" reference="sec:existence galerkin"}. The next lemma shows that there exists a choice of diagonal subsequence from the array $\{f^\varepsilon_n\}_{\varepsilon,n}$ which preserves the desired convergence to the initial data $f_0$. The proof is deferred to Appendix [8](#sec:appendix initial data){reference-type="ref" reference="sec:appendix initial data"}. **Lemma 8** (Diagonal convergence of initial data). *There exists a diagonal subsequence of the array $\{f^\varepsilon_n\}_{\varepsilon,n}$, which, with slight abuse of notation, we denote by $\{f^\varepsilon\}_\varepsilon$, such that $$\lim_{\varepsilon \to 0}\Vert f^\varepsilon(0) - f_0 \Vert_{L^p(\Upsilon)} = 0.$$* We are now ready to give the proof of the main existence theorem, Theorem [Theorem 1](#thm:main existence result){reference-type="ref" reference="thm:main existence result"}.
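Before proceeding, we record that the non-linear transformation in [\[eq:initial values for f eps and co\]](#eq:initial values for f eps and co){reference-type="eqref" reference="eq:initial values for f eps and co"} automatically enforces the saturation constraint at the initial time: writing $S := \int_0^{2\pi} e^{u^\varepsilon_n(0)} \, \mathrm{d} \theta$, there holds $$\rho^\varepsilon_n(0) = \int_0^{2\pi} \frac{e^{u^\varepsilon_n(0)}}{1+S} \, \mathrm{d} \theta = \frac{S}{1+S} \in (0,1),$$ and, in particular, $f^\varepsilon_n(0) > 0$ pointwise.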
We will show that the sequence $\{f^\varepsilon\}_\varepsilon$ constructed in the previous lemma admits a subsequential limit which is indeed a weak solution of [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"} in the sense of Definition [Definition 1](#def:weak sol){reference-type="ref" reference="def:weak sol"}. We emphasise that $\{f^\varepsilon\}_\varepsilon$ satisfies the uniform estimates of Proposition [Proposition 4](#prop:transformed reg sys existence and est){reference-type="ref" reference="prop:transformed reg sys existence and est"}. *Proof of Theorem [Theorem 1](#thm:main existence result){reference-type="ref" reference="thm:main existence result"}.* We assume for the time being that the test function $\psi$ belongs to the stronger space $\mathcal{A}$. We shall relax this assumption at the end of the proof. *Convergence of time derivatives*: The uniform bound [\[eq:main time deriv reg i\]](#eq:main time deriv reg i){reference-type="eqref" reference="eq:main time deriv reg i"} implies $$\partial_t (\varepsilon u^\varepsilon + f^\varepsilon) \overset{*}{\rightharpoonup} \zeta \quad \text{ weakly-* in } X',$$ for some unknown $\zeta \in X'$.
The estimate [\[eq:sqrt func di ben reg\]](#eq:sqrt func di ben reg){reference-type="eqref" reference="eq:sqrt func di ben reg"} implies $f^\varepsilon \rightharpoonup f$ weakly in $L^3(\Upsilon_T)$ whence, for all $\psi \in \mathcal{A}$, $$\begin{aligned} \langle \zeta, \psi \rangle = \lim_{\varepsilon \to 0} \langle \partial_t(\varepsilon u^\varepsilon + f^\varepsilon) , \psi \rangle &= -\lim_{\varepsilon\to0}\sqrt{\varepsilon}\int_{\Upsilon_T} \sqrt{\varepsilon}u^\varepsilon \partial_t \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t -\lim_{\varepsilon\to0}\int_{\Upsilon_T} f^\varepsilon \partial_t \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t \\ &= -\int_{\Upsilon_T} f \partial_t \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t, \end{aligned}$$ where we used that $\Vert \sqrt{\varepsilon} u^\varepsilon\Vert_{L^2(\Upsilon_T)} \leq C$ from [\[eq:main est reg i\]](#eq:main est reg i){reference-type="eqref" reference="eq:main est reg i"} to obtain the final line. It therefore follows that $\zeta = \partial_t f$ and $\partial_t f^\varepsilon \overset{*}{\rightharpoonup} \partial_t f$ weakly-\* in $X'$. An analogous argument shows $\partial_t \rho^\varepsilon \overset{*}{\rightharpoonup} \partial_t \rho$ weakly-\* in $Y'$. *Strong convergence of $\rho^\varepsilon$*: To begin with, [\[eq:main est reg i\]](#eq:main est reg i){reference-type="eqref" reference="eq:main est reg i"} implies $\Vert \rho^\varepsilon \Vert_{L^2(0,T;H^1(\Omega))} \leq C$, whence it follows that $\rho^\varepsilon \rightharpoonup \rho$ weakly in $L^2(0,T;H^1(\Omega))$.
Meanwhile, the uniform estimates from Proposition [Proposition 4](#prop:transformed reg sys existence and est){reference-type="ref" reference="prop:transformed reg sys existence and est"} on $\sqrt{\varepsilon} U^\varepsilon$ and $\rho^\varepsilon$ imply that $$\Vert \varepsilon U^\varepsilon + \rho^\varepsilon \Vert_{L^\infty(0,T;L^2(\Omega))} + \Vert \varepsilon U^\varepsilon + \rho^\varepsilon \Vert_{L^2(0,T;H^1(\Omega))} \leq C.$$ In turn, the time-derivative bound [\[eq:main time deriv reg ii\]](#eq:main time deriv reg ii){reference-type="eqref" reference="eq:main time deriv reg ii"} and the Aubin--Lions Lemma imply that, up to a subsequence which we do not relabel, $$\varepsilon U^\varepsilon + \rho^\varepsilon \to A \quad \text{strongly in } L^2(\Omega_T),$$ for some unknown $A \in L^2(\Omega_T)$. Note that the triangle inequality and the uniform estimate on $\sqrt{\varepsilon}U^\varepsilon$ from Proposition [Proposition 4](#prop:transformed reg sys existence and est){reference-type="ref" reference="prop:transformed reg sys existence and est"} imply $$\begin{aligned} \Vert \rho^\varepsilon - A \Vert_{L^2(\Omega_T)} \leq & \Vert \varepsilon U^\varepsilon + \rho^\varepsilon - A \Vert_{L^2(\Omega_T)} + \sqrt{\varepsilon} \underbrace{\Vert \sqrt{\varepsilon} U^\varepsilon \Vert_{L^2(\Omega_T)}}_{\leq C} \to 0, \end{aligned}$$ and thus by the uniqueness of limits, $A=\rho$, *i.e.*, $$\rho^\varepsilon \to \rho \quad \text{strongly in } L^2(\Omega_T),$$ and $\rho \in [0,1]$ a.e. in $\Omega_T$. Moreover, for all $q \in (2,\infty)$, the boundedness of $\rho,\rho^\varepsilon$ implies $$\begin{aligned} \int_{\Omega_T} |\rho^\varepsilon - \rho|^q \, \mathrm{d} x \, \mathrm{d} t = \int_{\Omega_T} |\rho^\varepsilon - \rho|^{q-2} |\rho^\varepsilon - \rho|^2 \, \mathrm{d} x \, \mathrm{d} t \leq \int_{\Omega_T} |\rho^\varepsilon - \rho|^2 \, \mathrm{d} x \, \mathrm{d} t \to 0.
\end{aligned}$$ Using Jensen's inequality to cover the indices $q \in [1,2)$, it follows that, for all $q \in [1,\infty)$, $$\label{eq:strong conv rho} \rho^\varepsilon \to \rho \quad \text{strongly in } L^q(\Omega_T).$$ The previous convergence implies that $\rho^\varepsilon \to \rho$ a.e. in $\Omega_T$, up to a subsequence which we do not relabel. In turn, given any $q \in [1,\infty)$, an application of the Dominated Convergence Theorem shows, using again $0 \leq \rho,\rho^\varepsilon \leq 1$ to establish an admissible constant majorant function, $$\begin{aligned} \int_{\Omega_T} \big|\sqrt{1-\rho^\varepsilon} - \sqrt{1-\rho} \big|^q \, \mathrm{d} x \, \mathrm{d} t \to 0, \end{aligned}$$ *i.e.*, for all $q \in [1,\infty)$, $$\label{eq:strong cov for sqrt 1-rho} \sqrt{1-\rho^\varepsilon} \to \sqrt{1-\rho} \quad \text{strongly in } L^q(\Omega_T).$$ *Weak convergences and bounds*: The uniform estimate [\[eq:bound on f sqrt 1-rho main one for limit reg\]](#eq:bound on f sqrt 1-rho main one for limit reg){reference-type="eqref" reference="eq:bound on f sqrt 1-rho main one for limit reg"} implies that there exists $B \in L^{\frac{6}{5}}(0,T;W^{1,\frac{6}{5}}(\Upsilon))$ such that, up to a subsequence which we do not relabel, there holds $$f^\varepsilon \sqrt{1-\rho^\varepsilon} \rightharpoonup B \quad \text{weakly in } L^{\frac{6}{5}}(0,T;W^{1,\frac{6}{5}}(\Upsilon)).$$ Meanwhile, the uniform estimate [\[eq:sqrt func di ben reg\]](#eq:sqrt func di ben reg){reference-type="eqref" reference="eq:sqrt func di ben reg"} implies that $f^\varepsilon \rightharpoonup f$ weakly in $L^3(\Upsilon_T)$, so that, recalling the strong convergence [\[eq:strong cov for sqrt 1-rho\]](#eq:strong cov for sqrt 1-rho){reference-type="eqref" reference="eq:strong cov for sqrt 1-rho"}, there holds $$f^\varepsilon \sqrt{1-\rho^\varepsilon} \rightharpoonup f \sqrt{1-\rho} \quad \text{weakly in } L^3(\Upsilon_T),$$ and the uniqueness of weak limits implies $B = f \sqrt{1-\rho}$ a.e.
in $\Upsilon_T$. We therefore have $$\label{eq:weak conv for f sqrt 1-rho} f^\varepsilon \sqrt{1-\rho^\varepsilon} \rightharpoonup f\sqrt{1-\rho} \quad \text{weakly in } L^{\frac{6}{5}}(0,T;W^{1,\frac{6}{5}}(\Upsilon)).$$ In turn, recalling the strong convergence [\[eq:strong cov for sqrt 1-rho\]](#eq:strong cov for sqrt 1-rho){reference-type="eqref" reference="eq:strong cov for sqrt 1-rho"}, we have that $$\label{eq:weak conv for diffusive term i} \sqrt{1-\rho^\varepsilon} \nabla_{\boldsymbol{\xi}}(f^\varepsilon \sqrt{1-\rho^\varepsilon}) \rightharpoonup \sqrt{1-\rho} \nabla_{\boldsymbol{\xi}} (f\sqrt{1-\rho}) \quad \text{weakly in } L^{\frac{6}{5}}(\Upsilon_T).$$ Analogous arguments show $\sqrt{1-\rho^\varepsilon}\nabla \sqrt{f^\varepsilon} \rightharpoonup \sqrt{1-\rho}\nabla\sqrt{f}$ and $\nabla \sqrt{1-\rho^\varepsilon} \rightharpoonup \nabla \sqrt{1-\rho}$ weakly in $L^2(\Omega_T)$. Similarly, we deduce from [\[eq:main est reg i\]](#eq:main est reg i){reference-type="eqref" reference="eq:main est reg i"} and [\[eq:sqrt func di ben reg\]](#eq:sqrt func di ben reg){reference-type="eqref" reference="eq:sqrt func di ben reg"} that $$\Vert \partial_\theta f^\varepsilon \Vert_{L^{\frac{3}{2}}(\Upsilon_T)} \leq C\Vert \sqrt{f^\varepsilon} \Vert_{L^6(\Upsilon_T)} \Vert \partial_\theta \sqrt{f^\varepsilon} \Vert_{L^2(\Upsilon_T)} \leq C,$$ whence $f^\varepsilon$ is uniformly bounded in $L^{\frac{3}{2}}(\Omega_T;W^{1,\frac{3}{2}}(0,2\pi))$. Using the uniqueness of limits and $f^\varepsilon \rightharpoonup f$ weakly in $L^3(\Upsilon_T)$, we deduce as before that $$\label{eq:weak conv of dtheta f eps} \partial_\theta f^\varepsilon \rightharpoonup \partial_\theta f \quad \text{weakly in } L^{\frac{3}{2}}(\Upsilon_T).$$ An analogous approach shows $\partial_\theta \sqrt{f^\varepsilon} \rightharpoonup \partial_\theta \sqrt{f}$ weakly in $L^2(\Upsilon_T)$. In view of the weak (resp. 
weak-\*) lower semicontinuity of the norms, the limits satisfy $$\label{eq:bounds final weak lower semi} \begin{aligned} &f \in L^3(\Upsilon_T), \quad \partial_t f \in X' , \quad \partial_\theta \sqrt{f} \in L^{2}(\Upsilon_T), \quad \nabla\sqrt{1-\rho} \in L^2(\Omega_T), \\ &\partial_t \rho \in Y', \quad \sqrt{1-\rho}\nabla \sqrt{f} \in L^2(\Upsilon_T), \quad f\sqrt{1-\rho} \in L^{\frac{6}{5}}(0,T;W^{1,\frac{6}{5}}(\Upsilon)). \end{aligned}$$ . *Convergence of spatial diffusions*: We rewrite the diffusive terms as follows $$\label{eq:rewriting of diffusive terms later} \begin{aligned} (1-\rho^\varepsilon)\nabla f^\varepsilon + f^\varepsilon \nabla \rho^\varepsilon = \sqrt{1-\rho^\varepsilon} \nabla (f^\varepsilon \sqrt{1-\rho^\varepsilon}) + \frac{3}{2} f^\varepsilon \nabla \rho^\varepsilon. \end{aligned}$$ The weak convergence [\[eq:weak conv for diffusive term i\]](#eq:weak conv for diffusive term i){reference-type="eqref" reference="eq:weak conv for diffusive term i"} implies $$\label{eq:weak conv diff i actual test func} \lim_{\varepsilon \to 0} \int_{\Upsilon_T} \sqrt{1-\rho^\varepsilon} \nabla (f^\varepsilon \sqrt{1-\rho^\varepsilon}) \cdot \nabla \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t = \int_{\Upsilon_T} \sqrt{1-\rho} \nabla (f\sqrt{1-\rho}) \cdot \nabla \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t,$$ and the desired convergence for the first term on the right-hand side of [\[eq:rewriting of diffusive terms later\]](#eq:rewriting of diffusive terms later){reference-type="eqref" reference="eq:rewriting of diffusive terms later"} is verified. The second term on the right-hand side of [\[eq:rewriting of diffusive terms later\]](#eq:rewriting of diffusive terms later){reference-type="eqref" reference="eq:rewriting of diffusive terms later"} is dealt with as follows; we essentially use an integration by parts to place two derivatives on the test function prior to passing to the limit. 
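For the reader's convenience, the identity [\[eq:rewriting of diffusive terms later\]](#eq:rewriting of diffusive terms later){reference-type="eqref" reference="eq:rewriting of diffusive terms later"} follows from the (formal) chain rule, valid wherever $\rho^\varepsilon < 1$: since $\nabla \sqrt{1-\rho^\varepsilon} = -\nabla \rho^\varepsilon/(2\sqrt{1-\rho^\varepsilon})$, there holds $$\sqrt{1-\rho^\varepsilon} \nabla \big( f^\varepsilon \sqrt{1-\rho^\varepsilon} \big) = (1-\rho^\varepsilon) \nabla f^\varepsilon + f^\varepsilon \sqrt{1-\rho^\varepsilon} \nabla \sqrt{1-\rho^\varepsilon} = (1-\rho^\varepsilon) \nabla f^\varepsilon - \frac{1}{2} f^\varepsilon \nabla \rho^\varepsilon,$$ and adding $\frac{3}{2} f^\varepsilon \nabla \rho^\varepsilon$ to both sides recovers [\[eq:rewriting of diffusive terms later\]](#eq:rewriting of diffusive terms later){reference-type="eqref" reference="eq:rewriting of diffusive terms later"}.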
We write $$f^\varepsilon \nabla \rho^\varepsilon = - 2 f^\varepsilon \sqrt{1-\rho^\varepsilon} \nabla \sqrt{1-\rho^\varepsilon},$$ and observe that an integration by parts yields $$\label{eq:steps to decompose final convergence} \begin{aligned} \int_{\Upsilon_T} f^\varepsilon \nabla \rho^\varepsilon \cdot \nabla \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t =& -2 \int_{\Upsilon_T} f^\varepsilon \sqrt{1-\rho^\varepsilon} \nabla \sqrt{1-\rho^\varepsilon} \cdot \nabla \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t \\ =& 2 \int_{\Upsilon_T} \sqrt{1-\rho^\varepsilon} \mathop{\mathrm{div}}\big( f^\varepsilon \sqrt{1-\rho^\varepsilon} \nabla \psi \big) \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t \\ =& I_1^\varepsilon + I_2^\varepsilon, \end{aligned}$$ where $$\begin{aligned} I_1^\varepsilon :=& 2 \int_0^T \int_\Upsilon \sqrt{1-\rho^\varepsilon} \nabla \big( f^\varepsilon \sqrt{1-\rho^\varepsilon} \big) \cdot \nabla \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t, \\ I_2^\varepsilon :=& 2\int_0^T \int_\Upsilon \sqrt{1-\rho^\varepsilon} \big( f^\varepsilon \sqrt{1-\rho^\varepsilon}\big) \Delta \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t. 
\end{aligned}$$ The weak convergence from [\[eq:weak conv for f sqrt 1-rho\]](#eq:weak conv for f sqrt 1-rho){reference-type="eqref" reference="eq:weak conv for f sqrt 1-rho"} along with the strong convergence from [\[eq:strong cov for sqrt 1-rho\]](#eq:strong cov for sqrt 1-rho){reference-type="eqref" reference="eq:strong cov for sqrt 1-rho"} implies the following weak convergences in $L^{1}(\Upsilon_T)$: $$\begin{aligned} & \sqrt{1-\rho^\varepsilon} \nabla \big( f^\varepsilon \sqrt{1-\rho^\varepsilon} \big) \rightharpoonup \sqrt{1-\rho} \nabla \big( f \sqrt{1-\rho} \big), \\ & \sqrt{1-\rho^\varepsilon} \big( f^\varepsilon \sqrt{1-\rho^\varepsilon} \big) \rightharpoonup \sqrt{1-\rho} \big( f \sqrt{1-\rho} \big), \end{aligned}$$ from which we deduce $$\begin{aligned} \lim_{\varepsilon \to 0} I_1^\varepsilon = & 2 \int_{\Upsilon_T} \sqrt{1-\rho} \nabla \big( f \sqrt{1-\rho} \big) \cdot \nabla \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t, \\ \lim_{\varepsilon \to 0} I_2^\varepsilon = & 2\int_{\Upsilon_T} \sqrt{1-\rho} \big( f \sqrt{1-\rho}\big) \Delta \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t.
\end{aligned}$$ Using the computation in [\[eq:steps to decompose final convergence\]](#eq:steps to decompose final convergence){reference-type="eqref" reference="eq:steps to decompose final convergence"} again, we obtain $$\begin{aligned} \lim_{\varepsilon\to 0}\int_{\Upsilon_T} f^\varepsilon \nabla \rho^\varepsilon \cdot \nabla \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t =& 2 \int_{\Upsilon_T} \sqrt{1-\rho} \mathop{\mathrm{div}}\big( f \sqrt{1-\rho} \nabla \psi \big) \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t \\ =& \int_{\Upsilon_T} f \nabla \rho \cdot \nabla \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t, \end{aligned}$$ where we used that the above quantities are regular enough to perform the integration by parts; *cf.* [\[eq:bounds final weak lower semi\]](#eq:bounds final weak lower semi){reference-type="eqref" reference="eq:bounds final weak lower semi"}. By collating the previous limit with [\[eq:weak conv diff i actual test func\]](#eq:weak conv diff i actual test func){reference-type="eqref" reference="eq:weak conv diff i actual test func"} and using [\[eq:rewriting of diffusive terms later\]](#eq:rewriting of diffusive terms later){reference-type="eqref" reference="eq:rewriting of diffusive terms later"}, we get $$\lim_{\varepsilon \to 0} \int_{\Upsilon_T} \big( (1-\rho^\varepsilon)\nabla f^\varepsilon + f^\varepsilon \nabla \rho^\varepsilon \big) \cdot \nabla \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t = \int_{\Upsilon_T} \big( (1-\rho)\nabla f + f \nabla \rho \big) \cdot \nabla \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t.$$
*Convergence of angular diffusion, non-local drift, and $\varepsilon$-error term*: For the angular diffusion, the weak convergence of $\partial_\theta f^\varepsilon$ in [\[eq:weak conv of dtheta f eps\]](#eq:weak conv of dtheta f eps){reference-type="eqref" reference="eq:weak conv of dtheta f eps"} implies directly $$\begin{aligned} \lim_{\varepsilon \to 0} \int_{\Upsilon_T} \partial_\theta f^\varepsilon \partial_\theta \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t = \int_{\Upsilon_T} \partial_\theta f \partial_\theta \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t. \end{aligned}$$ For the non-local drift, the strong convergence of $\rho^\varepsilon$ from [\[eq:strong conv rho\]](#eq:strong conv rho){reference-type="eqref" reference="eq:strong conv rho"} and the weak convergence of $f^\varepsilon$ imply $(1-\rho^\varepsilon)f^\varepsilon \rightharpoonup (1-\rho) f$ weakly in $L^1(\Upsilon_T)$, whence $$\lim_{\varepsilon\to0}\int_{\Upsilon_T} (1-\rho^\varepsilon) f^\varepsilon {\bf e}(\theta) \cdot \nabla \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t = \int_{\Upsilon_T} (1-\rho) f {\bf e}(\theta) \cdot \nabla \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t.$$ Similarly, we have $$\lim_{\varepsilon \to 0}\bigg|\int_{\Upsilon_T} \varepsilon \nabla_{\boldsymbol{\xi}}u^\varepsilon \cdot \nabla_{\boldsymbol{\xi}} \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t\bigg| \leq \lim_{\varepsilon \to 0}\sqrt{\varepsilon}\underbrace{\Vert \nabla_{\boldsymbol{\xi}} \sqrt{\varepsilon} u^\varepsilon \Vert_{L^2(\Upsilon_T)}}_{\leq C} \Vert \nabla_{\boldsymbol{\xi}} \psi \Vert_{L^2(\Upsilon_T)} = 0,$$ where we used the uniform estimate [\[eq:main est reg i\]](#eq:main est reg i){reference-type="eqref" reference="eq:main est reg i"}.
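The weak-strong limit used above for the drift term can be made explicit: for any bounded field $g$ (here $g = {\bf e}(\theta) \cdot \nabla \psi$), we decompose $$\int_{\Upsilon_T} (1-\rho^\varepsilon) f^\varepsilon g \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t = \int_{\Upsilon_T} (\rho - \rho^\varepsilon) f^\varepsilon g \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t + \int_{\Upsilon_T} (1-\rho) f^\varepsilon g \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t,$$ where the first term vanishes in the limit by Hölder's inequality, the uniform $L^3(\Upsilon_T)$ bound on $f^\varepsilon$, and the strong convergence [\[eq:strong conv rho\]](#eq:strong conv rho){reference-type="eqref" reference="eq:strong conv rho"}, while the second term converges to the expected limit since $(1-\rho) g \in L^{\frac{3}{2}}(\Upsilon_T)$ and $f^\varepsilon \rightharpoonup f$ weakly in $L^3(\Upsilon_T)$.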
*Admissible test functions and attainment of initial data*: It therefore follows that the weak formulation of Definition [Definition 1](#def:weak sol){reference-type="ref" reference="def:weak sol"} holds for all test functions $\psi \in \mathcal{A}$. Using the density of this latter space in $X$ and the fact that $f,\rho$ and their derivatives are bounded in suitable spaces, *cf.* equation [\[eq:weak form is well defined in intro\]](#eq:weak form is well defined in intro){reference-type="eqref" reference="eq:weak form is well defined in intro"}, the weak formulation extends to test functions belonging to $X$. Finally, the attainment of the initial data in the dual space $Z'$ is clear from Remark [Remark 2](#rem:ibp for duality bracket){reference-type="ref" reference="rem:ibp for duality bracket"} and Lemma [Lemma 8](#lem:convergence of initial data){reference-type="ref" reference="lem:convergence of initial data"}. The proof is complete. ◻ We end this section by recording that the solution obtained in Theorem [Theorem 1](#thm:main existence result){reference-type="ref" reference="thm:main existence result"} also satisfies an alternative notion of weak solution, which does not require the test functions of the weak formulation to be periodic with respect to the space-angle variable $\boldsymbol{\xi}$; this is the content of the next lemma. This formulation is convenient when implementing the method of De Giorgi to study the regularity of the solution (*cf*. [@regularity]), which will be the subject of future work. The proof is identical to that of Theorem [Theorem 1](#thm:main existence result){reference-type="ref" reference="thm:main existence result"}, using that integrating by parts over all of $\mathbb{R}^3$ against an arbitrary non-periodic test function $\psi \in C^\infty_c((0,T)\times\mathbb{R}^3)$ yields no boundary terms, as was also the case when testing against periodic test functions $\psi \in X$ and integrating over one periodic cell. **Lemma 9**.
*Let $f \in \mathcal{X}$ be the weak solution obtained in Theorem [Theorem 1](#thm:main existence result){reference-type="ref" reference="thm:main existence result"}. Then, for all $\psi \in C^\infty_c((0,T)\times\mathbb{R}^3)$, there holds $$\begin{aligned} \int_0^T \int_{\mathbb{R}^3} f \partial_t \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t + & \mathrm{Pe}\int_{0}^T \int_{\mathbb{R}^3} (1-\rho) f {\bf e}(\theta) \cdot \nabla \psi \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t \\ &=\int_{0}^T \int_{\mathbb{R}^3} \Big( D_e\big((1-\rho)\nabla f + f \nabla \rho \big) \cdot \nabla \psi + \partial_\theta f\partial_\theta \psi \Big) \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t. \end{aligned}$$* # Uniqueness for Null Péclet Number {#sec:uniqueness zero pe} In the case $\mathrm{Pe}=0$, the equation reads $$\label{eq: model 4 zero pe} \left\lbrace \begin{aligned} & \partial_t f = D_e \mathop{\mathrm{div}}((1-\rho)\nabla f + f \nabla \rho) + \partial^2_\theta f, \\ &\partial_t \rho = D_e \Delta \rho. \end{aligned}\right.$$ The main observation is that, as $\rho$ satisfies the heat equation, uniqueness for $\rho$ is trivial. With this information, we employ the method of Gajewski [@gajewski] to deduce that the solution of [\[eq: model 4 zero pe\]](#eq: model 4 zero pe){reference-type="eqref" reference="eq: model 4 zero pe"} is unique within the class $\mathcal{X}$; this approach had been used for a similar uniqueness argument by Jüngel and Zamponi in [@JungelZamponi]. *Proof of Theorem [Theorem 2](#thm:uniqueness){reference-type="ref" reference="thm:uniqueness"}.* Let $f_1,f_2 \in \mathcal{X}$ be two weak solutions of [\[eq: model 4 zero pe\]](#eq: model 4 zero pe){reference-type="eqref" reference="eq: model 4 zero pe"} with identical admissible initial data, and define $\rho_1,\rho_2$ accordingly. 
As $\rho_1-\rho_2$ satisfies the heat equation with zero initial data, standard parabolic theory implies $\rho_1=\rho_2$; we therefore denote this common function by $\rho$, which satisfies the usual estimate $0\leq \rho \leq 1$. It therefore follows that $$\partial_t f_i = D_e\mathop{\mathrm{div}}((1-\rho)\nabla f_i + f_i \nabla \rho) + \partial^2_\theta f_i$$ for $i \in \{1,2\}$. Consider the distance $$\label{eq:gajewski distance} d(u,v) := \int_\Upsilon \Big( \zeta(u) + \zeta(v) -2\zeta\big(\frac{u+v}{2}\big) \Big) \, \mathrm{d} \boldsymbol{\xi},$$ where $\zeta(s) = s(\log s -1) + 1$ for $s \geq 0$. Note that the convexity of $\zeta$ implies that $d(u,v) \geq 0$. In fact, we shall consider the regularised variant for $\delta > 0$: $$\label{eq:gajewski distance reg} d_\delta(u,v) := \int_\Upsilon \Big( \zeta_\delta(u) + \zeta_\delta(v) -2\zeta_\delta\big(\frac{u+v}{2}\big) \Big) \, \mathrm{d} \boldsymbol{\xi},$$ where $\zeta_\delta(s) = \zeta(s+\delta)$, and again $d_\delta$ is non-negative because $\zeta_\delta$ is convex. Note that $\zeta'(s) = \log(s)$ for $s>0$ and that, by a standard argument using Taylor's Theorem, there holds, for all $\delta \in (0,1]$ and all $0 \leq u,v \leq 1$: $$\label{eq:est convex comb for gajewski metric} \zeta_\delta(u) + \zeta_\delta(v) -2\zeta_\delta\big(\frac{u+v}{2}\big) \geq \frac{1}{8}|u-v|^2;$$ we refer the reader to [@JungelZamponi §6] for further details. Our goal is to show that $f_1 \equiv f_2$ a.e. in $\Upsilon_T$. 
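The pointwise lower bound [\[eq:est convex comb for gajewski metric\]](#eq:est convex comb for gajewski metric){reference-type="eqref" reference="eq:est convex comb for gajewski metric"} can also be sampled numerically; the sketch below (grid size and values of $\delta$ purely illustrative) checks it on the range $0 \leq u,v \leq 1$:

```python
import math

def zeta(s):
    # entropy density zeta(s) = s(log s - 1) + 1
    return s * (math.log(s) - 1.0) + 1.0

def gap(u, v, delta):
    # zeta_delta(u) + zeta_delta(v) - 2 zeta_delta((u+v)/2), with zeta_delta(s) = zeta(s + delta)
    return zeta(u + delta) + zeta(v + delta) - 2.0 * zeta(0.5 * (u + v) + delta)

worst = float("inf")
for delta in (1.0, 0.1, 0.01):
    for i in range(51):
        for j in range(51):
            u, v = i / 50.0, j / 50.0
            worst = min(worst, gap(u, v, delta) - abs(u - v) ** 2 / 8.0)

print(worst)  # non-negative up to rounding: the convexity gap dominates |u-v|^2/8
```

The constant $1/8$ reflects $\zeta_\delta''(s) = (s+\delta)^{-1} \geq 1/2$ on the sampled range, combined with Taylor's Theorem about the midpoint.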
With this in mind, we compute $$\label{eq:uniqueness expanded equals} \begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t}d_\delta(f_1,f_2) = &\langle \partial_t f_1, \log(f_1+\delta) \rangle + \langle \partial_t f_2 , \log(f_2+\delta) \rangle - \langle \partial_t f_1 + \partial_t f_2 , \log\big(\frac{f_1+f_2}{2}+\delta \big) \rangle \\ =& - 4 D_e \int_\Upsilon \Big( \sum_{i=1}^2 |\nabla \sqrt{f_i+\delta}|^2 - |\nabla\sqrt{f_1+f_2+2\delta}|^2 \Big) (1-\rho) \, \mathrm{d} \boldsymbol{\xi}\\ &- 4 \int_\Upsilon \Big( \sum_{i=1}^2|\partial_\theta \sqrt{f_i+\delta}|^2 - |\partial_\theta \sqrt{f_1+f_2+2\delta}|^2\Big) \, \mathrm{d} \boldsymbol{\xi}\\ &- D_e \sum_{i=1}^2 \int_\Upsilon \Big(\frac{f_i}{f_i+\delta} - \frac{f_1+f_2}{f_1+f_2+2\delta} \Big)\nabla f_i \cdot \nabla \rho \, \mathrm{d} \boldsymbol{\xi}. \end{aligned}$$ The first two integrals above are non-negative by virtue of the subadditivity of the Fisher information [@JungelZamponi Lemma 9]; *cf.* [@Pacard §3.6]. Our strategy will be to show that the final term on the right-hand side of the previous equation vanishes in the limit as $\delta \to 0$. It will suffice to first show that the term $\nabla f_i \cdot \nabla \rho$ is integrable, and then to apply the Dominated Convergence Theorem. 
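The subadditivity of the Fisher information reduces, pointwise, to the Cauchy--Schwarz inequality $|\nabla\sqrt{u+v}|^2 \leq |\nabla\sqrt{u}|^2 + |\nabla\sqrt{v}|^2$ for $u,v > 0$. The following randomised check is a sketch only, with sampled values standing in for $f_1+\delta$, $f_2+\delta$ and one component of their gradients:

```python
import random

random.seed(0)

def grad_sqrt_sq(u, gu):
    # |∂ sqrt(u)|^2 = |∂u|^2 / (4u) for u > 0
    return gu * gu / (4.0 * u)

ok = True
for _ in range(10_000):
    u, v = random.uniform(0.01, 2.0), random.uniform(0.01, 2.0)
    gu, gv = random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)
    lhs = grad_sqrt_sq(u + v, gu + gv)               # |∂ sqrt(u+v)|^2
    rhs = grad_sqrt_sq(u, gu) + grad_sqrt_sq(v, gv)  # sum of Fisher densities
    ok = ok and lhs <= rhs + 1e-9
print(ok)
```

The inequality follows from $(g_u + g_v)^2 \leq (u+v)\big(g_u^2/u + g_v^2/v\big)$, which is Cauchy--Schwarz applied to the vectors $(\sqrt{u}, \sqrt{v})$ and $(g_u/\sqrt{u}, g_v/\sqrt{v})$.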
We rewrite this product as $$\nabla f_i \cdot \nabla \rho = -4 \sqrt{1-\rho} \nabla \sqrt{f_i} \cdot \sqrt{f_i}\nabla\sqrt{1-\rho},$$ from which it follows, using the Young inequality, that $$\begin{aligned} \int_{\Upsilon_T} |\nabla f_i \cdot \nabla \rho| \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t &\leq 2\int_{\Upsilon_T} (1-\rho)|\nabla_{\boldsymbol{\xi}} \sqrt{f_i}|^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t + 2 \int_0^T \int_\Omega |\nabla \sqrt{1-\rho}|^2 \underbrace{\int_0^{2\pi} f_i \, \mathrm{d} \theta}_{=\rho} \, \mathrm{d} x \, \mathrm{d} t \\ &\leq 2\int_{\Upsilon_T} (1-\rho)|\nabla_{\boldsymbol{\xi}} \sqrt{f_i}|^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t + 2 \int_{\Omega_T} |\nabla \sqrt{1-\rho}|^2 \, \mathrm{d} x \, \mathrm{d} t, \end{aligned}$$ and thus $\nabla f_i \cdot \nabla \rho \in L^1(\Upsilon_T)$; the boundedness of the right-hand side follows from the definition of $\mathcal{X}$. It follows that, by integrating [\[eq:uniqueness expanded equals\]](#eq:uniqueness expanded equals){reference-type="eqref" reference="eq:uniqueness expanded equals"} in time and discarding the non-positive Fisher information terms, there holds, for a.e. $t \in (0,T)$, $$d_\delta(f_1,f_2)(t) - \underbrace{d_\delta(f_1,f_2)(0)}_{=0} \leq -D_e \sum_{i=1}^2 \int_0^t \int_{\Upsilon} \Big(\frac{f_i}{f_i+\delta} - \frac{f_1+f_2}{f_1+f_2+2\delta} \Big)\nabla f_i \cdot \nabla \rho \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} \tau,$$ whence, by the Dominated Convergence Theorem and using $d_\delta \geq 0$, we deduce that, for a.e. $t \in (0,T)$, $$\lim_{\delta \to 0^+}d_\delta(f_1,f_2)(t) = 0.$$ It follows from Fatou's Lemma that $$\lim_{\delta \to 0^+} \Big( \zeta_\delta(f_1) + \zeta_\delta(f_2) -2\zeta_\delta\big(\frac{f_1+f_2}{2}\big) \Big) = 0 \quad \text{a.e.~in } \Upsilon_T,$$ whence the estimate [\[eq:est convex comb for gajewski metric\]](#eq:est convex comb for gajewski metric){reference-type="eqref" reference="eq:est convex comb for gajewski metric"} implies that $f_1 \equiv f_2$ a.e. in $\Upsilon_T$, as required. 
◻ # Existence and Uniqueness of Stationary States {#sec:stat states} We consider the existence and uniqueness of stationary solutions $f_\infty : \Upsilon \to \mathbb{R}$ which satisfy, with $\rho_\infty(x) = \int_0^{2\pi} f_\infty(x,\theta) \, \mathrm{d} \theta$, $$\label{eq:stat state} \mathrm{Pe}\mathop{\mathrm{div}}\big((1-\rho_\infty) f_\infty {\bf e}(\theta)\big) = D_e \mathop{\mathrm{div}}\big( (1-\rho_\infty)\nabla f_\infty + f_\infty \nabla \rho_\infty \big) + \partial^2_\theta f_\infty.$$ The question of existence of stationary states is straightforward, with no restriction on the Péclet number. Indeed, given any $\mathrm{Pe}\in \mathbb{R}$, there exists at least one non-trivial constant stationary solution, namely $f_\infty = 1/2\pi$, which gives rise to $\rho_\infty = 1$. In fact, by writing the drift term as $$\mathop{\mathrm{div}}\big((1-\rho_\infty) f_\infty {\bf e}(\theta)\big) = {\bf e}(\theta) \cdot \nabla\big( (1-\rho_\infty) f_\infty \big),$$ for sufficiently regular $f_\infty$, we deduce that all constant solutions satisfy [\[eq:stat state\]](#eq:stat state){reference-type="eqref" reference="eq:stat state"}. We therefore focus entirely on uniqueness henceforth, and divide our results into two subsections: §[6.1](#sec:stat null peclet){reference-type="ref" reference="sec:stat null peclet"} focuses exclusively on the case $\mathrm{Pe}=0$ by means of a simple variational approach, while §[6.2](#sec:stat fixed point){reference-type="ref" reference="sec:stat fixed point"} is concerned with uniqueness of strong solutions near constant stationary states for general $\mathrm{Pe}$, which requires a more involved analysis. ## Uniqueness of stationary solutions for null Péclet number {#sec:stat null peclet} The case $\mathrm{Pe}=0$ is particularly simple since, in this case, the equation [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"} admits an exact gradient flow structure. 
We shall show that stationary states are unique up to a mass constraint via a variational approach. The proof does not readily generalise to $\mathrm{Pe}\neq 0$; this is essentially because the perturbative term $V$ in [\[eq:grad flow perturbation in terms of u\]](#eq:grad flow perturbation in terms of u){reference-type="eqref" reference="eq:grad flow perturbation in terms of u"} does not admit a primitive, whence $\nabla_{\boldsymbol{\xi}}\cdot(\tilde{M}(u)(\nabla_{\boldsymbol{\xi}}u+V))$ cannot be rewritten as the dissipation of a suitable energy (*cf.* [@BoundEntropy §3.3]). **Theorem 3**. *Let $\mathrm{Pe}=0$ and $m>0$. Then, the stationary solution $f_\infty$ of [\[eq:stat state\]](#eq:stat state){reference-type="eqref" reference="eq:stat state"} is unique within the class of solutions satisfying the mass constraint $\int_\Upsilon f_\infty \, \mathrm{d} \boldsymbol{\xi}= m$.* Further results regarding the convergence to stationary states can be obtained using the theory of Bakry--Émery or log-Sobolev inequalities applied to equations endowed with an exact Wasserstein gradient flow structure (*cf.* *e.g.* [@AGS08; @Carrillo:2003uk; @JungelEntMeth]). This is well-known, and we do not include the details here. The study of the convergence to stationary states by means of hypocoercivity methods for the case of non-zero Péclet number, for which one does not have an exact gradient flow structure, will be the subject of future investigation. *Proof.* . *Variational consequence of stationary equation*: As per §[2](#sec:calc var){reference-type="ref" reference="sec:calc var"}, we introduce the transformed variable $u_\infty$: $$\label{eq:u infty} u_\infty := \log \big( \frac{f_\infty}{1-\rho_\infty} \big).$$ For clarity of presentation, we assume that the above is well-defined and perform formal computations; these may easily be rendered rigorous by standard approximation arguments. 
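In this variable the flux takes gradient form: since $\rho_\infty$ does not depend on $\theta$, one has, formally, $$\nabla u_\infty = \frac{\nabla f_\infty}{f_\infty} + \frac{\nabla \rho_\infty}{1-\rho_\infty}, \qquad \partial_\theta u_\infty = \frac{\partial_\theta f_\infty}{f_\infty},$$ so that $$f_\infty(1-\rho_\infty)\,\nabla u_\infty = (1-\rho_\infty)\nabla f_\infty + f_\infty \nabla \rho_\infty \qquad \text{and} \qquad f_\infty\,\partial_\theta u_\infty = \partial_\theta f_\infty.$$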
We show that any stationary solution satisfies $$\label{eq:euler lagrange type} \int_\Upsilon f_\infty (1-\rho_\infty) |\nabla_{\boldsymbol{\xi}} u_\infty|^2 \, \mathrm{d} \boldsymbol{\xi}= 0.$$ Indeed, define the functional $$F[f] := \int_\Upsilon \Big( D_e (1-\rho)f|\nabla u|^2 + f|\partial_\theta u |^2 \Big) \, \mathrm{d} \boldsymbol{\xi}= \int_\Upsilon \tilde{M}(u)\nabla_{\boldsymbol{\xi}} u \cdot \nabla_{\boldsymbol{\xi}} u \, \mathrm{d} \boldsymbol{\xi}\geq 0,$$ where we used that $\tilde{M}(u)$ is positive semidefinite, and observe that testing the equation [\[eq:stat state\]](#eq:stat state){reference-type="eqref" reference="eq:stat state"} with $u_\infty$ yields $F[f_\infty]=0$. The equality [\[eq:euler lagrange type\]](#eq:euler lagrange type){reference-type="eqref" reference="eq:euler lagrange type"} follows from the non-negativity of the integrand in the functional $F$ and basic manipulations. . *Uniqueness from mass constraint*: The relation [\[eq:euler lagrange type\]](#eq:euler lagrange type){reference-type="eqref" reference="eq:euler lagrange type"} implies that $\nabla_{\boldsymbol{\xi}} u_\infty = 0$ a.e. on the set where $f_\infty(1-\rho_\infty) > 0$, so that $u_\infty$, and hence $f_\infty$, is constant; in particular, all stationary solutions with $\mathrm{Pe}=0$ are constant. The mass constraint $\int_\Upsilon f_\infty \, \mathrm{d} \boldsymbol{\xi}= m$ then uniquely determines the stationary solution, as required. ◻ ## Uniqueness of strong solutions near constant stationary states {#sec:stat fixed point} Our aim in this section is to prove that, if a strong solution is sufficiently near a constant stationary state (in a sense to be made precise), then it is the only such solution. 
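The engine of the forthcoming argument is Banach's Contraction Mapping Theorem applied to a map obeying a quadratic bound of the form $\Vert \Gamma w \Vert \leq C(\Vert z_0 \Vert + \Vert w \Vert^2)$ with small datum $\Vert z_0 \Vert \leq R^2$. As a scalar toy illustration of this mechanism (all constants below are purely hypothetical):

```python
# toy analogue: Gamma(w) = C(r0 + w^2) with r0 <= R^2 maps [0, R] into
# itself once C(R^2 + R^2) <= R, and contracts there since |Gamma'| <= 2*C*R < 1
C = 1.0
R = 0.25
r0 = R ** 2 / 2.0        # stand-in for the small initial datum ||z_0||

def Gamma(w):
    return C * (r0 + w * w)

w = 0.0
for _ in range(100):     # Banach iteration converging to the unique fixed point
    w = Gamma(w)

print(w, abs(Gamma(w) - w))   # fixed point inside the ball, residual at machine precision
```

Once $2CR < 1$, the iterates remain in $[0,R]$ and converge geometrically, which mirrors the self-map and contraction steps carried out below for $\Gamma = S \circ G$.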
We proceed by means of a fixed-point argument employing the Contraction Mapping Theorem, similar to [@BoundEntropy §3.5], for which we define the spaces $$\label{eq:space for fixed point} \begin{aligned} &\Xi_t := L^\infty(t,T;H^2(\Upsilon)) \cap L^2(t,T;H^3(\Upsilon)) \cap H^1(t,T;H^1(\Upsilon)), \\ &\Lambda_t := L^\infty(t,T;H^1(\Upsilon)) \cap L^2(t,T;H^2(\Upsilon)) \cap H^1(t,T;L^2(\Upsilon)), \\ &\Theta_t := L^\infty(t,T;L^2(\Upsilon)) \cap L^2(t,T;H^1(\Upsilon)). \end{aligned}$$ Where no confusion arises, we shall omit the $t$ subscript and simply write $\Xi,\Lambda,\Theta$. The main result of this section is as follows. **Theorem 4**. *Let $D_e > 0$, $\mathrm{Pe}\in \mathbb{R}$, $f_\infty \in (0,1/2\pi)$ be a given constant stationary state, and $f$ be a solution of [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"}. Assume there exists $t \in (0,T)$ such that $f(t,\cdot) \in H^2(\Upsilon)$ with $$\Vert f(t,\cdot) - f_\infty \Vert_{H^2(\Upsilon)} \leq R^2$$ for $R$ sufficiently small, depending only on $f_\infty,D_e$ and $\mathrm{Pe}$. Then, there exists a unique solution of [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"} in $$B_{R,t} := \big\lbrace f \in \Xi : \, \Vert f - f_\infty \Vert_{\Xi_t} \leq R \big\rbrace.$$* The proof of Theorem [Theorem 4](#thm:uniqueness small peclet){reference-type="ref" reference="thm:uniqueness small peclet"} entails studying the time-evolution of the error term $w := f - f_\infty$, with $W := \rho - \rho_\infty$, which satisfies $$\begin{aligned} \partial_t w - \partial^2_\theta w -D_e \big[ (1-\rho_\infty)\Delta w & + f_\infty \Delta W \big] +\mathrm{Pe}\big[(1-\rho_\infty) \nabla w - f_\infty \nabla W \big]\cdot {\bf e}(\theta) \\ =& D_e \big[ w \Delta W - W\Delta w \big] + \mathrm{Pe}\big[w \nabla W + W \nabla w \big] \cdot {\bf e}(\theta). 
\end{aligned}$$ We therefore construct the operator $\Gamma$ to be the solution map $\Gamma : w \mapsto z$, where $z$ solves the linear problem $$\label{eq:define fixed point} \left\lbrace \begin{aligned} & \partial_t z - \partial^2_\theta z -D_e \big[ (1-\rho_\infty)&&\Delta z + f_\infty \Delta Z \big] +\mathrm{Pe}\big[(1-\rho_\infty) \nabla z - f_\infty \nabla Z \big]\cdot {\bf e}(\theta)\\ & &&= \underbrace{D_e \big( w \Delta W - W\Delta w \big) + \mathrm{Pe}\big( w \nabla W + W \nabla w \big) \cdot {\bf e}(\theta)}_{=:G(w)}, \\ &z(0,\cdot) = z_0, && \end{aligned}\right.$$ and $Z := \int_0^{2\pi} z \, \mathrm{d} \theta$; we write $\Gamma = S \circ G$, where $S$ is the solution mapping to the above equation for given right-hand side and initial data $z_0$, and $G$ maps precisely to this desired right-hand source term. Our goal is to show that $\Gamma$ admits a fixed point. Note that, provided the above is supplemented with adequate initial data and the right-hand term $G(w)$ is sufficiently regular, the solution map $\Gamma$ is well-defined in a suitable space using a standard application of the Galerkin method/Fourier series. 
Indeed, writing the ansatz $$z(t,\boldsymbol{\xi}) = \sum_{\mathbf{n}\in \mathbb{Z}^3} \alpha_\mathbf{n}(t) e^{\mathrm{i}\mathbf{n}\cdot \boldsymbol{\xi}}, \quad Z(t,x) = 2\pi \sum_{\mathbf{n}' \in \mathbb{Z}^2} \alpha_{(\mathbf{n}',0)}(t) e^{\mathrm{i}\mathbf{n}' \cdot x},$$ where $\mathbf{n}= (n_1,n_2,n_3)$, $\mathbf{n}' = (n_1,n_2)$, and using $\cos\theta = \frac{1}{2}(e^{\mathrm{i}\theta} + e^{-\mathrm{i}\theta})$, $\sin\theta = \frac{1}{2\mathrm{i}}(e^{\mathrm{i}\theta} - e^{-\mathrm{i}\theta})$, the equation [\[eq:define fixed point\]](#eq:define fixed point){reference-type="eqref" reference="eq:define fixed point"} reads $$\label{eq:ode fourier for fixed point setup} \begin{aligned} \alpha_\mathbf{n}'(t) + n_3^2 \alpha_\mathbf{n}(t) &+ D_e (n_1^2+n_2^2)\big[ (1-\rho_\infty) \alpha_\mathbf{n}(t) + \mathds{1}_{\{n_3 = 0\}}f_\infty 2\pi \alpha_{(\mathbf{n}',0)}(t) \big] \\ &+ \frac{\mathrm{i}\,\mathrm{Pe}}{2}(1-\rho_\infty)\big[ (n_1 - \mathrm{i}n_2) \alpha_{(\mathbf{n}',n_3-1)}(t) + (n_1 + \mathrm{i}n_2) \alpha_{(\mathbf{n}',n_3+1)}(t) \big] \\ &- \mathrm{i}\,\mathrm{Pe}\,\pi f_\infty \mathds{1}_{\{|n_3| = 1\}} (n_1 - \mathrm{i}\,n_3 n_2) \alpha_{(\mathbf{n}',0)}(t) = \hat{G}_\mathbf{n}(t), \end{aligned}$$ so that the drift couples only neighbouring angular modes $n_3 \pm 1$ at fixed spatial frequency $\mathbf{n}'$, where $$\hat{G}_\mathbf{n}(t) = \frac{1}{(2\pi)^3}\int_\Upsilon G(w)(t,\boldsymbol{\xi}) e^{-\mathrm{i}\mathbf{n}\cdot \boldsymbol{\xi}} \, \mathrm{d} \boldsymbol{\xi}.$$ It follows that, provided $\hat{G}_\mathbf{n}$ is sufficiently well-behaved, the ODE system [\[eq:ode fourier for fixed point setup\]](#eq:ode fourier for fixed point setup){reference-type="eqref" reference="eq:ode fourier for fixed point setup"} is solvable by standard methods for all values of $f_\infty \geq 0$ and $0 \leq \rho_\infty \leq 1$. This existence construction is standard; we therefore deliberately omit further details for clarity of presentation, and henceforth focus exclusively on the *a priori* estimates required for the fixed-point strategy. Our results are encapsulated in the following two lemmas, which will be used to prove that $\Gamma$ admits a unique fixed point. **Lemma 10** (Estimates for the operator $S$). *Assume $f_\infty \in (0,1/2\pi)$, *i.e.*, $\rho_\infty \in (0,1)$. 
Then, there exists a positive constant $C_{\infty,D_e} = C_{\infty,D_e}(f_\infty,D_e)$ such that $$\label{eq:first est stat states fixed point} \begin{aligned} &\Vert S v \Vert_{\Theta} \leq C_{\infty,D_e} C_0^{1/2} \Big( \Vert z_0 \Vert_{L^2(\Upsilon)} + \Vert v \Vert_{L^2(\Upsilon_T)} \Big), \\ &\Vert \nabla S v \Vert_\Theta \leq C_{\infty,D_e} C_0 \Big( \Vert \nabla z_0 \Vert_{L^2(\Upsilon)} + \Vert v \Vert_{L^2(\Upsilon_T)} \Big), \\ &\Vert \partial_\theta S v \Vert_\Theta \leq C_{\infty,D_e} C_0^{3/2} \Big( \Vert z_0 \Vert_{L^2(\Upsilon)} + \Vert \partial_\theta z_0 \Vert_{L^2(\Upsilon)} + \Vert v \Vert_{L^2(\Upsilon_T)} \Big), \\ &\Vert \partial_t Sv \Vert_{L^2(\Upsilon_T)} \leq C_{\infty,D_e} C_1 \Big( \Vert z_0 \Vert_{H^1(\Upsilon)} + \Vert v \Vert_{L^2(\Upsilon_T)} \Big), \end{aligned}$$* *where $$\label{eq:fp c0} \begin{aligned} C_0 = e^{T(1+\mathrm{Pe}^2{D_e^{-1}})}, \qquad C_1 = (1+\mathrm{Pe}^2 D_e^{-1})^{1/2} C_0^{3/2}, \end{aligned}$$ and $$\label{eq:main fixed pt est for S} \Vert Sv \Vert_\Xi \leq C_{\infty,D_e} C_1 \Big( \Vert z_0 \Vert_{H^2(\Upsilon)} + \Vert v \Vert_{\Theta} \Big).$$* **Lemma 11** (Estimates for the operator $G$). 
*There exists a positive constant $C_{D_e}$, depending solely on $D_e$ and $\Upsilon$, such that, for all $w_1,w_2 \in \Xi$, there holds $$\label{eq:G contractive} \Vert G(w_1) - G(w_2) \Vert_\Theta \leq C_{D_e}(1+\mathrm{Pe}D_e^{-1}) \Vert w_1 - w_2 \Vert_\Xi \big( \Vert w_1 \Vert_\Xi + \Vert w_2 \Vert_{\Xi} \big),$$ and, for all $w \in \Xi$, $$\label{eq:self mapping for G} \Vert G(w) \Vert_\Theta \leq C_{D_e}(1+\mathrm{Pe}D_e^{-1}) \Vert w \Vert_\Xi^2.$$* Before proving Lemmas [Lemma 10](#eq:parabolic estimates for fixed pt){reference-type="ref" reference="eq:parabolic estimates for fixed pt"} and [Lemma 11](#eq:parabolic estimates for fixed pt ii){reference-type="ref" reference="eq:parabolic estimates for fixed pt ii"}, we provide the proof of Theorem [Theorem 4](#thm:uniqueness small peclet){reference-type="ref" reference="thm:uniqueness small peclet"}. *Proof of Theorem [Theorem 4](#thm:uniqueness small peclet){reference-type="ref" reference="thm:uniqueness small peclet"}.* Suppose for the time being that $T \leq 1$. This assumption will be removed in the third step of the proof. . *$\Gamma$ is self-mapping*: Suppose $f \in B_R$ and set $w := f - f_\infty$. Using the estimates of Lemmas [Lemma 10](#eq:parabolic estimates for fixed pt){reference-type="ref" reference="eq:parabolic estimates for fixed pt"} and [Lemma 11](#eq:parabolic estimates for fixed pt ii){reference-type="ref" reference="eq:parabolic estimates for fixed pt ii"}, together with the assumption $\Vert f(t,\cdot) - f_\infty \Vert_{H^2(\Upsilon)} \leq R^2$ on the initial datum, we find that $$\Vert \Gamma w \Vert_\Xi \leq C_{\infty,D_e} C_1 \Big( 1 + C_{D_e}(1+\mathrm{Pe}D_e^{-1}) \Big)R^2.$$ Since we have restricted $T \leq 1$, it follows that there exists $R$ sufficiently small, depending only on $D_e,f_\infty,\mathrm{Pe}$, such that there holds $$\Vert \Gamma w \Vert_\Xi \leq R,$$ whence $\Gamma$ maps $B_R$ into itself. . *$\Gamma$ is contractive*: Let $w_1,w_2 \in B_R$. 
Observe that the linearity of $S$ implies $\Gamma w_1 - \Gamma w_2 = S(Gw_1 - Gw_2)$, where the initial data $z_0$ in [\[eq:define fixed point\]](#eq:define fixed point){reference-type="eqref" reference="eq:define fixed point"} is null. It follows from Lemmas [Lemma 10](#eq:parabolic estimates for fixed pt){reference-type="ref" reference="eq:parabolic estimates for fixed pt"} and [Lemma 11](#eq:parabolic estimates for fixed pt ii){reference-type="ref" reference="eq:parabolic estimates for fixed pt ii"} that there holds $$\Vert \Gamma w_1 - \Gamma w_2 \Vert_\Xi \leq C_{\infty,D_e} C_1 C_{D_e}(1+\mathrm{Pe}D_e^{-1})R \Vert w_1 - w_2 \Vert_\Xi.$$ As per the first part of the proof, it follows that one may select $R$ sufficiently small, depending only on $f_\infty,D_e,\mathrm{Pe}$, such that $\Gamma$ is a contraction. An application of Banach's Contraction Mapping Theorem implies that there exists a unique $f$ solving [\[eq: model 4\]](#eq: model 4){reference-type="eqref" reference="eq: model 4"} such that $\Vert f - f_\infty \Vert_\Xi \leq R$ over the time interval under consideration. . *Extension to arbitrary time intervals*: We apply the first two steps of the proof repeatedly over overlapping time intervals $(\alpha_j,\beta_j)$ for Lebesgue points $\alpha_j,\beta_j$ chosen such that $\beta_{j-1}>\alpha_j$ for all $j$ and such that $\bigcup_j (\alpha_j,\beta_j) \supset (t,T)$. In turn, we obtain that there exists a unique $f \in B_R$ over the entire time interval $(t,T)$. 
◻ *Proof of Lemma [Lemma 10](#eq:parabolic estimates for fixed pt){reference-type="ref" reference="eq:parabolic estimates for fixed pt"}.* By testing [\[eq:define fixed point\]](#eq:define fixed point){reference-type="eqref" reference="eq:define fixed point"} with $z$, we obtain the classical parabolic estimate $$\begin{aligned} \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_\Upsilon |z|^2 \, \mathrm{d} \boldsymbol{\xi}+ \int_\Upsilon & |\partial_\theta z|^2 \, \mathrm{d} \boldsymbol{\xi}+ D_e \Big( (1-\rho_\infty) \int_\Upsilon |\nabla z |^2 \, \mathrm{d} \boldsymbol{\xi}+ f_\infty \int_\Omega |\nabla Z|^2 \, \mathrm{d} x \Big) \\ \leq & \mathrm{Pe}\int_\Upsilon \big[ (1-\rho_\infty) |z| |\nabla z| + f_\infty | z| |\nabla Z| \big] \, \mathrm{d} \boldsymbol{\xi}+ \int_\Upsilon G(w) z \, \mathrm{d} \boldsymbol{\xi}\\ \leq& \frac{1}{2}D_e(1-\rho_\infty) \int_\Upsilon |\nabla z |^2 \, \mathrm{d} \boldsymbol{\xi}+ \frac{1}{2}D_e f_\infty\int_\Omega |\nabla Z|^2 \, \mathrm{d} x \\ &+ \frac{1}{2}\big((1-\rho_\infty+ 2\pi f_\infty)\mathrm{Pe}^2{D_e^{-1}} + 1\big)\int_\Upsilon |z|^2 \, \mathrm{d} \boldsymbol{\xi}+ \frac{1}{2}\int_\Upsilon |G(w)|^2 \, \mathrm{d} \boldsymbol{\xi} \end{aligned}$$ from which we get, for a.e. 
$t \in (0,T)$, $$\begin{aligned} \int_\Upsilon |z(t)|^2 \, \mathrm{d} \boldsymbol{\xi}+& \int_0^t\int_\Upsilon |\partial_\theta z|^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} \tau + D_e \Big( (1-\rho_\infty) \int_0^t \int_\Upsilon |\nabla z |^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} \tau + f_\infty \int_0^t \int_\Omega |\nabla Z|^2 \, \mathrm{d} x \, \mathrm{d} \tau \Big) \\ \leq & (1+\mathrm{Pe}^2{D_e^{-1}})\int_0^t\int_\Upsilon |z|^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} \tau + \Big(\int_\Upsilon |z_0|^2 \, \mathrm{d} \boldsymbol{\xi}+ \int_0^t \int_{\Upsilon} |G(w)|^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} \tau \Big), \end{aligned}$$ and Grönwall's Lemma yields $$\begin{aligned} \int_\Upsilon |z(t)|^2 \, \mathrm{d} \boldsymbol{\xi} \leq & \Big(\int_\Upsilon |z_0|^2 \, \mathrm{d} \boldsymbol{\xi}+ \int_0^t \int_{\Upsilon} |G(w)|^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} \tau \Big)e^{t(1+\mathrm{Pe}^2{D_e^{-1}})}, \end{aligned}$$ so that $$\begin{aligned} \int_0^T \int_\Upsilon |z(t)|^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} t \leq & \Big(\Vert z_0 \Vert_{L^2(\Upsilon)}^2 + \Vert G(w) \Vert_{L^2(\Upsilon_T)}^2 \Big) \frac{1}{(1+\mathrm{Pe}^2{D_e^{-1}})}\big(e^{T(1+\mathrm{Pe}^2{D_e^{-1}})}-1 \big), \end{aligned}$$ whence $$\label{eq:to get first fp est} \begin{aligned} \Vert z \Vert_{L^\infty(0,T;L^2(\Upsilon))}^2 + \Vert \partial_\theta z \Vert_{L^2(\Upsilon_T)}^2 + D_e \Big( (1-\rho_\infty) \Vert & \nabla z \Vert_{L^2(\Upsilon_T)}^2 + f_\infty \Vert \nabla Z \Vert_{L^2(\Omega_T)}^2 \Big) \\ \leq & C_0 \Big(\Vert z_0 \Vert_{L^2(\Upsilon)}^2 + \Vert G(w) \Vert_{L^2(\Upsilon_T)}^2 \Big), \end{aligned}$$ and the first estimate in [\[eq:first est stat states fixed point\]](#eq:first est stat states fixed point){reference-type="eqref" reference="eq:first est stat states fixed point"} follows, where $C_0 = C_0(\mathrm{Pe},D_e,T)$ is independent of $f_\infty,z_0$ and is given by [\[eq:fp c0\]](#eq:fp c0){reference-type="eqref" reference="eq:fp 
c0"}. Similarly, by testing [\[eq:define fixed point\]](#eq:define fixed point){reference-type="eqref" reference="eq:define fixed point"} with $(1-\rho_\infty)\Delta z + f_\infty \Delta Z$, we get $$\begin{aligned} \frac{1}{2} \frac{\mathrm{d}}{\mathrm{d}t}\Big((1-\rho_\infty) \int_\Upsilon & |\nabla z|^2 \, \mathrm{d} \boldsymbol{\xi}+ f_\infty \int_\Omega |\nabla Z|^2 \, \mathrm{d} x \Big) \\ +& \frac{1}{2}D_e \int_\Upsilon \big| (1-\rho_\infty)\Delta z + f_\infty \Delta Z \big|^2 \, \mathrm{d} \boldsymbol{\xi} + (1-\rho_\infty) \int_\Upsilon |\nabla \partial_\theta z|^2 \, \mathrm{d} \boldsymbol{\xi}\\ &\leq \frac{1}{2}\mathrm{Pe}^2 D_e^{-1} \int_\Upsilon |(1-\rho_\infty) \nabla z - f_\infty \nabla Z|^2 \, \mathrm{d} \boldsymbol{\xi}+ \frac{1}{2} D_e^{-1}\int_\Upsilon |G(w)|^2 \, \mathrm{d} \boldsymbol{\xi}. \end{aligned}$$ It follows that $$\begin{aligned} \Big((1-&\rho_\infty) \int_\Upsilon |\nabla z|^2 \, \mathrm{d} \boldsymbol{\xi}+ f_\infty \int_\Omega |\nabla Z|^2 \, \mathrm{d} x \Big)(t) \\ &+ D_e \int_0^t \int_\Upsilon \big| (1-\rho_\infty)\Delta z + f_\infty \Delta Z \big|^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} \tau + (1-\rho_\infty) \int_0^t \int_\Upsilon |\nabla \partial_\theta z|^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} \tau \\ \leq& \Big((1-\rho_\infty) \int_\Upsilon |\nabla z_0|^2 \, \mathrm{d} \boldsymbol{\xi}+ f_\infty \int_\Omega |\nabla Z_0|^2 \, \mathrm{d} x \Big) + D_e^{-1} \int_0^t \int_\Upsilon |G(w)|^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} \tau \\ &+\mathrm{Pe}^2 D_e^{-1} \Big((1-\rho_\infty)^2 \int_0^t \int_\Upsilon |\nabla z|^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} \tau + (2(1-\rho_\infty)f_\infty + 2\pi f_\infty^2) \int_0^t \int_\Omega |\nabla Z|^2 \, \mathrm{d} x \, \mathrm{d} \tau \Big), \end{aligned}$$ whence, since $2\pi f_\infty = \rho_\infty \in [0,1]$, we get $$\begin{aligned} \Big((1-\rho_\infty) & \int_\Upsilon |\nabla z|^2 \, \mathrm{d} \boldsymbol{\xi}+ f_\infty \int_\Omega |\nabla Z|^2 \, \mathrm{d} x 
\Big)(t) \\ &+ D_e \int_0^t \int_\Upsilon \big| (1-\rho_\infty)\Delta z + f_\infty \Delta Z \big|^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} \tau + (1-\rho_\infty) \int_0^t \int_\Upsilon |\nabla \partial_\theta z|^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} \tau \\ \leq& \Big((1-\rho_\infty) \int_\Upsilon |\nabla z_0|^2 \, \mathrm{d} \boldsymbol{\xi}+ f_\infty \int_\Omega |\nabla Z_0|^2 \, \mathrm{d} x \Big) + D_e^{-1} \int_0^t \int_\Upsilon |G(w)|^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} \tau \\ &+2\mathrm{Pe}^2 D_e^{-1} \Big((1-\rho_\infty) \int_0^t \int_\Upsilon |\nabla z|^2 \, \mathrm{d} \boldsymbol{\xi}\, \mathrm{d} \tau + f_\infty \int_0^t \int_\Omega |\nabla Z|^2 \, \mathrm{d} x \, \mathrm{d} \tau \Big). \end{aligned}$$ Grönwall's Lemma yields, for a.e. $t \in (0,T)$, $$\begin{aligned} \Big(&(1-\rho_\infty) \int_\Upsilon |\nabla z|^2 \, \mathrm{d} \boldsymbol{\xi}+ f_\infty \int_\Omega |\nabla Z|^2 \, \mathrm{d} x \Big)(t) \\ &\leq \Big[ (1-\rho_\infty) \Vert \nabla z_0 \Vert_{L^2(\Upsilon)}^2 + f_\infty \Vert \nabla Z_0\Vert_{L^2(\Omega)}^2 + D_e^{-1} \Vert G(w)\Vert_{L^2((0,t)\times\Upsilon)}^2 \Big] e^{t 2\mathrm{Pe}^2 D_e^{-1}}. \end{aligned}$$ We therefore obtain $$\begin{aligned} (1-\rho_\infty) \Vert \nabla z \Vert_{L^\infty(0,T;L^2(\Upsilon))}^2 & + f_\infty \Vert \nabla Z\Vert_{L^\infty(0,T;L^2(\Omega))}^2 \\ + D_e \Vert (1-\rho_\infty)\Delta z & + f_\infty \Delta Z \Vert_{L^2(\Upsilon_T)}^2 + (1-\rho_\infty) \Vert \nabla \partial_\theta z\Vert^2_{L^2(\Upsilon_T)} \\ \leq C_0^2 \Big( (1-\rho_\infty) & \Vert \nabla z_0 \Vert_{L^2(\Upsilon)}^2 + f_\infty \Vert \nabla Z_0\Vert_{L^2(\Omega)}^2 + D_e^{-1}\Vert G(w)\Vert_{L^2(\Upsilon_T)}^2 \Big). 
\end{aligned}$$ Note in particular that the Fubini--Tonelli Theorem implies $$\begin{aligned} \Vert (1-\rho_\infty)\Delta z + f_\infty \Delta Z \Vert_{L^2(\Upsilon_T)}^2 = (1-\rho_\infty)^2 \Vert \Delta z \Vert^2_{L^2(\Upsilon_T)} + f_\infty (2(1-\rho_\infty) + 2\pi f_\infty) \Vert \Delta Z \Vert^2_{L^2(\Omega_T)}, \end{aligned}$$ whence the previous estimate yields $$\label{eq:second bit for fp} \begin{aligned} (1-\rho_\infty) \Vert \nabla z \Vert_{L^\infty(0,T;L^2(\Upsilon))}^2 & + f_\infty \Vert \nabla Z\Vert_{L^\infty(0,T;L^2(\Omega))}^2 + (1-\rho_\infty) \Vert \nabla \partial_\theta z\Vert^2_{L^2(\Upsilon_T)} \\ + D_e \Big( (1-\rho_\infty)^2& \Vert \Delta z \Vert^2_{L^2(\Upsilon_T)} + f_\infty (2-\rho_\infty) \Vert \Delta Z \Vert^2_{L^2(\Omega_T)} \Big) \\ \leq C_0^2 \Big( & (1-\rho_\infty) \Vert \nabla z_0 \Vert_{L^2(\Upsilon)}^2 + f_\infty \Vert \nabla Z_0\Vert_{L^2(\Omega)}^2 + D_e^{-1}\Vert G(w)\Vert_{L^2(\Upsilon_T)}^2 \Big), \end{aligned}$$ and the second estimate in [\[eq:first est stat states fixed point\]](#eq:first est stat states fixed point){reference-type="eqref" reference="eq:first est stat states fixed point"} follows. 
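The Fubini--Tonelli identity rests on the elementary fact that, for constants $a,b$ and $Z = \int_0^{2\pi} z \, \mathrm{d}\theta$ at fixed $x$, one has $\int_0^{2\pi} (a z + b Z)^2 \, \mathrm{d}\theta = a^2 \int_0^{2\pi} z^2 \, \mathrm{d}\theta + b(2a + 2\pi b) Z^2$. The expansion remains exact after discretising $\theta$, as the following sketch with random data (hypothetical grid size and coefficients) confirms:

```python
import random, math

random.seed(1)
N = 64
dtheta = 2.0 * math.pi / N

z = [random.uniform(-1.0, 1.0) for _ in range(N)]   # samples of z(x, .) at a fixed x
Z = sum(z) * dtheta                                  # Z = ∫_0^{2π} z dθ (rectangle rule)

a, b = 0.3, 0.7                                      # stand-ins for 1 - rho_inf and f_inf
lhs = sum((a * zk + b * Z) ** 2 for zk in z) * dtheta
rhs = a * a * sum(zk * zk for zk in z) * dtheta + b * (2.0 * a + 2.0 * math.pi * b) * Z * Z
print(abs(lhs - rhs))   # zero up to rounding
```

With $a = 1-\rho_\infty$, $b = f_\infty$ and $2\pi f_\infty = \rho_\infty$, the coefficient $b(2a + 2\pi b)$ becomes $f_\infty(2-\rho_\infty)$, as used above.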
Similarly, we test [\[eq:define fixed point\]](#eq:define fixed point){reference-type="eqref" reference="eq:define fixed point"} against $\partial^2_\theta z$ to obtain $$\begin{aligned} \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Upsilon} |\partial_\theta z|^2 \, \mathrm{d} \boldsymbol{\xi}+ & \int_{\Upsilon} |\partial^2_\theta z|^2 \, \mathrm{d} \boldsymbol{\xi}+ D_e(1-\rho_\infty)\int_{\Upsilon} |\nabla \partial_\theta z |^2 \, \mathrm{d} \boldsymbol{\xi}\\ = &\mathrm{Pe}\int_{\Upsilon} \big[ (1-\rho_\infty)\nabla z - f_\infty \nabla Z\big] \cdot {\bf e}(\theta) \partial^2_\theta z \, \mathrm{d} \boldsymbol{\xi}- \int_{\Upsilon} G(w) \partial^2_\theta z \, \mathrm{d} \boldsymbol{\xi}\\ =&-\mathrm{Pe}(1-\rho_\infty) \int_{\Upsilon} \nabla \partial_\theta z \cdot {\bf e}(\theta) \partial_\theta z \, \mathrm{d} \boldsymbol{\xi}\\ &-\mathrm{Pe}\int_{\Upsilon} \big[ (1-\rho_\infty)\nabla z - f_\infty \nabla Z \big] \cdot {\bf e}'(\theta) \partial_\theta z \, \mathrm{d} \boldsymbol{\xi}- \int_\Upsilon G(w) \partial^2_\theta z \, \mathrm{d} \boldsymbol{\xi}. \end{aligned}$$ Using ${\bf e}'(\theta) = (-\sin\theta,\cos\theta)$ and the Young inequality, we get $$\begin{aligned} \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Upsilon} |\partial_\theta z|^2 \, \mathrm{d} \boldsymbol{\xi}&+ \int_{\Upsilon} |\partial^2_\theta z|^2 \, \mathrm{d} \boldsymbol{\xi}+ D_e(1-\rho_\infty)\int_{\Upsilon} |\nabla \partial_\theta z |^2 \, \mathrm{d} \boldsymbol{\xi}\\ \leq&\mathrm{Pe}(1-\rho_\infty) \int_{\Upsilon} |\nabla \partial_\theta z| |\partial_\theta z |\, \mathrm{d} \boldsymbol{\xi}+ \mathrm{Pe}(1-\rho_\infty) \int_{\Upsilon} |\nabla z| |\partial_\theta z| \, \mathrm{d} \boldsymbol{\xi}\\ &+\mathrm{Pe}f_\infty \int_{\Upsilon} |\nabla Z| |\partial_\theta z| \, \mathrm{d} \boldsymbol{\xi}+ \int_\Upsilon |G(w)| |\partial^2_\theta z| \, \mathrm{d} \boldsymbol{\xi}. \end{aligned}$$ We obtain, for a.e. 
$t \in (0,T)$, $$\begin{aligned} \Vert\partial_\theta z\Vert_{L^\infty(0,t;L^2(\Upsilon))}^2 +\Vert \partial^2_\theta z &\Vert^2_{L^2(\Upsilon_t)} + D_e(1-\rho_\infty)\int_{\Upsilon_t} |\nabla \partial_\theta z |^2 \, \mathrm{d} \boldsymbol{\xi}\\ \leq &2\mathrm{Pe}^2 D_e^{-1} (1-\rho_\infty) \Vert\partial_\theta z\Vert^2_{L^2(\Upsilon_t)} + D_e (1-\rho_\infty) \Vert\nabla z \Vert^2_{L^2(\Upsilon_T)} \\ &+ \Vert \partial_\theta z_0 \Vert^2_{L^2(\Upsilon)} + \Vert G(w) \Vert^2_{L^2(\Upsilon_t)} \\ \leq& 2\mathrm{Pe}^2 D_e^{-1} \Vert\partial_\theta z\Vert^2_{L^2(\Upsilon_t)} \\ &+ C_0 \Big(\Vert z_0 \Vert_{L^2(\Upsilon)}^2 + \Vert \partial_\theta z_0 \Vert^2_{L^2(\Upsilon)} + 2\Vert G(w) \Vert^2_{L^2(\Upsilon_t)}\Big), \end{aligned}$$ where we used $\rho_\infty \in [0,1]$ and estimate [\[eq:to get first fp est\]](#eq:to get first fp est){reference-type="eqref" reference="eq:to get first fp est"} to obtain the final line. Grönwall's Lemma yields $$\label{eq:third bit for fp} \begin{aligned} \Vert\partial_\theta z\Vert_{L^\infty(0,T;L^2(\Upsilon))}^2 + \Vert \partial^2_\theta z \Vert^2_{L^2(\Upsilon_T)} + & D_e(1-\rho_\infty) \Vert \nabla \partial_\theta z \Vert_{L^2(\Upsilon_T)}^2 \\ \leq & C_0^3 \Big( \Vert z_0 \Vert_{L^2(\Upsilon)}^2 + \Vert \partial_\theta z_0 \Vert^2_{L^2(\Upsilon)} + 2\Vert G(w) \Vert_{L^2(\Upsilon_T)}^2 \Big), \end{aligned}$$ and the third estimate in [\[eq:first est stat states fixed point\]](#eq:first est stat states fixed point){reference-type="eqref" reference="eq:first est stat states fixed point"} follows. 
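Grönwall's Lemma in the integral form used repeatedly above states that $y(t) \leq a + b\int_0^t y(\tau)\,\mathrm{d}\tau$ implies $y(t) \leq a e^{bt}$. A discrete sketch saturating the hypothesis (parameters hypothetical) confirms the bound:

```python
import math

a, b, T, N = 2.0, 1.5, 1.0, 100_000
dt = T / N

y, integral, bound_ok = a, 0.0, True
for k in range(N):
    integral += y * dt              # left-endpoint quadrature of ∫_0^t y dτ
    y = a + b * integral            # equality case of the Gronwall hypothesis
    bound_ok = bound_ok and y <= a * math.exp(b * (k + 1) * dt)
print(bound_ok, y)                  # bound holds; y approaches a * e^{bT}
```

The iterates satisfy $y_k = a(1 + b\,\Delta t)^k \leq a e^{bk\Delta t}$, which is the discrete shadow of the continuous lemma.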
It follows, using directly the equation [\[eq:define fixed point\]](#eq:define fixed point){reference-type="eqref" reference="eq:define fixed point"}, that there holds, for some positive universal constant $C$, the estimate on the time derivative: $$\begin{aligned} \Vert \partial_t z \Vert_{L^2(\Upsilon_T)}^2 \leq C\Big(& \Vert \partial^2_\theta z \Vert_{L^2(\Upsilon_T)}^2 + D_e^2 (1-\rho_\infty)^2 \Vert \Delta z \Vert_{L^2(\Upsilon_T)}^2 + D_e^2 f_\infty^2 \Vert \Delta Z \Vert_{L^2(\Upsilon_T)}^2 \\ &+ \mathrm{Pe}^2 (1-\rho_\infty)^2 \Vert \nabla z \Vert_{L^2(\Upsilon_T)}^2 + \mathrm{Pe}^2 f_\infty^2 \Vert \nabla Z \Vert_{L^2(\Upsilon_T)}^2 + \Vert G(w) \Vert_{L^2(\Upsilon_T)}^2 \Big) \end{aligned}$$ and thus, using [\[eq:to get first fp est\]](#eq:to get first fp est){reference-type="eqref" reference="eq:to get first fp est"}, [\[eq:second bit for fp\]](#eq:second bit for fp){reference-type="eqref" reference="eq:second bit for fp"}, [\[eq:third bit for fp\]](#eq:third bit for fp){reference-type="eqref" reference="eq:third bit for fp"}, $$\begin{aligned} \Vert \partial_t z \Vert_{L^2(\Upsilon_T)}^2 \leq & C_{\infty,D_e}(1+\mathrm{Pe}^2 D_e^{-1}) C_0^3 \Big( \Vert z_0 \Vert_{H^1(\Upsilon)}^2 + \Vert G(w) \Vert_{L^2(\Upsilon_T)}^2 \Big). 
\end{aligned}$$ Thus, there holds $$\Vert z \Vert_\Lambda \leq C_{\infty,D_e}(1+\mathrm{Pe}^2 D_e^{-1})^{1/2} C_0^{3/2} \Big( \Vert z_0 \Vert_{H^1(\Upsilon)} + \Vert G(w) \Vert_{L^2(\Upsilon_T)} \Big).$$ Finally, using the linearity of the equation to introduce difference quotients with respect to the space-angle variable $\boldsymbol{\xi}$, we reproduce the strategy required to obtain [\[eq:first est stat states fixed point\]](#eq:first est stat states fixed point){reference-type="eqref" reference="eq:first est stat states fixed point"} and get $$\Vert \nabla_{\boldsymbol{\xi}} z \Vert_\Lambda \leq C_{\infty,D_e} C_0^{3/2} \Big( \Vert \nabla_{\boldsymbol{\xi}} z_0 \Vert_{H^1(\Upsilon)} + \Vert \nabla_{\boldsymbol{\xi}} G(w) \Vert_{L^2(\Upsilon_T)} \Big).$$ It follows that $\Vert z \Vert_{\Xi}$ is controlled by $\Vert z_0\Vert_{H^2(\Upsilon)} + \Vert G(w) \Vert_{L^2(0,T;H^1(\Upsilon))}$ as per the previous inequality, and the final estimate [\[eq:main fixed pt est for S\]](#eq:main fixed pt est for S){reference-type="eqref" reference="eq:main fixed pt est for S"} follows. ◻ *Proof of Lemma [Lemma 11](#eq:parabolic estimates for fixed pt ii){reference-type="ref" reference="eq:parabolic estimates for fixed pt ii"}.* We focus on the contractivity of $G$. A direct computation and the triangle inequality yield $$\label{eq:L2 exp G contractive} \begin{aligned} \Vert G(w_1)- G(w_2) \Vert_{L^2(\Upsilon_T)} \leq D_e \Big(& \Vert (w_1- w_2) \Delta W_1 \Vert_{L^2(\Upsilon_T)} + \Vert w_2 \Delta (W_1-W_2) \Vert_{L^2(\Upsilon_T)} \\ &+ \Vert W_1 \Delta (w_1 - w_2) \Vert_{L^2(\Upsilon_T)} + \Vert (W_1-W_2) \Delta w_2 \Vert_{L^2(\Upsilon_T)} \Big) \\ +\mathrm{Pe}\Big(&\Vert (w_1-w_2) \nabla W_1 \Vert_{L^2(\Upsilon_T)} + \Vert w_2 \nabla (W_1-W_2) \Vert_{L^2(\Upsilon_T)} \\ &+ \Vert W_1 \nabla (w_1-w_2) \Vert_{L^2(\Upsilon_T)} + \Vert (W_1-W_2) \nabla w_2 \Vert_{L^2(\Upsilon_T)} \Big). 
\end{aligned}$$ Using Morrey's embedding $H^2(\Upsilon) \hookrightarrow L^\infty(\Upsilon)$ for the bounded open domain $\Upsilon \subset \mathbb{R}^3$, there exists a positive constant $C_M$ depending only on $\Upsilon$ such that $$\label{eq:apply morrey} \Vert w_i(t,\cdot) \Vert_{L^\infty(\Upsilon)} \leq C_M \Vert w_i(t,\cdot) \Vert_{H^2(\Upsilon)},$$ whence $$\begin{aligned} \Vert w_i \Vert_{L^\infty(\Upsilon_T)} \leq C_M \Vert w_i \Vert_\Xi. \end{aligned}$$ It follows that, for instance, $$\begin{aligned} \Vert (w_i - w_j) \Delta W_i \Vert_{L^2(\Upsilon_T)} \leq 2\pi C_M \Vert w_i - w_j \Vert_\Xi \Vert w_i \Vert_{L^2(0,T;H^2(\Upsilon))} \leq 2\pi C_M \Vert w_i - w_j \Vert_\Xi \Vert w_i \Vert_{\Xi}, \end{aligned}$$ and thus, arguing similarly for the other terms, it follows from [\[eq:L2 exp G contractive\]](#eq:L2 exp G contractive){reference-type="eqref" reference="eq:L2 exp G contractive"} that $$\Vert G(w_1)- G(w_2) \Vert_{L^2(\Upsilon_T)} \leq 4 \pi C_M D_e (1+ \mathrm{Pe}D_e^{-1} ) \Vert w_1 - w_2 \Vert_\Xi \big( \Vert w_1 \Vert_\Xi + \Vert w_2 \Vert_{\Xi} \big).$$ The same strategy yields an analogous estimate for $\Vert G(w_1)- G(w_2) \Vert_{L^\infty(0,T;L^2(\Upsilon))}$. 
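The displayed estimate is the usual quadratic-nonlinearity bound $\Vert G(w_1)-G(w_2)\Vert \leq K \Vert w_1-w_2\Vert\big(\Vert w_1\Vert+\Vert w_2\Vert\big)$, which produces a contraction on a sufficiently small ball. A scalar caricature of this mechanism (with $g(w)=Kw^2$ standing in for $G$, and all constants chosen purely for illustration):

```python
# Scalar caricature of the quadratic contraction bound
# |g(w1) - g(w2)| <= K*|w1 - w2|*(|w1| + |w2|), with g(w) = K*w**2 standing in
# for G.  The map T(w) = z0 + g(w) maps the ball {|w| <= r} into itself when
# |z0| + K*r**2 <= r, and is a contraction there with factor 2*K*r < 1.
K, z0, r = 0.4, 0.1, 0.5   # 2*K*r = 0.4 < 1 and |z0| + K*r**2 = 0.2 <= r

w, iterates = 0.0, []
for _ in range(60):
    w = z0 + K * w * w     # Picard iteration
    assert abs(w) <= r     # the iterates stay in the ball
    iterates.append(w)

gaps = [abs(b - a) for a, b in zip(iterates, iterates[1:])]
for g1, g2 in zip(gaps, gaps[1:]):
    assert g2 <= 2 * K * r * g1 + 1e-15   # successive gaps contract

print(f"approximate fixed point: {w:.6f}")
```

The successive increments shrink geometrically with factor at most $2Kr$, so the Picard iterates converge to the unique fixed point in the ball.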
Observe that $$\begin{aligned} \nabla_{\boldsymbol{\xi}} G(w) = &D_e \big( w \nabla_{\boldsymbol{\xi}} \Delta W + \nabla_{\boldsymbol{\xi}} w \Delta W - \nabla_{\boldsymbol{\xi}} W \Delta w - W \nabla_{\boldsymbol{\xi}} \Delta w \big) \\ &+ \mathrm{Pe}\big( w \nabla_{\boldsymbol{\xi}} \Delta W + \nabla_{\boldsymbol{\xi}} w \Delta W + \nabla_{\boldsymbol{\xi}} W \Delta w + W \nabla_{\boldsymbol{\xi}} \Delta w \big) \cdot {\bf e}(\theta) \\ &+ \mathrm{Pe}\big( w \Delta W + W \Delta w \big) \cdot {\bf e}'(\theta), \end{aligned}$$ and, for $w \in \Xi$, arguing as per [\[eq:apply morrey\]](#eq:apply morrey){reference-type="eqref" reference="eq:apply morrey"}, Morrey's embedding implies $\nabla w \in L^2(0,T;L^\infty(\Upsilon))$; furthermore, this latter norm is controlled by $\Vert w \Vert_\Xi$. We also have $\Delta w \in L^\infty(0,T;L^2(\Upsilon))$ and $\nabla \Delta w \in L^2(\Upsilon_T)$, and analogously for the quantities involving $W$; furthermore, all of these norms are controlled by $\Vert w \Vert_\Xi$. Thus, by arguing as per [\[eq:L2 exp G contractive\]](#eq:L2 exp G contractive){reference-type="eqref" reference="eq:L2 exp G contractive"} we obtain $$\Vert \nabla_{\boldsymbol{\xi}} G(w_1) - \nabla_{\boldsymbol{\xi}} G(w_2) \Vert_{L^2(\Upsilon_T)} \leq 8 \pi C_M D_e (1+\mathrm{Pe}D_e^{-1} ) \Vert w_1 - w_2 \Vert_\Xi \big( \Vert w_1 \Vert_\Xi + \Vert w_2 \Vert_{\Xi} \big),$$ and [\[eq:G contractive\]](#eq:G contractive){reference-type="eqref" reference="eq:G contractive"} follows. Finally, since $G(0)=0$, the previous estimate yields [\[eq:self mapping for G\]](#eq:self mapping for G){reference-type="eqref" reference="eq:self mapping for G"}. ◻ # Interpolation Lemma {#sec:appendix interpolation} In the present appendix, we derive a variant of a classical interpolation estimate for spaces of functions depending on time and space; our function spaces include dependence on time, space, and angle. **Lemma 12**. 
*Let $m,p \geq 1$ and define the Banach spaces $$\begin{aligned} &V^{m,p} := L^\infty(\Omega_T;L^m(0,2\pi)) \cap L^p(\Omega_T;W^{1,p}(0,2\pi)), \\ & V^{m,p}_0 := L^\infty(\Omega_T;L^m(0,2\pi)) \cap L^p(\Omega_T;W^{1,p}_0(0,2\pi)), \end{aligned}$$ both equipped with the norm $$\Vert v \Vert_{V^{m,p}} := \mathop{\mathrm{ess \, sup}}_{(t,x) \in \Omega_T} \Vert v(t,x,\cdot) \Vert_{L^m(0,2\pi)} + \Vert \partial_\theta v \Vert_{L^p(\Upsilon_T)}.$$ There exists a positive constant $C$ depending only on $p,m,\Upsilon,T$ such that, for all $v \in V^{m,p}$, there holds $$\Vert v \Vert_{L^q(\Upsilon_T)} \leq C \Vert v \Vert_{V^{m,p}},$$ where $q = p (m+1)$.* *Proof.* *Step 1: Inequality for $V_0^{m,p}$.* Assume for the time being that $v \in V^{m,p}_0$. Observe that $p \geq 1 > \frac{m}{m+1}$, *i.e.* $p-m + mp > 0$. The Gagliardo--Nirenberg inequality [@DiBenedetto §1, Theorem 2.1, (2.2-i)] yields, for a.e. $(t,x) \in \Omega_T$, $$\Vert v(t,x,\cdot) \Vert_{L^q(0,2\pi)} \leq C \Vert \partial_\theta v(t,x,\cdot) \Vert_{L^p(0,2\pi)}^{\frac{p}{q}} \Vert v (t,x,\cdot) \Vert^{1-\frac{p}{q}}_{L^m(0,2\pi)},$$ where $q=p(m+1)$. In turn, raising the previous inequality to the power $q$ and integrating over $\Omega_T$ yields $$\begin{aligned} \Vert v \Vert_{L^q(\Upsilon_T)}^q \leq C \bigg(\int_{\Omega_T} \Vert \partial_\theta v(t,x,\cdot) \Vert^{p}_{L^p(0,2\pi)} \, \mathrm{d} x \, \mathrm{d} t \bigg) \mathop{\mathrm{ess \, sup}}_{(t,x)\in \Omega_T}\Vert v (t,x,\cdot) \Vert^{q-p}_{L^m(0,2\pi)}, \end{aligned}$$ and thus $$\begin{aligned} \Vert v \Vert_{L^q(\Upsilon_T)} \leq C \Vert \partial_\theta v \Vert^{\frac{p}{q}}_{L^p(\Upsilon_T)} \Vert v \Vert^{1-\frac{p}{q}}_{L^\infty(\Omega_T;L^m(0,2\pi))}. \end{aligned}$$ Applying the Young inequality to the previous right-hand side with exponent $q/p>1$ and its Hölder conjugate yields $$\begin{aligned} \Vert v \Vert_{L^q(\Upsilon_T)} \leq C \Vert v \Vert_{V^{m,p}}. 
\end{aligned}$$ A classical argument shows that the bound above still holds for functions $v \in V^{m,p}$ such that $\int_0^{2\pi} v \, \mathrm{d} \theta = 0$ a.e. in $\Omega_T$; *cf.* [@DiBenedetto §1 Remark 2.1]. *Step 2: Relaxation to $V^{m,p}$.* Let $v \in V^{m,p}$ and define, for a.e. $(t,x,\theta)$, its mean-zero counterpart $$w(t,x,\theta) := v(t,x,\theta) - \frac{1}{2\pi}\int_0^{2\pi} v(t,x,\theta') \, \mathrm{d} \theta'.$$ In view of the remark at the end of Step 1, we deduce that there holds $$\Vert w \Vert_{L^q(\Upsilon_T)} \leq C \Vert w \Vert_{V^{m,p}}.$$ Notice that $\partial_\theta w = \partial_\theta v$ and $\Vert w(t,x,\cdot) \Vert_{L^m(0,2\pi)} \leq 2 \Vert v(t,x,\cdot) \Vert_{L^m(0,2\pi)}$ a.e. in $\Omega_T$, and that Jensen's inequality yields $$\begin{aligned} \Big\Vert \int_0^{2\pi} v(\cdot,\cdot,\theta) \, \mathrm{d} \theta \Big\Vert_{L^q(\Upsilon_T)} \leq & C\bigg( \int_0^{2\pi} \int_{\Omega_T} \bigg( \int_0^{2\pi} |v(t,x,\theta')|^m \, \mathrm{d} \theta' \bigg)^{\frac{q}{m}} \, \mathrm{d} x \, \mathrm{d} t \, \mathrm{d} \theta \bigg)^{\frac{1}{q}} \\ \leq & C \mathop{\mathrm{ess \, sup}}_{(t,x)\in \Omega_T} \Vert v(t,x,\cdot) \Vert_{L^m(0,2\pi)}. \end{aligned}$$ It therefore follows from the Minkowski inequality that $$\begin{aligned} \Vert v \Vert_{L^q(\Upsilon_T)} \leq & \Vert w \Vert_{L^q(\Upsilon_T)} + \frac{1}{2\pi} \Big\Vert \int_0^{2\pi} v \, \mathrm{d} \theta \Big\Vert_{L^q(\Upsilon_T)} \\ \leq & C \bigg( \Vert \underbrace{\partial_\theta w }_{=\partial_\theta v}\Vert_{L^p(\Upsilon_T)} + \mathop{\mathrm{ess \, sup}}_{(t,x)\in \Omega_T} \Vert v(t,x,\cdot) \Vert_{L^m(0,2\pi)} \bigg), \end{aligned}$$ and the result follows. ◻ # Regularised Initial Data {#sec:appendix initial data} *Proof of Lemma [Lemma 3](#lem:regularised initial data){reference-type="ref" reference="lem:regularised initial data"}.* The strong convergences in the statement of the lemma follow directly from recalling the mollification technique of [@bbes Appendix C]. 
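To make the regularisation concrete, here is a discrete sketch. It assumes initial data of the form $\rho^\varepsilon_0 = \varepsilon/2 + (1-\varepsilon)\,\alpha_\varepsilon * \rho_0$ with a non-negative kernel $\alpha_\varepsilon$ of unit mass, which matches the bound computed in the proof; the kernel and the toy data below are illustrative, and the actual construction is the one of [@bbes Appendix C]:

```python
import numpy as np

# Discrete sketch of the regularised initial data, assuming the form
#   rho0_eps = eps/2 + (1 - eps) * (alpha_eps * rho0),
# with alpha_eps a non-negative kernel of unit mass and 0 <= rho0 <= 1.
rng = np.random.default_rng(0)
n, eps = 400, 0.1

rho0 = rng.uniform(0.0, 1.0, n)                  # toy density with values in [0, 1]
kernel = np.exp(-np.linspace(-3.0, 3.0, 31) ** 2)
kernel /= kernel.sum()                           # discrete unit mass

smoothed = np.convolve(rho0, kernel, mode="same")
rho0_eps = eps / 2 + (1 - eps) * smoothed

# the pointwise bounds eps/2 <= rho0_eps <= 1 - eps/2 used in the proof,
# equivalently eps/2 <= 1 - rho0_eps <= 1 - eps/2
assert np.all(rho0_eps >= eps / 2 - 1e-12)
assert np.all(rho0_eps <= 1 - eps / 2 + 1e-12)
print(float(rho0_eps.min()), float(rho0_eps.max()))
```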
It remains to verify the estimates, both those depending on $\varepsilon$ and those independent of $\varepsilon$. *Step 1: Bounds depending on $\varepsilon$.* Using the non-negativity of $\rho_0$, there holds $$0< \frac{\varepsilon}{2} \leq \rho^\varepsilon_0(x) \leq \frac{\varepsilon}{2} + (1-\varepsilon) \int_{\mathbb{R}^2} \alpha_\varepsilon \, \mathrm{d} x = 1- \frac{\varepsilon}{2} < 1,$$ whence, for all $\varepsilon \in (0,1)$ and a.e. $x \in \Omega$, $$\label{eq:bounds on 1-rho initial eps} 1 - \frac{\varepsilon}{2} \geq 1-\rho^\varepsilon_0(x) \geq \frac{\varepsilon}{2}.$$ Similarly, we note the bound $$\label{eq:bound on f initial eps} f^0_\varepsilon \geq \frac{\varepsilon}{4\pi},$$ which holds due to the non-negativity of $f_0$. *Step 2: Uniform $L^2$-bound on $u^\varepsilon_0$.* We estimate the quantity $$\begin{aligned} \varepsilon \int_\Upsilon |u^\varepsilon_0|^2 \, \mathrm{d} \boldsymbol{\xi}= \varepsilon \int_{\{0<f^\varepsilon_0/(1-\rho^\varepsilon_0)<1\}} |u^\varepsilon_0|^2 \, \mathrm{d} \boldsymbol{\xi}+ \varepsilon \int_{\{f^\varepsilon_0/(1-\rho^\varepsilon_0) \geq 1\}} |u^\varepsilon_0|^2 \, \mathrm{d} \boldsymbol{\xi}=: I^\varepsilon_1 + I^\varepsilon_2. \end{aligned}$$ Using that there exists $C>0$ such that $|\log s|\mathds{1}_{\{0<s<1\}} \leq C |s|^{-1/2}$, the monotonicity of $s \mapsto s^{1/2}$ for $s>0$, and the estimates [\[eq:bounds on 1-rho initial eps\]](#eq:bounds on 1-rho initial eps){reference-type="eqref" reference="eq:bounds on 1-rho initial eps"}-[\[eq:bound on f initial eps\]](#eq:bound on f initial eps){reference-type="eqref" reference="eq:bound on f initial eps"}, the first integral is controlled by $$\begin{aligned} I^\varepsilon_1 \leq \varepsilon C\int_{\Upsilon} (1-\rho^\varepsilon_0)|{f^\varepsilon_0}|^{-1} \, \mathrm{d} \boldsymbol{\xi}\leq \varepsilon C\int_\Upsilon \varepsilon^{-1} \, \mathrm{d} \boldsymbol{\xi}\leq C, \end{aligned}$$ and the right-hand side is uniformly bounded independently of $\varepsilon$. 
Similarly, using that $|\log s| \mathds{1}_{\{s \geq 1\}} \leq |s|^{\frac{1}{2}}$, there holds $$I^\varepsilon_2 \leq C \varepsilon \int_\Upsilon \varepsilon^{-1} f^\varepsilon_0 \, \mathrm{d} \boldsymbol{\xi}\leq C \int_\Omega \rho^\varepsilon_0 \, \mathrm{d} x \leq C,$$ and the right-hand side is again uniformly bounded. *Step 3: Uniform bound on the entropy functional.* The aforementioned strong convergences $f^\varepsilon_0 \to f_0$ and $\rho^\varepsilon_0 \to \rho_0$ imply that, up to a subsequence which we do not relabel, there holds $f^\varepsilon_0 \to f_0$ a.e. in $\Upsilon$ and $\rho^\varepsilon_0 \to \rho_0$ a.e. in $\Omega$. Since $s \mapsto s \log s$ is locally bounded, the bound $0 < \rho^\varepsilon_0 < 1$ implies the uniform boundedness of $(1-\rho^\varepsilon_0)\log(1-\rho^\varepsilon_0)$, whence the Dominated Convergence Theorem implies $$\lim_{\varepsilon\to 0} \int_\Omega (1-\rho^\varepsilon_0)\log(1-\rho^\varepsilon_0) \, \mathrm{d} x = \int_\Omega (1-\rho_0)\log(1-\rho_0) \, \mathrm{d} x \leq C,$$ for $C$ depending only on the maximum value of $s \log s$ over the interval $[0,1]$ and $\Omega$. Similarly, we decompose $$\int_\Upsilon f^\varepsilon_0 \log f^\varepsilon_0 \, \mathrm{d} \boldsymbol{\xi}= \int_{ \{0 < f^\varepsilon_0 < 1\}} f^\varepsilon_0 \log f^\varepsilon_0 \, \mathrm{d} \boldsymbol{\xi}+ \int_{ \{f^\varepsilon_0 \geq 1\}} f^\varepsilon_0 \log f^\varepsilon_0 \, \mathrm{d} \boldsymbol{\xi},$$ and the local boundedness of $s\log s$ and the Dominated Convergence Theorem imply that $$\lim_{\varepsilon \to 0}\int_{ \{0 < f^\varepsilon_0 < 1\}} f^\varepsilon_0 \log f^\varepsilon_0 \, \mathrm{d} \boldsymbol{\xi}= \int_{ \{0 < f_0 < 1\}} f_0 \log f_0 \, \mathrm{d} \boldsymbol{\xi}\leq C,$$ where the right-hand side of the above is bounded by virtue of the function $s \log s$ being bounded over the interval $s \in (0,1)$. 
Meanwhile, the assumption $f_0 \in L^p(\Upsilon)$ for $p>1$ and the estimate $|\log s| \mathds{1}_{\{s \geq 1 \}} \leq C_p |s|^{p-1}$ imply $$\int_{ \{f^\varepsilon_0 > 1\}} f^\varepsilon_0 \log f^\varepsilon_0 \, \mathrm{d} \boldsymbol{\xi}\leq C_p\int_{ \{f^\varepsilon_0 > 1\}} |f^\varepsilon_0|^p \, \mathrm{d} \boldsymbol{\xi}\leq C_p(1+\Vert f_0 \Vert_{L^p(\Upsilon)}^p),$$ and the right-hand side is bounded independently of $\varepsilon$. ◻ The next proof uses a quantitative diagonal argument; the argument is classical, but spelling it out is nevertheless useful, as it identifies exactly which diagonal subsequence we select. We recall that $\{\sigma'(n)\}_n$ is a subsequence of $\{\sigma(n)\}_{n\in\mathbb{N}}$ provided that $\{\sigma'(n)\}_n \subset \{\sigma(n)\}_n$, $\sigma'(n) \geq \sigma(n)$, and $\sigma'(n+1)>\sigma'(n)$ for all $n$. *Proof of Lemma [Lemma 8](#lem:convergence of initial data){reference-type="ref" reference="lem:convergence of initial data"}.* For the purposes of the argument that follows, it is crucial that we replace the continuum $\{\varepsilon\}_{\varepsilon \in (0,1)}$ with a countable set. For this reason, we select $1>\varepsilon_1 > \varepsilon_2 > \dots > 0$ and define $$f^m_n(0) := f^{\varepsilon_m}_n(0), \qquad f^m_0 := f^{\varepsilon_m}_0,$$ so that the limit $m\to\infty$ morally corresponds to $\varepsilon \to 0$. 
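The diagonal selection just outlined can be sketched on a toy doubly-indexed family, with a placeholder distance $d(m,n)$ standing in for $\Vert f^m_n(0) - f^m_0\Vert_{L^p(\Upsilon)}$; the formula for $d$ below is purely illustrative, and any family with $d(m,n)\to 0$ as $n\to\infty$ for each fixed $m$ would do:

```python
# Toy model of the diagonal extraction: d(m, n) stands in for
# ||f^m_n(0) - f^m_0||_{L^p}; here d(m, n) = (1 + 1/m)/n is a placeholder.
def d(m, n):
    return (1.0 + 1.0 / m) / n

def refine(prev, m):
    """Pick a subsequence of `prev` along whose n-th term d(m, .) <= 1/n."""
    out, start = [], 0
    for n in range(1, len(prev) + 1):
        # first unused index in `prev` with d(m, index) <= 1/n
        while start < len(prev) and d(m, prev[start]) > 1.0 / n:
            start += 1
        if start >= len(prev):
            break
        out.append(prev[start])
        start += 1
    return out

M = 30
seq = list(range(1, 5000))            # the full index set (truncated)
diagonal = []
for m in range(1, M + 1):
    seq = refine(seq, m)              # sigma_m: subsequence of sigma_{m-1}
    if len(seq) >= m:
        diagonal.append(seq[m - 1])   # keep the diagonal term sigma_m(m)

# along the diagonal, the m-th term satisfies d(m, sigma_m(m)) <= 1/m
for m, idx in enumerate(diagonal, start=1):
    assert d(m, idx) <= 1.0 / m + 1e-12
print(diagonal[:5])   # prints [2, 4, 6, 8, 10]
```

Because each $\sigma_m$ is extracted from $\sigma_{m-1}$, the $m$-th diagonal term lies in all of the first $m$ subsequences, which is exactly the property used below.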
Recall from equation [\[eq:strong conv in Lq for initial f eps n\]](#eq:strong conv in Lq for initial f eps n){reference-type="eqref" reference="eq:strong conv in Lq for initial f eps n"} of §[3.1](#sec:existence galerkin){reference-type="ref" reference="sec:existence galerkin"} that for each fixed $m$, we have $$\Vert f^m_n(0) - f^m_0 \Vert_{L^p(\Upsilon)} \to 0 \quad \text{as } n \to \infty.$$ In turn, for $m=1$, there exists a subsequence $\{\sigma_1(n)\}_{n}$ of $\mathbb{N}$ such that, for all $n$, $$\label{eq:first subsequence} \Vert f^1_{\sigma_1(n)}(0) - f^1_0 \Vert_{L^p(\Upsilon)} \leq \frac{1}{n}.$$ Note in particular that the above implies, for all $\ell \geq n$, $$\label{eq:skipping steps} \Vert f^1_{\sigma_1(\ell)}(0) - f^1_0 \Vert_{L^p(\Upsilon)} \leq \frac{1}{n}.$$ Then, for $m=2$, since we also know that $\Vert f^2_{n}(0) - f^2_0 \Vert_{L^p(\Upsilon)} \to 0$ as $n\to\infty$, we may construct a subsequence $\{\sigma_2(n)\}_{n}$ of $\{\sigma_1(n)\}_n$ such that, for all $n$, $$\Vert f^2_{\sigma_2(n)}(0) - f^2_0 \Vert_{L^p(\Upsilon)} \leq \frac{1}{n}.$$ Furthermore, since $\{\sigma_2(n)\}_n$ is a subsequence of $\{\sigma_1(n)\}_n$, the bound [\[eq:skipping steps\]](#eq:skipping steps){reference-type="eqref" reference="eq:skipping steps"} implies, for all $n$, $$\Vert f^1_{\sigma_2(n)}(0) - f^1_0 \Vert_{L^p(\Upsilon)} \leq \frac{1}{n}.$$ We continue iteratively, and construct for each $m\in\mathbb{N}$ a subsequence $\{\sigma_m(n)\}_n$ of all previous $m-1$ subsequences, such that there holds, for all $j \in \{1,\dots,m\}$ and all $n$, $$\label{eq:just before choosing the diagonal} \Vert f^j_{\sigma_m(n)}(0) - f^j_0 \Vert_{L^p(\Upsilon)} \leq \frac{1}{n}.$$ In turn, we define the diagonal sequence $$\label{eq:diagonal subseq is defined} f^m := f^m_{\sigma_m(m)},$$ and note that [\[eq:just before choosing the diagonal\]](#eq:just before choosing the diagonal){reference-type="eqref" reference="eq:just before choosing the diagonal"} implies, for all $m$, 
$$\Vert f^m (0) - f^m_0 \Vert_{L^p(\Upsilon)} \leq \frac{1}{m}.$$ Using the above and the strong convergence of $f^m_0 \to f_0$ in $L^p(\Upsilon)$ of Lemma [Lemma 3](#lem:regularised initial data){reference-type="ref" reference="lem:regularised initial data"}, the triangle inequality yields $$\Vert f^m(0) - f_0 \Vert_{L^p(\Upsilon)} \leq \underbrace{\Vert f^m(0) - f_0^m \Vert_{L^p(\Upsilon)}}_{\leq 1/m} + \underbrace{\Vert f_0^m - f_0 \Vert_{L^p(\Upsilon)}}_{\to 0} \to 0$$ as $m\to\infty$, and the proof is complete. ◻ **Acknowledgements:** MB acknowledges support from DESY (Hamburg, Germany), a member of the Helmholtz Association HGF, and the German Science Foundation (DFG) through CRC TR 154, subproject C06. SMS acknowledges support from Centro di Ricerca Matematica Ennio De Giorgi. The authors report there are no competing interests to declare.
--- abstract: | Primal-dual algorithms are frequently used for iteratively solving large-scale convex optimization problems. The analysis of such algorithms is usually done on a case-by-case basis, and the resulting guaranteed rates of convergence can be conservative. Here we consider a class of first-order algorithms for linearly constrained convex optimization problems, and provide a linear matrix inequality (LMI) analysis framework for certifying worst-case exponential convergence rates. Our approach builds on recent results for interpolation of convex functions and linear operators, and our LMI directly constructs a Lyapunov function certifying the guaranteed convergence rate. By comparing to rates established in the literature, we show that our approach can certify significantly faster convergence for this family of algorithms. author: - "Bryan Van Scoy[^1]" - "John W. Simpson-Porco[^2]" - "Laurent Lessard[^3]" bibliography: - brevalias.bib - Main2.bib - JWSP.bib - references.bib title: "Automated Lyapunov Analysis of Primal-Dual Optimization Algorithms: An Interpolation Approach" --- # Introduction Primal-dual (or saddle-point) optimization methods have a rich history, dating back to the earliest days of mathematical programming [@TK:56; @KA-LH-HU:58]. The core idea --- that of sequentially or simultaneously updating both primal and dual variables --- is now widely used in algorithms for solving constrained optimization problems, including in interior-point methods [@numerical-optimization], the method of multipliers, and in distributed optimization methods such as ADMM [@admm]. There is significant overlap with the literature on *operator splitting*, as finding a point satisfying the KKT conditions of a constrained optimization problem can be cast as the problem of finding a zero of a sum of monotone operators; see [@NK-JCP:15; @PLC-JCP:21; @LC-DK-AC-AK:23] for extensive overviews of this perspective. 
Convergence proofs in this literature generally rely on the construction of bespoke pre-conditioners, followed by applications of convergence results for known fixed-point algorithms (e.g., Krasnoselskii-Mann iterations), or by clever direct construction of Lyapunov-like functions. While we do not herein study operator splitting methods in generality, part of our goal is to establish some preliminary foundations for more systematic and automated analyses of such algorithms. Primal-dual methods have attracted attention from controls researchers, particularly in the continuous-time setting where Lyapunov analysis techniques can be applied with relative ease; see [@DF-FP:10; @AC-EM-JC:15; @JWSP:16f; @AC-EM-SL-JC:17; @JWSP-BKP-NM-FD:18b; @NKD-SZK-MRJ:19; @DD-MRJ:19; @DD-MRJ:20; @XC-NL:20]. This has led to various control applications of the algorithms, such as in energy systems [@NL-CL-ZC-SHL:13; @AC-JC:14; @TS-CDP-AVDS:15; @JWSP-BKP-NM-FD:16c; @AB-ED:19]. However, only [@DD-MRJ:20] directly addresses the issue of the *rate* of exponential convergence in discrete time, and the analysis presented is focused on a particular algorithm obtained via Euler discretization from the continuous-time version. By interpreting optimization algorithms as dynamical systems, specifically as robust controllers, integral quadratic constraints (IQCs) have been used to find tight bounds on convergence rates [@lessard; @michalowsky2021robust]. Alternatively, one can directly generate a set of valid inequalities relating inputs and outputs of the objective function, and solve a *meta*-optimization problem that searches for tight worst-case guarantees. This was applied in a *finite-horizon* setting in the so-called PEP formulation [@interpolation], and also in an asymptotic setting [@lifting; @lessard2022analysis; @hu2017dissipativity] to directly search for Lyapunov functions that certify a given convergence rate. 
To the best of our knowledge, the aforementioned approaches have not previously been applied to analyze primal-dual algorithms for linearly constrained convex optimization. The closest works we found examined over-relaxed ADMM [@nishihara2015general], or alternating gradient methods for bilinear games [@pmlr-v151-zhang22e] or smooth monotone games [@zhang2021unified]. #### Contributions: We consider a family of first-order primal-dual algorithms for solving linearly constrained convex optimization problems of the form $$\label{Eq:Opt} \mathop{\mathrm{minimize}}_{x \in \mathbb{R}^n} \,\, f(x) \quad \text{subject to} \quad Ax = b.$$ Our main contribution is an automated framework for computing worst-case convergence rates of the algorithm over a class of problem data. We consider smooth and strongly convex $f$ and matrices $A$ with known bounds on the singular values. We show numerically that our analysis improves on known results. We also develop the set of multipliers for the class of smooth strongly convex functions and the class of linear functions with eigenvalues in a closed interval which may be of independent interest. # Primal-dual iterations for linearly constrained convex optimization {#Sec:PrimalDual} Consider the optimization problem [\[Eq:Opt\]](#Eq:Opt){reference-type="eqref" reference="Eq:Opt"}, where the goal is to minimize the objective function $f: \mathbb{R}^n \rightarrow \mathbb{R}$ over the affine constraint set $\mathsf{C} \colonequals\{x\in\mathbb{R}^n \;|\; Ax = b\}$ with $A \in \mathbb{R}^{r \times n}$ and $b \in \mathbb{R}^r$. Throughout this work, we make the following assumptions, which (among other things) ensure that the problem is feasible for any $b \in \mathbb{R}^r$ and possesses a unique optimal solution $x_\star \in \mathbb{R}^n$. 1. 
For known constants $0 < m \leq L < \infty$, the objective function $f$ is $m$-strongly convex, continuously differentiable, and its gradient $\nabla f: \mathbb{R}^n \rightarrow \mathbb{R}^n$ is globally Lipschitz continuous with Lipschitz constant $L$; we denote the set of all such functions by $\mathcal{F}(m,L)$, and we denote the condition number as $\kappa(f) = L/m$. 2. For known constants $0 < \underline{\sigma} \leq \overline{\sigma} < \infty$, the singular values $\sigma_i(A)$ of the constraint matrix $A \in \mathbb{R}^{r \times n}$ satisfy $\underline{\sigma} \leq \sigma_i(A) \leq \overline{\sigma}$ for all $i \in \{1,\ldots,r\}$; we denote the set of all such matrices by $\mathcal{A}(\underline{\sigma},\overline{\sigma})$, and we denote the condition number as $\kappa(A) = \overline{\sigma}/\underline{\sigma}$. Note that (ii) implies that $A$ has full row rank, and thus the constraints $Ax = b$ are linearly independent. Perhaps the most immediate iterative approach for computing the optimal solution of [\[Eq:Opt\]](#Eq:Opt){reference-type="eqref" reference="Eq:Opt"} would be projected gradient descent $$x_{k+1} = \mathrm{Proj}_{\mathsf{C}}(x_k - \alpha \nabla f(x_k)), \qquad k = 0,1,\ldots$$ with step size $\alpha > 0$. In many large-scale applications however, computing this projection is too computationally expensive, and one encounters similar computational bottlenecks if gradient ascent is applied to the dual problem of [\[Eq:Opt\]](#Eq:Opt){reference-type="eqref" reference="Eq:Opt"}. Instead, iterative methods are sought which rely only on (a small number of) evaluations of $\nabla f$, $A$, and $A^{\mathsf{T}}$ at each iteration [@NK-JCP:15; @PLC-JCP:21]. Such methods can be developed through Lagrange relaxation of the equality constraint in [\[Eq:Opt\]](#Eq:Opt){reference-type="eqref" reference="Eq:Opt"}. 
For $\mu \geq 0$ define the *augmented Lagrangian* $$L_{\mu}(x,\lambda) = f(x) + \lambda^{\mathsf{T}}(Ax-b) + \tfrac{\mu}{2}\|Ax-b\|_2^2,$$ with $\lambda \in \mathbb{R}^r$ the dual variable and $\mu$ the augmentation parameter. Under the present assumptions, strong duality holds, and $(x_{\star},\lambda_{\star})$ is primal-dual optimal for [\[Eq:Opt\]](#Eq:Opt){reference-type="eqref" reference="Eq:Opt"} if and only if it is a *saddle point* of $L_{\mu}$. As an iterative method to determine a saddle point, one begins with any initial condition $(x_0,\lambda_0)$ and performs the gradient descent-ascent iterations $$\begin{aligned} \label{Eq:GDA} x_{k+1} &= x_{k} - \alpha_{x} \nabla_{x}L_{\mu}(x_{k},\lambda_{k}) \notag\\ &= x_{k} - \alpha_{x}[\nabla f(x_{k}) + A^{\mathsf{T}}\lambda_{k} + \mu A^{\mathsf{T}}(Ax_k-b)]\\ \lambda_{k+1} &= \lambda_{k} + \alpha_{\lambda} \nabla_{\lambda}L_{\mu}(x_{k},\lambda_{k})\notag\\ &= \lambda_{k} + \alpha_{\lambda}(Ax_k-b)\end{aligned}$$ where $\alpha_x, \alpha_{\lambda} > 0$ are step sizes. For obvious reasons, such algorithms are termed *primal-dual* algorithms. In this work, we consider a slightly more general variation on [\[Eq:GDA\]](#Eq:GDA){reference-type="eqref" reference="Eq:GDA"}, given by $$\begin{aligned} \label{Eq:AHU-Extended} x_{k+1} &= x_{k} - \alpha_{x}[\nabla f(x_{k}) + A^{\mathsf{T}}\lambda_{k} + \mu A^{\mathsf{T}}(Ax_k-b)]\\ \tilde{x}_{k} &= x_k + \gamma (x_{k+1}-x_k)\\ \lambda_{k+1} &= \lambda_{k} + \alpha_{\lambda}\left(A\tilde{x}_{k}-b\right),\end{aligned}$$ where $\gamma \in [0,2]$ is an *extrapolation* parameter. Various algorithms are contained as special cases of [\[Eq:AHU-Extended\]](#Eq:AHU-Extended){reference-type="eqref" reference="Eq:AHU-Extended"} and will serve as points of comparison. 
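As a concrete illustration of the iteration [\[Eq:AHU-Extended\]](#Eq:AHU-Extended){reference-type="eqref" reference="Eq:AHU-Extended"}, the following sketch runs it on a toy quadratic instance with $f(x)=\tfrac12 x^{\mathsf{T}}Qx$, so that $\nabla f(x) = Qx$; the data $Q$, $A$, $b$ and the step sizes are hypothetical choices made for this example, not values prescribed by any result quoted here:

```python
import numpy as np

# Toy instance: f(x) = (1/2) x^T Q x with m = 1, L = 4, and a constraint
# matrix with singular values {1, 2}; mu = 0 and gamma = 1.  Step sizes
# satisfy alpha_x < 1/L and alpha_lam < m / sigma_bar^2.
Q = np.diag(np.linspace(1.0, 4.0, 6))
A = np.zeros((2, 6)); A[0, 0] = 1.0; A[1, 1] = 2.0
b = np.array([1.0, 2.0])

alpha_x, alpha_lam, mu, gamma = 0.2, 0.2, 0.0, 1.0

x, lam = np.zeros(6), np.zeros(2)
for _ in range(2000):
    grad = Q @ x + A.T @ lam + mu * A.T @ (A @ x - b)   # grad_x L_mu
    x_next = x - alpha_x * grad
    x_tilde = x + gamma * (x_next - x)                  # extrapolation step
    lam = lam + alpha_lam * (A @ x_tilde - b)
    x = x_next

# KKT residuals at the saddle point: grad f(x*) + A^T lam* = 0 and A x* = b
assert np.linalg.norm(A @ x - b) < 1e-8
assert np.linalg.norm(Q @ x + A.T @ lam) < 1e-8
assert np.allclose(x, [1, 1, 0, 0, 0, 0])
print("x* =", x.round(6))
```

For this separable instance the optimum can be read off by hand ($x_\star = (1,1,0,0,0,0)$), which makes the convergence of the iteration easy to verify.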
Our broad goal is to quantify the *worst-case* asymptotic geometric convergence rates achieved by some selected iterative primal-dual algorithms over all possible instances of problem data $f \in \mathcal{F}(m,L)$ and $A \in \mathcal{A}(\underline{\sigma},\overline{\sigma})$. ## Literature on known rates {#Sec:literature} For sufficiently small step sizes, [\[Eq:AHU-Extended\]](#Eq:AHU-Extended){reference-type="eqref" reference="Eq:AHU-Extended"} converges exponentially to the unique saddle point $(x_{\star},\lambda_{\star})$ of $L_{\mu}$. A significantly more challenging question is to provide non-conservative estimates of the worst-case asymptotic geometric convergence rate for the method over the class of problem data defined by $(\mathcal{F},\mathcal{A})$. We have found two relatively clear comparison points. First, [@SSD-WH:19] considers [\[Eq:AHU-Extended\]](#Eq:AHU-Extended){reference-type="eqref" reference="Eq:AHU-Extended"} with $\mu = \gamma = 0$. Translating the notation[^4], they use a Lyapunov function of the form $$V(x_k,\lambda_k) = \|x_{k} - \nabla f^*(-A^{\mathsf{T}}\lambda_k)\|_2 + c\,\|\lambda_k - \lambda_{\star}\|_2,$$ where $f^*(z) = \sup_{x \in \mathbb{R}^n} x^\mathsf{T}z - f(x)$ is the convex conjugate of $f$ [@convex-analysis] and $c \colonequals 2\frac{L}{m^2}\frac{\overline{\sigma}^3}{\underline{\sigma}^2}$. Using stepsizes $$\label{stepsizes1} \alpha_{x} = \frac{2}{m+L} \quad\text{and}\quad \alpha_{\lambda} = \frac{m}{(m+L)\bigl(\frac{\overline{\sigma}^2}{m}+c\overline{\sigma}\bigr)},$$ the decrease condition $V_{k+1} \leq \rho\,V_k$ holds with $$\label{bound1} \rho = 1 - \frac{1}{12\,\kappa(f)^3\,\kappa(A)^4}.$$ Second, the authors of [@SAA-AHS:20] consider [\[Eq:AHU-Extended\]](#Eq:AHU-Extended){reference-type="eqref" reference="Eq:AHU-Extended"} with $\gamma = 1$, and for notational simplicity we consider here $\mu = 0$. 
Using the quadratic Lyapunov function $$V(x_k,\lambda_k) = (1-\alpha_x\alpha_{\lambda}\overline{\sigma}^2)\|x_k-x_{\star}\|_2^2 + \|\lambda_k-\lambda_{\star}\|_2^2$$ and the step size conditions $\alpha_x < 1/L$ and $\alpha_{\lambda} < m/\overline{\sigma}^2$, they establish the decrease condition $V_{k+1} \leq \rho^2\,V_k$ with $$\rho^2 = \max\{1-\alpha_{x} m\,(1-\alpha_{x}L),\,1-\alpha_{x}\alpha_{\lambda}\underline{\sigma}^2\}.$$ The bound on the convergence rate is optimized by the step sizes [\[stepsizes2\]]{#stepsizes2 label="stepsizes2"} $$\begin{aligned} \alpha_x &= \begin{cases} \frac{1}{2L} & \text{if }\kappa(A)\leq\sqrt{2} \\ \frac{1-\kappa(A)^{-2}}{L} & \text{otherwise} \end{cases}\\ \alpha_\lambda &= \begin{cases} \frac{m}{4}\bigl(\frac{2}{\overline{\sigma}^2} + \frac{1}{\underline{\sigma}^2}\bigr) & \text{if }\kappa(A)\leq\sqrt{2} \\ \frac{m}{\overline{\sigma}^2} & \text{otherwise}, \end{cases}\end{aligned}$$ and the convergence factor using these step sizes is $$\label{bound2} \rho^2 = \begin{cases} 1-\frac{1}{4\,\kappa(f)} & \text{if }\kappa(A)\leq\sqrt{2} \\ 1 - \frac{1}{\kappa(f)} \bigl(\frac{1}{\kappa(A)^2} - \frac{1}{\kappa(A)^4}\bigr) & \text{otherwise}. \end{cases}$$ # Automated convergence analysis {#Sec:analysis} We now return to the primal-dual algorithm [\[Eq:AHU-Extended\]](#Eq:AHU-Extended){reference-type="eqref" reference="Eq:AHU-Extended"} and rewrite it in the form of a linear fractional representation, as traditionally used in robust control [@CS-SW:15]. Let $A = U\Sigma V_1^{\mathsf{T}}$ be a compact SVD of $A$, where $U\in\mathbb{R}^{r\times r}$ is orthogonal and $V_1 \in \mathbb{R}^{n\times r}$ has orthonormal columns. Let $V_2\in \mathbb{R}^{n\times (n-r)}$ be the orthogonal completion, so that $V = \begin{bmatrix}V_1 & V_2\end{bmatrix} \in \mathbb{R}^{n\times n}$ is orthogonal. 
Here, $\Sigma = \mathrm{diag}(\sigma_1,\dots,\sigma_r)$ and we have the inequality $\bar\sigma \geq \sigma_1 \geq \dots \geq \sigma_r \geq \underline\sigma$. Consider now the invertible change of state $$p_k = V_1^{\mathsf{T}}x_k, \qquad q_k = V_2^{\mathsf{T}}x_k, \qquad \nu_{k} = -\Sigma U^{\mathsf{T}}\lambda_k.$$ Routine computations quickly show that $$\begin{aligned} \begin{bmatrix}p_{k+1} \\ q_{k+1}\end{bmatrix} &= \begin{bmatrix}p_{k} \\ q_{k}\end{bmatrix} - \alpha_{x}V^{\mathsf{T}}\nabla f(V \left[\begin{smallmatrix}p_k \\ q_k\end{smallmatrix}\right]) + \alpha_x \begin{bmatrix} \nu_k - \mu \Sigma^2 p_k \\ 0\end{bmatrix}\\ \tilde{p}_{k} &= p_{k} + \gamma\,(p_{k+1} - p_{k})\\ \nu_{k+1} &= \nu_{k} - \alpha_{\lambda} \Sigma^2 \tilde{p}_k. \end{aligned}$$ Eliminating $\tilde{p}_k$ and defining the concatenated state $\xi_k = (p_k,q_k,\nu_k)$, the dynamics can be expressed as $$\begin{aligned} \xi_{k+1} &= A\xi_k + B_1 \left[\begin{smallmatrix}u_{k}^{1} \\ u_{k}^{2}\end{smallmatrix}\right] + B_2 \left[\begin{smallmatrix}u_{k}^{3} \\ u_{k}^{4}\end{smallmatrix}\right]\\ \left[\begin{smallmatrix}y_{k}^{1} \\ y_{k}^{2}\end{smallmatrix}\right] &= C_1 \xi_k + D_{11}\left[\begin{smallmatrix}u_{k}^{1} \\ u_{k}^{2}\end{smallmatrix}\right] + D_{12}\left[\begin{smallmatrix}u_{k}^{3} \\ u_{k}^{4}\end{smallmatrix}\right]\\ \left[\begin{smallmatrix}y_{k}^{3} \\ y_{k}^{4}\end{smallmatrix}\right] &= C_2 \xi_k + D_{21}\left[\begin{smallmatrix}u_{k}^{1} \\ u_{k}^{2}\end{smallmatrix}\right] + D_{22}\left[\begin{smallmatrix}u_{k}^{3} \\ u_{k}^{4}\end{smallmatrix}\right] \end{aligned}$$ where the matrices are defined by the blocks $${\small\addtolength{\arraycolsep}{-1.2mm}\begin{bmatrix}A & B_1 & B_2 \\ C_1 & D_{11} & D_{12} \\ C_2 & D_{21} & D_{22}\end{bmatrix} \!=\! 
\addtolength{\arraycolsep}{0mm} \left[\begin{array}{ccc|cc|cc} I & 0 & \alpha_x I & -\alpha_{x} I & 0 & -\alpha_{x} \mu I & 0\\ 0 & I & 0 & 0 & -\alpha_{x} I & 0 & 0\\ 0 & 0 & I & 0 & 0 & 0 & -\alpha_{\lambda}I\\ \hline I & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & I & 0 & 0 & 0 & 0 & 0\\ \hline I & 0 & 0 & 0 & 0 & 0 & 0\\ I & 0 & \gamma\alpha_{x}I & -\gamma\alpha_{x} I & 0 & -\gamma\alpha_x\mu I & 0 \end{array}\right] }$$ and with inputs $(u_{k}^{1},u_{k}^{2})$ and $(u_{k}^{3},u_{k}^{4})$ defined by $$\label{eq:feedback} \left[\begin{smallmatrix}u_{k}^{1} \\ u_{k}^{2}\end{smallmatrix}\right] = V^{\mathsf{T}}\nabla f\Bigl(V\left[\begin{smallmatrix}y_{k}^{1} \\ y_{k}^{2}\end{smallmatrix}\right]\Bigr), \quad \left[\begin{smallmatrix}u_{k}^{3} \\ u_{k}^{4}\end{smallmatrix}\right] = \left[\begin{smallmatrix}\Sigma^2 & 0\\ 0 & \Sigma^2\end{smallmatrix}\right]\left[\begin{smallmatrix}y_{k}^{3} \\ y_{k}^{4}\end{smallmatrix}\right]$$ and outputs $(y_k^1,y_k^2,y_k^3,y_k^4) = (p_k,q_k,p_k,\tilde p_k)$. To simplify notation in the sequel, we assume without loss of generality that each input and output is one-dimensional; see the lossless dimensionality reduction in [@lessard]. ## Lifted dynamics Let $G(z)$ denote the transfer function corresponding to the state-space matrices above, which maps the set of inputs $(u_k^1,u_k^2,u_k^3,u_k^4)$ to the set of outputs $(y_k^1,y_k^2,y_k^3,y_k^4)$. This system is connected with the feedback in [\[eq:feedback\]](#eq:feedback){reference-type="eqref" reference="eq:feedback"} through the gradient of the objective function and the squared matrix of singular values. To analyze the system, we will replace the feedback in [\[eq:feedback\]](#eq:feedback){reference-type="eqref" reference="eq:feedback"} with constraints on the inputs and outputs of $\nabla f$ and $\Sigma^2$. The more constraints that we use, the tighter the analysis will be. 
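The change of variables above can be sanity-checked numerically. A minimal sketch (NumPy) follows; since the explicit primal-dual update [\[Eq:AHU-Extended\]](#Eq:AHU-Extended){reference-type="eqref" reference="Eq:AHU-Extended"} is not reproduced in this excerpt, the sketch *assumes* the update $x_{k+1} = x_k - \alpha_x\bigl(\nabla f(x_k) + A^\mathsf{T}\lambda_k + \mu A^\mathsf{T}A x_k\bigr)$, $\tilde{x}_k = x_k + \gamma(x_{k+1}-x_k)$, $\lambda_{k+1} = \lambda_k + \alpha_\lambda A\tilde{x}_k$ (with $b=0$), which is the form the transformed $(p,q,\nu)$ recursion implies; all problem data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 5, 2
alpha_x, alpha_lam, gamma, mu = 0.05, 0.05, 1.0, 0.3

# Hypothetical data: diagonal quadratic objective and a full-rank A.
Qdiag = rng.uniform(1.0, 4.0, n)
grad = lambda x: Qdiag * x               # gradient of f(x) = x^T diag(Qdiag) x / 2
A = rng.standard_normal((r, n))

U, svals, Vt = np.linalg.svd(A)          # full SVD: Vt is the n x n orthogonal V^T
V = Vt.T
V1, V2 = V[:, :r], V[:, r:]
Sig2 = svals**2                          # diagonal of Sigma^2

# One step of the assumed primal-dual update (b = 0).
x = rng.standard_normal(n)
lam = rng.standard_normal(r)
x_next = x - alpha_x * (grad(x) + A.T @ lam + mu * A.T @ (A @ x))
x_tilde = x + gamma * (x_next - x)
lam_next = lam + alpha_lam * A @ x_tilde

# The same step in the transformed coordinates (p, q, nu).
p, q = V1.T @ x, V2.T @ x
nu = -np.diag(svals) @ U.T @ lam
g = V.T @ grad(V @ np.concatenate([p, q]))
p_next = p - alpha_x * g[:r] + alpha_x * (nu - mu * Sig2 * p)
q_next = q - alpha_x * g[r:]
p_tilde = p + gamma * (p_next - p)
nu_next = nu - alpha_lam * Sig2 * p_tilde

# The transformed step reproduces the original step exactly.
assert np.allclose(p_next, V1.T @ x_next)
assert np.allclose(q_next, V2.T @ x_next)
assert np.allclose(nu_next, -np.diag(svals) @ U.T @ lam_next)
```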
To obtain more constraints, we will lift the system so that the inputs and outputs are in a higher-dimensional space, and then apply the constraints between all iterates in this lifted space [@lifting]. Given a lifting dimension $\ell\in\{1,2,\ldots\}$, let $$\label{Eq:LiftedSystem} \psi(z) = I_4\otimes\begin{bmatrix}1 \\ z^{-1} \\ \vdots \\ z^{1-\ell}\end{bmatrix} \quad\text{and}\quad \mathbf{G}= \begin{bmatrix}\psi G \\ \psi\end{bmatrix}$$ where $\otimes$ denotes the Kronecker product. The system $\mathbf{G}$, called the *lifted system*, is the map [\[eq:lifted-iterates\]]{#eq:lifted-iterates label="eq:lifted-iterates"} $$\label{eq:lifted-map} (u_k^1,u_k^2,u_k^3,u_k^4) \mapsto (Y_k^1,\ldots,Y_k^4,U_k^1,\ldots,U_k^4)$$ where the lifted iterates are $$Y_k^i = (y_k^i,y_{k-1}^i,\ldots,y_{k-\ell+1}^i), \quad i\in\{1,2,3,4\}$$ and similarly for $U_k^i$. Each lifted iterate $Y_k^i$ and $U_k^i$ is an $\ell$-dimensional vector consisting of the $\ell$ most recent iterates, so the lifting dimension $\ell$ determines how many past iterates enter the analysis. When $\ell=1$, the lifted system is simply $\mathbf{G}= \left[\begin{smallmatrix} G \\ I \end{smallmatrix}\right]$. ## Multipliers To analyze the system, we will replace the feedback [\[eq:feedback\]](#eq:feedback){reference-type="eqref" reference="eq:feedback"} with inequalities on the inputs and outputs of the system in the lifted space. We parameterize the set of inequalities using a symmetric block matrix, called a *multiplier*, of the form $$\label{multiplier} M = \begin{bmatrix}M_{11} & M_{12} \\ M_{21} & M_{22}\end{bmatrix}.$$ We now describe the constraints for both the objective function and constraint matrix. #### Objective function.
We consider inequalities on the objective function gradient of the form $$\begin{gathered} \label{eq:F-quadratic-inequality} 0 \leq \sum_{i=1}^\ell \sum_{j=1}^\ell \biggl((M_{11})_{ij}\,\begin{bmatrix}y_{k-i+1}^1 \\ y_{k-i+1}^2\end{bmatrix}^\mathsf{T}\begin{bmatrix}y_{k-j+1}^1 \\ y_{k-j+1}^2\end{bmatrix} \\ + 2\,(M_{12})_{ij}\,\begin{bmatrix}y_{k-i+1}^1 \\ y_{k-i+1}^2\end{bmatrix}^\mathsf{T}\begin{bmatrix}u_{k-j+1}^1 \\ u_{k-j+1}^2\end{bmatrix} \\ + (M_{22})_{ij}\,\begin{bmatrix}u_{k-i+1}^1 \\ u_{k-i+1}^2\end{bmatrix}^\mathsf{T}\begin{bmatrix}u_{k-j+1}^1 \\ u_{k-j+1}^2\end{bmatrix}\biggr).\end{gathered}$$ The multiplier $M$ has dimensions $2\ell\times 2\ell$ and parameterizes all inequalities that are linear in the inner products between inputs and outputs of the gradient. The following result characterizes all multipliers such that this inequality holds for all iterates of the system. We provide the proof in the appendix. **Proposition 1** (Objective multipliers). The quadratic inequality [\[eq:F-quadratic-inequality\]](#eq:F-quadratic-inequality){reference-type="eqref" reference="eq:F-quadratic-inequality"} holds for all iterates that satisfy the feedback [\[eq:feedback\]](#eq:feedback){reference-type="eqref" reference="eq:feedback"} for some function $f\in\mathcal{F}(m,L)$ if and only if the multiplier has the form $$\label{Eq:ObjectiveMult} M = \begin{bmatrix}-2mL & L+m \\ L+m & -2\end{bmatrix} \otimes R + \begin{bmatrix}\mathbf{0} & S \\ S^\mathsf{T}& \mathbf{0}\end{bmatrix}$$ for some $\ell\times\ell$ symmetric matrix $R$ such that $R\mathbf{1}=0$, $R_{ii}\geq 0$ for all $i$, and $R_{ij}\leq 0$ for all $i\neq j,$[^5] and some $\ell\times\ell$ skew-symmetric matrix $S$ satisfying $S\mathbf{1} = \mathbf{0}$. We denote the set of all such matrices as $\mathcal{M}_\mathcal{F}(m,L)$. #### Constraint matrix. 
We consider inequalities on the constraint matrix of the form $$\label{eq:A-quadratic-inequality} 0 \leq \mathop{\mathrm{\mathrm{tr}}}\left(M \begin{bmatrix}Y_k^3 \\ Y_k^4 \\ U_k^3 \\ U_k^4\end{bmatrix} \begin{bmatrix}Y_k^3 \\ Y_k^4 \\ U_k^3 \\ U_k^4\end{bmatrix}^\mathsf{T}\right).$$ The multiplier $M$ has dimensions $4\ell\times4\ell$ and parameterizes all inequalities that are linear in the inner products between inputs and outputs of the matrix $\Sigma^2$ of squared singular values of $A$. The following result characterizes the set of multipliers, which we prove in the appendix. **Proposition 2** (Constraint multipliers). The quadratic inequality [\[eq:A-quadratic-inequality\]](#eq:A-quadratic-inequality){reference-type="eqref" reference="eq:A-quadratic-inequality"} holds for all iterates that satisfy the feedback [\[eq:feedback\]](#eq:feedback){reference-type="eqref" reference="eq:feedback"} for some matrix $A\in\mathcal{A}(\underline{\sigma},\overline{\sigma})$ if the multiplier has the form $$\label{Eq:ObjectiveConst} M = \begin{bmatrix}-2\,\underline{\sigma}^2\,\overline{\sigma}^2 R & (\overline{\sigma}^2+\underline{\sigma}^2) R \\ (\overline{\sigma}^2+\underline{\sigma}^2) R & -2 R\end{bmatrix} + \begin{bmatrix}0 & S \\ S^\mathsf{T}& 0\end{bmatrix}$$ for some $2\ell\times2\ell$ symmetric positive semidefinite matrix $R$, and some $2\ell\times 2\ell$ skew-symmetric matrix $S$. We denote the set of all such matrices as $\mathcal{M}_\mathcal{A}(\underline{\sigma},\overline{\sigma})$.
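The sufficiency direction is easy to spot-check numerically: for a one-dimensional channel the feedback gives $U = \sigma^2 Y$ with $\sigma\in[\underline{\sigma},\overline{\sigma}]$, and the quadratic form of any multiplier of the form above is then nonnegative. A minimal sketch (NumPy) with randomly generated $R\succeq 0$ and skew-symmetric $S$ (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
ell = 3                                   # lifting dimension
s_lo, s_hi = 0.5, 2.0                     # sigma_underline, sigma_overline

# Hypothetical multiplier data: R psd and S skew-symmetric, both 2*ell x 2*ell.
B = rng.standard_normal((2 * ell, 2 * ell))
R = B @ B.T
S = rng.standard_normal((2 * ell, 2 * ell))
S = S - S.T

M = np.block([[-2 * s_lo**2 * s_hi**2 * R, (s_hi**2 + s_lo**2) * R + S],
              [(s_hi**2 + s_lo**2) * R + S.T, -2 * R]])

# Sample (Y, U) pairs consistent with the feedback U = sigma^2 * Y for a
# one-dimensional channel with sigma in [s_lo, s_hi].
for _ in range(100):
    sigma = rng.uniform(s_lo, s_hi)
    Y = rng.standard_normal(2 * ell)
    U = sigma**2 * Y
    z = np.concatenate([Y, U])
    assert z @ M @ z >= -1e-9             # the quadratic inequality holds
```

The skew part contributes nothing to the quadratic form (it cancels against its transpose), while the $R$ part collapses to $2\,(\sigma^2-\underline{\sigma}^2)(\overline{\sigma}^2-\sigma^2)\,Y^\mathsf{T}RY \geq 0$.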
## Linear matrix inequality We now use the lifted system [\[Eq:LiftedSystem\]](#Eq:LiftedSystem){reference-type="eqref" reference="Eq:LiftedSystem"} and the multipliers [\[Eq:ObjectiveMult\]](#Eq:ObjectiveMult){reference-type="eqref" reference="Eq:ObjectiveMult"} and [\[Eq:ObjectiveConst\]](#Eq:ObjectiveConst){reference-type="eqref" reference="Eq:ObjectiveConst"} that characterize the objective function and constraint matrix to construct a linear matrix inequality (LMI) whose feasibility certifies convergence of the primal-dual algorithm [\[Eq:AHU-Extended\]](#Eq:AHU-Extended){reference-type="eqref" reference="Eq:AHU-Extended"} with a specified rate $\rho \in (0,1)$. Denote a minimal realization of the lifted system as $\mathbf{G}\sim (\mathbf{A},\mathbf{B},\mathbf{C},\mathbf{D})$ and let $\mathbf{n}$ denote the dimension of the realization. Recall that the lifted system maps the iterates as in [\[eq:lifted-iterates\]](#eq:lifted-iterates){reference-type="eqref" reference="eq:lifted-iterates"}. Let $\mathbf{Y_f}$ and $\mathbf{U_f}$ denote the rows of $\mathbf{C}$ and $\mathbf{D}$ corresponding to pairs of inputs and outputs of the gradient, and let $\mathbf{Y_A}$ and $\mathbf{U_A}$ denote rows corresponding to inputs and outputs of $\Sigma^2$. We can now state our main result. **Theorem 1** (Analysis). Given $\rho\in(0,1)$, if there exists an $\mathbf{n}\times\mathbf{n}$ symmetric matrix $P$ and multipliers $M_1,M_2\in\mathcal{M}_\mathcal{F}(m,L)$ and $M_3,M_4\in\mathcal{M}_\mathcal{A}(\underline{\sigma},\overline{\sigma})$ such that [\[lmi\]]{#lmi label="lmi"} $$\begin{gathered} \label{lmi1} 0 \succeq \begin{bmatrix}\mathbf{A}^\mathsf{T}P\mathbf{A}- \rho^2 P & \mathbf{A}^\mathsf{T}P \mathbf{B}\\ \mathbf{B}^\mathsf{T}P \mathbf{A}& \mathbf{B}^\mathsf{T}P \mathbf{B}\end{bmatrix} + \begin{bmatrix}\mathbf{Y_f}\\ \mathbf{U_f}\end{bmatrix}^\mathsf{T}\! 
M_1 \begin{bmatrix}\mathbf{Y_f}\\ \mathbf{U_f}\end{bmatrix} \\ + \begin{bmatrix}\mathbf{Y_A}\\ \mathbf{U_A}\end{bmatrix}^\mathsf{T}\! M_3 \begin{bmatrix}\mathbf{Y_A}\\ \mathbf{U_A}\end{bmatrix} \end{gathered}$$ and $$\begin{gathered} \label{lmi2} 0 \preceq \begin{bmatrix}P-I & 0 \\ 0 & 0\end{bmatrix} + \begin{bmatrix}\mathbf{Y_f}\\ \mathbf{U_f}\end{bmatrix}^\mathsf{T}\! M_2 \begin{bmatrix}\mathbf{Y_f}\\ \mathbf{U_f}\end{bmatrix} \\ + \begin{bmatrix}\mathbf{Y_A}\\ \mathbf{U_A}\end{bmatrix}^\mathsf{T}\! M_4 \begin{bmatrix}\mathbf{Y_A}\\ \mathbf{U_A}\end{bmatrix}, \end{gathered}$$ then the primal-dual iterations from [\[Eq:AHU-Extended\]](#Eq:AHU-Extended){reference-type="eqref" reference="Eq:AHU-Extended"} converge linearly with rate $O(\rho^k)$ for all objective functions $f\in\mathcal{F}(m,L)$ and all constraint matrices $A\in\mathcal{A}(\underline{\sigma},\overline{\sigma})$. **Proof.** Suppose the LMI [\[lmi\]](#lmi){reference-type="eqref" reference="lmi"} is feasible, and consider a trajectory of the primal-dual algorithm [\[Eq:AHU-Extended\]](#Eq:AHU-Extended){reference-type="eqref" reference="Eq:AHU-Extended"}. Let $\bm{\xi}_k$ denote the state of the lifted system $\mathbf{G}$. For each iterate, let a tilde denote the iterate shifted by the fixed point of the system. Now multiply the LMI in [\[lmi1\]](#lmi1){reference-type="eqref" reference="lmi1"} on the right and left by the lifted state and inputs $(\bm{\tilde\xi}_k,\tilde u_k^1,\tilde u_k^2,\tilde u_k^3,\tilde u_k^4)$ and its transpose and use the fact that the inequalities [\[eq:F-quadratic-inequality\]](#eq:F-quadratic-inequality){reference-type="eqref" reference="eq:F-quadratic-inequality"} and [\[eq:A-quadratic-inequality\]](#eq:A-quadratic-inequality){reference-type="eqref" reference="eq:A-quadratic-inequality"} are nonnegative when $M_1,M_2\in\mathcal{M}_\mathcal{F}(m,L)$ and $M_3,M_4\in\mathcal{M}_\mathcal{A}(\underline{\sigma},\overline{\sigma})$. 
This produces the inequality $V(\bm{\tilde\xi}_{k+1}) \leq \rho^2\,V(\bm{\tilde\xi}_k)$. Likewise, from the LMI [\[lmi2\]](#lmi2){reference-type="eqref" reference="lmi2"}, we obtain the inequality $V(\bm{\tilde\xi}_k)\geq \|\bm{\tilde\xi}_k\|^2$. Since the state $\tilde\xi_k$ of the original system $G$ is contained in the lifted system, we have that $\|\bm{\tilde \xi}_k\|^2 \geq \|\tilde\xi_k\|^2$. Chaining all of these inequalities together gives $$\|\tilde\xi_k\|^2 \leq \|\bm{\tilde\xi}_k\|^2 \leq V(\bm{\tilde\xi}_k) \leq \ldots \leq \rho^{2k}\,V(\bm{\tilde\xi}_0).$$ Taking the square root gives $\|\tilde\xi_k\|\leq c\,\rho^k$ for some $c>0$, so the iterates converge to the optimizer linearly with rate $O(\rho^k)$.   ------------------------------------------------------------------------ # Results {#Sec:results} We now compare our analysis with the results from the literature described in Section [2.1](#Sec:literature){reference-type="ref" reference="Sec:literature"}. In each case, we choose the objective function condition number $\kappa(f) = 2$ and the Lagrangian augmentation parameter $\mu=0$. We emphasize that our analysis applies to any values of these parameters, but we choose these values to be able to compare with known bounds. ![Comparison with [@SSD-WH:19]. 
The algorithm parameters are those in [\[stepsizes1\]](#stepsizes1){reference-type="eqref" reference="stepsizes1"} with extrapolation parameter $\gamma=0$.](figures/comparison_bound1.pdf){#fig:comparison1} We use the step size selections from [\[stepsizes1\]](#stepsizes1){reference-type="eqref" reference="stepsizes1"} and [\[stepsizes2\]](#stepsizes2){reference-type="eqref" reference="stepsizes2"}; Figures [1](#fig:comparison1){reference-type="ref" reference="fig:comparison1"} and [2](#fig:comparison2){reference-type="ref" reference="fig:comparison2"} plot the convergence factor $\rho$ obtained[^6] from our analysis in Theorem [Theorem 1](#thm:analysis){reference-type="ref" reference="thm:analysis"} along with the corresponding bound from the literature as a function of the matrix condition number $\kappa(A)$. In all cases, our approach certifies a faster rate of convergence. Note that the algorithm in Figure [1](#fig:comparison1){reference-type="ref" reference="fig:comparison1"} does not use extrapolation, while the algorithm in Figure [2](#fig:comparison2){reference-type="ref" reference="fig:comparison2"} does use extrapolation, and achieves a much faster convergence rate as a result. ![Comparison with [@SAA-AHS:20]. The algorithm parameters are those in [\[stepsizes2\]](#stepsizes2){reference-type="eqref" reference="stepsizes2"} with extrapolation parameter $\gamma=1$.](figures/comparison_bound2.pdf){#fig:comparison2} The guaranteed convergence factor produced by our approach with step size selection [\[stepsizes2\]](#stepsizes2){reference-type="eqref" reference="stepsizes2"} is a non-monotonic function of $\kappa(A)$ in Figure [2](#fig:comparison2){reference-type="ref" reference="fig:comparison2"}. For any *fixed* algorithm, the worst-case convergence factor must be monotonically nondecreasing in the condition numbers $\kappa(f)$ and $\kappa(A)$.
However, the step size selection [\[stepsizes2\]](#stepsizes2){reference-type="eqref" reference="stepsizes2"} is a function of $\kappa(A)$, and thus the algorithm used is varying at each point on this curve. This highlights that the bounds from the literature are conservative, and suggests that further improvements in the worst-case convergence rate could be obtained by optimizing the step-sizes subject to feasibility of our LMI. # Conclusion {#Sec:conclusion} Focusing on first-order primal-dual methods for linearly constrained convex optimization, we proposed a systematic method to search for a Lyapunov function that certifies a worst-case convergence rate of the algorithm across all smooth strongly convex functions and all constraint matrices with bounded singular values. Our analysis applies to a range of primal-dual algorithms, including those with extrapolation and Lagrangian augmentation. We compared our numerical results with two bounds from the literature, and our analysis yields better bounds in each case. Future work includes finding algorithm parameters that optimize the rates obtained from our analysis, improving the analysis by leveraging recent interpolation conditions for linear operators [@linear-interpolation], and extending the approach to more general operator splitting methods [@LC-DK-AC-AK:23]. # Proof of Proposition [Proposition 1](#prop:objective-multipliers){reference-type="ref" reference="prop:objective-multipliers"} {#proof-of-proposition-propobjective-multipliers} Let $y_i,u_i,f_i$ for $i\in [\ell]=\{1,\ldots,\ell\}$ denote a sequence of $\ell$ iterates. From [@interpolation], these points are interpolable by an $L$-smooth and $m$-strongly convex function if and only if $$q_{ij} := \left[\begin{smallmatrix} y_i \\ y_j \\ u_i \\ u_j\end{smallmatrix}\right]^\mathsf{T}\! 
\underbrace{\left[\begin{smallmatrix} -mL & mL & m & -L\\ mL & -mL & -m & L\\ m & -m & -1 & 1\\ -L & L & 1 & -1 \end{smallmatrix}\right]}_H \left[\begin{smallmatrix} y_i \\ y_j \\ u_i \\ u_j\end{smallmatrix}\right] + h^\mathsf{T}\left[\begin{smallmatrix}f_i \\ f_j\end{smallmatrix}\right] \geq 0$$ for all $i,j\in [\ell]$ where $h = 2(L-m)\left[\begin{smallmatrix}1 \\ -1\end{smallmatrix}\right]$. In terms of the stacked vectors $u,y,f$, $$q_{ij} = \begin{bmatrix}y \\ u\end{bmatrix}^\mathsf{T}\! H_{ij} \begin{bmatrix}y \\ u\end{bmatrix} + h_{ij}^\mathsf{T}f$$ where $h_{ij} = \left[\begin{smallmatrix}e_i & e_j\end{smallmatrix}\right] h = 2(L-m)(e_i-e_j)$ and $H_{ij}$ is the matrix $$\begin{aligned} \left[\begin{smallmatrix}e_i^\mathsf{T}& 0_\ell^\mathsf{T}\\ e_j^{\mathsf{T}} & 0_{\ell}^\mathsf{T}\\ 0_{\ell}^{\mathsf{T}} & e_i^\mathsf{T}\\ 0_{\ell}^{\mathsf{T}} & e_j^\mathsf{T}\end{smallmatrix}\right]^\mathsf{T}\!\!\!\!\! H\! \left[\begin{smallmatrix}e_i^\mathsf{T}& 0_\ell^\mathsf{T}\\ e_j^{\mathsf{T}} & 0_{\ell}^\mathsf{T}\\ 0_{\ell}^{\mathsf{T}} & e_i^\mathsf{T}\\ 0_{\ell}^{\mathsf{T}} & e_j^\mathsf{T}\end{smallmatrix}\right] \!\!=\!\! \left[\begin{smallmatrix} \!\!-mL(e_i-e_j)(e_i-e_j)^\mathsf{T}& (e_i-e_j)(me_i - Le_j)^\mathsf{T}\!\\ (me_i - Le_j)(e_i-e_j)^\mathsf{T}& -(e_i-e_j)(e_i-e_j)^\mathsf{T} \end{smallmatrix}\right]\end{aligned}$$ where $e_i$ is the $i\textsuperscript{th}$ unit vector in $\mathbb{R}^\ell$. Taking a nonnegative linear combination of the inequalities $q_{ij} \geq 0$, we obtain $Q = \sum_{i \neq j}\lambda_{ij}\,q_{ij} \geq 0$ for all coefficients $\lambda_{ij} \geq 0$ with $\lambda_{ii} = 0$ for all $i$.
With $\Lambda \in \mathbb{R}^{\ell \times \ell}_{\geq 0}$ the matrix with elements $\lambda_{ij}$, straightforward but tedious algebra establishes that $$\begin{gathered} Q = \begin{bmatrix}y \\ u\end{bmatrix}^\mathsf{T} \begin{bmatrix} (-mL) R & \tfrac{m+L}{2}R + \tfrac{m-L}{2}T\\ \star & -R\end{bmatrix} \begin{bmatrix}y \\ u\end{bmatrix}\\ +(L-m)\mathbf{1}^{\mathsf{T}}(T-T^{\mathsf{T}})f,\end{gathered}$$ where the $\star$ block can be inferred from symmetry and $$\begin{aligned} R &\colonequals\mathop{\mathrm{\mathrm{diag}}}((\Lambda+\Lambda^\mathsf{T}) \mathbf{1})-(\Lambda+\Lambda^\mathsf{T}), \\ T &\colonequals\mathop{\mathrm{\mathrm{diag}}}((\Lambda-\Lambda^\mathsf{T}) \mathbf{1}) + (\Lambda-\Lambda^\mathsf{T}).\end{aligned}$$ Note that $R$ is symmetric doubly hyperdominant with zero row sums, and $R$ and $T$ are independent, as they depend on the symmetric and skew-symmetric parts of $\Lambda$, respectively. Now define the skew symmetric matrix $S \colonequals\tfrac{m-L}{2}(\Lambda - \Lambda^{\mathsf{T}})$ and note that $\tfrac{m-L}{2}T = \mathrm{diag}(S\mathbf{1}) + S$ and $\tfrac{m-L}{2}T^{\mathsf{T}} = \mathrm{diag}(S\mathbf{1}) - S = \mathrm{diag}(S\mathbf{1}) + S^{\mathsf{T}}$. Then the nonnegative quantity $Q$ is $$\left[\begin{smallmatrix}y \\ u\end{smallmatrix}\right]^\mathsf{T}\bigl(\left[\begin{smallmatrix}-2mL & m+L\\ m+L & -2\end{smallmatrix}\right]\otimes R + \left[\begin{smallmatrix}\boldsymbol{0} & \mathrm{diag}(S\mathbf{1})+S\\ \star & \boldsymbol{0}\end{smallmatrix}\right]\bigr) \left[\begin{smallmatrix}y \\ u\end{smallmatrix}\right] - 4(S\mathbf{1})^\mathsf{T}f,$$ which is of the form [\[eq:F-quadratic-inequality\]](#eq:F-quadratic-inequality){reference-type="eqref" reference="eq:F-quadratic-inequality"} if and only if $S\mathbf{1} = \mathbf{0}$.  
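The pairwise interpolation inequalities $q_{ij}\geq 0$ aggregated in this proof can be spot-checked numerically. A minimal sketch (NumPy), assuming the one-dimensional quadratic $f(y)=cy^2/2$ with curvature $c\in[m,L]$, which belongs to $\mathcal{F}(m,L)$ so every $q_{ij}$ must be nonnegative:

```python
import numpy as np

rng = np.random.default_rng(2)
m, L = 1.0, 4.0
H = np.array([[-m * L,  m * L,  m, -L],
              [ m * L, -m * L, -m,  L],
              [ m,     -m,     -1,  1],
              [-L,      L,      1, -1]])
h = 2 * (L - m) * np.array([1.0, -1.0])

# Sample points of f(y) = c*y^2/2 with curvature c in [m, L]; such an f is
# L-smooth and m-strongly convex, so each pairwise inequality q_ij >= 0.
for _ in range(200):
    c = rng.uniform(m, L)
    yi, yj = rng.standard_normal(2)
    ui, uj = c * yi, c * yj               # gradients at yi, yj
    fi, fj = 0.5 * c * yi**2, 0.5 * c * yj**2
    z = np.array([yi, yj, ui, uj])
    q = z @ H @ z + h @ np.array([fi, fj])
    assert q >= -1e-9
```

For $c=m$ or $c=L$ the inequality is tight ($q_{ij}=0$), consistent with the sector endpoints of $\mathcal{F}(m,L)$.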
------------------------------------------------------------------------ # Proof of Proposition [Proposition 2](#prop:constraint-multipliers){reference-type="ref" reference="prop:constraint-multipliers"} {#proof-of-proposition-propconstraint-multipliers} Consider a matrix $M$ of this form, and define the matrices $$M_0 = \begin{bmatrix}-2\,\underline{\sigma}^2\,\overline{\sigma}^2 & \overline{\sigma}^2+\underline{\sigma}^2 \\ \overline{\sigma}^2+\underline{\sigma}^2 & -2\end{bmatrix} \quad\text{and}\quad M_1 = \begin{bmatrix}0 & 1 \\ -1 & 0\end{bmatrix}$$ so that the multiplier is $M = M_0\otimes R + M_1\otimes S$. We first write the inequality [\[eq:A-quadratic-inequality\]](#eq:A-quadratic-inequality){reference-type="eqref" reference="eq:A-quadratic-inequality"} as $$0\leq\left[\begin{smallmatrix}Y \\ U\end{smallmatrix}\right]^\mathsf{T}\! M \left[\begin{smallmatrix}Y \\ U\end{smallmatrix}\right] \quad\text{where}\quad Y = \left[\begin{smallmatrix}Y_k^3 \\ Y_k^4\end{smallmatrix}\right], \quad U = \left[\begin{smallmatrix}U_k^3 \\ U_k^4\end{smallmatrix}\right].$$ From the feedback [\[eq:feedback\]](#eq:feedback){reference-type="eqref" reference="eq:feedback"}, we have that $U = (I_{2\ell}\otimes\Sigma^2) Y$, so the matrix $YU^\mathsf{T}= UY^\mathsf{T}$ is symmetric. For the term $M_1\otimes S$ of the multiplier, the quadratic form is $$\begin{aligned} \left[\begin{smallmatrix}Y \\ U\end{smallmatrix}\right]^\mathsf{T}\! (M_1\otimes S) \left[\begin{smallmatrix}Y \\ U\end{smallmatrix}\right] &= Y^\mathsf{T}S U - U^\mathsf{T}S Y \\ &= \mathop{\mathrm{\mathrm{tr}}}\bigl( S( UY^\mathsf{T}-YU^\mathsf{T})\bigr) = 0.\end{aligned}$$ Therefore, this term does not affect the inequality. Without loss of generality, we can take $M_1\otimes S$ to be symmetric, in which case $S$ is skew-symmetric. 
For the term $M_0\otimes R$ of the multiplier, the quadratic form is $$\begin{gathered} \left[\begin{smallmatrix}Y \\ U\end{smallmatrix}\right]^\mathsf{T}(M_0\otimes R) \left[\begin{smallmatrix}Y \\ U\end{smallmatrix}\right] = 2\,(U-\underline{\sigma}^2 Y)^\mathsf{T}R\,(\overline{\sigma}^2 Y - U) \\ = 2\,Y^\mathsf{T}\bigl((I\otimes\Sigma^2)-\underline{\sigma}^2 I\bigr)\, R\,\bigl(\overline{\sigma}^2 I-(I\otimes\Sigma^2)\bigr)\, Y.\end{gathered}$$ This quantity is nonnegative: for a one-dimensional channel we have $I\otimes\Sigma^2 = \sigma^2 I$ for some singular value $\sigma\in[\underline{\sigma},\overline{\sigma}]$, so the expression equals $2\,(\sigma^2-\underline{\sigma}^2)(\overline{\sigma}^2-\sigma^2)\,Y^\mathsf{T}R Y$, which is nonnegative since $R$ is positive semidefinite. Therefore, the inequality [\[eq:A-quadratic-inequality\]](#eq:A-quadratic-inequality){reference-type="eqref" reference="eq:A-quadratic-inequality"} holds for iterates that satisfy [\[eq:feedback\]](#eq:feedback){reference-type="eqref" reference="eq:feedback"} for all $M \in \mathcal{M}_\mathcal{A}(\underline{\sigma},\overline{\sigma})$.  ------------------------------------------------------------------------ [^1]: B. Van Scoy is with the Department of Electrical and Computer Engineering at Miami University, Oxford, OH 45056, USA. `bvanscoy@miamioh.edu` [^2]: J. W. Simpson-Porco is with the Department of Electrical and Computer Engineering at the University of Toronto, Toronto, ON, M5S 3G4, Canada. `jwsimpson@ece.utoronto.ca` [^3]: L. Lessard is with the Department of Mechanical and Industrial Engineering at Northeastern University, Boston, MA 02115, USA. `l.lessard@northeastern.edu`\ This material is based upon work supported by the National Science Foundation under Grant No. 2136945 and 2139482. [^4]: In the notation of [@SSD-WH:19], our set-up corresponds to the case where $f = 0$; our "$A$" is their "$A^{\sf T}$" and our "$f$" is their "$-g$". [^5]: Matrices $R$ satisfying these conditions are also called *diagonally hyperdominant with zero excess*.
[^6]: A bisection search was applied to determine the smallest $\rho$ for which the LMIs were feasible.
--- abstract: | Let $N(X)$ be the Laplacian matrix of a directed graph obtained from the edge adjacency matrix of a graph $X.$ In this work, we study the bipartiteness property of the graph with the help of $N(X).$ We compute the spectrum of the edge Laplacian matrix for regular graphs, complete bipartite graphs, and trees. Further, it is proved that given a graph $X,$ the characteristic polynomial of $N(X)$ divides the characteristic polynomial of $N(X^{\prime\prime}),$ where $X^{\prime\prime}$ denotes the Kronecker double cover of $X.$ [2010 Mathematics Subject Classification.]{.smallcaps} 05C05, 05C50, 05C76. [Keywords and phrases.]{.smallcaps} Laplacian matrix, edge adjacency matrix. address: - Department of Mathematics, Shiv Nadar Institution of Eminence, Dadri, U.P, 201314 - Department of Mathematics, Shiv Nadar Institution of Eminence, Dadri, U.P, 201314 author: - Shivani Chauhan - A. Satyanarayana Reddy title: Spectral properties of edge Laplacian matrix --- # Introduction The *Laplacian matrix* $\mathfrak{L}(X)$ or $\mathfrak{L}$ of a directed graph $X$ is defined as $\mathfrak{L}(X)=\mathfrak{D}(X)-A(X)$, where $A(X)$ is the adjacency matrix of $X$ and $\mathfrak{D}(X)$ is the diagonal matrix of the row sums of $A(X).$ In [@wu2005algebraic; @wu2005rayleigh], Chai Wah Wu defined the algebraic connectivity of directed graphs. It is useful in deriving synchronization criteria for arrays of coupled dynamical systems for both constant and time-varying coupling. In this work, we study the Laplacian matrix of a special class of directed graphs; to that end, we first define the edge adjacency matrix of a graph. There are other definitions of Laplacian matrices of directed graphs (see [@bapat1999generalized; @chung2005laplacians]).
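This definition is easy to make concrete; a minimal sketch (NumPy) computing $\mathfrak{L}$ for the directed $3$-cycle $0\to 1\to 2\to 0$, whose row sums vanish by construction, so $0$ is an eigenvalue:

```python
import numpy as np

# Adjacency matrix of the directed 3-cycle 0 -> 1 -> 2 -> 0.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
D = np.diag(A.sum(axis=1))   # diagonal matrix of the row sums of A
Lap = D - A                  # Laplacian of the directed graph

assert np.allclose(Lap.sum(axis=1), 0)     # zero row sums by construction
assert np.isclose(np.linalg.det(Lap), 0)   # hence 0 is an eigenvalue
```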
Let $X=(V(X),E(X))$ be a graph with $|V(X)|=n, \;|E(X)|=m.$ We orient the edges of $X$ arbitrarily, label them as $e_1,e_2,\ldots,e_m$ and also $e_{m+i}=e_i^{-1},\;1\le i\le m,$ where $e_k^{-1}$ denotes the edge $e_k$ with the direction reversed. Then the *edge adjacency matrix* of $X$, denoted by $M(X)$ or $M$ if $X$ is clear from the context, is defined as $$M_{ij}=\begin{cases} 1 & \mbox{if $t(e_i)=s(e_j)$\; and\; $s(e_i)\ne t(e_j)$,}\\ 0 & \mbox{otherwise,} \end{cases}$$ where $s(e_i)$ and $t(e_i)$ denote the starting and the terminal vertex of $e_i,$ respectively. For further information about the matrix $M$, one can refer to [@chauhan2023double; @horton2006ihara; @terras2010zeta]. The *edge Laplacian matrix* $N(X)$ or $N$ of a graph $X$ is defined as $N=D-M$, where $D$ is the diagonal matrix of the row sums of $M$, *i.e.,* the $i^{th}$ diagonal entry equals $deg(t(e_i))-1$, where $deg(v)$ denotes the degree of the vertex $v.$ Recall, a directed graph is *strongly connected* if for every ordered pair of vertices $(v, w)$, there exists a directed path from $v$ to $w.$ In Theorem 11.10 of [@terras2010zeta], it is proved that if $X$ is a connected graph that is not a cycle graph and does not contain a pendant vertex, then the matrix $M$ is irreducible. Therefore, the directed graph obtained from the matrix $M$ is strongly connected; hence $0$ is a simple eigenvalue of $N$. The process of computation of the matrix $N(C_3)$ is given in Figure [\[fig:ex N\]](#fig:ex N){reference-type="ref" reference="fig:ex N"}. In the next section, we describe the structure of the matrix $N$, which is useful to determine the bipartiteness property of the graph. Further, the spectrum of various families of graphs is provided, in particular for regular graphs, trees, and complete bipartite graphs.
Lastly, it is proved that $\phi_{N(X)}$ divides $\phi_{N(X^{\prime\prime})},$ where $\phi_A$ denotes the characteristic polynomial of a matrix $A$, $X^{\prime\prime}$ denotes the Kronecker product of a graph $X$ with $K_2$, and $K_n$ denotes the complete graph on $n$ vertices. Recall, the *Kronecker product* $X_1\times X_2$ of graphs $X_1$ and $X_2$ is a graph such that the vertex set is $V(X_1)\times V(X_2)$, and vertices $(x_1,x_2)$ and $(x_1^\prime,x_2^\prime)$ are adjacent if $x_1$ is adjacent to $x_1^\prime$ and $x_2$ is adjacent to $x_2^\prime.$ In particular, given a graph $X,$ $X \times K_2$ is called the *Kronecker double cover* of $X.$ From now onwards, the edge adjacency matrix of a graph is denoted by $M$ and the diagonal matrix of its row sums by $D.$

               $e_1$   $e_2$   $e_3$   $e_1^{-1}$   $e_2^{-1}$   $e_3^{-1}$
  ------------ ------- ------- ------- ------------ ------------ ------------
  $e_1$        1       -1      0       0            0            0
  $e_2$        0       1       -1      0            0            0
  $e_3$        -1      0       1       0            0            0
  $e_1^{-1}$   0       0       0       1            0            -1
  $e_2^{-1}$   0       0       0       -1           1            0
  $e_3^{-1}$   0       0       0       0            -1           1

# Spectrum of edge Laplacian matrix The collection of all the eigenvalues of a matrix together with its multiplicities is known as the *spectrum* of that matrix. If $\lambda_1>\lambda_2>\ldots>\lambda_k$ are the distinct eigenvalues of a matrix $A,$ then the spectrum of $A$ is defined by $$\begin{Bmatrix} \lambda_1 & \lambda_2 & \cdots & \lambda_k\\ m_1 & m_2 & \cdots & m_k \end{Bmatrix},$$ where $m_i$ is the algebraic multiplicity of eigenvalue $\lambda_i$ for $1\leq i\leq k.$ The matrix $N$ has the following structure $$\label{eqn :1} N=D-M=\left[\begin{array}{c|c} P & Q \\ \hline R & S \end{array} \right],$$ where $P,Q,R,S$ are $m\times m$ matrices. From the structure of the matrix $M,$ we have that $Q$ and $R$ are symmetric matrices. If $X$ is regular, then $P=S^T$ and $N^T=JNJ,$ where $J=\begin{pmatrix} 0 & \mathbb{I}_{m}\\ \mathbb{I}_{m} & 0 \end{pmatrix}$ and $\mathbb{I}_m$ denotes the identity matrix of order $m$.
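The construction of $M$ and $N$ is straightforward to reproduce in code. A minimal sketch (NumPy) for $C_3$, with the edges oriented around the cycle as in the table above, which also checks the identity $N^T=JNJ$ for this regular graph:

```python
import numpy as np

# Oriented edges of C_3 (e1, e2, e3) followed by their inverses.
edges = [(0, 1), (1, 2), (2, 0)]
edges += [(t, s) for (s, t) in edges]

k = 6
M = np.zeros((k, k), dtype=int)
for i, (si, ti) in enumerate(edges):
    for j, (sj, tj) in enumerate(edges):
        if ti == sj and si != tj:        # t(e_i) = s(e_j), no backtracking
            M[i, j] = 1
D = np.diag([2 - 1] * k)                 # deg(t(e_i)) - 1 = 1 since C_3 is 2-regular
N = D - M

expected = np.array([[ 1, -1,  0,  0,  0,  0],
                     [ 0,  1, -1,  0,  0,  0],
                     [-1,  0,  1,  0,  0,  0],
                     [ 0,  0,  0,  1,  0, -1],
                     [ 0,  0,  0, -1,  1,  0],
                     [ 0,  0,  0,  0, -1,  1]])
assert np.array_equal(N, expected)       # matches the tabulated N(C_3)

J = np.block([[np.zeros((3, 3), int), np.eye(3, dtype=int)],
              [np.eye(3, dtype=int), np.zeros((3, 3), int)]])
assert np.array_equal(J @ N @ J, N.T)    # N^T = JNJ since C_3 is regular
```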
The row and column sums of the matrix $N$ are zero for regular graphs, which implies that the cofactors of any two elements of $N$ are equal [@bapat2010graphs Lemma 4.2]. It is interesting to note that the sum of the blocks of $N$, namely $P+Q+R+S,$ is the Laplacian matrix of $L(X)$, where $L(X)$ denotes the line graph of $X$. As $$N=D-M=\begin{bmatrix} D_1 & 0\\ 0 & D_2 \end{bmatrix}- \begin{bmatrix} A & B\\ C & D \end{bmatrix},$$ the sum of the blocks of $M$ is equal to $A(L(X))$ [@horton2006ihara p.7] and $(D_1+D_2)_{ii}=deg(t(e_i))+deg(t(e_i^{-1}))-2,$ where $1\leq i\leq m$. It is well known from [@grone1990laplacian] that a graph $X$ is bipartite if and only if the Laplacian matrix and the signless Laplacian matrix of $X$ are similar. Our next result is analogous to this. **Theorem 1**. *A graph $X$ is bipartite if and only if $D+M$ and $D-M$ are similar, where $M$ is irreducible.* *Proof.* Suppose that $X$ is bipartite with $m$ edges. Let $V_1$ and $V_2$ be the parts of a bipartition of $V(X).$ Choose an orientation of $X$ in which each of the edges $e_1,e_2,\ldots,e_m$ is directed from $V_1$ to $V_2$; then $M$ can be expressed in the following form $$\label{eqn:bip} M=\left[\begin{array}{c|c} 0 & B\\ \hline C & 0 \end{array} \right].$$ It is simple to check that $P^T(D+M)P=D-M$, where $P=\begin{bmatrix} \mathbb{I}_m & 0\\ 0 & -\mathbb{I}_m \end{bmatrix}.$ Conversely, suppose that $D+M$ and $D-M$ are similar, where $M$ is irreducible. Since $D+M$ is similar to $N=D-M$, which has $0$ as an eigenvalue, $0$ is an eigenvalue of $D+M$. Let $x=(x_1,x_2,\ldots,x_{2m})$ be an eigenvector corresponding to the eigenvalue $0$ of $D+M$. Choose $x_i$ to be an entry of $x$ of maximum absolute value. As $((D+M)x)_i=0,$ we have $\sum_{j=1,i\neq j}^{2m}m_{ij}x_j+d_{ii}x_i=0$. We obtain $d_{ii}x_i=-\sum x_j,$ where the summation is over the edges into which $e_i$ feeds, and these edges are $d_{ii}$ in number.
By the maximality of $x_i,$ we have $x_i=-x_j$ for all edges $e_j$ into which $e_i$ feeds. Now, we explain the construction of the vertex partition. We put the vertex $e_i$ in one part, say $V_1$, and the vertices $e_j$ into which $e_i$ feeds in the other part $V_2$, with the edges directed from $e_i$ to the $e_j$'s. Again we choose an entry of maximum absolute value and continue the same process. This shows $M$ has the structure given in ([\[eqn:bip\]](#eqn:bip){reference-type="ref" reference="eqn:bip"}), hence $X$ is bipartite. ◻ We shall now present the spectrum for several graph families. The spectrum of a regular graph can be easily obtained from Theorem 2.2 in [@glover2021some]. Let $X$ be a $k$-regular graph on $n$ vertices; then the eigenvalues of $N$ are $$k-1\pm 1,k-1-\left(\frac{\lambda_i \pm \sqrt{\lambda_i^2-4(k-1)}}{2}\right),(i=1,\ldots,n)$$ where $\lambda_i \in spec(A(X))$ and $k-1\pm 1$ each have multiplicity $m-n.$ We compute the eigenvalues of $N$ for trees and complete bipartite graphs in Theorems [Theorem 3](#thm:inteigen){reference-type="ref" reference="thm:inteigen"} and [Theorem 5](#thm:cbi){reference-type="ref" reference="thm:cbi"}, respectively. In Table [\[tab:6vertices\]](#tab:6vertices){reference-type="ref" reference="tab:6vertices"}, the spectrum of $N(X)$ is given, where $X$ is a tree on $6$ vertices. We state Theorem [\[thm:nil\]](#thm:nil){reference-type="ref" reference="thm:nil"}, which is required in the proof of Theorem [Theorem 3](#thm:inteigen){reference-type="ref" reference="thm:inteigen"}. **Theorem 2**. *[@torres2020non Theorem 3.2] [\[thm:nil\]]{#thm:nil label="thm:nil"} Let $M$ be the edge adjacency matrix of a graph $X.$ Then $M$ is a nilpotent matrix if and only if $X$ is a forest.* **Theorem 3**. *Let $X$ be a connected graph. Then the eigenvalues of $N$ are the same as the eigenvalues of $D$ if and only if $X$ is a tree.* *Proof.* Suppose that $X$ is a tree on $m$ edges.
Let $\phi_D(\lambda)=\lambda^{2m}+d_1\lambda^{2m-1}+d_2\lambda^{2m-2}+\cdots+d_{2m-1}\lambda+d_{2m}$ and $\phi_N(\lambda)=\lambda^{2m}+n_1\lambda^{2m-1}+n_2\lambda^{2m-2}+\cdots+n_{2m-1}\lambda+n_{2m}$ be the characteristic polynomials of the matrices $D$ and $N,$ respectively. We want to show that $d_i=n_i$ for all $1\leq i \leq 2m.$ In order to prove the claim, we use the fact that the sum of the products of the eigenvalues of a matrix $A$ taken $k$ at a time equals the sum of the $k\times k$ principal minors of $A$ [@horn2012matrix p.53]. It is simple to observe that $d_1=n_1.$ The $2\times 2$ principal submatrices of $N$ have the following structure, $$B_2=\begin{pmatrix} d_{ii} & -m_{ij}\\ -m_{ji} & d_{jj} \end{pmatrix},$$ where $m_{ij}$ denotes the $ij^{th}$ entry of $M$. Since $X$ is a tree, the directed graph obtained from the matrix $M$ has no cycles, so $m_{ij}m_{ji}=0$ and hence $det(B_2)=d_{ii}d_{jj}$; therefore $d_2=n_2$. Let $B_k$ denote the $k\times k$ principal submatrix of $N$ whose rows and columns are indexed by $i_1<i_2<\ldots<i_k.$ It is well known that $$\label{eqn:det} det(A)=\sum_{\sigma\in S_n}sgn(\sigma)\prod_{i=1}^{n}a_{i\sigma(i)},$$ where the summation is over all permutations $\sigma$ of $\{1,2,\ldots,n\}$. As $X$ is a tree, the directed graph obtained from the matrix $M$ has no cycles. By ([\[eqn:det\]](#eqn:det){reference-type="ref" reference="eqn:det"}), $det(B_k)=d_{i_1i_1}d_{i_2i_2}\ldots d_{i_ki_k}$, which implies $d_k=n_k$. Conversely, suppose that $\phi_D(\lambda)=\phi_N(\lambda)$. Let $\phi_M(\lambda)=\lambda^{2m}+m_1\lambda^{2m-1}+m_2\lambda^{2m-2}+\ldots+m_{2m-1}\lambda+m_{2m}$. We want to show that $m_i=0$ for all $1\leq i \leq 2m.$ The result is proved using strong induction. Clearly, $m_1=m_2=0$. Assume that $m_i=0$ for all $1\leq i\leq k$.
As $m_{k+1}$ is the sum of the $(k+1) \times (k+1)$ principal minors of $M$, by ([\[eqn:det\]](#eqn:det){reference-type="ref" reference="eqn:det"}) $$\label{eqn:det1} m_{k+1}=\sum_{i_1<i_2<\ldots<i_{k+1}}\sum_{\sigma \in S_{k+1}}sgn(\sigma)\prod_{\ell=1}^{k+1}m_{i_{\ell}\sigma(i_{\ell})},$$ where $1\leq i_1,i_2,\ldots,i_{k+1}\leq 2m.$ In the inner summation of ([\[eqn:det1\]](#eqn:det1){reference-type="ref" reference="eqn:det1"}), by the induction hypothesis all the terms corresponding to permutations which are not a cycle of length $k+1$ are zero. Hence, we are left with the following expression $$m_{k+1}=\sum_{i_1<i_2<\ldots<i_{k+1}}\sum_{\sigma \in C}(-1)^{k}m_{i_1\sigma(i_1)}m_{i_2\sigma(i_2)}\ldots m_{i_{k+1}\sigma(i_{k+1})},$$ where $C$ denotes the set of cycles of length $k+1$ in $S_{k+1}$; each such cycle has sign $(-1)^{k}$, and the identity permutation contributes nothing since the diagonal of $M$ is zero. As the sum of the $(k+1)\times (k+1)$ principal minors of $N$ is equal to the sum of the $(k+1)\times (k+1)$ principal minors of $D$, we deduce the following expression $$\sum_{i_1<i_2<\ldots<i_{k+1}}\left(\prod_{j=1}^{k+1}n_{i_ji_j}+\sum_{\sigma\in S_{k+1},\sigma \neq e}sgn(\sigma)\prod_{\ell=1}^{k+1}n_{i_\ell\sigma(i_\ell)}\right)= \sum_{i_1<i_2<\ldots<i_{k+1}}\prod_{j=1}^{k+1}n_{i_ji_j}.$$ From the above equation we obtain $$\label{eqn:nil:2} \sum_{i_1<i_2<\ldots<i_{k+1}}\sum_{\sigma \in S_{k+1},\sigma\neq e}sgn(\sigma)\prod_{\ell=1}^{k+1}n_{i_\ell\sigma(i_\ell)}=0.$$ As $N=D-M$, $n_{i_\ell \sigma(i_\ell)}=\begin{cases} d_{i_\ell i_\ell} & \mbox{if $\sigma(i_\ell)=i_\ell$}\\ -m_{i_\ell\sigma(i_\ell)} & \mbox{if $\sigma(i_\ell)\neq i_\ell$} \end{cases}$. In the inner summation of ([\[eqn:nil:2\]](#eqn:nil:2){reference-type="ref" reference="eqn:nil:2"}), by the induction hypothesis all the terms vanish except those corresponding to full cycle permutations.
Equation ([\[eqn:nil:2\]](#eqn:nil:2){reference-type="ref" reference="eqn:nil:2"}) reduces to $$\sum_{i_1<i_2<\cdots<i_{k+1}}\sum_{\sigma \in C}(-1)^{k}(-1)^{k+1}m_{i_1\sigma(i_1)}m_{i_2\sigma(i_2)}\cdots m_{i_{k+1}\sigma(i_{k+1})}=0.$$ Hence $m_{k+1}=0$, and by Theorem [\[thm:nil\]](#thm:nil){reference-type="ref" reference="thm:nil"} the proof is complete. ◻ **Corollary 4**. *Let $X$ be a tree on $n$ vertices and let $(d_1,d_2,\ldots,d_n)$ be the degree sequence of $X$. Then $d_i-1$ is an eigenvalue of $N$ with multiplicity $d_i$, where $1\leq i\leq n$.* We now define the Kronecker product of matrices and the Schur complement, as they are required to prove our next result. The Kronecker product of the matrices $A=[a_{ij}]$ and $B$ is defined as the partitioned matrix $[a_{ij}B]$ and is denoted by $A\otimes B.$ Let $H$ be an $n\times n$ matrix partitioned as $H=\begin{bmatrix} A_{11} & A_{12}\\ A_{21} & A_{22} \end{bmatrix},$ where $A_{11}$ and $A_{22}$ are square matrices. If $A_{11}$ is non-singular, then the Schur complement of $A_{11}$ in $H$ is defined to be the matrix $A_{22}-A_{21}A_{11}^{-1}A_{12}.$ For the Schur complement we have $det(H)=det(A_{11})det(A_{22}-A_{21}A_{11}^{-1}A_{12})$, and if $A_{11}A_{12}=A_{12}A_{11}$, then $det(H)=det(A_{22}A_{11}-A_{21}A_{12})$. Similarly, if $A_{22}$ is non-singular, then the Schur complement of $A_{22}$ in $H$ is defined to be the matrix $A_{11}-A_{12}A_{22}^{-1}A_{21}$, and we obtain $det(H)=det(A_{22})det(A_{11}-A_{12}A_{22}^{-1}A_{21})$. If $A_{21}A_{22}=A_{22}A_{21}$, then $det(H)=det(A_{11}A_{22}-A_{12}A_{21})$. **Theorem 5**. *Let $X=K_{p,q}$ be the complete bipartite graph on $n=p+q$ vertices.
Then the spectrum of $N(K_{p,q})$ is equal to $$\begin{Bmatrix} 0 & u & \frac{u\pm \sqrt{v+4}}{2} & \frac{u\pm \sqrt{v+4(1-q)}}{2} & \frac{u\pm \sqrt{v+4(1-p)}}{2}\\ 1 &1 & (q-1)(p-1) & p-1 & q-1 \end{Bmatrix},$$ where $u=p+q-2$ and $v=(p-q)^2$.* *Proof.* Let $V(K_{p,q})=\{v_1,v_2,\ldots,v_p\}\cup \{v_1^\prime,v_2^\prime,\ldots,v_q^\prime\}$ be a bipartition of $X$. Choosing the orientation defined in the proof of Theorem [Theorem 1](#thm:similar){reference-type="ref" reference="thm:similar"}, $N(K_{p,q})$ can be written in the following form $$N(K_{p,q})=\begin{bmatrix} (p-1)(U) & \mathbb{J}_p\otimes(-\mathbb{I}_q)+U\\ V & (q-1)(U) \end{bmatrix},$$ where $U=\mathbb{I}_p \otimes \mathbb{I}_q$, $V=\mathbb{I}_p\otimes(-A(K_q))$, and $\mathbb{J}_n$ denotes the matrix of order $n$ with all entries one. The characteristic polynomial of $N(X)$ is $$\phi_{N}(x)=\begin{vmatrix} (p-1-x)(U) & \mathbb{J}_p\otimes(-\mathbb{I}_q)+U\\ V & (q-1-x)(U) \end{vmatrix}.$$ A simple check shows that $$V((q-1-x)(U))=((q-1-x)(U))V=\mathbb{I}_p\otimes((x-q+1)A(K_q)).$$ By the Schur complement formula, we obtain $$\begin{split} \phi_N(x)&=det(((p-1-x)U)((q-1-x)U)-(\mathbb{J}_p\otimes (-\mathbb{I}_q) +U)V)\\ &=det((p-1-x)(q-1-x)(U)-\mathbb{J}_p\otimes A(K_q)-V).\\ \end{split}$$ For the sake of convenience, set $$S^\prime=(p-1-x)(q-1-x)(U)-\mathbb{J}_p\otimes A(K_q)-V,$$ $$T^\prime=(1-p)A(K_q)+(p-x-1)(q-x-1)\mathbb{I}_q,\qquad U^\prime=A(K_q)+(p-x-1)(q-x-1)\mathbb{I}_q.$$ If $\lambda$ is an eigenvalue of $T^\prime$ with corresponding eigenvector $v$, then $S^\prime w=\lambda w,$ where $w=(v,v,\ldots,v)^T$ is built from $p$ copies of $v$.\ Let $\lambda^\prime$ be an eigenvalue of $U^\prime$ with corresponding eigenvector $v^\prime$.
By a straightforward calculation, one can see that the linearly independent vectors $$\{(v^\prime,-v^\prime,0,0,\ldots,0)^T, (v^\prime,0,-v^\prime,0,\ldots,0)^T,\ldots,(v^\prime, 0,0,0,\ldots,-v^\prime)^T \}$$ are eigenvectors of $S^\prime$ corresponding to $\lambda^\prime$. Since the sum of the multiplicities of the eigenvalues obtained in this way equals $2m$, the number of vertices of the directed graph obtained from $M$, we conclude that $\phi_N(x)=(det(U^\prime))^{p-1}det(T^\prime).$ As $A(K_q)=\mathbb{J}_q-\mathbb{I}_q$, and the eigenvalues of $\mathbb{J}_q$ are $0$ and $q$ with multiplicities $q-1$ and $1$, respectively, the eigenvalues of $N$ can be deduced. ◻ The following result about the eigenvalues of symmetric block matrices will be used to prove the next result. **Theorem 6**. *[@davis2013circulant][\[thm:unionspectra\]]{#thm:unionspectra label="thm:unionspectra"} Let $H=\begin{bmatrix} A^\prime & B^\prime\\ B^\prime & A^\prime \end{bmatrix}$ be a symmetric $2\times 2$ block matrix, where $A^\prime$ and $B^\prime$ are square matrices of the same order. Then the spectrum of $H$ is the union of the spectra of $A^\prime+B^\prime$ and $A^\prime-B^\prime$.* **Theorem 7**. *Let $X$ be a connected graph. Then $\phi_{N(X)}$ divides $\phi_{N(X^{\prime\prime})}$.* *Proof.* We know that $N(X)=D(X)-M(X)=\begin{pmatrix} D_1-A & -B\\ -C & D_2-D \end{pmatrix}$.
From the proof of Theorem 4.2 in Horton [@horton2006ihara], $M(X^{\prime\prime})=\begin{pmatrix} 0 & MJ\\ JM & 0 \end{pmatrix},$ and therefore $$N(X^{\prime\prime})=D(X^{\prime\prime})-M(X^{\prime\prime})=\begin{pmatrix} D_1 & 0 & -B & -A\\ 0 & D_2 & -D & -C\\ -C & -D & D_2 & 0\\ -A & -B & 0 & D_1 \end{pmatrix} .$$ Note that $P^TN(X^{\prime\prime})P=\begin{pmatrix} D_1 & 0 & -A & -B\\ 0 & D_2 & -C & -D\\ -A & -B & D_1 & 0\\ -C & -D & 0 & D_2 \end{pmatrix},$ where $P=\begin{pmatrix} I_m & 0 & 0 & 0\\ 0 & I_m & 0 & 0\\ 0 & 0 & 0 & I_m\\ 0 & 0 & I_m & 0 \end{pmatrix}.$ The result follows from Theorem [\[thm:unionspectra\]](#thm:unionspectra){reference-type="ref" reference="thm:unionspectra"}. ◻ # Acknowledgement The authors would like to thank the handling editor and the anonymous reviewers for their careful reading of the manuscript. R.B. Bapat, *Graphs and matrices*, volume 27, Springer, (2010). R.B. Bapat, J.W. Grossman and D.M. Kulkarni, *Generalized matrix tree theorem for mixed graphs*, Linear and Multilinear Algebra, 46(40): 299-312, (1999). S. Chauhan and A.S. Reddy, *On the double covers of a line graph*, arXiv preprint arXiv:2202.04756v2, (2023). F. Chung, *Laplacians and the Cheeger inequality for directed graphs*, Annals of Combinatorics, 9(1): 1-19, (2005). P.J. Davis, *Circulant matrices*, American Mathematical Soc., (2013). C. Glover and M. Kempton, *Some spectral properties of the non-backtracking matrix of a graph*, Linear Algebra and its Applications, 618: 37-57, (2021). R. Grone, R. Merris and V. Sunder, *The Laplacian spectrum of a graph*, SIAM Journal on Matrix Analysis and Applications, 11(2): 218-238, (1990). R.A. Horn and C.R. Johnson, *Matrix analysis*, Cambridge University Press, (2012). M.D. Horton, *Ihara zeta functions of irregular graphs*, University of California, San Diego, (2006). A. Terras, *Zeta functions of graphs: a stroll through the garden*, volume 128, Cambridge University Press, (2010). L. Torres, *Non-backtracking spectrum: Unitary eigenvalues and diagonalizability*, arXiv preprint arXiv:2007.13611, (2020). C.W. Wu, *Algebraic connectivity of directed graphs*, Linear and Multilinear Algebra, 53(3): 203-223, (2005). C.W. Wu, *On Rayleigh--Ritz ratios of a generalized Laplacian matrix of directed graphs*, Linear Algebra and its Applications, 402: 207-227, (2005).
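As a closing illustration of the edge Laplacian results above, the bipartiteness criterion of Theorem 1 is easy to confirm numerically from the dart (oriented-edge) construction. The following Python sketch is our own illustrative code (the constructor, all names, and the NumPy-based spectrum comparison are assumptions of this example, not from the paper):

```python
import numpy as np

def edge_matrices(edges):
    """Return (D, M) for a simple graph given as a list of undirected edges:
    M[i][j] = 1 iff dart i feeds into dart j without backtracking, and
    D[i][i] = deg(t(e_i)) - 1, as in the text."""
    darts = [d for (u, v) in edges for d in ((u, v), (v, u))]
    deg = {}
    for (u, _) in darts:
        deg[u] = deg.get(u, 0) + 1          # out-darts at u = degree of u
    M = np.zeros((len(darts), len(darts)))
    for i, (u, v) in enumerate(darts):
        for j, (x, y) in enumerate(darts):
            if v == x and (x, y) != (v, u):  # dart i feeds dart j, no backtracking
                M[i, j] = 1
    D = np.diag([deg[v] - 1 for (_, v) in darts])
    return D, M

def spectrum(A):
    # Round before sorting so cospectral matrices sort identically.
    return np.sort_complex(np.round(np.linalg.eigvals(A), 8))

# Bipartite 4-cycle: D + M and D - M are similar, hence cospectral.
D, M = edge_matrices([(0, 1), (1, 2), (2, 3), (3, 0)])
assert np.allclose(spectrum(D + M), spectrum(D - M))

# Non-bipartite triangle: the two spectra differ.
D, M = edge_matrices([(0, 1), (1, 2), (0, 2)])
assert not np.allclose(spectrum(D + M), spectrum(D - M))
```

For the 4-cycle the two spectra coincide, while for the triangle they do not, matching the similarity criterion of Theorem 1.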
--- abstract: | For each odd prime power $q$, and each integer $k$, we determine the sum of the $k$-th powers of all elements $x\in\mathbb{F}_q$ for which both $x$ and $x+1$ are squares in $\mathbb{F}_q^*$. We also solve the analogous problem when one or both of $x$ and $x+1$ is a nonsquare. We use these results to determine the sum of the elements of the image set $f(\mathbb{F}_q)$ for each $f(X)\in\mathbb{F}_q[X]$ of the form $X^4+aX^2+b$, which resolves two conjectures by Finch-Smith, Harrington, and Wong. address: - Department of Mathematics, University of Michigan, 530 Church Street, Ann Arbor, MI 48109-1043 USA - Department of Mathematics, University of Michigan, 530 Church Street, Ann Arbor, MI 48109-1043 USA author: - Zhiguo Ding - Michael E. Zieve title: On sums of powers of consecutive squares over finite fields, and sums of distinct values of polynomials --- # Introduction Let $p$ be an odd prime. The problem of counting or estimating the number of tuples of $m$ consecutive squares in $(\mathbb{Z}/p\mathbb{Z})^*$ has a rich history, with contributions from Aladov [@A], Jacobsthal [@J], Davenport [@D1; @D2; @D3], Weil [@W], and others. It is natural to seek further information about $m$-tuples of consecutive squares, beyond merely their number. In this paper we determine the moments of the set of initial elements of such tuples when $m=2$. That is, we determine the sum of the $k$-th powers of all elements $x\in\mathbb{Z}/p\mathbb{Z}$ for which both $x$ and $x+1$ are nonzero squares, for any prescribed integer $k$. In fact, we solve the analogous problem where $\mathbb{Z}/p\mathbb{Z}$ is replaced by an arbitrary finite field $\mathbb{F}_q$ of odd order, in which case we write $\chi(x)$ for the quadratic character on $\mathbb{F}_q$, which is defined by $\chi(x):=x^{(q-1)/2}$ for any $x\in\mathbb{F}_q$. Thus $\chi(x)=1$ if $x$ is a nonzero square in $\mathbb{F}_q$, $\chi(x)=-1$ if $x$ is a nonsquare in $\mathbb{F}_q$, and $\chi(0)=0$. **Theorem 1**. 
*Let $q$ be an odd prime power and pick $a,b\in\mathbb{F}_q^*$. Write $$D\colonequals \{x\in\mathbb{F}_q\colon\chi(x)=\chi(a) \text{ and }\chi(x+1)=\chi(b)\}.$$ Let $k$ be an integer not divisible by $(q-1)/2$, and let $\ell$ and $\varepsilon$ be the unique integers such that $k\equiv \varepsilon \ell \pmod{q-1}$,  $\varepsilon\in\{1,-1\}$, and $0<\ell<(q-1)/2$. Then $$\sum_{x\in D} x^k = \frac{1+\chi(-a)+ 2^{-\ell}\cdot\chi(c)\cdot\sum_{i=0}^{\lfloor \ell/2\rfloor} 4^{-i}\binom{\ell}{2i}\binom{2i}{i}}{4\cdot (-1)^{\ell+1}},$$ where $c:=ab$ if $\varepsilon=1$ and $c:=b$ if $\varepsilon=-1$. In particular, if $q>3$ then $$\sum_{x\in D}x = \frac{1+\chi(-a)}4+\frac{\chi(ab)}8,$$ and if $q>5$ then $$\sum_{x\in D}x^2 = \frac{-1-\chi(-a)}4-3\cdot \frac{\chi(ab)}{32}.$$* **Remark 1**. The analogous result when $(q-1)/2$ divides $k$ is easy. For, if $k=m(q-1)/2$ with $m\in\mathbb{Z}$ then $\sum_{x\in D}x^k = \chi(a)^m\cdot\lvert D \rvert$, and it is well-known that $4\lvert D \rvert=q-2-\chi(-a)-\chi(b)-\chi(ab)$ (cf. Remark [Remark 1](#consec){reference-type="ref" reference="consec"}). More generally, it seems natural to study the sum of the values of some simple function (such as the identity function, or powers of the identity) on some "nice" subset of $\mathbb{F}_q$ (such as the set of elements $x$ in $\mathbb{F}_q$ for which $\chi(x)=\chi(x+1)=1$). The case that the function is the constant function $1$ amounts to determining the size of the set, so the classical question of determining the number of consecutive squares mod $p$, or determining the sizes of other natural subsets of $\mathbb{F}_p$, may be viewed as special cases of this general class of questions. Another instance of this general class of questions dates back to work of Stern [@Stern], who determined the sum of the "triangular" elements of $\mathbb{F}_p$ for any odd prime $p$, namely the elements of $\mathbb{F}_p$ which can be written as $(x^2+x)/2$ for some $x\in\mathbb{F}_p$. 
Shorter proofs of Stern's results were given in modern language by Stetson [@Stetson] and Gross et al. [@GHM], with the latter authors also solving the same question with $(x^2+x)/2$ replaced by any quadratic polynomial $ax^2+bx+c$. More generally, for any $f(X)\in\mathbb{F}_q[X]$, let $f(\mathbb{F}_q)$ be the value set $\{f(x):x\in\mathbb{F}_q\}$, and let $S(f)$ be the sum of the elements of $f(\mathbb{F}_q)$. When $q$ is prime and $\deg(f)=3$, the quantity $S(f)$ was determined by Finch-Smith et al. [@FHW], who also proposed two conjectures about the degree-$4$ case. We will use Theorem [Theorem 1](#main){reference-type="ref" reference="main"} to prove generalizations of these two conjectures. **Theorem 1**. *For any odd prime power $q$ with $q>5$, and any $a,b\in\mathbb{F}_q$ with $a\ne 0$, the quantity $S(X^4+aX^2+b)$ equals $$-\Bigl( \frac{4+\chi(-1)+4\chi(-2a)}{64} \Bigr) \cdot a^2 + \Bigl( \frac{4+\chi(-1)-2\chi(-a)+2\chi(-2a)}{8} \Bigr) \cdot b.$$* Theorem [Theorem 1](#conj3){reference-type="ref" reference="conj3"} is a generalized and corrected version of Conjecture 3 of [@FHW]. For completeness, we note that $S(X^4+b)$ is determined in Proposition [Proposition 1](#power){reference-type="ref" reference="power"}, and also that if $q$ is even then $S(X^4+aX^2+b)=S(X^2+aX+b)$ is determined in Proposition [Proposition 1](#3){reference-type="ref" reference="3"}. As a consequence of Theorem [Theorem 1](#conj3){reference-type="ref" reference="conj3"}, we determine the collection of possibilities for $S(X^4+aX^2)$ when $a$ varies over $\mathbb{F}_q$. **Corollary 1**. *For any odd prime power $q$ with $q>5$, the set $$T \colonequals \{ S(X^4+aX^2) : a \in \mathbb{F}_q\}$$ equals* 1. *all of $\mathbb{F}_q$ if $q\equiv 3 \pmod 4$ and $q\equiv 3, 5 \text{ or } 6 \pmod 7$;* 2. *the set of squares in $\mathbb{F}_q$ if either $q\equiv 1\pmod{12}$ or both $q\equiv 3\pmod 4$ and $q\equiv 0, 1, 2 \text{ or } 4 \pmod 7$;* 3. 
*the set of fourth powers in $\mathbb{F}_q$ if $q\equiv 5 \text{ or } 21 \pmod {24}$;* 4. *the set of squares in $\mathbb{F}_q$ which are not fourth powers in $\mathbb{F}_q^*$ if $q\equiv 9 \text{ or } 17 \pmod {24}$.* *Moreover, if $q\equiv 3 \pmod 4$ then $S(X^4+8X^2) =1$.* Corollary [Corollary 1](#conj2){reference-type="ref" reference="conj2"} is a generalized, corrected, and sharpened version of Conjecture 2 of [@FHW]. For completeness, we provide a relatively simple result determining $S(f)$ when $\deg(f)\le 3$. **Proposition 1**. *For any prime power $q$ with $q>4$, and any $a,b,c,d\in\mathbb{F}_q$ with $a\ne 0$, we have* 1. *$S(aX+b)=0$;* 2. *$S(aX^2+bX+c)$ equals* - *$(4ac-b^2)/(8a)$ if $q$ is odd;* - *$0$ if $q$ is even;* 3. *$S(aX^3+bX^2+cX+d)$ equals* - *$\lfloor (2q+1)/3\rfloor\cdot (27a^2d-9abc+2b^3)/(27a^2)$ if $b^2\ne 3ac$ and $3\nmid q$;* - *$0$ if $b^2=3ac$ and $q\equiv 2\pmod 3$;* - *$2(27a^2d-b^3)/(81a^2)$ if $b^2=3ac$ and $q\equiv 1\pmod 3$;* - *$0$ if $27\mid q$;* - *$-b^3/a^2$ if $q=9$.* Proposition [Proposition 1](#3){reference-type="ref" reference="3"} subsumes and generalizes [@FHW Thm. 2], [@GHM Thms. 1.2 and  3.2], [@Stern Satz in §4], and [@Stetson Thm. I]. It turns out that Proposition [Proposition 1](#3){reference-type="ref" reference="3"} follows easily from known results when $\mathbb{F}_q$ has characteristic at least $5$, but further arguments are required in characteristics $2$ and $3$. We conclude this introduction by listing some classes of polynomials $f(X)\in\mathbb{F}_q[X]$ of arbitrary degree for which it is easy to compute $S(f)$. **Proposition 1**. *For any prime power $q$, let $f(X)=aX^n+b$ for some positive integer $n$ and some $a,b\in\mathbb{F}_q$ with $a\ne 0$. Then* 1. *$S(f) = a+2b$ if $(q-1)\mid n$;* 2. *$S(f) = b \cdot \Bigl(1-\frac{1}{\gcd(n,q-1)}\Bigr)$ if $(q-1)\nmid n$.* **Proposition 1**. 
*For any prime power $q$, let $f(X)=X^r B(X^s)$ for some $B(X)\in\mathbb{F}_q[X]$, some positive divisor $s$ of $q-1$ and some positive integer $r$ with $s\nmid r$. Then $S(f)=0$.* **Corollary 1**. *For any $f(X)$ as in Proposition [Proposition 1](#odd){reference-type="ref" reference="odd"}, and any $a\in\mathbb{F}_q$, if $g(X):=f(X)+a$ then $S(g)=a\cdot\lvert f(\mathbb{F}_q) \rvert$.* The expression in Corollary [Corollary 1](#oddplus){reference-type="ref" reference="oddplus"} becomes explicit for any class of polynomials $f(X)$ in Proposition [Proposition 1](#odd){reference-type="ref" reference="odd"} for which one knows an explicit formula for $\lvert f(\mathbb{F}_q) \rvert$. **Remark 1**. Proposition [Proposition 1](#power){reference-type="ref" reference="power"} generalizes and corrects [@FHW Thm. 3]. Proposition [Proposition 1](#odd){reference-type="ref" reference="odd"} improves [@FHW Thm. 5] in multiple ways. This paper is organized as follows. In the next section we prove Theorem [Theorem 1](#main){reference-type="ref" reference="main"}. Then in Section [3](#sec3){reference-type="ref" reference="sec3"} we prove Theorem [Theorem 1](#conj3){reference-type="ref" reference="conj3"} and Corollary [Corollary 1](#conj2){reference-type="ref" reference="conj2"}, and in Section [4](#sec4){reference-type="ref" reference="sec4"} we prove Propositions [Proposition 1](#3){reference-type="ref" reference="3"}--[Proposition 1](#odd){reference-type="ref" reference="odd"} and Corollary [Corollary 1](#oddplus){reference-type="ref" reference="oddplus"}. # Proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"} {#proof-of-theorem-main} *Proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"}.* Since the last sentence in Theorem [Theorem 1](#main){reference-type="ref" reference="main"} consists of the instances $k=1$ and $k=2$ of the next-to-last sentence, it suffices to prove the next-to-last sentence. 
Moreover, it suffices to prove the result when $k=\ell$, since $x^k=x^{\varepsilon\ell}$ for $x\in D$, and also $\sum_{x\in D}x^{-\ell}=\sum_{x\in D}(x^{-1})^{\ell}$ and $$\{x^{-1}:x \in D\}=\{x\in\mathbb{F}_q:\chi(x)=\chi(a) \text{ and }\chi(x+1)=\chi(ab)\}.$$ Henceforth we assume that $k=\ell$, so that $0<k<(q-1)/2$. We now show that it suffices to treat the case $b=a$. Pick $\zeta\in\mathbb{F}_q^*$ of order $(q-1)/2$. Then $\chi(\zeta)=1$, and since $0<k<(q-1)/2$ we have $\zeta^k\ne 1$. Since $Z:=\{x\in\mathbb{F}_q:\chi(x)=\chi(a)\}$ is preserved by multiplying by $\zeta$, the sum $S:=\sum_{x\in Z} x^k$ satisfies $$S = \sum_{x\in Z} (\zeta x)^k = \zeta^k S,$$ so that $S=0$. Note that $Z$ is the union of the pairwise disjoint sets $D$, $E$, and $F$, where $$E:=\{x\in\mathbb{F}_q:\chi(x)=\chi(a) \text{ and } \chi(x+1)=-\chi(b)\}$$ and $$F:=\{x\in\mathbb{F}_q:\chi(x)=\chi(a) \text{ and }\chi(x+1)=0\}.$$ The equality $S=0$ says that $$\sum_{x\in E} x^k = - \sum_{x\in D} x^k - \sum_{x\in F} x^k.$$ Note that $F=\{-1\}$ if $\chi(-1)=\chi(a)$, and $F$ is empty if $\chi(-1)=-\chi(a)$. Thus in any case $\sum_{x\in F} x^k = (-1)^k \bigl(1+\chi(-a)\bigr)/2$, so that $$\sum_{x\in E} x^k = - \sum_{x\in D} x^k - (-1)^k\frac{1+\chi(-a)}2.$$ In light of this, Theorem [Theorem 1](#main){reference-type="ref" reference="main"} is true for all pairs $(a,b)$ if it is true for the special pairs $(a,a)$. Henceforth suppose $b=a$, and write $$C\colonequals \{ (u,v) : u,v \in \mathbb{F}_q^*,\, u^2-v^2=a \}.$$ Let $\Lambda$ be the set of elements $t\in\mathbb{F}_q^*$ for which $t^4=a^2$, or equivalently $t^2\in\{a,-a\}$. For $t \in \mathbb{F}_q^*\setminus \Lambda$, define $$\theta(t) \colonequals \Bigl( \frac{t^2+a}{2t}, \frac{t^2-a}{2t} \Bigr).$$ We now show that $\theta$ induces a bijection $\mathbb{F}_q^*\setminus\Lambda\to C$. 
Plainly $\theta(t)\in C$, since $$\frac{(t^2+a)^2}{4t^2} - \frac{(t^2-a)^2}{4t^2} = a.$$ Next, the sum of the two components of $\theta(t)$ is $t$, so that $\theta$ is injective on $\mathbb{F}_q^*\setminus \Lambda$. Finally, we show that $\theta$ induces a surjection from $\mathbb{F}_q^* \setminus \Lambda$ onto $C$: for any $(u,v) \in C$, pick $t \in \mathbb{F}\llap{$\overline{\phantom{\rm\mathbb{F}}}$}_q$ with $t^2 - a = 2vt$. Plainly $t\ne 0$, so that $v=(t^2-a)/(2t)$. Then $u^2 = v^2 + a = ( (t^2+a)/(2t) )^2$, so that $u = \pm (t^2+a)/(2t)$. Note that $\hat{t}\colonequals -a/t$ satisfies $(\hat{t}^2-a)/(2\hat{t}) = v$ and $(\hat{t}^2+a)/(2\hat{t}) = -(t^2+a)/(2t)$, so upon replacing $t$ by $\hat{t}$ if necessary we may assume that $v=(t^2-a)/(2t)$ and $u=(t^2+a)/(2t)$, whence $t^2\ne \pm a$ and $u+v=t$, so that $t\in\mathbb{F}_q^*\setminus \Lambda$ and $(u,v)=\theta(t)$. Thus $\theta$ defines a bijection $\mathbb{F}_q^* \setminus \Lambda \to C$. Next we show that $\phi\colon (u,v)\mapsto v^2/a$ defines a surjective function $C\to D$ in which each element of $D$ has exactly four preimages. If $(u,v)\in C$ then $x\colonequals v^2/a$ satisfies $x+1=u^2/a$ and $\chi(x)=\chi(a)=\chi(x+1)$, so that $x\in D$. Conversely, if $x\in D$ then $\phi^{-1}(x)$ consists of the pairs $(u,v)$ of elements of $\mathbb{F}_q^*$ such that $u^2-v^2=a$ and $v^2/a=x$. Since $\chi(ax)=1$, the equality $v^2/a=x$ holds for exactly two elements $v \in \mathbb{F}_q^*$, and for each such $v$ we have $v^2+a=ax+a=a(x+1)$ so that $\chi(v^2+a)=\chi(a(x+1))=1$, so the equality $u^2=v^2+a$ holds for exactly two elements $u \in \mathbb{F}_q^*$. Thus $\lvert \phi^{-1}(x) \rvert=4$, so that $\phi$ defines a surjective $4$-to-$1$ function $C\to D$. 
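The two correspondences just constructed, and the closed form they lead to, can be confirmed by brute force over a small prime field. The following Python sketch is our own illustration (the choices $q=13$ and $a=b=1$, and all helper names, are ours, not from the paper):

```python
# Brute-force check over F_q of the maps theta and phi, and of the closed
# form in Theorem 1 (here b = a, so chi(ab) = 1).  q = 13, a = 1 are
# illustrative choices.
from math import comb

q, a = 13, 1

def chi(x):
    """Quadratic character chi(x) = x^((q-1)/2), returned as 0, 1 or -1."""
    r = pow(x % q, (q - 1) // 2, q)
    return r - q if r == q - 1 else r

D = [x for x in range(q) if chi(x) == chi(a) and chi(x + 1) == chi(a)]
C = [(u, v) for u in range(1, q) for v in range(1, q)
     if (u*u - v*v) % q == a % q]
Lam = [t for t in range(1, q) if pow(t, 4, q) == pow(a, 2, q)]

inv2, inv4 = pow(2, q - 2, q), pow(4, q - 2, q)

# theta(t) = ((t^2+a)/(2t), (t^2-a)/(2t)) is a bijection F_q^* \ Lambda -> C.
theta = {}
for t in range(1, q):
    if t not in Lam:
        ti = pow(t, q - 2, q)
        theta[t] = ((t*t + a) * ti * inv2 % q, (t*t - a) * ti * inv2 % q)
assert sorted(theta.values()) == sorted(C)

# phi(u, v) = v^2/a is exactly 4-to-1 from C onto D.
ai = pow(a, q - 2, q)
count = {x: 0 for x in D}
for (u, v) in C:
    count[v*v*ai % q] += 1
assert set(count.values()) == {4}

# Closed form of Theorem 1 versus the direct sum, for all 0 < k < (q-1)/2.
for k in range(1, (q - 1) // 2):
    direct = sum(pow(x, k, q) for x in D) % q
    s = sum(pow(inv4, i, q) * comb(k, 2*i) * comb(2*i, i)
            for i in range(k // 2 + 1))
    num = (1 + chi(-a) + pow(inv2, k, q) * s) % q
    den = (4 * (-1) ** (k + 1)) % q
    assert direct == num * pow(den, q - 2, q) % q
```

All assertions pass for this choice of $q$ and $a$; other small odd primes behave the same way.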
Now we compute $$\label{fim1} 4 \sum_{x\in D} x^k = \sum_{t \in \mathbb{F}_q^* \setminus \Lambda} \phi(\theta(t))^k = \sum_{t\in\mathbb{F}_q^*\setminus \Lambda} \frac{(t^2/a-2+a/t^2)^k}{4^k}.$$ For any integer $i$, the value of $\sum_{t\in\mathbb{F}_q^*} t^i$ is $-1$ if $(q-1)\mid i$ and $0$ otherwise. Since $0<2k<q-1$, it follows that $\sum_{t\in\mathbb{F}_q^*} (t^2/a-2+a/t^2)^k$ equals the negative of the constant term of the Laurent polynomial $$\begin{aligned} \Bigl(\frac{X^2}a-2+\frac{a}{X^2}\Bigr)^k &= \sum_{j=0}^k \binom{k}{j} (-2)^{k-j} \Bigl(\frac{X^2}a+\frac{a}{X^2}\Bigr)^j \\ &= \sum_{j=0}^k \binom{k}{j} (-2)^{k-j} \sum_{i=0}^j \binom{j}{i} a^{j-2i} X^{4i-2j} \\ &= \sum_{i=0}^k \sum_{j=i}^k \binom{k}{j} \binom{j}{i} (-2)^{k-j} a^{j-2i} X^{4i-2j}.\end{aligned}$$ This constant term is $$\sum_{i=0}^{\lfloor k/2\rfloor} \binom{k}{2i} \binom{2i}{i} (-2)^{k-2i},$$ so that $$4^{k+1}\sum_{x\in D} x^k = - \sum_{i=0}^{\lfloor k/2\rfloor} \binom{k}{2i} \binom{2i}{i} (-2)^{k-2i} -\sum_{t\in\Lambda} \Bigl(\frac{t^2}a-2+\frac{a}{t^2}\Bigr)^k.$$ Each $t\in\Lambda$ satisfies $t^2=\pm a$. If $t^2=a$ then $t^2/a-2+a/t^2=0$, and if $t^2=-a$ then $t^2/a-2+a/t^2=-4$. Since in addition there are $1+\chi(-a)$ elements $t\in\Lambda$ for which $t^2=-a$, we conclude that $$\sum_{t\in\Lambda} \Bigl(\frac{t^2}a-2+\frac{a}{t^2}\Bigr)^k = \bigl(1+\chi(-a)\bigr) (-4)^k.$$ Thus $$4^{k+1}\sum_{x\in D} x^k = - \sum_{i=0}^{\lfloor k/2\rfloor} \binom{k}{2i} \binom{2i}{i} (-2)^{k-2i} - \bigl(1+\chi(-a)\bigr) (-4)^k,$$ or equivalently $$\sum_{x\in D} x^k = \frac{1+\chi(-a) + 2^{-k}\cdot \sum_{i=0}^{\lfloor k/2\rfloor} 4^{-i} \binom{k}{2i} \binom{2i}{i}}{4\cdot (-1)^{k+1}}.$$ Since $b=a$ we have $\chi(ab)=1$, so this matches the desired equality, which concludes the proof. ◻ **Remark 1**. 
As a by-product, the above proof yields the classical formula for $\lvert D \rvert$ in case $b=a$: for, $\phi\circ\theta$ is a surjective $4$-to-$1$ function $\mathbb{F}_q^*\setminus\Lambda\to D$, so since $\lvert \Lambda \rvert=2+\chi(a)+\chi(-a)$ we obtain $4\lvert D \rvert=q-3-\chi(a)-\chi(-a)$. More generally, this is also the formula for $\lvert D \rvert$ for any $a,b\in\mathbb{F}_q^*$ with $\chi(b)=\chi(a)$. We now deduce from this the formula for $\lvert D \rvert$ when $\chi(b)=-\chi(a)$. Let $Z$ be the set of $x\in\mathbb{F}_q$ with $\chi(x)=\chi(a)$. Since the number of $x\in Z$ for which $\chi(x+1)\ne 0$ is $(q-1)/2-(1+\chi(-a))/2$, subtracting the count just obtained for $\chi(x+1)=\chi(a)$ shows that the number of $x\in Z$ for which $\chi(x+1)=-\chi(a)$ is $(q-1+\chi(a)-\chi(-a))/4$. Thus for any $a,b\in\mathbb{F}_q^*$ we have $4\lvert D \rvert=q-2-\chi(-a)-\chi(b)-\chi(ab)$. # Proofs of Theorem [Theorem 1](#conj3){reference-type="ref" reference="conj3"} and Corollary [Corollary 1](#conj2){reference-type="ref" reference="conj2"} {#sec3} In this section we prove Theorem [Theorem 1](#conj3){reference-type="ref" reference="conj3"} and Corollary [Corollary 1](#conj2){reference-type="ref" reference="conj2"}. We begin with the following generalization of [@Sun Thm. 2.1]. **Lemma 1**. *For any odd prime power $q$, pick $a\in\mathbb{F}_q^*$ and write $f(X):=X^4+aX^2$. Then $N:=\lvert f(\mathbb{F}_q) \rvert$ satisfies $$N = \frac{3q+4+\chi(-1)-2\chi(-a)+2\chi(-2a)}8 = \Bigl\lfloor\frac{3q+7-2\chi(-a)}8\Bigr\rfloor.$$* *Proof.* Let $Z$ be the set of squares in $\mathbb{F}_q^*$. Then $f(\mathbb{F}_q)$ consists of $f(0)=0$ and $W:=\{x^2+ax:x\in Z\}$. Plainly $0\in W$ if and only if $-a\in Z$, so that $N=\lvert W \rvert+(1-\chi(-a))/2$. For $x,y\in Z$ we have $x^2+ax=y^2+ay$ if and only if $y\in\{x,-a-x\}$. Note that $x=-a-x$ just when $x=-a/2$, and that $-a/2\ne -a$. Writing $M$ for the number of $x\in Z$ for which $y:=-a-x$ is also in $Z$, it follows that $\lvert W \rvert=\lvert Z \rvert-M/2+(1+\chi(-a/2))/4$.
Next, writing $x:=a\tilde{x}$, we see that $M$ is the number of $\tilde{x}\in\mathbb{F}_q^*$ for which $\chi(\tilde{x})=\chi(a)=\chi(-1-\tilde{x})$, or equivalently $\chi(\tilde{x})=\chi(a)$ and $\chi(\tilde{x}+1)=\chi(-a)$. Thus Remark [Remark 1](#consec){reference-type="ref" reference="consec"} determines $M$, which in turn determines $\lvert W \rvert$ and $N$. ◻ *Proof of Theorem [Theorem 1](#conj3){reference-type="ref" reference="conj3"}.* For $N$ as in Lemma [Lemma 1](#4val){reference-type="ref" reference="4val"}, we have $S(X^4 + aX^2 + b) = S(X^4 + aX^2) + bN$. Since Lemma [Lemma 1](#4val){reference-type="ref" reference="4val"} determines $N$, it remains to compute $S(X^4+aX^2)$. Note that $$\{x^4+ax^2 : x \in \mathbb{F}_q\} = \{0\} \cup \{x^2+ax : x \in \mathbb{F}_q, \chi(x)=1\}.$$ Since $q>5$, we have $\sum_{\chi(x)=1} (x^2+ax) = 0$. Since $x^2+ax=y^2+ay$ if and only if $y\in\{x,-a-x\}$, this says that $$S(X^4+aX^2) = -\frac12\cdot \!\!\!\!\sum_{\substack{\chi(x)=1 \\ \chi(-x-a)=1}} \!\!\!(x^2+ax) \,\,+\frac12\cdot \!\!\!\sum_{\substack{\chi(x)=1 \\ x=-x-a}} \!(x^2+ax).$$ Writing $x=a\tilde{x}$, it follows that $$S(X^4+aX^2) = -\frac{a^2}2\cdot \!\!\!\!\!\!\sum_{\substack{\chi(\tilde{x})=\chi(a) \\ \chi(\tilde{x}+1)=\chi(-a)}} \!\!\!\!\!\! (\tilde{x}^2+\tilde{x}) + \frac{a^2}2\cdot \!\!\!\sum_{\substack{\chi(\tilde{x})=\chi(a) \\ \tilde{x}=-\tilde{x}-1}} \!\!\! (\tilde{x}^2+\tilde{x}).$$ The first sum is determined in Theorem [Theorem 1](#main){reference-type="ref" reference="main"}, and the second sum equals $-a^2/8$ if $\chi(-2a)=1$, and $0$ otherwise, so it always equals $-a^2 (1+\chi(-2a)) / 16$. Theorem [Theorem 1](#conj3){reference-type="ref" reference="conj3"} follows. ◻ *Proof of Corollary [Corollary 1](#conj2){reference-type="ref" reference="conj2"}.* Plainly if $a=0$ then the image of $\mathbb{F}_q$ under $X^4+aX^2$ consists of $0$ and the set $W$ of fourth powers in $\mathbb{F}_q^*$, so that $S(X^4+aX^2)=0$. 
If $q\equiv 1 \pmod 8$ then by Theorem [Theorem 1](#conj3){reference-type="ref" reference="conj3"} we have $$T = \Bigl\{\frac{-9a^2}{64} : a\in\mathbb{F}_q,\, \chi(a)\in\{0,1\}\Bigr\} \cup \Bigl\{\frac{-a^2}{64} : a\in\mathbb{F}_q,\,\chi(a)=-1\Bigr\}.$$ Since $-1$ and $64$ are fourth powers in $\mathbb{F}_q$, this equals $$\{9x^4 : x \in \mathbb{F}_q\} \cup \{x : x \in \mathbb{F}_q\setminus W,\, \chi(x)=1\}.$$ If $\mathop{\mathrm{char}}(\mathbb{F}_q)=3$ then $T$ consists of the squares in $\mathbb{F}_q$ which are not in $W$. By quadratic reciprocity, if $\mathop{\mathrm{char}}(\mathbb{F}_q)\ne 3$ then $\chi(3)=1$ if and only if $q\equiv 1\pmod 3$, so if $q\equiv 1 \pmod 3$ then $T$ consists of the squares in $\mathbb{F}_q$, and if $q\equiv 2 \pmod 3$ then $T$ consists of the squares in $\mathbb{F}_q$ which are not in $W$. If $q\equiv 5 \pmod 8$ then $$T = \Bigl\{\frac{-a^2}{64} : \chi(a)\in\{0,1\}\Bigr\} \cup \Bigl\{\frac{-9a^2}{64} : \chi(a)=-1\Bigr\}.$$ Here $-1$ and $64$ are squares which are not fourth powers, so $$T = \{0\} \cup W \cup \{9x : x\in\mathbb{F}_q\setminus W,\, \chi(x)=1\}.$$ If $\mathop{\mathrm{char}}(\mathbb{F}_q)=3$ then $T=\{0\}\cup W$. If $\mathop{\mathrm{char}}(\mathbb{F}_q)\ne 3$ then $\chi(3)=1$ if and only if $q\equiv 1\pmod 3$, so if $q\equiv 1 \pmod 3$ then $T$ consists of the squares in $\mathbb{F}_q$, and if $q\equiv 2 \pmod 3$ then $T$ consists of the fourth powers in $\mathbb{F}_q$. If $q\equiv 3 \pmod 4$ then $T = \{\epsilon x : \chi(x)\in\{0,1\},\, \epsilon \in \{1,-7\}\}$. By quadratic reciprocity, $-7$ is a square in $\mathbb{F}_q$ if and only if $q$ is a square in $\mathbb{F}_7$. Thus if $q\equiv 0,1,2,4 \pmod 7$ then $T$ consists of the squares in $\mathbb{F}_q$, and if $q\equiv 3,5,6 \pmod 7$ then $T=\mathbb{F}_q$. Finally, if $a=8$ then $\chi(-2a)=-1$, so $S(X^4+8X^2)=8^2/64=1$. 
◻ # Proofs of remaining results {#sec4} In this section we prove Propositions [Proposition 1](#3){reference-type="ref" reference="3"}, [Proposition 1](#power){reference-type="ref" reference="power"}, and [Proposition 1](#odd){reference-type="ref" reference="odd"}. We prove these in reverse order. *Proof of Proposition [Proposition 1](#odd){reference-type="ref" reference="odd"}.* Let $\zeta\in\mathbb{F}_q^*$ have order $s$. Then $f(\zeta X)=\zeta^r f(X)$, so that $f(\mathbb{F}_q)$ is preserved by multiplication by $\zeta^r$. Since $s\nmid r$, we have $\zeta^r\ne 1$ and thus $S(f)=0$. ◻ Note that Corollary [Corollary 1](#oddplus){reference-type="ref" reference="oddplus"} follows immediately from Proposition [Proposition 1](#odd){reference-type="ref" reference="odd"}. *Proof of Proposition [Proposition 1](#power){reference-type="ref" reference="power"}.* The set $Z$ of $n$-th powers in $\mathbb{F}_q$ equals the set of $m$-th powers, where $m:=\gcd(n,q-1)$. Thus $Z$ consists of $0$ and the $(q-1)/m$-th roots of unity, and plainly $f(\mathbb{F}_q)=\{ax+b:x\in Z\}$. If $m\ne q-1$ then $Z$ contains an element $\delta\notin\{0,1\}$, so since $Z$ is preserved by multiplication by $\delta$ we conclude that the sum of the elements of $Z$ equals $0$, whence $S(f)=b\cdot\lvert Z \rvert=b\cdot (q+m-1)/m=b\cdot (m-1)/m$ (since $q=0$ in $\mathbb{F}_q$). Finally, if $m=q-1$ then $Z=\{0,1\}$ so that $f(\mathbb{F}_q)=\{b,a+b\}$ and thus $S(f)=a+2b$. ◻ **Lemma 1**. *Let $q$ be a prime power, and let $f(X):=X^3+aX$ with $a\in\mathbb{F}_q^*$. If $q>3$ then $\lvert f(\mathbb{F}_q) \rvert=\lfloor(2q+1)/3\rfloor$.* Lemma [Lemma 1](#deg3vals){reference-type="ref" reference="deg3vals"} was first proved in full generality in [@T-val Prop. 4.6]; see [@T-val Rmk. 4.8(b)] for a discussion of earlier literature. *Proof of Proposition [Proposition 1](#3){reference-type="ref" reference="3"}.* If $f(X)=aX+b$ then $f(\mathbb{F}_q)=\mathbb{F}_q$, so that $S(f)$ equals $0$.
If $q$ is odd and $f(X)=aX^2+bX+c$ then $f(X)=a(X+b/(2a))^2+e$ where $e:=c-b^2/(4a)$, so that $f(\mathbb{F}_q)=\{ax^2+e:x\in\mathbb{F}_q\}$. Since there are $(q+1)/2$ squares in $\mathbb{F}_q$, and their sum is $0$, it follows that $S(f)=e/2$. If $q$ is even and $f(X)=aX^2+c$ then $f(\mathbb{F}_q)=\mathbb{F}_q$, so that $S(f)$ equals $0$. Now suppose $q$ is even and $f(X)=aX^2+bX+c$ with $b\ne 0$. Then $(a/b^2)\cdot f(bX/a) = g(X)+e$ where $g(X):=X^2+X$ and $e:=ac/b^2$. Plainly $g(X)$ induces a homomorphism on the additive group of $\mathbb{F}_q$ with kernel $\mathbb{F}_2$, so that $\lvert g(\mathbb{F}_q) \rvert=q/2$. It is well-known that $g(\mathbb{F}_q)$ consists of the roots of the squarefree polynomial $T(X):=X^{q/2}+X^{q/4}+\dots+X$. Thus $S(g)$ is the coefficient of the term of $T(X)$ of degree $(q-2)/2$, so that $S(g)=0$. Finally, $(a/b^2)\cdot S(f)=S(g)+e\cdot\lvert g(\mathbb{F}_q) \rvert=e\cdot q/2=0$. Next suppose $3\nmid q$ and $f(X)=aX^3+bX^2+cX+d$. Then $f(X-b/(3a)) = g(X)+e$ where $g(X):=aX^3+\tilde{c}X$ with $\tilde{c}:=(3ac - b^2)/(3a)$ and $e:=(27a^2d - 9abc + 2b^3)/(27a^2)$. We may assume $\tilde{c}\ne 0$, since otherwise the result follows from Proposition [Proposition 1](#power){reference-type="ref" reference="power"}. Thus $\lvert g(\mathbb{F}_q) \rvert=\lfloor(2q+1)/3\rfloor$ by Lemma [Lemma 1](#deg3vals){reference-type="ref" reference="deg3vals"}, so since $S(f)=S(g)+e\cdot\lvert g(\mathbb{F}_q) \rvert$ it remains to determine $S(g)$. If $q$ is odd then $S(g)=0$ by Proposition [Proposition 1](#odd){reference-type="ref" reference="odd"}. Now suppose $q$ is even. Then $g'(X)=aX^2+\tilde{c}=g(X)/X$, so that no element of $\mathbb{F}_q^*$ has exactly two $g$-preimages in $\mathbb{F}_q$. Hence every element of $g(\mathbb{F}_q)\setminus\{0\}$ has an odd number of $\mathbb{F}_q$-preimages, so that $S(g)=\sum_{x\in\mathbb{F}_q} g(x)=0$. Now suppose $3\mid q$ and $f(X)=aX^3+cX+d$, and write $g(X):=aX^3+cX$.
By Corollary [Corollary 1](#oddplus){reference-type="ref" reference="oddplus"} we have $S(f)=d\cdot \lvert g(\mathbb{F}_q) \rvert$. Here $g(X)$ induces a homomorphism on the additive group of $\mathbb{F}_q$, so that $\lvert g(\mathbb{F}_q) \rvert=q/m$ where $m$ is the number of roots of $g(X)$ in $\mathbb{F}_q$. Plainly $m=3$ if $\chi(-c/a)=1$ and $m=1$ otherwise. Finally, suppose $3\mid q$ and $f(X)=aX^3+bX^2+cX+d$ with $b\ne 0$. Then $f(X+c/b)=g(X)+e$ where $g(X):=aX^3+bX^2$ and $e:=(ac^3+b^3d-b^2c^2)/b^3$. We have $S(f)=S(g)+e\cdot\lvert g(\mathbb{F}_q) \rvert$. Since $g((b/a)\cdot X)=(b^3/a^2)\cdot h(X)$ where $h(X):=X^3+X^2$, we have $S(g)=(b^3/a^2)\cdot S(h)$ and $\lvert g(\mathbb{F}_q) \rvert=\lvert h(\mathbb{F}_q) \rvert$. To conclude the proof, it remains to determine $S(h)$ and $\lvert h(\mathbb{F}_q) \rvert$, which is done in Lemma [Lemma 1](#char3){reference-type="ref" reference="char3"} below. ◻ **Lemma 1**. *Let $q=3^k$ and $h(X):= X^3+X^2$. Then an element $\gamma\in\mathbb{F}_q$ has more than one $h$-preimage in $\mathbb{F}_q$ if and only if $\gamma=\delta^2$ for some $\delta\in\mathbb{F}_q$ with $\mathop{\mathrm{Tr}}_{\mathbb{F}_q/\mathbb{F}_3}(\delta)=0$, and $\gamma$ has three such $h$-preimages if and only if $\delta\ne 0$. It follows that $\lvert h(\mathbb{F}_q) \rvert=2q/3$, and that $S(h)=0$ if $k>2$ and $S(h)=-1$ if $k\le 2$.* *Proof.* Since the only root of $h'(X)=2X$ is $0$, the only $\gamma\in\mathbb{F}_q$ with fewer than three distinct $h$-preimages in $\overline{\mathbb{F}}_q$ is $h(0)=0$, whose $h$-preimages are $0$ and $-1$. Writing $Z_i$ for the set of elements in $\mathbb{F}_q$ having exactly $i$ distinct $h$-preimages in $\mathbb{F}_q$, it follows that $Z_2=\{0\}$. We claim that $Z_3$ consists of the elements $\delta^2$ where $\delta\in\mathbb{F}_q^*$ satisfies $\mathop{\mathrm{Tr}}_{\mathbb{F}_q/\mathbb{F}_3}(\delta)=0$. We now deduce the value of $\lvert h(\mathbb{F}_q) \rvert$ from this claim.
The claim implies that $\lvert Z_3 \rvert=(q-3)/6$, and that $\lvert h^{-1}(Z_3) \rvert=(q-3)/2$. Thus $$\lvert Z_1 \rvert=\lvert \mathbb{F}_q\cap h^{-1}(Z_1) \rvert=\lvert \mathbb{F}_q\setminus h^{-1}(Z_2\cup Z_3) \rvert=q-2-\frac{q-3}{2} = \frac{q-1}{2},$$ so that $$\lvert h(\mathbb{F}_q) \rvert = \lvert Z_1 \rvert+\lvert Z_2 \rvert+\lvert Z_3 \rvert=\frac{q-1}{2}+1+\frac{q-3}{6} = \frac{2q}{3}.$$ Now we prove the claim. For $\gamma\in\mathbb{F}_q^*$, we have $\gamma\in Z_3$ if and only if $X^3+X^2-\gamma$ has three roots in $\mathbb{F}_q$, or equivalently $\gamma X^3-X-1$ has three roots in $\mathbb{F}_q$. Since $\gamma X^3-X$ induces a homomorphism on the additive group of $\mathbb{F}_q$, the polynomial $\gamma X^3-X-1$ has three roots in $\mathbb{F}_q$ if and only if it has at least one such root and $\gamma X^3-X$ has three roots in $\mathbb{F}_q$. The last condition holds if and only if $\gamma=\delta^2$ for some $\delta\in\mathbb{F}_q^*$. Upon substituting $X/\delta$ for $X$, we see that $\delta^2 X^3-X-1$ has a root in $\mathbb{F}_q$ if and only if $X^3-X-\delta$ has such a root, or equivalently $\mathop{\mathrm{Tr}}_{\mathbb{F}_q/\mathbb{F}_3}(\delta)=0$. It remains to determine $S(h)$. This is easy when $k\le 2$, so we now assume $k>2$. Write $T(X):=X^{q/3}+X^{q/9}+\dots+X$, so that $T(X)$ is squarefree with roots being the elements $\delta\in\mathbb{F}_q$ for which $\mathop{\mathrm{Tr}}_{\mathbb{F}_q/\mathbb{F}_3}(\delta)=0$. Then $T(X)=X R(X^2)$ where $R(X):=X^{(q/3-1)/2}+X^{(q/9-1)/2}+\dots+1$, so the sum of the squares of the roots of $T(X)$ equals the sum of the roots of $R(X)$, which is zero since $R(X)$ has no term of degree $\deg(R)-1$. Hence the sum of the elements of $Z_3$ is zero, so since $Z_2=\{0\}$ and $\sum_{x\in\mathbb{F}_q} h(x)=0$ we conclude that $S(h)=0$. ◻ **Remark 1**. The value of $\lvert h(\mathbb{F}_q) \rvert$ in Lemma [Lemma 1](#char3){reference-type="ref" reference="char3"} was first determined in [@CM] and [@T-val Prop. 4.7].
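The counts in Lemma [Lemma 1](#char3){reference-type="ref" reference="char3"} can be confirmed by brute force over a small field. The Python sketch below is an illustration only: it uses a hand-rolled model of $\mathbb{F}_{27}=\mathbb{F}_{3}[x]/(x^{3}-x-1)$ (the choice of irreducible polynomial is arbitrary) to check that $\lvert h(\mathbb{F}_{27}) \rvert = 2\cdot 27/3 = 18$ and $S(h)=0$, as predicted for $k=3>2$.

```python
# Model F_27 = F_3[x]/(x^3 - x - 1); elements are coefficient triples (c0, c1, c2).
p = 3

def add(a, b):
    return tuple((u + v) % p for u, v in zip(a, b))

def mul(a, b):
    # Multiply as polynomials, then reduce using x^3 = x + 1 (so x^4 = x^2 + x).
    c = [0] * 5
    for i, u in enumerate(a):
        for j, v in enumerate(b):
            c[i + j] = (c[i + j] + u * v) % p
    c[2], c[1] = (c[2] + c[4]) % p, (c[1] + c[4]) % p  # x^4 -> x^2 + x
    c[1], c[0] = (c[1] + c[3]) % p, (c[0] + c[3]) % p  # x^3 -> x + 1
    return tuple(c[:3])

field = [(a, b, c) for a in range(p) for b in range(p) for c in range(p)]

def h(x):  # h(X) = X^3 + X^2
    return add(mul(mul(x, x), x), mul(x, x))

values = {h(x) for x in field}  # the value set h(F_27)
S = (0, 0, 0)
for v in values:  # S(h): sum of the distinct values
    S = add(S, v)
print(len(values), S)
```

Running this prints `18 (0, 0, 0)`, matching $\lvert h(\mathbb{F}_{27}) \rvert = 18$ and $S(h)=0$; replacing the modulus by an irreducible quadratic gives the analogous check for $q=9$, where the lemma instead predicts $S(h)=-1$.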
In fact, [@CM] determines $\lvert g(\mathbb{F}_q) \rvert$ whenever $g(X)=X^{Q-1}(X+1)$ with $q=Q^k$, and one can combine further results from [@CM] with arguments in the above proof to show that $S(g)=0$ when $k>2$ or $q=2$, and $S(g)=-1$ otherwise. N. S. Aladov, *Sur la distribution des résidus quadratiques et non-quadratiques d'un nombre premier $P$ dans la suite $1,2,\dots,P-1$ (in Russian)*, Mat. Sb. **18** (1896), 61--75. T. W. Cusick and P. Müller, *Wan's bound for value sets of polynomials*, 69--72, in: Finite Fields and Applications (Glasgow, 1995), London Math. Soc. Lecture Note Ser., 233, Cambridge Univ. Press, Cambridge, 1996. H. Davenport, *On the distribution of quadratic residues (mod $p$)*, J. London Math. Soc. **6** (1931), 49--54. H. Davenport, *On the distribution of quadratic residues (mod $p$) (Second paper)*, J. London Math. Soc. **8** (1933), 46--52. H. Davenport, *On character sums in finite fields*, Acta Math. **71** (1939), 99--121. C. Finch-Smith, J. Harrington, and T. W. H. Wong, *Sums of distinct polynomial residues*, Integers **23** (2023), \#A63, 8 pp. S. Gross, J. Harrington, and L. Minott, *Sums of polynomial residues*, Irish Math. Soc. Bull. **79** (2017), 31--37. E. Jacobsthal, *Anwendungen einer Formel aus der Theorie der quadratischen Reste*, dissertation, Univ. Berlin, 1906. M. A. Stern, *Ueber einige Eigenschaften der Trigonalzahlen*, J. Reine Angew. Math. **69** (1868), 370--380. O. Stetson, *Triangular residues*, Amer. Math. Monthly **11** (1904), 106--107. Z.-H. Sun, *On the number of incongruent residues of $x^4+ax^2+b$ modulo $p$*, J. Number Theory **119** (2006), 210--241. G. Turnwald, *A new criterion for permutation polynomials*, Finite Fields Appl. **1** (1995), 64--82. A. Weil, *On some exponential sums*, Proc. Nat. Acad. Sci. U.S.A. **34** (1948), 204--207.
--- abstract: | Let $(a(n) : n \in \mathbb{N})$ denote a sequence of nonnegative integers. Let $0.a(1)a(2)...$ denote the real number obtained by concatenating the digit expansions, in a fixed base, of consecutive entries of $(a(n) : n \in \mathbb{N})$. Research on digit expansions of this form has mainly to do with the normality of $0.a(1)a(2)...$ for a given base. Famously, the Copeland--Erdős constant $0.2357111317...$, for the case whereby $a(n)$ equals the $n^{\text{th}}$ prime number $p_{n}$, is normal in base 10. However, it seems that the "inverse" construction given by concatenating the decimal digits of $(\pi(n) : n \in \mathbb{N})$, where $\pi$ denotes the prime-counting function, has not previously been considered. Exploring the distribution of sequences of digits in this new constant $0.0122...9101011...$ would be comparatively difficult, since the number of times a fixed $m \in \mathbb{N}$ appears in $(\pi(n) : n \in \mathbb{N})$ is equal to the prime gap $g_{m} = p_{m+1} - p_{m}$, with the behaviour of prime gaps notoriously elusive. Using a combinatorial method due to Szüsz and Volkmann, we prove that Cramér's conjecture on prime gaps implies the normality of $0.a(1)a(2)...$ in a given base $g \geq 2$, for $a(n) = \pi(n)$. --- The prime-counting Copeland--Erdős constant [John M. Campbell]{.smallcaps}   # Introduction The study of normal numbers forms a large and important area in probabilistic number theory. Much of our notation concerning normal numbers is based on the work of Szüsz and Volkmann in [@SzuszVolkmann1994]. Following [@SzuszVolkmann1994], we let $g \geq 2$ be a fixed parameter throughout our article, letting it be understood that we are working with digits in base $g$, unless otherwise specified. For a real value $\alpha$, and for a block $E$ of digits, let $A_{E}(\alpha, n)$ denote the number of copies of $E$ within the first $n$ digits of $\alpha$. 
The real number $\alpha$ is said to be *normal of order $k$* if $$\label{SVnormal} \lim_{n \to \infty} \frac{ A_{E}(\alpha, n) }{n} = \frac{1}{g^{\ell(E)}}$$ for all $E$ such that $\ell(E) = k$, where $\ell(E)$ denotes the *length* of $E$, or the number of digits in $E$, counting multiplicities. For $k = 1$, the specified property concerning [\[SVnormal\]](#SVnormal){reference-type="eqref" reference="SVnormal"} is referred to as *simple normality*. A real number $\alpha$ is said to be *normal* if it is normal for all orders $k \in \mathbb{N}$. In this article, we introduce a constant related to the prime-counting function that may be seen as something of an inverse relative to the construction of the famous Copeland--Erdős constant [@CopelandErdos1946], and we prove, under the assumption of Cramér's conjecture, that this new constant is normal. This is inspired by past work related to the normality of real numbers defined via concatenations of number-theoretic functions, as in [@CattCoonsVelich2016; @DeKoninckKatai2016; @DeKoninckKatai2011; @PollackVandehey2015Besicovitch; @PollackVandehey2015Some; @Vandehey2013]. ## Background For a sequence $(a(n) : n \in \mathbb{N})$ of nonnegative integers, we let $$\label{generalconcatenate} 0.a(1)a(2)\ldots$$ denote the real value given by concatenating the digit expansions of consecutive entries of the aforementioned sequence. The first real value proved to be normal is famously due to Champernowne [@Champernowne1933] and is given by the case whereby $a(n) = n$ for all $n \in \mathbb{N}$ in [\[generalconcatenate\]](#generalconcatenate){reference-type="eqref" reference="generalconcatenate"}, in base 10. Shortly afterwards, Besicovitch [@Besicovitch1935] proved a result that may be applied to obtain the normality of the corresponding constant for the $a(n) = n^2$ case [@PollackVandehey2015Besicovitch]. 
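The counting in [\[SVnormal\]](#SVnormal){reference-type="eqref" reference="SVnormal"} is easy to experiment with numerically. The Python sketch below is an illustration of the definition only (the cutoff $n = 80000$ and the blocks tested are arbitrary choices); it computes $A_{E}(\alpha, n)/n$ for Champernowne's base-10 constant, counting occurrences of $E$ with overlaps allowed.

```python
# First ~88,000 digits of Champernowne's constant 0.123456789101112... in base 10
digits = "".join(str(n) for n in range(1, 20001))

def block_frequency(E, n):
    """A_E(alpha, n)/n: overlapping occurrences of the block E in the first n digits."""
    s = digits[:n]
    return sum(s[i:i + len(E)] == E for i in range(n - len(E) + 1)) / n

print(block_frequency("7", 80000))   # drifts toward 1/10
print(block_frequency("42", 80000))  # drifts toward 1/100
```

At this depth the ratios are already close to $10^{-\ell(E)}$, although the convergence in [\[SVnormal\]](#SVnormal){reference-type="eqref" reference="SVnormal"} is slow.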
This normality result was then generalized by Davenport and Erdős [@DavenportErdos1952] for the case whereby $a(n)$ is a polynomial satisfying certain conditions. The polynomial cases we have covered lead us to consider the behaviour of the digits in [\[generalconcatenate\]](#generalconcatenate){reference-type="eqref" reference="generalconcatenate"} for number-theoretic sequences. A famous 1946 result due to Copeland and Erdős [@CopelandErdos1946] provides the base-10 normality of [\[generalconcatenate\]](#generalconcatenate){reference-type="eqref" reference="generalconcatenate"} for the case whereby $a(n)$ is equal to the $n^{\text{th}}$ prime number $p_{n}$. In this case, the constant $$\label{numericalCE} 0.235711131719232...$$ of the form indicated in [\[generalconcatenate\]](#generalconcatenate){reference-type="eqref" reference="generalconcatenate"} is referred to as the *Copeland--Erdős constant*. Copeland and Erdős' 1946 article [@CopelandErdos1946] is seminal within areas of number theory concerning normal numbers, and this inspires the exploration of variants of [\[numericalCE\]](#numericalCE){reference-type="eqref" reference="numericalCE"}, with the use of number-theoretic sequences in place of $(p_{n} : n \in \mathbb{N} )$. As a natural variant of the Copeland--Erdős constant, we consider the constant $$\label{numericalPCCE} 0.012233444455666677888899999910101111...$$ obtained from [\[generalconcatenate\]](#generalconcatenate){reference-type="eqref" reference="generalconcatenate"} by setting the sequence $(a(n) : n \in \mathbb{N})$ to be equal to the sequence $(\pi(n) : n \in \mathbb{N})$ given by the prime-counting function. 
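The digits in [\[numericalPCCE\]](#numericalPCCE){reference-type="eqref" reference="numericalPCCE"} are easy to regenerate. The short Python sketch below (trial division suffices at this scale) concatenates $\pi(1), \pi(2), \ldots, \pi(36)$ and recovers the expansion shown above; note the six consecutive copies of $9$, reflecting the prime gap $g_{9} = p_{10} - p_{9} = 29 - 23 = 6$.

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Concatenate pi(1), pi(2), ..., pi(36)
pcce_digits = ""
count = 0  # running value of pi(n)
for n in range(1, 37):
    count += is_prime(n)
    pcce_digits += str(count)
print(pcce_digits)  # begins 012233444455666677888899999910101111...
```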
Copeland and Erdős' proof [@CopelandErdos1946] of the normality of [\[numericalCE\]](#numericalCE){reference-type="eqref" reference="numericalCE"} relied on the property whereby the sequence $(p_{n} : n \in \mathbb{N})$ is strictly increasing and the property given by $p_{n} = n^{1 + o(1)}$, but, since $(\pi(n) : n \in \mathbb{N} )$ is not strictly increasing, the techniques from [@CopelandErdos1946] cannot be directly adapted to the constant in [\[numericalPCCE\]](#numericalPCCE){reference-type="eqref" reference="numericalPCCE"}. # Main construction It seems that references on normal numbers related to Copeland and Erdős' work in [@CopelandErdos1946], including references such as [@BaileyCrandall2002; @BecherFigueiraPicchi2007; @DavenportErdos1952; @MadritschThuswaldnerTichy2008; @NivenZuckerman1951] that have inspired our work, have not involved the "inverse" constant in [\[numericalPCCE\]](#numericalPCCE){reference-type="eqref" reference="numericalPCCE"}. Moreover, integer sequences involving $$\label{notinOEIS} (0, 1, 2, 2, 3, 3, 4, 4, 4, 4, 5, 5, 6, 6, 6, 6, 7, 7, 8, 8, 8, 8, 9, 9, 9, 9, 9, 9, 1, 0, \ldots)$$ are not currently included in the On-Line Encyclopedia of Integer Sequences, where the tuple in [\[notinOEIS\]](#notinOEIS){reference-type="eqref" reference="notinOEIS"} is given by the consecutive digits of the constant in [\[numericalPCCE\]](#numericalPCCE){reference-type="eqref" reference="numericalPCCE"}, which we refer to as the *prime-counting Copeland--Erdős (PCCE) constant*. In relation to this constant, we apply a remarkable result that was originally formulated by Szüsz and Volkmann in 1994 [@SzuszVolkmann1994], that was later corrected by Pollack and Vandehey [@PollackVandehey2015Besicovitch], and that is reproduced below; the notation and terminology appearing in this theorem are explained afterwards.
[\[theoremSV\]]{#theoremSV label="theoremSV"} (Szüsz $\&$ Volkmann, 1994) Suppose that $f$ is a differentiable function, and that $f$ is monotonically increasing and positive for all $x \geq n_{0}(f)$ and that both $\eta(f)$ and $\eta(f')$ exist and that $0 < \eta(f) \leq 1$. It follows that $f$ is a Champernowne function [@SzuszVolkmann1994] (cf. [@PollackVandehey2015Besicovitch]). For a real-valued function $f$ defined on a domain containing $\mathbb{N}$ such that $f(n) > 0$, we let $\alpha(f) = \alpha_{g}(f)$ denote the real number such that the base-$g$ expansion of this real number is of the form $0.b_{1}b_{2}...$, where $b_{n}$ denotes the base-$g$ expansion of $\left\lfloor f(n) \right\rfloor$ [@SzuszVolkmann1994]. The PCCE constant in [\[numericalPCCE\]](#numericalPCCE){reference-type="eqref" reference="numericalPCCE"} is equal to $\alpha_{10}(\pi)$, writing $\pi$ in place of the prime-counting function. A function $f$ such that $\lim_{x \to \infty} f(x) = \infty$ is a *Champernowne function* if: For all $g \geq 2$, the value $\alpha_{g}(f)$ is normal in base $g$ [@SzuszVolkmann1994]. The identity function $f$ mapping $n$ to $n$ is a Champernowne function (cf. [@NakaiShiokawa1992]). For a real-valued, positive function $f$, we set $$\label{etadefinition} \eta(f) = \lim_{x \to \infty} \frac{\log f(x)}{\log x},$$ under the assumption that this limit exists. [\[sqrtcons\]]{#sqrtcons label="sqrtcons"} As noted in [@PollackVandehey2015Besicovitch; @SzuszVolkmann1994], the constant $0.1112222233333334...$ given by setting $f(n) = \sqrt{n}$ in $\alpha(f)$ is normal, with $\eta(f) = \frac{1}{2}$. 
A difficulty associated with the explicit construction of a function $f$ that satisfies all of the required properties in Theorem [\[theoremSV\]](#theoremSV){reference-type="ref" reference="theoremSV"} and that can be used, via Szüsz and Volkmann's combinatorial method, to prove the normality of [\[numericalPCCE\]](#numericalPCCE){reference-type="eqref" reference="numericalPCCE"} has to do with the limit associated with $\eta(f')$, since, for example, $f'$ cannot vanish infinitely often, in view of the definition in [\[etadefinition\]](#etadefinition){reference-type="eqref" reference="etadefinition"}. A more serious problem has to do with how the distribution of groupings of digits in the PCCE constant would depend on the behaviour of the sequence $(g_{n} = p_{n+1} - p_{n} : n \in \mathbb{N} )$ of prime gaps, since the number of times $m \in \mathbb{N}$ appears in $(\pi(n) : n \in \mathbb{N})$ is equal to $g_{m}$. In this regard, we are to make use of a purported formula, under the assumption of Cramér's conjecture, concerning the size of prime gaps. *Cramér's conjecture* (cf. [@Cramer1936]) often refers to the purported estimate $$\label{mainCramer} g_{n} = O(\log^2 p_{n}).$$ The weaker formula $$\label{needsRH} g_{n} = O(\sqrt{p_{n}} \log p_{n} ),$$ which Cramér proved [@Cramer1936] under the assumption of the Riemann Hypothesis (RH), is not sufficient for our purposes, as we later discuss. The problem, here, has to do with how we would want $\lim_{n \to \infty} \frac{\log h_{n}}{\log n}$ to vanish for a function $h_{n}$ approximating $g_{n}$. Our construction of a function $f$ that satisfies the conditions in Theorem [\[theoremSV\]](#theoremSV){reference-type="ref" reference="theoremSV"} and that may be used to prove the normality of the PCCE constant may be viewed as being analogous to how the $\Gamma$-function provides a differentiable analogue of the factorial function defined on natural numbers. 
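For a sense of the strength of the estimate in [\[mainCramer\]](#mainCramer){reference-type="eqref" reference="mainCramer"}, one may compute the ratios $g_{n}/\log^{2} p_{n}$ over a modest range. In the Python sketch below (the range $10^{6}$ and the comparison value $2$ are illustrative choices, not proven constants), the ratio stays below $2$ for all odd primes in range.

```python
import math

# Sieve of Eratosthenes up to N
N = 10 ** 6
sieve = bytearray([1]) * (N + 1)
sieve[0] = sieve[1] = 0
for i in range(2, int(N ** 0.5) + 1):
    if sieve[i]:
        sieve[i * i::i] = bytearray(len(range(i * i, N + 1, i)))
primes = [i for i in range(N + 1) if sieve[i]]

# Cramer ratio g_n / log^2(p_n) over consecutive odd primes
ratios = [(q - p) / math.log(p) ** 2 for p, q in zip(primes, primes[1:]) if p > 2]
print(max(ratios))  # well below 2 in this range
```

The maximal ratio in this range is attained at a very small prime, which is consistent with the expectation that $g_{n}/\log^{2} p_{n}$ remains bounded under Cramér's conjecture.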
As indicated above, $\eta(f')$ could not be vanishing infinitely often, if we were to apply Szüsz and Volkmann's combinatorial method [@SzuszVolkmann1994], which leads us to consider how $f'$ could be bounded, in such a way to guarantee the existence of the limit associated with $\eta(f')$. Informally, if we consider the graph of the prime-counting function as a function defined on $\mathbb{R}$, and if we consider a function $f$ that projects onto the graph of $\pi$ under $\lfloor \cdot \rfloor$ and that has a derivative that is reasonably well behaved, we would expect the derivative function $f'(x)$, for values $x$ such that $p_{m} \leq x < p_{m+1}$, to be "reasonably close" to $\frac{1}{g_{m}}$. Formalizing this notion in a way that would allow us to apply Theorem [\[theoremSV\]](#theoremSV){reference-type="ref" reference="theoremSV"} is nontrivial, as below. Under the assumption of Cramér's conjecture, the PCCE constant is normal in base 10 and, more generally, the constant $0.a(1)a(2)\cdots$ is normal in base $g$ for $a(n) = \pi(n)$. *Proof.* First suppose that the natural number $m \in \mathbb{N}_{\geq 2}$ is such that $\frac{1}{g_{m-1}} > \frac{1}{g_{m}}$. Let $\varepsilon^{(m)} > 0$. We set $q_{1}^{(m)} = p_{m} + \varepsilon^{(m)}$ for $\varepsilon^{(m)} < 1$, and we set $$\label{q2m} q_{2}^{(m)} = \frac{q_{1}^{(m)} (g_{m-1} - g_{m})}{g_{m-1} g_{m}^2} + \frac{ g_{m} p_m - g_{m-1} p_m + g_{m-1} g_{m}}{g_{m-1} g_{m}^2}.$$ We may verify that [\[q2m\]](#q2m){reference-type="eqref" reference="q2m"} reduces in such a way so that $$\label{qreduced} q_{2}^{(m)} = \frac{ \varepsilon^{(m)} \left( \frac{1}{g_{m}} - \frac{1}{g_{m-1}} \right) }{g_{m}} + \frac{1}{g_{m}},$$ which gives us that $q_{2}^{(m)} < \frac{1}{g_{m}}$. 
By letting $\varepsilon^{(m)}$ be sufficiently small, with $$\label{epsilonmsmall} 0 < \varepsilon^{(m)} < \frac{1}{\frac{1}{g_{m-1}} - \frac{1}{g_{m}}},$$ the condition in [\[epsilonmsmall\]](#epsilonmsmall){reference-type="eqref" reference="epsilonmsmall"} gives us that $0 < q_{2}^{(m)} < \frac{1}{g_{m}}$. We define the function $h^{(m)}_{\varepsilon}(x)$ on the interval $[p_{m}, p_{m+1}]$ so that: $$h_{\varepsilon}^{(m)}(x) = \begin{cases} \frac{1 - q_{2}^{(m)} g_{m-1}}{g_{m-1} \left(p_m - q_{1}^{(m)} \right)} x + \frac{q_{2}^{(m)} g_{m-1} p_m - q_{1}^{(m)}}{g_{m-1} \left(p_m - q_{1}^{(m)} \right)} & \text{if $p_{m} \leq x \leq p_{m} + \varepsilon^{(m)}$,} \\ \frac{1 - q_{2}^{(m)} g_{m}}{ g_{m} \left(p_{m+1} - q_{1}^{(m) } \right)} x + \frac{q_{2}^{(m)} g_{m} p_{m+1} - q_{1}^{(m)}}{g_{m} \left(p_{m+1} - q_{1}^{(m)} \right)} & \text{if $p_{m} + \varepsilon^{(m)} \leq x \leq p_{m+1}$}. \end{cases}$$ Again for a natural number $m \in \mathbb{N}_{\geq 2}$, we proceed to suppose that $\frac{1}{g_{m-1}} < \frac{1}{g_{m}}$. Let $\delta^{(m)} > 0$. We set $r_{1}^{(m)} = p_{m} + \delta^{(m)}$, with $\delta^{(m)} < 1$. We set $$\label{setr2m} r_{2}^{(m)} = \frac{ r_{1}^{(m)} (g_{m-1} - g_{m})}{g_{m-1} g_{m}^2} + \frac{g_{m} p_m - g_{m-1} p_{m+1} + 2 g_{m-1} g_{m}}{g_{m-1} g_{m}^2}.$$ We may verify that [\[setr2m\]](#setr2m){reference-type="eqref" reference="setr2m"} reduces so that $$\label{r2reduce} r_{2}^{(m)} = \frac{ \delta^{(m)} \left( \frac{1}{g_{m}} - \frac{1}{g_{m-1}} \right) }{g_{m}} + \frac{1}{g_{m}},$$ so that $r_{2}^{(m)} > \frac{1}{g_{m}}$.
We define the function $h^{(m)}_{\delta}(x)$ on the interval $[p_{m}, p_{m+1}]$ so that: $$h^{(m)}_{\delta}(x) = \begin{cases} \frac{1 - r_{2}^{(m)} g_{m-1}}{g_{m-1} \left(p_m - r_{1}^{(m)} \right)} x + \frac{ r_{2}^{(m)} g_{m-1} p_m - r_{1}^{(m)} }{ g_{m-1} \left(p_m - r_{1}^{(m)} \right)} & \text{if $p_{m} \leq x \leq p_{m} + \delta^{(m)}$}, \\ \frac{1 - r_{2}^{(m)} g_{m} }{ g_{m} \left(p_{m+1} - r_{1}^{(m)} \right)} x + \frac{ r_{2}^{(m)} g_{m} p_{m+1} - r_{1}^{(m)} }{ g_{m} \left(p_{m+1} - r_{1}^{(m)} \right)} & \text{if $p_{m} + \delta^{(m)} \leq x \leq p_{m+1}$}. \end{cases}$$ Finally, if $\frac{1}{g_{m-1}} = \frac{1}{g_{m}}$ for $m \in \mathbb{N}_{\geq 2}$, we define $h^{(m)}_{\text{null}}(x)$ on the interval $[p_{m}, p_{m+1}]$ so that $h^{(m)}_{\text{null}}(x) = \frac{1}{g_{m}}$. Now, set $f(x) = \frac{1}{96} \left(4 x^2+12 x+39\right)$ if $0 \leq x \leq \frac{3}{2}$, and set $f(x) = \frac{1}{4} \left(3 x^2-8 x+8\right)$ if $\frac{3}{2} \leq x \leq 2$ and set $f(x) = x-1$ if $2 \leq x \leq 3$. Now, for $\pi(x) \in \mathbb{N}_{\geq 2}$, we set $f(x)$ so that $$f(x) = \begin{cases} \pi(x) + \int_{p_{\pi(x)}}^{x} h^{(\pi(x))}_{\varepsilon}(k) \, dk & \text{if $\frac{1}{g_{\pi(x)-1}} > \frac{1}{g_{\pi(x)}}$}, \\ \pi(x) + \int_{p_{\pi(x)}}^{x} h^{(\pi(x))}_{\delta}(k) \, dk & \text{if $\frac{1}{g_{\pi(x)-1}} < \frac{1}{g_{\pi(x)}}$}, \\ \pi(x) + \int_{p_{\pi(x)}}^{x} h^{(\pi(x))}_{\text{null}}(k) \, dk & \text{if $\frac{1}{g_{\pi(x)-1}} = \frac{1}{g_{\pi(x)}}$}. \end{cases}$$ By construction, we have that $f$ is a differentiable function and that $f$ is monotonically increasing and positive for $x \geq 0$. Moreover, the function $f$ is constructed so that $\lfloor f(x) \rfloor = \pi(x)$ for all $x \geq 0$. 
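As a sanity check on the construction, the key property $\lfloor f(x) \rfloor = \pi(x)$ can be verified numerically on a simplified model. The Python sketch below replaces the smoothed construction above by the plain linear interpolant through the points $(p_{m}, m)$, with slope $1/g_{m}$ on $[p_{m}, p_{m+1}]$; this is a hypothetical stand-in that is not differentiable at the primes (the defect repaired by $h^{(m)}_{\varepsilon}$, $h^{(m)}_{\delta}$, and $h^{(m)}_{\text{null}}$), but it already exhibits the property $\lfloor f(x) \rfloor = \pi(x)$ that the smoothing must preserve.

```python
import bisect
import math

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31]

def pi(x):  # prime-counting function, valid for 0 <= x <= 31
    return bisect.bisect_right(primes, x)

def f_linear(x):
    # Linear interpolant through the points (p_m, m); slope 1/g_m on [p_m, p_{m+1}]
    m = pi(x)
    if m == 0:
        return x / primes[0]  # any increasing function with values in [0, 1) works here
    pm, p_next = primes[m - 1], primes[m]
    return m + (x - pm) / (p_next - pm)

checks = [2 + k * 0.14 for k in range(200)]  # sample points in [2, 29.86]
assert all(math.floor(f_linear(x)) == pi(x) for x in checks)
print("floor(f) == pi on all sample points")
```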
We have that $\frac{1}{8} \leq f'(x) \leq 1$ for $0 \leq x \leq 3$, and, for $x \geq 3$, we have that $f'(x)$ is either equal to $h^{(m)}_{\varepsilon}(x)$ or $h^{(m)}_{\delta}(x)$ or $h^{(m)}_{\text{null}}(x)$, according, respectively, to the possibilities whereby $\frac{1}{g_{m-1}} > \frac{1}{g_{m}}$ and $\frac{1}{g_{m-1}} < \frac{1}{g_{m}}$ and $\frac{1}{g_{m-1}} = \frac{1}{g_{m}}$. If $\frac{1}{g_{m-1}} > \frac{1}{g_{m}}$, then, by construction, we have that $$\label{inviewqred} 0 < \frac{ \varepsilon^{(m)} \left( \frac{1}{g_{m}} - \frac{1}{g_{m-1}} \right) }{g_{m}} + \frac{1}{g_{m}} \leq f'(x) \leq \frac{1}{g_{m-1}}$$ for $p_{m} \leq x \leq p_{m+1}$, in view of [\[qreduced\]](#qreduced){reference-type="eqref" reference="qreduced"}, where, for fixed $m$, the expression $\varepsilon^{(m)} > 0$ may be arbitrary, subject to [\[epsilonmsmall\]](#epsilonmsmall){reference-type="eqref" reference="epsilonmsmall"}. If $\frac{1}{g_{m-1}} < \frac{1}{g_{m}}$, then, by construction, we have that $$\label{recallepsilonupper} 0 < \frac{1}{g_{m-1} } \leq f'(x) \leq \frac{ \delta^{(m)} \left( \frac{1}{g_{m}} - \frac{1}{g_{m-1}} \right) }{g_{m}} + \frac{1}{g_{m}},$$ in view of [\[r2reduce\]](#r2reduce){reference-type="eqref" reference="r2reduce"}, and, for $p_{m} \leq x \leq p_{m+1}$, the expression $\delta^{(m)} > 0$ is arbitrary on the specified interval. 
Finally, by construction, if $\frac{1}{g_{m-1}} = \frac{1}{g_{m}}$, then $$0 < \frac{1}{g_{m-1}} = \frac{1}{g_{m}} \leq f'(x) \leq \frac{1}{g_{m-1}} = \frac{1}{g_{m}}.$$ So, in any case, we find that $$\min\left\{ \overline{\varepsilon^{(\pi(x))}} + \frac{1}{g_{\pi(x)}}, \frac{1}{g_{\pi(x)-1}} \right\} \leq f'(x) \leq \max\left\{ \overline{\delta^{(\pi(x))}} + \frac{1}{g_{\pi(x)}}, \frac{1}{g_{\pi(x)-1}} \right\},$$ where the "overlined" expressions given above are, respectively, given by the positive terms involving $\varepsilon^{(m)}$ and $\delta^{(m)}$ in [\[inviewqred\]](#inviewqred){reference-type="eqref" reference="inviewqred"} and [\[recallepsilonupper\]](#recallepsilonupper){reference-type="eqref" reference="recallepsilonupper"}, writing $m = \pi(x)$. Under the assumption of Cramér's conjecture, as formulated in [\[mainCramer\]](#mainCramer){reference-type="eqref" reference="mainCramer"}, we find that there exists $M > 0$ and $x_{0} \in \mathbb{R}$ such that $$\label{underCramer} \forall y \geq x_{0} \ \frac{1}{ M \log^{2} p_{y} } \leq \frac{1}{g_{y}} \leq \frac{1}{2}.$$ So, for suitable natural numbers $m \in \mathbb{N}_{\geq 2}$, by taking $\overline{\varepsilon^{(m)}}$ to be sufficiently small throughout a given interval $[p_{m}, p_{m+1}]$, and similarly for $\overline{\delta^{(m)}}$, from the consequence of Cramér's conjecture given in [\[underCramer\]](#underCramer){reference-type="eqref" reference="underCramer"}, we may deduce that $$\label{againCramer} \frac{1}{M \log^{2} p_{\pi(x)} } \leq f'(x) \leq 0.51,$$ for sufficiently large $x$. 
Using explicit bounds as in [@Dusart1999; @Dusart2010; @Rosser1941] for the prime-counting function and for the sequence of primes, we may obtain, from [\[againCramer\]](#againCramer){reference-type="eqref" reference="againCramer"}, that $$\label{2707273707972717870707P7M1A} \frac{1}{ M \log ^2\left(\frac{x \log \left(\frac{x \log \left(\frac{x}{\log (x)-1.1}\right)}{\log (x)-1.1}\right)}{\log (x)-1.1}\right)} \leq f'(x) \leq 0.51,$$ for sufficiently large $x$. By taking the natural logarithm of $f'(x)$ and the bounds in [\[2707273707972717870707P7M1A\]](#2707273707972717870707P7M1A){reference-type="eqref" reference="2707273707972717870707P7M1A"}, and then dividing by $\log(x)$, it is a matter of routine to verify that the limit as $x \to \infty$ of the resultant bounds vanishes, so that $\eta(f')$ exists, as desired, with $\eta(f') = 0$. By construction, we have that $\pi(x) \leq f(x) \leq \pi(x) + 1$ for all $x$. So, we find that $$\label{209293909992919785777P7M1A} \frac{\log \left(\frac{x}{\log (x)-1}\right)}{\log (x)} \leq \frac{ \log(f(x)) }{\log(x)} \leq \frac{\log \left(\frac{x}{\log (x)-1.1}+1\right)}{\log (x)}.$$ Taking the limit as $x \to \infty$ of the upper and lower bounds given in [\[209293909992919785777P7M1A\]](#209293909992919785777P7M1A){reference-type="eqref" reference="209293909992919785777P7M1A"} yields the same value of $1$ for both, so that $\eta(f) = 1$, as desired. So, we have that $f$ is a differentiable function and is monotonically increasing and positive, and we have that $\eta(f)$ and $\eta(f')$ exist and that $0 < \eta(f) \leq 1$, and we have that $\lim_{x \to \infty} f(x) = \infty$. So, from Theorem [\[theoremSV\]](#theoremSV){reference-type="ref" reference="theoremSV"}, we have that $f$ is a Champernowne function.
◻ # Discussion Following the work of Szüsz and Volkmann, as in the main article that has inspired our work [@SzuszVolkmann1994], we adopt the convention whereby strings of digits are counted "with overlaps allowed" in the sense that two occurrences of the same substring in a larger string are counted separately whether or not these occurrences happen to overlap. For example, Szüsz and Volkmann [@SzuszVolkmann1994] provide the illustration whereby the substring $131$ is counted four times within $713131051310131$, noting the substring $131$ overlapping with itself within $13131$. This convention concerning normal numbers does not agree with the Mathematica commands `StringCount` or `SequenceCount`, but, as a way of avoiding this kind of issue, we may instead use the Wolfram `StringPosition` command. For example, inputting `StringPosition["713131051310131", "131"]` into the Mathematica Computer Algebra System, we obtain a list of four elements, which agrees with Szüsz and Volkmann's convention for enumerating subsequences of digits. By taking the first 10 million entries of the integer sequence $( \pi(n) : n \in \mathbb{N}_{0})$, and then converting this subsequence into a string without commas or spaces or brackets, and then counting the number of occurrences of each digit among $1$, $2$, $\ldots$, $9$, $0$ using the Wolfram `StringPosition` function, and then computing the frequency by dividing by the length of the string corresponding to $( \pi(n) : n \in \mathbb{N}_{\leq 10,000,000} )$, we obtain the frequencies listed in Table [1](#TablePCCE){reference-type="ref" reference="TablePCCE"}. Digit Frequency corresponding to $( \pi(n) )_{n \in \mathbb{N}_{\leq 10^7}}$ ------- ------------------------------------------------------------------------ 1 0.110875 2 0.111823 3 0.112635 4 0.113276 5 0.113596 6 0.102678 7 0.0835609 8 0.0836607 9 0.0839711 0 0.0839241 : Numerical evidence of the simple normality of the PCCE constant.   
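The computation behind Table [1](#TablePCCE){reference-type="ref" reference="TablePCCE"} can be replicated at a smaller scale in a few lines of Python; the cutoff $10^{5}$ below is an arbitrary choice, and since single digits cannot overlap themselves, plain substring counting agrees here with the overlap convention described above.

```python
# Digit frequencies in the concatenation of pi(1), ..., pi(10^5)
N = 10 ** 5
sieve = bytearray([1]) * (N + 1)
sieve[0] = sieve[1] = 0
for i in range(2, int(N ** 0.5) + 1):
    if sieve[i]:
        sieve[i * i::i] = bytearray(len(range(i * i, N + 1, i)))

pieces, count = [], 0
for n in range(1, N + 1):
    count += sieve[n]  # count now equals pi(n)
    pieces.append(str(count))
digits = "".join(pieces)

freqs = {d: digits.count(d) / len(digits) for d in "0123456789"}
print(freqs)  # each frequency is already within a few percent of 1/10 at this scale
```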
For each of the given frequencies $f$ shown in Table [1](#TablePCCE){reference-type="ref" reference="TablePCCE"}, we have that $\left| f - \frac{1}{10} \right| < 0.0165$. The data in Table [1](#TablePCCE){reference-type="ref" reference="TablePCCE"} may suggest that lower-value digits, according to the given ordering, may appear more frequently within the PCCE constant, say, in the sense that the frequencies, within substrings corresponding to $(\pi(n) : n \in \mathbb{N}_{0})$, for lower-order digits may be, in general, higher compared to the frequencies for higher-valued digits. This recalls the statistical principle known as *Benford's law*, and we encourage the number-theoretic exploration of this phenomenon with the use of the PCCE constant. One may wonder why it would be appropriate, for the purposes of our application of the combinatorial method due to Szüsz and Volkmann [@SzuszVolkmann1994], to use the formulation of Cramér's conjecture in [\[mainCramer\]](#mainCramer){reference-type="eqref" reference="mainCramer"}, as opposed to the estimate for prime gaps that Cramér proved under the assumption of the RH. The problem, here, has to do with the lower bound that would correspond to [\[underCramer\]](#underCramer){reference-type="eqref" reference="underCramer"}, if estimates other than Cramér's conjecture were to be used. If one were to attempt to make use of a lower bound as in $$\label{attemptRH} \frac{1}{M \sqrt{p_{\pi(x)}} \log p_{\pi(x)} } \leq f'(x) \leq 0.51,$$ under the assumption of the RH, in the hope of applying the estimate of prime gaps shown in [\[needsRH\]](#needsRH){reference-type="eqref" reference="needsRH"}, we encounter a problem concerning the behaviour of prime gaps, in the sense described as follows.
To apply Theorem [\[theoremSV\]](#theoremSV){reference-type="ref" reference="theoremSV"}, the limit corresponding to $\eta(f')$ would have to exist, but, by manipulating the inequalities in [\[attemptRH\]](#attemptRH){reference-type="eqref" reference="attemptRH"} so that $$\label{showRHnotenough} \frac{ \log\left( \frac{1}{M \sqrt{p_{\pi(x)}} \log p_{\pi(x)} } \right) }{\log(x)} \leq \frac{ \log\left( f'(x) \right) }{\log(x)} \leq \frac{ \log\left( 0.51\right)}{\log(x)},$$ we encounter the following problem. Taking the limit as $x \to \infty$ of the bounds in [\[showRHnotenough\]](#showRHnotenough){reference-type="eqref" reference="showRHnotenough"}, the Prime Number Theorem gives that the limit of the lower bound reduces to $-\frac{1}{2}$, whereas the limit of the upper bound vanishes, so the values of these limits do not imply that $\eta(f')$ exists. How could the RH be used, in place of the formulation of Cramér's conjecture in [\[mainCramer\]](#mainCramer){reference-type="eqref" reference="mainCramer"}, to prove the normality of the PCCE constant?

Digit   Frequency corresponding to $\left( \left\lfloor \sqrt{n} \right\rfloor \right)_{n \in \mathbb{N}_{\leq 10^7}}$
------- ----------------------------------------------------------------------------------------------------------------
1       0.156047
2       0.198942
3       0.0980257
4       0.0740972
5       0.0758164
6       0.0762815
7       0.0776263
8       0.0793403
9       0.0810544
0       0.0827685

: Frequencies associated with the appearance of digits in the Szüsz--Volkmann constant (cf. [@PollackVandehey2015Besicovitch; @SzuszVolkmann1994]) indicated in Example [\[sqrtcons\]](#sqrtcons){reference-type="ref" reference="sqrtcons"}.
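The frequency computation behind the table above is easy to reproduce in miniature; a hedged Python sketch (the function name is ours, and with a cutoff smaller than the $10^7$ used in the table the values only roughly match):

```python
from math import isqrt

def digit_frequencies(N):
    """Digit frequencies of the concatenation of the decimal expansions
    of floor(sqrt(n)) for 1 <= n <= N, i.e. the digit string of the
    Szusz-Volkmann constant truncated at n = N."""
    s = "".join(str(isqrt(n)) for n in range(1, N + 1))
    return {d: s.count(d) / len(s) for d in "1234567890"}

freqs = digit_frequencies(10**5)
# the ten frequencies sum to 1; low digits dominate, as in the table
assert abs(sum(freqs.values()) - 1.0) < 1e-12
```

Replacing `isqrt` with a prime-counting function would give the analogous computation for the PCCE constant.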
The data in Table [1](#TablePCCE){reference-type="ref" reference="TablePCCE"} may be regarded as offering strong evidence for the normality of the PCCE constant if we compare these data to the corresponding frequencies for the Szüsz--Volkmann constant (cf. [@PollackVandehey2015Besicovitch; @SzuszVolkmann1994]) given by concatenating the digit expansions of consecutive integer parts of the square root function, as in Example [\[sqrtcons\]](#sqrtcons){reference-type="ref" reference="sqrtcons"}. Given the notoriously unpredictable behaviour of prime gaps, one might think that the frequencies of digits in the PCCE constant would be similarly unpredictable, compared to the Szüsz--Volkmann constant, but the data in Tables [1](#TablePCCE){reference-type="ref" reference="TablePCCE"} and [2](#sqrttable){reference-type="ref" reference="sqrttable"} suggest that the PCCE constant is actually much better behaved than the $a(n) = \left\lfloor \sqrt{n} \right\rfloor$ case of [\[generalconcatenate\]](#generalconcatenate){reference-type="eqref" reference="generalconcatenate"}. For example, the largest frequency in Table [2](#sqrttable){reference-type="ref" reference="sqrttable"} is much farther from the desired mean of $0.1$ than is the largest frequency for the PCCE constant, and similarly for the smallest frequency in Table [2](#sqrttable){reference-type="ref" reference="sqrttable"}. ## Acknowledgements {#acknowledgements .unnumbered} The author was supported through a Killam Postdoctoral Fellowship from the Killam Trusts. The author thanks Joel E. Cohen for useful comments concerning the subject of this article, and Karl Dilcher for useful discussions concerning this article. ## References {#references .unnumbered}
D.H. Bailey and R.E. Crandall, Random generators and normal numbers, *Experiment. Math.*, **11** (2002), 527--546.
V. Becher, S. Figueira, and R. Picchi, Turing's unpublished algorithm for normal numbers, *Theoret. Comput. Sci.*, **377** (2007), 126--138.
A.S. Besicovitch, The asymptotic distribution of the numerals in the decimal representation of the squares of the natural numbers, *Math. Z.*, **39** (1935), 146--156.
E. Catt, M. Coons, and J. Velich, Strong normality and generalized Copeland-Erdős numbers, *Integers*, **16** (2016), Paper No. A11, 10.
D.G. Champernowne, The construction of decimals normal in the scale of ten, *J. London Math. Soc.*, **8** (1933), 254--260.
A.H. Copeland and P. Erdös, Note on normal numbers, *Bull. Amer. Math. Soc.*, **52** (1946), 857--860.
H. Cramér, On the order of magnitude of the difference between consecutive prime numbers, *Acta Arith.*, **2** (1936), 23--46.
H. Davenport and P. Erdös, Note on normal decimals, *Canad. J. Math.*, **4** (1952), 58--63.
J.-M. De Koninck and I. Kátai, The number of large prime factors of integers and normal numbers, in: *Publications mathématiques de Besançon. Algèbre et théorie des nombres, 2015*, Publ. Math. Besançon Algèbre Théorie Nr. Vol. 2015, Presses Univ. Franche-Comté (Besançon, 2016), pp. 5--12.
J.-M. De Koninck and I. Kátai, On a problem on normal numbers raised by Igor Shparlinski, *Bull. Aust. Math. Soc.*, **84** (2011), 337--349.
P. Dusart, The $k$th prime is greater than $k(\ln k+\ln\ln k-1)$ for $k\geq 2$, *Math. Comput.*, **68** (1999), 411--415.
P. Dusart, Estimates of Some Functions Over Primes without R.H., (2010), arXiv:1002.0442v1.
M.G. Madritsch, J.M. Thuswaldner, and R.F. Tichy, Normality of numbers generated by the values of entire functions, *J. Number Theory*, **128** (2008), 1127--1145.
Y. Nakai and I. Shiokawa, Discrepancy estimates for a class of normal numbers, *Acta Arith.*, **62** (1992), 271--284.
I. Niven and H.S. Zuckerman, On the definition of normal numbers, *Pacific J. Math.*, **1** (1951), 103--109.
P. Pollack and J. Vandehey, Besicovitch, bisection, and the normality of $0.(1)(4)(9)(16)$ $(25)$ $\cdots$, *Amer. Math. Monthly*, **122** (2015), 757--765.
P. Pollack and J. Vandehey, Some normal numbers generated by arithmetic functions, *Canad. Math. Bull.*, **58** (2015), 160--173.
B. Rosser, Explicit bounds for some functions of prime numbers, *Am. J. Math.*, **63** (1941), 211--232.
P. Szüsz and B. Volkmann, A combinatorial method for constructing normal numbers, *Forum Math.*, **6** (1994), 399--414.
J. Vandehey, The normality of digits in almost constant additive functions, *Monatsh. Math.*, **171** (2013), 481--497.

John M. Campbell
Department of Mathematics and Statistics
Dalhousie University
`jmaxwellcampbell@gmail.com`
--- abstract: | In this paper, we describe the possible disconnected complex reductive algebraic groups $E$ with component group $\Gamma = E/E_0$. We show that there is a natural bijection between such groups $E$ and algebraic extensions of $\Gamma$ by $Z(E_0)$. author: - | Marisa Gaetz[^1]\ Department of Mathematics\ MIT, Cambridge, MA 02139 - | David A. Vogan, Jr.\ 2-355, Department of Mathematics\ MIT, Cambridge, MA 02139 bibliography: - biblio.bib title: Disconnected reductive groups --- # Introduction {#sec:intro} This paper is concerned with the general problem of understanding and classifying *disconnected* reductive algebraic groups. If $E$ is such a group, and $G:=E_0$ its identity component, then $$\label{eq:disc1}\begin{aligned} \Gamma := E/&E_0 = E/G\\ 1 \longrightarrow G \longrightarrow &E \buildrel p_E \over \longrightarrow \Gamma \longrightarrow 1 \end{aligned}$$ with $\Gamma$ a finite group. We will begin in Section [2](#sec:connected){reference-type="ref" reference="sec:connected"} with Chevalley's description of the connected reductive algebraic group $G$. We will then use this to describe the possibilities for $E$ in Section [3](#sec:disconnected){reference-type="ref" reference="sec:disconnected"}. # Connected reductive algebraic groups {#sec:connected} We begin with a complex connected reductive algebraic group $G$ equipped with a *pinning*: **Definition 1** (see for example [@AV Definition 1.18]). Suppose $G$ is a complex connected reductive algebraic group. A *pinning* of $G$ consists of 1. a Borel subgroup $B \subset G$; 2. a maximal torus $H \subset B$; and 3. for each simple root $\alpha \in \Pi (B,H)$, a choice of basis vector $X_{\alpha} \in \mathfrak{g}_{\alpha}$. [\[se:rootdata\]]{#se:rootdata label="se:rootdata"} The reason we use [@AV] as a reference rather than something older and closer to original sources is that this reference is concerned, as we will be, with representation theory of disconnected groups. 
Write $$\label{eq:roots} \begin{tabular}{r c l c r c l } $R(G,H)$ & \hskip -.2cm \mbox{\Large$\subset$} & \hskip -.2cm $X^*(H)$, & \hspace{.25cm} & $R^{\vee} (G,H)$ & \hskip -.2cm\mbox{\Large$\subset$} & \hskip -.2cm $X_* (H)$ \\ $\Pi (B,H)$ & \hskip -.2cm\mbox{\Large$\subset$} & \hskip -.2cm $R^+ (B,H)$, & \hspace{.25cm} & $\Pi^{\vee}(B,H)$ & \hskip -.2cm\mbox{\Large$\subset$} & \hskip -.2cm $(R^{\vee})^+ (B,H)$ \end{tabular}$$ for the roots, coroots, simple roots, and simple coroots that are specified by the pinning $\{ B,H,X_{\alpha} \}$ of $G$. Here, the (co)roots live in the (co)character lattice and the simple (co)roots live in the set of positive (co)roots. For clarity, we will try to write $\alpha$ for simple roots and $\beta$ for arbitrary roots. Each $\beta\in R(G,H)$ defines a reflection $$\label{eq:sbeta} s_\beta \in \mathop{\mathrm{Aut}}(X^*(H)),\quad s_\beta(\lambda) = \lambda - \langle\beta^\vee,\lambda\rangle\beta \qquad (\lambda \in X^*(H)),$$ where $\langle \cdot , \cdot \rangle$ denotes the natural pairing between the dual lattices $X_*(H)$ and $X^*(H)$. These reflections generate the *Weyl group of $H$ in $G$*: $$\label{eq:W} W(G,H):=\langle \{s_\beta \mid \beta\in R(G,H)\} \rangle \subset \mathop{\mathrm{Aut}}(X^*(H)).$$ For each $s_{\beta} \in \mathop{\mathrm{Aut}}(X^*(H))$, there is a transpose automorphism of $X_* (H)$: $$\label{eq:sbetavee} s_{\beta^\vee} \in \mathop{\mathrm{Aut}}(X_*(H)),\quad s_{\beta^\vee}(\ell) = \ell - \langle\ell,\beta\rangle\beta^\vee \qquad (\ell \in X_*(H)).$$ The inverse transpose isomorphism $$\label{eq:inv-transpose} \mathop{\mathrm{Aut}}(X^*(H)) \simeq \mathop{\mathrm{Aut}}(X_*(H)), \qquad T \mapsto {}^t T^{-1}$$ identifies $W(G,H)$ with the group of automorphisms of $X_*(H)$ generated by the various $s_{\beta^\vee}$. **Proposition 1** (Springer [@Springer Proposition 8.1.1]). *Let $G$ be a complex connected reductive algebraic group with pinning $\{ B,H,X_{\alpha} \}$. 
Let $\mathbb{G}_a$ be $\mathbb{C}$, viewed as an additive group, with Lie algebra $\mathop{\mathrm{Lie}}(\mathbb{G}_a) = \mathbb C$.* 1. *For $\beta \in R(G,H)$, there is an isomorphism $u_{\beta}$ of $\mathbb{G}_a$ onto a closed subgroup $U_{\beta}$ of $G$ such that $\mathop{\mathrm{Lie}}(U_{\beta}) = \mathfrak{g}_{\beta}$.* 2. *If $\alpha \in \Pi(G,H)$, then $u_\alpha$ is characterized by requiring $du_\alpha(1) = X_\alpha$.* 3. *$H$ and the $U_{\beta}$ with $\beta \in R(G,H)$ generate $G$.* **Proposition 1** (Springer [@Springer Proposition 8.2.4 and Corollary 8.2.10]). *Let $G$ be a complex connected reductive algebraic group with pinning $\{ B,H,X_{\alpha} \}$. Let $\widetilde{R}^+$ be an arbitrary system of positive roots in $R(G,H)$, with simple roots $\widetilde\Pi$.* 1. *$H$ and the $U_{\alpha}$ with $\alpha \in \widetilde\Pi$ generate a Borel subgroup of $G$.* 2. *There is a unique $w \in W(G,H)$ with $\widetilde{R}^+ = w.R^+(B,H)$.* In this way, we see that the Borel subgroup $B$ in the pinning $\{ B,H,X_{\alpha} \}$ is determined by the $X_{\alpha}$, so we will often use $\{ H, X_{\alpha} \}$ to denote a pinning. As for any group, there are natural short exact sequences $$\label{eq:intout}\begin{aligned} 1 &\longrightarrow \mathop{\mathrm{Int}}(G) \longrightarrow \mathop{\mathrm{Aut}}(G) \buildrel p_{\mathop{\mathrm{Aut}}}\over\longrightarrow \mathop{\mathrm{Out}}(G) \longrightarrow 1\\ & 1 \longrightarrow Z(G) \longrightarrow G \buildrel p_G\over\longrightarrow \mathop{\mathrm{Int}}(G) \longrightarrow 1, \end{aligned}$$ where $\mathop{\mathrm{Aut}}(G)$ denotes the algebraic automorphisms of $G$; $$\label{eq:int} \mathop{\mathrm{Int}}(G) := \{ \mathop{\mathrm{Ad}}(g) \mid g \in G \} = \{ \text{inner automorphisms of } G \};$$ and $\mathop{\mathrm{Out}}(G) := \mathop{\mathrm{Aut}}(G) / \mathop{\mathrm{Int}}(G)$. In particular, $\mathop{\mathrm{Int}}(G) \simeq G/Z(G)$. **Proposition 1**. *Let $G$ be a complex connected reductive algebraic group. 
The group $\mathop{\mathrm{Int}}(G)$ acts simply transitively on the set of pinnings for $G$.* *Proof.* Let $\{ B, H, X_{\alpha} \}$ and $\{ B', H', X_{\alpha'}' \}$ be two pinnings for $G$, and let $\Pi$ and $\Pi'$ denote the corresponding sets of simple roots. Then by [@Springer Theorem 6.2.7], $B = gB'g^{-1}$ for some $g \in G$. Additionally, $gH'g^{-1}$ is a maximal torus of $gB'g^{-1} = B$, so we have that $H = bgH'g^{-1}b^{-1}$ for some $b \in B$ by [@Springer Theorem 6.4.1]. Setting $\{ X_{\alpha''}'' \} := \mathop{\mathrm{Ad}}(bg)(\{ X_{\alpha'}' \})$ (and letting $\Pi''$ denote the corresponding set of simple roots), we see that $$\mathop{\mathrm{Ad}}(bg)( \{ B', H', X_{\alpha'}' \} ) = \{ B, H, X_{\alpha''}'' \}.$$ Since a unique set of simple roots is determined by the pair $(B,H)$, we get that $\Pi = \Pi''$. Finally, let $h \in H$ be such that $\mathop{\mathrm{Ad}}(h) ( X_{\alpha}'' ) = X_{\alpha}$. (Such an $h$ exists by the linear independence of the simple roots: [@Springer Theorem 8.2.8].) Then $\mathop{\mathrm{Ad}}(hbg)(\{ H',X_{\alpha'}' \}) = \{ H,X_{\alpha} \}$, which proves that the action of $\mathop{\mathrm{Int}}(G)$ is transitive. To see that this action is *simply* transitive, suppose that $$\mathop{\mathrm{Ad}}(g)(\{ B,H,X_{\alpha }\}) = \mathop{\mathrm{Ad}}(g')(\{ B, H,X_{\alpha} \})$$ for some $g,g' \in G$. Then $$(g')^{-1} g \in N_G (H) \hspace{.5cm} \text{ and } \hspace{.5cm} \mathop{\mathrm{Ad}}((g')^{-1}g)(\{ X_{\alpha} \}) = \{ X_{\alpha} \},$$ so [@Springer Proposition 8.2.4] gives that $(g')^{-1}g \cdot H = 1_{W(G,H)}$. In other words, $(g')^{-1}g \in H$. But then [@Springer Proposition 8.1.1] and the relation $$\mathop{\mathrm{Ad}}((g')^{-1}g)(\{ X_{\alpha} \}) = \{ X_{\alpha} \}$$ show that $\mathop{\mathrm{Ad}}((g')^{-1}g)(X_{\alpha}) = X_{\alpha}$ for all $\alpha \in \Pi (B,H)$. It follows that $(g')^{-1}g \in Z(G)$, completing the proof. 
◻ [\[se:disc1\]]{#se:disc1 label="se:disc1"} The *root datum of $G$* is the quadruple $$\label{eq:rootdata2} {\mathcal R}(G) := (X^*(H),R(G,H),X_*(H),R^\vee(G,H))$$ and the *based root datum* is $$\label{eq:basedrootdata} {\mathcal B}(G) := (X^*(H),\Pi(B,H),X_*(H),\Pi^\vee(B,H)).$$ It is worth noting that Proposition [Proposition 1](#prop:pinned){reference-type="ref" reference="prop:pinned"} is essentially Chevalley's theorem that every reductive algebraic group has an associated root datum, and the root datum determines the group up to isomorphism [@Chev]. With these notions established, define $$\label{eq:intout0} \mathop{\mathrm{Aut}}({\mathcal B}(G)) := \left\{T\in \mathop{\mathrm{Aut}}(X^*(H)) \mid \parbox{.35\textwidth}{$\ \ T(\Pi(B,H)) = \Pi(B,H)$\\$ \ {}^t T(\Pi^\vee(B,H)) = \Pi^\vee(B,H)$} \right\}$$ and $$\mathop{\mathrm{Aut}}(G,\{H,X_\alpha\}) := \{\tau\in \mathop{\mathrm{Aut}}(G) \mid \tau(\{H, X_\alpha\}) = \{H,X_\alpha \} \}.$$ Then Proposition [Proposition 1](#prop:pinned){reference-type="ref" reference="prop:pinned"} implies the following: **Corollary 1**. *Let $G$ be a complex connected reductive algebraic group with pinning $\{ H,X_{\alpha} \}$. An automorphism of $G$ preserving the pinning is precisely the same thing as an automorphism of the based root datum: $$\mathop{\mathrm{Aut}}({\mathcal B}(G)) = \mathop{\mathrm{Aut}}(G, \{ H,X_{\alpha} \}) .$$* Elements of $\mathop{\mathrm{Aut}}( {\mathcal B}(G) ) = \mathop{\mathrm{Aut}}(G, \{ H, X_{\alpha} \})$ are called *distinguished* automorphisms of $G$. **Corollary 1**. *Let $G$ be a complex connected reductive algebraic group with pinning $\{ H,X_{\alpha} \}$. 
The group of algebraic automorphisms of $G$ is the semidirect product of the inner automorphisms and the distinguished automorphisms: $$\mathop{\mathrm{Aut}}(G) = \mathop{\mathrm{Int}}(G) \rtimes \mathop{\mathrm{Aut}}(G,\{H,X_\alpha\}).$$ Consequently, $$\mathop{\mathrm{Out}}(G) \simeq \mathop{\mathrm{Aut}}(G,\{H,X_\alpha\}) = \mathop{\mathrm{Aut}}({\mathcal B}(G)) .$$* *Proof.* Let $\{ B,H,X_{\alpha} \}$ be a pinning of $G$. Proposition [Proposition 1](#prop:pinned){reference-type="ref" reference="prop:pinned"} implies that $$\mathop{\mathrm{Int}}(G)\, \mbox{\large$\cap$} \,\mathop{\mathrm{Aut}}(G , \{ H,X_{\alpha} \}) = \{ 1 \}.$$ To see that $\mathop{\mathrm{Int}}(G)$ and $\mathop{\mathrm{Aut}}(G, \{ H,X_{\alpha} \})$ generate $\mathop{\mathrm{Aut}}(G)$, we show that there is a surjective homomorphism $\mathop{\mathrm{Aut}}(G) \rightarrow \mathop{\mathrm{Aut}}(G , \{ H,X_{\alpha} \})$ with kernel $\mathop{\mathrm{Int}}(G)$. To this end, let $\tau \in \mathop{\mathrm{Aut}}(G)$. Then $\tau (H)$ is a maximal torus in $G$ and $\tau (B) \supseteq \tau (H)$ is a Borel subgroup in $G$. By the same reasoning as in the proof of Proposition [Proposition 1](#prop:pinned){reference-type="ref" reference="prop:pinned"}, there exists $g_{\tau} \in G$ such that $$g_{\tau} \tau (H) g_{\tau}^{-1} = H \hspace{.5cm} \text{ and } \hspace{.5cm} g_{\tau} \tau (B) g_{\tau}^{-1} = B.$$ In particular, $\mathop{\mathrm{Ad}}(g_{\tau}) \circ \tau \in \mathop{\mathrm{Aut}}(G, \{ H,X_{\alpha} \})$, and we have a homomorphism $$\begin{aligned} \mathop{\mathrm{Aut}}(G) & \rightarrow \mathop{\mathrm{Aut}}(G,\{ H,X_{\alpha} \}) \\ \tau & \mapsto \mathop{\mathrm{Ad}}(g_{\tau}) \circ \tau .\end{aligned}$$ It is straightforward to check that this homomorphism has kernel $\mathop{\mathrm{Int}}(G)$, completing the proof. 
◻ # Disconnected reductive algebraic groups {#sec:disconnected} [\[se:disc2\]]{#se:disc2 label="se:disc2"} We can now describe the possible disconnected groups $E$ as in [\[eq:disc1\]](#eq:disc1){reference-type="eqref" reference="eq:disc1"}. We will take as given the connected complex reductive algebraic group $G$ as in Section [2](#sec:connected){reference-type="ref" reference="sec:connected"}, specified by the based root datum ${\mathcal B}(G)$. We fix also a finite group $\Gamma$, which may be specified in any convenient fashion. If we are to have a short exact sequence as in [\[eq:disc1\]](#eq:disc1){reference-type="eqref" reference="eq:disc1"}, we will get automatically a group homomorphism $$\label{eq:outer} \mathop{\mathrm{Ad}}\colon \Gamma \rightarrow \mathop{\mathrm{Out}}(G) \simeq \mathop{\mathrm{Aut}}(G, \{ H,X_{\alpha} \}) = \mathop{\mathrm{Aut}}({\mathcal B}(G)).$$ We will take the specification of such a homomorphism $\mathop{\mathrm{Ad}}$ as part of the data (along with $G$ and $\Gamma$) which we are given. For any group $G$, the inner automorphisms act trivially on the center $Z(G)$, so there is a natural homomorphism $$\mathop{\mathrm{Ad}}\colon \mathop{\mathrm{Out}}(G) \rightarrow \mathop{\mathrm{Aut}}(Z(G)).$$ In our setting, [\[eq:outer\]](#eq:outer){reference-type="eqref" reference="eq:outer"} therefore gives $$\label{eq:outerZ} \overline{\mathop{\mathrm{Ad}}}\colon \Gamma \rightarrow \mathop{\mathrm{Aut}}(Z(G)).$$ We pause briefly to recall what sort of group is $\mathop{\mathrm{Aut}}({\mathcal B}(G))$. Recall that the set of simple roots $\Pi(B,H)$ is the set of vertices of a graph $\mathop{\mathrm{Dynkin}}(G)$ with some directed multiple edges, the *Dynkin diagram* of $G$: an edge joins distinct vertices $\alpha$ and $\alpha'$ if and only if $\langle (\alpha')^\vee,\alpha\rangle\, \mbox{\large$\ne$}\, 0$. We will not make further use of the Dynkin diagram, so we do not recall the details. 
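Although we make no further use of the Dynkin diagram, the notion of a diagram automorphism is concrete enough to verify by machine: it is a permutation of the simple roots preserving the Cartan pairings $\langle (\alpha')^\vee, \alpha \rangle$. A minimal Python sketch (all names are ours), for type $A_3$, whose nontrivial diagram automorphism reverses the diagram:

```python
A3 = [[ 2, -1,  0],
      [-1,  2, -1],
      [ 0, -1,  2]]          # Cartan matrix of type A_3

def is_diagram_automorphism(perm, cartan):
    """Check whether the permutation `perm` of the simple roots
    preserves all Cartan pairings <alpha_j^vee, alpha_i>."""
    k = len(cartan)
    return all(cartan[perm[i]][perm[j]] == cartan[i][j]
               for i in range(k) for j in range(k))

assert is_diagram_automorphism([2, 1, 0], A3)      # reversal: an automorphism
assert not is_diagram_automorphism([1, 0, 2], A3)  # swapping nodes 1,2 breaks edges
```

For connected diagrams other than $D_4$ the automorphism group found this way has order one or two, as noted below.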
If $Z(G)$ has dimension $m$, then $$\label{eq:out} \mathop{\mathrm{Aut}}({\mathcal B}(G)) \subset \mathop{\mathrm{Aut}}(\mathop{\mathrm{Dynkin}}(G)) \times \mathop{\mathrm{Aut}}({\mathbb Z}^m).$$ Here the first factor is a finite group of "diagram automorphisms" and the second is the discrete group of $m\times m$ integer matrices of determinant $\pm 1$. To see why we have this containment, recall that $\mathop{\mathrm{Aut}}( {\mathcal B} (G) )$ is the set of automorphisms of the character lattice $X^* (H)$ preserving the set of simple roots. The character lattice $X^*(H)$ has a sublattice of finite index $$\label{eq:almost_product} X^* (H) \supset X^*( H/Z(G)) \times X^* (H/(H\cap [G,G])) \simeq \mathbb{Z}^{\dim (H) - m} \times \mathbb{Z}^{m}$$ which must be preserved by any automorphism of the root datum. The automorphisms of $X^*(H/Z(G))$ preserving the set of simple roots are precisely the diagram automorphisms. If $L_0 \subset L$ is a sublattice of finite index, restriction to $L_0$ defines an inclusion $$\mathop{\mathrm{Aut}}(L) \hookrightarrow \mathop{\mathrm{Aut}}(L_0),$$ identifying $\mathop{\mathrm{Aut}}(L)$ as a subgroup of finite index in $\mathop{\mathrm{Aut}}(L_0)$. It follows that the inclusion [\[eq:out\]](#eq:out){reference-type="eqref" reference="eq:out"} induced by [\[eq:almost_product\]](#eq:almost_product){reference-type="eqref" reference="eq:almost_product"} is of finite index. The automorphism group of a connected Dynkin diagram is small (order one or two except in the case of $D_4$, where the automorphism group is $S_3$). So for semisimple $G$ the possibilities for the homomorphism $\mathop{\mathrm{Ad}}$ are generally quite limited, and can be described concretely and explicitly. Understanding possible maps to $\mathop{\mathrm{Aut}}({\mathbb Z}^m)$ means understanding $m$-dimensional representations of $\Gamma$ over ${\mathbb Z}$. 
This is a more subtle and complicated subject; we will not concern ourselves with it, merely taking $\mathop{\mathrm{Ad}}$ as given somehow. With this information in hand, the quotient group $E/Z(G)$ can now be completely described. Define (using $p_{\mathop{\mathrm{Aut}}}$ from [\[eq:intout\]](#eq:intout){reference-type="eqref" reference="eq:intout"} and $\mathop{\mathrm{Ad}}$ from [\[eq:outer\]](#eq:outer){reference-type="eqref" reference="eq:outer"}) $$\mathop{\mathrm{Aut}}_{\Gamma}(G) := p_{\mathop{\mathrm{Aut}}}^{-1}(\mathop{\mathrm{Ad}}(\Gamma)).$$ We immediately get from [\[eq:intout\]](#eq:intout){reference-type="eqref" reference="eq:intout"} a short exact sequence $$1 \longrightarrow \mathop{\mathrm{Int}}(G) \longrightarrow \mathop{\mathrm{Aut}}_\Gamma(G) \longrightarrow \mathop{\mathrm{Ad}}(\Gamma) \longrightarrow 1.$$ Moreover, Corollary [Corollary 1](#cor:aut=int-dist){reference-type="ref" reference="cor:aut=int-dist"} implies that $$\label{eq:Aut_Gamma-semidirect} \mathop{\mathrm{Aut}}_{\Gamma}(G) = \mathop{\mathrm{Int}}(G) \rtimes \mathop{\mathrm{Ad}}(\Gamma).$$ Note that the image of $\mathop{\mathrm{Ad}}\colon E \rightarrow \mathop{\mathrm{Aut}}(G)$ equals $\mathop{\mathrm{Aut}}_{\Gamma} (G)$. Now, for a pinning $\{ H,X_{\alpha} \}$ of $G$, define $$E (\{ H, X_{\alpha} \}) := \{ e \in E \mid \mathop{\mathrm{Ad}}(e) ( \{ H,X_{\alpha} \}) = \{ H,X_{\alpha} \} \},$$ the subgroup of $E$ defining distinguished automorphisms of $G$ (see [\[eq:intout0\]](#eq:intout0){reference-type="eqref" reference="eq:intout0"}). **Proposition 1**. *Let $E$ be a complex disconnected reductive algebraic group with identity component $G$. Let $\{ H,X_{\alpha} \}$ be a pinning of $G$.* 1. *$G \cap E(\{ H,X_{\alpha} \}) = Z(G)$.* 2. *The map $\overline{p}_E \colon E(\{ H,X_{\alpha} \})/Z(G) \rightarrow \Gamma$ is an isomorphism.* *Proof.* The inclusion $Z(G) \subseteq G \cap E(\{ H,X_{\alpha} \})$ is clear. For the other, suppose that $g \in G \cap E(\{ H,X_{\alpha} \})$. 
Then $$\mathop{\mathrm{Ad}}(g) \in \mathop{\mathrm{Int}}(G) \cap \mathop{\mathrm{Aut}}(G, \{ H,X_{\alpha} \}) = \{ 1 \},$$ where we have used Proposition [Proposition 1](#prop:pinned){reference-type="ref" reference="prop:pinned"}. It follows that $g \in Z(G)$, which proves $(i)$. Next, we claim that $\overline{p}_E$ is injective. To this end, let $e \in E(\{ H,X_{\alpha} \})$ and suppose that $\overline{p}_E ( e Z(G) ) = \text{id}_{\Gamma}$ (i.e. that $e G = \text{id}_{\Gamma}$). Then $$e \in G \cap E(\{ H,X_{\alpha} \}) = Z(G),$$ and injectivity follows. For surjectivity, note that $$\begin{aligned} \mathop{\mathrm{Ad}}(E (\{ H,X_{\alpha} \}) ) &= \mathop{\mathrm{Ad}}(E) \cap \mathop{\mathrm{Aut}}(G,\{ H,X_{\alpha} \}) \\ &= [ \mathop{\mathrm{Int}}(G) \rtimes \mathop{\mathrm{Ad}}(\Gamma ) ] \cap \mathop{\mathrm{Aut}}(G, \{ H,X_{\alpha} \}) \\ &= \mathop{\mathrm{Ad}}(\Gamma),\end{aligned}$$ where we have used [\[eq:Aut_Gamma-semidirect\]](#eq:Aut_Gamma-semidirect){reference-type="eqref" reference="eq:Aut_Gamma-semidirect"} and Proposition [Proposition 1](#prop:pinned){reference-type="ref" reference="prop:pinned"}. Therefore, given a coset $eG \in \Gamma$, we can write $\mathop{\mathrm{Ad}}(eG) = \mathop{\mathrm{Ad}}(e')$ for some $e' \in E(\{ H,X_{\alpha} \})$. Explicitly, this means that $p_{\mathop{\mathrm{Aut}}}(\mathop{\mathrm{Ad}}(e)) = \mathop{\mathrm{Ad}}(e')$, which gives that $\mathop{\mathrm{Ad}}(e)^{-1} \circ \mathop{\mathrm{Ad}}(e') \in \mathop{\mathrm{Int}}(G)$, and hence that $e' \in eG$. In particular, $$\overline{p}_E ( e' Z(G) ) = e G \in \Gamma ,$$ which proves $(ii)$. ◻ Proposition [Proposition 1](#prop:E(H,X)/Z(G)->Gamma){reference-type="ref" reference="prop:E(H,X)/Z(G)->Gamma"} immediately implies the following: **Corollary 1**. *Let $E$ be a complex disconnected reductive algebraic group with identity component $G$.
Then $$E/Z(G) \simeq [G/Z(G)] \rtimes \Gamma ,$$ and $E(\{ H,X_{\alpha} \})$ is an extension of $\Gamma$ by the abelian group $Z(G)$: $$\label{eq:E(H,X)-SES} 1 \longrightarrow Z(G) \longrightarrow E(\{ H,X_{\alpha} \}) \longrightarrow \Gamma \longrightarrow 1.$$* The following theorem shows that the problem of understanding extensions of $\Gamma$ by $G$ can be reduced to the problem of understanding the extensions [\[eq:E(H,X)-SES\]](#eq:E(H,X)-SES){reference-type="eqref" reference="eq:E(H,X)-SES"} of $\Gamma$ by $Z(G)$. **Theorem 1**. *Let $E$ be a complex disconnected reductive algebraic group with identity component $G$. Let $\{ H,X_{\alpha} \}$ be a pinning of $G$.* 1. *There is a natural surjective homomorphism $$G\rtimes E(\{H,X_\alpha\}) \rightarrow E$$ with kernel the antidiagonal copy $$Z(G)_{-\Delta} = \{(z,z^{-1})\mid z \in Z(G)\}$$ of $Z(G)$. Consequently, there is a natural bijection between algebraic extensions $E$ of $\Gamma$ by $G$ and algebraic extensions $E(\{H,X_\alpha\})$ of $\Gamma$ by $Z(G)$ (with the action [\[eq:outerZ\]](#eq:outerZ){reference-type="eqref" reference="eq:outerZ"} of $\Gamma$ on $Z(G)$). These latter are parametrized by the group cohomology $$H^2(\Gamma,Z(G))$$ (see for example [@CE pages 299--303]). The bijection is given by $$E = \left[G\rtimes E(\{H,X_\alpha\})\right]/Z(G)_{-\Delta}.$$* 2. *Define $$Z(G)_\text{\textnormal{fin}}= \text{elements of finite order in $Z(G)$,}$$ the torsion subgroup. Then the natural map $$H^p(\Gamma,Z(G)_\text{\textnormal{fin}}) \rightarrow H^p(\Gamma,Z(G)) \qquad (p\ge 2)$$ is an isomorphism.* *Proof of (i).* By Proposition [Proposition 1](#prop:E(H,X)/Z(G)->Gamma){reference-type="ref" reference="prop:E(H,X)/Z(G)->Gamma"}, $E(\{ H,X_{\alpha} \})$ meets every coset of $G$ in $E$ nontrivially, meaning $G$ and $E(\{ H,X_{\alpha} \})$ generate $E$.
With this established, we see that there is a natural surjective homomorphism $$\begin{aligned} G \rtimes E(\{ H,X_{\alpha} \}) &\rightarrow E \\ (g,e) &\mapsto ge. \end{aligned}$$ It is not hard to see that the kernel of this map is $\{ (z,z^{-1}) \mid z \in Z(G) \}$, i.e. the antidiagonal copy $Z(G)_{-\Delta}$. ◻ [\[se:grpcoh\]]{#se:grpcoh label="se:grpcoh"} Before we prove part $(ii)$, we explain some details about the description of $E(\{H,X_\alpha\})$ mentioned in Theorem [Theorem 1](#thm:listE){reference-type="ref" reference="thm:listE"}$(i)$. The cohomology group $H^2(\Gamma,Z(G))$ can be described as the group $$Z^2(\Gamma,Z(G))/B^2(\Gamma,Z(G))$$ of cocycles modulo coboundaries. The set $Z^2(\Gamma,Z(G))$ of cocycles consists of maps $$\begin{aligned} c&\, \colon \Gamma \times \Gamma \rightarrow Z(G),\\ \gamma_1\cdot c(\gamma_2,\gamma_3) -c(\gamma_1\gamma_2,\gamma_3) &+ c(\gamma_1,\gamma_2\gamma_3) - c(\gamma_1,\gamma_2) = 0\qquad (\gamma_i \in \Gamma).\end{aligned}$$ A coboundary begins with an arbitrary map $b\,\colon \Gamma \rightarrow Z(G)$. The corresponding coboundary is $$db(\gamma_1,\gamma_2) = b(\gamma_1) + \gamma_1\cdot b(\gamma_2) -b(\gamma_1\gamma_2);$$ it is standard and easy to check that $db$ is a cocycle. Given a cocycle $c$, the extension $E(\{H,X_\alpha\})$ is generated by $Z(G)$ and additional elements $$\{\tilde\gamma \mid \gamma \in \Gamma\}$$ which are subject to the relations $$\label{eq:discrel} \begin{aligned} \tilde\gamma_1 \tilde\gamma_2 &= c(\gamma_1,\gamma_2)\widetilde{\gamma_1\gamma_2}\\ \tilde\gamma z \tilde\gamma^{-1} &= \gamma\cdot z \qquad (\gamma\in \Gamma, z \in Z(G)).
\end{aligned}$$ Conversely, if we are given an extension $E(\{H,X_\alpha\})$ as in Theorem [Theorem 1](#thm:listE){reference-type="ref" reference="thm:listE"}$(i)$, and we choose for each $\gamma\in \Gamma$ a preimage $\tilde\gamma \in E(\{H,X_\alpha\})$, then these choices must satisfy relations of the form [\[eq:discrel\]](#eq:discrel){reference-type="eqref" reference="eq:discrel"}, with $c$ a cocycle. Replacing the representatives $\tilde\gamma$ with alternate representatives $$\tilde\gamma' = b(\gamma)\tilde\gamma \qquad (b(\gamma) \in Z(G))$$ (which are the only possibilities) replaces $c$ by $c' = c+db$. More generally, consider a group $M$ and an abelian group $A$ together with a group action of $M$ on $A$. Recall that for $p \geq 0$, $H^p( M,A )$ can be described as the group $$Z^p( M,A )/B^p(M,A)$$ of cocycles modulo coboundaries. When $M$ is a finite group, the following result of Eckmann [@transfer] gives a condition under which the higher cohomology groups $H^p(M,A)$ (with $p > 0$) vanish. This result will help us prove Theorem [Theorem 1](#thm:listE){reference-type="ref" reference="thm:listE"}$(ii)$. **Proposition 1** (Eckmann [@transfer Theorem 5]). *Suppose that $\Gamma$ is a finite group of order $n$, and that $A$ is an abelian group on which $\Gamma$ acts. Then multiplication by $n$ acts by zero on $H^p(\Gamma,A)$. If multiplication by $n$ is an *automorphism* of $A$, then $H^p(\Gamma,A) = 0$ for all $p>0$.* Because the functor $$H^0(\{1\},A) = A$$ is exact, it follows that $H^p(\{1\},A) = 0$ for all $p>0$. Therefore the obvious restriction map $$Q\colon H^p(\Gamma,A) \rightarrow H^p(\{1\},A) \qquad (p>0)$$ must be zero. Eckmann in [@transfer] defines a natural *transfer* homomorphism $$T\colon H^p(\{1\},A) \rightarrow H^p(\Gamma,A),$$ and proves that $T\circ Q$ is multiplication by $n$ [@transfer Theorem 5]. Since $Q=0$, it follows that multiplication by $n$ must be zero on $H^p(\Gamma,A)$ for all $p>0$. The last assertion is immediate. 
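The cocycle and coboundary formulas above can be checked mechanically. Here is a minimal, self-contained Python sketch (all names are ours) for the toy case $\Gamma = \mathbb{Z}/2$ acting on $A = \mathbb{Z}/4$ by negation, verifying that every coboundary $db$ satisfies the 2-cocycle identity:

```python
from itertools import product

n = 4                                   # A = Z/4Z, written additively
def act(g, a):                          # Gamma = Z/2Z acts by negation
    return a % n if g == 0 else (-a) % n

def db(b):
    """Coboundary of an arbitrary map b: Gamma -> A,
    db(g1, g2) = b(g1) + g1.b(g2) - b(g1 g2)."""
    return {(g1, g2): (b[g1] + act(g1, b[g2]) - b[(g1 + g2) % 2]) % n
            for g1, g2 in product(range(2), repeat=2)}

def is_cocycle(c):
    """The 2-cocycle identity from the text, for all triples in Gamma."""
    return all((act(g1, c[(g2, g3)]) - c[((g1 + g2) % 2, g3)]
                + c[(g1, (g2 + g3) % 2)] - c[(g1, g2)]) % n == 0
               for g1, g2, g3 in product(range(2), repeat=3))

# every one of the 16 coboundaries is a cocycle, as claimed
assert all(is_cocycle(db({0: b0, 1: b1}))
           for b0, b1 in product(range(n), repeat=2))
```

The same brute-force check scales to any finite $\Gamma$ and finite coefficient group, which is in the spirit of keeping these calculations accessible to a computer.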
Finally, we are ready to prove Theorem [Theorem 1](#thm:listE){reference-type="ref" reference="thm:listE"}$(ii)$. *Proof of $(ii)$.* Set $D:= Z(G)/Z(G)_\text{\textnormal{fin}}$ and consider the short exact sequence of $\Gamma$-modules $$\label{eq:ses} 1\longrightarrow Z(G)_\text{\textnormal{fin}}\longrightarrow Z(G) \longrightarrow D \longrightarrow 1.$$ Because $Z(G)$ is a reductive abelian group, it is a direct sum of copies of ${\mathbb C}^\times$ and a finite abelian group; so $D$ is a direct sum of copies of ${\mathbb C}^\times/(\text{roots of unity})$. It follows easily that multiplication by $n$ is an automorphism of $D$ for every positive integer $n$. By Proposition [Proposition 1](#prop:transfer){reference-type="ref" reference="prop:transfer"}, $$H^p(\Gamma,D) = 0 \qquad (p > 0).$$ Examining the long exact sequence in $\Gamma$-cohomology attached to [\[eq:ses\]](#eq:ses){reference-type="eqref" reference="eq:ses"}, we deduce Theorem [Theorem 1](#thm:listE){reference-type="ref" reference="thm:listE"}$(ii)$. (In fact we can arrange for all values of a representative cocycle to have order dividing some power of $|\Gamma|$.) ◻ One reason that Theorem [Theorem 1](#thm:listE){reference-type="ref" reference="thm:listE"}$(ii)$ is interesting is that it keeps calculations accessible to a computer. We would like the cocycle defining our disconnected group to take values in $Z(G)_\text{\textnormal{fin}}$, because such elements are easily described in a computer. **Remark 1**. This argument works equally well over any algebraically closed field of characteristic zero (and shows that the extensions described by Theorem [Theorem 1](#thm:listE){reference-type="ref" reference="thm:listE"} are independent of the field). In finite characteristic the same is true as long as the characteristic does not divide the order of $\Gamma$. If the characteristic *does* divide the order of $\Gamma$, then the extension $E$ is no longer a reductive group, and matters are more complicated.
[^1]: Marisa Gaetz was supported by the NSF Graduate Research Fellowship Program under Grant No. 2141064 and by the Fannie & John Hertz Foundation.
--- abstract: | As is well known, the Minkowski type problem is a cornerstone of the Brunn-Minkowski theory in Euclidean space. The Heisenberg group, viewed as a sub-Riemannian space, is the simplest non-Abelian degenerate Riemannian space, and it is completely different from a Euclidean space. By analogy with the Minkowski type problem in Euclidean space, the Minkowski type problem in Heisenberg groups is still open. In the present paper, we develop for the first time a sub-Riemannian version of the Minkowski type problem in the horizontal distributions of Heisenberg groups, and further give a positive answer to this sub-Riemannian Minkowski type problem via the variational method. address: - Bin Chen, Juan Zhang, Peibiao Zhao, and Xia Zhao - - - author: - Bin Chen - Juan Zhang - Peibiao Zhao - Xia Zhao title: The Minkowski problem in Heisenberg groups --- [^1] [^2] [^3] # Introduction The Brunn-Minkowski theory of convex bodies (a convex body is a compact, convex set), which originated from Brunn's thesis [@Br] and Minkowski's paper [@M], has been an active research field of convex geometry in Euclidean space $\mathbb{R}^n$ for nearly a century (the books [@G; @S] are excellent references for this theory). This theory centers around the study of geometric functionals of convex bodies as well as the differentials of these functionals, and depends heavily on analytic tools such as the cosine transform on the unit sphere and Monge-Ampère type equations. The differentials of these geometric functionals are geometric measures, and these geometric measures are fundamental concepts in the Brunn-Minkowski theory. It is well known that the Minkowski type problem is one of the important components of the Brunn-Minkowski theory. A Minkowski problem, roughly speaking, is a prescribed measure problem, that is, a characterization problem for a geometric measure generated by convex bodies. 
## An overview of the Minkowski problem Without doubt, the most fundamental geometric functional in the Brunn-Minkowski theory is the volume functional. The variation of the volume functional produces the most important geometric measure: the surface area measure. The well-known *classical Minkowski problem*, which characterizes the surface area measure, asks for necessary and sufficient conditions on a Borel measure on the unit sphere to be the surface area measure of a convex body. More than a century ago, the classical Minkowski problem was first introduced and studied by Minkowski himself [@M0; @M], who demonstrated both existence and uniqueness of solutions for the problem when the given measure is either discrete or has a continuous density. Aleksandrov [@A; @A1] and Fenchel-Jessen [@FJ] independently solved the problem in 1938 for arbitrary measures. Analytically, the Minkowski problem is equivalent to solving a Monge-Ampère equation. Establishing the regularity of the solution to the Minkowski problem is difficult and has led to a long series of influential works (see, for example, Lewy [@L], Nirenberg [@N], Cheng-Yau [@C], Pogorelov [@P1], Caffarelli [@Ca]). The $L_p$ Minkowski problem related to the $L_p$ surface area measure, introduced in [@L1], is an extension of the classical Minkowski problem; it has been intensively investigated and has seen great developments, see, e.g., [@BB; @Ch; @HLYZ; @J; @LO; @LYZ2; @St; @Z] and the references therein. The case $p=1$ is the classical Minkowski problem, the case $p=0$ is the logarithmic Minkowski problem (see [@BLYZ1]), and the case $p=-n$ is the centro-affine Minkowski problem (see Chou-Wang [@CW], Lu-Wang [@LW], and Zhu [@Z1]). Recently, much effort has been made to develop a nonhomogeneous analogue of the $L_p$ Minkowski problem; this new problem is called the Orlicz-Minkowski problem for convex bodies (see [@Hab]). 
It can be posed with the $L_p$ surface area measure (in the $L_p$ Minkowski problem) replaced by the Orlicz surface area measure. Solutions to the Orlicz-Minkowski problem can be found in [@GH; @Hab; @HH; @Li] and the references therein. Moreover, the Minkowski-type problems (i.e., the classical, $L_p$, and Orlicz Minkowski problems) also have dual analogues. It was only very recently that the dual curvature measures, which are dual to the surface area measures, were discovered in the seminal work [@Ha] by Huang, Lutwak, Yang and Zhang. The prescribed dual curvature measure problem is called the dual Minkowski problem. Subsequently, Lutwak et al. [@LYZ5] and Zhu et al. [@ZZ] introduced and studied the dual $L_p$ Minkowski problem and the dual Orlicz-Minkowski problem, respectively. See, e.g., [@BH; @BLYZZ; @CL; @HP; @Ha; @LLL; @LSW; @Zha1; @Zha2] for more works on the rapidly developing dual Minkowski type problems. With the development of the Brunn-Minkowski theory in the Euclidean setting, Li and Xu in a recent seminal article [@LX] started the study of the Brunn-Minkowski theory in hyperbolic space, and solved the Horospherical Minkowski problem and the more general Horospherical Christoffel-Minkowski problem based on the concept of the hyperbolic $p$-sum. Motivated by the foregoing celebrated works, we in the present paper investigate and confirm a sub-Riemannian version of the Minkowski problem in sub-Riemannian Heisenberg groups. 
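For orientation (standard background, not a claim of this paper): when the prescribed measure has a density $f$, the classical Minkowski problem amounts to a Monge-Ampère equation on the sphere for the support function $h$ of the unknown body:

```latex
% Classical Minkowski problem, smooth formulation (standard; see the
% surveys cited above).  Given f > 0 on the unit sphere, find a support
% function h with
\det\bigl(\nabla^2 h + h\,\mathrm{I}\bigr) = f \qquad \text{on } \mathbb{S}^{n-1},
% where \nabla^2 h is the Hessian in an orthonormal frame on the sphere.
% Necessary conditions on the measure f\,du:
\int_{\mathbb{S}^{n-1}} u\, f(u)\, du = 0,
\qquad f\,du \ \text{not concentrated on any great subsphere}.
```

The main theorem of this paper imposes the direct analogues of these two conditions (vanishing centroid; not concentrated on a closed hemisphere) on the prescribed $H$-surface area measure.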
## The sub-Riemannian Heisenberg group The Heisenberg group $\mathbb{H}^n$, the simplest non-Abelian sub-Riemannian space distinct from a Euclidean space, is the Lie group ($\mathbb{R}^{2n+1},\ast$), where we consider in $\mathbb{R}^{2n+1}\equiv\mathbb{R}^n\times \mathbb{R}^n\times\mathbb{R}$ its usual differentiable structure and the noncommutative group law $$\begin{aligned} \label{1.1} g_1\ast g_2= \bigg(z_1+z_2,t_1+t_2+\frac{1}{2}\mathrm{Im}\,\bar{z}_1z_2\bigg),\end{aligned}$$ for $g_1=(z_1,t_1)$ and $g_2=(z_2,t_2)\in\mathbb{H}^n$, where $z=x+iy\in\mathbb{C}^n$, with components $z_l=x_l+iy_l$, is identified with a point of $\mathbb{R}^{2n}$, and $t\in\mathbb{R}$. The identity element is $e=(0,0)$, the inverse element of $g=(z,t)$ is $g^{-1}=(-z,-t)$, and the center of the group is $c=\{(z,t)\in\mathbb{H}^n: z=0\}$. Left translation by $g\in\mathbb{H}^n$ is the mapping $L_g: L_g(\tilde{g})=g\ast\tilde{g}$. For any $\lambda>0$, the mapping $\delta_\lambda: \delta_\lambda(z,t)=(\lambda z,\lambda^2t)$ is called a dilation. The Lie algebra $\mathfrak{h}$ is spanned by the left-invariant vector fields $\{X_l, Y_l, T\}$, $l=1,\cdots,n$, given by $$\begin{aligned} \label{1.2} \nonumber &X_l=dL_g\frac{\partial}{\partial x_l}=\frac{\partial}{\partial x_l}-\frac{y_l}{2}\frac{\partial}{\partial t},\\ &Y_l=dL_g\frac{\partial}{\partial y_l}=\frac{\partial}{\partial y_l} +\frac{x_l}{2}\frac{\partial}{\partial t},\\ \nonumber &T=dL_g\frac{\partial}{\partial t}=\frac{\partial}{\partial t},\end{aligned}$$ where $dL_g$ is the differential of the left translation $L_g$ defined via ([\[1.1\]](#1.1){reference-type="ref" reference="1.1"}). The only non-trivial bracket relations are $[X_l,Y_l]=T$, $l=1,\cdots,n$. In other words, any left-invariant vector field is a linear combination with real coefficients of the vector fields ([\[1.2\]](#1.2){reference-type="ref" reference="1.2"}). The Lie algebra $\mathfrak{h}$ admits a stratification. 
It decomposes as the vector space direct sum $$\mathfrak{h}=\mathfrak{h}_1\oplus\mathfrak{h}_2,$$ where $\mathfrak{h}_1$ is the subspace generated by $\{X_l,Y_l\}_{l=1}^n$ and $\mathfrak{h}_2=[\mathfrak{h}_1,\mathfrak{h}_1]$ is the one-dimensional space generated by $T$. For this reason $\mathbb{H}^n$ is called a Carnot group of step 2. The Haar measure on the Heisenberg group $\mathbb{H}^n$ is induced by the exponential mapping from the Lebesgue measure on $\mathfrak{h}$. The natural volume in $\mathbb{H}^n$ is the Haar measure, which, up to a positive factor, coincides with Lebesgue measure in $\mathbb{R}^{2n+1}$. Lebesgue measure is also the Riemannian volume of the left-invariant metric for which $X_l,Y_l$ and $T$ are orthonormal. We denote by $Vol(E)$ the volume of a (Lebesgue) measurable set $E\subset\mathbb{H}^n$. We denote by $\mathbb{H}_e$ the set of horizontal vectors of the form $h=(z,0)$. Fix a point $g_0=(z_0,t_0)\in\mathbb{H}^n$; the horizontal distribution $\mathbb{H}_{g_0}=g_0\ast\mathbb{H}_e=$ span$\{X_l(g_0),Y_l(g_0)\}_{l=1}^n$ is characterized by $$\begin{aligned} \label{1.3} \mathbb{H}_{g_0}=Ker\bigg(dt_0-\frac{1}{2}\langle z_0,Jdz_0\rangle_{\mathbb{R}^{2n}}\bigg),\end{aligned}$$ where $J=\bigl(\begin{smallmatrix}0&1\\-1&0\end{smallmatrix}\bigr)$, understood blockwise on $\mathbb{R}^n\times\mathbb{R}^n$. Via the left translation mapping we may regard $g_0$ as the origin of $\mathbb{H}_{g_0}$. Having fixed $g_0\in\mathbb{H}^n$, any vector $v\in\mathbb{H}_{g_0}$ can be written as $v=(\bar{v},v_t)\in\mathbb{R}^{2n}\times\mathbb{R}$. Let $\langle\cdot,\cdot\rangle_{\mathbb{H}^{n}}$ be a left-invariant inner product making $\{X_l,Y_l\}_{l=1}^n$ orthonormal. Then the sub-Riemannian metric is defined as follows (see [@FP]): $$\begin{aligned} \label{1.4} \langle v,\omega\rangle_{\mathbb{H}^{n}}= \langle\bar{v},\bar{\omega}\rangle_{\mathbb{R}^{2n}},\ \ \forall v,\omega\in\mathbb{H}_{g_0},\ g_0\in\mathbb{H}^{n},\end{aligned}$$ where $\langle\cdot,\cdot\rangle_{\mathbb{R}^{2n}}$ is the standard scalar product on $\mathbb{R}^{2n}$. 
A Lipschitz curve $\gamma: [0,1]\rightarrow\mathbb{H}^n$ is horizontal if $\gamma^\prime(t)\in\gamma(t)\ast\mathbb{H}_e$ for a.e. $t\in[0,1]$. Equivalently, $\gamma$ is horizontal if there exist functions $f_l\in L^\infty([0,1])$, $l=1,\cdots,2n$, such that $$\gamma^\prime(t)=\sum_{l=1}^n\big(f_lX_l(\gamma) +f_{n+l}Y_l(\gamma)\big),\ \ \text{a.e. on } [0,1].$$ The coefficients $f_l$ are unique, and by the structure of the vector fields $X_l$ and $Y_l$ they satisfy $f_l=\gamma^\prime_l$, where $\gamma=(\gamma_1,\cdots,\gamma_{2n+1})$ are the coordinates of $\gamma$ given by the identification $\mathbb{H}^n=\mathbb{R}^{2n+1}$. Given two points $g$ and $\tilde{g}$, we consider the set of all horizontal curves joining these two points, $$\Gamma(g,\tilde{g})=\{\gamma\ \text{horizontal curve}:\ \gamma(0)=g,\ \gamma(1)=\tilde{g}\}.$$ This set is never empty by Chow's theorem (see [@Be]). The Carnot-Carathéodory (abbreviated CC) distance is then defined as the infimum of the lengths of horizontal curves in $\Gamma(g,\tilde{g})$: $$d_{CC}(g,\tilde{g})=\inf_{\Gamma(g,\tilde{g})} \int_0^1|\gamma^\prime(t)|dt.$$ A curve $\gamma$ attaining the infimum is called a geodesic, or length-minimizing curve, joining $g$ and $\tilde{g}$. The CC ball of radius $r>0$ centered at a point $g\in\mathbb{H}^n$ is defined by $$B_r(g)=\{\tilde{g}\in\mathbb{H}^n: d_{CC}(g,\tilde{g})<r\}.$$ We also let $B_r=B_r(0)$. The CC gauge is given by $$|g|_{CC}=d_{CC}(e,g).$$ The CC distance, being constructed in terms of left-invariant vector fields, is left invariant and $1$-homogeneous with respect to dilations on $\mathbb{H}^n$. 
Namely, for any $p,g,\tilde{g}\in\mathbb{H}^n$ and $\lambda>0$ there holds $$d_{CC}(p\ast g,p\ast\tilde{g})=d_{CC}(g,\tilde{g}),\ \ d_{CC}(\delta_\lambda(g),\delta_\lambda(\tilde{g}))= \lambda\,d_{CC}(g,\tilde{g}).$$ An equivalent distance on $\mathbb{H}^n$ is the so-called Korányi distance (see Section 2.2 of [@Cap]) $$d_{\mathbb{H}}(g,\tilde{g})=\|\tilde{g}^{-1}\ast g\|_{\mathbb{H}},$$ defined through the Korányi gauge $$\|g\|_{\mathbb{H}}^4=\bigg(\sum_{l=1}^n(x_l^2+y_l^2)\bigg)^2+16t^2.$$ Clearly, $\|\cdot\|_{\mathbb{H}}$ is homogeneous of order 1 with respect to the dilations $(\delta_s)$: $\|\delta_sg\|_{\mathbb{H}}=s\|g\|_{\mathbb{H}}$. Consequently, there exist constants $C_1, C_2>0$ such that $$C_1\|g\|_{\mathbb{H}}\leq d_{CC}(g,0)\leq C_2\|g\|_{\mathbb{H}}$$ for any $g\in\mathbb{H}^n$. This follows immediately from the compactness of the Korányi unit sphere $$\mathcal{S}^{2n}=\{g\in\mathbb{H}^n: \|g\|_{\mathbb{H}}=1\}$$ and the continuity of $g\mapsto d_{CC}(g,0)$. Now we introduce the concept of convexity in horizontal distributions (see [@D] for details). A subset $\Omega\subset\mathbb{H}^n$ is called horizontally convex (abbreviated $H$-convex) if and only if the twisted convex combination $g\ast\delta_\lambda(g^{-1}\ast\tilde{g})\in\Omega$ for any $g\in\Omega$, every $\tilde{g}\in\Omega\cap\mathbb{H}_g$, and all $\lambda\in[0,1]$. Since every notion of convexity for sets has a naturally associated notion of convexity for functions, the definition above yields the following: Let $\Omega$ be an $H$-convex subset of $\mathbb{H}^n$. A function $u: \Omega\rightarrow\mathbb{R}$ is called $H$-convex if, for any $g\in\Omega$ and $\tilde{g}\in\Omega\cap\mathbb{H}_g$, one has $$u(g\ast\delta_\lambda(g^{-1}\ast\tilde{g})) \leq(1-\lambda)u(g)+\lambda u(\tilde{g}),\ \ \forall\lambda\in[0,1].$$ For more detailed research on convex functions and convex sets, one may refer to [@Cal; @GT; @GM; @LuM; @Mo1; @SY; @SY1] and the references therein. 
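The metric conventions above can be sanity-checked numerically. The sketch below (our own illustration; it takes the group twist to be $\frac{1}{2}\operatorname{Im}\sum_l \bar z_{1,l}z_{2,l}$, one common normalization) verifies the 1-homogeneity of the Korányi gauge under dilations and the left invariance of the Korányi distance:

```python
import numpy as np

rng = np.random.default_rng(0)

def mult(g1, g2):
    # Heisenberg group law: (z1,t1)*(z2,t2) = (z1+z2, t1+t2 + (1/2) Im <z1bar, z2>)
    (z1, t1), (z2, t2) = g1, g2
    return (z1 + z2, t1 + t2 + 0.5 * np.imag(np.sum(np.conj(z1) * z2)))

def inv(g):
    # inverse element: (z,t)^{-1} = (-z,-t)
    z, t = g
    return (-z, -t)

def koranyi(g):
    # Koranyi gauge: ||g||^4 = |z|^4 + 16 t^2
    z, t = g
    return (np.sum(np.abs(z) ** 2) ** 2 + 16.0 * t ** 2) ** 0.25

def d_H(g1, g2):
    # Koranyi distance d(g, g~) = ||g~^{-1} * g||
    return koranyi(mult(inv(g2), g1))

def dil(lam, g):
    # dilation delta_lambda(z,t) = (lambda z, lambda^2 t)
    z, t = g
    return (lam * z, lam ** 2 * t)

n = 2
g = (rng.standard_normal(n) + 1j * rng.standard_normal(n), rng.standard_normal())
h = (rng.standard_normal(n) + 1j * rng.standard_normal(n), rng.standard_normal())
p = (rng.standard_normal(n) + 1j * rng.standard_normal(n), rng.standard_normal())

# homogeneity of order 1 under dilations
assert np.isclose(koranyi(dil(3.0, g)), 3.0 * koranyi(g))
# left invariance of the Koranyi distance
assert np.isclose(d_H(mult(p, g), mult(p, h)), d_H(g, h))
```

Left invariance here only uses associativity of the group law and $g^{-1}=(-z,-t)$, so the same check passes for any antisymmetric bilinear twist.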
## Definitions and main results Due to the families of left translations and dilations, the geometric structure of Heisenberg groups is fundamentally different from that of Euclidean spaces, so the study of the Brunn-Minkowski theory in the Heisenberg group $\mathbb{H}^n$ is still blank; this leads us to develop the Brunn-Minkowski theory in $\mathbb{H}^n$. In the present paper, we construct the horizontal support function, the horizontal geometric measures, and other basic elements of convex geometry, and develop a sub-Riemannian version of the Minkowski type problem in the horizontal distributions of $\mathbb{H}^n$. Fix a point $g_0\in\mathbb{H}^n$, let $\mathbb{H}_{g_0}$ be the horizontal distribution through $g_0$, and call a compact $H$-convex subset with non-empty interior an $H$-convex body. For a horizontal vector field $V\in\{X_l,Y_l\}_{l=1}^n$ in $\mathbb{H}_{g_0}$, consider the horizontal geodesic $$\gamma: [0,1]\rightarrow\mathbb{H}_{g_0}, \ \ \gamma(t)=\exp(-tY_l)\exp(-tX_l)\exp(tY_l)\exp(tX_l)(g),$$ where $\exp(tV)(g)$ is the flow of the horizontal vector field $V$ at time $t$ starting from $g\in\mathbb{H}_{g_0}$. For fixed $g_0\in\mathbb{H}^n$, let $\Omega\subset\mathbb{H}_{g_0}$ be an $H$-convex body. We denote by $\overrightarrow{\gamma}_{(g_0g)}$ the position vector of a point $g$ relative to the origin $g_0$ in the horizontal sense, which is equivalent to a horizontal geodesic connecting $g_0$ and $g$ with initial tangent direction $\gamma^\prime(g_0)$. Similarly, we denote by $\nu$ the position vector of a point $\tilde{g}\in\mathcal{S}^{2n}$ relative to the origin $g_0$ in the horizontal sense. Let $\Omega\subset\mathbb{H}_{g_0}$ be an $H$-convex body, and let $\nu(g)$ be the horizontal unit direction vector at $g\in\partial\Omega$. We denote by $H_{\nu}$ the "horizontal totally geodesic hypersurface\" to $\partial\Omega$ at $g$ orthogonal to $\nu$ passing through $g_0$. 
It is not hard to check that the "horizontal totally geodesic hypersurface\" to $\partial\Omega$ at $g$ orthogonal to $\nu$ passing through $g_0$ is essentially equivalent to the "tangent plane\" defined below (see e.g., [@Mo]). $$H_{\nu}=\{g=(z,t)\in\partial\Omega: \langle\nu, z\rangle=0,\ t\in\mathbb{R}\}.$$ *Remark 1*. If $g$ belongs to the subspace $\mathbb{R}^{2n}\times\{0\}\subset\mathbb{H}_{g_0}$, it follows from the facts about geodesics in [@Haj] that the horizontal totally geodesic hypersurface $H_{\nu}$ is exactly a Euclidean tangent plane. We now define the horizontal support function (abbreviated $H$-support function) as follows: **Definition 2**. *Let $\Omega\subset\mathbb{H}_{g_0}$ be an $H$-convex body with $\emptyset\neq\Omega\neq\mathbb{H}_{g_0}$. For the horizontal unit direction vector $\nu$, the $H$-support function of $\Omega$, denoted by $h_H(\Omega,\cdot)$, is defined as the horizontal geodesic distance from the origin $g_0$ to $H_\nu$ at $g\in\partial\Omega$.* Since the Heisenberg group is diffeomorphic to Euclidean space, the $H$-support function can be regarded as a mapping $h_H(\Omega,\cdot): \mathcal{S}^{2n}\rightarrow\mathbb{R}$, which is further expressed as $$\begin{aligned} \label{1.5} h_H(\Omega,\nu)=\max\{\langle \overrightarrow{\gamma}_{(g_0g)},\nu \rangle_{\mathbb{H}^n}: g\in\Omega\},\end{aligned}$$ for the unit direction vector $\nu$. Notice that if $g=(0,t)\in\partial\Omega$, $t\neq0$, then $H_\nu$ at $g$ is orthogonal to the $t$-axis, and there are infinitely many horizontal geodesics connecting $g_0$ and $g$ with initial vector $\gamma^\prime_{g_0}$ as in ([\[1.5\]](#1.5){reference-type="ref" reference="1.5"}) (see Theorem 2.A). In this case, we make the following convention to ensure the uniqueness of the $H$-support function. 
For any horizontal unit direction vector $\nu$ and $g=(0,t)\in\partial\Omega$ with $t>0$, we take a plane, denoted by $\Pi_{g_0g}$, that passes through the points $g_0$ and $g$ and contains the vector $\nu$, such that $\gamma^\prime_{g_0}$ belongs to $\Pi_{g_0g}$, as shown in Figure 1. We then choose the horizontal geodesic $\gamma_{1(g_0g)}$ making an angle in $[0,\frac{\pi}{2}]$ with $\nu$ as the only horizontal geodesic connecting $g_0$ and $g$, and regard $\overrightarrow{\gamma}_{1(g_0g)}$ as the position vector $\overrightarrow{\gamma}_{(g_0g)}$ in ([\[1.5\]](#1.5){reference-type="ref" reference="1.5"}). For $t<0$, we adopt a similar convention. ![$H$-support function of $g=(0,t)$.](F1.eps) From Definition [Definition 2](#d1.1){reference-type="ref" reference="d1.1"} we can easily verify that $h_H(\Omega,u\ast\nu)\leq h_H(\Omega,u)+h_H(\Omega,\nu)$ and $h_H(\Omega,\delta_\lambda\nu)=\lambda h_H(\Omega,\nu)$ for $\lambda\in[0,1]$. Hence, $h_H(\Omega,\cdot)$ is an $H$-convex function if $\Omega\neq\mathbb{H}_{g_0}$. In particular, we get that $h_H(\Omega,\nu)=\langle\overrightarrow{\gamma}_{(g_0g)}, \nu\rangle$ if and only if $\Omega=\{g\}$. It is also clear from the definition that $h_H(\Omega,\nu)\leq h_H(\tilde{\Omega},\nu)$ for all $\nu$ if and only if $\Omega\subset\tilde{\Omega}$. Further, we note that $h_H(\delta_\lambda\Omega,\nu)=\lambda h_H(\Omega,\nu)$ for $\lambda\in[0,1]$. *Remark 3*. From the facts about geodesics in [@Haj], if $g$ belongs to the subspace $\mathbb{R}^{2n}\times\{0\}\subset\mathbb{H}_{g_0}$, it is easy to see that $\overrightarrow{\gamma}_{(g_0g)}$ is the unique geodesic and that its length equals the Euclidean length $|\overrightarrow{g_0g}|$ of the segment $\overrightarrow{g_0g}$. In this case, the $H$-support function is the Euclidean support function. A horizontal Gauss map (abbreviated $H$-Gauss map), denoted by $\nu_H$, is defined as follows: **Definition 4**. 
*For $g_0\in\mathbb{H}^n$, let $\Omega\subset\mathbb{H}_{g_0}$ be an $H$-convex body with smooth boundary $\partial\Omega$. The unit outer normal to $\partial\Omega$, denoted by $\nu_H$, is well defined at $\mathcal{H}^{2n}$-almost every point of $\partial\Omega$. The map $\nu_H: \partial\Omega\rightarrow\mathcal{S}^{2n}$ is called the $H$-Gauss map of $\Omega$.* Let $\nu_{H}^{-1}$ be the inverse of the mapping $\nu_H$. Next, we introduce the horizontal surface area measure (abbreviated $H$-surface area measure). **Definition 5**. *For an $H$-convex body $\Omega$ and a Borel measure $\mathfrak{m}$ on $\mathcal{S}^{2n}$, the $H$-surface area measure of $\Omega$ is the pushforward of $\mathfrak{m}$ by $\nu_H$ (i.e., $S_H(\Omega,\cdot)=\nu_{H,\Omega\sharp}^{-1}\mathfrak{m}$): $$S_H(\Omega,\omega)=\nu_{H,\Omega\sharp}^{-1}\mathfrak{m}(\omega) =\int_{\nu_{H,\Omega}^{-1}(\omega)} d\mathcal{H}^{2n},$$ for every Borel set $\omega\subset\mathcal{S}^{2n}$. Here $\mathcal{H}^{k}$ is the $k$-dimensional Hausdorff measure, which depends on the Korányi distance.* Consider a $C^1$ vector field $F$ with compact support on $\mathbb{H}^n$, and denote by $\{\phi_t\}_{|t|<\varepsilon}$, $t\in\mathbb{R}$, the associated group of diffeomorphisms (see [@Mo]). 
By computing the variation of $Vol(\Omega_t)$ along any $H$-convex body $\Omega$ at $t=0$, the explicit expression of the $H$-surface area measure can be derived (see Section 3 for details): $$dS_H(\Omega,\omega) =\det(\nabla_{ij}^Hh_H+h_H\delta_{ij}+A_{ij})da,$$ where $da$ is the area measure of $\mathcal{S}^{2n}$, $\nabla^H$ is the horizontal gradient of $h_H$ on $\mathcal{S}^{2n}$, $\nabla_{ij}^H=(\nabla_i^H\nabla_j^H+\nabla_j^H\nabla_i^H)/2$ is the horizontal second-order covariant derivative, $A_{ij}=\frac{1}{2}\langle \mathcal{G},\nabla_k^Hv\rangle_{\mathbb{H}^n}( \langle\nabla_i^H\nabla_k^Hv,\nabla_j^Hv\rangle_{\mathbb{H}^n} +\langle\nabla_j^H\nabla_k^Hv, \nabla_i^Hv\rangle_{\mathbb{H}^n})$, and $\mathcal{G}$ denotes the inverse $H$-Gauss map. Now we introduce the following Minkowski type problem with respect to the $H$-surface area measure, which may be called the *Heisenberg Minkowski problem*. **Problem 6**. Given a finite Borel measure $\mu$ on $\mathcal{S}^{2n}$, what are necessary and sufficient conditions on $\mu$ such that there exists an $H$-convex body $\Omega\subset\mathbb{H}_{g_0}$ satisfying $$dS_H(\Omega,\cdot)=d\mu?$$ There are generally two methods to solve the Minkowski problem in $\mathbb{R}^n$: the variational method and the curvature flow method. The existence of smooth solutions can be obtained through curvature flows, while the variational method only yields the existence of weak solutions. Due to the degeneracy of the Laplace equation in the Heisenberg group, the existence of smooth solutions cannot be obtained through the curvature flow method. Therefore, we obtain the existence of solutions to **Problem [Problem 6](#p1.3){reference-type="ref" reference="p1.3"}** through the variational method; see Section 4 for details. **Theorem 7**. 
*Let $\mu$ be a finite Borel measure on $\mathcal{S}^{2n}$ that is not concentrated on a closed hemisphere and satisfies $$\int_{\mathcal{S}^{2n}}ud\mu(u)=0.$$ Then there exists an $H$-convex body $\Omega$ such that $$S_H(\Omega,\cdot)=\mu.$$* The organization of this paper is as follows. In Section 2, we collect some basic facts about the Heisenberg group $\mathbb{H}^n$. In Section 3, by calculating the variation of the volume using the associated group of diffeomorphisms, we introduce the $H$-surface area measure and the Heisenberg Minkowski problem. In Section 4, we prove the existence of solutions to the Heisenberg Minkowski problem by the variational method. In Section 5, we provide the relationship between the $H$-support function and the Euclidean support function, and propose a more general Orlicz Heisenberg Minkowski problem. # Preliminaries In this section, we give some basic facts about the sub-Riemannian Heisenberg group $\mathbb{H}^n$. ## The equations for geodesics in $\mathbb{H}^n$ We recall here the equations for geodesics of unit length starting from $e=(0,0,0)$, since all other geodesics can be recovered by left translations and dilations (see e.g., [@AF; @Cap; @Ma; @Mon; @Y]). Let $s\in[0,1]$ and $\phi\in[-2\pi,2\pi]$, and let $A_l, B_l\in\mathbb{R}$ be such that $\sum_{l=1}^n(A_l^2+B_l^2)=1$; then the set of equations $$\begin{aligned} \label{2.0} \begin{cases} x_l(s)=\frac{A_l(1-\cos(\phi s))+B_l\sin(\phi s)}{\phi}\ \ \ \ l=1,\cdots,n\\ y_l(s)=\frac{-B_l(1-\cos(\phi s))+A_l\sin(\phi s)}{\phi}\ \ \ \ l=1,\cdots,n\\ t(s)=\frac{\phi s-\sin(\phi s)}{2\phi^2} \end{cases}\end{aligned}$$ defines a geodesic $\gamma(s)$ connecting $(0,0,0)$ with the point $(x_l,y_l,t)$, whose coordinates are $$\begin{aligned} \begin{cases} x_l(s)=x_l(1)=\frac{A_l(1-\cos\phi)+B_l\sin\phi}{\phi}\ \ \ \ l=1,\cdots,n\\ y_l(s)=y_l(1)=\frac{-B_l(1-\cos\phi)+A_l\sin\phi}{\phi}\ \ \ \ l=1,\cdots,n\\ t(s)=t(1)=\frac{\phi-\sin\phi}{2\phi^2}. 
\end{cases}\end{aligned}$$ Of course, this gives a parameterization of the boundary of the CC ball of unit radius. In the limit case $\phi=0$ one has a Euclidean geodesic. For the existence and uniqueness of geodesics in $\mathbb{H}^n$ we have the following result. **Theorem 2.A**. *Let $g=(z,t)\in\mathbb{H}^n$ with $g\neq e$.* (i) *If $z\neq0$, there exists a unique length-minimizing geodesic connecting $e$ and $g$.* (ii) *If $z=0$ (i.e., $g$ belongs to the center of $\mathbb{H}^n$), then there is a one-parameter family of length-minimizing geodesics connecting $e$ and $g$, obtained by rotation of a single geodesic around the $t$-axis.* In addition, there are some facts about geodesics (see e.g., [@Haj]): if $g$ belongs to the subspace $\mathbb{R}^{2n}\times\{0\}\subset\mathbb{R}^{2n+1} =\mathbb{H}^n$, then the straight line $\gamma(s)=sg$, $s\in[0,1]$, is the unique geodesic (up to a reparametrization) connecting the origin $0$ to $g$. Indeed, it is easy to check that $\gamma$ is horizontal, and its length equals the Euclidean length $|\overrightarrow{0g}|$ of the segment $\overrightarrow{0g}$. ## Hausdorff measure For any $m\geq0$ and $\delta>0$, one can define the premeasures on $\mathbb{H}^n$ associated with the Korányi distance (see [@Mo]). 
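The "unit length" normalization in the geodesic equations above can be checked symbolically: with $\sum_{l}(A_l^2+B_l^2)=1$, the horizontal speed $\sqrt{x'(s)^2+y'(s)^2}$ is identically $1$. A sympy sketch for $n=1$ (our own check):

```python
import sympy as sp

s, phi = sp.symbols('s phi', real=True)
A, B = sp.symbols('A B', real=True)

# geodesic (2.0) from the origin, n = 1
x = (A * (1 - sp.cos(phi * s)) + B * sp.sin(phi * s)) / phi
y = (-B * (1 - sp.cos(phi * s)) + A * sp.sin(phi * s)) / phi
t = (phi * s - sp.sin(phi * s)) / (2 * phi ** 2)

# horizontal speed is constant: x'^2 + y'^2 = A^2 + B^2 (= 1 by normalization)
speed2 = sp.simplify(sp.diff(x, s) ** 2 + sp.diff(y, s) ** 2)
assert sp.simplify(speed2 - (A ** 2 + B ** 2)) == 0

# endpoint at s = 1 matches the stated coordinates
assert sp.simplify(x.subs(s, 1) - (A * (1 - sp.cos(phi)) + B * sp.sin(phi)) / phi) == 0
```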
The diameter of a set $K\subset\mathbb{H}^n$ is $$diam K=\sup_{g,g^\prime\in K}d_{\mathbb{H}}(g,g^{\prime}).$$ \(i\) $\mathcal{H}_{\mathbb{H}}^m(E)=\lim_{\delta\rightarrow0} \mathcal{H}_{\mathbb{H}}^{m,\delta}(E)$, where, up to a constant multiple, $$\mathcal{H}_{\mathbb{H}}^{m,\delta}(E)= \inf\bigg\{\sum_{i\in\mathbb{N}}(diam(K_i))^m: E\subset\bigcup_{i\in\mathbb{N}}K_i, diam(K_i)<\delta\bigg\},$$ and the infimum is taken over all non-empty families of closed subsets $\{K_i\}_{i\in\mathbb{N}}\subset\mathbb{H}^n$; \(ii\) $\mathcal{S}_{\mathbb{H}}^m(E)=\lim_{\delta\rightarrow0} \mathcal{S}_{\mathbb{H}}^{m,\delta}(E)$, where, up to a constant multiple, $$\mathcal{S}_{\mathbb{H}}^{m,\delta}(E)= \inf\bigg\{\sum_{i\in\mathbb{N}}(diam(B_i))^m: E\subset\bigcup_{i\in\mathbb{N}}B_i, diam(B_i)<\delta\bigg\},$$ and the infimum is taken over closed Korányi balls $B_i$. The measures $\mathcal{H}_{\mathbb{H}}^m$ and $\mathcal{S}_{\mathbb{H}}^m$ are called, respectively, the $m$-dimensional horizontal Hausdorff measure and the $m$-dimensional horizontal spherical Hausdorff measure. ## Horizontal divergence and twisted convex combination Let $V$ be a smooth vector field in $\mathbb{H}^n=\mathbb{R}^{2n+1}$. We may express $V$ using both the basis $\{X_l, Y_l, T\}$ ($l=1,\cdots,n$) and the standard basis of vector fields of $\mathbb{R}^{2n+1}$: $$\begin{aligned} V&=\sum_{l=1}^n(\varphi_lX_l+\varphi_{n+l}Y_l)+\varphi_{2n+1}T\\ &=\sum_{l=1}^n\bigg[\varphi_l\frac{\partial}{\partial x_l} +\varphi_{n+l}\frac{\partial}{\partial y_l} +\Big(\frac{x_l}{2}\varphi_{n+l}-\frac{y_l}{2}\varphi_l\Big)\frac{\partial}{\partial t}\bigg]+\varphi_{2n+1}\frac{\partial}{\partial t},\end{aligned}$$ where $\varphi_l, \varphi_{n+l}, \varphi_{2n+1}\in C^{\infty}(\mathbb{H}^n)$ are smooth functions. 
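The coordinate expansion of $V$ and the bracket relations can be verified symbolically from the fields in (1.2). A sympy sketch for $n=1$ (our own check): it confirms that direct computation gives $[X,Y]=T$ and that the standard divergence of $V=\varphi_1X+\varphi_2Y+\varphi_3T$ equals $X\varphi_1+Y\varphi_2+T\varphi_3$:

```python
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)
p1, p2, p3 = [sp.Function(name)(x, y, t) for name in ('phi1', 'phi2', 'phi3')]

# left-invariant fields (1.2) for n = 1, acting on functions
X = lambda u: sp.diff(u, x) - y / 2 * sp.diff(u, t)
Y = lambda u: sp.diff(u, y) + x / 2 * sp.diff(u, t)
T = lambda u: sp.diff(u, t)

# bracket relation: [X, Y] = T
f = sp.Function('f')(x, y, t)
assert sp.simplify(X(Y(f)) - Y(X(f)) - T(f)) == 0

# coordinates of V = p1*X + p2*Y + p3*T in the standard basis of R^3
Vx, Vy, Vt = p1, p2, -y / 2 * p1 + x / 2 * p2 + p3

# standard divergence agrees with X(p1) + Y(p2) + T(p3)
div_std = sp.diff(Vx, x) + sp.diff(Vy, y) + sp.diff(Vt, t)
assert sp.simplify(div_std - (X(p1) + Y(p2) + T(p3))) == 0
```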
The standard divergence of $V$ is $$\begin{aligned} div V&=\sum_{l=1}^n\bigg[\frac{\partial\varphi_l}{\partial x_l}+\frac{\partial\varphi_{n+l}}{\partial y_l} -\frac{y_l}{2}\frac{\partial\varphi_l}{\partial t}+\frac{x_l}{2}\frac{\partial\varphi_{n+l}}{\partial t}\bigg] +\frac{\partial\varphi_{2n+1}}{\partial t}\\ &=\sum_{l=1}^n(X_l\varphi_l+Y_l\varphi_{n+l})+T\varphi_{2n+1}.\end{aligned}$$ The vector field $V$ is said to be horizontal if $V(g)\in\mathbb{H}_g$ for all $g\in\mathbb{H}^n$. These observations suggest the following definition. Let $A\subset\mathbb{H}^n$ be an open set. The horizontal divergence of a vector-valued mapping $\varphi\in C^1(A; \mathbb{R}^{2n})$ is defined by $$\begin{aligned} \label{8} div_H \varphi=\sum_{l=1}^n(X_l\varphi_l+Y_l\varphi_{n+l}).\end{aligned}$$ Thus $div_H \varphi=div V$ is the standard divergence of the horizontal vector field $V$ with coordinates $\varphi=(\varphi_1,\cdots, \varphi_{2n})$ in the basis $\{X_l, Y_l\}$. Let $g=(x,y,t)$ and $g^\prime=(x^\prime,y^\prime,t^\prime)$ be two points in $\mathbb{H}^n$, and let $\lambda\in[0,1]$. The twisted convex combination is defined by (see [@D]) $$\begin{aligned} \label{2.1} g\ast\delta_\lambda(g^{-1}\ast g^\prime) &=(x+\lambda(x^\prime-x),y+\lambda(y^\prime-y), t+2\lambda(xy^\prime-x^\prime y)\\ &\nonumber\ \ +\lambda^2(t^\prime-t+2x^\prime y-2xy^\prime)).\end{aligned}$$ If $g^\prime\in\mathbb{H}_g$, i.e. 
$g^{-1}\ast g^\prime\in\mathbb{H}_e$, then we get $$\begin{aligned} \label{2.2} t^\prime-t+2(x^\prime y-xy^\prime)=0.\end{aligned}$$ From ([\[2.1\]](#2.1){reference-type="ref" reference="2.1"}) and ([\[2.2\]](#2.2){reference-type="ref" reference="2.2"}), we derive for any $g^\prime\in\mathbb{H}_g$ that $$\begin{aligned} \label{2.3} g\ast\delta_\lambda(g^{-1}\ast g^\prime)=(1-\lambda)g+\lambda g^\prime.\end{aligned}$$ Note also that $g^\prime\in\mathbb{H}_g$ if and only if $g^{-1}\ast g^\prime\in\mathbb{H}_e$, in which case ([\[2.2\]](#2.2){reference-type="ref" reference="2.2"}) and ([\[2.3\]](#2.3){reference-type="ref" reference="2.3"}) hold. # $H$-surface area measure and Heisenberg Minkowski problem In this section, we first calculate the second fundamental form of $\partial\Omega$ in terms of the $H$-support function. Then, we introduce the $H$-surface area measure and the Heisenberg Minkowski problem in $\mathbb{H}_{g_0}$ by calculating the variation of the volume under the associated group of diffeomorphisms. Before that, we record a fact about the covariant derivatives of the Riemannian connection of the left-invariant metric. **Lemma 8**. *([@Ma]) For the covariant derivatives $\nabla_{e_i}e_j$ of the Riemannian connection of the left-invariant metric, with respect to the basis $$\{e_l,l=1,\cdots,2n+1\}=\{X_1,Y_1,X_2,Y_2,\cdots,T\},$$ the only non-zero values are $$\nabla_{X_l}Y_l=T,\quad \nabla_{Y_l}X_l=-T,\quad \nabla_{X_l}T=\nabla_{T}X_l=-Y_l,\quad \nabla_{Y_l}T=\nabla_{T}Y_l=X_l,\qquad l=1,\cdots,n.$$* **Theorem 9**. *Fix a point $g_0\in\mathbb{H}^n$, let $\Omega\subset\mathbb{H}_{g_0}$ be an $H$-convex body with smooth boundary $\partial\Omega$. 
Relative to the orthonormal frame $\{e_l,l=1,\cdots,2n\}$, the second fundamental form of $\partial\Omega$ is $$\Pi_{ij}^H=\nabla_{ij}^Hh_H+h_H\delta_{ij}+A_{ij},$$ where $\nabla_{ij}^H\hat{=}\frac{1}{2}(\nabla_i^H\nabla_j^H+\nabla_j^H\nabla_i^H)$, $A_{ij}=\frac{1}{2}\langle \mathcal{G},\nabla_k^Hv\rangle_{\mathbb{H}^n}( \langle\nabla_i^H\nabla_k^Hv,\nabla_j^Hv\rangle_{\mathbb{H}^n} +\langle\nabla_j^H\nabla_k^Hv, \nabla_i^Hv\rangle_{\mathbb{H}^n})$, and $\mathcal{G}$ denotes the inverse $H$-Gauss map. Moreover, if $\Omega$ is a convex body in Euclidean space, then $A_{ij}=0$. In this case, the horizontal second fundamental form is the Euclidean second fundamental form (see e.g., [@U]).* *Proof.* Since $\partial\Omega$ is a smooth, closed and $H$-convex hypersurface, we may assume that $\partial\Omega$ is parametrized by the inverse $H$-Gauss map. Without loss of generality, we assume that $\partial\Omega$ encloses the origin $g_0$. From ([\[1.5\]](#1.5){reference-type="ref" reference="1.5"}), the $H$-support function of $\partial\Omega$ can be written, for $v\in\mathcal{S}^{2n}$, as $$\begin{aligned} \label{3.1} h_H(\partial\Omega,v)=\langle\mathcal{G}(v), v\rangle_{\mathbb{H}^n}.\end{aligned}$$ Let $\{e_l,l=1,\cdots,2n\}$ be a smooth orthonormal frame field on $\mathcal{S}^{2n}$, and let $\nabla^H$ be the horizontal gradient on $\mathcal{S}^{2n}$. 
Differentiating ([\[3.1\]](#3.1){reference-type="ref" reference="3.1"}) we obtain $$\nabla_i^Hh_H=\langle\nabla_i^H\mathcal{G}, v\rangle_{\mathbb{H}^n} +\langle\mathcal{G},\nabla_i^Hv\rangle_{\mathbb{H}^n}.$$ Since $\nabla_i^H\mathcal{G}$ is tangential and $v$ is the normal, one has $$\nabla_i^Hh_H=\langle\mathcal{G}, \nabla_i^Hv\rangle_{\mathbb{H}^n}.$$ Differentiating once again, and writing $(\nabla_i^H\nabla_j^H+\nabla_j^H\nabla_i^H)/2=\nabla_{ij}^H$, we obtain $$\begin{aligned} \label{3.1.1} \nonumber\nabla_{ij}^Hh_H&=\frac{1}{2}(\nabla_i^H\nabla_j^Hh_H +\nabla_j^H\nabla_i^Hh_H)\\ \nonumber&=\frac{1}{2}(\langle\mathcal{G}, \nabla_i^H\nabla_j^Hv\rangle_{\mathbb{H}^n} +\langle\mathcal{G}, \nabla_j^H\nabla_i^Hv\rangle_{\mathbb{H}^n}\\ &\ \ \ \ +\langle\nabla_i^H\mathcal{G}, \nabla_j^Hv\rangle_{\mathbb{H}^n}+\langle\nabla_j^H\mathcal{G}, \nabla_i^Hv\rangle_{\mathbb{H}^n})\\ \nonumber&=\langle\mathcal{G}, \frac{1}{2}(\nabla_i^H\nabla_j^Hv +\nabla_j^H\nabla_i^Hv)\rangle_{\mathbb{H}^n} +\Pi_{ij}^H\\ \nonumber&=\langle\mathcal{G}, \nabla_{ij}^Hv\rangle_{\mathbb{H}^n}+\Pi_{ij}^H,\end{aligned}$$ where $\Pi_{ij}^H$ is the horizontal second fundamental form of $\partial\Omega$. 
To compute the term $\langle \mathcal{G},\nabla_{ij}^Hv\rangle_{\mathbb{H}^n}$ we differentiate the equation $\langle v,v\rangle_{\mathbb{H}^n}=1$ and obtain $$\begin{aligned} \label{3.2} \langle v,\nabla_i^Hv\rangle_{\mathbb{H}^n}=0,\end{aligned}$$ and $$\begin{aligned} \langle v,\nabla_i^H\nabla_j^Hv\rangle_{\mathbb{H}^n} +\langle\nabla_{i}^Hv,\nabla_{j}^Hv\rangle_{\mathbb{H}^n}=0.\end{aligned}$$ Since $\{e_l,l=1,\cdots,2n\}$ is the orthonormal frame field defined in Lemma [Lemma 8](#l3.1){reference-type="ref" reference="l3.1"}, by a direct computation, we arrive at $$\begin{aligned} \label{3.3} \langle v,\nabla_i^H\nabla_j^Hv\rangle_{\mathbb{H}^n} =-\langle\nabla_{i}^Hv,\nabla_{j}^Hv\rangle_{\mathbb{H}^n} \ \hat{=}-\delta_{ij}.\end{aligned}$$ Finally, $$\begin{aligned} \label{3.4} \langle\nabla_k^H\nabla_i^Hv, \nabla_{j}^Hv\rangle_{\mathbb{H}^n} +\langle\nabla_k^H\nabla_j^Hv, \nabla_{i}^Hv\rangle_{\mathbb{H}^n}=0.\end{aligned}$$ From ([\[3.2\]](#3.2){reference-type="ref" reference="3.2"}) and ([\[3.3\]](#3.3){reference-type="ref" reference="3.3"}), we see that $\nabla_1^Hv,\cdots,\nabla_{2n}^Hv$ form an orthonormal basis for the horizontal tangent space of $\partial\Omega$, and hence $$\begin{aligned} \langle\mathcal{G},\nabla_{ij}^Hv\rangle_{\mathbb{H}^n} =\langle\mathcal{G},\frac{1}{2}( \nabla_i^H\nabla_j^Hv +\nabla_j^H\nabla_i^Hv)\rangle_{\mathbb{H}^n},\end{aligned}$$ where $$\begin{aligned} \langle\mathcal{G},\nabla_i^H\nabla_j^Hv\rangle_{\mathbb{H}^n} &=\langle\langle\mathcal{G},v\rangle_{\mathbb{H}^n}v, \nabla_i^H\nabla_j^Hv\rangle_{\mathbb{H}^n}+\langle\langle \mathcal{G},\nabla_k^Hv\rangle_{\mathbb{H}^n}\nabla_k^Hv, \nabla_i^H\nabla_j^Hv\rangle_{\mathbb{H}^n}\\ &=\langle\mathcal{G},v\rangle_{\mathbb{H}^n}\langle v, \nabla_i^H\nabla_j^Hv\rangle_{\mathbb{H}^n} +\nabla_i^H\langle\langle \mathcal{G},\nabla_k^Hv\rangle_{\mathbb{H}^n}\nabla_k^Hv, \nabla_j^Hv\rangle_{\mathbb{H}^n}\\ &\ \ -\langle\nabla_i^H(\langle 
\mathcal{G},\nabla_k^Hv\rangle_{\mathbb{H}^n}\nabla_k^Hv), \nabla_j^Hv\rangle_{\mathbb{H}^n}\\ &=-h_H\delta_{ij}+\nabla_i^H\langle \mathcal{G},\nabla_j^Hv\rangle_{\mathbb{H}^n} -\nabla_i^H\langle\mathcal{G},\nabla_k^Hv\rangle_{\mathbb{H}^n} \langle\nabla_k^Hv,\nabla_j^Hv\rangle_{\mathbb{H}^n}\\ &\ \ -\langle\mathcal{G},\nabla_k^Hv\rangle_{\mathbb{H}^n} \langle\nabla_i^H\nabla_k^Hv,\nabla_j^Hv\rangle_{\mathbb{H}^n}\\ &=-h_H\delta_{ij}-\langle \mathcal{G},\nabla_k^Hv\rangle_{\mathbb{H}^n} \langle\nabla_i^H\nabla_k^Hv,\nabla_j^Hv\rangle_{\mathbb{H}^n},\end{aligned}$$ and $$\begin{aligned} \langle\mathcal{G},\nabla_j^H\nabla_i^Hv\rangle_{\mathbb{H}^n} =-h_H\delta_{ij}-\langle \mathcal{G},\nabla_k^Hv\rangle_{\mathbb{H}^n} \langle\nabla_j^H\nabla_k^Hv,\nabla_i^Hv\rangle_{\mathbb{H}^n}\end{aligned}$$ by virtue of ([\[3.2\]](#3.2){reference-type="ref" reference="3.2"}) and ([\[3.3\]](#3.3){reference-type="ref" reference="3.3"}). Thus we get $$\begin{aligned} \label{3.5} \langle\mathcal{G},\nabla_{ij}^Hv\rangle_{\mathbb{H}^n} &=-h_H\delta_{ij}-A_{ij},\end{aligned}$$ where $A_{ij}=\frac{1}{2}\langle \mathcal{G},\nabla_k^Hv\rangle_{\mathbb{H}^n}( \langle\nabla_i^H\nabla_k^Hv,\nabla_j^Hv\rangle_{\mathbb{H}^n} +\langle\nabla_j^H\nabla_k^Hv, \nabla_i^Hv\rangle_{\mathbb{H}^n})$. Substituting ([\[3.5\]](#3.5){reference-type="ref" reference="3.5"}) into ([\[3.1.1\]](#3.1.1){reference-type="ref" reference="3.1.1"}), we have $$\begin{aligned} \Pi_{ij}^H=\nabla_{ij}^Hh_H+h_H\delta_{ij}+A_{ij}.\end{aligned}$$ Moreover, if $\Omega$ is a convex body in Euclidean space, then, for the outer unit normal $v$, $\nabla_i^H\nabla_j^Hv=\nabla_j^H\nabla_i^Hv$. 
Using ([\[3.4\]](#3.4){reference-type="ref" reference="3.4"}), we have $$A_{ij}=\frac{1}{2}\langle \mathcal{G},\nabla_k^Hv\rangle_{\mathbb{H}^n}( \langle\nabla_k^H\nabla_i^Hv,\nabla_j^Hv\rangle_{\mathbb{H}^n} +\langle\nabla_k^H\nabla_j^Hv, \nabla_i^Hv\rangle_{\mathbb{H}^n}) =0.$$ In this case, the horizontal second fundamental form is the Euclidean second fundamental form. This ends the proof of Theorem [Theorem 9](#t3.1){reference-type="ref" reference="t3.1"}. ◻ From Theorem [Theorem 9](#t3.1){reference-type="ref" reference="t3.1"}, we can compute the metric $g_{ij}$ of $\partial\Omega$. The Gauss-Weingarten relations give $$\nabla_i^Hv=\Pi_{ik}^Hg^{kl}\nabla_l^H\mathcal{G},$$ from which we obtain $$\delta_{ij}=\langle\nabla_i^Hv, \nabla_j^Hv\rangle_{\mathbb{H}^n} =\Pi_{ik}^Hg^{kl}\Pi_{jm}^Hg^{ms} \langle\nabla_l^H\mathcal{G},\nabla_s^H\mathcal{G} \rangle_{\mathbb{H}^n} =\Pi_{ik}^H\Pi_{jl}^Hg^{kl}.$$ Since $\Omega$ is $H$-convex, $\Pi_{ij}^H$ is invertible, and hence $$\begin{aligned} \label{3.6} g_{ij}=\Pi_{ik}^H\Pi_{jk}^H.\end{aligned}$$ Next, by computing the first variation of the volume using the associated group of diffeomorphisms, we introduce the notion of the $H$-surface area measure. **Theorem 10**. *For a point $g_0\in\mathbb{H}^n$, let $\Omega\subset\mathbb{H}_{g_0}$ be a $H$-convex body. Then $$Vol^\prime(0)=\int_{\mathcal{S}^{2n}}h_H(\Omega,v) \det(\nabla_{ij}^Hh_H+h_H\delta_{ij}+A_{ij})da,$$ where $da$ is the area element of $\mathcal{S}^{2n}$.* *Proof.* For any $\varphi\in C_c^1(\Omega; \mathbb{R}^{2n})$, let $V=\sum_{l=1}^n(\varphi_lX_l+\varphi_{n+l}Y_l)$ be the horizontal vector field with coordinates $\varphi=(\varphi_1,\cdots, \varphi_{2n})$. 
By the standard divergence theorem, we have $$\begin{aligned} Vol^\prime(0)=\int_\Omega\frac{d}{dt} \bigg|_{t=0}(d(\sigma_H^{n})_t) =\int_\Omega div Vd\sigma_H^{n} =\int_{\partial\Omega} \langle V,v\rangle_{\mathbb{H}^n} d\sigma_H^{n-1},\end{aligned}$$ where $d\sigma_H^{n}$ stands for the volume element of $\Omega$, and $d\sigma_H^{n-1}$ is the area element of $\partial\Omega$ at $\mathcal{G}$. In the second equality we have used that $\frac{d}{dt}|_{t=0}(d(\sigma_H^{n})_t) =div Vd\sigma_H^{n}$ (see e.g., [@Fr; @Sim]). Let $da$ be the area element of $\mathcal{S}^{2n}$ at $v$. By ([\[3.6\]](#3.6){reference-type="ref" reference="3.6"}), the area element $d\sigma_H^{n-1}$ of $\partial\Omega$ at $\mathcal{G}$ can be given by $$\begin{aligned} d\sigma_H^{n-1}&=\sqrt{\det g_{ij}}da =\sqrt{\det(\Pi_{ij}^H)^2}da\\ &=\det(\nabla_{ij}^Hh_H+h_H\delta_{ij}+A_{ij})da.\end{aligned}$$ This, together with the definition of $H$-support function, yields $$\begin{aligned} Vol^\prime(0)=\int_{\mathcal{S}^{2n}} h_H(\Omega,v)\det(\nabla_{ij}^Hh_H+h_H\delta_{ij}+A_{ij})da.\end{aligned}$$ This completes the proof of Theorem [Theorem 10](#t3.2){reference-type="ref" reference="t3.2"}. ◻ **Definition 11**. *For a point $g_0\in\mathbb{H}^n$, let $\Omega\subset\mathbb{H}_{g_0}$ be a $H$-convex body. The $H$-surface area measure $S_H(\Omega,\cdot)$ of $\Omega$ can be defined by $$dS_H(\Omega,\cdot)=\det(\nabla_{ij}^Hh_H +h_H\delta_{ij}+A_{ij})da.$$* *Remark 12*. If $\Omega$ is a convex body in Euclidean space, then the $H$-surface area measure $S_H(\Omega,\cdot)$ is the Euclidean surface area measure (see [@S] or [@U]). 
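Remark 12 can be made concrete in the simplest Euclidean setting. For a smooth convex body in the plane with support function $h(\theta)$, the planar analogue of the density $\det(\nabla_{ij}^Hh_H+h_H\delta_{ij}+A_{ij})$ is $h''+h$ (the radius of curvature), and the enclosed area is $\frac{1}{2}\int_0^{2\pi}h\,(h''+h)\,d\theta$. A minimal numerical sketch, not part of the paper's computation; the test body, step sizes and tolerances are our own choices:

```python
import math

# Planar Euclidean analogue: the surface area measure of a smooth convex
# body with support function h(theta) has density h'' + h, and the area
# is A = (1/2) * integral of h * (h'' + h).
# Hypothetical test body: a disc of radius R centered at (a, 0), whose
# support function is h(theta) = R + a*cos(theta).

R, a = 2.0, 0.7
h = lambda t: R + a * math.cos(t)

def second_diff(f, t, eps=1e-4):
    """Central second difference approximating f''(t)."""
    return (f(t + eps) - 2.0 * f(t) + f(t - eps)) / eps**2

# The density h'' + h is identically R: translations do not change curvature.
for k in range(8):
    t = 2 * math.pi * k / 8
    assert abs(second_diff(h, t) + h(t) - R) < 1e-5

# Area via the support-function formula (Riemann sum over [0, 2*pi]):
m = 2000
A = sum(h(t) * (second_diff(h, t) + h(t))
        for t in (2 * math.pi * j / m for j in range(m))) * (2 * math.pi / m) / 2
assert abs(A - math.pi * R**2) < 1e-3
```

The translation term $a\cos\theta$ is annihilated by $h''+h$, mirroring the role of the correction term $A_{ij}$ vanishing in the Euclidean case.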
By Theorem [Theorem 10](#t3.2){reference-type="ref" reference="t3.2"}, Definitions [Definition 5](#d1.3){reference-type="ref" reference="d1.3"} and [Definition 11](#d3.1){reference-type="ref" reference="d3.1"}, and the horizontal divergence theorem (see e.g., [@Mon1]), we have $$\begin{aligned} Vol(\Omega)&=\int_\Omega d\sigma_H^n =\frac{1}{2n+1}\int_\Omega div Vd\sigma_H^n\\ &=\frac{1}{2n+1}\int_{\partial\Omega}\langle V,\nu\rangle_{\mathbb{H}^n}d\mathcal{H}^{2n}\\ &=\frac{1}{2n+1}\int_{\mathcal{S}^{2n}} h_H(\Omega,\nu)dS_H(\Omega,\nu).\end{aligned}$$ This motivates us to propose the following Heisenberg Minkowski problem for the $H$-surface area measure of Definition [Definition 11](#d3.1){reference-type="ref" reference="d3.1"}. **Problem 13**. Given a finite Borel measure $\mu$ on $\mathcal{S}^{2n}$, what are necessary and sufficient conditions for $\mu$ such that there exists a $H$-convex body $\Omega\subset\mathbb{H}_{g_0}$ satisfying $$dS_H(\Omega,\cdot)=d\mu?$$   # Existence of solutions to Heisenberg Minkowski problem In this section, we study the existence of solutions to the Heisenberg Minkowski problem. The main method used to solve this problem is the so-called variational method. First we need to introduce the concept of the Wulff shape. Let $\omega$ be a closed subset of $\mathcal{S}^{2n}$ that is not contained in a closed hemisphere, and let $f: \mathcal{S}^{2n}\rightarrow\mathbb{R}$ be a positive continuous function. The closed convex set $$\Omega_f=\bigcap_{u\in\omega}H^-(u,f(u))$$ is bounded, since $\omega$ positively spans $\mathbb{H}_{g_0}$. Here $$H^-(u,f(u))=\{g\in\mathbb{H}_{g_0}: \langle\overrightarrow{\gamma}_{(g_0g)},u \rangle_{\mathbb{H}^n}\leq f(u)\}.$$ It is a $H$-convex body that contains the origin in its interior, since the restriction of $f$ to $\omega$ has a positive lower bound. The body $\Omega_f$ is called *the horizontal Wulff shape* associated with $f$. 
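The Euclidean planar counterpart of the Wulff shape is easy to compute explicitly and may help intuition: for finitely many directions $u_1,\dots,u_m$, the intersection of the half-planes $\{x:\langle x,u_i\rangle\le f(u_i)\}$ is a polygon whose support function satisfies $h_{\Omega_f}\le f$ at the given directions. A hedged sketch with $f\equiv 1$ on equally spaced directions (all numerical choices are illustrative):

```python
import math

# Planar Euclidean Wulff shape: with f = 1 on m equally spaced directions,
# Omega_f is the regular m-gon circumscribed about the unit disc, so the
# defining inequality h_{Omega_f} <= f holds with equality at each u_i.

m = 6
us = [(math.cos(2*math.pi*k/m), math.sin(2*math.pi*k/m)) for k in range(m)]

# Vertices: intersection of consecutive supporting lines <x, u_k> = 1.
verts = []
for k in range(m):
    (a1, b1), (a2, b2) = us[k], us[(k+1) % m]
    det = a1*b2 - a2*b1
    verts.append(((b2 - b1)/det, (a1 - a2)/det))

def h(u):
    """Support function of the polygon: max over vertices."""
    return max(x*u[0] + y*u[1] for x, y in verts)

for u in us:
    assert abs(h(u) - 1.0) < 1e-12
# Between two of the given normals the support function exceeds 1,
# since the circumscribed polygon strictly contains the unit disc:
mid = (math.cos(math.pi/m), math.sin(math.pi/m))
assert abs(h(mid) - 1/math.cos(math.pi/m)) < 1e-12
```

Here every constraint is active; with a general $f$ some half-planes may be redundant, and then $h_{\Omega_f}(u_i)<f(u_i)$ for the corresponding directions.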
The Wulff shape can also be defined as the maximal element of $$\{\Omega\in\mathcal{K}_0^H: h_H(\Omega,u)\leq f(u), \ u\in\omega\},$$ where $\mathcal{K}_0^H$ denotes the set of $H$-convex bodies that contain the origin in their interior. Now we prove the existence of solutions to the Heisenberg Minkowski problem. **Theorem 14**. *For $g_0\in\mathbb{H}^n$ and the Korányi unit sphere $\mathcal{S}^{2n}$, let $\mu$ be a finite Borel measure on $\mathcal{S}^{2n}$ that is not concentrated on a closed hemisphere and satisfies $$\int_{\mathcal{S}^{2n}}ud\mu(u)=0.$$ Then there exists a $H$-convex body $\Omega$ in $\mathbb{H}_{g_0}$ such that $$S_H(\Omega,\cdot)=\mu.$$* *Proof.* Let $$\mathcal{R}_{\Omega}=\max_{u\in\mathcal{S}^{2n}} h_H(\Omega,u),$$ and $$\|f\|=\int_{\mathcal{S}^{2n}}f(u)d\mu(u)$$ for a positive continuous function $f$ on $\mathcal{S}^{2n}$. Let $C^+(\mathcal{S}^{2n})$ be the set of positive continuous functions on $\mathcal{S}^{2n}$. Define the continuous functional $$\Phi: C^+(\mathcal{S}^{2n})\rightarrow(0,\infty)$$ by $$\Phi(f)=\frac{\|f\|}{Vol(\Omega_f)^{\frac{1}{2n+1}}},$$ where $\Omega_f$ is the Wulff shape of $f$. It is easy to verify that $\Phi$ is homogeneous of degree $0$ and monotonically increasing. Next, we consider the following minimization problem $$\inf_{f\in C^+(\mathcal{S}^{2n})}\Phi(f).$$ We are searching for a function at which $\Phi$ attains its minimum. For any $f\in C^+(\mathcal{S}^{2n})$, the definition of the Wulff shape gives $h_H(\Omega_f,\cdot)\leq f$, which implies that $\Phi(h_H(\Omega_f,\cdot))\leq\Phi(f)$. Therefore, one can search for the minimum point of $\Phi$ among the $H$-support functions of $H$-convex bodies that contain the origin in their interior. Since $\Phi$ is homogeneous of degree $0$, the minimum of $\Phi$ is $$\inf\{\Phi(f): f\in C^+(\mathcal{S}^{2n})\} =\inf\{\|h_H(\Omega,\cdot)\|: Vol(\Omega)=\omega_{2n}\},$$ where $\omega_{2n}$ denotes the volume of the Korányi unit ball. 
Let $\Omega_j$ be a minimizing sequence for $\Phi$, that is, $$\begin{aligned} \label{4.1} \lim_{j\rightarrow\infty}\Phi(h_H(\Omega_j,\cdot)) =\inf\{\|h_H(\Omega,\cdot)\|: Vol(\Omega)=\omega_{2n}\}.\end{aligned}$$ The $H$-convex bodies $\Omega_j$ contain the origin in their interior. We claim that the sequence $\Omega_j$ is bounded. Since $\Omega_j$ is a minimizing sequence, we have $\Phi(h_H(\Omega_j,\cdot))<\Phi(1)+1$ when $j$ is large. Let $\mathcal{R}_{\Omega_j}$ be the maximal radius of $\Omega_j$. If $u_j$ is the direction of this radius, then $$\mathcal{R}_{\Omega_j}\langle u_j, v\rangle^+_{\mathbb{H}^n}\leq h_H(\Omega_j,v)$$ for all $v\in\mathcal{S}^{2n}$. We have $$\begin{aligned} \mathcal{R}_{\Omega_j}\int_{\mathcal{S}^{2n}}\langle u_j,v\rangle^+_{\mathbb{H}^n}d\mu(v)&\leq\int_{\mathcal{S}^{2n}} h_H(\Omega_j,v)d\mu(v)\\ &\leq\mu(\mathcal{S}^{2n})+1.\end{aligned}$$ Since $\mu$ is not concentrated on a closed hemisphere, there exists a constant $c>0$ such that $$\int_{\mathcal{S}^{2n}}\langle u, v\rangle^+_{\mathbb{H}^n}d\mu(v)\geq c$$ for all $u\in\mathcal{S}^{2n}$. Thus $$\mathcal{R}_{\Omega_j} \leq\frac{\mu(\mathcal{S}^{2n})+1}{c}.$$ Hence, the sequence $\Omega_j$ is bounded. By the Blaschke selection theorem, there exists a subsequence of $\Omega_j$ which converges to a $H$-convex body $\Omega_0$ that contains the origin. Since $Vol(\Omega_j)=\omega_{2n}$ in ([\[4.1\]](#4.1){reference-type="ref" reference="4.1"}), we also have $Vol(\Omega_0)=\omega_{2n}$. 
Since the horizontal Wulff shape associated with the $H$-support function $h_H(\Omega,\cdot)$ of a $H$-convex body $\Omega$ is the body $\Omega$ itself, it follows from the definition of $\Phi$ that $$\Phi(h_H(\Omega,\cdot))=\frac{\|h_H(\Omega,\cdot)\|} {Vol(\Omega)^{\frac{1}{2n+1}}}.$$ By the left translation invariance of the Korányi gauge, we have $$\overrightarrow{\gamma}_{[(\tilde{g}\ast g_0)(\tilde{g}\ast g)]} =\overrightarrow{\gamma}_{(g_0g)},$$ which implies that $h_H(\Omega,\cdot)$ is left translation invariant. Thus $\Phi(h_H(\Omega,\cdot))$ is translation invariant by the left translation invariance of $Vol(\Omega)$. Therefore, any left translation of $\Omega_0$ is still a minimum point of $\Phi$. By this, one can assume that $\Omega_0$ contains the origin in its interior. Thus, $h_H(\Omega_0,u)>0$ for all $u\in\mathcal{S}^{2n}$. Consider a deformation of $h_H(\Omega_0,u)$: for any $f\in C^+(\mathcal{S}^{2n})$, let $$h(u,t)=h_H(\Omega_0,u)+tf(u),$$ which is positive when $t$ is small. Since $h_H(\Omega_0,\cdot)$ is a minimum point of $\Phi$, we have $$\frac{d}{dt}\Phi(h(u,t))\bigg|_{t=0}=0.$$ This equation can be written as $$-\frac{\|h_H(\Omega_0,u)\|}{(2n+1)Vol(h_H(\Omega_0,u))} Vol^\prime(h_H(\Omega_0,u)) +\int_{\mathcal{S}^{2n}}f(u)d\mu(u)=0.$$ By the variational formula in Theorem [Theorem 10](#t3.2){reference-type="ref" reference="t3.2"}, $$Vol^\prime(h_H(\Omega_0,u)) =\int_{\mathcal{S}^{2n}}f(u)dS_H(\Omega_0,u).$$ It follows that $$\frac{\|h_H(\Omega_0,u)\|}{(2n+1)Vol(h_H(\Omega_0,u))} \int_{\mathcal{S}^{2n}}f(u)dS_H(\Omega_0,u) =\int_{\mathcal{S}^{2n}}f(u)d\mu(u)$$ for any $f\in C^+(\mathcal{S}^{2n})$. 
This gives that $$\frac{\|h_H(\Omega_0,u)\|}{(2n+1)Vol(h_H(\Omega_0,u))} S_H(\Omega_0,u)=\mu.$$ Let $\Omega$ be the dilation of $\Omega_0$ satisfying $$S_H(\Omega,\cdot) =\frac{\|h_H(\Omega_0,u)\|}{(2n+1)Vol(h_H(\Omega_0,u))} S_H(\Omega_0,\cdot).$$ Therefore, we obtain $$S_H(\Omega,\cdot)=\mu.$$ ◻   In the following remark, we propose a more general Minkowski-type problem, the Orlicz Heisenberg Minkowski problem, in Heisenberg groups, which will be investigated in our next article. *Remark 16*. Inspired by the more general Orlicz case of the classical Minkowski problem in Euclidean spaces, we can naturally propose the following Orlicz form of the Heisenberg Minkowski problem, which may be called the *Orlicz Heisenberg Minkowski problem* and stated as follows. **Problem 15**. (Orlicz Heisenberg Minkowski problem) What are the necessary and sufficient conditions on a given function $\phi$ and a given finite Borel measure $\mu$ on $\mathcal{S}^{2n}$, such that there exists a $H$-convex body $\Omega$ satisfying $$c\phi(h_H(\Omega,\cdot))dS_H(\Omega,\cdot)=d\mu,$$ where $c>0$ is a constant? A. D. Alexandrov, *On the theory of mixed volumes of convex bodies. III. Extensions of two theorems of Minkowski on convex polyhedra to arbitrary convex bodies*, Mat. Sb. (N.S.), **3** (1938), 27-46. A. D. Alexandrov, *On the area function of a convex body*, Mat. Sb. (N.S.), **6** (1939), 167-174; English translation in A. D. Aleksandrov, Selected Works, Part 1, Chapter 9, pp. 155-162, Gordon and Breach, Amsterdam, 1996. N. Arcozzi and F. Ferrari, *The Hessian of the distance from a surface in the Heisenberg group*, Ann. Acad. Sci. Fenn. Math., **33** (2008), no. 1, 35-63. F. Baudoin, E. Grong, K. Kuwada and A. Thalmaier, *Sub-Laplacian comparison theorems on totally geodesic Riemannian foliations*, Calc. Var. Partial Differential Equations, **58** (2019), Paper No. 130. A. Bellaïche and J.-J. Risler (eds.), *Sub-Riemannian geometry*, Progress in Mathematics, Vol. 
144, Birkhäuser, 1996. G. Bianchi, K. Böröczky, A. Colesanti and D. Yang, *The $L_p$-Minkowski problem for $-n<p<1$*, Adv. Math., **341** (2019), 493-535. K. J. Böröczky, M. Henk and H. Pollehn, *Subspace concentration of dual curvature measures of symmetric convex bodies*, J. Differential Geom., **109** (2018), 411-429. K. J. Böröczky, E. Lutwak, D. Yang and G. Zhang, *The logarithmic Minkowski problem*, J. Amer. Math. Soc., **26** (2013), 831-852. K. J. Böröczky, E. Lutwak, D. Yang, G. Zhang and Y. Zhao, *The dual Minkowski problem for symmetric convex bodies*, Adv. Math., **356** (2019), 106805. H. Brunn, Über Ovale und Eiflächen, Dissertation, München (1887). L. A. Caffarelli, *Interior $W^{2,p}$ estimates for solutions of the Monge-Ampère equation*, Ann. of Math., **131** (1990), 135-150. A. Calogero, G. Carcano and R. Pini, *On weakly H-quasiconvex functions on the Heisenberg group*, J. Convex Anal., **15** (2008), 753-766. L. Capogna, D. Danielli, S. D. Pauls and J. Tyson, *An introduction to the Heisenberg group and the sub-Riemannian isoperimetric problem*, Progress in Mathematics, 259. Birkhäuser Verlag, Basel, 2007. W. Chen, *$L_p$ Minkowski problem with not necessarily positive data*, Adv. Math., **201** (2006), 77-89. S. Chen and Q.-R. Li, *On the planar dual Minkowski problem*, Adv. Math., **333** (2018), 87-117. S. Y. Cheng and S. T. Yau, *On the regularity of the solution of the $n$-dimensional Minkowski problem*, Comm. Pure Appl. Math., **29** (1976), 495-516. K. S. Chou and X.-J. Wang, *The $L_p$-Minkowski problem and the Minkowski problem in centroaffine geometry*, Adv. Math., **205** (2006), 33-83. D. Danielli, N. Garofalo and D. M. Nhieu, *Notions of convexity in Carnot groups*, Comm. Anal. Geom., **11** (2003), 263-341. W. Fenchel and B. Jessen, *Mengenfunktionen und konvexe Korper*, Danske Vid. Selsk. Mat.-Fys. Medd, **16** (1938), 1-31. M. Francescopaolo, *Hypersurfaces and variational formulas in sub-Riemannian Carnot groups*, J. Math. 
Pures Appl., **87** (2007) 453-494. V. Franceschi and D. Prandi, *Hardy-type inequality for the Carnot-Caratheodory distance in the Heisenberg group*, J. Geom. Anal., **31** (2021) 2455-2480. R. J. Gardner, *Geometric tomography*, Cambridge Univ. Press, Cambridge, 1995. R. J. Gardner, D. Hug and W. Weil, *The Orlicz-Brunn-Minkowski theory: a general framework, additions, and inequalities*, J. Differential Geom., **97** (2014), 427-476. N. Garofalo and F. Tournier, *New properties of convex functions in the Heisenberg group*, Trans. Amer. Math. Soc., **358** (2005), 2011-2055. C. E. Gutiérrez and A. Montanari, *Maximum and comparison principles for convex functions on the Heisenberg group*, Comm. Partial Differential Equations, **29** (2004), 1305-1334. C. Haberl, E. Lutwak, D. Yang and G. Zhang, *The even Orlicz Minkowski problem*, Adv. Math., **224** (2010), 2485-2510. P. Hajłasz and S. Zimmerman, *Geodesics in the Heisenberg group*, Anal. Geom. Metr. Spaces, **3** (2015), 325-337. M. Henk and H. Pollehn, *Necessary subspace concentration conditions for the even dual Minkowski problem*, Adv. Math., **323** (2018), 114-141. Q. Huang and B. He, *On the Orlicz Minkowski problem for polytopes*, Discrete Comput Geom., **48** (2012), 281-297. Y. Huang, E. Lutwak, D. Yang and G. Zhang, *Geometric measures in the dual Brunn-Minkowski theory and their associated Minkowski problems*, Acta Math., **216** (2016), 325-388. D. Hug, E. Lutwak, D. Yang and G. Zhang, *On the $L_p$ Minkowski problem for polytopes*, Discrete Comput. Geom., **33** (2005), 699-715. M. Y. Jiang, *Remarks on the 2-dimensional $L_p$-Minkowski problem*, Adv. Nonlinear Stud., **10** (2010), 297-313. H. Lewy, *On differential geometry in the large. I. Minkowski's problem*, Trans. Amer. Math. Soc., **43** (1938), 258-270. A. J. Li, *The generalization of Minkowski problems for polytopes*, Geom. Dedicata, **168** (2014), 245-264. H. Li and B. 
Xu, *Hyperbolic $p$-sum and Horospherical $p$-Brunn-Minkowski theory in hyperbolic space*, arXiv:2211.06875. Q-R. Li, J. Liu and J. Lu, *Non-uniqueness of solutions to the dual $L_p$-Minkowski problem*, IMRN, (2022), 9114-9150. Q-R. Li, W. Sheng and X-J. Wang, *Flow by Gauss curvature to the Aleksandrov and dual Minkowski problems*, J. Eur. Math. Soc., **22** (2020), 893-923. G. Lu, J. J. Manfredi and B. Stroffolini, *Convex functions on the Heisenberg group*, Calc. Var. Partial Differential Equations, **19** (2004), 1-22. J. Lu and X.-J. Wang, *Rotationally symmetric solution to the $L_p$-Minkowski problem*, J. Differential Equations, **254** (2013), 983-1005. E. Lutwak, *The Brunn-Minkowski-Firey theory. I. Mixed volumes and the Minkowski problem*, J. Differential Geom., **38** (1993), 131-150. E. Lutwak and V. Oliker, *On the regularity of solutions to a generalization of the Minkowski problem*, J. Differential Geom., **62** (1995), 17-38. E. Lutwak, D. Yang and G. Zhang, *On the $L_p$-Minkowski problem*, Trans. Amer. Math. Soc., **356** (2004), 4359-4370. E. Lutwak, D. Yang and G. Zhang, *$L_p$ dual curvature measures*, Adv. Math., **329** (2018), 85-132. V. Marenich, *Geodesics in Heisenberg groups*, Geom. Dedicata, **66** (1997), 175-185. H. Minkowski, *Allgemeine Lehrsätze über die convexen Polyeder*, Nachr. Ges. Wiss. Göttingen, (1897), 198-219. H. Minkowski, *Volumen und Oberfläche*, Math. Ann., **57** (1903), 447-495. F. Montefalcone, *Hypersurfaces and variational formulas in sub-Riemannian Carnot groups*, J. Math. Pures Appl., **87** (2007), 453-494. R. Monti, *Some properties of Carnot-Carathéodory balls in the Heisenberg group*, Atti Accad. Naz. Lincei Cl. Sci. Fis. Mat. Natur. Rend. Lincei (9) Mat. Appl., **11** (2000), 155-167. R. Monti, *Isoperimetric problem and minimal surfaces in the Heisenberg group*, Geometric measure theory and real analysis, 57-129, CRM Series, 17, Ed. Norm., Pisa, 2014. R. Monti and M. 
Rickly, *Geodetically convex sets in the Heisenberg group*, J. Convex. Anal., **12** (2005), 187-196. L. Nirenberg, *The Weyl and Minkowski problems in differential geometry in the large*, Comm. Pure Appl. Math., **6** (1953), 337-394. A. V. Pogorelov, *The Minkowski Multidimensional Problem*, Winston and Sons, Washington, DC, 1978. R. Schneider, *Convex Bodies: The Brunn-Minkowski theory*, 2nd edn, Cambridge University Press, Cambridge, 2014. L. Simon, *Lectures on geometric measure theory*, In: Proceedings of the Centre for Mathematical Analysis, Australian National University, vol. 3, Australian National University Centre for Mathematical Analysis, Canberra (1983). A. Stancu, *The discrete planar $L_0$-Minkowski problem*, Adv. Math., **167** (2002), 160-174. M. Sun and X. Yang, *Some properties of quasiconvex function on the Heisenberg group*, Acta Math. Appl. Sin., Engl. Ser., **21** (2005), 571-580. M. Sun and X. Yang, *Quasi-convex functions in Carnot groups*, Chin. Ann. Math. Ser. B, **28B** (2007), 235-242. J. Urbas, *An expansion of convex hypersurfaces*, J. Differential Geom., **33** (1991), 91-125. Q. Yang, *Hardy type inequalities related to Carnot-Carathéodory distance on the Heisenberg group*, Proc. Amer. Math. Soc., **141** (2013), 351-362. Y. Zhao, *The dual Minkowski problem for negative indices*, Calc. Var. Partial Differential Equations, **56**:18 (2017), 1-18. Y. Zhao, *Existence of solution to the even dual Minkowski problem*, J. Differential Geom., **110** (2018), 543-572. G. Zhu, *The $L_p$ Minkowski problem for polytopes for $0<p<1$*, J. Funct. Anal., **269** (2015), 1070-1094. G. Zhu, *The centro-affine Minkowski problem for polytopes*, J. Differential Geom., **101** (2015), 159-174. B. Zhu, S. Xing and D. Ye, *The dual Orlicz-Minkowski problem*, J. Geom. Anal., **28** (2018), 3829-3855. [^1]: [^2]: Research is supported in part by the Natural Science Foundation of China (12271254; 12141104). [^3]: Corresponding author: Peibiao Zhao
{ "id": "2309.01170", "title": "The Minkowski problem in Heisenberg groups", "authors": "Bin Chen, Juan Zhang, Peibiao Zhao, Xia Zhao", "categories": "math.DG", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Mixed volumes in $n$-dimensional Euclidean space are functionals of $n$-tuples of convex bodies $K,L,C_1,\ldots,C_{n-2}$. The Alexandrov--Fenchel inequalities are fundamental inequalities between mixed volumes of convex bodies. As very special cases they cover or imply many important inequalities between basic geometric functionals. A complete characterization of the equality cases in the Alexandrov--Fenchel inequality remains a challenging open problem. Major recent progress was made by Yair Shenfeld and Ramon van Handel [@SvH22; @SvH23+]; in particular, they resolved the problem in the cases where $C_1,\ldots,C_{n-2}$ are polytopes, zonoids or smooth bodies (under some dimensional restriction). In [@HugReichert23+] we introduced the class of polyoids, which are defined as limits of finite Minkowski sums of polytopes having a bounded number of vertices. Polyoids encompass polytopes, zonoids and triangle bodies, and they can be characterized by means of generating measures. Based on this characterization and Shenfeld and van Handel's contribution, we extended their result to polyoids (or smooth bodies). Our previous result was stated in terms of the support of the mixed area measure associated with the unit ball $B^n$ and $C_1,\ldots,C_{n-2}$. This characterization result is completed in the present work which more generally provides a geometric description of the support of the mixed area measure of an arbitrary $(n-1)$-tuple of polyoids (or smooth bodies). The result confirms a long-standing conjecture by Rolf Schneider in the case of polyoids and hence, in particular, of zonoids. author: - Daniel Hug and Paul A. Reichert title: The support of mixed area measures involving a new class of convex bodies --- #### MSC-classes 2020. 52A39, 52A20, 52A21, 52A40 #### Keywords. 
Polytope, zonoid, polyoid, Alexandrov--Fenchel inequality, generating measure, mixed area measure # Introduction {#sec:1} Mixed volumes of convex bodies (nonempty compact convex sets) in Euclidean space $\mathbb{R}^n$, $n\ge 2$, are symmetric functionals of $n$-tuples of convex bodies, which naturally arise as coefficients of polynomial expansions of nonnegative Minkowski combinations of convex bodies. We write $\mathop{\mathrm{V}}$ for the volume functional (Lebesgue measure) and $\alpha_1K_1+\cdots+\alpha_mK_m$ for the Minkowski combination of the convex bodies $K_1,\ldots,K_m\subset\mathbb{R}^n$ with nonnegative coefficients $\alpha_1,\ldots,\alpha_m\in\mathbb{R}$. Then $$\label{eq:1.1} \mathop{\mathrm{V}}(\alpha_1K_1+\cdots+\alpha_mK_m)=\sum_{i_1,\ldots,i_n=1}^m \mathop{\mathrm{V}}(K_{i_1},\ldots,K_{i_n})\alpha_{i_1}\cdots\alpha_{i_n},$$ where $\mathop{\mathrm{V}}(K_{i_1},\ldots,K_{i_n})$ is called the mixed volume of $K_{i_1},\ldots,K_{i_n}$. A local counterpart of the mixed volumes are the mixed area measures. For convex bodies $K_1,\ldots,K_{n-1}\subset\mathbb{R}^n$, the mixed area measure $\mathop{\mathrm{S}}(K_1,\ldots,K_{n-1},\cdot)$ is the uniquely determined Borel measure on the Euclidean unit sphere $\mathbb{S}^{n-1}$ such that $$\label{eqex} \mathop{\mathrm{V}}(K_1,\ldots,K_{n-1},K_n)=\frac{1}{n}\int_{\mathbb{S}^{n-1}} h_{K_n}(u)\, \mathop{\mathrm{S}}(K_1,\ldots,K_{n-1},\mathop{}\!\mathrm{d}u)$$ holds for all convex bodies $K_n\subset \mathbb{R}^n$, where $h_{K_n}$ is the support function of $K_n$ (see [@Schneider Sect. 5.1] or [@Hug Thm. 4.1]). A deep inequality for mixed volumes of convex bodies, with many consequences and applications to diverse fields, has been found and established by Alexandrov [@AF1937] (see Schneider [@Schneider Sect. 7.3], also for some historical comments). We write $\mathcal{K}^n$ for the set of convex bodies in $\mathbb{R}^n$. **Theorem 1** (Alexandrov--Fenchel Inequality). 
*[\[thm:af\]]{#thm:af label="thm:af"} Let $K, L\in\mathcal{K}^n$ be convex bodies, and let $\mathbfcal{C}= (C_1, \ldots, C_{n-2})$ be an $(n-2)$-tuple of convex bodies in $\mathbb{R}^n$. Then $$\begin{aligned} \mathop{\mathrm{V}}(K, L, \mathbfcal{C})^2 \ge \mathop{\mathrm{V}}(K, K, \mathbfcal{C}) \mathop{\mathrm{V}}(L, L, \mathbfcal{C}),\tag{AFI} \end{aligned}$$ where $\mathop{\mathrm{V}}(K, L, \mathbfcal{C}):=\mathop{\mathrm{V}}(K,L,C_1, \ldots, C_{n-2})$.* While the inequality was already established by Alexandrov and various proofs of the inequality are known, some of which were found recently (see [@Cor19; @SvH19; @Wa18] and the references given there), a complete characterization of the equality cases remains a major open problem in Brunn--Minkowski theory (see [@Schneider Sect. 7.6]). For recent progress, we mention the work by Shenfeld and van Handel [@SvH22; @SvH23+] and the literature cited there. Based on their findings for the case where $\mathbfcal{C}= (C_1, \ldots, C_{n-2})$ is a tuple of polytopes, zonoids or smooth bodies (satisfying a weak dimensionality assumption, called supercriticality), the following more general result has been shown in [@HugReichert23+]. It confirms a conjecture by Rolf Schneider [@Schneider Conjecture 7.6.16] for a new class of convex bodies, which we called polyoids, that contains all polytopes, zonoids and triangle bodies. A *polyoid* is a convex body $K$ for which there is some integer $k\in\mathbb{N}$ and a sequence of Minkowski sums of polytopes each having at most $k$ vertices that converges to $K$; see [@HugReichert23+ Sect. 2] (and Section [3](#sec:3){reference-type="ref" reference="sec:3"} below) for further details and a representation theorem characterizing polyoids. A convex body is smooth if each of its boundary points is contained in a unique supporting hyperplane. **Theorem 2** (Equality cases in (AFI) for polyoids and smooth bodies [@HugReichert23+]). 
*Let $K,L\in \mathcal{K}^n$, and let $\mathbfcal{C}= (C_1, \ldots, C_{n-2})$ be a supercritical $(n-2)$-tuple of polyoids or smooth convex bodies in $\mathbb{R}^n$. Assume that $\mathop{\mathrm{V}}(K,L,\mathbfcal{C})>0$. Then (AFI) holds with equality if and only if there are $a>0$ and $x\in\mathbb{R}^n$ such that $$h_K=h_{aL+x}\quad\text{ on }\mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}},\mathbfcal{C},\cdot),$$ where $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}},\mathbfcal{C},\cdot)$ denotes the support of the mixed area measure $\mathop{\mathrm{S}}({B^{n}},\mathbfcal{C},\cdot)$ of the unit ball $B^n$ and the $(n-2)$-tuple $\mathbfcal{C}$.* For a geometric understanding of the equality cases in the Alexandrov--Fenchel inequality (AFI) it thus remains to describe the support of the measure $\mathop{\mathrm{S}}({B^{n}},\mathbfcal{C},\cdot)$ in geometric terms. According to another (more general) conjecture by Rolf Schneider [@Schneider Conjecture 7.6.14], the support of the mixed area measure $\mathop{\mathrm{S}}(K_1,\ldots,K_{n-1},\cdot)$, for given convex bodies $K_1,\ldots,K_{n-1}\subset\mathbb{R}^n$, is the closure of the set of *$(K_1,\ldots,K_{n-1})$-extreme normal vectors*, for which we write $\mathop{\mathrm{cl}}\mathop{\mathrm{ext}}(K_1,\ldots,K_{n-1})$; an explicit definition and further information are given in Section [2](#sec:2){reference-type="ref" reference="sec:2"}. If all convex bodies are polytopes or all are smooth and strictly convex, then the conjecture is known to be true. The conjecture was also recently confirmed by Shenfeld and van Handel in the case of $(n-1)$-tuples of the form $(B^n, C_1,\ldots,C_{n-2})$, where $C_i$ is a zonoid or a smooth convex body in $\mathbb{R}^n$. However, even in the case where the unit ball $B^n$ is replaced by a general zonoid, the conjecture was open up to now. 
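For orientation, in the plane (AFI) reduces to Minkowski's inequality $\mathop{\mathrm{V}}(K,L)^2\ge \mathop{\mathrm{V}}(K,K)\mathop{\mathrm{V}}(L,L)$, where the mixed area can be obtained by polarization from [\[eq:1.1\]](#eq:1.1){reference-type="eqref" reference="eq:1.1"}: $2\mathop{\mathrm{V}}(K,L)=\mathop{\mathrm{V}}(K+L)-\mathop{\mathrm{V}}(K)-\mathop{\mathrm{V}}(L)$. A numerical sanity check, offered as a sketch only; the two polygons are our own illustrative choices, not taken from the paper:

```python
import math
from itertools import product

# Planar Alexandrov--Fenchel = Minkowski's inequality:
# V(K,L)^2 >= V(K,K) * V(L,L), with 2 V(K,L) = A(K+L) - A(K) - A(L).

def hull(pts):
    """Andrew's monotone chain convex hull, counterclockwise."""
    pts = sorted(set(pts))
    def half(points):
        out = []
        for p in points:
            while len(out) >= 2 and (
                (out[-1][0]-out[-2][0])*(p[1]-out[-2][1])
                - (out[-1][1]-out[-2][1])*(p[0]-out[-2][0])) <= 0:
                out.pop()
            out.append(p)
        return out[:-1]
    return half(pts) + half(pts[::-1])

def area(P):
    """Shoelace formula for a convex polygon given as a vertex cycle."""
    return 0.5 * abs(sum(x1*y2 - x2*y1
                         for (x1, y1), (x2, y2) in zip(P, P[1:] + P[:1])))

def msum(P, Q):
    """Minkowski sum of convex polygons via pairwise vertex sums."""
    return hull([(px+qx, py+qy) for (px, py), (qx, qy) in product(P, Q)])

K = hull([(0, 0), (2, 0), (2, 1), (0, 1)])   # a 2x1 rectangle
L = hull([(0, 0), (1, 0), (0, 3)])           # a thin triangle

VKL = 0.5 * (area(msum(K, L)) - area(K) - area(L))
assert VKL**2 >= area(K) * area(L)           # Minkowski's inequality
```

Equality in the planar inequality forces $K$ and $L$ to be homothetic, which visibly fails for a rectangle and a triangle; here $\mathop{\mathrm{V}}(K,L)^2=12.25$ while $\mathop{\mathrm{V}}(K,K)\mathop{\mathrm{V}}(L,L)=3$.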
Our main result confirms Schneider's conjecture [@Schneider Conjecture 7.6.14] not only for general $(n-1)$-tuples of zonoids (or smooth bodies), but for the larger class of polyoids (or smooth bodies). **Theorem 1**. *[\[thm:suppChar\]]{#thm:suppChar label="thm:suppChar"} Let $\mathbfcal{C}= (C_1, \ldots, C_{n-1})$ be an $(n-1)$-tuple of polyoids (or smooth convex bodies provided at least one of the bodies $C_i$ is smooth and strictly convex) in $\mathbb{R}^n$. Then $$\label{eq:key} \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\mathbfcal{C},\cdot) = \mathop{\mathrm{cl}}\mathop{\mathrm{ext}}\mathbfcal{C} .$$* In combination with the preceding theorem on the characterization of the equality cases in (AFI), given in terms of the support of the mixed area measure $\mathop{\mathrm{S}}({B^{n}},\mathbfcal{C},\cdot)$, we thus obtain the following result, which establishes Schneider's conjecture [@Schneider Conjecture 7.6.13] for the class of polyoids (or smooth bodies). **Theorem 2**. *Let $K,L\in \mathcal{K}^n$, and let $\mathbfcal{C}= (C_1, \ldots, C_{n-2})$ be a supercritical $(n-2)$-tuple of polyoids or smooth convex bodies in $\mathbb{R}^n$. Assume that $\mathop{\mathrm{V}}(K,L,\mathbfcal{C})>0$. Then (AFI) holds with equality if and only if there are $a>0$ and $x\in\mathbb{R}^n$ such that $$h_K=h_{aL+x}\quad\text{ on } \mathop{\mathrm{ext}}(B^n,\mathbfcal{C}).$$* In the special case where $C_1,\ldots,C_{n-2}$ are all smooth, each unit vector is $(B^n,\mathbfcal{C})$-extreme and therefore $K$ and $L$ are homothetic (see [@Schneider Thm. 7.6.8]). As another consequence of Theorem [\[thm:suppChar\]](#thm:suppChar){reference-type="ref" reference="thm:suppChar"}, we obtain the following partial confirmation of a conjecture on the monotonicity of mixed volumes (see [@Schneider1988 Conjecture A$^{\prime}$]). **Theorem 3**. *[\[thm:suppChar2\]]{#thm:suppChar2 label="thm:suppChar2"} Let $K,L\in\mathcal{K}^n$ satisfy $K\subseteq L$. 
Let $\mathbfcal{C}= (C_1, \ldots, C_{n-1})$ be an $(n-1)$-tuple of polyoids (or smooth convex bodies provided at least one of the bodies $C_i$ is smooth and strictly convex) in $\mathbb{R}^n$. Then equality holds in $$\mathop{\mathrm{V}}(K,\mathbfcal{C})\le \mathop{\mathrm{V}}(L,\mathbfcal{C})$$ if and only if $$\label{eq:key2} h_K=h_L \quad\text{on } \mathop{\mathrm{ext}}\mathbfcal{C} .$$* Condition [\[eq:key2\]](#eq:key2){reference-type="eqref" reference="eq:key2"} is expressed by saying that $K$ and $L$ have the same $\mathbfcal{C}$-extreme supporting hyperplanes. In order to show relation [\[eq:key\]](#eq:key){reference-type="eqref" reference="eq:key"}, which is the main result of this paper, we prove two inclusions. Both inclusions require various preparations and involve new ideas. The main task is to prove the result in the case where $C_1,\ldots,C_{n-1}$ are polyoids with generating measures $\mu_1,\ldots,\mu_{n-1}$. In order to show that $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\mathbfcal{C})\subseteq \mathop{\mathrm{cl}}\mathop{\mathrm{ext}}\mathbfcal{C}$, we express in a first step the support of the mixed area measure of $\mathbfcal{C}$ as the closure of the union of the extreme normal vectors $\mathop{\mathrm{ext}}\mathbfcal{P}$ of all $(n-1)$-tuples $\mathbfcal{P}$ of polytopes in the support of $\mu_1\otimes\cdots\otimes\mu_{n-1}$. A main tool is Theorem [Theorem 26](#thm:suppInt){reference-type="ref" reference="thm:suppInt"} which applies to more general bodies than polyoids. In a second step, we provide in Section [3](#sec:3){reference-type="ref" reference="sec:3"} information about projections of touching spaces (the linear subspaces orthogonal to the better known touching cones) and projections of polyoids and their generating measures. In Section [4](#sec:4){reference-type="ref" reference="sec:4"} we develop a method to characterize what it means that the touching space of a convex body, and in particular of a polyoid, is trivial. 
These ingredients are combined in Section [7](#sec:7){reference-type="ref" reference="sec:7"} to complete the proof of the inclusion "$\subseteq$". In fact, our arguments for the inclusion "$\subseteq$" apply to a formally larger class of convex bodies which we called macroids in [@HugReichert23+], see Proposition [Proposition 51](#prop:suppChar){reference-type="ref" reference="prop:suppChar"}. For the reverse inclusion, we proceed by induction over the dimension (see Section [7](#sec:7){reference-type="ref" reference="sec:7"}). A natural ingredient in the argument is a reduction formula that relates the mixed area measures of convex bodies, where some of these bodies are contained in a subspace $E$, to the mixed area measure of the remaining bodies, projected to the orthogonal subspace $E^\perp$ (see Section [2](#sec:2){reference-type="ref" reference="sec:2"}). A crucial new idea to make the induction work is to reduce the complexity of a polyoid $M$, which has a nontrivial touching space in direction $u$, by a construction we call pruning. It ultimately allows us to replace $M$ locally by a lower-dimensional witness polytope $\Re(M,u)$ which can be used in place of $M$ to explore the support of a mixed area measure involving $M$. Motivating examples for the construction of such a polytope and the crucial Witness Lemma [\[thm:pruningLemma\]](#thm:pruningLemma){reference-type="ref" reference="thm:pruningLemma"} are contained in Section [5](#sec:5){reference-type="ref" reference="sec:5"}. It is this part of the argument for the inclusion "$\supseteq$" which inhibits the extension of Theorem [\[thm:suppChar\]](#thm:suppChar){reference-type="ref" reference="thm:suppChar"} to macroids. Another ingredient for the induction is provided in Section [6](#sec:6){reference-type="ref" reference="sec:6"}. It finally allows us in the induction step to replace, for a given direction $u$, some of the polyoids by their associated witness polytopes.
The required Switching Lemma [Lemma 50](#thm:critSwitching){reference-type="ref" reference="thm:critSwitching"} is based on concepts of criticality that are discussed in Section [2](#sec:2){reference-type="ref" reference="sec:2"} and have already proved to be essential in recent work by Shenfeld and van Handel [@SvH23+]. # Preparations {#sec:2} We work in Euclidean space $\mathbb{R}^n$ with scalar product $\langle\cdot\,,\cdot\rangle$, norm $\|\cdot\|$ and Euclidean metric $d(\cdot\,,\cdot)$. Most of the time we work with nonempty compact convex subsets of $\mathbb{R}^n$ (convex bodies) and denote the space of all convex bodies in $\mathbb{R}^n$ by $\mathcal{K}^n$, together with the Hausdorff metric. We denote by $\mathcal{P}^n$ the subset of $\mathcal{K}^n$ consisting of polytopes. It is useful to consider some basic operations and concepts from convex geometry also for non-convex sets. This is straightforward for the Minkowski (i.e. vector) sum or Minkowski combinations with real coefficients of arbitrary subsets of $\mathbb{R}^n$. For $n\in \mathbb{N}$, we set $[n]:=\{1,\ldots,n\}$. If $\varnothing\neq A\subseteq\mathbb{R}^n$, we denote by $\mathop{\mathrm{span}}A$ the (linear) span and by $\mathop{\mathrm{\overline{span}}}A:=\mathop{\mathrm{span}}(A-A)$ the linear subspace parallel to the affine span of $A$. Then $\dim A \coloneqq \dim \mathop{\mathrm{\overline{span}}}A$ is the dimension of $A$. We write $\mathop{\mathrm{relint}}B$ for the relative interior of a convex set $B\subseteq\mathbb{R}^n$. For $x,y\in\mathbb{R}^n$, the segment connecting $x$ and $y$ is denoted by $[x,y]$ (which equals the convex hull of $\{x,y\}$). 
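As a quick sanity check of this notation (an illustration only, immediate from the definitions), take $x, y \in \mathbb{R}^n$ with $x \neq y$:

```latex
\mathop{\mathrm{\overline{span}}}\{x\} = \mathop{\mathrm{span}}(\{x\}-\{x\}) = \{0\},
\qquad \dim\{x\} = 0,
\]
\[
\mathop{\mathrm{\overline{span}}}[x,y] = \mathop{\mathrm{span}}\{y-x\},
\qquad \dim[x,y] = 1,
\qquad \mathop{\mathrm{relint}}[x,y] = [x,y]\setminus\{x,y\}.
```

In particular, $\mathop{\mathrm{\overline{span}}}$ and $\dim$ are invariant under translation, which is the point of passing from $A$ to $A-A$.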
The support function of a subset $A\subseteq \mathbb{R}^n$ is $h_A \colon \mathbb{R}^n \to [-\infty, \infty]$, $u \mapsto \sup\set*{ \langle x, u\rangle \given x\in A}$, and the support set of $A$ in direction $u \in \mathbb{R}^n \setminus\set{0}$ is $$F(A, u) \coloneqq \set*{ x \in A \given \left<x, u\right> = h_A(u) },$$ which can be the empty set. For a convex body $A$ and $u \in \mathbb{R}^n \setminus\set{0}$, the support set $F(A,u)$ is again a convex body. ## Faces and touching spaces We follow Schneider [@Schneider p. 16] in defining a face of a convex set $A\subseteq\mathbb{R}^n$ as a convex subset $F \subseteq A$ with the following property: If $x, y \in A$ and $F\cap\mathop{\mathrm{relint}}[x, y]\neq\varnothing$, then $[x, y] \subseteq F$. Several useful properties of faces of nonempty closed convex sets are provided in [@Schneider Sect. 2.1] and will be used in the following. In particular, for a polytope $P$ a set $\varnothing\neq F\subset P$ is a face of $P$ if and only if it is a support set. Note that "$\subset$" means strict inclusion. Next we collect and complement some definitions from [@Schneider p. 85]. As usual, for a subset $A\subset\mathbb{R}^n$ we set $A^\perp:=\set*{v\in \mathbb{R}^n\given \langle v, a\rangle=0 \text{ for all }a\in A}$ (which equals the orthogonal complement of $\mathop{\mathrm{span}}A$). For a vector $u\in\mathbb{R}^n$, we set $u^\perp:=\{u\}^\perp$. **Definition 4**. *Let $K$ be a convex body contained in some linear subspace $V \subseteq \mathbb{R}^n$. The set of common outer normal vectors (including $0$) of some set $S \subseteq K$ is $$N_V(K, S) \coloneqq \set*{ u \in V\setminus\set{0} \given S \subseteq F(K, u) } \cup \set*{0} \subseteq V$$ and called the *normal cone of $K$ at $S$ (in $V$)*.* *If $u \in V\setminus\set{0}$, then $N_V(K, F(K, u))$ is a closed convex cone containing $u$. As such, it has a unique face $T_V(K, u)$ such that $u \in \mathop{\mathrm{relint}}T_V(K, u)$.
This face is called the *touching cone of $K$ in direction $u$*.* *The space $\mathop{\mathrm{TS}}_V(K, u) \coloneqq V \cap T_V(K, u)^\perp$ is called the *touching space of $K$ in direction $u$*.* *In case of $V = \mathbb{R}^n$, we write $N \coloneqq N_{\mathbb{R}^n}$, $T \coloneqq T_{\mathbb{R}^n}$ and $\mathop{\mathrm{TS}}\coloneqq \mathop{\mathrm{TS}}_{\mathbb{R}^n}$.* The following definition of extreme normal directions for an $(n-1)$-tuple of convex bodies in $\mathbb{R}^n$ can be easily seen to be equivalent to the definition given in [@Schneider p. 87] by means of [@Schneider Lem. 5.1.9], applied in $u^\perp$ for some $u\in\mathbb{S}^{n-1}$. **Definition 5**. * If $n \ge 1$ and $\mathbfcal{C}= (C_1, \ldots, C_{n-1})$ is a tuple of convex bodies in $\mathbb{R}^n$, then $u\in\mathbb{S}^{n-1}$ is said to be a *$\mathbfcal{C}$-extreme (normal) vector* if there are one-dimensional linear subspaces of $\mathop{\mathrm{TS}}(C_i, u)$, for $i\in[n-1]$, with linearly independent directions. The set of all $\mathbfcal{C}$-extreme normal vectors is denoted by $\mathop{\mathrm{ext}}\mathbfcal{C}$.* **Remark 6**. *In the situation of Definition [Definition 5](#def:extremeNormal){reference-type="ref" reference="def:extremeNormal"}, $u\in \mathop{\mathrm{ext}}\mathbfcal{C}$ if and only if $$\label{eq:condiextreme} \dim \sum_{i \in I} \mathop{\mathrm{TS}}(C_i, u) \ge \abs{I}\quad \text{for }I \subseteq [n-1],$$ where the empty sum is understood as the trivial vector space. For the equivalence of this condition with Definition [Definition 5](#def:extremeNormal){reference-type="ref" reference="def:extremeNormal"}, see [@Schneider Thm.
5.1.8].* *With the notation introduced later in Definition [Definition 20](#def:crit){reference-type="ref" reference="def:crit"}, condition [\[eq:condiextreme\]](#eq:condiextreme){reference-type="eqref" reference="eq:condiextreme"} will be expressed by writing $$\mathop{\mathrm{V}}\prn*{\mathop{\mathrm{TS}}(C_1, u), \ldots, \mathop{\mathrm{TS}}(C_{n-1}, u)} > 0 ;$$ see also the more general Lemma [Lemma 23](#thm:critIndep){reference-type="ref" reference="thm:critIndep"}.* The facial structure, touching cones and touching spaces of polytopes are reasonably well-understood. In Lemmas [Lemma 7](#thm:facialStability){reference-type="ref" reference="thm:facialStability"} and [Lemma 8](#thm:touchingConePoly){reference-type="ref" reference="thm:touchingConePoly"} we provide some related information that will be needed in the sequel. **Lemma 7** (Facial stability). *Let $P = \mathop{\mathrm{conv}}\set*{ v_1, \ldots, v_\ell } \in \mathcal{P}^n$ and $u \in \mathbb{R}^n \setminus\set{0}$. Consider the set $I_u \coloneqq \set*{ i \in [\ell] \given v_i \in F(P, u) }$.* *Then there is an $\varepsilon \in (0, \norm{u})$ such that for all $v, w_1, \ldots, w_\ell \in \mathbb{R}^n$ with $d(u, v) < \varepsilon$ and $d(v_i, w_i) < \varepsilon$, $Q \coloneqq \mathop{\mathrm{conv}}\set*{ w_1, \ldots, w_\ell }$ satisfies $F(Q, v) \subseteq \mathop{\mathrm{conv}}\set*{ w_i \given i \in I_u }$.* *Equality holds if and only if additionally $v \in \mathop{\mathrm{\overline{span}}}\set*{ w_i \given i \in I_u }^\perp$.* *Proof.* Note that $I_u$ is not empty. If $I_u = [\ell]$, then the first claim follows from $F(Q, v) \subseteq Q$. Now assume that $[\ell]\setminus I_u \ne \varnothing.$ Let $\norm{u} > \varepsilon > 0$ and $v, w_1, \ldots, w_\ell \in \mathbb{R}^n$ with $d(u, v) < \varepsilon$ and $d(v_i, w_i) < \varepsilon$. Then $v \ne 0$.
Define convex bodies $$Q \coloneqq \mathop{\mathrm{conv}}\set*{ w_1, \ldots, w_\ell }$$ and $$P' \coloneqq \mathop{\mathrm{conv}}\set*{ v_i \given i \in [\ell] \setminus I_u }, \quad Q' \coloneqq \mathop{\mathrm{conv}}\set*{ w_i \given i \in [\ell] \setminus I_u } .$$ Note that $Q$ and $Q'$ depend on $v, w_1, \ldots, w_\ell$. It holds that $$h(P', u) = \max_{i \in [\ell]\setminus I_u} \left<v_i, u\right> < \max_{i \in [\ell]} \left<v_i, u\right> = h(P, u) = h(F(P, u), u).$$ By continuity of both sides of this inequality in $u$ and $(v_i)_{i \in [\ell]}$, we can choose $\varepsilon > 0$ such that for all $v, w_1, \ldots, w_\ell \in \mathbb{R}^n$ with $d(u, v) < \varepsilon$ and $d(v_i, w_i) < \varepsilon$, $$h(Q', v) < h(Q, v) .$$ So $F(Q, v) \subseteq \mathop{\mathrm{conv}}\set*{ w_i \given i \in I_u }$, remembering that $F(Q, v)$ is spanned by vertices of $Q$ [@Hug Theorem 1.19]. If equality holds, then $v \in \prn*{\mathop{\mathrm{\overline{span}}}F(Q, v)}^\perp = \prn*{\mathop{\mathrm{\overline{span}}}\set*{ w_i \given i \in I_u }}^\perp$. Conversely, assume $v \in \prn*{\mathop{\mathrm{\overline{span}}}\set*{ w_i \given i \in I_u }}^\perp$. Since $F(Q, v)$ is nonempty and spanned by vertices $w_i$ with $i\in I_u$, we may assume that $1 \in I_u$ and $\left<w_1, v\right> = h(Q, v)$. For $v \in \prn*{\mathop{\mathrm{\overline{span}}}\set*{ w_i \given i \in I_u }}^\perp$, we get $$\left<w_i, v\right> = \left<w_1, v\right> + \left<w_i - w_1, v\right> = h(Q, v) \quad \text{for all $i \in I_u$} ,$$ hence $w_i\in F(Q,v)$, and therefore also $\mathop{\mathrm{conv}}\set*{ w_i \given i \in I_u }\subseteq F(Q, v)$ holds. ◻ The next lemma should be compared to [@Schneider (2.26)]. **Lemma 8**. *Let $P \in \mathcal{P}^n$ be a polytope and $u \in \mathbb{R}^n\setminus\set{0}$.
Then $$T(P, u) = N(P, F(P, u))\quad\text{and}\quad\mathop{\mathrm{TS}}(P, u) = \mathop{\mathrm{\overline{span}}}F(P, u).$$* *Proof.* If $v\in N(P, F(P, u))$, then $F(P,u)\subseteq F(P,v)$, and thus $v\in (\mathop{\mathrm{\overline{span}}}F(P, u))^\perp$. Hence $N(P, F(P, u))\subseteq(\mathop{\mathrm{\overline{span}}}F(P,u))^\perp$ and therefore $$\mathop{\mathrm{\overline{span}}}F(P, u) \subseteq N(P, F(P, u))^\perp \subseteq T(P, u)^\perp = \mathop{\mathrm{TS}}(P, u).$$ By Lemma [Lemma 7](#thm:facialStability){reference-type="ref" reference="thm:facialStability"}, there is an open neighborhood $U \subseteq \mathbb{R}^n\setminus\set{0}$ of $u$ such that for all $v \in U$, $$F(P, v) \subseteq F(P, u)$$ and such that for all $v \in U' \coloneqq U \cap \prn*{\mathop{\mathrm{\overline{span}}}F(P, u)}^\perp$, even equality holds. So $U' \subseteq N(P, F(P, u)) \subseteq \prn*{\mathop{\mathrm{\overline{span}}}F(P, u)}^\perp$. But $U'$ is open in $\prn*{\mathop{\mathrm{\overline{span}}}F(P, u)}^\perp$, so that $u \in \mathop{\mathrm{relint}}N(P, F(P, u))$. By definition of $T(P, u)$, $$T(P, u) = N(P, F(P, u))$$ and hence (by the preceding argument) $$\prn*{\mathop{\mathrm{\overline{span}}}F(P, u)}^\perp = \mathop{\mathrm{span}}N(P, F(P, u)) = \mathop{\mathrm{span}}T(P, u) .$$ Thus we get $\mathop{\mathrm{\overline{span}}}F(P, u) = T(P, u)^\perp = \mathop{\mathrm{TS}}(P, u)$. ◻ ## Mixed volumes and mixed area measures See [@Schneider; @Hug] for an introduction to mixed volumes and mixed area measures of convex bodies or differences of support functions of convex bodies. We start with some simple comments and conventions. **Conventions concerning tuples of sets** Most of the time, the ordering of a tuple will not be relevant for our purposes. This is why a *subtuple* of a tuple $\mathbfcal{A} = (A_1, \ldots, A_\ell)$, $\ell\in\mathbb{N}_0$, will denote any tuple $\mathbfcal{B}$ that is a prefix of a permutation of $\mathbfcal{A}$. 
The notation for this situation is $\mathbfcal{B} \le \mathbfcal{A}$. Every set $I \subseteq [\ell]$ can be uniquely written as $I = \set*{i_1, \ldots, i_m}$ such that $m\in\mathbb{N}_0$ and $(i_j)_{j \in [m]}$ is strictly increasing in $j$. Then we assign to $I$ a subtuple of $\mathbfcal{A}$, $$\mathbfcal{A}_I \coloneqq (A_{i_1}, \ldots, A_{i_m}) \le \mathbfcal{A} .$$ The *span* of a tuple of *nonempty* sets $\mathbfcal{A} = (A_1, \ldots, A_\ell)$ with $A_i\subseteq\mathbb{R}^n$ is $$\mathop{\mathrm{\overline{span}}}\mathbfcal{A} \coloneqq \mathop{\mathrm{\overline{span}}}\sum_{i=1}^\ell A_i = \sum_{i=1}^\ell \mathop{\mathrm{\overline{span}}}A_i ,$$ where $\sum_{i=1}^\ell A_i \coloneqq \set*{0}$ if $\ell = 0$. The *dimension* of a tuple means the dimension of its affine span, that is, $\dim \mathbfcal{A}\coloneqq\dim\mathop{\mathrm{\overline{span}}}\mathbfcal{A}$. The *size* of a tuple $\mathbfcal{A}$ is the number of its components and is written as $\abs{\mathbfcal{A}} \coloneqq \ell$. Whenever tuples of sets are nested into other tuples, we will omit brackets as convenient. For example, if $C, D$ are sets and $\mathbfcal{A} = (A_1, \ldots, A_\ell)$ is a tuple of sets, then $$(C, \mathbfcal{A}, D) \coloneqq (C, A_1, \ldots, A_\ell, D)$$ and therefore, for example, if the right term is well-defined, $$\mathop{\mathrm{V}}(C, \mathbfcal{A}, D) = \mathop{\mathrm{V}}(C, A_1, \ldots, A_\ell, D) .$$ If $\mathbfcal{A}, \mathbfcal{B}$ are tuples, we also write $$\mathbfcal{A} + \mathbfcal{B} \coloneqq (\mathbfcal{A}, \mathbfcal{B}) ,$$ using the nested-tuple convention as just described. If $k\in\mathbb{N}$, then for arbitrary $X$ (being a set, a measure, ...), $X[k]$ denotes the tuple consisting of $k$ copies of $X$, that is, $X[k] \coloneqq (X, \ldots, X)$.
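For concreteness, the conventions just introduced combine as follows (with hypothetical sets $A_1, \ldots, A_4$ and $C$, $D$, used for illustration only):

```latex
% A = (A_1, A_2, A_3, A_4) and I = {1, 3} yield the subtuple
\mathbfcal{A}_I = (A_1, A_3), \qquad \abs{\mathbfcal{A}} = 4,
\]
\[
% omitting brackets and using the repetition convention:
(C, \mathbfcal{A}_I, D) = (C, A_1, A_3, D),
\qquad B^n[2] = (B^n, B^n),
\qquad \mathbfcal{A}_I + (C) = (A_1, A_3, C).
```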
As usual we set $$S_{n-1}(K,\cdot)\coloneqq S(K[n-1],\cdot)\quad \text{ for } K\in\mathcal{K}^n.$$ If $f\colon A \to B$ is a function and $\mathbfcal{A} = (A_1, \ldots, A_\ell)$ is a tuple of elements or subsets of $A$, then we write $$f(\mathbfcal{A}) = f(A_1, \ldots, A_\ell) \coloneqq \prn*{f(A_1), \ldots, f(A_\ell)} .$$ If $\mathbfcal{A} = (A_1, \ldots, A_\ell)$ is a tuple and $r \in [\ell]$, the tuple obtained from $\mathbfcal{A}$ by removing the $r$-th entry (i.e. $A_r$) is denoted by $\mathbfcal{A}_{\setminus r}$. **Remark 9**. *For the discussion of mixed volumes and area measures it is usually assumed that $n\ge 1$ (or even $n\ge 2$). In view of induction arguments in the following, we set $$\mathop{\mathrm{V}}() := \mathop{\mathrm{V}}_0(\set*{0}) \coloneqq \mathcal{H}^{0}(\set*{0}) = 1 ,$$ where $\mathcal{H}^{0}$ is the zero-dimensional Hausdorff measure (counting measure). Moreover, for $n=1$ we define $\mathop{\mathrm{S}}()$ as the counting measure on $S^0 = \set*{-1, 1}$. Then e.g. relation [\[eqex\]](#eqex){reference-type="eqref" reference="eqex"} remains true.* *These definitions are consistent with the inductive definitions of volume and surface in [@Hug Definition 3.2].* **Remark 10**. *In order to simplify notation, we use the following conventions.* 1. *Let $\mu(\mathbfcal{C})$ be a measure which depends on a parameter $\mathbfcal{C}$. Then we write $\mu(\mathbfcal{C}, \cdot)$ or $\mu_{\mathbfcal{C}}(\cdot)$ as shorthands for $\mu(\mathbfcal{C})(\cdot)$.* 2. *Sometimes it is useful to pass the support function $h_K$ instead of the convex body $K \in\mathcal{K}^n$ to $\mathop{\mathrm{S}}$ or $\mathop{\mathrm{V}}$, i.e., write $\mathop{\mathrm{V}}(h_K, \mathbfcal{C})$ instead of $\mathop{\mathrm{V}}(K, \mathbfcal{C})$. Using this convention, $\mathop{\mathrm{V}}$ (and $\mathop{\mathrm{S}}$) can be extended to multilinear functions taking $n$ (or $n-1$) differences of support functions. 
For example, $$\mathop{\mathrm{V}}(h_K - h_L, \mathbfcal{C}) \coloneqq \mathop{\mathrm{V}}(K, \mathbfcal{C}) - \mathop{\mathrm{V}}(L, \mathbfcal{C}) .$$* 3. *In the following we write $\mathop{\mathrm{V}}$ for the mixed volume in $\mathbb{R}^n$, but we use the same symbol for the mixed volume in a subspace (the number of arguments already provides the relevant information). By the translation invariance of mixed volumes, the mixed volume of convex bodies lying in parallel subspaces is well-defined.* The mixed area measure of an $(n-1)$-tuple of polytopes can be written as a finite sum of weighted Dirac measures and the point mass (weight) of each atom is given as a mixed volume. We recall this relation in the remark below since it will be used in the following and a related (more general) result for general convex bodies is stated as Lemma [Lemma 16](#thm:areaVol){reference-type="ref" reference="thm:areaVol"}. **Remark 11**. * Let $P_1, \ldots, P_{n-1}\in \mathcal{P}^n$ and $P \coloneqq P_1 + \cdots + P_{n-1}$. Then the mixed area measure of $P_1, \ldots, P_{n-1}$ is a weighted sum of Dirac measures, that is, $$\mathop{\mathrm{S}}(P_1, \ldots, P_{n-1},\cdot) = \sum_{u \in \mathcal{N}_{n-1}(P)} \mathop{\mathrm{V}}(F(P_1, u), \ldots, F(P_{n-1}, u)) \delta_u ,$$ where $\mathcal{N}_{n-1}(P)$ is the set of all $u\in \mathbb{S}^{n-1}$ with $\dim F(P,u)=n-1$ (see [@Hug (4.2)]).* We will end this discussion by recalling a useful result which relates the mixed area measure $S_{n-1}(K;\cdot)$ of the $(n-1)$-tuple $(K,\ldots,K)$ to the (localized) $(n-1)$-dimensional Hausdorff measure $\mathcal{H}^{n-1}$ of the topological boundary $\partial K$ of an $n$-dimensional convex body $K$ in $\mathbb{R}^n$. **Definition 12**. * Let $n\ge 1$, $K \in \mathcal{K}^n$ a convex body and $\omega \subseteq \mathbb{S}^{n-1}$ a set. Then $$\tau(K, \omega) \coloneqq \bigcup_{u \in \omega} F(K, u)$$ is called the *reverse spherical image of $K$ at $\omega$* (compare [@Schneider p. 
88]).* **Lemma 13**. *Let $n \ge 1$. For every $n$-dimensional convex body $K \in \mathcal{K}^n$ and every Borel measurable set $\omega \subseteq \mathbb{S}^{n-1}$, $$\mathop{\mathrm{S}}_{n-1}(K,\omega) = \mathcal{H}^{n-1}(\tau(K, \omega)) .$$* *Proof.* See [@Schneider Theorem 4.2.3] or [@Hug Thm. 4.8]. ◻ Lemma [Lemma 13](#thm:areaHaus){reference-type="ref" reference="thm:areaHaus"} in combination with the well-known (diagonality) Lemma [\[thm:multilin\]](#thm:multilin){reference-type="ref" reference="thm:multilin"} has many applications, such as Lemmas [Lemma 15](#thm:maTau){reference-type="ref" reference="thm:maTau"} and [Lemma 16](#thm:areaVol){reference-type="ref" reference="thm:areaVol"}. **Lemma 14**. *[\[thm:multilin\]]{#thm:multilin label="thm:multilin"} Let $f, g\colon (\mathcal{K}^n)^k \to \mathbb{R}$ be functionals that are symmetric and multilinear (i.e. Minkowski additive and positively homogeneous in each of their $k \in \mathbb{N}_0$ components) and let $\mathbfcal{C}= (C_1, \ldots, C_k)$ be a tuple of convex bodies in $\mathbb{R}^n$. If for all choices of $\lambda = (\lambda_1,\ldots,\lambda_k)\in [0, \infty)^k$ the convex body $$L_\lambda \coloneqq \sum_{i=1}^k \lambda_i C_i$$ satisfies $f(L_\lambda[k]) = g(L_\lambda[k])$, then $f(\mathbfcal{C}) = g(\mathbfcal{C})$.* The following lemma states that the mixed area measures are locally determined, which will be crucial for the proof of Lemma [\[thm:realPruningLemma\]](#thm:realPruningLemma){reference-type="ref" reference="thm:realPruningLemma"} (and it will be used in the discussion of some of the examples). For the area measures of a single convex body (and Euclidean balls), the corresponding simple fact is well known (see [@Schneider Note 11 for Sect. 4.2]). **Lemma 15**. *Let $n \ge 1$. 
Let $\mathbfcal{C}= (C_1, \ldots, C_{n-1}), \mathbfcal{D} = (D_1, \ldots, D_{n-1})$ be tuples of convex bodies in $\mathbb{R}^n$, and let $\omega \subseteq \mathbb{S}^{n-1}$ be a Borel set such that $$\tau(C_i, \omega) = \tau(D_i, \omega),\quad i \in [n-1].$$ Then $$\mathop{\mathrm{S}}(\mathbfcal{C})(\omega) = \mathop{\mathrm{S}}(\mathbfcal{D})(\omega) .$$* *Proof.* The case $n = 1$ follows from the fact that $\mathbfcal{C}= \mathbfcal{D}$ are empty tuples. We will prove the lemma for the case that $n \ge 2$ and $C_i = D_i$ for $i \ne 1$. This allows one to replace $C_1$ by $D_1$, yielding $$\begin{aligned} \label{eq:prepequal} \mathop{\mathrm{S}}(C_1, C_2, \ldots, C_{n-1})(\omega) = \mathop{\mathrm{S}}(D_1, C_2, \ldots, C_{n-1})(\omega) .\end{aligned}$$ Using symmetry of $\mathop{\mathrm{S}}$, we can afterwards replace $C_2$ by $D_2$, and so on until we have replaced all $C_i$ by $D_i$. We start with a preparatory remark. Let $K \in \mathcal{K}^n$, $\omega\subseteq\mathbb{S}^{n-1}$ and $u\in \omega$. We show that $F(K,u)=F(\tau(K, \omega), u)$. First, observe that $F(K, u) \subseteq \tau(K, \omega) \subseteq K$. Hence, $h_{\tau(K, \omega)}(u) = h_K(u)$ and $$\begin{aligned} F(K, u) &= \set*{ x \in K \given \left<x, u\right> = h_K(u) } = \set*{ x \in \tau(K, \omega) \given \left<x, u\right> = h_{\tau(K, \omega)}(u) }\\ & = F(\tau(K, \omega), u) ,\end{aligned}$$ where we again used that $F(K, u) \subseteq \tau(K, \omega)$. By Minkowski additivity of the mixed area measure in its first component, it suffices to show that [\[eq:prepequal\]](#eq:prepequal){reference-type="eqref" reference="eq:prepequal"} holds when $C_1, D_1$ are full-dimensional.
To see this, replace $C_1$ by $C_1 + {B^{n}}$ and $D_1$ by $D_1 + {B^{n}}$ and note that $\tau(C_1 + {B^{n}},\omega)=\tau(D_1 + {B^{n}},\omega)$, since by the preparatory remark for any $u\in \omega$ we have $$\begin{aligned} F(C_1+B^n,u)&=F(C_1,u)+F(B^n,u)=F(\tau(C_1,\omega),u)+F(B^n,u)\\ &=F(\tau(D_1,\omega),u)+F(B^n,u) = F(D_1,u)+F(B^n,u)=F(D_1+B^n,u). \end{aligned}$$ For every $(\lambda_i)_{i\in[n-1]}\in[0, \infty)^{n-1}$, we claim that $$\begin{aligned} \label{eq:mt1} \mathop{\mathrm{S}}_{n-1}\prn*{\sum_{i=1}^{n-1}\lambda_i C_i}(\omega) = \mathop{\mathrm{S}}_{n-1}\prn*{\lambda_1D_1 + \sum_{i=2}^{n-1}\lambda_i C_i}(\omega) .\end{aligned}$$ If this holds, Lemma [\[thm:multilin\]](#thm:multilin){reference-type="ref" reference="thm:multilin"} will show $$\mathop{\mathrm{S}}(C_1, C_2, \ldots, C_{n-1})(\omega) = \mathop{\mathrm{S}}(D_1, C_2, \ldots, C_{n-1})(\omega) .$$ If $\lambda_1 = 0$, [\[eq:mt1\]](#eq:mt1){reference-type="ref" reference="eq:mt1"} clearly holds. Otherwise, $\sum_{i=1}^{n-1}\lambda_i C_i$ and $\lambda_1 D_1 + \sum_{i=2}^{n-1}\lambda_i C_i$ are full-dimensional and by Lemma [Lemma 13](#thm:areaHaus){reference-type="ref" reference="thm:areaHaus"} and the definition of $\tau$ it suffices to show that, for all $u \in \omega$, $$\begin{aligned} F\prn*{\sum_{i=1}^{n-1}\lambda_i C_i, u} &= \sum_{i=1}^{n-1}\lambda_i F(C_i, u) \overset{(!)}{=} \lambda_1 F(D_1, u) + \sum_{i=2}^{n-1}\lambda_i F(C_i, u) \\ &= F\prn*{\lambda_1 D_1 + \sum_{i=2}^{n-1}\lambda_i C_i, u} ,\end{aligned}$$ where we used at (!) that by the preparatory remark and the assumption we have $$F(C_1, u) = F(\tau(C_1, \omega), u) = F(\tau(D_1, \omega), u) = F(D_1, u) ,$$ concluding the proof. ◻ The next lemma is a simple consequence of Lemma [Lemma 15](#thm:maTau){reference-type="ref" reference="thm:maTau"}, but we will not need it in the current work. **Lemma 16**. *Assume $n \ge 1$. Let $K_1, \ldots, K_{n-1}\subset\mathbb{R}^n$ be convex bodies and $u \in \mathbb{S}^{n-1}$. 
Then $$\mathop{\mathrm{S}}(K_1, \ldots, K_{n-1})(\set*{u}) = \mathop{\mathrm{V}}(F(K_1, u), \ldots, F(K_{n-1}, u)) .$$* *Proof.* By multilinearity and symmetry of $\mathop{\mathrm{S}}$ and $\mathop{\mathrm{V}}$ and linearity of $F$, it suffices by Lemma [\[thm:multilin\]](#thm:multilin){reference-type="ref" reference="thm:multilin"} to prove the statement for $K_1 = \cdots = K_{n-1}$, i.e. to prove that $$\mathop{\mathrm{S}}_{n-1}(K_1)(\set*{u}) = \mathop{\mathrm{V}}_{n-1}(F(K_1, u)) ,$$ where $\mathop{\mathrm{V}}_{n-1}$ is the volume (intrinsic volume of order $n-1$) in an $(n-1)$-dimensional subspace of $\mathbb{R}^n$. Consider the truncated convex cone $$C \coloneqq \set*{ x \in {B^{n}}\given \left<x, u\right> \le -\frac{1}{2}\norm{x} } ,$$ which is a full-dimensional convex body satisfying $F(C, u) = \set*{0}$. So $$\tau(K_1, \set*{u}) = F(K_1, u) = F(C + K_1, u) = \tau(C + K_1, \set*{u}) .$$ By Lemmas [Lemma 15](#thm:maTau){reference-type="ref" reference="thm:maTau"} and [Lemma 13](#thm:areaHaus){reference-type="ref" reference="thm:areaHaus"} and since $\dim(C+K_1)=n$, it follows that $$\mathop{\mathrm{S}}_{n-1}(K_1)(\set*{u}) = \mathop{\mathrm{S}}_{n-1}(C + K_1)(\set*{u}) = \mathcal{H}^{n-1}(F(C + K_1, u)) = \mathop{\mathrm{V}}_{n-1}(F(K_1, u)) ,$$ which completes the argument. ◻ ## Reduction formulas We will use dimensional induction to prove assertions about mixed area measures. To succeed in this endeavor, we have to relate mixed area measures in $\mathbb{R}^n$ to mixed area measures in subspaces. By using basic integral geometry, the following two reduction formulas can be obtained. Recall that we write $\mathop{\mathrm{V}}$ for the mixed volume in $\mathbb{R}^n$ and use the same symbol for the mixed volume in a subspace (the number of arguments already provides the relevant information). For a linear subspace $L\subseteq\mathbb{R}^n$, the orthogonal projection to $L$ is denoted by $\pi_L:\mathbb{R}^n\to L$. **Lemma 17**. 
*[\[thm:mvRed\]]{#thm:mvRed label="thm:mvRed"} Let $\mathbfcal{C}= (C_1, \ldots, C_{n})$ be a tuple of convex bodies in $\mathbb{R}^n$, and let $k \in [n]\cup\set*{0}$ be such that $\mathop{\mathrm{\overline{span}}}\mathbfcal{C}_{[k]}$ is contained in a linear subspace $E \subseteq \mathbb{R}^n$ of dimension $k$. Then $$\begin{pmatrix} n \\ k \end{pmatrix} \mathop{\mathrm{V}}(\mathbfcal{C}) = \mathop{\mathrm{V}}(\mathbfcal{C}_{[k]}) \cdot \mathop{\mathrm{V}}(\pi_{E^\perp}(\mathbfcal{C}_{[n]\setminus[k]})) .$$* *Proof.* The cases $k \in \set*{0, n}$ are trivial. For the remaining cases, use the translation invariance of $\mathop{\mathrm{V}}$ and apply [@Schneider Theorem 5.3.1]. ◻ In dealing with mixed area measures, we will indicate by our notation in which subspace the measure is applied. For an $\ell$-dimensional linear subspace $L\subset\mathbb{R}^n$, $\ell\ge 1$, we write $\mathop{\mathrm{S}}_L$ for the mixed area measure in $L$, which is evaluated at $\ell-1$ convex bodies in $L$ and Borel subsets of $\mathbb{S}^{n-1}\cap L$. Moreover, we define $\mathop{\mathrm{S}}_L'$ as the Borel measure on $\mathbb{S}^{n-1}$ defined by $$\mathop{\mathrm{S}}_L'(C_1,\ldots,C_{\ell-1})(\omega)\coloneqq S_L(C_1,\ldots,C_{\ell-1})(\omega\cap L)$$ for convex bodies $C_1,\ldots,C_{\ell-1}\subset L$ and Borel sets $\omega\subseteq \mathbb{S}^{n-1}$. The following proposition will be essential for the proof of our main result in Section [7](#sec:7){reference-type="ref" reference="sec:7"}. **Proposition 18**. *[\[thm:maRed\]]{#thm:maRed label="thm:maRed"}[\[thm:splittingSc\]]{#thm:splittingSc label="thm:splittingSc"} Assume $n \in\mathbb{N}$. Let $\mathbfcal{C}= (C_1, \ldots, C_{n-1})$ be a tuple of convex bodies in $\mathbb{R}^n$, and let $k \in [n-1]\cup\set*{0}$ be such that $\mathop{\mathrm{\overline{span}}}\mathbfcal{C}_{[k]}$ is contained in a linear subspace $E \subseteq \mathbb{R}^n$ of dimension $k$. 
Then $$\begin{pmatrix} n-1 \\ k \end{pmatrix} \mathop{\mathrm{S}}(\mathbfcal{C}) = \mathop{\mathrm{V}}(\mathbfcal{C}_{[k]}) \cdot \mathop{\mathrm{S}}'_{E^\perp}(\pi_{E^\perp}(\mathbfcal{C}_{[n-1]\setminus[k]})) .$$ In particular, if $\dim \mathbfcal{C}_{[k]} < k$, then $\mathop{\mathrm{S}}(\mathbfcal{C}) = 0$.* *Proof.* The case $k=0$ is trivial. The assertion for $k=n-1$ is clear for polytopes (see Remark [Remark 11](#rem:maPoly){reference-type="ref" reference="rem:maPoly"}); the general case follows by approximation. So we can assume that $n \ge 3$ and $k \in [n-2]$. Let $C_n \in \mathcal{K}^n$. Then by Lemma [\[thm:mvRed\]](#thm:mvRed){reference-type="ref" reference="thm:mvRed"}, $$\begin{pmatrix} n \\ k \end{pmatrix} \mathop{\mathrm{V}}(C_1, \ldots, C_n) = \mathop{\mathrm{V}}(C_1, \ldots, C_k) \cdot \mathop{\mathrm{V}}(\pi_{E^\perp}(C_{k+1}, \ldots, C_n)) .$$ Expressing the mixed volumes by mixed area measures, we obtain $$\begin{aligned} & \begin{pmatrix} n \\ k \end{pmatrix} \frac{n - k}{n}\int h_{C_n} \mathop{}\!\mathrm{d}\mathop{\mathrm{S}}(C_1, \ldots, C_{n-1}) \\ &= \mathop{\mathrm{V}}(C_1, \ldots, C_k) \cdot \int h_{\pi_{E^\perp} C_n} \mathop{}\!\mathrm{d}\mathop{\mathrm{S}}_{E^\perp}(\pi_{E^\perp}(C_{k+1}, \ldots, C_{n-1})) .\end{aligned}$$ Noting that $h_{\pi_{E^\perp} C_n} = h_{C_n}$ on $E^\perp$, we find that $$\begin{aligned} \int h_{\pi_{E^\perp} C_n} \mathop{}\!\mathrm{d}\mathop{\mathrm{S}}_{E^\perp}(\pi_{E^\perp}(C_{k+1}, \ldots, C_{n-1})) &= \int h_{C_n} \mathop{}\!\mathrm{d}\mathop{\mathrm{S}}_{E^\perp}(\pi_{E^\perp}(C_{k+1}, \ldots, C_{n-1})) \\ &= \int h_{C_n} \mathop{}\!\mathrm{d}\mathop{\mathrm{S}}_{E^\perp}'(\pi_{E^\perp}(C_{k+1}, \ldots, C_{n-1})) \end{aligned}$$ and conclude that $$\begin{aligned} & \begin{pmatrix} n - 1 \\ k \end{pmatrix} \int h_{C_n} \mathop{}\!\mathrm{d}\mathop{\mathrm{S}}(C_1, \ldots, C_{n-1}) \\ &= \mathop{\mathrm{V}}(C_1, \ldots, C_k) \cdot \int h_{C_n}
\mathop{}\!\mathrm{d}\mathop{\mathrm{S}}'_{E^\perp}(\pi_{E^\perp}(C_{k+1}, \ldots, C_{n-1})) .\end{aligned}$$ Because $C_n$ is an arbitrary convex body and differences of support functions are dense in $C(\mathbb{S}^{n-1})$, the claim follows. ◻ ## Criticality Criticality is a useful concept that describes dimensionality conditions on arrangements of convex bodies. Shenfeld and van Handel [@SvH23+] employed criticality in their investigation of equality cases in the Alexandrov--Fenchel inequality for polytopes. We will slightly deviate from their terminology in that we call "semicritical" what they called "subcritical", and we say "subcritical" to describe a situation which is "not critical". The most elementary occurrence and motivation for the terminology is the following result. **Lemma 19**. *Let $\mathbfcal{C} = (K_1, \ldots, K_n)$ be a tuple of convex bodies in $\mathbb{R}^n$. Then the following are equivalent:* 1. *$\mathop{\mathrm{V}}(\mathbfcal{C}) > 0$.* 2. *There are segments $S_i \subseteq K_i$ ($i \in [n]$) with linearly independent directions.* 3. *Whenever $\mathbfcal{D} \le \mathbfcal{C}$, then $\dim \mathop{\mathrm{\overline{span}}}\mathbfcal{D} \ge \abs{\mathbfcal{D}}$.* *Proof.* See [@Schneider Theorem 5.1.8]. ◻ Condition (3) in Lemma [Lemma 19](#thm:mvVanish){reference-type="ref" reference="thm:mvVanish"} suggests the definition of a "semicritical" tuple of convex bodies. Let us recall concepts of criticality and describe some consequences. **Definition 20**. *Let $\ell \in \mathbb{N}_0$. Let $\mathbfcal{A} = (A_1, \ldots, A_\ell)$ be a tuple of nonempty subsets of $\mathbb{R}^n$. Then $\mathbfcal{A}$ is called* 1. **semicritical* if for all $() \ne\mathbfcal{B} \le \mathbfcal{A}$ we have $\dim \mathop{\mathrm{\overline{span}}}\mathbfcal{B} \ge \abs{\mathbfcal{B}}$,* 2. **critical* if for all $() \ne \mathbfcal{B} \le \mathbfcal{A}$ we have $\dim\mathop{\mathrm{\overline{span}}}\mathbfcal{B} \ge \abs{\mathbfcal{B}} + 1$,* 3.
**supercritical* if for all $() \ne \mathbfcal{B} \le \mathbfcal{A}$ we have $\dim\mathop{\mathrm{\overline{span}}}\mathbfcal{B} \ge \abs{\mathbfcal{B}} + 2$,* 4. **subcritical* if it is not critical.* *Abusing notation, we write $\mathop{\mathrm{V}}(\mathbfcal{A}) > 0$ to say that $\mathbfcal{A}$ is semicritical.* The following lemma is provided in [@HugReichert23+ Lem. 3.2] (see also the preceding remarks there). **Lemma 21**. *Let $\ell \in \mathbb{N}_0$, and let $\mathbfcal{A} = (A_1, \ldots, A_\ell)$ be a tuple of nonempty subsets of  $\mathbb{R}^n$.* 1. *Subtuples of (super-, semi-)critical tuples are also (super-, semi-)critical.[\[it:critSimple3\]]{#it:critSimple3 label="it:critSimple3"}* 2. *Supercriticality implies criticality, which implies semicriticality.[\[it:critSimple4\]]{#it:critSimple4 label="it:critSimple4"}* 3. *The empty tuple is supercritical.[\[it:critSimple6\]]{#it:critSimple6 label="it:critSimple6"}* 4. *(Super-, Semi-)Criticality is invariant under permutations of $\mathbfcal{A}$.[\[it:critSimple7\]]{#it:critSimple7 label="it:critSimple7"}* 5. *(Super-, Semi-)Criticality is invariant under simultaneous affine isomorphisms and argumentwise translations.[\[it:critSimple8\]]{#it:critSimple8 label="it:critSimple8"}* 6. *(Super-, Semi-)Criticality is preserved if the sets in $\mathbfcal{A}$ are replaced by supersets.[\[it:critSimple9\]]{#it:critSimple9 label="it:critSimple9"}* 7. *Let $\mathbfcal{A}$ be critical and $A_{\ell + 1} \subseteq \mathbb{R}^n$ be nonempty. Then $(A_1, \ldots, A_{\ell + 1})$ is semicritical if and only if $A_{\ell + 1}$ is at least one-dimensional.[\[it:critSimple1\]]{#it:critSimple1 label="it:critSimple1"}* 8. *Let $\mathbfcal{A}$ be supercritical and $A_{\ell + 1} \subseteq \mathbb{R}^n$ be nonempty. Then $(A_1, \ldots, A_{\ell+1})$ is critical if and only if $A_{\ell+1}$ is at least two-dimensional.[\[it:critSimple2\]]{#it:critSimple2 label="it:critSimple2"}* 9. 
*If all sets $A_i$ are full-dimensional, then $\mathbfcal{A}$ is supercritical if and only if $\ell \le n-2$ or $\mathbfcal{A} = ()$.[\[it:critSimple5\]]{#it:critSimple5 label="it:critSimple5"}* The notation '$\mathop{\mathrm{V}}(\mathbfcal{A}) > 0$' suggests that semicriticality might obey laws similar to those governing mixed volumes. In particular, we might hope for some kind of reduction theorem in analogy to [\[thm:mvRed\]](#thm:mvRed){reference-type="ref" reference="thm:mvRed"}. As the next result shows, this hope is not in vain. The following Lemmas [Lemma 22](#thm:critRed){reference-type="ref" reference="thm:critRed"} and [Lemma 24](#thm:critAdd){reference-type="ref" reference="thm:critAdd"} will be crucial for the arguments in Sections [6](#sec:6){reference-type="ref" reference="sec:6"} and [7](#sec:7){reference-type="ref" reference="sec:7"}. Lemma [Lemma 23](#thm:critIndep){reference-type="ref" reference="thm:critIndep"} is used in the proof of Lemma [Lemma 24](#thm:critAdd){reference-type="ref" reference="thm:critAdd"}. **Lemma 22** (Semicritical reduction). *Let $\ell\in\mathbb{N}_0$. Let $\mathbfcal{A} = (A_1, \ldots, A_\ell)$ be a tuple of nonempty subsets of $\mathbb{R}^n$ and let $\mathop{\mathrm{\overline{span}}}\mathbfcal{A}_{[k]}$ be contained in a linear subspace $E$ of dimension $k \in \mathbb{N}_0$. Then the following are equivalent:* 1. *$\mathop{\mathrm{V}}(\mathbfcal{A}) > 0$;* 2. *$\mathop{\mathrm{V}}(\mathbfcal{A}_{[k]}) > 0$ and $\mathop{\mathrm{V}}(\pi_{E^\perp}(\mathbfcal{A}_{[\ell]\setminus[k]})) > 0$.* *Proof.* After applying suitable translations, we may assume that all sets contain $0$. "$\implies$": Assume that $\mathop{\mathrm{V}}(\mathbfcal{A}) > 0$. Then by Lemma [Lemma 21](#thm:critSimple){reference-type="ref" reference="thm:critSimple"}, $\mathop{\mathrm{V}}(\mathbfcal{A}_{[k]}) > 0$. It remains to show the second claim. For this, let $I \subseteq [\ell]\setminus[k]$.
Then using the dimension formula from linear algebra and semicriticality of $\mathbfcal{A}$, $$\begin{aligned} \dim \mathop{\mathrm{\overline{span}}}\pi_{E^\perp}(\mathbfcal{A})_I &= \dim \pi_{E^\perp}\prn*{\mathop{\mathrm{\overline{span}}}\mathbfcal{A}_{I \cup [k]}} \ge \dim \mathop{\mathrm{\overline{span}}}\mathbfcal{A}_{I \cup [k]} - \dim\ker\pi_{E^\perp}\nonumber \\&\ge \abs{I} + k - k. \end{aligned}$$ "$\impliedby$": Now assume that $\mathop{\mathrm{V}}(\mathbfcal{A}_{[k]}) > 0$ and $\mathop{\mathrm{V}}(\pi_{E^\perp}(\mathbfcal{A}_{[\ell]\setminus[k]})) > 0$. Let $I \subseteq [\ell]$ and consider the linear map $\Phi \colon \mathop{\mathrm{\overline{span}}}\mathbfcal{A}_I \to \mathbb{R}^n$, $x \mapsto \pi_{E^\perp}(x)$. It satisfies $$\ker \Phi = E \cap \mathop{\mathrm{\overline{span}}}\mathbfcal{A}_I \supseteq \mathop{\mathrm{\overline{span}}}\mathbfcal{A}_{I \cap [k]}$$ and $$\mathop{\mathrm{im}}\Phi = \mathop{\mathrm{\overline{span}}}\pi_{E^\perp}(\mathbfcal{A}_I) = \mathop{\mathrm{\overline{span}}}\pi_{E^\perp}(\mathbfcal{A}_{I \setminus [k]}) .$$ The dimension formula together with the assumption shows $$\begin{aligned} \dim\mathop{\mathrm{\overline{span}}}\mathbfcal{A}_I & = \dim \ker\Phi + \dim \mathop{\mathrm{im}}\Phi \\ &\ge \dim\mathop{\mathrm{\overline{span}}}\mathbfcal{A}_{I \cap [k]} + \dim \mathop{\mathrm{\overline{span}}}\pi_{E^\perp}(\mathbfcal{A}_{I \setminus [k]}) \\ &= \dim\mathop{\mathrm{\overline{span}}}\mathbfcal{A}_{I \cap [k]} + \dim\mathop{\mathrm{\overline{span}}}\prn*{\pi_{E^\perp}(\mathbfcal{A})}_{I \setminus [k]} \\ &\ge \abs{I \cap [k]} + \abs{I \setminus [k]} = \abs{I} ,\end{aligned}$$ which shows that $\mathbfcal{A}$ is semicritical. ◻ Having proved the reduction Lemma [Lemma 22](#thm:critRed){reference-type="ref" reference="thm:critRed"}, we can inductively prove an analogue of Lemma [Lemma 19](#thm:mvVanish){reference-type="ref" reference="thm:mvVanish"}. **Lemma 23**. 
*Let $\mathbfcal{A} = (A_1, \ldots, A_\ell)$ be a tuple of nonempty subsets of $\mathbb{R}^n$. Then the following are equivalent:* 1. *$\mathop{\mathrm{V}}(\mathbfcal{A}) > 0$.* 2. *There are pairs of points $(x_i, y_i) \in A_i \times A_i$ ($i \in [\ell]$) such that the tuple $(y_i - x_i)_{i \in [\ell]}$ consists of linearly independent vectors.* *Proof.* "$\impliedby$": Clearly, whenever $I \subseteq [\ell]$, $$\dim \mathop{\mathrm{\overline{span}}}\mathbfcal{A}_I \ge \dim \mathop{\mathrm{span}}\set*{ y_i - x_i \given i \in I } = \abs{I} .$$ "$\implies$": We may assume that every set $A_i$ contains $0$. We proceed by induction over the dimension $n$. Assume that the claim is true for all dimensions smaller than $n$. Then we distinguish three cases: - If $n = 0$, we have nothing to show since the empty family is clearly linearly independent. - If $n > 0$ and the tuple is critical, let $E$ be an arbitrary $(n-1)$-dimensional linear subspace. Then $\pi_E(\mathbfcal{A})$ is still semicritical because the kernel of the projection is one-dimensional. The inductive hypothesis guarantees the existence of pairs of points $(x_i, y_i) \in A_i \times A_i$ ($i \in [\ell]$) such that $\pi_E(y_i - x_i) \in E$ are linearly independent. But then $(y_i - x_i)$ are linearly independent, too. - If $n > 0$ and the tuple is subcritical, we find $\varnothing \ne I \subseteq [\ell]$ with $\dim \mathop{\mathrm{\overline{span}}}\mathbfcal{A}_I = \abs{I}$ (subcriticality yields "$\le$", semicriticality of $\mathbfcal{A}$ yields "$\ge$"). If $\ell=n$ and $\dim \mathop{\mathrm{\overline{span}}}\mathbfcal{A}_I=\abs{I}$ for all $\varnothing \ne I \subseteq [n]$, then clearly there exist points $(x_i, y_i) \in A_i \times A_i$ ($i \in [\ell]$) such that the family $(y_i - x_i)_{i \in [\ell]}$ is linearly independent. Otherwise, without loss of generality, $\mathbfcal{A}_I$ is a prefix of $\mathbfcal{A}$, so that $I = [k]$ for some $0 < k < n$.
After defining the linear subspace $E := \mathop{\mathrm{\overline{span}}}\mathbfcal{A}_{[k]}$ of dimension $k$, we can apply Lemma [Lemma 22](#thm:critRed){reference-type="ref" reference="thm:critRed"} to deduce that $$\mathop{\mathrm{V}}(\mathbfcal{A}_{[k]}), \mathop{\mathrm{V}}(\pi_{E^\perp} (\mathbfcal{A}_{[\ell]\setminus[k]})) > 0 .$$ Because $0 < k < n$, two applications of the inductive hypothesis yield pairs of points $(x_i, y_i) \in A_i \times A_i$ ($i \in [\ell]$) such that - $y_1 - x_1, \ldots, y_k - x_k$ are linearly independent and - $\pi_{E^\perp} (y_{k+1} - x_{k+1}), \ldots, \pi_{E^\perp}(y_\ell - x_\ell)$ are linearly independent. Hence it follows that $y_1 - x_1, \ldots, y_\ell - x_\ell$ are linearly independent. Since these cases are exhaustive, the proof is complete. ◻ In analogy to the additivity of the mixed volume, we obtain the following result. **Lemma 24** (Semicritical additivity). *Let $\mathbfcal{A} = (A_1, A_2, \ldots, A_\ell)$ be a tuple of nonempty subsets of $\mathbb{R}^n$ and $\ell \ge 1$. Furthermore, let $A_1 = B + C$. Then the following are equivalent.* 1. *$\mathop{\mathrm{V}}(A_1, A_2, \ldots, A_\ell) > 0$.* 2. *$\mathop{\mathrm{V}}(B, A_2, \ldots, A_\ell) > 0$ or $\mathop{\mathrm{V}}(C, A_2, \ldots, A_\ell) > 0$.* *Proof.* "$\impliedby$" follows from $\mathop{\mathrm{\overline{span}}}B, \mathop{\mathrm{\overline{span}}}C \subseteq \mathop{\mathrm{\overline{span}}}A_1$. "$\implies$": In view of Lemma [Lemma 23](#thm:critIndep){reference-type="ref" reference="thm:critIndep"}, we find pairs of points $(x_i, y_i)\in A_i\times A_i$ for $i \in [\ell]$ such that the differences $y_i - x_i$ are linearly independent. In particular, $y_1 - x_1$ is not contained in $E \coloneqq \mathop{\mathrm{span}}\set*{y_i - x_i \given i \in [\ell]\setminus\set{1}}$. We can find $b, b' \in B$ and $c, c' \in C$ such that $x_1 = b + c$ and $y_1 = b' + c'$. Then either $b' - b$ or $c' - c$ is not contained in $E$ --- we may assume that $b' - b \notin E$. 
But then $(b'-b,y_2-x_2,\ldots,y_{\ell}-x_{\ell})$ are linearly independent, which yields $\mathop{\mathrm{V}}(B, A_2, \ldots, A_\ell) > 0$ via Lemma [Lemma 23](#thm:critIndep){reference-type="ref" reference="thm:critIndep"}. ◻ ## Support of mixed area measures The support of mixed area measures is the central topic of this work. This section provides some of its properties that will be needed. In the special case of polytopes, Theorem [\[thm:suppChar\]](#thm:suppChar){reference-type="ref" reference="thm:suppChar"} is known and easy to verify. For the sake of completeness and to familiarize the reader with our notation, we include the argument. **Lemma 25**. *Let $n\ge 1$. Let $\mathbfcal{P} = (P_1, \ldots, P_{n-1})$ be a tuple of polytopes in $\mathbb{R}^n$. Then $$\mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\mathbfcal{P}) = \mathop{\mathrm{cl}}\mathop{\mathrm{ext}}\mathbfcal{P} .$$* *Proof.* For $n=1$ the assertion is clear by our definitions. Let $n\ge 2$. By Remark [Remark 11](#rem:maPoly){reference-type="ref" reference="rem:maPoly"}, $$\begin{aligned} \label{eq:su1} \mathop{\mathrm{S}}(\mathbfcal{P}) = \sum_{u \in \mathcal{N}_{n-1}(P_1 + \cdots + P_{n-1})} \mathop{\mathrm{V}}(F(P_1, u), \ldots, F(P_{n-1}, u)) \delta_u, \end{aligned}$$ and for all $u \in \mathbb{S}^{n-1}$, Lemmas [Lemma 8](#thm:touchingConePoly){reference-type="ref" reference="thm:touchingConePoly"} and [Lemma 19](#thm:mvVanish){reference-type="ref" reference="thm:mvVanish"} show the equivalence $$\begin{aligned} \label{eq:su2} \mathop{\mathrm{V}}(F(P_1, u), \ldots, F(P_{n-1}, u)) > 0 \, \iff \, \mathop{\mathrm{V}}(\mathop{\mathrm{TS}}(P_1, u), \ldots, \mathop{\mathrm{TS}}(P_{n-1}, u)) > 0, \end{aligned}$$ the second statement by definition being equivalent to $u \in \mathop{\mathrm{ext}}\mathbfcal{P}$. So if $u \in \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\mathbfcal{P})$, then $\mathop{\mathrm{V}}(\mathop{\mathrm{TS}}(P_1, u), \ldots, \mathop{\mathrm{TS}}(P_{n-1}, u)) > 0$, i.e.
$u \in \mathop{\mathrm{ext}}\mathbfcal{P}$. Therefore, $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\mathbfcal{P}) \subseteq \mathop{\mathrm{ext}}\mathbfcal{P}\subseteq \mathop{\mathrm{cl}}\mathop{\mathrm{ext}}\mathbfcal{P}$. Conversely, assume $u \in \mathop{\mathrm{ext}}\mathbfcal{P}$. Then $\mathop{\mathrm{V}}(F(P_1, u), \ldots, F(P_{n-1}, u)) > 0$ follows from [\[eq:su2\]](#eq:su2){reference-type="eqref" reference="eq:su2"}. In particular, $$\dim F\prn*{\sum_{i=1}^{n-1} P_i, u} = \dim \sum_{i=1}^{n-1} F(P_i, u) \ge n-1 .$$ So $u \in \mathcal{N}_{n-1}\prn*{\sum_{i=1}^{n-1} P_i}$ and $$\mathop{\mathrm{S}}(\mathbfcal{P})(\set*{u}) \ge \mathop{\mathrm{V}}(F(P_1, u), \ldots, F(P_{n-1}, u)) > 0 ,$$ which shows that $u \in \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\mathbfcal{P})$, hence $\mathop{\mathrm{ext}}\mathbfcal{P}\subseteq \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\mathbfcal{P})$. The claim follows, since $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\mathbfcal{P})$ is closed. ◻ Next we describe the support of the mixed area measure of a convex body that is defined as an integral average in terms of its support function. **Theorem 26**. *Assume that $n \ge 2$. Let $C_1 \in \mathcal{K}^n$ be a convex body, $\mathbfcal{C}= (C_2, \ldots, C_{n-1})$ an $(n-2)$-tuple of convex bodies and $\mu$ a finite Borel measure on $\mathcal{K}^n$ with bounded support such that $$h_{C_1}(x) = \int h_K(x) \,\mu(\mathop{}\!\mathrm{d}K),\quad x\in\mathbb{R}^n .$$ Then $$\mathop{\mathrm{S}}(C_1, \mathbfcal{C})=\int \mathop{\mathrm{S}}(K,\mathbfcal{C})\, \mu(\mathop{}\!\mathrm{d}K)$$ and $$\mathop{\mathrm{supp}}\mathop{\mathrm{S}}_{C_1, \mathbfcal{C}} = \mathop{\mathrm{cl}}\bigcup_{K \in \mathop{\mathrm{supp}}\mu} \mathop{\mathrm{supp}}\mathop{\mathrm{S}}_{K, \mathbfcal{C}} .$$* *Proof.* Let $A \subseteq \mathbb{S}^{n-1}$ be closed. Let $d(u, A)$ denote the Euclidean distance of $u\in\mathbb{S}^{n-1}$ from $A$. Then the continuous function $$f_A \colon \mathbb{S}^{n-1}\to [0, \infty), \quad u \mapsto d(u, A),$$ satisfies $f_A^{-1}(\set*{0}) = A$.
If $f$ is a difference of support functions, we can apply Fubini's theorem and the compactness of the support of $\mu$ to obtain $$\begin{aligned} \int f \mathop{}\!\mathrm{d}\mathop{\mathrm{S}}_{C_1, \mathbfcal{C}} &= \int h_{C_1} \mathop{}\!\mathrm{d}\mathop{\mathrm{S}}_{f, \mathbfcal{C}} \\ &= \int \int h_K(x) \, \mu(\mathop{}\!\mathrm{d}K) \mathop{\mathrm{S}}_{f, \mathbfcal{C}}(\mathop{}\!\mathrm{d}x) \\ &= \int \int h_K(x) \,\mathop{\mathrm{S}}_{f, \mathbfcal{C}}(\mathop{}\!\mathrm{d}x) \,\mu(\mathop{}\!\mathrm{d}K) \\ &= \int \int f(x) \,\mathop{\mathrm{S}}_{K, \mathbfcal{C}}(\mathop{}\!\mathrm{d}x)\, \mu(\mathop{}\!\mathrm{d}K). \end{aligned}$$ The same equality holds for all continuous functions $f \colon \mathbb{S}^{n-1}\to \mathbb{R}$ by approximation, and in particular for $f_A$ as defined above. Thus we have verified the first assertion. Now we turn to the second claim. "$\subseteq$": Set $f \coloneqq f_{\mathop{\mathrm{cl}}\bigcup_{K \in \mathop{\mathrm{supp}}\mu} \mathop{\mathrm{supp}}\mathop{\mathrm{S}}_{K, \mathbfcal{C}}}$. Then $$\int f \mathop{}\!\mathrm{d}\mathop{\mathrm{S}}_{C_1, \mathbfcal{C}} = \int\int f(x) \,\mathop{\mathrm{S}}_{K, \mathbfcal{C}}(\mathop{}\!\mathrm{d}x)\, \mu(\mathop{}\!\mathrm{d}K) = 0 .$$ So $\mathop{\mathrm{S}}_{C_1, \mathbfcal{C}}(f^{-1}((0, \infty))) = 0$, concluding this direction. "$\supseteq$": Let $x \notin \mathop{\mathrm{supp}}\mathop{\mathrm{S}}_{C_1, \mathbfcal{C}}$. Because $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}_{C_1, \mathbfcal{C}}$ is closed, it suffices to prove that $x \notin \mathop{\mathrm{supp}}\mathop{\mathrm{S}}_{K, \mathbfcal{C}}$ for all $K \in\mathop{\mathrm{supp}}\mu$. There is an open set $U \subseteq \mathbb{S}^{n-1}$ with $x \in U$ such that $\mathop{\mathrm{S}}_{C_1, \mathbfcal{C}}(U) = 0$. Define $f \coloneqq f_{U^\mathsf{c}}$. 
Then $$0 = \int f \mathop{}\!\mathrm{d}\mathop{\mathrm{S}}_{C_1, \mathbfcal{C}} = \int\int f(z) \,\mathop{\mathrm{S}}_{K, \mathbfcal{C}}(\mathop{}\!\mathrm{d}z)\, \mu(\mathop{}\!\mathrm{d}K) .$$ The integrand $\varphi \colon K \mapsto \int f(z)\, \mathop{\mathrm{S}}_{K, \mathbfcal{C}}(\mathop{}\!\mathrm{d}z)$ is nonnegative and continuous by the continuity of $f$ and the weak continuity of the mixed area measure. Therefore, $\varphi(K)=0$ for $K\in\mathop{\mathrm{supp}}\mu$. In other words, if $K \in \mathop{\mathrm{supp}}\mu$, then $\int f \mathop{}\!\mathrm{d}\mathop{\mathrm{S}}_{K, \mathbfcal{C}} = 0$. The integrand being nonnegative and continuous, $f$ vanishes on $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}_{K, \mathbfcal{C}}$. Therefore, $$x \in U \subseteq (\mathop{\mathrm{supp}}\mathop{\mathrm{S}}_{K, \mathbfcal{C}})^\mathsf{c} ,$$ which was to be shown. ◻ The preceding theorem can in particular be applied in the case where $C_1$ is a polyoid, as follows from [@HugReichert23+ Cor. 2.9]. Finally, we mention a general result which states that the support of the weak limit of a sequence of measures is covered (up to taking the closure) by the supports of these measures. The proof is a straightforward consequence of the definition of weak convergence of measures. **Lemma 27** (Support and weak convergence). *Let $\mu_\ell \to \mu$ be a weakly convergent sequence of finite Borel measures on a second-countable metric space $E$. Then $$\mathop{\mathrm{supp}}\mu \subseteq \mathop{\mathrm{cl}}\bigcup_{\ell=1}^\infty \mathop{\mathrm{supp}}\mu_\ell .$$* The goal of the remaining part of the work is to confirm Theorem [\[thm:suppChar\]](#thm:suppChar){reference-type="ref" reference="thm:suppChar"} for polyoids. Before we get to the proof, we need to discuss four concepts: projections, cusps, pruning and switching. These will be combined at the end. # Projections {#sec:3} In the following, we assume that $n \ge 1$ and $k \in \mathbb{N}$. 
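Before turning to projections, it may help to see the criticality conditions of the preceding subsection in action. For tuples of finite point sets (for instance, vertex sets of polytopes), the conditions of Definition 20 reduce to rank computations and can be checked by brute force over all subtuples. The following sketch is not part of the paper; it uses hypothetical helper names, is exponential in $\ell$, and takes the span of a tuple to be the linear hull of the union after translating each set to contain $0$, as in the proof of Lemma 22. It computes the minimal excess $\dim \operatorname{span} \mathbfcal{B} - \lvert\mathbfcal{B}\rvert$ over nonempty subtuples, so that semicritical, critical, and supercritical correspond to excess $\ge 0$, $\ge 1$, and $\ge 2$, respectively.

```python
from fractions import Fraction
from itertools import combinations

def rank(vectors):
    """Rank of a list of integer/rational vectors via exact Gaussian elimination."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0
    ncols = len(rows[0]) if rows else 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def span_dim(sets):
    """dim of the linear hull of the tuple, translating each set to contain 0."""
    diffs = []
    for A in sets:
        base = A[0]
        diffs.extend(tuple(y - x for x, y in zip(base, p)) for p in A[1:])
    return rank(diffs)

def criticality_excess(sets):
    """min over nonempty subtuples B of dim span(B) - |B|.
    Semicritical iff >= 0, critical iff >= 1, supercritical iff >= 2."""
    ell = len(sets)
    if ell == 0:
        return float("inf")  # the empty tuple is supercritical
    return min(
        span_dim([sets[i] for i in I]) - len(I)
        for r in range(1, ell + 1)
        for I in combinations(range(ell), r)
    )

# Segments along the coordinate axes of R^3, given by their endpoint sets.
seg_x = ((0, 0, 0), (1, 0, 0))
seg_y = ((0, 0, 0), (0, 1, 0))
seg_z = ((0, 0, 0), (0, 0, 1))
cube = tuple((a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1))

print(criticality_excess([seg_x, seg_y, seg_z]))  # 0: semicritical, so V > 0
print(criticality_excess([seg_x, seg_x]))         # -1: parallel segments, V = 0
print(criticality_excess([cube, cube]))           # 1: critical, not supercritical
```

The values are consistent with Lemma 19 (segments with linearly independent directions give positive mixed volume, while two parallel segments do not) and with the last item of Lemma 21 (two full-dimensional bodies in $\mathbb{R}^3$ form a critical but not supercritical tuple, since $\ell = 2 > n - 2$).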
For the proof of Theorem [\[thm:suppChar\]](#thm:suppChar){reference-type="ref" reference="thm:suppChar"} we show two inclusions. For one of these (namely, "$\subseteq$"), two facts are crucial to the argument. First, the touching space (see Definition [Definition 4](#def:touching){reference-type="ref" reference="def:touching"}) of the orthogonal projection of a general convex body $K$ to a linear subspace is the orthogonal projection of the touching space of $K$; this is proved in Lemma [Lemma 30](#thm:tc3){reference-type="ref" reference="thm:tc3"}. Second, the orthogonal projection of a $k$-polyoid $K$ with generating measure $\mu$ to a subspace is again a $k$-polyoid, for which the projection of $\mu$ is a generating measure; this is established in Lemma [Lemma 31](#thm:projVertoid){reference-type="ref" reference="thm:projVertoid"}. Lemmas [Lemma 28](#thm:tc1){reference-type="ref" reference="thm:tc1"} and [Lemma 29](#thm:tc2){reference-type="ref" reference="thm:tc2"} prepare the proof of Lemma [Lemma 30](#thm:tc3){reference-type="ref" reference="thm:tc3"}. These auxiliary results are treated in the present section. Further ingredients needed to establish the inclusion "$\subseteq$" are developed in Section [4](#sec:4){reference-type="ref" reference="sec:4"}. **Lemma 28**. *Let $A \subseteq \mathbb{R}^n$ be a convex set, $W \subseteq \mathbb{R}^n$ a linear subspace and $u \in W \setminus\set{0}$. Then for all $x \in A$, $$x \in F(A, u) \iff \pi_W(x) \in F(\pi_W(A), u).$$* *Proof.* The basic observation is that for all $x \in A$, we have $\left<x, u\right>=\left<\pi_W(x), u\right>$, and hence $h_A(u) = h_{\pi_W(A)}(u)$. So if $x \in F(A, u)$, then $\left<\pi_W(x), u\right> = \left<x, u\right> = h_A(u) = h_{\pi_W(A)}(u)$ and therefore $\pi_W(x) \in F(\pi_W(A), u)$. Conversely, if $\pi_W(x) \in F(\pi_W(A), u)$, then $\left<x, u\right> = \left<\pi_W(x), u\right> = h_{\pi_W(A)}(u) = h_A(u)$ and hence $x \in F(A, u)$. ◻ **Lemma 29**.
*Let $K \subseteq \mathbb{R}^n$ be a convex body, $W \subseteq \mathbb{R}^n$ a linear subspace and $u \in W\setminus\set{0}$. Then $N_W(\pi_{W}(K), F(\pi_W(K), u)) = W \cap N(K, F(K, u))$.* *Proof.* By definition of $N_W$, both sides of the equation are subsets of $W$. Moreover, both contain $0$. Let $v \in W\setminus\set{0}$. Then the claim can be reformulated as $$F(\pi_W(K), u) \subseteq F(\pi_W(K), v) \iff F(K, u) \subseteq F(K, v) .$$ Let us first assume that $F(\pi_W(K), u) \subseteq F(\pi_W(K), v)$ and let $x \in F(K, u)$. Then by Lemma [Lemma 28](#thm:tc1){reference-type="ref" reference="thm:tc1"}, $\pi_W(x) \in F(\pi_W(K), u)$. By assumption, this implies $\pi_W(x) \in F(\pi_W(K), v)$. Another application of Lemma [Lemma 28](#thm:tc1){reference-type="ref" reference="thm:tc1"} now shows that $x \in F(K, v)$. Therefore, $F(K, u) \subseteq F(K, v)$. Now assume $F(K, u) \subseteq F(K, v)$ and let $y \in F(\pi_W(K), u)$. Writing $y = \pi_W(x)$ for some $x \in K$ and applying Lemma [Lemma 28](#thm:tc1){reference-type="ref" reference="thm:tc1"}, we obtain $x \in F(K, u)$. By assumption, this implies $x \in F(K, v)$, and again using Lemma [Lemma 28](#thm:tc1){reference-type="ref" reference="thm:tc1"}, this shows that $y = \pi_W(x) \in F(\pi_W(K), v)$. Therefore, $F(\pi_W(K), u) \subseteq F(\pi_W(K), v)$. ◻ **Lemma 30**. *Let $K$ be a convex body, $W \subseteq \mathbb{R}^n$ a linear subspace and $u \in W\setminus\set{0}$. Then $T_W(\pi_{W}(K), u) = W \cap T(K, u)$ and $\mathop{\mathrm{TS}}_W(\pi_W(K), u) = \pi_W(\mathop{\mathrm{TS}}(K, u))$.* *Proof.* By Definition [Definition 4](#def:touching){reference-type="ref" reference="def:touching"}, $T_W(\pi_W(K), u)$ is the unique face of the normal cone $N_W(\pi_W(K), F(\pi_W(K), u))$ such that its relative interior contains $u$. Similarly, $T(K, u)$ is the unique face of $N(K, F(K, u))$ such that its relative interior contains $u$. We show that $W \cap T(K, u)$ satisfies the definition of $T_W(\pi_W(K), u)$.
Because $T(K, u)$ is a face of $N(K, F(K, u))$ and by Lemma [Lemma 29](#thm:tc2){reference-type="ref" reference="thm:tc2"}, $$W \cap T(K, u) \text{ is a face of } W \cap N(K, F(K, u)) = N_W(\pi_W(K), F(\pi_W(K), u)) .$$ As $(\mathop{\mathrm{relint}}T(K, u)) \cap W$ contains $u$ and $W$ is a linear subspace, $\mathop{\mathrm{relint}}\prn*{W \cap T(K, u)}$ also contains $u$. This proves the first claim. For the second claim, observe that $u \in (\mathop{\mathrm{relint}}T(K, u)) \cap W$ implies $$\mathop{\mathrm{span}}(T(K, u) \cap W) = (\mathop{\mathrm{span}}T(K, u)) \cap W .$$ Using the first claim, we get $$\mathop{\mathrm{span}}T_W(\pi_W K, u) = \mathop{\mathrm{span}}(T(K, u) \cap W) = (\mathop{\mathrm{span}}T(K, u)) \cap W .$$ Now we take the orthogonal complement in $W$ and obtain $$\begin{aligned} \mathop{\mathrm{TS}}_W(\pi_W K, u) &= T_W(\pi_W K, u)^\perp \cap W = (T(K, u)^\perp + W^\perp) \cap W\\ &= \pi_W T(K, u)^\perp = \pi_W \mathop{\mathrm{TS}}(K, u), \end{aligned}$$ which confirms also the second claim. ◻ In [@HugReichert23+] a $k$-polyoid, for an integer $k\in\mathbb{N}$, was defined as the limit of a sequence of Minkowski sums of $k$-topes, where a $k$-tope is a convex polytope having at most $k$ vertices. Let $\mathcal{P}^n_k$ denote the set of $k$-topes in $\mathbb{R}^n$. Furthermore, it was shown in [@HugReichert23+ Thm. 2.8] that a convex body $K\in\mathcal{K}^n$ is a $k$-polyoid if and only if there is a probability measure $\mu$ on $\mathcal{P}^n_k$ with compact support such that $$\label{eq:MGM} h_K(u) = \int h_P(u) \, \mu(\mathop{}\!\mathrm{d}P),\quad u \in \mathbb{R}^n.$$ Any such (in general non-unique) measure $\mu$ is called a generating measure of the $k$-polyoid $K$. Let $\varnothing\neq\mathcal{K}_*\subseteq \mathcal{K}^n$ be a Borel set (Borel sets are defined with respect to the topology induced by the Hausdorff metric on $\mathcal{K}^n$). 
A convex body $K$ in $\mathbb{R}^n$, $n\in\mathbb{N}_0$, for which there is a probability measure $\mu$ on $\mathcal{K}_*$ with bounded support such that [\[eq:MGM\]](#eq:MGM){reference-type="eqref" reference="eq:MGM"} holds, is called a $\mathcal{K}_*$-*macroid* with generating measure $\mu$. Here the support of $\mu$ is determined with respect to the metric space $\mathcal{K}_*$. It was shown in [@HugReichert23+ Lem. 2.11] that a $\mathcal{K}_*$-*macroid* with generating measure $\mu$ is the limit of a sequence of Minkowski sums of convex bodies in $\mathop{\mathrm{supp}}\mu$. In the case $\mathcal{K}_*=\mathcal{P}^n$, that is, $K$ is a $\mathcal{P}^n$-macroid with generating measure $\mu$ on $\mathcal{P}^n$, we simply say that $K$ is a macroid with generating measure $\mu$. **Lemma 31**. *Let $K$ be a macroid (a $k$-polyoid) with generating measure $\mu$, and let $W \subseteq \mathbb{R}^n$ be a linear subspace. Moreover, let $\tilde\pi_W$ be the map that sends a polytope $P \subseteq \mathbb{R}^n$ to the polytope $\pi_W(P) \subseteq W$ (and, in particular, a $k$-tope to a $k$-tope). Then $\pi_W(K)$ is a macroid (a $k$-polyoid) with generating measure $\mu^W \coloneqq \mu\circ\tilde\pi_W^{-1}$, and $$\tilde\pi_W(\mathop{\mathrm{supp}}\mu)\subseteq\mathop{\mathrm{supp}}\mu^W .$$ If $K \subset \mathbb{R}^n$ is a $k$-polyoid, then $\tilde\pi_W(\mathop{\mathrm{supp}}\mu)=\mathop{\mathrm{supp}}\mu^W.$* *Proof.* Let $K$ be a macroid (a $k$-polyoid) with generating measure $\mu$. For all $u \in W$, $$h_{\pi_W(K)}(u) = h_K(u) = \int h_P(u) \,\mu(\mathop{}\!\mathrm{d}P) = \int h_{\tilde\pi_W(P)}(u) \,\mu(\mathop{}\!\mathrm{d}P) = \int h_P(u) \, \mu^W(\mathop{}\!\mathrm{d}P) .$$ Moreover, if $\mu$ is a probability measure with bounded (compact) support on polytopes ($k$-topes) in $\mathbb{R}^n$, then $\mu^W$ is a probability measure with bounded (compact) support on polytopes ($k$-topes) in $W$.
Let $P \in \tilde\pi_W(\mathop{\mathrm{supp}}\mu)$ and $U$ an open neighborhood of $P$ in the space of $k$-topes in $W$. Then there is $Q \in \mathop{\mathrm{supp}}\mu$ such that $\tilde\pi_W(Q) = P$, so that $\tilde\pi_W^{-1}(U)$ is an open neighborhood of $Q$ in $\mathcal{P}^n$ (respectively, in $\mathcal{P}^n_k$). Therefore, $$\mu^W(U) = \mu(\tilde\pi_W^{-1}(U)) > 0 ,$$ and because this holds for arbitrary $P \in \tilde\pi_W(\mathop{\mathrm{supp}}\mu)$ and neighborhoods $U$ of $P$, it follows that $\tilde\pi_W(\mathop{\mathrm{supp}}\mu) \subseteq \mathop{\mathrm{supp}}\mu^W$. Now we assume that $K$ is a $k$-polyoid. Because $\tilde\pi_W$ is continuous and $\mathop{\mathrm{supp}}\mu$ is compact, the set $\tilde\pi_W(\mathop{\mathrm{supp}}\mu)$ is compact and hence closed. From $$\mu^W(\tilde\pi_W(\mathop{\mathrm{supp}}\mu)^\mathsf{c}) = \mu(\tilde\pi_W^{-1}(\tilde\pi_W(\mathop{\mathrm{supp}}\mu))^\mathsf{c}) \le \mu((\mathop{\mathrm{supp}}\mu)^\mathsf{c}) = 0$$ we conclude that $\mathop{\mathrm{supp}}\mu^W \subseteq \tilde\pi_W(\mathop{\mathrm{supp}}\mu)$, and thus $\mathop{\mathrm{supp}}\mu^W = \tilde\pi_W(\mathop{\mathrm{supp}}\mu)$. ◻ **Remark 32**. *Let $C$ be a macroid ($k$-polyoid) with generating measure $\mu$ and $u \in \mathbb{S}^{n-1}$. Recall from [@HugReichert23+ Rem. 2.19] that $F(C, u)$ is a macroid ($k$-polyoid) with generating measure $F_u(\mu)$, which denotes the image measure of $\mu$ under the measurable map $F_u=F(\cdot\,,u)$. In other words, $$\label{eqsupportset} h_{F(C, u)} = \int h_P \,F_u(\mu)(\mathop{}\!\mathrm{d}P).$$ As a consequence, we obtain $$\label{eqnorcone2} \bigcap_{P\in \mathcal{P}(\mu)}N(P,F(P,u))\subseteq N(C,F(C,u)),$$ whenever $\mathcal{P}(\mu)\subseteq \mathcal{P}^n$ is a measurable set of full $\mu$-measure. For instance, we can choose $\mathcal{P}(\mu)=\mathop{\mathrm{supp}}\mu$. To verify [\[eqnorcone2\]](#eqnorcone2){reference-type="eqref" reference="eqnorcone2"}, let $v\in \bigcap_{P\in \mathcal{P}(\mu)}N(P,F(P,u))$. 
Then, for each $P\in \mathcal{P}(\mu)$, $F(P,u)\subseteq F(P,v)$, hence $h_{F(P,u)}\le h_{F(P,v)}$. Then [\[eqsupportset\]](#eqsupportset){reference-type="eqref" reference="eqsupportset"} yields $$h_{F(C,u)}=\int h_{F(P,u)}\, \mu(\mathop{}\!\mathrm{d}P) \le \int h_{F(P,v)}\, \mu(\mathop{}\!\mathrm{d}P)= h_{F(C,v)},$$ which shows that $F(C,u)\subseteq F(C,v)$, and therefore $v\in N(C,F(C,u))$.* *A corresponding inclusion for the touching cones does not hold in general, as shown by Example [Example 43](#ex:prune){reference-type="ref" reference="ex:prune"}, which is in contrast to the case of finite Minkowski sums (see [@Schneider Thm. 2.2.1 (a)]).* *There is a partial converse to [\[eqnorcone2\]](#eqnorcone2){reference-type="eqref" reference="eqnorcone2"}. Let $u\in\mathbb{S}^{n-1}$ be fixed and let $s(L)$ denote the Steiner point of $L\in\mathcal{K}^n$. Recall from [@Schneider (1.34)] that $s(L)\in \mathop{\mathrm{relint}}L$. Fubini's theorem yields $$s(F(C,u))=\int s(F(P,u))\, \mu(\mathop{}\!\mathrm{d}P),$$ (compare [@HugReichert23+ Rem. 2.14]), and therefore $$h_{C-s(F(C,u))}=\int h_{P-s(F(P,u))}\, \mu(\mathop{}\!\mathrm{d}P).$$ All support functions in this equation are nonnegative. If $v\in N(C,F(C,u))$, then $h_{C-s(F(C,u))}(v)=0$, and hence $h_{P-s(F(P,u))}(v)=0$ for $\mu$-almost all $P\in\mathcal{P}^n$. This shows that $v\in N(P,F(P,u))$ for $\mu$-almost all $P\in\mathcal{P}^n$, that is, there is a measurable set $\mathcal{P}_{u,v}(\mu)$ of full $\mu$-measure such that $v\in N(P,F(P,u))$ for all $P\in \mathcal{P}_{u,v}(\mu)$. Let $D_u$ be a countable dense subset of $N(C,F(C,u))$ and set $\mathcal{P}_u(\mu):=\cap_{v\in D_u}\mathcal{P}_{u,v}(\mu)$. 
Then $\mathcal{P}_{u}(\mu)$ is a measurable set that has full $\mu$-measure and $$N(C,F(C,u))\subseteq \bigcap_{P\in \mathcal{P}_u(\mu)}N(P,F(P,u)).$$ Together with [\[eqnorcone2\]](#eqnorcone2){reference-type="eqref" reference="eqnorcone2"} we obtain $$N(C,F(C,u))= \bigcap_{P\in \mathcal{P}_u(\mu)}N(P,F(P,u)).$$* # Cusps {#sec:4} The proof of Theorem [\[thm:suppChar\]](#thm:suppChar){reference-type="ref" reference="thm:suppChar"} relies on the assumption that the convex bodies in question are polyoids. In fact, one inclusion holds for the larger class of macroids. For this reason, the results in this section are provided for the class of macroids or for general convex bodies. The following results about *cusps* describe, in terms of the polytopes in the support of $\mu$, what it means for the touching space of a convex body $K$ (a macroid $K$ with generating measure $\mu$) to be $0$-dimensional. One might hope that $\mathop{\mathrm{TS}}(K, u) = \set*{0}$ if and only if the same holds for all $P \in \mathop{\mathrm{supp}}\mu$, but this turns out to be false (the "only if" statement is true though, as follows from Lemmas [Lemma 35](#thm:tc4){reference-type="ref" reference="thm:tc4"} and [Lemma 39](#thm:tc5){reference-type="ref" reference="thm:tc5"}). Cusps can be thought of as an attempt to quantify how far a convex body is from having a non-trivial touching space. Intuitively, Lemmas [Lemma 35](#thm:tc4){reference-type="ref" reference="thm:tc4"} and [Lemma 39](#thm:tc5){reference-type="ref" reference="thm:tc5"} show that $\mathop{\mathrm{TS}}(K, u)$ is trivial if and only if the $k$-topes in $\mathop{\mathrm{supp}}\mu$ keep a minimum distance from having a non-trivial touching space. Lemmas [Lemma 35](#thm:tc4){reference-type="ref" reference="thm:tc4"} and [Lemma 39](#thm:tc5){reference-type="ref" reference="thm:tc5"} will be employed in the crucial Witness Lemma [\[thm:pruningLemma\]](#thm:pruningLemma){reference-type="ref" reference="thm:pruningLemma"}.
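The cusp condition of Definition 33 below, $K \subseteq x + \mathfrak{C}_c(u)$ with $\mathfrak{C}_c(u) = \{z : \langle z, u\rangle \le -c\lVert z\rVert\}$, can be tested numerically for a polytope. The following sketch is not from the paper (the helper name `has_c_cusp` is hypothetical); it checks only the vertices as apex candidates and as test points, which suffices: if a cusp apex $x$ exists, then every $y \in K\setminus\{x\}$ satisfies $\langle y - x, u\rangle \le -c\lVert y - x\rVert < 0$, so $F(K, u) = \{x\}$ is a vertex, and since $x + \mathfrak{C}_c(u)$ is convex, containing all vertices means containing $K$.

```python
import math

def has_c_cusp(vertices, u, c, tol=1e-12):
    """Check whether the polytope conv(vertices) has a c-cusp in direction u,
    i.e. whether conv(vertices) is contained in x + C_c(u) for some vertex x,
    where C_c(u) = { z : <z, u> <= -c * |z| }.  Testing vertices suffices:
    a cusp apex must be the unique point of F(K, u), hence a vertex, and
    x + C_c(u) is convex, so it contains K as soon as it contains all vertices."""
    def dot(a, b):
        return sum(p * q for p, q in zip(a, b))

    def contained(x):
        return all(
            dot([vi - xi for vi, xi in zip(v, x)], u) <= -c * math.dist(v, x) + tol
            for v in vertices
        )

    return any(contained(x) for x in vertices)

square = [(0, 0), (1, 0), (0, 1), (1, 1)]
u_diag = (-1 / math.sqrt(2), -1 / math.sqrt(2))  # outer normal at the vertex (0,0)
u_edge = (0, -1)                                 # outer normal of the bottom edge

print(has_c_cusp(square, u_diag, 1 / math.sqrt(2)))  # True: cusp with apex (0,0)
print(has_c_cusp(square, u_edge, 0.01))              # False: F(K, u) is an edge
```

Consistent with Lemma 35 below, the unit square has a cusp (with $c = 1/\sqrt{2}$, the cosine of the half-angle at a corner) exactly in the vertex-normal directions, where the touching space is trivial, and no $c$-cusp for any $c > 0$ in an edge-normal direction.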
**Definition 33**. * For all $u \in \mathbb{S}^{n-1}$ and $c > 0$, define a cone with apex at $0$, $$\mathfrak{C}_c(u) \coloneqq \set{ x \in \mathbb{R}^n \given \left<x, u\right> \le -c \norm{x} } .$$ Let $K \subseteq \mathbb{R}^n$ be a convex body, $u \in \mathbb{S}^{n-1}$ and $c > 0$. Then $K$ is said to *have a $c$-cusp in direction $u \in \mathbb{S}^{n-1}$* if there is some $x \in K$ such that $K \subseteq x + \mathfrak{C}_c(u)$.* Note that $\mathfrak{C}_c(u)=\{0\}$ if $c>1$ and $\mathfrak{C}_1(u)=-[0,\infty) u$; the cone $\mathfrak{C}_c(u)$ shrinks as $c\in (0,1]$ increases. In particular, if $K$ has a $c$-cusp in direction $u$, then it also has a $c^\prime$-cusp in direction $u$ for $0<c^\prime<c$. **Lemma 34**. *Let $K \in \mathcal{K}^n$ be a convex body, $u \in \mathbb{S}^{n-1}$ and $c > 0$. Then the following are equivalent:* 1. *$K$ has a $c$-cusp in direction $u$.* 2. *$h_K$ is linear on $U(u, c) \coloneqq c{B^{n}}+ u$.* *Proof.* The statement is invariant under translations. "(a) $\implies$ (b)": Assume that there is some $x \in K$ with $K \subseteq x + \mathfrak{C}_c(u)$. Translating $K$, we can arrange that $x = 0$. Then the Cauchy--Schwarz inequality shows that for all $v \in U(u, c)$ and $y \in K\subseteq \mathfrak{C}_c(u)$, $$\left<y, v\right> \le \left<y, u\right> + \norm{u - v}\norm{y} \le (-c + \norm{u - v}) \norm{y} \le 0 = \left<x, v\right> .$$ So $h_K(v) = 0$ for $v\in U(u, c)$. "(b) $\implies$ (a)": Assume that there is some $x \in \mathbb{R}^n$ such that $h_K = \left<x, \cdot\right>$ on $U(u, c)$. Translating $K$ by $-x$, we can arrange that $x = 0$. Then for all $y \in K \setminus\set{0}$, $$\left<y, u\right> = \left\langle y, u + \frac{c}{\norm{y}} y\right\rangle - c{\norm{y}} \le h_K\left( u + \frac{c}{\norm{y}} y \right) - {c}\norm{y} = -c\norm{y} .$$ So $K \subseteq \mathfrak{C}_c(u)$ (remembering $0 \in \mathfrak{C}_c(u)$).
Moreover, $h_K'(u; \cdot) = 0$ because $U(u,c)$ is a neighborhood of $u$ where $h_K \equiv 0$. With [@Schneider Thm. 1.7.2] it follows that $$h_{F(K, u)} = h_K'(u; \cdot) = 0 = h_{\set*{0}} ,$$ proving that $0 \in \set*{0} = F(K, u) \subseteq K$. So $K \subseteq \mathfrak{C}_c(u)$ and $0 \in K$. ◻ Next we use Lemma [Lemma 34](#thm:locLin){reference-type="ref" reference="thm:locLin"} to characterize the situation when the touching space is trivial. **Lemma 35**. *Let $K \in \mathcal{K}^n$ be a convex body, and let $u \in \mathbb{S}^{n-1}$. Then the following are equivalent.* 1. *$\mathop{\mathrm{TS}}(K, u) = \set*{0}$.* 2. *There is some $c > 0$ such that $K$ has a $c$-cusp in direction $u$.* *Proof.* "(a) $\implies$ (b)": Assume that $\mathop{\mathrm{TS}}(K, u) = \set*{0}$. Then $u \in \mathop{\mathrm{int}}N(K, F(K, u))$. So there is $c > 0$ such that $U(u, c) = \set*{u} + c{B^{n}}\subseteq N(K, F(K, u))$. Choosing $x \in F(K, u)$, it follows that $h_K = \left<x, \cdot\right>$ on $U(u, c) \subseteq N(K, F(K, u))$. Then by Lemma [Lemma 34](#thm:locLin){reference-type="ref" reference="thm:locLin"}, $K$ has a $c$-cusp in direction $u$. "(b) $\implies$ (a)": Assume that $K$ has a $c$-cusp in direction $u$ for some $c > 0$. Then by Lemma [Lemma 34](#thm:locLin){reference-type="ref" reference="thm:locLin"}, there is $x \in \mathbb{R}^n$ such that $h_K = \left<x, \cdot\right>$ on $U(u, c) = \set*{u} + c{B^{n}}$. By [@Schneider Thm. 1.7.2], all $v \in \mathop{\mathrm{int}}U(u, c)$ satisfy $h_{F(K, v)} = h_K'(v; \cdot) = \left<x, \cdot\right>$, so that $F(K, v) = \set*{x} = F(K, u)$. Hence, $\mathop{\mathrm{int}}(\set*{u} + c{B^{n}}) \subseteq N(K, F(K, u))$, showing that $u \in \mathop{\mathrm{int}}N(K, F(K, u))$ and $\mathop{\mathrm{TS}}(K, u) = \set*{0}$. 
◻ In the following we need to understand how the local linearity of the support function of a macroid is related to the local linearity of the support functions of the polytopes in the support of a generating measure of the macroid. This relation is given in Lemma [Lemma 38](#thm:locLin2){reference-type="ref" reference="thm:locLin2"}, which we prepare by two simple lemmas. The first lemma is well-known, but we state it for easier reference. The proof of the second lemma is included, since it is crucial for the proof of Lemma [Lemma 38](#thm:locLin2){reference-type="ref" reference="thm:locLin2"}. **Lemma 36**. *Let $A \subseteq \mathbb{R}^n$ be a convex set, $f \colon A \to \mathbb{R}$ a convex function and $a \in \mathop{\mathrm{relint}}A$. Then there is $u \in \mathop{\mathrm{\overline{span}}}A$ such that $$f(x) \ge \left<x - a, u\right> + f(a) \quad \text{for all $x \in A$} .$$* **Lemma 37**. *Let $A \subseteq \mathbb{R}^n$ be a convex set, and let $f\colon \mathbb{R}^n \to \mathbb{R}$ be positively $1$-homogeneous. Then the following are equivalent.* 1. *[\[it:conc1\]]{#it:conc1 label="it:conc1"} $f$ is linear on $A$ (i.e. agrees on $A$ with a function $x \mapsto \left<x, u\right>$, where $u \in \mathbb{R}^n$).* 2. *[\[it:conc2\]]{#it:conc2 label="it:conc2"} $f$ is affine on $A$ (i.e. agrees on $A$ with a function $x \mapsto \left<x, u\right> + c$, where $u \in \mathbb{R}^n$ and $c \in \mathbb{R}$).* 3. *[\[it:conc3\]]{#it:conc3 label="it:conc3"} $f$ is convex and concave on $A$.* *Proof.* ([\[it:conc1\]](#it:conc1){reference-type="ref" reference="it:conc1"}) implies ([\[it:conc2\]](#it:conc2){reference-type="ref" reference="it:conc2"}) and ([\[it:conc2\]](#it:conc2){reference-type="ref" reference="it:conc2"}) implies ([\[it:conc3\]](#it:conc3){reference-type="ref" reference="it:conc3"}). Without loss of generality, $A$ is nonempty. 
"([\[it:conc2\]](#it:conc2){reference-type="ref" reference="it:conc2"}) $\implies$ ([\[it:conc1\]](#it:conc1){reference-type="ref" reference="it:conc1"})": Assume that there are $u \in \mathbb{R}^n$ and $c \in \mathbb{R}$ such that $$f(x) = \left<x, u\right> + c \quad \text{for $x \in A$} .$$ Let $E$ be the affine span of $A$. If $0 \in E$, then choose $x \in \mathop{\mathrm{relint}}A$. There is $\lambda \in (0, 1)$ such that $\lambda x \in A$, so that we obtain $$\lambda\left<x, u\right> + c = f(\lambda x) = \lambda f(x) = \lambda\left<x, u\right> + \lambda c \implies c = \lambda c \implies c = 0 .$$ If $0 \notin E$, then $E \cap \mathop{\mathrm{\overline{span}}}A = \varnothing$. Choose $a \in A$. Then $a \notin \mathop{\mathrm{\overline{span}}}A = \prn*{\mathop{\mathrm{\overline{span}}}A}^{\perp\perp}$ and there is $v \in \prn*{\mathop{\mathrm{\overline{span}}}A}^\perp$ such that $\left<a, v\right> \ne 0$. Also observe that if $x \in A$, then $x - a \in \mathop{\mathrm{\overline{span}}}A$, so that $\left<x, v\right> = \left<a, v\right>$. So $$f(x) = \left<x, u\right> + c = \left<x, u\right> + c\frac{\left<x, v\right>}{\left<a, v\right>} = \left<x, u + c\frac{v}{\left<a, v\right>}\right> \quad \text{for all $x \in A$} .$$ "([\[it:conc3\]](#it:conc3){reference-type="ref" reference="it:conc3"}) $\implies$ ([\[it:conc2\]](#it:conc2){reference-type="ref" reference="it:conc2"})": Let $a \in \mathop{\mathrm{relint}}A$. By convexity of $f$ and Lemma [Lemma 36](#thm:suppAff){reference-type="ref" reference="thm:suppAff"}, there is $u \in \mathop{\mathrm{\overline{span}}}A$ such that $$f(x) \ge \left<x - a, u\right> + f(a) \quad \text{for all $x \in A$} .$$ By concavity of $f$ and Lemma [Lemma 36](#thm:suppAff){reference-type="ref" reference="thm:suppAff"} applied to $-f$, there is $v \in \mathop{\mathrm{\overline{span}}}A$ such that $$f(x) \le \left<x - a, v\right> + f(a) \quad \text{for all $x \in A$} .$$ Hence, $\left<x - a, v - u\right> \ge 0$ for all $x \in A$. 
Because $a \in \mathop{\mathrm{relint}}A$ and $u, v \in \mathop{\mathrm{\overline{span}}}A$, this shows that $v = u$ and so $$f(x) = \left<x - a, u\right> + f(a) = \left<x, u\right> - \left<a, u\right> + f(a) \quad \text{for all $x \in A$} ,$$ which completes the proof. ◻ **Lemma 38**. *Let $\varnothing\neq\mathcal{K}_*\subseteq \mathcal{K}^n$ be a Borel set. Let $K\in\mathcal{K}^n$ be a $\mathcal{K}_*$-macroid with generating measure $\mu$ on $\mathcal{K}_*$, and let $A \subseteq \mathbb{R}^n$ be convex. Then $h_K$ is linear on $A$ if and only if $h_P$ is linear on $A$ for all $P \in \mathop{\mathrm{supp}}\mu$.* *Proof.* Every support function of a convex body is convex and positively $1$-homogeneous. So by Lemma [Lemma 37](#thm:locLin3){reference-type="ref" reference="thm:locLin3"}, it is linear on $A$ if and only if it is concave on $A$. "$\implies$": Assume that there is $P \in \mathop{\mathrm{supp}}\mu$ such that $h_P$ is not concave on $A$. Then there are an open neighborhood $U$ of $P$, $\lambda \in (0, 1)$ and $y, z \in A$ such that for all $Q \in U$, $$h_Q(\lambda y + (1 - \lambda) z) < \lambda h_Q(y) + (1 - \lambda) h_Q(z) .$$ On the other hand, for all $Q \in U^\mathsf{c}$, convexity of $h_Q$ implies $$h_Q(\lambda y + (1 - \lambda) z) \le \lambda h_Q(y) + (1 - \lambda) h_Q(z) .$$ Since $\mu(U) > 0$, we thus obtain from [\[eq:MGM\]](#eq:MGM){reference-type="eqref" reference="eq:MGM"} that $$h_K(\lambda y + (1 - \lambda) z) < \lambda h_K(y) + (1 - \lambda) h_K(z) .$$ Therefore, $h_K$ is not concave on $A$. "$\impliedby$": Assume that $h_K$ is not concave on $A$. Then there are $\lambda \in (0, 1)$ and $y, z \in A$ such that $$h_K(\lambda y + (1 - \lambda) z) < \lambda h_K(y) + (1 - \lambda) h_K(z) .$$ In particular, there is at least one $P \in \mathop{\mathrm{supp}}\mu$ with $$h_P(\lambda y + (1 - \lambda) z) < \lambda h_P(y) + (1 - \lambda) h_P(z) .$$ Therefore, $h_P$ is not concave on $A$. ◻ **Lemma 39**. 
*Let $\varnothing\neq\mathcal{K}_*\subseteq \mathcal{K}^n$ be a Borel set. Let $K\in\mathcal{K}^n$ be a $\mathcal{K}_*$-macroid with generating measure $\mu$ on $\mathcal{K}_*$. Let $u \in \mathbb{S}^{n-1}$ and $c > 0$. Then the following are equivalent.* 1. *$K$ has a $c$-cusp in direction $u$.* 2. *Every $P \in \mathop{\mathrm{supp}}\mu$ has a $c$-cusp in direction $u$.* *Proof.* By Lemma [Lemma 34](#thm:locLin){reference-type="ref" reference="thm:locLin"}, $K$ has a $c$-cusp in direction $u$ if and only if $h_K$ is linear on $U(u, c)$. By Lemma [Lemma 38](#thm:locLin2){reference-type="ref" reference="thm:locLin2"}, this is equivalent to $h_P$ being linear on $U(u, c)$ for all $P \in \mathop{\mathrm{supp}}\mu$. Again by Lemma [Lemma 34](#thm:locLin){reference-type="ref" reference="thm:locLin"}, this in turn is equivalent to $P$ having a $c$-cusp in direction $u$ for all $P \in \mathop{\mathrm{supp}}\mu$. ◻ As a consequence of Lemmas [Lemma 35](#thm:tc4){reference-type="ref" reference="thm:tc4"} and [Lemma 39](#thm:tc5){reference-type="ref" reference="thm:tc5"}, we obtain the following corollary. **Corollary 40**. *Let $\varnothing\neq\mathcal{K}_*\subseteq \mathcal{K}^n$ be a Borel set. Let $K\in\mathcal{K}^n$ be a $\mathcal{K}_*$-macroid with generating measure $\mu$ on $\mathcal{K}_*$ and let $u \in \mathbb{S}^{n-1}$. Then the following statements are equivalent:* 1. *$\mathop{\mathrm{TS}}(K,u)\neq \{0\}$.* 2. *For each $c > 0$ there exists some $P \in\mathop{\mathrm{supp}}\mu$ that does not have a $c$-cusp in direction $u$.* We denote by $\mathcal{K}^n_{sm}$ the set of all smooth convex bodies. Since the complement of $\mathcal{K}^n_{sm}$ is a countable union of closed sets, $\mathcal{K}^n_{sm}$ is measurable. It follows from [@Schneider Thm. 2.2.1 (a)] that a finite Minkowski sum of convex bodies, one of which is smooth, is smooth again. In other words, if the sum is not smooth, then none of the summands is smooth. 
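This fact about Minkowski sums can be illustrated with support functions; the following is a minimal numerical sketch (the chosen bodies and helper names are ours). The square $[-1,1]^2$ is not smooth: its support function $v \mapsto \abs{v_1} + \abs{v_2}$ is linear on the two-dimensional normal cone of the vertex $(1,1)$. Adding the unit disc, whose support function is the Euclidean norm, destroys this linearity, in accordance with smoothness of the sum. Since support functions are convex and positively $1$-homogeneous, linearity on a segment can be tested through equality in the midpoint convexity inequality (cf. Lemma 37).

```python
import math

def h_square(v):
    """Support function of the square [-1, 1]^2."""
    return abs(v[0]) + abs(v[1])

def h_square_plus_disc(v):
    """Support function of the Minkowski sum [-1, 1]^2 + B^2."""
    return h_square(v) + math.hypot(v[0], v[1])

def midpoint_gap(h, a, b):
    """(h(a) + h(b))/2 - h((a + b)/2); zero iff the convex function h
    is affine on the segment [a, b]."""
    m = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    return (h(a) + h(b)) / 2 - h(m)

# Two directions in the normal cone of the vertex (1, 1) of the square.
a, b = (1.0, 0.2), (0.2, 1.0)
print(midpoint_gap(h_square, a, b))            # 0: h is linear there, no smoothness
print(midpoint_gap(h_square_plus_disc, a, b))  # positive: no such linearity for the sum
```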
Next we show that this fact extends to macroids. In particular, there is no point in considering $\mathcal{K}^n_{sm}$-macroids. **Corollary 41**. *Suppose that $K$ is a $\mathcal{K}_*$-macroid with generating measure $\mu$ and $K$ is not smooth. Then none of the $P\in\mathop{\mathrm{supp}}\mu$ is smooth.* *Proof.* If $K$ is not smooth, then there is a convex cone $A$ with $\dim A\ge 2$ such that $h_K$ is linear on $A$. By Lemma [Lemma 38](#thm:locLin2){reference-type="ref" reference="thm:locLin2"}, $h_P$ is linear on $A$ for each $P\in \mathop{\mathrm{supp}}\mu$. But then $P$ is not smooth, for each $P\in\mathop{\mathrm{supp}}\mu$. ◻ # Pruning {#sec:5} This section develops a technique that is only relevant for proving one of the two inclusions on which the characterization Theorem [\[thm:suppChar\]](#thm:suppChar){reference-type="ref" reference="thm:suppChar"} is based: Let $\mathbfcal{C}= (C_1, \ldots, C_{n-1})$ be a tuple of $k$-polyoids with generating measures $\mu_1, \ldots, \mu_{n-1}$. If $u \in \mathop{\mathrm{ext}}\mathbfcal{C}$, then we have to show that $u \in \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\mathbfcal{C})$. Because this is the most difficult aspect of [\[thm:suppChar\]](#thm:suppChar){reference-type="ref" reference="thm:suppChar"}, we begin with some examples. The first example introduces the idea of a "witness polytope" that is used to prove that some normal vector is in the support of a mixed area measure. The other two examples illustrate how to find "witness polytopes" in more complicated situations using *pruning*, the method developed in this section. **Example 42** (A witness polytope). *Let $n=2$. Let $(e_1,e_2)$ be the standard orthonormal basis of $\mathbb{R}^2$.
Let $$C^{(\ell )}\coloneqq \mathop{\mathrm{conv}}\set*{0, e_2, e_1+ (1+ {\ell}^{-1})e_2},\quad \ell\in\mathbb{N},$$ and define the triangle body (i.e., the $3$-polyoid) $$C \coloneqq \sum_{\ell=1}^\infty 2^{-\ell} C^{(\ell)}$$ with generating measure $$\mu \coloneqq \sum_{\ell=1}^\infty 2^{-\ell} \delta_{C^{(\ell)}} ;$$ see Figure [\[fig:ex1\]](#fig:ex1){reference-type="ref" reference="fig:ex1"} for an illustration. The sequence $(C^{(\ell)})_\ell$ converges to the triangle $$K \coloneqq \mathop{\mathrm{conv}}\set*{0, e_2, e_1+e_2}$$ and so $$\mathop{\mathrm{supp}}\mu = \set*{K, C^{(1)}, C^{(2)}, \ldots} .$$ By Corollary [Corollary 40](#cor:cusp){reference-type="ref" reference="cor:cusp"} we find that $\mathop{\mathrm{TS}}(C, e_2) \ne \set*{0}$ because $K \in\mathop{\mathrm{supp}}\mu$ does not have a $c$-cusp in direction $e_2$ for any $c > 0$. Hence, $e_2$ is a $(C)$-extreme normal vector. So Theorem [\[thm:suppChar\]](#thm:suppChar){reference-type="ref" reference="thm:suppChar"} predicts $e_2 \in \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(C)$.* *Indeed, Theorem [Theorem 26](#thm:suppInt){reference-type="ref" reference="thm:suppInt"} and $K \in \mathop{\mathrm{supp}}\mu$ show that $$e_2 \in \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(K) \subseteq \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(C) .$$ Alternatively, we could argue that $C^{(\ell )}\to K$ and so Lemma [Lemma 27](#thm:suppEqOfTendsto){reference-type="ref" reference="thm:suppEqOfTendsto"} and Theorem [Theorem 26](#thm:suppInt){reference-type="ref" reference="thm:suppInt"} yield $$e_2 \in \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(K) \subseteq \bigcup_{\ell=1}^\infty \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(C^{(\ell)}) \subseteq \bigcup_{P\in\mathop{\mathrm{supp}}\mu} \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(P) = \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(C) .$$ We have used $K \in \mathop{\mathrm{supp}}\mu$ as a "witness polytope" to establish $e_2 \in \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(C)$.* **Example 
43** (Pruning). *Let $n=2$. Let again $(e_1,e_2)$ be the standard orthonormal basis of $\mathbb{R}^2$. Let $$C^{(\ell )}\coloneqq \mathop{\mathrm{conv}}\set*{-e_2,0, -\ell^{-1}e_1-\ell^{-2}e_2},\quad \ell\in\mathbb{N} ;$$ see Figure [\[fig:ex2\]](#fig:ex2){reference-type="ref" reference="fig:ex2"} (a) and (b). Then we define $C$ to be the $3$-polyoid with generating measure $$\mu \coloneqq \sum_{\ell=1}^\infty 2^{-\ell} \delta_{C^{(\ell)}} .$$ The sequence $(C^{(\ell)})_\ell$ converges to the segment $$K \coloneqq \mathop{\mathrm{conv}}\set*{0,-e_2} ,$$ so that $\mathop{\mathrm{supp}}\mu = \set*{K, C^{(1)},C^{(2)}, \ldots}$. This time, $\mathop{\mathrm{TS}}(C^{(\ell)}, e_2)$ for $\ell\in\mathbb{N}$ and $\mathop{\mathrm{TS}}(K, e_2)$ are all $0$-dimensional. However, there is no fixed $c > 0$ such that every $C^{(\ell)}$ has a $c$-cusp in direction $e_2$, so that $\mathop{\mathrm{TS}}(C, e_2)$ is nontrivial by Lemmas [Lemma 35](#thm:tc4){reference-type="ref" reference="thm:tc4"} and [Lemma 39](#thm:tc5){reference-type="ref" reference="thm:tc5"}. Theorem [\[thm:suppChar\]](#thm:suppChar){reference-type="ref" reference="thm:suppChar"} predicts again that $e_2 \in \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(C)$.* *However, since $e_2 \notin \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(C^{(\ell)})$ for all $\ell\in\mathbb{N}$ and $e_2 \notin \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(K)$, we cannot choose a "witness polytope" in $\mathop{\mathrm{supp}}\mu$ and repeat the argument from Example [Example 42](#ex:simpleWitness){reference-type="ref" reference="ex:simpleWitness"}.* *The problem is this. In the previous example, the faces between the second and third vertex of $C^{(\ell)}$ converged to a one-dimensional face of the limit triangle $K$ with normal $e_2$. In the current example, however, these faces degenerate to a $0$-dimensional face. The only glimmer of hope is that the outer normals of these degenerating faces do still converge to $e_2$. 
If we could just scale up $C^{(\ell)}$ by a factor of $\ell$, the faces would not degenerate, but then we are confronted with the problem that $C^{(\ell)}$ is an unbounded sequence of convex bodies that does not converge to anything we might call a "witness polytope" anymore.* *On the other hand, by Lemma [Lemma 7](#thm:facialStability){reference-type="ref" reference="thm:facialStability"} we find a neighborhood $U \subseteq \mathbb{S}^{n-1}$ of $e_2$ such that for large enough $\ell$ and all $v \in U$, $$F(C^{(\ell)}, v) \subseteq \mathop{\mathrm{conv}}\set*{0, -\ell^{-1}e_1-\ell^{-2}e_2 } \eqqcolon F^{(\ell)} ;$$ see Figure [\[fig:ex2\]](#fig:ex2){reference-type="ref" reference="fig:ex2"} (c) for an illustration. Therefore, for all Borel sets $V \subseteq U$, $$\tau(C^{(\ell)}, V) = \tau(F^{(\ell)}, V) .$$ Now Lemma [Lemma 15](#thm:maTau){reference-type="ref" reference="thm:maTau"} implies that $$\mathop{\mathrm{S}}(C^{(\ell)})\llcorner U = \mathop{\mathrm{S}}(F^{(\ell)})\llcorner U .$$ So if we can show that $e_2 \in \mathop{\mathrm{cl}}\bigcup_{\ell=1}^\infty\mathop{\mathrm{supp}}\mathop{\mathrm{S}}(F^{(\ell)})$, then $e_2 \in \mathop{\mathrm{cl}}\bigcup_{\ell=1}^\infty\mathop{\mathrm{supp}}\mathop{\mathrm{S}}(C^{(\ell)}) = \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(C)$. Indeed, $\ell\cdot F^{(\ell)}\to F:=\mathop{\mathrm{conv}}\set*{0,-e_1}$, and therefore Lemma [Lemma 27](#thm:suppEqOfTendsto){reference-type="ref" reference="thm:suppEqOfTendsto"} yields $$e_2 \in \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(F) \subseteq \mathop{\mathrm{cl}}\bigcup_{\ell=1}^\infty \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\ell\cdot F^{(\ell)}) = \mathop{\mathrm{cl}}\bigcup_{\ell=1}^\infty \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(F^{(\ell)}) .$$ In this example, we have leveraged that the $c$-cusps of $C^{(\ell)}$ in direction $e_2$ become more and more obtuse in the sense that $c > 0$ becomes smaller and smaller. 
This helped us find a sequence of faces, which unfortunately degenerated to a $0$-dimensional face in the limit. After "pruning" the sequence of triangles, i.e. removing some irrelevant vertices, we were able to scale up the polytopes in the sequence so that the sequence of faces converged to a $1$-dimensional face $F$, which we used as our "witness polytope" to prove that $e_2 \in \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(C)$.* **Example 44** (Double pruning). *We consider again $\mathbb{R}^2$ with the standard orthonormal basis $(e_1,e_2)$. Define for all $\ell\in\mathbb{N}$, $$v_1^{(\ell )}\coloneqq -e_2,\quad v_2^{(\ell )}\coloneqq 0,\quad v_3^{(\ell )}\coloneqq -\ell^{-1}e_1-\ell^{-1}e_2,\quad v_4^{(\ell )}\coloneqq -\ell^{-2}e_1-\ell^{-3}e_2,$$ $$C^{(\ell )}\coloneqq \mathop{\mathrm{conv}}\set*{ v_1^{(\ell)}, v_2^{(\ell)}, v_3^{(\ell)}, v_4^{(\ell )}} .$$ The vertices $v_2^{(\ell )}, v_3^{(\ell )}, v_4^{(\ell )}$ all converge to $0$, which is the unique element of the support set $F(\lim C^{(\ell)}, e_2)$. In analogy to the previous example, we remove $v_1^{(\ell )}$ and scale by a factor of $\ell$ to obtain a sequence of triangles $$D^{(\ell )}\coloneqq \mathop{\mathrm{conv}}\set*{ 0, -e_1-e_2, -\ell^{-1}e_1-\ell^{-2}e_2} .$$ Again, $F(\lim D^{(\ell)}, e_2)$ is a singleton and the vertices $\ell v_2^{(\ell)}, \ell v_4^{(\ell)}$ of $D^{(\ell)}$ converge to its unique element. Removing $\ell v_3^{(\ell)}$ and scaling by $\ell$ again, we get $$E^{(\ell )}\coloneqq \mathop{\mathrm{conv}}\set*{ 0, -e_1-\ell^{-1}e_2 } .$$ Now, $F(\lim E^{(\ell)}, e_2) = \lim E^{(\ell)}$ is one-dimensional.
Applying similar arguments as in the previous example, we conclude from $e_2 \in \mathop{\mathrm{cl}}\bigcup_{\ell=1}^\infty \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(E^{(\ell)})$ that $e_2 \in \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(C)$.* *This example shows that the pruning procedure may have to be repeated several times.* After these preparatory examples, we describe the general approach. **Definition 45**. *Let $\mathbf{Q}= (Q_\ell)_\ell$ be a bounded sequence of polytopes with a uniformly bounded number of vertices and $u \in \mathbb{S}^{n-1}$. Let $k\in\mathbb{N}$ be the smallest number such that all polytopes in $\mathbf{Q}$ are $k$-topes.* *Choose an arbitrary sequence $\mathbf{V}= (V_\ell)_\ell = ( (v_\ell^{(1)}, \ldots, v_\ell^{(k)}) )_\ell$ of $k$-tuples of points in $\mathbb{R}^n$ such that $$Q_\ell = \mathop{\mathrm{conv}}\set*{ v_\ell^{(i)} \given i \in [k] } \quad \text{for all $\ell\in\mathbb{N}$} .$$ Let $\mathbf{V}' = (V_{\ell_s})_s$ be a convergent subsequence of $\mathbf{V}$ and $Q\coloneqq \lim_{t\to\infty} Q_{\ell_t}$. Then we define a sequence $\mathop{\mathrm{prune}}(\mathbf{Q}, u) = (\mathop{\mathrm{prune}}(\mathbf{Q}, u, s))_s$ of polytopes $$\begin{aligned} \mathop{\mathrm{prune}}(\mathbf{Q}, u, s) \coloneqq &\, c_s\left(\mathop{\mathrm{conv}}\set*{ v_{\ell_s}^{(i)} \given i \in [k], \lim_{t\to\infty} v_{\ell_t}^{(i)} \in F(Q, u) }- v_{\ell_s}^{(i_0)}\right)\\ \subseteq &\, c_s \left(Q_{\ell_s}-v_{\ell_s}^{(i_0)}\right) ,\end{aligned}$$ where $i_0\in[k]$ is chosen such that $\lim_{t\to\infty} v_{\ell_t}^{(i_0)} \in F(Q, u)$ and $c_s$ is the unique positive number such that $\mathop{\mathrm{diam}}\mathop{\mathrm{prune}}(\mathbf{Q}, u, s) =1$ if the convex hull by which $\mathop{\mathrm{prune}}(\mathbf{Q}, u, s)$ is defined is not a singleton, otherwise we set $c_s \coloneqq 1$, for $s\in\mathbb{N}$. Note that $0\in \mathop{\mathrm{prune}}(\mathbf{Q}, u, s)$. 
We may also pass to a subsequence of $\mathop{\mathrm{prune}}(\mathbf{Q}, u)$ and denote it in the same way; in any case, the sequence $\mathop{\mathrm{prune}}(\mathbf{Q}, u)$ is subject to various choices and not uniquely determined by $\mathbf{Q}$ and $u$. The polytopes in $\mathop{\mathrm{prune}}(\mathbf{Q}, u)$ have diameter $1$ or are singletons and they contain $0$.* *If $$\lim_{t\to\infty} v_{\ell_t}^{(i)} \notin F(Q, u) \quad \text{for some }i\in[k],$$ then $k \ge 2$ and $\mathop{\mathrm{prune}}(\mathbf{Q}, u)$ consists of $(k-1)$-topes. After finitely many steps, the members of the sequence of sequences defined by $$\mathop{\mathrm{prune}}_0(\mathbf{Q}, u) \coloneqq \mathbf{Q}, \quad\mathop{\mathrm{prune}}_{m+1}(\mathbf{Q}, u) \coloneqq \mathop{\mathrm{prune}}(\mathop{\mathrm{prune}}_m(\mathbf{Q}, u), u) \quad \text{for all $m\in\mathbb{N}_0$}$$ remain unchanged (if we do not pass to a subsequence) and become equal to some "fixpoint" sequence $\mathop{\mathrm{prune}}_*(\mathbf{Q}, u)$.* **Remark 46**. * If $\mathop{\mathrm{prune}}_*(\mathbf{Q}, u)$ is obtained as described in Definition [Definition 45](#def:prune){reference-type="ref" reference="def:prune"} and $Q^*:=\lim_{s\to\infty}\mathop{\mathrm{prune}}_*(\mathbf{Q}, u,s)$, then $0\in Q^*\subset u^\perp$ and $\mathop{\mathrm{diam}}Q^*\in\{0,1\}$.* The next two lemmas prepare the proof of the crucial Witness Lemma [\[thm:pruningLemma\]](#thm:pruningLemma){reference-type="ref" reference="thm:pruningLemma"}. The first is Lemma [\[thm:realPruningLemma\]](#thm:realPruningLemma){reference-type="ref" reference="thm:realPruningLemma"}, which implies that, at least locally, pruning does not change the mixed area measures as far as their support is concerned. Lemma [Lemma 48](#thm:sticky){reference-type="ref" reference="thm:sticky"} then states a condition which can be used to ensure that the limit of a pruning sequence is non-degenerate. **Lemma 47** (Pruning lemma).
*[\[thm:realPruningLemma\]]{#thm:realPruningLemma label="thm:realPruningLemma"} Let $\mathbf{Q}= (Q_\ell)_\ell$ be a bounded sequence of polytopes in $\mathbb{R}^n$ with a uniform bound on the number of vertices, let $u \in\mathbb{S}^{n-1}$ and $m \in \mathbb{N}_0$. Then there are an $\mathbb{S}^{n-1}$-open neighborhood $U \subseteq \mathbb{S}^{n-1}$ of $u$, a subsequence $(Q_{\ell_s})_s$ and a sequence of positive numbers $(\lambda_s)_s$ such that for all but finitely many $s\in\mathbb{N}$ and for all $(n-2)$-tuples $\mathbfcal{C}$ of convex bodies in $\mathbb{R}^n$, $$\mathop{\mathrm{S}}(Q_{\ell_s}, \mathbfcal{C})\llcorner U = \lambda_s \mathop{\mathrm{S}}(\mathop{\mathrm{prune}}_m(\mathbf{Q}, u, s), \mathbfcal{C})\llcorner U .$$ In particular, the statement is true if  $\mathop{\mathrm{prune}}_m$ is replaced by $\mathop{\mathrm{prune}}_*$.* *Proof.* The proof is by induction on $m \in \mathbb{N}_0$. If $m = 0$, the claim follows from $\mathbf{Q}= \mathop{\mathrm{prune}}_0(\mathbf{Q}, u)$. Now assume $m \ge 1$ and that the claim is true for smaller $m$. Let $k\in\mathbb{N}$ be the smallest possible number such that $\mathbf{Q}$ consists of $k$-topes, just as in Definition [Definition 45](#def:prune){reference-type="ref" reference="def:prune"}. Let $\mathbf{V}= (V_\ell)_\ell = ( (v_\ell^{(1)}, \ldots, v_\ell^{(k)}) )_\ell$ be a sequence of spanning points and $\mathbf{V}' = (V_{\ell_s})_s$ a convergent subsequence (as in Definition [Definition 45](#def:prune){reference-type="ref" reference="def:prune"}). Write $$v^{(i)} \coloneqq \lim_{s\to\infty} v_{\ell_s}^{(i)} \quad \text{for all $i \in [k]$} .$$ We apply Lemma [Lemma 7](#thm:facialStability){reference-type="ref" reference="thm:facialStability"} to $\lim_{s\to\infty} Q_{\ell_s} = \mathop{\mathrm{conv}}\set*{ v^{(i)} \given i \in [k] }$. 
Let $$I \coloneqq \set*{ i \in [k] \given v^{(i)} \in F\prn*{\lim_{s\to\infty} Q_{\ell_s}, u} } .$$ Lemma [Lemma 7](#thm:facialStability){reference-type="ref" reference="thm:facialStability"} shows that there is $\varepsilon\in (0,1)$ such that for all $w, x_1, \ldots, x_k$ with $d(u, w) < \varepsilon$ and $d(v^{(i)}, x_i) < \varepsilon$ ($i \in [k]$), the polytope $P \coloneqq \mathop{\mathrm{conv}}\set*{ x_i \given i \in [k] }$ satisfies $$F(P, w) \subseteq \mathop{\mathrm{conv}}\set*{ x_i \given i \in I } .$$ In particular, there is an open neighborhood $U \subseteq \mathbb{R}^n\setminus\set{0}$ of $u$ such that for all but finitely many $s$ and for all $w \in U$, $$F(Q_{\ell_s}, w) -v_{\ell_s}^{(i_0)}\subseteq \mathop{\mathrm{conv}}\set*{ v_{\ell_s}^{(i)} \given i \in I } -v_{\ell_s}^{(i_0)}= c_s^{-1} \mathop{\mathrm{prune}}(\mathbf{Q}, u, s) \subseteq Q_{\ell_s}-v_{\ell_s}^{(i_0)} ,$$ where $c_s$ is the positive factor in Definition [Definition 45](#def:prune){reference-type="ref" reference="def:prune"}. 
It follows that $$F(\mathop{\mathrm{prune}}(\mathbf{Q}, u, s),w)=c_s\left(F(Q_{\ell_s},w)-v_{\ell_s}^{(i_0)}\right),$$ and by Lemma [Lemma 15](#thm:maTau){reference-type="ref" reference="thm:maTau"} and the translation invariance of mixed area measures we get, for all but finitely many $s\in\mathbb{N}$ and for every $(n-2)$-tuple $\mathbfcal{C}$ of convex bodies, $$\mathop{\mathrm{S}}(Q_{\ell_s}, \mathbfcal{C})\llcorner (U \cap \mathbb{S}^{n-1}) = c_s^{-1} \mathop{\mathrm{S}}(\mathop{\mathrm{prune}}(\mathbf{Q}, u, s), \mathbfcal{C})\llcorner (U \cap \mathbb{S}^{n-1}) .$$ Applying the inductive hypothesis for $m-1$ to $\mathop{\mathrm{prune}}(\mathbf{Q}, u)$, we obtain an $\mathbb{S}^{n-1}$-open neighborhood $V \subseteq \mathbb{S}^{n-1}$ of $u$, a subsequence $(\mathop{\mathrm{prune}}(\mathbf{Q}, u, s_t))_t$ and a sequence of positive numbers $(\mu_t)_t$ such that for all but finitely many $t \in \mathbb{N}$ and for every $(n-2)$-tuple $\mathbfcal{C}$ of convex bodies, $$\mathop{\mathrm{S}}(\mathop{\mathrm{prune}}(\mathbf{Q}, u, s_t), \mathbfcal{C})\llcorner V = \mu_t \mathop{\mathrm{S}}(\mathop{\mathrm{prune}}_m(\mathbf{Q}, u, t), \mathbfcal{C})\llcorner V .$$ Now, $V \cap U \subseteq \mathbb{S}^{n-1}$ is also an $\mathbb{S}^{n-1}$-open neighborhood of $u$, and for all but finitely many $t \in \mathbb{N}$, $$\begin{aligned} \mathop{\mathrm{S}}(Q_{\ell_{s_t}}, \mathbfcal{C})\llcorner (V \cap U) &= c_{s_t}^{-1} \mathop{\mathrm{S}}(\mathop{\mathrm{prune}}(\mathbf{Q}, u, s_t), \mathbfcal{C})\llcorner (V\cap U) \\ &= c_{s_t}^{-1} \mu_t \mathop{\mathrm{S}}(\mathop{\mathrm{prune}}_m(\mathbf{Q}, u, t), \mathbfcal{C})\llcorner (V \cap U) \end{aligned}$$ for every $(n-2)$-tuple $\mathbfcal{C}$ of convex bodies. This concludes the induction. Because there is $m\in\mathbb{N}_0$ such that $\mathop{\mathrm{prune}}_*(\mathbf{Q}, u) = \mathop{\mathrm{prune}}_m(\mathbf{Q}, u)$, the claim is also true for $\mathop{\mathrm{prune}}_*$. ◻ **Lemma 48** (Sticky vertices). 
*Let $u \in\mathbb{S}^{n-1}$. Let $\mathbf{Q}= (Q_\ell)_\ell$ be a bounded sequence of polytopes with a uniform bound on the number of vertices, satisfying the following property $\mathfrak{P}(\mathbf{Q},u)$: For all but finitely many $\ell\in\mathbb{N}$, there are distinct vertices $x_\ell, y_\ell$ of $Q_\ell$ such that $x_\ell \in F(Q_\ell, u)$ and $\norm{x_\ell - y_\ell}^{-1}{\left<x_\ell - y_\ell, u\right>} \to 0$ as $\ell\to\infty$.* *Then the sequences $\mathop{\mathrm{prune}}_m(\mathbf{Q}, u)$, $m\in\mathbb{N}$, can be chosen in such a way that the property $\mathfrak{P}(\mathop{\mathrm{prune}}_m(\mathbf{Q}, u),u)$ is satisfied for all $m\in\mathbb{N}$.* *Proof.* Let $k\in\mathbb{N}$ be the smallest possible number such that $\mathbf{Q}$ consists of $k$-topes. It suffices to prove the claim for $\mathop{\mathrm{prune}}$, since the argument can then be iterated. Let $\mathbf{V}= (V_\ell)_\ell = ( (v_\ell^{(1)}, \ldots, v_\ell^{(k)}) )_\ell$ be a sequence of spanning points chosen as in Definition [Definition 45](#def:prune){reference-type="ref" reference="def:prune"} which has $\mathbf{V}' = (V_{\ell_s})_s$ as a convergent subsequence. Moreover, the subsequence can be chosen such that there are distinct $i,j\in[k]$ with $x_{\ell_s}=v_{\ell_s}^{(i)}$ and $y_{\ell_s}=v_{\ell_s}^{(j)}$ for $s\in\mathbb{N}$. Then we set $Q:=\lim_{s\to\infty}Q_{\ell_s}$ and $$v^{(i')} \coloneqq \lim_{s\to\infty} v_{\ell_s}^{(i')} \quad \text{for $i' \in [k]$} .$$ It follows that $$\left\langle v^{(i)}, u\right\rangle \leftarrow \left<v_{\ell_s}^{(i)}, u\right> = \left<x_{\ell_s}, u\right> = h_{Q_{\ell_s}}(u)\to h_Q(u),$$ as $s\to\infty$, hence $\left<v^{(i)}, u\right> = h_Q(u)$ and $v^{(i)} \in F(Q, u)$.
Moreover, $$\left\langle v^{(j)}, u\right\rangle \leftarrow \left<v_{\ell_s}^{(j)}, u\right> = \left<y_{\ell_s}, u\right> = \left<x_{\ell_s}, u\right> + \left<y_{\ell_s} - x_{\ell_s}, u\right> = h_{Q_{\ell_s}}(u) + \left<y_{\ell_s} - x_{\ell_s}, u\right>\to h_Q(u),$$ as $s\to\infty$, because of the assumption and since $\left(\left\|y_{\ell_s} - x_{\ell_s}\right\|\right)_s$ is bounded. Hence, we also have $v^{(j)} \in F(Q, u)$. The construction of $\mathop{\mathrm{prune}}(\mathbf{Q}, u, s)$ then shows that $c_s (v_{\ell_s}^{(i)}-v_{\ell_s}^{(i_0)})$ and $c_s (v_{\ell_s}^{(j)}-v_{\ell_s}^{(i_0)})$ are distinct vertices of $\mathop{\mathrm{prune}}(\mathbf{Q}, u, s)$ for all $s\in\mathbb{N}$, where $c_s$ is the positive scaling factor in Definition [Definition 45](#def:prune){reference-type="ref" reference="def:prune"}. In addition, $c_s (v_{\ell_s}^{(i)}-v_{\ell_s}^{(i_0)}) \in F(\mathop{\mathrm{prune}}(\mathbf{Q}, u, s),u)$ and $$\frac{\left<c_s (v_{\ell_s}^{(i)}-v_{\ell_s}^{(i_0)}) -c_s (v_{\ell_s}^{(j)}-v_{\ell_s}^{(i_0)}) , u\right>}{\norm{c_s (v_{\ell_s}^{(i)}-v_{\ell_s}^{(i_0)}) -c_s (v_{\ell_s}^{(j)}-v_{\ell_s}^{(i_0)}) }} = \frac{\left<x_{\ell_s} - y_{\ell_s}, u\right>}{\norm{x_{\ell_s} - y_{\ell_s}}} \to 0 ,$$ as $s\to\infty$. Thus $\mathop{\mathrm{prune}}(\mathbf{Q}, u)$ has the required property and the iteration can be continued. ◻ After these preparations, we state and prove the main auxiliary result in this section. **Lemma 49** (Witness lemma). *[\[thm:pruningLemma\]]{#thm:pruningLemma label="thm:pruningLemma"} Let $u\in\mathbb{S}^{n-1}$. Let $M \subset \mathbb{R}^n$ be a $k$-polyoid with generating measure $\mu$.
If  $\mathop{\mathrm{TS}}(M, u) \ne \{0\}$, then there is a $k$-tope $\Re(M,u) \subset u^\perp$ with $\{0\}\subset \Re(M,u)$ (that is not a singleton) such that for every $(n-2)$-tuple $\mathbfcal{C}$ of convex bodies in $\mathbb{R}^n$, $$u \in \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\Re(M,u), \mathbfcal{C}) \quad \text{implies} \quad u \in \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(M, \mathbfcal{C}) .$$* *Proof.* For every $Q \in \mathop{\mathrm{supp}}\mu$, choose an arbitrary vertex $x_Q \in F(Q, u)$. If $\mathop{\mathrm{TS}}(M, u) \ne \{0\}$, then Corollary [Corollary 40](#cor:cusp){reference-type="ref" reference="cor:cusp"} shows that for every $c > 0$ there is some $P \in \mathop{\mathrm{supp}}\mu$ *not* having a $c$-cusp in direction $u$. Hence we can find a sequence of $k$-topes $\mathbf{Q}:=(Q_\ell)_\ell$ in $\mathop{\mathrm{supp}}\mu$ and a sequence of vertices $y_\ell\in Q_\ell$, $\ell\in\mathbb{N}$, such that $x_{Q_\ell}\neq y_\ell$ and $$\norm{y_\ell - x_{Q_\ell}}^{-1} \left<y_\ell - x_{Q_\ell}, u\right>\to 0\quad \text{as }\ell\to\infty.$$ By Lemma [Lemma 48](#thm:sticky){reference-type="ref" reference="thm:sticky"}, $\mathop{\mathrm{prune}}_*(\mathbf{Q}, u, \ell)$ has at least two distinct vertices, and by Definition [Definition 45](#def:prune){reference-type="ref" reference="def:prune"} diameter $1$, for all but finitely many $\ell$. So $\mathop{\mathrm{prune}}_*( \mathbf{Q}, u)$, being a convergent sequence of $k$-topes, converges to a $k$-tope $\Re(M,u)\subset u^\perp$ of diameter $1$ with $0\in \Re(M,u)$ (see Remark [Remark 46](#rem:subspace){reference-type="ref" reference="rem:subspace"}). In particular, $\Re(M,u)$ is not a singleton. 
By [\[thm:realPruningLemma\]](#thm:realPruningLemma){reference-type="ref" reference="thm:realPruningLemma"}, there is a sequence of positive numbers $(\lambda_s)_s$, a subsequence $(Q_{\ell_s})_s$ of $\mathbf{Q}$ and an $\mathbb{S}^{n-1}$-open neighborhood $U \subseteq \mathbb{S}^{n-1}$ of $u$ such that for an arbitrary $(n-2)$-tuple $\mathbfcal{C}$ of convex bodies, $$\begin{aligned} \label{eq:jump} \mathop{\mathrm{S}}(Q_{\ell_s}, \mathbfcal{C})\llcorner U = \lambda_s \mathop{\mathrm{S}}(\mathop{\mathrm{prune}}_*( \mathbf{Q}, u, s ), \mathbfcal{C})\llcorner U .\end{aligned}$$ Now assume that $u \in \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\Re(M,u), \mathbfcal{C})$. Then by continuity of $\mathop{\mathrm{S}}$ and Lemma [Lemma 27](#thm:suppEqOfTendsto){reference-type="ref" reference="thm:suppEqOfTendsto"}, $$u \in \mathop{\mathrm{cl}}\bigcup_{s=1}^\infty \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\mathop{\mathrm{prune}}_*( \mathbf{Q}, u, s ), \mathbfcal{C}) ,$$ and because $U$ is a neighborhood of $u$, eq. [\[eq:jump\]](#eq:jump){reference-type="eqref" reference="eq:jump"} and Theorem [Theorem 26](#thm:suppInt){reference-type="ref" reference="thm:suppInt"} now imply $$u \in \mathop{\mathrm{cl}}\bigcup_{s=1}^\infty \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(Q_{\ell_s}, \mathbfcal{C}) \subseteq \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(M, \mathbfcal{C}) ,$$ which proves the assertion. ◻ # Switching {#sec:6} In this section, we provide a lemma that will be needed in the proof of our main result to carry out the induction step. Recall the conventions and the notation concerning tuples of sets introduced in Section [2](#sec:2){reference-type="ref" reference="sec:2"}. As usual, a linear subspace $R$ of some ambient vector space is said to be trivial if $R=\{0\}$. **Lemma 50** (Switching lemma). *Assume that $n \ge 2$ and $u \in \mathbb{S}^{n-1}$. 
Let $\mathbfcal{T} = (T_1, \ldots, T_{n-1})$ and $\mathbfcal{R} = (R_1, \ldots, R_{n-1})$ be tuples of linear subspaces of  $u^\perp$ such that $\mathbfcal{T}$ is semicritical and $R_i$ is nontrivial for all $i\in[n-1]$. Then there are index sets $\varnothing \ne I \subseteq J \subseteq [n-1]$ such that $\mathbfcal{R}_I$ spans an $\abs{I}$-dimensional subspace and $\mathbfcal{R}_J + \mathbfcal{T}_{J^\mathsf{c}}$ is semicritical.* *Proof.* Denote by $\mathbfcal{S} = (T_1 + R_1, \ldots, T_{n-1} + R_{n-1})$ the tuple of elementwise sums of $\mathbfcal{T}$ and $\mathbfcal{R}$. Choose $J \subseteq [n-1]$ inclusion-maximal such that $$\begin{aligned} \label{eq:jprop} \mathop{\mathrm{V}}\prn*{\mathbfcal{R}_J + \mathbfcal{S}_{J^\mathsf{c}}} > 0 .\end{aligned}$$ Such $J$ exists since $\mathbfcal{S} = \mathbfcal{R}_\varnothing + \mathbfcal{S}_{[n-1]}$ is semicritical: $T_i \subseteq S_i$ for $i \in [n-1]$, $\mathbfcal{T}$ is semicritical by assumption and hence Lemma [Lemma 21](#thm:critSimple){reference-type="ref" reference="thm:critSimple"} (6) implies the assertion. Because $J$ is maximal, even $\mathbfcal{R}_J + \mathbfcal{T}_{J^\mathsf{c}}$ is semicritical: Repeatedly applying Lemma [Lemma 24](#thm:critAdd){reference-type="ref" reference="thm:critAdd"}, we find a set $K \subseteq J^\mathsf{c}$ such that $\mathbfcal{R}_{J\cup K} + \mathbfcal{T}_{J^\mathsf{c}\setminus K}$ is semicritical. But then also $\mathbfcal{R}_{J \cup K} + \mathbfcal{S}_{J^\mathsf{c}\setminus K}$ is semicritical, forcing $K = \varnothing$ because $J$ is inclusion-maximal. Furthermore, let $I \subseteq [n-1]$ be inclusion-minimal such that $\mathbfcal{R}_{I \cap J} + \mathbfcal{S}_{I \setminus J}$ is subcritical. Such $I$ exists because $\mathbfcal{R}_J + \mathbfcal{S}_{J^\mathsf{c}}$ is subcritical as an $(n-1)$-tuple of $u^\perp$-subspaces (since $n\ge 2$), showing that at least $[n-1]$ satisfies the desired property. 
Note that $I\neq\varnothing$, since by definition an empty tuple is not subcritical. Then $E \coloneqq \mathop{\mathrm{\overline{span}}}\prn*{\mathbfcal{R}_{I \cap J} + \mathbfcal{S}_{I \setminus J}}$ is $\abs{I}$-dimensional: On the one hand, $\mathbfcal{R}_{I \cap J} + \mathbfcal{S}_{I \setminus J}$ is semicritical by Lemma [Lemma 21](#thm:critSimple){reference-type="ref" reference="thm:critSimple"} (1). On the other hand, it is subcritical by the construction of $I$. If it spanned a higher-dimensional subspace, then this tuple would have to contain an even smaller subcritical set, contradicting the minimality of $I$. By Lemma [Lemma 22](#thm:critRed){reference-type="ref" reference="thm:critRed"} and relation [\[eq:jprop\]](#eq:jprop){reference-type="eqref" reference="eq:jprop"}, it follows that $$\begin{aligned} \label{eq:scproj} \mathop{\mathrm{V}}\prn*{\pi_{E^\perp}(\mathbfcal{R}_{J \setminus I} + \mathbfcal{S}_{J^\mathsf{c}\setminus I})} > 0 .\end{aligned}$$ It remains to show that $I \subseteq J$. Assume for a contradiction that, without loss of generality, $1 \in I \setminus J$. Because $I$ was chosen inclusion-minimally such that $\mathbfcal{R}_{I \cap J} + \mathbfcal{S}_{I \setminus J}$ is subcritical, it follows that $\mathbfcal{R}_{I \cap J} + \mathbfcal{S}_{I \setminus (J \cup \set*{1})}$ is critical and as $R_1$ is nontrivial, Lemma [Lemma 21](#thm:critSimple){reference-type="ref" reference="thm:critSimple"} (7) implies that $\mathbfcal{R}_{I \cap (J \cup \set{1})} + \mathbfcal{S}_{I \setminus (J \cup \set{1})}$ is semicritical. Since $1\in I\setminus J$ and $R_1\subseteq S_1\subseteq E$, $\mathbfcal{R}_{I \cap (J \cup \set{1})} + \mathbfcal{S}_{I \setminus (J \cup \set{1})}$ spans a subspace of $E$ of dimension $\abs{I} = \dim E$, in other words, it also spans $E$. 
By Lemma [Lemma 22](#thm:critRed){reference-type="ref" reference="thm:critRed"} and relation [\[eq:scproj\]](#eq:scproj){reference-type="eqref" reference="eq:scproj"}, it follows that $$\mathop{\mathrm{V}}\prn*{\mathbfcal{R}_{J\cup\set{1}} + \mathbfcal{S}_{(J \cup\set{1})^\mathsf{c}}} > 0,$$ that is, $\mathbfcal{R}_{J\cup\set{1}} + \mathbfcal{S}_{(J \cup\set{1})^\mathsf{c}}$ is semicritical, in contradiction to the maximality of $J$ as expressed by relation [\[eq:jprop\]](#eq:jprop){reference-type="eqref" reference="eq:jprop"}. This proves that $I \subseteq J$. Finally, we get $\dim \mathbfcal{R}_{I}=|I|$, since $I\subseteq J$ and $\dim E=|I|$. ◻ # Proof of the characterization theorem {#sec:7} Now we are ready to confirm Theorem [\[thm:suppChar\]](#thm:suppChar){reference-type="ref" reference="thm:suppChar"} for smooth convex bodies and polyoids, for which eq. [\[eq:MGM\]](#eq:MGM){reference-type="eqref" reference="eq:MGM"} should be recalled. *Proof.* First, observe that it suffices to prove the claim for tuples that only contain polyoids (macroids). Consider the case that $\mathbfcal{M}$ does *not* solely consist of polyoids (macroids) and let $\mathbfcal{M}'$ be the tuple obtained from $\mathbfcal{M}$ by replacing all smooth bodies by ${B^{n}}$. Clearly, $\mathbfcal{M}'$ consists of polyoids (macroids), and the claim for $\mathbfcal{M}$ is equivalent to the claim for $\mathbfcal{M}'$ by the following argument. All smooth convex bodies have the same, $(n-1)$-dimensional, touching spaces. Therefore, $\mathop{\mathrm{cl}}\mathop{\mathrm{ext}}\mathbfcal{M} = \mathop{\mathrm{cl}}\mathop{\mathrm{ext}}\mathbfcal{M}'$. We can assume that the smooth and strictly convex body, contained in $\mathbfcal{M}$ by assumption, is the first one. By [@Schneider Lem. 7.6.15], $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\mathbfcal{M}) = \mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}}, \mathbfcal{M}_{\backslash 1})$. Now [@SvH23+ Cor. 
14.3] shows that we can replace the remaining smooth bodies in $\mathbfcal{M}_{\backslash 1}$ by ${B^{n}}$, and we obtain $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\mathbfcal{M}) = \mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}}, \mathbfcal{M}'_{\backslash 1}) = \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\mathbfcal{M}')$. Hence, it suffices to prove the claim for tuples that only contain polyoids (macroids), such as $\mathbfcal{M}'$. It remains to show for tuples $\mathbfcal{M}$ of polyoids that $$\mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\mathbfcal{M}) = \mathop{\mathrm{cl}}\mathop{\mathrm{ext}}\mathbfcal{M}.$$ For this we prove two inclusions. "$\subseteq$": For this part of the argument, we only need the weaker assumption that $\mathbfcal{M}$ is a tuple of macroids. By Theorem [Theorem 26](#thm:suppInt){reference-type="ref" reference="thm:suppInt"} and Lemma [Lemma 25](#thm:suppCharPoly){reference-type="ref" reference="thm:suppCharPoly"}, we get $$\mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\mathbfcal{M}) = \mathop{\mathrm{cl}}\bigcup_{\mathcal{P}\in\prod_{i=1}^{n-1}\mathop{\mathrm{supp}}\mu_i}\mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\mathcal{P}) = \mathop{\mathrm{cl}}\bigcup_{\mathcal{P}\in\prod_{i=1}^{n-1}\mathop{\mathrm{supp}}\mu_i}\mathop{\mathrm{ext}}\mathcal{P} .$$ So it remains to verify that $$\mathop{\mathrm{cl}}\bigcup_{\mathcal{P}\in\prod_{i=1}^{n-1}\mathop{\mathrm{supp}}\mu_i}\mathop{\mathrm{ext}}(\mathcal{P}) \subseteq \mathop{\mathrm{cl}}\mathop{\mathrm{ext}}\mathbfcal{M} .$$ Let $\mathcal{P}= (P_1, \ldots, P_{n-1}) \in\prod_{i=1}^{n-1}\mathop{\mathrm{supp}}\mu_i$. 
We claim that for all $u \in \mathbb{S}^{n-1}$ and $i \in [n-1]$, $$\begin{aligned} \label{eq:scp1} \mathop{\mathrm{TS}}(P_i, u) \subseteq \mathop{\mathrm{TS}}(M_i, u) ,\end{aligned}$$ which would imply $\mathop{\mathrm{ext}}\mathcal{P}\subseteq \mathop{\mathrm{ext}}\mathbfcal{M}$ by Lemma [Lemma 21](#thm:critSimple){reference-type="ref" reference="thm:critSimple"} (6) and conclude the proof (for zonoids, compare [\[eq:scp1\]](#eq:scp1){reference-type="eqref" reference="eq:scp1"} with [@Schneider1988 Lem. 3.2]). Set $W \coloneqq \mathop{\mathrm{TS}}(M_i, u)^\perp$ and note that $u \in W$. Then by Lemma [Lemma 30](#thm:tc3){reference-type="ref" reference="thm:tc3"}, relation [\[eq:scp1\]](#eq:scp1){reference-type="eqref" reference="eq:scp1"} is equivalent to $\mathop{\mathrm{TS}}_W(\pi_W(P_i), u) \subseteq \mathop{\mathrm{TS}}_W(\pi_W(M_i), u) = \set*{0}$. Now $\pi_W(P_i)$ is in the support of the generating measure of $\pi_W(M_i)$ by Lemma [Lemma 31](#thm:projVertoid){reference-type="ref" reference="thm:projVertoid"} (here we only need the inclusion which holds for general macroids). Together with $\mathop{\mathrm{TS}}_W(\pi_W(M_i), u) = \set*{0}$ and Lemmas [Lemma 35](#thm:tc4){reference-type="ref" reference="thm:tc4"} and [Lemma 39](#thm:tc5){reference-type="ref" reference="thm:tc5"}, this implies $\mathop{\mathrm{TS}}_W(\pi_W(P_i), u) = \set*{0}$ and relation [\[eq:scp1\]](#eq:scp1){reference-type="eqref" reference="eq:scp1"}. "$\supseteq$": The proof of this inclusion is by induction on $n$. The case $n=1$ follows from Remark [Remark 9](#rem:empty){reference-type="ref" reference="rem:empty"} and the fact that the empty tuple is semicritical, rendering every $u \in S^0$ extreme. Assume $n \ge 2$ and that the claim is true for smaller $n$. Let $u\in \mathop{\mathrm{ext}}\mathbfcal{M}$ be given. 
The linear subspaces $\mathop{\mathrm{TS}}(M_i, u)\subseteq u^\perp$, $i \in [n-1]$, form a semicritical tuple since $u\in \mathop{\mathrm{ext}}\mathbfcal{M}$, in particular $\mathop{\mathrm{TS}}(M_i, u)\neq\{0\}$. Then the linear subspaces $\mathop{\mathrm{span}}{\Re(M_i,u)}\subseteq u^\perp$, which were defined in [\[thm:pruningLemma\]](#thm:pruningLemma){reference-type="ref" reference="thm:pruningLemma"}, are nontrivial and $\{0\}\subset \Re(M_i,u)$ by Lemma [Lemma 49](#thm:defaultPolytope){reference-type="ref" reference="thm:defaultPolytope"}. Define $$\begin{aligned} \mathbfcal{D} \coloneqq&\ (\mathop{\mathrm{span}}\Re(M_1,u), \ldots, \mathop{\mathrm{span}}\Re(M_{n-1},u))\\ =&\ (\mathop{\mathrm{TS}}(\Re(M_1,u), u), \ldots, \mathop{\mathrm{TS}}(\Re(M_{n-1},u), u)) ,\end{aligned}$$ where the equality follows from Lemma [Lemma 8](#thm:touchingConePoly){reference-type="ref" reference="thm:touchingConePoly"}, since $\Re(M_i,u)$ are polytopes with $0\in \Re(M_i,u)\subseteq u^\perp$ so that $F(\Re(M_i,u),u)=\Re(M_i,u)$, for $i\in [n-1]$. According to Lemma [Lemma 50](#thm:critSwitching){reference-type="ref" reference="thm:critSwitching"}, there are index sets $\varnothing \ne I \subseteq J \subseteq [n-1]$ such that $\mathbfcal{D}_I$ spans an $\abs{I}$-dimensional subspace $E$ and $\mathbfcal{D}_J + \mathop{\mathrm{TS}}(\mathbfcal{M}_{J^\mathsf{c}}, u)$ is semicritical. We now interpret the $k$-tope $\Re(M_i,u)$, where $i \in J$, as a $k$-polyoid with generating Dirac measure $\delta_{\Re(M_i,u)}$ and define $$\mathbfcal{M}' \coloneqq (M_1', \ldots, M_{n-1}'), \quad \text{where } M_i' \coloneqq \begin{cases} \Re(M_i,u),& i \in J, \\ M_i,& i \notin J. 
\end{cases}$$ It now suffices to prove that $u \in \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\mathbfcal{M}')$: Using that $\Re(M_j,u) = M'_j$ ($j \in J$), repeated applications of [\[thm:pruningLemma\]](#thm:pruningLemma){reference-type="ref" reference="thm:pruningLemma"} show that if $u \in \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\mathbfcal{M}')$, then $u \in \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\mathbfcal{M})$. Clearly, $\mathbfcal{M}'$ is also a tuple of $k$-polyoids and $\mathop{\mathrm{TS}}( \mathbfcal{M}', u)$ is semicritical because $\mathbfcal{D}_J + \mathop{\mathrm{TS}}(\mathbfcal{M}_{J^\mathsf{c}}, u)$ is a semicritical permutation. Furthermore, all spaces in $\mathop{\mathrm{TS}}(\mathbfcal{M}'_I, u) = \mathbfcal{D}_I$ are subspaces of $E$. Lemmas [Lemma 22](#thm:critRed){reference-type="ref" reference="thm:critRed"} and [Lemma 30](#thm:tc3){reference-type="ref" reference="thm:tc3"} now imply that $\pi_{E^\perp}\mathop{\mathrm{TS}}(\mathbfcal{M}'_{I^\mathsf{c}}, u) = \mathop{\mathrm{TS}}_{E^\perp}(\pi_{E^\perp}\mathbfcal{M}'_{I^\mathsf{c}}, u)$ is also a semicritical tuple, that is, we have $u\in \mathop{\mathrm{ext}}\pi_{E^\perp}(\mathbfcal{M}')_{I^\mathsf{c}}$. There is an inner product space isomorphism $E^\perp \cong \mathbb{R}^{\dim E^\perp}$. Using this isomorphism and $\dim E^\perp = n - \abs{I} \in [1, n-1]$, we can apply the inductive hypothesis to $u \in E^\perp$ and the tuple $\pi_{E^\perp}(\mathbfcal{M}')_{I^\mathsf{c}}$ of $k$-polyoids in $E^\perp$ and thus conclude from $u\in \mathop{\mathrm{ext}}\pi_{E^\perp}(\mathbfcal{M}')_{I^\mathsf{c}}$ that $$\begin{aligned} \label{eq:mp1} u \in \mathop{\mathrm{supp}}\mathop{\mathrm{S}}_{E^\perp}(\pi_{E^\perp}(\mathbfcal{M}')_{I^\mathsf{c}}) .\end{aligned}$$ On the other hand, $\mathbfcal{M}_I'$ consists of $k$-topes in $E$. 
So Lemma [\[thm:maRed\]](#thm:maRed){reference-type="ref" reference="thm:maRed"} yields $$\begin{aligned} \label{eq:mp2} \binom{n-1}{\abs{I}} \mathop{\mathrm{S}}(\mathbfcal{M}') = \mathop{\mathrm{V}}(\mathbfcal{M}'_I) \cdot \mathop{\mathrm{S}}_{E^\perp}'(\pi_{E^\perp}(\mathbfcal{M}'_{I^\mathsf{c}})) .\end{aligned}$$ Because $(\mathop{\mathrm{span}}M'_i)_{i\in I} = \mathbfcal{D}_I$ is a subtuple of the semicritical tuple $\mathbfcal{D}_J + \mathop{\mathrm{TS}}(\mathbfcal{M}_{J^\mathsf{c}}, u)$, since $I\subseteq J$, it follows from Lemma [Lemma 19](#thm:mvVanish){reference-type="ref" reference="thm:mvVanish"} that $\mathop{\mathrm{V}}(\mathbfcal{M}'_I) > 0$ and we conclude with relations [\[eq:mp1\]](#eq:mp1){reference-type="eqref" reference="eq:mp1"} and [\[eq:mp2\]](#eq:mp2){reference-type="eqref" reference="eq:mp2"} that $$u \in \mathop{\mathrm{supp}}\mathop{\mathrm{S}}_{E^\perp}'(\pi_{E^\perp}(\mathbfcal{M}')_{I^\mathsf{c}}) = \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\mathbfcal{M}')$$ and, as noted previously, therefore $u \in \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\mathbfcal{M})$. ◻

Finally, we note the following more general result which is implied by the preceding proof.

**Proposition 51**. *Let $\mathbfcal{C}= (C_1, \ldots, C_{n-1})$ be an $(n-1)$-tuple of macroids (or smooth convex bodies provided at least one of the bodies $C_i$ is smooth and strictly convex) in $\mathbb{R}^n$. Then $$\label{eq:key2b} \mathop{\mathrm{supp}}\mathop{\mathrm{S}}(\mathbfcal{C},\cdot) \subseteq \mathop{\mathrm{cl}}\mathop{\mathrm{ext}}\mathbfcal{C} .$$*

**Acknowledgements.** D. Hug was supported by DFG research grant HU 1874/5-1 (SPP 2265).

# References

A. D. Alexandrov. *A. D. Alexandrov Selected Works. Part I.* Ed. by Yu. G. Reshetnyak and S. Kutateladze. Vol. 4. Classics of Soviet Mathematics. 1st edition. London, 1996. Chap. "To the theory of mixed volumes of convex bodies. Part II: New inequalities for mixed volumes and their applications."
Dario Cordero-Erausquin, Bo'az Klartag, Quentin Merigot, and Filippo Santambrogio. One more proof of the Alexandrov-Fenchel inequality. C. R. Math. Acad. Sci. Paris 357 (2019), no. 8, 676--680.

Daniel Hug and Paul A. Reichert. Extremizers of the Alexandrov--Fenchel inequality within a new class of convex bodies. ArXiv. Submit/5141154.

Daniel Hug and Wolfgang Weil. Lectures on Convex Geometry. Graduate Texts in Mathematics, Vol. 286. Springer, Cham, 2020, pp. xviii+287.

Rolf Schneider. "On the Aleksandrov-Fenchel inequality". In: Discrete geometry and convexity (New York, 1982). Ann. New York Acad. Sci., Vol. 440. New York Acad. Sci., New York, 1985, pp. 132--141.

Rolf Schneider. On the Aleksandrov-Fenchel inequality involving zonoids. Geom. Dedicata 27 (1988), no. 1, 113--126.

Rolf Schneider. Convex Bodies: The Brunn-Minkowski Theory. Second expanded edition. Encyclopedia of Mathematics and its Applications, Vol. 151. Cambridge University Press, Cambridge, 2014, pp. xxii+736.

Yair Shenfeld and Ramon van Handel. Mixed volumes and the Bochner method. Proc. Amer. Math. Soc. 147 (2019), no. 12, 5385--5402.

Yair Shenfeld and Ramon van Handel. The extremals of Minkowski's quadratic inequality. Duke Math. J. 171 (2022), no. 4, 957--1027.

Yair Shenfeld and Ramon van Handel. The extremals of the Alexandrov--Fenchel inequality for convex polytopes. ArXiv:2011.04059. (To appear in Acta Mathematica.)

Yu Wang. A remark on the Alexandrov-Fenchel inequality. J. Funct. Anal. 274 (2018), no. 7, 2061--2088.

Authors' addresses:

Daniel Hug, Karlsruhe Institute of Technology (KIT), Institute of Stochastics, D-76128 Karlsruhe, Germany, daniel.hug\@kit.edu

Paul Reichert, Karlsruhe, Germany, math\@paulr.de
--- abstract: | We consider a ranking problem where we have noisy observations from a matrix with isotonic columns whose rows have been permuted by some permutation $\pi^*$. This encompasses many models, including crowd-labeling and ranking in tournaments by pair-wise comparisons. In this work, we provide an optimal and polynomial-time procedure for recovering $\pi^*$, settling an open problem in [@flammarion2019optimal]. As a byproduct, our procedure is used to improve the state-of-the-art for ranking problems in the stochastically transitive model (SST). Our approach is based on iterative pairwise comparisons by suitable data-driven weighted means of the columns. These weights are built using a combination of spectral methods with new dimension-reduction techniques. In order to deal with the important case of missing data, we establish a new concentration inequality for sparse and centered rectangular Wishart-type matrices. author: - Emmanuel Pilliat, Alexandra Carpentier, and Nicolas Verzelen bibliography: - biblio.bib title: Optimal rates for ranking a permuted isotonic matrix in polynomial time --- # Introduction Ranking problems have recently spurred a lot of interest in the statistical and computer science literature. This includes a variety of problems ranging from ranking experts/workers in crowd-sourced data to ranking players in a tournament or, equivalently, sorting objects based on pairwise comparisons. To fix ideas, let us consider a problem where we have noisy partial observations from an unknown matrix $M\in [0,1]^{n\times d}$. In crowdsourcing problems, $n$ stands for the number of experts (or workers), $d$ stands for the number of questions (or tasks) and $M_{i,k}$ for the probability that expert $i$ answers question $k$ correctly. For tournament problems, we have $n=d$ players (or objects) and $M_{i,k}$ stands for the probability that player $i$ wins against player $k$.
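As a concrete illustration (not taken from the paper), such a win-probability matrix can be generated from player skills via the Bradley-Terry-Luce parametrization $M_{i,k} = w_i/(w_i + w_k)$, for positive skills $w_i$; the short NumPy sketch below checks the two structural properties that matter later, skew symmetry and column monotonicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative skill-based win-probability matrix: M[i, k] = w_i / (w_i + w_k)
# is the probability that player i beats player k.
n = 6
w = np.sort(rng.uniform(0.1, 1.0, size=n))  # skills, sorted increasingly
M = w[:, None] / (w[:, None] + w[None, :])

# Entries are probabilities, and M[i, k] + M[k, i] = 1 for all pairs.
assert np.all((M >= 0) & (M <= 1))
assert np.allclose(M + M.T, 1.0)

# With skills sorted increasingly, every column of M is nondecreasing in i.
assert np.all(np.diff(M, axis=0) >= 0)
```

The last assertion is the property that this paper isolates: with rows ordered by skill, every column of $M$ is nondecreasing.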
Based on these noisy data, the general goal is to provide a full ranking of the experts or of the players. Originally, these problems were tackled using parametric models for the matrix $M$. Notably, this includes the noisy sorting model [@braverman2008noisy] or the Bradley-Terry-Luce model [@bradley1952rank]. Still, it has been observed that these simple models are often unrealistic and do not tend to fit real data well. This has spurred a recent line of literature where strong parametric assumptions are replaced by non-parametric assumptions [@shah2015estimation; @shah2016stochastically; @shah2019feeling; @shah2020permutation; @mao2020towards; @mao2018breaking; @liu2020better; @flammarion2019optimal; @bengs2021preference; @saad2023active]. In particular, for tournament problems, the strong stochastically transitive (SST) model presumes that the square matrix $M$ is, up to a common permutation $\pi^*$ of the rows and of the columns, bi-isotonic and satisfies the skew symmetry condition $M_{i,k}+M_{k,i}=1$. Although optimal rates for estimation of the permutation $\pi^*$ have been pinpointed in the earlier paper of Shah et al. [@shah2016stochastically], there remains a large gap between these optimal rates and the best known performances of polynomial-time algorithms. This has led to the conjecture that a statistical-computational gap exists [@mao2020towards; @liu2020better]. For crowdsourcing data, the counterpart of the SST model is the so-called bi-isotonic model, where the rectangular matrix $M$ is bi-isotonic, up to an unknown permutation $\pi^*$ of its rows and an unknown permutation $\eta^*$ of its columns. This model turns out to be very similar to the SST model, and the existence of a statistical-computational gap has also been conjectured [@mao2020towards]. In this paper, we take a slightly different route and consider the arguably more general isotonic model [@flammarion2019optimal].
The only assumption is that all the columns of $M$ are nondecreasing up to an unknown permutation of the rows, making the isotonic model more flexible than the bi-isotonic and SST models. It is in fact the most general model under which an unambiguous ranking of the experts is well-defined. In this model as well, there is a gap between the (statistical) optimal rates and the rate obtained by the (polynomial-time) algorithm in [@flammarion2019optimal]. Our main contributions are as follows. For the isotonic model, we establish the optimal rate for recovering the permutation, and we introduce a polynomial-time procedure achieving this rate, thereby showing that there is no statistical-computational gap in this model. Besides, our procedure and results have important consequences when applied to the SST and bi-isotonic models. More specifically, we achieve the best known guarantees in these two models [@liu2020better; @mao2018breaking] and even improve them in some regimes. ## Problem formulation {#ss:pf} Let us further introduce our model. A bounded matrix $A\in [0,1]^{n\times d}$ is said to be isotonic if its columns are nondecreasing, that is, $A_{i,k}\leq A_{i+1,k}$ for any $i\in [n-1]$ and $k \in [d]$. Henceforth, we write $\mathbb{C}_{\mathrm{iso}}$ for the collection of all $n\times d$ isotonic matrices taking values in $[0,1]$. Recall that in our model we assume that the signal matrix $M$ is isotonic up to an unknown permutation of its rows. In other words, there exists a permutation $\pi^*$ of $[n]$ such that the matrix $M_{\pi^{*-1}}$ defined by $(M_{\pi^{*-1}})_{i,k}= (M_{\pi^{*-1}(i),k})$ has nondecreasing columns, that is $$\label{eq:assumption_isotonic} M_{\pi^{*-1}(i),k} \leq M_{\pi^{*-1}(i+1),k} \enspace ,$$ for any $i \in \{1, \dots, n-1\}$ and $k \in \{1, \dots, d\}$, or equivalently $M_{\pi^{*-1}} \in \mathbb{C}_{\mathrm{iso}}$. Henceforth, $\pi^*$ is called an oracle permutation.
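The model assumption can be made concrete with a small sketch (illustrative code, not part of the paper): build an isotonic matrix $A\in \mathbb{C}_{\mathrm{iso}}$, permute its rows by a random $\pi^*$, and check that un-permuting with $\pi^{*-1}$ recovers nondecreasing columns.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 8, 5

# An isotonic matrix A in C_iso: sort each column of a uniform sample.
A = np.sort(rng.uniform(0.0, 1.0, size=(n, d)), axis=0)
assert np.all(np.diff(A, axis=0) >= 0)

# Hide the row order: M[i] = A[pi[i]] for a random permutation pi = pi*.
pi = rng.permutation(n)
M = A[pi]

# Applying the inverse permutation recovers the isotonic matrix, which is
# exactly the assumption M_{pi*^{-1}} in C_iso.
inv = np.argsort(pi)            # inv encodes pi*^{-1}
assert np.array_equal(M[inv], A)
assert np.all(np.diff(M[inv], axis=0) >= 0)
```

Note the standard trick that `np.argsort` of a permutation array yields its inverse.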
Using the terminology of crowdsourcing, we refer to the $i^{\mathrm{th}}$ row of $M$ as *expert* $i$ and to the $k^{\mathrm{th}}$ column as *question* $k$. In this work, we have $N$ partial and noisy observations of the matrix $M$ of the form $(x_t,y_t)$ where $$\label{eq:model_partial} y_t = M_{x_t} + \varepsilon_t \quad t=1,\ldots, N \enspace .$$ For each $t$, the position $x_t\in [n]\times [d]$ is sampled uniformly. The noise variables $\varepsilon_t$ are independent and their distributions only depend on the position $x_t$. We only assume that all these distributions are centered and are subGaussian with a subGaussian norm of at most $1$ -- see e.g. [@wainwright2019high]. In particular, this encompasses the typical case where the $y_t$'s follow Bernoulli distributions with parameters $M_{x_t}$. As usual in the literature e.g. [@pilliat2022optimal; @liu2020better; @mao2020towards], we use, for technical convenience, the Poissonization trick which amounts to assuming that the number $N$ of observations has been sampled according to a Poisson distribution with parameter $\lambda n d$. We refer to $\lambda>0$ as the sampling effort. When $\lambda > 1$, we have, in expectation, several independent observations per entry $(i,j)$ -- and $\lambda=1$ means that there is on average one observation per entry. In this paper, we are especially interested in the sparse case where $\lambda$ is much smaller than one, i.e. the case where we have missing observations for some entries. We refer to $\lambda=1$ as the full observation regime as it bears some similarity to the case often considered in the literature --e.g. [@shah2016stochastically; @flammarion2019optimal], where we have a full observation of the matrix, $$\label{eq:model_0} Y= M + E' \in \mathbb{R}^{n\times d}\ .$$ The entries of the noise matrix $E'$ are independent, centered, and $1$-subGaussian. In this work, we are primarily interested in estimating the permutation $\pi^*$.
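The Poissonized observation scheme just described can be simulated in a few lines; the sketch below (illustrative, with Bernoulli responses as in the typical case mentioned above) draws $N \sim \mathrm{Poisson}(\lambda n d)$ observations at uniform positions.

```python
import numpy as np

def sample_observations(M, lam, rng):
    """Draw N ~ Poisson(lam * n * d) observations (x_t, y_t): the position
    x_t is uniform on [n] x [d] and y_t ~ Bernoulli(M[x_t]), so the noise
    eps_t = y_t - M[x_t] is centered and bounded, hence subGaussian."""
    n, d = M.shape
    N = rng.poisson(lam * n * d)
    rows = rng.integers(0, n, size=N)
    cols = rng.integers(0, d, size=N)
    y = rng.binomial(1, M[rows, cols]).astype(float)
    return rows, cols, y

rng = np.random.default_rng(2)
M = np.full((20, 30), 0.5)
rows, cols, y = sample_observations(M, lam=0.2, rng=rng)  # sparse regime
# In expectation there are lam * n * d = 120 observations, i.e. each of the
# 600 entries is observed lam = 0.2 times on average.
assert rows.shape == cols.shape == y.shape
```

With `lam > 1` the same sampler produces, in expectation, several independent observations per entry, matching the description above.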
Given an estimator $\hat{\pi}$, we use the squared Frobenius norm $\|M_{\hat{\pi}^{-1}} - M_{\pi^{*-1}} \|_F^2$ as the loss. This loss quantifies the distance between the matrix $M$ reordered according to the estimator $\hat{\pi}$ and the matrix $M$ sorted according to the oracle permutation $\pi^*$. This loss is explicitly used in [@liu2020better; @pilliat2022optimal] and is implicit in earlier works --see e.g. [@shah2016stochastically]. We define the associated optimal risk of permutation recovery as a function of the number $n$ of experts, the number $d$ of questions and the sampling effort $\lambda$, $$\label{eq:minimax_risk_perm} \mathcal{R}^*_{\mathrm{perm}}(n,d,\lambda)= \inf_{\hat \pi} \sup_{\stackrel{\pi^*\in \Pi_n}{M:\, M_{\pi^{*-1}}\in \mathbb{C}_{\mathrm{iso}}}} \mathbb E_{(\pi^*, M)} [\|M_{\hat \pi^{-1}} - M_{\pi^{*-1}}\|_F^2] \enspace ,$$ where the infimum is taken over all estimators. Here, $\Pi_n$ stands for the collection of all permutations of $[n]$. If the main focus is not only to estimate $\pi^*$, but also to reconstruct the unknown matrix $M$, we also consider the optimal reconstruction rate $$\label{eq:minimax_risk_est} \mathcal{R}^*_{\mathrm{reco}}(n,d,\lambda)= \inf_{\hat M}\sup_{\stackrel{\pi^*\in \Pi_n}{M:\, M_{\pi^{*-1}}\in \mathbb{C}_{\mathrm{iso}}}} \operatorname{\mathbb{E}}\left[\|\hat M - M\|_F^2\right]\ \enspace .$$ It turns out that reconstructing the matrix $M$ is more challenging than estimating the permutation $\pi^*$. Considering both risks allows us to disentangle the two sources of error: the error due to estimating the permutation and the error due to estimating an isotonic matrix. ## Past results on the isotonic model and our contributions {#ss:contributions} In the specific case where $d=1$ (a single column), our model is equivalent to uncoupled isotonic regression and is motivated by optimal transport.
Rigollet and Niles-Weed [@rigollet2019uncoupled] have established that the reconstruction error of $M$ is of the order of $n (\tfrac{\log\log(n)}{\log(n)})^2$. For the general case $d\geq 1$, Flammarion et al. [@flammarion2019optimal] have shown[^1] that the optimal reconstruction error in the full observation model [\[eq:model_0\]](#eq:model_0){reference-type="eqref" reference="eq:model_0"} is of the order of $n^{1/3}d+n$. However, the corresponding procedure is not efficient. They also introduce an efficient procedure that first estimates $\pi^*$ using a score based on row comparisons on $Y$. Unfortunately, this method only achieves a reconstruction error of the order of $n^{1/3}d + n\sqrt{d}$, which is significantly larger than the optimal one. Whether or not there is a statistical-computational gap was therefore an open problem. We prove in this work that there is no statistical-computational gap in this model. More precisely, we introduce estimators that are both polynomial-time and minimax optimal up to some polylog factors. To that end, we characterize the optimal risks $\mathcal{R}^*_{\mathrm{perm}}(n,d,\lambda)$ and $\mathcal{R}^*_{\mathrm{reco}}(n,d,\lambda)$ of permutation estimation and matrix reconstruction, for all possible numbers of experts $n\geq 1$, numbers of questions $d\geq 1$ and all sampling efforts $\lambda$, up to some polylog factors in $nd$. The table below summarizes our findings in the arguably most interesting cases[^2] $\lambda \in [1/(n\land d), 1]$.

|   | $n \leq d^{3/2}\sqrt{\lambda}$ | $d^{3/2}\sqrt{\lambda} \leq n$ |
|---|---|---|
| $\mathcal R^*_{\mathrm{perm}}$ | $n^{2/3}\sqrt{d}\lambda^{-5/6}$ | $n/\lambda$ |
| $\mathcal R^*_{\mathrm{reco}}$ | $n^{1/3}d\lambda^{-2/3}$ | $n/\lambda$ |

: Optimal rates in our model, for all possible values of $n,d$ and $\lambda \in [1/(n\land d), 1]$, up to a polylogarithmic factor in $nd$. These rates are achieved by polynomial-time estimators.
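For reference, the two regimes of the table can be encoded directly; the helper below is purely illustrative (the function name is ours) and ignores the polylogarithmic factors.

```python
def optimal_rates(n, d, lam):
    """Optimal permutation/reconstruction rates from the table above
    (up to polylog factors), for lam in [1 / min(n, d), 1]."""
    if n <= d ** 1.5 * lam ** 0.5:
        perm = n ** (2 / 3) * d ** 0.5 * lam ** (-5 / 6)
        reco = n ** (1 / 3) * d * lam ** (-2 / 3)
    else:
        perm = reco = n / lam
    return perm, reco

# Square full-observation case n = d, lam = 1: the first regime applies
# (n <= n^{3/2}) and the rates reduce to n^{7/6} and n^{4/3}.
perm, reco = optimal_rates(1000, 1000, 1.0)
assert abs(perm - 1000 ** (7 / 6)) < 1e-6 * perm
assert abs(reco - 1000 ** (4 / 3)) < 1e-6 * reco
```

The square case worked out in the comment is the one compared against the SST and bi-isotonic literature below.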
## Implication for other models and connection to the literature {#subsec:literature} As discussed earlier, the isotonic model is quite general and encompasses both the bi-isotonic model for crowdsourcing problems as well as the SST model for tournament problems. Let us first focus on the SST model which corresponds to the case where $n=d$ together with a bi-isotonicity and a skew-symmetry assumption. In the full observation scheme (related to the case $\lambda=1$) where one observes the noisy $n\times n$ matrix, Shah et al. [@shah2016stochastically] have established that the optimal rates for estimating $\pi^*$ and reconstructing the matrix $M$ are of the order of $n$. In contrast, their efficient procedure which estimates $\pi^*$ according to the row sums of $Y$ only achieves the rate of $n^{3/2}$. In more recent years, there has been a lot of effort dedicated to improving this $\sqrt{n}$ statistical-computational gap. The SST model was also generalized to partial observations by [@chatterjee2019estimation], which corresponds to $\lambda \leq 1$. They introduced an efficient procedure that targets a specific sub-class of the SST model, and that achieves a rate of order $n^{3/2}\lambda^{-1/2}$ in the worst case for matrix reconstruction. Recently, a few important contributions tackling both the bi-isotonic model and the SST model made important steps towards better understanding the statistical-computational gap. We first explain how their results translate to the SST model. Mao et al. [@mao2020towards; @mao2018breaking] introduced a polynomial-time procedure handling partial observation, achieving a rate of order $n^{5/4}\lambda^{-3/4}$ for matrix reconstruction. Nonetheless, [@mao2020towards] failed to exploit global information shared between the players/experts -- as they only compare players/experts two by two -- as pointed out by [@liu2020better].
Building upon this remark, [@liu2020better] obtained the improved rate $n^{7/6+o(1)}$ with a polynomial-time method in the case $\lambda = n^{o(1)}$. Let us turn to the more general bi-isotonic model. Here, the rectangular matrix $M \in \mathbb{R}^{n \times d}$ is bi-isotonic up to an unknown permutation $\pi^*$ of the rows and an unknown permutation $\eta^*$ of the columns. Since $M$ is not necessarily square, this model can be used in more general crowd-sourcing problems. The optimal rate for reconstruction in this model with partial observation has been established in [@mao2020towards] to be of order $\nu(n,d,\lambda):= (n\lor d)/\lambda + \sqrt{nd/\lambda}\land n^{1/3}d\lambda^{-2/3}\land d^{1/3}n\lambda^{-2/3}$ up to polylog factors, in the non-trivial regime where $\lambda \in [1/(n\land d), 1]$. However, the polynomial-time estimator provided by Mao et al. [@mao2020towards] only achieves the rate $n^{5/4}\lambda^{-3/4} + \nu(n, d, \lambda)$. In a nutshell, Mao et al. first compute column sums to give a first estimator of the permutation of the questions. Then, they compare the experts on aggregated blocks of questions, and finally compare the questions on aggregated blocks of experts. As explained in the previous paragraph for SST models, Liu and Moitra [@liu2020better] improved this rate to $n^{7/6 + o(1)}$ in the square case $(n=d)$, with a subpolynomial number of observations per entry $(\lambda = n^{o(1)})$. Their estimators of the permutations $\pi^*$, $\eta^*$ were based on hierarchical clustering and on local aggregation of high variation areas. Both [@liu2020better; @mao2020towards] made heavy use of the bi-isotonicity structure of $M$ by alternately sorting the columns and rows. As mentioned for the SST model, the order of magnitude $n^{7/6+o(1)}$ remains nevertheless suboptimal, and whether there exists an efficient algorithm achieving the optimal rate in this bi-isotonic model remains an open problem.
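The rate $\nu(n,d,\lambda)$ above can be evaluated directly; in the sketch below (the function name is ours) the wedge $\land$ is a minimum and $\lor$ a maximum, the usual convention.

```python
def nu(n, d, lam):
    """Optimal bi-isotonic reconstruction rate nu(n, d, lam) of Mao et al.,
    up to polylog factors, for lam in [1 / min(n, d), 1]."""
    return max(n, d) / lam + min(
        (n * d / lam) ** 0.5,
        n ** (1 / 3) * d * lam ** (-2 / 3),
        d ** (1 / 3) * n * lam ** (-2 / 3),
    )

# Square full-observation case n = d, lam = 1: the minimum term equals
# sqrt(n * n) = n, so nu is of order n, consistent with the optimal rate
# n / lam for the square case.
assert abs(nu(1000, 1000, 1.0) - 2000.0) < 1e-6
```
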
We now discuss the implications of our work for the bi-isotonic and SST models. First, in the full observation setting $(\lambda=1)$ and in the square case of the bi-isotonic model $(n=d)$, we reach in polynomial time the upper bound $n^{7/6}$ up to polylog factors, for both permutation estimation and matrix reconstruction. In particular, we improve the rate in [@liu2020better] by a subpolynomial factor in $n$, and we do not need a subpolynomial number of observations per entry. Moreover, since our procedure is primarily designed for the isotonic model, it does not require any shape constraint on the rows, in contrast to [@liu2020better; @mao2020towards]. Beyond the full observation regime, we provide guarantees on our estimator of $\pi^*$ for different values of $\lambda$. In particular, in , we derive an estimator of the matrix $M$ that achieves a maximum reconstruction risk $\sup_{\pi^*, \eta^*, M} \operatorname{\mathbb{E}}\left[\|\hat M - M_{\pi^{*-1}\eta^{*-1}}\|_F^2\right]$ of order at most $n^{7/6}\lambda^{-5/6}$ up to polylogs, thereby improving on the state-of-the-art polynomial-time methods in partial observation [@mao2020towards]. Lastly, we perform our analysis in the general rectangular case, giving guarantees for general values of $d$. The optimal risks and the known polynomial-time upper bounds for the isotonic, bi-isotonic (with two permutations), and SST models are summarized in . For simplicity, the table focuses on the specific case $n=d$ and $\lambda \in [1/n, 1]$.

|   | **Isotonic** | **Bi-isotonic** $(\pi^*, \eta^*)$ | **SST** |
|---|---|---|---|
| Model | $M_{\pi^{*-1}}$ has nondecreasing columns | $M_{\pi^{*-1} \eta^{*-1}}$ has nondecreasing columns and rows | $M_{\pi^{*-1} \pi^{*-1}}$ has nondecreasing columns and rows, and $M_{ik} + M_{ki} = 1$ |
| Permutation estimation, poly. time | **$n^{7/6}\lambda^{-5/6}$** \[Th [Theorem 2](#th:UB){reference-type="ref" reference="th:UB"}\] | $n^{7/6+o(1)}$ [@liu2020better] $(\lambda=n^{o(1)})$; **$n^{7/6}\lambda^{-5/6}$** \[Th [Theorem 2](#th:UB){reference-type="ref" reference="th:UB"}\] | $n^{7/6+o(1)}$ [@liu2020better] $(\lambda=n^{o(1)})$; **$n^{7/6}\lambda^{-5/6}$** \[Th [Theorem 2](#th:UB){reference-type="ref" reference="th:UB"}\] |
| Permutation estimation, optimal rate | **$n^{7/6}\lambda^{-5/6}$** \[Th [Theorem 1](#th:lower_bound_poisson){reference-type="ref" reference="th:lower_bound_poisson"}\] | $n/\lambda$ [@mao2020towards] | $n/\lambda$ [@mao2020towards] |
| Matrix reconstruction, poly. time | $n^{3/2}$ $(\lambda=1)$ [@flammarion2019optimal]; **$n^{4/3}\lambda^{-2/3}$** \[Cor [Corollary 4](#cor:ub_reco_biso){reference-type="ref" reference="cor:ub_reco_biso"}\] | $n^{7/6+o(1)}$ [@liu2020better] $(\lambda=n^{o(1)})$; $n^{5/4}\lambda^{-3/4}$ [@mao2020towards]; **$n^{7/6}\lambda^{-5/6}$** \[Cor [Corollary 5](#cor:ub_reco_biso){reference-type="ref" reference="cor:ub_reco_biso"}\] | $n^{7/6+o(1)}$ [@liu2020better] $(\lambda=n^{o(1)})$; $n^{5/4}\lambda^{-3/4}$ [@mao2020towards]; **$n^{7/6}\lambda^{-5/6}$** \[Cor [Corollary 5](#cor:ub_reco_biso){reference-type="ref" reference="cor:ub_reco_biso"}\] |
| Matrix reconstruction, optimal rate | $n^{4/3}\lambda^{-2/3}$ [@flammarion2019optimal] (also \[Prop [Proposition 3](#prop:LB_reconstruction){reference-type="ref" reference="prop:LB_reconstruction"}\]) | $n/\lambda$ [@mao2020towards] | $n/\lambda$ [@mao2020towards] |

Finally, we mention the even more specific model where the matrix $M$ is bi-isotonic up to a single permutation $\pi^*$ acting on the rows. This corresponds to the case where $\eta^*$ is known in the previous paragraph [@mao2020towards; @pilliat2022optimal; @liu2020better]. Equivalently, this also corresponds to our isotonic model with the additional assumption that all the rows are nondecreasing, that is $M_{i,k} \leq M_{i,k+1}$.
For this model, it is possible to leverage the shape constraints on the rows to build efficient and optimal estimators, for all $n$, $d$, and $\lambda$; see [@pilliat2022optimal].

## Overview of our techniques

In this work, we introduce the iterative soft ranking $(\mathbf{ISR})$ procedure, which gives an estimator $\hat \pi$ based on the observations. Informally, this method iteratively updates a weighted directed graph between experts, where the weight between any two experts quantifies the significance of their comparison. The procedure increases the weights at each step. After it stops, the final estimator is an arbitrary permutation $\hat \pi$ that agrees as well as possible with the final weighted directed graph. As mentioned in [@liu2020better], it is hopeless to use only local information between pairs of experts to obtain a rate of order $n^{7/6}$ up to polylogs, and we must exploit global information. Still, we do it in a completely different way from Liu and Moitra [@liu2020better], who build upon the bi-isotonicity of the matrix. A first main ingredient of our procedure is a new dimension reduction technique. At a high level, suppose that we have partially ranked the rows in such a way that, for a given triplet $(P, O, I)$ of subsets of $[n]$, we are already quite confident that the experts in $P$ are below those in $I$ and above those in $O$. Relying on the shape constraint of the matrix $M$, it is therefore possible to build high-probability confidence regions for the rows in $P$ based on the rows in $O$ and the rows in $I$. If, for a question $j$, the confidence region is very narrow, then all experts in $P$ take almost the same value on this column. As a consequence, this question is almost irrelevant for further comparing the experts in $P$.
In summary, our dimension reduction technique selects the set of questions for which the confidence region of $P$ is wide enough, and in that way reduces the dimension of the problem while keeping most of the relevant information. The second main ingredient, once the dimension is reduced, is a spectral method that captures global information shared between experts. That is why our procedure makes significant use of spectral methods to compute the updates of the weighted graph. Although such spectral schemes already appear in recent works [@pilliat2022optimal; @liu2020better], they are used here for updating the weights of the comparison graph rather than for performing a clustering as in [@liu2020better]. Moreover, the analysis of the spectral step in the partial observation regime $(\lambda \ll 1)$ leads to technical difficulties; see the discussion in . Related to the latter problem, we need to establish a new tail bound on sparse rectangular matrices. More specifically, for a rectangular matrix $X$ with centered independent entries that satisfy a Bernstein-type condition, we provide a high-probability control of the operator norm of $XX^T -\mathbb{E}[XX^T]$. This result, based on a non-commutative matrix Bernstein concentration inequality, may be of independent interest, e.g. for controlling the spectral properties of a sparse bipartite random graph. We state it in , independently of the rest of the paper.

## Notation

Given a vector $u$ and $p\in [1,\infty]$, we write $\|u\|_{p}$ for its $l_p$ norm. For a matrix $A$, $\|A\|_F$ and $\|A\|_{\mathrm{op}}$ stand for its Frobenius and operator norms respectively. We write $\lfloor x\rfloor$ (resp. $\lceil x\rceil$) for the largest (resp. smallest) integer smaller than (resp. larger than) or equal to $x$.
Although $M$ stands for an $n\times d$ matrix, we sometimes extend it to an infinite matrix defined for all $i \in \mathbb{Z}$, $k \in \{1, \dots, d\}$ by setting $M_{ik} = 0$ when $i \leq 0$ and $M_{ik} = 1$ when $i \geq n+1$. The corresponding infinite matrix $M_{\pi^{*-1}}$, which is obtained by permuting the $n$ original rows, is still isotonic and takes values in $[0,1]$. We shall often work with submatrices $M(P, Q)$ of $M$ that are restricted to a subset $P\subset [n]$ of rows and a subset $Q\subset [d]$ of columns. If $A$ is any matrix in $\mathbb{R}^{P \times Q}$, we write $\overline A$ for the matrix whose rows are all equal to the average row of $A$, namely $\overline A_{ik} = \frac{1}{|P|}\sum_{j \in P}A_{jk}$.

# Results

In this section, we first establish the statistical limit with a lower bound on $\mathcal{R}^*_{\mathrm{perm}}(n,d,\lambda)$. Then, we state the existence of a polynomial-time estimator that is minimax optimal up to polylog factors. More precisely, we prove that for all integers $n,d$ and $\lambda \in [1/d, 8n^2]$, the optimal rate of permutation estimation $\mathcal{R}^*_{\mathrm{perm}}$ is of the order of $$\label{eq:rho_perm} \rho_{\mathrm{perm}}(n,d, \lambda):=\frac{n^{2/3}\sqrt{d}}{\lambda^{5/6}}\bigwedge n\sqrt{\frac{d}{\lambda}}+\frac{n}{\lambda} \enspace ,$$ up to some polylog factors. As a corollary, we then establish that the optimal rate of matrix reconstruction $\mathcal{R}^*_{\mathrm{reco}}$ is of order $$\label{eq:rho_est} \rho_{\mathrm{reco}}(n,d, \lambda):=\frac{n^{1/3}d}{\lambda^{2/3}}+\frac{n}{\lambda} \enspace ,$$ up to polylog factors. We therefore establish that these two problems do not exhibit a computational-statistical gap.

## Minimax lower bound for permutation estimation {#subsec:lower_bound}

Assume that $\lambda \in [1/d, 8n^2]$ is fixed and that we are given $N\sim \mathrm{Poi}(\lambda nd)$ independent observations under model .
Namely, we observe $(x_t,y_t)_{t=1, \dots, N}$, where $x_t$ is sampled uniformly in $[n]\times [d]$ and $y_t=M_{x_t}+ \varepsilon_t$ conditionally on $x_t$. The following theorem states that $\rho_{\mathrm{perm}}$ is a lower bound on the maximum risk of permutation estimation for all $n,d$ and $\lambda \in [1/d, 8n^2]$, up to some numerical constant.

**Theorem 1**. *There exists a universal constant $c>0$ such that, for any $n\geq 2$, $d\geq 1$, and $\lambda \in [1/d, 8n^2]$, we have $$\begin{aligned} \mathcal{R}^*_{\mathrm{perm}}(n,d,\lambda) \geq c\rho_{\mathrm{perm}}(n,d,\lambda)\ . \label{eq:lower_bound_partia_observation} \end{aligned}$$*

In the proof, we show a slightly stronger result that also covers the cases $\lambda < 1/d$ and $\lambda > 8n^2$, where $\mathcal{R}^*_{\mathrm{perm}}(n,d,\lambda)$ is in fact lower bounded by quantities of order $nd$ and $n \sqrt{d}/\lambda$ respectively. For the sake of readability, we chose to omit these arguably less interesting cases in the statement of and of .

## Optimal permutation estimation {#sec:minimax_tpper}

Let us fix a quantity $\delta\in (0,1)$ that will correspond to a small probability, and introduce some notation. We write $$\label{eq:definition_phil1} \phi_{\mathrm{L}_1} = 10^4\log\left(\frac{10^2nd}{\delta}\right) \enspace .$$ Our procedure depends on a sequence of tuning parameters. For this reason, we introduce a subset $\Gamma \subset \mathbb{R}^+$, henceforth called a grid.
The grid $\Gamma$ is said to be valid if it contains a sequence $\gamma_0 \geq \dots \geq \gamma_{2\left\lfloor\log_2(n)\right\rfloor+2}$ of length $2\left\lfloor\log_2(n)\right\rfloor + 3$ such that, for all $u$, $$\label{eq:valid_gamma} \gamma_{u} - \gamma_{u+1} \geq \gamma_{2\left\lfloor\log_2(n)\right\rfloor+2} + \phi_{\mathrm{L}_1} \quad \text{ and } \quad\gamma_{2\left\lfloor\log_2(n)\right\rfloor+2} \geq \phi_{\mathrm{L}_1}\enspace .$$ In light of this definition, we could simply choose the valid grid $\Gamma=\{\phi_{\mathrm{L}_1}, 2\phi_{\mathrm{L}_1},\ldots, (2\left\lfloor\log_2(n)\right\rfloor+3)\phi_{\mathrm{L}_1}\}$, whose corresponding $\gamma_0$ is polylogarithmic. Still, for practical purposes, we consider general grids; examples of such grids are discussed in more detail in . For any valid grid $\Gamma$, we define $\bar \gamma$ as the smallest possible value of $\gamma_0$ over all sequences that satisfy [\[eq:valid_gamma\]](#eq:valid_gamma){reference-type="eqref" reference="eq:valid_gamma"}: $$\label{eq:def_gamma} \bar \gamma = \min \{\gamma ~:~ \exists (\gamma_u) \text{ satisfying \Cref{eq:valid_gamma} s.t. } \gamma_{0} = \gamma\} \enspace .$$ Our main procedure $\mathbf{ISR}$, for iterative soft ranking, will be described in detail in . Its only tuning parameters are the number of steps $T$ and the valid grid $\Gamma$.

**Theorem 2**. *There exists $C>0$ such that the following holds. Let $\lambda \in [1/d, 8n^2]$ and $\delta>0$. Assume that $\Gamma$ is a valid grid and that $T \geq 4\bar \gamma^6$ with $\bar\gamma$ defined in .
For any permutation $\pi^*\in \Pi_n$ and any matrix $M$ such that $M_{\pi^{*-1}}\in \mathbb{C}_{\mathrm{iso}}$, the estimator $\hat \pi$ from Algorithm $\mathbf{ISR}(T, \Gamma)$ defined in the next section satisfies $$\|M_{\hat \pi^{-1}} - M_{\pi^{*-1}} \|_F^2 \leq CT\bar\gamma^{6}\rho_{\mathrm{perm}}(n,d,\lambda) \enspace ,$$ with probability at least $1- 10T\delta$.*

In particular, if we suitably choose $\Gamma$ (as discussed above), $T= 4\lceil \bar \gamma^6 \rceil$, and $\delta= 1/(nd)^2$, we deduce from that $$\mathcal{R}^*_{\mathrm{perm}}(n,d,\lambda) \leq C' \log^{C'}(nd) \rho_{\mathrm{perm}}(n,d,\lambda) \enspace ,$$ for some numerical constant $C'>0$. In the case where $\lambda = n^{o(1)}$ and $n=d$, this bound achieves the order of magnitude $n^{7/6}$, which aligns with the result of Theorem 2 of Liu and Moitra [@liu2020better]. However, it is important to note that the analysis in [@liu2020better] focuses on the statistically easier bi-isotonic model, and their procedure heavily relies on the isotonicity structure imposed on the questions.

## Optimal reconstruction of the matrix $M$ {#ss:recM}

We now turn to the problem of estimating the signal matrix $M$. Obviously, reconstructing the matrix $M$ from the observations of model in is at least as hard as if we knew the permutation $\pi^*$. In this favorable situation, estimating $M$ amounts to estimating $d$ isotonic vectors from the partial and noisy observations $Y_{ik} = \frac{1}{\lambda}\sum_ty_t{\mathbf 1}_{x_t=(i,k)}$. The isotonic regression problem is already well understood, and we state the following lower bound without proof since it directly follows from [@mao2020towards] (see in particular Theorem 3.1 therein). We recall that $\rho_{\mathrm{reco}}(n,d,\lambda)$ is defined in [\[eq:rho_est\]](#eq:rho_est){reference-type="eqref" reference="eq:rho_est"}.

**Proposition 3**.
*There exists a universal constant $c>0$ such that, for any $n\geq 2$, any $d\geq 1$, and any $\lambda >0$, we have $$\begin{aligned} \mathcal{R}^*_{\mathrm{reco}}(n,d,\lambda) \geq c\rho_{\mathrm{reco}}(n,d,\lambda)\ . \label{eq:lower_bound_partia_observation} \end{aligned}$$*

In particular, since $\rho_{\mathrm{perm}}(n,d,\lambda) \ll \rho_{\mathrm{reco}}(n,d,\lambda)$ in many regimes of $n$, $d$, $\lambda$, this proposition implies that reconstructing a permuted isotonic matrix is harder than estimating the permutation, namely that $\mathcal{R}^*_{\mathrm{perm}} \ll \mathcal{R}^*_{\mathrm{reco}}$. To build an optimal estimator of $M$, we compute the estimated permutation $\hat \pi$ of and estimate an isotonic matrix based on this ordering. This approach is similar to what is done in [@mao2020towards; @pilliat2022optimal] for related problems where a bi-isotonic assumption is made. For simplicity, set the tuning parameters $T$, $\Gamma$ of so that $T = 4\left\lceil\overline \gamma^{6}\right\rceil$ and $\bar \gamma^6 \leq C'\log^{C'}(nd/\delta)$. We split the samples $y_t$ defined in into two independent sequences of samples $(y_t^{(1)})$, $(y_t^{(2)})$. First, we compute the estimator $\hat \pi$ of $\pi^*$ with the first sub-sample $(y_t^{(1)})$. Then, we define $\hat M_{\mathrm{iso}}$ as the projection of $Y^{(2)}_{\hat \pi^{-1}}$ onto the convex set of isotonic matrices, where $Y^{(2)}$ is the matrix defined by $Y^{(2)}_{ik}= \frac{1}{\lambda}\sum_ty_t^{(2)}{\mathbf 1}_{x_t^{(2)}=(i,k)}$. More precisely, set $$\hat M_{\mathrm{iso}} = \mathop{\mathrm{arg\,min}}_{\tilde M \in \mathbb{C}_{\mathrm{iso}}} \| \tilde M - Y^{(2)}_{\hat \pi^{-1}}\|_2^2 \enspace .$$ The following corollary controls the risk of $\hat M_{\mathrm{iso}}$.

**Corollary 4**. *Assume that $\lambda \in [1/d, 8n^2]$. There exists a universal constant $C''$ such that the following holds for any permutation $\pi^*\in \Pi_n$ and any matrix $M\in \mathbb{C}_{\mathrm{iso}}$.
$$\mathbb{E}[\|(\hat M_{\mathrm{iso}})_{\hat \pi} - M\|_F^2] \leq C'' \log^{C''}(nd)\rho_{\mathrm{reco}}(n,d,\lambda) \enspace .$$*

As a consequence, the polynomial-time estimator $\hat M_{\mathrm{iso}}$ achieves the optimal risk for all values of $n$ and $d$. For $\lambda = 1$, the optimal risk $\rho_{\mathrm{reco}}(n,d,1)$ is of the order of $n^{1/3}d + n$. In particular, our risk bound strictly improves over that of Flammarion et al. [@flammarion2019optimal]; for instance, their procedure achieves the estimation error $n\sqrt{d}$ for $n\geq d^{1/3}$. Their slower convergence rates are mainly due to the fact that their estimator of the permutation $\pi^*$ is suboptimal in this regime.

## Polynomial-time reconstruction in the bi-isotonic model {#subsec:reco_biso}

We now turn our attention to the problem of estimating the matrix $M$ when $M$ satisfies the additional assumption of being bi-isotonic up to unknown permutations $\pi^*$ and $\eta^*$ of its rows and columns respectively. In other words, the matrix $M_{\pi^{*-1} \eta^{*-1}}$ has nondecreasing rows and columns. As explained in the introduction, this model has attracted a lot of attention in the last decade and encompasses the SST model for tournament problems. To simplify the exposition, we focus in this section on the case $n=d$ and $\lambda \in [\tfrac{1}{n}, 1]$. Since the bi-isotonic model is a special case of the isotonic model, we could rely on the estimator $\widehat{M}_{\mathrm{iso}}$ introduced in the previous subsection. In fact, we can improve this estimation rate by relying on the bi-isotonicity of the matrix $M_{\pi^{*-1} \eta^{*-1}}$. As previously, we choose the tuning parameters of in such a way that $T =4 \left\lceil\overline \gamma^{6}\right\rceil$ and $\bar \gamma^6 \leq C'\log^{C'}(nd/\delta)$. Then, we use the following procedure: 1. Subsample the data into $3$ independent samples $(y^{(1)}_t)$, $(y^{(2)}_t)$, $(y^{(3)}_t)$. 2.
Run our procedure to obtain an estimator $\hat \pi$ of the permutation $\pi^*$ of the rows, using the first sample. 3. Run again to obtain an estimator $\hat \eta$ of the permutation $\eta^*$ of the columns, using the second sample. 4. Compute the least-squares estimator $\hat M_{\mathrm{biso}} = \mathop{\mathrm{arg\,min}}_{\tilde M \in \mathbb{C}_{\mathrm{biso}}} \| \tilde M - Y^{(3)}_{\hat \pi^{-1} \hat \eta^{-1}}\|_2^2$, where $\mathbb{C}_{\mathrm{biso}}$ is the set of all bi-isotonic matrices with entries in $[0,1]$ and $Y^{(3)}_{ik}= \frac{1}{\lambda}\sum_ty_t^{(3)}{\mathbf 1}_{x_t^{(3)}=(i,k)}$. The following corollary states that $\hat M_{\mathrm{biso}}$ achieves a reconstruction rate of order $n^{7/6}\lambda^{-5/6}$ in the bi-isotonic model.

**Corollary 5**. *Assume that $\lambda \in [1/n, 8n^2]$. There exists a universal constant $C''$ such that $$\sup_{\stackrel{\pi^*, \eta^* \in \Pi_n}{M:\, M_{\pi^{*-1} \eta^{*-1}}\in \mathbb{C}_{\mathrm{biso}}}} \operatorname{\mathbb{E}}\left[\|(\hat M_{\mathrm{biso}})_{\hat \pi \hat \eta}- M\|_F^2\right] \leq C''\log^{C''}(n)n^{7/6}\lambda^{-5/6} \enspace .$$*

Here, we have fixed $n=d$ to simplify the exposition, but we could extend the analysis to general $n$ and $d$. Our risk bound improves over the rate $n^{5/4}\lambda^{-3/4}$ of Mao et al. [@mao2020towards]. In [@liu2020better], Liu and Moitra introduced a procedure achieving the rate $n^{7/6+o(1)}$ in the specific case where $\lambda = n^{o(1)}$. In some sense, our procedure generalizes their result to arbitrary $\lambda$, while being applicable to the more general isotonic model. Still, we recall that the optimal risk (without computational constraints) for estimating the matrix $M$ is of order $n/\lambda$; see e.g. [@shah2016stochastically; @mao2020towards]. It remains an open problem to establish the existence of a computational-statistical gap or to construct a polynomial-time procedure achieving this risk in the SST and bi-isotonic models.
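The least-squares projection onto $\mathbb{C}_{\mathrm{iso}}$ used for $\hat M_{\mathrm{iso}}$ (unlike the bi-isotonic projection of step 4, which couples rows and columns) decomposes column by column, so it reduces to one pool-adjacent-violators (PAVA) pass per column, followed by clipping to $[0,1]$; clipping a monotone vector preserves monotonicity, a standard fact for box-constrained isotonic regression. A minimal numpy sketch, with function names of our own choosing:

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: L2 projection of y onto nondecreasing vectors."""
    sums, counts = [], []          # stack of blocks, stored as (sum, count)
    for v in y:
        s, c = float(v), 1
        # merge while the new block's mean falls below the previous block's mean
        while sums and sums[-1] / counts[-1] > s / c:
            s += sums.pop()
            c += counts.pop()
        sums.append(s)
        counts.append(c)
    return np.concatenate([np.full(c, s / c) for s, c in zip(sums, counts)])

def project_iso(Y):
    """Column-wise isotonic projection of Y, with entries clipped to [0, 1]."""
    cols = [pava(Y[:, k]) for k in range(Y.shape[1])]
    return np.clip(np.column_stack(cols), 0.0, 1.0)
```

Each PAVA pass runs in linear time in the number of rows, so the whole projection costs $O(nd)$.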
# Description of the $\mathbf{ISR}$ procedure {#sec:algo_sketch}

## Weighted directed graph $\mathcal{W}$ and estimator $\hat \pi$ {#subsec:weigthed_graph_permutation}

Our approach involves the iterative construction of a weighted directed graph $\mathcal{W}$, represented by an antisymmetric matrix in $\mathbb{R}^{n \times n}$. More formally, for any experts $i$, $j$ in $[n]$, we have $\mathcal{W}(i, j) = -\mathcal{W}(j, i)$. In a nutshell, $\mathcal{W}(i, j)$ quantifies our evidence on the comparison between expert $i$ and expert $j$. If $\mathcal{W}(i, j)$ is large and positive (resp. negative), we are confident that expert $i$ is above (resp. below) expert $j$. Most of the procedure is dedicated to the construction of $\mathcal{W}$. Before this, let us explain how we deduce our estimator $\hat \pi$ from $\mathcal{W}$. For a given weighted directed graph $\mathcal{W}$, we define its corresponding directed graph at threshold $\gamma > 0$ as $$\label{eq:graph_from_weighted} \mathcal{G}(\mathcal{W}, \gamma) = \{(i,j) \in [n]^2 ~:~ \mathcal{W}(i,j) > \gamma \} \enspace .$$ For any thresholds $\gamma < \gamma'$, it holds that $\mathcal{G}(\mathcal{W}, \gamma') \subset \mathcal{G}(\mathcal{W}, \gamma)$. In other words, the function $\gamma \mapsto \mathcal{G}(\mathcal{W}, \gamma)$ is nonincreasing. When $\gamma \geq \max_{i,j}|\mathcal{W}(i,j)|$, $\mathcal{G}(\mathcal{W}, \gamma) = \emptyset$ is the trivial graph with no edges. Let $\hat \gamma$ be the smallest threshold $\gamma$ such that $\mathcal{G}(\mathcal{W}, \gamma)$ is a directed acyclic graph ($\mathbf{DAG}$). By monotonicity, $\mathcal{G}(\mathcal{W}, \hat \gamma)$ is also the largest $\mathbf{DAG}$ among $\{\mathcal{G}(\mathcal{W}, \gamma), \gamma \geq \hat \gamma \}$.
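For illustration, the thresholded graph, the acyclicity test, and the search for the smallest acyclic threshold $\hat\gamma$ can be sketched as follows (a toy implementation with names of our own; by the monotonicity of $\gamma \mapsto \mathcal{G}(\mathcal{W},\gamma)$ it suffices to scan the entries of $|\mathcal{W}|$ in increasing order, testing acyclicity with Kahn's algorithm):

```python
import numpy as np
from collections import deque

def graph_at(W, gamma):
    """Edges (i, j) with W[i, j] > gamma."""
    n = len(W)
    return [(i, j) for i in range(n) for j in range(n) if W[i, j] > gamma]

def is_dag(n, edges):
    """Kahn's algorithm: True iff the directed graph on [n] is acyclic."""
    indeg, adj = [0] * n, [[] for _ in range(n)]
    for i, j in edges:
        adj[i].append(j)
        indeg[j] += 1
    queue = deque(v for v in range(n) if indeg[v] == 0)
    seen = 0
    while queue:
        v = queue.popleft()
        seen += 1
        for u in adj[v]:
            indeg[u] -= 1
            if indeg[u] == 0:
                queue.append(u)
    return seen == n  # all vertices removed iff no cycle

def smallest_acyclic_threshold(W):
    """Smallest candidate gamma (an entry of |W|) whose thresholded
    graph is a DAG; the scan always terminates since the empty graph
    (at gamma = max |W|) is trivially acyclic."""
    for gamma in sorted(set(abs(W).ravel())):
        if is_dag(len(W), graph_at(W, gamma)):
            return gamma
```

On an antisymmetric $\mathcal{W}$ whose positive edges form a 3-cycle at low thresholds, the scan returns the first threshold that breaks the cycle.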
We then build the estimator $\hat \pi$ by picking any permutation that is consistent with the graph $\widehat \mathcal{G}:= \mathcal{G}(\mathcal{W}, \hat \gamma)$, that is, $\hat \pi(i) \geq \hat \pi(j)$ whenever $(i,j) \in \widehat \mathcal{G}\cap [n]^2$. To put it another way, the general idea of our procedure can be summarized in three components:

1. Construct a weighted directed graph $\mathcal{W}$ between the experts.
2. Compute the largest directed acyclic graph $\widehat \mathcal{G}$ of $\mathcal{W}$.
3. Take any arbitrary permutation $\hat \pi$ that is consistent with $\widehat \mathcal{G}$.

The construction of $\mathcal{W}$ is at the core of this paper, and the computation of $\widehat \mathcal{G}$ and $\hat \pi$ will be discussed in . Still, we already point out that the third step can be handled in polynomial time using Mirsky's algorithm [@mirsky1971dual].

## Construction of $\mathcal{W}$ with $\mathbf{ISR}$

### Description of the subsampling {#subsec:subsample}

Let us now describe the construction of the weighted directed graph $\mathcal{W}$. Let $T \geq 1$ be an arbitrary integer, representing the number of steps of our procedure. In what follows, we explain how we subsample the data from into $5T$ independent matrices $(Y^{(s)})_{s=0,\dots, 5T-1}$. Recall that we are given $N$ observations $(x_t, y_t)$, where $N$ follows a Poisson distribution $\mathcal{P}(\lambda nd)$. Let us divide the observations into $5T$ batches $(N^{(s)})_{s = 0, \dots, 5T-1}$, aggregated into matrices of averaged observations $Y^{(s)}$. To that end, we let $S_u$ be i.i.d.
uniform random variables in $\{0, \dots, 5T - 1\}$ representing a random batch for observation $u$, and we define $$\label{eq:batches} N^{(s)} = \{u \in \{1, \dots, N\} ~:~ S_u = s \} \quad \text{ and } \quad Y^{(s)}_{ik} = \sum_{t \in N^{(s)}} \tfrac{y_t}{\mathbf{r}^{(s)}_{ik}\lor 1}{\mathbf 1}\{x_t = (i,k)\} \enspace ,$$ where, for any $(i,k) \in [n] \times [d]$, $\mathbf{r}^{(s)}_{ik} = \sum_{t \in N^{(s)}}{\mathbf 1}\{x_{t} = (i,k)\}$ is the number of times the position $(i,k)$ is observed in batch $s$. In other words, $Y^{(s)}_{ik}$ is equal to $0$ if $(i,k)$ is not observed in batch $s$, and is equal to the average of the observations $y_t$ for which $x_t = (i,k)$ otherwise. We also define the mask matrix $B^{(s)}$ as being equal to $0$ at location $(i,k)$ if the value is missing from batch $s$, and to $1$ otherwise: $$\label{eq:def_matrix_B} B^{(s)}_{ik} = {\mathbf 1}\{\mathbf{r}^{(s)}_{ik} \geq 1\}\enspace .$$ Define $\lambda_0 = \lambda/(5T)$. In our sampling scheme, where the data is divided into $5T$ samples, each coefficient $B^{(s)}_{ik}$ has probability $1 - e^{-\lambda_0}$ of being equal to one. It is worth mentioning that a different subsampling scheme was used in [@pilliat2022optimal], consisting in aggregating consecutive columns. However, such a scheme is not applicable in our case since we do not assume the rows of $M$ to be nondecreasing, unlike in [@pilliat2022optimal].

### Neighborhoods in comparison graphs

At each step $t = 0, \dots, T-1$ of the procedure, we aim to enrich our knowledge of the order of the experts, which we formally do by increasing the weights of $\mathcal{W}$ in absolute value. At $t=0$, we start with all the weights $\mathcal{W}_{ij}$ equal to zero. A meaningful update of $\mathcal{W}$ around a reference expert $i$ can be done when we restrict ourselves to experts that are in a neighborhood of $i$.
Broadly speaking, a neighborhood of $i$ is a set made of all the experts $j$ that are not comparable to $i$ with respect to a given partial order. More precisely, for any directed graph $\mathcal{G}$ and any experts $i,j \in \{1, \dots, n \}$, we say that $i$ and $j$ are $\mathcal{G}$-comparable if there is a path from $i$ to $j$ or from $j$ to $i$ in $\mathcal{G}$. The neighborhood $\mathcal{N}(\mathcal{G}, i)$ of $i$ in $\mathcal{G}$ can then naturally be defined as the set of experts $j$ that are not $\mathcal{G}$-comparable with $i$. Equipped with the concept of neighborhood, our overall strategy involves iterating over all possible thresholds $\gamma \in \Gamma$ such that $\mathcal{G}(\mathcal{W}, \gamma)$ is acyclic, as well as over all possible experts $i$. At each iteration, we apply the soft local ranking procedure described in the next subsection, which updates the weights between $i$ and any expert $j$ in the neighborhood $\mathcal{N}(\mathcal{G}(\mathcal{W}, \gamma), i)$ of $i$. Our approach can be summarized as follows:

1. Subsample the data; see .
2. Initialize $\mathcal{W}$ to be the weighted directed graph with all weights set to $0$.
3. For all $t= 0, \dots, T-1$, all $\gamma \in \Gamma$ such that $\mathcal{G}(\mathcal{W}, \gamma)$ is acyclic, and all $i \in [n]$, update $\mathcal{W}$ with the soft local ranking procedure .
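Step 1 above, the subsampling into averaged matrices $Y^{(s)}$ and masks $B^{(s)}$, might look as follows in Python (a numpy-based sketch; the function name is ours):

```python
import numpy as np

def subsample(x, y, n, d, T, rng):
    """Split observations (x_t, y_t) into 5T random batches and form, for
    each batch s, the matrix Y^(s) of per-entry averages (0 where the
    entry is unobserved) and the observation mask B^(s)."""
    S = rng.integers(0, 5 * T, size=len(x))  # i.i.d. uniform batch labels S_u
    sums = np.zeros((5 * T, n, d))
    counts = np.zeros((5 * T, n, d))         # the counts r^(s)_{ik}
    for (i, k), y_t, s in zip(x, y, S):
        sums[s, i, k] += y_t
        counts[s, i, k] += 1
    B = (counts >= 1).astype(float)          # mask matrices B^(s)
    Y = sums / np.maximum(counts, 1)         # divide by r^(s)_{ik} v 1
    return Y, B

rng = np.random.default_rng(0)
# toy run: 3 observations on a 2 x 3 matrix, T = 1 (hence 5 batches)
Y, B = subsample([(0, 0), (0, 0), (1, 2)], [1.0, 3.0, 0.5], 2, 3, 1, rng)
```

With a Poissonized number of observations, each mask entry $B^{(s)}_{ik}$ is one with probability $1 - e^{-\lambda_0}$, as noted in the previous subsection.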
**Algorithm** $\mathbf{ISR}(T, \Gamma)$.

*Input:* $N$ and observations $(x_t, y_t)_{t = 1, \dots, N}$ according to , a number of steps $T$, and a valid grid $\Gamma$ as in .
*Output:* a weighted graph $\mathcal{W}$ and an estimator $\hat{\pi}$.

1. Aggregate the observations into $5T$ matrices of observations $(Y^{(s)})$ as in .
2. Initialize $\mathcal{W}(i,j) = 0$ for all $(i,j) \in [n]^2$, and $\hat \gamma = 0$.
3. Compute $\mathcal{G}= \mathcal{G}(\mathcal{W}, \gamma)$, the directed graph at threshold $\gamma$ of $\mathcal{W}$ as in , and set $P = \mathcal{N}(\mathcal{G}, i)$.[\[line:compute_directed_1\]]{#line:compute_directed_1 label="line:compute_directed_1"}
4. Take $5$ samples $\mathcal{Y}= (Y^{(5t)}, \dots, Y^{(5t+4)})$.[\[line:loop\]]{#line:loop label="line:loop"}
5. Apply $\mathbf{SLR}(\mathcal{Y}, \mathcal{W}, \gamma, i, \mathcal{G}, P)$ to update $\mathcal{W}$.[\[line:soft_cluster\]]{#line:soft_cluster label="line:soft_cluster"}
6. Set $\hat \gamma$ as the smallest $\gamma$ such that $\mathcal{G}(\mathcal{W}, \gamma)$ is acyclic.[\[line:acyclicite\]]{#line:acyclicite label="line:acyclicite"}
7. Set $\widehat \mathcal{G}= \mathcal{G}(\mathcal{W}, \hat \gamma)$, the largest $\mathbf{DAG}$ (see ).[\[line:largest_dag\]]{#line:largest_dag label="line:largest_dag"}
8. Set $\hat{\pi}$ to be any arbitrary permutation that is consistent with $\widehat \mathcal{G}$.[\[line:dag_to_estimator\]]{#line:dag_to_estimator label="line:dag_to_estimator"}

The main loop of aims to provide a soft ranking of the neighborhood $P$ of $i$ by setting positive (resp. negative) weights $\mathcal{W}_{ij}$ for experts $j\in P$ that are significantly below (resp. above) $i$. Together with restricting to $\gamma \geq \hat \gamma$, simply guarantees that all the considered graphs $\mathcal{G}$ are acyclic.
Finally, Lines [\[line:largest_dag\]](#line:largest_dag){reference-type="ref" reference="line:largest_dag"} and [\[line:dag_to_estimator\]](#line:dag_to_estimator){reference-type="ref" reference="line:dag_to_estimator"} simply correspond to the construction of the final permutation, described in the second and third points of .

## Description of the updating procedure {#subsec:description_procedure_trisection}

### Local weighted sums

Let us describe the process of updating a given weighted graph $\mathcal{W}$, which will be used twice at each call of the soft local ranking . Let us fix a weighted graph $\mathcal{W}$, an element $s \in \{0, \dots, 5T-1\}$, and $Y:=Y^{(s)}$ the matrix defined in . We also let $i \in [n]$ be an arbitrary expert corresponding to of , and $\gamma$ be any threshold in the grid $\Gamma$. We write $P := \mathcal{N}(\mathcal{G}(\mathcal{W}, \gamma),i) \subset [n]$ for the neighborhood of $i$ in $\mathcal{G}(\mathcal{W}, \gamma)$, echoing the notation of the sets that are trisected in [@pilliat2022optimal]. Since the matrix $M$ is, up to a row permutation, a column-wise isotonic matrix, it follows that, if expert $i$ is above expert $j$, then for any vector $w\in \mathbb{R}_+^d$ we have $\sum_{k=1}^d w_{k}M_{ik}\geq \sum_{k=1}^d w_{k}M_{jk}$. As a consequence, the crux of the algorithm is to find suitable data-driven weights $w$ that allow us to discriminate between the experts. As explained in the introduction, earlier works focused on uniform weights $w={\mathbf 1}_{[d]}$ [@shah2016stochastically], which unfortunately leads to suboptimal results. Before discussing the choice of the weights $w$ in the following subsections, let us first formalize how we leverage $w$ to compare the experts and update the graph $\mathcal{W}$.
Given a subset $Q \subset [d]$ of columns and a non-zero vector $w \in \mathbb{R}_+^{Q}$, we first check whether the following condition is satisfied: $$\label{eq:condition_w} \lambda_0\|w\|_2^2 \geq \|w\|^2_{\infty} \enspace ,$$ where we recall that $\lambda_0 = \lambda/(5T)$. This condition is always satisfied when $\lambda_0 \geq 1$, and it is equivalent to $\lambda_0 |Q| \geq 1$ when $w = {\mathbf 1}_Q$. Condition [\[eq:condition_w\]](#eq:condition_w){reference-type="eqref" reference="eq:condition_w"} ensures that $w$ is not too sparse, which could be harmful when many observations are missing ($\lambda_0$ small). If this condition is not satisfied, then we leave the weights of $\mathcal{W}$ unchanged. Otherwise, we define the $(Y, P, w)$-updating weights $\mathcal{U}:= \mathcal{U}(Y, P, w)$ around $i$ as $$\label{eq:updating_edges} \mathcal{U}_{i j} = \frac{1}{\sqrt{\frac{1}{\lambda_0} \land \lambda_0}} \cdot \left\langle Y_{i \cdot} - Y_{j \cdot}, \frac{w}{\|w\|_2}\right\rangle \enspace ,$$ where, for all $w' \in \mathbb{R}^Q$ and $a \in \mathbb{R}^{d}$, we write $\langle a, w'\rangle= \sum_{k \in Q}a_kw'_k$. We can then update the weighted directed graph around $i$ by setting, for all $j \in P$ such that $|\mathcal{U}_{i j}| \geq |\mathcal{W}_{i j}|$, $$\label{eq:update_W} \mathcal{W}_{i j} = \mathcal{U}_{i j} \quad \text{ and } \quad\mathcal{W}_{j i} = -\mathcal{U}_{i j}\enspace .$$ As explained above, if we replace $Y_{i \cdot}$ and $Y_{j \cdot}$ by $M_{i \cdot}$ and $M_{j \cdot}$ respectively in [\[eq:updating_edges\]](#eq:updating_edges){reference-type="eqref" reference="eq:updating_edges"}, then the corresponding value of the statistic is non-negative whenever expert $i$ is above expert $j$. Hence, a large value of $\mathcal{U}_{i j}$ provides evidence that $i$ is above $j$. Computing $\mathcal{U}(Y, P, w)$ for suitable directions $w$ is the basic brick of our procedure, since it is through the update that we iteratively increase the weights of $\mathcal{W}$.
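A direct transcription of the sparsity check and of the update rule just described might look as follows (a sketch; the function name is ours, and $w$ is assumed entrywise non-negative):

```python
import numpy as np

def update_weights(W, Y, i, P, Q, w, lam0):
    """Compute the updating statistics U_{ij} for j in P and overwrite
    the weighted graph W whenever |U_{ij}| >= |W_{ij}|."""
    w = np.asarray(w, dtype=float)
    # sparsity condition lam0 * ||w||_2^2 >= ||w||_inf^2 (w non-negative)
    if lam0 * np.sum(w ** 2) < np.max(w) ** 2:
        return W                              # leave W unchanged
    scale = 1.0 / np.sqrt(min(1.0 / lam0, lam0))
    wn = w / np.linalg.norm(w)                # normalized direction w / ||w||_2
    for j in P:
        U = scale * np.dot(Y[i, Q] - Y[j, Q], wn)
        if abs(U) >= abs(W[i, j]):            # keep the most significant weight
            W[i, j], W[j, i] = U, -U          # preserve antisymmetry
    return W
```

Note that the antisymmetry $\mathcal{W}(i,j) = -\mathcal{W}(j,i)$ is maintained at every update.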
This update shares some similarities to the pivoting algorithm introduced in [@liu2020better] and also used in [@pilliat2022optimal], in the sense that while we are fixing an arbitrary reference expert $i$ to compute pairwise comparisons, they fix a set $P$ and compute a pivot expert $i_0$ that would correspond to a quantile of the set $\{\langle Y_j \cdot, \tfrac w\|w\|_2\rangle, j \in P\}$ in the case $\lambda_0 = 1$. Note that the orientation of a given weighted edge $(i,j)$ can change during the procedure if it turns out that $|\mathcal{U}_{i j}| \geq |\mathcal{W}_{i j}|$ and that $\mathcal{U}_{i j}\mathcal{W}_{i j} \leq 0$. This simply means that if the direction $w$ leads to a more significant weight between some experts $i$ and $j$, then we are more confident to use the vector $w$ and to revise the order between $i$ and $j$. For $Q \subset [d]$, choosing $w = {\mathbf 1}_Q$ in amounts to compute the average of the observations over all questions in $Q$. We now explain in the main sections how we iteratively build adaptive weights $w$ that allow to improve over the naive global average given by $w = {\mathbf 1}_{[d]}$. ### Definitions of a rank in a $\mathbf{DAG}$ We first introduce a few definitions on directed acyclic graphs $\mathcal{G}$, which we formally define as a set of directed edges $(i, j) \in [n]^2$ for which there is no cycle. We denote $\mathbf{path}(i,j) = \{(k_1, \dots, k_L) ~:~ L > 0 \mbox{ and } (i,k_1), \dots, (k_L, j) \in \mathcal{G}\}$ as the set of all possible paths from $i$ to $j$, and we write $|s|$ for the length of any path $s$. We say that $i$ and $j$ are $\mathcal{G}$-comparable if $\mathbf{path}(i,j) \cup \mathbf{path}(j,i) \neq \emptyset$, and we write $\mathcal{N}(i, \mathcal{G})$ for the set of all experts that are not $\mathcal{G}$-comparable with $i$. If $i$, $j$ are $\mathcal{G}$-comparable, it either holds that $\mathbf{path}(i,j) = \emptyset$ or $\mathbf{path}(j, i) = \emptyset$. 
We say in the first case that $i$ is $\mathcal{G}$-below $j$ and that $i$ is $\mathcal{G}$-above $j$ in the second case. we also define the relative rank from $i$ according to $\mathcal{G}$ as the length of the longest path in $\mathcal{G}$ from $i$ to $j$, or minus the longest past from $j$ to $i$ depending on wether $i$ is $\mathcal{G}$-above or $\mathcal{G}$-below $j$: $$\label{eq:rank} \begin{aligned} \mathbf{rk}_{\mathcal{G}, i}(j) &= \max \{|s| ~:~ s \in \mathbf{path}(i,j) \} - \max \{|s| ~:~ s \in \mathbf{path}(j,i)\} \enspace . \end{aligned}$$ Here, we use the convention $\max \emptyset = 0$. With this definition, the neighborhood of a given expert $i$ is equal to the set of experts whose relative rank is equal to $0$, that is $\mathcal{N}(\mathcal{G}, i) = \mathbf{rk}_{\mathcal{G}, i}^{-1}(0)$. Moreover, an expert $j\in [n]$ is $\mathcal{G}$-above (resp. $\mathcal{G}$-below) $i$ if and only if $\mathbf{rk}_{\mathcal{G},i}(j) \geq 1$ (resp. $\mathbf{rk}_{\mathcal{G},i}(j) \leq -1)$. Although $\mathcal{G}$ stands for a finite set of edges with endpoints in $[n]$, we extend it to a set of edges with endpoints in $\mathbb{Z}^2$ by putting in $\mathcal{G}$ every $(i,j) \in \mathbb{Z}^2$ such that $i > j$ and $j \leq 0$ or $i \geq n + 1$. ### Description of the soft local ranking algorithm {#sec:soft_local} To update the weighted directed graph $\mathcal{W}$ in of , we apply the soft local ranking procedure $\mathbf{SLR}$ to all experts $i \in [n]$ and all thresholds $\gamma$. To define our soft local ranking procedure, let us fix $\mathcal{W}$, an expert $i$ and a threshold $\gamma$ such that $\mathcal{G}(\mathcal{W}, \gamma)$ is acyclic. As a shorthand, we write $\mathcal{G}$ and $P$ respectively for the thresholded graph $\mathcal{G}(\mathcal{W}, \gamma)$ and the neighborhood $\mathcal{N}(\mathcal{G},i)$ of $i$ in $\mathcal{G}$. 
We write $\mathcal{D}$ for the set of all dyadic numbers: $\mathcal{D}= \{ 2^k ~:~ k \in \mathbb{Z}\}$ and we define the set $\mathcal{H}= \mathcal{D}\cap \left[\frac{1}{nd},1\right]$. We denote $\overline y(P)$ as the mean of the vectors $Y_{j \cdot}$ over all $j \in P$, that is $\overline y_k(P) = \tfrac{1}{|P|}\sum_{j \in P} Y_{j k}$, for any $k \in [d]$. $\mathbf{SLR}$ relies on the following steps repeated over all height $h \in \mathcal{H}$. It is also described in Algorithm [\[alg:refine_locally_sketch\]](#alg:refine_locally_sketch){reference-type="ref" reference="alg:refine_locally_sketch"}. 1. **Dimension reduction.** Using the first sample $Y^{(1)}$, we first reduce the dimension by selecting a subset $\widehat{Q}^h\subset [d]$ corresponding to wide confidence regions. Recall that $\mathbf{rk}_{\mathcal{G}, i}$ is the relative rank to $i$ defined in . For any $a > 0$, define the sets $\mathcal{N}_a := \mathcal{N}_a(\mathcal{G}, i)$ (resp. $\mathcal{N}_{-a}:=\mathcal{N}_{-a}(\mathcal{G}, i)$) of experts $j$ which are $\mathcal{G}$-above (resp. $\mathcal{G}$-below) all the experts of $P$ and whose relative rank to any $i' \in P$ is at most $a$ in absolute value: $$\label{eq:neighborhood} \mathcal{N}_a = \bigcap_{i' \in P} \mathbf{rk}_{\mathcal{G}, i'}^{-1}([1, a]) \quad \text{ and } \quad\mathcal{N}_{-a} = \bigcap_{i' \in P} \mathbf{rk}_{\mathcal{G}, i'}^{-1}([- 1, - a]) \enspace .$$ Secondly, we define for any question $k \in [d]$ and $a \geq 1$ the width statistic $\widehat {\boldsymbol\Delta}_k$ as the difference between the mean of the experts in $\mathcal{N}_a$ and the mean of the experts in $\mathcal{N}_{-a}$. 
Then, $\hat a_{k}$ is set to be the first value of $a \geq 1$ such that any $a' \geq a$ has a corresponding width statistic of at least $(\lambda_0 \land 1)h$: $$\label{eq:stat_delta} \widehat {\boldsymbol\Delta}_{k}(a) = \overline y_{k}(\mathcal{N}_a) - \overline y_{k}(\mathcal{N}_{-a}) \quad \text{ and } \quad\hat a_k(h) = \max \left\{a \geq 1 ~:~ \frac{1}{\lambda_0 \land 1}\widehat {\boldsymbol\Delta}_{k}(a) < h \right\} +1\enspace .$$ Finally, we define $\widehat Q^{h} := \widehat Q^{h}(\mathcal{G}, i)$ as the set of indices $k$ such that $\hat a_k(h)$ is relatively small. $$\label{eq:set_Q} \widehat Q^{h} = \{k \in [d] ~:~ |\mathcal{N}_{\hat a_k(h)}|\land |\mathcal{N}_{-\hat a_k(h)}|\leq \frac{1}{\lambda_0 h^2} \} \enspace .$$ Intuitively, if the experts above and below $i$ vary by more than $h$ on a specific question $k$, then this question should belong to $\widehat Q^{h}$. Conversely, if the experts below and above $i$ are nearly equal on the question $k$, than $\hat a_k(h)$ will be large and $k$ will not be selected in $\widehat Q^{h}$. 2. **Average-based weighted sums.** Still using the first sample $Y^{(1)}$, we examine the corresponding submatrix $Y^{(1)}(P, \widehat Q)$ restricted to questions in $\widehat Q$. If the row sums of $Y$ are larger than the current edges, we update the weighted edges. More formally, we compute the $(Y^{(1)}, P, {\mathbf 1}_{\widehat Q})$-updating weighted edges $(\mathcal{U}_{\widehat Q})$ around $i$ as defined in and update $\mathcal{W}$ as in . We then also update $\mathcal{G}= \mathcal{G}(\mathcal{W}, \gamma)$ and $P = \mathcal{N}(\mathcal{G}, i)$. 3. **PCA-based weighted sums.** Relying on the samples $Y^{(2)}$, $Y^{(3)}$, $Y^{(4)}$, $Y^{(5)}$, we do a slight abuse of notation and write $Y^{(s)}$ for the restriction of $Y^{(s)}$ to the subset $P, \widehat Q^{h}$ for $s=2,3,4,5$. 
Ideally, we would get an informative direction $w$ from the largest right singular vector of $\mathbb{E}[Y^{(2)}-\overline{Y}^{(2)}] \in \mathbb{R}^{P \times \widehat Q^{h}}$. Indeed, it is known (see the proofs for more details) that the entries of the first right singular vector of an isotonic matrix all share the same sign and are most informative to compare the experts. However, computing directly the empirical right-singular vector of $Y^{(2)}-\overline{Y}^{(2)}$ does not lead to the desired bounds because (i) this matrix is perhaps highly rectangular (ii) the noise is possibly heteroskedastic and (iii) this matrix is perhaps sparse because of the many missing observations when $\lambda_0$ is small. Here, we use a workaround which is reminiscent of that of [@pilliat2022optimal] and discussed later. First, we compute $\hat v$ as a proxy for the first left singular vector of $\mathbb{E}[Y^{(2)}-\overline{Y}^{(2)}]$. $$\label{eq:ACP} \hat v := \hat v( P, \widehat Q^h) = \mathop{\mathrm{arg\, max}}_{v \in \mathbb{R}^{P}:~\| v \|_2 \leq 1} \Big[ \|v^T(Y^{(2)} - \overline{Y}^{(2)})\|_2^2 - \frac{1}{2}\| v^T(Y^{(2)} - \overline{Y}^{(2)} - Y^{(3)} + \overline{Y}^{(3)})\|_2^2\Big] \enspace .$$ The right-hand side term in [\[eq:ACP\]](#eq:ACP){reference-type="eqref" reference="eq:ACP"} deals with the heteroskedasticity of the noise matrix $E$ in . $\hat v$ in [\[eq:ACP\]](#eq:ACP){reference-type="eqref" reference="eq:ACP"} can be computed efficiently since it corresponds to the leading eigenvector of a symetric matrix. For technical reasons occurring in the sparse observation regime (i.e. when $\lambda_0$ is small), we then threshold the largest absolute values of the coefficients of $\hat v$ at $\sqrt{\lambda_0}$ and define $(\hat v_-)_i = \hat v_i {\mathbf 1}\{|\hat v_i| \leq \sqrt{\lambda_0}\}$. 
After having calculated $\hat v_-$, we consider as in [@pilliat2022optimal] the image $\hat z = \hat v_-^T (Y^{(4)}-\overline Y^{(4)}) \in \mathbb{R}^{\widehat Q}$ of $\hat v_{-}$. We then threshold the smallest values of $\hat z$ and take the absolute values of the components. Thus, we get $\hat w^+\in \mathbb{R}^{\widehat Q}$ defined by $(\hat w^+)_l= |\hat z_l|{\mathbf 1}\{|\hat z_l|\geq \gamma\sqrt{\lambda_0 \land \frac{1}{\lambda_0}}\}$ for any $l \in \widehat Q$. Finally, we consider the last submatrix $Y^{(5)} = Y^{(5)}(P, \widehat Q)$. We apply these weights $\hat w^+$ to compute the row-wise weighted sums of $Y^{(5)}$ and update the weighted edges. More formally, we compute the $(Y^{(5)}, P, \hat w^+)$-updating weighted edges $\mathcal{U}(Y^{(5)}, P, \hat w)$ around $i$ as defined in . We finally update the weighted directed graph $\mathcal{W}$ with $\mathcal{U}(Y^{(5)}, P, \hat w^+)$ as in . $6$ samples $(Y^{(s)})_{s=1,\dots,5}$, a weighted directed graph $\mathcal{W}$, a threshold $\gamma$ such that $\mathcal{G}(\mathcal{W}, \gamma)$ is acyclic and an expert $i \in [n]$. $\mathcal{G}$ and $P$ are shorthands for the thresholded graph $\mathcal{G}(\mathcal{W}, \gamma)$ and the neighborhood $\mathcal{N}(\mathcal{G},i)$. 
An update of $\mathcal{W}$ [\[line:set_Q\]]{#line:set_Q label="line:set_Q"}Compute $\widehat Q^{h} := \widehat Q(\mathcal{G}, i)$ as in [\[eq:set_Q\]](#eq:set_Q){reference-type="ref" reference="eq:set_Q"} using sample $Y^{(1)}$ [\[line:first_stat_l1\]]{#line:first_stat_l1 label="line:first_stat_l1"}Let $\mathcal{U}_{\widehat Q^{h}}$ be the $(Y^{(1)}, P, {\mathbf 1}_{\widehat Q^{h}})$-updating weighted edges around $i$ as in , using again sample $Y^{(1)}$ [\[line:update_1\]]{#line:update_1 label="line:update_1"} Update $\mathcal{W}$ with $\mathcal{U}(\widehat Q^{h})$ as in and update $\mathcal{G}= \mathcal{G}(\mathcal{W}, \gamma)$, $P = \mathcal{N}(\mathcal{G}, i)$ Restrict the samples $(Y^{(s)})_{s=2,\dots,5}$ to $P, \widehat Q^h$ in the following remaining steps [\[line:ACP\]]{#line:ACP label="line:ACP"}Compute the PCA-like direction $\hat v := \hat v(P, \widehat Q^{h})$ as in [\[eq:ACP\]](#eq:ACP){reference-type="eqref" reference="eq:ACP"} and define $(\hat v_-)_i = \hat v_i {\mathbf 1}\{|\hat v_i| \leq \sqrt{\lambda_0}\}$ [\[line:direction\]]{#line:direction label="line:direction"}Compute $\hat z = \hat v_-^T (Y^{(4)}-\overline Y^{(4)})$ and define $\hat w^+$ by $(\hat w^+)_l= |\hat z_l|{\mathbf 1}\{|\hat z_l|\geq \gamma\sqrt{\lambda_0 \land \frac{1}{\lambda_0}}\}$ for any $l \in \widehat Q^{h}$ [\[line:second_stat_l1\]]{#line:second_stat_l1 label="line:second_stat_l1"}Let $\mathcal{U}(Y^{(5)},\hat w^+)$ be the $(Y^{(5)}, P,\hat w^+)$-updating weighted edges around $i$ as in Update $\mathcal{W}$ with $\mathcal{U}(Y^{(5)},\hat w^+)$ as in [\[line:update_2\]]{#line:update_2 label="line:update_2"} ## Toy example illustrating To understand why the steps described in are relevant, assume that $\pi^* = \mathrm{id}$ and consider the following simple example where $n = 204$, $d = 10$, and where the isotonic matrix $M_{\pi^{*-1}}$ can be decomposed into three blocks of rows as $$M_{\pi^{*-1}}= \alpha + \frac{h}{2} \left(\begin{array}{cccccccccc} {\mathbf 0}&{\mathbf 
0}&\tikzmarkin[ver=style cyan]{1}{\mathbf 1}&{\mathbf 1}&{\mathbf 0}&\tikzmarkin[ver=style cyan]{2}{\mathbf 1}&{\mathbf 1}&{\mathbf 0}&\tikzmarkin[ver=style cyan]{3}{\mathbf 1}&{\mathbf 1}\\ \hline 0&0&0&1&0&1&0&0&1&1 \\ 0&0&0&1&0&1&0&0&1&1 \\ 0&0&0&-1&0&-1&0&0&-1&-1 \\ 0&0&0&-1&0&-1&0&0&-1&-1\\ \hline {\mathbf 0}&{\mathbf 0}&-{\mathbf 1}&-{\mathbf 1}\tikzmarkend{1}&{\mathbf 0}&-{\mathbf 1}&-{\mathbf 1}\tikzmarkend{2}&{\mathbf 0}&-{\mathbf 1}&-{\mathbf 1}\tikzmarkend{3} \end{array}\right) \enspace .$$ In the above matrix, $\alpha$ is any number in $(h, 1-h)$, and ${\mathbf 0},{\mathbf 1}$ are the columns in $\mathbb{R}^{100}$ whose coefficients are respectively all equal to $0$ and $1$. Assume that the statistician already knows that the first and the third blocks are made of experts that are respectively above and below the second block. If $\mathcal{W}, P,\gamma$ are the parameters fixed in , the three blocks correspond respectively to the subsets $\mathcal{N}_{1} \cup \mathcal{N}_2$, $P$ and $\mathcal{N}_{-1}\cup \mathcal{N}_{-2}$ in our example. Provided that $\mathcal{N}_{-2}$ and $\mathcal{N}_{2}$ are large enough, the set $\widehat Q^{h}$ only keeps columns corresponding to indices $k$ where $\widehat {\boldsymbol\Delta}_k(1)$ is large -- those are highlighted in blue. Then, we can work on the reduced subset $\widehat Q^h$ of columns highlighted in blue. As one may check, $\widehat Q^h$ contains all the relevant columns to decipher the experts in the block $P$. Besides, the expected matrix of observations restricted to the block $P$ and to $\widehat Q^h$ is of rank one: $$\mathbb{E}[Y - \overline Y]= \frac{h}{2}\left(\begin{array}{cccccccccc} 0&1&1&0&1&1 \\ 0&1&1&0&1&1 \\ 0&-1&-1&0&-1&-1 \\ 0&-1&-1&0&-1&-1\\ \end{array}\right) \enspace .$$ In particular, the right singular vector of this matrix is of the form $(0,1,1,0,1,1)$ and provides suitable weights to decipher the two largest experts from the two lowest experts in the above matrix. 
The PCA-based weighted sums steps above precisely aims at estimating these weights. ## Comments on the procedure and relation to the literature {#subsec:comments} Finding confidence regions $\widehat Q$ before computing weighted sums on the corresponding columns is at the core of our procedure. This idea generalizes the RankScore procedure of [@flammarion2019optimal] which rather computes averages on the subsets $[d]$ or on the singletons $\{1\}, \dots, \{d\}$. As mentioned in the introduction, only using the subsets of the RankScore method in [@flammarion2019optimal] does not allow to reach the optimal rate for permutation estimation or matrix reconstruction. In , the computation of subsets $\widehat Q^{h}$ is reminiscent of some aspects of the non oblivious trisection procedure used in [@pilliat2022optimal] for the bi-isotonic model. In fact, the statistic $\widehat {\boldsymbol\Delta}_k$ corresponds to the statistic $\widehat {\boldsymbol\Delta}^{(\mathrm{ext})}_{k,1}$ in [@pilliat2022optimal]. Apart from that, the selection of subsets of questions was quite different in [@pilliat2022optimal] as it mostly involved change-point detection ideas as introduced in [@liu2020better]. However, those ideas are irrelevant in our setting because the rows do not exhibit any specific structure in the isotonic model. The high-level sorting method in [@pilliat2022optimal] is based on a hierarchical sorting tree with memory. In contrast, our new algorithm is based on an iterative refinement of a weighted comparison graph. This new algorithm is more natural and benefits from the fact that it is almost free of any tuning parameter. Indeed, at the end of , we simply use the threshold $\hat \gamma$ corresponding to the largest acyclic $\hat \mathcal{G}$ graph in $\mathcal{W}$. No significant threshold needs to be chosen, since any permutation that is consistent with $\hat \mathcal{G}$ is also necessarily consistent with $\mathcal{W}$ thresholded at values larger than $\hat \gamma$. 
The spectral step in [@pilliat2022optimal] is quite similar to the third step of our procedure described in [3.3.3](#sec:soft_local){reference-type="ref" reference="sec:soft_local"}, except for the first thresholding of $\hat v$ to obtain $\hat v_-$. In [@pilliat2022optimal], this workaround was not needed mainly because in the bi-isotonic model, it is possible to aggregate sparse observations by merging consecutive columns -- see [@pilliat2022optimal] for further details. This is however not possible here. As mentioned in the introduction, Liu and Moitra [@liu2020better] obtain an upper bound of the permutation loss of the order of $n^{7/6}$ for the estimation of two unknown permutations in the case where $M \in \mathbb{R}^{n\times n}$ is bi-isotonic. Broadly speaking, their method involves iterating a clustering method called block-sorting over groups of rows or columns that are close with each other. Using this sorting method based on block-sorting, their whole approach alternates between row sorting and column sorting for a subpolynomial number of time. Besides, their procedure makes heavily use of bi-isotonicity of the matrix. It turns out that reaches the same rate in this bi-isotonic model by running only once on the rows, and once on the columns, as described in . Otherwise said, if the problem is to estimate only $\pi^*$ in the bi-isotonic model, we proved that only the isotonicity of the columns is necessary to achieve the state-of-the-art polynomial-time upper bound of order $n^{7/6}$. ## Examples of valid grids $\Gamma$ {#subsect:valid_grid} Remark that the simple set $\{(u+1)\cdot\phi_{\mathrm{L}_1}, u \in \{0, \dots, 2\left\lfloor\log_2(n)\right\rfloor + 2\}\}$ is a valid grid of logarithmic size with $\bar \gamma \leq (2\log_2(n) + 3)\phi_{\mathrm{L}_1}$. This set is the smallest valid grid achieving the smallest possible value of $\bar \gamma$. However, it depends on the quantity $\phi_{\mathrm{L}_1}$ which is perhaps a bit pessimistic in practice. 
An other choice can be to take $\mathbb{R}^+$ itself, albeit infinite. Indeed, the set $\{\mathcal{G}(\mathcal{W}, \gamma), \gamma \geq 0 \}$ is made of at most $n^2$ possible directed graphs for any $\mathcal{W}$ during the whole procedure. Choosing $\mathbb{R}^+$ is convenient since it does not depend on the constants in $\phi_{\mathrm{L}_1}$ that are likely to be overestimated. The drawback of choosing $\mathbb{R}^+$ though is that the number of tested $\gamma$ in becomes quadratic in $n$. Finally, a good compromise is to take the set $\{(1+ \frac{1}{\log_2(n)})^{u'}, ~u' \in \mathbb{Z}\}$. It is easy to check that it contains a sequence satisfying whose length is at least $2\left\lfloor\log_2(n)\right\rfloor+3$ and whose maximum $\bar \gamma$ is a polylogarithmic function in $nd/\delta$. ## Discussion on the computation of $\widehat \mathcal{G}$ and $\hat \pi$ {#subsec:further_discussion} Once we have suitable weighted graph $\mathcal{W}$, it remains to construct the permutation $\hat \pi$, as in the second and third point of . For the second point, checking that a given directed graph is acyclic can be done through depth first search with a computational complexity less than $n$, so that computing $\hat \gamma$ can be done with less than $|\Gamma|n$ operations. As discussed in , it is possible to choose $\Gamma$ to be of size of order less than $\log(n)$. If $\Gamma$ is bounded and is such that any different thresholds $\gamma, \gamma'$ in $\Gamma$ satisfy $|\gamma - \gamma'|\geq \eta$ for some $\eta > 0$, the computation of $\hat \gamma$ can always be done with complexity of order less than $n\log(\max (\Gamma)/\eta)$. Regarding the third point, a permutation $\hat \pi$ can be computed in polynomial time from the directed acyclic graph $\hat \mathcal{G}$ using Mirsky's algorithm [@mirsky1971dual] -- see also [@pananjady2022isotonic]. It simply consists in finding the minimal experts $i$ in $\hat \mathcal{G}$, removing them and repeat this process. 
This construction is in fact equivalent to ranking the experts according to the index $\mathbf{rk}_{\hat \mathcal{G}, 0}$ as defined in . # Concentration inequality for rectangular matrices {#sec:concentration} In this section, we state a concentration inequality for rectangular random matrices with independent entries satisfying a Bernstein-type condition. This section can be read independently of the rest of the paper. Let $p$ and $q$ be two positive integers and $X \in \mathbb{R}^{p \times q}$ be a random matrix with independent and mean zero coefficients. Assume that there exists $\sigma > 0$ and $K \geq 1$ such that for any $i = 1,\dots, p$ and $k = 1, \dots, q$, $$\label{eq:condition_moments_bernstein} \forall u \geq 1, ~~~~\mathbb{E}[(X_{ik})^{2u}] \leq \frac{1}{2}u!\sigma^2 K^{2(u-1)} \enspace .$$ This Bernstein-type condition is exactly the same as Assumption 1 in [@bellec2019concentration] -- see [@bellec2019concentration] for a discussion. Let $\Lambda \in \mathbb{R}^{p\times p}$ be any orthogonal projection matrix, i.e. $\Lambda = \Lambda^T$ and $\Lambda^2 = \Lambda$. We write $r_{\Lambda}$ for the rank of $\Lambda$. **Proposition 6**. *There exists a positive numerical constant $\kappa$ such that the following holds for any $\delta > 0$. $$\|\Lambda(XX^T - \mathbb{E}[XX^T])\Lambda\|_{\mathrm{op}} \leq \kappa \left[\sqrt{(\sigma^4pq + \sigma^2 q)\log(p/\delta)} + (\sigma^2r_{\Lambda} + K^2\log(q))\log(p/\delta)\right] \enspace .$$* For the sake of the discussion, consider the particular case where $X_{ik} = B_{ik} E_{ik}$, with $B_{ik}$ and $E_{ik}$ being respectively independent Bernoulli random variable of parameter $\sigma^2$ and centered Gaussian random variable with variance $1$. By a simple computation done e.g. in , $X_{ik}$ satisfies condition with $K$ being of the order of a constant. 
Hence, if $K^2\log(q) \leq \sigma^2 p$, applying with the identity matrix $\Lambda$ gives $$\label{eq:concentration_bernstein_simple} \|XX^T - \mathbb{E}[XX^T]\|_{\mathrm{op}} \leq 2\kappa\sigma^2\left[\sqrt{pq \log(p/\delta)} + p\log(p/\delta)\right]\enspace ,$$ with probability at least $1- \delta$. Up to our knowledge, the inequality is tighter than state-of-the-art result random rectangular sparse matrices in the regime where $q \gg p$ and $\sigma^2 \ll 1$. In fact, most of the results in the literature concerning random matrices state concentration inequalities for the non centered operator norm $\|XX^T\|_{\mathrm{op}}$ -- see the survey of Tropp [@2015arXiv150101571T]. More specifically, Bandeira and Van Handel [@bandeira2016sharp] provide tight non-asymptotic bounds for the spectral norm of a square symmetric random matrices with independent Gaussian entries, and derive tail bounds for the operator norm of $XX^T$. For instance, Corollary 3.11 in [@bandeira2016sharp], implies that, for some numerical constant $c$, $\mathbb{E}[\|XX^T\|^2_{\mathrm{op}}] \leq c(\sigma^2 (p \lor q) + \log(p \lor q))$. Together with a triangular inequality, Bandeira and Van Handel imply $\|XX^T - \mathbb{E}[XX^T]\|^2_{\mathrm{op}}\leq c\sigma^2( (p \lor q) + \log(\tfrac{p \lor q}{\delta}))$ with probability higher than $1-\delta$. While the order of magnitude $\sigma^2(p\lor q)$ is tight for controlling the operator norm $\|XX^T\|^2_{\mathrm{op}}$ of the non-centered Gram matrix with high probability, implies that the right bound for $\|XX^T - \mathbb{E}[XX^T]\|^2_{\mathrm{op}}$ is rather $\sigma^2\sqrt{pq}$ which is significantly smaller in the regime $p \ll q$ and $\sigma^2 \ll 1$. In the proof of , we could have used those previous results for controlling the matrices of the form $\|XX^T - \mathbb{E}[XX^T]\|^2_{\mathrm{op}}$. However, we would have then achieved a suboptimal risk upper bound. 
Indeed, plays critical role in the proof of , when we need to handle matrices with partial observations that are possibly highly rectangular in the spectal step of the procedure . The proof of relies on the observation that the matrix $XX^T - \mathbb{E}[XX^T]$ is the sum of $q$ centered rank 1 random matrices. This allows us to apply Matrix Bernstein-type concentration inequalities for controlling the operator norm of this sum -- see [@2015arXiv150101571T] or Section 6 of [@wainwright2019high]. # Proof of {#sec:general_analysis} ## Notation and signal-noise decomposition We first introduce some notation, and in particular the noise matrices on which we will apply concentration inequalities. In what follows, we define for any matrix $A \in \mathbb{R}^{n \times d}$, and any vector $w \in \mathbb{R}^d$: $$\label{eq:abuse_notation} \langle A_i \cdot, w\rangle = \sum_{k =1}^d A_{ik}w_k \enspace .$$ If $w$ belongs to $\mathbb{R}^Q$ where $Q$ is some subset of $[d]$, we also write $<A_{i \cdot}, w>= \sum_{k \in Q}^d A_{ik}w_k$. The same notation stands for the scalar product on matrices, namely $\langle A,A'\rangle = \mathop{\mathrm{Tr}}(A^TA')$ if $A' \in \mathbb{R}^{n\times d}$. If $A$ and $A'$ are two matrices in $\mathbb{R}^{n \times d}$, then we write the coordinate-wise product $(A \odot A')_{ik} = A_{ik}A'_{ik}$. In what follows, we assume that $\pi^* = \mathrm{id}$. We make this assumption without loss of generality since we can reindex each expert $i$ with $i'=\pi^{*-1}(i)$. Recalling that $B$ is defined in we define $$\label{eq:def_lambda_1} \lambda_1 := \mathbb{P}(B^{(s)}_{ik} = 1) = 1- e^{-\lambda_0}\enspace .$$ If $\lambda_0 \leq 1$, we have $\lambda_0 \geq \lambda_1 \geq (1-\tfrac{1}{e})\lambda_0$. We assume in what follows that $\lambda_0 \leq 1$, which corresponds to the case where there are potentially many unobserved coefficients. The case $\lambda_0 \geq 1$ will be treated in . 
For an observation matrix $Y^{(s)}$ defined in , we make the difference between $\mathbb{E}[Y^{(s)}] = \lambda_1 M$, which is the unconditional expectation of $Y^{(s)}$, and $\mathbb{E}[Y^{(s)}|B^{(s)}] = B^{(s)} \odot M$, which is the expectation of $Y^{(s)}$ conditionally to the matrix $B$. We write the noise matrix $$\label{eq:two_possible_noises} E^{(s)} = Y^{(s)} - \lambda_1 M \quad \text{ and } \quad\widetilde E^{(s)} = Y^{(s)} - B^{(s)}\odot M \enspace .$$ Recall that $\varepsilon_t = y_t - M_{x_t}$ is the subGaussian noise part in model , and that $N_s$ is defined in . Each coefficient $\widetilde E^{(s)}_{ik}$ can be rewritten as the average of the noise $\varepsilon_t$. that are present in $N^{(s)}$ and that correspond to coefficient $x_t=(i,k)$. $$\label{eq:aggregated_noise} \widetilde E^{(s)}_{ik} = \sum_{t \in N^{(s)}} \tfrac{\varepsilon_t}{\mathbf{r}^{(s)}_{ik}\lor 1}{\mathbf 1}\{x_t = (i,k)\} \enspace .$$ From now on, we often omit the dependence in $s$. We will extensively use the decomposition $Y = \lambda_1M + E$, where $\lambda_1$ is defined in and $E$ in . Recalling that $B_{ik} = {\mathbf 1}\{r_{ik} \geq 1\}$, we often rewrite $E$ as the sum of two centered random variables: $$E_{ik} = (B_{ik} - \lambda_1)M + B_{ik}\widetilde E_{ik} \enspace .$$ Handling the concentration of the noise is more challenging in the case $\lambda_0 \leq 1$ than in the full observation regime $\lambda_0 \geq 1$ discussed in . Indeed, while subGaussian concentration inequalities are effective in the full observation regime $\lambda_0 \geq 1$, they lead to slower estimation rate in the case $\lambda_0 \leq 1$, for instance in . Indeed, it turns out that the variance of a coefficient $\varepsilon_{ik}$ is of order $\lambda_0 \leq 1$, while the hoeffding inequality only implies that $B_{ik} - \lambda_1$, and in particular $\varepsilon_{ik}$ are $c$-subGaussian for some numerical constant $c$. 
To overcome this issue, one of the main ideas is to use Bernstein-type bounds on the coefficients of $E$ and on the random matrix $EE^T - \mathbb{E}[EE^T]$- see and . ## General property on $\mathcal{W}$ Recall that we assume that $\lambda_0 \leq 1$, so that $\tfrac{1}{\lambda_0} \land \lambda_0= \lambda_0$ in , and that $\phi_{\mathrm{L}_1}$ is defined in by $\phi_{\mathrm{L}_1} := 10^4\log(10^2nd/\delta)$. In the following, we let $\xi$ be the event on which the noise concentrates well for all the pairs $(Q,w)$ considered during the whole procedure. More precisely, we say that we are under event $\xi$, if for any $s=0, \dots, 5T-1$ and for any pair $(Q,w)$ that is used to compute a refinement as in we have $$\label{eq:condition_phil1} \left\lvert\langle E^(s)_i \cdot - E^(s)_j \cdot, w\rangle\right\rvert \leq \tfrac{1}{3}\phi_{\mathrm{L}_1}\sqrt{\lambda_0} \quad \text{ for any } (i,j) \in [n]^2 \enspace .$$ **Lemma 7**. *The event $\xi$ holds true with probability at least $1-2T\delta$.* The idea of is to apply a bernstein-type inequality and a union bound on all the possible dot products $\langle E^(s)_i \cdot, w\rangle$, for all the $5T$ possible $s$ and the at most $2T$ possible $w$. The upper bound is of the order of the square of the variance of $E_{ik}$ up to the polylogarithm factor $\phi_{\mathrm{L}_1}$. The crucial point is that if $\langle E^(s)_i \cdot, w\rangle$ is not $\lambda_0$-subGaussian, it satisfies the Bernstein's Condition \[ 2.15 of [@massart2007concentration]\] with variance $\nu=\lambda_0$ and scaling factor $b=\|w\|_{\infty}$. We then obtain an upper bound of order $\sqrt{\lambda_0}$ since any $w$ considered in the update step must satisfy .Recall that $\bar \gamma$ is defined in . We fix in what follows a sequence $\overline \gamma = \gamma_0 > \gamma_1 > \gamma_2 > \dots > \gamma_{\left\lfloor 2\log_2(n)\right\rfloor} = \gamma_{\min}$ in $\Gamma$ satisfying property . 
We say that $u$ is the level of the corresponding threshold $\gamma_u$. We say $\mathcal{W}$ and $(\gamma_u)$ satisfies the property $\mathcal{C}(\mathcal{W}, (\gamma_u))$ if the following holds 1. **consistency:** For any $(i,j) \in \mathcal{G}(\mathcal{W}, \gamma_{\min})$ it holds that $\pi^*(i) > \pi^*(j)$. 2. **weak-transitivity:** Fix any $u \in \{0, \dots, \left\lfloor 2\log_2(n)\right\rfloor -1\}$. For any experts $i, j$, $k$, if $i$ is $\mathcal{G}(\mathcal{W}, \gamma_u)$-above $j$ and $k \in \mathcal{N}(\mathcal{G}(\mathcal{W}, \gamma_{u+1}), j)$, then any $i'\geq i$ is also $\mathcal{G}(\mathcal{W}, \gamma_{\min})$-above $k$. The first point of the above property means that at threshold $\gamma_{\min}$, there is no mistake in the directed graph $\mathcal{G}(\mathcal{W}, \gamma_{\min})$, meaning that if there is an edge from $i$ to $j$ in $\mathcal{G}(\mathcal{W}, \gamma_{\min})$, then $i$ is truly above $j$. Moreover, we only state the consistency property of the graph $\mathcal{G}(\mathcal{W}, \gamma_{\min})$, but this property also implies that, for any more conservative threshold $\gamma \geq \gamma_{\min}$, any $(i,j) \in \mathcal{G}(\mathcal{W}, \gamma)$ satisfies $\pi^*(i) > \pi^*(j)$. This is due to the fact that $\mathcal{G}(\mathcal{W}, \gamma) \subset \mathcal{G}(\mathcal{W}, \gamma_{\min})$. The weak transitivity property states in particular that if there is a path from $i$ to $j$ in the more conservative graph $\mathcal{G}(\mathcal{W}, \gamma_u)$, then there is a path from $i$ to any $k$ in the neighborhood of $j$ at the less conservative threshold $\gamma_{\min}$. The following lemma states that the above property remains true for the weighted graph $\mathcal{W}'$, after any update of the whole procedure. **Lemma 8**. 
*Under $\xi$, the property $\mathcal{C}(\mathcal{W}', (\gamma_u))$ holds true for any directed weighted graph $\mathcal{W}'$ obtained at any stage of and .* We denote in the following $\mathcal{W}_t$ for the directed weighted graph at the begining of step $t$. For any $u \in [0, \left\lfloor 2\log_2(n)\right\rfloor]$, we also write as a short hand $\mathcal{G}_{t, u} = \mathcal{G}(\mathcal{W}_t, \gamma_u)$ for the directed graph at begining of step $t$ and level $u$ and $P_{t,u}(i) = \mathcal{N}(\mathcal{G}_{t, u}, i)$ for the set of experts that are not comparable with $i$ according to $\mathcal{G}_{t,u}$. For any sequence of experts $I$, we write $\mathcal{P}_{t, u}(I)$ for the sequence of subsets $(P_{t,u}(i))_{i \in I}$. Let us now divide the $T$ steps of the algorithm into $\tau_{\max} = \left\lfloor\log_2(n)\right\rfloor+1$ epochs of $K= \left\lfloor T/\tau_{\max}\right\rfloor$ steps. For any $\tau \in [0,\tau_{\max}]$, we also write $\mathcal{G}^K_{\tau, u} = \mathcal{G}_{\tau K, u}$, $P^K_{\tau,u}(i) = P_{\tau K, u}(i)$ and $\mathcal{P}^K_{\tau, u}(I) = \mathcal{P}^K_{\tau, u}(I)$. Now we consider for each epoch $\tau$ a sequence of experts $I(\tau) = (i_1(\tau), \dots, i_{L_\tau}(\tau))$ defined by induction: - $I(0)$ is the empty sequence - For $\tau \geq 0$, let $(i_1, \dots, i_L)$ be the sequence ordered according to $\pi^*$ and corresponding to the union of the already constructed sequences $\bigcup_{\tau' \leq \tau} I(\tau')$ , and $i = 0$, $i_{L + 1} = n+1$. For any $l \in [0, L]$, let $A_l$ be the set of experts that are $\mathcal{G}^K_{\tau+1, 2\tau+1}$-below $i_{l+1}$ but $\mathcal{G}^K_{\tau+1, 2\tau+1}$-above $i_l$. For all $l$ such that $A_l$ is not empty, we define $i'_l$ as the expert of $A_l$ which is any expert closest to the median $\left\lfloor(i_l + i_{l+1})/2\right\rfloor$, and the new sequence $I(\tau + 1) := (i'_l)$. By definition, remark that $I(1)$ is equal to $(\left\lfloor(n+1)/2\right\rfloor)$. 
The induction step aims at building a sequence $I(\tau+1)$ that is disjoint from $\cup_{\tau' \leq \tau} I(\tau')$, and that cuts each set $A_l$ of experts that are above $i_l$ and below $i_{l+1}$ according to the graph at epoch $\tau + 1$ and level $2\tau+1$. Given the already constructed collections of perfectly ordered experts $I(\tau')$ for $\tau' \leq \tau$, the idea of $I(\tau+1)$ is that it tends to fill the gaps between the neighborhoods in $\mathcal{G}_{\tau+1,2\tau+1}$ of any two successive experts in $\cup_{\tau' \leq \tau}I(\tau')$. By monotonicity, it holds for any expert $i$, epoch $\tau$ and level $u$ that $P^K_{\tau+1, u+1}(i) \subset P^K_{\tau+1,u}(i) \subset P^K_{\tau,u}(i)$. We say that the sets $P^K_{\tau,2\tau}(i)$ and $P^K_{\tau, 2\tau+1}(i)$ are the neighborhoods of $i$ at the beginning of epoch $\tau$ and that the sets $P^K_{\tau+1, 2\tau}(i), P^K_{\tau+1, 2\tau+1}(i)$ are the neighborhoods of $i$ at the end of epoch $\tau$. The neighborhoods at the end of a given epoch $\tau$ are obtained from the neighborhoods at the beginning of epoch $\tau$ after $K$ steps of the . On the other hand, we say that the sets $P^K_{\tau, 2\tau}, P^K_{\tau+1, 2\tau}$ are the conservative subsets at epoch $\tau$, since they correspond to a more conservative directed graph with threshold $\gamma_{2\tau} \geq \gamma_{2\tau+1}$. The following lemma states that, at any epoch $\tau$, the conservative subsets at the beginning of epoch $\tau$ are well separated according to the true order $\pi^* = \mathrm{id}$: **Lemma 9**. *Under event $\xi$, for any $\tau \in [0, \tau_{\max}]$, letting $(i_1, \dots, i_L) = I(\tau)$, we have $$\begin{aligned} P^K_{\tau,2\tau}(i_1) < \dots < P^K_{\tau,2\tau}(i_{L}). \end{aligned}$$* In other words, implies that, for any $l < l'$, any expert in $P^K_{\tau,2\tau}(i_l)$ is $\pi^*$-below any expert in $P^K_{\tau,2\tau}(i_{l'})$.
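For concreteness, the thresholded graphs $\mathcal{G}(\mathcal{W},\gamma)$ and the neighborhoods $\mathcal{N}(\mathcal{G},i)$ manipulated above can be sketched in a few lines of Python; the weight matrix below is hypothetical, and the sketch only illustrates the monotonicity just used: raising the threshold removes edges and hence enlarges neighborhoods.

```python
# Sketch of thresholded directed graphs and neighborhoods; the weight
# matrix W is hypothetical (larger W[i][j] = stronger evidence that
# expert i is above expert j).
def graph(W, gamma):
    n = len(W)
    return {(i, j) for i in range(n) for j in range(n)
            if i != j and W[i][j] >= gamma}

def neighborhood(edges, i, n):
    # experts not comparable with i (i itself is trivially included)
    return {j for j in range(n)
            if (i, j) not in edges and (j, i) not in edges}

W = [[0, 3, 5],
     [0, 0, 2],
     [0, 0, 0]]
G_cons = graph(W, 4)  # conservative threshold: fewer edges
G_min = graph(W, 1)   # less conservative threshold: more edges
```

Here $G_{\text{cons}} \subset G_{\min}$, and every neighborhood computed in the less conservative graph is contained in the one computed in the conservative graph.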
As a consequence, it holds that for any $l \in [1, L_{\tau} - 2]$, $$\label{eq:packing_stronger} P^K_{\tau,2\tau}(i_l) \overset{\mathcal{G}^K_{\tau, 2\tau}}{\prec} P^K_{\tau,2\tau}(i_{l+2})\enspace .$$ Namely, any expert in $P^K_{\tau,2\tau}(i_l)$ is $\mathcal{G}^K_{\tau, 2\tau}$-below any expert in $P^K_{\tau,2\tau}(i_{l+2})$. Indeed, and the first point of event $\xi$ imply that any expert $j$ in $P^K_{\tau,2\tau}(i_l)$ is $\mathcal{G}^K_{\tau, 2\tau}$-below $i_{l+1}$, since $j$ cannot be in $P^K_{\tau,2\tau}(i_{l+1})$. On the other hand, $i_{l+1}$ is itself $\mathcal{G}^K_{\tau, 2\tau}$-below any expert of $P^K_{\tau,2\tau}(i_{l+2})$ for the same reason. The following lemma states that the less conservative subsets at the end of the epochs cover the set of all experts. **Lemma 10**. *Under event $\xi$, it holds that $$[n] = \bigcup_{\tau = 0}^{\tau_{\max} - 1}\bigcup_{i\in I(\tau)} P^K_{\tau+1, 2\tau+1}(i) \enspace .$$* Let $\hat \pi$ be the estimator obtained from the final weighted directed graph $\mathcal{W}$ at the end of the procedure, that is, any permutation on $[n]$ that is consistent with the largest acyclic graph of the form $\mathcal{G}(\mathcal{W}, \gamma)$ for all $\gamma > 0$. For any sequence of subsets $\mathcal{P}= (P_1, \dots, P_L)$ we define $$\label{eq:def_square_norm} \mathop{\mathrm{SN}}(\mathcal{P}) = \sum_{P \in \mathcal{P}} \|M(P) - \overline M(P) \|_F^2 \enspace.$$ The following proposition states that we can control the $L_2$ error of $\hat \pi$ by the sum over all epochs $\tau$ of the square norms of the groups in $\mathcal{P}^K_{\tau+1, 2(\tau + 1)}$. **Proposition 11**. *Under event $\xi$, it holds that $$\label{eq:UB_by_square_norm} \|M_{\hat \pi^{-1}} - M\|_F^2 \leq 4\sum_{\tau = 0}^{\tau_{\max}-1}\mathop{\mathrm{SN}}(\mathcal{P}^K_{\tau+1, 2(\tau + 1)}) \enspace .$$* Recall that $\bar \gamma$ is defined in , and that $\Gamma$ can be taken to be a valid grid with $\bar \gamma$ smaller than a polylogarithm in $n,d,\delta$.
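The square norm introduced above can be computed directly from its definition; a small self-contained Python sketch on a toy matrix (not the estimator of the paper):

```python
# SN(P) = sum over groups P of ||M(P) - Mbar(P)||_F^2, where Mbar(P)
# replaces every row of the group by the group's column-wise mean.
def square_norm(M, groups):
    total = 0.0
    for P in groups:
        rows = [M[i] for i in P]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        total += sum((x - m) ** 2
                     for row in rows for x, m in zip(row, means))
    return total

M = [[0.0, 0.0],
     [1.0, 1.0],
     [1.0, 1.0]]
sn = square_norm(M, [[0, 1], [2]])  # only the first group contributes
```

The square norm vanishes exactly when every group is made of identical rows, e.g. when all groups are singletons.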
The final proposition states that at any level $u$ and any step $t$, any sequence of subsets that can be ordered according to the already constructed graph $\mathcal{G}_{t,u}$ as in will either have a square norm smaller than the minimax rate $\rho_{\mathrm{perm}}$, defined in , or will almost exponentially decrease its square norm with high probability. **Proposition 12**. *Fix any $u \in [0, 2\tau_{\max}]$ and step $t < T$, and assume that $I = (i_1, \dots, i_{L})$ is a sequence of experts that satisfies $P_{t, u}(i_1) \overset{\mathcal{G}_{t,u}}{\prec} \dots \overset{\mathcal{G}_{t,u}}{\prec} P_{t, u}(i_{L})$. Then on the intersection of the event $\xi$ (defined in ) and an event of probability higher than $1 - 5\delta$, it holds that $$\mathop{\mathrm{SN}}(\mathcal{P}_{t+1, u}(I)) \leq \left[C\bar \gamma^{6}\rho_{\mathrm{perm}}(n, d, \lambda_0)\right]\lor \left[\left(1 - \frac{1}{4\bar \gamma^2}\right)\mathop{\mathrm{SN}}(\mathcal{P}_{t,u}(I))\right]\enspace ,$$ for some numerical constant $C$.* Let us fix $\tau \in \{0, \dots, \tau_{\max}-1\}$. Applying for each $t = K\tau, \dots, K\tau + K-1$ and $u = 2(\tau + 1)$ -- the hypothesis of being satisfied by -- we obtain with probability $1 - 5(K+T)\delta$ that $$\begin{aligned} \mathop{\mathrm{SN}}(\mathcal{P}_{\tau+1, 2(\tau+1)}) &\leq \left[C\bar \gamma^{6}\rho_{\mathrm{perm}}(n, d, \lambda_0)\right]\lor e^{-\frac{T}{4\tau_{\max}\bar \gamma^4}}nd \\ &\leq CT\bar \gamma^{6}\rho_{\mathrm{perm}}(n, d, \lambda_0) \enspace,\end{aligned}$$ if $T$ is larger than $4\bar \gamma^6 \geq 4\log^2(nd)\bar \gamma^4$.
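The computation above iterates the per-step bound of the proposition, using $(1 - x)^K \leq e^{-Kx}$ for the geometric branch; a small numerical sketch of the recursion $S \mapsto \big[C\bar\gamma^6\rho\big] \lor \big[(1 - \tfrac{1}{4\bar\gamma^2})S\big]$, with hypothetical values for the rate and for $\bar\gamma$:

```python
# Iterating the bound of the proposition: the square norm either drops
# below the (hypothetical) rate, or contracts geometrically at each step.
import math

def iterate_sn(S0, rate, gamma_bar, K):
    S = S0
    c = 1 - 1 / (4 * gamma_bar ** 2)
    for _ in range(K):
        S = max(rate, c * S)
    return S

S0, rate, gamma_bar, K = 1e6, 10.0, 3.0, 600
S_final = iterate_sn(S0, rate, gamma_bar, K)
# since (1 - x)^K <= exp(-K x), the geometric part is below exp(-K/(4*gbar^2))*S0
bound = max(rate, math.exp(-K / (4 * gamma_bar ** 2)) * S0)
```

With these values the geometric term is negligible after $K$ steps and the recursion stalls exactly at the rate, as in the first branch of the maximum.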
We conclude the proof of with , using that $4\tau_{\max} \leq \bar \gamma$: $$\|M_{\hat \pi^{-1}} - M\|_F^2 \leq 4\sum_{\tau = 0}^{\tau_{\max}-1}\mathop{\mathrm{SN}}(\mathcal{P}^K_{\tau+1, 2(\tau + 1)})\leq CT\bar \gamma^{7}\rho_{\mathrm{perm}}(n, d, \lambda_0) \enspace .$$ # Proofs of the lemmas of and of ## Proof of {#proof-of} Let $\hat \pi$ be any arbitrary permutation that is consistent with the largest $\mathbf{DAG}$ $\mathcal{G}(\mathcal{W}, \bar \gamma)$, as defined in . Recall that we assume in this proof that $\pi^* = \mathrm{id}$. By , for any $i\in [n]$ there exists $\tau \in [0, \tau_{\max}-1]$ and $i_0 \in I(\tau)$ such that $i \in P^K_{\tau+1, 2\tau + 1}(i_0)$. Let us define the interval $[a,b]$ as the maximal interval containing $i_0$ and that is included in the more conservative set $P^{K}_{\tau+1, 2\tau}(i_0)$. Now, if $j > b$, then by definition there exists $j'$ such that $j \geq j' > b$ and $j' \not \in P^{K}_{\tau+1, 2\tau}(i_0)$. Summarizing the properties, we have $j \geq j' \overset{\mathcal{G}_{\tau+1, 2\tau}}{\succ} i_0$, and that $i$ is in the neighborhood of $i_0$ in the graph $\mathcal{G}_{\tau+1, 2\tau+1}$. Hence, applying the weak-transitivity property (second point in $\mathcal{C}$), holding true on event $\xi$ - see , we obtain that $j$ is $\mathcal{G}(\mathcal{W}_{K(\tau+1)}, \gamma_{\min})$-above $i$. By the consistency property (first point in $\mathcal{C}$), $j$ is also necessarily $\mathcal{G}(\mathcal{W}, \gamma_{\min})$-above $i$, and this proves that all the $n - b$ experts $j$ satisfying $j > b$ are $\mathcal{G}(\mathcal{W}, \gamma_{\min})$-above $i$. Hence it holds that $\hat \pi(i) \leq b$.
By symmetry, we also prove that $\hat \pi(i) \geq a$, so that $$\label{eq:subset_3} \hat \pi(i) \in [a,b] \subset P^K_{\tau+1, 2\tau}(i_0).$$ Finally, we have $$\begin{aligned} \| M_{\hat \pi^{-1}} - M \|_F^2 &= \sum_{i =1}^n \|M_{\hat \pi(i) \cdot} - M_{i \cdot} \|_2^2 \\ &\leq \sum_{\tau = 0}^{\tau_{\max}-1}\sum_{i_0 \in I(\tau)}\sum_{i \in P^K_{\tau+1, 2\tau+1}(i_0)} \|M_{\hat \pi(i) \cdot} - M_{i \cdot} \|_2^2 \\ &\leq 2\sum_{\tau = 0}^{\tau_{\max}-1}\sum_{i_0 \in I(\tau)}\sum_{i \in P^K_{\tau+1, 2\tau+1}(i_0)} \|M_{i \cdot} - \overline m(P^K_{\tau+1, 2\tau}(i_0)) \|_2^2 + \|M_{\hat \pi(i) \cdot} - \overline m(P^K_{\tau+1, 2\tau}(i_0)) \|_2^2 \\ &\leq 4\sum_{\tau = 0}^{\tau_{\max}-1}\sum_{i_0 \in I(\tau)}\sum_{i \in P^K_{\tau+1, 2\tau}(i_0)} \|M_{i \cdot} - \overline m(P^K_{\tau+1, 2\tau}(i_0)) \|_2^2 \enspace , \end{aligned}$$ where we used for the first inequality and for the last inequality. ## Proof of the lemmas of We postpone the proof of to the next subsection. *Proof of .* Recall that we consider the case $\lambda_0 \leq 1$, so that $\lambda_0 \land 1/\lambda_0 = \lambda_0$ in . Consider any substep of the whole procedure where the current directed weighted graph is $\mathcal{W}'$. For the first point, remark that $i$ is $\mathcal{G}(\mathcal{W}', \gamma_{\min})$-above $j$ only if there exists a previous substep during which we find out that $\langle Y_i \cdot - Y_j \cdot,w\rangle \geq \gamma_{\min}$ on some direction $w \in \mathbb{R}^Q$, where $Y$ is the sample used to refine the edges . Since $\gamma_{\min} > \phi_{\mathrm{L}_1}$, decomposing $Y = \lambda_1 M + E$ as in , we have $$\begin{aligned} \lambda_1\langle M_i \cdot - M_j \cdot,w\rangle \geq \langle Y_i \cdot - Y_j \cdot,w\rangle - \langle E_i \cdot - E_j \cdot,w\rangle > 0 \enspace, \end{aligned}$$ where the last inequality comes from , using the notation . Since the coefficients of $w$ are nonnegative, we have proven that $i$ is above $j$.
For the second point, assume that $i$ is $\mathcal{G}(\mathcal{W}, \gamma_u)$-above $j$, and take $i' \geq i$. As before, there exists a direction $w$ used during the procedure such that $\langle Y_i \cdot - Y_j \cdot,w\rangle \geq \gamma_u$. Now consider any $k \in \mathcal{N}(\mathcal{G}(\mathcal{W}, \gamma_{u+1}), j)$. On the direction $w$, we have under the event $\xi$ defined in that $$\begin{aligned} \langle Y_{i'} \cdot - Y_k \cdot,w\rangle &\geq \lambda_1\langle M_{i'} \cdot - M_k \cdot,w\rangle - \tfrac{1}{3}\phi_{\mathrm{L}_1}\sqrt{\lambda_0 }\\ &\geq \lambda_1\langle M_i \cdot - M_k \cdot,w\rangle - \tfrac{1}{3}\phi_{\mathrm{L}_1}\sqrt{\lambda_0 }\\ &\geq \langle Y_i \cdot - Y_j \cdot,w\rangle - \langle Y_k \cdot - Y_j \cdot,w\rangle - \phi_{\mathrm{L}_1}\sqrt{\lambda_0 } \\ &\geq (\gamma_u - \gamma_{u+1} - \phi_{\mathrm{L}_1})\sqrt{\lambda_0 } \geq \gamma_{\min}\sqrt{\lambda_0 } \enspace , \end{aligned}$$ where the last inequality comes from the assumption . We conclude that $i'$ is $\mathcal{G}(\mathcal{W}',\gamma_{\min})$-above $k$. ◻ *Proof of .* We proceed by induction over $\tau\geq 0$. The lemma is trivial for $\tau = 0,1$ since $I(0)$ is empty and $I(1) = (\left\lfloor(n+1)/2\right\rfloor)$. Let $\tau \geq 1$ and $i_1,i_2, i_3$ be three experts in $I(\tau)\cup\{0, n+1\}$ such that $i_1 < i_2 < i_3$. Let $A$ be the set of experts that are $\mathcal{G}^K_{\tau+1, 2\tau+1}$-above $i_1$ and $\mathcal{G}^K_{\tau+1,2\tau+1}$-below $i_2$, and $A'$ be the set of experts that are $\mathcal{G}^K_{\tau+1, 2\tau+1}$-above $i_2$ and $\mathcal{G}^K_{\tau+1,2\tau+1}$-below $i_3$. Assume that both sets $A$ and $A'$ are nonempty, and let $j \in A$ and $j' \in A'$. Let us apply the weak-transitivity of $\mathcal{W},(\gamma_u)$ in Property $\mathcal{C}$ - which holds true under $\xi$ from - with $u=2\tau+1$. Since $j$ is $\mathcal{G}^K_{\tau+1,2\tau+1}$-below $i_2$, any $k \in P^K_{\tau+1, 2(\tau + 1)}(j)$ is $\pi^*$-below $i_2$.
We also prove that any $k' \in P^K_{\tau+1, 2(\tau + 1)}(j')$ is $\pi^*$-above $i_2$. We conclude that $P^K_{\tau+1, 2(\tau + 1)}(j) < P^K_{\tau+1, 2(\tau + 1)}(j')$, and the proof of the lemma follows. ◻ *Proof of .* We prove that, by construction, any expert $i \in [n]$ is at distance less than $(n+1)/2^{\tau+1}$ of $\bigcup_{\tau' = 0}^{\tau} \bigcup_{i\in I(\tau')} P^K_{\tau'+1, 2\tau'+1}(i)\cup\{0, n+1\}$. This is obvious for $\tau = 0$ since any expert is at distance less than $(n+1)/2$ of $0$ or $n+1$. Let $(i_1, \dots, i_L)=\bigcup_{\tau' \leq \tau} I(\tau')$ be the collection of experts in the union of all the $I(\tau')$ for $\tau' \leq \tau$, ordered according to $\pi^*$. If $j$ is any expert in $[n]$, then we let $l \in [0, L]$ be such that $i_l \leq j \leq i_{l+1}$. We can assume that $j \not \in P^K_{\tau+1, 2\tau+1}(i_l)$ and $j \not\in P^K_{\tau+1, 2\tau+1}(i_{l+1})$ because otherwise the distance of $j$ to $\bigcup_{\tau' = 0}^{\tau} \bigcup_{i \in I(\tau')}P^K_{\tau'+1, 2\tau'+1}(i)$ is $0$. Using property $\mathcal{C}$ holding true from , it holds that the set $A$ of experts that are $\mathcal{G}^K_{\tau+1, 2\tau+1}$-above $i_l$ but $\mathcal{G}^K_{\tau+1, 2\tau+1}$-below $i_{l+1}$ contains $j$ and therefore is nonempty. Now, let $m = \left\lfloor(i_{l} + i_{l+1})/2\right\rfloor$ and $i'$ be any expert closest to $m$ in $A$, as defined in the construction of $I(\tau+1)$, and assume without loss of generality that $m \leq i'$. We consider the following cases: - $m \leq i' \leq j$: In that case, $j$ is at distance less than $(i_{l+1} - m)/2$ of $i'$ or $i_{l+1}$. - $m \leq j < i'$: This case is not possible since $i'$ is the closest expert to $m$ in $A$. - $j < m < i'$: In that case, since $i'$ minimizes the distance to $m$, we necessarily have that $m \in P^K_{\tau+1, 2\tau + 1}(i_l) \cup P^K_{\tau+1, 2\tau + 1}(i_{l+1})$. Hence $j$ is at distance less than $(m - i_l)/2$ of $m$ or $i_{l}$.
We have proved that the distance of any $j$ to $\bigcup_{\tau' = 0}^{\tau + 1} \bigcup_{i \in I(\tau')}P^K_{\tau'+1, 2\tau' + 1}(i)\cup\{0,n+1\}$ is at most $(m-i_l)/2$ or $(i_{l+1}-m)/2$. Using the induction hypothesis, we have that $m - i_l$ and $i_{l+1} -m$ are both less than $n/2^{\tau+1}$, which concludes the induction. Finally, applying this property with $\tau_{\max}-1 = \left\lfloor\log_2(n)\right\rfloor$ gives a distance strictly smaller than $1$, which proves the result. ◻ ## Proof of {#proof-of-1} Let us start with the following lemma, which gives a concentration bound when $\lambda_0 \leq 1$: **Lemma 13**. *For any $\delta'>0$ and for any matrix $W \in \mathbb{R}^{n \times d}$, the following inequality holds with probability at least $1-\delta'$: $$\label{eq:concentration_bernstein} |\langle E, W\rangle| \leq \sqrt{4e^2\|W\|_F^2\lambda_0\log\left(\frac{2}{\delta'}\right)} + \|W\|_{\infty}\log\left(\frac{2}{\delta'}\right) \enspace .$$* Now we apply with the matrix $W$ with $0$ coefficients except at row $i$, where it is equal to the vector $\frac{w}{\|w\|_2}$ defined in , and we deduce that $$\label{eq:transformation_B_w} \left\lvert\left\langle E_{i \cdot}, \frac{w}{\|w\|_2}\right\rangle\right\rvert \leq \sqrt{4e^2\lambda_0\log\left(\frac{2}{\delta'}\right)} + \frac{\|w\|_{\infty}}{\|w\|_2}\log\left(\frac{2}{\delta'}\right) \leq 11 \sqrt{\lambda_0} \log(2/\delta') \enspace ,$$ where the last inequality comes from Condition on $w$. Now, choosing $\delta' = \delta/(4Tn^6)$ and taking a union bound over the at most $2n^2T|\mathcal{H}|(|\Gamma| \land n^2)$ pairs $(Q,w)$ considered during the procedure, we deduce the bound of for all $\lambda_0 \leq 1$. *Proof of .* Recall that $E, \widetilde E$ are defined in and that we have in particular $$E_{ik} = (B_{ik}-\lambda_1)M_{ik} + \widetilde E_{ik} \enspace .$$ Let $x>0$.
By the Cauchy--Schwarz inequality, we have $$\mathbb{E}[e^{x E_{ik}}] \leq \sqrt{\mathbb{E}[e^{2x(B_{ik} - \lambda_1)M_{ik}}]}\sqrt{\mathbb{E}[e^{2x \widetilde E_{ik}}]} \enspace ,$$ where we recall that $\lambda_1 = 1 - e^{-\lambda_0} \leq \lambda_0$. We have $$\begin{aligned} \mathbb{E}[e^{2x(B_{ik} - \lambda_1)M_{ik}}] \leq e^{-2\lambda_1 xM_{ik}}(\lambda_1 (e^{2xM_{ik}}-1) + 1) \leq e^{\lambda_1 e^2 x^2} \enspace , \end{aligned}$$ and $$\begin{aligned} \mathbb{E}[e^{2x\widetilde E_{ik}}] \leq \lambda_1 (e^{2x^2}-1) + 1 \leq e^{\lambda_1 e^2 x^2} \enspace , \end{aligned}$$ where we used the inequalities $e^{2x^2} - 1 \leq e^{2}x^2$ and $e^{2x} - 1 - 2x \leq e^{2}x^2$ for any $x \in [-1, 1]$. In particular, if $t>0$, a Chernoff bound with $x = \tfrac{t}{2\|W\|_F^2 \lambda_0 e^2} \land \tfrac{1}{\|W\|_\infty}$ gives $$\mathbb{P}(\langle W, E\rangle \geq t) \leq \exp\left(-\left(\tfrac{t^2}{4\|W\|_F^2\lambda_0 e^2 }\land \tfrac{t}{\|W\|_\infty}\right)\right) \enspace ,$$ so that with probability at least $1-\delta'$: $$|\langle W, E\rangle| \leq \sqrt{4e^2\|W\|_F^2\lambda_0\log\left(\frac{2}{\delta'}\right)} + \|W\|_{\infty}\log\left(\frac{2}{\delta'}\right) \enspace .$$ ◻ # Proof of {#sec:proof_UB_on_square_norm} **Step 0 : general definitions** In this proof, we fix $u \in \{0,\dots, 2\left\lfloor\log_2(n)\right\rfloor+2\}$ and a corresponding threshold $\gamma_u$ in the sequence in $\Gamma$ satisfying $\gamma_u \geq \phi_{\mathrm{L}_1}$ - see - and a step $t < T$. We assume that $I = (i_1, \dots, i_{L})$ is a fixed sequence of experts that satisfies $P_{t, u}(i_1) \overset{\mathcal{G}_{t,u}}{\prec} \dots \overset{\mathcal{G}_{t,u}}{\prec} P_{t, u}(i_{L})$. From now on, we ease the notation by omitting the dependence in $t,u,\gamma_u$ and we write $\mathcal{G}= \mathcal{G}_{t,u}$, $\mathcal{G}' = \mathcal{G}_{t+1, u}$, $\mathcal{P}=(P_1, \dots, P_{L})$ for $\mathcal{P}_{t,u}$ and $\mathcal{P}'$ for $\mathcal{P}_{t+1,u}$.
We denote $\widetilde \mathcal{G}^h$ for the directed graph at threshold $\gamma_u$ of the directed weighted graph $\widetilde \mathcal{W}^h$ obtained at the end of the first update of . We also write $\widetilde P^h_l = \mathcal{N}(\widetilde \mathcal{G}^h, i_l)$ and $\widetilde \mathcal{P}^h = (\widetilde P^h_1, \dots, \widetilde P^h_{L})$ for the corresponding sequence of subsets at height $h \in \mathcal{H}$. By monotonicity, it holds for any $h \in \mathcal{H}$ that $$P'_l \subset \widetilde P^h_{l} \subset P_l \enspace .$$ ## Step 1: Analysis of the selected set $\widehat Q$ {#subsec:step_1} Recall the definition of the neighborhoods of the set $P_l$ in the graph $\mathcal{G}$: $$\mathcal{N}_a(l) = \bigcap_{i \in P_l} \mathbf{rk}_{\mathcal{G}, i_l}^{-1}([1, a]) \quad \text{ and } \quad\mathcal{N}_{-a}(l) = \bigcap_{i \in P_l} \mathbf{rk}_{\mathcal{G}, i_l}^{-1}([-a, -1]) \enspace ,$$ Define for $a \geq 0$ and $l \in [1, L]$ the population version ${\boldsymbol\Delta}^*_k$ of the width statistic $\widehat {\boldsymbol\Delta}_k$ - see - as the difference of the best and worst expert of $P_l$ if $a=0$, and as the difference of the average of the experts in $\mathcal{N}_a(l)$ and the average of the experts in $\mathcal{N}_{-a}(l)$ if $a \geq 1$: $${\boldsymbol\Delta}^*_k(0, l) = \max_{i,j \in P_l}|M_{i,k} - M_{j,k}|\quad \text{ and } \quad\enspace {\boldsymbol\Delta}^*_k(a, l) = \overline m_{k}(\mathcal{N}_a(l)) - \overline m_{k}(\mathcal{N}_{-a}(l)) \mbox{ if } a \geq 1 .$$ We also define $a^*(h,l)$ as the minimum $a \geq 1$ such that there are at least $\tfrac{1}{\lambda_0h^2}$ experts in $\mathcal{N}_a(l)$ and in $\mathcal{N}_{-a}(l)$: $$\label{eq:definition_nu} a^*(h, l) = \min \{a \geq 1 ~:~ |\mathcal{N}_a(l)|\land |\mathcal{N}_{-a}(l)| \geq \frac{1}{\lambda_0 h^2}\} \enspace .$$ Now, define for $\phi \geq 1$: $$\begin{aligned} \label{eq:definition_Q_pop} \begin{split} Q^{*h}_l(\phi) &:= \{k \in [d] ~:~ {\boldsymbol\Delta}^*_k(0, l) \in [\phi h, 2 \phi h]\} \\
\overline Q^{*h}_l(\phi) &:= \{k \in [d] ~:~ {\boldsymbol\Delta}^*_k(a^*(\phi^{-1}h, l), l) \geq h/2 \}\enspace . \end{split}\end{aligned}$$ The following lemma states that, for $\phi$ of order $\log(nd/\delta)$, we can sandwich $\widehat Q^h_l$ between the two fixed sets $Q^{*h}_l$ and $\overline Q^{*h}_l$: **Lemma 14**. *Let $l$ be a fixed index in $\{1, \dots, L\}$ and $h$ a fixed height in $\mathcal{H}$. There exists a numerical constant $\kappa_0 > 0$ such that, with probability at least $1- \delta/(L|\mathcal{H}|)$, we have $$Q^{*h}_l(\kappa_0\log(nd/\delta)) \subset \widehat Q_l^{h} \subset \overline Q^{*h}_l(\kappa_0\log(nd/\delta)) \enspace .$$* ## Step 2 : $L_1$-control of the intermediary sets $\widetilde \mathcal{P}^h$ Recall that $\gamma_u$ is a threshold corresponding to a sequence in $\Gamma$ as defined in . For any sets $P\subset [n], Q \subset [d]$, we say that $M(P,Q)$ is indistinguishable in $L_1$-norm if it satisfies $$\label{eq:indistinguishable} \max_{i,j \in P} \| M_{i\cdot}(P, Q) - M_{j\cdot}(P,Q) \|_1 \leq 3\gamma_u\sqrt{\frac{|Q|}{\lambda_0}}\enspace .$$ For a fixed $l \in \{1, \dots, L\}$, let $\xi_{\mathrm{L}_1}(l,h)$ be the event under which $M(\widetilde P^h_l, \widehat Q^{h}_l)$ is indistinguishable in $L_1$-norm. **Lemma 15**. *Let $l$ be a fixed index in $\{1, \dots, L\}$ and $h \in \mathcal{H}$ such that $\lambda_0|Q^{*h}_l| \geq 1$. The event $\xi_{\mathrm{L}_1}(l,h)$ holds true with probability at least $1-\delta / (L|\mathcal{H}|)$.* Let $\kappa_0$ be a numerical constant given by and let $\phi_0 = \kappa_0 \log(nd/\delta)$. In what follows, we write for simplicity $(Q^{*h}_l, \widehat Q^{h}_l,\overline Q^{*h}_l) = (Q^{*h}_l(\phi_0), \widehat Q^{h}_l(\phi_0),\overline Q^{*h}_l(\phi_0))$. provides an upper bound only on the $L_1$ distance between rows of $M$ restricted to the subsets $\widetilde P^h_l$ and $\widehat Q^{h}_l$, while the square norm of a group is defined with the $L_2$ distance; we pass from the former to the latter with .
The idea is that for any $k$ in $Q^{*h}_l$, and for any $i \in \widetilde P^h_l$, we have that $|M_{ik} - \overline m_{k}|^2\leq 2\phi_0 h|M_{ik} - \overline m_{k}|$. In particular, $\| M_{i\cdot}(\widetilde P^h_l, Q^{*h}_l) - \overline m_{\cdot}(\widetilde P^h_l, Q^{*h}_l) \|_2^2 \leq 2\phi_0 h\| M_{i\cdot}(\widetilde P^h_l, Q^{*h}_l) - \overline m_{\cdot}(\widetilde P^h_l, Q^{*h}_l) \|_1$. Hence, it holds from , and a union bound over all $l \in \{1, \dots, L\}$ and all $h \in \mathcal{H}$ satisfying $\lambda_0|Q^{*h}_l| \geq 1$ that with probability at least $1-2\delta$, $$\label{eq:control_l1_to_l2} \sum_{i \in \widetilde P^h_l} \| M_{i\cdot}(\widetilde P^h_l, Q^{*h}_l) - \overline m_{\cdot}(\widetilde P^h_l, Q^{*h}_l) \|_2^2 \leq 6\phi_0 \gamma_u \left[h|\widetilde P^h_l|\sqrt{\frac{|\overline Q^{*h}_l|}{\lambda_0}}\right] \enspace ,$$ simultaneously for all $l \in \{1, \dots, L\}$ and $h \in \mathcal{H}$ satisfying $\lambda_0|Q^{*h}_l| \geq 1$. *Proof of .* Let $l$ be a fixed index in $\{1, \dots, L\}$ and $h$ be a fixed height in $\mathcal{H}$. If $a \geq 1$, the subset $P_l$ is disjoint from the sets $\mathcal{N}_a(l) \cup \mathcal{N}_{-a}(l)$, so that $\widehat Q^h_l$ is independent of $Y^{(1)}(P_l)$. Remark also that condition is satisfied since $\lambda_0|Q^{*h}_l| \geq 1$ and $Q^{*h}_l \subset \widehat{Q}^{h}_l$. Recall that we assume that $\lambda_0 \leq 1$. We write $w = {\mathbf 1}_{\widehat Q^h_l}$ and we recall that $B = (B_{ik})$ is the matrix defined in . Let $i,j \in \widetilde P^{h}_l$ so that, by definition, we have that $\left\lvert\langle Y_i\cdot - Y_j\cdot, w\rangle\right\rvert \leq \gamma_u \sqrt{\lambda_0|\widehat Q^h_l|}$.
With probability at least $1-\delta /L$, for all $i,j$ in $P_l$ we have that $$\begin{aligned} \lambda_1 \left\lvert\langle M_i\cdot - M_j\cdot, w\rangle\right\rvert \leq \left\lvert\langle Y_i\cdot - Y_j\cdot, w\rangle\right\rvert + \left\lvert\langle E_i\cdot - E_j\cdot, w\rangle\right\rvert \leq (\gamma_u + \phi_{\mathrm{L}_1}/2)\sqrt{\lambda_0|\widehat Q^h_l|} \enspace , \end{aligned}$$ where the last inequality comes from applied with $\delta' = \delta/n^3$ and from the definition of $\phi_{\mathrm{L}_1}$ . Recalling the two inequalities $\lambda_1 = 1- e^{-\lambda_0} \geq \lambda_0/2$ and $\phi_{\mathrm{L}_1} \leq \gamma_u$, we obtain the result. ◻ ## Step 3 : Local square norm reduction Henceforth we condition on the sample $Y^{(1)}$ of , which allows us to assume that, for any $h \in \mathcal{H}$, the two sequences of sets $\widetilde \mathcal{P}^{h}$ and $\widehat \mathcal{Q}^h$ are fixed. For $\kappa_1 > 0$, let $\xi_{{\mathrm{loc}}}(l, h, \kappa_1)$ be the event holding true if the local square norm of $M(P_l, \widehat Q_l^h)$ has decreased at the end of , that is $$\begin{aligned} \label{eq:local_square_norm_reduction} \begin{split} \| M(P'_l, \widehat Q_l^h) - \overline M(P'_l, \widehat Q_l^h)\|_F^2 \leq & ~\kappa_1 \gamma_u^4\left[\frac{1}{\lambda_0}\sqrt{|P_l| |\widehat Q_l^h|} + \frac{|P_l|}{\lambda_0}\right] \\ &~\lor \left(1 - \frac{1}{4\gamma_u^2}\right)\| M(P_l, \widehat Q_l^h) - \overline M(P_l, \widehat Q_l^h)\|_F^2 \enspace . \end{split}\end{aligned}$$ The following proposition states that, given that the experts in $\widetilde P^{h}_l$ are indistinguishable in $L_1$-norm and that $\lambda_0 (|\widetilde P^h_l|\land |Q_l^{*h}|) \geq 1$, the event $\xi_{{\mathrm{loc}}}$ holds true with high probability. **Proposition 16**. *There exists a numerical constant $\kappa_1$ such that the following holds, for any fixed index $l$ in $\{1, \dots, L\}$, and fixed height $h$ in $\mathcal{H}$.
*Conditionally on $Y^{(1)}$, on the event $\xi_{\mathrm{L}_1}(l,h)$ and on $\lambda_0 (|\widetilde P^h_l|\land |Q_l^{*h}|) \geq 1$, the event $\xi_{{\mathrm{loc}}}(l, h, \kappa_1)$ holds true with probability at least $1- 3\delta/(L|\mathcal{H}|)$.* is at the core of the analysis, and its proof contains a significant part of the arguments. This proposition and its proof are similar to Proposition D.5 in [@pilliat2022optimal], but the main difficulty with respect to [@pilliat2022optimal] is that we do not achieve the optimal rate in $\lambda_0 \leq 1$ using only the subgaussianity of the coefficients of the noise $E$. A key step in the proof of is , which implies , and which gives a concentration inequality for the operator norm of $EE^T - \mathbb{E}[EE^T]$. is effective in that case since the coefficients of $E$ will be proven to satisfy . Then, the idea is that when a group $P'_l$ has a square norm of order at least $\frac{1}{\lambda_0}\sqrt{|P_l| |\widehat Q_l^h|} + \frac{|P_l|}{\lambda_0}$, the PCA-based procedure defined as in will output a vector $\hat v$ that is well aligned with the first left singular vector of $M(\widetilde P^h_l, \widehat Q^h_l)- \overline M(\widetilde P^h_l, \widehat Q^h_l)$. Moreover, the isotonic structure of $M(\widetilde P^h_l, \widehat Q^h_l)- \overline M(\widetilde P^h_l, \widehat Q^h_l)$ implies in fact that its operator norm is greater than a polylogarithmic fraction of its Frobenius norm (see or Lemma E.4 in [@pilliat2022optimal]), so that $\|\hat v^T(M(\widetilde P^h_l, \widehat Q^h_l)- \overline M(\widetilde P^h_l, \widehat Q^h_l))\|_2^2$ is of the same order as the square Frobenius norm. Hence after updating the edges, we can prove that the experts in $\widetilde P^h_l \setminus P'_l$ were contributing significantly to the Frobenius norm, which enforces the contraction part in the second term of the maximum in . All the details of the proof can be found in .
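To illustrate the PCA step informally: for a centered matrix whose rows vary along a common monotone direction, the top left singular direction captures most of the squared Frobenius norm. A hedged pure-Python sketch (power iteration on $AA^T$, with a hypothetical rank-one matrix; this is not the actual procedure of the algorithm):

```python
# Power iteration on A A^T to approximate the first left singular vector,
# then measure the fraction of the squared Frobenius norm it captures.
def top_energy_fraction(A, iters=200):
    n, d = len(A), len(A[0])
    v = [float(i + 1) for i in range(n)]  # asymmetric init avoids a zero projection
    for _ in range(iters):
        Atv = [sum(A[i][k] * v[i] for i in range(n)) for k in range(d)]
        w = [sum(A[i][k] * Atv[k] for k in range(d)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    vA = [sum(v[i] * A[i][k] for i in range(n)) for k in range(d)]
    return sum(x * x for x in vA) / sum(x * x for row in A for x in row)

# hypothetical matrix with rows increasing together, then column-centered
M = [[0.1 * i + 0.05 * k for k in range(6)] for i in range(5)]
means = [sum(M[i][k] for i in range(5)) / 5 for k in range(6)]
A = [[M[i][k] - means[k] for k in range(6)] for i in range(5)]
frac = top_energy_fraction(A)
```

Here the centered matrix is rank one, so the captured fraction is essentially $1$; for general isotonic matrices the statement above only guarantees a polylogarithmic fraction.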
## Step 4 : Control of the size of the sets $\overline Q^{*h}_l$ For any $p \in [n]\cap \{2^k~:~ k \in \mathbb{Z}^+\}$, let $\mathcal{L}(p)$ be the set of indices $l = 1, \dots, L$ whose corresponding group size $|P_l|$ belongs to $[p,2p)$. The two upper bounds implied by and both depend on the selected subset of columns, which is included in $\overline Q_l^{*h}$ under the event of . The following lemma provides an upper bound on the sum over $l\in \mathcal{L}(p)$ of the size of the sets $\overline Q_l^{*h}(\phi)$ defined in , for any $\phi>0$. **Lemma 17**. *For any $\phi \geq 1$ and any $h\in \mathcal{H}$, it holds that $$\sum_{l \in \mathcal{L}(p)}|\overline Q^{*h}_l(\phi)| \leq 12 \phi^2 \left(\frac{1}{p\lambda_0 h^2}\lor 1\right)\frac{d}{h} \enspace .$$* The proof of mainly relies on the fact that the coefficients of $M$ are bounded by $1$. Then, the idea is that in the case where all the sets $P_{l}$ are of size $p$, it is enough to take a number of groups $a$ of order at most $\tfrac{1}{p\lambda_0h^2} \lor 1$ above and below each $P_l$ to ensure that the corresponding neighborhood of $P_l$ has size $|\mathcal{N}_a(l)| \land |\mathcal{N}_{-a}(l)| \geq \frac{1}{\lambda_0h^2}$. ## Step 5 : Conclusion of the previous steps We first decompose the square norm $\mathop{\mathrm{SN}}(\mathcal{P})$ as defined in into two terms. Assume that the event of , $\xi_{\mathrm{L}_1}(l,h)$ and $\xi_{{\mathrm{loc}}}(l,h, \kappa_1)$ - see and - hold true. Define $\mathcal{L}_{-}$ as the set of indices $l$ such that the corresponding reduced subsets $P'_l$ have low local square norm for all $h \in \mathcal{H}$.
More precisely, we say that $l \in \mathcal{L}_{-}$ if for all $h \in \mathcal{H}$ we have $$\begin{aligned} \label{eq:L_minus} \begin{split} \| M(P'_l, \widehat Q_l^h) - \overline M(P'_l, \widehat Q_l^h)\|_F^2 \leq & ~\kappa_1 \gamma_u^4\left[\frac{1}{\lambda_0}\sqrt{|P_l| |\widehat Q_l^h|} + \frac{|P_l|}{\lambda_0}\right] \\ &~\lor \frac{1}{2|\mathcal{H}|}\| M(P_l) - \overline M(P_l)\|_F^2 \enspace . \end{split}\end{aligned}$$ We also define the complement $\mathcal{L}_{+} = [1, L] \setminus \mathcal{L}_{-}$ and the corresponding subsets $\mathcal{P}'_+, \mathcal{P}'_-$ in $\mathcal{P}'$. We have the following decomposition: $$\label{eq:decomposition_square_norm} \mathop{\mathrm{SN}}(\mathcal{P}') = \mathop{\mathrm{SN}}(\mathcal{P}'_+) + \mathop{\mathrm{SN}}(\mathcal{P}'_-) \enspace .$$ Let us now give an upper bound on $\mathop{\mathrm{SN}}(\mathcal{P}'_+)$. For any $l \in \mathcal{L}_{+}$, there exists by definition an element $h_l \in \mathcal{H}$ such that $\|M(P'_l,\widehat Q^{h_l}_l) - \overline M(P'_l,\widehat Q^{h_l}_l)\|_F^2 > \kappa_1 \gamma_u^4\left[\frac{1}{\lambda_0}\sqrt{|P_l| |\widehat Q_l^{h_l}|} + \frac{|P_l|}{\lambda_0}\right] \lor \frac{1}{2|\mathcal{H}|}\| M(P_l) - \overline M(P_l)\|_F^2$. Hence applying with $h=h_l$, we have that, for any $l \in \mathcal{L}_+$, $$\begin{aligned} \|M(P'_l) - \overline M(P'_l) \|_F^2 &= \|M(P'_l,\widehat Q^{h_l}_l) - \overline M(P'_l,\widehat Q^{h_l}_l)\|_F^2 + \|M(P'_l, [d]\setminus \widehat Q^{h_l}_l) - \overline M(P'_l, [d]\setminus \widehat Q^{h_l}_l)\|_F^2 \\ &\leq \| M(P_l) - \overline M(P_l)\|_F^2 - \frac{1}{4\gamma_u^2}\|M(P_l,\widehat Q^{h_l}_l) - \overline M(P_l,\widehat Q^{h_l}_l)\|_F^2 \\ &\leq \left(1 - \frac{1}{\gamma_u^3}\right)\| M(P_l) - \overline M(P_l)\|_F^2 \enspace ,\end{aligned}$$ where the last inequality comes from the second term of together with $P'_l \subset P_l$ and $\gamma_u \geq \phi_{L_1} \geq 8|\mathcal{H}|$, with $\phi_{L_1}$ defined in .
Hence we obtain that $$\label{eq:ub_var_P_plus} \mathop{\mathrm{SN}}(\mathcal{P}'_+) \leq \left(1 - \frac{1}{\gamma_u^3}\right)\mathop{\mathrm{SN}}(\mathcal{P}_+) \enspace .$$ Finally, we give an upper bound on $\mathop{\mathrm{SN}}(\mathcal{P}'_-)$. Let us write $\mathcal{D}_n = \{2^k~:~ k\in \mathbb{Z}^+\}\cap [n]$ for the set of dyadic integers smaller than $n$. Given $p\in \mathcal{D}_n$, we write $\mathcal{L}_-(p) = \mathcal{L}(p)\cap\mathcal{L}_-$ for the set of indices in $\mathcal{L}_-$ such that $|P_l|\in [p, 2p)$, and $\mathcal{P}'_-(p)$ for the corresponding sequence of subsets in $\mathcal{P}'_-$. Let $\phi_0 = \kappa_0 \log(nd/\delta)$, where $\kappa_0$ is a numerical constant given by . By definition of $Q^{*h}_l$, the square norm of a group $P'_l$ restricted to questions that do not belong to the set $\cup_{h\in \mathcal{H}}Q^{*h}_l$ is smaller than $\phi_0nd\cdot\min(\mathcal{H}) \leq \phi_0$. Hence we have that $$\label{eq:var_P_minus_1} \mathop{\mathrm{SN}}(\mathcal{P}'_-) = \sum_{p \in \mathcal{D}_n}\mathop{\mathrm{SN}}(\mathcal{P}'_-(p))\leq \phi_0 + \sum_{(p,h) \in \mathcal{D}_n \times \mathcal{H}}\sum_{l \in \mathcal{L}_-(p)}\| M(P'_l,Q^{*h}_l) - \overline M(P'_l,Q^{*h}_l)\|_F^2 \enspace .$$ If $\lambda_0|Q^{*h}_l|\leq 1$ then we use the trivial inequality $\| M(P'_l,Q^{*h}_l) - \overline M(P'_l,Q^{*h}_l)\|_F^2 \leq |P'_l||Q^{*h}_l| \leq |P'_l|/\lambda_0$, since the entries of $M$ are bounded by one. If $\lambda_0|Q^{*h}_l|\geq 1$ and $|\widetilde P^{h}_l| \lambda_0 \leq 1$, we have that $h|\widetilde P^{h}_l|\sqrt{\frac{|\overline Q^{*h}_l|}{\lambda_0}} \leq \sqrt{\frac{|\widetilde P^{h}_l||\overline Q^{*h}_l|}{\lambda_0^2}}$, using the fact that $h \leq 1$.
Hence, since the experts in $P'_l \subset \widetilde P^{h}_l$ are indistinguishable in $L_1$-norm by , holds true and we have $$\begin{aligned} \| M(P'_l,Q^{*h}_l) - \overline M(P'_l,Q^{*h}_l)\|_F^2 &\leq 6\phi_0 \gamma_u \left[h|\widetilde P^h_l|\sqrt{\frac{|\overline Q^{*h}_l|}{\lambda_0}}\right]\\ &\leq 6\phi_0 \gamma_u \left[\sqrt{h^2|\widetilde P^h_l|^2\frac{|\overline Q^{*h}_l|}{\lambda_0}} \land \sqrt{\frac{|\widetilde P^{h}_l||\overline Q^{*h}_l|}{\lambda_0^2}}\right]\\ &\leq 12\phi_0\gamma_u \left[\sqrt{(h^2p \lambda_0 \land 1)\frac{p|\overline Q_l^{*h}|}{\lambda_0^2}} + \frac{p}{\lambda_0}\right] \enspace .\end{aligned}$$ Finally, if $\lambda_0(|Q^{*h}_l|\land |\widetilde P^{h}_l|) \geq 1$, we are in a position to apply : for all $l \in \mathcal{L}_-(p)$ and $h \in \mathcal{H}$, $\| M(P'_l,Q^{*h}_l) - \overline M(P'_l,Q^{*h}_l)\|_F^2$ is either smaller than $\frac{1}{2|\mathcal{H}|}\| M(P_l) - \overline M(P_l)\|_F^2$, or it is smaller than $\kappa_1 \gamma_u^4\left[\frac{1}{\lambda_0}\sqrt{|P_l| |\widehat Q_l^h|} + \frac{|P_l|}{\lambda_0}\right]$. From , it is also smaller than $6\phi_0 \gamma_u h|\widetilde P^{h}_l|\sqrt{\frac{|\overline Q^{*h}_l|}{\lambda_0}}$. As a consequence, we obtain the following upper bound: $$\begin{aligned} \label{eq:var_P_minus_2} \begin{split} \| M(P'_l,Q^{*h}_l) - \overline M(P'_l,Q^{*h}_l)\|_F^2 \leq &\kappa_2\gamma_u^4 \left[\sqrt{(h^2p \lambda_0 \land 1)\frac{p|\overline Q_l^{*h}|}{\lambda_0^2}} + \frac{p}{\lambda_0}\right]\\ &~\lor\frac{1}{2|\mathcal{H}|}\| M(P_l) - \overline M(P_l)\|_F^2 \enspace , \end{split}\end{aligned}$$ with $\kappa_2 = 12(\kappa_0 \lor \kappa_1)$, and using that $\phi_0 \leq \kappa_0\gamma_u$ and $|\widetilde P^h_l| \leq |P_l| \leq 2p$. By the two previous cases on $l$, the inequality is valid for any $l \in \mathcal{L}_-(p)$. Now, we decompose into two terms, corresponding to the maximum in .
First, since each $P_l$ is in at most one $\mathcal{P}_-(p)$ for $p \in \mathcal{D}_n$, we have $$\label{eq:var_P_minus_3} \sum_{(p,h) \in \mathcal{D}_n \times \mathcal{H}}\sum_{l \in \mathcal{L}_-(p)}\frac{1}{2|\mathcal{H}|}\| M(P_l) - \overline M(P_l)\|_F^2 \leq \frac{1}{2}\mathop{\mathrm{SN}}(\mathcal{P}_-)\enspace .$$ Secondly, we have that $$\begin{aligned} \kappa_2\gamma_u^4 \sum_{(p,h) \in \mathcal{D}_n \times \mathcal{H}} \sum_{l \in \mathcal{L}_-(p)} &\left[\sqrt{(h^2p \lambda_0 \land 1)\frac{p|\overline Q_l^{*h}|}{\lambda_0^2}} + \frac{p}{\lambda_0}\right]\\ &\leq \kappa_2\gamma_u^6\left[\max_{p,h} \sum_{l \in \mathcal{L}_-(p)} \sqrt{(h^2p \lambda_0 \land 1)\frac{p|\overline Q_l^{h*}|}{\lambda_0^2}} + \frac{p}{\lambda_0} \right]\\ &\overset{(a)}{\leq} 2\kappa_2\gamma_u^6\max_{p,h} \left[\frac{n}{\lambda_0} + \sqrt{(h^2p\lambda_0 \land 1)\frac{p|\mathcal{L}(p)|\sum_{l \in \mathcal{L}(p)}|\overline Q_l^{h*}|}{\lambda_0^2}} \right]\\ &\overset{(b)}{\leq}4\kappa_2^2\gamma_u^7\max_{p,h} \left[\frac{n}{\lambda_0} + \sqrt{(h^2p\lambda_0 \land 1)\left(\frac{n^2d}{\lambda_0^2p}\land \left(\frac{nd}{p \lambda_0^3h^3} \lor \frac{nd}{\lambda_0^2h}\right)\right)} \right] \\ &\leq 4\kappa_2^2\gamma_u^7\max_{p,h} \left[\frac{n}{\lambda_0} + nh\sqrt{\frac{d}{\lambda_0}}\land \sqrt{\frac{n^2dh^2}{\lambda_0}\land \frac{nd}{\lambda_0^2h}} \right] \\ &\overset{(c)}{\leq} 4\kappa_2^2\gamma_u^7\left[\frac{n}{\lambda_0} + n\sqrt{\frac{d}{\lambda_0}} \land \frac{n^{2/3}\sqrt{d}}{\lambda_0^{5/6}} \right] \enspace .\end{aligned}$$ where in $(a)$ we used the Jensen inequality, in $(b)$ we used with $\phi = \phi_0$ together with the trivial inequality $\sum_{l \in \mathcal{L}(p)}|\overline Q_l^{h*}| \leq nd/p$ and in $(c)$ the fact that $x \land y \leq x^{2/3}y^{1/3}$ and $h \leq 1$. 
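Step $(c)$ uses the elementary bound $x \land y \leq x^{2/3}y^{1/3}$ for nonnegative $x, y$: if $x \leq y$ then $x = x^{2/3}x^{1/3} \leq x^{2/3}y^{1/3}$, and symmetrically if $y \leq x$. A quick numerical sanity check, purely illustrative:

```python
# Check min(x, y) <= x^(2/3) * y^(1/3) on a grid of nonnegative values.
vals = [0.0, 0.1, 0.5, 1.0, 3.7, 10.0, 1e6]
ok = all(min(x, y) <= x ** (2 / 3) * y ** (1 / 3) + 1e-9
         for x in vals for y in vals)
assert ok
```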
Finally, combining this last inequality with , and , we obtain $$\begin{aligned} \mathop{\mathrm{SN}}(\mathcal{P}') &= \mathop{\mathrm{SN}}(\mathcal{P}'_+) + \mathop{\mathrm{SN}}(\mathcal{P}'_-) \\ &\leq \left(1 - \frac{1}{\gamma_u^3}\right)\mathop{\mathrm{SN}}(\mathcal{P}_+) + 4\kappa_2^2\gamma_u^7\left[\frac{n}{\lambda_0} + n\sqrt{\frac{d}{\lambda_0}} \land \frac{n^{2/3}\sqrt{d}}{\lambda_0^{5/6}} \right] \lor \left[\frac{1}{2}\mathop{\mathrm{SN}}(\mathcal{P}_-)\right]\\ &\leq \left[C\bar \gamma^7\left(\frac{n}{\lambda_0} + n\sqrt{\frac{d}{\lambda_0}} \land \frac{n^{2/3}\sqrt{d}}{\lambda_0^{5/6}} \right)\right] \lor \left[\left(1 - \frac{1}{\bar \gamma^3}\right)\mathop{\mathrm{SN}}(\mathcal{P})\right] \enspace ,\end{aligned}$$ where we recall that $\bar \gamma$ is defined in and satisfies $\bar \gamma \geq \gamma_u$. This concludes the proof of . # Proof of the lemmas of {#sec:proof_lemma_ub_square_norm} Recall that we can write $$\label{eq:signal_noise_2} E= (B- \mathbb{E}[B]) \odot M + B \odot \widetilde E \enspace ,$$ where we recall that $\widetilde E = Y - \mathbb{E}[Y | B]$ and that $B$ is a matrix of Bernoulli random variables with parameter $\lambda_1$. *Proof of .* Assume first that $\lambda_0 \leq 1$. Let us fix $l\in \{1, \dots, L\}$ and $h \in \mathcal{H}$. We omit the dependence on $l$ in this proof to ease the notation and we write $P$ for $P_l$.
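The signal-noise decomposition can be illustrated numerically. Here we take $Y = B \odot (M + \widetilde E)$, which is our reading of the model since $\mathbb{E}[Y \mid B] = B \odot M$; under this reading the identity $Y - \lambda_1 M = E$ is exact. This is a toy sketch with shapes and seed of our choosing:

```python
import numpy as np

# E = (B - E[B]) ⊙ M + B ⊙ Ẽ, with B Bernoulli(lam1) and Ẽ centered noise.
rng = np.random.default_rng(0)
lam1 = 0.3
M = rng.random((4, 5))
B = (rng.random((4, 5)) < lam1).astype(float)
E_tilde = rng.standard_normal((4, 5))
Y = B * (M + E_tilde)                 # observed matrix
E = (B - lam1) * M + B * E_tilde      # the decomposition above
assert np.allclose(Y - lam1 * M, E)   # hence Y = lam1 * M + E exactly
```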
Let us define $$E'_{k}(a) := \frac{1}{|\mathcal{N}_a|}\sum_{i \in \mathcal{N}_a}E_{ik} - \frac{1}{|\mathcal{N}_{-a}|}\sum_{i \in \mathcal{N}_{-a}}E_{ik} \quad \text{ and } \quad\nu(a) := |\mathcal{N}_a| \land |\mathcal{N}_{-a}| \enspace .$$ Using with a column matrix $W$ with coefficients in $\{0, \tfrac{1}{|\mathcal{N}_a|}, -\frac{1}{|\mathcal{N}_{-a}|}\}$ and a union bound over all $k \in [d]$ and $a \in [n]$, we have with probability at least $1-\delta/L$ that: $$\label{eq:controle_varepsilon_echange} \frac{1}{\lambda_0}\left\lvert E'_{k}(a)\right\rvert \leq \kappa'_0 \log(nd/\delta)\left[\sqrt{\frac{1}{\lambda_0\nu(a)}} + \frac{1}{\lambda_0\nu(a)}\right] \enspace ,$$ for some numerical constant $\kappa'_0$. In what follows, we work on the event where holds true for all $a \in [n]$ and $k \in [d]$. **First inclusion.** Let $k \in Q^{*}(\kappa_0\log(nd/\delta)h)$ with numerical constant $\kappa_0$ to be fixed later. Let $a' \geq 1$ be any integer such that $\nu(a')\geq 1/(\lambda_0 h^2)$. We have $$\begin{aligned} \frac{1}{\lambda_0}\left\lvert E'_k(a')\right\rvert \leq 2\kappa'_0\log(nd/\delta)h \enspace , \end{aligned}$$ since we work under the event defined by and since $h^2 \leq h$. Then by consistency of the already constructed graph $\mathcal{G}_{t,u}$ at the beginning of step $t$, $\mathcal{N}_{a'}$ (resp. $\mathcal{N}_{-a'}$) contains by definition only experts that are $\pi^*$-above (resp. below) all the experts of $P$. Since by assumption $k$ is in $Q^{*h}$, it holds that ${\boldsymbol\Delta}^*_k(a') \geq {\boldsymbol\Delta}^*_k(0) \geq \kappa_0\log(nd/\delta)h$ (see the definition of $Q^{*h}$). Hence, recalling the signal-noise decomposition , we have that $$\begin{aligned} \frac{1}{\lambda_0}\widehat {\boldsymbol\Delta}_k(a') &= \frac{\lambda_1}{\lambda_0}{\boldsymbol\Delta}_k^*(a') + \frac{1}{\lambda_0}E'_k(a') \label{eq:noise_div_lambda_0} \geq \log(nd/\delta)((1- 1/e)\kappa_0 - 2\kappa'_0)h \enspace .
\end{aligned}$$ Choosing $\kappa_0 \geq 10\kappa'_0 + 1$, we obtain by definition that $\nu(\hat a_k(h)) \leq \frac{1}{\lambda_0 h^2}$ so that $k \in \widehat Q^h$. **Second inclusion.** Let $k \in \widehat Q^h$, and $a' = a^*((\kappa_0\log(nd/\delta))^{-1}h)$ be as defined in . By definition, it holds that $\nu(a') \geq \kappa_0\log(nd/\delta)/(\lambda_0h^2) \geq \tfrac{1}{\lambda_0h^2}$. Hence, since $k \in \widehat Q^h$, we have by definition that $\nu(\hat a_k(h)) \leq \frac{1}{\lambda_0h^2} \leq \nu(a')$, which implies in particular that $\hat a_k(h) \leq a'$. Then, by definition of $\hat a_k(h)$ we have that $\tfrac{1}{\lambda_0}\widehat {\boldsymbol\Delta}_k(a') \geq h$. Using the concentration inequality with $h' = (\kappa_0\log(nd/\delta))^{-1}h$ and the fact that $\lambda_0 \geq \lambda_1$ we obtain $$\begin{aligned} {\boldsymbol\Delta}^*_k(a') \geq h - \frac{2\kappa'_0}{\kappa_0}h \enspace , \end{aligned}$$ and we get the second inclusion by also choosing $\kappa_0 \geq 4(\kappa'_0 + 1)$. ◻ *Proof of .* For simplicity, we renumber $\mathcal{L}(p) = (1, 2, \dots, L':=|\mathcal{L}(p)|)$. Let us write $\nu(a,l) = |\mathcal{N}_a(l)|\land |\mathcal{N}_{-a}(l)|$ and $\Lambda = \left\lfloor\frac{\phi^2}{p\lambda_0 h^2}\right\rfloor + 1$. We let $a^* := a^*(\phi^{-1}h, l)$ be as defined in so that for any $l$, $\nu(a^*, l) \geq \tfrac{\phi^2}{\lambda_0 h^2}$. By assumption of , it holds that $P_1 \overset{\mathcal{G}}{\prec} P_2 \overset{\mathcal{G}}{\prec} \dots \overset{\mathcal{G}}{\prec} P_{|\mathcal{L}(p)|}$ where we recall $\mathcal{G}= \mathcal{G}_{t,u}$ is the already constructed graph - see . Hence it holds that $\mathbf{rk}_{\mathcal{G}, i}(j) \geq \Lambda$ for any $i \in P_l$ and $j \in P_{l+\Lambda}$ - see for the definition of $\mathbf{rk}$. 
Since there are at least $p\Lambda \geq \tfrac{\phi^2}{\lambda_0 h^2}$ experts in the union $P_{l+1} \cup \dots \cup P_{l+ \Lambda}$, we conclude that $a^* \leq \Lambda$, and that any expert in $\mathcal{N}_{a^*}$ (resp. $\mathcal{N}_{-a^*}$) is below the maximal expert of $P_{l+\Lambda}$ (resp. above the minimal expert of $P_{l - \Lambda}$). This implies that, upon writing $\overline {\boldsymbol\Delta}^*_k(l)$ for the difference of these maximal and minimal experts, we have by definition of $\overline Q^{*h}$ that $\overline {\boldsymbol\Delta}^*_k(l)\geq h/2$ for all $k$ in $\overline Q^{*h}$. This implies in particular that $$\sum_{l \in \mathcal{L}(p)} |\overline Q^{*h}_l(h, \phi)| \leq \sum_{k =1}^d \sum_{l \in \mathcal{L}(p)}{\mathbf 1}\{\overline {\boldsymbol\Delta}^*_k(l) \geq h/2\} \leq \frac{2}{h}\sum_{k =1}^d \sum_{l \in \mathcal{L}(p)}\overline {\boldsymbol\Delta}^*_k(l) \leq (2\Lambda+1) \frac{2d}{h} \leq 6\frac{\Lambda d}{h} \enspace ,$$ where in the last inequality we used the fact that $M_{i,k} \in [0, 1]$ and that the sequence $P_{l-\Lambda}, \dots, P_{l+\Lambda}$ is of length $2\Lambda + 1$, for any $l \in \mathcal{L}(p)$. ◻ # Proof of {#sec:proof_of_prop_reduction} Let us fix any $l \in \{1, \dots, L\}$ and $h \in \mathcal{H}$. Since $l, h$ and $\widehat Q^h_l$ are fixed in this proof, we simplify the notation and we write $(P',\widetilde P, Q) = (P'_l, \widetilde{P}^h_l,\widehat Q^h_l)$ and $M := M(\widetilde P, Q)$ and $M(P') := M(P', Q)$. We also fix $\delta' = \delta/(L|\mathcal{H}|)$, where we recall that $L \leq n$ is the number of groups. Let us assume that $$\label{eq:condition_Frobenius_acp} \| M(P') - \overline M(P')\|_F^2 \geq ~\kappa_1 \gamma_u^4\left[\frac{1}{\lambda_0}\sqrt{|\widetilde P| |Q|} + \frac{|\widetilde P|}{\lambda_0}\right]\ ,$$ for some constant $\kappa_1$ to be fixed later.
In what follows, we show that under assumption for some large enough numerical constant $\kappa_1$, we necessarily have that the square norm of $P'$ is a contraction of the square norm of $P$, that is $$\label{eq:obj_contraction_pca} \|M(P') - \overline M(P')\|_F^2 \leq \left(1 - \frac{1}{4\gamma_u^2}\right)\|M - \overline M\|_F^2 \enspace .$$ **Step 1: control of the vector $\hat v$** First, the following lemma states that the first singular value of $(M - \overline{M})$ is, up to polylogarithmic terms, of the same order as its Frobenius norm. This is mainly due to the fact that the entries of $M$ lie in $[0,1]$ and that $M - \overline M$ is an isotonic matrix. **Lemma 18** (Lemma E.4 in [@pilliat2022optimal]). *Assume that $\|M - \overline{M} \|_F \geq 2$. For any sets $\widetilde P$ and $Q$, we have $$\|M - \overline{M} \|^2_{\mathrm{op}} \geq \frac{4}{\gamma_u^2} \| M - \overline{M} \|^2_F \enspace .$$* This lemma was already stated and proved as Lemma E.4 in [@pilliat2022optimal], recalling that $\gamma_u > \phi_{\mathrm{L}_1} \geq 8\log(nd)$; see and . Now, write $\hat v = \mathop{\mathrm{arg\, max}}_{\|v\|_2 \leq 1} \Big[ \|v^T(Y^{(2)} - \overline{Y}^{(2)})\|_2^2 - \frac{1}{2}\| v^T(Y^{(2)} - \overline{Y}^{(2)} - Y^{(3)} + \overline{Y}^{(3)})\|_2^2\Big]$, where the argmax is taken over all $v \in \mathbb{R}^{\widetilde P}$. **Lemma 19**. *Assume that $\lambda_0 |\widetilde P| \geq 1$.
There exists a numerical constant $\kappa'_0$ such that if $$\label{eq:condition_pca} \|M - \overline{M}\|_{\mathrm{op}}^2 \geq \kappa'_0 \log^2(nd/\delta') \left(\frac{1}{\lambda_0}\sqrt{|Q||\widetilde P|} + \frac{|\widetilde P|}{\lambda_0} \right) \enspace ,$$ then, with probability higher than $1-\delta'$, we have $$\|\hat v^T \left(M - \overline M\right) \|_2^2 \geq \frac{1}{2}\| M - \overline M \|_{\mathrm{op}}^2 \enspace .$$* In light of Lemma [Lemma 18](#lem:structure_isotonic_matrices){reference-type="ref" reference="lem:structure_isotonic_matrices"} and Condition [\[eq:condition_Frobenius_acp\]](#eq:condition_Frobenius_acp){reference-type="eqref" reference="eq:condition_Frobenius_acp"}, the Condition [\[eq:condition_pca\]](#eq:condition_pca){reference-type="eqref" reference="eq:condition_pca"} in Lemma [Lemma 19](#lem:concentration_pca){reference-type="ref" reference="lem:concentration_pca"} is valid if we choose $\kappa_1$ in such that $\kappa_1 \geq 16\kappa'_0$. Consequently, there exists an event of probability higher than $1-\delta'$ such that $$\label{eq:pca_signal_with_Frobenius} \|\hat v^T \left(M - \overline M\right) \|^2_2 \geq \frac{2}{\gamma_u^2}\|M - \overline M\|^2_F \enspace .$$ **Step 2: control of the vector $\hat v_-$** Now remark that since $\|\hat v\|_2 = 1$, there are at most $\frac{1}{\lambda_0}$ experts $i$ such that $\hat v_{i} > \sqrt{\lambda_0}$. Hence we have that $$\begin{aligned} \|\hat v_-^T \left(M - \overline M\right) \|^2_2 &\geq \frac{2}{\gamma_u^2}\|M - \overline M\|^2_F - \sum_{i \in \widetilde P}{\mathbf 1}_{\hat v_{i} > \sqrt{\lambda_0}} \|M_{i\cdot} - \overline m\|_2^2 \\ &\overset{(a)}{\geq} \frac{2}{\gamma_u^2}\|M - \overline M\|^2_F - \frac{3\gamma_u}{\lambda_0}\sqrt{\frac{|\widehat Q|}{\lambda_0}}\\ &\overset{(b)}{\geq} \frac{1}{\gamma_u^2}\|M - \overline M\|^2_F \enspace .\end{aligned}$$ $(a)$ comes from the fact that any expert in $\widetilde P$ satisfies under the event of .
$(b)$ comes from Condition and the assumption that $\lambda_0|\widetilde P| \geq 1$. **Step 3: control of the vector $\hat w$** Next, we show that a thresholded version of $\hat z= ( Y^{(4)}- \overline Y^{(4)} )^T \hat v_-$ is almost aligned with $z^*= \lambda_1(M-\overline{M})^T \hat v_-$. We define the sets $S^*\subset Q$ and $\hat S\subset Q$ of questions by $$\label{eq:def_S_star_S_hat} S^* = \left\{ k \in Q ~:~ |z^*_k| \geq 2\gamma_u\sqrt{\lambda_0}\right\} \ ; \quad \hat S = \left\{ k \in Q ~:~ |\hat z_k| \geq \gamma_u\sqrt{\lambda_0}\right\} \enspace .$$ $S^*$ stands for the collection of questions $k$ such that $z^*_k$ is large whereas $\hat S$ is the collection of questions $k$ with large $\hat z_k$. Finally, we consider the vectors $w^*$ and $\hat{w}$ defined as thresholded versions of $z^*$ and $\hat z$ respectively, that is $w^*_k= z^*_k{\mathbf 1}_{k\in S^*}$ and $\hat{w}_k= \hat z_k{\mathbf 1}_{k\in \hat S}$. Note that, up to the sign, $\hat{w}$ stands for the active coordinates computed in $\mathbf{SLR}$, of . Recall that we assume that $\lambda_0 \leq 1$. We write $v$ for any unit vector in $\mathbb{R}^{|\widetilde P|}$. Let us apply for each column $k \in Q$ of the noise matrix $E$ with the matrix $W$ equal to $v - (\tfrac{1}{|\widetilde P|}\sum_{i \in \widetilde P} v_i){\mathbf 1}_{\widetilde P}$ at column $k$ and $0$ elsewhere. We deduce that, for any fixed matrix $M$, any subsets $\widetilde P$ and $Q$, and any unit vector $v \in \mathbb{R}^{\widetilde P}$ such that $\|v\|_{\infty} \leq 2\sqrt{\lambda_0}$, we have $$\label{eq:control_noise_v_hat} \operatorname{\mathbb{P}}\left[\max_{k\in Q} \left|(v^T (E^{(3)} - \overline E^{(3)}))_k\right| \leq 100\log(2|Q|/\delta')\sqrt{\lambda_0}\right]\geq 1-\delta'\enspace .$$ Observe that $\hat z=z^* + (E^{(3)} - \overline E^{(3)})^T \hat v_-$.
Conditioning on $\hat v_-$, we deduce that, on an event of probability higher than $1-\delta'$, we have $$\label{eq:upper_noisec2} \|\hat z-z^* \|_{\infty}\leq 100\log(2|Q|/\delta')\sqrt{\lambda_0}\ \leq \frac{\gamma_u}{2}\sqrt{\lambda_0} \enspace ,$$ where the last inequality comes from $\gamma_u > \phi_{\mathrm{L}_1}$. Hence it holds that $S^*\subset \hat S$ and for $k\in \hat S$, we have $z^*_k/\hat z_k\in [1/2,2]$. Next, we shall prove that, under this event, $\lambda_1\hat v_-^T (M - \overline M) \hat w/\|\hat w\|_2$ is large (in absolute value): $$\lambda_1\left\lvert\hat v_-^T (M- \overline M) \hat w\right\rvert = \left\lvert(z^*)^T \hat{w}\right\rvert= \sum_{k\in \hat S} z^*_k \hat z_k\geq \frac{2}{5}\sum_{k\in \hat S} \left[(z^*_k)^2 + (\hat z_k)^2\right]\geq \frac{2}{5} [\|w^*\|_2^2 + \|\hat{w}\|_2^2]\geq \frac{4}{5}\| \hat w \|_2 \| w^* \|_2\ ,$$ where we used in the first inequality that $z^*_k/\hat z_k\in [1/2,2]$ and in the second inequality that $S^*\subset \hat S$. Thus, it holds that $$\label{eq:lb_estw_truew} \lambda_1^2\left\lvert\hat v_-^T (M - \overline M) \frac{\hat w}{\|\hat w\|_2}\right\rvert^2 \geq \frac{16}{25}\|w^* \|_2^2 \enspace .$$ It remains to prove that $\|w^* \|_2$ is large enough. Writing $S^{*c}$ for the complement of $S^*$ in $Q$, it holds that $$\label{eq:wstar_on_S} \|w^* \|_2^2 = \|z^*\|_2^2 - \sum_{k\in S^{*c}}(z^*_k)^2\ ,$$ so that we need to upper bound the latter quantity. Write $z^*_{S^{*c}}= z^*-w^*$.
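The first inequality in the display above rests on the elementary fact that if $z/\hat z \in [1/2, 2]$ then $z\hat z \geq \tfrac{2}{5}(z^2 + \hat z^2)$, since $r/(1+r^2)$ attains its minimum $2/5$ on $[1/2, 2]$ at the endpoints. A numerical check, illustrative only:

```python
# For ratios r = z / zhat in [1/2, 2], check z*zhat >= (2/5)(z^2 + zhat^2).
# We set zhat = 1 without loss of generality, so z = r.
ratios = [0.5, 0.6, 0.75, 1.0, 1.3, 1.9, 2.0]
ok = all(r * 1.0 >= (2 / 5) * (r * r + 1.0) - 1e-12 for r in ratios)
assert ok
```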
Coming back to the definition of $z^*$, $$\begin{aligned} \left[\sum_{k\in S^{*c}}(z^*_k)^2\right]^2&~= \left[\sum_{k\in S^{*c}}\lambda_1[\hat v_-^T(M - \overline M )]_k z^*_k\right]^2\\ &~\leq \|\lambda_1\left(M - \overline M \right)z^*_{S^{*c}}\|^2_2= \sum_{i \in \widetilde P}\left(\sum_{k \in S^{*c}}\lambda_1(M_{ik} - \overline m_k)z^*_k \right)^2 \\ &\overset{(a)}{\leq} \frac{4\gamma_u^2}{|\widetilde P|^2}\lambda_0 \sum_{i \in \widetilde P}\left(\sum_{k \in S^{*c}}\sum_{j\in \widetilde P}\lambda_1|M_{ik} - M_{jk}|\right)^2\\ &~\leq \frac{4\gamma_u^2}{|\widetilde P|^2}\lambda_0 \sum_{i \in \widetilde P}\left(\sum_{j\in \widetilde P}\lambda_1\|M_{i\cdot} - M_{j\cdot}\|_1\right)^2\\ &\overset{(b)}{\leq} 40 \gamma_u^4 \lambda_0^2|\widetilde P||Q|\\ &~\leq \left[7\gamma_u^2 \lambda_0\sqrt{|\widetilde{P}||Q|}\right]^2 \leq \left[\frac{1}{2\gamma_u^2}\lambda_0^2\| M - \overline M\|_F^2\right]^2\enspace .\end{aligned}$$ In $(a)$, we used the definition of $S^*$. In $(b)$, we used that holds true since we are under the event and $\lambda_0 |Q| \geq 1$. The last inequality comes from Condition , choosing $\kappa_1 \geq 14$. Recall that $z^*= \lambda_1(M-\overline{M})^T \hat v_-$. Combining and , we deduce that $$\label{eq:w_star_captures_significant_part} \|w^* \|_2^2 \geq \frac{1}{2\gamma_u^2}\lambda_0^2\|M - \overline M\|_F^2\ ,$$ which, together with and $\lambda_0 \geq \lambda_1$, yields $$\label{eq:energy_hat_w} \left\|(M - \overline M) \frac{\hat w}{\|\hat w\|_2} \right\|_2^2 \geq \left\lvert\hat v_-^T (M - \overline M) \frac{\hat w}{\|\hat w\|_2}\right\rvert^2 \geq \frac{1}{2\gamma_u^2}\|M - \overline M\|_F^2 \enspace .$$ Write $\hat{w}^{(1)}$ and $\hat{w}^{(2)}$ for the positive and negative parts of $\hat{w}$ respectively, so that $\hat{w}= \hat{w}^{(1)}- \hat{w}^{(2)}$ and $\hat{w}^{+}= \hat{w}^{(1)}+ \hat{w}^{(2)}$. We obviously have $\|\hat{w}\|_2= \|\hat{w}^{+}\|_2$.
Besides, if the rows of $M$ are ordered according to the oracle permutation, then $(M - \overline M)\hat{w}^{(1)}$ and $(M - \overline M)\hat{w}^{(2)}$ are nondecreasing vectors with mean zero. It then follows from Harris' inequality that these two vectors have a nonnegative inner product. We have proved that $$\label{eq:lower_former_signal} \left\|(M - \overline M) \frac{\hat w^+}{\|\hat w^+\|_2} \right\|_2^2\geq \left\|(M - \overline M) \frac{\hat w}{\|\hat w\|_2} \right\|_2^2 \geq \frac{1}{2\gamma_u^2}\|M - \overline M\|_F^2 \enspace .$$ **Step 4: Showing that $\hat w$ satisfies Condition** Recall that we assume for simplicity that $\lambda_0 \leq 1$. First we upper bound $\|\hat w\|_\infty^2$ by using $(a)$ that $\hat z$ is close to $z^*$ with , $(b)$ that for any $k \in Q$, $v^TM_{\cdot k} \leq \|v\|_1$ and $(c)$ that $\lambda_0 |\widetilde P| \geq 1$: $$\begin{aligned} \|\hat w\|_{\infty}^2 \overset{(a)}{\leq} 2\|z^*\|_{\infty}^2 + \gamma_u^2\lambda_0 \overset{(b)}{\leq} 2\lambda_0^2\|\hat v\|_1^2 + \gamma_u^2\lambda_0 \overset{(c)}{\leq} 3\gamma_u^2 \lambda_0^2 |\widetilde P| \enspace .\end{aligned}$$ Secondly, we lower bound $\|\hat w\|_2^2$ by using $(a)$ that $S^* \subset \hat S$ and that $z^*_k/\hat z_k \in [1/2, 2]$, $(b)$ that $\|w^*\|_2^2$ captures a significant part of the $L_2$ norm (see ), and $(c)$ the Condition with $\kappa_1 \geq 24$: $$\begin{aligned} \|\hat w\|_2^2 \overset{(a)}{\geq} \frac{1}{4}\|w^*\|_2^2 \overset{(b)}{\geq} \frac{1}{8\gamma_u^2}\lambda_0^2 \|M- \overline M \|_F^2 \overset{(c)}{\geq} 3\gamma_u^2 \lambda_0 |\widetilde P| \enspace .\end{aligned}$$ We deduce that $\|\hat w\|_{\infty}^2 \leq \lambda_0 \|\hat w\|_2^2$, which is exactly Condition . This shows that $\hat w^+$ is considered for the update in the final step of the procedure of .
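Three elementary facts invoked in Steps 3 and 5 can be sanity-checked numerically: (i) two nondecreasing mean-zero vectors have a nonnegative inner product (Harris' inequality), (ii) by convexity, pairwise row bounds transfer to the centered rows, and (iii) restricting to a subset of rows and re-centering can only shrink the norm of any projected direction. The sketch below is ours and purely illustrative:

```python
import numpy as np

# (i) Harris: nondecreasing vectors with zero mean, nonnegative inner product.
a = np.array([-3.0, -1.0, 0.5, 3.5])
b = np.array([-1.0, -1.0, 0.0, 2.0])
assert abs(a.mean()) < 1e-12 and abs(b.mean()) < 1e-12
assert a @ b >= 0.0

# (ii) Convexity: if |<M_i - M_j, w>| <= c for all i, j, then each
# centered row satisfies |<M_i - mean_j M_j, w>| <= c, hence
# ||(M - Mbar) w||_2^2 <= c^2 * (number of rows).
rng = np.random.default_rng(0)
M = rng.random((5, 7))
w = rng.standard_normal(7)
w /= np.linalg.norm(w)
c = max(abs((M[i] - M[j]) @ w) for i in range(5) for j in range(5))
centered = (M - M.mean(axis=0)) @ w
assert np.all(np.abs(centered) <= c + 1e-12)
assert centered @ centered <= c ** 2 * 5 + 1e-9

# (iii) Restriction: for a subset P' of rows, re-centering over P' minimises
# the sum of squares, so the projected norm can only decrease.
keep = [0, 2, 3]
A = M - M.mean(axis=0)
Ap = M[keep] - M[keep].mean(axis=0)
assert np.linalg.norm(Ap @ w) <= np.linalg.norm(A @ w) + 1e-12
```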
**Step 5: upper bound of the Frobenius norm restricted to $P'$** Equipped with this bound, we are now in position to show that the set $P'$ of experts obtained from $\widetilde{ P}$ when applying the pivoting algorithm with $\hat w^+/\|\hat w^+\|_2$ has a much smaller square norm. By used with the matrix $W$ equal to $0$ except at line $i$ where it is equal to the vector $\hat w^+/\|\hat w^+\|_2$, there exists an event of probability higher than $1-\delta'$ such that $$\max_{i,j\in P'}\left\lvert\left\langle E_{i\cdot} - E_{j\cdot},\frac{\hat w^+}{\|\hat w^+\|_2}\right\rangle\right\rvert \leq \phi_{\mathrm{L}_1}\sqrt{\lambda_0} \leq \gamma_u\sqrt{\lambda_0}\ ,$$ where we recall that $\phi_{\mathrm{L}_1}$ is defined in . Hence, since the vector $\hat w$ is considered in the update , we have $\max_{i,j\in P'}\left\lvert\left\langle Y_{i\cdot} - Y_{j\cdot},\frac{\hat w^+}{\|\hat w^+\|_2}\right\rangle\right\rvert \leq \gamma_u\sqrt{\lambda_0}$ and $$\max_{i,j\in P'}\left\lvert\left\langle M_{i\cdot} - M_{j\cdot},\frac{\hat w^+}{\|\hat w^+\|_2}\right\rangle\right\rvert \leq 2\gamma_u \sqrt{\frac{1}{\lambda_0}} \enspace .$$ By convexity, it follows that $$\left\|(M(P') - \overline M(P'))\tfrac{\hat w^+}{\|\hat w^+\|_2} \right\|_2^2 \leq 4\gamma_u^2\frac{1}{\lambda_0}|P'| \leq 4\gamma_u^2\frac{1}{\lambda_0}|\widetilde P|\ .$$ In light of Condition , this quantity is small compared to $\| M - \overline{M}\|_F^2$: $$\label{eq:upper_new_signal} \| (M(P') - \overline M(P'))\tfrac{\hat w^+}{\|\hat w^+\|_2} \|_2^2 \leq\frac{1}{4\gamma_u^2} \| M - \overline{M}\|_F^2\ ,$$ which together with [\[eq:lower_former_signal\]](#eq:lower_former_signal){reference-type="eqref" reference="eq:lower_former_signal"} leads to $$\label{eq:upper_new_signal_2} \|(M - \overline M )\tfrac{\hat w^+}{\|\hat w^+\|_2} \|_2^2 - \|(M(P') - \overline M(P') )\tfrac{\hat w^+}{\|\hat w^+\|_2} \|_2^2\geq \frac{1}{4\gamma_u^2} \| M - \overline{M}\|_F^2\enspace .$$ Since $P'\subset \widetilde P$, we deduce that, for any vector $w'\in \mathbb{R}^q$, we have $\|(M - \overline
M)w' \|_2^2 \geq \|(M(P') - \overline M(P') )w' \|_2^2$. It then follows from the Pythagorean theorem that $$\|M - \overline M\|_F^2 - \|M(P') - \overline M(P') \|_F^2 \geq \|(M - \overline M )\tfrac{\hat w^+}{\|\hat w^+\|_2} \|_2^2 - \|(M(P') - \overline M(P') )\tfrac{\hat w^+}{\|\hat w^+\|_2} \|_2^2 \enspace .$$ Then, together with [\[eq:upper_new_signal_2\]](#eq:upper_new_signal_2){reference-type="eqref" reference="eq:upper_new_signal_2"}, we arrive at $$\|M(P') - \overline M(P')\|_F^2 \leq \left(1 - \frac{1}{4\gamma_u^2}\right)\|M - \overline M\|_F^2 \enspace .$$ We have shown that if is satisfied, then there is a contraction in the sense of . This in turn gives the upper bound and it concludes the proof of . *Proof of .* Recall that we consider the case $\lambda_0 \leq 1$ and that the case $\lambda_0 \geq 1$ is discussed in . We start with the two following lemmas. To ease the notation, we assume in this proof that $\widetilde P = \{1, \dots, p\}$ and that $Q = \{1, \dots, q\}$. We only consider the matrices restricted to the sets $\widetilde P, Q$ and we write $E:= E(\widetilde P, Q)$. Let us define $J = {\mathbf 1}{\mathbf 1}^T \in \mathbb{R}^{p \times p}$ the matrix with all coefficients equal to $1$ and $A = (\mathbf{I}_p - \frac{1}{p}J)$ the projector onto the orthogonal complement of ${\mathbf 1}$, so that $E- \overline E= AE\in \mathbb{R}^{p \times q}$. The two following lemmas are direct consequences of , and a discussion of the corresponding concentration inequality on random rectangular matrices can be found in . We state weaker concentration inequalities than what is proven in in order to factorize the polylogarithmic factors and to ease the reading of the proof. **Lemma 20**. *Assume that $\lambda_0 \leq 1$ and that $\lambda_0(p\lor q) \geq 1$. It holds with probability larger than $1-\delta'/4$ that $$\|EE^T - \mathbb{E}[EE^T]\|_{\mathrm{op}} \leq \kappa''_0\log^2(pq/\delta')\left[\lambda_0\sqrt{pq} + \lambda_0 p\right]\enspace .$$* **Lemma 21**.
*Assume that $\lambda_0 \leq 1$ and that $\lambda_0(p\lor q) \geq 1$. With probability larger than $1-\delta'/4$, one has for any orthogonal projection $\Lambda \in \mathbb{R}^{q\times q}$ satisfying $\mathop{\mathrm{rank}}(\Lambda) \leq p$ that $$\|\Lambda E^T E\Lambda\|_{\mathrm{op}} \leq \kappa''_1\log^2(pq/\delta')\left[\lambda_0\sqrt{pq} + \lambda_0 p\right]\enspace .$$* *Proofs of and .* First, we recall that for any $i,k$, we have that $E_{ik} = (B_{ik} - \lambda_1)M_{ik} + \widetilde E_{ik}$, and that $\widetilde E$ is an average of $1$-subGaussian random variables, as described in . For any $u \geq 0$ we have $$\label{eq:moment_bernstein} \mathbb{E}[E_{ik}^{2u}] \leq 3^u \mathbb{E}\left[ B_{ik} + \lambda_0^{2u} + \widetilde E_{ik}^{2u}\right] \leq 3^u \left(2\lambda_0 + u! \mathbb{E}[e^{\widetilde E_{ik}^2}] \right) \leq \frac{1}{2}u!\lambda_0 1000^u \enspace,$$ where for the last inequality we used the following inequalities: $$\begin{aligned} \mathbb{E}[e^{\widetilde E_{ik}^2}] \leq \sum_{u \geq 1} e^{-\lambda_0} \frac{\lambda_0^u}{u!} e^{1/u} \leq \lambda_0 e \enspace . \end{aligned}$$ Hence condition is satisfied with $K= 1000$ and $\sigma^2=\lambda_0$ for the coefficients of $E$. We just apply with $X = E$ for . For , we apply with $X = E^T$ and we remark that $\| \Lambda E^T E\Lambda \|_{\mathrm{op}}^2 \leq 2\| \Lambda E^T E- \mathbb{E}[E^T E] \Lambda \|_{\mathrm{op}}^2 + 2\| \mathbb{E}[E^T E] \|_{\mathrm{op}}^2$ together with the fact that $\|\mathbb{E}[E^T E] \|_{\mathrm{op}}^2 \leq c'\lambda_0p$ for some numerical constant $c'$. ◻ Remark that since we assume in that $\lambda_0 p \geq 1$, it holds that $\sqrt{\lambda_0 p} \leq \lambda_0 p$ and $\sqrt{\lambda_0 q} \leq \lambda_0^2 \sqrt{pq}$, so that both upper bounds of and reduce, up to logarithmic factors, to $\lambda_0\sqrt{pq} + \lambda_0 p$.
We write for short in the following $$\label{eq:def_F_short} F:= F(p,q,\lambda_0, \delta') = \log^2(pq/\delta')[\lambda_0\sqrt{pq} + \lambda_0 p] \enspace,$$ and $\kappa''_2 = 8(\kappa''_0 \lor \kappa''_1)$. Now let us write $$AY = \lambda_1AM + AE\enspace ,$$ so that, for any $v\in\mathbb{R}^p$, recalling that $AY = Y - \overline Y$, $$\begin{aligned} \|v^TAY\|_2^2 = \lambda_1^2\|v^TAM\|_2^2 + \|v^TAE\|_2^2 + 2\lambda_1\langle v^TAE, v^TAM \rangle\enspace , \end{aligned}$$ which, in turn, implies that $$\begin{aligned} \Big|\|v^TAY\|_2^2 - \lambda_1^2\|v^TAM\|_2^2 - \operatorname{\mathbb{E}}\big[\|v^TAE\|_2^2\big] \Big| &~\leq \Big|\|v^TAE\|_2^2 - \operatorname{\mathbb{E}}\big[\|v^TAE\|_2^2\big] \Big| + 2\lambda_1 |v^TAM E^T (Av)|\\ &\stackrel{(a)}{\leq} \|A(EE^T - \mathbb{E}[EE^T])A\|_{\mathrm{op}} + 2 \lambda_1\|AM E^TE(AM)^T\|^{1/2}_{\mathrm{op}} \\ &~\leq \|EE^T - \mathbb{E}[EE^T]\|_{\mathrm{op}} + 2\lambda_1\|AM\|_{\mathrm{op}}\| \Lambda E^TE\Lambda\|^{1/2}_{\mathrm{op}} \enspace , \end{aligned}$$ where we define $\Lambda \in \mathbb{R}^{q\times q}$ as the orthogonal projector onto $\ker(AM)^{\perp}$, which is of rank less than $p$. For $(a)$, we used the fact that $A$, being an orthogonal projector, contracts the Euclidean norm, so that $\|Av\|_2 \leq 1$. We now apply and together with the fact that $\lambda_1 \leq \lambda_0$, and we obtain with probability at least $1-\delta'/2$ that $$\label{eq:upper_1} \sup_{v \in \mathbb{R}^p, \|v\|=1}\Big|\|v^TAY\|_2^2 - \lambda_1^2\|v^TAM\|_2^2 - \operatorname{\mathbb{E}}\big[\|v^TAE\|_2^2\big] \Big| \leq \kappa''_2 F + \lambda_1\|AM\|_{\mathrm{op}}\sqrt{\kappa''_2 F}\enspace ,$$ where $F$ is defined in .
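The expansion of $\|v^TAY\|_2^2$ above is an exact algebraic identity; it can be verified numerically with $A = \mathbf{I}_p - J/p$ the centering projector. This is a toy check with random instances of our choosing:

```python
import numpy as np

# Verify ||v^T A Y||^2 = lam1^2 ||v^T A M||^2 + ||v^T A E||^2
#                        + 2 lam1 <v^T A E, v^T A M>,  with Y = lam1 M + E.
rng = np.random.default_rng(1)
p, q, lam1 = 5, 7, 0.4
M = rng.random((p, q))
E = rng.standard_normal((p, q))
Y = lam1 * M + E
A = np.eye(p) - np.ones((p, p)) / p     # centering projector: A Y = Y - Ybar
v = rng.standard_normal(p)
v /= np.linalg.norm(v)
lhs = np.linalg.norm(v @ A @ Y) ** 2
rhs = (lam1 ** 2 * np.linalg.norm(v @ A @ M) ** 2
       + np.linalg.norm(v @ A @ E) ** 2
       + 2 * lam1 * (v @ A @ E) @ (v @ A @ M))
assert np.isclose(lhs, rhs)
```

Note that $A$ is idempotent ($A^2 = A$), which is what makes the step $(a)$ contraction argument work.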
In the same way, we have that, with probability larger than $1-\delta'/2$, $$\begin{aligned} \sup_{v\in \mathbb R^{p}:\ \|v\|_2\leq 1} \Big|\frac{1}{2}\|v^TA(Y-Y')\|_2^2 - \operatorname{\mathbb{E}}\big[\|v^TAE\|_2^2\big] \Big| &= \frac{1}{2}\sup_{v\in \mathbb R^{p}:\ \|v\|_2\leq 1}\Big|\|v^TA(Y-Y')\|_2^2 - \mathbb E \|v^TA(Y-Y')\|_2^2 \Big| \\ &\leq \kappa''_3F\enspace, \end{aligned}$$ for some numerical constant $\kappa''_3$. Putting everything together we conclude that, on an event of probability higher than $1-\delta'$, we have simultaneously for all $v \in \mathbb R^{p}$ with $\|v\|_2\leq 1$ that $$\begin{aligned} \Big|\|v^TAY\|_2^2 - \lambda_1^2\|v^TAM\|_2^2 - \frac{1}{2}\|v^TA(Y-Y')\|_2^2 \Big| \leq \kappa''_4 F + \lambda_1\|AM\|_{\mathrm{op}}\sqrt{\kappa''_4 F} \enspace , \end{aligned}$$ with $\kappa''_4 = \kappa''_2 \lor \kappa''_3$. Choosing the numerical constant $\kappa'_0$ of such that $\kappa'_0 \geq 4\cdot 16 (1-1/e)^{-1}\kappa''_4$ we have $$\lambda_1^2\|AM\|_{\mathrm{op}}^2 \geq 4\cdot 16 \kappa''_4 F\enspace ,$$ since it holds that $\lambda_1 \geq (1-1/e)\lambda_0$. We deduce that on the same event: $$\sup_{v\in \mathbb{R}^p: \ \|v\|_2\leq 1} \Big|\|v^TAY\|_2^2 - \lambda_1^2\|v^TAM\|_2^2 - \frac{1}{2}\|v^TA(Y-Y')\|_2^2 \Big|\leq \frac{\lambda_1^2}{4}\|AM\|_{\mathrm{op}}^2\enspace .$$ Writing $\psi(v)= \big|\|v^T(Y - \overline{Y})\|_2^2 - \frac{1}{2}\|v^TA(Y-Y')\|_2^2\big|$, we deduce that, for $v$ such that $\|v^TAM\|_{2}^2 = \|AM\|_{\mathrm{op}}^2$, we have $\psi(v)\geq \tfrac{3}{4}\lambda_1^2\|AM\|_{\mathrm{op}}^2$, whereas, for $v$ such that $\|v^TAM\|_{2}^2< \frac{1}{2} \|AM\|_{\mathrm{op}}^2$, we have $\psi(v)<\tfrac{3}{4}\lambda_1^2\|AM\|_{\mathrm{op}}^2$. We conclude that $\hat v$ satisfies $\|\hat v^TAM\|_{2}^2> \frac{1}{2} \|AM\|_{\mathrm{op}}^2$ with probability at least $1-\delta'$. ◻ # Proof of when $\lambda_0 \geq 1$ {#sec:full} The aim of this section is to provide an extension of the proof of to the case $\lambda_0 \geq 1$.
Recall that we fix $\delta$ to be a small probability in the proof of , and that $E$ and $\widetilde E$ are the matrices defined in and by $$\widetilde E^{(s)}_{ik} = \sum_{t \in N^{(s)}} \tfrac{\varepsilon_t}{\mathbf{r}^{(s)}_{ik}\lor 1}{\mathbf 1}\{x_t = (i,k)\} \quad \text{ and } \quad E^{(s)}_{ik} = (B^{(s)}_{ik} - \lambda_1)M_{ik} + B^{(s)}_{ik}\widetilde E^{(s)}_{ik}\enspace .$$ In what follows, we consider the two subcases where $\lambda_0 > 16\log(5nd/\delta)$ or $\lambda_0 \leq 16\log(5nd/\delta)$, which essentially rely on the two following ideas: - If $\lambda_0 \leq 16\log(5nd/\delta)$, we use the fact that the coefficients of $E$ defined in are $5$-subGaussian together with the same signal-noise decomposition $Y = \lambda_1 M + E$ as in the proofs when $\lambda_0 \leq 1$. The difference from the case $\lambda_0 \leq 1$ lies in the application of subGaussian inequalities for $E_{ik}$ instead of Bernstein inequalities as in . - If $\lambda_0 > 16\log(5nd/\delta)$, we show that the event $\{\mathbf{r}_{ik}^{(s)}\geq \lambda_0 /2\}$ holds true for all $i,k,s$ with high probability. Working conditionally on this event, we use the decomposition $Y = M + \widetilde E$ and we show that the noise $\widetilde E$ has independent $\frac{2}{\lambda_0}$-subGaussian coefficients. The rationale behind using $\widetilde E$ when $\lambda_0$ is large is that $\widetilde E_{ik}$ takes advantage of being an average of at least $\lambda_0/2$ subGaussian variables with high probability. Let $\mathbf{r}_{\min}^{(s)} = \min_{i,k}\mathbf{r}_{ik}^{(s)}$ be the minimum number of observations at positions $(i,k)$ in $N_s$ (see ). In the case $\lambda_0 > 16\log(5nd/\delta)$, the following lemma states that with high probability, we observe all the coefficients for every sample $s$ in the full observation regime. **Lemma 22**. *Assume that $\lambda_0 \geq 16\log(5nd/\delta)$.
The event $\{\mathbf{r}^{(s)}_{\min} \geq \lambda_0/2 \}$ holds simultaneously for all samples $s$ with probability at least $1 - 5T\delta$.* *Proof of .* We apply Chernoff's inequality (see e.g. Section 2.2 of [@massart2007concentration]) to derive that for any $i,k$ $$\mathbb{P}(\mathbf{r}^{(s)}_{ik} \leq \lambda_0/2) \leq \exp(-\frac{1}{8} \lambda_0) \leq \delta/(nd) \enspace ,$$ where we use the inequality $(1- \log(2))/2 \geq 1/8$. We conclude with a union bound over all coefficients in $[n] \times [d]$ and all $5T$ samples. ◻ Let us now omit the dependence of $E$ and $\widetilde E$ on the sample $s$. In what follows, we use that the coefficients of $E$ are $5$-subGaussian, which is a consequence of the fact that $E_{ik}$ is the sum of a centered variable bounded by $1$ and a $1$-subGaussian random variable $\widetilde E_{ik}$, so that by Cauchy-Schwarz and the Hoeffding inequality we have $$\label{eq:1_subgaussian} \mathbb{E}[\exp(x E_{ik})] \leq \sqrt{\exp(4x^2/8)}\sqrt{\exp(4x^2/2)} = \exp(\tfrac{5}{4}x^2)\enspace .$$ Under the event of , we use that $\widetilde E_{ik}$ is $\tfrac{2}{\lambda_0}$-subGaussian, as an average of at least $\lambda_0/2$ random variables that are $1$-subGaussian: $$\label{eq:lambda_subgaussian} \mathbb{E}[\exp(x \widetilde E_{ik})] \leq \exp(\tfrac{1}{\lambda_0} x^2) \enspace .$$ ## Adjustments for the general analysis We first make the changes that should be done in to have a proper proof in the case $\lambda_0 \geq 1$. If $\lambda_0 \in [1, 16\log(5nd/\delta)]$, we simply replace $\lambda_0$ by $1/\lambda_0$ in the upper bound of for the event $\xi$ in . In the proof of the restated , we can replace the inequality by $$\label{eq:subgaussian_1} |\langle E, W\rangle| \leq \sqrt{10\|W\|_F^2\log\left(\frac{2}{\delta'}\right)} \enspace ,$$ for any matrix $W \in \mathbb{R}^{n \times d}$, with probability at least $1-\delta'$.
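The numeric steps in the proof of the lemma above can be checked directly: $(1-\log 2)/2 \geq 1/8$, and for $\lambda_0 = 16\log(5nd/\delta)$ one has $e^{-\lambda_0/8} = (\delta/(5nd))^2 \leq \delta/(nd)$. A small check with illustrative parameter values of our choosing:

```python
import math

# Chernoff exponent: (1 - log 2)/2 >= 1/8.
assert (1 - math.log(2)) / 2 >= 1 / 8

# With lam0 = 16 log(5nd/delta): exp(-lam0/8) = (delta/(5nd))^2 <= delta/(nd).
n, d, delta = 100, 50, 0.01
lam0 = 16 * math.log(5 * n * d / delta)
assert math.isclose(math.exp(-lam0 / 8), (delta / (5 * n * d)) ** 2)
assert math.exp(-lam0 / 8) <= delta / (n * d)
```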
We can then obtain $1/\lambda_0$ instead of $\lambda_0$ simply by using that $\phi_{\mathrm{L}_1}/\sqrt{\lambda_0} \geq \sqrt{\phi_{\mathrm{L}_1}}$, recalling that $\phi_{\mathrm{L}_1}$ is defined in . If $\lambda_0 > 16\log(5nd/\delta)$, we say that we are under event $\xi$ if the event of holds and holds for all pairs $(Q,w)$, replacing $E$ by $\widetilde E$, and $\lambda_0$ by $1/\lambda_0$. The proof of the new version of lies in the Hoeffding inequality applied to $\widetilde E$ under the event of , leading to the following inequality: $$\label{eq:hoeffding_E_tilde} |\langle \widetilde E, W\rangle| \leq \sqrt{\frac{4\|W\|_F^2}{\lambda_0}\log\left(\frac{2}{\delta'}\right)} \enspace ,$$ for any matrix $W \in \mathbb{R}^{n \times d}$, with probability at least $1-\delta'$. This equation then replaces . ## Adjustments to the proofs of We now adapt the proofs in of to the case $\lambda_0 \geq 1$. All the lemmas of can be stated as is for any $\lambda_0 \geq 1$, and the only adjustments concern the proofs of , and . ### Adjustments in the proofs of and Consider the proof of . First, if $\lambda_0 \geq 16 \log(5nd/\delta)$, we place ourselves under the event and replace $\lambda_1$ by $1$ and all the $E$ by $\widetilde E$. Instead of inequality , we use the fact that the coefficients of $\widetilde E$ are $2/\lambda_0$-subGaussian - see - leading to the following inequality with probability at least $1 - \delta$: $$\left\lvert\widetilde E_{k}(a)\right\rvert := \left\lvert\frac{1}{|\mathcal{N}_a|}\sum_{i \in \mathcal{N}_a}\widetilde E_{ik} - \frac{1}{|\mathcal{N}_{-a}|}\sum_{i \in \mathcal{N}_{-a}}\widetilde E_{ik}\right\rvert \leq \kappa'_0 \log(nd/\delta)\sqrt{\frac{1}{\lambda_0\nu(a)}} \enspace ,$$ for some numerical constant $\kappa'_0$. The rest of the proof remains unchanged.
If $\lambda_0 \in [1, 16\log(5nd/\delta)]$, we use the fact that $E$ has $5$-subGaussian coefficients - see - and we do not divide by $\lambda_0$ in - see the definition of $\widehat {\boldsymbol\Delta}$ . Concerning , the adjustments are the same as for , namely working under the event of and replacing $E$ by $\widetilde E$, $\lambda_0$ by $1/\lambda_0$ and $\lambda_1$ by $1$ if $\lambda_0 \geq 16\log(5nd/\delta)$, and using the fact that the coefficients of $E$ are $5$-subGaussian - see - if $\lambda_0 \in [1, 16\log(5nd/\delta)]$. ### Adjustments in the proof of We now adapt the proofs in of to the case $\lambda_0 \geq 1$. First, can be stated as is, and its proof when $\lambda_0 \geq 1$ is directly implied by Lemma E.5 in [@pilliat2022optimal] with $\Theta := M$, either conditionally on with noise $N:= \widetilde E$ and $\zeta^2 := 2/\lambda_0$ when $\lambda_0 \geq 16\log(5nd/\delta)$, or with noise $N:= E$ and $\zeta^2 := 5$ when $\lambda_0 \leq 16\log(5nd/\delta)$. Secondly, remark that if $\lambda_0 \geq 1$, it holds that $\hat v_- = \hat v$ and that Condition on $\hat w$ is automatically satisfied, so that Steps 2 and 4 can be removed from the proof in that case. For Steps 3 and 5, we make the following adjustments: if $\lambda_0 \in [1, 16\log(5nd/\delta)]$, the proof remains unchanged except that we use that the coefficients of $E$ are $5$-subGaussian - see . If $\lambda_0 \geq 16\log(5nd/\delta)$, we work conditionally on the event of and we replace $\lambda_1$ by $1$ and $E$ by $\widetilde E$. The subGaussian concentration bound on $\widetilde E$ allows us to replace $\lambda_0$ by $\frac{1}{\lambda_0}$ in the equations from to .
# Proof of Corollaries [Corollary 4](#cor:reconstruction_UB){reference-type="ref" reference="cor:reconstruction_UB"} and [Corollary 5](#cor:ub_reco_biso){reference-type="ref" reference="cor:ub_reco_biso"} {#proof-of-corollaries-correconstruction_ub-and-corub_reco_biso} *Proof of .* Assume that $\pi^* = \mathrm{id}$ for simplicity. Let $P_{\mathrm{iso}}$ be the projector on the set of isotonic matrices, and $E' = Y^{(2)}_{\hat \pi^{-1}} - M_{\hat \pi^{-1}}$, so that $\hat M_{\mathrm{iso}} = P_{\mathrm{iso}} (M_{\hat \pi^{-1}} + E')$. Remark that the loss can be decomposed as $$\|(\hat M_{\mathrm{iso}})_{\hat \pi} - M\|_F^2 = \|P_{\mathrm{iso}}M_{\hat \pi^{-1}} - P_{\mathrm{iso}}M + P_{\mathrm{iso}}(M + E') - M + M - M_{\hat \pi^{-1}}\|_F^2\enspace .$$ Using the non-expansiveness of $P_{\mathrm{iso}}$ and the triangle inequality as in the proof of Proposition 3.3 of [@mao2020towards], we deduce that $$\label{eq:triangular_reco} \|\hat M_{\mathrm{iso}} - M\|_F^2 \leq 4\|M_{\hat \pi^{-1}} - M\|_F^2 + 2\|P_{\mathrm{iso}}(M + E') - M\|_F^2 \enspace .$$ Since the projection of $M+E'$ on isotonic matrices is equal to the columnwise projection on isotonic vectors, it holds that $\sup_{M \in \mathbb{C}_{\mathrm{iso}}(n,d)}\mathbb{E}\|P_{\mathrm{iso}}(M + E') - M\|_F^2 = d \sup_{M \in \mathbb{C}(n,1)}\mathbb{E}\|P_{\mathrm{iso}}(M_{\cdot 1} + E'_{\cdot 1}) - M_{\cdot 1}\|_F^2$, where we also use the notation $P_{\mathrm{iso}}$ for the projector on isotonic vectors. The rate of estimation in $L_2$ norm of an isotonic vector with bounded total variation under partial observation can be found in [@zhang2002risk], with $V := 1$ and $\sigma^2 := 1/\lambda$. Hence, we obtain that $\sup_{M \in \mathbb{C}(n,1)}\mathbb{E}\|P_{\mathrm{iso}}(M_{\cdot 1} + E'_{\cdot 1}) - M_{\cdot 1}\|_F^2 \leq C_1 n^{1/3}/\lambda^{2/3}$.
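Since the isotonic projection acts column by column, it reduces to the one-dimensional $L_2$ projection onto nondecreasing vectors, which the pool-adjacent-violators algorithm computes in linear time. A minimal sketch (our own illustration, not the paper's code):

```python
def pava(y):
    """L2 projection of y onto nondecreasing vectors (pool adjacent violators)."""
    sums, counts = [], []  # merged blocks, stored as (sum, count)
    for v in y:
        sums.append(float(v)); counts.append(1)
        # merge while the previous block mean exceeds the last block mean
        while len(sums) > 1 and sums[-2] * counts[-1] > sums[-1] * counts[-2]:
            sums[-2] += sums[-1]; counts[-2] += counts[-1]
            sums.pop(); counts.pop()
    out = []
    for s, c in zip(sums, counts):
        out.extend([s / c] * c)
    return out

def project_columnwise(M):
    """Projection onto column-isotonic matrices = PAVA applied to each column."""
    cols = [pava(col) for col in zip(*M)]
    return [list(row) for row in zip(*cols)]
```

For instance, `pava([3, 1, 2])` pools all three entries into a single block of mean `2.0`, which is exactly the $L_2$ projection of that vector onto nondecreasing sequences.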
Upper bounding the first term in [\[eq:triangular_reco\]](#eq:triangular_reco){reference-type="eqref" reference="eq:triangular_reco"} with a quantity of order $\rho_{\mathrm{perm}} \leq 2\rho_{\mathrm{reco}}$ by concludes the proof. ◻ *Proof of .* We follow the same steps as in . Assume that $\pi^* = \eta^* = \mathrm{id}$, set $E'=Y^{(3)}_{\hat\pi^{-1}\hat\eta^{-1}} - M$, and let $P_{\mathrm{biso}}$ be the projector on bi-isotonic matrices. We have that $$\label{eq:ineq_trig_biso} \|(\hat M_{\mathrm{biso}})_{\hat \pi \hat \eta} - M\|_F^2 \leq 4\|M_{\hat \pi^{-1}\hat\eta^{-1}} - M\|_F^2 + 2\|P_{\mathrm{biso}}(M + E') - M\|_F^2 \enspace .$$ The matrix $M$ is isotonic in both directions, so that we can apply in both rows and columns. After the first two steps of the above procedure, we obtain two estimators $\hat \pi, \hat \eta$ that satisfy $$\label{eq:ub_liu_moitra_2_perm} \sup_{\stackrel{\pi^*, \eta^* \in \Pi_n}{M:\, M_{\pi^{*-1} \eta^{*-1}}\in \mathbb{C}_{\mathrm{biso}}}} \operatorname{\mathbb{E}}\left[\|M_{\hat{\pi}^{-1}\hat{\eta}^{-1}}- M_{\pi^{*-1}\eta^{*-1}}\|_F^2\right] \leq C''\log^{C''}(n)n^{7/6}\lambda^{-5/6} \enspace .$$ The second term of is the risk of a bi-isotonic regression by least squares, and is smaller than $n/\lambda \leq n^{7/6}\lambda^{-5/6}$ - see e.g. [@mao2020towards]. ◻ # Proof of the minimax lower bound *Proof of .* Since $\rho_{\mathrm{perm}}(n,d,\lambda)$ is nondecreasing with $n$ and $d$, we can assume without loss of generality that both $n$ and $d$ are powers of $2$. The following proof is strongly related to the proof of Theorem 4.1 in [@pilliat2022optimal]. While a worst case distribution is defined on the set of matrices that have nondecreasing rows and nondecreasing columns in [@pilliat2022optimal], we aim here at defining a worst case distribution on matrices that only have nondecreasing columns.
Since the isotonic model is less constrained than the bi-isotonic model studied in [@pilliat2022optimal], the permutation estimation problem is statistically harder, and the lower bound has a greater order of magnitude. As in [@pilliat2022optimal], the general idea is first to build a collection of priors $\nu_{{\bf G}}$ on $M$ indexed by some ${\bf G}\in \boldsymbol{\mathcal{G}}$, then to reduce the problem to smaller problems, and finally to specify the prior as a function of the regime in $n,d$ and $\lambda$. By assumption, the data $y_t$ is distributed as a normal random variable with mean $M_{x_t}$ and variance $1$, conditionally on $M$ and $x_t$. We write as in [@pilliat2022optimal] $\mathbf{P}_{\bf G}^{(\mathbf{full})}$ and $\mathbf{E}_{\bf G}^{(\mathbf{full})}$ for the corresponding marginal probability distributions and expectations on the data $(x_t,y_t)$. Our starting point is the fact that the minimax risk [\[eq:minimax_risk_perm\]](#eq:minimax_risk_perm){reference-type="eqref" reference="eq:minimax_risk_perm"} is higher than the worst Bayesian risk: $$\label{eq:starting_point} \mathcal{R}^*_{\mathrm{perm}}(n,d,\lambda) \geq \inf_{\hat{\pi}}\sup_{{\bf G}\in \boldsymbol{\mathcal{G}}}\mathbf{E}_{\bf G}^{(\mathbf{full})}\left[\|M_{\hat{\pi}^{-1}}- M_{\pi^{*-1}}\|_F^2\right]\ .$$ **Step 1: Construction of the prior distribution on $M$** Let $p \in \{2, \dots, n\}$ and $q \in [d]$ be two powers of $2$ to be fixed later, and $\overline G^{(\iota)} :=[(\iota -1) p + 1, \iota p]$, for $\iota \in \{1, \dots, n/p\}$. The general idea is to build a simple prior distribution on isotonic matrices in $\mathbb{R}^{\overline G^{(\iota)} \times d}$, and to derive a prior distribution on isotonic matrices in $\mathbb{R}^{n \times d}$ by combining $n/p$ independent simple prior distributions defined on each strip $\mathbb{R}^{\overline G^{(\iota)} \times d}$.
Let $w \in \mathbb{R}^n$ be a vector that is constant on each group $\overline G^{(\iota)} =[(\iota -1) p + 1, \iota p]$ and that has linearly nondecreasing steps: $$\label{eq:vector_w_lb} w_i = \left\lfloor\frac{i}{p}\right\rfloor \frac{p}{4n} \in [0, 1/4] \enspace .$$ Letting ${\mathbf 1}_{[d]}$ be the vector of $\mathbb{R}^d$ with all entries equal to $1$, we define $$\label{eq:definition_M_lower_bound} M = w {\mathbf 1}_{[d]}^T + \frac{\upsilon}{\sqrt{p\lambda}} B^{(\mathbf{full})}\ ,$$ where the random matrix $B^{(\mathbf{full})}\in \{0,1\}^{n\times d}$ is defined as in [@pilliat2022optimal]. We recall the definition of its distribution in what follows for the sake of completeness. Consider a collection $\mathcal{G}$ of subsets of $[p]$ with size $p/2$ that are well-separated in symmetric difference, as stated in the following lemma. **Lemma 23**. *There exists a numerical constant $c_0$ such that the following holds for any even integer $p$. There exists a collection $\mathcal G$ of subsets of $[p]$ with size $p/2$ which satisfies $\log(|\mathcal{G}|)\geq c_0 p$ and whose elements are $p/4$-separated, that is $|G_1 \Delta G_2| \geq p/4$ for any $G_1\neq G_2$.* The above result is stated as is in [@pilliat2022optimal] and is a consequence of Varshamov-Gilbert's lemma - see e.g. [@tsybakov]. For each $\iota \in [n/p]$, we fix a subset $G^{(\iota)}$ from $\mathcal{G}$, and its translation $G^{t(\iota)}=\{(\iota-1)p + x: x\in G^{(\iota)}\} \subset \overline G^{(\iota)}$. The experts of $G^{t(\iota)}$ will correspond to the $p/2$ experts in $\overline G^{(\iota)}$ that are above the $p/2$ experts in $\overline G^{(\iota)}\setminus G^{t(\iota)}$. We write ${\bf G}= (G^{t(1)}, \ldots, G^{t(n/p)})$ and $\boldsymbol{\mathcal{G}}$ the corresponding collection of all possible ${\bf G}$. Given any such ${\bf G}$, we shall define a distribution $\nu_{{\bf G}}$ of $B^{(\mathbf{full})}$, and equivalently of $M$ by .
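The collection $\mathcal G$ of Lemma 23 is obtained non-constructively via Varshamov-Gilbert, but for small $p$ a greedy random packing already illustrates the separation property. A hedged sketch (the helper `separated_family` is ours and makes no claim of matching the $c_0 p$ lower bound on $\log|\mathcal G|$):

```python
import random

def separated_family(p, trials=2000, seed=0):
    """Greedily collect size-p/2 subsets of {0,...,p-1} that are pairwise
    p/4-separated in symmetric difference."""
    rng = random.Random(seed)
    fam = []
    for _ in range(trials):
        s = frozenset(rng.sample(range(p), p // 2))
        if all(len(s ^ g) >= p // 4 for g in fam):
            fam.append(s)
    return fam
```

By construction every pair of retained subsets has symmetric difference at least $p/4$; the greedy acceptance rule enforces exactly the $p/4$-separation of the lemma.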
For $\iota\in [n/p]$, we sample uniformly a subset $Q^{(\iota)}$ of $q$ questions among the $d$ columns. In each of these $q$ columns, the corresponding rows of $B^{(\mathbf{full})}$ are equal to one. More formally, we have $$\label{eq:definition_B_Full} B^{(\mathbf{full})} = \sum_{\iota=1}^{n/p}\mathbf 1_{G^{t(\iota)}}{\mathbf 1}_{Q^{(\iota)}}^T \enspace .$$ As mentioned above, the definition of $B^{(\mathbf{full})}$ is the same as in [@pilliat2022optimal], if $\tilde d$ is set to be equal to $d$. They define a block constant matrix when $\tilde d < d$ to get an appropriate prior distribution for bi-isotonic matrices, but we do not need to do that here since we do not put any constraint on the rows of $M$. The matrix $M$ defined through [\[eq:definition_B\_Full\]](#eq:definition_B_Full){reference-type="eqref" reference="eq:definition_B_Full"} is isotonic up to a permutation of its rows and has coefficients in $[0,1]$, provided that the following inequality is satisfied: $$\label{eq:condition_v} \frac{\upsilon}{\sqrt{p\lambda}} \leq \frac{p}{8n} \ .$$ This constraint is strictly weaker than its counterpart (149) in [@pilliat2022optimal], and this is precisely what makes the lower bound in the isotonic setting larger than the lower bound in the bi-isotonic setting of [@pilliat2022optimal]. Our purpose will be to choose the parameters $p,q$ and $\upsilon > 0$ wisely to maximize the Bayesian risk with $\nu_{\mathbf{G}}$. **Step 2: Problem Reduction** In what follows, we use the same reduction arguments as in [@pilliat2022optimal]. Using the notation of [@pilliat2022optimal], we write $\mathbf{P}_{\bf G}^{(\mathbf{full})}$ and $\mathbf{E}_{\mathbf{G}}^{(\mathbf{full})}$ for the probability distribution and corresponding expectation of the data $(x_t, y_t)$, when $M$ is sampled according to $\nu_{\mathbf{G}}$. Since the distribution of the rows of $M$ in $\overline G^{(\iota)}$ only depends on $G^{t(\iota)}$, we write $\nu_{G^{t(\iota)}}$ for the distribution of these rows.
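To make Step 1 concrete, the following sketch samples one draw of $M = w\,{\mathbf 1}_{[d]}^T + \tfrac{\upsilon}{\sqrt{p\lambda}}B^{(\mathbf{full})}$ with one random set of $p/2$ high rows and $q$ active columns per strip (0-indexed; `sample_M` is our own illustration). Checking that a within-strip reordering of the rows makes every column nondecreasing mirrors the isotonicity claim under the constraint $\upsilon/\sqrt{p\lambda} \leq p/(8n)$:

```python
import random

def sample_M(n, d, p, q, lam, upsilon, seed=0):
    """One draw from the prior: staircase w plus a bump of height
    upsilon/sqrt(p*lam) on p/2 rows and q columns of each strip."""
    rng = random.Random(seed)
    shift = upsilon / (p * lam) ** 0.5
    M = [[(i // p) * p / (4 * n)] * d for i in range(n)]
    for strip in range(n // p):
        high = rng.sample(range(strip * p, (strip + 1) * p), p // 2)
        cols = rng.sample(range(d), q)
        for i in high:
            for k in cols:
                M[i][k] += shift
    return M
```

With $n=8$, $p=4$, $\lambda=1$ and $\upsilon=0.1$, the bump height $0.05$ is below $p/(8n)=0.0625$, so sorting the rows of each strip (base rows before high rows) yields column-isotonic matrices with entries in $[0,1]$.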
We also write $\mathbf{P}^{(\mathbf{full})}_{G^{t(\iota)}}$ and $\mathbf{E}^{(\mathbf{full})}_{G^{t(\iota)}}$ for the corresponding marginal distribution and corresponding expectation of the observations $(x_t,y_t)$ such that $(x_t)_1\in \overline{G}^{(\iota)}$. By the Poissonization trick, the distribution $\mathbf{P}^{(\mathbf{full})}_{\bf G}$ is a product measure of $\mathbf{P}^{(\mathbf{full})}_{G^{t(\iota)}}$ for $\iota=1,\ldots, n/p$. Let $\tilde \pi$ be any estimator of $\pi^*$. Let us provide more details than [@pilliat2022optimal] to prove that $\tilde \pi$ can be modified into an estimator $\hat \pi$ satisfying $\hat \pi(\overline G^{(\iota)}) = \overline G^{(\iota)}$ for all $\iota = 1, \dots, n/p$, and reducing the loss: $\|M_{\hat \pi^{-1}}- M_{\pi^{*-1}}\|_F^2 \leq \|M_{\tilde \pi^{-1}}- M_{\pi^{*-1}}\|_F^2$ almost surely, for all possible priors $\nu_{\mathbf{G}}$. For that purpose, we introduce $$N(\pi) = \sum_{\iota = 1}^{n/p}\sum_{i \in \overline G^{(\iota)}}{\mathbf 1}\{\pi(i) \not \in \overline G^{(\iota)}\} \enspace .$$ If $N(\tilde \pi) > 0$, then there exist $\iota_0$ and $i_0 \in \overline G^{(\iota_0)}$ such that $\tilde \pi(i_0) \in \overline G^{(\iota_1)}$ with $\iota_1 \neq \iota_0$. Then, $\tilde \pi$ being a permutation, we consider its associated cycle containing $i_0$, which we denote by $(i_1, \dots, i_{K})$. Let $(i'_1, i'_2, \dots, i'_L)$ be the elements of this cycle such that $\tilde \pi(i'_l) \not \in \overline G^{(\iota_l)}$, where $\iota_l$ satisfies $i'_l \in \overline G^{(\iota_l)}$. Then it holds that for any $l = 1, \dots, L-1$, $\tilde \pi(i'_l) \in \overline G^{(\iota_{l+1})}$, and $\tilde \pi(i'_L) \in \overline G^{(\iota_{1})}$. We now define $\tilde \pi'(i) = \tilde \pi(i)$ for all $i$, except on the cycle $(i'_1, \dots, i'_L)$ where we set $\tilde \pi'(i'_l) = \tilde \pi(i'_{l-1})$, with the convention $i'_0 = i'_L$.
Then, we easily check that $N(\tilde \pi') = N(\tilde \pi) - L < N(\tilde \pi)$, and that $\|M_{\tilde \pi'^{-1}}- M_{\pi^{*-1}}\|_F^2 \leq \|M_{\tilde \pi^{-1}}- M_{\pi^{*-1}}\|_F^2$ if condition is satisfied. We can therefore restrict ourselves to estimators $\hat \pi$ such that $\hat{\pi}(\overline{G}^{(\iota)})=\overline{G}^{(\iota)}$ for all $\iota$. There is however still another catch to obtain the same lines as in [@pilliat2022optimal]. Indeed, the restriction $\hat \pi^{(\iota)}$ of $\hat \pi$ to $\overline{G}^{(\iota)}$ is measurable with respect to the observation $Y$, but not necessarily with respect to $Y(\overline{G}^{(\iota)})$. Still, this restriction can be written as $\hat \pi^{(\iota)} = \hat \pi^{(\iota)}(Y(\overline{G}^{(\iota)}), Y([n]\setminus\overline{G}^{(\iota)}))$, and, for any $\alpha > 0$, there exists $y^{*(\iota)}(\alpha)$ such that $$\mathbf{E}^{(\mathbf{full})}_{\bf G}\left[\|M_{\hat \pi^{(\iota)-1}} - M_{\pi^{*-1}}\|_F^2\right] \geq \mathbf{E}^{(\mathbf{full})}_{\bf G}\left[\|M_{\bar \pi^{(\iota)-1}(\alpha)} - M_{\pi^{*-1}}\|_F^2\right] - \alpha \enspace ,$$ where $\bar \pi^{(\iota)} := \hat \pi^{(\iota)}(Y(\overline{G}^{(\iota)}), y^{*(\iota)}(\alpha))$ is measurable with respect to $Y(\overline{G}^{(\iota)})$.
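The cycle modification above can be implemented directly; each pass reroutes one offending cycle and strictly decreases $N(\pi)$. A sketch (our own, 0-indexed, with blocks $\overline G^{(\iota)} = \{(\iota-1)p,\dots,\iota p - 1\}$ and the convention $i'_0 = i'_L$):

```python
def fix_blocks(pi, p):
    """Reroute cycles of the permutation pi until every index maps inside
    its own block of size p, as in the reduction step of the proof."""
    pi = list(pi)
    block = lambda i: i // p
    while True:
        bad_all = [i for i in range(len(pi)) if block(pi[i]) != block(i)]
        if not bad_all:
            return pi
        i0 = bad_all[0]
        cycle = [i0]          # follow the cycle of pi containing i0
        j = pi[i0]
        while j != i0:
            cycle.append(j)
            j = pi[j]
        # indices of the cycle whose image leaves their block, in cycle order
        bad = [i for i in cycle if block(pi[i]) != block(i)]
        old = {i: pi[i] for i in bad}
        for l in range(len(bad)):
            pi[bad[l]] = old[bad[l - 1]]  # cyclic shift of the images
```

For example, `fix_blocks([2, 0, 1, 3], 2)` returns a permutation that fixes both blocks $\{0,1\}$ and $\{2,3\}$, as the proof guarantees.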
Since such a stable estimator can be constructed for any $\alpha > 0$, we finally obtain the inequality $$\begin{aligned} \mathcal{R}^*_{\mathrm{perm}}(n,d,\lambda)&\geq \inf_{\hat \pi:\ \hat{\pi}(\overline{G}^{(\iota)})=\overline{G}^{(\iota)} }\sup_{{\bf G}\in \boldsymbol{\mathcal{G}}} \sum_{\iota=1}^{ n/p } \mathbf{E}^{(\mathbf{full})}_{\bf G} \left[\|\big(M_{\hat \pi^{-1}} - M_{\pi^{*-1}}\big)_{\overline{G}^{(\iota)}}\|_F^2\right]\\ &\geq \sum_{\iota=1}^{ n/p } \inf_{\hat{\pi}^{(\iota)}} \sup_{G^{t(\iota)}}\mathbf{E}^{(\mathbf{full})}_{G^{t(\iota)}} \left[\|\big(M_{\hat \pi^{(\iota)-1}} - M_{\pi^{*-1}}\big)_{\overline{G}^{(\iota)}}\|_F^2\right]\ .\end{aligned}$$ The problem of estimating the permutation $\pi^*$ is now broken down into the $n/p$ smaller problems of estimating the subsets $G^{t(\iota)} \subset \overline G^{(\iota)}$. The squared Euclidean distance between two experts in $\overline G^{(\iota)}$ is $0$ if they are both in or both outside $G^{t(\iota)}$, and it is equal to $\frac{q\upsilon^2}{p\lambda}$ otherwise. Let us focus on the easier problem of estimating the subsets $G^{t(\iota)}$ and define $\hat{G}^{(\iota)}$ as the set of the $p/2$ experts that are ranked above according to $\hat \pi^{(\iota)}$. Then, we have that $$\|\big(M_{\hat \pi^{(\iota)-1}} - M_{\pi^{*-1}}\big)_{\overline{G}^{(\iota)}}\|_F^2=\frac{q\upsilon^2}{p\lambda}\big|\hat{G}^{(\iota)}\Delta G^{t(\iota)}| \geq \frac{q\upsilon^2}{4\lambda}{\mathbf 1}\{\hat{G}^{(\iota)}\neq G^{t(\iota)}\}\enspace ,$$ where the last inequality comes from the construction of the sets $G^{t(\iota)}$ by .
Hence, we deduce that $$\label{eq:lower_minimax_1} \mathcal{R}^*_{\mathrm{perm}}(n,d,\lambda)\geq \frac{q\upsilon^2}{4\lambda}\sum_{\iota=1}^{n/p}\inf_{\hat{\pi}^{(\iota)}} \sup_{G^{t(\iota)}} \mathbf{P}^{(\mathbf{full})}_{G^{t(\iota)}}\left[\hat{G}^{(\iota)}\neq G^{t(\iota)}\right] \ ,$$ so that by symmetry, $$\mathcal{R}^*_{\mathrm{perm}}(n,d,\lambda)\geq \frac{nq\upsilon^2}{4p\lambda}\inf_{\hat{G}^{(1)}}\sup_{G^{t(1)}}\mathbf{P}^{(\mathbf{full})}_{G^{t(1)}}\left[\hat{G}^{(1)}\neq G^{t(1)}\right]\enspace .$$ Consider the $p\times d$ matrices $N$ and $Y^{\downarrow}$ defined by $$N_{ik}= \sum_t {\mathbf 1}_{x_t=(i,k)} \ ;\quad \quad Y^{\downarrow}_{ik} = \sum_t {\mathbf 1}_{x_t=(i,k)} (y_t - w_i) \enspace ,$$ where $w$ is defined in . To simplify the notation, we write henceforth $G$ and $\hat{G}$ for $G^{t(1)}$ and $\hat{G}^{(1)}$ respectively. Writing $\mathbf{P}_{G}$ for the corresponding marginal distribution of $N$ and $Y^{\downarrow}$, the same sufficiency argument as in [@pilliat2022optimal] gives that $$\inf_{\hat{G}} \sup_{G} \mathbf{P}^{(\mathbf{full})}_{G}\left[\hat{G}\neq G\right]=\inf_{\hat{G}} \sup_{G} \mathbf{P}_{G}\left[\hat{G}\neq G\right]\ .$$ We finally obtain the following inequality: $$\label{eq:lower_minimax_G_T1} \mathcal{R}^*_{\mathrm{perm}}(n,d,\lambda)\geq \frac{nq\upsilon^2}{4p\lambda}\inf_{\hat{G}} \sup_{G} \mathbf{P}_{G}\left[\hat{G}\neq G\right]\enspace .$$ Let $\mathbf{P}_0$ be the distribution of $N$ and $Y^{\downarrow}$ corresponding to the case $\upsilon=0$. The entries of $N$ are independent and follow a Poisson distribution with parameter $\lambda$. Conditionally on $N_{ik}$, $Y^{\downarrow}_{ik}$ is a Gaussian variable with mean $0$ and variance $N_{ik}$.
Then, we deduce from Fano's inequality [@tsybakov] that $$\begin{aligned} \label{eq:KLFano} \inf_{\hat{G}} \sup_{G\in \mathcal{G}} \mathbf P_{G}(\hat{G} \neq G) &\geq 1 - \frac{1 + \max_{G \in \mathcal{G}} \mathrm{KL}(\mathbf P_{G}||\mathbf P_0)}{\log(|\mathcal{G}|)}\ , \end{aligned}$$ where $\mathrm{KL}(.||.)$ stands for the Kullback-Leibler divergence. The following lemma gives an upper bound on these Kullback-Leibler divergences. It can be found in [@pilliat2022optimal], with the slightly stronger assumption that $p\lambda \geq 1$. **Lemma 24** (Lemma J.2 of [@pilliat2022optimal]). *There exists a numerical constant $c_1$ such that the following holds true. If $\upsilon^2 \leq 1 \land {p \lambda}$, then for any $G\in \mathcal{G}$, we have $$\begin{aligned} \mathrm{KL}(\mathbf P_G||\mathbf P_0)&\leq c_1\frac{\upsilon^2q^2}{d}\ \enspace .\end{aligned}$$* The proof of can be found in [@pilliat2022optimal], with $\tilde n := p$ and $\tilde d := d$. The slightly stronger assumption that $p\lambda \geq 1$ made in Lemma J.2 of [@pilliat2022optimal] is in fact not necessary. Indeed, it is only used to prove that $\mathcal{I}:= \lambda p(e^{\upsilon^2/(\lambda p)} - 1) \leq c_1'\upsilon^2$ in the proof of Lemma J.2 in [@pilliat2022optimal], and this inequality remains valid under the assumption of , that is $u^2 := \upsilon^2/(\lambda p) \leq 1$. **Step 3: Choice of suitable parameters $p,q$ and $\upsilon$** By combining [\[eq:lower_minimax_G\_T1\]](#eq:lower_minimax_G_T1){reference-type="eqref" reference="eq:lower_minimax_G_T1"} and [\[eq:KLFano\]](#eq:KLFano){reference-type="eqref" reference="eq:KLFano"} with Lemma [Lemma 24](#lem:KLborn){reference-type="ref" reference="lem:KLborn"} and the different constraints on the parameters , we directly obtain the following proposition. **Proposition 25**.
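The remark that $p\lambda \geq 1$ is not needed boils down to the elementary bound $e^u - 1 \leq (e-1)u$ for $u \in [0,1]$, applied with $u = \upsilon^2/(\lambda p)$. A quick numerical confirmation (the function name `I_term` is ours):

```python
import math

def I_term(upsilon2, lam_p):
    """I = lam*p*(exp(upsilon^2/(lam*p)) - 1), as in the proof of Lemma J.2,
    with upsilon2 = upsilon^2 and lam_p = lambda * p."""
    return lam_p * (math.exp(upsilon2 / lam_p) - 1.0)

# e^u - 1 <= (e - 1) u on [0, 1] gives I <= (e - 1) * upsilon^2,
# with no requirement that lam_p >= 1
for u2, lp in [(0.5, 0.5), (0.1, 0.2), (0.3, 2.0), (1.0, 1.0)]:
    print(I_term(u2, lp), (math.e - 1) * u2)
```

Note that the second pair has $\lambda p = 0.2 < 1$, yet the bound still holds since $u^2 = 0.5 \leq 1$.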
*There exists a numerical constant $c$ such that the following holds. If $p \in \{2, \dots, n\}$ and $q \in \{1, \dots, d\}$ are dyadic integers, and $\upsilon$ satisfies the condition $$\begin{aligned} \label{eq:cond_1} \upsilon & \leq& c\left[ 1\wedge \sqrt{p \lambda} \wedge \frac{\sqrt{pd}}{q} \wedge \sqrt{\lambda}\frac{p^{3/2}}{n}\right]\enspace ,\end{aligned}$$ then we have $$\mathcal{R}^*_{\mathrm{perm}}(n,d,\lambda)\geq c \frac{n q \upsilon^2 }{p\lambda } \ .$$* The above proposition is a direct consequence of what precedes it, so we do not provide a separate proof. Let us now apply for different parameters $p,q$ and $\upsilon$ to conclude the proof of . First, using the lower bound in the bi-isotonic case -- see Theorem 4.1 of [@pilliat2022optimal] -- we have for some constant $c'$ that $$\label{eq:first_lb} \mathcal{R}^*_{\mathrm{perm}}(n,d,\lambda) \geq c' (n/\lambda \land nd) \enspace .$$ In what follows, we write $\left\lfloor x\right\rfloor_{\mathrm{dya}}$ for the greatest integer that is a power of two and smaller than $x$. Let us consider the following inequality: $$\label{eq:assumption_lambda} \lambda \geq 1/d \lor n^2/d^3 \enspace .$$ In the case where is not satisfied, then $n\sqrt{d/\lambda}\land n^{2/3}\sqrt{d}\lambda^{-5/6} \leq n/\lambda \land nd$ and the lower bound of is proven by . We subsequently assume that is satisfied. **Case 1**: $\lambda n \leq 1$. In this case, we choose $q = \left\lfloor\sqrt{\tfrac{d}{\lambda}}\right\rfloor_{\mathrm{dya}}$ and $p = n/2$. We have that $q \in \{1, \dots, d\}$ since $\lambda \leq 1$ in that case and, by assumption , $\lambda \geq 1/d$. We deduce from applied with $\upsilon/c = \sqrt{p\lambda} = \sqrt{pd}/q$ that $$\mathcal{R}^*_{\mathrm{perm}}(n,d,\lambda) \geq c'' n\sqrt{\tfrac{d}{\lambda}} \enspace .$$ **Case 2**: $\lambda \in [\tfrac{1}{n}, 8n^2]$.
In this case, we choose $q = \left\lfloor\tfrac{n^{1/3}\sqrt{d}}{\lambda^{1/6}}\right\rfloor_{\mathrm{dya}}$ and $p = \left\lfloor\tfrac{n^{2/3}}{\lambda^{1/3}}\right\rfloor_{\mathrm{dya}}$. We deduce from that $q \leq d$. Since $\lambda \in [\tfrac{1}{n}, 8n^2]$, we also necessarily have that $q \geq 1$, $p\geq 2$ and $p \leq n$. Applying the above proposition with $\upsilon/c = 1 = \sqrt{pd}/q = \sqrt{\lambda}p^{3/2}/n$, we deduce that $$\mathcal{R}^*_{\mathrm{perm}}(n,d,\lambda) \geq c'' \tfrac{n^{2/3}\sqrt{d}}{\lambda^{5/6}}\enspace .$$ **Case 3**: $\lambda \geq 8n^2$. When $\lambda$ satisfies this condition - which is out of the scope of but discussed below - we choose $q = \left\lfloor\sqrt{d}\right\rfloor_{\mathrm{dya}}$ and $p = 2$. Applying the above proposition with $\upsilon/c = 1$, we deduce that $$\mathcal{R}^*_{\mathrm{perm}}(n,d,\lambda) \geq c'' \tfrac{n\sqrt{d}}{\lambda}\enspace .$$ We have proved that for any $n,d$ and $\lambda$, we have the lower bound $$\mathcal{R}^*_{\mathrm{perm}}(n,d,\lambda) \geq c'' \left[n\sqrt{\tfrac{d}{\lambda}} \wedge \tfrac{n^{2/3}\sqrt{d}}{\lambda^{5/6}}\wedge \tfrac{n\sqrt{d}}{\lambda} + n/\lambda \right] \land nd \enspace .$$ This concludes in particular the proof of , stated for $\lambda \in [1/d, 8n^2]$. ◻ # Proof of {#proof-of-2} Let us introduce $\mathbf{P}_k = \Lambda(X_{\cdot k} X_{\cdot k}^T - \mathbb{E}[X_{\cdot k} X_{\cdot k}^T])\Lambda \in \mathbb{R}^{p \times p}$, so that $$\label{eq:operator_norm_reduction} \Lambda(XX^T - \mathbb{E}[XX^T])\Lambda = \sum_{k=1}^q \mathbf{P}_k \enspace .$$ **Lemma 26**.
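The three regimes of the case analysis can be packaged into a small helper choosing $(p,q)$; a sketch (our own, using floating-point dyadic floors, so the returned values may sit one dyadic step below the exact formulas):

```python
import math

def dyadic_floor(x):
    """Largest power of two that is at most x (requires x >= 1)."""
    return 2 ** int(math.floor(math.log2(x)))

def lb_parameters(n, d, lam):
    """(p, q) as in Cases 1-3 of the lower-bound proof (n, d powers of two)."""
    if lam * n <= 1:                       # Case 1
        return n // 2, dyadic_floor((d / lam) ** 0.5)
    if lam <= 8 * n ** 2:                  # Case 2
        p = dyadic_floor(n ** (2 / 3) / lam ** (1 / 3))
        q = dyadic_floor(n ** (1 / 3) * d ** 0.5 / lam ** (1 / 6))
        return p, q
    return 2, dyadic_floor(d ** 0.5)       # Case 3
```

For admissible inputs satisfying the assumption $\lambda \geq 1/d \lor n^2/d^3$, the returned pair stays in the ranges $p \in \{2,\dots,n\}$, $q \in \{1,\dots,d\}$ required by Proposition 25.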
*There exists a numerical constant $\kappa'''_3$ such that for any\ $x \in [0, (\kappa'''_3(\sigma^2 r_{\Lambda}+ K^2\log(q)))^{-1}]$, we have $$\|\mathbb{E}[e^{x\mathbf{P}_k}]\|_{\mathrm{op}} \leq \exp(\kappa'''_3 x^2(\sigma^2 + \sigma^4 p)) + \frac{1}{q} \enspace .$$* Moreover, applying the matrix Chernoff technique for the independent matrices $\mathbf{P}_{k}$ (see Lemmas 6.12 and 6.13 of [@wainwright2019high]), we have for any $t > 0$ that $$\begin{aligned} \log(\mathbb{P}(\|\sum_{k=1}^q\mathbf{P}_k \|_{\mathrm{op}} \geq t)) &\leq \log(\mathop{\mathrm{tr}}\left[\mathbb{E}[e^{x \sum_{k=1}^q\mathbf{P}_k} ]\right]) -xt \\ &\leq \log\left(\mathop{\mathrm{tr}}\left[\exp\left(\sum_{k=1}^q\log(\mathbb{E}[e^{x\mathbf{P}_k}])\right) \right]\right) -xt \\ &\leq \log(p) + \sum_{k =1}^q \|\log(\mathbb{E}[e^{x \mathbf{P}_k}]) \|_{\mathrm{op}} - xt \\ &= \log(p) + \sum_{k =1}^q \log(\|\mathbb{E}[e^{x \mathbf{P}_k}] \|_{\mathrm{op}}) - xt \enspace .\end{aligned}$$ Applying , it holds for any $x \in [0, (\kappa'''_3(\sigma^2 r_{\Lambda}+ K^2\log(q)))^{-1}]$ that $$\begin{aligned} \sum_{k =1}^q \log(\|\mathbb{E}[e^{x \mathbf{P}_k}] \|_{\mathrm{op}}) &\leq q\log\left(\exp\left(\kappa'''_3 x^2 (\sigma^2 + \sigma^4 p)\right) + \frac{1}{q}\right) \\ &\leq \kappa'''_3 x^2 (\sigma^2q + \sigma^4 pq) + 1 \enspace ,\end{aligned}$$ where in the last inequality we used the fact that for any $a\geq 1$ and $u> 0$, $\log(a + u) \leq \log(a) + u/a$.
Hence we obtain $$\begin{aligned} \log(\mathbb{P}(\|\sum_{k=1}^q\mathbf{P}_k \|_{\mathrm{op}} \geq t)) \leq \log(ep) + \kappa'''_3 x^2 (\sigma^2q + \sigma^4 pq) -xt \enspace .\end{aligned}$$ Hence if $t \leq 2\frac{\sigma^2 q + \sigma^4 pq}{\sigma^2 r_{\Lambda}+ K^2\log(q)}$, we choose $x = \frac{t}{2\kappa'''_3(\sigma^2 q + \sigma^4 pq)}$, and if $t > 2\frac{\sigma^2 q + \sigma^4 pq}{\sigma^2 r_{\Lambda}+ K^2\log(q)}$ we choose $x = \tfrac{1}{\kappa'''_3(\sigma^2 r_{\Lambda}+ K^2\log(q))}$, which gives $$\mathbb{P}(\|\sum_{k=1}^q\mathbf{P}_k \|_{\mathrm{op}} \geq t) \leq e p \exp\left(-\frac{1}{\kappa'''_3}\left(\frac{t^2}{4(\sigma^2 q + \sigma^4 pq)} \wedge \frac{t}{2(\sigma^2 r_{\Lambda}+ K^2\log(q))}\right)\right)\enspace .$$ We deduce that with probability at least $1-\delta''$, it holds that $$\|\sum_{k=1}^q\mathbf{P}_k \|_{\mathrm{op}} \leq \kappa \left[\sqrt{(\sigma^4pq + \sigma^2 q)\log(p/\delta'')} + (\sigma^2r_{\Lambda} + K^2\log(q))\log(p/\delta'')\right]\enspace ,$$ for some numerical constant $\kappa$. *Proof of .* Since $\|\Lambda X_{\cdot k} X_{\cdot k}^T\Lambda \|_{\mathrm{op}} = \|\Lambda X_{\cdot k}\|_2^2$, we state the following lemma controlling the moment generating function of the $L_2$ norm of the projection $\Lambda X_{\cdot k}$: **Lemma 27**. * [\[lem:prop_concentration_norme_2\_bellec\]]{#lem:prop_concentration_norme_2_bellec label="lem:prop_concentration_norme_2_bellec"} There exists a numerical constant $\kappa'''_0$ such that for any $x \leq \tfrac{1}{\kappa'''_0K^2}$ we have $$\mathbb{E}[e^{x \| \Lambda X_{\cdot k} \|_2^2}] \leq e^{\kappa'''_0 \sigma^2 r_{\Lambda} x} \enspace .$$* Now we define the event $\xi_{\mathrm{op}} := \{ \max_{k=1,\dots,q} \| \Lambda X_{\cdot k}\|_2^2 \leq \kappa'''_0(\sigma^2 r_{\Lambda} + K^2\log(q^3))\}$, where $\kappa'''_0$ is the numerical constant given by .
Applying the same lemma together with the Chernoff bound, a union bound over all $k = 1, \dots, q$ gives $$\mathbb{P}(\xi_{\mathrm{op}}^c) \leq \tfrac{1}{q^2} \enspace .$$ We consider in what follows the order relation $\preceq$ induced by the cone of nonnegative symmetric matrices $\mathbb{S}_p^+$, namely $X' \preceq X''$ if and only if $X'' - X' \in \mathbb{S}_p^+$. Under the event $\xi_{\mathrm{op}}$, it holds that for any $u \geq 2$, $$\begin{aligned} \mathbf{P}_k^u &\preceq \|\mathbf{P}_k \|_{\mathrm{op}}^{u-2} \mathbf{P}_k^2 \\ &\preceq \|\Lambda(X_{\cdot k} X_{\cdot k}^T - \mathbb{E}[X_{\cdot k} X_{\cdot k}^T]) \Lambda \|_{\mathrm{op}}^{u-2} \mathbf{P}_k ^2\\ &\preceq ( \kappa'''_1(\sigma^2 r_{\Lambda} + K^2\log(q)))^{u-2}\mathbf{P}_k^2 \enspace ,\end{aligned}$$ for some numerical constant $\kappa'''_1$ (depending on $\kappa'''_0$). In the third inequality we used the definition of $\xi_{\mathrm{op}}$ and the fact that $\mathbb{E}[\|\Lambda X_{\cdot k}\|_2^2] \leq \kappa'''_0 \sigma^2 r_{\Lambda}$. We now give an upper bound on $\|\mathbb{E}[\mathbf{P}_k^2]\|_{\mathrm{op}}$, which is the operator norm of the variance of $\mathbf{P}_k$ as defined in Section 6 of [@wainwright2019high]. Remark that since any matrix $U \in \mathbb{R}^{p \times p}$ satisfies $U\Lambda U^T \preceq U U^T$, we have that $\mathbf{P}_k^2 \preceq \Lambda (X_{\cdot k} X_{\cdot k}^T - \mathbb{E}[X_{\cdot k} X_{\cdot k}^T])^2 \Lambda$.
Let us compute the expectation of $(X_{\cdot k} X_{\cdot k}^T - \mathbb{E}[X_{\cdot k} X_{\cdot k}^T])^2$: $$\mathbb{E}[(X_{\cdot k} X_{\cdot k}^T - \mathbb{E}[X_{\cdot k} X_{\cdot k}^T])^2]_{ij} = \sum_{l \in [p]} \mathbb{E}[( X_{ik} X_{lk} - \mathbb{E}[ X_{ik} X_{lk}])( X_{lk} X_{jk} - \mathbb{E}[ X_{lk} X_{jk}])] \enspace .$$ The off-diagonal terms are zero, and the $i^{th}$ diagonal element satisfies: $$\label{eq:diagonal_coef_X} \mathbb{E}[(X_{\cdot k} X_{\cdot k}^T - \mathbb{E}[X_{\cdot k} X_{\cdot k}^T])^2]_{ii} = \mathbb{E}[( X_{ik}^2 - \mathbb{E}[ X_{ik}^2])^2] + \sum_{j \neq i}\mathbb{E}[ X_{ik}^2]\mathbb{E}[ X_{jk}^2] \enspace .$$ By assumption , the first term of satisfies $$\mathbb{E}[( X_{ik}^2 - \mathbb{E}[ X_{ik}^2])^2] \leq 4\mathbb{E}[X_{ik}^4] \leq 48\sigma^2K^2\enspace .$$ The second term of is smaller than $\sigma^4p$, still by assumption . Hence there exists a numerical constant $\kappa'''_2$ such that $$\|\mathbb{E}[\mathbf{P}_k^2]\|_{\mathrm{op}} \leq \|\mathbb{E}[(X_{\cdot k} X_{\cdot k}^T - \mathbb{E}[X_{\cdot k} X_{\cdot k}^T])^2]\|_{\mathrm{op}} \leq \kappa'''_2 (\sigma^2 + \sigma^4 p) \enspace .$$ Now, by the definition of the exponential of matrices, the triangle inequality and the fact that $\mathbf{P}_k$ is centered, we have $$\label{eq:decomposition_evenement} \|\mathbb{E}[\exp(x\mathbf{P}_k)]\|_{\mathrm{op}} \leq 1 + \sum_{u \geq 2} \frac{x^u}{u!}\|\mathbb{E}[\mathbf{P}_k^u{\mathbf 1}_{\xi_{\mathrm{op}}}]\|_{\mathrm{op}} + \sum_{u \geq 2} \frac{x^u}{u!}\|\mathbb{E}[\mathbf{P}_k^u{\mathbf 1}_{\xi^c_{\mathrm{op}}}]\|_{\mathrm{op}} \enspace .$$ By definition of $\xi_{\mathrm{op}}$, together with the bound on $\|\mathbb{E}[\mathbf{P}_k^2]\|_{\mathrm{op}}$ and the fact that $\mathbf{P}^2_k {\mathbf 1}_{\xi_{\mathrm{op}}} \preceq \mathbf{P}^2_k$, it holds for any $x \in [0, (\kappa'''_1(\sigma^2 r_{\Lambda} + K^2\log(q)))^{-1}]$ that $$\begin{aligned} \sum_{u \geq 2} \frac{x^u}{u!}\|\mathbb{E}[\mathbf{P}_k^u{\mathbf 1}_{\xi_{\mathrm{op}}}]\|_{\mathrm{op}} &\leq x^2 \|\mathbb{E}[\mathbf{P}_k^2]\|_{\mathrm{op}}\sum_{u\geq 2}
\frac{x^{u-2}}{u!} (\kappa'''_1(\sigma^2 r_{\Lambda} + K^2\log(q)))^{u-2}\\ &\leq x^2 \kappa'''_2(\sigma^2 +\sigma^4 p) \sum_{u \geq 0} \frac{x^u}{(u+2)!} (\kappa'''_1(\sigma^2 r_{\Lambda} + K^2\log(q)))^u \\ &\leq \exp(\kappa'''_3 x^2 (\sigma^2 +\sigma^4 p)) - 1 \enspace , \end{aligned}$$ for some numerical constant $\kappa'''_3$. We now control the second term of under the complementary event $\xi_{\mathrm{op}}^c$, for any $x \in [0, (2\kappa'''_0(\sigma^2 r_{\Lambda} + K^2\log(q)))^{-1}]$: $$\begin{aligned} \sum_{u \geq 2} \frac{x^u}{u!}\|\mathbb{E}[\mathbf{P}_k^u{\mathbf 1}_{\xi^c_{\mathrm{op}}}]\|_{\mathrm{op}} &~\leq \mathbb{E}[\exp(x\|\mathbf{P}_k \|_{\mathrm{op}}){\mathbf 1}_{\xi_{\mathrm{op}}^c}] \\ &\overset{(a)}{\leq} \sqrt{\frac{1}{q^2}}\sqrt{\mathbb{E}[\exp(2x \|\mathbf{P}_k \|_{\mathrm{op}})]} \\ &\overset{(b)}{\leq} \frac{1}{q}\exp(x \kappa'''_0 \sigma^2 r_{\Lambda}) \\ &~\leq \frac{1}{q}\enspace , \end{aligned}$$ where in $(a)$ we used the Cauchy-Schwarz inequality for real random variables and in $(b)$ we applied . ◻ *Proof of .* We use the result of [@bellec2019concentration], which is a generalization of the Hanson-Wright inequality to random vectors whose coefficients satisfy a Bernstein moment condition. Assumption 1 of [@bellec2019concentration] is satisfied with parameters $\sigma^2$ and $K$, and we have the following upper bound on the moment generating function of the quadratic form $\|\Lambda X_{\cdot k} \|_2^2 = X_{\cdot k}^T \Lambda X_{\cdot k}$: $$\mathbb{E}[e^{x \| \Lambda X_{\cdot k} \|_2^2}] \leq e^{x \mathbb{E}[\| \Lambda X_{\cdot k} \|_2^2]} e^{\kappa'''_0 x^2 K^2\sigma^2\|\Lambda \|_F^2} \leq e^{\kappa'''_1 x \sigma^2 r_{\Lambda}}\enspace ,$$ for any $x$ satisfying condition $(6)$ of [@bellec2019concentration], that is $128 x\|\Lambda \|_{\mathrm{op}} K^2 \leq 1$. For the last inequality, we used the fact that $\|\Lambda\|_F^2 = \mathop{\mathrm{rank}}(\Lambda)$. We obtain the result by choosing $\kappa'''_2 = \kappa'''_1 \lor 128$.
◻ [^1]: The authors consider the isotonic model as a subcase of a seriation model, where each column of $M_{\pi^{*-1}}$ is only assumed to be unimodal. [^2]: We are indeed mostly interested in the more realistic sparse observation regime (meaning $\lambda \leq 1$). The case $\lambda \leq 1/d$ leads to the trivial minimax bound of order $nd$ for both reconstruction and estimation, as in this case we have less than one observation per expert on average. As for the case $\lambda > 1/d$ but $\lambda \leq 1/n$, we have less than one observation per question on average, and this leads to a minimax risk of order $n\sqrt{d/\lambda}$ for permutation estimation and of order $nd$ for matrix reconstruction.
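As a numerical sanity check of the diagonal identity for $\mathbb{E}[(X_{\cdot k}^T X_{\cdot k} - \mathbb{E}[X_{\cdot k}^T X_{\cdot k}])^2]$ used in the proofs above, the following sketch enumerates the product law of a small vector with independent centered coordinates exactly; the two-point distribution and the dimension $p=3$ are illustrative choices, not the paper's model:

```python
import itertools
import numpy as np

# Independent centered coordinates taking value -2 w.p. 1/3 and 1 w.p. 2/3,
# so E[x_i] = 0, E[x_i^2] = 2 and E[x_i^4] = 6.
vals, probs = np.array([-2.0, 1.0]), np.array([1.0 / 3.0, 2.0 / 3.0])
p = 3

def expectation(f):
    """Exact expectation of f(x) over the finite product law."""
    total = 0.0
    for combo in itertools.product(range(2), repeat=p):
        x = vals[list(combo)]
        total = total + probs[list(combo)].prod() * f(x)
    return total

EM = expectation(lambda x: np.outer(x, x))          # E[x x^T] = 2 * I
E2 = expectation(lambda x: (np.outer(x, x) - EM) @ (np.outer(x, x) - EM))

# Predicted diagonal: E[(x_i^2 - E[x_i^2])^2] + sum_{j != i} E[x_i^2] E[x_j^2]
m2 = float((probs * vals**2).sum())                 # = 2
m4 = float((probs * vals**4).sum())                 # = 6
pred_diag = (m4 - m2**2) + (p - 1) * m2**2          # = 2 + 2 * 4 = 10

print(np.round(np.diag(E2), 6))                     # -> [10. 10. 10.]
```

The off-diagonal entries of `E2` vanish and each diagonal entry equals $2 + 2\cdot 4 = 10$, matching the displayed identity term by term.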
--- bibliography: - biblio.bib --- **Spatial growth-fragmentations and excursions from hyperplanes** by William Da Silva[^1] and Juan Carlos Pardo[^2] In this paper, we are interested in the self-similar growth-fragmentation process that shows up when slicing half-space excursions of a $d$-dimensional Brownian motion from hyperplanes. Such a family of processes turns out to be a family of spatial self-similar growth-fragmentation processes driven by an isotropic self-similar Markov process. These spatial growth-fragmentations can be seen as multitype growth-fragmentation processes, in the sense of [@DP23], where the set of types is $\mathbb{S}^{d-2}$, the unit sphere of $\mathbbm{R}^{d-1}$. In order to characterise such a family of processes, we study their spinal description similarly to the monotype [@Ber-GF] and multitype [@DP23] settings. Finally, we extend our study to the case where the $d$-dimensional Brownian motion is replaced by an isotropic Markov process whose first $d-1$ coordinates are driven by an isotropic stable Lévy process and whose remaining coordinate is an independent standard real-valued Brownian motion. **Keywords.** Growth-fragmentation process, self-similar Markov process, Markov additive process, spinal decomposition, excursion theory. # Introduction and main results Let us consider a $d$-dimensional Brownian motion for $d\ge 2$, here denoted by $B^d=(B^d_t, t\ge 0)$. In this paper, we are interested in a typical Brownian excursion above the hyperplane $\mathcal{H}=\{(x_1, \ldots, x_d)\in \mathbbm{R}^{d}: x_d=0\}$ starting from $\overline{0}$ and ending at $z\in \mathcal{H}$. Such excursions were first studied by Burdzy in [@Bur], and can be easily described by taking as the last coordinate a positive Brownian excursion and as the first $d-1$ coordinates a $(d-1)$-dimensional Brownian motion stopped at the lifetime of the positive Brownian excursion.
Brownian excursions away from the hyperplane $\mathcal{H}$ form a Poisson point process with characteristic measure $\mathfrak{n}$ defined on the space of continuous functions starting from $\overline{0}$ and ending at a given point in $\mathcal{H}$. Moreover, the latter can be described in terms of the product of the one-dimensional Itô excursion measure and the law of a $(d-1)$-dimensional Brownian motion stopped at the lifetime of the one-dimensional excursion, see Section [5](#sec: excursion){reference-type="ref" reference="sec: excursion"} for further details. Motivated by the recent work of Aïdékon and Da Silva [@AD] in the two-dimensional case, we study the family of processes that arises when slicing upper half-space excursions from hyperplanes of the form $\mathcal{H}_a=\{(x_1, \ldots, x_d)\in \mathbbm{R}^{d}: x_d=a\}$, for $a>0$. That is, if the excursion from $\mathcal{H}$ hits $\mathcal{H}_a$, it will make a countable number of excursions above it, here denoted by $(e^{a,+}_i, i\ge 1)$. For every excursion $e^{a,+}_i$, we let $\Delta e^{a,+}_i$ be the difference between the endpoint and the starting point of $e^{a,+}_i$. Since both points lie in the same hyperplane, $\Delta e^{a,+}_i$ is a vector in $\mathbb{R}^{d-1}$, and therefore the family $(\Delta e^{a,+}_i; i\ge 1)$ is a collection of vectors in $\mathbb{R}^{d-1}$, which we suppose to be ranked in decreasing order of their norm. The main result of this paper describes the law of the process $(\Delta e^{a,+}_i; i\ge 1)$ indexed by $a$ in terms of what we call *spatial self-similar growth-fragmentations* (in $\mathbbm{R}^{d-1}$). Spatial self-similar growth-fragmentations in $\mathbbm{R}^{d}$ are extensions of multitype self-similar growth-fragmentations, recently introduced by the authors in [@DP23], with the difference that the set of types is given by $\mathbb{S}^{d-1}$, the unit sphere of $\mathbbm{R}^{d}$.
Let $X=(X_t, t\ge 0)$ be an isotropic self-similar Markov process in $\mathbb{R}^d$. The construction of the associated spatial self-similar growth-fragmentation is very similar to [@Ber-GF] and much simpler than the multitype case [@DP23]. Roughly speaking, a spatial self-similar growth-fragmentation describes a cloud of elements in $\mathbb{R}^d$, which we may refer to as atoms, that may grow and dislocate in a binary way. Initially, the cloud of atoms starts from one particle (the common ancestor of all future particles) whose ($d$--dimensional) *size* is given by the process $X$, and which will split in a binary way whenever $X$ has a jump. More precisely, at each jump of size $y=X(t)-X(t^-)$, we add to the cloud at time $t$ a new particle with initial size $-y$. These binary divisions are conservative, in the sense that the size of the child and the size of the parent just after division exactly sum up to the size of the parent before division. The newborn particles evolve independently of the parent, and independently of one another, according to a copy of $X$. The next generations are constructed in the same way by repeating the same division process for each new individual in the cloud. We are then interested in the collection $\mathbf{X}(a)$ of sizes of cells alive at time $a\ge 0$, ranked in decreasing order of their norm. Self-similar growth-fragmentation processes first appeared in [@Ber-GF] and were recently extended to the multitype setting, first in [@AD] and subsequently in [@DS] and [@DP23]. Importantly, the self-similar growth-fragmentation process we just defined is not included in the aforementioned works. We refer to Section [3.1](#construction){reference-type="ref" reference="construction"} for a formal construction of this object and for the precise statement of its branching structure. In order to state the first main result of this paper, let us first introduce some notation.
Let $\mathfrak{n}_+$ be the excursion measure of the excursions above the hyperplane $\mathcal{H}$, that is, the restriction of $\mathfrak{n}$ to $\mathcal{H}^+=\{(x_1, \ldots, x_d)\in \mathbbm{R}^{d}: x_d>0\}$, and introduce the following family of measures $\gamma_{\mathbf{x}}$, $\mathbf{x}\in\mathbbm{R}^{d-1}$, which are associated with the Brownian excursions from the hyperplane $\mathcal{H}$ *conditioned* on ending at $(\mathbf{x},0)$, obtained by disintegrating $\mathfrak{n}_+$ over its endpoint. For $r>0$ and $\mathbf{x}\in \mathbbm{R}^{d-1}$, we write $\Pi_r$ for the law of a Bessel bridge from $0$ to $0$ over $[0,r]$, and $\mathbbm{P}_{r}^{0\rightarrow \mathbf{x}}$ for the law of a $(d-1)$--dimensional Brownian bridge from $0$ to $\mathbf{x}$ with duration $r$. See [@AD] for the case $d=2$. For all $\mathbf{x}\in \mathbb{R}^{d-1}\setminus\{0\}$, we let $$\gamma_{\mathbf{x}} := \int_0^{\infty} \mathrm{d}r \frac{\mathrm{e}^{-\frac{1}{2r}}}{2^{\frac{d}{2}}\Gamma\left(\frac{d}{2}\right) r^{\frac{d}{2}+1}} \mathbbm{P}_{r|\mathbf{x}|^2}^{0\rightarrow \mathbf{x}} \otimes \Pi_{r|\mathbf{x}|^2}.$$ **Theorem 1**. *Under $\gamma_{\mathbf{x}}$, the process $(\Delta e^{a,+}_i; i\ge 1)$ is a spatial self-similar growth-fragmentation in $\mathbbm{R}^{d-1}$ whose distribution is the same as that of the spatial self-similar growth-fragmentation described by a $(d-1)$--dimensional isotropic Cauchy process.* The previous result can be extended to the case where the $d$-dimensional Brownian motion is replaced by a process that we denote by $Z^d=(X^{d-1}, Z)$, where $X^{d-1}$ is a $(d-1)$-dimensional isotropic $\alpha$-stable Lévy process, with $\alpha\in (0,2)$, and $Z$ is an independent real-valued Brownian motion. The excursions of the process $Z^d$ away from the hyperplane $\mathcal{H}$ also form a Poisson point process with characteristic measure $\mathfrak{n}^\alpha$ defined on the space of càdlàg functions starting from $\overline{0}$ and ending at a given point in $\mathcal{H}$.
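Before turning to the stable case, the slicing procedure behind Theorem 1 can be illustrated on a toy discrete analogue: a positive simple-random-walk excursion plays the role of the last coordinate, and an independent planar walk the role of the first $d-1$ coordinates. The sketch below is only a hedged discretisation (all names and parameters are illustrative), collecting the displacements $\Delta e^{a,+}_i$ of the sub-excursions above level $a$ and ranking them by decreasing norm:

```python
import numpy as np

rng = np.random.default_rng(0)

def positive_excursion(n, rng):
    """Sample a simple-random-walk excursion: v[0] = v[n] = 0, v > 0 inside."""
    while True:
        steps = rng.choice([-1, 1], size=n)
        v = np.concatenate([[0], np.cumsum(steps)])
        if v[-1] == 0 and (v[1:-1] > 0).all():
            return v

n = 40
v = positive_excursion(n, rng)                      # vertical coordinate
# Independent planar walk standing in for the first d - 1 = 2 coordinates.
H = np.concatenate([np.zeros((1, 2)),
                    np.cumsum(rng.normal(size=(n, 2)), axis=0)])

# Slice at height a: each maximal interval where v > a starts and ends at
# level a and contributes the displacement Delta e_i = H[end] - H[start].
a = 1
deltas = []
t = 0
while t < n:
    if v[t] == a and v[t + 1] == a + 1:             # upward crossing of level a
        s = t
        t += 1
        while v[t] != a:                            # run until back at level a
            t += 1
        deltas.append(H[t] - H[s])
    else:
        t += 1

deltas.sort(key=np.linalg.norm, reverse=True)       # rank by decreasing norm
print(len(deltas))
```

A built-in consistency check: the sum of the $\Delta e^{a,+}_i$ equals the total horizontal increment accumulated while the vertical walk is above level $a$.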
Similarly to the Brownian case, the measure $\mathfrak{n}^\alpha$ can be described in terms of the product of the one-dimensional Itô excursion measure and the law of a $(d-1)$-dimensional isotropic stable process stopped at the lifetime of the one-dimensional Brownian excursion; see Section [5](#sec: excursion){reference-type="ref" reference="sec: excursion"} for further details. As before, we let $\mathfrak{n}^\alpha_+$ be the excursion measure of the excursions of the process $Z^d$ above the hyperplane $\mathcal{H}$, that is, the restriction of $\mathfrak{n}^\alpha$ to $\mathcal{H}^+$, and introduce the following family of measures $\gamma^\alpha_{\mathbf{x}}$, $\mathbf{x}\in\mathbbm{R}^{d-1}$, which are associated with the excursions of $Z^{d}$ from the hyperplane $\mathcal{H}$ *conditioned* on ending at $(\mathbf{x},0)$, by disintegrating $\mathfrak{n}^\alpha_+$ over its endpoint. For $\mathbf{x}\in\mathbbm{R}^{d-1}$ and $r>0$, let $\mathbbm{P}_{r}^{\alpha, 0\rightarrow \mathbf{x}}$ denote the law of an $\alpha$--stable bridge from $0$ to $\mathbf{x}$ over $[0,r]$. See [@DS] for the case $d=2$. In addition, we write $(p^{\alpha}_r, r\ge 0)$ for the transition densities of $X^{d-1}$. For all $\mathbf{x}\in \mathbb{R}^{d-1}\setminus\{0\}$, we let $$\gamma^{\alpha}_{\mathbf{x}} = \int_0^{\infty} \mathrm{d}r \frac{p_1^{\alpha}(r^{-1/\alpha}|\mathbf{x}|\,\mathbf{1})}{2\sqrt{2\pi}r^{1+\frac{\omega_d}{\alpha}}} \mathbbm{P}_{r|\mathbf{x}|^2}^{\alpha, 0\rightarrow \mathbf{x}} \otimes \Pi_{r|\mathbf{x}|^2},$$ where $\mathbf{1}$ denotes the "north pole" in $\mathbb{S}^{d-2}$ and $\omega_d:= d-1+\frac{\alpha}{2}$. Recall that $(\Delta e^{a,+}_i; i\ge 1)$ denotes the collection of vectors in $\mathbb{R}^{d-1}$ obtained by slicing the excursion at height $a$. **Theorem 2**.
*Under $\gamma^\alpha_{\mathbf{x}}$, the process $(\Delta e^{a,+}_i; i\ge 1)$ is a spatial self-similar growth-fragmentation whose distribution is the same as that of the spatial self-similar growth-fragmentation described by an isotropic $(d-1)$--dimensional $\frac{\alpha}{2}$--stable process.* **Related works.** Growth-fragmentation models have deep connections to random geometry. The first connections were made by Bertoin, Curien and Kortchemski [@BCK] and Bertoin, Budd, Curien and Kortchemski [@BBCK], who pointed out a remarkable class of (positive) self-similar growth-fragmentations obtained from the scaling limit of perimeter processes (see [@Budd]) in peeling explorations of Boltzmann planar maps. These growth-fragmentation processes are closely related to stable Lévy processes with stability parameter $\theta \in (\frac12, \frac32]$. Later, Miller, Sheffield and Werner constructed the same growth-fragmentation processes in the continuum [@MSW] for $1<\theta\le \frac32$ from a CLE exploration on a quantum disc. Moreover, the boundary case $\theta=\frac32$ shows up in the metric construction of Le Gall and Riera [@LG-Rie] when slicing a Brownian disc at heights. As already mentioned above, the critical Cauchy case $\theta=1$ appears when slicing a Brownian half-plane excursion at heights [@AD], and the case $\frac12<\theta<1$ arises by considering half-plane excursions of $Z^2=(X,Z)$, where $X$ is an $\alpha$-stable Lévy process with $\alpha\in (1,2)$ and $Z$ is an independent Brownian motion; see [@DS]. We stress that the masses in [@AD] and [@DS] are signed, with the sign depending on the time orientation of the excursions. In particular, the growth-fragmentation of [@DS] only yields the critical case $\theta=1$ of [@BBCK] after removing the negative mass in the system. Geometrically, this negative mass encodes the loops of a loop $O(n)$--model, whose gasket is described by the Boltzmann planar maps studied in [@BCK; @BBCK].
This prompted [@DS] to provide a framework for self-similar *signed* growth-fragmentations. Finally, we mention that the notion of self-similar signed growth-fragmentations was recently extended, in a natural way, to a finite number of types in [@DP23], relying deeply on the interplay between self-similar Markov processes and Markov additive processes, see [@KP]. The present paper uses a similar viewpoint, albeit a more intricate one due to the presence of uncountably many types. The paper is organised as follows. In Section [2](#sec: ssmp){reference-type="ref" reference="sec: ssmp"}, we provide some background on isotropic self-similar Markov processes and their Lamperti-Kiu representation in terms of Markov additive processes. In Section [3](#sec: spatial GF){reference-type="ref" reference="sec: spatial GF"}, spatial growth-fragmentation processes are formally constructed. Moreover, *isotropic cumulant functions* are introduced, whose roots lead to martingales for spatial growth-fragmentations. The spinal decomposition is described in Section [4](#sec: spine isotropic){reference-type="ref" reference="sec: spine isotropic"}. Finally, the proofs of Theorems [Theorem 1](#thm:exc){reference-type="ref" reference="thm:exc"} and [Theorem 2](#thm:excal){reference-type="ref" reference="thm:excal"} are given in Section [5](#sec: excursion){reference-type="ref" reference="sec: excursion"}. **Acknowledgments.** We thank Andreas Kyprianou for some interesting discussions. W.D.S. acknowledges the support of the Austrian Science Fund (FWF) grant P33083-N on "Scaling limits in random conformal geometry". J.C.P. acknowledges the support of CONACyT grant A1-S-33854. # Isotropic self-similar Markov processes {#sec: ssmp} We start with some background on self-similar Markov processes in $\mathbbm{R}^d$. We lay emphasis on their connection to Markov additive processes, through the Lamperti-Kiu representation. We refer to [@KP] for a detailed treatment of these questions.
**Markov additive processes.** Let $E$ be a locally compact, complete and separable metric space, endowed with a cemetery state $\dagger$. We also let $(\xi(t),\Theta(t), t\ge 0)$ be a regular Feller process in $\mathbbm{R}\times E$ with probabilities ${\tt P}_{x,\theta}$, $x\in\mathbbm{R}$, $\theta\in E$, on $(\Omega,\mathcal{F},\mathbbm{P})$, and denote by $(\mathcal{G}_t)_{t\ge 0}$ the natural standard filtration associated with $(\xi,\Theta)$. We say that $(\xi,\Theta)$ is a *Markov additive process* (MAP for short) if for every bounded measurable $f: \mathbbm{R}\times E \rightarrow \mathbbm{R}$, $s,t\ge 0$ and $(x,\theta)\in\mathbbm{R}\times E$, $$\mathtt{E}_{x,\theta}\Big[f(\xi(t+s)-\xi(t),\Theta(t+s))\mathds{1}_{\{t+s<\varsigma\}} \Big| \mathcal{G}_t\Big] = \mathds{1}_{\{t<\varsigma\}}\mathtt{E}_{0,\Theta(t)}\Big[f(\xi(s),\Theta(s))\mathds{1}_{\{s<\varsigma\}}\Big],$$ where $\varsigma := \inf\{t>0, \, \Theta(t)= \dagger\}$. Observe that the process $\Theta$ is itself a regular Feller process in $E$. We call $\xi$ the *ordinate* and $\Theta$ the *modulator* of the MAP. The notation $$\mathtt{P}_{\theta}:=\mathbb{P}(\;\cdot\; |\,\xi(0)=0 \,\text{and}\, \Theta(0)=\theta)\quad\textrm{ for }\quad \theta\in E,$$ will be in force throughout the paper. Whilst MAPs have found a prominent role in e.g. classical applied probability models for queues and dams when $\Theta$ is a Markov chain with a finite state space (see for instance [@Asm] and [@Iva]), the case that $\Theta$ is a general Markov process has received somewhat less attention. However, this case has been treated in the literature before, see for instance [@Cin] and references therein. MAPs are a natural extension of a Lévy process in the sense that $\Theta$ is an arbitrary well-behaved Markov process and $((\xi(t), \Theta(t))_{t\ge 0}, \mathtt{P}_{x,\theta})$ is equal in law to $((\xi(t)+x, \Theta(t))_{t\ge 0}, \mathtt{P}_{\theta})$. 
Moreover, when $\Theta$ is a Markov chain with a finite state space, a more concrete description can be given of the ordinate process $\xi$. Indeed, it can be thought of as a concatenation of Lévy processes whose laws depend on the current type in $E$ given by $\Theta$. Here, we are interested in the specific case $E=\mathbb{S}^{d-1}$, which describes the angles of a process in $\mathbbm{R}^d$. **Self-similar Markov processes in $\mathbbm{R}^d$ and isotropy.** Let $E=\mathbb{S}^{d-1}$ and $\alpha\in \mathbbm{R}$. Let $X$ be a Markov process in $\mathbbm{R}^d$, which under $\mathbbm{P}_{\mathbf{x}}$, $\mathbf{x}\in\mathbbm{R}^d\setminus\{0\}$, starts from $\mathbf{x}$. We say that $X$ is a *self-similar* process with index $\alpha$ if for all $c>0$ and all $\mathbf{x}\in\mathbbm{R}^d$, $$\Big( (cX(c^{-\alpha}s), s\ge 0), \mathbbm{P}_{\mathbf{x}}\Big) \overset{d}{=} \Big( (X(s), s\ge 0), \mathbbm{P}_{c\mathbf{x}}\Big).$$ The Lamperti representation of $\alpha$-self-similar $\mathbbm{R}^d$--valued Markov processes is the content of the following proposition, which is attributed to [@Kiu], with additional clarification from [@ACGZ], building on the original work of [@Lam]. **Proposition 3**. *Let $X$ be a self-similar $\mathbbm{R}^d$--valued Markov process with index $\alpha$. Then there exists a Markov additive process $(\xi,\Theta)$ in $\mathbbm{R}\times \mathbb{S}^{d-1}$ such that $$\label{eq: Lamperti R^d} X(t) = \mathrm{e}^{\xi(\varphi(t))} \Theta(\varphi(t)), \quad t\le I_{\varsigma}:= \int_0^{\varsigma} \mathrm{e}^{\alpha\xi(s)}\mathrm{d}s,$$ where $$\varphi(t) := \inf\left\{s>0, \; \int_0^s \mathrm{e}^{\alpha \xi(u)}\mathrm{d}u > t \right\},$$ and $I_{\varsigma}$ is the lifetime of $X$. Conversely, any process $X$ satisfying [\[eq: Lamperti R\^d\]](#eq: Lamperti R^d){reference-type="eqref" reference="eq: Lamperti R^d"} is a self-similar Markov process in $\mathbbm{R}^d$ with index $\alpha$.* In the previous statement, we implicitly used the convention that $0\times \dagger = 0$.
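The time change in Proposition 3 is straightforward to implement numerically. The sketch below (a minimal discretisation, not the paper's construction) inverts $A(s)=\int_0^s \mathrm{e}^{\alpha\xi(u)}\,\mathrm{d}u$ on a grid; for the deterministic ordinate $\xi(s)=s$ with $\alpha=1$ one has $A(s)=\mathrm{e}^s-1$, hence $\varphi(t)=\log(1+t)$ and $|X(t)|=\mathrm{e}^{\varphi(t)}=1+t$, which the check recovers:

```python
import numpy as np

# Discretised Lamperti-Kiu time change: invert A(s) = int_0^s e^{alpha*xi(u)} du
# on a uniform grid, then reconstruct |X(t)| = e^{xi(phi(t))}.
ds = 1e-4
s_grid = np.arange(0.0, 3.0, ds)

def phi_of_t(xi_samples, alpha, t):
    """Return phi(t) = inf{s : A(s) > t}, with A computed by a left-point rule."""
    A = np.concatenate([[0.0], np.cumsum(np.exp(alpha * xi_samples[:-1]) * ds)])
    k = np.searchsorted(A, t, side="right")         # first grid time with A > t
    return min(k, len(xi_samples) - 1) * ds

# Deterministic sanity check: xi(s) = s and alpha = 1 give A(s) = e^s - 1,
# hence phi(t) = log(1 + t) and |X(t)| = 1 + t.
xi = s_grid.copy()
phi2 = phi_of_t(xi, 1.0, 2.0)
X_norm = np.exp(xi[int(round(phi2 / ds))])
print(phi2, X_norm)   # approximately log(3) ~ 1.0986 and 3.0
```

The same inversion applies verbatim to a sampled random ordinate $\xi$; only the grid resolution `ds` and the horizon are illustrative choices here.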
The integral $\zeta = I_{\varsigma}$ is the lifetime of $X$ until it eventually hits $0$, which acts as an absorbing state. The analysis of MAPs with uncountable state space is much more intricate than the countable case. One way to capture their properties is using the compensation formula. It was proved in [@Cin] that any MAP $(\xi,\Theta)$ on $\mathbbm{R}\times \mathbb{S}^{d-1}$ is associated with a so-called *Lévy system* $(H,L)$, made up of an increasing additive functional $t\mapsto H_t$ of $\Theta$ and a transition kernel $L$ from $\mathbb{S}^{d-1}$ to $\mathbbm{R}^*\times \mathbb{S}^{d-1}$, with $\mathbbm{R}^*=\mathbbm{R}\setminus\{0\}$, such that, for all $\theta\in\mathbb{S}^{d-1}$, $$\int_{\mathbbm{R}^*} (1 \wedge |x|^2) \, L_{\theta}(\mathrm{d}x \times \{\theta\}) <\infty.$$ More importantly, this Lévy system satisfies the following *compensation formula* for all bounded measurable $F:(0,\infty)\times \mathbbm{R}^2\times \mathbb{S}^{d-1}\times \mathbb{S}^{d-1} \rightarrow \mathbbm{R}$, and all $(x,\theta)\in \mathbbm{R}\times \mathbb{S}^{d-1}$, $$\begin{gathered} \label{eq: compensation MAP} \mathtt{E}_{x,\theta}\left[\sum_{s>0} F(s,\xi(s^-),\Delta\xi(s),\Theta(s^-),\Theta(s))\mathds{1}_{\{\xi(s^-)\ne\xi(s) \, \text{or}\, \Theta(s^-)\ne\Theta(s)\}} \right] \\ = \mathtt{E}_{x,\theta} \left[ \int_0^{\infty} \mathrm{d}H_s \int_{\mathbbm{R}^*\times \mathbb{S}^{d-1}} L_{\Theta(s)}(\mathrm{d}x,\mathrm{d}\Phi) F(s,\xi(s),x,\Theta(s),\Phi) \right].\end{gathered}$$ For the remainder of the paper we restrict ourselves to the usual setting $H_t = t$. Because of the bijection in Proposition [Proposition 3](#prop: Lamperti R^d){reference-type="ref" reference="prop: Lamperti R^d"}, this naturally puts us in a restricted class of self-similar Markov processes through the underlying driving MAP. 
Observe how [\[eq: compensation MAP\]](#eq: compensation MAP){reference-type="eqref" reference="eq: compensation MAP"} compares with the compensation formula for Lévy processes: $L$ essentially plays the role of a Lévy measure, albeit now depending on the current angle from which the process jumps. A nice subclass of self-similar Markov processes is provided by *isotropic* processes, and we shall mainly restrict ourselves to this setting. We say that a self-similar Markov process $X$ is isotropic if, for every isometry $U$ and all $\mathbf{x}\in\mathbbm{R}^d\setminus\{0\}$, the law of $(U\cdot X(t), \mathbbm{P}_{\mathbf{x}})$ is $\mathbbm{P}_{U\cdot \mathbf{x}}$. Equivalently, this means [@KP Theorem 11.14] that for all $(x,\theta)\in\mathbbm{R}\times \mathbb{S}^{d-1}$, the law of $((\xi,U\cdot \Theta), \mathtt{P}_{x,\theta})$ is $\mathtt{P}_{x, U\cdot \theta}$. The key advantage of restricting to isotropic processes is the following proposition, which is [@KP Corollary 11.15]. **Proposition 4**. *If $X$ is an isotropic self-similar Markov process, then the underlying ordinate $\xi$ is a Lévy process.* Let us briefly mention that the proof of [Proposition 4](#prop:isotropy Levy){reference-type="ref" reference="prop:isotropy Levy"} relies on the fact that, by isotropy, $|X|$ is a positive self-similar Markov process, to which we can apply the classical Lamperti theory. This result opens the way to many useful Lévy tools, such as the Lévy-Itô description of $\xi$, the compensation and exponential formulas, and the existence of an exponential martingale with the corresponding change of measure. We will make heavy use of these additional properties when describing growth-fragmentations driven by isotropic processes in Section [3](#sec: spatial GF){reference-type="ref" reference="sec: spatial GF"}.
Note that this notion of isotropy in particular covers the $\alpha$--stable isotropic Lévy case [@Kyp Theorem 3.13], for which the Lévy system is given by $H_t=t$ and $$L_{\theta}(\mathrm{d}x,\mathrm{d}\Phi) = \frac{c(\alpha)\mathrm{e}^{dx}}{|\mathrm{e}^x\Phi-\theta|^{\alpha+d}} \mathrm{d}x\sigma_{d-1}(\mathrm{d}\Phi),$$ where $c(\alpha) = 2^{\alpha-1} \pi^{-d} \frac{\Gamma((d+\alpha)/2)\Gamma(d/2)}{|\Gamma(-\alpha/2)|}$, and $\sigma_{d-1}(\mathrm{d}\Phi)$ is the surface measure on the sphere $\mathbb{S}^{d-1}$. See also [@BW] for the planar case. Numerous applications of Lévy systems can be found in [@KRS; @KRSY], to name but a few. # Spatial isotropic growth-fragmentation processes {#sec: spatial GF} We now extend the framework of [@Ber-GF] and [@DP23] to isotropic $\mathbbm{R}^d$--valued Markov processes for $d\ge 2$. We exclude the case $d=1$, since it can be deduced from the construction of [@DP23] by considering a symmetric self-similar Markov process, or from [@DS]. It is important to note that neither the construction in [@DP23] nor that in [@DS] requires the isotropy assumption. In what follows, $X$ will be an isotropic $\mathbbm{R}^d$--valued self-similar Markov process with index $\alpha$, as defined in the last paragraph of Section [2](#sec: ssmp){reference-type="ref" reference="sec: ssmp"}, which under $\mathbbm{P}_{\mathbf{x}}$, $\mathbf{x}\in \mathbbm{R}^{d}\setminus \{0\}$, starts from $\mathbf{x}$. For technical reasons, we shall assume that $X$ is either absorbed after time $\zeta$ at some cemetery state $\partial$, or that $X$ converges to $0$ at infinity, for all starting points. We also recall that $(\xi,\Theta)$ denotes the MAP associated with $X$. ## Construction of spatial growth-fragmentation processes {#construction} We now construct a cell system whose building block is the isotropic self-similar Markov process $X$. The construction of the cell system in this case is similar to [@Ber-GF] and simpler than the multitype case in [@DP23].
It is important to note that the construction actually holds without the self-similarity or isotropy assumptions. This cell system will start from a single particle whose size is given by the process $X$, and which will split in a binary way whenever $X$ has a jump. Let $\Delta X(t) := X(t)-X(t^-)$, for $t\ge 0$, denote the possible jump of $X$ at time $t$. At any jump time $t$ of $X$, one places a new particle in the system and, conditionally on its size $-\Delta X(t)$ at birth, each of these newborn particles evolves independently with law $\mathbbm{P}_{-\Delta X(t)}$. Then, one repeats this construction for any such child, thus creating the second generation, and so on. We may now construct the cell system associated with $X$ and indexed by the tree $\mathbbm{U}:=\bigcup_{i\ge 0} \mathbbm{N}^i$, with $\mathbbm{N}=\{1,2,\ldots\}$ and $\mathbbm{N}^0:=\{\varnothing\}$, where $\varnothing$ is the label of the *Eve cell*. For $u:=(u_1,\ldots,u_i)\in \mathbbm{U}$, we denote by $|u|=i$ the *generation* of $u$. In this tree, the offspring of $u$ will be labelled by the lists $(u_1,\ldots,u_i,k)$, with $k\in \mathbbm{N}$. Let $b_{\varnothing}=0$ and let $\mathcal{X}_{\varnothing}$ be distributed as $X$ started from some vector $\mathbf{x}\in\mathbbm{R}^d\setminus\{0\}$. At each jump of $\mathcal{X}_{\varnothing}$, place a new particle with initial size given by minus the jump size (so that there is conservation at splitting events). Since $X$ converges at infinity, it is possible to rank the sequence of jump sizes and times $(\mathbf{x}_1,\beta_1),(\mathbf{x}_2,\beta_2),\ldots$ of $-\mathcal{X}_{\varnothing}$ in decreasing lexicographical order of the norms of the $\mathbf{x}_i$'s. Given this sequence of jumps, we define the first generation $\mathcal{X}_i, i\in \mathbbm{N},$ of our cell system as independent processes with respective laws $\mathbbm{P}_{\mathbf{x}_i}$, and we set $b_i = b_{\varnothing}+\beta_i$ for the *birth time* of $i$ and $\zeta_i$ for its lifetime.
The law of the $n$-th generation is constructed likewise, given generations $1,\ldots, n-1$. A cell $u'=(u_1,\ldots,u_{n-1})\in \mathbbm{N}^{n-1}$ gives birth to the cell $u=(u_1,\ldots,u_{n-1},i)$, with lifetime $\zeta_u$, at time $b_u = b_{u'}+\beta_{i}$, where $\beta_{i}$ is the time of the $i$-th jump of $\mathcal{X}_{u'}$ (with respect to the previous ranking). Moreover, conditionally on the jump sizes and times of $\mathcal{X}_{u'}$, $\mathcal{X}_u$ has law $\mathbbm{P}_{\mathbf{y}}$ with $-\mathbf{y}=\Delta \mathcal{X}_{u'}(\beta_i)$, and is independent of the other daughter cells at generation $n$. In this construction, the cells are not labelled chronologically. However, this still uniquely defines the law $\mathcal{P}_{\mathbf{x}}$ of the cell system $(\mathcal{X}_u(t), u\in\mathbbm{U}, t\ge 0)$ started from $\mathbf{x}$. Finally, we introduce the *(spatial) growth-fragmentation process* $$\mathbf{X}(t) := \left\{\left\{ \mathcal{X}_u(t-b_u), \; u\in\mathbbm{U}\; \text{and} \; b_u\le t<b_u+\zeta_u \right\}\right\}, \quad t\ge 0,$$ describing the collection of cells alive at time $t\ge 0$ (the double brackets here denote multisets). We define $\mathbf{P}_{\mathbf{x}}$ to be the law of the growth-fragmentation $\mathbf{X}$ started at $\mathbf{x}$. We point out that one can view this construction as a *multitype growth-fragmentation* process, where the types correspond to the directions (in the case $d=1$, the type is the sign). The set of types is therefore the sphere $\mathbb{S}^{d-1}$, which is uncountable, so that the construction does not quite fall into the framework developed in [@DP23].
From this standpoint, note that the type corresponding to the daughter cell created by the jump $\Delta X(t)$ is, up to time-change, $$\Theta_{\Delta}(t) := \frac{\Theta(t^-)-\mathrm{e}^{\Delta \xi(t)}\Theta(t)}{|\Theta(t^-)-\mathrm{e}^{\Delta \xi(t)}\Theta(t)|}.$$ Next, let $$\overline{\mathbf{X}}(t) := \{\{(\mathcal{X}_u(t-b_u),|u|), \; u\in\mathbbm{U}\; \text{and} \; b_u\le t < b_u+\zeta_u\}\}, \quad t\ge 0.$$ We shall denote by $(\mathcal{F}_t, t\ge 0)$ the natural filtration associated with $\mathbf{X}$, and by $(\overline{\mathcal{F}}_t, t\ge 0)$ the one associated with $\overline{\mathbf{X}}$. A measurable function $f:\mathbb{R}^d\to [0,\infty)$ is called *excessive* for $\mathbf{X}$ if $$\mathbb{E}_{\mathbf{x}}\left[\sum_{z\in \mathbf{X}(t) }f(z)\right]\le f(\mathbf{x}),$$ for all $t\ge 0$ and $\mathbf{x}\in \mathbb{R}^d\setminus \{0\}$. If such an excessive function exists, one can rank the elements $X_1(t), X_2(t), \ldots$ of $\mathbf{X}(t)$ in descending order of their norm for any fixed $t$. Under the same assumption, we have the following. **Proposition 5**. *Assume that $\mathbf{X}$ has an excessive function. Then for any $t\ge 0$, conditionally on $\overline{\mathbf{X}}(t)=\{\{(\mathbf{x}_i,n_i)\}\}$, the process $(\overline{\mathbf{X}}(t+s), s\ge 0)$ is independent of $\overline{\mathcal{F}}_t$ and distributed as $$\bigsqcup_{i\ge 1} \overline{\mathbf{X}}_i(s) \circ \theta_{n_i},$$ where the $\overline{\mathbf{X}}_i, i\ge 1,$ are independent processes distributed as $\overline{\mathbf{X}}$ under $\mathcal{P}_{\mathbf{x}_i}$, $\theta_n$ is the shift operator $\{\{(\mathbf{z}_i, k_i), i\ge 1\}\}\circ \theta_n := \{\{(\mathbf{z}_i, k_i+n), i\ge 1\}\}$, and $\sqcup$ denotes the union of multisets.* We omit the proof of the preceding result, since it follows from the same arguments as Proposition 2 in [@Ber-GF].
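The conservative binary splitting described above can be mimicked with a toy piecewise-constant driving process: between jumps a cell keeps its value, at each jump the parent's value is contracted and rotated, and the newborn cell receives exactly the complementary vector. The following sketch (all constants and names are illustrative choices, not part of the paper) verifies that the vector sum over all cells alive at any time equals the Eve cell's initial value:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy conservative cell system in R^2: between jumps a cell keeps its value;
# at each jump the parent is contracted and rotated, and the newborn cell
# receives exactly the complementary vector (binary, conservative splitting).
MAX_GEN, JUMPS, C = 3, 2, 0.4
cells = []                      # entries: (jump_times, values_after_jumps, gen)

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def grow(birth, x, gen):
    times, vals = [birth], [np.array(x, dtype=float)]
    if gen < MAX_GEN:           # cells of the last generation are frozen
        t = birth
        for _ in range(JUMPS):
            t += rng.exponential(1.0)
            new = C * rot(rng.uniform(0.0, 2.0 * np.pi)) @ vals[-1]
            grow(t, vals[-1] - new, gen + 1)   # child gets minus the jump
            times.append(t)
            vals.append(new)
    cells.append((times, vals, gen))

grow(0.0, np.array([1.0, 0.0]), 0)

def X_sum(t):
    """Vector sum over all cells alive at time t (cells never die here)."""
    tot = np.zeros(2)
    for times, vals, _ in cells:
        if times[0] <= t:
            tot += vals[np.searchsorted(times, t, side="right") - 1]
    return tot

print(len(cells))               # 1 + 2 + 4 + 8 = 15 cells in total
```

Because each split conserves mass exactly, `X_sum(t)` equals the initial vector $(1,0)$ at every time, regardless of the random jump times and rotations.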
## The isotropic cumulant function and genealogical martingales {#sec: cumulant} We are first interested in exhibiting martingales in the spatial growth-fragmentation case, as in the monotype [@Ber-GF] and multitype [@DP23] settings. It turns out that the exponents $\omega$ corresponding to these martingales are found as the roots of an *isotropic cumulant function*, which generalises the cumulant function $\kappa$ in [@Ber-GF; @BBCK]. Recall that, as readily seen from the rotational invariance property, the radial part of $X$ is a positive self-similar Markov process, so that the ordinate $\xi$ is in fact a Lévy process. We will make extensive use of this argument and its consequences. Let us start with a simple but typical calculation: for $q\ge 0$ and $\theta\in\mathbb{S}^{d-1}$, we aim at computing the quantity $\mathbbm{E}_{\theta}\Big[ \sum_{0<t<\zeta} |\Delta X(t)|^{q}\Big]$ in terms of the MAP characteristics of $X$. We will now consider the Lévy system $(H,L)$ of $(\xi,\Theta)$ (see Section [2](#sec: ssmp){reference-type="ref" reference="sec: ssmp"}), and we take as usual $H_t=t$ to avoid notational clutter.
Since we want to sum over all $t$'s, we can omit the Lamperti-Kiu time-change between $X$ and $(\xi,\Theta)$, so that $$\mathbbm{E}_{\theta}\Bigg[ \sum_{0<t<\zeta} |\Delta X(t)|^{q}\Bigg] = \mathtt{E}_{0,\theta}\Bigg[ \sum_{0<t<\varsigma} \mathrm{e}^{q\xi(t^-)}|\Theta(t^-)-\mathrm{e}^{\Delta \xi(t)}\Theta(t)|^{q}\Bigg].$$ The compensation formula [\[eq: compensation MAP\]](#eq: compensation MAP){reference-type="eqref" reference="eq: compensation MAP"} for Markov additive processes then yields $$\label{eq:compensation cumulant} \mathbbm{E}_{\theta}\Bigg[ \sum_{0<t<\zeta} |\Delta X(t)|^{q}\Bigg] = \mathtt{E}_{0,\theta}\left[ \int_{0}^{\infty} \mathrm{d}t \, \mathrm{e}^{q\xi(t)} \int_{\mathbbm{R}^*\times \mathbb{S}^{d-1}} L_{\Theta(t)}(\mathrm{d}x, \mathrm{d}\Phi)|\Theta(t)-\mathrm{e}^{x}\Phi|^{q}\right].$$ Remark that the integral $$\int_{\mathbbm{R}^*\times \mathbb{S}^{d-1}} L_{\theta}(\mathrm{d}x, \mathrm{d}\Phi)|\theta-\mathrm{e}^{x}\Phi|^{q},$$ does not depend on the angle $\theta$, since isotropy entails that if $\theta,\theta'\in\mathbb{S}^{d-1}$, and $U$ is an isometry mapping $\theta$ to $\theta'$, then $L_{\theta'}(\mathrm{d}x, \mathrm{d}\Phi) = L_{\theta}(\mathrm{d}x,U^{-1}\mathrm{d}\Phi)$. More generally, the image measures $\widetilde{L}_{\theta}(\mathrm{d}y,\mathrm{d}\phi)$ of $L_{\theta}(\mathrm{d}x,\mathrm{d}\Phi)$ through the mapping $(y,\phi) = (\log |\theta - \mathrm{e}^x\Phi|, \frac{\theta - \mathrm{e}^x\Phi}{|\theta - \mathrm{e}^x\Phi|})$ satisfy the same relationship. 
Indeed, for any nonnegative measurable function $F$, since $|U|=1$, we have $$\label{eq: Ltilde isotropy} \begin{split} \int_{\mathbbm{R}^*\times \mathbb{S}^{d-1}} L_{\theta'}(\mathrm{d}x, \mathrm{d}\Phi)&F\left(\log|\theta'-\mathrm{e}^{x}\Phi|, \frac{\theta'-\mathrm{e}^{x}\Phi}{|\theta'-\mathrm{e}^{x}\Phi|}\right) \\ &= \int_{\mathbbm{R}^*\times \mathbb{S}^{d-1}} L_{\theta}(\mathrm{d}x, U^{-1}\mathrm{d}\Phi)F\left(\log|U\theta-\mathrm{e}^{x}\Phi|, \frac{U\theta-\mathrm{e}^{x}\Phi}{|U\theta-\mathrm{e}^{x}\Phi|}\right) \\ &= \int_{\mathbbm{R}^*\times \mathbb{S}^{d-1}} L_{\theta}(\mathrm{d}x, \mathrm{d}\varphi)F\left(\log|U\theta-\mathrm{e}^{x}U\varphi|, \frac{U\theta-\mathrm{e}^{x}U\varphi}{|U\theta-\mathrm{e}^{x}U\varphi|}\right) \\ &= \int_{\mathbbm{R}^*\times \mathbb{S}^{d-1}} L_{\theta}(\mathrm{d}x, \mathrm{d}\varphi)F\left(\log|\theta-\mathrm{e}^{x}\varphi|, U \frac{\theta-\mathrm{e}^{x}\varphi}{|\theta-\mathrm{e}^{x}\varphi|}\right). \end{split}$$ Hence $L_{\theta'}(\mathrm{d}x, \mathrm{d}\Phi) = L_{\theta}(\mathrm{d}x,U^{-1}\mathrm{d}\Phi)$. Singling out the image measure $\widetilde{L}(\mathrm{d}y,\mathrm{d}\phi)$ of $L_{\theta}(\mathrm{d}x,\mathrm{d}\Phi)$ when $\theta=(1,0,\ldots,0)$ say, [\[eq:compensation cumulant\]](#eq:compensation cumulant){reference-type="eqref" reference="eq:compensation cumulant"} boils down to $$\mathbbm{E}_{\theta}\Bigg[ \sum_{0<t<\zeta} |\Delta X(t)|^{q}\Bigg] = \mathtt{E}_{0,\theta}\left[ \int_{0}^{\infty} \mathrm{d}t \, \mathrm{e}^{q\xi(t)}\right] \int_{\mathbbm{R}^*\times \mathbb{S}^{d-1}} \widetilde{L}(\mathrm{d}y, \mathrm{d}\phi)\mathrm{e}^{qy}.$$ Recall that $\xi$ is a possibly killed Lévy process and assume that its Laplace exponent $\psi$ satisfies $\psi(q)<0$, otherwise the first integral blows up. 
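For completeness, when $\psi(q)<0$ the first factor is computed by a standard Fubini argument: since $\mathtt{E}_{0,\theta}\big[\mathrm{e}^{q\xi(t)}\big]=\mathrm{e}^{t\psi(q)}$ (with the convention that $\mathrm{e}^{q\xi(t)}=0$ after the killing time), $$\mathtt{E}_{0,\theta}\left[ \int_{0}^{\infty} \mathrm{d}t \, \mathrm{e}^{q\xi(t)}\right] = \int_{0}^{\infty} \mathrm{e}^{t\psi(q)}\, \mathrm{d}t = -\frac{1}{\psi(q)}.$$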
Then we are left with $$\mathbbm{E}_{\theta}\Bigg[ \sum_{0<t<\zeta} |\Delta X(t)|^{q}\Bigg] = 1-\frac{\kappa(q)}{\psi(q)},$$ where we have set $$\label{eq:kappa} \kappa(q) = \psi(q) + \int_{\mathbbm{R}^*\times \mathbb{S}^{d-1}} \widetilde{L}(\mathrm{d}y, \mathrm{d}\phi)\mathrm{e}^{qy}.$$ The previous calculations finally show that $$\label{eq:sum kappa} \mathbbm{E}_{\theta}\Bigg[ \sum_{0<t<\zeta} |\Delta X(t)|^{q}\Bigg] = \begin{cases} 1-\displaystyle\frac{\kappa(q)}{\psi(q)} & \text{if} \; \kappa(q)<\infty \; \text{and} \; \psi(q)<0, \\[2mm] +\infty & \text{otherwise}. \end{cases}$$ We call the function $\kappa$ the *isotropic cumulant function*. Its roots will lead to martingales for the growth-fragmentation cell system through the identity [\[eq:sum kappa\]](#eq:sum kappa){reference-type="eqref" reference="eq:sum kappa"}. Thus, throughout the paper we make the following assumption: **(H)** *There exists $\omega\ge 0$ such that $\kappa(\omega)=0$.* Notice that, as readily seen from [\[eq:kappa\]](#eq:kappa){reference-type="eqref" reference="eq:kappa"}, $\kappa$ is a convex function, so that there exist at most two such roots. For such a root $\omega$, we obtain by self-similarity and [\[eq:sum kappa\]](#eq:sum kappa){reference-type="eqref" reference="eq:sum kappa"} that for all $\mathbf{x}\in\mathbbm{R}^d\setminus\{0\}$, $$\label{eq: omega generalised.0} \mathbbm{E}_{\mathbf{x}}\Bigg[ \sum_{0<t<\zeta} |\Delta X(t)|^{\omega}\Bigg] = |\mathbf{x}|^{\omega}.$$ Following the strategy of Section 3.2 in [@DS], we now show that the roots of $\kappa$ pave the way for remarkable martingales. The proof of the following result uses exactly the same arguments as Proposition 3.6 in [@DS]. **Proposition 6**.
*Under $\mathbbm{P}_{\mathbf{x}}$, for all $\mathbf{x}\in \mathbbm{R}^d\setminus \{0\}$, the process $$M(t) := |X(t)|^{\omega} + \sum_{0<s\le t\wedge \zeta} |\Delta X(s)|^{\omega},$$ is a martingale for the filtration $(F_t^X, t\ge 0)$ associated with $X$.* Moreover, the definition of $\omega$ and the branching structure of growth-fragmentation processes entail the existence of the following genealogical martingale, which will be crucial for the spine decomposition. Let $\mathscr{G}_n := \sigma\left(\mathcal{X}_u, |u|\le n \right), n\ge 0$. **Theorem 7**. *The process $$\mathcal{M}(n):= \sum_{|u|=n+1} |\mathcal{X}_u(0)|^{\omega}, \quad n\ge 0,$$ is a $(\mathscr{G}_n, n\ge 0)$--martingale under $\mathcal{P}_{\mathbf{x}}$ for all $\mathbf{x}\in \mathbbm{R}^d\setminus\{0\}$.* The arguments used to deduce the previous result are the same as those presented in Theorem 3.5 in [@DS]. ## A change of measures We introduce a new probability measure $\widehat{\mathcal{P}}_{\mathbf{x}}$ for $\mathbf{x}\in \mathbbm{R}^d \setminus\{ 0\}$ using the martingale $(\mathcal{M}(n))_{n\ge 0}$ in [Theorem 7](#thm:M(n)){reference-type="ref" reference="thm:M(n)"}. This is the analogue of [@BBCK Section 4.1] in the positive case or [@DS Section 3.3] in the $d=1$ case. It describes the law of a new cell system $(\mathcal{X}_u)_{u\in\mathbbm{U}}$ together with an infinite distinguished *ray*, or *leaf*, $\mathcal{L}\in \partial \mathbbm{U}= \mathbbm{N}^{\mathbbm{N}}$. 
On $\mathscr{G}_n$, for $n\ge 0$, it has Radon-Nikodym derivative with respect to $\mathcal{P}_{\mathbf{x}}$ given by $\mathcal{M}(n)$, normalised to be a probability measure, *i.e.* for all $G_n \in \mathscr{G}_n$, $$\widehat{\mathcal{P}}_{\mathbf{x}}(G_n) := |\mathbf{x}|^{-\omega} \mathcal{E}_{\mathbf{x}}\left[\mathcal{M}(n) \mathds{1}_{G_n} \right].$$ The law of the distinguished leaf $\mathcal{L}$ under $\widehat{\mathcal{P}}_{\mathbf{x}}$ is chosen so that, for all $n\ge 0$ and all $u\in \mathbbm{U}$ such that $|u|=n+1$, $$\label{eq: generation spine} \widehat{\mathcal{P}}_{\mathbf{x}} \left( \mathcal{L}(n+1)=u \,\big|\, \mathscr{G}_n\right) := \frac{|\mathcal{X}_u(0)|^{\omega}}{\mathcal{M}(n)},$$ where for any $\ell\in\partial\mathbbm{U}$, $\ell(n)$ denotes the ancestor of $\ell$ at generation $n$. In words, to define the next generation of the spine, we select one of its jumps proportionally to its size to the power $\omega$ (the spine at generation $0$ being the Eve cell). By an application of the Kolmogorov extension theorem, the martingale property and the branching structure of the system ensure that these definitions are compatible, and therefore this uniquely defines the probability measure $\widehat{\mathcal{P}}_{\mathbf{x}}$. We will be interested in the evolution of the *tagged cell*, which is the cell associated with the distinguished leaf $\mathcal{L}$. More precisely, set $b_{\ell} = \lim\uparrow b_{\ell(n)}$ for any leaf $\ell\in\partial \mathbbm{U}$. Then, define $\widehat{\mathcal{X}}$ by $\widehat{\mathcal{X}}(t):=\partial$ if $t\ge b_{\mathcal{L}}$ and $$\label{eq:tagged cell} \widehat{\mathcal{X}}(t):= \mathcal{X}_{\mathcal{L}(n_t)}(t-b_{\mathcal{L}(n_t)}), \quad t<b_{\mathcal{L}},$$ where $n_t$ is the unique integer $n$ such that $b_{\mathcal{L}(n)}\le t < b_{\mathcal{L}(n+1)}$.
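In words, the selection rule above is size-biased sampling: each child is picked with probability proportional to the $\omega$-th power of the norm of its birth position. As a purely illustrative sketch (the function name and list-based interface are ours, not from the paper), the conditional law of the tagged child can be computed as follows:

```python
def spine_probabilities(child_norms, omega):
    """Conditional law of the tagged child given the current generation:
    child u is selected with probability |X_u(0)|**omega / M(n), where
    M(n) is the sum of these weights over the children under consideration."""
    weights = [abs(x) ** omega for x in child_norms]
    m_n = sum(weights)
    return [w / m_n for w in weights]
```

For $\omega=0$ the choice is uniform, and larger fragments are increasingly favoured as $\omega$ grows.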
By construction of $\widehat{\mathcal{P}}_{\mathbf{x}}$, we have the following genealogical *many-to-one* formula: for all nonnegative measurable functions $f$ and all $\mathscr{G}_n$--measurable nonnegative random variables $B_n$, $$|\mathbf{x}|^{\omega} \widehat{\mathcal{E}}_{\mathbf{x}}\left[f(\mathcal{X}_{\mathcal{L}(n+1)}(0))B_n\right] = \mathcal{E}_{\mathbf{x}}\Bigg[ \sum_{|u|=n+1} |\mathcal{X}_u(0)|^{\omega}f(\mathcal{X}_u(0))B_n\Bigg].$$ This may be extended to a temporal many-to-one formula. The existence of $(v,\omega)$ ensures that we may rank the elements in $\mathbf{X}(t)=\left\{\left\{X_i(t), i\ge 1\right\}\right\}, \, t\ge 0$, by decreasing order of the norms. **Proposition 8**. *For every $t\ge 0$, every nonnegative measurable function $f$ vanishing at $\partial$, and every $\overline{\mathcal{F}}_t$--measurable nonnegative random variable $B_t$, we have $$|\mathbf{x}|^{\omega} \widehat{\mathcal{E}}_{\mathbf{x}}\big[f(\widehat{\mathcal{X}}(t))B_t\big] = \mathcal{E}_{\mathbf{x}}\Bigg[ \sum_{i\ge 1} |X_i(t)|^{\omega}f(X_i(t))B_t\Bigg].$$* **Proof.** See Proposition 4.1 in [@DP23] for the multitype case, which is easily extended. ◻ # The spine decomposition of spatial isotropic growth-fragmentation processes {#sec: spine isotropic} ## The spine decomposition for isotropic growth-fragmentation processes In this section, we describe the law of the growth-fragmentation process under the change of measures $\widehat{\mathcal{P}}_{\mathbf{x}}$, $\mathbf{x}\in\mathbbm{R}^d$, and in particular the law of the tagged cell $\widehat{\mathcal{X}}$ [\[eq:tagged cell\]](#eq:tagged cell){reference-type="eqref" reference="eq:tagged cell"}. In order to make sense of this, we need to rebuild the growth-fragmentation along the spine, and so we must first label the jumps of $\widehat{\mathcal{X}}$. In general, one cannot rank these jumps in lexicographical order.
Instead, they will be labelled by couples $(n,j)$, where $n\ge 0$ stands for the generation of the tagged cell immediately before the jump, and $j\ge 1$ is the rank (in the usual lexicographical sense) of the jump among those of the tagged cell at generation $n$ (including the final jump, when the generation changes to $n+1$). For each such $(n,j)$, we define the growth-fragmentation $\widehat{\mathbf{X}}_{n,j}$ induced by the corresponding jump. More precisely, if the generation stays constant during the $(n,j)$--jump, then we set $$\widehat{\mathbf{X}}_{n,j}(t) := \left\{\left\{ \mathcal{X}_{uw}(t-b_{uw}+b_u), \; w\in\mathbbm{U}\; \text{and} \; b_{uw}\le t+b_u<b_{uw}+\zeta_{uw} \right\}\right\},$$ where $u$ is the label of the cell born at the $(n,j)$--jump. Otherwise the $(n,j)$--jump corresponds to a jump for the generation of the tagged cell so that the tagged cell jumps from label $u$ to label $uj$ say. In this case, we set $$\begin{gathered} \widehat{\mathbf{X}}_{n,j}(t) := \left\{\left\{ (\mathcal{X}_{u}(t-b_{u}+b_{uj}), \mathcal{J}_{u}(t-b_{u}+b_{uj})), \; b_{u}\le t+b_{uj}<b_{u}+\zeta_{u} \right\}\right\} \\ \cup \left\{\left\{ (\mathcal{X}_{uw}(t-b_{uw}+b_{uj}), \mathcal{J}_{uw}(t-b_{uw}+b_{uj})), \; w\notin \mathbb{T}_{uj} \; \text{and} \; b_{uj}\le b_{uw}\le t+b_{uj}<b_{uw}+\zeta_{uw} \right\}\right\},\end{gathered}$$ where for $v\in\mathbbm{U}$, $\mathbb{T}_v := \{vw, \, w\in\mathbbm{U}\}$. Finally, we agree that $\widehat{\mathbf{X}}_{n,j} := \partial$ when the $(n,j)$--jump does not exist, and this sets $\widehat{\mathbf{X}}_{n,j}$ for all $n\ge 0$ and all $j\ge 1$. Recall also that $n_t$ was defined in [\[eq:tagged cell\]](#eq:tagged cell){reference-type="eqref" reference="eq:tagged cell"} and stands for the generation of the spine at time $t$. We can now state our main theorem describing the law of the growth-fragmentation under $\widehat{\mathcal{P}}_{\mathbf{x}}$. **Theorem 9**. 
*Under $\widehat{\mathcal{P}}_{\mathbf{x}}$, $\widehat{\mathcal{X}}$ is a self-similar Markov process with values in $\mathbbm{R}^d$ and index $\alpha$. The Lévy system of the underlying Markov additive process $(\widehat{\xi},\widehat{\Theta})$ is given by $(\widehat{H},\widehat{L})$ where $\widehat{H}_t=t$ and $$\label{eq: Lhat} \widehat{L}_{\theta}(\mathrm{d}y,\mathrm{d}\phi) = \mathrm{e}^{\omega y}\big(L_{\theta}(\mathrm{d}y,\mathrm{d}\phi) + \widetilde{L}_{\theta}(\mathrm{d}y,\mathrm{d}\phi) \big).$$ Besides, $\widehat{\mathcal{X}}$ is isotropic, and the ordinate $\widehat{\xi}$ is a Lévy process with Laplace exponent $\widehat{\psi}(q)=\kappa(\omega+q)$. Moreover, conditionally on $(\widehat{\mathcal{X}}(t), n_t)_{0\le t<b_{\mathcal{L}}}$, the processes $\widehat{\mathbf{X}}_{n,j}$, $n\ge 0$, $j\ge 1$, are independent and each $\widehat{\mathbf{X}}_{n,j}$ has law $\mathbf{P}_{x(n,j)}$ where $-x(n,j)$ is the size of the $(n,j)$--th jump.* **Remark 10**. 1. Observe that we have the following description of the MAP $(\widehat{\xi},\widehat{\Theta})$. Let $(\eta^0,\Phi^0)$ be a MAP with Lévy system $(H^0,L^0)$ given by $H^0_t:=t$ and $L^0_{\theta}(\mathrm{d}y,\mathrm{d}\phi):=\mathrm{e}^{\omega y} L_{\theta}(\mathrm{d}y,\mathrm{d}\phi)$. Consider an independent compound Poisson process $D=(D_1,D_2)$ on $\mathbbm{R}^*\times \mathbb{S}^{d-1}$ with intensity measure $\mathrm{e}^{\omega y} \widetilde{L}(\mathrm{d}y,\mathrm{d}\phi)$. This definition makes sense because, since $\kappa(\omega)=0$, $$\int_{\mathbbm{R}^*\times \mathbb{S}^{d-1}}\widetilde{L}(\mathrm{d}y,\mathrm{d}\varphi)\mathrm{e}^{\omega y} = - \psi(\omega) <\infty.$$ Then $(\widehat{\xi},\widehat{\Theta})$ is the *superimposition* of $(\eta^0,\Phi^0)$ and $D$, in the following sense. Let $T_1$ be the first jump time of $D$, which is exponential with parameter $-\psi(\omega)$.
Then $(\widehat{\xi}(s),\widehat{\Theta}(s), s<b_{\mathcal{L}(1)})$ evolves as $(\eta^0(s),\Phi^0(s), s<T_1)$, and $(\widehat{\xi}(b_{\mathcal{L}(1)}),\widehat{\Theta}(b_{\mathcal{L}(1)}))$ is distributed as $$(\eta^0(T_1)+D_1(T_1), U_{\Phi^0(T_1)}\cdot D_2(T_1)),$$ where $U_{\theta}$ is an isometry mapping $(1,0,\ldots,0)$ to $\theta$. 2. The proof actually provides a more precise statement describing the law of $(\widehat{\mathcal{X}}(t),n_t, t\ge 0)$. The process $n_t$ is then the Poisson process counting the jumps up to the usual Lamperti time change. 3. The MAP $(\eta^0,\Phi^0)$ may also be obtained from $(\xi,\Theta)$ through an Esscher transform. More precisely, recall that in the isotropic setting, $\xi$ is itself a Lévy process, so that we can consider the usual exponential martingale $(\mathrm{e}^{\omega\xi(t)-t\psi(\omega)},t\ge 0)$. Then the law of $(\xi,\Theta)$ under the exponential change of measures is $(\xi^{[\omega]},\Theta^{[\omega]})$. This will appear in the proof. 4. Combining these two remarks casts light on equation [\[eq: Lhat\]](#eq: Lhat){reference-type="eqref" reference="eq: Lhat"}. Loosely speaking, it is a decomposition of $\widehat{L}$ in terms of the jumps of the Esscher transform of $(\xi,\Theta)$ and the special jumps when the spine picks one of the jumps according to [\[eq: generation spine\]](#eq: generation spine){reference-type="eqref" reference="eq: generation spine"}. 5. We deduce from [Theorem 9](#thm:spine){reference-type="ref" reference="thm:spine"} that the temporal version of $(\mathcal{M}(n), n\ge 0)$, namely $$\mathcal{M}_t := \sum_{i=1}^{\infty} |X_i(t)|^{\omega}, \quad t\ge 0,$$ is a $(\mathcal{F}_t)$--martingale if, and only if, $\alpha\kappa'(\omega)<0$. Indeed, by taking $f=\mathds{1}_{\{\cdot\ne\partial\}}$, the many-to-one formula ([Proposition 8](#prop:spine temporal){reference-type="ref" reference="prop:spine temporal"}) yields that $(\mathcal{M}_t,t\ge 0)$ is a supermartingale, and that it is a martingale if, and only if, $X$ has infinite lifetime.
From the Lamperti representation of $|X|$, and the expression $\widehat{\psi}(q)=\kappa(\omega+q)$ of the Laplace exponent of $\widehat{\xi}$, this happens exactly when $\alpha\kappa'(\omega)<0$. ## Proof of [Theorem 9](#thm:spine){reference-type="ref" reference="thm:spine"} {#proof-of-thmspine} The proof roughly follows the same lines as that of Theorem 4.3 in [@DP23], although the structure of the modulator is more involved. **The law of the spine $\widehat{\mathcal{X}}$.** The definition of $\widehat{\mathcal{X}}$ readily shows that $\widehat{\mathcal{X}}$ is an $\alpha$--self-similar Markov process. By Lamperti's time change, we may place ourselves in the homogeneous case $\alpha=0$. In this case, note that there is no time change between $\widehat{\mathcal{X}}$ and $(\widehat{\xi},\widehat{\Theta})$. For this reason, and to avoid notational clutter, we will sometimes make an abuse of notation by considering them on the same probability space. Likewise, we will use expressions involving both $X$ and its MAP $(\xi,\Theta)$ as a shorthand. Moreover, the Markov property implies that we only need to check the compensation formula up to the first time $b_{\mathcal{L}(1)}$ when the spine selects another generation. More precisely, we want to show that $$\begin{gathered} \widehat{\mathcal{E}}_{\theta}\left[\sum_{s>0} F(s,\widehat{\xi}(s^-),\Delta\widehat{\xi}(s),\widehat{\Theta}(s^-),\widehat{\Theta}(s)) \mathds{1}_{\{s\le b_{\mathcal{L}(1)}\}} \right] \\ = \widehat{\mathcal{E}}_{\theta}\left[ \int_{0}^{\infty} \mathrm{d}s \mathrm{e}^{\psi(\omega)s} \int_{\mathbbm{R}^*\times \mathbb{S}^{d-1}} \widehat{L}_{\widehat{\Theta}(s)}(\mathrm{d}x, \mathrm{d}\varphi) F(s,\widehat{\xi}(s),x, \widehat{\Theta}(s), \varphi)\right].
\end{gathered}$$ We may split the sum into two parts: $$\begin{gathered} \widehat{\mathcal{E}}_{\theta}\left[\sum_{s>0} F(s,\widehat{\xi}(s^-),\Delta\widehat{\xi}(s),\widehat{\Theta}(s^-),\widehat{\Theta}(s)) \mathds{1}_{\{s\le b_{\mathcal{L}(1)}\}} \right] = \\ \widehat{\mathcal{E}}_{\theta}\left[\sum_{s<b_{\mathcal{L}(1)}} F(s,\widehat{\xi}(s^-),\Delta\widehat{\xi}(s),\widehat{\Theta}(s^-),\widehat{\Theta}(s)) \right] + \widehat{\mathcal{E}}_{\theta}\Big[ F(b_{\mathcal{L}(1)},\widehat{\xi}(b_{\mathcal{L}(1)}^-),\Delta\widehat{\xi}(b_{\mathcal{L}(1)}),\widehat{\Theta}(b_{\mathcal{L}(1)}^-),\widehat{\Theta}(b_{\mathcal{L}(1)})) \Big]. \label{eq: sum decomposed b(1)} \end{gathered}$$ We compute the first term of [\[eq: sum decomposed b(1)\]](#eq: sum decomposed b(1)){reference-type="eqref" reference="eq: sum decomposed b(1)"}. By definition of $b_{\mathcal{L}(1)}$, $$(\widehat{\xi}(s),\widehat{\Theta}(s), s<b_{\mathcal{L}(1)}) = (\xi(s),\Theta(s), s<b_{\mathcal{L}(1)}).$$ Applying the change of measure [\[eq: generation spine\]](#eq: generation spine){reference-type="eqref" reference="eq: generation spine"}, and recalling that we are in the homogeneous case, we get $$\begin{gathered} \widehat{\mathcal{E}}_{\theta}\left[\sum_{s<b_{\mathcal{L}(1)}} F(s,\widehat{\xi}(s^-),\Delta\widehat{\xi}(s),\widehat{\Theta}(s^-),\widehat{\Theta}(s)) \right] \\ = \mathbbm{E}_{\theta}\left[\sum_{s>0}\sum_{t>s} F(s,\xi(s^-),\Delta\xi(s),\Theta(s^-),\Theta(s)) |\Delta X(t)|^{\omega}\right]. 
\label{eq: split b1}\end{gathered}$$ Now, the Markov property of $X$ at fixed time $s>0$ yields that $$\begin{gathered} \mathbbm{E}_{\theta}\left[\sum_{t>s} F(s,\xi(s^-),\Delta\xi(s),\Theta(s^-),\Theta(s)) |\Delta X(t)|^{\omega}\right] \\ = \mathbbm{E}_{\theta}\left[ F(s,\xi(s^-),\Delta\xi(s),\Theta(s^-),\Theta(s)) \mathbbm{E}_{X(s)} \left[\sum_{t>0} |\Delta X(t)|^{\omega}\right]\right], \end{gathered}$$ and using the definition of $\omega$ in identity [\[eq: omega generalised.0\]](#eq: omega generalised.0){reference-type="eqref" reference="eq: omega generalised.0"}, $$\mathbbm{E}_{\theta}\left[\sum_{t>s} F(s,\xi(s^-),\Delta\xi(s),\Theta(s^-),\Theta(s)) |\Delta X(t)|^{\omega}\right] = \mathtt{E}_{0,\theta}\Big[ F(s,\xi(s^-),\Delta\xi(s),\Theta(s^-),\Theta(s)) \mathrm{e}^{\omega \xi(s)}\Big].$$ Coming back to [\[eq: split b1\]](#eq: split b1){reference-type="eqref" reference="eq: split b1"}, this means $$\widehat{\mathcal{E}}_{\theta}\left[\sum_{s<b_{\mathcal{L}(1)}} F(s,\widehat{\xi}(s^-),\Delta\widehat{\xi}(s),\widehat{\Theta}(s^-),\widehat{\Theta}(s)) \right] = \mathtt{E}_{0,\theta}\left[\sum_{s>0} F(s,\xi(s^-),\Delta\xi(s),\Theta(s^-),\Theta(s)) \mathrm{e}^{\omega \xi(s)}\right].$$ Using the compensation formula entails $$\begin{gathered} \widehat{\mathcal{E}}_{\theta}\left[\sum_{s<b_{\mathcal{L}(1)}} F(s,\widehat{\xi}(s^-),\Delta\widehat{\xi}(s),\widehat{\Theta}(s^-),\widehat{\Theta}(s)) \right] \\ = \mathtt{E}_{0,\theta}\left[\int_{0}^{\infty} \mathrm{d}s \mathrm{e}^{\omega \xi(s)} \int_{\mathbbm{R}^*\times \mathbb{S}^{d-1}} L_{\Theta(s)}(\mathrm{d}x,\mathrm{d}\varphi) \mathrm{e}^{\omega x} F(s,\xi(s),x,\Theta(s),\varphi) \right]. \label{eq: compensation < b1}\end{gathered}$$ We now tilt the measure using the classical Esscher transform (see for example [@KP]). Recall from [Remark 10](#rk: spine){reference-type="ref" reference="rk: spine"} that the process obtained has the law $\mathtt{P}^0_{0,\theta}$ of $(\eta^0,\Phi^0)$. 
Thus equation [\[eq: compensation \< b1\]](#eq: compensation < b1){reference-type="eqref" reference="eq: compensation < b1"} rewrites $$\begin{gathered} \widehat{\mathcal{E}}_{\theta}\left[\sum_{s<b_{\mathcal{L}(1)}} F(s,\widehat{\xi}(s^-),\Delta\widehat{\xi}(s),\widehat{\Theta}(s^-),\widehat{\Theta}(s)) \right] \\ = \mathtt{E}^0_{0,\theta}\left[\int_{0}^{\infty} \mathrm{d}s \mathrm{e}^{\psi(\omega)s}\int_{\mathbbm{R}^*\times \mathbb{S}^{d-1}} L_{\Phi^0(s)}(\mathrm{d}x,\mathrm{d}\varphi) \mathrm{e}^{\omega x} F(s,\eta^0(s),x,\Phi^0(s),\varphi) \right]. \label{eq: first term b1}\end{gathered}$$ Note that, since $L_{\theta}(\mathrm{d}x,\mathrm{d}\varphi) \mathrm{e}^{\omega x}$ is the jump measure of the Lévy system associated with $(\eta^0,\Phi^0)$, this shows that $(\widehat{\xi}(s),\widehat{\Theta}(s), s<b_{\mathcal{L}(1)})$ behaves as $(\eta^0(s),\Phi^0(s), s<T_1)$, where $T_1$ is an independent exponential time with parameter $-\psi(\omega)$, a fact that could have been derived directly. Let us now compute the second term of [\[eq: sum decomposed b(1)\]](#eq: sum decomposed b(1)){reference-type="eqref" reference="eq: sum decomposed b(1)"}. 
Changing the measure according to [\[eq: generation spine\]](#eq: generation spine){reference-type="eqref" reference="eq: generation spine"} again, one obtains $$\begin{aligned} \widehat{\mathcal{E}}_{\theta}&\Big[ F(b_{\mathcal{L}(1)},\widehat{\xi}(b_{\mathcal{L}(1)}^-),\Delta\widehat{\xi}(b_{\mathcal{L}(1)}),\widehat{\Theta}(b_{\mathcal{L}(1)}^-),\widehat{\Theta}(b_{\mathcal{L}(1)})) \Big] \\ &= \mathbbm{E}_{\theta}\left[ \sum_{s>0} |\Delta X(s)|^{\omega} F\left(s,\xi(s^-),\log|\Delta X(s)|-\xi(s^-), \Theta(s^-), \frac{\Delta X(s)}{|\Delta X(s)|}\right)\right] \\ &= \mathtt{E}_{0,\theta}\left[ \sum_{s>0} \mathrm{e}^{\omega\xi(s^-)}|\Theta(s^-)-\mathrm{e}^{\Delta \xi(s)}\Theta(s)|^{\omega} F(s,\xi(s^-),\log|\Theta(s^-)-\mathrm{e}^{\Delta \xi(s)}\Theta(s)|, \Theta(s^-), \Theta_{\Delta}(s))\right],\end{aligned}$$ where as usual $$\Theta_{\Delta}(s) = \frac{\Theta(s^-)-\mathrm{e}^{\Delta \xi(s)}\Theta(s)}{|\Theta(s^-)-\mathrm{e}^{\Delta \xi(s)}\Theta(s)|}.$$ Using the compensation formula for $(\xi,\Theta)$, this is $$\begin{gathered} \widehat{\mathcal{E}}_{\theta}\Big[ F(b_{\mathcal{L}(1)},\widehat{\xi}(b_{\mathcal{L}(1)}^-),\Delta\widehat{\xi}(b_{\mathcal{L}(1)}),\widehat{\Theta}(b_{\mathcal{L}(1)}^-),\widehat{\Theta}(b_{\mathcal{L}(1)})) \Big] \\ = \mathtt{E}_{0,\theta}\left[ \int_{0}^{\infty} \mathrm{d}s \mathrm{e}^{\omega\xi(s)} \int L_{\Theta(s)}(\mathrm{d}x, \mathrm{d}\varphi) |\Theta(s)-\mathrm{e}^x\varphi|^{\omega}\right . \\ \left. \times F\left(s,\xi(s),\log|\Theta(s)-\mathrm{e}^{x}\varphi|, \Theta(s), \frac{\Theta(s)-\mathrm{e}^x\varphi}{|\Theta(s)-\mathrm{e}^x\varphi|}\right)\right].\end{gathered}$$ We want to perform the change of variables $(y,\phi)=(\log |\theta-\mathrm{e}^x\varphi|,\frac{\theta-\mathrm{e}^x\varphi}{|\theta-\mathrm{e}^x\varphi|})$ for fixed $\theta$ in the second integral. Recall that we have defined $\widetilde{L}_{\theta}$ as the image measure of $L_{\theta}$ through this mapping, and that these image measures satisfy the isotropy relationship [\[eq: Ltilde isotropy\]](#eq: Ltilde isotropy){reference-type="eqref" reference="eq: Ltilde isotropy"}.
Therefore, $$\begin{gathered} \widehat{\mathcal{E}}_{\theta}\Big[ F(b_{\mathcal{L}(1)},\widehat{\xi}(b_{\mathcal{L}(1)}^-),\Delta\widehat{\xi}(b_{\mathcal{L}(1)}),\widehat{\Theta}(b_{\mathcal{L}(1)}^-),\widehat{\Theta}(b_{\mathcal{L}(1)})) \Big] \\ = \mathtt{E}_{0,\theta}\left[ \int_{0}^{\infty} \mathrm{d}s \mathrm{e}^{\omega\xi(s)} \int_{\mathbbm{R}^*\times \mathbb{S}^{d-1}} \widetilde{L}_{\Theta(s)}(\mathrm{d}y, \mathrm{d}\phi) \mathrm{e}^{\omega y} F(s,\xi(s),y, \Theta(s), \phi)\right].\end{gathered}$$ Tilting with the exponential martingale of $\xi$ finally provides $$\begin{gathered} \widehat{\mathcal{E}}_{\theta}\Big[ F(b_{\mathcal{L}(1)},\widehat{\xi}(b_{\mathcal{L}(1)}^-),\Delta\widehat{\xi}(b_{\mathcal{L}(1)}),\widehat{\Theta}(b_{\mathcal{L}(1)}^-),\widehat{\Theta}(b_{\mathcal{L}(1)})) \Big] \\ = {\tt E}^0_{0,\theta}\left[ \int_{0}^{\infty} \mathrm{d}s \mathrm{e}^{\psi(\omega)s} \int_{\mathbbm{R}^*\times \mathbb{S}^{d-1}} \widetilde{L}_{\Phi^0(s)}(\mathrm{d}y, \mathrm{d}\phi) \mathrm{e}^{\omega y} F(s,\eta^0(s),y, \Phi^0(s), \phi)\right].
\label{eq: second term b1}\end{gathered}$$ Putting together [\[eq: sum decomposed b(1)\]](#eq: sum decomposed b(1)){reference-type="eqref" reference="eq: sum decomposed b(1)"}, [\[eq: first term b1\]](#eq: first term b1){reference-type="eqref" reference="eq: first term b1"} and [\[eq: second term b1\]](#eq: second term b1){reference-type="eqref" reference="eq: second term b1"}, we end up with $$\begin{gathered} \widehat{\mathcal{E}}_{\theta}\left[\sum_{s>0} F(s,\widehat{\xi}(s^-),\Delta\widehat{\xi}(s),\widehat{\Theta}(s^-),\widehat{\Theta}(s)) \mathds{1}_{\{s\le b_{\mathcal{L}(1)}\}} \right] \\ = {\tt E}^0_{0,\theta}\left[ \int_{0}^{\infty} \mathrm{d}s \mathrm{e}^{\psi(\omega)s} \int_{\mathbbm{R}^*\times \mathbb{S}^{d-1}} \widehat{L}_{\Phi^0(s)}(\mathrm{d}x, \mathrm{d}\varphi) F(s,\eta^0(s),x, \Phi^0(s), \varphi)\right],\end{gathered}$$ and, recalling that $(\widehat{\xi}(s),\widehat{\Theta}(s),\, s<b_{\mathcal{L}(1)})$ is distributed as $(\eta^0(s),\Phi^0(s),\, s<T_1)$, we can rewrite this as $$\begin{gathered} \widehat{\mathcal{E}}_{\theta}\left[\sum_{s>0} F(s,\widehat{\xi}(s^-),\Delta\widehat{\xi}(s),\widehat{\Theta}(s^-),\widehat{\Theta}(s)) \mathds{1}_{\{s\le b_{\mathcal{L}(1)}\}} \right] \\ = \widehat{\mathcal{E}}_{\theta}\left[ \int_{0}^{\infty} \mathrm{d}s \mathrm{e}^{\psi(\omega)s} \int_{\mathbbm{R}^*\times \mathbb{S}^{d-1}} \widehat{L}_{\widehat{\Theta}(s)}(\mathrm{d}x, \mathrm{d}\varphi) F(s,\widehat{\xi}(s),x, \widehat{\Theta}(s), \varphi)\right]. \label{eq: final (xihat,Thetahat)}\end{gathered}$$ This completes the proof of [\[eq: Lhat\]](#eq: Lhat){reference-type="eqref" reference="eq: Lhat"}. The second assertion of the theorem is then a straightforward consequence. First, since $X$ is isotropic, so is $\widehat{\mathcal{X}}$ by construction. Hence, by [Proposition 4](#prop:isotropy Levy){reference-type="ref" reference="prop:isotropy Levy"}, $\widehat{\xi}$ must be a Lévy process.
The expression for $\widehat{\psi}$ can be found using a particular case of the compensation formula [\[eq: final (xihat,Thetahat)\]](#eq: final (xihat,Thetahat)){reference-type="eqref" reference="eq: final (xihat,Thetahat)"}. Alternatively, for any nonnegative measurable functionals $F$ and $G$ defined respectively on the space of finite càdlàg paths and on $\mathbbm{R}$, we may compute $$\begin{aligned} &\widehat{\mathcal{E}}_{\theta}\left[F(\widehat{\xi}(s),s<b_{\mathcal{L}(1)})G(\Delta \widehat{\xi}(b_{\mathcal{L}(1)})) \right] \\ &= \mathbbm{E}_{\theta} \left[ \sum_{t>0} |\Delta X(t)|^{\omega} F(\log|X(s)|,s<t)G\left(\log\frac{|\Delta X(t)|}{|X(t^-)|}\right)\right] \\ &= \mathtt{E}_{0,\theta} \left[ \sum_{t>0} \mathrm{e}^{\omega \xi(t^-)}|\Theta(t^-)-\mathrm{e}^{\Delta \xi(t)} \Theta(t)|^{\omega} F(\xi(s),s<t)G\Big(\log |\Theta(t^-)-\mathrm{e}^{\Delta\xi(t)}\Theta(t)|\Big)\right] \\ &= \mathtt{E}_{0,\theta}\left[\int_0^{\infty} \mathrm{d}t \mathrm{e}^{\omega \xi(t)} F(\xi(s),s<t)\int_{\mathbbm{R}^*\times \mathbb{S}^{d-1}} L_{\Theta(t)}(\mathrm{d}x,\mathrm{d}\varphi) |\Theta(t)-\mathrm{e}^{x}\varphi|^{\omega} G(\log |\Theta(t)-\mathrm{e}^x\varphi|)\right].\end{aligned}$$ By isotropy of $X$, the second integral does not depend on the angle $\Theta(t)$ (see [\[eq: Ltilde isotropy\]](#eq: Ltilde isotropy){reference-type="eqref" reference="eq: Ltilde isotropy"}). Hence by applying the change of variables $(y,\phi)=\Big(\log|\Theta(t)-\mathrm{e}^{x}\varphi|, \frac{\Theta(t)-\mathrm{e}^{x}\varphi}{|\Theta(t)-\mathrm{e}^{x}\varphi|}\Big)$, we end up with $$\begin{split} \widehat{\mathcal{E}}_{\theta}&\left[F(\widehat{\xi}(s),s<b_{\mathcal{L}(1)})G(\Delta \widehat{\xi}(b_{\mathcal{L}(1)})) \right]\\ &\hspace{2cm}= \mathtt{E}_{0,\theta}\left[\int_0^{\infty} \mathrm{d}t \mathrm{e}^{\omega \xi(t)} F(\xi(s),s<t)\right]\int_{\mathbbm{R}^*\times \mathbb{S}^{d-1}} \widetilde{L}(\mathrm{d}y,\mathrm{d}\phi) \mathrm{e}^{\omega y} G(y). 
\end{split}$$ In other words, this proves that $(\widehat{\xi}(s),s<b_{\mathcal{L}(1)})$ and $\Delta \widehat{\xi}(b_{\mathcal{L}(1)})$ are independent. The former has the law of $\xi$ killed according to its exponential martingale, leading to a Lévy process with Laplace exponent $q\mapsto \psi(\omega+q)$. On the other hand, the latter is distributed as $(-\psi(\omega))^{-1} \int_{\phi\in \mathbb{S}^{d-1}} \widetilde{L}(\mathrm{d}y,\mathrm{d}\phi) \mathrm{e}^{\omega y}$, which is the law of the first jump of a compound Poisson process with intensity measure $\int_{\phi\in \mathbb{S}^{d-1}} \widetilde{L}(\mathrm{d}y,\mathrm{d}\phi) \mathrm{e}^{\omega y}$. By removing the killing, this entails that $\widehat{\xi}$ has Laplace exponent $$\widehat{\psi}(q) = \psi(\omega+q) - \psi(\omega) + \int_{\mathbbm{R}^* \times \mathbb{S}^{d-1}} \widetilde{L}(\mathrm{d}y,\mathrm{d}\phi) \mathrm{e}^{\omega y} (\mathrm{e}^{qy}-1), \quad q\ge 0.$$ Using that $\kappa(\omega)=0$, this is $$\widehat{\psi}(q) = \psi(\omega+q) + \int_{\mathbbm{R}^* \times \mathbb{S}^{d-1}} \widetilde{L}(\mathrm{d}y,\mathrm{d}\phi) \mathrm{e}^{(\omega+q) y}, \quad q\ge 0,$$ whence $\widehat{\psi}(q) = \kappa(\omega+q)$. **The law of the growth-fragmentations $\widehat{\mathbf{X}}_{n,j}$.** We prove the last assertion of [Theorem 9](#thm:spine){reference-type="ref" reference="thm:spine"}. It actually follows from the same arguments as in [@BBCK], but we provide the proof for the sake of completeness. To avoid cumbersome notation, we will restrict to proving the statement for the first generation. This is then easily extended thanks to the branching property. Let $F$ be a nonnegative measurable functional on the space of càdlàg trajectories, and $G_j$, $j\ge 1$, be nonnegative measurable functionals on the space of multiset--valued paths.
For $t>0$, denote by $(\Delta_j(t), j\ge 1)$ the sequence consisting of all the jumps of $\mathcal{X}_{\varnothing}$ that happened strictly before time $t$, and the extra value of $\mathcal{X}_{\varnothing}(t)$, all ranked in descending order of their absolute value. We are after the identity $$\widehat{\mathcal{E}}_1 \left[F(\mathcal{X}_{\varnothing}(s), 0\le s\le b_{\mathcal{L}(1)}) \prod_{j\ge 1} G_j(\widehat{\mathbf{X}}_{0,j})\right] = \widehat{\mathcal{E}}_1 \left[F(\mathcal{X}_{\varnothing}(s), 0\le s\le b_{\mathcal{L}(1)}) \prod_{j\ge 1} \mathbf{E}_{\Delta_j(b_{\mathcal{L}(1)})} \left[G_j(\mathbf{X})\right]\right].$$ We start from the left-hand side, and apply the change of measure [\[eq: generation spine\]](#eq: generation spine){reference-type="eqref" reference="eq: generation spine"}: $$\begin{gathered} \widehat{\mathcal{E}}_1 \left[F(\mathcal{X}_{\varnothing}(s), 0\le s\le b_{\mathcal{L}(1)}) \prod_{j\ge 1} G_j(\widehat{\mathbf{X}}_{0,j})\right] \\ = \mathcal{E}_1 \left[\sum_{t>0} |\Delta \mathcal{X}_{\varnothing}(t)|^{\omega} F(\mathcal{X}_{\varnothing}(s), 0\le s\le t) \prod_{j\ge 1} G_j(\widehat{\mathbf{X}}_{0,j})\right].\end{gathered}$$ Using the definition of the $\widehat{\mathbf{X}}_{0,j}$ together with the branching property under $\mathcal{P}_1$ gives $$\begin{gathered} \widehat{\mathcal{E}}_1 \left[F(\mathcal{X}_{\varnothing}(s), 0\le s\le b_{\mathcal{L}(1)}) \prod_{j\ge 1} G_j(\widehat{\mathbf{X}}_{0,j})\right] \\ = \mathcal{E}_1 \left[\sum_{t>0} |\Delta \mathcal{X}_{\varnothing}(t)|^{\omega} F(\mathcal{X}_{\varnothing}(s), 0\le s\le t) \prod_{j\ge 1} \mathbf{E}_{\Delta_j(t)} \left[G_j(\mathbf{X})\right]\right].\end{gathered}$$ Applying the change of measure backwards, we get the desired identity. This concludes the proof of [Theorem 9](#thm:spine){reference-type="ref" reference="thm:spine"}.
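The identity $\widehat{\psi}(q)=\kappa(\omega+q)$ can be checked numerically on a toy model of ours (purely illustrative, not taken from the paper): take for $\psi$ the Laplace exponent of a killed Brownian motion with drift, and let $\widetilde{L}$ charge a single atom $y_0<0$ with mass $r$, so that $\kappa(q)=\psi(q)+r\mathrm{e}^{qy_0}$. A root $\omega$ of $\kappa$, as in Assumption **(H)**, is located by bisection, and $\widehat{\psi}$ is computed from the "removing the killing" expression above:

```python
import math

A, K, R, Y0 = -1.0, 0.5, 1.0, -math.log(2.0)  # toy parameters, illustrative only

def psi(q):
    # Laplace exponent of the ordinate xi: Brownian motion with drift A, killed at rate K
    return 0.5 * q * q + A * q - K

def kappa(q):
    # isotropic cumulant: psi(q) plus the integral of e^{qy} against Ltilde (one atom here)
    return psi(q) + R * math.exp(q * Y0)

def bisect(f, lo, hi, tol=1e-13):
    """Plain bisection; assumes f(lo) and f(hi) have opposite signs."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (f(mid) > 0) == (flo > 0):
            lo, flo = mid, f(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

omega = bisect(kappa, 0.0, 1.0)  # Assumption (H): kappa(omega) = 0, with psi(omega) < 0 here

def psi_hat(q):
    # "removing the killing": psi(omega+q) - psi(omega) + int (e^{qy}-1) e^{omega y} Ltilde(dy)
    return psi(omega + q) - psi(omega) + R * math.exp(omega * Y0) * (math.exp(q * Y0) - 1.0)
```

With these parameters $\kappa(0)>0>\kappa(1)$, so the bisection is well-posed, and $\widehat{\psi}(q)$ agrees with $\kappa(\omega+q)$ up to the bisection tolerance.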
## Comments on the isotropy assumption {#sec:comments isotropy} The previous analysis of $\mathbbm{R}^d$--valued growth-fragmentations relies heavily on the isotropy assumption. Because of the complications caused by the underlying MAP structure, describing growth-fragmentations driven by anisotropic processes is a much more challenging task. We stress the importance of the isotropy assumption and comment on possible extensions to anisotropic growth-fragmentation processes. First, we expect that in the anisotropic case, there should be an angular component in all the (super-)martingales, appearing in particular in [Theorem 7](#thm:M(n)){reference-type="ref" reference="thm:M(n)"}. This already takes place in the $d=1$ case [@DS], for asymmetric signed growth-fragmentation processes, where the angular component is nothing but the sign. Recall in addition that, in analogy with the discrete multitype case [@DP23], the *types* in the spatial framework are the angles, and that the martingales in the multitype setting also involve the types, see Section 3.2 in [@DP23]. If $X$ is an $\mathbbm{R}^d\setminus\{0\}$--valued self-similar Markov process, this actually prompts us to define, for $q\ge 0$, the linear operator $$T_q: f\in \mathcal{C}\mapsto \left( \theta\in\mathbb{S}^{d-1} \mapsto \mathbbm{E}_{\theta}\left[\sum_{t>0} f(\Theta_{\Delta}(\varphi(t)))|\Delta X(t)|^q\right] \right),$$ where $\varphi$ is the Lamperti-Kiu time-change. This is the analogue of the matrix $m$ appearing in the multitype case. Assume that $X$ has jumps (otherwise the construction is irrelevant), and that $M_q:=\underset{\theta\in\mathbb{S}^{d-1}}{\sup}\mathbbm{E}_{\theta} \left[ \sum_{t>0} |\Delta X(t)|^q\right] <\infty$. Then $T_q(f)$ is well-defined for all $f\in \mathcal{C}$, and for $f\in\mathcal{C}$, $$|| T_q(f)||_{\infty} \le M_q || f ||_{\infty},$$ whence $T_q$ is a continuous operator.
Note also that, at least under the assumption that $X$ jumps with positive probability to any open set $D\subset\mathbb{S}^{d-1}$ of directions, $T_q$ is strongly positive, in the sense that for all nonnegative $f\ne 0$, $T_q(f)>0$. Assume moreover that $T_q$ takes values in $\mathcal{C}$, and that it is a compact operator. Then, by the Krein-Rutman theorem [@Dei], it must have positive spectral radius $r(q)$, which is moreover a simple eigenvalue associated with a positive eigenfunction $v$. In the spirit of Assumption **(H)** in Section [3.2](#sec: cumulant){reference-type="ref" reference="sec: cumulant"}, one could impose the additional assumption *(H')* *There exists $\omega\ge 0$ such that $r(\omega)=1$.* Then by definition, we have $$\forall \theta\in\mathbb{S}^{d-1}, \quad \mathbbm{E}_{\theta}\left[ \sum_{t>0} v(\Theta_{\Delta}(\varphi(t)))|\Delta X(t)|^{\omega}\right] = v(\theta).$$ This generalises to vectors in $\mathbbm{R}^d$ by self-similarity of $X$: $$\label{eq: omega generalised} \forall (r,\theta)\in \mathbbm{R}_+ \times \mathbb{S}^{d-1}, \quad \mathbbm{E}_{r\theta}\left[ \sum_{t>0} v(\Theta_{\Delta}(\varphi(t)))|\Delta X(t)|^{\omega}\right] = v(\theta) r^{\omega}.$$ **Remark 11**. When $X$ is *isotropic* in the sense of Section [2](#sec: ssmp){reference-type="ref" reference="sec: ssmp"}, one can show that $v(\theta)=1$ for all $\theta\in\mathbb{S}^{d-1}$ up to normalisation, and one therefore retrieves the cumulant approach presented in Section [3.2](#sec: cumulant){reference-type="ref" reference="sec: cumulant"}. Indeed, isotropy entails that if $v$ is an eigenfunction associated with $r(q)$, then for all isometries $U$, $v(U\cdot)$ is also an eigenfunction associated with $r(q)$, and we conclude by simplicity of the eigenvalue that $v(U\cdot) = v$, so that $v$ is constant. 
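To fix ideas, in the discrete multitype analogue [@DP23] the operator $T_q$ is simply a nonnegative matrix and the Krein-Rutman theorem reduces to Perron-Frobenius. A minimal Python sketch (the $3\times 3$ matrix below is purely illustrative, not taken from any of the processes above) recovering the spectral radius, the analogue of $r(q)$, together with a positive eigenvector, the analogue of $v$, by power iteration:

```python
# Toy multitype analogue of the operator T_q: finitely many types, so
# T_q becomes a strongly positive matrix m (entries purely
# illustrative). Power iteration recovers the Perron eigenvalue -- the
# spectral radius provided by the Krein-Rutman theorem -- together
# with a positive eigenvector, the analogue of the eigenfunction v.
def perron(m, iters=500):
    n = len(m)
    v = [1.0] * n
    r = 0.0
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(n)) for i in range(n)]
        r = max(w)                  # sup-norm normalisation
        v = [wi / r for wi in w]
    return r, v

m = [[0.2, 0.5, 0.1],
     [0.3, 0.1, 0.4],
     [0.1, 0.6, 0.2]]
r, v = perron(m)
residual = max(abs(sum(m[i][j] * v[j] for j in range(3)) - r * v[i])
               for i in range(3))
```

By strong positivity the eigenvector is strictly positive and the residual of $m v = r v$ vanishes up to floating-point error.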
Once [\[eq: omega generalised\]](#eq: omega generalised){reference-type="eqref" reference="eq: omega generalised"} holds for some positive function $v$, one can, modulo these adjustments, carry through the arguments for the genealogical martingale ([Theorem 7](#thm:M(n)){reference-type="ref" reference="thm:M(n)"}) and the many-to-one formula ([Proposition 8](#prop:spine temporal){reference-type="ref" reference="prop:spine temporal"}). However, the description of the spine in [Theorem 9](#thm:spine){reference-type="ref" reference="thm:spine"} is more involved. This is mainly due to the fact that the jump intensity at time $b_{\mathcal{L}(1)}$ depends on the current angle of the spine. In the isotropic case, one can more or less get rid of this dependency. The proof of [Theorem 9](#thm:spine){reference-type="ref" reference="thm:spine"} hinges upon the existence of an *Esscher transform*. In the isotropic case, this readily comes from the fact that the ordinate $\xi$ of $X$ is a Lévy process, which no longer holds for anisotropic processes. In particular, this yielded that $b_{\mathcal{L}(1)}$ (up to Lamperti time change) is an exponential random variable. This last feature should not hold in general, as already indicated by the discrete multitype case. # The growth-fragmentation embedded in Brownian excursions from hyperplanes {#sec: excursion} ## The excursion measure {#sec:excursion measure} **Construction of the excursion measure $\mathfrak{n}_+$.** We fix $d\ge 3$ and recall from [@Bur] how one may define the Brownian excursion measure from hyperplanes in $\mathbbm{R}^d$. Let $(\Omega, \mathscr{F}, (\mathscr{F}_t)_{t\ge 0}, \mathbbm{P})$ be a complete filtered probability space, on which is defined a $d$--dimensional Brownian motion $B^d$. We single out the last coordinate and write $B^d=(B^{d-1}, Z)$. 
Introduce the set $\mathscr{X}$ of càdlàg functions $x$ defined on some finite time interval $[0,R(x)]$, and the set $\mathscr{X}_0$ of such functions $x$ in $\mathscr{X}$ that are continuous and vanish at $R(x)$. Moreover, we define $$U := \left\{u:=(x_1,\ldots, x_{d-1},z)\in \mathscr{X}^{d-1}\times \mathscr{X}_0, \; R(x_1)=\ldots=R(x_{d-1})=R(z) \; \text{and} \; u(0)=0 \right\}.$$ For $u\in U$, we shall write $R(u)$ for the common value of the lifetimes. All these sets are equipped with their usual $\sigma$--fields. Finally, in order to study the excursions of $B^d$ from the hyperplane $\mathcal{H}=\{x_d=0\}$, we introduce the local time $(\ell_s, s\ge 0)$ at $0$ of the Brownian motion $Z$, as well as its inverse $(\uptau_s, s\ge 0)$. More precisely, $\ell$ is normalised as $$\ell_s := \lim_{\varepsilon\to 0} \frac{1}{2\varepsilon} \int_0^s \mathds{1}_{\{|Z_r|\le \varepsilon\}} \mathrm{d}r, \quad s\ge 0.$$ The *excursion process* $(\mathfrak{e}_s, s>0)$ of our interest is easily defined following the one-dimensional case (see [@RY], Chapter XII), by - if $\uptau_s-\uptau_{s^-}>0$, then $$\mathfrak{e}_s : r\mapsto \left(B^{d-1}_{r+\uptau_{s^-}}-B^{d-1}_{\uptau_{s^-}}, Z_{r+\uptau_{s^-}}\right), \quad r\leq \uptau_s-\uptau_{s^-},$$ - if $\uptau_s-\uptau_{s^-}=0$, then $\mathfrak{e}_s = \partial$, where $\partial$ is some cemetery state. The following proposition directly stems from the one-dimensional case. **Proposition 12**. *The excursion process $(\mathfrak{e}_s, s>0)$ is a $(\mathscr{F}_{\uptau_s})_{s>0}$--Poisson point process of excursions in $U$. 
Its intensity measure is $$\mathfrak{n}(\mathrm{d}u',\mathrm{d}z) := n(\mathrm{d}z) \mathbbm{P}\Big((B^{d-1})^{R(z)}\in \mathrm{d}u'\Big),$$ where $n$ denotes the one-dimensional Itô measure on $\mathscr{X}_0$, and for any process $X$, and any time $T$, $X^T:=(X_t, \, t\in [0,T])$.* We shall denote by $\mathfrak{n}_+$ and $\mathfrak{n}_-$ the restrictions of $\mathfrak{n}$ to $U^+:=\{(u',z)\in U, \; z\ge 0\}$ and $U^-:=\{(u',z)\in U, \; z\le 0\}$ respectively. In [@Bur], excursion measures from hyperplanes in $\mathbbm{R}^d$ are rather constructed using Bessel processes. More precisely, one first samples the duration of the excursion with density $r\mapsto (2\pi r^3)^{-1/2}\mathds{1}_{\{r\ge 0\}}$ with respect to Lebesgue measure, and then for the last coordinate, one samples a $3$--dimensional Bessel bridge from $0$ to $0$ over $[0,r]$. This is equivalent to $\mathfrak{n}_+$ in our representation (up to a multiplicative factor) thanks to Itô's description of $n$, for which we refer again to [@RY]. We conclude this paragraph with the following Markov property under $\mathfrak{n}_+$. We set $\mathcal{F}_t = \sigma(u(s), \, 0\le s\le t)$. **Proposition 13**. *On the event that $T_a:= \inf\{0\le t\le R(u), \, z(t)=a\}<\infty$, the process $(u(T_a+t)-u(T_a), 0\le t\le R(u)-T_a)$ is independent of $\mathcal{F}_{T_a}$ and is a $d$-dimensional Brownian motion stopped when hitting $\{x_d=-a\}$.* **Disintegration of $\mathfrak{n}_+$.** We now construct measures $\gamma_{\mathbf{x}}$, $\mathbf{x}\in\mathbbm{R}^{d-1}$, for Brownian excursions from the hyperplane $\mathcal{H}=\{x_d=0\}$ *conditioned* on ending at $(\mathbf{x},0)$, by disintegrating $\mathfrak{n}_+$ over its endpoint. Whenever $r>0$ and $\mathbf{x}\in \mathbbm{R}^{d-1}$, we write $\Pi_r$ for the law of a $3$--dimensional Bessel bridge from $0$ to $0$ over $[0,r]$, and $\mathbbm{P}_{r}^{0\rightarrow \mathbf{x}}$ for the law of a $(d-1)$--dimensional Brownian bridge from $0$ to $\mathbf{x}$ with duration $r$. See [@AD] for the case $d=2$. 
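Before proceeding, note that the normalisation of the local time $\ell$ chosen above lends itself to a quick numerical illustration: by Lévy's theorem, $\ell_1$ has the law of $|Z_1|$, whose mean is $\sqrt{2/\pi}\approx 0.80$. A crude Monte Carlo sketch (the step size, the window $\varepsilon$ and the number of paths are ad hoc choices):

```python
import math, random

# Monte Carlo illustration of the occupation-time normalisation of the
# local time: we estimate l_1 by (1/(2*eps)) * Leb{s <= 1 : |Z_s| <= eps}
# along discretised Brownian paths. By Levy's theorem l_1 has the law
# of |Z_1|, with mean sqrt(2/pi) ~ 0.80; the estimate below matches it
# up to discretisation bias and Monte Carlo error.
random.seed(1)
dt, eps = 1e-3, 0.05
n_steps, n_paths = 1000, 1500
sd = math.sqrt(dt)
total = 0.0
for _ in range(n_paths):
    z, occ = 0.0, 0.0
    for _ in range(n_steps):
        z += random.gauss(0.0, sd)
        if abs(z) <= eps:
            occ += dt
    total += occ / (2 * eps)
estimate = total / n_paths   # roughly sqrt(2/pi) ~ 0.80
```

Refining `dt` and `eps` (with `eps` large compared to `sd`) brings the estimate closer to the limit defining $\ell$.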
**Proposition 14**. *The following disintegration formula holds: $$\mathfrak{n}_+ = \int_{\mathbbm{R}^{d-1}\setminus \{0\}} \mathrm{d}\mathbf{x}\, \frac{\Gamma(\frac{d}{2})}{2\pi^{d/2}|\mathbf{x}|^{d}} \cdot \gamma_{\mathbf{x}},$$ where $\gamma_{\mathbf{x}}$, $\mathbf{x}\in\mathbbm{R}^{d-1}\setminus\{0\}$, are probability measures. In addition, for all $\mathbf{x}\in\mathbbm{R}^{d-1}\setminus\{0\}$, $$\gamma_{\mathbf{x}} = \int_0^{\infty} \mathrm{d}r \frac{\mathrm{e}^{-\frac{1}{2r}}}{2^{\frac{d}{2}}\Gamma\left(\frac{d}{2}\right) r^{\frac{d}{2}+1}} \mathbbm{P}_{r|\mathbf{x}|^2}^{0\rightarrow \mathbf{x}} \otimes \Pi_{r|\mathbf{x}|^2}.$$* **Proof.** The proposition follows from Theorem 3.3 in [@Bur], but we rephrase it in our framework for completeness. Let $f:\mathscr{X}^{d-1}\longrightarrow \mathbbm{R}_+$ and $g:\mathscr{X}_0\longrightarrow \mathbbm{R}_+$ be two nonnegative measurable functions. Then by Proposition [Proposition 12](#prop:excursion measure){reference-type="ref" reference="prop:excursion measure"}, $$\int_{U^+} f(u')g(z) \mathfrak{n}_+(\mathrm{d}u', \mathrm{d}z) = \int_{U^+} f(u')g(z) n_+(\mathrm{d}z) \mathbbm{P}\Big((B^{d-1})^{R(z)}\in \mathrm{d}u'\Big).$$ Then by Itô's description of $\mathfrak{n}_+$ (see Chap. 
XII, Theorem 4.2 in [@RY]), we may split this integral over the duration $R(z)$: $$\int_{U^+} f(u')g(z) \mathfrak{n}_+(\mathrm{d}u', \mathrm{d}z) = \int_{0}^{\infty} \frac{\mathrm{d}r}{2\sqrt{2\pi r^3}} \Pi_r[g] \mathbbm{E}[f((B^{d-1})^r)].$$ We now condition on $B^{d-1}_r$, and we obtain $$\int_{U^+} f(u')g(z) \mathfrak{n}_+(\mathrm{d}u', \mathrm{d}z) = \int_{0}^{\infty} \frac{\mathrm{d}r}{2\sqrt{2\pi r^3}} \int_{\mathbbm{R}^{d-1}} \mathrm{d}\mathbf{x}\frac{\mathrm{e}^{-\frac{|\mathbf{x}|^2}{2r}}}{(2\pi r)^{\frac{d-1}{2}}} \Pi_r[g] \mathbbm{E}_{r}^{0\rightarrow \mathbf{x}}[f].$$ Finally, we perform the change of variables $r\mapsto t = r/|\mathbf{x}|^2$: $$\int_{U^+} f(u')g(z) \mathfrak{n}_+(\mathrm{d}u', \mathrm{d}z) = \int_{\mathbbm{R}^{d-1}} \frac{\mathrm{d}\mathbf{x}}{|\mathbf{x}|^d} \int_{0}^{\infty} \mathrm{d}t \frac{\mathrm{e}^{-\frac{1}{2t}}}{2(2\pi)^{\frac{d}{2}} t^{\frac{d}{2}+1}} \Pi_{t|\mathbf{x}|^2}[g] \mathbbm{E}_{t|\mathbf{x}|^2}^{0\rightarrow \mathbf{x}}[f].$$ Since $$\int_{0}^{\infty} \mathrm{d}t \frac{\mathrm{e}^{-\frac{1}{2t}}}{2(2\pi)^{\frac{d}{2}} t^{\frac{d}{2}+1}} = \frac12 \pi^{-\frac{d}{2}}\Gamma\left(\frac{d}{2}\right),$$ this gives that $\gamma_{\mathbf{x}}$, for $\mathbf{x}\in\mathbbm{R}^{d-1}\setminus\{0\}$, are probability measures, and the disintegration claim holds. 0◻ ![Bismut's description of $\mathfrak{n}_+$ in dimension $d=3$. The height of a uniformly chosen point $t$ on the excursion weighted by its duration is distributed according to the Lebesgue measure $\mathrm{d}A$. Moreover, conditionally on the height, the excursion splits into two independent trajectories depicted in blue and red. 
Both are distributed as Brownian motion killed when hitting the bottom half-plane (in grey).](Bismut-spatial-eps-converted-to.pdf){#fig:Bismut} **Bismut's description of $\mathfrak{n}_+$.** The following decomposition of $\mathfrak{n}_+$ describes the left and right parts of the trajectory seen from a point chosen uniformly at random on the Brownian excursion weighted by its lifetime. **Proposition 15**. *(Bismut's description of $\mathfrak{n}_+$)* *Let $\overline{\mathfrak{n}}_+$ be the measure defined on $\mathbbm{R}_+\times U^+$ by $$\overline{\mathfrak{n}}_+(\mathrm{d}t,\mathrm{d}u) = \mathds{1}_{\{0\leq t\leq R(u)\}} \mathrm{d}t \, \mathfrak{n}_+(\mathrm{d}u).$$ Then under $\overline{\mathfrak{n}}_+$ the "law" of $(t,(u',z))\mapsto z(t)$ is the Lebesgue measure $\mathrm{d}A$ on $\mathbbm{R}_+$, and conditionally on $z(t)=A$, $u^{t, \leftarrow}=\left(u(t-s)-u(t)\right)_{0\leq s\leq t}$ and $u^{t,\rightarrow}=\left(u(t+s)-u(t)\right)_{0\leq s\leq R(u)-t}$ are independent Brownian motions killed when reaching the hyperplane $\left\{x_d=-A\right\}$.* Proposition [Proposition 15](#prop:Bismut){reference-type="ref" reference="prop:Bismut"} is a straightforward consequence of Bismut's description of the one-dimensional Itô measure $n$. Figure [1](#fig:Bismut){reference-type="ref" reference="fig:Bismut"} illustrates how the excursion splits when seen from a uniform point. ## Slicing excursions with hyperplanes {#sec: slicing excursions} This section is an easy extension of the framework introduced in [@AD]. Let $u\in U^+$, and $a\ge 0$. We may write $u:=(u',z)$ with $u'\in \mathscr{X}^{d-1}$ and $z\in \mathscr{X}_0, z\ge 0$. 
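Returning briefly to the proof of Proposition [Proposition 14](#prop:disintegration){reference-type="ref" reference="prop:disintegration"}, the value of the last $t$-integral there can be confirmed numerically; a minimal stdlib sketch (the geometric grid and its bounds are ad hoc choices):

```python
import math

# Numerical check, for d = 3, of the identity used in the proof of the
# disintegration formula:
#   int_0^oo e^{-1/(2t)} / (2 (2 pi)^{d/2} t^{d/2+1}) dt
#     = (1/2) * pi^{-d/2} * Gamma(d/2)   ( = 1/(4 pi) when d = 3).
# Trapezoid rule on a geometric grid; the bounds 1e-4 and 1e5 are ad hoc.
d = 3
def integrand(t):
    return math.exp(-1 / (2 * t)) / (2 * (2 * math.pi) ** (d / 2) * t ** (d / 2 + 1))

n = 200_000
ts = [1e-4 * (1e9) ** (k / n) for k in range(n + 1)]   # grid from 1e-4 to 1e5
vals = [integrand(t) for t in ts]
approx = sum((ts[k + 1] - ts[k]) * (vals[k] + vals[k + 1]) / 2 for k in range(n))
exact = 0.5 * math.pi ** (-d / 2) * math.gamma(d / 2)   # = 1/(4*pi) for d = 3
```

The integrand decays exponentially near $0$ and like $t^{-d/2-1}$ at infinity, so the truncation error of the grid is negligible.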
**Notation and setup.** Define the superlevel set $$\label{eq:I(a) partition} \mathcal{I}(a) = \left\{s\in [0,R(u)], \; z(s)>a\right\}.$$ This is a countable (possibly empty) union of disjoint open intervals, and for any such interval $I=(i_-,i_+)$, we write $u_{I}(s) := u(i_- +s)-u(i_-), 0\leq s\leq i_+ -i_-,$ for the restriction of $u$ to $I$, and $\Delta u_I := u'(i_+)-u'(i_-)$. Remark that $\Delta u_I$ is a vector in the hyperplane $\mathcal{H}_a :=\{x_d=a\}$, which we call the *size* or *length* of the excursion $u_I$, see Figure [2](#fig:slicing excursion){reference-type="ref" reference="fig:slicing excursion"}. If $0\le t\le R(u)$, we denote by $e_a^{(t)}$ the excursion $u_I$ corresponding to the unique such interval $I$ which contains $t$. Moreover, we define $\mathcal{H}_a^+$ as the set of excursions above $\mathcal{H}_a$ corresponding to the previous partition of $\mathcal{I}(a)$. We may now present an application of Proposition [Proposition 15](#prop:Bismut){reference-type="ref" reference="prop:Bismut"}, which is similar to Proposition 2.7 in [@AD]. We show that, almost surely, excursions cut at heights do not make bubbles above any hyperplane. More precisely, we set $$\mathscr{L} := \{u\in U^+, \; \exists 0\leq t\leq R(u), \; \exists 0\leq a<z(t), \; \Delta e_a^{(t)}(u) = 0\}. \label{eq: loop}$$ This is the set of $u\in U^+$ making above some level an excursion which comes back to itself. Then **Proposition 16**. *$$\mathfrak{n}_+(\mathscr{L}) = 0.$$* **Proof.** We first notice that if $u\in \mathscr{L}$, then the set of $t$'s such that $\Delta e_a^{(t)}(u) = 0$ for some $0\le a<z(t)$ has positive Lebesgue measure. 
Therefore $$\label{eq: inclusion loop} \mathscr{L} \subset \left\{u\in U^+, \; \int_0^{R(u)} \mathds{1}_{\{\exists 0\le a<z(t), \; \Delta e_a^{(t)}(u) = 0\}} \mathrm{d}t >0\right\}.$$ Now using the notation in Proposition [Proposition 15](#prop:Bismut){reference-type="ref" reference="prop:Bismut"}, and defining $$T_a^{t,\leftarrow} := \inf\{s>0, z(t-s)=a\} \quad \text{and} \quad T_a^{t,\rightarrow} := \inf\{s>0, z(t+s)=a\},$$ we get $$\begin{aligned} \mathfrak{n}_+&\left( \int_0^{R(u)} \mathds{1}_{\{\exists 0\le a<z(t), \; \Delta e_a^{(t)}(u) = 0\}} \mathrm{d}t \right) \\ &= \overline{\mathfrak{n}}_+\left(\{(t,u)\in \mathbbm{R}_+\times U^+, \; \exists 0\leq a<z(t), \; \Delta e_a^{(t)}(u) = 0\} \right) \\ &= \overline{\mathfrak{n}}_+\left(\{(t,u)\in \mathbbm{R}_+\times U^+, \; \exists 0\leq a<z(t), \; u^{t, \leftarrow}(T_a^{t,\leftarrow}) = u^{t, \rightarrow}(T_a^{t,\rightarrow}) \} \right).\end{aligned}$$ Bismut's description [Proposition 15](#prop:Bismut){reference-type="ref" reference="prop:Bismut"} of $\mathfrak{n}_+$ (see [1](#fig:Bismut){reference-type="ref" reference="fig:Bismut"}) finally gives $$\mathfrak{n}_+\left( \int_0^{R(u)} \mathds{1}_{\{\exists 0\le a<z(t), \; \Delta e_a^{(t)}(u) = 0\}} \mathrm{d}t \right) = \int_0^{+\infty} \mathrm{d}A \, \mathbbm{P}\left(\exists 0<a\le A, B^{d-1}_1(T_a^1) = B^{d-1}_2(T_a^2) \right),$$ where $B^{d-1}_1, B^{d-1}_2$ are independent $(d-1)$--dimensional Brownian motions, and $T_a^1, T_a^2$ are independent Brownian hitting times. It is now well-known that the entries of $B^{d-1}_1(T_a^1)$ and $B^{d-1}_2(T_a^2)$ are symmetric Cauchy processes in $a$. By independence, the entries of $B^{d-1}_1(T_a^1)-B^{d-1}_2(T_a^2)$ are also Cauchy processes, for which points are polar (see [@Ber], Chap. II, Section 5). 
Hence $$\mathfrak{n}_+\left( \int_0^{R(u)} \mathds{1}_{\{\exists 0\le a<z(t), \; \Delta e_a^{(t)}(u) = 0\}} \mathrm{d}t \right) = 0.$$ This yields that for $\mathfrak{n}_+$--almost every excursion $u$, $$\int_0^{R(u)} \mathds{1}_{\{\exists 0\le a<z(t), \; \Delta e_a^{(t)}(u) = 0\}} \mathrm{d}t = 0,$$ and given the inclusion [\[eq: inclusion loop\]](#eq: inclusion loop){reference-type="eqref" reference="eq: inclusion loop"}, we infer that $\mathfrak{n}_+(\mathscr{L})=0$. 0◻ **The branching property of excursions in $\mathcal{H}_a^+$.** When cutting excursions with the hyperplanes $\mathcal{H}_a$, the natural filtration is the one carrying the information below these hyperplanes. We call $(\mathcal{G}_a, a\ge 0)$ this filtration, completed with the $\mathfrak{n}_+$--negligible sets. More precisely, we first introduce the path $u^{<a}$ defined as $u^{< a}_t:=u_{\tau^{< a}_t}$ if $t< A(R(u))$ and $u^{< a}_t:= u(R(u))$ if $t=A(R(u))$, where $$\label{eq: A_t time change} A(t):= \int_0^t \mathds{1}_{\{z(s) \leq a \}}\mathrm{d}s, \qquad \hbox{and} \qquad \tau^{< a}_t:= \inf\{s> 0\,:\, A(s)>t\}.$$ The filtration $(\mathcal{G}_a, a\ge 0)$ is then the (completed) filtration generated by $u^{<a}$. Recall that we have set $T_a:= \inf\{0\le t\le R(u), \, z(t)=a\}$. Finally, we let $a>0$ and rank the excursions $(e_i^{a,+}, i\ge 1)$ in $\mathcal{H}_a^+$ by descending order of the norm of their sizes $(\mathbf{x}^{a,+}_i, i\ge 1)$. Then the following branching property holds. **Proposition 17**. 
*For all $A\in \mathcal{G}_a$, and all nonnegative measurable functions $F_1,\ldots,F_k:U^+\rightarrow \mathbbm{R}_+, k\ge 1,$ $$\mathfrak{n}_{+}\left(\mathds{1}_{\{T_a<\infty\}}\mathds{1}_A \prod_{i=1}^k F_i(e_i^{a,+}) \right) = \mathfrak{n}_{+}\left(\mathds{1}_{\{T_a<\infty\}}\mathds{1}_A \prod_{i=1}^k \gamma_ {\mathbf{x}^{a,+}_i}[F_i]\right),$$ and the same also holds under $\gamma_{\mathbf{x}}$ for all $\mathbf{x}\in \mathbbm{R}^{d-1}\setminus \{0\}$.* **Proof.** We refer to [@AD] for the proof in the planar case, which is easily extended to higher dimensions. 0◻ ![Slicing at height $a$ of an excursion $u$ away from $\mathcal{H}$. The blue trajectory represents an excursion in the half-space $\{x_d>0\}$, $d=3$. For some fixed height $a>0$ we draw the hyperplane $\mathcal{H}_a$ and record the sub-excursions above $\mathcal{H}_a$. The four largest of them are represented in dark blue (the reader should imagine many infinitesimal excursions). The red arrows indicate the *size* of the sub-excursions, counted with respect to the orientation of $u$.](Excursion-spatial-eps-converted-to.pdf){#fig:slicing excursion} ## A many-to-one formula {#sec: many-to-one} We now establish a key formula in the description of the process of excursions cut at heights. We first need some notation and terminology. We argue on the event that $T_a<\infty$. Let $(\ell_t^a)_{t\in [0,R(u)]}$ (resp. $(\uptau^{a}_s)_{s\in [0,\ell^a_{R(u)}]}$) be the (resp. inverse) local time process of $u$ at level $a$ and let $(\mathfrak{e}_s^a,\, s\in(0,\ell_{R(u)}^a))$ be the excursion process at level $a$ of $u$. For convenience we also define $\mathfrak{e}_0^a$ and $\mathfrak{e}^a_{\ell_{R(u)}^a}$ respectively as the first and last bits of excursion between $\{x_d=0\}$ and $\{x_d=a\}$. 
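The Cauchy identification invoked in the proof of Proposition [Proposition 16](#prop:no loop){reference-type="ref" reference="prop:no loop"} can be probed by a quick simulation, using the standard representations $T_a \overset{d}{=} a^2/N^2$ and $B(T_a) \overset{d}{=} \sqrt{T_a}\, N'$ for independent standard normals $N, N'$ (a hedged sketch; the scale $a$ and sample size are illustrative):

```python
import random, statistics

# Illustration of the Cauchy identification: if T_a is the hitting time
# of level a by a linear Brownian motion, an independent Brownian
# coordinate evaluated at T_a is Cauchy with scale a. Indeed
# T_a = a^2/N^2 and B(T_a) = sqrt(T_a) * N' in law, so
# B(T_a) = a * N'/|N|, a ratio of independent standard normals, i.e.
# Cauchy(0, a); in particular the median of |B(T_a)| is exactly a.
random.seed(2)
a = 2.0
samples = [a * random.gauss(0, 1) / abs(random.gauss(0, 1))
           for _ in range(200_000)]
med = statistics.median(abs(x) for x in samples)   # close to a
```

The polarity of points for the Cauchy process, used to conclude that proof, is a separate (non-simulable) fact from [@Ber].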
A direct consequence of Proposition [Proposition 13](#prop:Markov){reference-type="ref" reference="prop:Markov"} is that, on the event $T_a<\infty$ and conditionally on $\mathcal{F}_{T_a}$, $(\mathfrak{e}_s^a,\, s\in(0,\ell_{R(u)}^a))$ forms a Poisson point process with intensity $\mathfrak{n}$ for the filtration $(\mathcal{F}_{\uptau_s^{a}}, 0\le s\le \ell^a_{R(u)})$, stopped at the first time when an excursion hits $\{x_d=-a\}$. Recall that $\mathscr{X}$ stands for the set of càdlàg trajectories with finite duration, here taken with values in $\mathbbm{R}^d$. We set $$\begin{aligned} {\mathfrak{u}}_1^s &:= \Big( u(t), \, t\in [0, \uptau^a_{s^-}]\Big), \\ {\mathfrak{u}}_2^s &:= \Big(u(R(u)-t),\, t \in [0, R(u)-\uptau^a_s] \Big). \end{aligned}$$ In words, ${\mathfrak{u}}_1^s$ and ${\mathfrak{u}}_2^s$ are the two elements of $\mathscr{X}$ which describe, respectively, the trajectory of $u$ before the excursion $\mathfrak{e}^{a}_s$ and the time-reversed trajectory of $u$ after the excursion $\mathfrak{e}^{a}_s$. Finally, we use the shorthand $s^+ \in [0, \ell_{R(u)}^a]$ to denote times $0\le s \le \ell_{R(u)}^a$ such that $\mathfrak{e}_s^{a} \in U^+$. Our description involves two processes $\mathfrak{h}_1$ and $\mathfrak{h}_2$ defined as follows. We call a *Bessel-Brownian excursion* a process in the half-space $\mathcal{H}^+ := \{x_d>0\}$ whose first $d-1$ entries are independent Brownian motions, and whose last coordinate $\mathfrak{z}$ is an independent three-dimensional Bessel process starting at $0$. Under $\mathbbm{P}$, take $\mathfrak{h}_1$ and $\mathfrak{h}_2=\mathfrak{h}_2^\mathbf{x}$ to be independent Bessel-Brownian excursions, with $\mathfrak{h}_1(0) = 0$ and $\mathfrak{h}_2(0) = (\mathbf{x}, 0)$. We write $S_i^a := \sup\{t\ge 0\,:\, {\mathfrak z}_i(t)\le a\}$ for the last passage time at $a$ of ${\mathfrak z}_i$, $i\in\{1,2\}$. **Proposition 18**. *Let $F: \mathscr{X}\times \mathscr{X}\rightarrow \mathbbm{R}_+$ be a nonnegative measurable function. 
Then $$\label{eq: many-to-one excursions} \gamma_{\mathbf{x}}\bigg[\mathds{1}_{\{T_a<\infty\}}\sum_{ s^+ \in [0,\ell^a_{R(u)}]} |\Delta \mathfrak{e}^{a}_s|^d F({\mathfrak u}_1^s, {\mathfrak u}_2^s) \bigg] = |\mathbf{x}|^d \mathbbm{E}\big[F\left(({\mathfrak h}_1(t),t\in [0,S^a_1]\right),\left({\mathfrak h}_2^{\mathbf{x}}(t),t\in [0,S^a_2])\right)\big].$$* **Proof.** The proof essentially follows from that of [@AD Equation (17)]. It suffices to prove [\[eq: many-to-one excursions\]](#eq: many-to-one excursions){reference-type="eqref" reference="eq: many-to-one excursions"} for $F(u,v)=f(u)g(v)$ with $f,g: \mathscr{X}\rightarrow \mathbbm{R}_+$ two measurable functions. We first deal with the left-hand side under the measure $\mathfrak{n}_+$. By the master formula [@RY Proposition XII.1.10] for the Poisson point process $(\mathfrak{e}_s^a,\, s\in(0,\ell_{R(u)}^a))$ and the disintegration property ([Proposition 14](#prop:disintegration){reference-type="ref" reference="prop:disintegration"}), $$\begin{aligned} &\mathfrak{n}_+\bigg( \mathds{1}_{\{T_a<\infty\}}\sum_{s^+ \in [0,\ell_{R(u)}^a]} f(\mathfrak{u}_1^s) g(\mathfrak{u}_2^s) |\Delta \mathfrak{e}^{a}_s|^d \bigg) \label{beginning H excursion}\\ &= \mathfrak{n}_+\bigg(\mathds{1}_{\{T_a<\infty\}} \int_0^{R(u)} f\left(\mathfrak{u}_1^{\ell_r^a}\right) \mathrm{d}\ell_r^a \nonumber \\ & \int_{\mathbbm{R}^{d-1}\setminus\{0\}} \mathrm{d}\mathbf{x}\frac{\Gamma(\frac{d}{2})}{2\pi^{d/2}} \, \mathbbm{E}\left[ g(\mathbf{x}+\mathbf{x}'+B^{d-1}(T^Z_{-a}-s),a+Z(T^Z_{-a}-s), 0\le s\le T^Z_{-a})\right]_{\mathbf{x}'=B^{d-1}(r)} \bigg), \nonumber %\label{H excursion}\end{aligned}$$ where $T^Z_{-a} = \inf\{t>0, \; Z(t) = -a\}$. 
The change of variables $\mathbf{x}+B^{d-1}(r) \mapsto \mathbf{x}$ then shows that $$\begin{gathered} \label{eq: key formula change variables} \mathfrak{n}_+\bigg( \mathds{1}_{\{T_a<\infty\}}\sum_{s^+ \in [0,\ell_{R(u)}^a]} f(\mathfrak{u}_1^s) g(\mathfrak{u}_2^s) |\Delta \mathfrak{e}^{a}_s|^d \bigg) = \mathfrak{n}_+\bigg(\mathds{1}_{\{T_a<\infty\}} \int_0^{R(u)} f\left(\mathfrak{u}_1^{\ell_r^a}\right) \mathrm{d}\ell_r^a \bigg) \\ \cdot \int_{\mathbbm{R}^{d-1}\setminus\{0\}} \frac{\Gamma(\frac{d}{2})}{2\pi^{d/2}} \mathrm{d}\mathbf{x}\mathbbm{E}\left[ g(\mathbf{x}+B^{d-1}(T^Z_{-a}-s),a+Z(T^Z_{-a}-s), 0\le s\le T^Z_{-a})\right].\end{gathered}$$ We first argue conditionally on $Z$, where $(B^{d-1}(s),\, 0\le s\le T^Z_{-a})$ is a $(d-1)$--dimensional Brownian motion stopped at time $T^Z_{-a}$. By reversibility of Brownian motion with respect to the Lebesgue measure on $\mathbbm{R}^{d-1}$, the "law" of $(\mathbf{x}+B^{d-1}_{T^Z_{-a}-s},\, 0\le s \le T^Z_{-a})$ for $\mathbf{x}$ sampled from the Lebesgue measure is that of a Brownian motion with initial measure the Lebesgue measure in $\mathbbm{R}^{d-1}$, stopped at time $T^Z_{-a}$. Secondly, it is standard that the process $(a+Z(T^Z_{-a}-s), 0\le s\le T^Z_{-a})$ is a 3-dimensional Bessel process starting from $0$ and run until its last passage time at $a$ (see for instance [@RY Corollary VII.4.6]). 
The last integral in the above display therefore boils down to $$\begin{gathered} \int_{\mathbbm{R}^{d-1}\setminus\{0\}} \frac{\Gamma(\frac{d}{2})}{2\pi^{d/2}} \mathrm{d}\mathbf{x}\mathbbm{E}\left[ g(\mathbf{x}+B^{d-1}(T^Z_{-a}-s),a+Z(T^Z_{-a}-s), 0\le s\le T^Z_{-a})\right] \\ = \int_{\mathbbm{R}^{d-1}\setminus\{0\}} \frac{\Gamma(\frac{d}{2})}{2\pi^{d/2}} \mathrm{d}\mathbf{x}\, \mathbbm{E}\Big[ g({\mathfrak h}_2^{\mathbf{x}}(t),t\in [0,S_2^a])\Big].\end{gathered}$$ Going back to [\[eq: key formula change variables\]](#eq: key formula change variables){reference-type="eqref" reference="eq: key formula change variables"}, we have $$\begin{gathered} \label{eq: key formula Bessel} \mathfrak{n}_+\bigg( \mathds{1}_{\{T_a<\infty\}}\sum_{s^+ \in [0,\ell_{R(u)}^a]} f(\mathfrak{u}_1^s) g(\mathfrak{u}_2^s) |\Delta \mathfrak{e}^{a}_s|^d \bigg) \\ = \mathfrak{n}_+\bigg(\mathds{1}_{\{T_a<\infty\}} \int_0^{R(u)} f\left(\mathfrak{u}_1^{\ell_r^a}\right) \mathrm{d}\ell_r^a \bigg) \cdot \int_{\mathbbm{R}^{d-1}\setminus\{0\}} \frac{\Gamma(\frac{d}{2})}{2\pi^{d/2}} \mathrm{d}\mathbf{x}\, \mathbbm{E}\Big[ g({\mathfrak h}_2^{\mathbf{x}}(t),t\in [0,S_2^a])\Big].\end{gathered}$$ On the other hand, by another application of the master formula [@RY Proposition XII.1.10], $$\mathfrak{n}_+\bigg( \mathds{1}_{\{T_a<\infty\}} f\left(\mathfrak{u}_1^{\ell_{R(u)}^a}\right) \bigg) = \mathfrak{n}_+\bigg(\mathds{1}_{\{T_a<\infty\}} \int_0^{R(u)} f(\mathfrak{u}_1^{\ell_r^a}) \mathrm{d}\ell_r^a \bigg)\mathfrak{n}_-(T_{-a}<\infty).$$ Since $\mathfrak{n}_-(T_{-a}<\infty) = \mathfrak{n}_+(T_{a}<\infty)$, we conclude that $$\mathfrak{n}_+\bigg(\mathds{1}_{\{T_a<\infty\}} \int_0^{R(u)} f(\mathfrak{u}_1^{\ell_r^a}) \mathrm{d}\ell_r^a \bigg) = \mathfrak{n}_+\left( f\left(\mathfrak{u}_1^{\ell_{R(u)}^a}\right) \Big| \, T_a <\infty\right).$$ Finally, under $\mathfrak{n}_+(\cdot \mid T_a<\infty)$, $u$ up to its last passage time at $a$ has the law of ${\mathfrak h}_1$ up to $S_1^a$. 
Hence [\[eq: key formula Bessel\]](#eq: key formula Bessel){reference-type="eqref" reference="eq: key formula Bessel"} becomes $$\begin{gathered} \mathfrak{n}_+\bigg( \mathds{1}_{\{T_a<\infty\}}\sum_{s^+ \in [0,\ell_{R(u)}^a]} f(\mathfrak{u}_1^s) g(\mathfrak{u}_2^s) |\Delta \mathfrak{e}^{a}_s|^d \bigg) \\ = \mathbbm{E}\Big[ f({\mathfrak h}_1(t),t\in [0,S_1^a]) \Big] \int_{\mathbbm{R}^{d-1}\setminus\{0\}} \frac{\Gamma(\frac{d}{2})}{2\pi^{d/2}} \mathrm{d}\mathbf{x}\, \mathbbm{E}\Big[ g({\mathfrak h}_2^{\mathbf{x}}(t),t\in [0,S_2^a])\Big].\end{gathered}$$ It remains to disintegrate $\mathfrak{n}_+$ over the endpoint using [Proposition 14](#prop:disintegration){reference-type="ref" reference="prop:disintegration"}: $$\begin{gathered} \int_{\mathbbm{R}^{d-1}\setminus\{0\}}\frac{\Gamma(\frac{d}{2})}{2\pi^{d/2}|\mathbf{x}|^d} \mathrm{d}\mathbf{x}\, \gamma_{\mathbf{x}} \bigg( \mathds{1}_{\{T_a<\infty\}}\sum_{s^+ \in [0,\ell_{R(u)}^a]} f(\mathfrak{u}_1^s) g(\mathfrak{u}_2^s) |\Delta \mathfrak{e}^{a}_s|^d \bigg) \\ = \mathbbm{E}\Big[ f({\mathfrak h}_1(t),t\in [0,S_1^a]) \Big] \int_{\mathbbm{R}^{d-1}\setminus\{0\}} \frac{\Gamma(\frac{d}{2})}{2\pi^{d/2}} \mathrm{d}\mathbf{x}\, \mathbbm{E}\Big[ g({\mathfrak h}_2^{\mathbf{x}}(t),t\in [0,S_2^a])\Big].\end{gathered}$$ One can then multiply $g$ by an arbitrary function of the endpoint. This entails that for Lebesgue--almost every $\mathbf{x}\in\mathbbm{R}^{d-1}\setminus\{0\}$, $$\gamma_{\mathbf{x}} \bigg( \mathds{1}_{\{T_a<\infty\}}\sum_{s^+ \in [0,\ell_{R(u)}^a]} f(\mathfrak{u}_1^s) g(\mathfrak{u}_2^s) |\Delta \mathfrak{e}^{a}_s|^d \bigg) = |\mathbf{x}|^d \mathbbm{E}\Big[ f({\mathfrak h}_1(t),t\in [0,S_1^a]) \Big] \cdot \mathbbm{E}\Big[ g({\mathfrak h}_2^{\mathbf{x}}(t),t\in [0,S_2^a])\Big].$$ This proves [\[eq: many-to-one excursions\]](#eq: many-to-one excursions){reference-type="eqref" reference="eq: many-to-one excursions"} for almost every $\mathbf{x}$. 
By a continuity argument that we feel free to skip, [\[eq: many-to-one excursions\]](#eq: many-to-one excursions){reference-type="eqref" reference="eq: many-to-one excursions"} remains true for all $\mathbf{x}\in\mathbbm{R}^{d-1}\setminus\{0\}$. 0◻ ## A change of measures Recall from ([\[eq:I(a) partition\]](#eq:I(a) partition){reference-type="ref" reference="eq:I(a) partition"}) the notation $\mathcal{H}_a^+$ for the set of excursions above $\mathcal{H}_a$. **Theorem 19**. *Under $\gamma_{\mathbf{x}}$ for all $\mathbf{x}\in \mathbbm{R}^{d-1}\setminus\{0\}$, the process $$\mathcal{M}_a := \mathds{1}_{\{T_{a}<\infty\}} \cdot \sum_{e\in \mathcal{H}_a^+} |\Delta e|^{d}, \quad a\ge 0,$$ is a martingale with respect to $(\mathcal{G}_a, a\ge 0)$.* **Proof.** By the branching property, it is enough to check that $\gamma_\mathbf{x}[\mathcal{M}_a] =|\mathbf{x}|^d$. The claim then follows directly from [Proposition 18](#prop: key many to one){reference-type="ref" reference="prop: key many to one"} by taking $F=1$. 0◻ We fix $\mathbf{x}\in\mathbbm{R}^{d-1}\setminus\{0\}$. To the martingale in Theorem [Theorem 19](#thm:martingale){reference-type="ref" reference="thm:martingale"}, we can associate the following change of measures. Recall from [\[eq: A_t time change\]](#eq: A_t time change){reference-type="eqref" reference="eq: A_t time change"} the notation $u^{<a}$. Define on the same probability space the process $({\mathfrak U}_a^\mathbf{x},\, a> 0)$ such that for any $a> 0$, the law of ${\mathfrak U}_a^\mathbf{x}$ is that of $u^{<a}$ under the probability measure $|\mathbf{x}|^{-d} {\mathcal M}_a \mathrm{d}\gamma_\mathbf{x}$. The existence of ${\mathfrak U}^\mathbf{x}$ results from Kolmogorov's extension theorem (the martingale property of $\mathcal{M}$ makes this definition consistent). Our goal is to describe the law of $({\mathfrak U}_a^\mathbf{x}, a\ge 0)$. 
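The mechanism behind this change of measures can be illustrated on a toy example: tilting a base law by a positive mean-one weight and computing expectations under the tilted law by importance sampling. The sketch below (an illustrative Gaussian exponential tilting, not the excursion setting itself) checks that reweighting $N(0,1)$ by $\mathrm{e}^{\theta x-\theta^2/2}$ produces $N(\theta,1)$:

```python
import math, random

# Toy analogue of tilting by the positive martingale M_a: reweight a
# base law by a positive, mean-one density and compute expectations
# under the tilted law by self-normalised importance sampling. Here the
# base law is N(0,1) and the weight is the exponential martingale
# exp(theta*x - theta^2/2); the tilted law is N(theta, 1), so the
# reweighted mean should come out close to theta.
random.seed(3)
theta, n = 0.7, 200_000
xs = [random.gauss(0, 1) for _ in range(n)]
ws = [math.exp(theta * x - theta ** 2 / 2) for x in xs]
tilted_mean = sum(w * x for w, x in zip(ws, xs)) / sum(ws)   # close to theta
```

In the excursion setting the role of the exponential weight is played by $|\mathbf{x}|^{-d}\mathcal{M}_a$, and the martingale property makes the family of tilted laws consistent in $a$.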
The construction involves the two processes $\mathfrak{h}_1$ and $\mathfrak{h}_2$ of Section [5.3](#sec: many-to-one){reference-type="ref" reference="sec: many-to-one"}. As in [\[eq: A_t time change\]](#eq: A_t time change){reference-type="eqref" reference="eq: A_t time change"}, we introduce $$\label{eq: A_i(t) time change} A_i(t):= \int_0^t \mathds{1}_{\{{\mathfrak z}_i(s) \leq a \}}\mathrm{d}s, \quad \tau_i(t):= \inf\{s> 0\,:\, A_i(s)>t\}, \quad \text{for} \; i\in\{1,2\},$$ and we also set $$A_i(\infty) = \lim_{t\to\infty} A_i(t), \quad i\in\{1,2\}.$$ Under $\mathbbm{P}$, we now define $\widetilde {\mathfrak U}_a^{\mathbf{x}}$ as the process obtained by concatenating $\mathfrak{h}_1$ and $\mathfrak{h}_2$ when they leave $\{x_d\le a\}$ forever, and removing everything above level $a$. More precisely, $$\widetilde {\mathfrak U}_a^{\mathbf{x}}(t) := \left\{ \begin{array}{ll} {\mathfrak h}_1(\tau_1(t)) &\hbox{ if } t \in [0,A_1(\infty)), \\ {\mathfrak h}_2(\tau_2(A_1(\infty)+A_2(\infty)-t)) &\hbox{ if } t \in [A_1(\infty),A_1(\infty)+A_2(\infty)], \end{array} \right.$$ with the convention that ${\mathfrak h}_2(\tau_2(A_2(\infty))) := {\mathfrak h}_2(\tau_2(A_2(\infty))^-)$. We are now ready to describe the law of ${\mathfrak U}^\mathbf{x}$. **Theorem 20**. *For any $\mathbf{x}\ne 0$, the process $({\mathfrak U}_a^\mathbf{x},a>0)$ is distributed as $(\widetilde {\mathfrak U}_a^\mathbf{x},a>0)$.* Performing the change of measure thus results in splitting the excursion into two independent excursions in the half-space $\mathcal{H}^+$ going to infinity, as in Figure [3](#fig:change measures){reference-type="ref" reference="fig:change measures"}. **Proof.** This follows readily from [Proposition 18](#prop: key many to one){reference-type="ref" reference="prop: key many to one"} by taking $F(\mathfrak{u}_1^s, \mathfrak{u}_2^s)$ as a measurable function of $u^{<a}$. 
0◻ ## Proof of Theorem [Theorem 1](#thm:exc){reference-type="ref" reference="thm:exc"} {#proof-of-theorem-thmexc} We reformulate the previous results in the parlance of Section [3](#sec: spatial GF){reference-type="ref" reference="sec: spatial GF"}. Setting $$\mathbf{Z}(a) := \left\{\left\{ \Delta e, \; e\in\mathcal{H}_a^+ \right\}\right\}, \quad a\ge 0,$$ it follows from [Proposition 17](#prop:branching){reference-type="ref" reference="prop:branching"} that $\mathbf{Z}$ enjoys a branching property akin to [Proposition 5](#prop: temporal branching spatial GF){reference-type="ref" reference="prop: temporal branching spatial GF"}. We could have pointed out an Eve cell in the spirit of [@AD Theorem 3.3] by considering the locally largest excursion. Together with an avatar of [@AD Theorem 3.6], this proves that under $\gamma_{\mathbf{x}}$, $\mathbf{Z}$ is a *spatial growth-fragmentation process*. Actually, one should first check that the evolution of the Eve cell generates all the excursions, but this is a simple consequence of the arguments presented in [@AD Theorem 4.1]. In the previous exposition, we chose to rather dwell on the spine description. More specifically, the martingale in [Theorem 19](#thm:martingale){reference-type="ref" reference="thm:martingale"} is a temporal version of the martingale in [Theorem 7](#thm:M(n)){reference-type="ref" reference="thm:M(n)"}. Then, [Theorem 20](#thm:law change measures){reference-type="ref" reference="thm:law change measures"} determines the law of the spine without reference to [Theorem 9](#thm:spine){reference-type="ref" reference="thm:spine"}. The spine is described as the Brownian motion $B^{d-1}$ taken at the hitting times of another independent linear Brownian motion, and hence is a $(d-1)$--dimensional isotropic Cauchy process. ![Splitting of the excursion under the change of measure. Under the change of measure, the excursion splits into two independent Bessel-Brownian excursions (blue and red). 
For fixed height $a>0$, the sub-excursion above $a$ straddling the point at infinity is obtained by running two independent $d$--dimensional Brownian motions started from infinity and stopped when hitting the hyperplane $\mathcal{H}_a$. ](Change-measures.eps){#fig:change measures} ## Extension to isotropic stable Lévy processes As in [@DS], we can extend the previous construction to stable processes with index $\alpha\in(0,2)$. We recall that we have set $d\ge 3$, and that the case $d=2$ was already treated in [@DS]. We will not provide all the details of the proofs since the arguments are similar to the Brownian case described above. **The excursion measure $\mathfrak{n}^{\alpha}$.** We shall consider the following excursions, which are obtained by replacing the first $(d-1)$ entries of the previous setting by an isotropic $\alpha$--stable Lévy process in $\mathbbm{R}^{d-1}$. We keep the notation of [5.1](#sec:excursion measure){reference-type="ref" reference="sec:excursion measure"}, except that a $(d-1)$--dimensional isotropic stable Lévy process $X^{d-1}$ is now defined on the probability space, and we consider the process $Z^d :=(X^{d-1},Z)$ with Brownian last coordinate. Then, we introduce the excursion process $(\mathfrak{e}^{\alpha}_s,s>0)$ as - if $\uptau_s-\uptau_{s^-}>0$, then $$\mathfrak{e}^{\alpha}_s : r\mapsto \left(X^{d-1}_{r+\uptau_{s^-}}-X^{d-1}_{\uptau_{s^-}}, Z_{r+\uptau_{s^-}}\right), \quad r\leq \uptau_s-\uptau_{s^-},$$ - if $\uptau_s-\uptau_{s^-}=0$, then $\mathfrak{e}^{\alpha}_s = \partial$. As in [Proposition 12](#prop:excursion measure){reference-type="ref" reference="prop:excursion measure"}, this defines a Poisson point process with intensity measure $$\mathfrak{n}^{\alpha}(\mathrm{d}u',\mathrm{d}z) := n(\mathrm{d}z) \mathbbm{P}\Big((X^{d-1})^{R(z)}\in \mathrm{d}u'\Big).$$ Let $\mathfrak{n}^{\alpha}_+$ be the restriction of $\mathfrak{n}^{\alpha}$ to positive excursions.
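The identification of the spine above (the Brownian motion $B^{d-1}$ evaluated at the hitting times of an independent linear Brownian motion, hence an isotropic Cauchy process) is easy to probe numerically in the scalar case: since the hitting time satisfies $T_a \stackrel{d}{=} a^2/N^2$ with $N\sim\mathsf{N}(0,1)$, one has $B(T_a)\stackrel{d}{=} a\,G/|N|$ for independent standard normals $G,N$, a Cauchy variable with scale $a$. A minimal Monte Carlo sketch (standard library only; an illustration on our part, not an argument from the text):

```python
import random

random.seed(0)
a, n = 2.0, 200_000

# B(T_a) = sqrt(T_a) * G with T_a =d a^2 / N^2, hence B(T_a) =d a * G / |N|.
samples = []
for _ in range(n):
    g, m = random.gauss(0.0, 1.0), abs(random.gauss(0.0, 1.0))
    samples.append(a * g / m)

# For a symmetric Cauchy law with scale a: P(|X| <= a) = 1/2 and P(X > 0) = 1/2.
frac_inside = sum(abs(x) <= a for x in samples) / n
frac_positive = sum(x > 0 for x in samples) / n
print(round(frac_inside, 3), round(frac_positive, 3))  # both close to 0.5
```

With $2\times 10^5$ samples, both printed frequencies land within about a percent of $1/2$, consistent with the Cauchy identification.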
We now want to condition $\mathfrak{n}^{\alpha}_+$ on the endpoint of the excursion. For $\mathbf{x}\in\mathbbm{R}^{d-1}$ and $r>0$, let $\mathbbm{P}_{r}^{\alpha, 0\rightarrow \mathbf{x}}$ denote the law of an $\alpha$--stable bridge from $0$ to $\mathbf{x}$ over $[0,r]$. In addition, we write $(p^{\alpha}_r, r\ge 0)$ for the transition densities of $X^{d-1}$. Throughout this section, we set $\omega_d:= d-1+\frac{\alpha}{2}$. **Proposition 21**. *The following disintegration formula holds: $$\label{eq: disintegration stable} \mathfrak{n}^{\alpha}_+ = \int_{\mathbbm{R}^{d-1}\setminus \{0\}} \mathrm{d}\mathbf{x}\, \frac{C_d}{|\mathbf{x}|^{\omega_d}} \cdot \gamma^{\alpha}_{\mathbf{x}},$$ where $\gamma^{\alpha}_{\mathbf{x}}$, $\mathbf{x}\in\mathbbm{R}^{d-1}\setminus\{0\}$, are probability measures, and $$C_d = \frac{\alpha}{2\sqrt{2\pi}} \int_{\mathbbm{R}_+} \mathrm{d}v \, p_1^{\alpha}(v\cdot\mathbf{1}) v^{\omega_d-1},$$ where $\mathbf{1}$ denotes the "north pole" in $\mathbb{S}^{d-2}$. In addition, for all $\mathbf{x}\in\mathbbm{R}^{d-1}\setminus\{0\}$, $$\gamma^{\alpha}_{\mathbf{x}} = \int_0^{\infty} \mathrm{d}r \frac{p_1^{\alpha}(r^{-1/\alpha}\cdot\mathbf{1})}{2\sqrt{2\pi}r^{1+\frac{\omega_d}{\alpha}}} \mathbbm{P}_{r|\mathbf{x}|^{\alpha}}^{\alpha, 0\rightarrow \mathbf{x}} \otimes \Pi_{r|\mathbf{x}|^{\alpha}}.$$* **Proof.** Let $f$ and $g$ be two nonnegative measurable functions, respectively defined on $\mathscr{X}^{d-1}$ and $\mathscr{X}_0$.
Following the proof of [Proposition 14](#prop:disintegration){reference-type="ref" reference="prop:disintegration"}, we end up with $$\begin{aligned} \int_{U^+} f(u')g(z)\mathfrak{n}^{\alpha}_+(\mathrm{d}u',\mathrm{d}z) &= \int_0^{\infty} \frac{\mathrm{d}r}{2\sqrt{2\pi r^3}} \Pi_r[g]\mathbbm{E}[f((X^{d-1})^r)] \\ &= \int_0^{\infty} \frac{\mathrm{d}r}{2\sqrt{2\pi r^3}} \int_{\mathbbm{R}^{d-1}} \mathrm{d}\mathbf{x}\, p_r^{\alpha}(\mathbf{x}) \Pi_r[g]\mathbbm{E}_{r}^{\alpha, 0\rightarrow \mathbf{x}}[f(X^{d-1})].\end{aligned}$$ Note that, by self-similarity, for all $r>0$ and $\mathbf{x}\in\mathbbm{R}^{d-1}$, $$p_r^{\alpha}(\mathbf{x}) = r^{-\frac{d-1}{\alpha}} p_1^{\alpha}(r^{-1/\alpha}\cdot\mathbf{x}).$$ Hence $$\int_{U^+} f(u')g(z)\mathfrak{n}^{\alpha}_+(\mathrm{d}u',\mathrm{d}z) = \int_0^{\infty} \frac{\mathrm{d}r}{2\sqrt{2\pi r^3}} \int_{\mathbbm{R}^{d-1}} \mathrm{d}\mathbf{x}\, r^{-\frac{d-1}{\alpha}} p_1^{\alpha}(r^{-1/\alpha}\cdot\mathbf{x}) \Pi_r[g]\mathbbm{E}_{r}^{\alpha, 0\rightarrow \mathbf{x}}[f(X^{d-1})],$$ and by the change of variables $u(r):=\frac{r}{|\mathbf{x}|^{\alpha}}$, this is $$\begin{gathered} \int_{U^+} f(u')g(z)\mathfrak{n}^{\alpha}_+(\mathrm{d}u',\mathrm{d}z) \\ = \int_{\mathbbm{R}^{d-1}} \frac{\mathrm{d}\mathbf{x}}{|\mathbf{x}|^{\omega_d}} \int_0^{\infty} \frac{\mathrm{d}u}{2\sqrt{2\pi u^3}} u^{-\frac{d-1}{\alpha}} p_1^{\alpha}\Big(u^{-1/\alpha}\cdot\frac{\mathbf{x}}{|\mathbf{x}|}\Big) \Pi_{u|\mathbf{x}|^{\alpha}}[g]\mathbbm{E}_{u|\mathbf{x}|^{\alpha}}^{\alpha, 0\rightarrow \mathbf{x}}[f(X^{d-1})].\end{gathered}$$ Observe that the isotropy of $X^{d-1}$ yields the relationship $p_1^{\alpha}\Big(u^{-1/\alpha}\cdot\frac{\mathbf{x}}{|\mathbf{x}|}\Big) = p_1^{\alpha}(u^{-1/\alpha}\cdot \mathbf{1})$, so that $$\int_{U^+} f(u')g(z)\mathfrak{n}^{\alpha}_+(\mathrm{d}u',\mathrm{d}z) \\ = \int_{\mathbbm{R}^{d-1}} \frac{\mathrm{d}\mathbf{x}}{|\mathbf{x}|^{\omega_d}} \int_0^{\infty} \mathrm{d}u \frac{p_1^{\alpha}(u^{-1/\alpha}\cdot 
\mathbf{1})}{2\sqrt{2\pi}u^{1+\frac{\omega_d}{\alpha}}} \Pi_{u|\mathbf{x}|^{\alpha}}[g]\mathbbm{E}_{u|\mathbf{x}|^{\alpha}}^{\alpha, 0\rightarrow \mathbf{x}}[f(X^{d-1})].$$ The proposition follows. 0◻ **Remark 22**. We emphasize that the proof of [Proposition 21](#prop: disintegration stable){reference-type="ref" reference="prop: disintegration stable"} uses the isotropy assumption on $X^{d-1}$, and indeed formula [\[eq: disintegration stable\]](#eq: disintegration stable){reference-type="eqref" reference="eq: disintegration stable"} shows that the excursion measure $\mathfrak{n}^{\alpha}_+$ assigns a weight to the endpoint $\mathbf{x}$ which only depends on its radial part $|\mathbf{x}|$. If $X^{d-1}$ were not isotropic, then one would have to deal with the angular part of $\mathbf{x}$ in the disintegration. The following proposition is a Bismut description of $\mathfrak{n}^{\alpha}_+$, which is easily extended from [Proposition 15](#prop:Bismut){reference-type="ref" reference="prop:Bismut"}. The picture looks roughly the same as in [1](#fig:Bismut){reference-type="ref" reference="fig:Bismut"}, albeit the two trajectories have their first $(d-1)$ entries distributed as an isotropic stable process in $\mathbbm{R}^{d-1}$. **Proposition 23**. 
*(Bismut's description of $\mathfrak{n}^{\alpha}_+$)* *Let $\overline{\mathfrak{n}^{\alpha}_+}$ be the measure defined on $\mathbbm{R}_+\times U^+$ by $$\overline{\mathfrak{n}^{\alpha}_+}(\mathrm{d}t,\mathrm{d}u) = \mathds{1}_{\{0\leq t\leq R(u)\}} \mathrm{d}t \, \mathfrak{n}^{\alpha}_+(\mathrm{d}u).$$ Then under $\overline{\mathfrak{n}^{\alpha}_+}$ the \"law\" of $(t,(u',z))\mapsto z(t)$ is the Lebesgue measure $\mathrm{d}A$ on $\mathbbm{R}_+$, and conditionally on $z(t)=A$, $u^{t, \leftarrow}=\left(u(t-s)-u(t)\right)_{0\leq s\leq t}$ and $u^{t,\rightarrow}=\left(u(t+s)-u(t)\right)_{0\leq s\leq R(u)-t}$ are independent and evolve as $Z^d$ killed when reaching the hyperplane $\left\{x_d=-A\right\}$.* One consequence of this decomposition is that for $\mathfrak{n}^{\alpha}_+$--almost every excursion, there is no loop above any level. More precisely, recall the definition of $\mathscr{L}$ in [\[eq: loop\]](#eq: loop){reference-type="eqref" reference="eq: loop"}. Then $\mathfrak{n}^{\alpha}_+(\mathscr{L}) =0$. The proof can be taken *verbatim* from [Proposition 16](#prop: loop){reference-type="ref" reference="prop: loop"}, using that a stable process in dimension $d-1\ge 2$ does not hit points (see [@Ber II, Corollary 17]). ![Slicing of an excursion in $\mathcal{H}_+$ with stable first two coordinates, in dimension $d=3$. The excursion is drawn in blue. The trajectory is càdlàg, but the height coordinate never jumps. We record the *length* (in red) of the sub-excursions (in dark blue) made above $\mathcal{H}_a$.](Excursion-spatial-stable-eps-converted-to.pdf){#fig:slicing excursion stable} **The branching property under $\mathfrak{n}^{\alpha}_+$.** We will be interested in cutting excursions with hyperplanes at varying heights, and in studying the *length* of the sub-excursions above these hyperplanes ([4](#fig:slicing excursion stable){reference-type="ref" reference="fig:slicing excursion stable"}).
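In discrete time, this slicing operation is elementary. The following sketch (an illustration under our own simplifying conventions, not the construction used in the proofs) takes a path of (spatial, height) pairs starting below the level, extracts the maximal stretches spent strictly above height $a$, and records the spatial displacement across each stretch (the discrete counterpart of the marks $\Delta e$, $e\in\mathcal{H}_a^+$):

```python
def sub_excursion_displacements(path, a):
    """Given a discretized path of (spatial, height) pairs that starts below
    level a, return the spatial displacement across each maximal stretch
    spent strictly above height a (discrete analogue of the marks Delta e)."""
    displacements, start = [], None
    for i, (_, h) in enumerate(path):
        if h > a and start is None:
            start = i                      # entering {height > a}
        elif h <= a and start is not None:
            # displacement between the last point below and the return point
            displacements.append(path[i][0] - path[start - 1][0])
            start = None
    return displacements

demo = [(0.0, 0.0), (0.3, 1.0), (0.7, 2.0), (1.0, 1.0),
        (1.2, 0.0), (1.5, 1.5), (2.0, 0.5)]
print(sub_excursion_displacements(demo, 0.8))  # two stretches above the level
```

The two recorded displacements are the horizontal distances between the entry and exit points of the two stretches above height $0.8$.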
As in [Proposition 17](#prop:branching){reference-type="ref" reference="prop:branching"}, this exhibits a branching structure that we summarise in the next result, in the language introduced in [5.2](#sec: slicing excursions){reference-type="ref" reference="sec: slicing excursions"}. **Proposition 24**. *For all $A\in \mathcal{G}_a$, and all nonnegative measurable functions $F_1,\ldots,F_k:U^+\rightarrow \mathbbm{R}_+, k\ge 1,$ $$\mathfrak{n}^{\alpha}_{+}\left(\mathds{1}_{\{T_a<\infty\}}\mathds{1}_A \prod_{i=1}^k F_i(e_i^{a,+}) \right) = \mathfrak{n}^{\alpha}_{+}\left(\mathds{1}_{\{T_a<\infty\}}\mathds{1}_A \prod_{i=1}^k \gamma^{\alpha}_ {\mathbf{x}^{a,+}_i}[F_i]\right),$$ and the same also holds under $\gamma^{\alpha}_{\mathbf{x}}$ for all $\mathbf{x}\in \mathbbm{R}^{d-1}\setminus \{0\}$.* **Martingale and spine decomposition under $\gamma^{\alpha}_{\mathbf{x}}$.** In line with [Theorem 19](#thm:martingale){reference-type="ref" reference="thm:martingale"} and [Theorem 20](#thm:law change measures){reference-type="ref" reference="thm:law change measures"}, we reveal the martingale in the stable setting and describe the law after the change of measure. The notation is implicitly taken from the Brownian case. All the proofs are omitted because they are simple extensions of their Brownian analogues, going through a many-to-one formula akin to [Proposition 18](#prop: key many to one){reference-type="ref" reference="prop: key many to one"}. Recall that $\omega_d = d-1+\frac{\alpha}{2}$. **Theorem 25**. *Under $\gamma^{\alpha}_{\mathbf{x}}$ for all $\mathbf{x}\in \mathbbm{R}^{d-1}\setminus\{0\}$, the process $$\mathcal{M}^{\alpha}_a := \mathds{1}_{\{T_{a}<\infty\}} \cdot \sum_{e\in \mathcal{H}_a^+} |\Delta e|^{\omega_d}, \quad a\ge 0,$$ is a martingale with respect to $(\mathcal{G}_a, a\ge 0)$.* ![The excursion $u$ seen under $\mu_{\mathbf{x}}^{\alpha}$. 
Under the change of measure, $u$ splits into two independent $(\alpha,\mathcal{H}^+)$--excursions (blue and red), which are the analogues of the Brownian half-space excursions appearing in [3](#fig:change measures){reference-type="ref" reference="fig:change measures"}, when the first $(d-1)$ coordinates are replaced with an isotropic stable process. The length of the sub-excursion above some height $a$ straddling the point at infinity is obtained by subordinating the isotropic process at the Brownian hitting time of level $a$. Let us stress once more that the last coordinate is continuous, so that this length is well defined for every height $a>0$.](Change-measures-stable.eps){#fig:change measures stable} Let $\mathbf{x}\in\mathbbm{R}^{d-1}\setminus\{0\}$. Consider the change of measure such that $$\frac{\mathrm{d}\mu^{\alpha}_{\mathbf{x}}}{\mathrm{d}\gamma^{\alpha}_{\mathbf{x}}}\bigg|_{\mathcal{G}_a} := \frac{\mathcal{M}^{\alpha}_a}{|\mathbf{x}|^{\omega_d}}, \quad a\ge 0.$$ We now come to the description of the excursion under $\mu^{\alpha}_{\mathbf{x}}$. We call an $(\alpha,\mathcal{H}^+)$--excursion a process in $\mathbbm{R}^d$ whose first $(d-1)$ entries form an isotropic $\alpha$--stable Lévy process, and whose last entry is an independent $3$--dimensional Bessel process starting at $0$ (so that this process actually remains in $\mathcal{H}^+$). We set $T_a:= \inf\{0\le t\le R(u), \, z(t)=a\}$, and $S_a:= \inf\{0\le t\le R(u), \, z(R(u)-t)=a\}$. **Theorem 26**. *Under $\mu^{\alpha}_{\mathbf{x}}$, for all $a>0$, the processes $(u(s), s\le T_a)$ and $(u(R(u)-s), s\le S_a)$ are independent $(\alpha,\mathcal{H}^+)$--excursions started respectively from $0$ and $(\mathbf{x},0)$ and stopped when hitting $\mathcal{H}_a$.* [5](#fig:change measures stable){reference-type="ref" reference="fig:change measures stable"} illustrates the theorem.
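The subordination scaling can also be probed numerically. In one dimension, the Brownian hitting time satisfies $T_a \stackrel{d}{=} a^2\,T_1$, and by self-similarity of the $\alpha$-stable process, $X(T_a) \stackrel{d}{=} a^{2/\alpha}\,X(T_1)$, exactly the scaling of an $\frac{\alpha}{2}$-stable law. A Monte Carlo sketch on our part (not from the text), using the classical Chambers--Mallows--Stuck sampler for symmetric stable variables:

```python
import math, random

random.seed(0)
alpha = 1.5

def symmetric_stable(alpha):
    """Chambers-Mallows-Stuck sampler for a standard symmetric
    alpha-stable variable (0 < alpha <= 2, alpha != 1)."""
    u = random.uniform(-math.pi / 2, math.pi / 2)
    w = random.expovariate(1.0)
    return (math.sin(alpha * u) / math.cos(u) ** (1 / alpha)
            * (math.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))

def hitting_time(a):
    """Hitting time of level a by a standard Brownian motion: T_a =d a^2/N^2."""
    return (a / random.gauss(0.0, 1.0)) ** 2

a, n = 3.0, 100_000
# X(T_a) | T_a = t is distributed as t^{1/alpha} X(1); compare the two sides
# of the identity X(T_a) =d a^{2/alpha} X(T_1) via empirical quartiles.
sample_a = sorted(hitting_time(a) ** (1 / alpha) * symmetric_stable(alpha)
                  for _ in range(n))
sample_1 = sorted(a ** (2 / alpha) * hitting_time(1.0) ** (1 / alpha)
                  * symmetric_stable(alpha) for _ in range(n))

q = lambda s, p: s[int(p * len(s))]
print(round(q(sample_a, 0.75) / q(sample_1, 0.75), 2))  # ratio close to 1
```

Both samples follow the same $\frac{\alpha}{2}$-stable law scaled by $a^{2/\alpha}$, so the ratio of their matching quartiles concentrates around $1$.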
The proof of Theorem [Theorem 2](#thm:excal){reference-type="ref" reference="thm:excal"} then follows along the same lines as that of Theorem [Theorem 1](#thm:exc){reference-type="ref" reference="thm:exc"}. Indeed, we note that the process $$\mathbf{Z}(a) := \left\{\left\{ \Delta e, \; e\in\mathcal{H}_a^+ \right\}\right\}, \quad a\ge 0,$$ is a spatial growth-fragmentation process under $\gamma^{\alpha}_{\mathbf{x}}$. One could adapt the ideas of [@DS Theorem 6.8] in order to define an Eve cell process driving $\mathbf{Z}$, but beware that the (signed) growth-fragmentation process described therein is not isotropic as such (one needs to adjust the constants $c_+$ and $c_-$ to recover an isotropic process). [Theorem 26](#thm:law change measures stable){reference-type="ref" reference="thm:law change measures stable"} provides the law of the spine as an isotropic $(d-1)$--dimensional $\frac{\alpha}{2}$--stable process. [^1]: University of Vienna, Austria, `william.da.silva@univie.ac.at` [^2]: Centro de Investigación en Matemáticas A.C. Calle Jalisco s/n. 36240 Guanajuato, México, `jcpardo@cimat.mx`
--- abstract: | We study semi-parametric estimation of the population mean when data is observed missing at random (MAR) in the $n < p$ "inconsistency regime", in which neither the outcome model nor the propensity/missingness model can be estimated consistently. Consider a high-dimensional linear-GLM specification in which the number of confounders is proportional to the sample size. In the case $n > p$, past work has developed theory for the classical AIPW estimator in this model and established its variance inflation and asymptotic normality when the outcome model is fit by ordinary least squares. Ordinary least squares is no longer feasible in the case $n < p$ studied here, and we also demonstrate that a number of classical debiasing procedures become inconsistent. This challenge motivates our development and analysis of a novel procedure: we establish that it is consistent for the population mean under proportional asymptotics allowing for $n < p$, and also provide confidence intervals for the linear model coefficients. Providing such guarantees in the inconsistency regime requires a new debiasing approach that combines penalized $M$-estimates of both the outcome and propensity/missingness models in a non-standard way. bibliography: - db_missing_data.bib --- **Challenges of the inconsistency regime:\ Novel debiasing methods for missing data models**\ Michael Celentano$^{\star}$ and Martin J. Wainwright$^{\star, \ddagger, \dagger, \circ}$\ Department of Statistics$^\star$ and Department of Electrical Engineering and Computer Sciences$^\ddagger$, UC Berkeley, Berkeley, CA\ Department of Electrical Engineering and Computer Sciences$^\dagger$, Department of Mathematics$^\circ$, LIDS, and Statistics and Data Science Center, Massachusetts Institute of Technology, Cambridge, MA # Introduction In semi-parametric problems, the goal is to estimate a target parameter in the presence of one or more "nuisance" components. This paper studies the estimation of a population mean with data missing at random (MAR) in a setting where the outcome and propensity nuisance models cannot be estimated consistently. We receive iid observations $(y_i\ensuremath{a}_i,\ensuremath{a}_i,\boldsymbol{x}_i)$, $i \in [n] \ensuremath{: =}\{1, \ldots, \ensuremath{n}\}$, where $y_i$ is the outcome of interest, $\ensuremath{a}_i \in \{0,1\}$ is a binary missingness indicator, and $\boldsymbol{x}_i \in {\mathbb R}^p$ are covariates. The MAR assumption is that the outcome and missingness indicator are conditionally independent given covariates: $y_i \perp\!\!\!\perp\ensuremath{a}_i \mid \boldsymbol{x}_i$.
The estimand of interest is the population mean $\mu_{\mathsf{y}} \ensuremath{: =}\mathbb{E}[y_i]$. Many methods for estimating the population mean rely on access to sufficiently accurate estimates of either the *outcome model* $\mu: {\mathbb R}^p \rightarrow {\mathbb R}$ or the *propensity model* $\pi: {\mathbb R}^p \rightarrow (0,1)$, or both, where $$\mu(\boldsymbol{x}) \ensuremath{: =}\mathbb{E}[y\mid \boldsymbol{x}], \quad \mbox{and} \quad \pi(\boldsymbol{x}) \ensuremath{: =}\mathbb{P}(\ensuremath{a}= 1 \mid \boldsymbol{x}).$$ These methods, and their accompanying statistical guarantees, can be roughly categorized as either *model-agnostic* or *model-aware*. Model-agnostic approaches assume that the functions $\mu$ and/or $\pi$ can be estimated consistently, and at sufficiently fast rates, but are agnostic to the particular structural assumptions under which such guarantees hold, or to the particular methods that achieve them. There are at least three standard approaches---outcome regression (a special case of the $G$-formula [@ROBINS19861393]), Inverse Probability Weighting (IPW) (also known as the Horvitz-Thompson estimator) [@Hahn1998; @hirano2003; @horvitzThompson1952], and Augmented Inverse Probability Weighting (AIPW) [@bangRobins2005]---each of which can be wrapped around any black-box method for estimating $\mu$ or $\pi$. 
Letting $\widehat{\mu}{}$ (respectively $\widehat{\pi}{}$) be some estimate of $\mu$ (respectively of $\pi$), these approaches compute the following quantities: [\[eq:classical-estimates\]]{#eq:classical-estimates label="eq:classical-estimates"} $$\begin{aligned} \mbox{$\mathsf{G}$-approach:} \quad & \widehat{\mu}{}_{\mathsf{y}}^{\mathsf{G}} \ensuremath{: =}\frac{1}{n} \sum_{i=1}^n \widehat{\mu}{}(\boldsymbol{x}_i) \\ % % \mbox{$\mathsf{IPW}$-approach:} \quad & \widehat{\mu}{}_{\mathsf{y}}^{\mathsf{IPW}} \ensuremath{: =}\frac{1}{n} \sum_{i=1}^n \frac{\ensuremath{a}_i y_i}{\widehat{\pi}{}(\boldsymbol{x}_i)}, \quad \mbox{and} \\ % \mbox{$\mathsf{AIPW}$-approach:} \quad & \quad \widehat{\mu}{}_{\mathsf{y}}^{\mathsf{AIPW}} \ensuremath{: =} \frac{1}{n} \sum_{i=1}^n \widehat{\mu}{}(\boldsymbol{x}_i) + \frac{1}{n} \sum_{i=1}^n \frac{\ensuremath{a}_i}{\widehat{\pi}{}(\boldsymbol{x}_i)} (y_i - \widehat{\mu}{}(\boldsymbol{x}_i)).\end{aligned}$$ These procedures can be combined with sample splitting strategies that reduce bias and allow for improved statistical guarantees [@chernozhukov2018; @newey2018crossfitting]. All three enjoy consistency or $\sqrt{n}$-consistency guarantees under appropriate consistency conditions on $\widehat{\mu}{}$ or $\widehat{\pi}{}$. A notable property of the AIPW estimator, which involves estimates of both nuisance parameters, is that it achieves $\sqrt{n}$-consistency if $\mathbb{E}[(\widehat{\mu}{}(\boldsymbol{x})-\mu(\boldsymbol{x}))^2]^{1/2}\mathbb{E}[(\widehat{\pi}{}(\boldsymbol{x})-\pi(\boldsymbol{x}))^2]^{1/2} = o(n^{-1/2})$ and appropriate overlap conditions are satisfied [@bickelKlassenRitovWellner1998; @bangRobins2005; @chernozhukov2018; @kennedy2023semiparametric]. More generally, the error in estimating the nuisance parameters induces an error in the estimate of $\mu_\mathsf{y}$ that is, up to a constant, the product of the size of the errors of the two nuisance parameters individually. 
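For concreteness, the three displays above translate directly into code. The sketch below is our own illustration (plain Python; `mu_hat` and `pi_hat` stand for arbitrary fitted outcome and propensity functions) of the $G$, IPW, and AIPW estimates computed from lists of covariates, missingness indicators, and outcomes:

```python
def g_formula(x, mu_hat):
    """Outcome-regression (G-formula) estimate of the population mean."""
    return sum(mu_hat(xi) for xi in x) / len(x)

def ipw(x, a, y, pi_hat):
    """Inverse probability weighting (Horvitz-Thompson) estimate."""
    return sum(ai * yi / pi_hat(xi) for xi, ai, yi in zip(x, a, y)) / len(x)

def aipw(x, a, y, mu_hat, pi_hat):
    """Augmented IPW: G-formula plus an inverse-weighted residual correction.
    Note that y_i enters only through a_i * (y_i - mu_hat(x_i)), so unobserved
    outcomes are never used."""
    n = len(x)
    return (sum(mu_hat(xi) for xi in x) / n
            + sum(ai / pi_hat(xi) * (yi - mu_hat(xi))
                  for xi, ai, yi in zip(x, a, y)) / n)

# Sanity check: with everything observed and pi_hat = 1, IPW and AIPW both
# reduce to the sample mean of y, whatever mu_hat is.
x = [[0.0], [1.0], [2.0]]; a = [1, 1, 1]; y = [1.0, 2.0, 3.0]
print(ipw(x, a, y, lambda xi: 1.0),
      aipw(x, a, y, lambda xi: 0.5, lambda xi: 1.0))  # both print 2.0
```

The AIPW correction term exactly cancels the dependence on `mu_hat` in this fully observed case, which is the double-robustness property in its simplest form.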
When the model is known to satisfy additional structural conditions, model-agnostic methods and their associated guarantees can become suboptimal. For example, when $\mu$ and $\pi$ are assumed to belong to Hölder smoothness classes, estimators that are tailored to particular function classes can, in some regimes, achieve errors smaller than those given by optimal model-agnostic guarantees (e.g., [@mukherjee2017semiparametric; @robins2008higher; @robinsLiMukherjeeTchetgen2017; @vanDerVaart2014]). Such modified methods and analyses can be termed "model-aware". Recent work has established that the improvements enjoyed by model-aware approaches relative to model-agnostic approaches are fundamental: Balakrishnan et al. [@Balakrishnan:2023aa] show that model-agnostic guarantees cannot be improved without additional structural assumptions. ## The inconsistency regime Both of these previously described lines of work involve methods that are based on consistent estimators of one or both of the nuisance functions. In contrast, the focus of this paper is a fundamentally different and more challenging setting---known as the *inconsistency regime*---in which neither of the nuisance functions $\mu$ nor $\pi$ can be estimated consistently. Focusing on the regime in which the sample size $n$ is less than the dimension $p$, we ask: is it possible (and if so, how) to obtain consistent estimates of the population mean when consistent estimation of the nuisance components is no longer possible? ### Investigation for GLM-based missing data models In order to bring focus to the previously posed question, we study a model-aware procedure for a particular instantiation of the missing data problem. Suppose that the outcome variable $y_i \in {\mathbb R}$ is related to the covariate vector $\boldsymbol{x}_i \in {\mathbb R}^p$ via a linear model, whereas the missingness indicators are related via a generalized linear model (GLM).
These assumptions are formalized via the equations $$\begin{aligned} \label{eq:model} y_i = \theta_{\mathsf{y},0} + \langle \boldsymbol{x}_i , \boldsymbol{\theta}_\mathsf{y}\rangle + \varepsilon_i, \qquad \mbox{and} \qquad \mathbb{P}(\ensuremath{a}_i = 1 \mid \boldsymbol{x}_i) = \pi(\theta_{\mathsf{a},0} + \langle \boldsymbol{x}_i , \boldsymbol{\theta}_\mathsf{a}\rangle),\end{aligned}$$ where $\varepsilon_i \sim \mathsf{N}(0,\sigma^2)$ and $\pi : {\mathbb R}\rightarrow (0,1)$ is a known link function satisfying certain regularity conditions (see Assumption A1 below). We also assume that the features $\boldsymbol{x}_i$ are uncentered jointly Gaussian $\boldsymbol{x}_i \sim \mathsf{N}(\boldsymbol{\mu}_\mathsf{x},\boldsymbol{\Sigma})$, where the feature covariance $\boldsymbol{\Sigma}$ is either known to the statistician or can be estimated consistently in operator norm. We study this model in an asymptotic regime in which model-agnostic guarantees fail dramatically: the sample size $n$ scales proportionally with the dimensionality of the covariates $p$, and the sparsity or other assumptions we make about $\boldsymbol{\theta}_\mathsf{y}$ and $\boldsymbol{\theta}_\mathsf{a}$ are fundamentally insufficient to guarantee consistent estimation of either the outcome model or the propensity model [@Barbier_2019]. We call such a regime an *inconsistency regime*. Despite the inconsistency of nuisance parameter estimates, we develop methods that are provably consistent for the population mean. When $n\mathbb{E}[\pi(\boldsymbol{x})] > (1+\epsilon) p$ for some $\epsilon > 0$, previous work [@yadlowskyYun2021; @sur2022] has established that both outcome regression and AIPW are $\sqrt{n}$-consistent for the population mean, provided that the outcome model is fit with ordinary least squares. 
On the other hand, in the complementary regime $n \mathbb{E}[\pi(\boldsymbol{x})] < p$, ordinary least squares is infeasible, and, as discussed in the sequel, neither the outcome regression estimate nor the AIPW estimate is consistent. In this challenging regime, we develop alternative estimators that are provably consistent for the population mean, and we provide numerical evidence that these estimators are in fact $\sqrt{n}$-consistent. Like the AIPW estimator, the estimators we propose involve estimates of both the outcome and propensity models, but we combine these estimates in a non-standard way. Unlike the previous works [@yadlowskyYun2021; @sur2022], our estimate does not require sample splitting or cross fitting. Some previous work has studied conditions under which sample splitting is not required, but it is restricted to consistency regimes [@chenSyrgkanisAustern2022]. ### Classical models fail in the inconsistency regime We begin by presenting the results of a simulation study that recapitulates three known phenomena that motivate the present work.
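The simulation study draws data from the model [\[eq:model\]](#eq:model){reference-type="eqref" reference="eq:model"}. A minimal generator sketch on our part, hard-coding the illustrative choices used below (zero intercepts, $\boldsymbol{\Sigma}= {\mathbf I}_p$, and link $\pi(\eta) = \tfrac{1}{10} + \tfrac{9}{10}\operatorname{logit}^{-1}(\eta)$); the helper name `simulate` is ours:

```python
import math, random

random.seed(0)

def logit_inv(eta):
    return 1.0 / (1.0 + math.exp(-eta))

def simulate(n, p, theta_y, theta_a, sigma=1.0):
    """Draw (y_i a_i, a_i, x_i) from the linear outcome / GLM missingness
    model with x_i ~ N(0, I_p), zero intercepts, and the link
    pi(eta) = 1/10 + 9/10 * logit^{-1}(eta)."""
    data = []
    for _ in range(n):
        x = [random.gauss(0.0, 1.0) for _ in range(p)]
        eta_y = sum(t * xi for t, xi in zip(theta_y, x))
        eta_a = sum(t * xi for t, xi in zip(theta_a, x))
        y = eta_y + random.gauss(0.0, sigma)
        ai = 1 if random.random() < 0.1 + 0.9 * logit_inv(eta_a) else 0
        data.append((y * ai, ai, x))   # y itself is observed only when a_i = 1
    return data

data = simulate(500, 5, [1.0, 0, 0, 0, 0], [1.0, 0, 0, 0, 0])
print(sum(ai for _, ai, _ in data))    # number of observed outcomes
```

Setting `theta_y == theta_a` as above reproduces the confounding used in the experiments: the mean of $y$ among observed units is biased away from $\mu_\mathsf{y} = 0$.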
![Comparison of G-computation, AIPW, and IPW. We take $\theta_{\mathsf{a},0} = \theta_{\mathsf{y},0} = 0$, $\boldsymbol{\theta}_\mathsf{a}= \boldsymbol{\theta}_\mathsf{y}= \boldsymbol{e}_1$, $\boldsymbol{\Sigma}= {\mathbf I}_p$, $\epsilon_i \mathrel{\stackrel{\mathrm{iid}}{\sim}} \mathsf{N}(0,1)$, $\mu_{\mathsf{x}} = {\boldsymbol 0}$, and link $\pi(\eta) = \tfrac{1}{10} + \tfrac{9}{10} \operatorname{logit}^{-1}(\eta)$. The outcome model is fit by ordinary least squares and the propensity model by logistic regression. In this case, the population mean outcome is $\mu_\mathsf{y}\ensuremath{: =}\mathbb{E}[y] = 0$. For each value of $n$, $p$ is taken as the closest integer to $0.07n$. The mean (left panel) and variance (right panel) of these estimates are computed across 1000 replicates, with 95% confidence intervals shown. See text for a detailed description of the cross-fitting strategies.](fig/well_specified_OLS_outcome_comparison_MEAN.pdf){#fig:variance-well-specified-outcome width="0.45\\linewidth"} ![](fig/well_specified_OLS_outcome_comparison_VAR.pdf){width="0.45\\linewidth"} ![](fig/well_specified_OLS_outcome_comparison_LEGEND.pdf){width="0.8\\linewidth"}\ First, as described in the papers [@yadlowsky2022; @sur2022], when the outcome model is fit with ordinary least squares, both G-computation and AIPW are $\sqrt{n}$-consistent under proportional asymptotics. Figure [1](#fig:variance-well-specified-outcome){reference-type="ref" reference="fig:variance-well-specified-outcome"} displays the results of a simulation study in which $$\begin{aligned} \theta_{\mathsf{a},0} = \theta_{\mathsf{y},0} = 0, \quad \boldsymbol{\theta}_\mathsf{a}= \boldsymbol{\theta}_\mathsf{y}= \boldsymbol{e}_1, \quad \boldsymbol{\Sigma}= {\mathbf I}_p, \quad \epsilon_i \mathrel{\stackrel{\mathrm{iid}}{\sim}} \mathsf{N}(0,1), \quad \mu_{\mathsf{x}} = {\boldsymbol 0}, \quad \mbox{and} \quad \pi(\eta) = \tfrac{1}{10} + \tfrac{9}{10} \operatorname{logit}^{-1}(\eta),\end{aligned}$$ whence the population mean outcome is $\mu_\mathsf{y}\ensuremath{: =}\mathbb{E}[y] = 0$. Because $\boldsymbol{\theta}_\mathsf{a}= \boldsymbol{\theta}_\mathsf{y}$, the observed outcomes are confounded, and the mean of $y$ conditional on its being observed is biased. For 10 values of the sample size $n$ spanning from $n = 100$ to $n = 1000$, we simulate from the model [\[eq:model\]](#eq:model){reference-type="eqref" reference="eq:model"} in which $p = 0.07 n$ (or the closest integer), $\widehat{\theta}{}_{\mathsf{a},0},\widehat{\boldsymbol{\theta}}{}_\mathsf{a}$ are fit using logistic regression, and $\widehat{\theta}{}_{\mathsf{y},0},\widehat{\boldsymbol{\theta}}{}_\mathsf{y}$ are fit using ordinary least squares.
We then estimate $\mu_\mathsf{y}$ using either outcome regression ($G$-computation), AIPW, or IPW, as in [\[eq:classical-estimates\]](#eq:classical-estimates){reference-type="ref" reference="eq:classical-estimates"}, where $$\begin{aligned} \widehat{\mu}{}(\boldsymbol{x}_i) = \widehat{\theta}{}_{\mathsf{y},0} + \langle \boldsymbol{x}_i , \widehat{\boldsymbol{\theta}}{}_\mathsf{y}\rangle, \quad \mbox{and} \quad \widehat{\pi}{}(\boldsymbol{x}_i) = \tfrac{1}{10} + \tfrac{9}{10} \operatorname{logit}^{-1}(\widehat{\theta}{}_{\mathsf{a},0} + \langle \boldsymbol{x}_i , \widehat{\boldsymbol{\theta}}{}_\mathsf{a}\rangle).\end{aligned}$$ We employ three cross-fitting strategies. A $1$-fold cross-fit fits the functions $\widehat{\mu}{}$ and $\widehat{\pi}{}$ and computes the averages in display [\[eq:classical-estimates\]](#eq:classical-estimates){reference-type="eqref" reference="eq:classical-estimates"} using all the data. A 2-fold cross-fit splits the data into disjoint sets $\mathcal{I}_1$ and $\mathcal{I}_2$ each of size $n/2$, fits the functions $\widehat{\mu}{}$ and $\widehat{\pi}{}$ on $\mathcal{I}_1$, and computes the averages in display [\[eq:classical-estimates\]](#eq:classical-estimates){reference-type="eqref" reference="eq:classical-estimates"} on $\mathcal{I}_2$. It then does the same with the roles of $\mathcal{I}_1$ and $\mathcal{I}_2$ reversed and finally takes the average of the two estimates of $\mu_\mathsf{y}$. A 3-fold cross-fit splits the data into disjoint sets $\mathcal{I}_1$, $\mathcal{I}_2$, $\mathcal{I}_3$ of size $n/3$, fits $\widehat{\mu}{}$ on $\mathcal{I}_1$, $\widehat{\pi}{}$ on $\mathcal{I}_2$, and computes the averages in display [\[eq:classical-estimates\]](#eq:classical-estimates){reference-type="eqref" reference="eq:classical-estimates"} on $\mathcal{I}_3$.
It then does the same for all 6 permutations of the roles $\mathcal{I}_1$, $\mathcal{I}_2$, and $\mathcal{I}_3$ and finally takes the average of the estimates of $\mu_\mathsf{y}$ across all permutations. The 3-fold cross-fitting strategy was considered in the paper [@sur2022]. Taking $p = 0.07 \, n$ guarantees that the least-squares estimate and logistic regression estimate are well-defined on all folds with high probability [@candesSur2020; @wenpinYe2020]. The left panel of  shows the average value of each estimate computed over 1000 replicates at several sample sizes, and the right panel displays the empirical variance of these estimates. In both cases, we plot error bars with half-width $1.96$ times the standard error of the estimated mean and variance across the $1000$ replicates. Recalling that the true value of the population mean is $\mu_\mathsf{y}= 0$, one can see that the IPW estimate has a noticeable bias that does not decay as $n \rightarrow \infty$. On the other hand, one can see that both the $G$-computation and AIPW estimates have no significant bias, and their variance decays as $1/\ensuremath{n}$. The goal of this paper is to develop an estimator with these same properties when $\ensuremath{n}\mathbb{E}[\pi(\boldsymbol{x})] < p$. Second, the success of $G$-computation and AIPW in the inconsistency regime relies crucially on the use of ordinary least squares to fit the outcome model. In , we simulate from the same model that underlies the results shown in , except that we use ridge regression to fit the outcome model parameter $\boldsymbol{\theta}_\mathsf{y}$. We see that 2-fold and 3-fold AIPW estimators have substantially smaller bias than 1-fold and 2-fold $G$-computation estimators. Nevertheless, all estimates are biased with a bias that does not decay as $n \rightarrow \infty$. Thus, the main challenge in the regime $n\mathbb{E}[\pi(\boldsymbol{x})] < p$ is that ordinary least squares is not feasible.
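The cross-fitting logic described above is independent of how the nuisance models are fit, and can be sketched as follows; `fit_mu` and `fit_pi` stand for arbitrary fitting routines that return prediction functions (the names are ours), and the AIPW average uses the standard score $\widehat{\mu}(\boldsymbol{x}) + a\,(y - \widehat{\mu}(\boldsymbol{x}))/\widehat{\pi}(\boldsymbol{x})$:

```python
import numpy as np

def aipw(mu_hat, pi_hat, x, a, y):
    """Average of the AIPW score mu(x) + a * (y - mu(x)) / pi(x)."""
    m = mu_hat(x)
    return np.mean(m + a * (y - m) / pi_hat(x))

def crossfit_2fold(fit_mu, fit_pi, x, a, y, rng):
    """The 2-fold cross-fit described above: fit the nuisance functions on one
    half of the data, average the score on the other half, swap the roles of
    the halves, then average the two resulting estimates."""
    folds = np.array_split(rng.permutation(len(y)), 2)
    estimates = []
    for train, evaluate in [(folds[0], folds[1]), (folds[1], folds[0])]:
        mu_hat = fit_mu(x[train], a[train], y[train])
        pi_hat = fit_pi(x[train], a[train])
        estimates.append(aipw(mu_hat, pi_hat, x[evaluate], a[evaluate], y[evaluate]))
    return np.mean(estimates)
```

The 1-fold and 3-fold variants differ only in how the index sets for fitting and averaging are chosen.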
![Does AIPW require the outcome to be fit using Ordinary Least Squares? Comparison of $G$-computation and AIPW with a well-specified linear outcome model. We simulate from exactly the same setting as in . The outcome model is fit by ridge regression with regularization parameter chosen by cross-validation using the default parameters in `cv.glmnet` in `glmnet version 4.1.7`, `R version 4.2.2`, and the propensity model fit by logistic regression (without penalty).
The true value of the population mean is $\mu_\mathsf{y}\ensuremath{: =}\mathbb{E}[y] = 0$.](fig/well_specified_ridge_outcome_comparison_MEAN.pdf){#fig:variance-regularization width="0.45\\linewidth"} ![Does AIPW require the outcome to be fit using Ordinary Least Squares? Comparison of $G$-computation and AIPW with a well-specified linear outcome model. We simulate from exactly the same setting as in . The outcome model is fit by ridge regression with regularization parameter chosen by cross-validation using the default parameters in `cv.glmnet` in `glmnet version 4.1.7`, `R version 4.2.2`, and the propensity model fit by logistic regression (without penalty). The true value of the population mean is $\mu_\mathsf{y}\ensuremath{: =}\mathbb{E}[y] = 0$.](fig/well_specified_ridge_outcome_comparison_VAR.pdf){#fig:variance-regularization width="0.45\\linewidth"}
![Does AIPW require the outcome to be fit using Ordinary Least Squares? Comparison of $G$-computation and AIPW with a well-specified linear outcome model. We simulate from exactly the same setting as in . The outcome model is fit by ridge regression with regularization parameter chosen by cross-validation using the default parameters in `cv.glmnet` in `glmnet version 4.1.7`, `R version 4.2.2`, and the propensity model fit by logistic regression (without penalty). The true value of the population mean is $\mu_\mathsf{y}\ensuremath{: =}\mathbb{E}[y] = 0$.](fig/well_specified_ridge_outcome_comparison_LEGEND.pdf "fig:"){#fig:variance-regularization width="0.5\\linewidth"}\ Third, in the inconsistency regime, AIPW is less efficient than 1-fold $G$-computation and is not protected against misspecification of the outcome model. Indeed, in the right-hand plot of , we see that 1-fold $G$-computation has the smallest variance among all simulated estimators, which is consistent with the results in the paper [@yadlowsky2022]. In fact, even in the classical semi-parametric setting, AIPW is often less efficient than $G$-computation [@kangShafer2007; @robinsSuedLeiGomezRotnitzky2007]. Classically, AIPW may be preferred in order to protect against misspecification of the outcome model.
Unsurprisingly, because in the current setting the propensity model is not estimated consistently, this protection is not achieved. In  we simulate from the same setting as in , except that the outcome model includes a term quadratic in the features---viz. $$\begin{aligned} y_i = \langle \boldsymbol{x}_i , \boldsymbol{\theta}_\mathsf{y}\rangle + (\langle \boldsymbol{x}_i , \boldsymbol{\theta}_\mathsf{y}\rangle^2 - 1) + \varepsilon_i.\end{aligned}$$ Although both the 2-fold and 3-fold AIPW estimators have substantially smaller bias than those based on 1-fold and 2-fold outcome regression, their bias does not decay as $n \rightarrow \infty$, so that they are not actually consistent. Thus, when $n\mathbb{E}[\pi(\boldsymbol{x})] > (1+\epsilon)p$ in the inconsistency regime, it is not clear that propensity modeling plays any useful role: the IPW estimator is inconsistent (), and the AIPW estimator incurs inflated variance relative to outcome regression without protecting against outcome model misspecification. In contrast, the estimator that we develop for the regime $n\mathbb{E}[\pi(\boldsymbol{x})] < p$ relies crucially on estimating the propensity model; we are unaware of any consistent estimator that avoids propensity estimation altogether. ![Does AIPW protect against outcome misspecification? Comparison of $G$-computation and AIPW with a misspecified outcome model. We simulate from the same setting as in , except that we generate outcomes according to $y_i = \langle \boldsymbol{x}_i , \boldsymbol{\theta}_\mathsf{y}\rangle + (\langle \boldsymbol{x}_i , \boldsymbol{\theta}_\mathsf{y}\rangle^2 - 1) + \varepsilon_i$. The outcome model is fit by ordinary least squares and the propensity model by logistic regression. The true value of the population mean is $\mu_\mathsf{y} \ensuremath{: =}\mathbb{E}[y] = 0$.](fig/misspecified_OLS_outcome_comparison_MEAN.pdf "fig:"){#fig:outcome-misspecification width="0.45\\linewidth"} ![Does AIPW protect against outcome misspecification?
Comparison of $G$-computation and AIPW with a misspecified outcome model. We simulate from the same setting as in , except that we generate outcomes according to $y_i = \langle \boldsymbol{x}_i , \boldsymbol{\theta}_\mathsf{y}\rangle + (\langle \boldsymbol{x}_i , \boldsymbol{\theta}_\mathsf{y}\rangle^2 - 1) + \varepsilon_i$. The outcome model is fit by ordinary least squares and the propensity model by logistic regression. The true value of the population mean is $\mu_\mathsf{y} \ensuremath{: =}\mathbb{E}[y] = 0$.](fig/misspecified_OLS_outcome_comparison_VAR.pdf "fig:"){#fig:outcome-misspecification width="0.45\\linewidth"}\ ![Does AIPW protect against outcome misspecification? Comparison of $G$-computation and AIPW with a misspecified outcome model. We simulate from the same setting as in , except that we generate outcomes according to $y_i = \langle \boldsymbol{x}_i , \boldsymbol{\theta}_\mathsf{y}\rangle + (\langle \boldsymbol{x}_i , \boldsymbol{\theta}_\mathsf{y}\rangle^2 - 1) + \varepsilon_i$. The outcome model is fit by ordinary least squares and the propensity model by logistic regression. The true value of the population mean is $\mu_\mathsf{y} \ensuremath{: =}\mathbb{E}[y] = 0$.](fig/misspecified_OLS_outcome_comparison_LEGEND.pdf "fig:"){#fig:outcome-misspecification width="0.675\\linewidth"} ## Related literature The problem of estimating population means from data missing at random is a now classical problem in statistics, with the AIPW approach incorporating both outcome and propensity modeling going back to the 1990s (e.g., [@robinsRotnitzky1995; @robinsRotnitzkyZhou1994; @scharfsteinRotnitzkyRobins1999]). The double-robustness property of this and related estimators has been explored in various papers [@robinsRotnitzky2001; @laan2003unified; @bangRobins2005; @kangShafer2007; @robinsSuedLeiGomezRotnitzky2007].
There is also a long line of work that studies approaches based on outcome modeling and propensity modeling (e.g., [@ROBINS19861393; @snowdenRoseMortimer2011; @vansteelandtKeiding2011; @horvitzThompson1952; @rosenbaumRubin1983; @Hahn1998; @hirano2003; @su2023estimated]). Recently, there has been increased interest in the double-robustness property and greater emphasis on the ability to achieve estimation error guarantees with black-box methods (e.g., [@chernozhukov2018; @chernozhukov2021automatic; @chernozhukovChetverikovDemirerDufloHansenNewey2017]). Model-aware approaches in the context of Hölder smoothness classes have also been studied [@robins2008higher; @robinsLiMukherjeeTchetgen2017; @newey2018crossfitting]. Model-aware analysis of $G$-computation, AIPW, and IPW estimators in the context of random-design linear outcome models with generalized linear propensity models was carried out in the papers [@yadlowsky2022; @sur2022]. $G$-computation in the proportional regime with a linear outcome model and potential heteroscedasticity was studied in the papers [@cattaneoJanssonNewey2018; @cattaneo_jansson_newey_2018]. There is a literature on minimax lower bounds, including results of the model-aware variety [@robinsTchetgenLiVanderVaart2009; @mou2022offpolicy; @mou2023kernelbased] as well as the model-agnostic variety [@Balakrishnan:2023aa]. Barbier et al. [@Barbier_2019] established the impossibility of consistent nuisance parameter estimation under proportional asymptotics. Our methods build on a large literature developing exact asymptotic characterizations of convex procedures in generalized linear models; for instance, see the papers [@bayatiMontanari2012; @stojnic2013framework; @Donoho:2016aa; @Karoui2015OnTP; @pmlr-v40-Thrampoulidis15; @thrampoulidisAbbasiHassibi2018; @surCandes2019; @zhou2022; @zhaoSurCandes2022; @miolane2021; @celentanoMontanariWei2020; @montanari2023generalization], as well as references therein.
Specifically, our methods are based on debiasing constructions that were first introduced in the papers [@zhang2014confidence; @buhlmann2013statistical; @javanmard2014confidence], and more recently developed under proportional asymptotics in the papers [@javanmard2014hypothesis; @javanmard2018debiasing; @miolane2021; @celentanoMontanariWei2020; @bellec2019biasing; @bellec2019debiasingIntervals]. Our development builds upon our own past work in Celentano and Montanari [@celentano2021cad], where a subset of the current authors studied estimation of conditional covariances for random-design linear regression models, and introduced "correlation adjustment" for accounting for joint errors in estimating nuisance parameters. The methods in this paper require similarly motivated but technically distinct forms of correlation adjustment. Our proof is based on the Convex Gaussian Min-Max theorem, developed by Stojnic [@stojnic2013framework] as a sharpening of Gordon's Gaussian comparison inequality [@gordon1985; @gordon1988] for convex-concave objectives. ## Outline of paper Let us provide an outline of the remainder of this paper. In , we provide the construction of our proposed estimator for the outcome model, the propensity model, and the population mean. is devoted to our two main results: establishes consistency of our proposed estimator for the population mean, whereas  establishes, in a certain sense, the asymptotic normality and unbiasedness of certain estimates we construct for the linear-model parameter coefficients $\theta_{\mathsf{y},j}$. Our estimator requires estimating four adjustment factors, and our consistency and asymptotic normality guarantees hold under a general consistency guarantee on these adjustment factors. In , we provide one such construction of these adjustment factors, and show that they satisfy the necessary consistency guarantee (). In , we provide simulations that demonstrate the success of our estimator.
In , we provide a high-level overview of our proof techniques. Our proofs rely on an exact asymptotic characterization for missing data models, which we also provide there. This section is intended only to provide a preview of our proof techniques. Technical details are deferred to the appendices. We end in  with a concluding discussion. ## Notation For a square matrix $\boldsymbol{A}\in {\mathbb R}^{K \times K}$, we denote by $\boldsymbol{A}_k$ the $k\times k$ upper-left sub-matrix of $\boldsymbol{A}$, by $\boldsymbol{A}_{\,\cdot\,,k}$ the $k^\text{th}$ column of $\boldsymbol{A}$, and by $\boldsymbol{A}_{k,\,\cdot\,}$ the $k^\text{th}$ row of $\boldsymbol{A}$. For two vectors $\boldsymbol{a},\boldsymbol{b}\in {\mathbb R}^N$, we denote the inner product between $\boldsymbol{a}$ and $\boldsymbol{b}$ by $\langle \boldsymbol{a}, \boldsymbol{b}\rangle$ and the Euclidean norm by $\|\boldsymbol{a}\|$. For two matrices $\boldsymbol{A},\boldsymbol{B}\in {\mathbb R}^{N \times k}$, we denote $\langle\!\langle \boldsymbol{A}, \boldsymbol{B}\rangle\!\rangle= \boldsymbol{A}^\top \boldsymbol{B}\in {\mathbb R}^{k \times k}$. Note that $\langle\!\langle\boldsymbol{A}, \boldsymbol{B}\rangle\!\rangle_{\ell\ell'} = \langle \boldsymbol{a}_\ell , \boldsymbol{b}_{\ell'}\rangle$, where $\boldsymbol{a}_\ell$ is the $\ell^\text{th}$ column of $\boldsymbol{A}$, and $\boldsymbol{b}_\ell$ is the $\ell^\text{th}$ column of $\boldsymbol{B}$. If $\boldsymbol{a},\boldsymbol{b}\in {\mathbb R}^N$ are random vectors with finite second moments, we denote $\langle \boldsymbol{a}, \boldsymbol{b}\rangle_{L_2} = \mathbb{E}[\langle \boldsymbol{a}, \boldsymbol{b}\rangle]$ and $\| \boldsymbol{a} \|_{L_2}^2 = \mathbb{E}[\| \boldsymbol{a}\|^2]$. 
Likewise, if $\boldsymbol{A},\boldsymbol{B}\in {\mathbb R}^{N \times k}$ are random matrices with finite second moments, we denote $\langle\!\langle\boldsymbol{A}, \boldsymbol{B}\rangle\!\rangle_{L_2} = \mathbb{E}[\langle\!\langle\boldsymbol{A}, \boldsymbol{B}\rangle\!\rangle]$. Thus, in these cases, $\langle \boldsymbol{a}, \boldsymbol{b}\rangle$ and $\langle\!\langle\boldsymbol{A}, \boldsymbol{B} \rangle\!\rangle$ are random quantities, and $\langle \boldsymbol{a}, \boldsymbol{b}\rangle_{L_2}$ and $\langle\!\langle\boldsymbol{A}, \boldsymbol{B}\rangle\!\rangle_{L_2}$ are deterministic quantities. The operator norm of a matrix is denoted $\|\boldsymbol{A}\|_{{\rm op}}$. For a positive-definite symmetric matrix $\boldsymbol{\Sigma}$, we denote the $\boldsymbol{\Sigma}$-inner product $\langle \boldsymbol{a}, \boldsymbol{b}\rangle_{\boldsymbol{\Sigma}} = \langle \boldsymbol{a}, \boldsymbol{\Sigma} \boldsymbol{b}\rangle$ and the $\boldsymbol{\Sigma}$-norm $\| \boldsymbol{a}\|_{\boldsymbol{\Sigma}} = \sqrt{\langle \boldsymbol{a}, \boldsymbol{a}\rangle_{\boldsymbol{\Sigma}}}$. For a random vector with iid coordinates, we sometimes denote a scalar random variable distributed from the distribution of each coordinate by a non-boldface letter without a subscript. Thus, $\ensuremath{a}$ denotes a random variable distributed according to $\ensuremath{a}_i$, and $y$ a random variable distributed according to $y_i$. We also denote a random variable distributed according to $\boldsymbol{x}_i$ by $\boldsymbol{x}$. Constants $C,c,C',c' > 0$ or with additional subscripts or superscripts are reserved to refer to positive constants that depend only on the constants appearing in Assumption A1 below. They do not depend on $n,p$ and may change at each appearance unless otherwise specified. We use $A \lesssim B$ to mean $A \leq C \, B$ for some constant $C$ depending on Assumption A1, and define "$\gtrsim$" similarly.
# Debiasing in the inconsistency regime {#SecProblem} In this section, we describe the debiasing estimators that we analyze in this paper. Recall that the observed data set consists of an i.i.d. collection of triples $\{(\ensuremath{a}_i, \ensuremath{a}_i y_i, \boldsymbol{x}_i) \}_{i=1}^n$ obeying [\[eq:model\]](#eq:model){reference-type="ref" reference="eq:model"}, where $\varepsilon_i \sim \mathsf{N}(0,\sigma^2)$, $\pi : {\mathbb R}\rightarrow (0, 1)$ is a known link function, and $\boldsymbol{x}_i \mathrel{\stackrel{\mathrm{iid}}{\sim}}\mathsf{N}(\boldsymbol{\mu}_{\mathsf{x}}, \boldsymbol{\Sigma})$ for some unknown mean vector $\boldsymbol{\mu}_{\mathsf{x}} \in {\mathbb R}^p$ and known covariance matrix $\boldsymbol{\Sigma}\in {\mathbb R}^{p\times p}$. Our goal is to estimate the outcome mean $\mu_\mathsf{y}= \mathbb{E}[y]$. Given the assumed structure of our model, the tower property guarantees the representation $$\begin{aligned} \label{EqnPopRep} \mu_\mathsf{y}= \mathbb{E}\Big[ \mathbb{E}[y \mid \boldsymbol{x}] \Big] \; = \; \mathbb{E}\Big[ \theta_{\mathsf{y},0} + \langle\boldsymbol{x}, \boldsymbol{\theta}_\mathsf{y}\rangle \Big],\end{aligned}$$ which we exploit in developing our estimators. ## High-level overview We begin with a high-level description of the estimators analyzed in this paper, and the ingredients in their construction. ### Three-stage approach A standard approach to estimating $\mu_\mathsf{y}$---known either as outcome regression or as a special case of $G$-computation---is to replace the population coefficients $(\theta_{\mathsf{y},0}, \boldsymbol{\theta}_\mathsf{y}) \in {\mathbb R} \times {\mathbb R}^p$ in [\[EqnPopRep\]](#EqnPopRep){reference-type="ref" reference="EqnPopRep"} with empirical estimates $(\widehat{\theta}{}_{\mathsf{y}, 0}, \widehat{\boldsymbol{\theta}}{}_\mathsf{y})$, and to replace the population expectation over $\boldsymbol{x}$ with an empirical expectation.
In our work, due to the challenges of the inconsistency regime, an additional step is required: we need to construct *suitably debiased* versions $(\widehat{\theta}{}_{\mathsf{y},0}^\textup{d}, \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d})$ of the estimates, and use them in the plug-in estimator $$\begin{aligned} \label{eq:db-G-computation} \widehat{\mu}{}_\mathsf{y}^\textup{d}\ensuremath{: =}\frac{1}{n} \sum_{i=1}^n \big( \widehat{\theta}{}_{\mathsf{y},0}^\textup{d}+ \langle\boldsymbol{x}_i , \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d}\rangle \big).\end{aligned}$$ All of the methods analyzed here involve the following three steps: - In all cases, we begin with *base estimates* $(\widehat{\theta}{}_{\mathsf{y}, 0}, \widehat{\boldsymbol{\theta}}{}_\mathsf{y})$ obtained from penalized regression methods, as described in . - Let $\ensuremath{\widehat{\mathsf{bias}}}_{\mathsf{y}, 0}$ and $\ensuremath{\widehat{\ensuremath{\mathbf{\mathsf{bias}}}}}_\mathsf{y}$ be estimates of the biases of $\widehat{\theta}{}_{\mathsf{y},0}$ and $\widehat{\boldsymbol{\theta}}{}_\mathsf{y}$, respectively. Using these bias estimates, we form the *debiased coefficient estimates* $$\begin{aligned} \label{eq:outcome-bias-correction} \widehat{\theta}{}_{\mathsf{y},0}^\textup{d}\ensuremath{: =}\widehat{\theta}{}_{\mathsf{y},0} - \ensuremath{\widehat{\mathsf{bias}}}_{\mathsf{y}, 0}, \qquad \mbox{and} \qquad \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d}\ensuremath{: =}\widehat{\boldsymbol{\theta}}{}_\mathsf{y}- \ensuremath{\widehat{\ensuremath{\mathbf{\mathsf{bias}}}}}_\mathsf{y}.\end{aligned}$$ Among a number of other ingredients, our bias estimates are based upon a degrees-of-freedom adjustment, as described in . - Finally, using these debiased estimates, we compute the *plug-in estimator* [\[eq:db-G-computation\]](#eq:db-G-computation){reference-type="eqref" reference="eq:db-G-computation"}.
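The last two steps are purely mechanical once the base estimates and bias estimates are in hand; a minimal sketch (function and variable names are ours):

```python
import numpy as np

def debiased_g_computation(theta0_hat, theta_hat, bias0_hat, bias_hat, X):
    """Subtract the bias estimates from the base coefficient estimates
    (eq:outcome-bias-correction), then average the debiased linear
    predictor over the covariate vectors (eq:db-G-computation)."""
    theta0_d = theta0_hat - bias0_hat   # debiased intercept
    theta_d = theta_hat - bias_hat      # debiased slope vector
    return np.mean(theta0_d + X @ theta_d)
```

The substance of the method lies entirely in constructing the bias estimates fed into this routine.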
To be clear, our methods do *not* require sample splitting: the same data is used to compute the initial estimates, perform debiasing, and in the empirical expectation in the final estimate [\[eq:db-G-computation\]](#eq:db-G-computation){reference-type="eqref" reference="eq:db-G-computation"}. The main difficulty associated with these three-step methods lies in devising suitable estimates of the bias, and in analyzing them. In the consistency regime, the biases of $\widehat{\theta}{}_{\mathsf{y},0}$ and $\widehat{\boldsymbol{\theta}}{}_\mathsf{y}$ are sufficiently small that debiasing is not required for consistency of $G$-computation. As we discuss in , a naïve application of standard bias estimates---even those designed for the inconsistency regime when data is not missing---produces inconsistent estimates of $\mu_\mathsf{y}$ when used as plug-ins. Accordingly, in this paper, we propose and analyze a novel method for estimating the bias, as described in . This method does not assume knowledge of the propensity score. This estimate uses the base estimates described in  and the degrees-of-freedom adjustment described in . ### Regularized estimators {#SecBaseEstimates} Both the outcome and propensity models are generalized linear models, and we denote the linear predictors in these models by $$\eta_{\mathsf{y},i} \ensuremath{: =}\theta_{\mathsf{y},0} + \langle \boldsymbol{x}_i , \boldsymbol{\theta}_\mathsf{y}\rangle, \qquad \mbox{and} \qquad \eta_{\mathsf{a},i} \ensuremath{: =}\theta_{\mathsf{a},0} + \langle \boldsymbol{x}_i , \boldsymbol{\theta}_\mathsf{a}\rangle.$$ Similarly, we use $\eta_\mathsf{y}\equiv \eta_\mathsf{y}(\boldsymbol{x})$ and $\eta_\mathsf{a} \equiv \eta_\mathsf{a}(\boldsymbol{x})$ to denote the generic form of these linear predictors with a generic covariate vector $\boldsymbol{x}$.
Note that the population means of these quantities are given by $\mu_\mathsf{y}= \mathbb{E}[\eta_\mathsf{y}(\boldsymbol{x})]$ and $\mu_\mathsf{a}\ensuremath{: =}\mathbb{E}[\eta_\mathsf{a}(\boldsymbol{x})]$. #### Estimating the outcome model: Letting $w: {\mathbb R}\rightarrow {\mathbb R}_{>0}$ be a weight function, define (with a slight abuse of notation) the weights $w_i = w(\theta_{\mathsf{a},0} + \langle \boldsymbol{x}_i, \boldsymbol{\theta}_\mathsf{a}\rangle)$ for $i \in [n]$. Using these weights, we estimate the pair $(\theta_{\mathsf{y},0},\boldsymbol{\theta}_\mathsf{y})$ using a penalized form of weighted least squares $$\label{eq:outcome-fit} (\widehat{\theta}{}_{\mathsf{y},0},\widehat{\boldsymbol{\theta}}{}_\mathsf{y}) \ensuremath{: =}\mathop{\mathrm{arg\,min}}_{(v_0,\boldsymbol{v}) \in {\mathbb R}\times {\mathbb R}^p} \Big\{ \frac{1}{2n} \sum_{i=1}^n \ensuremath{a}_i w_i \big(y_i - v_0 - \langle \boldsymbol{x}_i , \boldsymbol{v}\rangle\big)^2 + \Omega_\mathsf{y}(\boldsymbol{v}) \Big\},$$ where $\Omega_\mathsf{y}: {\mathbb R}^p \rightarrow {\mathbb R}$ is a convex penalty function. For example, taking $\Omega_\mathsf{y}(\boldsymbol{v}) = \lambda \| \boldsymbol{v} \|^2/2$ leads to a weighted form of ridge regression. We only establish results for a fixed regularization parameter, which we therefore absorb into the penalty $\Omega_\mathsf{y}$. We require $\Omega_\mathsf{y}$ to be strongly convex and smooth with constants of order one; see  below for more detail. In the case of ridge regression, this corresponds to taking an order one $\lambda$. In the inconsistency regime and under our choice of feature normalization, the regularization parameter that minimizes prediction error and is chosen by cross validation is of this order (see, for example, the papers [@karoui2013asymptotic; @miolane2021; @hastieMontanariRossetTibshirani2022]).
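For concreteness, the ridge special case of [\[eq:outcome-fit\]](#eq:outcome-fit){reference-type="eqref" reference="eq:outcome-fit"} admits a closed-form solution; the following is a sketch under the assumptions $\Omega_\mathsf{y}(\boldsymbol{v}) = \lambda \|\boldsymbol{v}\|^2/2$ and an unpenalized intercept (function names are ours):

```python
import numpy as np

def weighted_ridge_outcome(X, a, y, w, lam):
    """Solve the first-order conditions of (eq:outcome-fit) for the ridge
    penalty: (1/n) X~' diag(a*w) X~ beta + lam * D beta = (1/n) X~' (a*w*y),
    where X~ has a prepended intercept column and D zeroes the intercept."""
    n, p = X.shape
    Xt = np.column_stack([np.ones(n), X])   # prepend an intercept column
    W = a * w                               # only observed outcomes contribute
    A = (Xt * W[:, None]).T @ Xt / n
    A[1:, 1:] += lam * np.eye(p)            # Hessian of the ridge penalty
    b = Xt.T @ (W * y) / n
    sol = np.linalg.solve(A, b)
    return sol[0], sol[1:]                  # (theta0_hat, theta_hat)
```

With `lam = 0`, all outcomes observed, and unit weights, this reduces to ordinary least squares.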
Typically, we take $w \equiv 1$ (i.e., $w(\eta) = 1$ for all $\eta$), so that this is just an unweighted penalized least-squares estimate. However, we also consider oracle inverse probability weighting and adjusted modified inverse probability weighting schemes, which motivates writing the estimator in a more general form. (Note that we are ultimately interested in methods that have no prior knowledge of the propensity model, in which case we always take $w \equiv 1$. Other choices of the function $w$ are presented primarily for motivation and illustrative purposes.) #### Estimating the propensity model: We estimate the propensity model parameter using the penalized M-estimator $$\begin{aligned} \label{eq:propensity-fit} (\widehat{\theta}{}_{\mathsf{a},0},\widehat{\boldsymbol{\theta}}{}_{\mathsf{a}}) \ensuremath{: =}\mathop{\mathrm{arg\,min}}_{(v_0,\boldsymbol{v}) \in {\mathbb R}\times {\mathbb R}^p} \Big\{ \frac{1}{2 n} \sum_{i=1}^n \ell_{\mathsf{a}}\big(v_0 + \langle \boldsymbol{x}_i , \boldsymbol{v}\rangle ; \ensuremath{a}_i\big) + \Omega_\mathsf{a}(\boldsymbol{v}) \Big\},\end{aligned}$$ where $\ell_\mathsf{a}: {\mathbb R}\times \{0,1\} \rightarrow {\mathbb R}$ is a loss function that is convex in its first argument, and $\Omega_\mathsf{a}: {\mathbb R}^p \rightarrow {\mathbb R}$ is another convex penalty function. As above, the regularization parameter is absorbed into the penalty $\Omega_\mathsf{a}$, and, in the case of ridge regularization, corresponds to taking $\lambda = \Theta(1)$. The Gaussian-GLM structure also allows us to estimate the propensity model parameter by a moment method $$\widehat{\boldsymbol{\theta}}{}_\mathsf{a} = \frac{1}{n_1}\sum_{i=1}^n \ensuremath{a}_i \boldsymbol{x}_i - \frac{1}{n}\sum_{i=1}^n \boldsymbol{x}_i,$$ where $n_1 = \sum_{i=1}^n \ensuremath{a}_i$, with $\widehat{\theta}{}_{\mathsf{a},0}$ then determined by solving moment equations described in .
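The moment-method display above is a one-line computation; a sketch (the function name is ours, and recovery of the intercept $\widehat{\theta}{}_{\mathsf{a},0}$ from the moment equations is omitted):

```python
import numpy as np

def propensity_moment_estimate(X, a):
    """Moment-method estimate of theta_a displayed above: the feature
    average over observed units minus the overall feature average."""
    return X[a == 1].mean(axis=0) - X.mean(axis=0)
```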
We establish results for both the penalized M-estimator and moment-method estimator for the propensity model. ### Degrees of freedom adjustments {#sec:dof-adjustment} Our estimates of the bias terms $\mathsf{bias}_{\mathsf{y},0}$ and $\mathsf{bias}_\mathsf{y}$ rely on adjustments for the degrees-of-freedom of the estimators [\[eq:outcome-fit\]](#eq:outcome-fit){reference-type="eqref" reference="eq:outcome-fit"} and [\[eq:propensity-fit\]](#eq:propensity-fit){reference-type="eqref" reference="eq:propensity-fit"}. Recall the loss function $\ell_{\mathsf{y}}(\eta;\eta_{\mathsf{a}}, \ensuremath{a}, y) \ensuremath{: =}\ensuremath{a}w(\eta_\mathsf{a}) (y - \eta)^2/2$ that underlies the outcome estimator [\[eq:outcome-fit\]](#eq:outcome-fit){reference-type="eqref" reference="eq:outcome-fit"} and, similarly, the loss $\ell_\mathsf{a}(\eta; \ensuremath{a})$ for the propensity estimate [\[eq:propensity-fit\]](#eq:propensity-fit){reference-type="eqref" reference="eq:propensity-fit"}. The degrees-of-freedom adjustments depend on the real number solutions $\widehat{\zeta}{}_\mathsf{y}^\theta$ and $\widehat{\zeta}{}_\mathsf{y}^\eta$ to the estimating equations [\[eq:dof-emp-def\]]{#eq:dof-emp-def label="eq:dof-emp-def"} $$\label{eq:dof-adjust-emp} \begin{gathered} \widehat{\zeta}{}_\mathsf{y}^\theta = \frac{1}{n} \sum_{i=1}^n \frac{\ddot\ell_\mathsf{y}(\widehat{\eta}{}_{\mathsf{y},i};\eta_{\mathsf{a},i},\ensuremath{a}_i,y_i)}{\widehat{\zeta}{}_\mathsf{y}^\eta \ddot \ell_\mathsf{y}(\widehat{\eta}{}_{\mathsf{y},i};\eta_{\mathsf{a},i},\ensuremath{a}_i,y_i)+1}, \quad \mbox{and} \quad \widehat{\zeta}{}_\mathsf{y}^\eta = \frac{1}{n} \mathrm{Tr}\Big( \boldsymbol{\Sigma}\big(\widehat{\zeta}{}_\mathsf{y}^\theta \boldsymbol{\Sigma}+ \nabla^2 \Omega_\mathsf{y}(\widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big)^{-1} \Big), \end{gathered}$$ along with the real number solutions $\widehat{\zeta}{}_\mathsf{a}^\theta$ and $\widehat{\zeta}{}_\mathsf{a}^\eta$ to the estimating equations 
$$\begin{gathered} \label{eq:dof-adjust-action} \widehat{\zeta}{}_\mathsf{a}^\theta = \frac{1}{n} \sum_{i=1}^n \frac{\ddot\ell_\mathsf{a}(\widehat{\eta}{}_{\mathsf{a},i};\ensuremath{a}_i)}{\widehat{\zeta}{}_\mathsf{a}^\eta \ddot \ell_\mathsf{a}(\widehat{\eta}{}_{\mathsf{a},i};\ensuremath{a}_i)+1}, \quad \mbox{and} \quad \widehat{\zeta}{}_\mathsf{a}^\eta = \frac{1}{n} \mathrm{Tr}\Big( \boldsymbol{\Sigma} \big(\widehat{\zeta}{}_\mathsf{a}^\theta \boldsymbol{\Sigma}+ \nabla^2 \Omega_\mathsf{a}(\widehat{\boldsymbol{\theta}}{}_\mathsf{a})\big)^{-1} \Big). \end{gathered}$$ As shown in , these equations have unique positive solutions. The quantities $\widehat{\zeta}{}_\mathsf{y}^\theta$ and $\widehat{\zeta}{}_\mathsf{a}^\theta$ correspond to degrees-of-freedom adjustment factors that have appeared elsewhere in the debiasing literature [@javanmard2014hypothesis; @miolane2021; @celentanoMontanariWei2020; @bellec2019biasing; @bellec2019debiasingIntervals; @bellec2022]. For example, consider [\[eq:dof-adjust-emp\]](#eq:dof-adjust-emp){reference-type="ref" reference="eq:dof-adjust-emp"} in the case that $\ensuremath{a}_i = 1$ for all $i$ and we fit the model with unpenalized least squares ($n > p$) or the Lasso with least-squares loss. In these cases, $\ddot\ell_\mathsf{y}(\eta;\eta_{\mathsf{a},i},\ensuremath{a}_i,y_i) = 1$, whence the first equation in [\[eq:dof-adjust-emp\]](#eq:dof-adjust-emp){reference-type="ref" reference="eq:dof-adjust-emp"} states $\widehat{\zeta}{}_\mathsf{y}^\theta = 1 / (\widehat{\zeta}{}_\mathsf{y}^\eta + 1)$. For least-squares, the second equation in [\[eq:dof-adjust-emp\]](#eq:dof-adjust-emp){reference-type="ref" reference="eq:dof-adjust-emp"} reads $\widehat{\zeta}{}_\mathsf{y}^\eta = p/(n\widehat{\zeta}{}_\mathsf{y}^\theta)$. Solving for $\widehat{\zeta}{}_\mathsf{y}^\eta$ and $\widehat{\zeta}{}_\mathsf{y}^\theta$, we have $\widehat{\zeta}{}_\mathsf{y}^\eta = (p/n)/(1-p/n)$ and $\widehat{\zeta}{}_\mathsf{y}^\theta = 1-p/n$. 
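The closed-form solution just derived can be checked numerically; below is a sketch of the fixed-point iteration for [\[eq:dof-adjust-emp\]](#eq:dof-adjust-emp){reference-type="eqref" reference="eq:dof-adjust-emp"} in the special case discussed above (all outcomes observed, unpenalized least squares with $n > p$, so $\ddot\ell_\mathsf{y}\equiv 1$ and the penalty Hessian vanishes; the function name is ours):

```python
def dof_adjustment_least_squares(n, p, iters=200):
    """Iterate the two scalar equations of (eq:dof-adjust-emp) with
    ddot(ell) = 1 and Omega = 0; requires n > p."""
    zeta_theta = 1.0
    for _ in range(iters):
        zeta_eta = (p / n) / zeta_theta      # second equation, Sigma arbitrary
        zeta_theta = 1.0 / (zeta_eta + 1.0)  # first equation
    return zeta_theta, zeta_eta
```

The iteration converges to the closed-form solution $\widehat{\zeta}{}_\mathsf{y}^\theta = 1 - p/n$ and $\widehat{\zeta}{}_\mathsf{y}^\eta = (p/n)/(1 - p/n)$ derived in the text.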
In the case of the Lasso, we interpret $\nabla^2 \lambda \| \widehat{\boldsymbol{\theta}}{}_\mathsf{y}\|_1 = \text{\rm diag}(s_j)$, with $s_j = \infty$ if $\widehat{\theta}{}_{\mathsf{y},j} = 0$ and $0$ otherwise, and get $\widehat{\zeta}{}_\mathsf{y}^\eta = \| \widehat{\boldsymbol{\theta}}{}_\mathsf{y} \|_0/(n\widehat{\zeta}{}_\mathsf{y}^\theta)$. Solving gives $\widehat{\zeta}{}_\mathsf{y}^\eta = (\|\widehat{\boldsymbol{\theta}}{}_\mathsf{y}\|_0/n)/(1-\|\widehat{\boldsymbol{\theta}}{}_\mathsf{y}\|_0/n)$ and $\widehat{\zeta}{}_\mathsf{y}^\theta = 1-\|\widehat{\boldsymbol{\theta}}{}_\mathsf{y}\|_0/n$. In both these cases $\widehat{\zeta}{}_\mathsf{y}^\theta = 1 - {\widehat{\mathsf{df}}}_\mathsf{y}/n$, where ${\widehat{\mathsf{df}}}_\mathsf{y}= \mathrm{Tr}\Big(\frac{\textup{d}}{\textup{d}\boldsymbol{y}} \boldsymbol{X}\widehat{\boldsymbol{\theta}}{}_\mathsf{y}\Big)$ is the degrees-of-freedom in the Steinian sense [@stein1981]. Equation [\[eq:dof-adjust-emp\]](#eq:dof-adjust-emp){reference-type="eqref" reference="eq:dof-adjust-emp"} generalizes this definition to a wide class of loss functions and penalties and to linear models with missing outcomes and binary GLMs. To be clear, alternative generalizations are possible [@bellec2022]. We make no claims regarding the relative merits of the choice [\[eq:dof-emp-def\]](#eq:dof-emp-def){reference-type="eqref" reference="eq:dof-emp-def"} and these alternatives. We have adopted the definition [\[eq:dof-emp-def\]](#eq:dof-emp-def){reference-type="eqref" reference="eq:dof-emp-def"} because it is best suited to our proof techniques. ## Two classes of debiasing procedures {#sec:procedure-definitions} With this high-level overview in place, we are equipped to describe two classes of debiasing procedures, depending on whether or not the propensity score is known. To be clear, our primary contribution is to develop an estimator that does *not* require knowledge of the propensity function, which we describe in .
Nonetheless, in order to develop intuition, it is helpful to analyze an estimator in which the bias estimates are based on this knowledge, which we do in  to follow. Before presenting these debiasing procedures, it is worth emphasizing the difficulty of achieving consistency for $\mu_\mathsf{y}$ using plug-ins to the G-computation [\[eq:db-G-computation\]](#eq:db-G-computation){reference-type="eqref" reference="eq:db-G-computation"}. There are various natural estimators that we have found to fail. For example, we studied an unmodified version of the debiasing procedures presented in the papers [@javanmard2014hypothesis; @miolane2021; @celentanoMontanariWei2020; @bellec2019biasing; @bellec2019debiasingIntervals; @bellec2022] (which, by necessity, is computed using only the units with observed outcomes): [\[eq:db-naive\]]{#eq:db-naive label="eq:db-naive"} $$\begin{aligned} \ensuremath{\widehat{\mathsf{bias}}}_{\mathsf{y},0} & \ensuremath{: =}\boldsymbol{\mu}_{\mathsf{x}}^\top \frac{\boldsymbol{\Sigma}^{-1}\boldsymbol{X}^\top\big(\boldsymbol{a}\odot \boldsymbol{w}\odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0}\boldsymbol{1}- \boldsymbol{X} \widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big)}{n\widehat{\zeta}{}_\mathsf{y}^\theta}, \\ \ensuremath{\widehat{\ensuremath{\mathbf{\mathsf{bias}}}}}_{\mathsf{y}} & \ensuremath{: =}- \frac{\boldsymbol{\Sigma}^{-1}\boldsymbol{X}^\top\big (\boldsymbol{a} \odot \boldsymbol{w}\odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0}\boldsymbol{1}- \boldsymbol{X} \widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big)}{n\widehat{\zeta}{}_\mathsf{y}^\theta}. \end{aligned}$$ We considered these bias estimates under two weighting schemes in [\[eq:outcome-fit\]](#eq:outcome-fit){reference-type="ref" reference="eq:outcome-fit"}. The first takes weights $w_i = 1$ and the second takes weights $w_i = 1 / \pi(\theta_{\mathsf{a},0} + \langle \boldsymbol{x}_i,\boldsymbol{\theta}_\mathsf{a}\rangle)$.
In Appendix [21](#SecInconsistency){reference-type="ref" reference="SecInconsistency"}, we show that under both weighting schemes and certain additional conditions, these bias estimates lead to provably inconsistent estimates of $\mu_\mathsf{y}$. The former weighting scheme is our primary interest in this paper because we assume no knowledge of the propensity model. The latter is an oracle procedure inspired by M-estimation approaches that reweight the loss by the inverse propensity score (e.g., [@arkhangelsky2021doublerobust]). It fails despite using oracle knowledge of the propensity score in a natural way. We also considered using the base estimates [\[eq:outcome-fit\]](#eq:outcome-fit){reference-type="eqref" reference="eq:outcome-fit"} as plug-ins. In Appendix [21](#SecInconsistency){reference-type="ref" reference="SecInconsistency"}, we show that these lead to inconsistent estimates of $\mu_\mathsf{y}$ as well. In fact, the conditions under which we prove inconsistency are simple enough to be summarized here. It occurs as soon as the signal strength $\| \boldsymbol{\theta}_\mathsf{a}\|_2$ and its overlap with the outcome model parameter $\langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{\theta}_\mathsf{y}\rangle$ are positive and bounded away from zero. The signal strength lower bound states that the missingness indicator is not independent (or nearly independent) of the covariates. The overlap lower bound states that the missingness indicator is sufficiently confounded with the outcome. Of course, this type of confounding is exactly what motivates the analysis of the current paper. For simplicity, we only prove inconsistency when ridge regression is used and features are independent ($\boldsymbol{\Sigma}= {\mathbf I}_p$), although we expect it to occur more generally. Precise statements can be found in Appendix [21](#SecInconsistency){reference-type="ref" reference="SecInconsistency"}. 
### Oracle augmented shifted-confounder weighting {#SecKnownProp} The bias of the base estimates can be estimated by weighted sample averages as follows: $$\label{eq:db-with-offset-explicit-intro} \begin{aligned} \ensuremath{\widehat{\mathsf{bias}}}_{\mathsf{y},0} & \ensuremath{: =}\frac{1}{\ensuremath{n}} \sum_{i=1}^n \omega_{0,i}^\mathsf{orc}a_i (\widehat{\theta}{}_{\mathsf{y},0} + \langle \boldsymbol{x}_i , \widehat{\boldsymbol{\theta}}{}_\mathsf{y}\rangle - y_i), \\ \ensuremath{\widehat{\mathsf{bias}}}_{\mathsf{y}} & \ensuremath{: =}\frac{1}{\ensuremath{n}} \sum_{i=1}^n \boldsymbol{\omega}_i^\mathsf{orc}a_i (\widehat{\theta}{}_{\mathsf{y},0} + \langle \boldsymbol{x}_i , \widehat{\boldsymbol{\theta}}{}_\mathsf{y}\rangle - y_i), \end{aligned}$$ where $$\omega_{0,i}^\mathsf{orc} \ensuremath{: =} - \frac{\boldsymbol{\mu}_{\mathsf{x},\mathsf{cfd}}^\top \boldsymbol{\Sigma}_{\mathsf{cfd}}^{-1} \boldsymbol{x}_i }{\widehat{\zeta}{}_\mathsf{y}^\theta}, \qquad \boldsymbol{\omega}_i^\mathsf{orc} \ensuremath{: =} \frac{\boldsymbol{\Sigma}_{\mathsf{cfd}}^{-1} \boldsymbol{x}_i }{\widehat{\zeta}{}_\mathsf{y}^\theta},$$ and $\boldsymbol{\mu}_{\mathsf{x},\mathsf{cfd}} = \mathbb{E}[\boldsymbol{x}\mid \ensuremath{a}= 1]$ and $\boldsymbol{\Sigma}_{\mathsf{cfd}} = \mathrm{Var}(\boldsymbol{x}\mid \ensuremath{a}= 1)$ are the mean and variance of the features conditional on the outcome being observed, and $\widehat{\zeta}{}_\mathsf{y}^\theta$ is the previously described adjustment [\[eq:dof-adjust-emp\]](#eq:dof-adjust-emp){reference-type="eqref" reference="eq:dof-adjust-emp"} for the degrees-of-freedom.
In matrix form, these estimates take the form $$\label{eq:orc-ASCW-matrix-form} \begin{aligned} \ensuremath{\widehat{\mathsf{bias}}}_{\mathsf{y},0} & = \boldsymbol{\mu}_{\mathsf{x},\mathsf{cfd}}^\top \frac{\boldsymbol{\Sigma}_{\mathsf{cfd}}^{-1}\boldsymbol{X}^\top\big(\boldsymbol{a}\odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0}\boldsymbol{1}- \boldsymbol{X} \widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big)}{n\widehat{\zeta}{}_\mathsf{y}^\theta}, \\ \ensuremath{\widehat{\ensuremath{\mathbf{\mathsf{bias}}}}}_{\mathsf{y}} & = - \frac{\boldsymbol{\Sigma}_{\mathsf{cfd}}^{-1}\boldsymbol{X}^\top\big (\boldsymbol{a} \odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0}\boldsymbol{1}- \boldsymbol{X} \widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big)}{n\widehat{\zeta}{}_\mathsf{y}^\theta}. \end{aligned}$$ Even if the confounder distribution is known, the weights $\omega_{0,i}^\mathsf{orc}$, $\boldsymbol{\omega}_i^\mathsf{orc}$ depend on oracle knowledge of the propensity model. Indeed, an elementary computation shows that $$\label{eq:cfd-mean-variance} \begin{gathered} \boldsymbol{\mu}_{\mathsf{x},\mathsf{cfd}} = \boldsymbol{\mu}_{\mathsf{x}} + \alpha_1 \boldsymbol{\Sigma}^{1/2}\boldsymbol{\theta}_\mathsf{a}, \qquad \boldsymbol{\Sigma}_{\mathsf{cfd}} = \boldsymbol{\Sigma}+ (\alpha_2 - \alpha_1^2) \boldsymbol{\Sigma}^{1/2} \boldsymbol{\theta}_\mathsf{a} \boldsymbol{\theta}_\mathsf{a}^\top \boldsymbol{\Sigma}^{1/2}, \end{gathered}$$ where $$\label{eq:alpha-12} \alpha_1 \ensuremath{: =}\frac{\mathbb{E}[\pi'(\eta_\mathsf{a})]}{\mathbb{E}[\pi(\eta_\mathsf{a})]}, \qquad \alpha_2 \ensuremath{: =} \frac{\mathbb{E}[\pi''(\eta_\mathsf{a})]}{\mathbb{E}[\pi(\eta_\mathsf{a})]}.$$ Thus, the conditional first and second moments of the confounder distribution are shifted by the missingness mechanism. The weights $\omega_{0,i}^\mathsf{orc}$, $\boldsymbol{\omega}_i^\mathsf{orc}$ account for this confounder shift.
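The mean-shift identity above can be verified by reducing everything to one-dimensional Gaussian integrals. The sketch below does this for a logistic link, under the whitened-feature convention that $\eta_\mathsf{a}$ depends on $\boldsymbol{x}$ through $\boldsymbol{z}= \boldsymbol{\Sigma}^{-1/2}(\boldsymbol{x}-\boldsymbol{\mu}_\mathsf{x})$ — an assumption we adopt here to match the $\boldsymbol{\Sigma}^{1/2}$ factors in the display; all function and variable names are ours:

```python
import numpy as np

def gauss_expect(f, s, half_width=12.0, m=400001):
    # E[f(U)] for U ~ N(0, s^2) by a fine Riemann sum (tails are negligible).
    u = np.linspace(-half_width * s, half_width * s, m)
    dens = np.exp(-0.5 * (u / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    return np.sum(f(u) * dens) * (u[1] - u[0])

rng = np.random.default_rng(0)
p = 4
mu_x = rng.normal(size=p)
A = rng.normal(size=(p, p))
Sigma = A @ A.T + np.eye(p)
evals, evecs = np.linalg.eigh(Sigma)
S_half = evecs @ np.diag(np.sqrt(evals)) @ evecs.T   # symmetric Sigma^{1/2}
theta_a, theta_a0 = rng.normal(size=p), 0.3
s = np.linalg.norm(theta_a)                          # std. dev. of <z, theta_a>

pi = lambda u: 1.0 / (1.0 + np.exp(-(theta_a0 + u)))      # logistic link
E_pi = gauss_expect(pi, s)
E_dpi = gauss_expect(lambda u: pi(u) * (1.0 - pi(u)), s)  # pi' for logistic
alpha1 = E_dpi / E_pi

# E[x | a = 1] = mu_x + S_half E[z pi]/E[pi], and E[z pi] = theta_a E[U pi]/s^2
# with U = <z, theta_a> ~ N(0, s^2), since E[z | U] = theta_a U / s^2.
E_u_pi = gauss_expect(lambda u: u * pi(u), s)
mu_cfd = mu_x + S_half @ (theta_a * (E_u_pi / s**2) / E_pi)

# The displayed identity: mu_cfd = mu_x + alpha1 Sigma^{1/2} theta_a.
shift_formula = mu_x + alpha1 * (S_half @ theta_a)
```

The agreement of `mu_cfd` with `shift_formula` is exactly the Gaussian integration-by-parts (Stein) identity $\mathbb{E}[U\,g(U)] = s^2\,\mathbb{E}[g'(U)]$; the variance shift involving $\alpha_2$ follows from the second-order analogue.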
We refer to the correction of the base estimates with the bias estimates [\[eq:db-with-offset-explicit-intro\]](#eq:db-with-offset-explicit-intro){reference-type="ref" reference="eq:db-with-offset-explicit-intro"} as the *oracle augmented shifted-confounder weighting*, or more succinctly, *oracle ASCW*. The estimates [\[eq:db-with-offset-explicit-intro\]](#eq:db-with-offset-explicit-intro){reference-type="eqref" reference="eq:db-with-offset-explicit-intro"} are equivalent to the estimates [\[eq:db-naive\]](#eq:db-naive){reference-type="eqref" reference="eq:db-naive"} with $\boldsymbol{\mu}_{\mathsf{x},\mathsf{cfd}}$ and $\boldsymbol{\Sigma}_{\mathsf{cfd}}$ in place of $\boldsymbol{\mu}_{\mathsf{x}} = \mathbb{E}[\boldsymbol{x}]$ and $\boldsymbol{\Sigma}= \mathrm{Var}(\boldsymbol{x})$, respectively. Because the data used to fit $(\widehat{\theta}{}_{\mathsf{y},0},\widehat{\boldsymbol{\theta}}{}_\mathsf{y})$ only includes units with $\ensuremath{a}= 1$, it is natural to replace the unconditional mean and variance of the features with their conditional mean and variance. For this reason, equation [\[eq:db-with-offset-explicit-intro\]](#eq:db-with-offset-explicit-intro){reference-type="eqref" reference="eq:db-with-offset-explicit-intro"} rather than equation [\[eq:db-naive\]](#eq:db-naive){reference-type="eqref" reference="eq:db-naive"} is arguably the natural generalization of the debiasing methods in the papers [@javanmard2014hypothesis; @bellec2019debiasingIntervals; @celentanoMontanariWei2020]. ### Empirical shifted-confounder augmentation {#SecUnknownProp} When the confounder mean $\boldsymbol{\mu}_{\mathsf{x}}$ and propensity model parameters $\boldsymbol{\theta}_\mathsf{a}, \alpha_1, \alpha_2$ are not known, one might consider computing the shifted-confounder weights with plug-in estimates for $\boldsymbol{\theta}_\mathsf{a}, \alpha_1, \alpha_2$. We shall see below that it is possible to consistently estimate $\alpha_1$ and $\alpha_2$ in the present setting. 
Nevertheless, as we have already stated, consistent estimation of $\boldsymbol{\theta}_\mathsf{a}$ in the $\ell_2$-norm is impossible [@Barbier_2019]. Under proportional asymptotics with inconsistent estimates of $\boldsymbol{\theta}_\mathsf{y}$, all existing guarantees (of which we are aware) for the debiased estimate require that $\boldsymbol{\Sigma}$ be estimated consistently in operator norm [@bellec2019debiasingIntervals]; such operator-norm consistent estimation is possible only under very strong structural assumptions on $\boldsymbol{\Sigma}$ (e.g., see the papers [@bickelLevina2008; @karoui2008; @caiZhangZhou2010]). In the present setting, because $\ell_2$-norm consistent estimation of $\boldsymbol{\theta}_\mathsf{a}$ is not possible, operator norm consistent estimation of $\boldsymbol{\Sigma}_{\mathsf{cfd}}$ is not possible. Without knowledge of $\boldsymbol{\theta}_\mathsf{a}$, a new approach is required. Our approach is based on an expansion of the estimates [\[eq:orc-ASCW-matrix-form\]](#eq:orc-ASCW-matrix-form){reference-type="eqref" reference="eq:orc-ASCW-matrix-form"}. Using the Sherman-Morrison-Woodbury formula, we can write $\boldsymbol{\Sigma}_{\mathsf{cfd}}^{-1} = \boldsymbol{\Sigma}^{-1} - \mathsf{c}_{\boldsymbol{\Sigma}}\boldsymbol{\Sigma}^{-1/2}\boldsymbol{\theta}_\mathsf{a}\boldsymbol{\theta}_\mathsf{a}^\top \boldsymbol{\Sigma}^{-1/2}$, where $\mathsf{c}_{\boldsymbol{\Sigma}} \ensuremath{: =}(\alpha_2-\alpha_1^2)/(1 + (\alpha_2-\alpha_1^2)\|\boldsymbol{\theta}_\mathsf{a}\|_{\boldsymbol{\Sigma}}^2)$. 
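This rank-one inverse identity is straightforward to sanity-check numerically. A small sketch (our own, with $\boldsymbol{\Sigma}^{1/2}$ taken as the symmetric square root, in which case the denominator reduces to the squared Euclidean norm of $\boldsymbol{\theta}_\mathsf{a}$, and with a scalar `kappa` standing in for $\alpha_2 - \alpha_1^2$):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 6
A = rng.normal(size=(p, p))
Sigma = A @ A.T + np.eye(p)
evals, evecs = np.linalg.eigh(Sigma)
S_half = evecs @ np.diag(np.sqrt(evals)) @ evecs.T       # Sigma^{1/2}
S_neg_half = evecs @ np.diag(evals ** -0.5) @ evecs.T    # Sigma^{-1/2}
theta_a = rng.normal(size=p)
kappa = 0.7                                              # stands in for alpha2 - alpha1^2

# Sigma_cfd = Sigma + kappa * Sigma^{1/2} theta theta^T Sigma^{1/2}
u = S_half @ theta_a
Sigma_cfd = Sigma + kappa * np.outer(u, u)

# Sherman-Morrison: Sigma_cfd^{-1} = Sigma^{-1} - c * Sigma^{-1/2} theta theta^T Sigma^{-1/2}
c = kappa / (1.0 + kappa * (theta_a @ theta_a))
rank_one = np.linalg.inv(Sigma) - c * S_neg_half @ np.outer(theta_a, theta_a) @ S_neg_half
```

The key simplification is that $\boldsymbol{\Sigma}^{1/2}\boldsymbol{\Sigma}^{-1}\boldsymbol{\Sigma}^{1/2} = \mathbf{I}$, so the Sherman-Morrison denominator involves only $\boldsymbol{\theta}_\mathsf{a}^\top\boldsymbol{\theta}_\mathsf{a}$.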
With this notation, the bias estimates in [\[eq:db-with-offset-explicit-intro\]](#eq:db-with-offset-explicit-intro){reference-type="ref" reference="eq:db-with-offset-explicit-intro"} can be expanded as $$\label{eq:db-cfd-expanded} \begin{gathered} \ensuremath{\widehat{\mathsf{bias}}}_{\mathsf{y},0} = \mathsf{dbAdj}_{01} + \mathsf{dbAdj}_{02}, \qquad \ensuremath{\widehat{\mathsf{bias}}}_\mathsf{y}= -\frac{\boldsymbol{\Sigma}^{-1}\boldsymbol{X}^\top\big (\boldsymbol{a}\odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0}\boldsymbol{1}- \boldsymbol{X} \widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big)}{n\widehat{\zeta}{}_\mathsf{y}^\theta} + \mathsf{dbAdj}_1\, \boldsymbol{\Sigma}^{-1/2}\boldsymbol{\theta}_\mathsf{a}, \end{gathered}$$ where $$\label{eq:dbAdj} \begin{gathered} \mathsf{dbAdj}_{01} \ensuremath{: =}\frac{\big\langle \boldsymbol{\mu}_{\mathsf{x},\mathsf{cfd}} , \boldsymbol{\Sigma}^{-1}\boldsymbol{X}^\top\big(\boldsymbol{a}\odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0}\boldsymbol{1} - \boldsymbol{X}\widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big)\big\rangle}{n\widehat{\zeta}{}_\mathsf{y}^\theta}, \\ \mathsf{dbAdj}_{02} \ensuremath{: =}-\mathsf{c}_{\boldsymbol{\Sigma}} \frac{\big\langle\boldsymbol{\mu}_{\mathsf{x},\mathsf{cfd}},\boldsymbol{\Sigma}^{-1/2}\boldsymbol{\theta}_\mathsf{a}\big\rangle\big\langle\boldsymbol{\Sigma}^{-1/2}\boldsymbol{\theta}_\mathsf{a},\boldsymbol{X}^\top\big(\boldsymbol{a} \odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0}\boldsymbol{1}- \boldsymbol{X} \widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big)\big\rangle}{n\widehat{\zeta}{}_\mathsf{y}^\theta}, \\ \mathsf{dbAdj}_1 \ensuremath{: =}\mathsf{c}_{\boldsymbol{\Sigma}} \frac{\big\langle\boldsymbol{\Sigma}^{-1/2}\boldsymbol{\theta}_\mathsf{a},\boldsymbol{X}^\top\big(\boldsymbol{a}\odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0}\boldsymbol{1}- \boldsymbol{X} \widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big)\big\rangle}{n\widehat{\zeta}{}_\mathsf{y}^\theta}. 
\end{gathered}$$ Our main insight is that the debiasing procedure is successful provided that we use as plug-ins: *(i)* any consistent estimates of the scalar quantities $\mathsf{dbAdj}_{01}, \mathsf{dbAdj}_{02}, \mathsf{dbAdj}_1$, and *(ii)* an appropriately debiased (but not necessarily consistent) estimate of the high-dimensional parameter $\boldsymbol{\theta}_\mathsf{a}$. Explicitly, we use $$\label{eq:SCA-bias-hat} \begin{gathered} \ensuremath{\widehat{\mathsf{bias}}}_{\mathsf{y},0} \ensuremath{: =}\widehat{\mathsf{dbAdj}}_{01} + \widehat{\mathsf{dbAdj}}_{02}, \\ \ensuremath{\widehat{\mathsf{bias}}}_\mathsf{y}= -\frac{\boldsymbol{\Sigma}^{-1}\boldsymbol{X}^\top\big (\boldsymbol{a}\odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0}\boldsymbol{1}- \boldsymbol{X} \widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big)}{n\widehat{\zeta}{}_\mathsf{y}^\theta} + \widehat{\mathsf{dbAdj}}_1\, \boldsymbol{\Sigma}^{-1/2}\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}. \end{gathered}$$ We provide consistency guarantees for $\widehat{\mu}{}_\mathsf{y}$ and approximate normality guarantees for $\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d}$ for two constructions of the debiased estimate $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}$ and any consistent estimates $\widehat{\mathsf{dbAdj}}_{01}$, $\widehat{\mathsf{dbAdj}}_{02}$, $\widehat{\mathsf{dbAdj}}_1$ of $\mathsf{dbAdj}_{01}$, $\mathsf{dbAdj}_{02}$, $\mathsf{dbAdj}_1$. In , we give particular constructions of $\widehat{\mathsf{dbAdj}}_{01}$, $\widehat{\mathsf{dbAdj}}_{02}$, and $\widehat{\mathsf{dbAdj}}_1$ that are consistent, although any construction achieving consistency suffices. 
We consider the following two constructions of $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}$: $$\label{eq:prop-db-option} \begin{aligned} &\textbf{Moment method:} \qquad& \widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}&= \frac{1}{\widehat{\beta}{}} \boldsymbol{\Sigma}^{-1/2} \big( \widehat{\boldsymbol{\mu}}{}_{\mathsf{x},\mathsf{cfd}}-\widehat{\boldsymbol{\mu}}{}_{\mathsf{x}} \big), \quad \mbox{and} \\ & \textbf{M-estimation:} \qquad& \widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}&= \frac{1}{\widehat{\beta}{}} \Big( \widehat{\boldsymbol{\theta}}{}_\mathsf{a}+ \frac{\boldsymbol{\Sigma}^{-1}\boldsymbol{X}^\top \nabla_{\boldsymbol{\eta}} \ell_\mathsf{a}(\widehat{\theta}{}_{\mathsf{a},0}\boldsymbol{1}+ \boldsymbol{X} \widehat{\boldsymbol{\theta}}{}_\mathsf{a};\boldsymbol{a})}{n\widehat{\zeta}{}_\mathsf{a}^\theta} \Big), \end{aligned}$$ where $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}$ is the base estimate [\[eq:propensity-fit\]](#eq:propensity-fit){reference-type="eqref" reference="eq:propensity-fit"}, $\widehat{\boldsymbol{\mu}}{}_{\mathsf{x},\mathsf{cfd}} \ensuremath{: =}\frac{1}{n_1} \sum_{i=1}^n a_i \boldsymbol{x}_i$, $\widehat{\boldsymbol{\mu}}{}_{\mathsf{x}} \ensuremath{: =}\frac{1}{n} \sum_{i=1}^n \boldsymbol{x}_i$, and $n_1 = \sum_{i=1}^n a_i$. The factor $\widehat{\beta}{}$ corrects for the shrinkage that occurs in high-dimensional binary regression models [@surCandes2019; @yadlowskyYun2021], and its definition depends on whether the moment method or M-estimation is used to construct $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}$ in [\[eq:prop-db-option\]](#eq:prop-db-option){reference-type="eqref" reference="eq:prop-db-option"}.
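To build intuition for the moment-method construction above, here is a small simulation sketch (our own, under the whitened-feature convention $\eta_\mathsf{a}= \theta_{\mathsf{a},0} + \langle \boldsymbol{z}, \boldsymbol{\theta}_\mathsf{a}\rangle$ for $\boldsymbol{x}= \boldsymbol{\mu}_\mathsf{x}+ \boldsymbol{\Sigma}^{1/2}\boldsymbol{z}$, and with the oracle value of $\alpha_1$ standing in for $\widehat{\beta}{}$):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 400_000, 5
mu_x = rng.normal(size=p)
A = rng.normal(size=(p, p))
Sigma = A @ A.T + np.eye(p)
evals, evecs = np.linalg.eigh(Sigma)
S_half = evecs @ np.diag(np.sqrt(evals)) @ evecs.T     # symmetric Sigma^{1/2}
S_neg_half = evecs @ np.diag(evals ** -0.5) @ evecs.T  # Sigma^{-1/2}
theta_a, theta_a0 = rng.normal(size=p) / np.sqrt(p), 0.2

Z = rng.normal(size=(n, p))
X = mu_x + Z @ S_half                 # x_i = mu_x + Sigma^{1/2} z_i
eta = theta_a0 + Z @ theta_a          # whitened-feature linear predictor
prop = 1.0 / (1.0 + np.exp(-eta))     # logistic link
a = rng.random(n) < prop              # missingness indicators

# Oracle alpha1 = E[pi']/E[pi]; its Monte Carlo value stands in for beta-hat.
alpha1 = np.mean(prop * (1.0 - prop)) / np.mean(prop)

mu_cfd_hat = X[a].mean(axis=0)        # mean of features with observed outcomes
mu_hat = X.mean(axis=0)
theta_a_d = S_neg_half @ (mu_cfd_hat - mu_hat) / alpha1

cos = theta_a_d @ theta_a / (np.linalg.norm(theta_a_d) * np.linalg.norm(theta_a))
```

With this sample size the recovered direction `theta_a_d` is nearly collinear with the true `theta_a`, even though no likelihood for the propensity model was ever fit.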
The first construction of $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}$ is based on the identity $\boldsymbol{\mu}_{\mathsf{x},\mathsf{cfd}} - \boldsymbol{\mu}_{\mathsf{x}} = \alpha_1 \boldsymbol{\Sigma}^{1/2} \boldsymbol{\theta}_\mathsf{a}$, which is a straightforward consequence of Gaussian integration by parts. In this case, we set $\widehat{\beta}{}$ to be any consistent estimate of $\alpha_1$. The construction of one such estimate is given in . The second construction of $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}$ is based on prior work on debiasing penalized binary-outcome regression developed in the papers [@surCandes2019; @yadlowskyYun2021; @bellec2022]. This prior work establishes that the debiased estimate $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}+\frac{\boldsymbol{\Sigma}^{-1}\boldsymbol{X}^\top \nabla_{\boldsymbol{\eta}} \ell_\mathsf{a} (\widehat{\theta}{}_{\mathsf{a},0}\boldsymbol{1}+ \boldsymbol{X} \widehat{\boldsymbol{\theta}}{}_\mathsf{a};\boldsymbol{a})}{n\widehat{\zeta}{}_\mathsf{a}^\theta}$ is centered not on the true parameter $\boldsymbol{\theta}_\mathsf{a}$, as it would be in linear outcome models, but on a shrunken version $\beta \boldsymbol{\theta}_\mathsf{a}$ of the true parameter. These works also provide several methods for consistently estimating this shrinkage factor. Our method succeeds if $\widehat{\beta}{}$ is any consistent estimate of this shrinkage factor. The construction of one such estimate is given in . # Theoretical guarantees {#SecTheory} In this section, we state and discuss a number of theoretical results associated with our estimators. In , we state the assumptions under which our results hold. In , we state our consistency result for estimation of the population mean $\mu_\mathsf{y}$. In , we state our result regarding the approximate normality (in a certain sense) of the estimate $\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d}$.
## Assumptions {#SecAssumptions} Our consistency result holds as $n,p \rightarrow \infty$ simultaneously. It holds for any sequence of models that satisfy the following constraints, each of which is defined by a pair of constants $C, c > 0$ that do not depend on $(n,p)$. **Assumption A1** 1. The sampling rate $n/p$ lies in the interval $(c, C)$. 2. The noise variance $\sigma^2$ in the linear model lies in the interval $(c, C)$. 3. The feature covariance is strictly positive definite: $\boldsymbol{\Sigma}\succ 0$. 4. The link function $\pi: {\mathbb R}\rightarrow (0,1)$ is twice-differentiable, strictly increasing, with first and second derivatives bounded in absolute value by $C$. Moreover, $1-c > \mathbb{E}[\pi(\theta_{\mathsf{a},0} + \langle \boldsymbol{x}, \boldsymbol{\theta}_{\mathsf{a}}\rangle)] > c > 0$. In addition, for $\eta > 0$, $1 - Ce^{-c\eta} > \pi(\eta)$ and $\pi(-\eta) > Ce^{-c\eta}$. 5. The mean of the features has bounded $\boldsymbol{\Sigma}^{-1}$-norm: $\| \boldsymbol{\mu}_{\mathsf{x}} \|_{\boldsymbol{\Sigma}^{-1}} < C$. 6. The sizes of the signals and offsets are bounded above in the $\boldsymbol{\Sigma}$-norm: $|\theta_{\mathsf{y},0}|,|\theta_{\mathsf{a},0}| < C$, $\| \boldsymbol{\theta}_{\mathsf{y}} \|_{\boldsymbol{\Sigma}}, \| \boldsymbol{\theta}_{\mathsf{a}} \|_{\boldsymbol{\Sigma}} < C$. 7. The penalties are differentiable, strongly smooth, and strongly convex in the $\boldsymbol{\Sigma}$-metric: $\nabla \Omega_{\mathsf{y}}(\boldsymbol{v})$ and $\nabla \Omega_{\mathsf{a}}(\boldsymbol{v})$ exist for all $\boldsymbol{v}$, and $\boldsymbol{v}\mapsto \Omega_{\mathsf{y}}(\boldsymbol{\Sigma}^{-1/2}\boldsymbol{v})$ and $\boldsymbol{v}\mapsto \Omega_{\mathsf{a}}(\boldsymbol{\Sigma}^{-1/2}\boldsymbol{v})$ are $c$-strongly convex and $C$-strongly smooth.
The penalties have minimizers $\boldsymbol{v}_\mathsf{y}$ and $\boldsymbol{v}_\mathsf{a}$ bounded by $\| \boldsymbol{v}_\mathsf{y} \|_{\boldsymbol{\Sigma}},\|\boldsymbol{v}_\mathsf{a}\|_{\boldsymbol{\Sigma}} < C$, and their Hessians are $C\sqrt{n}$-Lipschitz in Frobenius norm. 8. The weight function is uniformly bounded above and below: $0 < c < w(h) < C < \infty$ for all $h$. Moreover, $1/w(h)$ is differentiable and $C$-Lipschitz. 9. The loss function $\ell_{\mathsf{a}}$ satisfies the following properties: - It is $c$-strongly convex and three-times differentiable in the linear predictor, with its second and third derivatives bounded in absolute value by $C$. - It is non-negative. - It is informative in the sense that $\partial_\eta \ell_\mathsf{a}(\eta;0) - \partial_\eta \ell_\mathsf{a}(\eta;1) \geq c > 0$ for all $\eta \in {\mathbb R}$. - It is bounded and has bounded derivatives at $\eta = 0$: $\ell_\mathsf{a}(0;0),\ell_\mathsf{a}(0;1) < C$, $-C < \partial_\eta \ell_\mathsf{a}(0;1) < -c < 0$ and $C > \partial_\eta \ell_\mathsf{a}(0;0) \geq c > 0$. We sometimes refer to the list of constants appearing in Assumption A1 by $\mathcal{P}_{\mathrm{model}}$. The approximate normality result also holds for models satisfying assumption A1, with approximation errors that vanish as $n,p \rightarrow \infty$. Assumption A1(a) requires that $n$ and $p$ grow proportionally. We expect that only a lower bound on $n/p$ is necessary, but our current proof requires an upper bound as well. Importantly, we allow $n/p < 1$, and in fact, allow it to be arbitrarily small as long as it does not vanish in the high-dimensional limit. The error of $\widehat{\mu}{}_\mathsf{y}^\textup{d}$ necessarily diverges as $\sigma^2$ diverges, justifying the upper bound on $\sigma^2$. The lower bound on $\sigma^2$ is likely an artifact of the proof. Assumption A1(d) imposes strict overlap on average over the covariates $\boldsymbol{x}$.
It implies that a strictly positive fraction of the units has outcomes observed, and a strictly positive fraction has outcomes unobserved. It does not require strict overlap conditional on the covariates $\boldsymbol{x}$, but does constrain the rate at which overlap decays as the linear predictor grows. Together with assumptions A1(e) and (f), it implies that the "typical" unit satisfies a form of strict overlap. The decay rate is satisfied by popular link functions, including the logistic link. The upper bounds on $|\theta_{\mathsf{y},0}|,|\theta_{\mathsf{a},0}|,\| \boldsymbol{\theta}_{\mathsf{y}} \|_{\boldsymbol{\Sigma}},\| \boldsymbol{\theta}_{\mathsf{a}}\|_{\boldsymbol{\Sigma}}$ together with the lower bound on $\sigma^2$ imply that the data is noisy: the fraction of variation in the outcome explained by the covariates (i.e., $R^2$) is necessarily bounded away from 1. Our constraints permit (but do not require) that it also be bounded away from 0. Assumption A1(g) requires strong convexity and strong smoothness of $\Omega_\mathsf{y}$ and $\Omega_\mathsf{a}$ in the $\boldsymbol{\Sigma}$-metric. Generalizations to penalties that are not strongly smooth (e.g., elastic net) or strongly convex (e.g., Lasso) are of substantial interest. It is likely that specialized analyses can generalize our results to these cases, as in the papers [@miolane2021; @celentanoMontanariWei2020; @bellec2019debiasingIntervals]. We make the current assumptions to facilitate our proofs. When the singular values of $\boldsymbol{\Sigma}$ are all of order $O(1)$, assumption A1(g) includes ridge regression with the regularization parameter chosen by cross-validation to minimize prediction error (see, for example, [@karoui2013asymptotic; @miolane2021; @hastieMontanariRossetTibshirani2022]). Bounds on the weight $w(h)$ are required to make Assumption A1(g) meaningful.
If strict overlap is satisfied ($\pi(\eta) > c$ for all $\eta$), then assumption A1(h) is satisfied by the inverse propensity weights $w(h) = 1/\pi(h)$. Assumption A1(i) includes the least-squares loss. Because of the strong convexity constraint, it does not include the logistic loss. We expect that a generalization to the logistic loss is possible at the cost of additional technical difficulty in the proofs. Our primary interest is to show that constructing a consistent estimate of $\mu_\mathsf{y}$ is possible, so we did not pursue extending the result to the logistic loss. Throughout the paper, we always use $C,c > 0$, possibly with subscripts or superscripts, to denote constants that depend only on the constants in $\mathcal{P}_{\mathrm{model}}$, even though we do not explicitly state this fact on each occurrence. ## Consistency under proportional asymptotics {#sec:consistency} We can now state our main consistency result for the population mean. Within our framework, we say that an estimate $\widehat{\mathsf{dbAdj}}$ is *consistent* for $\mathsf{dbAdj}$ if $\widehat{\mathsf{dbAdj}} - \mathsf{dbAdj}\buildrel \mbox{\tiny{prob.}} \over \longrightarrow 0$ as $(n, p) \rightarrow \infty$. **Theorem 1**.
*Under Assumption A1, outcome regression [\[eq:db-G-computation\]](#eq:db-G-computation){reference-type="eqref" reference="eq:db-G-computation"} with empirical SCA plug-ins $(\widehat{\theta}{}_{\mathsf{y},0}^\textup{d},\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d})$ achieves consistency for the population mean $$\begin{aligned} |\widehat{\mu}{}_\mathsf{y}^\textup{d}- \mu_\mathsf{y}| & \buildrel \mbox{\tiny{prob.}} \over \longrightarrow 0 \qquad \mbox{as $(n, p) \rightarrow \infty$,}\end{aligned}$$ provided the bias corrections [\[eq:SCA-bias-hat\]](#eq:SCA-bias-hat){reference-type="eqref" reference="eq:SCA-bias-hat"} are computed using consistent estimates of $\mathsf{dbAdj}_{01}$, $\mathsf{dbAdj}_{02}$, $\mathsf{dbAdj}_1$ and either the moment method or M-estimation approach [\[eq:prop-db-option\]](#eq:prop-db-option){reference-type="eqref" reference="eq:prop-db-option"} for the debiased propensity parameter.* Theorem [Theorem 1](#thm:pop-mean){reference-type="ref" reference="thm:pop-mean"} also applies to the oracle ASCW bias corrections [\[eq:db-cfd-expanded\]](#eq:db-cfd-expanded){reference-type="eqref" reference="eq:db-cfd-expanded"} because they use exact knowledge of the $\mathsf{dbAdj}$ quantities and propensity model parameter. In , we provide constructions of $\widehat{\mathsf{dbAdj}}_{01}$, $\widehat{\mathsf{dbAdj}}_{02}$, $\widehat{\mathsf{dbAdj}}_1$, and $\widehat{\beta}{}$ that are consistent under assumption A1. Thus, the theorem together with these constructions provides a consistent estimate of the population mean under proportional asymptotics with $n/p$ possibly (but not necessarily) less than 1. Other constructions of $\widehat{\mathsf{dbAdj}}_{01}$, $\widehat{\mathsf{dbAdj}}_{02}$, $\widehat{\mathsf{dbAdj}}_1$, and $\widehat{\beta}{}$ are likely possible. We prove  in .
The propensity model parameter plays a crucial role in the oracle ASCW debiased estimate because it appears in $\boldsymbol{\mu}_{\mathsf{x},\mathsf{cfd}}$ and $\boldsymbol{\Sigma}_{\mathsf{cfd}}$ (see [\[eq:cfd-mean-variance\]](#eq:cfd-mean-variance){reference-type="ref" reference="eq:cfd-mean-variance"}). Estimation of the propensity model parameter plays a crucial role in the empirical SCA debiased estimate via $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}$ and, as will be seen in , via the estimates $\widehat{\mathsf{dbAdj}}_{01}$, $\widehat{\mathsf{dbAdj}}_{02}$ and $\widehat{\mathsf{dbAdj}}_1$. As we have summarized in  and will state precisely in Appendix [21](#SecInconsistency){reference-type="ref" reference="SecInconsistency"}, some natural approaches that avoid propensity modeling are provably inconsistent in relevant settings. We are unaware of successful approaches that avoid propensity modeling altogether. This is in stark contrast to the setting $n > p$, where G-computation with OLS plug-ins achieves consistency without propensity modeling [@yadlowsky2022; @sur2022]. An important feature of  is that, unlike the approaches in past work [@yadlowsky2022; @sur2022], it avoids sample splitting: the G-computation [\[eq:db-G-computation\]](#eq:db-G-computation){reference-type="eqref" reference="eq:db-G-computation"} averages over the same data used to fit $\widehat{\theta}{}_{\mathsf{y},0}^\textup{d}$ and $\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d}$ as well as the propensity estimate $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}$ that enters into the construction of the empirical SCA estimate. Cross-fitting, which cuts in half the amount of data used to fit each nuisance parameter, can degrade the nuisance error by a constant factor.
In consistency regimes in which the classical semi-parametric efficiency lower bound can be achieved, the nuisance errors have a negligible impact on the estimation error of the population mean, whence this potential constant inflation has no impact on asymptotic variance [@chernozhukov2018; @newey2018crossfitting]. In contrast, in the present setting the nuisance errors may have a non-negligible impact on the estimation error of the population mean, whence cross-fitting may potentially impact efficiency. In this and related settings, estimators that avoid sample splitting sometimes outperform estimators that use sample splitting in simulation (see , or, for example, the papers [@guoWangCaiLi2019; @maCaiLi2021]). We do not perform an efficiency comparison of various sample splitting strategies (indeed, our current theory does not precisely quantify the fluctuations of $\widehat{\mu}{}_\mathsf{y}^\textup{d}$), but we believe it is important to develop methods and theory for which sample splitting is not required. Some previous work has studied conditions under which sample splitting is not required, but this analysis is restricted to consistency regimes [@chenSyrgkanisAustern2022]. We in fact prove a more quantitative form of consistency than stated in . For the constructions of $\widehat{\mathsf{dbAdj}}_{01}$, $\widehat{\mathsf{dbAdj}}_{02}$, $\widehat{\mathsf{dbAdj}}_1$, and $\widehat{\beta}{}$ provided in , the proof of  establishes a non-asymptotic exponential tail bound on $\widehat{\mu}{}_\mathsf{y}^\textup{d}- \mu_\mathsf{y}$ depending only on the constants $C,c > 0$ appearing in Assumption A1 (see Appendix [18](#sec:db-proofs){reference-type="ref" reference="sec:db-proofs"}). 
Although this bound provides a more quantitative statement than in , we do not expect that it is tight in the rate of convergence $\widehat{\mu}{}_\mathsf{y}^\textup{d}- \mu_\mathsf{y} \buildrel \mbox{\tiny{prob.}} \over \longrightarrow 0$ or in its dependence on the constants in Assumption A1. Identifying the correct rate of convergence or dependence on these constants may be beyond the capacity of the Gaussian comparison proof technique used here, which often fails to identify optimal rates [@celentanoMontanariWei2020]. The simulations in  suggest the rate of convergence is $\widehat{\mu}{}_\mathsf{y}^\textup{d}- \mu_\mathsf{y}= O_p(n^{-1/2})$ (ignoring dependence on other constants). ### Normality of debiased estimates {#sec:db-review} A main focus of the debiasing literature is normal inference on the linear model coefficients (e.g., [@zhang2014confidence; @buhlmann2013statistical; @javanmard2014confidence; @javanmard2014hypothesis; @javanmard2018debiasing; @miolane2021; @celentanoMontanariWei2020; @bellec2019biasing; @bellec2019debiasingIntervals]). We augment  by establishing the approximate normality and unbiasedness (in a certain sense) of $\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d}$ with a fully empirical standard error. 
The coordinate-wise standard error is given by $\widehat{\tau}{}\boldsymbol{\Sigma}_{j|-j}^{-1/2}$, where $\widehat{\tau}{}^2 \ensuremath{: =}\widehat{s}_\mathsf{y}^2 - 2\, \widehat{\mathsf{dbAdj}}_1\, \widehat{s}_{\mathsf{y}\circ} + (\widehat{\mathsf{dbAdj}}_1)^2\, \widehat{s}_\circ^2$, and $\widehat{s}_\mathsf{y}^2$, $\widehat{s}_{\mathsf{y}\circ}$, and $\widehat{s}_{\circ}^2$ are given by inner products of certain empirical influence functions: $\widehat{s}_\mathsf{y}^2 \ensuremath{: =}\frac{1}{\ensuremath{n}} \| \widehat{\boldsymbol{i}}{}_\mathsf{y}\|^2$, $\widehat{s}_{\mathsf{y}\circ} \ensuremath{: =}\frac{1}{n\widehat{\beta}{}} \langle \widehat{\boldsymbol{i}}{}_\mathsf{y}, \widehat{\boldsymbol{i}}{}_\circ \rangle$, and $\widehat{s}_{\circ}^2 \ensuremath{: =}\frac{1}{n\widehat{\beta}{}^2} \| \widehat{\boldsymbol{i}}{}_\circ \|^2$, where $\widehat{\boldsymbol{i}}{}_\mathsf{y}= \frac{\boldsymbol{a}\odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0}\boldsymbol{1}- \boldsymbol{X} \widehat{\boldsymbol{\theta}}{}_\mathsf{y})}{\widehat{\zeta}{}_\mathsf{y}^\theta}$ and $$\widehat{\boldsymbol{i}}{}_\circ = \begin{cases} \frac{\boldsymbol{a}}{\widehat{\pi}{}} - \boldsymbol{1}\quad&\text{if the moment method in equation~\eqref{eq:prop-db-option} is used}, \\[5pt] -\frac{\nabla_{\boldsymbol{\eta}}\ell_\mathsf{a}(\widehat{\theta}{}_{\mathsf{a},0}\boldsymbol{1} + \boldsymbol{X}\widehat{\boldsymbol{\theta}}{}_\mathsf{a};\boldsymbol{a})}{\widehat{\zeta}{}_\mathsf{a}^\theta} \quad&\text{if M-estimation in equation~\eqref{eq:prop-db-option} is used}, \end{cases}$$ with $\widehat{\pi}{}= \frac{1}{\ensuremath{n}} \sum_{i=1}^n \ensuremath{a}_i$. **Theorem 2**.
*Under Assumption A1, the empirical SCA estimates $(\widehat{\theta}{}_{\mathsf{y},0}^\textup{d},\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d})$ satisfy $$\begin{gathered} \widehat{\theta}{}_{\mathsf{y},0}^\textup{d}- \theta_{\mathsf{y},0} \buildrel \mbox{\tiny{prob.}} \over \longrightarrow 0 \quad \text{and} \quad \frac{1}{p} \sum_{j=1}^p \mathbb{I}\Bigg\{ \frac{ \sqrt{n} \big( \widehat{\theta}{}_{\mathsf{y},j}^\textup{d}- \theta_{\mathsf{y},j} \big) }{ \widehat{\tau}{}\boldsymbol{\Sigma}_{j|-j}^{-1/2} } \leq t \Bigg\} \buildrel \mbox{\tiny{prob.}} \over \longrightarrow \mathbb{P}\big(\mathsf{N}(0,1) \leq t\big) \qquad \mbox{as $(n, p) \rightarrow \infty$,} \end{gathered}$$ provided the bias corrections [\[eq:SCA-bias-hat\]](#eq:SCA-bias-hat){reference-type="eqref" reference="eq:SCA-bias-hat"} are computed using consistent estimates of $\mathsf{dbAdj}_{01}$, $\mathsf{dbAdj}_{02}$, $\mathsf{dbAdj}_1$ and either the moment method or M-estimation approach [\[eq:prop-db-option\]](#eq:prop-db-option){reference-type="eqref" reference="eq:prop-db-option"} for the debiased propensity parameter.* As in the case of Theorem [Theorem 1](#thm:pop-mean){reference-type="ref" reference="thm:pop-mean"}, the oracle ASCW bias corrections [\[eq:db-cfd-expanded\]](#eq:db-cfd-expanded){reference-type="eqref" reference="eq:db-cfd-expanded"} satisfy the criteria of Theorem [Theorem 2](#thm:db-normality){reference-type="ref" reference="thm:db-normality"} because they use exact knowledge of the $\mathsf{dbAdj}$ quantities. Because they use the true propensity model parameter rather than propensity model estimates, the oracle ASCW standard error instead must take $\widehat{\tau}{}^2 = \widehat{s}_\mathsf{y}^2$. We prove Theorem [Theorem 2](#thm:db-normality){reference-type="ref" reference="thm:db-normality"} and describe the oracle ASCW standard error in . establishes a form of $\ensuremath{n}^{-1/2}$-asymptotic normality that holds on average across coordinates. 
We note that these types of average-coordinate guarantees are common in the literature on high-dimensional asymptotics (e.g., [@javanmard2014hypothesis; @miolane2021; @celentanoMontanariWei2020]). The standard errors $\widehat{\tau}{}\boldsymbol{\Sigma}_{j|-j}^{-1/2}/\sqrt{n}$ are fully empirical (assuming knowledge of the feature variance $\boldsymbol{\Sigma}$). They agree with the standard errors one would get from OLS in low-dimensional asymptotics with outcome noise variance $\widehat{\tau}{}^2$. In general, this noise variance is larger than $\sigma^2$ by a constant factor that does not decay as $n,p \rightarrow \infty$. implies that if one constructs confidence intervals for each coordinate of the unknown parameter vector $\boldsymbol{\theta}_\mathsf{y}$ based on normality theory and the empirical effective noise level $\widehat{\tau}{}^2$, then the empirical coverage of these intervals across coordinates is, with high probability, close to the nominal level. Usually, the statistician is also interested in a guarantee that holds for a single, prespecified coordinate $\theta_{\mathsf{y},j}$. Using alternative proof techniques to those used here, Bellec and Zhang [@bellec2019debiasingIntervals] establish coordinate-wise normality for debiased estimates in linear models with fully-observed outcomes under a large class of penalties. Whether these techniques are able to establish similar results for the oracle ASCW or empirical SCA debiased estimates, or appropriate modifications of them, is a promising direction for future work. states that the oracle ASCW and empirical SCA debiased estimates behave, in a certain sense, like OLS estimates. Given the success of G-computation with OLS estimates as a plug-in, as studied by Yadlowsky [@yadlowsky2022] and Jiang et al. [@sur2022], it is perhaps not surprising that G-computation with these debiased estimates as a plug-in is consistent for the population mean.
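For concreteness, the following sketch assembles $\widehat{\tau}{}^2$ from placeholder influence vectors and checks the average-coordinate coverage statistic with $\boldsymbol{\Sigma} = \mathbf{I}_p$ (so $\boldsymbol{\Sigma}_{j|-j} = 1$ for every $j$). The influence vectors, adjustment factor, and renormalization factor below are randomly generated stand-ins, not the actual fits, and the debiased coordinates are drawn from the limiting normal law purely to illustrate the coverage computation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 200

# Placeholders standing in for i_y, i_circ, dbAdj_1, and beta-hat.
i_y = rng.standard_normal(n)
i_circ = rng.standard_normal(n)
dbAdj1, beta_hat = 0.3, 0.8

s_y2 = i_y @ i_y / n
s_yo = i_y @ i_circ / (n * beta_hat)
s_o2 = i_circ @ i_circ / (n * beta_hat ** 2)
tau2 = s_y2 - 2 * dbAdj1 * s_yo + dbAdj1 ** 2 * s_o2  # tau-hat^2

# Average-coordinate coverage at nominal level 95%, with Sigma = I_p.
theta = np.zeros(p)
theta_d = theta + np.sqrt(tau2 / n) * rng.standard_normal(p)
z = np.sqrt(n) * (theta_d - theta) / np.sqrt(tau2)
coverage = np.mean(np.abs(z) <= 1.96)
```

Note that $\widehat{\tau}{}^2 = \frac{1}{n}\|\widehat{\boldsymbol{i}}{}_\mathsf{y} - (\widehat{\mathsf{dbAdj}}_1/\widehat{\beta}{})\,\widehat{\boldsymbol{i}}{}_\circ\|^2 \geq 0$, so the variance estimate is automatically nonnegative.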
As with the convergence $\widehat{\mu}{}_\mathsf{y}^\textup{d}- \mu_\mathsf{y}\buildrel \mbox{\tiny{prob.}} \over \longrightarrow 0$ in , we prove a more quantitative form of the consistency and coverage guarantees of . For the constructions of $\widehat{\mathsf{dbAdj}}_{01}$, $\widehat{\mathsf{dbAdj}}_{02}$, $\widehat{\mathsf{dbAdj}}_1$, and $\widehat{\beta}{}$ provided in , the proof of  establishes non-asymptotic exponential tail bounds depending only on the constants $C,c > 0$ appearing in Assumption A1 (see ). As before, identifying optimal rates and the dependence on constants is left to future work. # Explicit constructions of empirical adjustments {#sec:adjustment-construction} hold for any consistent estimates of the $\mathsf{dbAdj}$-quantities, along with any consistent estimate of $\alpha_1$ or of a shrinkage factor that we define below. In this section, we provide one explicit construction of such consistent estimates. ## Estimating the signal strength, offset, and covariance spike {#sec:summary-stat-estimates} Our construction of consistent estimates for the $\mathsf{dbAdj}$-quantities relies on estimates of certain model parameters, including:

- the probability of observing the outcome marginalized over the covariates ${\overline{\pi}}\ensuremath{: =}\mathbb{E}[\pi(\boldsymbol{x})]$,

- the norm of the feature mean $\gamma_{\boldsymbol{\mu}} \ensuremath{: =}\| \boldsymbol{\mu}_{\mathsf{x}} \|_{\boldsymbol{\Sigma}^{-1}}$,

- the propensity model signal strength $\gamma_\mathsf{a}\ensuremath{: =}\| \boldsymbol{\theta}_\mathsf{a}\|_{\boldsymbol{\Sigma}}$,

- the propensity model linear signal strength $\gamma_{\mathsf{a}*} \ensuremath{: =}\alpha_1\| \boldsymbol{\theta}_\mathsf{a}\|_{\boldsymbol{\Sigma}}$,

- the propensity model offset $\mu_\mathsf{a}\ensuremath{: =}\langle \boldsymbol{\mu}_\mathsf{x}, \boldsymbol{\theta}_\mathsf{a}\rangle$,

- and the covariance spike prefactor $\mathsf{c}_{\boldsymbol{\Sigma}}$.
(Recall that $\boldsymbol{\Sigma}_{\mathsf{cfd}}^{-1} = \boldsymbol{\Sigma}^{-1} - \mathsf{c}_{\boldsymbol{\Sigma}}\boldsymbol{\Sigma}^{-1/2}\boldsymbol{\theta}_\mathsf{a}\boldsymbol{\theta}_\mathsf{a}^\top \boldsymbol{\Sigma}^{-1/2}$, where $\mathsf{c}_{\boldsymbol{\Sigma}} \ensuremath{: =}(\alpha_2-\alpha_1^2)/(1 + (\alpha_2-\alpha_1^2)\|\boldsymbol{\theta}_\mathsf{a}\|_{\boldsymbol{\Sigma}}^2)$). The best linear predictor of $a$ given $\boldsymbol{x}$ is $\langle \boldsymbol{x}, \alpha_1 \boldsymbol{\theta}_\mathsf{a}\rangle$, which is why we refer to $\gamma_{\mathsf{a}*}$ as the "propensity model linear signal strength." We estimate these parameters as follows. First, we define the estimates $$\begin{gathered} \widehat{\pi}{}\ensuremath{: =}\frac{1}{n} \sum_{i=1}^n \ensuremath{a}_i, \qquad \widehat{\gamma}{}_{\boldsymbol{\mu}}^2 \ensuremath{: =}\Big( \| \widehat{\boldsymbol{\mu}}{}_\mathsf{x}\|^2 - \frac{p}{n} \Big)_+, \quad \mbox{and} \quad \widehat{\gamma}{}_{\mathsf{a}*}^2 \ensuremath{: =}\Big( \| \widehat{\boldsymbol{\mu}}{}_{\mathsf{x},\mathsf{cfd}} - \widehat{\boldsymbol{\mu}}{}_\mathsf{x} \|^2 - \frac{p}{n} \frac{1-\widehat{\pi}{}}{\widehat{\pi}{}} \Big)_+. \end{gathered}$$ Then, we let $\widehat{\mu}{}_\mathsf{a}$ and $\widehat{\gamma}{}_{\mathsf{a}}$ be the unique solutions (cf. 
) to the fixed point equations $$\label{eq:prop-offset-and-signal-strenght-estimates} \mathbb{E}_{G \sim \mathsf{N}(0,1)}[\pi(\widehat{\mu}{}_\mathsf{a}+ \widehat{\gamma}{}_\mathsf{a}G)] = \widehat{\pi}{}, \quad \mbox{and} \quad \mathbb{E}_{G \sim \mathsf{N}(0,1)}[G\pi(\widehat{\mu}{}_\mathsf{a}+ \widehat{\gamma}{}_\mathsf{a}G)] = \widehat{\pi}{} \widehat{\gamma}{}_{\mathsf{a}*}.$$ Finally, we define the estimates $$\begin{gathered} \widehat{\alpha}{}_1 = \frac{\mathbb{E}[\pi'(\widehat{\mu}{}_\mathsf{a}+ \widehat{\gamma}{}_\mathsf{a} G)]}{\mathbb{E}[\pi(\widehat{\mu}{}_\mathsf{a}+ \widehat{\gamma}{}_\mathsf{a}G)]}, \qquad \widehat{\alpha}{}_2 = \frac{\mathbb{E}[\pi''(\widehat{\mu}{}_\mathsf{a}+ \widehat{\gamma}{}_\mathsf{a}G)]}{\mathbb{E}[\pi(\widehat{\mu}{}_\mathsf{a} + \widehat{\gamma}{}_\mathsf{a}G)]}, \quad \mbox{and} \quad \widehat{\mathsf{c}}{}_{\boldsymbol{\Sigma}} = \frac{\widehat{\alpha}{}_2 - \widehat{\alpha}{}_1^2}{1 + (\widehat{\alpha}{}_2 - \widehat{\alpha}{}_1^2)\widehat{\gamma}{}_\mathsf{a}^2}. \end{gathered}$$ With these definitions, we have the following auxiliary result: **Lemma 1**. *Under Assumption A1, with exponentially high probability, there are unique solutions to the fixed point relation [\[eq:prop-offset-and-signal-strenght-estimates\]](#eq:prop-offset-and-signal-strenght-estimates){reference-type="eqref" reference="eq:prop-offset-and-signal-strenght-estimates"}.
Moreover, as $n,p \rightarrow \infty$, we have the consistency relations: $$\begin{gathered} \widehat{\pi}{}- {\overline{\pi}}\buildrel \mbox{\tiny{prob.}} \over \longrightarrow 0, \qquad \widehat{\gamma}{}_{\boldsymbol{\mu}} - \| \boldsymbol{\mu}_\mathsf{x}\| \buildrel \mbox{\tiny{prob.}} \over \longrightarrow 0, \qquad \widehat{\gamma}{}_{\mathsf{a}*} - \alpha_1 \| \boldsymbol{\theta}_\mathsf{a} \| \buildrel \mbox{\tiny{prob.}} \over \longrightarrow 0, \qquad \widehat{\mu}{}_\mathsf{a}- \mu_\mathsf{a}\buildrel \mbox{\tiny{prob.}} \over \longrightarrow 0, \qquad \widehat{\gamma}{}_\mathsf{a}- \| \boldsymbol{\theta}_\mathsf{a}\| \buildrel \mbox{\tiny{prob.}} \over \longrightarrow 0, \\ \widehat{\alpha}{}_1 - \alpha_1 \buildrel \mbox{\tiny{prob.}} \over \longrightarrow 0, \qquad \widehat{\alpha}{}_2 - \alpha_2 \buildrel \mbox{\tiny{prob.}} \over \longrightarrow 0, \qquad \widehat{\mathsf{c}}{}_{\boldsymbol{\Sigma}} - \frac{\alpha_2 - \alpha_1^2}{1 + (\alpha_2 - \alpha_1^2)\| \boldsymbol{\theta}_\mathsf{a}\|^2} \buildrel \mbox{\tiny{prob.}} \over \longrightarrow 0. \end{gathered}$$* We prove  in . ## Construction of propensity renormalization factor {#sec:prop-debiasing} Both the moment method and M-estimation approach for constructing $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}$ in equation [\[eq:prop-db-option\]](#eq:prop-db-option){reference-type="eqref" reference="eq:prop-db-option"} rely on a renormalization factor $\widehat{\beta}{}$. For the moment method approach, hold provided $\widehat{\beta}{}- \alpha_1 \buildrel \mbox{\tiny{prob.}} \over \longrightarrow 0$. In this case, we take $\widehat{\beta}{}= \widehat{\alpha}{}_1$ as defined in equation [\[eq:prop-offset-and-signal-strenght-estimates\]](#eq:prop-offset-and-signal-strenght-estimates){reference-type="eqref" reference="eq:prop-offset-and-signal-strenght-estimates"}, with consistency given by . 
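As a sketch of how the fixed point system [\[eq:prop-offset-and-signal-strenght-estimates\]](#eq:prop-offset-and-signal-strenght-estimates){reference-type="eqref" reference="eq:prop-offset-and-signal-strenght-estimates"} can be solved in practice, the following uses Gauss-Hermite quadrature for the Gaussian expectations and a standard root-finder. The propensity function is the one used in our simulations; the "observed" moments are fabricated from a known $(\mu_\mathsf{a}, \gamma_\mathsf{a})$ so that the recovered solution can be checked, whereas in practice they would come from the data.

```python
import numpy as np
from scipy.optimize import root

def prop(eta):
    # pi(eta) = 1/10 + 9/10 * logistic(eta), as in our simulations.
    return 0.1 + 0.9 / (1.0 + np.exp(-eta))

# Gauss-Hermite rule: E_{G~N(0,1)}[f(G)] ~ sum_k w_k f(sqrt(2) x_k) / sqrt(pi).
x, w = np.polynomial.hermite.hermgauss(60)
nodes, weights = np.sqrt(2.0) * x, w / np.sqrt(np.pi)

def gauss_mean(f):
    return float(np.sum(weights * f(nodes)))

# Fabricated moments from (mu_a, gamma_a) = (0.5, 1.2); in practice
# pi_hat and the cross moment pi_hat * gamma_{a*}_hat come from data.
mu_true, gam_true = 0.5, 1.2
pi_hat = gauss_mean(lambda G: prop(mu_true + gam_true * G))
cross_hat = gauss_mean(lambda G: G * prop(mu_true + gam_true * G))

def fixed_point(v):
    mu, gam = v
    return [gauss_mean(lambda G: prop(mu + gam * G)) - pi_hat,
            gauss_mean(lambda G: G * prop(mu + gam * G)) - cross_hat]

sol = root(fixed_point, x0=[0.0, 1.0])
mu_a_hat, gamma_a_hat = sol.x

# Plug-in estimate alpha_1_hat = E[pi'(mu + gam G)] / E[pi(mu + gam G)].
def prop_deriv(eta):
    s = 1.0 / (1.0 + np.exp(-eta))
    return 0.9 * s * (1.0 - s)

alpha1_hat = gauss_mean(lambda G: prop_deriv(mu_a_hat + gamma_a_hat * G)) / pi_hat
```

The same quadrature rule evaluates $\widehat{\alpha}{}_2$ with $\pi''$ in place of $\pi'$, after which $\widehat{\mathsf{c}}{}_{\boldsymbol{\Sigma}}$ follows by arithmetic.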
For the M-estimation approach, we must take $\widehat{\beta}{}$ to be a consistent estimate of the shrinkage factor $\beta$ describing the shrinkage of $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}+\frac{\boldsymbol{\Sigma}^{-1}\boldsymbol{X}^\top \nabla_{\boldsymbol{\eta}} \ell_\mathsf{a}(\widehat{\theta}{}_{\mathsf{a},0}\boldsymbol{1}+ \boldsymbol{X} \widehat{\boldsymbol{\theta}}{}_\mathsf{a};\boldsymbol{a})}{n\widehat{\zeta}{}_\mathsf{a}^\theta}$ as an estimate of $\boldsymbol{\theta}_\mathsf{a}$. This shrinkage factor is given by the solution to a complicated set of non-linear equations that describes the estimator [\[eq:propensity-fit\]](#eq:propensity-fit){reference-type="eqref" reference="eq:propensity-fit"}. We describe these equations, which we call "the fixed point equations", in . In the unpenalized case, multiple approaches have been developed to estimate this shrinkage factor [@surCandes2019; @yadlowskyYun2021]. Our argument is inspired by the strategy given in the paper [@yadlowskyYun2021], which solves an empirical version of the fixed point equations. It is difficult to build intuition for this construction because the fixed point equations themselves are difficult to understand. Nevertheless, the construction is fully explicit and easy to state.
We begin by defining $$\begin{aligned} \widehat{\boldsymbol{\eta}}_\mathsf{a}^{\mathsf{loo}} \ensuremath{: =}\widehat{\boldsymbol{\eta}}_\mathsf{a}+ \widehat{\zeta}{}_\mathsf{a}^\eta \nabla_{\boldsymbol{\eta}} \ell_\mathsf{a}(\widehat{\boldsymbol{\eta}}_\mathsf{a};\boldsymbol{a}), \qquad {\overline{\eta}}_\mathsf{a}^\mathsf{loo}= \frac{1}{n} \sum_{i=1}^n \widehat{\eta}{}_{\mathsf{a},i}^{\mathsf{loo}}, \quad \mbox{and} \quad \widehat{s}_{\mathsf{a}\widehat{\mathsf{a}}} \ensuremath{: =}\frac{1}{n \widehat{\pi}{}\widehat{\alpha}{}_1} \sum_{i=1}^n \big(\widehat{\eta}{}_{\mathsf{a},i}^\mathsf{loo}- {\overline{\eta}}_\mathsf{a}^\mathsf{loo}\big) \ensuremath{a}_i,\end{aligned}$$ along with $$\begin{aligned} \label{eq:pen-logit-shrinage-est} \widehat{\beta}{}& \ensuremath{: =}\mathbb{E}\Big[ \pi'(G_\mathsf{a}) \Big( \ell_\mathsf{a}'\big(\mathrm{prox}\big[\widehat{\zeta}{}_\mathsf{a}^\eta \ell_\mathsf{a}(\,\cdot\,;1)\big](G_\mathsf{a}^\mathsf{loo});1\big) - \ell_\mathsf{a}'\big(\mathrm{prox}\big[\widehat{\zeta}{}_\mathsf{a}^\eta \ell_\mathsf{a}(\,\cdot\,;0)\big](G_\mathsf{a}^\mathsf{loo});0\big) \Big) \Big],\end{aligned}$$ where the expectation is taken over the pair $$(G_\mathsf{a}, G_\mathsf{a}^\mathsf{loo}) \sim \mathsf{N}\left( \begin{pmatrix} \widehat{\mu}{}_\mathsf{a}\\ {\overline{\eta}}_\mathsf{a}^\mathsf{loo} \end{pmatrix}, \begin{pmatrix} \widehat{\gamma}{}_\mathsf{a}^2 & \widehat{s}_{\mathsf{a}\widehat{\mathsf{a}}} \\ \widehat{s}_{\mathsf{a}\widehat{\mathsf{a}}} & \| \widehat{\boldsymbol{\theta}}{}_\mathsf{a}\|^2 \end{pmatrix} \right).$$ For both the moment method and M-estimation approaches for constructing $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}$ (cf. [\[eq:prop-db-option\]](#eq:prop-db-option){reference-type="ref" reference="eq:prop-db-option"}), asymptotic normality holds in a sense similar to . This is not the focus of the present paper, but is proved in the course of establishing our main results. See  in  for precise statements.
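The expectation in [\[eq:pen-logit-shrinage-est\]](#eq:pen-logit-shrinage-est){reference-type="eqref" reference="eq:pen-logit-shrinage-est"} can be evaluated by Monte Carlo once the scalar proximal operator is available. The sketch below is an illustration of the computation only, not our fitted quantities: it assumes a logistic loss $\ell_\mathsf{a}(u; a) = \log(1 + e^u) - au$ with $\pi = \operatorname{logit}^{-1}$, and the values of $\widehat{\zeta}{}_\mathsf{a}^\eta$ and the Gaussian mean and covariance are hypothetical stand-ins.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def ell_prime(u, a):
    # Derivative of the logistic loss ell(u; a) = log(1 + e^u) - a*u.
    return sigmoid(u) - a

def prox(zeta, a, v):
    # prox[zeta * ell(.; a)](v) = argmin_u zeta*ell(u; a) + (u - v)^2 / 2.
    # Since |ell'| <= 1, the minimizer lies within zeta of v.
    obj = lambda u: zeta * (np.logaddexp(0.0, u) - a * u) + 0.5 * (u - v) ** 2
    return minimize_scalar(obj, bounds=(v - zeta - 1.0, v + zeta + 1.0),
                           method="bounded").x

def pi_deriv(eta):
    s = sigmoid(eta)
    return s * (1.0 - s)

rng = np.random.default_rng(2)
zeta = 0.7                    # hypothetical stand-in for zeta_a^eta
mean = np.array([0.3, 0.2])   # hypothetical (mu_a_hat, eta_bar_loo)
cov = np.array([[1.0, 0.5],
                [0.5, 0.9]])  # hypothetical 2x2 covariance
draws = rng.multivariate_normal(mean, cov, size=5000)

vals = [pi_deriv(ga) * (ell_prime(prox(zeta, 1, gl), 1)
                        - ell_prime(prox(zeta, 0, gl), 0))
        for ga, gl in draws]
beta_hat = float(np.mean(vals))
```

Since $|\pi'| \le 1/4$ and the difference of loss derivatives is bounded by 2 under this loss, the Monte Carlo average is bounded by $1/2$ in absolute value.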
## Construction of debiasing adjustments {#sec:pop-cor-with-adjust} We construct $\widehat{\mathsf{dbAdj}}_{01}$, $\widehat{\mathsf{dbAdj}}_{02}$, and $\widehat{\mathsf{dbAdj}}_1$ using $\widehat{\boldsymbol{\mu}}{}_{\mathsf{x},\mathsf{cfd}} = \frac{1}{n_1}\sum_{i=1}^n \ensuremath{a}_i \boldsymbol{x}_i$, $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}$, and $\widehat{\mathsf{c}}{}_{\boldsymbol{\Sigma}}$ as plug-ins in [\[eq:dbAdj\]](#eq:dbAdj){reference-type="ref" reference="eq:dbAdj"}. Note that the estimates $\widehat{\boldsymbol{\mu}}{}_{\mathsf{x},\mathsf{cfd}}$ and $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}$ are not $\ell_2$-consistent for $\boldsymbol{\mu}_\mathsf{x}$ and $\boldsymbol{\theta}_\mathsf{a}$. Moreover, because they are fit using the same data, their errors are correlated with each other and with $\boldsymbol{X}^\top \big( \boldsymbol{a}\odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0}\boldsymbol{1}- \boldsymbol{X}\widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big)$. Thus, the plug-in estimates have an order-one bias, which must itself be estimated and removed. We refer to such a correction as a "correlation adjustment," in keeping with the terminology of Celentano and Montanari [@celentano2021cad]. Because the correlations of $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}$ with $\widehat{\boldsymbol{\mu}}{}_{\mathsf{x},\mathsf{cfd}}$ and with $\boldsymbol{X}^\top \big(\boldsymbol{a}\odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0}\boldsymbol{1}- \boldsymbol{X}\widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big)$ depend on the way $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}$ is constructed, the correlation adjustment depends on whether the moment method or the M-estimation approach [\[eq:prop-db-option\]](#eq:prop-db-option){reference-type="eqref" reference="eq:prop-db-option"} is used to construct $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}$.
In particular, we set $$\label{eq:hat-dbAjd} \begin{gathered} \widehat{\mathsf{dbAdj}}_{01} \ensuremath{: =} \frac{\big\langle \widehat{\boldsymbol{\mu}}{}_{\mathsf{x},\mathsf{cfd}} , \boldsymbol{\Sigma}^{-1}\boldsymbol{X}^\top\big(\boldsymbol{a}\odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0}\boldsymbol{1}- \boldsymbol{X}\widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big)\big\rangle}{n\widehat{\zeta}{}_\mathsf{y}^\theta}, \\ \widehat{\mathsf{dbAdj}}_{02} \ensuremath{: =} - \widehat{\mathsf{c}}{}_{\boldsymbol{\Sigma}} \Big( \langle\widehat{\boldsymbol{\mu}}{}_{\mathsf{x},\mathsf{cfd}},\boldsymbol{\Sigma}^{-1/2}\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}\rangle - \frac{\widehat{s}_{\mathsf{x}\circ}p}{n} \Big) \Big( \frac{\langle\boldsymbol{\Sigma}^{-1/2}\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}, \boldsymbol{X}^\top\big(\boldsymbol{a}\odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0}\boldsymbol{1}- \boldsymbol{X}\widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big)\rangle}{n\widehat{\zeta}{}_\mathsf{y}^\theta} - \frac{\widehat{s}_{\mathsf{y}\circ}(p - n\widehat{\zeta}{}_\mathsf{y}^\eta\widehat{\zeta}{}_\mathsf{y}^\theta)}{n} \Big), \\ \widehat{\mathsf{dbAdj}}_1 \ensuremath{: =} \widehat{\mathsf{c}}{}_{\boldsymbol{\Sigma}} \Big( \frac{\langle\boldsymbol{\Sigma}^{-1/2}\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}, \boldsymbol{X}^\top\big(\boldsymbol{a}\odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0}\boldsymbol{1}- \boldsymbol{X}\widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big)\rangle}{n\widehat{\zeta}{}_\mathsf{y}^\theta} - \frac{\widehat{s}_{\mathsf{y}\circ}(p - n\widehat{\zeta}{}_\mathsf{y}^\eta\widehat{\zeta}{}_\mathsf{y}^\theta)}{n} \Big), \end{gathered}$$ where $\widehat{s}_{\mathsf{y}\circ}$ is as in and $\widehat{s}_{\mathsf{x}\circ} = \frac{1}{\ensuremath{n}} \langle \widehat{\boldsymbol{i}}{}_{\mathsf{x},\mathsf{cfd}}, \widehat{\boldsymbol{i}}{}_\circ\rangle$ where $\widehat{\boldsymbol{i}}{}_{\mathsf{x},\mathsf{cfd}} 
= \boldsymbol{a}/\widehat{\pi}{}$ and $\widehat{\pi}{},\widehat{\boldsymbol{i}}{}_\circ$ are as in . Note that $\widehat{\mathsf{dbAdj}}_{01}$ does not include a correlation adjustment term. This is because, as our proof shows, $\widehat{\boldsymbol{\mu}}{}_{\mathsf{x},\mathsf{cfd}}$ is approximately uncorrelated with $\boldsymbol{\Sigma}^{-1}\boldsymbol{X}^\top\big(\boldsymbol{a}\odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0}\boldsymbol{1}- \boldsymbol{X}\widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big)$ despite being fit on the same data. This is a consequence of the fact that we fit the outcome model with an offset. The following auxiliary result certifies consistency of these estimates: **Proposition 1**. *The estimates $\widehat{\mathsf{dbAdj}}_{01}$, $\widehat{\mathsf{dbAdj}}_{02}$, $\widehat{\mathsf{dbAdj}}_1$, and $\widehat{\beta}{}$ constructed in [\[eq:pen-logit-shrinage-est,eq:hat-dbAjd\]](#eq:pen-logit-shrinage-est,eq:hat-dbAjd){reference-type="ref" reference="eq:pen-logit-shrinage-est,eq:hat-dbAjd"} are all consistent in probability.* We prove  in . # Simulations {#sec:simulations} In , we display the performance of outcome regression [\[eq:db-G-computation\]](#eq:db-G-computation){reference-type="eqref" reference="eq:db-G-computation"} as a function of sample size for several choices of plug-in. We consider 10 values of $n$ ranging from 100 to 1000, equidistant on a log scale, and set $p = 1.25 n$ (rounded to the closest integer) at each value. We set $\theta_{\mathsf{y},0} = \theta_{\mathsf{a},0} = 0$, $\boldsymbol{\theta}_\mathsf{y}= \boldsymbol{\theta}_\mathsf{a}= \boldsymbol{e}_1$, $\sigma = .2$, $\pi(\eta) = \tfrac{1}{10} + \tfrac{9}{10}\operatorname{logit}^{-1}(\eta)$, $\boldsymbol{\mu}_\mathsf{x}= {\boldsymbol 0}$, and $\boldsymbol{\Sigma}= {\mathbf I}_p$. At each value of $n$, we generate 1000 replicates of the data and compute each of 5 different plug-ins.
[Ridge]{.sans-serif} uses the solution to [\[eq:outcome-fit\]](#eq:outcome-fit){reference-type="ref" reference="eq:outcome-fit"} as a plug-in, with $w_i = 1$ and $\Omega_\mathsf{y}(\boldsymbol{v}) = \|\boldsymbol{v}\|^2/2$. [Debiased ridge (naive)]{.sans-serif} uses the debiased ridge estimate with bias estimates as given by the relation [\[eq:db-naive\]](#eq:db-naive){reference-type="eqref" reference="eq:db-naive"} and $w_i = 1$ in equation [\[eq:outcome-fit\]](#eq:outcome-fit){reference-type="eqref" reference="eq:outcome-fit"}. [Debiased ridge (IPW)]{.sans-serif} uses the debiased ridge estimate with bias estimates as given by [\[eq:db-naive\]](#eq:db-naive){reference-type="eqref" reference="eq:db-naive"} and weights $w_i = 1/\pi(\boldsymbol{x}_i)$ in equation [\[eq:outcome-fit\]](#eq:outcome-fit){reference-type="eqref" reference="eq:outcome-fit"}. [Oracle ASCW]{.sans-serif} uses the oracle ASCW debiased ridge estimates of . [Empirical SCA]{.sans-serif} uses the empirical SCA debiased ridge estimates of . We use the debiased estimate of the propensity model parameter given by the moment method in [\[eq:prop-db-option\]](#eq:prop-db-option){reference-type="eqref" reference="eq:prop-db-option"}; that is, $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}= (\widehat{\boldsymbol{\mu}}{}_{\mathsf{x},\mathsf{cfd}}-\widehat{\boldsymbol{\mu}}{}_\mathsf{x})/\widehat{\beta}{}$, with the corresponding definitions of $\widehat{s}_\mathsf{y}$, $\widehat{s}_\mathsf{a}$, $\widehat{s}_{\mathsf{y}\mathsf{a}}$, and $\widehat{s}_{\mathsf{x}\mathsf{a}}$. In the left plot of , we show the mean value of each outcome regression estimate across 1000 replicates with $95\%$ confidence bars based on normal approximation for the empirical mean. In the right plot of , we show the empirical variance of each outcome regression estimate across 1000 replicates with $95\%$ confidence bars based on normal approximation for the empirical variance.
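A minimal sketch of this data-generating process and the plain [Ridge]{.sans-serif} plug-in is below; the debiasing adjustments themselves are omitted, and only the raw ingredients of the moment-method propensity direction are shown.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
p = round(1.25 * n)

# Simulation setting from the text: theta_{y,0} = theta_{a,0} = 0,
# theta_y = theta_a = e_1, sigma = 0.2, mu_x = 0, Sigma = I_p.
theta = np.zeros(p)
theta[0] = 1.0
sigma = 0.2

def prop(eta):
    # pi(eta) = 1/10 + 9/10 * logistic(eta)
    return 0.1 + 0.9 / (1.0 + np.exp(-eta))

X = rng.standard_normal((n, p))
a = rng.random(n) < prop(X @ theta)              # observation indicators
y = X @ theta + sigma * rng.standard_normal(n)   # outcomes, observed iff a_i = 1

# Ridge plug-in (w_i = 1, Omega_y(v) = ||v||^2 / 2, lambda = 1),
# fit on the observed rows only.
lam = 1.0
Xo, yo = X[a], y[a]
theta_ridge = np.linalg.solve(Xo.T @ Xo + lam * np.eye(p), Xo.T @ yo)

# G-computation average of the fitted function over all n covariates.
mu_ridge = float(np.mean(X @ theta_ridge))

# Ingredients of the moment-method debiased propensity direction,
# theta_a_d = (mu_x_cfd_hat - mu_x_hat) / beta_hat (beta_hat omitted here).
mu_x_hat = X.mean(axis=0)
mu_x_cfd_hat = X[a].mean(axis=0)
```

Because $\pi(\eta) \geq 1/10$, every replicate observes a nontrivial fraction of outcomes, so the ridge system on the observed rows is always well posed after regularization.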
Consistent with our theory, we see that [Ridge]{.sans-serif}, [Debiased ridge (naive)]{.sans-serif}, and [Debiased ridge (IPW)]{.sans-serif} have non-negligible bias that does not appear to decay as $n$ grows. On the other hand, both [Oracle ASCW]{.sans-serif} and [Empirical SCA]{.sans-serif} have no apparent bias across the entire range of sample sizes. We see that the variance of each estimate decays with $n$, apparently at a $1/n$ rate. We emphasize that oracle ASCW depends on oracle knowledge of the propensity model, whereas empirical SCA relies on an inconsistent estimate of the propensity model. An interesting feature of the variance plot on the right is that empirical SCA has smaller variance than oracle ASCW. We note that a similar phenomenon has been observed in the context of inverse propensity weighted estimates [@hirano2003]. In , we consider a fixed sample size ($n = 1000$) and several values of the regularization parameter. In , we plot the prediction error on a new sample as a function of the regularization parameter for the ridge estimator with $w_i = 1$ ([Ridge]{.sans-serif}) and $w_i = 1/\pi(\boldsymbol{x}_i)$ ([Ridge (IPW)]{.sans-serif}). (We do not perform the debiasing correction.) The purpose of this plot is to demonstrate that the range of $\lambda$ we consider corresponds to the range in which the prediction error is optimized. It is also the range within which the cross-validated choice of $\lambda$ is expected to lie. In the bottom two plots, we see that across this range of $\lambda$, [Ridge]{.sans-serif}, [Debiased ridge (naive)]{.sans-serif}, and [Debiased ridge (IPW)]{.sans-serif} are strongly biased. On the other hand, [Oracle ASCW]{.sans-serif} and [Empirical SCA]{.sans-serif} have no apparent bias. Across this range of regularization parameters, [Empirical SCA]{.sans-serif} has lower variance than [Oracle ASCW]{.sans-serif}. ![Comparison of debiasing methods.
We set $\theta_{\mathsf{y},0} = \theta_{\mathsf{a},0} = 0$, $\boldsymbol{\theta}_\mathsf{y}= \boldsymbol{\theta}_\mathsf{a}= \boldsymbol{e}_1$, $\sigma = \frac{1}{5}$, $\pi(\eta) = \tfrac{1}{10} + \tfrac{9}{10}\operatorname{logit}^{-1}(\eta)$, $\boldsymbol{\mu}_\mathsf{x}= {\boldsymbol 0}$, and $\boldsymbol{\Sigma}= {\mathbf I}_p$. The outcome model is fit by ridge regression with regularization parameter $\lambda = 1$, and the propensity model by the moment method. In this case, the population mean outcome is $\mu_\mathsf{y}\ensuremath{: =}\mathbb{E}[y] = 0$. For each value of $n$, $p$ is taken as the closest integer to $1.25n$. The mean and variance of these estimates are computed across 1000 replicates, with 95% confidence intervals shown. See the text for further details.](fig/debiasing_MEAN.pdf "fig:"){#fig:debiasing width="0.45\\linewidth"} ![Comparison of debiasing methods (variance panel); see the preceding caption for the setting.](fig/debiasing_VAR.pdf "fig:"){#fig:debiasing width="0.45\\linewidth"}\ ![Comparison of debiasing methods (legend).](fig/debiasing_LEGEND.pdf "fig:"){#fig:debiasing width="0.675\\linewidth"} ![Prediction error of ridge regression with $w_i = 1$ ([Ridge]{.sans-serif}) and with $w_i = 1/\pi(\boldsymbol{x}_i)$ ([Ridge (IPW)]{.sans-serif}) across a range of regularization parameters $\lambda$. $n = 1000$ and we simulate from exactly the same setting as in , except that the outcome model is fit by ridge regression for a range of regularization parameters as displayed in the figure. For each replicate, the prediction error is $\| \widehat{\boldsymbol{\theta}}{}_\mathsf{y}- \boldsymbol{\theta}_\mathsf{y} \|^2$ because $\boldsymbol{x}_i \sim \mathsf{N}({\boldsymbol 0},{\mathbf I}_p)$.](fig/pred_err_lam_dependence_ERR.pdf "fig:"){#fig:PE-w-lam width="0.5\\linewidth"}\ ![Prediction error of ridge regression (legend).](fig/pred_err_lam_dependence_LEGEND.pdf "fig:"){#fig:PE-w-lam width="0.25\\linewidth"} ![Comparison of debiasing methods at $n = 1000$ across a range of regularization parameter $\lambda$. We simulate from exactly the same setting as in , except that the outcome model is fit by ridge regression for a range of regularization parameters as displayed in the figure. For each replicate, the prediction error is $\| \widehat{\boldsymbol{\theta}}{}_\mathsf{y}- \boldsymbol{\theta}_\mathsf{y} \|^2$ because $\boldsymbol{x}_i \sim \mathsf{N}({\boldsymbol 0},{\mathbf I}_p)$.](fig/debiasing_lam_dependence_MEAN.pdf "fig:"){#fig:debiasing-w-lam width="0.45\\linewidth"} ![Comparison of debiasing methods at $n = 1000$ (variance panel); see the preceding caption.](fig/debiasing_lam_dependence_VAR.pdf "fig:"){#fig:debiasing-w-lam width="0.45\\linewidth"}\ ![Comparison of debiasing methods at $n = 1000$ across a range of regularization parameter $\lambda$.
](fig/debiasing_lam_dependence_LEGEND.pdf "fig:"){#fig:debiasing-w-lam width="0.6\\linewidth"} # Proofs {#SecProofs} The proofs of , as well as that of , are all based on an exact asymptotic characterization of the joint behavior of the estimates $(\widehat{\theta}{}_{\mathsf{y},0}, \widehat{\boldsymbol{\theta}}{}_\mathsf{y})$ and $(\widehat{\theta}{}_{\mathsf{a},0}, \widehat{\boldsymbol{\theta}}{}_\mathsf{a})$, along with the mean estimates $\widehat{\boldsymbol{\mu}}{}_\mathsf{x}= \frac{1}{\ensuremath{n}} \sum_{i=1}^n \boldsymbol{x}_i$ and $\widehat{\boldsymbol{\mu}}{}_{\mathsf{x},\mathsf{cfd}} = \frac{1}{n_1} \sum_{i=1}^n \ensuremath{a}_i \boldsymbol{x}_i$. We state the exact asymptotic characterization here, and then provide a high-level summary of its utility in proving these three results. We defer many technical aspects of the argument to the supplementary material. ## Exact asymptotics for missing data models {#sec:exact-asymptotics-body} The exact asymptotic characterization involves a comparison of the estimates $\widehat{\boldsymbol{\theta}}{}_\mathsf{y}$ and $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}$ to analogous quantities in what we call the *fixed-design model*. We first introduce some notation and define some important quantities. Then we describe the fixed-design models. The parameters of these models must be chosen to solve a certain system of equations called *the fixed point equations*, which we present next. Finally, we present the exact asymptotic characterization. ### The random-design model We refer to the model [\[eq:model\]](#eq:model){reference-type="eqref" reference="eq:model"} as the *random-design model*.
The goal of exact asymptotic theory is to describe the joint distribution of the following random-design quantities: [\[eq:rnd-des-objects\]]{#eq:rnd-des-objects label="eq:rnd-des-objects"} $$\begin{aligned} \label{eq:rnd-des-lin-pred} \mbox{\emph{Estimates:}} \qquad & \widehat{\boldsymbol{\theta}}{}_\mathsf{a}\ensuremath{: =} \text{see~\cref{eq:propensity-fit}}, \qquad \widehat{\boldsymbol{\theta}}{}_\mathsf{y}\ensuremath{: =} \text{see~\cref{eq:outcome-fit}}. \\ % \mbox{\emph{True linear predictors:}} \qquad & \boldsymbol{\eta}_\mathsf{a}\ensuremath{: =} \boldsymbol{1}\theta_{\mathsf{a},0} + \boldsymbol{X}\boldsymbol{\theta}_\mathsf{a}, \qquad \boldsymbol{\eta}_\mathsf{y}\ensuremath{: =} \boldsymbol{1}\theta_{\mathsf{y},0} + \boldsymbol{X}\boldsymbol{\theta}_\mathsf{y} \\ % \mbox{\emph{Empirical covariate means:}} \qquad & \widehat{\boldsymbol{\mu}}{}_{\mathsf{x},\mathsf{cfd}} \ensuremath{: =}\frac{1}{n_1} \sum_{i=1}^n \ensuremath{a}_i\boldsymbol{x}_i, \qquad \widehat{\boldsymbol{\mu}}{}_\mathsf{x}= \frac{1}{\ensuremath{n}} \sum_{i=1}^n \boldsymbol{x}_i, \\ % \mbox{\emph{Estimated linear predictors:}} \qquad & \widehat{\boldsymbol{\eta}}_\mathsf{a} \ensuremath{: =}\boldsymbol{1}\widehat{\theta}{}_{\mathsf{a},0} + \boldsymbol{X}\widehat{\boldsymbol{\theta}}{}_\mathsf{a}, \qquad \widehat{\boldsymbol{\eta}}_\mathsf{y}\ensuremath{: =}\boldsymbol{1}\widehat{\theta}{}_{\mathsf{y},0} + \boldsymbol{X}\widehat{\boldsymbol{\theta}}{}_\mathsf{y}\\ % \mbox{\emph{Empirical scores:}} \qquad & \widehat{\boldsymbol{\psi}}{}_\mathsf{a}\ensuremath{: =} \nabla_{\boldsymbol{\eta}} \ell_\mathsf{a}\big(\widehat{\boldsymbol{\eta}}_\mathsf{a};\boldsymbol{a}\big), \qquad \widehat{\boldsymbol{\psi}}{}_\mathsf{y}\ensuremath{: =}\underbrace{\nabla_{\boldsymbol{\eta}} \ell_\mathsf{y}\big(\widehat{\boldsymbol{\eta}}_\mathsf{y};\boldsymbol{y}\odot \boldsymbol{a},\boldsymbol{a},\boldsymbol{\eta}_\mathsf{a}\big)}_{ - \boldsymbol{a}\odot \boldsymbol{w}\odot (\boldsymbol{y}-
\widehat{\boldsymbol{\eta}}_\mathsf{y})},\end{aligned}$$ where $n_1 \ensuremath{: =}\sum_{i=1}^n a_i$. The characterization of the distribution of these quantities also involves the *empirical influence functions*, whose fixed-design analogues we define below.

### The fixed-design models

The exact asymptotic characterization states that, in a certain sense, the estimates [\[eq:rnd-des-objects\]](#eq:rnd-des-objects){reference-type="eqref" reference="eq:rnd-des-objects"} in the random-design model [\[eq:model\]](#eq:model){reference-type="eqref" reference="eq:model"} behave like analogous estimates in two related statistical models. These models involve linear observations of unknown parameters with a deterministic design matrix, which is why we call them the *fixed-design models*. The two fixed-design models are defined on separate probability spaces, which we now define.

#### Fixed-design model for parameter estimation:

This model contains Gaussian observations of the feature means and outcome linear effects $$\label{eq:fixed-design-param-outcomes} \begin{gathered} \boldsymbol{y}_{\mathsf{x}}^f = \boldsymbol{\mu}_{\mathsf{x}} + \boldsymbol{g}_{\mathsf{x}}^f, \qquad \boldsymbol{y}_{\mathsf{x},\mathsf{cfd}}^f = \boldsymbol{\mu}_{\mathsf{x},\mathsf{cfd}} + \boldsymbol{g}_{\mathsf{x},\mathsf{cfd}}^f, \\ % \boldsymbol{y}_\mathsf{a}^f = \boldsymbol{\Sigma}^{1/2} \big(\beta_{\mathsf{a}\mathsf{a}} \boldsymbol{\theta}_{\mathsf{a}}\big) + \boldsymbol{g}_{\mathsf{a}}^f, \quad \mbox{and} \quad \boldsymbol{y}_\mathsf{y}^f = \boldsymbol{\Sigma}^{1/2} \big( \beta_{\mathsf{y}\mathsf{a}}\boldsymbol{\theta}_{\mathsf{a}} + \boldsymbol{\theta}_{\mathsf{y}}\big) + \boldsymbol{g}_{\mathsf{y}}^f, \end{gathered}$$ where $(g_{\mathsf{x},i}^f, g_{\mathsf{x},\mathsf{cfd},i}^f, g_{\mathsf{a},i}^f,g_{\mathsf{y},i}^f)\stackrel{\mathrm{iid}}\sim \mathsf{N}({\boldsymbol 0},\boldsymbol{S}/n)$, and $\boldsymbol{S}\in \mathbb{S}_{\geq0}^4$, $\beta_{\mathsf{a}\mathsf{a}}$, and $\beta_{\mathsf{y}\mathsf{a}}$ are parameters to be determined below.
In this model, we consider the estimates $$\label{eq:param-fit-f} \begin{gathered} \widehat{\boldsymbol{\theta}}{}_{\mathsf{a}}^f = \mathop{\mathrm{arg\,min}}_{\boldsymbol{v}} \Big\{ \frac{\zeta_{\mathsf{a}}^\theta}2 \big\| \boldsymbol{y}_{\mathsf{a}}^f - \boldsymbol{\Sigma}^{1/2} \boldsymbol{v}\big\|^2 + \Omega_{\mathsf{a}}(\boldsymbol{v}) \Big\}, \quad \mbox{and} \quad \widehat{\boldsymbol{\theta}}{}_{\mathsf{y}}^f = \mathop{\mathrm{arg\,min}}_{\boldsymbol{v}} \Big\{ \frac{\zeta_{\mathsf{y}}^\theta }2 \big\| \boldsymbol{y}_{\mathsf{y}}^f - \boldsymbol{\Sigma}^{1/2} \boldsymbol{v}\big\|^2 + \Omega_{\mathsf{y}}(\boldsymbol{v}) \Big\}, \end{gathered}$$ where $\zeta_{\mathsf{a}}^\theta$, $\zeta_{\mathsf{y}}^\theta$ are parameters to be determined below.

#### Fixed-design model for linear-predictor estimation:

This model contains linear predictors $(\boldsymbol{\eta}_{\mathsf{a}}^f, \boldsymbol{\eta}_{\mathsf{y}}^f)$ with the same distribution as the linear predictors in the random design model [\[eq:rnd-des-lin-pred\]](#eq:rnd-des-lin-pred){reference-type="eqref" reference="eq:rnd-des-lin-pred"}: $(\eta_{\mathsf{a},i}^f, \eta_{\mathsf{y},i}^f) \mathrel{\stackrel{\mathrm{iid}}{\sim}}\mathsf{N}\big((\mu_\mathsf{a}, \mu_\mathsf{y})^\top, \langle\!\langle \boldsymbol{\Theta}\rangle\!\rangle_{\boldsymbol{\Sigma}}\big)$, where $\boldsymbol{\Theta}= (\boldsymbol{\theta}_{\mathsf{a}},\boldsymbol{\theta}_{\mathsf{y}})$, $\mu_\mathsf{a}= \theta_{\mathsf{a},0} + \langle \boldsymbol{\mu}_{\mathsf{x}},\boldsymbol{\theta}_\mathsf{a}\rangle$, and $\mu_\mathsf{y}= \theta_{\mathsf{y},0} + \langle \boldsymbol{\mu}_{\mathsf{x}},\boldsymbol{\theta}_\mathsf{y}\rangle$.
It also contains missingness indicator and outcome variables with the same distribution as in the random design model: $$\label{eq:fixed-design-outcomes} \begin{gathered} \boldsymbol{a}^f \text{ where } \mathbb{P}\big(\ensuremath{a}_i^f = 1 \mid \eta_{\mathsf{a},i}^f\big) = \pi(\eta_{\mathsf{a},i}^f) = \pi_i^f \text{ independently for each $i$}, \\ \boldsymbol{y}^f = \boldsymbol{\eta}_{\mathsf{y}}^f + \boldsymbol{\varepsilon}_\mathsf{y}^f, \qquad \boldsymbol{w}^f = w(\boldsymbol{\eta}_{\mathsf{a}}^f), \qquad \boldsymbol{\varepsilon}_\mathsf{y}^f \, {\buildrel d \over =} \,\boldsymbol{\varepsilon}_\mathsf{y} \text{ independent of everything else.} \end{gathered}$$ Further, it contains observations $\widehat{\boldsymbol{\eta}}_\mathsf{a}^{f,\mathsf{loo}},\widehat{\boldsymbol{\eta}}_\mathsf{y}^{f,\mathsf{loo}}$ correlated with the linear predictors $(\boldsymbol{\eta}_{\mathsf{a}}^f,\boldsymbol{\eta}_{\mathsf{y}}^f)$ via $$\label{eq:fixed-design-linear-predictor-dist} \begin{pmatrix} \eta_{\mathsf{a},i}^f \\[3pt] \eta_{\mathsf{y},i}^f \\[3pt] \widehat{\eta}{}_{\mathsf{a},i}^{f,\mathsf{loo}} \\[3pt] \widehat{\eta}{}_{\mathsf{y},i}^{f,\mathsf{loo}} \end{pmatrix} \stackrel{\mathrm{iid}}\sim \mathsf{N} \left( \begin{pmatrix} \mu_{\mathsf{a}}\\[3pt] \mu_{\mathsf{y}}\\[3pt] \widehat{\mu}{}_{\mathsf{a}}^f\\[3pt] \widehat{\mu}{}_{\mathsf{y}}^f \end{pmatrix} ,\; \begin{pmatrix} \langle\!\langle\boldsymbol{\Theta}\rangle\!\rangle_{\boldsymbol{\Sigma}} & \langle\!\langle\boldsymbol{\Theta}, \widehat{\boldsymbol{\Theta}}^f \rangle\!\rangle_{\boldsymbol{\Sigma},L^2} \\[3pt] \langle\!\langle\widehat{\boldsymbol{\Theta}}^f , \boldsymbol{\Theta}\rangle\!\rangle_{\boldsymbol{\Sigma},L^2} & \langle\!\langle\widehat{\boldsymbol{\Theta}}^f \rangle\!\rangle_{\boldsymbol{\Sigma},L^2}\\[3pt] \end{pmatrix} \right),$$ and $\widehat{\boldsymbol{\Theta}}^f = (\widehat{\boldsymbol{\theta}}{}_{\mathsf{a}}^f,\widehat{\boldsymbol{\theta}}{}_{\mathsf{y}}^f)$, where 
$\widehat{\mu}{}_{\mathsf{a}}^f,\widehat{\mu}{}_{\mathsf{y}}^f$ are deterministic parameters to be determined below. The covariance matrix depends on expectations in the fixed-design model for parameter estimation, and so is a function of the parameters $\boldsymbol{S}$, $\beta_{\mathsf{a}\mathsf{a}}$ and $\beta_{\mathsf{y}\mathsf{a}}$, in addition to $\widehat{\mu}{}_{\mathsf{a}}^f$, $\widehat{\mu}{}_{\mathsf{y}}^f$. The *estimated linear predictors* are $$\label{eq:lin-predict-f} \begin{gathered} \widehat{\boldsymbol{\eta}}_{\mathsf{a}}^f = \mathop{\mathrm{arg\,min}}_{\boldsymbol{u}} \Big\{ \frac{1}2 \big\| \widehat{\boldsymbol{\eta}}_{\mathsf{a}}^{f,\mathsf{loo}} - \boldsymbol{u} \big\|^2 + \zeta_{\mathsf{a}}^\eta \sum_{i=1}^n \ell_{\mathsf{a}}(u_i;\ensuremath{a}_i^f) \Big\}, \\ \widehat{\boldsymbol{\eta}}_{\mathsf{y}}^f = \mathop{\mathrm{arg\,min}}_{\boldsymbol{u}} \Big\{ \frac{1}2 \big\| \widehat{\boldsymbol{\eta}}_{\mathsf{y}}^{f,\mathsf{loo}} - \boldsymbol{u} \big\|^2 + \frac{\zeta_{\mathsf{y}}^\eta}2 \sum_{i=1}^n \ensuremath{a}_i^f w_i^f ( y_i^f - u_i )^2 \Big\}, \end{gathered}$$ for parameters $\zeta_{\mathsf{a}}^\eta,\zeta_{\mathsf{y}}^\eta$ to be determined below.
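The outcome-side problem in the preceding display is separable across coordinates, so each coordinate of $\widehat{\boldsymbol{\eta}}_{\mathsf{y}}^f$ has a one-line closed form. The following sketch checks that closed form against the first-order condition; all inputs are synthetic stand-ins for $\widehat{\boldsymbol{\eta}}_\mathsf{y}^{f,\mathsf{loo}}$, $\boldsymbol{a}^f$, $\boldsymbol{w}^f$, $\boldsymbol{y}^f$, and $\zeta_\mathsf{y}^\eta$, not quantities computed from the paper's models.

```python
import numpy as np

# Synthetic stand-ins for the fixed-design quantities (illustrative only).
rng = np.random.default_rng(0)
n = 8
eta_loo = rng.normal(size=n)       # stand-in for eta_hat_y^{f,loo}
a = rng.integers(0, 2, size=n)     # stand-in for the indicators a^f
w = rng.uniform(0.5, 2.0, size=n)  # stand-in for the weights w^f
y = rng.normal(size=n)             # stand-in for the outcomes y^f
zeta = 0.7                         # stand-in for zeta_y^eta

# Coordinate-wise minimizer of 0.5*(u - eta_loo)^2 + 0.5*zeta*a*w*(y - u)^2:
eta_hat = (eta_loo + zeta * a * w * y) / (1.0 + zeta * a * w)

# First-order condition: (u - eta_loo) - zeta*a*w*(y - u) = 0 at the minimizer.
residual = (eta_hat - eta_loo) - zeta * a * w * (y - eta_hat)
assert np.allclose(residual, 0.0)
```

The propensity side has no closed form in general, since it depends on the loss $\ell_\mathsf{a}$, but it is likewise a separable proximal problem.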
The *empirical scores* are $$\label{eq:fixed-design-score} \widehat{\boldsymbol{\psi}}{}_{\mathsf{a}}^f = \nabla_{\boldsymbol{\eta}} \ell_{\mathsf{a}} \big(\widehat{\boldsymbol{\eta}}_{\mathsf{a}}^f;\boldsymbol{a}^f\big), \qquad \widehat{\boldsymbol{\psi}}{}_{\mathsf{y}}^f = \nabla_{\boldsymbol{\eta}} \ell_{\mathsf{y}} \big(\widehat{\boldsymbol{\eta}}_{\mathsf{y}}^f;\boldsymbol{y}^f,\boldsymbol{a}^f,\boldsymbol{\eta}_{\mathsf{a}}^f\big) = - \boldsymbol{a}^f \odot \boldsymbol{w}^f \odot (\boldsymbol{y}^f - \widehat{\boldsymbol{\eta}}_\mathsf{y}^f).$$ The *empirical influence functions* are $$\label{eq:empirical-influence-function} \begin{gathered} \widehat{\boldsymbol{i}}{}_{\mathsf{x}}^f = \boldsymbol{1}, \qquad \widehat{\boldsymbol{i}}{}_{\mathsf{x},\mathsf{cfd}}^f = \frac{1}{{\overline{\pi}}}\boldsymbol{a}^f, \qquad \widehat{\boldsymbol{i}}{}_\mathsf{a}^f = -\frac{\widehat{\boldsymbol{\psi}}{}_\mathsf{a}^f}{\zeta_\mathsf{a}^\theta}, \qquad \widehat{\boldsymbol{i}}{}_\mathsf{y}^f = -\frac{\widehat{\boldsymbol{\psi}}{}_\mathsf{y}^f}{\zeta_\mathsf{y}^\theta}, \end{gathered}$$ where ${\overline{\pi}}= \mathbb{E}[n_1]/n = \mathbb{E}\big[\pi(\eta_\mathsf{a}^f)\big]$. We denote by ${\widehat{\mathsf{IF}}}^f \in {\mathbb R}^{n \times 4}$ the matrix with columns $\widehat{\boldsymbol{i}}{}_{\mathsf{x}}^f$, $\widehat{\boldsymbol{i}}{}_{\mathsf{x},\mathsf{cfd}}^f$, $\widehat{\boldsymbol{i}}{}_{\mathsf{a}}^f$ and $\widehat{\boldsymbol{i}}{}_{\mathsf{y}}^f$.
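To make the parameter-estimation fixed-design model concrete, the sketch below specializes to $\boldsymbol{\Sigma}= {\mathbf I}_p$ and a ridge penalty $\Omega_\mathsf{y}(\boldsymbol{v}) = \tfrac{\lambda}{2}\|\boldsymbol{v}\|^2$, for which the outcome-side estimate has a closed form. The values of $\lambda$, $\zeta_\mathsf{y}^\theta$, $\beta_{\mathsf{y}\mathsf{a}}$, and the parameter vectors are illustrative choices, not solutions of the fixed point equations presented next.

```python
import numpy as np

# Illustrative instance of the fixed-design parameter-estimation model with
# Sigma = I_p and a ridge penalty; none of these values solve (FD-fixpt).
rng = np.random.default_rng(1)
p, n = 50, 200
theta_y = rng.normal(size=p) / np.sqrt(p)          # stand-in true parameter
theta_a = rng.normal(size=p) / np.sqrt(p)
beta_ya = 0.3                                      # illustrative coupling
g_y = rng.normal(scale=1.0 / np.sqrt(n), size=p)   # Gaussian noise g_y^f

# Observation y_y^f = Sigma^{1/2}(beta_ya * theta_a + theta_y) + g_y^f:
y_obs = beta_ya * theta_a + theta_y + g_y

lam, zeta = 0.5, 0.8
# Ridge estimate: argmin_v 0.5*zeta*||y_obs - v||^2 + 0.5*lam*||v||^2
theta_hat = zeta * y_obs / (zeta + lam)

# Stationarity check: zeta*(theta_hat - y_obs) + lam*theta_hat = 0.
assert np.allclose(zeta * (theta_hat - y_obs) + lam * theta_hat, 0.0)
```

The ridge estimate is shrunk toward zero by the factor $\zeta_\mathsf{y}^\theta/(\zeta_\mathsf{y}^\theta+\lambda)$, the bias that the debiasing argument later removes.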
### The fixed point equations

By construction, the two fixed-design models are determined by the collection of parameters $$\begin{aligned} \big \{ \boldsymbol{S}, \; \beta_{\mathsf{a}\mathsf{a}}, \; \beta_{\mathsf{y}\mathsf{a}}, \; \zeta_{\mathsf{a}}^\theta, \; \zeta_{\mathsf{a}}^\eta, \; \zeta_{\mathsf{y}}^\theta, \; \zeta_{\mathsf{y}}^\eta, \; \widehat{\mu}{}_\mathsf{a}^f, \; \widehat{\mu}{}_\mathsf{y}^f \big \}.\end{aligned}$$ The fixed-design models characterize the behavior of the random-design model when these parameters are the unique solution to *the fixed point equations*: $$\label{eq:regr-fixed-pt} \tag{FD-fixpt} \begin{gathered} \langle \boldsymbol{1}, \widehat{\boldsymbol{i}}{}_{\mathsf{a}}^f \rangle_{L_2} = \langle \boldsymbol{1}, \widehat{\boldsymbol{i}}{}_{\mathsf{y}}^f \rangle_{L_2} = 0, \\ \beta_{\mathsf{a}\mathsf{a}} = \frac{1}{n} \mathbb{E}\Big[ \mathrm{Tr}\Big( \frac{\textup{d}\widehat{\boldsymbol{i}}{}_{\mathsf{a}}^f}{\textup{d}\boldsymbol{\eta}_{\mathsf{a}}^f} \Big) \Big], \quad \beta_{\mathsf{y}\mathsf{a}} = \frac{1}{n} \mathbb{E}\Big[ \mathrm{Tr}\Big( \frac{\textup{d}\widehat{\boldsymbol{i}}{}_{\mathsf{y}}^f}{\textup{d}\boldsymbol{\eta}_{\mathsf{a}}^f} \Big) \Big], \\ \boldsymbol{S}= \frac{1}{n}\langle\!\langle{\widehat{\mathsf{IF}}}^f \rangle\!\rangle_{L_2}.
\\ \zeta_{\mathsf{a}}^\theta = \frac{1}{n} \mathbb{E}\Big[ \mathrm{Tr}\Big( \frac{\textup{d} \widehat{\boldsymbol{\psi}}{}_{\mathsf{a}}^f}{\textup{d}\widehat{\boldsymbol{\eta}}_{\mathsf{a}}^{f,\mathsf{loo}}} \Big) \Big], \qquad \zeta_{\mathsf{y}}^\theta = \frac{1}{n} \mathbb{E}\Big[ \mathrm{Tr}\Big( \frac{\textup{d}\widehat{\boldsymbol{\psi}}{}_{\mathsf{y}}^f}{\textup{d}\widehat{\boldsymbol{\eta}}_{\mathsf{y}}^{f,\mathsf{loo}}} \Big) \Big], \\ \zeta_{\mathsf{a}}^\eta\zeta_{\mathsf{a}}^\theta = \frac{1}{n} \mathbb{E}\Big[ \mathrm{Tr}\Big( \boldsymbol{\Sigma}^{1/2} \frac{\textup{d}\widehat{\boldsymbol{\theta}}{}_{\mathsf{a}}^f }{\textup{d} \boldsymbol{y}_{\mathsf{a}}^f} \Big) \Big], \qquad \zeta_{\mathsf{y}}^\eta\zeta_{\mathsf{y}}^\theta = \frac{1}{n} \mathbb{E}\Big[ \mathrm{Tr}\Big( \boldsymbol{\Sigma}^{1/2} \frac{\textup{d}\widehat{\boldsymbol{\theta}}{}_{\mathsf{y}}^f }{\textup{d} \boldsymbol{y}_{\mathsf{y}}^f} \Big) \Big], \\ \text{and indices 3, 4 are innovative with respect to both $\boldsymbol{S}$ and the covariance in~\cref{eq:fixed-design-linear-predictor-dist}.} \end{gathered}$$ The right-hand sides of these equations involve expectations taken with respect to the distributions of the fixed-design models determined by the parameters $\big \{ \boldsymbol{S}, \; \beta_{\mathsf{a}\mathsf{a}}, \; \beta_{\mathsf{y}\mathsf{a}}, \; \zeta_{\mathsf{a}}^\theta, \; \zeta_{\mathsf{a}}^\eta, \; \zeta_{\mathsf{y}}^\theta, \; \zeta_{\mathsf{y}}^\eta, \; \widehat{\mu}{}_\mathsf{a}^f, \; \widehat{\mu}{}_\mathsf{y}^f \big \}$. Thus, they are functions of these parameters. In general, these derivatives need not exist in the classical sense, and so must be interpreted carefully. (See  for the correct interpretation). There are 18 degrees of freedom in choosing the fixed point parameters, and 18 independent scalar equations in [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="ref" reference="eq:regr-fixed-pt"}, accounting for symmetry of $\boldsymbol{S}$.
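Numerically, one would typically solve a system of this kind by damped fixed-point iteration, with the right-hand-side expectations evaluated by Monte Carlo or quadrature. The toy sketch below illustrates only the iteration scheme; the map `F` is a stand-in contraction, not the actual right-hand side of the fixed point equations.

```python
import numpy as np

def F(x):
    # Toy stand-in update map: smooth, contractive, two-dimensional. In the
    # actual system each output coordinate would be an expectation over the
    # fixed-design models determined by the current parameters.
    return np.array([0.5 * np.tanh(x[1]) + 0.2,
                     0.5 * np.tanh(x[0]) - 0.1])

x = np.zeros(2)      # initial parameter guess
damping = 0.5        # damping factor in (0, 1]
for _ in range(200):
    x = (1 - damping) * x + damping * F(x)

# At convergence, x solves the fixed-point condition x = F(x).
assert np.allclose(x, F(x), atol=1e-8)
```

Damping trades convergence speed for stability, which matters when the update map is only evaluated up to Monte Carlo error.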
In , we state a lemma asserting that the fixed point equations have a unique solution. It also provides bounds on this solution that are useful in our proofs, and may be of independent interest because they control the operating characteristics of the estimators $\widehat{\boldsymbol{\theta}}{}_\mathsf{y}$ and $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}$.

### Characterization

We say a function $\phi: ({\mathbb R}^N)^k \rightarrow {\mathbb R}$ is *order-$2$ pseudo-Lipschitz* with constant $L$ if $$\begin{gathered} |\phi({\boldsymbol 0},\ldots,{\boldsymbol 0})| \leq L, \\ \Big| \phi(\boldsymbol{x}_1,\ldots,\boldsymbol{x}_k) - \phi(\boldsymbol{x}_1',\ldots,\boldsymbol{x}_k') \Big| \leq L \Big( 1 + \sum_{\ell=1}^k \| \boldsymbol{x}_\ell \| + \sum_{\ell=1}^k \| \boldsymbol{x}_\ell' \| \Big) \sum_{\ell=1}^k \| \boldsymbol{x}_\ell - \boldsymbol{x}_\ell' \|. \end{gathered}$$ If the constant $L$ is not stated, then it should be taken as $L = 1$. The following result guarantees that order-$2$ pseudo-Lipschitz functions of the quantities [\[eq:rnd-des-objects\]](#eq:rnd-des-objects){reference-type="eqref" reference="eq:rnd-des-objects"} concentrate on the expectation of the analogous quantities in the fixed-design models determined by the solution to the fixed point equations [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"}: **Theorem 3**.
*Under Assumption A1 and as $(n,p)\rightarrow \infty$, we have the following:* (a) *The estimated population means concentrate: $$\widehat{\theta}{}_{\mathsf{a},0} + \langle \boldsymbol{\mu}_{\mathsf{x}} , \widehat{\boldsymbol{\theta}}{}_\mathsf{a}\rangle - \widehat{\mu}{}_\mathsf{a}^f \buildrel \mbox{\tiny{prob.}} \over \longrightarrow 0, \qquad \widehat{\theta}{}_{\mathsf{y},0} + \langle \boldsymbol{\mu}_{\mathsf{x}} , \widehat{\boldsymbol{\theta}}{}_\mathsf{y}\rangle - \widehat{\mu}{}_\mathsf{y}^f \buildrel \mbox{\tiny{prob.}} \over \longrightarrow 0.$$* (b) *The effective-regularization terms concentrate: $$\widehat{\zeta}{}_{\mathsf{a}}^\eta - \zeta_{\mathsf{a}}^\eta \buildrel \mbox{\tiny{prob.}} \over \longrightarrow 0, \qquad \widehat{\zeta}{}_{\mathsf{a}}^\theta - \zeta_{\mathsf{a}}^\theta \buildrel \mbox{\tiny{prob.}} \over \longrightarrow 0, \qquad \widehat{\zeta}{}_{\mathsf{y}}^\eta - \zeta_{\mathsf{y}}^\eta \buildrel \mbox{\tiny{prob.}} \over \longrightarrow 0, \qquad \widehat{\zeta}{}_{\mathsf{y}}^\theta - \zeta_{\mathsf{y}}^\theta \buildrel \mbox{\tiny{prob.}} \over \longrightarrow 0.$$* (c) *For any sequence of order-2 pseudo-Lipschitz functions $\phi:({\mathbb R}^p)^6\rightarrow {\mathbb R}$, $$\phi\big( \widehat{\boldsymbol{\theta}}{}_{\mathsf{a}}, \widehat{\boldsymbol{\theta}}{}_{\mathsf{y}}, \widehat{\boldsymbol{\mu}}{}_{\mathsf{x}}, \widehat{\boldsymbol{\mu}}{}_{\mathsf{x},\mathsf{cfd}}, \widehat{\boldsymbol{\theta}}{}_{\mathsf{a}}^{\mathsf{loo}}, \widehat{\boldsymbol{\theta}}{}_{\mathsf{y}}^{\mathsf{loo}} \big) - \mathbb{E}\Big[ \phi\Big( \widehat{\boldsymbol{\theta}}{}_{\mathsf{a}}^f, \widehat{\boldsymbol{\theta}}{}_{\mathsf{y}}^f, \boldsymbol{\Sigma}^{-1/2} \boldsymbol{y}_{\mathsf{x}}^f, \boldsymbol{\Sigma}^{-1/2} \boldsymbol{y}_{\mathsf{x},\mathsf{cfd}}^f, \boldsymbol{\Sigma}^{-1/2} \boldsymbol{y}_{\mathsf{a}}^f, \boldsymbol{\Sigma}^{-1/2} \boldsymbol{y}_{\mathsf{y}}^f \Big) \Big] \buildrel \mbox{\tiny{prob.}} \over \longrightarrow 0.$$* (d)
*For any sequence of order-2 pseudo-Lipschitz functions $\phi:{\mathbb R}^4 \rightarrow {\mathbb R}$, $$\frac{1}{n} \sum_{i=1}^n \phi\big( \eta_{\mathsf{a},i}, \eta_{\mathsf{y},i}, \widehat{\psi}{}_{\mathsf{a},i}, \widehat{\psi}{}_{\mathsf{y},i} \big) - \mathbb{E}\Big[ \phi\Big( \eta_{\mathsf{a},i}^f, \eta_{\mathsf{y},i}^f, \widehat{\psi}{}_{\mathsf{a},i}^f, \widehat{\psi}{}_{\mathsf{y},i}^f \Big) \Big] \buildrel \mbox{\tiny{prob.}} \over \longrightarrow 0.$$* This characterization is similar to statements made elsewhere in the exact asymptotics literature [@bayatiMontanari2012; @karoui2013asymptotic; @thrampoulidisAbbasiHassibi2018; @miolane2021; @celentano2021cad], but differs in several important respects. First, rather than characterize the behavior of a single regression estimate $\widehat{\boldsymbol{\theta}}{}_\mathsf{y}$ or $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}$ marginally, it describes their distribution jointly with each other and with the estimates of the marginal and confounded means of the covariates $\widehat{\boldsymbol{\mu}}{}_{\mathsf{x}}$ and $\widehat{\boldsymbol{\mu}}{}_{\mathsf{x},\mathsf{cfd}}$. This is essential to the analysis of the empirical SCA estimate because non-negligible biases can result from correlations between these quantities. Indeed, estimating the $\mathsf{dbAdj}$ terms requires correcting for these biases. A similar joint characterization was used in Celentano and Montanari [@celentano2021cad], but was restricted to two linear models rather than a linear and binary outcome model. Substantial technical novelty is therefore required to establish the existence and uniqueness of, and bounds on, the solutions to the fixed-point equations [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"}, as we see in Appendix [16](#sec:fixed-point-parameter-analysis){reference-type="ref" reference="sec:fixed-point-parameter-analysis"}.
Further, in the context of binary outcome models, as far as we are aware, exact asymptotics has been limited either to separable penalties with iid designs [@salehiAbbasiHassibi2019] or to unpenalized regression with correlated designs [@surCandes2019; @zhaoSurCandes2022]. We handle non-separable penalties with arbitrary covariance (although, we should note, Assumption A1 does not hold for the logistic loss, which is beyond our current scope). Further, we make no assumption that the coordinates of $\boldsymbol{\theta}_\mathsf{y}$ or $\boldsymbol{\theta}_\mathsf{a}$ are distributed iid from some prior, which is common in the exact asymptotic literature [@salehiAbbasiHassibi2019]. Instead, as we will see in Appendix [9](#sec:exact-asymptotics){reference-type="ref" reference="sec:exact-asymptotics"}, our results take the form of non-asymptotic concentration bounds which hold uniformly over the class of parameter vectors defined by Assumption A1. This implies that the limits in Theorems [Theorem 1](#thm:pop-mean){reference-type="ref" reference="thm:pop-mean"} and [Theorem 2](#thm:db-normality){reference-type="ref" reference="thm:db-normality"} hold uniformly over this model class. The proof of  is based on the Convex Gaussian Min-Max Theorem in a conditional form. This conditional form was also used in [@celentano2021cad], but our application of it here must address several technical difficulties due to the novelties listed above. In , we provide a slightly more quantitative bound for the concentration in .

## Debiasing theorems from exact asymptotics

The exact asymptotic characterization is the basis for the proofs of , along with . It allows us to reduce the consistency and coverage guarantees in the random design model to corresponding statements in the fixed-design model.
In the fixed-design model, these statements either hold by definition or as algebraic relationships guaranteed by the fixed point equations [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"}. Here we provide a very high-level and heuristic summary of some of these arguments; see  for the full technical details. Throughout, we assume that $\boldsymbol{\Sigma}= {\mathbf I}_p$. (As we describe in , this condition can be imposed without loss of generality.) By [\[eq:orc-ASCW-matrix-form\]](#eq:orc-ASCW-matrix-form){reference-type="ref" reference="eq:orc-ASCW-matrix-form"}, the oracle ASCW estimate can be written as $$\begin{aligned} \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d}& = \widehat{\boldsymbol{\theta}}{}_\mathsf{y}+ \frac{\boldsymbol{\Sigma}_{\mathsf{cfd}}^{-1}\boldsymbol{X}^\top \big(\boldsymbol{a}\odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0} \boldsymbol{1}- \boldsymbol{X}\widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big)}{n \widehat{\zeta}{}_\mathsf{y}^\theta}\end{aligned}$$ Moreover, by the KKT conditions for [\[eq:outcome-fit\]](#eq:outcome-fit){reference-type="ref" reference="eq:outcome-fit"}, we have the relation $$\begin{aligned} \frac{1}{\ensuremath{n}} \boldsymbol{X}^\top \big(\boldsymbol{a}\odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0} \boldsymbol{1}- \boldsymbol{X}\widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big) & = \nabla \Omega_\mathsf{y}(\widehat{\boldsymbol{\theta}}{}_\mathsf{y}),\end{aligned}$$ and by the Sherman-Morrison-Woodbury identity, we have $\boldsymbol{\Sigma}_{\mathsf{cfd}}^{-1} = {\mathbf I}_p - \mathsf{c}_{\boldsymbol{\Sigma}} \boldsymbol{\theta}_\mathsf{a}\boldsymbol{\theta}_\mathsf{a}^\top$. 
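The Sherman-Morrison-Woodbury step can be checked numerically. Assuming, for illustration only, that $\boldsymbol{\Sigma}_{\mathsf{cfd}}$ is the rank-one update ${\mathbf I}_p + \gamma\, \boldsymbol{\theta}_\mathsf{a}\boldsymbol{\theta}_\mathsf{a}^\top$ for some scalar $\gamma$ (the scalar $\gamma$ and the vector below are stand-ins, not the paper's quantities), the stated inverse holds with $\mathsf{c}_{\boldsymbol{\Sigma}} = \gamma/(1 + \gamma\|\boldsymbol{\theta}_\mathsf{a}\|^2)$:

```python
import numpy as np

# Numerical check of the rank-one Sherman-Morrison identity:
# (I + gamma*u*u^T)^{-1} = I - c*u*u^T with c = gamma / (1 + gamma*||u||^2).
rng = np.random.default_rng(2)
p, gamma = 20, 0.7
theta_a = rng.normal(size=p)  # stand-in for the propensity parameter

Sigma_cfd = np.eye(p) + gamma * np.outer(theta_a, theta_a)
c_Sigma = gamma / (1.0 + gamma * (theta_a @ theta_a))
inv_claimed = np.eye(p) - c_Sigma * np.outer(theta_a, theta_a)

# The claimed inverse multiplies back to the identity.
assert np.allclose(Sigma_cfd @ inv_claimed, np.eye(p))
```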
Thus, as indicated by , the object in the fixed-design model corresponding to $\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d}$ is given by $$\begin{aligned} \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f + \frac{1}{ \zeta_\mathsf{y}^\theta} \boldsymbol{\Sigma}_{\mathsf{cfd}}^{-1} \nabla \Omega_\mathsf{y}(\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f).\end{aligned}$$ Similarly, by the KKT conditions for [\[eq:param-fit-f\]](#eq:param-fit-f){reference-type="ref" reference="eq:param-fit-f"} in the fixed-design model, we have $\frac{1}{\zeta_\mathsf{y}^\theta} \nabla \Omega_\mathsf{y}(\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f) = \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f$, and hence the equivalence $$\begin{aligned} \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f + \frac{1}{\zeta_\mathsf{y}^\theta} \boldsymbol{\Sigma}_{\mathsf{cfd}}^{-1} \nabla \Omega_\mathsf{y}(\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f) & = \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f + ({\mathbf I}_p - \mathsf{c}_{\boldsymbol{\Sigma}}\boldsymbol{\theta}_\mathsf{a}\boldsymbol{\theta}_\mathsf{a}^\top)(\boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f) = \beta_{\mathsf{y}\mathsf{a}}\boldsymbol{\theta}_\mathsf{a}+ \boldsymbol{\theta}_\mathsf{y}+ \boldsymbol{g}_\mathsf{y}^f - \mathsf{c}_{\boldsymbol{\Sigma}}\boldsymbol{\theta}_\mathsf{a}\langle\boldsymbol{\theta}_\mathsf{a}, \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle.\end{aligned}$$ By Gaussian concentration of measure, the random variable $\langle\boldsymbol{\theta}_\mathsf{a}, \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle$ concentrates on its expectation $\langle\boldsymbol{\theta}_\mathsf{a}, \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle_{L_2}$. Moreover, one can show (cf.
) that the fixed point equations [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"} imply the relation $$\begin{aligned} \beta_{\mathsf{y}\mathsf{a}} = \mathsf{c}_{\boldsymbol{\Sigma}} \langle\boldsymbol{\theta}_\mathsf{a},\boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f\rangle_{L_2}.\end{aligned}$$ Thus, the quantity $\boldsymbol{\theta}_\mathsf{y}+ \boldsymbol{g}_\mathsf{y}^f$ in the fixed-design model corresponds to $\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d}$. This is an unbiased Gaussian estimate of $\boldsymbol{\theta}_\mathsf{y}$ with coordinate-wise variance $\| \widehat{\boldsymbol{i}}{}_\mathsf{y}^f \|_{L_2}^2/n^2$. The exact asymptotic characterization allows us to argue that the empirical standard error $\widehat{\tau}{}^2/n = \| \widehat{\boldsymbol{i}}{}_\mathsf{y}\|^2/n^2$ concentrates on $\| \widehat{\boldsymbol{i}}{}_\mathsf{y}^f \|_{L_2}^2/n^2$. Thus, the coverage guarantee of  holds for the corresponding object in the fixed-design models. Using Lipschitz approximations of the indicator functions, the same can be established for the oracle ASCW debiased estimator.\ We next observe that the population mean estimate can be written as $$\begin{aligned} \widehat{\mu}{}_{\mathsf{y}}^\textup{d}= \widehat{\theta}{}_{\mathsf{y},0}^\textup{d}+ \langle \widehat{\boldsymbol{\mu}}{}_\mathsf{x}, \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d}\rangle.\end{aligned}$$ Following the reasoning of the previous paragraph, the quantity $\langle \boldsymbol{\mu}_\mathsf{x}+ \boldsymbol{g}_\mathsf{x}^f , \boldsymbol{\theta}_\mathsf{y}+ \boldsymbol{g}_\mathsf{y}^f \rangle$ is the fixed-design object corresponding to $\langle \widehat{\boldsymbol{\mu}}{}_\mathsf{x}, \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d} \rangle$.
Using the fixed point equations [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"}, we can show that $\boldsymbol{g}_\mathsf{x}^f$ is uncorrelated with $\boldsymbol{g}_\mathsf{y}^f$, so that the fixed-design quantity $\langle \boldsymbol{\mu}_\mathsf{x}+ \boldsymbol{g}_\mathsf{x}^f , \boldsymbol{\theta}_\mathsf{y}+ \boldsymbol{g}_\mathsf{y}^f \rangle$ concentrates on $\langle \boldsymbol{\mu}_\mathsf{x}, \boldsymbol{\theta}_\mathsf{y}\rangle$. The exact asymptotic characterization then shows that the same holds for the random-design quantity $\langle \widehat{\boldsymbol{\mu}}{}_\mathsf{x}, \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d} \rangle$. We also argue that $\widehat{\theta}{}_{\mathsf{y},0}^f$ concentrates on $\theta_{\mathsf{y},0}$, so that $\widehat{\mu}{}_{\mathsf{y}}^\textup{d}$ concentrates on $\theta_{\mathsf{y},0} + \langle \boldsymbol{\mu}_\mathsf{x}, \boldsymbol{\theta}_\mathsf{y}\rangle = \mu_\mathsf{y}$. This argument leads to the guarantee in  for the oracle ASCW estimates. The analysis of the empirical SCA estimates and the $\mathsf{dbAdj}$-estimates given in [\[eq:hat-dbAjd\]](#eq:hat-dbAjd){reference-type="ref" reference="eq:hat-dbAjd"} is based on a similar strategy. Estimating the $\mathsf{dbAdj}$-quantities requires correlation adjustments; let us provide the basic intuition here. For example, in order to consistently estimate the term $\langle \boldsymbol{\mu}_{\mathsf{x},\mathsf{cfd}} , \boldsymbol{\theta}_\mathsf{a}\rangle$ in the second line of [\[eq:dbAdj\]](#eq:dbAdj){reference-type="ref" reference="eq:dbAdj"}, we might consider the plug-in estimate $\langle \widehat{\boldsymbol{\mu}}{}_{\mathsf{x},\mathsf{cfd}} , \widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}\rangle$.
When we use the moment-method to construct $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}$ in [\[eq:prop-db-option\]](#eq:prop-db-option){reference-type="ref" reference="eq:prop-db-option"}, this corresponds to the object $\langle \boldsymbol{\mu}_{\mathsf{x},\mathsf{cfd}} + \boldsymbol{g}_{\mathsf{x},\mathsf{cfd}}^f , \boldsymbol{\theta}_\mathsf{a}+ (\boldsymbol{g}_{\mathsf{x},\mathsf{cfd}}^f - \boldsymbol{g}_\mathsf{x}^f ) / \alpha_1 \rangle$ in the fixed-design models. This quantity is biased for $\langle \boldsymbol{\mu}_{\mathsf{x},\mathsf{cfd}} , \boldsymbol{\theta}_\mathsf{a}\rangle$ because $\boldsymbol{g}_{\mathsf{x},\mathsf{cfd}}^f$ and $\boldsymbol{g}_{\mathsf{x},\mathsf{cfd}}^f - \boldsymbol{g}_\mathsf{x}^f$ are correlated. The fixed point equations [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"} give us an estimate of this correlation in terms of the correlation of empirical influence functions. This allows us to estimate this bias consistently and remove it, as in [\[eq:dbAdj\]](#eq:dbAdj){reference-type="ref" reference="eq:dbAdj"}. We refer to this procedure as a correlation adjustment.\ Complete technical details are given in .

# Discussion {#SecDiscussion}

This paper studied consistent estimation of the population mean of an outcome missing at random in an "inconsistency regime;" that is, in a regime in which consistent estimation of both the outcome and the propensity model is impossible. We focused on a linear-GLM outcome-propensity specification. G-computation, AIPW, and IPW enjoy model-agnostic guarantees that rely on consistent estimation of either the outcome model or the propensity model, but they fail to be consistent for the population mean in our regime. When $n > p$ by a constant fraction, both the G-computation and AIPW estimators are $\sqrt{n}$-consistent and asymptotically normal provided OLS is used to estimate the outcome model [@yadlowsky2022; @sur2022].
The primary contribution of this paper is to develop a method---*empirical shifted-confounder augmentation (SCA)*---which is consistent for the outcome mean in the challenging regime in which $n < p$, so that least squares is no longer available. The method can be viewed as G-computation with a carefully constructed plug-in estimate for the outcome model. Our construction of the plug-in builds on previous methods for debiasing penalized regression estimators, but, unlike these previous methods, must deal with bias introduced not only by regularization but also by the missingness mechanism. In particular, missingness induces a distribution shift between the features conditional on missingness and unconditionally. This distribution shift biases parameter estimates. We show how, in the linear-GLM setting, an estimate of the propensity model parameter can be used to correct for this bias despite the estimate's inconsistency. We moreover show that "on average across coordinates" the debiased outcome model parameter is asymptotically normal for the true parameter at a $\sqrt{n}$-rate, and we provide an empirical standard error. In the course of studying our method, we develop exact asymptotic theory for the joint distribution of the outcome and propensity model estimates, which may be of independent interest. In the context of semiparametric estimation and inference more broadly, the empirical SCA method is a step towards designing methodology that eliminates or reduces bias in moderate signal-to-noise regimes in which nuisance parameters cannot be estimated well and in which classical approaches fail. There are several promising avenues for future work to address its current shortcomings and to determine whether these are fundamental. - We have not studied the rate of convergence of $\widehat{\mu}{}_\mathsf{y}^\textup{d}$, although Figure [12](#fig:debiasing){reference-type="ref" reference="fig:debiasing"} indicates it may converge at a $\sqrt{n}$-rate.
Moreover, Theorem [Theorem 2](#thm:db-normality){reference-type="ref" reference="thm:db-normality"} establishes normality of $\widehat{\boldsymbol{\theta}}{}_{\mathsf{y}}^\textup{d}$ "on average across coordinates," but does not provide a normality guarantee for any fixed coordinate $\theta_{\mathsf{y},j}$. Similar shortcomings have been present in earlier work on exact asymptotics. Bayati et al. [@bayatiErdogduMontanari] and Celentano and Montanari [@celentano2021cad] do not establish a rate for their estimates of the conditional variance or covariance in high-dimensional linear models. Likewise, Miolane and Montanari [@miolane2021] and Celentano et al. [@celentanoMontanariWei2020] do not establish coordinate-wise normality for the debiased Lasso. In both these cases, this is due to their reliance on approximate message passing or Convex Gaussian Min-Max Theorem based proof techniques. Recently, Tan et al. [@tan2022noise] and Bellec and Zhang [@bellec2019debiasingIntervals] established $\sqrt{n}$-consistent estimation for conditional covariances and coordinate-wise normality of debiased estimates in high-dimensional linear models. Their methods are based on Stein's formula and Gaussian Poincaré inequalities, and it is possible similar techniques could be applied fruitfully in the present context. - Our results require that the unconditional covariance of the confounders is known. It is likely straightforward to generalize our results to cases in which this covariance can be estimated consistently in operator norm. In the high-dimensional regime, operator-norm consistency would require either strong structural assumptions [@bickelLevina2008; @karoui2008; @caiZhangZhou2010] or access to a much larger unlabeled data set. Although this is a strong requirement, a large body of work assumes feature distributions are known, and in some applications this may be a reasonable assumption [@candesFanJansonLv2018]. 
In contrast, when $n > (1+\epsilon)p$ and under mild conditions, $\sqrt{n}$-consistency is possible without any knowledge of the feature covariance [@yadlowsky2022]. Thus, an important open research direction is to establish tight minimax lower bounds for estimation that are sensitive to varying levels of prior knowledge of the feature covariance $\boldsymbol{\Sigma}$, and to determine whether these exhibit a phase transition at $n = p$. It is interesting to note that a similar phenomenon occurs in the problem of conditional covariance estimation across two linear models. In this setting, consistent estimation is possible when $n > (1+\epsilon)p$ without knowledge of the feature covariance, but existing approaches require operator-norm consistent estimation of $\boldsymbol{\Sigma}$ when $n < p$ [@celentano2021cad]. - Our results require that the features be jointly Gaussian. There is a growing body of work studying universality for regression models under proportional asymptotics [@bayatiLelargeMontanari2015; @abbasiSalehiHassibi2019; @montanariSaeed2022; @huLu2023]. In particular, in the proportional regime, one can often generalize exact asymptotic results from Gaussian features to sub-Gaussian features or features of the form $\boldsymbol{x}= \boldsymbol{\Sigma}^{1/2} \boldsymbol{z}$ where $\boldsymbol{z}$ has independent sub-Gaussian entries. We thus expect that the Gaussianity assumption can be relaxed substantially. Rigorously investigating this possibility is left to future work. ## Acknowledgements {#acknowledgements .unnumbered} This work was partially supported by National Science Foundation grants NSF-CCF-1955450 and NSF-DMS-2311072, as well as DOD-ONR Office of Naval Research N00014-21-1-2842 to MJW. MC is supported by the Miller Institute for Basic Research in Science, University of California, Berkeley. 
# High probability approximations: conventions {#sec:high-probability} The concentration bounds that we establish in this paper hold with exponentially high probability, with constants depending on $\mathcal{P}_{\mathrm{model}}$. Let us introduce some shorthand notation for the type of concentration results that we provide. For two scalar random variables $A, B$, we use $A \ensuremath{\stackrel{\bullet}{=}}B$ to mean there exist constants $C,c,c',r > 0$ depending only on $\mathcal{P}_{\mathrm{model}}$ such that, for all $\epsilon < c'$, $\mathbb{P}(|A - B| \geq \epsilon) \leq C e^{-cn\epsilon^r}$. We use $A \lessdot B$ to mean there exist constants $C,c,c',r > 0$ depending only on $\mathcal{P}_{\mathrm{model}}$ such that, for all $\epsilon < c'$, $\mathbb{P}(A - B > \epsilon) \leq C e^{-cn\epsilon^r}$. We equivalently write $B \gtrdot A$. Moreover, for two random vectors $\boldsymbol{a},\boldsymbol{b}\in {\mathbb R}^N$, we use $\boldsymbol{a}\ensuremath{\stackrel{\bullet}{=}}\boldsymbol{b}$ to denote $\| \boldsymbol{a}- \boldsymbol{b}\| \ensuremath{\stackrel{\bullet}{=}}0$. Similarly, for two random matrices $\boldsymbol{A},\boldsymbol{B}\in {\mathbb R}^{N \times M}$, we use $\boldsymbol{A}\ensuremath{\stackrel{\bullet}{=}}\boldsymbol{B}$ to denote $\| \boldsymbol{A}- \boldsymbol{B}\|_{{\rm op}} \ensuremath{\stackrel{\bullet}{=}}0$. It is straightforward to check that these relations have the following properties: $A \ensuremath{\stackrel{\bullet}{=}}B$ if and only if $A \lessdot B$ and $B \lessdot A$; if $A \ensuremath{\stackrel{\bullet}{=}}B$ and $B \ensuremath{\stackrel{\bullet}{=}}C$, then $A \ensuremath{\stackrel{\bullet}{=}} C$; and if $A \lessdot B$ and either $B \lessdot C$ or $B \ensuremath{\stackrel{\bullet}{=}}C$, then $A \lessdot C$. We use these relationships freely and without comment throughout the paper. 
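To make the scalar relation $A \ensuremath{\stackrel{\bullet}{=}}B$ concrete, here is a small Monte Carlo sketch (ours, not part of the paper): take $A$ to be the mean of $n$ standard Gaussians and $B = 0$, for which the sub-Gaussian tail bound $\mathbb{P}(|A| \geq \epsilon) \leq 2e^{-n\epsilon^2/2}$ gives $A \ensuremath{\stackrel{\bullet}{=}}0$ with exponent $r = 2$.

```python
import numpy as np

rng = np.random.default_rng(0)

def exceedance_prob(n, eps, trials=2000):
    """Empirical P(|A - 0| >= eps) for A = mean of n standard Gaussians."""
    means = rng.standard_normal((trials, n)).mean(axis=1)
    return float(np.mean(np.abs(means) >= eps))

# At fixed eps, the tail probability should decay exponentially in n.
eps = 0.2
probs = [exceedance_prob(n, eps) for n in (25, 100, 400)]
assert probs[0] > probs[1] >= probs[2]
assert probs[2] < 0.01
```

At a fixed $\epsilon$, the empirical exceedance probability collapses rapidly as $n$ grows, which is precisely the content of the $\ensuremath{\stackrel{\bullet}{=}}$ notation.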
We also say an event occurs with "exponentially high probability" if there exist constants $C,c > 0$ depending on $\mathcal{P}_{\mathrm{model}}$ such that the event occurs with probability at least $1 - Ce^{-cn}$. Note that an event that occurs with exponentially high probability occurs with probability approaching 1 along any sequence of models and estimators each satisfying Assumption A1, as $n \rightarrow \infty$. Likewise, along any such sequence, vectors $\boldsymbol{a}_n$, $\boldsymbol{b}_n$ satisfying $\boldsymbol{a}_n \ensuremath{\stackrel{\bullet}{=}}\boldsymbol{b}_n$ have $\| \boldsymbol{a}_n - \boldsymbol{b}_n \| \buildrel \mbox{\tiny{prob.}} \over \longrightarrow 0$. Because Assumption A1 requires $C > n/p > c > 0$, these high-probability bounds are meaningful in the proportional-asymptotics regime. We typically use this notation and terminology to describe our concentration bounds, making explicit probability statements only when required for clarity or rigor. # Exact asymptotics for missing data models {#sec:exact-asymptotics} In this appendix, we provide additional details on the exact asymptotic characterization that we sketched out in . ## Definition of derivatives In all cases, the derivatives in the fixed point equations [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"} do not exist in the classical sense. Here we describe how they are to be interpreted. We begin by writing the scores explicitly as $$\label{eq:score-explicit} \begin{gathered} \widehat{\psi}{}_{\mathsf{a},i}^f = \ell_\mathsf{a}'\big( \mathrm{prox}\big[ \zeta_\mathsf{a}^\eta\ell_\mathsf{a}(\,\cdot\,;\ensuremath{a}_i^f)\big](\widehat{\eta}{}_{\mathsf{a},i}^{f,\mathsf{loo}}); \ensuremath{a}_i^f\big), \qquad \widehat{\psi}{}_{\mathsf{y},i}^f = -\frac{\zeta_\mathsf{y}^\eta w(\eta_{\mathsf{a},i}^f) \ensuremath{a}_i^f}{1 + \zeta_\mathsf{y}^\eta w(\eta_{\mathsf{a},i}^f)} (y_i^f - \widehat{\eta}{}_{\mathsf{y},i}^{f,\mathsf{loo}}). 
\end{gathered}$$ Here the reader should recall that for a lower semi-continuous, proper, convex function $f: {\mathbb R}^N \rightarrow {\mathbb R}\cup \{\infty\}$, the proximal operator is given by $$\label{eq:prox-def} \mathrm{prox}[f](\boldsymbol{y}) \ensuremath{: =}\mathop{\mathrm{arg\,min}}_{\boldsymbol{v}\in {\mathbb R}^N} \Big\{ \frac{1}2 \| \boldsymbol{y}- \boldsymbol{v}\|^2 + f(\boldsymbol{v}) \Big\}.$$ (See the sources [@bauschke2011convex; @parikhBoyd2014] for more background on proximal operators.) The derivatives of the score are interpreted as $$\label{eq:score-deriv-explicit} \begin{gathered} \frac{\textup{d}\widehat{\psi}{}_{\mathsf{a},i}^f}{\textup{d}\eta_{\mathsf{a},j}^f} = \delta_{ij} \pi'(\eta_{\mathsf{a},j}^f) \Big( \ell_\mathsf{a}'\big(\mathrm{prox}\big[\zeta_\mathsf{a}^\eta\ell_\mathsf{a}(\,\cdot\,;1)\big](\widehat{\eta}{}_{\mathsf{a},i}^{f,\mathsf{loo}});1\big) - \ell_\mathsf{a}'\big(\mathrm{prox}\big[\zeta_\mathsf{a}^\eta\ell_\mathsf{a}(\,\cdot\,;0)\big](\widehat{\eta}{}_{\mathsf{a},i}^{f,\mathsf{loo}});0\big) \Big), \\ \frac{\textup{d}\widehat{\psi}{}_{\mathsf{a},i}^f}{\textup{d}\widehat{\eta}{}_{\mathsf{a},j}^{f,\mathsf{loo}}} = \delta_{ij} \mathrm{prox}\big[\zeta_\mathsf{a}^\eta\ell_\mathsf{a}(\,\cdot\,;\ensuremath{a}_i^f)\big]'(\widehat{\eta}{}_{\mathsf{a},i}^{f,\mathsf{loo}}) = \delta_{ij} \frac{1}{\zeta_\mathsf{a}^\eta \ddot\ell_\mathsf{a}(\widehat{\eta}{}_{\mathsf{a},i}^f;\ensuremath{a}_i^f)+1}, \\ \frac{\textup{d}\widehat{\psi}{}_{\mathsf{y},i}^f}{\textup{d}\eta_{\mathsf{a},j}^f} = -\delta_{ij} \Big( \frac{\textup{d}}{\textup{d}\eta_{\mathsf{a},i}^f} \frac{\zeta_\mathsf{y}^\eta w(\eta_{\mathsf{a},i}^f)\pi(\eta_{\mathsf{a},i}^f)}{1 + \zeta_\mathsf{y}^\eta w(\eta_{\mathsf{a},i}^f)} \Big) (y_i^f - \widehat{\eta}{}_{\mathsf{y},i}^{f,\mathsf{loo}}), \qquad \frac{\textup{d}\widehat{\psi}{}_{\mathsf{y},i}^f}{\textup{d}\widehat{\eta}{}_{\mathsf{y},j}^{f,\mathsf{loo}}} = \ensuremath{a}_i^f \frac{\zeta_\mathsf{y}^\eta 
w(\eta_{\mathsf{a},i}^f)}{1 + \zeta_\mathsf{y}^\eta w(\eta_{\mathsf{a},i}^f)}, \end{gathered}$$ Since the functions $\Omega_\mathsf{a}$ and $\Omega_\mathsf{y}$ are twice differentiable, both of the functions $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^f$ and $\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f$ are (classically) differentiable in $\boldsymbol{y}_\mathsf{a}^f$ and $\boldsymbol{y}_\mathsf{y}^f$, with derivatives $$\label{eq:param-est-deriv-fixed-design} \boldsymbol{\Sigma}^{1/2} \frac{\textup{d}\widehat{\boldsymbol{\theta}}{}_{\mathsf{a}}^f }{\textup{d}\boldsymbol{y}_{\mathsf{a}}^f} = \zeta_\mathsf{a}^\theta\boldsymbol{\Sigma}^{1/2} \Big( \zeta_\mathsf{a}^\theta \boldsymbol{\Sigma}+ \nabla^2 \Omega_\mathsf{a}(\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^f) \Big)^{-1} \boldsymbol{\Sigma}^{1/2}, \quad \mbox{and} \quad \boldsymbol{\Sigma}^{1/2} \frac{\textup{d}\widehat{\boldsymbol{\theta}}{}_{\mathsf{y}}^f }{\textup{d}\boldsymbol{y}_{\mathsf{y}}^f} = \zeta_\mathsf{y}^\theta\boldsymbol{\Sigma}^{1/2} \Big( \zeta_\mathsf{y}^\theta \boldsymbol{\Sigma}+ \nabla^2 \Omega_\mathsf{y}(\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f) \Big)^{-1} \boldsymbol{\Sigma}^{1/2}.$$ ## Existence/uniqueness of fixed points The following result certifies uniqueness of the fixed points defined by the relation [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"}, along with some of their properties: **Lemma 2**. *Under Assumption A1, equations [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"} have a unique solution. 
Moreover, these solutions satisfy the following properties:* - ***Bounded standard errors.** $0 < c < S_{\ell\ell} < C$ for $1 \leq \ell \leq 4$.* - ***Bounded effective regularization.** $0 < c < \zeta_\mathsf{y}^\theta,\zeta_\mathsf{y}^\eta,\zeta_\mathsf{a}^\theta,\zeta_\mathsf{a}^\eta < C$.* - ***Bounded bias.** $|\beta_{\mathsf{a}\mathsf{a}}|,|\beta_{\mathsf{y}\mathsf{a}}| < C$, and $\beta_{\mathsf{a}\mathsf{a}} > c > 0$.* - ***Bounded estimates.** $\| \widehat{\boldsymbol{\theta}}{}_\mathsf{a}^f \|_{L_2}$, $\|\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \|_{L_2} < C$.* See  for the proof. ## Exact asymptotics Here we give a more quantitatively refined statement of , in particular using the high-probability approximation notions introduced in . **Theorem 4**. *Under Assumption A1, we have:* (a) *The estimated population means concentrate: $$\widehat{\theta}{}_{\mathsf{a},0} + \langle \boldsymbol{\mu}_{\mathsf{x}} , \widehat{\boldsymbol{\theta}}{}_\mathsf{a}\rangle \ensuremath{\stackrel{\bullet}{=}} \widehat{\mu}{}_\mathsf{a}^f, \qquad \widehat{\theta}{}_{\mathsf{y},0} + \langle \boldsymbol{\mu}_{\mathsf{x}} , \widehat{\boldsymbol{\theta}}{}_\mathsf{y}\rangle \ensuremath{\stackrel{\bullet}{=}}\widehat{\mu}{}_\mathsf{y}^f.$$* (b) *The effective-regularization terms concentrate: $$\widehat{\zeta}{}_{\mathsf{a}}^\eta \ensuremath{\stackrel{\bullet}{=}}\zeta_{\mathsf{a}}^\eta, \qquad \widehat{\zeta}{}_{\mathsf{a}}^\theta \ensuremath{\stackrel{\bullet}{=}}\zeta_{\mathsf{a}}^\theta, \qquad \widehat{\zeta}{}_{\mathsf{y}}^\eta \ensuremath{\stackrel{\bullet}{=}}\zeta_{\mathsf{y}}^\eta, \qquad \widehat{\zeta}{}_{\mathsf{y}}^\theta \ensuremath{\stackrel{\bullet}{=}}\zeta_{\mathsf{y}}^\theta.$$* (c) *For any order-2 pseudo-Lipschitz function $\phi: ({\mathbb R}^p)^6 \rightarrow {\mathbb R}$, $$\phi \big( \widehat{\boldsymbol{\theta}}{}_{\mathsf{a}}, \widehat{\boldsymbol{\theta}}{}_{\mathsf{y}}, \widehat{\boldsymbol{\mu}}{}_{\mathsf{x}}, 
\widehat{\boldsymbol{\mu}}{}_{\mathsf{x},\mathsf{cfd}}, \widehat{\boldsymbol{\theta}}{}_{\mathsf{a}}^{\mathsf{loo}}, \widehat{\boldsymbol{\theta}}{}_{\mathsf{y}}^{\mathsf{loo}} \big) \ensuremath{\stackrel{\bullet}{=}}\mathbb{E}\Big[ \phi\Big( \widehat{\boldsymbol{\theta}}{}_{\mathsf{a}}^f, \widehat{\boldsymbol{\theta}}{}_{\mathsf{y}}^f, \boldsymbol{\Sigma}^{-1/2} \boldsymbol{y}_{\mathsf{x}}^f, \boldsymbol{\Sigma}^{-1/2} \boldsymbol{y}_{\mathsf{x},\mathsf{cfd}}^f, \boldsymbol{\Sigma}^{-1/2} \boldsymbol{y}_{\mathsf{a}}^f, \boldsymbol{\Sigma}^{-1/2} \boldsymbol{y}_{\mathsf{y}}^f \Big) \Big].$$* (d) *For any order-2 pseudo-Lipschitz function $\phi:{\mathbb R}^4 \rightarrow {\mathbb R}$, $$\frac{1}{n} \sum_{i=1}^n \phi\big( \eta_{\mathsf{a},i}, \eta_{\mathsf{y},i}, \widehat{\psi}{}_{\mathsf{a},i}, \widehat{\psi}{}_{\mathsf{y},i} \big) \ensuremath{\stackrel{\bullet}{=}}\mathbb{E}\Big[ \phi\Big( \eta_{\mathsf{a},i}^f, \eta_{\mathsf{y},i}^f, \widehat{\psi}{}_{\mathsf{a},i}^f, \widehat{\psi}{}_{\mathsf{y},i}^f \Big) \Big].$$* See  for the proof. ## Reduction to independent covariates {#sec:independent-covariates} Without loss of generality, we can assume that $\boldsymbol{\Sigma}= {\mathbf I}_p$. 
Indeed, if one replaces $\boldsymbol{X}$ by $\boldsymbol{X}\boldsymbol{\Sigma}^{-1/2}$, $\boldsymbol{\theta}_\mathsf{a}$ with $\boldsymbol{\Sigma}^{1/2} \boldsymbol{\theta}_\mathsf{a}$, $\boldsymbol{\theta}_\mathsf{y}$ with $\boldsymbol{\Sigma}^{1/2} \boldsymbol{\theta}_\mathsf{y}$, $\boldsymbol{\mu}_{\mathsf{x}}$ with $\boldsymbol{\Sigma}^{-1/2} \boldsymbol{\mu}_{\mathsf{x}}$, $\boldsymbol{v} \mapsto \Omega_\mathsf{y}(\boldsymbol{v})$ with $\boldsymbol{v}\mapsto \Omega_\mathsf{y}(\boldsymbol{\Sigma}^{-1/2}\boldsymbol{v})$, and $\boldsymbol{v}\mapsto \Omega_\mathsf{a}(\boldsymbol{v})$ with $\boldsymbol{v}\mapsto \Omega_\mathsf{a}(\boldsymbol{\Sigma}^{-1/2}\boldsymbol{v})$, one can see that Assumption A1 holding before these replacements is equivalent to Assumption A1 holding after these replacements if $\boldsymbol{\Sigma}$ is replaced with ${\mathbf I}_p$. Moreover, the conclusion of , as well as all other results in the paper (, as well as ), is equivalent to the same results applied with $\boldsymbol{\Sigma}= {\mathbf I}_p$ to the model arrived at after these replacements. Thus, in the remainder of the Appendix, we assume that $\boldsymbol{\Sigma}= {\mathbf I}_p$. # The Cholesky decomposition {#sec:cholesky} The proof of  is greatly facilitated by the Cholesky decomposition. The main ideas are most easily understood if one assumes all matrices are invertible, and all Cholesky decompositions are defined in the standard way [@horn13 Corollary 7.2.9]. This section is devoted to the technical details required to relax this invertibility assumption. Accordingly, on a first reading, the reader may skip this section and operate under the simplifying assumption of invertibility. In order to define the Cholesky decomposition for possibly non-invertible matrices, we need to adopt certain conventions to resolve its non-uniqueness; we describe these conventions below. 
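Before turning to the decomposition itself, the change of variables used in the reduction above can be sanity-checked numerically. A minimal sketch (ours; the dimensions and the particular positive-definite $\boldsymbol{\Sigma}$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 6

# A generic positive-definite covariance and its symmetric square roots.
A = rng.standard_normal((p, p))
Sigma = A @ A.T + np.eye(p)
w, V = np.linalg.eigh(Sigma)
Sigma_half = V @ np.diag(np.sqrt(w)) @ V.T
Sigma_neg_half = V @ np.diag(1.0 / np.sqrt(w)) @ V.T

X = rng.standard_normal((10, p)) @ Sigma_half  # rows have covariance Sigma
theta = rng.standard_normal(p)
mu_x = rng.standard_normal(p)

# Replacements: X -> X Sigma^{-1/2}, theta -> Sigma^{1/2} theta,
# mu_x -> Sigma^{-1/2} mu_x.
X_new = X @ Sigma_neg_half
theta_new = Sigma_half @ theta
mu_new = Sigma_neg_half @ mu_x

# Linear predictors and the mean term <mu_x, theta> are unchanged,
# while the transformed features have identity covariance.
assert np.allclose(X_new @ theta_new, X @ theta)
assert np.allclose(mu_new @ theta_new, mu_x @ theta)
assert np.allclose(Sigma_neg_half @ Sigma @ Sigma_neg_half, np.eye(p))
```

Because the penalties are composed with $\boldsymbol{\Sigma}^{-1/2}$ under the same replacements, every term of the two optimization objectives is invariant, which is what justifies working with $\boldsymbol{\Sigma}= {\mathbf I}_p$.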
## Definition of the decomposition Consider $\boldsymbol{K}\in \mathbb{S}_{\geq0}^k$ and $(X_1, \ldots, X_k) \sim \mathsf{N}(0, \boldsymbol{K})$. For each index $\ell \in [k]$, we have the decomposition $$\label{eq:seq-regr} X_\ell = X_\ell^\| + \alpha_\ell X_\ell^\perp,$$ where $X_\ell^\|$ is a linear combination of $\{X_{\ell'}\}_{\ell' < \ell}$ and $\alpha_\ell X_\ell^\perp$ is independent of $\{X_{\ell'}\}_{\ell' < \ell}$. If $\mathsf{rank}(\boldsymbol{K}_\ell) = \mathsf{rank}(\boldsymbol{K}_{\ell-1}) + 1$, we call the index $\ell$ *innovative*. In this case, $\alpha_\ell X_\ell^\perp \neq 0$, and we take $\alpha_\ell > 0$ and $\| X_\ell^\perp \|_{L_2} = 1$. If $\mathsf{rank}(\boldsymbol{K}_\ell) = \mathsf{rank}(\boldsymbol{K}_{\ell-1})$, we call the index $\ell$ *predictable*. In this case, $\alpha_\ell X_\ell^\perp = 0$, and we take $\alpha_\ell = 0$ and $X_\ell^\perp = 0$. One of these two cases always occurs. With these conventions, the decomposition in the previous display is unique. This construction uniquely determines a matrix $\boldsymbol{L}\in {\mathbb R}^{k \times k}$ defined by $$L_{\ell \ell'} = \langle X_\ell , X_{\ell'}^\perp \rangle_{L_2} \quad \text{for $\ell,\ell' \in [k]$.}$$ We show that $\boldsymbol{K}= \boldsymbol{L}\boldsymbol{L}^\top$, and $\boldsymbol{L}$ is the unique such choice that is lower-triangular and "innovation-compatible" with $\boldsymbol{K}$: **Definition 1**. *A lower-triangular matrix $\boldsymbol{L}\in {\mathbb R}^{k \times k}$ is *innovation-compatible* with $\boldsymbol{K}$ if the $\ell^\text{th}$ column of $\boldsymbol{L}$ is ${\boldsymbol 0}$ for all indices $\ell$ predictable with respect to $\boldsymbol{K}$.* **Lemma 3** (Cholesky decomposition). *The matrix $\boldsymbol{L}$ is lower-triangular, has non-negative entries on its diagonal, is innovation-compatible with respect to $\boldsymbol{K}$, and satisfies $\boldsymbol{K}= \boldsymbol{L}\boldsymbol{L}^\top$. 
The $\ell^\text{th}$ entry on its diagonal is 0 if and only if $\ell$ is predictable with respect to $\boldsymbol{K}$. The sub-matrix $\boldsymbol{L}_\ell$ depends on $\boldsymbol{K}$ only via $\boldsymbol{K}_\ell$. Moreover, $$\label{eq:original-from-perp} X_\ell = \sum_{\ell'=1}^\ell L_{\ell \ell'} X_{\ell'}^\perp.$$* *Proof of .* Because the quantity $X_{\ell'}^\perp$ is independent of $X_\ell$ for $\ell' > \ell$, we have $L_{\ell \ell'} = \langle X_\ell , X_{\ell'}^\perp \rangle_{L_2} = 0$ for $\ell' > \ell$, so $\boldsymbol{L}$ is lower-triangular. Because $X_\ell^\perp$ is independent of $X_\ell^\|$, we have $L_{\ell \ell} = \langle X_\ell, X_\ell^{\perp}\rangle_{L_2} = \alpha_\ell \| X_\ell^\perp \|_{L_2}^2 \geq 0$, since we have chosen $\alpha_\ell \geq 0$. Thus, $\boldsymbol{L}$ has non-negative entries on its diagonal. If $\ell'$ is predictable, then $X_{\ell'}^\perp = 0$, so that $L_{\ell \ell'} = \langle X_\ell , X_{\ell'}^\perp \rangle_{L_2} = 0$. Thus, $\boldsymbol{L}$ is innovation-compatible with respect to $\boldsymbol{K}$. The quantity $\alpha_\ell \| X_\ell^\perp \|_{L_2}^2$ is 0 if and only if $\ell$ is predictable, whence the $\ell^\text{th}$ diagonal entry of $\boldsymbol{L}$ is 0 if and only if $\ell$ is predictable. The sub-matrix $\boldsymbol{L}_\ell$ depends on $\boldsymbol{K}$ only via $\boldsymbol{K}_\ell$ because its construction relies only on the random variables $(X_1, \ldots, X_\ell) \sim \mathsf{N}(0,\boldsymbol{K}_\ell)$. 
Using the fact that $X_\ell^\|$ can be written as a linear combination of $\{X_{\ell'}\}_{\ell' < \ell}$ and using $X_1 = \alpha_{11} X_1^\perp$ as a base case, by induction each $X_\ell$ can be written as a linear combination $$X_\ell = \sum_{\ell'=1}^\ell \alpha_{\ell \ell'} X_{\ell'}^\perp.$$ Taking the inner product with $X_{\ell''}^\perp$ on both sides and using that $\langle X_{\ell'}^\perp , X_{\ell''}^\perp \rangle_{L_2} = 0$ when $\ell' \neq \ell''$, we have $\alpha_{\ell \ell'} \| X_{\ell'}^\perp \|_{L_2}^2 = \langle X_{\ell},X_{\ell'}^\perp\rangle_{L_2} = L_{\ell \ell'}$. If $\ell'$ is innovative, this gives $\alpha_{\ell \ell'} = L_{\ell \ell'}$. If $\ell'$ is predictable, then $X_{\ell'}^\perp = 0$, so that we can set $\alpha_{\ell \ell'} = L_{\ell\ell'}$ without affecting the sum. Equation [\[eq:original-from-perp\]](#eq:original-from-perp){reference-type="eqref" reference="eq:original-from-perp"} follows. It implies $K_{\ell\ell'} = \langle X_\ell , X_{\ell'} \rangle_{L_2} = \sum_{j=1}^k L_{\ell j} L_{\ell' j} = (\boldsymbol{L}\boldsymbol{L}^\top)_{\ell \ell'}$, whence $\boldsymbol{K} = \boldsymbol{L}\boldsymbol{L}^\top$. ◻ We next define the Cholesky pseudo-inverse $\boldsymbol{L}^\ddagger$, which behaves, in certain ways, like the inverse of $\boldsymbol{L}$ even when $\boldsymbol{L}$ is singular. By equation [\[eq:seq-regr\]](#eq:seq-regr){reference-type="eqref" reference="eq:seq-regr"}, we have the inclusion $X_\ell^\perp \in \mathsf{span}\{X_{\ell'}\}_{\ell' \leq \ell}$. Because $X_{\ell'+1}$ is linearly independent of $\{ X_{\ell''} \}_{\ell'' \leq \ell'}$ if and only if $\ell'+1$ is innovative, $\{X_{\ell'}: \ell' \leq \ell,\, \text{$\ell'$ innovative}\}$ is a basis for $\mathsf{span}\{X_{\ell'}\}_{\ell' \leq \ell}$. 
We can thus uniquely write $X_\ell^\perp$ as a linear combination $$\label{eq:perp-from-innovatives} X_\ell^\perp = \sum_{\substack{\ell' \leq \ell \\ \text{$\ell'$ innovative}}} L_{\ell \ell'}^\ddagger X_{\ell'},$$ which we take to define $L_{\ell \ell'}^\ddagger$ for innovative $\ell' \leq \ell$. Define $L_{\ell \ell'}^\ddagger = 0$ for $\ell'$ predictable or $\ell' > \ell$. Then $\boldsymbol{L}^\ddagger$ is lower-triangular, innovation-compatible with $\boldsymbol{K}$, and $$\label{eq:perp-from-cholinv} X_\ell^\perp = \sum_{\ell' = 1}^k L_{\ell \ell'}^\ddagger X_{\ell'}.$$ The Cholesky pseudo-inverse has the following properties. **Lemma 4** (Cholesky pseudo-inverse). *The matrix $\boldsymbol{L}^\ddagger \in {\mathbb R}^{k \times k}$ is lower-triangular, has non-negative entries on its diagonal, and is innovation-compatible with $\boldsymbol{K}$. If $\ell$ is predictable, then the $\ell^{\text{th}}$ row of $\boldsymbol{L}^\ddagger$ is 0. The sub-matrix $\boldsymbol{L}_\ell^\ddagger$ depends on $\boldsymbol{K}$ only via $\boldsymbol{K}_\ell$.* *Proof of .* We have already shown that $\boldsymbol{L}^\ddagger$ is lower triangular and innovation-compatible with $\boldsymbol{K}$. Because $\langle X_\ell^\perp,X_{\ell'}\rangle_{L_2} = 0$ for $\ell' < \ell$ and $L_{\ell \ell'}^\ddagger = 0$ for $\ell' > \ell$, taking the inner product of equation [\[eq:perp-from-cholinv\]](#eq:perp-from-cholinv){reference-type="eqref" reference="eq:perp-from-cholinv"} with $X_\ell^\perp$ gives $\| X_\ell^\perp \|_{L_2}^2 = L_{\ell \ell}^\ddagger \langle X_\ell , X_\ell^\perp \rangle_{L_2} = L_{\ell \ell}^\ddagger L_{\ell \ell} \| X_\ell^\perp \|_{L_2}^2$. If $\ell$ is predictable, we have $L_{\ell\ell}^\ddagger = 0$, and if $\ell$ is innovative, we must have $L_{\ell\ell}^\ddagger > 0$ because $\| X_\ell^\perp \|_{L_2}^2 = 1$ and $L_{\ell\ell} \geq 0$. Thus, $\boldsymbol{L}^\ddagger$ has non-negative entries on its diagonal. 
If $\ell$ is predictable, then $X_\ell^\perp = 0$, whence equation [\[eq:perp-from-innovatives\]](#eq:perp-from-innovatives){reference-type="eqref" reference="eq:perp-from-innovatives"} is solved by $L_{\ell \ell'}^\ddagger = 0$ for all $\ell' \leq \ell$ and $\ell'$ innovative. Because the decomposition in equation [\[eq:perp-from-innovatives\]](#eq:perp-from-innovatives){reference-type="eqref" reference="eq:perp-from-innovatives"} is unique, the $\ell^\text{th}$ row of $\boldsymbol{L}^\ddagger$ is 0 if $\ell$ is predictable. The sub-matrix $\boldsymbol{L}_\ell^\ddagger$ depends on $\boldsymbol{K}$ only via $\boldsymbol{K}_\ell$ because its construction relies only on the random variables $(X_1,\ldots,X_\ell) \sim \mathsf{N}(0,\boldsymbol{K}_\ell)$. ◻ Define the pseudo-inverse $\boldsymbol{K}^\ddagger \ensuremath{: =}(\boldsymbol{L}^\ddagger)^\top \boldsymbol{L}^\ddagger$, as well as the $k \times k$ matrices $$\begin{gathered} {\mathbf I}_{\boldsymbol{K}} \boldsymbol{e}_\ell = {\mathbf I}_{\boldsymbol{K}}^\perp \boldsymbol{e}_\ell = {\boldsymbol 0}\quad \text{if $\ell$ is predictable}, \\ {\mathbf I}_{\boldsymbol{K}} \boldsymbol{x}= \boldsymbol{x}\quad \text{if $\boldsymbol{x}\in \mathsf{range}(\boldsymbol{K})$} \qquad\text{and}\qquad {\mathbf I}_{\boldsymbol{K}}^\perp \boldsymbol{e}_\ell = \boldsymbol{e}_\ell \quad \text{if $\ell$ is innovative}. \end{gathered}$$ The next lemma establishes the sense in which the Cholesky pseudo-inverse $\boldsymbol{L}^\ddagger$ behaves like an inverse of $\boldsymbol{L}$, and the pseudo-inverse $\boldsymbol{K}^\ddagger$ behaves like an inverse of $\boldsymbol{K}$. **Lemma 5** (Pseudo-inverse identities). 
*The Cholesky pseudo-inverse of $\boldsymbol{L}$ satisfies $$\boldsymbol{L}\boldsymbol{L}^{\ddagger} = {\mathbf I}_{\boldsymbol{K}}, \qquad \boldsymbol{L}^{\ddagger} \boldsymbol{L}= {\mathbf I}_{\boldsymbol{K}}^\perp, \qquad \boldsymbol{L}{\mathbf I}_{\boldsymbol{K}}^\perp = {\mathbf I}_{\boldsymbol{K}} \boldsymbol{L}= \boldsymbol{L}, \qquad \boldsymbol{L}^\ddagger {\mathbf I}_{\boldsymbol{K}} = {\mathbf I}_{\boldsymbol{K}}^\perp \boldsymbol{L}^\ddagger = \boldsymbol{L}^\ddagger.$$ The pseudo-inverse of $\boldsymbol{K}$ satisfies $$\label{eq:K-pseudo-inverse} \boldsymbol{K}\boldsymbol{K}^\ddagger = {\mathbf I}_{\boldsymbol{K}}, \qquad \boldsymbol{K}^\ddagger\boldsymbol{K}= {\mathbf I}_{\boldsymbol{K}}^\top.$$ The matrix ${\mathbf I}_{\boldsymbol{K}}^\perp$ is diagonal, so ${\mathbf I}_{\boldsymbol{K}}^\perp = ({\mathbf I}_{\boldsymbol{K}}^\perp)^\top$. The matrix ${\mathbf I}_{\boldsymbol{K}}$ is the unique matrix with the properties that ${\mathbf I}_{\boldsymbol{K}}^\top \boldsymbol{e}_\ell = \boldsymbol{e}_\ell$ if $\ell$ is innovative, and ${\mathbf I}_{\boldsymbol{K}}^\top \boldsymbol{x}= {\boldsymbol 0}$ if $\boldsymbol{x}\in \mathsf{range}(\boldsymbol{K})^\perp$.* *Proof of .* If $\ell$ is predictable, then the $\ell^{\text{th}}$ column of both $\boldsymbol{L}$ and $\boldsymbol{L}^\ddagger$ are 0 by innovation compatibility, whence $\boldsymbol{L}\boldsymbol{L}^{\ddagger}\boldsymbol{e}_\ell = \boldsymbol{L}{\boldsymbol 0}= {\boldsymbol 0}$ and $\boldsymbol{L}^{\ddagger}\boldsymbol{L}\boldsymbol{e}_\ell = \boldsymbol{L}^\ddagger {\boldsymbol 0}= {\boldsymbol 0}$. 
By equations [\[eq:original-from-perp\]](#eq:original-from-perp){reference-type="eqref" reference="eq:original-from-perp"} and [\[eq:perp-from-cholinv\]](#eq:perp-from-cholinv){reference-type="eqref" reference="eq:perp-from-cholinv"}, almost surely $$X_\ell = \sum_{\ell' = 1}^k L_{\ell\ell'} X_{\ell'}^\perp = \sum_{\ell' = 1}^k L_{\ell\ell'} \sum_{\ell'' = 1}^k L_{\ell' \ell''}^\ddagger X_{\ell''}, \quad \text{whence } \begin{pmatrix} X_1 \\[1pt] \vdots \\[1pt] X_k \end{pmatrix} = \boldsymbol{L}\boldsymbol{L}^\ddagger \begin{pmatrix} X_1 \\[1pt] \vdots \\[1pt] X_k \end{pmatrix} = \boldsymbol{L}\boldsymbol{L}^\ddagger\boldsymbol{L} \begin{pmatrix} X_1^\perp \\[1pt] \vdots \\[1pt] X_k^\perp \end{pmatrix}.$$ The support of $(X_1,\ldots,X_k)^\top$ is exactly $\mathsf{range}(\boldsymbol{K})$, whence $\boldsymbol{L}\boldsymbol{L}^\ddagger \boldsymbol{x}= \boldsymbol{x}$ for all $\boldsymbol{x}\in \mathsf{range}(\boldsymbol{K})$. Note that $\{ \boldsymbol{e}_\ell : \text{$\ell$ predictable} \} \cup \mathsf{range}(\boldsymbol{K})$ span ${\mathbb R}^k$ because $\mathsf{range}(\boldsymbol{K}) = \mathsf{range}(\boldsymbol{L})$, and $\boldsymbol{L}$ is lower triangular with 0's on the diagonal only in predictable indices. Thus, $\boldsymbol{L}\boldsymbol{L}^\ddagger$ and ${\mathbf I}_{\boldsymbol{K}}$ act by matrix multiplication equivalently on a set spanning ${\mathbb R}^k$, so are the same. 
Likewise, almost surely $$X_\ell^\perp = \sum_{\ell' = 1}^k L_{\ell\ell'}^\ddagger X_{\ell'} = \sum_{\ell' = 1}^k L_{\ell\ell'}^\ddagger \sum_{\ell'' = 1}^k L_{\ell' \ell''} X_{\ell''}^\perp, \quad \text{whence } \begin{pmatrix} X_1^\perp \\[1pt] \vdots \\[1pt] X_k^\perp \end{pmatrix} = \boldsymbol{L}^\ddagger\boldsymbol{L} \begin{pmatrix} X_1^\perp \\[1pt] \vdots \\[1pt] X_k^\perp \end{pmatrix} = \boldsymbol{L}^\ddagger\boldsymbol{L}\boldsymbol{L}^\ddagger \begin{pmatrix} X_1 \\[1pt] \vdots \\[1pt] X_k \end{pmatrix}.$$ The support of $(X_1^\perp,\ldots,X_k^\perp)^\top$ is exactly $\mathsf{span}\{\boldsymbol{e}_\ell : \text{$\ell$ innovative}\}$, whence $\boldsymbol{L}^\ddagger\boldsymbol{L}\boldsymbol{e}_\ell = \boldsymbol{e}_\ell$ for all $\ell$ innovative. Thus, $\boldsymbol{L}^\ddagger\boldsymbol{L}$ and ${\mathbf I}_{\boldsymbol{K}}^\perp$ act by matrix multiplication equivalently on a set spanning ${\mathbb R}^k$, so are the same. For all predictable $\ell$, $\boldsymbol{L}{\mathbf I}_{\boldsymbol{K}}^\perp \boldsymbol{e}_\ell = {\mathbf I}_{\boldsymbol{K}} \boldsymbol{L}\boldsymbol{e}_\ell = \boldsymbol{L}\boldsymbol{e}_\ell = {\boldsymbol 0}$ because ${\mathbf I}_{\boldsymbol{K}}^\perp \boldsymbol{e}_\ell = \boldsymbol{L}\boldsymbol{e}_\ell = {\boldsymbol 0}$. For innovative $\ell$, $\boldsymbol{L}{\mathbf I}_{\boldsymbol{K}}^\perp \boldsymbol{e}_\ell = \boldsymbol{L}\boldsymbol{e}_\ell$. Thus $\boldsymbol{L}{\mathbf I}_{\boldsymbol{K}}^\perp = \boldsymbol{L}$. Further, $\boldsymbol{L}\boldsymbol{e}_\ell \in \mathsf{range}(\boldsymbol{K})$, whence ${\mathbf I}_{\boldsymbol{K}} \boldsymbol{L}\boldsymbol{e}_\ell = \boldsymbol{L}\boldsymbol{e}_\ell$. Thus, $\boldsymbol{L}= {\mathbf I}_{\boldsymbol{K}} \boldsymbol{L}$. 
For all predictable $\ell$, $\boldsymbol{L}^\ddagger {\mathbf I}_{\boldsymbol{K}} \boldsymbol{e}_\ell = {\mathbf I}_{\boldsymbol{K}}^\perp \boldsymbol{L}^\ddagger \boldsymbol{e}_\ell = \boldsymbol{L}^\ddagger \boldsymbol{e}_\ell = {\boldsymbol 0}$ because ${\mathbf I}_{\boldsymbol{K}} \boldsymbol{e}_\ell = \boldsymbol{L}^\ddagger \boldsymbol{e}_\ell = {\boldsymbol 0}$. For $\boldsymbol{x}\in \mathsf{range}(\boldsymbol{K})$, $\boldsymbol{L}^\ddagger {\mathbf I}_{\boldsymbol{K}} \boldsymbol{x}= \boldsymbol{L}^\ddagger \boldsymbol{x}$. Thus, $\boldsymbol{L}^\ddagger {\mathbf I}_{\boldsymbol{K}}$ and $\boldsymbol{L}^\ddagger$ act equivalently by matrix multiplication on a spanning set, so are the same. Further, $\boldsymbol{L}^\ddagger \boldsymbol{x}\in \mathsf{span}\{\boldsymbol{e}_\ell : \text{$\ell$ innovative}\}$, whence ${\mathbf I}_{\boldsymbol{K}}^\perp \boldsymbol{L}^\ddagger \boldsymbol{x}= \boldsymbol{L}^\ddagger \boldsymbol{x}$. Thus $\boldsymbol{L}^\ddagger$ and ${\mathbf I}_{\boldsymbol{K}}^\perp \boldsymbol{L}^\ddagger$ act equivalently by matrix multiplication on a spanning set, so are the same. The identities $\boldsymbol{K}\boldsymbol{K}^\ddagger = {\mathbf I}_{\boldsymbol{K}}$ and $\boldsymbol{K}^\ddagger\boldsymbol{K}= {\mathbf I}_{\boldsymbol{K}}^\top$ are consequences of the above identities. By its definition, ${\mathbf I}_{\boldsymbol{K}}^\perp$ is the diagonal matrix that has entries $({\mathbf I}_{\boldsymbol{K}}^\perp)_{\ell\ell} = 1$ for innovative $\ell$ and $({\mathbf I}_{\boldsymbol{K}}^\perp)_{\ell\ell} = 0$ for predictable $\ell$. 
Note that for any $\ell$ innovative, $\ell'$ predictable, $\boldsymbol{x}\in \mathsf{range}(\boldsymbol{K})$, and $\boldsymbol{x}' \in \mathsf{range}(\boldsymbol{K})^\perp$, we have $\boldsymbol{e}_{\ell'}^\top {\mathbf I}_{\boldsymbol{K}}^\top \boldsymbol{e}_\ell = {\boldsymbol 0}^\top \boldsymbol{e}_\ell = 0$, $\boldsymbol{x}^\top {\mathbf I}_{\boldsymbol{K}}^\top \boldsymbol{e}_\ell = \boldsymbol{x}^\top \boldsymbol{e}_\ell = x_\ell$, $\boldsymbol{e}_{\ell'}^\top {\mathbf I}_{\boldsymbol{K}}^\top \boldsymbol{x}' = {\boldsymbol 0}^\top \boldsymbol{x}' = 0$, and $\boldsymbol{x}^\top {\mathbf I}_{\boldsymbol{K}}^\top \boldsymbol{x}' = \boldsymbol{x}^\top \boldsymbol{x}' = 0$. Because $\{ \boldsymbol{e}_{\ell'} : \text{$\ell'$ predictable} \} \cup \mathsf{range}(\boldsymbol{K})$ spans ${\mathbb R}^k$, we conclude ${\mathbf I}_{\boldsymbol{K}}^\top \boldsymbol{e}_\ell = \boldsymbol{e}_\ell$ and ${\mathbf I}_{\boldsymbol{K}}^\top \boldsymbol{x}' = {\boldsymbol 0}$. ◻ ## Explicit construction of the Cholesky decomposition {#sec:chol-explicit} At times we will need to quantify the continuity of $\boldsymbol{L}$, $\boldsymbol{L}^\ddagger$, and $\boldsymbol{K}^\ddagger$ in $\boldsymbol{K}$. This is most easily done by explicit computation, which is carried out here for reference. Let $\mathsf{L}: \mathbb{S}_{\geq 0}^k \rightarrow {\mathbb R}^{k \times k}$, $\mathsf{L}^\ddagger:\mathbb{S}_{\geq 0}^k \rightarrow {\mathbb R}^{k \times k}$, and $\mathsf{K}^\ddagger:\mathbb{S}_{\geq 0}^k \rightarrow \mathbb{S}_{\geq 0}^k$ be the functions that take $\boldsymbol{K}\in \mathbb{S}_{\geq 0}^k$ to the Cholesky matrix $\boldsymbol{L}$, the Cholesky pseudo-inverse $\boldsymbol{L}^\ddagger$, and the pseudo-inverse $\boldsymbol{K}^\ddagger$, respectively. **Lemma 6** (Explicit Cholesky decomposition). 
*The functions $\mathsf{L}$ and $\mathsf{L}^\ddagger$ are defined inductively in $\ell$ by $$\label{eq:L-explicit} \begin{gathered} \mathsf{L}(\boldsymbol{K})_{\ell \ell'} = \begin{cases} \sum_{\ell''=1}^{\ell'} \mathsf{L}^\ddagger(\boldsymbol{K})_{\ell' \ell''} K_{\ell\ell''} \quad &\text{if $\ell' < \ell$}, \\[3pt] \sqrt{ K_{\ell\ell} - \sum_{\ell'' = 1}^{\ell-1} \mathsf{L}(\boldsymbol{K})_{\ell \ell''}^2 } \quad &\text{if $\ell' = \ell$}, \end{cases} \\[5pt] \mathsf{L}^\ddagger(\boldsymbol{K})_{\ell\ell'} = \begin{cases} -\frac{1}{\mathsf{L}(\boldsymbol{K})_{\ell\ell}} \sum_{\ell''=\ell'}^{\ell-1} \mathsf{L}(\boldsymbol{K})_{\ell \ell''}\mathsf{L}^\ddagger(\boldsymbol{K})_{\ell''\ell'} \quad &\text{if $\ell' < \ell$ and $\ell$ is innovative}, \\[3pt] \frac{1}{\mathsf{L}(\boldsymbol{K})_{\ell\ell}} \quad&\text{if $\ell' = \ell$ and $\ell$ is innovative}, \\[3pt] 0\quad &\text{if $\ell$ is predictable}. \end{cases} \end{gathered}$$ Moreover, $\mathsf{K}^\ddagger(\boldsymbol{K}) = \mathsf{L}^\ddagger(\boldsymbol{K})^\top \mathsf{L}^\ddagger(\boldsymbol{K})$.* *Proof of .* We begin with the base case $\ell = 1$. We have $\mathsf{L}(\boldsymbol{K})_{11} = \langle X_1 , X_1^\perp \rangle_{L_2} = \| X_1 \|_{L_2} = K_{11}^{1/2}$. Because $X_1^\perp = X_1 / \| X_1 \|_{L_2}$ if $X_1 \neq 0$ and $X_1^\perp = 0$ otherwise, we have $\mathsf{L}^\ddagger(\boldsymbol{K})_{11} = \frac{1}{\mathsf{L}(\boldsymbol{K})_{11}}$ if the index 1 is innovative and $\mathsf{L}^\ddagger(\boldsymbol{K})_{11} = 0$ if it is predictable. Next, assume we have defined $\mathsf{L}(\boldsymbol{K})_{\ell'\ell''}$ and $\mathsf{L}^\ddagger(\boldsymbol{K})_{\ell'\ell''}$ for $\ell'' \leq \ell' < \ell$. 
Then, for $\ell' < \ell$, $\mathsf{L}(\boldsymbol{K})_{\ell\ell'} = \langle X_\ell , X_{\ell'}^\perp \rangle_{L_2} = \Big\langle X_\ell , \sum_{\ell''=1}^{\ell'} \mathsf{L}^\ddagger(\boldsymbol{K})_{\ell' \ell''} X_{\ell''} \Big\rangle_{L_2} = \sum_{\ell''=1}^{\ell'} \mathsf{L}^\ddagger(\boldsymbol{K})_{\ell' \ell''} K_{\ell\ell''}$, and $\mathsf{L}(\boldsymbol{K})_{\ell\ell} = \Big\| X_\ell - \sum_{\ell'=1}^{\ell-1} \langle X_\ell,X_{\ell'}^\perp\rangle_{L_2}X_{\ell'}^{\perp} \Big\|_{L_2} = \sqrt{ K_{\ell\ell} - \sum_{\ell' = 1}^{\ell-1} \mathsf{L}(\boldsymbol{K})_{\ell \ell'}^2 }$. Moreover, $$X_\ell^\perp = \frac{X_\ell - \sum_{\ell'=1}^{\ell-1} \langle X_\ell,X_{\ell'}^\perp\rangle_{L_2}X_{\ell'}^{\perp}}{\|X_\ell - \sum_{\ell'=1}^{\ell-1} \langle X_\ell,X_{\ell'}^\perp\rangle_{L_2}X_{\ell'}^{\perp}\|_{L_2}} = \frac{X_\ell - \sum_{\ell'=1}^{\ell-1} \mathsf{L}(\boldsymbol{K})_{\ell\ell'}\sum_{\ell''=1}^{\ell'} \mathsf{L}^\ddagger(\boldsymbol{K})_{\ell'\ell''} X_{\ell''}}{\mathsf{L}(\boldsymbol{K})_{\ell\ell}},$$ whence the form given for $\mathsf{L}^\ddagger(\boldsymbol{K})_{\ell\ell'}$ holds by comparison with equation [\[eq:perp-from-cholinv\]](#eq:perp-from-cholinv){reference-type="eqref" reference="eq:perp-from-cholinv"}. ◻ # Reduction to Gordon standard form The first step in proving  is to rewrite the optimization problems [\[eq:propensity-fit\]](#eq:propensity-fit){reference-type="eqref" reference="eq:propensity-fit"} and [\[eq:outcome-fit\]](#eq:outcome-fit){reference-type="eqref" reference="eq:outcome-fit"} in a min-max form that is more immediately amenable to the application of the sequential Gordon inequality. We likewise rewrite the fixed point equations [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"} in an equivalent form that is easier to work with when applying the sequential Gordon inequality. Without loss of generality, we assume that $\boldsymbol{\Sigma}= {\mathbf I}_p$ throughout our proof.
(See  for justification.) ## Reindexing and simplification to independent covariates From Assumption A1, the function $\pi$ is differentiable and strictly increasing. Define the limits $\pi_\infty \ensuremath{: =}\lim_{\eta \rightarrow \infty} \pi(\eta)$ and $\pi_{-\infty} \ensuremath{: =}\lim_{\eta \rightarrow -\infty} \pi(\eta)$. There is a probability distribution on ${\mathbb R}$ with density with respect to Lebesgue measure given by $\pi'(\,\cdot\,)/(\pi_\infty - \pi_{-\infty})$. Let $\boldsymbol{\varepsilon}_\mathsf{a}$ have components $\varepsilon_{\mathsf{a},i}$ distributed iid from this measure. Let $\boldsymbol{\varepsilon}_\mathsf{a}'$ be independent of $\boldsymbol{\varepsilon}_\mathsf{a}$ and have iid components $\varepsilon_{\mathsf{a},i}' \in \{0,1,\circ\}$ with probabilities $\mathbb{P}(\varepsilon_{\mathsf{a},i}' = 0) = 1 - \pi_\infty$, $\mathbb{P}(\varepsilon_{\mathsf{a},i}' = 1) = \pi_{-\infty}$, and $\mathbb{P}(\varepsilon_{\mathsf{a},i}' = \circ) = \pi_\infty - \pi_{-\infty}$. We can realize the model [\[eq:model\]](#eq:model){reference-type="eqref" reference="eq:model"} by setting $$\label{eq:prop-functional-form} \ensuremath{a}_i = \mathbb{I}\{ \varepsilon_{\mathsf{a},i}' = 1\} + \mathbb{I}\{ \varepsilon_{\mathsf{a},i}' = \circ\} \mathbb{I}\{ \varepsilon_{\mathsf{a},i} \leq \theta_{\mathsf{a},0} + \langle \boldsymbol{x}_i , \boldsymbol{\theta}_{\mathsf{a}} \rangle \}.$$ The proof of  has an inductive structure. For the purposes of the proof, it is convenient to use an alternative notation in which random-design quantities are indexed by the order in which they appear in this induction.
Define $$\begin{aligned} (\theta_{1,0},\boldsymbol{\theta}_1) &\ensuremath{: =}(\theta_{\mathsf{a},0},\boldsymbol{\theta}_\mathsf{a}), \quad & (\boldsymbol{\varepsilon}_1,\boldsymbol{\varepsilon}_1') &\ensuremath{: =}(\boldsymbol{\varepsilon}_\mathsf{a},\boldsymbol{\varepsilon}_\mathsf{a}'), & (\theta_{2,0},\boldsymbol{\theta}_2) &\ensuremath{: =}(\theta_{\mathsf{y},0},\boldsymbol{\theta}_\mathsf{y}), \quad & \boldsymbol{\varepsilon}_2 &\ensuremath{: =}\boldsymbol{\varepsilon}_{\mathsf{y}}, \\ \boldsymbol{y}_1 &\ensuremath{: =}\boldsymbol{a}, \quad & \boldsymbol{\eta}_1 &\ensuremath{: =}\boldsymbol{\eta}_\mathsf{a}, \quad & \boldsymbol{y}_2 &\ensuremath{: =}\boldsymbol{y}, &\quad \boldsymbol{\eta}_2 &\ensuremath{: =}\boldsymbol{\eta}_\mathsf{y}. \end{aligned}$$ We will sometimes go back and forth between this notation and that in equation [\[eq:rnd-des-objects\]](#eq:rnd-des-objects){reference-type="eqref" reference="eq:rnd-des-objects"}, and will attempt to do so in a way that minimizes potential confusion. ## Min-max formulation of primary optimization Our first step is to rewrite the optimizations in equations [\[eq:propensity-fit\]](#eq:propensity-fit){reference-type="eqref" reference="eq:propensity-fit"} and [\[eq:outcome-fit\]](#eq:outcome-fit){reference-type="eqref" reference="eq:outcome-fit"} as saddle point problems of the form: $$\label{eq:min-max} \begin{gathered} \min_{\substack{v_0 \in {\mathbb R}\\ \boldsymbol{v}\in {\mathbb R}^p}} \max_{\boldsymbol{u}\in {\mathbb R}^n} \Big\{ \boldsymbol{u}^\top \boldsymbol{A}\boldsymbol{v} + \phi_k(\boldsymbol{u};v_0,\boldsymbol{v}) \Big\}, \end{gathered}$$ where $\boldsymbol{A}= \boldsymbol{X}- \boldsymbol{1}\boldsymbol{\mu}_{\mathsf{x}}^\top$ is the mean-centered version of $\boldsymbol{X}$, with entries $A_{ij} \mathrel{\stackrel{\mathrm{iid}}{\sim}}\mathsf{N}(0,1)$.
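As a sanity check on this kind of reduction, the following sketch (assuming squared loss with a ridge penalty, an illustrative stand-in for the losses and regularizers actually used here, and with the intercept suppressed) verifies numerically that the saddle point of the min-max form recovers the direct minimizer:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 200, 5, 0.7
A = rng.standard_normal((n, p))              # mean-centered Gaussian design
y = A @ rng.standard_normal(p) + rng.standard_normal(n)

# Direct fit: min_v (1/2n)||y - A v||^2 + (lam/2)||v||^2.
v_direct = np.linalg.solve(A.T @ A / n + lam * np.eye(p), A.T @ y / n)

# Min-max form: min_v max_u { u^T A v + phi(u; v) } with
# phi(u; v) = -<u, y> - (n/2)||u||^2 + (lam/2)||v||^2.
# The saddle point solves the linear KKT system
#   n u - A v = -y      (stationarity in u)
#   A^T u + lam v = 0   (stationarity in v).
K = np.block([[n * np.eye(n), -A], [A.T, lam * np.eye(p)]])
sol = np.linalg.solve(K, np.concatenate([-y, np.zeros(p)]))
u_saddle, v_saddle = sol[:n], sol[n:]

assert np.allclose(v_saddle, v_direct)
assert np.allclose(u_saddle, (A @ v_saddle - y) / n)
```

Here the choice $\phi(\boldsymbol{u};\boldsymbol{v}) = -\langle\boldsymbol{u},\boldsymbol{y}\rangle - \frac{n}{2}\|\boldsymbol{u}\|^2 + \frac{\lambda}{2}\|\boldsymbol{v}\|^2$ comes from the conjugate identity $\frac{1}{2n}\|\boldsymbol{z}\|^2 = \max_{\boldsymbol{u}}\{\langle\boldsymbol{u},\boldsymbol{z}\rangle - \frac{n}{2}\|\boldsymbol{u}\|^2\}$ applied to $\boldsymbol{z}= \boldsymbol{A}\boldsymbol{v}- \boldsymbol{y}$.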
It is also convenient to define the true parameters $(\theta_{\mathsf{a},0},\boldsymbol{\theta}_\mathsf{a})$ and $(\theta_{\mathsf{y},0},\boldsymbol{\theta}_\mathsf{y})$ as well as the mean and confounded mean estimates $\widehat{\boldsymbol{\mu}}{}_{\mathsf{x}}$, $\widehat{\boldsymbol{\mu}}{}_{\mathsf{x},\mathsf{cfd}}$ as solutions to saddle point problems. Thus, define $$\label{eq:objectives} \begin{gathered} \phi_1(\boldsymbol{u};v_0,\boldsymbol{v}) \ensuremath{: =} - \mathbb{I}\{\boldsymbol{u}= {\boldsymbol 0}\} + \mathbb{I}\{ (v_0,\boldsymbol{v}) = (\theta_{1,0},\boldsymbol{\theta}_1)\}, \\ \phi_2(\boldsymbol{u};v_0,\boldsymbol{v}) \ensuremath{: =} - \mathbb{I}\{ \boldsymbol{u}= {\boldsymbol 0}\} + \mathbb{I}\{ (v_0,\boldsymbol{v}) = (\theta_{2,0},\boldsymbol{\theta}_2) \}, \\ \phi_3(\boldsymbol{u};v_0,\boldsymbol{v}) \ensuremath{: =} -\mathbb{I}\{ \boldsymbol{u}= -\boldsymbol{1}/n \} +\mathbb{I}\{ (v_0,\boldsymbol{v}) = (0,{\boldsymbol 0}) \}, \\ \phi_4(\boldsymbol{u};v_0,\boldsymbol{v}) \ensuremath{: =} -\mathbb{I}\{ \boldsymbol{u}= -\boldsymbol{a}/n \} +\mathbb{I}\{ (v_0,\boldsymbol{v}) = (0,{\boldsymbol 0}) \}, \\ \phi_5(\boldsymbol{u};v_0,\boldsymbol{v}) \ensuremath{: =} \langle \boldsymbol{u}, \boldsymbol{1}\rangle(v_0 + \langle \boldsymbol{\mu}_{\mathsf{x}},\boldsymbol{v}\rangle) - \ell_5^*(\boldsymbol{u};\boldsymbol{w},\boldsymbol{y}_1,\boldsymbol{y}_2) + \Omega_5(\boldsymbol{v}), \\ \phi_6(\boldsymbol{u};v_0,\boldsymbol{v}) \ensuremath{: =} \langle\boldsymbol{u}, \boldsymbol{1}\rangle(v_0 + \langle \boldsymbol{\mu}_{\mathsf{x}},\boldsymbol{v}\rangle) -\ell_6^*(\boldsymbol{u};\boldsymbol{w},\boldsymbol{y}_1,\boldsymbol{y}_2) + \Omega_6(\boldsymbol{v}), \end{gathered}$$ where $$\label{eq:ellk*-def} \begin{gathered} \ell_5^*(\boldsymbol{u};\boldsymbol{w},\boldsymbol{y}_1,\boldsymbol{y}_2) \ensuremath{: =} \frac{1}{n} \sum_{i=1}^n \ell_{\mathsf{a}}^*(nu_i;y_{1,i}), \qquad \ell_6^*(\boldsymbol{u};\boldsymbol{w},\boldsymbol{y}_1,\boldsymbol{y}_2) 
\ensuremath{: =} \langle \boldsymbol{u}, \boldsymbol{y}_2 \rangle + \frac{n}2 \sum_{i=1}^n w_i^{-1} \frac{u_i^2}{y_{1,i}}, \\ \Omega_5(\boldsymbol{v}) \ensuremath{: =}\Omega_{\mathsf{a}}(\boldsymbol{v}), \qquad \Omega_6(\boldsymbol{v}) \ensuremath{: =}\Omega_{\mathsf{y}}(\boldsymbol{v}), \end{gathered}$$ and we adopt the convention that $u\mapsto u^2/0$ is the convex indicator function $\mathbb{I}\{u = 0\}$. Here $\ell_\mathsf{a}^*$ is the Fenchel-Legendre conjugate of $\ell_\mathsf{a}$ in its first argument: $$\ell_\mathsf{a}^*(nu_i;y_{1,i}) \ensuremath{: =} \sup_{\eta \in {\mathbb R}} \Big\{ nu_i\eta - \ell_\mathsf{a}(\eta;y_{1,i}) \Big\}.$$ The min-max problems in equation [\[eq:min-max\]](#eq:min-max){reference-type="eqref" reference="eq:min-max"} are referred to as the *primary optimizations*, and the objectives as the *primary objectives*. We denote the saddle point of the $k^\text{th}$ primary optimization by $(\boldsymbol{u}_k^{\mathsf{po}};v_{k,0}^{\mathsf{po}},\boldsymbol{v}_k^{\mathsf{po}})$, where "$\mathsf{po}$" denotes "primary optimization."
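For a concrete instance of this conjugate, suppose (purely for illustration; only Assumption A1 is imposed on $\ell_\mathsf{a}$ in the text) that $\ell_\mathsf{a}$ is the logistic negative log-likelihood $\ell_\mathsf{a}(\eta;y) = \log(1+e^\eta) - y\eta$. Then $\ell_\mathsf{a}^*(s;y) = (s+y)\log(s+y) + (1-s-y)\log(1-s-y)$ for $s+y\in(0,1)$, and $+\infty$ when $s+y\notin[0,1]$. The sketch below checks the supremum definition against this closed form:

```python
import numpy as np

def ell_a(eta, y):
    # Logistic negative log-likelihood (illustrative choice of ell_a).
    return np.log1p(np.exp(eta)) - y * eta

GRID = np.linspace(-30, 30, 200_001)

def conj_grid(s, y):
    # ell_a^*(s; y) = sup_eta { s * eta - ell_a(eta; y) }, via a fine grid.
    return np.max(s * GRID - ell_a(GRID, y))

def conj_closed(s, y):
    # Binary-entropy closed form, valid for s + y in (0, 1).
    t = s + y
    return t * np.log(t) + (1 - t) * np.log(1 - t)

for y in (0.0, 1.0):
    for t in (0.2, 0.5, 0.9):          # target value of s + y inside (0, 1)
        s = t - y
        assert abs(conj_grid(s, y) - conj_closed(s, y)) < 1e-4
```

The grid supremum is accurate here because the maximizer $\eta^* = \log\frac{t}{1-t}$ lies well inside the grid and the objective has curvature at most $1/4$.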
By Fenchel-Legendre duality, $$\label{eq:po-to-regr-cov} \begin{aligned} (v_{1,0}^{\mathsf{po}},\boldsymbol{v}_1^{\mathsf{po}}) &= (\theta_{\mathsf{a},0},\boldsymbol{\theta}_\mathsf{a}), \qquad& (v_{2,0}^{\mathsf{po}},\boldsymbol{v}_2^{\mathsf{po}}) &= (\theta_{\mathsf{y},0},\boldsymbol{\theta}_\mathsf{y}), \\ (v_{3,0}^{\mathsf{po}},\boldsymbol{v}_3^{\mathsf{po}}) &= (0,{\boldsymbol 0}), \qquad& (v_{4,0}^{\mathsf{po}},\boldsymbol{v}_4^{\mathsf{po}}) &= (0,{\boldsymbol 0}), \\ (v_{5,0}^{\mathsf{po}},\boldsymbol{v}_5^{\mathsf{po}}) &= (\widehat{\theta}{}_{\mathsf{a},0},\widehat{\boldsymbol{\theta}}{}_\mathsf{a}), \qquad& (v_{6,0}^{\mathsf{po}},\boldsymbol{v}_6^{\mathsf{po}}) &= (\widehat{\theta}{}_{\mathsf{y},0},\widehat{\boldsymbol{\theta}}{}_\mathsf{y}), \end{aligned}$$ and $$\label{eq:uPO-identity} \begin{aligned} \boldsymbol{u}_1^{\mathsf{po}} &= {\boldsymbol 0}, \quad& \boldsymbol{u}_2^{\mathsf{po}} &= {\boldsymbol 0}, \\ \boldsymbol{u}_3^{\mathsf{po}} &= -\boldsymbol{1}/n, \quad& \boldsymbol{u}_4^{\mathsf{po}} &= -\boldsymbol{a}/n = -\boldsymbol{y}_1/n, \\ \boldsymbol{u}_5^{\mathsf{po}} &= \frac{1}{n} \ell_{\mathsf{a}}'\big(\widehat{\boldsymbol{\eta}}_\mathsf{a};\boldsymbol{a}\big) = \frac{1}{n} \widehat{\boldsymbol{\psi}}{}_{\mathsf{a}}, \quad& \boldsymbol{u}_6^{\mathsf{po}} &= -\frac{1}{n} \boldsymbol{a}\odot \boldsymbol{w}\odot \big(\boldsymbol{y}- \widehat{\boldsymbol{\eta}}_\mathsf{y}\big) = \frac{1}{n} \widehat{\boldsymbol{\psi}}{}_{\mathsf{y}}, \end{aligned}$$ where $\ell_\mathsf{a}'$ is applied coordinate-wise.
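The identity for $\boldsymbol{u}_6^{\mathsf{po}}$, for instance, is just the coordinate-wise first-order condition of the inner maximization against the quadratic conjugate $\ell_6^*$: with $y_{1,i} = a_i$, the map $u \mapsto u\eta_i - u y_{2,i} - \frac{n}{2}\frac{u^2}{w_i a_i}$ is maximized at $u = \frac{1}{n} a_i w_i (\eta_i - y_{2,i})$. A minimal numerical confirmation (scalar values chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
a = 1.0                              # y_{1,i} = a_i (a treated unit)
w = float(rng.uniform(0.5, 2.0))     # balancing weight w_i
y2, eta = rng.standard_normal(2)     # outcome y_{2,i} and linear predictor

# Coordinate-wise objective u -> u*eta - ell_6^*(u), with
# ell_6^*(u) = u*y2 + (n/2) * u**2 / (w * a).
u_grid = np.linspace(-1.0, 1.0, 2_000_001)
obj = u_grid * eta - (u_grid * y2 + 0.5 * n * u_grid**2 / (w * a))
u_star = u_grid[np.argmax(obj)]

# First-order condition: u = (a * w / n) * (eta - y2), i.e. the
# coordinate of -(1/n) a ⊙ w ⊙ (y - eta_hat) appearing above.
assert abs(u_star - a * w * (eta - y2) / n) < 1e-5
```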
follows from an exact asymptotic characterization of $(v_{k,0}^{\mathsf{po}},\boldsymbol{v}_k^{\mathsf{po}})$ for $1 \leq k \leq 6$ and of $\boldsymbol{u}_k^{\mathsf{po}}$ for $3 \leq k \leq 6$. ## Definition of state evolution Similar to , the exact asymptotic characterization of $(v_{k,0}^{\mathsf{po}},\boldsymbol{v}_k^{\mathsf{po}})$, $1 \leq k \leq 6$, and $\boldsymbol{u}_k^{\mathsf{po}}$, $3 \leq k \leq 6$, states that these behave like analogous quantities in two models on distinct probability spaces. We call these models *state evolution*. One probability space contains Gaussian randomness $\boldsymbol{G}^{\mathsf{se}} \in {\mathbb R}^{p \times 6}$ distributed $\boldsymbol{G}^{\mathsf{se}} \sim \mathsf{N}(0,\boldsymbol{K}_g \otimes {\mathbf I}_p)$. The other contains Gaussian randomness $\boldsymbol{H}^{\mathsf{se}} \in {\mathbb R}^{n \times 6}$ distributed $\boldsymbol{H}^{\mathsf{se}} \sim \mathsf{N}(0,\boldsymbol{K}_h \otimes {\mathbf I}_n)$ and randomness $\boldsymbol{\varepsilon}_1^\mathsf{se}$, ${\boldsymbol{\varepsilon}_1^\mathsf{se}}'$, and $\boldsymbol{\varepsilon}_2^\mathsf{se}$ with the same distribution as $\boldsymbol{\varepsilon}_1$, $\boldsymbol{\varepsilon}_1'$, and $\boldsymbol{\varepsilon}_2$, respectively.
The covariances $\boldsymbol{K}_g \in \mathbb{S}_{\geq 0}^6$ and $\boldsymbol{K}_h \in \mathbb{S}_{\geq 0}^6$ are defined below. The superscript "$\mathsf{se}$" denotes "state evolution." On these probability spaces, we define $\boldsymbol{u}_k^{\mathsf{se}}$, $\boldsymbol{v}_k^{\mathsf{se}}$, $k \leq 6$ by $$\begin{aligned} \label{eq:SE-opt} \begin{gathered} \boldsymbol{u}_k^{\mathsf{se}} = \mathop{\mathrm{arg\,min}}_{\boldsymbol{u}} \Big\{ \frac{\zeta_{kk}^u}2\|\boldsymbol{u}\|^2 + \sum_{\ell = 1}^{k-1} \zeta_{k\ell}^u \langle\boldsymbol{u}_\ell^{\mathsf{se}},\boldsymbol{u}\rangle - \langle\boldsymbol{h}_k^{\mathsf{se}},\boldsymbol{u}\rangle + \phi_{k,u}(\boldsymbol{u};\boldsymbol{H}_{k-1}^{\mathsf{se}};\boldsymbol{\varepsilon}_1^{\mathsf{se}},{\boldsymbol{\varepsilon}_1^{\mathsf{se}}}',\boldsymbol{\varepsilon}_2^{\mathsf{se}}) \Big\}, \\ \boldsymbol{v}_k^{\mathsf{se}} = \mathop{\mathrm{arg\,min}}_{\boldsymbol{v}} \Big\{ \frac{\zeta_{kk}^v}2\|\boldsymbol{v}\|^2 + \sum_{\ell = 1}^{k-1} \zeta_{k\ell}^v \langle\boldsymbol{v}_\ell^{\mathsf{se}},\boldsymbol{v}\rangle - \langle\boldsymbol{g}_k^{\mathsf{se}},\boldsymbol{v}\rangle + \phi_{k,v}(\boldsymbol{v};\boldsymbol{G}_{k-1}^{\mathsf{se}}) \Big\}, \end{gathered}\end{aligned}$$ where $\phi_{k,u}$ and $\phi_{k,v}$ are given by [\[eq:SE-penalties\]]{#eq:SE-penalties label="eq:SE-penalties"} $$\begin{gathered} \begin{aligned} \phi_{1,u}(\boldsymbol{u}) &= \mathbb{I}\{\boldsymbol{u}= {\boldsymbol 0}\}, \qquad& \phi_{2,u}(\boldsymbol{u};\boldsymbol{H}_1) &= \mathbb{I}\{\boldsymbol{u}= {\boldsymbol 0}\}, \\ \phi_{3,u}(\boldsymbol{u};\boldsymbol{H}_2) &= \mathbb{I}\{\boldsymbol{u}= -\boldsymbol{1}/n\}, \qquad& \phi_{4,u}(\boldsymbol{u};\boldsymbol{H}_3) &= \mathbb{I}\{\boldsymbol{u}= -\boldsymbol{a}/n\}, \end{aligned} \\ \begin{aligned} \phi_{5,u}(\boldsymbol{u};\boldsymbol{H}_4;\boldsymbol{\varepsilon}_1,\boldsymbol{\varepsilon}_2) &= - \langle\boldsymbol{u},\boldsymbol{1}\rangle(\nu_{5,0}+\nu_{5,\mathsf{x}}) +
\ell_5^*(\boldsymbol{u};\boldsymbol{w},\boldsymbol{y}_1,\boldsymbol{y}_2), \\ \phi_{6,u}(\boldsymbol{u};\boldsymbol{H}_5;\boldsymbol{\varepsilon}_1,\boldsymbol{\varepsilon}_2) &= - \langle \boldsymbol{u},\boldsymbol{1}\rangle(\nu_{6,0} + \nu_{6 ,\mathsf{x}}) + \ell_6^*(\boldsymbol{u};\boldsymbol{w},\boldsymbol{y}_1,\boldsymbol{y}_2), \end{aligned} \end{gathered}$$ and $$\begin{aligned} \begin{gathered} \phi_{1,v}(\boldsymbol{v}) = \mathbb{I}\{ \boldsymbol{v}= \boldsymbol{\theta}_1 \}, \quad \phi_{2,v}(\boldsymbol{v};\boldsymbol{G}_1) = \mathbb{I}\{ \boldsymbol{v}= \boldsymbol{\theta}_2 \}, \quad \phi_{3,v}(\boldsymbol{v}) = \mathbb{I}\{ \boldsymbol{v}= {\boldsymbol 0}\}, \quad \phi_{4,v}(\boldsymbol{v};\boldsymbol{G}_1) = \mathbb{I}\{ \boldsymbol{v}= {\boldsymbol 0}\}, \\ \phi_{5,v}(\boldsymbol{v};\boldsymbol{G}_2) = \Omega_5(\boldsymbol{v}), \qquad \phi_{6,v}(\boldsymbol{v};\boldsymbol{G}_3) = \Omega_6(\boldsymbol{v}), \end{gathered}\end{aligned}$$ where $\boldsymbol{y}_1,\boldsymbol{y}_2,\boldsymbol{w}$ on the right-hand sides of the previous displays are interpreted as functions of $\boldsymbol{H}_k,\boldsymbol{\varepsilon}_1,{\boldsymbol{\varepsilon}_1}',\boldsymbol{\varepsilon}_2$ given by $$\label{eq:yw-func-of-Heps} \begin{gathered} y_{1,i} = \mathbb{I}\{ \varepsilon_{1,i}' = 1\} + \mathbb{I}\{ \varepsilon_{1,i}' = \circ\} \mathbb{I}\{ \varepsilon_{1,i} \leq \theta_{1,0} + \langle \boldsymbol{\mu}_{\mathsf{x}},\boldsymbol{\theta}_1\rangle + h_{1,i}\}, \qquad \boldsymbol{y}_2 = (\theta_{2,0} + \langle \boldsymbol{\mu}_{\mathsf{x}} , \boldsymbol{\theta}_2 \rangle) \boldsymbol{1}+ \boldsymbol{h}_2 + \boldsymbol{\varepsilon}_2, \\ \boldsymbol{w}\ensuremath{: =}w\big((\theta_{1,0} + \langle \boldsymbol{\mu}_{\mathsf{x}},\boldsymbol{\theta}_1\rangle)\boldsymbol{1}+ \boldsymbol{h}_1\big), \end{gathered}$$ and the parameters $\{\zeta_{k\ell}^u\}_{1 \leq \ell \leq k \leq 6}$, $\{ \zeta_{k\ell}^v \}_{1 \leq \ell \leq k \leq 6}$, $\{\nu_{k,0}\}_{5\leq k \leq 6}$, 
$\{\nu_{k,\mathsf{x}}\}_{5\leq k \leq 6}$ are defined below. The objective $\phi_{k,u}(\boldsymbol{u};\boldsymbol{H}_{k-1}^{\mathsf{se}};\boldsymbol{\varepsilon}_1^{\mathsf{se}},{\boldsymbol{\varepsilon}_1^{\mathsf{se}}}',\boldsymbol{\varepsilon}_2^{\mathsf{se}})$ depends on the auxiliary noise $\boldsymbol{\varepsilon}_1^{\mathsf{se}},{\boldsymbol{\varepsilon}_1^{\mathsf{se}}}',\boldsymbol{\varepsilon}_2^{\mathsf{se}}$ and the parameters listed in the previous sentence, but for notational simplicity this dependence will often be left implicit. We denote by $\boldsymbol{Z}_u$ and $\boldsymbol{Z}_v$ the lower triangular matrices with $[\boldsymbol{Z}_u]_{k \ell} = \zeta_{k\ell}^u$ and $[\boldsymbol{Z}_v]_{k \ell} = \zeta_{k\ell}^v$. The state evolution distribution is determined by the parameters $\boldsymbol{K}_g$, $\boldsymbol{K}_h$, $\boldsymbol{Z}_u$, $\boldsymbol{Z}_v$, $\{\nu_{k,0}\}_{5\leq k \leq 6}$, $\{\nu_{k,\mathsf{x}}\}_{5\leq k \leq 6}$. The state evolution characterizes the distribution of $(v_{1,0}^{\mathsf{po}},\boldsymbol{v}_1^{\mathsf{po}})$, $(v_{2,0}^{\mathsf{po}},\boldsymbol{v}_2^{\mathsf{po}})$, $(v_{3,0}^{\mathsf{po}},\boldsymbol{v}_3^{\mathsf{po}})$, $(v_{4,0}^{\mathsf{po}},\boldsymbol{v}_4^{\mathsf{po}})$, $(v_{5,0}^{\mathsf{po}},\boldsymbol{v}_5^{\mathsf{po}})$, $(v_{6,0}^{\mathsf{po}},\boldsymbol{v}_6^{\mathsf{po}})$ and $\boldsymbol{u}_3^{\mathsf{po}}$, $\boldsymbol{u}_4^{\mathsf{po}}$, $\boldsymbol{u}_5^{\mathsf{po}}$, $\boldsymbol{u}_6^{\mathsf{po}}$ when these parameters are the unique solution to the fixed point equations: $$\label{eq:fixpt-general} \tag{SE-fixpt} \begin{gathered} \boldsymbol{K}_g = \langle\!\langle\boldsymbol{U}^{\mathsf{se}} \rangle\!\rangle_{L_2}, \qquad \boldsymbol{K}_h = \langle\!\langle\boldsymbol{V}^{\mathsf{se}} \rangle\!\rangle_{L_2}, \\ \boldsymbol{K}_h \boldsymbol{Z}_v^\top = \langle\!\langle\boldsymbol{H}^{\mathsf{se}} , \boldsymbol{U}^{\mathsf{se}} \rangle\!\rangle_{L_2}, \qquad \boldsymbol{K}_g 
\boldsymbol{Z}_u^\top = \langle\!\langle\boldsymbol{G}^{\mathsf{se}} , \boldsymbol{V}^{\mathsf{se}} \rangle\!\rangle_{L_2}, \\ \nu_{5,\mathsf{x}} = \langle\boldsymbol{\mu}_{\mathsf{x}},\boldsymbol{v}_5^{\mathsf{se}}\rangle_{L_2}, \qquad \nu_{6,\mathsf{x}} = \langle\boldsymbol{\mu}_{\mathsf{x}},\boldsymbol{v}_6^{\mathsf{se}}\rangle_{L_2}, \\ \langle\boldsymbol{1},\boldsymbol{u}_5^{\mathsf{se}}\rangle_{L_2} = \langle\boldsymbol{1},\boldsymbol{u}_6^{\mathsf{se}}\rangle_{L_2} = 0, \\ \text{where $\boldsymbol{Z}_v$ is lower-triangular and innovation-compatible with $\boldsymbol{K}_h$} \\ \text{and $\boldsymbol{Z}_u$ is lower-triangular and innovation-compatible with $\boldsymbol{K}_g$}. \end{gathered}$$ The right-hand sides of these equations involve expectations taken with respect to the state evolution distribution, so they are functions of these parameters $\boldsymbol{K}_g$, $\boldsymbol{K}_h$, $\boldsymbol{Z}_u$, $\boldsymbol{Z}_v$, $\{\nu_{k,0}\}_{5\leq k \leq 6}$, $\{\nu_{k,\mathsf{x}}\}_{5\leq k \leq 6}$. The next two lemmas show that the fixed point equations have a unique solution and establish bounds on that solution. **Lemma 7** (Existence and uniqueness of fixed point parameters). *Under Assumption A1, the fixed point equations [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"} have a unique solution.* **Lemma 8** (Bounds on fixed point parameters).
*Under Assumption A1, the solution to the fixed point equations [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"} satisfies the following bounds.* - *Covariance bounds: $|L_{g,k\ell}| \lesssim 1/\sqrt{n}$, $|L_{h,k\ell}| \lesssim 1$ for all $\ell \leq k$, $|K_{g,k\ell}| \lesssim 1 / n$, and $|K_{h,k\ell}| \lesssim 1$.* - *Non-trivial innovations: $L_{g,kk} \asymp 1/\sqrt{n}$ for $k = 3,4,5,6$, and $L_{h,kk} \asymp 1$ for $k = 5,6$.* - *Bounds on effective regularization: $\zeta_{k\ell}^v \lesssim 1$ and $\zeta_{k\ell}^u \lesssim n$ for all $\ell \leq k$ and all $k$, and $\zeta_{kk}^v \asymp 1$, $\zeta_{kk}^u \asymp n$ for $k = 5,6$. Moreover, $-\zeta_{51}^v \gtrsim 1$.* - *Bounds on Cholesky pseudo-inverse: $(\boldsymbol{L}_g^\ddagger)_{k\ell} \lesssim \sqrt{n}$ and $(\boldsymbol{L}_h^\ddagger)_{k\ell} \lesssim 1$ for all $\ell \leq k$.* - *Bounds on offset: $|\nu_{5,0}| \leq C$ and $|\nu_{6,0}| \leq C$.* - *Bounds on mean-effects: $|\nu_{k,\mathsf{x}}| \leq C$.* *Moreover, the solutions have the following form: $$\label{eq:Z-form} \boldsymbol{Z}_v = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\[5pt] 0 & 0 & 0 & 0 & 0 & 0 \\[5pt] 0 & 0 & 0 & 0 & 0 & 0 \\[5pt] -{\overline{\pi}}\alpha_1 \mathbb{I}\{\boldsymbol{\theta}_1 \neq {\boldsymbol 0}\} & 0 & 0 & 0 & 0 & 0 \\[5pt] \zeta_{51}^v & 0 & 0 & 0 & \zeta_{55}^v & 0 \\[5pt] \zeta_{61}^v & \zeta_{62}^v & 0 & 0 & 0 & \zeta_{66}^v \end{pmatrix}, \qquad \boldsymbol{Z}_u = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\[5pt] 0 & 0 & 0 & 0 & 0 & 0 \\[5pt] 0 & 0 & 0 & 0 & 0 & 0 \\[5pt] 0 & 0 & 0 & 0 & 0 & 0 \\[5pt] 0 & 0 & 0 & 0 & \zeta_{55}^u & 0 \\[5pt] 0 & 0 & 0 & 0 & 0 & \zeta_{66}^u \end{pmatrix},$$ and $$\label{eq:K-form} \boldsymbol{K}_g = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\[5pt] 0 & 0 & 0 & 0 & 0 & 0 \\[5pt] 0 & 0 & 1/n & {\overline{\pi}}/n & K_{g,35} & K_{g,36} \\[5pt] 0 & 0 & {\overline{\pi}}/n & {\overline{\pi}}/n & K_{g,45} & K_{g,46} \\[5pt] 0 & 0 & K_{g,53} & K_{g,54} &
K_{g,55} & K_{g,56} \\[5pt] 0 & 0 & K_{g,63} & K_{g,64} & K_{g,65} & K_{g,66} \end{pmatrix}, \qquad \boldsymbol{K}_h = \begin{pmatrix} \langle\!\langle\boldsymbol{\Theta}\rangle\!\rangle& {\boldsymbol 0}_{2 \times 2} & \boldsymbol{K}_{h,1:2,5:6} \\[5pt] {\boldsymbol 0}_{2 \times 2} & {\boldsymbol 0}_{2 \times 2} & {\boldsymbol 0}_{2 \times 2} \\[5pt] \boldsymbol{K}_{h,5:6,1:2} & {\boldsymbol 0}_{2 \times 2} & \boldsymbol{K}_{h,5:6,5:6} \end{pmatrix}.$$* We prove  in , respectively. Unless otherwise specified, we will henceforth assume that the parameters $\boldsymbol{K}_g$, $\boldsymbol{K}_h$, $\boldsymbol{Z}_u$, $\boldsymbol{Z}_v$, $\{\nu_{k,0}\}_{5\leq k \leq 6}$, $\{\nu_{k,\mathsf{x}}\}_{5\leq k \leq 6}$ are taken to be the solution to equations [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}, and that $(\boldsymbol{u}_k^{\mathsf{se}})_{k \leq 6}$ and $(\boldsymbol{v}_k^{\mathsf{se}})_{k \leq 6}$ have the state evolution distribution corresponding to these parameters. **Remark 1** (Fenchel-Legendre dual of state evolution optimization). *We will frequently work with the Fenchel-Legendre dual of equation [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"}, which we state here for later reference.
In particular, for $k = 5,6$, the first line of equation [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} is equivalent to $$\label{eq:fenchel-legendre} \begin{gathered} n\boldsymbol{u}_5^{\mathsf{se}} = \nabla \ell_\mathsf{a}\Big((\nu_{5,0} + \nu_{5,\mathsf{x}})\boldsymbol{1}+ \boldsymbol{h}_5^\mathsf{se}- \sum_{\ell=1}^5 \zeta_{5\ell}^u \boldsymbol{u}_\ell^\mathsf{se};\boldsymbol{y}_1^{\mathsf{se}}\Big), \\ n\boldsymbol{u}_6^{\mathsf{se}} = -\boldsymbol{y}_1^{\mathsf{se}} \odot w(\boldsymbol{h}_1^\mathsf{se}) \odot \Big( \boldsymbol{h}_2^\mathsf{se}+ \boldsymbol{\varepsilon}_2^\mathsf{se} - \Big( (\nu_{6,0} + \nu_{6,\mathsf{x}})\boldsymbol{1}+ \boldsymbol{h}_6^{\mathsf{se}} - \sum_{\ell=1}^6 \zeta_{6\ell}^u \boldsymbol{u}_\ell^{\mathsf{se}} \Big) \Big). \end{gathered}$$* ## State evolution describes the primary optimization Define the *debiasing noise* for each optimization in equation [\[eq:min-max\]](#eq:min-max){reference-type="eqref" reference="eq:min-max"} via: $$\label{eq:loo-noise-def} \boldsymbol{g}_k^{\mathsf{po}} = \sum_{\ell=1}^k \zeta_{k\ell}^v \boldsymbol{v}_\ell^{\mathsf{po}} - \boldsymbol{A}^\top \boldsymbol{u}_k^{\mathsf{po}}, \qquad \boldsymbol{h}_k^{\mathsf{po}} = \sum_{\ell=1}^k \zeta_{k\ell}^u \boldsymbol{u}_\ell^{\mathsf{po}} + \boldsymbol{A}\boldsymbol{v}_k^{\mathsf{po}}.$$ We define $$\label{eq:perp-from-orig} \begin{gathered} \boldsymbol{U}^{\mathsf{po},\perp} = \boldsymbol{U}^{\mathsf{po}}\boldsymbol{L}_g^{\ddagger\top}, \qquad \boldsymbol{V}^{\mathsf{po},\perp} = \boldsymbol{V}^{\mathsf{po}}\boldsymbol{L}_h^{\ddagger\top}, \qquad \boldsymbol{G}^{\mathsf{po},\perp} = \boldsymbol{G}^{\mathsf{po}}\boldsymbol{L}_g^{\ddagger\top}, \qquad \boldsymbol{H}^{\mathsf{po},\perp} = \boldsymbol{H}^{\mathsf{po}}\boldsymbol{L}_h^{\ddagger\top}, \\ \boldsymbol{U}^{\mathsf{se},\perp} = \boldsymbol{U}^{\mathsf{se}}\boldsymbol{L}_g^{\ddagger\top}, \qquad \boldsymbol{V}^{\mathsf{se},\perp} =
\boldsymbol{V}^{\mathsf{se}}\boldsymbol{L}_h^{\ddagger\top}, \qquad \boldsymbol{G}^{\mathsf{se},\perp} = \boldsymbol{G}^{\mathsf{se}}\boldsymbol{L}_g^{\ddagger\top}, \qquad \boldsymbol{H}^{\mathsf{se},\perp} = \boldsymbol{H}^{\mathsf{se}}\boldsymbol{L}_h^{\ddagger\top}, \end{gathered}$$ whence we also have $$\label{eq:orig-from-perp} \begin{gathered} \boldsymbol{U}^{\mathsf{po}} = \boldsymbol{U}^{\mathsf{po},\perp}\boldsymbol{L}_g^{\top}, \qquad \boldsymbol{V}^{\mathsf{po}} = \boldsymbol{V}^{\mathsf{po},\perp}\boldsymbol{L}_h^{\top}, \qquad \boldsymbol{G}^{\mathsf{po}} = \boldsymbol{G}^{\mathsf{po},\perp}\boldsymbol{L}_g^{\top}, \qquad \boldsymbol{H}^{\mathsf{po}} = \boldsymbol{H}^{\mathsf{po},\perp}\boldsymbol{L}_h^{\top}, \\ \boldsymbol{U}^{\mathsf{se}} = \boldsymbol{U}^{\mathsf{se},\perp}\boldsymbol{L}_g^{\top}, \qquad \boldsymbol{V}^{\mathsf{se}} = \boldsymbol{V}^{\mathsf{se},\perp}\boldsymbol{L}_h^{\top}, \qquad \boldsymbol{G}^{\mathsf{se}} = \boldsymbol{G}^{\mathsf{se},\perp}\boldsymbol{L}_g^{\top}, \qquad \boldsymbol{H}^{\mathsf{se}} = \boldsymbol{H}^{\mathsf{se},\perp}\boldsymbol{L}_h^{\top}. \end{gathered}$$ Order-$2$ pseudo-Lipschitz functions of primary-optimization quantities concentrate on the expectation of the analogous state evolution quantities: **Theorem 5**. *Under Assumption A1, we have the following.* - ***Concentration of parameter estimates.** For any function $\phi: ({\mathbb R}^p)^{6 + 6} \rightarrow {\mathbb R}$ which is order-$2$ pseudo-Lipschitz, we have $$\label{eq:se-conc} \begin{gathered} \phi\Big( \boldsymbol{V}^{\mathsf{po},\perp}, \frac{1}{\sqrt{n}}\boldsymbol{G}^{\mathsf{po},\perp} \Big) \ensuremath{\stackrel{\bullet}{=}} \mathbb{E}\Big[ \phi\Big( \boldsymbol{V}^{\mathsf{se},\perp}, \frac{1}{\sqrt{n}}\boldsymbol{G}^{\mathsf{se},\perp} \Big) \Big].
\end{gathered}$$* - ***Concentration of score estimates.** For any function $\phi: {\mathbb R}^{6 + 5 + 1} \rightarrow {\mathbb R}$ which is order-2 pseudo-Lipschitz, $$\label{eq:se-conc-u} \frac{1}{n} \sum_{i=1}^n \phi\Big( (nu_{\ell,i}^{\mathsf{po}})_{\ell=1}^6, (h_{\ell,i}^{\mathsf{po}})_{\ell=1}^5, \varepsilon_{2,i} \Big) \ensuremath{\stackrel{\bullet}{=}} \mathbb{E}\Big[ \phi\Big( (nu_{\ell,i}^{\mathsf{se}})_{\ell=1}^6, (h_{\ell,i}^{\mathsf{se}})_{\ell=1}^5, \varepsilon_{2,i}^{\mathsf{se}} \Big) \Big].$$* - ***Concentration of offset.** For $\ell = 5,6$, $$v_{\ell,0}^{\mathsf{po}} \ensuremath{\stackrel{\bullet}{=}}\nu_{\ell,0}.$$* **Remark 2**. *We expect that the relation [\[eq:se-conc-u\]](#eq:se-conc-u){reference-type="eqref" reference="eq:se-conc-u"} holds also if $(h_{\ell,i}^{\mathsf{po}})_{\ell=1}^5$ is replaced by $(h_{\ell,i}^{\mathsf{po}})_{\ell=1}^6$ and $(h_{\ell,i}^{\mathsf{se}})_{\ell=1}^5$ is replaced by $(h_{\ell,i}^{\mathsf{se}})_{\ell=1}^6$. Due to the lack of differentiability of $\ell_6^*$ when $y_{1,i} = 0$ (see equation [\[eq:ellk\*-def\]](#eq:ellk*-def){reference-type="eqref" reference="eq:ellk*-def"}), establishing this involves technical arguments which we do not pursue because they are not necessary for the debiasing results.* # State evolution identities {#sec:reg-explicit} Using Gaussian integration by parts, we get explicit expressions for the effective regularization parameters $\zeta_{k\ell}^u$ and $\zeta_{k\ell}^v$ for $k = 5,6$. **Lemma 9**.
(a) *Consider a state evolution $(\boldsymbol{u}_k^{\mathsf{se}})_k$, $(\boldsymbol{v}_k^{\mathsf{se}})_k$, $(\boldsymbol{g}_k^{\mathsf{se}})_k$, $(\boldsymbol{h}_k^{\mathsf{se}})_k$, $\boldsymbol{\varepsilon}_1^\mathsf{se}$, ${\boldsymbol{\varepsilon}_1^{\mathsf{se}}}'$, $\boldsymbol{\varepsilon}_2^\mathsf{se}$ at parameters $(\boldsymbol{K}_g,\allowbreak \boldsymbol{K}_h,\allowbreak \boldsymbol{Z}_u,\allowbreak \boldsymbol{Z}_v,\allowbreak \{\nu_{k,0}\}_{5\leq k \leq 6},\allowbreak \{\nu_{k,\mathsf{x}}\}_{5\leq k \leq 6})$ (not necessarily a solution to the fixed point equations [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}). That is, we assume $\boldsymbol{G}^{\mathsf{se}} \sim \mathsf{N}(0,\boldsymbol{K}_g \otimes {\mathbf I}_p)$, $\boldsymbol{H}^{\mathsf{se}} \sim \mathsf{N}(0,\boldsymbol{K}_h \otimes {\mathbf I}_n)$, and equation [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"}. For $\ensuremath{a}\in \{0,1\}$, let $u_{5,i}^{\mathsf{se},a} \ensuremath{: =}\mathop{\mathrm{arg\,min}}_{u \in {\mathbb R}}\Big\{\frac{\zeta_{55}^u}2u^2 - (\nu_{5,0} + \nu_{5,\mathsf{x}} + h_{5,i}^{\mathsf{se}})u + \ell_\mathsf{a}^*(nu;a)\Big\}$ and $\widehat{\eta}{}_{5,i}^{\mathsf{se},a} \ensuremath{: =}\nu_{5,0} + \nu_{5,\mathsf{x}}+h_{5,i}^{\mathsf{se}}-\zeta_{55}^u u_{5,i}^{\mathsf{se},a}$.
Let $\widehat{\boldsymbol{Z}}_u,\widehat{\boldsymbol{Z}}_v \in {\mathbb R}^{6\times6}$ with $\widehat{\zeta}{}_{41}^v = -{\overline{\pi}}\alpha_1$, $$\label{eq:zeta-hat-56-u} \begin{gathered} \widehat{\zeta}{}_{55}^u \ensuremath{: =} \mathbb{E}\Big[ \mathrm{Tr}\Big( \Big( \zeta_{55}^v {\mathbf I}_p + \nabla^2 \Omega_5(\boldsymbol{v}_5^{\mathsf{se}}) \Big)^{-1} \Big) \Big], \quad \widehat{\zeta}{}_{66}^u \ensuremath{: =} \mathbb{E}\Big[ \mathrm{Tr}\Big( \Big( \zeta_{66}^v {\mathbf I}_p + \nabla^2 \Omega_6(\boldsymbol{v}_6^{\mathsf{se}}) \Big)^{-1} \Big) \Big], \\ \widehat{\zeta}{}_{65}^u \ensuremath{: =} \zeta_{65}^v \mathbb{E}\Big[ \mathrm{Tr}\Big( \Big( \zeta_{66}^v {\mathbf I}_p + \nabla^2 \Omega_6(\boldsymbol{v}_6^{\mathsf{se}}) \Big)^{-1} \Big( \zeta_{55}^v {\mathbf I}_p + \nabla^2 \Omega_5(\boldsymbol{v}_5^{\mathsf{se}}) \Big)^{-1} \Big) \Big], \end{gathered}$$ and $$\label{eq:zeta-hat-56-v} \begin{gathered} \widehat{\zeta}{}_{51}^v \ensuremath{: =} \mathbb{E}[\pi'(h_{1,i}^{\mathsf{se}})(u_{5,i}^{\mathsf{se},1} - u_{5,i}^{\mathsf{se},0})], \qquad \widehat{\zeta}{}_{55}^v \ensuremath{: =} \mathbb{E}\Big[ \frac{1-\pi(h_{1,i}^{\mathsf{se}})}{\zeta_{55}^u + n/\ell_\mathsf{a}''(\widehat{\eta}{}_{5,i}^{\mathsf{se},0};0)} + \frac{\pi(h_{1,i}^{\mathsf{se}})}{\zeta_{55}^u + n/\ell_\mathsf{a}''(\widehat{\eta}{}_{5,i}^{\mathsf{se},1};1)} \Big], \\ \widehat{\zeta}{}_{61}^v \ensuremath{: =} n\mathbb{E}\Big[ \Big( \frac{\textup{d}}{\textup{d}h_{1,i}^{\mathsf{se}}} \frac{\pi(h_{1,i}^{\mathsf{se}})w(h_{1,i}^{\mathsf{se}})}{\zeta_{66}^u w(h_{1,i}^{\mathsf{se}}) + n} \Big) \big(\nu_{6,0} + \nu_{6,\mathsf{x}} - \mu_\mathsf{y}+ h_{6,i}^{\mathsf{se}} - h_{2,i}^{\mathsf{se}} - \varepsilon_{2,i}^{\mathsf{se}} - \zeta_{65}^u u_{5,i}^{\mathsf{se},1}\big) \Big], \\ \widehat{\zeta}{}_{66}^v \ensuremath{: =} -\widehat{\zeta}{}_{62}^v \ensuremath{: =} n\mathbb{E}\Big[ \frac{\pi(h_{1,i}^{\mathsf{se}})}{\zeta_{66}^u + n/w(h_{1,i}^{\mathsf{se}})} \Big], \quad \widehat{\zeta}{}_{65}^v 
\ensuremath{: =} -n\zeta_{65}^u \mathbb{E}\Big[ \frac{\pi(h_{1,i}^{\mathsf{se}})}{(\zeta_{55}^u + n/\ell_\mathsf{a}''(\widehat{\eta}{}_{5,i}^{\mathsf{se},1};1))(\zeta_{66}^u + n/w(h_{1,i}^{\mathsf{se}}))} \Big], \end{gathered}$$ and all other entries are set to 0. Then $$\label{eq:IBP-general} \langle\!\langle\boldsymbol{G}^\mathsf{se}, \boldsymbol{V}^\mathsf{se}\rangle\!\rangle= \boldsymbol{K}_g \widehat{\boldsymbol{Z}}_u^\top, \qquad \text{and if $\zeta_{k\ell}^u = 0$ for $\ell \leq 4$, then} \quad \langle\!\langle\boldsymbol{H}^\mathsf{se}, \boldsymbol{U}^\mathsf{se}\rangle\!\rangle= \boldsymbol{K}_h \widehat{\boldsymbol{Z}}_v^\top.$$* (b) *If $(\boldsymbol{K}_g,\allowbreak \boldsymbol{K}_h,\allowbreak \boldsymbol{Z}_u,\allowbreak \boldsymbol{Z}_v,\allowbreak \{\nu_{k,0}\}_{5\leq k \leq 6},\allowbreak \{\nu_{k,\mathsf{x}}\}_{5\leq k \leq 6})$ is a solution to the fixed point equations [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}, we further have $\boldsymbol{Z}_u^\top = {\mathbf I}_{\boldsymbol{K}_g}^\top \widehat{\boldsymbol{Z}}_u^\top$ and $\boldsymbol{Z}_v^\top = {\mathbf I}_{\boldsymbol{K}_h}^\top \widehat{\boldsymbol{Z}}_v^\top$, which we display for future reference: $$\label{eq:zeta56-uv-explicit} \boldsymbol{Z}_u^\top = {\mathbf I}_{\boldsymbol{K}_g}^\top \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \widehat{\zeta}{}_{55}^u & \widehat{\zeta}{}_{65}^u \\ 0 & 0 & 0 & 0 & 0 & \widehat{\zeta}{}_{66}^u \end{pmatrix}, \qquad \boldsymbol{Z}_v^\top = {\mathbf I}_{\boldsymbol{K}_h}^\top \begin{pmatrix} 0 & 0 & 0 & -{\overline{\pi}}\alpha_1 & \widehat{\zeta}{}_{51}^v & \widehat{\zeta}{}_{61}^v \\ 0 & 0 & 0 & 0 & 0 & \widehat{\zeta}{}_{62}^v \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \widehat{\zeta}{}_{55}^v & \widehat{\zeta}{}_{65}^v \\ 0 & 0 & 0 & 0 & 0 & \widehat{\zeta}{}_{66}^v \end{pmatrix}.$$ For 
$\zeta_{51}^v,\zeta_{61}^v,\zeta_{62}^v$, we have the explicit expressions $$(\zeta_{51}^v,\zeta_{61}^v,\zeta_{62}^v) = \begin{cases} (0,0,0) \quad&\text{if } \boldsymbol{\theta}_1 = \boldsymbol{\theta}_2 = {\boldsymbol 0}, \\ \big(\widehat{\zeta}{}_{51}^v,\widehat{\zeta}{}_{61}^v + \frac{\langle\boldsymbol{\theta}_2,\boldsymbol{\theta}_1\rangle}{\|\boldsymbol{\theta}_1\|^2}\widehat{\zeta}{}_{62}^v,0\big) \quad&\text{if } \boldsymbol{\theta}_1 \neq {\boldsymbol 0},\, \boldsymbol{\theta}_2 \propto \boldsymbol{\theta}_1, \\ (\mathbb{I}_{\boldsymbol{\theta}_1\neq {\boldsymbol 0}}\widehat{\zeta}{}_{51}^v,\mathbb{I}_{\boldsymbol{\theta}_1\neq {\boldsymbol 0}}\widehat{\zeta}{}_{61}^v,\widehat{\zeta}{}_{62}^v) \quad&\text{if } \boldsymbol{\theta}_2 \not\propto \boldsymbol{\theta}_1. \end{cases}$$ In all cases, $\zeta_{51}^v \boldsymbol{\theta}_1 = \widehat{\zeta}{}_{51}^v \boldsymbol{\theta}_1$ and $\zeta_{61}^v \boldsymbol{\theta}_1 + \zeta_{62}^v \boldsymbol{\theta}_2 = \widehat{\zeta}{}_{61}^v \boldsymbol{\theta}_1 + \widehat{\zeta}{}_{62}^v \boldsymbol{\theta}_2 = \widehat{\zeta}{}_{61}^v \boldsymbol{\theta}_1 - \widehat{\zeta}{}_{66}^v \boldsymbol{\theta}_2$.* **Remark 3**. *The proof of  uses only the fixed point equation [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"} and the definition of the state evolution. It assumes neither  nor .
This fact is germane because  is used in the proof of .* *Proof of (a).* Using the relations $\boldsymbol{v}_3^{\mathsf{se}} = \boldsymbol{v}_4^{\mathsf{se}} = {\boldsymbol 0}$, the KKT conditions for the optimization problem [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} imply that $\boldsymbol{v}_5^{\mathsf{se}}$ is the unique solution to $\zeta_{55}^v \boldsymbol{v}_5^{\mathsf{se}} + \zeta_{51}^v \boldsymbol{\theta}_1 + \zeta_{52}^v \boldsymbol{\theta}_2 - \boldsymbol{g}_5^{\mathsf{se}} + \nabla \Omega_5(\boldsymbol{v}_5^{\mathsf{se}}) = {\boldsymbol 0},$ whence $\boldsymbol{v}_5^{\mathsf{se}}$ can be written as a function only of $\boldsymbol{g}_5^{\mathsf{se}}$, which we denote by $f_5^v(\boldsymbol{g}_5^{\mathsf{se}})$. By Assumption A1, $\Omega_5$ is twice-differentiable and $c$-strongly convex. Thus, taking the derivative of the preceding display, we see that $f_5^v$ is differentiable, and $$\begin{aligned} \frac{\textup{d}f_5^v}{\textup{d}\boldsymbol{g}_5^{\mathsf{se}}} (\boldsymbol{g}_5^{\mathsf{se}}) = \Big( \zeta_{55}^v {\mathbf I}_p + \nabla^2 \Omega_5(\boldsymbol{v}_5^{\mathsf{se}}) \Big)^{-1}. \end{aligned}$$ Likewise, the KKT conditions for the problem [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} imply that $\boldsymbol{v}_6^{\mathsf{se}}$ is the unique solution to $\zeta_{66}^v \boldsymbol{v}_6^{\mathsf{se}} + \zeta_{61}^v \boldsymbol{\theta}_1 + \zeta_{62}^v \boldsymbol{\theta}_2 + \zeta_{65}^v f_5^v(\boldsymbol{g}_5^{\mathsf{se}}) - \boldsymbol{g}_6^{\mathsf{se}} + \nabla \Omega_6(\boldsymbol{v}_6^{\mathsf{se}}) = {\boldsymbol 0},$ whence $\boldsymbol{v}_6^{\mathsf{se}}$ can be written as a function only of $\boldsymbol{g}_5^{\mathsf{se}},\boldsymbol{g}_6^{\mathsf{se}}$, which we denote by $f_6^v(\boldsymbol{g}_5^{\mathsf{se}},\boldsymbol{g}_6^{\mathsf{se}})$. As before, $f_6^v$ is differentiable.
Its derivatives are given by $$\begin{aligned} \frac{\textup{d}f_6^v}{\textup{d}\boldsymbol{g}_6^{\mathsf{se}}} (\boldsymbol{g}_5^{\mathsf{se}},\boldsymbol{g}_6^{\mathsf{se}}) & = \Big( \zeta_{66}^v {\mathbf I}_p + \nabla^2 \Omega_6(\boldsymbol{v}_6^{\mathsf{se}}) \Big)^{-1}, \quad\text{and} \\ % \frac{\textup{d}f_6^v}{\textup{d}\boldsymbol{g}_5^{\mathsf{se}}} (\boldsymbol{g}_5^{\mathsf{se}},\boldsymbol{g}_6^{\mathsf{se}}) & = \zeta_{65}^v \Big( \zeta_{66}^v {\mathbf I}_p + \nabla^2 \Omega_6(\boldsymbol{v}_6^{\mathsf{se}}) \Big)^{-1} \Big( \zeta_{55}^v {\mathbf I}_p + \nabla^2 \Omega_5(\boldsymbol{v}_5^{\mathsf{se}}) \Big)^{-1}.\end{aligned}$$ The fixed point equations [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"} (in the first equality) and Gaussian integration by parts (in the second equality) establish[^1] the first equation in the relation [\[eq:IBP-general\]](#eq:IBP-general){reference-type="eqref" reference="eq:IBP-general"}. From [\[eq:zeta56-uv-explicit\]](#eq:zeta56-uv-explicit){reference-type="ref" reference="eq:zeta56-uv-explicit"}, we have $\zeta_{5\ell}^u = 0$ for $\ell \leq 4$. Moreover, using the definition of $\boldsymbol{u}_5^{\mathsf{se}}$ (see [\[eq:SE-opt\]](#eq:SE-opt){reference-type="ref" reference="eq:SE-opt"}), we can write $$\begin{aligned} u_{5,i}^{\mathsf{se}} = (1 - y_{1,i}^{\mathsf{se}})u_{5,i}^{\mathsf{se},0} + y_{1,i}^{\mathsf{se}} u_{5,i}^{\mathsf{se},1}.\end{aligned}$$ Note that $u_{5,i}^{\mathsf{se},a}$ is the unique solution to the subgradient inclusion $$\begin{aligned} \nu_{5,0} + \nu_{5,\mathsf{x}} + h_{5,i}^{\mathsf{se}} - \zeta_{55}^u u_{5,i}^{\mathsf{se},a} \in \partial \ell_\mathsf{a}^*(nu_{5,i}^{\mathsf{se},a};a).\end{aligned}$$ Thus, the quantity $u_{5,i}^{\mathsf{se},a}$ can be written as a function only of $h_{5,i}^{\mathsf{se}}$, which we denote by $f_5^{u,a}(\boldsymbol{h}_5^{\mathsf{se}})$.
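The implicit differentiation used here can be sanity-checked numerically. The sketch below (a minimal sketch in Python; the quartic penalty, the constant $c$, and all numerical values are stand-ins, not the paper's $\Omega_5$) solves the stationarity equation $\zeta v + \nabla\Omega(v) = g$ by Newton's method and compares a finite-difference Jacobian of the solution map $g \mapsto v(g)$ against the closed form $(\zeta\,{\mathbf I} + \nabla^2\Omega(v))^{-1}$.

```python
import numpy as np

# Check that if v(g) solves  zeta * v + grad_Omega(v) = g,
# then dv/dg = (zeta * I + hess_Omega(v))^{-1}.
rng = np.random.default_rng(0)
zeta, c = 0.7, 1.0

def grad_Omega(v):        # stand-in Omega(v) = sum(v**4)/4 + (c/2)*||v||^2
    return v**3 + c * v

def hess_Omega(v):        # diagonal Hessian of the separable stand-in Omega
    return np.diag(3.0 * v**2 + c)

def solve_v(g, iters=60):  # Newton's method on the stationarity equation
    v = np.zeros_like(g)
    for _ in range(iters):
        F = zeta * v + grad_Omega(v) - g
        J = zeta * np.eye(len(g)) + hess_Omega(v)
        v = v - np.linalg.solve(J, F)
    return v

g = rng.standard_normal(4)
v = solve_v(g)
analytic = np.linalg.inv(zeta * np.eye(4) + hess_Omega(v))

# finite-difference Jacobian of g -> v(g), column by column
eps = 1e-6
fd = np.column_stack([(solve_v(g + eps * e) - v) / eps for e in np.eye(4)])
err = np.max(np.abs(fd - analytic))
```

The same check applies verbatim to $f_6^v$, with the extra $\zeta_{65}^v$ chain-rule factor for the derivative in $\boldsymbol{g}_5^{\mathsf{se}}$.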
By the relation [\[eq:fenchel-legendre\]](#eq:fenchel-legendre){reference-type="eqref" reference="eq:fenchel-legendre"} and the fact that the function $\ell_\mathsf{a}$ is twice-differentiable and strongly convex in its first argument (Assumption A1), we have $$\frac{\textup{d}f_5^{u,a}(\boldsymbol{h}_5^{\mathsf{se}})_i}{\textup{d}h_{5,i}^{\mathsf{se}}} = \frac{1}{\zeta_{55}^u + n/\ell_\mathsf{a}''(\widehat{\eta}{}_{5,i}^{\mathsf{se},a};a)}.$$ Then, for any $\ell$, $$\label{eq:hl-corr-u5} \begin{aligned} \langle \boldsymbol{h}_\ell^{\mathsf{se}} , \boldsymbol{u}_5^{\mathsf{se}} \rangle_{L_2} &= \langle \boldsymbol{h}_\ell^{\mathsf{se}} , (1-y_{1,i}^{\mathsf{se}})\boldsymbol{u}_5^{\mathsf{se},0} + y_{1,i}^{\mathsf{se}}\boldsymbol{u}_5^{\mathsf{se},1} \rangle_{L_2} = \mathbb{E}[ \mathbb{E}[\langle \boldsymbol{h}_\ell^{\mathsf{se}} , (1-y_{1,i}^{\mathsf{se}})\boldsymbol{u}_5^{\mathsf{se},0} + y_{1,i}^{\mathsf{se}}\boldsymbol{u}_5^{\mathsf{se},1} \rangle \mid \boldsymbol{h}_1^{\mathsf{se}} , \boldsymbol{h}_5^{\mathsf{se}}] ] \\ &= \mathbb{E}[\langle \boldsymbol{h}_\ell^{\mathsf{se}} , (1-\pi(\boldsymbol{h}_1^{\mathsf{se}}))\boldsymbol{u}_5^{\mathsf{se},0} + \pi(\boldsymbol{h}_1^{\mathsf{se}})\boldsymbol{u}_5^{\mathsf{se},1} \rangle] = K_{h,\ell1} \widehat{\zeta}{}_{51}^v + K_{h,\ell5} \widehat{\zeta}{}_{55}^v, \end{aligned}$$ where the last equality uses Gaussian integration by parts. 
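The Gaussian integration-by-parts step can be illustrated with a quick Monte Carlo check of Stein's identity: for centered jointly Gaussian $(h_\ell, h_1, h_5)$ with covariance $K$ and smooth $f$, $\mathbb{E}[h_\ell f(h_1,h_5)] = K_{\ell 1}\mathbb{E}[\partial_1 f] + K_{\ell 5}\mathbb{E}[\partial_5 f]$. The covariance and test function below are arbitrary stand-ins, not quantities from the paper.

```python
import numpy as np

# Monte Carlo sanity check of Gaussian integration by parts (Stein's identity).
rng = np.random.default_rng(1)
K = np.array([[1.0, 0.4, 0.2],
              [0.4, 1.0, 0.3],
              [0.2, 0.3, 1.0]])          # covariance of (h_l, h_1, h_5)
H = rng.multivariate_normal(np.zeros(3), K, size=2_000_000)
h_l, h_1, h_5 = H[:, 0], H[:, 1], H[:, 2]

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
f = sigmoid(h_1) * np.tanh(h_5)                     # smooth stand-in f(h_1, h_5)
df1 = sigmoid(h_1) * (1 - sigmoid(h_1)) * np.tanh(h_5)   # df/dh_1
df5 = sigmoid(h_1) * (1.0 - np.tanh(h_5) ** 2)           # df/dh_5

lhs = np.mean(h_l * f)
rhs = K[0, 1] * np.mean(df1) + K[0, 2] * np.mean(df5)
gap = abs(lhs - rhs)
```

The computations [\[eq:hl-corr-u5\]](#eq:hl-corr-u5){reference-type="eqref" reference="eq:hl-corr-u5"} apply exactly this identity, with the derivatives supplied by the implicit-function computations above.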
Now using $\zeta_{6\ell}^u = 0$ for $\ell \leq 4$ and that $w(\,\cdot\,)$ is bounded above by $C$ (Assumption A1), the KKT conditions for [\[eq:SE-opt\]](#eq:SE-opt){reference-type="ref" reference="eq:SE-opt"} imply that $$\begin{aligned} u_{6,i}^{\mathsf{se}} &= y_{1,i}^{\mathsf{se}}\cdot \frac{\nu_{6,0} + \nu_{6,\mathsf{x}} + h_{6,i}^{\mathsf{se}} - y_{2,i}^{\mathsf{se}} - \zeta_{65}^u u_{5,i}^{\mathsf{se}}}{\zeta_{66}^u + n(w_i^{\mathsf{se}})^{-1}} = y_{1,i}^{\mathsf{se}}\cdot \frac{\nu_{6,0} + \nu_{6,\mathsf{x}} - \mu_\mathsf{y}+ h_{6,i}^{\mathsf{se}} - h_{2,i}^{\mathsf{se}} - \varepsilon_{2,i}^{\mathsf{se}} - \zeta_{65}^u u_{5,i}^{\mathsf{se},1}}{\zeta_{66}^u + n/w(h_{1,i}^{\mathsf{se}})}. \end{aligned}$$ Then, for any $\ell$, $$\label{eq:hl-corr-u6} \begin{aligned} \langle \boldsymbol{h}_\ell^{\mathsf{se}} , \boldsymbol{u}_6^{\mathsf{se}} \rangle_{L_2} & = \mathbb{E}[ \mathbb{E}[\langle \boldsymbol{h}_\ell^{\mathsf{se}} , \boldsymbol{u}_6^{\mathsf{se}} \rangle | \boldsymbol{h}_1^{\mathsf{se}} , \boldsymbol{h}_2^{\mathsf{se}} , \boldsymbol{h}_5^{\mathsf{se}} , \boldsymbol{h}_6^{\mathsf{se}}] ] = n \mathbb{E}\Big[ h_{\ell,i}^{\mathsf{se}} \pi(h_{1,i}^{\mathsf{se}}) \frac{\nu_{6,0} + \nu_{6,\mathsf{x}} - \mu_\mathsf{y}+ h_{6,i}^{\mathsf{se}} - h_{2,i}^{\mathsf{se}} - \varepsilon_{2,i}^{\mathsf{se}} - \zeta_{65}^u u_{5,i}^{\mathsf{se},1}}{\zeta_{66}^u + n/w(h_{1,i}^{\mathsf{se}})} \Big] \\ &= K_{h,\ell1} \widehat{\zeta}{}_{61}^v + K_{h,\ell2} \widehat{\zeta}{}_{62}^v + K_{h,\ell5} \widehat{\zeta}{}_{65}^v + K_{h,\ell6} \widehat{\zeta}{}_{66}^v, \end{aligned}$$ where the last equality uses Gaussian integration by parts. Using these computations and Gaussian integration by parts for the remaining terms leads to the second equation in the relation [\[eq:IBP-general\]](#eq:IBP-general){reference-type="eqref" reference="eq:IBP-general"}.
◻ *Proof of (b).* When $\boldsymbol{Z}_u$ is innovation compatible with $\boldsymbol{K}_g$ and $\boldsymbol{Z}_v$ is innovation compatible with $\boldsymbol{K}_h$, we get ${\mathbf I}_{\boldsymbol{K}_g}^\top \boldsymbol{Z}_u^\top = \boldsymbol{Z}_u^\top$ and ${\mathbf I}_{\boldsymbol{K}_h}^\top \boldsymbol{Z}_v^\top = \boldsymbol{Z}_v^\top$ (see ). The first identity in equation [\[eq:zeta56-uv-explicit\]](#eq:zeta56-uv-explicit){reference-type="eqref" reference="eq:zeta56-uv-explicit"} then follows from part (a). This implies $\zeta_{k\ell}^u = 0$ for $\ell \leq 4$, whence the second identity in equation [\[eq:zeta56-uv-explicit\]](#eq:zeta56-uv-explicit){reference-type="eqref" reference="eq:zeta56-uv-explicit"} also follows from part (a). If $\boldsymbol{\theta}_1 = \boldsymbol{\theta}_2 = {\boldsymbol 0}$, then both indices 1 and 2 are predictable, $[{\mathbf I}_{\boldsymbol{K}_h}^\top]_{1:2,1:2} = {\boldsymbol 0}$, and $\zeta_{51}^v = \zeta_{61}^v = \zeta_{62}^v = 0$. If $\boldsymbol{\theta}_1 \neq {\boldsymbol 0}$ and $\boldsymbol{\theta}_2 \propto \boldsymbol{\theta}_1$, then index 1 is innovative, index 2 is predictable, $[{\mathbf I}_{\boldsymbol{K}_h}^\top]_{1:2,1:2} = \begin{pmatrix} 1 & 0 \\ L_{h,11}^{-1}L_{h,12} & 0 \end{pmatrix}$, and $\zeta_{51}^v = \widehat{\zeta}{}_{51}^v$, $\zeta_{61}^v = \widehat{\zeta}{}_{61}^v + L_{h,11}^{-1}L_{h,12} \widehat{\zeta}{}_{62}^v$, and $\zeta_{62}^v = 0$. Otherwise, if $\boldsymbol{\theta}_2 \not\propto \boldsymbol{\theta}_1$, then index 2 is innovative, $[{\mathbf I}_{\boldsymbol{K}_h}^\top]_{1:2,1:2} = \text{\rm diag}(\mathbb{I}_{\boldsymbol{\theta}_1 \neq {\boldsymbol 0}},1)$, and $\zeta_{51}^v = \mathbb{I}_{\boldsymbol{\theta}_1 \neq {\boldsymbol 0}} \widehat{\zeta}{}_{51}^v$, $\zeta_{61}^v = \mathbb{I}_{\boldsymbol{\theta}_1 \neq {\boldsymbol 0}} \widehat{\zeta}{}_{61}^v$, and $\zeta_{62}^v = \widehat{\zeta}{}_{62}^v$.
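The case analysis can also be checked mechanically. The sketch below (stand-in scalar values for the hatted entries; the variable names are hypothetical) verifies that, in each case, $\zeta_{51}^v\boldsymbol{\theta}_1 = \widehat{\zeta}{}_{51}^v\boldsymbol{\theta}_1$ and $\zeta_{61}^v\boldsymbol{\theta}_1 + \zeta_{62}^v\boldsymbol{\theta}_2 = \widehat{\zeta}{}_{61}^v\boldsymbol{\theta}_1 + \widehat{\zeta}{}_{62}^v\boldsymbol{\theta}_2$.

```python
import numpy as np

# Verify the case-by-case definitions of (zeta_51, zeta_61, zeta_62) against
# the hatted quantities, for the three cases in the statement.
rng = np.random.default_rng(2)
hz51, hz61, hz62 = rng.standard_normal(3)   # stand-in hatted entries

def case_values(theta1, theta2):
    if not theta1.any() and not theta2.any():
        return 0.0, 0.0, 0.0
    if theta1.any() and np.allclose(np.cross(theta1, theta2), 0):
        r = theta1 @ theta2 / (theta1 @ theta1)   # theta2 = r * theta1
        return hz51, hz61 + r * hz62, 0.0
    ind = 1.0 if theta1.any() else 0.0
    return ind * hz51, ind * hz61, hz62

for theta1, theta2 in [(np.zeros(3), np.zeros(3)),
                       (rng.standard_normal(3), None),
                       (rng.standard_normal(3), rng.standard_normal(3))]:
    if theta2 is None:                  # proportional case: theta2 = 1.7*theta1
        theta2 = 1.7 * theta1
    z51, z61, z62 = case_values(theta1, theta2)
    lhs = z61 * theta1 + z62 * theta2
    rhs = hz61 * theta1 + hz62 * theta2
    assert np.allclose(lhs, rhs) and np.allclose(z51 * theta1, hz51 * theta1)
ok = True
```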
The identities $\zeta_{61}^v \boldsymbol{\theta}_1 + \zeta_{62}^v \boldsymbol{\theta}_2 = \widehat{\zeta}{}_{61}^v \boldsymbol{\theta}_1 + \widehat{\zeta}{}_{62}^v \boldsymbol{\theta}_2 = \widehat{\zeta}{}_{61}^v \boldsymbol{\theta}_1 - \widehat{\zeta}{}_{66}^v \boldsymbol{\theta}_2$ and $\zeta_{51}^v \boldsymbol{\theta}_1 = \widehat{\zeta}{}_{51}^v \boldsymbol{\theta}_1$ are verified by checking each case individually. ◻ # Regression exact asymptotics follows from state evolution {#sec:exact-asymptotics-proof} In this section, we show that  and  imply  along with . ## A construction of fixed-design models from state evolution Our first step is to show that any state evolution that solves the fixed point equations [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"} has embedded on the same probability spaces a solution to the fixed point equations [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"}. **Lemma 10**.
*Consider any solution $(\boldsymbol{K}_g, \boldsymbol{K}_h, \boldsymbol{Z}_u, \boldsymbol{Z}_v, \{\nu_{k,0}\}_{5\leq k \leq 6}, \{\nu_{k,\mathsf{x}}\}_{5\leq k \leq 6})$ to the fixed point equations [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"} satisfying , and let $(\boldsymbol{u}_k^{\mathsf{se}})_k$, $(\boldsymbol{v}_k^{\mathsf{se}})_k$, $(\boldsymbol{g}_k^{\mathsf{se}})_k$, $(\boldsymbol{h}_k^{\mathsf{se}})_k$, $\boldsymbol{\varepsilon}_1^\mathsf{se}$, ${\boldsymbol{\varepsilon}_1^{\mathsf{se}}}'$, $\boldsymbol{\varepsilon}_2^\mathsf{se}$ have distribution given by the state evolution corresponding to these parameters.* *Then there exist on the same probability spaces random variables $(\boldsymbol{g}_{\mathsf{x}}^f, \boldsymbol{g}_{\mathsf{x},\mathsf{cfd}}^f, \boldsymbol{g}_{\mathsf{a}}^f, \boldsymbol{g}_\mathsf{y}^f)$, $(\boldsymbol{y}_{\mathsf{x}}^f, \boldsymbol{y}_{\mathsf{x},\mathsf{cfd}}^f, \boldsymbol{y}_{\mathsf{a}}^f, \boldsymbol{y}_\mathsf{y}^f)$, $(\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^f, \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f)$, $(\boldsymbol{\eta}_\mathsf{a}^f, \boldsymbol{\eta}_\mathsf{y}^f, \widehat{\boldsymbol{\eta}}_\mathsf{a}^{f,\mathsf{loo}}, \widehat{\boldsymbol{\eta}}_\mathsf{y}^{f,\mathsf{loo}}, \widehat{\boldsymbol{\eta}}_\mathsf{a}^f, \widehat{\boldsymbol{\eta}}_\mathsf{y}^f)$ with distribution given by the fixed-design models with parameters $(\boldsymbol{S},\allowbreak \beta_{\mathsf{a}\mathsf{a}}, \allowbreak \beta_{\mathsf{y}\mathsf{a}}, \allowbreak \zeta_\mathsf{a}^\theta, \allowbreak \zeta_\mathsf{a}^\eta, \allowbreak \zeta_\mathsf{y}^\theta, \allowbreak \zeta_\mathsf{y}^\eta, \allowbreak \widehat{\mu}{}_\mathsf{a}^f, \allowbreak \widehat{\mu}{}_\mathsf{y}^f)$ that solve the fixed point equations [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"}.* *Proof of .* We first provide the construction of the solutions to the
fixed-point equations [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"} from the solutions to [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}. With $\widehat{\zeta}{}_{51}^v,\widehat{\zeta}{}_{61}^v,\widehat{\zeta}{}_{62}^v$ as in  and $\boldsymbol{a}^f = \boldsymbol{y}_1^\mathsf{se}$, $\boldsymbol{y}^f = \boldsymbol{y}_2^\mathsf{se}$, $\boldsymbol{w}^f = w\big((\theta_{\mathsf{a},0} + \langle \boldsymbol{\mu}_{\mathsf{x}} , \boldsymbol{\theta}_{\mathsf{a}} \rangle)\boldsymbol{1}+ \boldsymbol{h}_1^{\mathsf{se}}\big)$, we can set $$\label{eq:from-uv-to-theta-eta-psi} \begin{gathered} \boldsymbol{S}= n \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & {\overline{\pi}}& 0 & 0 \\ 0 & 0 & \zeta_{55}^v & 0 \\ 0 & 0 & 0 & \zeta_{66}^v \end{pmatrix}^{-1} \boldsymbol{K}_{g,3:6,3:6} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & {\overline{\pi}}& 0 & 0 \\ 0 & 0 & \zeta_{55}^v & 0 \\ 0 & 0 & 0 & \zeta_{66}^v \end{pmatrix}^{-1}, \\ \beta_{\mathsf{a}\mathsf{a}} = -\widehat{\zeta}{}_{51}^v / \zeta_{55}^v, \qquad \beta_{\mathsf{y}\mathsf{a}} = -\widehat{\zeta}{}_{61}^v / \zeta_{66}^v, \\ \zeta_\mathsf{a}^\eta = \zeta_{55}^u/n, \qquad \zeta_\mathsf{y}^\eta = \zeta_{66}^u/n, \qquad \zeta_\mathsf{a}^\theta = \zeta_{55}^v, \qquad \zeta_\mathsf{y}^\theta = \zeta_{66}^v, \\ \widehat{\mu}{}_\mathsf{a}^f = \nu_{5,0} + \langle \boldsymbol{\mu}_{\mathsf{x}} , \boldsymbol{v}_5^{\mathsf{se}}\rangle_{L_2}, \qquad \widehat{\mu}{}_\mathsf{y}^f = \nu_{6,0} + \langle \boldsymbol{\mu}_{\mathsf{x}} , \boldsymbol{v}_6^{\mathsf{se}} \rangle_{L_2}, \end{gathered}$$ and $$\label{eq:fix-des-from-se} \begin{aligned} & \begin{aligned} \boldsymbol{g}_{\mathsf{x}}^f &= \boldsymbol{g}_3^{\mathsf{se}}, \quad& \boldsymbol{g}_{\mathsf{x},\mathsf{cfd}}^f &= \boldsymbol{g}_4^{\mathsf{se}}/{\overline{\pi}}, \\ \boldsymbol{y}_{\mathsf{x}}^f &= \boldsymbol{\mu}_{\mathsf{x}} + \boldsymbol{g}_3^{\mathsf{se}}, \quad&
\boldsymbol{y}_{\mathsf{x},\mathsf{cfd}}^f &= \boldsymbol{\mu}_{\mathsf{x},\mathsf{cfd}} + \boldsymbol{g}_4^{\mathsf{se}}/{\overline{\pi}}, \\ \widehat{\boldsymbol{\theta}}{}_\mathsf{a}^f &= \boldsymbol{v}_5^{\mathsf{se}}, \quad& \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f &= \boldsymbol{v}_6^{\mathsf{se}}, \end{aligned} \quad&& \begin{aligned} \boldsymbol{g}_\mathsf{a}^f &= \boldsymbol{g}_5^{\mathsf{se}}/\zeta_{55}^v, \quad& \boldsymbol{g}_\mathsf{y}^f &= \boldsymbol{g}_6^{\mathsf{se}}/\zeta_{66}^v, \\ \boldsymbol{y}_\mathsf{a}^f &= \frac{-\zeta_{51}^v\boldsymbol{\theta}_\mathsf{a}+ \boldsymbol{g}_5^{\mathsf{se}}}{\zeta_{55}^v}, \quad& \boldsymbol{y}_\mathsf{y}^f &= \frac{-\widehat{\zeta}{}_{61}^v\boldsymbol{\theta}_\mathsf{a}- \widehat{\zeta}{}_{62}^v \boldsymbol{\theta}_\mathsf{y}+ \boldsymbol{g}_6^{\mathsf{se}}}{\zeta_{66}^v}, \\ \boldsymbol{\eta}_\mathsf{a}^f &= \mu_\mathsf{a}\boldsymbol{1}+ \boldsymbol{h}_1^{\mathsf{se}}, \quad& \boldsymbol{\eta}_\mathsf{y}^f &= \mu_\mathsf{y}\boldsymbol{1}+ \boldsymbol{h}_2^{\mathsf{se}}, \end{aligned} \\ & \widehat{\boldsymbol{\eta}}_\mathsf{a}^{f,\mathsf{loo}} = (\nu_{5,0} + \nu_{5,\mathsf{x}})\boldsymbol{1}+ \boldsymbol{h}_5^{\mathsf{se}}, \qquad &&\widehat{\boldsymbol{\eta}}_\mathsf{y}^{f,\mathsf{loo}} = (\nu_{6,0} + \nu_{6,\mathsf{x}})\boldsymbol{1}+ \boldsymbol{h}_6^{\mathsf{se}}, \\ &\widehat{\boldsymbol{\eta}}_\mathsf{a}^f = (\nu_{5,0} + \nu_{5,\mathsf{x}})\boldsymbol{1}+ \boldsymbol{h}_5^{\mathsf{se}} - \zeta_{55}^u\boldsymbol{u}_5^{\mathsf{se}}, \qquad&& \widehat{\boldsymbol{\eta}}_\mathsf{y}^f = (\nu_{6,0} + \nu_{6,\mathsf{x}})\boldsymbol{1}+ \boldsymbol{h}_6^{\mathsf{se}} - \zeta_{66}^u\boldsymbol{u}_6^{\mathsf{se}}.
\end{aligned}$$ We prove that this construction solves [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"} in two steps.\ **Step 1: The random variables in equation [\[eq:fix-des-from-se\]](#eq:fix-des-from-se){reference-type="eqref" reference="eq:fix-des-from-se"} are distributed from the fixed-design models with parameters [\[eq:from-uv-to-theta-eta-psi\]](#eq:from-uv-to-theta-eta-psi){reference-type="eqref" reference="eq:from-uv-to-theta-eta-psi"}.** That is, we check equations [\[eq:fixed-design-param-outcomes\]](#eq:fixed-design-param-outcomes){reference-type="eqref" reference="eq:fixed-design-param-outcomes"}, [\[eq:param-fit-f\]](#eq:param-fit-f){reference-type="eqref" reference="eq:param-fit-f"}, [\[eq:fixed-design-outcomes\]](#eq:fixed-design-outcomes){reference-type="eqref" reference="eq:fixed-design-outcomes"}, [\[eq:fixed-design-linear-predictor-dist\]](#eq:fixed-design-linear-predictor-dist){reference-type="eqref" reference="eq:fixed-design-linear-predictor-dist"}, and [\[eq:lin-predict-f\]](#eq:lin-predict-f){reference-type="eqref" reference="eq:lin-predict-f"}, and that $(g_{\mathsf{x},i}^f,g_{\mathsf{x},\mathsf{cfd},i}^f,g_{\mathsf{a},i}^f,g_{\mathsf{y},i}^f) \mathrel{\stackrel{\mathrm{iid}}{\sim}} \mathsf{N}({\boldsymbol 0},\boldsymbol{S}/n)$. Equation [\[eq:fixed-design-param-outcomes\]](#eq:fixed-design-param-outcomes){reference-type="eqref" reference="eq:fixed-design-param-outcomes"} is true by construction.
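As a quick sanity check of the diagonal rescaling relating $\boldsymbol{S}$ and $\boldsymbol{K}_{g,3:6,3:6}$, the sketch below (all numerical values are stand-ins) confirms that the forward map $\boldsymbol{S} = n\boldsymbol{D}^{-1}\boldsymbol{K}\boldsymbol{D}^{-1}$, with $\boldsymbol{D} = \text{diag}(1,{\overline{\pi}},\zeta_{55}^v,\zeta_{66}^v)$, is undone by $\boldsymbol{K} = \boldsymbol{D}\boldsymbol{S}\boldsymbol{D}/n$.

```python
import numpy as np

# Round-trip check: S = n * D^{-1} K D^{-1}  <=>  K = D S D / n.
rng = np.random.default_rng(3)
n = 500
A = rng.standard_normal((4, 4))
K = A @ A.T                                  # stand-in symmetric PSD 4x4 block
D = np.diag([1.0, 0.6, 1.3, 0.8])            # stand-in diag(1, pibar, zeta's)
Dinv = np.linalg.inv(D)

S = n * Dinv @ K @ Dinv                      # forward map (cf. the display)
K_back = D @ S @ D / n                       # inverse map
roundtrip_err = np.max(np.abs(K_back - K))
```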
The KKT conditions for the second line of equation [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} and for equation [\[eq:param-fit-f\]](#eq:param-fit-f){reference-type="eqref" reference="eq:param-fit-f"} are $$\label{eq:se-v-KKT} \begin{gathered} \zeta_{55}^v(\boldsymbol{g}_5^{\mathsf{se}}/\zeta_{55}^v - \zeta_{51}^v / \zeta_{55}^v\boldsymbol{v}_1^{\mathsf{se}} - \widehat{\boldsymbol{\theta}}{}_{\mathsf{a}}^f) \in \partial \Omega_\mathsf{a}(\boldsymbol{v}_5^{\mathsf{se}}), \\ \zeta_{66}^v(\boldsymbol{g}_6^{\mathsf{se}}/\zeta_{66}^v - \zeta_{61}^v / \zeta_{66}^v\boldsymbol{v}_1^{\mathsf{se}} - \zeta_{62}^v / \zeta_{66}^v \boldsymbol{v}_2^{\mathsf{se}} - \widehat{\boldsymbol{\theta}}{}_{\mathsf{y}}^f) \in \partial \Omega_\mathsf{y}(\boldsymbol{v}_6^{\mathsf{se}}), \end{gathered} \qquad\text{and}\qquad \begin{gathered} \zeta_\mathsf{a}^\theta(\boldsymbol{y}_\mathsf{a}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{a}^f) \in \partial \Omega_\mathsf{a}(\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^f), \\ \zeta_\mathsf{y}^\theta(\boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f) \in \partial \Omega_\mathsf{y}(\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f), \end{gathered}$$ respectively. By , we have $$\begin{aligned} \zeta_{61}^v \boldsymbol{v}_1^\mathsf{se}+ \zeta_{62}^v \boldsymbol{v}_2^\mathsf{se}= \widehat{\zeta}{}_{61}^v \boldsymbol{\theta}_1 - \widehat{\zeta}{}_{66}^v \boldsymbol{\theta}_2, \quad \mbox{and} \quad \zeta_{51}^v \boldsymbol{v}_1^\mathsf{se}= \widehat{\zeta}{}_{51}^v \boldsymbol{\theta}_1. \end{aligned}$$ By , index 6 is innovative with respect to $\boldsymbol{K}_h$, whence equation [\[eq:zeta56-uv-explicit\]](#eq:zeta56-uv-explicit){reference-type="eqref" reference="eq:zeta56-uv-explicit"} implies $\widehat{\zeta}{}_{66}^v = \zeta_{66}^v$. 
Thus, under assignments [\[eq:from-uv-to-theta-eta-psi\]](#eq:from-uv-to-theta-eta-psi){reference-type="eqref" reference="eq:from-uv-to-theta-eta-psi"} and [\[eq:fix-des-from-se\]](#eq:fix-des-from-se){reference-type="eqref" reference="eq:fix-des-from-se"}, the two sets of KKT conditions in the previous display are equivalent, whence equation [\[eq:param-fit-f\]](#eq:param-fit-f){reference-type="eqref" reference="eq:param-fit-f"} is satisfied. Equation [\[eq:fixed-design-outcomes\]](#eq:fixed-design-outcomes){reference-type="eqref" reference="eq:fixed-design-outcomes"} follows from our construction of $\boldsymbol{a}^f,\boldsymbol{y}^f,\boldsymbol{w}^f$ above. Because $\boldsymbol{K}_h = \langle\!\langle \boldsymbol{V}^{\mathsf{se}} \rangle\!\rangle_{L_2}$ and $\boldsymbol{v}_1^{\mathsf{se}} = \boldsymbol{\theta}_\mathsf{a}$, $\boldsymbol{v}_2^{\mathsf{se}} = \boldsymbol{\theta}_\mathsf{y}$, $\boldsymbol{v}_5^{\mathsf{se}} = \widehat{\boldsymbol{\theta}}{}_\mathsf{a}^f$, $\boldsymbol{v}_6^{\mathsf{se}} = \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f$, we conclude that $\boldsymbol{\eta}_\mathsf{a}^f$, $\boldsymbol{\eta}_\mathsf{y}^f$, $\widehat{\boldsymbol{\eta}}_\mathsf{a}^{f,\mathsf{loo}}$, $\widehat{\boldsymbol{\eta}}_\mathsf{y}^{f,\mathsf{loo}}$, $\widehat{\mu}{}_\mathsf{a}^f$, $\widehat{\mu}{}_\mathsf{y}^f$ as we have defined them satisfy equation [\[eq:fixed-design-linear-predictor-dist\]](#eq:fixed-design-linear-predictor-dist){reference-type="eqref" reference="eq:fixed-design-linear-predictor-dist"}. 
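The equivalence of the two sets of KKT conditions rests on an elementary fact: for a smooth strongly convex penalty, $\zeta(\boldsymbol{y} - \widehat{\boldsymbol{\theta}}) = \nabla\Omega(\widehat{\boldsymbol{\theta}})$ characterizes the minimizer of $\frac{\zeta}{2}\|\boldsymbol{y}-\boldsymbol{\theta}\|^2 + \Omega(\boldsymbol{\theta})$. A minimal sketch with a ridge stand-in for $\Omega$ (not the paper's $\Omega_\mathsf{a}$; all numerical values are hypothetical):

```python
import numpy as np

# For Omega(t) = (c/2)||t||^2, the minimizer of (zeta/2)||y - t||^2 + Omega(t)
# is t_hat = zeta*y/(zeta + c), and the KKT condition
# zeta*(y - t_hat) = grad Omega(t_hat) holds exactly.
rng = np.random.default_rng(4)
zeta, c = 1.4, 0.9
y = rng.standard_normal(5)

t_hat = zeta * y / (zeta + c)                # closed-form stationary point
kkt_gap = np.max(np.abs(zeta * (y - t_hat) - c * t_hat))

obj = lambda t: 0.5 * zeta * np.sum((y - t) ** 2) + 0.5 * c * np.sum(t ** 2)
# strict convexity: no random perturbation improves the objective
worse = all(obj(t_hat + 0.1 * rng.standard_normal(5)) >= obj(t_hat)
            for _ in range(100))
```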
Using that $\zeta_{k\ell}^u = 0$ for $\ell \leq 4$, under assignments [\[eq:from-uv-to-theta-eta-psi\]](#eq:from-uv-to-theta-eta-psi){reference-type="eqref" reference="eq:from-uv-to-theta-eta-psi"} and [\[eq:fix-des-from-se\]](#eq:fix-des-from-se){reference-type="eqref" reference="eq:fix-des-from-se"}, the KKT conditions for equation [\[eq:lin-predict-f\]](#eq:lin-predict-f){reference-type="eqref" reference="eq:lin-predict-f"} are equivalent to the KKT conditions [\[eq:fenchel-legendre\]](#eq:fenchel-legendre){reference-type="eqref" reference="eq:fenchel-legendre"}. Equation [\[eq:lin-predict-f\]](#eq:lin-predict-f){reference-type="eqref" reference="eq:lin-predict-f"} follows. That $(g_{\mathsf{x},i}^f,g_{\mathsf{x},\mathsf{cfd},i}^f,g_{\mathsf{a},i}^f,g_{\mathsf{y},i}^f) \mathrel{\stackrel{\mathrm{iid}}{\sim}} \mathsf{N}({\boldsymbol 0},\boldsymbol{S}/n)$ holds by the definition of $\boldsymbol{g}_{\mathsf{x}}^f$, $\boldsymbol{g}_{\mathsf{x},\mathsf{cfd}}^f$, $\boldsymbol{g}_\mathsf{a}^f$, $\boldsymbol{g}_\mathsf{y}^f$, $\boldsymbol{S}$, and the fact that $(g_{3,i}^{\mathsf{se}},g_{4,i}^{\mathsf{se}},g_{5,i}^{\mathsf{se}},g_{6,i}^{\mathsf{se}}) \mathrel{\stackrel{\mathrm{iid}}{\sim}} \mathsf{N}({\boldsymbol 0},\boldsymbol{K}_{g,3:6,3:6})$.\ **Step 2: The fixed point equations [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"} are satisfied by parameters [\[eq:from-uv-to-theta-eta-psi\]](#eq:from-uv-to-theta-eta-psi){reference-type="eqref" reference="eq:from-uv-to-theta-eta-psi"}.** By equations [\[eq:fixed-design-score\]](#eq:fixed-design-score){reference-type="eqref" reference="eq:fixed-design-score"} and [\[eq:empirical-influence-function\]](#eq:empirical-influence-function){reference-type="eqref" reference="eq:empirical-influence-function"}, $\widehat{\boldsymbol{i}}{}_\mathsf{x}^f = \boldsymbol{1}= -n\boldsymbol{u}_3^{\mathsf{se}}$, $\widehat{\boldsymbol{i}}{}_{\mathsf{x},\mathsf{cfd}}^f =
-n\boldsymbol{u}_4^{\mathsf{se}}/{\overline{\pi}}$, $\widehat{\boldsymbol{i}}{}_\mathsf{a}^f = -\widehat{\boldsymbol{\psi}}{}_\mathsf{a}^f/\zeta_\mathsf{a}^\theta = -n\boldsymbol{u}_5^{\mathsf{se}}/\zeta_\mathsf{a}^\theta$, and $\widehat{\boldsymbol{i}}{}_\mathsf{y}^f = -\widehat{\boldsymbol{\psi}}{}_\mathsf{y}^f/\zeta_\mathsf{y}^\theta = -n\boldsymbol{u}_6^{\mathsf{se}}/\zeta_\mathsf{y}^\theta$. Thus, the first line of equation [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"} is equivalent to the fourth line of equation [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}. Comparing the derivative identities [\[eq:score-deriv-explicit\]](#eq:score-deriv-explicit){reference-type="eqref" reference="eq:score-deriv-explicit"} with the expressions for $\widehat{\zeta}{}_{k\ell}^v$ in , the second line of equation [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"} follows. The third line of equation [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"} is equivalent to $\boldsymbol{K}_g = \langle\!\langle\boldsymbol{U}^{\mathsf{se}} \rangle\!\rangle$ using the expressions for $\widehat{\boldsymbol{i}}{}_{\mathsf{x}}^f$, $\widehat{\boldsymbol{i}}{}_{\mathsf{x},\mathsf{cfd}}^f$, $\widehat{\boldsymbol{i}}{}_\mathsf{a}^f$, $\widehat{\boldsymbol{i}}{}_\mathsf{y}^f$ above. Because indices $5$ and $6$ are innovative with respect to both $\boldsymbol{K}_g$ and $\boldsymbol{K}_h$ (), it follows that $\zeta_{kk}^u = \widehat{\zeta}{}_{kk}^u$ and $\zeta_{kk}^v = \widehat{\zeta}{}_{kk}^v$ for $k=5,6$.
Then, comparing the derivative identities [\[eq:score-deriv-explicit\]](#eq:score-deriv-explicit){reference-type="eqref" reference="eq:score-deriv-explicit"} with the expressions for $\widehat{\zeta}{}_{k\ell}^v,\widehat{\zeta}{}_{k\ell}^u$ in  implies the fourth and fifth lines of [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="ref" reference="eq:regr-fixed-pt"}. Indices 3 and 4 are innovative with respect to $\boldsymbol{S}$ and the covariance in [\[eq:fixed-design-linear-predictor-dist\]](#eq:fixed-design-linear-predictor-dist){reference-type="ref" reference="eq:fixed-design-linear-predictor-dist"} because indices 5 and 6 are innovative with respect to $\boldsymbol{K}_g$, $\boldsymbol{K}_h$ by . ◻ ## A construction of state evolution from fixed-design models Similarly, any fixed-design model that solves the fixed point equations [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"} has embedded on the same probability spaces a solution to the fixed point equations [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}. **Lemma 11**.
*Consider any solution $(\boldsymbol{S},\allowbreak\beta_{\mathsf{a}\mathsf{a}},\allowbreak\beta_{\mathsf{y}\mathsf{a}},\allowbreak\zeta_\mathsf{a}^\theta,\allowbreak\zeta_\mathsf{a}^\eta,\allowbreak\zeta_\mathsf{y}^\theta,\allowbreak\zeta_\mathsf{y}^\eta,\allowbreak\widehat{\mu}{}_\mathsf{a}^f,\allowbreak\widehat{\mu}{}_\mathsf{y}^f)$ to the fixed point equations [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"}, and let $(\boldsymbol{g}_{\mathsf{x}}^f,\boldsymbol{g}_{\mathsf{x},\mathsf{cfd}}^f,\boldsymbol{g}_{\mathsf{a}}^f,\boldsymbol{g}_\mathsf{y}^f)$, $(\boldsymbol{y}_{\mathsf{x}}^f,\boldsymbol{y}_{\mathsf{x},\mathsf{cfd}}^f,\boldsymbol{y}_{\mathsf{a}}^f,\boldsymbol{y}_\mathsf{y}^f)$, $(\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^f,\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f)$, $(\boldsymbol{\eta}_\mathsf{a}^f,\boldsymbol{\eta}_\mathsf{y}^f,\widehat{\boldsymbol{\eta}}_\mathsf{a}^{f,\mathsf{loo}},\widehat{\boldsymbol{\eta}}_\mathsf{y}^{f,\mathsf{loo}},\widehat{\boldsymbol{\eta}}_\mathsf{a}^f,\widehat{\boldsymbol{\eta}}_\mathsf{y}^f)$ have distribution given by the fixed-design models corresponding to these parameters.* *Then there exist on the same probability spaces random variables $(\boldsymbol{u}_k^{\mathsf{se}})_k$, $(\boldsymbol{v}_k^{\mathsf{se}})_k$, $(\boldsymbol{g}_k^{\mathsf{se}})_k$, $(\boldsymbol{h}_k^{\mathsf{se}})_k$, $\boldsymbol{\varepsilon}_1^\mathsf{se}$, ${\boldsymbol{\varepsilon}_1^{\mathsf{se}}}'$, $\boldsymbol{\varepsilon}_2^\mathsf{se}$ with distribution given by the state evolution with parameters $(\boldsymbol{K}_g, \boldsymbol{K}_h, \boldsymbol{Z}_u, \boldsymbol{Z}_v, \{\nu_{k,0}\}_{5\leq k \leq 6}, \{\nu_{k,\mathsf{x}}\}_{5\leq k \leq 6})$ that solve the fixed point equations [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}.* *When the mapping of  is applied to these random variables and parameters, we obtain
$(\boldsymbol{S},\allowbreak\beta_{\mathsf{a}\mathsf{a}},\allowbreak\beta_{\mathsf{y}\mathsf{a}},\allowbreak\zeta_\mathsf{a}^\theta,\allowbreak\zeta_\mathsf{a}^\eta,\allowbreak\zeta_\mathsf{y}^\theta,\allowbreak\zeta_\mathsf{y}^\eta,\allowbreak\widehat{\mu}{}_\mathsf{a}^f,\allowbreak\widehat{\mu}{}_\mathsf{y}^f)$ and $(\boldsymbol{g}_{\mathsf{x}}^f,\boldsymbol{g}_{\mathsf{x},\mathsf{cfd}}^f,\boldsymbol{g}_{\mathsf{a}}^f,\boldsymbol{g}_\mathsf{y}^f)$, $(\boldsymbol{y}_{\mathsf{x}}^f,\boldsymbol{y}_{\mathsf{x},\mathsf{cfd}}^f,\boldsymbol{y}_{\mathsf{a}}^f,\boldsymbol{y}_\mathsf{y}^f)$, $(\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^f,\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f)$, $(\boldsymbol{\eta}_\mathsf{a}^f,\boldsymbol{\eta}_\mathsf{y}^f,\widehat{\boldsymbol{\eta}}_\mathsf{a}^{f,\mathsf{loo}},\widehat{\boldsymbol{\eta}}_\mathsf{y}^{f,\mathsf{loo}},\widehat{\boldsymbol{\eta}}_\mathsf{a}^f,\widehat{\boldsymbol{\eta}}_\mathsf{y}^f)$.* ### Proof of  The construction is as follows. Let $\boldsymbol{D}= \text{\rm diag}(1,{\overline{\pi}},\zeta_\mathsf{a}^\theta,\zeta_\mathsf{y}^\theta)$, $\widehat{\zeta}{}_{51}^v = - \beta_{\mathsf{a}\mathsf{a}} \zeta_\mathsf{a}^\theta$, $\widehat{\zeta}{}_{61}^v = - \beta_{\mathsf{y}\mathsf{a}} \zeta_\mathsf{y}^\theta$, and $\widehat{\zeta}{}_{62}^v = - \zeta_\mathsf{y}^\theta$.
Then set $$\begin{gathered} \boldsymbol{K}_g = \begin{pmatrix} {\boldsymbol 0}_{2\times2} & {\boldsymbol 0}_{2\times4} \\ {\boldsymbol 0}_{4\times2} & \frac{1}{n} \boldsymbol{D} \boldsymbol{S} \boldsymbol{D} \end{pmatrix}, \qquad \boldsymbol{K}_h = \begin{pmatrix} \langle\!\langle\boldsymbol{\Theta}\rangle\!\rangle& {\boldsymbol 0}_{2 \times 2} & \langle\!\langle\boldsymbol{\Theta}, \widehat{\boldsymbol{\Theta}}^f \rangle\!\rangle_{L_2} \\[5pt] {\boldsymbol 0}_{2 \times 2} & {\boldsymbol 0}_{2 \times 2} & {\boldsymbol 0}_{2 \times 2} \\[5pt] \langle\!\langle\widehat{\boldsymbol{\Theta}}^f , \boldsymbol{\Theta}\rangle\!\rangle_{L_2} & {\boldsymbol 0}_{2 \times 2} & \langle\!\langle\widehat{\boldsymbol{\Theta}}^f \rangle\!\rangle_{L_2} \end{pmatrix}, \\ \nu_{5,0} = \widehat{\mu}{}_\mathsf{a}^f - \langle \boldsymbol{\mu}_{\mathsf{x}} , \widehat{\boldsymbol{\theta}}{}_\mathsf{a}^f\rangle_{L_2}, \qquad \nu_{6,0} = \widehat{\mu}{}_\mathsf{y}^f - \langle \boldsymbol{\mu}_{\mathsf{x}} , \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f\rangle_{L_2}, \qquad \nu_{5,\mathsf{x}} = \langle \boldsymbol{\mu}_{\mathsf{x}} , \widehat{\boldsymbol{\theta}}{}_\mathsf{a}^f\rangle_{L_2}, \qquad \nu_{6,\mathsf{x}} = \langle \boldsymbol{\mu}_{\mathsf{x}} , \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f\rangle_{L_2}. 
\end{gathered}$$ Then, with these definitions, set $\boldsymbol{Z}_v^\top={\mathbf I}_{\boldsymbol{K}_h}^\top\widehat{\boldsymbol{Z}}_v^\top$ and $\boldsymbol{Z}_u=\text{\rm diag}(0,0,0,0,n\zeta_\mathsf{a}^\eta,n\zeta_\mathsf{y}^\eta)$, where $$\begin{gathered} \widehat{\boldsymbol{Z}}_v = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\[5pt] 0 & 0 & 0 & 0 & 0 & 0 \\[5pt] 0 & 0 & 0 & 0 & 0 & 0 \\[5pt] -{\overline{\pi}}\alpha_1 & 0 & 0 & 0 & 0 & 0 \\[5pt] -\beta_{\mathsf{a}\mathsf{a}}\zeta_{\mathsf{a}}^\theta & 0 & 0 & 0 & \zeta_{\mathsf{a}}^\theta & 0 \\[5pt] -\beta_{\mathsf{y}\mathsf{a}}\zeta_\mathsf{y}^\theta & -\zeta_\mathsf{y}^\theta & 0 & 0 & 0 & \zeta_{\mathsf{y}}^\theta \end{pmatrix}. \end{gathered}$$ We construct a state evolution with these parameters by setting $\boldsymbol{y}_1^{\mathsf{se}} = \boldsymbol{a}^f$, $\boldsymbol{y}_2^\mathsf{se}= \boldsymbol{y}^f$, $\boldsymbol{w}^\mathsf{se}= w(\boldsymbol{\eta}_\mathsf{a}^f)$, and $$\begin{gathered} \boldsymbol{g}_1^{\mathsf{se}} = {\boldsymbol 0}, \quad \boldsymbol{g}_2^{\mathsf{se}} = {\boldsymbol 0}, \quad \boldsymbol{g}_3^{\mathsf{se}} = \boldsymbol{g}_{\mathsf{x}}^f, \quad \boldsymbol{g}_4^{\mathsf{se}} = {\overline{\pi}}\boldsymbol{g}_{\mathsf{x},\mathsf{cfd}}^f, \quad \boldsymbol{g}_5^{\mathsf{se}} = \zeta_{\mathsf{a}}^\theta\boldsymbol{g}_\mathsf{a}^f, \quad \boldsymbol{g}_6^{\mathsf{se}} = \zeta_{\mathsf{y}}^\theta \boldsymbol{g}_\mathsf{y}^f, \\ % \boldsymbol{h}_1^{\mathsf{se}} = \boldsymbol{\eta}_\mathsf{a}^f - \mu_\mathsf{a}\boldsymbol{1}, \quad \boldsymbol{h}_2^{\mathsf{se}} = \boldsymbol{\eta}_\mathsf{y}^f - \mu_\mathsf{y} \boldsymbol{1}, \quad \boldsymbol{h}_3^{\mathsf{se}} = {\boldsymbol 0}, \quad \boldsymbol{h}_4^{\mathsf{se}} = {\boldsymbol 0}, \quad \boldsymbol{h}_5^{\mathsf{se}} = \widehat{\boldsymbol{\eta}}_\mathsf{a}^{f,\mathsf{loo}} - \widehat{\mu}{}_\mathsf{a}^f \boldsymbol{1}, \quad \boldsymbol{h}_6^{\mathsf{se}} = \widehat{\boldsymbol{\eta}}_\mathsf{y}^{f,\mathsf{loo}} -
\widehat{\mu}{}_\mathsf{y}^f \boldsymbol{1}, \\ % \boldsymbol{u}_1^{\mathsf{se}} = {\boldsymbol 0}, \quad \boldsymbol{u}_2^{\mathsf{se}} = {\boldsymbol 0}, \quad \boldsymbol{u}_3^{\mathsf{se}} = \frac{\boldsymbol{1}}{n}, \quad \boldsymbol{u}_4^{\mathsf{se}} = \frac{\boldsymbol{a}^f}{n}, \quad \boldsymbol{u}_5^{\mathsf{se}} = \frac{\widehat{\boldsymbol{\psi}}{}_\mathsf{a}^f}{n}, \quad \boldsymbol{u}_6^{\mathsf{se}} = \frac{\widehat{\boldsymbol{\psi}}{}_\mathsf{y}^f}{n}, \\ % \boldsymbol{v}_1^{\mathsf{se}} = \boldsymbol{\theta}_\mathsf{a}, \quad \boldsymbol{v}_2^{\mathsf{se}} = \boldsymbol{\theta}_\mathsf{y}, \quad \boldsymbol{v}_3^{\mathsf{se}} = {\boldsymbol 0}, \quad \boldsymbol{v}_4^{\mathsf{se}} = {\boldsymbol 0}, \quad \boldsymbol{v}_5^{\mathsf{se}} = \widehat{\boldsymbol{\theta}}{}_\mathsf{a}^f, \quad \boldsymbol{v}_6^{\mathsf{se}} = \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f. \end{gathered}$$ As for , we prove the lemma in two steps.\ **Step 1: The random variables just defined are distributed according to the state evolution with parameters $(\boldsymbol{S},\allowbreak\beta_{\mathsf{a}\mathsf{a}},\allowbreak\beta_{\mathsf{y}\mathsf{a}},\allowbreak\zeta_\mathsf{a}^\theta,\allowbreak\zeta_\mathsf{a}^\eta,\allowbreak\zeta_\mathsf{y}^\theta,\allowbreak\zeta_\mathsf{y}^\eta,\allowbreak\widehat{\mu}{}_\mathsf{a}^f,\allowbreak\widehat{\mu}{}_\mathsf{y}^f)$.** That is, we check $\boldsymbol{G}^{\mathsf{se}} \sim \mathsf{N}(0,\boldsymbol{K}_g \otimes {\mathbf I}_p)$, $\boldsymbol{H}^{\mathsf{se}} \sim \mathsf{N}(0,\boldsymbol{K}_h \otimes {\mathbf I}_n)$, and [\[eq:SE-opt\]](#eq:SE-opt){reference-type="ref" reference="eq:SE-opt"}.
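A covariance of the form $\boldsymbol{K}_g \otimes {\mathbf I}_p$ simply means that the $p$ coordinate-tuples of $\boldsymbol{G}^{\mathsf{se}}$ are i.i.d. $\mathsf{N}(0,\boldsymbol{K}_g)$. A minimal numerical sketch of this sampling scheme, with a hypothetical $2\times 2$ positive definite block standing in for $\boldsymbol{K}_g$:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 10_000  # number of iid coordinate-tuples

# Hypothetical 2x2 positive definite block standing in for K_g.
K = np.array([[2.0, 0.5],
              [0.5, 1.0]])
L = np.linalg.cholesky(K)            # K = L L^T
Z = rng.standard_normal((p, 2))      # iid N(0, 1) entries
G = Z @ L.T                          # rows of G are iid N(0, K), i.e. G ~ N(0, K x I_p)

assert np.allclose(L @ L.T, K)
emp = G.T @ G / p                    # empirical coordinate covariance, close to K
```

The same construction, with the relevant block of $\boldsymbol{K}_h$, produces draws of $\boldsymbol{H}^{\mathsf{se}}$.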
The fact that $\boldsymbol{G}^{\mathsf{se}} \sim \mathsf{N}(0,\boldsymbol{K}_g \otimes {\mathbf I}_p)$ and $\boldsymbol{H}^{\mathsf{se}} \sim \mathsf{N}(0,\boldsymbol{K}_h \otimes {\mathbf I}_n)$ follows from the fact that $(g_{\mathsf{x},i}^f,g_{\mathsf{x},\mathsf{cfd},i}^f,g_{\mathsf{a},i}^f,g_{\mathsf{y},i}^f) \mathrel{\stackrel{\mathrm{iid}}{\sim}} \mathsf{N}(0, \boldsymbol{S}/n)$ and [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="ref" reference="eq:fixpt-general"} as well as our definition of $\boldsymbol{K}_g,\boldsymbol{K}_h$ above. That $\boldsymbol{u}_1^{\mathsf{se}}$, $\boldsymbol{u}_2^{\mathsf{se}}$, $\boldsymbol{u}_3^{\mathsf{se}}$, and $\boldsymbol{u}_4^{\mathsf{se}}$ satisfy [\[eq:SE-opt\]](#eq:SE-opt){reference-type="ref" reference="eq:SE-opt"} follows from the definition of $\phi_{k,u}$ for $k = 1,2,3,4$. By the KKT conditions for [\[eq:lin-predict-f\]](#eq:lin-predict-f){reference-type="ref" reference="eq:lin-predict-f"}, we have $\widehat{\boldsymbol{\eta}}_\mathsf{a}^f = \widehat{\boldsymbol{\eta}}_\mathsf{a}^{f,\mathsf{loo}} - \zeta_\mathsf{a}^\eta \widehat{\boldsymbol{\psi}}{}_\mathsf{a}^f = \widehat{\mu}{}_\mathsf{a}^f \boldsymbol{1}+ \boldsymbol{h}_5^\mathsf{se}- n \zeta_\mathsf{a}^\eta \boldsymbol{u}_5^{\mathsf{se}}$ and, likewise, $\widehat{\boldsymbol{\eta}}_\mathsf{y}^f = \widehat{\mu}{}_\mathsf{y}^f \boldsymbol{1}+ \boldsymbol{h}_6^{\mathsf{se}} - n \zeta_\mathsf{y}^\eta \boldsymbol{u}_6^{\mathsf{se}}$. Thus, under the change of variables above, the KKT conditions for [\[eq:lin-predict-f\]](#eq:lin-predict-f){reference-type="ref" reference="eq:lin-predict-f"} are equivalent to the KKT conditions [\[eq:fenchel-legendre\]](#eq:fenchel-legendre){reference-type="eqref" reference="eq:fenchel-legendre"}. The first line of equation [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} follows.
That $\boldsymbol{v}_1^{\mathsf{se}}$, $\boldsymbol{v}_2^{\mathsf{se}}$, $\boldsymbol{v}_3^{\mathsf{se}}$, and $\boldsymbol{v}_4^{\mathsf{se}}$ satisfy equation [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} follows from the definition of $\phi_{k,v}$ for $k = 1,2,3,4$. Because indices 3 and 4 are innovative with respect to the covariance in equation [\[eq:fixed-design-linear-predictor-dist\]](#eq:fixed-design-linear-predictor-dist){reference-type="eqref" reference="eq:fixed-design-linear-predictor-dist"}, we have that indices 5 and 6 are innovative with respect to $\boldsymbol{K}_h$, so $\zeta_{55}^v = \widehat{\zeta}{}_{55}^v$ and $\zeta_{66}^v = \widehat{\zeta}{}_{66}^v$. Further, by the same computation from the proof of , we have that $\zeta_{51}^v \boldsymbol{\theta}_1 = \widehat{\zeta}{}_{51}^v \boldsymbol{\theta}_1$ and $\zeta_{61}^v \boldsymbol{\theta}_1 + \zeta_{62}^v \boldsymbol{\theta}_2 = \widehat{\zeta}{}_{61}^v \boldsymbol{\theta}_1 + \widehat{\zeta}{}_{62}^v \boldsymbol{\theta}_2 = \widehat{\zeta}{}_{61}^v \boldsymbol{\theta}_1 - \widehat{\zeta}{}_{66}^v \boldsymbol{\theta}_2$. Thus, we see that the KKT conditions for equation [\[eq:param-fit-f\]](#eq:param-fit-f){reference-type="eqref" reference="eq:param-fit-f"} are equivalent to the KKT conditions for the second line of equation [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"}.
The second line of equation [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} follows.\ **Step 2: The fixed point equations [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"} are satisfied by these parameters.** The first line of [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="ref" reference="eq:fixpt-general"} follows from the covariance in equation [\[eq:fixed-design-score\]](#eq:fixed-design-score){reference-type="eqref" reference="eq:fixed-design-score"} and the third line of [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"}. Using the derivative identities [\[eq:score-deriv-explicit\]](#eq:score-deriv-explicit){reference-type="eqref" reference="eq:score-deriv-explicit"} and the final two lines of equation [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"}, (a) implies that $\langle\!\langle\boldsymbol{G}^\mathsf{se}, \boldsymbol{V}^\mathsf{se}\rangle\!\rangle= \boldsymbol{K}_g \boldsymbol{Z}_u^\top$ and $\langle\!\langle\boldsymbol{H}^\mathsf{se}, \boldsymbol{U}^\mathsf{se}\rangle\!\rangle= \boldsymbol{K}_h \widehat{\boldsymbol{Z}}_v^\top$ for $\widehat{\boldsymbol{Z}}_v$ and $\boldsymbol{Z}_u$ as defined above. Because $\boldsymbol{K}_h {\mathbf I}_{\boldsymbol{K}_h}^\top = \boldsymbol{K}_h$ (), we have $\langle\!\langle \boldsymbol{H}^\mathsf{se}, \boldsymbol{U}^\mathsf{se}\rangle\!\rangle= \boldsymbol{K}_h \boldsymbol{Z}_v^\top$. The identities $\nu_{5,\mathsf{x}} = \langle \boldsymbol{\mu}_{\mathsf{x}} , \boldsymbol{v}_5^{\mathsf{se}} \rangle_{L_2}$ and $\nu_{6,\mathsf{x}} = \langle \boldsymbol{\mu}_{\mathsf{x}},\boldsymbol{v}_6^{\mathsf{se}}\rangle_{L_2}$ hold by construction.
Because $\boldsymbol{u}_5^{\mathsf{se}}$ is a constant times $\widehat{\boldsymbol{\psi}}{}_\mathsf{a}^f$ and $\boldsymbol{u}_6^{\mathsf{se}}$ is a constant times $\widehat{\boldsymbol{\psi}}{}_\mathsf{y}^f$, the equations $\langle \boldsymbol{1}, \boldsymbol{u}_5^\mathsf{se}\rangle_{L_2} = \langle \boldsymbol{1}, \boldsymbol{u}_6^{\mathsf{se}} \rangle_{L_2} = 0$ follow from the equations $\langle \boldsymbol{1},\widehat{\boldsymbol{\psi}}{}_\mathsf{a}^f \rangle_{L_2} = \langle \boldsymbol{1}, \widehat{\boldsymbol{\psi}}{}_\mathsf{y}^f \rangle_{L_2} = 0$. The innovation compatibility of $\boldsymbol{Z}_v$ with $\boldsymbol{K}_h$ holds by construction (because we multiply by ${\mathbf I}_{\boldsymbol{K}_h}^\top$). Indices 5 and 6 are innovative with respect to $\boldsymbol{K}_g$ because indices 3 and 4 are innovative with respect to $\boldsymbol{S}$, whence $\boldsymbol{Z}_u$ is innovation compatible with $\boldsymbol{K}_g$. Thus, we have verified the fixed point relations [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}.
Using the relations $$\begin{aligned} \widehat{\boldsymbol{\eta}}_\mathsf{a}^f = \widehat{\mu}{}_\mathsf{a}^f \boldsymbol{1}+ \boldsymbol{h}_5^\mathsf{se}- n \zeta_\mathsf{a}^\eta \boldsymbol{u}_5^{\mathsf{se}}, \quad \mbox{and} \quad \widehat{\boldsymbol{\eta}}_\mathsf{y}^f = \widehat{\mu}{}_\mathsf{y}^f \boldsymbol{1}+ \boldsymbol{h}_6^{\mathsf{se}} - n \zeta_\mathsf{y}^\eta \boldsymbol{u}_6^{\mathsf{se}},\end{aligned}$$ straightforward but tedious algebra shows that when the mapping of  is applied to these random variables and parameters, we obtain $(\boldsymbol{S}, \allowbreak \beta_{\mathsf{a}\mathsf{a}}, \allowbreak \beta_{\mathsf{y}\mathsf{a}}, \allowbreak \zeta_\mathsf{a}^\theta, \allowbreak \zeta_\mathsf{a}^\eta, \allowbreak \zeta_\mathsf{y}^\theta, \allowbreak\zeta_\mathsf{y}^\eta,\allowbreak \widehat{\mu}{}_\mathsf{a}^f,\allowbreak\widehat{\mu}{}_\mathsf{y}^f)$ and $(\boldsymbol{g}_{\mathsf{x}}^f, \boldsymbol{g}_{\mathsf{x},\mathsf{cfd}}^f, \boldsymbol{g}_{\mathsf{a}}^f, \boldsymbol{g}_\mathsf{y}^f)$, $(\boldsymbol{y}_{\mathsf{x}}^f, \boldsymbol{y}_{\mathsf{x},\mathsf{cfd}}^f, \boldsymbol{y}_{\mathsf{a}}^f, \boldsymbol{y}_\mathsf{y}^f)$, $(\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^f, \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f)$, $(\boldsymbol{\eta}_\mathsf{a}^f, \boldsymbol{\eta}_\mathsf{y}^f, \widehat{\boldsymbol{\eta}}_\mathsf{a}^{f,\mathsf{loo}},\widehat{\boldsymbol{\eta}}_\mathsf{y}^{f,\mathsf{loo}},\widehat{\boldsymbol{\eta}}_\mathsf{a}^f,\widehat{\boldsymbol{\eta}}_\mathsf{y}^f)$. ## Proof of  {#sec:regr-fixpt-exist-unique-bound} guarantees existence of solutions to the fixed point equations [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}. Moreover, these satisfy the bounds of , and via , we can use them to construct a solution to the fixed point equations [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"}. Thus, we have established existence.
For any solution so constructed, the asserted bounds on the standard errors, effective regularization, bias, and estimates follow from the bounds in  pushed through the construction equations [\[eq:from-uv-to-theta-eta-psi\]](#eq:from-uv-to-theta-eta-psi){reference-type="eqref" reference="eq:from-uv-to-theta-eta-psi"} and [\[eq:fix-des-from-se\]](#eq:fix-des-from-se){reference-type="eqref" reference="eq:fix-des-from-se"}. ensures uniqueness: because its mapping inverts the mapping of , distinct solutions to the fixed point equations [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"} must map to distinct solutions to the fixed point equations [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}. But  ensures that this cannot happen. ## Proof of  {#proof-of-1} As justified in , we can assume without loss of generality that $\boldsymbol{\Sigma}= {\mathbf I}_p$.\ **Proof of part (a).** By the change of variables in equations [\[eq:from-uv-to-theta-eta-psi\]](#eq:from-uv-to-theta-eta-psi){reference-type="eqref" reference="eq:from-uv-to-theta-eta-psi"} and [\[eq:po-to-regr-cov\]](#eq:po-to-regr-cov){reference-type="eqref" reference="eq:po-to-regr-cov"}, we have that this concentration is equivalent to $v_{k,0}^{\mathsf{po}} + \langle \boldsymbol{\mu}_{\mathsf{x}},\boldsymbol{v}_k^{\mathsf{po}}\rangle \ensuremath{\stackrel{\bullet}{=}} \nu_{k,0} + \langle \boldsymbol{\mu}_{\mathsf{x}},\boldsymbol{v}_k^{\mathsf{se}} \rangle_{L_2}$, which follows from .\ **Proof of part (b).** We defer the proof of part (b) to .\ **Proof of part (c).** Recall that $\boldsymbol{X}= \boldsymbol{1} \boldsymbol{\mu}_\mathsf{x}^\top + \boldsymbol{A}$, whence $$\begin{aligned} \widehat{\boldsymbol{\mu}}{}_\mathsf{x}& = \frac{1}{n} \sum_{i=1}^n \boldsymbol{x}_i = \boldsymbol{X}^\top \boldsymbol{1}/ n = \boldsymbol{\mu}_\mathsf{x}+ \boldsymbol{A}^\top \boldsymbol{1}/ n = \boldsymbol{\mu}_\mathsf{x}-
\boldsymbol{A}^\top \boldsymbol{u}_3^{\mathsf{po}} = \boldsymbol{\mu}_\mathsf{x}+ \boldsymbol{g}_3^{\mathsf{po}}, \quad \mbox{and} \\ % \widehat{\boldsymbol{\mu}}{}_{\mathsf{x},\mathsf{cfd}} & = \frac{1}{n_1} \sum_{i=1}^n \ensuremath{a}_i \boldsymbol{x}_i = \boldsymbol{X}^\top \boldsymbol{a}/ \langle \boldsymbol{1},\boldsymbol{a}\rangle = \boldsymbol{\mu}_\mathsf{x}- \boldsymbol{A}^\top \boldsymbol{u}_4^{\mathsf{po}} / (-\langle \boldsymbol{1}, \boldsymbol{u}_4^{\mathsf{po}} \rangle) \ensuremath{\stackrel{\bullet}{=}}\boldsymbol{\mu}_\mathsf{x}+ \alpha_1 \boldsymbol{\theta}_1 + \boldsymbol{g}_4^{\mathsf{po}}/{\overline{\pi}},\end{aligned}$$ where we have used equations [\[eq:uPO-identity\]](#eq:uPO-identity){reference-type="eqref" reference="eq:uPO-identity"} and [\[eq:loo-noise-def\]](#eq:loo-noise-def){reference-type="eqref" reference="eq:loo-noise-def"} and the fact that $\langle \boldsymbol{1},\boldsymbol{u}_4^{\mathsf{po}} \rangle \ensuremath{\stackrel{\bullet}{=}}-{\overline{\pi}}$ by . By [\[eq:po-to-regr-cov\]](#eq:po-to-regr-cov){reference-type="ref" reference="eq:po-to-regr-cov"}, we have $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}= \boldsymbol{v}_5^{\mathsf{po}}$ and $\widehat{\boldsymbol{\theta}}{}_\mathsf{y}= \boldsymbol{v}_6^{\mathsf{po}}$. 
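The displayed identities for $\widehat{\boldsymbol{\mu}}{}_\mathsf{x}$ and $\widehat{\boldsymbol{\mu}}{}_{\mathsf{x},\mathsf{cfd}}$ rest on the exact algebraic decomposition $\boldsymbol{X}= \boldsymbol{1}\boldsymbol{\mu}_\mathsf{x}^\top + \boldsymbol{A}$, before any distributional claims enter. A quick numerical sketch of the two identities, with hypothetical dimensions, Gaussian noise $\boldsymbol{A}$, and a Bernoulli treatment vector:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 3
mu_x = np.array([1.0, -2.0, 0.5])

A = rng.standard_normal((n, p))                 # centered design noise
X = np.outer(np.ones(n), mu_x) + A              # X = 1 mu_x^T + A
a = rng.binomial(1, 0.4, size=n).astype(float)  # Bernoulli treatment indicators
n1 = a.sum()

# hat{mu}_x = X^T 1 / n = mu_x + A^T 1 / n, exactly
mu_hat = X.mean(axis=0)
assert np.allclose(mu_hat, mu_x + A.T @ np.ones(n) / n)

# hat{mu}_{x,cfd} = X^T a / <1, a> = mu_x + A^T a / <1, a>, exactly
mu_hat_cfd = (X.T @ a) / n1
assert np.allclose(mu_hat_cfd, mu_x + A.T @ a / n1)
```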
By equations [\[eq:from-uv-to-theta-eta-psi\]](#eq:from-uv-to-theta-eta-psi){reference-type="eqref" reference="eq:from-uv-to-theta-eta-psi"} and [\[eq:eff-loo-est\]](#eq:eff-loo-est){reference-type="eqref" reference="eq:eff-loo-est"} and the concentration of the effective regularization terms (part (b)), we have $$\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^{\mathsf{loo}} \ensuremath{\stackrel{\bullet}{=}}\boldsymbol{v}_5^{\mathsf{po}} + \frac{\nabla \Omega_5(\boldsymbol{v}_5^{\mathsf{po}})}{\zeta_{55}^v}, \qquad \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^{\mathsf{loo}} \ensuremath{\stackrel{\bullet}{=}}\boldsymbol{v}_6^{\mathsf{po}} + \frac{\nabla \Omega_6(\boldsymbol{v}_6^{\mathsf{po}})}{\zeta_{66}^v}.$$ Thus, by  together with [\[eq:Z-form\]](#eq:Z-form){reference-type="ref" reference="eq:Z-form"} and the KKT conditions for equations [\[eq:min-max\]](#eq:min-max){reference-type="eqref" reference="eq:min-max"} and [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"}, we have $$\begin{aligned} \phi\Big( \widehat{\boldsymbol{\theta}}{}_{\mathsf{a}}, \widehat{\boldsymbol{\theta}}{}_{\mathsf{y}}, &\widehat{\boldsymbol{\mu}}{}_{\mathsf{x}}, \widehat{\boldsymbol{\mu}}{}_{\mathsf{x},\mathsf{cfd}}, \widehat{\boldsymbol{\theta}}{}_{\mathsf{a}}^{\mathsf{loo}}, \widehat{\boldsymbol{\theta}}{}_{\mathsf{y}}^{\mathsf{loo}} \Big) \ensuremath{\stackrel{\bullet}{=}}\phi\Big( \boldsymbol{v}_5^{\mathsf{po}}, \boldsymbol{v}_6^{\mathsf{po}}, \boldsymbol{\mu}_\mathsf{x}+ \boldsymbol{g}_3^\mathsf{po}, \boldsymbol{\mu}_{\mathsf{x},\mathsf{cfd}} + \boldsymbol{g}_4^{\mathsf{po}}/{\overline{\pi}}, \boldsymbol{v}_5^{\mathsf{po}} + \frac{\nabla \Omega_5(\boldsymbol{v}_5^{\mathsf{po}})}{\zeta_{55}^v}, \boldsymbol{v}_6^{\mathsf{po}} + \frac{\nabla \Omega_6(\boldsymbol{v}_6^{\mathsf{po}})}{\zeta_{66}^v} \Big) \\ % & \ensuremath{\stackrel{\bullet}{=}}\mathbb{E}\Big[ \phi\Big( \boldsymbol{v}_5^{\mathsf{se}}, \boldsymbol{v}_6^{\mathsf{se}}, \boldsymbol{\mu}_\mathsf{x}+
\boldsymbol{g}_3^\mathsf{se}, \boldsymbol{\mu}_{\mathsf{x},\mathsf{cfd}} + \boldsymbol{g}_4^{\mathsf{se}}/{\overline{\pi}}, \boldsymbol{v}_5^{\mathsf{se}} + \frac{\nabla \Omega_5(\boldsymbol{v}_5^{\mathsf{se}})}{\zeta_{55}^v}, \boldsymbol{v}_6^{\mathsf{se}} + \frac{\nabla \Omega_6(\boldsymbol{v}_6^{\mathsf{se}})}{\zeta_{66}^v} \Big) \Big] \\ % & = \mathbb{E}\Big[ \phi\Big( \boldsymbol{v}_5^{\mathsf{se}}, \boldsymbol{v}_6^{\mathsf{se}}, \boldsymbol{\mu}_\mathsf{x}+ \boldsymbol{g}_3^\mathsf{se}, \boldsymbol{\mu}_{\mathsf{x},\mathsf{cfd}} + \boldsymbol{g}_4^{\mathsf{se}}/{\overline{\pi}}, \boldsymbol{v}_5^{\mathsf{se}} - \frac{\zeta_{51}^v \boldsymbol{v}_1^{\mathsf{se}} + \zeta_{55}^v\boldsymbol{v}_5^{\mathsf{se}} - \boldsymbol{g}_5^{\mathsf{se}}}{\zeta_{55}^v}, \boldsymbol{v}_6^{\mathsf{se}} - \frac{\zeta_{61}^v \boldsymbol{v}_1^{\mathsf{se}} - \zeta_{66}^v\boldsymbol{v}_2^{\mathsf{se}} + \zeta_{66}^v \boldsymbol{v}_6^{\mathsf{se}} - \boldsymbol{g}_6^{\mathsf{se}}}{\zeta_{66}^v} \Big) \Big] \\ % & = \mathbb{E}\Big[ \phi\Big( \widehat{\boldsymbol{\theta}}{}_\mathsf{a}^f, \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f, \boldsymbol{y}_\mathsf{x}^f, \boldsymbol{y}_{\mathsf{x},\mathsf{cfd}}^f, \boldsymbol{y}_\mathsf{y}^f, \boldsymbol{y}_\mathsf{a}^f \Big) \Big], \end{aligned}$$ where in the last line we have used the change of variables in equations [\[eq:from-uv-to-theta-eta-psi\]](#eq:from-uv-to-theta-eta-psi){reference-type="eqref" reference="eq:from-uv-to-theta-eta-psi"} and [\[eq:fix-des-from-se\]](#eq:fix-des-from-se){reference-type="eqref" reference="eq:fix-des-from-se"}.\ **Proof of part (d).** By [\[eq:loo-noise-def\]](#eq:loo-noise-def){reference-type="ref" reference="eq:loo-noise-def"}, we have $$\begin{aligned} \boldsymbol{\eta}_\mathsf{a}& = \mu_\mathsf{a}\boldsymbol{1}+ \boldsymbol{A}\boldsymbol{v}_1^{\mathsf{po}} = \mu_\mathsf{a}\boldsymbol{1}+ \boldsymbol{h}_1^{\mathsf{po}} - \zeta_{11}^u \boldsymbol{u}_1^{\mathsf{po}} = \mu_\mathsf{a}\boldsymbol{1}+ 
\boldsymbol{h}_1^{\mathsf{po}}, \quad \mbox{and} \\ % \boldsymbol{\eta}_\mathsf{y}& = \mu_\mathsf{y}\boldsymbol{1}+ \boldsymbol{A}\boldsymbol{v}_2^{\mathsf{po}} = \mu_\mathsf{y}\boldsymbol{1}+ \boldsymbol{h}_2^{\mathsf{po}} - \zeta_{21}^u \boldsymbol{u}_1^{\mathsf{po}} - \zeta_{22}^u \boldsymbol{u}_2^{\mathsf{po}} = \mu_\mathsf{y}\boldsymbol{1}+ \boldsymbol{h}_2^{\mathsf{po}}.\end{aligned}$$ As we noted in [\[eq:uPO-identity\]](#eq:uPO-identity){reference-type="ref" reference="eq:uPO-identity"}, we have $\widehat{\boldsymbol{\psi}}{}_\mathsf{a}= n \boldsymbol{u}_5^{\mathsf{po}}$ and $\widehat{\boldsymbol{\psi}}{}_\mathsf{y}= n \boldsymbol{u}_6^{\mathsf{po}}$. Further, by the change of variables [\[eq:fix-des-from-se\]](#eq:fix-des-from-se){reference-type="eqref" reference="eq:fix-des-from-se"}, $\boldsymbol{\eta}_\mathsf{a}^f = \mu_\mathsf{a}\boldsymbol{1}+ \boldsymbol{h}_1^{\mathsf{se}}$, $\boldsymbol{\eta}_\mathsf{y}^f = \mu_\mathsf{y}\boldsymbol{1}+ \boldsymbol{h}_2^{\mathsf{se}}$, $\widehat{\boldsymbol{\psi}}{}_\mathsf{a}^f = n \boldsymbol{u}_5^{\mathsf{se}}$, and $\widehat{\boldsymbol{\psi}}{}_\mathsf{y}^f = n \boldsymbol{u}_6^{\mathsf{se}}$. Thus, part (d) is a consequence of .
# Proof of  {#proof-of-2} We prove the state evolution in  using an inductive argument based on the following induction hypothesis: Hypothesis (k) : holds for the first $k$ optimization problems; that is, with $\boldsymbol{V}^{\mathsf{po},\perp}$ replaced by $\boldsymbol{V}_k^{\mathsf{po},\perp}$, $\boldsymbol{G}^{\mathsf{po},\perp}$ replaced by $\boldsymbol{G}_k^{\mathsf{po},\perp}$, $(nu_{\ell,i}^{\mathsf{po}})_{\ell=1}^6$, $(nu_{\ell,i}^{\mathsf{se}})_{\ell=1}^6$, $(h_{\ell,i}^{\mathsf{po}})_{\ell=1}^5$, and $(h_{\ell,i}^{\mathsf{se}})_{\ell=1}^5$ replaced by $(nu_{\ell,i}^{\mathsf{po}})_{\ell=1}^k$, $(nu_{\ell,i}^{\mathsf{se}})_{\ell=1}^k$, $(h_{\ell,i}^{\mathsf{po}})_{\ell=1}^{k \wedge 5}$, and $(h_{\ell,i}^{\mathsf{se}})_{\ell=1}^{k \wedge 5}$, and with $\ell \in \{5,6\} \cap [k]$ in the third bullet point. We take $k = 4$ as the base case, because it does not require Gaussian comparison techniques to prove. **Lemma 12** (Base case). *Hypothesis (k) holds for $k = 4$.* *Proof of .* By equations [\[eq:objectives\]](#eq:objectives){reference-type="eqref" reference="eq:objectives"} and [\[eq:SE-penalties\]](#eq:SE-penalties){reference-type="eqref" reference="eq:SE-penalties"}, we have the relations $\boldsymbol{u}_1^{\mathsf{po}} = \boldsymbol{u}_2^{\mathsf{po}} = {\boldsymbol 0}$, $\boldsymbol{u}_3^{\mathsf{po}} = \frac{\boldsymbol{1}}{n}$, $\boldsymbol{u}_4^{\mathsf{po}} = \frac{\boldsymbol{a}}{n} = \frac{\boldsymbol{y}_1}{n}$, $\boldsymbol{v}_1^{\mathsf{po}} = \boldsymbol{\theta}_1,\; \boldsymbol{v}_2^{\mathsf{po}} = \boldsymbol{\theta}_2$, $\boldsymbol{v}_3^{\mathsf{po}} = \boldsymbol{v}_4^{\mathsf{po}} = {\boldsymbol 0}$, and $\boldsymbol{u}_1^{\mathsf{se}} = \boldsymbol{u}_2^{\mathsf{se}} = {\boldsymbol 0}$, $\boldsymbol{u}_3^{\mathsf{se}} = \frac{\boldsymbol{1}}{n}$, $\boldsymbol{u}_4^{\mathsf{se}} = \frac{\boldsymbol{a}^{\mathsf{se}}}{n}$, $\boldsymbol{v}_1^{\mathsf{se}} = \boldsymbol{\theta}_1,\; \boldsymbol{v}_2^{\mathsf{se}} = \boldsymbol{\theta}_2$,
$\boldsymbol{v}_3^{\mathsf{se}} = \boldsymbol{v}_4^{\mathsf{se}} = {\boldsymbol 0}$. Using the definition of the leave-one-out noise and the constraints on $\boldsymbol{Z}_v,\boldsymbol{Z}_u$ in [\[eq:Z-form\]](#eq:Z-form){reference-type="ref" reference="eq:Z-form"}, $$\label{eq:gh-1to4} \begin{gathered} \boldsymbol{g}_1^{\mathsf{po}} = - \boldsymbol{A}^\top \boldsymbol{u}_1^{\mathsf{po}} = {\boldsymbol 0}, \;\; \boldsymbol{g}_2^{\mathsf{po}} = - \boldsymbol{A}^\top \boldsymbol{u}_2^{\mathsf{po}} = {\boldsymbol 0}, \;\; \boldsymbol{g}_3^{\mathsf{po}} = \boldsymbol{A}^\top \boldsymbol{1}/n, \;\; \boldsymbol{g}_4^{\mathsf{po}} = -{\overline{\pi}}\alpha_1 \boldsymbol{\theta}_1 + \boldsymbol{A}^\top \boldsymbol{y}_1/n, \\ \boldsymbol{h}_1^{\mathsf{po}} = \boldsymbol{A}\boldsymbol{v}_1^{\mathsf{po}} = \boldsymbol{A} \boldsymbol{\theta}_1, \;\; \boldsymbol{h}_2^{\mathsf{po}} = \boldsymbol{A}\boldsymbol{v}_2^{\mathsf{po}} = \boldsymbol{A}\boldsymbol{\theta}_2, \;\; \boldsymbol{h}_3^{\mathsf{po}} = \boldsymbol{A}\boldsymbol{v}_3^{\mathsf{po}} = {\boldsymbol 0}, \;\; \boldsymbol{h}_4^{\mathsf{po}} = \boldsymbol{A}\boldsymbol{v}_4^{\mathsf{po}} = {\boldsymbol 0}. \end{gathered}$$ We see that $(\boldsymbol{h}_1^{\mathsf{po}}, \boldsymbol{h}_2^{\mathsf{po}}, \boldsymbol{h}_3^{\mathsf{po}}, \boldsymbol{h}_4^{\mathsf{po}}) \sim \mathsf{N}(0,\boldsymbol{K}_{h,4} \otimes {\mathbf I}_n)$, with $\boldsymbol{K}_h$ as in [\[eq:K-form\]](#eq:K-form){reference-type="ref" reference="eq:K-form"}. Moreover, by their definitions, $(u_{\ell,i}^{\mathsf{po},\perp})_{\ell=1}^4$, $(h_{\ell,i}^{\mathsf{po},\perp})_{\ell=1}^4$, $\varepsilon_{1,i}$, $\varepsilon_{1,i}'$, $\varepsilon_{2,i}$ have exactly the same distribution as $(u_{\ell,i}^{\mathsf{se},\perp})_{\ell=1}^4$, $(h_{\ell,i}^{\mathsf{se},\perp})_{\ell=1}^4$, $\varepsilon_{1,i}^{\mathsf{se}}$, ${\varepsilon_{1,i}^{\mathsf{se}}}'$, $\varepsilon_{2,i}^{\mathsf{se}}$. They are independent across $i$ and $C$-sub-Gaussian by Assumption A1.
Thus, because $\phi$ is order-2 pseudo-Lipschitz, the left-hand side of equation [\[eq:se-conc-u\]](#eq:se-conc-u){reference-type="eqref" reference="eq:se-conc-u"} has independent terms which are sub-Gamma with variance and scale parameters bounded by $C$ (see, for example, [@miolane2021 Proposition G.5]). The asserted concentration holds by Bernstein's inequality [@boucheronLugosiMassart2013 Section 2.8]. We write $\boldsymbol{A}= \boldsymbol{A}\mathsf{P}_{\boldsymbol{\theta}_1} + \boldsymbol{A}\mathsf{P}_{\boldsymbol{\theta}_1}^\perp = \frac{\boldsymbol{h}_1^{\mathsf{po}} \boldsymbol{\theta}_1^\top}{\|\boldsymbol{\theta}_1\|^2} + \boldsymbol{A} \mathsf{P}_{\boldsymbol{\theta}_1}^\perp$. Note that $\boldsymbol{a}$ depends on $\boldsymbol{A}$ only via $\boldsymbol{h}_1^{\mathsf{po}}$, and $\boldsymbol{A}\mathsf{P}_{\boldsymbol{\theta}_1}^\perp$ is independent of $\boldsymbol{h}_1^{\mathsf{po}}$. We can write $$\begin{gathered} \boldsymbol{g}_3^{\mathsf{po}} = \frac{\boldsymbol{\theta}_1}{\|\boldsymbol{\theta}_1\|^2} \frac{\langle\boldsymbol{h}_1^{\mathsf{po}},\boldsymbol{1}\rangle}{n} + \mathsf{P}_{\boldsymbol{\theta}_1}^\perp \boldsymbol{A}^\top \boldsymbol{1}/n, \qquad \boldsymbol{g}_4^{\mathsf{po}} = -{\overline{\pi}}\alpha_1\boldsymbol{\theta}_1 + \frac{\boldsymbol{\theta}_1}{\|\boldsymbol{\theta}_1\|^2} \frac{\langle\boldsymbol{h}_1^{\mathsf{po}},\boldsymbol{y}_1\rangle}{n} + \frac{\mathsf{P}_{\boldsymbol{\theta}_1}^\perp \boldsymbol{A}^\top \boldsymbol{y}_1}{n}. \end{gathered}$$ Because $\boldsymbol{h}_1^{\mathsf{po}} \sim \mathsf{N}(0,\|\boldsymbol{\theta}_1\|^2{\mathbf I}_n)$, we have that $\langle \boldsymbol{h}_1^{\mathsf{po}} , \boldsymbol{1}\rangle/n \sim \mathsf{N}(0,\|\boldsymbol{\theta}_1\|^2/n)$. Thus, with probability at least $1 - Ce^{-cn\epsilon^2}$, $\Big\|\frac{\boldsymbol{\theta}_1}{\|\boldsymbol{\theta}_1\|^2} \frac{\langle\boldsymbol{h}_1^{\mathsf{po}},\boldsymbol{1}\rangle}{n}\Big\| \leq \epsilon$.
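The rank-one splitting of $\boldsymbol{A}$ along and against $\boldsymbol{\theta}_1$, and the corresponding split of $\boldsymbol{g}_3^{\mathsf{po}}$, are exact linear algebra, independent of the distributional claims. A small numerical check with hypothetical Gaussian stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 8
A = rng.standard_normal((n, p))
theta1 = rng.standard_normal(p)
ss = theta1 @ theta1                          # ||theta_1||^2

h1 = A @ theta1                               # h_1^po = A theta_1
P_perp = np.eye(p) - np.outer(theta1, theta1) / ss

# A = h_1 theta_1^T / ||theta_1||^2 + A P_perp, exactly
assert np.allclose(A, np.outer(h1, theta1) / ss + A @ P_perp)

# the corresponding split of g_3^po = A^T 1 / n
ones = np.ones(n)
g3 = A.T @ ones / n
g3_split = theta1 * (h1 @ ones) / (n * ss) + P_perp @ A.T @ ones / n
assert np.allclose(g3, g3_split)
```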
Moreover, $h_{1,i}^{\mathsf{po}}y_{1,i}$ is $\|\boldsymbol{\theta}_1\|^2$-sub-Gaussian with, by equations [\[eq:model\]](#eq:model){reference-type="eqref" reference="eq:model"}, [\[eq:alpha-12\]](#eq:alpha-12){reference-type="eqref" reference="eq:alpha-12"}, and Gaussian integration by parts, expectation $\mathbb{E}[h_{1,i}^{\mathsf{po}}y_{1,i}] = \| \boldsymbol{\theta}_1\|^2 {\overline{\pi}}\alpha_1$. Thus, for $Z_2 \sim \mathsf{N}(0,1)$, we have by sub-Gaussian concentration that with probability at least $1 - Ce^{-cn\epsilon^2}$, $\Big\|\frac{\boldsymbol{\theta}_1}{\|\boldsymbol{\theta}_1\|^2}\frac{\langle\boldsymbol{h}_1^{\mathsf{po}},\boldsymbol{y}_1\rangle}{n} - {\overline{\pi}}\alpha_1 \boldsymbol{\theta}_1 - \frac{Z_2 \boldsymbol{\theta}_1}{\sqrt{n}} \Big\| \leq \epsilon$. Thus, for any order-2 pseudo-Lipschitz function $\phi$, $$\phi \big( \boldsymbol{g}_3^{\mathsf{po}},\boldsymbol{g}_4^{\mathsf{po}} \big) \ensuremath{\stackrel{\bullet}{=}}\phi\Big( \frac{Z_1 \boldsymbol{\theta}_1}{\sqrt{n}} + \frac{\mathsf{P}_{\boldsymbol{\theta}_1}^\perp \boldsymbol{A}^\top \boldsymbol{1}}{n}, \frac{Z_2\boldsymbol{\theta}_1}{\sqrt{n}} + \frac{\mathsf{P}_{\boldsymbol{\theta}_1}^\perp \boldsymbol{A}^\top \boldsymbol{y}_1}{n} \Big),$$ where, conditionally on the realization of $\boldsymbol{y}_1$, $Z_1, Z_2$ are drawn from $\mathsf{N}\Big(0, \widehat{\boldsymbol{K}}\Big)$ independently of everything else (except for $\boldsymbol{y}_1$, with dependence through the conditional covariance as shown), where $$\widehat{\boldsymbol{K}}\ensuremath{: =}\frac{1}{n} \begin{pmatrix} 1 & \langle \boldsymbol{1},\boldsymbol{y}_1 \rangle/n \\ \langle \boldsymbol{1},\boldsymbol{y}_1 \rangle/n & \langle \boldsymbol{1},\boldsymbol{y}_1 \rangle/n \end{pmatrix}.$$ We see that, conditionally on $\boldsymbol{y}_1^{\mathsf{po}}$, the arguments to the function on the right-hand side are Gaussian vectors with covariance $\widehat{\boldsymbol{K}}\otimes {\mathbf I}_p$.
By Gaussian concentration of Lipschitz functions and the fact that, with exponentially high probability, the arguments to $\phi$ on the right-hand side have $\ell_2$ norm bounded by $C$, $$\phi\Big( \frac{Z_1 \boldsymbol{\theta}_1}{\sqrt{n}} + \frac{\mathsf{P}_{\boldsymbol{\theta}_1}^\perp \boldsymbol{A}^\top \boldsymbol{1}}{n}, \frac{Z_2\boldsymbol{\theta}_1}{\sqrt{n}} + \frac{\mathsf{P}_{\boldsymbol{\theta}_1}^\perp \boldsymbol{A}^\top \boldsymbol{y}_1}{n} \Big) \ensuremath{\stackrel{\bullet}{=}}\mathbb{E}_{(\tilde\boldsymbol{g}_3,\tilde\boldsymbol{g}_4)\sim \mathsf{N}(0,\widehat{\boldsymbol{K}})} \big[ \phi(\tilde\boldsymbol{g}_3,\tilde\boldsymbol{g}_4) \big].$$ Note the right-hand side is random because it depends on $\widehat{\boldsymbol{K}}$. But, because by sub-Gaussian concentration $\frac{1}{n}\sum_{i=1}^n y_{1,i} \ensuremath{\stackrel{\bullet}{=}}{\overline{\pi}}$, we in fact have, by [\[eq:K-form\]](#eq:K-form){reference-type="ref" reference="eq:K-form"}, that $\mathbb{E}_{(\tilde\boldsymbol{g}_3, \tilde\boldsymbol{g}_4) \sim \mathsf{N}(0,\widehat{\boldsymbol{K}})} \big[\phi( \tilde\boldsymbol{g}_3, \tilde\boldsymbol{g}_4)\big] \ensuremath{\stackrel{\bullet}{=}}\mathbb{E}\big[\phi(\boldsymbol{g}_3^{\mathsf{se}}, \boldsymbol{g}_4^{\mathsf{se}})\big]$. Because $\boldsymbol{g}_3^{\mathsf{po}}$, $\boldsymbol{g}_4^{\mathsf{po}}$ are the only random parts of $(\boldsymbol{v}_\ell)_{\ell=1}^4$, $(\boldsymbol{g}_\ell)_{\ell=1}^4$, we have established [\[eq:se-conc\]](#eq:se-conc){reference-type="ref" reference="eq:se-conc"} in the case $k = 4$. This completes the proof of the base case. ◻ Our next lemma captures the induction step required to complete the proof of . **Lemma 13** (Induction step). *For $k = 5, 6$, Hypothesis (k-1) implies Hypothesis (k).* # The induction step: Proof of  The proof of  is based upon the sequential Gordon inequality, introduced in past work [@celentano2021cad] by a subset of the current authors.
This involves introducing a certain auxiliary objective whose saddle points behave, under the induction hypothesis, like the saddle points of the problem [\[eq:min-max\]](#eq:min-max){reference-type="eqref" reference="eq:min-max"}. The sequential Gordon inequality establishes the essential relationship between the auxiliary objective and the min-max problem [\[eq:min-max\]](#eq:min-max){reference-type="eqref" reference="eq:min-max"}. We first introduce the auxiliary objective. Then, in , we introduce some random vectors whose behavior is central to our proof. In , we establish some properties of these random vectors. These properties, together with the Convex Gaussian Min-Max Theorem (Gordon's inequality), imply the sequential Gordon inequality, which we state and prove in . In , we establish several additional properties of the auxiliary objective. In , we combine these properties with the sequential Gordon inequality to prove the induction step (cf. ). ## The auxiliary objective Let $\boldsymbol{\xi}_h \sim \mathsf{N}(0,{\mathbf I}_n)$ and $\boldsymbol{\xi}_g \sim \mathsf{N}(0,{\mathbf I}_p)$ be independent of each other and of all other randomness in the problem.
Introducing the shorthand $\boldsymbol{T}_g \ensuremath{: =}\boldsymbol{G}_{k-1}^{\mathsf{po}} \boldsymbol{K}_g^\ddagger \boldsymbol{U}_{k-1}^{\mathsf{po}\top}$ and $\boldsymbol{T}_h \ensuremath{: =}\boldsymbol{H}_{k-1}^{\mathsf{po}} \boldsymbol{K}_h^\ddagger \boldsymbol{V}_{k-1}^{\mathsf{po}\top}$, we define the functions $$\begin{aligned} \label{eq:gordon-gh-matrix} \boldsymbol{g}(\boldsymbol{u}) & \ensuremath{: =}\boldsymbol{T}_g\boldsymbol{u}+ \big\| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{po}}}^\perp \boldsymbol{u} \big\|\boldsymbol{\xi}_g, \quad \mbox{and} \\ % \boldsymbol{h}(\boldsymbol{v}) & \ensuremath{: =}\boldsymbol{T}_h\boldsymbol{v}+ \| \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{po}}}^\perp \boldsymbol{v} \|\boldsymbol{\xi}_h.\end{aligned}$$ The proof of  relies on a careful analysis of an auxiliary saddle point problem defined by the objective function $$\label{eq:def-AuxObj} \mathsf{AuxObj}_k(\boldsymbol{u};v_0,\boldsymbol{v}) \ensuremath{: =}- \langle \boldsymbol{g}(\boldsymbol{u}) , \boldsymbol{v}\rangle + \langle \boldsymbol{h}(\boldsymbol{v}) , \boldsymbol{u}\rangle + \phi_k(\boldsymbol{u};v_0,\boldsymbol{v}).$$ Our first step in analyzing the auxiliary objective involves finding an approximate saddle point $(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}})$. We first construct the approximate saddle point and then study its properties. **Lemma 14** (Convexity and concavity of $\mathsf{AuxObj}_k$). *For $\langle \boldsymbol{\xi}_g , \boldsymbol{v}\rangle \geq 0$, the function $\boldsymbol{u}\mapsto \mathsf{AuxObj}_k(\boldsymbol{u};v_0,\boldsymbol{v})$ is concave. 
For $\langle \boldsymbol{\xi}_h , \boldsymbol{u}\rangle \geq 0$, the function $(v_0,\boldsymbol{v}) \mapsto \mathsf{AuxObj}_k(\boldsymbol{u};v_0,\boldsymbol{v})$ is convex.* *Proof.* From their definitions [\[eq:objectives\]](#eq:objectives){reference-type="eqref" reference="eq:objectives"}, the function $\phi_k(\boldsymbol{u};v_0,\boldsymbol{v})$ is concave in $\boldsymbol{u}$ and convex in $(v_0,\boldsymbol{v})$. By inspection, the function $(\boldsymbol{u},\boldsymbol{v}) \mapsto \langle \boldsymbol{h}(\boldsymbol{v}) , \boldsymbol{u}\rangle$ is linear in $\boldsymbol{u}$ and, for $\langle \boldsymbol{\xi}_h , \boldsymbol{u}\rangle \geq 0$, convex in $\boldsymbol{v}$. Similarly, the function $(\boldsymbol{u},\boldsymbol{v}) \mapsto -\langle \boldsymbol{g}(\boldsymbol{u}) , \boldsymbol{v} \rangle$ is linear in $\boldsymbol{v}$ and, for $\langle \boldsymbol{\xi}_g , \boldsymbol{v}\rangle \geq 0$, concave in $\boldsymbol{u}$. ◻ **Remark 4**. *An equivalent, and sometimes easier to work with, representation of the functions $\boldsymbol{g}$ and $\boldsymbol{h}$ is given by $$\label{eq:gordon-gh-explicit} \begin{gathered} \boldsymbol{g}(\boldsymbol{u}) = \sum_{\ell=1}^{k-1} \boldsymbol{g}_\ell^{\mathsf{po},\perp} \langle \boldsymbol{u}_\ell^{\mathsf{po},\perp} , \boldsymbol{u}\rangle + \big\| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{po}}}^\perp \boldsymbol{u}\big\|\boldsymbol{\xi}_g, \\ % \boldsymbol{h}(\boldsymbol{v}) = \sum_{\ell=1}^{k-1} \boldsymbol{h}_\ell^{\mathsf{po},\perp} \langle \boldsymbol{v}_\ell^{\mathsf{po},\perp} , \boldsymbol{v}\rangle + \big\| \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{po}}}^\perp \boldsymbol{v}\big\|\boldsymbol{\xi}_h. \end{gathered}$$* ## The auxiliary vectors {#sec:aux-vectors} In this section, we construct an approximate saddle point of $\mathsf{AuxObj}_k$. We defer the study of most of its properties (and, in particular, the sense in which it is an approximate saddle point) to later sections.
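The explicit representation [\[eq:gordon-gh-explicit\]](#eq:gordon-gh-explicit){reference-type="eqref" reference="eq:gordon-gh-explicit"} translates directly into code. A minimal sketch, assuming (as the $\perp$ notation suggests) that the vectors $\boldsymbol{u}_\ell^{\mathsf{po},\perp}$ are orthonormal, with random stand-ins for the data; on the span of the previous directions the Gaussian term vanishes and $\boldsymbol{g}$ acts linearly:

```python
import numpy as np

rng = np.random.default_rng(3)
dim, k1 = 12, 3  # ambient dimension, number of previous directions

# Orthonormal stand-ins for the directions u_l^{po,perp} (columns of U_perp)
U_perp, _ = np.linalg.qr(rng.standard_normal((dim, k1)))
G_perp = rng.standard_normal((dim, k1))   # stand-ins for g_l^{po,perp}
xi_g = rng.standard_normal(dim)

def g_map(u):
    """g(u) = sum_l g_l^perp <u_l^perp, u> + ||P_U^perp u|| xi_g."""
    coef = U_perp.T @ u
    resid = u - U_perp @ coef             # P_{U_{k-1}}^perp u
    return G_perp @ coef + np.linalg.norm(resid) * xi_g

# On span(U_perp) the Gaussian term vanishes and g acts linearly.
c = np.array([1.0, -2.0, 0.5])
assert np.allclose(g_map(U_perp @ c), G_perp @ c)
```

The map $\boldsymbol{h}$ is built in the same way from $\boldsymbol{h}_\ell^{\mathsf{po},\perp}$, $\boldsymbol{v}_\ell^{\mathsf{po},\perp}$, and $\boldsymbol{\xi}_h$.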
Recalling the definitions [\[eq:perp-from-orig\]](#eq:perp-from-orig){reference-type="eqref" reference="eq:perp-from-orig"} of $\boldsymbol{g}_\ell^{\mathsf{po},\perp}$ and $\boldsymbol{h}_\ell^{\mathsf{po},\perp}$, we define $$\label{eq:GHk-ao} \boldsymbol{G}_k^{\mathsf{ao},\perp} = \begin{pmatrix} \vert & & \vert & \vert \\[5pt] \boldsymbol{g}_1^{\mathsf{po},\perp} & \cdots & \boldsymbol{g}_{k-1}^{\mathsf{po},\perp} & \boldsymbol{\xi}_g\\[5pt] \vert & & \vert & \vert \end{pmatrix}, \qquad \boldsymbol{H}_k^{\mathsf{ao},\perp} = \begin{pmatrix} \vert & & \vert & \vert \\[5pt] \boldsymbol{h}_1^{\mathsf{po},\perp} & \cdots & \boldsymbol{h}_{k-1}^{\mathsf{po},\perp} & \boldsymbol{\xi}_h\\[5pt] \vert & & \vert & \vert \end{pmatrix},$$ and let $$\label{eq:GHk-ao-perp} \boldsymbol{G}_k^{\mathsf{ao}} = \boldsymbol{G}_k^{\mathsf{ao},\perp} \boldsymbol{L}_{g,k}^\top, \qquad \boldsymbol{H}_k^{\mathsf{ao}} = \boldsymbol{H}_k^{\mathsf{ao},\perp} \boldsymbol{L}_{h,k}^\top.$$ Here "[ao]{.sans-serif}" stands for "auxiliary objective." 
Using equation [\[eq:perp-from-orig\]](#eq:perp-from-orig){reference-type="eqref" reference="eq:perp-from-orig"} and the fact that the indices $k = 5,6$ are innovative with respect to both $\boldsymbol{K}_g$ and $\boldsymbol{K}_h$ by  (so that $L_{g,kk}^\ddagger > 0$ and $L_{h,kk}^\ddagger > 0$), we may also write $\boldsymbol{G}_k^{\mathsf{ao},\perp}$ and $\boldsymbol{H}_k^{\mathsf{ao},\perp}$ in terms of $\boldsymbol{G}_k^{\mathsf{ao}}$ and $\boldsymbol{H}_k^{\mathsf{ao}}$: $$\label{eq:GHk-ao-perp-from-std} \boldsymbol{G}_k^{\mathsf{ao},\perp} = \boldsymbol{G}_k^{\mathsf{ao}} \boldsymbol{L}_{g,k}^{\ddagger\top}, \qquad \boldsymbol{H}_k^{\mathsf{ao},\perp} = \boldsymbol{H}_k^{\mathsf{ao}} \boldsymbol{L}_{h,k}^{\ddagger\top}.$$ These definitions also imply that $$\begin{gathered} \text{for $\ell < k$,} \quad \boldsymbol{g}_\ell^{\mathsf{ao}} = \boldsymbol{g}_\ell^{\mathsf{po}}, \quad \boldsymbol{h}_\ell^{\mathsf{ao}} = \boldsymbol{h}_\ell^{\mathsf{po}}, \\ \boldsymbol{g}_k^{\mathsf{ao}} = \sum_{\ell = 1}^{k-1} L_{g,k\ell} \boldsymbol{g}_\ell^{\mathsf{po},\perp} + L_{g,kk} \boldsymbol{\xi}_g, \quad \boldsymbol{h}_k^{\mathsf{ao}} = \sum_{\ell = 1}^{k-1} L_{h,k\ell} \boldsymbol{h}_\ell^{\mathsf{po},\perp} + L_{h,kk} \boldsymbol{\xi}_h. \end{gathered}$$ We further define $\boldsymbol{u}_\ell^{\mathsf{ao}} = \boldsymbol{u}_\ell^{\mathsf{po}}$ and $\boldsymbol{v}_\ell^{\mathsf{ao}} = \boldsymbol{v}_\ell^{\mathsf{po}}$ for $\ell < k$, and define $\boldsymbol{u}_k^{\mathsf{ao}}$ and $\boldsymbol{v}_k^{\mathsf{ao}}$ as in equation [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} with $\boldsymbol{H}_k^{\mathsf{ao}}$ in place of $\boldsymbol{H}_k^{\mathsf{se}}$ and $\boldsymbol{G}_k^{\mathsf{ao}}$ in place of $\boldsymbol{G}_k^{\mathsf{se}}$.
Explicitly, $$\label{eq:AO-separated-opt} \begin{aligned} \boldsymbol{u}_k^{\mathsf{ao}} &= \mathop{\mathrm{arg\,min}}_{\boldsymbol{u}} \Big\{ \frac{\zeta_{kk}^u}2\|\boldsymbol{u}\|^2 + \sum_{\ell=1}^{k-1} \zeta_{k\ell}^u \langle\boldsymbol{u}_\ell^{\mathsf{ao}},\boldsymbol{u}\rangle - \langle\boldsymbol{h}_k^{\mathsf{ao}},\boldsymbol{u}\rangle + \phi_{k,u}(\boldsymbol{u};\boldsymbol{H}_{k-1}^{\mathsf{ao}};\boldsymbol{\varepsilon}_1,\boldsymbol{\varepsilon}_1',\boldsymbol{\varepsilon}_2) \Big\}, \\ \boldsymbol{v}_k^{\mathsf{ao}} &= \mathop{\mathrm{arg\,min}}_{\boldsymbol{v}} \Big\{ \frac{\zeta_{kk}^v}2\|\boldsymbol{v}\|^2 + \sum_{\ell=1}^{k-1} \zeta_{k\ell}^v \langle\boldsymbol{v}_\ell^{\mathsf{ao}},\boldsymbol{v}\rangle - \langle\boldsymbol{g}_k^{\mathsf{ao}},\boldsymbol{v}\rangle + \phi_{k,v}(\boldsymbol{v};\boldsymbol{G}_{k-1}^{\mathsf{ao}}) \Big\}. \end{aligned}$$ Finally, define $$\label{eq:UVk-ao-perp} \boldsymbol{U}_k^{\mathsf{ao},\perp} = \boldsymbol{U}_k^{\mathsf{ao}} \boldsymbol{L}_{g,k}^{\ddagger\top}, \qquad \boldsymbol{V}_k^{\mathsf{ao},\perp} = \boldsymbol{V}_k^{\mathsf{ao}} \boldsymbol{L}_{h,k}^{\ddagger\top}.$$ Under this definition, we also have $$\label{eq:UVk-ao} \boldsymbol{U}_k^{\mathsf{ao}} = \boldsymbol{U}_k^{\mathsf{ao},\perp} \boldsymbol{L}_{g,k}^{\top}, \qquad \boldsymbol{V}_k^{\mathsf{ao}} = \boldsymbol{V}_k^{\mathsf{ao},\perp} \boldsymbol{L}_{h,k}^{\top}.$$ Indeed, by , $\boldsymbol{U}_k^{\mathsf{ao},\perp} \boldsymbol{L}_{g,k}^\top = \boldsymbol{U}_k^{\mathsf{ao}}\boldsymbol{L}_{g,k}^{\ddagger\top}\boldsymbol{L}_{g,k}^\top = \boldsymbol{U}_k^{\mathsf{ao}} {\mathbf I}_{\boldsymbol{K}_{g,k}}^\top$ and $\boldsymbol{V}_k^{\mathsf{ao},\perp} \boldsymbol{L}_{h,k}^\top = \boldsymbol{V}_k^{\mathsf{ao}} {\mathbf I}_{\boldsymbol{K}_{h,k}}^\top$. 
By the definition of ${\mathbf I}_{\boldsymbol{K}_{g,k}}$, $\boldsymbol{U}_k^{\mathsf{ao}} {\mathbf I}_{\boldsymbol{K}_{g,k}}^\top$ is equal to $\boldsymbol{U}_k^{\mathsf{ao}}$ provided the rows of $\boldsymbol{U}_k^{\mathsf{ao}}$ are in $\mathsf{range}(\boldsymbol{K}_{g,k})$. By , $\mathsf{range}(\boldsymbol{K}_{g,k}) = \mathsf{span}\{\boldsymbol{e}_\ell : 2 < \ell \leq k\}$, and by equation [\[eq:AO-separated-opt\]](#eq:AO-separated-opt){reference-type="eqref" reference="eq:AO-separated-opt"}, $\boldsymbol{u}_\ell^{\mathsf{ao}} = \boldsymbol{u}_\ell^{\mathsf{po}} = {\boldsymbol 0}$ for $\ell = 1,2$, whence indeed the rows of $\boldsymbol{U}_k^{\mathsf{ao}}$ are in $\mathsf{range}(\boldsymbol{K}_{g,k})$. Similarly, $\boldsymbol{V}_k^{\mathsf{ao}} {\mathbf I}_{\boldsymbol{K}_{h,k}}^\top$ is equal to $\boldsymbol{V}_k^{\mathsf{ao}}$ provided the rows of $\boldsymbol{V}_k^{\mathsf{ao}}$ are in $\mathsf{range}(\boldsymbol{K}_{h,k})$. By , $\ell = 5,6$ are innovative with respect to $\boldsymbol{K}_{h,k}$, whence the standard basis vectors $\boldsymbol{e}_5,\boldsymbol{e}_6 \in \mathsf{range}(\boldsymbol{K}_{h,k})$. Also, $\boldsymbol{v}_3^{\mathsf{ao}} = \boldsymbol{v}_4^{\mathsf{ao}} = {\boldsymbol 0}$, whence we only need to check that the rows of $\boldsymbol{V}_2^{\mathsf{ao}}$ are in $\mathsf{range}(\boldsymbol{K}_{h,2})$. Recall $\boldsymbol{V}_2^{\mathsf{ao}} = \boldsymbol{V}_2^{\mathsf{po}} = \boldsymbol{\Theta}$ and $\boldsymbol{K}_{h,2} = \langle\!\langle\boldsymbol{\Theta}\rangle\!\rangle$ by , whence the rows of $\boldsymbol{V}_2^{\mathsf{ao}}$ are indeed in $\mathsf{range}(\boldsymbol{K}_{h,2})$. We have thus established equation [\[eq:UVk-ao\]](#eq:UVk-ao){reference-type="eqref" reference="eq:UVk-ao"}.
Comparing equation [\[eq:UVk-ao-perp\]](#eq:UVk-ao-perp){reference-type="eqref" reference="eq:UVk-ao-perp"} with equation [\[eq:perp-from-orig\]](#eq:perp-from-orig){reference-type="eqref" reference="eq:perp-from-orig"}, we also see that $\boldsymbol{U}_{k-1}^{\mathsf{ao}, \perp} = \boldsymbol{U}_{k-1}^{\mathsf{po}, \perp}$ and $\boldsymbol{V}_{k-1}^{\mathsf{ao}, \perp} = \boldsymbol{V}_{k-1}^{\mathsf{po}, \perp}$, and we sometimes interchange these in our calculations. ## Concentration of statistics of auxiliary vectors {#sec:conc-aux-vectors} We first show concentration of order-2 pseudo-Lipschitz functions of the auxiliary vectors. **Lemma 15**. *For $k = 5,6$, Hypothesis (k-1) implies that for $\phi: ({\mathbb R}^p)^{2k} \rightarrow {\mathbb R}$ (in the first line) or for $\phi: {\mathbb R}^{2k+1} \rightarrow {\mathbb R}$ (in the second line) which is pseudo-Lipschitz of order 2, $$\label{eq:ao-conc} \begin{gathered} \phi\Big( \boldsymbol{V}_k^{\mathsf{ao}}, \boldsymbol{G}_k^{\mathsf{ao}} \Big) \ensuremath{\stackrel{\bullet}{=}}\mathbb{E}\Big[ \phi\Big( \boldsymbol{V}_k^{\mathsf{se}}, \boldsymbol{G}_k^{\mathsf{se}} \Big) \Big], \\ \frac{1}{n} \sum_{i=1}^n \phi\Big( (nu_{\ell,i}^{\mathsf{ao}})_{\ell=1}^K, (h_{\ell,i}^{\mathsf{ao}})_{\ell=1}^K, \varepsilon_{2,i} \Big) \ensuremath{\stackrel{\bullet}{=}}\mathbb{E}\Big[ \phi\Big( (nu_{\ell,i}^{\mathsf{se}})_{\ell=1}^K, (h_{\ell,i}^{\mathsf{se}})_{\ell=1}^K, \varepsilon_{2,i}^{\mathsf{se}} \Big) \Big]. \end{gathered}$$* *Proof of .* We prove the two lines of equation [\[eq:ao-conc\]](#eq:ao-conc){reference-type="eqref" reference="eq:ao-conc"} one at a time. Throughout the proof, we freely invoke the bounds on the problem parameters or fixed point parameters (e.g., $L_{g,k\ell}$, $L_{h,k\ell}$, etc.)
from  and Assumption A1 without citing the lemma or assumption each time.\ **First line of equation [\[eq:ao-conc\]](#eq:ao-conc){reference-type="eqref" reference="eq:ao-conc"}.** Denote the function which maps $\boldsymbol{G}_{k-1}^{\mathsf{po},\perp}$, $\boldsymbol{\xi}_g$ to $\boldsymbol{G}_k^{\mathsf{ao}}$ via equation [\[eq:GHk-ao-perp\]](#eq:GHk-ao-perp){reference-type="eqref" reference="eq:GHk-ao-perp"} and the function which maps $\boldsymbol{V}_{k-1}^{\mathsf{po},\perp}$, $\boldsymbol{G}_{k-1}^{\mathsf{po},\perp}$, $\boldsymbol{\xi}_g$ to $\boldsymbol{V}_k^{\mathsf{ao}}$ via equations [\[eq:AO-separated-opt\]](#eq:AO-separated-opt){reference-type="eqref" reference="eq:AO-separated-opt"} and [\[eq:UVk-ao\]](#eq:UVk-ao){reference-type="eqref" reference="eq:UVk-ao"} by $$\label{eq:sG-func} \boldsymbol{G}_k^{\mathsf{ao}} = \mathsf{G}_k \Big( \frac{1}{\sqrt{n}} \boldsymbol{G}_{k-1}^{\mathsf{ao},\perp}, \frac{1}{\sqrt{n}} \boldsymbol{\xi}_g \Big), \qquad \boldsymbol{V}_k^{\mathsf{ao}} = \mathsf{V}_k \Big( \boldsymbol{V}_{k-1}^{\mathsf{ao},\perp}, \frac{1}{\sqrt{n}} \boldsymbol{G}_{k-1}^{\mathsf{ao},\perp}, \frac{1}{\sqrt{n}} \boldsymbol{\xi}_g \Big),$$ respectively. The function $\mathsf{G}_k$ is $C$-Lipschitz in its arguments because $L_{g,\ell\ell'} \lesssim 1 / \sqrt{n}$ and $L_{h,\ell\ell'} \lesssim 1$. Because $L_{h,\ell\ell'} \lesssim 1$, equation [\[eq:UVk-ao\]](#eq:UVk-ao){reference-type="eqref" reference="eq:UVk-ao"} implies that for $\ell \leq k-1$, $\boldsymbol{v}_{\ell}^{\mathsf{ao}}$ is $C$-Lipschitz in $\big(\boldsymbol{v}_{\ell'}^{\mathsf{ao},\perp}\big)_{\ell'=1}^\ell$.
Because $\phi_{k,v}$ does not depend on $\boldsymbol{G}_{k-1}^{\mathsf{ao}}$ (see equation [\[eq:SE-penalties\]](#eq:SE-penalties){reference-type="eqref" reference="eq:SE-penalties"}), because proximal operators are 1-Lipschitz [@bauschke2011convex Proposition 12.27], and because $\zeta_{k\ell}^v \lesssim 1$ for $\ell \leq k$ and $\zeta_{kk}^v \asymp 1$, the definition of $\boldsymbol{v}_k^{\mathsf{ao}}$ (equation [\[eq:AO-separated-opt\]](#eq:AO-separated-opt){reference-type="eqref" reference="eq:AO-separated-opt"}) implies it is $C$-Lipschitz in $\boldsymbol{V}_{k-1}^{\mathsf{ao},\perp}$, $\boldsymbol{G}_{k-1}^{\mathsf{ao},\perp}/\sqrt{n}$, and $\boldsymbol{\xi}_g/\sqrt{n}$. Thus, $\mathsf{V}_k$ is $C$-Lipschitz in its arguments. These Lipschitz properties imply, by Hypothesis (k-1), that $$\label{eq:v-ao-conc} \begin{aligned} \phi\Big( \boldsymbol{V}_k^{\mathsf{ao}}, \boldsymbol{G}_k^{\mathsf{ao}} \Big) &= \phi\Big( \mathsf{V}_k \Big( \boldsymbol{V}_{k-1}^{\mathsf{ao},\perp}, \frac{1}{\sqrt{n}} \boldsymbol{G}_{k-1}^{\mathsf{ao},\perp}, \frac{1}{\sqrt{n}} \boldsymbol{\xi}_g \Big), \mathsf{G}_k \Big( \frac{1}{\sqrt{n}} \boldsymbol{G}_{k-1}^{\mathsf{ao},\perp}, \frac{1}{\sqrt{n}} \boldsymbol{\xi}_g \Big) \Big) \\ &\ensuremath{\stackrel{\bullet}{=}}\mathbb{E}\Big[ \phi\Big( \mathsf{V}_k \Big( \boldsymbol{V}_{k-1}^{\mathsf{ao},\perp}, \frac{1}{\sqrt{n}} \boldsymbol{G}_{k-1}^{\mathsf{ao},\perp}, \frac{1}{\sqrt{n}} \boldsymbol{\xi}_g \Big), \mathsf{G}_k \Big( \frac{1}{\sqrt{n}} \boldsymbol{G}_{k-1}^{\mathsf{ao},\perp}, \frac{1}{\sqrt{n}} \boldsymbol{\xi}_g \Big) \Big) \Bigm| \boldsymbol{V}_{k-1}^{\mathsf{ao},\perp},\boldsymbol{G}_{k-1}^{\mathsf{ao},\perp} \Big] \\ &\ensuremath{\stackrel{\bullet}{=}}\mathbb{E}\Big[ \phi\Big( \mathsf{V}_k \Big( \boldsymbol{V}_{k-1}^{\mathsf{se},\perp}, \frac{1}{\sqrt{n}} \boldsymbol{G}_{k-1}^{\mathsf{se},\perp}, \frac{1}{\sqrt{n}} \boldsymbol{\xi}_g \Big), \mathsf{G}_k \Big( \frac{1}{\sqrt{n}} \boldsymbol{G}_{k-1}^{\mathsf{se},\perp}, 
\frac{1}{\sqrt{n}} \boldsymbol{\xi}_g \Big) \Big) \Big]. \end{aligned}$$ In the first approximate equality we use that conditionally on $\boldsymbol{V}_{k-1}^{\mathsf{ao},\perp}$ and $\boldsymbol{G}_{k-1}^{\mathsf{ao},\perp}$, the quantity $\phi\Big(\boldsymbol{V}_k^{\mathsf{ao}},\boldsymbol{G}_k^{\mathsf{ao}}/\sqrt{n}\Big)$ is conditionally sub-Gamma with variance parameter $C(\|\boldsymbol{V}_{k-1}^{\mathsf{ao}}\|_{\mathsf{F}}^2 + \| \boldsymbol{G}_{k-1}^{\mathsf{ao}}\|_{\mathsf{F}}^2)/n$ and scale parameter $C/n$ (see, for example, [@miolane2021 Proposition G.5]), and that $\|\boldsymbol{V}_{k-1}^{\mathsf{ao}}\|_{\mathsf{F}}^2 + \| \boldsymbol{G}_{k-1}^{\mathsf{ao}}\|_{\mathsf{F}}^2 \ensuremath{\stackrel{\bullet}{=}} \mathbb{E}[\|\boldsymbol{V}_{k-1}^{\mathsf{se}}\|_{\mathsf{F}}^2 + \| \boldsymbol{G}_{k-1}^{\mathsf{se}}\|_{\mathsf{F}}^2] \lessdot C$ by Hypothesis (k-1). The second approximate equality holds by Hypothesis (k-1). Then note that $$\begin{gathered} \mathsf{G}_k \Big( \frac{1}{\sqrt{n}} \boldsymbol{G}_{k-1}^{\mathsf{se},\perp}, \frac{1}{\sqrt{n}} \boldsymbol{\xi}_g \Big) \underset{\emph{(i)}}\, {\buildrel d \over =} \,\mathsf{G}_k \Big( \frac{1}{\sqrt{n}} \boldsymbol{G}_{k-1}^{\mathsf{se},\perp}, \frac{1}{\sqrt{n}} \boldsymbol{g}_k^{\mathsf{se},\perp} \Big) \underset{\emph{(ii)}}= \boldsymbol{G}_k^{\mathsf{se}}. \end{gathered}$$ Recall $\boldsymbol{g}_k^{\mathsf{se},\perp} = \boldsymbol{G}_k^{\mathsf{se}} (\boldsymbol{L}_{g,k}^\ddagger)^\top$ where $\boldsymbol{L}_{g,k}^\ddagger$ is the Cholesky pseudo-inverse of $\boldsymbol{K}_{g,k}$ and $\boldsymbol{G}_k^{\mathsf{se}} \sim \mathsf{N}(0,\boldsymbol{K}_{g,k}\otimes {\mathbf I}_p)$. Thus, distributional equality *(i)* in the first line holds because $\boldsymbol{\xi}_g \, {\buildrel d \over =} \,\boldsymbol{g}_k^{\mathsf{se},\perp} \sim \mathsf{N}(0,{\mathbf I}_p)$ independent of everything else, where we have used that index $k$ is innovative with respect to $\boldsymbol{K}_{g,k}$. 
Equality *(ii)* holds by comparing equations [\[eq:orig-from-perp\]](#eq:orig-from-perp){reference-type="eqref" reference="eq:orig-from-perp"} and [\[eq:GHk-ao-perp\]](#eq:GHk-ao-perp){reference-type="eqref" reference="eq:GHk-ao-perp"}. Similarly, $$\begin{gathered} \mathsf{V}_k \Big( \boldsymbol{V}_{k-1}^{\mathsf{se},\perp}, \frac{1}{\sqrt{n}} \boldsymbol{G}_{k-1}^{\mathsf{se},\perp}, \frac{1}{\sqrt{n}} \boldsymbol{\xi}_g \Big) \, {\buildrel d \over =} \,\mathsf{V}_k \Big( \boldsymbol{V}_{k-1}^{\mathsf{se},\perp}, \frac{1}{\sqrt{n}} \boldsymbol{G}_{k-1}^{\mathsf{se},\perp}, \frac{1}{\sqrt{n}} \boldsymbol{g}_k^{\mathsf{se},\perp} \Big) = \boldsymbol{V}_k^{\mathsf{se}}. \end{gathered}$$ The distributional equality holds for the same reason as before. The equality holds by comparing equations [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} and [\[eq:orig-from-perp\]](#eq:orig-from-perp){reference-type="eqref" reference="eq:orig-from-perp"} with equations [\[eq:AO-separated-opt\]](#eq:AO-separated-opt){reference-type="eqref" reference="eq:AO-separated-opt"}, [\[eq:UVk-ao\]](#eq:UVk-ao){reference-type="eqref" reference="eq:UVk-ao"}, and [\[eq:GHk-ao-perp\]](#eq:GHk-ao-perp){reference-type="eqref" reference="eq:GHk-ao-perp"}. Thus, equation [\[eq:v-ao-conc\]](#eq:v-ao-conc){reference-type="eqref" reference="eq:v-ao-conc"} implies the first line of equation [\[eq:ao-conc\]](#eq:ao-conc){reference-type="eqref" reference="eq:ao-conc"}.\ **Second line of equation [\[eq:ao-conc\]](#eq:ao-conc){reference-type="eqref" reference="eq:ao-conc"}.** To establish the second line of equation [\[eq:ao-conc\]](#eq:ao-conc){reference-type="eqref" reference="eq:ao-conc"}, we use explicit expressions for $\boldsymbol{u}_k^{\mathsf{ao}}$ in the case that $k = 5$ and $k = 6$, treating the two cases separately. First, we establish a fact to be used in both cases.
We denote the functions which map $(h_{\ell,i}^{\mathsf{ao}})_{\ell=1}^{k-1}$, $\xi_{h,i}$ to $(h_{\ell,i}^{\mathsf{ao}})_{\ell=1}^k$ via equation [\[eq:GHk-ao-perp\]](#eq:GHk-ao-perp){reference-type="eqref" reference="eq:GHk-ao-perp"} and $(u_{\ell,i}^{\mathsf{ao},\perp})_{\ell=1}^{k-1}$ to $(u_{\ell,i}^{\mathsf{ao}})_{\ell=1}^{k-1}$ via equation [\[eq:UVk-ao-perp\]](#eq:UVk-ao-perp){reference-type="eqref" reference="eq:UVk-ao-perp"} by $$(h_{\ell,i}^{\mathsf{ao}})_{\ell=1}^k = \mathsf{h}_k \big( (h_{\ell,i}^{\mathsf{ao},\perp})_{\ell=1}^{k-1}, \xi_{h,i} \big), \qquad (u_{\ell,i}^{\mathsf{ao}})_{\ell=1}^{k-1} = \mathsf{u}_{k-1} \big( (nu_{\ell,i}^{\mathsf{ao},\perp})_{\ell=1}^{k-1} \big),$$ respectively. The functions $\mathsf{h}_k$ and $n\mathsf{u}_{k-1}$ are $C$-Lipschitz in their arguments by equation [\[eq:GHk-ao-perp\]](#eq:GHk-ao-perp){reference-type="eqref" reference="eq:GHk-ao-perp"} because $L_{h, \ell \ell'} \lesssim 1$ and $L_{g, \ell \ell'} \lesssim 1/\sqrt{n}$. Now we specialize to the case $k = 6$. Rearranging equation [\[eq:fenchel-legendre\]](#eq:fenchel-legendre){reference-type="eqref" reference="eq:fenchel-legendre"} and using $-nu_{4,i}^{\mathsf{ao}} = y_{1,i} \in \{0,1\}$ and $\zeta_{6\ell}^u = 0$ for $\ell \leq 5$, $$\label{eq:u6ao-explicit} u_{6,i}^{\mathsf{ao}} = - \frac{\nu_{6,0} + \nu_{6,\mathsf{x}} + h_{6,i}^{\mathsf{ao}} - h_{2,i}^{\mathsf{ao}} - \varepsilon_{2,i}}{\zeta_{66}^u + n w^{-1}(h_{1,i}^{\mathsf{ao}})} nu_{4,i}^{\mathsf{ao}}.$$ This function is not Lipschitz in $h_{1,i}^{\mathsf{ao}}$, $h_{2,i}^{\mathsf{ao}}$, $h_{6,i}^{\mathsf{ao}}$, $\varepsilon_{2,i}$, $nu_{4,i}^{\mathsf{ao}}$ because the derivatives with respect to $nu_{4,i}^{\mathsf{ao}}$ and with respect to $h_{1,i}^{\mathsf{ao}}$ diverge as $h_{2,i}^{\mathsf{ao}}$, $h_{6,i}^{\mathsf{ao}}$, $\varepsilon_{2,i}$ diverge. We thus must resort to a truncation argument.
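The truncation device used next replaces an unbounded factor by its clipped version. As a quick sanity check (purely illustrative), the clipping operator $\mathsf{Trunc}^M(x) = (x \wedge M) \vee (-M)$ is $1$-Lipschitz and acts as the identity on $[-M, M]$:

```python
import random

def trunc(x, M):
    # Trunc^M(x) = (x ∧ M) ∨ (−M): clip x to the interval [−M, M].
    return max(min(x, M), -M)

random.seed(1)
M = 2.0
pairs = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(1000)]

# 1-Lipschitz: |Trunc(a) − Trunc(b)| ≤ |a − b| for all a, b.
lipschitz_ok = all(
    abs(trunc(a, M) - trunc(b, M)) <= abs(a - b) + 1e-12 for a, b in pairs
)
# Identity on [−M, M].
identity_ok = all(trunc(x, M) == x for x in [-2.0, -0.5, 0.0, 1.3, 2.0])
print(lipschitz_ok, identity_ok)
```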
In particular, for $M > 1$, define $\mathsf{Trunc}^M(x) = (x \wedge M) \vee (-M)$ and define $\overline{\mathsf{u}}_6^M$ by $$\label{eq:ubar-fnc} \overline{\mathsf{u}}_6^M (h_{1,i},h_{2,i},h_{6,i},nu_{4,i},\varepsilon_{2,i}) \ensuremath{: =}(-n u_{4,i}\wedge 1)\,\frac{\mathsf{Trunc}^M(\nu_{6,0} + \nu_{6,\mathsf{x}} - h_{2,i} - \varepsilon_{2,i} + h_{6,i})}{\zeta_{66}^u + n/w(h_{1,i})}.$$ We have that $n\overline{\mathsf{u}}_6^M$ is $CM$-Lipschitz because $w(\,\cdot\,)$ is bounded above by $C$ and below by $c$ and has derivative bounded by $C$, and because $|\nu_{6,0}|$, $|\nu_{6,\mathsf{x}}|$, and $\zeta_{66}^u/n$ are bounded above by $C$. Denote ${\overline{u}}{}_{6,i}^{\mathsf{ao}}(M) \ensuremath{: =}\overline{\mathsf{u}}_6^M (h_{1,i}^\mathsf{ao}, h_{2,i}^\mathsf{ao}, h_{6,i}^\mathsf{ao}, n u_{4,i}^\mathsf{ao}, \varepsilon_{2,i})$. Then $\phi/M^2$ is order-$2$ pseudo-Lipschitz with constant $C$, so by the exact same logic we used in equation [\[eq:v-ao-conc\]](#eq:v-ao-conc){reference-type="eqref" reference="eq:v-ao-conc"}, we have $$\label{eq:u-ao-conc} \begin{aligned} &\frac{1}{nM^2} \sum_{i=1}^n \phi\big((nu_{\ell,i}^{\mathsf{ao}})_{\ell=1}^5,n{\overline{u}}{}_{6,i}^{\mathsf{ao}}(M),(h_{\ell,i}^{\mathsf{ao}})_{\ell = 1}^6\big) \\ &\qquad\qquad= \frac{1}{nM^2} \sum_{i=1}^n \phi\big( \mathsf{u}_5 \big( (nu_{\ell,i}^{\mathsf{ao},\perp})_{\ell=1}^5 \big), \overline{\mathsf{u}}_6^M (h_{1,i}^{\mathsf{ao}},h_{2,i}^{\mathsf{ao}},h_{6,i}^{\mathsf{ao}},nu_{4,i}^{\mathsf{ao}},\varepsilon_{2,i}), \mathsf{h}_k \big( (h_{\ell,i}^{\mathsf{ao},\perp})_{\ell=1}^5, \xi_{h,i} \big) \big) \\ % & \qquad\qquad\ensuremath{\stackrel{\bullet}{=}}\frac{1}{M^2} \mathbb{E}\Big[ \phi\big( \mathsf{u}_5 \big( (nu_{\ell,i}^{\mathsf{ao},\perp})_{\ell=1}^5 \big), \overline{\mathsf{u}}_6^M (h_{1,i}^{\mathsf{ao}},h_{2,i}^{\mathsf{ao}},h_{6,i}^{\mathsf{ao}},nu_{4,i}^{\mathsf{ao}},\varepsilon_{2,i}), \mathsf{h}_k \big( (h_{\ell,i}^{\mathsf{ao},\perp})_{\ell=1}^5, \xi_{h,i} \big) \big) \Bigm| (u_{\ell,i}^{\mathsf{ao}})_{\ell=1}^5,
(h_{\ell,i}^{\mathsf{ao}})_{\ell=1}^5 \Big] \\ % & \qquad \qquad \ensuremath{\stackrel{\bullet}{=}}\frac{1}{M^2} \mathbb{E}\Big[ \phi\big( \mathsf{u}_5 \big( (nu_{\ell,i}^{\mathsf{se},\perp})_{\ell=1}^5 \big), \overline{\mathsf{u}}_6^M (h_{1,i}^{\mathsf{se}},h_{2,i}^{\mathsf{se}},h_{6,i}^{\mathsf{se}},nu_{4,i}^{\mathsf{se}},\varepsilon_{2,i}), \mathsf{h}_k \big( (h_{\ell,i}^{\mathsf{se},\perp})_{\ell=1}^5, \xi_{h,i} \big) \big) \Big]. \end{aligned}$$ If we denote ${\overline{u}}{}_{6,i}^{\mathsf{se}}(M)\ensuremath{: =}\overline{\mathsf{u}}_6^M (h_{1,i}^\mathsf{se},h_{2,i}^\mathsf{se},h_{6,i}^\mathsf{se},nu_{4,i}^\mathsf{se},\varepsilon_{2,i})$, then we can recognize the right-hand side of the preceding display as $\frac{1}{M^2} \mathbb{E}\Big[ \phi\big( (nu_{\ell,i}^{\mathsf{se},\perp})_{\ell=1}^5, {\overline{u}}{}_{6,i}^{\mathsf{se}}(M), (h_{\ell,i}^{\mathsf{se}})_{\ell=1}^6 \big) \Big].$ For $nu_{4,i} \in \{-1,0\}$, $\big|\overline{\mathsf{u}}_6^M (h_{1,i}, h_{2,i}, h_{6,i}, nu_{4,i},\varepsilon_{2,i}) - \overline{\mathsf{u}}_6 (h_{1,i}, h_{2,i}, h_{6,i}, nu_{4,i},\varepsilon_{2,i})\big| \leq C(1+|h_{2,i}| + |h_{6,i}| + |\varepsilon_{2,i}| - M)_+$, where $\overline{\mathsf{u}}_6$ is defined as $\overline{\mathsf{u}}_6^M$ with both the minimum with $1$ and the truncation at $M$ removed.
Thus, standard Gaussian tail bounds give $$\Big| \frac{1}{M^2} \mathbb{E}\Big[ \phi\big( (nu_{\ell,i}^{\mathsf{se},\perp})_{\ell=1}^5, {\overline{u}}{}_{6,i}^{\mathsf{se}}(M), (h_{\ell,i}^{\mathsf{se}})_{\ell=1}^6 \big) \Big] - \frac{1}{M^2} \mathbb{E}\Big[ \phi\big( (nu_{\ell,i}^{\mathsf{se},\perp})_{\ell=1}^5, u_{6,i}^{\mathsf{se}}, (h_{\ell,i}^{\mathsf{se}})_{\ell=1}^6 \big) \Big] \Big| \leq Ce^{-cM^2},$$ and $$\begin{aligned} &\Big| \frac{1}{nM^2} \sum_{i=1}^n \phi \big((nu_{\ell,i}^{\mathsf{ao}})_{\ell=1}^5,n{\overline{u}}{}_{6,i}^{\mathsf{ao}}(M),(h_{\ell,i}^{\mathsf{ao}})_{\ell = 1}^6\big) - \frac{1}{nM^2} \sum_{i=1}^n \phi\big((nu_{\ell,i}^{\mathsf{ao}})_{\ell=1}^5,nu_{6,i}^{\mathsf{ao}},(h_{\ell,i}^{\mathsf{ao}})_{\ell = 1}^6\big) \Big| \\ &\qquad\qquad \leq \frac{C}{nM^2} \sum_{i=1}^n \Big( 1 + \sum_{\ell=1}^5|nu_{\ell,i}^\mathsf{ao}| + \sum_{\ell=1}^6|h_{\ell,i}^\mathsf{ao}| + |\varepsilon_{2,i}| \Big) \Big( 1 + |h_{2,i}^{\mathsf{ao}}| + |h_{6,i}^\mathsf{ao}| + |\varepsilon_{2,i}| - M \Big)_+ \\ % & \qquad\qquad \ensuremath{\stackrel{\bullet}{=}}\frac{C}{M^2} \mathbb{E}\Big[ \Big( 1 + \sum_{\ell=1}^5 |nu_{\ell,i}^\mathsf{se}| + \sum_{\ell=1}^6 |h_{\ell,i}^\mathsf{se}| + |\varepsilon_{2,i}| \Big) \Big( 1 + |h_{2,i}^{\mathsf{se}}| + |h_{6,i}^\mathsf{se}| + |\varepsilon_{2,i}^{\mathsf{se}}| - M \Big)_+ \Big] \leq C e^{-cM^2}. \end{aligned}$$ Combining the previous three displays, $$\begin{aligned} &\Big| \frac{1}{n M^2} \sum_{i=1}^n \phi\big((nu_{\ell,i}^{\mathsf{ao}})_{\ell=1}^5,nu_{6,i}^{\mathsf{ao}},(h_{\ell,i}^{\mathsf{ao}})_{\ell = 1}^6\big) - \frac{1}{M^2} \mathbb{E}\Big[ \phi\big( (nu_{\ell,i}^{\mathsf{se},\perp})_{\ell=1}^5, u_{6,i}^{\mathsf{se}}, (h_{\ell,i}^{\mathsf{se}})_{\ell=1}^6 \big) \Big] \Big| \lessdot Ce^{-cM^2}.
\end{aligned}$$ That is, for all $\epsilon > 0$, the probability that the left-hand side exceeds $\epsilon + Ce^{-cM^2}$ is bounded above by $C'e^{-c'n\epsilon^r}$ for some $C,c,C',c',r>0$ depending only on $\mathcal{P}_{\mathrm{model}}$. Taking $M = \sqrt{\log(1/\epsilon)}$, $$\frac{1}{n} \sum_{i=1}^n \phi\big((nu_{\ell,i}^{\mathsf{ao}})_{\ell=1}^5,nu_{6,i}^{\mathsf{ao}},(h_{\ell,i}^{\mathsf{ao}})_{\ell = 1}^6\big) \ensuremath{\stackrel{\bullet}{=}} \mathbb{E}\Big[ \phi\big( (nu_{\ell,i}^{\mathsf{se},\perp})_{\ell=1}^5, u_{6,i}^{\mathsf{se}}, (h_{\ell,i}^{\mathsf{se}})_{\ell=1}^6 \big) \Big].$$ The proof of the second line of equation [\[eq:ao-conc\]](#eq:ao-conc){reference-type="eqref" reference="eq:ao-conc"} in the case $k = 6$ is complete. Now we turn to the case $k = 5$. By the definition of $\boldsymbol{u}_5^{\mathsf{ao}}$ (equation [\[eq:AO-separated-opt\]](#eq:AO-separated-opt){reference-type="eqref" reference="eq:AO-separated-opt"}), $$u_{5,i}^{\mathsf{ao}} = - \mathrm{prox} \Big[ \frac{\ell_{\mathsf{a}}^*(n\,\cdot\,;1)}{n\zeta_{55}^u} \Big] \Big( \frac{\nu_{5,0} + \nu_{5,\mathsf{x}} + h_{5,i}^{\mathsf{ao}}}{\zeta_{55}^u} \Big) nu_{4,i}^{\mathsf{ao}} + \mathrm{prox} \Big[ \frac{\ell_{\mathsf{a}}^*(n\,\cdot\,;0)}{n\zeta_{55}^u} \Big] \Big( \frac{\nu_{5,0} + \nu_{5,\mathsf{x}} + h_{5,i}^{\mathsf{ao}}}{\zeta_{55}^u} \Big) (1+nu_{4,i}^{\mathsf{ao}}).$$ Because $\zeta_{55}^u \gtrsim n$ and proximal operators are 1-Lipschitz [@bauschke2011convex Proposition 12.27], $nu_{5,i}^{\mathsf{ao}}$ is $C$-Lipschitz in $h_{5,i}^{\mathsf{ao}}$. The derivative with respect to $nu_{4,i}^{\mathsf{ao}}$ may diverge linearly in $h_{5,i}^{\mathsf{ao}}$. Thus, as in the case $k = 6$, we must resort to a truncation argument. We do this by replacing $\nu_{5,0} + \nu_{5,\mathsf{x}} + h_{5,i}^{\mathsf{ao}}$ with $\mathsf{Trunc}^M(\nu_{5,0} + \nu_{5,\mathsf{x}} + h_{5,i}^{\mathsf{ao}})$ in the preceding display, and following, line by line, the argument used for $k = 6$.
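The $1$-Lipschitz (nonexpansiveness) property of proximal operators invoked above can be checked concretely on a simple convex function; the sketch below uses $f(t) = \lambda|t|$, whose proximal map is soft thresholding. This is an illustrative stand-in, not one of the paper's penalty functions.

```python
import random

def prox_abs(x, lam):
    # Proximal operator of f(t) = lam * |t|, i.e. soft thresholding:
    # argmin_t { lam*|t| + (t - x)^2 / 2 }.
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

random.seed(2)
lam = 0.7
pairs = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(1000)]

# Nonexpansiveness: |prox(a) - prox(b)| <= |a - b|.
nonexpansive = all(
    abs(prox_abs(a, lam) - prox_abs(b, lam)) <= abs(a - b) + 1e-12 for a, b in pairs
)
print(nonexpansive)
```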
Thus, the proof of the lemma is complete. ◻ The lemma has several useful corollaries. **Corollary 1** (Concentration of auxiliary second moments). *For $k = 5,6$, under Hypothesis (k-1), $$\label{eq:second-moment-conc} \begin{aligned} \langle\!\langle\boldsymbol{G}_k^{\mathsf{ao}} \rangle\!\rangle &\ensuremath{\stackrel{\bullet}{=}} p\boldsymbol{K}_{g,k}, \quad&\quad \frac{1}{n}\langle\!\langle\boldsymbol{H}_k^{\mathsf{ao}} \rangle\!\rangle &\ensuremath{\stackrel{\bullet}{=}} \boldsymbol{K}_{h,k}, \\ n\langle\!\langle\boldsymbol{U}_k^{\mathsf{ao}} \rangle\!\rangle &\ensuremath{\stackrel{\bullet}{=}} n\boldsymbol{K}_{g,k}, \quad&\quad \langle\!\langle\boldsymbol{V}_k^{\mathsf{ao}} \rangle\!\rangle &\ensuremath{\stackrel{\bullet}{=}} \boldsymbol{K}_{h,k}, \\ \langle\!\langle\boldsymbol{G}_k^{\mathsf{ao}} , \boldsymbol{V}_k^{\mathsf{ao}} \rangle\!\rangle &\ensuremath{\stackrel{\bullet}{=}} \boldsymbol{K}_{g,k}\boldsymbol{Z}_{u,k}^\top, \quad&\quad \langle\!\langle\boldsymbol{H}_k^{\mathsf{ao}} , \boldsymbol{U}_k^{\mathsf{ao}} \rangle\!\rangle &\ensuremath{\stackrel{\bullet}{=}} \boldsymbol{K}_{h,k} \boldsymbol{Z}_{v,k}^\top. \end{aligned}$$* *Proof of .* The entries of the second-moment functions $\langle\!\langle\cdot , \cdot \rangle\!\rangle$ and $\langle\!\langle\cdot \rangle\!\rangle$ are the empirical averages of order-2 pseudo-Lipschitz functions applied to each coordinate $i$. Thus, this is a consequence of  and the fixed point equations [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}. ◻ **Corollary 2** (Concentration of Cholesky decomposition).
*For $k =5,6$, under Hypothesis (k-1), $$\begin{aligned} \mathsf{L}\big(n\langle\!\langle\boldsymbol{U}_k^{\mathsf{ao}} \rangle\!\rangle\big) &\ensuremath{\stackrel{\bullet}{=}} \sqrt{n} \boldsymbol{L}_{g,k}, \qquad &\mathsf{L}^{\ddagger}\big(n\langle\!\langle\boldsymbol{U}_k^{\mathsf{ao}} \rangle\!\rangle\big) &\ensuremath{\stackrel{\bullet}{=}} \boldsymbol{L}_{g,k}^{\ddagger}/\sqrt{n}, \\ \mathsf{L}\big(\langle\!\langle\boldsymbol{V}_k^{\mathsf{ao}} \rangle\!\rangle\big) &\ensuremath{\stackrel{\bullet}{=}} \boldsymbol{L}_{h,k}, &\qquad \mathsf{L}^{\ddagger}\big(\langle\!\langle\boldsymbol{V}_k^{\mathsf{ao}} \rangle\!\rangle\big) &\ensuremath{\stackrel{\bullet}{=}} \boldsymbol{L}_{h,k}^{\ddagger}. \end{aligned}$$ We also have $$\begin{aligned} \mathsf{K}^\ddagger\big(n\langle\!\langle\boldsymbol{U}_k^{\mathsf{ao}} \rangle\!\rangle\big) &\ensuremath{\stackrel{\bullet}{=}} \boldsymbol{K}_{g,k}^\ddagger/n, \qquad &\mathsf{K}^\ddagger\big(\langle\!\langle\boldsymbol{V}_k^{\mathsf{ao}} \rangle\!\rangle\big) &\ensuremath{\stackrel{\bullet}{=}} \boldsymbol{K}_{h,k}^\ddagger. \end{aligned}$$* *Proof of .* We have $n \langle\!\langle\boldsymbol{U}_4^{\mathsf{ao}} \rangle\!\rangle_2 \ensuremath{\stackrel{\bullet}{=}}n \boldsymbol{K}_{g,4} = {\boldsymbol 0}$ and $\langle\!\langle\boldsymbol{V}_4^{\mathsf{ao}} \rangle\!\rangle_2 = \boldsymbol{K}_{h,2} = \langle\!\langle\boldsymbol{\Theta}\rangle\!\rangle$ using equation [\[eq:K-form\]](#eq:K-form){reference-type="eqref" reference="eq:K-form"} and the explicit expression $(\boldsymbol{u}_1^\mathsf{ao},\boldsymbol{u}_2^\mathsf{ao},\boldsymbol{u}_3^\mathsf{ao},\boldsymbol{u}_4^\mathsf{ao}) = ({\boldsymbol 0},{\boldsymbol 0},-\boldsymbol{1}/n,-\boldsymbol{y}_1/n)$ and $(\boldsymbol{v}_1^\mathsf{ao},\boldsymbol{v}_2^\mathsf{ao},\boldsymbol{v}_3^\mathsf{ao},\boldsymbol{v}_4^\mathsf{ao}) = (\boldsymbol{\theta}_1,\boldsymbol{\theta}_2,{\boldsymbol 0},{\boldsymbol 0})$.
Thus, we have equality of the upper-left $4 \times 4$ sub-matrices in the lemma. We check the high-probability approximation for entries $(\ell,\ell')$ with $\ell' \leq \ell$ and $\ell \leq k$. We induct on $\ell$. We use the explicit expressions for the Cholesky decomposition in  and the bounds on the fixed point parameters in  without explicitly citing them each time. Assume the result has been shown for the upper-left $(\ell-1)\times(\ell-1)$ submatrices. For $\ell' < \ell$, the formula $\mathsf{L}(\boldsymbol{K})_{\ell \ell'} = \sum_{\ell''=1}^{\ell'} \mathsf{L}^\ddagger(\boldsymbol{K})_{\ell' \ell''} K_{\ell\ell''}$ for both $\boldsymbol{K}= n \langle\!\langle\boldsymbol{U}_k^{\mathsf{ao}} \rangle\!\rangle$ and $\boldsymbol{K}= \langle\!\langle\boldsymbol{V}_k^{\mathsf{ao}} \rangle\!\rangle$ implies $\mathsf{L}(n \langle\!\langle\boldsymbol{U}_k^{\mathsf{ao}} \rangle\!\rangle)_{\ell \ell'} \ensuremath{\stackrel{\bullet}{=}}\sqrt{n} L_{g,\ell\ell'}$ and $\mathsf{L}(\langle\!\langle\boldsymbol{V}_k^{\mathsf{ao}} \rangle\!\rangle)_{\ell \ell'} \ensuremath{\stackrel{\bullet}{=}}L_{h,\ell\ell'}$ because, by the inductive hypothesis and , we have $\mathsf{L}^\ddagger(n \langle\!\langle\boldsymbol{U}_k^{\mathsf{ao}}\rangle\!\rangle)_{\ell'\ell''} \ensuremath{\stackrel{\bullet}{=}} L_{g,\ell'\ell''}^\ddagger/\sqrt{n}$, $n \langle\!\langle \boldsymbol{U}_k^{\mathsf{ao}}\rangle\!\rangle_{\ell\ell''} \ensuremath{\stackrel{\bullet}{=}}n K_{g,\ell\ell''}$, $\mathsf{L}^\ddagger( \langle\!\langle\boldsymbol{V}_k^{\mathsf{ao}}\rangle\!\rangle)_{\ell'\ell''} \ensuremath{\stackrel{\bullet}{=}} L_{h,\ell'\ell''}^\ddagger$, and $\langle\!\langle \boldsymbol{V}_k^{\mathsf{ao}}\rangle\!\rangle_{\ell\ell''} \ensuremath{\stackrel{\bullet}{=}}K_{h,\ell\ell''}$, and because $L_{g,\ell'\ell''}^\ddagger/\sqrt{n}$, $n K_{g,\ell\ell''}$, $L_{h,\ell'\ell''}^\ddagger$, and $K_{h,\ell\ell''}$ are all $\lesssim 1$.
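The entrywise formulas used in this induction are, in the full-rank case, the standard Cholesky and lower-triangular-inverse recursions. The sketch below (a generic SPD matrix, purely illustrative of the recursions and not of the paper's pseudo-inverse construction) computes both and verifies $\boldsymbol{L}\boldsymbol{L}^\top = \boldsymbol{K}$ and $\boldsymbol{L}^{-1}\boldsymbol{L} = \mathbf{I}$.

```python
import math

def cholesky(K):
    # Lower-triangular L with L L^T = K, via the entrywise recursion:
    # L[l][lp] = (K[l][lp] - sum_{j<lp} L[l][j]*L[lp][j]) / L[lp][lp],
    # L[l][l]  = sqrt(K[l][l] - sum_{j<l} L[l][j]^2).
    n = len(K)
    L = [[0.0] * n for _ in range(n)]
    for l in range(n):
        for lp in range(l):
            s = sum(L[l][j] * L[lp][j] for j in range(lp))
            L[l][lp] = (K[l][lp] - s) / L[lp][lp]
        L[l][l] = math.sqrt(K[l][l] - sum(L[l][j] ** 2 for j in range(l)))
    return L

def tri_inv(L):
    # Inverse M of lower-triangular L by forward substitution:
    # M[l][l]  = 1 / L[l][l],
    # M[l][lp] = -(1 / L[l][l]) * sum_{j=lp}^{l-1} L[l][j] * M[j][lp].
    n = len(L)
    M = [[0.0] * n for _ in range(n)]
    for l in range(n):
        M[l][l] = 1.0 / L[l][l]
        for lp in range(l):
            M[l][lp] = -M[l][l] * sum(L[l][j] * M[j][lp] for j in range(lp, l))
    return M

K = [[4.0, 2.0, 0.0], [2.0, 5.0, 1.0], [0.0, 1.0, 3.0]]
L = cholesky(K)
M = tri_inv(L)

# Check L L^T = K and M L = I.
recon = [[sum(L[i][k] * L[j][k] for k in range(3)) for j in range(3)] for i in range(3)]
prod = [[sum(M[i][k] * L[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
err1 = max(abs(recon[i][j] - K[i][j]) for i in range(3) for j in range(3))
err2 = max(abs(prod[i][j] - (1.0 if i == j else 0.0)) for i in range(3) for j in range(3))
print(err1 < 1e-10, err2 < 1e-10)
```

Note that the diagonal of the inverse is $+1/\mathsf{L}(\boldsymbol{K})_{\ell\ell}$, so only the off-diagonal entries of the inverse carry a minus sign.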
Similarly, the formula $\mathsf{L}(\boldsymbol{K})_{\ell \ell} = \sqrt{K_{\ell\ell} - \sum_{\ell'=1}^{\ell - 1} \mathsf{L}(\boldsymbol{K})_{\ell \ell'}^2}$ for both $\boldsymbol{K}= n \langle\!\langle\boldsymbol{U}_k^{\mathsf{ao}} \rangle\!\rangle$ and $\boldsymbol{K}= \langle\!\langle \boldsymbol{V}_k^{\mathsf{ao}} \rangle\!\rangle$ implies $\mathsf{L}(n \langle\!\langle\boldsymbol{U}_k^{\mathsf{ao}} \rangle\!\rangle)_{\ell \ell} \ensuremath{\stackrel{\bullet}{=}}\sqrt{n} L_{g,\ell\ell}$ and $\mathsf{L}(\langle\!\langle\boldsymbol{V}_k^{\mathsf{ao}} \rangle\!\rangle)_{\ell \ell} \ensuremath{\stackrel{\bullet}{=}} L_{h,\ell\ell}$. Because $\ell > 4$, $\ell$ is innovative with respect to both $n\boldsymbol{K}_g$ and $\boldsymbol{K}_h$. Moreover, $\sqrt{n} L_{g,\ell\ell} \gtrsim 1$ and $L_{h,\ell \ell} \gtrsim 1$, whence, by the high-probability approximation in the previous paragraph, with exponentially high probability (depending only on $\mathcal{P}_{\mathrm{model}}$), $\ell$ is innovative with respect to both $n \langle\!\langle\boldsymbol{U}_k^{\mathsf{ao}} \rangle\!\rangle$ and $\langle\!\langle \boldsymbol{V}_k^{\mathsf{ao}} \rangle\!\rangle$. Using the identities $\mathsf{L}^{\ddagger}(\boldsymbol{K})_{\ell\ell'} = -\frac{1}{\mathsf{L}(\boldsymbol{K})_{\ell\ell}} \sum_{\ell''=\ell'}^{\ell-1} \mathsf{L}(\boldsymbol{K})_{\ell \ell''}\mathsf{L}^\ddagger(\boldsymbol{K})_{\ell''\ell'}$ for $\ell' < \ell$ and $\mathsf{L}^{\ddagger}(\boldsymbol{K})_{\ell\ell} = \frac{1}{\mathsf{L}(\boldsymbol{K})_{\ell\ell}}$, which hold on this high-probability event, the inductive hypothesis and bounds on the fixed point parameters establish the desired high-probability approximation for $\mathsf{L}^\ddagger(n \langle\!\langle \boldsymbol{U}_k^{\mathsf{ao}} \rangle\!\rangle)$ and $\mathsf{L}^\ddagger(\langle\!\langle\boldsymbol{V}_k^{\mathsf{ao}} \rangle\!\rangle)$. The second display in the statement is a consequence of the first. ◻ **Corollary 3** (Bounds on auxiliary objective vectors).
*For $k =5,6$, under Hypothesis (k-1), for $\ell,\ell' \leq k$, $$\begin{aligned} \sqrt{n} \| \boldsymbol{u}_\ell^{\mathsf{ao}} \| &\lessdot C, \qquad &\| \boldsymbol{u}_\ell^{\mathsf{ao},\perp} \| &\lessdot C, \qquad & \| \boldsymbol{g}_\ell^{\mathsf{ao}} \| / \sqrt{n} &\lessdot C, \qquad &\| \boldsymbol{g}_\ell^{\mathsf{ao},\perp} \|/\sqrt{n} &\lessdot C, \\ \| \boldsymbol{v}_\ell^{\mathsf{ao}} \| &\lessdot C, \qquad &\| \boldsymbol{v}_\ell^{\mathsf{ao},\perp} \| &\lessdot C, \qquad &\| \boldsymbol{h}_\ell^{\mathsf{ao}} \| / \sqrt{n} &\lessdot C, \qquad &\| \boldsymbol{h}_\ell^{\mathsf{ao},\perp} \| / \sqrt{n} &\lessdot C, \\ && \langle \boldsymbol{g}_k^{\mathsf{ao},\perp} , \boldsymbol{v}_k^{\mathsf{ao}} \rangle / \sqrt{n} &\gtrdot c, & \langle \boldsymbol{h}_k^{\mathsf{ao},\perp} , \boldsymbol{u}_k^{\mathsf{ao}} \rangle &\gtrdot c. \end{aligned}$$* *Proof of .* The bounds on $\sqrt{n} \| \boldsymbol{u}_\ell^{\mathsf{ao}} \|$, $\| \boldsymbol{v}_\ell^{\mathsf{ao}} \|$, $\| \boldsymbol{g}_\ell^{\mathsf{ao}} \| / \sqrt{n}$, and $\| \boldsymbol{h}_\ell^{\mathsf{ao}} \| / \sqrt{n}$ hold by the first two lines of  and the bounds on $\boldsymbol{K}_g$ and $\boldsymbol{K}_h$ in , and using that $p/n < C$. We then get the bounds on $\| \boldsymbol{u}_\ell^{\mathsf{ao},\perp} \|$, $\| \boldsymbol{g}_\ell^{\mathsf{ao},\perp} \|/\sqrt{n}$, $\| \boldsymbol{v}_\ell^{\mathsf{ao},\perp} \|$, and $\| \boldsymbol{h}_\ell^{\mathsf{ao},\perp} \| / \sqrt{n}$ as consequences, using their definitions in equations [\[eq:GHk-ao-perp-from-std\]](#eq:GHk-ao-perp-from-std){reference-type="eqref" reference="eq:GHk-ao-perp-from-std"} and [\[eq:UVk-ao-perp\]](#eq:UVk-ao-perp){reference-type="eqref" reference="eq:UVk-ao-perp"} and the bounds on the entries of $\boldsymbol{L}_{g,k}^{\ddagger\top}/\sqrt{n}$ and $\boldsymbol{L}_{h,k}^{\ddagger\top}$ in .
We get the lower bound on $\langle \boldsymbol{g}_k^{\mathsf{ao},\perp} , \boldsymbol{v}_k^{\mathsf{ao}} \rangle / \sqrt{n}$ by observing $$\begin{aligned} \langle \boldsymbol{g}_k^{\mathsf{ao},\perp} , \boldsymbol{v}_k^{\mathsf{ao}} \rangle / \sqrt{n} &\underset{\emph{(i)}}= \langle\!\langle\boldsymbol{G}_k^{\mathsf{ao}} \boldsymbol{L}_{g,k}^{\ddagger\top}, \boldsymbol{V}_k^{\mathsf{ao}} \rangle\!\rangle_{kk} / \sqrt{n} = [\boldsymbol{L}_{g,k}^\ddagger\langle\!\langle\boldsymbol{G}_k^{\mathsf{ao}} , \boldsymbol{V}_k^{\mathsf{ao}} \rangle\!\rangle]_{kk}/ \sqrt{n} \\ &\underset{\emph{(ii)}}\ensuremath{\stackrel{\bullet}{=}} [\boldsymbol{L}_{g,k}^\ddagger\boldsymbol{K}_{g,k}\boldsymbol{Z}_{u,k}^\top]_{kk}/\sqrt{n} \underset{\emph{(iii)}}= [\boldsymbol{L}_{g,k}^\top\boldsymbol{Z}_{u,k}^\top]_{kk} / \sqrt{n} = L_{g,kk} \zeta_{kk}^u / \sqrt{n} \underset{\emph{(iv)}}\asymp 1, \end{aligned}$$ where equality *(i)* holds by equation [\[eq:GHk-ao-perp-from-std\]](#eq:GHk-ao-perp-from-std){reference-type="eqref" reference="eq:GHk-ao-perp-from-std"}, approximate equality *(ii)* holds by  and the bounds on $\boldsymbol{L}_{g,k}^\ddagger$ and $\boldsymbol{Z}_{u,k}$ in , equality *(iii)* holds by , and the approximation *(iv)* holds by . The lower bound on $\langle \boldsymbol{h}_k^{\mathsf{ao},\perp},\boldsymbol{u}_k^{\mathsf{ao}} \rangle$ holds similarly. ◻ **Corollary 4**. 
*For $k \geq 3$, under Hypothesis (k-1) we have $$\label{eq:uk-vk-perp-approx} \frac{\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{ao}}}^\perp \boldsymbol{u}_k^{\mathsf{ao}}}{\|\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{ao}}}^\perp \boldsymbol{u}_k^{\mathsf{ao}}\|} \ensuremath{\stackrel{\bullet}{=}}\boldsymbol{u}_k^{\mathsf{ao},\perp}, \qquad \frac{\mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{ao}}}^\perp \boldsymbol{v}_k^{\mathsf{ao}}}{\|\mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{ao}}}^\perp \boldsymbol{v}_k^{\mathsf{ao}}\|} \ensuremath{\stackrel{\bullet}{=}}\boldsymbol{v}_k^{\mathsf{ao},\perp},$$ where, recall, $\boldsymbol{u}_k^{\mathsf{ao},\perp}$ and $\boldsymbol{v}_k^{\mathsf{ao},\perp}$ are defined by equations [\[eq:UVk-ao-perp\]](#eq:UVk-ao-perp){reference-type="eqref" reference="eq:UVk-ao-perp"}.* *Proof of .* By equation [\[eq:UVk-ao-perp\]](#eq:UVk-ao-perp){reference-type="eqref" reference="eq:UVk-ao-perp"}, $$\begin{gathered} \frac{\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{ao}}}^\perp \boldsymbol{u}_k^{\mathsf{ao}}}{\|\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{ao}}}^\perp \boldsymbol{u}_k^{\mathsf{ao}}\|} = \Big[ \sqrt{n} \boldsymbol{U}_k^{\mathsf{ao}} \big(\mathsf{L}^\ddagger(n \langle\!\langle \boldsymbol{U}_k^{\mathsf{ao}} \rangle\!\rangle)\big)^\top \Big]_{\,\cdot\,,k} \ensuremath{\stackrel{\bullet}{=}} \Big[ \sqrt{n} \boldsymbol{U}_k^{\mathsf{ao}} \big(\boldsymbol{L}_{g,k}^\ddagger/\sqrt{n}\big)^\top \Big]_{\,\cdot\,,k} = \boldsymbol{u}_k^{\mathsf{ao},\perp}, \\ \frac{\mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{ao}}}^\perp \boldsymbol{v}_k^{\mathsf{ao}}}{\|\mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{ao}}}^\perp \boldsymbol{v}_k^{\mathsf{ao}}\|} = \Big[ \boldsymbol{V}_k^{\mathsf{ao}} \big(\mathsf{L}^\ddagger(\langle\!\langle\boldsymbol{V}_k^{\mathsf{ao}} \rangle\!\rangle)\big)^\top \Big]_{\,\cdot\,,k} \ensuremath{\stackrel{\bullet}{=}}\Big[ \boldsymbol{V}_k^{\mathsf{ao}} \big(\boldsymbol{L}_{h,k}^\ddagger\big)^\top \Big]_{\,\cdot\,,k} = 
\boldsymbol{v}_k^{\mathsf{ao},\perp}, \end{gathered}$$ where in each line the approximate equality uses  and the fact that $\sqrt{n} \|\boldsymbol{u}_\ell^{\mathsf{ao}}\| \lessdot C$ and $\| \boldsymbol{v}_\ell^{\mathsf{ao}} \| \lessdot C$ for all $\ell$ and some $\mathcal{P}_{\mathrm{model}}$-dependent $C > 0$ by . ◻ ## The sequential Gordon inequality {#sec:seq-gordon} The key tool for showing Hypothesis (k-1) implies Hypothesis (k) is the sequential Gordon inequality. We call the objective in equation [\[eq:min-max\]](#eq:min-max){reference-type="eqref" reference="eq:min-max"} the *primary objective*, and denote it by $\mathsf{PriObj}_k(\boldsymbol{u};v_0,\boldsymbol{v})$. **Lemma 16** (Sequential Gordon inequality). *Assume Hypothesis (k-1). Let $S_u = S_u(\boldsymbol{U}_{k-1}^{\mathsf{po}},\boldsymbol{H}_{k-1}^{\mathsf{po}},\boldsymbol{\varepsilon}_1,\boldsymbol{\varepsilon}_2)$ be a random, compact subset of ${\mathbb R}^n$ which is, with probability 1, contained in a ball $\mathsf{B}_2({\boldsymbol 0},C/\sqrt{n})$, and let $S_v = S_v(\boldsymbol{V}_{k-1}^{\mathsf{po}},\boldsymbol{G}_{k-1}^{\mathsf{po}})$ be a random, compact subset of ${\mathbb R}^{p + 1}$ that is, with probability 1, contained in the ball $\mathsf{B}_2({\boldsymbol 0},C)$, where $C$ is any $\mathcal{P}_{\mathrm{model}}$-dependent constant. Then $$\min_{(v_0,\boldsymbol{v}) \in S_v} \max_{\boldsymbol{u}\in S_u} \; \mathsf{PriObj}_k(\boldsymbol{u};v_0,\boldsymbol{v}) \gtrdot \min_{(v_0,\boldsymbol{v}) \in S_v} \max_{\boldsymbol{u} \in S_u} \; \mathsf{AuxObj}_k(\boldsymbol{u};v_0,\boldsymbol{v}),$$ with the reverse inequality if the minimization and maximization are reversed. 
If $S_u$ and $S_v$ are almost-surely convex, we also have $$\min_{(v_0,\boldsymbol{v}) \in S_v} \max_{\boldsymbol{u}\in S_u} \; \mathsf{PriObj}_k(\boldsymbol{u};v_0,\boldsymbol{v}) \lessdot \min_{(v_0,\boldsymbol{v}) \in S_v} \max_{\boldsymbol{u} \in S_u} \; \mathsf{AuxObj}_k(\boldsymbol{u};v_0,\boldsymbol{v}),$$ with the reverse inequality if the minimization and maximization are reversed.* *Proof of .* Let $\boldsymbol{A}^\| \ensuremath{: =}\boldsymbol{A}- \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{po}}}^\perp \boldsymbol{A} \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{po}}}^\perp$. The proof of  is based on the following two facts, to be proved in the sequel: 1. We have the following equality of distributions: $$\label{eq:X-to-Xtilde} (\boldsymbol{U}_{k-1}^{\mathsf{po}},\;\boldsymbol{V}_{k-1}^{\mathsf{po}},\;\boldsymbol{G}_{k-1}^{\mathsf{po}},\;\boldsymbol{H}_{k-1}^{\mathsf{po}},\;\boldsymbol{A}) \, {\buildrel d \over =} \, (\boldsymbol{U}_{k-1}^{\mathsf{po}},\;\boldsymbol{V}_{k-1}^{\mathsf{po}},\;\boldsymbol{G}_{k-1}^{\mathsf{po}},\;\boldsymbol{H}_{k-1}^{\mathsf{po}},\;\boldsymbol{A}^\| + \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{po}}}^\perp {\widetilde{\boldsymbol{A}}} \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{po}}}^\perp),$$ where ${\widetilde{\boldsymbol{A}}}$ is independent of everything else. 2. We have the approximation $$\label{eq:approx-gordon-terms} \frac{1}{\sqrt{n}} \big\| \boldsymbol{A}^\| - (-\boldsymbol{T}_g^\top + \boldsymbol{T}_h) \big\|_{{\rm op}} \ensuremath{\stackrel{\bullet}{=}}0.$$ First we show how these two facts imply the result.
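As a point of reference for how such comparisons are used, in the simplest instance of the standard Gordon inequality (no conditioning, $S_v$ the unit sphere, $S_u$ the unit ball, no additive $\phi$ term) the primary min-max equals $\sigma_{\min}(\boldsymbol{A})$, while the auxiliary min-max evaluates to $\|\boldsymbol{\xi}_h\| - \|\boldsymbol{\xi}_g\|$, recovering the classical bound $\mathbb{E}[\sigma_{\min}(\boldsymbol{A})] \gtrsim \sqrt{n} - \sqrt{p}$ for an $n \times p$ Gaussian matrix with $n \geq p$. A minimal Monte Carlo sketch of this instance (numpy assumed; the dimensions and trial counts are illustrative and not taken from the paper):

```python
import numpy as np

# Simplest instance of the Gordon comparison: for an n x p matrix A with iid
# N(0,1) entries, the primary min-max  min_{||v||=1} max_{||u||<=1} u^T A v
# equals sigma_min(A), while the auxiliary objective built from independent
# Gaussians xi_g in R^p, xi_h in R^n evaluates to ||xi_h|| - ||xi_g||.
rng = np.random.default_rng(0)
n, p, trials = 400, 100, 50

# Primary side: average smallest singular value.
sigma_min = np.mean([np.linalg.svd(rng.standard_normal((n, p)), compute_uv=False)[-1]
                     for _ in range(trials)])

# Auxiliary side: min over unit v of (<xi_g, v> + ||xi_h||) = ||xi_h|| - ||xi_g||.
aux = np.mean([np.linalg.norm(rng.standard_normal(n)) - np.linalg.norm(rng.standard_normal(p))
               for _ in range(trials)])

print(sigma_min, aux, np.sqrt(n) - np.sqrt(p))
```

Both averages land near $\sqrt{n} - \sqrt{p}$, which is what the auxiliary problem predicts for the primary one.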
Applying the standard Gordon inequality (see, for example, Theorem 5.1 in the paper [@miolane2021] or Theorem 3 in the paper [@pmlr-v40-Thrampoulidis15]) to the matrix ${\widetilde{\boldsymbol{A}}}$ conditionally on $\boldsymbol{A}^\|, \boldsymbol{U}_{k-1}^{\mathsf{po}}, \boldsymbol{V}_{k-1}^{\mathsf{po}}, \boldsymbol{G}_{k-1}^{\mathsf{po}}, \boldsymbol{H}_{k-1}^{\mathsf{po}}$, and then marginalizing over $\boldsymbol{A}^\|, \boldsymbol{U}_{k-1}^{\mathsf{po}}, \boldsymbol{V}_{k-1}^{\mathsf{po}}, \boldsymbol{G}_{k-1}^{\mathsf{po}}, \boldsymbol{H}_{k-1}^{\mathsf{po}}$, we have $$\label{eq:seq-gordon-exact} \begin{aligned} &\mathbb{P}\Big( \min_{\boldsymbol{v}\in S_v} \max_{\boldsymbol{u}\in S_u} \boldsymbol{u}^\top \boldsymbol{A}\boldsymbol{v}+ \phi(\boldsymbol{u};v_0,\boldsymbol{v}) \leq t \Big) \\ &\qquad\leq 2\mathbb{P}\Big( \min_{\boldsymbol{v} \in S_v} \max_{\boldsymbol{u}\in S_u} \boldsymbol{u}^\top \boldsymbol{A}^\| \boldsymbol{v}- \| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{po}}}^\perp \boldsymbol{u}\| \langle \boldsymbol{\xi}_g, \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{po}}}^\perp \boldsymbol{v}\rangle + \| \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{po}}}^\perp \boldsymbol{v}\| \langle \boldsymbol{\xi}_h, \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{po}}}^\perp \boldsymbol{u}\rangle + \phi(\boldsymbol{u};v_0,\boldsymbol{v}) \leq t \Big). 
\end{aligned}$$ Because $\|\boldsymbol{v}\| \leq C$ for all $(v_0,\boldsymbol{v}) \in S_v$ and $\| \boldsymbol{u}\| \leq C / \sqrt{n}$ for all $\boldsymbol{u}\in S_u$, $$\begin{gathered} \sup_{\substack{\boldsymbol{v}\in S_v\\ \boldsymbol{u}\in S_u}} \big| \boldsymbol{u}^\top \boldsymbol{A}^\| \boldsymbol{v}- \boldsymbol{u}^\top(-\boldsymbol{T}_g^\top + \boldsymbol{T}_h)\boldsymbol{v}\big| \leq \frac{C}{\sqrt{n}} \big\| \boldsymbol{A}^\| - (-\boldsymbol{T}_g^\top + \boldsymbol{T}_h) \big\|_{{\rm op}} \ensuremath{\stackrel{\bullet}{=}}0, \\ \sup_{\substack{\boldsymbol{v}\in S_v \\ \boldsymbol{u}\in S_u}} \big| \| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{po}}}^\perp \boldsymbol{u}\| \langle \boldsymbol{\xi}_g, \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{po}}}^\perp \boldsymbol{v}\rangle - \| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{po}}}^\perp \boldsymbol{u}\| \langle \boldsymbol{\xi}_g, \boldsymbol{v}\rangle \big| \leq \frac{C}{\sqrt{n}} \|\mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{po}}} \boldsymbol{\xi}_g \| \ensuremath{\stackrel{\bullet}{=}}0, \\ % \sup_{\substack{\boldsymbol{v}\in S_v \\ \boldsymbol{u}\in S_u}} \big| \| \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{po}}}^\perp \boldsymbol{v}\| \langle \boldsymbol{\xi}_h, \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{po}}}^\perp \boldsymbol{u}\rangle - \| \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{po}}}^\perp \boldsymbol{v}\| \langle \boldsymbol{\xi}_h, \boldsymbol{u}\rangle \big| \leq \frac{C}{\sqrt{n}} \|\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{po}}} \boldsymbol{\xi}_h \| \ensuremath{\stackrel{\bullet}{=}}0, \end{gathered}$$ where the final two lines use the distributional relations $\|\mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{po}}} \boldsymbol{\xi}_g \|^2 \sim \chi_{\mathsf{rank}(\boldsymbol{V}_{k-1}^{\mathsf{po}})}^2$ and $\|\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{po}}} \boldsymbol{\xi}_h \|^2 \sim \chi_{\mathsf{rank}(\boldsymbol{U}_{k-1}^{\mathsf{po}})}^2$, and $\boldsymbol{V}_{k-1}^{\mathsf{po}}$ and 
$\boldsymbol{U}_{k-1}^{\mathsf{po}}$ have rank at most $k-1$, with $k$ a finite constant. The first display in the lemma follows from the previous two displays. The second display in the lemma follows by an analogous argument, except that when $S_v$ and $S_u$ are convex, the standard Gordon inequality (see, for example, Theorem 5.1 in the paper [@miolane2021] or Theorem 3 in the paper [@pmlr-v40-Thrampoulidis15]) yields the claim [\[eq:seq-gordon-exact\]](#eq:seq-gordon-exact){reference-type="eqref" reference="eq:seq-gordon-exact"} with $\leq t$ replaced by $\geq t$. We now turn to proving the claims [\[eq:X-to-Xtilde\]](#eq:X-to-Xtilde){reference-type="eqref" reference="eq:X-to-Xtilde"} and [\[eq:approx-gordon-terms\]](#eq:approx-gordon-terms){reference-type="eqref" reference="eq:approx-gordon-terms"}. Beginning with the claim [\[eq:X-to-Xtilde\]](#eq:X-to-Xtilde){reference-type="eqref" reference="eq:X-to-Xtilde"}, consider any deterministic quantities $\boldsymbol{U}_{k-1}, \allowbreak \boldsymbol{V}_{k-1}, \allowbreak \{v_{0,\ell}\}_{\ell \leq k-1}, \allowbreak \boldsymbol{M}_{k-1},\allowbreak \boldsymbol{N}_{k-1}, \allowbreak \boldsymbol{e}_1, \allowbreak \boldsymbol{e}_1'$ and $\boldsymbol{e}_2$ such that $$\label{eq:deterministic-subgradient-identity} \begin{gathered} \boldsymbol{a}_\ell \in \partial_{\boldsymbol{v}} \phi_k(\boldsymbol{u}_\ell; v_{0,\ell}, \boldsymbol{v}_\ell; \boldsymbol{e}_1, \boldsymbol{e}_1', \boldsymbol{e}_2), \quad \boldsymbol{b}_\ell \in \partial_{\boldsymbol{u}}\Big(-\phi_k(\boldsymbol{u}_\ell; v_{0,\ell}, \boldsymbol{v}_\ell; \boldsymbol{e}_1, \boldsymbol{e}_1', \boldsymbol{e}_2)\Big), \quad 0 \in \partial_{v_0} \phi_k(\boldsymbol{u}_\ell;v_{0,\ell},\boldsymbol{v}_\ell;\boldsymbol{e}_1,\boldsymbol{e}_1',\boldsymbol{e}_2) \end{gathered}$$ for all $\ell \leq k - 1$.
Then the event that $$\label{eq:soln-event} \mathcal{E}_1 \ensuremath{: =}\Bigg\{ \begin{gathered} \boldsymbol{U}_{k-1}^{\mathsf{po}} = \boldsymbol{U}_{k-1},\quad \boldsymbol{A}^\top \boldsymbol{U}_{k-1}^{\mathsf{po}} = \boldsymbol{M}_{k-1},\quad \boldsymbol{V}_{k-1}^{\mathsf{po}} = \boldsymbol{V}_{k-1},\quad \boldsymbol{A} \boldsymbol{V}_{k-1}^{\mathsf{po}} = \boldsymbol{N}_{k-1}, \\ \boldsymbol{\varepsilon}_1 = \boldsymbol{e}_1,\quad \boldsymbol{\varepsilon}_1' = \boldsymbol{e}_1',\quad \boldsymbol{\varepsilon}_2 = \boldsymbol{e}_2,\quad v_{0,\ell}^{\mathsf{po}} = v_{0,\ell} \text{ for $\ell \leq k-1$}, \end{gathered} \Bigg\}$$ is equivalent to the event that $$\label{eq:lin-constraint-event} \mathcal{E}_2 \ensuremath{: =}\Big\{ \boldsymbol{A}^\top \boldsymbol{U}_{k-1} = \boldsymbol{M}_{k-1},\quad \boldsymbol{A} \boldsymbol{V}_{k-1} = \boldsymbol{N}_{k-1},\quad \boldsymbol{\varepsilon}_1 = \boldsymbol{e}_1,\quad \boldsymbol{\varepsilon}_1' = \boldsymbol{e}_1',\quad \boldsymbol{\varepsilon}_2 = \boldsymbol{e}_2 \Big\}.$$ Indeed, the inclusion of $\mathcal{E}_1$ within $\mathcal{E}_2$ is immediate. The opposite inclusion of $\mathcal{E}_2$ within $\mathcal{E}_1$ follows by the KKT conditions for the saddle point problem [\[eq:min-max\]](#eq:min-max){reference-type="eqref" reference="eq:min-max"} and the identities [\[eq:deterministic-subgradient-identity\]](#eq:deterministic-subgradient-identity){reference-type="eqref" reference="eq:deterministic-subgradient-identity"}.
Writing $$\begin{aligned} \boldsymbol{A}^\| = \boldsymbol{A}\mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{po}}} + \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{po}}} \boldsymbol{A}- \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{po}}} \boldsymbol{A}\mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{po}}},\end{aligned}$$ we see that $\boldsymbol{A}^\|$ is a function of $\boldsymbol{U}_{k-1}^{\mathsf{po}}$, $\boldsymbol{A}^\top \boldsymbol{U}_{k-1}^{\mathsf{po}}$, $\boldsymbol{V}_{k-1}^{\mathsf{po}}$, $\boldsymbol{A}\boldsymbol{V}_{k-1}^{\mathsf{po}}$, which we denote by $f$. Then $\boldsymbol{A}= f\big(\boldsymbol{U}_{k-1}^{\mathsf{po}},\allowbreak \boldsymbol{A}^\top \boldsymbol{U}_{k-1}^{\mathsf{po}},\allowbreak \boldsymbol{V}_{k-1}^{\mathsf{po}},\allowbreak \boldsymbol{A} \boldsymbol{V}_{k-1}^{\mathsf{po}}\big) + \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{po}}}^\perp \boldsymbol{A} \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{po}}}^\perp$, so that $$\begin{aligned} \mathsf{Law}(\boldsymbol{A}\bigm| \mathcal{E}_1) &= \mathsf{Law}(\boldsymbol{A}\bigm| \mathcal{E}_2) = \mathsf{Law}(f(\boldsymbol{U}_{k-1},\boldsymbol{M}_{k-1},\boldsymbol{V}_{k-1},\boldsymbol{N}_{k-1}) + \mathsf{P}_{\boldsymbol{U}_{k-1}}^\perp \boldsymbol{A}\mathsf{P}_{\boldsymbol{V}_{k-1}}^\perp \bigm| \mathcal{E}_2) \\ &= \mathsf{Law}(f(\boldsymbol{U}_{k-1},\boldsymbol{M}_{k-1},\boldsymbol{V}_{k-1},\boldsymbol{N}_{k-1}) + \mathsf{P}_{\boldsymbol{U}_{k-1}}^\perp {\widetilde{\boldsymbol{A}}}\mathsf{P}_{\boldsymbol{V}_{k-1}}^\perp ), \end{aligned}$$ where the final equality uses that $\mathsf{P}_{\boldsymbol{U}_{k-1}}^\perp \boldsymbol{A} \mathsf{P}_{\boldsymbol{V}_{k-1}}^\perp$ is independent of $\boldsymbol{A}^\top \boldsymbol{U}_{k-1}$ and $\boldsymbol{A}\boldsymbol{V}_{k-1}$ (for fixed $\boldsymbol{U}_{k-1}$, $\boldsymbol{V}_{k-1}$), which is easily checked because $\boldsymbol{A}$ has iid standard Gaussian entries. 
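The two ingredients of this step, the exact algebraic decomposition $\boldsymbol{A}= \boldsymbol{A}^\| + \mathsf{P}_{\boldsymbol{U}_{k-1}}^\perp \boldsymbol{A}\mathsf{P}_{\boldsymbol{V}_{k-1}}^\perp$ and the independence of the projected-out block from $(\boldsymbol{A}^\top \boldsymbol{U}_{k-1}, \boldsymbol{A}\boldsymbol{V}_{k-1})$, are easy to sanity-check numerically; a minimal sketch (numpy assumed; dimensions and sample sizes are illustrative):

```python
import numpy as np

# For A with iid N(0,1) entries and fixed U, V with orthonormal columns:
# (i) A = A_par + P_U^perp A P_V^perp holds exactly, where
#     A_par = A P_V + P_U A - P_U A P_V;
# (ii) entries of P_U^perp A P_V^perp are uncorrelated with entries of A^T U
#     (for jointly Gaussian variables, this is equivalent to independence).
rng = np.random.default_rng(1)
n, p, r = 8, 6, 2
U, _ = np.linalg.qr(rng.standard_normal((n, r)))   # orthonormal columns
V, _ = np.linalg.qr(rng.standard_normal((p, r)))
PU, PV = U @ U.T, V @ V.T
PU_perp, PV_perp = np.eye(n) - PU, np.eye(p) - PV

xs, ys = [], []
for _ in range(20000):
    A = rng.standard_normal((n, p))
    A_par = A @ PV + PU @ A - PU @ A @ PV
    assert np.allclose(A_par + PU_perp @ A @ PV_perp, A)   # exact decomposition
    xs.append((PU_perp @ A @ PV_perp)[0, 0])
    ys.append((A.T @ U)[0, 0])

corr = np.corrcoef(xs, ys)[0, 1]
print(abs(corr))  # small, consistent with independence
```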
Note that for all realizations of the matrix $\boldsymbol{A}$ and the noise $\boldsymbol{\varepsilon}_1,\boldsymbol{\varepsilon}_1',\boldsymbol{\varepsilon}_2$, there is some collection of quantities $\boldsymbol{U}_{k-1},\allowbreak \boldsymbol{V}_{k-1},\allowbreak \{v_{0,\ell}\}_{\ell \leq k-1},\allowbreak \boldsymbol{M}_{k-1},\allowbreak \boldsymbol{N}_{k-1},\allowbreak \boldsymbol{e}_1,\allowbreak \boldsymbol{e}_1',\allowbreak \boldsymbol{e}_2$ for which the event $\mathcal{E}_1$ has occurred. Thus, marginalizing the previous display over the realization of the matrix $\boldsymbol{A}$ and noise $\boldsymbol{\varepsilon}_1,\boldsymbol{\varepsilon}_1',\boldsymbol{\varepsilon}_2$ gives [\[eq:X-to-Xtilde\]](#eq:X-to-Xtilde){reference-type="ref" reference="eq:X-to-Xtilde"}. Next, we prove [\[eq:approx-gordon-terms\]](#eq:approx-gordon-terms){reference-type="ref" reference="eq:approx-gordon-terms"}. By [\[eq:loo-noise-def\]](#eq:loo-noise-def){reference-type="ref" reference="eq:loo-noise-def"}, we have $\boldsymbol{G}_{k-1}^{\mathsf{po}} = \boldsymbol{V}_{k-1}^{\mathsf{po}} \boldsymbol{Z}_{v,k-1}^\top - \boldsymbol{A}^\top \boldsymbol{U}_{k-1}^{\mathsf{po}}$ and $\boldsymbol{H}_{k-1}^{\mathsf{po}} = \boldsymbol{U}_{k-1}^{\mathsf{po}} \boldsymbol{Z}_{u,k-1}^\top + \boldsymbol{A}\boldsymbol{V}_{k-1}^{\mathsf{po}}$. Thus, $$\begin{aligned} -\boldsymbol{T}_g^\top + \boldsymbol{T}_h &= - \boldsymbol{U}_{k-1}^{\mathsf{po}} \boldsymbol{K}_{g,k-1}^\ddagger \boldsymbol{Z}_{v,k-1}\boldsymbol{V}_{k-1}^{\mathsf{po}\top} + \boldsymbol{U}_{k-1}^{\mathsf{po}} \boldsymbol{K}_{g,k-1}^\ddagger \boldsymbol{U}_{k-1}^{\mathsf{po}\top} \boldsymbol{A}\\ &\qquad\qquad+ \boldsymbol{U}_{k-1}^{\mathsf{po}} \boldsymbol{Z}_{u,k-1}^\top \boldsymbol{K}_{h,k-1}^\ddagger \boldsymbol{V}_{k-1}^{\mathsf{po}\top} + \boldsymbol{A}\boldsymbol{V}_{k-1}^{\mathsf{po}} \boldsymbol{K}_{h,k-1}^{\ddagger}\boldsymbol{V}_{k-1}^{\mathsf{po}\top}.
\end{aligned}$$ We have $$\begin{aligned} \frac{1}{\sqrt{n}}\boldsymbol{U}_{k-1}^{\mathsf{po}} \boldsymbol{Z}_{u,k-1}^\top\boldsymbol{K}_{h,k-1}^\ddagger&\boldsymbol{V}_{k-1}^{\mathsf{po}\top} \underset{\emph{(i)}}= \frac{1}{\sqrt{n}}\boldsymbol{U}_{k-1}^{\mathsf{po}} \boldsymbol{K}_{g,k-1}^\ddagger \boldsymbol{K}_{g,k-1} \boldsymbol{Z}_{u,k-1}^\top \boldsymbol{K}_{h,k-1}^\ddagger\boldsymbol{V}_{k-1}^{\mathsf{po}\top} \\ &\underset{\emph{(ii)}}\ensuremath{\stackrel{\bullet}{=}} \frac{1}{\sqrt{n}}\boldsymbol{U}_{k-1}^{\mathsf{po}} \boldsymbol{K}_{g,k-1}^\ddagger \langle\!\langle\boldsymbol{G}_{k-1}^{\mathsf{po}} , \boldsymbol{V}_{k-1}^{\mathsf{po}} \rangle\!\rangle \boldsymbol{K}_{h,k-1}^\ddagger\boldsymbol{V}_{k-1}^{\mathsf{po}\top} \\ &\underset{\emph{(iii)}}= \frac{1}{\sqrt{n}}\boldsymbol{U}_{k-1}^{\mathsf{po}} \boldsymbol{K}_{g,k-1}^\ddagger \langle\!\langle\boldsymbol{V}_{k-1}^{\mathsf{po}} \boldsymbol{Z}_{v,k-1}^\top - \boldsymbol{A}^\top \boldsymbol{U}_{k-1}^{\mathsf{po}} , \boldsymbol{V}_{k-1}^{\mathsf{po}} \rangle\!\rangle \boldsymbol{K}_{h,k-1}^\ddagger\boldsymbol{V}_{k-1}^{\mathsf{po}\top} \\ &\underset{\emph{(iv)}}\ensuremath{\stackrel{\bullet}{=}}\frac{1}{\sqrt{n}} \Big( \boldsymbol{U}_{k-1}^{\mathsf{po}} \boldsymbol{K}_{g,k-1}^\ddagger \boldsymbol{Z}_{v,k-1}\boldsymbol{V}_{k-1}^{\mathsf{po}\top} - \boldsymbol{U}_{k-1}^{\mathsf{po}} \boldsymbol{K}_{g,k-1}^\ddagger \boldsymbol{U}_{k-1}^{\mathsf{po}\top} \boldsymbol{A}\boldsymbol{V}_{k-1}^{\mathsf{po}} \boldsymbol{K}_{h,k-1}^\ddagger \boldsymbol{V}_{k-1}^{\mathsf{po}\top} \Big). 
\end{aligned}$$ Equation *(i)* holds because $\boldsymbol{K}_{g,k-1}^\ddagger \boldsymbol{K}_{g,k-1} \boldsymbol{Z}_{u,k-1}^\top = {\mathbf I}_{\boldsymbol{K}_{g,k-1}}^\top \boldsymbol{Z}_{u,k-1}^\top = \boldsymbol{Z}_{u,k-1}^\top$ by  and the innovation compatibility of $\boldsymbol{Z}_{u,k-1}$ with $\boldsymbol{K}_{g,k-1}$ (the columns of $\boldsymbol{Z}_{u,k-1}^\top$ lie in the span of the standard basis vectors corresponding to innovative indices). Approximation *(ii)* holds by , the bounds on $\boldsymbol{U}_{k-1}^{\mathsf{po}}$, $\boldsymbol{V}_{k-1}^{\mathsf{po}}$ in , and the bounds on $\boldsymbol{K}_{g,k-1}^\ddagger$, $\boldsymbol{K}_{h,k-1}^\ddagger$ in . Equation *(iii)* holds by the definition of $\boldsymbol{G}_{k-1}^{\mathsf{po}}$. Approximation *(iv)* holds because, by , $\langle\!\langle\boldsymbol{V}_{k-1}^{\mathsf{po}} \rangle\!\rangle \ensuremath{\stackrel{\bullet}{=}}\boldsymbol{K}_{h,k-1}$, and, using  and the innovation compatibility of $\boldsymbol{Z}_{v,k-1}$ with respect to $\boldsymbol{K}_{h,k-1}$, $\boldsymbol{Z}_{v,k-1} \boldsymbol{K}_{h,k-1} \boldsymbol{K}_{h,k-1}^\ddagger = \boldsymbol{Z}_{v,k-1} {\mathbf I}_{\boldsymbol{K}_{h,k-1}} = \boldsymbol{Z}_{v,k-1}$. 
Combining the previous two displays, we get $$\frac{1}{\sqrt{n}}(-\boldsymbol{T}_g^\top + \boldsymbol{T}_h) \ensuremath{\stackrel{\bullet}{=}}\frac{1}{\sqrt{n}} \Big( \boldsymbol{U}_{k-1}^{\mathsf{po}} \boldsymbol{K}_{g,k-1}^\ddagger \boldsymbol{U}_{k-1}^{\mathsf{po}\top} \boldsymbol{A}+ \boldsymbol{A}\boldsymbol{V}_{k-1}^{\mathsf{po}} \boldsymbol{K}_{h,k-1}^{\ddagger}\boldsymbol{V}_{k-1}^{\mathsf{po}\top} - \boldsymbol{U}_{k-1}^{\mathsf{po}} \boldsymbol{K}_{g,k-1}^\ddagger \boldsymbol{U}_{k-1}^{\mathsf{po}\top} \boldsymbol{A} \boldsymbol{V}_{k-1}^{\mathsf{po}} \boldsymbol{K}_{h,k-1}^\ddagger \boldsymbol{V}_{k-1}^{\mathsf{po}\top} \Big).$$ Now, using , we have $\boldsymbol{U}_{k-1}^{\mathsf{po}} \boldsymbol{K}_{g,k-1}^\ddagger \boldsymbol{U}_{k-1}^{\mathsf{po}\top} \ensuremath{\stackrel{\bullet}{=}}\boldsymbol{U}_{k-1}^{\mathsf{po}} \mathsf{K}^\ddagger(\langle\!\langle\boldsymbol{U}_{k-1}^{\mathsf{po}}\rangle\!\rangle) \boldsymbol{U}_{k-1}^{\mathsf{po}\top} = \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{po}}}$ and likewise, $\boldsymbol{V}_{k-1}^{\mathsf{po}} \boldsymbol{K}_{h,k-1}^\ddagger \boldsymbol{V}_{k-1}^{\mathsf{po}\top} \ensuremath{\stackrel{\bullet}{=}}\boldsymbol{V}_{k-1}^{\mathsf{po}} \mathsf{K}^\ddagger(\langle\!\langle\boldsymbol{V}_{k-1}^{\mathsf{po}}\rangle\!\rangle) \boldsymbol{V}_{k-1}^{\mathsf{po}\top} = \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{po}}}$. Moreover, $\| \boldsymbol{A}\|_{{\rm op}} /\sqrt{n} \lessdot C$ [@vershynin_2012 Corollary 5.35], whence $$\frac{1}{\sqrt{n}}(-\boldsymbol{T}_g^\top + \boldsymbol{T}_h) \ensuremath{\stackrel{\bullet}{=}} \frac{1}{\sqrt{n}}(\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{po}}} \boldsymbol{A}+ \boldsymbol{A} \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{po}}} - \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{po}}} \boldsymbol{A} \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{po}}}) = \frac{1}{\sqrt{n}} \boldsymbol{A}^\|,$$ as desired. 
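The step $\boldsymbol{U}_{k-1}^{\mathsf{po}} \mathsf{K}^\ddagger(\langle\!\langle\boldsymbol{U}_{k-1}^{\mathsf{po}}\rangle\!\rangle) \boldsymbol{U}_{k-1}^{\mathsf{po}\top} = \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{po}}}$ is, modulo the $\ensuremath{\stackrel{\bullet}{=}}$ approximation, the usual pseudo-inverse formula for an orthogonal projector, and any positive rescaling of the Gram matrix (such as the normalization hidden in $\langle\!\langle\,\cdot\,\rangle\!\rangle$) cancels. A quick numerical check of that identity in isolation (numpy assumed; unnormalized Gram matrix for simplicity):

```python
import numpy as np

# Projection identity: for U with full column rank and Gram matrix K = U^T U,
# P = U K^+ U^T is the orthogonal projector onto col(U); rescaling K by a
# positive constant leaves P unchanged, since pinv(K/n) = n * pinv(K).
rng = np.random.default_rng(3)
n, r = 30, 4
U = rng.standard_normal((n, r))

P = U @ np.linalg.pinv(U.T @ U) @ U.T
P_scaled = U @ np.linalg.pinv(U.T @ U / n) @ U.T / n

assert np.allclose(P, P_scaled)   # Gram-matrix normalization cancels
assert np.allclose(P @ P, P)      # idempotent
assert np.allclose(P.T, P)        # symmetric
assert np.allclose(P @ U, U)      # acts as the identity on col(U)
```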
◻ ## Properties of the auxiliary objective {#sec:properties-aux-obj} Throughout the proofs in this section, we freely invoke the bounds on the problem parameters or fixed point parameters from  and Assumption A1 without citing the lemma or assumption each time. **Lemma 17**. *For $k = 5,6$, under Hypothesis $(k-1)$, we have $$\ell_k^*(\boldsymbol{u}_k^{\mathsf{ao}}; \boldsymbol{w}, \boldsymbol{y}_1, \boldsymbol{y}_2) \ensuremath{\stackrel{\bullet}{=}} \mathbb{E}[\ell_k^*(\boldsymbol{u}_k^{\mathsf{se}}; \boldsymbol{w}^{\mathsf{se}}, \boldsymbol{y}_1^{\mathsf{se}}, \boldsymbol{y}_2^{\mathsf{se}})].$$* *Proof of .* We handle $k = 5$ and $k = 6$ separately. First, we study $k = 5$. Because $\ell_\mathsf{a}(\eta;\ensuremath{a})$ is strongly convex and smooth (Assumption A1) in $\eta$, so too is $\ell_\mathsf{a}^*(nu;\ensuremath{a})$ in $nu$. Moreover, $\ell_\mathsf{a}^*(0;\ensuremath{a}) = - \inf_\eta \{ \ell_\mathsf{a}(\eta;\ensuremath{a}) \}$ and $\partial_\eta\ell_\mathsf{a}^*(\eta;\ensuremath{a})|_{\eta=0} = \mathop{\mathrm{arg\,min}}_\eta \{ \ell_\mathsf{a}(\eta;\ensuremath{a}) \}$. From Assumption A1, we can conclude that all of these quantities are bounded above in absolute value by $C$. This implies that $|\ell_\mathsf{a}^*(\eta;1) - \ell_\mathsf{a}^*(\eta;0)| \leq C(1+|\eta|)$. In particular, $\ell_5^*(\boldsymbol{u}_6;\boldsymbol{w},-n\boldsymbol{u}_5,\boldsymbol{y}_2)$ is an order-2 pseudo-Lipschitz function of $n\boldsymbol{u}_6$ and $n\boldsymbol{u}_5$ with constant of order $C$, whence the result for $k = 5$ holds by . Next, we study $k = 6$.
Recalling the expression for $u_{6,i}^\mathsf{ao}$ in equation [\[eq:u6ao-explicit\]](#eq:u6ao-explicit){reference-type="eqref" reference="eq:u6ao-explicit"}, we may write $$\begin{gathered} \ell_6^*(\boldsymbol{u}_6^{\mathsf{ao}};\boldsymbol{w},\boldsymbol{y}_1,\boldsymbol{y}_2) = \langle\boldsymbol{u}_6^{\mathsf{ao}},\boldsymbol{h}_2^{\mathsf{ao}} + \boldsymbol{\varepsilon}_2\rangle - \frac{1}{2n}\sum_{i=1}^n nu_{4,i}^{\mathsf{ao}}\,\frac{(\nu_{6,0} + \nu_{6,\mathsf{x}} - h_{2,i}^{\mathsf{ao}} - \varepsilon_{2,i} + h_{6,i}^{\mathsf{ao}})^2}{\Big(\frac{\zeta_{66}^u}{n} \sqrt{w(h_{1,i}^{\mathsf{ao}})} + \frac{1}{\sqrt{w(h_{1,i}^{\mathsf{ao}})}}\Big)^2}, \end{gathered}$$ where we use that $-nu_{4,i}^{\mathsf{ao}} = y_{1,i}^{\mathsf{ao}}$ and that $y_{1,i}^{\mathsf{ao}} \in \{0,1\}$. By , we have that $\langle \boldsymbol{u}_6^{\mathsf{ao}} , \boldsymbol{h}_2^{\mathsf{ao}} + \boldsymbol{\varepsilon}_2 \rangle \ensuremath{\stackrel{\bullet}{=}}\langle \boldsymbol{u}_6^{\mathsf{se}} , \boldsymbol{h}_2^{\mathsf{se}} + \boldsymbol{\varepsilon}_2^{\mathsf{se}} \rangle_{L_2}$. Because the derivatives of the second term with respect to $h_{1,i}^{\mathsf{ao}}$ and $nu_{4,i}^{\mathsf{ao}}$ grow quadratically in $h_{2,i}^{\mathsf{ao}}$, $\varepsilon_{2,i}$, and $h_{6,i}^{\mathsf{ao}}$, its concentration requires a truncation argument. The map $$(h_{1,i}^{\mathsf{ao}},h_{2,i}^{\mathsf{ao}},h_{6,i}^{\mathsf{ao}},nu_{4,i}^{\mathsf{ao}},\varepsilon_{2,i}) \mapsto (-nu_{4,i}^{\mathsf{ao}}\wedge 1)\,\frac{(\nu_{6,0} + \nu_{6,\mathsf{x}} - h_{2,i}^{\mathsf{ao}} - \varepsilon_{2,i} + h_{6,i}^{\mathsf{ao}})^2\wedge M^2}{\Big(\frac{\zeta_{66}^u}{n} \sqrt{w(h_{1,i}^{\mathsf{ao}})} + \frac{1}{\sqrt{w(h_{1,i}^{\mathsf{ao}})}}\Big)^2}$$ is $CM^2$-Lipschitz, because $c < w(\,\cdot\,) < C$, $w$ has derivative bounded by $C$, and $|\nu_{6,0}|$, $|\nu_{6,\mathsf{x}}|$, $\zeta_{66}^u/n$ are bounded above by $C$.
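The quantitative content of the truncation step is that, for Gaussian-tailed inputs, replacing a quadratic by its truncation at level $M^2$ perturbs empirical averages by at most an amount of Gaussian-tail size, which is then traded off against the Lipschitz constant $CM^2$ by the choice of $M$ below. A small numerical illustration of that tradeoff (numpy assumed; the constants $5$ and $1/2$ are illustrative stand-ins for $C$ and $c$):

```python
import numpy as np

# For standard Gaussian x, the truncation error E[(x^2 - M^2)_+] decays like
# a Gaussian tail; here we compare the empirical gap between mean(x^2) and
# mean(x^2 ∧ M^2) against the illustrative bound 5 * exp(-M^2 / 2).
rng = np.random.default_rng(2)
x = rng.standard_normal(200000)

for M in (1.5, 2.0, 3.0):
    gap = np.mean(x**2) - np.mean(np.minimum(x**2, M**2))
    assert 0.0 <= gap <= 5.0 * np.exp(-M**2 / 2)   # truncation error is tail-sized
    print(M, gap)
```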
By , $$\begin{aligned} &\frac{1}{2nM^2} \sum_{i=1}^n (-nu_{4,i}^{\mathsf{ao}}\wedge 1)\,\frac{(\nu_{6,0} + \nu_{6,\mathsf{x}} - h_{2,i}^{\mathsf{ao}} - \varepsilon_{2,i} + h_{6,i}^{\mathsf{ao}})^2\wedge M^2}{\Big(\frac{\zeta_{66}^u}{n} \sqrt{w(h_{1,i}^{\mathsf{ao}})} + \frac{1}{\sqrt{w(h_{1,i}^{\mathsf{ao}})}}\Big)^2} \\ &\qquad\qquad\ensuremath{\stackrel{\bullet}{=}}\frac{1}{2M^2} \mathbb{E}\Big[ (-nu_{4,i}^{\mathsf{se}}\wedge 1)\,\frac{(\nu_{6,0} + \nu_{6,\mathsf{x}} - h_{2,i}^{\mathsf{se}} - \varepsilon_{2,i} + h_{6,i}^{\mathsf{se}})^2\wedge M^2}{\Big(\frac{\zeta_{66}^u}{n} \sqrt{w(h_{1,i}^{\mathsf{se}})} + \frac{1}{\sqrt{w(h_{1,i}^{\mathsf{se}})}}\Big)^2} \Big] \\ &\qquad\qquad= \frac{1}{2M^2} \mathbb{E}\Big[ -nu_{4,i}^{\mathsf{se}}\,\frac{(\nu_{6,0} + \nu_{6,\mathsf{x}} - h_{2,i}^{\mathsf{se}} - \varepsilon_{2,i} + h_{6,i}^{\mathsf{se}})^2}{\Big(\frac{\zeta_{66}^u}{n} \sqrt{w(h_{1,i}^{\mathsf{se}})} + \frac{1}{\sqrt{w(h_{1,i}^{\mathsf{se}})}}\Big)^2} \Big] + \epsilon(M), \end{aligned}$$ where $|\epsilon(M)|\leq Ce^{-cM^2}$ by standard Gaussian tail bounds and the fact that $w(h_{1,i}^{\mathsf{se}})$ is bounded above and below and $-nu_{4,i}^{\mathsf{se}} \in \{0,1\}$.
Moreover, $$\begin{aligned} &\Big| \frac{1}{2nM^2} \sum_{i=1}^n (-nu_{4,i}^{\mathsf{ao}}\wedge 1)\,\frac{(\nu_{6,0} + \nu_{6,\mathsf{x}} - h_{2,i}^{\mathsf{ao}} - \varepsilon_{2,i} + h_{6,i}^{\mathsf{ao}})^2\wedge M^2}{\Big(\frac{\zeta_{66}^u}{n} \sqrt{w(h_{1,i}^{\mathsf{ao}})} + \frac{1}{\sqrt{w(h_{1,i}^{\mathsf{ao}})}}\Big)^2} \\ &\qquad\qquad\qquad\qquad\qquad\qquad- \frac{1}{2nM^2} \sum_{i=1}^n (-nu_{4,i}^{\mathsf{ao}})\,\frac{(\nu_{6,0} + \nu_{6,\mathsf{x}} - h_{2,i}^{\mathsf{ao}} - \varepsilon_{2,i} + h_{6,i}^{\mathsf{ao}})^2}{\Big(\frac{\zeta_{66}^u}{n} \sqrt{w(h_{1,i}^{\mathsf{ao}})} + \frac{1}{\sqrt{w(h_{1,i}^{\mathsf{ao}})}}\Big)^2} \Big| \\ &\qquad\qquad\leq \frac{C}{2nM^2} \sum_{i=1}^n (C + |h_{2,i}^{\mathsf{ao}}| + |\varepsilon_{2,i}| + |h_{6,i}^{\mathsf{ao}}| - M)_+^2 \\ &\qquad\qquad\ensuremath{\stackrel{\bullet}{=}}\frac{C}{2M^2} \mathbb{E}\big[(C + |h_{2,i}^{\mathsf{se}}| + |\varepsilon_{2,i}| + |h_{6,i}^{\mathsf{se}}| - M)_+^2\big] \leq Ce^{-cM^2}, \end{aligned}$$ where in the first inequality we have applied , in the approximate equality we have applied , and in the second inequality we have applied standard Gaussian tail bounds. Combining the previous two displays, we get $$\begin{aligned} &\Big| \frac{1}{2n(M \vee M^2)} \sum_{i=1}^n (-nu_{4,i}^{\mathsf{ao}})\,\frac{(\nu_{6,0} + \nu_{6,\mathsf{x}} - h_{2,i}^{\mathsf{ao}} - \varepsilon_{2,i} + h_{6,i}^{\mathsf{ao}})^2}{\Big(\frac{\zeta_{66}^u}{n} \sqrt{w(h_{1,i}^{\mathsf{ao}})} + \frac{1}{\sqrt{w(h_{1,i}^{\mathsf{ao}})}}\Big)^2} \\ & \qquad\qquad\qquad\qquad\qquad- \frac{1}{2(M \vee M^2)} \mathbb{E}\Big[ (-nu_{4,i}^{\mathsf{se}})\,\frac{(\nu_{6,0} + \nu_{6,\mathsf{x}} - h_{2,i}^{\mathsf{se}} - \varepsilon_{2,i} + h_{6,i}^{\mathsf{se}})^2}{\Big(\frac{\zeta_{66}^u}{n} \sqrt{w(h_{1,i}^{\mathsf{se}})} + \frac{1}{\sqrt{w(h_{1,i}^{\mathsf{se}})}}\Big)^2} \Big] \Big| \lessdot Ce^{-cM^2}.
\end{aligned}$$ That is, for all $\epsilon > 0$, the probability that the left-hand side exceeds $\epsilon + Ce^{-cM^2}$ is bounded above by $C'e^{-c'n\epsilon^r}$ for some $C,c,C',c',r > 0$ depending only on $\mathcal{P}_{\mathrm{model}}$. Then, taking $M = \sqrt{\log(1/\epsilon)}$, we get $$\frac{1}{2n} \sum_{i=1}^n (-nu_{4,i}^{\mathsf{ao}})\,\frac{(\nu_{6,0} + \nu_{6,\mathsf{x}} - h_{2,i}^{\mathsf{ao}} - \varepsilon_{2,i} + h_{6,i}^{\mathsf{ao}})^2}{\Big(\frac{\zeta_{66}^u}{n} \sqrt{w(h_{1,i}^{\mathsf{ao}})} + \frac{1}{\sqrt{w(h_{1,i}^{\mathsf{ao}})}}\Big)^2} \ensuremath{\stackrel{\bullet}{=}}\mathbb{E}\Big[ (-nu_{4,i}^{\mathsf{se}})\,\frac{(\nu_{6,0} + \nu_{6,\mathsf{x}} - h_{2,i}^{\mathsf{se}} - \varepsilon_{2,i} + h_{6,i}^{\mathsf{se}})^2}{\Big(\frac{\zeta_{66}^u}{n} \sqrt{w(h_{1,i}^{\mathsf{se}})} + \frac{1}{\sqrt{w(h_{1,i}^{\mathsf{se}})}}\Big)^2} \Big].$$ Because also $\langle \boldsymbol{u}_6^{\mathsf{ao}} , \boldsymbol{h}_2^{\mathsf{ao}} + \boldsymbol{\varepsilon}_2 \rangle \ensuremath{\stackrel{\bullet}{=}}\langle \boldsymbol{u}_6^{\mathsf{se}} , \boldsymbol{h}_2^{\mathsf{se}} + \boldsymbol{\varepsilon}_2^{\mathsf{se}} \rangle_{L_2}$, the proof of the lemma in the case $k = 6$ is complete. ◻ **Lemma 18** (Value of $\mathsf{AuxObj}_k$ at $(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}})$).
*For $k = 5,6$, under Hypothesis (k-1) we have $$\mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) \ensuremath{\stackrel{\bullet}{=}}\ell_k,$$ where $$\ell_k \ensuremath{: =}- \langle \boldsymbol{g}_k^{\mathsf{se}} , \boldsymbol{v}_k^{\mathsf{se}} \rangle_{L_2} + \langle \boldsymbol{h}_k^{\mathsf{se}} , \boldsymbol{u}_k^{\mathsf{se}} \rangle_{L_2} - \mathbb{E}[\ell_k^*(\boldsymbol{u}_k^{\mathsf{se}};\boldsymbol{w}^{\mathsf{se}},\boldsymbol{y}_1^{\mathsf{se}},\boldsymbol{y}_2^{\mathsf{se}})] + \mathbb{E}[\Omega_k(\boldsymbol{v}_k^{\mathsf{se}})].$$* *Proof of .* By the definition of $\phi_k$ and $\mathsf{AuxObj}_k$ (equations [\[eq:objectives\]](#eq:objectives){reference-type="eqref" reference="eq:objectives"} and [\[eq:def-AuxObj\]](#eq:def-AuxObj){reference-type="eqref" reference="eq:def-AuxObj"}), $$\begin{aligned} \mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) &= - \langle \boldsymbol{g}(\boldsymbol{u}_k^{\mathsf{ao}}) , \boldsymbol{v}_k^{\mathsf{ao}} \rangle + \langle \boldsymbol{h}(\boldsymbol{v}_k^{\mathsf{ao}}) , \boldsymbol{u}_k^{\mathsf{ao}} \rangle + \langle \boldsymbol{u}_k^{\mathsf{ao}} , \boldsymbol{1}\rangle(\nu_{k,0} + \langle \boldsymbol{\mu}_{\mathsf{x}} , \boldsymbol{v}_k^{\mathsf{ao}} \rangle) \\ &\qquad- \ell_k^*(\boldsymbol{u}_k^{\mathsf{ao}};\boldsymbol{w},\boldsymbol{y}_1,\boldsymbol{y}_2) + \Omega_k(\boldsymbol{v}_k^{\mathsf{ao}}). 
\end{aligned}$$ Using the definition of $\boldsymbol{g}(\,\cdot\,)$ (see equation [\[eq:gordon-gh-explicit\]](#eq:gordon-gh-explicit){reference-type="eqref" reference="eq:gordon-gh-explicit"}), we have $$\begin{aligned} \langle \boldsymbol{g}(\boldsymbol{u}_k^{\mathsf{ao}}) , \boldsymbol{v}_k^{\mathsf{ao}} \rangle &= \sum_{\ell=1}^{k-1} \langle \boldsymbol{u}_\ell^{\mathsf{ao},\perp},\boldsymbol{u}_k^{\mathsf{ao}}\rangle\langle\boldsymbol{g}_\ell^{\mathsf{ao},\perp},\boldsymbol{v}_k^{\mathsf{ao}}\rangle + \| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{ao}}}^\perp \boldsymbol{u}_k^{\mathsf{ao}} \| \langle \boldsymbol{\xi}_g , \boldsymbol{v}_k^{\mathsf{ao}} \rangle \\ &\ensuremath{\stackrel{\bullet}{=}}\sum_{\ell=1}^k \langle \boldsymbol{u}_\ell^{\mathsf{ao},\perp},\boldsymbol{u}_k^{\mathsf{ao}}\rangle\langle\boldsymbol{g}_\ell^{\mathsf{ao},\perp},\boldsymbol{v}_k^{\mathsf{ao}}\rangle = \langle\!\langle\boldsymbol{u}_k^{\mathsf{ao}} , \boldsymbol{U}_k^{\mathsf{ao}} \boldsymbol{L}_{g,k}^{\ddagger\top} \rangle\!\rangle \langle\!\langle\boldsymbol{G}_k^{\mathsf{ao}} \boldsymbol{L}_{g,k}^{\ddagger\top} , \boldsymbol{v}_k^{\mathsf{ao}} \rangle\!\rangle \\ &\ensuremath{\stackrel{\bullet}{=}}(\boldsymbol{K}_{g,k})_{k,1:k} \boldsymbol{K}_{g,k}^\ddagger (\boldsymbol{K}_{g,k}\boldsymbol{Z}_{u,k}^\top)_{1:k,k} = ({\mathbf I}_{\boldsymbol{K}_{g,k}} \boldsymbol{K}_{g,k} \boldsymbol{Z}_{u,k}^\top)_{kk} = (\boldsymbol{K}_{g,k} \boldsymbol{Z}_{u,k}^\top)_{kk} = \langle \boldsymbol{g}_k^{\mathsf{se}} , \boldsymbol{v}_k^{\mathsf{se}} \rangle_{L_2}, \end{aligned}$$ where the approximate equality in the second line holds by  and the high-probability bounds on $\| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{ao}}}^\perp \boldsymbol{u}_k^{\mathsf{ao}} \|$ and $\langle \boldsymbol{\xi}_g , \boldsymbol{v}_k^{\mathsf{ao}} \rangle$ implied by ; the equality in the second line holds by equations [\[eq:GHk-ao-perp\]](#eq:GHk-ao-perp){reference-type="eqref" reference="eq:GHk-ao-perp"} and
[\[eq:UVk-ao-perp\]](#eq:UVk-ao-perp){reference-type="eqref" reference="eq:UVk-ao-perp"}; the approximate equality in the third line holds by , the fixed point relation [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}; and the remaining equalities in the third line use equation [\[eq:K-pseudo-inverse\]](#eq:K-pseudo-inverse){reference-type="eqref" reference="eq:K-pseudo-inverse"}, the definition of ${\mathbf I}_{\boldsymbol{K}}$ from , and the fixed point equations [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}. We have used bounds on the fixed point parameters throughout. By a completely analogous argument we can show that $\langle \boldsymbol{h}(\boldsymbol{v}_k^{\mathsf{ao}}) , \boldsymbol{u}_k^{\mathsf{ao}} \rangle_{L_2} \ensuremath{\stackrel{\bullet}{=}}\langle \boldsymbol{h}_k^{\mathsf{se}} , \boldsymbol{u}_k^{\mathsf{se}} \rangle_{L_2}$. Combining these facts with  and the fact $\Omega_k(\boldsymbol{v}_k^{\mathsf{ao}}) \ensuremath{\stackrel{\bullet}{=}}\mathbb{E}[\Omega_k(\boldsymbol{v}_k^{\mathsf{se}})]$ by   gives . ◻ **Lemma 19** (Approximate stationarity of $\mathsf{AuxObj}_k$ at $(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}})$). *For $k = 5,6$, under Hypothesis (k-1) we have $$\begin{gathered} \frac{\textup{d}\,}{\textup{d}v_0}\mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) \ensuremath{\stackrel{\bullet}{=}}0, \qquad \frac{\textup{d}\,}{\textup{d} \boldsymbol{v}}\mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) \ensuremath{\stackrel{\bullet}{=}} {\boldsymbol 0}. 
\end{gathered}$$ Moreover, there exists a random vector $\boldsymbol{\delta}_u$ such that $$\boldsymbol{\delta}_u \in \partial_{\boldsymbol{u}} \Big( -\mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) \Big) \text{ almost surely, and } \frac{1}{\sqrt{n}}\boldsymbol{\delta}_u \ensuremath{\stackrel{\bullet}{=}} {\boldsymbol 0}.$$* The proof of  relies on an approximation for the derivative of each term in $\mathsf{AuxObj}_k$. The next two lemmas give these term-by-term approximations. We first state these two lemmas, then show that they imply , and then prove the two lemmas. **Lemma 20** (Derivatives of Gordon terms). *For $k =5,6$, under Hypothesis (k-1) we have $$\begin{aligned} \frac{1}{\sqrt{n}}\frac{\textup{d}}{\textup{d}\boldsymbol{u}}\langle \boldsymbol{g}(\boldsymbol{u}_k^{\mathsf{ao}}), \boldsymbol{v}_k^{\mathsf{ao}} \rangle &\ensuremath{\stackrel{\bullet}{=}}\frac{1}{\sqrt{n}} \sum_{\ell} \zeta_{k\ell}^u \boldsymbol{u}_\ell^{\mathsf{ao}}, \;\; & \;\; \frac{\textup{d}}{\textup{d} \boldsymbol{v}}\langle \boldsymbol{g}(\boldsymbol{u}_k^{\mathsf{ao}}), \boldsymbol{v}_k^{\mathsf{ao}} \rangle &\ensuremath{\stackrel{\bullet}{=}} \boldsymbol{g}_k^{\mathsf{ao}}, \\ \frac{1}{\sqrt{n}}\frac{\textup{d}}{\textup{d}\boldsymbol{u}}\langle \boldsymbol{h}(\boldsymbol{v}_k^{\mathsf{ao}}), \boldsymbol{u}_k^{\mathsf{ao}} \rangle &\ensuremath{\stackrel{\bullet}{=}} \frac{1}{\sqrt{n}}\boldsymbol{h}_k^{\mathsf{ao}}, \;\; & \;\; \frac{\textup{d}}{\textup{d} \boldsymbol{v}}\langle \boldsymbol{h}(\boldsymbol{v}_k^{\mathsf{ao}}), \boldsymbol{u}_k^{\mathsf{ao}} \rangle &\ensuremath{\stackrel{\bullet}{=}} \sum_{\ell} \zeta_{k\ell}^v \boldsymbol{v}_\ell^{\mathsf{ao}}. \end{aligned}$$* **Lemma 21** (Derivatives of penalty terms). 
*For $k = 5,6$, under Hypothesis (k-1), we have the relations $$\begin{gathered} \frac{\textup{d}\,}{\textup{d}v_0} \phi_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) \ensuremath{\stackrel{\bullet}{=}}0, \qquad \frac{\textup{d}\,}{\textup{d}\boldsymbol{v}} \phi_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) \ensuremath{\stackrel{\bullet}{=}}\frac{\textup{d}\,}{\textup{d}\boldsymbol{v}} \phi_{k,v}(\boldsymbol{v}_k^{\mathsf{ao}};\boldsymbol{G}_{k-1}^{\mathsf{ao}}), \end{gathered}$$ provided that $\phi_{k,v}$ is defined using solutions to the fixed point equations [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}.*

*Moreover, there exists a random vector $\boldsymbol{\delta}\in {\mathbb R}^n$ such that $\boldsymbol{\delta}/\sqrt{n} \ensuremath{\stackrel{\bullet}{=}}{\boldsymbol 0}$ and almost surely the following occurs: for all $\boldsymbol{\delta}' \in \partial_{\boldsymbol{u}} \phi_{k,u}(\boldsymbol{u}_k^{\mathsf{ao}};\boldsymbol{H}_{k-1}^{\mathsf{ao}};\boldsymbol{\varepsilon}_1,\boldsymbol{\varepsilon}_1',\boldsymbol{\varepsilon}_2)$, also $\boldsymbol{\delta}' + \boldsymbol{\delta}\in \partial_{\boldsymbol{u}} \big( -\phi_k (\boldsymbol{u}_k^{\mathsf{ao}}; \nu_{k,0}, \boldsymbol{v}_k^{\mathsf{ao}}) \big)$. (This holds provided $\phi_{k,v}$ is defined using the parameters that solve the fixed point equations [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}).*

**Remark 5**.
*The guarantee for the derivative with respect to $\boldsymbol{u}$ is more complicated because, when $y_{1,i} = 0$, $\ell_6^*$ is the convex indicator of $u_{6,i} = 0$, which is not differentiable.* *Proof of .* By  and the definition of $\mathsf{AuxObj}_k$ (see [\[eq:def-AuxObj\]](#eq:def-AuxObj){reference-type="ref" reference="eq:def-AuxObj"}), we have $$\begin{aligned} \frac{\textup{d}\,}{\textup{d}\boldsymbol{v}}\mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) \ensuremath{\stackrel{\bullet}{=}}-\boldsymbol{g}_k^{\mathsf{ao}} + \sum_{\ell=1}^k \zeta_{k\ell}^v \boldsymbol{v}_\ell^{\mathsf{ao}} + \frac{\textup{d}\,}{\textup{d}\boldsymbol{v}} \phi_{k,v}(\boldsymbol{v}_k^{\mathsf{ao}};\boldsymbol{G}_{k-1}^{\mathsf{ao}}) = {\boldsymbol 0},\end{aligned}$$ where the final equality holds by the KKT conditions for the problem [\[eq:AO-separated-opt\]](#eq:AO-separated-opt){reference-type="eqref" reference="eq:AO-separated-opt"}. Because $-\langle \boldsymbol{g}(\boldsymbol{u}),\boldsymbol{v}\rangle$ and $\langle \boldsymbol{h}(\boldsymbol{v}),\boldsymbol{u}\rangle$ have no dependence on $v_0$, and the definition of $\mathsf{AuxObj}_k$ imply that $$\begin{aligned} \frac{\textup{d}\,}{\textup{d}v_0}\mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) = \frac{\textup{d}\,}{\textup{d}v_0} \phi_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) \ensuremath{\stackrel{\bullet}{=}}0.\end{aligned}$$ By the KKT conditions for the problem [\[eq:AO-separated-opt\]](#eq:AO-separated-opt){reference-type="eqref" reference="eq:AO-separated-opt"}, we have $$\begin{aligned} \boldsymbol{h}_k^{\mathsf{ao}} - \sum_{\ell=1}^k \zeta_{k\ell}^u \boldsymbol{u}_\ell^{\mathsf{ao}} \in \partial_{\boldsymbol{u}}\phi_{k,u}(\boldsymbol{u}_k^{\mathsf{ao}};\boldsymbol{H}_{k-1}^{\mathsf{ao}};(\boldsymbol{\varepsilon}_j)_j).\end{aligned}$$ Taking $\boldsymbol{\delta}$ as in
Lemma 21 yields $$\begin{aligned} \boldsymbol{h}_k^{\mathsf{ao}} - \sum_{\ell=1}^k \zeta_{k\ell}^u \boldsymbol{u}_\ell^{\mathsf{ao}} + \boldsymbol{\delta} \in \partial_{\boldsymbol{u}} \Big( -\phi_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) \Big).\end{aligned}$$ Then, by  and the definition of $\mathsf{AuxObj}_k$, we have $${\boldsymbol 0}\ensuremath{\stackrel{\bullet}{=}}\frac{1}{\sqrt{n}} \Big( \frac{\textup{d}}{\textup{d}\boldsymbol{u}}\langle \boldsymbol{g}(\boldsymbol{u}_k^{\mathsf{ao}}), \boldsymbol{v}_k^{\mathsf{ao}} \rangle - \frac{\textup{d}}{\textup{d}\boldsymbol{u}}\langle \boldsymbol{h}(\boldsymbol{v}_k^{\mathsf{ao}}), \boldsymbol{u}_k^{\mathsf{ao}} \rangle + \boldsymbol{h}_k^{\mathsf{ao}} - \sum_{\ell=1}^k \zeta_{k\ell}^u \boldsymbol{u}_\ell^{\mathsf{ao}} + \boldsymbol{\delta}\Big) \in \partial_{\boldsymbol{u}} \Big( -\mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) \Big),$$ which completes the proof. ◻ *Proof of .* Because $\sqrt{n} L_{g,kk} \gtrsim 1$ and $L_{h,kk} \gtrsim 1$, implies that with exponentially high probability $\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{ao}}}^\perp \boldsymbol{u}_k^{\mathsf{ao}} \neq {\boldsymbol 0}$ and $\mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{ao}}}^\perp \boldsymbol{v}_k^{\mathsf{ao}} \neq {\boldsymbol 0}$. On this event, both $\boldsymbol{g}(\boldsymbol{u})$ and $\boldsymbol{h}(\boldsymbol{v})$, as defined in equation [\[eq:gordon-gh-explicit\]](#eq:gordon-gh-explicit){reference-type="eqref" reference="eq:gordon-gh-explicit"}, are differentiable at $\boldsymbol{u}_k^{\mathsf{ao}}$ and $\boldsymbol{v}_k^{\mathsf{ao}}$.
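Before carrying out the computation, the structure of the Gordon field can be sanity-checked numerically. The following minimal sketch (with hypothetical dimensions and Gaussian draws, not tied to the paper's data model or fixed point parameters) verifies by central finite differences that $\boldsymbol{u}\mapsto \langle \boldsymbol{g}(\boldsymbol{u}), \boldsymbol{v}\rangle$, for a field of the form $\boldsymbol{g}(\boldsymbol{u}) = \sum_\ell \langle \boldsymbol{u}_\ell^\perp, \boldsymbol{u}\rangle \boldsymbol{g}_\ell^\perp + \|\mathsf{P}^\perp \boldsymbol{u}\| \boldsymbol{\xi}_g$, is differentiable at a generic point, with the closed-form gradient used in the computation that follows.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 40, 30, 3  # hypothetical dimensions, not from the paper

# Orthonormal columns u_l^perp in R^n, companion directions g_l^perp and
# xi_g in R^p, and a fixed test vector v in R^p.
U_perp, _ = np.linalg.qr(rng.standard_normal((n, k - 1)))
G_perp = rng.standard_normal((p, k - 1))
xi_g = rng.standard_normal(p)
v = rng.standard_normal(p)

def inner_g(u):
    # <g(u), v> with g(u) = sum_l <u_l^perp, u> g_l^perp + ||P^perp u|| xi_g
    p_perp_u = u - U_perp @ (U_perp.T @ u)  # project out span(U_perp)
    return (U_perp.T @ u) @ (G_perp.T @ v) + np.linalg.norm(p_perp_u) * (xi_g @ v)

def grad_inner_g(u):
    # Closed form: sum_l u_l^perp <g_l^perp, v> + (P^perp u / ||P^perp u||) <xi_g, v>
    p_perp_u = u - U_perp @ (U_perp.T @ u)
    return U_perp @ (G_perp.T @ v) + p_perp_u / np.linalg.norm(p_perp_u) * (xi_g @ v)

u0 = rng.standard_normal(n)  # generic point, so P^perp u0 is nonzero
eps = 1e-6
fd = np.array([(inner_g(u0 + eps * e) - inner_g(u0 - eps * e)) / (2 * eps)
               for e in np.eye(n)])
assert np.allclose(fd, grad_inner_g(u0), atol=1e-5)
```

On the exceptional set $\mathsf{P}^\perp \boldsymbol{u}= {\boldsymbol 0}$ the norm term is not differentiable, which is exactly why the proof first rules this case out with exponentially high probability.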
We compute $$\begin{aligned} \frac{1}{\sqrt{n}}\frac{\textup{d}}{\textup{d}\boldsymbol{u}}\langle \boldsymbol{g}(\boldsymbol{u}_k^{\mathsf{ao}}), \boldsymbol{v}_k^{\mathsf{ao}} \rangle &= \frac{1}{\sqrt{n}}\sum_{\ell=1}^{k-1} \boldsymbol{u}_\ell^{\mathsf{ao},\perp} \langle \boldsymbol{g}_\ell^{\mathsf{ao},\perp} , \boldsymbol{v}_k^{\mathsf{ao}} \rangle + \frac{1}{\sqrt{n}}\frac{\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{ao}}}^\perp \boldsymbol{u}_k^{\mathsf{ao}}}{\|\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{ao}}}^\perp \boldsymbol{u}_k^{\mathsf{ao}}\|} \langle\boldsymbol{g}_k^{\mathsf{ao},\perp},\boldsymbol{v}_k^{\mathsf{ao}}\rangle \\ &\underset{\emph{(i)}}\ensuremath{\stackrel{\bullet}{=}} \frac{1}{\sqrt{n}}\sum_{\ell=1}^k \boldsymbol{u}_\ell^{\mathsf{ao},\perp} \langle \boldsymbol{g}_\ell^{\mathsf{ao},\perp} , \boldsymbol{v}_k^{\mathsf{ao}} \rangle \underset{\emph{(ii)}}= \frac{1}{\sqrt{n}}\boldsymbol{U}_k^{\mathsf{ao}} \boldsymbol{K}_{g,k}^{\ddagger} (\boldsymbol{G}_k^{\mathsf{ao}})^\top \boldsymbol{v}_k^{\mathsf{ao}} \\ &\underset{\emph{(iii)}}\ensuremath{\stackrel{\bullet}{=}} \frac{1}{\sqrt{n}}\boldsymbol{U}_k^{\mathsf{ao}} \boldsymbol{K}_{g,k}^{\ddagger} [\boldsymbol{K}_{g,k}\boldsymbol{Z}_{u,k}^\top]_{\,\cdot\,,k} \underset{\emph{(iv)}}= \frac{1}{\sqrt{n}}[\boldsymbol{U}_k^{\mathsf{ao}} {\mathbf I}_{\boldsymbol{K}_{g,k}}^\top \boldsymbol{Z}_{u,k}^\top]_{\,\cdot\,,k} \underset{\emph{(v)}}= \frac{1}{\sqrt{n}}[\boldsymbol{U}_k^{\mathsf{ao}}\boldsymbol{Z}_{u,k}^\top]_{\,\cdot\,,k}. \end{aligned}$$ Approximate equality *(i)* uses that $\langle \boldsymbol{g}_k^{\mathsf{ao},\perp},\boldsymbol{v}_k^{\mathsf{ao}}\rangle/\sqrt{n} \lessdot C$ by Cauchy--Schwarz and that $\|\boldsymbol{g}_\ell^{\mathsf{ao},\perp}\|/\sqrt{n} \lessdot C$ and $\| \boldsymbol{v}_k^{\mathsf{ao}} \| \lessdot C$ by .
Equality *(ii)* uses equations [\[eq:GHk-ao-perp\]](#eq:GHk-ao-perp){reference-type="eqref" reference="eq:GHk-ao-perp"} and [\[eq:UVk-ao-perp\]](#eq:UVk-ao-perp){reference-type="eqref" reference="eq:UVk-ao-perp"}. Approximate equality *(iii)* uses  and the fact that $\sqrt{n}\| \boldsymbol{H}_k^{\mathsf{ao}} \|_{{\rm op}} \lessdot C$ and $\|\boldsymbol{K}_{g,k}^\ddagger\|_{{\rm op}}/n \lessdot C$ by . Equality *(iv)* uses . Equality *(v)* uses the definition of ${\mathbf I}_{\boldsymbol{K}_{g,k}}^\top$ and the innovation compatibility of $\boldsymbol{Z}_{u,k}$ with $\boldsymbol{K}_{g,k}$ (the columns of $\boldsymbol{Z}_{u,k}^\top$ lie in the span of the standard basis vectors corresponding to innovative indices). Bounds on the fixed point parameters are used throughout. This gives the first approximate equality in the lemma. Again using the expression for $\boldsymbol{g}$ and $\boldsymbol{h}$ in equation [\[eq:gordon-gh-explicit\]](#eq:gordon-gh-explicit){reference-type="eqref" reference="eq:gordon-gh-explicit"}, we compute $\frac{\textup{d}}{\textup{d} \boldsymbol{v}}\langle \boldsymbol{g}(\boldsymbol{u}_k^{\mathsf{ao}}), \boldsymbol{v}_k^{\mathsf{ao}} \rangle = \sum_{\ell=1}^{k-1} \boldsymbol{g}_\ell^{\mathsf{ao},\perp} \langle \boldsymbol{u}_\ell^{\mathsf{ao},\perp} , \boldsymbol{u}_k^{\mathsf{ao}} \rangle + \| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{ao}}}^\perp \boldsymbol{u}_k^{\mathsf{ao}} \| \boldsymbol{\xi}_g.$ We have $\sqrt{n}\langle \boldsymbol{u}_\ell^{\mathsf{ao},\perp} , \boldsymbol{u}_k^{\mathsf{ao}} \rangle = \sqrt{n}\big[\boldsymbol{L}_{g,k}^\ddagger\boldsymbol{U}_k^{\mathsf{ao}\top}\boldsymbol{u}_k^{\mathsf{ao}}\big]_\ell \ensuremath{\stackrel{\bullet}{=}}\sqrt{n}\big[\boldsymbol{L}_{g,k}^\ddagger\boldsymbol{K}_{g,k}\big]_{\ell k} = \sqrt{n}L_{g,k\ell},$ where the first equality holds by equation [\[eq:UVk-ao-perp\]](#eq:UVk-ao-perp){reference-type="eqref" reference="eq:UVk-ao-perp"}; the approximate equality holds by , and the second equality
holds by  (in particular, that $\boldsymbol{L}_{g,k}^\ddagger \boldsymbol{L}_{g,k} \boldsymbol{L}_{g,k}^\top = {\mathbf I}_{\boldsymbol{K}_{g,k}}^\perp \boldsymbol{L}_{g,k}^\top = ({\mathbf I}_{\boldsymbol{K}_{g,k}}^\perp)^\top\boldsymbol{L}_{g,k}^\top = \boldsymbol{L}_{g,k}^\top$). By , $\sqrt{n} \| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{ao}}}^\perp \boldsymbol{u}_k^{\mathsf{ao}} \| = \mathsf{L}(n \langle\!\langle \boldsymbol{U}_k^{\mathsf{ao}} \rangle\!\rangle)_{kk} \ensuremath{\stackrel{\bullet}{=}}\sqrt{n} L_{g,kk}.$ Because $\|\boldsymbol{g}_\ell^{\mathsf{ao},\perp}\|/\sqrt{n} \lessdot C$ and $\|\boldsymbol{\xi}_g\|/\sqrt{n} \lessdot C$ by , these computations imply that $\frac{\textup{d}}{\textup{d}\boldsymbol{v}}\langle \boldsymbol{g}(\boldsymbol{u}_k^{\mathsf{ao}}), \boldsymbol{v}_k^{\mathsf{ao}} \rangle \ensuremath{\stackrel{\bullet}{=}}\sum_{\ell=1}^k L_{g,k\ell}\boldsymbol{g}_\ell^{\mathsf{ao},\perp} = \boldsymbol{g}_k^{\mathsf{ao}}$, where the equality uses equation [\[eq:GHk-ao-perp\]](#eq:GHk-ao-perp){reference-type="eqref" reference="eq:GHk-ao-perp"}. This gives the second equation in the first line of the lemma. The second line of the lemma follows similarly. For completeness, we write out the computations, but do not explain the justification for each line, as they are analogous to the justifications above. 
We have $$\begin{aligned} \frac{\textup{d}}{\textup{d}\boldsymbol{v}}\langle \boldsymbol{h}(\boldsymbol{v}_k^{\mathsf{ao}}), \boldsymbol{u}_k^{\mathsf{ao}} \rangle &\ensuremath{\stackrel{\bullet}{=}} \sum_{\ell=1}^k\boldsymbol{v}_\ell^{\mathsf{ao},\perp}\langle\boldsymbol{h}_\ell^{\mathsf{ao},\perp},\boldsymbol{u}_k^{\mathsf{ao}}\rangle = \boldsymbol{V}_k^{\mathsf{ao}}\boldsymbol{K}_{h,k}^\ddagger\boldsymbol{H}_k^{\mathsf{ao}\top}\boldsymbol{u}_k^{\mathsf{ao}} \ensuremath{\stackrel{\bullet}{=}} \boldsymbol{V}_k^{\mathsf{ao}}\boldsymbol{K}_{h,k}^\ddagger[\boldsymbol{K}_{h,k}\boldsymbol{Z}_v^\top]_{\,\cdot\,,k} = [\boldsymbol{V}_k^{\mathsf{ao}}\boldsymbol{Z}_v^\top]_{\,\cdot\,,k}, \end{aligned}$$ which gives the second equation in the second line of the lemma. Further, we have $$\begin{aligned} \frac{1}{\sqrt{n}}\frac{\textup{d}}{\textup{d}\boldsymbol{u}}\langle \boldsymbol{h}(\boldsymbol{v}_k^{\mathsf{ao}}), \boldsymbol{u}_k^{\mathsf{ao}} \rangle = \frac{1}{\sqrt{n}}\Big(\sum_{\ell=1}^{k-1}\boldsymbol{h}_\ell^{\mathsf{ao},\perp}\langle\boldsymbol{v}_\ell^{\mathsf{ao},\perp},\boldsymbol{v}_k^{\mathsf{ao}}\rangle + \|\mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{ao}}}^\perp \boldsymbol{v}_k^{\mathsf{ao}}\|\boldsymbol{\xi}_h\Big). \end{aligned}$$ Because $\langle \boldsymbol{v}_\ell^{\mathsf{ao},\perp},\boldsymbol{v}_k^{\mathsf{ao}}\rangle \ensuremath{\stackrel{\bullet}{=}}L_{h,k\ell}$ and $\|\mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{ao}}}^\perp \boldsymbol{v}_k^{\mathsf{ao}} \| \ensuremath{\stackrel{\bullet}{=}} L_{h,kk}$, we get the first equation in the second line of the lemma.
◻

*Proof of .* By explicit differentiation of equation [\[eq:objectives\]](#eq:objectives){reference-type="eqref" reference="eq:objectives"} for $k = 5,6$, $\frac{\textup{d}\,}{\textup{d}v_0} \phi_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) = \langle \boldsymbol{u}_k^{\mathsf{ao}} , \boldsymbol{1}\rangle \ensuremath{\stackrel{\bullet}{=}}\langle \boldsymbol{u}_k^{\mathsf{se}} , \boldsymbol{1}\rangle_{L_2} = 0,$ where the approximate equality holds by  and the final equality holds by the fixed point equations [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}. We conclude the first approximate equality in the first display of the lemma. By Assumption A1, the function $\Omega_k$ is differentiable, whence by explicit differentiation of equation [\[eq:objectives\]](#eq:objectives){reference-type="eqref" reference="eq:objectives"} for $k = 5,6$, $\frac{\textup{d}}{\textup{d}\boldsymbol{v}} \phi_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) = \langle \boldsymbol{u}_k^{\mathsf{ao}} , \boldsymbol{1}\rangle \boldsymbol{\mu}_{\mathsf{x}} + \nabla \Omega_k(\boldsymbol{v}_k^{\mathsf{ao}}) \ensuremath{\stackrel{\bullet}{=}}\nabla \Omega_k(\boldsymbol{v}_k^{\mathsf{ao}}),$ where the approximate equality uses that $\langle \boldsymbol{u}_k^{\mathsf{ao}} , \boldsymbol{1}\rangle \ensuremath{\stackrel{\bullet}{=}}0$ and $\| \boldsymbol{\mu}_{\mathsf{x}} \| \lesssim C$. By the fixed point equations, $\nu_{k,\mathsf{u}} = 0$, whence $\frac{\textup{d}\,}{\textup{d}\boldsymbol{v}} \phi_{k,v}(\boldsymbol{v}_k^{\mathsf{ao}};\boldsymbol{G}_{k-1}^{\mathsf{ao}}) = \nabla \Omega_k(\boldsymbol{v}_k^{\mathsf{ao}})$. We conclude the second approximate equality in the first display of the lemma.
Recall from equations [\[eq:model\]](#eq:model){reference-type="eqref" reference="eq:model"} and [\[eq:prop-functional-form\]](#eq:prop-functional-form){reference-type="eqref" reference="eq:prop-functional-form"} that $y_{1,i} = \mathbb{I}\{ \varepsilon_{1,i}' = 1\} + \mathbb{I}\{\varepsilon_{1,i}' = \circ\} \mathbb{I}\{\varepsilon_{1,i} \leq \theta_{1,0} + (\boldsymbol{X}\boldsymbol{\theta}_1)_i \}$, and $\boldsymbol{y}_2 = \theta_{2,0} \boldsymbol{1} + \boldsymbol{X}\boldsymbol{\theta}_2 + \boldsymbol{\varepsilon}_2$. Recalling that $\boldsymbol{X}= \boldsymbol{A}+ \boldsymbol{1} \boldsymbol{\mu}_{\mathsf{x}}^\top$, $\boldsymbol{w}= w(\theta_{1,0}\boldsymbol{1}+ \boldsymbol{X}\boldsymbol{\theta}_1)$ (equation [\[eq:outcome-fit\]](#eq:outcome-fit){reference-type="eqref" reference="eq:outcome-fit"}), and, by equation [\[eq:gh-1to4\]](#eq:gh-1to4){reference-type="eqref" reference="eq:gh-1to4"}, that $\boldsymbol{h}_1^{\mathsf{ao}} = \boldsymbol{h}_1^{\mathsf{po}} = \boldsymbol{A} \boldsymbol{\theta}_1$ and $\boldsymbol{h}_2^{\mathsf{ao}} = \boldsymbol{h}_2^{\mathsf{po}} = \boldsymbol{A}\boldsymbol{\theta}_2$, we see that $$\begin{gathered} \boldsymbol{w}= w((\theta_{1,0} + \langle \boldsymbol{\mu}_{\mathsf{x}},\boldsymbol{\theta}_1\rangle)\boldsymbol{1}+ \boldsymbol{h}_1^{\mathsf{ao}}), \\ y_{1,i} = \mathbb{I}\{ \varepsilon_{1,i}' = 1\} + \mathbb{I}\{\varepsilon_{1,i}' = \circ\} \mathbb{I}\{\varepsilon_{1,i} \leq (\theta_{1,0} + \langle \boldsymbol{\mu}_{\mathsf{x}},\boldsymbol{\theta}_1\rangle) + h_{1,i}^{\mathsf{ao}} \}, \quad \boldsymbol{y}_2 = (\theta_{2,0} + \langle \boldsymbol{\mu}_{\mathsf{x}},\boldsymbol{\theta}_2\rangle)\boldsymbol{1}+ \boldsymbol{h}_2^{\mathsf{ao}} + \boldsymbol{\varepsilon}_2.
\end{gathered}$$ By explicit computation using equation [\[eq:objectives\]](#eq:objectives){reference-type="eqref" reference="eq:objectives"}, we see that $\boldsymbol{\delta}\in \partial_{\boldsymbol{u}}\big(-\phi_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}})\big)$ if and only if $$(\nu_{k,0} + \langle \boldsymbol{\mu}_{\mathsf{x}} , \boldsymbol{v}_k^{\mathsf{ao}}\rangle )\boldsymbol{1}- \boldsymbol{\delta} \in \partial_{\boldsymbol{u}} \ell_k^*\big(n\boldsymbol{u}_k^{\mathsf{ao}}; \boldsymbol{w},\boldsymbol{y}_1,\boldsymbol{y}_2 \big).$$ By comparison of the second-to-last display with equations [\[eq:SE-penalties\]](#eq:SE-penalties){reference-type="eqref" reference="eq:SE-penalties"} and [\[eq:yw-func-of-Heps\]](#eq:yw-func-of-Heps){reference-type="eqref" reference="eq:yw-func-of-Heps"}, $\boldsymbol{\delta}' \in \partial_{\boldsymbol{u}}\phi_{k,u}(\boldsymbol{u}_k^{\mathsf{ao}};\boldsymbol{H}_{k-1}^{\mathsf{ao}},\boldsymbol{\varepsilon}_1,\boldsymbol{\varepsilon}_2)$ if and only if $$(\nu_{k,0} + \nu_{k,\mathsf{x}} )\boldsymbol{1}- \boldsymbol{\delta}' \in \partial_{\boldsymbol{u}} \ell_k^*\big(n\boldsymbol{u}_k^{\mathsf{ao}}; \boldsymbol{w},\boldsymbol{y}_1,\boldsymbol{y}_2 \big).$$ Because the sub-differentials on the right-hand sides of the previous two displays are the same, we see that we can take $\boldsymbol{\delta}= (\langle \boldsymbol{\mu}_{\mathsf{x}} , \boldsymbol{v}_k^{\mathsf{ao}}\rangle - \nu_{k,\mathsf{x}} )\boldsymbol{1}$. By  and because $\langle \boldsymbol{\mu}_{\mathsf{x}},\boldsymbol{v}_k^{\mathsf{se}}\rangle_{L_2} = \nu_{k,\mathsf{x}}$ by the fixed point equations [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}, we get that $\langle \boldsymbol{\mu}_{\mathsf{x}} , \boldsymbol{v}_k^{\mathsf{ao}}\rangle - \nu_{k,\mathsf{x}} \ensuremath{\stackrel{\bullet}{=}}0$.
Because $\| \boldsymbol{1}\| / \sqrt{n} = 1$, we conclude that $\boldsymbol{\delta}/ \sqrt{n} \ensuremath{\stackrel{\bullet}{=}} {\boldsymbol 0}$, as desired. ◻

**Lemma 22** (Curvature of auxiliary objective). *For $k = 5,6$, under Hypothesis (k-1), with exponentially high probability $\boldsymbol{u}\mapsto \mathsf{AuxObj}_k(\boldsymbol{u};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}})$ is $cn$-strongly concave and $\boldsymbol{v}\mapsto \mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v})$ is $c$-strongly convex.*

*Proof of .* By Assumption A1, $\ell_{\mathsf{a}}$ is $C$-strongly smooth, whence by Fenchel--Legendre duality, $\ell_{\mathsf{a}}^*$ is $1/C$-strongly convex. By the definition of $\phi_5$ and $\ell_5^*$ in equations [\[eq:objectives\]](#eq:objectives){reference-type="eqref" reference="eq:objectives"} and [\[eq:ellk\*-def\]](#eq:ellk*-def){reference-type="eqref" reference="eq:ellk*-def"}, $\phi_5(\boldsymbol{u};v_0,\boldsymbol{v})$ is $cn$-strongly concave in $\boldsymbol{u}$. Similarly, $\phi_6(\boldsymbol{u};v_0,\boldsymbol{v})$ is $cn$-strongly concave in $\boldsymbol{u}$ because $\ell_6^*$ is strongly convex by assumption, using that $w_i$ is bounded below by $c > 0$ due to Assumption A1. By the definition of $\boldsymbol{g}(\boldsymbol{u})$ and $\boldsymbol{h}(\boldsymbol{v})$ in equation [\[eq:gordon-gh-explicit\]](#eq:gordon-gh-explicit){reference-type="eqref" reference="eq:gordon-gh-explicit"}, we see that $-\langle\boldsymbol{g}(\boldsymbol{u}),\boldsymbol{v}_k^{\mathsf{ao}}\rangle + \langle \boldsymbol{h}(\boldsymbol{v}_k^{\mathsf{ao}}),\boldsymbol{u}\rangle$ is concave in $\boldsymbol{u}$ provided $\langle\boldsymbol{\xi}_g,\boldsymbol{v}_k^{\mathsf{ao}}\rangle \geq 0$. This occurs with exponentially high probability by  (recalling that $\boldsymbol{\xi}_g = \boldsymbol{g}_k^{\mathsf{ao},\perp}$, see equation [\[eq:GHk-ao\]](#eq:GHk-ao){reference-type="eqref" reference="eq:GHk-ao"}).
Thus with exponentially high probability, $\boldsymbol{u}\mapsto \mathsf{AuxObj}_k(\boldsymbol{u};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}})$ is $cn$-strongly concave. Likewise, by Assumption A1, $\Omega_k$ is $c$-strongly convex, whence by the definition of $\phi_k$ in equation [\[eq:objectives\]](#eq:objectives){reference-type="eqref" reference="eq:objectives"}, $\phi_k$ is $c$-strongly convex in $\boldsymbol{v}$. By the definition of $\boldsymbol{g}(\boldsymbol{u})$ and $\boldsymbol{h}(\boldsymbol{v})$ in equation [\[eq:gordon-gh-explicit\]](#eq:gordon-gh-explicit){reference-type="eqref" reference="eq:gordon-gh-explicit"}, we see that $-\langle\boldsymbol{g}(\boldsymbol{u}_k^{\mathsf{ao}}),\boldsymbol{v}\rangle + \langle \boldsymbol{h}(\boldsymbol{v}),\boldsymbol{u}_k^{\mathsf{ao}}\rangle$ is convex in $\boldsymbol{v}$ provided $\langle\boldsymbol{\xi}_h,\boldsymbol{u}_k^{\mathsf{ao}}\rangle \geq 0$. This occurs with exponentially high probability by , recalling that $\boldsymbol{\xi}_h = \boldsymbol{h}_k^{\mathsf{ao},\perp}$, see equation [\[eq:GHk-ao\]](#eq:GHk-ao){reference-type="eqref" reference="eq:GHk-ao"}. Thus with exponentially high probability, $\boldsymbol{v}\mapsto \mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v})$ is $c$-strongly convex. ◻

## Local stability of Gordon's objective {#sec:local-stability}

Let $\phi_u : {\mathbb R}^{2k+1} \rightarrow {\mathbb R}$ and $\phi_v: ({\mathbb R}^p)^{2k} \rightarrow {\mathbb R}$ be any order-2 pseudo-Lipschitz functions.
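For the reader's convenience, we recall the notion of an order-2 pseudo-Lipschitz function in the normalization standard in the state-evolution literature; the paper's formal definition carries its own constants and may differ in inessential ways. A function $\phi$ is order-2 pseudo-Lipschitz if there exists a constant $L$ (here allowed to depend only on $\mathcal{P}_{\mathrm{model}}$) such that

```latex
% Order-2 pseudo-Lipschitz condition (standard normalization):
\[
  |\phi(\boldsymbol{x}) - \phi(\boldsymbol{y})|
    \;\leq\; L \bigl( 1 + \|\boldsymbol{x}\| + \|\boldsymbol{y}\| \bigr)
    \|\boldsymbol{x} - \boldsymbol{y}\|
  \qquad \text{for all } \boldsymbol{x}, \boldsymbol{y} \text{ in the domain of } \phi .
\]
```

Such functions grow at most quadratically, which is what makes the empirical averages appearing in the sets $E_u(\epsilon)$ and $E_v(\epsilon)$ amenable to concentration.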
Define the random sets (which depend on $\boldsymbol{U}_{k-1}^{\mathsf{ao}}$, $\boldsymbol{H}_{k-1}^{\mathsf{ao}}$, $\boldsymbol{\varepsilon}_2$, $\boldsymbol{V}_{k-1}^{\mathsf{ao}}$, and $\boldsymbol{G}_{k-1}^{\mathsf{ao}}$) $$\begin{gathered} E_u(\epsilon) \ensuremath{: =}\Big\{ \boldsymbol{u}\in {\mathbb R}^n : \Big| \frac{1}{n} \sum_{i=1}^n \phi_u\Big( (nu_{\ell,i}^{\mathsf{ao},\perp})_{\ell=1}^{k-1}, nu_i, (h_{\ell,i}^{\mathsf{ao},\perp})_{\ell=1}^{k-1}, \varepsilon_{2,i} \Big) - \mathbb{E}\Big[ \phi_u\Big( (nu_{\ell,i}^{\mathsf{se},\perp})_{\ell=1}^{k-1}, nu_i, (h_{\ell,i}^{\mathsf{se},\perp})_{\ell=1}^{k-1}, \varepsilon_{2,i}^{\mathsf{se}} \Big) \Big] \Big| < \epsilon \Big\}, \\ E_v(\epsilon) \ensuremath{: =}\Big\{ \boldsymbol{v} \in {\mathbb R}^p : \Big| \phi_v \Big(\boldsymbol{V}_{k-1}^{\mathsf{ao},\perp},\boldsymbol{v},\frac{1}{\sqrt{n}}\boldsymbol{G}_{k-1}^{\mathsf{ao},\perp}\Big) - \mathbb{E} \Big[\phi_v\Big(\boldsymbol{V}_{k-1}^{\mathsf{se},\perp},\boldsymbol{v},\frac{1}{\sqrt{n}}\boldsymbol{G}_{k-1}^{\mathsf{se},\perp}\Big)\Big] \Big| < \epsilon \Big\}. \end{gathered}$$ **Lemma 23** (Local stability). *Define $\ell_k$ as in .
For $k = 5,6$, under Hypothesis (k-1), we have for a sufficiently large constant $C$ depending only on $\mathcal{P}_{\mathrm{model}}$ that $$\label{eq:max-min-AO-conc} \max_{\|\boldsymbol{u}\| \leq C/\sqrt{n}} \; \min_{\substack{|v_0| \leq C \\ \|\boldsymbol{v}\| \leq C}} \; \mathsf{AuxObj}_k(\boldsymbol{u};v_0,\boldsymbol{v}) \ensuremath{\stackrel{\bullet}{=}}\ell_k,$$ and $$\label{eq:min-max-AO-conc} \min_{\substack{|v_0| \leq C \\ \|\boldsymbol{v}\| \leq C}} \max_{\|\boldsymbol{u}\| \leq C/\sqrt{n}} \; \mathsf{AuxObj}_k(\boldsymbol{u};v_0,\boldsymbol{v}) \ensuremath{\stackrel{\bullet}{=}}\min_{\|\boldsymbol{v}\| \leq C} \max_{\|\boldsymbol{u}\| \leq C/\sqrt{n}} \; \mathsf{AuxObj}_k(\boldsymbol{u};\nu_{k,0},\boldsymbol{v}) \ensuremath{\stackrel{\bullet}{=}}\ell_k.$$ Further, for sufficiently large $C$ and appropriately chosen $c,c' > 0$ depending only on $\mathcal{P}_{\mathrm{model}}$, and any $\epsilon < c'$, $$\label{eq:AO-local-stability} \max_{\substack{\boldsymbol{u}\in E_u^c(\epsilon)\\ \| \boldsymbol{u}\| \leq C/\sqrt{n}}}\; \min_{\substack{|v_0| \leq C \\ \|\boldsymbol{v}\| \leq C}} \; \mathsf{AuxObj}_k(\boldsymbol{u};v_0,\boldsymbol{v}) \lessdot \ell_k - c\epsilon^2, \qquad \min_{\substack{\boldsymbol{v}\in E_v^c(\epsilon) \\ |v_0| \leq C \\ \| \boldsymbol{v}\| \leq C }} \; \max_{\|\boldsymbol{u}\| \leq C/\sqrt{n}} \; \mathsf{AuxObj}_k(\boldsymbol{u};\nu_{k,0},\boldsymbol{v}) \gtrdot \ell_k + c\epsilon^2.$$* *Proof of .* First, let $\boldsymbol{\delta}_u$ be as in .
Then there exists $C > 0$ such that $$\begin{aligned} &\max_{\|\boldsymbol{u}\| \leq C/\sqrt{n}} \; \min_{\substack{|v_0| \leq C \\ \|\boldsymbol{v}\| \leq C}} \; \mathsf{AuxObj}_k(\boldsymbol{u};v_0,\boldsymbol{v}) \underset{\emph{(i)}}\lessdot \max_{\|\boldsymbol{u}\| \leq C/\sqrt{n}} \; \mathsf{AuxObj}_k(\boldsymbol{u};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) \\ &\qquad \underset{\emph{(ii)}}\lessdot \max_{\|\boldsymbol{u}\| \leq C/\sqrt{n}} \left\{ \mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) + \boldsymbol{\delta}_u^\top(\boldsymbol{u}- \boldsymbol{u}_k^{\mathsf{ao}}) \right\} \\ &\qquad \leq \mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) + \frac{C}{\sqrt{n}}\|\boldsymbol{\delta}_u\| \underset{\emph{(iii)}}\lessdot \ell_k. \end{aligned}$$ Inequality $\emph{(i)}$ holds because on the events $|\nu_{k,0}| \leq C$ and $\| \boldsymbol{v}_k^{\mathsf{ao}} \| < C$, which  shows have exponentially high probability, the inequality holds exactly (that is, with "$\lessdot$" replaced by "$\leq$"). Inequality *(ii)* holds because $\boldsymbol{\delta}_u \in \partial_{\boldsymbol{u}}\big(-\mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}})\big)$ and by , on the event $\langle \boldsymbol{\xi}_g , \boldsymbol{v}_k^{\mathsf{ao}} \rangle \geq 0$, $\mathsf{AuxObj}_k$ is concave in $\boldsymbol{u}$, and by , $\langle \boldsymbol{\xi}_g , \boldsymbol{v}_k^{\mathsf{ao}} \rangle \geq 0$ with exponentially high probability (recall $\boldsymbol{\xi}_g = \boldsymbol{g}_k^{\mathsf{ao},\perp}$). Inequality *(iii)* holds by  and because $\| \boldsymbol{\delta}_u \| / \sqrt{n} \ensuremath{\stackrel{\bullet}{=}}0$ by .
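The pattern behind step *(ii)* is worth isolating. It is the first-order bound for concave functions: if $f$ is concave and $\boldsymbol{\delta}$ is a convex subgradient of $-f$ at $\boldsymbol{u}_0$, then (with the sign of the linear term immaterial once Cauchy--Schwarz is applied in the next step)

```latex
% Concave first-order bound behind step (ii):
\[
  f(\boldsymbol{u})
    \;\leq\; f(\boldsymbol{u}_0)
      - \langle \boldsymbol{\delta}, \boldsymbol{u}- \boldsymbol{u}_0 \rangle
    \;\leq\; f(\boldsymbol{u}_0)
      + \|\boldsymbol{\delta}\| \, \|\boldsymbol{u}- \boldsymbol{u}_0\|
  \qquad \text{for all } \boldsymbol{u}.
\]
```

In particular, a maximum of $f$ over $\|\boldsymbol{u}\| \leq C/\sqrt{n}$, when $\|\boldsymbol{u}_0\| \leq C/\sqrt{n}$ as well, exceeds $f(\boldsymbol{u}_0)$ by at most a constant multiple of $\|\boldsymbol{\delta}\|/\sqrt{n}$, which is how the approximate stationarity of $\mathsf{AuxObj}_k$ enters.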
To establish the reverse inequality, let $\boldsymbol{\delta}_v = \frac{\textup{d}\,}{\textup{d}\boldsymbol{v}}\mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}})$ and $\delta_0 = \frac{\textup{d}\,}{\textup{d} v_0}\mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}})$. We follow a very similar argument to the one above: $$\begin{aligned} &\max_{\|\boldsymbol{u}\| \leq C/\sqrt{n}} \; \min_{\substack{|v_0| \leq C \\ \|\boldsymbol{v}\| \leq C}} \; \mathsf{AuxObj}_k(\boldsymbol{u};v_0,\boldsymbol{v}) \underset{\emph{(i)}}\gtrdot \min_{\substack{|v_0| \leq C \\ \|\boldsymbol{v}\| \leq C}} \; \mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};v_0,\boldsymbol{v}) \\ &\qquad \underset{\emph{(ii)}}\gtrdot \min_{\substack{|v_0| \leq C \\ \|\boldsymbol{v}\| \leq C}} \Big\{ \mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) + \boldsymbol{\delta}_v^\top(\boldsymbol{v}- \boldsymbol{v}_k^{\mathsf{ao}}) + \delta_0(v_0 - \nu_{k,0}) \Big\} \\ &\qquad\gtrdot \mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) - C\|\boldsymbol{\delta}_v\| - C|\delta_0| \underset{\emph{(iii)}}\gtrdot \ell_k. \end{aligned}$$ Inequality $\emph{(i)}$ holds because on the event $\|\boldsymbol{u}_k^{\mathsf{ao}}\| \leq C/\sqrt{n}$, which  and  show have exponentially high probability, the inequality holds exactly (that is, with "$\gtrdot$" replaced by "$\geq$"). Inequality *(ii)* holds because by , on the event $\langle \boldsymbol{\xi}_h , \boldsymbol{u}_k^{\mathsf{ao}} \rangle \geq 0$, $\mathsf{AuxObj}_k$ is convex in $(v_0,\boldsymbol{v})$, and by , $\langle \boldsymbol{\xi}_h , \boldsymbol{u}_k^{\mathsf{ao}} \rangle \geq 0$ with exponentially high probability (recall $\boldsymbol{\xi}_h = \boldsymbol{h}_k^{\mathsf{ao},\perp}$).
Inequality *(iii)* holds by  and because $\|\boldsymbol{\delta}_v\| \ensuremath{\stackrel{\bullet}{=}}0$ and $|\delta_0| \ensuremath{\stackrel{\bullet}{=}}0$ by . Equation [\[eq:min-max-AO-conc\]](#eq:min-max-AO-conc){reference-type="eqref" reference="eq:min-max-AO-conc"} follows by a completely analogous argument, which we present for completeness. There exists $C > 0$ such that $$\begin{aligned} &\min_{\|\boldsymbol{v}\| \leq C} \; \max_{\|\boldsymbol{u}\| \leq C/\sqrt{n}} \; \mathsf{AuxObj}_k(\boldsymbol{u};\nu_{k,0},\boldsymbol{v}) \geq \min_{\substack{|v_0| \leq C \\ \|\boldsymbol{v}\| \leq C}} \; \max_{\|\boldsymbol{u}\| \leq C/\sqrt{n}} \; \mathsf{AuxObj}_k(\boldsymbol{u};v_0,\boldsymbol{v}) \geq \min_{\substack{|v_0| \leq C \\ \|\boldsymbol{v}\| \leq C}} \; \mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};v_0,\boldsymbol{v}) \\ &\qquad\gtrdot \min_{\substack{|v_0| \leq C \\ \|\boldsymbol{v}\| \leq C}} \Big\{ \mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) + \boldsymbol{\delta}_v^\top (\boldsymbol{v}- \boldsymbol{v}_k^{\mathsf{ao}}) + \delta_0(v_0 - \nu_{k,0}) \Big\} \\ &\qquad\gtrdot \mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) - C\|\boldsymbol{\delta}_v\| - C|\delta_0| \gtrdot \ell_k, \end{aligned}$$ where the inequalities and approximate inequalities hold by similar arguments as before. 
Similarly, $$\begin{aligned} &\min_{\substack{|v_0| \leq C \\ \|\boldsymbol{v}\| \leq C}} \; \max_{\|\boldsymbol{u}\| \leq C/\sqrt{n}} \; \mathsf{AuxObj}_k(\boldsymbol{u};v_0,\boldsymbol{v}) \leq \min_{\|\boldsymbol{v}\| \leq C} \; \max_{\|\boldsymbol{u}\| \leq C/\sqrt{n}} \; \mathsf{AuxObj}_k(\boldsymbol{u};\nu_{k,0},\boldsymbol{v}) \leq \max_{\|\boldsymbol{u}\| \leq C/\sqrt{n}} \; \mathsf{AuxObj}_k(\boldsymbol{u};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) \\ &\qquad\lessdot \max_{\|\boldsymbol{u}\| \leq C/\sqrt{n}} \left\{ \mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) + \boldsymbol{\delta}_u^\top (\boldsymbol{u}- \boldsymbol{u}_k^{\mathsf{ao}}) \right\} \leq \mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) + \frac{C}{\sqrt{n}}\| \boldsymbol{\delta}_u \| \lessdot \ell_k, \end{aligned}$$ where the inequalities and approximate inequalities hold by similar arguments as before. Combining the previous two displays gives equation [\[eq:min-max-AO-conc\]](#eq:min-max-AO-conc){reference-type="eqref" reference="eq:min-max-AO-conc"}. Now we prove equation [\[eq:AO-local-stability\]](#eq:AO-local-stability){reference-type="eqref" reference="eq:AO-local-stability"}. By , for sufficiently small $c' > 0$ and any fixed $\epsilon < c'$, $\boldsymbol{u}_k^{\mathsf{ao}} \in E_u(\epsilon/2)$ with probability at least $1 - Ce^{-cn\epsilon^r}$. When $\| \boldsymbol{u}_k^{\mathsf{ao}} \| \leq C/\sqrt{n}$, which occurs with exponentially high probability, we have for every $\boldsymbol{u}\in E_u^c(\epsilon)$ that $\| \boldsymbol{u} - \boldsymbol{u}_k^{\mathsf{ao}} \| \geq \epsilon/(2\sqrt{n})$. 
Thus, $$\begin{aligned} \max_{\substack{\boldsymbol{u}\in E_u^c(\epsilon)\\ \| \boldsymbol{u}\| \leq C/\sqrt{n}}}\; \min_{\substack{|v_0| \leq C \\ \|\boldsymbol{v}\| \leq C}} \; \mathsf{AuxObj}_k&(\boldsymbol{u};v_0,\boldsymbol{v}) \leq \max_{\substack{\boldsymbol{u}\in E_u^c(\epsilon)\\ \|\boldsymbol{u}\|\leq C/\sqrt{n}}} \mathsf{AuxObj}_k(\boldsymbol{u};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) \\ &\underset{\emph{(i)}}\lessdot \max_{\substack{\boldsymbol{u}\in E_u^c(\epsilon)\\ \| \boldsymbol{u}\| \leq C/\sqrt{n}}} \Big\{ \mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) + \|\boldsymbol{\delta}_u\|\,\|\boldsymbol{u}- \boldsymbol{u}_k^{\mathsf{ao}}\| - \frac{cn}2 \| \boldsymbol{u}- \boldsymbol{u}_k^{\mathsf{ao}} \|^2 \Big\} \\ &\underset{\emph{(ii)}}\lessdot \max_{\epsilon/2 \leq x \leq C} \Big\{ \ell_k^* + \frac{c\epsilon}{8}\,x - \frac{c}2 x^2 \Big\} = \ell_k^* - \frac{3c\epsilon^2}{16}. \end{aligned}$$ Inequality $\emph{(i)}$ holds because on the event $\|\boldsymbol{u}_k^{\mathsf{ao}}\| \leq C/\sqrt{n}$, which  shows has exponentially high probability, the inequality holds exactly (that is, with "$\lessdot$" replaced by "$\leq$"), and because $\boldsymbol{u}\mapsto \mathsf{AuxObj}_k(\boldsymbol{u};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}})$ is $cn$-strongly concave with exponentially high probability by . Inequality *(ii)* holds because by  and the argument preceding the display, with exponentially high probability, $\| \boldsymbol{\delta}_u \| /\sqrt{n} \leq c\epsilon/8$ and $\| \boldsymbol{u}- \boldsymbol{u}_k^{\mathsf{ao}} \| \geq \epsilon/(2 \sqrt{n})$ for all $\boldsymbol{u}_k^{\mathsf{ao}} \in E_u(\epsilon/2)$. The first bound in equation [\[eq:AO-local-stability\]](#eq:AO-local-stability){reference-type="eqref" reference="eq:AO-local-stability"} follows. 
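The scalar maximization in inequality *(ii)* is the mechanism behind local stability: after substituting the gradient and distance bounds, one maximizes a linear-minus-quadratic envelope in $x = \sqrt{n}\,\|\boldsymbol{u}- \boldsymbol{u}_k^{\mathsf{ao}}\|$ whose vertex lies to the left of $\epsilon/2$, so the envelope is decreasing on $[\epsilon/2,C]$ and its maximum sits below zero by an amount of order $\epsilon^2$. A numerical sketch with illustrative constants (not the paper's):

```python
import numpy as np

c, eps, C = 2.0, 0.1, 10.0
delta = c * eps / 8          # small linear term from the gradient bound

# Envelope phi(x) = delta*x - (c/2)*x^2 over distances x in [eps/2, C].
xs = np.linspace(eps / 2, C, 100001)
phi = delta * xs - 0.5 * c * xs**2
# The vertex of the parabola is at x = eps/8 < eps/2, so phi is
# decreasing on [eps/2, C]; its maximum is at x = eps/2 and is at
# most -c*eps^2/16, i.e., below zero by order eps^2.
assert np.all(np.diff(phi) <= 0)
assert phi.max() <= -c * eps**2 / 16 + 1e-12
print("max of envelope:", phi.max())
```

In the proof this negative gap, added to $\ell_k^*$, is what separates the objective on $E_u^c(\epsilon)$ from the unrestricted saddle value.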
The second bound in equation [\[eq:AO-local-stability\]](#eq:AO-local-stability){reference-type="eqref" reference="eq:AO-local-stability"} holds by a similar argument. Indeed, by Hypothesis (k-1), for sufficiently small $c' > 0$ and any $\epsilon < c'$, $\boldsymbol{v}_k^{\mathsf{ao}} \in E_v(\epsilon/2)$ with exponentially high probability. On this event, for every $\boldsymbol{v}\in E_v^c(\epsilon)$ we have $\| \boldsymbol{v}- \boldsymbol{v}_k^{\mathsf{ao}} \| \geq \epsilon/2$. We thus have $$\begin{aligned} \min_{\substack{\boldsymbol{v}\in E_v^c(\epsilon)\\|v_0| \leq C \\ \| \boldsymbol{v}\| \leq C}}\; \max_{\|\boldsymbol{u}\| \leq C / \sqrt{n}}\; \; &\mathsf{AuxObj}_k(\boldsymbol{u};v_0,\boldsymbol{v}) \geq \min_{\substack{\boldsymbol{v}\in E_v^c(\epsilon)\\|v_0| \leq C \\ \| \boldsymbol{v}\| \leq C}}\; \mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};v_0,\boldsymbol{v}) \\ &\underset{\emph{(i)}}\gtrdot \min_{\substack{\boldsymbol{v}\in E_v^c(\epsilon)\\ |v_0| \leq C \\ \| \boldsymbol{v}\| \leq C }}\; \Big\{ \mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{ao}}) - \|\boldsymbol{\delta}_v\|\,\|\boldsymbol{v}- \boldsymbol{v}_k^{\mathsf{ao}}\| - |\delta_0|\,|v_0 - \nu_{0,k}| + \frac{c}2 \| \boldsymbol{v}- \boldsymbol{v}_k^{\mathsf{ao}} \|^2 \Big\} \\ &\underset{\emph{(ii)}}\gtrdot \max_{\epsilon/2 \leq x \leq C} \Big\{ \ell_k^* - \frac{c\epsilon}{8}\,x - \frac{c\epsilon^2}{16} + \frac{c}2 x^2 \Big\} = \ell_k^* + \frac{c\epsilon^2}{8}. \end{aligned}$$ Inequality $\emph{(i)}$ holds because on the event $\|\boldsymbol{v}_k^{\mathsf{ao}}\| \leq C$, which  shows has exponentially high probability, the inequality holds exactly (that is, with "$\gtrdot$" replaced by "$\geq$"), and because $\boldsymbol{v} \mapsto \mathsf{AuxObj}_k(\boldsymbol{u}_k^{\mathsf{ao}};\nu_{k,0},\boldsymbol{v})$ is $c$-strongly convex with exponentially high probability by . 
Inequality *(ii)* holds because by  and the argument preceding the display, with exponentially high probability, $\| \boldsymbol{\delta}_v \| \leq c\epsilon/8$, $|\delta_0| \leq c\epsilon^2/(16C)$, and $\| \boldsymbol{v}- \boldsymbol{v}_k^{\mathsf{ao}} \| \geq \epsilon/2$ for all $\boldsymbol{v}_k^{\mathsf{ao}} \in E_v(\epsilon/2)$. The second bound in equation [\[eq:AO-local-stability\]](#eq:AO-local-stability){reference-type="eqref" reference="eq:AO-local-stability"} follows. ◻ ## Proof of  {#sec:inductive-step-proof} Recall that $\mathsf{PriObj}_k(\boldsymbol{u};v_0,\boldsymbol{v}) \ensuremath{: =}\boldsymbol{u}^\top \boldsymbol{A}\boldsymbol{v}+ \phi_k(\boldsymbol{u};v_0,\boldsymbol{v})$. ### Crude bounds on primary optimization saddle point **Lemma 24**. *For $k = 5,6$, the primary objective has the following properties.* (a) *For any fixed $v_0,\boldsymbol{v}$, the function $\boldsymbol{u}\mapsto \mathsf{PriObj}_k(\boldsymbol{u};v_0,\boldsymbol{v})$ is $cn$-strongly concave in $\boldsymbol{u}$. Thus also $\boldsymbol{u}\mapsto \min_{(v_0,\boldsymbol{v}) \in S}\mathsf{PriObj}_k(\boldsymbol{u};v_0,\boldsymbol{v})$ is $cn$-strongly concave in $\boldsymbol{u}$ for any set $S$.* (b) *For any constant $M$ depending on $\mathcal{P}_{\mathrm{model}}$, with exponentially high probability the function $(v_0,\boldsymbol{v}) \mapsto \max_{\boldsymbol{u}\in {\mathbb R}^n}\mathsf{PriObj}_k(\boldsymbol{u};v_0,\boldsymbol{v})$ is $c$-strongly convex on $\max\{|v_0|,\| \boldsymbol{v}\|\} \leq M$, where $c$ depends only on $\mathcal{P}_{\mathrm{model}}$ and $M$.* (c) *With exponentially high probability, $\| \boldsymbol{u}_k^{\mathsf{po}} \| \leq C / \sqrt{n}$, $|v_{k,0}^{\mathsf{po}}| \leq C$, $\| \boldsymbol{v}_k^{\mathsf{po}} \| \leq C$. 
On this event, $$\mathsf{PriObj}_k(\boldsymbol{u}_k^{\mathsf{po}};v_{k,0}^{\mathsf{po}},\boldsymbol{v}_k^{\mathsf{po}}) = \min_{\substack{v_0 \in {\mathbb R}\\\boldsymbol{v}\in {\mathbb R}^p}} \; \max_{\boldsymbol{u}\in {\mathbb R}^n} \mathsf{PriObj}_k(\boldsymbol{u};v_0,\boldsymbol{v}) = \min_{\substack{|v_0| \leq C \\ \|\boldsymbol{v}\| \leq C}} \; \max_{\|\boldsymbol{u}\| \leq C/\sqrt{n}} \mathsf{PriObj}_k(\boldsymbol{u};v_0,\boldsymbol{v}),$$ and in both places the order of minimization and maximization may be exchanged.* (d) *For any $C > 0$ depending only on $\mathcal{P}_{\mathrm{model}}$, there exists $C' > 0$ depending only on $\mathcal{P}_{\mathrm{model}}$ and $C$ such that with exponentially high probability $$\begin{aligned} \text{for any $|v_0| \leq C$,}\quad &\min_{\substack{\boldsymbol{v} \in {\mathbb R}^p}} \; \max_{\boldsymbol{u}\in {\mathbb R}^n} \mathsf{PriObj}_k(\boldsymbol{u};v_0,\boldsymbol{v}) = \min_{\substack{\|\boldsymbol{v}\| \leq C'}} \; \max_{\|\boldsymbol{u}\| \leq C'/\sqrt{n}} \mathsf{PriObj}_k(\boldsymbol{u};v_0,\boldsymbol{v}). \end{aligned}$$* *Proof of .* We prove each item, one at a time. (a) We showed that $\phi_k$ is $cn$-strongly concave in $\boldsymbol{u}$ in the proof of . The function $\boldsymbol{u}\mapsto \boldsymbol{u}^\top \boldsymbol{A}\boldsymbol{v}$ is linear, whence the result follows. (b) By [@vershynin_2012 Corollary 5.35], there exists $M > 1$ such that $\| \boldsymbol{A}\|_{{\rm op}} / \sqrt{n} \leq M$ with exponentially high probability. Throughout the proof of item (b), the value of $M$ stays constant but $C',c,c' > 0$ may change at each appearance and depend on $M$ and $\mathcal{P}_{\mathrm{model}}$. We will show that for any $|v_0|,\|\boldsymbol{v}\| \leq C'$ and $\delta_0^2 + \| \boldsymbol{\delta}\|^2 = 1$, the function $t \mapsto \max_{\boldsymbol{u}\in {\mathbb R}^n}\mathsf{PriObj}_k(\boldsymbol{u};v_0 + t \delta_0,\boldsymbol{v}+ t\boldsymbol{\delta})$ is $c$-strongly convex for $t \in [0,1]$. 
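The operator-norm bound invoked in item (b) can be checked numerically: for an $n \times p$ matrix with iid standard Gaussian entries, $\|\boldsymbol{A}\|_{{\rm op}}$ concentrates near $\sqrt{n} + \sqrt{p}$, so $\|\boldsymbol{A}\|_{{\rm op}}/\sqrt{n}$ stays below any fixed $M > 1 + \sqrt{p/n}$ with exponentially high probability. The sketch below uses an iid Gaussian stand-in with illustrative dimensions; the paper's $\boldsymbol{A}$ may be centred or scaled differently.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, trials = 400, 200, 20

ratios = []
for _ in range(trials):
    A = rng.normal(size=(n, p))              # iid N(0,1) entries
    ratios.append(np.linalg.norm(A, ord=2) / np.sqrt(n))

# ||A||_op concentrates near sqrt(n) + sqrt(p), so the ratio
# ||A||_op / sqrt(n) is about 1 + sqrt(p/n); any fixed M above
# that level works with exponentially high probability.
bound = 1 + np.sqrt(p / n) + 0.5             # slack covers fluctuations
assert max(ratios) <= bound
print(f"max ratio over {trials} draws: {max(ratios):.3f} <= {bound:.3f}")
```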
Note that $\max_{\boldsymbol{u}\in {\mathbb R}^n}\mathsf{PriObj}_k(\boldsymbol{u};v_0,\boldsymbol{v})$ is exactly the objective given in equation [\[eq:propensity-fit\]](#eq:propensity-fit){reference-type="eqref" reference="eq:propensity-fit"} (for $k = 5$) and in equation [\[eq:outcome-fit\]](#eq:outcome-fit){reference-type="eqref" reference="eq:outcome-fit"} (for $k = 6$). With exponentially high probability, there is a set $S_1 \subset [n]$, $|S_1| \geq c'n$ such that for all $i \in S_1$, $\eta \mapsto \ensuremath{a}_i w(\langle \boldsymbol{x}_i , \boldsymbol{\theta}_{\mathsf{a}} \rangle)(y_i-\eta)^2/2$ is $c$-strongly convex. Indeed, because the weight $w$ is lower-bounded by A1, this holds whenever $\ensuremath{a}_i = 1$, which occurs in at least $c'n$ coordinates with exponentially high probability because $\ensuremath{a}_i \mathrel{\stackrel{\mathrm{iid}}{\sim}}\mathsf{Ber}(\mathbb{E}[\pi(\mu_{\mathsf{a}} + \langle \boldsymbol{x}_i,\boldsymbol{\theta}_{\mathsf{a}}\rangle)])$ and $\mathbb{E}[\pi(\mu_{\mathsf{a}} + \langle \boldsymbol{x}_i,\boldsymbol{\theta}_{\mathsf{a}}\rangle)] > c$ by Assumption A1. Now, to show the function $t \mapsto \max_{\boldsymbol{u}\in {\mathbb R}^n}\mathsf{PriObj}_k(\boldsymbol{u};v_0 + t \delta_0,\boldsymbol{v}+ t\boldsymbol{\delta})$ is $c$-strongly convex for $t \in [0,1]$, consider first the case $|\delta_0| \geq 2M \| \boldsymbol{\delta}\|$, or equivalently, $|\delta_0|^2 \geq 4M^2/(1 + 4M^2)$. On the event $\| \boldsymbol{A}\|_{{\rm op}} / \sqrt{n} \leq M$, we have $C' \geq \|\delta_0 \boldsymbol{1}+ \boldsymbol{A}\boldsymbol{\delta}\|/\sqrt{n} \geq |\delta_0| - \| \boldsymbol{A}\boldsymbol{\delta}\| / \sqrt{n} \geq (1 - 1/(2M))|\delta_0| \geq c > 0$. 
Using Markov's inequality across coordinates, there is a set $S_2 \subset [n]$, $|S_2| \geq (1-c'/2)n$ (using the same $c'$ as above) such that for all $i \in S_2$ both $|\delta_0 + \langle \boldsymbol{x}_i , \boldsymbol{\delta}\rangle| \geq c > 0$ and $|v_0 + t\delta_0 + \langle \boldsymbol{x}_i , \boldsymbol{v}+ t \boldsymbol{\delta}\rangle | \leq C'$ for all $t \in [0,1]$. Note that $|S_1 \cap S_2| \geq c'n/2$, whence $t \mapsto \frac{1}{n}\sum_{i \in S_1 \cap S_2}\ensuremath{a}_i w(\langle \boldsymbol{x}_i,\boldsymbol{\theta}_{\mathsf{a}}\rangle)(y_i - v_0 - t\delta_0 - \langle \boldsymbol{x}_i , \boldsymbol{v}+ t \boldsymbol{\delta}\rangle)^2$ (in the case $k = 5$) and $t \mapsto \frac{1}{n}\sum_{i \in S_1 \cap S_2}\ell_{\mathsf{a}}(v_0 + t\delta_0 + \langle \boldsymbol{x}_i , \boldsymbol{v}+ t \boldsymbol{\delta}\rangle ; \ensuremath{a}_i)$ (in the case $k = 6$) is $c$-strongly convex for $t \in [0,1]$. Because the remaining terms in equations [\[eq:propensity-fit\]](#eq:propensity-fit){reference-type="eqref" reference="eq:propensity-fit"} and [\[eq:outcome-fit\]](#eq:outcome-fit){reference-type="eqref" reference="eq:outcome-fit"} are convex, we conclude $t \mapsto \max_{\boldsymbol{u}\in {\mathbb R}^n}\mathsf{PriObj}_k(\boldsymbol{u};v_0 + t \delta_0,\boldsymbol{v}+ t\boldsymbol{\delta})$ is $c$-strongly convex for $t \in [0,1]$. Consider, alternatively, the case that $|\delta_0| < 2M \| \boldsymbol{\delta} \|$, or equivalently, $\|\boldsymbol{\delta}\|^2 \geq 1/(1 + 4M^2)$. Then, because $\Omega_{\mathsf{a}}$ is $c$-strongly convex by Assumption A1, $t \mapsto \Omega_\mathsf{a}(\boldsymbol{v}+ t \boldsymbol{\delta})$ is $c$-strongly convex. Thus, $t \mapsto \max_{\boldsymbol{u}\in {\mathbb R}^n}\mathsf{PriObj}_k(\boldsymbol{u};v_0 + t \delta_0,\boldsymbol{v}+ t\boldsymbol{\delta})$ is $c$-strongly convex for $t \in [0,1]$ in this case as well. (c) Let $\boldsymbol{v}^*$ be the minimizer of $\Omega_k$, which has bounded $\ell_2$-norm by Assumption A1. 
Moreover, because $\eta \mapsto \ell_{\mathsf{a}}(\eta;\ensuremath{a}_i)$ and $\eta\mapsto \ensuremath{a}_i w(\langle \boldsymbol{x}_i , \boldsymbol{\theta}_{\mathsf{a}}\rangle)(y_i - \eta)^2$ have $C$-Lipschitz gradients that are bounded by $C(1 + |y_i|)$ at $\eta = 0$, we conclude that $(v_0,\boldsymbol{v}) \mapsto \max_{\boldsymbol{u}\in {\mathbb R}^n}\mathsf{PriObj}_k(\boldsymbol{u};v_0,\boldsymbol{v})$ has gradient bounded by $C$ with exponentially high probability at $v_0 = 0, \boldsymbol{v}= \boldsymbol{v}^*$. The bounds on $|v_{k,0}^{\mathsf{po}}|$ and $\|\boldsymbol{v}_k^{\mathsf{po}}\|$ then hold by item (b). The bounds on $\| \boldsymbol{u}_k^{\mathsf{po}} \|$ then hold by equation [\[eq:uPO-identity\]](#eq:uPO-identity){reference-type="eqref" reference="eq:uPO-identity"} and the Lipschitz continuity of the gradient of the loss in Assumption A1. Then the display holds by [@rockafellar-1970a Lemma 36.2]. (d) This holds by the same argument as in the previous item, where we do not optimize over $v_0$ in equations [\[eq:propensity-fit\]](#eq:propensity-fit){reference-type="eqref" reference="eq:propensity-fit"} and [\[eq:outcome-fit\]](#eq:outcome-fit){reference-type="eqref" reference="eq:outcome-fit"}.  
◻ ### Refined bounds on primary optimization saddle point By , the sequential Gordon inequality (), and , for sufficiently large $C > 0$ depending on $\mathcal{P}_{\mathrm{model}}$ $$\label{eq:PO-saddle-value-conc} \mathsf{PriObj}_k(\boldsymbol{u}_k^{\mathsf{po}};v_{k,0}^{\mathsf{po}},\boldsymbol{v}_k^{\mathsf{po}}) = \min_{\substack{v_0 \in {\mathbb R}\\\boldsymbol{v}\in {\mathbb R}^p}} \; \max_{\boldsymbol{u} \in {\mathbb R}^n} \mathsf{PriObj}_k(\boldsymbol{u};v_0,\boldsymbol{v}) = \min_{\substack{|v_0| \leq C \\ \|\boldsymbol{v}\| \leq C}} \; \max_{\|\boldsymbol{u}\| \leq C/\sqrt{n}} \mathsf{PriObj}_k(\boldsymbol{u};v_0,\boldsymbol{v}) \ensuremath{\stackrel{\bullet}{=}}\ell_k,$$ and $$\min_{\boldsymbol{v}\in {\mathbb R}^p} \; \max_{\boldsymbol{u}\in {\mathbb R}^n} \mathsf{PriObj}_k(\boldsymbol{u};\nu_{0,k},\boldsymbol{v}) = \min_{\|\boldsymbol{v}\| \leq C} \; \max_{\|\boldsymbol{u}\| \leq C/\sqrt{n}} \mathsf{PriObj}_k(\boldsymbol{u};\nu_{0,k},\boldsymbol{v}) \ensuremath{\stackrel{\bullet}{=}} \ell_k,$$ where we have used item (c) of  in the first display, and item (d) of  in the second display. In particular, for any $\epsilon < c'$, with probability at least $1 - Ce^{-cn\epsilon^r}$, $\min_{\boldsymbol{v}\in {\mathbb R}^p} \max_{\boldsymbol{u}\in {\mathbb R}^n} \mathsf{PriObj}_k(\boldsymbol{u}; \nu_{0,k},\boldsymbol{v})$ is within $\epsilon$ of the minimum of the function $v_0 \mapsto \min_{\boldsymbol{v}\in {\mathbb R}^p} \max_{\boldsymbol{u}\in {\mathbb R}^n} \mathsf{PriObj}_k(\boldsymbol{u};v_0,\boldsymbol{v})$ over $[-C,C]$. By (b), the function $(v_0, \boldsymbol{v}) \mapsto \max_{\boldsymbol{u}\in {\mathbb R}^n}\mathsf{PriObj}_k(\boldsymbol{u};v_0,\boldsymbol{v})$ is $c$-strongly convex over the set defined by the constraints $|v_0|\leq C$ and $\| \boldsymbol{v}\| \leq C$ with exponentially high probability. 
When this occurs, the interval $I \subset [-C,C]$ on which $\min_{\boldsymbol{v}\in {\mathbb R}^p} \allowbreak \max_{\boldsymbol{u}\in {\mathbb R}^n} \allowbreak \mathsf{PriObj}_k(\boldsymbol{u}; \allowbreak v_0,\boldsymbol{v})$ is within $\epsilon$ of its minimum has length at most $2 \sqrt{2\epsilon/c}$. Thus, with probability at least $1 - Ce^{-c n \epsilon^r}$, $|v_{k,0}^{\mathsf{po}} - \nu_{0,k}| \leq 2\sqrt{2\epsilon/c}$. That is, $v_{k,0}^{\mathsf{po}} \ensuremath{\stackrel{\bullet}{=}}\nu_{0,k}$, as desired. Next, taking $C$ large enough so that $|v_{k,0}^{\mathsf{po}} | \leq C$ and $\| \boldsymbol{v}_k^{\mathsf{po}} \| \leq C/2$ (which is possible by , item (c)) and such that [\[eq:AO-local-stability\]](#eq:AO-local-stability){reference-type="ref" reference="eq:AO-local-stability"} holds, we have $$\min_{\substack{\boldsymbol{v}\in E_v^c(\epsilon) \\ |v_0| \leq C \\ \| \boldsymbol{v}\| \leq C }} \; \max_{ \boldsymbol{u}\in {\mathbb R}^n } \mathsf{PriObj}_k(\boldsymbol{u};v_0,\boldsymbol{v}) \geq \min_{\substack{\boldsymbol{v}\in E_v^c(\epsilon) \\ |v_0| \leq C \\ \| \boldsymbol{v}\| \leq C }} \; \max_{\|\boldsymbol{u}\| \leq C'/\sqrt{n}} \; \mathsf{PriObj}_k(\boldsymbol{u};v_0,\boldsymbol{v}) \gtrdot \ell_k + c\epsilon^2.$$ Likewise, taking $C$ large enough so that $\| \boldsymbol{u}_k^{\mathsf{po}} \| \leq C/(2\sqrt{n})$ (which is possible by , item (c)) and such that [\[eq:AO-local-stability\]](#eq:AO-local-stability){reference-type="ref" reference="eq:AO-local-stability"} holds, we have $$\max_{ \substack{ \boldsymbol{u}\in E_u^c(\epsilon) \\ \| \boldsymbol{u}\| \leq C/\sqrt{n}} }\; \min_{\substack{v_0 \in {\mathbb R}\\ \boldsymbol{v}\in {\mathbb R}^p }} \; \mathsf{PriObj}_k(\boldsymbol{u};v_0,\boldsymbol{v}) \leq \max_{ \substack{ \boldsymbol{u}\in E_u^c(\epsilon) \\ \| \boldsymbol{u}\| \leq C/\sqrt{n}} }\; \min_{\substack{|v_0| \leq C' \\ \|\boldsymbol{v}\| \leq C' }} \; \mathsf{PriObj}_k(\boldsymbol{u};v_0,\boldsymbol{v}) \lessdot \ell_k - c\epsilon^2,$$ 
where recall that we may exchange the order of minimization and maximization in equation [\[eq:PO-saddle-value-conc\]](#eq:PO-saddle-value-conc){reference-type="eqref" reference="eq:PO-saddle-value-conc"}. Combined with the relation [\[eq:PO-saddle-value-conc\]](#eq:PO-saddle-value-conc){reference-type="eqref" reference="eq:PO-saddle-value-conc"}, we find that, with probability at least $1 - Ce^{-cn\epsilon^r}$, $\boldsymbol{u}_k^{\mathsf{po}} \in E_u(\epsilon)$ and $\boldsymbol{v}_k^{\mathsf{po}} \in E_v(\epsilon)$. That is, $$\label{eq:conc-without-gh} \begin{gathered} \frac{1}{n} \sum_{i=1}^n \phi_u\Big( (nu_{\ell,i}^{\mathsf{po},\perp})_{\ell=1}^{k-1}, nu_{k,i}^{\mathsf{po}}, (h_{\ell,i}^{\mathsf{po},\perp})_{\ell=1}^{k-1}, \varepsilon_{2,i} \Big) \ensuremath{\stackrel{\bullet}{=}} \mathbb{E}\Big[ \phi_u\Big( (nu_{\ell,i}^{\mathsf{se},\perp})_{\ell=1}^{k-1}, nu_{k,i}^{\mathsf{se}}, (h_{\ell,i}^{\mathsf{se},\perp})_{\ell=1}^{k-1}, \varepsilon_{2,i}^{\mathsf{se}} \Big) \Big], \\ \phi_v\Big(\boldsymbol{V}_{k-1}^{\mathsf{po},\perp},\boldsymbol{v}_k^{\mathsf{po}},\frac{1}{\sqrt{n}}\boldsymbol{G}_{k-1}^{\mathsf{po},\perp}\Big) \ensuremath{\stackrel{\bullet}{=}} \mathbb{E}\Big[\phi_v\Big(\boldsymbol{V}_{k-1}^{\mathsf{se},\perp},\boldsymbol{v}_k^{\mathsf{se}},\frac{1}{\sqrt{n}}\boldsymbol{G}_{k-1}^{\mathsf{se},\perp}\Big)\Big]. 
\end{gathered}$$ By the KKT conditions for the problem [\[eq:min-max\]](#eq:min-max){reference-type="eqref" reference="eq:min-max"}, the definitions of $\phi_k$ in equation  [\[eq:objectives\]](#eq:objectives){reference-type="eqref" reference="eq:objectives"}, and the definition of $\boldsymbol{g}_k^{\mathsf{po}}$ (see equation [\[eq:loo-noise-def\]](#eq:loo-noise-def){reference-type="eqref" reference="eq:loo-noise-def"}) we have $\langle \boldsymbol{u}_k^{\mathsf{po}},\boldsymbol{1}\rangle = 0$ and $$\boldsymbol{g}_k^{\mathsf{po}} = \sum_{\ell=1}^k \zeta_{k\ell}^v \boldsymbol{v}_\ell^{\mathsf{po}} - \boldsymbol{A}^\top \boldsymbol{u}_k^{\mathsf{po}} = \sum_{\ell=1}^k \zeta_{k\ell}^v \boldsymbol{v}_\ell^{\mathsf{po}} + \langle \boldsymbol{u}_k^{\mathsf{po}},\boldsymbol{1}\rangle \boldsymbol{\mu}_{\mathsf{x}} + \nabla \Omega_k(\boldsymbol{v}_k^{\mathsf{po}}) = \sum_{\ell=1}^k \zeta_{k\ell}^v \boldsymbol{v}_\ell^{\mathsf{po}} + \nabla \Omega_k(\boldsymbol{v}_k^{\mathsf{po}}).$$ Because $\nabla \Omega_k$ is $C$-Lipschitz by Assumption A1, the second line of equation [\[eq:conc-without-gh\]](#eq:conc-without-gh){reference-type="eqref" reference="eq:conc-without-gh"} holds also if we allow $\phi_v$ to also be a function of $\boldsymbol{g}_k^{\mathsf{po}}$. Now we show that, in the case $k = 5$, the first line of [\[eq:conc-without-gh\]](#eq:conc-without-gh){reference-type="ref" reference="eq:conc-without-gh"} holds also if we allow $\phi_u$ to also be a function of $h_{5,i}^{\mathsf{po}}$. Observe that because $\ell_\mathsf{a}$ is $c$-strongly convex and twice differentiable, $\nabla_{\boldsymbol{u}} \ell_5^*(\boldsymbol{u}_k^{\mathsf{po}};\boldsymbol{w},\boldsymbol{y}_1,\boldsymbol{y}_2)$ exists and is $C$-Lipschitz (see its definition in equation [\[eq:ellk\*-def\]](#eq:ellk*-def){reference-type="eqref" reference="eq:ellk*-def"}). 
Thus, using the KKT conditions for equation [\[eq:min-max\]](#eq:min-max){reference-type="eqref" reference="eq:min-max"} and the definition of $\boldsymbol{h}_5^{\mathsf{po}}$ (equation [\[eq:loo-noise-def\]](#eq:loo-noise-def){reference-type="eqref" reference="eq:loo-noise-def"}), we have $$\boldsymbol{h}_5^{\mathsf{po}} = \sum_{\ell=1}^5 \zeta_{5\ell}^u \boldsymbol{u}_\ell^{\mathsf{po}} + \boldsymbol{A}\boldsymbol{v}_5^{\mathsf{po}} = \sum_{\ell=1}^5 \zeta_{5\ell}^u \boldsymbol{u}_\ell^{\mathsf{po}} - \boldsymbol{1}(v_{0,5}^{\mathsf{po}} + \langle \boldsymbol{\mu}_{\mathsf{x}} , \boldsymbol{v}_5^{\mathsf{po}} \rangle) + \nabla_{\boldsymbol{u}}\ell_5^*(\boldsymbol{u}_5^{\mathsf{po}};\boldsymbol{w},\boldsymbol{y}_1,\boldsymbol{y}_2),$$ whence equation [\[eq:conc-without-gh\]](#eq:conc-without-gh){reference-type="eqref" reference="eq:conc-without-gh"} holds if we allow $\phi_u$ to also be a function of $h_{5,i}^{\mathsf{po}}$. Equation [\[eq:se-conc\]](#eq:se-conc){reference-type="eqref" reference="eq:se-conc"} holds because, for $k = 5,6$, $\boldsymbol{v}_k^{\mathsf{po},\perp}$, $\boldsymbol{g}_k^{\mathsf{po},\perp}$ are $C$-Lipschitz functions of $\boldsymbol{V}_{k-1}^{\mathsf{po}}$, $\boldsymbol{G}_{k-1}^{\mathsf{po}}$, and $\boldsymbol{v}_k^{\mathsf{po}}$; and equation [\[eq:se-conc-u\]](#eq:se-conc-u){reference-type="eqref" reference="eq:se-conc-u"} holds because $nu_{k,i}^{\mathsf{po},\perp}$ and (in the case $k = 5$) $h_{k,i}^{\mathsf{po},\perp}$ are $C$-Lipschitz functions of $(n u_{\ell,i}^{\mathsf{po},\perp})_{\ell=1}^{k-1}$, $(h_{\ell,i}^{\mathsf{po},\perp})_{\ell=1}^{k-1}$, and $nu_{k,i}^{\mathsf{po}}$ (see [\[eq:perp-from-orig\]](#eq:perp-from-orig){reference-type="ref" reference="eq:perp-from-orig"}). # Analysis of fixed point parameters {#sec:fixed-point-parameter-analysis} This section is devoted to the proofs of , in , respectively. 
## Proof of  {#sec:fix-pt-exist-proof} Our proof proceeds via induction on $k$, based on the following induction hypothesis: Hypothesis $\mbox{FP}(k)$ : There exists a unique solution $\boldsymbol{K}_{g,k}$, $\boldsymbol{K}_{h,k}$, $\boldsymbol{Z}_{u,k}$, $\boldsymbol{Z}_{v,k}$, $\{\nu_{\ell,0}\}_{\ell \leq k}$, $\{\nu_{\ell,\mathsf{x}}\}_{\ell \leq k}$ to the equations $$\label{eq:fixpt-inductive} \tag{SE-fixpt-$k$} \begin{gathered} \boldsymbol{K}_{g,k} = \langle\!\langle\boldsymbol{U}_k^{\mathsf{se}} \rangle\!\rangle_{L_2}, \qquad \boldsymbol{K}_{h,k} = \langle\!\langle\boldsymbol{V}_k^{\mathsf{se}} \rangle\!\rangle_{L_2}, \\ \boldsymbol{K}_{h,k} \boldsymbol{Z}_{v,k}^\top = \langle\!\langle\boldsymbol{H}_k^{\mathsf{se}} , \boldsymbol{U}_k^{\mathsf{se}} \rangle\!\rangle_{L_2}, \qquad \boldsymbol{K}_{g,k} \boldsymbol{Z}_{u,k}^\top = \langle\!\langle\boldsymbol{G}_k^{\mathsf{se}} , \boldsymbol{V}_k^{\mathsf{se}} \rangle\!\rangle_{L_2}, \\ \nu_{\ell,\mathsf{x}} = \langle\boldsymbol{\mu}_{\mathsf{x}},\boldsymbol{v}_\ell^{\mathsf{se}}\rangle_{L_2}, \qquad \nu_{\ell,\mathsf{u}} = \langle\boldsymbol{1},\boldsymbol{u}_\ell^{\mathsf{se}}\rangle_{L_2}, \qquad \langle\boldsymbol{1},\boldsymbol{u}_\ell^{\mathsf{se}}\rangle_{L_2} = 0, \quad \text{for $5 \leq \ell \leq k$.} \end{gathered}$$ where $\boldsymbol{Z}_{v,k}$ is lower-triangular and innovation-compatible with $\boldsymbol{K}_{h,k}$ and $\boldsymbol{Z}_{u,k}$ is lower-triangular and innovation-compatible with $\boldsymbol{K}_{g,k}$. Equation [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"} is a subset of the fixed point equations [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}, and involves a subset of the fixed point parameters. Note that  is equivalent to Hypothesis $\mbox{FP}(6)$. 
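The role of the innovation-compatibility constraint can be isolated in a toy version of the second line of equation [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"}: when the Gram matrix on the left is singular, the linear equation determines the unknown only up to its null space, and uniqueness is recovered by fixing the null-space component. The matrices below are hypothetical, and zeroing the null-space component is only an analogy for the ${\mathbf I}_{\boldsymbol{K}}$ masking used in the base case.

```python
import numpy as np

# Toy illustration (hypothetical 2x2 matrices, not the paper's exact
# formalism): when a Gram matrix K is singular, K @ Z.T = M pins down
# Z.T only modulo additions from the null space of K; an extra
# condition -- here, orthogonality to null(K) -- restores uniqueness.
K = np.array([[1.0, 1.0], [1.0, 1.0]])          # rank-1 Gram matrix
M = np.array([[2.0], [2.0]])                    # target in range(K)

Zt = np.linalg.pinv(K) @ M                      # minimum-norm solution
null = np.array([[1.0], [-1.0]]) / np.sqrt(2.0) # basis of null(K)
Zt_alt = Zt + 3.0 * null                        # another exact solution

assert np.allclose(K @ Zt, M) and np.allclose(K @ Zt_alt, M)
# Among all solutions, only Zt has no component along null(K):
assert abs(float(null.T @ Zt)) < 1e-12
print("solution unique once the null-space component is fixed")
```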
Our strategy is to first check Hypothesis $\mbox{FP}(4)$ directly, and then prove Hypothesis $\mbox{FP}(6)$ by induction. ### Base case: $\mathbf{k = 4}$ {#sec:fixpt-existence-base-case} Using equations [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} and [\[eq:SE-penalties\]](#eq:SE-penalties){reference-type="eqref" reference="eq:SE-penalties"}, we find that the equalities $$\begin{aligned} \boldsymbol{u}_1^{\mathsf{se}} = \boldsymbol{u}_2^{\mathsf{se}} = {\boldsymbol 0}, \quad \boldsymbol{u}_3^{\mathsf{se}} = -\boldsymbol{1}/n, \quad \boldsymbol{u}_4^{\mathsf{se}} = -\boldsymbol{y}_1^{\mathsf{se}}/n, \quad \mbox{and} \\ \boldsymbol{v}_1^{\mathsf{se}} = \boldsymbol{\theta}_1, \quad \boldsymbol{v}_2^{\mathsf{se}} = \boldsymbol{\theta}_2, \quad \boldsymbol{v}_3^{\mathsf{se}} = {\boldsymbol 0}, \quad \boldsymbol{v}_4^{\mathsf{se}} = {\boldsymbol 0}\end{aligned}$$ all hold regardless of the choice of $\boldsymbol{K}_g$, $\boldsymbol{K}_h$, $\boldsymbol{Z}_u$, and $\boldsymbol{Z}_v$. Thus, the first line of equation [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"} takes the form $$\boldsymbol{K}_{g,4} = \begin{pmatrix} {\boldsymbol 0}_{2 \times 2} & {\boldsymbol 0}_{2 \times 2} \\ {\boldsymbol 0}_{2 \times 2} & \begin{matrix} 1/n & {\overline{\pi}}/n \\ {\overline{\pi}}/n & {\overline{\pi}}/n \end{matrix} \end{pmatrix}, \qquad \boldsymbol{K}_{h,4} = \begin{pmatrix} \langle\!\langle\boldsymbol{\Theta}\rangle\!\rangle& {\boldsymbol 0}_{2 \times 2} \\ {\boldsymbol 0}_{2 \times 2} & {\boldsymbol 0}_{2 \times 2} \end{pmatrix},$$ where $\boldsymbol{\Theta}\in {\mathbb R}^{p \times 2}$ is the matrix with columns $\boldsymbol{\theta}_1$ and $\boldsymbol{\theta}_2$. 
Here, in computing the entries of $\boldsymbol{K}_{g,k}$, we have used that $\mathbb{P}(y_{1,i}^{\mathsf{se}} = 1|h_{1,i}^{\mathsf{se}}) = \pi(\theta_{1,0} + \langle \boldsymbol{\mu}_{\mathsf{x}},\boldsymbol{\theta}_1\rangle + h_{1,i}^{\mathsf{se}})$ and that $h_{1,i}^{\mathsf{se}} \sim \mathsf{N}(0,\|\boldsymbol{\theta}_1\|^2)$. Thus, the first line of the display [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"} uniquely determines $\boldsymbol{K}_{g,k}$ and $\boldsymbol{K}_{h,k}$. Moreover, this implies that $\boldsymbol{g}_1^{\mathsf{se}} = \boldsymbol{g}_2^{\mathsf{se}} = {\boldsymbol 0}$ and $\boldsymbol{h}_3^{\mathsf{se}} = \boldsymbol{h}_4^{\mathsf{se}} = {\boldsymbol 0}$. Then $$\langle\!\langle\boldsymbol{H}_4^\mathsf{se},\boldsymbol{U}_4^\mathsf{se}\rangle\!\rangle_{L_2} = \boldsymbol{K}_{h,4} \begin{pmatrix} \begin{matrix} \vert \\ {\boldsymbol 0}_{4\times3} \\ \vert \end{matrix} & \begin{matrix} -{\overline{\pi}}\alpha_1 \\ 0 \\ 0 \\ 0\end{matrix} \end{pmatrix}, \qquad \langle\!\langle\boldsymbol{G}_4^{\mathsf{se}} , \boldsymbol{V}_4^{\mathsf{se}} \rangle\!\rangle_{L_2} = {\boldsymbol 0}_{4\times 4}.$$ Using  and because ${\overline{\pi}}\neq 1$, we have ${\mathbf I}_{\boldsymbol{K}_{g,4}} = \text{\rm diag}(0,0,1,1)$ and $$\begin{aligned} {\mathbf I}_{\boldsymbol{K}_{h,4}} = \begin{pmatrix} \mathbb{I}_{\boldsymbol{\theta}_1 \neq {\boldsymbol 0}} & 0 \\ \frac{\langle\boldsymbol{\theta}_2,\boldsymbol{\theta}_1\rangle}{\|\boldsymbol{\theta}_1\|^2}\mathbb{I}_{\boldsymbol{\theta}_1 \neq {\boldsymbol 0},\boldsymbol{\theta}_2 \propto \boldsymbol{\theta}_1} & \mathbb{I}_{\boldsymbol{\theta}_2 \not \propto \boldsymbol{\theta}_1} \end{pmatrix}.\end{aligned}$$ Multiplying the left and right-hand sides of the second line of the display [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"} by the matrices $\boldsymbol{K}_{h,4}^\ddagger$ and 
$\boldsymbol{K}_{g,4}^\ddagger$, respectively, gives $${\mathbf I}_{\boldsymbol{K}_{h,4}}^\top\boldsymbol{Z}_{v,4}^\top = \begin{pmatrix} \begin{matrix} \vert \\ {\boldsymbol 0}_{4\times3} \\ \vert \end{matrix} & \begin{matrix} -{\overline{\pi}}\alpha_1 \mathbb{I}_{\boldsymbol{\theta}_1 \neq {\boldsymbol 0}} \\ 0 \\ 0 \\ 0\end{matrix} \end{pmatrix}, \qquad {\mathbf I}_{\boldsymbol{K}_{g,4}}^\top \boldsymbol{Z}_{u,4}^\top = {\boldsymbol 0}_{4\times4}.$$ By innovation compatibility, ${\mathbf I}_{\boldsymbol{K}_{h,4}}^\top \boldsymbol{Z}_{v,4}^\top = \boldsymbol{Z}_{v,4}^\top$ and ${\mathbf I}_{\boldsymbol{K}_{g,4}}^\top \boldsymbol{Z}_{u,4}^\top = \boldsymbol{Z}_{u,4}^\top$. The proof of the base case is complete. ### Induction step: Hilbert space problem Assume that Hypothesis $\mbox{FP}(k-1)$ has been established, and let $\boldsymbol{K}_{g,k-1}$, $\boldsymbol{K}_{h,k-1}$, $\boldsymbol{Z}_{v,k-1}$, $\boldsymbol{Z}_{u,k-1}$, $\{\nu_{\ell,\mathsf{x}}\}_{5 \leq \ell \leq k-1}$, $\{\nu_{\ell,0}\}_{5 \leq \ell \leq k-1}$ be the unique solution to equation [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"} under the appropriate innovation-compatibility constraints. Because the fixed point equations [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"} at index $k-1$ are a subset of those at index $k$, Hypothesis $\mbox{FP}(k-1)$ implies that the fixed point equations [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"} at index $k$ uniquely determine $\boldsymbol{K}_{g,k-1}$, $\boldsymbol{K}_{h,k-1}$, $\boldsymbol{Z}_{v,k-1}$, $\boldsymbol{Z}_{u,k-1}$, $\{\nu_{\ell,\mathsf{x}}\}_{5 \leq \ell \leq k-1}$, $\{\nu_{\ell,0}\}_{5 \leq \ell \leq k-1}$. 
It is our task to show that the fixed point equations [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"} at index $k$ also uniquely determine the final rows of $\boldsymbol{K}_{g,k}$, $\boldsymbol{K}_{h,k}$, $\boldsymbol{Z}_{u,k}$, $\boldsymbol{Z}_{v,k}$, and $\nu_{k,0}$ and $\nu_{k,\mathsf{x}}$. Our strategy for doing so is to establish a correspondence between the solutions to equation [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"} and the saddle points of a certain convex-concave saddle point problem on an infinite dimensional Hilbert space. Via this connection, we can reduce the proof of Hypothesis $\mbox{FP}(k)$ to a proof of the existence and uniqueness of such saddle points, for which we can draw on techniques from convex analysis. The purpose of this subsection is to introduce the infinite dimensional saddle point problem. The saddle point objective that we define closely resembles the saddle point objective [\[eq:def-AuxObj\]](#eq:def-AuxObj){reference-type="eqref" reference="eq:def-AuxObj"} in the proof of . Whereas the proof of  involves a random objective $\mathsf{AuxObj}_k : {\mathbb R}^n \times {\mathbb R}\times {\mathbb R}^p \rightarrow {\mathbb R}$, the proof of the induction step involves a deterministic objective $\mathsf{AuxObj}_k^{L_2} : L_{\mathsf{u}}^2 \times {\mathbb R}\times L_{\mathsf{v}}^2 \rightarrow {\mathbb R}$, where $L_{\mathsf{u}}^2$ and $L_{\mathsf{v}}^2$ are infinite-dimensional Hilbert spaces of random $n$- and $p$-dimensional vectors, respectively. 
In particular, consider a probability space $P_{\mathsf{u}}$ containing the random vectors $(\boldsymbol{h}_1^{\mathsf{se}},\ldots,\boldsymbol{h}_{k-1}^{\mathsf{se}}) \sim \mathsf{N}({\boldsymbol 0},\boldsymbol{K}_{h,k-1} \otimes {\mathbf I}_n)$, the independent noise random vectors $(\boldsymbol{\varepsilon}_1^{\mathsf{se}},\boldsymbol{\varepsilon}_2^{\mathsf{se}})$, the first $k-1$ solutions $(\boldsymbol{u}_1^{\mathsf{se}},\ldots,\boldsymbol{u}_{k-1}^{\mathsf{se}})$ to [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} (defined with parameters solving the fixed point equations up to iteration $k-1$), and auxiliary Gaussian noise $\boldsymbol{\xi}_h \sim \mathsf{N}({\boldsymbol 0},{\mathbf I}_n)$ independent of everything else. Let $L_{\mathsf{u}}^2$ be the space of square-integrable random $n$-dimensional vectors $\boldsymbol{u}$ defined on $P_{\mathsf{u}}$. Similarly, consider a probability space $P_{\mathsf{v}}$ containing the random vectors $(\boldsymbol{g}_1^{\mathsf{se}},\ldots,\boldsymbol{g}_{k-1}^{\mathsf{se}}) \sim \mathsf{N}({\boldsymbol 0},\boldsymbol{K}_{g,k-1} \otimes {\mathbf I}_p)$, the first $k-1$ solutions $(\boldsymbol{v}_1^{\mathsf{se}},\ldots,\boldsymbol{v}_{k-1}^{\mathsf{se}})$ to equation [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} (defined with parameters solving the fixed point equations up to iteration $k-1$), and auxiliary Gaussian noise $\boldsymbol{\xi}_g \sim \mathsf{N}({\boldsymbol 0},{\mathbf I}_p)$ independent of everything else. Let $L_{\mathsf{v}}^2$ be the space of square-integrable random $p$-dimensional vectors $\boldsymbol{v}$ defined on $P_{\mathsf{v}}$. In the sequel, we consider functions $f:L_\mathsf{u}^2 \rightarrow {\mathbb R}$ (or $L_\mathsf{v}^2 \rightarrow {\mathbb R}$), and $f(\boldsymbol{u})$ will denote $f$ applied to the random vector $\boldsymbol{u}\in L_\mathsf{u}^2$.
(This should not be confused with evaluating a function $\tilde f: {\mathbb R}^n \rightarrow {\mathbb R}$ at the realization of the random variable $\boldsymbol{u}$. To avoid confusion, we sometimes write the latter as $\tilde f(\boldsymbol{u}(\omega))$ where $\omega$ denotes an element of the sample space $P_\mathsf{u}$.) Define mappings $\boldsymbol{g}^{L_2}: L_{\mathsf{u}}^2 \rightarrow L_{\mathsf{v}}^2$, $\boldsymbol{h}^{L_2}: L_{\mathsf{v}}^2 \rightarrow L_{\mathsf{u}}^2$ by $$\label{eq:g-L2} \boldsymbol{g}^{L_2}(\boldsymbol{u}) \ensuremath{: =}\sum_{\ell=1}^{k-1} \boldsymbol{g}_\ell^\perp \langle\boldsymbol{u}_\ell^\perp,\boldsymbol{u}\rangle_{L_2} + \big\| \mathsf{P}_{\boldsymbol{U}_{k-1}^\mathsf{se}}^\perp \boldsymbol{u}\big\|_{L_2} \boldsymbol{\xi}_g, \qquad \boldsymbol{h}^{L_2}(\boldsymbol{v}) \ensuremath{: =}\sum_{\ell = 1}^{k-1} \boldsymbol{h}_\ell^\perp \langle\boldsymbol{v}_\ell^\perp,\boldsymbol{v}\rangle_{L_2} + \big\| \mathsf{P}_{\boldsymbol{V}_{k-1}^\mathsf{se}}^\perp \boldsymbol{v}\big\|_{L_2}\boldsymbol{\xi}_h,$$ where $\mathsf{P}_{\boldsymbol{U}_{k-1}^\mathsf{se}}^\perp$ is the projection in $L_{\mathsf{u}}^2$ onto the space orthogonal to the span of $\boldsymbol{u}_1^{\mathsf{se}},\ldots,\boldsymbol{u}_{k-1}^{\mathsf{se}}$ in $L_{\mathsf{u}}^2$ and likewise for $\mathsf{P}_{\boldsymbol{V}_{k-1}^\mathsf{se}}^\perp$; that is, $$\mathsf{P}_{\boldsymbol{U}_{k-1}^\mathsf{se}}^\perp \boldsymbol{u}= \boldsymbol{u}- \sum_{\ell = 1}^{k-1} \boldsymbol{u}_\ell^\perp \langle\boldsymbol{u}_\ell^\perp,\boldsymbol{u}\rangle_{L_2}, \qquad \mathsf{P}_{\boldsymbol{V}_{k-1}^\mathsf{se}}^\perp \boldsymbol{v}= \boldsymbol{v}- \sum_{\ell = 1}^{k-1} \boldsymbol{v}_\ell^\perp \langle\boldsymbol{v}_\ell^\perp,\boldsymbol{v}\rangle_{L_2}.$$ For $k = 5, 6$, define the functions $\phi_k^{L_2}: L_{\mathsf{u}}^2 \times {\mathbb R}\times L_{\mathsf{v}}^2 \rightarrow {\mathbb R}$ by $$\label{eq:phi-L2} \begin{gathered} \phi_5^{L_2}(\boldsymbol{u};v_0,\boldsymbol{v}) \ensuremath{:
=}\langle\boldsymbol{u},\boldsymbol{1}\rangle_{L_2}(v_0 + \langle\boldsymbol{\mu}_{\mathsf{x}},\boldsymbol{v}\rangle_{L_2}) - \frac{1}{n}\sum_{i=1}^n \mathbb{E}\big[\ell_{\mathsf{a}}^*(nu_i;y_{1,i}^{\mathsf{se}})\big] + \mathbb{E}\big[\Omega_{\mathsf{a}}(\boldsymbol{v})\big], \\ % \phi_6^{L_2}(\boldsymbol{u};v_0,\boldsymbol{v}) \ensuremath{: =}\langle\boldsymbol{u},\boldsymbol{y}_2^{\mathsf{se}}\rangle_{L_2} + \langle\boldsymbol{u},\boldsymbol{1}\rangle_{L_2}(v_0 + \langle\boldsymbol{\mu}_{\mathsf{x}},\boldsymbol{v}\rangle_{L_2}) - \frac{n}{2}\sum_{i=1}^n \mathbb{E}\Big[ (w_i^{\mathsf{se}})^{-1} \frac{u_i^2}{y_{1,i}^{\mathsf{se}}} \Big] + \mathbb{E}\big[\Omega_{\mathsf{y}}(\boldsymbol{v})\big], \end{gathered}$$ where we recall the convention that $u \mapsto u^2/0$ is the convex indicator function equal to zero when $u=0$ and $\infty$ otherwise. Because the function $u_i \mapsto \ell_\mathsf{a}^*(n u_i;y_{1,i}^{\mathsf{se}})$ is convex, and $(w_i^{\mathsf{se}})^{-1}u_i^2/y_{1,i}^{\mathsf{se}}$ is non-negative, the expectations of these quantities are well-defined but possibly infinite for $\boldsymbol{u}\in L_\mathsf{u}^2$. 
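The maps $\boldsymbol{g}^{L_2}$ and $\boldsymbol{h}^{L_2}$ are assembled from $L_2$ inner products and the projection $\mathsf{P}^\perp$ onto the orthogonal complement of the past iterates. As a purely illustrative sketch (not part of the proof), the $L_2(P_{\mathsf{u}})$ inner product can be approximated by a Monte Carlo average over draws from the underlying probability space; all dimensions, sample counts, and variable names below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 50_000  # Monte Carlo samples standing in for the probability space P_u
n = 3       # dimension of each random vector (illustrative only)

def ip(a, b):
    # Monte Carlo proxy for the L2 inner product <a, b>_{L2} = E[a . b]
    return float(np.mean(np.sum(a * b, axis=1)))

# Orthonormalized "past iterates" u_1^perp, u_2^perp (here k - 1 = 2),
# obtained by Gram-Schmidt from independent Gaussian draws.
basis = []
for _ in range(2):
    x = rng.standard_normal((m, n))
    for e in basis:
        x = x - e * ip(e, x)
    basis.append(x / np.sqrt(ip(x, x)))

def proj_perp(u):
    # P^perp u = u - sum_l u_l^perp <u_l^perp, u>_{L2}
    for e in basis:
        u = u - e * ip(e, u)
    return u

# g^{L2}(u) = sum_l g_l^perp <u_l^perp, u>_{L2} + ||P^perp u||_{L2} * xi_g
xi_g = rng.standard_normal((m, n))
g_perp = [rng.standard_normal((m, n)) for _ in range(2)]

def g_L2(u):
    out = sum(g * ip(e, u) for g, e in zip(g_perp, basis))
    return out + np.sqrt(ip(proj_perp(u), proj_perp(u))) * xi_g

u = rng.standard_normal((m, n))
r = proj_perp(u)
print([abs(ip(e, r)) < 1e-8 for e in basis])  # residual is orthogonal to the past
print(g_L2(u).shape)
```

The projection and the Pythagorean identity behave exactly as in finite dimensions, which is what the subgradient computations for the Gordon terms exploit.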
Finally, define a function $\mathsf{AuxObj}_k^{L_2}: L_{\mathsf{u}}^2 \times {\mathbb R}\times L_{\mathsf{v}}^2 \rightarrow {\mathbb R}$ via $$\label{eq:def-AuxObj-L2} \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u};v_0,\boldsymbol{v}) \ensuremath{: =} -\langle\boldsymbol{g}^{L_2}(\boldsymbol{u}),\boldsymbol{v}\rangle_{L_2} + \langle\boldsymbol{h}^{L_2}(\boldsymbol{v}),\boldsymbol{u}\rangle_{L_2} + \phi_k^{L_2}(\boldsymbol{u};v_0,\boldsymbol{v}).$$ The definitions of $\boldsymbol{g}^{L_2}(\boldsymbol{u})$ and $\boldsymbol{h}^{L_2}(\boldsymbol{v})$ resemble the definitions of $\boldsymbol{g}(\boldsymbol{u})$ and $\boldsymbol{h}(\boldsymbol{v})$ in [\[eq:gordon-gh-explicit\]](#eq:gordon-gh-explicit){reference-type="eqref" reference="eq:gordon-gh-explicit"}, and the definition of $\phi_k^{L_2}$ resembles that of $\phi_k$ in [\[eq:objectives\]](#eq:objectives){reference-type="eqref" reference="eq:objectives"}. Nevertheless, we emphasize that the arguments to $\boldsymbol{g}^{L_2}$, $\boldsymbol{h}^{L_2}$, $\phi_k^{L_2}$, $\mathsf{AuxObj}_k^{L_2}$ live on completely different spaces than the arguments to $\boldsymbol{g}$, $\boldsymbol{h}$, $\phi_k$, $\mathsf{AuxObj}_k$. The former take arguments on a Hilbert space of random vectors; the latter take arguments on Euclidean space.
In the sequel, we show that the solutions to the fixed point equations [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"} are related to the KKT conditions for the saddle point problem $$\label{eq:min-max-L2} \tag{Aux-$L_2$} \min_{\substack{v_0 \in {\mathbb R}\\ \langle\boldsymbol{v},\boldsymbol{\xi}_g\rangle_{L_2} \geq 0}} \max_{\langle\boldsymbol{u},\boldsymbol{\xi}_h\rangle_{L_2} \geq 0} \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u};v_0,\boldsymbol{v}).$$ The sets $\{ \boldsymbol{v}\in L_{\mathsf{v}}^2 : \langle \boldsymbol{v}, \boldsymbol{\xi}_g \rangle_{L_2} \geq 0\}$ and $\{ \boldsymbol{u}\in L_{\mathsf{u}}^2 : \langle \boldsymbol{u}, \boldsymbol{\xi}_h \rangle_{L_2} \geq 0\}$ are closed and convex. It is easy to check that the mapping $(v_0,\boldsymbol{v}) \mapsto \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u};v_0,\boldsymbol{v})$ is convex if $\langle \boldsymbol{\xi}_h , \boldsymbol{u}\rangle_{L_2} \geq 0$ (note, in particular, that the function $\boldsymbol{v} \mapsto \| \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{v}\|_{L_2} \langle \boldsymbol{\xi}_h , \boldsymbol{u}\rangle_{L_2}$ is then convex, which is not the case for $\langle \boldsymbol{\xi}_h,\boldsymbol{u}\rangle_{L_2} < 0$). Likewise, the function $\boldsymbol{u}\mapsto \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u};v_0,\boldsymbol{v})$ is concave if $\langle \boldsymbol{\xi}_g , \boldsymbol{v}\rangle_{L_2} \geq 0$. Thus, equation [\[eq:min-max-L2\]](#eq:min-max-L2){reference-type="eqref" reference="eq:min-max-L2"} is a convex-concave saddle point problem. ### Outline of proof of induction step Our proof of the existence and uniqueness of the solutions to the fixed point equations [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"} follows these steps: Step 1.
: Given a fixed point [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"}, we show how to construct a saddle point for the problem [\[eq:min-max-L2\]](#eq:min-max-L2){reference-type="eqref" reference="eq:min-max-L2"}. Step 2a. : We show that the problem [\[eq:min-max-L2\]](#eq:min-max-L2){reference-type="eqref" reference="eq:min-max-L2"} has at least one saddle point $(\widehat{\boldsymbol{u}};\widehat{v}_0,\widehat{\boldsymbol{v}})$, and the values of $\widehat{\boldsymbol{u}}$ and $\widehat{\boldsymbol{v}}$ are unique. Step 2b. : Given a saddle point for the problem [\[eq:min-max-L2\]](#eq:min-max-L2){reference-type="eqref" reference="eq:min-max-L2"}, we show how to construct a fixed point [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"}. Step 3. : We show that the previous steps imply that equation [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"} has a unique solution. ### Derivatives of the auxiliary objective on Hilbert space {#sec:derivative-aux-Hilbert} The connection between the fixed point equation [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"} and the KKT conditions of the objective [\[eq:min-max-L2\]](#eq:min-max-L2){reference-type="eqref" reference="eq:min-max-L2"} relies on several subgradient identities, which we state here as lemmas. Their proofs are straightforward, and so are omitted. We adopt the convention that $\boldsymbol{\delta}/ \| \boldsymbol{\delta}\|_{L_2} = {\boldsymbol 0}$ if $\boldsymbol{\delta}= {\boldsymbol 0}$.\ **Lemma 25** (Derivatives of Gordon terms). *If $\langle \boldsymbol{\xi}_g , \boldsymbol{v}\rangle_{L_2} \geq 0$, then the function $f(\boldsymbol{u}, \boldsymbol{v}) \ensuremath{: =}\langle \boldsymbol{g}^{L_2}(\boldsymbol{u}) , \boldsymbol{v}\rangle_{L_2}$ is convex in $\boldsymbol{u}$ and linear in $\boldsymbol{v}$.
Moreover, $$\label{eq:L2-deriv-gordon-u} \begin{gathered} \sum_{\ell=1}^{k-1} \boldsymbol{u}_\ell^{\mathsf{se},\perp}\langle \boldsymbol{g}_\ell^{\mathsf{se},\perp} , \boldsymbol{v}\rangle_{L_2} + \frac{\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{u}}{\|\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{u}\|_{L_2}} \langle \boldsymbol{\xi}_g , \boldsymbol{v}\rangle_{L_2} \in \partial_{\boldsymbol{u}} f(\boldsymbol{u}, \boldsymbol{v}), \\ \partial_{\boldsymbol{v}} f(\boldsymbol{u}, \boldsymbol{v}) = \boldsymbol{g}^{L_2}(\boldsymbol{u}). \end{gathered}$$ If either $\langle \boldsymbol{\xi}_g,\boldsymbol{v}\rangle_{L_2} = 0$ or $\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{u}\neq {\boldsymbol 0}$, then $f$ is differentiable with respect to $\boldsymbol{u}$ at $(\boldsymbol{u},\boldsymbol{v})$, with gradient given by the expression in the first line of the previous display. Likewise, $$\begin{gathered} \sum_{\ell=1}^{k-1} \boldsymbol{v}_\ell^{\mathsf{se},\perp}\langle \boldsymbol{h}_\ell^{\mathsf{se},\perp} , \boldsymbol{u}\rangle_{L_2} + \frac{\mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{v}}{\|\mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{v}\|_{L_2}} \langle \boldsymbol{\xi}_h , \boldsymbol{u}\rangle_{L_2} \in \partial_{\boldsymbol{v}}\big\langle\boldsymbol{h}^{L_2}(\boldsymbol{v}),\boldsymbol{u}\big\rangle_{L_2}, \\ \partial_{\boldsymbol{u}}\big\langle\boldsymbol{h}^{L_2}(\boldsymbol{v}),\boldsymbol{u}\big\rangle_{L_2} = \boldsymbol{h}^{L_2}(\boldsymbol{v}), \end{gathered}$$ and if either $\langle \boldsymbol{\xi}_h,\boldsymbol{u}\rangle_{L_2} = 0$ or $\mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{v}\neq {\boldsymbol 0}$, then in fact $\big\langle\boldsymbol{h}^{L_2}(\boldsymbol{v}),\boldsymbol{u}\big\rangle_{L_2}$ is differentiable with respect to $\boldsymbol{v}$ at $(\boldsymbol{u},\boldsymbol{v})$, with gradient given by the expression in the
first line of the previous display.* The fixed point equations [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"} depend implicitly on the optimizations in display [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"}, which depend, via $\phi_{k,u}$, implicitly on the parameters $\nu_{k,0}$ and $\nu_{k,\mathsf{x}}$. It is useful to make this dependence explicit by writing $\phi_{k,u}(\boldsymbol{u}; \boldsymbol{H}_{k-1}^{\mathsf{se}}; \nu_{k,0},\nu_{k,\mathsf{x}})$. **Lemma 26** (Derivatives of penalty terms). *We have* *$$\label{eq:L2-pen-deriv-u} \boldsymbol{\delta}\in \partial_{\boldsymbol{u}}\big(-\phi_k^{L_2}(\boldsymbol{u};v_0,\boldsymbol{v})\big) \;\; \text{$\Longleftrightarrow$ for almost all $\omega$,} \;\; \boldsymbol{\delta}(\omega) \in \partial_{\boldsymbol{u}}\phi_{k,u}\big(\boldsymbol{u}(\omega);\boldsymbol{H}_{k-1}^{\mathsf{se}}(\omega);v_0,\langle\boldsymbol{\mu}_{\mathsf{x}},\boldsymbol{v}\rangle_{L_2}\big),$$ and likewise, $$\label{eq:L2-pen-deriv-v} \boldsymbol{\delta}\in \partial_{\boldsymbol{v}}\phi_k^{L_2}(\boldsymbol{u};v_0,\boldsymbol{v}) \;\; \text{$\Longleftrightarrow$ for almost all $\omega$,} \;\; \boldsymbol{\delta}(\omega) \in \partial_{\boldsymbol{v}}\phi_{k,v}\big(\boldsymbol{v}(\omega);\boldsymbol{G}_{k-1}^{\mathsf{se}}(\omega);\langle\boldsymbol{1},\boldsymbol{u}\rangle_{L_2}\big).$$* *We emphasize that in each line, the first subgradient occurs on infinite dimensional Hilbert space, and the second on finite-dimensional Euclidean space. 
Finally, we have $$\begin{aligned} \partial_{v_0} \phi_k^{L_2}(\boldsymbol{u};v_0,\boldsymbol{v}) = \langle \boldsymbol{1}, \boldsymbol{u} \rangle_{L_2}.\end{aligned}$$* ### Step 1: From fixed points to saddle points We show how to construct a saddle point for the optimization problem [\[eq:min-max-L2\]](#eq:min-max-L2){reference-type="eqref" reference="eq:min-max-L2"} based on a solution to the fixed point equation [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"}. We first claim that any solution to the fixed point equations [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"} must satisfy $\zeta_{kk}^v \geq 0$ and $\zeta_{kk}^u \geq 0$. First consider $\zeta_{kk}^v$. Recall that we may write $\boldsymbol{h}_k^{\mathsf{se}} = \sum_{\ell = 1}^k L_{g,k\ell} \boldsymbol{h}_\ell^{\mathsf{se},\perp}$, whence we can write the objective in the first line of [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} as $\mathsf{R}(\boldsymbol{u})- L_{g,kk}\langle \boldsymbol{h}_k^{\mathsf{se},\perp} , \boldsymbol{u}\rangle$, where $\mathsf{R}(\boldsymbol{u})$ depends implicitly on $\{ \boldsymbol{u}_\ell^{\mathsf{se}} \}_{\ell \leq k-1}$, $\{ \boldsymbol{h}_\ell^{\mathsf{se}} \}_{\ell \leq k-1}$, but does not depend on $\boldsymbol{h}_k^{\mathsf{se},\perp}$. Consider the optimization problem [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} for two values $\boldsymbol{h}_k^{\mathsf{se},\perp}$ and ${\tilde{\boldsymbol{h}}}_k^{\mathsf{se},\perp}$ of the innovation, holding everything else constant, and denote the respective optimizers by $\boldsymbol{u}_k^\mathsf{se}$ and ${\tilde{\boldsymbol{u}}}_k^{\mathsf{se}}$.
By the optimality of $\boldsymbol{u}_k^\mathsf{se}$ and ${\tilde{\boldsymbol{u}}}_k^{\mathsf{se}}$, $$\begin{aligned} \mathsf{R}(\boldsymbol{u}_k^{\mathsf{se}}) - L_{g,kk}\langle\boldsymbol{h}_k^{\mathsf{se},\perp},\boldsymbol{u}_k^{\mathsf{se}}\rangle &\leq \mathsf{R}({\tilde{\boldsymbol{u}}}_k^{\mathsf{se}}) - L_{g,kk}\langle\boldsymbol{h}_k^{\mathsf{se},\perp},{\tilde{\boldsymbol{u}}}_k^{\mathsf{se}}\rangle = \mathsf{R}({\tilde{\boldsymbol{u}}}_k^{\mathsf{se}}) - L_{g,kk}\langle{\tilde{\boldsymbol{h}}}_k^{\mathsf{se},\perp},{\tilde{\boldsymbol{u}}}_k^{\mathsf{se}}\rangle + L_{g,kk}\langle{\tilde{\boldsymbol{h}}}_k^{\mathsf{se},\perp} - \boldsymbol{h}_k^{\mathsf{se},\perp}, {\tilde{\boldsymbol{u}}}_k^{\mathsf{se}}\rangle \\ &\leq \mathsf{R}(\boldsymbol{u}_k^{\mathsf{se}}) - L_{g,kk}\langle{\tilde{\boldsymbol{h}}}_k^{\mathsf{se},\perp},\boldsymbol{u}_k^{\mathsf{se}}\rangle + L_{g,kk}\langle{\tilde{\boldsymbol{h}}}_k^{\mathsf{se},\perp} - \boldsymbol{h}_k^{\mathsf{se},\perp}, {\tilde{\boldsymbol{u}}}_k^{\mathsf{se}}\rangle. \end{aligned}$$ We see that $L_{g,kk}\langle{\tilde{\boldsymbol{h}}}_k^{\mathsf{se},\perp} - \boldsymbol{h}_k^{\mathsf{se},\perp}, {\tilde{\boldsymbol{u}}}_k^{\mathsf{se}} - \boldsymbol{u}_k^{\mathsf{se}}\rangle \geq 0$. Because ${\tilde{\boldsymbol{h}}}_k^{\mathsf{se},\perp}$ is independent of $\boldsymbol{u}_k^{\mathsf{se}}$, $\boldsymbol{h}_k^{\mathsf{se},\perp}$ is independent of ${\tilde{\boldsymbol{u}}}_k^{\mathsf{se}}$, and $\langle {\tilde{\boldsymbol{h}}}_k^{\mathsf{se},\perp} , {\tilde{\boldsymbol{u}}}_k^{\mathsf{se}} \rangle_{L_2} = \langle \boldsymbol{h}_k^{\mathsf{se},\perp} , \boldsymbol{u}_k^{\mathsf{se}} \rangle_{L_2}$, taking expectations gives $L_{g,kk}\langle \boldsymbol{h}_k^{\mathsf{se},\perp} , \boldsymbol{u}_k^{\mathsf{se}} \rangle_{L_2} \geq 0$. 
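The monotonicity step above is an instance of a general fact: the minimizer of a strongly convex objective moves monotonically with a linear tilt of the objective. A toy numerical check, with a hypothetical quadratic standing in for $\mathsf{R}$ so that the minimizer is available in closed form:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)  # SPD, so R(u) = 0.5 u^T A u is strongly convex
c = 0.7                      # plays the role of the coefficient L_{g,kk}

def argmin_tilted(h):
    # minimizer of u -> R(u) - c <h, u>, i.e. the solution of A u = c h
    return np.linalg.solve(A, c * h)

h = rng.standard_normal(n)
h_tilde = rng.standard_normal(n)
u = argmin_tilted(h)
u_tilde = argmin_tilted(h_tilde)

# c <h_tilde - h, u_tilde - u> = c^2 (h_tilde - h)^T A^{-1} (h_tilde - h) >= 0
gap = c * np.dot(h_tilde - h, u_tilde - u)
print(gap >= 0)  # -> True
```

Averaging this pointwise inequality over the probability space is what turns it into the $L_2$ statement $L_{g,kk}\langle \boldsymbol{h}_k^{\mathsf{se},\perp} , \boldsymbol{u}_k^{\mathsf{se}} \rangle_{L_2} \geq 0$.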
If $k$ is innovative with respect to $\boldsymbol{K}_{g,k}$, then $L_{g,kk} > 0$ (see ), whence $\langle \boldsymbol{h}_k^{\mathsf{se},\perp} , \boldsymbol{u}_k^{\mathsf{se}} \rangle_{L_2}\geq 0$. Otherwise, if $k$ is predictable with respect to $\boldsymbol{K}_{g,k}$, then $\boldsymbol{h}_k^{\mathsf{se},\perp} = {\boldsymbol 0}$, whence $\langle \boldsymbol{h}_k^{\mathsf{se},\perp} , \boldsymbol{u}_k^{\mathsf{se}} \rangle_{L_2}\geq 0$ in this case as well. Multiplying the first equation in the second line of equation [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"} by $\boldsymbol{L}_{g,k}^\ddagger$, and recalling that $\boldsymbol{H}_k^{\mathsf{se},\perp} = \boldsymbol{H}_k^{\mathsf{se}} (\boldsymbol{L}_g^\ddagger)^\top$, we get that $\boldsymbol{L}_{g,k}^\top \boldsymbol{Z}_{v,k}^\top = \langle\!\langle\boldsymbol{H}_k^{\mathsf{se},\perp} , \boldsymbol{U}_k^{\mathsf{se}} \rangle\!\rangle_{L_2}$. Because $\boldsymbol{L}_{g,k}$ is lower-triangular, the $(k,k)$ entry of this equation reads $L_{g,kk}\zeta_{kk}^v = \langle \boldsymbol{h}_k^{\mathsf{se},\perp} , \boldsymbol{u}_k^{\mathsf{se}} \rangle_{L_2} \geq 0$. If $k$ is innovative with respect to $\boldsymbol{K}_{g,k}$, then $L_{g,kk} > 0$ (see ), whence $\zeta_{kk}^v \geq 0$. If $k$ is predictable with respect to $\boldsymbol{K}_{g,k}$, then $\zeta_{kk}^v = 0$ by innovation compatibility. Thus, in all cases, $\zeta_{kk}^v \geq 0$. The inequality $\zeta_{kk}^u \geq 0$ follows by exactly the same argument, applied to the second optimization in  and the second equation in the second line of .
Recall that the Hilbert space $L_{\mathsf{u}}^2$ already comes endowed with random variables $\boldsymbol{h}_1^{\mathsf{se}},\ldots,\boldsymbol{h}_{k-1}^{\mathsf{se}}$ and $\boldsymbol{u}_1^{\mathsf{se}},\ldots,\boldsymbol{u}_{k-1}^{\mathsf{se}}$ and $\boldsymbol{\varepsilon}_1^{\mathsf{se}}$, $\boldsymbol{\varepsilon}_2^{\mathsf{se}}$ with joint distribution given by the state evolution up to iteration $k-1$. We then define $\boldsymbol{h}_k^{\mathsf{se}} = \sum_{\ell = 1}^{k-1} L_{h,k\ell} \boldsymbol{h}_\ell^{\mathsf{se},\perp} + L_{h,kk} \boldsymbol{\xi}_h$, so that $\boldsymbol{H}_k^{\mathsf{se}} \sim \mathsf{N}({\boldsymbol 0},\boldsymbol{K}_{h,k} \otimes {\mathbf I}_n)$ is then embedded in the Hilbert space $L_{\mathsf{u}}^2$. We then define $\boldsymbol{u}_k^{\mathsf{se}}$ via equation [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} with this choice of $\boldsymbol{H}_k^{\mathsf{se}}$, so that $\boldsymbol{U}_k^{\mathsf{se}}$ is also embedded in the Hilbert space $L_{\mathsf{u}}^2$ with distribution given by the state evolution. We similarly define $\boldsymbol{g}_k^{\mathsf{se}} = \sum_{\ell = 1}^{k-1} L_{g,k\ell} \boldsymbol{g}_\ell^{\mathsf{se},\perp} + L_{g,kk} \boldsymbol{\xi}_g$ and $\boldsymbol{v}_k^{\mathsf{se}}$ via equation [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"}, so that $\boldsymbol{G}_k^{\mathsf{se}} \sim \mathsf{N}({\boldsymbol 0},\boldsymbol{K}_{g,k} \otimes {\mathbf I}_p)$ and $\boldsymbol{V}_k^{\mathsf{se}}$ are embedded in the Hilbert space $L_{\mathsf{v}}^2$ with distribution given by the state evolution. We claim that $(\boldsymbol{u}_k^{\mathsf{se}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{se}})$ is a saddle point of equation [\[eq:min-max-L2\]](#eq:min-max-L2){reference-type="eqref" reference="eq:min-max-L2"}, which we now show. First, the point $(\boldsymbol{u}_k^{\mathsf{se}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{se}})$ is feasible.
Note that $\boldsymbol{h}_k^{\mathsf{se},\perp} = L_{g,kk} \boldsymbol{\xi}_h$. We showed above that $\langle \boldsymbol{h}_k^{\mathsf{se},\perp} , \boldsymbol{u}_k^{\mathsf{se}} \rangle_{L_2} \geq 0$. In the case that $L_{g,kk} > 0$, this implies that $\langle \boldsymbol{\xi}_h , \boldsymbol{u}_k^{\mathsf{se}} \rangle_{L_2} \geq 0$. In the case that $L_{g,kk} = 0$, we see from equation [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} that $\boldsymbol{\xi}_h$ is independent of $\boldsymbol{u}_k^{\mathsf{se}}$, whence $\langle \boldsymbol{\xi}_h , \boldsymbol{u}_k^{\mathsf{se}} \rangle_{L_2} = 0$. By an analogous argument, $\langle \boldsymbol{\xi}_g , \boldsymbol{v}_k^{\mathsf{se}} \rangle_{L_2} \geq 0$. Thus, $(\boldsymbol{u}_k^{\mathsf{se}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{se}})$ is feasible, as claimed. In particular, this implies $(v_0,\boldsymbol{v}) \mapsto \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u}_k^{\mathsf{se}};v_0,\boldsymbol{v})$ is convex, and $\boldsymbol{u}\mapsto \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{se}})$ is concave (see discussion following equation [\[eq:min-max-L2\]](#eq:min-max-L2){reference-type="eqref" reference="eq:min-max-L2"}). 
Next, by equation [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"}, there exists a random vector $\boldsymbol{\delta}$ such that for almost every $\omega$, $$\boldsymbol{h}_k^{\mathsf{se}}(\omega) - \sum_{\ell=1}^k \zeta_{k\ell}^u \boldsymbol{u}_\ell^{\mathsf{se}}(\omega) \in \partial_{\boldsymbol{u}} \phi_{k,u}(\boldsymbol{u}_k^{\mathsf{se}}(\omega);\boldsymbol{H}_{k-1}^{\mathsf{se}}(\omega);\nu_{k,0},\nu_{k,\mathsf{x}}) = \partial_{\boldsymbol{u}} \phi_{k,u}(\boldsymbol{u}_k^{\mathsf{se}}(\omega);\boldsymbol{H}_{k-1}^{\mathsf{se}}(\omega);\nu_{k,0},\langle \boldsymbol{\mu}_{\mathsf{x}} , \boldsymbol{v}_k^{\mathsf{se}} \rangle_{L_2}),$$ where the equality holds by the last line of the fixed point equations [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"}. Thus, by equation [\[eq:L2-pen-deriv-u\]](#eq:L2-pen-deriv-u){reference-type="eqref" reference="eq:L2-pen-deriv-u"}, $$\boldsymbol{h}_k^{\mathsf{se}} - \sum_{\ell = 1}^k \zeta_{k\ell}^u \boldsymbol{u}_\ell^{\mathsf{se}} \in \partial_{\boldsymbol{u}}\big(- \phi_k^{L_2}(\boldsymbol{u}_k^{\mathsf{se}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{se}})\big).$$ By the first equation in the first line and the second equation in the second line of equation [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"}, $$\label{eq:gor-deriv-to-se-deriv} \begin{aligned} \sum_{\ell=1}^{k-1} \boldsymbol{u}_\ell^{\mathsf{se},\perp}\langle \boldsymbol{g}_\ell^{\mathsf{se},\perp} , \boldsymbol{v}_k^{\mathsf{se}} \rangle_{L_2} + \frac{\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{u}_k^{\mathsf{se}}}{\|\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{u}_k^{\mathsf{se}}\|_{L_2}} \langle \boldsymbol{\xi}_g , \boldsymbol{v}_k^{\mathsf{se}}\rangle_{L_2} &= \boldsymbol{U}_k^{\mathsf{se}} (\boldsymbol{L}_{u,k}^\ddagger)^\top\mathbb{E}[\boldsymbol{L}_{u,k}^\ddagger \boldsymbol{G}_k^{\mathsf{se},\top}
\boldsymbol{v}_k^{\mathsf{se}}] \\ &= \boldsymbol{U}_k^{\mathsf{se}} \boldsymbol{K}_{g,k}^\ddagger [\boldsymbol{K}_{g,k} \boldsymbol{Z}_{u,k}^\top]_{\,\cdot\,,k} = \sum_{\ell=1}^k \zeta_{k\ell}^u \boldsymbol{u}_\ell^{\mathsf{se}}, \end{aligned}$$ where the last equation uses $\boldsymbol{K}_{g,k}^\ddagger \boldsymbol{K}_{g,k} = {\mathbf I}_{\boldsymbol{K}_{g,k}}^\top$ and the innovation compatibility of $\boldsymbol{Z}_{u,k}$ and $\boldsymbol{K}_{g,k}$ (see ). Moreover, by the second equation in the first line of equation [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"}, $\boldsymbol{h}^{L_2}(\boldsymbol{v}_k^{\mathsf{se}}) = \boldsymbol{h}_k^{\mathsf{se}}$. Thus, by equation [\[eq:L2-deriv-gordon-u\]](#eq:L2-deriv-gordon-u){reference-type="eqref" reference="eq:L2-deriv-gordon-u"}, $$\sum_{\ell=1}^k \zeta_{k\ell}^u \boldsymbol{u}_\ell^{\mathsf{se}} - \boldsymbol{h}_k^{\mathsf{se}} \in \partial_{\boldsymbol{u}} \Big( \langle \boldsymbol{g}^{L_2}(\boldsymbol{u}_k^{\mathsf{se}}) , \boldsymbol{v}_k^{\mathsf{se}} \rangle_{L_2} - \langle \boldsymbol{h}^{L_2}(\boldsymbol{v}_k^{\mathsf{se}}) , \boldsymbol{u}_k^{\mathsf{se}} \rangle_{L_2} \Big).$$ Combining this display with the third-to-last display shows that ${\boldsymbol 0}\in \partial_{\boldsymbol{u}}\big(- \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u}_k^{\mathsf{se}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{se}}) \big)$. An analogous argument shows that ${\boldsymbol 0}\in \partial_{\boldsymbol{v}} \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u}_k^{\mathsf{se}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{se}})$.
Further, $\partial_{v_0} \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u}_k^{\mathsf{se}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{se}}) = \langle \boldsymbol{u}_k^{\mathsf{se}}, \boldsymbol{1}\rangle_{L_2}$ (see equation [\[eq:phi-L2\]](#eq:phi-L2){reference-type="eqref" reference="eq:phi-L2"}), which, by the last line of the fixed point equations [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"}, is equal to 0. Thus $(\boldsymbol{u}_k^{\mathsf{se}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{se}})$ is a saddle point of equation [\[eq:min-max-L2\]](#eq:min-max-L2){reference-type="eqref" reference="eq:min-max-L2"}, as claimed. ### Step 2a: Existence/uniqueness of saddle points {#sec:L2-saddle-exist-unique} Now we show that equation [\[eq:min-max-L2\]](#eq:min-max-L2){reference-type="eqref" reference="eq:min-max-L2"} has at least one saddle point $(\widehat{\boldsymbol{u}};\widehat{v}_0,\widehat{\boldsymbol{v}})$, and the values of $\widehat{\boldsymbol{u}}$ and $\widehat{\boldsymbol{v}}$ are unique. We isolate the dependence of $\mathsf{AuxObj}_k^{L_2}$ on $v_0$ by decomposing $$\mathsf{AuxObj}_k^{L_2}(\boldsymbol{u};v_0,\boldsymbol{v}) = \langle\boldsymbol{u},\boldsymbol{1}\rangle_{L_2}v_0 + \overline{\mathsf{AuxObj}}_k^{L_2}(\boldsymbol{u};\boldsymbol{v}),$$ where this equation defines $\overline{\mathsf{AuxObj}}_k^{L_2}$. We complete Step 2a in several steps. 1. *When restricted to the convex domains in [\[eq:min-max-L2\]](#eq:min-max-L2){reference-type="eqref" reference="eq:min-max-L2"}, the function $\overline{\mathsf{AuxObj}}_k^{L_2}$ (hence $\mathsf{AuxObj}_k^{L_2}$) is strongly convex and lower semi-continuous in $\boldsymbol{v}$ and strongly concave and upper semi-continuous in $\boldsymbol{u}$.* Convexity-concavity and lower/upper semi-continuity are clear (see discussion following equation [\[eq:min-max-L2\]](#eq:min-max-L2){reference-type="eqref" reference="eq:min-max-L2"}).
Strong convexity is due to the terms $\mathbb{E}[\Omega_{\mathsf{a}}(\boldsymbol{v})]$ and $\mathbb{E}[\Omega_{\mathsf{y}}(\boldsymbol{v})]$ for $k = 5,6$, respectively. Strong concavity is due to the terms $-\frac{1}{n}\sum_{i=1}^n \mathbb{E}\big[\ell_{\mathsf{a}}^*(nu_i;y_{1,i}^{\mathsf{se}})\big]$ and $-\frac{n}{2}\sum_{i=1}^n\mathbb{E}\Big[(w_i^{\mathsf{se}})^{-1}\frac{u_i^2}{y_{1,i}^{\mathsf{se}}}\Big]$ for $k = 5,6$, respectively. In making these assertions, we use the strong convexity of $\Omega_k$, the strong smoothness of $\ell_\mathsf{a}$, and the upper bound on the weight function $w$ (cf. Assumption A1). 2. *The function $(v_0,\boldsymbol{v}) \mapsto \max_{\langle\boldsymbol{u},\boldsymbol{\xi}_h\rangle_{L_2} \geq 0} \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u};v_0,\boldsymbol{v})$ is lower semi-continuous and convex, and is strongly convex in $\boldsymbol{v}$.* This property follows from its definition as a supremum of lower semi-continuous and convex functions; strong convexity in $\boldsymbol{v}$ holds because each function in the supremum is strongly convex in $\boldsymbol{v}$ with a common modulus. 3. *The function $(v_0,\boldsymbol{v}) \mapsto \max_{\langle\boldsymbol{u},\boldsymbol{\xi}_h\rangle_{L_2} \geq 0} \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u};v_0,\boldsymbol{v})$ is coercive; that is, it diverges to infinity if either $|v_0| \rightarrow \infty$ or $\| \boldsymbol{v}\|_{L_2} \rightarrow \infty$.* To establish this property, we define two elements $\boldsymbol{u}^+,\boldsymbol{u}^- \in L_{\mathsf{u}}^2$ satisfying $\langle\boldsymbol{u}^+,\boldsymbol{1}\rangle_{L_2} \geq 0$ and $\langle\boldsymbol{u}^-,\boldsymbol{1}\rangle_{L_2} \leq 0$ and $$\lim_{\substack{v_0 \rightarrow \infty\\\|\boldsymbol{v}\|_{L_2}\rightarrow \infty}}\mathsf{AuxObj}_k^{L_2}(\boldsymbol{u}^+;v_0,\boldsymbol{v}) = \infty \quad \text{and} \quad \lim_{\substack{v_0 \rightarrow -\infty\\\|\boldsymbol{v}\|_{L_2}\rightarrow \infty}}\mathsf{AuxObj}_k^{L_2}(\boldsymbol{u}^-;v_0,\boldsymbol{v}) = \infty.$$ The coercivity of $(v_0,\boldsymbol{v}) \mapsto \max_{\langle\boldsymbol{u},\boldsymbol{\xi}_h\rangle_{L_2} \geq 0} \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u};v_0,\boldsymbol{v})$ then follows because
$$\max_{\langle\boldsymbol{u},\boldsymbol{\xi}_h\rangle_{L_2} \geq 0} \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u};v_0,\boldsymbol{v}) \geq \max\Big\{ \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u}^+;v_0,\boldsymbol{v}), \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u}^-;v_0,\boldsymbol{v}) \Big\}.$$ Define $u_i^+(\omega) =\boldsymbol{1}\{y_{1,i}^{\mathsf{se}}(\omega) \neq 0\}$, and note that $\boldsymbol{u}^+$ is in $L_{\mathsf{u}}^2$ because it is bounded. Because $\boldsymbol{u}^+$ is independent of $\boldsymbol{\xi}_h$, $\langle \boldsymbol{u}^+ , \boldsymbol{\xi}_h \rangle_{L_2} = 0$, whence it is in the domain of optimization. Further, we have $\mathsf{AuxObj}_k^{L_2}(\boldsymbol{u}^+;v_0,\boldsymbol{v}) > -\infty$. Indeed, for $k = 5$, this is clear because $\mathsf{AuxObj}_k^{L_2}$ is finite everywhere. For $k = 6$, the only term which might be infinite is $\frac{n}{2}\sum_{i=1}^n\mathbb{E}\Big[(w_i^{\mathsf{se}})^{-1}\frac{(u_i^{+})^2}{y_{1,i}^{\mathsf{se}}}\Big]$, but we have constructed $\boldsymbol{u}^+$ so that this term is bounded by $\frac{n}{2}\sum_{i=1}^n\mathbb{E}\big[(w_i^{\mathsf{se}})^{-1}\big] < \infty$ (by the lower bound on the weight function, Assumption A1). Finally, $\langle\boldsymbol{u}^+,\boldsymbol{1}\rangle_{L_2} v_0 = v_0 \sum_{i=1}^n \mathbb{P}(y_{1,i}^{\mathsf{se}} \neq 0)$. Because $\mathbb{P}(y_{1,i}^{\mathsf{se}} \neq 0) > 0$, we get that $$\mathsf{AuxObj}_k^{L_2}(\boldsymbol{u}^+;v_0,\boldsymbol{v}) \geq v_0 \sum_{i=1}^n \mathbb{P}(y_{1,i}^{\mathsf{se}} \neq 0) + \overline{\mathsf{AuxObj}}_k^{L_2}(\boldsymbol{u}^+;\boldsymbol{v}).$$ Because $\overline{\mathsf{AuxObj}}_k^{L_2}$ is strongly convex in $\boldsymbol{v}$, we see that this diverges if $v_0 \rightarrow \infty$ or $\| \boldsymbol{v}\|_{L_2} \rightarrow \infty$.
Setting $\boldsymbol{u}^- = - \boldsymbol{u}^+$ and following the same argument, we get that $\mathsf{AuxObj}_k^{L_2}(\boldsymbol{u}^-;v_0,\boldsymbol{v})$ diverges if $v_0 \rightarrow -\infty$ or $\| \boldsymbol{v}\|_{L_2} \rightarrow \infty$. Thus, coercivity is established. 4. *There exists a saddle point of $\mathsf{AuxObj}_k^{L_2}$ restricted to the domains [\[eq:min-max-L2\]](#eq:min-max-L2){reference-type="eqref" reference="eq:min-max-L2"}.* Using the lower semi-continuity, convexity, and coercivity of $(v_0,\boldsymbol{v}) \mapsto \max_{\langle\boldsymbol{u},\boldsymbol{\xi}_h\rangle_{L_2} \geq 0} \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u};v_0,\boldsymbol{v})$, Theorem 11.9 in the book [@bauschke2011convex] implies that this function has a minimizer on the closed domain $\langle \boldsymbol{v}, \boldsymbol{\xi}_g \rangle_{L_2} \geq 0$. Let $(\widehat{v}_0,\widehat{\boldsymbol{v}})$ be one such minimizer. Because $\boldsymbol{u} \mapsto \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u}; \widehat{v}_0,\widehat{\boldsymbol{v}})$ is strongly concave in $\boldsymbol{u}$ (see above), this function is maximized by a unique $\boldsymbol{u}$, which we call $\widehat{\boldsymbol{u}}$. Then, $(\widehat{\boldsymbol{u}}; \widehat{v}_0,\widehat{\boldsymbol{v}})$ is a saddle point. 5. *The values of $\widehat{\boldsymbol{u}}$ and $\widehat{\boldsymbol{v}}$ at the saddle point of $\mathsf{AuxObj}_k^{L_2}$ are unique.* Consider any saddle point $(\widehat{\boldsymbol{u}}; \widehat{v}_0, \widehat{\boldsymbol{v}})$. Because $\mathsf{AuxObj}_k^{L_2}$ has a saddle point, we may exchange the order of minimization and maximization, so that $\widehat{\boldsymbol{u}}$ is a maximizer of $\boldsymbol{u} \mapsto \allowbreak \min_{v_0 \in {\mathbb R}, \langle\boldsymbol{v},\boldsymbol{\xi}_g\rangle_{L_2} \geq 0} \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u};v_0,\boldsymbol{v})$.
Because $\mathsf{AuxObj}_k^{L_2}(\boldsymbol{u};v_0,\boldsymbol{v})$ is strongly-concave in $\boldsymbol{u}$, so too is this function, whence $\widehat{\boldsymbol{u}}$ is unique. Next, by the KKT conditions, we must have at any saddle point $(\widehat{\boldsymbol{u}};\widehat{v}_0,\widehat{\boldsymbol{v}})$ $$\begin{gathered} \widehat{v}_0 \boldsymbol{1}\in -\partial_{\boldsymbol{u}} \overline{\mathsf{AuxObj}}_k^{L_2}(\widehat{\boldsymbol{u}};\widehat{\boldsymbol{v}}) + \mathcal{N}_{\{\langle\boldsymbol{u},\boldsymbol{\xi}_h\rangle_{L_2}\geq 0\}}(\widehat{\boldsymbol{u}}), \\ {\boldsymbol 0}\in \partial_{\boldsymbol{v}} \overline{\mathsf{AuxObj}}_k^{L_2}(\widehat{\boldsymbol{u}};\widehat{\boldsymbol{v}}) + \mathcal{N}_{\{\langle\boldsymbol{v},\boldsymbol{\xi}_g\rangle_{L_2}\geq 0\}}(\widehat{\boldsymbol{v}}), \end{gathered}$$ where $\mathcal{N}_{\{\langle\boldsymbol{u},\boldsymbol{\xi}_h\rangle_{L_2}\geq 0\}}(\widehat{\boldsymbol{u}})$ denotes the normal cone to the set $\{\langle\boldsymbol{u},\boldsymbol{\xi}_h\rangle_{L_2}\geq 0\}$ at $\widehat{\boldsymbol{u}}$, and analogously for $\mathcal{N}_{\{\langle\boldsymbol{v},\boldsymbol{\xi}_g\rangle_{L_2}\geq 0\}}(\widehat{\boldsymbol{v}})$. Because $\overline{\mathsf{AuxObj}}_k^{L_2}(\boldsymbol{u};\boldsymbol{v})$ is strongly-concave in $\boldsymbol{u}$ and strongly convex in $\boldsymbol{v}$, the correspondence on the right-hand side of the preceding display is strongly monotone (see Definition 22.1 in the book [@bauschke2011convex]). 
Thus, for some $c > 0$ and any two saddle points $(\widehat{\boldsymbol{u}}; \widehat{v}_0, \widehat{\boldsymbol{v}})$, $(\widehat{\boldsymbol{u}}; \widehat{v}_0', \widehat{\boldsymbol{v}}')$ (which share the same $\widehat{\boldsymbol{u}}$ because the maximizer is unique), we must have $$\langle\widehat{v}_0' \boldsymbol{1}- \widehat{v}_0 \boldsymbol{1},\widehat{\boldsymbol{u}}-\widehat{\boldsymbol{u}}\rangle_{L_2} + \langle {\boldsymbol 0} - {\boldsymbol 0}, \widehat{\boldsymbol{v}}' - \widehat{\boldsymbol{v}}\rangle_{L_2} \geq c(\|\widehat{\boldsymbol{u}}-\widehat{\boldsymbol{u}}\|_{L_2}^2 + \| \widehat{\boldsymbol{v}}' - \widehat{\boldsymbol{v}}\|_{L_2}^2),$$ whence $\widehat{\boldsymbol{v}}' = \widehat{\boldsymbol{v}}$. This completes Step 2a. ### Step 2b: From saddle points to fixed points Consider a saddle point $(\widehat{\boldsymbol{u}}; \widehat{v}_0,\widehat{\boldsymbol{v}})$ of problem [\[eq:min-max-L2\]](#eq:min-max-L2){reference-type="eqref" reference="eq:min-max-L2"}. Now define $\boldsymbol{u}_k^{\mathsf{se}} = \widehat{\boldsymbol{u}}$, $\boldsymbol{v}_k^{\mathsf{se}} = \widehat{\boldsymbol{v}}$, $\nu_{k,0} = \widehat{v}_0$, $\boldsymbol{g}_k^{\mathsf{se}} = \boldsymbol{g}^{L_2}(\widehat{\boldsymbol{u}})$ and $\boldsymbol{h}_k^{\mathsf{se}} = \boldsymbol{h}^{L_2}(\widehat{\boldsymbol{v}})$. Define the parameters $\boldsymbol{K}_{g,k}$, $\boldsymbol{K}_{h,k}$, $\{\nu_{\ell,\mathsf{x}}\}_{\ell \leq k}$ via equation [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"} with these choices of $\boldsymbol{u}_k^{\mathsf{se}}$, $\boldsymbol{v}_k^{\mathsf{se}}$, $\boldsymbol{g}_k^{\mathsf{se}}$, and $\boldsymbol{h}_k^{\mathsf{se}}$. Then define $\boldsymbol{Z}_{v,k}^\top = \boldsymbol{K}_{h,k}^\ddagger \langle\!\langle\boldsymbol{H}_k^{\mathsf{se}} , \boldsymbol{U}_k^{\mathsf{se}}\rangle\!\rangle_{L_2}$ and $\boldsymbol{Z}_{u,k}^\top = \boldsymbol{K}_{g,k}^\ddagger \langle\!\langle \boldsymbol{G}_k^{\mathsf{se}} , \boldsymbol{V}_k^{\mathsf{se}} \rangle\!\rangle_{L_2}$. 
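These definitions rest on the generic linear-algebra fact that $\boldsymbol{K}\boldsymbol{K}^\ddagger M = M$ whenever $\mathsf{range}(M) \subset \mathsf{range}(\boldsymbol{K})$, since $\boldsymbol{K}\boldsymbol{K}^\ddagger$ acts as the orthogonal projector onto $\mathsf{range}(\boldsymbol{K})$. A minimal numerical sketch of this fact, using the Moore-Penrose pseudoinverse as a stand-in for $\boldsymbol{K}^\ddagger$ (the matrices below are arbitrary, not the $\boldsymbol{K}_{h,k}$ of the text):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 2))
K = B @ B.T                               # rank-2 PSD matrix (rank-deficient)
M = B @ rng.standard_normal((2, 3))       # range(M) is contained in range(K)

# K K^+ is the orthogonal projector onto range(K); it therefore fixes M.
P = K @ np.linalg.pinv(K)
assert np.allclose(P @ P, P, atol=1e-8)   # idempotent projector
assert np.allclose(P @ M, M, atol=1e-8)   # fixes anything in range(K)
```

The same cancellation is what makes the next display's manipulation of $\boldsymbol{K}_{h,k}\boldsymbol{Z}_{v,k}^\top$ well behaved.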
In order to show that this gives a solution to the fixed point equations, we must show that *(1)* under these choices of $\boldsymbol{g}_k^{\mathsf{se}}$ and $\boldsymbol{h}_k^{\mathsf{se}}$, we have $\boldsymbol{G}_k^{\mathsf{se}} \sim \mathsf{N}({\boldsymbol 0},\boldsymbol{K}_{g,k} \otimes {\mathbf I}_p)$ and $\boldsymbol{H}_k^{\mathsf{se}} \sim \mathsf{N}({\boldsymbol 0},\boldsymbol{K}_{h,k} \otimes {\mathbf I}_n)$, *(2)* $\langle \boldsymbol{1}, \boldsymbol{u}_k^{\mathsf{se}} \rangle_{L_2} = 0$, *(3)* the second line of equation [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"} holds, and $\boldsymbol{Z}_{v,k}$, $\boldsymbol{Z}_{u,k}$ satisfy the appropriate innovation compatibility constraints, and *(4)* for almost all $\omega$, equation [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} is satisfied with these choices of $\boldsymbol{u}_k^{\mathsf{se}}$, $\boldsymbol{v}_k^{\mathsf{se}}$, $\boldsymbol{g}_k^{\mathsf{se}}$, and $\boldsymbol{h}_k^{\mathsf{se}}$. The distributional properties $\boldsymbol{G}_k^{\mathsf{se}} \sim \mathsf{N}({\boldsymbol 0},\boldsymbol{K}_{g,k} \otimes {\mathbf I}_p)$ and $\boldsymbol{H}_k^{\mathsf{se}} \sim \mathsf{N}({\boldsymbol 0},\boldsymbol{K}_{h,k} \otimes {\mathbf I}_n)$ follow from the definition of $\boldsymbol{g}^{L_2}$ and $\boldsymbol{h}^{L_2}$. That $\langle \boldsymbol{1}, \boldsymbol{u}_k^{\mathsf{se}} \rangle_{L_2} = 0$ follows from the KKT conditions for equation [\[eq:min-max-L2\]](#eq:min-max-L2){reference-type="eqref" reference="eq:min-max-L2"} because $\partial_{v_0} \mathsf{AuxObj}_k^{L_2}(\widehat{\boldsymbol{u}};\widehat{v}_0,\widehat{\boldsymbol{v}}) = \langle \widehat{\boldsymbol{u}}, \boldsymbol{1} \rangle_{L_2}$ must vanish at the minimum over the unconstrained variable $v_0$. 
Note that by our definition of $\boldsymbol{Z}_{v,k}^\top$, we have $\boldsymbol{K}_{h,k}\boldsymbol{Z}_{v,k}^\top = {\mathbf I}_{\boldsymbol{K}_{h,k}} \langle\!\langle\boldsymbol{H}_k^{\mathsf{se}} , \boldsymbol{U}_k^{\mathsf{se}} \rangle\!\rangle_{L_2}$, where we have used equation [\[eq:K-pseudo-inverse\]](#eq:K-pseudo-inverse){reference-type="eqref" reference="eq:K-pseudo-inverse"}. Because $\boldsymbol{H}_k^{\mathsf{se}} \sim \mathsf{N}({\boldsymbol 0},\boldsymbol{K}_{h,k} \otimes {\mathbf I}_n)$, we have that $\mathsf{range}(\langle\!\langle \boldsymbol{H}_k^{\mathsf{se}} , \boldsymbol{U}_k^{\mathsf{se}} \rangle\!\rangle_{L_2}) \subset \mathsf{range}(\boldsymbol{K}_{h,k})$, so that ${\mathbf I}_{\boldsymbol{K}_{h,k}} \langle\!\langle\boldsymbol{H}_k^{\mathsf{se}} , \boldsymbol{U}_k^{\mathsf{se}} \rangle\!\rangle_{L_2} = \langle\!\langle\boldsymbol{H}_k^{\mathsf{se}} , \boldsymbol{U}_k^{\mathsf{se}} \rangle\!\rangle_{L_2}$. Thus, the first equation in the second line of equation [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"} is satisfied, and the second equation in the second line follows similarly. Moreover, because $\boldsymbol{L}_{h,k}^\ddagger$ is innovation compatible with respect to $\boldsymbol{K}_{h,k}$, the $\ell^\text{th}$ row of $\boldsymbol{K}_{h,k}^\ddagger = (\boldsymbol{L}_{h,k}^\ddagger)^\top \boldsymbol{L}_{h,k}^\ddagger$ is 0 for predictable $\ell$. Thus, $\boldsymbol{Z}_{v,k}$ is innovation compatible with $\boldsymbol{K}_{h,k}$. Similarly, $\boldsymbol{Z}_{u,k}$ is innovation compatible with $\boldsymbol{K}_{g,k}$. The remainder of our argument is devoted to proving item *(4)*; the proof consists of two steps. 1. 
*If either $\langle \boldsymbol{\xi}_g , \widehat{\boldsymbol{v}}\rangle_{L_2} = 0$ or $\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \widehat{\boldsymbol{u}}\neq {\boldsymbol 0}$, then the first line of equation [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} is satisfied. Likewise, if either $\langle \boldsymbol{\xi}_h , \widehat{\boldsymbol{u}}\rangle_{L_2} = 0$ or $\mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \widehat{\boldsymbol{v}}\neq {\boldsymbol 0}$, then the second line of equation [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} is satisfied.* Suppose that either $\langle \boldsymbol{\xi}_g , \widehat{\boldsymbol{v}}\rangle_{L_2} = 0$ or $\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \widehat{\boldsymbol{u}}\neq {\boldsymbol 0}$. The differentiability result established earlier states that $\boldsymbol{u}\mapsto \langle \boldsymbol{g}^{L_2}(\boldsymbol{u}),\widehat{\boldsymbol{v}}\rangle_{L_2}$ and $\boldsymbol{u}\mapsto \langle \boldsymbol{h}^{L_2}(\widehat{\boldsymbol{v}}) , \boldsymbol{u}\rangle_{L_2}$ are differentiable at $\boldsymbol{u}= \widehat{\boldsymbol{u}}$ and gives expressions for the derivatives. These expressions can be simplified using the same calculation we performed in equation [\[eq:gor-deriv-to-se-deriv\]](#eq:gor-deriv-to-se-deriv){reference-type="eqref" reference="eq:gor-deriv-to-se-deriv"}. 
Combined with these derivative expressions and the KKT conditions for equation [\[eq:min-max-L2\]](#eq:min-max-L2){reference-type="eqref" reference="eq:min-max-L2"}, we must have for some $\lambda_{\mathsf{u}} \geq 0$ and almost every $\omega$ $$\label{eq:u-min} \lambda_{\mathsf{u}} \boldsymbol{\xi}_h(\omega) + \boldsymbol{h}_k^{\mathsf{se}}(\omega) - \sum_{\ell=1}^k \zeta_{k\ell}^u \boldsymbol{u}_\ell^{\mathsf{se}}(\omega) \in \partial_{\boldsymbol{u}} \phi_{k,u}(\boldsymbol{u}_k^{\mathsf{se}}(\omega),\boldsymbol{H}_{k-1}^{\mathsf{se}}(\omega);\nu_{k,0},\nu_{k,\mathsf{x}}),$$ where $\lambda_{\mathsf{u}} \langle \boldsymbol{\xi}_h , \boldsymbol{u}_k^{\mathsf{se}} \rangle_{L_2} = 0$. Thus $$\begin{aligned} \boldsymbol{u}_k^{\mathsf{se}}(\omega) = \mathop{\mathrm{arg\,min}}_{\boldsymbol{u}} \Big\{ &\frac{\zeta_{kk}^u}2 \| \boldsymbol{u}\|^2 + \sum_{\ell=1}^{k-1} \zeta_{k\ell}^u \langle \boldsymbol{u}_\ell^{\mathsf{se}}(\omega) , \boldsymbol{u}\rangle \\ &\qquad- \langle \boldsymbol{h}_k^{\mathsf{se}}(\omega) , \boldsymbol{u}\rangle - \lambda_{\mathsf{u}} \langle\boldsymbol{\xi}_h(\omega),\boldsymbol{u}\rangle + \phi_{k,u}(\boldsymbol{u},\boldsymbol{H}_{k-1}^{\mathsf{se}}(\omega);\nu_{k,0},\nu_{k,\mathsf{x}}) \Big\}. \end{aligned}$$ We need only show that $\lambda_{\mathsf{u}} = 0$. Recall that $\boldsymbol{h}_k^{\mathsf{se}} = \sum_{\ell=1}^k L_{h,k\ell} \boldsymbol{h}_\ell^{\mathsf{se},\perp}$, where $\boldsymbol{h}_{\ell}^{\mathsf{se},\perp}$ is independent of $\boldsymbol{\xi}_h$ for $\ell \leq k-1$ and $L_{h,kk} \geq 0$. If $\langle \boldsymbol{\xi}_h,\boldsymbol{u}_k^{\mathsf{se}} \rangle_{L_2} > 0$, then $\lambda_{\mathsf{u}} = 0$ by complementary slackness. On the other hand, if $\langle \boldsymbol{\xi}_h,\boldsymbol{u}_k^{\mathsf{se}} \rangle_{L_2} = 0$, we must have that $L_{h,kk} + \lambda_{\mathsf{u}} = 0$. Indeed, assume otherwise, that $L_{h,kk} + \lambda_{\mathsf{u}} > 0$. 
The only dependence of the objective in the previous display on $\boldsymbol{\xi}_h(\omega)$ is given by $(L_{h,kk} + \lambda_{\mathsf{u}})\langle \boldsymbol{\xi}_h(\omega),\boldsymbol{u}\rangle$ (the term $L_{h,kk}\langle \boldsymbol{\xi}_h(\omega),\boldsymbol{u}\rangle$ comes from the expansion of $\langle \boldsymbol{h}_k^{\mathsf{se}}(\omega) , \boldsymbol{u}\rangle$). Thus, using that the minimizer of the objective is unique, we have for two $\omega,\omega'$ such that $\boldsymbol{\xi}_h(\omega) \neq \boldsymbol{\xi}_h(\omega')$ and all other random variables are constant across $\omega,\omega'$, that $\langle \boldsymbol{\xi}_h(\omega') - \boldsymbol{\xi}_h(\omega) , \boldsymbol{u}_k^{\mathsf{se}}(\omega') - \boldsymbol{u}_k^{\mathsf{se}}(\omega) \rangle\geq 0$ with equality if and only if $\boldsymbol{u}_k^{\mathsf{se}}(\omega') = \boldsymbol{u}_k^{\mathsf{se}}(\omega)$. Because $\boldsymbol{\xi}_h$ is independent of all other random variables and, conditionally on them, $\boldsymbol{u}_k^{\mathsf{se}}$ is not constant, we have that $\langle \boldsymbol{\xi}_h , \boldsymbol{u}_k^{\mathsf{se}} \rangle_{L_2} > 0$, a contradiction. Therefore, we conclude that in this case $L_{h,kk} + \lambda_{\mathsf{u}} = 0$. Because $L_{h,kk} \geq 0$, this implies that $\lambda_{\mathsf{u}} = 0$. Thus we have established the first line of the claim [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"}. If either $\langle \boldsymbol{\xi}_h , \widehat{\boldsymbol{u}}\rangle_{L_2} = 0$ or $\mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \widehat{\boldsymbol{v}}\neq {\boldsymbol 0}$, then the second line of equation [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} is satisfied by an equivalent argument. 2. *Either $\langle \boldsymbol{\xi}_g , \widehat{\boldsymbol{v}}\rangle_{L_2} = 0$ or $\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \widehat{\boldsymbol{u}}\neq {\boldsymbol 0}$. 
Likewise, either $\langle \boldsymbol{\xi}_h , \widehat{\boldsymbol{u}}\rangle_{L_2} = 0$ or $\mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \widehat{\boldsymbol{v}}\neq {\boldsymbol 0}$.* Assume to the contrary that both $\langle \boldsymbol{\xi}_g , \widehat{\boldsymbol{v}}\rangle_{L_2} > 0$ and $\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \widehat{\boldsymbol{u}}= {\boldsymbol 0}$. Then $\mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \widehat{\boldsymbol{v}}\neq {\boldsymbol 0}$ because $\boldsymbol{\xi}_g$ is independent of $\boldsymbol{V}_{k-1}^{\mathsf{se}}$ and hence orthogonal to it. Then, by item 1 above, the second line of equation [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} is satisfied. Note, however, that because $\| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \widehat{\boldsymbol{u}}\|_{L_2} = 0$, we have $\boldsymbol{g}_k^{\mathsf{se}} = \boldsymbol{g}^{L_2}(\widehat{\boldsymbol{u}})$ is independent of $\boldsymbol{\xi}_g$. Thus the objective in the second line of equation [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} is independent of $\boldsymbol{\xi}_g$, so that $\boldsymbol{v}_k^{\mathsf{se}} = \widehat{\boldsymbol{v}}$ is independent of $\boldsymbol{\xi}_g$. We conclude that $\langle \boldsymbol{\xi}_g , \widehat{\boldsymbol{v}}\rangle_{L_2} = 0$, contradicting that $\langle \boldsymbol{\xi}_g, \widehat{\boldsymbol{v}}\rangle_{L_2} > 0$. Thus, either $\langle \boldsymbol{\xi}_g , \widehat{\boldsymbol{v}}\rangle_{L_2} = 0$ or $\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \widehat{\boldsymbol{u}}\neq {\boldsymbol 0}$. Either $\langle \boldsymbol{\xi}_h , \widehat{\boldsymbol{u}}\rangle_{L_2} = 0$ or $\mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \widehat{\boldsymbol{v}}\neq {\boldsymbol 0}$ by an equivalent argument. Combining these two steps establishes the claim [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"}. 
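The existence and uniqueness mechanism used in Steps 2a and 2b is generic: a coercive objective that is strongly concave in $\boldsymbol{u}$ and strongly convex in $\boldsymbol{v}$ has a saddle point, and strong monotonicity of the saddle operator forces it to be unique. A finite-dimensional toy sketch (the quadratic coefficients are arbitrary stand-ins, unrelated to $\mathsf{AuxObj}_k^{L_2}$):

```python
import numpy as np

# Toy saddle objective f(u, v) = -a/2*u^2 + b*u*v + c/2*v^2 + p*u + q*v:
# strongly concave in u, strongly convex in v, with bilinear coupling.
a, b, c, p, q = 2.0, 1.0, 3.0, 0.5, -1.0

def grads(u, v):
    """Partial derivatives of f with respect to u and v."""
    return -a * u + b * v + p, b * u + c * v + q

def solve(u, v, lr=0.05, iters=5000):
    """Gradient ascent in u, gradient descent in v."""
    for _ in range(iters):
        gu, gv = grads(u, v)
        u, v = u + lr * gu, v - lr * gv
    return u, v

# Different initializations converge to the same saddle point, reflecting
# the strong monotonicity of the saddle operator.
pts = [solve(u0, v0) for u0, v0 in [(0.0, 0.0), (5.0, -5.0), (-3.0, 7.0)]]
assert all(np.allclose(pts[0], pt, atol=1e-6) for pt in pts[1:])
```

In the infinite-dimensional setting of the proof, the same conclusion requires the coercivity and lower semi-continuity established in Step 2a.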
### Step 3: Existence/uniqueness of fixed points By Steps 2a and 2b above, there exists a solution to the fixed point equations [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"}. We now establish uniqueness. Consider two solutions $\boldsymbol{K}_{g,k}$, $\boldsymbol{K}_{h,k}$, $\boldsymbol{Z}_{u,k}$, $\boldsymbol{Z}_{v,k}$, $\{\nu_{\ell,0}\}_{\ell \leq k}$, $\{\nu_{\ell,\mathsf{x}}\}_{\ell \leq k}$, $\{\nu_{\ell,\mathsf{u}}\}_{\ell \leq k}$ and $\boldsymbol{K}_{g,k}'$, $\boldsymbol{K}_{h,k}'$, $\boldsymbol{Z}_{u,k}'$, $\boldsymbol{Z}_{v,k}'$, $\{\nu_{\ell,0}'\}_{\ell \leq k}$, $\{\nu_{\ell,\mathsf{x}}'\}_{\ell \leq k}$, $\{\nu_{\ell,\mathsf{u}}'\}_{\ell \leq k}$. The construction in Step 1 allows us to embed the corresponding random variables $\boldsymbol{U}_k^{\mathsf{se}}$, $\boldsymbol{H}_k^{\mathsf{se}}$ and ${\boldsymbol{U}_k^{\mathsf{se}}}'$, ${\boldsymbol{H}_k^{\mathsf{se}}}'$ into $L_{\mathsf{u}}^2$ and $\boldsymbol{V}_k^{\mathsf{se}}$, $\boldsymbol{G}_k^{\mathsf{se}}$ and ${\boldsymbol{V}_k^{\mathsf{se}}}'$, ${\boldsymbol{G}_k^{\mathsf{se}}}'$ into $L_{\mathsf{v}}^2$ such that $(\boldsymbol{u}_k^{\mathsf{se}};\nu_{k,0},\boldsymbol{v}_k^{\mathsf{se}})$ and $({\boldsymbol{u}_k^{\mathsf{se}}}';\nu_{k,0}',{\boldsymbol{v}_k^{\mathsf{se}}}')$ are both saddle points of the problem [\[eq:min-max-L2\]](#eq:min-max-L2){reference-type="eqref" reference="eq:min-max-L2"}. By Step 2, we must have $\boldsymbol{u}_k^{\mathsf{se}} = {\boldsymbol{u}_k^{\mathsf{se}}}'$ and $\boldsymbol{v}_k^{\mathsf{se}} = {\boldsymbol{v}_k^{\mathsf{se}}}'$. We also have $\boldsymbol{U}_{k-1}^{\mathsf{se}} = \boldsymbol{U}_{k-1}^{\mathsf{se}'}$ and $\boldsymbol{V}_{k-1}^{\mathsf{se}} = \boldsymbol{V}_{k-1}^{\mathsf{se}'}$ by construction. Thus, by the first line of [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="ref" reference="eq:fixpt-inductive"}, $\boldsymbol{K}_{g,k} = \boldsymbol{K}_{g,k}'$ and $\boldsymbol{K}_{h,k} = \boldsymbol{K}_{h,k}'$. 
By the third line of [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="ref" reference="eq:fixpt-inductive"}, $\nu_{\ell,\mathsf{x}} = \nu_{\ell,\mathsf{x}}'$ and $\nu_{\ell,\mathsf{u}} = \nu_{\ell,\mathsf{u}}'$ for $\ell \leq k$. By [\[eq:SE-opt\]](#eq:SE-opt){reference-type="ref" reference="eq:SE-opt"}, the joint distribution of $\boldsymbol{U}_k^{\mathsf{se}}$ and $\boldsymbol{H}_k^{\mathsf{se}}$ is a function only of $\boldsymbol{K}_{h,k}$, and likewise, the joint distribution of $\boldsymbol{V}_k^{\mathsf{se}}$ and $\boldsymbol{G}_k^{\mathsf{se}}$ is a function only of $\boldsymbol{K}_{g,k}$. Thus, we have $\langle\!\langle\boldsymbol{H}_k^{\mathsf{se}} , \boldsymbol{U}_k^{\mathsf{se}} \rangle\!\rangle= \langle\!\langle{\boldsymbol{H}_k^{\mathsf{se}}}' , {\boldsymbol{U}_k^{\mathsf{se}}}' \rangle\!\rangle$ and $\langle\!\langle\boldsymbol{G}_k^{\mathsf{se}} , \boldsymbol{V}_k^{\mathsf{se}} \rangle\!\rangle= \langle\!\langle {\boldsymbol{G}_k^{\mathsf{se}}}' , {\boldsymbol{V}_k^{\mathsf{se}}}' \rangle\!\rangle$. Multiplying the equations in the second line of display [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="eqref" reference="eq:fixpt-inductive"} by $\boldsymbol{K}_{h,k}^\ddagger$ and $\boldsymbol{K}_{g,k}^\ddagger$, respectively, gives $$\begin{gathered} {\mathbf I}_{\boldsymbol{K}_{h,k}}^\top \boldsymbol{Z}_{v,k}^\top = \boldsymbol{K}_{h,k}^\ddagger \langle\!\langle \boldsymbol{H}_k^{\mathsf{se}} , \boldsymbol{U}_k^{\mathsf{se}} \rangle\!\rangle, \\ {\mathbf I}_{\boldsymbol{K}_{g,k}}^\top \boldsymbol{Z}_{u,k}^\top = \boldsymbol{K}_{g,k}^\ddagger \langle\!\langle\boldsymbol{G}_k^{\mathsf{se}} , \boldsymbol{V}_k^{\mathsf{se}} \rangle\!\rangle. 
\end{gathered}$$ Because $\boldsymbol{Z}_{v,k}$ is innovation compatible with $\boldsymbol{K}_{h,k}$, $\boldsymbol{Z}_{v,k}^\top$ is $0$ in rows with predictable index, so that the first equation in the preceding display uniquely determines $\boldsymbol{Z}_{v,k}^\top$ under the innovation compatibility constraint. Thus, $\boldsymbol{Z}_{v,k} = \boldsymbol{Z}_{v,k}'$. Likewise, we have the equivalence $\boldsymbol{Z}_{u,k} = \boldsymbol{Z}_{u,k}'$. It remains to establish uniqueness of $\nu_{k,0}$. Assume for the sake of contradiction that $\nu_{k,0} \neq \nu_{k,0}'$, and consider the optimization in the first line of the display [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} at $\nu_{k,0}$ and $\nu_{k,0}'$. Because $\boldsymbol{K}_{h,k} = \boldsymbol{K}_{h,k}'$ and $\boldsymbol{Z}_{u,k} = \boldsymbol{Z}_{u,k}'$, we may consider two problems defined using the same random variables $\boldsymbol{H}_k^{\mathsf{se}}$, $\boldsymbol{U}_{k-1}^{\mathsf{se}}$ and parameters $\boldsymbol{Z}_{u,k}$, $\nu_{k,\mathsf{x}}$, but possibly different values of the parameters $\nu_{k,0}$ and $\nu_{k,0}'$. Denote the objectives for the two problems by $\Phi_{k,u}(\,\cdot\,;\boldsymbol{H}_{k-1}^{\mathsf{se}})$ and $\Phi_{k,u}'(\,\cdot\,;\boldsymbol{H}_{k-1}^{\mathsf{se}})$. Then $\Phi_{k,u}'(\boldsymbol{u};\boldsymbol{H}_{k-1}^{\mathsf{se}}) = \Phi_{k,u}(\boldsymbol{u};\boldsymbol{H}_{k-1}^{\mathsf{se}}) -\langle \boldsymbol{u},\boldsymbol{1}\rangle(\nu_{k,0}' - \nu_{k,0})$ (see equation [\[eq:SE-penalties\]](#eq:SE-penalties){reference-type="eqref" reference="eq:SE-penalties"}). 
By the optimality of $\boldsymbol{u}_k^{\mathsf{se}}$ and ${\boldsymbol{u}_k^{\mathsf{se}}}'$, we have $$\begin{aligned} \Phi_{k,u} (\boldsymbol{u}_k^{\mathsf{se}};\boldsymbol{H}_{k-1}^{\mathsf{se}}) & \leq \Phi_{k,u}({\boldsymbol{u}_k^{\mathsf{se}}}';\boldsymbol{H}_{k-1}^{\mathsf{se}}) = \Phi_{k,u}'({\boldsymbol{u}_k^{\mathsf{se}}}';\boldsymbol{H}_{k-1}^{\mathsf{se}}) + \langle{\boldsymbol{u}_k^{\mathsf{se}}}',\boldsymbol{1}\rangle(\nu_{k,0}' - \nu_{k,0}) \\ & \leq \Phi_{k,u}'({\boldsymbol{u}_k^{\mathsf{se}}};\boldsymbol{H}_{k-1}^{\mathsf{se}}) + \langle{\boldsymbol{u}_k^{\mathsf{se}}}',\boldsymbol{1}\rangle(\nu_{k,0}' - \nu_{k,0}) = \Phi_{k,u}({\boldsymbol{u}_k^{\mathsf{se}}};\boldsymbol{H}_{k-1}^{\mathsf{se}}) + \langle{\boldsymbol{u}_k^{\mathsf{se}}}' - \boldsymbol{u}_k^{\mathsf{se}},\boldsymbol{1}\rangle(\nu_{k,0}' - \nu_{k,0}),\end{aligned}$$ where equality holds in the first inequality if and only if ${\boldsymbol{u}_k^{\mathsf{se}}}'$ is also a minimizer of $\Phi_{k,u}$. Because $\Phi_{k,u}$ is strongly convex, this occurs if and only if ${\boldsymbol{u}_k^{\mathsf{se}}}' = \boldsymbol{u}_k^{\mathsf{se}}$. In particular, $\langle{\boldsymbol{u}_k^{\mathsf{se}}}' - \boldsymbol{u}_k^{\mathsf{se}},\boldsymbol{1}\rangle(\nu_{k,0}' - \nu_{k,0}) \geq 0$, with equality if and only if ${\boldsymbol{u}_k^{\mathsf{se}}}' = \boldsymbol{u}_k^{\mathsf{se}}$. By the final line of [\[eq:fixpt-inductive\]](#eq:fixpt-inductive){reference-type="ref" reference="eq:fixpt-inductive"}, $\mathbb{E}[\langle{\boldsymbol{u}_k^{\mathsf{se}}}' - \boldsymbol{u}_k^{\mathsf{se}},\boldsymbol{1}\rangle(\nu_{k,0}' - \nu_{k,0})] = 0$, whence the equality conditions must hold almost surely. That is, $\boldsymbol{u}_k^{\mathsf{se}} = {\boldsymbol{u}_k^{\mathsf{se}}}'$ almost surely. 
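The inequality $\langle{\boldsymbol{u}_k^{\mathsf{se}}}' - \boldsymbol{u}_k^{\mathsf{se}},\boldsymbol{1}\rangle(\nu_{k,0}' - \nu_{k,0}) \geq 0$ is an instance of a general fact: for a strongly convex objective, the minimizer moves monotonically with a linear tilt. A quadratic sketch (toy $Q$ and $b$, not the $\Phi_{k,u}$ of the text):

```python
import numpy as np

# For a strongly convex Phi, the minimizer u(t) of the tilted objective
# Phi(u) - t*<u, 1> satisfies <u(t') - u(t), 1> * (t' - t) >= 0,
# with equality iff u(t') = u(t). Quadratic toy instance:
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
Q = A @ A.T + np.eye(6)          # positive definite Hessian
b = rng.standard_normal(6)
one = np.ones(6)

def minimizer(t):
    # argmin_u 1/2 u^T Q u - b^T u - t*<u, 1>  solves  Q u = b + t*1
    return np.linalg.solve(Q, b + t * one)

for t, tp in [(0.0, 1.0), (-2.0, 0.5), (1.0, 1.0)]:
    gap = (minimizer(tp) - minimizer(t)) @ one * (tp - t)
    assert gap >= -1e-12         # monotone in the tilt
```

In the quadratic case the gap equals $(t'-t)^2\,\boldsymbol{1}^\top Q^{-1}\boldsymbol{1}$, which is strictly positive whenever $t' \neq t$; the proof's equality analysis plays the role of this strict positivity.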
Now using that $\boldsymbol{u}_k^{\mathsf{se}} = {\boldsymbol{u}_k^{\mathsf{se}}}'$, we have that ${\boldsymbol 0}\in \partial_{\boldsymbol{u}} \Phi_{k,u}(\boldsymbol{u}_k^{\mathsf{se}};\boldsymbol{H}_{k-1}^{\mathsf{se}})$ and ${\boldsymbol 0}\in \partial_{\boldsymbol{u}} \Phi_{k,u}'(\boldsymbol{u}_k^{\mathsf{se}};\boldsymbol{H}_{k-1}^{\mathsf{se}})$, whence $(\nu_{k,0}' - \nu_{k,0}) \boldsymbol{1}\in \partial_{\boldsymbol{u}} \Phi_{k,u}(\boldsymbol{u}_k^{\mathsf{se}};\boldsymbol{H}_{k-1}^{\mathsf{se}})$ almost surely. Note that because the function $\ell_\mathsf{a}$ is strongly convex, its convex conjugate $\ell_{\mathsf{a}}^*$ must be smooth. This fact combined with equation [\[eq:SE-penalties\]](#eq:SE-penalties){reference-type="ref" reference="eq:SE-penalties"} ensures that the function $\Phi_{5,u}$ is smooth. Further, the function $\Phi_{6,u}$ is smooth in those coordinates for which $y_{1,i}^{\mathsf{se}} \neq 0$, and with positive probability $y_{1,i}^{\mathsf{se}} \neq 0$ for some $i$. Thus, with positive probability, we cannot have both ${\boldsymbol 0}$ and $c\boldsymbol{1}$, for any $c \neq 0$, in $\partial_{\boldsymbol{u}} \Phi_{k,u}(\boldsymbol{u}_k^{\mathsf{se}};\boldsymbol{H}_{k-1}^{\mathsf{se}})$. We conclude that $\nu_{k,0}' - \nu_{k,0} = 0$, so that uniqueness of $\nu_{k,0}$ is established. ## Proof of  {#sec:fix-pt-bound-proof} *Proof of .* Without loss of generality, assume that $\Omega_k({\boldsymbol 0}) = 0$ for $k = 5,6$.\ **Locations of zeros in $\boldsymbol{K}_g$, $\boldsymbol{K}_h$, $\boldsymbol{Z}_v$, and $\boldsymbol{Z}_u$.** We established that the upper-left $4\times4$ blocks of $\boldsymbol{Z}_v$, $\boldsymbol{Z}_u$, $\boldsymbol{K}_g$, and $\boldsymbol{K}_h$ are of the claimed form when proving the base case in the proof of . 
This implies that the lower-left and upper-right $2 \times 2$ blocks of $\boldsymbol{K}_g$, the middle $2\times 2$ block in the last two rows of $\boldsymbol{K}_h$, and the middle $2\times 2$ block in the last two columns of $\boldsymbol{K}_h$ must be ${\boldsymbol 0}_{2 \times 2}$. This confirms the location of zeroes in $\boldsymbol{K}_g,\boldsymbol{K}_h$ asserted by the lemma. The locations of all zeroes in $\boldsymbol{Z}_v$, $\boldsymbol{Z}_u$ asserted by the lemma, except for $\zeta_{65}^v = 0$ and $\zeta_{65}^u = 0$, are confirmed by (b). We show $\zeta_{65}^v = 0$ and $\zeta_{65}^u = 0$ below.\ **Bound on $\nu_{k,0}$.** Because $\Omega_k$ is $c$-strongly convex, has a minimizer with $\ell_2$-norm bounded by $C$, and $\Omega_k({\boldsymbol 0}) = 0$, we have that $\Omega_k(\boldsymbol{v}) \geq - C + \frac{c}{2} \| \boldsymbol{v}\|^2$. Below we construct $\boldsymbol{u}_+ \in L_\mathsf{u}^2$ such that $\| \boldsymbol{u}_+ \|_{L_2} \leq C/\sqrt{n}$, $\langle \boldsymbol{1}, \boldsymbol{u}_+ \rangle_{L_2} \geq c > 0$, and $\mathbb{E}[\ell_k^*(n\boldsymbol{u}_+;\boldsymbol{w}^{\mathsf{se}},\boldsymbol{y}_1^{\mathsf{se}},\boldsymbol{y}_2^{\mathsf{se}})] \leq C$. 
Assuming we find such a $\boldsymbol{u}_+$, we get $$\begin{aligned} &\min_{\langle \boldsymbol{v}, \boldsymbol{\xi}_g \rangle_{L_2} \geq 0} \; \max_{\langle \boldsymbol{u}, \boldsymbol{\xi}_h \rangle_{L_2} \geq 0} \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u};v_0,\boldsymbol{v}) \geq \min_{\boldsymbol{v}\in L_{\mathsf{v}}^2} \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u}_+;v_0,\boldsymbol{v}) \\ &= \min_{\boldsymbol{v}\in L_{\mathsf{v}}^2} -\big\langle \boldsymbol{g}^{L_2}\big(\boldsymbol{u}_+\big) , \boldsymbol{v}\big\rangle_{L_2} +\big\langle \boldsymbol{h}^{L_2}(\boldsymbol{v}) , \boldsymbol{u}_+ \big\rangle_{L_2} +\langle \boldsymbol{u}_+ , \boldsymbol{1}\rangle_{L_2} (v_0 + \langle \boldsymbol{\mu}_{\mathsf{x}} , \boldsymbol{v}\rangle_{L_2}) -\mathbb{E}[\ell_k^*(n\boldsymbol{u}_+;\boldsymbol{w}^{\mathsf{se}},\boldsymbol{y}_1^{\mathsf{se}},\boldsymbol{y}_2^{\mathsf{se}})] +\mathbb{E}[\Omega_k(\boldsymbol{v})] \\ &\geq \min_{\boldsymbol{v}\in L_{\mathsf{v}}^2} -C\|\boldsymbol{v}\|_{L_2} + \langle \boldsymbol{u}_+,\boldsymbol{1}\rangle_{L_2} v_0 -C+ \frac{c}{2} \| \boldsymbol{v}\|_{L_2}^2 \\ &\geq \langle \boldsymbol{u}_+ , \boldsymbol{1}\rangle_{L_2} v_0 -C, \end{aligned}$$ where the first inequality holds because $\boldsymbol{u}_+$ is feasible for the inner maximization (it is independent of $\boldsymbol{\xi}_h$, so $\langle \boldsymbol{u}_+,\boldsymbol{\xi}_h\rangle_{L_2} = 0$) and because dropping the constraint on $\boldsymbol{v}$ can only decrease the minimum, and in the second inequality we have used the bound on $\Omega_k$ and the properties of $\boldsymbol{u}_+$ given above, as well as the facts that $\big\|\boldsymbol{g}^{L_2}(\boldsymbol{u}_+)\big\|_{L_2} = \sqrt{p}\|\boldsymbol{u}_+\|_{L_2} \leq C$, $\big\|\boldsymbol{h}^{L_2}(\boldsymbol{v})\big\|_{L_2} \leq \sqrt{n}\|\boldsymbol{v}\|_{L_2}$, and $\|\boldsymbol{\mu}_{\mathsf{x}}\| \leq C$. 
We further have that $$\begin{aligned} \min_{\substack{v_0 \in {\mathbb R}\\ \langle \boldsymbol{v}, \boldsymbol{\xi}_g \rangle_{L_2} \geq 0}} \; \max_{\langle \boldsymbol{u}, \boldsymbol{\xi}_h \rangle_{L_2} \geq 0} \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u};v_0,\boldsymbol{v}) &\leq \max_{\langle \boldsymbol{u}, \boldsymbol{\xi}_h \rangle_{L_2} \geq 0} \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u};0,{\boldsymbol 0}) = \max_{\langle \boldsymbol{u}, \boldsymbol{\xi}_h \rangle_{L_2} \geq 0} -\mathbb{E}[\ell_k^*(n \boldsymbol{u}; \boldsymbol{w}^{\mathsf{se}},\boldsymbol{y}_1^{\mathsf{se}},\boldsymbol{y}_2^{\mathsf{se}})] \\ &\leq \frac{C}{n} + \frac{c}{2n} \| \boldsymbol{y}_2^{\mathsf{se}} \|_{L_2}^2 \leq C, \end{aligned}$$ where the final inequality holds by the following logic in the cases $k = 5$ and $k = 6$. For $k = 5$, we use that, by equation [\[eq:ellk\*-def\]](#eq:ellk*-def){reference-type="eqref" reference="eq:ellk*-def"}, $\ell_5^*(n\boldsymbol{u};\boldsymbol{w}^{\mathsf{se}},\boldsymbol{y}_1^{\mathsf{se}},\boldsymbol{y}_2^{\mathsf{se}}) = \frac{1}{n} \sum_{i=1}^n \sup_{\eta \in {\mathbb R}} \{n\eta u_i - \ell_\mathsf{a}(\eta;y_{1,i}^{\mathsf{se}})\} \geq -\frac{1}{n} \sum_{i=1}^n \ell_\mathsf{a}(0;y_{1,i}^{\mathsf{se}}) \geq -C$ by Assumption A1. For $k = 6$, we use that, by equation [\[eq:ellk\*-def\]](#eq:ellk*-def){reference-type="eqref" reference="eq:ellk*-def"}, and using that $w(h)$ is bounded above by $C$ by Assumption A1, we have that $\ell_6^*(n\boldsymbol{u};\boldsymbol{w},\boldsymbol{y}_1,\boldsymbol{y}_2) \geq - \| \boldsymbol{y}_2 \| \| \boldsymbol{u}\| - \frac{C}{n} + \frac{cn}{2} \| \boldsymbol{u}\|^2$. 
We have that $\|\boldsymbol{y}_2^{\mathsf{se}}\|_{L_2} \leq C$ using equation [\[eq:yw-func-of-Heps\]](#eq:yw-func-of-Heps){reference-type="eqref" reference="eq:yw-func-of-Heps"} and that $\| \boldsymbol{h}_2 \|_{L_2}^2/n = \| \boldsymbol{\theta}_2 \|^2 \leq C$, $\|\boldsymbol{\varepsilon}_2\|_{L_2}^2/n = \sigma^2 < C$, and $|\theta_{2,0} + \langle \boldsymbol{\mu}_{\mathsf{x}},\boldsymbol{\theta}_2 \rangle_{L_2}| \leq C$. Combining the previous two displays, and using that $\langle \boldsymbol{u}_+ , \boldsymbol{1}\rangle_{L_2} > c > 0$, gives that any $v_0$ minimizing $\min_{\langle \boldsymbol{v}, \boldsymbol{\xi}_g \rangle_{L_2} \geq 0}\max_{\langle \boldsymbol{u}, \boldsymbol{\xi}_h \rangle_{L_2} \geq 0}\mathsf{AuxObj}_k^{L_2}(\boldsymbol{u};v_0,\boldsymbol{v})$ over $v_0 \in {\mathbb R}$ must be bounded above by $C$. If we replace $\boldsymbol{u}_+$ by $\boldsymbol{u}_- \in L_\mathsf{u}^2$ which satisfies $\| \boldsymbol{u}_- \|_{L_2} \leq C/\sqrt{n}$, $\langle \boldsymbol{1}, \boldsymbol{u}_- \rangle_{L_2} \leq -c < 0$, and $\mathbb{E}[\ell_k^*(n\boldsymbol{u}_-;\boldsymbol{w}^{\mathsf{se}},\boldsymbol{y}_1^{\mathsf{se}},\boldsymbol{y}_2^{\mathsf{se}})] \leq C$, we can by the same argument show that the minimizing $v_0$ is bounded below by $-C$. All that remains is to construct $\boldsymbol{u}_+$ and $\boldsymbol{u}_-$. First consider $k = 5$. Note that $\ell_\mathsf{a}^*(\partial_\eta\ell_\mathsf{a}(0;0);0) = -\ell_\mathsf{a}(0;0) \leq 0$, $\ell_\mathsf{a}^*(0;0) = \sup_{\eta \in {\mathbb R}} \{ -\ell_\mathsf{a}(\eta;0) \} \leq 0$, $\ell_\mathsf{a}^*(\partial_\eta\ell_\mathsf{a}(0;1);1) = -\ell_\mathsf{a}(0;1)$, and $\ell_\mathsf{a}^*(0;1) = \sup_{\eta \in {\mathbb R}} \{ -\ell_\mathsf{a}(\eta;1) \}$. Set $u_{+,i} = \partial_\eta\ell_\mathsf{a}(0;0) \mathbb{I}\{y_{1,i}^{\mathsf{se}} = 0\}/n$ and $u_{-,i} = \partial_\eta\ell_\mathsf{a}(0;1) \mathbb{I}\{y_{1,i}^{\mathsf{se}} = 1\}/n$. 
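The conjugate identities just listed are instances of the Fenchel-Young equality: for a differentiable convex $f$, $f^*(f'(x)) = x\,f'(x) - f(x)$. A scalar numerical check using softplus as a purely illustrative stand-in ($\ell_\mathsf{a}$ in the text is only assumed convex and differentiable):

```python
import numpy as np

# Fenchel-Young equality: for differentiable convex f, f*(f'(x)) = x*f'(x) - f(x).
# Illustrated with f = softplus, whose conjugate on (0, 1) is the negative
# binary entropy p*log(p) + (1-p)*log(1-p).
f = lambda x: np.log1p(np.exp(x))
fprime = lambda x: 1.0 / (1.0 + np.exp(-x))           # values lie in (0, 1)
fstar = lambda p: p * np.log(p) + (1 - p) * np.log(1 - p)

for x in (0.0, 1.3, -2.0):
    p = fprime(x)
    assert abs(fstar(p) - (x * p - f(x))) < 1e-10
```

Evaluating at $x = 0$ recovers exactly the pattern used in the proof: $f^*(f'(0)) = -f(0)$.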
Then, by Assumption A1 (which bounds $|\partial_\eta\ell_\mathsf{a}(0;0)|,|\partial_\eta\ell_\mathsf{a}(0;1)|<C$), we have $\| \boldsymbol{u}_+ \|_{L_2},\|\boldsymbol{u}_-\|_{L_2} \leq C/\sqrt{n}$. We moreover have that, by Assumption A1, $\langle \boldsymbol{1}, \boldsymbol{u}_+ \rangle_{L_2} = \partial_\eta\ell_\mathsf{a}(0;0) \mathbb{P}(y_{1,i}^{\mathsf{se}} = 0) = (1-{\overline{\pi}}) \partial_\eta\ell_\mathsf{a}(0;0) > c > 0$, and likewise, $\langle \boldsymbol{1}, \boldsymbol{u}_- \rangle_{L_2} = \partial_\eta\ell_\mathsf{a}(0;1) \mathbb{P}(y_{1,i}^{\mathsf{se}} = 1) = {\overline{\pi}}\partial_\eta\ell_\mathsf{a}(0;1) < -c < 0$. Then, we have $\ell_5^*(n\boldsymbol{u}_+;\boldsymbol{w}^{\mathsf{se}},\boldsymbol{y}_1^{\mathsf{se}},\boldsymbol{y}_2^{\mathsf{se}}) = \frac{1}{n} \sum_{i=1}^n \ell_\mathsf{a}^*(\partial_\eta\ell_\mathsf{a}(0;0)\mathbb{I}\{y_{1,i}^{\mathsf{se}}=0\};y_{1,i}^{\mathsf{se}}) \leq C$, where we use $\ell_\mathsf{a}^*(\partial_\eta\ell_\mathsf{a}(0;0);0) = -\ell_\mathsf{a}(0;0) \leq 0$ on the coordinates with $y_{1,i}^{\mathsf{se}} = 0$ and $\ell_\mathsf{a}^*(0;1) \leq C$ (by Assumption A1) on the rest. Verifying that $\ell_5^*(n\boldsymbol{u}_-;\boldsymbol{w}^{\mathsf{se}},\boldsymbol{y}_1^{\mathsf{se}},\boldsymbol{y}_2^{\mathsf{se}})\leq C$ follows similarly, but instead uses $\ell_\mathsf{a}^*(\partial_\eta\ell_\mathsf{a}(0;1);1) = -\ell_\mathsf{a}(0;1) \leq C$ and $\ell_\mathsf{a}^*(0;0) = \sup_{\eta \in {\mathbb R}} \{ -\ell_\mathsf{a}(\eta;0) \} \leq 0$. Next, consider $k = 6$. In this case, we set $\boldsymbol{u}_+ = \boldsymbol{y}_1^{\mathsf{se}}/n$ and $\boldsymbol{u}_- = -\boldsymbol{y}_1^{\mathsf{se}}/n$. Because the entries of $\boldsymbol{y}_1^{\mathsf{se}}$ are either $0$ or $1$, it follows that $\| \boldsymbol{u}_+ \|_{L_2}, \| \boldsymbol{u}_- \|_{L_2} \leq C/\sqrt{n}$. Moreover, $\langle \boldsymbol{1},\boldsymbol{u}_+ \rangle_{L_2} = -\langle \boldsymbol{1}, \boldsymbol{u}_-\rangle_{L_2} = {\overline{\pi}}> c > 0$. 
Moreover, because $(y_{1,i}^{\mathsf{se}})^2/y_{1,i}^{\mathsf{se}} = y_{1,i}^{\mathsf{se}}$ (when $y_{1,i}^{\mathsf{se}} = 0$, this is 0/0, which we have by convention set equal to 0, and when $y_{1,i}^{\mathsf{se}} = 1$, this is 1), we have $\mathbb{E}[\ell_6^*(n\boldsymbol{u}_+;\boldsymbol{w}^{\mathsf{se}},\boldsymbol{y}_1^{\mathsf{se}},\boldsymbol{y}_2^{\mathsf{se}})] = - \langle \boldsymbol{y}_1^{\mathsf{se}} , \boldsymbol{y}_2^{\mathsf{se}} \rangle_{L_2}/n + \frac{1}{2n} \sum_{i=1}^n \mathbb{E}[(w_i^{\mathsf{se}})^{-1}y_{1,i}^{\mathsf{se}}]$. Because $w(h)$ is bounded below by $c > 0$ by assumption, and $\| \boldsymbol{y}_1^{\mathsf{se}} \|_{L_2}, \| \boldsymbol{y}_2^{\mathsf{se}}\|_{L_2} \leq C\sqrt{n}$, we get $\mathbb{E}[\ell_6^*(n\boldsymbol{u}_+;\boldsymbol{w}^{\mathsf{se}},\boldsymbol{y}_1^{\mathsf{se}},\boldsymbol{y}_2^{\mathsf{se}})] \leq C$. Showing $\mathbb{E}[\ell_6^*(n\boldsymbol{u}_-;\boldsymbol{w}^{\mathsf{se}},\boldsymbol{y}_1^{\mathsf{se}},\boldsymbol{y}_2^{\mathsf{se}})] \leq C$ holds similarly.\ **Upper bounds on $|L_{g,k\ell}|$, $|L_{h,k\ell}|$, $|K_{g,k\ell}|$, and $|K_{h,k\ell}|$.** By Assumption A1, $\|\boldsymbol{v}_1^{\mathsf{se}}\|_{L_2} = \|\boldsymbol{\theta}_1\|$, $\|\boldsymbol{v}_2^{\mathsf{se}}\|_{L_2} = \|\boldsymbol{\theta}_2\|$, $\|\boldsymbol{v}_3^{\mathsf{se}}\|_{L_2} = 0$, and $\|\boldsymbol{v}_4^{\mathsf{se}}\|_{L_2} = 0$ are all upper bounded by $C$, and $\|\boldsymbol{u}_1^{\mathsf{se}}\|_{L_2} = 0$, $\|\boldsymbol{u}_2^{\mathsf{se}}\|_{L_2} = 0$, $\|\boldsymbol{u}_3^{\mathsf{se}}\|_{L_2} = \| \boldsymbol{1}\|/n = 1/\sqrt{n}$, $\|\boldsymbol{u}_4^{\mathsf{se}}\|_{L_2} = \|\boldsymbol{y}_1^{\mathsf{se}}\|_{L_2}/n \leq 1/\sqrt{n}$ are all upper bounded by $C/\sqrt{n}$. Upper bounds on $\| \boldsymbol{v}_5^{\mathsf{se}} \|_{L_2}$, $\| \boldsymbol{v}_6^{\mathsf{se}} \|_{L_2}$, $\| \boldsymbol{u}_5^{\mathsf{se}} \|_{L_2}$, and $\| \boldsymbol{u}_6^{\mathsf{se}} \|_{L_2}$ require some more work. 
By optimality of $\boldsymbol{v}_k^{\mathsf{se}}$, we have $$\begin{aligned} &\mathsf{AuxObj}_k^{L_2}(\boldsymbol{u}_k^{\mathsf{se}};\nu_{0,k},\boldsymbol{v}_k^{\mathsf{se}}) = \max_{\langle \boldsymbol{u}, \boldsymbol{\xi}_h \rangle_{L_2} \geq 0} \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u};\nu_{0,k},\boldsymbol{v}_k^{\mathsf{se}}) \\ &\qquad\geq \mathsf{AuxObj}_k^{L_2}({\boldsymbol 0};\nu_{0,k},\boldsymbol{v}_k^{\mathsf{se}}) = -\mathbb{E}[\ell_k^*({\boldsymbol 0};\boldsymbol{w}^{\mathsf{se}},\boldsymbol{y}_1^{\mathsf{se}},\boldsymbol{y}_2^{\mathsf{se}})] +\mathbb{E}[\Omega_k(\boldsymbol{v}_k^{\mathsf{se}})] \geq -C + \frac{c}{2}\| \boldsymbol{v}_k^{\mathsf{se}} \|_{L_2}^2, \end{aligned}$$ where we use that $\Omega_k(\boldsymbol{v}) \geq - C + \frac{c}{2} \| \boldsymbol{v}\|^2$ (as we argued above); $\ell_5^*({\boldsymbol 0};\boldsymbol{w}^{\mathsf{se}},\boldsymbol{y}_1^{\mathsf{se}},\boldsymbol{y}_2^{\mathsf{se}}) = \frac{1}{n} \sum_{i=1}^n \sup_{\eta \in {\mathbb R}}\{ - \ell_\mathsf{a}^*(\eta;y_{1,i}^{\mathsf{se}})\} \geq -C$ because $\ell_\mathsf{a}^*(0;0),\ell_\mathsf{a}^*(0;1) \leq$ $C$ by Assumption A1; and $\ell_6^*({\boldsymbol 0};\boldsymbol{w}^{\mathsf{se}},\boldsymbol{y}_1^{\mathsf{se}},\boldsymbol{y}_2^{\mathsf{se}}) = 0$ by equation [\[eq:ellk\*-def\]](#eq:ellk*-def){reference-type="eqref" reference="eq:ellk*-def"}. 
By optimality of $\boldsymbol{u}_k^{\mathsf{se}}$, we have $$\begin{aligned} &\mathsf{AuxObj}_k^{L_2}(\boldsymbol{u}_k^{\mathsf{se}};\nu_{0,k},\boldsymbol{v}_k^{\mathsf{se}}) = \min_{\langle \boldsymbol{v}, \boldsymbol{\xi}_g \rangle_{L_2} \geq 0} \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u}_k^{\mathsf{se}};\nu_{0,k},\boldsymbol{v}) \\ &\qquad\leq \mathsf{AuxObj}_k^{L_2}(\boldsymbol{u}_k^{\mathsf{se}};\nu_{0,k},{\boldsymbol 0}) = -\mathbb{E}[\ell_k^*(\boldsymbol{u}_k^{\mathsf{se}};\boldsymbol{w}^{\mathsf{se}},\boldsymbol{y}_1^{\mathsf{se}},\boldsymbol{y}_2^{\mathsf{se}})] +\mathbb{E}[\Omega_k({\boldsymbol 0})] \leq C - \frac{cn}{2}\| \boldsymbol{u}_k^{\mathsf{se}} \|_{L_2}^2, \end{aligned}$$ where we use that $\Omega_k({\boldsymbol 0}) = 0$ (which we assumed at the beginning of this section without loss of generality); that, $\ell_k^*$ is $cn$-strongly convex in $\boldsymbol{u}$ because $\ell_k$ are strongly smooth in $\boldsymbol{\eta}$ (for $k = 5$, this is by assumption, and for $k = 6$, we use that the weight function $w(h)$ is bounded above). Chaining the previous displays together, we conclude both that $\| \boldsymbol{v}_k^{\mathsf{se}} \|_{L_2}^2 \leq C$ and $\| \boldsymbol{u}_k^{\mathsf{se}} \|_{L_2}^2 \leq C/n$ for $k = 5,6$. Having bounded $\| \boldsymbol{v}_k^{\mathsf{se}} \|_{L_2}^2 \leq C$ for $k = 1,\ldots,6$, by the fixed point equations (equation [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}) we get that $|K_{h,k\ell}| \leq C$ for all $1 \leq k,\ell \leq 6$. Having bounded $\| \boldsymbol{u}_k^{\mathsf{se}} \|_{L_2}^2 \leq C/n$ for $k = 1,\ldots,6$, by the fixed point equations (equation [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}) we get that $|K_{g,k\ell}| \leq C/n$ for all $1 \leq k,\ell \leq 6$. 
Because $\sum_{\ell=1}^k L_{g,k\ell}^2 = K_{g,kk}$, we also have $|L_{g,k\ell}| \leq C/\sqrt{n}$, and because $\sum_{\ell = 1}^k L_{h,k\ell}^2 = K_{h,kk}$, we also have $|L_{h,k\ell}| \leq C$, for $1 \leq \ell \leq k \leq 6$.\ **Upper bound on $|\nu_{k,\mathsf{x}}|$.** The fixed point equations [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"} state $\nu_{k,\mathsf{x}} = \langle \boldsymbol{\mu}_{\mathsf{x}} , \boldsymbol{v}_k^{\mathsf{se}} \rangle_{L_2}$. Thus, the upper bound on $|\nu_{k,\mathsf{x}}|$ holds by Cauchy-Schwartz and because $\|\boldsymbol{\mu}_{\mathsf{x}}\| \leq C$ (by Assumption A1) and because $\| \boldsymbol{v}_k^{\mathsf{se}} \|_{L_2} \leq C$ (as we have just shown).\ **Lower bound on $L_{g,kk}$ and $L_{h,kk}$.** For $k=5,6$, the KKT conditions for the first line of [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} is $$\begin{gathered} (\nu_{k,0} + \nu_{k,\mathsf{x}})\boldsymbol{1} + \boldsymbol{h}_k^{\mathsf{se}} - \sum_{\ell = 1}^{k-1} \zeta_{k\ell}^u \boldsymbol{u}_\ell^{\mathsf{se}} - \zeta_{kk}^u \boldsymbol{u}_k^{\mathsf{se}} \in \partial_{\boldsymbol{u}}\ell_k^*(\boldsymbol{u}_k^{\mathsf{se}};\boldsymbol{w}^{\mathsf{se}},\boldsymbol{y}_1^{\mathsf{se}},\boldsymbol{y}_2^{\mathsf{se}}). 
\end{gathered}$$ By Fenchel-Legendre duality, the first of these is equivalent to $$\label{eq:ukse-is-ellk-deriv} n\boldsymbol{u}_k^{\mathsf{se}} = \nabla \ell_k\Big( (\nu_{k,0} + \nu_{k,\mathsf{x}})\boldsymbol{1} + \boldsymbol{h}_k^{\mathsf{se}} - \sum_{\ell = 1}^{k-1} \zeta_{k\ell}^u \boldsymbol{u}_\ell^{\mathsf{se}} - \zeta_{kk}^u \boldsymbol{u}_k^{\mathsf{se}} ; \boldsymbol{w}^{\mathsf{se}},\boldsymbol{y}_1^{\mathsf{se}},\boldsymbol{y}_2^{\mathsf{se}} \Big),$$ where $\ell_5(\boldsymbol{\eta};\boldsymbol{w},\boldsymbol{y}_1,\boldsymbol{y}_2) \ensuremath{: =}\sum_{i=1}^n \ell_\mathsf{a}(\eta_i;y_{1,i})$ and $\ell_6(\boldsymbol{\eta};\boldsymbol{w},\boldsymbol{y}_1,\boldsymbol{y}_2) \ensuremath{: =}\frac{1}2 \sum_{i=1}^n y_{1,i} w_i(y_{2,i} - \eta_i)^2$. Let $\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp$ denote the orthogonal projection in Hilbert space: $\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp\boldsymbol{u}= \boldsymbol{u}- \sum_{\ell=1}^{k-1} \boldsymbol{u}_\ell^{\mathsf{se},\perp} \langle \boldsymbol{u}, \boldsymbol{u}_\ell^{\mathsf{se},\perp} \rangle_{L_2}/\|\boldsymbol{u}_\ell^{\mathsf{se},\perp}\|_{L_2}^2$, where we adopt the convention that ${\boldsymbol 0}/{\boldsymbol 0}= {\boldsymbol 0}$. Multiplying both sides of the second equation in the second line of equation [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"} by $\boldsymbol{L}_g^\dagger$ gives $\boldsymbol{L}_g^{\ddagger\top}\boldsymbol{Z}_u^\top = \langle\!\langle\boldsymbol{G}^{\mathsf{se},\perp} , \boldsymbol{V}^{\mathsf{se}} \rangle\!\rangle_{L_2}$. 
Using that $\boldsymbol{L}_g^\ddagger$ is lower-triangular and that $L_{g,kk} = \| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{u}_k^{\mathsf{se}} \|_{L_2}$ by the first line of equation [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}, we get $\zeta_{kk}^u = \langle \boldsymbol{g}_k^{\mathsf{se},\perp} , \boldsymbol{v}_k^{\mathsf{se}} \rangle_{L_2} / \| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{u}_k^{\mathsf{se}} \|_{L_2}$ (where we use the convention $0/0 = 0$). Thus, $\| \zeta_{kk}^u \boldsymbol{u}_k^{\mathsf{se}} - \zeta_{kk}^u \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}} \boldsymbol{u}_k^{\mathsf{se}} \|_{L_2} = |\langle \boldsymbol{g}_k^{\mathsf{se},\perp},\boldsymbol{v}_k^{\mathsf{se}}\rangle_{L_2}| \leq \sqrt{p} \| \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{v}_k^{\mathsf{se}} \|_{L_2}$, because $\| \boldsymbol{g}_k^{\mathsf{se},\perp} \|_{L_2} \leq \sqrt{p}$. Moreover, by the second equation in the first line of equation [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}, $\|\mathsf{P}_{\boldsymbol{H}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{h}_k^{\mathsf{se}}\|_{L_2} = \sqrt{n} \| \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{v}_k^{\mathsf{se}} \|_{L_2}$. 
Thus, because $\nabla \ell_k$ are $C$-Lipschitz by Assumption A1 (for $k = 5$, this is by assumption; for $k = 6$, we use that the weights $w(h)$ are bounded above by $C$), we have $$\label{eq:ukse-approx-grad} \begin{gathered} \Big\| n\boldsymbol{u}_k^{\mathsf{se}} - \nabla \ell_k\Big( (\nu_{k,0} + \nu_{k,\mathsf{x}})\boldsymbol{1} + \mathsf{P}_{\boldsymbol{H}_{k-1}^{\mathsf{se}}} \boldsymbol{h}_k^{\mathsf{se}} - \sum_{\ell = 1}^{k-1} \zeta_{k\ell}^u \boldsymbol{u}_\ell^{\mathsf{se}} - \zeta_{kk}^u \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}} \boldsymbol{u}_k^{\mathsf{se}} ; \boldsymbol{w}^{\mathsf{se}},\boldsymbol{y}_1^{\mathsf{se}},\boldsymbol{y}_2^{\mathsf{se}} \Big) \Big\|_{L_2} \leq C\sqrt{n}\| \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{v}_k^{\mathsf{se}} \|_{L_2}. \end{gathered}$$ Note that $\nabla \ell_k\big( (\nu_{k,0} + \nu_{k,\mathsf{x}})\boldsymbol{1}+ \mathsf{P}_{\boldsymbol{H}_{k-1}^{\mathsf{se}}} \boldsymbol{h}_k^{\mathsf{se}} - \sum_{\ell = 1}^{k-1} \zeta_{k\ell}^u \boldsymbol{u}_\ell^{\mathsf{se}} - \zeta_{kk}^u \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}} \boldsymbol{u}_k^{\mathsf{se}} ; \boldsymbol{w},\boldsymbol{y}_1,\boldsymbol{y}_2 \big)$ is independent of $\boldsymbol{h}_k^{\mathsf{se},\perp}$ because $\mathsf{P}_{\boldsymbol{H}_{k-1}^{\mathsf{se}}} \boldsymbol{h}_k^{\mathsf{se}}$ is in the span of $\boldsymbol{H}_{k-1}^{\mathsf{se}}$, $\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}} \boldsymbol{u}_k^{\mathsf{se}}$ is in the span of $\boldsymbol{U}_{k-1}^{\mathsf{se}}$ which is a function of $\boldsymbol{H}_{k-1}^{\mathsf{se}}$ and the auxiliary noise vectors $\boldsymbol{\varepsilon}_1^{\mathsf{se}},\boldsymbol{\varepsilon}_2^{\mathsf{se}}$, and $\boldsymbol{w}^{\mathsf{se}},\boldsymbol{y}_1^{\mathsf{se}},\boldsymbol{y}_2^{\mathsf{se}}$ are functions of $\boldsymbol{H}_2^{\mathsf{se}}$ and the auxiliary noise vectors. Thus, their inner product in $L_\mathsf{u}^2$ is 0. 
Then, the previous display and the bound $\| \zeta_{kk}^u \boldsymbol{u}_k^{\mathsf{se}} - \zeta_{kk}^u \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}} \boldsymbol{u}_k^{\mathsf{se}} \|_{L_2} \leq \sqrt{p} \| \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{v}_k^{\mathsf{se}} \|_{L_2}$ imply $|\langle \boldsymbol{h}_k^{\mathsf{se},\perp} , \boldsymbol{u}_k^{\mathsf{se}} \rangle_{L_2}| \leq C\| \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{v}_k^{\mathsf{se}} \|_{L_2}$. For $k=5,6$, the KKT conditions for the second line of [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} is $$\begin{gathered} \boldsymbol{g}_k^{\mathsf{se}} - \sum_{\ell = 1}^{k-1} \zeta_{k\ell}^v \boldsymbol{v}_\ell^{\mathsf{se}} - \zeta_{kk}^v \boldsymbol{v}_k^{\mathsf{se}} - \nabla\Omega_k(\boldsymbol{v}_k^{\mathsf{se}}) = {\boldsymbol 0}. \end{gathered}$$ Similarly to how we showed $\zeta_{kk}^u = \langle \boldsymbol{g}_k^{\mathsf{se},\perp} , \boldsymbol{v}_k^{\mathsf{se}} \rangle_{L_2} / \| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{u}_k^{\mathsf{se}} \|_{L_2}$, we have that $\zeta_{kk}^v = \langle \boldsymbol{h}_k^{\mathsf{se},\perp} , \boldsymbol{u}_k^{\mathsf{se}} \rangle_{L_2} / \| \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{v}_k^{\mathsf{se}} \|_{L_2}$, whence $\| \zeta_{kk}^v \boldsymbol{v}_k^{\mathsf{se}} - \zeta_{kk}^v \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}} \boldsymbol{v}_k^{\mathsf{se}} \|_{L_2} = |\langle \boldsymbol{h}_k^{\mathsf{se},\perp},\boldsymbol{u}_k^{\mathsf{se}}\rangle_{L_2}| \leq C \| \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{v}_k^{\mathsf{se}} \|_{L_2}$, where in the last inequality we have used the bound established at the end of the previous paragraph. 
Using this bound and the fact that $\nabla \Omega_k$ is $C$-Lipschitz by Assumption A1, we have $$\Big\| \mathsf{P}_{\boldsymbol{G}_{k-1}^{\mathsf{se}}}^\perp\boldsymbol{g}_k^{\mathsf{se}} + \mathsf{P}_{\boldsymbol{G}_{k-1}^{\mathsf{se}}}\boldsymbol{g}_k^{\mathsf{se}} - \sum_{\ell = 1}^{k-1} \zeta_{k\ell}^v \boldsymbol{v}_\ell^{\mathsf{se}} - \zeta_{kk}^v \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}\boldsymbol{v}_k^{\mathsf{se}} - \nabla\Omega_k(\mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}\boldsymbol{v}_k^{\mathsf{se}}) \Big\|_{L_2} \leq C\| \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{v}_k^{\mathsf{se}} \|_{L_2}.$$ Because $\mathsf{P}_{\boldsymbol{G}_{k-1}^{\mathsf{se}}}^\perp\boldsymbol{g}_k^{\mathsf{se}}$ is independent of $\mathsf{P}_{\boldsymbol{G}_{k-1}^{\mathsf{se}}}\boldsymbol{g}_k^{\mathsf{se}} - \sum_{\ell = 1}^{k-1} \zeta_{k\ell}^v \boldsymbol{v}_\ell^{\mathsf{se}} - \zeta_{kk}^v \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}\boldsymbol{v}_k^{\mathsf{se}} - \nabla\Omega_k(\mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}\boldsymbol{v}_k^{\mathsf{se}}),$ their inner product in $L_\mathsf{v}^2$ is 0. Thus, the left-hand side of the previous display is lower bounded by $\big\| \mathsf{P}_{\boldsymbol{G}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{g}_k^{\mathsf{se}}\big\|_{L_2} = \sqrt{p} \| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{u}_k^{\mathsf{se}}\|_{L_2}$, with the equality holding by the first equation in the first line of equation [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}. Because $n/p \leq C$, we thus have $C\|\mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{v}_k^{\mathsf{se}}\|_{L_2} \geq C\sqrt{n}\| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{u}_k^{\mathsf{se}} \|_{L_2}$. 
Moreover, $$\begin{aligned} &\Big\| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \nabla \ell_k\big( (\nu_{k,0} + \nu_{k,\mathsf{x}})\boldsymbol{1} + \mathsf{P}_{\boldsymbol{H}_{k-1}^{\mathsf{se}}} \boldsymbol{h}_k^{\mathsf{se}} - \sum_{\ell = 1}^{k-1} \zeta_{k\ell}^u \boldsymbol{u}_\ell^{\mathsf{se}} - \zeta_{kk}^u \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}} \boldsymbol{u}_k^{\mathsf{se}} ; \boldsymbol{w},\boldsymbol{y}_1,\boldsymbol{y}_2 \big) \Big\|_{L_2} \\ &\geq \sqrt{ \mathbb{E}\Big[ \mathrm{Tr}\Big( \mathrm{Var}\Big( \nabla \ell_k\big( (\nu_{k,0} + \nu_{k,\mathsf{x}})\boldsymbol{1} + \mathsf{P}_{\boldsymbol{H}_{k-1}^{\mathsf{se}}} \boldsymbol{h}_k^{\mathsf{se}} - \sum_{\ell = 1}^{k-1} \zeta_{k\ell}^u \boldsymbol{u}_\ell^{\mathsf{se}} - \zeta_{kk}^u \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}} \boldsymbol{u}_k^{\mathsf{se}} ; \boldsymbol{w},\boldsymbol{y}_1,\boldsymbol{y}_2 \big) \Bigm| \boldsymbol{U}_{k-1}^{\mathsf{se}}, \boldsymbol{H}_{k-1}^{\mathsf{se}} \Big) \Big) \Big] }. \end{aligned}$$ Note that the argument $(\nu_{k,0} + \nu_{k,\mathsf{x}})\boldsymbol{1}+ \mathsf{P}_{\boldsymbol{H}_{k-1}^{\mathsf{se}}} \boldsymbol{h}_k^{\mathsf{se}} - \sum_{\ell = 1}^{k-1} \zeta_{k\ell}^u \boldsymbol{u}_\ell^{\mathsf{se}} - \zeta_{kk}^u \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}} \boldsymbol{u}_k^{\mathsf{se}}$ is a function of $\boldsymbol{U}_{k-1}^{\mathsf{se}}$, $\boldsymbol{H}_{k-1}^{\mathsf{se}}$. Consider $k = 5$. Then $\nabla \ell_k(\boldsymbol{\eta};\boldsymbol{w},\boldsymbol{y}_1,\boldsymbol{y}_2) = \nabla_{\boldsymbol{\eta}} \ell_\mathsf{a}(\boldsymbol{\eta};\boldsymbol{y}_1)$. Recall by Assumption A1, $\partial_\eta \ell_\mathsf{a}(\eta;0) - \partial_\eta \ell_\mathsf{a}(\eta;1) \geq c$. Further, $y_{1,i} \mid \boldsymbol{U}_{k-1}^{\mathsf{se}}, \boldsymbol{H}_{k-1}^{\mathsf{se}} \sim \mathsf{Ber}(\pi(\mu_\mathsf{a}+ h_{5,i}^{\mathsf{se}}))$. 
By the decay and growth conditions on $\pi$ in Assumption A1 and the upper bound on $L_{h,55}$ proved above, we have that the right-hand side of the previous display is bounded below by $C\sqrt{n}$ when $k = 5$. When $k = 6$, we have $\nabla \ell_k(\boldsymbol{\eta};\boldsymbol{w},\boldsymbol{y}_1,\boldsymbol{y}_2) = \boldsymbol{w}\odot \boldsymbol{y}_1 \odot (\boldsymbol{y}_2 - \boldsymbol{\eta})$. Using the upper and lower bound $w$ and the fact that ${\overline{\pi}}> c > 0$ by Assumption A1, and because $\boldsymbol{y}_2 = \mu_\mathsf{y}+ \boldsymbol{h}_2 + \boldsymbol{\varepsilon}_2$, and $\boldsymbol{\varepsilon}_2$ is independent of $\boldsymbol{U}_{5}^{\mathsf{se}}$, $\boldsymbol{H}_{5}^{\mathsf{se}}$ and $\boldsymbol{w},\boldsymbol{a}$, we have that the right-hand side of the previous display is bounded below by $C\sqrt{n}$ when $k = 6$ as well. Thus, by equation [\[eq:ukse-approx-grad\]](#eq:ukse-approx-grad){reference-type="eqref" reference="eq:ukse-approx-grad"}, $$\big\| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp\boldsymbol{u}_k^{\mathsf{se}} \big\|_{L_2} \geq C/\sqrt{n} - C\| \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{v}_k^{\mathsf{se}} \|_{L_2}/\sqrt{n}.$$ Combining this with $C\|\mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{v}_k^{\mathsf{se}}\|_{L_2} \geq C\sqrt{n}\| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{u}_k^{\mathsf{se}} \|_{L_2}$ implies $\|\mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{v}_k^{\mathsf{se}}\|_{L_2} \geq c > 0$. By the second equation in the first line of equation [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}, this implies that $L_{h,kk} \geq c > 0$. 
Next, observe that the KKT condition for equation [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} is equivalent to $$\boldsymbol{v}_k = \nabla \Omega_k^*\Big( \boldsymbol{g}_k^{\mathsf{se}} - \sum_{\ell=1}^{k-1}\zeta_{k\ell}^v \boldsymbol{v}_\ell^{\mathsf{se}} - \zeta_{kk}^v \boldsymbol{v}_k^{\mathsf{se}} \Big).$$ By Assumption A1 (strong convexity of $\Omega_k$), we have that $\nabla \Omega_k^*$ is $C$-Lipschitz. Thus, using that $\| \zeta_{kk}^v \boldsymbol{v}_k^{\mathsf{se}} - \zeta_{kk}^v \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}} \boldsymbol{v}_k^{\mathsf{se}} \|_{L_2} \leq C \| \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{v}_k^{\mathsf{se}} \|_{L_2}$, (which we have established above) and that $\|\mathsf{P}_{\boldsymbol{G}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{g}_k^{\mathsf{se}}\|_{L_2} = \sqrt{p}\|\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{u}_k^{\mathsf{se}}\|_{L_2}$ (by the first equation in the first line of [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}), $$\big\| \boldsymbol{v}_k - \nabla \Omega_k^* \big( \mathsf{P}_{\boldsymbol{G}_{k-1}^{\mathsf{se}}} \boldsymbol{g}_k^{\mathsf{se}} - \sum_{\ell=1}^{k-1} \zeta_{k\ell}^v \boldsymbol{v}_\ell^{\mathsf{se}} - \zeta_{kk}^v \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}\boldsymbol{v}_k^{\mathsf{se}} \big) \big\|_{L_2} \leq C\sqrt{n}\| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{u}_k^{\mathsf{se}} \|_{L_2}.$$ Because $\nabla \Omega_k^* \big( \mathsf{P}_{\boldsymbol{G}_{k-1}^{\mathsf{se}}} \boldsymbol{g}_k^{\mathsf{se}} - \sum_{\ell=1}^{k-1} \zeta_{k\ell}^v \boldsymbol{v}_\ell^{\mathsf{se}} - \zeta_{kk}^v \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}\boldsymbol{v}_k^{\mathsf{se}} \big)$ is independent of $\boldsymbol{g}_k^{\mathsf{se},\perp}$, their inner product in $L_\mathsf{v}^2$ is 0. 
Thus, the previous display and the fact that $\|\boldsymbol{g}_k^{\mathsf{se},\perp}\|_{L_2}$ is either 0 or $\sqrt{p}$ implies $\langle \boldsymbol{g}_k^{\mathsf{se},\perp}, \boldsymbol{v}_k \rangle_{L_2} \leq Cn \| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{u}_k^{\mathsf{se}} \|_{L_2}$. Recalling that $\zeta_{kk}^u = \langle \boldsymbol{g}_k^{\mathsf{se},\perp},\boldsymbol{v}_k^{\mathsf{se}}\rangle_{L_2}/\| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{u}_k^{\mathsf{se}} \|_{L_2}$, we conclude that $\| \zeta_{kk}^u \boldsymbol{u}_k^{\mathsf{se}} - \zeta_{kk}^u \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}} \boldsymbol{u}_k^{\mathsf{se}} \|_{L_2} = |\langle \boldsymbol{g}_k^{\mathsf{se},\perp},\boldsymbol{v}_k^{\mathsf{se}}\rangle_{L_2}| \leq C n\| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{u}_k^{\mathsf{se}} \|_{L_2}$. Using equation [\[eq:ukse-is-ellk-deriv\]](#eq:ukse-is-ellk-deriv){reference-type="eqref" reference="eq:ukse-is-ellk-deriv"} and the fact that $\nabla \ell_k$ is $C$-Lipschitz, we have that $$\Big\| n\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}\boldsymbol{u}_k^{\mathsf{se}} - \nabla \ell_k\big( (\nu_{k,0} + \nu_{k,\mathsf{x}})\boldsymbol{1} + \boldsymbol{h}_k^{\mathsf{se}} - \sum_{\ell = 1}^{k-1} \zeta_{k\ell}^u \boldsymbol{u}_\ell^{\mathsf{se}} - \zeta_{kk}^u \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}} \boldsymbol{u}_k^{\mathsf{se}} ; \boldsymbol{w},\boldsymbol{y}_1,\boldsymbol{y}_2 \big) \Big\|_{L_2} \leq Cn \| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{u}_k^{\mathsf{se}} \|_{L_2}.$$ Moreover, $$\begin{aligned} &\Big\| n\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}\boldsymbol{u}_k^{\mathsf{se}} - \nabla \ell_k\big( (\nu_{k,0} + \nu_{k,\mathsf{x}})\boldsymbol{1} + \boldsymbol{h}_k^{\mathsf{se}} - \sum_{\ell = 1}^{k-1} \zeta_{k\ell}^u \boldsymbol{u}_\ell^{\mathsf{se}} - \zeta_{kk}^u \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}} 
\boldsymbol{u}_k^{\mathsf{se}} ; \boldsymbol{w},\boldsymbol{y}_1,\boldsymbol{y}_2 \big) \Big\|_{L_2} \\ &\geq \sqrt{ \mathbb{E}\Big[ \mathrm{Tr}\Big( \mathrm{Var}\Big( n\mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}\boldsymbol{u}_k^{\mathsf{se}} - \nabla \ell_k\big( (\nu_{k,0} + \nu_{k,\mathsf{x}})\boldsymbol{1} + \boldsymbol{h}_k^{\mathsf{se}} - \sum_{\ell = 1}^{k-1} \zeta_{k\ell}^u \boldsymbol{u}_\ell^{\mathsf{se}} - \zeta_{kk}^u \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}} \boldsymbol{u}_k^{\mathsf{se}} ; \boldsymbol{w},\boldsymbol{y}_1,\boldsymbol{y}_2 \big) \Bigm| \boldsymbol{U}_{k-1}^{\mathsf{se}}, \boldsymbol{H}_{k-1}^{\mathsf{se}} \Big) \Big) \Big] }. \end{aligned}$$ For both $k =5$ and $6$, Assumption A1 gives $\nabla_{\eta_i} \ell_{k,i}(\eta_i;w_i,y_{1,i},y_{2,i}) > c > 0$. Because $\boldsymbol{h}_k^{\mathsf{se}} - \mathsf{P}_{\boldsymbol{H}_{k-1}^{\mathsf{se}}} \boldsymbol{h}_k^{\mathsf{se}}$ is independent of $\boldsymbol{U}_{k-1}^{\mathsf{se}},\boldsymbol{H}_{k-1}^{\mathsf{se}}$, and everything else appearing inside the conditional variance in the preceding display, and because its coordinate-wise variance is given by $\| \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}} \boldsymbol{v}_k \|_{L_2}$ by equation [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}, we have that the conditional variance in the preceding display is, with probability 1, coordinate-wise larger than $c\| \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}} \boldsymbol{v}_k \|_{L_2}^2$. Thus, the right-hand side of the previous display is bounded below by $\sqrt{n} \| \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{v}_k \|_{L_2}$. Combining the previous two displays thus gives $\| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{u}_k^{\mathsf{se}} \|_{L_2} \geq \| \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{v}_k^{\mathsf{se}} \|_{L_2}/\sqrt{n}$. 
We showed above that $\| \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{v}_k^{\mathsf{se}} \|_{L_2} \geq c > 0$. Thus, $\| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{u}_k^{\mathsf{se}} \|_{L_2} \geq c/\sqrt{n}$. By the first equation in the first line of equation [\[eq:fixpt-general\]](#eq:fixpt-general){reference-type="eqref" reference="eq:fixpt-general"}, this implies $L_{g,kk} \geq c/\sqrt{n} > 0$.\ **Upper bounds on $\zeta_{kk}^v$ and $\zeta_{kk}^u$.** For $k \leq 4$, we have shown that $\zeta_{kk}^v = \zeta_{kk}^u = 0$. For $k = 5,6$, we use that $\zeta_{kk}^u = \langle \boldsymbol{g}_k^{\mathsf{se},\perp} , \boldsymbol{v}_k^{\mathsf{se}} \rangle_{L_2} / \| \mathsf{P}_{\boldsymbol{U}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{u}_k^{\mathsf{se}} \|_{L_2}$ and $\zeta_{kk}^v = \langle \boldsymbol{h}_k^{\mathsf{se},\perp} , \boldsymbol{u}_k^{\mathsf{se}} \rangle_{L_2} / \| \mathsf{P}_{\boldsymbol{V}_{k-1}^{\mathsf{se}}}^\perp \boldsymbol{v}_k^{\mathsf{se}} \|_{L_2}$, which we showed above. Then $|\zeta_{kk}^u| \leq Cn$ and $|\zeta_{kk}^v| \leq C$ by the upper bounds $\|\boldsymbol{v}_k\|_{L_2} \leq C$, $\|\boldsymbol{u}_k\|_{L_2} \leq C/\sqrt{n}$ and the lower bounds $\| \mathsf{P}_{\boldsymbol{U}_{k-1}^\mathsf{se}}^\perp \boldsymbol{u}_k^{\mathsf{se}}\|{L_2} \geq C/\sqrt{n}$, $\| \mathsf{P}_{\boldsymbol{V}_{k-1}^\mathsf{se}}^\perp \boldsymbol{v}_k^{\mathsf{se}}\|{L_2} \geq C$ we established above.\ **Lower bounds on $\zeta_{kk}^v$ and $\zeta_{kk}^u$.** Because $L_{g,kk} > 0$ and $L_{h,kk} > 0$ for $k=5,6$, the indices $5$ and $6$ are innovative with respect to both $\boldsymbol{K}_g$ and $\boldsymbol{K}_h$. 
Thus, by equations [\[eq:zeta56-uv-explicit\]](#eq:zeta56-uv-explicit){reference-type="eqref" reference="eq:zeta56-uv-explicit"} and [\[eq:zeta56-uv-explicit\]](#eq:zeta56-uv-explicit){reference-type="eqref" reference="eq:zeta56-uv-explicit"}, we have $\zeta_{kk}^u = \widehat{\zeta}{}_{kk}^u$ and $\zeta_{kk}^v = \widehat{\zeta}{}_{kk}^v$ for $k = 5,6$. By equation [\[eq:zeta-hat-56-u\]](#eq:zeta-hat-56-u){reference-type="eqref" reference="eq:zeta-hat-56-u"}, the upper bounds on $\zeta_{55}^v$ and $\zeta_{66}^v$ we just established, and the fact that $\Omega_5$ and $\Omega_6$ are $C$-strongly convex by Assumption A1, we conclude that $\zeta_{kk}^u \geq cn$ for $k = 5,6$. By equation [\[eq:zeta-hat-56-v\]](#eq:zeta-hat-56-v){reference-type="eqref" reference="eq:zeta-hat-56-v"}, the upper bounds on $\zeta_{kk}^v$, $\zeta_{kk}^u$ we just established, the fact that $w(\,\cdot\,)^{-1}$ is upper bounded by $C$, the fact that $\ell_\mathsf{a}$ is $c$-strongly convex, and the growth and decay conditions on $\pi$ (all by Assumption A1), we conclude $\zeta_{kk}^v \geq cn$ for $k = 5,6$.\ **No dependence on $\boldsymbol{u}_5^{\mathsf{se}}$ and $\boldsymbol{v}_5^{\mathsf{se}}$ (i.e., $\zeta_{65}^v = \zeta_{65}^u = 0$).** Because indices $5$ and $6$ are innovative with respect to both $\boldsymbol{K}_g$ and $\boldsymbol{K}_h$. Thus, by equations [\[eq:zeta56-uv-explicit\]](#eq:zeta56-uv-explicit){reference-type="eqref" reference="eq:zeta56-uv-explicit"} and [\[eq:zeta56-uv-explicit\]](#eq:zeta56-uv-explicit){reference-type="eqref" reference="eq:zeta56-uv-explicit"}, we have $\zeta_{65}^u = \widehat{\zeta}{}_{65}^u$ and $\zeta_{65}^v = \widehat{\zeta}{}_{65}^v$ for $k = 5,6$. 
Combining the expression for $\zeta_{65}^u = \widehat{\zeta}{}_{65}^u$ in equation [\[eq:zeta56-uv-explicit\]](#eq:zeta56-uv-explicit){reference-type="eqref" reference="eq:zeta56-uv-explicit"} with the expression for $\zeta_{65}^v = \widehat{\zeta}{}_{65}^v$ in equation [\[eq:zeta-hat-56-v\]](#eq:zeta-hat-56-v){reference-type="eqref" reference="eq:zeta-hat-56-v"}, we have that either $\zeta_{65}^u = \zeta_{65}^v = 0$ or $$\begin{aligned} 1 &= \frac{ n\mathbb{E}\Big[ \mathrm{Tr}\Big( \Big( {\mathbf I}_p + \frac{\nabla^2 \Omega_6(\boldsymbol{v}_6^{\mathsf{se}})}{\zeta_{66}^v} \Big)^{-1} \Big( {\mathbf I}_p + \frac{\nabla^2 \Omega_5(\boldsymbol{v}_5^{\mathsf{se}})}{\zeta_{55}^v} \Big)^{-1} \Big) \Big] \mathbb{E}\Big[ \frac{\pi(h_{1,i}^{\mathsf{se}})}{(1 + (\ell_\mathsf{a}^*)''(nu_{5,i}^{\mathsf{se}};1)/\zeta_{55}^u)(1 + nw(h_{1,i}^{\mathsf{se}})^{-1}/\zeta_{66}^u)} \Big] }{ \zeta_{55}^u\zeta_{55}^v\zeta_{66}^u\zeta_{66}^v } \\ &= \frac{ \mathbb{E}\Big[ \mathrm{Tr}\Big( \Big( {\mathbf I}_p + \frac{\nabla^2 \Omega_6(\boldsymbol{v}_6^{\mathsf{se}})}{\zeta_{66}^v} \Big)^{-1} \Big( {\mathbf I}_p + \frac{\nabla^2 \Omega_5(\boldsymbol{v}_5^{\mathsf{se}})}{\zeta_{55}^v} \Big)^{-1} \Big) \Big] \mathbb{E}\Big[ \frac{\pi(h_{1,i}^{\mathsf{se}})}{(1 + (\ell_\mathsf{a}^*)''(nu_{5,i}^{\mathsf{se}};1)/\zeta_{55}^u)(1 + nw(h_{1,i}^{\mathsf{se}})^{-1}/\zeta_{66}^u)} \Big] }{ \mathbb{E}\Big[ \frac{1-\pi(h_{1,i}^{\mathsf{se}})}{1 + (\ell_\mathsf{a}^*)''(nu_{5,i}^{\mathsf{se}};0)/\zeta_{55}^u} + \frac{\pi(h_{1,i}^{\mathsf{se}})}{1 + (\ell_\mathsf{a}^*)''(nu_{5,i}^{\mathsf{se}};1)/\zeta_{55}^u} \Big] \mathbb{E}\Big[ \mathrm{Tr}\Big[ \Big( {\mathbf I}_p + \frac{\nabla^2\Omega_6(\boldsymbol{v}_6^{\mathsf{se}})}{\zeta_{66}^v} \Big)^{-1} \Big] \Big] } \\ &<1, \end{aligned}$$ where the last equality follows from the fact that $\Big\|\Big({\mathbf I}_p + \frac{\nabla^2 \Omega_6(\boldsymbol{v}_6^{\mathsf{se}})}{\zeta_{66}^v}\Big)^{-1}\Big\|_{{\rm op}} < 1$ and 
$\frac{\pi(h_{1,i}^{\mathsf{se}})}{(1 + (\ell_\mathsf{a}^*)''(nu_{5,i}^{\mathsf{se}};1)/\zeta_{55}^u)(1 + nw(h_{1,i}^{\mathsf{se}})^{-1}/\zeta_{66}^u)} < \frac{1-\pi(h_{1,i}^{\mathsf{se}})}{1 + (\ell_\mathsf{a}^*)''(nu_{5,i}^{\mathsf{se}};0)/\zeta_{55}^u}+\frac{\pi(h_{1,i}^{\mathsf{se}})}{1 + (\ell_\mathsf{a}^*)''(nu_{5,i}^{\mathsf{se}};1)/\zeta_{55}^u}$. Because we cannot have $1 < 1$, we conclude that $\zeta_{65}^u = \zeta_{65}^v = 0$.\ **Upper bounds on $\zeta_{51}^v$, $\zeta_{61}^v$.** This follows now from the expressions for $\widehat{\zeta}{}_{51}^v$ and $\widehat{\zeta}{}_{61}^v$ in equation [\[eq:zeta-hat-56-v\]](#eq:zeta-hat-56-v){reference-type="eqref" reference="eq:zeta-hat-56-v"}, together with the upper bounds on $\pi'(\,\cdot\,)$, $w'(\cdot)$, and $w(\,\cdot\,)^{-1}$ in Assumption A1 as well as the upper and lower bounds on the fixed point parameters we have established above. Note that although we have not bounded $\|\boldsymbol{u}_5^{\mathsf{se},1}\|_{L_2}$ and $\|\boldsymbol{u}_{5,i}^{\mathsf{se},0}\|_{L_2}$ (we only bounded $\| \boldsymbol{u}_5^{\mathsf{se}} \|_{L_2}$), these are bounded by noting that, fixing $y_{1,i}^{\mathsf{se}} = 1$ or $y_{1,i}^{\mathsf{se}} = 0$, the objective [\[eq:SE-opt\]](#eq:SE-opt){reference-type="eqref" reference="eq:SE-opt"} is $cn$-strongly convex (by the lower bound on $\zeta_{kk}^u$) and has derivative at $\boldsymbol{u}= {\boldsymbol 0}$ whose expected $\ell_2$-norm is bounded above by $C\sqrt{n}$ (by the upper bound on $\| \boldsymbol{h}_k^{\mathsf{se}} \|_{L_2}$), the upper bounds on $\nu_{5,0},\nu_{5,\mathsf{x}}$, and the fact that (using Fenchel-Legendre duality) $\nabla_{\boldsymbol{u}} \ell_5^*({\boldsymbol 0});\boldsymbol{w},\boldsymbol{y}_1,\boldsymbol{y}_2) = \mathop{\mathrm{arg\,min}}_{\boldsymbol{\eta}} \ell_\mathsf{a}(\boldsymbol{\eta};\boldsymbol{y}_1)$ is bounded by $C\sqrt{n}$ (by the bounds on the derivatives of $\ell_\mathsf{a}$ at ${\boldsymbol 0}$ and its strong convexity).\ **Lower bound on 
$\zeta_{51}^v$.** By equation [\[eq:zeta-hat-56-v\]](#eq:zeta-hat-56-v){reference-type="eqref" reference="eq:zeta-hat-56-v"}, the decay condition on $\pi$ in Assumption A1, the bound on $\nu_{5,0}$ and $\nu_{5,\mathsf{x}}$ we have established above, and the fact that $u_{5,i}^{\mathsf{se},1} - u_{5,i}^{\mathsf{se},0} < -c$, we get that $-\zeta_{51}^v \gtrsim c$. (Note that $u_{5,i}^{\mathsf{se},1} - u_{5,i}^{\mathsf{se},0} < -c$ because by Assumption A1, $\partial_\eta\ell_\mathsf{a}(\eta;0) - \partial_\eta \ell_\mathsf{a}(\eta;1) \geq c >0$). ◻ # Degrees of freedom adjustment factors {#AppDOF} As justified in , we assume without loss of generality that $\boldsymbol{\Sigma}= {\mathbf I}_p$. ## Existence and uniqueness of degrees-of-freedom adjustments Our first claim asserts that equations [\[eq:dof-emp-def\]](#eq:dof-emp-def){reference-type="eqref" reference="eq:dof-emp-def"} have unique solutions. **Lemma 27**. *Under Assumption A1, equations [\[eq:dof-emp-def\]](#eq:dof-emp-def){reference-type="eqref" reference="eq:dof-emp-def"} have unique solutions $(\widehat{\zeta}{}_\mathsf{y}^\theta, \widehat{\zeta}{}_\mathsf{y}^\eta)$ and $(\widehat{\zeta}{}_\mathsf{a}^\theta, \widehat{\zeta}{}_\mathsf{a}^\eta)$ with strictly positive components.* *Proof.* By Assumption A1, the function $\ell_\mathsf{a}$ is strongly convex, whence $\ddot\ell_\mathsf{a}(\widehat{\eta}{}_{\mathsf{a},i}) > 0$ for all $i$. 
Thus, we have $$\frac{1}{n} \sum_{i=1}^n \frac{\ddot\ell_\mathsf{a}(\widehat{\eta}{}_{\mathsf{a},i})}{\widehat{\zeta}{}_{\mathsf{a}}^\eta \ddot\ell_\mathsf{a}(\widehat{\eta}{}_{\mathsf{a},i})+1} = \mathbb{E}\Big[\frac{1}{\widehat{\zeta}{}_\mathsf{a}^\eta + X_\mathsf{a}}\Big], \qquad \text{where}\quad X_\mathsf{a}\sim \frac{1}{n} \sum_{i=1}^n \delta_{1/\ddot\ell_\mathsf{a}(\widehat{\eta}{}_{\mathsf{a},i})}.$$ Similarly, $$\frac{1}{n} \mathrm{Tr}\Big(\big(\widehat{\zeta}{}_\mathsf{a}^\theta {\mathbf I}_p + \nabla^2 \Omega_\mathsf{a}(\widehat{\boldsymbol{\theta}}{}_\mathsf{a})\big)^{-1}\Big) = \frac{p}{n}\mathbb{E}\Big[\frac{1}{\widehat{\zeta}{}_\mathsf{a}^\theta + X_\mathsf{y}}\Big], \qquad \text{where}\quad X_\mathsf{y}\sim \frac{1}{p} \sum_{i=1}^n \delta_{1/\sigma_i^2},$$ and $\sigma_i^2$, $i \in [p]$ are the singular values of $\nabla^2 \Omega_\mathsf{a}(\widehat{\boldsymbol{\theta}}{}_\mathsf{a})$. **Lemma 28**. *If for some $\gamma,\gamma' > 0$, $X_1,X_2$ are random variables supported on $[\gamma,\infty)$, then the system of equations $$\label{eq:iterated-stieltjes} \zeta_2 = \mathbb{E}\Big[ \frac{1}{\zeta_1 + X_1} \Big], \qquad \zeta_1 = \gamma' \mathbb{E}\Big[ \frac{1}{\zeta_2 + X_2} \Big],$$ has a unique non-negative solution $\zeta_1,\zeta_2$, and this solution is in fact positive.* follows from , which we prove below. ◻ *Proof of .* The positivity of any solutions follows from the fact that $X_1,X_2$ are supported on $[c,\infty)$. Moreover, at any solution, we have $$\zeta_2 \leq \frac{1}{c}, \qquad \zeta_1 \leq \frac{\gamma'}{ c}.$$ Define functions $f_1,f_2$ by $f_1(\zeta_1) = \mathbb{E}\Big[\frac{1}{\zeta_1 + X_1}\Big]$ and $f_2(\zeta_2) = \gamma'\mathbb{E}\Big[\frac{1}{\zeta_2 + X_2}\Big]$. Note that $f_1$ is defined in $[0,\infty)$, is positive, continuous, and strictly decreasing, with $f_1(0) = \mathbb{E}[1/X_1]$ and $\lim_{\zeta_1 \rightarrow \infty} f_1(\zeta_1) = 0$. 
Thus, $f_1^{-1}\colon (0,\mathbb{E}[1/X_1]] \to [0,\infty)$ is a non-negative, strictly decreasing function satisfying $\lim_{\zeta_2 \rightarrow 0} f_1^{-1}(\zeta_2) = \infty$ and $f_1^{-1}(\mathbb{E}[1/X_1]) = 0$. Likewise, $f_2$ is positive, continuous, and strictly decreasing, with $f_2(0) = \gamma'\mathbb{E}[1/X_2]$ and $f_2(\mathbb{E}[1/X_1]) > 0$. Thus, by the intermediate value theorem on $[\epsilon,\mathbb{E}[1/X_1]]$ for sufficiently small $\epsilon$, there exists $\zeta_2 \in [\epsilon,\mathbb{E}[1/X_1]]$ such that $f_1^{-1}(\zeta_2) = f_2(\zeta_2)$. Setting $\zeta_1 = f_1^{-1}(\zeta_2) = f_2(\zeta_2)$ gives a positive solution to the system of equations in the lemma. We now show uniqueness. At any solution, $f_1(\zeta_1) = f_2^{-1}(\zeta_1)$ and $$\begin{aligned} -f_1'(\zeta_1) &= \mathbb{E}\Big[ \frac{1}{(\zeta_1 + X_1)^2} \Big] < \frac{1}{\zeta_1}\mathbb{E}\Big[ \frac{1}{\zeta_1 + X_1} \Big] = \frac{\zeta_2}{\gamma'\mathbb{E}\big[\frac{1}{\zeta_2 + X_2}\big]} \\ &< \frac{\zeta_2^2}{\frac{\gamma'(\zeta_2+\gamma)}{\zeta_2}\mathbb{E}\big[\big(\frac{\zeta_2}{\zeta_2 + X_2}\big)^2\big]} = \frac{\zeta_2^2}{-\zeta_2(\zeta_2+\gamma)f_2'(\zeta_2)} = \frac{1}{\big(1 + \frac{\gamma}{\zeta_2}\big)} \frac{1}{(-f_2'(\zeta_2))} \leq \frac{-(f_2^{-1})'(\zeta_1)}{1 + \gamma^2}, \end{aligned}$$ where the final inequality uses $\zeta_2 \leq 1/\gamma$. That is, at any solution $\zeta_1$, $f_1$ is decreasing strictly more slowly than $f_2^{-1}$, which implies there can be only one solution. ◻ ## Concentration of degrees-of-freedom adjustments This section is devoted to the proof of (b).
Define $$\begin{gathered} f_\eta(\zeta^\eta) \ensuremath{: =}\frac{1}{n} \sum_{i=1}^n \mathbb{E}\Big[ \frac{1}{\zeta^\eta + \ddot \ell_\mathsf{a}^*(n u_{5,i}^{\mathsf{se}};y_{1,i}^{\mathsf{se}})} \Big], \qquad f_\theta(\zeta^\theta) \ensuremath{: =}\frac{1}{n} \mathbb{E}\Big[ \mathrm{Tr}\Big( \big(\zeta^\theta {\mathbf I}_p + \nabla^2 \Omega_\mathsf{a}(\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^{\mathsf{se}})\big)^{-1} \Big) \Big], \\ \widehat{f}_\eta(\zeta^\eta) \ensuremath{: =}\frac{1}{n} \sum_{i=1}^n \frac{1}{\zeta^\eta + \ddot \ell_\mathsf{a}^*(n u_{5,i}^{\mathsf{po}};y_{1,i})} , \qquad \widehat{f}_\theta(\zeta^\theta) \ensuremath{: =}\frac{1}{n} \mathrm{Tr}\Big( \big(\zeta^\theta {\mathbf I}_p + \nabla^2 \Omega_\mathsf{a}(\boldsymbol{v}_5^{\mathsf{po}})\big)^{-1} \Big). \end{gathered}$$ The quantities $\zeta_\mathsf{a}^\eta,\zeta_\mathsf{a}^\theta$ solve $\zeta_\mathsf{a}^\theta = f_\eta(\zeta_\mathsf{a}^\eta)$ and $\zeta_\mathsf{a}^\eta = f_\theta(\zeta_\mathsf{a}^\theta)$ by . The quantities $\widehat{\zeta}{}_\mathsf{a}^\eta,\widehat{\zeta}{}_\mathsf{a}^\theta$ solve $\widehat{\zeta}{}_\mathsf{a}^\theta = \widehat{f}_\eta(\widehat{\zeta}{}_\mathsf{a}^\eta)$ and $\widehat{\zeta}{}_\mathsf{a}^\eta = \widehat{f}_\theta(\widehat{\zeta}{}_\mathsf{a}^\theta)$ by [\[eq:dof-emp-def\]](#eq:dof-emp-def){reference-type="ref" reference="eq:dof-emp-def"}. The result follows from the point-wise concentration of $\widehat{f}_\eta,\widehat{f}_\theta$ on $f_\eta,f_\theta$ and the monotonicity properties of $f_\eta,f_\theta$, as we now show.
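Both systems above have the scalar structure of equation [\[eq:iterated-stieltjes\]](#eq:iterated-stieltjes){reference-type="eqref" reference="eq:iterated-stieltjes"}, and the crossing argument in the proof of Lemma 28 is easy to carry out numerically. The sketch below is only an illustration: it uses hypothetical discrete distributions for $X_1,X_2$ (standing in for the empirical curvature and Hessian-spectrum distributions), not the estimators themselves.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical discrete distributions supported on [gamma, infinity),
# playing the roles of X_1, X_2 in Lemma 28.
gamma, gamma_p = 0.5, 2.0
X1 = gamma + rng.exponential(1.0, size=1000)
X2 = gamma + rng.exponential(1.0, size=1000)

f1 = lambda z1: np.mean(1.0 / (z1 + X1))            # zeta_2 = f1(zeta_1)
f2 = lambda z2: gamma_p * np.mean(1.0 / (z2 + X2))  # zeta_1 = f2(zeta_2)

# h is continuous, positive at 0, negative once zeta_1 > gamma'/gamma, and
# has a single sign change by the uniqueness argument, so bisection applies.
h = lambda z1: f2(f1(z1)) - z1
lo, hi = 0.0, gamma_p / gamma + 1.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if h(mid) > 0 else (lo, mid)

zeta1 = 0.5 * (lo + hi)
zeta2 = f1(zeta1)
# The pair solves both equations and respects the a-priori bounds of the proof.
assert abs(zeta1 - f2(zeta2)) < 1e-8
assert 0 < zeta2 <= 1 / gamma and 0 < zeta1 <= gamma_p / gamma
```

Bisection is valid here precisely because of the uniqueness argument: $h$ crosses zero exactly once on $[0,\gamma'/\gamma]$.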
We will find constants $\gamma_2,\gamma_2',\gamma_2'' > 0$, which are upper bounded by a $\mathcal{P}_{\mathrm{model}}$-dependent constant $C$, such that for $\epsilon < \gamma_2''$, $$\label{eq:f-theta-eta-bounds} f_\theta(\zeta_\mathsf{a}^\theta + \gamma_2 \epsilon) \geq \zeta_\mathsf{a}^\eta - \epsilon + \gamma_2'\epsilon, \qquad f_\eta(\zeta_\mathsf{a}^\eta - \epsilon) \leq \zeta_\mathsf{a}^\theta + \gamma_2 \epsilon - \gamma_2'\epsilon.$$ Assume, for now, we can find such constants. By Assumption A1, the terms in the sum in the definition of $\widehat{f}_\eta(\zeta^\eta)$ are $C$-Lipschitz in $n u_{5,i}^{\mathsf{po}}$. Moreover, because $\boldsymbol{v}_5^{\mathsf{po}}\mapsto \nabla^2 \Omega_\mathsf{a}(\boldsymbol{v}_5^{\mathsf{po}})$ is $C\sqrt{n}$-Lipschitz in Frobenius norm and $\big\|\big(\zeta_\mathsf{a}^\theta {\mathbf I}_p + \nabla^2\Omega_\mathsf{a}(\boldsymbol{v}_5^{\mathsf{po}})\big)^{-1}\big\|_{{\rm op}} \leq C$, the function $\boldsymbol{v}_5^{\mathsf{po}} \mapsto \big(\zeta_\mathsf{a}^\theta {\mathbf I}_p + \nabla^2 \Omega_\mathsf{a}(\boldsymbol{v}_5^{\mathsf{po}})\big)^{-1}$ is $C\sqrt{n}$-Lipschitz in Frobenius norm. This implies that $\widehat{f}_\theta(\zeta_\mathsf{a}^\theta)$ is $C$-Lipschitz in $\boldsymbol{v}_5^{\mathsf{po}}$. By , with probability at least $1 - Ce^{-cn\epsilon^r}$, $\widehat{f}_\theta(\zeta_\mathsf{a}^\theta + \gamma_2\epsilon) \geq \zeta_\mathsf{a}^\eta - \epsilon$ and $\widehat{f}_\eta(\zeta_\mathsf{a}^\eta - \epsilon) \leq \zeta_\mathsf{a}^\theta + \gamma_2 \epsilon.$ On this event, $\widehat{f}_\eta(\zeta_\mathsf{a}^\eta - \epsilon) \leq \zeta_\mathsf{a}^\theta + \gamma_2\epsilon = \widehat{f}_\theta^{-1}(\widehat{f}_\theta(\zeta_\mathsf{a}^\theta + \gamma_2\epsilon)) \leq \widehat{f}_\theta^{-1}(\zeta_\mathsf{a}^\eta - \epsilon)$, where we have used that $\widehat{f}_\theta^{-1}$ is strictly decreasing (see proof of ).
The reverse inequality holds at $\zeta^\eta = \frac{1}{n} \mathrm{Tr}\Big(\nabla^2\Omega_\mathsf{a}(\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^{\mathsf{po}})^{-1}\Big)$, where $\widehat{f}_\eta(\zeta^\eta) > 0$ and $\widehat{f}_\theta^{-1}(\zeta^\eta) = 0$. By continuity (see proof of ), the intermediate value theorem shows that the solution must satisfy $\widehat{\zeta}{}_\mathsf{a}^\eta \geq \zeta_\mathsf{a}^\eta - \epsilon$ on this event. Likewise, on this event, we have $\widehat{f}_\theta(\zeta_\mathsf{a}^\theta + \gamma_2\epsilon) \geq \zeta_\mathsf{a}^\eta - \epsilon = \widehat{f}_\eta^{-1}(\widehat{f}_\eta(\zeta_\mathsf{a}^\eta-\epsilon)) \geq \widehat{f}_\eta^{-1}(\zeta_\mathsf{a}^\theta + \gamma_2\epsilon)$. The reverse inequality holds for $\zeta^\theta$ near 0: $\widehat{f}_\theta(0) = \frac{1}{n} \mathrm{Tr}\Big(\nabla^2\Omega_\mathsf{a}(\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^{\mathsf{po}})^{-1}\Big) > 0$ and $\lim_{\zeta^\theta \downarrow 0}\widehat{f}_\eta^{-1}(\zeta^\theta) = \infty$. Thus, the solution must satisfy $\widehat{\zeta}{}_\mathsf{a}^\theta \leq \zeta_\mathsf{a}^\theta + \gamma_2 \epsilon$ on this event. By a completely analogous argument, if we can find constants $\gamma_2,\gamma_2',\gamma_2''$, upper bounded by $\mathcal{P}_{\mathrm{model}}$-dependent constants, such that for $\epsilon < \gamma_2''$, $f_\theta(\zeta_\mathsf{a}^\theta - \epsilon) \leq \zeta_\mathsf{a}^\eta + \gamma_2\epsilon - \gamma_2'\epsilon$ and $f_\eta(\zeta_\mathsf{a}^\eta + \gamma_2\epsilon) \geq \zeta_\mathsf{a}^\theta - \epsilon + \gamma_2'\epsilon,$ then with probability at least $1 - Ce^{-cn\epsilon^r}$, $\widehat{\zeta}{}_\mathsf{a}^\eta \leq \zeta_\mathsf{a}^\eta + \gamma_2\epsilon$ and $\widehat{\zeta}{}_\mathsf{a}^\theta \geq \zeta_\mathsf{a}^\theta - \epsilon$.
We conclude that $\widehat{\zeta}{}_\mathsf{a}^\eta \ensuremath{\stackrel{\bullet}{=}} \zeta_\mathsf{a}^\eta$ and $\widehat{\zeta}{}_\mathsf{a}^\theta \ensuremath{\stackrel{\bullet}{=}} \zeta_\mathsf{a}^\theta$. We now turn to establishing the claim [\[eq:f-theta-eta-bounds\]](#eq:f-theta-eta-bounds){reference-type="eqref" reference="eq:f-theta-eta-bounds"}. By Assumption A1, the functions $\ell_\mathsf{a}^*$ and $\Omega_\mathsf{a}$ are $C_0$-strongly smooth and $c_0$-strongly convex. For the remainder of the proof, $C_0,c_0$ are $\mathcal{P}_{\mathrm{model}}$-dependent constants whose value does *not* change at each appearance. Letting $\delta \ensuremath{: =} n/p$, the equations $\zeta_\mathsf{a}^\theta = f_\eta(\zeta_\mathsf{a}^\eta)$, $\zeta_\mathsf{a}^\eta = f_\theta(\zeta_\mathsf{a}^\theta)$, $\widehat{\zeta}{}_\mathsf{a}^\theta = \widehat{f}_\eta(\widehat{\zeta}{}_\mathsf{a}^\eta)$, and $\widehat{\zeta}{}_\mathsf{a}^\eta = \widehat{f}_\theta(\widehat{\zeta}{}_\mathsf{a}^\theta)$ imply $\zeta_\mathsf{a}^\theta \leq 1/c_0$ and $\zeta_\mathsf{a}^\eta \leq \delta^{-1}/c_0$. These inequalities then imply the lower bounds $\zeta_\mathsf{a}^\theta \geq \frac{1}{1/c_0 + C_0} \ensuremath{: =}c_1$ and $\zeta_\mathsf{a}^\eta \geq \frac{p}{n(1/c_0 + C_0)} = \delta^{-1}c_1$. The quantity $c_1$ will not change in future appearances.
We have $$\begin{aligned} -f_\eta'(\zeta_\mathsf{a}^\eta) &= \frac{1}{n} \sum_{i=1}^n \mathbb{E}\Big[ \frac{1}{\big(\zeta_\mathsf{a}^\eta + \ddot \ell_\mathsf{a}^*(n u_{5,i}^{\mathsf{se}};y_{1,i}^{\mathsf{se}})\big)^2} \Big] < \frac{1}{n\zeta_\mathsf{a}^\eta} \sum_{i=1}^n \mathbb{E}\Big[ \frac{1}{\zeta_\mathsf{a}^\eta + \ddot \ell_\mathsf{a}^*(n u_{5,i}^{\mathsf{se}};y_{1,i}^{\mathsf{se}})} \Big] = \frac{\zeta_\mathsf{a}^\theta}{ \frac{1}{n} \mathbb{E}\Big[ \mathrm{Tr}\Big( \big(\zeta_\mathsf{a}^\theta {\mathbf I}_p + \nabla^2 \Omega_\mathsf{a}(\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^{\mathsf{se}})\big)^{-1} \Big) \Big] } \\ &< \frac{\zeta_\mathsf{a}^\theta}{ \frac{\zeta_\mathsf{a}^\theta+c_0}{n} \mathbb{E}\Big[ \mathrm{Tr}\Big( \big(\zeta_\mathsf{a}^\theta {\mathbf I}_p + \nabla^2 \Omega_\mathsf{a}(\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^{\mathsf{se}})\big)^{-2} \Big) \Big] } = \frac{\zeta_\mathsf{a}^\theta}{(\zeta_\mathsf{a}^\theta + c_0)(-f_\theta'(\zeta_\mathsf{a}^\theta))} < \frac{1}{1 + c_0^2} \big(-(f_\theta^{-1})'(\zeta_\mathsf{a}^\eta)\big), \end{aligned}$$ where in the last inequality we have used that $\zeta_\mathsf{a}^\theta \leq 1/c_0$. We further have that for $\alpha < 1$, $$\begin{gathered} -f_\eta'(\alpha \zeta_\mathsf{a}^\eta) = \frac{1}{n} \sum_{i=1}^n \mathbb{E}\Big[ \frac{1}{(\alpha \zeta_\mathsf{a}^\eta + \ddot \ell_\mathsf{a}^*(n u_{5,i}^{\mathsf{se}};y_{1,i}^{\mathsf{se}}))^2} \Big] \leq \frac{1}{\alpha^2}\big(-f_\eta'(\zeta_\mathsf{a}^\eta)\big), \end{gathered}$$ where we use that $1/(\alpha \zeta_\mathsf{a}^\eta + \ddot \ell_\mathsf{a}^*(n u_{5,i}^{\mathsf{se}};y_{1,i}^{\mathsf{se}})) \leq 1/(\alpha( \zeta_\mathsf{a}^\eta + \ddot \ell_\mathsf{a}^*(n u_{5,i}^{\mathsf{se}};y_{1,i}^{\mathsf{se}})))$.
We also have that for all $\alpha > 1$, $$\begin{gathered} -f_\theta'(\alpha \zeta_\mathsf{a}^\theta) = \frac{1}{n} \mathbb{E}\Big[ \mathrm{Tr}\Big( \big(\alpha \zeta_\mathsf{a}^\theta {\mathbf I}_p + \nabla^2 \Omega_\mathsf{a}(\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^{\mathsf{se}})\big)^{-2} \Big) \Big] \leq \Big(\frac{c_1 + C_0}{\alpha c_1 + C_0}\Big)^2\big(-f_\theta'(\zeta_\mathsf{a}^\theta)\big), \end{gathered}$$ where we have used that for any singular value $\sigma_i^2$ of $\nabla^2 \Omega_\mathsf{a}(\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^{\mathsf{se}})$, we have $\frac{1}{\alpha \zeta_\mathsf{a}^\theta + \sigma_i^2} \leq \frac{c_1 + C_0}{\alpha c_1 + C_0} \frac{1}{\zeta_\mathsf{a}^\theta + \sigma_i^2}$ because $\sigma_i^2 \leq C_0$ and $\zeta_\mathsf{a}^\theta > c_1$. We therefore have that for $\epsilon > 0$, $$\begin{aligned} f_\eta(\zeta_\mathsf{a}^\eta - \epsilon) & \leq f_\eta(\zeta_\mathsf{a}^\eta) + \frac{\epsilon (-f_\eta'(\zeta_\mathsf{a}^\eta))}{(1 - \epsilon / \zeta_\mathsf{a}^\eta)^2} = \zeta_\mathsf{a}^\theta + \frac{\epsilon (-f_\eta'(\zeta_\mathsf{a}^\eta))}{(1 - \epsilon / \zeta_\mathsf{a}^\eta)^2} \leq \zeta_\mathsf{a}^\theta + \frac{\epsilon (-f_\eta'(\zeta_\mathsf{a}^\eta))}{(1 - \delta \epsilon / c_1)^2}, \quad \mbox{and} \\ % f_\theta(\zeta_\mathsf{a}^\theta + \epsilon) & \geq f_\theta(\zeta_\mathsf{a}^\theta) - \epsilon(-f_\theta'(\zeta_\mathsf{a}^\theta)) \Big( \frac{c_1 + C_0}{(1-\epsilon/\zeta_\mathsf{a}^\theta)c_1 + C_0}\Big)^2 \geq \zeta_\mathsf{a}^\eta - \epsilon \frac{1}{ (1 + c_0^2)(-f_\eta'(\zeta_\mathsf{a}^\eta))} \Big( \frac{c_1 + C_0}{ (1-\epsilon/c_1) c_1 + C_0} \Big)^2.\end{aligned}$$ Thus, we can take $\gamma_2 = (1 + c_0^2/2)(-f_\eta'(\zeta_\mathsf{a}^\eta))$, $\gamma_2' = c_0^2(-f_\eta'(\zeta_\mathsf{a}^\eta))/4$, and $\gamma_2''$ sufficiently small so that $1/(1-\delta \epsilon/c_1)^2$ and $(c_1 + C_0)/((1-\epsilon/c_1)c_1 + C_0)$ are sufficiently close to 1 to make equation
[\[eq:f-theta-eta-bounds\]](#eq:f-theta-eta-bounds){reference-type="eqref" reference="eq:f-theta-eta-bounds"} true. Note that $|f_\eta'(\zeta_\mathsf{a}^\eta)| \leq 1/c_0^2$, whence these constants are upper bounded by $C$. Switching the roles of $f_\theta$ and $f_\eta$ in the argument above likewise yields constants $\gamma_2,\gamma_2',\gamma_2''$, upper bounded by $\mathcal{P}_{\mathrm{model}}$-dependent constants, such that for $\epsilon < \gamma_2''$, we have the inequalities $$\begin{aligned} f_\theta(\zeta_\mathsf{a}^\theta - \epsilon) \leq \zeta_\mathsf{a}^\eta + \gamma_2\epsilon - \gamma_2'\epsilon, \quad \mbox{and} \quad f_\eta(\zeta_\mathsf{a}^\eta + \gamma_2\epsilon) \geq \zeta_\mathsf{a}^\theta - \epsilon + \gamma_2'\epsilon.\end{aligned}$$ This establishes the inequalities required by the analogous argument above and completes the proof of (b). # Oracle debiasing: the general case {#sec:db-proofs} The claims of  regarding oracle ASCW follow from a characterization () of a general class of debiasing constructions. This characterization will also allow us to characterize the unsuccessful debiasing constructions alluded to in . As described in , we can assume without loss of generality that $\boldsymbol{\Sigma}= {\mathbf I}_p$. ## Some consequences of the fixed point equations Our proofs require some basic consequences of the fixed point equations [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"}, which we state here. Recall that $\langle \boldsymbol{1}, \widehat{\boldsymbol{i}}{}_\mathsf{y}^f \rangle_{L_2} = 0$ by equation [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"}. Using the explicit expressions for the score in equation [\[eq:score-explicit\]](#eq:score-explicit){reference-type="eqref" reference="eq:score-explicit"}, we have $\mathbb{E}[\pi_{\zeta_\mathsf{y}^\eta}(\eta_{\mathsf{a},i}^f)(y_i^f - \widehat{\eta}{}_{\mathsf{y},i}^{f,\mathsf{loo}})] = 0$.
Using Gaussian integration by parts, this is equivalent to $0 = \mathbb{E}[\pi_{\zeta_\mathsf{y}^\eta}(\eta_{\mathsf{a},i}^f)](\mu_\mathsf{y}- \widehat{\mu}{}_\mathsf{y}^f) + \mathbb{E}\big[\pi_{\zeta_\mathsf{y}^\eta}'(\eta_{\mathsf{a},i}^f)\big]\langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{\theta}_\mathsf{y}- \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle_{L_2}.$ Because, by Assumption A1, $C > w(\eta_{\mathsf{a},i}^f) > c > 0$ and $\mathbb{E}[\pi(\eta_{\mathsf{a},i}^f)] > c > 0$, and because $\zeta_\mathsf{y}^\eta > c > 0$ by Lemma , we have $\mathbb{E}[\pi_{\zeta_\mathsf{y}^\eta}(\eta_{\mathsf{a},i}^f)] > c \mathbb{E}[\pi(\eta_{\mathsf{a},i}^f)] > c > 0$. Thus, recalling the definition of $\alpha_1(\zeta)$ in [\[eq:alpha-12-zeta-def\]](#eq:alpha-12-zeta-def){reference-type="ref" reference="eq:alpha-12-zeta-def"}, we have $$\label{eq:hmu-out-f-bias} \widehat{\mu}{}_\mathsf{y}^f - \mu_\mathsf{y}= \alpha_1(\zeta_\mathsf{y}^\eta)\langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{\theta}_\mathsf{y}- \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle_{L_2}.$$ Then, by equations [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"} and [\[eq:score-deriv-explicit\]](#eq:score-deriv-explicit){reference-type="eqref" reference="eq:score-deriv-explicit"}, $$\label{eq:beta-out-prop} \begin{aligned} \beta_{\mathsf{y}\mathsf{a}} &= \frac{ \mathbb{E}\big[\pi_{\zeta_\mathsf{y}^\eta}'(\eta_{\mathsf{a},i}^f)(y_i^f - \widehat{\eta}{}_{\mathsf{y},i}^{f,\mathsf{loo}})\big] }{ \mathbb{E}[\pi_{\zeta_\mathsf{y}^\eta}(\eta_{\mathsf{a},i}^f)] } = \frac{ \mathbb{E}[\pi_{\zeta_\mathsf{y}^\eta}'(\eta_{\mathsf{a},i}^f)](\mu_\mathsf{y}- \widehat{\mu}{}_\mathsf{y}^f) + \mathbb{E}\big[\pi_{\zeta_\mathsf{y}^\eta}''(\eta_{\mathsf{a},i}^f)\big]\langle\boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}- \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f\rangle_{L_2} }{
\mathbb{E}[\pi_{\zeta_\mathsf{y}^\eta}(\eta_{\mathsf{a},i}^f)] } \\ &= \left( \alpha_2(\zeta_\mathsf{y}^\eta) - \alpha_1^2(\zeta_\mathsf{y}^\eta) \right) \langle \boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}- \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f\rangle_{L_2}. \end{aligned}$$ ## Proof of  {#proof-of-3} We now prove our general theorem on oracle debiasing. We consider debiased estimates based on the modified mean and inverse covariance matrices $$\boldsymbol{m}\ensuremath{: =}\boldsymbol{\mu}_{\mathsf{x}} + \mathsf{c}_{\boldsymbol{\mu}} \boldsymbol{\theta}_\mathsf{a}, \qquad \boldsymbol{M}\ensuremath{: =} {\mathbf I}_p - \mathsf{c}_{\boldsymbol{\Sigma}} \boldsymbol{\theta}_\mathsf{a}\boldsymbol{\theta}_\mathsf{a}^\top,$$ for deterministic values $\mathsf{c}_{\boldsymbol{\mu}},\mathsf{c}_{\boldsymbol{\Sigma}} \in {\mathbb R}$. In particular, we consider the debiased estimates $$\label{eq:general-db-explicit} \begin{gathered} \widehat{\theta}{}_{\mathsf{y},0}^\textup{d}= \widehat{\theta}{}_{\mathsf{y},0} - \boldsymbol{m}^\top \frac{\boldsymbol{M} \boldsymbol{X}^\top (\boldsymbol{a}\odot \boldsymbol{w}\odot(\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0} \boldsymbol{1}- \boldsymbol{X} \widehat{\boldsymbol{\theta}}{}_\mathsf{y}))}{n\widehat{\zeta}{}_\mathsf{y}^\theta}, \\ \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d}= \widehat{\boldsymbol{\theta}}{}_\mathsf{y}+ \frac{\boldsymbol{M}\boldsymbol{X}^\top\big(\boldsymbol{a}\odot \boldsymbol{w}\odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0} \boldsymbol{1}- \boldsymbol{X}\widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big)}{n \widehat{\zeta}{}_\mathsf{y}^\theta}, \end{gathered}$$ where $(\widehat{\theta}{}_{\mathsf{y},0},\widehat{\boldsymbol{\theta}}{}_\mathsf{y})$ is fit using weights $w_i$ in equation [\[eq:outcome-fit\]](#eq:outcome-fit){reference-type="eqref" reference="eq:outcome-fit"} satisfying Assumption A1. The next theorem characterizes their behavior. **Theorem 6**. 
*Define the modified propensity scores $$\pi_\zeta(\eta) \ensuremath{: =} \frac{\zeta w(\eta)}{1 + \zeta w(\eta)} \pi(\eta),$$ and define the functions $$\label{eq:alpha-12-zeta-def} \alpha_1(\zeta) = \frac{\mathbb{E}[\pi_\zeta'(\eta_\mathsf{a})]}{\mathbb{E}[\pi_\zeta(\eta_\mathsf{a})]}, \qquad \alpha_2(\zeta) = \frac{\mathbb{E}[\pi_\zeta''(\eta_\mathsf{a})]}{\mathbb{E}[\pi_\zeta(\eta_\mathsf{a})]}, \qquad \text{where }\eta_\mathsf{a}\sim \mathsf{N}(\mu_\mathsf{a},\|\boldsymbol{\theta}_\mathsf{a}\|^2).$$ Define also $$\mathsf{bias}_1(\zeta) \ensuremath{: =}\alpha_1(\zeta) - \mathsf{c}_{\boldsymbol{\mu}}, \qquad \mathsf{bias}_2(\zeta) \ensuremath{: =}\left( \big(\alpha_2(\zeta) - \alpha_1^2(\zeta)\big) (1 - \mathsf{c}_{\boldsymbol{\Sigma}}\|\boldsymbol{\theta}_\mathsf{a}\|^2) - \mathsf{c}_{\boldsymbol{\Sigma}} \right).$$ Then, under Assumption A1 and if $|\mathsf{c}_{\boldsymbol{\mu}}|,|\mathsf{c}_{\boldsymbol{\Sigma}}| \leq C$, for any $t \in {\mathbb R}$ $$\label{eq:db-gen-conc} \begin{gathered} \widehat{\theta}{}_{\mathsf{y},0}^\textup{d}- \theta_{\mathsf{y},0} \ensuremath{\stackrel{\bullet}{=}} \langle\boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}- \widehat{\boldsymbol{\theta}}{}_\mathsf{y}\rangle \Big( \mathsf{bias}_1(\zeta_\mathsf{y}^\eta) - \mathsf{bias}_2(\zeta_\mathsf{y}^\eta) \langle \boldsymbol{m},\boldsymbol{\theta}_\mathsf{a}\rangle \Big), \\ % \widehat{\mu}{}_\mathsf{y}^\textup{d}- \mu_\mathsf{y}\ensuremath{\stackrel{\bullet}{=}}\langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{\theta}_\mathsf{y} - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}\rangle \Big( \mathsf{bias}_1(\zeta_\mathsf{y}^\eta) - \mathsf{c}_{\boldsymbol{\mu}} \mathsf{bias}_2(\zeta_\mathsf{y}^\eta) \|\boldsymbol{\theta}_\mathsf{a}\|^2 \Big), \\ % \frac{1}{p} \sum_{j=1}^p \mathbb{I}\Bigg\{ \frac{ \sqrt{n} \big( \widehat{\theta}{}_{\mathsf{y},j}^\textup{d}- \big( \theta_{\mathsf{y},j} + \langle \boldsymbol{\theta}_\mathsf{a} , \boldsymbol{\theta}_\mathsf{y}-
\widehat{\boldsymbol{\theta}}{}_\mathsf{y}\rangle\mathsf{bias}_2(\zeta_\mathsf{y}^\eta) \theta_{\mathsf{a},j} \big) \big) }{ \widehat{s}_\mathsf{y}} \leq t \Bigg\} \ensuremath{\stackrel{\bullet}{=}}\mathbb{P}\big(\mathsf{N}(0,1) \leq t\big), \end{gathered}$$ where $$\widehat{s}_\mathsf{y}^2 \ensuremath{: =} \frac{\|\boldsymbol{a}\odot \boldsymbol{w}\odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0}\boldsymbol{1}- \boldsymbol{X}\widehat{\boldsymbol{\theta}}{}_\mathsf{y})\|^2/n}{(\widehat{\zeta}{}_\mathsf{y}^\theta)^2}.$$* *Proof of .* By the KKT conditions for the problem [\[eq:outcome-fit\]](#eq:outcome-fit){reference-type="eqref" reference="eq:outcome-fit"}, we have $$\frac{1}{n}\boldsymbol{X}^\top\big( \boldsymbol{a}\odot \boldsymbol{w}\odot (\boldsymbol{y}- \boldsymbol{1} \widehat{\theta}{}_{\mathsf{y},0} - \boldsymbol{X}\widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big) = \nabla \Omega_\mathsf{y}\big(\widehat{\boldsymbol{\theta}}{}_\mathsf{y}\big).$$ Thus, equation [\[eq:general-db-explicit\]](#eq:general-db-explicit){reference-type="eqref" reference="eq:general-db-explicit"} implies that $$\begin{gathered} \widehat{\theta}{}_{\mathsf{y},0}^{\textup{d}} = \widehat{\theta}{}_{\mathsf{y},0} - \frac{\boldsymbol{m}^\top \boldsymbol{M} \nabla \Omega_\mathsf{y}(\widehat{\boldsymbol{\theta}}{}_\mathsf{y})}{\widehat{\zeta}{}_\mathsf{y}^\theta}, \qquad \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d}= \widehat{\boldsymbol{\theta}}{}_\mathsf{y}+ \frac{\boldsymbol{M}\nabla \Omega_\mathsf{y}\big(\widehat{\boldsymbol{\theta}}{}_\mathsf{y}\big)}{\widehat{\zeta}{}_\mathsf{y}^\theta}. \end{gathered}$$ By , $\widehat{\zeta}{}_\mathsf{y}^\theta \ensuremath{\stackrel{\bullet}{=}}\zeta_\mathsf{y}^\theta > c$. By Assumption A1 and because $|\mathsf{c}_{\boldsymbol{\mu}}|,|\mathsf{c}_{\boldsymbol{\Sigma}}| \leq C$, we have $\| \boldsymbol{m}\| \leq C$, and $\| \boldsymbol{M}\|_{{\rm op}} \leq C$. 
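The equivalence just used between the $\boldsymbol{X}$-based debiasing formulas in equation [\[eq:general-db-explicit\]](#eq:general-db-explicit){reference-type="eqref" reference="eq:general-db-explicit"} and their $\nabla\Omega_\mathsf{y}$-based forms is a purely algebraic consequence of the KKT conditions, and can be sanity-checked numerically. The sketch below is a toy illustration only: it uses synthetic data, squared loss with a ridge penalty $\Omega_\mathsf{y}(\boldsymbol{\theta}) = \lambda\|\boldsymbol{\theta}\|^2/2$ (so the weighted fit has a closed form), and hypothetical values for $\mathsf{c}_{\boldsymbol{\mu}}$, $\mathsf{c}_{\boldsymbol{\Sigma}}$, and $\widehat{\zeta}{}_\mathsf{y}^\theta$; it is not the estimator studied in the theorem.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, lam, zeta_hat = 50, 10, 0.5, 0.7   # zeta_hat: hypothetical d.o.f. adjustment
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)
a = rng.integers(0, 2, size=n).astype(float)   # treatment indicators
w = 1.0 + rng.random(n)                        # positive weights
theta_a = rng.standard_normal(p) / np.sqrt(p)
c_mu, c_Sigma = 0.3, 0.2                       # hypothetical constants
mu_x = np.zeros(p)

m = mu_x + c_mu * theta_a
M = np.eye(p) - c_Sigma * np.outer(theta_a, theta_a)

# Weighted ridge fit with intercept: the stationarity conditions are exactly
# X^T(a * w * r)/n = lam * theta and 1^T(a * w * r)/n = 0.
S = a * w
Xt = np.column_stack([np.ones(n), X])
D = np.eye(p + 1); D[0, 0] = 0.0               # do not penalize the intercept
beta = np.linalg.solve(Xt.T @ (S[:, None] * Xt) / n + lam * D, Xt.T @ (S * y) / n)
theta0, theta = beta[0], beta[1:]

r = y - theta0 - X @ theta
# X-based form of the debiased estimate ...
theta_d_X = theta + M @ (X.T @ (a * w * r)) / (n * zeta_hat)
# ... equals the gradient form via the KKT identity X^T(a*w*r)/n = lam*theta.
theta_d_grad = theta + M @ (lam * theta) / zeta_hat
assert np.allclose(theta_d_X, theta_d_grad)

theta0_d = theta0 - m @ M @ (X.T @ (a * w * r)) / (n * zeta_hat)
assert np.isclose(theta0_d, theta0 - m @ M @ (lam * theta) / zeta_hat)
```

The same cancellation is what lets the proof replace the data-dependent correction term by $\boldsymbol{M}\nabla\Omega_\mathsf{y}(\widehat{\boldsymbol{\theta}}{}_\mathsf{y})/\zeta_\mathsf{y}^\theta$ above.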
By  and , $\| \widehat{\boldsymbol{\theta}}{}_\mathsf{y}\| \ensuremath{\stackrel{\bullet}{=}}\| \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \|_{L_2} < C$. By Assumption A1, $\nabla\Omega_\mathsf{y}$ is $C$-Lipschitz and $\Omega_\mathsf{y}$ has a minimizer bounded in $\ell_2$-norm by $C$, whence, because $\| \widehat{\boldsymbol{\theta}}{}_\mathsf{y}\| \lessdot C$, we have $\| \nabla \Omega_\mathsf{y}(\widehat{\boldsymbol{\theta}}{}_\mathsf{y}) \| \lessdot C$. Combining these exponential high-probability bounds, we conclude $$\begin{gathered} \widehat{\theta}{}_{\mathsf{y},0}^{\textup{d}} \ensuremath{\stackrel{\bullet}{=}}\widehat{\theta}{}_{\mathsf{y},0} - \frac{\boldsymbol{m}^\top \boldsymbol{M}\nabla \Omega_\mathsf{y}(\widehat{\boldsymbol{\theta}}{}_\mathsf{y})}{\zeta_\mathsf{y}^\theta}, \qquad \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d}\ensuremath{\stackrel{\bullet}{=}}\widehat{\boldsymbol{\theta}}{}_\mathsf{y}+ \frac{\boldsymbol{M}\nabla \Omega_\mathsf{y}\big(\widehat{\boldsymbol{\theta}}{}_\mathsf{y}\big)}{\zeta_\mathsf{y}^\theta}. \end{gathered}$$ The KKT conditions for the problem [\[eq:param-fit-f\]](#eq:param-fit-f){reference-type="eqref" reference="eq:param-fit-f"} imply that $\nabla \Omega_\mathsf{y}(\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f) / \zeta_\mathsf{y}^\theta = \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f$.
Thus, implies that $$\label{eq:db-offset-conc} \begin{aligned} \widehat{\theta}{}_{\mathsf{y},0}^\textup{d}&- \theta_{\mathsf{y},0} \ensuremath{\stackrel{\bullet}{=}}\widehat{\mu}{}_\mathsf{y}^f - \langle \boldsymbol{\mu}_\mathsf{x}, \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle_{L_2} - \mathbb{E}[\langle \boldsymbol{m}, \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle_{\boldsymbol{M}}] - \theta_{\mathsf{y},0} \\ &= \widehat{\mu}{}_\mathsf{y}^f - \langle \boldsymbol{\mu}_\mathsf{x}, \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle_{L_2} - \langle \boldsymbol{m}, \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle_{L_2} + \mathsf{c}_{\boldsymbol{\Sigma}}\langle \boldsymbol{m}, \boldsymbol{\theta}_\mathsf{a} \rangle_{L_2} \langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle_{L_2} - \theta_{\mathsf{y},0} \\ &= (\widehat{\mu}{}_\mathsf{y}^f- \langle \boldsymbol{\mu}_{\mathsf{x}} , \boldsymbol{\theta}_\mathsf{y}\rangle - \theta_{\mathsf{y},0}) - \mathsf{c}_{\boldsymbol{\mu}}\langle\boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}- \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f\rangle_{L_2} - \beta_{\mathsf{y}\mathsf{a}}\langle\boldsymbol{m},\boldsymbol{\theta}_\mathsf{a}\rangle + \mathsf{c}_{\boldsymbol{\Sigma}} \langle \boldsymbol{m}, \boldsymbol{\theta}_\mathsf{a}\rangle_{L_2} \big( \beta_{\mathsf{y}\mathsf{a}} \| \boldsymbol{\theta}_\mathsf{a} \|^2 + \langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{\theta}_\mathsf{y}- \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f\rangle_{L_2} \big) \\ &= \left( \alpha_1(\zeta_\mathsf{y}^\eta) - \mathsf{c}_{\boldsymbol{\mu}} \right) \langle\boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}- \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f\rangle_{L_2} - \left( \big( \alpha_2(\zeta_\mathsf{y}^\eta) - \alpha_1^2(\zeta_\mathsf{y}^\eta) \big) (1 - 
\mathsf{c}_{\boldsymbol{\Sigma}} \| \boldsymbol{\theta}_\mathsf{a}\|^2) - \mathsf{c}_{\boldsymbol{\Sigma}} \right) \langle\boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}- \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f\rangle_{L_2} \langle \boldsymbol{m}, \boldsymbol{\theta}_\mathsf{a}\rangle, \end{aligned}$$ where in the last equation we use equations [\[eq:hmu-out-f-bias\]](#eq:hmu-out-f-bias){reference-type="eqref" reference="eq:hmu-out-f-bias"} and [\[eq:beta-out-prop\]](#eq:beta-out-prop){reference-type="eqref" reference="eq:beta-out-prop"}. also implies that $\widehat{s}_\mathsf{y}^2 \ensuremath{\stackrel{\bullet}{=}}S_{44}$, and by , $S_{44} > c > 0$. Thus, we also have $$\label{eq:db-param-conc} \begin{aligned} &\phi\Big( \frac{ \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d}- (\boldsymbol{\theta}_\mathsf{y}+ \langle\boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}- \widehat{\boldsymbol{\theta}}{}_\mathsf{y}\rangle \mathsf{bias}_2(\zeta_\mathsf{y}^\eta) \boldsymbol{\theta}_\mathsf{a}) }{ \widehat{s}_\mathsf{y}} \Big) \ensuremath{\stackrel{\bullet}{=}}\mathbb{E}\Big[ \phi\Big( \frac{ \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f + \boldsymbol{M}(\boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f) - \big( \boldsymbol{\theta}_\mathsf{y}+ \langle\boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}- \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f\rangle_{L_2} \mathsf{bias}_2(\zeta_\mathsf{y}^\eta) \boldsymbol{\theta}_\mathsf{a}\big) }{ S_{44}^{1/2} } \Big) \Big] \\ % &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad= \mathbb{E}\Big[ \phi\Big( \frac{ \boldsymbol{y}_\mathsf{y}^f - \mathsf{c}_{\boldsymbol{\Sigma}} \langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle\boldsymbol{\theta}_\mathsf{a} - \big( \boldsymbol{\theta}_\mathsf{y}+ \langle\boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}- 
\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f\rangle_{L_2} \mathsf{bias}_2(\zeta_\mathsf{y}^\eta) \boldsymbol{\theta}_\mathsf{a}\big) }{ S_{44}^{1/2} } \Big) \Big] \\ % &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\ensuremath{\stackrel{\bullet}{=}} \mathbb{E}\Big[ \phi\Big( \frac{ \boldsymbol{g}_\mathsf{y}^f + \big( \beta_{\mathsf{y}\mathsf{a}} - (\mathsf{c}_{\boldsymbol{\Sigma}} + \mathsf{bias}_2(\zeta_\mathsf{y}^\eta)) \langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle_{L_2} \big) \boldsymbol{\theta}_\mathsf{a} }{ S_{44}^{1/2} } \Big) \Big] \\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad= \mathbb{E}\Big[ \phi\big( \boldsymbol{g}_\mathsf{y}^f / S_{44}^{1/2} \big) \Big], \end{aligned}$$ where in the first line we use  first to approximate $\langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{\theta}_\mathsf{y}- \widehat{\boldsymbol{\theta}}{}_\mathsf{y}\rangle \ensuremath{\stackrel{\bullet}{=}}\langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{\theta}_\mathsf{y}- \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle_{L_2}$ and then to approximate the value of $\phi$ applied to the given parameter estimate; in the second line we have used the definition of $\boldsymbol{M}$; in the third line we have used that $\langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle$ concentrates on $\langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f\rangle_{L_2}$ with sub-Gaussian tails and variance parameter $C/n$ using the fact that $\boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f$ is $C$-Lipschitz in $\boldsymbol{g}_\mathsf{y}^f$ by [\[eq:param-fit-f\]](#eq:param-fit-f){reference-type="ref" reference="eq:param-fit-f"} and the bounds on the variance parameters in ; and in the fourth line we have used the definition of $\mathsf{bias}_2$ and
the identity for $\beta_{\mathsf{y}\mathsf{a}}$ in equation [\[eq:beta-out-prop\]](#eq:beta-out-prop){reference-type="eqref" reference="eq:beta-out-prop"}. Finally, note that we can write $\widehat{\mu}{}_\mathsf{y}^\textup{d}= \widehat{\theta}{}_{\mathsf{y},0}^\textup{d}+ \langle \widehat{\boldsymbol{\mu}}{}_\mathsf{x}, \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d}\rangle,$ where we recall that $\widehat{\boldsymbol{\mu}}{}_\mathsf{x}= \frac{1}{n} \sum_{i=1}^n \boldsymbol{x}_i$. Thus, $$\begin{aligned} \widehat{\mu}{}_\mathsf{y}^\textup{d}- \mu_\mathsf{y}&= \widehat{\theta}{}_{\mathsf{y},0}^\textup{d}- \theta_{\mathsf{y},0} + \langle \widehat{\boldsymbol{\mu}}{}_\mathsf{x}- \boldsymbol{\mu}_\mathsf{x}, \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d}\rangle + \langle \boldsymbol{\mu}_\mathsf{x}, \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d}- \boldsymbol{\theta}_\mathsf{y}\rangle. \end{aligned}$$ By , $$\begin{aligned} \langle \widehat{\boldsymbol{\mu}}{}_\mathsf{x}- \boldsymbol{\mu}_\mathsf{x}, \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d}\rangle &\ensuremath{\stackrel{\bullet}{=}}\Big\langle \widehat{\boldsymbol{\mu}}{}_\mathsf{x}- \boldsymbol{\mu}_\mathsf{x}, \widehat{\boldsymbol{\theta}}{}_\mathsf{y}+ \frac{\boldsymbol{M}\nabla \Omega_\mathsf{y}\big(\widehat{\boldsymbol{\theta}}{}_\mathsf{y}\big)}{\zeta_\mathsf{y}^\theta} \Big\rangle \ensuremath{\stackrel{\bullet}{=}}\big\langle \boldsymbol{y}_\mathsf{x}^f - \boldsymbol{\mu}_\mathsf{x}, \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f + \boldsymbol{M} (\boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f) \big\rangle_{L_2} \\ &\ensuremath{\stackrel{\bullet}{=}}\big\langle \boldsymbol{g}_\mathsf{x}^f, \boldsymbol{y}_\mathsf{y}^f - \mathsf{c}_{\boldsymbol{\Sigma}} \langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle\boldsymbol{\theta}_\mathsf{a}\big\rangle_{L_2}
\ensuremath{\stackrel{\bullet}{=}}\big\langle \boldsymbol{g}_\mathsf{x}^f, \boldsymbol{y}_\mathsf{y}^f - \mathsf{c}_{\boldsymbol{\Sigma}} \langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle_{L_2}\boldsymbol{\theta}_\mathsf{a}\big\rangle_{L_2} \\ &= \langle \boldsymbol{g}_\mathsf{x}^f , \boldsymbol{g}_\mathsf{y}^f \rangle_{L_2} = \frac{p}{n} \langle \widehat{\boldsymbol{i}}{}_\mathsf{x}^f, \widehat{\boldsymbol{i}}{}_\mathsf{y}^f \rangle_{L_2} = 0, \end{aligned}$$ where in the second line we have used the fact that $\langle \boldsymbol{\theta}_\mathsf{a} , \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle$ concentrates on $\langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f\rangle_{L_2}$ with sub-Gaussian tails and variance parameter $C/n$ as argued above; and in the final line we have used in the second equality the fixed point equation [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"} and in the final equality the fact that $\langle \boldsymbol{1}, \widehat{\boldsymbol{i}}{}_\mathsf{y}^f \rangle_{L_2} = 0$. 
We also have $$\begin{aligned} \langle \boldsymbol{\mu}_\mathsf{x}, \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d}- \boldsymbol{\theta}_\mathsf{y}\rangle &\ensuremath{\stackrel{\bullet}{=}}\big\langle \boldsymbol{\mu}_\mathsf{x}, \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f + \boldsymbol{M}(\boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f) - \boldsymbol{\theta}_\mathsf{y}\big\rangle_{L_2} = \big\langle \boldsymbol{\mu}_\mathsf{x}, \boldsymbol{g}_\mathsf{y}^f + \beta_{\mathsf{y}\mathsf{a}}\boldsymbol{\theta}_\mathsf{a}- \mathsf{c}_{\boldsymbol{\Sigma}} \langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle\boldsymbol{\theta}_\mathsf{a}\big\rangle_{L_2} \\ & \ensuremath{\stackrel{\bullet}{=}}\mathsf{bias}_2(\zeta_\mathsf{y}^\eta) \langle\boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}- \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f\rangle_{L_2} \langle \boldsymbol{\mu}_\mathsf{x}, \boldsymbol{\theta}_\mathsf{a}\rangle, \end{aligned}$$ where in the last line we have used that $\langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle$ concentrates on $\langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f\rangle_{L_2}$ with sub-Gaussian tails and variance parameter $C/n$ as argued above, as well as the definition of $\mathsf{bias}_2$ and the expression for $\beta_{\mathsf{y}\mathsf{a}}$ in [\[eq:beta-out-prop\]](#eq:beta-out-prop){reference-type="eqref" reference="eq:beta-out-prop"}. To obtain the final line of [\[eq:db-gen-conc\]](#eq:db-gen-conc){reference-type="eqref" reference="eq:db-gen-conc"}, we use [\[eq:db-param-conc\]](#eq:db-param-conc){reference-type="eqref" reference="eq:db-param-conc"} along with Lipschitz approximations to indicator functions.
In particular, let $\mathsf{SoftIndic}_{t,\Delta}(\cdot)$ be the function which is $1$ for $x \leq t$, $0$ for $x \geq t+\Delta$, and linearly interpolates between the boundaries for $x \in [t,t+\Delta]$. Note that $\mathsf{SoftIndic}$ is $1/\Delta$-Lipschitz. Then, by  and [\[eq:db-param-conc\]](#eq:db-param-conc){reference-type="ref" reference="eq:db-param-conc"}, and using that $\boldsymbol{g}_\mathsf{y}^f / S_{44}^{1/2} \sim \mathsf{N}(0,{\mathbf I}_p)$, we have $$\frac{\Delta}{\sqrt{p}} \sum_{j=1}^p \mathsf{SoftIndic}_{t,\Delta} \Big( \frac{ \widehat{\theta}{}_{\mathsf{y},j}^\textup{d}- \big(\theta_{\mathsf{y},j} + \langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{\theta}_\mathsf{y}- \widehat{\boldsymbol{\theta}}{}_\mathsf{y}\rangle \mathsf{bias}_2(\zeta_\mathsf{y}^\eta) \theta_{\mathsf{a},j}\big) }{ \widehat{s}_\mathsf{y}} \Big) \ensuremath{\stackrel{\bullet}{=}}\Delta\sqrt{p} \mathbb{E}[\mathsf{SoftIndic}_{t,\Delta}(G)],$$ where $G \sim \mathsf{N}(0,1)$, and the constants in the exponential concentration do not depend on $\Delta$. Note that $\Phi(t) \leq \mathbb{E}[\mathsf{SoftIndic}_{t,\Delta}(G)] \leq \Phi(t+\Delta) \leq \Phi(t) + \Delta/\sqrt{2\pi}$, where $\Phi$ is the standard Gaussian cdf and $\pi$ here refers to the numerical constant and not the propensity function. 
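The two properties of $\mathsf{SoftIndic}$ used here, the pointwise sandwich between indicator functions and the expectation bound $\Phi(t) \leq \mathbb{E}[\mathsf{SoftIndic}_{t,\Delta}(G)] \leq \Phi(t+\Delta) \leq \Phi(t) + \Delta/\sqrt{2\pi}$, can be verified directly. A minimal numerical sketch (the values of $t$ and $\Delta$ are arbitrary):

```python
import numpy as np
from math import erf, sqrt, pi

def soft_indic(x, t, delta):
    """1 for x <= t, 0 for x >= t + delta, linear in between; (1/delta)-Lipschitz."""
    return np.clip((t + delta - x) / delta, 0.0, 1.0)

def Phi(t):
    """Standard Gaussian cdf."""
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

t, delta = 0.3, 0.1

# Pointwise sandwich used to pass between indicators and Lipschitz surrogates:
# 1{x <= t} <= SoftIndic_{t,delta}(x) <= 1{x <= t + delta}.
x = np.linspace(-3.0, 3.0, 10001)
assert np.all((x <= t) <= soft_indic(x, t, delta))
assert np.all(soft_indic(x, t, delta) <= (x <= t + delta))

# Expectation sandwich for G ~ N(0,1), via a fine Riemann sum:
# Phi(t) <= E[SoftIndic_{t,delta}(G)] <= Phi(t + delta) <= Phi(t) + delta/sqrt(2 pi).
g = np.linspace(-8.0, 8.0, 400001)
dens = np.exp(-g**2 / 2.0) / sqrt(2.0 * pi)
expectation = np.sum(soft_indic(g, t, delta) * dens) * (g[1] - g[0])
assert Phi(t) <= expectation <= Phi(t + delta) <= Phi(t) + delta / sqrt(2.0 * pi)
```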
Thus, with probability at least $1 - Ce^{-cn\epsilon^r}$ we have $$\begin{aligned} &\frac{1}{p} \sum_{j=1}^p \mathbb{I}\Big\{ \frac{ \widehat{\theta}{}_{\mathsf{y},j}^\textup{d}- \big(\theta_{\mathsf{y},j} + \langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{\theta}_\mathsf{y}- \widehat{\boldsymbol{\theta}}{}_\mathsf{y}\rangle \mathsf{bias}_2(\zeta_\mathsf{y}^\eta) \theta_{\mathsf{a},j}\big) }{ \widehat{s}_\mathsf{y}} \leq t \Big\} \\ &\qquad\qquad\leq \frac{1}{p} \sum_{j=1}^p \mathsf{SoftIndic}_{t,\Delta} \Big( \frac{ \widehat{\theta}{}_{\mathsf{y},j}^\textup{d}- \big(\theta_{\mathsf{y},j} + \langle \boldsymbol{\theta}_\mathsf{a} , \boldsymbol{\theta}_\mathsf{y}- \widehat{\boldsymbol{\theta}}{}_\mathsf{y}\rangle \mathsf{bias}_2(\zeta_\mathsf{y}^\eta) \theta_{\mathsf{a},j}\big) }{ \widehat{s}_\mathsf{y}} \Big) \leq \Phi(t) + \frac{\Delta}{\sqrt{2\pi}} + \frac{\epsilon}{\Delta \sqrt{p}}. \end{aligned}$$ If we take $\Delta = \sqrt{\epsilon}$, then with probability at least $1 - Ce^{-cn\epsilon^r}$ the left-hand side in the previous display is upper bounded by $\Phi(t) + C'\sqrt{\epsilon}$. Using $\mathsf{SoftIndic}_{t-\Delta,\Delta}$ in place of $\mathsf{SoftIndic}_{t,\Delta}$, we can replace the "$\leq$'s" in the previous display by "$\geq$'s", whence with probability at least $1 - Ce^{-cn\epsilon^r}$ the left-hand side in the previous display is lower bounded by $\Phi(t) - C'\sqrt{\epsilon}$. The final line of [\[eq:db-gen-conc\]](#eq:db-gen-conc){reference-type="eqref" reference="eq:db-gen-conc"} follows. ◻ **Remark 6**. *Because $\mathsf{bias}_1(\zeta)$ and $\mathsf{bias}_2(\zeta)$ are $C$-Lipschitz, by  (part (b)), we have that $\mathsf{bias}_1(\widehat{\zeta}{}_\mathsf{y}^\eta) \ensuremath{\stackrel{\bullet}{=}}\mathsf{bias}_1(\zeta_\mathsf{y}^\eta)$ and $\mathsf{bias}_2(\widehat{\zeta}{}_\mathsf{y}^\eta) \ensuremath{\stackrel{\bullet}{=}}\mathsf{bias}_2(\zeta_\mathsf{y}^\eta)$.
Thus, it is straightforward to show that  holds with $\mathsf{bias}_1(\zeta_\mathsf{y}^\eta)$, $\mathsf{bias}_2(\zeta_\mathsf{y}^\eta)$ replaced by $\mathsf{bias}_1(\widehat{\zeta}{}_\mathsf{y}^\eta)$, $\mathsf{bias}_2(\widehat{\zeta}{}_\mathsf{y}^\eta)$. This statement has the benefit that the quantities in [\[eq:db-gen-conc\]](#eq:db-gen-conc){reference-type="eqref" reference="eq:db-gen-conc"}, while not fully empirical because of $\langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{\theta}_\mathsf{y}- \widehat{\boldsymbol{\theta}}{}_\mathsf{y}\rangle$, have no explicit dependence on the solutions to the fixed point equations [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"}.* # Proofs of  {#sec:proof-main-theorems} In this section, we prove , separating our analysis according to the method being analyzed: a section on oracle ASCW () and one on empirical SCA (). ## Proofs for oracle ASCW {#SecOrASCW} We prove  for the oracle ASCW estimate by applying  with $w(\eta) = 1$, $\boldsymbol{m}= \boldsymbol{\mu}_{\mathsf{x},\mathsf{cfd}}$, and $\boldsymbol{M}= \boldsymbol{\Sigma}_{\mathsf{cfd}}^{-1}$. By [\[eq:cfd-mean-variance\]](#eq:cfd-mean-variance){reference-type="eqref" reference="eq:cfd-mean-variance"} and the Sherman-Morrison-Woodbury formula, this is equivalent to taking $\mathsf{c}_{\boldsymbol{\mu}} = \alpha_1$ and $\mathsf{c}_{\boldsymbol{\Sigma}} = \frac{\alpha_2 - \alpha_1^2}{1 + (\alpha_2 - \alpha_1^2)\|\boldsymbol{\theta}_\mathsf{a}\|^2}$.
In this case, equation [\[eq:alpha-12-zeta-def\]](#eq:alpha-12-zeta-def){reference-type="eqref" reference="eq:alpha-12-zeta-def"} implies that $$\begin{gathered} \pi_\zeta(\eta) \ensuremath{: =}\frac{\zeta}{1 + \zeta} \pi(\eta), \\ \alpha_1(\zeta) = \frac{\mathbb{E}[\pi'(\eta_\mathsf{a})]}{\mathbb{E}[\pi(\eta_\mathsf{a})]} = \alpha_1, \qquad \alpha_2(\zeta) = \frac{\mathbb{E}[\pi''(\eta_\mathsf{a})]}{\mathbb{E}[\pi(\eta_\mathsf{a})]} = \alpha_2, \qquad \text{where }\eta_\mathsf{a}\sim \mathsf{N}(\mu_\mathsf{a},\|\boldsymbol{\theta}_\mathsf{a}\|_{\boldsymbol{\Sigma}}^2). \end{gathered}$$ Thus, $\mathsf{bias}_1(\zeta_\mathsf{y}^\eta) = \alpha_1 - \mathsf{c}_{\boldsymbol{\mu}} = 0$ and $\mathsf{bias}_2(\zeta_\mathsf{y}^\eta) = \Big((\alpha_2 - \alpha_1^2)\Big(1 - \frac{\alpha_2-\alpha_1^2}{1 + (\alpha_2 - \alpha_1^2)\|\boldsymbol{\theta}_\mathsf{a}\|^2}\|\boldsymbol{\theta}_\mathsf{a}\|^2\Big) - \frac{\alpha_2-\alpha_1^2}{1 + (\alpha_2 - \alpha_1^2)\|\boldsymbol{\theta}_\mathsf{a}\|^2}\Big) = 0$. The result follows. ## Empirical SCA {#SecEmpSCA} First assume we have the stronger guarantees $\widehat{\mathsf{dbAdj}}_{01} \ensuremath{\stackrel{\bullet}{=}}\mathsf{dbAdj}_{01}$, $\widehat{\mathsf{dbAdj}}_{02} \ensuremath{\stackrel{\bullet}{=}}\mathsf{dbAdj}_{02}$, $\widehat{\mathsf{dbAdj}}_1 \ensuremath{\stackrel{\bullet}{=}}\mathsf{dbAdj}_1$, $\widehat{\beta}{}\ensuremath{\stackrel{\bullet}{=}}\beta$, with $\beta = \alpha_1$ if we use the moment method in [\[eq:prop-db-option\]](#eq:prop-db-option){reference-type="eqref" reference="eq:prop-db-option"} and $\beta = \beta_{\mathsf{a}\mathsf{a}}$ if we use M-estimation. Indeed, these stronger consistency guarantees will be established for the estimates in .
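As a sanity check on the cancellation above: substituting the expression for $\mathsf{c}_{\boldsymbol{\Sigma}}$ makes both terms of $\mathsf{bias}_2$ equal to $\mathsf{c}_{\boldsymbol{\Sigma}}$, so they cancel for any admissible parameter values. A minimal numerical sketch (the sampled ranges below are arbitrary placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    a1 = rng.uniform(0.1, 1.0)       # stands in for alpha_1
    d = rng.uniform(0.05, 1.0)       # stands in for alpha_2 - alpha_1^2
    s = rng.uniform(0.1, 2.0)        # stands in for ||theta_a||^2
    c_sigma = d / (1 + d * s)        # the oracle ASCW choice of c_Sigma
    # bias_2 = (alpha_2 - alpha_1^2)(1 - c_Sigma ||theta_a||^2) - c_Sigma:
    # both terms reduce to d / (1 + d s), so bias_2 vanishes identically.
    bias2 = d * (1 - c_sigma * s) - c_sigma
    assert abs(bias2) < 1e-10
```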
To avoid confusion, we denote $(\widehat{\mu}{}_\mathsf{y}^\textup{d},\widehat{\theta}{}_{\mathsf{y},0}^\textup{d},\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d})$ by $(\widehat{\mu}{}_\mathsf{y}^{\mathsf{ASCW}},\widehat{\theta}{}_{\mathsf{y},0}^{\mathsf{ASCW}},\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^{\mathsf{ASCW}})$ when defined using the oracle ASCW bias estimates [\[eq:orc-ASCW-matrix-form\]](#eq:orc-ASCW-matrix-form){reference-type="eqref" reference="eq:orc-ASCW-matrix-form"} and by $(\widehat{\mu}{}_\mathsf{y}^{\mathsf{SCA}},\widehat{\theta}{}_{\mathsf{y},0}^{\mathsf{SCA}},\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^{\mathsf{SCA}})$ when using the empirical SCA bias estimates [\[eq:SCA-bias-hat\]](#eq:SCA-bias-hat){reference-type="eqref" reference="eq:SCA-bias-hat"}. Then we have $$\begin{aligned} \widehat{\theta}{}_{\mathsf{y},0}^{\mathsf{SCA}}= \widehat{\theta}{}_{\mathsf{y},0} - \widehat{\mathsf{dbAdj}}_{01} - \widehat{\mathsf{dbAdj}}_{02} \ensuremath{\stackrel{\bullet}{=}}\widehat{\theta}{}_{\mathsf{y},0} - \mathsf{dbAdj}_{01} - \mathsf{dbAdj}_{02} = \widehat{\theta}{}_{\mathsf{y},0}^{\mathsf{ASCW}}\ensuremath{\stackrel{\bullet}{=}}\theta_{\mathsf{y},0}\end{aligned}$$ by  for the oracle ASCW estimates. Likewise, $\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^{\mathsf{SCA}}\ensuremath{\stackrel{\bullet}{=}}\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^{\mathsf{ASCW}}- \mathsf{dbAdj}_1 \cdot (\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}- \boldsymbol{\theta}_\mathsf{a})$.
Then, using the calculation in equation [\[eq:db-param-conc\]](#eq:db-param-conc){reference-type="eqref" reference="eq:db-param-conc"} and the fact that $\mathsf{bias}_2(\zeta_\mathsf{y}^\eta) = 0$ (see the proof for oracle ASCW above), for any Lipschitz function $\phi: {\mathbb R}^p \rightarrow {\mathbb R}$ we have $$\phi(\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^{\mathsf{SCA}}) \ensuremath{\stackrel{\bullet}{=}}\mathbb{E}\Big[ \phi\Big( \boldsymbol{\theta}_\mathsf{y}+ \boldsymbol{g}_\mathsf{y}^f - \mathsf{dbAdj}_1 \cdot \boldsymbol{g}_\circ^f \Big) \Big],$$ where $\boldsymbol{g}_\circ^f = (\boldsymbol{g}_{\mathsf{x},\mathsf{cfd}}^f - \boldsymbol{g}_{\mathsf{x}}^f)/\alpha_1$ if the moment method from equation [\[eq:prop-db-option\]](#eq:prop-db-option){reference-type="eqref" reference="eq:prop-db-option"} is used to compute $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}$ and $\boldsymbol{g}_\circ^f = \boldsymbol{g}_\mathsf{a}^f / \beta_{\mathsf{a}\mathsf{a}}$ if M-estimation is used. Because $\boldsymbol{S}= \langle\!\langle{\widehat{\mathsf{IF}}}^f \rangle\!\rangle_{L_2}$ (equation [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"}), $\boldsymbol{g}_\mathsf{y}^f - \mathsf{dbAdj}_1 \cdot \boldsymbol{g}_\circ^f \sim \mathsf{N}({\boldsymbol 0},\tau^2{\mathbf I}_p/n)$, where $\tau^2 = \| \widehat{\boldsymbol{i}}{}_\mathsf{y}^f \|_{L_2}^2/n - 2\, \mathsf{dbAdj}_1 \langle \widehat{\boldsymbol{i}}{}_\mathsf{y}^f , \widehat{\boldsymbol{i}}{}_\circ^f \rangle_{L_2}/n + \mathsf{dbAdj}_1^2 \| \widehat{\boldsymbol{i}}{}_\circ^f \|_{L_2}^2/n$, and where $\widehat{\boldsymbol{i}}{}_\circ^f \ensuremath{: =}(\boldsymbol{a}^f / {\overline{\pi}}- \boldsymbol{1})/\alpha_1$ if the moment method [\[eq:prop-db-option\]](#eq:prop-db-option){reference-type="eqref" reference="eq:prop-db-option"} is used to compute $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}$ and $\widehat{\boldsymbol{i}}{}_\circ^f \ensuremath{: =}\widehat{\boldsymbol{i}}{}_\mathsf{a}^f / \beta_{\mathsf{a}\mathsf{a}}$ if M-estimation is used. Thus, by (d), $\widehat{\tau}{}^2 \ensuremath{\stackrel{\bullet}{=}}\tau^2$. The concentration of the empirical quantiles in  is a consequence of the concentration of Lipschitz functions $\phi$ by considering Lipschitz approximations to indicator functions exactly as in the proof of . Finally, we have $$\begin{aligned} \widehat{\mu}{}_\mathsf{y}^{\mathsf{SCA}}= \widehat{\theta}{}_{\mathsf{y},0}^{\mathsf{SCA}}+ \langle \widehat{\boldsymbol{\mu}}{}_\mathsf{x},\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^{\mathsf{SCA}}\rangle \ensuremath{\stackrel{\bullet}{=}}\theta_{\mathsf{y},0} + \langle \boldsymbol{\mu}_\mathsf{x}+ \boldsymbol{g}_\mathsf{x}^f , \boldsymbol{\theta}_\mathsf{y}+ \boldsymbol{g}_\mathsf{y}^f - \mathsf{dbAdj}_1\cdot\boldsymbol{g}_\circ^f\rangle_{L_2} = \theta_{\mathsf{y},0} + \langle\boldsymbol{\mu}_\mathsf{x},\boldsymbol{\theta}_\mathsf{y}\rangle_{L_2} = \mu_\mathsf{y},\end{aligned}$$ which proves  for empirical SCA.\ **Weaker consistency guarantees.** If instead we only have the guarantees $\widehat{\mathsf{dbAdj}}_{01} \buildrel \mbox{\tiny{prob.}} \over \longrightarrow\mathsf{dbAdj}_{01}$, $\widehat{\mathsf{dbAdj}}_{02} \buildrel \mbox{\tiny{prob.}} \over \longrightarrow\mathsf{dbAdj}_{02}$, $\widehat{\mathsf{dbAdj}}_1 \buildrel \mbox{\tiny{prob.}} \over \longrightarrow \mathsf{dbAdj}_1$, $\widehat{\beta}{}\buildrel \mbox{\tiny{prob.}} \over \longrightarrow\beta$ as $(n,p) \rightarrow \infty$, then all previous arguments hold except with expressions of the form "$A\ensuremath{\stackrel{\bullet}{=}}B$" replaced by "$A-B \buildrel \mbox{\tiny{prob.}} \over \longrightarrow 0$ as $(n,p) \rightarrow \infty$" in all places.
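The Gaussian claim above reduces $\tau^2$ to a quadratic form in the covariance of $(\boldsymbol{g}_\mathsf{y}^f, \boldsymbol{g}_\circ^f)$. A minimal simulation sketch of this variance identity, with placeholder values standing in for the $L_2$ inner products of the influence functions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
# Placeholder values standing in for ||i_y||^2/n, ||i_o||^2/n, <i_y, i_o>/n, dbAdj_1.
v_y, v_o, cov_yo, adj = 1.0, 0.7, 0.3, 0.4
cov = np.array([[v_y, cov_yo], [cov_yo, v_o]])
g_y, g_o = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
# tau^2 = v_y - 2*adj*cov_yo + adj^2*v_o is the variance of g_y - adj*g_o.
tau2 = v_y - 2 * adj * cov_yo + adj**2 * v_o
assert abs(np.var(g_y - adj * g_o) - tau2) < 0.01
```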
# Empirical adjustments: Proofs of  and  {#sec:hbeta-dbAdj-consistency} *Proof of .* The random variables $\ensuremath{a}_i$ are independent Bernoullis with success probability ${\overline{\pi}}$, so that $\widehat{\pi}{}\ensuremath{\stackrel{\bullet}{=}}{\overline{\pi}}$ by sub-Gaussian concentration. By , we have $$\begin{aligned} \| \widehat{\boldsymbol{\mu}}{}_\mathsf{x}\|^2 \ensuremath{\stackrel{\bullet}{=}}\| \boldsymbol{y}_\mathsf{x}^f \|_{L_2}^2 = \| \boldsymbol{\mu}_\mathsf{x}\|^2 + p S_{11}/n = \| \boldsymbol{\mu}_\mathsf{x} \|^2 + p/n,\end{aligned}$$ using the fact that $S_{11} = \| \widehat{\boldsymbol{i}}{}_\mathsf{x}^f \|_{L_2}^2 / n = \| \boldsymbol{1}\|^2 / n = 1$ from equation [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"}. Thus, we get $\widehat{\gamma}{}_{\boldsymbol{\mu}} \ensuremath{\stackrel{\bullet}{=}}\| \boldsymbol{\mu}_\mathsf{x}\|$. By , $\| \widehat{\boldsymbol{\mu}}{}_{\mathsf{x},\mathsf{cfd}} - \widehat{\boldsymbol{\mu}}{}_{\mathsf{x}} \|^2 \ensuremath{\stackrel{\bullet}{=}}\| \boldsymbol{y}_\mathsf{x}^f - \boldsymbol{y}_{\mathsf{x},\mathsf{cfd}}^f \|_{L_2}^2 = \| \boldsymbol{g}_\mathsf{x}^f - \alpha_1 \boldsymbol{\theta}_\mathsf{a}- \boldsymbol{g}_{\mathsf{x},\mathsf{cfd}}^f \|_{L_2}^2 = \alpha_1^2 \| \boldsymbol{\theta}_\mathsf{a}\|^2 + \| \boldsymbol{g}_\mathsf{x}^f - \boldsymbol{g}_{\mathsf{x},\mathsf{cfd}}^f \|_{L_2}^2 = \alpha_1^2 \| \boldsymbol{\theta}_\mathsf{a}\|^2 + p(S_{11} - 2S_{12} + S_{22})/n$, where we have used the definition of $\boldsymbol{y}_\mathsf{x}^f$, $\boldsymbol{y}_{\mathsf{x},\mathsf{cfd}}^f$ in equation [\[eq:fixed-design-outcomes\]](#eq:fixed-design-outcomes){reference-type="eqref" reference="eq:fixed-design-outcomes"} and the covariance of $\boldsymbol{g}_\mathsf{x}^f$, $\boldsymbol{g}_{\mathsf{x},\mathsf{cfd}}^f$.
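The norm-inflation identity $\| \widehat{\boldsymbol{\mu}}{}_\mathsf{x}\|^2 \ensuremath{\stackrel{\bullet}{=}}\| \boldsymbol{\mu}_\mathsf{x}\|^2 + p/n$ above is easy to check by simulation in a toy isotropic Gaussian design (the dimensions and seed below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 4000, 800
mu = rng.normal(size=p) / np.sqrt(p)       # population mean with ||mu|| = O(1)
X = mu + rng.normal(size=(n, p))           # rows x_i ~ N(mu, I_p)
mu_hat = X.mean(axis=0)
# E ||mu_hat||^2 = ||mu||^2 + p/n: the squared norm of the sample mean is
# inflated by the aspect ratio p/n, exactly the S_11 p/n term above.
lhs = float(np.sum(mu_hat**2))
rhs = float(np.sum(mu**2)) + p / n
assert abs(lhs - rhs) < 0.2
assert lhs > np.sum(mu**2) + 0.02          # the p/n inflation is clearly visible
```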
By the fixed point equations [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"}, we have $S_{11} - 2S_{12} + S_{22} = \| \widehat{\boldsymbol{i}}{}_{\mathsf{x}}^f - \widehat{\boldsymbol{i}}{}_{\mathsf{x},\mathsf{cfd}}^f \|_{L_2}^2/n = \| \boldsymbol{1}- \boldsymbol{a}^f/{\overline{\pi}}\|_{L_2}^2/n = \mathbb{E}[(1 - \ensuremath{a}_i^f/{\overline{\pi}})^2] = (1-{\overline{\pi}})/{\overline{\pi}}$, where we have used equation [\[eq:empirical-influence-function\]](#eq:empirical-influence-function){reference-type="eqref" reference="eq:empirical-influence-function"}. Because $p/n < C$ and ${\overline{\pi}}> c$ by Assumption A1, we conclude $(p/n)(1-\widehat{\pi}{})/\widehat{\pi}{} \ensuremath{\stackrel{\bullet}{=}}(p/n)(1-{\overline{\pi}})/{\overline{\pi}}$ and $\| \widehat{\boldsymbol{\mu}}{}_{\mathsf{x},\mathsf{cfd}} - \widehat{\boldsymbol{\mu}}{}_{\mathsf{x}} \|^2 \ensuremath{\stackrel{\bullet}{=}}\alpha_1^2 \| \boldsymbol{\theta}_\mathsf{a}\|^2 + (p/n)(1-{\overline{\pi}})/{\overline{\pi}}$. Thus, $\widehat{\gamma}{}_{\mathsf{a}*} \ensuremath{\stackrel{\bullet}{=}}\alpha_1 \| \boldsymbol{\theta}_\mathsf{a}\|$. Next we show that equation [\[eq:prop-offset-and-signal-strenght-estimates\]](#eq:prop-offset-and-signal-strenght-estimates){reference-type="eqref" reference="eq:prop-offset-and-signal-strenght-estimates"} has a unique solution. Define the function[^2] $$F(t) = \int_0^t \pi(s) \,\textup{d}s.$$ By the KKT conditions, the pair $(\widehat{\mu}{}_\mathsf{a},\widehat{\gamma}{}_\mathsf{a})$ is a solution to equation [\[eq:prop-offset-and-signal-strenght-estimates\]](#eq:prop-offset-and-signal-strenght-estimates){reference-type="eqref" reference="eq:prop-offset-and-signal-strenght-estimates"} if and only if it is a stationary point of the objective $f(v_0, v_1) \ensuremath{: =} \mathbb{E}_{G \sim \mathsf{N}(0,1)} [F(v_0 + v_1 G)] - \widehat{\pi}{}v_0 - \widehat{\pi}{} \widehat{\gamma}{}_{\mathsf{a}*} v_1$.
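A hedged numerical sketch of the stationarity conditions of this objective, for an illustrative logistic link (the proof itself only uses Assumption A1): by Stein's identity, $\partial_{v_1} \mathbb{E}[F(v_0 + v_1 G)] = \mathbb{E}[G\,\pi(v_0 + v_1 G)] = v_1 \mathbb{E}[\pi'(v_0 + v_1 G)]$, which at the truth equals ${\overline{\pi}}\,\alpha_1 \|\boldsymbol{\theta}_\mathsf{a}\|$.

```python
import numpy as np

def gauss_hermite_expect(h, v0, v1, n=80):
    """E[h(v0 + v1*G)] for G ~ N(0,1), via Gauss-Hermite quadrature."""
    x, w = np.polynomial.hermite.hermgauss(n)
    return float(np.sum(w * h(v0 + v1 * np.sqrt(2.0) * x)) / np.sqrt(np.pi))

sigmoid = lambda e: 1.0 / (1.0 + np.exp(-e))
dsigmoid = lambda e: sigmoid(e) * (1.0 - sigmoid(e))

mu_a, norm_theta = 0.5, 1.2        # hypothetical truth (mu_a, ||theta_a||)
pi_bar = gauss_hermite_expect(sigmoid, mu_a, norm_theta)
alpha1 = gauss_hermite_expect(dsigmoid, mu_a, norm_theta) / pi_bar

# d/dv1 E[F] at the truth is E[G pi(mu_a + ||theta_a|| G)]; by Stein's identity
# this equals ||theta_a|| E[pi'] = pi_bar * alpha1 * ||theta_a||.
x, w = np.polynomial.hermite.hermgauss(80)
g = np.sqrt(2.0) * x
dv1 = float(np.sum(w * g * sigmoid(mu_a + norm_theta * g)) / np.sqrt(np.pi))
assert abs(dv1 - pi_bar * alpha1 * norm_theta) < 1e-8
```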
Assumption A1 ensures that $\pi$ is strictly increasing, so that $F$ is strictly convex and $f$ is a convex objective. Moreover, it has Hessian given by $$\nabla^2 f(v_0,v_1) = \mathbb{E}_{G \sim \mathsf{N}(0,1)} \left[ \pi'(v_0 + v_1G) \begin{pmatrix} 1 & G \\ G & G^2 \end{pmatrix} \right].$$ Since $\pi'$ is always positive by Assumption A1 and, for every fixed nonzero $(a,b)$, $a + bG \neq 0$ almost surely, we have $(a,b) \nabla^2 f(v_0,v_1) (a,b)^\top = \mathbb{E}[\pi'(v_0 + v_1 G)(a + bG)^2] > 0$ (each realization of $\begin{pmatrix} 1 & G \\ G & G^2 \end{pmatrix}$ is only the rank-one positive semi-definite matrix $(1,G)^\top (1,G)$, but no fixed direction lies in its kernel for almost every $G$), so the objective $f$ is strictly convex. Thus, if equation [\[eq:prop-offset-and-signal-strenght-estimates\]](#eq:prop-offset-and-signal-strenght-estimates){reference-type="eqref" reference="eq:prop-offset-and-signal-strenght-estimates"} has a solution, it must be unique. Now note that for any $\mathcal{P}_{\mathrm{model}}$-dependent $C' > 0$ and all $v_0,v_1 \in [-C',C']$, the event $v_0 + v_1 G \in [-2C',2C']$ has probability at least $\mathbb{P}(|G| \leq 1)$. Thus, for any such $v_0,v_1$, we have $$\nabla^2f(v_0,v_1) \succeq \big(\min_{|\eta|\leq 2C'} \pi'(\eta)\big) \mathbb{E}\left[ \begin{pmatrix} 1 & G \\ G & G^2 \end{pmatrix} \mathbb{I}\{|G| \leq 1\} \right] \succeq c{\mathbf I}_2,$$ where we have used that the restricted second-moment matrix $\mathbb{E}\big[\begin{pmatrix} 1 & G \\ G & G^2 \end{pmatrix} \mathbb{I}\{|G| \leq 1\}\big]$ is positive definite and that, by Assumption A1, $\min_{|\eta|\leq 2C'} \pi'(\eta) > c$. Note also that $\partial_{v_0}\mathbb{E}[F(v_0 + v_1G)]|_{(v_0,v_1)=(\mu_\mathsf{a},\|\boldsymbol{\theta}_\mathsf{a}\|)} = {\overline{\pi}}$ and $\partial_{v_1}\mathbb{E}[F(v_0 + v_1G)]|_{(v_0,v_1)=(\mu_\mathsf{a},\|\boldsymbol{\theta}_\mathsf{a}\|)} = {\overline{\pi}}\alpha_1 \| \boldsymbol{\theta}_\mathsf{a}\|$.
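The strict positive definiteness of $\nabla^2 f$ can also be checked numerically, again for an illustrative logistic link and arbitrary $(v_0, v_1)$:

```python
import numpy as np

x, w = np.polynomial.hermite.hermgauss(60)
g = np.sqrt(2.0) * x
weights = w / np.sqrt(np.pi)                      # E[h(G)] = sum(weights * h(g))
sigmoid = lambda e: 1.0 / (1.0 + np.exp(-e))
dpi = lambda e: sigmoid(e) * (1.0 - sigmoid(e))   # logistic link, pi' > 0

def hessian(v0, v1):
    """E[pi'(v0 + v1 G) [[1, G], [G, G^2]]] by Gauss-Hermite quadrature."""
    p = dpi(v0 + v1 * g) * weights
    return np.array([[np.sum(p), np.sum(p * g)],
                     [np.sum(p * g), np.sum(p * g * g)]])

# Each realization (1, G)(1, G)^T is only rank-one PSD, but averaging over G
# yields a strictly positive definite Hessian, as the uniqueness argument needs.
eigs = np.linalg.eigvalsh(hessian(0.3, 0.8))
assert np.all(eigs > 0)
```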
Thus, $$\nabla f(\mu_\mathsf{a},\|\boldsymbol{\theta}_\mathsf{a}\|) = \begin{pmatrix} {\overline{\pi}}- \widehat{\pi}{}\\ {\overline{\pi}}\alpha_1 \| \boldsymbol{\theta}_\mathsf{a}\| - \widehat{\pi}{}\widehat{\gamma}{}_{\mathsf{a}*} \end{pmatrix}.$$ Therefore, because ${\overline{\pi}}\leq C$ and ${\overline{\pi}}\alpha_1 \| \boldsymbol{\theta}_\mathsf{a}\| \leq C$, the previous display and the facts that $\widehat{\pi}{} \ensuremath{\stackrel{\bullet}{=}}{\overline{\pi}}$ and $\widehat{\gamma}{}_{\mathsf{a}*} \ensuremath{\stackrel{\bullet}{=}}\alpha_1 \| \boldsymbol{\theta}_\mathsf{a}\|$ imply that with probability at least $1-Ce^{-cn\epsilon^r}$, the function $f$ has a minimizer satisfying $\|(\widehat{\mu}{}_\mathsf{a},\widehat{\gamma}{}_\mathsf{a}) - (\mu_\mathsf{a},\|\boldsymbol{\theta}_\mathsf{a}\|)\|\leq \epsilon$. Thus, we have that with exponentially high probability, equation [\[eq:prop-offset-and-signal-strenght-estimates\]](#eq:prop-offset-and-signal-strenght-estimates){reference-type="eqref" reference="eq:prop-offset-and-signal-strenght-estimates"} has a unique solution, and it satisfies $\widehat{\mu}{}_\mathsf{a}\ensuremath{\stackrel{\bullet}{=}}\mu_\mathsf{a}$ and $\widehat{\gamma}{}_\mathsf{a}\ensuremath{\stackrel{\bullet}{=}}\| \boldsymbol{\theta}_\mathsf{a}\|$. Further, by Assumption A1 (because $\pi$ has second derivative bounded by $C$ and ${\overline{\pi}}> c$), we have that $\frac{\mathbb{E}[\pi'(\widehat{\mu}{}_\mathsf{a}+ \widehat{\gamma}{}_\mathsf{a}G)]}{\mathbb{E}[\pi(\widehat{\mu}{}_\mathsf{a}+ \widehat{\gamma}{}_\mathsf{a}G)]}$ and $\frac{\mathbb{E}[\pi''(\widehat{\mu}{}_\mathsf{a}+ \widehat{\gamma}{}_\mathsf{a}G)]}{\mathbb{E}[\pi(\widehat{\mu}{}_\mathsf{a}+ \widehat{\gamma}{}_\mathsf{a}G)]}$ are $C$-Lipschitz in $(\widehat{\mu}{}_\mathsf{a},\widehat{\gamma}{}_\mathsf{a})$ on the set where $|\widehat{\mu}{}_\mathsf{a}- \mu_\mathsf{a}| \leq c'$ and $|\widehat{\gamma}{}_\mathsf{a}- \gamma_\mathsf{a}| \leq c'$ for some constant $c'$.
Thus, because $\widehat{\mu}{}_\mathsf{a}\ensuremath{\stackrel{\bullet}{=}}\mu_\mathsf{a}$ and $\widehat{\gamma}{}_\mathsf{a}\ensuremath{\stackrel{\bullet}{=}}\gamma_\mathsf{a}$, we have $\widehat{\alpha}{}_1 \ensuremath{\stackrel{\bullet}{=}} \alpha_1$ and $\widehat{\alpha}{}_2 \ensuremath{\stackrel{\bullet}{=}}\alpha_2$. The fact that $\widehat{\mathsf{c}}{}_{\boldsymbol{\Sigma}} \ensuremath{\stackrel{\bullet}{=}}\frac{\alpha_2 - \alpha_1^2}{1 + (\alpha_2 - \alpha_1^2)\| \boldsymbol{\theta}_\mathsf{a}\|^2}$ then follows from the concentration properties we have established and the boundedness of the parameters $\alpha_2,\alpha_1,\|\boldsymbol{\theta}_\mathsf{a}\|$ by $C$. ◻ *Proof of .* We establish the concentration claims one at a time.\ **Consistency of $\widehat{\beta}{}$.** When the moment method [\[eq:prop-db-option\]](#eq:prop-db-option){reference-type="eqref" reference="eq:prop-db-option"} is used to define $\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}$ and $\widehat{\beta}{}= \widehat{\alpha}{}_1$, $\widehat{\beta}{}\ensuremath{\stackrel{\bullet}{=}} \alpha_1$ by . By (b), $\widehat{\boldsymbol{\eta}}_\mathsf{a}^\mathsf{loo}\ensuremath{\stackrel{\bullet}{=}} \widehat{\boldsymbol{\eta}}_\mathsf{a}+ \zeta_\mathsf{a}^\eta \nabla_{\boldsymbol{\eta}}\ell_\mathsf{a}(\widehat{\boldsymbol{\eta}}_\mathsf{a};\boldsymbol{a})$. Then, by (d) and the KKT conditions for equation [\[eq:lin-predict-f\]](#eq:lin-predict-f){reference-type="eqref" reference="eq:lin-predict-f"}, ${\overline{\eta}}_\mathsf{a}^\mathsf{loo}\ensuremath{\stackrel{\bullet}{=}} \mathbb{E}[(\widehat{\eta}{}_{\mathsf{a},i}^f + \zeta_\mathsf{a}^\eta \ell_\mathsf{a}'(\widehat{\eta}{}_{\mathsf{a},i}^f;\ensuremath{a}_i^f))] = \mathbb{E}[\eta_{\mathsf{a},i}^{f,\mathsf{loo}}] = \widehat{\mu}{}_\mathsf{a}^f$.
Similarly, $\widehat{s}_{\mathsf{a}\widehat\mathsf{a}} \ensuremath{\stackrel{\bullet}{=}}\mathbb{E}[(\widehat{\eta}{}_{\mathsf{a},i}^{f,\mathsf{loo}} - \widehat{\mu}{}_\mathsf{a}^f)\ensuremath{a}_i^f]/({\overline{\pi}}\alpha_1) = \mathrm{Cov}(\widehat{\eta}{}_{\mathsf{a},i}^{f,\mathsf{loo}},\eta_{\mathsf{a},i}^f) = \langle \widehat{\boldsymbol{\theta}}{}_\mathsf{a}^f,\boldsymbol{\theta}_\mathsf{a}\rangle_{L_2}$, where we have applied Gaussian integration by parts and equation [\[eq:fixed-design-score\]](#eq:fixed-design-score){reference-type="eqref" reference="eq:fixed-design-score"}. We also have that $\|\widehat{\boldsymbol{\theta}}{}_\mathsf{a}\|^2 \ensuremath{\stackrel{\bullet}{=}}\| \widehat{\boldsymbol{\theta}}{}_\mathsf{a}^f \|_{L_2}^2$ by . That $\widehat{\beta}{}\ensuremath{\stackrel{\bullet}{=}} \beta_{\mathsf{a}\mathsf{a}}$ follows now from the smoothness properties of $\ell_\mathsf{a}$ and bounds on its derivatives (Assumption A1), the bounds on $\zeta_\mathsf{a}^\eta$ (), and the identities [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"} and [\[eq:score-deriv-explicit\]](#eq:score-deriv-explicit){reference-type="eqref" reference="eq:score-deriv-explicit"}.\ **Consistency of $\widehat{\mathsf{dbAdj}}_{01}$.** Recall $\boldsymbol{X}^\top \big(\boldsymbol{a}\odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0}\boldsymbol{1}- \boldsymbol{X} \widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big) / n = \nabla \Omega_\mathsf{y}(\widehat{\boldsymbol{\theta}}{}_\mathsf{y})$ by the KKT conditions for equation [\[eq:outcome-fit\]](#eq:outcome-fit){reference-type="eqref" reference="eq:outcome-fit"}.
Then , the bounds on the fixed point parameters (), the bounds in Assumption A1, and the KKT conditions for the fixed-design model optimization [\[eq:param-fit-f\]](#eq:param-fit-f){reference-type="eqref" reference="eq:param-fit-f"} imply that $$\begin{aligned} \widehat{\mathsf{dbAdj}}_{01} - \mathsf{dbAdj}_{01} \ensuremath{\stackrel{\bullet}{=}}\langle \boldsymbol{g}_{\mathsf{x},\mathsf{cfd}}^f,\boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f\rangle_{L_2}.\end{aligned}$$ By the fixed point equations [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"} and the fact that $\widehat{\psi}{}_{\mathsf{y},i}^f$ is non-zero only if $a_i^f$ is nonzero, we have $\langle \boldsymbol{a}^f , \widehat{\boldsymbol{i}}{}_{\mathsf{x},\mathsf{cfd}}^f \rangle_{L_2} = \langle \boldsymbol{1}, \widehat{\boldsymbol{i}}{}_{\mathsf{x},\mathsf{cfd}}^f \rangle_{L_2} = 0$. Thus, again applying the fixed point equations, $\boldsymbol{g}_{\mathsf{x},\mathsf{cfd}}^f$ is independent of $\boldsymbol{g}_{\mathsf{y}}^f$. Because both $\boldsymbol{y}_\mathsf{y}^f$ and $\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f$ are functions of $\boldsymbol{g}_\mathsf{y}^f$, we have $\langle \boldsymbol{g}_{\mathsf{x},\mathsf{cfd}}^f,\boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f\rangle_{L_2} = 0$, whence $\widehat{\mathsf{dbAdj}}_{01} - \mathsf{dbAdj}_{01}\ensuremath{\stackrel{\bullet}{=}}0$.\ **Consistency of $\widehat{\mathsf{dbAdj}}_{02}$.** By , $\widehat{\mathsf{c}}{}_{\boldsymbol{\Sigma}} \ensuremath{\stackrel{\bullet}{=}} \mathsf{c}_{\boldsymbol{\Sigma}}$.
By , $\langle \widehat{\boldsymbol{\mu}}{}_{\mathsf{x},\mathsf{cfd}} , \widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}\rangle \ensuremath{\stackrel{\bullet}{=}}\langle \boldsymbol{\mu}_{\mathsf{x},\mathsf{cfd}} + \boldsymbol{g}_{\mathsf{x},\mathsf{cfd}}^f,\boldsymbol{\theta}_\mathsf{a}+ \boldsymbol{g}_\circ^f\rangle_{L_2} = \langle \boldsymbol{\mu}_{\mathsf{x},\mathsf{cfd}} , \boldsymbol{\theta}_\mathsf{a}\rangle + \langle \boldsymbol{g}_{\mathsf{x},\mathsf{cfd}}^f , \boldsymbol{g}_\circ^f\rangle_{L_2}$, where $\boldsymbol{g}_\circ^f$ is defined as in . Completely analogous to the argument in , the fixed point equations [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"} imply $\boldsymbol{g}_{\mathsf{x},\mathsf{cfd}}^f$ and $\boldsymbol{g}_\circ^f$ have iid coordinates with correlation given by $\langle \boldsymbol{a}^f / {\overline{\pi}}- \boldsymbol{1}, \widehat{\boldsymbol{i}}{}_\circ^f \rangle_{L_2}/n^2$, whence, by the definition of $\widehat{s}_{\mathsf{x}\circ}$ and , $\widehat{s}_{\mathsf{x}\circ}p/n \ensuremath{\stackrel{\bullet}{=}}\langle \boldsymbol{g}_{\mathsf{x},\mathsf{cfd}}^f , \boldsymbol{g}_\circ^f\rangle_{L_2}$. We conclude $\langle\widehat{\boldsymbol{\mu}}{}_{\mathsf{x},\mathsf{cfd}},\widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}\rangle - \widehat{s}_{\mathsf{x}\circ}p/n \ensuremath{\stackrel{\bullet}{=}}\langle \boldsymbol{\mu}_{\mathsf{x},\mathsf{cfd}},\boldsymbol{\theta}_\mathsf{a}\rangle$. 
By a similar argument, and again using $\boldsymbol{X}^\top \big(\boldsymbol{a}\odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0}\boldsymbol{1}- \boldsymbol{X}\widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big) / n = \nabla \Omega_\mathsf{y}(\widehat{\boldsymbol{\theta}}{}_\mathsf{y})$ and the KKT conditions for the fixed-design model optimization [\[eq:param-fit-f\]](#eq:param-fit-f){reference-type="eqref" reference="eq:param-fit-f"}, we have $\big\langle \widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}, \boldsymbol{X}^\top\big(\boldsymbol{a}\odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0}\boldsymbol{1}- \boldsymbol{X}\widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big) \big\rangle / (n\widehat{\zeta}{}_\mathsf{y}^\theta) \ensuremath{\stackrel{\bullet}{=}}\langle\boldsymbol{\theta}_\mathsf{a}+ \boldsymbol{g}_\circ^f , \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle_{L_2} = \langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle_{L_2} + \langle \boldsymbol{g}_\circ^f , \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle_{L_2} \ensuremath{\stackrel{\bullet}{=}}\big\langle \widehat{\boldsymbol{\theta}}{}_\mathsf{a}, \boldsymbol{X}^\top\big(\boldsymbol{a}\odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0}\boldsymbol{1}- \boldsymbol{X} \widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big) \big\rangle / (n\widehat{\zeta}{}_\mathsf{y}^\theta) + \langle \boldsymbol{g}_\circ^f , \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle_{L_2}$. 
By Gaussian integration by parts and the last line of the fixed point equations [\[eq:regr-fixed-pt\]](#eq:regr-fixed-pt){reference-type="eqref" reference="eq:regr-fixed-pt"}, $\langle \boldsymbol{g}_\circ^f , \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle_{L_2} = (\langle \widehat{\boldsymbol{i}}{}_\circ^f , \widehat{\boldsymbol{i}}{}_\mathsf{y}^f \rangle_{L_2}/n^2)(p-n\zeta_\mathsf{y}^\eta\zeta_\mathsf{y}^\theta )$, whence, by the definition of $\widehat{s}_{\mathsf{y}\circ}$ and , $\widehat{s}_{\mathsf{y}\circ} (p-n\widehat{\zeta}{}_\mathsf{y}^\eta\widehat{\zeta}{}_\mathsf{y}^\theta)/n \ensuremath{\stackrel{\bullet}{=}} \langle \boldsymbol{g}_\circ^f , \boldsymbol{y}_\mathsf{y}^f - \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle_{L_2}$. We conclude $$\frac{\big\langle \widehat{\boldsymbol{\theta}}{}_\mathsf{a}^\textup{d}, \boldsymbol{X}^\top\big(\boldsymbol{a}\odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0}\boldsymbol{1}- \boldsymbol{X}\widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big) \big\rangle} {n\widehat{\zeta}{}_\mathsf{y}^\theta} - \frac{\widehat{s}_{\mathsf{y}\circ}(p-n \widehat{\zeta}{}_\mathsf{y}^\eta \widehat{\zeta}{}_\mathsf{y}^\theta)}{n} \ensuremath{\stackrel{\bullet}{=}} \frac{\big\langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{X}^\top\big(\boldsymbol{a}\odot (\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0} \boldsymbol{1}- \boldsymbol{X}\widehat{\boldsymbol{\theta}}{}_\mathsf{y})\big) \big\rangle} {n\widehat{\zeta}{}_\mathsf{y}^\theta}.$$ Combining the above approximations gives $\widehat{\mathsf{dbAdj}}_{02} \ensuremath{\stackrel{\bullet}{=}}\mathsf{dbAdj}_{02}$.\ **Consistency of $\widehat{\mathsf{dbAdj}}_1$.** This follows from the previous display and the consistency of $\widehat{\mathsf{c}}{}_{\boldsymbol{\Sigma}}$.
◻ # Failure of alternative estimators {#SecInconsistency} We now establish that several natural constructions of the debiased plug-ins fail to achieve consistent estimation of $\mu_\mathsf{y}$ in the sense of  or unbiased normality in the sense of . These constructions were briefly described at the beginning of . ## Failure of naïve debiased ridge {#sec:db-prop-agnostic-failure} Using the bias estimates [\[eq:db-naive\]](#eq:db-naive){reference-type="eqref" reference="eq:db-naive"} with weights $w_i = 1$ (both in this equation and in the M-estimation objective [\[eq:outcome-fit\]](#eq:outcome-fit){reference-type="eqref" reference="eq:outcome-fit"}) gives provably inconsistent estimates when $\boldsymbol{\Sigma}= {\mathbf I}_p$ and ridge regression is used. **Proposition 2**. *Consider the bias estimates [\[eq:db-naive\]](#eq:db-naive){reference-type="eqref" reference="eq:db-naive"}, obtained from the $M$-estimate [\[eq:outcome-fit\]](#eq:outcome-fit){reference-type="eqref" reference="eq:outcome-fit"} using $\Omega_\mathsf{y}(\boldsymbol{v}) = \frac\lambda2 \|\boldsymbol{v}\|^2$ with $w_i = 1$ and some $\lambda > c > 0$. Assume $\boldsymbol{\Sigma}= {\mathbf I}_p$.* *Under Assumption A1, there exist $C_0,c_0,c_1 > 0$, depending only on $\mathcal{P}_{\mathrm{model}}$, such that if $c_0\|\boldsymbol{\theta}_\mathsf{a}\|^2 + \langle \boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}\rangle > \Delta$ or $C_0\|\boldsymbol{\theta}_\mathsf{a}\|^2 + \langle \boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}\rangle < -\Delta$ for $\Delta \geq 0$, then $$\begin{gathered} \widehat{\mu}{}_\mathsf{y}^\textup{d}\gtrdot \mu_\mathsf{y}+ c_1\Delta, \qquad \widehat{\mu}{}_\mathsf{y}^\textup{d}\lessdot \mu_\mathsf{y}- c_1\Delta, \end{gathered}$$ respectively.
Moreover, in either of these cases, there exists a number $\mathsf{bias}$, which depends on the model [\[eq:model\]](#eq:model){reference-type="eqref" reference="eq:model"} and the choice of $\lambda$, such that $$\frac{1}{p} \sum_{j=1}^p \mathbb{I}\Bigg\{ \frac{ \sqrt{n} \big( \widehat{\theta}{}_{\mathsf{y},j}^\textup{d}- \big( \theta_{\mathsf{y},j} + \mathsf{bias}\cdot \theta_{\mathsf{a},j} \big) \big) }{ \widehat{s}_\mathsf{y} \boldsymbol{\Sigma}_{j|-j}^{-1/2} } \leq t \Bigg\} \ensuremath{\stackrel{\bullet}{=}} \mathbb{P}\big(\mathsf{N}(0,1) \leq t\big),$$ where $$\label{eq:se-w-zeta} \widehat{s}_\mathsf{y}^2 = \frac{\frac{1}{n}\|\boldsymbol{y}- \widehat{\theta}{}_{\mathsf{y},0}\boldsymbol{1} - \boldsymbol{X}\widehat{\boldsymbol{\theta}}{}_\mathsf{y}\|^2}{(\widehat{\zeta}{}_\mathsf{y}^\theta)^2}.$$ Moreover, $|\mathsf{bias}| \geq c_1\Delta|\alpha_2-\alpha_1^2|$, where $\alpha_1 \ensuremath{: =}\mathbb{E}[\pi'(\eta_\mathsf{a})]/\mathbb{E}[\pi(\eta_\mathsf{a})]$ and $\alpha_2 \ensuremath{: =}\mathbb{E}[\pi''(\eta_\mathsf{a})]/\mathbb{E}[\pi(\eta_\mathsf{a})]$.* The conditions $c_0\|\boldsymbol{\theta}_\mathsf{a}\|^2 + \langle \boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y} \rangle > \Delta$ and $C_0\|\boldsymbol{\theta}_\mathsf{a}\|^2 + \langle \boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}\rangle < -\Delta$ have a meaningful interpretation. The first case, $c_0\|\boldsymbol{\theta}_\mathsf{a}\|^2 + \langle \boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}\rangle > \Delta$, is one in which both (1) the propensity model signal strength is bounded below, which is equivalent to there being a non-negligible dependency between the missingness indicator and the covariates, and (2) the propensity model and outcome model parameters are sufficiently aligned, so that the missingness mechanism is non-negligibly confounded with the outcome.
The case $C_0\|\boldsymbol{\theta}_\mathsf{a}\|^2 + \langle \boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}\rangle < -\Delta$ likewise requires that the propensity model signal strength is bounded below and the propensity and outcome models are sufficiently anti-aligned. also implies that the coordinates of the debiased estimates have a bias in the direction of the propensity model parameter. Note that $\alpha_2$ and $\alpha_1$ depend only on the link $\pi$ and the mean and variance of the linear predictors $\eta_{\mathsf{a},i}$. Thus, in a proportional asymptotics in which the signal strength $\| \boldsymbol{\theta}_\mathsf{a}\|$ and the linear predictor mean $\langle \boldsymbol{\mu}_\mathsf{x}, \boldsymbol{\theta}_\mathsf{a}\rangle$ are held constant, the $\mathsf{bias}$ term will not, in general, decay to 0. *Proof of .* We apply  with $w(\eta) = 1$, $\boldsymbol{m}= \boldsymbol{\mu}_\mathsf{x}$, and $\boldsymbol{M}= {\mathbf I}_p$. This corresponds to taking $\mathsf{c}_{\boldsymbol{\mu}} = \mathsf{c}_{\boldsymbol{\Sigma}} = 0$. As in the proof of  for oracle ASCW, we have that $\pi_\zeta(\eta) = \frac{\zeta}{1 + \zeta}\pi(\eta)$, whence $\alpha_1(\zeta) = \alpha_1$ and $\alpha_2(\zeta) = \alpha_2$. Thus, $\mathsf{bias}_1(\zeta_\mathsf{y}^\eta) = \alpha_1$ and $\mathsf{bias}_2(\zeta_\mathsf{y}^\eta) = \alpha_2 - \alpha_1^2$. By Assumption A1 and because $\pi(\eta) \leq 1$, $$\mathsf{bias}_1(\widehat{\zeta}{}_\mathsf{y}^\theta) = \frac{\mathbb{E}[\pi'(\eta_\mathsf{a})]}{\mathbb{E}[\pi(\eta_\mathsf{a})]} \geq \mathbb{E}[\pi'(\eta_\mathsf{a})] > c,$$ because $\pi'$ is lower bounded by a constant on compact intervals and $\eta_\mathsf{a}$ has mean and variance bounded by $C$. 
In the case of ridge regression with $\Omega_\mathsf{y}(\boldsymbol{v}) = \frac{\lambda}{2} \|\boldsymbol{v}\|^2$, equation [\[eq:param-fit-f\]](#eq:param-fit-f){reference-type="eqref" reference="eq:param-fit-f"} gives $\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f = \frac{\zeta_\mathsf{y}^\theta}{\lambda + \zeta_\mathsf{y}^\theta} \boldsymbol{y}_\mathsf{y}^f.$ Thus, by , $\langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{\theta}_\mathsf{y}- \widehat{\boldsymbol{\theta}}{}_\mathsf{y}^f \rangle_{L_2} = \frac{\zeta_\mathsf{y}^\theta}{\lambda + \zeta_\mathsf{y}^\theta} \big( \beta_{\mathsf{y}\mathsf{a}} \|\boldsymbol{\theta}_\mathsf{a}\|^2 + \langle \boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}\rangle \big).$ Because $C > \zeta_\mathsf{y}^\theta,\beta_{\mathsf{y}\mathsf{a}} > c$ by , we have that for some $c, c'' > 0$, if $c\|\boldsymbol{\theta}_\mathsf{a}\|^2+\langle \boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}\rangle > \Delta > 0$, the right-hand side of the preceding display is bounded below by $c''\Delta$, whence $\langle \boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}- \widehat{\boldsymbol{\theta}}{}_\mathsf{y}\rangle \gtrdot c''\Delta$. On the other hand, if $C\|\boldsymbol{\theta}_\mathsf{a}\|^2+\langle \boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}\rangle < -\Delta$, then the right-hand side of the preceding display is bounded above by $-c''\Delta$, whence $\langle \boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}- \widehat{\boldsymbol{\theta}}{}_\mathsf{y}\rangle \lessdot -c''\Delta$. The two claims of the proposition now follow directly from , using that $\mathsf{c}_{\boldsymbol{\mu}} = 0$.
◻ ## Failure of debiased ridge with IPW loss {#sec:db-reweighting} The failure of the construction which uses $\boldsymbol{\mu}_{\mathsf{x}} = \mathbb{E}[\boldsymbol{x}]$ and $\boldsymbol{\Sigma}= \mathrm{Var}(\boldsymbol{x})$ in place of $\boldsymbol{\mu}_{\mathsf{x},\mathsf{cfd}}$ and $\boldsymbol{\Sigma}_{\mathsf{cfd}}$ occurs due to the discrepancy between the unconditional feature distribution $\boldsymbol{x}_i \mathrel{\stackrel{\mathrm{iid}}{\sim}} \mathsf{N}(\boldsymbol{\mu}_\mathsf{x},\boldsymbol{\Sigma})$ and the feature distribution conditional on inclusion in the outcome fit $\boldsymbol{x}_i \mathrel{\stackrel{\mathrm{iid}}{\sim}}\mathsf{Law}(\boldsymbol{x}\mid d= 1)$. An alternative and popular approach to addressing this discrepancy is to reweight the sample so that the reweighted sample has the desired unconditional distribution [@Hahn1998; @arkhangelsky2021doublerobust]. In this section, we study such approaches when they are based on oracle knowledge of the propensity score. The failure with inverse propensity weighted loss suggests to us that it may be difficult to construct a fully empirical approach based on bias estimates [\[eq:db-naive\]](#eq:db-naive){reference-type="eqref" reference="eq:db-naive"} with appropriately constructed weights $w_i$, although we cannot eliminate this possibility. **Proposition 3**. *Consider the bias estimates [\[eq:db-naive\]](#eq:db-naive){reference-type="eqref" reference="eq:db-naive"}, obtained from equation [\[eq:outcome-fit\]](#eq:outcome-fit){reference-type="eqref" reference="eq:outcome-fit"} with $w_i = 1/\pi(\theta_{\mathsf{a},0} + \langle \boldsymbol{x}_i , \boldsymbol{\theta}_\mathsf{a}\rangle)$, penalty $\Omega_\mathsf{y}(\boldsymbol{v}) = \frac\lambda2 \|\boldsymbol{v}\|^2$ for some $\lambda > c > 0$, and $\boldsymbol{\Sigma}= {\mathbf I}_p$. 
Then, there exist $C_0,c_0,c_1 > 0$, depending only on $\mathcal{P}_{\mathrm{model}}$, such that if $c_0\|\boldsymbol{\theta}_\mathsf{a}\|^2 + \langle \boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}\rangle > \Delta$ or $C_0\|\boldsymbol{\theta}_\mathsf{a}\|^2 + \langle \boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}\rangle < -\Delta$ for $\Delta \geq 0$, then, with $\widehat{\mu}{}_\mathsf{y}^\textup{d}$ the G-computation estimate with these debiased estimates as plug-ins, we have $$\begin{gathered} \widehat{\mu}{}_\mathsf{y}^\textup{d}\gtrdot \mu_\mathsf{y}+ c_1\Delta, \qquad \widehat{\mu}{}_\mathsf{y}^\textup{d}\lessdot \mu_\mathsf{y}- c_1\Delta, \end{gathered}$$ respectively.* See the paragraph following  for a discussion of the assumptions $c_0\|\boldsymbol{\theta}_\mathsf{a}\|^2 + \langle \boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}\rangle > \Delta$ or $C_0\|\boldsymbol{\theta}_\mathsf{a}\|^2 + \langle \boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}\rangle < -\Delta$ for $\Delta \geq 0$. *Proof of .* We apply  with $w(\eta) = 1/\pi(\eta)$, $\boldsymbol{m}= \boldsymbol{\mu}_\mathsf{x}$, and $\boldsymbol{M}= {\mathbf I}_p$. This corresponds to taking $\mathsf{c}_{\boldsymbol{\mu}} = \mathsf{c}_{\boldsymbol{\Sigma}} = 0$. In this case, by equation [\[eq:alpha-12-zeta-def\]](#eq:alpha-12-zeta-def){reference-type="eqref" reference="eq:alpha-12-zeta-def"}, $\pi_\zeta(\eta) \ensuremath{: =} \frac{\zeta \pi(\eta)}{\zeta + \pi(\eta)}.$ Note that $\pi_\zeta'(\eta)/\pi_\zeta(\eta) = \frac{\zeta\pi'(\eta)}{(\zeta+\pi(\eta))\pi(\eta)}\geq \frac{\zeta\pi'(\eta)}{\zeta + 1}$. Because, by Assumption A1, $\pi'(\eta) \geq \mathsf{c}(M)$ for $\eta \in [-M,M]$, $\eta_\mathsf{a}$ has mean and variance bounded by $C$, and $\widehat{\zeta}{}_\mathsf{a}^\theta \ensuremath{\stackrel{\bullet}{=}}\zeta_\mathsf{a}^\theta \geq c$ by  and , we have that $\alpha_1(\widehat{\zeta}{}_\mathsf{a}^\theta)\gtrdot c > 0$.
The result then follows by combining this bound with the bounds on $\langle \boldsymbol{\theta}_\mathsf{a},\boldsymbol{\theta}_\mathsf{y}- \widehat{\boldsymbol{\theta}}{}_\mathsf{y}\rangle_{L_2}$ in the proof of . ◻ ## Debiased ridge with degrees-of-freedom adjusted IPW loss In the course of studying debiased ridge with IPW loss, we found that the bias estimates [\[eq:db-naive\]](#eq:db-naive){reference-type="eqref" reference="eq:db-naive"} are successful provided that a certain degrees-of-freedom adjusted IPW loss is used in the outcome fit [\[eq:outcome-fit\]](#eq:outcome-fit){reference-type="eqref" reference="eq:outcome-fit"} and a strict overlap condition holds. This method relies on oracle knowledge of the propensity score, so it is not clear that it is of practical interest: for example, we did not assess whether it outperforms the Horvitz-Thompson estimator. Further, it relies on choosing the correct regularization parameter, and we have not investigated the success of methods for hyperparameter tuning. Nevertheless, we find it interesting that the failure of debiased ridge with IPW loss can, in principle, be resolved by certain degrees-of-freedom adjusted weights, so we include this result here. **Proposition 4**. *In addition to Assumption A1, suppose that $\pi(\eta) \geq \pi_{\min}$ uniformly for some $\pi_{\min} > 0$. Moreover, consider $\omega \in (0,\pi_{\min})$.
Then, there exist some $C > c > 0$ depending only on $\mathcal{P}_{\mathrm{model}}$, $\pi_{\min}$ and $\pi_{\min} - \omega$ such that if we compute $\widehat{\theta}{}_{\mathsf{y},0}^\textup{d},\widehat{\boldsymbol{\theta}}{}_\mathsf{y}^\textup{d}$ as in equation [\[eq:outcome-fit\]](#eq:outcome-fit){reference-type="eqref" reference="eq:outcome-fit"} with weights $w_i = \frac{1}{\pi(\theta_{\mathsf{a},0} + \langle \boldsymbol{x}_i,\boldsymbol{\theta}_\mathsf{a}\rangle) - \omega},$ and penalty $\lambda \Omega_\mathsf{y}$ in place of $\Omega_\mathsf{y}$, we have $$\begin{aligned} \widehat{\mu}{}_\mathsf{y}^\textup{d}- \mu_\mathsf{y}\lessdot C \big|\widehat{\zeta}{}_\mathsf{y}^\eta/\lambda - \omega\big|.\end{aligned}$$* Note that $\big|\widehat{\zeta}{}_\mathsf{y}^\eta/\lambda - \omega\big|$ is an empirical quantity, so that  gives an empirical upper bound on the bias of the population mean estimate $\widehat{\mu}{}_\mathsf{y}^\textup{d}$. This suggests a natural approach to estimating the population mean with a weighting-based strategy: tune $\lambda$ so that $\widehat{\zeta}{}_\mathsf{y}^\eta/\lambda - \omega = 0$, and then use the estimates in equation [\[eq:outcome-fit\]](#eq:outcome-fit){reference-type="eqref" reference="eq:outcome-fit"} with this choice of $\lambda$. Analyzing this approach rigorously would require addressing several difficulties. First, all results we have presented hold for a fixed penalty $\Omega_\mathsf{y}$, so they do not immediately apply to $\lambda$ chosen adaptively. The papers [@miolane2021; @celentanoMontanariWei2020] consider adaptive choices of $\lambda$ using uniform concentration and continuity results, and it is likely that similar analyses could be carried out in the present setting. Second, $\widehat{\zeta}{}_\mathsf{y}^\eta$ is itself a function of $\lambda$, so that it is not clear that a solution to $\widehat{\zeta}{}_\mathsf{y}^\eta/\lambda - \omega = 0$ exists. Nevertheless, we suspect that such solutions do exist.
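The tuning strategy just described — choose $\lambda$ so that $\widehat{\zeta}{}_\mathsf{y}^\eta/\lambda - \omega = 0$ — can be illustrated numerically. The snippet below is a hedged sketch, not part of our analysis: it assumes a hypothetical degrees-of-freedom curve $\mathsf{df}(\lambda) = \sum_i s_i/(s_i + \lambda)$ for a toy spectrum $\{s_i\}$, uses the heuristic identity $\widehat{\zeta}{}_\mathsf{y}^\eta = (\mathsf{df}/n)/(1 - \mathsf{df}/n)$, and solves for $\lambda$ by bisection on a log scale.

```python
import numpy as np

def ridge_df(lam, evals):
    """Ridge degrees of freedom for a design with (toy) spectrum `evals`."""
    return np.sum(evals / (evals + lam))

def zeta_hat(lam, evals, n):
    """Heuristic zeta_hat = (df/n) / (1 - df/n); purely illustrative."""
    df = ridge_df(lam, evals)
    return (df / n) / (1.0 - df / n)

def tune_lambda(omega, evals, n, lo=1e-3, hi=1e3, iters=100):
    """Bisect g(lam) = zeta_hat(lam)/lam - omega, which decreases in lam here."""
    g = lambda lam: zeta_hat(lam, evals, n) / lam - omega
    assert g(lo) > 0 > g(hi), "root not bracketed"
    for _ in range(iters):
        mid = np.sqrt(lo * hi)  # bisection step on a log scale
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)

rng = np.random.default_rng(0)
n, p = 200, 100
evals = rng.uniform(0.5, 2.0, size=p)  # hypothetical spectrum of X^T X / n
lam_star = tune_lambda(omega=0.2, evals=evals, n=n)
```

In this toy model $\lambda \mapsto \widehat{\zeta}{}_\mathsf{y}^\eta(\lambda)/\lambda$ is strictly decreasing, so the bracketing check succeeds and the bisection converges to the unique root; whether this monotonicity holds for the empirical $\widehat{\zeta}{}_\mathsf{y}^\eta$ is exactly the open question discussed above.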
Indeed, recall that for least-squares and the Lasso with fully observed outcomes, $\widehat{\zeta}{}_\mathsf{y}^\eta = ({\widehat{\mathsf{df}}}_\mathsf{y}/n)/(1-{\widehat{\mathsf{df}}}_\mathsf{y}/n)$ (see ). Intuitively, the degrees-of-freedom is large for small $\lambda$ and small for large $\lambda$. Because the degrees-of-freedom tends to decrease with $\lambda$ (though it need not be strictly decreasing in $\lambda$), it is natural to expect that $\widehat{\zeta}{}_\mathsf{y}^\eta/\lambda - \omega = 0$ is solved for some value of $\lambda$ based on an Intermediate Value Theorem type argument. Because our primary interest in this paper is to provide estimates which are empirical in both the outcome and propensity model, and alternative approaches are known to be consistent when propensity scores are known exactly (e.g., the Horvitz-Thompson estimator [@Hahn1998; @horvitzThompson1952]), we do not rigorously investigate a tuning-based version of  here. **Remark 7** (The consistency regime limit). *Rather than tuning $\lambda$, one could also imagine tuning $\omega$. Based on the intuition described above that $\widehat{\zeta}{}_\mathsf{y}^\eta = ({\widehat{\mathsf{df}}}_\mathsf{y}/n)/(1-{\widehat{\mathsf{df}}}_\mathsf{y}/n)$, we should expect that $\widehat{\zeta}{}_\mathsf{y}^\eta \rightarrow 0$ in a consistency regime. For example, the consistency regime for the Lasso is $s = o(n/\log p)$. If this is the case, we can take $\omega \rightarrow 0$ in , recovering inverse propensity weighting. The details of this statement depend on how $\lambda$ scales in consistency regimes, with the correct scaling in this regime potentially incompatible with Assumption A1 under which . Thus, verifying this intuition rigorously is beyond the scope of the present paper.* *Proof of .* We apply  with $w(\eta) = \frac{\lambda^{-1}}{\pi(\eta) - \omega}$, $\boldsymbol{m}= \boldsymbol{\mu}_\mathsf{x}$, and $\boldsymbol{M}= {\mathbf I}_p$.
This corresponds to taking $\mathsf{c}_{\boldsymbol{\mu}} = 0$ and $\mathsf{c}_{\boldsymbol{\Sigma}} = 0$. Note that Assumption A1 is satisfied for this choice of weights if we allow the constants in the upper and lower bounds on $w$ and its Lipschitz constant to depend on an upper and lower bound on $\lambda$ and a lower bound on $\pi_{\min} - \omega$. Thus, allowing the constants in  (including those in the high probability approximations) to depend on these additional quantities, we may apply  with this choice of weights. We compute $$\pi_\zeta(\eta) = \frac{\zeta\lambda^{-1}}{1 + (\zeta\lambda^{-1}-\omega)/\pi(\eta)}, \qquad \frac{\pi_\zeta'(\eta)}{\pi_\zeta(\eta)} = \frac{ (\zeta\lambda^{-1}-\omega)\pi'(\eta)/\pi(\eta)^2 }{ 1 + (\zeta\lambda^{-1}-\omega)/\pi(\eta) }.$$ Using the upper bound on $\pi'(\eta)$ from Assumption A1, the lower bound on $\pi(\eta) \geq \pi_{\min}$, and the lower bound on $\pi_{\min} - \omega$, we see that $|\mathsf{bias}_1(\zeta)| = |\alpha_1(\zeta)| \leq C |\zeta \lambda^{-1} - \omega|$. This bound and the bounds in  imply that $|\langle \boldsymbol{\theta}_\mathsf{a}, \boldsymbol{\theta}_\mathsf{y}- \widehat{\boldsymbol{\theta}}{}_\mathsf{y}\rangle| \lessdot C$. Thus, the first claim of the proposition follows from . ◻ [^1]: This formula is valid even if $K_{g,55} = 0$ or $K_{g,66} = 0$, in which case $\boldsymbol{g}_5^{\mathsf{se}} = {\boldsymbol 0}$ or $\boldsymbol{g}_6^{\mathsf{se}} = {\boldsymbol 0}$ [^2]: Here we allow $t < 0$, and for $t < 0$ interpret the integral in the usual way as $\int_0^t \pi(s) \textup{d}s = -\int_t^0 \pi(s) \textup{d}s$.
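The mechanism in the proof of Proposition 4 admits a quick numerical sanity check (illustrative only; the link $\pi$, the value $\omega = 0.2$, and the ratios $\zeta/\lambda$ below are hypothetical choices of ours): when $\zeta\lambda^{-1} = \omega$, the effective link $\pi_\zeta$ is constant in $\eta$, so the bias term $\alpha_1(\zeta) = \mathbb{E}[\pi_\zeta'(\eta)]/\mathbb{E}[\pi_\zeta(\eta)]$ vanishes, while it is nonzero otherwise.

```python
import numpy as np

def pi(eta, pi_min=0.3):
    # A hypothetical link bounded below by pi_min, as Proposition 4 requires
    return pi_min + (1 - pi_min) / (1 + np.exp(-eta))

def pi_zeta(eta, c, omega):
    # The effective link from the proof, with c = zeta / lambda
    return c / (1 + (c - omega) / pi(eta))

def alpha1(c, omega, etas):
    # Monte Carlo estimate of E[pi_zeta'] / E[pi_zeta] via central differences
    h = 1e-5
    deriv = (pi_zeta(etas + h, c, omega) - pi_zeta(etas - h, c, omega)) / (2 * h)
    return deriv.mean() / pi_zeta(etas, c, omega).mean()

rng = np.random.default_rng(0)
etas = rng.normal(size=10_000)
omega = 0.2
b0 = alpha1(omega, omega, etas)  # zeta/lambda = omega: bias term vanishes
b1 = alpha1(0.5, omega, etas)    # zeta/lambda != omega: nonzero bias term
```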
--- abstract: | The recent deployment of multi-agent systems in a wide range of scenarios has enabled the solution of learning problems in a distributed fashion. In this context, agents are tasked with collecting local data and then cooperatively training a model, without directly sharing the data. While distributed learning offers the advantage of preserving agents' privacy, it also poses several challenges in terms of designing and analyzing suitable algorithms. This work focuses specifically on the following challenges motivated by practical implementation: (i) online learning, where the local data change over time; (ii) asynchronous agent computations; (iii) unreliable and limited communications; and (iv) inexact local computations. To tackle these challenges, we introduce the Distributed Operator Theoretical (DOT) version of the Alternating Direction Method of Multipliers (ADMM), which we call the DOT-ADMM Algorithm. We prove that it converges with a linear rate for a large class of convex learning problems (e.g., linear and logistic regression problems) toward a bounded neighborhood of the optimal time-varying solution, and characterize how the neighborhood depends on $\text{(i)--(iv)}$. We corroborate the theoretical analysis with numerical simulations comparing the DOT-ADMM Algorithm with other state-of-the-art algorithms, showing that only the proposed algorithm exhibits robustness to (i)--(iv). author: - | Nicola Bastianello$^*$, , Diego Deplano$^*$, ,\ Mauro Franceschelli, , and Karl H.
Johansson, [^1] [^2] [^3] [^4] bibliography: - autosam.bib title: Online Distributed Learning over Random Networks --- Distributed learning, online learning, asynchronous networks, unreliable communications, ADMM # Introduction {#sec:introduction} In recent years, significant technological advancements have enabled the deployment of multi-agent systems across various domains, including robotics, power grids, and traffic networks [@molzahn_survey_2017; @nedic_distributed_2018]. These systems consist of interconnected agents that leverage their computational and communication capabilities to collaborate in performing assigned tasks. Many of these tasks -- such as estimation [@Montijano21; @Deplano23], coordination and control [@Deplano20; @Santilli21; @Deplano23novel], resilient operation [@Shang20; @Sheng22; @santilli2022secure], and learning [@boyd2011distributed; @qian_distributed_2022; @park_communication-efficient_2021] -- can be formulated as *distributed optimization problems*, see [@nedic_distributed_2018; @notarstefano_distributed_2019; @yang_survey_2019] for some recent surveys. In this context, this work focuses specifically on distributed optimization algorithms for learning under network constraints. Traditional machine learning methods require transmitting all the data collected by the agents to a single location, where they are processed to train a model. However, communicating raw data exposes agents to privacy breaches, which is inadmissible in many applications such as healthcare and industry [@gafni_federated_2022]. Moreover, it is often the case that the agents collect new data over time, which requires another round of data transmission and, as a result, entails both an increased privacy vulnerability and the need to repeat the centralized training task.
In the alternative approach of distributed learning, the agents within the network first process their data to compute an approximate local model, and then refine this model by sharing it with their peers and agreeing upon a common model that better fits all the distributed data sets, with the goal of improving the overall accuracy. This strategy, however, poses some practical challenges -- discussed in the next subsection -- both at the computation level, such as differing computation speed and precision, and at the communication level, such as loss and corruption of packets. The focus of this paper is thus on solving online learning problems in a distributed way while addressing these practical challenges, which are discussed in detail in Section [1.1](#subsec:challenges){reference-type="ref" reference="subsec:challenges"}. In principle, different approaches developed in the distributed optimization literature can be applied to this problem, which are reviewed in Section [1.2](#sec:review){reference-type="ref" reference="sec:review"}. This paper introduces a distributed operator theoretical (DOT) version of the alternating direction method of multipliers (ADMM), which we call the DOT-ADMM Algorithm. Section [1.3](#sec:maincont){reference-type="ref" reference="sec:maincont"} outlines how the DOT-ADMM advances the state-of-the-art thanks to its unique set of features and advantages, which are supported by novel theoretical results on stochastic operators. ## Practical challenges in distributed learning {#subsec:challenges} **Asynchronous agent computations**. The agents cooperating for the solution of a learning problem are oftentimes highly heterogeneous, in particular in terms of their computational resources [@gafni_federated_2022]; consequently, the agents may perform local computations at different rates.
The simple solution of synchronizing the network by enforcing that all agents terminate the $k$-th local computation at iteration $k$ entails that better-performing agents must wait for the slower ones (cf. [@peng_coordinate_2016 Fig. 2]). Therefore, in this paper we allow the agents to perform local processing at their own pace, which is a more efficient strategy than enforcing synchronization. **Unreliable communications.** In real-world scenarios, the agents have at their disposal imperfect channels to deliver the locally processed models to their peers. One problem that may occur, particularly when relying on wireless communications, is that transmissions from one agent to another may be lost in transit (e.g., due to interference [@qian_distributed_2022]). When a transmission is lost, the newly processed local model of an agent is not delivered to its neighboring agents, and the algorithms need to be robust to this occurrence. A second problem that must be faced, especially when the local models stored by the agents are high dimensional, is the impracticability of sharing the exact local model over limited channels (e.g., when training a deep neural network). To satisfy limited communications constraints, different approaches have been explored, foremost of which is *quantization/compression* of the messages exchanged by the agents [@park_communication-efficient_2021]. But quantizing a communication implies that an inexact version of the local models is shared by the agents, which can be seen as a disturbance in the communication. Therefore, we consider both packet-loss and packet-corruption during the communications between agents, which allow us to deal with the two above-mentioned problems. **Inexact local computations.** In learning applications, the local costs of the agents are often defined on a large number of data points. 
Computing the gradient (or, as will be the case of the proposed algorithm, the proximal) of the costs may thus be impractical, particularly in an online set-up in which agents can process local data only in the interval of time $[k, k+1)$. To solve this issue, so-called *stochastic gradients* are employed, which construct an approximation of the local gradients using a limited number of data points [@koloskova_unified_2020]. This implies that every time a stochastic gradient is applied by an agent, some error is introduced in the algorithm. As mentioned above, the proposed algorithm needs to compute the proximal of the local cost function. However, unless the cost is proximable [@Parikh14] and there is a closed form, the proximal needs to be computed via an iterative scheme, such as gradient descent. But again due to the limited computational time $[k, k+1)$ allowed to the agent, the proximal can be computed only with a limited number of iterations, introducing an approximation. The issue may be further compounded by the use of stochastic gradients instead of full gradients. ## Distributed learning review {#sec:review} Among the numerous and various approaches proposed in the state-of-the-art, the pioneering algorithms are those based on (sub)gradient methods [@nedic_distributed_2018], which however require the use of diminishing step-sizes to guarantee convergence, and thus are not compatible with asynchronous computations. Another class of suitable algorithms is that of gradient tracking, which can achieve convergence with the use of fixed step-sizes [@xin_general_2020]. On the one hand, different gradient tracking methods have been proposed for application in an online learning context, see *e.g.* [@yuan_can_2020; @carnevale_gtadam_2023]. 
On the other, the use of robust average consensus techniques has also enabled the deployment of gradient tracking for learning under the constraints of Section [1.1](#subsec:challenges){reference-type="ref" reference="subsec:challenges"}, see *e.g.* [@xu_convergence_2018; @bof_multiagent_2019; @tian_achieving_2020; @li_variance_2022; @lei_distributed_2022]. However, gradient tracking algorithms may suffer from a lack of robustness to some of these challenges [@bin_stability_2022]. Our approach falls in a different branch of research, which is based on the alternating direction method of multipliers (ADMM). ADMM-based algorithms have turned out to be reliable and versatile for distributed optimization [@boyd2011distributed; @peng_arock_2016]. In particular, ADMM has been shown to be robust to asynchronous computations [@wei_o1k_2013; @chang_asynchronous_2016; @peng_arock_2016], packet losses [@majzoobi_analysis_2018], and both [@Bastianello21]. Additionally, its convergence with inexact communications has been analyzed in [@majzoobi_analysis_2019], and the impact of inexact local computations has been studied in [@xie_siadmm_2020]. Moreover, the convergence of ADMM-based algorithms under network constraints has usually been shown to occur at a sub-linear rate, whereas a linear rate can be proved only under additional assumptions, such as strong convexity [@Bastianello21]. Instead, the algorithm we propose is shown to converge with a linear rate without the strong convexity assumption, under a milder set of assumptions, while facing at the same time all the challenges described in Section [1.1](#subsec:challenges){reference-type="ref" reference="subsec:challenges"}.
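To make the inexact proximal evaluations of Section 1.1 concrete, the following sketch (ours; the quadratic local cost, dimensions, and step size are illustrative assumptions, and this is not the DOT-ADMM implementation) approximates $\operatorname{prox}_{\gamma f}(v) = \arg\min_x f(x) + \tfrac{1}{2\gamma}\|x - v\|^2$ for $f(x) = \tfrac12\|Ax - b\|^2$ by a limited number of gradient steps and compares it with the closed-form proximal:

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 30, 5
A = rng.normal(size=(m, d))   # toy local data of one agent
b = rng.normal(size=m)
v = rng.normal(size=d)        # point at which the proximal is evaluated
gamma = 0.5

def prox_exact(v, gamma):
    # Closed-form proximal of f(x) = 0.5*||Ax - b||^2 at v
    return np.linalg.solve(A.T @ A + np.eye(d) / gamma, A.T @ b + v / gamma)

def prox_inexact(v, gamma, n_steps):
    # Approximate the proximal with n_steps of gradient descent on
    # h(x) = f(x) + ||x - v||^2 / (2*gamma), started at v
    L = np.linalg.norm(A.T @ A, 2) + 1.0 / gamma  # smoothness constant of h
    x = v.copy()
    for _ in range(n_steps):
        grad = A.T @ (A @ x - b) + (x - v) / gamma
        x -= grad / L
    return x

x_star = prox_exact(v, gamma)
errs = [np.linalg.norm(prox_inexact(v, gamma, k) - x_star) for k in (1, 10, 100)]
```

More inner iterations yield a smaller proximal error, at the cost of more computation within the time interval $[k, k+1)$ available to the agent; the residual error is precisely the inexactness the analysis must absorb.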
## Main contributions {#sec:maincont} The proposed DOT-ADMM Algorithm has the following set of features: - Convergence with a linear rate for linear and logistic regression problems; - Applicability in an online scenario where the data sets available to the agents change over time; - Robustness to asynchronous and inexact computations of the agents; - Robustness to unreliable communications. The main theoretical results of the paper are as follows: - The first main result is that of proving linear convergence (in mean) to a neighborhood of the set of fixed points for a special class of time-varying stochastic operators defined by averaged and *metric subregular* operators (see Theorem [Theorem 3](#thm:glob-msr){reference-type="ref" reference="thm:glob-msr"}). Such a neighborhood is formally described and it is shown to be almost surely reached asymptotically. Current results in the state-of-the-art can only guarantee sub-linear convergence without the additional assumption of metric subregularity [@combettes_stochastic_2015]. When metric subregularity does not hold everywhere but only in a subset of the state space, it is shown that the above-described results hold eventually after a finite time (see Theorem [Theorem 4](#thm:loc-msr-static-converr){reference-type="ref" reference="thm:loc-msr-static-converr"}). - The second main result shows that if the proposed DOT-ADMM Algorithm is employed to solve convex distributed optimization problems and if the associated operator is *metric subregular*, then the distance to a neighborhood of the time-varying set of optimal solutions converges linearly (in mean) and the neighborhood is almost surely reached asymptotically (see Theorem [Theorem 1](#thm:stochastic+noise+varying){reference-type="ref" reference="thm:stochastic+noise+varying"}).
The main novelty of this result is that the optimization problem is not required to be strongly convex, unlike in previous results [@shi_linear_2014; @iutzeler_explicit_2016; @makhdoumi_convergence_2017; @Bastianello21]. Complementary results are provided for simpler scenarios where some of the challenges addressed do not come into play (see Corollary [Corollary 1](#cor:stochastic){reference-type="ref" reference="cor:stochastic"}) and for the case in which metric subregularity holds in a subset of the state space (see Theorem [Theorem 2](#thm:local){reference-type="ref" reference="thm:local"}). - The last but most practical result consists in showing how metric subregularity can be verified for standard learning problems, such as linear regression (see Proposition [Proposition 1](#prop:linear){reference-type="ref" reference="prop:linear"}) and logistic regression (see Proposition [Proposition 2](#prop:logistic){reference-type="ref" reference="prop:logistic"}). These results can also be used as tutorial examples for other learning problems with different regression models. The performance of the DOT-ADMM Algorithm is evaluated on the more complex logistic regression problem and compared with that of gradient tracking algorithms, which are usually employed in this scenario [@tian_achieving_2020; @li_variance_2022]. Simulations reveal that the DOT-ADMM Algorithm outperforms gradient tracking algorithms, showing high resilience to all the above-discussed challenges *at the same time*. ## Outline Section [2](#sec:notation){reference-type="ref" reference="sec:notation"} provides the notation used throughout the paper and useful preliminaries on graph theory and operator theory. In Section [3](#sec:algorithm){reference-type="ref" reference="sec:algorithm"} we first formalize the problem of distributed learning and state our main working assumptions, then we describe the proposed $\text{DOT-ADMM}$ Algorithm and discuss its convergence properties.
Section [4](#sec:convergence){reference-type="ref" reference="sec:convergence"} is devoted to the proof of the convergence result anticipated in Section [3](#sec:algorithm){reference-type="ref" reference="sec:algorithm"}, which relies on a novel foundational result on stochastic operators that are metric subregular. Section [5](#sec:distributed-application){reference-type="ref" reference="sec:distributed-application"} outlines how the proposed algorithm can be applied to different learning problems, with a focus on linear and logistic regression problems. In Section [6](#sec:numerical){reference-type="ref" reference="sec:numerical"} several numerical experiments are presented, and Section [7](#sec:conclusion){reference-type="ref" reference="sec:conclusion"} gives some concluding remarks. # Problem formulation {#sec:notation} ## Notation and preliminaries The sets of real and integer numbers are denoted by $\mathbb{R}$ and $\mathbb{Z}$, respectively, while $\mathbb{R}_+$ and $\mathbb{N}$ denote their restriction to positive entries. Matrices are denoted by uppercase letters, vectors and scalars by lowercase letters, while sets and spaces are denoted by uppercase calligraphic letters. The identity matrix is denoted by $I_n$, $n\in\mathbb{N}$, while the vectors of ones and zeros are denoted by $\mathbbold{1}_n$ and $\mathbbold{0}_n$; subscripts are omitted if clear from the context. Maximum and minimum of an $n$-element vector ${u=[u_1,\ldots, u_n]^\top}$ are denoted by $\overline{u}=\max_{i =1,\ldots,n} u_{i}$ and $\underline{u}=\min_{i =1,\ldots,n} u_{i}$, respectively. The element-wise (Hadamard) product between two vectors $x, y\in\mathbb{R}^n$ is denoted by the symbol $\otimes$ and is defined component-wise by $( x\otimes y)_i = x_iy_i$ for $i=1,\ldots,n$.
*Networks and graphs -* We consider networks modeled by undirected graphs ${\mathcal{G}=(\mathcal{V},\mathcal{E})}$, where $\mathcal{V}=\{1,\ldots,n\}$, $n\in\mathbb{N}$, is the set of *nodes*, and $\mathcal{E}\subseteq \mathcal{V}\times \mathcal{V}$ is the set of *edges* connecting the nodes. An undirected graph $\mathcal{G}$ is said to be *connected* if there exists a sequence of consecutive edges between any pair of nodes $i,j \in \mathcal{V}$. Nodes $i$ and $j$ are *neighbors* if there exists an edge ${(i,j)\in \mathcal{E}}$. The set of neighbors of node $i$ is denoted by $\mathcal{N}_i=\left\{j\in \mathcal{V}: (i,j)\in \mathcal{E}\right\}$. For the sake of simplicity, we consider graphs with self-loops, i.e., $i\in \mathcal{N}_i$, and denote by $\eta_i = |\mathcal{N}_i|$ the number of neighbors and by $\xi=2|\mathcal{E}|=\eta_1+\cdots+\eta_n$ twice the number of undirected edges in the network. *Operator theory -* We introduce some key notions from operator theory in finite-dimensional Euclidean spaces, i.e., vector spaces $\mathbb{R}^n$ endowed with the norm ${\left\vert\kern-0.25ex\left\vert \cdot\right\vert\kern-0.25ex\right\vert}$ and distance $d$ given by $${\left\vert\kern-0.25ex\left\vert x\right\vert\kern-0.25ex\right\vert}=\sqrt{ x^\top x},\qquad d( x, y) = {\left\vert\kern-0.25ex\left\vert x- y\right\vert\kern-0.25ex\right\vert}.$$ Operators $\mathsf{F}:\mathbb{R}^n\rightarrow \mathbb{R}^n$ are denoted with block capital letters. An operator is *affine* if there exist a matrix $A\in\mathbb{R}^{n\times n}$ and a vector $b\in\mathbb{R}^n$ such that ${\mathsf{F}: x\mapsto A x+b}$, and *linear* if $b=\mathbbold{0}$. The linear operator associated with the identity matrix $I$ is defined by ${\mathsf{Id}: x \mapsto I x}$. Given $\mathsf{F}:\mathbb{R}^n\rightarrow\mathbb{R}^n$, ${\mathop{\mathrm{fix}}(\mathsf{F})=\{ x\in\mathbb{R}^n:\mathsf{F}( x)= x\}}$ denotes its set of *fixed points*.
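As a toy illustration of these notions (the matrix and vectors below are arbitrary, not from the paper), an affine operator with $\|A\|_2<1$ is nonexpansive and its fixed-point set is the solution set of $(I-A)x=b$:

```python
import numpy as np

# Affine operator F: x -> A x + b; here ||A||_2 < 1, so F is nonexpansive
# (in fact contractive) and fix(F) is the unique solution of (I - A) x = b.
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])
b = np.array([1.0, -1.0])
F = lambda x: A @ x + b

assert np.linalg.norm(A, 2) < 1              # spectral norm below one
x_star = np.linalg.solve(np.eye(2) - A, b)   # the unique fixed point

# F(x*) = x*, and F does not increase distances between points
x, y = np.array([3.0, -2.0]), np.array([0.5, 1.0])
print(np.allclose(F(x_star), x_star),
      np.linalg.norm(F(x) - F(y)) <= np.linalg.norm(x - y))   # prints: True True
```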
The distance of a point $x$ from a set $\mathcal{X}$ is defined by $$d_{\mathcal{X}}( x) = \inf_{ y \in \mathcal{X}} {\left\vert\kern-0.25ex\left\vert x - y\right\vert\kern-0.25ex\right\vert}.$$ When $\mathcal{X}$ is the set of fixed points of an operator $\mathsf{F}$, we use the shorthand notation $d_{\mathsf{F}}$ instead of $d_{\mathop{\mathrm{fix}}(\mathsf{F})}$. With this notation, we now define some operator-theoretic properties that are pivotal in this paper.

**Definition 1**. *An operator $\mathsf{F}: \mathbb{R}^n \to \mathbb{R}^n$ is metric subregular if there is a constant $\gamma>0$ such that $$\label{eq:metric-subregularity} d_{\mathsf{F}}( x) \leq \gamma {\left\vert\kern-0.25ex\left\vert (\mathsf{Id}- \mathsf{F}) x\right\vert\kern-0.25ex\right\vert},\qquad \forall x\in\mathbb{R}^n.$$*

**Definition 2**. *An operator $\mathsf{F}: \mathbb{R}^n \to \mathbb{R}^n$ is nonexpansive if $${\left\vert\kern-0.25ex\left\vert \mathsf{F}( x)-\mathsf{F}( y)\right\vert\kern-0.25ex\right\vert}\leq {\left\vert\kern-0.25ex\left\vert x- y\right\vert\kern-0.25ex\right\vert},\qquad \forall x,y\in\mathbb{R}^n.$$ Moreover, the operator $\mathsf{G}:=(1-\alpha)\mathsf{Id}+\alpha\mathsf{F}$ with $\alpha\in(0,1)$ is $\alpha$-averaged if $\mathsf{F}$ is nonexpansive, or, equivalently, if $${\left\vert\kern-0.25ex\left\vert \mathsf{G}( x)-\mathsf{G}( y)\right\vert\kern-0.25ex\right\vert}^2\leq {\left\vert\kern-0.25ex\left\vert x- y\right\vert\kern-0.25ex\right\vert}^2 - \frac{1-\alpha}{\alpha}{\left\vert\kern-0.25ex\left\vert (\mathsf{Id}-\mathsf{G})( x)-(\mathsf{Id}-\mathsf{G})( y)\right\vert\kern-0.25ex\right\vert}^2.$$*

Let $\Gamma_0^n$ be the set of proper, lower semicontinuous[^5], convex functions from $\mathbb{R}^n$ to $\mathbb{R}\cup \{ +\infty \}$.
Then, given $f \in \Gamma_0^n$, the proximal operator of $f$ is defined by $$\mathop{\mathrm{prox}}^{\rho}_{f}( y)=\mathop{\mathrm{\arg\!\min}}_{ x}\left\{f( x)+\frac{1}{2\rho}{\left\vert\kern-0.25ex\left\vert x- y\right\vert\kern-0.25ex\right\vert}^2\right\},$$ where $\rho>0$ is a penalty parameter; if $\rho = 1$, it is omitted. The proximal operator is $1/2$-averaged. ## The problem We consider a network of $n$ agents linked according to an undirected, connected graph ${\mathcal{G}=(\mathcal{V},\mathcal{E})}$. Each agent $i\in\mathcal{V}$ has a vector state ${ x_i(k) \in \mathbb{R}^p}$ with $p\in\mathbb{N}$, and has access to a *time-varying* local cost $f_{i,k} : \mathbb{R}^p \to \mathbb{R}\cup \{ +\infty \}$, $k \in \mathbb{N}$. The objective of the network is to solve the optimization problem: [\[eq:online-distributed-optimization\]]{#eq:online-distributed-optimization label="eq:online-distributed-optimization"} $$\begin{aligned} &\min_{ x_i} \sum_{i \in\mathcal{V}} f_{i,k}( x_i) \\ &\text{s.t.} \ x_i = x_j \ \text{if} \ (i,j) \in \mathcal{E}\end{aligned}$$ whose set of solutions is denoted by $\mathcal{X}^*_k$. We make the following two standing assumptions, which are standard assumptions in online optimization [@dallanese_optimization_2020]. **Assumption 1**. *At each time $k\in\mathbb{N}$, the local cost functions $f_{i,k}$ of problem [\[eq:online-distributed-optimization\]](#eq:online-distributed-optimization){reference-type="eqref" reference="eq:online-distributed-optimization"} are proper, lower semi-continuous, and convex, i.e., $f_{i,k}\in \Gamma_0^p$.* **Assumption 2**. *The set of solutions $\mathcal{X}^*_k$ to problem [\[eq:online-distributed-optimization\]](#eq:online-distributed-optimization){reference-type="eqref" reference="eq:online-distributed-optimization"} are non-empty at each time $k\in\mathbb{N}$. 
In addition, the distance between solutions at consecutive times is uniformly bounded by a constant $\sigma \geq 0$, i.e., for all $k\in\mathbb{N}$ and all $x^*(k-1)\in\mathcal{X}^*_{k-1}$, $$\inf_{x^*(k)\in\mathcal{X}^*_k} {\left\vert\kern-0.25ex\left\vert x^*(k) - x^*(k-1)\right\vert\kern-0.25ex\right\vert} \leq \sigma.$$*

**Remark 1**. *Under Assumption [Assumption 1](#as:convexity){reference-type="ref" reference="as:convexity"} the set of solutions $\mathcal{X}_k^*$ to problem [\[eq:online-distributed-optimization\]](#eq:online-distributed-optimization){reference-type="eqref" reference="eq:online-distributed-optimization"} is convex (see [@Bauschke2017 Proposition 11.6]) but not necessarily closed or bounded. Consequently, the solution is not necessarily unique.*

The dynamic nature of the problem implies that the solution -- in general -- cannot be reached exactly, but rather that the agents' states will reach a neighborhood of it [@simonetto_time-varying_2020; @dallanese_optimization_2020]. Our goal is thus to quantify how closely the agents can track the optimal solution over *random networks*, which are characterized by the following challenging conditions, as discussed in Section [1.1](#subsec:challenges){reference-type="ref" reference="subsec:challenges"}:

- asynchronous agents' computations;

- unreliable communications;

- inexact local computations.

The next section introduces the proposed algorithm along with the formal description of the above challenges and the main working assumptions.

# Proposed algorithm and convergence results {#sec:algorithm}

To solve the problem in eq.
[\[eq:online-distributed-optimization\]](#eq:online-distributed-optimization){reference-type="eqref" reference="eq:online-distributed-optimization"}, we employ the Distributed Operator Theoretical (DOT) version of the Alternating Direction Method of Multipliers (ADMM), called the DOT-ADMM Algorithm (see [@Bastianello21] and references therein), which is derived by applying the relaxed Peaceman-Rachford splitting method to the dual of the problem in eq. [\[eq:online-distributed-optimization\]](#eq:online-distributed-optimization){reference-type="eqref" reference="eq:online-distributed-optimization"}; its distributed implementation is detailed in Algorithm [\[alg:distributed-admm\]](#alg:distributed-admm){reference-type="ref" reference="alg:distributed-admm"}. Each agent $i \in \mathcal{V}$ first updates its local state $x_i \in \mathbb{R}^p$ according to eq. [\[eq:admm-x\]](#eq:admm-x){reference-type="eqref" reference="eq:admm-x"}, then sends some information to each neighbor $j\in\mathcal{N}_i$ within the packet $y_{i\to j}$. The agents are assumed to be heterogeneous in their computation capabilities; therefore, at each time step only some of the agents are ready for the communication phase. In turn, after receiving the information from its neighbors, each agent updates its auxiliary state variables $z_{ij}\in\mathbb{R}^p$ with $j\in\mathcal{N}_i$ according to eq. [\[eq:admm-z\]](#eq:admm-z){reference-type="eqref" reference="eq:admm-z"}. Note that the local update in eq. [\[eq:admm-x\]](#eq:admm-x){reference-type="eqref" reference="eq:admm-x"} depends only on information available to agent $i$, while for the auxiliary update in eq. [\[eq:admm-z\]](#eq:admm-z){reference-type="eqref" reference="eq:admm-z"} the agent needs to first receive the aggregate information $y_{j\to i}(k)$ from the neighbor $j$.
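To make the two-step update concrete, the following is a minimal synchronous, noiseless sketch for scalar quadratic costs $f_i(x)=(x-a_i)^2/2$, whose proximal has a closed form. The message $y_{j\to i}(k)=2\rho x_j(k)-z_{ji}(k-1)$ used below is an assumption on our part, chosen to be consistent with the compact form of the updates given in Section [4.2](#sec:proofmainres){reference-type="ref" reference="sec:proofmainres"}; the graph, costs, and parameters are illustrative:

```python
import numpy as np

# Path graph 0-1-2 with self-loops, as assumed in the paper.
neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}
a = np.array([1.0, 4.0, 7.0])   # quadratic costs f_i(x) = (x - a_i)^2 / 2
rho, alpha = 1.0, 0.5           # penalty and relaxation parameters
z = {(i, j): 0.0 for i in neighbors for j in neighbors[i]}
x = np.zeros(3)

for _ in range(5000):
    # local update (prox step): argmin_x f_i(x) + (rho*eta_i/2) * (x - v)^2
    for i, Ni in neighbors.items():
        eta = len(Ni)
        v = sum(z[i, j] for j in Ni) / (rho * eta)
        x[i] = (a[i] + rho * eta * v) / (1.0 + rho * eta)
    # auxiliary update, with the (assumed) message y_{j->i} = 2*rho*x_j - z_{ji}
    z = {(i, j): (1 - alpha) * z[i, j] + alpha * (2 * rho * x[j] - z[j, i])
         for (i, j) in z}

# all local states approach consensus on the minimizer mean(a) = 4.0
```

At a fixed point, $z_{ij}+z_{ji}=2\rho x_j$ for every edge, which forces consensus on a connected graph and, summing the prox optimality conditions, singles out the minimizer of $\sum_i f_i$.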
The DOT-ADMM Algorithm also makes explicit where the sources of stochasticity discussed in Section [1.1](#subsec:challenges){reference-type="ref" reference="subsec:challenges"} come into play: asynchronous agents' computations and packet-loss prevent the updates in eqs. [\[eq:admm-x\]](#eq:admm-x){reference-type="eqref" reference="eq:admm-x"}-[\[eq:admm-z\]](#eq:admm-z){reference-type="eqref" reference="eq:admm-z"} from being performed at each time $k\in\mathbb{N}$, whereas inexact local computations and noisy communications make these updates inaccurate. We model these sources of stochasticity by means of the following random variables:

- ${\beta_{ij}(k)\sim \text{Ber}(p_{ij})}$ are Bernoulli random variables[^6] with $p_{ij}\in(0,1]$ modeling the asynchronous agents' computations and packet-loss: $p_{ij}$ denotes the probability that agent $j$ has completed its computation and that packet $y_{j\to i}$ successfully arrives at agent $i$;

- $u_{i}(k)$ and $v_{ij}(k)$ are i.i.d. random variables which represent, respectively, the additive error modeling inexact local updates of $x_i$ and the noise on the transmission of $y_{j\to i}(k)$ sent by node $j$ to node $i$.

With these definitions in place, we define the perturbed variables as follows $$\begin{aligned} \widetilde{x}_i(k) &= x_i(k) +u_i(k)\\ \widetilde{z}_{ij}(k) &= z_{ij}(k) + \alpha v_{ij}(k). \end{aligned}$$ One can further notice that the additive error $u_i(k)$ on $x_i(k)$ is also an additive error on $y_{j\to i}$ (scaled by a factor equal to $2\rho$).
Therefore, one can consider a single source of error $e_{ij}(k)=v_{ij}(k) + 2\rho u_i(k)$ and write the perturbed updates as [\[eq:admm-complete\]]{#eq:admm-complete label="eq:admm-complete"} $$\begin{aligned} x_i(k) &= \mathsf{F}_{i,k}( \widetilde{z}(k-1))\label{eq:admm-compact-x}\\ \widetilde{z}_{ij}(k) &= \begin{cases} \mathsf{T}_{ij,k}(\widetilde{z}(k-1)) + \alpha e_{ij}(k) & \text{if } \beta_{ij}(k) = 1 \\ \widetilde{z}_{ij}(k-1)& \text{otherwise} \end{cases}\label{eq:admm-compact-z}\end{aligned}$$ where the operators $\mathsf{F}_{i,k}$, $\mathsf{T}_{ij,k}$ are those of eqs. [\[eq:admm-x\]](#eq:admm-x){reference-type="eqref" reference="eq:admm-x"}-[\[eq:admm-z\]](#eq:admm-z){reference-type="eqref" reference="eq:admm-z"}.

**for $k=1,2,\ldots$ each active agent $i\in\mathcal{V}$** receives a local cost $f_{i,k}$ and applies the local update $$\label{eq:admm-x} x_i(k) = \mathop{\mathrm{prox}}_{f_{i,k}}^{1/\rho \eta_i} \left( \frac{1}{\rho \eta_i} \sum_{j \in \mathcal{N}_i} z_{ij}(k-1) \right)$$ updates the auxiliary variable $$\label{eq:admm-z} z_{ij}(k) = (1 - \alpha) z_{ij}(k-1) + \alpha y_{j\to i}(k),$$\
**end for**

With this notation, we next formalize the challenging assumptions under which the problem in eq. [\[eq:online-distributed-optimization\]](#eq:online-distributed-optimization){reference-type="eqref" reference="eq:online-distributed-optimization"} must be solved.

**Assumption 3**. *At each time step, each node $j$ has completed its local computation and successfully transmits data to its neighbors $i\in\mathcal{N}_j$ with probability $p_{ij} \in (0, 1]$.*

**Assumption 4**.
*Each node $i\in \mathcal{V}$ updates its local variable $x_i(k)$ with an additive error ${u_{i}(k)\in\mathbb{R}^p}$ and receives information from neighbor $j\in\mathcal{N}_i$ with an additive noise ${v_{ij}(k)\in\mathbb{R}^p}$ such that the overall error ${e_{ij}(k) = v_{ij}(k)+2\rho u_i(k)}$ on the update of the auxiliary variable $z_{ij}(k)$ is bounded by ${\mathbb{E}\left[{\left\vert\kern-0.25ex\left\vert e_{ij}(k)\right\vert\kern-0.25ex\right\vert}\right]\leq \nu_e<\infty}$.* ## Convergence results {#subsec:algorithm-convergence} For the convenience of the reader, we state next our main results, while we postpone their proofs to Sections [4.2](#sec:proofmainres){reference-type="ref" reference="sec:proofmainres"}-[4.3](#sec:proofsecres){reference-type="ref" reference="sec:proofsecres"}, since they require several intermediate developments on stochastic operators discussed in Section [4.1](#subsec:stochastic){reference-type="ref" reference="subsec:stochastic"}. We begin with the following theorem, which characterizes the mean linear convergence of DOT-ADMM in the stochastic scenario described in Section [1.1](#subsec:challenges){reference-type="ref" reference="subsec:challenges"} and formalized in Section [3](#sec:algorithm){reference-type="ref" reference="sec:algorithm"}. **Theorem 1** (Linear convergence). 
*Consider the online distributed optimization in problem [\[eq:online-distributed-optimization\]](#eq:online-distributed-optimization){reference-type="eqref" reference="eq:online-distributed-optimization"} under Assumptions [Assumption 1](#as:convexity){reference-type="ref" reference="as:convexity"}-[Assumption 2](#as:time-variability){reference-type="ref" reference="as:time-variability"}, and a connected network of agents that solves it by running the DOT-ADMM Algorithm under Assumptions [Assumption 3](#as:random-updates){reference-type="ref" reference="as:random-updates"}-[Assumption 4](#as:additive-error){reference-type="ref" reference="as:additive-error"}. If:*

- *the map $\mathsf{T}_k$ defined block-wise by $\mathsf{T}_{ij,k}$ as in eq. [\[eq:admm-z\]](#eq:admm-z){reference-type="eqref" reference="eq:admm-z"} is metric subregular;*

*then the following results for the distance between $x(k)$ and the set of solutions $\mathcal{X}_k^*$ to problem [\[eq:online-distributed-optimization\]](#eq:online-distributed-optimization){reference-type="eqref" reference="eq:online-distributed-optimization"} hold:*

- *there is an upper bound for $d_{\mathcal{X}_k^*}( x(k))$ that holds in mean for all $k \in \mathbb{N}$, and this bound has a linearly decaying dependence on the initial condition;*

- *there is an upper bound for $d_{\mathcal{X}_k^*}( x(k))$ that holds almost surely[^7] when $k \to \infty$, and this bound does not depend on the initial condition.*

The following corollary makes explicit how the results of Theorem [Theorem 1](#thm:stochastic+noise+varying){reference-type="ref" reference="thm:stochastic+noise+varying"} simplify in some simpler scenarios.

**Corollary 1** (Particular cases). *Consider the scenario of Theorem [Theorem 1](#thm:stochastic+noise+varying){reference-type="ref" reference="thm:stochastic+noise+varying"}.
If the cost functions $f_{i,k}=f_i$ are static and there are no sources of additive error, then, letting $\mathcal{X}^*$ be the set of solutions to the problem in eq. [\[eq:online-distributed-optimization\]](#eq:online-distributed-optimization){reference-type="eqref" reference="eq:online-distributed-optimization"}:*

- *the distance of $x(k)$ from $\mathcal{X}^*$ converges linearly to zero in mean;*

- *$x(k)$ almost surely converges to $\mathcal{X}^*$ when $k \to \infty$.*

*If further the communications are synchronous and reliable, then*

- *$x(k)$ converges linearly and almost surely to $\mathcal{X}^*$.*

We conclude with another result showing that *eventual* linear convergence can be proved for strongly convex and smooth costs; this result subsumes that of [@Bastianello21], proving it in a more general scenario.

**Theorem 2** (Eventual linear convergence). *Consider a static instance of the distributed optimization in problem [\[eq:online-distributed-optimization\]](#eq:online-distributed-optimization){reference-type="eqref" reference="eq:online-distributed-optimization"} over a connected network of agents that solves it by running the DOT-ADMM Algorithm under Assumptions [Assumption 3](#as:random-updates){reference-type="ref" reference="as:random-updates"}-[Assumption 4](#as:additive-error){reference-type="ref" reference="as:additive-error"}. If the costs are strongly convex and twice continuously differentiable, then:*

- *there is a finite time $k^* \in \mathbb{N}$ such that for $k \geq k^*$ results $i)$ and $ii)$ of Theorem [Theorem 1](#thm:stochastic+noise+varying){reference-type="ref" reference="thm:stochastic+noise+varying"} hold.*

# Proofs of the Convergence Results {#sec:convergence}

In this section, we first provide general convergence results for stochastic metric subregular operators, which we then leverage to prove the convergence results of Section [3.1](#subsec:algorithm-convergence){reference-type="ref" reference="subsec:algorithm-convergence"}.
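Before the formal development, the mechanism at stake can be illustrated numerically with a toy operator (not the DOT-ADMM operator): take $\mathsf{G}=\tfrac12(\mathsf{Id}+\mathsf{P})$, where $\mathsf{P}$ projects onto the consensus line $\operatorname{span}\{\mathbbold{1}\}$ in $\mathbb{R}^5$. Then $\mathsf{G}$ is $1/2$-averaged and metric subregular with $\gamma=2$, since $d_{\mathsf{G}}(z)=\|z-\operatorname{mean}(z)\mathbbold{1}\|=2\|(\mathsf{Id}-\mathsf{G})z\|$, and random Bernoulli coordinate updates still drive the distance to $\operatorname{fix}(\mathsf{G})$ to zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_act, iters = 5, 0.7, 500
z = rng.normal(size=n)
d0 = np.linalg.norm(z - z.mean())     # distance to fix(G) = span{1}

for _ in range(iters):
    beta = rng.random(n) < p_act      # Bernoulli coordinate activations
    target = (z + z.mean()) / 2       # synchronous application of G
    z = np.where(beta, target, z)     # only active coordinates move

d = np.linalg.norm(z - z.mean())
# the distance contracts despite the random, asynchronous coordinate updates
```

In this run the final distance `d` is smaller than `d0` by many orders of magnitude, in line with the linear-in-mean contraction established below.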
## Foundational convergence results {#subsec:stochastic}

The goal of this section is to prove the eventual linear convergence (in mean) of a special class of time-varying stochastic operators built from $\alpha$-averaged and $\gamma$-metric subregular operators. In particular, we are interested in operators of the form of eq. [\[eq:gen-stoch\]](#eq:gen-stoch){reference-type="eqref" reference="eq:gen-stoch"}, which have been proven to be convergent in mean (but sub-linearly) by Combettes and Pesquet [@combettes_stochastic_2015], under the assumption of $\alpha$-averagedness and diminishing errors.

**Theorem 3**. *Let ${\widetilde{\mathsf{T}}^e_k:\mathbb{R}^m\rightarrow \mathbb{R}^m}$ be a time-varying operator defined component-wise by $$\label{eq:gen-stoch} \widetilde{\mathsf{T}}^e_{\ell,k}( z) := \begin{cases} \mathsf{T}_{\ell,k}( z) + e_{\ell,k} & \text{if } \beta_{\ell,k} = 1 \\ z_{\ell}& \text{otherwise} \end{cases}$$ for $\ell=1,\ldots,m$, where $e_{\ell,k}$ are i.i.d. random variables and ${\beta_{\ell,k}\sim \emph{\text{Ber}}(p_{\ell})}$ are i.i.d. Bernoulli random variables such that $p_{\ell}\in(0,1]$.
If the following hold:*

- *there exists $\sigma\geq 0$ such that $d_{\mathsf{T}_{k}}( z^*) \leq \sigma$ for all ${ z^* \in \mathop{\mathrm{fix}}(\mathsf{T}_{k-1})}$;*

- *${\mathsf{T}_k}$ is $\alpha$-averaged;*

- *$\mathsf{T}_k$ is $\gamma$-metric subregular;*

*then the iteration $z(k)=\widetilde{\mathsf{T}}^e_k( z(k-1))$ converges linearly in mean according to $$\mathbb{E}\left[d_{\mathsf{T}_k}( z(k))\right] \leq \sqrt{\frac{\overline{p}}{\underline{p}}} \left[\mu^k d_{\mathsf{T}_0}( z(0)) + \sum_{h=1}^{k}\mu^{k-h}(\mathbb{E}\left[{\left\vert\kern-0.25ex\left\vert e_h\right\vert\kern-0.25ex\right\vert}\right]+\mu\sigma)\right]$$ and it almost surely holds that $$\limsup_{k \to \infty} d_{\mathsf{T}_k}( z(k)) \leq \lim_{k \to \infty} \sqrt{\frac{\overline{p}}{\underline{p}}}\sum_{h = 1}^{k} \mu^{k - h} (\mathbb{E}\left[{\left\vert\kern-0.25ex\left\vert e_h\right\vert\kern-0.25ex\right\vert}\right]+\mu\sigma),$$ where $$\label{eq:mu} \mu = \sqrt{1 - \frac{(1-\alpha)\underline{p}}{\alpha\lambda} }, \quad \lambda > \max\left\{\gamma^2,\frac{(1-\alpha)\underline{p}}{\alpha}\right\}.$$*

*Proof.* *\[Punctual upper bound\] -* We make use of the so-called *diagonally-weighted norm* in the sense of [@FB-CTDS], where the vector of positive weights is the vector of probabilities $p=[p_1,\ldots,p_m]^\top$, defined by $$\label{eq:newnorm} {\left\vert\kern-0.3ex\left\vert\kern-0.3ex\left\vert z\right\vert\kern-0.3ex\right\vert\kern-0.3ex\right\vert}^2 = \sum_{\ell = 1}^m \frac{1}{p_\ell} z_\ell^2,$$ following the notation of [@combettes_stochastic_2015].
Clearly, such a norm satisfies the following $$\label{eq:equivalent-norms} \underline{p}{\left\vert\kern-0.3ex\left\vert\kern-0.3ex\left\vert z\right\vert\kern-0.3ex\right\vert\kern-0.3ex\right\vert}^2 \leq {\left\vert\kern-0.25ex\left\vert z\right\vert\kern-0.25ex\right\vert}^2 \leq \overline{p}{\left\vert\kern-0.3ex\left\vert\kern-0.3ex\left\vert z\right\vert\kern-0.3ex\right\vert\kern-0.3ex\right\vert}^2.$$ Similarly to the Euclidean distance $d_{\mathsf{T}_k}( z)$ from the set of fixed points of $\mathsf{T}_k$, we define the distance ${d_{\mathsf{T}_k}'( z) = \inf_{ y \in \mathop{\mathrm{fix}}(\mathsf{T}_k)} {\left\vert\kern-0.3ex\left\vert\kern-0.3ex\left\vert z - y\right\vert\kern-0.3ex\right\vert\kern-0.3ex\right\vert}}$, such that $$\label{eq:metric-sub-mnorm} \frac{1}{\overline{p}} d_{\mathsf{T}_k}( z)^2 \overset{(i)}{\leq} d_{\mathsf{T}_k}'( z)^2 \overset{(i)}{\leq} \frac{1}{\underline{p}} d_{\mathsf{T}_k}( z)^2 \overset{(ii)}{\leq} \frac{\gamma^2}{\underline{p}} {\left\vert\kern-0.25ex\left\vert (\mathsf{Id}- \mathsf{T}_k) z\right\vert\kern-0.25ex\right\vert}^2,$$ where the inequalities (i) follow from the definition of the projection and eq. [\[eq:equivalent-norms\]](#eq:equivalent-norms){reference-type="eqref" reference="eq:equivalent-norms"}, whereas (ii) follows from the $\gamma$-metric subregularity of $\mathsf{T}_k$. We also conveniently rewrite the operator $\widetilde{\mathsf{T}}^e_k$ in eq. [\[eq:gen-stoch\]](#eq:gen-stoch){reference-type="eqref" reference="eq:gen-stoch"} by $$z_{\ell}(k)=\widetilde{\mathsf{T}}^e_{\ell,k} ( z(k-1)) =\widetilde{\mathsf{T}}_{\ell,k}( z(k-1)) + \beta_{\ell,k} e_{\ell,k},$$ where $$\widetilde{\mathsf{T}}_{\ell,k}( z(k-1)) = z_{\ell}(k-1) + \beta_{\ell,k} (\mathsf{T}_{\ell,k} ( z(k-1)) - z_{\ell}(k-1)).$$ Letting $\boldsymbol{e}\in\mathbb{R}^m$ be the vector stacking all the errors and ${ z^*_k \in \mathop{\mathrm{fix}}(\mathsf{T}_k)}$, then by eq.
[\[eq:metric-sub-mnorm\]](#eq:metric-sub-mnorm){reference-type="eqref" reference="eq:metric-sub-mnorm"} and the triangle inequality we can write $$\begin{aligned} d'_{\mathsf{T}_k}( z(k)) & \leq {\left\vert\kern-0.3ex\left\vert\kern-0.3ex\left\vert z(k) - z^*_k\right\vert\kern-0.3ex\right\vert\kern-0.3ex\right\vert} \\ &= {\left\vert\kern-0.3ex\left\vert\kern-0.3ex\left\vert\widetilde{\mathsf{T}}^e_k( z(k-1)) - z^*_k\right\vert\kern-0.3ex\right\vert\kern-0.3ex\right\vert}\\ &\leq {\left\vert\kern-0.3ex\left\vert\kern-0.3ex\left\vert\widetilde{\mathsf{T}}_k( z(k-1)) - z^*_k\right\vert\kern-0.3ex\right\vert\kern-0.3ex\right\vert} + {\left\vert\kern-0.3ex\left\vert\kern-0.3ex\left\vert e_k\right\vert\kern-0.3ex\right\vert\kern-0.3ex\right\vert}\end{aligned}$$ and thus $$\mathbb{E}\left[d'_{\mathsf{T}_k}( z(k))\right] \leq \mathbb{E}\left[{\left\vert\kern-0.3ex\left\vert\kern-0.3ex\left\vert\widetilde{\mathsf{T}}_k( z(k-1)) - z^*_k\right\vert\kern-0.3ex\right\vert\kern-0.3ex\right\vert}\right] +\mathbb{E}\left[{\left\vert\kern-0.3ex\left\vert\kern-0.3ex\left\vert e_k\right\vert\kern-0.3ex\right\vert\kern-0.3ex\right\vert}\right].$$ We are interested in finding an upper bound for the first term on the right-hand side of the above inequality, whose explicit form is given by $$\begin{aligned} &{\left\vert\kern-0.3ex\left\vert\kern-0.3ex\left\vert\widetilde{\mathsf{T}}_k( z(k-1)) - z^*_k\right\vert\kern-0.3ex\right\vert\kern-0.3ex\right\vert}^2 = \\ &= \sum_{\ell=1}^m \frac{1}{p_{\ell}} \Big[(1-\beta_{\ell,k}) (z_{\ell}(k-1) - z_{\ell,k}^*) + \beta_{\ell,k} (\mathsf{T}_{\ell,k}( z(k-1)) - z_{\ell,k}^*) \Big]^2 \\ &= \sum_{\ell=1}^m\Big[\frac{1-\beta_{\ell,k}}{p_{\ell}} (z_{\ell}(k-1) - z_{\ell,k}^*)^2 + \frac{\beta_{\ell,k}}{p_{\ell}} (\mathsf{T}_{\ell,k}( z(k-1)) - z_{\ell,k}^*)^2 \Big]\end{aligned}$$ where, since $\beta_{\ell,k} \in \{ 0, 1 \}$, we have used the following: ${\beta_{\ell,k}^2 = \beta_{\ell,k}}$; ${(1-\beta_{\ell,k})^2 = (1-\beta_{\ell,k})}$; ${(1-\beta_{\ell,k})
\beta_{\ell,k} = 0}$. An upper bound for the conditional expectation with respect to the realizations of all random variables up to time $k-1$ is given by $$\begin{aligned} &\mathbb{E}_{k-1}\left[{\left\vert\kern-0.3ex\left\vert\kern-0.3ex\left\vert\widetilde{\mathsf{T}}_k( z(k-1)) - z^*_k\right\vert\kern-0.3ex\right\vert\kern-0.3ex\right\vert}^2\right] = \\ &= \sum_{\ell=1}^m \left[\frac{1-p_\ell}{p_\ell} (z_\ell(k-1) - z_{\ell,k}^*)^2 + (\mathsf{T}_{\ell,k}( z(k-1)) - z_{\ell,k}^*)^2 \right] \\ &= {\left\vert\kern-0.3ex\left\vert\kern-0.3ex\left\vert z(k-1) - z^*_k\right\vert\kern-0.3ex\right\vert\kern-0.3ex\right\vert}^2 - {\left\vert\kern-0.25ex\left\vert z(k-1) - z^*_k\right\vert\kern-0.25ex\right\vert}^2 + {\left\vert\kern-0.25ex\left\vert \mathsf{T}_k( z(k-1)) - z^*_k\right\vert\kern-0.25ex\right\vert}^2 \\ &\overset{(i)}{\leq} {\left\vert\kern-0.3ex\left\vert\kern-0.3ex\left\vert z(k-1) - z^*_k\right\vert\kern-0.3ex\right\vert\kern-0.3ex\right\vert}^2 - \frac{1-\alpha}{\alpha} {\left\vert\kern-0.25ex\left\vert (\mathsf{Id}- \mathsf{T}_k) z(k-1)\right\vert\kern-0.25ex\right\vert}^2 \\ &\overset{(ii)}{\leq} d'_{\mathsf{T}_k}( z(k-1))^2 -\frac{1-\alpha}{\alpha} {\left\vert\kern-0.25ex\left\vert (\mathsf{Id}- \mathsf{T}_k) z(k-1)\right\vert\kern-0.25ex\right\vert}^2 \\ &\overset{(iii)}{\leq} d'_{\mathsf{T}_k}( z(k-1))^2 - \frac{1-\alpha}{\alpha\gamma^2} \underline{p}d'_{\mathsf{T}_k}( z(k-1))^2 \\ &= \left(1 - \frac{(1-\alpha)\underline{p}}{\alpha\gamma^2} \right)d'_{\mathsf{T}_k}( z(k-1))^2 =: \mu^2 d'_{\mathsf{T}_k}( z(k-1))^2 \end{aligned}$$ where (i) holds by $\alpha$-averagedness, (ii) follows by selecting $z^*_k=\mathop{\mathrm{\arg\!\inf}}_{ y \in \mathop{\mathrm{fix}}(\mathsf{T}_k)} {\left\vert\kern-0.3ex\left\vert\kern-0.3ex\left\vert z(k-1) - y\right\vert\kern-0.3ex\right\vert\kern-0.3ex\right\vert}$, and (iii) is a consequence of the metric subregularity highlighted in eq.
[\[eq:metric-sub-mnorm\]](#eq:metric-sub-mnorm){reference-type="eqref" reference="eq:metric-sub-mnorm"}, and where $\mu \in (0, 1)$ provided that $\gamma^2$ is sufficiently large, which we can always assume by overestimating the metric subregularity constant of $\mathsf{T}_k$; in particular, we replace $\gamma^2$ with $\lambda > \max\{\gamma^2, (1-\alpha)\underline{p}/\alpha\}$ as in eq. [\[eq:mu\]](#eq:mu){reference-type="eqref" reference="eq:mu"}. Now, exploiting (i) the identity $a = \sqrt{a^2}$ for $a \geq 0$, (ii) Jensen's inequality together with the concavity of the square root, and (iii) the law of total expectation, we have $$\label{eq:square-expected-value} \mathbb{E}\Big[{\left\vert\kern-0.3ex\left\vert\kern-0.3ex\left\vert\cdot\right\vert\kern-0.3ex\right\vert\kern-0.3ex\right\vert}\Big] \overset{(i)}{=} \mathbb{E}\left[\sqrt{{\left\vert\kern-0.3ex\left\vert\kern-0.3ex\left\vert\cdot\right\vert\kern-0.3ex\right\vert\kern-0.3ex\right\vert}^2}\right] \overset{(ii)}{\leq} \sqrt{\mathbb{E}\left[{\left\vert\kern-0.3ex\left\vert\kern-0.3ex\left\vert\cdot\right\vert\kern-0.3ex\right\vert\kern-0.3ex\right\vert}^2\right]} \overset{(iii)}{=} \sqrt{\mathbb{E}\left[\mathbb{E}_{k-1}\left[{\left\vert\kern-0.3ex\left\vert\kern-0.3ex\left\vert\cdot\right\vert\kern-0.3ex\right\vert\kern-0.3ex\right\vert}^2\right]\right]}$$ which implies $$\mathbb{E}\left[{\left\vert\kern-0.3ex\left\vert\kern-0.3ex\left\vert\widetilde{\mathsf{T}}_k( z(k-1))- z^*_k\right\vert\kern-0.3ex\right\vert\kern-0.3ex\right\vert}\right]\leq \mu \mathbb{E}\left[d'_{\mathsf{T}_k}( z(k-1))\right].$$ Therefore we can write $$\begin{aligned} &\mathbb{E}\left[d'_{\mathsf{T}_k}( z(k))\right] \leq \mu \mathbb{E}\left[d'_{\mathsf{T}_k}( z(k-1))\right] + \mathbb{E}\left[{\left\vert\kern-0.3ex\left\vert\kern-0.3ex\left\vert e_k\right\vert\kern-0.3ex\right\vert\kern-0.3ex\right\vert}\right] \\ &\qquad \overset{(i)}{\leq} \mu \mathbb{E}\left[d'_{\mathsf{T}_{k-1}}( z(k-1))\right] + \frac{1}{\sqrt{\underline{p}}} \left( \mu \sigma +
\mathbb{E}\left[{\left\vert\kern-0.25ex\left\vert e_k\right\vert\kern-0.25ex\right\vert}\right] \right)\end{aligned}$$ where (i) holds by [\[eq:equivalent-norms\]](#eq:equivalent-norms){reference-type="eqref" reference="eq:equivalent-norms"} and assumption *i*). Iterating we get $$\begin{split} \mathbb{E}\left[d'_{\mathsf{T}_k}( z(k))\right] &\leq \mu^k \mathbb{E}\left[d'_{\mathsf{T}_0}( z(0))\right] + \\ &+ \frac{1}{\sqrt{\underline{p}}} \sum_{h = 1}^{k} \mu^{k - h} \left( \mu \sigma + \mathbb{E}\left[{\left\vert\kern-0.25ex\left\vert e_h\right\vert\kern-0.25ex\right\vert}\right] \right) \end{split}$$ and using [\[eq:equivalent-norms\]](#eq:equivalent-norms){reference-type="eqref" reference="eq:equivalent-norms"} again yields the thesis. *\[Asymptotic upper bound\] -* We use the same proof technique as in [@bastianello_stochastic_2022 Corollary 5.3]. Let us define $$y(k) = \max\left\{ 0, d_{\mathsf{T}_k}( z(k)) - \sqrt{\frac{\overline{p}}{\underline{p}}} \sum_{h = 1}^{k} \mu^{k - h} ({\left\vert\kern-0.25ex\left\vert e_h\right\vert\kern-0.25ex\right\vert}+\mu\sigma) \right\}$$ for which, by the previous result on the expected distance and Markov's inequality, we have that for any $\varepsilon > 0$ $$\mathbb{P}[y(k) \geq \varepsilon] \leq \frac{\mathbb{E}\left[y(k)\right]}{\varepsilon} \leq \frac{1}{\varepsilon} \sqrt{\frac{\overline{p}}{\underline{p}}} \mu^k d_{\mathsf{T}_0}( z(0)),$$ and, summing over $k$ and using the geometric series, $$\sum_{k = 0}^\infty \mathbb{P}[y(k) \geq \varepsilon] \leq \frac{1}{\varepsilon} \sqrt{\frac{\overline{p}}{\underline{p}}} \frac{d_{\mathsf{T}_0}( z(0))}{1 - \mu} < \infty.$$ But by the Borel-Cantelli lemma this means that $$\limsup_{k \to \infty} y(k) \leq \varepsilon$$ almost surely; and since the inequality holds for any ${\varepsilon > 0}$, the thesis follows. ◻ **Remark 2**.
*The presence of random coordinate updates leads to a worse convergence rate $\mu$ compared to the convergence rate $\mu_s$ attained when all coordinates update at each iteration; indeed, $$\mu_s = \sqrt{1 - \frac{(1-\alpha)}{\alpha\lambda}} \leq \sqrt{1 - \frac{(1-\alpha)\underline{p}}{\alpha\lambda}} = \mu,$$ with $\alpha\in(0,1)$ and $\lambda$ as in eq. [\[eq:mu\]](#eq:mu){reference-type="eqref" reference="eq:mu"}. This is in line with the results proved in [@combettes_stochastic_2015], and makes intuitive sense: the fewer updates are performed, the slower the convergence.*

We also provide a result to cover the case in which metric subregularity does not hold in the entire state space, but only in a subset of it; this property is called local metric subregularity.

**Theorem 4**. *In the scenario of Theorem [Theorem 3](#thm:glob-msr){reference-type="ref" reference="thm:glob-msr"}, if the following hold:*

- *$\mathsf{T}_k=\mathsf{T}$ is not time-varying, i.e., $\sigma = 0$;*

- *${\mathsf{T}}$ is $\alpha$-averaged;*

- *$\mathsf{T}$ is metric subregular in a set $\mathcal{X}\subset \mathbb{R}^m$;*

- *${\left\vert\kern-0.25ex\left\vert e_k\right\vert\kern-0.25ex\right\vert} \rightarrow 0$ as $k\rightarrow\infty$;*

*then it almost surely holds $\limsup_{k \to \infty} d_{\mathsf{T}}( z(k)) = 0.$ Moreover, there is a finite time $k^*$ such that for $k\geq k^*$ linear convergence in mean is achieved with rate $\mu$ in eq. [\[eq:mu\]](#eq:mu){reference-type="eqref" reference="eq:mu"}.*

*Proof.* The first claim is a straightforward consequence of Theorem [Theorem 3](#thm:glob-msr){reference-type="ref" reference="thm:glob-msr"} and Lemma 3.1(a) in [@sundharram_distributed_2010]. For the second claim, notice that the map $\mathsf{T}$ is metric subregular at fixed points; indeed, if $z^*\in\mathop{\mathrm{fix}}(\mathsf{T})$ then $d_{\mathsf{T}}( z^*)=0$ and ${\left\vert\kern-0.25ex\left\vert (\mathsf{Id}-\mathsf{T}) z^*\right\vert\kern-0.25ex\right\vert} = 0$.
This means that $\mathop{\mathrm{fix}}(\mathsf{T}) \subset \mathcal{X}$ and, in turn, that $\mathcal{X}$ is a neighborhood of $\mathop{\mathrm{fix}}(\mathsf{T})$, i.e., $$\exists r>0:\quad \mathcal{X}\supset \{ z \in \mathbb{R}^m \ | \ d_{\mathsf{T}}( z) \leq r \}.$$ But by the first claim, we know that $z(k)$ converges almost surely to $\mathop{\mathrm{fix}}(\mathsf{T})$ and, therefore, there exists a finite time $k^*\in\mathbb{N}$ after which $z(k)$ evolves inside the neighborhood $\mathcal{X}$ in which local metric subregularity holds. We can now apply Theorem [Theorem 3](#thm:glob-msr){reference-type="ref" reference="thm:glob-msr"} to prove linear convergence in mean for $k\geq k^*$, completing the proof. ◻

## Proof of Theorem [Theorem 1](#thm:stochastic+noise+varying){reference-type="ref" reference="thm:stochastic+noise+varying"} {#sec:proofmainres}

We conveniently rewrite the operators of the DOT-ADMM updates in eq. [\[eq:admm-complete\]](#eq:admm-complete){reference-type="eqref" reference="eq:admm-complete"} in compact form as follows[^8] $$\begin{aligned} x(k) &= \mathsf{F}_k( z(k-1)) = \mathop{\mathrm{prox}}_{f_{k}}^{1/\rho \eta}(D A^\top z(k-1))\\ z(k) &= \mathsf{T}_k( z(k-1)) = \left[ (1 - \alpha) I - \alpha P \right] z(k-1) + 2 \alpha \rho P A x(k) \end{aligned}$$ where the operator $\mathop{\mathrm{prox}}_{f_{k}}^{1/\rho \eta} : \mathbb{R}^{np} \to \mathbb{R}^{np}$ applies block-wise the proximal of the time-varying local costs $f_{i,k}$; the matrix $A\in\mathbb{R}^{\xi p\times np}$ is given by $A=\Lambda\otimes I_p$ with ${\Lambda\in\{0,1\}^{\xi\times n}}$ given by ${\Lambda = \operatorname{blk\,diag}\{ \mathbbold{1}_{\eta_i}\}_{i = 1}^n}$; the matrix $D\in\mathbb{R}^{np\times np}$ is given by $D = \operatorname{blk\,diag}\{ (\rho\eta_i)^{-1} I_p \}_{i = 1}^n$; the matrix ${P\in \{0,1\}^{\xi p \times \xi p} }$ is given by $P=\Pi\otimes I_p$ with $\Pi\in\{0,1\}^{\xi\times \xi}$ being a permutation matrix swapping
$(i,j)\in\mathcal{E}$ with $(j,i)\in\mathcal{E}$. By [@Bastianello21 Proposition 3], for each fixed point $z^*_k \in \mathop{\mathrm{fix}}(\mathsf{T}_k)$ there is $x_k^*=\mathsf{F}_k( z_k^*)$ which is a solution to the problem in eq. [\[eq:online-distributed-optimization\]](#eq:online-distributed-optimization){reference-type="eqref" reference="eq:online-distributed-optimization"}. Thus, letting $\mathcal{X}_k^*$ be the time-varying set of solutions, we can write $$\begin{aligned} d_{\mathcal{X}^*_k}( x(k)) &= \inf_{ y \in \mathcal{X}^*_k} {\left\vert\kern-0.25ex\left\vert x(k) - y\right\vert\kern-0.25ex\right\vert} \overset{(i)}{\leq} {\left\vert\kern-0.25ex\left\vert x(k) - x^*_k\right\vert\kern-0.25ex\right\vert} \\ &= {\left\vert\kern-0.25ex\left\vert \mathsf{F}_k( z(k-1)) - \mathsf{F}_k( z_k^*)\right\vert\kern-0.25ex\right\vert} \\ &= {\left\vert\kern-0.25ex\left\vert \mathop{\mathrm{prox}}_{f_{k}}^{1/\rho \eta}(D A^\top z(k-1)) - \mathop{\mathrm{prox}}_{f_{k}}^{1/\rho \eta}(D A^\top z^*_k)\right\vert\kern-0.25ex\right\vert} \\ &\overset{(ii)}{\leq} {\left\vert\kern-0.25ex\left\vert D A^\top ( z(k-1) - z^*_k)\right\vert\kern-0.25ex\right\vert} \leq {\left\vert\kern-0.25ex\left\vert D A^\top\right\vert\kern-0.25ex\right\vert} {\left\vert\kern-0.25ex\left\vert z(k-1) - z^*_k\right\vert\kern-0.25ex\right\vert} \\ &\overset{(iii)}{=} {\left\vert\kern-0.25ex\left\vert DA^\top\right\vert\kern-0.25ex\right\vert} d_{\mathsf{T}_k}( z(k-1))\end{aligned}$$ where $(i)$ holds since $x_k^*\in\mathcal{X}_k^*$, $(ii)$ follows by the non-expansiveness of the proximal, and $(iii)$ holds by choosing $$z^*_k =\mathop{\mathrm{\arg\!\inf}}_{ y \in \mathop{\mathrm{fix}}(\mathsf{T}_k)} {\left\vert\kern-0.3ex\left\vert\kern-0.3ex\left\vert z(k-1) - y\right\vert\kern-0.3ex\right\vert\kern-0.3ex\right\vert}.$$ This means that the linear convergence of $x(k)$ to a neighborhood of $\mathcal{X}^*_k$ is implied by that of $z(k)$ to a neighborhood of
$\mathop{\mathrm{fix}}(\mathsf{T}_k)$, which can be proved by means of Theorem [Theorem 3](#thm:glob-msr){reference-type="ref" reference="thm:glob-msr"}. Indeed, by Assumptions [Assumption 3](#as:random-updates){reference-type="ref" reference="as:random-updates"}-[Assumption 4](#as:additive-error){reference-type="ref" reference="as:additive-error"} the update of $z(k)$ can be described by a stochastically perturbed operator as in eq. [\[eq:gen-stoch\]](#eq:gen-stoch){reference-type="eqref" reference="eq:gen-stoch"}, which is the object of Theorem [Theorem 3](#thm:glob-msr){reference-type="ref" reference="thm:glob-msr"}. We thus prove both statements of the theorem by checking all the conditions under which Theorem [Theorem 3](#thm:glob-msr){reference-type="ref" reference="thm:glob-msr"} holds: - By Assumption [Assumption 2](#as:time-variability){reference-type="ref" reference="as:time-variability"} it follows that $\sigma>0$ is such that $d_{\mathsf{T}_{k}}( z^*) \leq \sigma$ for ${ z^* \in \mathop{\mathrm{fix}}(\mathsf{T}_{k-1})}$; - By Assumption [Assumption 1](#as:convexity){reference-type="ref" reference="as:convexity"} it follows that $\mathsf{T}_k$ is $\alpha$-averaged for all $k \in \mathbb{N}$. Indeed, the operator $\mathsf{T}_k$ comes from the application of the Peaceman-Rachford operator to the dual problem of [\[eq:online-distributed-optimization\]](#eq:online-distributed-optimization){reference-type="eqref" reference="eq:online-distributed-optimization"}, which guarantees its $\alpha$-averagedness when the cost functions $f_{i,k}$ are convex (cf. [@Bastianello21]); - $\mathsf{T}_k$ is $\gamma$-metric subregular by assumption. We remark that the neighborhood $\widetilde{\mathcal{X}}^*_k$ is bounded, thanks to Assumption [Assumption 4](#as:additive-error){reference-type="ref" reference="as:additive-error"} which implies ${\mathbb{E}\left[{\left\vert\kern-0.25ex\left\vert e_{k}\right\vert\kern-0.25ex\right\vert}\right]\leq \sqrt{n}\nu_e}$.
$\square$ ## Proof of Theorem [Theorem 2](#thm:local){reference-type="ref" reference="thm:local"} {#sec:proofsecres} The result is a consequence of Theorem [Theorem 4](#thm:loc-msr-static-converr){reference-type="ref" reference="thm:loc-msr-static-converr"}. Indeed, following the argument of [@Bastianello21], one can approximate the local cost functions as $$\begin{aligned} f_i(x) &= \frac{1}{2} (x - x^*)^\top \nabla^2 f_i(x^*) (x - x^*) \\ &+ \langle \nabla f_i(x^*), x - x^* \rangle + o(x - x^*)\end{aligned}$$ where $x^* = \mathop{\mathrm{\arg\!\min}}_x \sum_{i = 1}^n f_i(x)$ and ${\left\vert\kern-0.25ex\left\vert o(x - x^*)\right\vert\kern-0.25ex\right\vert} / {\left\vert\kern-0.25ex\left\vert x - x^*\right\vert\kern-0.25ex\right\vert} \to 0$ as $x \to x^*$. Therefore, one can interpret the DOT-ADMM Algorithm as being characterized by an affine operator with an additive error that depends on the higher order terms $o(x - x^*)$. But since affine operators are metric subregular [@robinson1981some; @Themelis19], the DOT-ADMM Algorithm is metric subregular around the optimal solution $x^*$. Finally, since the additive error vanishes around $x^*$, we can apply Theorem [Theorem 4](#thm:loc-msr-static-converr){reference-type="ref" reference="thm:loc-msr-static-converr"} and prove that linear convergence can be achieved locally. $\square$ # Application to online distributed learning {#sec:distributed-application} In the online distributed learning problem, each agent $i\in\mathcal{V}$ holds a local and private time-varying data set $\{a_{i,h,k},b_{i,h,k}\}_{h=1}^{m_i}$ of dimension $m_i\in\mathbb{N}$ and aims at estimating the parameters $x\in\mathbb{R}^p$ of a common regression model $g\in \Gamma_0^p$ such that the sum of local partial costs $g(x,a_{i,h,k},b_{i,h,k})$ is minimized. In other words, this problem can be formulated as in eq.
[\[eq:online-distributed-optimization\]](#eq:online-distributed-optimization){reference-type="eqref" reference="eq:online-distributed-optimization"}, where the local costs are given by $$f_{i,k}(x) = \sum_{h = 1}^{m_i} g(x,a_{i,h,k},b_{i,h,k}).$$ In the following subsections, we are going to show that the important classes of linear and logistic regression problems, relevant within the context of distributed learning, can be solved by means of the DOT-ADMM Algorithm with solutions characterized by Theorem [Theorem 1](#thm:stochastic+noise+varying){reference-type="ref" reference="thm:stochastic+noise+varying"}. This amounts to showing that the updates of the auxiliary variables are ruled by an operator $\mathsf{T}_k$ which is metric subregular. For the sake of clarity, we recall that $\mathsf{T}_{k}$ can be written in compact form as $$\label{eq:compact-z} \mathsf{T}_k( z) = \left[ (1 - \alpha) I - \alpha P \right] z + 2 \alpha \rho P A \mathop{\mathrm{prox}}_{f_{k}}^{1/\rho \eta}(D A^\top z)$$ where the operator $\mathop{\mathrm{prox}}_{f_{k}}^{1/\rho \eta} : \mathbb{R}^{np} \to \mathbb{R}^{np}$ applies block-wise the proximal of the time-varying local costs $f_{i,k}$, while other parameters are defined in Section [4.2](#sec:proofmainres){reference-type="ref" reference="sec:proofmainres"}.
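To make the compact operator form concrete, the following minimal sketch instantiates $A$, $D$, $P$, and the map $\mathsf{T}$ for a toy two-node network with a single (bidirectional) edge, scalar states, and quadratic local costs. All choices here (graph, costs, parameters, the penalty convention inside the proximal) are illustrative assumptions, not taken from the paper's experiments or the `tvopt` package:

```python
import numpy as np

# Toy instance: n = 2 nodes joined by one edge, i.e. xi = 2 directed
# edges {(1,2), (2,1)}; scalar states (p = 1); each node has degree 1.
n, p, xi = 2, 1, 2
eta = np.array([1.0, 1.0])      # node degrees eta_i
rho, alpha = 1.0, 0.5

# Lambda = blk diag of ones-vectors 1_{eta_i}; here each block is 1x1.
Lam = np.array([[1.0, 0.0],
                [0.0, 1.0]])
A = np.kron(Lam, np.eye(p))     # A = Lambda ⊗ I_p
D = np.diag(1.0 / (rho * eta))  # blk diag {(rho*eta_i)^{-1} I_p}
Pi = np.array([[0.0, 1.0],
               [1.0, 0.0]])     # permutation swapping edge (1,2) with (2,1)
P = np.kron(Pi, np.eye(p))

# Assumed quadratic local costs f_i(x) = (x - c_i)^2 / 2, whose proximal
# with penalty (1/(2*rho*eta_i))(x - w)^2 is (rho*eta_i*w + c_i)/(rho*eta_i + 1).
c = np.array([1.0, -1.0])

def prox(w):
    return (rho * eta * w + c) / (rho * eta + 1.0)

def T(z):
    x = prox(D @ A.T @ z)       # F_k(z): block-wise proximal step
    return ((1 - alpha) * np.eye(xi) - alpha * P) @ z + 2 * alpha * rho * P @ A @ x

# Iterating the averaged map T drives z toward a fixed point.
z = np.zeros(xi)
for _ in range(200):
    z = T(z)
residual = np.linalg.norm(z - T(z))
```

For this toy instance $\mathsf{T}$ is affine and contracting, and the $x$ associated with its fixed point is the consensus minimizer of $\sum_i f_i$ (here $x^* = 0$), consistent with the fixed-point characterization recalled above.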
## Linear regression In linear regression problems, the data sets are such that $a_{i,h,k}\in\mathbb{R}^p$ and $b_{i,h,k}\in\mathbb{R}$, while the model is given by $$g(x,a_{i,h,k},b_{i,h,k}) = (a_{i,h,k}^\top x - b_{i,h,k})^2.$$ Denoting by $A_{i,k} = [a_{i,1,k},\cdots,a_{i,m_i,k}]^\top \in\mathbb{R}^{m_i\times p}$ and $b_{i,k}=[b_{i,1,k},\cdots,b_{i,m_i,k}]^\top\in\mathbb{R}^{m_i}$, the local time-varying cost for each agent $i\in\mathcal{V}$ becomes $$\label{eq:linear-regression} f_{i,k}(x) = \sum_{h = 1}^{m_i} (a_{i,h,k}^\top x - b_{i,h,k})^2 = {\left\vert\kern-0.25ex\left\vert A_{i,k}x-b_{i,k}\right\vert\kern-0.25ex\right\vert}^2.$$ We now show that the online linear regression problem satisfies the conditions of Theorem [Theorem 1](#thm:stochastic+noise+varying){reference-type="ref" reference="thm:stochastic+noise+varying"}. **Proposition 1**. *Let $\mathsf{T}_k$ be the operator characterizing the DOT-ADMM Algorithm applied to an online linear regression problem, that is eq. [\[eq:online-distributed-optimization\]](#eq:online-distributed-optimization){reference-type="eqref" reference="eq:online-distributed-optimization"} with the local costs in eq. [\[eq:linear-regression\]](#eq:linear-regression){reference-type="eqref" reference="eq:linear-regression"}. Then, $\mathsf{T}_k$ is metric subregular as in Definition [Definition 1](#def:metsubreg){reference-type="ref" reference="def:metsubreg"}, and the results of Theorem [Theorem 1](#thm:stochastic+noise+varying){reference-type="ref" reference="thm:stochastic+noise+varying"} hold. The result holds also when a regularization term is introduced in the cost function.* *Proof.* The operator $\mathsf{T}_k$ has the form in eq. [\[eq:compact-z\]](#eq:compact-z){reference-type="eqref" reference="eq:compact-z"}. Thus, it requires that the agents compute the proximal of the local costs as in eq.
[\[eq:linear-regression\]](#eq:linear-regression){reference-type="eqref" reference="eq:linear-regression"}, which in this particular case has the following closed-form expression: $$\label{eq:proximal-quadratic} \mathop{\mathrm{prox}}_{f_{i,k}}^{1/\rho\eta_i}(w) = (A_{i,k}^\top A_{i,k}+\rho\eta_i I)^{-1} \left(\rho\eta_i w + A_{i,k}^\top b_{i,k}\right).$$ By noticing that the proximals are affine functions of their argument $w$, it follows that also the operator $\mathsf{T}_k$ is affine. Consequently, $\mathsf{T}_k$ is metric subregular, since any affine operator is metric subregular [@robinson1981some; @Themelis19]. ◻ **Remark 3**. *Similar reasoning can be made for linear regression problems with linear constraints $C_{i,k}x = d_{i,k}$ with $C_{i,k} \in\mathbb{R}^{m_i\times p}$ and $d_{i,k}\in\mathbb{R}^{m_i}$ or halfspace constraints $p_{i,k}^\top x\leq q_{i,k}$ with $p_{i,k} \in\mathbb{R}^{p}$ and $q_{i,k}\in\mathbb{R}$.* ## Logistic regression In logistic regression problems, the data sets are such that $a_{i,h,k}\in\mathbb{R}^p$ and $b_{i,h,k}\in\{-1,1\}$, while the model is given by $$g(x,a_{i,h,k},b_{i,h,k}) = \log\left( 1 + \exp\left( - b_{i,h,k} a_{i,h,k}^\top x \right) \right),$$ which yields the following local costs $$\label{eq:logistic-regression} f_{i,k}(x) = \sum_{h=1}^{m_i} \log\left( 1 + \exp\left( - b_{i,h,k} a_{i,h,k}^\top x \right) \right).$$ Differently from linear regression problems, the proximal of the local costs for the logistic regression problems does not have a closed-form solution and thus needs to be computed with an iterative algorithm. **Remark 4**. *An approximated proximal can be computed by applying gradient descent methods for a finite number of steps. Clearly, this will introduce an error in the DOT-ADMM Algorithm, which impacts the neighborhood of the optimal solution that is reached, according to Theorem [Theorem 3](#thm:glob-msr){reference-type="ref" reference="thm:glob-msr"}.
This is indeed one of the practical challenges that motivated our interest in the problem set-up of this paper.* Nevertheless, we now show that the online logistic regression problem satisfies the conditions of Theorem [Theorem 1](#thm:stochastic+noise+varying){reference-type="ref" reference="thm:stochastic+noise+varying"}. **Proposition 2**. *Let $\mathsf{T}_k$ be the operator characterizing the DOT-ADMM Algorithm applied to an online logistic regression problem, that is eq. [\[eq:online-distributed-optimization\]](#eq:online-distributed-optimization){reference-type="eqref" reference="eq:online-distributed-optimization"} with the local costs in eq. [\[eq:logistic-regression\]](#eq:logistic-regression){reference-type="eqref" reference="eq:logistic-regression"}. Then, $\mathsf{T}_k$ is metric subregular as in Definition [Definition 1](#def:metsubreg){reference-type="ref" reference="def:metsubreg"}, and the results of Theorem [Theorem 1](#thm:stochastic+noise+varying){reference-type="ref" reference="thm:stochastic+noise+varying"} hold. The result holds also when a regularization term is introduced in the cost function.* *Proof.* For the sake of clarity, we first consider the simplest case in which the logistic regression models are static with scalar parameters $x\in\mathbb{R}$, and the data sets consist of only one entry $m_i=1$. We then complete the proof by generalizing this result for the general case of $x\in\mathbb{R}^p$ and $m_i\geq 1$, and to the case when a regularization term is introduced. 
*Base case --* The local cost functions become $$f_{i}(x) = \log(1+\exp(-b_ia_ix)).$$ The proximal associated with these cost functions is $$\mathop{\mathrm{prox}}^{1/\rho\eta_i}_{f_i}(w)=\mathop{\mathrm{\arg\!\min}}_{ x} \underbrace{\left\{\log(1+\exp(-b_ia_ix))+\frac{1}{2\rho\eta_i}(x- w)^2\right\}}_{h(x)}.$$ The proximal is the (unique) stationary point $x^*$ of $h(x)$, which is the solution of $$\frac{\partial}{\partial x}h(x^*) = -b_ia_i\frac{\exp(-b_ia_ix^*)}{1+\exp(-b_ia_ix^*)} + \frac{1}{\rho\eta_i}(x^*-w) = 0.$$ Even though such a solution $x^*$ does not have a closed form, we can find two affine functions bounding it from above and from below. Indeed, it holds $$\frac{\exp(-b_ia_ix^*)}{1+\exp(-b_ia_ix^*)} \in [0,1],$$ and therefore $$\label{eq:def_lu} l_i(w):=w - \rho\eta_i{\left\vert b_ia_i \right\vert} \leq x^* \leq w+\rho\eta_i{\left\vert b_ia_i \right\vert}:=u_i(w).$$ In other words, it holds that ${l_i(\cdot)\leq \mathop{\mathrm{prox}}^{1/\rho\eta_i}_{f_i}(\cdot)\leq u_i(\cdot)}$. Let $u(\cdot)=[u_1(\cdot),\cdots, u_n(\cdot)]^\top$ and $l(\cdot)=[l_1(\cdot),\cdots, l_n(\cdot)]^\top$, then $\mathsf{T}$ as in eq. [\[eq:compact-z\]](#eq:compact-z){reference-type="eqref" reference="eq:compact-z"} can be upper and lower bounded by two affine operators $\mathsf{L}(\cdot)$, $\mathsf{U}(\cdot)$ as follows $$\begin{aligned} \mathsf{T}( z) \leq \mathsf{U}(z) &:= \left[ (1 - \alpha) I - \alpha P \right] z + 2 \alpha \rho P A u(D A^\top z),\\ \mathsf{T}( z) \geq \mathsf{L}(z) &:= \left[ (1 - \alpha) I - \alpha P \right] z + 2 \alpha \rho P A l(D A^\top z), \end{aligned}$$ because $\alpha,\rho>0$ and $A,P\geq 0$.
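The affine bounds $l_i$, $u_i$ derived above are easy to check numerically. The sketch below solves the scalar prox subproblem by plain gradient descent, independently of the bounds, and compares; the values of $a_i$, $b_i$, $\rho$, $\eta_i$ are arbitrary illustrative choices:

```python
import numpy as np

# Illustrative (assumed) problem data for one scalar logistic proximal.
a_i, b_i, rho, eta_i = 2.0, 1.0, 0.7, 3.0

def prox_logistic(w, iters=500):
    # Minimize h(x) = log(1 + exp(-b*a*x)) + (x - w)^2 / (2*rho*eta_i)
    # by gradient descent with step 1/L, where L upper-bounds h''.
    L = (b_i * a_i) ** 2 / 4 + 1 / (rho * eta_i)
    x = w
    for _ in range(iters):
        s = np.exp(-b_i * a_i * x)
        grad = -b_i * a_i * s / (1 + s) + (x - w) / (rho * eta_i)
        x -= grad / L
    return x

def l_bound(w):  # l_i(w) = w - rho*eta_i*|b_i a_i|
    return w - rho * eta_i * abs(b_i * a_i)

def u_bound(w):  # u_i(w) = w + rho*eta_i*|b_i a_i|
    return w + rho * eta_i * abs(b_i * a_i)
```

For any $w$ the computed proximal falls between $l_i(w)$ and $u_i(w)$, since the sigmoid factor in the stationarity condition lies in $(0,1)$.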
Note that each component $\mathsf{U}_i$, $\mathsf{L}_i$ of the functions $\mathsf{U}$, $\mathsf{L}$ is given by the same linear combination of the entries of $z$ plus a different constant term, i.e., there are $c_i$, $q^{\mathsf{L}}_i$, and $q^{\mathsf{U}}_i$ with $i\in\mathcal{V}$ such that $$\mathsf{L}_i(z) = c_i z_i + \sum_{j\neq i} c_j z_j + q^{\mathsf{L}}_i,\: \text{ and }\: \mathsf{U}_i(z) = c_i z_i + \sum_{j\neq i} c_j z_j + q^{\mathsf{U}}_i.$$ Given $x\in\mathbb{R}^n$, let $\hat{x}^{\mathsf{T}}\in \mathop{\mathrm{fix}}(\mathsf{T})$ be the closest fixed point of $\mathsf{T}$ to $x$, such that $$\hat{x}^{\mathsf{T}} = \mathop{\mathrm{\arg\!\inf}}_{ y \in \mathop{\mathrm{fix}}(\mathsf{T})} {\left\vert\kern-0.25ex\left\vert x - y\right\vert\kern-0.25ex\right\vert}\quad \Rightarrow \quad d_{\mathsf{T}}( x) = {\left\vert\kern-0.25ex\left\vert x-\hat{x}^{\mathsf{T}}\right\vert\kern-0.25ex\right\vert}.$$ Since ${\mathsf{L}_i(z)\leq \mathsf{T}_i(z)\leq \mathsf{U}_i(z)}$, and since $\mathsf{L}_i(z)$, $\mathsf{U}_i(z)$ are linear functions of $z_i$ with the same slope $c_i$ but different intercept, by construction it holds $$\hat{x}^{\mathsf{L}}_i\leq \hat{x}^{\mathsf{T}}_i\leq \hat{x}^{\mathsf{U}}_i \quad \Rightarrow \quad \hat{x}^{\mathsf{L}}_i-x_i\leq \hat{x}^{\mathsf{T}}_i-x_i\leq \hat{x}^{\mathsf{U}}_i-x_i,$$ and therefore $$\label{eq:inter} {\left\vert x_i-\hat{x}^{\mathsf{T}}_i \right\vert}\leq \max\{{\left\vert x_i-\hat{x}^{\mathsf{U}}_i \right\vert},{\left\vert x_i-\hat{x}^{\mathsf{L}}_i \right\vert}\}.$$ Thus, the following chain of inequalities holds $$\begin{aligned} d_{\mathsf{T}}(x)^2 &= {\left\vert\kern-0.25ex\left\vert x-\hat{x}^{\mathsf{T}}\right\vert\kern-0.25ex\right\vert}^2 = \sum_{i=1}^n (x_i-\hat{x}^{\mathsf{T}}_i)^2 \\ &\overset{(a)}{\leq} \sum_{i=1}^n \max\{{\left\vert x_i-\hat{x}^{\mathsf{U}}_i \right\vert}^2,{\left\vert x_i-\hat{x}^{\mathsf{L}}_i \right\vert}^2\}\\ &\overset{(b)}{\leq} \sum_{i=1}^n \max\{ {\left\vert\kern-0.25ex\left\vert \boldsymbol{x}-
\hat{x}^{\mathsf{U}}\right\vert\kern-0.25ex\right\vert}_\infty^2, {\left\vert\kern-0.25ex\left\vert \boldsymbol{x}- \hat{x}^{\mathsf{L}}\right\vert\kern-0.25ex\right\vert}_\infty^2 \} \\ &\leq n \max\{ {\left\vert\kern-0.25ex\left\vert \boldsymbol{x}- \hat{x}^{\mathsf{U}}\right\vert\kern-0.25ex\right\vert}_\infty^2, {\left\vert\kern-0.25ex\left\vert \boldsymbol{x}- \hat{x}^{\mathsf{L}}\right\vert\kern-0.25ex\right\vert}_\infty^2 \} \\ &\overset{(c)}{\leq} n \max\{ {\left\vert\kern-0.25ex\left\vert \boldsymbol{x}- \hat{x}^{\mathsf{U}}\right\vert\kern-0.25ex\right\vert}^2, {\left\vert\kern-0.25ex\left\vert \boldsymbol{x}- \hat{x}^{\mathsf{L}}\right\vert\kern-0.25ex\right\vert}^2 \}\end{aligned}$$ where $(a)$ follows from eq. [\[eq:inter\]](#eq:inter){reference-type="eqref" reference="eq:inter"}, $(b)$ holds since the infinity norm is the maximum distance among each component, $(c)$ holds since ${\left\vert\kern-0.25ex\left\vert \boldsymbol{x}\right\vert\kern-0.25ex\right\vert}_\infty \leq {\left\vert\kern-0.25ex\left\vert \boldsymbol{x}\right\vert\kern-0.25ex\right\vert}$. 
Finally, the metric subregularity of the operators $\mathsf{U}$ and $\mathsf{L}$ implies that of $\mathsf{T}$, $$\begin{aligned} d_{\mathsf{T}}(x) &\leq \sqrt{n} \max\{ {\left\vert\kern-0.25ex\left\vert \boldsymbol{x}- \hat{\boldsymbol{x}}^{\mathsf{U}}\right\vert\kern-0.25ex\right\vert}, {\left\vert\kern-0.25ex\left\vert \boldsymbol{x}- \hat{\boldsymbol{x}}^{\mathsf{L}}\right\vert\kern-0.25ex\right\vert} \} \\ & \leq \sqrt{n} \max\{ \gamma_{\mathsf{L}} {\left\vert\kern-0.25ex\left\vert (\mathsf{Id}- \mathsf{L}) x\right\vert\kern-0.25ex\right\vert}, \gamma_{\mathsf{U}} {\left\vert\kern-0.25ex\left\vert (\mathsf{Id}- \mathsf{U}) x\right\vert\kern-0.25ex\right\vert} \}\\ & \leq \sqrt{n} \max\{\gamma_{\mathsf{L}} ,\gamma_{\mathsf{U}} \} \max\{{\left\vert\kern-0.25ex\left\vert (\mathsf{Id}- \mathsf{L}) x\right\vert\kern-0.25ex\right\vert},{\left\vert\kern-0.25ex\left\vert (\mathsf{Id}- \mathsf{U}) x\right\vert\kern-0.25ex\right\vert} \}\\ & \leq \gamma_{\mathsf{T}}{\left\vert\kern-0.25ex\left\vert (\mathsf{Id}- \mathsf{T}) x\right\vert\kern-0.25ex\right\vert}, \end{aligned}$$ for a sufficiently large value of $\gamma_{\mathsf{T}}$. This completes the proof for the base case. *General case --* For the general case in which $x\in\mathbb{R}^p$ and $m_i>1$, the bounding functions in eq. [\[eq:def_lu\]](#eq:def_lu){reference-type="eqref" reference="eq:def_lu"} become time-varying operators $l_{i,k}(\cdot),u_{i,k}(\cdot):\mathbb{R}^p\rightarrow\mathbb{R}^p$ given component-wise by $$\begin{aligned} l_{i,j,k}(w) &:= w-\rho\eta_i\sum_{h=1}^{m_i}{\left\vert b_{i,h,k}^ja_{i,h,k}^j \right\vert},\quad j=1,\ldots,p,\\ u_{i,j,k}(w) &:= w+\rho\eta_i\sum_{h=1}^{m_i}{\left\vert b_{i,h,k}^ja_{i,h,k}^j \right\vert},\quad j=1,\ldots,p, \end{aligned}$$ where $a_{i,h,k}=[a_{i,h,k}^1,\cdots,a_{i,h,k}^p]^\top$ and $b_{i,h,k}=[b_{i,h,k}^1,\cdots,b_{i,h,k}^p]^\top$.
One can notice that also in this case these operators are affine, and thus the proof for $\mathsf{T}_k$ follows the arguments discussed above. *Regularization --* If a regularization term is introduced, the local cost functions become $$\varphi_{i,k}(x) = f_{i,k}(x) + \frac{\epsilon}{2} {\left\vert\kern-0.25ex\left\vert x\right\vert\kern-0.25ex\right\vert}^2.$$ According to [@Parikh14 Section 2.2], it holds $$\mathop{\mathrm{prox}}^{1/\rho\eta_i}_{\varphi_{i,k}}(w) = \mathop{\mathrm{prox}}^{1/(\rho\eta_i+\epsilon)}_{f_{i,k}}\left(\frac{\rho\eta_i}{\rho\eta_i+\epsilon} w\right).$$ The proof follows by using steps similar to those of the base case. ◻ # Numerical results {#sec:numerical} In this section, we carry out numerical simulations corroborating the theoretical results of the previous sections; all simulations have been implemented in Python using the `tvopt` package [@bastianello_tvopt_2021], and run on a laptop with 12$^\text{th}$ generation Intel i7 CPU and $16$ GB of RAM. The considered set-up is a network of $N=10$ nodes, exchanging information through a random graph topology of $20$ edges, that want to solve an online logistic regression problem characterized by [\[eq:online-distributed-optimization\]](#eq:online-distributed-optimization){reference-type="eqref" reference="eq:online-distributed-optimization"} and the local costs $$f_{i,k}(x) = \sum_{h = 1}^{m_i} \log\left( 1 + \exp\left( - b_{i,h,k} a_{i,h,k} x \right) \right) + \frac{\epsilon}{2} {\left\vert\kern-0.25ex\left\vert x\right\vert\kern-0.25ex\right\vert}^2$$ where $x \in \mathbb{R}^{p}$, $p = 16$, is the vector of weights and intercept to be learned, and $\{ (a_{i,h,k}, b_{i,h,k}) \}_{h = 1}^{m_i}$ with $a_{i,h,k} \in \mathbb{R}^{1 \times p}$, $b_{i,h,k} \in \{ -1, 1 \}$ are the $m_i = 20$ feature vectors and class pairs available to the node at time $k \in \mathbb{N}$.
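As a concrete illustration of the local costs in this set-up, the sketch below builds a synthetic data set for a single node at a single time step and evaluates the regularized logistic cost and its gradient. The data-generation process is a stand-in assumption (the paper does not specify it); only the dimensions and the regularization weight $\epsilon$ match the set-up above:

```python
import numpy as np

rng = np.random.default_rng(0)
p, m_i, eps = 16, 20, 5.0  # dimensions and regularization weight from the set-up

# Assumed synthetic data for one node at one time step.
a = rng.normal(size=(m_i, p))           # feature vectors a_{i,h,k}
b = rng.choice([-1.0, 1.0], size=m_i)   # class labels b_{i,h,k}

def local_cost(x):
    # f_{i,k}(x) = sum_h log(1 + exp(-b_h a_h^T x)) + (eps/2) ||x||^2,
    # with log(1 + exp(.)) evaluated stably via logaddexp.
    margins = b * (a @ x)
    return np.sum(np.logaddexp(0.0, -margins)) + 0.5 * eps * x @ x

def local_grad(x):
    margins = b * (a @ x)
    sig = 1.0 / (1.0 + np.exp(margins))  # sigmoid of the negative margin
    return -(a.T @ (b * sig)) + eps * x
```

The $\epsilon$-term makes each local cost strongly convex, which is what the next paragraph exploits.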
Notice that the cost is defined in [\[eq:logistic-regression\]](#eq:logistic-regression){reference-type="eqref" reference="eq:logistic-regression"} with the addition of a regularization term ($\epsilon = 5$) which makes the problem strongly convex. In the following sections, we discuss the performance of the DOT-ADMM Algorithm in the different scenarios presented in Section [1.1](#subsec:challenges){reference-type="ref" reference="subsec:challenges"}. The algorithm will then be compared to the gradient tracking methods [@bof_multiagent_2019] (designed to be robust to asynchrony), and [@liu_linear_2021] (designed to be robust to quantization).

| Threshold           | Comp. time \[s\]      | Asymptotic err.        |
|---------------------|-----------------------|------------------------|
| $\theta = 10^{-14}$ | $3.47 \times 10^{-3}$ | $4.14 \times 10^{-14}$ |
| $\theta = 10^{-12}$ | $2.84 \times 10^{-3}$ | $3.65 \times 10^{-12}$ |
| $\theta = 10^{-10}$ | $2.42 \times 10^{-3}$ | $4.88 \times 10^{-10}$ |
| $\theta = 10^{-8}$  | $1.95 \times 10^{-3}$ | $5.30 \times 10^{-8}$  |
| $\theta = 10^{-6}$  | $1.39 \times 10^{-3}$ | $1.01 \times 10^{-5}$  |
| $\theta = 10^{-4}$  | $8.88 \times 10^{-4}$ | $5.73 \times 10^{-4}$  |
| $\theta = 10^{-2}$  | $4.12 \times 10^{-4}$ | $9.71 \times 10^{-2}$  |

: Computational time of local updates and asymptotic error for a static logistic regression problem. {#tab:prox-computation-admm}

## Local updates for logistic regression {#subsec:local-updates} In the DOT-ADMM Algorithm, each active node needs to compute the local update [\[eq:admm-x\]](#eq:admm-x){reference-type="eqref" reference="eq:admm-x"}. However, when applied to logistic regression in eq. [\[eq:logistic-regression\]](#eq:logistic-regression){reference-type="eqref" reference="eq:logistic-regression"} the proximal of $f_{i,k}$ does not have a closed form -- differently from the linear regression problem [\[eq:linear-regression\]](#eq:linear-regression){reference-type="eqref" reference="eq:linear-regression"} -- and therefore, the proximal needs to be computed approximately.
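Such an approximate proximal evaluation can be sketched as follows: iterate on the prox subproblem and stop when consecutive iterates are closer than a threshold $\theta$. The sketch uses plain gradient descent (the experiments use an accelerated variant); the data, the step-size rule, and the penalty convention $\tfrac{1}{2\rho\eta_i}\lVert x - w\rVert^2$ are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
p, m_i, rho_eta = 16, 20, 1.0
a = rng.normal(size=(m_i, p))           # assumed feature vectors
b = rng.choice([-1.0, 1.0], size=m_i)   # assumed class labels

def prox_logistic_approx(w, theta):
    # Approximately solve
    #   argmin_x  sum_h log(1 + exp(-b_h a_h^T x)) + ||x - w||^2 / (2*rho_eta)
    # by gradient descent, stopping when the step is smaller than theta.
    L = 0.25 * np.linalg.norm(a, 2) ** 2 + 1.0 / rho_eta  # smoothness bound
    x = w.copy()
    n_iter = 0
    while True:
        margins = b * (a @ x)
        sig = 1.0 / (1.0 + np.exp(margins))
        grad = -(a.T @ (b * sig)) + (x - w) / rho_eta
        x_new = x - grad / L
        n_iter += 1
        if np.linalg.norm(x_new - x) < theta:
            return x_new, n_iter
        x = x_new

w = rng.normal(size=p)
x_coarse, it_coarse = prox_logistic_approx(w, theta=1e-2)
x_fine, it_fine = prox_logistic_approx(w, theta=1e-8)
```

Tighter thresholds cost more inner iterations but return more accurate proximals, which is precisely the trade-off quantified in the table above.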
In our set-up, a node computes an approximation of [\[eq:admm-x\]](#eq:admm-x){reference-type="eqref" reference="eq:admm-x"} via accelerated gradient descent, terminating when the distance between consecutive iterates is smaller than a threshold $\theta > 0$. The smaller the threshold $\theta$, the smaller the error introduced by such an inexact local update; however, smaller values of the threshold make the computational time required for a local update longer, presenting a trade-off. To exemplify this trade-off, we apply the DOT-ADMM Algorithm to a static version of [\[eq:logistic-regression\]](#eq:logistic-regression){reference-type="eqref" reference="eq:logistic-regression"}. In Table [1](#tab:prox-computation-admm){reference-type="ref" reference="tab:prox-computation-admm"} we report the computational time required to compute the local updates for different choices of $\theta$, as well as the corresponding asymptotic error (that is, the distance ${\left\vert\kern-0.25ex\left\vert x(k) - x^*\right\vert\kern-0.25ex\right\vert}$ from the unique solution at the end of the simulation). The computational time is computed by averaging over $250$ iterations of the algorithm. ![Error trajectories of the DOT-ADMM Algorithm with synchronous and asynchronous updates for different numbers of slow nodes.](figs/error-asynchrony.pdf){#fig:asynchrony width="45%"} Hereafter, unless otherwise stated we use the DOT-ADMM Algorithm with $\theta = 10^{-8}$, for a local update time of ${\sim 2.42 \times 10^{-3} s}$. For comparison, we note that a local update of DGT (with hand-tuned parameters to improve performance) requires $\sim 1.90 \times 10^{-3} s$, and in the simulations we allow DGT to run two iterations for each iteration of the DOT-ADMM, to account for the longer time required by the latter's local updates. ## Quantized communications As in the section above, we consider a static logistic regression problem, and assume that the agents can exchange quantized communications.
In particular, an agent $i$ can only send the quantized version $q(x)$ of a message $x \in \mathbb{R}^p$, as defined component-wise by $$[q(x)]_j = \begin{cases} \underline{q} & \text{if} \ [x]_j < \underline{q} \\ \delta \lfloor [x]_j / \delta \rfloor & \text{if} \ \underline{q} \leq [x]_j \leq \overline{q} \\ \overline{q} & \text{if} \ [x]_j > \overline{q} \\ \end{cases}, \quad j \in \{ 1, \ldots, p \}$$ with $\overline{q} = -\underline{q} = 10$, and $\delta > 0$ the quantization level. Table [2](#tab:quantization){reference-type="ref" reference="tab:quantization"} reports the asymptotic error of the DOT-ADMM for different quantization levels $\delta$.

| Quantization        | Asymptotic error      |
|---------------------|-----------------------|
| No quantization     | $5.30 \times 10^{-8}$ |
| $\delta = 10^{-10}$ | $5.30 \times 10^{-8}$ |
| $\delta = 10^{-8}$  | $7.36 \times 10^{-8}$ |
| $\delta = 10^{-6}$  | $4.74 \times 10^{-6}$ |
| $\delta = 10^{-4}$  | $5.64 \times 10^{-4}$ |
| $\delta = 10^{-2}$  | $5.32 \times 10^{-2}$ |
| $\delta = 10^{-1}$  | $4.91 \times 10^{-1}$ |

: Asymptotic error for different quantization levels. {#tab:quantization}

## Asynchrony {#subsec:asynchrony} In Section [6.1](#subsec:local-updates){reference-type="ref" reference="subsec:local-updates"} we discussed how the local updates [\[eq:admm-x\]](#eq:admm-x){reference-type="eqref" reference="eq:admm-x"} for a logistic regression problem need to be computed recursively as the proximal does not have a closed-form solution. We then discussed how the threshold specifying the accuracy of the local update impacts the convergence of the algorithm. Here we consider how recursive local updates can lead to *asynchronous operations* of the agents, due to their heterogeneous computational capabilities. We consider the following scenario: at iteration $k$ each agent completes the local update [\[eq:admm-x\]](#eq:admm-x){reference-type="eqref" reference="eq:admm-x"} -- using $\theta = 10^{-8}$ -- with some probability, either $\underline{p}$ or $\overline{p}$, where $\underline{p} < \overline{p}$.
The agents characterized by the smaller probability $\underline{p}$ are the "slow" nodes, which, having fewer computational resources, take on average a longer time to reach the threshold $\theta$. Notice that all the nodes use the same threshold, and their more or less frequent updates mimic the effect of different resources. In Figure [1](#fig:asynchrony){reference-type="ref" reference="fig:asynchrony"} we report the mean tracking error (as averaged over $100$ Monte Carlo iterations) for the asynchronous case with different numbers of slow nodes $N_s$. We also compare the result with the error in the synchronous case, in which all nodes complete an update at each iteration $k$. As discussed in Remark [Remark 2](#rem:worse-convergence-rate){reference-type="ref" reference="rem:worse-convergence-rate"}, asynchronous agent operations, which translate into random coordinate updates, lead to worse convergence rates. Indeed, the more frequent the updates are, the faster the convergence rate (until achieving that of the synchronous version), and the introduction of slower nodes implies less frequent coordinate updates overall. ## Online optimization In this section, we evaluate the performance of the DOT-ADMM Algorithm when applied to two instances of the online logistic regression problem, in which the local cost functions are piece-wise constant. Specifically, the costs change $10$ and $100$ times, respectively, and are generated so that the maximum distance between consecutive optima is $\sim 2.5$ (cf. Assumption [Assumption 2](#as:time-variability){reference-type="ref" reference="as:time-variability"}). In Figure [2](#fig:online){reference-type="ref" reference="fig:online"} we report the tracking error of DOT-ADMM when applied to the two problems. Notice that when the problem changes less frequently, the DOT-ADMM has time to converge to smaller errors, up to the bound imposed by the inexact local updates (computed with $\theta = 10^{-4}$). 
Notice that in the transient the convergence is linear, as predicted by the theory. On the other hand, more frequent changes in the problem yield larger tracking errors overall. ![Tracking error of the DOT-ADMM Algorithm applied to two online problems with different piece-wise constant cost functions.](figs/online.pdf){#fig:online width="45%"} ## Comparison with gradient tracking methods We conclude by comparing the DOT-ADMM Algorithm with two gradient tracking methods: - *ra-GD* [@bof_multiagent_2019][^9]: which makes use of the robust ratio consensus to ensure convergence in the presence of asynchrony, - *LEAD* [@liu_linear_2021]: which is designed to be robust to a certain class of unbiased quantizers. Since DOT-ADMM requires a longer time to update the local states (cf. Section [6.1](#subsec:local-updates){reference-type="ref" reference="subsec:local-updates"}), ra-GD and LEAD were run for a larger number of iterations to match the computational time of DOT-ADMM. ![Comparison of the DOT-ADMM with [@bof_multiagent_2019; @liu_linear_2021] for different scenarios combining quantization and asynchrony.](figs/comparison.pdf){#fig:comparison width="45%"} In Figure [3](#fig:comparison){reference-type="ref" reference="fig:comparison"} we report the error for the three methods for different scenarios combining quantization and asynchrony. In particular, we either use the quantizer [@liu_linear_2021 eq. (14)] or not, and the agents activate either synchronously or asynchronously, using the same set-up as in Section [6.3](#subsec:asynchrony){reference-type="ref" reference="subsec:asynchrony"}. In accordance with the theory, ra-GD is robust to asynchrony, although the convergence is somewhat slow due to a conservative step-size choice.
On the other hand, when quantization is employed the algorithm seems to converge only to a neighborhood of the optimal solution, which is larger than the neighborhood reached by DOT-ADMM (despite the fact that DOT-ADMM also uses inexact updates). As predicted, LEAD shows convergence in the presence of quantization; however, the algorithm is not robust to asynchrony and seems to diverge when the agents are not synchronized. Overall, only the DOT-ADMM shows robustness to both asynchrony and quantization of the communications. # Conclusions {#sec:conclusion} This paper proposes the DOT-ADMM Algorithm to solve online learning problems in a multi-agent setting under challenging network constraints, such as asynchronous and inexact agent computations, and unreliable communications. The convergence and robustness of the DOT-ADMM Algorithm have been proven by deriving novel theoretical results in stochastic operator theory for the class of metric-subregular operators, an important class of operators exhibiting linear convergence to the set of optimal solutions. The broad applicability of this class of operators is supported by the fact that the operator ruling the DOT-ADMM Algorithm applied to the standard linear and logistic regression problems is indeed metric subregular. Future work will focus on studying the optimal design of the DOT-ADMM algorithm and on the characterization of the linear rate of convergence for specific distributed problems, e.g., online learning and dynamic tracking. [^1]: $^*$ Nicola Bastianello and Diego Deplano are co-first authors as they contributed equally. Nicola Bastianello is the corresponding author. [^2]: The work of N. Bastianello and K. H. Johansson was partially supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No.
101070162, and partially by the Swedish Research Council Distinguished Professor Grant 2017-01078 and the Knut and Alice Wallenberg Foundation Wallenberg Scholar Grant.

[^3]: Nicola Bastianello and Karl H. Johansson are with the School of Electrical Engineering and Computer Science and Digital Futures, KTH Royal Institute of Technology, Stockholm, Sweden. Emails: `{nicolba,kallej}@kth.se`

[^4]: D. Deplano, M. Franceschelli and A. Giua are with DIEE, University of Cagliari, 09123 Cagliari, Italy. Emails: `{diego.deplano,mauro.franceschelli}@unica.it`

[^5]: A function $f : \mathbb{R}^n \to \mathbb{R}\cup \{ +\infty \}$ is lower semicontinuous if its epigraph $\{ (x, t) \in \mathbb{R}^n \times \mathbb{R}\ | \ f(x) \leq t \}$ is closed in $\mathbb{R}^n \times \mathbb{R}$ [@Bauschke2017 Lemma 1.24].

[^6]: i.e., $\beta_{ij}(k)=1$ with probability $\rho_{ij}$, ${\beta_{ij}(k)=0}$ with probability $1-\rho_{ij}$.

[^7]: *That is, with probability $1$.*

[^8]: Note that since $x(k)$ is a function of $z(k-1)$, the update of $z(k)$ depends solely on $z(k-1)$ or, in other words, $x(k)$ is an internal variable of the DOT-ADMM Algorithm.

[^9]: The paper [@bof_multiagent_2019] proposes a distributed Newton method, but ra-GD can be derived by replacing the Hessians with identity matrices (cf. [@bof_multiagent_2019 Remark IV.1]).
---
abstract: |
  We consider analytic maps and vector fields defined in $\mathbb{R}^2 \times \mathbb{T}^d$, having a $d$-dimensional invariant torus $\mathcal{T}$. The map (resp. vector field) restricted to $\mathcal{T}$ defines a rotation of frequency $\omega$, and its derivative restricted to transversal directions to $\mathcal{T}$ does not diagonalize. In this context, we give conditions on the coefficients of the nonlinear terms of the map (resp. vector field) under which $\mathcal{T}$ possesses stable and unstable invariant manifolds, and we show that such invariant manifolds are analytic away from the invariant torus. We also provide effective algorithms to compute approximations of parameterizations of the invariant manifolds, and present some applications of the results.
title: Invariant manifolds of maps and vector fields with nilpotent parabolic tori
---

Clara Cufí-Cabré
Departament de Matemàtiques
Universitat Autònoma de Barcelona (UAB)
08193 Bellaterra, Barcelona, Spain
`clara.cufi@uab.cat`

Ernest Fontich
Departament de Matemàtiques i Informàtica
Universitat de Barcelona (UB)
Centre de Recerca Matemàtica (CRM)
Gran Via 585, 08007 Barcelona, Spain
`fontich@ub.edu`

*2020 Mathematics Subject Classification:* Primary: 37D10; Secondary: 37C25.\
*Key words and phrases:* Parabolic torus, invariant manifold, parameterization method.

# Introduction

The study of parabolic invariant manifolds is relevant, beyond its intrinsic interest as a mathematical problem, because this kind of manifold appears naturally in many problems motivated by physics, chemistry and other sciences. Parabolic manifolds have been used to prove the existence of oscillatory motions in some well-known problems of Celestial Mechanics, such as the Sitnikov problem [@sit60; @mos73] and the circular planar restricted three-body problem [@gms16; @gsms17; @llibresimo80], using the transversal intersection of invariant manifolds of parabolic points and symbolic dynamics.
The Sitnikov problem deals with a configuration of the restricted three-body problem where the two primary bodies (those with non-zero mass) describe ellipses, while the third body moves along the line through their center of mass orthogonal to the plane where the motion of the primaries takes place. The circular planar restricted three-body problem, instead, considers the motion of a body of negligible mass moving under the gravitational action of the two primary bodies, which perform a circular motion, while all three bodies remain in the same plane.

The existence of oscillatory motions in all these instances is strongly related to some invariant objects at infinity that are either fixed points, periodic orbits or invariant tori, and to their stable and unstable manifolds. Although these invariant objects are parabolic, in the sense that the linearization of the vector field on them has all eigenvalues equal to zero, they do have stable and unstable invariant manifolds in a similar way as hyperbolic invariant objects do, namely the sets of initial conditions of solutions that tend to the invariant object as $t\to +\infty$ (for the stable manifold) or as $t \to -\infty$ (for the unstable manifold).

Let $\mathbb{T}^d = (\mathbb{R}/ \mathbb{Z})^d$ be the real torus of dimension $d$ and $U \subset \mathbb{R}^2$.
In this paper, we consider analytic maps $F: U \times \mathbb{T}^d \times \Lambda \to \mathbb{R}^2 \times \mathbb{T}^d$ of the form $$\label{forma-general-maps} F(x, y, \theta, \lambda) = \begin{pmatrix} x + c(\theta, \lambda) y + f_1(x, y, \theta, \lambda) \\ y + f_2(x, y, \theta, \lambda) \\ \theta + \omega + f_3(x, y, \theta, \lambda) \end{pmatrix},$$ where $(x, y) \in U \subset \mathbb{R}^2$, $\theta \in \mathbb{T}^d$, $\omega \in \mathbb{R}^d$, $\lambda \in \Lambda \subset \mathbb{R}^m$, with $f_1(x, y, \theta, \lambda), f_2(x, y, \theta, \lambda) = O(\|(x, y)\|^2)$, and $f_3(x, y, \theta, \lambda) = O(\|(x, y)\|)$, and analytic vector fields $X: U \times \mathbb{T}^d \times \mathbb{R}\times \Lambda \to \mathbb{R}^2 \times \mathbb{T}^d$ of the form $$\label{forma-general-edo} X(x, y, \theta, t, \lambda) = \begin{pmatrix} c(\theta, t, \lambda) y + g_1(x,y,\theta, t, \lambda) \\ g_2(x,y,\theta, t, \lambda) \\ \omega + g_3(x,y,\theta, t, \lambda) \end{pmatrix},$$ with $g_1(x, y, \theta, t, \lambda), g_2(x, y, \theta, t, \lambda) = O(\|(x, y)\|^2)$, and $g_3(x, y, \theta, t, \lambda) = O(\|(x, y)\|)$, depending quasiperiodically on the time variable $t$.

The set $$\mathcal{T} = \{ (0,0,\theta) \in U \times \mathbb{T}^d \}$$ is an invariant torus of $F$ and $X$, that is, for all $\lambda \in \Lambda$, $F(\mathcal{T}, \lambda) \subset \mathcal{T}$, or in the case of vector fields, for any point $x \in \mathcal{T}$, $X(x, \lambda)$ is tangent to $\mathcal{T}$ at $x$. We say that $\mathcal{T}$ is a *parabolic torus with nilpotent part* because the top-left $2 \times 2$ box of the derivative of $F$ (resp. $X$) at $(0, 0, \theta)$ does not diagonalize. We study the existence and regularity of $(d+1)$-dimensional invariant manifolds of such maps and vector fields.
Concretely, we give sufficient conditions on the coefficients of $F$ and $X$ that guarantee the existence of a stable and an unstable invariant manifold, and we prove that such invariant manifolds are analytic away from $\mathcal{T}$. Moreover, we provide an algorithm to compute an approximation of a parameterization of the invariant manifolds up to any order. The results also provide analytic dependence on parameters.

For the case of maps we use a method similar to the one in [@BFM-20-w], where the authors study the existence of invariant manifolds of analytic maps and vector fields defined on $\mathbb{R}^n \times \mathbb{T}^d$, and where the first $n \times n$ box of the linear part of the maps is equal to the identity. There, applications to the study of the planar $(n+1)$-body problem are provided. Unlike in that paper, here the results for vector fields are presented as a direct study of a functional equation in a suitable Banach space, while in [@BFM-20-w] the corresponding results are obtained from the results for maps.

In [@zhang16] the authors deal with invariant curves of $C^\infty$ planar maps of the form corresponding to the first two components of [\[forma-general-maps\]](#forma-general-maps){reference-type="eqref" reference="forma-general-maps"}, and they obtain the existence of a local stable manifold as the graph of some function $\varphi$ by solving a fixed point equation equivalent to the invariance of the graph of $\varphi$. This equation is solved by applying the Schauder fixed point theorem, and they obtain invariant manifolds of class $C^{[(k+1)/2]}$, where in our notation $k$ is the integer appearing in the reduced form of $F$ given in Section [2.1](#sec-reduced){reference-type="ref" reference="sec-reduced"}. In this paper we generalize the results of [@zhang16] in several ways.
First, we consider both maps and vector fields having a parabolic invariant torus, where the restriction of the maps to the directions transversal to the torus has the form considered in [@zhang16]. Also, the maps and vector fields that we consider are analytic, and we provide the existence of analytic invariant manifolds for them, away from the invariant tori. We also provide a uniqueness result for the stable manifold for maps. In our approach we use the parameterization method, which allows us to state the existence theorems as *a posteriori* results, and we provide explicit algorithms to compute an approximation of a parameterization of the invariant manifolds. Moreover, the use of this technique and the Banach fixed point theorem allows us to prove the existence results in a more compact way than in [@zhang16]. The results of [@CC-EF-20] also generalize in some ways the ones of [@zhang16]. There, planar maps of class $C^r$ having a nilpotent parabolic point are considered, and the existence of $C^r$ invariant curves asymptotic to the fixed point is provided. The present paper is indeed a natural generalization of [@CC-EF-20] in the analytic case.

The paper is organized as follows. In Section [2.1](#sec-reduced){reference-type="ref" reference="sec-reduced"} we introduce the *reduced form* of the maps and vector fields we deal with and we present the parameterization method. Next we state the main results concerning the existence of invariant manifolds (Theorems [Theorem 1](#teorema_analitic_tors){reference-type="ref" reference="teorema_analitic_tors"} - [Theorem 6](#teorema_posteriori_tors_edo){reference-type="ref" reference="teorema_posteriori_tors_edo"}) and we present applications to the study of a quasiperiodically forced oscillator and the scattering of He atoms off Cu surfaces in Section [2.3](#sec-appl-tors){reference-type="ref" reference="sec-appl-tors"}.
In Section [3](#sec-algoritme-tors){reference-type="ref" reference="sec-algoritme-tors"}, we provide an algorithm to compute an approximation of a parameterization of the invariant manifolds, both for maps and vector fields. The main results of that section are Theorems [Theorem 11](#prop_simple_tors){reference-type="ref" reference="prop_simple_tors"} and [Theorem 15](#prop_tors_edo_sim){reference-type="ref" reference="prop_tors_edo_sim"}. The rest of the paper is devoted to proving the main results of existence. First, in Section [4](#sec-funcional){reference-type="ref" reference="sec-funcional"} we introduce a functional equation equivalent to the invariance of a parameterized manifold which will be the object of our study. We also introduce suitable function spaces, some operators, and state their properties. Finally, the proofs of the main results are provided in Section [5](#sec-dems-tors){reference-type="ref" reference="sec-dems-tors"}. In the Appendix we give two postponed proofs and we deal with unstable manifolds.

# Main results and applications

In this section we state the main results of the paper. To simplify the statements we write them for maps and vector fields in their reduced form, introduced below. This requires some preliminary notation. After the statements of the results we provide some applications.

Let $V \subset \mathbb{R}^n$ be an open set. We say that a function $h: V \times \mathbb{R}\to \mathbb{R}^\ell$, $h=h(x, t)$, depends *quasiperiodically* on $t$ if there exists a vector $\nu = (\nu_1 , \, \dots , \, \nu_{d'}) \in \mathbb{R}^{d'}$ and a function $\check h : V \times \mathbb{T}^{d'} \to \mathbb{R}^\ell$, called the hull function of $h$, such that $$h(x, t) = \check h(x, \nu t).$$ We call $\nu$ the vector of *time frequencies* of $h$. If $d'=1$ then $h$ is a periodic function of $t$.
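As a concrete illustration of this definition (our own toy example, not taken from the applications below), the signal $h(t) = \cos(2\pi t) + \cos(2\pi\sqrt{2}\,t)$ is quasiperiodic with $d' = 2$, hull function $\check h(\theta_1, \theta_2) = \cos(2\pi\theta_1) + \cos(2\pi\theta_2)$ and time-frequency vector $\nu = (1, \sqrt{2})$:

```python
import math

# Toy quasiperiodic signal: h(t) = cos(2*pi*t) + cos(2*pi*sqrt(2)*t),
# with hull function on T^2 and time-frequency vector nu = (1, sqrt(2)).
nu = (1.0, math.sqrt(2.0))

def hull(theta1, theta2):
    return math.cos(2 * math.pi * theta1) + math.cos(2 * math.pi * theta2)

def h(t):
    return math.cos(2 * math.pi * t) + math.cos(2 * math.pi * math.sqrt(2.0) * t)

# h(t) = check_h(nu * t) for every t; reducing the angles mod 1 is implicit,
# since the hull function is 1-periodic in each argument.
for t in (0.0, 0.3, 1.7, 12.9):
    assert abs(h(t) - hull(nu[0] * t, nu[1] * t)) < 1e-12
```

Since $1$ and $\sqrt{2}$ are rationally independent, $h$ is quasiperiodic but not periodic.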
Given a map $h : V\times \mathbb{T}^d \to \mathbb{R}^\ell$, we define the *average* of $h$ with respect to $\theta \in \mathbb{T}^d$ as $$\bar{h}(x) = \frac{1}{\text{vol}(\mathbb{T}^d)} \int_{\mathbb{T}^d} h(x, \theta) \, d\theta,$$ and the *oscillatory* part of $h$ as $\tilde{h}(x,\theta) = h( x,\theta) - \bar{h}(x)$. Sometimes, if a function $h$ does not depend on $\theta$, we will still write $\bar h$ to emphasize this fact. If $h$ also depends quasiperiodically on time, we write $$\bar{h}(x) = \frac{1}{\text{vol}(\mathbb{T}^d\times \mathbb{T}^{d'})} \int_{\mathbb{T}^d\times \mathbb{T}^{d'}} \check h(x, \theta,\theta') \, d\theta d\theta'.$$ If $t$ denotes the time variable, then given two functions $g(x,t)$ and $h(x,t)$ the composition $f = h \circ g$ will mean $$f(x, t) = h(g(x, t), t).$$ We will deal with functions depending on a parameter $\lambda \in \Lambda\subset \mathbb{R}^m$. The previous definitions and notation extend naturally to such functions.

## Reduced form of the maps and vector fields {#sec-reduced}

Throughout this paper we consider analytic maps of the form [\[forma-general-maps\]](#forma-general-maps){reference-type="eqref" reference="forma-general-maps"} and analytic vector fields of the form [\[forma-general-edo\]](#forma-general-edo){reference-type="eqref" reference="forma-general-edo"}. Performing the analytic change of variables given by $\tilde x = x$, $\tilde{y} = y + \frac{1}{c(\theta, \lambda)} \, f_1(x, y, \theta, \lambda)$, $\tilde \theta = \theta$, the nonlinear terms of the first components of [\[forma-general-maps\]](#forma-general-maps){reference-type="eqref" reference="forma-general-maps"} and [\[forma-general-edo\]](#forma-general-edo){reference-type="eqref" reference="forma-general-edo"} are removed, and only the linear terms remain.
Performing also the change $y \mapsto -y$ if necessary, [\[forma-general-maps\]](#forma-general-maps){reference-type="eqref" reference="forma-general-maps"} and [\[forma-general-edo\]](#forma-general-edo){reference-type="eqref" reference="forma-general-edo"} can be written respectively as $$\label{forma_normal_tor} F(x, y, \theta, \lambda) = \begin{pmatrix} x + c(\theta, \lambda) y \\ y + a_k(\theta, \lambda)x^k + A(x, y, \theta, \lambda) \\ \theta + \omega +d_p(\theta, \lambda)x^p + B(x, y, \theta, \lambda) \end{pmatrix}$$ and $$\label{fnormal_tors_edo1} X(x, y, \theta, t, \lambda) = \begin{pmatrix} c(\theta, t, \lambda) y \\ a_k(\theta, t, \lambda)x^k + A(x, y, \theta, t, \lambda) \\ \omega + d_p(\theta, t, \lambda)x^p + B(x, y, \theta, t, \lambda) \end{pmatrix},$$ with $(x,y, \theta, \lambda) \in U \times \mathbb{T}^d \times \Lambda$ and $\bar c >0$, for some $k \geq 2$, $p \geq 1$. We assume that $$\begin{aligned} \begin{split} \label{condicions_fnormal} A(x, y, \theta,t, \lambda) = y \, O(\|(x, y)\|^{k-1}) + O(\|(x, y)\|^{k+1}), \\ B(x, y, \theta, t, \lambda) = y \, O(\|(x, y)\|^{p-1}) + O(\|(x, y)\|^{p+1}), \end{split} \end{aligned}$$ without the time dependence in the case of maps. We also assume that the vector field $X$ given in [\[fnormal_tors_edo1\]](#fnormal_tors_edo1){reference-type="eqref" reference="fnormal_tors_edo1"} depends quasiperiodically on $t$, with $\nu \in \mathbb{R}^{d'}$ being the vector of time frequencies. From now on, we will refer to [\[forma_normal_tor\]](#forma_normal_tor){reference-type="eqref" reference="forma_normal_tor"} and [\[fnormal_tors_edo1\]](#fnormal_tors_edo1){reference-type="eqref" reference="fnormal_tors_edo1"} as the *reduced form* of [\[forma-general-maps\]](#forma-general-maps){reference-type="eqref" reference="forma-general-maps"} and [\[forma-general-edo\]](#forma-general-edo){reference-type="eqref" reference="forma-general-edo"}, respectively, and we will also denote them by $F$ and $X$.
When using those reduced forms, we will refer not only to the form of the map and the vector field [\[forma_normal_tor\]](#forma_normal_tor){reference-type="eqref" reference="forma_normal_tor"} and [\[fnormal_tors_edo1\]](#fnormal_tors_edo1){reference-type="eqref" reference="fnormal_tors_edo1"} but also to the conditions [\[condicions_fnormal\]](#condicions_fnormal){reference-type="eqref" reference="condicions_fnormal"}. For the sake of simplicity we will only consider the cases of $F$ and $X$ with $\bar a_k(\lambda), \, \bar d_p( \lambda) \neq 0$, which include the generic ones. Other more degenerate cases may be treated with the same techniques. In Theorem [Theorem 8](#teorema_analitic_helicoure){reference-type="ref" reference="teorema_analitic_helicoure"} we consider a case with $\bar a_k(\lambda) = 0$ to be able to deal with an application to the scattering of He atoms off Cu surfaces.

Throughout the paper we will sometimes omit the dependence of the functions we work with on the parameter $\lambda$ when there is no danger of confusion. Concretely, we present the statements, setting and function spaces in detail, but we skip the dependence on parameters in the lemmas and proofs.

To prove the results we use the parameterization method for invariant manifolds (see [@cfdlll03I], [@cfdlll05], [@HCFLM16]). It consists in looking for the invariant manifolds of $F$ or $X$ as images of parameterizations together with a representation of the dynamics of $F$ or $X$ restricted to the invariant manifolds.
In the maps setting, we look for a pair of functions, $K(u, \theta, \lambda) : [0, \rho) \times \mathbb{T}^d \times \Lambda \to \mathbb{R}^2 \times \mathbb{T}^d$ and $R(u, \theta, \lambda) : [0, \rho) \times \mathbb{T}^d \times \Lambda \to \mathbb{R}\times \mathbb{T}^d$ satisfying the invariance equation $$\label{eq-param-met-def} F ( K(u, \theta, \lambda), \lambda) = K (R(u, \theta, \lambda), \lambda).$$ For this equation to make sense, the range of $K$ must be contained in the domain of $F$. It is a functional equation that has to be adapted to the setting of the problem at hand. It follows immediately from [\[eq-param-met-def\]](#eq-param-met-def){reference-type="eqref" reference="eq-param-met-def"} that the range of $K$ is invariant. Actually, $K$ is a (semi)conjugation of the map $F$ restricted to the range of $K$ to $R$. Then, one has to solve equation [\[eq-param-met-def\]](#eq-param-met-def){reference-type="eqref" reference="eq-param-met-def"} in a suitable space of functions. Usually it is convenient to have good approximations of $K$ and $R$ and look for a small correction of $K$, in some sense, while keeping $R$ fixed. Assuming differentiability and taking derivatives in [\[eq-param-met-def\]](#eq-param-met-def){reference-type="eqref" reference="eq-param-met-def"} we get $DF \circ K \cdot DK =DK \circ R \cdot DR$, which says that the range of $DK$ has to be invariant under $DF$. Therefore, in our setting we look for $K=( K^x, K^y)$ and $R$ such that $K(0, \theta, \lambda) = (0,0, \theta)$, $R(0, \theta, \lambda) = (0, \theta + \omega)$, and $\partial_u K^y(u,\theta, \lambda) / \partial_u K^x(u,\theta, \lambda) \to 0$ as $u \to 0$. The existence of $K$ provides the existence of an invariant manifold, stable or unstable depending on whether $0$ is an attractor or a repellor for $R$.
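To illustrate how the invariance equation determines the leading coefficients of $K$ and $R$ order by order, the following sympy sketch (our own sanity check, not the paper's algorithm) takes a toy $\theta$-independent instance of the reduced form with $k=2$ and constant coefficients $c = a_2 = 1$, i.e. $F(x,y) = (x + y,\, y + x^2)$, and verifies that with the leading terms $K(u) = (u^2, \overline K^y_3 u^3)$, $R(u) = u + \overline R^x_2 u^2$ the two sides of the invariance equation agree through the first nontrivial orders:

```python
import sympy as sp

u = sp.symbols('u')
# Toy theta-independent instance of the reduced map (k = 2, c = a_2 = 1):
#   F(x, y) = (x + y, y + x**2)
# Candidate leading coefficients (negative square roots):
Ky = -sp.sqrt(sp.Rational(2, 3))   # plays the role of K^y_{k+1}
Rx = -sp.sqrt(sp.Rational(1, 6))   # plays the role of R^x_k

Kx_u, Ky_u = u**2, Ky * u**3       # leading terms of the parameterization K
R = u + Rx * u**2                  # leading terms of the inner map R

FK = (Kx_u + Ky_u, Ky_u + Kx_u**2)            # F(K(u))
KR = (sp.expand(R**2), sp.expand(Ky * R**3))  # K(R(u))

res1 = sp.expand(FK[0] - KR[0])
res2 = sp.expand(FK[1] - KR[1])
# the invariance equation holds through orders u^3 and u^4, respectively;
# the remaining residual is absorbed by the higher-order terms of K
assert sp.series(res1, u, 0, 4).removeO() == 0
assert sp.series(res2, u, 0, 5).removeO() == 0
```

Matching the orders $u^{3}$ and $u^{4}$ by hand gives $c\,\overline K^y_3 = 2\overline R^x_2$ and $a_2 = 3\,\overline K^y_3 \overline R^x_2$, which is how the square-root expressions above arise.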
In the vector fields setting, to find an invariant manifold of $X$ following the parameterization method, we look for a map $K$ and a vector field $Y$ satisfying $$\label{eq-camps-conj-1} X \circ K = DK \cdot Y.$$ This equation expresses that on the range of $K$, the vector field $X$ is tangent to the range of $K$, and therefore, the image of $K$ is invariant under the flow of $X$. Moreover, the vector field $Y$ is a representation of $X$ restricted to the image of $K$.

When $X$ is nonautonomous one needs a time-dependent version of equation [\[eq-camps-conj-1\]](#eq-camps-conj-1){reference-type="eqref" reference="eq-camps-conj-1"}. Adding the equation $\dot t =1$ to the system $\dot z = X(z,t)$, $z \in U \times \mathbb{T}^d$, and applying [\[eq-camps-conj-1\]](#eq-camps-conj-1){reference-type="eqref" reference="eq-camps-conj-1"} to the new vector field we arrive at the equation $$\label{eq-conj-tors-edo} X(K(u, \theta, t, \lambda), t, \lambda)- \partial_{(u, \theta)} K (u, \theta, t, \lambda) \cdot Y (u, \theta, t, \lambda) - \partial_t K(u, \theta, t, \lambda) = 0$$ for $K$ and $Y$ also depending on $t$. Concretely, we look for a map $K(u, \theta, t, \lambda)$ and a vector field $Y(u, \theta, t,\lambda)$ satisfying [\[eq-conj-tors-edo\]](#eq-conj-tors-edo){reference-type="eqref" reference="eq-conj-tors-edo"}. Equation [\[eq-conj-tors-edo\]](#eq-conj-tors-edo){reference-type="eqref" reference="eq-conj-tors-edo"} expresses that on the range of $K$, the vector field $(X, 1)$ is tangent to the range of $K$, and therefore, the image of $K$ is invariant under the flow of $(X,1)$. Moreover, we look for $K$ and $Y$ satisfying $K(0,\theta, t, \lambda) = (0, 0, \theta) \in \mathbb{R}^2 \times \mathbb{T}^d$, $Y(0,\theta, t, \lambda) = (0, \omega) \in \mathbb{R}\times \mathbb{T}^d$ and $\partial_u K^y / \partial_u K^x \to 0$ as $u \to 0$.
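The tangency condition $X \circ K = DK \cdot Y$ can be checked symbolically on a simple example of our own (a hyperbolic rather than parabolic vector field, chosen only to illustrate the mechanics of the equation): for $X(x,y) = (-x, -2y)$, the parabola $y = x^2$ is invariant, parameterized by $K(u) = (u, u^2)$ with inner dynamics $Y(u) = -u$.

```python
import sympy as sp

u = sp.symbols('u')
# Toy hyperbolic vector field X(x, y) = (-x, -2y); the parabola y = x^2
# is invariant, parameterized by K(u) = (u, u^2) with inner dynamics u' = -u.
K = sp.Matrix([u, u**2])
Y = sp.Matrix([-u])
X_of_K = sp.Matrix([-K[0], -2 * K[1]])        # X evaluated on the range of K
residual = sp.simplify(X_of_K - K.jacobian([u]) * Y)
assert residual == sp.zeros(2, 1)             # X o K = DK * Y holds exactly
```

In the parabolic setting of this paper the same equation holds, but $Y$ starts at order $u^k$ instead of being linear, and $K$ is only known order by order.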
It is known that in the parabolic case, in general, there is a loss of regularity of the invariant manifolds around the invariant torus with respect to the regularity of the map or the vector field ([@BFM20a; @BFM20b; @fontich99]). Hence we cannot assume *a priori* that the manifolds admit a high-order Taylor expansion with respect to $u$ at $u=0$. However, we can obtain formal approximations $\mathcal{K}_n$, $\mathcal{R}_n$ and $\mathcal{Y}_n$ of $K$, $R$ and $Y$, satisfying the equations [\[eq-param-met-def\]](#eq-param-met-def){reference-type="eqref" reference="eq-param-met-def"} and [\[eq-conj-tors-edo\]](#eq-conj-tors-edo){reference-type="eqref" reference="eq-conj-tors-edo"} up to any order. In our results we will show that these expressions are indeed approximations of true invariant manifolds, whose existence is rigorously established.

## Main results {#mainresults}

In this section we state the results on existence of analytic stable invariant manifolds for maps and vector fields of the form [\[forma_normal_tor\]](#forma_normal_tor){reference-type="eqref" reference="forma_normal_tor"} and [\[fnormal_tors_edo1\]](#fnormal_tors_edo1){reference-type="eqref" reference="fnormal_tors_edo1"}, respectively, asymptotic to the invariant torus $\mathcal{T}$. For both cases we also state an *a posteriori* result, which provides the existence of a stable invariant manifold assuming it has been previously approximated; the statement is independent of the way such an approximation has been obtained. In the Appendix at the end of the paper we show that completely analogous results for the unstable manifolds also hold true.

**Theorem 1** (Invariant manifolds of maps). *Let $F: U \times \mathbb{T}^d \times \Lambda \to \mathbb{R}^2 \times \mathbb{T}^d$ be an analytic map of the form [\[forma_normal_tor\]](#forma_normal_tor){reference-type="eqref" reference="forma_normal_tor"}.
Assume that $2p> k-1$, $\bar a_k (\lambda) > 0$ for $\lambda \in \Lambda$, and that $\omega$ is Diophantine. Then, there exists $\rho > 0$ and a $C^1$ map $K: [0, \, \rho ) \times \mathbb{T}^d \times \Lambda \to \mathbb{R}^2 \times \mathbb{T}^d$, analytic in $(0,\rho) \times \mathbb{T}^d \times \Lambda$, of the form $$\label{propietats_k_analitic_tors} K(u, \theta, \lambda) = (u^2, \, \overline{ K }_{k+1}^y (\lambda) u^{k+1},\, \theta + \bar K_{2p-k+1}^\theta(\lambda) u^{2p-k+1} ) + (O(u^3), O(u^{k+2}), O(u^{2p-k+2})),$$ and a polynomial map $R$ of the form $$R(u, \theta, \lambda) = ( u + \bar R^x_k (\lambda) u^k + \overline R^x_{2k-1}(\lambda) u^{2k-1}, \, \theta + \omega ),$$ with $\bar R_k^x (\lambda) <0$, such that $$F(K(u, \theta, \lambda), \lambda) = K(R(u, \theta, \lambda), \lambda), \qquad (u, \theta, \lambda) \in [0, \, \rho) \times \mathbb{T}^d \times \Lambda.$$ Moreover, we have*

*$$\overline K_{k+1}^y(\lambda) = - \sqrt{\frac{2 \, \bar a_k(\lambda)}{\bar c(\lambda)\, (k+1)}}, \quad \overline K_{2p-k+1}^\theta (\lambda) = - \frac{\bar d_p(\lambda)}{2p-k+1} \sqrt{\frac{2(k+1)}{\bar c (\lambda) \, \bar a_k(\lambda)}}, \quad \overline R_k^x (\lambda) = - \sqrt{\frac{\bar c(\lambda) \, \bar a_k(\lambda)}{2(k+1)}}.$$*

**Remark 2**. The statement of Theorem [Theorem 1](#teorema_analitic_tors){reference-type="ref" reference="teorema_analitic_tors"} provides a local stable manifold parameterized by $K: [0, \, \rho ) \times \mathbb{T}^d \times \Lambda \to \mathbb{R}^2 \times \mathbb{T}^d$ with $\rho$ small. The proof does not give an explicit estimate for the value of $\rho$; however, one can extend the domain of $K$ by using the formula $$K(u, \theta, \lambda) = F^{-j} K(R^j(u, \theta, \lambda)), \qquad j \geq 1,$$ as long as the iterates of the inverse map $F^{-1}$ exist.

**Theorem 3** (*A posteriori* result for maps).
*Let $F: U \times \mathbb{T}^d \times \Lambda \to \mathbb{R}^2 \times \mathbb{T}^d$ be an analytic map of the form [\[forma_normal_tor\]](#forma_normal_tor){reference-type="eqref" reference="forma_normal_tor"} with $2p>k-1$, and let $\widehat K:(-\rho, \rho) \times \mathbb{T}^d \times \Lambda \to \mathbb{R}^2 \times \mathbb{T}^d$ and $\widehat R : (-\rho, \rho) \times \mathbb{T}^d \times \Lambda \to \mathbb{R}\times \mathbb{T}^d$ be analytic maps of the form $$\widehat K(u, \theta, \lambda) = (u^2, \, \overline K_{k+1}^y (\lambda) u^{k+1},\, \theta + \overline K_{2p-k+1}^\theta(\lambda) u^{2p-k+1} ) + (O(u^3), O(u^{k+2}), O(u^{2p-k+2})),$$ and $$\widehat R(u, \theta, \lambda) = ( u + \overline R_k^x(\lambda) u^k + O(u^{k+1}) , \, \theta + \omega ),$$ with $\overline R_k^x(\lambda)<0$, satisfying $$F ( \widehat K (u, \theta , \lambda), \lambda) - \widehat K (\widehat R(u, \theta, \lambda), \lambda) = (O(u^{n+k}), O(u^{n+2k-1}), \, O(u^{n+2p-1})),$$ for some $n\ge 2$.*

*Then, there exists a $C^1$ map $K:[0,\rho) \times \mathbb{T}^d \times \Lambda \to \mathbb{R}^2 \times \mathbb{T}^d$, analytic in $(0,\rho) \times \mathbb{T}^d \times \Lambda$, and an analytic map $R:(-\rho,\rho) \times \mathbb{T}^d \times \Lambda \to \mathbb{R}\times \mathbb{T}^d$ such that $$F ( K (u , \theta, \lambda), \lambda) = K ( R(u, \theta , \lambda), \lambda) , \qquad (u, \theta, \lambda ) \in [0,\rho) \times \mathbb{T}^d \times \Lambda,$$ and $$K (u , \theta, \lambda) - \widehat K(u, \theta, \lambda ) = (O(u^{n+1}), O(u^{n+k}), \, O(u^{n+2p-k})) ,$$ $$R(u, \theta, \lambda) -\widehat R(u, \theta, \lambda) = \left\{ \begin{array}{ll} ( O(u^{2k-1}) , 0 ) & \text{ if }\; n\le k, \\ (0, 0) & \text{ if } \; n>k. \end{array} \right.$$*

We also have a uniqueness result for the invariant manifolds obtained in the previous theorems.

**Theorem 4**.
*Under the hypotheses of Theorems [Theorem 1](#teorema_analitic_tors){reference-type="ref" reference="teorema_analitic_tors"} and [Theorem 3](#teorema_posteriori_tors){reference-type="ref" reference="teorema_posteriori_tors"} the stable manifold is unique.* The proof is deferred to the Appendix. **Theorem 5** (Invariant manifolds of vector fields). *Let $X$ be an analytic vector field of the form [\[fnormal_tors_edo1\]](#fnormal_tors_edo1){reference-type="eqref" reference="fnormal_tors_edo1"} and let $\nu \in \mathbb{R}^{d'}$ be the time frequencies of $X$. Assume that $2p> k-1$. Assume also that $(\omega, \nu)$ is Diophantine and that $\bar{a}_k(\lambda) > 0$ for $\lambda \in \Lambda$.* *Then, there exists $\rho > 0$ and a $C^1$ map $K: [0, \, \rho ) \times \mathbb{T}^d \times \mathbb{R}\times \Lambda \to \mathbb{R}^2 \times \mathbb{T}^d$, analytic in $(0,\rho) \times \mathbb{T}^d \times \mathbb{R}\times \Lambda$, of the form $$K(u, \theta, t, \lambda) = (u^2, \, \overline K_{k+1}^y(\lambda) u^{k+1},\, \theta + \overline K_{2p-k+1}^\theta(\lambda) u^{2p-k+1} ) + (O(u^3), O(u^{k+2}), O(u^{2p-k+2})),$$ depending quasiperiodically on $t$ with the same frequencies as $X$, and a polynomial vector field $Y$ of the form $$Y(u, \theta, t, \lambda) = Y(u, \lambda)= ( \overline Y^x_k(\lambda) u^k + \overline Y^x_{2k-1}(\lambda) u^{2k-1}, \, \omega ),$$ with $\overline Y_k^x(\lambda) <0$, such that $$\begin{aligned} X(K(u, \theta, t, \lambda), t, \lambda) - \partial_{(u, \theta)} K(u, \theta, t, \lambda) \cdot Y(u, \theta, t, \lambda) & - \partial_t K(u, \theta, t, \lambda) = 0,\\ & (u, \theta, t, \lambda) \in [0, \, \rho) \times \mathbb{T}^d \times \mathbb{R}\times \Lambda. 
\end{aligned}$$ Moreover, we have*

*$$\overline K_{k+1}^y(\lambda) = - \sqrt{\frac{2 \, \bar a_k(\lambda)}{\bar c(\lambda)\, (k+1)}}, \quad \overline K_{2p-k+1}^\theta (\lambda) = - \frac{\bar d_p(\lambda)}{2p-k+1} \sqrt{\frac{2(k+1)}{\bar c (\lambda) \, \bar a_k(\lambda)}}, \quad \overline Y_k^x (\lambda) = - \sqrt{\frac{\bar c(\lambda) \, \bar a_k(\lambda)}{2(k+1)}}.$$*

**Theorem 6** (*A posteriori* result for vector fields). *Let $X$ be an analytic vector field of the form [\[fnormal_tors_edo1\]](#fnormal_tors_edo1){reference-type="eqref" reference="fnormal_tors_edo1"} and let $\nu \in \mathbb{R}^{d'}$ be the time frequencies of $X$. Let $\widehat K:(-\rho, \rho) \times \mathbb{T}^d \times \mathbb{R}\times \Lambda \to \mathbb{R}^2 \times \mathbb{T}^d$ and $\widehat Y : (-\rho, \rho) \times \mathbb{T}^d \times \mathbb{R}\times \Lambda \to \mathbb{R}\times \mathbb{T}^d$ be an analytic map and an analytic vector field, respectively, of the form $$\widehat K(u, \theta, t, \lambda) = (u^2, \, \overline K_{k+1}^y(\lambda) u^{k+1},\, \theta + \overline K_{2p-k+1}^\theta(\lambda) u^{2p-k+1} ) + (O(u^3), O(u^{k+2}), O(u^{2p-k+2})),$$ and $$\widehat Y(u, \theta, t, \lambda) = (\overline Y^x_k(\lambda) u^k + O(u^{k+1}), \, \omega ),$$ with $\overline Y_k^x(\lambda)<0$, depending quasiperiodically on $t$ with the same frequencies as $X$, satisfying $$\begin{aligned} X(\widehat K(u, \theta, t, \lambda),t, \lambda) - \partial_{(u, \theta)} \widehat K(u, \theta, t, \lambda) \cdot \widehat Y(u, \theta, t, \lambda) &- \partial_t \widehat K(u, \theta, t, \lambda) \\ &= (O(u^{n+k}), O(u^{n+2k-1}), \, O(u^{n+2p-1})), \end{aligned}$$ for some $n\ge 2$.*

*Then, there exists a $C^1$ map $K:[0,\rho) \times \mathbb{T}^d \times \mathbb{R}\times \Lambda \to \mathbb{R}^2 \times \mathbb{T}^d$, analytic in $(0,\rho) \times \mathbb{T}^d \times \mathbb{R}\times \Lambda$, and an analytic vector field $Y:(-\rho,\rho) \times \mathbb{T}^d \times \mathbb{R}\times \Lambda \to \mathbb{R}\times \mathbb{T}^d$ such that
$$\begin{aligned} X(K(u, \theta, t, \lambda), t, \lambda) - \partial_{(u, \theta)} K(u, \theta, t, \lambda) \cdot Y(u, \theta, t, \lambda) &- \partial_t K(u, \theta, t, \lambda) = 0, \\ & (u, \theta, t, \lambda) \in [0, \, \rho) \times \mathbb{T}^d \times \mathbb{R}\times \Lambda, \end{aligned}$$ and $$K (u , \theta, t, \lambda) - \widehat K(u, \theta, t, \lambda) = (O(u^{n+1}), O(u^{n+k}), \, O(u^{n+2p-k})) ,$$ $$Y(u, \theta, t, \lambda) -\widehat Y(u, \theta, t, \lambda) = \left\{ \begin{array}{ll} ( O(u^{2k-1}) , 0 ) & \text{ if }\; n\le k, \\ (0, 0) & \text{ if } \; n>k. \end{array} \right.$$* **Remark 7**. The first component of the map $R$ and the vector field $Y$ (corresponding to the directions normal to the invariant torus) given in Theorems [Theorem 1](#teorema_analitic_tors){reference-type="ref" reference="teorema_analitic_tors"} and [Theorem 5](#teorema_analitic_tors_edo){reference-type="ref" reference="teorema_analitic_tors_edo"} is the normal form of the dynamics of a one-dimensional system in a neighborhood of a parabolic point ([@chen68; @takens73]). In the second component, $R$ and $Y$ define a rigid rotation of frequency $\omega$. Finally, the following result is a particular case of a slightly modified version of Theorem [Theorem 5](#teorema_analitic_tors_edo){reference-type="ref" reference="teorema_analitic_tors_edo"}. It will be used later on in Section [2.3](#sec-appl-tors){reference-type="ref" reference="sec-appl-tors"} applied to the study of the scattering of helium atoms off copper surfaces. The proof, which is completely analogous to the one of Theorem [Theorem 5](#teorema_analitic_tors_edo){reference-type="ref" reference="teorema_analitic_tors_edo"}, is sketched in Appendix [6.2](#App-A1){reference-type="ref" reference="App-A1"}. **Theorem 8**. 
*Let $X$ be an analytic vector field of the form $$X(x, y, \theta) = \begin{pmatrix} c(\theta) y \\ b(\theta) x y + O(y^2) \\ \omega + d(\theta) y + O(\|(x,y)\|^2) \end{pmatrix},$$ with $(x, y) \in \mathbb{R}^2$, $\theta \in \mathbb{T}^d$, $\omega \in \mathbb{R}^d$. Assume that $\bar c > 0$, $\bar b \neq 0$ and $\bar d \neq 0$. Assume also that $\omega$ is Diophantine.* *Then, there exist $\rho > 0$ and a $C^1$ map $K: [0, \, \rho ) \times \mathbb{T}^d \to \mathbb{R}^2 \times \mathbb{T}^d$, analytic in $(0,\rho) \times \mathbb{T}^d$, of the form $$K(u, \theta ) = (u, \, \overline K_{2}^y u^{2},\, \theta + \overline K_{1}^\theta u ) + (O(u^2), O(u^{3}), O(u^{2})),$$ and a polynomial vector field $Y$ of the form $$Y(u, \theta) = Y(u )= ( \overline Y^x_2 u^2 + \overline Y^x_{3} u^{3}, \, \omega ),$$ with $\overline Y_2^x <0$, such that $$\begin{aligned} X(K(u, \theta )) - D K(u, \theta) \cdot Y(u, \theta) = 0, \qquad (u, \theta) \in [0, \, \rho) \times \mathbb{T}^d. \end{aligned}$$ Moreover, we have $$\label{coefs1_th_helicoure} \overline K_{2}^y = \frac{\bar b}{2\bar c}, \quad \overline K_{1}^\theta = \frac{2\bar d}{\bar c}, \quad \overline Y_2^x = \frac{\bar b}{2}.$$* ## Applications {#sec-appl-tors} In this section we present two applications of the previous results. The first one is a simple illustrative example, and the second one is related to a problem from chemistry. ### A quasiperiodically forced oscillator Consider a particle of mass $1$ moving along a straight line under the action of a potential $V(x)$, with $V(x)= c x^{2n}$, $c >0$, $n \in \mathbb{N}$. When $n=1$, the system is a harmonic oscillator. The corresponding equation of motion is $$\ddot x = - V'(x) = -2n c \, x^{2n-1}.$$ Denoting by $y=\dot x$ the velocity of the particle, the equations of motion are $$\begin{aligned} \begin{split} \label{oscillador_gen} \dot x & = y, \\ \dot y & = -2nc \, x^{2n-1}.
\end{split}\end{aligned}$$ System [\[oscillador_gen\]](#oscillador_gen){reference-type="eqref" reference="oscillador_gen"} has the first integral $H(x,y) = \frac{1}{2} y^2 + c x^{2n}$, and hence the phase space is foliated by periodic orbits around the origin, corresponding to the closed curves $\frac{1}{2} y^2 + c x^{2n}=h$, for energy levels $h >0$. We show next that, by perturbing this oscillator with a suitable external force, one can break the center character of the origin of the system and introduce a stable invariant manifold of parabolic type. Assume that the particle moving under the action of the potential $V(x)$ is also subjected to an external analytic force $F$, which may depend on the position $x$, the velocity $\dot x$ and the time $t$. Now the equations of motion become $$\begin{aligned} \begin{split} \label{qp_forcat_1dim} \dot x & = y, \\ \dot y & = -2nc \, x^{2n-1} + F(x, y, t). \end{split}\end{aligned}$$ System [\[qp_forcat_1dim\]](#qp_forcat_1dim){reference-type="eqref" reference="qp_forcat_1dim"} is a particular case of system [\[fnormal_tors_edo1\]](#fnormal_tors_edo1){reference-type="eqref" reference="fnormal_tors_edo1"} without its third component. Moreover, if the analytic function $f(x, y, t) = -2ncx^{2n-1} + F(x,y,t)$ satisfies the hypotheses [\[condicions_fnormal\]](#condicions_fnormal){reference-type="eqref" reference="condicions_fnormal"} and the dependence on $t$ is quasiperiodic with Diophantine frequencies $\nu \in \mathbb{R}^d$, then system [\[qp_forcat_1dim\]](#qp_forcat_1dim){reference-type="eqref" reference="qp_forcat_1dim"} satisfies the hypotheses of Theorem [Theorem 5](#teorema_analitic_tors_edo){reference-type="ref" reference="teorema_analitic_tors_edo"}.
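The conservation of $H$ along the flow of [\[oscillador_gen\]](#oscillador_gen){reference-type="eqref" reference="oscillador_gen"} is easy to check numerically. The following sketch (not part of the original text; the values $c = 1$, $n = 2$, the initial condition and the step size are illustrative choices) integrates the unperturbed oscillator with a classical Runge--Kutta scheme and verifies that $H$ stays constant up to integration error:

```python
import math

def rk4_step(f, state, t, dt):
    """One classical fourth-order Runge-Kutta step for state' = f(state, t)."""
    k1 = f(state, t)
    k2 = f([s + 0.5 * dt * q for s, q in zip(state, k1)], t + 0.5 * dt)
    k3 = f([s + 0.5 * dt * q for s, q in zip(state, k2)], t + 0.5 * dt)
    k4 = f([s + dt * q for s, q in zip(state, k3)], t + dt)
    return [s + dt / 6.0 * (q1 + 2 * q2 + 2 * q3 + q4)
            for s, q1, q2, q3, q4 in zip(state, k1, k2, k3, k4)]

# Unperturbed oscillator x' = y, y' = -2nc x^(2n-1), with illustrative c = 1, n = 2.
c, n = 1.0, 2
f = lambda s, t: [s[1], -2 * n * c * s[0] ** (2 * n - 1)]
H = lambda s: 0.5 * s[1] ** 2 + c * s[0] ** (2 * n)   # the first integral

state, t, dt = [0.5, 0.0], 0.0, 1e-3
H0 = H(state)
for _ in range(5000):          # integrate up to t = 5
    state = rk4_step(f, state, t, dt)
    t += dt
print(abs(H(state) - H0))      # conserved up to the integration error
```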
Since $V$ and $F$ are analytic around $(0,0)$, with $F(x, y, t) = O(\|(x, y)\|^2)$, to study whether the origin of [\[qp_forcat_1dim\]](#qp_forcat_1dim){reference-type="eqref" reference="qp_forcat_1dim"} has a stable invariant manifold one can apply Theorem [\[th_edo_qp\]](#th_edo_qp){reference-type="ref" reference="th_edo_qp"}. One can write $$-V'(x) + F(x, y, t) = p(x, t) + y q(x, t) + u(x, y, t),$$ with $u(x, y, t) = O(y^2)$ and $$p(x, t) = x^k (a_k(t) + \cdots + a_r(t)x^{r-k}), \qquad q(x, t) = x^{l-1} (b_l(t) + \cdots + b_r(t)x^{r-l}),$$ for some $k, l \geq 2$ and $r \geq k+1, \, r \geq l+1$, and then classify system [\[qp_forcat_1dim\]](#qp_forcat_1dim){reference-type="eqref" reference="qp_forcat_1dim"} into case $1$, $2$ or $3$. Then, Theorem [\[th_edo_qp\]](#th_edo_qp){reference-type="ref" reference="th_edo_qp"} provides the existence of a stable curve asymptotic to the origin, $K(u, t) : [0, \rho) \times \mathbb{R}\to \mathbb{R}^2$, analytic in $[0, \rho) \times \mathbb{R}$, provided that $\nu$ is Diophantine and that $\bar a_k>0$ (cases $1$ and $2$) or $\bar b_l <0$ (case $3$). As a concrete example, take $n=2$ and $F(x, y, t) = \alpha x^2 g(t)$, with $\alpha >0$ and $g$ a quasiperiodic function of $t$ with frequency $\nu \in \mathbb{R}^d$, $\nu$ Diophantine, and $\bar g >0$. The system is then $$\begin{aligned} \begin{split} \label{oscillador1} & \dot x = y, \\ & \dot y = - 4cx^3 + \alpha x^2 g(t). \end{split}\end{aligned}$$ By Theorem [Theorem 5](#teorema_analitic_tors_edo){reference-type="ref" reference="teorema_analitic_tors_edo"} there are solutions of system [\[oscillador1\]](#oscillador1){reference-type="eqref" reference="oscillador1"} asymptotic to $(0,0)$, analytic away from $(0,0)$, contained in the stable manifold of the origin.
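The linearization of [\[oscillador1\]](#oscillador1){reference-type="eqref" reference="oscillador1"} at the origin is nilpotent, so no eigenvalue information is available and the parabolic theory above is genuinely needed. This is easy to check with a minimal finite-difference computation; in the sketch below the values $c = \alpha = 1$ and the sample forcing $g$ (frozen at $t = 0$) are illustrative assumptions, not taken from the text:

```python
import math

# Illustrative parameters (assumptions); g is a sample forcing with positive
# mean, frozen at t = 0 for this check.
c, alpha = 1.0, 1.0
g = lambda t: 2.0 + math.cos(t)

def field(x, y, t=0.0):
    """Right-hand side of x' = y, y' = -4c x^3 + alpha x^2 g(t)."""
    return (y, -4.0 * c * x ** 3 + alpha * x ** 2 * g(t))

def jac_origin(h=1e-6):
    """Central finite-difference Jacobian of the frozen field at the origin."""
    fx_p, fx_m = field(h, 0.0), field(-h, 0.0)
    fy_p, fy_m = field(0.0, h), field(0.0, -h)
    return [[(fx_p[0] - fx_m[0]) / (2 * h), (fy_p[0] - fy_m[0]) / (2 * h)],
            [(fx_p[1] - fx_m[1]) / (2 * h), (fy_p[1] - fy_m[1]) / (2 * h)]]

J = jac_origin()
print(J)   # approximately [[0, 1], [0, 0]]: nilpotent, hence a parabolic origin
```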
Moreover, one can apply Proposition [Theorem 15](#prop_tors_edo_sim){reference-type="ref" reference="prop_tors_edo_sim"} (see Section [3](#sec-algoritme-tors){reference-type="ref" reference="sec-algoritme-tors"}) to obtain the coefficients of an approximation of a parameterization of this stable manifold. To look for an unstable manifold of system [\[oscillador1\]](#oscillador1){reference-type="eqref" reference="oscillador1"} we consider the vector field obtained after changing the sign of time, $t \mapsto -t$, in [\[oscillador1\]](#oscillador1){reference-type="eqref" reference="oscillador1"}. The stable manifold of the transformed system, namely $$\begin{aligned} \begin{split} \label{unstable-qp-1} & \dot x = -y, \\ & \dot y = 4cx^3 - \alpha x^2 g(-t), \end{split}\end{aligned}$$ will be the unstable manifold of system [\[oscillador1\]](#oscillador1){reference-type="eqref" reference="oscillador1"}. Performing the change of variables $y \mapsto -y$ in system [\[unstable-qp-1\]](#unstable-qp-1){reference-type="eqref" reference="unstable-qp-1"} we can apply Theorem [Theorem 5](#teorema_analitic_tors_edo){reference-type="ref" reference="teorema_analitic_tors_edo"} again to obtain the existence of an analytic stable manifold. Finally, undoing the changes of variables, we conclude that system [\[oscillador1\]](#oscillador1){reference-type="eqref" reference="oscillador1"} has a stable manifold in the lower right quadrant and an unstable manifold in the upper right quadrant, both of them analytic away from the origin. Hence, for every $\alpha >0$, an external force of the form $\alpha x^2 g(t)$ satisfying the conditions stated before breaks the oscillatory behavior of the system and induces solutions that bring the particle asymptotically to the origin, and others that take it away from it.
### Scattering of He atoms off Cu corrugated surfaces In the paper [@gbm97], the authors study the phase-space structure of a differential equation modelling the scattering of helium atoms off copper corrugated surfaces. Concretely, elastic collisions of $^4$He atoms with corrugated Cu surfaces are considered, in particular those made of Cu(110) and Cu(117). The system, which can be adequately treated at the classical level, can be modeled by the following two-degrees-of-freedom Hamiltonian describing the motion of a $^4$He atom, $$\label{ham-he-cu} H(x, z, p_x, p_z) = \frac{p_x^2 + p_z^2}{2m} + V(x, z),$$ where $x$ is the coordinate parallel to the copper surface and $z$ is the coordinate perpendicular to it, $p_x$ and $p_z$ are the respective momenta, and $m$ is the mass of the atom. The potential energy $V(x, z)$ is given by $$V(x, z) = V_M(z) + V_C(x, z),$$ where $V_M(z) = D(1-e^{-\alpha z})^2$ is the Morse potential and $V_C(x, z) = D e^{-2 \alpha z} g (x)$ is the coupling potential, with $D=6.35$ meV, $\alpha = 1.05$ $\AA^{-1}$, and $g(x)$ is a periodic function. Thus the variable $x$ can be thought of as an angle. For more information on the coefficients of the Morse and coupling potentials, see Table 1 of [@gbm97]. The equations of motion derived from the Hamiltonian function [\[ham-he-cu\]](#ham-he-cu){reference-type="eqref" reference="ham-he-cu"} are $$\label{eq-ham-he-cu} \dot x = \frac{p_x}{m}, \qquad \dot z = \frac{p_z}{m}, \qquad \dot p_x = -D e^{-2 \alpha z} g'(x), \qquad \dot p_z = -2D\alpha e^{-\alpha z} + 2 D \alpha e^{-2\alpha z} (1+g(x)).$$ This system has a periodic orbit at infinity (see below for the precise meaning). We will use the results presented in Section [2.2](#mainresults){reference-type="ref" reference="mainresults"} to show that the parabolic periodic orbit at infinity has stable and unstable invariant manifolds.
This means that for certain initial conditions the helium atom escapes, spiraling asymptotically towards a periodic orbit at infinity, while for other initial conditions, with position far away, the atom arrives asymptotically from that periodic orbit. Since [\[eq-ham-he-cu\]](#eq-ham-he-cu){reference-type="eqref" reference="eq-ham-he-cu"} is a Hamiltonian system, the energy $H$ is conserved, and thus each solution of the system is contained in a level set $H(x, z, p_x, p_z) = h$. Therefore we can reduce system [\[eq-ham-he-cu\]](#eq-ham-he-cu){reference-type="eqref" reference="eq-ham-he-cu"} to a three-dimensional one by restricting it to an energy level $H(x, z, p_x, p_z) = h$ and removing the equation for $\dot p_x$. The obtained system reads $$\begin{aligned} \dot x &= \frac{1}{m} \big{(} 2m(h-D(1- e^{-\alpha z})^2 - D e^{-2 \alpha z} g(x)) - p_z^2 \big{)}^{1/2}, \\ \dot z &= \frac{p_z}{m}, \\ \dot p_z &= -2D\alpha e^{-\alpha z} + 2 D \alpha e^{-2\alpha z} (1+g(x)). \end{aligned}$$ Next, to study the motion for large values of $z$ we perform the McGehee-like change of variables given by $y = -e^{-\alpha z}$. Now the set $y=0$ corresponds to infinite distance from the copper surface. To adapt the notation to that of Section [2.2](#mainresults){reference-type="ref" reference="mainresults"}, we write $\theta= x$ and $p=p_z$. We get $$\begin{aligned} \begin{split} \label{sist-fisic-he-cu} \dot p &= 2 D \alpha y + 2 D \alpha y^2 (1+g(\theta)), \\ \dot y &= - \frac{\alpha}{m} py, \\ \dot \theta &= \frac{1}{m} \big{(} 2m(h-D(1-y)^2 - D y^2 g(\theta)) - p^2 \big{)}^{1/2}, \end{split} \end{aligned}$$ with $y \leq 0$, $p \in \mathbb{R}$ and $\theta \in \mathbb{T}$. The set $\{p = y = 0\}$ is invariant for [\[sist-fisic-he-cu\]](#sist-fisic-he-cu){reference-type="eqref" reference="sist-fisic-he-cu"}, and corresponds to a periodic orbit at infinity of system [\[eq-ham-he-cu\]](#eq-ham-he-cu){reference-type="eqref" reference="eq-ham-he-cu"}.
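One can check numerically that $\{p = y = 0\}$ is invariant for [\[sist-fisic-he-cu\]](#sist-fisic-he-cu){reference-type="eqref" reference="sist-fisic-he-cu"} and that the angle rotates there with frequency $\omega = \sqrt{2(h-D)/m}$. In the sketch below, $D$ and $\alpha$ are the values quoted above, while $m$, $h$ and the corrugation $g$ are illustrative assumptions chosen only to exercise the formulas:

```python
import math

# D and alpha as in the text; m, h and g are illustrative assumptions.
D, alpha = 6.35, 1.05
m, h = 1.0, 2 * 6.35
g = lambda theta: 0.1 * math.cos(2 * math.pi * theta)   # sample corrugation

def X(p, y, theta):
    """Vector field (p', y', theta') of the McGehee-reduced system."""
    dp = 2 * D * alpha * y + 2 * D * alpha * y ** 2 * (1 + g(theta))
    dy = -(alpha / m) * p * y
    dtheta = math.sqrt(2 * m * (h - D * (1 - y) ** 2 - D * y ** 2 * g(theta)) - p ** 2) / m
    return dp, dy, dtheta

dp, dy, dtheta = X(0.0, 0.0, 0.3)
omega = math.sqrt(2 * (h - D) / m)
print(dp, dy, abs(dtheta - omega))   # p and y do not move; theta rotates at omega
```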
For system [\[sist-fisic-he-cu\]](#sist-fisic-he-cu){reference-type="eqref" reference="sist-fisic-he-cu"} we have the following result. **Theorem 9**. *Let $X$ be the vector field associated to system [\[sist-fisic-he-cu\]](#sist-fisic-he-cu){reference-type="eqref" reference="sist-fisic-he-cu"}, and assume that $h > D$. Then, the set $\gamma = \{ p = y = 0 \}$ is a periodic orbit and it has stable and unstable invariant manifolds. Concretely, there exist $\rho > 0$ and two $C^1$ maps, $K^-, K^+: [0,\rho) \times \mathbb{T}\to \mathbb{R}^2 \times \mathbb{T}$, analytic on $(0,\rho) \times \mathbb{T}$, and two analytic vector fields, $Y^-, Y^+: [0,\rho) \times \mathbb{T}\to \mathbb{R}\times \mathbb{T}$ of the form $$\label{param-sta-hecu} K^-(u, \theta) = \begin{pmatrix} u + O(u^2) \\ K_2^y u^2 + O(u^3) \\ K_1^\theta u + O(u^2) \end{pmatrix}, \qquad Y^-(u, \theta) = \begin{pmatrix} -Y_2 u^2 + Y_3 u^3 \\ \omega \end{pmatrix},$$ corresponding to the stable manifold, and $$\label{param-inest-hecu} K^+(u, \theta) = \begin{pmatrix} u + O(u^2) \\ -K_2^y u^2 + O(u^3) \\ K_1^\theta u + O(u^2) \end{pmatrix}, \qquad Y^+(u, \theta) = \begin{pmatrix} Y_2 u^2 + \widehat Y_3 u^3 \\ \omega \end{pmatrix},$$ corresponding to the unstable manifold, with $$\label{coefs-sta-hecu} K_2^y = - \frac{1}{4mD} , \qquad K_1^\theta = -\frac{1}{\alpha \sqrt{2m(h-D)}}, \qquad Y_2 = \frac{\alpha}{2m} , \qquad \omega = \sqrt{\frac{2(h-D)}{m}},$$ such that $$X \circ K^- (u,\theta) = DK^- \cdot Y^- (u, \theta) \ \ \text{ and } \ \ X \circ K^+ (u,\theta) = DK^+ \cdot Y^+ (u, \theta), \qquad (u, \theta) \in [0, \rho) \times \mathbb{T}.$$* *Proof.* We do the following analytic change of variables to system [\[sist-fisic-he-cu\]](#sist-fisic-he-cu){reference-type="eqref" reference="sist-fisic-he-cu"}, $$\label{canvi-var-hecu} \tilde p = p, \qquad \tilde y = y + (1+g(\theta))y^2, \qquad \tilde \theta = \theta,$$ and we expand the right hand side of the third equation in Taylor series around $(p, y)=(0,0)$, 
so that the new system reads, writing the new variables without tilde, $$\label{sist-he-cu-adaptat} \dot p = 2D\alpha y, \qquad \dot y = -\frac{\alpha}{m} p y + O(y^2), \qquad \dot \theta = \omega - d_1 y + O(\|(p, y)\|^2),$$ with $d_1 = \frac{D}{\sqrt{2m (h-D)}}$. It is clear that system [\[sist-he-cu-adaptat\]](#sist-he-cu-adaptat){reference-type="eqref" reference="sist-he-cu-adaptat"} has a periodic orbit, $\gamma$, at $p=y=0$ parameterized by $\gamma (t) = (0,0, \omega t)$. Moreover, this system satisfies the hypotheses of Theorem [Theorem 8](#teorema_analitic_helicoure){reference-type="ref" reference="teorema_analitic_helicoure"} with $d' =0$, $c(\theta) = 2 D \alpha$, $b(\theta) = - \tfrac{\alpha}{m}$, and $d(\theta) = -d_1$. Moreover, we have $$DX(0,0,\theta)= \begin{pmatrix} 0 & 2D\alpha & 0 \\ 0 & 0 & 0 \\ 0 & -d_1 & 0 \end{pmatrix},$$ and thus the periodic orbit $\gamma$ is parabolic in the normal directions, with a nilpotent term. Then, the stated results are a direct consequence of Theorem [Theorem 8](#teorema_analitic_helicoure){reference-type="ref" reference="teorema_analitic_helicoure"}, which provides the existence of an analytic stable invariant manifold of system [\[sist-he-cu-adaptat\]](#sist-he-cu-adaptat){reference-type="eqref" reference="sist-he-cu-adaptat"} and the expressions given in [\[param-sta-hecu\]](#param-sta-hecu){reference-type="eqref" reference="param-sta-hecu"} and [\[coefs-sta-hecu\]](#coefs-sta-hecu){reference-type="eqref" reference="coefs-sta-hecu"}. Undoing the change of variables [\[canvi-var-hecu\]](#canvi-var-hecu){reference-type="eqref" reference="canvi-var-hecu"} we obtain the parameterizations, $K^-$ and $Y^-$, of the stable manifold of $\gamma$ for the original system and the restricted dynamics on it, whose lower order coefficients are the ones in [\[coefs-sta-hecu\]](#coefs-sta-hecu){reference-type="eqref" reference="coefs-sta-hecu"}.
The existence of the unstable manifold is obtained by applying Theorem [Theorem 8](#teorema_analitic_helicoure){reference-type="ref" reference="teorema_analitic_helicoure"} to the system resulting from the change $t \mapsto -t$ in [\[sist-he-cu-adaptat\]](#sist-he-cu-adaptat){reference-type="eqref" reference="sist-he-cu-adaptat"}. Performing also the change $p \mapsto -p$, we obtain $$\label{sist-he-cu-3} \dot p = 2D\alpha y, \quad \dot y = -\frac{\alpha}{m} p y + O(y^2), \quad \dot \theta = -\omega + d_1 y + O(\|(p, y)\|^2).$$ Then we can apply Theorem [Theorem 8](#teorema_analitic_helicoure){reference-type="ref" reference="teorema_analitic_helicoure"} again to system [\[sist-he-cu-3\]](#sist-he-cu-3){reference-type="eqref" reference="sist-he-cu-3"}, which provides an analytic stable invariant manifold, $\widetilde K^+$, asymptotic to $\gamma = \{p=y=0\}$, and an expression for the restricted dynamics, $\widetilde Y^+$, parameterized by $$\widetilde K^+(u, \theta) = \begin{pmatrix} u + O(u^2) \\ K_2^y u^2 + O(u^3) \\ -K_1^\theta u + O(u^2) \end{pmatrix}, \qquad \widetilde Y^+(u, \theta) = \begin{pmatrix} -Y_2 u^2 + \widetilde Y_3 u^3 \\ -\omega \end{pmatrix}.$$ Finally, going back to the original variables we obtain the parameterizations of the unstable manifold of $\gamma$ and the restricted dynamics on it, namely $K^+$ and $Y^+$, given in [\[param-inest-hecu\]](#param-inest-hecu){reference-type="eqref" reference="param-inest-hecu"}. ◻ # Formal approximation of parameterizations of the manifolds {#sec-algoritme-tors} In this section we provide an algorithm to compute an approximation of a parameterization of the invariant manifolds of a map of the form [\[forma_normal_tor\]](#forma_normal_tor){reference-type="eqref" reference="forma_normal_tor"} and of a vector field of the form [\[fnormal_tors_edo1\]](#fnormal_tors_edo1){reference-type="eqref" reference="fnormal_tors_edo1"}.
From now on, the superscripts $x$, $y$ and $\theta$ on the symbol of a function or an operator with values in $\mathbb{R}^2 \times \mathbb{T}^d$ denote the respective components of the function or the operator. In the next sections we also use the superscripts $u, \, \theta$ and $t$, respectively, for functions or operators that take values in $\mathbb{C}\times \mathbb{T}^d_\sigma \times \mathbb{T}^{d'}_\sigma$. ## The case of maps {#sec-algoritme-tors-maps} First, we recall a basic result concerning Diophantine vectors and the small divisors equation. Let $$\mathbb{T}^d_\sigma = \{ \theta = (\theta_1 , \cdots , \theta_d) \in (\mathbb{C}/\mathbb{Z})^d \, | \ |\text{Im} \, \theta| < \sigma \}$$ denote a complex torus of dimension $d$. We also denote by $\Lambda_\mathbb{C}$ a complex neighborhood of the parameter space, $\Lambda$. We say that a vector $\omega \in \mathbb{R}^d$ is *Diophantine (in the map setting)* if there exist $c >0$ and $\tau \geq d$ such that $$|\omega \cdot k - l| \geq c|k|^{-\tau} \qquad \text{for all} \quad k \in \mathbb{Z}^d \backslash \{0\}, \, l \in \mathbb{Z},$$ where $|k| = |k_1| + \dots + |k_d|$ and $\omega \cdot k$ denotes the scalar product. Throughout this section, when solving cohomological equations, we will encounter the so-called *small divisors equation.* In the map setting this equation has the following form, $$\label{SDequation} \varphi (\theta + \omega, \lambda) - \varphi(\theta, \lambda) = h (\theta, \lambda),$$ with $h : \mathbb{T}^d \times \Lambda \to \mathbb{R}^n$ and $\omega \in \mathbb{R}^d$.
To find a solution $\varphi(\theta, \lambda)$ of [\[SDequation\]](#SDequation){reference-type="eqref" reference="SDequation"} we consider the Fourier expansion of $h$ with respect to $\theta$: $$h(\theta, \lambda) = \sum_{k \in \mathbb{Z}^d} h_k(\lambda) e^{2 \pi i k \cdot \theta},$$ with $$h_k(\lambda) = \int_0^1 h(\theta, \lambda) e^{-2 \pi i k \cdot \theta} \, d\theta , \qquad k \cdot \theta = k_1 \theta_1 + \cdots + k_d \theta_d.$$ If $h$ has zero average and $k \cdot \omega \notin \mathbb{Z}$ for all $k \neq 0$, then equation [\[SDequation\]](#SDequation){reference-type="eqref" reference="SDequation"} has the formal solution $$\varphi (\theta, \lambda ) = \sum_{k \in \mathbb{Z}^d} \varphi_k(\lambda) e^{2 \pi i k \cdot \theta}, \qquad \varphi_k(\lambda) = \frac{h_k(\lambda)}{e^{2\pi i k \cdot \omega} - 1}, \qquad k \neq 0.$$ Note that all coefficients $\varphi_k$ are uniquely determined except $\varphi_0$ (the average of $\varphi$), which is free. The following well-known result establishes the existence of a solution to equation [\[SDequation\]](#SDequation){reference-type="eqref" reference="SDequation"} when $h$ is analytic. **Lemma 10** (Small divisors lemma for maps). *Let $h: \mathbb{T}^d_\sigma \times \Lambda_\mathbb{C}\to \mathbb{C}^n$ be an analytic function with zero average such that $\sup_{(\theta, \lambda) \in \mathbb{T}^d_{\sigma}\times \Lambda_\mathbb{C}} \|h(\theta, \lambda)\| \, < \infty$. Let $\omega \in \mathbb{R}^d$ be Diophantine with $\tau \geq d$. Then, there exists a unique analytic solution $\varphi : \mathbb{T}^d_\sigma \times \Lambda_\mathbb{C}\to \mathbb{C}^n$ of [\[SDequation\]](#SDequation){reference-type="eqref" reference="SDequation"} with zero average.
Moreover, $$\sup_{(\theta, \lambda ) \in \mathbb{T}^d_{\sigma-\delta} \times \Lambda_\mathbb{C}} \|\varphi(\theta, \lambda)\| \, \leq \, C \delta^{-\tau} \sup_{(\theta, \lambda) \in \mathbb{T}^d_\sigma \times \Lambda_\mathbb{C}} \, \|h(\theta, \lambda)\|, \qquad 0 < \delta < \sigma,$$ where $C$ depends on $\tau$ and $d$ but not on $\delta$.* The proof with close to optimal estimates is due to Rüssmann [@Rus75]. We will denote by $\mathcal{SD}(h)$ the unique solution of [\[SDequation\]](#SDequation){reference-type="eqref" reference="SDequation"} with zero average. In the next result we obtain two pairs of maps, $\mathcal{K}_n$ and $\mathcal{R}_n$, that are approximations of solutions $K$ and $R$ of the invariance equation $$F \circ K = K \circ R.$$ The obtained approximations correspond to the stable manifold when the coefficient $\overline R_k^x(\lambda)$ of $\mathcal{R}_n$ is negative, and to the unstable manifold when this coefficient is positive. Moreover, the obtained parameterizations $\mathcal{K}_n$ and $\mathcal{R}_n$ will satisfy the hypotheses of Theorem [Theorem 3](#teorema_posteriori_tors){reference-type="ref" reference="teorema_posteriori_tors"}, and therefore $\mathcal{K}_n$ will be an approximation of a true invariant manifold of $F$. Moreover, the first component of $\mathcal{R}_n$ coincides with the expression of the normal form of a one-dimensional map around a parabolic point ([@chen68; @takens73]). **Theorem 11** (A computable approximation for maps). *Let $F$ be an analytic map of the form [\[forma_normal_tor\]](#forma_normal_tor){reference-type="eqref" reference="forma_normal_tor"} satisfying the hypotheses [\[condicions_fnormal\]](#condicions_fnormal){reference-type="eqref" reference="condicions_fnormal"}. Assume that $2p > k-1$, $\bar{c}(\lambda), \bar{a}_k(\lambda) > 0$ for $\lambda \in \Lambda$, and that $\omega$ is Diophantine.
Then, for all $n \geq 2$, there exist two pairs of maps, $\mathcal{K}_n : \mathbb{R}\times \mathbb{T}^d \times \Lambda \to \mathbb{R}^2 \times \mathbb{T}^d$ and $\mathcal{R}_n: \mathbb{R}\times \mathbb{T}^d \times \Lambda \to \mathbb{R}\times \mathbb{T}^d$, of the form $$\mathcal{K}_{n} (u, \theta, \lambda)= \begin{pmatrix} u^2 + \sum_{i=3}^n \overline K_i^x(\lambda) u^i + \sum_{i=k+1}^{n+k-1} \widetilde{ K}_{i}^x(\theta, \lambda)u^{i} \\ \sum_{i=k+1}^{n+k-1} \overline K_{i}^y(\lambda) u^{i} + \sum_{i=2k}^{n+2k-2} \widetilde{ K}_{i}^y(\theta, \lambda)u^{i} \\ \theta + \sum_{i=2p-k+1}^{n+2p-k-1} \overline K_{i}^\theta(\lambda) u^{i} + \sum_{i=2p}^{n+2p-2} \widetilde{ K}_{i}^\theta(\theta, \lambda)u^{i} \end{pmatrix}$$ and $$\mathcal{R}_n(u, \theta, \lambda) = \begin{cases} \begin{pmatrix} u+ \overline R_k^x(\lambda) u^k \\ \theta + \omega \end{pmatrix} &\qquad \text{ if } \;\; 2 \leq n \leq k, \\ \begin{pmatrix} u+ \overline R_k^x(\lambda) u^k + \overline R_{2k-1}^x(\lambda) u^{2k-1} \\ \theta + \omega \end{pmatrix} & \qquad \text{ if } \;\; n\ge k+1, \end{cases}$$ such that $$\label{eq_prop1_tor_sim} \mathcal{G}_n (u, \theta, \lambda) := F(\mathcal{K}_{n}(u, \theta, \lambda), \lambda)- \mathcal{K}_{n}(\mathcal{R}_n(u, \theta, \lambda), \lambda) = (O(u^{n+k}),\, O(u^{n+2k-1}), \, O(u^{n+2p-1})).$$ Moreover, for the lowest order coefficients we have* *$$\begin{aligned} \label{coefcasmapes} & \overline K_{k+1}^y(\lambda) = \pm \sqrt{\frac{2 \, \bar a_k(\lambda)}{\bar c(\lambda)\, (k+1)}}, \ \ \overline K_{2p-k+1}^\theta (\lambda) = \pm \frac{\bar d_p(\lambda)}{2p-k+1} \sqrt{\frac{2(k+1)}{\bar c (\lambda) \, \bar a_k(\lambda)}}, \ \ \overline R_k^x (\lambda) = \pm \sqrt{\frac{\bar c(\lambda) \, \bar a_k(\lambda)}{2(k+1)}},\\ \nonumber & \widetilde{ K}_{k+1}^x (\theta, \lambda ) = \mathcal{SD}(\tilde{c}(\theta, \lambda)) \overline{K}_{k+1}^y(\lambda), \quad \widetilde{ K}_{2k}^y (\theta, \lambda ) =\mathcal{SD}(\tilde{a}_k(\theta, \lambda)), \quad \widetilde{
K}_{2p}^\theta (\theta , \lambda) =\mathcal{SD}(\tilde{d}_p(\theta, \lambda)). \end{aligned}$$* **Remark 12**. Although $\mathcal{K}_{n}$ and $\mathcal R_n$ are polynomials in $u$ and therefore are defined for all $u\in \mathbb{R}$, we only consider them for $u\ge 0$, so that choosing the sign $-$ in [\[coefcasmapes\]](#coefcasmapes){reference-type="eqref" reference="coefcasmapes"} we get an approximation of the stable manifold and choosing the sign $+$ we get the unstable one. **Notation 13**. Throughout the proof, given a map $f(u, \theta)$, we will denote by $[f]_n$ the coefficient of the term of order $n$ of the jet of $f$ with respect to $u$ at $0$. *Proof.* We prove the result by induction on $n$ showing that we can determine $\mathcal{K}_n$ and $\mathcal{R}_n$ iteratively. For the first induction step, $n=2$, we claim that there exist maps of the form $$\mathcal{K}_{2}(u, \theta) = \begin{pmatrix} u^2 + \widetilde{ K}_{k+1}^x(\theta) u^{k+1} \\ \overline{K}_{k+1}^y u^{k+1} + \widetilde{ K}_{2k}^y(\theta) u^{2k} \\ \theta + \overline K_{2p-k+1}^\theta u^{2p-k+1} + \widetilde{ K}^\theta_{2p} (\theta) u^{2p} \end{pmatrix}, \qquad \mathcal{R}_2(u, \theta) = \begin{pmatrix} u + \overline R_k^x u^k \\ \theta + \omega \end{pmatrix},$$ such that $\mathcal{G}_2 (u, \theta) = F(\mathcal{K}_{2}(u, \theta))- \mathcal{K}_{2}(\mathcal{R}_2(u, \theta)) = (O(u^{k+2}), O(u^{2k+1}), O(u^{2p+1}))$.
Indeed, from the expansion of $\mathcal{G}_2$, since $2p > k-1$ we have $$\begin{aligned} & \mathcal{G}_2^x (u, \theta) = u^{k+1}[\widetilde{ K}_{k+1}^x(\theta) - \widetilde{ K}_{k+1}^x(\theta + \omega) + c(\theta)\overline{K}_{k+1}^y - 2 \overline{R}_k^x] + O(u^{k+2}), \\ & \mathcal{G}_2^y (u, \theta) = u^{2k}[\widetilde{ K}_{2k}^y (\theta) - \widetilde{ K}_{2k}^y (\theta + \omega) + a_k(\theta) - (k+1) \overline{K}_{k+1}^y \overline{R}_k^x] + O(u^{2k+1}), \\ & \mathcal{G}_2^\theta (u, \theta) = u^{2p}[\widetilde{ K}_{2p}^\theta (\theta) -\widetilde{ K}_{2p}^\theta (\theta + \omega) + d_p(\theta) - (2p-k+1) \overline K_{2p-k+1}^\theta \overline R_k^x ] + O(u^{2p+1}). \end{aligned}$$ To obtain $\mathcal{G}_2^x (u, \theta) = O(u^{k+2})$ we have to solve the equation $$\widetilde{ K}_{k+1}^x(\theta) - \widetilde{ K}_{k+1}^x(\theta + \omega) + c(\theta)\overline{K}_{k+1}^y - 2 \overline{R}_k^x = 0.$$ We proceed as follows. We separate the average and the oscillatory part of the functions that depend on $\theta$ and we split the equation into two parts, one containing the terms that are independent of $\theta$, namely $\bar c \overline{K}_{k+1}^y = 2 \overline{R}_k^x$, and the other being a small divisors equation of functions with zero average, $\widetilde{ K}_{k+1}^x(\theta + \omega) - \widetilde{ K}_{k+1}^x(\theta) = \tilde{ c}(\theta)\overline{K}_{k+1}^y$. We proceed in the same way to get $\mathcal{G}_2^y (u, \theta) = O(u^{2k+1})$ and $\mathcal{G}_2^\theta (u, \theta) = O(u^{2p+1})$. Since $\bar{c}, \bar{a}_k >0$ and $\omega$ is Diophantine, the obtained equations have the solutions $\overline K_{k+1}^y, \, \overline K_{2p-k+1}^\theta, \, \overline R_k^x$, $\widetilde{ K}_{k+1}^x (\theta ), \, \widetilde{K}_{2k}^y (\theta )$ and $\widetilde{ K}_{2p}^\theta (\theta )$ given in the statement. Next we perform the induction procedure. 
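The averaged equations of this first step, namely $\bar c \, \overline K_{k+1}^y = 2 \overline R_k^x$, $\bar a_k = (k+1) \overline K_{k+1}^y \overline R_k^x$ and $\bar d_p = (2p-k+1) \overline K_{2p-k+1}^\theta \overline R_k^x$, are solved by the closed-form coefficients given in the statement. A quick numerical confirmation, for the stable branch (sign $-$), with illustrative values of $\bar c$, $\bar a_k$, $\bar d_p$, $k$ and $p$:

```python
import math

# Illustrative values (assumptions) with c_bar, a_bar > 0 and 2p > k - 1.
c_bar, a_bar, d_bar, k, p = 1.7, 0.9, -0.4, 3, 2

# Closed-form lowest-order coefficients from the statement (stable branch, sign -).
K_y = -math.sqrt(2 * a_bar / (c_bar * (k + 1)))
R_x = -math.sqrt(c_bar * a_bar / (2 * (k + 1)))
K_theta = -(d_bar / (2 * p - k + 1)) * math.sqrt(2 * (k + 1) / (c_bar * a_bar))

# Residuals of the three averaged cohomological equations; all should vanish.
r1 = c_bar * K_y - 2 * R_x
r2 = a_bar - (k + 1) * K_y * R_x
r3 = d_bar - (2 * p - k + 1) * K_theta * R_x
print(abs(r1), abs(r2), abs(r3))   # all of the order of machine precision
```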
We assume that we have already obtained maps $\mathcal{K}_n$ and $\mathcal{R}_n$, $n \geq 2$, such that [\[eq_prop1_tor_sim\]](#eq_prop1_tor_sim){reference-type="eqref" reference="eq_prop1_tor_sim"} holds true, and we look for $$\begin{aligned} \mathcal{K}_{n+1} (u, \theta) &= \mathcal{K}_n(u, \theta) + \begin{pmatrix} \overline K_{n+1}^x u^{n+1} + \widetilde{ K}_{n+k}^x(\theta) u^{n+k} \\ \overline K_{n+k}^y \, u^{n+k} + \widetilde{ K}_{n+2k-1}^y(\theta) u^{n+2k-1} \\ \overline K_{n+2p-k}^\theta u^{n+2p-k} + \widetilde{ K}_{n+2p-1}^\theta(\theta) u^{n+2p-1} \end{pmatrix}, \\ \mathcal{R}_{n+1}(u, \theta) &= \mathcal{R}_n(u, \theta) + \begin{pmatrix} \overline R^x_{n+k-1} \, u^{n+k-1} \\ 0 \end{pmatrix}, \end{aligned}$$ such that $\mathcal{G}_{n+1}(u, \theta) = (O(u^{n+k+1}), \, O(u^{n+2k}), \, O(u^{n+2p}))$. To simplify the notation, we denote $\mathcal{K}_{n+1}^+ = \mathcal{K}_{n+1} - \mathcal{K}_n$ and $\mathcal{R}_{n+1}^+ = \mathcal{R}_{n+1} - \mathcal{R}_n$. Using Taylor's theorem, we write $$\begin{aligned} \mathcal{G}_{n+1}(u, \theta) &= F(\mathcal{K}_n(u, \theta) + \mathcal{K}_{n+1}^+ (u, \theta) ) - (\mathcal{K}_n(u, \theta ) + \mathcal{K}_{n+1}^+ (u, \theta))\circ (\mathcal{R}_n(u, \theta) + \mathcal{R}_{n+1}^+(u, \theta)) \\ & = \mathcal{G}_n(u, \theta) + DF(K_n(u, \theta )) \cdot \mathcal{K}_{n+1}^+ (u, \theta) - \mathcal{K}_{n+1}^+ (u, \theta) \circ (\mathcal{R}_n(u, \theta) + \mathcal{R}_{n+1}^+ (u, \theta) ) \\ & \quad+ \int_0^1 \hspace{-1pt} (1-s) D^2F(\mathcal{K}_n(u, \theta) + s \, \mathcal{K}_{n+1}^+ (u, \theta) ) \, ds \, \mathcal{K}_{n+1}^+ (u, \theta)^{\otimes 2} \\ & \quad - D\mathcal{K}_n \circ \mathcal{R}_n(u, \theta ) \cdot \mathcal{R}_{n+1}^+(u, \theta) \\ & \quad - \int_0^1 \hspace{-1pt} (1-s) D^2\mathcal{K}_n (\mathcal{R}_n(u, \theta) + s \, \mathcal{R}_{n+1}^+(u, \theta) ) \, ds \, \mathcal{R}_{n+1}^+(u, \theta)^{\otimes 2}. 
\end{aligned}$$ Expanding the components of the previous expression we have $$\begin{aligned} \begin{split} \label{eqs_pasn_alg_sim} & \mathcal{G}_{n+1}^x (u, \theta) = \mathcal{G}_n^x (u, \theta) \\ & \quad + u^{n+k}[\widetilde{ K}_{n+k}^x(\theta) - \widetilde{ K}_{n+k}^x(\theta + \omega) + c(\theta)\overline{K}_{n+k}^y - (n+1) \overline K_{n+1}^x \overline{R}_k^x - 2 \overline R_{n+k-1}^x] + O(u^{n+k+1}), \\ & \mathcal{G}_{n+1}^y (u, \theta) = \mathcal{G}_n^y (u, \theta) \\ &\quad + u^{n+2k-1}[\widetilde{ K}_{n+2k-1}^y (\theta) - \widetilde{ K}_{n+2k-1}^y (\theta + \omega) + k \, a_k(\theta) \overline K_{n+1}^x - (n+k) \overline{K}_{n+k}^y \overline{R}_k^x \\ & \quad - (k+1) \overline K_{k+1}^y \overline R_{n+k-1}^x ] + O(u^{n+2k}), \\ & \mathcal{G}_{n+1}^\theta (u, \theta) = \mathcal{G}_n^\theta (u, \theta) \\ & \quad + u^{n+2p-1}[\widetilde{ K}_{n+2p-1}^\theta (\theta) -\widetilde{ K}_{n+2p-1}^\theta (\theta + \omega) + p \, d_p(\theta) \overline K_{n+1}^x \\ & \quad - (n+2p-k) \overline K_{n+2p-k}^\theta \overline R_k^x - (2p-k+1) \overline K_{2p-k+1}^\theta \overline R_{n+k-1}^x] + O(u^{n+2p}). \end{split} \end{aligned}$$ Since, by the induction hypothesis, $\mathcal{G}_{n}(u, \theta) = (O(u^{n+k}), \, O(u^{n+2k-1}), \, O(u^{n+2p-1}))$, to complete the induction step we need to make $[\mathcal{G}_{n+1}^x]_{n+k}$, $[\mathcal{G}_{n+1}^y]_{n+2k-1}$ and $[\mathcal{G}_{n+1}^\theta]_{n+2p-1}$ vanish.
From [\[eqs_pasn_alg_sim\]](#eqs_pasn_alg_sim){reference-type="eqref" reference="eqs_pasn_alg_sim"}, such conditions lead to the following cohomological equations, $$\begin{aligned} \begin{split} \label{6eqs_cohom_n_sim} & \widetilde{ K}_{n+k}^x(\theta) - \widetilde{ K}_{n+k}^x(\theta + \omega) + c(\theta)\overline{K}_{n+k}^y - (n+1) \overline K_{n+1}^x \overline{R}_k^x - 2 \overline R_{n+k-1}^x + [\mathcal{G}_{n}^x(\theta)]_{n+k} =0, \\ & \widetilde{ K}_{n+2k-1}^y (\theta) - \widetilde{ K}_{n+2k-1}^y (\theta + \omega) + k \, a_k(\theta) \overline K_{n+1}^x - (n+k) \overline{K}_{n+k}^y \overline{R}_k^x \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \ \ \, \qquad - (k+1) \overline K_{k+1}^y \overline R_{n+k-1}^x + [\mathcal{G}_n^y(\theta)]_{n+2k-1} = 0, \\ & \widetilde{ K}_{n+2p-1}^\theta (\theta) -\widetilde{ K}_{n+2p-1}^\theta (\theta + \omega) + p \, d_p(\theta) \overline K_{n+1}^x \\ & \qquad \qquad \qquad - (n+2p-k) \overline K_{n+2p-k}^\theta \overline R_k^x - (2p-k+1) \overline K_{2p-k+1}^\theta \overline R_{n+k-1}^x + [\mathcal{G}_n^\theta (\theta)]_{n+2p-1} = 0.
\end{split} \end{aligned}$$ Taking averages with respect to $\theta$ in the previous equations and separating the terms that depend on $\theta$ from the constant ones, we split [\[6eqs_cohom_n\_sim\]](#6eqs_cohom_n_sim){reference-type="eqref" reference="6eqs_cohom_n_sim"} into three small divisors equations of functions with zero average, namely, $$\begin{aligned} \label{SDequations_induccio_sim} \begin{split} & \widetilde{ K}_{n+k}^x(\theta + \omega) - \widetilde{ K}_{n+k}^x(\theta) = \tilde c(\theta) \overline K_{n+k}^y + [\widetilde\mathcal{G}_{n}^x(\theta)]_{n+k}, \\ & \widetilde{ K}_{n+2k-1}^y (\theta + \omega) - \widetilde{ K}_{n+2k-1}^y (\theta) = k \, \tilde a_k(\theta) \overline K_{n+1}^x + [\widetilde\mathcal{G}_n^y(\theta)]_{n+2k-1}, \\ & \widetilde{ K}_{n+2p-1}^\theta (\theta + \omega) - \widetilde{ K}_{n+2p-1}^\theta (\theta) = p \, \tilde d_p(\theta) \overline K_{n+1}^x + [\widetilde\mathcal{G}_n^\theta (\theta)]_{n+2p-1}, \end{split} \end{aligned}$$ and the following linear system of equations with constant coefficients, $$\begin{aligned} \begin{split} \label{sist_lineal_tors_sim} & \begin{pmatrix} -(n+1) \overline R_k^x & \bar c & 0 \\ k \, \bar a_k & -(n+k) \, \overline R_k^x & 0 \\ p\, \bar d_p & 0 & -(n+2p-k) \overline R_k^x \end{pmatrix} \hspace{-2pt} \begin{pmatrix} \overline K_{n+1}^x \\ \overline K_{n+k}^y \\ \overline K_{n+2p-k}^\theta \end{pmatrix} \\ & \qquad = \begin{pmatrix} - [\overline\mathcal{G}_n^x]_{n+k} + 2 \overline R_{n+k-1}^x \\ - [\overline\mathcal{G}_n^y]_{n+2k-1} + (k+1) \overline K_{k+1}^y \overline R_{n+k-1}^x \\ - [\overline\mathcal{G}_n^\theta ]_{n+2p-1} + (2p-k+1) \overline K_{2p-k+1}^\theta \overline R_{n+k-1}^x \end{pmatrix}.
\end{split} \end{aligned}$$ Note that the determinant of the matrix in the left hand side of [\[sist_lineal_tors_sim\]](#sist_lineal_tors_sim){reference-type="eqref" reference="sist_lineal_tors_sim"} is $$(n+2p-k) \overline R_k^x \,[ k \, \bar c \, \bar a_k - (n+1)(n+k) (\overline R_k^x)^2] = (n+2p-k) \overline R_k^x \, \, \bar c \, \bar a_k \frac{(k-n)(n+2k+1)}{2(k+1)},$$ which vanishes precisely when $k \, \bar c \, \bar a_k - (n+1)(n+k) (\overline R_k^x)^2 = 0$, that is, when $n = k$. Since by hypothesis $n+2p-k \geq 1$, if $n \neq k$ the matrix is invertible, so we can take $\overline R_{n+k-1}^x = 0$ and then obtain $\overline K_{n+1}^x$, $\overline K_{n+k}^y$ and $\overline K_{n+2p-k}^\theta$ in a unique way. When $n=k$ the determinant of the matrix is zero. Then, choosing $$\overline R_{2k-1}^x = \frac{2k \, \overline R_k^x \, [\overline\mathcal{G}_n^x]_{2k} + \bar c \, [\overline\mathcal{G}_n^y]_{3k-1}}{2 \, (3k+1) \, \overline R_k^x},$$ system [\[sist_lineal_tors_sim\]](#sist_lineal_tors_sim){reference-type="eqref" reference="sist_lineal_tors_sim"} has solutions. In this case, however, $\overline K_{k+1}^x$, $\overline K_{2k}^y$ and $\overline K_{2p}^\theta$ are not uniquely determined.
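The factorisation above can be checked symbolically. The sketch below is a verification only; it assumes the lowest-order relation $(\overline R_k^x)^2 = \bar c \, \bar a_k / (2(k+1))$, the map analogue of the explicit expression for $\overline Y_k^x$ given in the vector field case below.

```python
import sympy as sp

# Symbolic check of the determinant factorisation (verification sketch only),
# assuming (R_k^x)^2 = c*a_k / (2(k+1)) at lowest order.
n, k, p, c, a = sp.symbols('n k p c a', positive=True)
R2 = c * a / (2 * (k + 1))   # assumed value of (R_k^x)^2
bracket = k * c * a - (n + 1) * (n + k) * R2
target = c * a * (k - n) * (n + 2 * k + 1) / (2 * (k + 1))
diff = sp.simplify(bracket - target)   # should reduce to 0
```

The full determinant is this bracket multiplied by $(n+2p-k)\,\overline R_k^x$, so under the stated assumption it vanishes exactly through the factor $k-n$.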
Once we have chosen solutions $\overline K_{n+1}^x$, $\overline K_{n+k}^y$ and $\overline K_{n+2p-k}^\theta$ of system [\[sist_lineal_tors_sim\]](#sist_lineal_tors_sim){reference-type="eqref" reference="sist_lineal_tors_sim"}, we solve the small divisors equations in [\[SDequations_induccio_sim\]](#SDequations_induccio_sim){reference-type="eqref" reference="SDequations_induccio_sim"} taking $$\begin{aligned} & \widetilde{ K}_{n+k}^x(\theta) = \mathcal{SD} ( \tilde c(\theta) \overline K_{n+k}^y + [\widetilde\mathcal{G}_{n}^x(\theta)]_{n+k}), \\ & \widetilde{ K}_{n+2k-1}^y (\theta) = \mathcal{SD} ( k \, \tilde a_k(\theta) \overline K_{n+1}^x + [\widetilde\mathcal{G}_n^y(\theta)]_{n+2k-1}), \\ & \widetilde{ K}_{n+2p-1}^\theta (\theta) = \mathcal{SD} ( p \, \tilde d_p(\theta) \overline K_{n+1}^x + [\widetilde\mathcal{G}_n^\theta (\theta)]_{n+2p-1}). \end{aligned}$$ In this way all equations in [\[6eqs_cohom_n\_sim\]](#6eqs_cohom_n_sim){reference-type="eqref" reference="6eqs_cohom_n_sim"} are solved and one can proceed to the next induction step. ◻ ## The case of vector fields {#sec_algoritme_tors_edo} In this case we have analogous results. We denote by $\mathbb{H}_\sigma = \{ z \in \mathbb{C}\, | \ |\text{Im}(z)| < \sigma\}$ the complex strip of thickness $2\sigma >0$. We say that $\omega \in \mathbb{R}^d$ is *Diophantine (in the vector field setting)* if there exist $c > 0$ and $\tau \geq d-1$ such that $$|\omega \cdot k| \geq c |k|^{-\tau} \qquad \text{for all} \qquad k \in \mathbb{Z}^d \backslash \{ 0 \}.$$ The small divisors equation in the vector field setting is $$\label{SDequation_edo} \partial_\theta \varphi(\theta, \lambda) \cdot \omega = h(\theta, \lambda),$$ with $h : \mathbb{T}^d \times \Lambda \to \mathbb{R}^n$ and $\omega \in \mathbb{R}^d$.
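As a finite numerical illustration of the Diophantine condition just introduced (with purely hypothetical choices $d = 2$, $\omega = (1, (\sqrt{5}-1)/2)$ and $\tau = 1$, not taken from the text), one can estimate the best constant $c$ over a truncated lattice:

```python
import numpy as np

# Estimate min over 0 < |k| <= N of |omega·k| |k|^tau; all choices illustrative.
omega = np.array([1.0, (np.sqrt(5) - 1) / 2])   # (1, golden mean)
tau, N = 1.0, 500

k1, k2 = np.meshgrid(np.arange(-N, N + 1), np.arange(-N, N + 1))
mask = (k1 != 0) | (k2 != 0)                    # exclude k = 0
vals = np.abs(k1 * omega[0] + k2 * omega[1]) * np.hypot(k1, k2)**tau
c_est = vals[mask].min()
```

For this frequency the estimate stays bounded away from zero (the minimizers are near-Fibonacci pairs), whereas a rational second component would drive it to zero as $N$ grows.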
Similarly to the case of maps, if $h$ has zero average and $k \cdot \omega \neq 0$ for all $k \neq 0$, then equation [\[SDequation_edo\]](#SDequation_edo){reference-type="eqref" reference="SDequation_edo"} has the formal solution $$\varphi (\theta, \lambda ) = \sum_{k \in \mathbb{Z}^d} \varphi_k (\lambda) e^{2 \pi i k \cdot \theta}, \qquad \varphi_k (\lambda) = \frac{h_k (\lambda)}{2 \pi i k \cdot \omega}, \qquad k \neq 0,$$ where all the coefficients $\varphi_k$ are uniquely determined except $\varphi_0$, which is free. **Lemma 14** (Small divisors lemma for vector fields). *Let $h: \mathbb{T}^d_\sigma \times \Lambda_\mathbb{C}\to \mathbb{C}^n$ be an analytic function such that $\sup_{(\theta, \lambda) \in \mathbb{T}^d_{\sigma}\times \Lambda_\mathbb{C}} \|h(\theta, \lambda)\| \, < \infty$ and having zero average. Let $\omega$ be Diophantine with $\tau \geq d -1$. Then, there exists a unique analytic solution $\varphi : \mathbb{T}^d_\sigma \times \Lambda_\mathbb{C}\to \mathbb{C}^n$ of [\[SDequation_edo\]](#SDequation_edo){reference-type="eqref" reference="SDequation_edo"} with zero average. Moreover, $$\sup_{(\theta, \lambda) \in \mathbb{T}^d_{\sigma-\delta}\times \Lambda_\mathbb{C}} \|\varphi(\theta, \lambda)\| \, \leq \, C \delta^{-\tau} \sup_{(\theta, \lambda) \in \mathbb{T}^d_\sigma \times \Lambda_\mathbb{C}} \, \|h(\theta, \lambda)\|, \qquad 0 < \delta < \sigma,$$ where $C$ depends on $\tau$ and $d$ but not on $\delta$.* As in the case of maps we denote by $\mathcal{SD}(h)$ the unique solution of [\[SDequation_edo\]](#SDequation_edo){reference-type="eqref" reference="SDequation_edo"} with zero average.
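The formal solution is directly computable from the Fourier data of $h$. A minimal sketch for $d = 1$, with illustrative choices (golden mean frequency, $h(\theta) = \sin(2\pi\theta)$) that are not taken from the text:

```python
import numpy as np

omega = (np.sqrt(5) - 1) / 2                  # golden mean frequency
# Fourier coefficients of h(theta) = sin(2*pi*theta): h_1 = -i/2, h_{-1} = i/2
h = {1: -0.5j, -1: 0.5j}
# formal solution: phi_k = h_k / (2*pi*i k*omega) for k != 0; phi_0 is free
phi = {k: hk / (2j * np.pi * k * omega) for k, hk in h.items()}

def dphi(theta):
    # derivative of phi with respect to theta, from its Fourier series
    return sum(2j * np.pi * k * ck * np.exp(2j * np.pi * k * theta)
               for k, ck in phi.items()).real

theta = 0.3
residual = dphi(theta) * omega - np.sin(2 * np.pi * theta)
```

By construction $\partial_\theta\varphi \cdot \omega$ reproduces $h$; here the series is finite, while in general the Diophantine condition is what controls the small divisors $k \cdot \omega$.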
As a consequence, if $h: \mathbb{T}^d_\sigma \times \mathbb{H}_\sigma \times \Lambda_\mathbb{C}\to \mathbb{C}^n$ is quasiperiodic with respect to $t\in \mathbb{H}_\sigma$ with frequencies $\nu \in \mathbb{R}^{d'}$, $h$ has zero average and $(\omega, \nu) \in \mathbb{R}^{d + d'}$ is Diophantine, then the equation $$\label{SDeq_time} (\partial_\theta \varphi(\theta, t, \lambda) , \partial_t \varphi(\theta, t, \lambda))\cdot (\omega, 1) = h(\theta, t, \lambda), \qquad (\theta, t, \lambda) \in \mathbb{T}^d_\sigma \times \mathbb{H}_\sigma \times \Lambda_\mathbb{C},$$ has a unique solution with zero average, defined in $\mathbb{T}^d_\sigma \times \mathbb{H}_\sigma \times \Lambda_\mathbb{C}$ and bounded in $\mathbb{T}^d_{\sigma'} \times \mathbb{H}_{\sigma'} \times \Lambda_\mathbb{C}$ for any $0 < \sigma' < \sigma$. Indeed, since $h$ is quasiperiodic in $t$, equation [\[SDeq_time\]](#SDeq_time){reference-type="eqref" reference="SDeq_time"} is equivalent to $$\label{SD_time_hat} (\partial_\theta \check \varphi(\theta, \tau, \lambda), \partial_\tau \check \varphi (\theta, \tau, \lambda)) \cdot (\omega, \nu) = \check h(\theta, \tau, \lambda), \qquad (\theta, \tau, \lambda) \in \mathbb{T}^{d+d'}_\sigma \times \Lambda_\mathbb{C},$$ where $\tau = \nu t$ and $h(\theta, t, \lambda) = \check h (\theta, \tau,\lambda)$. Then, applying Lemma [Lemma 14](#SDlemma_edo){reference-type="ref" reference="SDlemma_edo"} to equation [\[SD_time_hat\]](#SD_time_hat){reference-type="eqref" reference="SD_time_hat"} taking $(\omega, \nu)$ as the frequency vector, we obtain a unique solution $\check \varphi : \mathbb{T}^{d+d'}_{\sigma'}\times \Lambda_\mathbb{C}\to \mathbb{C}^n$ with zero average, and thus $\varphi(\theta, t, \lambda) = \check \varphi(\theta, \tau, \lambda)$ is the unique solution of equation [\[SDeq_time\]](#SDeq_time){reference-type="eqref" reference="SDeq_time"} with zero average. We also denote it by $\mathcal{SD}(h)$.
We use the same notation to denote the solution of a small divisors equation that is either time dependent or independent, as such dependence will be understood by the context. In the next result, given an analytic vector field $X$ of the form [\[fnormal_tors_edo1\]](#fnormal_tors_edo1){reference-type="eqref" reference="fnormal_tors_edo1"}, we obtain two maps, $\mathcal{K}_n (u, \theta, t, \lambda)$, and two vector fields, $\mathcal{Y}_n (u, \theta, t, \lambda)$, that are approximations of solutions $K$ and $Y$ of the invariance equation $$\label{conj_tors_edo} X \circ (K,t) - \partial_{(u, \theta )} K \cdot Y - \partial_t K = 0.$$ Note that the obtained vector field $\mathcal{Y}_n$ neither depends on $\theta$ nor on $t$. Moreover, the first component of $\mathcal{Y}_n$, which represents the dynamics in a direction transversal to the invariant torus, coincides with the expression of the normal form of a one-dimensional vector field around a parabolic point given in [@takens73]. **Theorem 15** (A computable approximation for vector fields). *Let $X$ be an analytic vector field of the form [\[fnormal_tors_edo1\]](#fnormal_tors_edo1){reference-type="eqref" reference="fnormal_tors_edo1"} satisfying the hypotheses [\[condicions_fnormal\]](#condicions_fnormal){reference-type="eqref" reference="condicions_fnormal"}. Assume that $2p> k-1$. Assume also that $(\omega, \nu)$ is Diophantine and $\bar{a}_k (\lambda) > 0$ for $\lambda \in \Lambda$.
Then, for all $n \geq 2$, there exist two maps, $\mathcal{K}_n : \mathbb{R}\times \mathbb{T}^d \times \mathbb{R}\times \Lambda \to \mathbb{R}^2 \times \mathbb{T}^d$, of the form $$\mathcal{K}_{n} (u, \theta, t, \lambda)= \begin{pmatrix} u^2 + \sum_{i=3}^n \overline K_i^x( \lambda) u^i + \sum_{i=k+1}^{n+k-1} \widetilde{ K}_{i}^x(\theta, t, \lambda)u^{i} \\ \sum_{i=k+1}^{n+k-1} \overline K_{i}^y(\lambda) u^{i} + \sum_{i=2k}^{n+2k-2} \widetilde{ K}_{i}^y(\theta, t, \lambda)u^{i} \\ \theta + \sum_{i=2p-k+1}^{n+2p-k-1} \overline K_{i}^\theta(\lambda) u^{i} + \sum_{i=2p}^{n+2p-2} \widetilde{ K}_{i}^\theta(\theta, t, \lambda)u^{i} \end{pmatrix},$$ depending quasiperiodically on time with the same frequencies as $X$, and two vector fields, $\mathcal{Y}_n: \mathbb{R}\times \mathbb{T}^d \times \mathbb{R}\times \Lambda \to \mathbb{R}\times \mathbb{T}^d$, of the form $$\mathcal{Y}_n(u, \theta, t, \lambda) = \mathcal{Y}_n(u, \lambda) = \begin{cases} \begin{pmatrix} \overline Y_k^x(\lambda) u^k \\ \omega \end{pmatrix} &\qquad \text{ if } \;\; 2 \leq n \leq k, \\ \begin{pmatrix} \overline Y_k^x(\lambda) u^k + \overline Y_{2k-1}^x(\lambda) u^{2k-1} \\ \omega \end{pmatrix} & \qquad \text{ if } \;\; n\ge k+1, \end{cases}$$ such that $$\begin{aligned} \begin{split} \label{eq_prop1_tor_edo_sim} \mathcal{G}_n (u, \theta, t, \lambda) &:= X(\mathcal{K}_{n}(u, \theta, t, \lambda), t, \lambda)- \partial_{(u, \theta)}\mathcal{K}_{n} (u, \theta, t, \lambda) \cdot \mathcal{Y}_n(u, \theta, t, \lambda) - \partial_t \mathcal{K}_n(u, \theta, t, \lambda)\\ & \; = (O(u^{n+k}),\, O(u^{n+2k-1}), \, O(u^{n+2p-1})). 
\end{split} \end{aligned}$$ Moreover, for the lowest order coefficients we obtain* *$$\begin{aligned} & \overline K_{k+1}^y(\lambda) = \pm \sqrt{\frac{2 \, \bar a_k(\lambda)}{\bar c(\lambda)\, (k+1)}}, \qquad \overline K_{2p-k+1}^\theta(\lambda) = \pm \frac{\bar d_p(\lambda)}{2p-k+1} \sqrt{\frac{2(k+1)}{\bar c(\lambda) \, \bar a_k(\lambda)}}, \qquad \overline Y_k^x(\lambda) = \pm \sqrt{\frac{\bar c(\lambda)\, \bar a_k(\lambda)}{2(k+1)}} , \\ & \widetilde{ K}_{k+1}^x (\theta, t, \lambda ) = \mathcal{SD}(\tilde{ c}(\theta,t, \lambda)) \overline{K}_{k+1}^y(\lambda), \qquad \widetilde{ K}_{2k}^y (\theta, t, \lambda ) =\mathcal{SD}(\tilde{a}_k(\theta,t, \lambda )), \qquad \widetilde{ K}_{2p}^\theta (\theta, t, \lambda ) =\mathcal{SD}(\tilde{d}_p(\theta,t, \lambda)). \end{aligned}$$* A remark analogous to Remark [Remark 12](#rem:KnRnsonpolis){reference-type="ref" reference="rem:KnRnsonpolis"} also applies here. *Proof.* The proof is analogous to the one of Theorem [Theorem 11](#prop_simple_tors){reference-type="ref" reference="prop_simple_tors"}, but in this case we look for parameterizations $\mathcal{K}_n$ and $\mathcal{Y}_n$ that approximate solutions of equation [\[conj_tors_edo\]](#conj_tors_edo){reference-type="eqref" reference="conj_tors_edo"}. 
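Before going through the induction, the explicit lowest-order coefficients in the statement can be checked symbolically. The balance relations used below, $\bar c\,\overline K_{k+1}^y = 2\overline Y_k^x$, $\bar a_k = (k+1)\overline K_{k+1}^y \overline Y_k^x$ and $\bar d_p = (2p-k+1)\overline K_{2p-k+1}^\theta \overline Y_k^x$, are our reading of the averaged cohomological equations at orders $u^{k+1}$, $u^{2k}$ and $u^{2p}$, and are stated here as assumptions of the sketch:

```python
import sympy as sp

# Verification sketch: the stated coefficients (positive branch) solve the
# assumed lowest-order balance relations.  Both sides of each relation are
# positive, so comparing squares suffices.
a, c, d, k, p = sp.symbols('a c d k p', positive=True)
Y = sp.sqrt(c * a / (2 * (k + 1)))                          # Y_k^x
Ky = sp.sqrt(2 * a / (c * (k + 1)))                         # K_{k+1}^y
Kt = d / (2 * p - k + 1) * sp.sqrt(2 * (k + 1) / (c * a))   # K_{2p-k+1}^theta

checks = [sp.simplify((c * Ky)**2 - (2 * Y)**2),            # x-component
          sp.simplify(((k + 1) * Ky * Y)**2 - a**2),        # y-component
          sp.simplify(((2 * p - k + 1) * Kt * Y)**2 - d**2)]  # theta-component
```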
For the first induction step, $n=2$, we claim that there exist a map and a vector field, $$\mathcal{K}_{2}(u, \theta, t) = \begin{pmatrix} u^2 + \widetilde{ K}_{k+1}^x(\theta, t) u^{k+1} \\ \overline{K}_{k+1}^y u^{k+1} + \widetilde{ K}_{2k}^y(\theta, t) u^{2k} \\ \theta + \overline K_{2p-k+1}^\theta u^{2p-k+1} + \widetilde{ K}^\theta_{2p} (\theta, t) u^{2p} \end{pmatrix}, \qquad \mathcal{Y}_2(u, \theta, t) = \begin{pmatrix} \overline Y_k^x u^k \\ \omega \end{pmatrix},$$ such that $$\begin{aligned} \begin{split} \label{expans_vf} \mathcal{G}_2 (u, \theta, t) &= X(\mathcal{K}_{2}(u, \theta, t), t)- \partial_{(u, \theta)}\mathcal{K}_{2}(u, \theta, t) \cdot \mathcal{Y}_2(u, \theta, t) - \partial_t \mathcal{K}_2 (u, \theta, t) \\ &= (O(u^{k+2}), O(u^{2k+1}), O(u^{2p+1})). \end{split} \end{aligned}$$ This leads to a set of $d+2$ cohomological equations, that we split into two parts, one containing the terms that are independent of $(\theta, t)$, and the other being a small divisors equation for functions of $(\theta, t)$ with zero average. Then, since $\bar{c},\bar{a}_k >0$ and $(\omega, \nu)$ is Diophantine, by the small divisors lemma the equations obtained from [\[expans_vf\]](#expans_vf){reference-type="eqref" reference="expans_vf"} have solutions $\overline K_{k+1}^y, \, \overline K_{2p-k+1}^\theta , \, \overline Y_k^x$, $\widetilde{ K}_{k+1}^x (\theta, t ), \, \widetilde{ K}_{2k}^y (\theta, t )$ and $\widetilde{ K}_{2p}^\theta (\theta, t )$ as given in the statement. We emphasize that we obtain two solutions. The next terms will depend on the choice we make for those solutions. 
In the induction procedure we look for $$\begin{aligned} \mathcal{K}_{n+1} (u, \theta, t) &= \mathcal{K}_n(u, \theta, t) + \begin{pmatrix} \overline K_{n+1}^x u^{n+1} + \widetilde{ K}_{n+k}^x(\theta, t) u^{n+k} \\ \overline K_{n+k}^y \, u^{n+k} + \widetilde{ K}_{n+2k-1}^y(\theta, t) u^{n+2k-1} \\ \overline K_{n+2p-k}^\theta u^{n+2p-k} + \widetilde{ K}_{n+2p-1}^\theta(\theta, t) u^{n+2p-1} \end{pmatrix}, \\ \mathcal{Y}_{n+1}(u, \theta, t) &= \mathcal{Y}_n(u, \theta, t) + \begin{pmatrix} \overline Y^x_{n+k-1} \, u^{n+k-1} \\ 0 \end{pmatrix}, \end{aligned}$$ such that $\mathcal{G}_{n+1}(u, \theta,t) = (O(u^{n+k+1}), \, O(u^{n+2k}), \, O(u^{n+2p}))$. Proceeding in the same way as in the case of maps we arrive at the following completely analogous equations for the average and the oscillatory parts of the coefficients of $\mathcal{K}_n$ and $\mathcal{Y}_n$, $$\begin{aligned} \label{SDequations_induccio_edo_sim} \begin{split} & \partial_\theta \widetilde{ K}_{n+k}^x(\theta , t) \cdot \omega + \partial_t \widetilde{ K}_{n+k}^x(\theta, t) = \tilde c(\theta, t) \overline K_{n+k}^y + [\widetilde\mathcal{G}_{n}^x(\theta, t)]_{n+k}, \\ & \partial_\theta \widetilde{ K}_{n+2k-1}^y (\theta , t)\cdot \omega + \partial_t \widetilde{ K}_{n+2k-1}^y (\theta, t) = k \, \tilde a_k(\theta, t) \overline K_{n+1}^x + [\widetilde\mathcal{G}_n^y(\theta, t)]_{n+2k-1}, \\ & \partial_\theta \widetilde{ K}_{n+2p-1}^\theta (\theta , t)\cdot \omega +\partial_t \widetilde{ K}_{n+2p-1}^\theta (\theta, t) = p \, \tilde d_p(\theta, t) \overline K_{n+1}^x + [\widetilde\mathcal{G}_n^\theta (\theta, t)]_{n+2p-1}, \end{split} \end{aligned}$$ and $$\begin{aligned} \begin{split} \label{sist_lineal_tors_edo_sim} & \begin{pmatrix} -(n+1) \overline Y_k^x & \bar c & 0 \\ k \, \bar a_k & -(n+k) \, \overline Y_k^x & 0 \\ p\, \bar d_p & 0 & - (n+2p-k) \overline Y_k^x \end{pmatrix} \hspace{-2pt} \begin{pmatrix} \overline K_{n+1}^x \\ \overline K_{n+k}^y \\ \overline K_{n+2p-k}^\theta \end{pmatrix} \\ & \qquad =
\begin{pmatrix} - [\overline\mathcal{G}_n^x]_{n+k} + 2 \overline Y_{n+k-1}^x \\ - [\overline\mathcal{G}_n^y]_{n+2k-1} + (k+1) \overline K_{k+1}^y \overline Y_{n+k-1}^x \\ - [\overline\mathcal{G}_n^\theta ]_{n+2p-1} + (2p-k+1) \overline K_{2p-k+1}^\theta \overline Y_{n+k-1}^x \end{pmatrix}. \end{split} \end{aligned}$$ As in Theorem [Theorem 11](#prop_simple_tors){reference-type="ref" reference="prop_simple_tors"}, the matrix in the left hand side of [\[sist_lineal_tors_edo_sim\]](#sist_lineal_tors_edo_sim){reference-type="eqref" reference="sist_lineal_tors_edo_sim"} is invertible provided that $n \neq k$. In that case one can take $\overline Y_{n+k-1}^x = 0$ and we obtain $\overline K_{n+1}^x$, $\overline K_{n+k}^y$ and $\overline K_{n+2p-k}^\theta$ in a unique way. When $n=k$, the determinant of the matrix is zero. Choosing $$\overline Y_{2k-1}^x = \frac{2k \, \overline Y_k^x \, [\overline\mathcal{G}_n^x]_{2k} + \bar c \, [\overline\mathcal{G}_n^y]_{3k-1}}{2 \, (3k+1) \, \overline Y_k^x},$$ system [\[sist_lineal_tors_edo_sim\]](#sist_lineal_tors_edo_sim){reference-type="eqref" reference="sist_lineal_tors_edo_sim"} has solutions. In this case, $\overline K_{k+1}^x$, $\overline K_{2k}^y$ and $\overline K_{2p}^\theta$ are not uniquely determined. Once we have chosen solutions $\overline K_{n+1}^x$, $\overline K_{n+k}^y$ and $\overline K_{n+2p-k}^\theta$ of system [\[sist_lineal_tors_edo_sim\]](#sist_lineal_tors_edo_sim){reference-type="eqref" reference="sist_lineal_tors_edo_sim"} we proceed as in the case of maps. ◻ # A functional equation for a parametrization of the stable manifold {#sec-funcional} In this section we explain the approach to study the existence of stable invariant manifolds for analytic maps of the form [\[forma_normal_tor\]](#forma_normal_tor){reference-type="eqref" reference="forma_normal_tor"} and analytic time-dependent vector fields of the form [\[fnormal_tors_edo1\]](#fnormal_tors_edo1){reference-type="eqref" reference="fnormal_tors_edo1"}.
We establish a functional equation for a parametrization of the stable invariant manifolds and we present the function spaces and operators that we will use. The treatments in the map and vector field settings are somewhat analogous, so we will omit some details in the latter. ## The case of maps {#sec-funcional-maps} To study the existence of a stable invariant manifold of a map of the form [\[forma_normal_tor\]](#forma_normal_tor){reference-type="eqref" reference="forma_normal_tor"}, we first consider approximations $\mathcal{K}_n : \mathbb{R}\times \mathbb{T}^d \times \Lambda \to \mathbb{R}^2 \times \mathbb{T}^d$ and $\mathcal{R}_n: \mathbb{R}\times \mathbb{T}^d \times \Lambda \to \mathbb{R}\times \mathbb{T}^d$ of solutions of the equation $$\label{funcional-tors-1} F \circ K = K \circ R,$$ obtained in Section [3.1](#sec-algoritme-tors-maps){reference-type="ref" reference="sec-algoritme-tors-maps"} up to a high enough order, to be determined later on. Then, keeping $R=\mathcal{R}_n$ fixed, we look for a correction $\Delta: [0, \, \rho) \times \mathbb{T}^d \times \Lambda \to \mathbb{R}^2 \times \mathbb{T}^d$, for some $\rho>0$, of $\mathcal{K}_n$, analytic on $(0,\rho) \times \mathbb{T}^d \times \Lambda$, such that the pair $K= \mathcal{K}_n + \Delta$, $R=\mathcal{R}_n$ satisfies the invariance condition $$\label{eq_delta_analitic_tors} F \circ (\mathcal{K}_n + \Delta) - (\mathcal{K}_n + \Delta) \circ R = 0.$$ The proof of Theorems [Theorem 1](#teorema_analitic_tors){reference-type="ref" reference="teorema_analitic_tors"} and [Theorem 3](#teorema_posteriori_tors){reference-type="ref" reference="teorema_posteriori_tors"} concerning the stable manifolds is organized as follows. First, we rewrite equation [\[eq_delta_analitic_tors\]](#eq_delta_analitic_tors){reference-type="eqref" reference="eq_delta_analitic_tors"} to separate the dominant linear part with respect to $\Delta$ and the remaining terms.
This motivates the introduction of two families of operators, $\mathcal{S}_{n, \, R}^\times$ and $\mathcal{N}_{n, \, F}$, and the spaces on which these operators act. We provide the properties of these operators in Lemmas [Lemma 20](#invers_S_tor){reference-type="ref" reference="invers_S_tor"} and [Lemma 22](#lema_N_tors){reference-type="ref" reference="lema_N_tors"}, in particular the invertibility of $\mathcal{S}^\times_{n, R}$. Finally, we rewrite the equation for $\Delta$ as the fixed point equation $$\Delta = \mathcal{T}_{n,\, F} (\Delta), \qquad \text{where}\qquad \mathcal{T}_{n, \, F} = (\mathcal{S}_{n, \, R}^\times)^{-1} \circ \mathcal{N}_{n, \, F},$$ and we apply the Banach fixed point theorem to get the solution. The needed properties of the operators $\mathcal{T}_{n, F}$ are given in Lemma [Lemma 26](#lema_contraccio_tors){reference-type="ref" reference="lema_contraccio_tors"}. Let $F: U \times \mathbb{T}^d \times \Lambda \to \mathbb{R}^2 \times \mathbb{T}^d$ be an analytic map of the form [\[forma_normal_tor\]](#forma_normal_tor){reference-type="eqref" reference="forma_normal_tor"}: $$\label{forma_simplificada_tor} F(x, y, \theta, \lambda) = \begin{pmatrix} x + c(\theta, \lambda) y \\ y + P(x, y , \theta, \lambda) \\ \theta + \omega + Q(x, y, \theta, \lambda) \end{pmatrix},$$ where $P(x, y, \theta, \lambda) = a_k (\theta, \lambda)x^k + A(x, y, \theta, \lambda)$ and $Q(x, y, \theta, \lambda) = d_p (\theta, \lambda)x^p + B(x, y, \theta, \lambda)$ and $A$ and $B$ have the form [\[condicions_fnormal\]](#condicions_fnormal){reference-type="eqref" reference="condicions_fnormal"}. From Theorem [Theorem 11](#prop_simple_tors){reference-type="ref" reference="prop_simple_tors"}, given $n \geq 2$ there exist $\mathcal{K}_n$ and $R= \mathcal{R}_n$, polynomial in $u$, such that $$\label{analitic_hipotesi_tors} F \circ \mathcal{K}_n - \mathcal{K}_n \circ R = \mathcal{G}_n,$$ where $\mathcal{G}_n(u, \theta) = (O(u^{n+k}), \,O(u^{n+2k-1}), \, O(u^{n+2p-1}))$.
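On a toy one-dimensional model the scheme $\Delta = \mathcal{T}(\Delta) = \mathcal{S}^{-1}(\mathcal{N}(\Delta))$ can be sketched numerically. All concrete choices below ($R(u) = u - u^2/2$, $\eta(u) = u^3$, the nonlinearity $\Delta^2/10$, and the truncation parameters) are illustrative and not taken from the paper:

```python
# Toy 1-D instance of the fixed-point scheme: solve
#     Delta(R(u)) - Delta(u) = eta(u) + Delta(u)**2 / 10
# with the right inverse S^{-1} given by a truncated orbit sum.
R = lambda u: u - 0.5 * u**2        # parabolic toy map, R_2 = -1/2 < 0
eta = lambda u: u**3
J, M = 80, 3                        # orbit-sum truncation, fixed-point steps

def S_inv(g, u):
    # truncated series -sum_{j=0}^{J-1} g(R^j(u)), telescoping S f = f∘R - f
    s, v = 0.0, u
    for _ in range(J):
        s += g(v)
        v = R(v)
    return -s

delta = lambda u: 0.0
for _ in range(M):                  # Delta_{m+1} = S^{-1}(eta + Delta_m^2/10)
    prev = delta
    delta = lambda u, prev=prev: S_inv(lambda v: eta(v) + prev(v)**2 / 10, u)

u0 = 0.2
residual = delta(R(u0)) - delta(u0) - (eta(u0) + delta(u0)**2 / 10)
```

The residual is small after a few iterations; the orbit sum converges because the iterates $R^j(u)$ decay like $j^{-1}$, mirroring the parabolic iterate estimates used below.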
Since we are looking for a stable manifold of $F$ we will take the approximation corresponding to $R=\mathcal{R}_n$ with the coefficient $\overline R_k^x(\lambda) <0$. Hence, we look for $\rho>0$ and a map $\Delta: [0, \, \rho) \times \mathbb{T}^d \times \Lambda \to \mathbb{R}^2 \times \mathbb{T}^d$, $\Delta = (\Delta^x, \Delta^y, \Delta^\theta) = (O(u^n), \, O(u^{n+k-1}), O(u^{n+2p-k-1}))$, satisfying [\[eq_delta_analitic_tors\]](#eq_delta_analitic_tors){reference-type="eqref" reference="eq_delta_analitic_tors"}, where $\mathcal{K}_n$ and $R$ are the mentioned maps that satisfy [\[analitic_hipotesi_tors\]](#analitic_hipotesi_tors){reference-type="eqref" reference="analitic_hipotesi_tors"}. Using [\[analitic_hipotesi_tors\]](#analitic_hipotesi_tors){reference-type="eqref" reference="analitic_hipotesi_tors"} we can rewrite [\[eq_delta_analitic_tors\]](#eq_delta_analitic_tors){reference-type="eqref" reference="eq_delta_analitic_tors"} as $$\begin{aligned} \begin{split}\label{eqdelta_analitic_tors} \Delta^x \circ R - \Delta^x &= \mathcal{K}_n^y [c\circ (\mathcal{K}_n^\theta + \Delta^\theta) - c \circ \mathcal{K}_n^\theta ] + \Delta^y \, c\circ(\mathcal{K}_n^\theta + \Delta ^\theta) + \mathcal{G}_n^x , \\ \Delta^y \circ R - \Delta^y & = P \circ (\mathcal{K}_n + \Delta) - P \circ \mathcal{K}_n + \mathcal{G}_n^y ,\\ \Delta^\theta \circ R - \Delta^\theta & = Q \circ (\mathcal{K}_n + \Delta) - Q \circ \mathcal{K}_n + \mathcal{G}_n^\theta. \end{split} \end{aligned}$$ Given $\rho \in (0, 1)$ and $\beta \in (0, \frac{\pi}{k-1})$, let $S$ be the complex sector $$S = S(\beta, \rho) = \big\{ z \in \mathbb{C}\, | \ |\arg(z)| < \tfrac{\beta}{2}, \, 0 < |z| < \rho \big\}.$$ **Definition 16**. 
*Given a sector $S = S(\beta, \, \rho)$, the complex torus $\mathbb{T}^d_{\sigma}$ with $\sigma > 0$, $\Lambda_{\mathbb{C}} \subset \mathbb{C}^\ell$ and $n \in \mathbb{N}$, let $\mathcal{W}_n$ be the Banach space $$\mathcal{W}_{n} = \bigg{\{} f : S \times \mathbb{T}^d_{\sigma} \times \Lambda_{\mathbb{C}} \rightarrow \mathbb{C}\, | \ f \text{ real analytic, } \; \|f\|_n: = \sup_{(u, \theta, \lambda) \in S \times \mathbb{T}^d_{\sigma} \times \Lambda_{\mathbb{C}} } \frac{|f(u,\theta, \lambda)|}{|u|^n} < \infty \bigg{\}},$$ with the norm $\|\cdot\|_n$.* Note that when $n\ge 1$ the functions $f$ in $\mathcal{W}_n$ can be continuously extended to $u=0$ with $f(0, \theta, \lambda )= 0$ and, if moreover we have $n\ge 2$, the derivative of $f$ with respect to $u$ can be continuously extended to $u=0$ with $\tfrac{\partial f}{\partial u}(0, \theta, \lambda)=0$. Note also that $\mathcal{W}_{n+1} \subset \mathcal{W}_n$ for all $n \in \mathbb{N}$, and that if $f\in \mathcal{W}_{n+1}$, then $\|f\|_n \le \|f\|_{n+1}$. More precisely, since $\rho < 1$, we have $\|f\|_n \le \rho \|f\|_{n+1}$. Moreover, if $f\in \mathcal{W}_m, \, g \in \mathcal{W}_n$, then $f g \in \mathcal{W}_{m+n}$ and $\|fg\|_{m+n} \leq \|f\|_m \,\|g\|_n.$ Given a product space, $\prod_i \mathcal{W}_i$, we endow it with the product norm $$\|f\|_{\prod_i \mathcal{W}_i} = \max_i { \|f_i \|_{\mathcal{W}_i}},$$ where $f_i = \pi_i \circ f$, and $\pi_i$ is the canonical projection from $\prod_j \mathcal{W}_j$ to $\mathcal{W}_i$. Next we define the spaces $$\mathcal{W}_n^\times = \mathcal{W}_n \times \mathcal{W}_{n+k-1} \times \mathcal{W}_{n+2p-k-1}^d,$$ endowed with the product norm defined above. Note that, in our setting, the functions in $\mathcal{W}_{n+2p-k-1}$ take values in $\mathbb{C}/ \mathbb{Z}$. We will use the notation $\mathcal{B}_{\alpha}$ to denote a closed ball of radius $\alpha$ not always belonging to the same space. Such space will be understood by the context.
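These weighted norms are easy to probe numerically. A sketch (grid sampling of a sector; the choices of $\rho$, $\beta$, $f$ and $g$ are hypothetical) of $\|\cdot\|_n$ and of the product inequality $\|fg\|_{m+n} \le \|f\|_m\,\|g\|_n$:

```python
import numpy as np

rho, beta = 0.5, np.pi / 4
# sample points of the sector S(beta, rho) = {0 < |u| < rho, |arg u| < beta/2}
r = np.linspace(1e-6, rho, 400)[:, None]
arg = np.linspace(-beta / 2, beta / 2, 101)[None, :]
u = r * np.exp(1j * arg)

def norm(f, n):
    # discrete approximation of ||f||_n = sup |f(u)| / |u|^n over the grid
    return np.max(np.abs(f(u)) / np.abs(u)**n)

f = lambda u: u**2 + 0.3 * u**3      # an element of W_2
g = lambda u: np.sin(u) * u**2       # an element of W_3, since sin(u) = u + O(u^3)
lhs = norm(lambda u: f(u) * g(u), 5)
rhs = norm(f, 2) * norm(g, 3)
```

On the grid the inequality holds pointwise, since $|fg|/|u|^{m+n}$ factors as $(|f|/|u|^m)(|g|/|u|^n)$.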
For instance, we will write $$\begin{aligned} \mathcal{B}_{\alpha} &= \{ f = (f^x, f^y, f^\theta)\in \mathcal{W}_n^\times \, | \ \|f\|_{\mathcal{W}_n^\times} \leq \alpha \} \subset \mathcal{W}_n^\times. \end{aligned}$$ For the sake of simplicity, we will omit the parameters $\rho$, $\beta$ and $\sigma$ in the notation of the spaces $\mathcal{W}_n$ and $\mathcal{W}_n^\times$. We will consider $\Lambda$ bounded. If not, we will work locally in bounded subsets of $\Lambda$. Since $F$ is analytic in $U \times \mathbb{T}^d \times \Lambda$, which is relatively compact, it has a holomorphic extension to some complex neighborhood of the form $U_\mathbb{C}\times \mathbb{T}^d_{\sigma} \times \Lambda_\mathbb{C}$ that contains $U \times \mathbb{T}^d \times \Lambda$, where $U_\mathbb{C}$ is a neighborhood of $(0,0)$ in $\mathbb{C}^2$, $\mathbb{T}^d_{\sigma}$ is a complex $d$-dimensional torus and $\Lambda_\mathbb{C}$ is a complex extension of $\Lambda$. Moreover since $\mathcal{K}_n$ and $R$ are analytic maps, their domain extends to a complex domain of the form $S(\beta, \, \rho) \times \mathbb{T}^d_{\sigma'} \times \Lambda_\mathbb{C}$. Then it is possible to set equation [\[eqdelta_analitic_tors\]](#eqdelta_analitic_tors){reference-type="eqref" reference="eqdelta_analitic_tors"} in a space of holomorphic functions defined on $S(\beta, \, \rho) \times \mathbb{T}^d_{\sigma'} \times \Lambda_\mathbb{C}$, and to look for $\Delta$ being a real analytic function of complex variables. To solve equation [\[eqdelta_analitic_tors\]](#eqdelta_analitic_tors){reference-type="eqref" reference="eqdelta_analitic_tors"}, we will consider $n$ big enough and we will look for a solution, $\Delta \in \mathcal{B}_{\alpha} \subset \mathcal{W}_n^\times$, for some $\alpha >0$. In what follows, we describe some conditions on $\alpha$. 
For compositions in [\[eqdelta_analitic_tors\]](#eqdelta_analitic_tors){reference-type="eqref" reference="eqdelta_analitic_tors"} to make sense, we need to ensure that the range of $\mathcal{K}_n+ \Delta$ is contained in the domain where $F$ is analytic. Let $b > 0$ be the radius of a closed ball in $\mathbb{C}^2$ contained in $U_\mathbb{C}$, and let $\tilde \sigma < \sigma$. We have to consider $\rho >0$ and $\Delta$ such that $((\mathcal{K}_n+\Delta)^x, (\mathcal{K}_n+\Delta)^y) \in U_\mathbb{C}$, $(\mathcal{K}_n + \Delta)^\theta \in \mathbb{T}^d_{ \sigma}$. To this end we will ensure that $$\label{restrict_1} |((\mathcal{K}_n+\Delta)^x, (\mathcal{K}_n+\Delta)^y)| \leq b \ \ \ \text{ and } \ \ \ | \text{Im} ((\mathcal{K}_n + \Delta)^\theta)| \leq \tilde \sigma.$$ We choose $\rho$ and $\sigma'$ small enough such that $\sup_{S(\beta, \rho) \times \mathbb{T}^d_{\sigma'} \times \Lambda_\mathbb{C}} |(\mathcal{K}_n^x (u, \theta,\lambda), \mathcal{K}_n^y(u, \theta, \lambda))| \leq \tfrac{b}{2}$ and such that $\sup_{S(\beta, \rho) \times \mathbb{T}^d_{\sigma'} \times \Lambda_\mathbb{C}} | \text{Im}(\mathcal{K}_n^\theta (u, \theta, \lambda))| \leq \tfrac{\tilde \sigma}{2}$. Later on we may take smaller values of $\rho$. 
We choose $$\label{def_alfa} \alpha = \min \, \big\{ \tfrac{1}{2}, \, \tfrac{b}{2}, \, \tfrac{\tilde \sigma}{2} \big\}.$$ Therefore, for $\Delta \in \mathcal{B}_\alpha \subset \mathcal{W}_n^\times$, $$\sup_{S(\beta, \rho) \times \mathbb{T}^d_{\sigma '} \times \Lambda_\mathbb{C}} |\Delta^x (u, \theta, \lambda)| \leq \|\Delta^x\|_{n} \, \sup_{S} |u|^n \leq \alpha \, \rho^n \leq \tfrac{b}{2} \, \rho^n,$$ and similarly, $\sup_{S \times \mathbb{T}^d_{\sigma'} \times \Lambda_\mathbb{C}} |\Delta^y (u, \theta, \lambda)| \leq \tfrac{b}{2} \, \rho^{n+k-1}$, and $$\sup_{S \times \mathbb{T}^d_{\sigma'} \times \Lambda_\mathbb{C}} |\Delta^\theta (u, \theta, \lambda)| \leq \|\Delta^\theta\|_{n+2p-k-1} \, \sup_{S} |u|^{n+2p-k-1} \leq \alpha \, \rho^{n+2p-k-1} \leq \tfrac{\tilde \sigma}{2} \, \rho^{n+2p-k-1},$$ and in particular, $|\text{Im}(\Delta^\theta) | \leq \tilde \sigma/2$. Hence, with this choice of $\alpha$ the condition [\[restrict_1\]](#restrict_1){reference-type="eqref" reference="restrict_1"} holds true. Below we introduce two families of operators that will be used to deal with [\[eqdelta_analitic_tors\]](#eqdelta_analitic_tors){reference-type="eqref" reference="eqdelta_analitic_tors"}. The definition of such operators is motivated by the equation itself. We recall the next lemma, Lemma 2.4.2 from [@BFM17], which we state here with a slightly modified notation adapted to that of this paper. **Lemma 17**. *Let $R^x: S(\beta, \rho) \rightarrow \mathbb{C}$ be a holomorphic function of the form $R^x(u) = u +R_k u^k + O(|u|^{k+1})$, with $R_k<0$ and $k \geq 2$. Assume that $0<\beta < \frac{\pi}{k-1}$.
Then, for any $\mu \in (0,\, (k-1)|R_k| \cos \kappa)$, with $\kappa = \frac{k-1}{2}\beta$, there exists $\rho>0$ small enough such that $R^x$ maps $S(\beta, \rho)$ into itself and $$| (R^x)^j(u)| \leq \frac{|u|}{(1+ j \, \mu \, |u|^{k-1})^{1/(k-1)}}, \qquad u \in S(\beta, \, \rho),\quad j \geq 0,$$ where $(R^x)^j$ refers to the $j$-th iterate of the map $R^x$.* **Definition 18**. *Let $n \geq 1$, $\beta \in (0, \tfrac{\pi}{k-1})$, and let $R : S (\beta, \rho) \times \mathbb{T}^d_{\sigma' } \to S (\beta, \rho) \times \mathbb{T}^d_{\sigma' }$ be an analytic map of the form $$\label{R_def_Smaps} R(u, \theta) = \begin{pmatrix} u + R_k u^k + O(u^{k+1}) \\ \theta + \omega \end{pmatrix},$$ where $R_k<0$ and the terms $O(u^{k+1})$ do not depend on $\theta$.* *We define $\mathcal{S}_{n, \, R}: \mathcal{W}_n \rightarrow \mathcal{W}_n$, as the linear operator given by $$\begin{aligned} \mathcal{S}_{n, \, R}\, f = f \circ R - f. \end{aligned}$$* **Remark 19**. By Lemma [Lemma 17](#lema_sector_tor){reference-type="ref" reference="lema_sector_tor"}, for a map $R$ as in Definition [Definition 18](#def_S_tor){reference-type="ref" reference="def_S_tor"}, we have that $R^x (u, \theta) = R^x(u)$ maps $S(\beta, \rho)$ into itself, and also, since $\omega \in \mathbb{R}^d$, $R^\theta (u, \theta) = R^\theta (\theta)$ maps $\mathbb{T}^d_{\sigma'}$ into itself. Moreover, the functions $f \in \mathcal{W}_n$ are defined on $S(\beta, \rho) \times \mathbb{T}^d_{\sigma'}$, and thus the composition in the definition of $\mathcal{S}_{n, R}$ is well defined. The following lemma states that the operators $\mathcal{S}_{n,\, R}$ have a bounded right inverse and provides a bound for $\|\mathcal{S}_{n, \, R}^{-1}\|$. It is a slightly modified version of Lemma 5.6 of [@CC-EF-20]. Its proof will be omitted. **Lemma 20**.
*The operator $\mathcal{S}_{n, \, R} : \mathcal{W}_n \to \mathcal{W}_n$ with $n \geq 1$ has a bounded right inverse, $$\mathcal{S}_{n, \, R}^{-1} : \mathcal{W}_{n+k-1} \to \mathcal{W}_n$$ given by $$\label{sol_parabolic_tor} \mathcal{S}_{n, \, R}^{ -1} \, \eta = - \, \sum_{j=0}^\infty \, \eta \circ R^j, \qquad \eta \in \mathcal{W}_{n+k-1}.$$ Moreover, for any fixed $\mu \in (0, \,\, (k-1) |R_k^x | \cos \kappa)$, with $\kappa = \frac{k-1}{2}\beta$, there exists $\rho >0$ such that, taking $S(\beta, \, \rho) \times \mathbb{T}^d_{\sigma'}$ as the domain of the functions of $\mathcal{W}_{n+k-1}$, we have the bound $$\|(\mathcal{S}_{n, \, R})^{-1}\| \leq \rho^{k-1} + \tfrac{1}{\mu} \, \tfrac{k-1}{n}.$$* **Definition 21**. *Let $F$ be the holomorphic extension of an analytic map of the form [\[forma_normal_tor\]](#forma_normal_tor){reference-type="eqref" reference="forma_normal_tor"} satisfying the hypotheses of Theorem [Theorem 1](#teorema_analitic_tors){reference-type="ref" reference="teorema_analitic_tors"}. Let $\alpha$ be as in [\[def_alfa\]](#def_alfa){reference-type="eqref" reference="def_alfa"}.* *Given $n \geq 3$ we introduce $\mathcal{N}_{n, \, F} = (\mathcal{N}^x_{n, F},\, \mathcal{N}^y_{n, F}, \, \mathcal{N}_{n, F}^\theta): \mathcal{B}_{\alpha} \subset \mathcal{W}_n^\times \to \mathcal{W}_{n+k-1}^\times$, given by $$\begin{aligned} \mathcal{N}_{n, \, F}^x (f) &= \mathcal{K}_n^y [c\circ (\mathcal{K}_n^\theta + f^\theta) - c \circ \mathcal{K}_n^\theta ] + f^y \, c\circ(\mathcal{K}_n^\theta + f^\theta) + \mathcal{G}_n^x , \\ \mathcal{N}_{n, \, F}^y(f) & = P \circ (\mathcal{K}_n + f) - P \circ \mathcal{K}_n + \mathcal{G}_n^y ,\\ \mathcal{N}_{n, \, F}^\theta(f) & = Q \circ (\mathcal{K}_n + f) - Q \circ \mathcal{K}_n + \mathcal{G}_n^\theta. \end{aligned}$$* In the following lemma we show that the operators $\mathcal{N}_{n, \, F}$ are Lipschitz and we provide bounds for their Lipschitz constants. **Lemma 22**. 
*For each $n \geq 3$, there exists a constant, $M_n >0$, for which the operator $\mathcal{N}_{n, \, F}$ satisfies $$\begin{aligned} \mathrm{Lip\,}\mathcal{N}^x_{n, \, F} & \leq \sup_{\theta \in \mathbb{T}^d_\sigma} |c(\theta)| + M_n \rho, \\ \mathrm{Lip\,}\mathcal{N}^y_{n, \, F} & \leq k\, \sup_{\theta \in \mathbb{T}^d_\sigma} |a_k(\theta)| + M_n \rho, \\ \mathrm{Lip\,}\mathcal{N}^\theta_{n, \, F} & \leq p \, \sup_{\theta \in \mathbb{T}^d_\sigma} |d_p(\theta)| + M_n \rho, \end{aligned}$$ where $\rho$ is the radius of the sector $S(\beta, \, \rho)$.* *Proof.* We deal with the three components of $\mathcal{N}_{n, F}$ separately. First we prove the bound for $\mathrm{Lip\,}\mathcal{N}_{n, F}^x$. Let $f, \, \tilde f \in \mathcal{B}_{\alpha} \subset \mathcal{W}_n^\times$. We have, $$\begin{aligned} \mathcal{N}_{n, F}^x (f) - \mathcal{N}_{n, F}^x (\tilde f) & = (\mathcal{K}_n^y + f^y) \, \int_0^1 Dc \circ (\mathcal{K}_n^\theta + \tilde f^\theta + s(f^\theta - \tilde f^\theta)) \, ds \, (f^\theta - \tilde f^\theta) \\ & \quad + c \circ (\mathcal{K}_n^\theta + \tilde f^\theta) \, (f^y - \tilde f^y). \end{aligned}$$ We can then bound, for some $M_n >0$, $$\begin{aligned} \| (\mathcal{K}_n^y + f^y) \, \int_0^1 & Dc \circ (\mathcal{K}_n^\theta + \tilde f^\theta + s(f^\theta - \tilde f^\theta)) \, ds \, (f^\theta - \tilde f^\theta) \|_{n+k-1} \\ & \leq \sup_{\mathbb{T}^d_\sigma} |Dc(\theta)| \, \sup_{S \times \mathbb{T}^d_{\sigma'}} \frac{1}{|u|^{n+k-1}} |\mathcal{K}_n^y(u, \theta) + f^y(u, \theta)| |f^\theta(u, \theta) - \tilde f^\theta (u, \theta)| \\ & \leq \|f^\theta - \tilde f^\theta \|_{n+2p-k-1} \, \|\mathcal{K}_n^y + f^y\|_{k+1} \,\sup_{\mathbb{T}^d_\sigma} |Dc(\theta)| \, \rho^{2p-k+1} \\ & \leq M_n \, \rho^{2p-k+1} \, \|f^\theta - \tilde f^\theta \|_{n+2p-k-1}. 
\end{aligned}$$ On the other hand, $$\begin{aligned} \|c \circ (\mathcal{K}_n^\theta + \tilde f^\theta) \, (f^y - \tilde f^y) \|_{n+k-1} & = \sup_{S \times \mathbb{T}^d_{\sigma '}} | c \circ (\mathcal{K}_n^\theta + \tilde f^\theta) (u, \theta)| \frac{ |f^y (u, \theta) - \tilde f^y (u, \theta)|}{|u|^{n+k-1}} \\ & \leq \sup_{\mathbb{T}^d_\sigma} |c (\theta)| \, \|f^y - \tilde f^y\|_{n+k-1}, \end{aligned}$$ and thus, we obtain $$\begin{aligned} \| \mathcal{N}^x_{n, F} (f) - \mathcal{N}^x_{n, F} (\tilde{f})\|_{n+k-1} \leq (\sup_{\mathbb{T}^d_\sigma} |c(\theta)| + M_n \rho ) \max \{ \|f^y - \tilde f^y\|_{n+k-1}, \, \|f^\theta - \tilde f^\theta\|_{n+2p-k-1}\}, \end{aligned}$$ that is, $\mathrm{Lip\,}\mathcal{N}^x_{n, \, F} \leq \sup_{\mathbb{T}^d_\sigma} |c(\theta)| + M_n \rho.$ Next we consider $\mathcal{N}_{n, F}^y$. We write, for $f, \, \tilde f \in \mathcal{B}_\alpha \subset \mathcal{W}_n^\times$, $$\begin{aligned} \begin{split} \label{dif_N_tor} & \mathcal{N}^y_{n, \, F} (f) - \mathcal{N}^y_{n, \, F} (\tilde f) = T_1^y + T_2^y, \end{split} \end{aligned}$$ where $$\label{part_dominant_Ny} T_1^y = a_k \circ (\mathcal{K}_n^\theta + f^\theta)(\mathcal{K}_n^x + f^x)^k - a_k \circ (\mathcal{K}_n^\theta + \tilde f^\theta)(\mathcal{K}_n^x + \tilde f^x)^k \, \in \mathcal{W}_{n+2k-2},$$ and $$T_2^y = \int_0^1 DA \circ \xi_s \, ds \, (f-\tilde f) \, \in \mathcal{W}_{n+2k-1},$$ where we have defined, for $s \in [0,1]$, $$\xi_s = \xi_s(f, \tilde{f}) = \mathcal{K}_n + \tilde f + s(f - \tilde f) \in \mathcal{W}_{2} \times \mathcal{W}_{k+1} \times (\mathcal{W}_{0})^d.$$ Note that indeed we have $\xi_s^x (u, \theta) = u^2 + O(|u|^3)$, $\xi_s^y (u, \theta) = \overline K_{k+1}^y u^{k+1} + O(|u|^{k+2})$, and $\xi_s^\theta (u, \theta) = \theta + O(|u|)$, since the presence of $f$ does not affect the lowest order terms of $\xi_s$, and since the coefficients depending on $\theta$ of $\mathcal{K}_n(u, \theta)$ are bounded for $\theta \in \mathbb{T}^d_{\sigma'}$, as a consequence of the 
small divisors lemma. Since $T_1^y$ contains the leading terms of [\[dif_N\_tor\]](#dif_N_tor){reference-type="eqref" reference="dif_N_tor"}, it is sufficient to bound the norm $\| T_1^y \|_{n+2k-2}$ to obtain the required estimate for [\[dif_N\_tor\]](#dif_N_tor){reference-type="eqref" reference="dif_N_tor"}. We write $$\begin{aligned} T_1^y & = a_k \circ (\mathcal{K}_n^\theta + f^\theta)(\mathcal{K}_n^x + f^x)^k - a_k \circ (\mathcal{K}_n^\theta + \tilde f^\theta)(\mathcal{K}_n^x + \tilde f^x)^k \\ &= a_k \circ (\mathcal{K}_n^\theta + f^\theta) \, k \int_0^1 (\xi_s^x)^{k-1} \, ds (f^x - \tilde f^x) + (\mathcal{K}_n^x + \tilde f^x)^k \int_0^1 D a_k \circ (\xi_s^\theta) \, ds \, (f^\theta - \tilde f^\theta) \end{aligned}$$ and decomposing the last expression as $T_{11}^y + T_{12}^y$ in the obvious way, we have $$\begin{aligned} \| T_{11}^y\|_{n+ 2k-2} & \leq \| a_k \circ (\mathcal{K}_n^\theta + f^\theta) \, k \int_0^1 (\xi_s^x)^{k-1} \, ds \, \|_{2k-2} \, \|(f^x - \tilde f^x)\|_{n} \\ & \leq k \, \sup_{\mathbb{T}^d_\sigma} \, |a_k(\theta)| \, \sup_{s \in [0, 1]} \, \sup_{S \times \mathbb{T}^d_{\sigma' }} \, \frac{1}{|u|^{2k-2}} \, |\xi_s^x(u, \theta)|^{k-1} \, \|f^x- \tilde f^x\|_n \\ & \leq k \, \sup_{\mathbb{T}^d_{\sigma}} \, |a_k (\theta)|(1+ M_n \rho)\, \|f^x- \tilde f^x\|_n, \end{aligned}$$ and $\| T_{12}^y\|_{n+2k-2} \leq M_n \, \rho^{2p-k+1} \, \|f^\theta - \tilde f^\theta \|_{n+2p-k-1},$ where $2p-k+1 \geq 1$. Putting together the obtained bounds, we have $$\begin{aligned} \|\mathcal{N}^y_{n, \, F} (f) - \mathcal{N}^y_{n, \, F} (\tilde f)\|_{n+2k-2} & \leq \|a_k \circ (\mathcal{K}_n^\theta + f^\theta)(\mathcal{K}_n^x + f^x)^k - a_k \circ (\mathcal{K}_n^\theta + \tilde f^\theta)(\mathcal{K}_n^x + \tilde f^x)^k\|_{n+2k-2} \\ & \quad + \| T_2^y \|_{n+2k-2} \\ & \leq (k \, \sup_{\mathbb{T}^d_{\sigma}} \, |a_k (\theta)| + M_n \rho)\, \|f- \tilde f\|_{\mathcal{W}_n^\times}.
\end{aligned}$$ Finally we prove the result for $\mathcal{N}_{n, F}^\theta$, analogously to the case of $\mathcal{N}_{n, F}^y$. Here we have, for each $f , \tilde f \in \mathcal{B}_\alpha \subset \mathcal{W}_n^\times$, $$\begin{aligned} \begin{split} \label{dif_N_tor_2} & \mathcal{N}^\theta_{n, \, F} (f) - \mathcal{N}^\theta_{n, \, F} (\tilde f) = T_1^\theta + T_2^\theta, \end{split} \end{aligned}$$ where $$T_1^\theta = d_p \circ(\mathcal{K}_n^\theta + f^\theta)(\mathcal{K}_n^x + f^x)^p - d_p \circ (\mathcal{K}_n^\theta + \tilde f^\theta)(\mathcal{K}_n^x + \tilde f^x)^p \, \in \mathcal{W}_{n+2p-2},$$ and $$T_2^\theta = \int_0^1 DB \circ \xi_s \, ds \, (f - \tilde f) \, \in \mathcal{W}_{n+2p-1}.$$ Since $T_1^\theta$ contains the leading terms of [\[dif_N\_tor_2\]](#dif_N_tor_2){reference-type="eqref" reference="dif_N_tor_2"} we look for a bound for $\|T_1^\theta\|_{n+2p-2}$. We have $$\begin{aligned} T_1^\theta &= d_p \circ (\mathcal{K}_n^\theta + f^\theta)(\mathcal{K}_n^x + f^x)^p - d_p \circ (\mathcal{K}_n^\theta + \tilde f^\theta)(\mathcal{K}_n^x + \tilde f^x)^p \\ & = d_p \circ (\mathcal{K}_n^\theta + f^\theta) \, p \int_0^1 (\xi_s^x)^{p-1} \, ds (f^x - \tilde f^x) + (\mathcal{K}_n^x + \tilde f^x)^p \int_0^1 D d_p \circ (\xi_s^\theta) \, ds (f^\theta - \tilde f^\theta). \end{aligned}$$ We decompose $T_1^\theta = T_{11}^\theta + T_{12}^\theta$.
We have $$\begin{aligned} \| T_{11}^\theta \|_{n+ 2p-2} & \leq \| d_p \circ (\mathcal{K}_n^\theta + f^\theta) \, p \int_0^1 (\xi_s^x)^{p-1} \, ds \, \|_{2p-2} \, \|f^x - \tilde f^x\|_{n} \\ & \leq \sup_{s \in [0, 1]} \, \sup_{S \times \mathbb{T}^d_{\sigma' }} \, \frac{1}{|u|^{2p-2}} \,(p \, |d_p \circ (\mathcal{K}_n^\theta + f^\theta)(u, \theta)|\, |\xi_s^x(u, \theta)|^{p-1} ) \, \|f^x- \tilde f^x\|_n \\ & \leq p \, \sup_{\mathbb{T}^d_{\sigma}} \, |d_p (\theta)|(1 + M_n \rho)\, \|f^x- \tilde f^x\|_n, \end{aligned}$$ and similarly, $\|T_{12}^\theta \|_{n+2p-2} \leq M_n \, \rho^{2p-k+1} \, \|f^\theta - \tilde f^\theta \|_{n+2p-k-1}$. The term $T_2^\theta$ is of higher order. We have $\| T_2^\theta \|_{n+ 2p-2}\le \rho\, \| T_2^\theta \|_{n+ 2p-1}$. With these estimates we get the bound for $\mathrm{Lip\,}\mathcal{N}^\theta_{n, \, F}$ claimed in the statement. ◻ Next, we introduce some more operators. **Definition 23**. *For $n > 2p-k-1$, we denote by $\mathcal{S}_{n, R}^\times : \mathcal{W}_n^\times \to \mathcal{W}_n^\times$ the linear operator defined component-wise as $\mathcal{S}_{n, \, R}^\times = (\mathcal{S}_{n,\, R}, \, \mathcal{S}_{n+k-1,\, R}, \, ( \mathcal{S}_{n+2p-k-1,\, R})^d)$.* **Remark 24**. Since the components of $\mathcal{S}_{n, \, R}^\times$ are uncoupled, a right inverse $(\mathcal{S}_{n, R}^\times)^{-1} :\mathcal{W}_{n+k-1}^\times \to \mathcal{W}_n^\times$ is given by $$(\mathcal{S}_{n, R}^\times)^{-1} = (\mathcal{S}_{n, \, R}^{-1}, \, \mathcal{S}_{n+k-1, \, R}^{-1}, \, (\mathcal{S}_{n+2p-k-1, \, R}^{-1})^d).$$ **Definition 25**. *Let $F$ be the holomorphic extension of an analytic map of the form [\[forma_normal_tor\]](#forma_normal_tor){reference-type="eqref" reference="forma_normal_tor"} satisfying the hypotheses of Theorem [Theorem 1](#teorema_analitic_tors){reference-type="ref" reference="teorema_analitic_tors"}. 
Given $n \geq 3$, we define $\mathcal{T}_{n, \, F} : \mathcal{B}_\alpha \subset \mathcal{W}_n^\times \to \mathcal{W}_n^\times$ by $$\mathcal{T}_{n, F} = (\mathcal{S}_{n, R}^\times)^{-1} \circ \mathcal{N}_{n, F}.$$* Using the above operators, equations [\[eqdelta_analitic_tors\]](#eqdelta_analitic_tors){reference-type="eqref" reference="eqdelta_analitic_tors"} can be written as $$\mathcal{S}_{n, \, R}^\times \, \Delta = \mathcal{N}_{n, \, F} (\Delta ).$$ **Lemma 26**. *There exist $m_0 > 0$ and $\rho_0 >0$ such that if $\rho < \rho_0$ and $n \geq m_0$, we have $\mathcal{T}_{n, \, F} (\mathcal{B}_\alpha) \subseteq \mathcal{B}_\alpha$ and $\mathcal{T}_{n , \, F}$ is a contraction operator in $\mathcal{B}_\alpha$.* *Proof.* By the definition of $\mathcal{T}_{n, F}$ and the norm in $\mathcal{W}_n^\times$, $$\begin{aligned} \begin{split} \label{formula_lip_T} \mathrm{Lip\,}\mathcal{T}_{n, \, F} \leq \max \Big\{ & \|(\mathcal{S}_{n, \, R})^{-1}\| \, \mathrm{Lip\,}\mathcal{N}^x_{n, \, F}, \, \|(\mathcal{S}_{n+k-1, \, R})^{-1}\| \, \mathrm{Lip\,}\mathcal{N}^y_{n, F}, \\ & \|(\mathcal{S}_{n+2p-k-1, \, R})^{-1}\| \, \mathrm{Lip\,}\mathcal{N}^\theta_{n, \, F}\Big\}. 
\end{split} \end{aligned}$$ From [\[formula_lip_T\]](#formula_lip_T){reference-type="eqref" reference="formula_lip_T"} and the estimates obtained in Lemmas [Lemma 20](#invers_S_tor){reference-type="ref" reference="invers_S_tor"} and [Lemma 22](#lema_N_tors){reference-type="ref" reference="lema_N_tors"}, given $\mu \in (0, (k-1) |\overline R_k^x | \cos \kappa)$, with $\kappa = \frac{k-1}{2}\beta$, there is $\rho_0 > 0$ such that for $\rho \in (0, \rho_0)$ we have the bound $$\begin{aligned} \mathrm{Lip\,}\mathcal{T}_{n, \, F} \leq & \max \, \Big\{ (\rho^{k-1} + \tfrac{1}{\mu} \tfrac{k-1}{n})(\sup_{\mathbb{T}^d_{\sigma}} \, |c(\theta)| + M_n \rho), \\ & \qquad (\rho^{k-1} + \tfrac{1}{\mu} \tfrac{k-1}{n+k-1})(\sup_{\mathbb{T}^d_{\sigma}} \, |a_k(\theta)| + M_n \rho), \, (\rho^{k-1} + \tfrac{1}{\mu} \tfrac{k-1}{n+2p-k-1})(\sup_{\mathbb{T}^d_{\sigma}} \, |d_p(\theta)| + M_n \rho ) \Big\} , \end{aligned}$$ taking $S(\beta, \rho) \times \mathbb{T}^d_{\sigma' }$ as the domain of the functions of $\mathcal{B}_\alpha$. Then, choosing $\rho < \rho_0$ small enough, it is clear that one can choose $m_0$ such that, for $n \geq m_0$, one has $\mathrm{Lip\,}\mathcal{T}_{n, \, F} < 1$. Next we prove that one can find $\tilde \rho_0 > 0$, possibly smaller than $\rho_0$, such that, taking $S(\beta, \rho) \times \mathbb{T}^d_{\sigma'}$ with $\rho < \tilde \rho_0$ as the domain of the functions of $\mathcal{B}_\alpha$, the operator $\mathcal{T}_{n, F}$ maps $\mathcal{B}_\alpha$ into itself. For each $f \in \mathcal{B}_\alpha$ we can write $$\begin{aligned} \|\mathcal{T}_{n, \, F} (f) \|_{\mathcal{W}_n^\times} & \leq \| \mathcal{T}_{n, \, F} (f) - \mathcal{T}_{n, \, F} (0)\|_{\mathcal{W}_n^\times} + \|\mathcal{T}_{n, \, F} (0)\|_{\mathcal{W}_n^\times} \\ & \leq \alpha \, \mathrm{Lip\,}\mathcal{T}_{n, \, F} + \|\mathcal{T}_{n, \, F} (0)\|_{\mathcal{W}_n^\times}.
\end{aligned}$$ From the definitions of $\mathcal{T}_{n, F}$ and $\mathcal{N}_{n, F}$ we have, for each $n \in \mathbb{N}$, $$\mathcal{T}_{n, F}(0) = (\mathcal{S}_{n, \, R}^\times)^{-1} \circ \mathcal{N}_{n, \, F} ( 0) = (\mathcal{S}_{n, \, R}^\times)^{-1} \, \mathcal{G}_n.$$ Also, we have $\mathcal{G}_n = (\mathcal{G}_n^x, \, \mathcal{G}_n^y, \, \mathcal{G}_n^\theta) \in \mathcal{W}_{n+k} \times \mathcal{W}_{n+2k-1} \times (\mathcal{W}_{n+2p-1})^d$, and thus, for every $\varepsilon > 0$, there is $\rho_n>0$ such that for $\rho< \rho_n$ one has $$\begin{aligned} \|\mathcal{T}_{n, \, F} (0)\|_{\mathcal{W}_n^\times} \leq \| (\mathcal{S}_{n, \, R}^\times)^{-1} \| \, \max \{ \| \mathcal{G}_n^x \|_{n+k-1}, \, \| \mathcal{G}_n^y \|_{n+2k-2}, \, \| \mathcal{G}_n^\theta \|_{n+2p-2} \} < \varepsilon. \end{aligned}$$ Moreover, since $\mathrm{Lip\,}\mathcal{T}_{n, F} <1$, taking $\varepsilon \le \alpha (1-\mathrm{Lip\,}\mathcal{T}_{n, \, F})$ we can choose $\rho_n$ such that $\alpha \, \mathrm{Lip\,}\mathcal{T}_{n, \, F} + \|\mathcal{T}_{n, \, F} (0) \|_{\mathcal{W}_n^\times} \leq \alpha$, and then for every $\rho < \rho_n$ one has $\mathcal{T}_{n, \, F} (\mathcal{B}_\alpha) \subseteq \mathcal{B}_\alpha$. ◻

## The case of vector fields {#sec-funcional-vf}

In this setting, we consider approximations $\mathcal{K}_n : \mathbb{R}\times \mathbb{T}^d \times \mathbb{R}\times \Lambda \to \mathbb{R}^2 \times \mathbb{T}^d$ and $\mathcal{Y}_n: \mathbb{R}\times \mathbb{T}^d \times \mathbb{R}\times \Lambda \to \mathbb{R}\times \mathbb{T}^d$ of solutions of equation [\[conj_tors_edo\]](#conj_tors_edo){reference-type="eqref" reference="conj_tors_edo"} obtained in Section [3.2](#sec_algoritme_tors_edo){reference-type="ref" reference="sec_algoritme_tors_edo"} up to a high enough order.
Then, keeping $Y=\mathcal{Y}_n$ fixed, we look for a correction $\Delta: [0, \, \rho) \times \mathbb{T}^d \times \mathbb{R}\times \Lambda \to \mathbb{R}^2 \times \mathbb{T}^d$ of $\mathcal{K}_n$, for some $\rho>0$, analytic on $(0,\rho) \times \mathbb{T}^d \times \mathbb{R}\times \Lambda$, such that the pair $K= \mathcal{K}_n + \Delta$, $Y=\mathcal{Y}_n$ satisfies the invariance condition $$\label{eq_delta_analitic_tors_edo} X \circ (\mathcal{K}_n + \Delta, t) - \partial_{(u, \theta)} (\mathcal{K}_n + \Delta) \cdot Y - \partial_t (\mathcal{K}_n + \Delta) = 0.$$ To be able to deal with equation [\[eq_delta_analitic_tors_edo\]](#eq_delta_analitic_tors_edo){reference-type="eqref" reference="eq_delta_analitic_tors_edo"} in a suitable space of analytic functions, we rewrite the vector field [\[fnormal_tors_edo1\]](#fnormal_tors_edo1){reference-type="eqref" reference="fnormal_tors_edo1"} in terms of its hull function $\check X(x, y, \theta, \tau , \lambda) = X (x, y, \theta, t, \lambda)$, with $\tau = \nu t$ and $\nu \in \mathbb{R}^{d'}$, and similarly for the functions that appear in its components. Hence, the corresponding differential equation reads $$\label{fnormal_edo_skew} \begin{pmatrix} \dot x \\ \dot y \\ \dot \theta \end{pmatrix} = \begin{pmatrix} \check c(\theta, \tau, \lambda) y \\ \check a_k(\theta, \tau, \lambda)x^k + \check A(x, y, \theta, \tau, \lambda) \\ \omega + \check d_p(\theta, \tau, \lambda)x^p + \check B(x, y, \theta, \tau, \lambda) \end{pmatrix},$$ where $\check c : \mathbb{T}^d \times \mathbb{T}^{d'} \times \Lambda \to \mathbb{R}$, $\check c(\theta , \tau, \lambda ) = c (\theta, t, \lambda)$, and similarly for the other quantities. Now the vector field $\check X$ is defined in a domain of the form $U \times \mathbb{T}^{d+d'}$, and thus the new variables $(\theta, \tau)$ can be thought of as angles.
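For instance (a purely illustrative choice of coefficient, not taken from the normal form above), if the time dependence of $c$ is quasi-periodic with two rationally independent frequencies $\nu_1, \nu_2$, say $c(\theta, t, \lambda) = c_0(\theta, \lambda)\big(1 + \varepsilon \cos(\nu_1 t)\cos(\nu_2 t)\big)$, then $d' = 2$, $\nu = (\nu_1, \nu_2)$, $\tau = \nu t \in \mathbb{T}^2$, and the corresponding hull function is $$\check c(\theta, \tau, \lambda) = c_0(\theta, \lambda)\big(1 + \varepsilon \cos\tau_1 \cos\tau_2\big),$$ which is indeed an analytic function of the angles $(\theta, \tau) \in \mathbb{T}^{d+2}$.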
We also introduce $$\check \mathcal{K}_n (u, \theta, \tau, \lambda) = \mathcal{K}_n(u, \theta, t, \lambda) , \qquad \check Y (u, \theta, \tau, \lambda) = Y (u, \theta, t, \lambda),$$ and $$J (u, \theta, \tau, \lambda) = \begin{pmatrix} \check Y (u, \theta, \tau, \lambda) \\ \nu \end{pmatrix}.$$ Therefore, equation [\[eq_delta_analitic_tors_edo\]](#eq_delta_analitic_tors_edo){reference-type="eqref" reference="eq_delta_analitic_tors_edo"} can be written as $$\label{analitic_hipotesi_compacte} \check X \circ (\check \mathcal{K}_n + \Delta, \tau) - D (\check \mathcal{K}_n + \Delta) \cdot J = 0,$$ and then we look for a solution $\Delta = \Delta (u, \theta, \tau, \lambda)$ with $\Delta: [0, \, \rho) \times \mathbb{T}^d \times \mathbb{T}^{d'} \times \Lambda \to \mathbb{R}^2 \times \mathbb{T}^d$. The proofs of Theorems [Theorem 5](#teorema_analitic_tors_edo){reference-type="ref" reference="teorema_analitic_tors_edo"} and [Theorem 6](#teorema_posteriori_tors_edo){reference-type="ref" reference="teorema_posteriori_tors_edo"} are organized in the same way as those of Theorems [Theorem 1](#teorema_analitic_tors){reference-type="ref" reference="teorema_analitic_tors"} and [Theorem 3](#teorema_posteriori_tors){reference-type="ref" reference="teorema_posteriori_tors"}. As for the case of maps, we will rewrite the equation for $\Delta$ as a fixed point equation. From Proposition [Theorem 15](#prop_tors_edo_sim){reference-type="ref" reference="prop_tors_edo_sim"}, given $n$ there exist a map $\mathcal{K}_n$ and a vector field $Y= \mathcal{Y}_n$ such that $$X \circ (\mathcal{K}_n, t) - \partial_{(u, \theta)}\mathcal{K}_n \cdot Y - \partial_t \mathcal{K}_n = \mathcal{G}_n,$$ or equivalently, $$\label{analitic_hipotesi_tors_edo} \check X \circ (\check \mathcal{K}_n, \tau ) - D \check \mathcal{K}_n \cdot J = \check \mathcal{G}_n,$$ where $\check \mathcal{G}_n(u, \theta, \tau, \lambda) = (O(u^{n+k}), \,O(u^{n+2k-1}), \, O(u^{n+2p-1}))$.
Since we are looking for a stable manifold, we take the approximations corresponding to $\check Y= \check \mathcal{Y}_n$ with the coefficient $\overline Y_k^x (\lambda)<0$. Summarizing, we look for $\rho>0$ and a map $\Delta: [0, \, \rho) \times \mathbb{T}^{d+d'} \times \Lambda \to \mathbb{R}^2 \times \mathbb{T}^d$, analytic on $(0,\rho) \times \mathbb{T}^{d+d'} \times \Lambda$, satisfying [\[analitic_hipotesi_compacte\]](#analitic_hipotesi_compacte){reference-type="eqref" reference="analitic_hipotesi_compacte"}, where $\check \mathcal{K}_n$ and $J$ satisfy [\[analitic_hipotesi_tors_edo\]](#analitic_hipotesi_tors_edo){reference-type="eqref" reference="analitic_hipotesi_tors_edo"}. Moreover, we ask $\Delta$ to be of the form $\Delta = (\Delta^x, \Delta^y, \Delta^\theta) = (O(u^n), \, O(u^{n+k-1}), O(u^{n+2p-k-1}))$. As in Section [4.1](#sec-funcional-maps){reference-type="ref" reference="sec-funcional-maps"}, we write $$\check P(x, y, \theta, \tau, \lambda) = \check a_k (\theta, \tau , \lambda)x^k + \check A(x, y, \theta, \tau, \lambda ),$$ $$\check Q(x, y, \theta, \tau, \lambda ) = \check d_p (\theta,\tau , \lambda)x^p + \check B(x, y, \theta,\tau, \lambda).$$ Then, using [\[analitic_hipotesi_tors_edo\]](#analitic_hipotesi_tors_edo){reference-type="eqref" reference="analitic_hipotesi_tors_edo"} we can rewrite [\[analitic_hipotesi_compacte\]](#analitic_hipotesi_compacte){reference-type="eqref" reference="analitic_hipotesi_compacte"} as $$\begin{aligned} \begin{split}\label{eqdelta_analitic_tors_edo} D \Delta^x \cdot J &= \check \mathcal{K}_n^y [\check c\circ (\check \mathcal{K}_n^\theta + \Delta^\theta, \tau ) - \check c \circ (\check \mathcal{K}_n^\theta, \tau) ] + \Delta^y \, \check c\circ(\check \mathcal{K}_n^\theta + \Delta ^\theta, \tau) + \check \mathcal{G}_n^x , \\ D \Delta^y \cdot J & = \check P \circ (\check \mathcal{K}_n + \Delta, \tau) - \check P \circ (\check \mathcal{K}_n , \tau)+ \check \mathcal{G}_n^y ,\\ D \Delta^\theta \cdot J & = \check Q \circ (\check \mathcal{K}_n + \Delta, \tau) - \check Q \circ (\check \mathcal{K}_n , \tau)+ \check \mathcal{G}_n^\theta. \end{split}\end{aligned}$$ To deal with equation [\[eqdelta_analitic_tors_edo\]](#eqdelta_analitic_tors_edo){reference-type="eqref" reference="eqdelta_analitic_tors_edo"} we introduce function spaces and operators adapted to the vector field setting. **Definition 27**. *Given a sector $S = S(\beta, \, \rho)$, $\sigma > 0$ and $n \in \mathbb{N}$, let $\mathcal{Z}_n$ be the Banach space $$\begin{aligned} \mathcal{Z}_{n} = \bigg\{ f : S \times \mathbb{T}^{d+d'}_\sigma \times \Lambda_\mathbb{C}& \rightarrow \mathbb{C}\, | \ f \text{ real analytic,} \\ &\|f\|_n: = \sup_{(u, \theta, \tau, \lambda) \in S \times \mathbb{T}^{d+d'}_\sigma \times \Lambda_\mathbb{C}} \frac{|f(u,\theta, \tau, \lambda)|}{|u|^n} < \infty \bigg\}, \end{aligned}$$ with the norm $\|\cdot \|_n$.* It is exactly the same space as $\mathcal{W}_n$, except that the functions depend on $(\theta, \tau)\in \mathbb{T}^{d+d'}$ instead of on $\theta\in \mathbb{T}^{d}$. As for the case of maps, we endow the product spaces $\prod_i \mathcal{Z}_i$ with the product norm and we define $$\mathcal{Z}_n^\times = \mathcal{Z}_n \times \mathcal{Z}_{n+k-1} \times \mathcal{Z}_{n+2p-k-1} ^d.$$ Next, we set equation [\[eqdelta_analitic_tors_edo\]](#eqdelta_analitic_tors_edo){reference-type="eqref" reference="eqdelta_analitic_tors_edo"} in a space of holomorphic functions defined in a domain $S(\beta, \, \rho) \times \mathbb{T}^{d+d'}_{\sigma'} \times \Lambda_\mathbb{C}$, and we look for $\Delta$ as a real analytic function of complex variables. Concretely, to solve equation [\[eqdelta_analitic_tors_edo\]](#eqdelta_analitic_tors_edo){reference-type="eqref" reference="eqdelta_analitic_tors_edo"}, we will consider $n$ large enough and look for a solution $\Delta \in \mathcal{B}_\alpha \subset \mathcal{Z}_n^\times$, for some $\alpha >0$.
To determine suitable values for $\alpha$ we proceed in the same way as in the case of maps. We take $$\alpha = \min \, \big\{ \tfrac{1}{2}, \, \tfrac{b}{2}, \, \tfrac{\tilde \sigma}{2} \big\},$$ where $b,\tilde \sigma, \sigma'$ and $\rho$ have the same meaning as in the case of maps. **Definition 28**. *Let $k \geq 2$, $n \geq 0$, $\beta < \tfrac{\pi}{k-1}$ and let $J : S (\beta, \rho) \times \mathbb{T}^{d+d'}_{\sigma'} \to \mathbb{C}\times\mathbb{R}^{d+d'}$ be an analytic vector field of the form $$\label{R_def_S} J(u, \theta, \tau) = ( Y_k u^k + O(u^{k+1}), \, \omega , \, \nu),$$ with $Y_k < 0$ and such that the term $O(u^{k+1})$ does not depend on $(\theta, \tau)$.* *We define $\mathcal{S}_{n,J}: \mathcal{Z}_n \rightarrow \mathcal{Z}_n$ as the linear operator given by $$\begin{aligned} \mathcal{S}_{n,J}\, f = Df \cdot J = \partial_u f \cdot J^x + \partial_\theta f \cdot \omega + \partial_\tau f \cdot \nu. \end{aligned}$$* Note that, although the notation is similar to that of a linear operator used in the map setting, this operator is different. The following lemma concerns the properties of the flows of vector fields of the form [\[R_def_S\]](#R_def_S){reference-type="eqref" reference="R_def_S"}. **Lemma 29**. *Let $J (u, \theta, \tau)$ be as in Definition [Definition 28](#def_S_tor_edo){reference-type="ref" reference="def_S_tor_edo"} and let $\varphi_s = (\varphi^u_s,\varphi^\theta_s, \varphi^\tau_s)$ be its flow. Then, $\varphi_s$ has the form $$\label{p_flux_1} \varphi_s(u, \theta, \tau) = (\varphi^u_s(u), \, \theta + \omega s, \, \tau + \nu s),$$ and, for any fixed $\mu \in (0, \,\, (k-1) |Y_k | \cos \kappa)$, with $\kappa = \frac{k-1}{2}\beta$, there exists $\rho_1\in (0,\rho]$ small enough such that $\varphi_s^u(u) \in S(\beta, \rho_1)$ for all $u \in S(\beta, \rho_1)$ and $s \in [0, \infty)$.
Moreover, $$\label{p_flux_2} | \varphi^u_s(u) | \leq \frac{|u|}{(1 + s \mu |u|^{k-1})^{\frac{1}{k-1}}}, \qquad \forall \, u \in S(\beta, \, \rho_1), \quad \forall \, s \in [0, \infty).$$* *Proof.* By definition, the time-$s$ flow of $J$ satisfies $$\label{def_flow_lema} \varphi_s(u, \theta, \tau) = (u, \theta, \tau) + \int_0^s J \circ \varphi_{s'} \, ds',$$ and thus, we obtain $\varphi_s^\theta(u, \theta, \tau) = \theta + \omega s, \, \varphi_s^\tau(u, \theta, \tau) = \tau + \nu s,$ and that $\varphi_s^u$ is independent of $\theta$ and $\tau$. Changing to complex polar coordinates, $u=re^{i\varphi}$, equation $\dot u = Y_k u^k + O(u^{k+1})$ becomes $$\begin{aligned} \label{polars-r} \dot r= Y_k \cos ((k-1)\varphi ) r^k + O(r^{k+1}), \\ \dot \varphi = Y_k \sin ((k-1)\varphi ) r^{k-1} + O(r^{k}).\end{aligned}$$ In the domain $S(\beta, \rho)$, $|(k-1)\varphi | < \kappa <\pi/2$. One checks immediately that, if $\rho$ is small, the vector field points towards the interior of $S$ on the boundary of $S$. Indeed, at $\varphi=\beta/2$, $\dot \varphi <0$; at $\varphi=-\beta/2$, $\dot \varphi >0$; and at $r=\rho$, $\dot r <0$. For the last inequality we use that the $O(r^{k+1})$ term in [\[polars-r\]](#polars-r){reference-type="eqref" reference="polars-r"} is bounded by $M r^{k+1}$ for $0<r<\rho$. We take $\tilde \mu$ such that $0<\mu<\tilde \mu < (k-1)|Y_k| \cos \kappa$ and $\rho_1 < \min\{ 1,\frac{\tilde \mu-\mu}{(k-1)M}\}$. Since $\cos ((k-1)\varphi ) > \cos \kappa >0$ we have $$\dot r\le Y_k \cos ((k-1)\varphi ) r^k + M r^{k+1}, \qquad 0<r<\rho_1.$$ With the previous choices, $$\dot r\le - \frac{\tilde \mu}{k-1} r^k + M \rho_1 r^k \le - \frac{ \mu}{k-1} r^k .$$ Integrating the last inequality, which gives $\frac{d}{ds}\, r^{-(k-1)} = -(k-1)\, r^{-k}\, \dot r \geq \mu$ and hence $r(s)^{-(k-1)} \geq r(0)^{-(k-1)} + \mu s$, we obtain [\[p_flux_2\]](#p_flux_2){reference-type="eqref" reference="p_flux_2"}. ◻ The following lemma states that $\mathcal{S}_{n,J}$ has a bounded right inverse and provides a bound for $\|\mathcal{S}_{n,J}^{-1}\|$. **Lemma 30**.
*Given $k\geq 2$ and $n \geq 1$, the operator $\mathcal{S}_{n,J} : \mathcal{Z}_n \to \mathcal{Z}_n$ has a bounded right inverse, $$\mathcal{S}_{n,J}^{-1} : \mathcal{Z}_{n+k-1} \to \mathcal{Z}_n,$$ given by $$\label{sol_parabolic_tor_edo} \mathcal{S}_{n,J}^{ -1} \, \eta = - \int_{0}^\infty \, \eta \circ \varphi_s \, ds, \qquad \eta \in \mathcal{Z}_{n+k-1},$$ where $\varphi_s$ denotes the time-$s$ flow of $J$.* *Moreover, for any fixed $\mu \in (0, \,\, (k-1) |Y_k | \cos \kappa)$, with $\kappa = \frac{k-1}{2}\beta$, there exists $\rho >0$ such that, taking $S(\beta, \, \rho) \times \mathbb{T}^{d+d'}_{\sigma'}$ as the domain of the functions of $\mathcal{Z}_{n+k-1}$, we have $$\|(\mathcal{S}_{n,J})^{-1}\| \leq \tfrac{1}{\mu} \, \tfrac{k-1}{n}.$$* From Lemma [Lemma 29](#lema_prop_flux){reference-type="ref" reference="lema_prop_flux"} we have that $\varphi_s^u(u)$ belongs to $S(\beta, \rho)$ for all $s \in [0, \infty)$. Then clearly $\varphi_s(u, \theta, \tau) \in S(\beta, \rho) \times \mathbb{T}^{d+d'}_{\sigma'}$ and the composition $\eta \circ \varphi_s$ is well defined for all $s\ge 0$. Using again Lemma [Lemma 29](#lema_prop_flux){reference-type="ref" reference="lema_prop_flux"} we have, for $\rho$ small enough, $$\begin{aligned} |\eta \circ \varphi_s (u, \theta, \tau)| \, & \leq\, \|\eta \|_{n+k-1} \, \frac{1}{(\mu s)^{(1+\tfrac{n}{k-1})}}, \qquad \forall \, (u, \theta, \tau) \in S \times \mathbb{T}^{d+d'}_{\sigma'}, \quad \forall \, s \in [0, \infty),\end{aligned}$$ so that the integral [\[sol_parabolic_tor_edo\]](#sol_parabolic_tor_edo){reference-type="eqref" reference="sol_parabolic_tor_edo"} converges uniformly on $S \times \mathbb{T}^{d+d'}_{\sigma'}$.
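Indeed, the decay estimate above follows directly from the definition of the norm $\|\cdot\|_{n+k-1}$ together with the bound [\[p_flux_2\]](#p_flux_2){reference-type="eqref" reference="p_flux_2"} of Lemma [Lemma 29](#lema_prop_flux){reference-type="ref" reference="lema_prop_flux"}: $$|\eta \circ \varphi_s (u, \theta, \tau)| \leq \|\eta\|_{n+k-1} \, |\varphi_s^u(u)|^{n+k-1} \leq \|\eta\|_{n+k-1} \, \frac{|u|^{n+k-1}}{(1+s\mu|u|^{k-1})^{\frac{n+k-1}{k-1}}} \leq \|\eta \|_{n+k-1} \, \frac{1}{(\mu s)^{1+\frac{n}{k-1}}},$$ where in the last step we have used $1+s\mu|u|^{k-1} \geq s\mu|u|^{k-1}$, so that the powers of $|u|$ cancel exactly.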
*Proof.* To show that [\[sol_parabolic_tor_edo\]](#sol_parabolic_tor_edo){reference-type="eqref" reference="sol_parabolic_tor_edo"} is a formal expression for a right inverse of $\mathcal{S}_{n,J}$, we recall that $\varphi_s(u, \theta, \tau) = (\varphi^u_s (u), \theta + \omega s, \tau + \nu s)$ is the time-$s$ flow of $J$. By differentiating under the integral sign one has $$\begin{aligned} \mathcal{S}_{n,J} \circ (\mathcal{S}_{n,J})^{-1} \eta & = -\int_0^\infty \partial_u (\eta \circ \varphi_s) \, ds \, J^x - \int_0^\infty \partial_\theta (\eta \circ \varphi_s) \, ds \cdot \omega - \int_0^\infty \partial_\tau (\eta \circ \varphi_s) \, ds \cdot \nu . \end{aligned}$$ Moreover, the following relations hold true, $$\begin{aligned} \label{relacions-S-edo} \begin{split} & \int_0^\infty \partial_\theta (\eta \circ \varphi_s) \, ds \cdot \omega = \int_0^\infty \partial_\theta \eta \circ \varphi_s \, \partial_s \varphi_s^\theta \, ds, \\ &\int_0^\infty \partial_\tau (\eta \circ \varphi_s) \, ds \cdot \nu = \int_0^\infty \partial_\tau \eta \circ \varphi_s \, \partial_s \varphi_s^\tau \, ds, \\ & \int_0^\infty \partial_u (\eta \circ \varphi_s) \, ds \, J^x = \int_0^\infty \partial_u \eta \circ \varphi_s \, \partial_s \varphi_s^u \, ds. \end{split} \end{aligned}$$ Indeed, the first two equalities above are immediate. To prove the third one, observe that we have $$\label{relacions-S-edo-2} \int_0^\infty \partial_u (\eta \circ \varphi_s) \, J^x \, ds = \int_0^\infty \partial_u \eta \circ \varphi_s \, \partial_u \varphi_s^u \, J^x \, \frac{J^x\circ \varphi_s}{J^x\circ \varphi_s}\, ds = \int_0^\infty g(s, u) \, h (s, u) \, ds,$$ where $g(s,u) = \partial_u \eta \circ \varphi_s \, J^x \circ \varphi_s$ and $h(s,u) = \partial_u \varphi_s^u \frac{J^x}{J^x \circ \varphi_s}$. We have that $\partial_s h(s,u) = 0$ and then $h(s, u) =h (0, u) = 1$ for all $s\ge 0$. 
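For completeness, let us verify the identity $\partial_s h(s,u) = 0$. Since $\partial_s \varphi_s^u = J^x \circ \varphi_s^u$, differentiating with respect to $u$ gives $\partial_s \partial_u \varphi_s^u = (DJ^x \circ \varphi_s^u) \, \partial_u \varphi_s^u$, and on the other hand $$\partial_s \, \frac{1}{J^x \circ \varphi_s^u} = - \, \frac{(DJ^x \circ \varphi_s^u) \, (J^x \circ \varphi_s^u)}{(J^x \circ \varphi_s^u)^2} = - \, \frac{DJ^x \circ \varphi_s^u}{J^x \circ \varphi_s^u},$$ so that, by the product rule, the two contributions cancel: $$\partial_s h = (DJ^x \circ \varphi_s^u) \, h \, - \, (DJ^x \circ \varphi_s^u) \, h = 0.$$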
Therefore, from [\[relacions-S-edo-2\]](#relacions-S-edo-2){reference-type="eqref" reference="relacions-S-edo-2"} we have $$\int_0^\infty \partial_u (\eta \circ \varphi_s) \, J^x \, ds = \int_0^\infty g(s, u) \, ds = \int_0^\infty \partial_u \eta \circ \varphi_s \, \partial_s \varphi_s^u \, ds,$$ so that the third equality of [\[relacions-S-edo\]](#relacions-S-edo){reference-type="eqref" reference="relacions-S-edo"} is proved. Finally, using [\[relacions-S-edo\]](#relacions-S-edo){reference-type="eqref" reference="relacions-S-edo"} we obtain $$\begin{aligned} \mathcal{S}_{n,J} \circ (\mathcal{S}_{n,J})^{-1} \eta &= -\int_0^\infty \partial_s (\eta \circ \varphi_s ) \, ds = \eta \circ \varphi_0 -\lim_{s \to \infty} \eta \circ \varphi_s = \eta. \end{aligned}$$ Now we check that $\mathcal{S}^{-1}_{n,J}$ is bounded on $\mathcal{Z}_{n+k-1}$. From [\[sol_parabolic_tor_edo\]](#sol_parabolic_tor_edo){reference-type="eqref" reference="sol_parabolic_tor_edo"} and Lemma [Lemma 29](#lema_prop_flux){reference-type="ref" reference="lema_prop_flux"}, one has $$\begin{aligned} \|(\mathcal{S}_{n,J})^{-1} \,\eta\|_n & \leq \, \sup_{S\times \mathbb{T}^{d+d'}_{\sigma'} } \, \frac{1}{|u|^n} \, \int_{0}^{\infty} |(\eta \circ \varphi_s)(u, \theta, \tau)| \, ds \\ & \leq \|\eta \|_{n+k-1} \, \sup_{ S} \, \frac{1}{|u|^n} \, \int_{0}^{\infty} \, \bigg{(} \frac{|u|}{(1+s\mu|u|^{k-1})^{1/(k-1)}}\bigg{)}^{n+k-1} \, ds \\ & = \|\eta \|_{n+k-1} \, \sup_{ S} \, \frac{1}{|u|^n} \cdot |u|^{n+k-1} \cdot \frac{1}{\mu |u|^{k-1}} \cdot \frac{k-1}{n} \, = \, \frac{1}{\mu} \, \frac{k-1}{n} \, \| \eta \|_{n+k-1}, \end{aligned}$$ where we have used $\int_0^\infty (1+ a s)^{-q} \, ds = \frac{1}{a(q-1)}$ with $a = \mu |u|^{k-1}$ and $q = \frac{n+k-1}{k-1}$. ◻ **Definition 31**. *Let $X$ be a vector field satisfying the hypotheses of Theorem [Theorem 5](#teorema_analitic_tors_edo){reference-type="ref" reference="teorema_analitic_tors_edo"}, and let $\check X (x, y, \theta, \tau) = X(x, y, \theta, t)$, defined in $U_\mathbb{C}\times \mathbb{T}^{d+d'}_\sigma$.
Given $n \geq 3$, we introduce $\mathcal{N}_{n, X} = (\mathcal{N}^x_{n,X},\, \mathcal{N}^y_{n,X}, \, \mathcal{N}_{n, X}^\theta): \mathcal{B}_\alpha \subset \mathcal{Z}_n^\times \to \mathcal{Z}_{n+k-1}^\times$, given by $$\begin{aligned} \mathcal{N}_{n, X}^x (f) &= \check \mathcal{K}_n^y [\check c\circ (\check \mathcal{K}_n^\theta + f^\theta, \tau) - \check c \circ (\check \mathcal{K}_n^\theta, \tau ) ] + f^y \, \check c\circ(\check \mathcal{K}_n^\theta + f^\theta, \tau) + \check \mathcal{G}_n^x , \\ \mathcal{N}_{n, X}^y(f) & = \check P \circ (\check \mathcal{K}_n + f, \tau) - \check P \circ (\check \mathcal{K}_n, \tau) + \check \mathcal{G}_n^y ,\\ \mathcal{N}_{n, X}^\theta(f) & = \check Q \circ (\check \mathcal{K}_n + f, \tau) - \check Q \circ (\check \mathcal{K}_n, \tau) + \check \mathcal{G}_n^\theta. \end{aligned}$$* With the previously introduced parameters, the operators $\mathcal{N}_{n, X}$ are Lipschitz. **Lemma 32**. *For each $n \geq 3$, there exists a constant $M_n >0$ such that $$\begin{aligned} \mathrm{Lip\,}\mathcal{N}^x_{n,X} & \leq \sup_{(\theta, \tau) \in \mathbb{T}^{d+d'}_\sigma} |\check c(\theta, \tau )| + M_n \rho, \\ \mathrm{Lip\,}\mathcal{N}^y_{n,X} & \leq k\, \sup_{(\theta, \tau) \in \mathbb{T}^{d+d'}_\sigma} |\check a_k(\theta,\tau)| + M_n \rho, \\ \mathrm{Lip\,}\mathcal{N}^\theta_{n,X} & \leq p \,\sup_{(\theta, \tau) \in \mathbb{T}^{d+d'}_\sigma} | \check d_p(\theta,\tau )| + M_n \rho, \end{aligned}$$ where $\rho$ is the radius of the sector $S(\beta, \, \rho)$.* The proof is completely analogous to the one of Lemma [Lemma 22](#lema_N_tors){reference-type="ref" reference="lema_N_tors"}, the only difference being that here the vector field $\check X$ and the functions of $\mathcal{B}_\alpha$ also depend on $\tau$; it will be omitted. **Definition 33**.
*For $n > 2p-k-1$, we denote by $\mathcal{S}_{n,J}^\times : \mathcal{Z}_n^\times \to \mathcal{Z}_n^\times$ the linear operator defined component-wise as $\mathcal{S}_{n,J}^\times = (\mathcal{S}_{n,J}, \, \mathcal{S}_{n+k-1,J}, \, ( \mathcal{S}_{n+2p-k-1,J})^d )$.* With these operators, we can write equations [\[eqdelta_analitic_tors_edo\]](#eqdelta_analitic_tors_edo){reference-type="eqref" reference="eqdelta_analitic_tors_edo"} as $$\mathcal{S}_{n,J}^\times \, \Delta = \mathcal{N}_{n, X} (\Delta ).$$ Similarly as in Section [4.1](#sec-funcional-maps){reference-type="ref" reference="sec-funcional-maps"}, the inverse operator $(\mathcal{S}_{n,J}^\times)^{-1}$ is given by $$(\mathcal{S}_{n,J}^\times)^{-1} = (\mathcal{S}_{n,J}^{-1}, \, \mathcal{S}_{n+k-1,J}^{-1}, \, (\mathcal{S}_{n+2p-k-1,J}^{-1})^d).$$ **Definition 34**. *Let $X$ be a vector field satisfying the hypotheses of Theorem [Theorem 5](#teorema_analitic_tors_edo){reference-type="ref" reference="teorema_analitic_tors_edo"}, and let $\check X (x, y, \theta, \tau) = X(x, y, \theta, t)$, defined in $U_\mathbb{C}\times \mathbb{T}^d_\sigma \times \mathbb{T}^{d'}_\sigma$, $U_\mathbb{C}\subset \mathbb{C}^2$. Given $n \geq 3$, we define $\mathcal{T}_{n, X} : \mathcal{B}_\alpha \subset \mathcal{Z}_n^\times \to \mathcal{Z}_n^\times$ by $$\mathcal{T}_{n, X} = (\mathcal{S}_{n,J}^\times)^{-1} \circ \mathcal{N}_{n, X}.$$* **Lemma 35**. *There exist $m_0 > 0$ and $\rho_0 >0$ such that if $\rho < \rho_0$, then, for every $n \geq m_0$, we have $\mathcal{T}_{n, X} (\mathcal{B}_\alpha) \subseteq \mathcal{B}_\alpha$ and $\mathcal{T}_{n, X}$ is a contraction operator in $\mathcal{B}_\alpha$.* The proof is completely analogous to the one of Lemma [Lemma 26](#lema_contraccio_tors){reference-type="ref" reference="lema_contraccio_tors"} and will be omitted. 
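The quantitative ingredient behind the bound $\|(\mathcal{S}_{n,J})^{-1}\eta\|_n \le \frac{1}{\mu}\frac{k-1}{n}\|\eta\|_{n+k-1}$ above is the closed-form value $$\int_{0}^{\infty} \bigg(\frac{|u|}{(1+s\mu|u|^{k-1})^{1/(k-1)}}\bigg)^{n+k-1} ds = \frac{k-1}{\mu\, n}\, |u|^n,$$ obtained from the substitution $v = s\mu|u|^{k-1}$. As a sanity check, the following sketch compares the closed form with a direct quadrature; the values of $k$, $n$, $\mu$, $|u|$ are illustrative and not taken from the paper.

```python
# Numerical sanity check of the closed-form integral behind the bound
# ||S^{-1} eta||_n <= (1/mu) * (k-1)/n * ||eta||_{n+k-1}:
#   I(r) = \int_0^\infty ( r / (1 + s*mu*r^{k-1})^{1/(k-1)} )^{n+k-1} ds
#        = (k-1)/(mu*n) * r^n.
# The parameter values below are illustrative, not from the paper.

def integrand(s, r, k, n, mu):
    return (r / (1.0 + s * mu * r**(k - 1)) ** (1.0 / (k - 1))) ** (n + k - 1)

def numeric_integral(r, k, n, mu, cutoff=2000.0, steps=200_000):
    # composite trapezoid rule on [0, cutoff]; the tail beyond the cutoff
    # decays like s^{-(n+k-1)/(k-1)} and is negligible for these values
    h = cutoff / steps
    total = 0.5 * (integrand(0.0, r, k, n, mu) + integrand(cutoff, r, k, n, mu))
    for i in range(1, steps):
        total += integrand(i * h, r, k, n, mu)
    return total * h

k, n, mu, r = 3, 5, 1.0, 0.5
closed_form = (k - 1) / (mu * n) * r**n          # = 0.0125 for these values
numeric = numeric_integral(r, k, n, mu)
print(closed_form, numeric)
```

The agreement of the two values (to quadrature accuracy) confirms the constant $\frac{k-1}{\mu n}$ in the operator bound.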
# Proofs of the main results {#sec-dems-tors} ## The case of maps {#the-case-of-maps} *Proof of Theorem [Theorem 1](#teorema_analitic_tors){reference-type="ref" reference="teorema_analitic_tors"}.* Let $m_0$ be the integer provided by Lemma [Lemma 26](#lema_contraccio_tors){reference-type="ref" reference="lema_contraccio_tors"}, and let $n_0 = \max \{ m_0, \, k + 1\}$. We take the maps $\mathcal{K}_{n_0}$ and $R = \mathcal{R}_{n_0}$ given by Proposition [Theorem 11](#prop_simple_tors){reference-type="ref" reference="prop_simple_tors"}, which satisfy $$\label{dem_analitic_polinomi} \mathcal{G}_{n_0}(u, \theta) = F ( \mathcal{K}_{n_0}(u, \theta )) - \mathcal{K}_{n_0} (R(u, \theta))=(O(u^{n_0+k}), \, O(u^{n_0+2k-1}), \, O(u^{n_0+2p-1})).$$ We will look for $\rho >0$ and a differentiable function $\Delta : [0, \rho) \times \mathbb{T}^d\to \mathbb{R}^2 \times \mathbb{T}^d$, $\Delta$ analytic in $(0, \rho) \times \mathbb{T}^d$, satisfying $$\label{eq_delta_dem_maps_tor} F \circ (\mathcal{K}_{n_0} + \Delta) - (\mathcal{K}_{n_0} + \Delta) \circ R = 0.$$ Next, consider the holomorphic extension of $F$ to a neighborhood $U_\mathbb{C}\times \mathbb{T}^d_\sigma$ of $(0,0) \times \mathbb{T}^d$, where $U_\mathbb{C}\subset \mathbb{C}^2$ contains the closed ball of radius $b>0$, and take $\alpha= \min \, \{\tfrac{1}{2}, \, \tfrac{b}{2}, \, \tfrac{\tilde \sigma}{2}\}$, with $0 < \tilde \sigma < \sigma$. 
With this setting we rewrite [\[eq_delta_dem_maps_tor\]](#eq_delta_dem_maps_tor){reference-type="eqref" reference="eq_delta_dem_maps_tor"} as $$\begin{aligned} \Delta^x \circ R - \Delta^x &= \mathcal{K}_n^y [c\circ (\mathcal{K}_n^\theta + \Delta^\theta) - c \circ \mathcal{K}_n^\theta ] + \Delta^y \, c\circ(\mathcal{K}_n^\theta + \Delta ^\theta) + \mathcal{G}_n^x , \\ \Delta^y \circ R - \Delta^y & = P \circ (\mathcal{K}_n + \Delta) - P \circ \mathcal{K}_n + \mathcal{G}_n^y ,\\ \Delta^\theta \circ R - \Delta^\theta & = Q \circ (\mathcal{K}_n + \Delta) - Q \circ \mathcal{K}_n + \mathcal{G}_n^\theta, \end{aligned}$$ with $\Delta \in \mathcal{B}_\alpha \subset \mathcal{W}_n^ \times$, or using the operators defined in the previous section, $$\Delta = \mathcal{T}_{n_0, \, F} (\Delta), \qquad \Delta \in \mathcal{B}_\alpha.$$ By Lemma [Lemma 26](#lema_contraccio_tors){reference-type="ref" reference="lema_contraccio_tors"}, since $n_0 \geq m_0$, we have that $\mathcal{T}_{n_0, \, F}$ maps $\mathcal{B}_\alpha$ into itself and is a contraction. Then it has a unique fixed point, $\Delta^\infty \in \mathcal{B}_\alpha$. Note that this solution is unique once $\mathcal{K}_{n_0}$ is fixed. Finally, $K = \mathcal{K}_{n_0} + \Delta^\infty$ satisfies the conditions in the statement. The $C^1$ character of $K$ at the origin follows from the order condition of $K$ at 0. ◻ *Proof of Theorem [Theorem 3](#teorema_posteriori_tors){reference-type="ref" reference="teorema_posteriori_tors"}.* Let $m_0$ be the integer provided by Lemma [Lemma 26](#lema_contraccio_tors){reference-type="ref" reference="lema_contraccio_tors"}, and let $n_0 = \max \{ m_0, \, k + 1\}$. We distinguish two cases: the value of $n$ given in the statement is such that $n<n_0$ or $n\ge n_0$. 
In the first case we look for a better approximation $\mathcal{K}^*$ of the form $$\mathcal{K}^* (u, \theta) = \widehat K(u, \theta ) + \sum _{j=n+1} ^{n_0+1} \widehat K_j(u, \theta),$$ with $$\label{formulaK*} \widehat K_j(u, \theta ) = \begin{pmatrix} \overline K_{j}^x u^j + \widetilde K_{j+k-1}^x (\theta) u^{j+k-1} \\ \overline K_{j+k-1}^y u^{j+k-1} + \widetilde K_{j+2k-2}^y (\theta) u^{j+2k-2} \\ \overline K_{j+2p-k-1}^\theta u^{j+2p-k-1} + \widetilde K_{j+2p-2}^\theta (\theta) u^{j+2p - 2} \end{pmatrix}$$ and for $$\mathcal{R}^* (u, \theta) = \widehat R(u, \theta ) + \sum _{j=n+1} ^{n_0+1} \widehat R_j(u),$$ with $$\begin{aligned} \label{formulaR*} \widehat R_j^x (u) = \begin{cases} \delta_{j, k+1} \overline R_{2k-1}^x u^{2k-1} & \quad \text{ if } n \leq k, \\ 0 & \quad \text{ if } n > k, \end{cases} \qquad \ \ \ \widehat R_j^\theta (u) = 0. \end{aligned}$$ In the second case, when $n\ge n_0$, we take $\mathcal{K}^* = \widehat K+ \widehat K_{n+1}$ and $\mathcal{R}^* = \widehat R+ \widehat R_{n+1}$, with $\widehat K_{n+1}$ and $\widehat R_{n+1}$ again as in [\[formulaK\*\]](#formulaK*){reference-type="eqref" reference="formulaK*"} and [\[formulaR\*\]](#formulaR*){reference-type="eqref" reference="formulaR*"}, respectively. We introduce $$\begin{aligned} n^* = \begin{cases} n_0 & \quad \text{ if } n <n_0, \\ n+1 & \quad \text{ if } n \ge n_0. \end{cases} \end{aligned}$$ The coefficients $\widehat K_{j}$ and $\widehat R_{j}$, $n+1\le j \le n_0$, are obtained by imposing the condition $$\label{approx-hyp-n0} F ( \mathcal{K}^* (u, \theta) )- \mathcal{K}^* ( \mathcal{R}^*(u, \theta ) )= (O(u^{n^*+k}), O(u^{n^*+2k-1}), \, O(u^{n^* + 2p-1})) .$$ Indeed, proceeding as in Proposition [Theorem 11](#prop_simple_tors){reference-type="ref" reference="prop_simple_tors"}, we obtain these coefficients iteratively. 
We denote $\mathcal{K}_j(u, \theta ) = \widehat K(u, \theta) + \sum_{m=n+1} ^j \widehat K_m (u, \theta)$ and $\mathcal{R}_j(u, \theta ) = \widehat R (u, \theta ) + \sum_{m=n+1}^j \widehat R_m(u)$ for $j\ge n+1$. In the iterative step we have $$\label{approx-hyp-j} F ( \mathcal{K}_j (u , \theta )) - \mathcal{K}_j ( \mathcal{R}_j( u, \theta )) = (O(u^{j+k}), O(u^{j+2k-1}), \, O(u^{j+2p-1})) .$$ Then, $$\begin{aligned} F(\mathcal{K}_j(u, \theta ) + \widehat K_{j+1} (u, \theta )) - & (\mathcal{K}_j+ \widehat K _{j+1})\circ (\mathcal{R}_j(u, \theta) + \widehat R_{j+1}(u) ) \\ = & F(\mathcal{K}_j (u, \theta )) - \mathcal{K}_j (\mathcal{R}_j(u, \theta) ) \\ & + DF(\mathcal{K}_j (u, \theta ))\widehat K_{j+1}(u, \theta) - \widehat K_{j+1} (\mathcal{R}_j(u, \theta )+\widehat R_{j+1} (u)) \\ & + \int_0^1 (1-s) D^2F(\mathcal{K}_j(u, \theta ) + s \widehat K_{j+1}(u, \theta) ) \, ds \, (\widehat K_{j+1}(u, \theta))^{\otimes 2} \\ & -D \mathcal{K}_j(\mathcal{R}_j(u, \theta ) )\widehat R_{j+1} (u) \\ & - \int_0^1 (1-s) D^2\mathcal{K}_j (\mathcal{R}_j(u, \theta ) + s \widehat R_{j+1}(u)) \, ds \, (\widehat R_{j+1}(u))^{\otimes 2} . \end{aligned}$$ The condition $$\label{approx-hyp-tor} F ( \mathcal{K}_{j+1} (u, \theta )) - \mathcal{K}_{j+1} ( \mathcal{R}_{j+1}(u, \theta )) = (O(u^{j+k+1}), O(u^{j+2k}), \, O(u^{j+2p}))$$ leads to equations [\[SDequations_induccio_sim\]](#SDequations_induccio_sim){reference-type="eqref" reference="SDequations_induccio_sim"} and [\[sist_lineal_tors_sim\]](#sist_lineal_tors_sim){reference-type="eqref" reference="sist_lineal_tors_sim"} in Proposition [Theorem 11](#prop_simple_tors){reference-type="ref" reference="prop_simple_tors"}, which we solve in the same way. 
From this point we can proceed as in the proof of Theorem [Theorem 1](#teorema_analitic_tors){reference-type="ref" reference="teorema_analitic_tors"} and look for $\Delta \in \mathcal{B}_\alpha \subset \mathcal{W}_n^\times$ such that the pair $K= \mathcal{K}^* + \Delta$, $R=\mathcal{R}^*$ satisfies $F \circ K = K \circ R$. Finally, for the map $K$, we also have $$\begin{aligned} K(u, \theta ) - \widehat K(u, \theta) & = \mathcal{K}^*(u, \theta) - \widehat K(u, \theta) + \Delta(u, \theta) \\ & = \sum _{j=n+1} ^{n^*} \widehat K_j(u, \theta) + \Delta(u, \theta) \\ & = (O(u^{n+1}), O(u^{n+k}), \, O(u^{n+2p-k})) +(O(u^{n^*}), O(u^{n^*+k-1}), \, O(u^{n^*+2p-k-1})). \end{aligned}$$ Since $n^* \ge n+1$ we have $n+2p - k \leq n^* + 2p - k - 1$, and therefore, $$K(u, \theta ) - \widehat K(u, \theta) = (O(u^{n+1}), O(u^{n+k}), \, O(u^{n+2p-k})).$$ For the map $R$ we have $$\begin{aligned} R(u, \theta) - \widehat R(u, \theta) & = \mathcal{R}^*(u, \theta) - \widehat R(u, \theta) = \sum _{j=n+1} ^{n^*} \widehat R_j(u) =\left\{ \begin{array}{ll} ( O(u^{2k-1}) , 0 ) & \quad \text{ if }\; n\le k, \\ (0, 0) & \quad \text{ if } \; n>k. \end{array} \right. \end{aligned}$$ ◻ ## The case of vector fields {#the-case-of-vector-fields} *Proof of Theorem [Theorem 5](#teorema_analitic_tors_edo){reference-type="ref" reference="teorema_analitic_tors_edo"}.* Let $m_0$ be the integer provided by Lemma [Lemma 35](#lema_contraccio_tors_edo){reference-type="ref" reference="lema_contraccio_tors_edo"}, and let $n_0 = \max \{ m_0, \, k + 1\}$. 
We take the approximations $\mathcal{K}_{n_0}$ and $Y = \mathcal{Y}_{n_0}$ given by Proposition [Theorem 15](#prop_tors_edo_sim){reference-type="ref" reference="prop_tors_edo_sim"}, which satisfy $$\begin{aligned} \mathcal{G}_{n_0}(u, \theta, t) & = X ( \mathcal{K}_{n_0}(u, \theta, t ), t) - \partial_{(u, \theta)}\mathcal{K}_{n_0} (u, \theta, t) \cdot Y(u, \theta, t) - \partial_t \mathcal{K}_{n_0} (u, \theta, t)\\ & =(O(u^{n_0+k}), \, O(u^{n_0+2k-1}), \, O(u^{n_0+2p-1})). \end{aligned}$$ We will look for $\rho >0$ and a function $\Delta : [0, \rho) \times \mathbb{T}^d \times \mathbb{R}\to \mathbb{R}^2 \times \mathbb{T}^d$, $\Delta$ analytic in $(0, \rho) \times \mathbb{T}^d \times \mathbb{R}$, satisfying $$\label{eqdelta_dem_tor_edo} X \circ (\mathcal{K}_{n_0} + \Delta, t) - \partial_{(u, \theta)} (\mathcal{K}_{n_0} + \Delta) \cdot Y - \partial_t (\mathcal{K}_{n_0} + \Delta) = 0.$$ Let $\check X (x, y, \theta, \tau) = X(x, y, \theta, t)$ be the hull function of $X$ and consider the holomorphic extension of $\check X$ to a neighborhood $U_\mathbb{C}\times \mathbb{T}^{d+d'}_\sigma$ of $(0,0) \times \mathbb{T}^{d+d'}$, where $U_\mathbb{C}\subset \mathbb{C}^2$ contains the closed ball of radius $b>0$, and we also take $\alpha= \min \, \{\tfrac{1}{2}, \, \tfrac{b}{2}, \, \tfrac{\tilde \sigma}{2}\}$ with $0 < \tilde \sigma < \sigma$. 
This setting allows us to rewrite [\[eqdelta_dem_tor_edo\]](#eqdelta_dem_tor_edo){reference-type="eqref" reference="eqdelta_dem_tor_edo"} as $$\begin{aligned} D \Delta^x \cdot J &= \check \mathcal{K}_n^y [\check c\circ (\check \mathcal{K}_n^\theta + \Delta^\theta, \tau ) - \check c \circ (\check \mathcal{K}_n^\theta, \tau) ] + \Delta^y \, \check c\circ(\check \mathcal{K}_n^\theta + \Delta ^\theta, \tau) + \check \mathcal{G}_n^x , \\ D \Delta^y \cdot J & = P \circ (\check \mathcal{K}_n + \Delta, \tau) - P \circ (\check \mathcal{K}_n , \tau)+ \check \mathcal{G}_n^y ,\\ D \Delta^\theta \cdot J & = Q \circ (\check \mathcal{K}_n + \Delta, \tau) - Q \circ (\check \mathcal{K}_n , \tau)+ \check \mathcal{G}_n^\theta, \end{aligned}$$ with $\Delta \in \mathcal{B}_\alpha \subset \mathcal{Z}_n^\times$, or using the operators defined in the vector field setting, $$\Delta = \mathcal{T}_{n_0,X} (\Delta), \qquad \Delta \in \mathcal{B}_\alpha.$$ By Lemma [Lemma 35](#lema_contraccio_tors_edo){reference-type="ref" reference="lema_contraccio_tors_edo"}, since $n_0 \geq m_0$, we have that $\mathcal{T}_{n_0,X}$ maps $\mathcal{B}_\alpha$ into itself and is a contraction. Then it has a unique fixed point, $\Delta^\infty \in \mathcal{B}_\alpha$. Note that this solution is unique once $\check \mathcal{K}_{n_0}$ is fixed. Finally, we take $\widetilde\Delta^\infty (u, \theta, t) = \Delta^{\infty} (u, \theta, \tau)$, and then $K = \mathcal{K}_{n_0} + \widetilde\Delta^\infty$ satisfies the conditions in the statement. Again, the $C^1$ character of $K$ at the origin follows from the order condition of $K$ at $u= 0$. ◻ *Proof of Theorem [Theorem 6](#teorema_posteriori_tors_edo){reference-type="ref" reference="teorema_posteriori_tors_edo"}.* The proof is completely analogous to the one of Theorem [Theorem 3](#teorema_posteriori_tors){reference-type="ref" reference="teorema_posteriori_tors"}, taking into account that now we are in the vector field setting. 
In the last step we use the same argument as in the proof of Theorem [Theorem 5](#teorema_analitic_tors_edo){reference-type="ref" reference="teorema_analitic_tors_edo"}. ◻ # Appendix ## Proof of Theorem [Theorem 4](#thm:unicitat){reference-type="ref" reference="thm:unicitat"} {#proof-of-theorem-thmunicitat} The proof consists in performing a number of changes of variables that transform the map into a new map whose $x$- and $y$-components are independent of the angles up to and including order $k$, and which has the form of the maps studied in [@fontich99]. Then we can use the inequalities obtained in that paper to get the uniqueness. We write the map in the form $$\label{forma_normal_tor-unicitat} F \begin{pmatrix} x\\ y\\ \theta \end{pmatrix} = \begin{pmatrix} x + c(\theta) y \\ y + a_k(\theta)x^k + A_{k-1}(x, y, \theta)y + A_{k+1}(x, y, \theta) \\ \theta + \omega +d_p(\theta)x^p + B_{p-1}(x, y, \theta)y + B_{p+1}(x, y, \theta) \end{pmatrix},$$ where $A_{k-1}$ and $B_{p-1}$ are homogeneous polynomials of degree $k-1$ and $p-1$ in $x,\,y$, respectively, depending on $\theta$, and $A_{k+1}$ and $B_{p+1}$ are functions of order $k+1$ and $p+1$, respectively. We recall that $2p>k-1$. Moreover, it is convenient to assume that $p\le k$. If not, terms of order $p$ can be put in a remainder of order $k+1$. In this proof we will write $O_j$ for a term of order $j$ in the variables $x, y$ and $O(x^\ell y^m)$ for a term of order $x^\ell y^m$. Both terms may depend on $\theta$. We start by performing some averaging steps. We consider changes of the form $$\begin{aligned} \Phi_1(x,y,\theta) & = (x+\phi(\theta) x^\ell y^m, y, \theta), \label{canviphi1}\\ \Phi_2(x,y,\theta) & = (x, y+\phi(\theta) x^\ell y^m, \theta), \label{canviphi2}\\ \Phi_3(x,y,\theta) & = (x, y, \theta+\phi(\theta) x^\ell y^m) \label{canviphi3},\end{aligned}$$ to average the monomials of order $x^\ell y^m$ in the $x$, $y$ and $\theta$ components, respectively. 
These changes may introduce new terms of order greater than or equal to $\ell+m$ in any component. Below we study the terms that appear. First we do a change of the form [\[canviphi1\]](#canviphi1){reference-type="eqref" reference="canviphi1"} with $\ell=0$ and $m=1$ to average the term $c(\theta)y$. We obtain $$\begin{aligned} F^{(1)} \begin{pmatrix} x \\ y \\ \theta \end{pmatrix} & = \begin{pmatrix} x + \phi(\theta) y + c(\theta) y - \phi(\hat \theta )(y+a_k(\theta)x^k+ yO_{k-1}+O_{k+1} )\\ y + a_k(\theta)(x+\phi(\theta) y)^k + A_{k-1}(\hat x, y, \theta)y + A_{k+1}(\hat x, y, \theta) \\ \theta + \omega +d_p(\theta)(x+\phi(\theta) y)^p + B_{p-1}(\hat x, y, \theta)y + B_{p+1}(\hat x, y, \theta) \end{pmatrix},\end{aligned}$$ where $\hat x = x + \phi(\theta) y$ and $\hat \theta=\theta + \omega +d_p(\theta)(x+\phi(\theta) y)^p + B_{p-1}(\hat x, y, \theta)y + B_{p+1}(\hat x, y, \theta)$. We can rewrite the first component as $$\begin{aligned} x & + \Big(\phi(\theta) + c(\theta) - \phi(\theta + \omega) \Big)y + \Big(\phi(\theta + \omega) - \phi(\hat \theta ) \Big)y - \phi(\hat \theta ) O(x^k) + yO_{k-1}+O_{k+1}, \end{aligned}$$ and writing $c=\bar c + \tilde c$, by the small divisors lemma there exists a unique zero-average function $\phi$ such that $$\phi(\theta + \omega) - \phi(\theta) = \tilde c(\theta)$$ and, taking $\phi$ to be this solution, the first component becomes $$\begin{aligned} x & + \bar c y + y (O_{p}+ O_{k-1})+O_{k} =: x + \bar c y + C_p( x, y, \theta) y + C_k( x, y, \theta).\end{aligned}$$ The second and third components have the same structure and the same lower-order terms $y + a_k(\theta)x^k$ and $\theta + \omega +d_p(\theta)x^p$, respectively, as $F$. 
Thus we have $$\label{primer-pas-averaging} F^{(1)} \begin{pmatrix} x\\ y\\ \theta \end{pmatrix} = \begin{pmatrix} x + \bar c y + C_p( x, y, \theta) y + C_k( x, y, \theta) \\ y + a_k(\theta)x^k + A^{}_{k-1}(x, y, \theta)y + A_{k+1}(x, y, \theta) \\ \theta + \omega +d_p(\theta)x^p + B_{p-1}(x, y, \theta)y + B_{p+1}(x, y, \theta) \end{pmatrix},$$ with new functions $A$'s and $B$'s of the same form as the ones in [\[forma_normal_tor-unicitat\]](#forma_normal_tor-unicitat){reference-type="eqref" reference="forma_normal_tor-unicitat"}. The changes $\Phi_1$, $\Phi_2$ and $\Phi_3$ applied to $F^{(1)}$ give new maps with the same structure and with coefficients averaged, provided we choose a suitable $\phi$. Concretely, the change $\Phi_1$, used for $\ell+m \ge p+1$, produces $$\begin{aligned} &\Phi_1^{-1} \circ F^{(1)} \circ \Phi_1 \begin{pmatrix} x \\ y \\ \theta \end{pmatrix} \\ = & \begin{pmatrix} x + \phi(\theta) x^\ell y^m + \bar c y + C_{p}(\hat x, y, \theta)y+ C_k(\hat x, y, \theta)- \phi(\hat \theta)(x+ \bar c y+O_{\ell+m} )^\ell(y+O(x^k)) ^m+O_{k+1} \\ y + a_k(\theta)(x+\phi(\theta) x^\ell y^m)^k + A_{k-1}(\hat x, y, \theta)y + A_{k+1}(\hat x, y, \theta) \\ \theta + \omega +d_p(\theta)(x+\phi(\theta) x^\ell y^m)^p + B_{p-1}(\hat x, y, \theta)y + B_{p+1}(\hat x, y, \theta) \end{pmatrix},\end{aligned}$$ where $\hat x= x + \phi(\theta) x^\ell y^m$ and $\hat \theta = \theta + \omega +d_p(\theta)(x+\phi(\theta) x^\ell y^m)^p + B_{p-1}(\hat x, y, \theta)y + B_{p+1}(\hat x, y, \theta)$. 
The change $\Phi_2$, used for $\ell+m = k$, produces $$\begin{aligned} &\Phi_2^{-1} \circ F^{(1)} \circ \Phi_2 \begin{pmatrix} x \\ y \\ \theta \end{pmatrix} \\ & = \begin{pmatrix} x + \bar c \hat y +C_{p}( x,\hat y, \theta)\hat y+ C_k( x, \hat y, \theta) \\ y + \phi(\theta) x^\ell y^m+ a_k(\theta)x^k + A_{k-1}(x,\hat y, \theta)y + A_{k+1}( x, \hat y, \theta) -\phi(\hat \theta) (x + \bar c \hat y )^\ell y ^m + O_{k+1}\\ \theta + \omega +d_p(\theta)x^p + B_{p-1}( x, \hat y, \theta)\hat y + B_{p+1}(x, \hat y, \theta) \end{pmatrix},\end{aligned}$$ where $\hat y = y + \phi(\theta) x^\ell y^m$ and $\hat \theta= \theta + \omega +d_p(\theta)x^p + B_{p-1}( x, \hat y, \theta)\hat y + B_{p+1}(x, \hat y, \theta)$. And the change $\Phi_3$, used when $\ell+m \ge p$, produces $$\begin{aligned} &\Phi_3^{-1} \circ F^{(1)} \circ \Phi_3 \begin{pmatrix} x \\ y \\ \theta \end{pmatrix} \\ & = \begin{pmatrix} x + \bar c y + C_{p}(x, y, \hat \theta) y + C_{k}(x, y, \hat \theta) \\ y + a_k(\hat \theta)x ^k + A_{k-1}(x, y,\hat \theta)y + A_{k+1}(x, y, \hat \theta) \\ \theta + \phi(\theta) x^\ell y^m+ \omega +d_p(\hat \theta)x^p + B_{p-1}(x, y,\hat \theta)y + B_{p+1}(x, y, \hat \theta) - \phi(\hat \theta+ \omega ) (x+\bar c y)^\ell y^m+O_{k} \end{pmatrix},\end{aligned}$$ where $\hat \theta= \theta + \phi(\theta) x^\ell y^m$. Now we do several changes of the form $\Phi_3$ to average the terms of order $p$ of the third component. This may introduce new terms of the same order but with a higher value of the exponent of $y$. In this procedure we use the small divisors lemma in a completely analogous way to the one used to arrive at [\[primer-pas-averaging\]](#primer-pas-averaging){reference-type="eqref" reference="primer-pas-averaging"}. Thus, we start averaging the term $x^p$, then the term $x^{p-1}y$, and so on until the term $y^p$, which can be averaged without introducing new terms of order $p$. 
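Each averaging step above reduces to a cohomological equation $\phi(\theta+\omega)-\phi(\theta)=\tilde c(\theta)$ with $\tilde c$ of zero average, solved by the small divisors lemma through the Fourier coefficients $\hat\phi_m = \hat{\tilde c}_m/(e^{i\langle m,\omega\rangle}-1)$, $m\neq 0$. A minimal sketch in one angle variable, with the illustrative choices $\tilde c(\theta)=\cos\theta$ and $\omega$ the golden-mean rotation (not data from the paper):

```python
import cmath, math

# Solve phi(theta + omega) - phi(theta) = cos(theta) via Fourier coefficients:
# cos(theta) = (1/2) e^{i theta} + (1/2) e^{-i theta}, so the only nonzero
# coefficients of phi are
#   phi_hat_{+1} = (1/2) / (e^{ i omega} - 1),
#   phi_hat_{-1} = (1/2) / (e^{-i omega} - 1).
omega = 2 * math.pi * (math.sqrt(5) - 1) / 2   # Diophantine rotation number

def phi(theta):
    plus  = 0.5 * cmath.exp( 1j * theta) / (cmath.exp( 1j * omega) - 1)
    minus = 0.5 * cmath.exp(-1j * theta) / (cmath.exp(-1j * omega) - 1)
    return (plus + minus).real  # the two terms are conjugate, so phi is real

# Check the cohomological equation on a grid of angles.
err = max(abs(phi(t + omega) - phi(t) - math.cos(t))
          for t in [0.1 * j for j in range(63)])
print(err)
```

With a single harmonic the divisors $e^{\pm i\omega}-1$ are harmless; the Diophantine condition on $\omega$ is what controls them uniformly when $\tilde c$ has a full Fourier series.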
These changes introduce terms of order $yO_{2p}$ in the first component and terms of order $yO_{k+p-1}$ in the second one. Since $2p > k-1$, both kinds of terms are of order $k+1$ or higher. Then we proceed with changes of the form $\Phi_3$ and $\Phi_1$ to average the terms of order $p+1$ and higher in the first component, starting with the term $x^j$ and ending with $y^j$ when dealing with degree $j$, $p+1\le j\le k$. The changes $\Phi_1$, when averaging a term of order $x^\ell y ^m$, introduce new terms of order $x^{\ell+i} y ^{m-i}$, $i\ge 1$ (possibly depending on $\theta$). Moreover, when $\ell+m= p+1$ they introduce terms $yO_{k-1}$ in the third component. At order $\ell+m> p+1$ they introduce terms of order $O_{k+1}$, which can be disregarded. In either case they maintain the structure of the second component. Similarly, the changes $\Phi_3$, when averaging a term of order $x^\ell y ^m$ in the third component, add terms of order $x^{\ell +i} y^{m-i}$, $i\ge 1$, in the third component. Also, when $\ell+m= p$ they may introduce a term of order $k$ in the third component. That is why we proceed in the order indicated in the previous paragraph. Finally, we perform changes of the form $\Phi_2$ to average the terms of order $k$ of the second component without changing the already obtained terms of order less than or equal to $k$. After these changes we arrive at a map of the form $$\label{mapa-promitjat} \widehat F \begin{pmatrix} x\\ y\\ \theta \end{pmatrix} = \begin{pmatrix} x + \bar c y + \widehat C_{p}(x,y)y+ c_k x^k+\widehat C_{k+1}(x, y, \theta) \\ y + \bar a_k x^k + \widehat A_{k-1}(x, y)y + \widehat A_{k+1}(x, y, \theta) \\ \theta + \omega +\bar d_px^p + \widehat B_{p-1}(x, y)y + \widehat B_{k}(x, y, \theta) \end{pmatrix}.$$ Now we perform a change $(x, y, \theta)\mapsto (\bar c x, y, \theta)$, which maintains the same form and changes the constant $\bar c$ to $1$. 
Next, we consider the related two-dimensional map (independent of the angles) $$%\label{forma_normal_tor} G \begin{pmatrix} x\\ y \end{pmatrix} = \begin{pmatrix} x + y + \widehat C_p(x,y)y + c_k x^k \\ y + \bar a_k x^k + \widehat A_{k-1}(x, y) y \end{pmatrix},$$ and we perform the changes that transform it into the normal form $$N \begin{pmatrix} x\\ y \end{pmatrix} = \begin{pmatrix} x + y \\ y + a_k x^k +\dots + b_\ell x^{\ell-1} y + \dots \end{pmatrix} + O_{k+1}$$ given in [@fontich99]. The change to this normal form is known, and it is described in detail in [@fontich99]. To arrive at the normal form one has to perform a sequence of changes of the form $$\label{canviC} C \begin{pmatrix} \xi \\ \eta \end{pmatrix} = \begin{pmatrix} \xi + \Phi(\xi, \eta) \\ \eta + \Psi(\xi, \eta) \end{pmatrix},$$ where $\Phi$ and $\Psi$ are homogeneous polynomials of degree $j\ge 2$, to remove as many monomials of degree $j$ as possible. The inverse is $$\label{canviCinversa} C^{-1} \begin{pmatrix} \xi \\ \eta \end{pmatrix} = \begin{pmatrix} \xi - \Phi(\xi, \eta) \\ \eta - \Psi(\xi, \eta) \end{pmatrix} + O_{2j-1}.$$ We will use changes of order $j\ge p+1$. Then $2j-1\ge 2p+1 >k$, so that in our computations these terms do not play any role. With these changes one can remove all terms of order $j$ except, in general, the terms $x^j$ and $x^{j-1} y$ in the second component of the map. We claim that in our case we can remove all terms of the first component (except the linear ones) without adding new terms in the second component. Indeed, assume inductively that $G$ has the form $$G \begin{pmatrix} x\\ y \end{pmatrix} = \begin{pmatrix} x + y + \sum_{i=j}^k E_{i}(x,y) \\ y + \bar a_k x^k + A_{k-1}(x, y) y \end{pmatrix} + O_{k+1},$$ where $E_{i}$ is a homogeneous polynomial of degree $i$, $p+1\le i\le k$. It is very important to note that when $i\le k-1$, $E_{i}(x,y) = y O_{i-1}$. 
Doing a change of the form [\[canviC\]](#canviC){reference-type="eqref" reference="canviC"} we obtain $$\begin{aligned} C^{-1} & \circ G \circ C \begin{pmatrix} x\\ y \end{pmatrix} = \begin{pmatrix} x + \Phi + y +\Psi - \Phi (x + y , y )+ \sum_{i=j}^k E_{i}(x ,y ) \\ y +\Psi - \Psi (x + y , y ) \end{pmatrix} + O_{k+1} .\end{aligned}$$ If we want to remove all terms of order $j$ we need to solve $$\begin{aligned} \Phi(x ,y ) +\Psi(x ,y ) - \Phi (x + y , y )+ E_j(x ,y ) & =0,\\ \Psi(x ,y ) - \Psi (x + y , y ) & =0.\end{aligned}$$ Since these equations involve homogeneous polynomials of degree $j$, we actually have a linear system for the coefficients of $\Phi$ and $\Psi$, given the coefficients of $E_j$. This system is studied in detail in [@fontich99]. We emphasize that here $E_j$ does not contain the term $x^j$. We can thus take $\Psi =0$ and solve the first equation, taking into account the mentioned property. When $j=k$ we have $$\begin{aligned} \Phi(x ,y ) +\Psi(x ,y ) - \Phi (x + y , y )+ E_k(x ,y ) & =0,\\ \Psi(x ,y ) - \Psi (x + y , y ) + a_kx^k + \widehat A_{k-1}(x,y)y & =0.\end{aligned}$$ Now we are in the general situation and we get that the order $k$ terms of the normal form are $$\begin{pmatrix} 0\\ a_kx^k +b_{k-1}x^{k-1}y \end{pmatrix}.$$ In Section 4 of [@fontich99] it is proved that the (parabolic) stable manifold of the map $$\begin{pmatrix} x+y \\ y+ a_kx^k +b_{k-1}x^{k-1}y \end{pmatrix} + O_{k+1}$$ is unique. Let $C$ be the change that puts $G$ in the normal form $N$. 
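When $E_j$ contains no $x^j$ term, taking $\Psi=0$ reduces the first equation to $\Phi(x,y)-\Phi(x+y,y)=-E_j(x,y)$. Writing $\Phi=\sum_{i=0}^{j}\phi_i\, x^{j-i}y^i$, $E_j=\sum_{m=0}^{j} e_m\, x^{j-m}y^m$ and expanding $(x+y)^{j-i}$, this becomes the triangular system $\sum_{i=0}^{m-1}\binom{j-i}{m-i}\phi_i = e_m$, $1\le m\le j$, solved by forward substitution with $\phi_j$ free. A sketch in exact arithmetic, for the illustrative choice $j=3$, $E_3=x^2y$ (an example, not data from the paper):

```python
from fractions import Fraction
from math import comb

def solve_phi(e):
    # Solve Phi(x,y) - Phi(x+y,y) = -E_j for homogeneous Phi of degree j,
    # with coefficients indexed by the monomials x^{j-i} y^i, i = 0..j.
    # Requires e[0] == 0 (no x^j term in E_j); phi_j is free and set to 0.
    j = len(e) - 1
    assert e[0] == 0
    phi = [Fraction(0)] * (j + 1)
    for m in range(1, j + 1):
        s = sum(comb(j - i, m - i) * phi[i] for i in range(m - 1))
        phi[m - 1] = (e[m] - s) / (j - m + 1)   # diagonal entry binom(j-m+1, 1)
    return phi

def apply_L(phi):
    # Coefficients of Phi(x,y) - Phi(x+y,y), expanding (x+y)^{j-i} binomially.
    j = len(phi) - 1
    return [phi[m] - sum(comb(j - i, m - i) * phi[i] for i in range(m + 1))
            for m in range(j + 1)]

e = [Fraction(0), Fraction(1), Fraction(0), Fraction(0)]   # E_3 = x^2 y
phi = solve_phi(e)
print(phi)        # coefficients 1/3, -1/2, 1/6, 0
print(apply_L(phi))   # equals -e, i.e. Phi - Phi(x+y, y) = -x^2 y
```

Here $\Phi = \tfrac13 x^3 - \tfrac12 x^2y + \tfrac16 xy^2$ indeed satisfies $\Phi(x,y)-\Phi(x+y,y)=-x^2y$, illustrating why the $x^j$-free property of $E_j$ is exactly the solvability condition.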
Consider the change $(x,y,\theta ) \mapsto (C(x,y),\theta)$ that transforms the map $\widehat F$ in [\[mapa-promitjat\]](#mapa-promitjat){reference-type="eqref" reference="mapa-promitjat"} into a new map $$\label{mapa-promitjatinormalitzat} \check F \begin{pmatrix} x\\ y\\ \theta \end{pmatrix} = \begin{pmatrix} x + \bar c y + \check C_{k+1}(x, y, \theta) \\ y + \bar a_k x^k + b_{k-1}x^{k-1} y + \check A_{k+1}(x, y, \theta) \\ \theta + \omega +\bar d_px^p + \check B_{p-1}(x, y)y + \check B_{k}(x, y, \theta) \end{pmatrix}.$$ The remainders of the first and second components are of order $k+1$ and uniformly bounded with respect to $\theta$. Then, all bounds of Section 4 of [@fontich99] are also valid here for the first and second components of the iterates of $\check F$, and we have uniqueness of the stable manifold of $\check F$. Undoing the (close-to-the-identity) change, the stable manifold of $F$ is also unique. ## Proof of Theorem [Theorem 8](#teorema_analitic_helicoure){reference-type="ref" reference="teorema_analitic_helicoure"} {#App-A1} The proof of Theorem [Theorem 8](#teorema_analitic_helicoure){reference-type="ref" reference="teorema_analitic_helicoure"} is completely analogous to the one of Theorem [Theorem 5](#teorema_analitic_tors_edo){reference-type="ref" reference="teorema_analitic_tors_edo"}. However, for the convenience of the reader we sketch here an overview of the proof and of the spaces and operators that have to be used. An argument analogous to the one in the proof of Proposition [Theorem 15](#prop_tors_edo_sim){reference-type="ref" reference="prop_tors_edo_sim"} provides the expressions of the first coefficients of the parameterizations of $K$ and $Y$, namely those given in [\[coefs1_th_helicoure\]](#coefs1_th_helicoure){reference-type="eqref" reference="coefs1_th_helicoure"}. 
Proceeding inductively as described in the proof of Proposition [Theorem 15](#prop_tors_edo_sim){reference-type="ref" reference="prop_tors_edo_sim"} we obtain that given $n$ there exist $\mathcal{K}_n$ and $\mathcal{Y}_n = Y$ such that $$\label{anex_una} X \circ \mathcal{K}_n - D \mathcal{K}_n \cdot Y = \mathcal{G}_n,$$ with $\mathcal{G}_n(u, \theta) = (O(u^{n+2}), \,O(u^{n+3}), O(u^{n+2}))$. Then we look for a $C^1$ function, $\Delta = \Delta (u, \theta)$, $\Delta: [0, \, \rho) \times \mathbb{T}^d \to \mathbb{R}^2 \times \mathbb{T}^d$, analytic in $(0, \, \rho) \times \mathbb{T}^d$, satisfying $$\label{anex_dues} \ X \circ ( \mathcal{K}_n + \Delta) - D (\mathcal{K}_n + \Delta) \cdot Y = 0.$$ Moreover, we ask $\Delta$ to be of the form $\Delta = (\Delta^x, \Delta^y, \Delta^\theta) = (O(u^n), \, O(u^{n+1}), \, O(u^{n}))$. Using [\[anex_una\]](#anex_una){reference-type="eqref" reference="anex_una"} we can rewrite [\[anex_dues\]](#anex_dues){reference-type="eqref" reference="anex_dues"} as $$\label{funcional_anex} D \Delta \cdot Y = X \circ ( \mathcal{K}_n + \Delta ) - X \circ \mathcal{K}_n - \mathcal{G}_n,$$ which is the functional equation that needs to be solved. We fix $0< \beta <\pi$ and we consider the sector $S(\beta, \rho)$ for some $0 < \rho < 1$. We take the Banach spaces, for $n \in \mathbb{N}$, defined as $$\begin{aligned} \mathcal{Z}_{n} & = \bigg{\{} f : S \times \mathbb{T}^d_\sigma \rightarrow \mathbb{C}\ | \ f \text{ real analytic,} \ \|f\|_n: = \sup_{(u, \theta) \in S \times \mathbb{T}^d_\sigma } \frac{|f(u, \theta)|}{|u|^n} < \infty \bigg{\}},\end{aligned}$$ and we set equation [\[funcional_anex\]](#funcional_anex){reference-type="eqref" reference="funcional_anex"} in the ball $\mathcal{B}_\alpha \subset \mathcal{Z}_n^\times = \mathcal{Z}_n \times \mathcal{Z}_{n+1} \times \mathcal{Z}_n$, endowed with the product norm. 
We define the operators $\mathcal{S}_n : \mathcal{Z}_n \to \mathcal{Z}_n$ and $\mathcal{N}_n : \mathcal{Z}_{n}^\times \to \mathcal{Z}_{n+1}^\times$ analogously to Definitions [Definition 28](#def_S_tor_edo){reference-type="ref" reference="def_S_tor_edo"} and [Definition 31](#def_N_analitic_tor_edo){reference-type="ref" reference="def_N_analitic_tor_edo"}, and we obtain the bounds $$\| \mathcal{S}_n^ {-1} \| \leq \frac{1}{\mu \, n}, \qquad \mu \in (0, 2 \overline Y_2^x \cos(\beta/2)), \qquad n \geq 1,$$ and $$\begin{aligned} &\mathrm{Lip\,}\mathcal{N}_n^x \leq \sup_{\theta \in \mathbb{T}^d_{\sigma}} |c(\theta)| + M_n \rho, \\ &\mathrm{Lip\,}\mathcal{N}_n^y \leq \max\{\sup_{\theta \in \mathbb{T}^d_{\sigma}} |b(\theta)|, \sup_{\theta \in \mathbb{T}^d_{\sigma}} b(\theta)/2c(\theta) \} + M_n \rho, \\ &\mathrm{Lip\,}\mathcal{N}_n^\theta \leq \sup_{\theta \in \mathbb{T}^d_{\sigma}} |d(\theta)| + M_n \rho. \end{aligned}$$ Finally, we have that $\mathcal{T}_n = \mathcal{S}_n^{-1} \circ \mathcal{N}_n$ is contractive in $\mathcal{B}_\alpha$, which provides a solution, $\Delta$, to [\[anex_dues\]](#anex_dues){reference-type="eqref" reference="anex_dues"} and concludes the proof of Theorem [Theorem 8](#teorema_analitic_helicoure){reference-type="ref" reference="teorema_analitic_helicoure"}. ## Unstable manifolds {#subsec_inestable_tors} The results of this paper concern the existence of stable invariant manifolds. However, completely analogous results to Theorems [Theorem 1](#teorema_analitic_tors){reference-type="ref" reference="teorema_analitic_tors"} and [Theorem 5](#teorema_analitic_tors_edo){reference-type="ref" reference="teorema_analitic_tors_edo"} hold true for the existence of unstable manifolds assuming that $\overline R_k^x >0$ and $\overline Y_k^x>0$, respectively. 
Moreover, a formal approximation $\mathcal{K}_n$ obtained in Propositions [Theorem 11](#prop_simple_tors){reference-type="ref" reference="prop_simple_tors"} and [Theorem 15](#prop_tors_edo_sim){reference-type="ref" reference="prop_tors_edo_sim"}, with $\overline R_k^x >0$ and $\overline Y_k^x>0$, respectively, is an approximation of a parameterization of a true unstable manifold. The results of existence of unstable manifolds for vector fields are obtained by simply reversing time, $t \mapsto -t$, and applying the existence results to the new vector field, as we have already done in the applications in Section [2.3](#sec-appl-tors){reference-type="ref" reference="sec-appl-tors"}. For the case of maps, if $F$ satisfies the hypotheses of Theorem [Theorem 1](#teorema_analitic_tors){reference-type="ref" reference="teorema_analitic_tors"}, the results for the unstable manifolds are obtained from the stated theorem without having to compute explicitly the inverse map $F^{-1}$. We show it in this section. To clarify the notation, we will refer to $\mathcal{K}_n^-$ and $\mathcal{R}_n^-$ as approximations of the parametrizations obtained in Proposition [Theorem 11](#prop_simple_tors){reference-type="ref" reference="prop_simple_tors"} corresponding to the stable manifold and the restricted dynamics on it (namely, with $\overline R_k^x <0$), and to $\mathcal{K}_n^+$ and $\mathcal{R}_n^+$ as the parameterizations of the unstable manifold and the restricted dynamics inside it (with $\overline R_k^x >0$). Next we show that the approximation $\mathcal{K}_n^+$ provided in Proposition [Theorem 11](#prop_simple_tors){reference-type="ref" reference="prop_simple_tors"} is an approximation of a parameterization of a true unstable manifold, $\widehat K^+$, of $F$, asymptotic to $\mathcal{T}^d$. Moreover, the dynamics on $\widehat K^+$ can be parameterized by a map $\widehat R^+$ that is also approximated by $\mathcal{R}_n^+$. 
As in the stable case, such pairs of maps also satisfy $$\widehat K^+(t, \theta) - \mathcal{K}_n^+ (t, \theta) = (O(t^{n+1}), O(t^{n+k}), O(t^{n+2p-k})),$$ and $$\widehat R^+ (t, \theta) - \mathcal{R}_n^+ (t, \theta ) = \left\{ \begin{array}{ll} (O(t^{2k-1}), 0) & \text{ if }\; n\le k, \\ (0,0) & \text{ if } \; n>k. \end{array} \right.$$ Assume we have a map of the form [\[forma_normal_tor\]](#forma_normal_tor){reference-type="eqref" reference="forma_normal_tor"}. By Theorem [Theorem 11](#prop_simple_tors){reference-type="ref" reference="prop_simple_tors"}, there exist approximations $\mathcal{K}_n^+$ and $\mathcal{R}_n^+$ such that $$\label{aprox_inestable_tor} \mathcal{G}_n = F \circ \mathcal{K}_n^+- \mathcal{K}_n^+ \circ \mathcal{R}_n^+ = (O(t^{n+k}), O(t^{n+2k-1}), O(t^{n+2p-1})),$$ with $$\mathcal{R}_n^+(t, \theta)= \begin{pmatrix} t + \overline R_k^xt^k + O(t^{k+1}) \\ \theta + \omega \end{pmatrix}$$ and $\overline R_k^x >0$, which means that $\mathcal{R}_n^+$ is a repellor in the normal directions of $\mathcal{T}$. Also, $\mathcal{R}_n^+$ is invertible and we have $$(\mathcal{R}_n^+)^{-1}(t, \theta) = \begin{pmatrix} t - \overline R_k^x t^k + O(t^{k+1}) \\ \theta - \omega \end{pmatrix},$$ and $$F^{-1}\begin{pmatrix} x\\ y \\ \theta \end{pmatrix} = \begin{pmatrix} x - c(\theta - \omega)y + c(\theta - \omega)a_k(\theta - \omega) (x-c(\theta - \omega)y)^k + \check A(x, y, \theta)\\ y - a_k(\theta - \omega) (x-c(\theta - \omega)y)^k + \check B(x, y, \theta) \\ \theta - \omega - d_p(\theta - \omega)(x - c(\theta - \omega)y)^p + \check C(x, y, \theta) \end{pmatrix},$$ with $\check A (x, y, \theta), \check B (x, y, \theta) = O(\|(x, y)\|^{k+1}) +yO(\|(x, y)\|^{k-1})$, and $\check C (x, y, \theta) = O(\|(x, y)\|^{p+1}) + yO(\|(x, y)\|^{p-1})$.
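The order of accuracy of the stated inverse $(\mathcal{R}_n^+)^{-1}$ can be verified by a direct computation: composing $t \mapsto t + \overline R_k^x t^k$ with $t \mapsto t - \overline R_k^x t^k$ returns the identity up to an error of order $t^{2k-1}$, matching the $(O(t^{2k-1}), 0)$ discrepancies appearing in this section. The following sketch is ours, not the authors'; it uses the illustrative values $k = 3$ and $\overline R_k^x = 1/2$ in exact rational arithmetic.

```python
from fractions import Fraction

def pmul(p, q):
    """Multiply two polynomials given as coefficient lists (low degree first)."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def pcompose(p, q):
    """Evaluate the polynomial p at the polynomial q (Horner's scheme)."""
    out = [Fraction(0)]
    for a in reversed(p):
        out = pmul(out, q)
        out[0] += a
    return out

k, c = 3, Fraction(1, 2)  # illustrative order k and coefficient c playing the role of R_k^x
R    = [Fraction(0), Fraction(1)] + [Fraction(0)] * (k - 2) + [c]   # t + c t^k
Rinv = [Fraction(0), Fraction(1)] + [Fraction(0)] * (k - 2) + [-c]  # t - c t^k

# coefficients of R(Rinv(t)) = t - k c^2 t^{2k-1} + higher order terms
comp = pcompose(R, Rinv)
```

The first nonzero correction is $-k c^2 t^{2k-1}$, so the candidate inverse fails only at order $2k-1$, exactly the order of the error terms tolerated above.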
Composing with $F^{-1}$ on the left in [\[aprox_inestable_tor\]](#aprox_inestable_tor){reference-type="eqref" reference="aprox_inestable_tor"} and using Taylor's Theorem, we get $$\begin{aligned} \begin{split} \label{taylor_unstable} \mathcal{K}_n^+ & = F^{-1} \circ (\mathcal{K}_n^+ \circ \mathcal{R}_n^+ + \mathcal{G}_n ) \\ & = F^{-1} \circ (\mathcal{K}_n^+ \circ \mathcal{R}_n^+) + DF^{-1} \circ (\mathcal{K}_n^+ \circ \mathcal{R}_n^+) \cdot \mathcal{G}_n + O(\mathcal{G}_n^2), \end{split} \end{aligned}$$ and then composing with $(\mathcal{R}_n^+)^{-1}$ on the right we obtain $$\label{moviment_omega} F^{-1} \circ \mathcal{K}_n^+ - \mathcal{K}_n^+ \circ (\mathcal{R}_n^+)^{-1} = (O(t^{n+k}), O(t^{n+2k-1}), O(t^{n+2p-1})).$$ There exists a change of variables $\phi$ that transforms $F^{-1}$ into $H := \phi^{-1} \circ F^{-1} \circ \phi$, which is of the form $$H \begin{pmatrix} x\\ y \\ \theta \end{pmatrix} = \begin{pmatrix} x + c(\theta )y \\ y + a_k(\theta) x^k + \check D(x, y, \theta) \\ \theta - \omega - d_p(\theta )x^p + \check G(x, y, \theta) \end{pmatrix},$$ where $\check D(x, y, \theta)$ and $\check G (x, y, \theta)$ satisfy the properties [\[condicions_fnormal\]](#condicions_fnormal){reference-type="eqref" reference="condicions_fnormal"}. Note that the map $H$ is of the form [\[forma_normal_tor\]](#forma_normal_tor){reference-type="eqref" reference="forma_normal_tor"} (with $\omega$ of opposite sign).
Moreover, composing with $\phi^{-1}$ on the left in [\[moviment_omega\]](#moviment_omega){reference-type="eqref" reference="moviment_omega"} and using Taylor's Theorem as in [\[taylor_unstable\]](#taylor_unstable){reference-type="eqref" reference="taylor_unstable"}, we get $$\phi^{-1} \circ F^{-1} \circ \mathcal{K}_n^+ - \phi^{-1} \circ \mathcal{K}_n^+ \circ (\mathcal{R}_n^+)^{-1} =(O(t^{n+k}), O(t^{n+2k-1}), O(t^{n+2p-1})),$$ which is equivalent to $$\begin{aligned} H \circ \phi^{-1} \circ \mathcal{K}_n^+ - \phi^{-1} \circ \mathcal{K}_n^+ \circ (\mathcal{R}_n^+)^{-1} = (O(t^{n+k}), O(t^{n+2k-1}), O(t^{n+2p-1})). \end{aligned}$$ Hence, $H, \, \phi^{-1}\circ \mathcal{K}_n^+$ and $(\mathcal{R}_n^+)^{-1}$ are analytic maps that satisfy the hypotheses of Theorem [Theorem 3](#teorema_posteriori_tors){reference-type="ref" reference="teorema_posteriori_tors"}, where here the vector of frequencies is $-\omega$. Therefore, by Theorem [Theorem 3](#teorema_posteriori_tors){reference-type="ref" reference="teorema_posteriori_tors"}, there exist a map $K^+:[0,\rho) \times \mathbb{T}^d \to \mathbb{R}^2 \times \mathbb{T}^d$, analytic in $(0, \rho) \times \mathbb{T}^d$, and an analytic map $R^+: (-\rho, \rho)\times \mathbb{T}^d \to \mathbb{R}\times \mathbb{T}^d$ such that $$\label{invariancia_G_tor} H \circ K^+ = K^+ \circ R^+,$$ and moreover it holds that $$\label{apr_ca_inestable_tor} K^+ (t, \theta ) - \phi^{-1} \mathcal{K}_n^+(t, \theta ) = (O(t^{n+1}), O(t^{n+k}), O(t^{n+2p-k})) ,$$ $$\label{erra_invers} R^+(t, \theta) - (\mathcal{R}_n^+)^{-1}(t, \theta) = \left\{ \begin{array}{ll} (O(t^{2k-1}), 0) & \text{ if }\; n\le k, \\ (0,0) & \text{ if } \; n>k.
\end{array} \right.$$ Also, composing with $\phi$ on the left in [\[invariancia_G\_tor\]](#invariancia_G_tor){reference-type="eqref" reference="invariancia_G_tor"} we have $$F^{-1} \circ \phi \circ K^+ = \phi \circ K^+ \circ R^+,$$ which means that $\phi \circ K^+$ is a parameterization of a stable manifold of $F^{-1}$, and the restricted dynamics on this stable manifold is given by the map $R^+$, which, using [\[erra_invers\]](#erra_invers){reference-type="eqref" reference="erra_invers"}, is of the form $$\label{erra_2} R^+(t, \theta) = \begin{pmatrix} t - \overline R_k^x t^k + O(t^{k+1}) \\ \theta - \omega \end{pmatrix},$$ with $\overline R_k^x >0$. As a consequence, $\phi \circ K^+$ is a parameterization of an unstable manifold of $F$, analytic in $(0, \rho) \times \mathbb{T}^d$, for some $\rho >0$. Moreover, composing with $\phi$ in [\[apr_ca_inestable_tor\]](#apr_ca_inestable_tor){reference-type="eqref" reference="apr_ca_inestable_tor"} and using Taylor's Theorem, we have $$\phi(K^+(t, \theta)) - \mathcal{K}_n^+(t, \theta) = (O(t^{n+1}), O(t^{n+k}), O(t^{n+2p-k})),$$ that is, $\phi \circ K^+$ is approximated by the parameterization $\mathcal{K}_n^+$ obtained in Theorem [Theorem 11](#prop_simple_tors){reference-type="ref" reference="prop_simple_tors"}. Denoting $\widehat K^+ := \phi \circ K^+$, we recover the notation used at the beginning of the section. Finally, note that since $R^+$ represents the restricted dynamics of $F^{-1}$ on the stable manifold $\phi \circ K^+$, then $(R^+)^{-1}$ represents the restricted dynamics of $F$ on the unstable manifold $\phi \circ K^+$.
By the form of [\[erra_2\]](#erra_2){reference-type="eqref" reference="erra_2"} we have $$(R^+)^{-1}(t, \theta) = \begin{pmatrix} t + \overline R_k^x t^k + O(t^{k+1}) \\ \theta + \omega \end{pmatrix},$$ with $\overline R_k^x >0$, and hence $$(R^+)^{-1}(t, \theta) - \mathcal{R}_n^+(t, \theta) = \left\{ \begin{array}{ll} (O(t^{2k-1}), 0) & \text{ if }\; n\le k, \\ (0,0) & \text{ if } \; n>k, \end{array} \right.$$ as claimed at the beginning of the section. Finally, denoting $\widehat R^+ := (R^+)^{-1}$, we recover the notation introduced there. # Acknowledgements {#acknowledgements .unnumbered} C. Cufí-Cabré has been supported by the Spanish Government grants MTM2016-77278-P (MINECO/FEDER, UE), PID2019-104658GB-I00 (MICINN/FEDER, UE) and BES-2017-081570, and by the Catalan Government grant 2021-SGR-00113. E. Fontich has been supported by the grant PID2021-125535NB-I00 (MICINN/FEDER, UE). This work has also been funded through the Severo Ochoa and María de Maeztu Program for Centers and Units of Excellence in R&D (CEX2020-001084-M).
arxiv_math
{ "id": "2310.05630", "title": "Invariant manifolds of maps and vector fields with nilpotent parabolic\n tori", "authors": "Clara Cuf\\'i-Cabr\\'e and Ernest Fontich", "categories": "math.DS", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | Let $n>m\geqslant 1$ be integers with $n+m\geqslant 4$ even. We prove the existence of Maass forms with large sup norms on anisotropic $\text{O} (n,m)$, by combining a counting argument with a new period relation showing that a certain orthogonal period on $\text{O} (n,m)$ distinguishes theta lifts from $\text{Sp} _{2m}$. This generalizes a method of Rudnick and Sarnak in the rank one case, when $m = 1$. Our lower bound is naturally expressed as a ratio of the Plancherel measures for the groups $\text{O} (n,m)$ and $\text{Sp} _{2m}(\mathbb R)$, up to logarithmic factors, and strengthens the lower bounds of our previous paper [@BM] for such groups. In the case of odd-dimensional hyperbolic spaces, the growth exponent we obtain improves on a result of Donnelly, and is optimal under the purity conjecture of Sarnak. address: - | LAGA - Institut Galilée\ 99 avenue Jean Baptiste Clément\ 93430 Villetaneuse\ France - | Department of Mathematics\ University of Wisconsin -- Madison\ 480 Lincoln Drive\ Madison\ WI 53706, USA author: - Farrell Brumley - Simon Marshall title: Concentration properties of theta lifts on orthogonal groups --- # Introduction {#sec:intro} A well-known principle in quantum chaos suggests that, on a closed compact negatively curved Riemannian manifold, eigenfunctions should not manifest extreme localisation properties. Nevertheless, it is an emerging theme in arithmetic analysis that *intermediate* localisation properties can be realized in the setting of congruence manifolds, often for reasons related to functoriality. Indeed, one can access the concentration behavior of eigenfunctions through their periods, and the non-vanishing of the latter, according to the relative Langlands program, distinguishes functorial lifts from various source manifolds. A quantitative analysis can sometimes bootstrap non-vanishing to lower bounds.
In this paper we investigate atypical concentration behavior for sparse subsequences of arithmetic eigenfunctions on compact congruence manifolds associated with indefinite orthogonal groups. These spaces fall under the purview of our previous paper [@BM], where they were shown to admit exceptional sequences of eigenfunctions having large sup norm, in the sense that $\|f_\lambda\|_\infty/\|f_\lambda\|_2=\Omega(\lambda^\delta)$ for some ineffective (in the logical sense) $\delta>0$, where $\lambda^2$ is the Laplacian eigenvalue. By contrast with the amplification method of *loc. cit.*, our methods in this article make direct use of the theta correspondence and automorphic period relations -- techniques which allow for refined (and effective) estimates. More precisely, for integers $n\geqslant m\geqslant 1$, let $\mathbb{H}^{n,m}$ be the hyperbolic Grassmannian of signature $(n,m)$, parametrizing negative definite $m$-dimensional subspaces in $\mathbb R^{n,m}$. This is a simply connected symmetric space of non-compact type, of rank $m$ and dimension $d=nm$. It is irreducible unless $n = m = 2$, and carries an action of $\text{O} (n,m)$ by isometries. Let $\Gamma<\text{O} (n,m)$ be a uniform lattice and consider the quotient $Y=\Gamma\backslash\mathbb{H}^{n,m}$. Then $Y$ is a compact locally symmetric space with non-positive sectional curvature. We shall always assume $\Gamma$ to be congruence, so that $Y$ enjoys additional arithmetic symmetries. These symmetries may be exploited through the algebra of Hecke correspondences, the joint eigenfunctions of which are called *Hecke--Maass forms*. Our main result, Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"}, states that when $n>m\geqslant 1$ and $n+m\geqslant 4$ is even, one can find in every $O(1)$ spectral window a Hecke--Maass form on $Y$ whose $L^\infty$ norm is bounded below by an explicit comparison of spectral density functions.
Specifically, the minorant has the following form: it is the square root of the ratio of the Plancherel measure on $\mathbb{H}^{n,m}$ to that of the Siegel upper half-space $\mathcal{H}^m$. The novelty here is the exact form of the lower bound in higher rank, which sheds light on the Sarnak purity conjecture [@Sa] when the spectral parameters approach root hyperplanes. Moreover, in rank 1, where the minorant can be expressed without loss in terms of the Laplacian eigenvalue $\lambda^2$, our lower bound yields $\|f_\lambda\|_\infty/\|f_\lambda\|_2=\Omega(\lambda^{(n-2)/2}/(\log\lambda)^{r/2})$, for an explicit constant $r\geqslant 1$. This improves upon the bound of Donnelly [@Donnelly] when $n$ is odd, and is best possible (up to logarithmic factors) under the purity conjecture. The proof of Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"} is based on the foundational article [@RS]. The main idea can be expressed very simply. Namely, if $\mathbf{H}\subset\mathbf{G}$ are reductive groups defined over a number field, one can show the existence of Hecke--Maass forms on $\mathbf{G}$ whose $\mathbf{H}$-periods are larger than average, provided 1. [\[1sketch\]]{#1sketch label="1sketch"} one knows the size of the average $\mathbf{H}$-period; 2. [\[2sketch\]]{#2sketch label="2sketch"} one can show that $\mathbf{H}$-periods *distinguish* a functorial lift from some other group $\mathbf{G}'$; 3. [\[3sketch\]]{#3sketch label="3sketch"} one can bound the number of contributing forms on the group $\mathbf{G}'$. Here, by 'distinguish' we mean that the non-vanishing of an $\mathbf{H}$-period characterizes the image of a functorial lift from $\mathbf{G}'$.
In favorable situations, the bound in [\[3sketch\]](#3sketch){reference-type="ref" reference="3sketch"} will show that this distinction is a rare occurrence, so [\[2sketch\]](#2sketch){reference-type="ref" reference="2sketch"} is in effect saying that most terms in [\[1sketch\]](#1sketch){reference-type="ref" reference="1sketch"} vanish. It follows that those forms which survive distinction must compensate for this by having unusually large $\mathbf{H}$ periods. When $\mathbf{H}$ is compact at infinity, this gives large point evaluations, whose average size is 1. For Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"}, we apply this to ${\bf G}=\text{O} (V)$, for a non-degenerate anisotropic quadratic space $V$ over a totally real number field, with signature $(n,m)$ at a fixed real place and definite at the others. We take ${\bf H}=\text{O} (U^\perp)\times\text{O} (U)$, where $U$ is an $m$-dimensional quadratic subspace that is negative definite at the distinguished real place, and $\mathbf{G}'=\text{Sp} _{2m}$. The functorial lift between ${\bf G}'$ and ${\bf G}$ will be the theta lift. In a follow-up paper, we shall again exploit this machinery to study the exceptional behavior of higher dimensional periods, showing the existence of Hecke--Maass forms on hyperbolic congruence manifolds with large geodesic Fourier coefficients in the resonance range. # Main results Let $Y=\Gamma\backslash\mathbb{H}^{n,m}$ be a compact congruence quotient. Under certain conditions on the signature $(n,m)$, we would like to establish the existence of eigenfunctions on $Y$ whose point evaluation is quantifiably larger than average. ## Statement of main theorem {#sec:exceptional} We begin by recalling the spectral parametrization of Maass forms on a general compact locally symmetric space of non-positive curvature, and the average size of values at fixed points. Let $G$ be a real connected[^1] semi-simple Lie group and $K$ a maximal compact subgroup. 
Let $\mathfrak g=\mathfrak{p}\oplus\mathfrak{k}$ be the corresponding Cartan decomposition of the Lie algebra $\mathfrak g$ of $G$. We make a choice of a maximal abelian subspace $\mathfrak{a}$ of $\mathfrak{p}$. Let $W=N_K(\mathfrak{a})/Z_K(\mathfrak{a})$ denote the Weyl group. The quotient $S=G/K$ is a Riemannian globally symmetric space. Let $\Gamma\subset G$ be a uniform lattice and consider the locally symmetric space $\Gamma\backslash S$. Recall that a *Maass form* $f\in L^2(\Gamma\backslash S)$ is a joint eigenfunction of $\mathbb{D}(S)$, the commutative algebra of left $G$-invariant differential operators on $S$. Given a Maass form $f$, let $\chi\in {\rm Hom}_{\mathbb C-\textrm{alg}}(\mathbb{D}(S),\mathbb C)$ be the associated eigencharacter, so that $Df =\chi(D)f$ for all $D\in\mathbb{D}(S)$. Via the Harish-Chandra isomorphism $\gamma_{\rm HC}:\mathbb{D}(S)\overset{\sim}{\longrightarrow} S(\mathfrak{a})^W$ onto the Weyl group invariants of the symmetric algebra of $\mathfrak{a}$ with complex coefficients, we have $\chi(D)=\nu(\gamma_{\rm HC}(D))$ for some $\nu\in\mathfrak{a}_\mathbb C^*$, unique up to $W$-conjugacy, called the *spectral parameter* of $f$. Let $\{f_i\}_{i\geqslant 0}$ be an orthonormal basis of $L^2(\Gamma\backslash S)$ consisting of Maass forms, and denote by $\mu_i$ the spectral parameter of $f_i$. In Proposition [Proposition 26](#local-Weyl){reference-type="ref" reference="local-Weyl"} we show that there is $Q \geqslant 1$ such that for any finite collection of points $x_1,\ldots ,x_h\in \Gamma\backslash S$, and any $\nu\in i\mathfrak{a}^*$ of large enough norm depending on $x_i$, we have $$\label{eq:intro:local-Weyl} \sum_{\|{\rm Im}\,\mu_i-\nu\|\leqslant Q}\bigg|\sum_{j=1}^h f_i(x_j)\bigg|^2\asymp \beta_S(\nu),$$ where $\beta_S(\nu)$ is the spectral density function of $L^2(S)$. This is a form of the local Weyl law for the $\mathbb{D}(S)$-spectrum of compact locally symmetric spaces. 
We deduce from [\[eq:intro:local-Weyl\]](#eq:intro:local-Weyl){reference-type="eqref" reference="eq:intro:local-Weyl"} that the average size of point evaluation (or, more generally, discrete periods) of eigenfunctions, in any given spectral ball of radius $Q$ with sufficiently regular center, is $\asymp 1$. In view of this average behavior, we will call *exceptional* any Maass form whose sup norm grows by at least a power of its eigenvalue. Note that by dropping all but one term, we derive from [\[eq:intro:local-Weyl\]](#eq:intro:local-Weyl){reference-type="eqref" reference="eq:intro:local-Weyl"} an upper bound for the sup norm of a Maass form $f_\nu$ of spectral parameter $\nu$, of the form $$\label{Linfty-local} \|f_\nu\|_\infty\ll \beta_S(\nu)^{1/2}\|f_\nu\|_2.$$ We call this the *local bound*, as the proof takes into consideration only local information about a given point. For the spaces we consider, it is not expected to be optimal, reflecting a far reaching delocalisation principle. Having established average and extremal benchmarks in generality, our aim is to prove explicit intermediate growth estimates on the sup norms for exceptional Maass forms in the setting of *congruence quotients* of hyperbolic Grassmannians $\Gamma\backslash\mathbb{H}^{n,m}$, or certain finite disjoint unions of such. (All of the above definitions and bounds apply equally well in this setting.) These disconnected manifolds arise as adelic double quotient spaces associated with the isometry group of a rational quadratic form, and may be expressed more classically as $Y=\bigcup_i\Gamma_i\backslash\mathbb{H}^{n,m}$, where $\Gamma_i$ runs over the genus class of $\Gamma$; see Section [9.9](#adelic2classical){reference-type="ref" reference="adelic2classical"}. 
As was mentioned in the introduction, the underlying mechanism for such results is a functorial correspondence from a tertiary source manifold, which in our case will be a congruence covering of the Siegel variety ${\rm Sp}_{2m}(\mathbb Z)\backslash\mathcal{H}^m$, where $\mathcal{H}^m=\text{Sp} _{2m}(\mathbb R)/{\rm U}(m)$ is the Siegel upper half-space. Note that the real rank of $\text{Sp} _{2m}(\mathbb R)$ is the same as that of $\text{O} (n,m)$, when $n\geqslant m$. This will allow us (see Section [2.2](#2examples){reference-type="ref" reference="2examples"} below for more details) to view a spectral parameter $\nu$ simultaneously for $\mathbb{H}^{n,m}$ and $\mathcal{H}^m$. **Theorem 1**. *Fix integers $n>m\geqslant 1$ with $n+m\geqslant 4$ even. Let $F$ be a totally real number field, with fixed archimedean place $v_0$. Let $V$ be a non-degenerate anisotropic quadratic space over $F$, of signature $(n,m)$ at $v_0$ and positive definite at the other real places. Let $Y=\bigcup_i\Gamma_i\backslash\mathbb{H}^{n,m}$ be a congruence arithmetic locally symmetric space associated with $\mathrm{O}(V)$.* *For sufficiently regular $\nu\in i\mathfrak{a}^*$ there exists an $L^2$-normalized Hecke--Maass form $f$ on $Y$ with spectral parameter $\nu+O(1)$ verifying $$\|f\|_\infty\gg \left(\log (3+\|\nu\|)\right)^{-r/2}\left(\beta_{\mathbb{H}^{n,m}}(\nu)/\beta_{\mathcal{H}^m}(\nu)\right)^{1/2},$$ where $r=[F:\mathbb Q]m$.* *Remark 1*. The condition that $n+m$ is even implies that $n \geqslant m+2$. When $n > m+2$, the forms produced in Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"} are non-tempered at almost all places; see Section [8.3](#sec:unramified){reference-type="ref" reference="sec:unramified"}. This fact is not, however, used in a direct way in the proof of Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"}; contrast this to the approach outlined in [@BM §A.4]. *Remark 2*.
The logarithmic loss in the spectral parameter is due to a methodological short-cut we have adopted in the execution of Step [\[3sketch\]](#3sketch){reference-type="ref" reference="3sketch"} from the introduction. More precisely, in proving Proposition [Proposition 14](#Weyl-upper-bd){reference-type="ref" reference="Weyl-upper-bd"}, we use a partial trace formula, which allows for a simple treatment of the geometric side. A more sophisticated analysis would eliminate this loss. ## Explicating the lower bound {#2examples} We now explicate the lower bound in Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"} by writing out in coordinates the two spectral density functions $\beta_S(\nu)$ appearing there, namely for $S=\mathbb{H}^{n,m}$ the hyperbolic Grassmannian and $S=\mathcal{H}^m$ the Siegel upper half-space. In fact, it will suffice to describe an approximation to $\beta_S(\nu)$, denoted here by $\tilde\beta_S(\nu)$, which is tight under the regularity assumption on $\nu$ in Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"}. We begin with the case of $S=\mathbb{H}^{n,m}= \text{SO} (n,m)^0/(\text{SO} (n)\times\text{SO} (m))$. For $n\geqslant m\geqslant 0$, we fix the signature $(n,m)$ quadratic form on $\mathbb R^m\oplus\mathbb R^m\oplus \mathbb R^{n-m}$ given by $$\sum_{i=1}^m (x_iy_i'+y_ix_i')+\sum_{i=1}^{n-m}z_iz_i'.$$ The Lie algebra $\mathfrak{so}(n,m)$ of $\text{SO} (n,m)^0$ consists of block matrices in $\mathfrak{sl}_{n+m}(\mathbb R)$ of the form $$\label{blocks} \begin{pmatrix} A & B& C\\ D & -^t\!A &E \\ -^tE& -^tC&F \end{pmatrix},$$ where $A\in \mathfrak{gl}_m(\mathbb R)$, $B,D\in \mathfrak{so}(m)$, $C,E\in M_{m,n-m}(\mathbb R)$, $F\in\mathfrak{so}(n-m)$. The $-1$ eigenspace $\mathfrak{p}$ of the Cartan involution "minus transpose\" consists of symmetric matrices of the form [\[blocks\]](#blocks){reference-type="eqref" reference="blocks"}. 
A maximal abelian subspace of $\mathfrak{p}$ can be taken to be $\mathfrak{a}=\{{\rm diag}(A,-\!A,0)\}$, with $A$ diagonal. We let $\{ \epsilon_i : i=1,\ldots ,m\}$ be the standard basis of $\mathfrak{a}^*$ given by evaluation of the diagonal entries of $A$. Then, if $n>m$, $$%\label{roots} \Sigma=\{\pm\epsilon_i\pm\epsilon_j\}\cup\{\pm \epsilon_i\}\quad\text{and}\quad \Sigma^+=\{\epsilon_i\pm\epsilon_j\}\cup\{\epsilon_i\}$$ are the sets of restricted roots and the subset of positive roots, respectively. The dimension of the root spaces for the $\pm\epsilon_i\pm\epsilon_j$ is $1$ and that for the $\pm\epsilon_i$ is $n-m$ (see [@Knapp VI.4 Example 3]). From this and the formula $\beta_S(\nu)=|{\bf c}(\nu)|^{-2}$, where ${\bf c}(\nu)$ is the Harish-Chandra ${\bf c}$-function, recalled in Section [4](#sec:inv-formula){reference-type="ref" reference="sec:inv-formula"}, one can deduce, using the Gindikin--Karpelevich product formula along with Stirling's formula, that if $\nu\in i\mathfrak{a}^*$ is sufficiently regular we have $$\label{first-beta-eq} \beta_{\mathbb{H}^{n,m}}(\nu)\asymp \tilde\beta_{\mathbb{H}^{n,m}}(\nu),\quad\textrm{where}\quad\tilde\beta_{\mathbb{H}^{n,m}}(\nu)=\prod_{i\neq j}(1+|\nu_i\pm \nu_j|)\prod_i (1+|\nu_i|)^{n-m},$$ where $\nu_i=\langle\nu,\epsilon_i\rangle$. We now do the same with the Siegel upper half-space $\mathcal{H}^m$ of (real) dimension $m(m+1)$. We may realize $\mathcal{H}^m$ as $\text{Sp} _{2m}(\mathbb R)/{\rm U}(m)$, where $$\text{Sp} _{2m}(\mathbb R)=\left\{g\in \text{SL} _{2m}(\mathbb R): {}^tgJg=J\right\}, \quad J=\begin{pmatrix} & I_m\\ -I_m & \end{pmatrix},$$ and ${\rm U}(m)=\text{O} (2m)\cap \text{Sp} _{2m}(\mathbb R)$. 
We recall that $\mathfrak{sp}_{2m}=\{X\in\mathfrak{gl}_{2m}(\mathbb R): {}^tXJ+JX=0\}$ is the space of block matrices in $\mathfrak{sl}_{2m}(\mathbb R)$ of the form $\left(\begin{smallmatrix} A & B\\ C & -{}^t\!A \end{smallmatrix}\right)$, where $A,B,C\in \mathfrak{gl}_m(\mathbb R)$ and both $B$ and $C$ are symmetric. The $-1$ eigenspace $\mathfrak{p}'$ of the Cartan involution "minus transpose\" is the subspace corresponding to $B=C$, and we can take as a maximal abelian subspace of $\mathfrak{p}'$ the space $\mathfrak{a}'=\{{\rm diag}(A,-A)\}$, with $A$ diagonal. If we again let $\{ \epsilon_i : i=1,\ldots ,m \}$ be the standard basis of $\mathfrak{a}'^*$, then the sets of restricted roots $\Sigma$ and positive roots $\Sigma^+$ are $$\label{symplectic-roots} \Sigma=\{\pm\epsilon_i\pm\epsilon_j\}\cup\{\pm 2\epsilon_i\}\quad\text{and}\quad \Sigma^+=\{\epsilon_i\pm\epsilon_j\}\cup\{2\epsilon_i\},$$ and all root space dimensions are $1$. We deduce, similarly to before, the estimate $$\label{second-beta-eq} \beta_{\mathcal{H}^m}(\nu)\asymp\tilde\beta_{\mathcal{H}^m}(\nu),\quad\textrm{where}\quad\tilde\beta_{\mathcal{H}^m}(\nu)=\prod_{i\neq j} (1+|\nu_i\pm\nu_j|)\prod_i(1+|\nu_i|),$$ valid for sufficiently regular $\nu\in i\mathfrak{a}^*$. *Remark 3*. Consider the *split* Lie algebra $\mathfrak{so}(m,m)$, viewed as a subalgebra of $\mathfrak{so}(n,m)$, by taking $C=E=F=0$ in [\[blocks\]](#blocks){reference-type="eqref" reference="blocks"}. Then $\mathfrak{a}\,\cap\,\mathfrak{so}(m,m)$ is isomorphic to $\mathfrak{a}$ and coincides in $\mathfrak{sl}_{2m}$ with $\mathfrak{a}'$. It is in this sense that we can view a spectral parameter $\nu$ simultaneously for $\mathbb{H}^{n,m}$ and $\mathcal{H}^m$.
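The comparison of the two densities can be made concrete by evaluating the approximations $\tilde\beta_{\mathbb{H}^{n,m}}$ and $\tilde\beta_{\mathcal{H}^m}$ numerically: the factors attached to the roots $\epsilon_i\pm\epsilon_j$ cancel in the quotient, which therefore collapses to $\prod_i(1+|\nu_i|)^{n-m-1}$. The following sketch is ours, not the authors'; the helper names, the sample values, and the convention of running over unordered pairs $i<j$ with both signs are our own choices (the convention is immaterial for the ratio, since it is shared by both densities).

```python
import math

def tilde_beta_grassmannian(nu, n, m):
    """Approximate density for H^{n,m}: multiplicity 1 for the roots
    e_i ± e_j, multiplicity n - m for the roots e_i."""
    val = 1.0
    for i in range(m):
        for j in range(i + 1, m):
            val *= (1 + abs(nu[i] + nu[j])) * (1 + abs(nu[i] - nu[j]))
        val *= (1 + abs(nu[i])) ** (n - m)
    return val

def tilde_beta_siegel(nu, m):
    """Approximate density for the Siegel space H^m: all multiplicities 1."""
    val = 1.0
    for i in range(m):
        for j in range(i + 1, m):
            val *= (1 + abs(nu[i] + nu[j])) * (1 + abs(nu[i] - nu[j]))
        val *= (1 + abs(nu[i]))
    return val

n, m = 5, 2          # sample signature with n + m even
nu = [3.0, 7.0]      # sample spectral parameter coordinates
ratio = tilde_beta_grassmannian(nu, n, m) / tilde_beta_siegel(nu, m)
# the e_i ± e_j factors cancel, leaving prod_i (1 + |nu_i|)^(n - m - 1)
expected = math.prod((1 + abs(x)) ** (n - m - 1) for x in nu)
```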
We deduce from [\[first-beta-eq\]](#first-beta-eq){reference-type="eqref" reference="first-beta-eq"} and [\[second-beta-eq\]](#second-beta-eq){reference-type="eqref" reference="second-beta-eq"} that the lower bound of Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"} can be expressed as $$\label{lower-bd-nu-i} \|f\|_\infty\gg \left(\log (3+\|\nu\|)\right)^{-m[F:\mathbb Q]/2 }\prod_i (1+|\nu_i|)^{(n-m-1)/2}.$$ Note that $n-m-1> 0$, due to the additional assumption on the parity of $n+m$. We may therefore view the power growth as arising from the *excess multiplicity* $n-m-1 > 0$ for the roots $\pm \epsilon_i$ in the non-split group $\text{SO} (n,m)^0$. ## Relation to literature Although our interest here lies primarily in the higher rank setting of $m\geqslant 2$, we point out the relation between Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"} and the existing literature when the rank $m=1$. Let $\lambda^2$ be the Laplacian eigenvalue of $f$. In this case, Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"} yields a lower bound of $\lambda^{(n-2)/2}/(\log\lambda)^{[F:\mathbb Q]/2 }$, which is a factor of $\lambda^{-1/2}/(\log\lambda)^{[F:\mathbb Q]/2 }$ off of the upper bound $\lambda^{(n-1)/2}$ in [\[Linfty-local\]](#Linfty-local){reference-type="eqref" reference="Linfty-local"}. For $n=3$ this recovers (up to a logarithmic loss) a famous result of Rudnick-Sarnak [@RS]. Moreover, for $n\geqslant 5$ odd this improves upon a result of Donnelly [@Donnelly], who had shown a weaker lower bound of $\lambda^{(n-4)/2}$. The exponent $(n-2)/2$ is the largest possible allowed by the purity conjecture (see Section [2.4](#sec:purity){reference-type="ref" reference="sec:purity"}). We now compare the setting of Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"} to the general context of our earlier paper [@BM]. 
Let $V$ be a non-degenerate anisotropic quadratic space over a totally real number field, with signature $(n,m)$ at a fixed real place $v_0$ and definite at the real places $v\neq v_0$, and put $\mathbf{G}=\text{SO} (V)$. If $U$ is any $m$-dimensional quadratic subspace of $V$ that is defined over $F$ and negative definite at $v_0$, let $\theta$ be the involution $\theta(g)=sgs$, where $s$ is the orthogonal reflection with respect to $U$. Put ${\bf H}={\bf G}^\theta$; then $\mathbf{H}={\rm S}(\text{O} (U^\perp)\times\text{O} (U))$. Note that $\theta$ induces a Cartan involution on ${\bf G}(F_{v_0})$, since $\mathbf{H}_{v_0}={\rm S}(\text{O} (n)\times\text{O} (m))$ is a maximal compact subgroup of $\mathbf{G}(F_{v_0})=\text{SO} (n,m)$. Moreover, the conditions on the signature at $v_0$ imply that $\mathbf{G}(F_{v_0})$ is nonsplit. Applying the main result from [@BM] we obtain the existence of compact quotients $\Gamma \backslash \mathbb{H}^{n,m}$ that support an $L^2$-normalized $f$ with spectral parameter $\nu+O(1)$ such that $\| f \|_\infty\gg \lambda^\delta$ for some ineffective $\delta>0$. The interest in Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"} is therefore in the quality (in rank 1) and combinatorial shape (in higher rank) of the lower bound. It should be remarked, however, that while [@BM] produces a weaker exponent than Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"}, it has the advantage of not requiring the assumption that $\nu$ is sufficiently regular. ## Relation to purity conjecture {#sec:purity} We now make some remarks comparing our results to the Sarnak purity conjecture [@Sa Appendix]. Let $Y$ be a compact negatively curved locally symmetric space, of dimension $d$ and rank $r$, and of congruence type.
The Sarnak purity conjecture roughly states that the sup norms of *well-balanced* Hecke--Maass forms on $Y$, of high frequency $\lambda$, should behave as $\lambda^{k/2}$, where $k$ lies in $\mathbb Z\cap [0,d-r)$. Thus, the $L^\infty$ exponents on arithmetic manifolds are quantized. Here, "well-balanced" means that the spectral parameter $\nu$ should satisfy $1+|\langle \alpha, \nu \rangle|\asymp 1+\|\nu\|$ for all roots $\alpha$. If one takes $\nu$ well-balanced in the above sense, then the quotient $\left(\beta_{\mathbb{H}^{n,m}}(\nu)/\beta_{\mathcal{H}^m}(\nu)\right)^{1/2}$ appearing in Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"} becomes $\lambda^{(n-m-1)m/2}=\lambda^{(d-r)/2-r^2/2}$, where $d=mn$ and $r=m$. (Note that the relation $1+\|\nu\|\asymp 1+\lambda$ always holds.) Thus, for well-balanced $\nu$, Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"} is consistent with Sarnak's purity conjecture, in that the exponent of $\lambda$ is indeed a half-integer. Note that the well-balanced condition is automatic in rank 1. That the well-balanced condition is necessary for any purity conjecture to hold was observed in [@Sa Appendix], in light of the examples of [@LO]. Indeed, as a consequence of their period formula, Lapid and Offen exhibited a lower bound of the form $\beta_{\mathfrak{H}_\mathbb C^n}(\nu)^{1/4}$ for the sup norm of base change lifts on compact quotients of $\mathfrak{H}_\mathbb C^n=\text{GL} _n(\mathbb C)/{\rm U}(n)$ coming from anisotropic unitary groups. In this higher rank setting, the expression $\beta_{\mathfrak{H}_\mathbb C^n}(\nu)^{1/4}$ can vary continuously as a power of $\lambda$ according to the position of $\nu$ relative to the root hyperplanes. This in turn led to the suggestion (see [@Sa Appendix]) that the correct measure of complexity was rather $\beta_S(\nu)$.
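The exponent bookkeeping in the well-balanced discussion reduces to elementary identities that can be checked mechanically. The second check below assumes, purely for illustration, the normalization $\beta_S(\nu)=\prod_{\alpha\in\Sigma^+}(1+|\langle\alpha,\nu\rangle|)^{m_\alpha}$; with it, doubling every root multiplicity (as for $\text{GL}_n(\mathbb C)$ versus $\text{GL}_n(\mathbb R)$ on the same roots) turns $\beta(\nu)^{1/4}$ into a ratio of spectral densities to the power $1/2$:

```python
import math
import random

# (1) With d = mn and r = m, the exponent (n-m-1)m/2 equals (d-r)/2 - r^2/2.
for n in range(3, 12):
    for m in range(1, n):
        d, r = m * n, m
        assert (n - m - 1) * m / 2 == (d - r) / 2 - r ** 2 / 2

# (2) Assumed normalization (illustration only): beta_S(nu) is the product
# over positive roots of (1 + |<alpha, nu>|)^{m_alpha}.  With multiplicity 2
# throughout versus multiplicity 1 on the same underlying roots:
random.seed(1)
factors = [1 + abs(random.gauss(0, 10)) for _ in range(6)]  # stand-ins for 1+|<alpha,nu>|
beta_C = math.prod(v ** 2 for v in factors)   # every root of multiplicity 2
beta_R = math.prod(v for v in factors)        # every root of multiplicity 1
assert math.isclose(beta_C ** 0.25, (beta_C / beta_R) ** 0.5)
```

The excess multiplicity $2$ is exactly what makes $\beta^{1/4}$ expressible as a square root of a density quotient.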
Observe that the lower bound in our Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"} is not, in light of [\[first-beta-eq\]](#first-beta-eq){reference-type="eqref" reference="first-beta-eq"} and [\[lower-bd-nu-i\]](#lower-bd-nu-i){reference-type="eqref" reference="lower-bd-nu-i"}, a fractional power of $\beta_{\mathbb{H}^{n,m}}(\nu)$. So while $\beta_{\mathbb{H}^{n,m}}(\nu)$ is indeed of relevance, it does not *alone* determine the precise shape of $\|f\|_\infty$. In fact, the method of proof of Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"} would lead one to venture that the correct quantization of sup norm growth is through a *comparison* of spectral densities. For example, putting $\mathfrak{H}_\mathbb R^n=\text{GL} _n(\mathbb R)/{\rm O}(n)$, the Lapid-Offen bound can be written $$\beta_{\mathfrak{H}_\mathbb C^n}(\nu)^{1/4}\asymp (\beta_{\mathfrak{H}_\mathbb C^n}(\nu)/\beta_{\mathfrak{H}_\mathbb R^n}(\nu))^{1/2}.$$ The power growth is once again seen to come from the excess multiplicity (here equal to $2$) of the roots for $\text{GL} _n(\mathbb C)$. Note, finally, that for the rank one signature $(n,1)$ we may recognize the eligible purity exponents as $$\lambda^{(n-j)/2}=(\lambda^{n-1}/\lambda^{j-1})^{1/2}\asymp (\beta_{\mathbb{H}^n}(\nu)/\beta_{\mathbb{H}^j}(\nu))^{1/2} \qquad (j=2,\ldots , n),$$ since $1+\lambda^{j-1}\asymp\beta_{\mathbb{H}^j}(\nu)$. ## Outline of the paper We shall prove Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"} by following the steps [\[1sketch\]](#1sketch){reference-type="ref" reference="1sketch"}, [\[2sketch\]](#2sketch){reference-type="ref" reference="2sketch"}, and [\[3sketch\]](#3sketch){reference-type="ref" reference="3sketch"} given in Section [1](#sec:intro){reference-type="ref" reference="sec:intro"}. We now expand on these steps, and explain how they relate to the structure of the paper. 
As in Section [1](#sec:intro){reference-type="ref" reference="sec:intro"}, let ${\bf G}=\text{O} (V)$, for a non-degenerate anisotropic quadratic space $V$ over a totally real number field, with signature $(n,m)$ at a fixed real place and definite at the others. Let ${\bf H}=\text{O} (U^\perp)\times\text{O} (U)$, where $U$ is an $m$-dimensional quadratic subspace that is negative definite at the distinguished real place, and $\mathbf{G}'=\text{Sp} _{2m}$. ### Sections [3](#sec:notation){reference-type="ref" reference="sec:notation"} to [7](#sec:points){reference-type="ref" reference="sec:points"} {#sections-secnotation-to-secpoints} These sections correspond to Steps [\[1sketch\]](#1sketch){reference-type="ref" reference="1sketch"} and [\[3sketch\]](#3sketch){reference-type="ref" reference="3sketch"}. Step [\[1sketch\]](#1sketch){reference-type="ref" reference="1sketch"} asks for the average pointwise value of Maass forms on ${\bf G}$ in the eigenvalue aspect, and Step [\[3sketch\]](#3sketch){reference-type="ref" reference="3sketch"} asks for an upper bound for the number of cusp forms on ${\bf G}'$ with fixed ${\rm U}(m)$-type $\tau$, where $\tau$ is fixed and one-dimensional, in the eigenvalue aspect. We solve these problems using appropriate variants of the trace formula. Note that the problem of counting $\tau$-spherical forms on ${\bf G}'$ arises because these correspond to spherical forms on ${\bf G}$ under the theta lift, as explained below. After introducing notation for real groups in Section [3](#sec:notation){reference-type="ref" reference="sec:notation"}, we construct the archimedean test functions that we shall use in the trace formula in Sections [4](#sec:inv-formula){reference-type="ref" reference="sec:inv-formula"} and [5](#sec:test-function){reference-type="ref" reference="sec:test-function"}. 
Because of the need to count $\tau$-spherical forms on ${\bf G}'$, our test functions will likewise need to be $\tau$-spherical in Step [\[3sketch\]](#3sketch){reference-type="ref" reference="3sketch"}, and we construct these using the $\tau$-spherical transform studied by Shimeno [@Shimeno]. Section [6](#sec:upper-bd-cusp-spec){reference-type="ref" reference="sec:upper-bd-cusp-spec"} contains our bound for the number of $\tau$-spherical cusp forms on ${\bf G}'$. We use a relatively elementary approach, by showing that cusp forms with spectral parameter $\lambda$ are concentrated below height $\| \lambda \|^{1+\epsilon}$ in the cusp, and then bounding the integral of the geometric kernel function over this region. As discussed in Remark [Remark 2](#remark2-intro){reference-type="ref" reference="remark2-intro"}, this leads to the logarithmic loss in our main theorem. We establish the average value result for Maass forms on ${\bf G}$ in Section [7](#sec:points){reference-type="ref" reference="sec:points"}. The result we prove here is purely analytical in nature, requiring no arithmetic hypotheses on either the locally symmetric spaces or the eigenfunctions themselves. ### Sections [8](#sec:theta-review){reference-type="ref" reference="sec:theta-review"} and [9](#sec:Period-relation){reference-type="ref" reference="sec:Period-relation"} {#sections-sectheta-review-and-secperiod-relation} These sections are devoted to the distinction principle of Step [\[2sketch\]](#2sketch){reference-type="ref" reference="2sketch"}. This distinction principle will follow from a period relation between forms on ${\bf G}$ and ${\bf G}'$, stated as Proposition [Proposition 39](#period-relation){reference-type="ref" reference="period-relation"}. To describe this relation, let $\phi$ be a vector in an automorphic representation $\pi$ of ${\bf G}$, and assume that both $\pi$ and $\phi$ are spherical at infinity. Let $\theta(\phi)$ be the theta lift of $\phi$ to a form on ${\bf G}'$.
In Section [9.3.2](#sec:Bessel-period){reference-type="ref" reference="sec:Bessel-period"}, we define a Bessel subgroup ${\bf R}= \text{O} (U) \rtimes {\bf N}$ of the Siegel parabolic ${\bf P}_\text{Siegel} = {\bf M}{\bf N}$, and a character $\psi_N$ of ${\bf R}$. Our period relation then connects the ${\bf H}$-period of $\phi$, $$\mathscr{P}_{\bf H}(\phi)=\int_{[{\bf H}]} \phi(h) dh,$$ with the Bessel period of $\theta(\phi)$, $$\mathscr{B}_{\bf R}^{\, \psi_N}(\theta(\phi))=\int_{[{\bf R}]} \theta(\phi)(r)\psi_N^{-1}(r)dr.$$ This will imply that if $\mathscr{P}_{\bf H}(\phi)$ is nonzero, then so is the lift $\theta(\phi)$, and hence that $\mathscr{P}_{\bf H}$ distinguishes theta lifts from ${\bf G}'$. This approach is the same one used by Rudnick and Sarnak to prove distinction in the case $m = 1$. In higher rank, this period relation was also considered by Gan in [@Gan]; note that he refers to $\mathscr{B}_{\bf R}^{\, \psi_N}(\theta(\phi))$ as a Shalika period, rather than a Bessel period. Section [8](#sec:theta-review){reference-type="ref" reference="sec:theta-review"} introduces notation for the theta lift, and describes the global lift $\Theta(\pi)$ of the spherical representation $\pi$ to ${\bf G}'$. In particular, we show that $\Theta(\pi)$ is $\tau$-spherical, where $\tau = \det^{(n-m)/2}$. We also show that the lift of a tempered spherical representation of ${\bf G}(\mathbb R)$ to ${\rm Sp}_{2m'}$ vanishes for $m' < m$. It follows from this that if $\pi_\infty$ is tempered, then $\Theta(\pi)$ is either zero or cuspidal; the temperedness of $\pi_\infty$ will follow from the sufficient regularity condition on $\nu$ in Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"}. These two results imply that the forms on ${\bf G}'$ participating in our distinction principle are $\tau$-spherical cusp forms, which are counted in Section [6](#sec:upper-bd-cusp-spec){reference-type="ref" reference="sec:upper-bd-cusp-spec"} as described above.
### Section [10](#ProofsOfThms){reference-type="ref" reference="ProofsOfThms"} {#section-proofsofthms} In this section, we deduce Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"} from the above ingredients via a counting argument. ## Acknowledgements We would like to thank Stephen Kudla and Colette Moeglin for helpful conversations. # Notation {#sec:notation} In this section we introduce notation related to harmonic analysis on Lie groups. Let $G$ be a non-compact connected[^2] semisimple Lie group with finite center, and $K$ a maximal compact subgroup of $G$. We write $S=G/K$ for the associated irreducible Riemannian globally symmetric space. ## Group decompositions {#sec:gp-decomp} Let $\mathfrak g$ be the Lie algebra of $G$ and $\mathfrak{k}\subset\mathfrak g$ the Lie algebra of $K$. Then $K$ is the group of fixed points of a Cartan involution $\Theta$ of $G$. Let $\theta$ be the differential of $\Theta$ and write $\mathfrak g=\mathfrak p\oplus\mathfrak{k}$ for the corresponding Cartan decomposition of $\mathfrak g$. Let $B$ be the Killing form on $\mathfrak{g}$; identifying $\mathfrak p$ with the tangent space at the identity coset $K$ of $S=G/K$, we endow $S$ with the Riemannian metric induced by the restriction of $B$ to $\mathfrak p$. Let $d$ be the associated bi-$K$-invariant distance function on $G$. For $R>0$, we put $G_R=\{g\in G: d(g,K)\leqslant R\}$. Fix a maximal abelian subspace $\mathfrak{a}\subset\mathfrak p$. Denote by $\langle \cdot ,\cdot\rangle$ the restriction of the Killing form to $\mathfrak{a}$. Write $\Sigma=\Sigma(\mathfrak{a},\mathfrak g)\subset\mathfrak{a}^*$ for the set of restricted roots and let $\Sigma^+$ denote a choice of positive roots. Then we have a restricted root space decomposition $\mathfrak g=\mathfrak{m}\oplus\mathfrak{a}\oplus\sum_{\alpha\in\Sigma}\mathfrak g_\alpha$, where $\mathfrak{m}=Z_\mathfrak{k}(\mathfrak{a})$ is the centralizer of $\mathfrak{a}$ in $\mathfrak{k}$.
Let $m_\alpha=\dim\mathfrak g_\alpha$ be the multiplicity of the root $\alpha\in\Sigma$. As usual, we let $\rho=\frac12\sum_{\alpha\in\Sigma^+}m_\alpha\alpha$ be the half-sum of positive roots. We denote by $\Psi\subset\Sigma^+$ the set of simple roots. Let $\{H_\alpha:\alpha\in\Psi\}$ be the basis of $\mathfrak{a}$ dual to $\Psi$. For a root $\alpha\in\Sigma$ let $\alpha^\vee=2\alpha/\langle\alpha,\alpha\rangle$ denote the corresponding coroot. Let $W=N_K(\mathfrak{a})/Z_K(\mathfrak{a})$ be the associated Weyl group. Let $\mathfrak{a}_\mathbb C=\mathfrak{a}\otimes\mathbb C$. We call $\lambda\in\mathfrak{a}_\mathbb C^*$ *regular* if $\langle\lambda,\alpha\rangle\neq 0$ for all $\alpha\in\Sigma$. For $T> 0$ we say that $\lambda$ is $T$-*regular* if $|\langle\lambda,\alpha\rangle|\geqslant T$ for all $\alpha\in\Sigma^+$. Finally, we say that $\lambda\in\mathfrak{a}^*_\mathbb C$ is *sufficiently regular* if $\lambda$ is $T$-regular for a sufficiently large $T>0$, depending only on the group $G$. In general, when working with sufficiently regular parameters, all implied constants will depend on the underlying $T$; we will not, however, explicate this dependence in the notation. The notion of sufficiently regular should not be confused with the much stronger "well-balanced" condition introduced in Section [2.4](#sec:purity){reference-type="ref" reference="sec:purity"}. Let $A$ be the analytic subgroup corresponding to $\mathfrak{a}$, and $N$ be the analytic subgroup corresponding to $\mathfrak{n}=\sum_{\alpha\in\Sigma^+}\mathfrak g_\alpha$. We furthermore write $\overline{N}$ for the analytic subgroup corresponding to $\theta(\mathfrak{n})=\sum_{-\alpha\in\Sigma^+}\mathfrak g_\alpha$. Let $M= Z_K(A)$ be the centralizer of $A$ in $K$. Then $P= NAM$ is a minimal parabolic subgroup of $G$ containing $A$. We have an Iwasawa decomposition $G=NAK$. Denote by $H:G\rightarrow\mathfrak{a}$ and $\kappa: G\rightarrow K$ the respective Iwasawa projections, given by $g=n e^{H(g)}\kappa (g)$.
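For matrix groups the Iwasawa projections can be computed by Gram--Schmidt. The following numerical sketch (assuming numpy is available, and adopting the convention that $N$ is realized by lower unipotent matrices in $G=\text{SL}(3,\mathbb R)$, one of the two standard choices) recovers $g=ne^{H(g)}\kappa(g)$ from a QR factorization:

```python
import numpy as np

def iwasawa_nak(g):
    """Decompose g = n a k: n lower unipotent, a positive diagonal, k orthogonal.

    This realizes G = NAK for SL(d, R) with N the lower unipotent subgroup;
    H(g) = log(diag(a)) and kappa(g) = k are then the Iwasawa projections.
    """
    q, r = np.linalg.qr(g.T)       # g^T = q r, hence g = r^T q^T
    sgn = np.sign(np.diag(r))
    L = r.T * sgn                  # lower triangular with positive diagonal
    k = (q * sgn).T                # orthogonal
    a = np.diag(np.diag(L))
    n = L @ np.linalg.inv(a)
    return n, a, k

g = np.array([[2.0, 1.0, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.2, 0.6]])
g /= np.linalg.det(g) ** (1.0 / 3.0)   # rescale into SL(3, R); det is positive here

n, a, k = iwasawa_nak(g)
assert np.allclose(n @ a @ k, g)                               # g = n a k
assert np.allclose(k @ k.T, np.eye(3))                         # kappa(g) is orthogonal
assert np.allclose(np.tril(n), n) and np.allclose(np.diag(n), 1.0)  # n unipotent
assert np.all(np.diag(a) > 0)                                  # a = exp(H(g))
```

The vector of logarithms of the diagonal of $a$ plays the role of $H(g)$ in this model.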
We put $a(g)=e^{H(g)}$; more generally, when $\lambda\in\mathfrak{a}_\mathbb C^*$ we write $a(g)^\lambda=e^{\lambda(H(g))}$. We now fix measure normalizations. We equip $K$ with the probability Haar measure. The Killing form induces a $W$-invariant Haar measure on $\mathfrak{a}$ and hence $\mathfrak{a}^*$; this yields a measure $da$ on $A$ via the exponential map. Next we let $dn$ be the left-invariant Haar measure on $N$ normalized as in [@DKV p. 37]. We now define a Haar measure $dg$ on $G$ through the Iwasawa decomposition $G=NAK$. Namely, we let $$\label{Iwasawa-measure1} dg=a^{-2\rho} dnda dk \qquad (g=nak).$$ We recall that $A$ normalizes $N$, and that $$\label{A-normalizes-N} \det \textup{Ad}(a)|_{\mathfrak{n}}=a^{2\rho}\qquad (a\in A).$$ We may therefore write $dg$ in the coordinates $G=ANK$ as $$\label{Iwasawa-measure2} dg=dadn dk \qquad (g=ank).$$ Finally, we recall the Cartan decomposition $G=KAK$. Let $B_\mathfrak{a}(0,R)$ denote the ball in $\mathfrak{a}$ centered at the origin and of radius $R>0$, relative to the norm induced by $\langle\cdot,\cdot\rangle$. We put $A_R=\exp B_\mathfrak{a}(0,R)$. Then we have $G_R=KA_RK$. ## Spherical functions of abelian $K$-type {#sec:tau-spherical} We now recall several formulae in the theory of $\tau$-spherical functions, where $\tau$ is an abelian $K$-type. Fix a character $\tau: K\rightarrow\mathbb C^\times$ and let $C^\infty(S,\tau)$ denote the space of smooth functions $f:G\rightarrow\mathbb C$ satisfying $f(gk)=\tau(k)^{-1}f(g)$ for all $k\in K$. We view $C^\infty(S,\tau)$ as the space of smooth sections $H^0(S,L(\tau))$ of the associated homogeneous line bundle $L(\tau)=G\times_K \tau$ over $S$. Let $\mathbb{D}(\tau)$ denote the algebra of all left $G$-invariant differential operators on $H^0(S,L(\tau))$. 
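The normalizing identity $\det \textup{Ad}(a)|_{\mathfrak{n}}=a^{2\rho}$ can be verified directly in a matrix model. Here is a toy check (assuming numpy is available) for $G=\text{SL}(3,\mathbb R)$, where $\mathfrak n$ consists of strictly upper triangular matrices and $2\rho(H)=\sum_{i<j}(t_i-t_j)$ for $H={\rm diag}(t_1,t_2,t_3)$:

```python
import numpy as np

t = np.array([0.4, -0.1, -0.3])   # H = diag(t) in the Cartan subspace, trace zero
a = np.diag(np.exp(t))

# Ad(a) acts on n (strictly upper triangular matrices) by conjugation;
# read off its eigenvalue on each root vector E_ij (i < j).
eigs = []
for i in range(3):
    for j in range(3):
        if i < j:
            E = np.zeros((3, 3))
            E[i, j] = 1.0
            AdE = a @ E @ np.linalg.inv(a)
            eigs.append(AdE[i, j])        # Ad(a) E_ij = e^{t_i - t_j} E_ij

two_rho_H = sum(t[i] - t[j] for i in range(3) for j in range(3) if i < j)
assert np.isclose(np.prod(eigs), np.exp(two_rho_H))   # det Ad(a)|_n = a^{2 rho}
```

The same conjugation computation, multiplied out over all positive root spaces, is what converts between the two Iwasawa coordinate systems for $dg$.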
Let $\gamma_{\rm HC}:\mathbb{D}(\tau)\rightarrow I(\mathfrak{a})$ be the Harish-Chandra isomorphism of type $\tau$ (see [@Shimeno1990 Theorem 2.4]), where $I(\mathfrak{a})=S(\mathfrak{a})^W$ and $S(\mathfrak{a})$ is the symmetric algebra of $\mathfrak{a}$ with complex coefficients. For fixed $\lambda\in\mathfrak{a}_\mathbb C^*$ we let $\chi_\lambda: \mathbb{D}(\tau)\rightarrow\mathbb C$ be the algebra homomorphism given by $\chi_\lambda(D)=\gamma_{\rm HC}(D)(\lambda)$; clearly, $\chi_{w\lambda}(D)=\chi_\lambda(D)$ for all $w\in W$. The space $\mathcal{E}_\lambda(S,\tau)$ of joint $\mathbb{D}(\tau)$-eigensections in $H^0(S,L(\tau))$ having fixed system of eigenvalues $\chi_\lambda$ forms a representation of $G$. Endowed with its natural Fréchet space topology, it is a smooth representation of $G$. Let $C^\infty(S,\tau,\tau)$ denote the subspace of $C^\infty(S,\tau)$ consisting of smooth functions $f:G\rightarrow\mathbb C$ such that $f(k_1gk_2)=\tau(k_1k_2)^{-1}f(g)$ for all $k_1,k_2\in K$. This is a commutative algebra under convolution [@Shimeno Lemma 3.1]. By [@Shimeno1990 Proposition 6.1], for $\lambda\in\mathfrak{a}_\mathbb C^*$ there is a unique $\varphi_{\lambda,\tau}\in C^\infty(S,\tau,\tau)\cap \mathcal{E}_\lambda(S,\tau)$ such that $\varphi_{\lambda,\tau}(e)=1$. Then the line spanned by $\varphi_{\lambda,\tau}$ is the $\tau$-isotypic subspace for $\mathcal{E}_\lambda(S,\tau)$ under the action of $K$. Moreover, if $\lambda,\mu\in\mathfrak{a}^*_\mathbb C$ then $$\label{eq:equality-sph-fn} \varphi_{\lambda,\tau}=\varphi_{\mu,\tau} \quad\text{if, and only if, there is } w\in W \textrm{ such that } \lambda=w\mu.$$ We call $\varphi_{\lambda,\tau}$ the *$\tau$-spherical function* of spectral parameter $\lambda$. Let $V(\lambda,\tau)$ denote the smallest closed invariant subspace of $\mathcal{E}_\lambda(S,\tau)$ containing $\varphi_{\lambda,\tau}$. We give it the relative topology.
Then $V(\lambda,\tau)$ is irreducible (see [@Heckman Corollary 5.2.5]) and is generically of infinite dimension (see [@Heckman Corollary 5.2.8]). ## Properties of the $\tau$-spherical function {#sec:induced-reps} In this section we recall the Poisson integral formula for $\varphi_{\lambda,\tau}$ and study its transformation properties. The Poisson integral formula is obtained by writing $\varphi_{\lambda,\tau}$ as a matrix coefficient of a generalized principal series representation. To define these representations, let $P=NAM$ be the standard minimal parabolic subgroup from Section [3.1](#sec:gp-decomp){reference-type="ref" reference="sec:gp-decomp"}. Let $(\xi,V_\xi)$ be a finite dimensional unitary representation of $M$ and let $\lambda\in\mathfrak{a}^*_\mathbb C$. We let $\pi_{\lambda,\xi}={\rm Ind}_P^G(1\otimes e^\lambda\otimes\xi)$ be the smooth representation unitarily induced from the representation $1\otimes e^\lambda\otimes\xi$ on $P=NAM$. Thus $\pi_{\lambda,\xi}$ consists of smooth functions $f:G\rightarrow V_\xi$ that satisfy $$f(namx)=a^{\rho+\lambda}\xi(m)f(x),\qquad n\in N,\; a\in A, \; m\in M, \; x\in G,$$ the $G$-action being via the right-regular representation. We have a nondegenerate $G$-invariant pairing between $\pi_{\lambda, \xi}$ and $\pi_{-\lambda, \xi^\vee}$, where $\xi^\vee$ denotes the contragredient, given by $$\langle f_1, f_2\rangle=\int_K \langle f_1(k),f_2(k) \rangle dk.$$ To specialize this to the present setting, we let $\tau$ be an abelian $K$-type, and $\xi$ the restriction of $\tau$ to $M$. Then by Frobenius reciprocity $\pi_{\lambda, \xi}$ admits $\tau$ as a non-trivial $K$-type of dimension one, the line generated by $e_{\lambda, \tau}(g)=\tau(\kappa(g))a(g)^{\rho+\lambda}$. Note that $e_{\lambda, \tau}$ is the unique function in $\pi_{\lambda, \xi}$ which extends the function $\tau$ on $K$. 
The matrix coefficient assignment $v\mapsto m_v$, where $m_v(g)=\langle v,\pi_{\lambda, \xi^{-1}}(g)e_{\lambda, \tau^{-1}} \rangle$, yields an intertwining map $\pi_{-\lambda,\xi}\rightarrow \mathcal{E}_\lambda(S,\tau)$. In particular, when $v=e_{-\lambda, \tau}$, this yields the integral expression $$\label{defn-sph-fn} \varphi_{\lambda,\tau}(g)=\langle e_{-\lambda, \tau}, \pi_{\lambda, \xi^{-1}}(g) e_{\lambda, \tau^{-1}} \rangle=\int_K a(kg)^{\rho+\lambda}\tau(\kappa(kg)^{-1} k)dk,$$ which recovers that of Harish-Chandra when $\tau$ is trivial. We shall use the following transformation property of $\varphi_{\lambda,\tau}$. **Proposition 2**. *The $\tau$-spherical function satisfies $\varphi_{\lambda,\tau}(g^{-1})=\varphi_{-\lambda,\tau^{-1}}(g)$.* *Proof.* This follows from the expression [\[defn-sph-fn\]](#defn-sph-fn){reference-type="eqref" reference="defn-sph-fn"} for $\varphi_{\lambda,\tau}$ as a matrix coefficient. We have $$\begin{aligned} \varphi_{\lambda,\tau}(g^{-1}) & = \langle e_{-\lambda, \tau}, \pi_{\lambda, \xi^{-1}}(g^{-1}) e_{\lambda, \tau^{-1}} \rangle \\ & = \langle \pi_{-\lambda, \xi}(g) e_{-\lambda, \tau}, e_{\lambda, \tau^{-1}} \rangle \\ & = \langle e_{\lambda, \tau^{-1}}, \pi_{-\lambda, \xi}(g) e_{-\lambda, \tau} \rangle = \varphi_{-\lambda,\tau^{-1}}(g),\end{aligned}$$ as required. ◻ ## Harish-Chandra transform of type $\tau$ We continue to let $\tau$ be a one-dimensional $K$-type. Let $C_c^\infty(S,\tau,\tau)$ denote the $\tau$-spherical Hecke algebra, i.e., the space of compactly supported functions in $C^\infty(S,\tau,\tau)$, equipped with the convolution product. Using Gelfand's trick, one can show that $C_c^\infty(S,\tau,\tau)$ is commutative (see [@Shimeno Lemma 3.1]). For $R>0$, let $\mathcal{PW}(\mathfrak{a}_\mathbb C^*)_R$ denote the space of Paley--Wiener functions of exponential type $R$ on $\mathfrak{a}_\mathbb C^*$. Put $\mathcal{PW}(\mathfrak{a}_\mathbb C^*)=\bigcup_{R>0} \mathcal{PW}(\mathfrak{a}_\mathbb C^*)_R$.
The Harish-Chandra transform of type $\tau$ is the algebra homomorphism $\mathscr{H}_\tau:C_c^\infty(S,\tau,\tau)\rightarrow \mathcal{PW}(\mathfrak{a}_\mathbb C^*)$ given by $$\label{HC-transform} \mathscr{H}_\tau f(\lambda)=\int_G f(g)\varphi_{-\lambda,\tau^{-1}}(g)dg.$$ In particular $\mathscr{H}_\tau (f\ast g)=\mathscr{H}_\tau f\cdot\mathscr{H}_\tau g$. The map $\mathscr{H}_\tau$ is injective with image $\mathcal{PW}(\mathfrak{a}_\mathbb C^*)^W$, the subspace of Weyl group invariants [@Shimeno Lemma 6.2 and Theorem 6.5]. We shall sometimes write $\hat{f}(\lambda;\tau)$ for $\mathscr{H}_\tau f(\lambda)$; moreover, when $\tau=1$, we shall often drop $\tau$ from the notation and simply write $\mathscr{H}f$ or $\hat{f}$. On the other hand, the space $C_c^\infty(S,\tau,\tau)$ maps onto $C^\infty_c(A)^W$ via the Abel--Satake transform $$\mathscr{A}_\tau f(a)=a^\rho \int_N f(an) dn.$$ These two maps fit into the following commutative diagram [@Heckman Theorem 5.4.2] $$\label{eq:commutative-diagram} \xymatrix{ &C_c^\infty(S,\tau,\tau)\ar[rd]^{\mathscr{A}_\tau} \ar[ld]_{\mathscr{H}_\tau}\\ \mathcal{PW}(\mathfrak{a}_\mathbb C^*)^W && C^\infty_c(A)^W,\ar[ll]_{\mathscr{F}} }$$ where $\mathscr{F}f(\lambda)=\int_A f(a)a^{-\lambda} da$ is the Fourier transform. These maps respect supports in the following way: for $R>0$, let $C^\infty_R(S,\tau,\tau)$ denote the space of functions $f$ in $C^\infty_c(S,\tau,\tau)$ with ${\rm supp}(f)\subset G_R$; similarly, let $C^\infty_R(A)^W$ denote the space of functions $g$ in $C^\infty_c(A)^W$ with ${\rm supp}(g)\subset A_R$. Then, for $f\in C^\infty_c(S,\tau,\tau)$: $$f\in C^\infty_R(S,\tau,\tau)\;\text{ if, and only if, }\; \mathscr{A}_\tau(f)\in C^\infty_R(A)^W\;\text{ if, and only if, }\; \mathscr{H}_\tau (f)\in \mathcal{PW}(\mathfrak{a}_\mathbb C^*)^W_R;$$ see [@Shimeno Theorem 6.5].
# $\tau$-spherical inversion formula {#sec:inv-formula} In this section $G$ shall denote a non-compact connected *simple* Lie group, $K$ a maximal compact subgroup of $G$, and $S=G/K$. The center of $\mathfrak k$ is either trivial or one-dimensional, and in the latter case $G$ is said to be of Hermitian type. We let $\tau$ be a character of $K$. We shall make the following simplifying assumption on $G$: $$\label{eq:assumption} \textit{whenever $\tau$ is non-trivial, the Hermitian group $G$ has reduced root system.}$$ This will allow us to keep the notational complexity to a minimum, while retaining sufficient generality for our applications. For the proof of Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"} the only time we shall consider a non-trivial character $\tau$ will be for $G=\text{Sp} _{2m}(\mathbb R)$. In this section we recall the explicit formulae for the $\tau$-spherical Plancherel measure $\mu_{\text{Pl}, \tau}$ on $\mathfrak{a}^*_\mathbb C$. This is the unique $\sigma$-finite measure for which the $\tau$-spherical Harish-Chandra transform $\mathscr{H}_\tau$ extends to an isometry $$\mathscr{H}_\tau: L^2(S,\tau,\tau)\rightarrow L^2(\mathfrak{a}^*_\mathbb C,|W|^{-1} \mu_{\text{Pl}, \tau}).$$ Importantly for us, the $\tau$-spherical Plancherel measure satisfies an inversion formula of the form $$\label{eq:general-inversion} k(g)=\frac{1}{|W|}\int_{\mathfrak{a}^*_\mathbb C}\mathscr{H}_\tau k(\lambda)\varphi_{\lambda,\tau}(g)d\mu_{\text{Pl}, \tau}(\lambda),$$ for every $k\in C^\infty_c(S,\tau,\tau)$. ## Parabolic subgroups {#sec:parabolic-notation} Recall from §[3.1](#sec:gp-decomp){reference-type="ref" reference="sec:gp-decomp"} that we have fixed a minimal parabolic subgroup $P$ of $G$. Any parabolic subgroup of $G$ containing $P$ is called *standard*; they are parametrized as follows.
Let $\Theta$ be a subset of the positive simple roots $\Psi$, and write $\langle\Theta\rangle=\Sigma^+\cap\big(\sum_{\alpha\in\Theta}\mathbb N\alpha\big)$ for the system of positive roots generated by $\Theta$. We write $W_\Theta$ for the subgroup of $W$ generated by the hyperplane reflections $s_\alpha$ for $\alpha\in\Theta$. Then $P_\Theta=PW_\Theta P$ is a standard parabolic subgroup, with unipotent radical $N_\Theta$ given by the product over $\alpha\in\Sigma^+\setminus\langle\Theta\rangle$ of the one-parameter subgroups corresponding to $\mathfrak g_\alpha$. In particular $P_\emptyset=P$ and (by the Bruhat decomposition) $P_{\Psi}=G$. Let $P_\Theta= N_\Theta A_\Theta M_\Theta$ be its Langlands decomposition. The Lie algebra of $A_\Theta$ is given by $$\label{defn:Lie-A-theta} \mathfrak{a}_\Theta=\{H\in\mathfrak{a}: \alpha(H)=0 \;\forall\alpha\in\Theta\}=\sum_{\alpha\in\Psi\setminus\Theta}\mathbb RH_\alpha.$$ Note that $\mathfrak{a}_\emptyset=\mathfrak{a}$ and $\mathfrak{a}_{\Psi}=\{0\}$. We write $\mathfrak{a}(\Theta)$ for the orthocomplement of $\mathfrak{a}_\Theta$ in $\mathfrak{a}$ relative to the Killing form, and $N(\Theta)$ for the unipotent subgroup of $M_\Theta$ given by the product over $\alpha \in \langle \Theta \rangle$ of the one-parameter subgroups corresponding to $\mathfrak g_\alpha$. Finally, let $\overline{N_\Theta}$ and $\overline{N(\Theta)}$ denote the analytic subgroups corresponding to the Lie subalgebras $\sum_{-\alpha\in\Sigma^+\setminus\langle\Theta\rangle}\mathfrak g_\alpha$ and $\sum_{-\alpha\in\langle\Theta\rangle}\mathfrak g_\alpha$, respectively. ## More notation on roots {#sec:more-roots} In preparation for the paragraphs to come, we shall need to introduce some notation concerning various types of roots in $\Sigma$. We let $\Sigma_{\rm red}=\{\alpha\in\Sigma: \alpha/2\notin\Sigma\}$ denote the subset of reduced roots. We say that $\Sigma$ is *reduced* if $\Sigma_{\rm red}=\Sigma$. 
As $\alpha$ varies over $\Sigma_{\rm red}$, there are at most two possible lengths $\langle \alpha,\alpha\rangle^{1/2}$. (Recall that the root systems $\Sigma$ for which all roots have the same length are called *simply laced*; a simply laced root system is automatically reduced.) We call the reduced roots of minimal length *short* and those of maximal length *long* (in the simply laced case, all roots will be deemed long). Roots of the same length have the same multiplicities, as they lie in the same Weyl group orbit. If $G$ is Hermitian with reduced root system, as in assumption [\[eq:assumption\]](#eq:assumption){reference-type="eqref" reference="eq:assumption"}, the classification result of C.C. Moore [@Moore Theorem 2] shows that $\Sigma$ is of the form [\[symplectic-roots\]](#symplectic-roots){reference-type="eqref" reference="symplectic-roots"}, i.e., of type $C$. However, we shall work with a different basis of $\mathfrak a^*$ to the one in [\[symplectic-roots\]](#symplectic-roots){reference-type="eqref" reference="symplectic-roots"} (differing only by a factor of 2), in order to agree with Shimeno's paper. We take a basis $\{ \beta_1, \ldots, \beta_r \}$ of $\mathfrak a^*$ so that the sets of short and long roots are $$\label{Shimeno-roots} \Sigma_s=\{\tfrac12 (\pm\beta_i\pm\beta_j): 1\leqslant i,j\leqslant r, i\neq j\} \quad \text{and} \quad \Sigma_\ell=\{\pm\beta_1,\ldots ,\pm\beta_r\}$$ respectively. We have $m_\alpha=1$ for all $\alpha\in\Sigma_\ell$. The positive short and long roots are $\Sigma_s^+=\{\frac12 (\beta_i\pm\beta_j) : i > j \}$ and $\Sigma_\ell^+=\{\beta_1,\ldots ,\beta_r\}$, respectively. Then $\Sigma_\ell^+$ forms a basis for $\mathfrak{a}^*$.
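The cardinalities implicit in [\[Shimeno-roots\]](#Shimeno-roots){reference-type="eqref" reference="Shimeno-roots"} are easy to enumerate. A quick standard-library check, with the $\beta_i$ modelled as standard basis vectors:

```python
from fractions import Fraction
from itertools import combinations, product

r = 4  # rank; any r >= 2 works

short = set()
for i, j in combinations(range(r), 2):
    for si, sj in product((1, -1), repeat=2):
        c = [Fraction(0)] * r
        c[i], c[j] = Fraction(si, 2), Fraction(sj, 2)
        short.add(tuple(c))          # (1/2)(±beta_i ± beta_j), i != j

long_ = set()
for i in range(r):
    for s in (1, -1):
        c = [Fraction(0)] * r
        c[i] = Fraction(s)
        long_.add(tuple(c))          # ±beta_i

assert len(short) == 2 * r * (r - 1)   # four sign choices per unordered pair {i, j}
assert len(long_) == 2 * r
assert not (short & long_)
# The positive long roots beta_1, ..., beta_r are the standard basis vectors
# in this model, so Sigma_l^+ is visibly a basis of a^*.
```

This matches the type $C_r$ picture: $2r(r-1)$ short roots and $2r$ long roots.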
For $\lambda\in\mathfrak{a}_\mathbb C^*$ with ${\rm Re}\, \langle\lambda,\alpha\rangle>0$ for all $\alpha\in\Sigma^+\setminus\langle\Theta\rangle$ we consider the absolutely convergent integral $${\bf c}^\Theta(\lambda;\tau)=\int_{\overline{N_\Theta}}a(\bar{n})^{\lambda+\rho} \tau(\kappa(\bar{n}))^{-1}d\bar{n}.$$ Similarly, for $\lambda\in\mathfrak{a}_\mathbb C^*$ with ${\rm Re}\, \langle\lambda,\alpha\rangle>0$ for all $\alpha\in\langle\Theta\rangle$, the integral $${\bf c}_\Theta(\lambda;\tau)=\int_{\overline{N(\Theta)}}a(\bar{n})^{\lambda+\rho} \tau(\kappa(\bar{n}))^{-1}d\bar{n}$$ converges absolutely. Both of these integrals are taken with respect to the invariant measure $d\bar{n}$, normalized so that ${\bf c}_\Theta(\rho,\tau_{\rm triv})=1={\bf c}^\Theta(\rho,\tau_{\rm triv})$. When $\Theta=\emptyset$, in which case $\overline{N_\Theta}=\overline{N}$, or $\Theta=\Psi$, in which case $\overline{N(\Theta)}=\overline{N}$, we shall often write ${\bf c}(\lambda;\tau)$ for ${\bf c}^\emptyset (\lambda;\tau)={\bf c}_\Psi (\lambda;\tau)$. Furthermore, when $\tau=\tau_{\rm triv}$ we shorten ${\bf c}(\lambda;\tau_{\rm triv})$ further to ${\bf c}(\lambda)$. The rank one reduction procedure of the Gindikin--Karpelevich formula allows us to evaluate these integrals using quotients of Gamma functions attached to each positive reduced root, as we now describe. We begin with the spherical case. Let $\lambda\in\mathfrak{a}^*_\mathbb C$. 
For $\alpha\in \Sigma_{\rm red}^+$ we put $$\label{c-tilde-spherical} {\bf c}_\alpha(\lambda) = c_{\alpha, 0} \frac{ 2^{ - \tfrac{1}{2}\langle \lambda, \alpha^\vee \rangle } \Gamma( \tfrac{1}{2} \langle \lambda, \alpha^\vee \rangle ) }{ \Gamma( \tfrac{1}{4} m_\alpha + \tfrac{1}{2} + \tfrac{1}{4} \langle \lambda, \alpha^\vee \rangle ) \Gamma( \tfrac{1}{4} m_\alpha + \tfrac{1}{2} m_{2\alpha} + \tfrac{1}{4} \langle \lambda, \alpha^\vee \rangle ) },$$ where the constant $c_{\alpha, 0}$ is given by $$c_{\alpha,0} = 2^{ \tfrac{1}{2} m_\alpha + m_{2\alpha}} \Gamma( \tfrac{1}{2}( m_\alpha + m_{2\alpha} + 1) ).$$ This is the same as the formula from [@Helgason Ch. IV, Section 6] after replacing $\alpha_0$ with $\tfrac{1}{2} \alpha^\vee$. Let $\langle\Theta\rangle_{\rm red}=\langle\Theta\rangle\cap\Sigma_{\rm red}$. Then $${\bf c}_\Theta(\lambda)=\prod_{\alpha\in\langle\Theta\rangle_{\rm red}}{\bf c}_\alpha(\lambda),\qquad {\bf c}^\Theta(\lambda)=\prod_{\alpha\in \Sigma_{\rm red}^+\setminus\langle\Theta\rangle_{\rm red}}{\bf c}_\alpha(\lambda).$$ In particular, when $\Theta=\Psi$ in the first of the above two products, or $\Theta=\emptyset$ in the second, we recover ${\bf c}(\lambda)=\prod_{\alpha\in \Sigma_{\rm red}^+}{\bf c}_\alpha(\lambda)$ from [@Helgason Ch. IV (43)]. Now let $\tau$ be an arbitrary character of $K$. In accordance with [\[eq:assumption\]](#eq:assumption){reference-type="eqref" reference="eq:assumption"}, and since we have already treated the spherical case for general groups, we may now assume that $G$ is of Hermitian type with reduced root system. In this case, our formulas for the $\tau$-spherical ${\bf c}$-function will involve a real parameter $\ell_\tau$ associated to $\tau$, whose definition we recall from [@Shimeno p. 337]. Let $\mathfrak k_s$ and $\mathfrak{z}_\mathfrak k$ be the derived subalgebra and center of $\mathfrak k$, respectively. Then $\mathfrak k_s\subsetneq \mathfrak k$ and $\dim \mathfrak{z}_\mathfrak k=1$. 
Let $G_\mathbb C$ be the simply connected Lie group with Lie algebra $\mathfrak g_\mathbb C$, and let $G_\mathbb R$, $K_\mathbb R$, and $(K_\mathbb R)_s$ be the analytic subgroups of $G_\mathbb C$ corresponding to $\mathfrak g$, $\mathfrak k$, and $\mathfrak k_s$. Let $Z$ be the non-zero element of $\mathfrak{z}_\mathfrak k$ constructed in [@Schl Sect. 3], so that $e^{tZ} \in (K_\mathbb R)_s$ if and only if $t \in 2 \pi \mathbb Z$. Given a character $\tau$ of $K$, we define $\ell_\tau$ by requiring that $\tau( e^{tZ}) = e^{i \ell_\tau t}$. As in Section [4.2](#sec:more-roots){reference-type="ref" reference="sec:more-roots"} we let $\Sigma_s$ and $\Sigma_\ell$ denote the short and long roots. Let $\lambda\in\mathfrak{a}^*_\mathbb C$. Following (4.18)--(4.19) of [@Shimeno], for $\alpha\in \Sigma_\ell^+$ we put $$\label{c-tilde} {\bf c}_\alpha(\lambda,\tau) = \frac{ 2^{1 - \langle \lambda, \alpha^\vee \rangle } \Gamma( \langle \lambda, \alpha^\vee \rangle ) }{ \Gamma( \tfrac{1}{2} + \tfrac{1}{2} \langle \lambda, \alpha^\vee \rangle + \tfrac{1}{2} \ell_\tau ) \Gamma( \tfrac{1}{2} + \tfrac{1}{2} \langle \lambda, \alpha^\vee \rangle - \tfrac{1}{2} \ell_\tau ) },$$ whereas for $\alpha\in \Sigma_s^+$ we put ${\bf c}_\alpha(\lambda,\tau)={\bf c}_\alpha(\lambda)$. Then [@Shimeno Lemma 4.8] and [@Shimeno1990 Theorem 7.4] show that $$\label{c-theta-tau-factorization} {\bf c}_\Theta(\lambda,\tau)=\prod_{\alpha\in\langle\Theta\rangle}{\bf c}_\alpha(\lambda,\tau),\qquad {\bf c}^\Theta(\lambda,\tau)=\prod_{\alpha\in \Sigma^+\setminus\langle\Theta\rangle}{\bf c}_\alpha(\lambda,\tau).$$ Since [\[c-tilde\]](#c-tilde){reference-type="eqref" reference="c-tilde"} reduces to [\[c-tilde-spherical\]](#c-tilde-spherical){reference-type="eqref" reference="c-tilde-spherical"} when $\tau$ is trivial, this definition specializes to the preceding case for type $C$ irreducible root systems. 
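Before comparing with the literature, the normalization of these formulas can be sanity-checked numerically in the simplest rank-one case (an illustrative sketch only, not part of the argument). Take $G=\mathrm{SL}_2(\mathbb R)$, so that $S=\mathbb H^2$ with $m_\alpha=1$, $m_{2\alpha}=0$, and write $z=\langle\lambda,\alpha^\vee\rangle$. Assuming the standard $\mathrm{SL}_2(\mathbb R)$ Iwasawa computation $a(\bar n(x))^{\lambda+\rho}=(1+x^2)^{-(z+1)/2}$, the normalization ${\bf c}(\rho)=1$ amounts to dividing the $\overline N$-integral by $\int_{\mathbb R}(1+x^2)^{-1}\,dx=\pi$, and the result should match [\[c-tilde-spherical\]](#c-tilde-spherical){reference-type="eqref" reference="c-tilde-spherical"}:

```python
import math

def c_integral(z, steps=50000):
    # Normalized Nbar-integral for S = H^2: substituting x = tan(theta) turns
    # int_R (1 + x^2)^(-(z+1)/2) dx into int_{-pi/2}^{pi/2} cos(theta)^(z-1) dtheta.
    h = math.pi / steps
    total = sum(math.cos(-math.pi / 2 + (k + 0.5) * h) ** (z - 1.0) for k in range(steps))
    return total * h / math.pi          # divide by int_R (1 + x^2)^(-1) dx = pi

def c_formula(z):
    # (c-tilde-spherical) with m_alpha = 1, m_{2alpha} = 0,
    # so c_{alpha,0} = 2^{1/2} Gamma(1) = sqrt(2)
    c0 = 2.0 ** 0.5
    return (c0 * 2.0 ** (-z / 2.0) * math.gamma(z / 2.0)
            / (math.gamma(0.75 + z / 4.0) * math.gamma(0.25 + z / 4.0)))
```

In particular $z=1$, i.e. $\lambda=\rho$, returns $1$ on both sides, consistent with the normalization ${\bf c}(\rho)=1$, and $z=3$ returns $1/2$.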
## Comparison to literature {#sec:Gamma-comparison} We make a few comments to illustrate that our formulas for ${\bf c}_\alpha(\lambda,\tau)$ agree with those in Shimeno [@Shimeno], and also with [\[c-tilde-spherical\]](#c-tilde-spherical){reference-type="eqref" reference="c-tilde-spherical"} in the case of overlap, when $G$ is Hermitian with reduced root system and $\tau$ is trivial. We begin with the agreement with Shimeno. If we use the notation $\lambda_i = \langle \lambda, \beta_i^\vee \rangle$ as in [@Shimeno (1.8)], our formula [\[c-tilde\]](#c-tilde){reference-type="eqref" reference="c-tilde"} in the case $\alpha \in \Sigma^+_\ell$ is the same as [@Shimeno (4.19)] after using $m = 0$ there. For $\alpha \in \Sigma^+_s$, the formula [@Shimeno (4.18)] reads $$\label{eq:Shimeno-c-alpha} {\bf c}_\alpha(\lambda, \tau) = \frac{ 2^{m_\alpha - 1} \Gamma( \tfrac12(m_\alpha + 1) ) \Gamma( \tfrac{1}{2} \langle \lambda, \alpha^\vee \rangle ) }{ \sqrt{\pi} \Gamma( \tfrac{1}{2} m_\alpha + \tfrac{1}{2} \langle \lambda, \alpha^\vee \rangle ) }$$ after using $m'=m_\alpha$, while formula [\[c-tilde-spherical\]](#c-tilde-spherical){reference-type="eqref" reference="c-tilde-spherical"} for ${\bf c}_\alpha(\lambda, \tau) = {\bf c}_\alpha(\lambda)$ gives $${\bf c}_\alpha(\lambda, \tau) = \frac{ 2^{\tfrac{1}{2} m_\alpha - \tfrac{1}{2} \langle \lambda, \alpha^\vee \rangle } \Gamma( \tfrac12(m_\alpha + 1) ) \Gamma( \tfrac{1}{2} \langle \lambda, \alpha^\vee \rangle ) }{ \Gamma( \tfrac{1}{4} m_\alpha + \tfrac12 + \tfrac{1}{4} \langle \lambda, \alpha^\vee \rangle ) \Gamma( \tfrac{1}{4} m_\alpha + \tfrac{1}{4} \langle \lambda, \alpha^\vee \rangle ) }.$$ It may be checked that these agree after using the duplication formula $\Gamma(s)\Gamma(s+1/2)=2^{1-2s}\sqrt{\pi}\Gamma(2s)$ for the Gamma function. 
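The asserted agreement can also be spot-checked numerically (a sketch only; real $z=\langle\lambda,\alpha^\vee\rangle>0$ and a few multiplicities suffice, since both sides are meromorphic in $z$):

```python
import math

def shimeno_4_18(z, m):
    # [Shimeno (4.18)] with m' = m_alpha = m and z = <lambda, alpha^vee>
    return (2.0 ** (m - 1) * math.gamma((m + 1) / 2.0) * math.gamma(z / 2.0)
            / (math.sqrt(math.pi) * math.gamma(m / 2.0 + z / 2.0)))

def c_tilde_spherical(z, m):
    # (c-tilde-spherical) for a short root with m_{2alpha} = 0,
    # so c_{alpha,0} = 2^{m/2} Gamma((m+1)/2)
    return (2.0 ** (m / 2.0 - z / 2.0) * math.gamma((m + 1) / 2.0) * math.gamma(z / 2.0)
            / (math.gamma(m / 4.0 + 0.5 + z / 4.0) * math.gamma(m / 4.0 + z / 4.0)))
```

The two expressions agree to machine precision, as the duplication formula with $s=\tfrac14 m_\alpha+\tfrac14\langle\lambda,\alpha^\vee\rangle$ predicts.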
We next check that our formulas for ${\bf c}_\alpha(\lambda, \tau)$ agree with [\[c-tilde-spherical\]](#c-tilde-spherical){reference-type="eqref" reference="c-tilde-spherical"} when $G$ is Hermitian with reduced root system and $\tau$ is trivial. When $\alpha \in \Sigma^+_s$, there is nothing to check. When $\alpha \in \Sigma^+_\ell$, [\[c-tilde-spherical\]](#c-tilde-spherical){reference-type="eqref" reference="c-tilde-spherical"} becomes $${\bf c}_\alpha(\lambda) = \frac{ 2^{\tfrac{1}{2} - \tfrac{1}{2} \langle \lambda, \alpha^\vee \rangle } \Gamma( \tfrac{1}{2} \langle \lambda, \alpha^\vee \rangle ) }{ \Gamma( \tfrac{3}{4} + \tfrac{1}{4} \langle \lambda, \alpha^\vee \rangle ) \Gamma( \tfrac{1}{4} + \tfrac{1}{4} \langle \lambda, \alpha^\vee \rangle ) }.$$ Just as in the case above, this may be converted to $$\label{c-alpha-simplified} {\bf c}_\alpha(\lambda) = \frac{ \Gamma( \tfrac{1}{2} \langle \lambda, \alpha^\vee \rangle ) }{ \sqrt{\pi} \Gamma( \tfrac{1}{2} + \tfrac{1}{2} \langle \lambda, \alpha^\vee \rangle ) }$$ by using the duplication formula. On the other hand, [\[c-tilde\]](#c-tilde){reference-type="eqref" reference="c-tilde"} gives $${\bf c}_\alpha(\lambda, \tau_\text{triv} ) = \frac{ 2^{1 - \langle \lambda, \alpha^\vee \rangle } \Gamma( \langle \lambda, \alpha^\vee \rangle ) }{ \Gamma( \tfrac{1}{2} + \tfrac{1}{2} \langle \lambda, \alpha^\vee \rangle )^2 }.$$ By applying the duplication formula once more to the $\Gamma( \langle \lambda, \alpha^\vee \rangle )$ term in the numerator of this last expression, we see that the two formulas are equal. ## Spherical inversion formula {#sec:tau-triv-inversion} In this paragraph, we take $\tau$ to be trivial and $G$ general, as spelled out at the beginning of Section [4](#sec:inv-formula){reference-type="ref" reference="sec:inv-formula"}, and we review the classical spherical inversion formula. When $\tau$ is trivial, we shall often abbreviate $\mu_{\text{Pl}, \tau}$ to $\mu_\text{Pl}$. 
In this case, the Plancherel measure $\mu_\text{Pl}$ was determined by Gangolli [@Gangolli Theorem 3.5] and Helgason [@Helgason Ch. IV, §7, Theorem 7.1], as part of their proof of the commutativity of [\[eq:commutative-diagram\]](#eq:commutative-diagram){reference-type="eqref" reference="eq:commutative-diagram"}, and is given by $$d \mu_\text{Pl} = |{\bf c}(\lambda)|^{-2}d\lambda,$$ where $d\lambda$ is the Lebesgue measure on $i \mathfrak a^*$. Thus, $d\mu_\text{Pl} = \beta_S(\lambda) d\lambda$, where $\beta_S(\lambda)=|{\bf c}(\lambda)|^{-2}$ is the density function appearing throughout the introduction. For any $F\in \mathcal{PW}(\mathfrak{a}_\mathbb C^*)^W$ we let $$\label{eq:def-J} \mathscr{J}F(g)=\int_{i\mathfrak{a}^*}F(\lambda)\varphi_\lambda (g)|{\bf c}(\lambda)|^{-2}d\lambda.$$ We have $\mathscr{J}F\in C^\infty_c(S,\tau_{\rm triv},\tau_{\rm triv})$. Recall from [\[HC-transform\]](#HC-transform){reference-type="eqref" reference="HC-transform"} the definition of the Harish-Chandra transform $\mathscr{H}=\mathscr{H}_{\tau_{\rm triv}}$, as well as the shorthand notation $k\mapsto\hat{k}=\mathscr{H}k$ we often use for it. For $k\in C^\infty_c(S,\tau_{\rm triv},\tau_{\rm triv})$, one has the following inversion formula $$\label{eq:sph-planch-inv} k(g)=\frac{1}{|W|}\mathscr{J}(\mathscr{H}k)(g)=\frac{1}{|W|}\int_{i\mathfrak{a}^*}\hat{k}(\lambda)\varphi_{\lambda}(g)|\mathbf{c}(\lambda)|^{-2}d\lambda \qquad (g\in G),$$ which recovers [\[eq:general-inversion\]](#eq:general-inversion){reference-type="eqref" reference="eq:general-inversion"} for the trivial $K$-type. ## $\tau$-spherical Plancherel measure and inversion formula {#sec:tau-sph-inversion} We now let $G$ be Hermitian of reduced root system. The $\tau$-spherical Plancherel measure on $\mathfrak{a}_\mathbb C^*$ (where $\tau$ is a character of $K$), was determined in [@Shimeno], using the various ${\bf c}$-functions from Section [4.3](#sec:c-function){reference-type="ref" reference="sec:c-function"}. 
To describe it, we begin by introducing some notation. We maintain the assumption [\[eq:assumption\]](#eq:assumption){reference-type="eqref" reference="eq:assumption"} that the root system $\Sigma$ of $G$ is reduced. Recall from Section [4.2](#sec:more-roots){reference-type="ref" reference="sec:more-roots"} that $\Sigma$ is of the form [\[Shimeno-roots\]](#Shimeno-roots){reference-type="eqref" reference="Shimeno-roots"}. In that case the simple roots $\Psi=\{\alpha_1,\ldots ,\alpha_r\}$ are of the form $$\label{eq:alpha-to-beta} \alpha_1=(\beta_r-\beta_{r-1})/2,\quad \alpha_2=(\beta_{r-1}-\beta_{r-2})/2,\quad \ldots ,\quad \alpha_{r-1}=(\beta_2-\beta_1)/2,\quad \alpha_r=\beta_1.$$ Recall from Section [4.1](#sec:parabolic-notation){reference-type="ref" reference="sec:parabolic-notation"} the notation associated with subsets $\Theta$ of $\Psi$. We shall be particularly interested in the subsets $$\Theta_j=\{\alpha_{r-j+1},\ldots ,\alpha_r\}\qquad (j=1,\ldots ,r).$$ We write $\Theta_0=\emptyset$. Thus $\dim \mathfrak{a}_{\Theta_j}=r-j$, $\dim \mathfrak{a}(\Theta_j)=j$, and $$\begin{aligned} \{0\}=\mathfrak{a}_{\Theta_r}\subset\cdots&\subset\mathfrak{a}_{\Theta_1}\subset \mathfrak{a}_{\Theta_0}=\mathfrak{a},\\ \{0\}=\mathfrak{a}(\Theta_0)\subset\cdots&\subset\mathfrak{a}(\Theta_{r-1})\subset \mathfrak{a}(\Theta_r)=\mathfrak{a}.\end{aligned}$$ Note that $W_{\Theta_r}=\{1\}$ and $W_{\Theta_0}=W$. We shall associate with each $j=0,1,\ldots ,r$ a measure $\mu_{\text{Pl}, \tau}^{(j)}$, as well as a transform $$\mathscr{J}_{\Theta_j,\tau}: \mathcal{PW}(\mathfrak{a}_\mathbb C^*)^W\rightarrow C^\infty(S,\tau,\tau),$$ given by $$\mathscr{J}_{\Theta_j,\tau} F(g) = \frac{|W|}{|W_{\Theta_{r-j}}|} \int_{ \mathfrak a^*_\mathbb C} F(\lambda) \varphi_{\lambda, \tau}(g) d\mu_{\text{Pl}, \tau}^{(j)}(\lambda).$$ We begin with the extreme cases $j=0$ and $j=r$. 
--The case $j=0$ (the "most continuous part"): We define $$d\mu_{\text{Pl}, \tau}^{(0)} = |{\bf c}(\lambda,\tau)|^{-2}d\lambda,$$ where $d\lambda$ is the Lebesgue measure on $i \mathfrak a^*$. For any $F\in \mathcal{PW}(\mathfrak{a}_\mathbb C^*)^W$, the corresponding transform is given by $$\mathscr{J}_{\Theta_0,\tau}F(g)=\int_{i\mathfrak{a}^*}F(\lambda)\varphi_{\lambda,\tau}(g)|{\bf c}(\lambda,\tau)|^{-2}d\lambda,$$ which directly generalizes the transform $\mathscr{J}$ in [\[eq:def-J\]](#eq:def-J){reference-type="eqref" reference="eq:def-J"} in the spherical case, but with arbitrary abelian $K$-type $\tau$. --The case $j=r$ (the "discrete part"): Let $L^2(S,\tau)$ denote the Hilbert space of square-integrable functions on $G$ which translate under $K$ on the right by $\tau^{-1}$. We define the $W$-invariant subset $D_\tau$ of $\mathfrak{a}^*_\mathbb C$ as the set of $\lambda\in\mathfrak{a}_\mathbb C^*$ for which $\varphi_{\lambda,\tau}\in L^2(S,\tau)$. In this case, the representation $V(\lambda,\tau)$ generated by $\varphi_{\lambda,\tau}$, as described in §[3.2](#sec:tau-spherical){reference-type="ref" reference="sec:tau-spherical"}, is a holomorphic discrete series representation [@Shimeno Theorem 5.10]. The set $D_\tau$ was determined in [@Shimeno Theorem 5.1]; in particular, $D_\tau$ is finite, and lies in $\mathfrak{a}^*$. *Remark 4*. For example, for $G=\text{Sp} _{2m}(\mathbb R)$, it is shown in [@Shimeno Theorem 5.1] that $D_\tau$ is empty precisely when $|\ell_\tau|\leqslant m$ [@Shimeno (5.4)]. When $m=1$ this corresponds to the fact that holomorphic discrete series of $\text{Sp} _2(\mathbb R)=\text{SL} _2(\mathbb R)$ have weight at least $2$. For $\lambda\in D_\tau$ let $d(\lambda,\tau)$ denote the formal degree of the representation $V(\lambda,\tau)$ generated by $\varphi_{\lambda,\tau}$. We define $$\mu_{\text{Pl}, \tau}^{(r)} = \sum_{\varrho\in D_{\tau}} d(\varrho,\tau) \delta_\varrho,$$ where $\delta_\varrho$ is the delta measure at $\varrho$. 
For $F\in \mathcal{PW}(\mathfrak{a}_\mathbb C^*)^W$, the corresponding transform is $$\mathscr{J}_{\Theta_r,\tau}F(g)=\sum_{\varrho\in D_{\tau}}d(\varrho,\tau)F(\varrho)\varphi_{\varrho,\tau}(g).$$ -- The intermediate cases $j=1,\ldots ,r-1$: For each $j=1,\ldots ,r-1$, we define a discrete subset of $\mathfrak{a}(\Theta_j)^*$ as follows. Recall from Section [4.2](#sec:more-roots){reference-type="ref" reference="sec:more-roots"} that the positive long roots $\Sigma_\ell^+=\{\beta_1,\ldots ,\beta_r\}$ form a basis of $\mathfrak{a}^*$. For $\lambda\in\mathfrak{a}^*_\mathbb C$ recall furthermore the notation $\lambda_i= \langle\lambda,\beta_i^\vee\rangle$, where $\beta_i^\vee=2\beta_i/\langle\beta_i,\beta_i\rangle$. Following [@Shimeno (6.13)] we put[^3] $$\label{defn-D-tau-j} D_{\tau,j}=\left\{\varrho\in \mathfrak{a}(\Theta_j)^*: \varrho_1+|\ell_\tau|-1\in 2\mathbb N,\, \varrho_{i+1}-\varrho_i-m'\in 2\mathbb N\, (1\leqslant i\leqslant j-1),\, \varrho_j<0\right\},$$ where $m'$ is the common value of the multiplicity $m_\alpha$ for any $\alpha\in\Sigma_s$. This is a finite set (possibly empty, if $|\ell_\tau|$ is small enough), as any $\varrho\in D_{\tau,j}$ must in particular satisfy $1-|\ell_\tau|\leqslant \varrho_1<\varrho_2<\cdots <\varrho_j<0$ with $\varrho_i + |\ell_\tau|$ integral. 
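For concreteness, $D_{\tau,j}$ is easy to enumerate by computer. The sketch below does so in the coordinates $\varrho=(\varrho_1,\ldots,\varrho_j)$, for hypothetical parameter values $|\ell_\tau|=6$, $m'=1$, $j=2$ chosen purely for illustration, with $\mathbb N=\{0,1,2,\ldots\}$:

```python
# Enumerate D_{tau,j} from (defn-D-tau-j): rho_1 = 1 - ell + 2a (a >= 0),
# rho_{i+1} = rho_i + mp + 2b (b >= 0), and rho_j < 0.
# The parameter values below are hypothetical, chosen only for illustration.
ell, mp, j = 6, 1, 2      # ell = |ell_tau|, mp = m' (common short-root multiplicity)

D = []

def extend(prefix):
    if len(prefix) == j:
        D.append(tuple(prefix))
        return
    nxt = prefix[-1] + mp
    while nxt < 0:        # later coordinates only increase, and rho_j < 0 is required
        extend(prefix + [nxt])
        nxt += 2

rho1 = 1 - ell
while rho1 < 0:           # rho_1 < rho_j < 0, and rho_1 = 1 - ell + 2a
    extend([rho1])
    rho1 += 2
```

Here one finds $D_{\tau,2}=\{(-5,-4),(-5,-2),(-3,-2)\}$, a finite set, in line with the constraint $1-|\ell_\tau|\leqslant \varrho_1<\varrho_2<0$.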
For $\lambda\in\mathfrak{a}(\Theta_j)^*$ we let $$d_j(\lambda,\tau)=(-2\pi i)^j{\rm Res}_{\mu_j=\lambda_j}\cdots {\rm Res}_{\mu_1=\lambda_1} ({\bf c}_{\Theta_j}(\mu,\tau)^{-1}{\bf c}_{\Theta_j}(-\mu,\tau)^{-1}).$$ We define $\mu_{\text{Pl}, \tau}^{(j)}$ by $$\int_{\mathfrak a^*_\mathbb C} F(\lambda) d \mu_{\text{Pl}, \tau}^{(j)} = \sum_{\varrho\in D_{\tau,j}}d_j(\varrho,\tau)\int_{i\mathfrak{a}^*_{\Theta_j}}F(\varrho+\lambda_{\Theta_j}) |{\bf c}^{\Theta_j}(\varrho+\lambda_{\Theta_j},\tau)|^{-2} d\lambda_{\Theta_j},$$ with corresponding transform $$\mathscr{J}_{\Theta_j,\tau}F(g)=\frac{|W|}{|W_{\Theta_{r-j}}|}\sum_{\varrho\in D_{\tau,j}}d_j(\varrho,\tau)\int_{i\mathfrak{a}^*_{\Theta_j}}F(\varrho+\lambda_{\Theta_j})\varphi_{\varrho+\lambda_{\Theta_j},\tau}(g)|{\bf c}^{\Theta_j}(\varrho+\lambda_{\Theta_j},\tau)|^{-2}d\lambda_{\Theta_j}.$$ *Remark 5*. The formulae from the intermediate cases $j=1,\ldots ,r-1$ continue to make sense for $j=r$ and indeed coincide with the separate definition we gave. We have only separated out the $j=r$ case for expository clarity. Indeed, from [@Shimeno (5.14), (6.13)] one has $D_{\tau,r}=D_\tau$, and for $\varrho\in D_\tau$ one has $d(\varrho,\tau)^{-1}=\|\varphi_{\varrho,\tau}\|_2^2$, from [@Shimeno Remark 6.9]. With each of these transforms defined, we now put $\mu_{\text{Pl}, \tau}= \sum_{j = 0}^r \mu_{\text{Pl}, \tau}^{(j)}$, and $$\label{eq:J-tau-transform} \mathscr{J}_\tau=\sum_{j=0}^r \mathscr{J}_{\Theta_j,\tau}.$$ For $k\in C^\infty_c(S,\tau,\tau)$, one has the following inversion formula $$\label{spherical-inversion} k=\frac{1}{|W|}\mathscr{J}_\tau(\mathscr{H}_\tau k),$$ due to Shimeno [@Shimeno Theorem 6.7]. 
In this way, the inversion formula [\[spherical-inversion\]](#spherical-inversion){reference-type="eqref" reference="spherical-inversion"} is expressed in the form announced in [\[eq:general-inversion\]](#eq:general-inversion){reference-type="eqref" reference="eq:general-inversion"} (and it of course coincides with the case of trivial $K$-type in Section [4.5](#sec:tau-triv-inversion){reference-type="ref" reference="sec:tau-triv-inversion"}). # $\tau$-spherical test functions {#sec:test-function} We continue to assume that $G$ is simple, while invoking assumption [\[eq:assumption\]](#eq:assumption){reference-type="eqref" reference="eq:assumption"} when dealing with non-trivial (one-dimensional) $K$-types. This section constructs certain $\tau$-spherical test functions which will be of use in the next two sections. Indeed, for the proof of Proposition [Proposition 14](#Weyl-upper-bd){reference-type="ref" reference="Weyl-upper-bd"}, an important ingredient in Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"}, we shall need test functions whose Harish-Chandra transforms have pleasing positivity and concentration properties on the unitary dual. Moreover, we shall need similar test functions in the spherical case in the proofs of Lemma [Lemma 25](#upper-local-Weyl){reference-type="ref" reference="upper-local-Weyl"} and Proposition [Proposition 26](#local-Weyl){reference-type="ref" reference="local-Weyl"}. The outline of this section is as follows. Our test functions are required to be positive on a subset $\mathfrak a^*_{\tau, \text{un}}$ of $\mathfrak a^*_\mathbb C$ (the *$\tau$-unitary dual*), defined in Definition [Definition 4](#tau-unitary-def){reference-type="ref" reference="tau-unitary-def"}, and we begin in Sections [5.1](#sec:taubounded){reference-type="ref" reference="sec:taubounded"} and [5.2](#ASP){reference-type="ref" reference="ASP"} by proving some bounds for $\mathfrak a^*_{\tau, \text{un}}$.
We construct our test functions in Section [5.3](#sec:test-fn){reference-type="ref" reference="sec:test-fn"}. Section [5.5](#sec:test-fn-bds){reference-type="ref" reference="sec:test-fn-bds"} contains bounds for our test functions, and as we prove these by inverting the Harish-Chandra transform, we first establish, in Section [5.4](#sec:c-fun-estimates){reference-type="ref" reference="sec:c-fun-estimates"}, some estimates for the various Plancherel measures appearing in the inversion formulas. ## Boundedness of the $\tau$-spherical function {#sec:taubounded} The following result will be useful in controlling the $\tau$-unitary dual $\mathfrak a^*_{\tau, \text{un}}$. **Proposition 3**. *There are $a,b\in\mathbb R$, $b>0$, depending only on the root system of $G$, such that, if the $\tau$-spherical function $\varphi_{\lambda,\tau}$ is bounded then $\| \textup{Re}\lambda \| \leqslant a + b | \ell_\tau|$.* *Proof.* We begin by recalling that if $\tau$ is trivial, then a theorem of Helgason [@Helgason Ch. IV, Thm 8.1] states that $\varphi_\lambda$ is bounded if and only if $\text{Re} \lambda$ lies in the convex hull of $W \rho$. We may therefore assume that $\tau$ is nontrivial, and that $G$ is Hermitian with reduced root system. We shall deduce the proposition from an asymptotic formula for $\varphi_{\lambda,\tau}$ due to Shimeno [@Shimeno Prop 4.9], which we now recall. This formula is stated in terms of the partial spherical function $\varphi_{\lambda, \tau}^\Theta$ associated to a subset $\Theta \subset \Psi$, defined in [@Shimeno (4.22)]. Let $\Theta \subset \Psi$ be a subset of the simple roots and let $\lambda \in \mathfrak a_\mathbb C^*$ be such that $\text{Re} \langle \lambda, \alpha \rangle > 0$ for all $\alpha \in \Sigma^+ \setminus \langle \Theta \rangle$. Recall the subgroup $A_\Theta$ of $A$, whose Lie algebra is given in [\[defn:Lie-A-theta\]](#defn:Lie-A-theta){reference-type="eqref" reference="defn:Lie-A-theta"}. 
Then applying [@Shimeno Prop 4.9] to the function $f = 1_{\lambda, \tau}$, and using $\varphi_{\lambda, \tau}^\Theta(e) = 1$, which follows easily from the definition, we obtain $$\label{phi-limit} \lim_{a\, \underset{\Theta}{\rightarrow}\, \infty} a^{\rho - \lambda} \varphi_{\lambda, \tau}(a) = {\bf c}^\Theta(\lambda, \tau) \varphi^\Theta_{\lambda, \tau}(e) = {\bf c}^\Theta(\lambda, \tau).$$ Here, the $\Theta$ subscript in the limit means that $a \in A_\Theta$ and $a^\alpha \to \infty$ for all $\alpha \in \Psi \setminus \Theta$. Note that $\lambda$ is not a pole of ${\bf c}_\alpha(\lambda, \tau)$ for any $\alpha \in \Sigma^+ \setminus \langle \Theta \rangle$, by our assumption that $\text{Re} \langle \lambda, \alpha \rangle > 0$ for these $\alpha$; the right-hand side of this formula is therefore finite. We now return to the proof of the proposition. We let $\lambda \in \mathfrak a_\mathbb C^*$ be a spectral parameter for which $\varphi_{\lambda,\tau}$ is bounded, and assume without loss of generality that $\text{Re}\, \lambda \in \overline{\mathfrak a_+^*}$. We shall prove the existence of $a,b\in\mathbb R$, $b>0$, such that either $\text{Re} \langle \lambda, \alpha \rangle \leqslant a + b |\ell_\tau|$ for all simple roots $\alpha \in \Psi$, or $\text{Re}\, \lambda(H_\alpha) \leqslant \rho(H_\alpha)$ for some $\alpha \in \Psi$. Moreover, either of these implies the proposition; this is easy to see for the first condition, as $\Psi$ forms a basis for $\mathfrak a^*$. For the second condition, recall the notation for the root system $\Sigma$ introduced in Section [4.2](#sec:more-roots){reference-type="ref" reference="sec:more-roots"}, in which $\Psi$ is given by [\[eq:alpha-to-beta\]](#eq:alpha-to-beta){reference-type="eqref" reference="eq:alpha-to-beta"}.
If we write $\lambda = \sum \tfrac12 \lambda_i \beta_i$, where $\lambda_i=\langle \lambda, \beta_i^\vee\rangle$ as usual, then the condition $\text{Re}\, \lambda \in \overline{\mathfrak a_+^*}$ is equivalent to $0 \leqslant \text{Re}\, \lambda_1 \leqslant \cdots \leqslant \text{Re}\, \lambda_r$. Moreover, if we let $e_i$ be the basis for $\mathfrak a$ dual to $\beta_i$, then we have $$H_{\alpha_i} = 2 \sum_{j = r +1 - i}^r e_j, \quad 1 \leqslant i < r, \quad H_{\alpha_r} = \sum_{i = 1}^r e_i.$$ It follows that a bound for $\text{Re}\, \lambda(H_\alpha)$ for some $\alpha \in \Psi$ implies a bound on all $\text{Re}\, \lambda_i$ as required. We may assume that $\text{Re}\, \lambda(H_\alpha) > \rho(H_\alpha)$ for all $\alpha \in \Psi$, and wish to find $a,b$ such that $\text{Re} \langle \lambda, \alpha \rangle \leqslant a + b |\ell_\tau|$ for all $\alpha \in \Psi$. Let $\alpha\in \Psi$ and $\Theta = \Psi \setminus \{ \alpha \}$. Then $\text{Re} \langle \lambda, \alpha \rangle \leqslant \text{Re} \langle \lambda, \beta\rangle$ for any $\beta\in \Sigma^+ \setminus \langle \Theta \rangle$, since $\beta - \alpha$ is a non-negative linear combination of simple roots. It therefore suffices to find $a, b$, as well as some $\beta\in \Sigma^+ \setminus \langle \Theta \rangle$, for which $\text{Re} \langle \lambda, \beta\rangle\leqslant a+b|\ell_\tau|$. Recall the assumption that $\text{Re}\,\lambda\in\overline{\mathfrak a_+^*}$. If $\text{Re} \langle \lambda, \alpha \rangle = 0$, then we are done. Otherwise $\text{Re} \langle \lambda,\alpha\rangle > 0$, and hence $\text{Re} \langle \lambda, \beta\rangle > 0$ for all $\beta\in \Sigma^+ \setminus \langle \Theta \rangle$. We may therefore apply [\[phi-limit\]](#phi-limit){reference-type="eqref" reference="phi-limit"} to this choice of $\Theta$.
Because $A_\Theta$ is one-dimensional and spanned by $H_{\alpha}$, our assumption that $\text{Re}\, \lambda( H_{\alpha}) > \rho(H_{\alpha})$ means that $a^{\rho - \lambda} \to 0$ as $a \to \infty$ in $A_\Theta$. Since $\varphi_{\lambda, \tau}$ is bounded, [\[phi-limit\]](#phi-limit){reference-type="eqref" reference="phi-limit"} implies that ${\bf c}^\Theta(\lambda, \tau) = 0$, and hence that ${\bf c}_{\beta}(\lambda, \tau) = 0$ for some $\beta\in \Sigma^+ \setminus \langle \Theta \rangle$. If $\beta\in \Sigma^+_s$ is a short positive root, the formula for ${\bf c}_{\beta}(\lambda, \tau)$ from Section [4.4](#sec:Gamma-comparison){reference-type="ref" reference="sec:Gamma-comparison"} shows that ${\bf c}_{\beta}(\lambda, \tau)$ can only be zero when $\Gamma( \tfrac12 m_{\beta} + \tfrac12 \langle \lambda, \beta^\vee \rangle )$ has a pole, but this cannot happen because $\text{Re} \langle \lambda, \beta\rangle > 0$. If $\beta\in \Sigma^+_\ell$ is a long positive root, we likewise have that ${\bf c}_{\beta}(\lambda, \tau)$ can only be zero when one of $$\Gamma( \tfrac{1}{2} + \tfrac{1}{2} \langle \lambda, \beta^\vee \rangle + \tfrac{1}{2} \ell_\tau ) \quad \text{and} \quad \Gamma( \tfrac{1}{2} + \tfrac{1}{2} \langle \lambda, \beta^\vee \rangle - \tfrac{1}{2} \ell_\tau )$$ has a pole. If we assume without loss of generality that $\ell_\tau \geqslant 0$, then the first Gamma factor again cannot have a pole by positivity. For the second factor to have a pole, we must have $$\tfrac{1}{2} + \tfrac{1}{2} \text{Re} \langle \lambda, \beta^\vee \rangle - \tfrac{1}{2} \ell_\tau \leqslant 0.$$ This implies that $\text{Re} \langle \lambda, \beta^\vee \rangle \leqslant \ell_\tau - 1$, as required. ◻ ## The $\tau$-unitary dual {#ASP} We now recall the definition of the $\tau$-unitary dual. **Definition 4**. *Let $\tau$ be an abelian $K$-type.
The set $$\mathfrak{a}_{\tau, {\rm un}}^* =\{\lambda\in\mathfrak{a}^*_\mathbb C: \varphi_{\lambda, \tau} \;\textrm{is positive definite} \}$$ is the unitary spectrum of $\mathbb{D}(\tau)$. For $\tau$ trivial we abbreviate this to $\mathfrak{a}_{\rm un}^*$.* **Lemma 5**. *There are constants $a,b>0$, depending only on $G$, such that $$\mathfrak{a}_{\tau,\rm un}^*\subset \{\lambda\in\mathfrak a^*_\mathbb C: \| \textup{Re} \lambda \| \leqslant a + b | \ell_\tau|\}.$$* *Proof.* This follows from Proposition [Proposition 3](#prop:criterion-4-bounded){reference-type="ref" reference="prop:criterion-4-bounded"} since positive definite functions are bounded. ◻ **Lemma 6**. *If $\lambda \in\mathfrak{a}_{\tau,\rm{un}}^*$ then $W\lambda = -W\bar\lambda$.* *Proof.* We note that for $\lambda\in\mathfrak{a}_{\tau, {\rm un}}^*$ we have $$\label{eq:sph-unitary-lambda} \varphi_{\lambda,\tau}(g)=\varphi_{\bar\lambda,\tau^{-1}}(g^{-1}) = \varphi_{-\bar\lambda, \tau}(g).$$ Indeed, we have $\varphi_{\lambda, \tau}(g) = \overline{\varphi_{\lambda, \tau}(g^{-1})}$ because $\varphi_{\lambda, \tau}$ is positive definite, and the formula [\[defn-sph-fn\]](#defn-sph-fn){reference-type="eqref" reference="defn-sph-fn"} shows that $\overline{\varphi_{\lambda, \tau}(g^{-1})} = \varphi_{\bar\lambda,\tau^{-1}}(g^{-1})$, which gives the first formula. The second formula is Proposition [Proposition 2](#prop:sph-fn-2-vars){reference-type="ref" reference="prop:sph-fn-2-vars"}. Combining [\[eq:sph-unitary-lambda\]](#eq:sph-unitary-lambda){reference-type="eqref" reference="eq:sph-unitary-lambda"} with [\[eq:equality-sph-fn\]](#eq:equality-sph-fn){reference-type="eqref" reference="eq:equality-sph-fn"} yields the lemma. ◻ ## Existence of $\tau$-spherical test functions {#sec:test-fn} Using the preliminaries of the previous sections, we now arrive at the existence of $\tau$-spherical approximate spectral projectors. **Proposition 7**. *There are constants $R_0,c>0$ such that the following holds. 
Let $\nu\in i\mathfrak{a}^*$ and $\tau\in\widehat{K}$ be one-dimensional, subject to the assumption [\[eq:assumption\]](#eq:assumption){reference-type="eqref" reference="eq:assumption"}. For all $0<R\leqslant R_0$ there is a function $k_\nu\in C^\infty_R(S,\tau,\tau)$ whose Harish-Chandra transform $\widehat{k}_\nu(\,\cdot\,;\tau)\in\mathcal{PW}(\mathfrak{a}_\mathbb C^*)^W_R$ satisfies* 1. *[\[4\]]{#4 label="4"} $\widehat{k}_\nu(\lambda;\tau)\ll_{A,R} \exp(R\|{\rm Re}\,\lambda\|)\displaystyle\sum_{w\in W}\big(1+\|w\lambda-\nu\|\big)^{-A}$ for all $\lambda\in \mathfrak{a}_\mathbb C^*$;* 2. *[\[1\]]{#1 label="1"} $\widehat{k}_\nu(\lambda;\tau)\geqslant 0$ for all $\lambda\in\mathfrak{a}_{\tau, {\rm un}}^*$;* 3. *[\[3\]]{#3 label="3"} $\widehat{k}_\nu(\lambda;\tau) \geqslant c > 0$ for all $\lambda\in\mathfrak{a}_{\tau, {\rm un}}^*$ with $\|{\rm Im}\,\lambda-\nu\|\leqslant 1$.* *Here, we are writing $\lambda={\rm Re}\,\lambda+{\rm Im}\,\lambda\in\mathfrak{a}^*+i\mathfrak{a}^*$.* *Proof.* We begin by constructing the function $k_\nu$, using the commutative diagram from [\[eq:commutative-diagram\]](#eq:commutative-diagram){reference-type="eqref" reference="eq:commutative-diagram"}. We then verify each property in turn. We follow closely [@BM §4]. The proof uses the fact that the $\tau$-spherical Harish-Chandra transform $\mathscr{H}_\tau$ is an algebra isomorphism (preserving support conditions), but does not use the inversion formula [\[eq:general-inversion\]](#eq:general-inversion){reference-type="eqref" reference="eq:general-inversion"}. Let $R>0$. Let $g_0\in C^\infty_c(\mathfrak{a})$ be non-negative, even, supported in $B_{\mathfrak{a}}(0,R/4)$, and satisfy $\int_{\mathfrak{a}} g_0 = 1$. Let $g = g_0 * g_0$. Then $g \in C^\infty_0(\mathfrak a)$ is non-negative, even, supported in $B_{\mathfrak{a}}(0,R/2)$, and satisfies $\int_\mathfrak{a} g = 1$. 
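The construction of $g_0$, $g=g_0*g_0$, and its Fourier transform can be illustrated numerically in a one-dimensional model $\mathfrak a=\mathbb R$ with $R=1$ (a sketch only; the grid size and bump profile are ad hoc choices). Since $g_0$ is even and real, $\mathscr Fg=(\mathscr Fg_0)^2$ is automatically non-negative on $i\mathfrak a^*$ and takes the value $\int_{\mathfrak a} g=1$ at the origin:

```python
import math

N = 801                                    # grid for [-1, 1]; R = 1 in this model
xs = [-1.0 + 2.0 * k / (N - 1) for k in range(N)]
dx = xs[1] - xs[0]

def bump(x, r=0.25):
    # smooth, even, non-negative, supported in B(0, R/4)
    return math.exp(-1.0 / (r * r - x * x)) if abs(x) < r else 0.0

g0 = [bump(x) for x in xs]
s = sum(g0) * dx
g0 = [v / s for v in g0]                   # normalize: int g0 = 1

# g = g0 * g0, supported in B(0, R/2); discrete convolution on the grid
full = [0.0] * (2 * N - 1)
for i, a in enumerate(g0):
    if a:
        for k, b in enumerate(g0):
            full[i + k] += a * b * dx
g = full[(N - 1) // 2 : (N - 1) // 2 + N]  # restrict back to [-1, 1]

# h = Fourier transform of g, sampled on the imaginary axis i a^*
ts = [0.5 * m for m in range(81)]
hs = [sum(gv * math.cos(t * xv) for gv, xv in zip(g, xs)) * dx for t in ts]
```

One checks that $h(0)=1$ and $\min_t h(t)\geqslant 0$ up to rounding, matching the positivity and normalization used in the proof.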
The Fourier transform $h=\mathscr{F}g\in\mathcal{P}(\mathfrak{a}_\mathbb C^*)_{R/2}$ is even and non-negative on $i\mathfrak{a}^*$, satisfies $h(\overline{\lambda})=\overline{h(\lambda)}$ on $\mathfrak a_\mathbb C^*$, and is normalized such that $h(0)=1$. We center $h$ at the fixed tempered parameter $\nu\in i\mathfrak a^*$, and force $W$-invariance, by putting $$\label{defn-h-nu-0} h_\nu^0(\lambda)=\sum_{w\in W}h(w\lambda-\nu).$$ Then $h_\nu^0\in\mathcal{P}(\mathfrak{a}_\mathbb C^*)_{R/2}^W$; since $\nu\in i\mathfrak a^*$, we have $h_\nu^0(-\overline{\lambda})=\overline{h_\nu^0(\lambda)}$. We let $k_\nu^0(\cdot, \tau)=\mathscr{J}_\tau (h_\nu^0)\in C^\infty_{R/2}(S,\tau,\tau)$ be its inverse $\tau$-spherical transform, defined in [\[eq:def-J\]](#eq:def-J){reference-type="eqref" reference="eq:def-J"} for $\tau$ trivial and $G$ arbitrary, and in [\[eq:J-tau-transform\]](#eq:J-tau-transform){reference-type="eqref" reference="eq:J-tau-transform"} for $\tau$ non-trivial and $G$ Hermitian with reduced root system. Finally we put $$k_\nu(\cdot, \tau)=k_\nu^0(\cdot, \tau)\ast k_\nu^0(\cdot, \tau)\in C^\infty_R(S,\tau,\tau).$$ We deduce that $\widehat{k}_\nu(\lambda;\tau)=h_\nu^0(\lambda)^2\in \mathcal{P}(\mathfrak{a}_\mathbb C^*)_R^W$. Since $h\in \mathcal{P}(\mathfrak{a}_\mathbb C^*)_{R/2}$ it satisfies the Paley--Wiener estimate $$h(\lambda)\ll_{A,R} (1+\|\lambda\|)^{-A}\exp(R\|{\rm Re}\, \lambda\|/2).$$ Thus $$h^0_\nu(\lambda)\ll_{A,R} \exp(R\|{\rm Re}\, \lambda\|/2)\sum_{w\in W}(1+\|w\lambda-\nu\|)^{-A}.$$ Squaring this, and using $(1+\|w_1\lambda-\nu\|)(1+\|w_2\lambda-\nu\|)\geqslant (1+\|w\lambda-\nu\|)^2$ for the off-diagonal terms, where $w$ realizes $\min\{\|w_1\lambda-\nu\|,\|w_2\lambda-\nu\|\}$, proves [\[4\]](#4){reference-type="eqref" reference="4"}. 
We deduce from the $W$-invariance of $h^0_\nu$ and Lemma [Lemma 6](#lem:unitary-hermitian){reference-type="ref" reference="lem:unitary-hermitian"} that for $\lambda \in \mathfrak a^*_{\tau, {\rm un}}$, we have $h_\nu^0(\lambda)=h_\nu^0(-\overline{\lambda})=\overline{h_\nu^0(\lambda)}$. Hence $h_\nu^0$ is real-valued on $\mathfrak a^*_{\tau, {\rm un}}$. Since $\widehat{k}_\nu(\lambda;\tau)=h_\nu^0(\lambda)^2$, this establishes Property [\[1\]](#1){reference-type="eqref" reference="1"}. For Property [\[3\]](#3){reference-type="eqref" reference="3"}, let $a, b>0$ be as in Proposition [Proposition 3](#prop:criterion-4-bounded){reference-type="ref" reference="prop:criterion-4-bounded"}. We shall show that for $\lambda\in\mathfrak a^*_\mathbb C$ with $\|{\rm Re}\,\lambda\|\leqslant a+b|\ell_\tau|$ and $\|{\rm Im}\,\lambda-\nu\|\leqslant 1$ we have ${\rm Re}\, h_\nu^0(\lambda)\geqslant 1/4$. Since unitary parameters satisfy the first bound by Lemma [Lemma 5](#lemma:unitary){reference-type="ref" reference="lemma:unitary"}, and $h_\nu^0$ is real-valued on $\mathfrak a^*_{\tau, {\rm un}}$, we have $\widehat{k}_\nu(\lambda;\tau)=h_\nu^0(\lambda)^2 = ({\rm Re}\, h_\nu^0(\lambda))^2\geqslant 1/16$ for all $\lambda\in\mathfrak a^*_{\tau, {\rm un}}$ with $\|{\rm Im}\,\lambda-\nu\|\leqslant 1$, as desired. We begin by remarking that, by the normalization $h(0)=1$, there is a small enough $R_1=R_1(G,\tau)>0$ such that, if $0<R\leqslant R_1$, then ${\rm Re}\, h(\lambda) \geqslant 1/2$, say, for all $\lambda\in\mathfrak{a}_\mathbb C^*$ with $\| {\rm Im}\, \lambda \| \leqslant 1$ and $\| {\rm Re}\, \lambda \| \leqslant a+b|\ell_\tau|$.
Taking real parts in the definition [\[defn-h-nu-0\]](#defn-h-nu-0){reference-type="eqref" reference="defn-h-nu-0"}, we deduce that for all $\lambda\in\mathfrak{a}_\mathbb C^*$ with $\|{\rm Im}\,\lambda-\nu\|\leqslant 1$ and $\|{\rm Re}\, \lambda\|\leqslant a+b|\ell_\tau|$, we have $${\rm Re}\,h_\nu^0(\lambda)= \sum_{w \in W}{\rm Re}\, h(w\lambda - \nu) \geqslant \frac12 + (|W|-1)\min_{w\in W\setminus\{1\}}{\rm Re}\, h(w\lambda - \nu).$$ We claim that there is $C>1$ such that, for $R$ small enough, ${\rm Re}\, h(\lambda) \geqslant -CR$ for all $\lambda\in\mathfrak{a}_\mathbb C^*$ with $\|{\rm Re}\,\lambda \| \leqslant a+b|\ell_\tau|$. Indeed, for any $\lambda=\lambda_\Re+i\lambda_\Im\in\mathfrak{a}_\mathbb C^*$, we have $$\begin{aligned} h(\lambda) & = \int_\mathfrak ag(H) e^{-\lambda(H)} dH \\ & = \int_\mathfrak ag(H) [ e^{- i \lambda_\Im(H)} + ( e^{-\lambda(H)} - e^{-i\lambda_\Im(H)} ) ] dH \\ & = h(i\lambda_\Im) + \int_\mathfrak ag(H) e^{-i\lambda_\Im(H)} ( e^{-\lambda_\Re (H)} - 1 ) dH.\end{aligned}$$ Now assume $0<R\leqslant R_1<(a+b|\ell_\tau|)^{-1}$. From $\| \lambda_\Re\| \leqslant a+b|\ell_\tau|$ we deduce the existence of a constant $C>1$, depending only on $G$ and $\tau$, such that $| e^{-\lambda_\Re(H)} - 1 | \leqslant C R$ for all $H \in \text{supp}(g)\subset B_{\mathfrak{a}}(0,R/2)$. Thus, recalling that $\int_\mathfrak ag=1$, we get ${\rm Re}\,h(\lambda) \geqslant h(i\lambda_\Im) - CR$. Since $h$ is positive on $i\mathfrak{a}^*$ we deduce ${\rm Re}\,h(\lambda) \geqslant -CR$, as required. Now, since $\nu\in i\mathfrak a^*$ and $\|\cdot \|$ is $W$-invariant, we have $\|{\rm Re}\, (w\lambda-\nu)\|=\|{\rm Re}\, w\lambda\|=\|{\rm Re}\,\lambda\|\leqslant a+b|\ell_\tau|$ for all $w\in W$. It therefore follows from the above claim that if $0<R\leqslant R_0$, where $R_0= \min\{R_1, \frac{1}{4(|W|-1)C}\}$, then ${\rm Re}\, h_\nu^0(\lambda)\geqslant 1/4$, finishing the proof. 
◻ ## Estimates on ${\bf c}$-functions {#sec:c-fun-estimates} We now establish some useful estimates for the $\tau$-spherical Plancherel measure. ### The spherical Plancherel majorant We begin by estimating the spherical density function $|{\bf c}(\lambda)|^{-2}$ on the support of $\mu_{\rm Pl}$, namely the tempered subspace $i\mathfrak{a}^*$ of $\mathfrak{a}_\mathbb C^*$. The results in this paragraph are contained in [@DKV], but we review them for completeness. For $\alpha\in\Sigma$ let $d_\alpha=m_\alpha+m_{2\alpha}$. Let $\Theta\subset \Psi$. Recall the notation $\Sigma_{\rm red}^+$ from §[4.2](#sec:more-roots){reference-type="ref" reference="sec:more-roots"} and $\langle\Theta\rangle_{\rm red}$ from §[4.3](#sec:c-function){reference-type="ref" reference="sec:c-function"}. For $\lambda\in i\mathfrak{a}^*$ let $$\tilde\beta_S^\Theta(\lambda)=\prod_{\alpha\in\Sigma_{\rm red}^+\setminus\langle\Theta\rangle_{\rm red}}(1+|\langle\lambda,\alpha^\vee\rangle|)^{d_\alpha}.$$ When $S=\mathbb{H}^{n,r}$ and $\mathcal{H}^m$ the following bounds recover [\[first-beta-eq\]](#first-beta-eq){reference-type="eqref" reference="first-beta-eq"} and [\[second-beta-eq\]](#second-beta-eq){reference-type="eqref" reference="second-beta-eq"}, respectively. **Lemma 8**. *${\, }$* 1. *[\[c-bd1\]]{#c-bd1 label="c-bd1"} For $\lambda\in i\mathfrak{a}^*$ we have $|{\bf c}^\Theta(\lambda)|^{-2}\ll \tilde\beta_S^\Theta(\lambda)$.* 2. *[\[c-bd2\]]{#c-bd2 label="c-bd2"} For any $T> 0$, if $\lambda\in i\mathfrak{a}^*$ is $T$-regular, then we have $|{\bf c}^\Theta(\lambda)|^{-2}\asymp_T\tilde\beta_S^\Theta(\lambda)$.* 3. *[\[c-bd3\]]{#c-bd3 label="c-bd3"} For any $c > 0$ and all $\lambda \in i \mathfrak a^*$, we have $\int_{ \| \nu - \lambda \| < c } |{\bf c}^\Theta(\nu)|^{-2} d\nu \asymp_c \tilde\beta_S^\Theta(\lambda)$.* *Proof.* The first estimate follows by directly applying Stirling's formula to [\[c-tilde-spherical\]](#c-tilde-spherical){reference-type="eqref" reference="c-tilde-spherical"}. 
The second follows from [@DKV (3.44a)]. For the third, the upper bound follows from [\[c-bd1\]](#c-bd1){reference-type="eqref" reference="c-bd1"} and Lemma [Lemma 9](#c-bound){reference-type="ref" reference="c-bound"} below. For the lower bound, we choose $T>0$ small enough that, for any $\lambda$, at least half the points in the ball $\| \nu - \lambda \| < c$ are $T$-regular. The bound then follows from [\[c-bd2\]](#c-bd2){reference-type="eqref" reference="c-bd2"} and Lemma [Lemma 9](#c-bound){reference-type="ref" reference="c-bound"}. ◻ We now give some estimates on the spherical Plancherel majorant $\tilde\beta_S(\lambda)$ (and the related functions $\tilde\beta_S^\Theta(\lambda)$) on the entire space $\mathfrak{a}_\mathbb C^*$. **Lemma 9**. *For all $\lambda,\mu\in\mathfrak{a}_\mathbb C^*$ we have $$\tilde\beta_S^\Theta(\lambda+\nu)\ll (1+\|\lambda\|)^{d^\Theta(\Sigma)} \tilde\beta_S^\Theta(\nu),$$ where $d^\Theta(\Sigma)=\sum_{\alpha\in \Sigma^+_{\rm red}\setminus\langle\Theta\rangle_{\rm red}}d_\alpha$. In particular, we have $\tilde\beta_S^\Theta(\lambda)\ll (1+\|\lambda\|)^{d^\Theta(\Sigma)}$.* *Proof.* Let $\alpha\in\Sigma$. Then $$1+|\langle \lambda +\nu,\alpha^\vee\rangle | \leqslant 1+ |\langle \lambda ,\alpha^\vee\rangle | + |\langle\nu,\alpha^\vee\rangle | \leqslant (1+ \|\lambda\|) +|\langle\nu,\alpha^\vee\rangle | \leqslant (1+ \|\lambda\|)(1+|\langle\nu,\alpha^\vee\rangle |),$$ which implies the lemma. ◻ Clearly $\tilde\beta_S^\Theta(\lambda)\leqslant\tilde\beta_S(\lambda)$ for all $\lambda\in\mathfrak{a}_\mathbb C^*$ and all $\Theta\subset\Psi$. ### In which $G$ is Hermitian We now wish to bound the $\tau$-spherical Plancherel measure $\mu_{ \text{Pl}, \tau}$ on its support, when $G$ is Hermitian with reduced root system and $\tau$ is a character of $K$. 
Examining the formulas for each component $\mu_{ \text{Pl}, \tau}^{(j)}$ of $\mu_{ \text{Pl}, \tau}$ given in Section [4.6](#sec:tau-sph-inversion){reference-type="ref" reference="sec:tau-sph-inversion"}, we see that this is equivalent to bounding the growth of $| {\bf c}^{ \Theta_j}(\lambda; \tau) |^{-2}$ for $0 \leqslant j \leqslant r$ and $\lambda \in D_{\tau,j} + i \mathfrak a^*_{\Theta_j}$. **Lemma 10**. *Let $0 \leqslant j \leqslant r$. For $\lambda \in D_{\tau,j} + i \mathfrak a^*_{\Theta_j}$, ${\bf c}^{ \Theta_j}(\lambda; \tau)^{-1}$ is well-defined and $$| {\bf c}^{ \Theta_j}(\lambda; \tau) |^{-2} \ll_\tau \widetilde{\beta}^{\Theta_j}_S(\lambda).$$* *Proof.* We recall the definition [\[defn-D-tau-j\]](#defn-D-tau-j){reference-type="eqref" reference="defn-D-tau-j"} of $D_{\tau,j}$. Because $D_{\tau, j}$ is finite, it suffices to prove the lemma for $\lambda$ of the form $\varrho + \lambda_{\Theta_j}$ with $\varrho \in D_{\tau,j}$ fixed and $\lambda_{\Theta_j} \in i \mathfrak a^*_{\Theta_j}$. Note that $\Sigma^+\setminus\langle\Theta_j\rangle=\{\beta_{j+1},\ldots,\beta_r\}\cup\{(\beta_p\pm\beta_q)/2: p>j, q<p\}$. Let $\Xi$ denote the subset of $\Sigma^+\setminus\langle\Theta_j\rangle$ consisting of roots of the form $\alpha = \beta_p$, $j < p$, or $\alpha=(\beta_p+\beta_q)/2$, $q\leqslant j<p$, and $\Xi^c$ for its complement in $\Sigma^+\setminus\langle\Theta_j\rangle$. We further subdivide $\Xi$ into the sets $\Xi_p$ for $j < p$, where $\Xi_p$ contains the roots $\beta_p$ and $(\beta_p+\beta_q)/2$, $q \leqslant j$. Define ${\bf c}_p(\lambda, \tau) = \prod_{\alpha \in \Xi_p} {\bf c}_\alpha(\lambda,\tau)$. 
By the product formula for ${\bf c}^{ \Theta_j}(\lambda; \tau)$, it suffices to show that ${\bf c}_p(\lambda,\tau)$ is regular on $D_{\tau,j} + i \mathfrak a^*_{\Theta_j}$ for all $p$ and satisfies $$\label{cpbd} {\bf c}_p(\lambda,\tau)\ll \prod_{\alpha\in\Xi_p}(1 + | \langle \lambda, \alpha^\vee \rangle |)^{m_\alpha},$$ and that, for each $\alpha\in\Xi^c$, the function ${\bf c}_\alpha(\lambda,\tau)$ is regular on $D_{\tau,j} + i \mathfrak a^*_{\Theta_j}$ and satisfies $$\label{calphabd} {\bf c}_\alpha(\lambda,\tau)\ll (1 + | \langle \lambda, \alpha^\vee \rangle |)^{m_\alpha}.$$ It will be enough to establish the regularity, after which an application of Stirling's formula will yield [\[cpbd\]](#cpbd){reference-type="eqref" reference="cpbd"} and [\[calphabd\]](#calphabd){reference-type="eqref" reference="calphabd"}. We begin with the case of $\alpha\in\Xi^c$. Such $\alpha$ all lie in $\Sigma^+_s$, and either have the form $(\beta_p - \beta_q)/2$ with $p>j$ and $q<p$, or $(\beta_p + \beta_q)/2$ with $j < q < p$. In this case [\[c-tilde-spherical\]](#c-tilde-spherical){reference-type="eqref" reference="c-tilde-spherical"} (or, equivalently, [\[eq:Shimeno-c-alpha\]](#eq:Shimeno-c-alpha){reference-type="eqref" reference="eq:Shimeno-c-alpha"}) shows that $${\bf c}_\alpha( \lambda; \tau)={\bf c}_\alpha( \lambda)= c \frac{ \Gamma( \tfrac{1}{2} \langle \lambda, \alpha^\vee \rangle )}{ \Gamma( \tfrac{1}{2} m_\alpha + \tfrac{1}{2} \langle \lambda, \alpha^\vee \rangle ) }$$ for some constant $c \neq 0$. For the regularity of ${\bf c}_\alpha( \lambda; \tau)^{-1}$ it suffices to show that (the real part of) the argument $\frac12 m_\alpha+\text{Re} \langle \lambda, \alpha^\vee \rangle = \frac12 m_\alpha+ \langle \varrho, \alpha^\vee \rangle$ is positive. 
In the case $\alpha = (\beta_p - \beta_q)/2$, a calculation gives $$\label{eq:rho-alpha-check} \langle \varrho, \alpha^\vee \rangle = \frac{ 2 \langle \varrho, \alpha \rangle}{ \langle \alpha, \alpha \rangle} = \varrho_p - \varrho_q = -\varrho_q \geqslant 0,$$ while if $\alpha = (\beta_p + \beta_q)/2$ we likewise have $\langle \varrho, \alpha^\vee \rangle = \varrho_p + \varrho_q = 0$. In either case we have the positivity of $\frac12 m_\alpha+ \langle \varrho, \alpha^\vee \rangle$, as required. We now consider ${\bf c}_p(\lambda,\tau)$. As in [@Shimeno p. 380], the proof of the regularity of ${\bf c}_p( \lambda; \tau)^{-1}$ will be divided according to the parity the common value $m'$ of the multiplicities $m_\alpha$ of the short roots $\alpha$. First we consider the case when $m'$ is even. Under this condition, for any $\alpha\in\Sigma_s^+$ the factor ${\bf c}_\alpha(\lambda,\tau)^{-1}$ is given, up to a non-zero constant, by $\langle \lambda,\alpha^\vee\rangle(\langle \lambda,\alpha^\vee\rangle+2)\cdots (\langle \lambda,\alpha^\vee\rangle+m'-2)$, which is holomorphic. To establish the holomorphy of ${\bf c}_p(\lambda,\tau)^{-1}$ it therefore suffices to establish it for the factor ${\bf c}_{\beta_p}(\lambda,\tau)^{-1}$ corresponding to the unique long root in $\Xi_p$. Up to holomorphic factors, ${\bf c}_{\beta_p}(\lambda,\tau)^{-1}$ is equal to $$\Gamma( \tfrac{1}{2} + \tfrac{1}{2} \lambda_p + \tfrac{1}{2} \ell_\tau ) \Gamma( \tfrac{1}{2} + \tfrac{1}{2} \lambda_p - \tfrac{1}{2} \ell_\tau ) \Gamma( \lambda_p )^{-1}.$$ Because $\lambda_p$ is purely imaginary, the only place at which this expression can have a pole is at $\lambda_p = 0$. Moreover, if we assume without loss of generality that $\ell_\tau \geqslant 0$, the first factor cannot have a pole, and any potential pole of the second factor is canceled by the zero of $\Gamma( \lambda_p )^{-1}$. 
It follows that ${\bf c}_{\beta_p}(\lambda,\tau)^{-1}$, and hence ${\bf c}_p(\lambda,\tau)^{-1}$, is holomorphic, as required. We now treat the case of $m'$ odd. We first consider the expression $\prod_{q = 1}^j {\bf c}_{(\beta_p + \beta_q)/2}(\lambda, \tau)^{-1}$, which appears as a factor of ${\bf c}_p(\lambda,\tau)^{-1}$. Up to a non-zero constant, this is equal to $$\prod_{q=1}^j \frac{ \Gamma( (m'+ \varrho_q + \lambda_p)/2)}{ \Gamma((\varrho_q + \lambda_p)/2)} = \Gamma((m' +\varrho_1 +\lambda_p)/2) \prod_{q=1}^{j-1} \frac{ \Gamma((m' + \varrho_{q+1} + \lambda_p)/2)}{ \Gamma( (\varrho_q +\lambda_p)/2)} \frac{1}{ \Gamma( (\varrho_j + \lambda_p)/2)},$$ where we have reorganized the product by pairing the denominator of one term with the numerator of the next. We observe that each term in the product from 1 to $j-1$ is holomorphic; this follows from the fact that the difference in the $\Gamma$-arguments, namely $(m' + \varrho_{q+1} - \varrho_q)/2$, is a positive integer by the definition of $D_{\tau,j}$ and the condition that $m'$ is odd. As the term $\Gamma( (\varrho_j + \lambda_p)/2)^{-1}$ is also holomorphic, to prove the holomorphy of ${\bf c}_p(\lambda,\tau)^{-1}$ it suffices to establish it for $${\bf c}_{\beta_p}(\lambda,\tau)^{-1} \Gamma((m' +\varrho_1 +\lambda_p)/2),$$ which is equal (up to holomorphic factors) to $$\Gamma( \tfrac{1}{2} + \tfrac{1}{2} \lambda_p + \tfrac{1}{2} \ell_\tau ) \Gamma( \tfrac{1}{2} + \tfrac{1}{2} \lambda_p - \tfrac{1}{2} \ell_\tau ) \Gamma(\tfrac{1}{2} m' + \tfrac{1}{2} \varrho_1 + \tfrac{1}{2} \lambda_p ) \Gamma( \lambda_p )^{-1}.$$ As in the case of $m'$ even, we only need to establish holomorphy at the point $\lambda_p = 0$, and if we assume that $\ell_\tau \geqslant 0$, the first factor does not have a pole because the argument is positive there. 
The factor $\Gamma( \lambda_p )^{-1}$ has a zero, so it suffices to show that at most one of the factors $\Gamma( \tfrac{1}{2} + \tfrac{1}{2} \lambda_p - \tfrac{1}{2} \ell_\tau )$ and $\Gamma(\tfrac{1}{2} m' + \tfrac{1}{2} \varrho_1 + \tfrac{1}{2} \lambda_p )$ has a pole. If they both had poles, then both $\tfrac{1}{2} - \tfrac{1}{2} \ell_\tau$ and $\frac12 m' + \frac12 \varrho_1$ would be nonpositive integers. This would imply that both $\ell_\tau$ and $\varrho_1$ were odd integers, but this is not possible in light of the definition of $D_{\tau,j}$. This establishes the holomorphy of ${\bf c}_p(\lambda,\tau)^{-1}$, and completes the proof. ◻ ## Bounds for $k_\nu$ {#sec:test-fn-bds} Let $k_\nu$ be the test function given by Proposition [Proposition 7](#test-fn){reference-type="ref" reference="test-fn"}. In this section, we prove two bounds for $k_\nu$ that will later be used to control the geometric side of the pre-trace formula. The first is a standard bound in the spherical case, which establishes decay of $k_\nu$ away from the base point. We derive it from the bounds of [@BP Theorem 2] and [@MT Theorem 8.2] for $\varphi_\mu$. **Lemma 11**. *Let $\nu\in i\mathfrak{a}^*$, and let $\tau=\tau_{\rm triv}$ be trivial. Let $0<R<R_0$ and $k_\nu\in C^\infty_R(S,\tau_{\rm triv},\tau_{\rm triv})$ be as in Proposition [Proposition 7](#test-fn){reference-type="ref" reference="test-fn"}. For $H \in \mathfrak a$, we have $$k_\nu(\exp H) \ll (1+\|\nu\| \|H\|)^{-1/2} \widetilde{\beta}_S(\nu).$$* *Proof.* Let $\Omega \subset \mathfrak a$ be a compact set such that $\text{supp}(k_\nu) \subset K e^\Omega K$. We may assume without loss of generality that $H \in \Omega$. 
We express $k_\nu(\exp H)$ using the spherical Plancherel inversion [\[eq:sph-planch-inv\]](#eq:sph-planch-inv){reference-type="eqref" reference="eq:sph-planch-inv"}, which gives $$k_\nu(\exp H) = \frac{1}{|W|} \int_{ i \mathfrak a^*} \widehat{k}_\nu(\lambda)\varphi_{\lambda}( \exp H)|{\bf c}(\lambda)|^{-2}d\lambda = \int_{ i {\mathfrak a_+^*}} \widehat{k}_\nu(\lambda)\varphi_{\lambda}( \exp H)|{\bf c}(\lambda)|^{-2}d\lambda,$$ where we have used the $W$-invariance of the integrand. We may insert the bound $$\varphi_{\lambda}(\exp H)\ll_\Omega (1+\|\lambda\| \|H\|)^{-1/2}$$ for $H \in \Omega$ from [@BP Theorem 2] and [@MT Theorem 8.2], and the bound $|{\bf c}(\lambda)|^{-2} \ll \widetilde{\beta}_S(\lambda)$ from Lemma [Lemma 8](#lemma:c-bd){reference-type="ref" reference="lemma:c-bd"} [\[c-bd1\]](#c-bd1){reference-type="eqref" reference="c-bd1"}, to obtain $$k_\nu(\exp H) \ll \int_{ i {\mathfrak a_+^*}} \widehat{k}_\nu(\lambda)(1+\|\lambda\| \|H\|)^{-1/2} \widetilde{\beta}_S(\lambda) d\lambda.$$ We then apply the bound for $\widehat{k}_\nu(\lambda)$ from Property [\[4\]](#4){reference-type="eqref" reference="4"} of Proposition [Proposition 7](#test-fn){reference-type="ref" reference="test-fn"} and unfold the sum over $W$ to obtain $$\begin{aligned} k_\nu(\exp H) & \ll_{A,R} \int_{ i \mathfrak a^*} (1 + \| \lambda - \nu \|)^{-A}(1+\|\lambda\| \|H\|)^{-1/2} \widetilde{\beta}_S(\lambda) d\lambda \\ & = \int_{ i \mathfrak a^*} (1 + \| \lambda \|)^{-A}(1+\|\lambda + \nu\| \|H\|)^{-1/2} \widetilde{\beta}_S(\lambda + \nu) d\lambda.\end{aligned}$$ Applying Lemma [Lemma 9](#c-bound){reference-type="ref" reference="c-bound"} and the bound $$(1+\|\lambda + \nu\| \|H\|)^{-1/2} \ll (1+\|\nu\| \|H\|)^{-1/2} (1 + \| \lambda \| )^{1/2},$$ while taking $A$ sufficiently large to ensure the convergence of the integral, completes the proof. ◻ We next prove a bound for the $L^\infty$ norm of $D k_\nu$, where $D$ is a differential operator. 
We shall do this by combining the following bound for $D \varphi_{\lambda, \tau}$ with the spherical inversion formula. Note that $\tau$ is no longer assumed to be trivial. **Lemma 12**. *Let $C > 0$, and let $D$ be a differential operator on $G$ of degree $n$ with smooth coefficients. For any compact set $B \subset G$ and any $\lambda \in \mathfrak a^*_\mathbb C$ with $\| \textup{Im} \lambda \| \leqslant C$ we have $\| D \varphi_{\lambda, \tau}|_B \|_\infty \ll_{B,C,D,\tau} (1 + \| \lambda \|)^n$.* *Proof.* We recall the left action of $G$ on functions $f$ on $G$, given by $(g.f)(h) = f(g^{-1}h)$, and on differential operators $D$, given by $(g.D)(f) = g.(D( g^{-1}.f))$. If we define $\psi(g) = a(g)^{\rho + \lambda} \tau( \kappa(g)^{-1})$, then the formula [\[defn-sph-fn\]](#defn-sph-fn){reference-type="eqref" reference="defn-sph-fn"} may be rewritten $$\varphi_{\lambda, \tau}(g) = \int_{K} \tau(k) \psi(kg) dk = \int_{K} \tau(k) (k^{-1}.\psi)(g) dk,$$ which gives $$D \varphi_{\lambda, \tau} = \int_{K} \tau(k) D(k^{-1}.\psi) dk = \int_{K} \tau(k) k^{-1}.((k.D)\psi) dk.$$ It therefore suffices to bound $\| (k.D)\psi |_B \|_\infty$ for $k \in K$. As the coefficients of $D$ are smooth, the operators $k.D$ have coefficients which are bounded on $B$, uniformly in $k$. By choosing a basis for the module of differential operators on $B$, it suffices to bound $D\psi$ for a single $D$. As $\tau$ is fixed, we may write $\psi(g) = b(g) e^{ \lambda (H(g))}$, where $b(g)$ is a fixed smooth function. The lemma now follows from an elementary argument by writing out $\psi$ and $D$ in a coordinate chart. ◻ **Lemma 13**. *Let $\nu\in i\mathfrak{a}^*$ and $\tau\in\widehat{K}$ be one-dimensional. Let $0<R<R_0$ and $k_\nu\in C^\infty_R(S,\tau,\tau)$ be as in Proposition [Proposition 7](#test-fn){reference-type="ref" reference="test-fn"}. 
If $D$ is a differential operator of degree $n$ on $G$ with smooth coefficients, we have $\| D k_\nu \|_\infty \ll_{R, D,\tau} \tilde{\beta}_S(\nu) (1 + \| \nu \|)^n$.* *Proof.* From the $\tau$-spherical inversion formula [\[spherical-inversion\]](#spherical-inversion){reference-type="eqref" reference="spherical-inversion"} we have $$D k_\nu = \sum_{j=0}^r \sum_{\varrho\in D_{\tau,j}} \frac{d_j(\varrho,\tau)}{|W_{\Theta_{r-j}}|} \int_{i\mathfrak{a}^*_{\Theta_j}} D\varphi_{\varrho+\lambda_{\Theta_j}, \tau} \widehat{k}_\nu(\varrho+\lambda_{\Theta_j};\tau) |{\bf c}^{\Theta_j}(\varrho+\lambda_{\Theta_j},\tau)|^{-2} d\lambda_{\Theta_j}.$$ Let $B$ be a compact set containing the support of $k_\nu$. Applying Lemma [Lemma 12](#sph-fn-bd){reference-type="ref" reference="sph-fn-bd"} with $C$ chosen so that $C > \| \varrho \|$ for all $\varrho \in D_{\tau, j}$, we find that $\| D\varphi_{\varrho+\lambda_{\Theta_j}, \tau} |_B \|_\infty \ll_{D,\tau} (1 + \| \varrho+\lambda_{\Theta_j} \|)^n$ for all $\varrho\in D_{\tau,j}$ and $\lambda_{\Theta_j} \in i\mathfrak{a}^*_{\Theta_j}$. This implies that $$\| D k_\nu \|_\infty \ll_{D,\tau} \sum_{j=0}^r \sum_{\varrho\in D_{\tau,j}} \frac{d_j(\varrho,\tau)}{|W_{\Theta_{r-j}}|} \int_{i\mathfrak{a}^*_{\Theta_j}} (1 + \| \varrho+\lambda_{\Theta_j} \|)^n |\widehat{k}_\nu(\varrho+\lambda_{\Theta_j};\tau)| |{\bf c}^{\Theta_j}(\varrho+\lambda_{\Theta_j},\tau)|^{-2} d\lambda_{\Theta_j}.$$ We shall show that the right-hand side is $\ll_{\tau} \tilde{\beta}_S(\nu) (1 + \| \nu \|)^n$. We begin with the two extremal cases: $j=0$ and $j=r$. 
For $j=0$ we use Lemma [Lemma 10](#lemma:c-theta-bd){reference-type="ref" reference="lemma:c-theta-bd"}, as well as Proposition [Proposition 7](#test-fn){reference-type="ref" reference="test-fn"} [\[4\]](#4){reference-type="eqref" reference="4"}, to obtain $$\begin{aligned} \frac{1}{|W|}\int_{i\mathfrak{a}^*} (1 + \| \lambda \|)^n \widehat{k}_\nu(\lambda;\tau)|{\bf c}(\lambda,\tau)|^{-2}d\lambda &\ll_{R, A,\tau} \sum_{w\in W}\int_{{i\mathfrak{a}_+^*}} (1 + \| \lambda \|)^n (1+\|w\lambda-\nu\|)^{-A} \tilde\beta_S(\lambda) d\lambda\\ & =\int_{i\mathfrak{a}^*} (1 + \| \lambda \|)^n (1+\|\lambda-\nu\|)^{-A} \tilde\beta_S(\lambda) d\lambda,\end{aligned}$$ where we have folded up the $W$-sum. After applying Lemma [Lemma 9](#c-bound){reference-type="ref" reference="c-bound"}, and the inequality $$\label{eq:lambda-bd} (1+\|\lambda\|)\leqslant (1+\|\nu\|)(1+\|\lambda-\nu\|),$$ this becomes $$\ll \tilde\beta_S(\nu)(1+\|\nu\|)^n\int_{i\mathfrak{a}^*} (1 + \| \lambda - \nu \|)^{n+d^\emptyset(\Sigma)-A} d\lambda,$$ which, for $A$ large enough, is $\ll_{\tau} \tilde{\beta}_S(\nu) (1 + \| \nu \|)^n$. For $j = r$, the bound $$\sum_{\varrho\in D_{\tau}}d(\varrho,\tau) (1 + \| \varrho \|)^n \widehat{k}_\nu(\varrho;\tau)\ll_{\tau} 1$$ follows from the fact that $D_\tau$ is finite. 
For the intermediate cases $j=1,\ldots ,r-1$, we apply Proposition [Proposition 7](#test-fn){reference-type="ref" reference="test-fn"} [\[4\]](#4){reference-type="eqref" reference="4"} and Lemma [Lemma 10](#lemma:c-theta-bd){reference-type="ref" reference="lemma:c-theta-bd"} to get $$\begin{aligned} \notag &\sum_{\varrho\in D_{\tau,j}} \frac{d_j(\varrho,\tau)}{|W_{\Theta_{r-j}}|}\int_{i\mathfrak{a}^*_{\Theta_j}} (1 + \| \varrho+\lambda_{\Theta_j} \|)^n |\widehat{k}_\nu(\varrho+\lambda_{\Theta_j};\tau)| |{\bf c}^{\Theta_j}(\varrho+\lambda_{\Theta_j},\tau)|^{-2} d\lambda_{\Theta_j}\\ \label{j-invert-bound} &\ll_{R,A,\tau} \max_{\varrho\in D_{\tau,j}}\max_{w\in W}\int_{i\mathfrak{a}^*_{\Theta_j}} (1 + \| \varrho+\lambda_{\Theta_j} \|)^n (1+\|w(\varrho+\lambda_{\Theta_j})-\nu\|)^{-A} \tilde\beta_S^{\Theta_j}(\varrho+\lambda_{\Theta_j})d\lambda_{\Theta_j}.\end{aligned}$$ The inequality [\[eq:lambda-bd\]](#eq:lambda-bd){reference-type="eqref" reference="eq:lambda-bd"} gives $$1+\|\varrho+\lambda_{\Theta_j}\| \leqslant (1+\|\varrho\|)(1+\|\lambda_{\Theta_j}\|) \ll_\tau 1+\|\lambda_{\Theta_j}\|$$ and $$(1+\|w(\varrho+\lambda_{\Theta_j})-\nu\|) \geqslant \frac{ 1 + \| w\lambda_{\Theta_j} - \nu \| }{ 1 + \| w \varrho \| } \gg_\tau 1 + \| w\lambda_{\Theta_j} - \nu \|,$$ while Lemma [Lemma 9](#c-bound){reference-type="ref" reference="c-bound"} gives $$\tilde\beta_S^{\Theta_j}(\varrho+\lambda_{\Theta_j}) \ll (1 + \| \varrho \|)^{d^{\Theta_j}(\Sigma)} \tilde\beta_S^{\Theta_j}(\lambda_{\Theta_j}) \ll_\tau \tilde\beta_S^{\Theta_j}(\lambda_{\Theta_j}).$$ Applying these to [\[j-invert-bound\]](#j-invert-bound){reference-type="eqref" reference="j-invert-bound"}, the upper bound becomes $$\begin{aligned} &\ll_{R,A,\tau} \max_{\varrho\in D_{\tau,j}} \max_{w\in W} \int_{i\mathfrak{a}^*_{\Theta_j}} (1 + \| \lambda_{\Theta_j} \|)^n (1+\|w\lambda_{\Theta_j}-\nu\|)^{-A} \tilde\beta_S^{\Theta_j}(\lambda_{\Theta_j})d\lambda_{\Theta_j} \\ &= \max_{w\in W} \int_{i\mathfrak{a}^*_{\Theta_j}} (1 + 
\| \lambda_{\Theta_j} \|)^n (1+\|\lambda_{\Theta_j}-w\nu\|)^{-A} \tilde\beta_S^{\Theta_j}(\lambda_{\Theta_j})d\lambda_{\Theta_j}.\end{aligned}$$ As in the case $j = 0$, we may bound this integral by $(1 + \| \nu \|)^n \tilde\beta_S^{\Theta_j}(w\nu ) \ll (1 + \| \nu \|)^n \tilde\beta_S(\nu )$, which completes the proof. ◻ # Upper bounds on the cuspidal spectrum with abelian lowest $K$-type {#sec:upper-bd-cusp-spec} In this section we establish upper bounds, which are sharp up to logarithmic powers, on the (multidimensional) spectral counting function of $L^2_\text{cusp}(\Gamma \backslash G, \tau)$, where $\Gamma$ is a lattice in a semi-simple Lie group $G$ and $\tau$ is an abelian $K$-type. The precise conditions on $G,\Gamma,$ and $\tau$ will be spelled out in Section [6.1](#sec:upper-bd-notation){reference-type="ref" reference="sec:upper-bd-notation"} below. When $\tau\in\widehat{K}$ is non-trivial, this will include the primary case of interest for Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"}, with $G$ being the real points of the restriction of scalars of a symplectic group over a totally real number field $F$, so that $G=\text{Sp} _{2m}(\mathbb R)^{[F:\mathbb Q]}$. The main result is Proposition [Proposition 14](#Weyl-upper-bd){reference-type="ref" reference="Weyl-upper-bd"}, which corresponds to Step [\[3sketch\]](#3sketch){reference-type="ref" reference="3sketch"} of the introduction, and which, we emphasize, only requires upper bounds (existence will not be our concern). ## Notation and statement of result {#sec:upper-bd-notation} Let ${\bf G}$ be a connected semisimple $\mathbb Q$-group and $G={\bf G}(\mathbb R)$ its real points. Let $r$ be the rank of $G$. We denote the connected component of the identity in $G$ by $G^0$. Let $K$ be a maximal compact subgroup of $G$, and let $K^0 = K \cap G^0$ which is a maximal compact subgroup of $G^0$. Let $S = G^0 / K^0$ be the Riemannian globally symmetric space associated to $G^0$. 
Let $\tau$ be a character of $K^0$. We make the following hypothesis on the pair $(G,\tau)$. Let $G_j$ denote the simple factors of $G^0$, with maximal compacts $K_j$. Write $\tau_j$ for the restriction of $\tau$ to $K_j$. We extend the assumption [\[eq:assumption\]](#eq:assumption){reference-type="eqref" reference="eq:assumption"} to the case of semisimple $G$, by assuming $$\label{eq:assumption2} \textit{when $\tau_j$ is non-trivial, the Hermitian group $G_j$ has reduced root system.}$$ Let $\Gamma < {\bf G}(\mathbb Q)$ be a congruence arithmetic lattice. We shall work with various spaces of automorphic forms on $\Gamma \backslash G$ that are isotypic for $\tau$. It would be natural to use notation such as $\mathcal A(\Gamma \backslash G, \tau)$ for these spaces, but unfortunately this conflicts with the notation used in Sections [3](#sec:notation){reference-type="ref" reference="sec:notation"}--[5](#sec:test-function){reference-type="ref" reference="sec:test-function"}, where e.g. $C^\infty(S, \tau)$ denotes functions that are $\tau^{-1}$-isotypic under the right action of $K^0$. As a result, $\mathcal A(\Gamma \backslash G, \tau)$, $L^2_\text{cusp}(\Gamma \backslash G, \tau)$, etc. **will always denote functions that are $\tau^{-1}$-isotypic.** This is somewhat awkward, but we feel that it is the best compromise available. We therefore define $L^2_\text{cusp}(\Gamma \backslash G, \tau)$ to be the $\tau^{-1}$-isotypic subspace of $L^2_\text{cusp}(\Gamma \backslash G)$. If $g_i$ are representatives for $\Gamma \backslash G / G^0$, and $\Gamma_i = g_i^{-1} ( \Gamma \cap G^0 ) g_i$, then $L^2_\text{cusp}(\Gamma \backslash G, \tau) \simeq \oplus L^2_\text{cusp}(\Gamma_i \backslash S, \tau)$. 
The algebra $\mathbb{D}(\tau)$ of left $G$-invariant differential operators on $H^0(S,L(\tau))$, defined in Section [3.2](#sec:tau-spherical){reference-type="ref" reference="sec:tau-spherical"}, preserves $L^2_{\rm cusp}(\Gamma\backslash G,\tau)\cap C^\infty(\Gamma \backslash G, \tau)$. For $\lambda\in\mathfrak{a}^*_\mathbb C$ we let $$\label{eq:E-lambda-cusp-tau} \mathcal{E}_\lambda^{\rm cusp}(\Gamma\backslash G,\tau)=\{f\in L^2_{\rm cusp}(\Gamma \backslash G,\tau) \cap C^\infty(\Gamma \backslash G, \tau): D f=\chi_\lambda(D) f, \;\forall D\in\mathbb{D}(\tau)\}.$$ Let $\Lambda_{\rm cusp}(\Gamma;\tau)=\{\lambda\in\mathfrak{a}^*_\mathbb C/W: \dim \mathcal{E}_\lambda^{\rm cusp}(\Gamma\backslash G,\tau) > 0\}$ be the cuspidal spectrum. Then $$\label{eq:L2-cusp-tau} L^2_{\rm cusp}(\Gamma \backslash G,\tau)=\bigoplus_{\lambda\in \Lambda_{\rm cusp}(\Gamma;\tau)} \mathcal{E}_\lambda^{\rm cusp}(\Gamma \backslash G,\tau).$$ We may now state the main result of this section, which estimates the number of cuspidal $\tau$-spherical spectral parameters $\lambda\in\Lambda_{\rm cusp}(\Gamma;\tau)$ in a bounded window about $\nu$. **Proposition 14**. *Let $G$ and $\tau$ be as above. For any $\nu\in i\mathfrak{a}^*$ we have $$\sum_{\substack{\lambda\in \Lambda_{\rm cusp}(\Gamma;\tau)\\ \|{\rm Im}\,\lambda-\nu\|= O(1)}}\dim \mathcal{E}_\lambda^{\rm cusp}(\Gamma \backslash G,\tau)\ll_{\Gamma,\tau} \left(\log (3+\|\nu\|)\right)^r \tilde\beta_S(\nu).$$* The logarithmic factor in Proposition [Proposition 14](#Weyl-upper-bd){reference-type="ref" reference="Weyl-upper-bd"} is responsible for the loss of the same quality in Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"}. As was mentioned in Remark [Remark 2](#remark2-intro){reference-type="ref" reference="remark2-intro"}, this factor is the price to pay for our use of a partial trace formula; it arises essentially from the estimation of the unipotent conjugacy classes. 
Furthermore, the arithmeticity assumption on the lattice $\Gamma$ is used only marginally, to ensure a pleasant theory of lattice reduction and Siegel domains, which is of critical use in the proof. ## Discussion of literature Before we provide a preview of the proof of Proposition [Proposition 14](#Weyl-upper-bd){reference-type="ref" reference="Weyl-upper-bd"}, let us place the result in perspective, by recalling the extensive literature surrounding this topic. We begin by comparing Proposition [Proposition 14](#Weyl-upper-bd){reference-type="ref" reference="Weyl-upper-bd"} with known *upper bounds* in other settings. Duistermaat, Kolk, and Varadarajan [@DKV Theorem 7.3] prove a bound of the above form in the case when $\Gamma$ is a *uniform lattice* (where the notion of cuspidality is empty) in a non-compact connected semisimple real Lie group $G$ with finite center and $\tau$ is the *trivial $K$-type*. On the other hand, for non-compact semisimple $G$ and arbitrary lattices $\Gamma< G$, Donnelly [@Donnelly82 Theorem 9.1] showed that for *any* $K$-type $\tau$ one has $$\limsup_{\Lambda\rightarrow\infty}\bigg(\Lambda^{-\dim S/2}\sum_{\substack{\lambda\in \Lambda_{\rm cusp}(\Gamma;\tau)\\ \Omega(\lambda)\leqslant\Lambda}}\dim \mathcal{E}_\lambda^{\rm cusp}(\Gamma \backslash G,\tau)\bigg)\leqslant (4\pi)^{-\dim S/2}\frac{{\rm vol}(\Gamma \backslash G)}{\Gamma(\dim S/2+1)}\dim(\tau),$$ where $\Omega(\lambda)$ is the eigenvalue of the Casimir operator acting on $\mathcal{E}_\lambda(S,\tau)$. (When the dimension of $\tau$ is greater than 1, $\mathcal{E}_\lambda(S,\tau)$ denotes forms that are isotypic for the dual representation $\tau^\vee$.) In one respect, Donnelly's result is unnecessarily fine for us, since we do not need the exact constant in the upper bound. In another respect, his result is not fine enough for us, as it does not take into account the singularities of the spectral parameter: it counts all spectral parameters in a ball of increasing radius. 
We now compare Proposition [Proposition 14](#Weyl-upper-bd){reference-type="ref" reference="Weyl-upper-bd"} with known asymptotic statements (Weyl laws, from which upper bounds can be deduced) in other settings. There are two Weyl laws for the cuspidal automorphic spectrum for split semisimple $\mathbb Q$-groups, such as ${\rm Res}_{F/\mathbb Q}\text{Sp} _{2m}$, under a spherical assumption at infinity, due to Lindenstrauss and Venkatesh [@LV] and, more recently, to Finis and Lapid [@FL2 Theorem 5.11]. Besides the spherical assumption at infinity (which is the same as to say that $\tau$ is trivial), the former paper (like that of Donnelly) counts only by Laplace eigenvalue, while the latter counts in dilated regions in $i\mathfrak a^*$ (albeit with a power saving error term). ## Sketch of proof Taking inspiration from Steven D. Miller's thesis [@Mi], we prove Proposition [Proposition 14](#Weyl-upper-bd){reference-type="ref" reference="Weyl-upper-bd"} by integrating the automorphic kernel associated with an approximate spectral projector over a sufficiently large truncated Siegel domain $(\Gamma\backslash G)^{\leqslant T}$, rather than the entire automorphic quotient $\Gamma\backslash G$. The resulting formula, which one may call a *partial trace formula*, has the advantage of allowing for an elementary treatment of Eisenstein series: their $L^2$-mass on $(\Gamma\backslash G)^{\leqslant T}$ can be discarded by positivity, regardless of the size of the truncation parameter $T$, since we only need upper bounds. To ensure that the cuspidal spectrum receives a weight which is bounded away from zero, we must take the parameter $T$ large enough to capture a positive proportion of the $L^2$ mass of a cusp form of spectral parameter $\nu$. This is plausible in light of the rapid decay of cusp forms, a classical fact in the theory of automorphic forms, due to Gelfand and Piatetski-Shapiro. 
Unfortunately, the standard proof of rapid decay [@MW Lemma 1.2.10], which couples a uniform moderate growth estimate with integration by parts, does not come with any explicit uniformity in the spectral parameter. We resolve this problem in Lemma [Lemma 19](#lemma:mod-growth){reference-type="ref" reference="lemma:mod-growth"}, where we establish a uniform moderate growth estimate for $\tau^{-1}$-isotypic cusp forms of spectral parameter $\nu$, which is uniform in $\nu$. Such quantitative control is furnished by the sup norm estimate on the $\tau^{-1}$-spherical spectral projector (and its derivatives) in Proposition [Lemma 13](#k-nu-e){reference-type="ref" reference="k-nu-e"}. (Note that we must apply Proposition [Proposition 7](#test-fn){reference-type="ref" reference="test-fn"} to produce a function in $C^\infty(S, \tau^{-1}, \tau^{-1})$ if we want to count $\tau^{-1}$-isotypic forms.) We insert this information into the standard argument for rapid decay in Lemma [Lemma 20](#lemma:rapid-decay){reference-type="ref" reference="lemma:rapid-decay"}, which is then itself uniform in $\nu$. This last result is then shown to imply, in Lemmas [Lemma 21](#lemma:phi-pointwise){reference-type="ref" reference="lemma:phi-pointwise"} and [Lemma 22](#L2-mass){reference-type="ref" reference="L2-mass"}, the requisite lower bound on the $L^2$ mass of cusp forms on truncated Siegel domains. Finally, the logarithmic loss in $\nu$ will come from the contribution of the unipotent elements in $\Gamma$ on the geometric side of the partial trace formula; the relevant estimate is essentially contained in Lemma [Lemma 15](#cusp-kernel){reference-type="ref" reference="cusp-kernel"}. ## Siegel sets {#sec:Siegel-sets} We review some notation related to Siegel domains. For a nice reduction theory, we shall operate under the assumption that $\Gamma$ is congruence arithmetic. The arithmetic assumption on $\Gamma$ can be expressed as follows. 
There is a faithful $\mathbb Q$-representation $\rho:{\bf G}\rightarrow\text{SL} _N$, such that $\rho(\Gamma)$ contains the intersection of $\rho({\bf G}(\mathbb Q))$ with a principal congruence subgroup of $\text{SL} _N(\mathbb Z)$. Our primary source for reduction theory in this context is the book of Borel [@Borel69]. We now introduce a few important subgroups of ${\bf G}$ related to the geometry of $\Gamma\backslash G$ at infinity. We shall denote groups over $\mathbb Q$ by a boldface letter, and real points by the corresponding roman letter. If ${\bf H}$ is a $\mathbb Q$-group, we write $H^0$ for the connected component of the identity in $H$. Let[^4] ${\bf P}_\mathbb Q= {\bf U}{\bf L}$ be a minimal $\mathbb Q$-parabolic subgroup of ${\bf G}$. Let ${\bf A}$ be a maximal $\mathbb Q$-split torus in the center of ${\bf L}$, so that ${\bf L}$ is the centralizer of ${\bf A}$ in ${\bf G}$. Let $${\bf M}= \bigg( \bigcap_{ \chi \in X^*({\bf L})_\mathbb Q} \ker \chi \bigg)^0$$ be the maximal connected $\mathbb Q$-anisotropic subgroup of ${\bf L}$; then ${\bf P}_\mathbb Q={\bf U}{\bf M}{\bf A}$. Let $\mathfrak a_{P_\mathbb Q}$ be the Lie algebra of $A$. We assume that $\mathbf{P}_\mathbb Q$ and $\mathbf{A}$ are chosen such that $\mathfrak{a}_{P_\mathbb Q}$ is orthogonal to $\mathfrak{k}$. Let $\Sigma_\mathbb Q$ denote the roots of $\mathfrak a_{P_\mathbb Q}$ in $\mathfrak g$, $\Sigma_\mathbb Q^+$ the positive roots corresponding to ${\bf U}$, and $\Psi_\mathbb Q$ the simple roots. We write the half-sum of the positive roots in this setting as $\rho_\mathbb Q$. We have the Langlands decompositions $P_\mathbb Q= U MA$ and $P_\mathbb Q^0 = U M^0 A^0$, the latter of which is a diffeomorphism. Since $K$ meets all connected components of $G$ we have $G = P_\mathbb Q^0 K$, and hence $G=UM^0A^0K$. We define the Iwasawa height function $a: G \to A^0$ by $x\in UM^0 a(x) K$. 
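As a simple illustration of this notation (not needed in the sequel), take $G = \mathrm{SL}_2(\mathbb R)$ with ${\bf P}_\mathbb Q$ the upper triangular Borel, so that $U$ is the group of upper triangular unipotent matrices, $M = \{\pm 1\}$, and $A^0$ the diagonal subgroup with positive entries. Writing $g = \begin{pmatrix} 1 & x \\ & 1 \end{pmatrix} \begin{pmatrix} y^{1/2} & \\ & y^{-1/2} \end{pmatrix} k$ with $k \in \mathrm{SO}(2)$, we have $$a(g) = \mathrm{diag}(y^{1/2}, y^{-1/2}), \qquad a(g)^\alpha = y, \qquad a(g)^{2\rho_\mathbb Q} = y,$$ where $\alpha$ is the unique simple root; the factor $a^{-2\rho_\mathbb Q}$ in the Iwasawa decomposition of the Haar measure then recovers the familiar hyperbolic measure $dx\, dy/y^2$, up to the $K$-factor.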
Similarly to [\[Iwasawa-measure1\]](#Iwasawa-measure1){reference-type="eqref" reference="Iwasawa-measure1"} (where the inclusion $M\subset K$ held), we may decompose a fixed Haar measure $dg$ on $G$ as $$\label{Q-Iwasawa-measure1} dg=a^{-2\rho_\mathbb Q} du\, dm \, da\, dk,$$ for appropriately normalized Haar measures on the unimodular groups $U, M^0, A^0, K$. For $t>0$ write $$A_t^0=\{a\in A^0: a^{\alpha}\geqslant t \;\forall\;\alpha\in \Psi_\mathbb Q\}.$$ Let $\omega\subset UM^0$ be a compact neighborhood of the identity. We call a *Siegel set* (with respect to $\mathbf{P}_\mathbb Q$ and $K$) any subset of $G$ of the form $\mathfrak{S}_{t,\omega}=\omega A_t^0 K$. In view of the condition $\mathfrak{a}_{P_\mathbb Q}\perp\mathfrak{k}$, such Siegel domains are called *normal* in [@Borel69 Definition 12.3]. Borel [@Borel69 Théorèmes 13.1, 15.4] has shown the following properties of Siegel sets. 1. For every finite subset $F \subset {\bf G}(\mathbb Q)$ the set $\{\gamma\in\Gamma: \gamma F\mathfrak{S}_{t,\omega}\cap F\mathfrak{S}_{t,\omega} \neq \emptyset \}$ is finite (the Siegel property). 2. There is a Siegel set $\mathfrak{S}_{t,\omega}$ and a finite set $\Xi\subset\mathbf{G}(\mathbb Q)$ such that $G=\Gamma\Xi\mathfrak{S}_{t,\omega}$. In this case the subset $\Xi\mathfrak{S}_{t,\omega}$ of $G$ is called a *fundamental set*, and the above properties state that the natural map $\Xi\mathfrak{S}_{t,\omega}\rightarrow \Gamma \backslash G$ is surjective with finite fibers. By [@Borel69 Proposition 15.6], one may take for $\Xi$ a complete set of representatives for the "set of cusps" $\Gamma\backslash\mathbf{G}(\mathbb Q)/{\bf P}_\mathbb Q(\mathbb Q)$. We now suppose that $t$, $\omega$, and $\Xi$ are chosen so that $\Xi\mathfrak{S}_{t,\omega}$ is a fundamental set for $\Gamma$.
For $T > t$, define $$A_t^{\leqslant T} = \{a\in A^0 : t\leqslant a^{\alpha} \leqslant T \;\forall\;\alpha\in \Psi_\mathbb Q\} \quad \text{and} \quad A_t^{> T} = A^0_t \setminus A_t^{\leqslant T}$$ to be the truncation of $A_t^0$ at height $T$, and its complement. We also define $\mathfrak{S}_{t,\omega}^{\leqslant T} = \omega A_t^{\leqslant T}K$ and $\mathfrak{S}_{t,\omega}^{> T} = \omega A_t^{> T} K$. We let $(\Gamma \backslash G)^{\leqslant T}$ be the image of $\Xi \mathfrak{S}_{t,\omega}^{\leqslant T}$ in $\Gamma \backslash G$, and let $(\Gamma \backslash G)^{>T}$ be the complement. ## Growth of automorphic kernel functions in the cusp The following lemma will be used to bound the growth of automorphic kernel functions in the cusp in the proof of Lemma [Lemma 19](#lemma:mod-growth){reference-type="ref" reference="lemma:mod-growth"} and again in Section [6.9](#sec:proof-Prop-Weyl){reference-type="ref" reference="sec:proof-Prop-Weyl"} (to complete the proof of Proposition [Proposition 14](#Weyl-upper-bd){reference-type="ref" reference="Weyl-upper-bd"}). It is valid in the generality of Section [6.4](#sec:Siegel-sets){reference-type="ref" reference="sec:Siegel-sets"}, and hence independent of the hypothesis [\[eq:assumption2\]](#eq:assumption2){reference-type="eqref" reference="eq:assumption2"}. In this section, we let $\Gamma_U$ denote the intersection $\Gamma \cap U$. **Lemma 15**. *Let $B \subset G$ be compact, and let $\mathfrak{S}_{t,\omega}$ be a Siegel set. If $x \in \mathfrak{S}_{t,\omega}$ and $\xi \in {\bf G}(\mathbb Q)$, we have $$\sum_{\gamma \in \Gamma} 1_B( x^{-1} \xi^{-1} \gamma \xi x) \ll a(x)^{2 \rho_\mathbb Q}.$$* *Proof.* By replacing $\Gamma$ with $\xi^{-1} \Gamma \xi$, we may assume that $\xi = 1$. By applying Lemma [Lemma 16](#Siegel-stab){reference-type="ref" reference="Siegel-stab"}, it suffices to restrict the sum over $\Gamma$ to $\Gamma_U \delta$, for some $\delta \in \Gamma$ that is independent of $x$.
In other words, we must show that $\# \{ \gamma_U \in \Gamma_U : x^{-1} \gamma_U \delta x \in B \} \ll a(x)^{2 \rho_\mathbb Q}$. We may assume that $x^{-1} \delta x \in B$, at the expense of allowing the coset representative $\delta$ (but not the coset $\Gamma_U \delta$) to depend on $x$. Suppose that $\gamma_U$ satisfies $x^{-1} \gamma_U \delta x \in B$. We may write this condition as $\gamma_U^{x^{-1}} x^{-1} \delta x \in B$, and combined with our assumption that $x^{-1} \delta x \in B$ this gives $\gamma_U^{x^{-1}} \in B B^{-1}$. If we write $x = u m a k$ with $um \in \omega$ and $a \in A^0_t$, and assume that $B$ is bi-$K$-invariant (as we may, after enlarging $B$), we may expand the condition $\gamma_U^{x^{-1}} \in B B^{-1}$ as $$a^{-1} m^{-1} u^{-1} \gamma_U u m a \in B B^{-1}.$$ We next rewrite this as $$\begin{aligned} (m^{-1} u^{-1})^{a^{-1}} a^{-1} \gamma_U a (u m)^{a^{-1}} & \in B B^{-1} \\ a^{-1} \gamma_U a & \in (u m)^{a^{-1}} B B^{-1} (m^{-1} u^{-1})^{a^{-1}} \\ & \subset \omega^{a^{-1}} B B^{-1} (\omega^{-1})^{a^{-1}}.\end{aligned}$$ The union of the sets $\omega^{a^{-1}}$, taken over $a \in A^0_t$, is compact, and we denote it by $\Omega$. We deduce that $a^{-1} \gamma_U a$ lies in the fixed compact set $\Omega B B^{-1} \Omega^{-1}$, and hence that the set $\{ \gamma_U \in \Gamma_U : x^{-1} \gamma_U \delta x \in B \}$ whose size we wish to control is contained in $\Gamma_U \cap a \Omega B B^{-1} \Omega^{-1} a^{-1}$. The lemma now follows from Lemma [Lemma 17](#Uball){reference-type="ref" reference="Uball"} below. ◻ **Lemma 16**. *Let $B \subset G$ be compact, and let $\mathfrak{S}_{t,\omega}$ be a Siegel set. There exist finitely many elements $\delta_1, \ldots, \delta_k \in \Gamma$, depending on $B$ and $\mathfrak{S}_{t,\omega}$, such that if $x^{-1} \gamma x \in B$ for some $x \in \mathfrak{S}_{t,\omega}$, then $\gamma \in \Gamma_U \delta_i$ for some $i$.* *Proof.* Let $x \in \mathfrak{S}_{t,\omega}$ and $\gamma \in \Gamma$ satisfy $x^{-1} \gamma x \in B$.
We will show that there is a second Siegel set $\mathfrak{S}_{t',\omega'}$, independent of $x$, such that $\gamma_U \gamma x \in \mathfrak{S}_{t',\omega'}$ for some $\gamma_U \in \Gamma_U$. The lemma will follow from this by applying the Siegel property to $\mathfrak{S}_{t,\omega}$ and $\mathfrak{S}_{t',\omega'}$. We write $x \in \mathfrak{S}_{t,\omega}$ as $x = u_1 m_1 a_1 k_1$ with $u_1 m_1 \in \omega$ and $a_1 \in A^0_t$, and likewise write $\gamma x$ using the decomposition $G=UM^0A^0K$ as $\gamma x = u_2 m_2 a_2 k_2$. With this notation, we have $$\begin{aligned} x^{-1} \gamma x & = k_1^{-1} a_1^{-1} m_1^{-1} u_1^{-1} u_2 m_2 a_2 k_2 \\ & = k_1^{-1} (u_1^{-1} u_2)^{ a_1^{-1} m_1^{-1}} a_1^{-1} m_1^{-1} m_2 a_2 k_2.\end{aligned}$$ If we write $u' = (u_1^{-1} u_2)^{ a_1^{-1} m_1^{-1}}$, this becomes $$x^{-1} \gamma x = k_1^{-1} u' m_1^{-1} m_2 a_1^{-1} a_2 k_2.$$ If we assume that $B$ is bi-$K$-invariant, the condition $x^{-1} \gamma x \in B$ may be written as $u' m_1^{-1} m_2 a_1^{-1} a_2 \in B$, and we in fact have the stronger inclusion $$\label{inBP} u' m_1^{-1} m_2 a_1^{-1} a_2 \in B \cap P_\mathbb Q^0.$$ Because the Langlands decomposition $P_\mathbb Q^0 = U M^0 A^0$ is a diffeomorphism, we have $B \cap P_\mathbb Q^0 \subset B_U B_M B_A$ for compact sets $B_U \subset U$, $B_M \subset M^0$, and $B_A \subset A^0$, and it follows from this and ([\[inBP\]](#inBP){reference-type="ref" reference="inBP"}) that $m_2 \in m_1 B_M$ and $a_2 \in a_1 B_A$. Let $\omega_U \subset U$ and $\omega_M \subset M^0$ be compact sets such that $\omega \subset \omega_U \omega_M$, which implies that $u_1 \in \omega_U$ and $m_1 \in \omega_M$. Let $\omega'_U$ be a compact fundamental domain for the action of $\Gamma_U$ on $U$, and let $\gamma_U \in \Gamma_U$ satisfy $\gamma_U u_2 \in \omega_U'$. The inclusion $m_2 \in m_1 B_M$ implies that $m_2 \in \omega_M B_M$, and $a_2 \in a_1 B_A$ implies that $a_2 \in A^0_{t'}$ for some $t' > 0$. 
If we define $\omega' = \omega_U' \omega_M B_M$, then the auxiliary Siegel set we require is $\mathfrak{S}_{t',\omega'}$. Indeed, we have $$\label{Siegel-rep} \gamma_U \gamma x = (\gamma_U u_2) m_2 a_2 k_2 \in \omega_U' (\omega_M B_M) A^0_{t'} K = \mathfrak{S}_{t',\omega'}$$ as claimed. By applying the Siegel property to a Siegel set containing both $\mathfrak{S}_{t,\omega}$ and $\mathfrak{S}_{t',\omega'}$, we see that the set of $\delta \in \Gamma$ such that $\delta \mathfrak{S}_{t,\omega} \cap \mathfrak{S}_{t',\omega'} \neq \emptyset$ is finite, and we enumerate them as $\delta_1, \ldots, \delta_k$. Equation ([\[Siegel-rep\]](#Siegel-rep){reference-type="ref" reference="Siegel-rep"}) implies that $\gamma_U \gamma$ is equal to one of these $\delta_i$, which gives the lemma. ◻ **Lemma 17**. *If $B \subset U$ is compact, and $a \in A^0_t$, then $\# (\Gamma_U \cap a B a^{-1}) \ll a^{2\rho_\mathbb Q}$.* *Proof.* This follows from a standard volume argument. Take $C \subset U$ open and bounded, and such that $C C^{-1} \cap \Gamma_U = \{ e \}$. If we let $\Lambda = \Gamma_U \cap a B a^{-1}$, we have $$\label{Lambda-include} \Lambda C \subset a B a^{-1} C = a( B a^{-1} C a ) a^{-1}.$$ As in Lemma [Lemma 15](#cusp-kernel){reference-type="ref" reference="cusp-kernel"}, the union of the sets $a^{-1} C a$ over all $a \in A^0_t$ is compact, which implies there is a compact set $B'$ containing $B a^{-1} C a$ for all $a$. Equation ([\[Lambda-include\]](#Lambda-include){reference-type="ref" reference="Lambda-include"}) then gives $\Lambda C \subset a B' a^{-1}$. Our assumption that $C C^{-1} \cap \Gamma_U = \{ e \}$ implies that the sets $\gamma C$ for $\gamma \in \Gamma_U$ are all disjoint, so $$(\# \Lambda) \, \text{vol}(C) = \text{vol}( \Lambda C) \leqslant \text{vol}( a B' a^{-1}) = a^{2\rho_\mathbb Q} \text{vol}( B'),$$ as required. ◻ ## Pre-trace inequality We let $R$ denote the right-regular representation of $G$ on $L^2(\Gamma\backslash G)$.
For a function $\omega \in C(G)$, we define $\omega^*$ by $\omega^*(g) = \overline{\omega}(g^{-1})$. The following result, which is again valid in the generality of Section [6.4](#sec:Siegel-sets){reference-type="ref" reference="sec:Siegel-sets"}, will be used in the proof of the uniform moderate growth estimate of Lemma [Lemma 19](#lemma:mod-growth){reference-type="ref" reference="lemma:mod-growth"}. The idea there will be to bound $D\phi$, for $\phi \in \mathcal{E}_\mu^{\rm cusp}(\Gamma\backslash G,\tau)$ and $D$ a left-invariant differential operator on $G$, by writing it as $R(\omega) \phi$ for some well-chosen $\omega \in C_c(G)$ depending on $D$, $\tau$, and $\mu$. We may then take advantage of the spectral theory of $L^2(\Gamma\backslash G,\tau)$ by applying the following lemma. This will be the key to accessing the dependency on $\mu$, since, by Lemma [Lemma 13](#k-nu-e){reference-type="ref" reference="k-nu-e"}, the analytic properties of $\omega * \omega^*$ can be understood for $\tau$-spherical eigenfunctions. **Lemma 18**. *If $\omega \in C_c(G)$ and $\{ \phi_i \}$ is an orthonormal set in $L^2(\Gamma \backslash G)$, then $$\sum_i | R(\omega) \phi_i(x) |^2 \leqslant \sum_{\gamma \in \Gamma} (\omega * \omega^*)(x^{-1} \gamma x).$$* *Proof.* We have $$[R( \omega) \phi_i](x) = \int_G \phi_i(xg) \omega(g) dg = \int_{G} \phi_i(g) \omega(x^{-1} g) dg.$$ Folding the integral over $\Gamma$ gives $$[R( \omega) \phi_i](x) = \int_{ \Gamma \backslash G} \phi_i(g) \sum_{\gamma \in \Gamma} \omega(x^{-1} \gamma g) dg = \langle \phi_i, \sum_{\gamma \in \Gamma} \overline{\omega}(x^{-1} \gamma \; \cdot) \rangle,$$ where $\langle \cdot , \cdot \rangle$ denotes the inner product in $L^2(\Gamma \backslash G)$.
Because the $\phi_i$ are orthonormal, we may apply Bessel's inequality to obtain $$\sum_i | R(\omega) \phi_i(x) |^2 \leqslant \bigg\| \sum_{\gamma \in \Gamma} \omega(x^{-1} \gamma \; \cdot) \bigg\|_2^2 = \int_{ \Gamma \backslash G} \bigg| \sum_{\gamma \in \Gamma} \omega(x^{-1} \gamma g) \bigg|^2 dg.$$ Expanding the square and unfolding again, we obtain $$\label{amp1} \sum_i | R(\omega) \phi_i(x) |^2 \leqslant \int_{ \Gamma \backslash G} \sum_{\gamma_1, \gamma_2 \in \Gamma} \omega(x^{-1} \gamma_1 g) \overline{\omega(x^{-1} \gamma_2 g)} dg = \sum_{\gamma \in \Gamma} \int_{G} \omega(x^{-1} \gamma g) \overline{\omega(x^{-1} g)} dg.$$ Because $\overline{\omega(x^{-1} g)} = \omega^*(g^{-1} x)$, we have $$\begin{aligned} \int_{G} \omega(x^{-1} \gamma g) \overline{\omega(x^{-1} g)} dg = \int_{G} \omega(x^{-1} \gamma g) \omega^*(g^{-1} x) dg = \omega * \omega^* ( x^{-1} \gamma x).\end{aligned}$$ Inserting this into [\[amp1\]](#amp1){reference-type="eqref" reference="amp1"} completes the proof. ◻ ## Uniform moderate growth of cusp forms {#sec:UMG} We next prove a uniform moderate growth estimate for $\tau^{-1}$-isotypic eigenfunctions. For the statement and proof, we introduce some notation relating to the differential operators on $G$. Let $D(G)$ and $D(G)^\vee$ be the algebras of left-invariant and right-invariant differential operators on $G$. We define the involution $\vee : x \mapsto x^{-1}$ of $G$, which also acts on functions by $f^\vee(g) = f(g^{-1})$. If $D$ is a differential operator on $G$ we let $D^\vee$ be its image under $\vee$, defined by $D^\vee(f) = (D f^\vee)^\vee$. Then the map $\vee : D \mapsto D^\vee$ gives an algebra isomorphism $\vee : D(G) \to D(G)^\vee$. We may explicate the map $\vee$ on left-invariant vector fields as follows: if $X \in \mathfrak g$, and $\widetilde{X} \in D(G)$ is the left-invariant vector field associated to $X$, then $- \widetilde{X}^\vee$ is the corresponding right-invariant vector field.
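To verify the last assertion, let $X \in \mathfrak g$ and recall that $(\widetilde{X} f)(g) = \frac{d}{ds} f(g e^{sX}) \big|_{s=0}$. Unwinding the definitions gives $$(\widetilde{X}^\vee f)(g) = (\widetilde{X} f^\vee)(g^{-1}) = \frac{d}{ds} f^\vee( g^{-1} e^{sX} ) \Big|_{s=0} = \frac{d}{ds} f( e^{-sX} g ) \Big|_{s=0} = -(X^R f)(g),$$ where $X^R$ denotes the right-invariant vector field $(X^R f)(g) = \frac{d}{ds} f(e^{sX} g) \big|_{s=0}$; hence $-\widetilde{X}^\vee = X^R$, as claimed.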
We also have $$\label{convolve} (D_1^\vee f_1) * (D_2 f_2) = D_1^\vee D_2 (f_1 * f_2)$$ for $f_1, f_2 \in C^\infty_c(G)$ and $D_1, D_2 \in D(G)$. If $D$ is a differential operator on $G$, we let $\overline{D}$ denote the complex conjugate operator, defined by $\overline{D} f = \overline{ D \overline{f} }$. **Lemma 19**. *Let $\mathfrak{S}_{t,\omega}$ be a Siegel set. Let $\phi \in \mathcal{E}_\mu^{\rm cusp}(\Gamma\backslash G;\tau)$ satisfy $\| \phi \|_2 = 1$. For any $D \in D(G)$ of degree $n$, and any $\xi \in {\bf G}(\mathbb Q)$, we have $$\label{eq:mod-growth} D \phi (\xi x)\ll_D (1+\|\mu\|)^{n} \widetilde{\beta}_S(\mu)^{1/2} a(x)^{ \rho_\mathbb Q}, \qquad x \in\mathfrak{S}_{t,\omega}.$$* *Proof.* We shall apply Lemma [Lemma 18](#pre-trace){reference-type="ref" reference="pre-trace"} to the orthonormal set containing the single element $\phi$. To describe our choice of $\omega$, we first let $k_{-\mu}^0 \in C^\infty_c(S, \tau^{-1}, \tau^{-1})$ be the function appearing in the proof of Proposition [Proposition 7](#test-fn){reference-type="ref" reference="test-fn"}, with $\nu$ there chosen to be $-\mu$. By scaling $k_{-\mu}^0$ by a bounded constant, we may assume that $R(k_{-\mu}^0) \phi = \phi$. 
This is equivalent to $\phi = \phi * (k_{-\mu}^0)^\vee$, and so $$D \phi = D( \phi * (k_{-\mu}^0)^\vee ) = \phi * D (k_{-\mu}^0)^\vee = R( (D (k_{-\mu}^0)^\vee) ^\vee ) \phi = R( D^\vee k_{-\mu}^0 ) \phi.$$ It follows that if we apply Lemma [Lemma 18](#pre-trace){reference-type="ref" reference="pre-trace"} with $\omega = D^\vee k^0_{-\mu}$, we obtain $$| D\phi(\xi x) |^2 \leqslant \sum_{\gamma \in \Gamma} [ (D^\vee k_{-\mu}^0) * (D^\vee k_{-\mu}^0)^* ](x^{-1} \xi^{-1} \gamma \xi x).$$ Because the support of our test function is uniformly bounded, Lemma [Lemma 15](#cusp-kernel){reference-type="ref" reference="cusp-kernel"} gives $$\label{Xphi-intermed} | D\phi(\xi x) |^2 \ll \| (D^\vee k_{-\mu}^0) * (D^\vee k_{-\mu}^0)^* \|_\infty\, a(x)^{2 \rho_\mathbb Q}.$$ It therefore remains to show that $$\label{Dkbd} \| (D^\vee k_{-\mu}^0) * (D^\vee k_{-\mu}^0)^* \|_\infty \ll_D (1+\|\mu\|)^{2n} \widetilde{\beta}_S(\mu).$$ We have $$\begin{aligned} (D^\vee k_{-\mu}^0) * (D^\vee k_{-\mu}^0)^* = (D^\vee k_{-\mu}^0) * (\overline{D} (k_{-\mu}^0)^* ).\end{aligned}$$ Applying [\[convolve\]](#convolve){reference-type="eqref" reference="convolve"} to this gives $$\begin{aligned} (D^\vee k_{-\mu}^0) * (D^\vee k_{-\mu}^0)^* & = D^\vee \overline{D} ( k_{-\mu}^0 * (k_{-\mu}^0)^* ) \\ & = D^\vee \overline{D} k_{-\mu},\end{aligned}$$ where $k_{-\mu} = k_{-\mu}^0 * (k_{-\mu}^0)^*$. Lemma [Lemma 13](#k-nu-e){reference-type="ref" reference="k-nu-e"} gives $\| D^\vee \overline{D} k_{-\mu} \|_\infty \ll_D (1 + \| \mu \|)^{2n} \widetilde{\beta}_S(\mu)$. This implies ([\[Dkbd\]](#Dkbd){reference-type="ref" reference="Dkbd"}), and completes the proof.
◻ ## Rapid decay estimates and control of $L^2$ mass For the rest of Section [6](#sec:upper-bd-cusp-spec){reference-type="ref" reference="sec:upper-bd-cusp-spec"}, we fix a finite set $\Xi \subset {\bf G}(\mathbb Q)$ and a Siegel set $\mathfrak{S}_{t,\omega}$ such that $\Xi \mathfrak{S}_{t,\omega}$ is a fundamental set for $\Gamma$, and recall the notation for truncations of $\Gamma \backslash G$ introduced at the end of Section [6.4](#sec:Siegel-sets){reference-type="ref" reference="sec:Siegel-sets"}. We show that a cusp form with spectral parameter $\mu$ has a positive proportion of its $L^2$ mass in $(\Gamma \backslash G)^{\leqslant T}$, provided $T \gg (1 + \| \mu \|)^{1 + \epsilon}$. To do this, we first prove a version of the standard rapid decay estimate for cusp forms which is uniform in the spectral parameter. The proof of the following lemma is adapted from Lemma I.2.10 of [@MW]. **Lemma 20**. *Let $\phi \in \mathcal{E}_\mu^{\rm cusp}(\Gamma\backslash G;\tau)$ satisfy $\| \phi \|_2 = 1$. Let $\alpha \in \Psi_\mathbb Q$ be a simple root. For any $N \geqslant 2$ and $\xi \in \Xi$, we have $$\phi( \xi x) \ll_N \widetilde{\beta}_S(\mu)^{1/2} ( 1 + \| \mu \|)^N a(x)^{ \rho_\mathbb Q- N \alpha}, \qquad x \in\mathfrak{S}_{t,\omega}.$$* *Proof.* As there are only finitely many choices for $\xi$, we will fix one throughout the proof. Let $\Theta = \Psi_\mathbb Q\setminus \{ \alpha \}$, and let ${\bf P}_\Theta={\bf U}_\Theta {\bf L}_\Theta$ be the associated maximal $\mathbb Q$-parabolic subgroup. We fix a sequence ${\bf V}_0 = \{ 1 \} < {\bf V}_1 < \cdots < {\bf V}_k = {\bf U}_\Theta$ of subgroups defined over $\mathbb Q$ such that for all $i$, ${\bf V}_i$ is normal in ${\bf U}_\Theta$ and ${\bf V}_i / {\bf V}_{i-1} \simeq \mathbb{G}_a$. We set $\Gamma_i = \xi^{-1} \Gamma \xi \cap V_i$, which is a lattice in $V_i$. We have $V_i / V_{i-1} \simeq \mathbb R$, and the image of $\Gamma_i$ in $V_i / V_{i-1}$ is a lattice. 
We choose a generator $\ell \in V_i / V_{i-1}$ for this lattice, and let $Y \in \text{Lie}(V_i) / \text{Lie}(V_{i-1})$ be such that $e^Y = \ell \in V_i / V_{i-1}$. For all $i$, we define $$\psi_i(x) = \int_{ \Gamma_i \backslash V_i} \phi( \xi u x) du, \quad x \in G.$$ We have $\psi_0(x) = \phi( \xi x)$, and $\psi_k = 0$ by the cuspidality of $\phi$. We may therefore bound $\phi$ by bounding the successive differences $\psi_i - \psi_{i-1}$, using a standard integration by parts argument. For each $x$, we may consider the function of a real variable $t$ given by $t \mapsto \psi_{i-1}( e^{tY} x)$. Because $\psi_{i-1}$ is left-invariant under $V_{i-1}$ and $\Gamma_i$, and $e^Y$ lies in $V_{i-1} \Gamma_i$, we see that $\psi_{i-1}( e^{tY} x)$ is in fact a function on $\mathbb R/ \mathbb Z$. For $\eta \in \mathbb Z$ we define $\psi_{i-1}^\eta$ to be the $\eta$-Fourier coefficient of $\psi_{i-1}( e^{tY} x)$, given by $$\psi_{i-1}^\eta(x) = \int_{ \mathbb R/ \mathbb Z} \int_{ \Gamma_{i-1} \backslash V_{i-1} } e^{- 2 \pi i \eta t} \phi( \xi u e^{tY} x ) du dt.$$ We have $\psi_{i-1}^0 = \psi_i$, and hence $$\label{psi-difference} \psi_{i-1} - \psi_i = \sum_{\eta \neq 0} \psi_{i-1}^\eta.$$ We now assume that $x \in \mathfrak{S}_{t,\omega}$, and estimate $\psi_{i-1}^\eta$ for $\eta \neq 0$ by integration by parts. 
If we define $Y'_x = \textup{Ad}(x^{-1}) Y$, then we have $$\psi_{i-1}^\eta(x) = \int_{ \mathbb R/ \mathbb Z} \int_{ \Gamma_{i-1} \backslash V_{i-1} } e^{- 2 \pi i \eta t} \phi( \xi u x e^{tY'_x} ) du dt,$$ and integrating by parts $N$ times with respect to $t$ gives $$\begin{aligned} \notag \psi_{i-1}^\eta(x) & = (2 \pi i \eta)^{-N} \int_{ \mathbb R/ \mathbb Z} \int_{ \Gamma_{i-1} \backslash V_{i-1} } e^{- 2 \pi i \eta t} ( Y_x^{'N} \phi)( \xi u x e^{tY'_x} ) du dt \\ \label{phi-tau} & = (2 \pi i \eta)^{-N} \int_{ \mathbb R/ \mathbb Z} \int_{ \Gamma_{i-1} \backslash V_{i-1} } e^{- 2 \pi i \eta t} ( Y_x^{'N} \phi)( \xi u e^{tY} x ) du dt.\end{aligned}$$ We shall need to use the fact that $Y'_x$ becomes small as $a(x)^\alpha$ grows. Let $\{X_j\}$ be a basis for $\mathfrak g$, and let $\{X_j^*\}$ be the dual basis. Then we may show in the same way as Moeglin--Waldspurger on [@MW p. 31] that we have $$X_j^*( Y'_x) \ll a(x)^{-\alpha}.$$ It follows from this that we may write $Y_x^{'N} = a(x)^{-N \alpha} D_x$, where $D_x \in D(G)$ depends on $x$, and has degree $N$ and bounded coefficients when written in a basis of monomials in $X_j$. We now apply Lemma [Lemma 19](#lemma:mod-growth){reference-type="ref" reference="lemma:mod-growth"} to bound $(D_x \phi)( \xi u e^{tY} x )$. We do not necessarily have $u e^{tY} x \in \mathfrak{S}_{t,\omega}$, but as we may assume that $u e^{tY}$ lies in a bounded subset of $U_\Theta$ we see that $u e^{tY} x$ lies in a larger Siegel set to which we may apply Lemma [Lemma 19](#lemma:mod-growth){reference-type="ref" reference="lemma:mod-growth"}.
We therefore have $$( Y_x^{'N} \phi)( \xi u e^{tY} x ) = a(x)^{-N \alpha} (D_x \phi)( \xi u e^{tY} x ) \ll (1+\|\mu\|)^N \widetilde{\beta}_S(\mu)^{1/2} a(x)^{ \rho_\mathbb Q- N \alpha}.$$ Applying this in [\[phi-tau\]](#phi-tau){reference-type="eqref" reference="phi-tau"} gives $$\psi_{i-1}^\eta(x) \ll_N \eta^{-N} (1+\|\mu\|)^N \widetilde{\beta}_S(\mu)^{1/2} a(x)^{ \rho_\mathbb Q- N \alpha}.$$ If $N \geqslant 2$, applying this in [\[psi-difference\]](#psi-difference){reference-type="eqref" reference="psi-difference"} and summing over $\eta$ gives $$\psi_{i-1}(x) - \psi_i(x) \ll_N (1+\|\mu\|)^N \widetilde{\beta}_S(\mu)^{1/2} a(x)^{ \rho_\mathbb Q- N \alpha}.$$ Summing this over $i$ completes the proof. ◻ We next use Lemma [Lemma 20](#lemma:rapid-decay){reference-type="ref" reference="lemma:rapid-decay"} to show that, when $x \in \Gamma \backslash G$ is above height $(1 + \| \mu \|)^{1 + \epsilon}$ in the cusp, $\phi$ is bounded by a constant independent of $\mu$. **Lemma 21**. *For any $c, \epsilon > 0$ there is a constant $C_\epsilon > 0$ such that the following holds. If $\phi \in \mathcal{E}^{\rm cusp}_\mu(\Gamma\backslash G;\tau)$ satisfies $\| \phi \|_2 = 1$, $T > C_\epsilon (1 + \| \mu \|)^{1 + \epsilon}$, and $g \in (\Gamma \backslash G)^{> T}$, then $| \phi(g) | < c$.* *Proof.* We know that $g = \xi x$ for some $\xi \in \Xi$, $x \in \mathfrak{S}_{t,\omega}$. Because $g \in (\Gamma \backslash G)^{> T}$, we must have $x \in \mathfrak{S}_{t,\omega}^{> T}$, and hence $a(x)^\alpha > T$ for some $\alpha \in \Psi_\mathbb Q$. Choose $\alpha \in \Psi_\mathbb Q$ so that $a(x)^\alpha$ is maximal. 
If we apply Lemma [Lemma 20](#lemma:rapid-decay){reference-type="ref" reference="lemma:rapid-decay"} with this $\alpha$, we get that for any $N \geqslant 2$ there is $C_N > 0$ such that $$|\phi( \xi x)| \leqslant C_N \widetilde{\beta}_S(\mu)^{1/2} ( 1 + \| \mu \|)^N a(x)^{ \rho_\mathbb Q- N \alpha}.$$ Because $a(x)^\alpha \geqslant a(x)^\beta$ for $\beta \in \Psi_\mathbb Q$, there is $M_1 > 0$ such that $a(x)^{M_1 \alpha} \geqslant a(x)^{ \rho_\mathbb Q}$. We then choose $M_2 > 0$ such that $(1 + \| \mu \|)^{M_2 \epsilon} \geqslant \widetilde{\beta}_S(\mu)^{1/2} ( 1 + \| \mu \|)^{M_1}$, and let $N = M_1 + M_2$. With this choice, we have $$\begin{aligned} |\phi( \xi x)| & \leqslant C_N \widetilde{\beta}_S(\mu)^{1/2} ( 1 + \| \mu \|)^N a(x)^{ \rho_\mathbb Q- N \alpha} \\ & \leqslant C_N a(x)^{ \rho_\mathbb Q- M_1 \alpha} \widetilde{\beta}_S(\mu)^{1/2} ( 1 + \| \mu \|)^{M_1} ( a(x)^\alpha / (1 + \| \mu \|) )^{-M_2} \\ & \leqslant C_N \widetilde{\beta}_S(\mu)^{1/2} ( 1 + \| \mu \|)^{M_1} C_\epsilon^{-M_2} (1 + \| \mu \|)^{-M_2 \epsilon} \\ & \leqslant C_N C_\epsilon^{-M_2}.\end{aligned}$$ Choosing $C_\epsilon$ sufficiently large completes the proof. ◻ Finally, we use the bounds we have established for $\phi$ in the cusp to prove the desired control on its $L^2$ mass. **Lemma 22**. *Given $0 < \kappa < 1$ and $\epsilon > 0$, there is $C_\epsilon > 0$ such that the following holds. If $\phi \in \mathcal{E}^{\rm cusp}_\mu(\Gamma\backslash G;\tau)$ satisfies $\| \phi \|_2 = 1$, and $T > C_\epsilon (1 + \| \mu \|)^{1 + \epsilon}$, then $$\int_{(\Gamma\backslash G)^{\leqslant T} } | \phi(g) |^2 dg \geqslant \kappa.$$* *Proof.* Since $\|\phi\|_2=1$, we need to show that $\int_{(\Gamma\backslash G)^{> T} } | \phi(g) |^2 dg$ can be made at most $1 - \kappa$.
If we apply Lemma [Lemma 21](#lemma:phi-pointwise){reference-type="ref" reference="lemma:phi-pointwise"} with the given $\epsilon$ and $c = 1$, we obtain $C_\epsilon > 0$ such that if $T > C_\epsilon (1 + \| \mu \|)^{1 + \epsilon}$ and $g \in (\Gamma \backslash G)^{> T}$, then $| \phi(g) | < 1$. It follows that $$\int_{(\Gamma\backslash G)^{> T} } | \phi(g) |^2 dg \leqslant \text{vol}( (\Gamma\backslash G)^{> T} ).$$ Because $\text{vol}( (\Gamma\backslash G)^{> T} ) \to 0$ as $T \to \infty$, making $C_\epsilon$ sufficiently large gives the result. ◻ ## Proof of Proposition [Proposition 14](#Weyl-upper-bd){reference-type="ref" reference="Weyl-upper-bd"} {#sec:proof-Prop-Weyl} We now return to Proposition [Proposition 14](#Weyl-upper-bd){reference-type="ref" reference="Weyl-upper-bd"} and its proof. We apply Proposition [Proposition 7](#test-fn){reference-type="ref" reference="test-fn"} to each simple factor $G_j$ of $G^0$, relative to the spectral parameters $-\nu_j$ obtained by restricting $-\nu$ to the $j$-th component of $i\mathfrak{a}$, and with $\tau$ there chosen to be $\tau_j^{-1}$. For each $j$, we let $k^0_{-\nu_j}$ be the test function appearing in the proof of Proposition [Proposition 7](#test-fn){reference-type="ref" reference="test-fn"}, and define $k^0_{-\nu} = \prod k^0_{-\nu_j}$ and $k_{-\nu} = k^0_{-\nu} * ( k^0_{-\nu} )^*$. We let $$\label{defK} K_{-\nu} (x,y)=\sum_{\gamma\in\Gamma}k_{-\nu} (x^{-1}\gamma y)\in C^\infty(G \times G,\tau \times\tau^{-1})$$ be the associated automorphic kernel function. Let $\phi_i$ be an orthonormal basis of eigenfunctions for $L^2_\text{cusp}( \Gamma \backslash G, \tau)$, with spectral parameters $\lambda_i \in \Lambda_\text{cusp}( \Gamma; \tau)$. After possibly scaling $k^0_{-\nu}$, Proposition [Proposition 7](#test-fn){reference-type="ref" reference="test-fn"} implies that if $\| \text{Im} \, \lambda_i - \nu \| \leqslant 1$, we have $R( k^0_{-\nu}) \phi_i = c_i \phi_i$ with $|c_i| \geqslant 1$. 
We may therefore apply Lemma [Lemma 18](#pre-trace){reference-type="ref" reference="pre-trace"} to the orthonormal set containing those $\phi_i$ with $\| \text{Im} \, \lambda_i - \nu \| \leqslant 1$ to obtain $$\label{eq:pre-trace-inequality} \sum_{\substack{i \\ \|{\rm Im}\,\lambda_i-\nu\| \leqslant 1}} | \phi_i(x) |^2 \leqslant K_{-\nu} (x, x).$$ We apply Lemma [Lemma 22](#L2-mass){reference-type="ref" reference="L2-mass"} with $\kappa = 1/2$ and any $\epsilon > 0$ to obtain $C_\epsilon > 0$ such that if $T = C_\epsilon ( 1 + \| \nu \|)^{1 + \epsilon}$, then $$\int_{(\Gamma\backslash G)^{\leqslant T} } | \phi_i(x) |^2 dx \geqslant 1/2$$ for all $i$ with $\| \text{Im} \, \lambda_i - \nu \| \leqslant 1$. Integrating the pre-trace inequality [\[eq:pre-trace-inequality\]](#eq:pre-trace-inequality){reference-type="eqref" reference="eq:pre-trace-inequality"} over the truncated domain $(\Gamma\backslash G)^{\leqslant T}$, we therefore have $$\sum_{\substack{\lambda\in \Lambda_{\rm cusp}(\Gamma;\tau)\\ \|{\rm Im}\,\lambda-\nu\| \leqslant 1}} \dim \mathcal{E}_\lambda^{\rm cusp}(\Gamma\backslash G,\tau) \leqslant 2 \int_{(\Gamma\backslash G)^{\leqslant T} } K_{-\nu} (x, x) dx.$$ Because $(\Gamma\backslash G)^{\leqslant T}$ is the image of $\Xi \mathfrak{S}_{t,\omega}^{\leqslant T}$ in $\Gamma \backslash G$, we have $$\int_{(\Gamma\backslash G)^{\leqslant T} } K_{-\nu} (x, x) dx \leqslant \sum_{\xi \in \Xi} \int_{ x \in \mathfrak{S}_{t,\omega}^{\leqslant T} } K_{-\nu} (\xi x, \xi x) dx.$$ Let $B$ be a fixed compact set that contains the support of $k_{-\nu}$ for all $\nu$.
Because $k_{-\nu} \ll_\tau \widetilde{\beta}_S(\nu) 1_B$ by Lemma [Lemma 13](#k-nu-e){reference-type="ref" reference="k-nu-e"}, we may apply Lemma [Lemma 15](#cusp-kernel){reference-type="ref" reference="cusp-kernel"} to obtain $$\int_{ x \in \mathfrak{S}_{t,\omega}^{\leqslant T} } K_{-\nu} (\xi x, \xi x) dx \ll_{\Gamma, \tau} \widetilde{\beta}_S(\nu) \int_{ x \in \mathfrak{S}_{t,\omega}^{\leqslant T} } a(x)^{2 \rho_\mathbb Q} dx.$$ Writing out the measure on $\mathfrak{S}_{t,\omega}^{\leqslant T}$ in Iwasawa coordinates, as in [\[Q-Iwasawa-measure1\]](#Q-Iwasawa-measure1){reference-type="eqref" reference="Q-Iwasawa-measure1"}, we see that $$\int_{ x \in \mathfrak{S}_{t,\omega}^{\leqslant T} } a(x)^{2 \rho_\mathbb Q} dx\asymp \int_{A_t^{\leqslant T}} da\asymp (\log T)^r \ll_\epsilon ( \log ( 3 + \| \nu \| ) )^r,$$ which completes the proof of Proposition [Proposition 14](#Weyl-upper-bd){reference-type="ref" reference="Weyl-upper-bd"}. # Mean square estimates for discrete automorphic periods {#sec:points} Let $G = \text{O} (n,m)$, with maximal compact $K = \text{O} (n) \times \text{O} (m)$. Let $\Gamma$ be a uniform lattice in $G$, with associated (compact) locally symmetric space $Y = \Gamma \backslash \mathbb{H}^{n,m}$. In this section we examine the asymptotic size of finite sums of point evaluations of Maass forms on $Y$. Our main result is Proposition [Proposition 26](#local-Weyl){reference-type="ref" reference="local-Weyl"}. Before proving this, we first recall some basic results on spherical representations and spectral parameters for the Zariski-disconnected group $G$ in Section [7.2](#sec:O-spherical){reference-type="ref" reference="sec:O-spherical"}. While the background material on spectral parameters will be stated for general $n$ and $m$, we shall only prove our results on point evaluations in the case $n > m$. We do this for simplicity, and because it is the only case we need in this paper. 
However, the results could be extended to the case $n = m$ with a small amount of extra work. ## Notation {#sec:O-spherical-notn} We let the real Lie algebras $\mathfrak g$, $\mathfrak k$, $\mathfrak a$, and $\mathfrak n$ have their usual meaning, and we let $\mathfrak{m}$ be the centralizer of $\mathfrak a$ in $\mathfrak k$. We let $\Sigma = \Sigma(\mathfrak a, \mathfrak g)$ be the roots of $\mathfrak a$ in $\mathfrak g$. We let $\Sigma^+$ be a choice of positive roots, with corresponding half-sum $\rho_\mathfrak a$. We let $W_\mathfrak a$ be the Weyl group of $\Sigma$, i.e., the group generated by the reflections in the root hyperplanes. We let $\widetilde{W}_\mathfrak a= N_K(\mathfrak a) / Z_K(\mathfrak a)$ be the extended Weyl group of $\Sigma$ corresponding to the disconnected group $G$. If $n > m$, then $\widetilde{W}_\mathfrak a$ is the same as $W_\mathfrak a$. If $n = m$, and we identify $\mathfrak a$ with $\mathbb R^m$ in the standard way, then $W_\mathfrak a$ is the group of linear transformations that permute the coordinates and change an even number of signs, and $\widetilde{W}_\mathfrak a$ is the group that permutes the coordinates and changes an arbitrary number of signs. Let $\mathfrak h_\mathfrak k$ be a Cartan subalgebra of $\mathfrak{m}$, so that $\mathfrak h= \mathfrak a+ \mathfrak h_\mathfrak k$ is a Cartan subalgebra of $\mathfrak g$. We let $\Delta = \Delta(\mathfrak h, \mathfrak g)$ be the roots of $\mathfrak h$ in $\mathfrak g$. We let $\Delta^+$ be a choice of positive roots that is compatible with $\Sigma^+$. We let $\rho$ be the half-sum of $\Delta^+$, and let $\rho_{\mathfrak{m}}$ be the half-sum of a system of positive roots for $\mathfrak{m}$ that is compatible with $\Delta^+$, so that $\rho = \rho_\mathfrak a+ \rho_{\mathfrak{m}}$. 
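To fix ideas (under the standard identification and choice of positivity, used here only for illustration), when $n > m$ the restricted root system $\Sigma$ is of type $B_m$: identifying $\mathfrak a$ with $\mathbb R^m$ as above, the roots are $\pm e_i \pm e_j$ ($i < j$) with multiplicity one and $\pm e_i$ with multiplicity $n - m$, so that $$\rho_\mathfrak a= \sum_{i=1}^m \Big( \frac{n+m}{2} - i \Big) e_i.$$ For $m = 1$ this recovers the familiar value $\rho_\mathfrak a= \frac{n-1}{2}$ for hyperbolic $n$-space.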
We let $W$ be the Weyl group of $\Delta$, which is generated by the reflections in the root hyperplanes, and let $\widetilde{W} = N_{G(\mathbb C)}(\mathfrak a_\mathbb C) / Z_{G(\mathbb C)}(\mathfrak a_\mathbb C)$ be the extended Weyl group corresponding to the disconnected group $G$. We have $W = \widetilde{W}$ if and only if $n+m$ is odd. We let $\mathcal U(\mathfrak g)$ denote the universal enveloping algebra of $\mathfrak g$, and $\mathcal Z(\mathfrak g)$ its center. We shall also need to consider the algebra $\mathcal Z(\mathfrak g)^G$ of $G$-invariants in $\mathcal Z(\mathfrak g)$, which may be strictly smaller than $\mathcal Z(\mathfrak g)$ because $G$ is not Zariski-connected. We have the Harish-Chandra isomorphism $\gamma_{{\rm HC},\mathfrak h}: \mathcal Z(\mathfrak g) \xrightarrow{\,\sim\,} S(\mathfrak h)^W$, where $S(\mathfrak h)$ is the symmetric algebra of $\mathfrak h$ with complex coefficients. ## Spherical representations of $\text{O} (n,m)$. {#sec:O-spherical} This section contains basic results on spherical representations of $G$. We will prove these results by passing between $G$ and its Zariski-connected subgroup $G^0 = \text{SO} (n,m)$, which has maximal compact subgroup $K^0 = \text{S}( \text{O} (n) \times \text{O} (m) )$. Let $\mathbb{D}(G/K)$ and $\mathbb{D}(G^0/K^0)$ be the rings of differential operators on $\mathbb{H}^{n,m}$ that are invariant under $G$ and $G^0$ respectively. We have the Harish-Chandra isomorphism $\gamma_{\text{HC},\mathfrak a} : \mathbb{D}(G^0/K^0) \to S(\mathfrak a)^{W_\mathfrak a}$ for $G^0$, see for instance [@GaVa Theorem 2.6.7]. Our first result establishes versions of the Harish-Chandra isomorphisms $\gamma_{\textup{HC},\mathfrak h}$ and $\gamma_{\textup{HC},\mathfrak a}$ for the disconnected group $G$. **Lemma 23**. 
*The map $\gamma_{\textup{HC},\mathfrak h}$ induces an isomorphism $\mathcal Z(\mathfrak g)^G \simeq S(\mathfrak h)^{\widetilde{W}}$, and $\gamma_{\textup{HC},\mathfrak a}$ induces an isomorphism $\mathbb{D}(G/K) \simeq S(\mathfrak a)^{\widetilde{W}_\mathfrak a}$.* *Proof.* We shall only prove this for $\gamma_{\textup{HC},\mathfrak a}$, as the case of $\gamma_{\textup{HC},\mathfrak h}$ is simpler. It will be helpful to explicate the relation between the algebras $\mathbb{D}(G/K)$ and $\mathbb{D}(G^0/K^0)$ using the commutative diagram $$\xymatrix{ \mathbb{D}(G/K) \ar[r] & \mathbb{D}(G^0/K^0) \\ \mathcal U(\mathfrak g)^K / \mathcal U(\mathfrak g)^K \cap \mathcal U(\mathfrak g) \mathfrak k\ar[r]\ar[u]^{\wr}& \ar[u]^{\wr} \mathcal U(\mathfrak g)^{K^0} / \mathcal U(\mathfrak g)^{K^0} \cap \mathcal U(\mathfrak g) \mathfrak k. }$$ The horizontal maps are the obvious inclusions. That the vertical maps are isomorphisms is well-known; see, for example, [@GaVa Prop. 1.7.2]. The right-hand vertical map is equivariant for the $K$-action, where $K$ acts on $\mathbb{D}(G^0/K^0)$ through its conjugation action on $G^0/K^0$. Moreover, the left-hand column is equal to the $K$-invariants in the right-hand column. Because $\mathbb{D}(G/K)$ is equal to the $K$-invariants in $\mathbb{D}(G^0/K^0)$, we may prove the lemma by understanding how $\gamma_{\textup{HC},\mathfrak a}$ interacts with conjugation by $K$, and to this end we recall how $\gamma_{\textup{HC},\mathfrak a}$ is defined [@GaVa Theorem 2.6.7]. 
If $D \in \mathbb{D}(G^0/K^0)$ is represented by an element $u_D \in \mathcal U(\mathfrak g)^{K^0}$, then $\gamma_{\textup{HC},\mathfrak a}(D)$ is defined by first choosing the unique $a_D \in S(\mathfrak a)$ such that $$u_D - a_D \in \mathfrak n\, \mathcal U(\mathfrak g) + \mathcal U(\mathfrak g) \mathfrak k,$$ then setting $\gamma_{\textup{HC},\mathfrak a}(D) = \rho^* a_D$, where $\rho^*$ is the automorphism of $S(\mathfrak a)$ that sends $X$ to $X + \rho_\mathfrak a(X)$ for every $X \in \mathfrak a$. In particular, every $D \in \mathbb{D}(G^0/K^0)$ is determined by its '$\mathfrak a$-part' $a_D$. We now consider the action of $K$ on $\mathbb{D}(G^0/K^0)$. To understand this action, it will be useful to choose an element $k \in K - K^0$ that preserves the Iwasawa decomposition of $\mathfrak g$. We note that such a $k$ exists, because any $k' \in K - K^0$ transforms the Iwasawa decomposition $\mathfrak g= \mathfrak n\oplus \mathfrak a\oplus \mathfrak k$ into one of the form $\mathfrak g= \textup{Ad}(k')\mathfrak n\oplus \textup{Ad}(k')\mathfrak a\oplus \mathfrak k$, and this new decomposition may be transformed back to the standard one by an element $k_0$ of $K^0$; we then take $k = k_0 k'$. The action of $\textup{Ad}(k)$ on $\mathfrak a$ is via an element $w \in \widetilde{W}_\mathfrak a$ that preserves the positive roots. Because $\textup{Ad}(k)$ preserves the Iwasawa decomposition, we have $a_{\textup{Ad}(k) D} = \textup{Ad}(k) a_D = wa_D$, and this implies that $\gamma_{\textup{HC},\mathfrak a}(\textup{Ad}(k)D) = w\gamma_{\textup{HC},\mathfrak a}(D)$. If $n > m$, then $W_\mathfrak a= \widetilde{W}_\mathfrak a$, and $w$ is equal to the identity because the Dynkin diagram of $\Sigma$ has no non-trivial automorphisms. This implies that $S(\mathfrak a)^{W_\mathfrak a} = S(\mathfrak a)^{\widetilde{W}_\mathfrak a}$ and $\mathbb{D}(G/K) = \mathbb{D}(G^0/K^0)$, which gives the lemma.
If $n = m$, then $w \in \widetilde{W}_\mathfrak a- W_\mathfrak a$ is nontrivial; since $\widetilde{W}_\mathfrak a$ is generated by $W_\mathfrak a$ together with $w$, the $K$-invariants in $\mathbb{D}(G^0/K^0)$ correspond under $\gamma_{\textup{HC},\mathfrak a}$ to $S(\mathfrak a)^{\widetilde{W}_\mathfrak a}$, and the lemma again follows. ◻ Lemma [Lemma 23](#O-Harish-Chandra){reference-type="ref" reference="O-Harish-Chandra"} allows us to parametrize irreducible spherical representations of $G$. (We shall technically work with $(\mathfrak g,K)$-modules, but refer to them as representations by slight abuse of terminology.) If $(\pi, V)$ is such a representation, we note that $\dim V^K = 1$, which follows from the commutativity of $\mathbb{D}(G/K)$ in the usual way. We may then use Lemma [Lemma 23](#O-Harish-Chandra){reference-type="ref" reference="O-Harish-Chandra"} to define the spectral parameter of $\pi$ as the element $\lambda \in \mathfrak a_\mathbb C^* / \widetilde{W}_\mathfrak a$ corresponding to the character by which $\mathbb{D}(G/K) \simeq S(\mathfrak a)^{\widetilde{W}_\mathfrak a}$ acts on $V^K$. The following result shows that $\pi$ is classified by its spectral parameter. **Proposition 24**. *For any $\lambda \in \mathfrak a_\mathbb C^* / \widetilde{W}_\mathfrak a$, there is a unique irreducible admissible spherical representation $\pi_\lambda$ of $G$ with that spectral parameter. The infinitesimal character of $\pi_\lambda$ is given by $\lambda + \rho_{\mathfrak{m}} \in \mathfrak h^*_\mathbb C/ \widetilde{W}$.* *Proof.* We begin with the uniqueness of $\pi_\lambda$. Let $(\pi_1, V_1)$ and $(\pi_2, V_2)$ be two irreducible spherical representations with the same spectral parameter. Let $v \in V_1^K \oplus V_2^K$ be a spherical vector that lies in neither $V_1^K$ nor $V_2^K$, and let $V$ be the subrepresentation it generates as a $(\mathfrak g,K)$-module. We claim that $V^K = \mathbb Cv$. Indeed, if $w \in V^K$, then there are $k_i \in K$ and $X_i \in \mathcal U(\mathfrak g)$ such that $$w = \sum_i X_i k_i v = \Big( \sum_i X_i \Big) v = Yv,$$ where $Y \in \mathbb{D}(G/K)$.
Moreover, $\mathbb{D}(G/K)$ acts by scalars on $V_1^K \oplus V_2^K$, since, by assumption, both summands share the same spectral parameter. This implies that $w \in \mathbb Cv$ as required. Finally, because $V^K = \mathbb Cv$, we see that $V$ is a nonzero submodule of $V_1 \oplus V_2$ that is not equal to $V_1$, $V_2$, or $V_1 \oplus V_2$. The projection of $V$ to each summand $V_i$ is nonzero (it contains the $V_i$-component of $v$), hence surjective by irreducibility; it is also injective, since its kernel is $V \cap V_{3-i}$, and if this equalled $V_{3-i}$ then $V^K$ would contain $V_{3-i}^K$ in addition to $\mathbb Cv$, contradicting $V^K = \mathbb Cv$. Therefore $V$ is the graph of an isomorphism $\pi_1 \simeq \pi_2$. For existence, we construct $\pi_\lambda$ by parabolic induction. Let $P$ be the minimal parabolic subgroup of $G^0$ defined over $\mathbb R$. We let $P = NAM$ be the Langlands decomposition of $P$, so that $A$ is the connected Lie group with Lie algebra $\mathfrak a$. Let $\lambda \in \mathfrak a_\mathbb C^*$, and let $e^\lambda$ be the corresponding character of $A$, which we extend to $P$. The unitarily induced representation $\text{Ind}_P^G \, e^\lambda$ has a unique irreducible spherical subquotient $\pi_\lambda$, and it may be checked that $\pi_\lambda$ has spectral parameter $\lambda$ as required. Finally, let $\gamma_\lambda:\mathcal Z(\mathfrak g)^G \rightarrow \mathbb C$ be the infinitesimal character of $\pi_\lambda$. Under the Harish-Chandra isomorphism $\gamma_{{\rm HC},\mathfrak h}$ of Lemma [Lemma 23](#O-Harish-Chandra){reference-type="ref" reference="O-Harish-Chandra"}, $\gamma_\lambda$ determines an element in $\mathfrak h_\mathbb C^*/\widetilde W$, which we must show can be represented by $\lambda+\rho_{\mathfrak{m}}$.
If we also let $\gamma_{{\rm HC},\mathfrak a} : \mathbb{D}(G/K) \xrightarrow{\,\sim\,} S(\mathfrak a)^{\widetilde{W}_\mathfrak a}$ be as in Lemma [Lemma 23](#O-Harish-Chandra){reference-type="ref" reference="O-Harish-Chandra"}, the spectral parameter $\lambda\in\mathfrak a_\mathbb C^*/\widetilde W_\mathfrak a$ of $\pi_\lambda$ satisfies $\chi_\lambda(D)=\lambda(\gamma_{{\rm HC},\mathfrak a}(D))$ for a uniquely determined algebra homomorphism $\chi_\lambda: \mathbb{D}(G/K)\rightarrow\mathbb C$, and $\gamma_\lambda$ is given by the composition $$\gamma_\lambda: \mathcal Z(\mathfrak g)^G \to \mathbb{D}(G/K) \xrightarrow{\,\chi_\lambda \,} \mathbb C.$$ We may compute this by comparison with $G^0$. We choose an element $\lambda' \in \mathfrak a_\mathbb C^* / W_\mathfrak a$ that lifts $\lambda$; then the associated algebra homomorphism $\chi_{\lambda'}: \mathbb{D}(G^0/K^0) \to \mathbb C$ extends $\chi_\lambda$. It follows that the restriction to $\mathcal Z(\mathfrak g)^G$ of the character $$\mathcal Z(\mathfrak g) \to \mathbb{D}(G^0/K^0) \xrightarrow{\,\chi_{\lambda'}\,} \mathbb C$$ coincides with $\gamma_\lambda$. Since this character of $\mathcal Z(\mathfrak g)$ corresponds to $\lambda' + \rho_{\mathfrak{m}} \in \mathfrak h^*_\mathbb C/ W$ under $\gamma_{\textup{HC},\mathfrak h}$, see e.g. the proof of Proposition 2.1 in [@Helgason2], it follows that $\gamma_\lambda$ corresponds to $\lambda + \rho_{\mathfrak{m}} \in \mathfrak h^*_\mathbb C/ \widetilde{W}$ under $\gamma_{\textup{HC},\mathfrak h}|_{\mathcal Z(\mathfrak g)^G}$. ◻ ## A second moment asymptotic for discrete periods We now return to the global setting. We continue to let $G = \text{O} (n,m)$ and $K = \text{O} (n) \times \text{O} (m)$. For a uniform lattice $\Gamma<G$ we denote by $Y=\Gamma\backslash \mathbb{H}^{n,m}$ the associated (compact) locally symmetric space. In the rest of this section we assume that $n > m$. 
This constraint ensures that objects such as spherical functions, spherical representations, and Maass forms behave in essentially the same way for $G$, $G^0$, and the connected Lie group $G^{00} = \text{SO} (n,m)^0$ with maximal compact subgroup $K^{00} = \text{SO} (n) \times \text{SO} (m)$. We explain this in the next paragraph. Recall from §[7.1](#sec:O-spherical-notn){reference-type="ref" reference="sec:O-spherical-notn"} that $W_{\mathfrak{a}}=\widetilde{W}_{\mathfrak{a}}$ whenever $n>m$. Under this condition, a function on $\mathbb{H}^{n,m}=G/K=G^{00}/K^{00}$ is $K$-invariant if and only if it is $K^{00}$-invariant. Moreover, combining the Weyl group identity with Lemma [Lemma 23](#O-Harish-Chandra){reference-type="ref" reference="O-Harish-Chandra"}, we deduce in this case that $\mathbb{D}(G/K) = \mathbb{D}(G^0/K^0)$, and these are both equal to $\mathbb{D}(G^{00}/K^{00})$ because $G^0$ lies in the Harish-Chandra class. It follows that the notion of a joint eigenfunction, and its spectral parameter, on $\mathbb{H}^{n,m}$ is the same regardless of whether the latter is viewed as $G/K$ or $G^{00}/K^{00}$. Likewise, the definitions of spherical functions, and the Harish-Chandra transform, are insensitive to the choice of presentation of $\mathbb{H}^{n,m}$ as a symmetric space for $G$ or $G^{00}$. Finally, we observe that occurrences of $\pi_\lambda$ in $L^2(\Gamma \backslash G)$ are equivalent to Maass forms on $Y=\Gamma \backslash \mathbb{H}^{n,m}$ with spectral parameter $\lambda$, and that if $\pi_\lambda$ is unitarizable, then $\text{Re} \, \lambda$ lies in the convex hull of the set $W_\mathfrak a\rho_\mathfrak a$. In preparation for the proof of Proposition [Proposition 26](#local-Weyl){reference-type="ref" reference="local-Weyl"}, we establish the following result. **Lemma 25**.
*For any $x\in Y$, $\nu\in i\mathfrak{a}^*$, and orthonormal basis $\{f_\mu\}$ of $L^2(Y)$ consisting of Maass forms $f_\mu$ of spectral parameter $\mu$, we have $$\sum_{\|{\rm Im}\,\mu-\nu\|\leqslant 1}|f_\mu(x)|^2\ll \tilde\beta_S(\nu),$$ the implied constant depending on $\Gamma$. In particular, dropping all but one term, we obtain the local (or trivial) bound $\|f_\nu\|_\infty\ll \tilde\beta_S(\nu)^{1/2}$.* *Proof.* We may assume without loss of generality that $\Gamma$ is torsion-free, by using Selberg's lemma to pass to a torsion-free subgroup of finite index if necessary. We let $k_{-\nu}$ be the test function on $G^{00} = \text{SO} (n,m)^0$ defined in Proposition [Proposition 7](#test-fn){reference-type="ref" reference="test-fn"}, which may be viewed as a $K$-biinvariant function on $G$ by the remarks preceding the statement of the lemma. We spectrally expand the associated automorphic kernel function $K_{-\nu}$. Since $Y$ is compact, there is no continuous spectrum, and we obtain $$\label{eq:compact-expansion} \sum_{\mu}\widehat{k}_{-\nu}(-\mu)f_\mu(x)\overline{f_\mu(y)}=K_{-\nu}(x,y)=\sum_{\gamma\in\Gamma}k_{-\nu}(x^{-1}\gamma y).$$ We now take $x=y$ so that $$\sum_{\mu}\widehat{k}_{-\nu}(-\mu)|f_\mu(x)|^2=\sum_{\gamma\in\Gamma}k_{-\nu}(x^{-1}\gamma x).$$ If the support of $k_{-\nu}$ is taken small enough, only $\gamma = e$ contributes to the sum (since $\Gamma$ is torsion-free), so that $$\label{simple-pre-trace} \sum_{\mu}\widehat{k}_{-\nu}(-\mu)|f_\mu(x)|^2= k_{-\nu}(e).$$ We then apply the estimate $k_{-\nu}(e)\ll \tilde\beta_S(\nu)$ of Lemma [Lemma 13](#k-nu-e){reference-type="ref" reference="k-nu-e"} to bound the right-hand side.
For the left-hand side of [\[simple-pre-trace\]](#simple-pre-trace){reference-type="eqref" reference="simple-pre-trace"}, we apply Property [\[1\]](#1){reference-type="eqref" reference="1"} of Proposition [Proposition 7](#test-fn){reference-type="ref" reference="test-fn"}, which allows us to drop all terms but those for which $\|{\rm Im}\,\mu-\nu\|\leqslant 1$. For the remaining terms, we use Property [\[3\]](#3){reference-type="eqref" reference="3"} of Proposition [Proposition 7](#test-fn){reference-type="ref" reference="test-fn"} to find $$c\sum_{\|{\rm Im}\,\mu-\nu\|\leqslant 1}|f_\mu(x)|^2\leqslant \sum_{\mu}\widehat{k}_{-\nu}(-\mu)|f_\mu(x)|^2.$$ Putting these bounds together establishes the lemma. ◻ We now complement Lemma [Lemma 25](#upper-local-Weyl){reference-type="ref" reference="upper-local-Weyl"} by giving (upper and) lower bounds for the mean square of a finite discrete period on $Y$. We retain the same notations and assumptions as above. **Proposition 26**. *There is $Q\geqslant 1$ such that for any finite collection of points $x_1,\ldots ,x_h\in Y$, together with non-zero weights $c_i\in \mathbb C$, any spectral parameter $\nu\in i\mathfrak a^*$ of large enough norm depending on $x_i$, and any orthonormal basis $\{f_\mu\}$ of $L^2(Y)$ consisting of Maass forms $f_\mu$ of spectral parameter $\mu$, we have $$\label{cor-cmpt-packet} \sum_{\|{\rm Im}\,\mu-\nu\|\leqslant Q}\bigg|\sum_{i=1}^h c_i f_\mu(x_i)\bigg|^2 \asymp \widetilde{\beta}_S(\nu).$$ The implied constant in the lower bound depends only on $\Gamma$ and the weights $c_i$, while in the upper bound it depends additionally on $Q$.* *Proof.* It suffices to prove the lower bound, since the upper bound follows from Lemma [Lemma 25](#upper-local-Weyl){reference-type="ref" reference="upper-local-Weyl"}. We set up the proof as for Lemma [Lemma 25](#upper-local-Weyl){reference-type="ref" reference="upper-local-Weyl"}. 
We may therefore take as our starting point the pre-trace formula [\[eq:compact-expansion\]](#eq:compact-expansion){reference-type="eqref" reference="eq:compact-expansion"}. We begin with an analysis of the spectral side of [\[eq:compact-expansion\]](#eq:compact-expansion){reference-type="eqref" reference="eq:compact-expansion"}, for arbitrary $x,y\in Y$. In contrast to the proof of Lemma [Lemma 25](#upper-local-Weyl){reference-type="ref" reference="upper-local-Weyl"}, we shall need to show that the spectral summation is centered about $\nu$, up to a negligible error. We claim that for any $Q>1$, we have $$\label{2show} \sum_{\|{\rm Im}\, \mu-\nu\|>Q}\widehat{k}_{-\nu}(-\mu)f_\mu(x)\overline{f_\mu(y)}\ll_A Q^{-A} \widetilde{\beta}_S(\nu).$$ For this, we cover the region $\{\lambda\in i\mathfrak{a}^*: \|\lambda-\nu\|>Q\}$ by the union of unit balls centered at points $\lambda_n$. From the rapid decay estimate [\[4\]](#4){reference-type="eqref" reference="4"} from Proposition [Proposition 7](#test-fn){reference-type="ref" reference="test-fn"}, the Cauchy--Schwarz inequality, and Lemma [Lemma 25](#upper-local-Weyl){reference-type="ref" reference="upper-local-Weyl"} we get $$\begin{aligned} \sum_{\|{\rm Im}\,\mu-\nu\|>Q}\widehat{k}_{-\nu}(-\mu)&f_\mu(x)\overline{f_\mu(y)}\\ &\ll_A \sum_n \|\lambda_n-\nu\|^{-A}\sum_{\|{\rm Im}\,\mu-\lambda_n\|\leqslant 1}|f_\mu(x)f_\mu(y)|\\ &\leqslant \sum_n \|\lambda_n-\nu\|^{-A}\bigg(\sum_{\|{\rm Im}\,\mu-\lambda_n\|\leqslant 1}|f_\mu(x)|^2\bigg)^{1/2}\bigg(\sum_{\|{\rm Im}\,\mu-\lambda_n\|\leqslant 1}|f_\mu(y)|^2\bigg)^{1/2}\\ &\ll \sum_n \|\lambda_n-\nu\|^{-A}\tilde\beta_S(\lambda_n).\end{aligned}$$ Inserting Lemma [Lemma 9](#c-bound){reference-type="ref" reference="c-bound"}, we find $$\sum_{\|{\rm Im}\,\mu-\nu\|>Q}\widehat{k}_{-\nu}(-\mu)f_\mu(x)\overline{f_\mu(y)}\ll_A \tilde\beta_S(\nu)\sum_n\|\lambda_n-\nu\|^{-A}\ll_A Q^{-A}\tilde\beta_S(\nu),$$ which is [\[2show\]](#2show){reference-type="eqref" reference="2show"}. 
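The final sum over the covering points can be made explicit by comparison with an integral; assuming the centers $\lambda_n$ are chosen $1$-separated, with $r = \dim \mathfrak a$ and any fixed $A > r$ we have $$\sum_n \|\lambda_n-\nu\|^{-A} \ll \int_{\substack{\lambda \in i\mathfrak{a}^* \\ \|\lambda - \nu\| \geqslant Q - 1}} \|\lambda - \nu\|^{-A} \, d\lambda \ll_A Q^{r-A},$$ and since $A$ may be taken arbitrarily large, the loss of $r$ in the exponent is harmless.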
By expanding the square, [\[2show\]](#2show){reference-type="eqref" reference="2show"} implies that $$\label{spectral-trunc} \sum_{\|{\rm Im}\,\mu-\nu\|>Q} \widehat{k}_{-\nu}(-\mu) \bigg|\sum_{i=1}^h c_i f_\mu(x_i)\bigg|^2 \ll_A Q^{-A}\tilde\beta_S(\nu).$$ We now move to the geometric side of [\[eq:compact-expansion\]](#eq:compact-expansion){reference-type="eqref" reference="eq:compact-expansion"}. Recall the bi-$K$-invariant distance function $d:G\rightarrow\mathbb R$ from §[3.1](#sec:gp-decomp){reference-type="ref" reference="sec:gp-decomp"}. For two points $x=\Gamma gK$ and $y=\Gamma hK$ on $Y$, we put $d_Y(x,y)=\min_{\gamma\in\Gamma}d(g,\gamma h)$. We begin by showing that for any $x, y \in Y$ we have $$\label{eq:x-y} \sum_{\mu}\widehat{k}_{-\nu}(-\mu)f_\mu(x)\overline{f_\mu(y)} \ll \big(1+\|\nu\| d_Y(x,y)\big)^{-1/2} \widetilde{\beta}_S(\nu).$$ We may restrict the geometric side of [\[eq:compact-expansion\]](#eq:compact-expansion){reference-type="eqref" reference="eq:compact-expansion"} to those $\gamma \in \Gamma$ with $x^{-1} \gamma y \in \text{supp}(k_{-\nu})$, and this is a finite set whose size depends only on $\Gamma$. For each of these $\gamma$ we write $x^{-1} \gamma y = k_1 e^H k_2$, where $H \in \mathfrak a$ satisfies $\| H \| = d(x, \gamma y)$. Applying the bound of Lemma [Lemma 11](#knu-decay){reference-type="ref" reference="knu-decay"}, and using $d(x,\gamma y)\geqslant d_Y(x,y)$, we find $$k_{-\nu}(x^{-1} \gamma y) \ll \big(1+\|\nu\| \| H \| \big)^{-1/2} \widetilde{\beta}_S(\nu) \leqslant \big(1+\|\nu\| d_Y(x,y)\big)^{-1/2} \widetilde{\beta}_S(\nu).$$ Applying this bound to each term of the geometric side in [\[eq:compact-expansion\]](#eq:compact-expansion){reference-type="eqref" reference="eq:compact-expansion"} gives [\[eq:x-y\]](#eq:x-y){reference-type="eqref" reference="eq:x-y"}. We next consider the case when $x=y$.
We split the geometric side of [\[eq:compact-expansion\]](#eq:compact-expansion){reference-type="eqref" reference="eq:compact-expansion"} into those $\gamma \in \Gamma$ that fix $x$, and the complement, which gives $$\sum_{\mu}\widehat{k}_{-\nu}(-\mu)|f_\mu(x)|^2= |\text{Stab}_\Gamma(x)| k_{-\nu}(e) + \sum_{ \gamma \in \Gamma - \text{Stab}_\Gamma(x)} k_{-\nu}( x^{-1} \gamma x).$$ As in the case where $x \neq y$, the sum on the right-hand side is over a finite set, and each term may be bounded by $O_x( \| \nu \|^{-1/2} \widetilde{\beta}_S(\nu))$ so that $$\sum_{\mu}\widehat{k}_{-\nu}(-\mu)|f_\mu(x)|^2 \asymp k_{-\nu}(e) + O_x( \| \nu \|^{-1/2} \widetilde{\beta}_S(\nu)).$$ Combining this with [\[eq:x-y\]](#eq:x-y){reference-type="eqref" reference="eq:x-y"} gives $$\label{geo-main} \sum_{\mu} \widehat{k}_{-\nu}(-\mu) \bigg|\sum_{i=1}^h c_i f_\mu(x_i)\bigg|^2 \asymp k_{-\nu}(e) + O_{x_i}( \| \nu \|^{-1/2} \widetilde{\beta}_S(\nu)).$$ Combining the geometric estimate [\[geo-main\]](#geo-main){reference-type="eqref" reference="geo-main"} with the spectral truncation [\[spectral-trunc\]](#spectral-trunc){reference-type="eqref" reference="spectral-trunc"} gives $$\sum_{\|{\rm Im}\,\mu-\nu\| \leqslant Q} \widehat{k}_{-\nu}(-\mu) \bigg|\sum_{i=1}^h c_i f_\mu(x_i)\bigg|^2 \asymp k_{-\nu}(e)+ O_{x_i}( \| \nu \|^{-1/2} \widetilde{\beta}_S(\nu)) + O_{A}(Q^{-A}\widetilde{\beta}_S(\nu)).$$ Since $\widehat{k}_{-\nu}(-\mu)\ll 1$ we obtain $$\label{eq:pretrace-w-main} k_{-\nu}(e)+ O_{x_i}( \| \nu \|^{-1/2} \widetilde{\beta}_S(\nu)) + O_{A}(Q^{-A}\widetilde{\beta}_S(\nu)) \ll \sum_{\|{\rm Im}\,\mu-\nu\| \leqslant Q} \bigg|\sum_{i=1}^h c_i f_\mu(x_i)\bigg|^2.$$ From Properties [\[1\]](#1){reference-type="eqref" reference="1"} and [\[3\]](#3){reference-type="eqref" reference="3"} of Proposition [Proposition 7](#test-fn){reference-type="ref" reference="test-fn"} we have $$k_{-\nu}(e)\geqslant c \int_{\substack{\mu \in i\mathfrak{a}^*\\\|\mu + \nu\|\leqslant 1}}|{\bf c}(\mu)|^{-2} d\mu.$$ 
We may apply Lemma [Lemma 8](#lemma:c-bd){reference-type="ref" reference="lemma:c-bd"} to show that the last integral is $\asymp \widetilde{\beta}_S(\nu)$. Fixing $A>1$ and taking $Q>1$ large enough, and also taking $\nu$ large enough depending on the $x_i$, we can make the error term in [\[eq:pretrace-w-main\]](#eq:pretrace-w-main){reference-type="eqref" reference="eq:pretrace-w-main"} smaller by a constant factor than the main term, which gives [\[cor-cmpt-packet\]](#cor-cmpt-packet){reference-type="eqref" reference="cor-cmpt-packet"}. ◻ # Review of the theta correspondence {#sec:theta-review} In this section, we review several properties of the local and global theta correspondence which will be useful in the proofs of our main theorems. ## Type I algebraic reductive dual pairs {#Type1} Let $k$ be a field of characteristic zero. Let $W$ denote a symplectic space over $k$, with symplectic form $\langle\,,\,\rangle_W$. Then $W$ admits a natural left-action by $\text{GL} (W)$ and we denote by ${\bf G}'=\text{Sp} (W)$ the associated symplectic group. Let $V$ be a non-degenerate quadratic space over $k$, and let ${\bf G}=\text{O} (V)$. The tensor product $\mathcal{W}=W\otimes V$ is naturally given the structure of a symplectic space by the rule $\langle w_1\otimes v_1,w_2\otimes v_2\rangle_{\mathcal{W}}=\langle w_1,w_2\rangle_W\langle v_1, v_2\rangle_V$. We again let $\text{GL} (\mathcal{W})$ act on the left, writing $\text{Sp} (\mathcal{W})$ for the symplectic group consisting of those automorphisms preserving $\langle \,,\,\rangle_{\mathcal{W}}$. Let $W=U\oplus U^*$ be a complete polarization of $W$ and put $\mathcal{X}=U\otimes V$ and $\mathcal{Y}=U^*\otimes V$. 
With respect to the decomposition $\mathcal{W}=\mathcal{X}\oplus\mathcal{Y}$, we may view the elements in $\text{Sp} (\mathcal{W})$ as $\langle\,,\,\rangle_{\mathcal{W}}$-preserving invertible matrices $\left(\begin{smallmatrix}a & b\\ c & d\end{smallmatrix}\right)$, where $a\in {\rm End}(\mathcal{X})$, $b\in \text{Hom}(\mathcal{Y},\mathcal{X})$, $c\in \text{Hom}(\mathcal{X},\mathcal{Y})$, and $d\in {\rm End}(\mathcal{Y})$. Let $\mathcal{P}_{\rm Siegel}$ be the Siegel parabolic of $\text{Sp} (\mathcal{W})$, the stabilizer of $\mathcal{X}$. Thus $\mathcal{P}_{\rm Siegel}$ corresponds to the condition $c=0$ in the above coordinates. Let $\mathcal{P}_{\rm Siegel}=\mathcal{M}\mathcal{N}$ be the standard Levi decomposition, where $\mathcal{M}$ is the Levi subgroup consisting of elements in $\text{Sp} (\mathcal{W})$ which preserve the decomposition $\mathcal{X}\oplus\mathcal{Y}$ and $\mathcal{N}$ is the unipotent radical of $\mathcal{P}_{\rm Siegel}$. In the above coordinates we have $$\mathcal{M}=\left\{m(a)=\begin{pmatrix}a & \\ & a^\vee \end{pmatrix}: a \in \text{GL} (\mathcal{X})\right\},\qquad \mathcal{N}=\left\{n(b)=\begin{pmatrix}1 & b\\ & 1 \end{pmatrix}: b \in \text{Hom}^+(\mathcal{Y},\mathcal{X}) \right\}.$$ Here, $\text{Hom}^+(\mathcal{Y},\mathcal{X})$ denotes the subspace of symmetric homomorphisms in $\text{Hom}(\mathcal{Y},\mathcal{X})$, i.e those satisfying $\langle by_1, y_2 \rangle_{\mathcal{W}} = \langle by_2, y_1 \rangle_{\mathcal{W}}$ for all $y_1, y_2 \in \mathcal{Y}$, and $a^\vee\in\text{GL} (\mathcal{Y})$ is uniquely determined by the relation $\langle ax,a^\vee y\rangle_{\mathcal{W}}=\langle x,y\rangle_{\mathcal{W}}$ for all $x\in\mathcal{X}$, $y\in\mathcal{Y}$. If we use $\langle\,,\,\rangle_{\mathcal{W}}$ to identify $\mathcal{Y}$ with $\mathcal{X}^*$, then $\text{Hom}^+(\mathcal{Y},\mathcal{X})$ corresponds to the symmetric maps in $\text{Hom}(\mathcal{X}^*,\mathcal{X})$, and $a^\vee$ corresponds to ${}^t a^{-1} \in \text{GL} (\mathcal{X}^*)$. 
We now recall the embeddings of ${\bf G}=\text{O} (V)$ and ${\bf G}'=\text{Sp} (W)$ in $\text{Sp} (\mathcal{W})$. The group ${\bf G}$ acts on $\mathcal{W}$, sending $w\otimes v$ to $w\otimes (gv)$ and preserving the symplectic form $\langle\,,\,\rangle_{\mathcal{W}}$. This induces an embedding of ${\bf G}$ into $\text{Sp} (\mathcal{W})$, given by $g\mapsto m({\rm Id}_U\otimes g)$. Similarly, ${\bf G}'$ acts on $\mathcal{W}$, via $w\otimes v\mapsto (g'w)\otimes v$, which again clearly preserves $\langle\,,\,\rangle_{\mathcal{W}}$. To describe the resulting embedding into $\text{Sp} (\mathcal{W})$ explicitly, we first use the polarization $W=U\oplus U^*$, to write matrices in $\text{Sp} (W)$ in the form $\left(\begin{smallmatrix}A&B\\C&D\end{smallmatrix}\right)$, where $A\in {\rm End}(U)$, $B\in\text{Hom}(U^*,U)$, $C\in\text{Hom}(U,U^*)$, $D\in {\rm End}(U^*)$. Thus, ${\bf G}'\hookrightarrow\text{Sp} (\mathcal{W})$ is given by $$\begin{pmatrix}A & B\\ C & D \end{pmatrix}\mapsto \begin{pmatrix} A \otimes {\rm Id}_V& B\otimes {\rm Id}_V \\ C\otimes {\rm Id}_V & D\otimes {\rm Id}_V \end{pmatrix}.$$ Then ${\bf G}$ and ${\bf G}'$ form a Type I algebraic reductive dual pair in $\text{Sp} (\mathcal{W})$: for any extension $K/k$ the groups ${\bf G}(K)$ and ${\bf G}'(K)$ are mutual commutants in $\text{Sp} (\mathcal{W})(K)$, in the sense that $${\rm Cent}_{\text{Sp} (\mathcal{W})(K)}({\bf G}(K))={\bf G}'(K)\quad\textrm{and}\quad {\rm Cent}_{\text{Sp} (\mathcal{W})(K)}({\bf G}'(K))={\bf G}(K).$$ ## Local theta correspondence {#sec:Schroedinger} In this section, we shall take the field $k$ to be either $\mathbb R$ or a finite extension of $\mathbb Q_p$. Throughout we shall write $G={\bf G}(k)={\rm O}(V)(k)$ and $G'={\bf G}'(k)=\text{Sp} (W)(k)$. 
Let $\widetilde{\text{Sp} }(\mathcal{W})_k$ denote the metaplectic group[^5], defined as the unique non-split central extension of $\text{Sp} (\mathcal{W})(k)$ by $S^1$; thus $$1\rightarrow S^1\rightarrow \widetilde{\text{Sp} }(\mathcal{W})_k\rightarrow\text{Sp} (\mathcal{W})(k)\rightarrow 1.$$ We may write $\widetilde{\text{Sp} }(\mathcal{W})_k$ as the product $\text{Sp} (\mathcal{W})(k)\times S^1$ with the group multiplication given by a certain cocycle in $H^2(\text{Sp} (\mathcal{W})(k), S^1)$, explicated by means of the Leray invariant by Perrin [@Perrin] and Rao [@RR]. This cocycle (but not its class) depends on the choice of Lagrangian subspace $\mathcal{X}$ of $\mathcal{W}$, as well as the choice of a non-trivial additive character of $k$, which we fix below. Fix a non-trivial additive character $\psi$ of $k$ as follows. If $k$ is an extension of $\mathbb Q_p$, let $\psi_0$ denote the standard unramified character $x\mapsto e^{-2\pi ix}$ of $\mathbb Q_p$, and let $\psi$ be the pullback of $\psi_0$ by the trace map. If $k=\mathbb R$, we shall take $\psi(x)=e^{2\pi ix}$. Let ${\rm Heis}(\mathcal{W},k)$ denote the Heisenberg group of $\mathcal{W}$. When $k$ is non-archimedean, we let $\rho=\rho_\psi$ denote the unique, up to isomorphism, irreducible smooth representation of ${\rm Heis}(\mathcal{W},k)$ with central character $\psi$. Let $\omega=\omega_\psi$ be the corresponding Weil representation of $\widetilde{\text{Sp} }(\mathcal{W})_k$, acting on the space of $\rho$. The latter can be taken to be the space $\mathscr{S}(\mathcal{Y})$ of Schwartz--Bruhat functions on $\mathcal{Y}$ (the isomorphism class of $\rho$ and of $\omega$ is independent of the choice of the Lagrangian subspace $\mathcal{Y}$). Similarly, for $k=\mathbb R$, we let $\rho^{\rm Hilb}=\rho_\psi^{\rm Hilb}$ denote the unique, up to isomorphism, irreducible unitary representation of ${\rm Heis}(\mathcal{W},\mathbb R)$ with central character $\psi$.
Then the corresponding Weil representation $\omega^{\rm Hilb}=\omega_\psi^{\rm Hilb}$ of $\widetilde{\text{Sp} }(\mathcal{W})_\mathbb R$ acts on $L^2(\mathcal{Y})$. We shall often write simply $\omega^{\rm Hilb}$ for the space on which $\omega^{\rm Hilb}$ acts. For the archimedean theory, we prefer to work with the *algebrization* of $\omega^{\rm Hilb}$. To describe this, we first specify a maximal compact subgroup of $\text{Sp} (\mathcal{W})(\mathbb R)$. Let $J_W \in G'$ be a positive complex structure, that is, an element satisfying $J_W^2= - \text{Id}_W$ and $\langle J_W w, w \rangle > 0$ for all nonzero $w \in W$. We further assume that $J_W$ preserves the polarization $W = U \oplus U^*$, in the sense that $J_W U = U^*$ and $J_W U^* = U$. Choose a decomposition $V = V_+ \oplus V_-$ of $V$ into positive and negative definite subspaces. If we define $J \in \text{Sp} (\mathcal{W})(\mathbb R)$ by $$J = J_W \otimes \left(\begin{smallmatrix} \text{Id}_{V_+} & 0 \\ 0 & -\text{Id}_{V_-} \end{smallmatrix}\right),$$ then $J$ is a positive complex structure on $\mathcal{W}$ that preserves the polarization $\mathcal{W}=\mathcal{X}\oplus\mathcal{Y}$. If we define $\mathcal{K}$ to be the subgroup of $\text{Sp} (\mathcal{W})(\mathbb R)$ commuting with $J$, then $\mathcal{K}$ is a maximal compact subgroup. In fact, if we view $\mathcal{W}$ as a complex vector space via $J$, then $\mathcal{K}$ is equal to the unitary group of the positive definite Hermitian form $(w_1, w_2) \mapsto \langle Jw_1, w_2 \rangle + i \langle w_1, w_2 \rangle$. $\mathcal{K}$ has the property that $\mathcal{K} \cap G$ and $\mathcal{K} \cap G'$ are maximal compact subgroups of $G$ and $G'$, equal to ${\rm O}(V_+) \times {\rm O}(V_-)$, and the centralizer of $J_W$, respectively. 
If we let $\widetilde{\mathcal{K}}$ denote the preimage of $\mathcal{K}$ in $\widetilde{\text{Sp} }(\mathcal{W})_\mathbb R$, then $\widetilde{\mathcal{K}}$ is a maximal compact subgroup of $\widetilde{\text{Sp} }(\mathcal{W})_\mathbb R$. We let $\mathfrak{sp}_{\dim\mathcal{W}}$ denote the (real) Lie algebra of $\text{Sp} (\mathcal{W})(\mathbb R)$, so that the Lie algebra of $\widetilde{\text{Sp} }(\mathcal{W})_\mathbb R$ is $\mathfrak{sp}_{\dim\mathcal{W}} \oplus \mathbb R$. We write $\omega$ for the Harish-Chandra module associated with $\omega^{\rm Hilb}$, which is the admissible $(\mathfrak{sp}_{\dim\mathcal{W}} \oplus \mathbb R,\widetilde{\mathcal{K}})$-module consisting of the $\widetilde{\mathcal{K}}$-finite vectors in $\omega^{\rm Hilb}$. The space of $\omega$ can be identified with the space of all products of complex-valued polynomials on $\mathcal{Y}$ with the Gaussian $\mathcal{G}(y)=e^{-(\pi/2) \langle Jy, y \rangle}$ (see e.g. [@HST p. 388] for the constant appearing in front of $\langle Jy, y \rangle$), which we denote by $\mathscr{S}_{\rm alg}(\mathcal{Y})$; this space is dense in $\omega^{\rm Hilb}$. Besides $\mathscr{S}_{\rm alg}(\mathcal{Y})$, there is another realization of $\omega$, called the Fock model, which we denote by $\mathcal{F}$. To describe it, we consider $\mathcal{W}$ as a complex vector space using the complex structure $J$, and denote this space by $\mathcal{W}_J$. The space of $\mathcal{F}$ is then the space of holomorphic polynomials on $\mathcal{W}_J^*$. We may describe the action of $\widetilde{\mathcal{K}}$ in $\mathcal{F}$ by choosing a function $\det^{1/2}$ on $\widetilde{\mathcal{K}}$ whose square is the pullback of the determinant on $\mathcal{K} = {\rm U}( \mathcal{W}_J)$. Then, if $\widetilde{u} \in \widetilde{\mathcal{K}}$ projects to $u \in \mathcal{K}$, the action of $\widetilde{u}$ in $\mathcal{F}$ is given by $\widetilde{u} f(z) = \det^{1/2}( \widetilde{u}) f( {}^t u z)$. *Remark 6*.
We shall also need to consider the subspace $\omega^{\rm sm}$ of smooth vectors in $\omega^{\rm Hilb}$, realized as the space of Schwartz functions $\mathscr{S}(\mathcal{Y})$, even in the archimedean setting (see the proof of Corollary [Corollary 40](#distinction){reference-type="ref" reference="distinction"}). When endowed with its smooth (Fréchet) topology, $\omega^{\rm sm}$ is an irreducible admissible representation of $\widetilde{\text{Sp} }(\mathcal{W})_\mathbb R$. It is a classical fact, proved using the Gårding subspace, that $\omega^{\rm sm}$ is dense in $\omega^{\rm Hilb}$. Now, from the chain of inclusions $\omega\subset\omega^{\rm sm}\subset\omega^{\rm Hilb}$, and the density of $\omega$ in $\omega^{\rm Hilb}$, it follows that $\mathscr{S}_{\rm alg}(\mathcal{Y})$ is dense in $\mathscr{S}(\mathcal{Y})$, *when the latter is given the subspace topology induced by that of $L^2(\mathcal{Y})$*. We will prove in Section [9.8](#sec:lemma-schwartz-space){reference-type="ref" reference="sec:lemma-schwartz-space"} that $\mathscr{S}_{\rm alg}(\mathcal{Y})$ is in fact dense in $\mathscr{S}(\mathcal{Y})$, with its natural Schwartz topology. Let $\widetilde{G}$ and $\widetilde{G}'$ denote the respective inverse images in $\widetilde{\text{Sp} }(\mathcal{W})_k$ of $G$ and $G'$ under the surjective homomorphism $\widetilde{\text{Sp} }(\mathcal{W})_k\rightarrow\text{Sp} (\mathcal{W})(k)$. It follows from [@MVW Lemme II.5] that $\widetilde{G}$ and $\widetilde{G}'$ are mutual commutants inside $\widetilde{\text{Sp} }(\mathcal{W})_k$. For simplicity, $$\label{parity-assumption} \textit{we shall assume that $\dim V$ is even},$$ in which case $\widetilde{\text{Sp} }(\mathcal{W})_k$ splits over $GG'$ [@MVW Chapitre 3]. We shall take the explicit section $GG'\rightarrow \widetilde{GG'}=\widetilde{G}\widetilde{G}'$ of Kudla [@Kudla Theorem 3.1, case 1], which trivializes the restriction of the Perrin--Rao cocycle to $GG'$. 
Composing with the surjective multiplication map $G\times G'\rightarrow GG'$ we obtain $$\iota: G\times G'\rightarrow \widetilde{G}\widetilde{G}'\subset\widetilde{\text{Sp} }(\mathcal{W})_k.$$ For $k$ non-archimedean, we shall examine the restriction of $\omega$ to the product $\widetilde{G}\widetilde{G}'$, and then pull it back to $G\times G'$ via $\iota$. When $k=\mathbb R$, we shall rather look at the infinitesimal version of this restriction. For this, we let $\mathfrak{g},\mathfrak{g}'\subset\mathfrak{sp}_{\dim\mathcal{W}}$ denote the (real) Lie algebras of $G$ and $G'$, respectively. Pullback by $\iota$ then gives $\omega$ the structure of a $(\mathfrak g\oplus \mathfrak g', K \times K')$-module. We are now in a position to recall the local theta correspondence. In the next two paragraphs, in order to put the real and non-archimedean theory on equal footing, we shall make the following conventions: by a *representation* we mean a smooth admissible representation in the non-archimedean setting and an admissible $(\mathfrak{g},K)$-module in the archimedean setting. Here, $K$ is a choice of a maximal compact subgroup. In both cases we shall write ${\rm Hom}_G(\pi,\sigma)$ for the space of $G$ or $(\mathfrak{g},K)$-module homomorphisms. Finally, tensor products are algebraic tensor products for $(\mathfrak{g},K)$-modules. With these conventions in place, let $\pi$ be an irreducible representation of $G$. We say that $\pi$ *occurs in $\omega$* if ${\rm Hom}_G(\omega,\pi)\neq 0$. Given such a $\pi$, we set $\Gamma(\pi)=\omega/\omega[\pi]$, where $$\omega[\pi]=\bigcap_{f\in {\rm Hom}_G(\omega, \pi)} \ker (f).$$ Then $\Gamma(\pi)$ is a representation of $G \times G'$, equal to the maximal quotient of $\omega$ on which $G$ acts as a multiple of $\pi$ (the maximal $\pi$-isotypic quotient of $\omega$). In fact, $\Gamma(\pi)$ is of the form $\pi\otimes\Theta(\pi,W)$, where $\Theta(\pi,W)$ is a non-zero representation of $G'$; see [@MVW Ch. 
2, Lemma III.4] for $k$ non-archimedean and [@Howe1989 (2.5)] for $k=\mathbb R$. By theorems of Howe [@Howe1989] (for $k=\mathbb R$), Waldspurger [@Walds] (for $k$ non-archimedean of odd residual characteristic), and Gan--Takeda [@GanTakeda] (for arbitrary $k$ non-archimedean), $\Theta(\pi,W)$ admits a unique irreducible quotient -- a representation of $G'$ which we shall denote by $\theta(\pi,W)$. We call $\theta(\pi,W)$ the *theta lift* of $\pi$. For $\pi$ not appearing in $\omega$ we define its theta lift $\theta(\pi,W)$ to be zero. The above facts apply equally well with the roles of $G$ and $G'$, and $\pi$ and $\pi'$, reversed; this sets up a bijection between irreducible $\pi$ on $G$ and $\pi'$ on $G'$ appearing in $\omega$. In that case, we shall write $\theta(\pi',V)$ for the theta lift of $\pi'$ to $G=\text{O} (V)(k)$. For example, the statement of Lemma [Lemma 45](#lemma:Theta-map){reference-type="ref" reference="lemma:Theta-map"} uses the theta lifting in this direction (from $G'$ to $G$). ## Local unramified theta correspondence {#sec:unramified} We recall the local unramified theta correspondence. Let $k$ be a finite extension of $\mathbb Q_p$. We assume that $k/\mathbb Q_p$ is unramified, so that the character $\psi$ is also unramified. Let $V$ be a non-degenerate orthogonal space over $k$ of even dimension $d$. We say that $V$ is *unramified* if it admits a self-dual lattice. There are two equivalence classes of such $V$: 1. $V=V_{r,r}$ is the split orthogonal space, or 2. $V=V_0+V_{r,r}$, where $V_0$ is an unramified quadratic extension $K/k$ equipped with the norm form. We extend the definition of $V_0$ to case (1), by setting $V_0 = 0$. We define the character $\chi_V$ of $k^\times$ to be trivial in case (1) and the quadratic character associated with $K/k$ in case (2). We assume in this paragraph that $V$ is unramified and again write $G={\bf G}(k)=\text{O} (V)(k)$. Let $L\subset V$ be a self-dual lattice and denote by $K_0$ its stabilizer in $G$. 
Then $K_0$ is a hyperspecial maximal compact subgroup of $G$. We say a smooth irreducible representation $\pi$ of $G$ is *unramified* if $\pi^{K_0}\neq 0$. Now let $W$ be a symplectic space over $k$. Then $W$ is always unramified. As before we put $G'={\bf G}'(k)=\text{Sp} (W)(k)$. Let $L'$ be a self-dual lattice in $W$ and write $K_0'$ for its stabilizer in $G'$. Then $K_0'$ is a hyperspecial maximal compact subgroup of $G'$. We call a smooth irreducible representation $\pi'$ of $G'$ *unramified* if $\pi'^{K_0'}\neq 0$. **Theorem 27** (Howe [@Howe], Theorem 7.1.(b)). *With hypotheses and notations as above, let $\pi$ be a smooth irreducible representation of $G$ and let $\theta(\pi,W)$ be its theta lift to $G'$. If $\theta(\pi,W) \neq 0$, then $\pi$ is unramified if and only if $\theta(\pi,W)$ is unramified. The same is true for the lift from $G'$ to $G$.* In fact we may be more precise, by recalling the theta correspondence for unramified representations from [@Rallis][^6]. For this, we introduce the following notation. Let $P$ be a minimal parabolic subgroup of $G$, which is defined to be the stabilizer in $G$ of a full isotropic flag in $V$; see [@Rallis Remark 4.1] for the precise definition. We assume that this flag is compatible with the lattice $L$ as in [@KudlaCastle VI.3]. Similarly, we denote by $P'$ a minimal parabolic for $G'$ compatible with $L'$. The Levi of $P$ is given by $M = {\rm GL}_1^r \times \text{O} (V_0)$. If $\lambda = (\lambda_1,\ldots ,\lambda_r)$ is an $r$-tuple of unramified characters of $k^\times$, we may think of $\lambda$ as a character of $M$ by extending it to be trivial on $\text{O} (V_0)$. We may then define $\pi(\lambda)$ to be the unique smooth irreducible unramified representation of $G$ appearing as a subquotient of ${\rm Ind}_P^G \lambda$. 
Likewise, if $\lambda' = (\lambda_1',\ldots ,\lambda_m')$ is an $m$-tuple of unramified characters of $k^\times$, we may define $\pi'(\lambda')$ to be the unique smooth irreducible unramified representation of $G'$ appearing as a subquotient of ${\rm Ind}_{P'}^{G'} \lambda'$. **Theorem 28** (Rallis [@Rallis], Remark 4.4). *Let $r$ and $d$ be the rank and dimension of the unramified quadratic space $V$. Let $\chi_V$ and $V_0$ be defined as above. Let $2m$ be the dimension of the symplectic space $W$. Assume $r\geqslant m$.* *Let $\lambda' = (\lambda_1',\ldots ,\lambda_m')$ be an $m$-tuple of unramified characters of $k^\times$. Then $\theta(\pi'(\lambda'),V)$ is non-zero, and equal to $\pi( \lambda)$ where $$\lambda = (\chi_V\lambda_1',\ldots ,\chi_V\lambda_m', |\cdot |^{\frac{d}{2}-m-1},\ldots ,|\cdot |^{\frac12 \dim V_0}).$$ In particular, if $d>2m+2$ then $\theta(\pi'(\lambda'),V)$ is not tempered.* While we do not use this result in the proof of Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"}, it provides the justification for the remark about non-temperedness made after the statement of the theorem in Section [2.1](#sec:exceptional){reference-type="ref" reference="sec:exceptional"}. ## Tempered spherical transfer from archimedean $\text{O} (n,m)$ to $\text{Sp} _{2m}(\mathbb R)$ {#sec:arch-lift} We now let $k = \mathbb R$. Let $n\geqslant m\geqslant 1$ be integers with $n+m$ even, and let $1\leqslant m'\leqslant m$. We let $V$ have signature $(n,m)$, and let $W = W_{2m'}$ have dimension $2m'$. We make the standard choices of coordinates on $V$ and $W_{2m'}$, so that $G$ and $G'$ are identified with ${\rm O}(n,m)$ and $\text{Sp} _{2m'}(\mathbb R)$ respectively, with standard maximal compact subgroups $\text{O} (n) \times \text{O} (m)$ and ${\rm U}(m')$. Let $\pi$ be an irreducible spherical representation of ${\rm O}(n,m)$, and let $\theta(\pi,W_{2m'})$ be its theta lift to $\text{Sp} _{2m'}(\mathbb R)$. 
We cannot deduce from Theorem [Theorem 27](#Howe-unram){reference-type="ref" reference="Howe-unram"} (which applies to non-archimedean local fields, and unramified local data) that $\theta(\pi,W_{2m'})$ is itself spherical. Nevertheless, $\theta(\pi,W_{2m'})$, when non-zero, is not far from spherical, in that it admits an abelian ${\rm U}(m')$-type. **Lemma 29**. *Let $\pi$ be an irreducible spherical representation of ${\rm O}(n,m)$, where $n\geqslant m\geqslant 1$. Let $1\leqslant m'\leqslant m$ and suppose that $\theta(\pi,W_{2m'})\neq 0$. Then $\theta(\pi,W_{2m'})$ admits $\det^{(n-m)/2}$ as a ${\rm U}(m')$-type.* *Proof.* For brevity, during the proof we will denote the maximal compact subgroups of ${\rm O}(n,m)$ and $\text{Sp} _{2m'}(\mathbb R)$ by $K$ and $K'$. We shall work in the Fock model $\mathcal{F}$ of the oscillator representation. For an integer $d\geqslant 0$, let $\mathcal{F}^d$ denote the subspace of polynomials of degree equal to $d$, a subspace stable under the commuting actions of $K$ and $K'$. Let $\mathcal{H}\subset\mathcal{F}$ denote the space of joint harmonics for both $K$ and $K'$. Irreducible representations $\rho$ of $K$ and $\rho'$ of $K'$ are said to *correspond* if $\rho\otimes\rho'$ is a direct summand of the representation of $K\times K'$ on $\mathcal{H}$. This correspondence is a bijection between those irreducible representations of $K$ and $K'$ which appear in $\mathcal{H}$ [@Howe1989]. For a finite dimensional representation $\sigma$ of $K$ or $K'$ occurring in the Fock space, its *degree* is the minimal $d$ for which the $\sigma$-isotypic subspace of $\mathcal{F}^d$ is non-zero. The trivial representation of $K$ and the character $\det^{(n-m)/2}$ of $K'$ correspond to each other, in the above sense. 
Indeed, this can be extracted from [@Paul Proposition 2.10 (i)], upon recalling that, under the standard parametrization of the irreducible finite dimensional representations of $K$ and $K'$ by their highest weight, the trivial representation of ${\rm O}(n)\times {\rm O}(m)$ has parameter $$(\underbrace{0,\ldots ,0}_{n\; \text{times}};1)\otimes (\underbrace{0,\ldots ,0}_{m\; \text{times}};1)$$ and the character $\det^{(n-m)/2}$ of ${\rm U}(m')$ has parameter $$\underbrace{\left((n-m)/2,\ldots ,(n-m)/2\right)}_{m'\; \text{times}}.$$ Now the spherical representation $\pi$ of $\text{O} (n,m)$ has the trivial representation as a $K$-type; as the degree of the latter is 0 (see [@Paul (2.13)]), it is necessarily a $K$-type of minimal degree for $\pi$. A theorem of Howe [@Howe1989] then states that the $K'$-types which correspond to the $K$-types of minimal degree in $\pi$ must themselves appear in $\theta(\pi,W_{2m'})$. ◻ Motivated by Lemma [Lemma 29](#lem-K-type){reference-type="ref" reference="lem-K-type"}, we now recall the classification of irreducible representations of $\text{Sp} _{2m'}(\mathbb R)$ that admit abelian ${\rm U}(m')$-types. Let $\mathfrak h'_\mathbb C= \mathfrak a'_\mathbb C$ be the Cartan subalgebra of $\mathfrak{sp}_{2m'}$ defined in Section [2.2](#2examples){reference-type="ref" reference="2examples"}, and let $W'$ be the Weyl group. Let $P'$ be a Borel subgroup of ${\rm Sp}_{2m'}(\mathbb R)$, with Langlands decomposition $P' = N' A' M'$. We note that $M'$ is finite, and isomorphic to $(\mathbb Z/ 2)^{m'}$. If $(\pi',V')$ is a representation of $\text{Sp} _{2m'}(\mathbb R)$, and $v_\tau \in V'$ is a nonzero vector of abelian ${\rm U}(m')$-type $\tau^{-1}$, then the algebra $\mathbb{D}(\tau)$ defined in Section [3.2](#sec:tau-spherical){reference-type="ref" reference="sec:tau-spherical"} acts on $v_\tau$ by scalars, and this defines an element $\lambda \in \mathfrak a'^*_\mathbb C/ W'$ that we call the spectral parameter of $\pi'$. **Lemma 30**. 
*For any character $\tau$ of ${\rm U}(m')$ and $\lambda \in \mathfrak a'^*_\mathbb C/ W'$, there is a unique irreducible representation $\pi'_{\lambda, \tau}$ of ${\rm Sp}_{2m'}(\mathbb R)$ with spectral parameter $\lambda$ and containing $\tau^{-1}$ as a ${\rm U}(m')$-type. Moreover, the infinitesimal character of $\pi'_{\lambda, \tau}$ is also equal to $\lambda$.* *Proof.* The uniqueness of $\pi'_{\lambda, \tau}$ may be proved as in Proposition [Proposition 24](#O-spherical){reference-type="ref" reference="O-spherical"}. For existence, $\pi'_{\lambda, \tau}$ may be realized as the unique irreducible subquotient of $${\rm Ind}_{P'}^{{\rm Sp}_{2m'}(\mathbb R)}( e^\lambda \otimes \tau^{-1}|_{M'} )$$ containing $\tau^{-1}$ as a ${\rm U}(m')$-type. The assertion about the infinitesimal character follows as in the proof of [@Helgason2 Prop 2.1]. We briefly recall this argument, with the modifications needed to handle the $\tau$-spherical case considered here. For the duration of this proof only, we will work with the $KAN$ form of the Iwasawa decomposition, and let $\kappa(g) \in {\rm U}(m')$ and $H(g) \in \mathfrak a'$ be uniquely determined by $g \in \kappa(g) e^{H(g)} N'$. If we consider the function $$f(g) = e^{ (\lambda - \rho) H(g)} \tau^{-1}( \kappa(g) ),$$ then it follows from the formula for $\varphi_{\lambda, \tau}$ given on p. 332 of [@Shimeno], and Proposition [Proposition 2](#prop:sph-fn-2-vars){reference-type="ref" reference="prop:sph-fn-2-vars"}, that $\varphi_{\lambda, \tau}$ is obtained by averaging $f$ as $$\varphi_{\lambda, \tau}(g) = \int_{{\rm U}(m')} \tau(k) f(gk) dk.$$ We let $\mathcal Z(\mathfrak g')$ be the center of the universal enveloping algebra of $\mathfrak g'$, and let $\gamma_{\rm HC} : \mathcal Z(\mathfrak g') \to S( \mathfrak a')^{W'}$ be the Harish-Chandra homomorphism. 
We may now apply the argument from [@Helgason2 Prop 2.1] to show that $f$, and hence also $\varphi_{\lambda, \tau}$, behaves under $\mathcal Z(\mathfrak g')$ as $Z \varphi_{\lambda, \tau} = \gamma_{\rm HC}(Z)( \lambda) \varphi_{\lambda, \tau}$. (This calculation is simplified by the fact that ${\rm Sp}_{2m'}(\mathbb R)$ is split.) Equation [\[defn-sph-fn\]](#defn-sph-fn){reference-type="eqref" reference="defn-sph-fn"} implies that $\varphi_{\lambda, \tau}$ is a matrix coefficient of $\pi'_{\lambda, \tau}$, which completes the proof. ◻ To continue, we recall a result of Przebinda [@Prz], which gives a relation between the infinitesimal characters of the archimedean theta correspondence. Let $\mathfrak h$ be the Cartan subalgebra of $\mathfrak{o}(n,m)$ defined in Section [7.1](#sec:O-spherical-notn){reference-type="ref" reference="sec:O-spherical-notn"}, with subspace $\mathfrak a$, and let the Weyl groups $\widetilde{W}_\mathfrak a$ and $\widetilde{W}$ be as in that section. We let $e_1, \ldots, e_{(n+m)/2}$ be the standard basis of $\mathfrak h^*_\mathbb C$, and $e_1', \ldots, e_{m'}'$ be the standard basis of $\mathfrak h'^*_\mathbb C$. Since $m' \leqslant m$, we have an inclusion $\iota: \mathfrak h'^*_\mathbb C\to \mathfrak h^*_\mathbb C$ defined by $\iota(e_j') = e_j$. We define the element $\tau \in \mathfrak h^*_\mathbb C$ by $$\label{taudef} \tau = \sum_{j = m'+1}^{(n+m)/2} ( \tfrac{n+m}{2} - j)e_j = ( \overbrace{0, \ldots, 0}^{m'}, \tfrac{n+m}{2} - m'-1, \ldots, 0 ).$$ Let $\pi'=\theta(\pi,W_{2m'})$. We let $\gamma_\pi \in \mathfrak h^*_\mathbb C/ \widetilde{W}$ be the infinitesimal character of $\pi$, and if $\pi' \neq 0$ then we let $\gamma_{\pi'} \in \mathfrak h'^*_\mathbb C/ W'$ be its infinitesimal character. 
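For orientation, here is a concrete instance of the definition [\[taudef\]](#taudef){reference-type="eqref" reference="taudef"} (our own illustration, with small parameters):

```latex
% Take n = 5, m = 3, m' = 1, so that (n+m)/2 = 4. Then
\[
  \tau \;=\; \sum_{j=2}^{4} (4 - j)\, e_j \;=\; 2e_2 + e_3 \;=\; (0,\, 2,\, 1,\, 0),
\]
% which matches the displayed pattern: m' = 1 leading zero, then the entry
% (n+m)/2 - m' - 1 = 2, decreasing by one down to 0.
```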
If $\pi' \neq 0$, then [@Prz Theorem 1.19] states that $$\label{eq:inf-char} \gamma_\pi=\iota(\gamma_{\pi'})+ \tau.$$ We next show that the theta lift of a spherical representation $\pi$ vanishes if $m' < m$, under the assumption that $\pi$ is tempered. **Lemma 31**. *Let $\pi$ be an irreducible spherical tempered representation of ${\rm O}(n,m)$, where $n\geqslant m\geqslant 1$. For $m'<m$ we have $\theta(\pi,W_{2m'})=0$.* *Proof.* Let $\lambda \in i \mathfrak a^* / \widetilde{W}_\mathfrak a$ be the spectral parameter of $\pi$, as defined in Section [7.2](#sec:O-spherical){reference-type="ref" reference="sec:O-spherical"}. By Proposition [Proposition 24](#O-spherical){reference-type="ref" reference="O-spherical"}, we have $$\label{O-inf-char} \gamma_\pi = \lambda + \rho_{\mathfrak{m}} = \lambda + ( \overbrace{0, \ldots, 0}^{m}, \tfrac{n-m}{2} -1, \ldots, 0 ).$$ If we had $\gamma_\pi=\iota(\gamma_{\pi'})+ \tau$ for some $\gamma_{\pi'} \in \mathfrak h'^*_\mathbb C$, then $\gamma_\pi$ would have an entry whose real part has absolute value $\tfrac{n+m}{2} - m'-1$; if $m' < m$ this is incompatible with [\[O-inf-char\]](#O-inf-char){reference-type="eqref" reference="O-inf-char"} and the condition $\lambda \in i \mathfrak a^*$. ◻ We now remove the tempered assumption on $\pi$, and examine $\theta(\pi,W_{2m})$ for $\pi$ spherical on ${\rm O}(n,m)$. In the case under consideration, when $m' = m$, we recall that there is a natural identification of $\mathfrak a^*_\mathbb C/ \widetilde{W}_\mathfrak a$ and $\mathfrak a'^*_\mathbb C/ W'$. Moreover, if we identify $\mathfrak a^*$ with a subspace of $\mathfrak h^*$ via the splitting $\mathfrak h= \mathfrak a+ \mathfrak h_\mathfrak k$, we may choose our bases of $\mathfrak h^*_\mathbb C$ and $\mathfrak h'^*_\mathbb C$ so that $\iota( \mathfrak a'^*) = \mathfrak a^*$, and this induces the identification of $\mathfrak a^*_\mathbb C/ \widetilde{W}_\mathfrak a$ and $\mathfrak a'^*_\mathbb C/ W'$. **Lemma 32**. 
*Let $n\geqslant m\geqslant 1$. Let $\pi$ be the irreducible spherical representation of ${\rm O}(n,m)$ with spectral parameter $\lambda \in \mathfrak a^*_\mathbb C/ \widetilde{W}_\mathfrak a$, as in Proposition [Proposition 24](#O-spherical){reference-type="ref" reference="O-spherical"}. We view $\lambda$ as an element of $\mathfrak a'^*_\mathbb C/ W'$. Then if $\theta(\pi,W_{2m}) \neq 0$, it is the unique irreducible subquotient of $${\rm Ind}_{P'}^{{\rm Sp}_{2m}(\mathbb R)}( e^\lambda \otimes \det\! {}^{(n-m)/2}|_{M'} )$$ containing $\det^{(n-m)/2}$ as a ${\rm U}(m)$-type.* *Proof.* We know from Lemma [Lemma 29](#lem-K-type){reference-type="ref" reference="lem-K-type"} that $\theta(\pi,W_{2m})$ admits the character $\det^{(n-m)/2}$ as a ${\rm U}(m)$-type. Moreover, because $m' = m$, the definition [\[taudef\]](#taudef){reference-type="eqref" reference="taudef"} of $\tau$ implies that $\tau = \rho_{\mathfrak{m}}$. It then follows from the relations [\[eq:inf-char\]](#eq:inf-char){reference-type="eqref" reference="eq:inf-char"} and [\[O-inf-char\]](#O-inf-char){reference-type="eqref" reference="O-inf-char"} that $\gamma_{\pi'} = \lambda \in \mathfrak h'^*_\mathbb C/ W'$. A result of Zhu [@Zhu] states that an irreducible admissible representation of a classical real group such as $\text{Sp} _{2m}(\mathbb R)$, which admits an abelian $K$-type (where $K$ is a maximal compact subgroup), is uniquely determined by its infinitesimal character. Since $\theta(\pi,W_{2m})$ and the subquotient in the statement both admit an abelian ${\rm U}(m)$-type and both have infinitesimal character $\lambda$ (the latter by Lemma 30), they coincide. ◻ We finish this section by describing the lift of the trivial representation $1_n$ from ${\rm O}(n)$ to ${\rm Sp}_{2m'}(\mathbb R)$, where $n$ is even and $n \geqslant m'$. **Lemma 33**. *Let $n$ be even, with $n \geqslant m' \geqslant 1$. Let $1_n$ be the trivial representation of ${\rm O}(n)$. Then if $\theta(1_n,W_{2m'}) \neq 0$, it is the unique irreducible subquotient of $${\rm Ind}_{P'}^{{\rm Sp}_{2m'}(\mathbb R)}( e^\lambda \otimes \det\! 
{}^{n/2}|_{M'} )$$ containing $\det^{n/2}$ as a ${\rm U}(m')$-type, where $\lambda = (\tfrac{n}{2} - 1, \ldots, \tfrac{n}{2} - m') \in \mathfrak a'^*_\mathbb C$.* *Proof.* Assume that $\theta(1_n,W_{2m'}) \neq 0$. We may show as in Lemma [Lemma 29](#lem-K-type){reference-type="ref" reference="lem-K-type"} that $\theta(1_n,W_{2m'})$ admits $\det^{n/2}$ as a ${\rm U}(m')$-type. Moreover, the infinitesimal character of $1_n$ is equal to $(\tfrac{n}{2} - 1, \tfrac{n}{2} - 2, \ldots, 0)$, and combining this with [\[eq:inf-char\]](#eq:inf-char){reference-type="eqref" reference="eq:inf-char"} shows that the infinitesimal character of $\theta(1_n,W_{2m'})$ is $(\tfrac{n}{2} - 1, \ldots, \tfrac{n}{2} - m')$. The result now follows as in Lemma [Lemma 32](#lemma:1st-occ){reference-type="ref" reference="lemma:1st-occ"}. ◻ ## Review of the global theta correspondence {#sec:global-theta} We return to the setting of Section [8.1](#Type1){reference-type="ref" reference="Type1"} and now let $k=F$ be a totally real number field with ring of integers $\mathcal O_F$. ### Adelic structures {#sec:adelic-structures} We begin by fixing the restricted direct product structure of the adelic points of $\text{Sp} (\mathcal{W})$, where $\mathcal{W}=W\otimes V$. Let $S_V$ denote the set of finite places $v$ of $F$ at which $V$ is ramified together with all archimedean places. Fix a lattice $L$ in $V$ which is self-dual at all $v\notin S_V$. Similarly, let $L'$ be a lattice in $W$ which is self-dual at all finite places. Then the tensor product $\mathcal{L}=L'\otimes L$ is a lattice in $\mathcal{W}$. For each $v\notin S_V$ let $L_{v}=L\otimes \mathcal O_{F_v}$ and $L_{v}'=L'\otimes \mathcal O_{F_v}$ be the localizations at $v$ and write $\mathcal{K}_{0,v}$ for the stabilizer of $\mathcal{L}_{v}=L_{v}'\otimes L_{v}$ in $\text{Sp} (\mathcal{W})(F_v)$. Then $\mathcal{K}_{0,v}$ is a hyperspecial maximal compact subgroup of $\text{Sp} (\mathcal{W})(F_v)$. 
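To make the self-duality condition concrete (a standard example, stated here under our own choice of basis): if $e_1,\ldots,e_m,f_1,\ldots,f_m$ is a symplectic basis of $W$ with $\langle e_i, f_j\rangle = \delta_{ij}$, one may take

```latex
\[
  L' \;=\; \mathcal{O}_F e_1 \oplus \cdots \oplus \mathcal{O}_F e_m
        \oplus \mathcal{O}_F f_1 \oplus \cdots \oplus \mathcal{O}_F f_m,
\]
% which satisfies, at every finite place v,
\[
  (L'_v)^\vee \;:=\; \{\, w \in W_v \,:\, \langle w, L'_v \rangle \subset \mathcal{O}_{F_v} \,\}
  \;=\; L'_v,
\]
% i.e. this lattice is self-dual at all finite places.
```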
We define $\text{Sp} (\mathcal{W})(\mathbb A)$ as the restricted direct product of the groups $\text{Sp} (\mathcal{W})(F_v)$ with respect to $\{\mathcal{K}_{0,v}\}_{v\notin S_V}$. For $v\notin S_V$, the local metaplectic group splits over the maximal compact subgroup $\mathcal{K}_{0,v}$ (this fact is independent of the parity assumption [\[parity-assumption\]](#parity-assumption){reference-type="eqref" reference="parity-assumption"} on $V$). We may therefore identify the latter with a subgroup of $\widetilde{\text{Sp} }(\mathcal{W})_{F_v}$ and form the restricted direct product $\prod_v \widetilde{\text{Sp} }(\mathcal{W})_{F_v}$ with respect to the system $\{\mathcal{K}_{0,v}\}_{v\notin S_V}$. Unlike its symplectic counterpart, the adelic metaplectic group is *not* equal to $\prod_v \widetilde{\text{Sp} }(\mathcal{W})_{F_v}$ [@Howe §3]. There is, however, a unique non-split $S^1$-central extension of $\text{Sp} (\mathcal{W})(\mathbb A)$ which is a surjective image of $\prod_v \widetilde{\text{Sp} }(\mathcal{W})_{F_v}$; we shall denote this extension by $\widetilde{\text{Sp} }(\mathcal{W})_\mathbb A$. Similarly to the local case in §[8.2](#sec:Schroedinger){reference-type="ref" reference="sec:Schroedinger"}, we may write $\widetilde{\text{Sp} }(\mathcal{W})_\mathbb A= \text{Sp} (\mathcal{W})(\mathbb A) \times S^1$, with the multiplication given by the Perrin--Rao cocycle associated with the Lagrangian subspace $\mathcal{X}(\mathbb A)$ of $\mathcal{W}(\mathbb A)$ and the additive character $\psi$. The choice of $\mathcal{K}_{0,v}$ was made to facilitate the description of the restricted direct product structure of the adelic points of the subgroups ${\bf G}={\rm O}(V)$ and ${\bf G}'=\text{Sp} (W)$. Indeed, for every $v\notin S_V$ let $K_{0,v}$ and $K_{0,v}'$ denote the stabilizers of $L_{v}$ and $L_{v}'$, respectively; both are subgroups of $\mathcal{K}_{0,v}$. 
Then ${\bf G}(\mathbb A)$ and ${\bf G}'(\mathbb A)$ are the restricted direct products taken with respect to the systems of hyperspecial subgroups $\{K_{0,v}\}_{v\notin S_V}$ and $\{K_{0,v}'\}_{v\notin S_V}$. Let $\widetilde{{\bf G}}_\mathbb A$ and $\widetilde{{\bf G}}'_\mathbb A$ denote the inverse images of ${\bf G}(\mathbb A)$ and ${\bf G}'(\mathbb A)$ in $\widetilde{\text{Sp} }(\mathcal{W})_\mathbb A$. It follows from the above description of $\widetilde{\text{Sp} }(\mathcal{W})_\mathbb A$, as well as the corresponding local properties, that $\widetilde{{\bf G}}_\mathbb A$ and $\widetilde{{\bf G}}'_\mathbb A$ commute with each other. Furthermore, recalling our parity assumption [\[parity-assumption\]](#parity-assumption){reference-type="eqref" reference="parity-assumption"}, the covering $\widetilde{\text{Sp} }(\mathcal{W})_\mathbb A\rightarrow\text{Sp} (\mathcal{W})(\mathbb A)$ splits over ${\bf G}(\mathbb A){\bf G}'(\mathbb A)$, and we may view both ${\bf G}(\mathbb A)$ and ${\bf G}'(\mathbb A)$ as subgroups of $\widetilde{\text{Sp} }(\mathcal{W})_\mathbb A$. ### Real structures For each real place $v$, we choose a positive complex structure $J_{W_v}$ on $W_v$. As in Section [8.2](#sec:Schroedinger){reference-type="ref" reference="sec:Schroedinger"}, when combined with a choice of decomposition of $V_v$ into positive and negative subspaces, this determines a positive complex structure $J_v$ on $\mathcal{W}_v$. We use these data to define maximal compact subgroups of ${\bf G}$, ${\bf G}'$, and $\text{Sp} (\mathcal{W})$ at the real places. ### The Weil representation and theta lifts Let $\psi = \otimes \psi_v$ be the product of the local characters defined in Section [8.2](#sec:Schroedinger){reference-type="ref" reference="sec:Schroedinger"}, which gives a character of $\mathbb A/ F$. 
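For $F = \mathbb{Q}$, the standard example of such a character (recalled here for concreteness; the text's choice of $\psi$ is made in §8.2, not here) is:

```latex
\[
  \psi_\infty(x) = e^{2\pi i x}, \qquad
  \psi_p(x) = e^{-2\pi i \{x\}_p},
\]
% where \{x\}_p \in \mathbb{Z}[1/p] denotes the p-adic fractional part of x.
% Each \psi_p is trivial on \mathbb{Z}_p (i.e. unramified), and for
% x \in \mathbb{Q} one has x - \sum_p \{x\}_p \in \mathbb{Z}, whence
% \psi(x) = e^{2\pi i (x - \sum_p \{x\}_p)} = 1, so \psi is trivial on
% the diagonally embedded \mathbb{Q}.
```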
For almost all non-archimedean places $v$ the local additive character $\psi_v$ is unramified, in the sense of Section [8.2](#sec:Schroedinger){reference-type="ref" reference="sec:Schroedinger"}. The choice of $\psi$ defines a Weil representation $\omega_v=\omega_{v,\psi_v}$ for each local metaplectic group $\widetilde{\text{Sp} }_{F_v}$. There is a global Weil representation $\omega$ of $\widetilde{\text{Sp} }(\mathcal{W})_\mathbb A$, with the property that the pullback of $\omega$ to $\prod_v \widetilde{\text{Sp} }(\mathcal{W})_{F_v}$ is the restricted tensor product of the $\omega_v$. As before, we have corresponding smooth and unitary representations $\omega^\text{sm}$ and $\omega^\text{Hilb}$. Let $\mathcal{X}\oplus\mathcal{Y}$ be a complete polarization of $\mathcal{W}$. An explicit model for $\omega^\text{sm}$ (the *Schroedinger model*) is realized on $\mathscr{S}(\mathcal{Y}(\mathbb A))$, the space of Schwartz--Bruhat functions on $\mathcal{Y}(\mathbb A)$. We shall henceforth fix this model for $\omega^\text{sm}$, describing it in some detail shortly. We likewise have a model for $\omega$ on $\mathscr{S}(\mathcal{Y}(\mathbb A_f))\otimes \mathscr{S}_{\rm alg}(\mathcal{Y}(F_\infty))$, where the algebraic Schwartz space $\mathscr{S}_{\rm alg}(\mathcal{Y}(F_\infty))$ is defined in §[8.2](#sec:Schroedinger){reference-type="ref" reference="sec:Schroedinger"}. A theorem of Weil [@Weil §40] states that there is a unique injective homomorphism $\iota_F: \text{Sp} (\mathcal{W})(F)\rightarrow \widetilde{\text{Sp} }(\mathcal{W})_\mathbb A$ lifting the diagonal embedding $\text{Sp} (\mathcal{W})(F)\rightarrow\text{Sp} (\mathcal{W})(\mathbb A)$. We will often identify $\text{Sp} (\mathcal{W})(F)$ with its image in $\widetilde{\text{Sp} }(\mathcal{W})_\mathbb A$ under the map $\iota_F$. 
Weil furthermore showed [@Weil §41] that the distribution $$\Theta: \alpha\mapsto \sum_{y\in \mathcal{Y}(F)}\alpha(y),\qquad \alpha\in\mathscr{S}(\mathcal{Y}(\mathbb A)),$$ is $\text{Sp} (\mathcal{W})(F)$-invariant: $\Theta(\omega(\gamma)\alpha)=\Theta(\alpha)$ for all $\gamma\in \text{Sp} (\mathcal{W})(F)$. For $\alpha\in\mathscr{S}(\mathcal{Y}(\mathbb A))$ the function $\Theta(\omega(\cdot)\alpha)$ is an $\text{Sp} (\mathcal{W})(F)$-invariant function of moderate growth on $\widetilde{\text{Sp} }(\mathcal{W})_\mathbb A$; see [@Weil Théorème 6] and [@Howe §4]. We shall be interested in the restriction of $\Theta(\omega(\cdot)\alpha)$ to ${\bf G}(\mathbb A){\bf G}'(\mathbb A)$, when the latter is viewed as a subgroup of $\widetilde{\text{Sp} }(\mathcal{W})_\mathbb A$ through the splitting $\iota_\mathbb A: {\bf G}(\mathbb A){\bf G}'(\mathbb A)\rightarrow\widetilde{{\bf G}}(\mathbb A)\widetilde{{\bf G}}'(\mathbb A)=\widetilde{{\bf G}{\bf G}'}(\mathbb A)$ obtained by the restricted tensor product of all local splittings defined in §[8.2](#sec:Schroedinger){reference-type="ref" reference="sec:Schroedinger"}. It is known (see the discussion in [@CM §2.2]) that the restrictions of $\iota_\mathbb A$ and $\iota_F$ to ${\bf G}(F){\bf G}'(F)$ coincide. We denote $$\Theta(g,g';\alpha)=\sum_{y\in \mathcal{Y}(F)}(\omega(gg')\alpha)(y)$$ for $(g,g')\in{\bf G}(\mathbb A)\times{\bf G}'(\mathbb A)$. For a function $\phi\in C^\infty([{\bf G}])$ of rapid decrease we let $$\label{eq:Theta-fn} \Theta (\phi,\alpha;W)(g')=\int_{[{\bf G}]} \overline{\phi(g)} \Theta(g,g';\alpha)dg$$ be the theta lift of $\phi$ to $C^\infty([{\bf G}'])$. ### Explicating the theta kernel {#sec:thetakernel} We now give a more explicit form of the theta kernel $\Theta(g,g';\alpha)$, for elements $(g,g')\in{\bf G}(\mathbb A)\times{\bf P}_{\rm Siegel}(\mathbb A)$. 
Here, similarly to the definition of $\mathcal{P}_{\rm Siegel}$ in Section [8.1](#Type1){reference-type="ref" reference="Type1"}, the Siegel parabolic ${\bf P}_{\rm Siegel}$ in ${\bf G}'=\text{Sp} (W)$ is the stabilizer of $U\subset W$. We begin by describing the action of the Siegel parabolic $\mathcal P_\text{Siegel}$ of $\text{Sp} (\mathcal{W})$ in the Schroedinger model. With the notation from Section [8.1](#Type1){reference-type="ref" reference="Type1"}, $\mathcal{P}_{\rm Siegel}(\mathbb A)$ acts via the Weil representation $\omega^\text{sm}$ on functions $\alpha\in\mathscr{S}(\mathcal{Y}(\mathbb A))$ through the formulae $$\label{action0} \begin{aligned} \omega^\text{sm}(m(a),1)\alpha(y)&=|\det a|_\mathbb A^{1/2}\alpha({}^t a y),\;\;\qquad m(a)\in \mathcal{M}(\mathbb A),\\ \omega^\text{sm}(n(b),1)\alpha(y)&=\psi\big(\frac12\langle by, y\rangle_{\mathcal{W}}\big)\alpha(y),\quad n(b)\in \mathcal{N}(\mathbb A). \end{aligned}$$ Note that we are using the identification $\widetilde{\text{Sp} }(\mathcal{W})_\mathbb A= \text{Sp} (\mathcal{W})(\mathbb A) \times S^1$ here. Restricting the first equation of [\[action0\]](#action0){reference-type="eqref" reference="action0"} to ${\bf G}(\mathbb A)=\text{O} (V)(\mathbb A)$, viewed as a subgroup of $\mathcal{M}(\mathbb A)$ via the embedding $g\mapsto m({\rm Id}_U\otimes g)$ from Section [8.1](#Type1){reference-type="ref" reference="Type1"}, we obtain $$\omega^\text{sm}(g,1)\alpha(y)=\alpha(g^{-1}y),\qquad g\in{\bf G}(\mathbb A).$$ Moreover, an arbitrary element of ${\bf P}_{\rm Siegel}(\mathbb A)$ can be written as $m(a\otimes {\rm Id}_V)n(b\otimes {\rm Id}_V)$, where $a\in\text{GL} (U)(\mathbb A)$ and $b\in {\rm sym}(U^*,U)(\mathbb A)$. 
We deduce from the second equation in [\[action0\]](#action0){reference-type="eqref" reference="action0"} that $$\label{eq:explicit-theta} \Theta(g,m(a)n(b);\alpha)=|\det a|_\mathbb A^{1/2}\sum_{y\in \mathcal{Y}(F)}\psi\bigg(\frac12\langle (b\otimes {\rm Id}_V)y, y\rangle_{\mathcal{W}}\bigg) \alpha(g^{-1}\, {}^t a y),$$ as a function on ${\bf G}(\mathbb A)\times{\bf P}_{\rm Siegel}(\mathbb A)$. ## Global representation theoretic properties {#sec:global-theta-properties} Henceforth, all decompositions of an automorphic representation of ${\bf G}(\mathbb A)={\rm O}(V)(\mathbb A)$ or ${\bf G}'(\mathbb A)=\text{Sp} (W)(\mathbb A)$ as a restricted tensor product will be taken with respect to the systems $\{K_{0,v}\}_{v\notin S_V}$ or $\{K_{0,v}'\}_{v<\infty}$ from Section [8.5](#sec:global-theta){reference-type="ref" reference="sec:global-theta"}, respectively. As with the local archimedean theory, we shall define the global theta lift in the context of $K$-finite automorphic forms. Given a cuspidal automorphic representation $\pi$ of ${\bf G}(\mathbb A)$, the space $\Theta(\pi,W)$ generated by $\Theta(\phi,\alpha;W)$, for $\phi\in \pi$ and $\alpha\in\mathscr{S}(\mathcal{Y}(\mathbb A_f))\otimes \mathscr{S}_{\rm alg}(\mathcal{Y}(F_\infty))$, is an invariant subspace, possibly zero, of the space of automorphic forms on ${\bf G}'(\mathbb A)$. The automorphic representation $\Theta(\pi,W)$ is of finite length. Its relation to the local theta correspondence is as follows. Suppose that $\pi'\simeq\otimes_v\pi'_v$ is an irreducible quotient of $\Theta(\pi,W)$. Because the space $\{ \overline{\phi} : \phi \in \pi \}$ is isomorphic to the contragredient $\pi^\vee$ of $\pi$, we have a non-trivial intertwining operator of ${\bf G}(\mathbb A) \times {\bf G}'(\mathbb A)$-modules $\omega \to \pi' \otimes \pi$, so that $\pi_v' \simeq \theta( \pi_v, W)$; see [@KR94 Proposition 7.1.2]. In fact, we have the following result of Kudla--Rallis [@KR94 Corollary 7.1.3]: **Theorem 34** (Kudla--Rallis). 
*Let $\pi$ be an irreducible cuspidal representation of ${\bf G}(\mathbb A)$. If $\Theta(\pi,W)$ is square-integrable, then $\Theta(\pi,W)$ is irreducible and $$\Theta(\pi, W)\simeq\otimes_v \theta(\pi_v, W).$$* Regarding the cuspidality of global theta lifts, we have the following theorem of Rallis [@Rallis84 Theorem I.1.1], establishing the cuspidality of the "first occurrence" of $\Theta(\pi,W)$. To formulate this principle properly, we need to include the dimension $2m$ of the symplectic space $W$ in the notation, writing $W_{2m}$ in place of $W$ and $\text{Sp} _{2m}=\text{Sp} (W_{2m})$ in place of ${\bf G}'$. **Theorem 35** (Rallis). *Let $\pi$ be an irreducible cuspidal automorphic representation of ${\bf G}(\mathbb A)$. Let $m\geqslant 2$. If $\Theta(\pi,W_{2m})$ is not contained in the space of cusp forms $L^2_{\rm cusp}({\rm Sp}_{2m}(F)\backslash {\rm Sp}_{2m}(\mathbb A))$, then $\Theta(\pi,W_{2(m-1)})\neq 0$.* Finally, the following lemma will be of use in the proof of Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"}. **Lemma 36**. *Let $v_0$ be a fixed archimedean place, and assume $V$ is of signature $(n,m)$ at $v_0$, where $n\geqslant m\geqslant 1$. Let $\pi$ be an irreducible cuspidal automorphic representation of ${\bf G}(\mathbb A)$, whose local component at $v_0$ is spherical and tempered. Then the global lift $\Theta(\pi,W_{2m})$ is cuspidal (or zero) and $\Theta(\pi,W_{2m})\simeq\otimes_v \theta(\pi_v,W_{2m,v})$.* *Proof.* Since $\pi_{v_0}$ is spherical and tempered, it follows from Lemma [Lemma 31](#first-occ-sph){reference-type="ref" reference="first-occ-sph"} that $\Theta(\pi_{v_0},W_{2m',v_0})$ vanishes for $m'<m$. The global lift $\Theta(\pi,W_{2m'})$ therefore vanishes for $m'<m$. The first statement of the lemma then follows from Theorem [Theorem 35](#thm:cuspidality){reference-type="ref" reference="thm:cuspidality"}.
The factorization statement is a consequence of Theorem [Theorem 34](#global-irred){reference-type="ref" reference="global-irred"}. ◻ # Period relation {#sec:Period-relation} The goal of this section is to prove the adelic period relation Proposition [Proposition 39](#period-relation){reference-type="ref" reference="period-relation"} and to interpret it classically in Section [9.9](#adelic2classical){reference-type="ref" reference="adelic2classical"}. ## The Maass period relation {#Maass-period-relation} We begin by recalling a result of Rudnick--Sarnak [@RS §3.1] in which certain discrete orthogonal periods of a Hecke--Maass form on $\text{SO} (3,1)$ are identified with the negative Fourier coefficients of a corresponding theta lift, a weight one Maass form on $\text{SL} _2(\mathbb R)$. This extends the classical result of Maass [@Maass59] concerning weight zero CM Maass forms, which are theta lifts of characters $\chi$ on $\text{SO} (1,1)$ (defined relative to a real quadratic extension of $\mathbb Q$ with norm form ${\rm N}_{E/\mathbb Q}$), whose Fourier coefficients are given by $\sum_{{\rm N}_{E/\mathbb Q}(\mathfrak{a})=n}\chi(\mathfrak{a})$. These results, along with the basic method of proof, will form the prototype of the period relation we establish in the next paragraph. Let $(V,Q)$ be an anisotropic quadratic space over $\mathbb Q$, of dimension $4$ and signature $(3,1)$. Denote ${\bf G}=\text{O} (V)$. Let $L\subset V$ be an integral lattice and let $\Gamma$ be the subgroup of ${\bf G}(\mathbb R)$ preserving $L$. Let $D$ be the discriminant of $L$. As a model for hyperbolic $3$-space, we take $\mathbb{H}^3$ to be one of the sheets of the two-sheeted hyperboloid $\{x\in V(\mathbb R): Q(x)=-1 \}$, or equivalently the space of negative definite lines in $V(\mathbb R)$. Let ${\bf G}(\mathbb R)^+$ be the neutral component of ${\bf G}(\mathbb R)$ for the Hausdorff topology; then ${\bf G}(\mathbb R)^+$ preserves $\mathbb{H}^3$. 
Letting $K_\infty^+<{\bf G}(\mathbb R)^+$ denote the stabilizer of a point in $\mathbb{H}^3$, we may isometrically identify the Riemannian symmetric space $\mathbb{H}^3$ with ${\bf G}(\mathbb R)^+/K_\infty^+$. Let $f_\lambda$ be a weight zero Maass form of spectral parameter $\lambda$ on $\Gamma^+\backslash\mathbb{H}^3$, where $\Gamma^+=\Gamma\cap{\bf G}(\mathbb R)^+$. Then $f_\lambda$ is the restriction to ${\bf G}(\mathbb R)^+$ of an adelic automorphic form $\phi_\lambda$ on ${\bf G}(\mathbb A)$, which is invariant under $K_\infty^+$ and an open compact subgroup $K_f\subset{\bf G}(\mathbb A_f)$ satisfying $\Gamma={\bf G}(\mathbb Q)\cap K_f$, and which vanishes on all but one connected component of the double quotient ${\bf G}(\mathbb Q)\backslash{\bf G}(\mathbb A)/K_f$. Let $\mathcal{G}\in\mathscr{S}_{\rm alg}(\mathbb R^4)$ be the fixed Gaussian in the algebraic Schwartz space model of the Weil representation from §[8.2](#sec:Schroedinger){reference-type="ref" reference="sec:Schroedinger"}. Then $F_\lambda=\Theta (\phi_\lambda,\mathcal{G})$ on $\text{Sp} _2=\text{SL} _2$ can be viewed classically as a weight one Maass form on $\mathbb{H}^2$ for the congruence subgroup $\Gamma_0(4D)$, with spectral parameter $\lambda$, and Nebentypus $\chi_D(n) = \big( \frac{D}{|n|} \big)$ where $\big( \frac{D}{|n|} \big)$ is the Kronecker symbol. The Fourier expansion of $F_\lambda$ can be written as $$F_\lambda(u+iv)=\sum_{m\in\mathbb Z}a_m(F_\lambda;v)e^{2 \pi i mu}\qquad (v>0).$$ The important point that Rudnick and Sarnak observe, following Maass [@Maass59], is that the Fourier coefficients $a_m(F_\lambda;v)$ *for negative $m$* can be expressed as a discrete automorphic period of $f_\lambda$. To define this period, let $Y_m$ ($m$ a negative integer) denote the quadric in $V$ given by $Q(x)=m$. Let $Y_m(L)=Y_m(\mathbb Q)\cap L$ denote the set of "$L$-integral points\" in $Y_m(\mathbb Q)$. 
Then $Y_m(L)$ admits an action by $\Gamma^+$, and the set $\Gamma^+ \backslash Y_m(L)$ of $\Gamma^+$-orbits is finite; we denote by $y_1,\ldots ,y_h$ their projections to $\Gamma^+\backslash\mathbb{H}^3$. The period relation of Rudnick--Sarnak then states that $$\label{RS-relation} a_m(F_\lambda;v)=\widehat{\mathcal{G}}(\lambda;v)\sum_{j=1}^h\frac1{w_j}f_\lambda(y_j),$$ where $w_j$ is the order of the stabilizer of $y_j$ in $\Gamma^+$ and $$\label{I-lambda} \widehat{\mathcal{G}}(\lambda;v) = v \int_{{\bf G}(\mathbb R)^+}\mathcal{G}(\sqrt{v}g^{-1}y_0)\varphi_\lambda(g)dg$$ is (up to the factor of $v$) the spherical transform of $\mathcal{G}(\sqrt{v}g^{-1}y_0)$, viewed as a function of $g\in{\bf G}(\mathbb R)^+$. This transform is non-vanishing at $\lambda$ for $v$ large enough. A crucial ingredient in the proof of the relation [\[RS-relation\]](#RS-relation){reference-type="eqref" reference="RS-relation"} is a uniqueness result: the Harish-Chandra spherical function $\varphi_\lambda$ on ${\bf G}(\mathbb R)^+$ is, up to scalar, the unique bi-$K_\infty^+$-invariant eigenfunction of the Laplacian of frequency $\lambda$. ## Notation {#sec:periodnotation} We now establish the notation that will remain in effect through to the end of the paper. We return to the setting of Section [8.5](#sec:global-theta){reference-type="ref" reference="sec:global-theta"}, and continue to let $F$ be a totally real number field with ring of integers $\mathcal O_F$. We continue to use the notation established in Section [8](#sec:theta-review){reference-type="ref" reference="sec:theta-review"}, with the exception that the Lagrangian subspace $U$ of $W$ is now denoted $U_W$. We assume that $V$ is anisotropic over $F$. Let $2m$ be the dimension of $W$, and let $d$ be the dimension of $V$. We continue to assume that $d \geqslant 4$ is even, and that $d = n + m$, where $n > m \geqslant 1$. We fix a real embedding $v_0$ of the number field $F$.
We suppose that $V$ is positive definite at all real places distinct from $v_0$ and of signature $(n,m)$ at $v_0$. Pick an injective linear map $y_0 : U_W \to V$, and let $U_V = y_0(U_W)$ (we will later identify the spaces $U_V$ and $U_W$, hence the choice of notation). We assume that $U_V$ is negative definite at $v_0$. We let $q$ be the quadratic form on $U_W$ obtained by pulling back $Q$ under $y_0$; then $q$ is negative definite at $v_0$ and positive definite at all other real places. ## Orthogonal and Bessel periods {#sec:2periods} We now generalize the periods featured in the Rudnick--Sarnak period relation to the groups ${\bf G}$ and ${\bf G}'$, using the adelic language. We shall express an *orthogonal period* of an automorphic form on ${\bf G}$ in terms of a *Bessel period* of its theta lift to ${\bf G}'$. This period relation was also considered more recently by Gan in [@Gan]. ### Orthogonal periods on orthogonal groups {#sec:orth-orth} Let ${\bf H}=\text{O} (U_V^\perp)\times\text{O} (U_V)$. Let $\pi$ be an irreducible unitary automorphic representation of ${\bf G}(\mathbb A)$. Define an adelic automorphic period map by $$\mathscr{P}_{\bf H}(\phi)=\int_{[{\bf H}]} \phi(h) dh\qquad (\phi\in V_\pi^\infty),$$ where $dh$ is the ${\bf H}(\mathbb A)$-invariant probability measure on $[{\bf H}]$. The integral converges absolutely, since $[{\bf H}]$ is compact ($V$ being anisotropic). The association $\phi\mapsto \mathscr{P}_{\bf H}(\phi)$ then yields an element in $\text{Hom}_{{\bf H}(\mathbb A)}(\pi,1)$. ### Bessel periods on the symplectic group {#sec:Bessel-period} As in Section [8.5.4](#sec:thetakernel){reference-type="ref" reference="sec:thetakernel"}, we let ${\bf P}_{\rm Siegel}={\bf M}{\bf N}$ be the Siegel parabolic of ${\bf G}'$ associated to $U_W$. 
Recall from Section [8.1](#Type1){reference-type="ref" reference="Type1"} that ${\bf M}$ is isomorphic to $\text{GL} (U_W)$, through the map $\text{GL} (U_W)\rightarrow{\bf M}$ given by $a\mapsto m(a)={\rm diag}(a,a^\vee)$. We let $\text{O} (U_W)$ be the isometry group of $q$, viewed as a subgroup of ${\bf M}$. Then ${\bf R}=\text{O} (U_W)\ltimes{\bf N}$ is the associated *Bessel subgroup* of ${\bf G}'$. We shall define Bessel periods by integrating against a character of ${\bf R}$. We begin by recalling from Section [8.1](#Type1){reference-type="ref" reference="Type1"} that ${\bf N}$ can be identified with the space of symmetric morphisms $\text{Hom}^+( U_W^*, U_W)$. Because $q$ may be viewed as an element of $\text{Hom}( U_W, U_W^*)$, we can define a map ${\bf N}(F)\backslash{\bf N}(\mathbb A) \to F \backslash \mathbb A$ that sends $n(b)$ to $\text{tr}( qb)$. We then define a character $\psi_N$ of ${\bf N}(F)\backslash{\bf N}(\mathbb A)$ by $\psi_N(n(b)) = \psi(\tfrac{1}{2} \text{tr}(q b) )$. Then $\psi_N$ is invariant under conjugation by ${\rm O}(U_W)$, and therefore extends to a character of ${\bf R}(\mathbb A)$ that we will denote $1 \times \psi_N$. Let $\pi'$ be an irreducible unitary automorphic representation of ${\bf G}'(\mathbb A)=\text{Sp} (W)(\mathbb A)$. Define an adelic automorphic period map by $$\mathscr{B}_{\bf R}^{\, \psi_N}(\phi')=\int_{[\text{O} (U_W)]}\int_{[{\bf N}]} \phi'(m(t)n)\psi_N^{-1}(n)dtdn\qquad (\phi'\in V_{\pi'}^\infty),$$ where $dt$ and $dn$ are the invariant probability measures on $[\text{O} (U_W)]$ and $[{\bf N}]$, respectively. The integral converges absolutely, since $[\text{O} (U_W)]\times [{\bf N}]$ is compact, yielding an element in $\text{Hom}_{{\bf R}(\mathbb A)}(\pi', 1 \times \psi_N)$.
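The conjugation-invariance of $\psi_N$ comes down to the trace identity $\text{tr}(q\, t b\, {}^tt)=\text{tr}({}^tt\, q\, t\, b)=\text{tr}(qb)$ for $t\in\text{O}(q)$, since $m(t)n(b)m(t)^{-1}=n(t b\, {}^tt)$. A minimal numerical illustration (with toy choices: $q$ the negative of the identity form on $\mathbb{R}^m$, so that $\text{O}(q)$ is the standard orthogonal group):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 3
q = -np.eye(m)                         # toy q: negative definite, as at v_0

# a random t with ^t t q t = q; for q = -Id any orthogonal matrix works
t, _ = np.linalg.qr(rng.standard_normal((m, m)))
assert np.allclose(t.T @ q @ t, q)

s = rng.standard_normal((m, m))
b = s + s.T                            # n(b) with b symmetric

# invariance of psi_N under conjugation: tr(q (t b ^t t)) = tr(q b)
assert np.isclose(np.trace(q @ t @ b @ t.T), np.trace(q @ b))
```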
## An affine variety {#sec:affine} To establish a relation between the Bessel period associated with $U_W$ and the orthogonal period on ${\bf G}$, we shall need to consider the set of embeddings of $(U_W,q)$ into $(V,Q)$ as a quadratic subspace. This motivates the following definition. Recall that $\mathcal{Y}=U_W^*\otimes V={\rm Hom}(U_W,V)$. We let $$\label{eq:defn-YU} \mathcal{Y}_q=\{y\in {\rm Hom}(U_W,V): Q\circ y=q\}\subset\mathcal{Y}$$ denote the space of all isometric embeddings of $U_W$ into $V$, as quadratic spaces. In other words, $\mathcal{Y}_q$ is the variety of representations of the quadratic form $q$ by $Q$. Then the natural action of ${\bf G}\times \text{O} (U_W)$ on $\mathcal{Y}$ preserves $\mathcal{Y}_q$, which makes $\mathcal{Y}_q$ into an affine ${\bf G}\times \text{O} (U_W)$-variety. Concretely, $(g,t)\in{\bf G}\times\text{O} (U_W)$ sends $y\in\mathcal{Y}_q$ to $g\circ y\circ t^{-1}\in\mathcal{Y}_q$. The set of rational points $\mathcal{Y}_q(F)$ is non-empty, as it contains the embedding $y_0$ defined above, and it therefore forms a homogeneous space under the action of ${\bf G}(F)\times\text{O} (U_W)(F)$ by Witt's theorem. For any $F$-algebra $R$, we let $Y_q(R)$ denote the quotient set $\text{O} (U_W)(R)\backslash \mathcal{Y}_q(R)$. We denote the points in $Y_q(R)$ by $[y]$, where $y\in \mathcal{Y}_q(R)$. Then the stabilizer of the base point $[y_0]\in Y_q(F)$ in ${\bf G}(F)$ is ${\bf H}(F)=\text{O} (U_V^\perp)(F)\times\text{O} (U_V)(F)$. Indeed, this is clear for the subgroup $\text{O} (U_V^\perp)$, as the latter is already the stabilizer in ${\bf G}$ of $y_0\in\mathcal{Y}_q$ (before quotienting). In the quotient $Y_q$, observe that for every $g\in\text{O} (U_V)\subset{\bf G}$ there is $t\in\text{O} (U_W)$ such that $g \circ y_0= y_0\circ t^{-1}$. We deduce that $$\label{XS} {\bf G}(F)/{\bf H}(F)\simeq Y_q(F),$$ the isomorphism being induced by the orbit map. 
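For intuition, in a definite toy case the sets above can be enumerated directly. The sketch below uses illustrative choices only ($\mathbb{Z}$-points in place of $F$-points, $Q$ the standard form on $\mathbb{Z}^3$, and $U_W$ of rank one with $q=(1)$): it lists $\mathcal{Y}_q(\mathbb{Z})$ and the orbit set $\text{O}(U_W)(\mathbb{Z})\backslash\mathcal{Y}_q(\mathbb{Z})$, where $\text{O}(U_W)(\mathbb{Z})=\{\pm1\}$.

```python
from itertools import product

# Toy model: Q = x1^2 + x2^2 + x3^2 on Z^3, q = (1) on a rank-one U_W.
# Then Y_q(Z) = {y in Z^3 : Q(y) = 1}, and O(U_W)(Z) = {+-1} acts by sign.
Y_q = [y for y in product(range(-2, 3), repeat=3) if sum(c * c for c in y) == 1]
orbits = {frozenset({y, tuple(-c for c in y)}) for y in Y_q}

assert len(Y_q) == 6      # the six signed standard basis vectors
assert len(orbits) == 3   # one O(U_W)-orbit [y] per coordinate axis
```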
## Auxiliary structures {#sec:level-signature} We keep the notation of Section [9.3](#sec:2periods){reference-type="ref" reference="sec:2periods"}, and now make some choices of auxiliary data to be used throughout the rest of this section. ### Compact subgroups {#sec:lattice} Recall that in Section [8.5](#sec:global-theta){reference-type="ref" reference="sec:global-theta"} we fixed a lattice $L$ in $V$ which is self-dual at all $v\notin S_V$, and we used $L$ to define, at every finite place $v$, a compact open subgroup $K_{0,v}$ of $\text{O} (V)(F_v)$. For every finite $v$, let $K_v$ be an open subgroup of $K_{0,v}$ such that $K_v = K_{0,v}$ for almost all $v$, and let $K_f = \prod_{v < \infty} K_v$. We assume that the decomposition of $V_{v_0}$ into positive and negative subspaces chosen in Section [8.5](#sec:global-theta){reference-type="ref" reference="sec:global-theta"} is given by $V_{v_0} = U_{V,v_0}^\perp \oplus U_{V,v_0}$. We let $K_{v_0}$ be the corresponding maximal compact subgroup of $G_{v_0}$, which satisfies $H_{v_0} = K_{v_0}$. At other real places $v$ we let $K_v = G_v$, and define $K_\infty = \prod_{v \mid \infty} K_v$. Put $K=K_fK_\infty$. In a similar way, we fixed a globally self-dual lattice $L'$ in $W$, which we used to define compact open subgroups $K_{0,v}'$ of $\text{Sp} (W)(F_v)$. For every finite $v$, let $K_v'$ be an open subgroup of $K_{0,v}'$ such that $K_v' = K_{0,v}'$ for almost all $v$, and let $K_f' = \prod_{v < \infty} K_v'$. For real $v$, we let $K_v'$ be the maximal compact subgroup of $G_v'$ corresponding to the positive complex structure $J_{W_v}$ chosen in Section [8.5](#sec:global-theta){reference-type="ref" reference="sec:global-theta"}. We assume that $J_{W_v}$ is compatible with $q$, in the sense that the quadratic form $\langle J_{W_v} u_1, u_2 \rangle$ on $U_{W,v}$ is equal to $-q$ if $v = v_0$, and equal to $q$ at other real places. This implies that $\text{O} (U_W)(F_v)$ is contained in $K_v'$.
Put $K_\infty' = \prod_{v \mid \infty} K_v'$, and $K'=K_f'K_\infty'$. ### Adelic Schwartz function {#sec:adelic-schwartz} We let $L'_{U_W}$ be a lattice in $U_W$, with dual lattice $L'_{U^*_W}$ in $U_W^*$. We may assume that $L' = L'_{U_W} \oplus L'_{U_W^*}$. We put $\mathcal{L}_\mathcal{Y} = L'_{U_W^*} \otimes L$. Then $\mathcal{L}_\mathcal{Y}$ is a lattice in $\mathcal{Y}=U_W^*\otimes V$. Let $\widehat{\mathcal{L}}_\mathcal{Y}$ be the closure of $\mathcal{L}_\mathcal{Y}$ in $\mathcal{Y}(\mathbb A_f)$ and write $\alpha_f$ for the characteristic function of $\widehat{\mathcal{L}}_\mathcal{Y}$. We may alternatively define $\widehat{\mathcal{L}}_\mathcal{Y}$ as the set of maps in $\text{Hom}(U_W,V)(\mathbb A_f)$ that map $\widehat{L}'_{U_W}$ into $\widehat{L}$. Choose any function $\alpha_\infty = \prod_{v | \infty} \alpha_v \in \mathscr{S}_\text{alg}(\mathcal{Y}(F_\infty))$ that is invariant under $\text{O} (U_W)(F_\infty) \times K_\infty$. Finally, let $$\alpha=\alpha_f\otimes\alpha_\infty\in\mathscr{S}(\mathcal{Y}(\mathbb A_f))\otimes \mathscr{S}_\text{alg}(\mathcal{Y}(F_\infty)).$$ ### Integral sets {#sec:integral-sets} Having fixed a base point $y_0\in\mathcal{Y}_q(F)$, we obtain a right $1 \times K_f$-invariant function on $\text{O} (U_W)(\mathbb A_f)\times {\bf G}(\mathbb A_f)$ given by $(t,g)\mapsto \alpha_f ((t^{-1},g^{-1}).y_0)$.
In view of ${\rm Stab}_{\bf G}(y_0)=\text{O} (U_V^\perp)$, the function $\alpha((t^{-1},g^{-1})y_0)$ is well-defined on $$\text{O} (U_W)(\mathbb A) \times \left(\text{O} (U_V^\perp)(\mathbb A)\backslash {\bf G}(\mathbb A)/K\right).$$ Note that $\alpha_f((t^{-1},g^{-1})y_0)$ is the characteristic function of the set $$\{(t,g)\in\text{O} (U_W)(\mathbb A_f)\times{\bf G}(\mathbb A_f):(t^{-1},g^{-1})y_0 \in \widehat{\mathcal{L}}_\mathcal{Y} \}.$$ For $[y]\in Y_q(\mathbb A)=\text{O} (U_W)(\mathbb A)\backslash\mathcal{Y}_q(\mathbb A)$ we put $$\label{defn:alphaU} \beta([y])=\int_{\text{O} (U_W)(\mathbb A)}\alpha(t^{-1}y)\, dt.$$ Since ${\rm Stab}_{\bf G}([y_0])={\bf H}$, the function $g\mapsto \beta(g^{-1}[y_0])$ is well-defined on ${\bf H}(\mathbb A)\backslash{\bf G}(\mathbb A)/K$. The integral in [\[defn:alphaU\]](#defn:alphaU){reference-type="eqref" reference="defn:alphaU"} is factorizable, so that $\beta=\beta_f\otimes\beta_\infty$, with the obvious definitions. Since $\alpha_\infty$ is assumed to be invariant under $\text{O} (U_W)(F_\infty)$, we in fact have $\beta_\infty([y])=\alpha_\infty(y)$ under an appropriate measure normalization. Moreover, we may describe the support of $\beta_f$ as follows. If $\widehat\Lambda = \text{O} (U_W)(\mathbb A_f)\widehat{\mathcal{L}}_\mathcal{Y} \subset \mathcal{Y}(\mathbb A_f)$, then the support of $\beta_f(g_f^{-1}[y_0])$ is the set $$\label{eq:defn-G-int} {\bf G}(\mathbb A_f)^{\rm int}=\{g_f\in{\bf G}(\mathbb A_f):g_f^{-1}[y_0]\in\widehat\Lambda\},$$ which is left ${\bf H}(\mathbb A_f)$-invariant and right $K_f$-invariant. For any finite place $v$ we define $G_v^\text{int}$ in the natural way, so that ${\bf G}(\mathbb A_f)^{\rm int} = \prod_{v < \infty} G_v^\text{int}$. Note that $G_v^\text{int}$ may be characterized as the set of $g \in G_v$ such that $g^{-1} U_{V,v}$ contains a free $\mathcal O_v$-submodule $M$ such that $(M,Q)$ is isometric to $(L_{U_W,v}',q)$. **Lemma 37**. 
*The set ${\bf H}(\mathbb A_f) \backslash {\bf G}(\mathbb A_f)^{\rm int} / K_f$ is finite.* *Proof.* We need to show that $H_v \backslash G_v^\text{int} / K_v$ is finite for all $v$, and has cardinality one for almost all $v$. We do this by defining $\mathcal M$ to be the set of free $\mathcal O_v$-submodules of $L_v$ that are isometric to $L_{U_W,v}'$. We define a map $\tau : \mathcal M\to H_v \backslash G_v^\text{int}$ by sending $M \in \mathcal M$ to the unique coset $H_v g \in H_v \backslash G_v$ such that $g^{-1} U_{V,v} = M \otimes F_v$, which clearly lies in $H_v \backslash G_v^\text{int}$. Moreover, $\tau$ is surjective: if $g \in G_v^\text{int}$, then $g^{-1} U_{V,v}$ contains some $M \in \mathcal M$, and hence $\tau(M) = H_vg$. It may also be checked that $\tau$ is $K_v$-equivariant. For a general place $v$, the set $\mathcal M$ is compact, and $K_v$ acts with open orbits. (The second assertion is perhaps not entirely trivial, but we leave the details to the reader.) It follows that the quotient $K_v \backslash \mathcal M$ is finite, which implies the same for $H_v \backslash G_v^\text{int} / K_v$, by the surjectivity and $K_v$-equivariance of $\tau$. We next assume that $L_v$ and $L_{U_W,v}'$ are self-dual, and that $K_v$ is equal to the stabilizer of $L_v$, which happens for almost all $v$. In this case, any $M \in \mathcal M$ must also be self-dual. By [@Baeza I Prop 3.2], this implies that the orthogonal complement of $M$ in $L_v$, which we denote by $M^\perp$, must satisfy $L_v = M \oplus M^\perp$, which implies that $M^\perp$ is also self-dual. Now, let $M, M' \in \mathcal M$. Because $M$ and $M'$ have the same discriminant, it follows that $M^\perp$ and $M'^\perp$ also have the same discriminant, and combined with their self-duality this means they are isometric. As a result, there is $g \in G_v$ that maps $M$ isometrically to $M'$ and $M^\perp$ isometrically to $M'^\perp$. We see from this that $g \in K_v$. 
Therefore $K_v$ acts transitively on $\mathcal M$, and hence on $H_v \backslash G_v^\text{int}$. ◻ ## An archimedean integral Once again we let $v_0$ be the fixed real place of Section [9.2](#sec:periodnotation){reference-type="ref" reference="sec:periodnotation"}. In this section only, we let $\alpha_{v_0} \in \mathscr{S}(\mathcal{Y}(F_{v_0}))$ be an arbitrary Schwartz function, with no assumption of invariance under $\text{O} (U_W)(F_{v_0}) \times K_{v_0}$. For $\lambda\in\mathfrak a^*_\mathbb C$ we let $$\label{eq:I-lambda} \widehat{\alpha}_{v_0}(\lambda)=\int_{G_{v_0}} \alpha_{v_0}(g^{-1}y_0)\varphi_{-\lambda}(g)dg,$$ where $\varphi_{-\lambda}$ is the Harish-Chandra spherical function. For simplicity, we have suppressed the dependency on $y_0$ in the notation. **Lemma 38**. *For any $\alpha_{v_0} \in \mathscr{S}(\mathcal{Y}(F_{v_0}))$ and any $\lambda\in\mathfrak a^*_\mathbb C$, the integral [\[eq:I-lambda\]](#eq:I-lambda){reference-type="eqref" reference="eq:I-lambda"} converges absolutely, and $\alpha_{v_0} \mapsto \widehat{\alpha}_{v_0}(\lambda)$ defines a tempered distribution.* *Proof.* As the place $v_0$ shall be fixed throughout this proof, we omit it from the notation, and write $\mathcal{Y}$ for $\mathcal{Y}(F_{v_0})$, $G$ and $K$ for $G_{v_0}$ and $K_{v_0}$, etc. We choose a basis $(e_1, \ldots, e_m)$ for $U_W$ so that $q$ is the standard negative definite quadratic form, and a basis $(v_1, \ldots, v_d)$ for $V$ so that $Q$ is given by $z_1^2 + \ldots + z_n^2 - z_{n+1}^2 - \ldots -z_d^2$. We may also assume that $y_0(e_i) = v_{n+i}$ for $1 \leqslant i \leqslant m$. We define $\| \cdot \|_V$ to be the norm on $V$ corresponding to the standard positive definite quadratic form. We define $\| \cdot \|_\mathcal{Y}$ to be the mapping norm on $\mathcal{Y}= \text{Hom}(U_W,V)$ with respect to the norms $-q$ and $\| \cdot \|_V$. We let $B(r)$ be the open ball about zero of radius $r$ in $\mathcal{Y}$ with respect to $\| \cdot \|_\mathcal{Y}$.
We define the subspaces $\mathfrak k$ and $\mathfrak p$ of $\mathfrak g$ in the usual way. We let $X_i \in \mathfrak g$, $1 \leqslant i \leqslant m$, be the vectors which act on $V$ by fixing all basis vectors except for $v_i$ and $v_{n+i}$, and whose action on $\text{span}\{ v_i, v_{n+i} \}$ is given by the matrix $\big(\begin{smallmatrix} 0 & 1\\ 1 & 0 \end{smallmatrix}\big)$. Then the $X_i$ span a maximal abelian subalgebra $\mathfrak a$ of $\mathfrak p$, with corresponding subgroup $A = \exp(\mathfrak a)$. For $H \in \mathfrak a$, we let $H_i$ be its coordinates with respect to the basis $\{ X_i \}$, and we equip $\mathfrak a$ with the standard norm with respect to these coordinates. As in Section [3.1](#sec:gp-decomp){reference-type="ref" reference="sec:gp-decomp"}, we define $B_\mathfrak a(0,R)$ to be the ball in $\mathfrak a$ centered at 0 and of radius $R > 0$ relative to this norm. (Note that the norm we are using here is different to the one used in Section [3.1](#sec:gp-decomp){reference-type="ref" reference="sec:gp-decomp"}.) We define $A_R = \exp(B_\mathfrak a(0,R))$, and $G_R = K A_R K$. We now turn to bounding the integral [\[eq:I-lambda\]](#eq:I-lambda){reference-type="eqref" reference="eq:I-lambda"}. We may assume without loss of generality that $\lambda$ is real, so that $\varphi_{-\lambda} > 0$. If we let $\gamma : G \to \mathcal{Y}$ be the orbit map $\gamma(g) = g^{-1} y_0$, then the integral defining $\widehat{\alpha}(\lambda)$ may be thought of as the integral of $\alpha$ against the measure $\mu = \gamma_*(\varphi_{-\lambda}(g) dg)$. To show that $\mu$ is a tempered distribution, it suffices to show that $\mu( B(r))$ grows polynomially in $r$. We do this by first bounding the set $\gamma^{-1}(B(r)) \subset G$, and in particular showing that $\gamma^{-1}(B(r)) \subset G_{c \log r}$ for some $c > 0$ and $r$ large. Because $\gamma^{-1}(B(r))$ is bi-invariant under $K$, it suffices to show that $A \cap \gamma^{-1}(B(r)) \subset A_{c \log r}$.
In order to have $e^H \in \gamma^{-1}(B(r))$, we must have $e^{-H} y_0 \in B(r)$. This implies that $\| e^{-H} y_0(e_i) \|_V \leqslant r$ for all $i$, which gives $$\begin{aligned} \| e^{-H} v_{n+i} \|_V & \leqslant r \\ \| -\sinh(H_i) v_i + \cosh(H_i) v_{n+i} \|_V & \leqslant r \\ \cosh(H_i) & \leqslant r \\ |H_i| & \leqslant \log (2r).\end{aligned}$$ This implies that $\| H \| \ll \log r$, and hence that $\gamma^{-1}(B(r)) \subset G_{c \log r}$ for some $c > 0$ as required. The quantity $\mu( B(r))$ is equal to the integral of $\varphi_{-\lambda}(g) dg$ over $\gamma^{-1}(B(r))$, and this is bounded above by the volume of $G_{c \log r}$ times the maximum of $\varphi_{-\lambda}(g)$ on $G_{c \log r}$. The usual formula for the volume form on $G$ in Cartan coordinates implies that $\text{vol}(G_{c \log r}) \ll r^a$ for some $a$, and the bound $| \varphi_{-\lambda}(e^H) | \ll e^{ \beta \| H \| \| \lambda \|}$ for some $\beta > 0$ gives a polynomial upper bound for $\varphi_{-\lambda}$. This completes the proof. ◻ ## Period relation {#period-relation} We are now ready to state and prove the fundamental period relation which lies at the heart of Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"}. We shall state it here in its adelic form. A more classical formulation will be given in Section [9.9](#adelic2classical){reference-type="ref" reference="adelic2classical"}. We retain the notation of Sections [9.3](#sec:2periods){reference-type="ref" reference="sec:2periods"}--[9.5](#sec:level-signature){reference-type="ref" reference="sec:level-signature"}, including the spaces $V$ and $W$, the quadratic space $(U_W,q)$, the base point $y_0 \in \mathcal{Y}_q(F)$, and the function $\alpha \in \mathscr{S}(\mathcal{Y}(\mathbb A_f))\otimes\mathscr{S}_{\rm alg}(\mathcal{Y}(F_\infty))$. **Proposition 39**. 
*Let $\phi_\lambda$ be a $K$-invariant automorphic form on ${\bf G}(\mathbb A)$ that is an eigenfunction of the ring of invariant differential operators on $G_{v_0} / K_{v_0}$ with spectral parameter $\lambda\in\mathfrak a_\mathbb C^*$. Let $\phi'_\lambda=\Theta(\phi_\lambda,\alpha;W)$ be its theta lift to ${\bf G}'(\mathbb A)$. Then $$\begin{gathered} \mathscr{B}_{\bf R}^{\, \psi_N}(\phi_\lambda')= \widehat{\alpha}_{v_0}( -\overline\lambda) \prod_{v\mid\infty, v\neq v_0}\alpha_v(y_0) \\ \sum_{[b]\in {\bf H}(\mathbb A_f)\backslash {\bf G}(\mathbb A_f)^{\rm int}/K_f} \textup{vol}({\bf H}(\mathbb A_f)bK_f)\beta_f( b^{-1} [y_0] ) \mathscr{P}_{\bf H}(R(b) \overline{\phi_\lambda} ),\end{gathered}$$ where $\textup{vol}({\bf H}(\mathbb A_f)bK_f)$ is the volume of ${\bf H}(\mathbb A_f)bK_f$ considered as a subset of ${\bf H}(\mathbb A_f)\backslash{\bf G}(\mathbb A_f)$.* *Proof.* Inserting the definition of the theta lift in [\[eq:Theta-fn\]](#eq:Theta-fn){reference-type="eqref" reference="eq:Theta-fn"}, we have $$\mathscr{B}_{\bf R}^{\, \psi_N}(\phi_\lambda')=\int_{[\text{O} (U_W)]}\int_{[{\bf N}]}\int_{[{\bf G}]} \overline{\phi_\lambda}(g)\Theta(g,m(t)n;\alpha) \psi_N^{-1}(n)dg dn dt.$$ Using the explicit formulae [\[eq:explicit-theta\]](#eq:explicit-theta){reference-type="eqref" reference="eq:explicit-theta"} for the action of the Weil representation (and recalling that ${}^t t=t^{-1}$), we have $$\Theta(g,m(t)n(b);\alpha)=\sum_{y\in \mathcal{Y}(F)}\psi\left(\frac12\langle (b\otimes {\rm Id}_V)y, y\rangle_{\mathcal{W}}\right) \alpha((g^{-1},t^{-1})y).$$ It may be shown that $\langle (b\otimes {\rm Id}_V)y, y\rangle_{\mathcal{W}} = \text{tr}( b\, {}^t y Q y)$, where we are viewing $Q$ as an element of $\text{Hom}(V, V^*)$. 
Recall the definition of the variety $\mathcal{Y}_q$ given in [\[eq:defn-YU\]](#eq:defn-YU){reference-type="eqref" reference="eq:defn-YU"} along with the choice of additive character $\psi_N$ in §[9.3.2](#sec:Bessel-period){reference-type="ref" reference="sec:Bessel-period"}. From orthogonality, we obtain $$\begin{aligned} \int_{[{\bf N}]}\Theta(g,m(t)n;\alpha) \psi_N^{-1}(n)dn&=\sum_{y\in \mathcal{Y}(F)}\alpha((g^{-1},t^{-1})y)\int_{ [ \mathbf{Hom}^+]}\psi\left(\frac12 \text{tr}( b({}^t y Q y - q) ) \right)db\\ &=\sum_{y\in \mathcal{Y}_q(F)}\alpha((g^{-1},t^{-1})y),\end{aligned}$$ where $[\mathbf{Hom}^+]$ denotes the adelic quotient associated to the vector space $\text{Hom}^+(U_W^*, U_W)$. Inserting this into the expression for the Bessel period, we obtain $$\mathscr{B}_{\bf R}^{\, \psi_N}(\phi_\lambda')=\int_{[{\bf G}]} \overline{\phi_\lambda}(g)\int_{[\text{O} (U_W)]}\sum_{y\in \mathcal{Y}_q(F)}\alpha((g^{-1},t^{-1})y) dtdg.$$ Unfolding the integral over $[\text{O} (U_W)]$ with the quotient $Y_q(F)=\text{O} (U_W)(F)\backslash\mathcal{Y}_q(F)$, and recalling the definition of $\beta$ in [\[defn:alphaU\]](#defn:alphaU){reference-type="eqref" reference="defn:alphaU"}, we find $$\begin{aligned} \mathscr{B}_{\bf R}^{\, \psi_N}(\phi_\lambda')&=\int_{[{\bf G}]} \overline{\phi_\lambda}(g)\int_{[\text{O} (U_W)]}\sum_{\gamma\in \text{O} (U_W)(F)}\sum_{[y]\in Y_q(F)}\alpha((g^{-1},t^{-1}\gamma^{-1}) y) dt dg\\ &=\int_{[{\bf G}]} \overline{\phi_\lambda}(g)\sum_{[y]\in Y_q(F)}\int_{\text{O} (U_W)(\mathbb A)}\alpha((g^{-1},t^{-1})y) dt dg\\ &=\int_{[{\bf G}]} \overline{\phi_\lambda}(g)\sum_{[y]\in Y_q(F)}\beta(g^{-1}[y]) dg.\end{aligned}$$ Recall from [\[XS\]](#XS){reference-type="eqref" reference="XS"} that the orbit map $\gamma\mapsto \gamma^{-1}[y_0]$ (note the inverse) identifies $Y_q(F)$ with ${\bf H}(F)\backslash{\bf G}(F)$. 
From this, another unfolding yields $$\begin{aligned} \mathscr{B}_{\bf R}^{\, \psi_N}(\phi_\lambda')&=\int_{[{\bf G}]} \overline{\phi_\lambda}(g)\sum_{\gamma\in{\bf H}(F)\backslash{\bf G}(F)}\beta(g^{-1}\gamma^{-1}[y_0] )dg\\ &=\int_{{\bf H}(F)\backslash{\bf G}(\mathbb A)} \overline{\phi_\lambda}(g)\beta(g^{-1}[y_0])dg\\ &=\int_{{\bf H}(\mathbb A)\backslash{\bf G}(\mathbb A)}\int_{{\bf H}(F)\backslash{\bf H}(\mathbb A)} \overline{\phi_\lambda}(hg) \beta(g^{-1}h^{-1}[y_0]) dhdg.\end{aligned}$$ From the left ${\bf H}(\mathbb A)$-invariance of $g\mapsto \beta(g^{-1}[y_0])$ we get $$\mathscr{B}_{\bf R}^{\, \psi_N}(\phi_\lambda')=\int_{{\bf H}(\mathbb A)\backslash{\bf G}(\mathbb A)}\beta(g^{-1}[y_0]) \mathscr{P}_{{\bf H}}(R(g)\overline{\phi_\lambda}) dg.$$ We write this as $$\label{period-expr} \mathscr{B}_{\bf R}^{\, \psi_N}(\phi_\lambda')=\int_{{\bf H}(\mathbb A_f)\backslash{\bf G}(\mathbb A_f)} \beta_f(g_f^{-1}[y_0]) \int_{H_\infty \backslash G_\infty} \beta_\infty(g_\infty^{-1}[y_0]) \mathscr{P}_{{\bf H}}(R(g_f g_\infty)\overline{\phi_\lambda}) dg_\infty dg_f,$$ and consider the inner archimedean integral. Since $\overline{\phi_\lambda}$ is right $K_\infty$-invariant, and $H_\infty = K_\infty$, $\mathscr{P}_{\bf H}(R(g_f g_\infty) \overline{\phi_\lambda})$ is a bi-$K_\infty$-invariant function on $G_\infty$. Moreover, since $\overline{\phi_\lambda}$ was assumed to be an eigenfunction on $G_{v_0}$ with spectral parameter $\overline\lambda$, the same is true for $\mathscr{P}_{\bf H}(R(g_f g_\infty) \overline{\phi_\lambda})$. 
From the uniqueness of spherical functions, it follows that $$\mathscr{P}_{\bf H}(R(g_f g_\infty) \overline{\phi_\lambda})=\varphi_{\overline\lambda}(g_{v_0})\mathscr{P}_{\bf H}(R(g_f) \overline{\phi_\lambda}).$$ Inserting this into [\[period-expr\]](#period-expr){reference-type="eqref" reference="period-expr"} gives $$\mathscr{B}_{\bf R}^{\, \psi_N}(\phi_\lambda')=\int_{{\bf H}(\mathbb A_f)\backslash{\bf G}(\mathbb A_f)} \beta_f(g_f^{-1}[y_0]) \mathscr{P}_{\bf H}(R(g_f) \overline{\phi_\lambda}) dg_f \int_{H_\infty \backslash G_\infty} \beta_\infty(g_\infty^{-1}[y_0]) \varphi_{\overline\lambda}(g_{v_0}) dg_\infty.$$ Recalling from §[9.5.2](#sec:adelic-schwartz){reference-type="ref" reference="sec:adelic-schwartz"} that $\beta_\infty([y])=\alpha_\infty(y)$, the archimedean integral unfolds to $$\int_{G_\infty} \alpha_\infty( g_\infty^{-1} y_0) \varphi_{\overline\lambda}(g_{v_0}) dg_\infty = \widehat{\alpha}_{v_0}( -\overline\lambda) \prod_{v\mid\infty, v\neq v_0}\alpha_v(y_0).$$ From the right $K_f$-invariance of $\beta_f( g_f^{-1}[y_0])$, and the description of its support in [\[eq:defn-G-int\]](#eq:defn-G-int){reference-type="eqref" reference="eq:defn-G-int"}, the integral over all finite places becomes $$\sum_{[b]\in{\bf H}(\mathbb A_f)\backslash {\bf G}(\mathbb A_f)^{\rm int}/K_f} \textup{vol}({\bf H}(\mathbb A_f)bK_f)\beta_f( b^{-1} [y_0] ) \mathscr{P}_{{\bf H}}(R(b) \overline{\phi_\lambda}).$$ Combining these gives the proposition. ◻ ## Distinction relation {#sec:lemma-schwartz-space} Using Proposition [Proposition 39](#period-relation){reference-type="ref" reference="period-relation"}, as well as a density statement in functional analysis that we prove in Proposition [Proposition 41](#prop:density-in-Schwartz){reference-type="ref" reference="prop:density-in-Schwartz"}, we deduce the following corollary. **Corollary 40**. *Let notations and assumptions be as in Proposition [Proposition 39](#period-relation){reference-type="ref" reference="period-relation"}.
If $$\sum_{[b]\in{\bf H}(\mathbb A_f)\backslash {\bf G}(\mathbb A_f)^{\rm int}/K_f} \textup{vol}({\bf H}(\mathbb A_f)bK_f)\beta_f( b^{-1} [y_0]) \mathscr{P}_{\bf H}(R(b)\phi_\lambda)\neq 0$$ then $\Theta(\phi_\lambda, \alpha;W)$ is non-zero for some choice of $\alpha_\infty$.* *Proof.* By Corollary [Corollary 43](#alphahat-nonvanish){reference-type="ref" reference="alphahat-nonvanish"} below, we may find $\alpha\in \mathscr{S}_{\rm alg}(\mathcal{Y}(F_{v_0}))$, invariant under $\textup{O}(U_W)(F_{v_0}) \times K_{v_0}$, such that $\widehat{\alpha}_{v_0}(-\overline\lambda)\neq 0$. In particular, $\alpha_{v_0}$ satisfies the conditions of §[9.5.2](#sec:adelic-schwartz){reference-type="ref" reference="sec:adelic-schwartz"}. We may furthermore assume that $\alpha_v(y_0)\neq 0$ for all archimedean places $v$ distinct from $v_0$. Proposition [Proposition 39](#period-relation){reference-type="ref" reference="period-relation"} then implies that $\phi'_\lambda = \Theta(\phi_\lambda, \alpha; W)$ is non-zero, because it satisfies $\mathscr{B}_{\bf R}^{\, \psi_N}(\phi_\lambda') \neq 0$. ◻ We now prove that $\mathscr{S}_{\rm alg}(\mathcal{Y}(F_{v_0}))$ is dense in $\mathscr{S}(\mathcal{Y}(F_{v_0}))$ with the Schwartz topology, as promised in Remark [Remark 6](#rem:dense){reference-type="ref" reference="rem:dense"}. In order to simplify notation in the proof, and because the result may be of independent interest, we shall state it in a way that does not involve our previously established notation. Instead, until the end of the proof of Proposition [Proposition 41](#prop:density-in-Schwartz){reference-type="ref" reference="prop:density-in-Schwartz"}, $d$ will denote an arbitrary positive integer. 
We let $\mathscr{S}(\mathbb R^d)$ denote the space of Schwartz functions on $\mathbb R^d$, with the topology coming from the seminorms $$p_{ab}(f) = \| x^a f^{(b)} \|_\infty\qquad (a,b\in\mathbb N).$$ We let the algebraic Schwartz space $\mathscr{S}_{\rm alg}(\mathbb R^d)$ be the space of all products of polynomials on $\mathbb R^d$ with the Gaussian $e^{-\| x \|^2/2}$. **Proposition 41**. *$\mathscr{S}_{\rm alg}( \mathbb R^d)$ is dense in $\mathscr{S}( \mathbb R^d)$ with the Schwartz topology.* *Proof.* We shall use the theory of Hermite functions, for which we refer to [@CH Ch. II, Section 9.4, p. 91] for basic definitions. We let $H_0 = -\Delta + x^2$ be the Hermite operator on $\mathbb R$, and $H = -\Delta + \|x \|^2$ be the Hermite operator on $\mathbb R^d$. The Hermite functions of one variable, $$\label{Hermite-formula} h_n(x) = (2^n n! \sqrt{\pi} )^{-1/2} (-1)^n e^{x^2/2} \frac{d^n}{dx^n} e^{-x^2}$$ for $n \geqslant 0$, have the following properties: 1. $h_n(x) = p_n(x) e^{-x^2/2}$, where $p_n(x)$ is a polynomial of degree $n$. 2. [\[Hermite-eqn\]]{#Hermite-eqn label="Hermite-eqn"} $H_0 h_n = (2n+1) h_n$. 3. The functions $h_n$ form an orthonormal basis for $L^2(\mathbb R)$. Let $\alpha = (\alpha_1, \ldots, \alpha_d) \in \mathbb N^d$ be a multi-index, and let $|\alpha| = \alpha_1 + \ldots + \alpha_d$. If we define $\phi_\alpha(x) = h_{\alpha_1}(x_1) \cdots h_{\alpha_d}(x_d)$ for all $\alpha$, then $\{ \phi_\alpha \}$ is an orthonormal basis for $L^2(\mathbb R^d)$ satisfying $H \phi_\alpha = (2|\alpha| + d)\phi_\alpha$. If $f \in \mathscr{S}( \mathbb R^d)$, we may expand $f$ (viewed as a vector in $L^2(\mathbb R^d)$) in the orthonormal basis $\{ \phi_\alpha \}$ as $f = \sum_\alpha \langle f, \phi_\alpha \rangle \phi_\alpha$. Because the partial sums of this expansion lie in $\mathscr{S}_{\rm alg}( \mathbb R^d)$, it suffices to show that this sum converges in $\mathscr{S}( \mathbb R^d)$. 
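The listed properties, and the sup-norm behaviour exploited below, are easy to sanity-check numerically. The following sketch is an illustration only, not part of the argument; it assumes NumPy is available, and it builds the orthonormal Hermite functions via the standard three-term recurrence $h_{n+1}(x) = \sqrt{2/(n+1)}\, x\, h_n(x) - \sqrt{n/(n+1)}\, h_{n-1}(x)$, which is not stated in the text but follows from the Hermite polynomial recurrence and the normalization above.

```python
import numpy as np

def hermite_functions(nmax, x):
    # Orthonormal Hermite functions h_0, ..., h_nmax on the grid x, via the
    # stable three-term recurrence
    #   h_{n+1} = sqrt(2/(n+1)) * x * h_n - sqrt(n/(n+1)) * h_{n-1}.
    h = np.zeros((nmax + 1, x.size))
    h[0] = np.pi ** (-0.25) * np.exp(-x ** 2 / 2)
    if nmax >= 1:
        h[1] = np.sqrt(2.0) * x * h[0]
    for n in range(1, nmax):
        h[n + 1] = np.sqrt(2.0 / (n + 1)) * x * h[n] - np.sqrt(n / (n + 1)) * h[n - 1]
    return h

x = np.linspace(-15.0, 15.0, 30001)
dx = x[1] - x[0]
h = hermite_functions(8, x)

# Property (3): orthonormality.  A Riemann sum suffices, since h_n is
# negligible at |x| = 15 for n <= 8.
gram = (h @ h.T) * dx
assert np.allclose(gram, np.eye(9), atol=1e-8)

# Property (2): H_0 h_n = (2n+1) h_n, checked for n = 5 by finite differences.
h5pp = np.gradient(np.gradient(h[5], dx), dx)
assert np.allclose(-h5pp + x ** 2 * h[5], 11.0 * h[5], atol=1e-3)

# The sup norms ||h_n||_inf decay slowly with n, consistent with the
# Koch--Tataru bound ||h_n||_inf << n^{-1/12} quoted below.
def sup_norm(n):
    L = np.sqrt(2.0 * n + 1.0) + 10.0  # cover the turning points
    xs = np.arange(-L, L, 5e-3)
    return np.abs(hermite_functions(n, xs)[n]).max()

sups = [sup_norm(n) for n in (1, 10, 100, 400)]
assert max(sups) < 0.82 and sups == sorted(sups, reverse=True)
```

Evaluating [\[Hermite-formula\]](#Hermite-formula){reference-type="eqref" reference="Hermite-formula"} directly would require cancelling enormous factors of $2^n n!$; the normalized recurrence avoids this and is numerically stable.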
We have $$\langle f, \phi_\alpha \rangle = (2|\alpha| + d)^{-k} \langle f, H^k \phi_\alpha \rangle = (2|\alpha| + d)^{-k} \langle H^k f, \phi_\alpha \rangle \ll_k (2|\alpha| + d)^{-k}$$ for any $k$. As a result, the convergence of $\sum_\alpha \langle f, \phi_\alpha \rangle \phi_\alpha$ will follow if we can show that for any seminorm $p_{ab}$ as above, there is an $l(a,b) > 0$ such that $$\label{seminormbd} p_{ab}( \phi_\alpha) \ll_{a,b} |\alpha|^{l(a,b)}.$$ We claim that it suffices to establish the bound [\[seminormbd\]](#seminormbd){reference-type="eqref" reference="seminormbd"} in the special case when $d = 1$ and $b = 0$. To demonstrate this, suppose we know the functions $h_n$ satisfy $p_{a0}(h_n) \ll_{a} n^{l(a,0)}$ for all $a$. When combined with the relation $h_n' = -x h_n + \sqrt{2n}\, h_{n-1}$, this implies that we also have $p_{a1}(h_n) \ll_{a} n^{l(a,1)}$. We may use the differential equation [\[Hermite-eqn\]](#Hermite-eqn){reference-type="eqref" reference="Hermite-eqn"} to show that for any $b$ we have $$h_n^{(b)}(x) = P_b(x,n) h_n(x) + Q_b(x,n) h'_n(x),$$ where $P_b$ and $Q_b$ are polynomials depending only on $b$, and this implies that $p_{ab}(h_n) \ll_{a,b} n^{l(a,b)}$ for any $a$ and $b$, and hence gives [\[seminormbd\]](#seminormbd){reference-type="eqref" reference="seminormbd"}. Having reduced to proving that $p_{a0}( h_n) \ll_a n^{l(a)}$ for any $a$, we now address this bound. One could probably derive this from classical asymptotic formulas for $h_n$ such as [@PR] or [@Sz Theorem 8.22.9], but we have found it simpler to use a bound of Koch--Tataru in the case where $x \ll \sqrt{n}$, combined with an elementary bound when $x \gg \sqrt{n}$. Koch--Tataru [@KT Corollary 3.2] prove that $\| h_n \|_\infty \ll n^{-1/12}$ (in fact, any polynomial bound will suffice for our purposes), which implies that $$\underset{ |x| \leqslant 10 \sqrt{n} }{\sup} | x^a h_n(x) | \ll_a n^{a/2-1/12}.$$ The remaining range is treated in the following lemma. ◻ **Lemma 42**.
*There is a constant $C$ such that for $|x| > 10 \sqrt{n}$, we have $|h_n(x)| \leqslant C e^{-x^2/4}$.* *Proof.* We use the formula [\[Hermite-formula\]](#Hermite-formula){reference-type="eqref" reference="Hermite-formula"} for $h_n$, together with the contour integral formula $$\frac{d^n}{dx^n} e^{-x^2} = \frac{n!}{2\pi i} \int_\mathcal Ce^{-(z+x)^2} \frac{dz}{z^{n+1}},$$ where $\mathcal C$ is an anticlockwise circle of radius $\sqrt{n}$ centered at 0. We have $$\text{Re}(-(z+x)^2) = -x^2 + \text{Re}( -2zx - z^2) \leqslant -x^2 + 2|x| \sqrt{n} + n.$$ If we assume that $10 \sqrt{n} < |x|$, this becomes $$\text{Re}(-(z+x)^2) \leqslant -x^2 + \frac{1}{5} x^2 + \frac{1}{100} x^2 < -3x^2/4.$$ We therefore have $$\begin{aligned} \left| \int_\mathcal Ce^{-(z+x)^2} \frac{dz}{z^{n+1}} \right| & \leqslant \int_\mathcal C| e^{-(z+x)^2} z^{-n-1} | dz \\ & < (2\pi \sqrt{n}) n^{-(n+1)/2} e^{-3x^2/4} \\ & = 2 \pi n^{-n/2} e^{-3x^2/4},\end{aligned}$$ which in turn yields the bound $$\left| \frac{d^n}{dx^n} e^{-x^2} \right| < n! n^{-n/2} e^{-3x^2/4}.$$ Substituting this into [\[Hermite-formula\]](#Hermite-formula){reference-type="eqref" reference="Hermite-formula"} gives $$\begin{aligned} | h_n(x) | & < (2^n n! \sqrt{\pi} )^{-1/2} e^{x^2/2} \left[ n! n^{-n/2} e^{-3x^2/4} \right] \\ & = \pi^{-1/4} 2^{-n/2} (n!)^{1/2} n^{-n/2} e^{-x^2/4}.\end{aligned}$$ We must therefore prove that $2^{-n/2} (n!)^{1/2} n^{-n/2}$ is bounded independently of $n$. Using Stirling's formula, the logarithm of this is given by $$\begin{aligned} \log 2^{-n/2} (n!)^{1/2} n^{-n/2} & = - \frac{n \log 2}{2} + \frac{n \log n}{2} - \frac{n}{2} - \frac{n \log n}{2} + O( \log n) \\ & = - \frac{n \log 2}{2} - \frac{n}{2} + O( \log n),\end{aligned}$$ which is bounded above. This completes the proof. ◻ **Corollary 43**. 
*For any $\lambda \in \mathfrak a^*_\mathbb C$, there is a function $\alpha_{v_0} \in \mathscr{S}_{\rm alg}(\mathcal{Y}(F_{v_0}))$ that is invariant under $\textup{O}(U_W)(F_{v_0}) \times K_{v_0}$, and such that $\widehat{\alpha}_{v_0}(\lambda) \neq 0$.* *Proof.* We first construct a (possibly non-invariant) function satisfying $\widehat{\alpha}_{v_0}(\lambda) \neq 0$. The map $\alpha_{v_0} \to \widehat{\alpha}_{v_0}(\lambda)$ is a tempered distribution, and we proved in Proposition [Proposition 41](#prop:density-in-Schwartz){reference-type="ref" reference="prop:density-in-Schwartz"} that $\mathscr{S}_{\rm alg}(\mathcal{Y}(F_{v_0}))$ is dense in $\mathscr{S}(\mathcal{Y}(F_{v_0}))$. It therefore suffices to find $\alpha_{v_0} \in \mathscr{S}(\mathcal{Y}(F_{v_0}))$ for which $\widehat{\alpha}_{v_0}(\lambda) \neq 0$. To do this, we observe that the orbit map $g \to g^{-1} y_0$ is an embedding of $\text{O} (n) \backslash \text{O} (n,m)$ into $\mathcal{Y}(F_{v_0})$, because $\text{O} (n)$ is the stabilizer of $y_0$. The distribution $\widehat{\alpha}_{v_0}(\lambda)$ is given by integrating $\alpha_{v_0}$ against the function $\varphi_{-\lambda}$ pushed forward to this orbit, and this is clearly nonzero for some $\alpha_{v_0} \in \mathscr{S}(\mathcal{Y}(F_{v_0}))$. It may be checked that the distribution $\alpha_{v_0} \to \widehat{\alpha}_{v_0}(\lambda)$ is invariant under $\textup{O}(U_W)(F_{v_0}) \times K_{v_0}$, and so we may assume that $\alpha_{v_0}$ has the required invariance by averaging. ◻ ## Classical reformulation {#adelic2classical} We would now like to write the period relation in Proposition [Proposition 39](#period-relation){reference-type="ref" reference="period-relation"} more classically, by writing the double quotient space ${\bf G}(F)\backslash{\bf G}(\mathbb A)/K$ as a union of locally symmetric spaces, and the adelic periods $\mathscr{P}_{\bf H}$ as a collection of classical periods on these spaces. We retain all notation from previous sections. 
By a theorem of Borel [@Borel Theorem 5.1], the set of genus classes for $V$, as parametrized by ${\bf G}(F)\backslash{\bf G}(\mathbb A_f)/K_f$, is finite. We fix a complete set of representatives $\{g_i\}_{i\in I}$ for this double coset space. As usual, we let $S=G_\infty/K_\infty$. The map $$\coprod_{i\in I} S\longrightarrow{\bf G}(F)\backslash ({\bf G}(\mathbb A_f)/K_f\times G_\infty/K_\infty),$$ sending $(i,g_\infty K_\infty)\in I\times S$ to the orbit ${\bf G}(F).(g_i K_f,g_\infty K_\infty)$, is clearly surjective, and the fiber over ${\bf G}(F).(g_iK_f, K_\infty)$ is the $\Gamma_i$-orbit of $(i, K_\infty)$, where $\Gamma_i={\bf G}(F)\cap g_iK_fg_i^{-1}$. We therefore obtain an isomorphism $${\bf G}(F)\backslash{\bf G}(\mathbb A)/K=\coprod_{i\in I} \Gamma_i\backslash S.$$ As in the statement of Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"}, we let $Y$ denote the latter disjoint union. For any continuous function $\phi\in L^2([{\bf G}])$, right-invariant under $K$, we use the above decomposition to define a collection of functions $\phi^{(i)}\in L^2(\Gamma_i\backslash S)$ via the rule $\phi^{(i)}(g_\infty)=\phi(g_ig_\infty)$. We would now like to write the ${\bf H}$-period of $\phi$ classically. Let $K_f^{{\bf H}}=K_f\cap {\bf H}(\mathbb A_f)$, and ${\rm Gen}_{{\bf H}}={\bf H}(F)\backslash{\bf H}(\mathbb A_f)/K_f^{\bf H}$. Since $K_\infty^{\bf H}={\bf H}(F_\infty)$ is compact, we have a well-defined injective map $${\rm Gen}_{{\bf H}}={\bf H}(F)\backslash{\bf H}(\mathbb A)/K^{\bf H}\rightarrow {\bf G}(F)\backslash{\bf G}(\mathbb A)/K= Y.$$ The image in $Y$ is a finite collection of points, which (abusing notation) we continue to denote by ${\rm Gen}_{{\bf H}}$. Thus $$\label{classical-geod-period} \mathscr{P}_{\bf H}(\phi)=\sum_{x\in {\rm Gen}_{{\bf H}}} w_x \phi(x),$$ where $w_x$ is the weight given to $x$ by the natural probability measure on ${\rm Gen}_{{\bf H}}$. 
We have just written the ${\bf H}$-period of $\phi$ classically, but in the period relation of Proposition [Proposition 39](#period-relation){reference-type="ref" reference="period-relation"}, we encountered a certain *finite union* (cf. Lemma [Lemma 37](#lemma:finite-double-quotient){reference-type="ref" reference="lemma:finite-double-quotient"}) of weighted ${\bf H}$-periods, of the form $$\label{eq:sum-of-periods} \sum_{[b]\in{\bf H}(\mathbb A_f)\backslash {\bf G}(\mathbb A_f)^{\rm int}/K_f} \textup{vol}({\bf H}(\mathbb A_f)bK_f) \beta_f( b^{-1} [y_0] )\mathscr{P}_{\bf H}(R(b)\overline{\phi_\lambda}).$$ To write this sum classically, we begin by noting that $$\mathscr{P}_{\bf H}(R(b)\overline{\phi_\lambda}) = \int_{[{\bf H}] b} \overline{\phi_\lambda}(x) dx.$$ We let $K_f^{{\bf H}}(b) = b K_f b^{-1} \cap {\bf H}(\mathbb A_f)$, and $K^{{\bf H}}(b) = K_\infty^{\bf H}K_f^{{\bf H}}(b)$. If we define ${\rm Gen}_{{\bf H}}^b = {\bf H}(F)\backslash{\bf H}(\mathbb A_f)/ K^{{\bf H}}(b)$, then as above we have an injection $${\rm Gen}_{{\bf H}}^b = {\bf H}(F)\backslash{\bf H}(\mathbb A)/ K^{{\bf H}}(b) \rightarrow {\bf G}(F)\backslash{\bf G}(\mathbb A)/K= Y$$ sending $x \in {\rm Gen}_{{\bf H}}^b$ to $xb$. It follows that $$\mathscr{P}_{\bf H}(R(b)\overline{\phi_\lambda}) = \sum_{x\in {\rm Gen}_{{\bf H}}^b} w_x^b \overline{\phi_\lambda}(xb)$$ for positive weights $w_x^b$, and this implies that [\[eq:sum-of-periods\]](#eq:sum-of-periods){reference-type="eqref" reference="eq:sum-of-periods"} can be written as $$\label{eq:H-period-double-sum} \sum_{[b]\in{\bf H}(\mathbb A_f)\backslash {\bf G}(\mathbb A_f)^{\rm int}/K_f}\beta(b) \sum_{x\in {\rm Gen}_{{\bf H}}^b} w_x^b \overline{\phi_\lambda}(xb),$$ where we have put $\beta(b)= \textup{vol}({\bf H}(\mathbb A_f)bK_f) \beta_f(b^{-1} [y_0])$. 
Finally, let $\mathscr{X}\subset Y$ denote the finite collection of points $$\label{eq:defn-shifted-pts} \mathscr{X}=\left\{xb\in Y : \; [b]\in{\bf H}(\mathbb A_f)\backslash{\bf G}(\mathbb A_f)^{\rm int}/K_f, \; x\in {\rm Gen}_{{\bf H}}^b \right\}$$ appearing in [\[eq:H-period-double-sum\]](#eq:H-period-double-sum){reference-type="eqref" reference="eq:H-period-double-sum"}. For each $p=xb\in\mathscr{X}$ let $c_p= \textup{vol}({\bf H}(\mathbb A_f)bK_f) \beta_f(b^{-1}[y_0])w_x^b$ -- a positive real number -- denote the weight appearing in that same formula. Then [\[eq:sum-of-periods\]](#eq:sum-of-periods){reference-type="eqref" reference="eq:sum-of-periods"} becomes $$\label{eq:final-classical-period} \sum_{p\in\mathscr{X}} c_p\overline{\phi_\lambda}(p),$$ a weighted sum of point evaluations. *Remark 7*. Recall from §[9.4](#sec:affine){reference-type="ref" reference="sec:affine"} the set-theoretic quotient $Y_q(R)=\text{O} (U_W)(R)\backslash \mathcal{Y}_q(R)$, where $\mathcal{Y}_q$ is the algebraic variety defined in [\[eq:defn-YU\]](#eq:defn-YU){reference-type="eqref" reference="eq:defn-YU"} and $R$ is any $F$-algebra. It seems plausible that the work of Ellenberg--Venkatesh [@EV §3] should provide a parametrization of the collection of points $\mathscr{X}$ in [\[eq:defn-shifted-pts\]](#eq:defn-shifted-pts){reference-type="eqref" reference="eq:defn-shifted-pts"} in terms of $\Gamma$-equivalence classes of "integral points on $Y_q$". Such an interpretation would recover the Rudnick--Sarnak example of Section [9.1](#Maass-period-relation){reference-type="ref" reference="Maass-period-relation"}. As it is not strictly necessary for our purposes, we have not pursued this argument.

# Proof of Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"} {#ProofsOfThms}

We now return to the proof of our main result, Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"}, putting all ingredients together.
We retain the notation from Sections [9.2](#sec:periodnotation){reference-type="ref" reference="sec:periodnotation"}, [9.3](#sec:2periods){reference-type="ref" reference="sec:2periods"}, and [9.5](#sec:level-signature){reference-type="ref" reference="sec:level-signature"}. ## Archimedean representations. We begin by introducing the archimedean representations of ${\bf G}$ and ${\bf G}'$ that we shall work with. For ${\bf G}$, these are simply the spherical representations. For $\lambda \in \mathfrak a^*_\mathbb C$, let $\pi_{\lambda, v_0}$ be the irreducible spherical representation of $G_{v_0}$ with spectral parameter $\lambda$ defined in Proposition [Proposition 24](#O-spherical){reference-type="ref" reference="O-spherical"}, and let $\pi_\lambda$ be the representation of $G_\infty$ obtained as the tensor product of $\pi_{\lambda, v_0}$ with the trivial representation of $G_v$ at the other real places. For ${\bf G}'$, the representations we shall consider are those admitting a one-dimensional $K'_\infty$-type. We define the one-dimensional representation $\tau$ of $K'_\infty$ by setting $$\tau^{-1} = \det{}^{(n-m)/2}\bigotimes_{v\mid\infty, v\neq v_0}\det{}^{d/2}.$$ We choose $\tau$ this way so that we may consider forms that are $\tau^{-1}$-isotypic, and this allows us to write them in a way consistent with our earlier notation in e.g. Section [6.1](#sec:upper-bd-notation){reference-type="ref" reference="sec:upper-bd-notation"}. For $\lambda \in \mathfrak a'^*_\mathbb C$, we let $\pi'_{\lambda, \tau, v_0}$ be the representation of $G'_{v_0}$ with $K'_{v_0}$-type $\tau_{v_0}^{-1} = \det{}^{(n-m)/2}$ and spectral parameter $\lambda$ defined in Lemma [Lemma 30](#tau-spherical-class){reference-type="ref" reference="tau-spherical-class"}. 
For other real places $v$, we likewise use Lemma [Lemma 30](#tau-spherical-class){reference-type="ref" reference="tau-spherical-class"} to define a representation $\pi'_{\tau,v}$ of $G'_v$ with $K'_v$-type $\tau_{v}^{-1} = \det{}^{d/2}$ and spectral parameter $(\tfrac{n}{2} - 1, \ldots, \tfrac{n}{2} - m') \in \mathfrak a'^*_\mathbb C$. We then define the representation $\pi'_{\lambda, \tau}$ of $G'_\infty$ to be the tensor product of $\pi'_{\lambda, \tau, v_0}$ and the $\pi'_{\tau,v}$ for the other real $v$. The results of Section [8.4](#sec:arch-lift){reference-type="ref" reference="sec:arch-lift"} show that the local factors of $\pi_\lambda$ and $\pi'_{\lambda, \tau}$ are matched by the theta correspondence, assuming they actually occur in it. ## Automorphic forms We introduce notation for sets of automorphic forms and representations on ${\bf G}$ and ${\bf G}'$. Let $Q \geqslant 1$ be a constant, to be fixed in Lemma [Lemma 46](#cmpt-period-asymp){reference-type="ref" reference="cmpt-period-asymp"} below. $\mathscr{A}^{{\bf G}}(K,\nu,Q)$ is the set of isomorphism classes of automorphic representations $\pi$ of ${\bf G}$ with the property that $\pi_f^{K_f} \neq 0$, and $\pi_\infty \simeq \pi_\lambda$ with $\|{\rm Im}\,\lambda-\nu\| \leqslant Q$. We define $\widetilde{\mathscr{A}}^{{\bf G}}(K,\nu,Q)$ to be the set of pairs $(\pi, E)$, where $\pi \in \mathscr{A}^{{\bf G}}(K,\nu,Q)$ and $E$ is a homomorphism from $\pi$ into the space of automorphic forms on ${\bf G}$. Then there is a map $\widetilde{\mathscr{A}}^{{\bf G}}(K,\nu,Q) \to \mathscr{A}^{{\bf G}}(K,\nu,Q)$. The fiber of this map over a representation $\pi$ is the multiplicity space of $\pi$, which we denote $m(\pi, {\bf G})$. We introduce similar notation for $\tau$-spherical representations of ${\bf G}'$. 
We let $\mathscr{A}^{{\bf G}'}_{\rm cusp}(K',\nu,Q, \tau)$ be the set of isomorphism classes of cuspidal automorphic representations $\pi'$ of ${\bf G}'$ such that $(\pi_f')^{K_f'} \neq 0$, and $\pi_\infty' \simeq \pi'_{\lambda, \tau}$ with $\|{\rm Im}\,\lambda-\nu\| \leqslant Q$. We define $\widetilde{\mathscr{A}}^{{\bf G}'}_{\rm cusp}(K',\nu,Q, \tau)$, and the multiplicity spaces $m(\pi', {\bf G}')$, in the same way as for ${\bf G}$, noting that we now only consider homomorphisms from $\pi'$ into the space of cusp forms. We next introduce spaces of automorphic forms $\mathcal{E}^{{\bf G}}(K,\nu,Q)$ and $\mathcal{E}^{{\bf G}'}_{\rm cusp}(K',\nu,Q, \tau)$ corresponding to $\mathscr{A}^{{\bf G}}(K,\nu,Q)$ and $\mathscr{A}^{{\bf G}'}_{\rm cusp}(K',\nu,Q, \tau)$. The space $\mathcal{E}^{{\bf G}}(K,\nu,Q)$ is spanned by automorphic forms on ${\bf G}$ that are right invariant under $K$, are eigenfunctions for the ring of differential operators $\mathbb{D}(G_{v_0}/K_{v_0})$ defined in Section [7.2](#sec:O-spherical){reference-type="ref" reference="sec:O-spherical"}, and whose spectral parameter $\lambda$ satisfies $\|{\rm Im}\,\lambda-\nu\| \leqslant Q$. Such automorphic forms can be viewed as functions on $$Y={\bf G}(F)\backslash{\bf G}(\mathbb A)/K.$$ We then have $$\mathcal{E}^{{\bf G}}(K,\nu,Q) = \bigoplus_{\pi \in \mathscr{A}^{{\bf G}}(K,\nu,Q)} m(\pi, {\bf G}) \otimes \pi_f^{K_f}.$$ We define $\mathcal{E}^{{\bf G}'}_{\rm cusp}(K',\nu,Q, \tau)$ in the analogous way, and likewise have $$\mathcal{E}^{{\bf G}'}_{\rm cusp}(K',\nu,Q, \tau) = \bigoplus_{\pi' \in \mathscr{A}^{{\bf G}'}_{\rm cusp}(K',\nu,Q, \tau) } m(\pi', {\bf G}') \otimes (\pi_f')^{K_f'}.$$ We may write these spaces classically as follows. We begin with the case of $\mathcal{E}^{{\bf G}}(K,\nu,Q)$.
Recall from Section [9.9](#adelic2classical){reference-type="ref" reference="adelic2classical"} the identification $$Y=\bigcup_{ i \in I}\Gamma_i\backslash\mathbb{H}^{n,m}.$$ Using the notation from [\[eq:E-lambda-cusp-tau\]](#eq:E-lambda-cusp-tau){reference-type="eqref" reference="eq:E-lambda-cusp-tau"} we have $$\label{EGclassical} \mathcal{E}^{{\bf G}}(K,\nu,Q) = \sum_{ i \in I}\sum_{\substack{\mu\in\Lambda(\Gamma_i)\\ \|{\rm Im}\,\mu-\nu\|\leqslant Q}}\mathcal{E}_\mu(\Gamma_i\backslash\mathbb{H}^{n,m}),$$ where we have omitted the cuspidality condition from the notation as it is vacuous for anisotropic ${\bf G}$. To describe the case of $\mathcal{E}^{{\bf G}'}_{\rm cusp}(K',\nu,Q, \tau)$, let $\mathsf{S}'=(\mathcal{H}^m)^{[F:\mathbb Q]}$, and let $\Gamma'={\bf G}'(F)\cap K_f'$. Then, since ${\bf G}'$ is split and simply connected, the genus of $\Gamma'$ is $1$, and we have $$\label{EG'classical} \Gamma'\backslash {\bf G}'(F_\infty)={\bf G}'(F)\backslash {\bf G}'(\mathbb A)/K_f',\qquad \Gamma'\backslash \mathsf{S}'={\bf G}'(F)\backslash {\bf G}'(\mathbb A)/K'.$$ We now adapt the notation from [\[eq:E-lambda-cusp-tau\]](#eq:E-lambda-cusp-tau){reference-type="eqref" reference="eq:E-lambda-cusp-tau"} and [\[eq:L2-cusp-tau\]](#eq:L2-cusp-tau){reference-type="eqref" reference="eq:L2-cusp-tau"} to write $$\mathcal{E}^{{\bf G}'}_{\rm cusp}(K',\nu,Q, \tau) = \bigoplus_{\substack{\mu \in\Lambda_{\rm cusp}(\Gamma';\tau)\\ \|{\rm Im}\,\mu-\nu\|\leqslant Q}}\mathcal{E}^{\rm cusp}_\mu(\Gamma'\backslash\mathsf{S}',\tau).$$

## Lifted forms

For any $\pi \in \mathscr{A}^{{\bf G}}(K,\nu,Q)$, we wish to define the subspace $m(\pi,{\bf G})^\Theta \subset m(\pi,{\bf G})$ of representations that arise as theta lifts of cusp forms on ${\bf G}'$. If any local factor of $\pi$ does not occur in the local theta correspondence, we set $m(\pi,{\bf G})^\Theta = 0$.
Otherwise, we know that $\pi' = \otimes \theta(\pi_v, W)$ satisfies the local conditions to lie in $\mathscr{A}^{{\bf G}'}_{\rm cusp}(K',\nu,Q, \tau)$. At archimedean places this follows from Section [8.4](#sec:arch-lift){reference-type="ref" reference="sec:arch-lift"}, while at finite places this follows from results of Cossutta [@Cossutta Proposition 2.11] after choosing $K_f'$ small enough. The span of the global lifts $\Theta(\widetilde{\pi}', V)$, where $\widetilde{\pi}'$ runs over occurrences of $\pi'$ in the space of cusp forms on ${\bf G}'$, is a subspace of the $\pi$-isotypic space on ${\bf G}$. As such, it has the form $m(\pi,{\bf G})^\Theta \otimes \pi$ for a subspace $m(\pi,{\bf G})^\Theta \subset m(\pi, {\bf G})$ with $\dim m(\pi,{\bf G})^\Theta \leqslant \dim m(\pi', {\bf G}')$. In the same way, the orthogonal complement of the space of lifts in the $\pi$-isotypic subspace has the form $m(\pi,{\bf G})^{\Theta^\perp} \otimes \pi$ for some $m(\pi,{\bf G})^{\Theta^\perp} \subset m(\pi, {\bf G})$. We define $\widetilde{\mathscr{A}}^{{\bf G},\Theta}(K,\nu,Q)$ to be the subset of $\widetilde{\mathscr{A}}^{{\bf G}}(K,\nu,Q)$ corresponding to the spaces $m(\pi,{\bf G})^\Theta$, and likewise for $\widetilde{\mathscr{A}}^{{\bf G},\Theta^\perp}(K,\nu,Q)$. We define $$\mathcal{E}^{{\bf G},\Theta}(K,\nu,Q) = \bigoplus_{\pi \in \mathscr{A}^{{\bf G}}(K,\nu,Q)} m(\pi, {\bf G})^\Theta \otimes \pi_f^{K_f}$$ and $$\mathcal{E}^{{\bf G},\Theta^\perp}(K,\nu,Q) = \bigoplus_{\pi \in \mathscr{A}^{{\bf G}}(K,\nu,Q)} m(\pi, {\bf G})^{\Theta^\perp} \otimes \pi_f^{K_f}$$ to be the corresponding spaces of automorphic forms.

## The distinction argument

For a real parameter $T>0$, recall the notion of $T$-regular from Section [3.1](#sec:gp-decomp){reference-type="ref" reference="sec:gp-decomp"}. **Proposition 44**. *Suppose that $\nu$ is $Q$-regular.
If $\phi_\lambda \in \mathcal{E}^{{\bf G},\Theta^\perp}(K,\nu,Q)$, then $$\label{periodvanish} \sum_{[b]\in{\bf H}(\mathbb A_f)\backslash {\bf G}(\mathbb A_f)^{\rm int}/K_f} \textup{vol}({\bf H}(\mathbb A_f)bK_f) \beta_f( b^{-1} [y_0]) \mathscr{P}_{\bf H}(R(b)\phi_\lambda) = 0.$$* *Proof.* We may assume that $\phi_\lambda \in \widetilde{\pi}$ for some $\widetilde{\pi} \in \widetilde{\mathscr{A}}^{{\bf G},\Theta^\perp}(K,\nu,Q)$. We prove the contrapositive, by assuming that the left-hand side of [\[periodvanish\]](#periodvanish){reference-type="eqref" reference="periodvanish"} is nonzero and deducing that $\phi_\lambda \notin \mathcal{E}^{{\bf G},\Theta^\perp}(K,\nu,Q)$. Under this assumption, Corollary [Corollary 40](#distinction){reference-type="ref" reference="distinction"} shows that $\phi'_\lambda = \Theta(\phi_\lambda, \alpha; W)$ is nonzero for some choice of $\alpha_\infty$, and hence that $\widetilde{\pi}' = \Theta( \widetilde{\pi}; W)$ is also nonzero. Our assumption that $\nu$ is $Q$-regular implies that $\lambda \in i \mathfrak a^*$, and hence that $\widetilde{\pi}_{v_0}$ is tempered. Lemma [Lemma 36](#cuspidal-theta){reference-type="ref" reference="cuspidal-theta"} then implies that $\widetilde{\pi}' \in \widetilde{\mathscr{A}}^{{\bf G}'}_{\rm cusp}(K',\nu,Q, \tau)$. We now have $$0 \neq \langle \phi'_\lambda, \phi'_\lambda \rangle = \langle \Theta(\phi_\lambda, \alpha; W), \phi'_\lambda \rangle = \langle \phi_\lambda, \Theta( \phi'_\lambda, \alpha; V) \rangle,$$ which shows that $\phi_\lambda \notin \mathcal{E}^{{\bf G},\Theta^\perp}(K,\nu,Q)$, as required. ◻ **Lemma 45**. *We have $\dim \mathcal{E}^{{\bf G},\Theta}(K,\nu,Q) \ll \left(\log (3+\|\nu\|)\right)^{[F:\mathbb Q]m}\beta_{\mathcal{H}^m}(\nu)$.* *Proof.* Bernstein's uniform admissibility theorem implies that $\dim \pi_f^{K_f} \ll 1$ for any $\pi \in \mathscr{A}^{{\bf G}}(K,\nu,Q)$. 
We therefore have $$\begin{aligned} \dim \mathcal{E}^{{\bf G},\Theta}(K,\nu,Q) & = \sum_{\pi \in \mathscr{A}^{{\bf G}}(K,\nu,Q)} \dim m(\pi, {\bf G})^\Theta \dim \pi_f^{K_f} \\ & \ll \sum_{\pi \in \mathscr{A}^{{\bf G}}(K,\nu,Q)} \dim m(\pi, {\bf G})^\Theta \\ & \leqslant \sum_{\pi' \in \mathscr{A}^{{\bf G}'}_{\rm cusp}(K',\nu,Q, \tau)} \dim m(\pi', {\bf G}') \\ & \leqslant \dim \mathcal{E}^{{\bf G}'}_{\rm cusp}(K',\nu,Q, \tau).\end{aligned}$$ The bound now follows from the classical description [\[EG\'classical\]](#EG'classical){reference-type="eqref" reference="EG'classical"} of $\mathcal{E}^{{\bf G}'}_{\rm cusp}(K',\nu,Q, \tau)$ and Proposition [Proposition 14](#Weyl-upper-bd){reference-type="ref" reference="Weyl-upper-bd"}. ◻ We let $\mathcal{B}(\nu)$ be a choice of orthonormal basis for $\mathcal{E}^{{\bf G}}(K,\nu,Q)$, and define $$\mathcal{M}_{\bf H}(\nu) = \sum_{\phi_\mu\in\mathcal{B}(\nu)} \bigg|\sum_{[b]\in{\bf H}(\mathbb A_f)\backslash{\bf G}(\mathbb A_f)^{\rm int}/K_f} \textup{vol}({\bf H}(\mathbb A_f)bK_f) \beta_f( b^{-1} [y_0])\mathscr{P}_{\bf H}(R(b)\phi_\mu)\bigg|^2.$$ Note that $\mathcal{M}_{\bf H}(\nu)$ is independent of the choice of basis $\mathcal{B}(\nu)$. **Lemma 46**. *If $Q$ is chosen large enough depending on $Y$, we have $\mathcal{M}_{\bf H}(\nu)\gg\beta_{\mathbb{H}^{n,m}}(\nu)$.* *Proof.* We apply the classical description [\[EGclassical\]](#EGclassical){reference-type="eqref" reference="EGclassical"} of $\mathcal{E}^{{\bf G}}(K,\nu,Q)$, and choose the basis $\mathcal{B}(\nu)$ to be a union of bases $\mathcal{B}_i(\nu)$ for the corresponding spaces $$\sum_{\substack{\mu\in\Lambda(\Gamma_i)\\ \|{\rm Im}\,\mu-\nu\|\leqslant Q}}\mathcal{E}_\mu(\Gamma_i\backslash\mathbb{H}^{n,m})$$ for each classical quotient $\Gamma_i\backslash\mathbb{H}^{n,m}$. Thus, each member of $\mathcal{B}(\nu)$ is supported on a single connected component of $Y$.
Recall the finite set of points $\mathscr{X}$ in $Y$ described in [\[eq:defn-shifted-pts\]](#eq:defn-shifted-pts){reference-type="eqref" reference="eq:defn-shifted-pts"}. For each $i$ let $\mathscr{X}_i$ denote the subset of $\mathscr{X}$ consisting of those points lying in the $i$-th connected component $\Gamma_i\backslash\mathbb{H}^{n,m}$. We deduce from [\[eq:final-classical-period\]](#eq:final-classical-period){reference-type="eqref" reference="eq:final-classical-period"} that the contribution to $\mathcal{M}_{\bf H}(\nu)$ coming from the lattice $\Gamma_i$ is $$\sum_{\phi_\mu\in\mathcal{B}_i(\nu)}\big|\sum_{p \in \mathscr{X}_i} c_p \phi_\mu(p) \big|^2,$$ where $c_p$ is as in [\[eq:final-classical-period\]](#eq:final-classical-period){reference-type="eqref" reference="eq:final-classical-period"}. After choosing $Q$ large enough, we may therefore apply Proposition [Proposition 26](#local-Weyl){reference-type="ref" reference="local-Weyl"} to show that the contribution of each $\mathcal{B}_i(\nu)$ to $\mathcal{M}_{\bf H}(\nu)$ is $\gg\beta_{\mathbb{H}^{n,m}}(\nu)$, as required. ◻ We may now combine these ingredients to prove Theorem [Theorem 1](#sup-thm){reference-type="ref" reference="sup-thm"}. Let $Q$ be as in Lemma [Lemma 46](#cmpt-period-asymp){reference-type="ref" reference="cmpt-period-asymp"}, and assume that $\nu$ is $Q$-regular. Choose a basis $\mathcal{B}(\nu)$ such that $\mathcal{B}(\nu) = \mathcal{B}(\nu)^\Theta \cup \mathcal{B}(\nu)^{\Theta^\perp}$, where $\mathcal{B}(\nu)^\Theta$ and $\mathcal{B}(\nu)^{\Theta^\perp}$ are bases for $\mathcal{E}^{{\bf G},\Theta}(K,\nu,Q)$ and $\mathcal{E}^{{\bf G},\Theta^\perp}(K,\nu,Q)$ respectively. 
Because the forms in $\mathcal{B}(\nu)^{\Theta^\perp}$ make no contribution to $\mathcal{M}_{\bf H}(\nu)$, Lemma [Lemma 46](#cmpt-period-asymp){reference-type="ref" reference="cmpt-period-asymp"} gives $$\sum_{\phi_\mu\in\mathcal{B}(\nu)^\Theta }\bigg|\sum_{[b]\in{\bf H}(\mathbb A_f)\backslash{\bf G}(\mathbb A_f)^{\rm int}/K_f} \textup{vol}({\bf H}(\mathbb A_f)bK_f) \beta_f( b^{-1} [y_0]) \mathscr{P}_{\bf H}(R(b)\phi_\mu)\bigg|^2 \gg \beta_{\mathbb{H}^{n,m}}(\nu).$$ Moreover, Lemma [Lemma 45](#lemma:Theta-map){reference-type="ref" reference="lemma:Theta-map"} implies that there is some $\phi_\mu\in\mathcal{B}(\nu)^\Theta$ such that $$\begin{gathered} \bigg|\sum_{[b]\in{\bf H}(\mathbb A_f)\backslash{\bf G}(\mathbb A_f)^{\rm int}/K_f} \textup{vol}({\bf H}(\mathbb A_f)bK_f) \beta_f( b^{-1} [y_0]) \mathscr{P}_{\bf H}(R(b)\phi_\mu)\bigg|^2 \\ \gg \left(\log (3+\|\nu\|)\right)^{-[F:\mathbb Q]m} \beta_{\mathbb{H}^{n,m}}(\nu) / \beta_{\mathcal{H}^m}(\nu).\end{gathered}$$ This implies the same lower bound for $\| \phi_\mu \|_\infty^2$, which completes the proof.

# References

- R. Baeza, *Quadratic forms over semilocal rings*, Lecture Notes in Mathematics, vol. 655, Springer-Verlag, Berlin, 1978.
- V. Blomer, A. Pohl, *The sup-norm problem on the Siegel modular space of rank two*, Amer. J. Math. 138 (2016), 999--1027.
- A. Borel, *Introduction aux Groupes Arithmétiques*, Act. Sci. Ind., Hermann, Paris, 1969.
- F. Brumley, S. Marshall, *Lower bounds for Maass forms on semisimple groups*, Compositio Math. 156 (2020), no. 5, 959--1003.
- A. Borel, *Some finiteness properties of adele groups over number fields*, Publ. Math. IHÉS 16 (1963), 5--30.
- M. Cossutta, *Asymptotique des nombres de Betti des variétés arithmétiques*, Duke Math. J. 150 (2009), no. 3, 443--488.
- M. Cossutta, S. Marshall, *Theta lifting and cohomology growth in $p$-adic towers*, Int. Math. Res. Not. IMRN 11, 2601--2623.
- R. Courant, D. Hilbert, *Methods of Mathematical Physics, vol. 1*, Wiley, 1989.
- H. Donnelly, *On the cuspidal spectrum for finite volume symmetric spaces*, J. Differential Geom. 17 (1982), 239--253.
- H. Donnelly, *Exceptional sequences of eigenfunctions for hyperbolic manifolds*, Proc. Amer. Math. Soc. 135 (2007), no. 5, 1551--1555.
- J. J. Duistermaat, J. A. C. Kolk, V. S. Varadarajan, *Spectra of compact locally symmetric manifolds of negative curvature*, Invent. Math. 52 (1979), 27--94.
- J. Ellenberg, A. Venkatesh, *Local-global principles for representations of quadratic forms*, <https://arxiv.org/abs/math/0604232>, extended online version of the paper published in Invent. Math. 171 (2008), 257--279.
- T. Finis, E. Lapid, *On the remainder term of the Weyl law for congruence subgroups of Chevalley groups*, Duke Math. J. 170, no. 4, 653--695.
- W.-T. Gan, *Periods and theta correspondence*, Proc. Sympos. Pure Math. 101 (2019). doi.org/10.1090/pspum/101/01792
- W.-T. Gan, S. Takeda, *A proof of the Howe duality conjecture*, J. Amer. Math. Soc. 29 (2016), no. 2, 473--493.
- R. Gangolli, *On the Plancherel formula and the Paley--Wiener theorem for spherical functions on semisimple Lie groups*, Ann. of Math. (2) 93 (1971), 150--165.
- R. Gangolli, V. S. Varadarajan, *Harmonic Analysis of Spherical Functions on Real Reductive Groups*, vol. 101, Springer, 2012.
- M. Harris, D. Soudry, R. Taylor, *$l$-adic representations associated to modular forms over imaginary quadratic fields*, Invent. Math. 122 (1993), 377--411.
- G. Heckman, *Hypergeometric and spherical functions*, in Heckman, Schlichtkrull, *Harmonic Analysis and Special Functions on Symmetric Spaces*, Academic Press, San Diego, 1994.
- S. Helgason, *Groups and Geometric Analysis: Integral Geometry, Invariant Differential Operators, and Spherical Functions*, corrected reprint of the 1984 original, Math. Surveys Monogr. 83, Amer. Math. Soc., Providence, 2000.
- S. Helgason, *Some results on invariant differential operators on symmetric spaces*, Amer. J. Math. 114 (1992), no. 4, 789--811.
- R. Howe, *$\theta$-series and invariant theory*, in Automorphic forms, representations and $L$-functions, 275--286, Proc. Sympos. Pure Math. XXXIII, Part 1, Amer. Math. Soc., Providence, 1979.
- R. Howe, *Transcending classical invariant theory*, J. Amer. Math. Soc. 2 (1989), no. 3, 535--552.
- M. Kashiwara, M. Vergne, *On the Segal-Shale-Weil representations and harmonic polynomials*, Invent. Math. 44 (1978), 1--48.
- A. Knapp, *Lie Groups Beyond an Introduction*, Progress in Mathematics, vol. 140, Birkhäuser, 2002.
- F. Knop, B. Schalke, *The dual group of a spherical variety*, Trans. Moscow Math. Soc. 78 (2017), 187--216.
- H. Koch, D. Tataru, *$L^p$ eigenfunction bounds for the Hermite operator*, Duke Math. J. 128 (2005), no. 2, 369--392.
- S. Kudla, *Notes on the local theta correspondence*, lecture notes from the European School of Group Theory (1996), available at <http://www.math.toronto.edu/~skudla/castle.pdf>.
- S. Kudla, *Splitting metaplectic covers of reductive dual pairs*, Israel J. Math. 87 (1994), no. 1--3, 361--401.
- S. Kudla, S. Rallis, *A regularized Siegel-Weil formula: The first term identity*, Ann. of Math. (2) 140 (1994), 1--80.
- E. Lapid, O. Offen, *Compact unitary periods*, Compositio Math. 143 (2007), 323--338.
- E. Lindenstrauss, A. Venkatesh, *Existence and Weyl's law for spherical cusp forms*, Geom. Funct. Anal. 17 (2007), no. 1, 220--251.
- H. Maass, *Über die räumliche Verteilung der Punkte in Gittern mit indefiniter Metrik*, Math. Ann. 138 (1959), 287--315.
- J. Matz, N. Templier, *Sato-Tate equidistribution for families of Hecke--Maass forms on $\text{SL} (n,\mathbb R)/{\rm SO}(n)$*, Algebra Number Theory 15 (2021), no. 6, 1343--1428.
- S. Miller, *On the existence and temperedness of cusp forms for $SL_3(\mathbb Z)$*, J. Reine Angew. Math. 533 (2001), 127--169.
- C. C. Moore, *Compactifications of symmetric spaces II: the Cartan domains*, Amer. J. Math. 86 (1964), 358--378.
- C. Moeglin, M.-F. Vignéras, J.-L. Waldspurger, *Correspondances de Howe sur un corps $p$-adique*, Lecture Notes in Mathematics, vol. 1291, Springer-Verlag, Berlin, 1987.
- C. Moeglin, J.-L. Waldspurger, *Spectral Decomposition and Eisenstein Series*, Cambridge University Press, 1995.
- A. Paul, *On the Howe correspondence for symplectic-orthogonal dual pairs*, J. Funct. Anal. 228 (2005), no. 2, 270--310.
- P. Perrin, *Représentations de Schrödinger, indice de Maslov et groupe métaplectique*, in Non commutative harmonic analysis and Lie groups (Marseille-Luminy, 1980), Springer Lecture Notes 880, Berlin-Heidelberg-New York.
- T. Przebinda, *The duality correspondence of infinitesimal characters*, Colloq. Math. 70 (1996), no. 1, 93--102.
- M. Plancherel, W. Rotach, *Sur les valeurs asymptotiques des polynomes d'Hermite*, Comment. Math. Helv. 1 (1929), 227--254.
- S. Rallis, *Langlands' functoriality and the Weil representation*, Amer. J. Math. 104 (1982), no. 3, 469--515.
- S. Rallis, *On the Howe duality conjecture*, Compositio Math. 51 (1984), no. 3, 333--399.
- R. Ranga Rao, *On some explicit formulas in the theory of Weil representation*, Pacific J. Math. 157 (1993), no. 2, 335--371.
- Z. Rudnick, P. Sarnak, *The behaviour of eigenstates of arithmetic hyperbolic manifolds*, Comm. Math. Phys. 161 (1994), 195--213.
- P. Sarnak, *Letter to Morawetz*, available at <http://publications.ias.edu/sarnak/paper/480>.
- Y. Sakellaridis, A. Venkatesh, *Periods and harmonic analysis on spherical varieties*, Astérisque 396 (2017), viii+360 pp.
- H. Schlichtkrull, *One-dimensional $K$-types in finite dimensional representations of semisimple Lie groups: A generalization of Helgason's theorem*, Math. Scand. 54 (1984), 279--294.
- N.
Shimeno, *The Plancherel formula for spherical functions with one-dimensional $K$-type on a simply connected simple Lie group of Hermitian type,* J. Funct. Anal. 121, 330-388 (1994). N. Shimeno, *Eigenspaces of invariant differential operators on a homogeneous line bundle on a Riemannian symmetric space,* J. Fac. Sci. Univ. Tokyo, Sect. IA, Math., 37 (1990), 201--234. G. Szegö, *Orthogonal Polynomials*, Colloquium Publications, vol. 23 (4th ed.), American Mathematical Society 1975. J.-L. Waldspurger, *Démonstration d'une conjecture de dualité de Howe dans le cas $p$-adique, $p\neq 2$,* Festschrift in honor of I. I. Piatetski-Shapiro on the occasion of his sixtieth birthday, Part I, Israel Math. Conf. Proc. 2, Weizmann, Jerusalem, 1990, pp. 267--324. A. Weil, *Sur certains groupes d'opérateurs unitaires*, Acta Math. 111 (1964), 143--211. Ch-B. Zhu, *Representations with scalar $K$-types and applications.* Isr. J. Math. 135, 111--124 (2003). [^1]: We remark that, while our main theorem concerns Maass forms on the disconnected group $\text{O} (n,m)$, we show in Section [7](#sec:points){reference-type="ref" reference="sec:points"} that the notion of spectral parameter on this group is the same as for the connected Lie group ${\rm SO}(n,m)^0$ when $n > m$. [^2]: In §§[3](#sec:notation){reference-type="ref" reference="sec:notation"}-[5](#sec:test-function){reference-type="ref" reference="sec:test-function"}, we will assume $G$ to be connected. The results proved in these sections will be applied in §§[6](#sec:upper-bd-cusp-spec){reference-type="ref" reference="sec:upper-bd-cusp-spec"}-[7](#sec:points){reference-type="ref" reference="sec:points"} to the neutral component of the $F_{v_0}$-points of a semisimple group ${\bf G}$ defined over a totally real number field $F$, with distinguished real place $v_0$. 
This neutral component is denoted by $G^0$ in Section [6](#sec:upper-bd-cusp-spec){reference-type="ref" reference="sec:upper-bd-cusp-spec"} and by $G^{00}$ in Section [7](#sec:points){reference-type="ref" reference="sec:points"}. [^3]: There is a misprint in [@Shimeno (6.13)]: the condition $\lambda_r<0$ should be $\lambda_j<0$. [^4]: We caution the reader that the group of real points $P_\mathbb Q$ of ${\bf P}_\mathbb Q$ is not necessarily a minimal $\mathbb R$-parabolic of $G$, the latter having been denoted simply by $P$ in Section [3.1](#sec:gp-decomp){reference-type="ref" reference="sec:gp-decomp"}. We include the $\mathbb Q$ subscript in $P_\mathbb Q$ to distinguish it from $P$ for this reason, but we refrain from including the same subscript on related subgroups in this section, to avoid notational overload. [^5]: As is well-known, we may write $$\widetilde{\text{Sp} }(\mathcal{W})_k=\widehat{\text{Sp} }(\mathcal{W})_k\times_{\mu_2} S^1,$$ where $\widehat{\text{Sp} }(\mathcal{W})_k$ is the unique non-split 2-fold central extension of $\text{Sp} (\mathcal{W})_k$ and $\mu_2=\{\pm 1\}$. We prefer to work with $\widetilde{\text{Sp} }(\mathcal{W})_k$ rather than $\widehat{\text{Sp} }(\mathcal{W})_k$, since we shall work with subgroups of $\text{Sp} (\mathcal{W})_k$ which split over the former without splitting over the latter; see [@MVW II.9, Remarque]. For our purposes, we could also use the degree $8$ cover $\widehat{\text{Sp} }(\mathcal{W})_k\times_{\mu_2} \mu_8$. [^6]: This is commonly quoted from Kudla's notes [@KudlaCastle Proposition 3.2]. We take this opportunity to correct a typographical error in the statement of this proposition. Namely, in part (ii), the equation $\theta(\lambda) = (\chi_V\lambda_1,\ldots ,\chi_V\lambda_n, |\cdot |^{\frac{m}{2}-n},\ldots ,|\cdot |^{\frac12 \dim V_0})$ should read $\theta(\lambda) = (\chi_V\lambda_1,\ldots ,\chi_V\lambda_n, |\cdot |^{\frac{m}{2}-n-1},\ldots ,|\cdot |^{\frac12 \dim V_0})$.
{ "id": "2309.06433", "title": "Concentration properties of theta lifts on orthogonal groups", "authors": "Farrell Brumley, Simon Marshall", "categories": "math.NT math.AP", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We give a necessary and sufficient condition for two circles, each with finitely many points added inside, to be betweenness isomorphic. We fully characterize the betweenness isomorphism classes in the family consisting of all circles with three collinear points inside. address: - | Institute of Mathematics\ Czech Academy of Sciences\ Žitná 25\ 115 67 Praha 1\ Czech Republic - | Institute of Mathematics\ Czech Academy of Sciences\ Žitná 25\ 115 67 Praha 1\ Czech Republic - | Institute of Mathematics\ University of Silesia\ Bankowa 14\ PL-40-007 Katowice\ Poland author: - Martin Doležal - Jan Kolář - Janusz Morawiec bibliography: - DKM-bibliography.bib title: Betweenness isomorphism classes of circles with finitely many points inside --- [^1] # Introduction Many interesting, natural and important mathematical concepts can be defined in the abstract setting. One of them is a ternary relation called *betweenness*, which is widely studied in connection with broadly understood geometry (see, e.g., [@Soltan1984; @AdelekeNeumann1998; @Pambuccian2011] and the references therein). The study of this topic, including axiomatization, goes back to [@Pasch1882; @HuntingtonKline1917; @Huntington1924], where it was introduced for the purposes of plane geometry. In later years, betweenness was considered not only as a purely theoretical concept of plane geometry, but was also studied in different contexts in connection with mathematical objects such as algebra (see, e.g., [@Hedlikova1983; @ChajdaKolarikLanger2013; @JostWenzel2023]), convex structures (see, e.g., [@Vel1993; @Kubis2002; @Chvatal2009; @AndersonBankstonMcCluskey2021]), graphs (see, e.g., [@MorganaMulder2002; @ChangatNarasimha-ShenoiSeethakuttyamma2019; @Courcelle2020; @Courcelle2021]), lattices (see, e.g., [@SmileyTransue1943; @DuvelmeyerWenzel2004]), metric and normed spaces (see, e.g.,
[@Toranzos1971; @DiminnieWhite1981; @Simovici2009; @BankstonMcCluskey2023]), ordered sets (see, e.g., [@Sholander1952; @Fishburn1971; @ZhangPerez-FernandezBeats2019; @Lihova2000]), topological structures (see, e.g., [@Bankston2013; @Bankston2015; @Shakir2023]), and many others (see, e.g., [@ChvatalWu2012; @BrunoMcCluskeySzeptycki2017; @Shi2022; @ZhaoZhaoAhang2023]). There are three essential generalizations of *linear betweenness* (induced by a fixed linear ordering on a given line): *algebraic betweenness* (applied to vector spaces), *metric betweenness* (considered on semimetric spaces), and *lattice betweenness* (studied on lattice objects). A comprehensive comparison of these three relations can be found in [@Smiley1943]. In this paper we are only interested in algebraic betweenness on certain subsets of the Euclidean plane (called also *Euclidean betweenness*) defined as follows: Given a subset $A$ of Euclidean space, we say that a point $x\in A$ is *between* points $a\in A$ and $b\in A$ (denoted by $B_A(a,x,b)$ in the language of the betweenness relation) if and only if $x=(1-\lambda)a+\lambda b$ with some $\lambda\in[0,1]$. Denoting by $[a,b]$ the (closed) linear segment connecting $a$ and $b$, i.e., $[a,b]=\{\lambda a+(1-\lambda)b:\lambda\in[0,1]\}$, we can equivalently say that a point $x\in A$ is between $a\in A$ and $b\in A$ if and only if $x\in[a,b]$. Euclidean betweenness can be easily generalized to vector spaces over arbitrary ordered fields; however, we are not going to explore this direction. Given two betweenness structures $(X,B_X)$ and $(Y,B_Y)$, a map $f\colon X\to Y$ is said to be: 1. *betweenness preserving* if $$B_X(a,x,b) \implies B_Y(f(a),f(x),f(b))$$ for all $a,x,b \in X$ (see e.g. [@HouMcColm2008]); 2. *betweenness isomorphism* if it is a bijection and $$B_X(a,x,b) \iff B_Y(f(a),f(x),f(b))$$ for all $a,x,b \in X$ (cf. [@Hedlikova1981] for the lattice case). 
In the case of Euclidean betweenness, we will use the notion of *betweenness homomorphisms* instead of betweenness preserving maps, i.e., given sets $S,R\subset\mathbb R^2$, we call a map $f\colon S\to R$ a betweenness homomorphism if, for all $a,b,c\in S$ such that $c$ belongs to the (open) segment $(a,b):=[a,b]\setminus\{a,b\}$, we have $f(c)\in(f(a),f(b))$. We can obviously replace open segments with closed ones in this definition; this does not change the concept, and it then agrees with the definition of a betweenness preserving map. Therefore, throughout this paper, a betweenness isomorphism is a bijective map $f\colon S\to R$ such that for every $a,b,c\in S$, we have $c\in(a,b)\iff f(c)\in(f(a),f(b))$. We say that sets $S,R\subset\mathbb R^2$ are *betweenness isomorphic* if there exists a betweenness isomorphism from $S$ to $R$. Note that every betweenness preserving map that is a bijection between linearly ordered sets is automatically a betweenness isomorphism, but this is not the case for Euclidean betweenness; indeed, every bijection $f\colon K\to [0,1)\times\{0\}\subset\mathbb R^2$, where $K\subset\mathbb R^2$ is the unit circle centred at the origin, is a betweenness homomorphism (vacuously, since no three points of $K$ are collinear); however, its inverse $f^{-1}$ is very far from being a betweenness homomorphism. Among subsets of the plane, one can easily find examples of those $A\subset\mathbb R^2$ for which the Euclidean betweenness is *discrete*, i.e., $B_A(a,x,b)$ holds if and only if $x\in\{a,b\}$ or, equivalently, $A$ does not contain three distinct collinear points. A typical example of such a set is the unit circle in the plane. Another example is any set $M$ (first constructed in [@Mazurkiewicz1914]) that has exactly two points in common with every line in the plane. Note that the set $M$ (described above) and $K$ (the unit circle in the plane) are trivially betweenness isomorphic: both carry the discrete betweenness relation, so any bijection between them is a betweenness isomorphism.
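Euclidean betweenness is easy to test exactly when the points have integer (or rational) coordinates: $x\in[a,b]$ if and only if $a,x,b$ are collinear and $x$ lies in the coordinate bounding box of $a$ and $b$. The following minimal Python sketch (the function names are ours, purely for illustration) implements the closed- and open-segment predicates.

```python
def collinear(a, b, c):
    # Three planar points are collinear iff the cross product
    # (b - a) x (c - a) vanishes.
    return (b[0] - a[0]) * (c[1] - a[1]) == (b[1] - a[1]) * (c[0] - a[0])

def between(a, x, b):
    # B_A(a, x, b): x = (1 - t)a + t b for some t in [0, 1],
    # i.e. x lies on the closed segment [a, b].
    if not collinear(a, x, b):
        return False
    return (min(a[0], b[0]) <= x[0] <= max(a[0], b[0])
            and min(a[1], b[1]) <= x[1] <= max(a[1], b[1]))

def strictly_between(a, x, b):
    # x lies on the open segment (a, b) = [a, b] \ {a, b}.
    return between(a, x, b) and x != a and x != b
```

With exact coordinates both predicates are exact, and a betweenness isomorphism $f$ must satisfy `strictly_between(a, x, b) == strictly_between(f(a), f(x), f(b))` for all triples $a,x,b$.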
Therefore, finding all subsets of the Euclidean plane that are betweenness isomorphic to a given set is rather difficult. But one can try to decide when two given subsets of the Euclidean plane of a given form (e.g., two sets such that each of them consists of exactly $l$ points or lines) are betweenness isomorphic. Moreover, one can ask if any classification of a family of sets of the same form (e.g., the family of all sets consisting of exactly $l$ points or lines) in the language of betweenness isomorphism would be possible. To the best of our knowledge, Wiesław Kubiś was one of the first to pose (in personal communication) two questions in this direction. The first one reads as follows. **Problem 1**. *Let $l$ be a natural number. Let $S,R$ be subsets of the Euclidean plane, each of them consisting of a circle and $l$ points in the interior of the circle.* 1. *When are the sets $S$ and $R$ betweenness isomorphic?* 2. *How many classes of betweenness isomorphism of sets as above are there, and how to characterize them?* The second question is, in spirit, the same as the above one. We only need to replace the points from the interior of each circle with another concentric circle. Let us note that Problem 3.29.3 on page 66 in [@Vel1993] is of a similar type, as it concerns the classification of betweenness isomorphism classes of convex polytopes in Euclidean spaces. A classification of betweenness preserving maps defined on convex planar sets was recently obtained in [@KubisMorawiecZurcher2022]. Namely, either the image is contained in the union of a line and a single point outside of the line, or the image consists of certain five points, or else the mapping is a partial homography, i.e., a mapping that can be extended to a homography. Additionally, if the domain is an open convex set, then either the image is contained in a line or else the mapping is a partial homography.
Unfortunately, this classification cannot be applied to answer Problem 1, because that problem does not concern convex subsets of the Euclidean plane. The paper aims to answer question (A) of Problem 1 and provides some partial answers to question (B). More precisely, in Section [3](#sec3){reference-type="ref" reference="sec3"}, we give a necessary and sufficient condition for two circles, each with $l$ points in its interior, to be betweenness isomorphic (see Theorem 5). We also formulate a useful consequence (see Corollary 8), which we then apply to answer question (B) of Problem 1 in the cases $l=1$, $l=2$, and $l=3$, in the last case assuming additionally that the points in the interior of each of the circles are collinear. # Preliminaries ## Notation Let $S\subset\mathbb R^2$. We say that a point $c\in S$ is *extreme* in $S$ if there are no $a,b\in S$ such that $c\in(a,b)$. We denote by $\mathop{\mathrm{ext}}(S)$ the set of all extreme points in $S$. Let $A\subset S\subset\mathbb R^2$. We say that the set $A$ is *collinearly closed* in $S$ if $c\in A$ whenever $c\in S$ satisfies that there are two distinct points $a,b\in A$ such that $a,b,c$ are collinear. We define the *collinear hull* $\mathop{\mathrm{c-hull}}_S(A)$ of $A$ in $S$ as the smallest set which is collinearly closed in $S$ and which contains $A$. **Lemma 2**. *Let $S,R\subset\mathbb R^2$ and $A\subset S$. Suppose that $f\colon S\to R$ is a betweenness isomorphism. Then:* 1. *[\[ext\]]{#ext label="ext"} $f(\mathop{\mathrm{ext}}(S))=\mathop{\mathrm{ext}}(R)$,* 2. *[\[ch\]]{#ch label="ch"} $f(\mathop{\mathrm{c-hull}}_S(A))=\mathop{\mathrm{c-hull}}_R(f(A))$.* *Proof.* [\[ext\]](#ext){reference-type="ref" reference="ext"} For every $c\in S$, it holds $$\begin{split} c\notin\mathop{\mathrm{ext}}(S)&\iff\exists a,b\in S:c\in(a,b)\iff\exists a,b\in S:f(c)\in(f(a),f(b))\\ &\iff\exists d,e\in R:f(c)\in(d,e)\iff f(c)\notin\mathop{\mathrm{ext}}(R), \end{split}$$ which proves our assertion. [\[ch\]](#ch){reference-type="ref" reference="ch"} Fix arbitrary $B\subset S$.
Then $$\begin{split} B\text{ is collinearly closed in }S\iff&\forall a,b\in B\ \,\forall c\in S:\text{ if }a,b,c\text{ are collinear, then }c\in B\\ \iff&\forall a,b\in B\ \,\forall c\in S:\text{ if }f(a),f(b),f(c)\text{ are collinear, then }f(c)\in f(B)\\ \iff&\forall d,e\in f(B)\ \,\forall g\in R:\text{ if }d,e,g\text{ are collinear, then }g\in f(B)\\ \iff&f(B)\text{ is collinearly closed in }R. \end{split}$$ In particular, the set $f(\mathop{\mathrm{c-hull}}_S(A))$ is collinearly closed in $R$ and contains $f(A)$. Hence $f(\mathop{\mathrm{c-hull}}_S(A))\supset \mathop{\mathrm{c-hull}}_R(f(A))$. As $f^{-1}$ is also a betweenness isomorphism, the same argument gives $f^{-1}(\mathop{\mathrm{c-hull}}_R(f(A)))\supset \mathop{\mathrm{c-hull}}_S(f^{-1}(f(A)))$. As $f^{-1}$ is a bijection, the latter implies $\mathop{\mathrm{c-hull}}_R(f(A))\supset f(\mathop{\mathrm{c-hull}}_S(A))$. Therefore, the two sets are equal. ◻ ## The group $G_l$ Let us fix $l\in\mathbb N$. Any finite sequence of elements of $\{1,\ldots,l\}$ will be called a *configuration*. A configuration $(i_1,\ldots,i_n)$ will be called *irreducible* if $i_k\neq i_{k+1}$ for every $k\in\{1,\ldots,n-1\}$; otherwise the configuration will be called *reducible*. Note that the empty sequence $\emptyset$ (of length $n=0$) is an irreducible configuration. (Also, all configurations of length $n=1$ are irreducible.) Let $G_l$ be the set of all irreducible configurations. We consider the binary operation on $G_l$ given by concatenation followed by reduction (if necessary), namely $$\label{e:Op} (i_1,\ldots,i_n)(j_1,\ldots,j_m):= (i_1,\ldots,i_{n-k}, j_{1+k},\ldots,j_m),$$ where $$k=\max\left\{\strut r\in \{0,1,2,\dots\} : i_{n-p} = j_{1+p} \text{ for all } 0\le p<r \right\}.$$ Let us explain the meaning of [\[e:Op\]](#e:Op){reference-type="eqref" reference="e:Op"}.
After concatenating two irreducible configurations $(i_1,\ldots,i_n)$, $(j_1,\ldots,j_m)$ we obtain a new configuration $(i_1,\ldots,i_n,j_1,\ldots,j_m)$. If this new configuration is irreducible then no reduction is necessary. Otherwise, it holds $i_n=j_1$. In this case, we 'remove' the elements $i_n,j_1$ from the configuration, so that we obtain another configuration $(i_1,\ldots,i_{n-1},j_2,\ldots,j_m)$. If this configuration is irreducible then the reduction is completed. Otherwise, it holds $i_{n-1}=j_2$. In that case, we 'remove' the elements $i_{n-1},j_2$, so that we obtain yet another configuration $(i_1,\ldots,i_{n-2},j_3,\ldots,j_m)$. We repeat this process until we obtain an irreducible (possibly empty) configuration, which will be the outcome of our reduction. **Lemma 3**. *The set $G_l$, together with the binary operation given by concatenation followed by reduction, is a group.* *Proof.* Associativity is easy to check. The identity element is the empty configuration. The inverse of $(i_1,\ldots,i_n)$ is $(i_n,\ldots,i_1)$. ◻ ## The group action of $G_l$ {#subsec:reversionETC} Fix $l\in\mathbb N$. Let $C$ be a circle in the plane and let $\{c_1,\ldots,c_l\}$ be a set of points from the interior of $C$. Following [@Kocik2013], for every point $X$ from the interior of the circle $C$ we define $P_X\colon C\to C$, called the reversion map (through the point $X$), as follows: if $c\in C$, then $P_X(c)$ is the unique point of $C$ such that $X\in(c,P_X(c))$. It is clear that for every point $X$ from the interior of the circle $C$ and every point $c\in C$, we have $(P_X\circ P_X)(c)=c$, i.e., each reversion map is an involution. To shorten the notation, for every $i\in\{1,\ldots,l\}$, we will use the symbol $R_i$ for the reversion map through the point $c_i$, i.e., we put $R_i:=P_{c_i}$. 
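Both the operation [\[e:Op\]](#e:Op){reference-type="eqref" reference="e:Op"} and the reversion maps just introduced are easy to experiment with. The following Python sketch (all names are ours, and the choice of the unit circle with sample interior points is an illustrative assumption) implements concatenation followed by reduction, the inverse, the reversion map $P_X$ on the unit circle, and the composition of reversions along a configuration.

```python
def multiply(g, h):
    """Product in G_l: concatenate the irreducible configurations g and h,
    then cancel equal entries across the seam, as in the definition."""
    g, h = list(g), list(h)
    while g and h and g[-1] == h[0]:
        g.pop()       # remove i_{n-k}
        h.pop(0)      # remove j_{1+k}
    return tuple(g + h)

def inverse(g):
    # The inverse of (i_1, ..., i_n) is (i_n, ..., i_1).
    return tuple(reversed(g))

def reversion(X, c):
    """P_X(c): the second intersection with the unit circle of the line
    through the boundary point c and the interior point X, so that X
    lies on the chord joining c and P_X(c).  Each P_X is an involution."""
    dx, dy = X[0] - c[0], X[1] - c[1]
    # Substituting c + t*(X - c) into x^2 + y^2 = 1 gives a quadratic
    # in t with root t = 0 (the point c itself); this is the other root.
    t = -2.0 * (c[0] * dx + c[1] * dy) / (dx * dx + dy * dy)
    return (c[0] + t * dx, c[1] + t * dy)

def act(c, g, pts):
    # c . (i_1, ..., i_n) = (R_{i_n} o ... o R_{i_1})(c),
    # where R_i is the reversion through pts[i - 1].
    for i in g:
        c = reversion(pts[i - 1], c)
    return c
```

For instance, with the centre as interior point the reversion is the antipodal map, so the cancellation $(i)(i)=\emptyset$ mirrors the fact that $R_i\circ R_i=\operatorname{id}$.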
Now, we define a map $\alpha\colon C\times G_l\to C$ by putting $$\alpha(c,\emptyset)=c\quad\text{and}\quad\alpha(c,(i_1,\dots,i_n))=(R_{i_n}\circ\dots\circ R_{i_1})(c)\quad\text{for every }(i_1,\dots,i_n)\neq\emptyset.$$ **Lemma 4**. *The map $\alpha$ is a right action of the group $G_l$ on $C$.* *Proof.* One just needs to check that $\alpha(\alpha(c,g),g')=\alpha(c,gg')$ for all $c\in C$ and $g,g'\in G_l$, which is clear by the definition of $\alpha$: the cancellations performed when reducing the concatenation of $g$ and $g'$ correspond precisely to the fact that each reversion map is an involution. ◻ In the following, we write $c\cdot g$ as shorthand for $\alpha(c,g)$. For every $c\in C$, let $$\mathbf O(c)=\left\{c\cdot g:g\in G_l\right\}$$ be the orbit of $c$, and let $$\mathop{\mathrm{\mathbf S}}(c)=\left\{g\in G_l:c\cdot g=c\right\}$$ be the stabilizer of $c$. By standard properties of a group action, we have that for all $c,c'\in C$, either $\mathbf O(c)=\mathbf O(c')$ or $\mathbf O(c)\cap \mathbf O(c')=\emptyset$. Thus the set of orbits of points $c\in C$ forms a partition of $C$. In fact, the orbits are equivalence classes under the relation $\sim$ on $C$ defined as follows: $$c\sim c'\iff\exists g\in G_l: c=c'\cdot g.$$ Note also that the orbit of any point $c\in C$ is a finite or countable set. Of course, the action $\alpha$ depends on the precise choice of the points $c_1,\ldots,c_l$ (and also on the order in which these points are indexed). Usually, it is clear from the context which points (and in which order) we use. # Circles with points inside {#sec3} In this section, we show that the existence of a betweenness isomorphism between $S$ and $R$ (which are of the form described in Problem 1) is closely related to the isomorphism of the corresponding group actions. This helps us to find necessary and sufficient conditions on $S$ and $R$ to be betweenness isomorphic. ## Betweenness isomorphism of two circles with points inside.
For the rest of this section, we fix $l\in\mathbb N$ and two sets $S=C\cup\{c_1,\ldots,c_l\}$ and $R=D\cup\{d_1,\ldots,d_l\}$, where $C,D$ are circles in the plane, $c_1,\ldots,c_l$ are pairwise distinct points from the interior of $C$, and $d_1,\ldots,d_l$ are pairwise distinct points from the interior of $D$. Our aim is to describe when $S$ and $R$ are betweenness isomorphic. In the following, we denote by $\alpha$, resp. $\beta$, the right action (described in the previous section) of the group $G_l$ on the set $C$, resp. $D$. We will use the short notation $$c\cdot g = \alpha(c,g)\quad\text{and}\quad d\cdot g=\beta(d,g),$$ for $c\in C$ and $d\in D$, respectively. Recall that an *isomorphism* of the two group actions $\alpha$ and $\beta$ is a pair $(\psi,\varphi)$, where $\psi$ is an automorphism of the group $G_l$ and $\varphi\colon C\to D$ is a bijection such that $$\varphi(c\cdot g)=\varphi(c)\cdot\psi(g)\quad\text{for all }c\in C\text{ and }g\in G_l.$$ For a permutation $\sigma\colon\{1,\ldots,l\}\to\{1,\ldots,l\}$, let $\psi_\sigma$ be the automorphism of the group $G_l$ given by $$\psi_\sigma(\emptyset)=\emptyset\text{ and }\psi_\sigma(i_1,\ldots,i_n)=(\sigma(i_1),\ldots,\sigma(i_n))\quad\text{for every }n\in\mathbb N\text{ and every }(i_1,\ldots,i_n)\in G_l.$$ **Theorem 5**. *Let $f\colon S\to R$ be a bijection. Then $f$ is a betweenness isomorphism of $S$ and $R$ if and only if* 1. *[\[fci\]]{#fci label="fci"} $f(\{c_1,\ldots,c_l\})=\{d_1,\ldots,d_l\}$,* 2. *[\[fch\]]{#fch label="fch"} $f|_{\mathop{\mathrm{c-hull}}_S(\{c_1,\ldots,c_l\})}$ is a betweenness isomorphism from $\mathop{\mathrm{c-hull}}_S(\{c_1,\ldots,c_l\})$ to $\mathop{\mathrm{c-hull}}_R(\{d_1,\ldots,d_l\})$,* 3.
*[\[psif\]]{#psif label="psif"} $(\psi_\sigma,f|_C)$ is an isomorphism of the group actions $\alpha$ and $\beta$, where the permutation $\sigma$ is defined by $f(c_i)=d_{\sigma(i)}$ for every $i\in\{1,\ldots,l\}$.* *Proof.* Suppose first that $f\colon S\to R$ is a betweenness isomorphism of $S$ and $R$. By assertion [\[ext\]](#ext){reference-type="ref" reference="ext"} of Lemma 2, we have $f(C)=f(\mathop{\mathrm{ext}}(S))=\mathop{\mathrm{ext}}(R)=D$, and so $$f(\{c_1,\ldots,c_l\})=\{d_1,\ldots,d_l\}.$$ Hence [\[fci\]](#fci){reference-type="ref" reference="fci"} holds, and the permutation $\sigma$ from [\[psif\]](#psif){reference-type="ref" reference="psif"} is well defined. By assertion [\[ch\]](#ch){reference-type="ref" reference="ch"} of Lemma 2, it holds $$\begin{aligned} f(\mathop{\mathrm{c-hull}}_S(\{c_1,\ldots,c_l\}))=\mathop{\mathrm{c-hull}}_R(\{d_1,\ldots,d_l\}),\end{aligned}$$ and [\[fch\]](#fch){reference-type="ref" reference="fch"} follows. By [\[fci\]](#fci){reference-type="ref" reference="fci"} (which we already verified), it follows that $f|_C$ is a bijection between $C$ and $D$. So, to verify [\[psif\]](#psif){reference-type="ref" reference="psif"}, we need only show that $f(c\cdot g)=f(c)\cdot\psi_\sigma(g)$ for all $c\in C$ and $g\in G_l$. First, fix $c\in C$ and $i\in\{1,\ldots,l\}$. As $f$ is a betweenness homomorphism and $c_i\in (c,c\cdot(i))$, it follows that $$d_{\sigma(i)}=f(c_i)\in(f(c),f(c\cdot(i))).$$ At the same time, it holds $$d_{\sigma(i)}\in(f(c),f(c)\cdot(\sigma(i))).$$ As there is exactly one $d\in D$ such that $d_{\sigma(i)}\in(f(c),d)$, it follows that $$\begin{aligned} f(c\cdot(i))=f(c)\cdot(\sigma(i))=f(c)\cdot\psi_\sigma(i).\end{aligned}$$ Now, a simple induction yields $$f(c\cdot(i_1,\ldots,i_n))=f(c)\cdot(\sigma(i_1),\ldots,\sigma(i_n))=f(c)\cdot\psi_\sigma(i_1,\ldots,i_n)$$ for every $n\in\mathbb N$ and every $g=(i_1,\ldots,i_n)\in G_l$, and the conclusion follows.
Now, suppose that [\[fci\]](#fci){reference-type="ref" reference="fci"}, [\[fch\]](#fch){reference-type="ref" reference="fch"} and [\[psif\]](#psif){reference-type="ref" reference="psif"} hold true. Let us fix $a,b,c\in S$. We must show that $$c\in(a,b)\iff f(c)\in(f(a),f(b)).$$ We distinguish four cases. **Case 1:** $a,b,c\in\mathop{\mathrm{c-hull}}_S(\{c_1,\ldots,c_l\})$. In this case, just apply [\[fch\]](#fch){reference-type="ref" reference="fch"}. **Case 2:** Exactly two of the points $a$, $b$, $c$ belong to $\mathop{\mathrm{c-hull}}_S(\{c_1,\ldots,c_l\})$. As $f$ maps $\mathop{\mathrm{c-hull}}_S(\{c_1,\ldots,c_l\})$ onto $\mathop{\mathrm{c-hull}}_R(\{d_1,\ldots,d_l\})$ (by [\[fch\]](#fch){reference-type="ref" reference="fch"}), exactly two of the points $f(a)$, $f(b)$, $f(c)$ belong to $\mathop{\mathrm{c-hull}}_R(\{d_1,\ldots,d_l\})$. By the definition of a collinear hull, it follows that $a,b,c$ are not collinear, and similarly $f(a)$, $f(b)$, $f(c)$ are not collinear. So $c\notin(a,b)$ and $f(c)\notin(f(a),f(b))$. **Case 3:** $c\in C$. In that case, $c\notin(a,b)$ as $c$ is extreme in $S$, and $f(c)\notin(f(a),f(b))$ as $f(c)$ is extreme in $R$, by assertion [\[ext\]](#ext){reference-type="ref" reference="ext"} of Lemma 2. **Case 4:** $c\notin C$ and $a,b\in C$. Then there is $i\in\{1,\ldots,l\}$ such that $c=c_i$. If $\sigma$ is as in [\[psif\]](#psif){reference-type="ref" reference="psif"}, then $f(c)=d_{\sigma(i)}$. So we have $$c\in(a,b)\iff b=a\cdot(i)\stackrel{\ref{psif}}{\iff}f(b)=f(a)\cdot(\sigma(i))\iff f(c)=d_{\sigma(i)}\in(f(a),f(b)).$$ Finally, we note that the four cases above cover all possible situations. ◻ It is easy to see that the collinear hull $\mathop{\mathrm{c-hull}}_S(\{c_1,\ldots,c_l\})$ is the union of some (in fact, finitely many and pairwise disjoint) orbits given by the action $\alpha$. Similarly, $\mathop{\mathrm{c-hull}}_R(\{d_1,\ldots,d_l\})$ is the union of some orbits given by the action $\beta$.
So we can define the right action $\alpha'$, resp. $\beta'$, of the group $G_l$ on the set $C\setminus\mathop{\mathrm{c-hull}}_S(\{c_1,\ldots,c_l\})$, resp. $D\setminus\mathop{\mathrm{c-hull}}_R(\{d_1,\ldots,d_l\})$, as the restriction of $\alpha$, resp. $\beta$, to the corresponding set. *Remark 6*. Theorem 5 remains true if we replace condition [\[psif\]](#psif){reference-type="ref" reference="psif"} with 1. [\[psif\'\]]{#psif' label="psif'"} $(\psi_\sigma,f|_{C\setminus\mathop{\mathrm{c-hull}}_S(\{c_1,\ldots,c_l\})})$ is an isomorphism of the group actions $\alpha'$ and $\beta'$, where the permutation $\sigma$ is defined by $f(c_i)=d_{\sigma(i)}$ for every $i\in\{1,\ldots,l\}$. *Proof.* Almost the same proof still works; only Case 4 would have to be restricted to '$c\notin C$ and $a,b\in C\setminus\mathop{\mathrm{c-hull}}_S(\{c_1,\ldots,c_l\})$' (then the four cases still cover all possible situations). ◻ The following example shows that condition [\[fch\]](#fch){reference-type="ref" reference="fch"} in Theorem 5 cannot be replaced by 1. [\[fch\'\]]{#fch' label="fch'"} $f|_{\{c_1,\ldots,c_l\}}$ is a betweenness isomorphism from $\{c_1,\ldots,c_l\}$ to $\{d_1,\ldots,d_l\}$. **Example 7**. Fix $l=2$ and let $R=S=C\cup\{c_1,c_2\}$. Denote by $x_1$ and $x_2$ the two points of $C$ that are collinear with $c_1$ and $c_2$, say in the order $x_1,c_1,c_2,x_2$ along their common line. Define a map $f\colon S\to R$ by putting $$f(x_1)=x_2,\quad f(x_2)=x_1,\quad f(x)=x\text{ for every }x\in S\setminus\{x_1,x_2\}.$$ It is clear that $f$ is not a betweenness isomorphism: we have $c_1\in(x_1,c_2)$, while $f(c_1)=c_1\notin(f(x_1),f(c_2))=(x_2,c_2)$. However, [\[fci\]](#fci){reference-type="ref" reference="fci"}, [\[fch\'\]](#fch'){reference-type="ref" reference="fch'"}, and [\[psif\'\]](#psif'){reference-type="ref" reference="psif'"} hold with $\sigma$ being the identity permutation.
To see that also [\[psif\]](#psif){reference-type="ref" reference="psif"} holds (with the identity permutation $\sigma$), observe that for all $i,j\in\{1,2\}$ we have $$f(x_i\cdot(j))=f(x_{3-i})=x_i=x_{3-i}\cdot(j)=f(x_i)\cdot(j),$$ and by induction $$f(x_i\cdot g)=f(x_i)\cdot g\quad\text{for every }g\in G_2,$$ which is what we wanted to show. From now on, put $$\mathcal O_C=\left\{\mathbf O(c):c\in C\setminus\mathop{\mathrm{c-hull}}_S(\{c_1,\ldots,c_l\})\right\},$$ and $$\mathcal O_D=\left\{\mathbf O(d):d\in D\setminus\mathop{\mathrm{c-hull}}_R(\{d_1,\ldots,d_l\})\right\}.$$ **Corollary 8**. *There exists a betweenness isomorphism of $S$ and $R$ if and only if there exists a betweenness isomorphism $\tilde f\colon\mathop{\mathrm{c-hull}}_S(\{c_1,\ldots,c_l\})\to\mathop{\mathrm{c-hull}}_R(\{d_1,\ldots,d_l\})$ with the following property:* 1. *[\[P\]]{#P label="P"} Let $\sigma\colon\{1,\ldots,l\}\to\{1,\ldots,l\}$ be the permutation that corresponds to $\tilde f$, i.e., $$\tilde f(c_i)=d_{\sigma(i)}.$$ Then there exists a bijection $h\colon\mathcal O_C\to\mathcal O_D$ such that, for every $\mathbf O\in\mathcal O_C$, there are $x_\mathbf O\in \mathbf O$ and $y_\mathbf O\in h(\mathbf O)$ with $$\label{eq:stabs} \mathop{\mathrm{\mathbf S}}(y_\mathbf O)=\left\{\psi_\sigma(g):g\in \mathop{\mathrm{\mathbf S}}(x_\mathbf O)\right\}.$$* *Remark 9*. Before proving the above result, let us make two small comments. Namely, if $f\colon S\to R$ is a betweenness isomorphism, then the map $\tilde f$ occurring in Corollary 8 can be chosen as the restriction of $f$ to $\mathop{\mathrm{c-hull}}_S(\{c_1,\ldots,c_l\})$. Moreover, if $\tilde f$ is as in Corollary 8, then $\tilde f$ maps $\{c_1,\dots, c_l\}$ onto $\{d_1,\dots,d_l\}$ and therefore $\tilde f$ uniquely determines the corresponding permutation $\sigma$. *Proof of Corollary 8.* Suppose first that there exists a betweenness isomorphism $f\colon S\to R$.
Then [\[fci\]](#fci){reference-type="ref" reference="fci"}, [\[fch\]](#fch){reference-type="ref" reference="fch"} and [\[psif\'\]](#psif'){reference-type="ref" reference="psif'"} hold by and . Let $\tilde f$ be the restriction of $f$ to $\mathop{\mathrm{c-hull}}_S(\{c_1,\ldots,c_l\})$. Then, by [\[fch\]](#fch){reference-type="ref" reference="fch"}, $\tilde f$ is a betweenness isomorphism from $\mathop{\mathrm{c-hull}}_S(\{c_1,\ldots,c_l\})$ to $\mathop{\mathrm{c-hull}}_R(\{d_1,\ldots,d_l\})$. Let $\sigma$ be the permutation defined by $f(c_i)=d_{\sigma(i)}$ for every $i\in \{1,\dots,l\}$. From [\[psif\'\]](#psif'){reference-type="ref" reference="psif'"} we see that the $f$-image of every orbit given by the action $\alpha'$ is an orbit given by the action $\beta'$. So we can define the bijection $h\colon\mathcal O_C\to\mathcal O_D$ by $$h(\mathbf O)=\{f(c):c\in\mathbf O\}.$$ Finally, for every $\mathbf O\in\mathcal O_C$ we fix an arbitrary $x_\mathbf O\in \mathbf O$ and put $y_\mathbf O=f(x_\mathbf O)$. Then $$g\in \mathop{\mathrm{\mathbf S}}(x_\mathbf O)\iff x_\mathbf O\cdot g=x_\mathbf O\stackrel{\ref{psif'}}{\iff}f(x_\mathbf O)\cdot\psi_\sigma(g)=f(x_\mathbf O)\iff y_\mathbf O\cdot\psi_\sigma(g)=y_\mathbf O\iff\psi_\sigma(g)\in \mathop{\mathrm{\mathbf S}}(y_\mathbf O),$$ and [\[eq:stabs\]](#eq:stabs){reference-type="eqref" reference="eq:stabs"} follows. Suppose now that there exists a betweenness isomorphism $\tilde f\colon \mathop{\mathrm{c-hull}}_S(\{c_1,\ldots,c_l\})\to\mathop{\mathrm{c-hull}}_R(\{d_1,\ldots,d_l\})$ as in the statement of the corollary, i.e., condition [\[P\]](#P){reference-type="ref" reference="P"} holds.
We define $f\colon S\to R$ as an extension of $\tilde f$, so we only need to specify values of $f$ on $$C\setminus\mathop{\mathrm{c-hull}}_S(\{c_1,\ldots,c_l\})=\bigcup\mathcal O_C=\left\{x_\mathbf O\cdot g:\mathbf O\in\mathcal O_C,g\in G_l\right\}.$$ For every $\mathbf O\in\mathcal O_C$ and every $g\in G_l$, we put $$\label{e:fdef} f(x_\mathbf O\cdot g)=y_\mathbf O\cdot\psi_\sigma(g).$$ To verify the correctness of this definition, assume that $x_\mathbf O\cdot g=x_{\mathbf O'}\cdot g'$ for some $\mathbf O,\mathbf O'\in\mathcal O_C$ and $g,g'\in G_l$. Then $\mathbf O=\mathbf O'$, otherwise $x_\mathbf O\cdot g$, $x_{\mathbf O'}\cdot g'$ would belong to distinct (and hence disjoint) orbits. The equality $\mathbf O=\mathbf O'$ implies $x_\mathbf O=x_{\mathbf O'}$ and by our assumption we have $x_\mathbf O\cdot g=x_\mathbf O\cdot g'$, and hence $g(g')^{-1}\in \mathop{\mathrm{\mathbf S}}(x_\mathbf O)$. This together with [\[eq:stabs\]](#eq:stabs){reference-type="eqref" reference="eq:stabs"} implies $\psi_\sigma(g(g')^{-1})\in \textbf{S}(y_\mathbf O)$, and so $y_\mathbf O\cdot\psi_\sigma(g(g')^{-1})=y_\mathbf O$. Applying (from the right) the group action with $\psi_\sigma(g')$ we get $$y_\mathbf O\cdot\psi_\sigma(g)=(y_\mathbf O\cdot\psi_\sigma(g(g')^{-1}))\cdot\psi_\sigma(g') =y_\mathbf O\cdot\psi_\sigma(g'),$$ and the correctness of the definition follows. From [\[e:fdef\]](#e:fdef){reference-type="eqref" reference="e:fdef"}, we easily deduce that $f(x) \in \mathbf O(y_\mathbf O) = h(\mathbf O)$ for every $x\in \mathbf O$, hence $f(\mathbf O) \subset h(\mathbf O)$. To verify that $f$ is a betweenness isomorphism, we will apply and . We start by showing that $f$ is a bijection. As it is an extension of $\tilde f$, it is enough to show that the restriction of $f$ to $C\setminus\mathop{\mathrm{c-hull}}_S(\{c_1,\ldots,c_l\})$ is injective and onto $D\setminus\mathop{\mathrm{c-hull}}_R(\{d_1,\ldots,d_l\})$. 
Suppose first that $f(x_\mathbf O\cdot g)=f(x_{\mathbf O'}\cdot g')$ for some $\mathbf O,\mathbf O'\in\mathcal O_C$ and $g,g'\in G_l$. Since $f(\mathbf O)\subset h(\mathbf O)$, $f(\mathbf O')\subset h(\mathbf O')$ and $h$ is a bijection, we must have $\mathbf O=\mathbf O'$, in particular $x_{\mathbf O}=x_{\mathbf O'}$ and $y_{\mathbf O}=y_{\mathbf O'}$. In that case, we have $y_\mathbf O\cdot\psi_\sigma(g)=y_\mathbf O\cdot\psi_\sigma(g')$, and hence $\psi_\sigma(g(g')^{-1})\in \mathop{\mathrm{\mathbf S}}(y_\mathbf O)$. This together with [\[eq:stabs\]](#eq:stabs){reference-type="eqref" reference="eq:stabs"} implies $g(g')^{-1}\in \mathop{\mathrm{\mathbf S}}(x_\mathbf O)$. So $x_\mathbf O\cdot g=x_\mathbf O\cdot g'$, and the injectivity of $f$ follows. Now, fix some $$d\in D\setminus\mathop{\mathrm{c-hull}}_R(\{d_1,\ldots,d_l\})=\bigcup\mathcal O_D.$$ As $h$ is a bijection, there is $\mathbf O\in\mathcal O_C$ such that $d\in h(\mathbf O)$. So we can find $g\in G_l$ such that $d=y_\mathbf O\cdot g$. Then $d=f(x_\mathbf O\cdot\psi_\sigma^{-1}(g))$, and the surjectivity of $f$ follows. We will check conditions [\[fci\]](#fci){reference-type="ref" reference="fci"} and [\[fch\]](#fch){reference-type="ref" reference="fch"} from , as well as condition [\[psif\'\]](#psif'){reference-type="ref" reference="psif'"} from . Conditions [\[fci\]](#fci){reference-type="ref" reference="fci"} and [\[fch\]](#fch){reference-type="ref" reference="fch"} hold as $f$ is an extension of $\tilde f$. So it only remains to check condition [\[psif\'\]](#psif'){reference-type="ref" reference="psif'"}. Let us fix $c\in C\setminus\mathop{\mathrm{c-hull}}_S(\{c_1,\ldots,c_l\})$ and $g\in G_l$; we must show that $$f(c\cdot g)=f(c)\cdot\psi_\sigma(g).$$ Let $\mathbf O=\mathbf O(c)$ be the orbit of $c$, and let $g'\in G_l$ be such that $c=x_\mathbf O\cdot g'$.
Then $$f(c\cdot g)=f(x_\mathbf O\cdot g'g)=y_\mathbf O\cdot\psi_\sigma(g'g)=(y_\mathbf O\cdot\psi_\sigma(g'))\cdot\psi_\sigma(g)=f(x_\mathbf O\cdot g')\cdot\psi_\sigma(g)=f(c)\cdot\psi_\sigma(g),$$ as we needed. ◻ # Circles with collinear points inside {#sec4} We now provide some partial answers to , namely in the case when the $l$ points in the interior of each of the circles are collinear. ## The case $l=1$ This case is very easy. We deal with it by making use of . **Theorem 10**. *Let $S,R$ be sets of the form $S=C\cup\{c_1\}$ and $R=D\cup\{d_1\}$, where $C,D$ are circles in the plane, $c_1$ is a point from the interior of $C$, and $d_1$ is a point from the interior of $D$. Then the sets $S$ and $R$ are betweenness isomorphic.* *Proof.* Trivially, $\mathop{\mathrm{c-hull}}_S(\{c_1\})=\{c_1\}$ and $\mathop{\mathrm{c-hull}}_R(\{d_1\})=\{d_1\}$. So the only map $\tilde f\colon\mathop{\mathrm{c-hull}}_S(\{c_1\})\to\mathop{\mathrm{c-hull}}_R(\{d_1\})$ (given by $\tilde f(c_1)=d_1$) is obviously a betweenness isomorphism; the permutation $\sigma$ that corresponds to $\tilde f$ is the identity. In the considered case, every orbit from $\mathcal O_C$ is a two-point subset of $C$ and every orbit from $\mathcal O_D$ is a two-point subset of $D$. Therefore, the cardinality of each of the sets $\mathcal O_C$ and $\mathcal O_D$ is the continuum; we fix a bijection $h$ between them arbitrarily. For every $\mathbf O\in\mathcal O_C$, we also fix $x_{\mathbf O}\in\mathbf O$ and $y_{\mathbf O}\in h(\mathbf O)$ arbitrarily. To see that [\[eq:stabs\]](#eq:stabs){reference-type="eqref" reference="eq:stabs"} holds (where $\psi_\sigma$ is the identity automorphism) it suffices to observe that all the stabilizers $\mathop{\mathrm{\mathbf S}}(c)$, where $c\in C$, and $\mathop{\mathrm{\mathbf S}}(d)$, where $d\in D$, are equal to the trivial subgroup $\{\emptyset\}$. Now, the conclusion immediately follows from . ◻ ## The case $l=2$ This is still easy with the application of .
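Both this case and the previous one ultimately rest on the reversion maps $R_i$: the reversion of a circle point $c$ through an interior point $p$ is the second intersection with the circle of the line through $c$ and $p$. On the unit circle, the line $c+t(p-c)$ meets the circle again at the parameter $t=-2\langle c,p-c\rangle/\lVert p-c\rVert^2$, which gives a closed form for the map. The following minimal sketch (Python; the helper name `revert` and the sample points are our own, not notation from this paper) checks numerically that a single reversion is a fixed-point-free involution of the circle — the reason why, for $l=1$, every stabilizer in the proof of Theorem 10 is trivial and every orbit has exactly two points.

```python
import math

def revert(p, c):
    """Reversion of a unit-circle point c through an interior point p:
    the second intersection with the circle of the line through c and p."""
    dx, dy = p[0] - c[0], p[1] - c[1]
    t = -2.0 * (c[0] * dx + c[1] * dy) / (dx * dx + dy * dy)
    return (c[0] + t * dx, c[1] + t * dy)

# Reversion through the centre is the antipodal map.
assert revert((0.0, 0.0), (0.0, 1.0)) == (0.0, -1.0)

c = (math.cos(0.7), math.sin(0.7))   # a point on the unit circle
p = (0.3, -0.2)                      # a point in the interior
r = revert(p, c)

# The image stays on the circle, the map is an involution, and it has no
# fixed points -- so for l = 1 every orbit is a two-point set and every
# stabilizer is trivial, as in the proof of Theorem 10.
assert abs(r[0] ** 2 + r[1] ** 2 - 1.0) < 1e-12
assert max(abs(revert(p, r)[0] - c[0]), abs(revert(p, r)[1] - c[1])) < 1e-12
assert (r[0] - c[0]) ** 2 + (r[1] - c[1]) ** 2 > 1e-6
```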
Before formulating the result for the case $l=2$, we prove the following useful fact. **Lemma 11**. *Let $C$ be a circle in the plane and $c_1,c_2$ be two distinct points from the interior of $C$. Let $c\in C\setminus\mathop{\mathrm{c-hull}}_S(\{c_1,c_2\})$ and $(i_1,\ldots,i_m)\in\{1,2\}^m$ with $m\in\mathbb N$.* 1. *If $m$ is odd, then $$\label{e:Rcc} (R_{i_m}\circ\dots \circ R_{i_1})(c)\neq c.$$* 2. *If $m$ is even and $(i_1,\ldots,i_m)\in G_2\setminus\{\emptyset\}$, then [\[e:Rcc\]](#e:Rcc){reference-type="eqref" reference="e:Rcc"} holds.* *Proof.* Let $C_+$ denote the intersection of $C$ with one of the open half-planes given by the line passing through $c_1$ and $c_2$, and let $C_-$ denote the intersection of $C$ with the other open half-plane given by the same line. Then $C\setminus\mathop{\mathrm{c-hull}}_S(\{c_1,c_2\})=C_+\cup C_-$. \(i\) If $m$ is odd, then it suffices to note that $(R_{i_m}\circ\dots \circ R_{i_1})(C_+)\subset C_-$ and $(R_{i_m}\circ\dots\circ R_{i_1})(C_-)\subset C_+$. \(ii\) If $m$ is even, then $(R_{i_m}\circ\dots \circ R_{i_1})(C_+)\subset C_+$ and $(R_{i_m}\circ\dots\circ R_{i_1})(C_-)\subset C_-$. Note that $(i_1,\ldots,i_m)\in G_2\setminus\{\emptyset\}$ (being an irreducible sequence of elements of $\{1,2\}$) is a finite (nonempty) sequence of alternating $1$'s and $2$'s, ending with $1$ or $2$. Therefore, there exists $n\in\mathbb N$ such that either $R_{i_m}\circ\dots\circ R_{i_1}=(R_1\circ R_2)^n$ or $R_{i_m}\circ\dots\circ R_{i_1}=(R_2\circ R_1)^n$. Note that the maps $(R_2\circ R_1)^n$ and $(R_1\circ R_2)^n$ are mutual inverses. Thus the statements $(R_1\circ R_2)^n(c)\neq c$ and $(R_2\circ R_1)^n(c)\neq c$ are equivalent. So, to complete the proof, it is enough to show that $(R_1\circ R_2)^n(c)\neq c$. Assume that $c\in C_+$; the case $c\in C_-$ is analogous.
Clearly, $(R_1\circ R_2)(c)\neq c$; otherwise we would have $R_2(c)=R_1(c)$, which implies that $c_1$, $c_2$ both lie on the line passing through $c$ and $R_1(c)$, and this is only possible if $c_1=c_2$ or $c\in \mathop{\mathrm{c-hull}}_S(\{c_1,c_2\})$, contradicting our assumptions. The point $c$ splits the arc $C_+$ into two open subarcs, say $C_+^1$ and $C_+^2$. Let $C_+^1$ be the subarc that contains $(R_1\circ R_2)(c)$. From a picture, the reader may verify that $$(R_1\circ R_2)(C_+^1)\subset C_+^1.$$ Consequently, by an easy induction, $$(R_1\circ R_2)^{k-1}(C_+^1)\subset C_+^1\quad\text{for every } k\in\mathbb N$$ (where we use the convention that $(R_1\circ R_2)^0$ is the identity map). Hence $$(R_1\circ R_2)^n(c)=(R_1\circ R_2)^{n-1}((R_1\circ R_2)(c))\in(R_1\circ R_2)^{n-1}(C_+^1)\subset C_+^1.$$ As $c\notin C_+^1$, the conclusion follows. ◻ **Theorem 12**. *Let $S,R$ be sets of the form $S=C\cup\{c_1,c_2\}$ and $R=D\cup\{d_1,d_2\}$, where $C,D$ are circles in the plane, $c_1,c_2$ are distinct points from the interior of $C$, and $d_1,d_2$ are distinct points from the interior of $D$. Then the sets $S$ and $R$ are betweenness isomorphic.* *Proof.* In the case under consideration we have $\mathop{\mathrm{c-hull}}_S(\{c_1,c_2\})=\{c_1,c_2,x_1,x_2\}$, where $x_1,x_2\in C$ are distinct points, collinear with $c_1$ and $c_2$. Similarly, $\mathop{\mathrm{c-hull}}_R(\{d_1,d_2\})$ consists of exactly four collinear points, two of which lie on the circle $D$. Therefore, it easily follows that there exists a betweenness isomorphism $\tilde f\colon\mathop{\mathrm{c-hull}}_S(\{c_1,c_2\})\to\mathop{\mathrm{c-hull}}_R(\{d_1,d_2\})$ such that $\tilde f(c_1)=d_1$ and $\tilde f(c_2)=d_2$; then the identity permutation on $\{1,2\}$ is the permutation $\sigma$ that corresponds to $\tilde f$.
As all orbits $\mathbf O\in \mathcal O_C$ and $\mathbf O\in \mathcal O_D$ are obviously at most countable, the cardinality of each of the sets $\mathcal O_C$ and $\mathcal O_D$ is the continuum. We fix a bijection $h$ between $\mathcal O_C$ and $\mathcal O_D$. For every $\mathbf O\in\mathcal O_C$, we fix $x_{\mathbf O}\in\mathbf O$ and $y_{\mathbf O}\in h(\mathbf O)$ arbitrarily. To complete the proof, by applying , we only need to show two properties: $\mathop{\mathrm{\mathbf S}}(c)=\{\emptyset\}$ for every $c\in C\setminus\mathop{\mathrm{c-hull}}_S(\{c_1,c_2\})$, and $\mathop{\mathrm{\mathbf S}}(d)=\{\emptyset\}$ for every $d\in D\setminus\mathop{\mathrm{c-hull}}_R(\{d_1,d_2\})$. The former property is equivalent to $c\cdot g\neq c$ for all $c\in C\setminus\mathop{\mathrm{c-hull}}_S(\{c_1,c_2\})$ and $g\in G_2\setminus\{\emptyset\}$; the latter property can be reformulated analogously. So, it suffices to apply . ◻ ## The case $l=3$ This case is much more involved than that of $l\le 2$. While we formulate some auxiliary results for general $l\ge 3$, our focus will be on the case $l=3$. For $l\ge 3$, let $\mathcal K_l$ be the collection of all pairs $s=(C,(c_1,\ldots,c_l))$, where $C$ is a circle in the plane and $c_1,\ldots,c_l$ are pairwise distinct collinear points from the interior of $C$ such that either $c_1<c_2<\ldots<c_l$ or $c_l<c_{l-1}<\ldots<c_1$ in the natural order on the line passing through all these points. The main result of this section is that there are exactly countably many betweenness isomorphism classes of sets of the form $C\cup\{c_1,c_2,c_3\}$ where $(C,(c_1,c_2,c_3))\in\mathcal K_3$. Also, we will explicitly describe these isomorphism classes; see [Theorem 14](#t:isoclasses){reference-type="ref" reference="t:isoclasses"}. This provides a partial answer to [Problem 1](#prob1x){reference-type="ref" reference="prob1x"} (for $3$ collinear points in the interior of the circle).
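The fixed-point-free behaviour of Lemma 11 — the source of the trivial stabilizers in Theorem 12 — can also be probed numerically before we turn to the subtler situation of three collinear points. The sketch below (Python; the helper `revert`, returning the second intersection with the unit circle of the line through its two arguments, and the sample configuration are our own, not notation from this paper) iterates $R_1\circ R_2$ starting from $c=(0,1)$ with $c_1=(-\tfrac12,0)$ and $c_2=(\tfrac12,0)$, and confirms that the orbit never returns to $c$:

```python
def revert(p, c):
    # Second intersection with the unit circle of the line through c and p.
    dx, dy = p[0] - c[0], p[1] - c[1]
    t = -2.0 * (c[0] * dx + c[1] * dy) / (dx * dx + dy * dy)
    return (c[0] + t * dx, c[1] + t * dy)

c1, c2 = (-0.5, 0.0), (0.5, 0.0)   # two distinct interior points
c = (0.0, 1.0)                     # a circle point off the line through c1, c2

# An odd-length word maps C_+ into C_-  (part (i) of Lemma 11):
assert revert(c2, c)[1] < 0.0

# Nonempty even words (R_1 o R_2)^n never fix c (part (ii) of Lemma 11):
x = c
for n in range(1, 50):
    x = revert(c1, revert(c2, x))                    # one step of R_1 o R_2
    assert abs(x[0] ** 2 + x[1] ** 2 - 1.0) < 1e-9   # still on the circle
    assert (x[0] - c[0]) ** 2 + (x[1] - c[1]) ** 2 > 1.0  # never back at c
```

For three collinear interior points the situation changes: certain nonempty even words can fix every admissible starting point simultaneously, which is exactly the porism phenomenon that the notion of a cycle introduced below is designed to capture.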
To formulate [Theorem 14](#t:isoclasses){reference-type="ref" reference="t:isoclasses"}, we start with some notation first. Let $l\ge 3$ and $g=(i_1,\ldots,i_n)\in G_l$ (we allow $n=0$ here, in that case, $g=\emptyset$). For every $i\in\{1,\ldots,l\}$, we put $$M_i=\{m\in\{1,\ldots,n\}:i_m=i\},$$ and $$N(g,i)=\sum_{m\in M_i}(-1)^{m+1};$$ in particular, $N(g,i)=0$ if $M_i=\emptyset$. Then we define the *signature* $N(g)$ of $g$ by $$N(g)=(N(g,1),\ldots,N(g,l))\in\mathbb Z^l.$$ We say that a vector $v=(v_1,\ldots,v_l)\in\mathbb Z^l$ is *balanced* if $\sum_{i=1}^l v_i = 0$. *Remark 13*. Let $g\in G_l$. Then $\sum_{i=1}^lN(g,i)\in\{0,1\}$, and the signature $N(g)$ is a balanced vector (that is, $\sum_{i=1}^lN(g,i)=0$) if and only if (the irreducible sequence) $g$ is of even length. For a vector $v\in\mathbb Z^l\setminus \{(0,\dots,0)\}$, we denote by $\gcd(v)$ the greatest common divisor of all entries of $v$. For every $s=(C,(c_1,\ldots,c_l))\in\mathcal K_l$, we define $\mathop{\mathrm{set}}(s)\subset\mathbb R^2$ by $$\mathop{\mathrm{set}}(s)=C\cup\{c_1,\ldots,c_l\}.$$ Let $s=(C,(c_1,\ldots,c_l))\in\mathcal K_l$ be fixed. We say that a vector $v\in\mathbb Z^l$ is a *cycle* for $s$ if there are $g\in G_l$ and $c\in C\setminus\mathop{\mathrm{c-hull}}_{\mathop{\mathrm{set}}(s)}(\{c_1,\ldots,c_l\})$ such that $N(g)=v$ and $g\in\mathop{\mathrm{\mathbf S}}(c)$. Later, we will see that, in this case, $g\in\mathop{\mathrm{\mathbf S}}(c)$ for every $g\in G_l$ with $N(g)=v$ and every $c\in C\setminus\mathop{\mathrm{c-hull}}_{\mathop{\mathrm{set}}(s)}(\{c_1,\ldots,c_l\})$ (see [Lemma 20](#l:cyclePoint){reference-type="ref" reference="l:cyclePoint"}). This is the *porism* phenomenon which makes the collinear case special, cf. [@Kocik2013]. For every $l\ge 3$ and every $v\in\mathbb Z^l$, we define $$[v]=\left\{s\in\mathcal K_l:v\text{ is a cycle for }s\right\},$$ and $$[[v]]=\left\{\mathop{\mathrm{set}}(s):s\in[v]\right\}.$$ Let $s\in\mathcal K_l$ be given.
Note that, while $\mathop{\mathrm{set}}(s)$ is a subset of the plane, $s$ itself is just a code for this subset. If $l=3$, then each set of the form $C\cup\{c_1,c_2,c_3\}$, where $C$ is a circle in the plane and $c_1,c_2,c_3$ are pairwise distinct collinear points in the interior of $C$, has exactly two codes from $\mathcal K_3$. If $c_2$ lies on the line segment connecting $c_1$ and $c_3$ (the two other cases are completely analogous), these codes are $(C,(c_1,c_2,c_3))$ and $(C,(c_3,c_2,c_1))$. Now we are ready to formulate the main result of this section. **Theorem 14**. *The following are all betweenness isomorphism classes of sets of the form $C\cup\{c_1,c_2,c_3\}$, where $C$ is a circle in the plane and $c_1,c_2,c_3$ are pairwise distinct collinear points in the interior of $C$: $$\begin{aligned} \label{eq:firstline} &[[v]] \qquad \text{where } v=(v_1,v_2,v_3)\in \mathbb Z^3\text{ is balanced with } v_2 \ge 1, \ v_1 \le v_3 \le -1,\ \gcd(v) = 1,\\ \label{eq:secondline} &\mathbb O \coloneqq \left\{\mathop{\mathrm{set}}(s):s\in\mathcal K_3 \text{ is such that $(0,0,0)$ is the only cycle for }s\right\}.\end{aligned}$$ Each of the classes is listed just once.* Let us recall the meaning of $[[v]]$ in [\[eq:firstline\]](#eq:firstline){reference-type="eqref" reference="eq:firstline"}. Namely, if $v_1, v_3$ are negative integers with no common divisor greater than $1$, we let $v_2=|v_1|+|v_3|$ and $m=2 v_2$.
Then $[[v]]$ in [\[eq:firstline\]](#eq:firstline){reference-type="eqref" reference="eq:firstline"} is the class of all sets $M\subset \mathbb R^2$ which may be written in the form $M=C\cup\{c_1,c_2,c_3\}$ where $c_1,c_2,c_3$ are distinct collinear points in the interior of a circle $C$, such that there are - $c\in C$ not collinear with $c_1,c_2,c_3$, - a sequence $(i_1,\dots,i_m)$ such that $i_j=2$ for all odd $j$, while on the even positions $1$ appears $|v_1|$ times and $3$ appears $|v_3|$ times, satisfying $$\label{e:CYCLE} \left(P_{c_{i_m}}\circ P_{c_{i_{m-1}}}\circ\dots\circ P_{c_{i_2}} \circ P_{c_{i_1}}\right)(c)=c,$$ where $P_{X}$ is the reversion map (through the point $X$) as defined in [2.3](#subsec:reversionETC){reference-type="ref" reference="subsec:reversionETC"}. (Let us note that if there are $c$ and $(i_1,\ldots,i_m)$ with the above property, then [\[e:CYCLE\]](#e:CYCLE){reference-type="eqref" reference="e:CYCLE"} is actually true for all $c$ and $(i_1,\ldots,i_m)$ as we will see in [Lemma 20](#l:cyclePoint){reference-type="ref" reference="l:cyclePoint"}.) The rest of this section is devoted to the proof of [Theorem 14](#t:isoclasses){reference-type="ref" reference="t:isoclasses"}. **Lemma 15**. *Let $l\ge 3$, $s=(C,(c_1,\ldots,c_l))\in\mathcal K_l$, $c\in C\setminus\mathop{\mathrm{c-hull}}_{\mathop{\mathrm{set}}(s)}(\{c_1,\ldots,c_l\})$ and $g\in G_l$ be given. If $c\cdot g=c$ then the irreducible sequence $g$ is of even length.* *Proof.* If $g$ is of odd length then the points $c$ and $c\cdot g$ lie in different open half-planes given by the line passing through the points $c_1,\ldots,c_l$. In particular, $c\neq c\cdot g$. ◻ **Corollary 16**. *Let $l\ge 3$ and $v\in\mathbb Z^l$ be such that $v$ is a cycle for some $s=(C,(c_1,\ldots,c_l))\in\mathcal K_l$. Then $v$ is balanced.* *Proof.* Find $s=(C,(c_1,\ldots,c_l))\in\mathcal K_l$ such that $v$ is a cycle for $s$.
Then there are $g\in G_l$ and $c\in C\setminus\mathop{\mathrm{c-hull}}_{\mathop{\mathrm{set}}(s)}(\{c_1,\ldots,c_l\})$ such that $N(g)=v$ and $g\in\mathop{\mathrm{\mathbf S}}(c)$. By [Lemma 15](#l:even){reference-type="ref" reference="l:even"}, $g$ is of even length. Consequently, $v=N(g)$ is balanced by . ◻ We say that a permutation $(\pi_1,\ldots,\pi_n)$ is *admissible* if, for every $i\in\{1,\ldots,n\}$, the parity of $\pi_i$ is the same as the parity of $i$, i.e., $(-1)^{\pi_i}=(-1)^i$. Let $l\ge 3$ be given. To every $g\in G_l$ we will assign $\overline g\in G_l$ with the following properties: - for each $i\in\{1,\ldots,l\}$, $\overline g$ contains $i$ only at even positions, or only at odd positions (if any at all), - the elements of $\overline g$, separately on even positions and on odd positions, form a non-decreasing sequence, - $N(g)=N(\overline g)$, - $\overline g$ is obtained from $g$ only with the use of admissible permutations followed by reduction. We proceed as follows. First, using an admissible permutation of elements of $g$, we shift all instances of $l$ to the left (obeying even/odd positions, while respecting the order of other elements on even positions, and those on odd positions). Then, we reduce the obtained sequence. Observe that, in this case, the reduction simply means a 'removal' of all pairs $(l,l)$ from the start of the sequence. We obtain a new sequence $g^1$ which contains $l$ only at even positions, or only at odd positions. This finishes the first step of our construction. In the next steps, we consecutively repeat this procedure with elements of $\{1,\ldots,l-1\}$ in the reverse order, namely with $l+1-i$ in the $i$-th step. Let $\overline g$ be the sequence obtained after the last (that is, after the $l$th) iteration. Note that the elements of $\overline g$ on even positions got sorted during our procedure to a non-decreasing sequence, the same for the elements on odd positions. 
As we only used admissible permutations followed by reduction, it easily follows that $\overline g$ is an irreducible sequence which has all the required properties. Indeed, it is easy to see that admissible permutations followed by reduction do not change the signature $N(\cdot)$. The following lemma is almost obvious so we omit the proof. Recall that $N(\overline g)=N(g)$ and observe that $N(g)$ determines the number of occurrences of each element from $\{1,\ldots,l\}$ in the sequence $\overline g$, and its placement at either even or odd positions. **Lemma 17**. *Let $l\ge 3$ and $g_1,g_2\in G_l$ be given such that $N(g_1)=N(g_2)$. Then $\overline{g_1}=\overline{g_2}$.* **Lemma 18**. *Let $l\ge 3$ and $s=(C,(c_1,\ldots,c_l))\in\mathcal K_l$ be given. Let $(i_1,\dots,i_n) \in \{1,\dots,l\}^n$ with $n\in\mathbb N$.* 1. *If $\pi$ is an admissible permutation of $\{1,\dots,n\}$ then $R_{i_{\pi(n)}}\circ \cdots \circ R_{i_{\pi(1)}} =R_{i_n}\circ \cdots \circ R_{i_1}$.* 2. *If $m\in \mathbb N$ and $(j_1,\dots,j_m)$ is obtained from $(i_1,\dots,i_n)$ by reduction, then $R_{j_m}\circ \cdots \circ R_{j_1} = R_{i_n}\circ \cdots \circ R_{i_1}$.* *Proof.* As reversions are involutive, assertion (ii) follows. To prove assertion (i), let us first assume that $i,j,k\in\{1,\ldots,l\}$ are given. By [@Kocik2013 Theorem 4], the composition $R_{i}\circ R_{j}\circ R_{k}$ is a reversion (through some point $X\in\mathbb R^2\setminus C$). As every reversion is an involution, we have $$(R_{k}\circ R_{j}\circ R_{i})\circ (R_{k}\circ R_{j}\circ R_{i})= \mathrm{Id}_C,$$ where $\mathrm{Id}_C$ is the identity map on the circle $C$.
Multiplying by $R_{i}\circ R_{j}\circ R_{k}$ from the right, we obtain $$R_{k}\circ R_{j}\circ R_{i}=R_{i}\circ R_{j}\circ R_{k}.$$ From this we see that interchanging two entries of a sequence at consecutive even positions does not change the resulting reversion composition map; the same is true for consecutive odd positions. As any admissible permutation can be obtained by a sequence of such swaps, at consecutive even positions or at consecutive odd positions, we reach the conclusion for all admissible permutations. ◻ **Lemma 19**. *Let $l\ge 3$, $s=(C,(c_1,\ldots,c_l))\in\mathcal K_l$ and $c\in C$ be given. Let $g_1,g_2\in G_l$ be such that $N(g_1)=N(g_2)$. Then $$g_1\in\mathop{\mathrm{\mathbf S}}(c)\iff g_2\in\mathop{\mathrm{\mathbf S}}(c).$$* *Proof.* Since $\overline g$ is obtained by admissible permutations and reductions, we have $c\cdot g=c\cdot \overline g$ for any $g\in G_l$ by [Lemma 18](#l:admi+redu){reference-type="ref" reference="l:admi+redu"}. As $N(g_1)=N(g_2)$, we have $\overline g_1 = \overline g_2$ by [Lemma 17](#fact:signatures){reference-type="ref" reference="fact:signatures"}. Hence $c\cdot g_1 = c\cdot \overline g_1 = c \cdot \overline g_2= c\cdot g_2$. Therefore $g_1\in \mathop{\mathrm{\mathbf S}}(c)$ if and only if $g_2\in\mathop{\mathrm{\mathbf S}}(c)$. ◻ **Lemma 20**. *Let $s=(C,(c_1,\ldots,c_l))\in\mathcal K_l$ and $v\in\mathbb Z^l$ be fixed. Suppose that $v$ is a cycle for $s$. Then $g\in\mathop{\mathrm{\mathbf S}}(c)$ for every $g\in G_l$ with $N(g)=v$ and every $c\in C\setminus\mathop{\mathrm{c-hull}}_{\mathop{\mathrm{set}}(s)}(\{c_1,\ldots,c_l\})$.* *Proof.* As $v$ is a cycle for $s$, there are $g'\in G_l$ and $c'\in C\setminus\mathop{\mathrm{c-hull}}_{\mathop{\mathrm{set}}(s)}(\{c_1,\ldots,c_l\})$ such that $N(g')=v$ and $g'\in\mathop{\mathrm{\mathbf S}}(c')$. Fix arbitrary $g\in G_l$ with $N(g)=v$. By [Lemma 19](#l:theSameSignatures){reference-type="ref" reference="l:theSameSignatures"}, as $N(g')=v=N(g)$, we have $g\in\mathop{\mathrm{\mathbf S}}(c')$.
So, to complete the proof, it is enough to show that $\mathop{\mathrm{\mathbf S}}(c)=\mathop{\mathrm{\mathbf S}}(c')$ for every $c,c'\in C\setminus\mathop{\mathrm{c-hull}}_{\mathop{\mathrm{set}}(s)}(\{c_1,\ldots,c_l\})$. Recall that if $g\in\mathop{\mathrm{\mathbf S}}(c)$ for some $c\in C\setminus\mathop{\mathrm{c-hull}}_{\mathop{\mathrm{set}}(s)}(\{c_1,\ldots,c_l\})$ then $g$ is of even length by [Lemma 15](#l:even){reference-type="ref" reference="l:even"}. So it suffices to apply [@Kocik2013 Theorem 7][^2]. ◻ **Lemma 21**. *Let $l\ge 3$, $s=(C,(c_1,\ldots,c_l))\in\mathcal K_l$ and $v^1,v^2\in\mathbb Z^l$ be given such that $v^1,v^2$ are cycles for $s$. Then $-v^1$, $v^1+v^2$ and $v^1-v^2$ are also cycles for $s$.* *Proof.* By assumption, there are $g_1\in G_l$ and $c\in C\setminus\mathop{\mathrm{c-hull}}_{\mathop{\mathrm{set}}(s)}(\{c_1,\ldots,c_l\})$ such that $N(g_1)=v^1$ and $g_1\in\mathop{\mathrm{\mathbf S}}(c)$. Then $g_1^{-1}\in\mathop{\mathrm{\mathbf S}}(c)$. By [Lemma 15](#l:even){reference-type="ref" reference="l:even"}, the sequence $g_1$ is of even length. It immediately follows that $N(g_1^{-1})=-v^1$. So, $g_1^{-1}$ and $c$ witness that $-v^1$ is a cycle for $s$. Let $g_2\in G_l$ be such that $N(g_2)=v^2$. By [Lemma 20](#l:cyclePoint){reference-type="ref" reference="l:cyclePoint"}, we have $g_2\in\mathop{\mathrm{\mathbf S}}(c)$. Clearly, $N(g_1g_2)=v^1+v^2$ (here, we use again the fact that $g_1$ is of even length) and $c\cdot g_1g_2=c\cdot g_2=c$. So, $g_1g_2$ and $c$ witness that $v^1+v^2$ is a cycle for $s$. To see that $v^1-v^2$ is a cycle for $s$ it suffices to note that $v^1-v^2=v^1+(-v^2)$ and apply what we already proved. ◻ **Lemma 22**. *Let $l\ge 3$, $s=(C,(c_1,\ldots,c_l))\in\mathcal K_l$, $v\in\mathbb Z^l$, and $k\in\mathbb Z\setminus\{0\}$ be given.
Then $v$ is a cycle for $s$ if and only if $kv$ is a cycle for $s$.* *Proof.* Without loss of generality, we may assume that $k>0$ because, by [Lemma 21](#l:plusMinus){reference-type="ref" reference="l:plusMinus"}, $kv$ is a cycle for $s$ if and only if $-kv$ is a cycle for $s$. For $k=1$, there is nothing to prove. So, we assume that $k\geq 2$. Suppose first that $v$ is a cycle for $s$. Then there are $g\in G_l$ and $c\in C\setminus\mathop{\mathrm{c-hull}}_{\mathop{\mathrm{set}}(s)}(\{c_1,\ldots,c_l\})$ such that $N(g)=v$ and $g\in\mathop{\mathrm{\mathbf S}}(c)$; the sequence $g$ is of even length by [Lemma 15](#l:even){reference-type="ref" reference="l:even"}. So $g^k\in G_l$ (the concatenation of $k$ copies of $g$, reduced if necessary) clearly satisfies $N(g^k)=kN(g)=kv$. Also, $g^k\in\mathop{\mathrm{\mathbf S}}(c)$ (as $g\in\mathop{\mathrm{\mathbf S}}(c)$ and each stabilizer is a subgroup). Thus $kv$ is a cycle for $s$. Now assume that $kv$ is a cycle for $s$. Then there are $g\in G_l$ and $c\in C\setminus\mathop{\mathrm{c-hull}}_{\mathop{\mathrm{set}}(s)}(\{c_1,\ldots,c_l\})$ such that $N(g)=kv$ and $g\in\mathop{\mathrm{\mathbf S}}(c)$. Recall that $N(\overline g)=N(g)$. So, by [Lemma 19](#l:theSameSignatures){reference-type="ref" reference="l:theSameSignatures"}, we may assume without any loss of generality that $g=\overline g$. In particular, for each $i\in\{1,\ldots,l\}$, $g=\overline{g}$ contains $i$ only at even positions, or only at odd positions. Also, as the signature of $g=\overline{g}$ is divisible by $k$, the number of positions of $g$ at which each $i\in\{1,\ldots,l\}$ occurs is a multiple of $k$. As $kv$ is a cycle, it is balanced by [Corollary 16](#cor:balanced){reference-type="ref" reference="cor:balanced"}; so $v$ is balanced, too. This implies that $\sum_{i=1}^{l}|v_i|$ is an even number.
Fix some $h\in G_l$ such that, for every $i\in\{1,\ldots,l\}$, - the sequence $h$ contains $i$ at exactly $|v_i|$ positions, - if $g=\overline g$ contains $i$ only at even positions, then $h$ also contains $i$ only at even positions, - if $g=\overline g$ contains $i$ only at odd positions, then $h$ also contains $i$ only at odd positions. Then $h$ is of even length (as the length of $h$ equals $\sum_{i=1}^{l}|v_i|$). It is then clear that $h^k$ is an irreducible sequence which can be obtained from $g=\overline g$ by an admissible permutation. As admissible permutations do not change signatures, we still have $$\label{e:poradToPlati} h^k\in\mathop{\mathrm{\mathbf S}}(c)$$ by [Lemma 19](#l:theSameSignatures){reference-type="ref" reference="l:theSameSignatures"}. So $$\label{e:h} h=(i_1,\ldots,i_{2n})$$ for some integer $n\ge 0$. This easily implies that $$kN(h)=N(h^k)=N(\overline g)=N(g)=kv.$$ In particular, $$N(h)=v.$$ If $n=0$ then $h=\emptyset$, and so $v=N(h)=(0,\ldots,0)\in\mathbb Z^l$. In this case, $\emptyset\in G_l$ and any $c\in C\setminus\mathop{\mathrm{c-hull}}_{\mathop{\mathrm{set}}(s)}(\{c_1,\ldots,c_l\})$ trivially witness that $v$ is a cycle. So we may assume that $n>0$. By [@Kocik2013 Theorem 3], for every $j_1,j_2,j_3\in\{1,\ldots,l\}$, the composition $R_{j_3}\circ R_{j_2}\circ R_{j_1}$ is a reversion through some point lying in the interior of the circle $C$ and collinear with all the points $c_1,\ldots,c_l$. One can easily conclude by induction on $m$ that, for every $j_1,\ldots,j_{2m-1}\in\{1,\ldots,l\}$, the composition $R_{j_{2m-1}}\circ\cdots\circ R_{j_1}$ is a reversion through some point, which lies in the interior of the circle $C$ and which is collinear with all the points $c_1,\ldots,c_l$.
It follows that there is some $Y$ in the interior of $C$ such that, denoting by $P_Y$ the reversion map through $Y$, we have $$\label{e:popisReversema} R_{i_{2n}}\circ\cdots\circ R_{i_1}=R_{i_{2n}}\circ P_Y,$$ where $i_1,\ldots,i_{2n}$ are given by [\[e:h\]](#e:h){reference-type="eqref" reference="e:h"}. Consequently, $$\label{e:dveReverze} c\stackrel{\eqref{e:poradToPlati}}{=}c\cdot h^k\stackrel{\eqref{e:popisReversema}}{=}(R_{i_{2n}}\circ P_Y)^k(c).$$ However, by [Lemma 11](#l:uperLowerFirst){reference-type="ref" reference="l:uperLowerFirst"}, this is possible only if $Y=c_{i_{2n}}$. Then $R_{i_{2n}}\circ P_Y$ is the identity map on the circle $C$. Consequently, we have $$c\cdot h\stackrel{\eqref{e:h}}{=}(R_{i_{2n}}\circ\cdots\circ R_{i_1})(c)\stackrel{\eqref{e:popisReversema}}{=}(R_{i_{2n}}\circ P_Y)(c)=c,$$ and so $h$ and $c$ witness that $v$ is a cycle for $s$. ◻ The following lemma is a direct generalization of [Lemma 11](#l:uperLowerFirst){reference-type="ref" reference="l:uperLowerFirst"} (and its proof is a straightforward adaptation of the proof of [Lemma 11](#l:uperLowerFirst){reference-type="ref" reference="l:uperLowerFirst"}). **Lemma 23**. *Let $C$ be a circle in the plane and $c_1,c_2,c_3$ be pairwise distinct points from the interior of $C$, such that $c_2$ lies on the line segment connecting $c_1$ and $c_3$ (so, in particular, the points $c_1,c_2,c_3$ are collinear). Let $c\in C\setminus\mathop{\mathrm{c-hull}}_S(\{c_1,c_2,c_3\})$ and $(i_1,\ldots,i_m)\in\{1,2,3\}^m$ with $m\in\mathbb N$. Suppose that the sequence $(i_1,\ldots,i_m)$ has elements of $\{1,2\}$ on even positions and $3$'s on odd positions, or vice versa. Then $$(R_{i_m}\circ\dots \circ R_{i_1})(c)\neq c.$$* *Proof.* Let $C_+$ denote the intersection of $C$ with one of the open half-planes given by the line passing through $c_1,c_2,c_3$, and let $C_-$ denote the intersection of $C$ with the other open half-plane given by the same line.
Then $C\setminus\mathop{\mathrm{c-hull}}_S(\{c_1,c_2,c_3\})=C_+\cup C_-$. \(i\) If $m$ is odd, then it suffices to note that $(R_{i_m}\circ\dots \circ R_{i_1})(C_+)\subset C_-$ and $(R_{i_m}\circ\dots\circ R_{i_1})(C_-)\subset C_+$. \(ii\) If $m$ is even, then $(R_{i_m}\circ\dots \circ R_{i_1})(C_+)\subset C_+$ and $(R_{i_m}\circ\dots\circ R_{i_1})(C_-)\subset C_-$. Note that the maps $R_{i_m}\circ\dots \circ R_{i_1}$ and $R_{i_1}\circ\dots \circ R_{i_m}$ are mutual inverses. In particular, the statements $(R_{i_m}\circ\dots \circ R_{i_1})(c)\neq c$ and $(R_{i_1}\circ\dots \circ R_{i_m})(c)\neq c$ are equivalent. So we may assume that the sequence $(i_1,\ldots,i_m)$ has elements of $\{1,2\}$ on even positions and $3$'s on odd positions (as in the 'vice versa' case, we may replace the sequence $(i_1,\ldots,i_m)$ by $(i_m,\ldots,i_1)$). Assume that $c\in C_+$; the case $c\in C_-$ is analogous. Clearly, $$(R_{i_2}\circ R_{i_1})(c)=(R_{i_2}\circ R_3)(c)\neq c.$$ The point $c$ splits the arc $C_+$ into two open subarcs, say $C_+^1$ and $C_+^2$. Let $C_+^1$ be the subarc that contains $(R_{i_2}\circ R_3)(c)$. From a picture, the reader may verify that $$(R_i\circ R_3)(C_+^1)\subset C_+^1\quad\text{whenever $i\in\{1,2\}$};$$ in particular, $$(R_{i_j}\circ R_{i_{j-1}})(C_+^1)=(R_{i_j}\circ R_3)(C_+^1)\subset C_+^1\quad\text{whenever $j\in\{2,\ldots,m\}$ is even}.$$ Consequently, by an easy induction, $$(R_{i_k}\circ\ldots\circ R_{i_3})(C_+^1)\subset C_+^1\quad \text{for every even }2<k\le m,$$ where we may also allow $k=2$ by interpreting the empty composition as the identity function (indeed, $C_+^1\subset C_+^1$). Hence $$(R_{i_m}\circ\ldots\circ R_{i_1})(c)=(R_{i_m}\circ\ldots\circ R_{i_3})(R_{i_2}\circ R_3(c))\in(R_{i_m}\circ\ldots\circ R_{i_3})(C_+^1)\subset C_+^1.$$ As $c\notin C_+^1$, the conclusion follows. ◻ **Lemma 24**. *Let $v=(v_1,v_2,v_3)\in\mathbb Z^3$ be given.
Then $[v]\neq\emptyset$ if and only if either $v_1v_2v_3\neq 0$, $|v_2|=|v_1|+|v_3|$ and $v$ is balanced, or $v=(0,0,0)$.* *Proof.* To prove one of the implications, assume that $[v]\neq\emptyset$ and $v\neq(0,0,0)$. Then there is $s=(C,(c_1,c_2,c_3))\in\mathcal K_3$ such that $v$ is a cycle for $s$. That is, there are $g\in G_3$ and $c\in C\setminus\mathop{\mathrm{c-hull}}_{\mathop{\mathrm{set}}(s)}(\{c_1,c_2,c_3\})$ such that $N(g)=v$ and $g\in\mathop{\mathrm{\mathbf S}}(c)$. First, we show that $v_1v_2v_3\neq 0$. Suppose for a contradiction that $v_3=0$ (the cases $v_1=0$ and $v_2=0$ are completely analogous). Then $g$ consists only of (alternating) $1$'s and $2$'s. Then, by [Lemma 11](#l:uperLowerFirst){reference-type="ref" reference="l:uperLowerFirst"}, the fact that $c=c\cdot g$ implies that $g=\emptyset$. But then $v=N(g)=(0,0,0)$, a contradiction. By [Corollary 16](#cor:balanced){reference-type="ref" reference="cor:balanced"}, the vector $v$ is balanced, that is, $v_1+v_2+v_3=0$. So we have, depending on the signs, $|v_2|=|v_1|+|v_3|$, $|v_1|=|v_2|+|v_3|$, or $|v_3|=|v_1|+|v_2|$. It only remains to show that the latter two cases cannot happen. We will only show that $|v_3|=|v_1|+|v_2|$ cannot happen; the other case is analogous. So suppose for a contradiction that $|v_3|=|v_1|+|v_2|$. Then the sequence $g$ has elements of $\{1,2\}$ on even positions and $3$'s on odd positions, or vice versa. Recall that $c_2$ lies on the line segment connecting $c_1$ and $c_3$ by the definition of $\mathcal K_3$. Hence, by [Lemma 23](#l:uperLowerFirst3){reference-type="ref" reference="l:uperLowerFirst3"}, $c\cdot g\neq c$, a contradiction. Now we will show the other implication. If $v=(0,0,0)$ then $[v]\neq\emptyset$ trivially ($(0,0,0)$ is a cycle for any $s=(C,(c_1,c_2,c_3))\in\mathcal K_3$, as witnessed by $g=\emptyset\in G_3$ and any $c\in C\setminus\mathop{\mathrm{c-hull}}_{\mathop{\mathrm{set}}(s)}(\{c_1,c_2,c_3\})$). 
So assume that $v_1v_2v_3\neq 0$, $|v_2|=|v_1|+|v_3|$ and $v$ is balanced. Let $K=\{(x,y)\in\mathbb R^2:x^2+y^2=1\}$ be the unit circle in the plane. We put $c_1=\left(-\frac12,0\right)$ and $c_3=\left(\frac12,0\right)$. It is enough to show that there exists $a\in\left(-\frac12,\frac12\right)$ such that, for $c_2=\left(a,0\right)$, we have $(K,(c_1,c_2,c_3))\in[v]$. In view of [Lemma 22](#l:multiple){reference-type="ref" reference="l:multiple"}, we may (and do) assume that $v_2>0$. Then, as $v$ is balanced, we have $v_1<0$ and $v_3<0$. We put $g=(2,1)^{|v_1|}(2,3)^{|v_3|}\in G_3$. As $$N(g)=(-|v_1|,|v_1|+|v_3|,-|v_3|)=v,$$ it suffices to find $a\in\left(-\frac12,\frac12\right)$ such that, for $c_2=(a,0)$, it holds $$\label{e:weWant} (0,1)\cdot g=(0,1).$$ To shorten the notation in the proof, for every $a\in(-1,1)$, we will use the symbol $R_a$ for the reversion $P_{(a,0)}\colon K\to K$ through the point $(a,0)$. Then [\[e:weWant\]](#e:weWant){reference-type="eqref" reference="e:weWant"} translates to $$\label{e:X_a} \left(\left(R_{\frac12}\circ R_a\right)^{|v_3|}\circ\left(R_{-\frac12}\circ R_a\right)^{|v_1|}\right)(0,1)=(0,1).$$ Let us consider the map $\rho\colon\left[-\frac12,\frac12\right]\to K$ given by $$\rho(a)=\left(\left(R_{\frac12}\circ R_a\right)^{|v_3|}\circ\left(R_{-\frac12}\circ R_a\right)^{|v_1|}\right)(0,1).$$ Note that $$\label{e:K++0} \rho\left(-\frac12\right)=\left(R_{\frac12}\circ R_{-\frac12}\right)^{|v_3|}(0,1)\quad\text{and}\quad\rho\left(\frac12\right)=\left(R_{-\frac12}\circ R_{\frac12}\right)^{|v_1|}(0,1).$$ We put $$K_{+}=\left\{(x,y)\in K:x>0,y>0\right\}\quad\text{and}\quad K_{-}=\left\{(x,y)\in K:x<0,y>0\right\}.$$ It is clear from a picture that $$\label{e:K++1} \left(R_{\frac12}\circ R_{-\frac12}\right)(0,1)\in K_{+},\quad \left(R_{-\frac12}\circ R_{\frac12}\right)(0,1)\in K_{-},$$ and $$\label{e:K++2} \left(R_{\frac12}\circ R_{-\frac12}\right)(K_{+})\subset K_{+},\quad \left(R_{-\frac12}\circ R_{\frac12}\right)(K_{-})\subset K_{-}.$$ By [\[e:K++0\]](#e:K++0){reference-type="eqref" reference="e:K++0"},
[\[e:K++1\]](#e:K++1){reference-type="eqref" reference="e:K++1"} and [\[e:K++2\]](#e:K++2){reference-type="eqref" reference="e:K++2"}, as $v_1v_3\neq 0$, we obtain that $$\label{e:K++3} \rho\left(-\frac12\right)\in K_{+}\quad\text{and}\quad\rho\left(\frac12\right)\in K_{-}.$$ To complete the proof, it only remains to show that the map $\rho$ is continuous. Indeed, continuity would imply the existence of some $a\in\left(-\frac12,\frac12\right)$ such that the first coordinate of $\rho(a)$ is zero (as the first coordinates of $\rho\left(-\frac12\right)$ and $\rho\left(\frac12\right)$ have different signs by [\[e:K++3\]](#e:K++3){reference-type="eqref" reference="e:K++3"}). But the image of $\rho$ is clearly contained in the intersection of $K$ with the upper half plane, so this would mean that $\rho(a)=(0,1)$, which is equivalent to [\[e:X_a\]](#e:X_a){reference-type="eqref" reference="e:X_a"}. Next, we verify that $\rho$ is continuous. We claim that, for every $n\in\mathbb N$, the map $\Psi_n\colon K\times\left[-\frac12,\frac12\right]^n\to K$ defined by $$\Psi_n(c,(a_1,\ldots,a_n))=\left(R_{a_n}\circ\ldots\circ R_{a_1}\right)(c)$$ is continuous. This is all we need as the map $\rho$ is the composition of $\Psi_{|v_1|+|v_2|+|v_3|}$ and the (linear) map $$\left[-\frac12,\frac12\right]\ni a\mapsto \left( (0,1),\left(a, -\frac12,a, -\frac12, \dots,a, -\frac12, a, \frac12,a, \frac12, \dots, a, \frac12\right) \right)\in K\times\left[-\frac12,\frac12\right]^{|v_1|+|v_2|+|v_3|},$$ where $-\frac12$ appears $\left|v_1\right|$ times and $\frac12$ appears $\left|v_3\right|$ times. We prove our claim by induction on $n$. For $n=1$, this is obvious. Now fix $n\in\mathbb N$ and suppose that $\Psi_i$ is continuous for every $i\in\{1,\ldots,n\}$. 
For every $(c,(a_1,\ldots,a_{n+1}))\in K\times\left[-\frac12,\frac12\right]^{n+1}$, we have $$\Psi_{n+1}(c,(a_1,\ldots,a_{n+1}))=R_{a_{n+1}}(\Psi_n(c,(a_1,\ldots,a_n)))=\Psi_1(\Psi_n(c,(a_1,\ldots,a_n)),a_{n+1}).$$ So, the continuity of $\Psi_{n+1}$ follows from the continuity of $\Psi_1$ and $\Psi_n$. ◻ **Lemma 25**. *Let $v^1,v^2\in\mathbb Z^3$ be given. Then $[v^1]\cap[v^2]\neq\emptyset$ if and only if $[v^1]\neq\emptyset$, $[v^2]\neq\emptyset$ and $v^1,v^2$ are linearly dependent.* *Proof.* Suppose first that $[v^1]\neq\emptyset$, $[v^2]\neq\emptyset$ and $v^1,v^2$ are linearly dependent. If $v^1=(0,0,0)$ then, trivially, $[v^1]=\mathcal K_3$. In that case, $[v^1]\cap[v^2]=[v^2]\neq\emptyset$. Likewise if $v^2=(0,0,0)$. So we may assume that both vectors $v^1,v^2$ are non-zero. By the linear dependency, $v^2$ is a multiple of $v^1$. As both $v^1,v^2$ have integer entries, the multiple must be given by a rational number. So there are $k_1,k_2\in\mathbb Z\setminus\{0\}$ such that $k_1v^1=k_2v^2$. As $[v^1]\neq\emptyset$, there is $s\in\mathcal K_3$ such that $v^1$ is a cycle for $s$. By [Lemma 22](#l:multiple){reference-type="ref" reference="l:multiple"}, $k_1v^1=k_2v^2$ is also a cycle for $s$. Another application of [Lemma 22](#l:multiple){reference-type="ref" reference="l:multiple"} gives us that $v^2$ is a cycle for $s$. So $s\in[v^1]\cap[v^2]$. Now suppose that $[v^1]\cap[v^2]\neq\emptyset$. Then $[v^1]\neq\emptyset$ and $[v^2]\neq\emptyset$. If one (or both) of $v^1,v^2$ is the zero vector, then $v^1,v^2$ are linearly dependent and we are done. So we may assume that both $v^1,v^2$ are non-zero. Denote $v^1=(l,m,r)$ and $v^2=(L,M,R)$. Let $s\in\mathcal K_3$ be such that both $v^1,v^2$ are cycles for $s$. Then, by [Lemma 22](#l:multiple){reference-type="ref" reference="l:multiple"}, $Lv^1$ and $lv^2$ are also cycles for $s$.
By [Lemma 21](#l:plusMinus){reference-type="ref" reference="l:plusMinus"}, the vector $$Lv^1-lv^2=(0,Lm-lM,Lr-lR)$$ is also a cycle for $s$. By [Lemma 24](#l:charNonempty){reference-type="ref" reference="l:charNonempty"}, we must have $$Lm-lM=Lr-lR=0.$$ This means that the determinants of the matrices $$\begin{bmatrix} l & m \\ L & M \end{bmatrix} \text{ and } \begin{bmatrix} l & r \\ L & R \end{bmatrix}$$ are zero, and we immediately obtain that $v^1,v^2$ are linearly dependent. ◻ **Lemma 26**. *Let $s=(C,(c_1,c_2,c_3))\in\mathcal K_3$ be given. Then there is $v^0\in\mathbb Z^3$ such that $$\label{e:kv_0} \left\{v\in\mathbb Z^3:s\in [v]\right\}=\left\{kv^0:k\in\mathbb Z\right\}.$$ Moreover, either $\gcd(v^0)=1$, or $v^0=(0,0,0)$.* *Proof.* Note that $(0,0,0)$ is trivially a cycle for $s$. If $\{v\in\mathbb Z^3:s\in [v]\}=\{(0,0,0)\}$, then [\[e:kv_0\]](#e:kv_0){reference-type="eqref" reference="e:kv_0"} holds for $v^0=(0,0,0)$. So we may assume that $\{v\in\mathbb Z^3:s\in [v]\}\neq\{(0,0,0)\}$. Then there is $v\neq(0,0,0)$ which is a cycle for $s$. By [Lemma 24](#l:charNonempty){reference-type="ref" reference="l:charNonempty"}, all three entries of $v$ are non-zero. Let $$v^0=\frac v{\gcd(v)},$$ so that $\gcd(v^0)=1$. Then $kv^0$ is a cycle for $s$ for every $k\in\mathbb Z$ by [Lemma 22](#l:multiple){reference-type="ref" reference="l:multiple"}. It remains to show that there are no other cycles for $s$. So let $v'\in\mathbb Z^3\setminus\{(0,0,0)\}$ be an arbitrary cycle for $s$. By [Lemma 25](#l:dependency){reference-type="ref" reference="l:dependency"}, $v^0$ and $v'$ are linearly dependent. So there are $k_1,k_2\in\mathbb Z\setminus\{0\}$ such that $k_1v^0=k_2v'$. As $\gcd(v^0)=1$, we have $\gcd(k_2v')=\gcd(k_1v^0)=k_1$. But all entries of $k_2v'$ are divisible by $k_2$, so $k_2$ divides $k_1$. Thus $\frac{k_1}{k_2}\in\mathbb Z$, and then $$v'=\frac{k_1}{k_2}v^0\in\left\{kv^0:k\in\mathbb Z\right\},$$ which ends the proof. ◻ **Lemma 27**. 
*Let $K=\{(x,y)\in\mathbb R^2:x^2+y^2=1\}$ be the unit circle in the plane and let $-1<a_1<a_2<1$ be fixed. For every $a_3\in(a_2,1)$, let $s_{a_3}\in\mathcal K_3$ be given by $$s_{a_3}=(K,((a_1,0),(a_2,0),(a_3,0))).$$ Then, for every $v\in\mathbb Z^3\setminus\{(0,0,0)\}$, there exists at most one $a_3\in(a_2,1)$ such that $v$ is a cycle for $s_{a_3}$.* *Proof.* As in the proof of [Lemma 24](#l:charNonempty){reference-type="ref" reference="l:charNonempty"}, for every $a\in(-1,1)$, we will use the symbol $R_a$ for the reversion $P_{(a,0)}\colon K\to K$ through the point $(a,0)$. For notational purposes, we define $a_1':=a_1$ and $a_2':=a_2$. Suppose for a contradiction that $a_3,a_3'\in(a_2,1)$, $a_3<a_3'$, are such that some $v=(v_1,v_2,v_3)\neq(0,0,0)$ is a cycle for both $s_{a_3}$ and $s_{a_3'}$. Then we can find $g=(i_1,\ldots,i_n)\in G_3$ such that $N(g)=v$. Note that $n>0$ as $v\neq(0,0,0)$, and that $v_1v_2v_3\neq 0$ by [Lemma 24](#l:charNonempty){reference-type="ref" reference="l:charNonempty"}. We may assume that, for each $i\in\{1,2,3\}$, $g$ contains $i$ only at even positions, or only at odd positions (if it is not the case, just replace $g$ by $\overline g$). Let $m\in\{1,\ldots,n\}$ be the first index such that $i_m=3$. Such an index exists as $v_3\neq 0$.
Then $$\left(R_{a_{i_{m-1}}}\circ\ldots\circ R_{a_{i_1}}\right)(0,1)=\left(R_{a'_{i_{m-1}}}\circ\ldots\circ R_{a'_{i_1}}\right)(0,1).$$ Consequently, by our assumption $a_3'>a_3$, the first coordinate of $$\left(R_{a'_{i_m}}\circ\ldots\circ R_{a'_{i_1}}\right)(0,1)=R_{a'_3}\left(\left(R_{a'_{i_{m-1}}}\circ\ldots\circ R_{a'_{i_1}}\right)(0,1)\right)$$ is strictly bigger than the first coordinate of $$\left(R_{a_{i_m}}\circ\ldots\circ R_{a_{i_1}}\right)(0,1)=R_{a_3}\left(\left(R_{a_{i_{m-1}}}\circ\ldots\circ R_{a_{i_1}}\right)(0,1)\right).$$ Now, by induction on $k$, the reader may easily verify that, for every $k\in\{0,\ldots,n-m\}$ (the case $k=0$ was verified just now), it holds:

- if $k$ is even then the first coordinate of $\left(R_{a'_{i_{m+k}}}\circ\ldots\circ R_{a'_{i_1}}\right)(0,1)$ is strictly bigger than the first coordinate of $\left(R_{a_{i_{m+k}}}\circ\ldots\circ R_{a_{i_1}}\right)(0,1)$,

- if $k$ is odd then the first coordinate of $\left(R_{a'_{i_{m+k}}}\circ\ldots\circ R_{a'_{i_1}}\right)(0,1)$ is strictly smaller than the first coordinate of $\left(R_{a_{i_{m+k}}}\circ\ldots\circ R_{a_{i_1}}\right)(0,1)$;

the key ingredients of the induction step are the following facts:

- if $i_{m+k}=3$ then $k$ is even (by our assumption on $g$),

- any reversion restricted to the upper half-circle is strictly decreasing in terms of the first coordinate, and the same is true for the restriction to the lower half-circle,

- for any fixed $x\in K\setminus\{(-1,0),(1,0)\}$, the map $(-1,1)\ni a\mapsto R_a(x)\in K$ is strictly increasing in terms of the first coordinate.
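The last two monotonicity facts are easy to experiment with numerically. Below is a minimal sketch in Python with NumPy; it assumes the closed form $R_p(c)=c-\frac{2\,c\cdot(p-c)}{|p-c|^2}\,(p-c)$ for the reversion through an interior point $p$ of the unit circle, which follows from intersecting the line through $c$ and $p$ with the circle. The sample points and the centre $p=(0.3,0)$ are illustrative only, not a proof.

```python
import numpy as np

def revert(c, p):
    # Reversion through p: the second intersection of the line through
    # c (a point on the unit circle) and p (a point inside it) with the circle.
    d = p - c
    return c - 2.0 * np.dot(c, d) / np.dot(d, d) * d

p = np.array([0.3, 0.0])  # a sample reversion centre on the x-axis
xs = np.linspace(-0.95, 0.95, 41)
upper = np.array([[x, np.sqrt(1.0 - x * x)] for x in xs])  # upper half-circle
images = np.array([revert(c, p) for c in upper])

on_circle = np.allclose((images**2).sum(axis=1), 1.0)   # reversions stay on K
involutive = np.allclose([revert(revert(c, p), p) for c in upper], upper)
to_lower = bool(np.all(images[:, 1] < 0))               # upper arc goes to lower arc
decreasing = bool(np.all(np.diff(images[:, 0]) < 0))    # first coordinate reverses order

# a |-> R_{(a,0)}(x) is strictly increasing in the first coordinate
c0 = np.array([0.0, 1.0])
sweep = np.array([revert(c0, np.array([a, 0.0])) for a in np.linspace(-0.9, 0.9, 37)])
increasing = bool(np.all(np.diff(sweep[:, 0]) > 0))
```

For $c_0=(0,1)$ the sweep can be checked by hand: $R_{(a,0)}((0,1))=\bigl(\tfrac{2a}{1+a^2},\tfrac{a^2-1}{1+a^2}\bigr)$, whose first coordinate is strictly increasing on $(-1,1)$.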
Finally, the case $k=n-m$ implies that $$\left(R_{a'_{i_n}}\circ\ldots\circ R_{a'_{i_1}}\right)(0,1)\neq \left(R_{a_{i_n}}\circ\ldots\circ R_{a_{i_1}}\right)(0,1).$$ So either $$\left(R_{a'_{i_n}}\circ\ldots\circ R_{a'_{i_1}}\right)(0,1)\neq(0,1),$$ or $$\left(R_{a_{i_n}}\circ\ldots\circ R_{a_{i_1}}\right)(0,1)\neq(0,1).$$ But by [Lemma 20](#l:cyclePoint){reference-type="ref" reference="l:cyclePoint"}, this is a contradiction with our assumption that $v$ is a cycle for both $s_{a_3}$ and $s_{a_3'}$. ◻ Finally, we are ready for the proof of our main result. *Proof of [Theorem 14](#t:isoclasses){reference-type="ref" reference="t:isoclasses"}.* Let $C$ be a circle in the plane and $c_1,c_2,c_3$ be pairwise distinct collinear points in the interior of $C$. We start by showing that the set $C\cup\{c_1,c_2,c_3\}$ belongs to (at least) one of the sets [\[eq:firstline\]](#eq:firstline){reference-type="eqref" reference="eq:firstline"} and [\[eq:secondline\]](#eq:secondline){reference-type="eqref" reference="eq:secondline"}. Without loss of generality, we may assume that $c_2$ lies on the line segment connecting $c_1$ and $c_3$. Then there are exactly two distinct $s,s'\in\mathcal K_3$ such that $\mathop{\mathrm{set}}(s)=\mathop{\mathrm{set}}(s')=C\cup\{c_1,c_2,c_3\}$, namely $(C,(c_1,c_2,c_3))$ and $(C,(c_3,c_2,c_1))$. 
It follows by the definitions that $$\label{e:twoCases} \left\{v\in\mathbb Z^3:C\cup\{c_1,c_2,c_3\}\in[[v]]\right\}= \left\{v\in\mathbb Z^3:s\in[v]\right\}\cup \left\{v\in\mathbb Z^3:s'\in[v]\right\};$$ there is an obvious relation between the two sets on the right-hand side: with $\pi_{1,3}(v)$ defined as $(v_3,v_2,v_1)\in\mathbb Z^3$ for every $v=(v_1,v_2,v_3)\in\mathbb Z^3$, we have $$\label{e:aboutTwoCases} \left\{v\in\mathbb Z^3:s'\in[v]\right\}= \pi_{1,3} \left(\left\{v\in\mathbb Z^3:s\in[v]\right\}\right).$$ We also note that $\gcd(\pi_{1,3}(v))=\gcd(v)$ and $$\label{e:aboutPi} \pi_{1,3}(k v)=k \pi_{1,3}(v) \qquad\text{for every $v\in \mathbb Z^3$ and $k\in \mathbb Z$.}$$ By [Lemma 26](#l:kernel){reference-type="ref" reference="l:kernel"}, there is $v^0=(v^0_1,v^0_2,v^0_3)\in\mathbb Z^3$, with either $\gcd(v^0)=1$ or $v^0=(0,0,0)$, such that $$\left\{v\in\mathbb Z^3:s\in[v]\right\}=\left\{kv^0:k\in\mathbb Z\right\}.$$ Without loss of generality, we may assume that $v^0_2\ge 0$ (otherwise, we can replace $v^0$ by $-v^0$). Using [\[e:twoCases\]](#e:twoCases){reference-type="eqref" reference="e:twoCases"}, [\[e:aboutTwoCases\]](#e:aboutTwoCases){reference-type="eqref" reference="e:aboutTwoCases"} and [\[e:aboutPi\]](#e:aboutPi){reference-type="eqref" reference="e:aboutPi"}, it follows that $$\label{e:svsS} \left\{v\in\mathbb Z^3:C\cup\{c_1,c_2,c_3\}\in[[v]]\right\}=\left\{kv^0:k\in\mathbb Z\right\}\cup\left\{k\pi_{1,3}(v^0):k\in\mathbb Z\right\}.$$ We immediately see that if $v^0=(0,0,0)$, then $$\left\{v\in\mathbb Z^3:C\cup\{c_1,c_2,c_3\}\in[[v]]\right\}=\left\{(0,0,0)\right\},$$ and so $C\cup\{c_1,c_2,c_3\}\in\mathbb O$. So suppose that $v^0\neq(0,0,0)$. Then, by [Lemma 24](#l:charNonempty){reference-type="ref" reference="l:charNonempty"}, it holds $v^0_1v^0_2v^0_3\neq 0$, $|v^0_2|=|v^0_1|+|v^0_3|$ and $v^0$ is balanced (then also $\pi_{1,3}(v^0)$ is balanced).
Also, at least one of the vectors $v^0,\pi_{1,3}(v^0)$ satisfies that its first coordinate is less than or equal to its third coordinate. Let $v'=(v'_1,v'_2,v'_3)$ be such a vector. Then $v'\in\mathbb Z^3$ is a balanced vector such that $v'_2\ge 1$, $v'_1\le v'_3\le -1$, $\gcd(v')=1$ and $C\cup\{c_1,c_2,c_3\}\in[[v']]$. We have shown that every set $C\cup\{c_1,c_2,c_3\}$, as in the statement of the theorem, belongs to (at least) one of the sets [\[eq:firstline\]](#eq:firstline){reference-type="eqref" reference="eq:firstline"}, [\[eq:secondline\]](#eq:secondline){reference-type="eqref" reference="eq:secondline"}. Next, we show that each of the sets [\[eq:firstline\]](#eq:firstline){reference-type="eqref" reference="eq:firstline"}, [\[eq:secondline\]](#eq:secondline){reference-type="eqref" reference="eq:secondline"} is non-empty. For [\[eq:firstline\]](#eq:firstline){reference-type="eqref" reference="eq:firstline"}, this follows from [Lemma 24](#l:charNonempty){reference-type="ref" reference="l:charNonempty"}. To prove that $\mathbb O\neq\emptyset$, let $K=\{(x,y)\in\mathbb R^2:x^2+y^2=1\}$ be the unit circle in the plane. For every $v\in\mathbb Z^3\setminus\{(0,0,0)\}$, we know by [Lemma 27](#l:only1choice){reference-type="ref" reference="l:only1choice"} that there is at most one $a_v\in(0,1)$ such that $v$ is a cycle for $(K,((-\frac12,0),(0,0),(a_v,0)))$. If there is no such $a_v$ then we define $a_v:=\frac 12$ (or we can choose any other value from the interval $(0,1)$). Now we fix some $$a\in(0,1)\setminus\left\{a_v:v\in\mathbb Z^3\setminus\{(0,0,0)\} \right\}.$$ Then no $v\in\mathbb Z^3\setminus\{(0,0,0)\}$ is a cycle for $s:=(K,((-\frac12,0),(0,0),(a,0)))$. Hence $\mathop{\mathrm{set}}(s)\in \mathbb O$ by the definition of $\mathbb O$, [\[eq:secondline\]](#eq:secondline){reference-type="eqref" reference="eq:secondline"}, so $\mathbb O\neq\emptyset$.
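Complementing this genericity argument, the intermediate-value construction from the proof of [Lemma 24](#l:charNonempty){reference-type="ref" reference="l:charNonempty"} can be reproduced numerically: for a balanced $v$ with $v_2=|v_1|+|v_3|$, a bisection on the sign of the first coordinate of $\rho$ locates the position $a$ of $c_2=(a,0)$ at which $v$ becomes a cycle. A minimal Python sketch follows; the closed form $R_p(c)=c-\frac{2\,c\cdot(p-c)}{|p-c|^2}(p-c)$ of the reversion is assumed, and the tolerance is illustrative only.

```python
import numpy as np

def revert(c, p):
    # Reversion through p on the unit circle (second intersection of the
    # line through c and p with the circle).
    d = p - c
    return c - 2.0 * np.dot(c, d) / np.dot(d, d) * d

def rho(a, n1, n3):
    # rho(a) = (R_{1/2} o R_a)^{n3} o (R_{-1/2} o R_a)^{n1} applied to (0, 1),
    # with n1 = |v_1| and n3 = |v_3|, as in the proof of Lemma 24.
    c, pa = np.array([0.0, 1.0]), np.array([a, 0.0])
    for _ in range(n1):
        c = revert(revert(c, pa), np.array([-0.5, 0.0]))
    for _ in range(n3):
        c = revert(revert(c, pa), np.array([0.5, 0.0]))
    return c

def find_a(n1, n3, tol=1e-12):
    # Bisection on the sign of the first coordinate of rho(a):
    # rho(-1/2) lands in K_+, rho(1/2) in K_-.
    lo, hi = -0.5, 0.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rho(mid, n1, n3)[0] > 0 else (lo, mid)
    return 0.5 * (lo + hi)
```

For instance, for $v=(-1,2,-1)$ (so $n_1=n_3=1$) one checks directly that $\rho(0)=(0,1)$, and `rho(find_a(1, 1), 1, 1)` lands back at $(0,1)$ up to rounding.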
Now we show that the sets [\[eq:firstline\]](#eq:firstline){reference-type="eqref" reference="eq:firstline"}, [\[eq:secondline\]](#eq:secondline){reference-type="eqref" reference="eq:secondline"} are pairwise disjoint (and, consequently, each of them is listed just once as they are non-empty). For each $S\in[[v]]$ (where $[[v]]$ is as in [\[eq:firstline\]](#eq:firstline){reference-type="eqref" reference="eq:firstline"}), there is $s\in\mathcal K_3$ with $\mathop{\mathrm{set}}(s)=S$ such that $s\in[v]$. On the other hand, if $S\in\mathbb O$, then there is $s=(C,(c_1,c_2,c_3))\in\mathcal K_3$ such that $(0,0,0)$ is the only cycle for $s$. Then $s':=(C,(c_3,c_2,c_1))$ is the only element of $\mathcal K_3$, distinct from $s$, with $\mathop{\mathrm{set}}(s')=S$. Obviously, $(0,0,0)$ is the only cycle also for $s'$. So $\mathbb O$ is disjoint from each of the sets in [\[eq:firstline\]](#eq:firstline){reference-type="eqref" reference="eq:firstline"}. Now suppose that $[[v^1]],[[v^2]]$ are as in [\[eq:firstline\]](#eq:firstline){reference-type="eqref" reference="eq:firstline"} (then, in particular, $\gcd(v^1)=1=\gcd(v^2)$) and that some set $C\cup\{c_1,c_2,c_3\}$, as in the statement of the theorem, belongs to both $[[v^1]],[[v^2]]$. By what we already proved, we know that there is some $v^0=(v^0_1,v^0_2,v^0_3)\in\mathbb Z^3$ satisfying [\[e:svsS\]](#e:svsS){reference-type="eqref" reference="e:svsS"}. By [\[e:svsS\]](#e:svsS){reference-type="eqref" reference="e:svsS"}, each of the vectors $v^1,v^2$ is either an integer multiple of $v^0$, or an integer multiple of $\pi_{1,3}(v^0)$. As $\gcd(v^1)=1$, this implies that $v^1 = \pm v^0$ or $v^1 = \pm \pi_{1,3}(v^0)$ and likewise for $v^2$. Hence $v^1 = \pm v^2$ or $v^1 = \pm \pi_{1,3}(v^2)$. From the inequalities of [\[eq:firstline\]](#eq:firstline){reference-type="eqref" reference="eq:firstline"} we then get $v^1=v^2$. This completes the proof of the disjointness. 
It remains to show that each set of the form [\[eq:firstline\]](#eq:firstline){reference-type="eqref" reference="eq:firstline"} or [\[eq:secondline\]](#eq:secondline){reference-type="eqref" reference="eq:secondline"} is a betweenness isomorphism class of the collection of all sets $C\cup\{c_1,c_2,c_3\}$ as in the statement of the theorem. Let $S,R$ be sets of the form $S=C\cup\{c_1,c_2,c_3\}$ and $R=D\cup\{d_1,d_2,d_3\}$, where $C,D$ are circles in the plane, $c_1,c_2,c_3$ are pairwise distinct collinear points from the interior of $C$, and $d_1,d_2,d_3$ are pairwise distinct collinear points from the interior of $D$. To complete the proof, we must show that the sets $S,R$ are betweenness isomorphic if and only if they belong to the same set from the partition given by [\[eq:firstline\]](#eq:firstline){reference-type="eqref" reference="eq:firstline"} and [\[eq:secondline\]](#eq:secondline){reference-type="eqref" reference="eq:secondline"}. Suppose first that $S,R$ are betweenness isomorphic. We will show that if $S\in[[v]]$ for some $v$ (as in [\[eq:firstline\]](#eq:firstline){reference-type="eqref" reference="eq:firstline"}) then $R\in[[v]]$. Similarly, one could prove that if $R\in[[v]]$ then $S\in[[v]]$. It will follow that $R,S$ belong to the same sets of the form [\[eq:firstline\]](#eq:firstline){reference-type="eqref" reference="eq:firstline"}. And as the sets [\[eq:firstline\]](#eq:firstline){reference-type="eqref" reference="eq:firstline"}, [\[eq:secondline\]](#eq:secondline){reference-type="eqref" reference="eq:secondline"} form a partition, $S\in\mathbb O$ if and only if $R\in\mathbb O$. So suppose that $S\in[[v]]$ for some $v$ as in [\[eq:firstline\]](#eq:firstline){reference-type="eqref" reference="eq:firstline"}. Then there is $s\in\mathcal K_3$ with $\mathop{\mathrm{set}}(s)=S$ such that $s\in[v]$. By rearranging the order in which the points $c_1,c_2,c_3$ are indexed, we may assume that $s=(C,(c_1,c_2,c_3))$. 
Let $f$ be a betweenness isomorphism from $S$ to $R$. By rearranging the order in which the points $d_1,d_2,d_3$ are indexed, we may assume that the permutation $\sigma\colon\{1,2,3\}\to\{1,2,3\}$ which corresponds to $f$ is the identity permutation (that is, $f$ maps $c_i$ to $d_i$, $i\in\{1,2,3\}$). Then, by [Corollary 8](#c:characterization){reference-type="ref" reference="c:characterization"} and [Remark 9](#r:afterchara){reference-type="ref" reference="r:afterchara"}, there is a bijection $h\colon\mathcal O_C\to\mathcal O_D$ such that, for every $\mathbf O\in\mathcal O_C$, there are $x_{\mathbf O}\in\mathbf O$ and $y_{\mathbf O}\in h(\mathbf O)$ with $$\label{e:equationForStabilizers} \mathop{\mathrm{\mathbf S}}(y_{\mathbf O})=\mathop{\mathrm{\mathbf S}}(x_{\mathbf O}).$$ As $s\in[v]$, there are $g\in G_3$ and $c\in C\setminus\mathop{\mathrm{c-hull}}_S(\{c_1,c_2,c_3\})$ such that $N(g)=v$ and $g\in\mathop{\mathrm{\mathbf S}}(c)$. Fix some $\mathbf O\in\mathcal O_C$. By [Lemma 20](#l:cyclePoint){reference-type="ref" reference="l:cyclePoint"}, it holds $$g\in\mathop{\mathrm{\mathbf S}}(x_{\mathbf O})\stackrel{\eqref{e:equationForStabilizers}}{=}\mathop{\mathrm{\mathbf S}}(y_{\mathbf O}).$$ So, $g$ and $y_\mathbf O$ witness that $$(D,(d_1,d_2,d_3))\in[v].$$ In particular, $R\in[[v]]$, as we wanted. Now suppose that $S,R$ belong to the same set from the partition given by [\[eq:firstline\]](#eq:firstline){reference-type="eqref" reference="eq:firstline"} and [\[eq:secondline\]](#eq:secondline){reference-type="eqref" reference="eq:secondline"}. We first deal with the case that $S,R\in[[v]]$ for some $v$ as in [\[eq:firstline\]](#eq:firstline){reference-type="eqref" reference="eq:firstline"}. Then there are $s,r\in\mathcal K_3$ such that $\mathop{\mathrm{set}}(s)=S$, $\mathop{\mathrm{set}}(r)=R$ and $s,r\in[v]$. By rearranging the order in which the points $c_1,c_2,c_3$, resp. 
$d_1,d_2,d_3$, are indexed, we may assume that $$s=(C,(c_1,c_2,c_3))\quad\text{and}\quad r=(D,(d_1,d_2,d_3)).$$ By [Lemma 26](#l:kernel){reference-type="ref" reference="l:kernel"}, there are $v^s,v^r\in\mathbb Z^3$ such that $$\left\{u\in\mathbb Z^3:s\in[u]\right\}=\left\{kv^s:k\in\mathbb Z\right\}$$ and $$\left\{u\in\mathbb Z^3:r\in[u]\right\}=\left\{kv^r:k\in\mathbb Z\right\}.$$ So there are $k_s,k_r\in\mathbb Z$ such that $$\label{e:k_sv^s} k_sv^s=v=k_rv^r.$$ In particular, the vectors $v^s,v^r$ are linearly dependent. [Lemma 26](#l:kernel){reference-type="ref" reference="l:kernel"} also states that either $\gcd(v^s)=1$, or $v^s=(0,0,0)$. The latter case cannot happen by [\[e:k_sv\^s\]](#e:k_sv^s){reference-type="eqref" reference="e:k_sv^s"} as $v\neq(0,0,0)$, so $\gcd(v^s)=1$. Similarly, $\gcd(v^r)=1$. Now, [\[e:k_sv\^s\]](#e:k_sv^s){reference-type="eqref" reference="e:k_sv^s"} easily implies that $k_s=\pm k_r$ and, consequently, $v^s=\pm v^r$. So it holds $$\label{e:rovnostCyklu} \left\{u\in\mathbb Z^3:s\in[u]\right\}=\left\{u\in\mathbb Z^3:r\in[u]\right\}.$$ We claim that, whenever $c\in C\setminus\mathop{\mathrm{c-hull}}_S(\{c_1,c_2,c_3\})$ and $d\in D\setminus\mathop{\mathrm{c-hull}}_R(\{d_1,d_2,d_3\})$, we have $$\mathop{\mathrm{\mathbf S}}(c)=\mathop{\mathrm{\mathbf S}}(d).$$ By symmetry, it is enough to show only one inclusion. So fix $c,d$ as above and suppose that $g\in G_3$ belongs to $\mathop{\mathrm{\mathbf S}}(c)$. Then $N(g)$ is a cycle for $s$. By [\[e:rovnostCyklu\]](#e:rovnostCyklu){reference-type="ref" reference="e:rovnostCyklu"}, $N(g)$ is also a cycle for $r$. Now [Lemma 20](#l:cyclePoint){reference-type="ref" reference="l:cyclePoint"} gives us that $g\in\mathop{\mathrm{\mathbf S}}(d)$. So our claim is verified. Observe that $c_2$ lies on the line segment connecting $c_1$ and $c_3$ (as $s=(C,(c_1,c_2,c_3))\in\mathcal K_3$), similarly for $d_2$. 
Hence, there is a betweenness isomorphism $\tilde f\colon\mathop{\mathrm{c-hull}}_S(\{c_1,c_2,c_3\})\to \mathop{\mathrm{c-hull}}_R(\{d_1,d_2,d_3\})$ which maps $c_i$ to $d_i$ for every $i\in\{1,2,3\}$. Then the permutation $\sigma\colon\{1,2,3\}\to\{1,2,3\}$ which corresponds to $\tilde f$ is the identity permutation. Note that each orbit from $\mathcal O_C$, as well as each orbit from $\mathcal O_D$, is at most countable. So there are continuum many orbits from $\mathcal O_C$, as well as from $\mathcal O_D$. So we can fix some bijection $h\colon\mathcal O_C\to\mathcal O_D$. By the preceding claim, for every $\mathbf O\in\mathcal O_C$, there are $x_{\mathbf O}\in\mathbf O$ and $y_{\mathbf O}\in h(\mathbf O)$ such that $$\mathop{\mathrm{\mathbf S}}(y_{\mathbf O})=\mathop{\mathrm{\mathbf S}}(x_{\mathbf O}).$$ So we can apply [Corollary 8](#c:characterization){reference-type="ref" reference="c:characterization"} to conclude that $S,R$ are betweenness isomorphic. Finally, suppose that $S,R\in\mathbb O$. Then there are $s,r\in\mathcal K_3$ such that $\mathop{\mathrm{set}}s=S$, $\mathop{\mathrm{set}}r=R$ and $(0,0,0)$ is the only cycle for $s$, as well as the only cycle for $r$. By the definition of a cycle and by [Lemma 20](#l:cyclePoint){reference-type="ref" reference="l:cyclePoint"}, for every $c\in C\setminus\mathop{\mathrm{c-hull}}_S(\{c_1,c_2,c_3\})$ and every $d\in D\setminus\mathop{\mathrm{c-hull}}_R(\{d_1,d_2,d_3\})$, we have $$\mathop{\mathrm{\mathbf S}}(c)=\left\{g\in G_3:N(g)=(0,0,0)\right\}=\mathop{\mathrm{\mathbf S}}(d).$$ Hence, we can apply [Corollary 8](#c:characterization){reference-type="ref" reference="c:characterization"} in the same way as above to conclude that $S$ and $R$ are betweenness isomorphic. ◻ [^1]: Martin Doležal was supported by the GAČR projekt EXPRO 20-31529X and by the Czech Academy of Sciences (RVO 67985840). Jan Kolář was supported by the Czech Academy of Sciences (RVO 67985840). 
Janusz Morawiec was supported by the University of Silesia Mathematics Department (Iterative Functional Equations and Real Analysis program). [^2]: Note that there is a typo in the statement of Theorem 7 in [@Kocik2013]. Namely, the premise '$\exists X\in K,\;\mathbf P(X)=X$' of the implication in [@Kocik2013 condition (11)] should correctly state '$\exists X\in K\setminus L,\;\mathbf P(X)=X$', where $L$ is the line passing through the (collinear) points $\mathbf P_1,\ldots,\mathbf P_{2n}$; cf. [@Bogomolny1997].
--- abstract: |
In this paper, efficient alternating direction implicit (ADI) schemes are proposed to solve three-dimensional heat equations with irregular boundaries and interfaces. Starting from the well-known Douglas-Gunn ADI scheme, a modified ADI scheme is constructed to mitigate the accuracy loss in solving problems with time-dependent boundary conditions. The unconditional stability of the new ADI scheme is also rigorously proven with Fourier analysis. Then, by combining the ADI schemes with a 1D kernel-free boundary integral (KFBI) method, KFBI-ADI schemes are developed to solve the heat equation with irregular boundaries. In the 1D sub-problems of the KFBI-ADI schemes, the KFBI discretization takes advantage of the Cartesian grid and preserves the structure of the coefficient matrix so that the fast Thomas algorithm can be applied to solve the linear system efficiently. Second-order accuracy and unconditional stability of the KFBI-ADI schemes are verified through several numerical tests for both the heat equation and a reaction-diffusion equation. For the Stefan problem, which is a free boundary problem of the heat equation, a level set method is incorporated into the ADI method to capture the time-dependent interface. Numerical examples of simulating 3D dendritic solidification phenomena are also presented.
address: School of Mathematical Sciences, MOE-LSC and Institute of Natural Sciences, Shanghai Jiao Tong University, Minhang, Shanghai 200240, PR China author: - Han Zhou - Minsheng Huang - Wenjun Ying bibliography: - references.bib title: ADI schemes for heat equations with irregular boundaries and interfaces in 3D with applications --- The heat equation, interface problem, free boundary, Cartesian grid, ADI schemes, KFBI method

# Introduction {#sec:intro}

The heat equation and related problems, such as reaction-diffusion equations and free-boundary problems, have attracted researchers' interest due to their extensive applications, including heat transfer [@YANENKO19631094; @Gibou2005; @Pandit2016], pattern formation [@Chen1992; @Fernandes2012; @Asante2020], and crystal growth [@Soni1999; @Nochetto1991; @Nochetto1991b]. Designing efficient numerical algorithms for solving these problems is of great significance, especially for problems with complex boundaries/interfaces in three space dimensions, since such settings are closer to realistic models. It is well known that ADI schemes are highly efficient techniques for solving various kinds of partial differential equations (PDEs) in multiple space dimensions [@doi:10.1137/0103004; @doi:10.1137/0103003; @Douglas1964; @Kim2007; @Zhao2014]. Compared with conventional explicit and implicit time-stepping methods, which either suffer from severe stability constraints or require solving large algebraic systems, ADI schemes are generally unconditionally stable and, thanks to the dimension-splitting strategy, have computational costs comparable to those of explicit schemes. Classical ADI schemes are basically designed for simple geometries that can be equipped with a Cartesian grid, such as tensor-product domains (e.g. rectangles in 2D and cubes in 3D) or unions of such domains (e.g. the L-shaped domain).
For more realistic problems, domain boundaries or internal interfaces are generally complex, which makes it difficult to directly apply Cartesian grids and ADI schemes. The pioneering work on handling complex boundaries on Cartesian grids is the development of the immersed boundary method (IBM) by C. S. Peskin in the 1970s [@Peskin1977; @Peskin2002]. Due to the success of the IBM, many researchers devoted themselves to studying Cartesian grid methods, and a number of methods have emerged in the past decades [@doi:10.1137/0721021; @doi:10.1137/0731054; @doi:10.1137/1.9780898717464; @ZHOU20061]. Due to the simplicity and efficiency of the dimension-splitting strategy, various ADI schemes have also been proposed for solving PDEs with complex interfaces, such as the immersed interface method (IIM) based ADI schemes [@Li1994; @IIMBook; @Liu2014; @Li2018], the matched interface and boundary (MIB) method based ADI schemes [@Zhao2015; @Wei2018; @Li2021], and the ghost fluid method (GFM) based ADI schemes [@Li2020; @Li2021] for parabolic interface problems, as well as the ADI Yee scheme for Maxwell's equations with discontinuous coefficients [@Deng2021]. Attempts have also been made to solve nonlinear interface problems, such as the IIM-ADI scheme for nonlinear convection--diffusion equations [@Liu2013] and the ADI scheme for the nonlinear Poisson-Boltzmann equation [@Geng2013]. Recently, several dimension-splitting methods based on the 1D KFBI method have been proposed for solving time-dependent PDEs on two space-dimensional complex domains [@Zhou2023]. The KFBI method was originally proposed by W. Ying and C. S. Henriquez in 2007 for solving elliptic PDEs with irregular boundaries in two space dimensions [@Ying2007]; it was later generalized to three space dimensions [@Ying2013], and a fourth-order version was developed in [@Xie2020].
The KFBI method has also been successfully applied to interface problems with variable coefficients [@Ying2014], incompressible flows [@Xie2020a], and singularly perturbed reaction-diffusion equations [@XIE2021298]. This work generalizes one of the KFBI-based dimension-splitting methods, the KFBI-ADI method, to the heat equation, reaction-diffusion equations, and the Stefan problem with complex or even moving boundaries/interfaces in three space dimensions. First, based on the Douglas-Gunn (DG) scheme [@Douglas1962], we propose a simple modified Douglas-Gunn (mDG) scheme that improves the accuracy of the DG-ADI scheme when applied to problems with time-dependent boundary conditions. Furthermore, KFBI-ADI schemes are constructed by solving the 1D sub-problems of the two ADI schemes with the 1D KFBI method, so that they can be applied to the heat equation and reaction-diffusion equations on complex domains. The method achieves second-order accuracy in both space and time, which is demonstrated through numerical experiments on various domains. Finally, for the Stefan problem, in which the free boundary is time-dependent, we capture the moving boundary with the level set method [@Sethian1985; @OSHER198812] and solve the interface problem of the heat equation with the efficient ADI scheme. The remainder of the paper is organized as follows. In [2](#sec:problem){reference-type="ref" reference="sec:problem"}, we describe the model problems. The construction of the ADI schemes is described in [3](#sec:method){reference-type="ref" reference="sec:method"}. Numerical examples are presented in [4](#sec:result){reference-type="ref" reference="sec:result"}. Finally, we give a brief discussion of the proposed method in the concluding section. # Model problems {#sec:problem} Let $\Omega\subset\mathbb{R}^3$ be a fixed domain with complex boundary $\Gamma = \partial\Omega$. Let $u(\mathbf{x}, t):\Omega\times [0,T]\rightarrow \mathbb{R}$ be the unknown temperature field.
We consider the Dirichlet boundary value problem (BVP) of the heat equation: [\[eqn:heat-eqn\]]{#eqn:heat-eqn label="eqn:heat-eqn"}
$$\begin{aligned}
\partial_t u(\mathbf{x}, t) &= \Delta u(\mathbf{x}, t) + f(\mathbf{x}, t), && (\mathbf{x}, t)\in\Omega\times(0,T],\\
u(\mathbf{x}, t) &= g_D(\mathbf{x}, t), && (\mathbf{x}, t)\in\Gamma\times(0,T],\\
u(\mathbf{x}, 0) &= u_0(\mathbf{x}), && \mathbf{x}\in\Omega,
\end{aligned}$$
where $f(\mathbf{x}, t)$ is the source term, and $g_D(\mathbf{x}, t)$ and $u_0(\mathbf{x})$ are the boundary and initial condition data, respectively. When $u(\mathbf{x}, t)$ represents other physical quantities, such as the concentration of a chemical species, the equation [\[eqn:heat-eqn\]](#eqn:heat-eqn){reference-type="eqref" reference="eqn:heat-eqn"} is also called the diffusion equation. Often a solution-dependent source term $f(\mathbf{x}, t, u)$ is considered, which represents chemical reactions in the system. In that case, the equation [\[eqn:heat-eqn\]](#eqn:heat-eqn){reference-type="eqref" reference="eqn:heat-eqn"} becomes a reaction-diffusion equation. As an interesting application of the heat equation, the Stefan problem, which describes phase transitions between solid and liquid phases, is also considered. Due to the presence of thermal- or diffusion-driven phase transitions, the solid-liquid interface (the free boundary) is a priori unknown and has to be solved together with the temperature field. This is why the Stefan problem is also referred to as a free boundary problem.
Let $\Gamma(t)$ be the free boundary that separates the solid region $\Omega_s(t)$ from the liquid region $\Omega_l(t)$. The Stefan problem is given by [@Schmidt1996; @Chen1997]
$$\begin{aligned}
\partial_t u(\mathbf{x}, t) &= D\Delta u(\mathbf{x}, t), && (\mathbf{x}, t) \in \Omega_s(t)\cup\Omega_l(t),\\
u(\mathbf{x}, t) &= -\varepsilon_C H - \varepsilon_V V_n, && (\mathbf{x}, t) \in \Gamma(t),\\
V_n &= [D \partial_{\mathbf{n}} u], && (\mathbf{x}, t) \in \Gamma(t),
\end{aligned}$$
[\[eqn:gibbs\]]{#eqn:gibbs label="eqn:gibbs"} [\[eqn:ste-eqn\]]{#eqn:ste-eqn label="eqn:ste-eqn"} subject to initial conditions for both the temperature and the free boundary
$$u(\mathbf{x}, 0) = u_0(\mathbf{x}), \quad \mathbf{x}\in\Omega_s(0)\cup\Omega_l(0), \qquad \Gamma(0) = \Gamma_0,$$
where $D$ is the diffusion constant, $H$ is the mean curvature of $\Gamma$ (sum of principal curvatures), $V_n$ is the normal velocity of the free boundary $\Gamma$, $\varepsilon_C$ and $\varepsilon_V$ are the surface tension and molecular kinetic coefficients, and $\mathbf{n}$ is the unit outward normal. Here, the bracket $[q]=q_s - q_l$ denotes the jump of the one-sided limit values of the quantity $q$ on $\Gamma$. The second and third equations are the Gibbs-Thomson relation and the Stefan equation, respectively, and they lead to the coupling of the temperature and the free boundary. For anisotropic solidification problems, the coefficients $\varepsilon_C$ and $\varepsilon_V$ depend on the orientation of the free boundary and are written as $\varepsilon_C(\mathbf{n})$ and $\varepsilon_V(\mathbf{n})$. In this work, we consider the Stefan problem in a bounded region $\mathcal{B} = \Omega_s\cup \Gamma \cup\Omega_l$. Assume that $\mathcal{B}$ is simply a cube whose boundary is away from $\Gamma$, so that $\partial\mathcal{B} = \partial\Omega_l \backslash \Gamma$. The boundary condition for $u$ on $\partial\mathcal{B}$ is assumed to be a homogeneous Neumann boundary condition $\partial_{\mathbf{n}}u = 0$.
# Numerical method {#sec:method} ## ADI schemes In this subsection, the alternating direction implicit method for the time discretization of the heat equation [\[eqn:heat-eqn\]](#eqn:heat-eqn){reference-type="eqref" reference="eqn:heat-eqn"} is described. The time interval $[0,T]$ is uniformly partitioned into $N_t$ intervals $0=t^0<t^1\cdots < t^{N_t-1} < t^{N_t} = T$ with a constant time step $\tau = T/N_t$. The semi-discrete approximation to the exact solution $u(\mathbf{x}, t^n)$ without spatial discretization is denoted by $u^n$. ### The Douglas-Gunn scheme The classical second-order DG scheme [@Douglas1962; @Hundsdorfer2003] for the 3D heat equation [\[eqn:heat-eqn\]](#eqn:heat-eqn){reference-type="eqref" reference="eqn:heat-eqn"} without source or reaction terms (i.e., $f(\mathbf{x},t, u) = 0$) is given by [\[eqn:DG-adi\]]{#eqn:DG-adi label="eqn:DG-adi"}
$$\begin{aligned}
\frac{u^{\star} - u^n}{\tau} &= \frac{1}{2}(u^{\star}_{xx} + u^n_{xx}) + u_{yy}^n + u_{zz}^n,\\
\frac{u^{\star\star} - u^{\star}}{\tau} &= \frac{1}{2}(u_{yy}^{\star\star} - u_{yy}^n),\\
\frac{u^{n+1} - u^{\star\star}}{\tau} &= \frac{1}{2}(u_{zz}^{n+1} - u_{zz}^n),
\end{aligned}$$
where $u^{\star}$ and $u^{\star\star}$ are intermediate variables. The DG scheme can be directly extended to problems with $s \geq 3$ operators, so that non-zero sources (i.e., $f(t,\mathbf{x}, u) \neq 0$) can be treated similarly [@Hundsdorfer2003]. Theoretical analysis shows that the DG scheme is unconditionally stable, so that no stability constraint on the time step, such as $\tau<C h^2$ for explicit schemes, is required [@Hundsdorfer2003]. The DG scheme is built as a high-order perturbation of the second-order Crank-Nicolson scheme
$$\frac{u^{n+1} - u^n}{\tau} = \frac{1}{2}\Delta(u^{n+1} + u^n).$$
In principle, the intermediate variables $u^{\star}$ and $u^{\star\star}$ are only auxiliary variables, which have little physical meaning.
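The three sub-steps above can be sketched in a few lines of numpy. The following is a minimal illustration (not the paper's implementation): one DG-ADI step for the source-free heat equation on the unit cube with homogeneous Dirichlet data, where each sub-step is a batch of constant-coefficient tridiagonal solves; the grid size, time step, and manufactured solution are illustrative choices.

```python
import numpy as np

def d2(u, axis, h):
    # second difference along `axis`, with zero (homogeneous Dirichlet)
    # values just outside the stored interior nodes
    pad = [(1, 1) if ax == axis else (0, 0) for ax in range(u.ndim)]
    up = np.pad(u, pad)
    n = u.shape[axis]
    lo = np.take(up, range(0, n), axis=axis)
    hi = np.take(up, range(2, n + 2), axis=axis)
    return (lo - 2.0 * u + hi) / h**2

def solve_lines(d, h, tau):
    # Thomas algorithm for (1/tau - 0.5*d2) x = d along axis 0, vectorized
    # over the other axes; the tridiagonal matrix is the same for every line
    n = d.shape[0]
    a = -0.5 / h**2                 # constant sub-/super-diagonal
    b = 1.0 / tau + 1.0 / h**2      # constant main diagonal
    cp = np.zeros(n)
    dp = np.zeros_like(d)
    cp[0] = a / b
    dp[0] = d[0] / b
    for i in range(1, n):
        m = b - a * cp[i - 1]
        cp[i] = a / m
        dp[i] = (d[i] - a * dp[i - 1]) / m
    x = np.zeros_like(d)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def solve_axis(d, axis, h, tau):
    return np.moveaxis(solve_lines(np.moveaxis(d, axis, 0), h, tau), 0, axis)

def dg_step(u, h, tau):
    # sub-step 1: implicit in x, fully explicit in y and z
    us = solve_axis(u / tau + 0.5 * d2(u, 0, h) + d2(u, 1, h) + d2(u, 2, h), 0, h, tau)
    # sub-step 2: upgrade the y-direction to a trapezoidal average
    uss = solve_axis(us / tau - 0.5 * d2(u, 1, h), 1, h, tau)
    # sub-step 3: upgrade the z-direction
    return solve_axis(uss / tau - 0.5 * d2(u, 2, h), 2, h, tau)

# manufactured test: u_t = Laplace(u) on the unit cube, u = 0 on the boundary
N, tau, steps = 16, 0.005, 20
h = 1.0 / N
x = np.arange(1, N) * h
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
u0 = np.sin(np.pi * X) * np.sin(np.pi * Y) * np.sin(np.pi * Z)
u = u0.copy()
for _ in range(steps):
    u = dg_step(u, h, tau)
exact = np.exp(-3.0 * np.pi**2 * steps * tau) * u0
err = np.max(np.abs(u - exact))
```

Because the eigenfunction $\sin(\pi x)\sin(\pi y)\sin(\pi z)$ is compatible with the homogeneous boundary data, no boundary correction is needed in this toy setting; the accuracy-reduction issue discussed below only appears for time-dependent boundary conditions.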
If one regards $u^{\star}$ and $u^{\star\star}$ as numerical approximations of the exact solution $u(\mathbf{x}, t^{n+1})$, the scheme is actually a prediction-correction scheme, as can be seen by rewriting [\[eqn:DG-adi\]](#eqn:DG-adi){reference-type="ref" reference="eqn:DG-adi"} as [\[eqn:DGADI-equiv\]]{#eqn:DGADI-equiv label="eqn:DGADI-equiv"}
$$\begin{aligned}
\frac{u^{\star} - u^n}{\tau} &= \frac{1}{2}(u^{\star}_{xx} + u^n_{xx}) + u_{yy}^n + u_{zz}^n,\\
\frac{u^{\star\star} - u^n}{\tau} &= \frac{1}{2}(u^{\star}_{xx} + u^n_{xx}) + \frac{1}{2}(u_{yy}^{\star\star}+u_{yy}^n) + u_{zz}^n,\\
\frac{u^{n+1} - u^n}{\tau} &= \frac{1}{2}(u^{\star}_{xx} + u^n_{xx}) + \frac{1}{2}(u_{yy}^{\star\star}+u_{yy}^n) + \frac{1}{2}(u_{zz}^{n+1} + u_{zz}^n).
\end{aligned}$$
Let $\mathcal{A}_s(u^n, u^{n+1}, u^{\star}, u^{\star\star})$ for $s=1,2,3$ denote the difference operators of the three sub-steps, so that the DG scheme can be written as
$$\begin{aligned}
\mathcal{A}_1(u^n, u^{n+1}, u^{\star}, u^{\star\star}) &= 0,\\
\mathcal{A}_2(u^n, u^{n+1}, u^{\star}, u^{\star\star}) &= 0,\\
\mathcal{A}_3(u^n, u^{n+1}, u^{\star}, u^{\star\star}) &= 0.
\end{aligned}$$
By replacing $u^n$, $u^{n+1}$, $u^{\star}$, and $u^{\star\star}$ with the exact solutions that they approximate, it can be shown that the local truncation errors satisfy [\[eqn:DGADI-LTE\]]{#eqn:DGADI-LTE label="eqn:DGADI-LTE"}
$$\begin{aligned}
E_1^{n+\frac{1}{2}} = \mathcal{A}_1(u(\mathbf{x},t^n), u(\mathbf{x},t^{n+1}), u(\mathbf{x},t^{n+1}), u(\mathbf{x},t^{n+1})) &= \mathcal{O}(\tau),\\
E_2^{n+\frac{1}{2}} = \mathcal{A}_2(u(\mathbf{x},t^n), u(\mathbf{x},t^{n+1}), u(\mathbf{x},t^{n+1}), u(\mathbf{x},t^{n+1})) &= \mathcal{O}(\tau),\\
E_3^{n+\frac{1}{2}} = \mathcal{A}_3(u(\mathbf{x},t^n), u(\mathbf{x},t^{n+1}), u(\mathbf{x},t^{n+1}), u(\mathbf{x},t^{n+1})) &= \mathcal{O}(\tau^2).
\end{aligned}$$
Evidently, the first two sub-steps are only first-order consistent approximations of the continuous problem. Although ignored by most authors, the insufficient consistency of the first two sub-steps may lead to a reduction in numerical accuracy and convergence order if the boundary conditions of $u^{\star}$ and $u^{\star\star}$ are taken from $u^{n+1}$. This phenomenon will be demonstrated through a numerical example at the end of this subsection. In fact, there is a natural way to provide boundary conditions for $u^{\star}$ and $u^{\star\star}$ if the domain boundary is axis-aligned, for example, $\Omega=[0,1]^3$.
By reversing the order of [\[eqn:DG-adi\]](#eqn:DG-adi){reference-type="ref" reference="eqn:DG-adi"} and restricting the solution to the boundary $\partial\Omega$, boundary conditions for $u^{\star}$ and $u^{\star\star}$ can be computed with a back-tracing approach, also known as the boundary correction technique, [\[eqn:bdry-crc\]]{#eqn:bdry-crc label="eqn:bdry-crc"}
$$\begin{aligned}
u^{\star\star} &= \Big(1-\frac{\tau}{2}\partial_{zz}\Big) u^{n+1} + \frac{\tau}{2}\partial_{zz}\, u^n,\\
u^{\star} &= \Big(1-\frac{\tau}{2}\partial_{yy}\Big)u^{\star\star} + \frac{\tau}{2}\partial_{yy}\, u^n\\
& = \Big(1-\frac{\tau}{2}\partial_{yy}\Big)\Big(1-\frac{\tau}{2}\partial_{zz}\Big) u^{n+1} + \Big(\Big(1-\frac{\tau}{2}\partial_{yy}\Big)\frac{\tau}{2}\partial_{zz} + \frac{\tau}{2}\partial_{yy}\Big) u^n.
\end{aligned}$$
Since the boundary is axis-aligned, the boundary condition for $u^{\star\star}$ on $\partial\Omega\cap\{y=0, 1\}$ and that for $u^{\star}$ on $\partial\Omega\cap\{x=0, 1\}$ can be exactly or approximately computed using only the boundary conditions of $u(\mathbf{x}, t)$ at $t^n$ and $t^{n+1}$. The boundary correction technique is able to retain the full accuracy of the DG scheme. Unfortunately, the technique is limited to axis-aligned boundaries, and there is no straightforward extension to general irregular domains. ### A modified Douglas-Gunn scheme To alleviate the accuracy reduction problem of the original DG scheme, we make a few modifications to the DG scheme and expect the new scheme to be more accurate for problems with irregular boundaries. We have seen that the accuracy reduction results from the insufficient consistency of the first two sub-steps. This motivates us to modify the first two sub-steps of the DG scheme [\[eqn:DGADI-equiv\]](#eqn:DGADI-equiv){reference-type="eqref" reference="eqn:DGADI-equiv"} based on an extrapolation technique, [\[eqn:mDGADI-equiv\]]{#eqn:mDGADI-equiv label="eqn:mDGADI-equiv"}
$$\begin{aligned}
\frac{u^{\star} - u^n}{\tau} &= \frac{1}{2}(u^{\star}_{xx} + u^n_{xx}) + \frac{3}{2}u_{yy}^n-\frac{1}{2}u_{yy}^{n-1} + \frac{3}{2}u_{zz}^n-\frac{1}{2}u_{zz}^{n-1},\\
\frac{u^{\star\star} - u^n}{\tau} &= \frac{1}{2}(u^{\star}_{xx} + u^n_{xx}) + \frac{1}{2}(u_{yy}^{\star\star}+u_{yy}^n) + \frac{3}{2}u_{zz}^n-\frac{1}{2}u_{zz}^{n-1},\\
\frac{u^{n+1} - u^n}{\tau} &= \frac{1}{2}(u^{\star}_{xx} + u^n_{xx}) + \frac{1}{2}(u_{yy}^{\star\star}+u_{yy}^n) + \frac{1}{2}(u_{zz}^{n+1}+u_{zz}^n).
\end{aligned}$$
It is evident that the first two sub-steps are now both second-order consistent.
By rearranging terms, the scheme can be simplified to [\[eqn:3L-ADI\]]{#eqn:3L-ADI label="eqn:3L-ADI"}
$$\begin{aligned}
\frac{u^{\star} - u^n}{\tau} &= \frac{1}{2}(u^{\star}_{xx} + u^n_{xx}) + \frac{3}{2}u_{yy}^n-\frac{1}{2}u_{yy}^{n-1} + \frac{3}{2}u_{zz}^n-\frac{1}{2}u_{zz}^{n-1},\\
\frac{u^{\star\star} - u^{\star}}{\tau} &= \frac{1}{2}(u_{yy}^{\star\star}+u_{yy}^{n-1}) - u_{yy}^n,\\
\frac{u^{n+1} - u^{\star\star}}{\tau} &= \frac{1}{2}(u_{zz}^{n+1}+u_{zz}^{n-1}) - u_{zz}^n.
\end{aligned}$$
The modified scheme is a prediction-correction scheme, and each sub-step is a second-order consistent approximation to the continuous problem. In the new scheme, we expect that the physical boundary conditions at $t^{n+1}$ can be used for $u^{\star}$ and $u^{\star\star}$ without losing second-order accuracy, so that it is more suitable for solving problems on irregular domains, for which the boundary correction technique cannot be applied. Similar to the original DG scheme, it can be proved that the modified DG scheme is also unconditionally stable, so that no high-order time step constraint is required (see [6](#sec:analysis){reference-type="ref" reference="sec:analysis"}). For a non-vanishing source term $f(\mathbf{x}, t, u)$, the scheme can also be generalized to [\[eqn:3L-ADI2\]]{#eqn:3L-ADI2 label="eqn:3L-ADI2"}
$$\begin{aligned}
\frac{u^{\star} - u^n}{\tau} &= \frac{1}{2}(f^{\star} + f^n) + \frac{3}{2}u^n_{xx} - \frac{1}{2}u_{xx}^{n-1} + \frac{3}{2}u_{yy}^n-\frac{1}{2}u_{yy}^{n-1} + \frac{3}{2}u_{zz}^n-\frac{1}{2}u_{zz}^{n-1},\\
\frac{u^{\star\star} - u^{\star}}{\tau} &= \frac{1}{2}(u_{xx}^{\star\star}+u_{xx}^{n-1}) - u_{xx}^n,\\
\frac{u^{\star\star\star} - u^{\star\star}}{\tau} &= \frac{1}{2}(u_{yy}^{\star\star\star}+u_{yy}^{n-1}) - u_{yy}^n,\\
\frac{u^{n+1} - u^{\star\star\star}}{\tau} &= \frac{1}{2}(u_{zz}^{n+1}+u_{zz}^{n-1}) - u_{zz}^n,
\end{aligned}$$
where $f^{\star}=f(\mathbf{x}, t^{n+1}, u^\star)$ and $f^{n}=f(\mathbf{x}, t^{n}, u^n)$. The sub-steps of the ADI schemes involve explicit terms with second derivatives of $u^n$ and $u^{n-1}$, which appear on the right-hand side of the one-dimensional sub-problems. When the domain boundary is axis-aligned, these terms can easily be calculated by applying central differences to the grid data of $u^n$ and $u^{n-1}$. If the domain is irregular, some stencil points of the central difference may fall outside the domain, and one may need a one-sided interpolation method to find these derivatives.
An alternative approach is to solve $u^{n}$ first and then store the grid values of $u^n_{zz}$ using the relation $u^{n}_{zz} = \frac{2}{\tau}u^{n} + \Tilde{F}$, where $\Tilde{F}$ is the right-hand side of the one-dimensional sub-problem. By rearranging the order of the sub-steps, for example $2\rightarrow 3\rightarrow 1$ or $3\rightarrow 1\rightarrow 2$, the values $u^{n}_{xx}$ and $u^n_{yy}$ can also be computed. In this way, one obtains all the second derivatives of $u^n$ and $u^{n-1}$ without using one-sided interpolations. However, this approach is more time-consuming, since the extra time-stepping passes with different sub-step orders roughly triple the computational cost. ### Numerical validation To demonstrate the effect of inappropriate boundary conditions for the intermediate variables on the numerical accuracy and convergence order, we construct a simple example of the 3D heat equation with a time-dependent boundary condition. The initial and boundary conditions are chosen such that the exact solution is given by
$$u(x, y, z, t) = e^{-6t}\sin(\sqrt{2}\,x)\sin(\sqrt{2}\,y)\sin(\sqrt{2}\,z).$$
The computational domain is the cube $\Omega=[-1.2,1.2]^3$. Numerical errors are estimated at the final time $T=0.5$. The problem is solved with the DG scheme, with and without the boundary correction technique, and with the modified DG scheme. Spatial derivatives are simply approximated with three-point central difference schemes. Errors and convergence orders obtained by the three methods are summarized in the corresponding table. It can be observed that the uncorrected DG scheme gives the least accurate results. Compared with the corrected one, the uncorrected DG scheme also suffers an order reduction in the $L_{\infty}$ norm from second order to first order. The modified DG scheme provides more accurate results than the uncorrected DG scheme and attains second-order convergence in both norms. ## KFBI-ADI method for irregular boundaries Consider an irregular domain $\Omega\subset\mathbb{R}^3$.
To solve PDEs on $\Omega$, we first embed it into a sufficiently large cuboid bounding box $\mathcal{B}$, which serves as the computational domain. For simplicity, the box $\mathcal{B}$ is assumed to be the unit cube $[0,1]^3$ and is uniformly partitioned into a Cartesian grid with $N$ intervals in each space direction. A grid node is denoted by $(x_i,y_j,z_k)=(ih,jh,kh)$, $i,j,k = 1,2,\cdots,N$, where $h=1/N$ is the mesh size. After time discretization with the DG or the modified DG scheme, the time-dependent heat equation can, after some rearrangements, be transformed into a sequence of 1D ODEs,
$$\begin{aligned}
&\partial_{xx} u_{j,k} - \kappa^2 u_{j,k} = f_{j,k}, && \textnormal{in } \Omega\cap\{y = y_j, z = z_k\}, && j,k = 1, 2,\cdots, N-1,\\
&\partial_{yy} u_{k,i} - \kappa^2 u_{k,i} = f_{k,i}, && \textnormal{in } \Omega\cap\{z = z_k, x = x_i\}, && k,i = 1, 2,\cdots, N-1,\\
&\partial_{zz} u_{i,j} - \kappa^2 u_{i,j} = f_{i,j}, && \textnormal{in } \Omega\cap\{x = x_i, y = y_j\}, && i,j = 1, 2,\cdots, N-1,
\end{aligned}$$
subject to Dirichlet boundary conditions, respectively,
$$\begin{aligned}
&u_{j,k}(\mathbf{x}) = g_D(\mathbf{x}, t^{n+1}), && \mathbf{x}\in\partial\Omega\cap\{y = y_j, z = z_k\}, && j,k = 1, 2,\cdots, N-1,\\
&u_{k,i}(\mathbf{x}) = g_D(\mathbf{x}, t^{n+1}), && \mathbf{x}\in\partial\Omega\cap\{z = z_k, x = x_i\}, && k,i = 1, 2,\cdots, N-1,\\
&u_{i,j}(\mathbf{x}) = g_D(\mathbf{x}, t^{n+1}), && \mathbf{x}\in\partial\Omega\cap\{x = x_i, y = y_j\}, && i,j = 1, 2,\cdots, N-1,
\end{aligned}$$
where $\kappa^2 = 2/\tau$, $u_{j,k}$, $u_{k,i}$ and $u_{i,j}$ are the solutions of the 1D sub-problems, and $f_{j,k}$, $f_{k,i}$ and $f_{i,j}$ are right-hand sides consisting of previous solutions. Here, since the boundary correction technique cannot be applied, the boundary conditions for the intermediate variables are taken directly from the physical boundary conditions at $t^{n+1}$. The KFBI method [@Ying2007; @Zhou2023] is adopted to solve the 1D sub-problems, so that we can take advantage of the Cartesian grid. In the 1D sub-problems, the domain $\Omega\cap\{y = y_j, z = z_k\}$ may consist of several disjoint subsets. However, to convey the idea of the 1D KFBI method, it suffices to consider the simple two-point boundary value problem [\[eqn:tp-bvp\]]{#eqn:tp-bvp label="eqn:tp-bvp"}
$$\partial_{xx} u(x) - \kappa^2 u(x) = f(x), \quad x\in(a, b),$$
subject to the boundary condition
$$u(x) = g(x), \quad x\in\{a, b\}.$$
Solving the 1D ODE would be rather simple if the interval $(a,b)$ were partitioned into a fitted grid and standard finite difference or finite element methods were applied. This is not the case here, since the endpoints of the one-dimensional intervals generally differ from the Cartesian grid nodes, leading to an unfitted mesh for the interval $(a,b)$. In the KFBI method, the larger interval $[0,1]$, which is uniformly partitioned into a grid, is the actual computational domain instead of $(a,b)$. For each $x\in[0,1]$, let $G(y,x)$ be the Green's function satisfying [\[eqn:green-1d\]]{#eqn:green-1d label="eqn:green-1d"}
$$\begin{aligned}
&\partial_{yy}G(y, x) - \kappa^2 G(y, x) = \delta(y-x), \quad y \in (0,1),\\
&G(0,x) = G(1,x) = 0,
\end{aligned}$$
where $\delta(\cdot)$ is the Dirac delta function in 1D. Similar to the higher-dimensional cases, the solution $u(x)$ of the boundary value problem [\[eqn:tp-bvp\]](#eqn:tp-bvp){reference-type="eqref" reference="eqn:tp-bvp"} can be expressed as the sum of a layer potential and a volume potential
$$u(x) = \partial_{y}G(y,x)\varphi(y)\big|_{y=a}^b + \int_a^b G(y, x)f(y)\,dy, \quad x\in(a,b),$$
where $\partial_{y}G(y,x)\varphi(y)\big|_{y=a}^b =\partial_{y}G(b,x)\varphi(b) - \partial_{y}G(a,x)\varphi(a)$ is in fact a 1D version of a boundary integral. Combining with the boundary condition, the boundary value problem can be reduced to a boundary integral equation [\[eqn:bie-1d\]]{#eqn:bie-1d label="eqn:bie-1d"}
$$\frac{1}{2}\varphi(x) + \partial_{y}G(y,x)\varphi(y)\big|_{y=a}^b = g(x) - \int_a^b G(y, x)f(y)\,dy, \quad x\in\{a,b\}.$$
The matrix-free GMRES method [@Saad2003], which only requires a matrix-vector multiplication operator, is used to iteratively solve the discretization of [\[eqn:bie-1d\]](#eqn:bie-1d){reference-type="ref" reference="eqn:bie-1d"}. In the KFBI method, the boundary and volume integrals in [\[eqn:bie-1d\]](#eqn:bie-1d){reference-type="ref" reference="eqn:bie-1d"} are all evaluated by solving equivalent but simple interface problems, so that analytical expressions of the integral kernels, consisting of the Green's function and its derivatives, are never used.
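For the constant-coefficient operator above, the Green's function of [\[eqn:green-1d\]](#eqn:green-1d){reference-type="ref" reference="eqn:green-1d"} has a simple closed form, obtainable by matching at $y=x$ (this closed form is stated here as an aside; the KFBI method itself never uses it). A quick numerical sanity check of its defining properties, with illustrative values of $\tau$ and $(a,b)$:

```python
import numpy as np

kappa = np.sqrt(2.0 / 0.05)   # kappa^2 = 2/tau for an illustrative tau = 0.05

def G(y, x):
    # closed-form Green's function of d_yy G - kappa^2 G = delta(y - x)
    # on (0, 1) with G(0, x) = G(1, x) = 0
    lo, hi = np.minimum(y, x), np.maximum(y, x)
    return -np.sinh(kappa * lo) * np.sinh(kappa * (1.0 - hi)) / (kappa * np.sinh(kappa))

# 1) the jump of d_y G across y = x equals one (delta source)
x0, d = 0.37, 1e-6
dplus = (G(x0 + d, x0) - G(x0, x0)) / d
dminus = (G(x0, x0) - G(x0 - d, x0)) / d
jump = dplus - dminus

# 2) the volume potential v(x) = int_a^b G(y, x) f(y) dy satisfies
#    v'' - kappa^2 v = f inside (a, b); here f = 1 on (a, b) = (0.2, 0.8)
a, b = 0.2, 0.8

def quad(fvals, y):
    # composite trapezoidal rule (kept explicit for self-containment)
    return np.sum((fvals[1:] + fvals[:-1]) * np.diff(y)) / 2.0

def v(x):
    yl = np.linspace(a, x, 4001)   # split the quadrature at the kink y = x
    yr = np.linspace(x, b, 4001)
    return quad(G(yl, x), yl) + quad(G(yr, x), yr)

eps = 1e-2
residual = (v(x0 + eps) - 2 * v(x0) + v(x0 - eps)) / eps**2 - kappa**2 * v(x0) - 1.0
```

Splitting the quadrature at $y=x$ matters because the integrand has a kink there; integrating across it naively degrades the accuracy of the check.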
Denote by $v(x) = \int_a^b G(y, x)f(y) \,dy$ the volume integral term. It satisfies the equivalent interface problem [\[eqn:ifp-v\]]{#eqn:ifp-v label="eqn:ifp-v"}
$$\left\{\begin{aligned}
&\partial_{xx}v - \kappa^2 v = \tilde{f}(x), && x\in(0,1)\setminus\{a,b\},\\
&[v](x) = 0, && x\in\{a,b\},\\
&[\partial_x v](x) = 0, && x\in\{a,b\},\\
&v(x) = 0, && x\in\{0,1\},
\end{aligned}\right.$$
where $\tilde{f}(x)$ is an arbitrary extension of $f(x)$ to the larger interval $(0,1)$. Here, the bracket $[\cdot]$ denotes the jump value of the one-sided limits from the interior of $(a,b)$ to the exterior. The GMRES method starts with an initial guess $\varphi^0$ and generates a sequence $\varphi^k$, $k = 1, 2, \cdots$, until it converges. Let $w^k(x) = \partial_{y}G(y,x)\varphi^k(y)|_{y=a}^b$ be the double layer potential, which satisfies the equivalent interface problem [\[eqn:ifp-w\]]{#eqn:ifp-w label="eqn:ifp-w"}
$$\left\{\begin{aligned}
&\partial_{xx}w^k - \kappa^2 w^k = 0, && x\in(0,1)\setminus\{a,b\},\\
&[w^k](x) = \varphi^k(x), && x\in\{a,b\},\\
&[\partial_x w^k](x) = 0, && x\in\{a,b\},\\
&w^k(x) = 0, && x\in\{0,1\}.
\end{aligned}\right.$$
From the interface problems, one can deduce the jump values of the solutions $v$ and $w^k$ and their derivatives as follows:
$$\begin{aligned}
&[v] = 0, \quad [\partial_x v] = 0, \quad [\partial_{xx} v] = [\tilde{f}],\\
&[w^k] = \varphi^k, \quad [\partial_x w^k] = 0, \quad [\partial_{xx} w^k] = \kappa^2\varphi^k.
\end{aligned}$$
[\[eqn:jumps-v\]]{#eqn:jumps-v label="eqn:jumps-v"} [\[eqn:jumps-w\]]{#eqn:jumps-w label="eqn:jumps-w"} The two interface problems are discretized with a second-order three-point finite difference method on the Cartesian grid of the interval $[0,1]$. In the vicinity of the interface, due to the low regularity of the solution, the local truncation error is large and leads to a poor approximation of the interface problem. This can be mitigated by adding correction terms, computed from the jump conditions, to the right-hand side of the finite difference equations as defect corrections [@Zhou2023]. As a result, the local truncation error becomes relatively small, and one obtains accurate approximations.
For example, let $v_i$ for $i=0,1,\cdots, N$ be the numerical approximation to the solution of [\[eqn:ifp-v\]](#eqn:ifp-v){reference-type="eqref" reference="eqn:ifp-v"} at the grid node $x_i$. The corrected finite difference scheme is given by [\[eqn:crc-fds\]]{#eqn:crc-fds label="eqn:crc-fds"}
$$\frac{v_{i-1} - 2v_i + v_{i+1}}{h^2} - \kappa^2 v_i = \tilde{f}(x_i) + C_i, \quad i=1,2,\cdots, N-1,$$
together with the boundary condition $v_0=v_N = 0$. Here, $C_i$ is the correction term, which can be computed from the jump values [\[eqn:jumps-v\]](#eqn:jumps-v){reference-type="eqref" reference="eqn:jumps-v"}. We stress that the linear systems of the 1D interface problems remain tri-diagonal, since corrections are only made to the right-hand sides. The systems can be solved with the highly efficient LU decomposition method, namely the Thomas algorithm. In addition, the tri-diagonal linear systems are all identical if the problem has a constant coefficient and is discretized with the same mesh size in each space direction. At the implementation level, one only needs to perform the LU decomposition once in advance and store the factors for subsequent computations. Solving the subsequent 1D interface problems then only requires forward and backward substitutions. The integral operators in the boundary integral equation [\[eqn:bie-1d\]](#eqn:bie-1d){reference-type="eqref" reference="eqn:bie-1d"} can be interpreted as one-sided limits of the two non-smooth potential functions $v(x)$ and $w^k(x)$ at the interface. To be specific, we have
$$\begin{aligned}
\int_a^b G(y, x)f(y)\,dy &= v(x+) = v(x-), && x\in\{a, b\},\\
\partial_{y}G(y,x)\varphi^k(y)\big|_{y=a}^b &= \frac{1}{2}\big(w^k(x+) + w^k(x-)\big), && x\in\{a, b\}.
\end{aligned}$$
Given the numerical solutions $v_i$ and $w_i^k$ for $i=0,1,\cdots,N$, let $v_h(x)$ and $w_h^k(x)$ be two grid functions such that $v_h(x_i)=v_i$ and $w_h^k(x_i) = w_i^k$ for all $i=0,1,\cdots,N$. One can then compute the integral operators by interpolating from the grid functions $v_h(x)$ and $w^k_h(x)$.
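The corrected finite difference scheme can be sketched for the double-layer interface problem [\[eqn:ifp-w\]](#eqn:ifp-w){reference-type="ref" reference="eqn:ifp-w"}, where all jumps are known in closed form. The sketch below (illustrative values of $\kappa$, $a$, $b$, and the density; dense solves in place of the Thomas algorithm, for brevity) builds the correction terms from the jump polynomial $[w] + [\partial_x w](x-\alpha) + \tfrac{1}{2}[\partial_{xx} w](x-\alpha)^2$ at each irregular node and checks second-order convergence against a piecewise-exponential exact solution:

```python
import numpy as np

# Illustrative data: interface points {a, b} inside (0, 1), kappa, and a
# double-layer density phi at the two interface points (all made up).
kappa, a, b = 2.0, 0.3137, 0.7285
phi = {a: 1.0, b: -0.6}

def exact_w():
    # exact solution of w'' - kappa^2 w = 0 off {a, b} with jumps
    # [w] = phi, [w'] = 0 (inside minus outside) and w(0) = w(1) = 0
    s, c, k = np.sinh, np.cosh, kappa
    # unknowns: A1 sinh(kx) on (0,a); A2 sinh(kx)+B2 cosh(kx) on (a,b);
    #           A3 sinh(k(1-x)) on (b,1)
    M = np.array([[-s(k*a), s(k*a), c(k*a), 0.0],
                  [-c(k*a), c(k*a), s(k*a), 0.0],
                  [0.0, s(k*b), c(k*b), -s(k*(1-b))],
                  [0.0, c(k*b), s(k*b),  c(k*(1-b))]])
    A1, A2, B2, A3 = np.linalg.solve(M, [phi[a], 0.0, phi[b], 0.0])
    def w(x):
        x = np.asarray(x, dtype=float)
        return np.where(x < a, A1 * np.sinh(k * x),
               np.where(x < b, A2 * np.sinh(k * x) + B2 * np.cosh(k * x),
                        A3 * np.sinh(k * (1 - x))))
    return w

def solve_corrected(N):
    h = 1.0 / N
    x = np.arange(1, N) * h
    inside = (x > a) & (x < b)
    A = (np.diag(np.full(N - 1, -2.0 / h**2 - kappa**2))
         + np.diag(np.full(N - 2, 1.0 / h**2), 1)
         + np.diag(np.full(N - 2, 1.0 / h**2), -1))
    rhs = np.zeros(N - 1)
    for i in range(N - 1):
        for nb in (i - 1, i + 1):          # stencil neighbors
            if 0 <= nb <= N - 2 and inside[i] != inside[nb]:
                alf = a if min(x[i], x[nb]) < a < max(x[i], x[nb]) else b
                # jump polynomial from [w] = phi, [w'] = 0, [w''] = kappa^2 phi
                J = phi[alf] * (1.0 + 0.5 * kappa**2 * (x[nb] - alf) ** 2)
                rhs[i] += (1.0 if not inside[i] else -1.0) * J / h**2
    return x, np.linalg.solve(A, rhs)

w = exact_w()
errs = []
for N in (80, 160):
    x, wh = solve_corrected(N)
    errs.append(np.max(np.abs(wh - w(x))))
```

Only the right-hand side is modified, so the system matrix stays tridiagonal, exactly as emphasized above; halving $h$ should reduce the maximum error by roughly a factor of four.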
Due to the jumps of the potential functions at the interface $\{a,b\}$, one should also take the jump values into account in the Lagrange interpolation. More details on the correction and interpolation methods are given in [@Zhou2023], and we omit them here. *Remark 1*. When $\kappa$ is a constant, it is not difficult to obtain an analytical expression of $G(y,x)$ and use it to form the $2\times 2$ system [\[eqn:bie-1d\]](#eqn:bie-1d){reference-type="eqref" reference="eqn:bie-1d"}, which is then solved with a direct solver. Once the unknown density $\varphi$ is obtained, it only remains to solve an equivalent interface problem with known jump conditions, and the computational cost is that of a single tri-diagonal solve. In this case, the KFBI method is similar to Mayo's method [@Mayo1984; @Mayo1985; @Mayo1992]. Nevertheless, we note that the KFBI method is more general than traditional boundary integral methods, since it is also able to solve variable coefficient problems [@Ying2014]. We will report results for variable coefficient problems in the future. ## Level set-ADI method for the Stefan problem ### Level set method For moving interface problems, the interface $\Gamma$ is time-dependent and must be solved together with the PDE in the volume domain. Here, the level set method is used to implicitly capture the position of the moving interface. Given a level set function $\phi(\mathbf{x}):\mathcal{B}\rightarrow \mathbb{R}$, the interface $\Gamma$ is defined as the zero contour of $\phi(\mathbf{x})$,
$$\Gamma = \{\mathbf{x}\ |\ \phi(\mathbf{x}) = 0,\ \mathbf{x}\in\mathcal{B}\}.$$
For a time-dependent interface, the time-dependent level set function $\phi(\mathbf{x}, t):\mathcal{B}\times[0,T]\rightarrow \mathbb{R}$ satisfies the Hamilton-Jacobi equation [@Osher2003] [\[eqn:hj-eqn\]]{#eqn:hj-eqn label="eqn:hj-eqn"}
$$\partial_t\phi + V_n |\nabla\phi| = 0,$$
where $V_n$ is the prescribed normal velocity field of $\Gamma$. Assume that there exists $\varepsilon_0 > 0$ such that $\varepsilon_V \geq \varepsilon_0$.
A continuous extension of the normal velocity $V_n$ from $\Gamma$ to the domain $\mathcal{B}$ can be obtained by reformulating the Gibbs-Thomson relation [\[eqn:gibbs\]](#eqn:gibbs){reference-type="eqref" reference="eqn:gibbs"} as [\[eqn:vn-for\]]{#eqn:vn-for label="eqn:vn-for"}
$$V_n = -\frac{\varepsilon_C}{\varepsilon_V}\,\nabla\cdot\Big(\frac{\nabla\phi}{|\nabla\phi|}\Big) - \frac{u}{\varepsilon_V}.$$
Plugging [\[eqn:vn-for\]](#eqn:vn-for){reference-type="eqref" reference="eqn:vn-for"} into [\[eqn:hj-eqn\]](#eqn:hj-eqn){reference-type="eqref" reference="eqn:hj-eqn"} gives the governing equation for the evolution of $\phi$, [\[eqn:ls-for\]]{#eqn:ls-for label="eqn:ls-for"}
$$\partial_t\phi - \frac{u}{\varepsilon_V}|\nabla\phi| = \frac{\varepsilon_C}{\varepsilon_V}\,\nabla\cdot\Big(\frac{\nabla\phi}{|\nabla\phi|}\Big)|\nabla\phi|.$$
Notice that the right-hand side of [\[eqn:ls-for\]](#eqn:ls-for){reference-type="eqref" reference="eqn:ls-for"} is a second-order term. An explicit time-stepping scheme requires a stability constraint on the time step, $\tau<C h^2$, where $C$ is a constant depending on the ratio $\varepsilon_C/\varepsilon_V$. This second-order stability constraint is induced by the mean curvature term and is stricter than the common Courant-Friedrichs-Lewy (CFL) condition encountered in level set methods. To mitigate the problem, based on the relation $\nabla\cdot(\nabla\phi/|\nabla\phi|)\,|\nabla \phi| = \Delta\phi + \big(\nabla(1/|\nabla\phi|)\cdot\nabla\phi\big)|\nabla \phi|$, a semi-implicit level set method [@Smereka2003; @Salac2008] is devised by adding an extra term $S(\Delta\phi^{n+1} - \Delta \phi^n)$ to stabilize the explicit time-stepping scheme, [\[eqn:dis-ls\]]{#eqn:dis-ls label="eqn:dis-ls"}
$$\frac{\phi^{n+1}-\phi^n}{\tau} - \frac{u^n}{\varepsilon_V}|\nabla\phi^n| = \frac{\varepsilon_C}{\varepsilon_V}\,\nabla\cdot\Big(\frac{\nabla\phi^n}{|\nabla\phi^n|+\epsilon}\Big)|\nabla\phi^n| + S(\Delta\phi^{n+1} - \Delta\phi^n),$$
where $S \geq |\varepsilon_C(\phi^n)/\varepsilon_V(\phi^n)|$ is a selectable stabilization coefficient and $\epsilon>0$ is a small constant to avoid division by zero. Here, all but the stabilization term are treated explicitly, to decouple the computation of $\phi^{n+1}$ and $u^{n+1}$ and to avoid solving nonlinear systems.
The fifth-order WENO scheme [@Osher1991; @Osher2003] is used for $|\nabla\phi^n|$ on the left-hand side of [\[eqn:dis-ls\]](#eqn:dis-ls){reference-type="eqref" reference="eqn:dis-ls"}, since it represents convection in the normal direction. Spatial derivatives on the right-hand side of [\[eqn:dis-ls\]](#eqn:dis-ls){reference-type="eqref" reference="eqn:dis-ls"} are approximated with central differences, since they are diffusive in nature. In practice, the level set function should be chosen as a signed distance function, which satisfies $|\nabla\phi|=1$, to improve the accuracy of the level set method. As the level set function evolves, it gradually becomes too flat or too steep in some regions and ceases to be a signed distance function. To reconstruct a signed distance function, the level set function $\phi$ is re-initialized by solving the re-initialization equation [@Osher2003] [\[eqn:rein-eqn\]]{#eqn:rein-eqn label="eqn:rein-eqn"}
$$\left\{\begin{aligned}
&\frac{\partial\phi}{\partial\tau} + S(\phi_0)\,(|\nabla\phi|-1) = 0,\\
&\phi(\mathbf{x}, 0) = \phi_0(\mathbf{x}),
\end{aligned}\right.$$
where $\tau$ is a pseudo-time, $\phi_0$ is the level set function before re-initialization, and $S(\phi) = \frac{\phi}{\sqrt{\phi^2 + h^2}}$ is a regularized sign function. The fifth-order WENO scheme and the third-order TVD Runge-Kutta scheme are used to solve the equation [\[eqn:rein-eqn\]](#eqn:rein-eqn){reference-type="eqref" reference="eqn:rein-eqn"}. In this work, a narrow-band level set method with a cutoff technique is implemented to improve the efficiency of the numerical computations, resulting in a local semi-implicit level set method [@Salac2008]. Equations [\[eqn:ls-for,eqn:rein-eqn\]](#eqn:ls-for,eqn:rein-eqn){reference-type="ref" reference="eqn:ls-for,eqn:rein-eqn"} are solved only in the narrow band instead of the entire domain. At each time step, the narrow band is computed following a layer-by-layer approach.
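The re-initialization mechanism can be illustrated compactly in 2D. The sketch below is not the paper's discretization: it replaces WENO5 and TVD-RK3 with first-order Godunov upwinding and forward Euler, and uses an illustrative circle level set that is deliberately not a signed distance function away from its zero contour.

```python
import numpy as np

N = 101
x = np.linspace(-1.0, 1.0, N)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
phi0 = X**2 + Y**2 - 0.25            # zero contour: circle of radius 0.5; not a SDF
S = phi0 / np.sqrt(phi0**2 + h**2)   # regularized sign function S(phi_0)

def one_sided(p):
    # backward/forward differences with replicated edge values
    pe = np.pad(p, 1, mode="edge")
    dxm = (p - pe[:-2, 1:-1]) / h
    dxp = (pe[2:, 1:-1] - p) / h
    dym = (p - pe[1:-1, :-2]) / h
    dyp = (pe[1:-1, 2:] - p) / h
    return dxm, dxp, dym, dyp

phi = phi0.copy()
dt = 0.4 * h                          # CFL-limited pseudo-time step
for _ in range(150):
    dxm, dxp, dym, dyp = one_sided(phi)
    # Godunov upwind gradient norms for S > 0 and S < 0, respectively
    gp = np.sqrt(np.maximum(np.maximum(dxm, 0)**2, np.minimum(dxp, 0)**2)
                 + np.maximum(np.maximum(dym, 0)**2, np.minimum(dyp, 0)**2))
    gm = np.sqrt(np.maximum(np.minimum(dxm, 0)**2, np.maximum(dxp, 0)**2)
                 + np.maximum(np.minimum(dym, 0)**2, np.maximum(dyp, 0)**2))
    phi = phi - dt * (np.maximum(S, 0) * (gp - 1) + np.minimum(S, 0) * (gm - 1))

sdf = np.sqrt(X**2 + Y**2) - 0.5      # the signed distance the iteration targets
band = np.abs(sdf) < 0.3
```

After the pseudo-time iteration, $|\nabla\phi|\approx 1$ in the band around the interface, while the zero contour stays (to first order) where $\phi_0$ put it; this is exactly the property the narrow-band method relies on.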
Define the first layer of irregular grid nodes by
$$\mathcal{L}_1 = \big\{(x_i,y_j,z_k)\ \big|\ \exists(l, m, n)\in\mathcal{I},\ \phi_{i,j,k}\,\phi_{i+l, j+m, k+n} < 0\big\},$$
where the index set $\mathcal{I}$ is given by
$$\mathcal{I} = \{(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)\}.$$
Subsequently, the $k$th layer of irregular grid nodes for $k=2, 3,\cdots$ is defined by
$$\mathcal{L}_k = \big\{(x_i,y_j,z_k)\ \big|\ \exists(l, m, n)\in\mathcal{I},\ (x_{i+l}, y_{j+m}, z_{k+n}) \in \mathcal{L}_{k-1}\big\}\setminus\bigcup_{s=1}^{k-1}\mathcal{L}_s.$$
Let the first $K$ layers of irregular grid nodes form the computational narrow band. The grid nodes in the narrow band are called activated grid nodes, at which the level set function values are updated when solving [\[eqn:ls-for,eqn:rein-eqn\]](#eqn:ls-for,eqn:rein-eqn){reference-type="ref" reference="eqn:ls-for,eqn:rein-eqn"}. After the level set function has been updated at the activated nodes, for evolution or re-initialization, it becomes discontinuous at the interface between activated and non-activated grid nodes, which may cause numerical oscillations and eventually contaminate the solution. A simple cutoff procedure bypasses this issue by modifying the level set function after re-initialization so that it becomes continuous again. The cutoff function $\phi^{new} = c(\phi)$ is defined by
$$c(\phi) = \left\{\begin{aligned}
&\phi, && |\phi| < \phi^c,\\
&\mathrm{sign}(\phi)\,\phi^c, && |\phi| \geq \phi^c,
\end{aligned}\right.$$
where $\phi^c\in(0, Kh]$ is a cutoff threshold. In this work, we choose $K = 8$ and $\phi^c=5h$. Since the level set function is of little interest at grid nodes away from the zero contour, its value can simply be set to a constant there, to suppress numerical oscillations and to simplify the algorithm as well. ### ADI scheme for the heat equation with an interface The evolution of the temperature field satisfies the heat equation and the Stefan condition, leading to an interface problem, [\[eqn:heat-ifp\]]{#eqn:heat-ifp label="eqn:heat-ifp"}
$$\left\{\begin{aligned}
&u_t = \Delta u, && \textnormal{in } \Omega_s\cup\Omega_l,\\
&[u] = 0, && \textnormal{on } \Gamma,\\
&[\partial_{\mathbf{n}}u] = -\phi_t/|\nabla\phi|, && \textnormal{on } \Gamma,
\end{aligned}\right.$$
where we have substituted $V_n$ with $-\phi_t/|\nabla\phi|$ by using the equation [\[eqn:hj-eqn\]](#eqn:hj-eqn){reference-type="eqref" reference="eqn:hj-eqn"}.
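The layer-by-layer narrow-band construction described above can be sketched with boolean masks and axis shifts (illustrative sphere level set and grid size; $K=3$ layers here instead of the $K=8$ used in the computations):

```python
import numpy as np

N = 41
x = np.linspace(-1.0, 1.0, N)
h = x[1] - x[0]
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
phi = X**2 + Y**2 + Z**2 - 0.25            # sphere of radius 0.5

def neighbors(mask):
    # union of the six axis shifts of a boolean mask (no wraparound)
    out = np.zeros_like(mask)
    for axis in range(3):
        fwd = tuple(slice(1, None) if ax == axis else slice(None) for ax in range(3))
        bwd = tuple(slice(None, -1) if ax == axis else slice(None) for ax in range(3))
        out[fwd] |= mask[bwd]
        out[bwd] |= mask[fwd]
    return out

# first layer: sign change of phi with at least one of the six neighbors
neg = phi < 0
layer1 = (neighbors(neg) & ~neg) | (neighbors(~neg) & neg)
layers = [layer1]
covered = layer1.copy()
for k in range(2, 4):
    nxt = neighbors(layers[-1]) & ~covered  # neighbors of the previous layer only
    layers.append(nxt)
    covered |= nxt
```

Because each new layer excludes everything already covered, the layers are pairwise disjoint, and `covered` after $K$ passes is exactly the narrow band on which the level set equations are solved.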
The zeroth-order jump condition $[u] = 0$ is implied by the Gibbs-Thomson relation [\[eqn:gibbs\]](#eqn:gibbs){reference-type="eqref" reference="eqn:gibbs"}, since the temperature is continuous across $\Gamma$. When performing dimension splitting with ADI schemes for this problem, not only the PDE but also the interface conditions need to be decomposed into 1D ones. Note that the zeroth-order jump condition $[u]=0$ reduces directly to a 1D condition. To decompose the first-order jump condition, one takes tangential derivatives of the zeroth-order jump condition to generate two additional first-order jump conditions. Let $\boldsymbol{\tau}_1$ and $\boldsymbol{\tau}_2$ be two unit tangential vectors with $\boldsymbol{\tau}_1 \cdot\boldsymbol{\tau}_2=0$. Taking tangential derivatives of both sides of $[u]=0$ in the directions $\boldsymbol{\tau}_1$ and $\boldsymbol{\tau}_2$ and combining with the first-order jump condition in [\[eqn:heat-ifp\]](#eqn:heat-ifp){reference-type="eqref" reference="eqn:heat-ifp"} gives [\[eqn:jmp-sys\]]{#eqn:jmp-sys label="eqn:jmp-sys"}
$$[\nabla u]\cdot\boldsymbol{\tau}_1 = 0, \qquad [\nabla u]\cdot\boldsymbol{\tau}_2 = 0, \qquad [\nabla u]\cdot\mathbf{n} = -\phi_t/|\nabla\phi|.$$
From the $3\times 3$ linear system [\[eqn:jmp-sys\]](#eqn:jmp-sys){reference-type="eqref" reference="eqn:jmp-sys"}, the 1D first-order jump conditions $[u_x]$, $[u_y]$, and $[u_z]$ are easily obtained,
$$[\nabla u] = -\mathbf{n}\,\phi_t/|\nabla\phi| = -\nabla\phi\,\phi_t/|\nabla\phi|^2,$$
where we have used the relation $\mathbf{n} = \nabla \phi / |\nabla \phi|$. At the discrete level, the jump conditions are approximated by [\[eqn:jmp-dis\]]{#eqn:jmp-dis label="eqn:jmp-dis"}
$$[u^{n+1}] = 0, \qquad [\nabla u^{n+1}] = -\frac{\nabla\phi^{n+1}\,(\phi^{n+1}-\phi^n)}{\tau\,(|\nabla\phi^{n+1}|^2+\epsilon)} =: \mathbf{Q}(\phi^n, \phi^{n+1}),$$
where $\epsilon>0$ is again a small constant to avoid division by zero.
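A quick numerical check that $[\nabla u] = -\nabla\phi\,\phi_t/|\nabla\phi|^2$ indeed solves the $3\times 3$ system (hypothetical level set: a sphere growing at speed $0.1$, evaluated at a made-up point near the interface):

```python
import numpy as np

p = np.array([0.3, 0.2, 0.4])             # evaluation point near the interface
r = np.linalg.norm(p)
grad_phi = p / r                           # phi(x, t) = |x| - (0.5 + 0.1 t)
phi_t = -0.1
jump_grad_u = -grad_phi * phi_t / np.dot(grad_phi, grad_phi)

# build the orthonormal frame (n, tau1, tau2) used in the 3x3 system
n = grad_phi / np.linalg.norm(grad_phi)
t1 = np.cross(n, [0.0, 0.0, 1.0])
t1 /= np.linalg.norm(t1)
t2 = np.cross(n, t1)
```

The Cartesian components of `jump_grad_u` are precisely the 1D jumps $[u_x]$, $[u_y]$, $[u_z]$ fed to the three families of sub-problems below.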
Based on the ADI scheme [\[eqn:DG-adi\]](#eqn:DG-adi){reference-type="eqref" reference="eqn:DG-adi"} or [\[eqn:3L-ADI\]](#eqn:3L-ADI){reference-type="eqref" reference="eqn:3L-ADI"}, the ADI scheme for the heat equation with an interface requires solving a sequence of 1D interface problems,
$$\begin{aligned}
&\partial_{xx} u_{j,k} - \kappa^2 u_{j,k} = f_{j,k}, && \textnormal{in } (\Omega_s\cup\Omega_l)\cap\{y = y_j, z = z_k\}, && j,k = 1, 2,\cdots, N-1,\\
&\partial_{yy} u_{k,i} - \kappa^2 u_{k,i} = f_{k,i}, && \textnormal{in } (\Omega_s\cup\Omega_l)\cap\{z = z_k, x = x_i\}, && k,i = 1, 2,\cdots, N-1,\\
&\partial_{zz} u_{i,j} - \kappa^2 u_{i,j} = f_{i,j}, && \textnormal{in } (\Omega_s\cup\Omega_l)\cap\{x = x_i, y = y_j\}, && i,j = 1, 2,\cdots, N-1,
\end{aligned}$$
subject to interface conditions, respectively,
$$\begin{aligned}
&[u_{j,k}] = 0, \quad [\partial_x u_{j,k}] = Q_x(\phi^n,\phi^{n+1}), && \textnormal{on } \Gamma\cap\{y = y_j, z = z_k\}, && j,k = 1, 2,\cdots, N-1,\\
&[u_{k,i}] = 0, \quad [\partial_y u_{k,i}] = Q_y(\phi^n,\phi^{n+1}), && \textnormal{on } \Gamma\cap\{z = z_k, x = x_i\}, && k,i = 1, 2,\cdots, N-1,\\
&[u_{i,j}] = 0, \quad [\partial_z u_{i,j}] = Q_z(\phi^n,\phi^{n+1}), && \textnormal{on } \Gamma\cap\{x = x_i, y = y_j\}, && i,j = 1, 2,\cdots, N-1,
\end{aligned}$$
where $Q_x$, $Q_y$, and $Q_z$ denote the three components of $\mathbf{Q}(\phi^n, \phi^{n+1})$. The 1D interface problems are similar to [\[eqn:ifp-v\]](#eqn:ifp-v){reference-type="eqref" reference="eqn:ifp-v"} and [\[eqn:ifp-w\]](#eqn:ifp-w){reference-type="eqref" reference="eqn:ifp-w"} and can be solved with the finite difference scheme [\[eqn:crc-fds\]](#eqn:crc-fds){reference-type="eqref" reference="eqn:crc-fds"} and the fast Thomas algorithm. Unlike the case of solving boundary value problems, there is no need to solve boundary integral equations, since the jump conditions of the 1D interface problems are already known. The computational cost for solving the interface problems is therefore almost the same as for the cases without interfaces. # Numerical results {#sec:result} In this section, a variety of numerical examples, including the heat equation and a reaction-diffusion equation on fixed domains and the Stefan problem with a free boundary, are presented to validate the proposed method.
The following level set functions $\phi(\mathbf{x})$ for defining irregular domains are used in the numerical tests:

- An ellipsoid: $$\phi(\mathbf{x}) = \frac{x^{2}}{1^{2}} + \frac{y^{2}}{0.7^{2}} + \frac{z^{2}}{0.5^{2}} - 1;$$

- A torus: $$\phi(\mathbf{x}) = (\sqrt{x^{2} + y^{2}} - 0.8)^{2} + z^{2} - 0.34^2;$$

- A four-atom molecule: $$\phi(\mathbf{x}) = c - \sum_{k = 1}^{4} \exp\Big(-\frac{\|\mathbf{x}-\mathbf{x}_k\|^2}{r^{2}}\Big),$$ with $c=r=0.6$ and $\mathbf{x}_1 = (\frac{\sqrt{3}}{3}, 0, -\frac{\sqrt{6}}{12})$, $\mathbf{x}_2 = (-\frac{\sqrt{3}}{6}, 0.5, - \frac{\sqrt{6}}{12})$, $\mathbf{x}_3 = (-\frac{\sqrt{3}}{6}, -0.5, -\frac{\sqrt{6}}{12})$, $\mathbf{x}_4 = (0, 0, \frac{\sqrt{6}}{4})$;

- A banana: $$\begin{aligned} \phi(\mathbf{x}) &= (7x + 6)^{4} + 2401y^{4} + 3601.5z^{4} + 98(7x+6)^{2}y^{2} +98(7x+6)^{2}z^{2} \\ & \quad +4802y^{2}z^{2}-94(7x+6)^{2}+3822y^{2}-4606z^{2}+1521. \end{aligned}$$

Domain boundaries $\Gamma=\partial\Omega$ defined by the zero contours of the level set functions $\phi(\mathbf{x})$ are plotted in . Numerical errors are estimated on the grid nodes $\Omega^h$ at the final computational time $t=T$ in both the $L_2$ and $L_{\infty}$ norms, which are defined as $$\|e^h\|_{L_2} = \sqrt{\frac{1}{N_{\Omega^h}}\sum_{\mathbf{x}\in\Omega^h}\big|u^h(\mathbf{x}, T)-u(\mathbf{x},T)\big|^2}, \qquad \|e^h\|_{L_{\infty}} = \max_{\mathbf{x}\in\Omega^h}\big|u^h(\mathbf{x}, T)-u(\mathbf{x},T)\big|,$$ where $N_{\Omega^h}$ is the number of grid nodes in $\Omega^h$, and $u^h$ and $u$ are the numerical and exact solutions, respectively. The convergence order is computed by $$\text{order} = \log_{2}\frac{\|e^{h}\|}{\|e^{h/2}\|}.$$ All numerical experiments are performed on a personal computer with an Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz (16 logical cores). The codes for conducting the numerical experiments are written in C++.

## The heat equation

In the first example, an initial-boundary value problem (IBVP) of the heat equation [\[eqn:heat1\]](#eqn:heat1){reference-type="eqref" reference="eqn:heat1"} is solved, [\[eqn:heat1\]]{#eqn:heat1 label="eqn:heat1"} $$u_t(\mathbf{x}, t) = a\Delta u(\mathbf{x}, t) + f(\mathbf{x}, t),$$ where the diffusion coefficient is chosen as $a = 2$.
Here, the initial and boundary conditions for $u(\mathbf{x}, t)$ and the source term $f(\mathbf{x}, t)$ are chosen such that the problem has the exact solution $$u(\mathbf{x}, t) = \big((x + y + z) - t\big).$$ Two different shapes of $\Omega$, an ellipsoid and a torus, are considered in this example. The bounding box is chosen as $\mathcal{B} = [-1.2,1.2]^3$. The final time is set as $T=0.5$.

### Convergence test

In order to study the overall convergence rates of the proposed methods, the spatial grid and time step are refined simultaneously with $h=2.4/N$ and $\tau=2/N$. Numerical errors and convergence orders of the KFBI-ADI method based on both the DG scheme and the modified DG scheme are summarized in .

| $N$ | KFBI-DG-ADI $L_2$ error | Rate | KFBI-DG-ADI $L_\infty$ error | Rate | KFBI-mDG-ADI $L_2$ error | Rate | KFBI-mDG-ADI $L_\infty$ error | Rate |
|----:|--------:|-----:|--------:|-----:|--------:|-----:|--------:|-----:|
|  16 | 1.32e-3 |  --  | 2.35e-3 |  --  | 1.62e-4 |  --  | 4.37e-4 |  --  |
|  32 | 3.32e-4 | 1.99 | 7.74e-4 | 1.60 | 2.64e-5 | 2.62 | 1.15e-4 | 1.93 |
|  64 | 8.14e-5 | 2.03 | 3.39e-4 | 1.19 | 4.93e-6 | 2.42 | 2.26e-5 | 2.35 |
| 128 | 1.97e-5 | 2.05 | 1.26e-4 | 1.43 | 1.04e-6 | 2.25 | 4.74e-6 | 2.25 |
| 256 | 4.77e-6 | 2.05 | 3.39e-5 | 1.89 | 2.39e-7 | 2.12 | 9.74e-7 | 2.28 |

: Numerical results of the heat equation on an ellipsoid-shaped domain.

| $N$ | KFBI-DG-ADI $L_2$ error | Rate | KFBI-DG-ADI $L_\infty$ error | Rate | KFBI-mDG-ADI $L_2$ error | Rate | KFBI-mDG-ADI $L_\infty$ error | Rate |
|----:|--------:|-----:|--------:|-----:|--------:|-----:|--------:|-----:|
|  16 | 1.32e-3 |  --  | 2.59e-3 |  --  | 1.75e-4 |  --  | 5.12e-4 |  --  |
|  32 | 3.48e-4 | 1.92 | 8.35e-4 | 1.63 | 2.49e-5 | 2.81 | 1.21e-4 | 2.08 |
|  64 | 8.58e-5 | 2.02 | 3.41e-4 | 1.29 | 3.96e-6 | 2.65 | 2.52e-5 | 2.26 |
| 128 | 2.11e-5 | 2.02 | 1.02e-4 | 1.74 | 7.04e-7 | 2.49 | 5.12e-6 | 2.30 |
| 256 | 5.15e-6 | 2.03 | 3.52e-5 | 1.53 | 1.43e-7 | 2.30 | 1.18e-6 | 2.12 |

: Numerical results of the heat equation on a torus-shaped domain.
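The error norms and convergence orders reported in the tables can be reproduced with a few short routines; a minimal sketch (names are ours, not the paper's):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Discrete L2 error over the grid nodes: sqrt( (1/N) * sum |u^h - u|^2 ).
double errorL2(const std::vector<double>& numer, const std::vector<double>& exact) {
    double s = 0.0;
    for (std::size_t i = 0; i < numer.size(); ++i) {
        double d = numer[i] - exact[i];
        s += d * d;
    }
    return std::sqrt(s / numer.size());
}

// Discrete L-infinity error: max |u^h - u| over the grid nodes.
double errorLinf(const std::vector<double>& numer, const std::vector<double>& exact) {
    double m = 0.0;
    for (std::size_t i = 0; i < numer.size(); ++i)
        m = std::max(m, std::fabs(numer[i] - exact[i]));
    return m;
}

// Observed convergence order when the mesh size is halved:
//   order = log2( ||e^h|| / ||e^{h/2}|| ).
double convergenceOrder(double errCoarse, double errFine) {
    return std::log2(errCoarse / errFine);
}
```

For example, errors of 4.0e-2 on the coarse grid and 1.0e-2 on the refined grid correspond to second-order convergence.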
Also, the difference in the temporal convergence rates of the two schemes is studied by using a fine spatial grid ($N=256$) and successively refining the time step. The results are plotted in . ![Temporal convergence of the two KFBI-ADI schemes for the heat equation on the ellipsoid-shaped domain (left) and the torus-shaped domain (right).](fig/heat/ell-dt.png "fig:"){#fig:heat-dt width="0.45\\linewidth"} ![Temporal convergence of the two KFBI-ADI schemes for the heat equation on the ellipsoid-shaped domain (left) and the torus-shaped domain (right).](fig/heat/tor-dt.png "fig:"){#fig:heat-dt width="0.45\\linewidth"} It can be observed that the modified DG scheme produces more accurate results on both the ellipsoid and torus domains than the original DG scheme. Both ADI schemes are approximately second-order accurate. However, the DG scheme suffers from order reduction in the $L_{\infty}$ norm. The modified DG scheme is free of this problem since, by construction, it enforces the boundary conditions with higher consistency. In addition, the computation never blows up for any of the tested time steps, which are much larger than those allowed by explicit schemes. This numerically verifies that the proposed ADI schemes remain unconditionally stable on irregular domains as well.

### Efficiency test

An important aspect of ADI schemes is the efficiency of solving time-dependent problems in multiple spatial dimensions. Since the method is a dimension-splitting technique, it is intrinsically well-suited for parallelization. In each sub-step of the ADI schemes, a sequence of one-dimensional sub-problems, which are independent of each other, can be solved separately via simple for loops. In C++, custom codes for the ADI schemes written with for loops can be easily converted to a multi-threaded version with the OpenMP library [@chandra2001parallel] by adding a few directives.
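The structure described above can be sketched as follows: each ADI sub-step reduces to mutually independent tridiagonal solves by the Thomas algorithm, and the loop over grid lines carries a single OpenMP directive (a simplified illustration, not the authors' code; the pragma is ignored if OpenMP is disabled):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Thomas algorithm for a tridiagonal system with sub-diagonal a, diagonal b,
// super-diagonal c, and right-hand side d; O(N) operations per line.
std::vector<double> thomasSolve(std::vector<double> a, std::vector<double> b,
                                std::vector<double> c, std::vector<double> d) {
    const std::size_t n = b.size();
    for (std::size_t i = 1; i < n; ++i) {     // forward elimination
        double w = a[i] / b[i - 1];
        b[i] -= w * c[i - 1];
        d[i] -= w * d[i - 1];
    }
    std::vector<double> x(n);
    x[n - 1] = d[n - 1] / b[n - 1];
    for (std::size_t i = n - 1; i-- > 0;)     // back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i];
    return x;
}

// One x-sweep: rhs[l] holds the data of one grid line; lines are independent,
// so the loop can be parallelized with one OpenMP directive.
void xSweep(std::vector<std::vector<double>>& rhs,
            const std::vector<double>& a, const std::vector<double>& b,
            const std::vector<double>& c) {
    #pragma omp parallel for
    for (long l = 0; l < static_cast<long>(rhs.size()); ++l)
        rhs[l] = thomasSolve(a, b, c, rhs[l]);
}
```

Compiling with `-fopenmp` enables the multi-threaded version; without it the same code runs serially.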
Even without parallelization techniques, the ADI scheme with the fast Thomas algorithm is still very efficient due to its linear complexity with a small constant factor. To demonstrate the efficiency of the proposed methods and the speed-up obtained by multi-threading, we solve the heat equation with the same configurations as before and collect wall times obtained with different thread numbers $\chi = 1, 2, 4$ and $8$. The results are plotted in , in which the x-axis represents the number of degrees of freedom, defined as the total number of Cartesian grid nodes times the number of time steps, i.e., $DoF = (N+1)^3N_t$, and the y-axis represents the total wall time (in seconds) of numerical tests with different thread numbers. ![Wall times of the KFBI-ADI method with different mesh sizes and thread numbers for the heat equation on the ellipsoid-shaped domain (left) and the torus-shaped domain (right).](fig/heat/ell-time.png "fig:"){#fig:time1 width="45%"} ![Wall times of the KFBI-ADI method with different mesh sizes and thread numbers for the heat equation on the ellipsoid-shaped domain (left) and the torus-shaped domain (right).](fig/heat/tor-time.png "fig:"){#fig:time1 width="45%"} It can be observed that, as the degrees of freedom increase, the slopes of these lines approach $1$ for all thread numbers, indicating that the proposed method has linear complexity. Furthermore, detailed speed-up ratios are also presented in  to estimate the performance of acceleration by multi-threading for the proposed method. Here, the speed-up ratio is defined as the ratio of the wall times using one and $\chi$ threads, i.e., $$\text{speed-up} = \frac{t_{\text{wall}}(1)}{t_{\text{wall}}(\chi)}.$$ From the results, it is clear that multi-threading accelerates the method considerably, which shows that the proposed method is well-suited for parallelization. The speed-up per thread decreases as the number of threads increases, so the ideal speed-up is not attained.
This is mostly due to the increase in communication and synchronization costs when more threads are enabled for the computation.

| $N$ | ellipsoid $\chi=2$ | ellipsoid $\chi=4$ | ellipsoid $\chi=8$ | torus $\chi=2$ | torus $\chi=4$ | torus $\chi=8$ |
|----:|-----:|-----:|-----:|-----:|-----:|-----:|
|  64 | 1.86 | 2.39 | 3.31 | 1.91 | 3.02 | 5.26 |
| 128 | 1.94 | 2.47 | 4.22 | 1.92 | 3.06 | 5.29 |
| 256 | 1.92 | 2.42 | 4.01 | 1.90 | 3.12 | 5.39 |

: Speed-up ratios of the proposed method with different thread numbers.

## The reaction-diffusion equation

In this example, the IBVP of a reaction-diffusion equation is solved with the proposed method. We consider the Fisher equation [\[diffusion-reaction\]]{#diffusion-reaction label="diffusion-reaction"} $$u_t = \epsilon\Delta u + f(u),$$ where $\epsilon>0$ is a small parameter, chosen as $0.1$, and $f(u) = \frac{2}{3 \epsilon}u^{2}(1 - u)$ is the reaction term. Initial and boundary conditions are chosen to match an exact traveling-wave solution $u(\mathbf{x},t)$ whose wavefront propagates in the direction of a unit vector $\mathbf{n}\in\mathbb{R}^3$. The problem is solved on a banana-shaped and a four-atom molecule-shaped domain. The bounding box for the banana-shaped domain is chosen as $[-1.2,1.2]^3$, while that for the molecule-shaped domain is chosen as $[-1.01, 1.01]^3$. The final time is set as $T=0.5$. The scalar nonlinear algebraic equation resulting from the implicit step of the reaction term is solved with the Newton method with an absolute tolerance $\mathrm{tol} = 10^{-12}$. Numerical errors and convergence orders obtained by the two KFBI-ADI schemes are summarized in . Similarly, second-order accuracy is observed for both schemes. Since the exact solution of this example is nearly constant away from the moving front, the original DG scheme produces more accurate results than it does for time-dependent boundary conditions and has accuracy comparable to the modified DG scheme.
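The scalar Newton solve for the implicit reaction step can be sketched as follows; this is a hypothetical illustration assuming a backward-Euler-type step $u - u_{\text{old}} - \tau f(u) = 0$ (the paper's exact splitting of the reaction term may differ):

```cpp
#include <cassert>
#include <cmath>

// Sketch (scheme details assumed): at each grid node, solve the scalar
// equation g(u) = u - uOld - tau*f(u) = 0 with f(u) = 2/(3*eps)*u^2*(1-u)
// by Newton's method, stopping when |g(u)| <= tol.
double reactionNewton(double uOld, double tau, double eps,
                      double tol = 1e-12, int maxIter = 100) {
    double u = uOld;  // initial guess: previous value
    for (int it = 0; it < maxIter; ++it) {
        double f  = 2.0 / (3.0 * eps) * u * u * (1.0 - u);
        double df = 2.0 / (3.0 * eps) * (2.0 * u - 3.0 * u * u);  // f'(u)
        double g  = u - uOld - tau * f;
        if (std::fabs(g) <= tol) break;
        u -= g / (1.0 - tau * df);   // Newton update for g(u) = 0
    }
    return u;
}
```

The stable states $u=0$ and $u=1$ are fixed points of the reaction term, so the iteration leaves them unchanged.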
It is noteworthy that the banana-shaped domain is not smooth: it has two sharp corners, unlike the previous domains. The proposed ADI methods can naturally handle the presence of sharp corners without losing accuracy, since the geometric singularity is no longer a problem in the 1D sub-problems. Snapshots of the numerical solutions at $t=0$, $t=0.25$, and $t=0.5$ are also displayed in , which show the time evolution of the moving front between the red-colored and blue-colored phases.

| $N$ | KFBI-DG-ADI $L_2$ error | Rate | KFBI-DG-ADI $L_\infty$ error | Rate | KFBI-mDG-ADI $L_2$ error | Rate | KFBI-mDG-ADI $L_\infty$ error | Rate |
|----:|--------:|-----:|--------:|-----:|--------:|-----:|--------:|-----:|
|  16 | 1.24e-3 |  --  | 5.63e-3 |  --  | 3.91e-3 |  --  | 9.32e-3 |  --  |
|  32 | 3.55e-4 | 1.80 | 1.89e-3 | 1.57 | 9.17e-4 | 2.09 | 2.32e-3 | 2.01 |
|  64 | 9.34e-5 | 1.93 | 5.21e-4 | 1.86 | 2.20e-4 | 2.06 | 5.57e-4 | 2.06 |
| 128 | 2.39e-5 | 1.97 | 1.38e-4 | 1.92 | 5.40e-5 | 2.03 | 1.36e-4 | 2.03 |
| 256 | 6.03e-6 | 1.99 | 3.46e-5 | 2.00 | 1.34e-5 | 2.01 | 3.34e-5 | 2.03 |

: Numerical results of the reaction-diffusion equation on a four-atom molecule-shaped domain.

| $N$ | KFBI-DG-ADI $L_2$ error | Rate | KFBI-DG-ADI $L_\infty$ error | Rate | KFBI-mDG-ADI $L_2$ error | Rate | KFBI-mDG-ADI $L_\infty$ error | Rate |
|----:|--------:|-----:|--------:|-----:|--------:|-----:|--------:|-----:|
|  16 | 1.51e-3 |  --  | 5.04e-3 |  --  | 2.42e-3 |  --  | 6.63e-3 |  --  |
|  32 | 4.74e-4 | 1.67 | 1.64e-3 | 1.62 | 5.58e-4 | 2.12 | 1.63e-3 | 2.02 |
|  64 | 1.27e-4 | 1.90 | 4.78e-4 | 1.78 | 1.35e-4 | 2.05 | 4.10e-4 | 1.99 |
| 128 | 3.28e-5 | 1.95 | 1.24e-4 | 1.95 | 3.31e-5 | 2.03 | 1.02e-4 | 2.01 |
| 256 | 8.31e-6 | 1.98 | 3.21e-5 | 1.95 | 8.22e-6 | 2.01 | 2.53e-5 | 2.01 |

: Numerical results of the reaction-diffusion equation on a banana-shaped domain.

## The Stefan problem

In the final example, a free boundary problem, the Stefan problem, is solved with the proposed method to simulate dendritic solidification.
In this case, the free boundary is not only irregular but also time-dependent. The level set method is used to capture the location of the free boundary. A uniform Cartesian grid with $128\times 128\times 128$ cells over the bounding box $\mathcal{B} = [-2,2]^{3}$ is used for the computations. Time steps are computed using a CFL number of $0.5$ for the level set equation. The computation is terminated when the free boundary reaches the box boundary. Initially, a spherical solid seed with zero temperature is placed at the origin, surrounded by undercooled liquid at a lower temperature given by the Stefan number $St = -0.5$. Thus, the initial values of the level set function and the temperature field are chosen as $$\phi(\mathbf{x},0)=\phi_0(\mathbf{x}) = \|\mathbf{x}\|-r, \qquad T(\mathbf{x},0) = T_0(\mathbf{x}) = H_{\epsilon}(\phi_0(\mathbf{x}))\, St,$$ where $r=0.1$ is the seed radius and $H_{\epsilon}(r) = \frac{1}{2}(1 + \tanh(r/\epsilon))$ is a regularized Heaviside function with smoothing width $\epsilon$ proportional to the mesh size $h$. The anisotropic coefficients are chosen as $$\varepsilon_C(\mathbf{n}) = \varepsilon_V(\mathbf{n}) = \bar{\varepsilon}\big(1-A(n_1^{4}+n_2^{4}+n_3^{4})\big),$$ where $n_1$, $n_2$ and $n_3$ are the components of the unit normal vector $\mathbf{n}$. Note that surface tension and molecular kinetic effects stabilize the free boundary by suppressing the unstable growth of small perturbations. Due to the anisotropy, the spherical solid has preferred growth directions along vectors $\mathbf{n}$ for which the values $\varepsilon_C(\mathbf{n})$ and $\varepsilon_V(\mathbf{n})$ are small. First, we choose $\bar{\varepsilon}=0.001$ and $A=-3.0$ and expect dendrites to grow in the six directions of the coordinate axes. In , snapshots of the free boundary at $t=0, 0.01, 0.02, 0.04$, and $0.08$ are presented. ![Time evolution of the free boundary with $\bar{\varepsilon}=0.001$ and $A=-3.0$ at $t=0$, $0.01$, $0.02$, $0.04$, and $0.08$.](fig/stefan/dendrite.pdf){#fig:stefan1 width="80%"} The morphology of the free boundary in the front view and the temperature field are also shown in .
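The initialization formulas above translate directly into code; a small sketch (function names are ours, not the paper's):

```cpp
#include <cassert>
#include <cmath>

// Regularized Heaviside function H_eps(r) = (1 + tanh(r/eps)) / 2, used to
// smooth the initial temperature jump across the seed boundary.
double heavisideReg(double r, double eps) {
    return 0.5 * (1.0 + std::tanh(r / eps));
}

// Cubic anisotropy factor eps_C(n) = eps_V(n)
//   = epsBar * (1 - A*(n1^4 + n2^4 + n3^4))
// for a unit normal vector n = (n1, n2, n3).
double anisotropy(double epsBar, double A, double n1, double n2, double n3) {
    return epsBar * (1.0 - A * (n1*n1*n1*n1 + n2*n2*n2*n2 + n3*n3*n3*n3));
}
```

At the interface ($r=0$) the regularized Heaviside equals $1/2$, and for $\bar{\varepsilon}=0.001$, $A=-3$ the anisotropy factor along a coordinate axis is $0.001\times(1+3)=0.004$.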
![Front view of the free boundary (left) and slices of the temperature field (right).](fig/stefan/front-view.png "fig:"){#fig:stefan2 width="40%"} ![Front view of the free boundary (left) and slices of the temperature field (right).](fig/stefan/slice.png "fig:"){#fig:stefan2 width="40%"} One can also observe that the symmetry of the free boundary is well preserved by the proposed method. Then, different values of $\bar{\varepsilon}$ are chosen to change the stabilization effect on the growth pattern of the dendrites. The final morphologies of the free boundaries obtained with $\bar{\varepsilon} = 0.002$, $0.001$ and $0.0005$ are shown in . ![Morphologies of the free boundary using different parameters ($\bar{\varepsilon}=0.002$, $0.001$, and $0.0005$ from left to right).](fig/stefan/large-sf.png "fig:"){#fig:stefan3 width="30%"} ![Morphologies of the free boundary using different parameters ($\bar{\varepsilon}=0.002$, $0.001$, and $0.0005$ from left to right).](fig/stefan/middle-sf.png "fig:"){#fig:stefan3 width="30%"} ![Morphologies of the free boundary using different parameters ($\bar{\varepsilon}=0.002$, $0.001$, and $0.0005$ from left to right).](fig/stefan/small-sf.png "fig:"){#fig:stefan3 width="30%"} As can be seen, larger values of $\bar{\varepsilon}$ lead to more stable results in the sense that the free boundary is smoother. We also choose $\bar{\varepsilon} = 0.005$ and $A=0.5$, for which the preferred growth directions are expected to be along the eight vectors $(\pm 1, \pm 1, \pm1)$. Snapshots of the free boundary at $t=0, 0.05, 0.10, 0.20$, and $0.50$ are presented in . ![Time evolution of the free boundary with $\bar{\varepsilon}=0.005$ and $A=0.5$ at $t=0$, $0.05$, $0.1$, $0.3$, and $0.5$.](fig/stefan/dendrite2.pdf){#fig:stefan4 width="80%"}

# Discussion {#sec:discu}

In this paper, we developed ADI schemes for solving the heat equation, a reaction-diffusion equation, and the Stefan problem on arbitrary three-dimensional domains.
For problems with fixed domains, the proposed KFBI-ADI method achieves spatial and temporal second-order accuracy and unconditional stability, which are verified through multiple numerical examples. For problems with time-dependent domains, by capturing the free boundary with the level set method, the level set-ADI method is able to efficiently solve the Stefan problem and simulate complex dendritic growth patterns. The proposed method is efficient in the sense that the computational complexity is linearly proportional to the number of degrees of freedom. Furthermore, due to the dimension-splitting strategy of the ADI schemes, the method can be parallelized in a very simple way, which can significantly accelerate the computation. The method presented in this paper is second-order accurate in both the spatial and temporal directions. Since high-order methods have advantages in achieving a given accuracy with fewer degrees of freedom, designing high-order extensions of the current method is ongoing work. The current paper is mostly concerned with parabolic-type PDEs, for which the discretizations share some similarities since the main part is the treatment of the Laplacian operator. For other types of PDEs, such as Maxwell's equations, which are of hyperbolic type, developing efficient and geometrically flexible ADI schemes is also of great importance and will be explored in the future.

# Stability analysis of the modified Douglas-Gunn ADI scheme {#sec:analysis}

Let $\lambda= \tau/h^2$, where $\tau$ is the time step and $h$ is the mesh size. Denote by $\delta_{x}^{2}/h^2$, $\delta_{y}^{2}/h^2$, and $\delta_{z}^{2}/h^2$ the central difference operators for approximating $\partial_{xx}$, $\partial_{yy}$, and $\partial_{zz}$, respectively. Let $u$ be the numerical solution to the heat equation.
The modified DG-ADI method can be written as $$\begin{aligned} \Big(1-\frac{\lambda}{2} \delta_{x}^{2}\Big) u^{*} & =\Big(1+\frac{\lambda}{2} \delta_{x}^{2}\Big) u^{n}+\frac{3\lambda}{2}\big(\delta_{y}^{2}+\delta_{z}^{2}\big) u^{n}-\frac{\lambda}{2}\big(\delta_{y}^{2}+\delta_{z}^{2}\big) u^{n-1}, \\ \Big(1-\frac{\lambda}{2} \delta_{y}^{2}\Big) u^{**} & =u^{*}+\frac{\lambda}{2} \delta_{y}^{2} u^{n-1}-\lambda \delta_{y}^{2} u^{n}, \\ \Big(1-\frac{\lambda}{2} \delta_{z}^{2}\Big) u^{n+1} & =u^{**}+\frac{\lambda}{2} \delta_{z}^{2} u^{n-1}-\lambda \delta_{z}^{2} u^{n}. \end{aligned}$$ [\[eqn:A1a\]]{#eqn:A1a label="eqn:A1a"} [\[eqn:A1b\]]{#eqn:A1b label="eqn:A1b"} [\[eqn:A1c\]]{#eqn:A1c label="eqn:A1c"} Taking $(1-\frac{\lambda}{2} \delta_{x}^{2})$ on both sides of equation [\[eqn:A1b\]](#eqn:A1b){reference-type="ref" reference="eqn:A1b"} yields [\[eqn:A2\]]{#eqn:A2 label="eqn:A2"} $$\Big(1-\frac{\lambda}{2} \delta_{x}^{2}\Big)\Big(1-\frac{\lambda}{2} \delta_{y}^{2}\Big) u^{**}=\Big(1-\frac{\lambda}{2} \delta_{x}^{2}\Big) u^{*}+\frac{\lambda}{2}\Big(1-\frac{\lambda}{2} \delta_{x}^{2}\Big) \delta_{y}^{2} u^{n-1}-\lambda\Big(1-\frac{\lambda}{2} \delta_{x}^{2}\Big) \delta_{y}^{2} u^{n}.$$ By plugging [\[eqn:A1a\]](#eqn:A1a){reference-type="ref" reference="eqn:A1a"} into [\[eqn:A2\]](#eqn:A2){reference-type="ref" reference="eqn:A2"}, it leads to $$\begin{aligned} \Big(1-\frac{\lambda}{2} \delta_{x}^{2}\Big)\Big(1-\frac{\lambda}{2} \delta_{y}^{2}\Big) u^{**} &= u^{n}+\frac{\lambda}{2} \delta_{x}^{2} u^{n}+\frac{\lambda}{2} \delta_{y}^{2} u^{n} + \frac{3\lambda}{2} \delta_{z}^{2} u^{n}\\ &\quad -\frac{\lambda}{2} \delta_{z}^{2} u^{n-1}-\frac{\lambda^{2}}{4} \delta_{x}^{2} \delta_{y}^{2} u^{n-1}+\frac{\lambda^{2}}{2} \delta_{x}^{2} \delta_{y}^{2} u^{n}. \end{aligned}$$ Similarly, taking $(1-\frac{\lambda}{2} \delta_{x}^{2})(1-\frac{\lambda}{2} \delta_{y}^{2})$ on [\[eqn:A1c\]](#eqn:A1c){reference-type="ref" reference="eqn:A1c"}, we can obtain the recurrence relation of $u^{n+1}$, $u^n$, and $u^{n-1}$, [\[eqn:A4\]]{#eqn:A4 label="eqn:A4"} $$\begin{aligned} & \Big(1-\frac{\lambda}{2} \delta_{x}^{2}\Big)\Big(1-\frac{\lambda}{2} \delta_{y}^{2}\Big)\Big(1-\frac{\lambda}{2} \delta_{z}^{2}\Big) u^{n+1}\\ & \quad =\Big(1-\frac{\lambda}{2} \delta_{x}^{2}\Big)\Big(1-\frac{\lambda}{2} \delta_{y}^{2}\Big) u^{**}+\Big(1-\frac{\lambda}{2} \delta_{x}^{2}\Big)\Big(1-\frac{\lambda}{2} \delta_{y}^{2}\Big)\Big(\frac{\lambda}{2} \delta_{z}^{2} u^{n-1}-\lambda\delta_{z}^{2} u^{n}\Big)\\ & \quad =u^{n}+\frac{\lambda}{2} \delta_{x}^{2} u^{n}+\frac{\lambda}{2} \delta_{y}^{2} u^{n}+\frac{\lambda}{2} \delta_{z}^{2} u^{n}+\frac{\lambda^{2}}{2} \delta_{x}^{2} \delta_{y}^{2} u^{n}+\frac{\lambda^{2}}{2} \delta_{x}^{2} \delta_{z}^{2} u^{n}\\ & \qquad +\frac{\lambda^{2}}{2} \delta_{y}^{2} \delta_{z}^{2} u^{n}-\frac{\lambda^{2}}{4} \delta_{x}^{2} \delta_{y}^{2} u^{n-1}-\frac{\lambda^{2}}{4} \delta_{x}^{2} \delta_{z}^{2} u^{n-1}-\frac{\lambda^{2}}{4} \delta_{y}^{2} \delta_{z}^{2} u^{n-1}\\ & \qquad +\frac{\lambda^{3}}{8} \delta_{x}^{2} \delta_{y}^{2} \delta_{z}^{2} u^{n-1}-\frac{\lambda^{3}}{4} \delta_{x}^{2} \delta_{y}^{2} \delta_{z}^{2} u^{n}\\ & \quad =\Big(1+\frac{\lambda}{2} \delta_{x}^{2}\Big)\Big(1+\frac{\lambda}{2} \delta_{y}^{2}\Big)\Big(1+\frac{\lambda}{2} \delta_{z}^{2}\Big) u^{n}+\frac{\lambda^{2}}{4}\big(\delta_{x}^{2} \delta_{y}^{2}+\delta_{x}^{2} \delta_{z}^{2}+\delta_{y}^{2} \delta_{z}^{2}\big) u^{n}\\ & \qquad - \frac{3\lambda^{3}}{8}\delta_{x}^{2} \delta_{y}^{2} \delta_{z}^{2} u^{n}-\frac{\lambda^{2}}{4}\big(\delta_{x}^{2} \delta_{y}^{2}+\delta_{x}^{2} \delta_{z}^{2}+\delta_{y}^{2} \delta_{z}^{2}\big) u^{n-1} + \frac{\lambda^{3}}{8}\delta_{x}^{2} \delta_{y}^{2} \delta_{z}^{2} u^{n-1}. \end{aligned}$$ Next, the standard discrete Fourier analysis can be applied to study the $L_2$ stability of the ADI scheme.
Let $v_{i, j, k}^{n+1}=u_{i, j, k}^{n}$ and $\mathbf{U}_{i, j, k}^{n}=[u_{i, j, k}^{n}, v_{i, j, k}^{n}]^{\top}$; then [\[eqn:A4\]](#eqn:A4){reference-type="ref" reference="eqn:A4"} can be reformulated as [\[eqn:A5\]]{#eqn:A5 label="eqn:A5"} $$\mathbf{A}\,\mathbf{U}^{n+1} = \mathbf{B}\,\mathbf{U}^{n},$$ where $$\mathbf{A} = \begin{pmatrix} (1-\dfrac{\lambda}{2} \delta_{x}^{2})(1-\dfrac{\lambda}{2} \delta_{y}^{2})(1-\dfrac{\lambda}{2} \delta_{z}^{2}) & 0 \\ 0 & 1 \end{pmatrix}, \quad \mathbf{B} = \begin{pmatrix} b_1 & b_2 \\ 1 & 0 \end{pmatrix},$$ $$b_1=(1+\frac{\lambda}{2} \delta_{x}^{2})(1+\frac{\lambda}{2} \delta_{y}^{2})(1+\frac{\lambda}{2} \delta_{z}^{2})+\frac{\lambda^{2}}{4}(\delta_{x}^{2} \delta_{y}^{2}+\delta_{x}^{2} \delta_{z}^{2}+\delta_{y}^{2} \delta_{z}^{2})-\frac{3 \lambda^{3}}{8} \delta_{x}^{2} \delta_{y}^{2} \delta_{z}^{2},$$ $$b_2=-\frac{\lambda^{2}}{4}(\delta_{x}^{2} \delta_{y}^{2}+\delta_{x}^{2} \delta_{z}^{2}+\delta_{y}^{2} \delta_{z}^{2})+\frac{\lambda^{3}}{8} \delta_{x}^{2} \delta_{y}^{2} \delta_{z}^{2}.$$ Let us consider the single mode $\mathbf{U}_{ijk}^n = e^{\sqrt{-1}(i k_1 h + j k_2 h + k k_3 h)}\mathbf{V}^n$.
Then [\[eqn:A5\]](#eqn:A5){reference-type="ref" reference="eqn:A5"} becomes $$\hat{\mathbf{A}}\,\mathbf{V}^{n+1} = \hat{\mathbf{B}}\,\mathbf{V}^{n},$$ where $$\hat{\mathbf{A}} = \begin{pmatrix} (1+\lambda \mu_{1})(1+\lambda \mu_{2})(1+\lambda \mu_{3}) & 0 \\ 0 & 1 \end{pmatrix}, \quad \hat{\mathbf{B}} = \begin{pmatrix} \hat b_1 & \hat b_2 \\ 1 & 0 \end{pmatrix},$$ $$\hat b_1=(1-\lambda \mu_{1})(1-\lambda \mu_{2})(1-\lambda \mu_{3})+\lambda^{2}(\mu_{1} \mu_{2}+\mu_{1} \mu_{3}+\mu_{2} \mu_{3})+3 \lambda^{3} \mu_{1} \mu_{2} \mu_{3},$$ $$\hat b_2=-\lambda^{2}(\mu_{1} \mu_{2}+\mu_{1} \mu_{3}+\mu_{2} \mu_{3})-\lambda^{3} \mu_{1} \mu_{2} \mu_{3},$$ $$\mu_{1}=2 \sin ^{2} \frac{k_{1} h}{2}, \quad \mu_{2}=2 \sin ^{2} \frac{k_{2} h}{2}, \quad \mu_{3}=2 \sin ^{2} \frac{k_{3} h}{2}.$$ The characteristic equation of the amplification matrix $\hat{\mathbf{A}}^{-1} \hat{\mathbf{B}}$ is given by [\[eqn:char-eqn\]]{#eqn:char-eqn label="eqn:char-eqn"} $$\xi^{2} - b\,\xi - c = 0,$$ where $$b =\dfrac{(1-\lambda \mu_{1})(1-\lambda \mu_{2})(1-\lambda \mu_{3})+\lambda^{2}(\mu_{1} \mu_{2}+\mu_{1} \mu_{3}+\mu_{2} \mu_{3})+3 \lambda^{3} \mu_{1} \mu_{2} \mu_{3}}{(1+\lambda \mu_{1})(1+\lambda \mu_{2})(1+\lambda \mu_{3})},$$ $$c =\dfrac{-\lambda^{2}(\mu_{1} \mu_{2}+\mu_{1} \mu_{3}+\mu_{2} \mu_{3})-\lambda^{3} \mu_{1} \mu_{2} \mu_{3}}{(1+\lambda \mu_{1})(1+\lambda \mu_{2})(1+\lambda \mu_{3})}.$$ Recall that the moduli of the roots of [\[eqn:char-eqn\]](#eqn:char-eqn){reference-type="ref" reference="eqn:char-eqn"} are both at most $1$ if and only if $|b|\leq 1-c$ and $|c|\leq 1$. It is obvious that the inequality $|c| \leq 1$ holds, so we only need to verify $c-1\leq b \leq 1-c$. Since we have $$\begin{aligned} &-\lambda^{2}(\mu_{1} \mu_{2}+\mu_{1} \mu_{3}+\mu_{2} \mu_{3})-\lambda^{3} \mu_{1} \mu_{2} \mu_{3}\\ &\quad \leq (1+\lambda \mu_{1})(1+\lambda \mu_{2})(1+\lambda \mu_{3})+(1-\lambda \mu_{1})(1-\lambda \mu_{2})(1-\lambda \mu_{3})\\ &\qquad + \lambda^{2}(\mu_{1} \mu_{2}+\mu_{1} \mu_{3}+\mu_{2} \mu_{3})+3 \lambda^{3} \mu_{1} \mu_{2} \mu_{3}\\ &\quad = 2+3 \lambda^{2}(\mu_{1} \mu_{2}+\mu_{1} \mu_{3}+\mu_{2} \mu_{3})+3 \lambda^{3} \mu_{1} \mu_{2} \mu_{3}, \end{aligned}$$ it gives $c-1\leq b$.
The inequality from the other side can be seen from $$\begin{aligned} &(1-\lambda \mu_{1})(1-\lambda \mu_{2})(1-\lambda \mu_{3})+\lambda^{2}(\mu_{1} \mu_{2}+\mu_{1} \mu_{3}+\mu_{2} \mu_{3})+3 \lambda^{3} \mu_{1} \mu_{2} \mu_{3} \\ &\leq(1+\lambda \mu_{1})(1+\lambda \mu_{2})(1+\lambda \mu_{3}) +\lambda^{2}(\mu_{1} \mu_{2}+\mu_{1} \mu_{3}+\mu_{2} \mu_{3}) +\lambda^{3} \mu_{1} \mu_{2} \mu_{3},\end{aligned}$$ which is equivalent to $0 \leq \lambda(\mu_{1}+\mu_{2}+\mu_{3})$. Therefore, the spectral radius of the amplification matrix $\hat{\mathbf{A}}^{-1} \hat{\mathbf{B}}$ is no more than $1$ and the scheme is unconditionally stable.
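As an independent sanity check of the algebraic argument (not part of the proof), the bound on the root moduli can also be verified numerically for sample values of $\lambda$ and $\mu_i\in[0,2]$:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <complex>

// Compute b and c for given lambda = tau/h^2 and Fourier symbols
// mu_i = 2*sin^2(k_i h / 2), solve xi^2 - b*xi - c = 0, and return the
// larger root modulus; unconditional stability means the result is <= 1.
double spectralRadius(double lam, double m1, double m2, double m3) {
    double denom = (1 + lam * m1) * (1 + lam * m2) * (1 + lam * m3);
    double b = ((1 - lam * m1) * (1 - lam * m2) * (1 - lam * m3)
               + lam * lam * (m1 * m2 + m1 * m3 + m2 * m3)
               + 3 * lam * lam * lam * m1 * m2 * m3) / denom;
    double c = (-lam * lam * (m1 * m2 + m1 * m3 + m2 * m3)
               - lam * lam * lam * m1 * m2 * m3) / denom;
    // roots xi = (b +- sqrt(b^2 + 4c)) / 2, possibly complex
    std::complex<double> disc = std::sqrt(std::complex<double>(b * b + 4 * c, 0.0));
    std::complex<double> r1 = 0.5 * (b + disc), r2 = 0.5 * (b - disc);
    return std::max(std::abs(r1), std::abs(r2));
}
```

Even for very large $\lambda$ (time steps far beyond any explicit stability limit) the computed radius stays at or below one, consistent with the unconditional stability shown above.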
{ "id": "2309.00979", "title": "ADI schemes for heat equations with irregular boundaries and interfaces\n in 3D with applications", "authors": "Han Zhou, Minsheng Huang, Wenjun Ying", "categories": "math.NA cs.NA physics.comp-ph", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | Let $\mathcal{S}$ be a fixed family of graphs on vertex set $V$ and $\mathcal{G}$ be a collection of elements in $\mathcal{S}$. We investigate the transversal problem of finding the maximum value of $|\mathcal{G}|$ when $\mathcal{G}$ contains no rainbow elements in $\mathcal{S}$. Specifically, we determine the exact values when $\mathcal{S}$ is a family of stars or a family of trees of the same order $n$ with $n$ dividing $|V|$. Further, all the extremal cases for $\mathcal{G}$ are characterized.\ **Keywords:** Transversal; a family of graphs; rainbow tree; rainbow star; extremal graph\ **AMS subject classification 2020:** 05C15, 05C05, 05D15. author: - | Ethan Y.H. Li$^a$, Luyi Li$^{b,c}$, Ping Li$^a$\ $^a$School of Mathematics and Statistics\ Shaanxi Normal University, Xi'an, Shaanxi 710062, China\ $^b$Center for Combinatorics and LPMC\ Nankai University, Tianjin 300071, China\ $^c$Laboratoire Interdisciplinaire des Sciences du Numérique\ CNRS-Université Paris-Saclay, Orsay 91405, France\ Emails: yinhao_li\@snnu.edu.cn, liluyi\@mail.nankai.edu.cn, lp-math\@snnu.edu.cn title: Transversals in a collection of trees --- # Introduction In 1974, Dénes and Keedwell [@DK] conjectured that every $n\times n$ Latin square has a set of $n-1$ entries which contains at most one representative of each row and each column and in which no symbol is repeated. Every such set is called a *partial transversal* of the Latin square. Many scholars have contributed to this conjecture over the last few decades; see [@KK; @DD; @BVW; @W; @HS]. From another point of view, an equivalent statement of the conjecture is: for any proper edge-coloring of the balanced complete bipartite graph $K_{n,n}$ with $n$ colors, the edge-colored graph $K_{n,n}$ has a rainbow matching of size at least $n-1$.
More generally, this concept may be extended as follows: given a collection of graphs $\mathcal{G}=\{G_1,G_2,\ldots,G_t\}$ (not necessarily distinct) on vertex set $V$ and a graph $H$, $\mathcal{G}$ is said to contain a *rainbow $H$* if there exists a graph isomorphic to $H$ consisting of at most one edge from each $G_i$. We say that $\mathcal{G}$ is *rainbow $H$-free* if $\mathcal{G}$ contains no rainbow $H$. Using this concept, the well-known Rota's basis conjecture (restricted to graphic matroids) may be reformulated as: any collection $\mathcal{G}=\{G_1,G_2,\ldots,G_n\}$ of spanning trees of a graph $H$ of order $n+1$ contains $n$ disjoint rainbow spanning trees of $H$. As a first step of our approach to this conjecture, we intend to find the minimum size of $\mathcal{G}$ that preserves the existence of one rainbow tree of a given order or structure. Equivalently, we aim to determine the maximum size of $\mathcal{G}$ that excludes any rainbow tree of a given order or structure, which is analogous to Ramsey and Turán problems. Many researchers have been interested in these problems, and rainbow graphs other than trees have also been studied. For instance, Aharoni, DeVos, de la Maza, Montejano and Šámal [@ADGMS] gave a rainbow version of Mantel's theorem: a collection $\mathcal{G}=\{G_1,G_2,G_3\}$ of graphs on a common vertex set of order $n$ with $|E(G_i)|>\frac{1+\tau^2}{4}n^2$ for all $1\leq i\leq 3$ contains a rainbow triangle, where $\tau=\frac{4-\sqrt7}{9}$. In 2020, Joos and Kim [@JK] proved a rainbow version of Dirac's theorem: if $\mathcal{G}=\{G_i: i\in [n]\}$ is a collection of not necessarily distinct graphs on the same vertex set of order $n$ with $\delta(G_i)\geq \frac{n}{2}$ for $i\in[n]$, then $\mathcal{G}$ contains a rainbow Hamiltonian cycle. For more results on rainbow structures in a family of graphs, please refer to [@Pe; @BHS; @CHWW; @CWZ; @LLL; @MMP; @FHM].
In addition to the two above results which require edge or degree conditions, researchers also considered the problems of finding a rainbow graph $H$ from $\mathcal{G}$ where $H$ and elements of $\mathcal{G}$ belong to the same class. Aharoni, Briggs, Holzman and Jiang [@ABHJ] proved that every family of $2\left\lceil \frac{n}{2}\right\rceil-1$ odd cycles on $n$ vertices contains a rainbow odd cycle. Dong and Xu [@DX] showed that any collection of $\left\lfloor\frac{6(n-1)}{5}\right\rfloor+1$ even cycles on $n$ vertices contains a rainbow even cycle. Moreover, Goorevitch and Holzman [@GH] proved that every family of $(1+o(1))\frac{n^2}{8}$ triangles on $n$ vertices contains a rainbow triangle. In this paper, we continue to investigate this topic on rainbow stars and general rainbow trees, and the main results are the following two theorems. **Theorem 1**. *Let $\mathcal{S}$ be a collection of stars $K_{1,\Delta}$ on vertex set $V$ with $|V|=n=a(2\Delta-1)+b$ and $b(\Delta-1)=k_1(2\Delta-1)+k_2$, where $a,b,k_1,k_2$ are nonnegative integers and $0\leq b,k_2\leq 2\Delta-2$. Then the maximum value of $|\mathcal{S}|$ for $\mathcal{S}$ to be rainbow $K_{1,\Delta}$-free is $$\left\{ \begin{array}{ll} a(\Delta-1)^2+k_1(\Delta-1), & \mbox{if }a\geq 1\mbox{ and }0\leq k_2\leq\Delta; \vspace{0.05cm}\\ a(\Delta-1)^2+k_1(\Delta-1)+k_2-\Delta, & \mbox{if }a\geq 1\mbox{ and }\Delta \le k_2\leq 2\Delta-2;\\ \left\lfloor\frac{(n-1)^2}{4}\right\rfloor, & \mbox{if }a=0.\\ \end{array} \right.$$ Moreover, the bounds are tight, and $\mathfrak{A}(n,\Delta)$ (defined in Section [2](#sec-2){reference-type="ref" reference="sec-2"}) is the set of all rainbow $K_{1,\Delta}$-free collections $\mathcal{S}$ with $|\mathcal{S}|$ attaining these maximum values.* **Theorem 2**. *Let $\mathcal{T}=\{T_1,\ldots,T_t\}$ be a collection of trees on $V$ with $|V|=m$, $|T_i|=n$ for each $i\in[t]$ and $n|m$. 
Then the value of $t$ is at most $\frac{m(n-2)}{n}$ if $\mathcal{T}$ contains no rainbow tree of order $n$. Moreover, the bound is tight, and $\mathfrak{B}(n,m)$ (defined in Section [3](#sec-thm2){reference-type="ref" reference="sec-thm2"}) is the set of all such collections $\mathcal{T}$ with $|\mathcal{T}|$ attaining the maximum value.* The proofs will be presented in the following two sections, before which we would like to introduce some additional notation. For a positive integer $n$, we use $[n]$ to denote the set $\{1,2,\ldots, n\}$. For a graph $G$, we use $V(G)$ and $E(G)$ to denote the *vertex set* and *edge set* of $G$, respectively. For any two vertex sets $X$ and $Y$, we use $X\vee Y$ to denote the graph obtained by adding an edge between each vertex of $X$ and each vertex of $Y$. For a subset $X$ of $V$, $\partial_G(X)$ denotes the set of edges between $X$ and $V-X$ in $E(G)$. For a digraph $\overrightarrow{D}$, we use $V(\overrightarrow{D})$ and $A(\overrightarrow{D})$ to denote the *vertex set* and *arc set* of $\overrightarrow{D}$, respectively. For a subset $X$ of $V(\overrightarrow{D})$, $\overrightarrow{D}[X]$ denotes the subdigraph of $\overrightarrow{D}$ induced by $X$. Given a vertex $v$ in a digraph $\overrightarrow{D}$, we use $N_{\overrightarrow{D}}^+(v)$ and $N_{\overrightarrow{D}}^-(v)$ to denote the set of *out-neighbours* and *in-neighbours* of $v$ in $\overrightarrow{D}$, respectively. The *out-degree* (resp. *in-degree*) of $v$ in $\overrightarrow{D}$, denoted by $d_{\overrightarrow{D}}^+(v)$ (resp. $d_{\overrightarrow{D}}^-(v)$), is the number of out-neighbours (resp. in-neighbours) of $v$ in $\overrightarrow{D}$. A digraph is *$d$-out-regular* (resp. *$d$-in-regular*) if the out-degrees (resp. in-degrees) of all vertices are $d$. # Proof of Theorem [Theorem 1](#main-0){reference-type="ref" reference="main-0"} {#sec-2} This section is devoted to proving Theorem [Theorem 1](#main-0){reference-type="ref" reference="main-0"}.
At first, we present some basic notation and give the definition of $\mathfrak{A}(n,\Delta)$, which describes all the extremal cases of Theorem [Theorem 1](#main-0){reference-type="ref" reference="main-0"}. For a collection $\mathcal{S}$ of stars $K_{1,\Delta}$ on vertex set $V$, we denote by $\mathcal{S}_u$ the set of all stars with center $u$. We partition $V$ into the set of centers $C=\{u\in V:\mathcal{S}_u\neq \emptyset\}$ and the set of the remaining vertices $L=V-C$, and further set $\overrightarrow{D}$ to be the digraph with $$V(\overrightarrow{D})=V \quad \mbox{and} \quad A(\overrightarrow{D})=\{(x,y):xy \mbox{ is an edge of }\bigcup_{S\in \mathcal{S}_x}E(S) \}.$$ Then it is easy to verify that between any pair of vertices in $C$ there are at most two arcs, and parallel arcs do not appear (symmetric arcs may exist). In addition, there is no symmetric arc or parallel arc between $C$ and $L$, and $L$ is an independent set in $\overrightarrow{D}$. **Definition 1**. *Let $V, n, \Delta, a, b,k_1,k_2$ be defined as in Theorem [Theorem 1](#main-0){reference-type="ref" reference="main-0"}. We define $\mathfrak{A}(n,\Delta)$ to be the set of collections $\mathcal{S}$ on vertex set $V$ consisting of stars $K_{1,\Delta}$ satisfying the following conditions:* 1. *for $n \ge 2\Delta-1$ and $k_2<\Delta$, there are $a(\Delta-1)^2+k_1(\Delta-1)$ stars in $\mathcal{S}$ such that* - *$|C|=a(\Delta-1)+k_1$;* - *for each vertex $u\in C$, there are exactly $\Delta-1$ stars $K_{1,\Delta}$ with center $u$, and all leaves of these stars are in $L$;* - *for each vertex $v\in L$, $d_{\overrightarrow{D}}^-(v)\leq \Delta-1$.* 2.
*for $n \ge 2\Delta-1$ and $k_2>\Delta$, there are $a(\Delta-1)^2+k_1(\Delta-1)+k_2-\Delta$ stars $K_{1,\Delta}$ in $\mathcal{S}$ such that* - *$|C|=a(\Delta-1)+k_1+1$ and $|A(\overrightarrow{D}[C])|=2\Delta-1-k_2$;* - *for each vertex $u\in C$, there are exactly $\Delta-1-d_{\overrightarrow{D}}^-(u)$ copies of $K_{1,\Delta}$ with center $u$;* - *for each vertex $v\in L$, $d_{\overrightarrow{D}}^-(v)=\Delta-1$.* 3. *for $n \ge 2\Delta-1$ and $k_2=\Delta$, there are $a(\Delta-1)^2+k_1(\Delta-1)=a(\Delta-1)^2+k_1(\Delta-1)+k_2-\Delta$ stars in $\mathcal{S}$, and $\mathcal{S}$ satisfies the conditions of $(i)$ or $(ii)$.* 4. *for $\Delta+1\leq n\leq 2\Delta-2$, there are $\left\lfloor\frac{(n-1)^2}{4}\right\rfloor$ stars $K_{1,\Delta}$ in $\mathcal{S}$ such that* - *if $n$ is odd, then $|C|=\left\lfloor\frac{n-1}{2}\right\rfloor$; if $n$ is even, then either $|C|=\left\lfloor\frac{n-1}{2}\right\rfloor$ or $|C|=\left\lceil\frac{n-1}{2}\right\rceil$;* - *$\overrightarrow{D}[C]$ is $(\Delta-|L|)$-out-regular;* - *for each vertex $u\in C$, $\mathcal{S}_u$ consists of $\Delta-1-d_{\overrightarrow{D}}^-(u)$ copies of $K_{1,\Delta}$, and $L$ is contained in the set of leaves of each star.* We now present some examples to illustrate this definition. ![$\mathcal{S}'\in \mathfrak{A}(11,4)$  and  $\mathcal{S}''\in \mathfrak{A}(10,4)$.](Tu-2.eps "fig:"){#pict-2 width="350pt"}\ **Example 1**. *In the left part of Figure [1](#pict-2){reference-type="ref" reference="pict-2"}, $\mathcal{S}'$ is an element of $\mathfrak{A}(n,\Delta)$ with $n=11$, $\Delta =4$, $a=k_1=1$, $b=4$ and $k_2=5>\Delta$. Each solid monochromatic star represents three copies of stars, and each dashed monochromatic star represents two copies of stars. Then $|\mathcal{S}'|=13=a(\Delta-1)^2+k_1(\Delta-1)+k_2-\Delta$ and $\overrightarrow{D}[C]$ has two arcs $(x_1,x_2)$ and $(x_1,x_3)$.
In the right part, $\mathcal{S}''$ belongs to $\mathfrak{A}(n,\Delta)$ with $n=10$, $\Delta=4$, $a=k_1=1$, $b=3$ and $k_2=2<\Delta$. Each solid monochromatic star represents three copies of stars. $\mathcal{S}''$ satisfies all conditions of $(i)$.* ![$\mathcal{S}^*\in \mathfrak{A}(8,3)$](Tu-1.eps "fig:"){#pict-1 width="250pt"}\ **Remark 1**. *If $n<2\Delta-1$ or $n\geq 2\Delta-1$ and $k_2> \Delta$, then for each vertex $u\in C$, $\mathcal{S}_u$ consists of copies of the same star with center $u$, that is, any two stars in $\mathcal{S}_u$ have the same leaves. However, if $n\geq 2\Delta-1$ and $k_2< \Delta$, then it is not necessary for the stars in $\mathcal{S}_u$ to have the same leaves, see $\mathcal{S}^*\in \mathfrak{A}(8,3)$ of Figure [2](#pict-1){reference-type="ref" reference="pict-1"} as an example. Here each solid or dashed monochromatic star represents only one star.* We will confirm that the definition of $\mathfrak{A}(n,\Delta)$ is meaningful, that is, $\mathfrak{A}(n,\Delta)\neq \emptyset$ and each element of $\mathfrak{A}(n,\Delta)$ is rainbow $K_{1,\Delta}$-free. **Lemma 1**. *If $\mathfrak{A}(n,\Delta)\neq \emptyset$, then each $\mathcal{S}\in \mathfrak{A}(n,\Delta)$ is rainbow $K_{1,\Delta}$-free.* *Proof.* For each vertex $u\in C$ (resp. $v\in L$), each rainbow star with center $u$ has at most $|\mathcal{S}_u|+d_{\overrightarrow{D}}^-(u)$ (resp. $d_{\overrightarrow{D}}^-(v)$) leaves. By the definition of $\mathfrak{A}(n,\Delta)$ we have $|\mathcal{S}_u|+d_{\overrightarrow{D}}^-(u)\leq \Delta-1$ and $d_{\overrightarrow{D}}^-(v)\leq \Delta-1$, and hence $\mathcal{S}$ is rainbow $K_{1,\Delta}$-free. ◻ **Lemma 2**. *For $n \ge \Delta + 1 \ge 3$, $\mathfrak{A}(n,\Delta)\neq \emptyset$.* *Proof.* We only need to construct a collection $\mathcal{S}\in \mathfrak{A}(n,\Delta)$ for each $n\geq \Delta+1$. Let $C=\{x_1,x_2,\ldots,x_p\}$ and $L=\{y_1,y_2,\ldots,y_q\}$, where by definition $n = p+q$. **Case 1** $n\geq 2\Delta-1$ and $k_2\leq \Delta$. 
Let $p=a(\Delta-1)+k_1$ and $q=a\Delta+b-k_1.$ For each vertex $x_i$ of $C$, let $\mathcal{T}_i$ be the (multi)set of $\Delta-1$ copies of $$x_i\vee\{y_{(i-1)\Delta+1},y_{(i-1)\Delta+2},\ldots,y_{i\Delta}\},$$ where the subscripts are taken modulo $q$. Then we define $\mathcal{S}$ to be the union of $\mathcal{T}_1,\mathcal{T}_2,\ldots,\mathcal{T}_p$. Now we only need to show that $\mathcal{S}$ satisfies the third condition of $(i)$. It is straightforward to verify that $d_{\overrightarrow{D}}^-(y_i)$ and $d_{\overrightarrow{D}}^-(y_j)$ differ by at most one for any $i,j\in[q]$. Hence for each $i\in[q]$ we have $$\begin{aligned} d_{\overrightarrow{D}}^-(y_i) \leq \left\lceil \frac{p\Delta}{q}\right\rceil &=\left\lceil\frac {\Delta\left(a(\Delta-1)+k_1\right) }{a\Delta+b-k_1}\right\rceil\\ &=\left\lceil\frac{k_1(2\Delta-1)-b(\Delta-1)} {a\Delta+b-k_1}\right\rceil + \Delta-1 \\ & \leq \Delta-1,\end{aligned}$$ where the last inequality holds since $b(\Delta-1)=k_1(2\Delta-1)+k_2$ and $k_2\geq 0$. **Case 2** $n\geq 2\Delta-1$ and $k_2\geq\Delta$. Let $p=a(\Delta-1)+k_1+1$ and $q=a\Delta+b-k_1-1.$ Note that $2\Delta-1-k_2\leq \Delta-1$. We construct $\mathcal{S}$ as follows (the subscripts are taken modulo $q$). - Let $\mathcal{T}_1$ be the set of $\Delta-1$ copies of the star $K_{1,\Delta}$ with center $x_1$ and the set of leaves being $\{y_1,\ldots,y_{k_2-\Delta+1},x_2,\ldots,x_{2\Delta-k_2}\}$. - For $2\leq i\leq 2\Delta-k_2$, let $\mathcal{T}_i$ be the set of $\Delta-2$ copies of the star $K_{1,\Delta}$ with center $x_i$ and the set of leaves being $\{y_{k_2+(i-2)\Delta+2},\ldots,y_{k_2+(i-1)\Delta+1}\}$. - For $2\Delta-k_2+1\leq i\leq p$, let $\mathcal{T}_i$ be the set of $\Delta-1$ copies of the star $K_{1,\Delta}$ with center $x_i$ and the set of leaves being $\{y_{k_2+(i-2)\Delta+2},\ldots,y_{k_2+(i-1)\Delta+1}\}$. Now it is easy to verify that $\mathcal{S}$ satisfies the first and second conditions of $(ii)$.
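To see the Case 2 construction in action, the sketch below instantiates it for the parameters of Example 1 ($n=11$, $\Delta=4$, $a=k_1=1$, $b=4$, $k_2=5$) and checks the conditions of $(ii)$. The leaf assignment for the centers $x_2,\ldots,x_p$ is one concrete round-robin choice standing in for the modular indexing above, and all names are my own:

```python
from collections import Counter

n, Delta, a, k1, b, k2 = 11, 4, 1, 1, 4, 5     # so b(Delta-1) = k1(2Delta-1) + k2
p = a * (Delta - 1) + k1 + 1                   # |C| = 5
q = a * Delta + b - k1 - 1                     # |L| = 6
C = [f"x{i}" for i in range(1, p + 1)]
L = [f"y{j}" for j in range(1, q + 1)]

def y(idx):                                    # subscripts taken modulo q
    return L[(idx - 1) % q]

stars = []                                     # multiset of (center, leaves)
# T_1: Delta-1 copies; leaves mix k2-Delta+1 vertices of L with x_2,...,x_{2Delta-k2}
leaves1 = tuple([y(j) for j in range(1, k2 - Delta + 2)] + C[1:2 * Delta - k2])
stars += [(C[0], leaves1)] * (Delta - 1)
nxt = k2 - Delta + 2                           # next leaf index, assigned round-robin
for i in range(2, p + 1):
    copies = Delta - 2 if i <= 2 * Delta - k2 else Delta - 1
    stars += [(C[i - 1], tuple(y(j) for j in range(nxt, nxt + Delta)))] * copies
    nxt += Delta

arcs = {(c, v) for c, leaves in stars for v in leaves}   # arcs of the digraph D
indeg = Counter(v for _, v in arcs)
copies_of = Counter(c for c, _ in stars)

assert len(stars) == a * (Delta - 1) ** 2 + k1 * (Delta - 1) + k2 - Delta  # |S| = 13
assert all(len(ls) == Delta for _, ls in stars)                  # every star is K_{1,Delta}
assert sum(1 for c, v in arcs if v in set(C)) == 2 * Delta - 1 - k2  # |A(D[C])| = 2
assert all(copies_of[c] == Delta - 1 - indeg[c] for c in C)      # second condition of (ii)
assert all(indeg[v] == Delta - 1 for v in L)                     # third condition of (ii)
```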
Then it remains to prove $d_{\overrightarrow{D}}^-(y_i) \le \Delta-1$ for each $i \in [q]$. Similar to Case 1, one can verify that $d_{\overrightarrow{D}}^-(y_i)$ and $d_{\overrightarrow{D}}^-(y_j)$ differ by at most one for any $i,j\in[q]$. Hence for each $i\in[q]$ we have $$\begin{aligned} d_{\overrightarrow{D}}^-(y_i) \leq \left\lceil \frac{p\Delta-(2\Delta-1-k_2)}{q}\right\rceil = \Delta-1.\end{aligned}$$ **Case 3** $\Delta+1\leq n\leq 2\Delta-2$. Let $p=\left\lfloor\frac{n-1}{2}\right\rfloor$ when $n$ is odd, and $p=\left\lfloor\frac{n-1}{2}\right\rfloor$ or $\left\lfloor\frac{n+1}{2}\right\rfloor$ when $n$ is even. For each vertex $x_i$ of $C$, let $\mathcal{T}_i$ be the set of $q-1$ copies of $$x_i\vee (L\cup \{x_{i+1},x_{i+2},\ldots,x_{i+\Delta-q}\})$$ (the subscripts are taken modulo $p$). Let $\mathcal{S}=\mathcal{T}_1\cup \mathcal{T}_2\cup\cdots \cup\mathcal{T}_{p}$. Then it is not difficult to verify that $\mathcal{S}$ satisfies all the conditions in $(iv)$ since $n=p+q$. ◻ **Proof of Theorem [Theorem 1](#main-0){reference-type="ref" reference="main-0"}:** Assume that $\mathcal{S}$ is a collection of stars $K_{1,\Delta}$ on the vertex set $V$ and $\mathcal{S}$ is rainbow $K_{1,\Delta}$-free. Then $|\mathcal{S}_u|\leq \Delta-1$ for each $u\in C$ and $d_{\overrightarrow{D}}^-(v)\leq \Delta-1$ for each $v\in L$. For simplicity of notation, we set $\overrightarrow{H}=\overrightarrow{D}[C]$ and $d_u=d_{\overrightarrow{H}}^-(u)=d^-_{\overrightarrow{D}}(u)$ for all $u\in C$. Then there is a rainbow star $K_{1,d_u}$ centered at $u$ with $N_{\overrightarrow{H}}^-(u)$ being the set of leaves. Moreover, no edge of this rainbow star belongs to $\bigcup_{S\in \mathcal{S}_u}E(S)$. This observation yields the following claim. **Claim 1**. *For each $u\in C$, $|\mathcal{S}_u|\leq\Delta-d_u-1$.* *Proof.* For each $S\in \mathcal{S}_u$, let $S'=S-N_{\overrightarrow{H}}^-(u)$.
Then $\{S':S\in \mathcal{S}_u\}$ is a collection of stars on vertex set $V-N_{\overrightarrow{H}}^-(u)$. Since each $S'$ has at least $\Delta-d_u$ leaves, $\{S':S\in \mathcal{S}_u\}$ contains a rainbow star $K_{1,z}$, where $z=\min\{|\mathcal{S}_u|,\Delta-d_u\}$. Then there exists a rainbow star $K_{1,d_u+z}$ with center $u$ since the set of the colors appearing in $K_{1,z}$ and the set of the colors appearing in $K_{1,d_u}$ are disjoint. If $|\mathcal{S}_u|\geq\Delta-d_u$, then $d_u+z=\Delta$, a contradiction. Hence $|\mathcal{S}_u|\leq\Delta-d_u-1$. ◻ By Claim [Claim 1](#clm-star-1){reference-type="ref" reference="clm-star-1"}, we have $$\begin{aligned} \label{eq-1} \sum_{u\in C}(\Delta-d_u-1)\geq \sum_{u\in C}|\mathcal{S}_u|= |\mathcal{S}|.\end{aligned}$$ Note that for each vertex $u\in C$, $$\label{eq-du-Delta} \Delta\leq d_{\overrightarrow{D}}^+(u)=d_{\overrightarrow{H}}^+(u)+ |N_{\overrightarrow{D}}^+(u)\cap L|.$$ Then we have $$\label{eq-1-1} \begin{split} \sum_{u\in C}|N_{\overrightarrow{D}}^+(u)\cap L|&=\sum_{u\in C}(d^+_{\overrightarrow{D}}(u)-d_{\overrightarrow{H}}^+(u))\\ &\geq \sum_{u\in C}(\Delta-d_{\overrightarrow{H}}^+(u))\\ &=|C|\Delta-\sum_{u\in C}d_{\overrightarrow{H}}^+(u)\\ &=|C|\Delta-\sum_{u\in C}d_{\overrightarrow{H}}^-(u)=\sum_{u\in C}(\Delta-d_u). \end{split}$$ By the rainbow $K_{1,\Delta}$-freeness of $\mathcal{S}$, $d_{\overrightarrow{D}}^-(v)=|N_{\overrightarrow{D}}^-(v)\cap C|\leq \Delta-1$ for any $v\in L$. 
Then $$\label{eq-2} (\Delta-1)|L| \ge \sum_{v\in L}|N_{\overrightarrow{D}}^-(v)\cap C|=\sum_{u\in C}|N_{\overrightarrow{D}}^+(u)\cap L|.$$ Combining inequalities ([\[eq-1\]](#eq-1){reference-type="ref" reference="eq-1"}), ([\[eq-1-1\]](#eq-1-1){reference-type="ref" reference="eq-1-1"}) and ([\[eq-2\]](#eq-2){reference-type="ref" reference="eq-2"}), $$\label{eq-final-0} \begin{split} |L|(\Delta-1)&\geq \sum_{u\in C}|N_{\overrightarrow{D}}^+(u)\cap L| \\ &\ge\sum_{u\in C}(\Delta-d_u)\\ &=\sum_{u\in C}(\Delta-d_u-1)+|C|\geq|\mathcal{S}|+|C|. \end{split}$$ Furthermore, since $|L|=n-|C|$, it follows that $$\begin{aligned} \label{ineq-00} |\mathcal{S}|\leq n(\Delta-1)-|C|\Delta.\end{aligned}$$ We continue our proof by considering the following two cases. **Case 1:** $a\geq 1$. Suppose to the contrary that $$\label{eq-exgraph-L1} |\mathcal{S}|\geq a(\Delta-1)^2+k_1(\Delta-1)+\mu,$$ where $$\mu=\left\{ \begin{array}{ll} 1, & \hbox{ if } 0\leq k_2\leq \Delta; \\ k_2-\Delta+1, & \hbox{ if } \Delta \le k_2\leq 2\Delta-2. \end{array} \right.$$ Since $|\mathcal{S}_u|\leq \Delta-1$ for each $u\in C$, it follows that $$\label{eq-bigc} |C|\geq \left\lceil\frac{|\mathcal{S}|}{\Delta-1}\right\rceil \geq a(\Delta-1)+k_1+1.$$ Recall that $n=a(2\Delta-1)+b$. It then follows from inequality ([\[ineq-00\]](#ineq-00){reference-type="ref" reference="ineq-00"}) that $$\label{eq-exgraph-1} \begin{split}|\mathcal{S}|&\leq n(\Delta-1)-|C|\Delta \\ &\leq (\Delta-1)(a(2\Delta-1)+b)-\Delta\left(a(\Delta-1)+k_1+1\right)\\ &=a(\Delta-1)^2+b(\Delta-1)-(k_1+1)\Delta.
\end{split}$$ Combining inequalities [\[eq-exgraph-L1\]](#eq-exgraph-L1){reference-type="eqref" reference="eq-exgraph-L1"} and [\[eq-exgraph-1\]](#eq-exgraph-1){reference-type="eqref" reference="eq-exgraph-1"}, we obtain $$\label{eq-exgraph-2} \begin{split} a(\Delta-1)^2+b(\Delta-1)-(k_1+1)\Delta \geq a(\Delta-1)^2+k_1(\Delta-1)+\mu, \end{split}$$ which yields $$\begin{aligned} k_1(2\Delta-1)+\mu \le b(\Delta-1)-\Delta = k_1(2\Delta-1)+k_2-\Delta.\end{aligned}$$ Then we have $k_2-\Delta\geq \mu$, a contradiction. Therefore, we obtain the following upper bound on $|\mathcal{S}|$: $$\begin{aligned} |\mathcal{S}| &\leq a(\Delta-1)^2+k_1(\Delta-1)+\mu-1\\ &=\left\{ \begin{array}{ll} a(\Delta-1)^2+k_1(\Delta-1), & \mbox{if }0\leq k_2\leq\Delta; \vspace{0.05cm}\\ a(\Delta-1)^2+k_1(\Delta-1)+k_2-\Delta, & \mbox{if }\Delta\leq k_2\leq 2\Delta-2,\\ \end{array} \right.\end{aligned}$$ as stated in Theorem [Theorem 1](#main-0){reference-type="ref" reference="main-0"}. In Lemmas [Lemma 1](#lemm){reference-type="ref" reference="lemm"} and [Lemma 2](#lem){reference-type="ref" reference="lem"} we have shown that these bounds are tight, and we now characterize all the collections $\mathcal{S}$ with $|\mathcal{S}|$ attaining the maximum values. **Claim 2**. *An extremal collection $\mathcal{S}$ satisfies the following properties:* 1. *if $\Delta<k_2\leq 2\Delta-2$, then $|C|=a(\Delta-1)+k_1 +1$;* 2. *if $k_2=\Delta$, then either $|C|=a(\Delta-1)+k_1 +1$ or $|C|=a(\Delta-1)+k_1$;* 3. *if $0\leq k_2\leq \Delta-1$, then $|C|=a(\Delta-1)+k_1$.* *Proof.* Since $|\mathcal{S}_u|\leq \Delta-1$ for each $u\in C$, we have $|\mathcal{S}|\leq |C|(\Delta-1)$, and hence $|C|\geq |\mathcal{S}|/(\Delta-1)$. Hence $|C|\geq a(\Delta-1) +k_1+1$ if $k_2>\Delta$, and $|C|\geq a(\Delta-1)+k_1$ if $k_2\leq \Delta$.
If $k_2\geq\Delta$, then by inequality ([\[ineq-00\]](#ineq-00){reference-type="ref" reference="ineq-00"}) we have $$\begin{aligned} |C|\Delta&\leq n(\Delta-1)-|\mathcal{S}|=\Delta(a(\Delta-1) +k_1+1),\end{aligned}$$ and hence $|C|\leq a(\Delta-1)+k_1+1$. Therefore, $|C|=a(\Delta-1) +k_1+1$ if $k_2>\Delta$, and either $|C|=a(\Delta-1)+k_1 +1$ or $|C|=a(\Delta-1)+k_1$ if $k_2=\Delta$. The first two statements hold. We now prove the last statement. Suppose to the contrary that $|C|\geq a(\Delta-1)+k_1+1$. Then by inequality [\[eq-exgraph-1\]](#eq-exgraph-1){reference-type="eqref" reference="eq-exgraph-1"} $$a(\Delta-1)^2 +k_1(\Delta-1) = |\mathcal{S}| \le a(\Delta-1)^2+b(\Delta-1)-(k_1+1)\Delta,$$ which implies $k_2 \ge \Delta$, a contradiction. ◻ By Claim [Claim 2](#clm-star-2){reference-type="ref" reference="clm-star-2"}, $|C|=a(\Delta-1)+k_1$ or $|C|=a(\Delta-1)+k_1+1$. For the case $|C|=a(\Delta-1)+k_1$, we have $k_2\leq \Delta$. It is easy to compute that $|C|\Delta=|\mathcal{S}|+|C|$. Since $|\mathcal{S}_u|\leq \Delta-1$ and $|\mathcal{S}|=\sum_{u\in C}|\mathcal{S}_u|$, it follows that $|\mathcal{S}_u|=\Delta-1$ for each $u\in C$. By inequality ([\[eq-final-0\]](#eq-final-0){reference-type="ref" reference="eq-final-0"}), we have $\sum_{u\in C}(\Delta-d_u)\geq |\mathcal{S}|+|C|=|C|\Delta$, and hence $d_u=0$ for each $u\in C$. Since $\mathcal{S}$ is rainbow $K_{1,\Delta}$-free, $d_{\overrightarrow{D}}^-(v)\leq \Delta-1$ for each $v\in L$. Therefore, $\mathcal{S}$ satisfies all conditions of $(i)$, and hence $\mathcal{S}\in \mathfrak{A}(n,\Delta)$. For the case $|C|=a(\Delta-1)+k_1+1$, we have $k_2\geq \Delta$ and $|L|(\Delta-1)=|\mathcal{S}|+|C|$.
Hence all equalities of [\[eq-1\]](#eq-1){reference-type="eqref" reference="eq-1"}--([\[eq-final-0\]](#eq-final-0){reference-type="ref" reference="eq-final-0"}) hold, which implies that $d_{\overrightarrow{D}}^+(u)=\Delta$ for each $u\in C$ and $$\sum_{u\in C}d_u=|C|\Delta-(|\mathcal{S}|+|C|)=|C|(\Delta-1)-|\mathcal{S}|=2\Delta-1-k_2.$$ Therefore, $|A(\overrightarrow{D}[C])|=\sum_{u\in C}d_{\overrightarrow{H}}^-(u)=2\Delta-1-k_2.$ Note that by the equalities in [\[eq-2\]](#eq-2){reference-type="eqref" reference="eq-2"} we have $\sum_{v\in L}d_{\overrightarrow{D}}^-(v)= |L|(\Delta-1),$ which together with $d_{\overrightarrow{D}}^-(v)\leq \Delta-1$ implies $d_{\overrightarrow{D}}^-(v)=\Delta-1$ for each $v\in L$. Moreover, by the equality in ([\[eq-final-0\]](#eq-final-0){reference-type="ref" reference="eq-final-0"}), $|\mathcal{S}_u|=\Delta-1-d_u$ for each $u\in C$. Recall that $d_{\overrightarrow{D}}^+(u)=\Delta$ for each $u\in C$; hence $\mathcal{S}_u$ consists of $\Delta-1-d_u$ copies of $K_{1,\Delta}$. The above arguments show that $\mathcal{S}$ satisfies all conditions of $(ii)$, and hence $\mathcal{S}\in \mathfrak{A}(n,\Delta)$. It is worth noting that if $k_2=\Delta$, then $\mathcal{S}$ satisfies the conditions of $(i)$ or $(ii)$, in accordance with $(iii)$. **Case 2:** $a=0$. We assume $\mathcal{S}$ contains no rainbow $K_{1,\Delta}$ and let $$\ell^*=\frac{\sum_{u\in C}(\Delta-d_u-1)}{|C|}=\frac{\sum_{u\in C}(\Delta-d^+_{\overrightarrow{H}}(u)-1)}{|C|}.$$ By Claim [Claim 1](#clm-star-1){reference-type="ref" reference="clm-star-1"}, $$|\mathcal{S}|=\sum_{u\in C}|\mathcal{S}_u|\leq\sum_{u\in C}(\Delta-d_u-1)=|C|\ell^*.$$ Moreover, $|L|\geq \Delta-d^+_{\overrightarrow{H}}(u)$ for each vertex $u\in C$, since each vertex $u\in C$ has at least $\Delta-d^+_{\overrightarrow{H}}(u)$ out-neighbours in $L$. It follows that $|L|\geq \ell^*+1$ and $n=|C|+|L|\geq |C|+\ell^*+1$.
Hence $$|\mathcal{S}|\leq |C|\ell^*\leq \frac{(\ell^*+|C|)^2}{4}\leq \frac{(n-1)^2}{4},$$ and since $|\mathcal{S}|$ is an integer, $|\mathcal{S}|\leq \left\lfloor\frac{(n-1)^2}{4}\right\rfloor$. Then we characterize all the cases in which $|\mathcal{S}|$ attains this value. When the above equalities hold, we have $|C|=\ell^*=\frac{n-1}{2}$ when $n$ is odd and $\{|C|,\ell^*\}=\{\left\lfloor\frac{n-1}{2}\right\rfloor,\left\lceil\frac{n-1}{2}\right\rceil\}$ when $n$ is even. Furthermore, $n=|C|+|L|=|C|+\ell^*+1$, which implies $|L|=\ell^*+1$. Hence, if $n$ is odd, then $|C|=\left\lfloor\frac{n-1}{2}\right\rfloor$ and $|L|=\left\lceil\frac{n+1}{2}\right\rceil$; if $n$ is even, then either $|C|=\left\lfloor\frac{n-1}{2}\right\rfloor$ and $|L|=\left\lceil\frac{n+1}{2}\right\rceil$, or $|C|=\left\lceil\frac{n-1}{2}\right\rceil$ and $|L|=\left\lfloor\frac{n+1}{2}\right\rfloor$. For each vertex $u\in C$, since $\ell^*+1=|L|\geq \Delta-d^+_{\overrightarrow{H}}(u)$, it follows that $\ell^*=|L|-1\geq \Delta-d^+_{\overrightarrow{H}}(u)-1$. On the other hand, $|C|\ell^*=\sum_{u\in C}(\Delta-d_u-1)=\sum_{u\in C}(\Delta-d^+_{\overrightarrow{H}}(u)-1)$, which implies that $\ell^*=|L|-1=\Delta-d^+_{\overrightarrow{H}}(u)-1$ and $d^+_{\overrightarrow{H}}(u)=\Delta-|L|$ for each $u\in C$. Since $\Delta\leq d_{\overrightarrow{D}}^+(u)\leq d^+_{\overrightarrow{H}}(u)+|L|=\Delta$, we have $d_{\overrightarrow{D}}^+(u)=\Delta$ for each $u\in C$, which means that all stars of $\mathcal{S}_u$ have the same set of leaves and $L$ is contained in this set, for each $u\in C$. In particular, $\overrightarrow{D}[C]$ is $(\Delta-|L|)$-out-regular.
Since $|\mathcal{S}_u|\leq \Delta-1-d_{\overrightarrow{H}}^-(u)$ for each $u\in C$, it follows that $$|\mathcal{S}|=\sum_{u\in C}|\mathcal{S}_u|\leq \sum_{u\in C}(\Delta-1-d_{\overrightarrow{H}}^-(u))= |C|(\Delta-1)-\sum_{u\in C}d^+_{\overrightarrow{H}}(u)=\left\lfloor\frac{(n-1)^2}{4}\right\rfloor=|\mathcal{S}|.$$ Hence, $|\mathcal{S}_u|=\Delta-1-d_{\overrightarrow{H}}^-(u)$ for each $u\in C$, and this number is positive since $d_{\overrightarrow{H}}^-(u)\leq |C|-1\leq \left\lceil\frac{n-1}{2}\right\rceil-1<\Delta-1$. Therefore, $\mathcal{S}$ satisfies all conditions of $(iv)$, and $\mathcal{S}\in \mathfrak{A}(n,\Delta)$. # Proof of Theorem [Theorem 2](#main-1){reference-type="ref" reference="main-1"} {#sec-thm2} We shall give the proof of Theorem [Theorem 2](#main-1){reference-type="ref" reference="main-1"} in this section. To this end, we first introduce the definition of $\mathfrak{B}(n,m)$. **Definition 2**. *Let $2\leq n\leq m$ and $V$ be a vertex set of order $m$. We denote by $\mathfrak{B}(n,m)$ the set of families of $m(n-2)/n$ trees on $V$ such that* 1. *$V$ is partitioned into $m/n$ parts $V_1,V_2,\ldots,V_{m/n}$, where $|V_i|=n$ for all $1 \le i \le m/n$;* 2. *there are exactly $n-2$ spanning trees on $V_i$ for $1 \le i \le m/n$.* Next, we prove that if $\mathcal{T}=\{T_i: i\in[t]\}$ does not contain any rainbow tree of order $n$, then $|\mathcal{T}|\leq m(n-2)/n$. Let $F_1$ be a maximum rainbow tree in $\mathcal{T}$ and $V(F_1)=U_1$. Then we have $|U_1|\leq n-1$. Let $\mathcal{S}_1=\{T_i:E(T_i)\cap E(F_1)\neq \emptyset\}$. Then we have the following claim. **Claim 3**. *For each $T_i\in \mathcal{T}- \mathcal{S}_1$, $V(T_i)\cap U_1=\emptyset$.* *Proof.* Suppose to the contrary that $V(T_i)\cap U_1\neq \emptyset$ for some $T_i\in \mathcal{T}- \mathcal{S}_1$. Since $|U_1|\leq n-1$ and $|V(T_i)|=n$, it follows that $\partial_{T_i}(U_1)\neq \emptyset$.
If we choose an edge $f$ of $\partial_{T_i}(U_1)$, then $F_1\cup f$ is a larger rainbow tree, which contradicts the maximality of $F_1$. ◻ By Claim [Claim 3](#clm-tree-1){reference-type="ref" reference="clm-tree-1"}, $\mathcal{T}_1=\mathcal{T}- \mathcal{S}_1$ is a collection of trees of order $n$ on vertex set $V_1=V-U_1$. Let $F_2$ be a maximum rainbow tree of $\mathcal{T}_1$, $V(F_2)=U_2$ and $\mathcal{S}_2=\{T_i:E(T_i)\cap E(F_2)\neq \emptyset\}$. Then $|U_2|\leq n-1$ and $\mathcal{T}_2=\mathcal{T}-\mathcal{S}_1-\mathcal{S}_2$ is a collection of trees of order $n$ on vertex set $V_2=V-U_1-U_2$ by Claim [Claim 3](#clm-tree-1){reference-type="ref" reference="clm-tree-1"}. Continuing this process until $V_s=V-\bigcup_{i=1}^{s}U_i=\emptyset$, we obtain a sequence of rainbow trees $F_1,F_2,\ldots,F_s$ satisfying: 1. $F_i$ is a maximum rainbow tree in $\mathcal{T}_{i-1}$ for each $i\in[s]$, where we set $\mathcal{T}_0=\mathcal{T}$; 2. for each $i\in[s]$, $V(F_i)=U_i$, $\mathcal{S}_i=\{T_j:E(T_j)\cap E(F_i)\neq \emptyset\}$, $\mathcal{T}_i=\mathcal{T}-\bigcup_{j\in[i]}\mathcal{S}_j$, $V_i=V-\bigcup_{j\in[i]}U_j$, and $\mathcal{T}_i$ is a collection of trees of order $n$ on vertex set $V_i$; 3. we may have $|U_i|=1$ for some $i \in [s]$, in which case $\mathcal{T}_{i-1}=\mathcal{T}_i=\emptyset$. Since each $\mathcal{T}_i$ is a subset of $\mathcal{T}$, $\mathcal{T}_i$ does not contain any rainbow tree of order $n$. Hence $|V(F_i)|\leq n-1$ for each $i\in[s]$. Note that $\mathcal{S}_1,\mathcal{S}_2, \ldots,\mathcal{S}_{s}$ form a partition of $\mathcal{T}$. Hence $$\label{eq-uit} t = \sum_{i\in[s]}|\mathcal{S}_i|=\sum_{i\in[s]}|E(F_i)|=\sum_{i\in[s]}(|U_i|-1).$$ Similarly, $U_1, U_2,\ldots,U_s$ form a partition of $V$ and we have $\sum_{i\in[s]}|U_i|=|V|=m$. Then $$\begin{aligned} \label{ineq-2} t=m-s.\end{aligned}$$ To show that $t\leq m(n-2)/n$, we need to prove that $s\geq 2m/n$; the proof is separated into several claims.
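The iterative extraction of $F_1,F_2,\ldots,F_s$ is effectively a greedy peeling procedure. The sketch below simulates it on an extremal family of the type in Definition 2 ($n=4$, $m=8$: two blocks, each with $n-2=2$ spanning trees), using a greedy maximal rainbow tree as a stand-in for the maximum rainbow trees; all names are my own:

```python
def grow_rainbow_tree(trees):
    """Greedily grow a rainbow tree: at most one edge per tree, each new edge
    having exactly one endpoint in the current vertex set (so it stays a tree)."""
    start = min(v for e in trees[0] for v in e)
    U, used = {start}, set()
    grew = True
    while grew:
        grew = False
        for j, T in enumerate(trees):
            if j in used:
                continue
            for x, z in T:
                if (x in U) != (z in U):
                    U |= {x, z}
                    used.add(j)
                    grew = True
                    break
    return U, used

def peel(V, trees):
    """Extract F_1, F_2, ... as in the text; return the orders |U_1|, |U_2|, ..."""
    V, trees = set(V), list(trees)
    orders = []
    while V:
        if not trees:                  # the trailing |U_i| = 1 cases
            orders.append(1)
            V.remove(min(V))
            continue
        U, used = grow_rainbow_tree(trees)
        orders.append(len(U))
        V -= U
        # unused trees are vertex-disjoint from U, mirroring Claim 3
        trees = [T for j, T in enumerate(trees) if j not in used]
    return orders

blocks = [
    [(0, 1), (1, 2), (2, 3)], [(0, 1), (0, 2), (0, 3)],  # 2 spanning trees of {0,1,2,3}
    [(4, 5), (5, 6), (6, 7)], [(4, 5), (4, 6), (4, 7)],  # 2 spanning trees of {4,5,6,7}
]
orders = peel(range(8), blocks)
assert sum(o - 1 for o in orders) == len(blocks)   # t = sum(|U_i| - 1) = m - s
assert len(orders) == 2 * 8 // 4                   # s = 2m/n, so t = m(n-2)/n here
```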
At first, for each $2\leq i\leq s$, let $$\mathcal{R}_i=\{T_j\in \bigcup_{\ell\in[i-1]}\mathcal{S}_\ell: V(T_j)\cap U_i\neq \emptyset\}$$ and $|\mathcal{R}_i|=r_i$. **Claim 4**. *For each $T_k\in \mathcal{R}_i$ and each edge $f\in E(T_k)$ with one endpoint in $U_i$ and the other endpoint in $V-U_i$, there is a rainbow tree $T^*$ such that $F_i\cup f$ is a rainbow subtree of $T^*$, $|T^*|=|U_i|+r_i$ and the edges in $E(T^*)-E(F_i)$ come from distinct trees in $\mathcal{R}_i$. Moreover, $|U_i|+r_i\leq n-1$ for each $2\leq i\leq s$.* *Proof.* Let $\mathcal{R}_i=\{T_{c_1},T_{c_2},\cdots,T_{c_{r_i}}\}$. Without loss of generality, we may assume $k=c_1$ and set $f=f_1$. Then it is straightforward to verify that $F_i\cup f_1$ is a rainbow tree. Since $\mathcal{T}$ does not contain any rainbow tree of order $n$, we have $|F_i\cup f_1|<n$. Further, there exists an edge $f_2$ of $T_{c_2}$ belonging to $\partial_{T_{c_2}}(V(F_i\cup f_1))$ since $V(T_{c_2})\cap U_i\neq \emptyset$ and $|T_{c_2}|=n>|F_i\cup f_1|$. Then $F_i\cup \{f_1,f_2\}$ is again a rainbow tree. Repeating this process, we obtain a rainbow tree $T^* = F_i\cup \{f_1,f_2,\ldots,f_{r_i}\}$ with $f_j\in E(T_{c_j})$ for each $j\in[r_i]$. It is easy to see that $T^*$ is a rainbow tree of order $|U_i|+r_i$ containing $F_i\cup f$ as a rainbow subtree, and $|U_i|+r_i\leq n-1$ since $\mathcal{T}$ does not contain any rainbow tree of order $n$. ◻ **Claim 5**. *$\sum_{2\leq i\leq s}r_i\geq \sum_{i\in[s-1]}(|U_i|-1)$.* *Proof.* For an integer $i\in[s-1]$, let $T$ be an arbitrary tree of $\mathcal{S}_i$. By the definition of $F_i$, we have that $V(T)\cap U_j=\emptyset$ for each $j<i$. However, since $|V(T)|=n>|U_i|$ we have $\partial_{T}(U_i)\neq \emptyset$, so there exists an integer $\ell$ with $i< \ell\leq s$ such that there is an edge in $\partial_{T}(U_i)$ connecting $U_i$ and $U_\ell$.
Since $i\in[s-1]$ and $T\in\mathcal{S}_i$ are chosen arbitrarily, each tree of $\bigcup_{i\in[s-1]}\mathcal{S}_i$ contributes at least one to $\sum_{2\leq i\leq s}r_i$. Thus, $$\sum_{2\leq i\leq s}r_i\geq \left|\bigcup_{i\in[s-1]}\mathcal{S}_i\right|= \sum_{i\in[s-1]}|\mathcal{S}_i|= \sum_{i\in[s-1]}(|U_i|-1).$$ ◻ **Claim 6**. *$|U_s|=1$.* *Proof.* Suppose to the contrary that $|U_s|\geq 2$, which implies $\mathcal{S}_s\neq \emptyset$. Note that $\mathcal{S}_s$ is a collection of trees of order $n$ on vertex set $V_{s-1}$ and hence $|V_{s-1}|\geq n$. Then $V_s=V_{s-1}-U_s$ contains at least one vertex since $|U_s|\leq n-1$. It follows that there exists a further rainbow tree $F_{s+1}$, a contradiction. ◻ By Claims [Claim 4](#clm-tree-2){reference-type="ref" reference="clm-tree-2"}, [Claim 5](#clm-tree-3){reference-type="ref" reference="clm-tree-3"} and [Claim 6](#clm-tree-4){reference-type="ref" reference="clm-tree-4"}, together with identity [\[eq-uit\]](#eq-uit){reference-type="eqref" reference="eq-uit"} and [\[ineq-2\]](#ineq-2){reference-type="eqref" reference="ineq-2"}, we have $$\label{inq-ex-2} \begin{split} 2(m-s) &= 2\sum_{i\in[s]}(|U_i|-1)\\ &=\sum_{i\in[s]}(|U_i|-1)+\sum_{i\in[s-1]}(|U_i|-1)\\ &\leq \sum_{i\in[s]}(|U_i|-1)+\sum_{2\leq i\leq s}r_i\\ &=(|U_1|-1)+\sum_{2\leq i\leq s}(|U_i|-1+r_i)\\ &\leq n-2+(s-1)(n-2)\\ &=s(n-2), \end{split}$$ and hence $$s\geq \frac{2m}{n}, \quad t=m-s\leq m-\frac{2m}{n}=\frac{m}{n}(n-2).$$ Hence, $|\mathcal{T}|\leq m(n-2)/n$. We now characterize all collections $\mathcal{T}$ with $|\mathcal{T}|= m(n-2)/n$ by induction. It is obvious that $\mathcal{T}\in \mathfrak{B}(n,m)$ if $m=n$. Note that $|\mathcal{T}|= m(n-2)/n$ indicates that $s=2m/n$ and all equalities in Claim [Claim 4](#clm-tree-2){reference-type="ref" reference="clm-tree-2"}, [Claim 5](#clm-tree-3){reference-type="ref" reference="clm-tree-3"} and inequality ([\[inq-ex-2\]](#inq-ex-2){reference-type="ref" reference="inq-ex-2"}) hold.
Then $$\begin{aligned} |U_i|+r_i= n-1\end{aligned}$$ for each $2\leq i\leq s$, and each $T\in\mathcal{S}_i$ contributes exactly one to $\sum_{2\leq i\leq s}r_i$, where $i\in[s-1]$. This implies that for each $T\in \mathcal{T}$, there exist exactly two indices $j,j'\in[s]$ such that $V(T)\subseteq U_j\cup U_{j'}$, $V(T)\cap U_j\neq \emptyset$ and $V(T)\cap U_{j'}\neq \emptyset$. For the case $i = s$, we have $r_s=n-2$ since $|U_s| = 1$ by Claim [Claim 6](#clm-tree-4){reference-type="ref" reference="clm-tree-4"}. Moreover, there are exactly $n-2$ trees of $\mathcal{T}$, say $T_{p_1},T_{p_2},\ldots,T_{p_{n-2}}$, such that $u\in V(T_{p_i})$ for each $i\in[n-2]$, where we set $U_s=\{u\}$. Assume that $T_{p_i}\in\mathcal{S}_{q_i}$ for each $i\in[n-2]$. Then $V(T_{p_i})\subseteq U_{q_i}\cup\{u\}$ for each $i\in[n-2]$. Since $|T_{p_i}|=n$, it follows that $|U_{q_i}|=n-1$ and $V(T_{p_i})= U_{q_i}\cup\{u\}$ for each $i\in[n-2]$. **Claim 7**. *$q_1=q_2=\cdots=q_{n-2}$.* *Proof.* Let $I=\{p_i: i\in[n-2] \mbox{ and } q_i=q_1\}$ and $I'=C(F_{q_1})-I$, where $C(F_{q_1})=\{i:T_i\in \mathcal{S}_{q_1}\}$. By contradiction, we suppose that $|I|<n-2$. Then $I'\neq \emptyset$ since $|\mathcal{S}_{q_1}|=|U_{q_1}|-1=n-2>|I|$. Selecting one element $k\in I'$, we have $k\notin\{p_1,p_2,\ldots,p_{n-2}\}$, since otherwise $T_k\in \mathcal{S}_{q_1}$ would imply $k\in I$. Then there exists an integer $q_1 < \ell < s$ such that $V(T_k)\subseteq U_{q_1}\cup U_\ell$ and an edge $f\in E(T_k)$ joining $U_\ell$ and $U_{q_1}$, and by definition $T_k\in \mathcal{R}_{\ell}$. Note that $F_\ell\cup f$ is a rainbow tree and recall that $|U_{\ell}|+r_{\ell}=n-1$; then by Claim [Claim 4](#clm-tree-2){reference-type="ref" reference="clm-tree-2"} we can construct a rainbow tree $T^*$ containing $F_\ell\cup f$ such that $|T^*|=n-1$. It follows from $V(T_{p_1})= U_{q_1}\cup\{u\}$ that $V(T_{p_1})\cap U_\ell=\emptyset$ and $T_{p_1} \notin \mathcal{R}_\ell$. Then $p_1\notin C(T^*)$, where $C(T^*)=\{i\in[t]:E(T_i)\cap E(T^*)\neq\emptyset\}$.
Since $V(T^*)\cap V(T_{p_1})\neq \emptyset$ and $|T^*|<|T_{p_1}|$, it follows that $\partial_{T_{p_1}}(V(T^*))\neq \emptyset$, say $f'\in \partial_{T_{p_1}}(V(T^*))$. Thus, $T^*\cup f'$ is a rainbow tree of order $n$, a contradiction. ◻ By the above claim, we may let $q=q_1=q_2=\cdots=q_{n-2}$. Then $$\mathcal{S}_{q}=\{T_{p_1},T_{p_2},\ldots,T_{p_{n-2}}\}$$ since $|\mathcal{S}_q| = |U_q|-1 \le n-2$. Hence, $F_q$ is a rainbow tree of order $n-1$ whose edges come from $T_{p_1},T_{p_2},\ldots,T_{p_{n-2}}$. Since $u\in V(T_{p_i})$ for each $i\in[n-2]$, it follows that $T_{p_1},T_{p_2},\ldots,T_{p_{n-2}}$ are $n-2$ trees on vertex set $V'=U_q\cup \{u\}$. Suppose that there exists some $T\in\mathcal{T}-\mathcal{S}_{q}$ and $v \in V(T)\cap V'$. Then $v \neq u$ since $\mathcal{S}_q = \mathcal{R}_s$. It follows that $v \in U_q \cap V(T)$ and $\partial_{T}(U_q) \neq \emptyset$, from which we can construct a rainbow tree of order $n$, a contradiction. Hence $V(T)\cap V'=\emptyset$ for each $T\in\mathcal{T}-\mathcal{S}_{q}$, and $\mathcal{F}=\mathcal{T}-\mathcal{S}_{q}$ is a collection of $(m-n)(n-2)/n$ trees of order $n$ on vertex set $V-V'$ with $|V-V'|=m-n$. Since $\mathcal{F}$ also contains no rainbow tree of order $n$, we have $\mathcal{F}\in \mathfrak{B}(n,m-n)$ by induction, and hence $\mathcal{T}\in \mathfrak{B}(n,m)$. # Concluding remarks In this paper, we mainly consider collections of two types of tree structures: stars and general trees. It is natural to ask how many Hamiltonian paths on a vertex set $V$ ($|V|=n$) are needed to guarantee a rainbow Hamiltonian path. Clearly, the minimum number of Hamiltonian paths needed in this question is at least $n-1$, since a rainbow Hamiltonian path has $n-1$ edges. In graphic matroids, a Hamiltonian path (or more generally, a spanning tree) is a basis. As mentioned in the Introduction, the validity of Rota's basis conjecture would imply that $n-1$ Hamiltonian paths on $V$ can be decomposed into $n-1$ rainbow spanning trees.
Therefore, if we assume the conjecture is true, another natural problem is whether this question is a corollary of Rota's basis conjecture. This surmise seems plausible because in this case there are at most $2(n-1)$ edges incident with each vertex of $V$, and these edges are partitioned into $n-1$ nonempty sets in the decomposition. # Acknowledgements Ping Li is supported by the National Natural Science Foundation of China (No. 12201375). Luyi Li is supported by the Tianjin Research Innovation Project for Postgraduate Students (No. 2022BKY039). Ethan Li is supported by the Fundamental Research Funds for the Central Universities (No. GK202207023). R. Aharoni and E. Berger, Rainbow matchings in r-partite r-graphs, Electron. J. Combin., 16: R119, (2009). R. Aharoni, J. Briggs, R. Holzman and Z. Jiang, Rainbow Odd Cycles, SIAM Journal on Discrete Mathematics. 35(4): 2293--2303, (2021). R. Aharoni, M. DeVos, S. González Hermosillo de la Maza, A. Montejano, R. Šámal, A rainbow version of Mantel's theorem, Adv. Comb. $\mathbf{2}$ (2020), 12 pp. P. Bradshaw, Transversals and Bipancyclicity in Bipartite Graph Families, Electron. J. Combin. 28(4): \#P4.25, 2021. P. Bradshaw, K. Halasz, L. Stacho, From one to many rainbow Hamiltonian cycles, Graphs & Comb. 38(6) (2022), 1--21. A.E. Brouwer, A.J. de Vries, R.M.A. Wieringa, A lower bound for the length of partial transversals in a latin square, Nieuw Arch. Wisk. 26(3) (1978), 330--332. M. Bucić, M. Kwan, A. Pokrovskiy and B. Sudakov, Halfway to Rota's Basis Conjecture, International Mathematics Research Notices, Volume 2020, Issue 21, November 2020, Pages 8007--8026, https://doi.org/10.1093/imrn/rnaa004. Y. Cheng, J. Han, B. Wang, G. Wang, Rainbow spanning structures in graph and hypergraph system, arXiv:2105.10219. Y. Cheng, G. Wang, Y. Zhao, Rainbow pancyclicity in graph systems, Electron. J. Combin. 28(3): \#P3.24, 2021. J. Dénes, A.D.
Keedwell, Latin squares and their applications, Akadémiai Kiadó, Budapest, 1974. Z. Dong and Z. Xu, Rainbow even cycles, arXiv:2211.09530v1. D.A. Drake, Maximal sets of latin squares and partial transversals, J. Statist. Plann. Inference 1(1977), 143--149. A. Ferber, J. Han, D. Mao, Dirac-type problem of rainbow matchings and Hamilton cycles in random graphs, arXiv: 2211.05477. I. Goorevitch and Ron Holzman, Rainbow Triangles in Families of Triangles, arXiv:2209.15493v1. P. Hatami, P.W. Shor, A lower bound for the length of a partial transversal in a latin square, J. Comb. Theory, Ser. A, 115(2008), 1103--1113. R. Huang and G.-C. Rota. On the relations of various conjectures on Latin squares and straightening coefficients, Discrete Mathematics, 128(1--3):225--236, 1994. F. Joos, J. Kim, On a rainbow version of Dirac's theorem, Bull. London Math. Soc. $\mathbf{52}$ (2020), 498--504. K.K. Koksma, A lower bound for the order of a partial transversal in a latin square, J. Comb. Theory 7(1969), 94--95. L. Li, P. Li, X. Li, Rainbow structures in a collection of graphs with degree conditions, J. Graph. Theory. (2023), 1--19. https://doi.org/10.1002/jgt.22966 R. Montgomery, A. Müyesser, Y. Pehova, Transversal factors and spanning trees, Adv. Comb. $\mathbf{3}$ (2020), 25 pp. D.E. Woolbright, An $n\times n$ latin square has a transversal with at least $n-\sqrt{n}$ distinct symbols, J. Comb. Theory, Ser. A, 24(1978), 235--237.
--- abstract: | In this paper we show that the geodesic flow of a compact surface without conjugate points of genus greater than one is time-preserving semi-conjugate to a continuous expansive flow which is topologically mixing and has a local product structure. As an application we show that the geodesic flow of a compact surface without conjugate points of genus greater than one has a unique measure of maximal entropy. This gives an alternative proof of the Climenhaga-Knieper-War theorem. author: - Edhin Franklin Mamani title: Geodesic flows of compact higher genus surfaces without conjugate points have expansive factors --- # Introduction The fundamental work of Morse [@morse24] is the main motivation for the study of the existence of equivalences between geodesic flows of compact higher genus surfaces and hyperbolic geodesic flows. In 1980, Gromov [@grom87] built a semi-conjugacy, not necessarily time-preserving, between the geodesic flow of a higher genus surface of non-positive curvature and the geodesic flow of a hyperbolic surface. Later, Ghys did the same for Anosov geodesic flows (his proof extends to compact surfaces without conjugate points) [@ghys84]. In 2018, Gelfert and Ruggiero [@gelf19] found a time-preserving semi-conjugacy between geodesic flows of compact surfaces without focal points and expansive flows. This result was extended to the case of compact surfaces without conjugate points and continuous Green bundles [@gelf20]. The main contribution of this paper is the following. **Theorem 1**. *Let $M$ be a compact surface without conjugate points of genus greater than one and $\phi_t$ be its geodesic flow. Then, there exists a continuous expansive flow $\psi_t$ that is time-preserving semi-conjugate to $\phi_t$, acting on a compact metric space $X$ of topological dimension at least two.
Moreover, $\psi_t$ is topologically mixing and has a local product structure.* The good dynamical properties of the factor flow $\psi_t$ allow us to apply Bowen-Franco's classical theory to obtain the unique measure of maximal entropy of $\psi_t$ [@fran77]. Combining this fact with Buzzi-Fisher-Sambarino-Vasquez's work [@buzzi12] about extensions of expansive flows, we get the following application. **Theorem 2**. *Let $M$ be a compact surface without conjugate points of genus greater than one. Then, its geodesic flow $\phi_t$ has a unique measure of maximal entropy.* This theorem was proved by Climenhaga, Knieper and War in 2020 for a family of compact $n$-manifolds without conjugate points, including compact higher genus surfaces [@clim21]. Their approach uses an extension of Bowen-Franco's theory due to Climenhaga and Thompson [@clim16]. Gelfert-Ruggiero's approach (assuming no focal points or continuous Green bundles) is different and gives a more direct alternative proof. Let us briefly explain the new contributions of this work. The construction of the factor flow $\psi_t$ is based on an equivalence relation equivariant by the geodesic flow, which gives rise to a quotient space $X$ and a quotient flow $\psi_t$. A careful study of a basis of the quotient topology of $X$ is crucial in Gelfert-Ruggiero's works. This study relies on the structure of the expansive set $\mathcal{R}_0$, i.e., the set of points whose equivalence class is a singleton. Assuming continuity of the Green bundles, $\mathcal{R}_0$ forms an open dense subset of the unit tangent bundle. However, in our setting $\mathcal{R}_0$ might fail to be open; indeed, the openness of $\mathcal{R}_0$ is an open problem in general. Despite this, we show that $\mathcal{R}_0$ is dense in the complement of the set of points with non-trivial equivalence class, which turns out to be enough to get a basis of the quotient topology.
Secondly, many ideas for the proof of the dynamical properties of $\psi_t$ in Gelfert-Ruggiero's papers rely on the fact that $X$ admits a 3-manifold structure. This allows one to choose a Riemannian metric on $X$ that can be lifted to the universal covering $\tilde{X}$ of $X$. The Riemannian structure of $\tilde{X}$ is important in the proofs. In our context, we do not know if $X$ admits a manifold structure. It is metrizable, but a metric on $X$ might not be a length metric, so it might not lift to $\tilde{X}$. We tackle this problem with more general topological arguments. Finally, we study the possible loss of topological dimension in the quotient and show that the topological dimension of $\tilde{X}$ is at least two. The paper is organized as follows. Section 2 contains the preliminaries. In Section 3, we define the equivalence relation that gives rise to the quotient space and flow. Section 4 studies the topological properties of the factor flow. Section 5 deals with the topological dimension of the quotient space. Section 6 is concerned with the dynamical properties of the factor flow, where we complete the proof of Theorem [Theorem 1](#main1){reference-type="ref" reference="main1"}. Finally, Section 7 is devoted to showing the uniqueness of the measure of maximal entropy of the geodesic flow. # Preliminaries ## Compact manifolds without conjugate points {#m} In this subsection we give basic definitions and notations that we use throughout the paper. Let $(M,g)$ be a $C^{\infty}$ compact connected Riemannian manifold, $TM$ be its tangent bundle and $T_1M$ be its unit tangent bundle. Consider the universal covering $\tilde{M}$ of $M$, the covering map $\pi:\tilde{M}\to M$ and the natural projection $d\pi:T\tilde{M}\to TM$. The universal covering $(\tilde{M},\tilde{g})$ is a complete Riemannian manifold with the pullback metric $\tilde{g}=\pi^*g$. A manifold $M$ has no conjugate points if, for every $p\in M$, the exponential map $\exp_p$ is non-singular at every point of $T_pM$.
In particular, $\exp_p$ is a covering map for every $p\in M$ (p. 151 of [@doca92]). Denote by $\nabla$ the Levi-Civita connection induced by $g$. A geodesic is a smooth curve $\gamma\subset M$ with $\nabla_{\dot{\gamma}}\dot{\gamma}=0$. For every $\theta=(p,v)\in TM$, denote by $\gamma_{\theta}$ the unique geodesic with initial conditions $\gamma_{\theta}(0)=p$ and $\dot{\gamma}_{\theta}(0)=v$. The geodesic flow $\phi_t$ is defined by $$\phi: \mathbb{R}\times TM\to TM \qquad (t,\theta)\mapsto \phi_t(\theta)=\dot{\gamma}_{\theta}(t).$$ Parameterizing all geodesics by arc-length allows us to restrict $\phi_t$ to $T_1M$. We now define a Riemannian metric on the tangent bundle $TM$ (Section 1.3 of [@pater97]). Denote by $P:TM\to M$ and $\tilde{P}:T\tilde{M}\to \tilde{M}$ the corresponding canonical projections. For every $\theta=(p,v)\in TM$, the Levi-Civita connection induces the so-called connection map $C_{\theta}:T_{\theta}TM\to T_pM$. These linear maps provide the linear isomorphism $T_{\theta}TM\to T_pM\oplus T_pM$ with $\xi\mapsto (d_{\theta}P(\xi),C_{\theta}(\xi))$. We define the horizontal subspace by $\mathcal{H}(\theta)=\ker(C_{\theta})$ and the vertical subspace by $\mathcal{V}(\theta)=\ker(d_{\theta}P)$. These subspaces decompose the tangent space by $T_{\theta}TM=\mathcal{H}(\theta)\oplus \mathcal{V}(\theta)$. For every $\xi,\eta\in T_{\theta}TM$, the Sasaki metric is defined by $$\label{sasa} \langle \xi,\eta \rangle_s = \langle d_{\theta}P(\xi), d_{\theta}P(\eta) \rangle_p + \langle C_{\theta}(\xi), C_{\theta}(\eta) \rangle_p.$$ This metric induces a Riemannian distance $d_s$ usually called Sasaki distance. For every $\theta\in T_1M$, denote by $G(\theta)\subset T_{\theta}T_1M$ the subspace tangent to the geodesic flow at $\theta$. Let $N(\theta)\subset T_{\theta}T_1M$ be the subspace orthogonal to $G(\theta)$ with respect to the Sasaki metric. 
For every $\theta\in T_1M$, $H(\theta)=\mathcal{H}(\theta)\cap N(\theta)$ and $V(\theta)=\mathcal{V}(\theta)\cap N(\theta)$. From the above decomposition we have $$T_{\theta}T_1M=H(\theta)\oplus V(\theta)\oplus G(\theta) \quad \text{ and }\quad N(\theta)=H(\theta)\oplus V(\theta).$$ So, every $\xi\in N(\theta)$ has decomposition $\xi=(\xi_h,\xi_v)\in H(\theta)\oplus V(\theta)$. We call $\xi_h$ and $\xi_v$ the horizontal and vertical components of $\xi$ respectively. ## Horospheres and horocycles {#h} In this subsection we assume that $(M,g)$ is a compact surface without conjugate points and genus greater than one. We introduce important asymptotic objects in the universal covering. We follow [@esch77] and part II of [@pesin77]. Let $\theta\in T_1\tilde{M}$ and $\gamma_{\theta}$ be the geodesic induced by $\theta$. We define the forward Busemann function by $$b_{\theta}: \tilde{M}\to \mathbb{R} \qquad p\mapsto b_{\theta}(p)=\lim_{t\to \infty}d(p,\gamma_{\theta}(t))-t.$$ From now on, for every $\theta=(p,v)\in T_1\Tilde{M}$ we denote $-\theta:=(p,-v)\in T_1\Tilde{M}$. The stable and unstable horosphere of $\theta$ are defined by $$H^+(\theta)=b_{\theta}^{-1}(0)\subset \tilde{M} \quad \text{ and }\quad H^-(\theta)=b_{-\theta}^{-1}(0)\subset \tilde{M}.$$ We lift these horospheres to $T_1\tilde{M}$. Denote by $\nabla b_{\theta}$ the gradient vector field of $b_{\theta}$. We define the stable and unstable horocycle of $\theta$ by $$\mathcal{\tilde{F}}^s(\theta)=\{ (p,-\nabla_pb_{\theta}): p\in H^+(\theta) \}\quad \text{ and }\quad \mathcal{\tilde{F}}^u(\theta)= \{ (p,\nabla_pb_{-\theta}): p\in H^-(\theta) \}.$$ The horocycles project onto the horospheres by the canonical projection $\tilde{P}$. 
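Although the paper works in variable curvature, the constant-curvature model makes these definitions concrete. The following standard computation in the upper half-plane (included only as an illustration; it is not part of the general setting above) exhibits the Busemann function and horospheres explicitly:

```latex
% Upper half-plane model of \mathbb{H}^2 (curvature -1), an
% illustrative special case. Let \theta be the upward unit vector
% at i, so that \gamma_{\theta}(t) = i e^{t}. For p = x + iy one
% checks d(p, \gamma_{\theta}(t)) = t - \log y + o(1) as t\to\infty,
% hence
\[
  b_{\theta}(p)
  \;=\; \lim_{t\to \infty} d\big(p,\gamma_{\theta}(t)\big) - t
  \;=\; -\log y .
\]
% Thus H^{+}(\theta) = b_{\theta}^{-1}(0) = \{\, \operatorname{Im} z = 1 \,\}
% is the horizontal line through i, while H^{-}(\theta) is the
% Euclidean circle tangent to the real axis at 0 and passing
% through i. Since -\nabla b_{\theta} is the upward unit normal
% field along \{\operatorname{Im} z = 1\}, the stable horocycle
% \mathcal{\tilde{F}}^{s}(\theta) consists of these upward normals.
```

In this model every stable and unstable horosphere intersect in a single point, so all classes $\mathcal{\tilde{I}}(\theta)$ are trivial; non-trivial classes only appear once the curvature is allowed to degenerate.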
For every $\theta\in T_1\tilde{M}$, we define the stable and unstable families of horocycles by $$\mathcal{\tilde{F}}^s=(\mathcal{\tilde{F}}^s(\theta) )_{\theta\in T_1\tilde{M}} \quad \text{ and }\quad \mathcal{\tilde{F}}^u=( \mathcal{\tilde{F}}^u(\theta) )_{\theta\in T_1\tilde{M}}.$$ We also define the center stable and center unstable sets of $\theta$ by $$\mathcal{\tilde{F}}^{cs}(\theta)=\bigcup_{t\in \mathbb{R}} \mathcal{\tilde{F}}^s(\tilde{\phi}_t(\theta)) \quad \text{ and }\quad \mathcal{\tilde{F}}^{cu}(\theta)=\bigcup_{t\in \mathbb{R}} \mathcal{\tilde{F}}^u(\tilde{\phi}_t(\theta)).$$ We can define the above objects in the case of $T_1M$. For every $\theta\in T_1M$, $$\mathcal{F}^*(\theta)=d\pi (\mathcal{\tilde{F}}^*(\tilde{\theta}))\subset T_1M \quad \text{ and }\quad \mathcal{F}^*=d\pi (\mathcal{\tilde{F}}^*), \quad *=s,u,cs,cu;$$ for any lift $\tilde{\theta}\in T_1\tilde{M}$ of $\theta$. Let us state some properties of these objects. **Proposition 1** ([@esch77; @pesin77]). *Let $M$ be a compact surface without conjugate points of genus greater than one. Then, for every $\theta\in T_1\tilde{M}$,* 1. *Busemann functions $b_{\theta}$ are $C^{1,L}$ with $L$-Lipschitz unit gradient for a uniform constant $L>0$ [@knip86].* 2. *Horospheres $H^+(\theta),H^-(\theta)\subset \tilde{M}$ and horocycles $\mathcal{\tilde{F}}^s(\theta),\mathcal{\tilde{F}}^u(\theta)\subset T_1\tilde{M}$ are embedded curves.* 3. *The families $\mathcal{\tilde{F}}^s,\mathcal{\tilde{F}}^u$ and $\mathcal{F}^s,\mathcal{F}^u$ are continuous foliations of $T_1\tilde{M},T_1M$ respectively, and invariant by the geodesic flow: for every $t\in \mathbb{R}$, $$\Tilde{\phi}_t(\mathcal{\tilde{F}}^s(\theta))=\mathcal{\tilde{F}}^s(\Tilde{\phi}_t(\theta)).$$* ## Morse's shadowing and consequences {#special} In 1924, Morse [@morse24] studied a special class of geodesics of closed surfaces of genus greater than one. These surfaces always admit a metric of constant negative curvature, called a hyperbolic metric.
The geodesics of this hyperbolic metric are called hyperbolic geodesics. **Theorem 3** ([@morse24]). *Let $(M,g)$ be a compact surface without conjugate points of genus greater than one and $\tilde{M}$ be its universal covering. Then, there exists $R>0$ such that for every geodesic $\gamma\subset \tilde{M}$ there exists a hyperbolic geodesic $\gamma'\subset \tilde{M}$ with Hausdorff distance between $\gamma$ and $\gamma'$ bounded above by $R$.* Given two geodesics $\gamma_1,\gamma_2\subset \Tilde{M}$, we say that $\gamma_1$ and $\gamma_2$ are asymptotic if $d(\gamma_1(t),\gamma_2(t))\leq C$ for every $t\geq 0$ and for some $C>0$. If the last inequality holds for every $t\in \mathbb{R}$, then $\gamma_1$ and $\gamma_2$ are called bi-asymptotic. So, Theorem [Theorem 3](#mor){reference-type="ref" reference="mor"} says that $\gamma$ and $\gamma'$ are bi-asymptotic with respect to the hyperbolic distance. This gives a uniform bound between bi-asymptotic geodesics. **Theorem 4**. *Let $(M,g)$ be a compact surface without conjugate points of genus greater than one. Then there exists $Q(M)>0$ such that the Hausdorff distance between any two bi-asymptotic geodesics is bounded above by $Q(M)$.* For each $\theta\in T_1\tilde{M}$, we define the intersections $$I(\theta)=H^+(\theta)\cap H^-(\theta)\subset \tilde{M} \quad \text{ and }\quad \mathcal{\tilde{I}}(\theta)=\mathcal{\tilde{F}}^s(\theta)\cap \mathcal{\tilde{F}}^u(\theta)\subset T_1\tilde{M}.$$ We call $\mathcal{\tilde{I}}(\theta)$ the class of $\theta$. For the canonical projection $\tilde{P}$ we have $\tilde{P}(\mathcal{\tilde{I}}(\theta))=I(\theta)$. We observe that for every $\eta=(q,w)\in \mathcal{\Tilde{I}}(\theta)$ with $q\in I(\theta)$, the geodesic $\gamma_{\eta}$ is bi-asymptotic to $\gamma_{\theta}$. So, we can translate the bounds of Theorem [Theorem 4](#morse){reference-type="ref" reference="morse"} from bi-asymptotic geodesics to intersections between horospheres and horocycles.
This fact is included in the following proposition. **Proposition 2**. *Let $M$ be a compact surface without conjugate points of genus greater than one and $\tilde{M}$ be its universal covering. Then, for every $\theta\in T_1\tilde{M}$* 1. *$I(\theta)$ and $\mathcal{\tilde{I}}(\theta)$ are compact connected curves of $\tilde{M}$ and $T_1\tilde{M}$ respectively (Corollary 3.3 of [@riff18]).* 2. *$Diam(I(\theta))\leq Q$ and $Diam(\mathcal{\tilde{I}}(\theta))\leq \tilde{Q}$ for some $Q(M),\tilde{Q}(M)>0$.* We remark that for every $\theta \in T_1M$ and every lift $\tilde{\theta} \in T_1\tilde{M}$ of $\theta$, we have $$d\pi(\mathcal{\tilde{I}}(\tilde{\theta}))=\mathcal{I}(\theta)=\mathcal{F}^s(\theta)\cap \mathcal{F}^u(\theta).$$ **Definition 1**. Let $M$ be a compact surface without conjugate points of genus greater than one. We say that $\theta\in T_1M$ is an expansive point and $\mathcal{I}(\theta)$ is a trivial class if $\mathcal{I}(\theta)$ is a single point. Otherwise, $\theta$ is a non-expansive point and $\mathcal{I}(\theta)$ is a non-trivial class. The expansive points form the so-called expansive set $$\mathcal{R}_0=\{ \theta \in T_1M: \mathcal{F}^s(\theta)\cap \mathcal{F}^u(\theta)= \{ \theta \} \}.$$ The complement of $\mathcal{R}_0$ is called the non-expansive set. In addition, note that any non-trivial class $\mathcal{I}(\theta)$ has two boundary points. **Corollary 1** ([@riff18]). *Let $M$ be a compact surface without conjugate points of genus greater than one and $\tilde{M}$ be its universal covering.
For every $\theta\in T_1\tilde{M}$, if $\eta=(q,w)\in \mathcal{\tilde{F}}^s(\theta)$ and $\gamma_{\eta}$ is bi-asymptotic to $\gamma_{\theta}$ then $$\eta \in \mathcal{\Tilde{I}}(\theta)=\mathcal{\tilde{F}}^s(\theta)\cap\mathcal{\tilde{F}}^u(\theta) \quad \text{ and }\quad q\in I(\theta)=H^+(\theta)\cap H^-(\theta).$$* ## Visibility manifolds This subsection introduces visibility manifolds and states some of their dynamical and geometric properties. Let $M$ be a simply connected Riemannian manifold without conjugate points. For every $x,y\in M$, denote by $[x,y]$ the geodesic segment joining $x$ to $y$. For $z\in M$ we also denote by $\sphericalangle_z(x,y)$ the angle at $z$ formed by $[z,x]$ and $[z,y]$. We say that $M$ is a visibility manifold if for every $z\in M$ and every $\epsilon>0$ there exists $R(\epsilon, z)>0$ such that $$\text{ if }x,y\in M \text{ with }d(z,[x,y])>R(\epsilon, z) \quad \text{ then }\quad \sphericalangle_z(x,y)<\epsilon.$$ If $R(\epsilon,z)$ does not depend on $z$ then $M$ is called a uniform visibility manifold. **Theorem 5** ([@eber72]). *If $M$ is a compact surface without conjugate points of genus greater than one then its universal covering is a uniform visibility manifold.* In 1973, Eberlein extended some transitivity properties of the geodesic flow to the case of compact manifolds without conjugate points. A foliation is called minimal if each of its leaves is dense. **Theorem 6** ([@eber72; @eber73neg2]). *Let $M$ be a compact surface without conjugate points of genus greater than one. Then* 1. *The horospherical foliations $\mathcal{F}^s$ and $\mathcal{F}^u$ are minimal.* 2. *The geodesic flow $\phi_t$ is topologically mixing.* 3.
*For every $\theta,\xi \in T_1\tilde{M}$ with $\theta\not\in \mathcal{\tilde{F}}^{cu}(\xi)$ there exist $\eta_1,\eta_2\in T_1\tilde{M}$ such that $$\mathcal{\tilde{F}}^s(\theta)\cap \mathcal{\tilde{F}}^{cu}(\xi)=\mathcal{\Tilde{I}}(\eta_1) \quad \text{ and }\quad \mathcal{\tilde{F}}^s(\xi)\cap \mathcal{\tilde{F}}^{cu}(\theta)=\mathcal{\Tilde{I}}(\eta_2).$$* We can transform item (3) into intersections of unstable horocycles and center stable sets. There exist $t_1,t_2\in \mathbb{R}$ such that $$\mathcal{\tilde{F}}^{cs}(\theta)\cap \mathcal{\tilde{F}}^u(\xi)=\mathcal{\Tilde{I}}(\Tilde{\phi}_{t_1}(\eta_1)) \quad \text{ and }\quad \mathcal{\tilde{F}}^{cs}(\xi)\cap\mathcal{\tilde{F}}^u(\theta)=\mathcal{\Tilde{I}}(\Tilde{\phi}_{t_2}(\eta_2)).$$ The above intersections are called the heteroclinic connections of $\Tilde{\phi}_t$. Recall that for a compact manifold of negative curvature, its geodesic flow is uniformly hyperbolic [@anos67]. This provides invariant submanifolds (which agree with the horocycles) with hyperbolic behavior. However, for a general compact manifold without conjugate points, its geodesic flow might not be uniformly hyperbolic. Despite this, the horocycles still have some weak hyperbolicity. From Equation [\[sasa\]](#sasa){reference-type="eqref" reference="sasa"} in Subsection [2.1](#m){reference-type="ref" reference="m"}, we recall that $d_s$ is the Sasaki distance restricted to $T_1\Tilde{M}$. **Proposition 3** ([@eber72]). *Let $M$ be a compact surface without conjugate points of genus greater than one and $\tilde{M}$ be its universal covering. Then, there exist $A,B>0$ such that for every $\theta\in T_1\tilde{M}$ and every $\eta\in \mathcal{\tilde{F}}^s(\theta)$, $$d_s(\Tilde{\phi}_t(\theta),\Tilde{\phi}_t(\eta))\leq Ad_s(\theta,\eta)+B, \quad \text{ for every } t\geq 0.$$* ## Some dynamical and ergodic properties of continuous flows on compact metric spaces {#p5} We first introduce the dynamical properties.
Let $\psi_t:X\to X$ be a continuous flow acting on a compact metric space $X$. We say that $\psi_t$ is topologically transitive if there exists a dense orbit. The flow $\psi_t$ is topologically mixing if for all nonempty open sets $A,B\subset X$ there exists $t_0>0$ such that $\psi_t(A)\cap B\neq \emptyset$ for $|t|\geq t_0$. For every $x\in X$ and every $\epsilon>0$, we define the $$\text{strong stable set of $x$: }W^{ss}(x)=\{ y\in X:d(\psi_t(x),\psi_t(y))\to 0 \text{ as }t\to \infty\},$$ $$\epsilon\text{-strong stable set: }W^{ss}_{\epsilon}(x)=\{ y\in W^{ss}(x):d(\psi_t(x),\psi_t(y))\leq \epsilon, \text{ for every }t\geq 0\}.$$ The strong unstable $W^{uu}(x)$ and $\epsilon$-strong unstable $W^{uu}_{\epsilon}(x)$ sets are defined similarly for $t\leq0$. The flow $\psi_t$ has a local product structure if for every $\epsilon>0$ there exists $\delta>0$ such that if $x,y\in X$ satisfy $d(x,y)\leq \delta$ then there exists a unique $\tau\in \mathbb{R}$ with $$|\tau|\leq \epsilon \quad \text{ and } \quad W^{ss}_{\epsilon}(x)\cap W^{uu}_{\epsilon}(\psi_{\tau}(y))\neq \emptyset.$$ The orbit of the intersection point follows the orbit of $x$ in the future and the orbit of $y$ in the past. The flow $\psi_t$ is expansive if there exists $\epsilon>0$ such that if $x,y\in X$ satisfy $$d(\psi_t(x),\psi_{\rho(t)}(y))\leq \epsilon \text{ for every }t \in \mathbb{R}$$ and some reparametrization $\rho$, then there exists $\tau\in [-\epsilon,\epsilon]$ with $y=\psi_{\tau}(x)$. We call $\epsilon$ a constant of expansivity of $\psi_t$. In the context of continuous flows without singularities acting on compact manifolds, the above definition is equivalent to the Bowen-Walters definition of expansivity [@bowen72]. We remark that Anosov flows are always expansive. Let us define a special kind of semi-conjugacy between flows. Let $\phi_t:Y\to Y$ and $\psi_t:X\to X$ be two continuous flows acting on compact topological spaces.
A map $\chi:Y\to X$ is called a time-preserving semi-conjugacy if $\chi$ is a continuous surjection satisfying $\chi\circ\phi_t=\psi_t\circ \chi$ for every $t\in \mathbb{R}$. In this case, we say that $\psi_t$ is time-preserving semi-conjugate to $\phi_t$ or is a time-preserving factor of $\phi_t$. We now give the ergodic properties. Let $\psi_t:X\to X$ be a continuous flow acting on a compact metric space $(X,d)$. A Borel set $Z\subset X$ is invariant by the flow if $\psi_t(Z)=Z$ for every $t\in \mathbb{R}$. A probability measure $\nu$ on $X$ is invariant by the flow if $(\psi_t)_*\nu=\nu$ for every $t\in \mathbb{R}$. Denote by $\mathcal{M}(\psi)$ the set of all flow-invariant probability measures on $X$. A measure $\nu\in \mathcal{M}(\psi)$ is ergodic if for every flow-invariant Borel set $Z\subset X$, we have either $\nu(Z)=0$ or $\nu(Z)=1$. Let $Z\subset X$ be a flow-invariant Borel set and $\nu$ be a flow-invariant measure supported on $Z$. We define the metric entropy $h_{\nu}(\psi,Z)$ of $\nu$ with respect to the flow $\psi$ as the metric entropy $h_{\nu}(\psi_1,Z)$ with respect to its time-1 map $\psi_1$ [@walt00]. For $Z=X$ we write $h_{\nu}(\psi)$. When $Z$ is also compact, we define the topological entropy of $Z$ as follows. For every $\epsilon,T>0$ and every $x\in Z$, we define the $(T,\epsilon)$-dynamical balls by $$\label{dyna} B(x,\epsilon,T)=\{ y\in Z:d(\psi_s(x),\psi_s(y))<\epsilon,s\in [0,T] \}.$$ Denote by $M(T,\epsilon,Z)$ the minimum cardinality of a cover of $Z$ by $(T,\epsilon)$-dynamical balls. The topological entropy of $Z$ with respect to $\psi$ is $$h(\psi,Z)=\lim_{\epsilon\to 0}\limsup_{T\to \infty} \frac{1}{T} \log M(T,\epsilon,Z).$$ For $Z=X$ we write $h(\psi)$. We remark that $h(\psi,Z)=h(\psi_1,Z)$ where $h(\psi_1,Z)$ is the topological entropy of $Z$ with respect to the time-1 map $\psi_1$.
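For orientation, these quantities can be made explicit in the constant-curvature model. The following is a standard fact, stated here only as an illustration and not used in the sequel:

```latex
% Model case (constant curvature -1), for illustration only.
% For the geodesic flow \phi_t of a compact hyperbolic surface,
% orbit segments of length T are separated at scale \epsilon by
% their lifts to \mathbb{H}^2, so the covering number
% M(T,\epsilon,T_1M) is comparable to the volume of a ball of
% radius T in \mathbb{H}^2, which grows like e^{T}. Hence
\[
  h(\phi) \;=\; \lim_{\epsilon\to 0}\,\limsup_{T\to \infty}\,
  \frac{1}{T}\,\log M(T,\epsilon,T_1M) \;=\; 1 .
\]
```

In the variable-curvature setting of this paper no such closed formula is available, which is one reason the abstract entropy theory of the factor flow is needed.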
For each flow-invariant compact set $Z\subset X$, the variational principle [@dina71] says $$\label{varcon} h(\psi,Z)= \sup_{\nu} h_{\nu}(\psi,Z),$$ where $\nu$ varies over all flow-invariant measures supported on $Z$. We say that $\mu\in \mathcal{M}(\psi)$ supported on $Z$ is a measure of maximal entropy if $h_{\mu}(\psi,Z)$ achieves the maximum in [\[varcon\]](#varcon){reference-type="eqref" reference="varcon"}. If $Z=X$ and $\mu$ is the only measure satisfying this condition then $\mu$ is the unique measure of maximal entropy for the flow $\psi$. # The quotient flow {#s1} In what follows we will assume that $(M,g)$ is a compact surface without conjugate points of genus greater than one. The main idea of the section is to build a factor flow of the geodesic flow of $(M,g)$. We follow the constructions of Gelfert and Ruggiero [@gelf19]. The properties given in Subsection [2.3](#special){reference-type="ref" reference="special"} suggest that an equivalence relation identifying each curve $\mathcal{I}(\theta)$ with a single point $[\theta]$ will do the job. Two points $\theta, \eta \in T_1M$ are equivalent, $\theta \sim \eta$, if and only if $\eta\in \mathcal{F}^s(\theta)$ and for every $\tilde{\theta},\tilde{\eta}\in T_1\tilde{M}$ lifts of $\theta,\eta$ with $\tilde{\eta}\in \mathcal{\tilde{F}}^s(\tilde{\theta})$, it holds that $\gamma_{\tilde{\theta}}$ and $\gamma_{\tilde{\eta}}$ are bi-asymptotic. **Lemma 1**. *For every $\eta,\theta\in T_1M$, $\eta\sim\theta$ if and only if $\eta\in \mathcal{I}(\theta)$.* *Proof.* If $\eta\sim\theta$ then there exist lifts $\tilde{\eta},\tilde{\theta}$ of $\eta,\theta$ such that $\tilde{\eta}\in\mathcal{\tilde{F}}^s(\tilde{\theta})$ and $\gamma_{\tilde{\eta}},\gamma_{\tilde{\theta}}$ are bi-asymptotic. Corollary [Corollary 1](#key1){reference-type="ref" reference="key1"} says that $\tilde{\eta}\in \mathcal{\tilde{I}}(\tilde{\theta})$ and projecting by $d\pi$ we get $\eta\in \mathcal{I}(\theta)$.
The reverse implication is straightforward. ◻ This lemma and the properties of the horospherical leaves guarantee the above relation is an equivalence relation. This relation induces a quotient space $X$ and a quotient map $$\chi:T_1M\to X \qquad \theta\mapsto \chi(\theta)=[\theta],$$ where $[\theta]$ is the equivalence class of $\theta$. The geodesic flow and the quotient map induce a quotient flow $$\psi:\mathbb{R}\times X\to X \qquad (t,[\theta])\mapsto \psi_t[\theta]=[\phi_t(\theta)]=\chi \circ \phi_t(\theta).$$ We endow $X$ with the quotient topology. We state below some properties of these new objects. **Lemma 2**. *Let $M$ be a compact surface without conjugate points of genus greater than one. Then,* 1. *The quotient space $X$ is a compact space.* 2. *The quotient map $\chi$ is a time-preserving semi-conjugacy hence $\psi_t$ is a time-preserving factor of $\phi_t$.* *Proof.* For item 1, from the definition of the quotient topology we see that $\chi$ is a continuous surjection. Since $T_1M$ is compact so is $X$. For item 2, let us first show that $\psi_t$ is well-defined. Let $[\eta]\in X$ and $\xi\in T_1M$ be any representative of the equivalence class of $\eta$. So, we have $\xi\sim \eta$ hence $\xi\in \mathcal{I}(\eta)=\mathcal{F}^s(\eta)\cap \mathcal{F}^u(\eta)$ by Lemma [Lemma 1](#char1){reference-type="ref" reference="char1"}. By the invariance of the horospherical foliations it follows that $\phi_t(\xi)\in \mathcal{F}^s(\phi_t(\eta))\cap \mathcal{F}^u(\phi_t(\eta))=\mathcal{I}(\phi_t(\eta))$ for every $t\in\mathbb{R}$. This means that $\phi_t(\xi)\sim \phi_t(\eta)$ and hence $\psi_t[\xi]=[\phi_t(\xi)]=[\phi_t(\eta)]=\psi_t[\eta]$. Since $\chi$ is a continuous surjection, the item follows from the definition of the quotient flow. ◻ The following concept is useful for defining certain open sets of the quotient topology. **Definition 2**. Let $A$ be a subset of $T_1M$. 
We say that $A$ is saturated with respect to $\chi$, or simply saturated, if $\chi^{-1}\circ \chi(A)=A$. **Lemma 3**. *Let $M$ be a compact surface without conjugate points of genus greater than one. Then,* 1. *For every $\eta \in T_1M$, $\chi^{-1}\circ \chi (\eta)=\mathcal{I}(\eta)$.* 2. *For every open saturated set $U\subset T_1M$, $\chi(U)$ is an open set in $X$.* *Proof.* For item 1, $\chi^{-1}\circ \chi(\eta)=\chi^{-1}[\eta]$ implies that $\chi^{-1}\circ \chi(\eta)$ is the equivalence class of $\eta$ seen as a subset of $T_1M$. By Lemma [Lemma 1](#char1){reference-type="ref" reference="char1"}, this equivalence class agrees exactly with $\mathcal{I}(\eta)$. For item 2, by definition $\chi(U)$ is open in the quotient topology if $\chi^{-1}(\chi(U))$ is open in $T_1M$. The result follows since $\chi^{-1}(\chi(U))=U$. ◻ We extend the above construction to the geodesic flow $\tilde{\phi}_t$ of $(\tilde{M}, \tilde{g})$. Following Lemma [Lemma 1](#char1){reference-type="ref" reference="char1"}, we say that $\eta,\theta\in T_1\tilde{M}$ are equivalent if and only if $\eta\in \mathcal{\tilde{I}}(\theta)$. As before, the equivalence relation analogously induces a quotient space $\tilde{X}$, a quotient map $\tilde{\chi}$ and a quotient flow $\tilde{\psi}_t$. In a similar way, $\tilde{\chi}$ is a time-preserving semi-conjugacy between $\tilde{\phi}_t$ and $\tilde{\psi}_t$. Moreover, since $T_1\tilde{M}$ is a covering space of $T_1M$ with covering map $d\pi$, we see that $\tilde{X}$ is also a covering space of $X$ with a covering map $\Pi$ induced by $d\pi$: $$\label{cove} \Pi:\tilde{X}\to X, \qquad [\Tilde{\eta}]\mapsto \Pi[\Tilde{\eta}]=\chi\circ d\pi(\Tilde{\eta}).$$ It is not hard to show that $\Pi$ is a well-defined covering map using the map $d\pi$. # Topological properties of the quotient flow {#s3} In this section we build a special basis of neighborhoods of the quotient topology of $X$.
We extend a construction made by Gelfert and Ruggiero for the case of compact higher genus surfaces without focal points (Section 4 of [@gelf19]). As an application, we show that $X$ is a compact metrizable space. We highlight that for every $\eta\in T_1M$, the set $\mathcal{F}^s(\eta)$ ($\mathcal{F}^u(\eta)$) is the orbit of a continuous complete flow: the stable (unstable) horocycle flow. The same holds for $\mathcal{\tilde{F}}^s(\tilde{\eta}),\mathcal{\tilde{F}}^u(\tilde{\eta})$ for every lift $\tilde{\eta}\in T_1\tilde{M}$ of $\eta$. Each of such sets is a Lipschitz embedded curve and can be parametrized by arc-length $c^s_{\eta}: \mathbb{R}\to \mathcal{F}^s(\eta), c^u_{\eta}: \mathbb{R}\to \mathcal{F}^u(\eta),\tilde{c}^s_{\tilde{\eta}}: \mathbb{R}\to \mathcal{\tilde{F}}^s(\tilde{\eta})$ and $\tilde{c}^u_{\tilde{\eta}}: \mathbb{R}\to \mathcal{\tilde{F}}^u(\tilde{\eta})$. In particular, each connected subset of $\mathcal{F}^{s,u}(\eta),\mathcal{\tilde{F}}^{s,u}(\tilde{\eta})$ is homeomorphic to an interval. The construction of a basis for the quotient space requires a better understanding of the set of expansive points $\mathcal{R}_0\subset T_1M$. Assuming no focal points or continuous Green bundles, Gelfert and Ruggiero showed that $\mathcal{R}_0$ is open and dense. This might no longer be the case if we drop these hypotheses. Let us start with the following elementary lemma. **Lemma 4**. *A nondegenerate real interval cannot be written as the union of at least two and at most countably many pairwise disjoint nonempty closed intervals.* **Lemma 5**. *Let $\theta\in T_1\Tilde{M}$ be a non-expansive point and $\mathcal{\tilde{I}}(\theta)$ be its non-trivial class. Then the boundary points of $\mathcal{\tilde{I}}(\theta)$ are accumulated by expansive points.* *Proof.* Let $c:\mathbb{R}\to \mathcal{\tilde{F}}^s(\theta)$ be an arc-length parametrization of $\mathcal{\tilde{F}}^s(\theta)$. Let $a,b\in \mathbb{R}$ with $a<b$ such that $\mathcal{\tilde{I}}(\theta)=c([a,b])$. Suppose that the boundary point $c(b)$ is not accumulated by expansive points.
Then, there exists $\delta>0$ such that $c((b,b+\delta))$ contains no expansive points. So, $[b,b+\delta]$ is covered by the traces of pairwise disjoint classes: the trace $\{b\}$ of $\mathcal{\tilde{I}}(\theta)$ and nondegenerate closed intervals, which are at most countably many. By Lemma [Lemma 4](#int){reference-type="ref" reference="int"}, $[b,b+\delta]$ is contained in a single class; since this class contains $c(b)$, it equals $\mathcal{\tilde{I}}(\theta)$, hence $c([a,b+\delta])\subset \mathcal{\tilde{I}}(\theta)=c([a,b])$, a contradiction. ◻ Next, for each $\theta \in T_1\tilde{M}$, let us define a family of open neighborhoods $A_i$ of $\mathcal{\tilde{I}}(\theta)$ such that $\tilde{\chi}(A_i)$ are open neighborhoods of $\tilde{\chi}(\theta)$ as well. For every $\delta,\epsilon>0$, there exist $a,b\in \mathbb{R}$ and a map $$R: (a-\epsilon, b+\epsilon)\times (-\delta,\delta)\to T_1\tilde{M}, \qquad \text{satisfying the conditions: }$$ 1. $(0,s)\mapsto R(0,s)$ is the arc-length parametrization of a $\delta$-neighborhood of $\theta$ in $V(\theta)$ with respect to the Sasaki metric, where $V(\theta)$ is the vertical subspace of $\theta$. 2. For each $s_0\in (-\delta,\delta)$, $(r,s_0)\mapsto R(r,s_0)$ is the arc-length parametrization of the continuous curve $R(\cdot,s_0)$ in $\mathcal{\tilde{F}}^s(R(0,s_0))$. 3. $R([a,b],0)=\mathcal{\tilde{I}}(\theta)$, $R(0,0)=\theta$ and $(r,0)\mapsto R(r,0)$ is the arc-length parametrization of an $\epsilon$-neighborhood of $\mathcal{\tilde{I}}(\theta)$ in $\mathcal{\tilde{F}}^s(\theta)$. The image of $R$ is denoted by $\Sigma= \Sigma(\theta,\epsilon,\delta)= R((a-\epsilon, b+\epsilon)\times (-\delta,\delta))$. The continuity of the horospherical foliations ensures that $R$ is a homeomorphism and $\Sigma$ is a 2-dimensional section containing $\mathcal{\tilde{I}}(\theta)$. Note that $\Sigma$ is foliated by stable horospherical leaves of points in $V(\theta)$. Since these leaves are topologically transverse to the geodesic flow, $\Sigma$ is a cross section.
For $\tau>0$, Brouwer's open mapping theorem gives the following open 3-dimensional neighborhood of $\mathcal{\tilde{I}}(\theta)$: $$B=B(\theta,\epsilon,\delta,\tau)= \bigcup_{|t|<\tau}\tilde{\phi}_t(\Sigma).$$ We next define a projection map $Pr:B\to \Sigma$. For every $\eta\in B$, $Pr(\eta)$ is the projection of $\eta$ along the geodesic flow $\tilde{\phi}_t$. From the properties of $\tilde{\phi}_t$, we see that $Pr$ is a continuous surjection. For every $\eta\in \Sigma$, we define the stable (unstable) intervals and their intersection $$W^s(\eta)=\mathcal{\tilde{F}}^s(\eta)\cap \Sigma, \quad W^u(\eta)=Pr(\mathcal{\tilde{F}}^u(\eta)\cap B) \quad \text{ and }\quad [\xi,\eta]=W^s(\xi)\cap W^u(\eta).$$ Since $\Sigma$ is foliated by the stable horospherical foliation $\mathcal{\Tilde{F}}^s$, the intervals $W^s$ are exactly the traces of $\mathcal{\Tilde{F}}^s$ on $\Sigma$, while $W^u$ is not the trace of the unstable horospherical foliation $\mathcal{\Tilde{F}}^u$, because the geodesic flow cannot have a local section that is foliated by both $\mathcal{\Tilde{F}}^s$ and $\mathcal{\Tilde{F}}^u$. To build a new cross section, we choose four expansive points $\theta_1, \theta_2, \eta_1,\eta_2\in\Sigma$. Since $\mathcal{\tilde{I}}(\theta)$ is non-trivial, Lemma [Lemma 5](#que2){reference-type="ref" reference="que2"} says that for $\epsilon>0$ there exist $c\in (a-\epsilon,a)$ and $d\in (b,b+\epsilon)$ such that $\theta_1=R(c,0)$ and $\theta_2=R(d,0)$ are expansive points of $W^s(\theta)$. This also implies that $$\label{inclu} \mathcal{\tilde{I}}(\theta)=R([a,b],0)\subset R([c,d],0).$$ We define the upper and lower regions of $\Sigma$ by $$\Sigma_+=\{ R(r,s): r\in (a-\epsilon,b+\epsilon), s>0\} \text{ and } \Sigma_-=\{ R(r,s): r\in (a-\epsilon,b+\epsilon), s<0\}.$$ Pick expansive points $\eta_1\in W^u(\theta_1)\cap\Sigma_+$ and $\eta_2\in W^u(\theta_2)\cap\Sigma_-$.
The new cross section $U= U(\theta,\epsilon, \delta,\theta_1, \theta_2, \eta_1,\eta_2)\subset \Sigma$ is the open 2-dimensional region in $\Sigma$ bounded by $W^u(\theta_1)$, $W^u(\theta_2)$, $W^s(\eta_1)$ and $W^s(\eta_2)$. **Lemma 6**. *The open cross section $U\subset \Sigma$ is a saturated set containing $\mathcal{\tilde{I}}(\theta)$.* *Proof.* From relation [\[inclu\]](#inclu){reference-type="eqref" reference="inclu"} we see that $\mathcal{\tilde{I}}(\theta)$ is surrounded by $\theta_1,\theta_2$ in $W^s(\theta)$, hence it is included in $U$. Now, suppose by contradiction that there exists a non-expansive $\eta \in U$ such that $\mathcal{\tilde{I}}(\eta)$ is not included in $U$. This implies that $\mathcal{\tilde{I}}(\eta)$ intersects the boundary of $U$ at some $\xi$. Since $\eta\in \mathcal{\tilde{I}}(\xi)$ it follows that $\eta\in W^s(\xi)\cap W^u(\xi)$. The point $\xi$ lies on one of the four boundary curves of $U$; if $\xi\in W^u(\theta_i)$ then $W^u(\xi)=W^u(\theta_i)$, and if $\xi\in W^s(\eta_i)$ then $W^s(\xi)=W^s(\eta_i)$. In either case $\eta$ belongs to the boundary of $U$, a contradiction. ◻ As above, for $\tau>0$ Brouwer's open mapping theorem says that $$A=A(\theta,\epsilon, \delta,\tau, \theta_1, \theta_2, \eta_1,\eta_2)= \bigcup_{|t|<\tau}\tilde{\phi}_t(U)$$ is an open 3-dimensional neighborhood of $\mathcal{\tilde{I}}(\theta)$. Since $U$ is saturated, so is $A$. Thus, for every $\theta\in T_1\tilde{M}$, we have built a family $$\{ A(\theta,\epsilon, \delta,\tau, \theta_1, \theta_2, \eta_1,\eta_2): \epsilon, \delta,\tau>0, \theta_1, \theta_2\in W^s(\theta), \eta_1\in W^u(\theta_1),\eta_2\in W^u(\theta_2) \}$$ of saturated neighborhoods of $\mathcal{\tilde{I}}(\theta)$. Consider the corresponding family of quotient sets $$\{ \tilde{\chi}( A(\theta,\epsilon, \delta,\tau, \theta_1, \theta_2, \eta_1,\eta_2)): \epsilon, \delta,\tau>0, \theta_1, \theta_2\in W^s(\theta), \eta_1\in W^u(\theta_1),\eta_2\in W^u(\theta_2) \}.$$ **Lemma 7**.
*For every $\theta\in T_1\tilde{M}$, the family $$\mathcal{A}_{\theta}= \{ \tilde{\chi}( A(\theta,\epsilon_l, \delta_m,\tau_n)): \epsilon_l=1/l,\delta_m=1/m,\tau_n=1/n \text{ with } l,m,n\in \mathbb{N} \}$$ is a countable basis of neighborhoods of $[\theta]\in \tilde{X}$. Hence $\tilde{X}$ is first countable and $\{ \mathcal{A}_{\theta}: \theta\in T_1\tilde{M} \}$ is a basis for the quotient topology of $\tilde{X}$.* *Proof.* For every $\theta\in T_1\tilde{M}$, we know that $A=A(\theta,\epsilon, \delta,\tau, \theta_1, \theta_2, \eta_1,\eta_2)$ is an open neighborhood of $\mathcal{\tilde{I}}(\theta)$. Since $A$ is saturated it follows that $\tilde{\chi}^{-1}\circ \tilde{\chi}(A)=A$. We see that $\tilde{\chi}(A)\subset \tilde{X}$ is an open set containing $[\theta]$ and $\{ \tilde{\chi}( A(\theta,\epsilon, \delta,\tau, \theta_1, \theta_2, \eta_1,\eta_2))\}$ is a family of open neighborhoods of $[\theta]\in \tilde{X}$. Choosing $\epsilon, \delta, \tau>0$ small enough, every neighborhood $V$ of $\mathcal{\tilde{I}}(\theta)$ contains some $A(\theta,\epsilon, \delta,\tau, \theta_1, \theta_2, \eta_1,\eta_2)$. Given an open set $O\subset \tilde{X}$ containing $[\theta]$, $\tilde{\chi}^{-1}(O)$ is an open neighborhood of $\mathcal{\tilde{I}}(\theta)$. So, there exists $A(\theta,\epsilon, \delta,\tau, \theta_1, \theta_2, \eta_1,\eta_2)\subset \tilde{\chi}^{-1}(O)$ and hence $\tilde{\chi}(A(\theta,\epsilon, \delta,\tau, \theta_1, \theta_2, \eta_1,\eta_2))\subset O$. Therefore the collection $\{ \tilde{\chi}( A(\theta,\epsilon, \delta,\tau, \theta_1, \theta_2, \eta_1,\eta_2)) \}$ is a basis of neighborhoods of $[\theta]\in \tilde{X}$. This property is not affected by the specific choices of the points $\theta_1,\theta_2,\eta_1,\eta_2\in \Sigma$, but only by the parameters $\epsilon,\delta,\tau>0$.
Choosing $\epsilon_l=1/l,\delta_m=1/m ,\tau_n=1/n$ with $l,m,n\in \mathbb{N}$, we still have that $\mathcal{A}_{\theta}=\{\tilde{\chi}( A(\theta,\epsilon_l, \delta_m,\tau_n)): l,m,n\in \mathbb{N} \}$ is a basis of neighborhoods of $[\theta]$. So, $\mathcal{A}_{\theta}$ is a countable basis of neighborhoods of $[\theta]$. ◻ This basis is important because it provides an explicit description of the basic open sets of the quotient topology. So far, we have a family of neighborhoods for every $\mathcal{\tilde{I}}(\theta)\subset T_1\tilde{M}$ and a basis of neighborhoods of $[\theta]\in \tilde{X}$. From Equation [\[cove\]](#cove){reference-type="eqref" reference="cove"} in Section [3](#s1){reference-type="ref" reference="s1"}, we recall that $d\pi:T_1\Tilde{M}\to T_1M$ and $\Pi:\Tilde{X}\to X$ are covering maps. Projecting the above families of open neighborhoods by $d\pi$ and $\Pi$ respectively, we get corresponding families of open neighborhoods for every $\mathcal{I}(\theta)\subset T_1M$ and every $[\theta]\in X$. Thus, $X$ is first countable and $\{ \Pi(\mathcal{A}_{\theta}): \theta\in T_1\tilde{M} \}$ is a basis for the quotient topology of $X$. To show the metrizability of $X$ we recall a basic topological result [@will12]. **Proposition 4**. *If $f:X\to Y$ is a continuous surjection from a compact metric space onto a Hausdorff space, then $Y$ is metrizable.* **Lemma 8**. *Let $M$ be a compact surface without conjugate points and genus greater than one. Then, the quotient space $X$ is a compact metrizable space.* *Proof.* Since $\chi$ is a continuous surjection, $X$ is compact. We next show that $X$ is Hausdorff. Choose two distinct points $[\theta],[\eta]\in X$ and suppose first that $\mathcal{F}^s(\theta)\cap \mathcal{F}^s(\eta)=\emptyset$. Choosing $\delta$ small enough in Lemma [Lemma 7](#que1){reference-type="ref" reference="que1"}, we can build disjoint basic open sets because $\mathcal{F}^s$ is a foliation of $T_1M$.
We now suppose that $\mathcal{F}^s(\theta)\cap \mathcal{F}^s(\eta)\neq\emptyset$, hence $\mathcal{F}^s(\theta)= \mathcal{F}^s(\eta)$. Choosing $\epsilon$ small enough in Lemma [Lemma 7](#que1){reference-type="ref" reference="que1"}, the basic open sets of $\theta$ and $\eta$ are disjoint because $\mathcal{F}^u$ is a foliation of $T_1M$. The result follows from applying Proposition [Proposition 4](#metri){reference-type="ref" reference="metri"} to our case. ◻ So, we choose some distance on $X$ that is compatible with the quotient topology. We denote it by $d$ and call it the quotient distance.

# Topological dimension of the quotient space

This section is devoted to showing that the topological dimension of the quotient space is at least two. We first recall the definition of topological dimension [@munk00]. Let $X$ be a topological space and $\mathcal{U}$ be an open cover of $X$. The order of $\mathcal{U}$ is the smallest $n\in \mathbb{N}$ such that every $x\in X$ belongs to at most $n$ sets of $\mathcal{U}$. An open refinement of $\mathcal{U}$ is another open cover, each of whose sets is a subset of a set in $\mathcal{U}$. The topological dimension of $X$ is the minimum $n$ such that every open cover $\mathcal{U}$ has an open refinement of order $n+1$ or less. Standard examples are the open sets of $\mathbb{R}^n$: every nonempty open set $U\subset \mathbb{R}^n$ has topological dimension $n$. **Theorem 7** ([@will12]). *Let $f:X\to Y$ be a homeomorphism between topological spaces. Then, the topological dimensions of $X$ and $Y$ are equal.* Let $X$ be a topological space, $f:X\to \mathbb{R}^2$ be a continuous map and $U\subset X$ be an open set. We say that $U$ is a topological surface if the restriction of $f$ to $U$ is a homeomorphism onto an open subset of $\mathbb{R}^2$. Theorem [Theorem 7](#will){reference-type="ref" reference="will"} implies that every topological surface has topological dimension 2.
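As a simple standard illustration of the order of a cover (background material, not specific to our setting), consider the open cover of the real line $$\mathcal{U}=\Bigl\{\bigl(n-\tfrac{2}{3},\,n+\tfrac{2}{3}\bigr):n\in\mathbb{Z}\Bigr\}.$$ Every $x\in\mathbb{R}$ belongs to at most two of these intervals, so $\mathcal{U}$ has order $2$. Every open cover of $\mathbb{R}$ admits a refinement of this form, which shows that the topological dimension of $\mathbb{R}$ is at most $1$; it cannot be $0$ because an open cover of order $1$ by at least two nonempty sets would disconnect $\mathbb{R}$.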
Let $\tilde{X}$ and $X$ be the quotient spaces defined in Section [3](#s1){reference-type="ref" reference="s1"}. To estimate their topological dimensions we find a topological surface passing through every point. **Lemma 9**. *Let $M$ be a compact surface without conjugate points and genus greater than one. Then, for every $[\theta]\in \tilde{X}$ there exists a topological surface $S_{[\theta]}$ containing $[\theta]$. In particular, $\tilde{X}$ and $X$ have topological dimension at least two.* *Proof.* Let $\theta \in T_1\tilde{M}$ and $V_{\theta}$ be the vertical fiber of $\theta$. Using the geodesic flow we define the set $$W_{\theta}=\bigcup_{t\in \mathbb{R}}\tilde{\phi}_t(V_{\theta})\subset T_1\tilde{M}.$$ Since $V_{\theta}$ is homeomorphic to the circle $S^1$, $W_{\theta}$ is homeomorphic to a cylinder, hence $W_{\theta}$ is a topological surface. The divergence of geodesic rays guarantees that for every pair of distinct $\eta,\xi\in V_{\theta}$ we have $\eta\not\in \mathcal{\tilde{I}}(\xi)$. So, the restriction of $\tilde{\chi}$ to $W_{\theta}$ is injective, hence bijective onto its image. This implies that $\tilde{\chi}(W_{\theta})\subset \tilde{X}$ is homeomorphic to a cylinder. Theorem [Theorem 7](#will){reference-type="ref" reference="will"} then implies that the topological dimension of $\tilde{X}$ is at least two. This conclusion extends to $X$ because $\tilde{X}$ and $X$ are locally homeomorphic. ◻ In [@gelf19; @gelf20], Gelfert and Ruggiero showed that $X$ and $\tilde{X}$ are topological 3-manifolds in two cases: compact surfaces without focal points of genus greater than one, and compact surfaces without conjugate points of genus greater than one with continuous Green bundles. It is not known whether the dimension of the quotient space is 3 without assuming either of these two conditions.
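To make the cylinder structure in the proof of Lemma 9 explicit, one can sketch the parametrization as follows (a routine verification, with the injectivity supplied by the divergence of geodesic rays): the flow map $$\Phi:\mathbb{R}\times V_{\theta}\to W_{\theta},\qquad \Phi(t,\eta)=\tilde{\phi}_t(\eta),$$ is a continuous bijection, and $\mathbb{R}\times V_{\theta}\cong \mathbb{R}\times S^1$ is an open cylinder; composing with $\tilde{\chi}$ then parametrizes the topological surface $\tilde{\chi}(W_{\theta})$.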
# Topological dynamics of the quotient flow {#s4}

Section [3](#s1){reference-type="ref" reference="s1"} defines a quotient model: a continuous quotient flow $\psi_t:X\to X$ time-preserving semi-conjugate to the geodesic flow $\phi_t$. Our goal is to show that $\psi_t$ has typical properties of hyperbolic topological dynamics such as expansivity, local product structure and topological mixing. Notice that the geodesic flow $\phi_t$ may not be expansive due to the presence of non-trivial strips. **Theorem 8**. *Let $M$ be a compact surface without conjugate points of genus greater than one and $\psi_t:X\to X$ be the quotient flow. Then, $\psi_t$ is topologically mixing, expansive and has a local product structure. Moreover, $\psi_t$ has the pseudo-orbit tracing and specification properties.* We will prove Theorem [Theorem 8](#propd){reference-type="ref" reference="propd"} in several steps. Since the geodesic flow $\phi_t$ is topologically mixing (Theorem [Theorem 6](#v1){reference-type="ref" reference="v1"}) and $\chi$ is a continuous time-preserving semi-conjugacy, we deduce that $\psi_t$ is also topologically mixing. Recall that for every $\eta\in T_1M$, Lemma [Lemma 7](#que1){reference-type="ref" reference="que1"} gives a relationship between neighborhoods of $[\eta]\in X$ and special neighborhoods of $\mathcal{I}(\eta)$. We use these basic open sets to get a relationship between the Sasaki distance $d_s$ (Equation [\[sasa\]](#sasa){reference-type="eqref" reference="sasa"}) and the quotient distance $d$. **Lemma 10**. *Let $Q>0$ be the Morse constant given in Theorem [Theorem 4](#morse){reference-type="ref" reference="morse"}.
Then, there exist $r_0,s_0>0$ such that for every $[\xi],[\eta]\in X$ with $d([\xi],[\eta])\leq r_0$ we have $$d_s(\tilde{\xi},\tilde{\eta})\leq Q+s_0,$$ for some lifts $\tilde{\xi},\tilde{\eta}\in T_1\tilde{M}$ of $\xi,\eta\in T_1M$.* *Proof.* We consider the basic open sets $A(\theta,\epsilon,\delta,\tau)$ provided by Lemma [Lemma 7](#que1){reference-type="ref" reference="que1"}. For every $\theta\in T_1M$, choose $\epsilon,\delta,\tau>0$ small enough so that $A(\theta,\epsilon,\delta,\tau)$ is evenly covered by $d\pi$. Clearly, the family $\mathcal{A}=\{ \chi(A(\theta,\epsilon,\delta,\tau)): \theta \in T_1M\}$ is an open cover of the compact space $X$. Let $r_0>0$ be a Lebesgue number of $\mathcal{A}$. Thus, for every $[\eta],[\xi]\in X$ with $d([\eta],[\xi])\leq r_0$, there exists $\theta\in T_1M$ such that $$[\xi]\in B([\eta],r_0)\subset \chi(A(\theta,\epsilon,\delta,\tau))\in \mathcal{A},$$ where $B([\eta],r_0)$ is the closed ball of radius $r_0$ centered at $[\eta]$. Since $A(\theta,\epsilon,\delta,\tau)$ is evenly covered by $d\pi$, for every lift $\tilde{\theta}$ of $\theta$ there exist lifts $\tilde{A}(\tilde{\theta},\epsilon,\delta,\tau)$ and $\tilde{\eta},\tilde{\xi}\in \tilde{A}(\tilde{\theta},\epsilon,\delta,\tau)$ of $A(\theta,\epsilon,\delta,\tau),\eta,\xi$ respectively. As $\epsilon, \delta, \tau$ are small enough, there exists $s_0>0$ so that $\mathrm{diam}(\tilde{A}(\tilde{\theta},\epsilon,\delta,\tau))\leq Q+s_0$, hence $d_s(\tilde{\eta},\tilde{\xi})\leq Q+s_0$. ◻ **Lemma 11**.
*For every $\epsilon>0$ there exists $\delta>0$ such that if $[\xi]\in X$ satisfies $d([\xi],\psi_{\tau}[\xi])\leq \delta$ for some $\tau\in \mathbb{R}$, then $|\tau|\leq \epsilon$.* *Proof.* By contradiction, suppose there exist $\epsilon_0>0$ and sequences $[\xi_n]\in X, \tau_n\in \mathbb{R}$ such that for every $n\geq 1$ $$\label{est} d([\xi_n],\psi_{\tau_n}[\xi_n])\leq \frac{1}{n} \quad \text{ and }\quad |\tau_n|\geq \epsilon_0.$$ Up to subsequences, we can assume that $\tau_n\to T$ and $[\xi_n]\to [\xi]$. Since $\psi_t$ is continuous, $\psi_{\tau_n}[\xi_n]\to \psi_T[\xi]$. On the other hand, inequalities [\[est\]](#est){reference-type="eqref" reference="est"} imply that $[\xi_n]$ and $\psi_{\tau_n}[\xi_n]$ converge to the same limit, so $[\xi]=\psi_T[\xi]$. This holds if and only if $T=0$. Thus $\tau_n\to 0$, which contradicts inequalities [\[est\]](#est){reference-type="eqref" reference="est"}. ◻ **Lemma 12**. *The quotient flow $\psi_t$ is expansive.* *Proof.* Let $r_0>0$ be given by Lemma [Lemma 10](#lev){reference-type="ref" reference="lev"}. We first show that if two orbits of $\psi_t$ have Hausdorff distance bounded by $r_0$, then the orbits agree. Let $[\eta],[\xi]\in X$ with $d(\psi_t[\eta],\psi_{\rho(t)}[\xi])\leq r_0$ for every $t\in \mathbb{R}$ and some reparametrization $\rho$. By Lemma [Lemma 10](#lev){reference-type="ref" reference="lev"}, there exist lifts $\tilde{\eta},\tilde{\xi}\in T_1\tilde{M}$ of $\eta,\xi$ such that $$d_s(\phi_t(\tilde{\eta}),\phi_{\rho(t)}(\tilde{\xi}))\leq Q+s_0, \text{ for every } t\in \mathbb{R}.$$ Thus, the orbits of $\tilde{\eta}$ and $\tilde{\xi}$ have Hausdorff distance bounded by $Q+s_0$, hence the orbits are bi-asymptotic. This implies that there exists $\tau\in \mathbb{R}$ so that $\tilde{\xi}\in \mathcal{\tilde{I}}(\phi_{\tau}(\tilde{\eta}))$, hence $[\xi]=\psi_{\tau}[\eta]$. Given $\epsilon>0$, Lemma [Lemma 11](#est1){reference-type="ref" reference="est1"} yields $\delta_1>0$ satisfying its statement.
Let $\delta=\min(\delta_1,r_0)$. If the orbits of $[\eta]$ and $[\xi]$ have Hausdorff distance bounded by $\delta\leq r_0$, then $[\xi]=\psi_{\tau}[\eta]$ and $|\tau|\leq \epsilon$ because $d([\eta],\psi_{\tau}[\eta])=d([\eta],[\xi])\leq \delta\leq \delta_1$. ◻

## Local product structure

We now deal with the existence of a local product structure. Though $\phi_t$ has no local product structure in general, it has a related property. To look for strong stable and unstable sets we start with the horospherical leaves of the geodesic flow. These leaves enjoy a sort of weak local product structure provided by their heteroclinic connections (Theorem [Theorem 6](#v1){reference-type="ref" reference="v1"}): for every $\eta,\xi\in T_1\tilde{M}$ with $\xi\not\in \mathcal{\tilde{F}}^{cu}(-\eta)$, there exists $\theta\in T_1\tilde{M}$ such that $$\label{heti} \mathcal{\tilde{F}}^s(\eta)\cap \mathcal{\tilde{F}}^{cu}(\xi)=\mathcal{\tilde{I}}(\theta).$$ Though these intersections always exist, the intersection point is generally not unique. This is because $\theta$ may be non-expansive, in which case $\mathcal{\tilde{I}}(\theta)$ would be non-trivial. The definition of $\tilde{\chi}$ strongly suggests that the quotients of the horospherical leaves are natural candidates for the strong stable and unstable sets of $\psi_t$. For every $\eta\in T_1M$, we define $$V^s[\eta]=\chi(\mathcal{F}^s(\eta)), \qquad V^u[\eta]=\chi(\mathcal{F}^u(\eta)),$$ $$V^{cs}[\eta]=\chi(\mathcal{F}^{cs}(\eta)) \quad\text{ and }\quad V^{cu}[\eta]=\chi(\mathcal{F}^{cu}(\eta)).$$ We consider some connected components of $V^s[\eta]$ and $V^u[\eta]$. For every $[\eta]\in X$ and every open set $U\subset X$ containing $[\eta]$, we denote by $V^s[\eta]\cap U_c$ the connected component of $V^s[\eta]\cap U$ containing $[\eta]$. Similarly, we write $V^u[\eta]\cap U_c$, $V^{cs}[\eta]\cap U_c$ and $V^{cu}[\eta]\cap U_c$. Let $[\eta],[\xi]\in X$ be close enough.
If there exists an open set $U\subset X$ with $[\eta],[\xi]\in U$ such that $V^s[\eta]\cap U_c$ and $V^{cu}[\xi]\cap U_c$ intersect, then we define $$\label{intev} V^s[\eta]\cap V^{cu}[\xi]=\left(V^s[\eta]\cap U_c\right)\cap \left(V^{cu}[\xi]\cap U_c\right).$$ The heteroclinic connections [\[heti\]](#heti){reference-type="eqref" reference="heti"} provide $\theta\in T_1M$, $\tau\in \mathbb{R}$ and lifts $\tilde{\theta},\tilde{\eta},\tilde{\xi}\in T_1\tilde{M}$ of $\theta,\eta,\xi$ such that $\tilde{\xi}\not\in \mathcal{\tilde{F}}^{cu}(-\tilde{\eta})$ and $$\mathcal{\tilde{F}}^s(\tilde{\eta})\cap \mathcal{\tilde{F}}^{cu}(\tilde{\xi})=\mathcal{\tilde{F}}^s(\tilde{\eta})\cap \mathcal{\tilde{F}}^{u}(\tilde{\phi}_{\tau}(\tilde{\xi}))=\mathcal{\tilde{I}}(\tilde{\theta}).$$ Projecting to the quotient, this gives $$V^s[\eta]\cap V^{cu}[\xi]=V^s[\eta]\cap V^u(\psi_{\tau}[\xi])=\chi\circ d\pi(\mathcal{\tilde{I}}(\tilde{\theta}))=[\mathcal{I}(\theta)]=[\theta].$$ By the definition of the quotient, this intersection is unique whenever it exists. Denote by $B([\xi],r)$ the open ball of radius $r>0$ centered at $[\xi]$. **Lemma 13**. *Let $r_0>0$ be given by Lemma [Lemma 10](#lev){reference-type="ref" reference="lev"}. There cannot exist $\epsilon_0>0$ and sequences $[\eta_n],[\xi_n]\in X$, $t_n\in \mathbb{R}$ such that $t_n\to \infty$ and for every $n\geq 1$, $[\eta_n]\in V^s[\xi_n]$, $[\eta_n]$ belongs to the connected component of $V^s[\xi_n]\cap B([\xi_n],r_0)$ containing $[\xi_n]$, $$\label{sta} d([\eta_n],[\xi_n])\leq r_0 \quad\text{ and }\quad d(\psi_{t_n}[\eta_n],\psi_{t_n}[\xi_n])\geq \epsilon_0.$$ An analogous statement holds for the unstable case.* *Proof.* By contradiction, suppose there exist such objects satisfying inequalities [\[sta\]](#sta){reference-type="eqref" reference="sta"}.
Lemma [Lemma 10](#lev){reference-type="ref" reference="lev"} says that there exist lifts $\tilde{\eta}_n,\tilde{\xi}_n\in T_1\tilde{M}$ of $\eta_n,\xi_n$ such that for every $n\geq 1$, $d_s(\tilde{\eta}_n,\tilde{\xi}_n)\leq Q+s_0$. We claim that for every $n\geq 1$, $\tilde{\eta}_n\in \mathcal{\tilde{F}}^s(\tilde{\xi}_n)$. Otherwise, for some covering isometry $T$, $T(\tilde{\eta}_n)\in \mathcal{\tilde{F}}^s(\tilde{\xi}_n)$. Hence $[\eta_n]=[d\pi(T(\tilde{\eta}_n))]$ does not belong to the connected component of $V^s[\xi_n]\cap B([\xi_n],r_0)$ containing $[\xi_n]$, and the claim is proved. By Proposition [Proposition 3](#quesi){reference-type="ref" reference="quesi"} there exist $A,B>0$ such that for every $n\geq 1$, $$d_s(\phi_t(\tilde{\eta}_n),\phi_t(\tilde{\xi}_n))\leq Ad_s(\tilde{\eta}_n,\tilde{\xi}_n)+B\leq A(Q+s_0)+B=C, \text{ for every }t\geq 0.$$ Hence $$\label{quasic} d_s(\phi_t(\phi_{t_n}\tilde{\eta}_n),\phi_t(\phi_{t_n}\tilde{\xi}_n))\leq C, \text{ for every }t\geq -t_n.$$ Up to subsequences and using covering isometries, we can assume that $$\label{conv1} \phi_{t_n}(\tilde{\eta}_n)\to \tilde{\eta} \quad\text{ and }\quad \phi_{t_n}(\tilde{\xi}_n)\to \tilde{\xi}.$$ Since the horospherical foliations are invariant under the geodesic flow, we get $\phi_{t_n}(\tilde{\eta}_n)\in \mathcal{\tilde{F}}^s(\phi_{t_n}(\tilde{\xi}_n))$. Moreover, the continuity of the horospherical foliations gives $\tilde{\eta}\in \mathcal{\tilde{F}}^s(\tilde{\xi})$. Now, let $t\in \mathbb{R}$. Since $t_n \to \infty$, we see that $-t_n\leq t$ for $n$ large enough, and inequality [\[quasic\]](#quasic){reference-type="eqref" reference="quasic"} yields $d_s(\phi_t(\phi_{t_n}\tilde{\eta}_n),\phi_t(\phi_{t_n}\tilde{\xi}_n))\leq C$. By continuity we obtain $d_s(\phi_t(\tilde{\eta}),\phi_t(\tilde{\xi}))\leq C$ for every $t\in \mathbb{R}$.
As $\tilde{\eta}\in \mathcal{\tilde{F}}^s(\tilde{\xi})$, Corollary [Corollary 1](#key1){reference-type="ref" reference="key1"} shows that $\tilde{\eta}\in \mathcal{\tilde{I}}(\tilde{\xi})$, hence $[\eta]=[\xi]$. Applying the map $\chi\circ d\pi$ to the sequences [\[conv1\]](#conv1){reference-type="eqref" reference="conv1"} we get $$\chi\circ d\pi (\phi_{t_n}(\tilde{\eta}_n))\to \chi\circ d\pi (\tilde{\eta}) \quad\text{ and }\quad \chi\circ d\pi (\phi_{t_n}(\tilde{\xi}_n))\to \chi\circ d\pi (\tilde{\xi}),$$ that is, $$\psi_{t_n}[\eta_n]\to [\eta] \quad\text{ and }\quad \psi_{t_n}[\xi_n]\to [\xi].$$ Thus $d(\psi_{t_n}[\eta_n],\psi_{t_n}[\xi_n])\to 0$ as $n\to \infty$. This contradicts inequalities [\[sta\]](#sta){reference-type="eqref" reference="sta"}. ◻ An intermediate step towards the relationship between the pairs $W^{ss}[\eta]$, $W^{uu}[\eta]$ and $V^s[\eta]$, $V^u[\eta]$ is the so-called uniform contraction property. We prove this contraction for $V^s[\eta]$ and $V^u[\eta]$, but only for distances smaller than $r_0$. **Lemma 14**. *Let $r_0>0$ be given by Lemma [Lemma 10](#lev){reference-type="ref" reference="lev"}.
For every $\epsilon>0$ and $D\in(0,r_0]$ there exists $T>0$ such that if $[\eta]\in V^s[\xi]$, $[\eta]$ belongs to the connected component of $V^s[\xi]\cap B([\xi],r_0)$ containing $[\xi]$ and $d([\eta],[\xi])\leq D$, then $$d(\psi_t[\eta],\psi_t[\xi])\leq \epsilon \quad\text{ for every } t\geq T.$$ An analogous result holds for the unstable case.* *Proof.* By contradiction, suppose there exist $\epsilon_0>0$, $D_0\in (0,r_0]$ and sequences $[\eta_n],[\xi_n]\in X$, $t_n\in \mathbb{R}$, such that $t_n\to \infty$ and for every $n\geq 1$, $[\eta_n]\in V^s[\xi_n]$, $[\eta_n]$ belongs to the connected component of $V^s[\xi_n]\cap B([\xi_n],r_0)$ containing $[\xi_n]$, $$d([\eta_n],[\xi_n])\leq D_0\leq r_0 \quad\text{ and }\quad d(\psi_{t_n}[\eta_n],\psi_{t_n}[\xi_n])\geq \epsilon_0.$$ This contradicts Lemma [Lemma 13](#basico){reference-type="ref" reference="basico"} and proves the statement. ◻ As an immediate consequence, we see that $V^s[\eta]$ and $V^u[\eta]$ agree locally with the strong sets of $\psi_t$ for distances smaller than $r_0$. **Lemma 15**. *Let $r_0>0$ be given by Lemma [Lemma 10](#lev){reference-type="ref" reference="lev"}. If $[\eta]\in V^s[\xi]$, $[\eta]$ belongs to the connected component of $V^s[\xi]\cap B([\xi],r_0)$ containing $[\xi]$ and $d([\eta],[\xi])\leq r_0$, then $$d(\psi_t[\eta],\psi_t[\xi])\to 0 \quad\text{ as }\quad t\to \infty.$$ In particular, $[\eta]\in W^{ss}[\xi]$. An analogous statement holds for the unstable case.* *Proof.* For every $n\geq 1$, set $\epsilon_n=1/n$ and $D=r_0$ in Lemma [Lemma 14](#last){reference-type="ref" reference="last"}. So, there exists a sequence $T_n>0$ such that $d(\psi_t[\eta], \psi_t[\xi])\leq \frac{1}{n}$ for every $t\geq T_n$. This implies that $d(\psi_t[\eta], \psi_t[\xi])\to 0$ as $t\to \infty$. ◻ The local product structure requires not only the intersection of $W^{ss}[\eta]$ and $W^{uu}[\eta]$ but the intersection of the $\epsilon$-strong sets $W^{ss}_{\epsilon}[\eta]$ and $W^{uu}_{\epsilon}[\eta]$.
The following lemma gives a criterion to identify points of $W^{ss}_{\epsilon}[\eta]$ and $W^{uu}_{\epsilon}[\eta]$. **Lemma 16**. *Let $r_0>0$ be given by Lemma [Lemma 10](#lev){reference-type="ref" reference="lev"}. For every $\epsilon>0$ there exists $\delta\in (0,r_0]$ such that if $[\eta]\in V^s[\xi]$, $[\eta]$ belongs to the connected component of $V^s[\xi]\cap B([\xi],r_0)$ containing $[\xi]$ and $d([\eta],[\xi])\leq \delta$, then $$d(\psi_t[\eta],\psi_t[\xi])\leq \epsilon \quad\text{ for every } t\geq 0.$$ An analogous result holds for the unstable case.* *Proof.* By contradiction, suppose there exist $\epsilon_0>0$ and sequences $[\eta_n],[\xi_n]\in X$, $\delta_n\in(0,r_0]$ and $t_n\in \mathbb{R}$, such that $\delta_n\to 0$ and for every $n\geq 1$, $[\eta_n]\in V^s[\xi_n]$, $[\eta_n]$ belongs to the connected component of $V^s[\xi_n]\cap B([\xi_n],r_0)$ containing $[\xi_n]$, $$\label{sta2} d([\eta_n],[\xi_n])\leq \delta_n \leq r_0 \quad\text{ and }\quad d(\psi_{t_n}[\eta_n],\psi_{t_n}[\xi_n])\geq \epsilon_0.$$ We claim that $t_n\to \infty$. Otherwise, $t_n$ has a bounded subsequence and, after passing to a further subsequence, $t_n\to T\in \mathbb{R}$. For suitable subsequences, inequalities [\[sta2\]](#sta2){reference-type="eqref" reference="sta2"} imply that $[\eta_n]$ and $[\xi_n]$ converge to the same limit $[\eta]\in X$. Since $\psi_t$ is continuous, $\psi_{t_n}[\eta_n]$ and $\psi_{t_n}[\xi_n]$ converge to the same limit $\psi_T[\eta]$. This contradicts inequalities [\[sta2\]](#sta2){reference-type="eqref" reference="sta2"} and proves the claim. Since $t_n\to\infty$, inequalities [\[sta2\]](#sta2){reference-type="eqref" reference="sta2"} contradict Lemma [Lemma 13](#basico){reference-type="ref" reference="basico"}, and the lemma is proved. ◻ From this and Lemma [Lemma 15](#strong){reference-type="ref" reference="strong"}, we deduce that $W^{ss}_{\epsilon}[\eta]$ and $W^{uu}_{\epsilon}[\eta]$ agree locally with $V^s[\eta]$ and $V^u[\eta]$.
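In symbols, the local identification just described can be summarized as follows (notation as in the lemmas above, with $B([\xi],\delta)_c$ denoting the connected component of the intersection containing $[\xi]$): for every $\epsilon>0$ there exists $\delta\in(0,r_0]$ such that $$V^s[\xi]\cap B([\xi],\delta)_c \subset W^{ss}_{\epsilon}[\xi] \quad\text{ and }\quad V^u[\xi]\cap B([\xi],\delta)_c \subset W^{uu}_{\epsilon}[\xi].$$ Indeed, Lemma [Lemma 16](#veci){reference-type="ref" reference="veci"} provides the bound $d(\psi_t[\eta],\psi_t[\xi])\leq\epsilon$ for every $t\geq 0$, while Lemma [Lemma 15](#strong){reference-type="ref" reference="strong"} provides the convergence $d(\psi_t[\eta],\psi_t[\xi])\to 0$ required by the strong stable set, and similarly for the unstable case.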
The following lemma states that if $[\eta]$ and $[\xi]$ are close enough, then their intersection $[\theta]$ is close to $[\eta]$ and $[\xi]$. **Lemma 17**. *For every $\epsilon>0$ there exists $\delta>0$ such that if $[\eta],[\xi]\in X$, $[\theta]\in V^s[\eta]\cap V^u(\psi_{\tau}[\xi])$ and $d([\eta],[\xi])\leq \delta$, then $$d([\theta],[\eta])\leq \epsilon,\quad d([\theta],\psi_{\tau}[\xi])\leq \epsilon \quad\text{ and }\quad |\tau|\leq \epsilon.$$* *Proof.* By contradiction, suppose there exist $\epsilon_0>0$ and sequences $[\eta_n],[\xi_n],[\theta_n]\in X$, $\tau_n\in \mathbb{R}$ such that for every $n\geq 1$, $[\theta_n]\in V^s[\eta_n]\cap V^u(\psi_{\tau_n}[\xi_n])$, $|\tau_n|\geq \epsilon_0$, $$\label{sta3} d([\eta_n],[\xi_n])\leq \frac{1}{n}, \quad d([\theta_n],[\eta_n])\geq \epsilon_0 \quad \text{ and }\quad d([\theta_n],\psi_{\tau_n}[\xi_n])\geq \epsilon_0.$$ Given $r_0>0$ from Lemma [Lemma 10](#lev){reference-type="ref" reference="lev"}, for every $n$ large enough, $d([\eta_n],[\xi_n])\leq \frac{1}{n}\leq r_0$. So, we can choose lifts $\tilde{\eta}_n,\tilde{\xi}_n,\tilde{\theta}_n$ of $\eta_n,\xi_n, \theta_n$ such that $d_s(\tilde{\eta}_n,\tilde{\xi}_n)\leq Q+s_0$ and $\tilde{\theta}_n$ belongs to the fundamental domain containing $\tilde{\eta}_n$ and $\tilde{\xi}_n$. We claim that for every $n\geq 1$, $$\label{inte1} \tilde{\theta}_n\in \mathcal{\tilde{F}}^s(\tilde{\eta}_n)\cap \mathcal{\tilde{F}}^u(\phi_{\tau_n}(\tilde{\xi}_n)).$$ Otherwise, there exist sequences of covering isometries $T_n,T'_n$ such that $\tilde{\theta}_n\in \mathcal{\tilde{F}}^s(T_n(\tilde{\eta}_n))\cap \mathcal{\tilde{F}}^u(\phi_{\tau_n}(T'_n(\tilde{\xi}_n)))$. Thus, there exists an open set $U\subset X$ containing $[\eta_n]$ such that $[\theta_n]$ does not belong to the connected component of $V^s[\eta_n]\cap U$ containing $[\eta_n]$. Similarly, $[\theta_n]$ does not belong to the connected component of $V^{cu}[\xi_n]\cap U$ containing $[\xi_n]$.
This contradicts the definition of the intersection and proves the claim. So, if we use the same covering isometries for all sequences and choose suitable subsequences, we can assume that $\tilde{\eta}_n \to \tilde{\eta}, \tilde{\xi}_n\to \tilde{\xi}, \tilde{\theta}_n\to \tilde{\theta}$ and $\tau_n\to T$. Since we used the same covering isometries for the sequences, the continuity of the horospherical foliations applied to relation [\[inte1\]](#inte1){reference-type="eqref" reference="inte1"} yields $$\label{inte2} \tilde{\theta}\in \mathcal{\tilde{F}}^s(\tilde{\eta})\cap \mathcal{\tilde{F}}^u(\phi_T(\tilde{\xi})).$$ We claim that $\tilde{\eta}\in \mathcal{\tilde{I}}(\tilde{\xi})$. Otherwise, $\eta\not\in \mathcal{I}(\xi)$ and $d([\eta],[\xi])>0$. But passing to the limit in inequalities [\[sta3\]](#sta3){reference-type="eqref" reference="sta3"}, we get $d([\eta],[\xi])=0$, and the claim is proved. The claim and relation [\[inte2\]](#inte2){reference-type="eqref" reference="inte2"} imply that $\tilde{\theta}\in \mathcal{\tilde{F}}^s(\tilde{\eta})\cap \mathcal{\tilde{F}}^u(\phi_T(\tilde{\eta}))$. From this, Corollary [Corollary 1](#key1){reference-type="ref" reference="key1"} shows that $\tilde{\theta} \in \mathcal{\tilde{I}}(\tilde{\eta})$. Therefore $[\theta_n]$ and $[\eta_n]$ converge to the same limit $[\theta]=[\eta]$, which contradicts inequalities [\[sta3\]](#sta3){reference-type="eqref" reference="sta3"}. This gives $\delta_1>0$ such that $d([\theta],[\eta])\leq \epsilon$. A similar reasoning yields $\delta_2>0$ such that $d([\theta],\psi_{\tau}[\xi])\leq \epsilon$. Finally, we see that $\tilde{\theta} \in \mathcal{\tilde{I}}(\tilde{\eta})\subset \mathcal{\tilde{F}}^u(\tilde{\eta})$, hence $\tilde{\theta} \in \mathcal{\tilde{F}}^u(\tilde{\eta})\cap \mathcal{\tilde{F}}^u(\phi_T(\tilde{\eta}))$. This holds if and only if $T=0$. Therefore $\tau_n \to 0$, contradicting $|\tau_n|\geq \epsilon_0$. We thus get $\delta_3>0$ such that $|\tau|\leq \epsilon$.
Choosing $\delta=\min(\delta_1,\delta_2,\delta_3)$, we get the result. ◻ This result is important because it guarantees the continuity of the local product structure. **Lemma 18**. *The quotient flow $\psi_t$ has a local product structure.* *Proof.* Let $r_0>0$ be given by Lemma [Lemma 10](#lev){reference-type="ref" reference="lev"} and $\epsilon\in(0,r_0]$. Consider $[\eta],[\xi]\in X$ and $[\theta]\in V^s[\eta]\cap V^u(\psi_{\tau}[\xi])$. By Lemma [Lemma 16](#veci){reference-type="ref" reference="veci"} there exists $\delta_1>0$ such that if $d([\theta],[\eta])\leq \delta_1$ and $d([\theta],\psi_{\tau}[\xi])\leq \delta_1$ then $$\label{veci1} d(\psi_t[\theta],\psi_t[\eta])\leq \epsilon \quad\text{ and }\quad d(\psi_{-t}[\theta],\psi_{-t}\psi_{\tau}[\xi])\leq \epsilon, \quad \text{for every } t\geq 0.$$ For the same $\epsilon>0$, Lemma [Lemma 12](#exp1){reference-type="ref" reference="exp1"} gives $\delta_2>0$ such that expansivity holds. Set $\delta_m=\min(\delta_1,\delta_2,\epsilon)$. By Lemma [Lemma 17](#inters){reference-type="ref" reference="inters"}, for $\delta_m>0$ there exists $\delta>0$ such that if $[\eta],[\xi]\in X$, $[\theta]\in V^s[\eta]\cap V^u(\psi_{\tau}[\xi])$ and $d([\eta],[\xi])\leq \delta$, then $$d([\theta],[\eta])\leq \delta_m,\quad d([\theta],\psi_{\tau}[\xi])\leq \delta_m \quad\text{ and }\quad |\tau|\leq \delta_m\leq \epsilon.$$ From this and $\delta_m\leq \epsilon\leq r_0$, Lemma [Lemma 15](#strong){reference-type="ref" reference="strong"} implies that $[\theta]\in W^{ss}[\eta]\cap W^{uu}(\psi_{\tau}[\xi])$. Furthermore, since $\delta_m\leq \delta_1$, the points $[\theta],[\eta],[\xi]$ satisfy inequalities [\[veci1\]](#veci1){reference-type="eqref" reference="veci1"}, hence $[\theta]\in W^{ss}_{\epsilon}[\eta]\cap W^{uu}_{\epsilon}(\psi_{\tau}[\xi])$ and $|\tau|\leq \epsilon$. ◻ Finally, the pseudo-orbit tracing and specification properties are consequences of the previous dynamical properties. More precisely, 1.
By Theorem 7.1 of [@thom91], if $\psi_t$ is expansive and has a local product structure then $\psi_t$ has the pseudo-orbit tracing property. 2. By Proposition 6.2 of [@gelf19], if $\psi_t$ is expansive, topologically mixing and has the pseudo-orbit tracing property then $\psi_t$ has the specification property. # Uniqueness of the measure of maximal entropy of the geodesic flow This section is devoted to the study of the uniqueness of the measure of maximal entropy of the geodesic flow. The existence of such a measure follows from work of Newhouse [@new89]. Indeed, the result says that a smooth flow on a compact smooth manifold always has a measure of maximal entropy. By hypothesis, the geodesic flow $\phi_t$ is a smooth flow acting on $T_1M$ and the result follows. We remark that in our case, the geodesic flow has positive topological entropy [@freire82]. The strategy for the proof of uniqueness of the measure of maximal entropy is the following. First of all, the properties of the factor flow imply that $\psi_t$ has a unique measure of maximal entropy. Secondly, we will show that the lift of this measure to $T_1M$ is the unique measure of maximal entropy for the geodesic flow. Recall that the quotient model is a quotient flow $\psi_t$ time-preserving semi-conjugate to the geodesic flow $\phi_t$. Bowen and Franco found a criterion to get the uniqueness of the measure of maximal entropy [@bowen71endo; @fran77]. **Theorem 9**. *Let $\phi_t:X\to X$ be a continuous flow acting on a compact metric space. If $\phi_t$ is expansive and has the specification property then $\phi_t$ has a unique measure of maximal entropy.* From the previous section, Theorem [Theorem 8](#propd){reference-type="ref" reference="propd"} says that $\psi_t$ is expansive and has the specification property. Applying Theorem [Theorem 9](#frank){reference-type="ref" reference="frank"} to our case we see that $\psi_t$ has a unique measure of maximal entropy $\nu$.
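For later use we record, as a reading aid, a standard fact that the arguments below use implicitly: entropy does not increase under topological factors. Since $\chi$ is a continuous, surjective, time-preserving semi-conjugacy between flows on compact spaces, the following classical inequalities hold (the notation is that of the surrounding text):

```latex
% Standard fact (not proved here): a topological factor has entropy at most
% that of the extension. Here \chi \circ \phi_t = \psi_t \circ \chi.
\[
  h(\psi_1) \;\le\; h(\phi_1),
  \qquad
  h_{\chi_*\mu}(\psi_1) \;\le\; h_{\mu}(\phi_1)
  \quad \text{for every } \mu \in \mathcal{M}(\phi).
\]
```

The first inequality enters the comparison $h(\phi_1)=h(\psi_1)$ below, and the second in showing that suitable weak$^*$ limits are measures of maximal entropy.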
To lift $\nu$ to $T_1M$ and verify the uniqueness property we rely on an abstract theorem proved by Buzzi-Fisher-Sambarino-Vasquez for discrete systems [@buzzi12]. They constructed a measure of maximal entropy using a classical argument due to Ledrappier and Walters [@ledra77]. We recall the construction for our setting. Let $\phi_t:Y\to Y$ and $\psi_t:X\to X$ be two continuous flows on compact metric spaces, $\chi:Y\to X$ be a time-preserving semi-conjugacy and $\nu$ be the measure of maximal entropy of $\psi_t$. Assume that $\psi_t$ is expansive, has the specification property and for every $x\in X$, $$\label{hip} h(\phi_1,\chi^{-1}(x))=0.$$ Let $\epsilon>0$ be an expansivity constant for $\psi_t$. For each $T>0$, we define the set $$Per(T,\epsilon)=\{ \chi^{-1}(\gamma)\subset Y: \gamma \text{ is a periodic orbit of }\psi_t \text{ with period in }[T-\epsilon,T+\epsilon] \}.$$ By expansivity this set is finite. The following lemma states a non-trivial fact about strips in our setting. **Lemma 19**. *Let $M$ be a compact surface without conjugate points of genus greater than one and $Per(T,\epsilon)$ be the set defined above. Then, every subset $\chi^{-1}(\gamma)\in Per(T,\epsilon)$ is compact and invariant by the geodesic flow $\phi_t$ and $$\chi^{-1}(\gamma)=\{ \phi_s(\mathcal{I}(\xi)):s\in [0,S]\text{ with }S\in[T-\epsilon,T+\epsilon] \} = \phi_{[0,S]}(\mathcal{I}(\xi)).$$ In particular, its lift $\tilde{\phi}_{[0,S]}(\mathcal{\Tilde{I}}(\Tilde{\xi}))\subset T_1\Tilde{M}$ is a strip of bi-asymptotic orbits of the geodesic flow $\Tilde{\phi}_t$ for any lift $\tilde{\xi}\in T_1\Tilde{M}$ of $\xi$.* We observe that the strip $\chi^{-1}(\gamma)$ might not contain closed orbits of the geodesic flow $\phi_t$, hence its projection $P(\chi^{-1}(\gamma))\subset M$ might not contain closed geodesics of $(M,g)$.
However, for every non-closed $\phi_t$-orbit $\beta\subset\chi^{-1}(\gamma)$, Lemma [Lemma 19](#notri){reference-type="ref" reference="notri"} implies that $\beta$ and its accumulation points remain in $\chi^{-1}(\gamma)$. These properties may help in understanding the geometry of these particular strips in future studies. Note that for compact surfaces without focal points, the geometry of strips is well-understood due to the flat strip theorem [@pesin77]. It also follows from Lemma [Lemma 19](#notri){reference-type="ref" reference="notri"} that there exists a probability measure $\mu_{\gamma}$ supported on $\chi^{-1}(\gamma)$ and invariant by the geodesic flow $\phi_t$. So, we can take the average $$\mu_T=\frac{\sum_{\gamma}\mu_{\gamma}}{\# Per(T,\epsilon)},$$ where $\gamma$ varies according to $\chi^{-1}(\gamma)\in Per(T,\epsilon)$. We see that $\mu_T$ is a probability measure on $Y$ invariant by the flow $\phi_t$. Let $\mu\in \mathcal{M}(\phi)$ be an accumulation point of the set $(\mu_T)_{T>0}$ in the weak$^*$ topology. So, there exists a sequence $T_n\to \infty$ such that $\mu_{T_n}\to \mu$ weakly. Notice that for every $\chi^{-1}(\gamma)\in Per(T,\epsilon)$, $\chi_*\mu_{\gamma}$ is a probability measure supported on $\gamma$ and invariant by the flow $\psi_t$. It follows that $\chi_*(\mu_{T_n})$ is a probability measure supported on the union of periodic orbits of $\psi_t$ with period in $[T_n-\epsilon,T_n+\epsilon]$. In this case, Bowen showed that $\chi_*(\mu_{T_n})\to \nu$ in the weak$^*$ topology [@bowenper]. The continuity of $\chi_*$ and $\mu_{T_n}\to \mu$ imply that $\chi_*\mu=\nu$. We verify that $\mu$ is a measure of maximal entropy. Since $(Y,\phi_t,\mu)$ is an extension of $(X,\psi_t,\nu)$, we have $h_{\nu}(\psi_1)\leq h_{\mu}(\phi_1)$.
Applying Bowen's formula [@bowen71endo] and assumption [\[hip\]](#hip){reference-type="eqref" reference="hip"}, we conclude that $$h(\phi_1)\leq h(\psi_1)+\sup_{x\in X}h(\phi_1,\chi^{-1}(x))=h(\psi_1)\quad \text{ hence }\quad h(\phi_1)= h(\psi_1).$$ Since $\nu$ is a measure of maximal entropy for $\psi_t$, $h(\phi_1)=h(\psi_1)=h_{\nu}(\psi_1)\leq h_{\mu}(\phi_1)$. So, every accumulation point $\mu\in \mathcal{M}(\phi)$ of the set $(\mu_T)_{T>0}$ satisfies: $$\label{par} \mu \text{ is a measure of maximal entropy for } \phi_t \quad \text{ and } \quad \chi_*\mu=\nu.$$ We state Buzzi-Fisher-Sambarino-Vasquez's theorem for continuous systems. The proof is analogous to the discrete case with minor changes. **Proposition 5**. *Let $\phi_t:Y\to Y$ and $\psi_t:X\to X$ be two continuous flows on compact metric spaces, $\chi:Y\to X$ be a time-preserving semi-conjugacy and $\nu$ be the measure of maximal entropy of $\psi_t$. Assume that $\psi_t$ is expansive, has the specification property and* 1. *$h(\phi_1,\chi^{-1}(x))=0$ for every $x\in X$.* 2. *$\nu \bigg(\{ \chi(y): \chi^{-1}\circ\chi(y)=\{y\} \}\bigg)=1$.* *Then, there exists a unique measure of maximal entropy $\mu$ of $\phi_t$ with $\chi_*\mu=\nu$.* We apply this proposition to our context. Let $Y=T_1M$, $\phi_t$ be the geodesic flow, $X$ be the quotient space, $\psi_t$ be the quotient flow, $\chi$ be the quotient map and $\nu$ be the unique measure of maximal entropy of $\psi_t$. With these choices, the assumptions of Proposition [Proposition 5](#samba){reference-type="ref" reference="samba"} are satisfied except for Hypotheses 1 and 2. Regarding Hypothesis 1, we see that for every $[\eta]\in X$, $$\label{ayu} \chi^{-1}[\eta]=\chi^{-1}\circ \chi(\eta)=\mathcal{I}(\eta).$$ For compact surfaces without conjugate points and genus greater than one, Gelfert and Ruggiero [@gelf20] proved that $h(\phi_1, \mathcal{I}(\eta))=0$ for every $\eta\in T_1M$. Therefore, Hypothesis 1 is satisfied.
Moreover, condition [\[par\]](#par){reference-type="eqref" reference="par"} says that there exists a measure of maximal entropy $\mu$ for the geodesic flow $\phi_t$ such that $\chi_*\mu=\nu$. So, to show the uniqueness it only remains to prove Hypothesis 2. We express Hypothesis 2 of Proposition [Proposition 5](#samba){reference-type="ref" reference="samba"} in our context. By identity [\[ayu\]](#ayu){reference-type="eqref" reference="ayu"}, this hypothesis has the following form $$\{ \chi(y): \chi^{-1}\circ \chi(y)=\{y\} \}= \{ \chi(\eta)\in X: \mathcal{I}(\eta)=\{\eta\} \}=\chi(\mathcal{R}_0).$$ Consequently, Hypothesis 2 becomes $$\label{ayu1} \nu(\chi(\mathcal{R}_0))=1.$$ To prove this condition, we use Proposition 3.3 of Climenhaga-Knieper-War's work [@clim21]. This proposition restates a classical result of Katok in the context of geodesic flows on surfaces. **Lemma 20**. *Let $M$ be a surface without conjugate points of genus greater than one and $\mu$ be an ergodic measure on $T_1M$ invariant by the geodesic flow. $$\text{If }\quad h_{\mu}(\phi_1)>0 \quad \text{ then }\quad \mu(\mathcal{R}_0)=1.$$* We prove below condition [\[ayu1\]](#ayu1){reference-type="eqref" reference="ayu1"} and hence the uniqueness of the measure of maximal entropy for the geodesic flow $\phi_t$. *Proof.* As remarked above, by condition [\[par\]](#par){reference-type="eqref" reference="par"}, $\mu$ is a measure of maximal entropy and hence $h_{\mu}(\phi_1)=h(\phi_1)>0$. Ergodic decomposition of $\mu$ provides an ergodic component $\tau$ with $h_{\tau}(\phi_1)>0$. Lemma [Lemma 20](#ani){reference-type="ref" reference="ani"} implies that $\tau(\mathcal{R}_0)=1$ hence $\mu(\mathcal{R}_0)>0$. So, we have $$\nu(\chi(\mathcal{R}_0))=\chi_*\mu(\chi(\mathcal{R}_0))=\mu(\chi^{-1}\chi\mathcal{R}_0)=\mu(\mathcal{R}_0)>0.$$ Since $\nu$ is ergodic and $\chi(\mathcal{R}_0)$ is invariant by $\psi_t$, we get $\nu(\chi(\mathcal{R}_0))=1$.
◻ Finally, we remark that Climenhaga-Knieper-War [@clim21] also showed that the unique measure of maximal entropy has full support. This property can be proven by our methods assuming that the expansive set $\mathcal{R}_0$ is dense. For this, we first restate Proposition 7.3.15 of [@fish19] in our context. **Proposition 6**. *Let $X$ be a compact metric space, $\psi_t:X\to X$ be a continuous expansive flow with the specification property and $\nu$ be its unique measure of maximal entropy. Then, for every $\epsilon>0$ there exist $A_{\epsilon},B_{\epsilon}>0$ such that for every $x\in X$ and every $T>0$, we have $A_{\epsilon}\leq e^{Th(\psi_1)}\nu(B(x,\epsilon,T))\leq B_{\epsilon}$ where $B(x,\epsilon,T)$ is the $(T,\epsilon)$-dynamical ball defined in Equation [\[dyna\]](#dyna){reference-type="eqref" reference="dyna"} in Subsection [2.5](#p5){reference-type="ref" reference="p5"}.* **Proposition 7**. *Let $M$ be a compact surface without conjugate points of genus greater than one and $\mu$ be its unique measure of maximal entropy. If $\mathcal{R}_0$ is dense in $T_1M$ then $\mu$ has full support.* *Proof.* For $T=0$ and every $\epsilon>0$, apply Proposition [Proposition 6](#gibbs){reference-type="ref" reference="gibbs"} to the quotient flow $\psi_t:X\to X$ and its unique measure of maximal entropy $\nu$. So, we have $0<A_{\epsilon}\leq \nu(B(x,\epsilon,0))\leq B_{\epsilon}$ for every $\epsilon>0$ and every $x\in X$. Therefore $\nu$ has full support on $X$ since $B(x,\epsilon,0)$ is just an open ball of radius $\epsilon$ centered at $x$. Now, let $U$ be any open set of $T_1M$. By density of $\mathcal{R}_0$, there is an expansive point $\xi\in U$. Note that the family of open saturated neighborhoods around $\mathcal{I}(\xi)=\{\xi\}$ defined in Section [4](#s3){reference-type="ref" reference="s3"} forms a basis of neighborhoods at $\xi$. Hence there exists an open saturated set $A=A(\xi,\epsilon', \delta',\tau')$ included in $U$.
Since $\nu$ has full support and $\chi(A)$ is an open set of $X$, the conclusion follows from $$\mu(U)\geq \mu(A)=\mu(\chi^{-1}\chi(A))=\chi_*\mu(\chi(A))=\nu(\chi(A))>0.$$ ◻ Although we assumed that $\mathcal{R}_0$ is dense, we believe that this actually holds in our setting. The density in this more general case is the subject of ongoing work. This property holds, for example, for compact higher genus surfaces without conjugate points and with continuous Green bundles, which include the case of surfaces without focal points. # Acknowledgments I would like to thank my advisor Rafael Ruggiero for useful discussions. I appreciate the financial support of the CAPES and FAPERJ funding agencies during the work. This article was supported in part by INCTMat under the project INCTMat-Faperj (E26/200.866/2018). D. V. Anosov. Geodesic flows on closed Riemannian manifolds with negative curvature. , 90:3--210, 1969. Rufus Bowen. Entropy for group endomorphisms and homogeneous spaces. , 153:401--414, 1971. Rufus Bowen. Periodic orbits for hyperbolic flows. , 94(1):1--30, 1972. Rufus Bowen and Peter Walters. Expansive one-parameter flows. , 12(1):180--193, 1972. Jérôme Buzzi, Todd Fisher, Martin Sambarino, and Carlos Vásquez. Maximal entropy measures for certain partially hyperbolic, derived from Anosov systems. , 32(1):63--79, 2012. Vaughn Climenhaga, Gerhard Knieper, and Khadim War. Uniqueness of the measure of maximal entropy for geodesic flows on certain manifolds without conjugate points. , 376:107452, 2021. Vaughn Climenhaga and Daniel J. Thompson. Unique equilibrium states for flows and homeomorphisms with non-uniform structure. , 303:745--799, 2016. Efim I. Dinaburg. On the relations among various entropy characteristics of dynamical systems. , 5(2):337, 1971. Manfredo Perdigão do Carmo and Francis Flaherty. , volume 6. Springer, 1992. Patrick Eberlein. Geodesic flow in certain manifolds without conjugate points. , 167:151--170, 1972. Patrick Eberlein.
Geodesic flows on negatively curved manifolds. II. , 178:57--82, 1973. Jost-Hinrich Eschenburg. Horospheres and the stable part of the geodesic flow. , 153(3):237--251, 1977. Todd Fisher and Boris Hasselblatt. , volume 1. European Mathematical Society Publishing House, 2019. Ernesto Franco. Flows with unique equilibrium states. , 99(3):486--514, 1977. Alexandre Freire and Ricardo Mañé. On the entropy of the geodesic flow in manifolds without conjugate points. , 69(3):375--392, 1982. Katrin Gelfert and Rafael O. Ruggiero. . Proc. Edinb. Math. Soc. (2), 62(1):61--95, 2019. Katrin Gelfert and Rafael O. Ruggiero. Geodesic flows modeled by expansive flows: Compact surfaces without conjugate points and continuous Green bundles. , 2020. Étienne Ghys. Flots d'Anosov sur les 3-variétés fibrées en cercles. , 4(1):67--80, 1984. Mikhael Gromov. Hyperbolic groups. In *Essays in group theory*, pages 75--263. Springer, 1987. Gerhard Knieper. . Number 168 in 1. Bonner Mathematische Schriften, 1986. François Ledrappier and Peter Walters. A relativised variational principle for continuous transformations. , 2(3):568--576, 1977. Harold Marston Morse. A fundamental class of geodesics on any closed surface of genus greater than one. , 26(1):25--60, 1924. James R. Munkres. , volume 2. Prentice Hall, Upper Saddle River, 2000. Sheldon E. Newhouse. Continuity properties of entropy. , 129(1):215--235, 1989. Gabriel P. Paternain. Finsler structures on surfaces with negative Euler characteristic. , 23:421--426, 1997. Ya. B. Pesin. Geodesic flows on closed Riemannian manifolds without focal points. , 11(6):1195, 1977. Ludovic Rifford and Rafael Ruggiero. On the stability conjecture for geodesic flows of manifolds without conjugate points. , 4:759--784, 2021. Romeo F. Thomas. Canonical coordinates and the pseudo orbit tracing property. , 90(2):316--343, 1991. Peter Walters. , volume 79. Springer Science & Business Media, 2000. Stephen Willard. . Courier Corporation, 2012.
Source: arXiv:2309.08091, "Geodesic flows of compact higher genus surfaces without conjugate points have expansive factors", by Edhin Franklin Mamani (math.DS, math.DG; license CC BY 4.0).
--- abstract: | By classical Fatou type theorems in various setups, it is well-known that positive harmonic functions have non-tangential limit at almost every point on the boundary. In this paper, in the setting of non-positively curved Harmonic manifolds of purely exponential volume growth, we are interested in the size of the exceptional sets of points on the boundary at infinity, where a suitable function blows up faster than a prescribed growth rate, along radial geodesic rays. For Poisson integrals of complex measures, we obtain a sharp bound on the Hausdorff dimension of the exceptional sets, in terms of the mean curvature of horospheres and the parameter of the growth rate. In the case of the Green potentials, we obtain similar upper bounds and also construct Green potentials that blow up faster than a prescribed rate on lower Hausdorff dimensional realizable sets. So we get a gap in the corresponding Hausdorff dimensions due to the assumption of variable pinched non-positive sectional curvature. We also obtain a Riesz decomposition theorem for subharmonic functions. Combining the above results we get our main result concerning Hausdorff dimensions of the exceptional sets of positive superharmonic functions. address: Stat-Math Unit, Indian Statistical Institute, 203 B. T. Rd., Kolkata 700108, India author: - Utsav Dewan title: Boundary exceptional sets for radial limits of superharmonic functions on non-positively curved harmonic manifolds of purely exponential volume growth --- # Introduction The boundary behavior of harmonic functions is one of the most well-studied topics in classical potential theory. The celebrated theorem of Fatou tells us that any positive harmonic function on the unit disk admits a non-tangential limit at almost all points on the boundary. 
This fact was generalized to rank one Riemannian symmetric spaces of non-compact type (for admissible limits) by Korányi [@Koranyi] and then to Hadamard manifolds of pinched negative curvature by Anderson and Schoen [@AS]. Then a natural question to ask is: how does a positive harmonic function behave along radial geodesic rays, on the complement of this full measure subset of the boundary? More precisely, how quickly can a positive harmonic function grow, or how large can the exceptional set (in the boundary) be on which the positive harmonic function blows up faster than a prescribed rate? These are the questions which we will address in this note, in the setting of non-positively curved Harmonic manifolds of purely exponential volume growth. Throughout this article, all Riemannian manifolds are assumed to be complete, simply connected and of dimension $n \ge 3$. A Harmonic manifold is a Riemannian manifold $X$ such that for any point $x \in X$, there exists a non-constant harmonic function on a punctured neighbourhood of $x$ which is radial around $x$, that is, only depends on the geodesic distance from $x$. By purely exponential volume growth, we mean that there exist constants $C > 1,\: h > 0$ such that the volume of metric balls $B(x, R)$ with center $x \in X$ and radius $R>1$, satisfies the asymptotics: $$\frac{1}{C} e^{hR} \le vol(B(x,R)) \le C e^{hR} \:.$$ It turns out that in our case, the constant $h > 0$ agrees with the mean curvature of the horospheres. It is well-known that the sectional curvature of Harmonic manifolds is bounded below (see [@Besse; @RR]), that is, there exists $b>0$ such that $K_X \ge -b^2$. The class of non-positively curved Harmonic manifolds of purely exponential volume growth includes all the known examples of non-compact non-flat Harmonic manifolds: the rank one Riemannian symmetric spaces of non-compact type and the Damek-Ricci spaces.
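As a calibration, here is a standard example not spelled out in the text: for real hyperbolic space $\mathbb H^n(-1)$ (a rank one symmetric space, hence a Harmonic manifold of purely exponential volume growth), the volume growth exponent and the mean curvature of horospheres both equal $n-1$:

```latex
% For X = H^n(-1): classical volume asymptotics give the growth exponent
% h = n - 1, which is also the mean curvature of horospheres in H^n(-1).
\[
  \operatorname{vol}\bigl(B(x,R)\bigr) \;\asymp\; e^{(n-1)R},
  \qquad h \;=\; n-1 .
\]
```

Accordingly, for $\mathbb H^n(-1)$ the Poisson kernel defined below reduces to $e^{-(n-1)B_{\xi}(x)}$, the classical Poisson kernel of the hyperbolic ball.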
Let $X$ be a non-positively curved Harmonic manifold of purely exponential volume growth with mean curvature of horospheres $h>0$. We fix an origin $o$ in $X$ and let $\partial X$ be the boundary at infinity. Now for a $\xi \in \partial X$, let $\gamma_\xi$ denote the unit-speed geodesic ray such that $\gamma_\xi(0)=o,\:\gamma_\xi(+\infty)=\xi$ and $d(o,\gamma_\xi(t))=t$ for all $t \in (0,+\infty)$. Then the Poisson kernel of $X$ is given by, $$\label{poisson_kernel} P(x,\xi) = e^{-hB_{\xi}(x)} \:,\text{ for all } x \in X,\: \xi \in \partial X,$$ where $B_{\xi}(x)$ is the Busemann function, defined by $$\label{busemann} B_{\xi}(x) = \displaystyle\lim_{t \to \infty} \left(d\left(x,\gamma_\xi(t)\right) - d\left(o,\gamma_\xi(t)\right)\right) \:.$$ The Martin representation formula [@KL Corollary 5.13] asserts that the positive harmonic functions on $X$ are given by Poisson integrals of finite, positive Borel measures on $\partial X$. More generally, for a complex measure $\mu$ on $\partial X$, let $u=P[\mu]$ be the Poisson integral of $\mu$. Then for any $\xi \in \partial X$ and any $t \in (0,+\infty)$, by ([\[poisson_kernel\]](#poisson_kernel){reference-type="ref" reference="poisson_kernel"}), ([\[busemann\]](#busemann){reference-type="ref" reference="busemann"}) and the triangle inequality, we have $$\label{bounding_poisson} |u(\gamma_\xi(t))| =\left|\int_{\partial X} P(\gamma_\xi(t),\eta)\:d\mu(\eta)\right| \le e^{ht}\: |\mu|(\partial X)\:,$$ where $|\mu|(\partial X)$ is the total variation of $\mu$. 
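The triangle-inequality step used above is the elementary bound $|B_{\eta}(x)| \le d(o,x)$, spelled out here for completeness (the notation is as in the preceding display):

```latex
% From the definition of the Busemann function and the triangle inequality,
% |d(x,\gamma_\eta(t)) - d(o,\gamma_\eta(t))| <= d(o,x) for every t, so
\[
  \bigl|B_{\eta}(x)\bigr|
  = \Bigl|\lim_{t \to \infty}
      \bigl(d(x,\gamma_\eta(t)) - d(o,\gamma_\eta(t))\bigr)\Bigr|
  \le d(o,x) .
\]
% Taking x = \gamma_\xi(t), with d(o,x) = t and h > 0, this yields the
% uniform kernel bound used above:
\[
  P(\gamma_\xi(t),\eta) = e^{-hB_{\eta}(\gamma_\xi(t))} \le e^{ht},
  \qquad \text{for all } \eta \in \partial X .
\]
```

Integrating this bound against $|\mu|$ recovers the estimate for $|u(\gamma_\xi(t))|$ displayed above.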
Then ([\[bounding_poisson\]](#bounding_poisson){reference-type="ref" reference="bounding_poisson"}) motivates us to consider for $\beta \in [0,h]$ and a complex-valued function $u$ on $X$, the following sets $$\label{Ebeta} E_\beta(u) := \left\{\xi \in \partial X : \displaystyle\limsup_{t \to +\infty} e^{-\beta t} \left|u\left(\gamma_{\xi}(t)\right)\right| > 0\right\}$$ and $$\label{Ebetainf} E^\infty_\beta(u) := \left\{\xi \in \partial X : \displaystyle\limsup_{t \to +\infty} e^{-\beta t} \left|u\left(\gamma_{\xi}(t)\right)\right| =+\infty\right\} \:.$$ When $K_X \le -1$, there is a natural metric called the visual metric, denoted by $\rho$ on $\partial X$. But in the generality of our situation, $\rho$ only defines a quasi-metric. However for $s \in (0,s_0)$, where $-s^2_0$ is the asymptotic upper curvature bound of $X$, one has a metric on $\partial X$, say $\rho_s$, bi-Lipschitz to $\rho^s$. In all our results, the Hausdorff dimensions or Hausdorff outer measures are with respect to $\rho_s$. The reader is referred to section $2$ for any unexplained notations and terminologies. Our first result gives an upper bound on the Hausdorff dimensions of the sets defined in ([\[Ebeta\]](#Ebeta){reference-type="ref" reference="Ebeta"}) and ([\[Ebetainf\]](#Ebetainf){reference-type="ref" reference="Ebetainf"}) for Poisson integrals of complex measures: **Theorem 1**. *Let $X$ be a non-positively curved Harmonic manifold of purely exponential volume growth with mean curvature of horospheres $h>0$. Assume $\beta \in [0,h]$ and $\mu$ to be a complex measure on $\partial X$. Then $$dim_{\mathcal{H}} E_\beta(P[\mu]) \le (h-\beta)/s \:, \text{ and } \mathcal{H}^{(h-\beta)/s}\left(E^\infty_\beta(P[\mu])\right) = 0 \:.$$* In fact, the bounds in Theorem [Theorem 1](#poisson_thm){reference-type="ref" reference="poisson_thm"} are sharp. This is illustrated by the following result: **Theorem 2**. 
*Let $X$ be as in the statement of Theorem [Theorem 1](#poisson_thm){reference-type="ref" reference="poisson_thm"}. Assume $\beta \in [0,h)$ and $E \subset \partial X$ with $\mathcal{H}^{(h-\beta)/s}(E)=0$. Then there exists a non-negative integrable function $f$ on $\partial X$ (with respect to the visibility measure $\lambda_o$) such that $E \subset E^\infty_\beta\left(P[f]\right)$.* In the classical Euclidean setting, analogues of Theorem [Theorem 1](#poisson_thm){reference-type="ref" reference="poisson_thm"} were obtained by Armitage [@A Theorem 4 with Corollary of Theorem 2] for the half-space and by Bayart-Heurteaux [@BH Theorem 1 or 3] for the unit ball. In the case of $\mathbb H^n(-1)$, the $n$-dimensional real Hyperbolic ball with constant sectional curvature equal to $-1$, analogues of Theorems [Theorem 1](#poisson_thm){reference-type="ref" reference="poisson_thm"} and [Theorem 2](#poisson_sharp_thm){reference-type="ref" reference="poisson_sharp_thm"} were recently obtained by Hirata [@H Theorems 3 and 5]. Now in $\mathbb H^n(-1)$, let $\mu$ be a non-negative Borel measure such that its Green potential $G[\mu]$ is well-defined. Then $G[\mu]$ has radial limit $0$ at almost all points on the boundary, whereas its boundary behavior along other non-tangential directions need not be nice [@St Theorem 9.4.1]. Similar results for the unit ball in $\mathbb C^n$ can be found in the works of Ullrich [@U]. These results regarding well-behaved radial limits of Green potentials on a full measure subset of the boundary prompt us to consider the same problem of exceptional sets for Green potentials. Then one notes that Green potentials are just special examples of positive superharmonic functions.
Finally motivated by [@H], we endeavour to obtain results similar to Theorems [Theorem 1](#poisson_thm){reference-type="ref" reference="poisson_thm"} and [Theorem 2](#poisson_sharp_thm){reference-type="ref" reference="poisson_sharp_thm"} for the class of positive superharmonic functions. As a first step in analyzing subharmonic (or superharmonic) functions, we obtain their Riesz decomposition, which may be of independent interest and seems to be new even in the case of Damek-Ricci spaces. For the relevant definitions in the following statement, the reader is referred to section $4$. **Theorem 3**. *Let $X$ be as in the statement of Theorem [Theorem 1](#poisson_thm){reference-type="ref" reference="poisson_thm"}. Let $f$ be a subharmonic function on $X$ such that it has a harmonic majorant. Then $$f(x) = F_f(x) - \int_X G(x,y) d\mu_f(y) \:,\text{ for all } x \in X \:,$$ where $F_f$ and $\mu_f$ are the least harmonic majorant and the Riesz measure of $f$ respectively.* Then Theorem [Theorem 3](#riesz_decom){reference-type="ref" reference="riesz_decom"} brings us back to our problem of determining the size of exceptional sets for Green potentials. As the Riesz measure of a subharmonic (or superharmonic) function is a Radon measure, we are naturally led to consider an analogue of Theorem [Theorem 1](#poisson_thm){reference-type="ref" reference="poisson_thm"} for Green potentials of Radon measures on $X$: **Theorem 4**. *Let $X$ be a non-positively curved Harmonic manifold of purely exponential volume growth with mean curvature of horospheres $h>n-2$ and sectional curvature $K_X \ge -b^2$, for some $b > 0$. Let $\beta \in [0, h-n+2)$ and $\mu$ be a Radon measure on $X$ whose Green potential $G[\mu]$ is well-defined.
Then for $b':=\max\{2b,1\}$, we have $$dim_{\mathcal{H}}E_\beta\left(G[\mu]\right) \le b'\left(h-\beta\right)/s\:,\text{ and } \mathcal{H}^{b'(h-\beta)/s}\left(E^\infty_\beta\left(G[\mu]\right)\right) =0 \:.$$* We then have the following analogue of Theorem [Theorem 2](#poisson_sharp_thm){reference-type="ref" reference="poisson_sharp_thm"} for Green potentials. **Theorem 5**. *Let $X$ be a non-positively curved Harmonic manifold of purely exponential volume growth with mean curvature of horospheres $h>n-2$. Let $\beta \in [0, h-n+2)$ and $E \subset \partial X$ with $\mathcal{H}^{(h-\beta)/s}(E)=0$. Then there exists a Green potential $u$ on $X$ such that $E \subset E^\infty_\beta(u)$.* **Remark 6**. 1. *The condition $h>n-2$ is naturally posed due to the behavior of the Green function near its pole. Moreover, for any $\varepsilon>0$, all non-compact Harmonic manifolds with sectional curvature $K_X \le -{(1+\varepsilon)}^2{\left(\frac{n-2}{n-1}\right)}^2$, satisfy this property. The last statement follows from the fact that the mean curvature of horospheres is obtained as the Laplacian of the Busemann functions and an application of the Hessian comparison theorem.* 2. *Comparing with Theorems $4$ and $6$ of [@H], it follows that the Hausdorff dimension appearing in Theorem [Theorem 5](#green_sharp_thm){reference-type="ref" reference="green_sharp_thm"} is optimal. Then we note the gap in the corresponding Hausdorff dimensions in Theorem [Theorem 4](#Green_thm){reference-type="ref" reference="Green_thm"} and Theorem [Theorem 5](#green_sharp_thm){reference-type="ref" reference="green_sharp_thm"} (when $b>1/2$), whereas in the case of Theorem [Theorem 1](#poisson_thm){reference-type="ref" reference="poisson_thm"}, the upper bound was shown to be sharp by Theorem [Theorem 2](#poisson_sharp_thm){reference-type="ref" reference="poisson_sharp_thm"}. As it will be apparent from our arguments, the reason is two-fold. 
Firstly, unlike in the case of the Poisson kernel, the Green function has its singularity in the interior of the space. Hence while trying to compute the Hausdorff dimension of the exceptional set on the boundary, we have to project the analysis done in the interior to the boundary. This is where the geometric ingredient of variable curvature comes into play. Then for $b>1/2$, due to the pinching condition $-b^2 \le K_X \le 0$, we get a gap in the corresponding Hausdorff dimensions.* Finally as a consequence of the above results we obtain our main result: **Theorem 7**. *Let $X$ be as in the statement of Theorem [Theorem 4](#Green_thm){reference-type="ref" reference="Green_thm"}. Let $u$ be a positive superharmonic function on $X$ and $\beta \in [0,h-n+2)$. Then for $b':=\max\{2b,1\}$, we have $$dim_{\mathcal{H}}E_\beta(u) \le b'\left(h-\beta\right)/s\:,\text{ and } \mathcal{H}^{b'\left(h-\beta\right)/s}\left(E^\infty_\beta(u)\right) =0 \:.$$ Conversely, let $X$ be as in the statement of Theorem [Theorem 5](#green_sharp_thm){reference-type="ref" reference="green_sharp_thm"}. Then for $\beta \in [0, h-n+2)$ and $E \subset \partial X$ with $\mathcal{H}^{(h-\beta)/s}(E)=0$, there exists a positive superharmonic function $u$ on $X$ such that $E \subset E^\infty_\beta(u)$.* In the proofs of Theorems [Theorem 1](#poisson_thm){reference-type="ref" reference="poisson_thm"}, [Theorem 2](#poisson_sharp_thm){reference-type="ref" reference="poisson_sharp_thm"}, [Theorem 4](#Green_thm){reference-type="ref" reference="Green_thm"} and [Theorem 5](#green_sharp_thm){reference-type="ref" reference="green_sharp_thm"}, we follow the general outline of the arguments in [@H] but unlike in the case of $\mathbb H^n(-1)$, the boundary of a non-positively curved Harmonic manifold of purely exponential volume growth is not sufficiently regular and hence our arguments take a substantial detour by estimating global geometric quantities. 
Unlike in the case of $\mathbb H^n(-1)$, non-constant curvature makes it hard to get sharp estimates on Riemannian angles and the diameters of 'shadows' of balls. Then in order to get workable estimates of the above, one has to rely upon comparison principles afforded by the pinching condition on the sectional curvature for 'small' balls and the shadow lemma of Gromov hyperbolic spaces for 'large' balls. All of this ultimately results in the gap in the corresponding Hausdorff dimensions for Theorems [Theorem 4](#Green_thm){reference-type="ref" reference="Green_thm"} and [Theorem 5](#green_sharp_thm){reference-type="ref" reference="green_sharp_thm"}, which can be viewed as a distinct feature of variable pinched non-positive curvature. The arguments in the proof of Theorem [Theorem 3](#riesz_decom){reference-type="ref" reference="riesz_decom"} follow the classical steps presented in [@St] and [@U] but unlike in their case, our space need not be a Riemannian symmetric space and hence their approach of Möbius group invariant potential theory breaks down. Instead, we look at a geometric manifestation of convolution and work our way through to obtain results similar to those of the homogeneous setup. This paper is organized as follows. In section $2$, we recall the required preliminaries and fix our notations. In section $3$, we present our results on the Poisson integrals: Theorems [Theorem 1](#poisson_thm){reference-type="ref" reference="poisson_thm"} and [Theorem 2](#poisson_sharp_thm){reference-type="ref" reference="poisson_sharp_thm"}. In section $4$ we prove the Riesz decomposition theorem for subharmonic functions: Theorem [Theorem 3](#riesz_decom){reference-type="ref" reference="riesz_decom"}. In section $5$, the results for Green potentials: Theorems [Theorem 4](#Green_thm){reference-type="ref" reference="Green_thm"} and [Theorem 5](#green_sharp_thm){reference-type="ref" reference="green_sharp_thm"}, are proved.
Section $6$ consists of the proof of Theorem [Theorem 7](#superh_thm){reference-type="ref" reference="superh_thm"}. # Preliminaries Throughout this article, $C(\cdot)$ will be used to denote positive constants whose value may vary at each occurrence, with dependence on parameters or geometric quantities made explicit in parentheses. When required, enumerated constants $C_1, C_2, \dots$ will be used to specify fixed constants. Let $f_1$ and $f_2$ be two positive functions. Then the notation $f_1 \asymp f_2$ will mean that there exists $C>1$ such that $(1/C) f_1 \le f_2 \le C f_1$. Also $f_1 \gtrsim f_2$ (respectively, $f_1 \lesssim f_2$) will mean that there exists $C>0$ such that $f_1 \ge C f_2$ (respectively, $f_1 \le C f_2$). The indicator function of a set $A$ will be denoted by $\chi_A$. ## Gromov Hyperbolic Spaces In this subsection we briefly recall some basic facts and definitions related to Gromov hyperbolic spaces. For more details, we refer to [@Bridson]. A *geodesic* in a metric space $X$ is an isometric embedding $\gamma : I \subset \mathbb{R} \to X$ of an interval into $X$. A metric space $X$ is said to be a *geodesic metric space* if any two points in $X$ can be joined by a geodesic. A geodesic metric space $X$ is called *Gromov hyperbolic* if there exists a $\delta \ge 0$ such that every geodesic triangle in $X$ is $\delta$-thin, that is, each side is contained in the $\delta$-neighbourhood of the union of the other two sides. This $\delta$ is called the Gromov hyperbolicity constant. For a Gromov hyperbolic space $X$, its *boundary at infinity* $\partial X$ is defined to be the set of equivalence classes of geodesic rays in $X$. Here a geodesic ray is an isometric embedding $\gamma : [0,\infty) \to X$ of a closed half-line into $X$, and two geodesic rays $\gamma, \tilde{\gamma}$ are said to be equivalent if the set $\{ d(\gamma(t), \tilde{\gamma}(t)) \ | \ t \geq 0 \}$ is bounded.
The equivalence class of a geodesic ray $\gamma$ is denoted by $\gamma(\infty) \in \partial X$. A metric space is said to be *proper* if closed and bounded balls in the space are compact. Let $X$ be a proper, geodesic, Gromov hyperbolic space. There is a natural topology on $\overline{X} := X \cup \partial X$, called the *cone topology*, such that $\overline{X}$ is a compact metrizable space which is a compactification of $X$. In this case, for every geodesic ray $\gamma$, $\gamma(t) \to \gamma(\infty) \in \partial X$ as $t \to \infty$, and for any $x \in X,\: \xi \in \partial X$ there exists a geodesic ray $\gamma$ such that $\gamma(0) = x, \gamma(\infty) = \xi$. For $x,y,z \in X$, the Gromov product of $y,z$ with respect to $x$ is defined by $$\label{gromov_product} (y|z)_x := \frac{1}{2} \left(d(x,y)+d(x,z)-d(y,z)\right) \:.$$ If the space $X$ is in addition $CAT(0)$, then for any $x \in X$, the Gromov product ${(\cdot|\cdot)}_x$ extends continuously to $\partial X \times \partial X$ (see [@B]) and hence we define: $$(\xi|\eta)_x := \displaystyle\lim_{\substack{y \to \xi \\ z \to \eta}} (y|z)_x \:.$$ We note that $(\xi|\eta)_x = +\infty$ if and only if $\xi = \eta \in \partial X$. Moreover, the above boundary continuity of the Gromov product results in the boundary continuity of the Busemann function defined in ([\[busemann\]](#busemann){reference-type="ref" reference="busemann"}). ## Harmonic Manifolds In this subsection we discuss the required preliminaries on harmonic manifolds. The material covered here can be found in [@BKP]. Let $X$ be a non-compact harmonic manifold of purely exponential volume growth, with origin $o \in X$. By purely exponential volume growth, it is meant that there exists $h > 0$ such that for all $R > 1$, the volume of the metric ball $B(x, R)$ with center $x \in X$ and radius $R$ satisfies $$vol(B(x,R)) \asymp e^{hR} \:.$$ In our case, it turns out that the constant $h > 0$ agrees with the mean curvature of the horospheres.
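For orientation, the simplest example is the real hyperbolic space $\mathbb H^3(-1)$, a harmonic manifold with $h = 2$, where the ball volume has the closed form $vol(B(x,R)) = \pi\left(\sinh(2R) - 2R\right)$. The following is a minimal numerical sketch of purely exponential volume growth in this model (the choice of $\mathbb H^3(-1)$ is an illustrative assumption, not the general setting of this paper):

```python
import math

def vol_ball_H3(R):
    """Volume of a metric ball of radius R in the model space H^3(-1),
    obtained by integrating the sphere area 4*pi*sinh(t)^2 over [0, R]."""
    return math.pi * (math.sinh(2 * R) - 2 * R)

# Purely exponential volume growth with h = 2: the ratio
# vol(B(R)) / e^{2R} stays between two positive constants for R >= 1,
# and tends to pi/2 as R -> infinity.
ratios = [vol_ball_H3(R) / math.exp(2 * R) for R in (1, 2, 5, 10, 20)]
print(ratios)
```

The bounded ratio is exactly the content of $vol(B(x,R)) \asymp e^{hR}$ in this model.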
On harmonic manifolds, harmonic functions satisfy the usual mean value property on balls and spheres. For any $v \in T^1_x X$ and $r > 0$, let $A(v,r)$ denote the Jacobian of the map $v \mapsto \exp_x(rv)$. The definition of a harmonic manifold given in the Introduction is equivalent ([@Willmore p. 224]) to the fact that this Jacobian is solely a function of the radius, that is, there is a function $A$ on $(0,\infty)$ such that $A(v,r) = A(r)$ for all $v \in T^1X$. This function $A$ is called the *density function* of $X$. $A$ satisfies the following asymptotics: $$\label{jacobian_estimate} A(r) \asymp \begin{cases} r^{n-1} & \text{ if } 0<r\le 1 \\ e^{hr} & \text{ if } r>1 \:. \end{cases}$$ In [@Kn12], it was shown that for a simply connected non-compact harmonic manifold $X$ with a fixed basepoint $o \in X$, the condition of purely exponential volume growth is equivalent to each of the following conditions: 1. $X$ is Gromov hyperbolic. 2. $X$ has rank one. 3. The geodesic flow of $X$ is Anosov with respect to the Sasaki metric. Moreover, the Gromov boundary coincides with the visibility boundary $\partial X$ introduced in [@Eberlein]. This last fact follows from the work in [@KP16]. One has a family of measures on $\partial X$ called the visibility measures $\{\lambda_x\}_{x \in X}$. For $x \in X$, let $\theta_x$ denote the normalized canonical measure on $T^1_x X$ (the unit tangent space at $x$) induced by the Riemannian metric; the visibility measure $\lambda_x$ is then obtained as the push-forward of $\theta_x$ to the boundary $\partial X$ under the radial projection. The visibility measures $\lambda_x$ are pairwise absolutely continuous.
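As a concrete instance of the asymptotics ([\[jacobian_estimate\]](#jacobian_estimate){reference-type="ref" reference="jacobian_estimate"}): in the model space $\mathbb H^n(-1)$ (again an illustrative assumption only) the density function is $A(r) = \sinh^{n-1} r$ with $h = n-1$, and both regimes can be checked numerically:

```python
import math

n = 4          # model H^4(-1); here h = n - 1 = 3
h = n - 1

def A(r):
    """Density function of the model space H^n(-1): A(r) = sinh(r)^(n-1)."""
    return math.sinh(r) ** (n - 1)

# Regime 1: A(r) comparable to r^{n-1} for 0 < r <= 1.
small = [A(r) / r ** (n - 1) for r in (0.01, 0.1, 0.5, 1.0)]
# Regime 2: A(r) comparable to e^{hr} for r > 1
# (the ratio tends to 2^{-(n-1)} = 1/8 as r -> infinity).
large = [A(r) / math.exp(h * r) for r in (1.0, 2.0, 5.0, 10.0)]
print(small, large)
```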
For $(x, \xi) \in X \times \partial X$, the Poisson kernel is obtained as the following Radon–Nikodym derivative: $$P(x, \xi) = e^{-hB_\xi(x)} = \frac{d \lambda_x}{d \lambda_o}(\xi) \:.$$ As a consequence of the above identity, one has $P[\lambda_o] \equiv 1$: indeed, $P[\lambda_o](x) = \int_{\partial X} \frac{d\lambda_x}{d\lambda_o}\: d\lambda_o = \lambda_x(\partial X) = 1$, since each $\lambda_x$ is a probability measure. The following is the Martin representation formula for positive harmonic functions on $X$, which is a consequence of [@KL Corollary 5.13]: **Lemma 8**. *Let $u$ be a positive harmonic function on $X$. Then there is a unique, finite, positive Borel measure $\mu$ on $\partial X$ such that $u=P[\mu]$.* Next we introduce the notion of radial functions. For $x \in X$, let $d_x$ denote the distance function with respect to the point $x$, that is, $d_x(y):=d(x, y)$. A function $f$ on $X$ is called *radial* around a point $x \in X$ if $f$ is constant on geodesic spheres centered at $x$. Note that to a function $f$ radial around a point $x \in X$, we can associate a function $u$ on $\mathbb R$ such that $f=u \circ d_x$. If we just say that a function $f$ is radial, then it will be understood that $f$ is radial around $o$, that is, there exists a function $u$ on $\mathbb R$ such that $f=u \circ d_o$. For $x \in X$, the *$x$-translate* of a radial function $f$ is defined as: $$\label{translate} \tau_xf:= u \circ d_x \:.$$ Let $\Delta$ denote the Laplace-Beltrami operator associated to the Riemannian metric on $X$. Then one has the following result for harmonic manifolds: **Lemma 9**. *Let $f \in C^2(X)$ be radial. Then we have for all $x \in X$, $$\tau_x ( \Delta f) = \Delta (\tau_x f) \:.$$* *Proof.* Let $L_R$ denote the radial part of $\Delta$, that is, the differential operator on $(0,\infty)$ defined by, $$L_R := \frac{d^2}{dr^2} + \frac{A'(r)}{A(r)} \frac{d}{dr} \:.$$ Let $f=u \circ d_o$, where $u$ is the corresponding function on $\mathbb R$.
Then by repeated application of Proposition $3.2$ of [@BKP], we get $$\tau_x (\Delta f) = \tau_x \left( \left(L_R u\right) \circ d_o \right) = (L_R u) \circ d_x = \Delta (u \circ d_x) = \Delta ( \tau_x f )\:.$$ ◻ For a measurable function $f$ on $X$ and a measurable radial function $g$ on $X$, their convolution is defined as $$\label{convolution} f*g(x):= \int_X f(y) (\tau_x g)(y) dvol(y) \:,$$ whenever the above integral is well-defined. The following lemma summarizes a few important properties of convolution. The proofs are straightforward consequences of the definition and can also be found in [@BKP; @PS]. **Lemma 10**. *(1) If $f$ and $g$ are both measurable radial functions on $X$ then if their convolution is defined at $x \in X$, one has $$(f*g)(x)=(g*f)(x)\:.$$* *(2) If $f$ is a measurable function on $X$, $g$ and $h$ are measurable radial functions on $X$ such that the convolutions are defined at $x \in X$, then $$(f*(g*h))(x) = ((f*g)*h)(x) \:.$$* *(3) If $f$ and $g$ are two radial functions on $X$ such that their convolution $f*g$ is defined at all points in $X$ then $f*g$ is also a radial function.* For $\xi \in \partial X$, the level sets of the Busemann function $B_\xi$ are called horospheres based at $\xi$. For all $\xi \in \partial X$, the horospheres based at $\xi$ have the same positive, constant mean curvature $h>0$; this can be expressed as $$\label{LapBus} \Delta B_\xi \equiv h \:.$$ The following is a version of the Harnack inequality due to Yau in [@Y75]: **Lemma 11** (Harnack-Yau). *Let $X$ be a Hadamard manifold with $-b^2 \le K_X \le 0$. Then there exists a constant $C(b,n)>0$ such that for any open set $\Omega \subset X$ and every positive harmonic function $u : \Omega \to (0,+\infty)$, one has $$\|\nabla \log u(x)\| \le C(b,n) \:,\text{ for all } x \in \Omega \text{ with } d(x, \partial \Omega) \ge 1\:.$$* We next state without proof an easy consequence of Harnack-Yau: **Lemma 12**.
*Let $X$ be as in Lemma [Lemma 11](#Harnack_Yau){reference-type="ref" reference="Harnack_Yau"} and $\{f_n\}$ be a non-decreasing sequence of harmonic functions on an open connected set $\Omega \subset X$. Then either $f_n(x) \to +\infty$ for all $x \in \Omega$, or $\{f_n\}$ converges to a harmonic function uniformly on compact subsets of $\Omega$.* While working in polar coordinates, we will frequently use the following notation: for $x \in X$ and $v \in T^1_x X$, $\gamma_{x,v}$ is the geodesic such that $\gamma_{x,v}(0)=x$ and $\gamma'_{x,v}(0)=v$. For $f \in C^2(X)$, one has, by Taylor expansion, for $x\in X$ and $t>0$ sufficiently small: $$\label{infinitesimal_mvp} \Delta f(x) \frac{t^2}{2n} + C(n) E(t) = \int_{T^1_x X} \left\{f\left(\gamma_{x,v}(t)\right) - f(x)\right\} \: d\theta_x(v)\:,$$ for some constant $C(n)>0$ and a term $E(t)$ which is of order $t^3$. We recall that for an open subset $\Omega \subset X$, an upper semi-continuous function $f: \Omega \to [-\infty,+\infty)$, with $f \not \equiv -\infty$, is subharmonic on $\Omega$ if $$\label{submvp} f(x) \le \int_{T^1_x X} f\left(\gamma_{x,v}(r)\right) d\theta_x(v) \:,$$ for all $x \in \Omega$ and $r>0$ sufficiently small. It is known that if $f$ is subharmonic on $X$ then ([\[submvp\]](#submvp){reference-type="ref" reference="submvp"}) is true for all $r>0$. Moreover, $f$ is locally integrable and bounded above on compact sets. For $f \in C^2(X)$, the above notion of subharmonicity is equivalent to the condition that $\Delta f \ge 0$. A function $f$ is superharmonic if $-f$ is subharmonic. Now, since in our case $$\int_1^{+\infty} \frac{1}{A(r)} \: dr < +\infty \:,$$ we have a positive Green function, which is a radial function defined by $$\label{Green_fn} G(r) = \frac{1}{C(n)} \int_{r}^{+\infty} \frac{1}{A(s)} \:ds \:,$$ for some constant $C(n)>0$.
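A direct computation with the radial operator $L_R$ from the proof of Lemma 9 shows that $G$ is annihilated by $L_R$ away from the origin, which underlies the harmonicity of the Green function off its pole:

```latex
G'(r) = -\frac{1}{C(n)\,A(r)} \:, \qquad
G''(r) = \frac{A'(r)}{C(n)\,A(r)^{2}} \:,
\qquad\text{hence}\qquad
L_R\,G(r) = G''(r) + \frac{A'(r)}{A(r)}\,G'(r) = 0 \:, \quad r > 0 \:.
```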
Then ([\[jacobian_estimate\]](#jacobian_estimate){reference-type="ref" reference="jacobian_estimate"}) yields the following estimates of the Green function: $$\label{green_estimate} G(r) \asymp \begin{cases} \frac{1}{r^{n-2}} & \text{ if } 0<r\le 1 \\ e^{-hr} & \text{ if } r>1 \:, \end{cases}$$ up to a positive constant depending only on $n$ and $h$, which we denote by $C_1(h,n)$. Then for $x \in X$ the Green function with pole at $x$ is defined by, $$G_x(y) := (G \circ d_x)(y)= G(d(x,y)) \:, \text{ for } y \in X\:,$$ and is denoted by $G(x,y)$. Note that it is symmetric in its arguments. The distributional Laplacian of $G_x$ is $$\Delta G_x = - \delta_x \:.$$ $G_x$ is harmonic on $X \setminus \{x\}$ and superharmonic on $X$. For a non-negative Borel measure $\mu$ on $X$, we say that it has a well-defined Green potential if there exists $x_0 \in X$ such that $$G[\mu](x_0) = \int_X G(x_0,y)\: d\mu(y) < +\infty \:.$$ A well-defined Green potential is again a positive superharmonic function. If the sectional curvature satisfies $K_X \le -1$, then $\partial X$ is equipped with the visual metric, $$\label{visual_metric} \rho(\xi,\eta):= e^{-{(\xi|\eta)}_o} \:, \text{ for all } \xi,\eta \in \partial X \:.$$ For $r \in (0,1]$, we have the visual balls with radius $r$ and center $\xi \in \partial X$, $$\label{visual_ball} \mathscr{B}(\xi,r) = \{\eta \in \partial X : \rho(\xi,\eta) < r\} \:.$$ In the general case of a harmonic manifold of purely exponential volume growth, although $\rho$ only defines a quasi-metric, the visibility measure $\lambda_o$ satisfies the following estimate for all $\xi \in \partial X$ and for all $r \in (0,1]$ : $$\label{visual_measure_estimate} \lambda_o\left(\mathscr{B}(\xi,r)\right) \le e^{6\delta h} r^h \:,$$ where $\delta$ is the Gromov hyperbolicity constant. However, one can get a metric by raising $\rho$ to suitable powers. For such spaces, one has the notion of the 'asymptotic upper curvature bound of $X$', denoted by $-s^2_0$ (see [@BF; @S]).
It is a critical exponent $s_0 \in (0,+\infty]$ such that for all $s \in (0,s_0)$, $\rho^{s}$ is Lipschitz metrizable, that is, there exists $C_2=C_2(s)>1$ and a metric $\rho_s$ such that $$\label{metric_relation} \frac{1}{C_2} \rho_s \le \rho^{s} \le C_2 \rho_s \:.$$ For such a fixed $s \in (0,s_0)$, we work with the metric $\rho_s$. By $\mathscr{B}_s(\xi,r)$ we denote a visual ball in the metric $\rho_s$, with center $\xi$ and radius $r$. Then one has the following containment relations: $$\label{contain_rel1} \mathscr{B}_s\left(\xi, \frac{r^s}{C_2}\right) \subset \mathscr{B}(\xi,r) \subset \mathscr{B}_s\left(\xi,C_2 r^s\right)$$ and for $C_3=C^{1/s}_2>1$, $$\label{contain_rel2} \mathscr{B}\left(\xi, \frac{r^{1/s}}{C_3}\right) \subset \mathscr{B}_s(\xi,r) \subset \mathscr{B}\left(\xi,C_3 r^{1/s}\right) \:.$$ For $\xi \in \partial X$, following the definition of $\gamma_\xi$ mentioned in the introduction, we define the shadow of a ball $B=B(x,r) \subset X$ (viewed from $o$) at $\partial X$ to be the set $$\mathcal{O}_o(B) := \{\xi \in \partial X: \gamma_\xi(t) \in B\:, \text{ for some } t >0\} \:.$$ Using the fact that the underlying $X$ is Gromov $\delta$-hyperbolic, one has the following standard 'shadow lemma' for balls with sufficiently large radius: **Lemma 13**. *There exists $C_4=C_4(\delta,s)>0$ such that for $r \in \left(0, \min\left\{\frac{1}{C^s_4},\frac{1}{C_2}\right\}\right)$ and for all $\xi \in \partial X$, we have $$\mathscr{B}_s(\xi,r) \subset \mathcal{O}_o\left(B\left(\gamma_\xi\left(\log\left(\frac{1}{C_4\:r^{1/s}}\right)\right),1+\delta\right)\right) \:.$$* As mentioned in the Introduction, the sectional curvature of $X$ satisfies $K_X \ge -b^2$ for some $b>0$. If three points in $X$ lie on the same geodesic, then they are called *collinear*. For three points $x,y,z$ which are not collinear, we form the geodesic triangle $\triangle$ in $X$ by the geodesic segments $[x, y],\: [y, z],\: [z, x]$.
A comparison triangle is a geodesic triangle $\overline{\triangle}$ in $\mathbb{H}^2(-b^2)$ formed by geodesic segments $[\overline{x}, \overline{y}],\: [\overline{y}, \overline{z}],\: [\overline{z}, \overline{x}]$ of the same lengths as those of $\triangle$ (such a triangle exists and is unique up to isometry). Let $\theta(y,z)$ denote the Riemannian angle subtended at $x$ by the points $y$ and $z$. The corresponding angle between $\overline{y}$ and $\overline{z}$ subtended at $\overline{x}$ is called the *comparison angle* of $\theta(y,z)$ in $\mathbb{H}^2(-b^2)$ and denoted by $\theta_b(y,z)$. Then by Alexandrov's angle comparison theorem, $$\label{finite_angle_comparison} \theta_b(y,z) \le \theta(y,z) \:.$$ Consider the geodesic joining $x$ to $y$ and the geodesic joining $x$ to $z$, and extend them to infinite geodesic rays. These rays hit $\partial X$ at two points, say $\xi$ and $\eta$ respectively. Now, as points $y'$ and $z'$ on these geodesics converge to $\xi$ and $\eta$ respectively, the comparison angles $\theta_b(y',z')$ increase monotonically, and hence their limit exists. We define the comparison angle $\theta_b(\xi, \eta)$ to be this limit and in fact we have, $$\label{infinite_riemannian_angle_bound} e^{-b(\xi|\eta)_x} = \sin \left(\frac{\theta_b(\xi,\eta)}{2}\right) \le \sin \left(\frac{\theta(\xi,\eta)}{2}\right) \:,$$ where $\theta(\xi,\eta)$ is the Riemannian angle between $\xi$ and $\eta$ subtended at $x$. ## Hausdorff Outer Measure and Hausdorff Dimension In the setting of a general metric space, we now briefly recall the definitions of the Hausdorff dimension and the Hausdorff outer measure, together with some of their important properties. These can be found in [@F]. Let $(M,d)$ be a metric space.
Then for $\varepsilon > 0$, an $\varepsilon$-cover of a set $E \subset M$ is a countable (or finite) collection of sets $\{U_i\}$ with $$0 < diameter\left(U_i\right) \le \varepsilon \text{, for all }i \text{ such that } E \subset \displaystyle\bigcup_{i} U_i \:.$$ For $t \ge 0$, we recall that $$\mathcal{H}^t_\varepsilon(E) := \inf \left\{\displaystyle\sum_{i} {\left(diameter\left(U_i\right)\right)}^t : \{U_i\} \text{ is an } \varepsilon\text{-cover of } E\right\}\:.$$ Then the $t$-dimensional Hausdorff outer measure of $E$ is defined by, $$\mathcal{H}^t(E) := \displaystyle\lim_{\varepsilon \to 0} \mathcal{H}^t_\varepsilon(E) \:.$$ If one only considers covers consisting of balls, the resulting outer measure is comparable to $\mathcal{H}^t$ (up to a factor of $2^t$), so it has the same null sets and yields the same dimension. The Hausdorff dimension of $E$ is defined by $$dim_{\mathcal{H}}E := \inf \left\{t \ge 0 : \mathcal{H}^t(E) < +\infty\right\} \:.$$ The following properties of the Hausdorff dimension and the Hausdorff outer measure will be crucial: - *Countable stability:* if $\{E_i\}_{i=1}^\infty$ is a countable sequence of sets in $(M,d)$, then $$dim_{\mathcal{H}}\left(\displaystyle\bigcup_{i=1}^\infty E_i\right) =\displaystyle\sup_{i \in \mathbb N} \left\{dim_{\mathcal{H}} E_i\right\} \:.$$ - *Non-increasing in dimension:* if $0 < t_1 \le t_2$ then for any $E$, $\mathcal{H}^{t_2}(E) \le \mathcal{H}^{t_1}(E)$. # Boundary behavior of Poisson integrals For any complex measure $\mu$ on $\partial X$, its Poisson integral $P[\mu]$ is a complex-valued harmonic function on $X$. In this section we will determine the size of the exceptional sets of such Poisson integrals along radial geodesic rays. The key to this analysis is an estimate in terms of a maximal function. ## Estimates of Maximal Function Let $0< \alpha_1 < \alpha_2 \le 1$ and $\xi \in \partial X$.
Then for a complex measure $\mu$ on $\partial X$, we consider the following maximal function: $$\label{maximal_fn} M_{\alpha_1,\alpha_2}[\mu](\xi) := \displaystyle\sup_{\alpha_1 \le r \le \alpha_2} \frac{|\mu|(\mathscr{B}(\xi,r))}{r^h} \:.$$ When $d\mu = f d\lambda_o$ for some suitable function $f$ on $\partial X$, we will denote the corresponding maximal function by $M_{\alpha_1,\alpha_2}[f]$. Next, we prove an estimate relating the Poisson integral of a complex measure to the maximal function corresponding to the measure. **Lemma 14**. *Let $\tau \ge 1,\: 0<\varepsilon \le 1$ and $\mu$ be a complex measure on $\partial X$. Then there exists a constant $C(h) > 0$ such that for all $t > \log(\tau / \varepsilon)$, one has for all $\xi \in \partial X$, $$\label{maximal_fn_ineq} \left|P[\mu]\left(\gamma_{\xi}(t)\right)\right| \le C(h) \left\{e^{ht}|\mu|\left(\mathscr{B}\left(\xi,\tau e^{-t}\right)\right) + \frac{M_{\tau e^{-t},\varepsilon}[\mu](\xi)}{\tau^h} + \frac{e^{-ht}}{\varepsilon^{2h}} |\mu|(\partial X) \right\} \:.$$* *Proof.* Fix $\xi \in \partial X$ and $t > \log(\tau / \varepsilon)$. Then note that $\tau e^{-t} < \varepsilon$.
Hence there exists a largest non-negative integer $m$ such that $$2^m \tau e^{-t} \le \varepsilon \:.$$ Let $$\begin{aligned} \mathscr{B}^{(0)} &= \mathscr{B}\left(\xi,\tau e^{-t}\right) \:, \\ \mathscr{B}^{(j)} &= \mathscr{B}\left(\xi, 2^j \tau e^{-t}\right) \setminus \mathscr{B}\left(\xi, 2^{j-1} \tau e^{-t}\right) ,\:\text{for } 1 \le j \le m \:,\\ \mathscr{B}^{(m+1)} &= \partial X \setminus \mathscr{B}\left(\xi, 2^m \tau e^{-t}\right) \:.\end{aligned}$$ Now $$\left|P[\mu]\left(\gamma_{\xi}(t)\right)\right| \le \displaystyle\sum_{j=0}^{m+1} I_j \:,$$ where $$I_j = \int_{\mathscr{B}^{(j)}} e^{-hB_{\eta}\left(\gamma_{\xi}(t)\right)} \: d|\mu|(\eta) \:,\:\text{for } 0 \le j \le m+1\:.$$ We note that, by the triangle inequality, for all $\eta \in \partial X$, $$B_{\eta}\left(\gamma_{\xi}(t)\right) = \displaystyle\lim_{t' \to \infty} \left(d\left(\gamma_{\xi}(t), \gamma_\eta(t')\right)- d\left(o, \gamma_\eta(t')\right)\right) \ge -d\left(o, \gamma_\xi(t)\right)=-t \:.$$ Hence, $$I_0 \le \int_{\mathscr{B}^{(0)}} e^{ht} \:d|\mu|(\eta) = e^{ht} \:|\mu|\left(\mathscr{B}\left(\xi,\tau e^{-t}\right)\right) \:.$$ Next we note that Gromov products are monotonically non-decreasing along geodesics, which is a simple consequence of the triangle inequality.
Hence in particular, for all $\eta \in \partial X$ such that $\eta \ne \xi$, one has $$\displaystyle\lim_{t' \to \infty} \left(\gamma_{\xi}(t)|\gamma_{\eta}(t')\right)_o \le \left(\xi|\eta\right)_o \:.$$ Combining the above with the facts that - $B_{\eta}\left(\gamma_{\xi}(t)\right) = t - 2 \displaystyle\lim_{t' \to \infty} \left(\gamma_{\xi}(t) | \gamma_\eta(t')\right)_o \:,$ - $e^{-{(\xi|\eta)}_o} \ge 2^{j-1} \tau e^{-t} \text{ when } \eta \in \mathscr{B}^{(j)}\:,\text{ for } 1 \le j \le m\:,$ it follows that $$\begin{aligned} I_j &\le & \int_{\mathscr{B}^{(j)}} e^{-ht}\: e^{2h{(\xi|\eta)}_o}\: d|\mu|(\eta) \\ & \le & \int_{\mathscr{B}^{(j)}} \frac{e^{-ht}}{{\left(2^{j-1} \tau e^{-t}\right)}^{2h}} \:d|\mu|(\eta) \\ & \le & \frac{|\mu|\left(\mathscr{B}\left(\xi, 2^j \tau e^{-t}\right) \right)}{{\left(2^{j-2} \tau\right)}^h {\left(2^j \tau e^{-t}\right)}^h} \\ & \le & \frac{1}{{\left(2^{j-2} \tau\right)}^h} \:M_{\tau e^{-t}, \varepsilon}[\mu](\xi) \:.\end{aligned}$$ Therefore, there exists $C(h) > 0$ such that, $$\displaystyle\sum_{j=1}^m I_j \le \left(\displaystyle\sum_{j=1}^m \frac{1}{{\left(2^{j-2}\right)}^h}\right) \frac{M_{\tau e^{-t}, \varepsilon}[\mu](\xi)}{\tau^h} \le C(h)\:\frac{M_{\tau e^{-t}, \varepsilon}[\mu](\xi)}{\tau^h} \:.$$ Repeating the same argument as above, we get $$\label{maximal_fn_ineq_last_step} I_{m+1} \le \int_{\mathscr{B}^{(m+1)}} \frac{e^{-ht}}{{\left(2^{m} \tau e^{-t}\right)}^{2h}} \:d|\mu|(\eta) \:.$$ Now by the choice of $m$, $$2^m \tau e^{-t} > \frac{\varepsilon}{2} \:.$$ Plugging the above in ([\[maximal_fn_ineq_last_step\]](#maximal_fn_ineq_last_step){reference-type="ref" reference="maximal_fn_ineq_last_step"}), it follows that $$I_{m+1} \le \int_{\mathscr{B}^{(m+1)}} \frac{2^{2h}\:e^{-ht}}{{\varepsilon}^{2h}} \:d|\mu|(\eta) \le \frac{2^{2h}\:e^{-ht}}{\varepsilon^{2h}} |\mu|\left(\partial X\right) \:.$$ Then summing up the above estimates, we get ([\[maximal_fn_ineq\]](#maximal_fn_ineq){reference-type="ref" 
reference="maximal_fn_ineq"}). ◻ Lemma [Lemma 14](#maximal_fn_lem){reference-type="ref" reference="maximal_fn_lem"} has the following consequences. **Corollary 15**. *Let $0<\varepsilon \le 1$ and $\mu$ be a complex measure on $\partial X$. Then there exists a constant $C(h) > 0$ such that for all $t > \log(1 / \varepsilon)$, one has for all $\xi \in \partial X$, $$\left|P[\mu]\left(\gamma_{\xi}(t)\right)\right| \le C(h) \left\{ 2M_{ e^{-t},\varepsilon}[\mu](\xi) + \frac{e^{-ht}}{\varepsilon^{2h}} |\mu|(\partial X) \right\} \:.$$* *Proof.* The Corollary follows by taking $\tau=1$ in Lemma [Lemma 14](#maximal_fn_lem){reference-type="ref" reference="maximal_fn_lem"}. ◻ **Corollary 16**. *Let $\xi \in \partial X,\:\tau > 1$ and $t>\log(\tau)$. If $f$ is a non-negative measurable function on $\partial X$ such that $f \equiv 1$ on $\mathscr{B}\left(\xi,\tau e^{-t}\right)$ and $f \le 1$ on $\partial X$, then there exists $C_5=C_5(h, \delta)>0$ (where $\delta$ is the Gromov hyperbolicity constant) such that $$P[f]\left(\gamma_{\xi}(t)\right) \ge 1 - \frac{C_5}{\tau^h} \:.$$* *Proof.* Let $t>\log(\tau)$. 
We consider $$g:= 1-f \:.$$ Then $g$ is a measurable, non-negative function on $\partial X$ such that $$g \equiv 0 \text{ on } \mathscr{B}\left(\xi,\tau e^{-t}\right) \text{ and } g \le 1 \text{ on } \partial X\:.$$ Then applying Lemma [Lemma 14](#maximal_fn_lem){reference-type="ref" reference="maximal_fn_lem"}, for $d\mu = g\: d\lambda_o$ and $\varepsilon=1$, we get that there exists $C(h)>0$ such that, $$\begin{aligned} \label{cor2_eq} P[g]\left(\gamma_\xi(t)\right) &\le & C(h) {\left(\frac{1}{\tau}\right)}^h M_{\tau e^{-t},1}[\mu](\xi) \nonumber\\ &\le & C(h) {\left(\frac{1}{\tau}\right)}^h M_{\tau e^{-t},1}[\lambda_o](\xi) \:. \end{aligned}$$ Now using ([\[visual_measure_estimate\]](#visual_measure_estimate){reference-type="ref" reference="visual_measure_estimate"}) and ([\[maximal_fn\]](#maximal_fn){reference-type="ref" reference="maximal_fn"}), it follows that $$M_{\tau e^{-t},1}[\lambda_o](\xi) = \displaystyle\sup_{\tau e^{-t} \le r \le 1} \frac{\lambda_o\left(\mathscr{B}(\xi,r)\right)}{r^h} \le e^{6\delta h} \:.$$ Then plugging the above in ([\[cor2_eq\]](#cor2_eq){reference-type="ref" reference="cor2_eq"}), one has for some $C(h, \delta)>0$, $$P[g]\left(\gamma_\xi(t)\right) \le \frac{C(h,\delta)}{\tau^h} \:.$$ Thus, $$P[f]\left(\gamma_{\xi}(t)\right) = 1 - P[g]\left(\gamma_{\xi}(t)\right) \ge 1 - \frac{C(h,\delta)}{\tau^h} \:,$$ which proves the claim with $C_5 = C(h,\delta)$. ◻ ## Upper bound on the Hausdorff dimension *Proof of Theorem [Theorem 1](#poisson_thm){reference-type="ref" reference="poisson_thm"}.* For $L>0$, we set $$\label{defn_EL} E^L_{\beta}(P[\mu]) :=\left\{\xi \in \partial X : \displaystyle\limsup_{t \to +\infty} e^{-\beta t} \left|P[\mu]\left(\gamma_{\xi}(t)\right)\right| > L\right\} \:.$$ Our strategy will be to obtain useful estimates on the $(h-\beta)/s$-dimensional Hausdorff outer measure of the set defined in ([\[defn_EL\]](#defn_EL){reference-type="ref" reference="defn_EL"}). First we choose and fix $\varepsilon \in (0,1)$ and $\xi \in E^L_{\beta}(P[\mu])$.
Then, by Corollary [Corollary 15](#cor1){reference-type="ref" reference="cor1"}, there exists $C(h) > 0$ such that $$C(h)L < \displaystyle\limsup_{t \to +\infty} e^{-\beta t}\: M_{e^{-t}, \varepsilon}[\mu](\xi).$$ Hence, there exists $t_\xi \in (0,+\infty)$ satisfying $e^{-t_{\xi}} \le \varepsilon$ such that $$\label{poisson_pf_eq} C(h)L < e^{-\beta t_\xi}\: \frac{|\mu|\left(\mathscr{B}\left(\xi, e^{-t_\xi}\right)\right)}{e^{-ht_\xi}} \le e^{-\beta t_\xi}\: \frac{|\mu|\left(\mathscr{B}_s\left(\xi,C_2\: e^{-st_\xi}\right)\right)}{e^{-ht_\xi}} \:.$$ Now, by the Vitali $5$-covering lemma, there exist countably many visual balls $\{\mathscr{B}_s\left(\xi_j, r_j\right)\}_{j=1}^\infty$, whose centers $\xi_j$ satisfy ([\[poisson_pf_eq\]](#poisson_pf_eq){reference-type="ref" reference="poisson_pf_eq"}), such that - $r_j := C_2\:e^{-st_{\xi_j}} \le C_2 \:\varepsilon^s$, for all $j \in \mathbb N$, - $\mathscr{B}_s\left(\xi_j, r_j\right) \cap \mathscr{B}_s\left(\xi_k, r_k\right) = \emptyset$ for all $j \ne k$, - $E^L_{\beta}(P[\mu]) \subset \displaystyle\bigcup_{j=1}^\infty \mathscr{B}_s\left(\xi_j, 5r_j\right)$ .
Then by ([\[poisson_pf_eq\]](#poisson_pf_eq){reference-type="ref" reference="poisson_pf_eq"}), there exists $C(h,\beta,s)>0$ such that $$\begin{aligned} \displaystyle\sum_{j=1}^\infty {\left(diameter\left(\mathscr{B}_s\left(\xi_j, 5r_j\right)\right)\right)}^{(h-\beta)/s} & \le & \left(\frac{C(h,\beta,s)}{L}\right) \displaystyle\sum_{j=1}^\infty |\mu|\left(\mathscr{B}_s\left(\xi_j, r_j\right)\right) \\ &=& \left(\frac{C(h,\beta,s)}{L}\right) |\mu|\left(\bigcup_{j=1}^\infty \mathscr{B}_s\left(\xi_j, r_j\right)\right) \\ &\le & \left(\frac{C(h, \beta, s)}{L}\right) |\mu|(\partial X) \:.\end{aligned}$$ We note that the constant appearing on the right-hand side of the last inequality is independent of the choice of $\varepsilon$, and hence letting $\varepsilon \to 0$, we get that $$\label{poisson_pf_eq2} \mathcal{H}^{(h-\beta)/s}\left(E^L_{\beta}(P[\mu])\right) \le \frac{C(h, \beta, s)}{L} |\mu|\left(\partial X\right) < +\infty \:.$$ As $E^\infty_{\beta}(P[\mu]) \subset E^L_{\beta}(P[\mu])$ for all $L>0$, it follows that $$\mathcal{H}^{(h-\beta)/s}\left(E^\infty_{\beta}(P[\mu])\right) =0 \:.$$ Finally, combining the countable stability of the Hausdorff dimension and ([\[poisson_pf_eq2\]](#poisson_pf_eq2){reference-type="ref" reference="poisson_pf_eq2"}), we obtain $$dim_{\mathcal{H}} E_\beta(P[\mu]) = \displaystyle\sup_{m \in \mathbb N} \left\{dim_{\mathcal{H}} E^{\frac{1}{m}}_\beta(P[\mu])\right\} \le (h-\beta)/s \:.$$ ◻ ## The sharpness result *Proof of Theorem [Theorem 2](#poisson_sharp_thm){reference-type="ref" reference="poisson_sharp_thm"}.* Since $\mathcal{H}^{(h-\beta)/s}(E)=0$, for any $m \in \mathbb N$, there exists a covering of $E$ by visual balls $\{\mathscr{B}^{(m,j)}_s\}_{j=1}^\infty$ such that $$\label{poisson_sharp_eq1} \displaystyle\sum_{j=1}^\infty {\left(diameter\left(\mathscr{B}^{(m,j)}_s\right)\right)}^{(h-\beta)/s} < 2^{-m} \:.$$ If $\mathscr{B}_s$ is a visual ball with center $\eta \in \partial X$ and radius $r$, then, for notational convenience,
$2\mathscr{B}_s$ will denote the visual ball with the same center $\eta$ and twice the radius, that is, $2r$. Now we define, $$\label{defn_f_poisson} f:= \displaystyle\sum_{j,m} m\:{\left(diameter\left(\mathscr{B}^{(m,j)}_s\right)\right)}^{-(\beta/s)}\: \chi_{2\mathscr{B}^{(m,j)}_s} \:.$$ Then by ([\[visual_measure_estimate\]](#visual_measure_estimate){reference-type="ref" reference="visual_measure_estimate"}) and ([\[poisson_sharp_eq1\]](#poisson_sharp_eq1){reference-type="ref" reference="poisson_sharp_eq1"}), it follows that for some $C(h,\delta,s)>0$, $$\begin{aligned} \int_{\partial X} f d \lambda_o & \le & \displaystyle\sum_{j,m} m \: {\left(diameter\left(\mathscr{B}^{(m,j)}_s\right)\right)}^{-(\beta/s)} \:\lambda_o\left(2\mathscr{B}^{(m,j)}_s\right) \\ & \le & C(h,\delta,s) \displaystyle\sum_{j,m} m \:{\left(diameter\left(\mathscr{B}^{(m,j)}_s\right)\right)}^{(h-\beta)/s} \\ & < & C(h,\delta,s) \:\displaystyle\sum_{m=1}^\infty \frac{m}{2^m} \\ & < & +\infty \:.\end{aligned}$$ Thus $f d\lambda_o$ defines a finite, positive Borel measure. Now let $\xi \in E$ and fix $m \in \mathbb N$. Then there exists $j_m \in \mathbb N$ such that $\xi \in \mathscr{B}^{(m,j_m)}_s$. If $r_m$ is the radius of $\mathscr{B}^{(m,j_m)}_s$, then $\mathscr{B}_s(\xi,r_m) \subset 2\mathscr{B}^{(m,j_m)}_s$. Then by Corollary [Corollary 16](#cor2){reference-type="ref" reference="cor2"}, one has $$\label{poisson_sharp_eq2} P\left[\chi_{2\mathscr{B}^{(m,j_m)}_s}\right](\gamma_{\xi}(t)) \ge P\left[\chi_{\mathscr{B}_s(\xi,r_m)}\right](\gamma_{\xi}(t)) \ge P\left[\chi_{\mathscr{B}\left(\xi,\frac{r^{1/s}_m}{C_3}\right)}\right](\gamma_{\xi}(t)) \ge \frac{1}{2} \:,$$ whenever (following the statement of Corollary [Corollary 16](#cor2){reference-type="ref" reference="cor2"}) - $\tau^h > \max \left\{1,\: 2C_5\right\}$, - $t > \log(\tau)$, - $\tau e^{-t} \le \frac{r^{1/s}_m}{C_3}$. 
Hence, choosing $\tau = \tau(h,\delta) > 0$ sufficiently large and setting $$t_m := \log(C_3\tau) + \frac{1}{s}\log\left(\frac{1}{r_m}\right) \:,$$ we have by ([\[poisson_sharp_eq2\]](#poisson_sharp_eq2){reference-type="ref" reference="poisson_sharp_eq2"}), $$\begin{aligned} P[f](\gamma_\xi(t_m)) & \ge & m {\left(diameter\left(\mathscr{B}^{(m,j_m)}_s\right)\right)}^{-(\beta/s)} P\left[\chi_{2\mathscr{B}^{(m,j_m)}_s}\right]\left(\gamma_{\xi}(t_m)\right) \\ & \ge & m \left(2^{-\left(\frac{\beta}{s} +1\right)} C^{-\beta}_3 \tau^{-\beta} \right) e^{\beta t_m} \:. \end{aligned}$$ Hence, there exists $C(h,\delta,\beta,s) >0$ such that for all $m \in \mathbb N$, $$\label{poisson_sharp_eq3} e^{-\beta t_m} P[f](\gamma_\xi(t_m)) \ge C(h,\delta,\beta,s)\: m \:.$$ Now by ([\[poisson_sharp_eq1\]](#poisson_sharp_eq1){reference-type="ref" reference="poisson_sharp_eq1"}), $$t_m > \log\left(2^{1/s}C_3\tau\right) + \frac{m}{h-\beta}\log(2) \to +\infty \text{ as } m \to +\infty \:.$$ Hence ([\[poisson_sharp_eq3\]](#poisson_sharp_eq3){reference-type="ref" reference="poisson_sharp_eq3"}) gives the result. ◻ # Riesz decomposition for subharmonic functions ## Riesz measure In this subsection, our aim is to prove the existence of a unique Radon measure on $X$ corresponding to a subharmonic function: **Proposition 17**. *If $f$ is subharmonic on $X$, then there exists a unique Radon measure $\mu_f$ on $X$ such that $$\int_X \psi \:d\mu_f = \int_X f \Delta \psi \:dvol \:, \text{ for all } \psi \in C^2_c(X)\:.$$* **Definition 18**. *For a subharmonic function $f$ on $X$, the unique Radon measure $\mu_f$ on $X$ obtained in the conclusion of Proposition [Proposition 17](#riesz_meas_prop){reference-type="ref" reference="riesz_meas_prop"} is called the Riesz measure of $f$.* The following lemmas will be important for the proof of Proposition [Proposition 17](#riesz_meas_prop){reference-type="ref" reference="riesz_meas_prop"}. **Lemma 19**. *Let $f$ be a $C^2$ subharmonic function on $X$.
Then for all $x \in X$ and for all $0<r_1 \le r_2$, one has $$\int_{T^1_x X} f\left(\gamma_{x,v}(r_1)\right) d\theta_x(v) \le \int_{T^1_x X} f\left(\gamma_{x,v}(r_2)\right) d\theta_x(v) \:.$$*

*Proof.* Let $u_1$ and $u_2$ be the harmonic extensions of $f$ to the balls $B(x,r_1)$ and $B(x,r_2)$ respectively. By subharmonicity of $f$, we have $f \le u_2$ on $B(x,r_2)$, and hence in particular $u_1 = f \le u_2$ on the sphere $S(x,r_1)$. So by the maximum principle, $u_1(x) \le u_2(x)$. Since the mean value identity for harmonic functions gives $u_i(x) = \int_{T^1_x X} f\left(\gamma_{x,v}(r_i)\right) d\theta_x(v)$ for $i=1,2$, the result follows. ◻

**Lemma 20**. *For $r>0$, we define $\Omega_r := \frac{1}{vol\left(B(o,r)\right)} \chi_{B(o,r)}$. Then for $f \in C^2(X)$ one has for some constant $C(h,n) >0$, $$\Delta f(x) = \displaystyle\lim_{r\to 0} \frac{C(h,n)}{r^2} \left\{\left(f*\Omega_r\right)(x) - f(x) \right\}\:.$$*

*Proof.* Integrating the identity ([\[infinitesimal_mvp\]](#infinitesimal_mvp){reference-type="ref" reference="infinitesimal_mvp"}) and then using the estimates of the density function ([\[jacobian_estimate\]](#jacobian_estimate){reference-type="ref" reference="jacobian_estimate"}) for small $r$ yields the identity $$\Delta f(x) = \displaystyle\lim_{r\to 0} \frac{C(h,n)}{r^2\:vol(B(o,r))} \int_{B(x,r)} \left\{f(y) - f(x) \right\} dvol(y) \:.$$ Now the result follows from the definitions of $\Omega_r$ and the convolution. ◻

Now we introduce the notion of an approximate identity.

**Definition 21**. *A sequence of non-negative continuous functions $\{h_j\}_{j=1}^\infty$ is an approximate identity in $L^1(X, dvol)$ if*

- *$\int_X h_j \:dvol=1$, for all $j \in \mathbb N$ and*

- *${\displaystyle\lim_{j \to \infty}} \int_{X \setminus B(o,\varepsilon)} h_j\: dvol=0$, for all $\varepsilon>0$.*

**Remark 22**. *Let $\{r_j\}_{j=1}^\infty \subset (0,+\infty)$ be a decreasing sequence with $r_j \to 0$ as $j \to +\infty$.
For each $j$, let $h_j$ be a non-negative $C^\infty$ radial function on $X$ with support contained in $\{x \in X: r_{j+1} < d(o,x) < r_j\}$ satisfying $\int_{X} h_j \:dvol=1$. Then the sequence $\{h_j\}_{j=1}^\infty$ forms a $C^\infty$-approximate identity.*

**Lemma 23**. *Let $\{h_j\}_{j=1}^\infty$ be a $C^\infty$-approximate identity as defined in [Remark 22](#ex_of_approx_id){reference-type="ref" reference="ex_of_approx_id"}. If $f$ is subharmonic on $X$, then $\{f*h_j\}_{j=1}^\infty$ is a non-increasing sequence of $C^\infty$ subharmonic functions on $X$ satisfying $$\label{approx_id_eqn} \left(f*h_j\right)(x) \ge f(x) \text{ and } \displaystyle\lim_{j \to \infty} \left(f*h_j\right)(x)=f(x) \:,$$ for all $x \in X$.*

*Proof.* The statement $f*h_j \in C^\infty(X)$ is a simple consequence of the facts that $h_j \in C_c^\infty(X)$ for all $j \in \mathbb N$ and that $f$ is locally integrable. Let $h_j=u_j\circ d_o$, where $u_j$ is the corresponding function on $\mathbb R$. The inequality in ([\[approx_id_eqn\]](#approx_id_eqn){reference-type="ref" reference="approx_id_eqn"}) follows by integration in polar coordinates, $$\begin{aligned} \label{approx_id_eq1} \left(f*h_j\right)(x) &=& \int_{0}^\infty u_j(r)\:A(r) \left(\int_{T^1_x X} f\left(\gamma_{x,v}(r)\right) d\theta_x(v)\right) dr \nonumber\\ & \ge & f(x) \int_{0}^\infty u_j(r)\: A(r)\: dr \nonumber\\ & = & f(x) \:.\end{aligned}$$ Next we fix $x \in X$ and let $\alpha > f(x)$.
By upper semi-continuity of $f$, there exists $r>0$ such that $$f(y) < \alpha \:,\text{ for all } y \in B(x,r) \:.$$ We note that for $r_j < r$, $$Supp\left(\tau_x h_j\right) \subset B(x,r) \:.$$ Then $$\left(f*h_j\right)(x) = \int_{B(x,r)} f(y) \left(\tau_x h_j\right)(y)\: dvol(y) \le \alpha \int_X h_j\: dvol = \alpha \:.$$ Hence, since $\alpha > f(x)$ was arbitrary, $$\displaystyle\limsup_{j \to +\infty} \left(f*h_j\right)(x) \le f(x) \:,$$ which combined with the inequality ([\[approx_id_eq1\]](#approx_id_eq1){reference-type="ref" reference="approx_id_eq1"}) yields ([\[approx_id_eqn\]](#approx_id_eqn){reference-type="ref" reference="approx_id_eqn"}). To show that $f*h_j$ is subharmonic for all $j \in \mathbb N$, we consider the convolution $(f*h_j)*\Omega_r$, where $\Omega_r$ is as defined in Lemma [Lemma 20](#Omega_r){reference-type="ref" reference="Omega_r"}. By repeated applications of Lemma [Lemma 10](#props_of_conv){reference-type="ref" reference="props_of_conv"} and by computations similar to those in ([\[approx_id_eq1\]](#approx_id_eq1){reference-type="ref" reference="approx_id_eq1"}), we get $$(f*h_j)*\Omega_r = \left(f*\Omega_r\right)*h_j \ge f * h_j \:.$$ Then in view of Lemma [Lemma 20](#Omega_r){reference-type="ref" reference="Omega_r"}, it follows that $f*h_j$ is subharmonic for all $j \in \mathbb N$. Finally we show that the sequence is non-increasing. But first we will need an analogue of Lemma [Lemma 19](#non_decr_mvp){reference-type="ref" reference="non_decr_mvp"} for $f$. Consider $t_2 \ge t_1 >0$.
As $f*h_j$ are $C^\infty$-subharmonic functions with $f*h_j(x) \ge f(x)$ for all $x \in X$, we have by applying Lemma [Lemma 19](#non_decr_mvp){reference-type="ref" reference="non_decr_mvp"} to $f*h_j$, $$\begin{aligned} \int_{T^1_x X} f\left(\gamma_{x,v}(t_1)\right)d\theta_x(v) & \le & \int_{T^1_x X} (f*h_j)\left(\gamma_{x,v}(t_1)\right)d\theta_x(v) \\ & \le & \int_{T^1_x X} (f*h_j)\left(\gamma_{x,v}(t_2)\right)d\theta_x(v) \:.\end{aligned}$$ Next, using the fact that subharmonic functions are bounded above on compact sets, we see that the reverse Fatou lemma is applicable on $\{f*h_j\}_{j=1}^\infty$, which yields $$\displaystyle\limsup_{j \to +\infty} \int_{T^1_x X} (f*h_j)\left(\gamma_{x,v}(t_2)\right)d\theta_x(v) \le \int_{T^1_x X} f\left(\gamma_{x,v}(t_2)\right)d\theta_x(v) \:.$$ Combining the last two inequalities, one gets the desired analogue of Lemma [Lemma 19](#non_decr_mvp){reference-type="ref" reference="non_decr_mvp"} for $f$. Now let $m>l$. Since $Supp(h_l)$ is contained in $\{r_{l+1} < d(o,x)< r_l\}$ and $r_{l+1} \ge r_m$, we have $$\begin{aligned} \left(f*h_l\right)(x) &=& \int_{r_{l+1}}^{r_l} u_l(r) A(r) \left(\int_{T^1_x X} f\left(\gamma_{x,v}(r)\right)d\theta_x(v) \right) dr \\ & \ge & \int_{r_{l+1}}^{r_l} u_l(r) A(r) \left(\int_{T^1_x X} f\left(\gamma_{x,v}(r_m)\right)d\theta_x(v) \right) dr \\ &=& \int_{T^1_x X} f\left(\gamma_{x,v}(r_m)\right)d\theta_x(v) \int_X h_l\: dvol \\ &=& \int_{T^1_x X} f\left(\gamma_{x,v}(r_m)\right)d\theta_x(v) \int_X h_m\: dvol \\ & \ge & \int_{r_{m+1}}^{r_m} u_m(r) A(r) \left(\int_{T^1_x X} f\left(\gamma_{x,v}(r)\right)d\theta_x(v) \right) dr \\ &=& \left(f*h_m\right)(x) \:.\end{aligned}$$ ◻

**Remark 24**. 1. *The proof of Lemma [Lemma 23](#approx_id_lemma){reference-type="ref" reference="approx_id_lemma"} shows that Lemma [Lemma 19](#non_decr_mvp){reference-type="ref" reference="non_decr_mvp"} is true for general subharmonic functions.*

2.
*For a subharmonic function $f$, its restrictions to spheres are integrable, that is, $$\label{subh_L1} \int_{T^1_x X} |f(\gamma_{x,v}(r))|\: d\theta_x(v) < +\infty \:, \text{ for all } x \in X \text{ and all } r>0 \:.$$ This is seen as follows. Choose and fix $x \in X$. Then combining the fact that $f$ is locally integrable with the polar decomposition of the volume measure, we get that the function $$\begin{aligned} (0,+\infty) \to [0,+\infty] \:,\text{ defined by } \\ r \mapsto \int_{T^1_x X} |f(\gamma_{x,v}(r))| \: d\theta_x(v) \end{aligned}$$ is locally integrable with respect to the measure $A(r)dr$. Then as $A(r)dr$ is a regular Borel measure on $(0,+\infty)$, it follows that $$\label{subh_L1_eq1} \int_{T^1_x X} |f(\gamma_{x,v}(r))| \: d\theta_x(v) < +\infty \:,$$ for almost every $r \in (0,+\infty)$, with respect to the measure $A(r)dr$. Then as the above measure takes positive values on every non-empty open set in $(0,+\infty)$, it follows that ([\[subh_L1_eq1\]](#subh_L1_eq1){reference-type="ref" reference="subh_L1_eq1"}) is true for $r$ belonging to a dense subset of $(0,+\infty)$. Now as $f$ is bounded above on compacts, we get that $$\label{subh_L1_eq2} \int_{T^1_x X} f(\gamma_{x,v}(r)) \: d\theta_x(v) > -\infty$$ is true for $r$ belonging to a dense subset of $(0,+\infty)$. Then part $(1)$ of this remark yields that ([\[subh_L1_eq2\]](#subh_L1_eq2){reference-type="ref" reference="subh_L1_eq2"}) is true for all $r \in (0,+\infty)$. Combining this with the fact that $f$ is bounded above on compacts, ([\[subh_L1\]](#subh_L1){reference-type="ref" reference="subh_L1"}) follows.*

Now we are in a position to prove Proposition [Proposition 17](#riesz_meas_prop){reference-type="ref" reference="riesz_meas_prop"}.
*Proof of Proposition [Proposition 17](#riesz_meas_prop){reference-type="ref" reference="riesz_meas_prop"}.* We first show that $$\label{riesz_meas_pf_eq1} \int_X f \Delta \psi \:dvol \ge 0\:, \text{ for all } \psi \in C^2_c(X) \text{ with } \psi \ge 0 \:.$$ Let $\{h_j\}_{j=1}^\infty$ be a $C^\infty$ approximate identity as defined in [Remark 22](#ex_of_approx_id){reference-type="ref" reference="ex_of_approx_id"}. Set $f_j := f * h_j$. Then by Lemma [Lemma 23](#approx_id_lemma){reference-type="ref" reference="approx_id_lemma"}, $\{f_j\}_{j=1}^\infty$ is a non-increasing sequence of $C^\infty$-subharmonic functions on $X$ that converges to $f$ everywhere on $X$. Now combining the facts that

- $\psi \in C^2_c(X)$,

- $\left|f_j(x)\right| \le |f(x)|+|f_1(x)|$ for all $j \in \mathbb N$ and all $x \in X$,

- both $f$ and $f_1$, being subharmonic, are locally integrable,

we note that the Dominated Convergence Theorem is applicable for the sequence of functions $\{f_j \Delta \psi\}_{j=1}^\infty$. Therefore by Green's identity and the Dominated Convergence Theorem, $$\int_X f \Delta \psi \: dvol = \displaystyle\lim_{j \to \infty} \int_X f_j \Delta \psi \:dvol = \displaystyle\lim_{j \to \infty} \int_X \psi \Delta f_j \:dvol \ge 0 \:.$$ Hence, $$L(\psi) := \int_X f \Delta \psi \: dvol$$ defines a positive linear functional on $C^\infty_c(X)$. Then by usual density arguments (following the proof of Theorem $4.6.3$ of [@St] for the real hyperbolic ball verbatim) $L$ extends to $C_c(X)$ as a positive linear functional. Now the result follows by the Riesz Representation theorem for positive linear functionals on $C_c(X)$. ◻

## Harmonic majorants

**Definition 25**. 1. *A subharmonic function $f$ on $X$ is said to have a harmonic majorant if there exists a harmonic function $h$ on $X$ such that $$f(x) \le h(x)\:,\text{ for all } x \in X\:.$$*

2.
*A harmonic function $h$ on $X$ is said to be the least harmonic majorant of a subharmonic function $f$ on $X$ if*

- *$h$ is a harmonic majorant of $f$ and*

- *$h(x) \le H(x)$ for all $x \in X$, whenever $H$ is a harmonic majorant of $f$.*

In this subsection we will prove the following equivalence for the existence of a least harmonic majorant in terms of boundedness of integrals over spheres.

**Proposition 26**. *Let $f$ be subharmonic on $X$. Then the following are equivalent:*

- *$(i)$ $f$ has a least harmonic majorant on $X$.*

- *$(ii)$ $f$ has a harmonic majorant on $X$.*

- *$(iii)$ for all $x \in X$, $$\displaystyle\lim_{r \to +\infty} \int_{T^1_x X} f\left(\gamma_{x,v}(r)\right) \:d\theta_x(v) < +\infty \:.$$*

For the proof of Proposition [Proposition 26](#harmonic_majorant_prop){reference-type="ref" reference="harmonic_majorant_prop"}, we will need the following:

**Lemma 27**. *Let $f$ be a subharmonic function on $X$. Choose and fix $x_0 \in X$. Then for each $r>0$, there exists a harmonic function $f^{(r)}$ on $B(x_0,r)$ such that*

- *$(i)$ $f(x) \le f^{(r)}(x)$ for all $x \in B(x_0,r)$ and*

- *$(ii)$ $$\int_{T^1_{x_0}X} f\left(\gamma_{x_0,v}(r)\right) \:d\theta_{x_0}(v) = f^{(r)}(x_0) \:.$$*

*Furthermore, if $F$ is harmonic on an open subset $\Omega$ of $X$ with $\overline{B(x_0,r)} \subset \Omega$ and $F(x) \ge f(x)$ for all $x \in \Omega$, then*

- *$(iii)$ $f^{(r)}(x) \le F(x)$ for all $x \in B(x_0,r)$.*

- *$(iv)$ If $0<r_1<r_2$, then $$f^{(r_1)}(x) \le f^{(r_2)}(x)\:,\text{ for all } x \in B(x_0,r_1) \:.$$*

*Proof.* Fix $r>0$. Then by upper semi-continuity of $f$, there exists a decreasing sequence $\{f_n\}_{n=1}^\infty$ of continuous functions on $S(x_0,r)= \partial B(x_0,r)$ such that $$\displaystyle\lim_{n \to \infty} f_n\left(\gamma_{x_0,v}(r)\right)=f\left(\gamma_{x_0,v}(r)\right)\:,\text{ for all } v \in T^1_{x_0} X\:.$$ Now we consider the harmonic extensions of $f_n$ to $B(x_0,r)$, say $F_n$.
Then by Harnack-Yau and the mean value property, it follows that the sequence $\{F_n\}_{n=1}^\infty$ is uniformly Cauchy on all compact subsets of $B(x_0,r)$. So the sequence converges (uniformly on compacts) to a harmonic function on $B(x_0,r)$, say $f^{(r)}$. We note that by the maximum principle, $f(x) \le f^{(r)}(x)$ for all $x \in B(x_0,r)$. Moreover, it follows from part (2) of [Remark 24](#non_decr_mvp_rmk){reference-type="ref" reference="non_decr_mvp_rmk"} and the fact that $\{f_n\}_{n=1}^\infty$ is a sequence of continuous functions decreasing to $f$ that the Dominated Convergence Theorem is applicable to $\{f_n\}_{n=1}^\infty$. Then by the mean value property and the Dominated Convergence Theorem, it follows that $$\begin{aligned} \label{harmonic_majorant_lem_eq1} f^{(r)}(x_0) = \displaystyle\lim_{n \to \infty} F_n(x_0) &=& \displaystyle\lim_{n \to \infty}\int_{T^1_{x_0}X} f_n\left(\gamma_{x_0,v}(r)\right) d\theta_{x_0}(v) \nonumber \\ &=& \int_{T^1_{x_0}X} f\left(\gamma_{x_0,v}(r)\right) d\theta_{x_0}(v) \:.\end{aligned}$$ Now for $F$ as in the second part of the statement, we consider $$h_n\left(\gamma_{x_0,v}(r)\right) := \min \{f_n\left(\gamma_{x_0,v}(r)\right),F\left(\gamma_{x_0,v}(r)\right)\} \:,\text{ for all } v \in T^1_{x_0} X\:,$$ and let $H_n$ be their corresponding harmonic extensions to $B(x_0,r)$. By the maximum principle, $H_n \le F_n$ on $B(x_0,r)$. But $\{h_n\}_{n=1}^\infty$ is a non-increasing sequence of continuous functions on $S(x_0,r)$ with $$\displaystyle\lim_{n \to \infty} h_n\left(\gamma_{x_0,v}(r)\right) = f\left(\gamma_{x_0,v}(r)\right) \:,\text{ for all } v \in T^1_{x_0}X \:.$$ Just as above, an application of Harnack-Yau and the mean value property yields that the sequence $\{H_n\}_{n=1}^\infty$ converges (uniformly on compacts) to a harmonic function on $B(x_0,r)$.
Moreover, by the mean value property, the Dominated Convergence Theorem and ([\[harmonic_majorant_lem_eq1\]](#harmonic_majorant_lem_eq1){reference-type="ref" reference="harmonic_majorant_lem_eq1"}), $$\displaystyle\lim_{n \to \infty} H_n(x_0) = \int_{T^1_{x_0} X} f\left(\gamma_{x_0,v}(r)\right) d\theta_{x_0}(v) = \displaystyle\lim_{n \to \infty} F_n(x_0) \:.$$ Hence by the maximum principle, $$f^{(r)}(x) = \displaystyle\lim_{n \to \infty} F_n(x) = \displaystyle\lim_{n \to \infty} H_n(x) \:, \text{ for all } x \in B(x_0,r) \:.$$ But again by the maximum principle, $$H_n(x) \le F(x) \:,\text{ for all } n \in \mathbb N, \: x \in B(x_0,r) \:,$$ and thus it follows that $$f^{(r)}(x) \le F(x)\:, \text{ for all } x \in B(x_0,r)\:.$$ This proves part $(iii)$ of the Lemma. Part $(iv)$ follows immediately from $(iii)$. ◻ Now we are in a position to prove Proposition [Proposition 26](#harmonic_majorant_prop){reference-type="ref" reference="harmonic_majorant_prop"}. *Proof of Proposition [Proposition 26](#harmonic_majorant_prop){reference-type="ref" reference="harmonic_majorant_prop"}.* Clearly $(i)$ implies $(ii)$ and $(ii)$ implies $(iii)$. We now suppose that $(iii)$ holds. Choose and fix $x_0 \in X$. Let $\{r_n\}_{n=1}^\infty$ be an increasing sequence of positive real numbers such that $r_n \to +\infty$ as $n \to +\infty$. For each $n \in \mathbb N$, let $f^{(n)}$ be the harmonic function on $B(x_0,r_n)$ satisfying the conclusion of Lemma [Lemma 27](#harmonic_majorant_lemma){reference-type="ref" reference="harmonic_majorant_lemma"}. 
By part $(iv)$ of Lemma [Lemma 27](#harmonic_majorant_lemma){reference-type="ref" reference="harmonic_majorant_lemma"}, $$f^{(n)}(x) \le f^{(n+1)}(x) \:,\text{ for all } x \in B(x_0,r_n) \:.$$ Moreover, by part $(ii)$ of Lemma [Lemma 27](#harmonic_majorant_lemma){reference-type="ref" reference="harmonic_majorant_lemma"}, $$\displaystyle\lim_{n \to \infty} f^{(n)}(x_0) = \displaystyle\lim_{n \to \infty} \int_{T^1_{x_0}X} f\left(\gamma_{x_0,v}(r_n)\right)\: d\theta_{x_0}(v) < + \infty\:.$$ Therefore $\{f^{(n)}\}_{n=1}^\infty$ is a non-decreasing sequence of harmonic functions with a finite limit at $x_0$. Then by Lemma [Lemma 12](#Harnack_Thm){reference-type="ref" reference="Harnack_Thm"} it follows that $\{f^{(n)}\}_{n=1}^\infty$ converges uniformly (on compacts) to a harmonic function, say $F_f$. Then by part $(i)$ of Lemma [Lemma 27](#harmonic_majorant_lemma){reference-type="ref" reference="harmonic_majorant_lemma"}, $F_f$ is a harmonic majorant of $f$. In fact, $F_f$ is the least harmonic majorant of $f$, which is seen as follows. Let $F$ be any harmonic majorant of $f$. Then by part $(iii)$ of Lemma [Lemma 27](#harmonic_majorant_lemma){reference-type="ref" reference="harmonic_majorant_lemma"}, $$f^{(n)}(x) \le F(x)\:,\text{ for all } x \in B(x_0,r_n) \:,\text{ for all } n \in \mathbb N\:.$$ Hence, $$F_f(x) \le F(x) \:,\text{ for all } x \in X\:.$$ ◻

Henceforth the least harmonic majorant of a subharmonic function $f$ (when it exists) will be denoted by $F_f$. An immediate consequence of Proposition [Proposition 26](#harmonic_majorant_prop){reference-type="ref" reference="harmonic_majorant_prop"} is the following:

**Corollary 28**. *Let $f \le 0$ be subharmonic on $X$.
Then $F_f \equiv 0$ if and only if $$\displaystyle\lim_{r \to +\infty} \int_{T^1_{x}X} f\left(\gamma_{x,v}(r)\right)\: d\theta_x(v) = 0 \:,$$ for all $x \in X$.*

*Proof.* By construction of $F_f$ in the proof of Proposition [Proposition 26](#harmonic_majorant_prop){reference-type="ref" reference="harmonic_majorant_prop"}, we have for all $x \in X$, $$F_f(x) = \displaystyle\lim_{r \to +\infty} \int_{T^1_{x}X} f\left(\gamma_{x,v}(r)\right)\: d\theta_{x}(v) \:.$$ Hence the result follows. ◻

## Riesz decomposition

Having established the notions of the Riesz measure and the least harmonic majorant of a subharmonic function, we now aim to prove the Riesz decomposition theorem. First we prove a couple of lemmas.

**Lemma 29**. *Let $h$ be a radial $C^\infty_c$ function on $X$ with $\int_X h \:dvol=0$. If $v:= - G * h$, then $v$ is a radial $C^\infty_c$ function on $X$ with $\Delta v=h$.*

*Proof.* Let $h=u\circ d_o$, where $u$ is the corresponding function on $\mathbb R$. For fixed $x \in X$, the function $y \mapsto G(x,y)$ is harmonic in $B(o,d(o,x))$ and hence for all $r \in (0,d(o,x))$, by the mean value property, $$\label{riesz_decom_lemma1_eq1} \int_{T^1_o X} G\left(x,\gamma_{o,v}(r)\right) d\theta_o(v) = G(x,o) = G(d(o,x)) \:.$$ We now choose $r>0$ such that $Supp(h) \subset \overline{B(o,r)}$. Then for all $x$ with $d(o,x)>r$, by ([\[riesz_decom_lemma1_eq1\]](#riesz_decom_lemma1_eq1){reference-type="ref" reference="riesz_decom_lemma1_eq1"}) and Lemma [Lemma 10](#props_of_conv){reference-type="ref" reference="props_of_conv"} we have $$\begin{aligned} G*h(x) &=& \int_X h(y) G(x,y) \: dvol(y) \\ &=& \int_{0}^{+\infty} u(r) A(r) \left(\int_{T^1_o X} G\left(x,\gamma_{o,v}(r)\right) d\theta_o(v)\right) \: dr \\ &=& G(d(o,x)) \int_X h\: dvol \\ &=& 0 \:.\end{aligned}$$ Thus $Supp(v) \subset \overline{B(o,r)}$.
Hence $v \in C^\infty_c(X)$ (the regularity follows from the fact that the Green function is locally integrable) and is radial by Lemma [Lemma 10](#props_of_conv){reference-type="ref" reference="props_of_conv"}. Let $\psi \in C^2_c(X)$. Then by Green's identity, symmetry of the Green function and Fubini's theorem, it follows that $$\begin{aligned} \int_X \psi(y) \Delta v(y) \: dvol(y) &=& \int_X v(y) \Delta \psi (y) \: dvol(y) \\ &=&- \int_X \left(\int_X G(x,y) h(x) \: dvol(x)\right) \Delta \psi (y) \: dvol(y)\\ &=& - \int_X h(x) \left(\int_X G(x,y) \Delta \psi (y) \: dvol(y)\right) \: dvol(x) \\ & =& - \int_X h(x) \left(\int_X \Delta G(x,y) \psi (y) \: dvol(y)\right) \: dvol(x) \\ &=& \int_X h(x) \psi(x) \: dvol(x) \:.\end{aligned}$$ Since the above is true for any $\psi \in C^2_c(X)$, we get the result. ◻

**Lemma 30**. *Let $f \le 0$ be a subharmonic function on $X$ satisfying for all $x \in X$, $$\label{riesz_decom_lemma2_eq} \displaystyle\lim_{r \to +\infty} \int_{T^1_x X} f\left(\gamma_{x,v}(r)\right)\: d\theta_x(v) = 0 \:.$$ Then for all $x \in X$, $$f(x) = - \int_X G(x,y) \: d\mu_f(y) \:,$$ where $\mu_f$ is the Riesz measure of $f$.*

*Proof.* Let $\{r_j\}_{j=1}^\infty \subset (0,1)$ be a sequence that decreases monotonically to $0$. For each $j \in \mathbb N$, let $$\begin{aligned} &&A^1_j := \{y \in X : r_{j+1} < d(o,y) < r_j\} \\ &&A^2_j := \{y \in X : e^{1/r_j} < d(o,y) < e^{1/r_{j+1}}\} \:.\end{aligned}$$ For each $j \in \mathbb N$ and $k=1,2$, let $h^k_j$ be a non-negative $C^\infty$ radial function with $$Supp \:h^k_j \subset A^k_j\:, \text{ and } \int_X h^k_j \: dvol=1\:.$$ We write $h^k_j = u^k_j \circ d_o$, where $u^k_j$ is the corresponding function on $\mathbb R$. We choose and fix $x \in X$.
Now, since $f$ is subharmonic, by Lemma [Lemma 23](#approx_id_lemma){reference-type="ref" reference="approx_id_lemma"} we have $$\label{riesz_decom_lemma2_eq1} f(x) = \displaystyle\lim_{j \to \infty} \left(f* h^1_j\right)(x) = \displaystyle\lim_{j \to \infty} \int_X f(y) \left(\tau_x h^1_j\right)(y) \:dvol(y) \:.$$ On the other hand, by part $(1)$ of [Remark 24](#non_decr_mvp_rmk){reference-type="ref" reference="non_decr_mvp_rmk"}, $$\begin{aligned} \left(f* h^2_j\right)(x) &=& \int_{e^{1/r_j}}^{e^{1/r_{j+1}}} u^2_j(r) A(r) \left(\int_{T^1_x X} f\left(\gamma_{x,v}(r)\right) d\theta_x(v)\right)\:dr \\ & \le & \int_{e^{1/r_j}}^{e^{1/r_{j+1}}} u^2_j(r) A(r) \left(\int_{T^1_x X} f\left(\gamma_{x,v}\left(e^{1/r_{j+1}}\right)\right) d\theta_x(v)\right)\:dr \\ &=& \int_{T^1_x X} f\left(\gamma_{x,v}\left(e^{1/r_{j+1}}\right)\right) d\theta_x(v) \:.\end{aligned}$$ Similarly one gets the lower bound to obtain $$\int_{T^1_x X} f\left(\gamma_{x,v}\left(e^{1/r_j}\right)\right) d\theta_x(v) \le \left(f* h^2_j\right)(x) \le \int_{T^1_x X} f\left(\gamma_{x,v}\left(e^{1/r_{j+1}}\right)\right) d\theta_x(v) \:.$$ Then in view of ([\[riesz_decom_lemma2_eq\]](#riesz_decom_lemma2_eq){reference-type="ref" reference="riesz_decom_lemma2_eq"}), we get $$\label{riesz-decom_lemma2_eq2} \displaystyle\lim_{j \to \infty} \left(f* h^2_j\right)(x) = 0 \:.$$ Now for $h_j := h^1_j - h^2_j$, by ([\[riesz_decom_lemma2_eq1\]](#riesz_decom_lemma2_eq1){reference-type="ref" reference="riesz_decom_lemma2_eq1"}) and ([\[riesz-decom_lemma2_eq2\]](#riesz-decom_lemma2_eq2){reference-type="ref" reference="riesz-decom_lemma2_eq2"}), one has $$f(x) = \displaystyle\lim_{j \to \infty} \left(f* h_j\right)(x) \:.$$ We now note that since the Green function is superharmonic on $X$, Lemma [Lemma 23](#approx_id_lemma){reference-type="ref" reference="approx_id_lemma"} implies that $G* h^1_j$ increases to $G$.
Arguments similar to Lemma [Lemma 23](#approx_id_lemma){reference-type="ref" reference="approx_id_lemma"} also yield that $G*h^2_j$ decreases to $0$. Hence, $G*h_j$ increases to $G$. For each $j \in \mathbb N$, let $v_j:= - \left(G*h_j\right)$. Then by Lemma [Lemma 29](#riesz_decom_lemma1){reference-type="ref" reference="riesz_decom_lemma1"}, $v_j \in C^\infty_c(X)$ is radial and satisfies $\Delta v_j = h_j$. Then by Lemma [Lemma 9](#laplacian_commutes_translation){reference-type="ref" reference="laplacian_commutes_translation"} and the Monotone Convergence Theorem, it follows that $$\begin{aligned} f(x) & = & \displaystyle\lim_{j \to \infty} \int_X f(y) \left(\tau_x h_j\right)(y) \: dvol(y) \\ & = & \displaystyle\lim_{j \to \infty} \int_X f(y) \:\Delta (\tau_x v_j)(y) \: dvol(y) \\ & = & \displaystyle\lim_{j \to \infty}\int_X (\tau_x v_j)(y) \: d\mu_f(y) \\ & = & - \displaystyle\lim_{j \to \infty}\int_X \left(\tau_x \left(G*h_j\right)\right)(y) \: d\mu_f(y) \\ & = & - \int_X \left(\tau_x \: G\right)(y) \: d\mu_f(y) \\ & = & - \int_X G(x,y) \: d\mu_f(y) \:.\end{aligned}$$ ◻

*Proof of Theorem [Theorem 3](#riesz_decom){reference-type="ref" reference="riesz_decom"}.* Let $F_f$ be the least harmonic majorant of $f$. Consider $h := f-F_f$. Then $h \le 0$ is a subharmonic function with the constant zero function as its least harmonic majorant. Then by Corollary [Corollary 28](#harmonic_majorant_cor){reference-type="ref" reference="harmonic_majorant_cor"}, for all $x \in X$, $$\displaystyle\lim_{r \to +\infty} \int_{T^1_x X} h\left(\gamma_{x,v}(r)\right) \:d\theta_x(v) = 0 \:.$$ Thus by Lemma [Lemma 30](#riesz_decom_lemma2){reference-type="ref" reference="riesz_decom_lemma2"}, $$f(x) = F_f(x) - \int_X G(x,y)\: d\mu_h(y) \:.$$ So it is enough to show that $\mu_h = \mu_f$. This is seen as follows.
For all $\psi \in C^2_c(X)$, by Green's identity $$\begin{aligned} \int_X \psi \:d\mu_h = \int_X h \Delta \psi \: dvol &=& \int_X f \Delta \psi \: dvol - \int_X F_f \:\Delta \psi \: dvol \\ & = &\int_X f \Delta \psi \: dvol - \int_X \psi \: \Delta F_f \: dvol \\ & = &\int_X f \Delta \psi \: dvol \\ & = & \int_X \psi \: d\mu_f \:.\end{aligned}$$ ◻

# Boundary behavior of Green potentials

## Upper bound on the Hausdorff dimension

In this subsection we will work under the hypothesis of Theorem [Theorem 4](#Green_thm){reference-type="ref" reference="Green_thm"}. We fix an origin $o \in X$. First, we state the following result regarding a condition imposed on the measure by the well-definedness of its Green potential. It is a simple consequence of the estimates of the Green function ([\[green_estimate\]](#green_estimate){reference-type="ref" reference="green_estimate"}) and hence the proof is omitted.

**Lemma 31**. *Let $\mu$ be a Radon measure on $X$. If $G[\mu]$ is well-defined then $$\int_X e^{-hd(o,y)} \:d\mu (y) < +\infty \:.$$*

Now for $\mu$ as in the statement of Theorem [Theorem 4](#Green_thm){reference-type="ref" reference="Green_thm"} and for any $x \in X$, we write $$G[\mu](x)= \int_X G(x,y)\: d\mu(y) = \int_{B(x,1)} G(x,y)\: d\mu(y) + \int_{X \setminus B(x,1)} G(x,y)\: d\mu(y) \:.$$ We define $$\label{defn_u1_u2} u_1(x) := \int_{B(x,1)} G(x,y)\: d\mu(y)\:, \text{ and } u_2(x) := \int_{X\setminus B(x,1)} G(x,y) \:d\mu(y) \:.$$ We first study the boundary behavior of the better-behaved $u_2$.

**Lemma 32**. *Let $\beta \in [0,h]$ and let $u_2$ be as in ([\[defn_u1_u2\]](#defn_u1_u2){reference-type="ref" reference="defn_u1_u2"}). Then $$dim_{\mathcal{H}}E_\beta(u_2) \le (h-\beta)/s\:,\text{ and } \mathcal{H}^{(h-\beta)/s}\left(E^\infty_\beta (u_2)\right) = 0 \:.$$*

*Proof.* Let $x \in X$.
By ([\[green_estimate\]](#green_estimate){reference-type="ref" reference="green_estimate"}), $$u_2(x) \le C_1 \left\{e^{-hd(o,x)} \mu\left(\{o\}\right) + \tilde{u_2}(x)\right\}\:,$$ where $$\tilde{u_2}(x) := \int_{X \setminus \left(B(x,1)\cup \{o\}\right)} e^{-hd(x,y)} \:d\mu(y)\:.$$ Then $$\tilde{u_2}(x) = \int_{X \setminus \left(B(x,1)\cup \{o\}\right)} e^{-hB_y(x)} e^{-hd(o,y)}\: d\mu(y)\:,$$ where $$B_y(x)= d(x,y)-d(o,y) \:.$$ Now rewriting the above in terms of the Gromov product, one has $$B_y(x) = d(o,x) - 2{(x|y)}_o \:.$$ Combining this with the fact that the Gromov product is monotonically non-decreasing along geodesics, it follows that the function $y \mapsto B_{y}(x)$ is monotonically non-increasing along geodesics. Hence (denoting by $\eta_y \in \partial X$, for $y \in X$, the end-point of the infinite geodesic ray obtained by extending the geodesic joining $o$ to $y$) we get $$\begin{aligned} \label{u2_lemma_eq1} \tilde{u_2}(x) &\le & \int_{X \setminus \left(B(x,1)\cup \{o\}\right)} e^{-hB_{\eta_y}(x)} e^{-hd(o,y)} \: d\mu(y) \nonumber\\ & \le & \int_{X \setminus \{o\}} e^{-hB_{\eta_y}(x)} e^{-hd(o,y)} \: d\mu(y) \:.\end{aligned}$$ Now by Lemma [Lemma 31](#necess_cond){reference-type="ref" reference="necess_cond"} and the Riesz Representation Theorem for positive linear functionals on the space of continuous functions on $\partial X$, there exists a Radon measure $\tilde{\mu}$ on $\partial X$ such that for all $\varphi$ continuous on $\partial X$, we have $$\int_{X \setminus \{o\}} \varphi(\eta_y)\: e^{-hd(o,y)} d\mu(y) = \int_{\partial X} \varphi(\eta)\: d\tilde{\mu}(\eta) \:.$$ Then by the boundary continuity of the Busemann function, it follows that $$\label{u2_lemma_eq2} \int_{X \setminus \{o\}} e^{-hB_{\eta_y}(x)} e^{-hd(o,y)} d\mu(y) = \int_{\partial X} e^{-hB_\eta(x)} d\tilde{\mu}(\eta) = P[\tilde{\mu}](x) \:.$$ Thus combining ([\[u2_lemma_eq1\]](#u2_lemma_eq1){reference-type="ref" reference="u2_lemma_eq1"}) and
([\[u2_lemma_eq2\]](#u2_lemma_eq2){reference-type="ref" reference="u2_lemma_eq2"}), we obtain $$u_2(x) \le C_1 \left\{e^{-hd(o,x)} \mu\left(\{o\}\right) + P[\tilde{\mu}](x)\right\} \:.$$ Hence, $E_\beta(u_2) \subset E_\beta\left(P[\tilde{\mu}]\right)$ and $E^\infty_\beta(u_2) \subset E^\infty_\beta\left(P[\tilde{\mu}]\right)$. The result now follows from Theorem [Theorem 1](#poisson_thm){reference-type="ref" reference="poisson_thm"}. ◻

Now we shift our focus to $u_1$. But first we prove the following geometric estimate.

**Lemma 33**. *Let $B$ be a ball in $X$ with center $z$ and radius $r \in (0,5]$. Then the diameter $d$ of $\mathcal{O}_o(B)$ satisfies the following upper bound for some constant $C_6=C_6(b,s)>0$: $$d \le C_6\:r^{s/2b} \:e^{-sd(o,z)} \:,\text{ for } d(o,z) > 6\:.$$*

*Proof.* Let $x \in B$ be such that $o$, $x$ and $z$ are not collinear. We first note that by the triangle inequality, $$\label{shadow_upper_eq1} {(x|z)}_o = \frac{1}{2} \left(d(o,x)+d(o,z)-d(x,z)\right) \ge d(o,z) - d(x,z) \ge d(o,z) -r \:.$$ Consider the geodesic segments joining $o$ to $x$ and $o$ to $z$, and extend them to infinite geodesic rays. These rays hit $\partial X$ at some points, say $\eta$ and $\xi$ respectively. Then $$\label{shadow_upper_eq2} {(\xi|\eta)}_o - {(x|z)}_o = {(\eta|z)}_x +{(\xi|\eta)}_z \:.$$ Then ([\[shadow_upper_eq1\]](#shadow_upper_eq1){reference-type="ref" reference="shadow_upper_eq1"}) and ([\[shadow_upper_eq2\]](#shadow_upper_eq2){reference-type="ref" reference="shadow_upper_eq2"}) yield $${(\xi|\eta)}_o \ge d(o,z) -r +{(\xi|\eta)}_z \:,$$ which in turn gives $$\label{shadow_upper_eq3} \rho_s(\xi,\eta) \le C_2\: e^{-s{(\xi|\eta)}_o} \le C_2\: e^{-sd(o,z)}\: e^{sr}\: e^{-s{(\xi|\eta)}_z} \:.$$ Now as $r \in (0,5]$, it is enough to obtain an upper bound on $e^{-{(\xi|\eta)}_z}$. Let $\alpha$ (respectively $\theta$) denote the Riemannian angle between $\eta$ and $\xi$ (respectively $\eta$ and $o$) subtended at $z$.
Then $\alpha + \theta = \pi$. Now we claim that $$\label{shadow_upper_eq4} \frac{e^{-2b{(o|\eta)}_z}- e^{-2bd(o,z)}}{1-e^{-2bd(o,z)}} \le \sin^2 \left(\frac{\theta}{2}\right) \:.$$ The above claim is proved as follows. Let $\gamma$ be the geodesic ray starting from $z$ and hitting $\partial X$ at $\eta$. Now for any $t \in (0, +\infty)$, we consider the geodesic triangle $\triangle(z,o,\gamma(t))$. Then let $\theta_b(t)$ denote the angle corresponding to $\theta$ in the comparison triangle $\overline{\triangle}(z,o,\gamma(t))$ in $\mathbb{H}^2(-b^2)$. By the angle comparison theorem, $$\sin\left(\frac{\theta_b(t)}{2}\right) \le \sin\left(\frac{\theta}{2}\right)\:, \text{ for all } t \in (0,+\infty) \:.$$ Now by the hyperbolic law of cosines, $$\sin^2 \left(\frac{\theta_b(t)}{2}\right) = \frac{\cosh bd(o,\gamma(t)) - \cosh b(d(o,z)-d(\gamma(t),z))}{2 \sinh bd(o,z) \sinh bd(\gamma(t),z)} \:.$$ Then $$\begin{aligned} \displaystyle\lim_{t \to +\infty} \frac{\cosh bd(o,\gamma(t))}{2 \sinh bd(o,z) \sinh bd(\gamma(t),z)} &=& \displaystyle\lim_{t \to +\infty} \frac{e^{bd(o,\gamma(t))}+e^{-bd(o,\gamma(t))}}{\left(e^{bd(o,z)}-e^{-bd(o,z)}\right)\left(e^{bd(\gamma(t),z)}- e^{-bd(\gamma(t),z)}\right)} \\ &=& \frac{e^{-2b{(o|\eta)}_z}}{1-e^{-2bd(o,z)}} \:,\end{aligned}$$ and $$\begin{aligned} \displaystyle\lim_{t \to +\infty} \frac{\cosh b(d(o,z)-d(\gamma(t),z))}{2 \sinh bd(o,z) \sinh bd(\gamma(t),z)} &=& \displaystyle\lim_{t \to +\infty} \frac{e^{b(d(o,z)-d(\gamma(t),z))}+e^{-b(d(o,z)-d(\gamma(t),z))}}{\left(e^{bd(o,z)}-e^{-bd(o,z)}\right)\left(e^{bd(\gamma(t),z)}- e^{-bd(\gamma(t),z)}\right)} \\ &=& \frac{e^{-2bd(o,z)}}{1-e^{-2bd(o,z)}} \:.\end{aligned}$$ Hence, the claim is established. 
Now as by the triangle inequality, $${(o|\eta)}_z \le d(x,z) \le r \:,$$ plugging the above inequality in ([\[shadow_upper_eq4\]](#shadow_upper_eq4){reference-type="ref" reference="shadow_upper_eq4"}), we get that $$\label{shadow_upper_eq5} \frac{e^{-2br}- e^{-2bd(o,z)}}{1-e^{-2bd(o,z)}} \le \sin^2 \left(\frac{\theta}{2}\right) \:.$$ Then by ([\[infinite_riemannian_angle_bound\]](#infinite_riemannian_angle_bound){reference-type="ref" reference="infinite_riemannian_angle_bound"}) and ([\[shadow_upper_eq5\]](#shadow_upper_eq5){reference-type="ref" reference="shadow_upper_eq5"}), there exists $C(b)>0$ such that, $$e^{-2b{(\xi|\eta)}_z} \le \sin^2\left(\frac{\alpha}{2}\right) = 1 - \sin^2\left(\frac{\theta}{2}\right) \le 1 - \frac{e^{-2br}- e^{-2bd(o,z)}}{1-e^{-2bd(o,z)}} = \frac{1-e^{-2br}}{1-e^{-2bd(o,z)}} \le C(b) r \:.$$ Hence, there exists $C(b,s)>0$ such that $$\label{shadow_upper_eq6} e^{-s{(\xi|\eta)}_z} \le C(b,s)\: r^{s/2b} \:.$$ Then plugging ([\[shadow_upper_eq6\]](#shadow_upper_eq6){reference-type="ref" reference="shadow_upper_eq6"}) in ([\[shadow_upper_eq3\]](#shadow_upper_eq3){reference-type="ref" reference="shadow_upper_eq3"}) we get that, $$\rho_s(\xi,\eta) \le C(b,s)\:r^{s/2b}\: e^{-sd(o,z)} \:.$$ Now since $x$ was arbitrary, it follows that $$d = \displaystyle\sup_{\eta_1,\eta_2 \in \mathcal{O}_o(B)} \rho_s(\eta_1,\eta_2) \le \displaystyle\sup_{\eta_1,\eta_2 \in \mathcal{O}_o(B)} (\rho_s(\eta_1,\xi) + \rho_s(\xi,\eta_2)\:) \le 2\:C(b,s)\:r^{s/2b}\: e^{-sd(o,z)} \:.$$ ◻ Now we get back to estimating $u_1$. We introduce some new notation. Choose and fix $R>0$ and set for $L>0$, $$A_{\beta,R}(L) := \{x \in X \setminus B(o,R): e^{-\beta d(o,x)} u_1(x) > L\} \:.$$ **Lemma 34**. *Let $L >0,\:\beta \in [0,h-n+2)$ and $u_1$ defined as in ([\[defn_u1_u2\]](#defn_u1_u2){reference-type="ref" reference="defn_u1_u2"}). 
Then there exists a countable collection of balls $\{B(x_j,r_j)\}_{j=1}^\infty$ with $d(o,x_j) \ge R$ and $r_j \in (0,5]$ for all $j \in \mathbb N$, such that it covers $A_{\beta,R}(L)$ and moreover, one has $$\displaystyle\sum_{j=1}^\infty \left\{r_j\: e^{-d(o,x_j)}\right\}^{h-\beta} \le \frac{C(h,n,\beta)}{L}\int_X e^{-hd(o,x)} \: d\mu(x) \:,$$ for some positive constant $C(h,n,\beta)>0$.* *Proof.* Let $$C_7 := \frac{1}{C_1} \left(1+\frac{2-n}{h-\beta}\right) \:,$$ where $C_1$ is the constant implicit in ([\[green_estimate\]](#green_estimate){reference-type="ref" reference="green_estimate"}). Note that our hypothesis on $\beta$ implies that $C_7>0$. Choose and fix $x \in X \setminus B(o,R)$. Next we claim that if $$\mu\left(B(x,r)\right) \le C_7\: L\: e^{\beta d(o,x)}\: r^{h-\beta}\:, \text{ holds for all } 0 < r \le 1\:, \text{ then } u_1(x) \le L \: e^{\beta d(o,x)} \:.$$ The above claim is established as follows. By ([\[green_estimate\]](#green_estimate){reference-type="ref" reference="green_estimate"}), we have $$u_1(x) \le C_1 \int_{B(x,1)} {d(x,y)}^{2-n} d\mu(y) \:.$$ Now applying Fubini-Tonelli to the function $(y,r) \mapsto r^{1-n} \chi_{\{(y,r)\: : \: d(x,y)<r\}}$, it follows that $$\begin{aligned} \int_{B(x,1)} {d(x,y)}^{2-n} d\mu(y) &=& \mu\left(B(x,1)\right) + (n-2) \int_{0}^1 r^{1-n} \mu\left(B(x,r)\right) dr \\ & \le & C_7 L\: e^{\:\beta d(o,x)} \left\{ 1+(n-2)\int_{0}^1 r^{h+1-n-\beta}dr\right\} \\ &=& \frac{L\: e^{\:\beta d(o,x)}}{C_1} \:.\end{aligned}$$ Hence the claim follows. 
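For the reader's convenience, let us spell out why the final equality above holds; it is a direct consequence of the choice of $C_7$. The hypothesis $\beta < h-n+2$ makes the exponent $h+1-n-\beta$ greater than $-1$, so $$1+(n-2)\int_{0}^1 r^{h+1-n-\beta}\:dr \:=\: 1+\frac{n-2}{h+2-n-\beta} \:=\: \frac{h-\beta}{h-\beta+2-n}\:,$$ which is precisely the reciprocal of $C_1 C_7 = 1+\frac{2-n}{h-\beta} = \frac{h-\beta+2-n}{h-\beta}$.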
Thus for $x \in A_{\beta,R}(L)$, the above claim ensures the existence of $r_x \in (0,1]$ such that $$\label{u1_lemma1_eq} e^{-\beta d(o,x)} \mu\left(B(x,r_x)\right) > C_7 L\: r^{h-\beta}_x \:.$$ Then an application of Vitali 5-covering lemma yields a countable, disjoint collection of balls $B(x_j,r_{x_j})$ such that each ball satisfies the inequality ([\[u1_lemma1_eq\]](#u1_lemma1_eq){reference-type="ref" reference="u1_lemma1_eq"}) and for $r_j:=5r_{x_j}$, the balls $B(x_j,r_j)$ cover $A_{\beta,R}(L)$. Then by ([\[u1_lemma1_eq\]](#u1_lemma1_eq){reference-type="ref" reference="u1_lemma1_eq"}), we get $$\begin{aligned} \displaystyle\sum_{j=1}^\infty {\left(r_j\:e^{-d(o,x_j)}\right)}^{h-\beta} &\le & \left(\frac{5^{h-\beta}}{C_7 L}\right) \displaystyle\sum_{j=1}^\infty e^{-hd(o,x_j)} \: \mu\left(B\left(x_j,r_{x_j}\right)\right) \\ &\le & \left(\frac{5^{h-\beta} \:e^h}{C_7 L}\right) \int_X e^{-hd(o,x)}\: d\mu(x) \:.\end{aligned}$$ ◻ Now we do the estimates on the boundary. For $\xi \in \partial X$, define $$M_{\beta,R}[u_1](\xi):= \displaystyle\sup_{t>R} \: e^{-\beta t}u_1\left(\gamma_\xi(t)\right) \:.$$ **Lemma 35**. *Let $L>0,\:\beta \in [0,h-n+2)\:,b':=\max\{2b,1\}$ and $u_1$ be as defined in ([\[defn_u1_u2\]](#defn_u1_u2){reference-type="ref" reference="defn_u1_u2"}). For $$0<\varepsilon < C_6 {\left(\frac{5}{e^6}\right)}^{s/b'} \text{ and } R \ge \log(5)+\left(\frac{b'}{s}\right) \log(C_6/\varepsilon)\:,$$ (where $C_6>0$ is as in the conclusion of Lemma [Lemma 33](#shadow_upper){reference-type="ref" reference="shadow_upper"}), we have, $$\mathcal{H}^{b'(h-\beta)/s}_\varepsilon \left(\left\{\xi \in \partial X : M_{\beta,R}[u_1](\xi) > L\right\}\right) \le \frac{C(h,n,\beta,b,s)}{L} \int_X e^{-h d(o,x)} d\mu(x) \:,$$ for some constant $C(h,n,\beta,b,s)>0$.* *Proof.* Let $\xi \in \partial X$ such that $M_{\beta,R}[u_1](\xi) > L$. 
Then there exists $t_\xi >R$ such that $$e^{-\beta t_\xi}\: u_1\left(\gamma_\xi(t_\xi)\right) > L \:.$$ Then $\gamma_\xi(t_\xi) \in A_{\beta,R}(L)$. Let $\{B\left(x_j,r_j\right)\}_{j=1}^\infty$ be the balls obtained in the conclusion of Lemma [Lemma 34](#u1_lemma1){reference-type="ref" reference="u1_lemma1"}. Then $$\left\{\xi \in \partial X : M_{\beta,R}[u_1](\xi) > L\right\} \subset \displaystyle\bigcup_{j=1}^\infty \mathcal{O}_o\left(B\left(x_j,r_j\right)\right) \:.$$ In fact, the diameters of $\mathcal{O}_o\left(B\left(x_j,r_j\right)\right)$ are uniformly bounded by $\varepsilon$. This is seen as follows. We note that the hypothesis on $\varepsilon$ and $R$ imply that $R > 6$ and thus $d(o,x_j) \ge R >6$. Moreover as $r_j \in (0,5]$, Lemma [Lemma 33](#shadow_upper){reference-type="ref" reference="shadow_upper"} is applicable and it yields $$\label{bound_diameter} diameter\left(\mathcal{O}_o\left(B\left(x_j,r_j\right)\right)\right) \le C_6\: r^{s/b'}_j \: e^{-(s/b') d(o,x_j)} \le C_6 \:5^{s/b'} \:e^{-(s/b')R} \le \varepsilon$$ (the last inequality follows from an elementary computation involving the hypothesis on $R$). Hence by ([\[bound_diameter\]](#bound_diameter){reference-type="ref" reference="bound_diameter"}) and Lemma [Lemma 34](#u1_lemma1){reference-type="ref" reference="u1_lemma1"}, we get for some constant $C(h,n,\beta,b,s)>0$, $$\begin{aligned} \displaystyle\sum_{j=1}^\infty {\left(diameter\left(\mathcal{O}_o\left(B\left(x_j,r_j\right)\right)\right)\right)}^{b'(h-\beta)/s} &\le & C(h,n,\beta,b,s) \displaystyle\sum_{j=1}^\infty {\left(r_j\: e^{-d(o,x_j)}\right)}^{h-\beta} \\ & \le & \frac{C(h,n,\beta ,b,s)}{L} \int_X e^{-hd(o,x)}\: d\mu(x) \:.\end{aligned}$$ Hence the result follows. ◻ We now complete the proof of Theorem [Theorem 4](#Green_thm){reference-type="ref" reference="Green_thm"}. 
*Proof of Theorem [Theorem 4](#Green_thm){reference-type="ref" reference="Green_thm"}.* For $L>0$ and $\beta \in [0,h-n+2)$, we define $E^L_\beta(u_1)$ as in the proof of Theorem [Theorem 1](#poisson_thm){reference-type="ref" reference="poisson_thm"}. Then from Lemma [Lemma 35](#u1_lemma2){reference-type="ref" reference="u1_lemma2"} and Lemma [Lemma 31](#necess_cond){reference-type="ref" reference="necess_cond"}, one has $$\mathcal{H}^{b'(h-\beta)/s}\left(E^L_\beta(u_1)\right) \le \frac{C(h,n,\beta,b,s)}{L} \int_X e^{-hd(o,x)} d\mu(x) < +\infty \:.$$ Hence it follows that $$\mathcal{H}^{b'(h-\beta)/s}\left(E^\infty_\beta(u_1)\right)=0 \text{ and } \dim_{\mathcal{H}} E^{\frac{1}{m}}_\beta(u_1) \le b'(h-\beta)/s\:, \text{ for all } m \in \mathbb N\:.$$ Then by the countable stability of the Hausdorff dimension, we get $$\dim_{\mathcal{H}} E_\beta(u_1) \le b'(h-\beta)/s \:.$$ Now as $E_\beta\left(G[\mu]\right) \subset E_\beta(u_1) \cup E_\beta(u_2)$, one has from above and Lemma [Lemma 32](#u2_lemma){reference-type="ref" reference="u2_lemma"}, $$\dim_{\mathcal{H}} E_\beta \left(G[\mu]\right) \le \max \{\dim_{\mathcal{H}} E_\beta(u_1),\dim_{\mathcal{H}} E_\beta(u_2)\} \le b'(h-\beta)/s \:.$$ Next we note that as the Hausdorff outer measure is non-increasing in the dimension, one has from Lemma [Lemma 32](#u2_lemma){reference-type="ref" reference="u2_lemma"} that $$\mathcal{H}^{b'(h-\beta)/s}\left(E^\infty_\beta(u_2)\right) \le \mathcal{H}^{(h-\beta)/s}\left(E^\infty_\beta(u_2)\right) =0 \:.$$ Now as $E^\infty_\beta\left(G[\mu]\right) \subset E^\infty_\beta(u_1) \cup E^\infty_\beta(u_2)$, it follows that $${\mathcal{H}}^{b'(h-\beta)/s} \left(E^\infty_\beta \left(G[\mu]\right)\right) \le {\mathcal{H}}^{b'(h-\beta)/s} \left(E^\infty_\beta (u_1)\right) + {\mathcal{H}}^{b'(h-\beta)/s} \left(E^\infty_\beta (u_2)\right) =0\:.$$ ◻ **Remark 36**. 
*For the class of negatively curved harmonic manifolds with sectional curvature $-b^2 \le K_X \le -1$, for some $b>1$, one has sharper upper bounds on the Hausdorff dimension of the exceptional sets for the Green potentials of Radon measures: $$\frac{b(h-\beta)}{s} \:\:\text{ instead of }\:\: \frac{2b(h-\beta)}{s} \:.$$* *First we note that in this case $s=1$ and one can work with the natural metric $\rho$. Then with respect to $\rho$, one has the following estimate on the diameters of shadows of balls:* *Let $B$ be a ball in $X$ with center $z$ and radius $r \in (0,5]$. Then the diameter $d$ of $\mathcal{O}_o(B)$ satisfies the following upper bound, for some constant $C(b)>0$: $$d \le C(b) \:r^{1/b} \:e^{-(1/b)\:d(o,z)} \:,\text{ for } d(o,z) > 6\:.$$ The above claim is proved as follows. Let $x,y \in B$ be such that the three points $o$, $x$ and $y$ are not collinear. Let $\theta(x,y)$ be the Riemannian angle between $x$ and $y$, subtended at the origin $o$. Let $\theta_1(x,y)$ denote the corresponding comparison angle in $\mathbb H^2(-1)$. Then by the hyperbolic law of cosines, one has $$\sin^2\left(\frac{\theta_1(x,y)}{2}\right) = \frac{\cosh d(x,y)-\cosh \left(\left|d(o,x)-d(o,y)\right|\right)}{2 \sinh d(o,x)\sinh d(o,y)}\:.$$* *Then as $d(o,x)>1$, $d(o,x) \ge d(o,z) - r$ and similarly for $y$, we have $$\sinh d(o,x)\sinh d(o,y) \gtrsim e^{2d(o,z)} \:.$$ Also, since $d(x,y) \le 2r \le 10$, it follows that $$\cosh d(x,y)-\cosh \left(\left|d(o,x)-d(o,y)\right|\right) \le \cosh 2r -1 \lesssim r^2 \:.$$ Then using the angle comparison theorem, we get, for some constant $C>0$, the following estimate: $$\label{shadow'_upper_eq1} \sin\left(\frac{\theta(x,y)}{2}\right) \le \sin\left(\frac{\theta_1(x,y)}{2}\right) \le C r e^{-\:d(o,z)} \:.$$ Now extend the geodesic ray joining $o$ to $x$ and the one joining $o$ to $y$. These extended geodesic rays hit $\partial X$ at two points, say $\eta_x$ and $\eta_y$ respectively. 
Then by ([\[infinite_riemannian_angle_bound\]](#infinite_riemannian_angle_bound){reference-type="ref" reference="infinite_riemannian_angle_bound"}), it follows that $$\label{shadow'_upper_eq2} e^{-b{(\eta_x|\eta_y)}_o} \le \sin\left(\frac{\theta(\eta_x,\eta_y)}{2}\right) = \sin\left(\frac{\theta(x,y)}{2}\right) \:.$$ Thus by ([\[shadow\'\_upper_eq1\]](#shadow'_upper_eq1){reference-type="ref" reference="shadow'_upper_eq1"}) and ([\[shadow\'\_upper_eq2\]](#shadow'_upper_eq2){reference-type="ref" reference="shadow'_upper_eq2"}), it follows that there exists $C(b)>0$ such that $$\rho(\eta_x,\eta_y) = e^{-{\left(\eta_x|\eta_y\right)}_o} = {\left(e^{-b{\left(\eta_x|\eta_y\right)}_o}\right)}^{1/b} \le {\left(\sin\left(\frac{\theta(x,y)}{2}\right)\right)}^{1/b} \le C(b)\: r^{1/b}\: e^{-(1/b)\:d(o,z)} \:.$$ The claim now follows by taking supremum over all such $x,y \in B$.* *Then using the above estimate and following the arguments as in Lemmas [Lemma 34](#u1_lemma1){reference-type="ref" reference="u1_lemma1"} and [Lemma 35](#u1_lemma2){reference-type="ref" reference="u1_lemma2"}, we get the corresponding Hausdorff dimensions (with respect to $\rho$) in Theorem [Theorem 4](#Green_thm){reference-type="ref" reference="Green_thm"} to be $b(h-\beta)$.* ## Construction of Green potentials on realizable sets In this subsection we work under the hypothesis of Theorem [Theorem 5](#green_sharp_thm){reference-type="ref" reference="green_sharp_thm"}. Before proceeding with the proof of Theorem [Theorem 5](#green_sharp_thm){reference-type="ref" reference="green_sharp_thm"}, we first see the following sufficient conditions for a non-negative Borel measure to have a well-defined Green potential, which again is an immediate consequence of ([\[green_estimate\]](#green_estimate){reference-type="ref" reference="green_estimate"}). **Lemma 37**. 
*If $\mu$ is a non-negative Borel measure on $X$ satisfying $$Supp(\mu) \cap B(o,1) = \emptyset \text{ and } \int_X e^{-hd(o,y)} \: d\mu(y) <+\infty \:,$$ then $G[\mu]$ is well-defined.* *Proof of Theorem [Theorem 5](#green_sharp_thm){reference-type="ref" reference="green_sharp_thm"}.* Since $\mathcal{H}^{(h-\beta)/s}(E)=0$, we have for any $m \in \mathbb N$, a covering of $E$ by visual balls $\{\mathscr{B}^{(m,j)}_s\}_{j=1}^\infty$ such that $$\label{green_sharp_eq1} \displaystyle\sum_{j=1}^\infty {\left(diameter\left(\mathscr{B}^{(m,j)}_s\right)\right)}^{(h-\beta)/s} < 2^{-k_m} \:,$$ where $\{k_m\}_{m=1}^\infty$ is a strictly monotonically increasing sequence of positive integers such that $$\label{green_sharp_eq2} 2^{-\frac{k_1s}{h-\beta}} < 2 \min\left\{{\left(\frac{e^{-3(1+\delta)}}{C_4}\right)}^s \:,\frac{1}{C_2} \right\}\:,$$ where $C_4>0$ is the positive constant obtained in the conclusion of Lemma [Lemma 13](#shadow_lemma){reference-type="ref" reference="shadow_lemma"}. Let $\mathscr{B}^{(m,j)}_s = \mathscr{B}_s\left(\xi^{(m)}_j,r^{(m)}_j\right)$. 
Then an elementary computation using ([\[green_sharp_eq1\]](#green_sharp_eq1){reference-type="ref" reference="green_sharp_eq1"}) and ([\[green_sharp_eq2\]](#green_sharp_eq2){reference-type="ref" reference="green_sharp_eq2"}) yields that for all $m,j \in \mathbb N$, $$r^{(m)}_j < \min\left\{{\left(\frac{e^{-3(1+\delta)}}{C_4}\right)}^s \:,\frac{1}{C_2} \right\}\:, \:\text{ and hence }\: \log\left(\frac{1}{C_4{\left(r^{(m)}_j\right)}^{1/s}}\right) > 3(1+\delta)\:.$$ Then setting $t^{(m)}_j=\log\left(\frac{1}{C_4{\left(r^{(m)}_j\right)}^{1/s}}\right)$, we get balls $B\left(\gamma_{\xi^{(m)}_j}\left(t^{(m)}_j\right),1+\delta\right)$ whose centers satisfy $$\label{green_sharp_eq3} d\left(\gamma_{\xi^{(m)}_j}\left(t^{(m)}_j\right),o\right)= t^{(m)}_j >3(1+\delta)\:.$$ Now applying Lemma [Lemma 13](#shadow_lemma){reference-type="ref" reference="shadow_lemma"}, we get that for all $m,\: j \in \mathbb N$, $$\label{contained_in_shadow} \mathscr{B}^{(m,j)}_s \subset \mathcal{O}_o\left(B\left(\gamma_{\xi^{(m)}_j}\left(t^{(m)}_j\right),1+\delta\right)\right)$$ We now set $$f := \displaystyle\sum_{j,m} m\:\: e^{\beta t^{(m)}_j} \chi_{B\left(\gamma_{\xi^{(m)}_j}\left(t^{(m)}_j\right),\: 2(1+\delta)\right)} \:.$$ Then as $f$ is defined in terms of indicator functions of balls of radius $2(1+\delta)$ and (by ([\[green_sharp_eq3\]](#green_sharp_eq3){reference-type="ref" reference="green_sharp_eq3"}) ) the centers lie outside $\overline{B(o,3(1+\delta))}$, it follows that $f dvol$ defines a non-negative Borel measure on $X$ which is supported away from the unit ball $B(o,1)$. 
Moreover, there exists a positive constant $C(h,n,\beta,\delta,s)>0$ such that by the triangle inequality, the definition of $t^{(m)}_j$ and ([\[green_sharp_eq1\]](#green_sharp_eq1){reference-type="ref" reference="green_sharp_eq1"}) we have, $$\begin{aligned} \int_X e^{-hd(o,x)} f(x) dvol(x) &\le & e^{2h(1+\delta)}\:\displaystyle\sum_{j,m} m\:\: e^{-(h-\beta) t^{(m)}_j} \: vol\left(B\left(\gamma_{\xi^{(m)}_j}\left(t^{(m)}_j\right),2(1+\delta)\right)\right) \\ & \le & C(h,n,\beta ,\delta,s) \displaystyle\sum_{j,m} m\:{\left(r^{(m)}_j\right)}^{(h-\beta)/s} \\ & \le & C(h,n,\beta ,\delta,s) \displaystyle\sum_{m=1}^\infty m\:2^{-k_m} \\ & < & +\infty \:. \end{aligned}$$ Then by Lemma [Lemma 37](#suffic_cond){reference-type="ref" reference="suffic_cond"}, $G\left[f\:dvol\right]$ is well-defined. Now we show that $E \subset E^\infty_\beta(u)$ where $u=G\left[f\:dvol\right]$. Let $\xi \in E$ and choose and fix an $m \in \mathbb N$. Now $\xi \in \mathscr{B}^{(m,j_m)}_s$ for some $j_m \in \mathbb N$. Then by ([\[contained_in_shadow\]](#contained_in_shadow){reference-type="ref" reference="contained_in_shadow"}), there exists $x_m \in B\left(\gamma_{\xi^{(m)}_{j_m}}\left(t^{(m)}_{j_m}\right),1+\delta\right)$ such that $x_m$ lies on the geodesic ray $\gamma_\xi$. Since $$B(x_m,1+\delta) \subset B\left(\gamma_{\xi^{(m)}_{j_m}}\left(t^{(m)}_{j_m}\right),2(1+\delta)\right)\:,$$ one has using radiality of the Green function, $$\begin{aligned} u(x_m) & \ge & m \: e^{\beta t^{(m)}_{j_m}} \int_{B\left(\gamma_{\xi^{(m)}_{j_m}}\left(t^{(m)}_{j_m}\right)\:,\:2(1+\delta)\right)} G(x_m,y) \:dvol(y) \\ & \ge & m \: \left(\frac{e^{\beta d(o,x_m)}}{e^{\beta(1+\delta)}}\right) \int_{B(x_m,1+\delta)} G(x_m,y) \:dvol(y) \\ & \ge & m \: e^{\beta d(o,x_m)}\left(\frac{1}{e^{\beta(1+\delta)}} \int_{B(o,1+\delta)} G(o,y) \:dvol(y) \right) \:.\end{aligned}$$ Thus as $m \to +\infty$ we get that $e^{-\beta d(o,x_m)}u(x_m) \to +\infty$. 
By construction, all the points $x_m$ lie on the geodesic $\gamma_\xi$ and are contained in the balls $B\left(\gamma_{\xi^{(m)}_{j_m}}\left(t^{(m)}_{j_m}\right),1+\delta\right)$. Since, by ([\[green_sharp_eq1\]](#green_sharp_eq1){reference-type="ref" reference="green_sharp_eq1"}), $$t^{(m)}_{j_m} \ge \log\left(\frac{2^{1/s}}{C_4}\right) + \left(\frac{k_m}{h-\beta}\right) \log 2 \to +\infty \text{ as } m \to +\infty \:,$$ it follows that $\xi \in E^\infty_\beta(u)$. ◻ # Proof of Theorem [Theorem 7](#superh_thm){reference-type="ref" reference="superh_thm"} {#proof-of-theorem-superh_thm} As $u$ is a positive superharmonic function, $-u$ is a negative subharmonic function whose least harmonic majorant satisfies $F_{-u} \le 0$. Then by Theorem [Theorem 3](#riesz_decom){reference-type="ref" reference="riesz_decom"} and Lemma [Lemma 8](#martinrep){reference-type="ref" reference="martinrep"}, $$u = P[\mu_1] + G[\mu_2] \:,$$ where $\mu_1$ is a finite, positive Borel measure on $\partial X$ and $\mu_2$ is a Radon measure on $X$. Then as $$E_\beta(u) \subset E_\beta\left(P[\mu_1]\right) \cup E_\beta\left(G[\mu_2]\right)\:,$$ we have from Theorem [Theorem 1](#poisson_thm){reference-type="ref" reference="poisson_thm"} and Theorem [Theorem 4](#Green_thm){reference-type="ref" reference="Green_thm"} that $$\dim_{\mathcal{H}}E_\beta(u) \le \max \left\{\dim_{\mathcal{H}}E_\beta\left(P[\mu_1]\right),\:\dim_{\mathcal{H}}E_\beta\left(G[\mu_2]\right)\right\} \le b'(h-\beta)/s \:.$$ Similarly, one concludes that $$\mathcal{H}^{b'(h-\beta)/s}\left(E^\infty_\beta(u)\right) =0 \:.$$ The converse part follows from Theorem [Theorem 5](#green_sharp_thm){reference-type="ref" reference="green_sharp_thm"}. This completes the proof of Theorem [Theorem 7](#superh_thm){reference-type="ref" reference="superh_thm"}. # Acknowledgement {#acknowledgement .unnumbered} The author would like to thank Prof. Kingshook Biswas for many illuminating discussions and suggestions. 
The author is supported by a research fellowship from Indian Statistical Institute.
--- abstract: | Let $\mathcal{A}$ be an associative algebra containing the classical or quantum universal enveloping algebra $U$ of a semi-simple complex Lie algebra. Let $\mathcal{J}\subset \mathcal{A}$ designate the left ideal generated by positive root vectors in $U$. We construct the reduction algebra of the pair $(\mathcal{A},\mathcal{J})$ via the inverse Shapovalov form of $U$. author: - | Andrey Mudrov${}^{\dag,\ddag}$, Vladimir Stukopin${}^\ddag$\ ${}^{\dag}$ University of Leicester,\ University Road, LE1 7RH Leicester, UK,\ ${}^{\ddag}$ Moscow Institute of Physics and Technology,\ 9 Institutskiy per., Dolgoprudny, Moscow Region, 141701, Russia,\ e-mail: am405\@le.ac.uk, stukopin.va\@mipt.ru title: Mickelsson algebras and inverse Shapovalov form --- [Key words]{.ul}: Mickelsson algebras, inverse Shapovalov form, quantum Lax operators\ [AMS classification codes]{.ul}: 17B10, 17B37. # Introduction {#SecIntro} Step algebras were invented by J. Mickelsson as a tool for restricting representations of a simple Lie group to its semi-simple subgroup in his studies of Harish-Chandra modules [@Mick]. Mickelsson algebras are a representation-free formulation of the raising and lowering operators technique devised by theoretical physicists for the needs of many-body problems and elementary particle physics [@NM]. Nowadays these algebras find numerous applications to harmonic analysis [@Zh1; @DER], Gelfand-Zeitlin bases [@Mol], Yangians [@KN1; @KN2; @KN3], *etc*. Mickelsson algebras are a special case of reduction algebras associated with a pair $(\mathcal{A},\mathcal{J})$, where $\mathcal{A}$ is an associative algebra and $\mathcal{J}\subset \mathcal{A}$ is a left ideal. The reduction algebra $R(\mathcal{A},\mathcal{J})$ is a quotient by $\mathcal{J}$ of its normalizer $N(\mathcal{J})$, i.e. the maximal subalgebra in $\mathcal{A}$ where $\mathcal{J}$ is a two-sided ideal. 
Elements of $R(\mathcal{A},\mathcal{J})$ are exactly the endomorphisms of the $\mathcal{A}$-module $\mathcal{V}=\mathcal{A}/\mathcal{J}$. There is a natural isomorphism between $R(\mathcal{A},\mathcal{J})$ and $\mathcal{V}^+=\ker \mathcal{J}\subset \mathcal{V}$ induced by the projection $\mathcal{A}\to \mathcal{V}$. If $\mathcal{A}$ contains the universal enveloping algebra of a semi-simple Lie algebra $\mathfrak{g}$, one can consider the ideal $\mathcal{J}$ generated by the maximal nilpotent subalgebra $\mathfrak{g}_+$ relative to a fixed triangular decomposition of $\mathfrak{g}$. Then the Mickelsson algebra $Z(\mathcal{A},\mathfrak{g})=R(\mathcal{A},\mathcal{J})$ becomes responsible for the restriction of $\mathcal{A}$-representations to $\mathfrak{g}$. It can be viewed as the symmetry algebra of $\mathfrak{g}$-singular vectors, which parameterize $\mathfrak{g}$-submodules, for instance, in a highest weight module when $\mathcal{A}=U(\mathfrak{a})$ for a simple Lie algebra $\mathfrak{a}\supset \mathfrak{g}$. One can similarly define the Mickelsson algebra upon replacing $U(\mathfrak{g})$ with the quantum group $U_q(\mathfrak{g})$ [@Kek]. Then it is a deformation of its classical counterpart provided $q$ is not a root of unity. We retain the same notation $Z(\mathcal{A},\mathfrak{g})$ for it, although we mostly deal with quantum groups in this exposition. The classical version can be obtained as the limit case $q\to 1$. Let us point out the most studied examples of Mickelsson algebras. For finite-dimensional $\mathfrak{g}$ of classical type, they include $\mathfrak{a}=\mathfrak{g}\oplus\mathfrak{g}$ with the diagonal embedding of $\mathfrak{g}$. Such $Z(\mathfrak{a},\mathfrak{g})$ is responsible for the decomposition of tensor products of $\mathfrak{g}$-modules [@KO]. Another example is given by reductive pairs relative to the Gelfand-Zeitlin reduction [@GTs1; @Mol]. 
With regard to infinite-dimensional $\mathfrak{a}$, one should mention Yangians and twisted Yangians [@KN1; @KN2; @KN3]. The modern approach to Mickelsson algebras is based on the theory of extremal projectors [@AST], after works of Zhelobenko [@Zh2; @Zh]. The extremal projector of $U(\mathfrak{g})$ (respectively, its quantum counterpart) allows one to construct $Z(\mathcal{A},\mathfrak{g})$ from $\mathcal{A}$ and provides a Poincaré-Birkhoff-Witt (PBW) basis. However, the action of the extremal projector on $\mathcal{A}$ is hard to compute explicitly in general. We addressed this problem in [@MS], using an approach based on Hasse diagrams associated with classical or quantum $\mathfrak{g}$-modules. As a result, we obtained an explicit expression for a PBW basis in $Z(\mathfrak{a},\mathfrak{g})$ through a basis in $\mathfrak{a}$ in the classical case, and through matrix entries of quantum Lax operators for $q\not =1$. The Hasse diagram technique of [@MS] turns out to be quite close to the one used for the calculation of the inverse Shapovalov form in [@M1]. This form is a well-established concept in representation theory and its applications [@Shap]. Its inverse is a key ingredient of dynamical quantum groups [@EV1] and of equivariant quantization of homogeneous spaces [@AL; @M5]. Its matrix elements participate in a construction of Shapovalov elements delivering singular vectors in reducible Verma modules [@M5]. On the other hand, a relation between Shapovalov elements and Mickelsson algebras was observed long ago [@Brun; @Cart]. In this paper, we explore those connections to their full extent and reformulate the results of [@MS] with the help of the inverse Shapovalov form; more precisely, with its lift $\mathcal{S}\in U_q(\mathfrak{g}_+)\otimes U_q(\mathfrak{b}_-)$ (a completed tensor product), where $\mathfrak{b}_-\subset \mathfrak{g}$ is a negative Borel subalgebra and $\mathfrak{g}_+$ is its opposite maximal nilpotent Lie subalgebra in $\mathfrak{g}$. 
This finding should not be a surprise because inverse contravariant forms and extremal projectors are two alternative ways to construct $\mathfrak{g}_+$-invariants in tensor products of $\mathfrak{g}$-modules, see e.g. [@M3]. Let us briefly explain the two alternative constructions of $Z(\mathcal{A},\mathfrak{g})$ presented in this paper, focusing on the $q\not =1$ setting. We consider two comultiplications $\Delta$ and $\tilde \Delta$ on $U_q(\mathfrak{a})$ intertwined by the R-matrix with a cut-off diagonal Cartan factor. Suppose that $X$ is a $U_q(\mathfrak{g}_+)$-module that extends to a $U_q(\mathfrak{b}_+)$-module diagonalizable over $U_q(\mathfrak{h})$ with finite-dimensional weight subspaces. Let $X^*$ denote the dual vector space to $X$. We associate with $X$ a matrix $S_X\in \mathrm{End}(X)\otimes\mathcal{A}$ and a vector $\psi_{X^*}\in X^*\otimes\mathcal{A}$ that feature the following transformation properties: $$(1\otimes u)\psi_{X^*}=\psi_{X^*}\Delta(u), \quad \Delta(u) S_X\in \mathrm{End}(X)\otimes\mathcal{J}, \quad \forall u\in U_q(\mathfrak{g}_+).$$ Here we mean the natural right $\mathrm{End}(X)$-action on $X^*$. Then the $\mathcal{A}$-entries of the vector $Z_{X^*}=\psi_{X^*} S_X$ are in $\mathcal{V}^+$, modulo the ideal $\mathcal{J}$. A "dual" construction uses a matrix $\tilde S_X\in \mathrm{End}(X)\otimes\mathcal{A}$ and a vector $\psi_X\in X\otimes\mathcal{V}$ obeying $$(1\otimes u)\tilde S_X=\tilde S_X'\tilde \Delta(u), \quad \tilde \Delta (u)\psi_X =0$$ for each homogeneous $u\in U_q(\mathfrak{g}_+)$ and a certain $\tilde S_X'$ depending on the weight of $u$. Then the components of the vector $Z_X=\tilde S_X \psi_X$ are in $\mathcal{V}^+ \simeq Z(\mathcal{A},\mathfrak{g})$. Moreover, $Z_X=(\mathrm{id}\otimes\wp)(\psi_X)$, where $\wp$ is the extremal projector of $U_q(\mathfrak{g})$. 
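To sketch the mechanism behind the first construction, note that the two transformation properties combine in one line: for $u\in U_q(\mathfrak{g}_+)$ as above, $$(1\otimes u)Z_{X^*}=\bigl((1\otimes u)\psi_{X^*}\bigr)S_X=\psi_{X^*}\:\Delta(u)\:S_X\in \psi_{X^*}\left(\mathrm{End}(X)\otimes\mathcal{J}\right)\subset X^*\otimes\mathcal{J}\:,$$ so each $\mathcal{A}$-entry of $Z_{X^*}$ is sent into $\mathcal{J}$ by left multiplication by the positive root vectors, i.e. its image under the projection $\mathcal{A}\to\mathcal{V}$ lies in $\mathcal{V}^+=\ker\mathcal{J}$.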
The matrix $S_X$ is a reduction to the representation $X$ of the universal Shapovalov matrix $\mathcal{S}\in U_q(\mathfrak{g}_+)\otimes U_q(\mathfrak{b}_-)$ relative to left Verma modules and the comultiplication $\Delta$. The matrix $\tilde S_X$ is a reduction of a twisted universal Shapovalov matrix of *right* Verma modules corresponding to the comultiplication $\tilde \Delta$. A remarkable feature of the constructions described above is that a morphism from the category of right modules participates in the category of left modules and *vice versa*. Let us emphasize that the appearance of two different comultiplications in our approach is not just a reflection of the two alternative constructions of $Z(\mathcal{A},\mathfrak{g})$. The Shapovalov matrix of either approach is itself expressed through the twist linking $\Delta$ and $\tilde \Delta$. This interpretation of the main building block becomes obscure in the limit $q\to 1$, although the classical version of the theory can be developed in parallel. The paper is organized in the following way. After a short reminder of quantum group basics in Section [2](#SecQuantGroups){reference-type="ref" reference="SecQuantGroups"}, we give a brief introduction to the theory of Mickelsson algebras in Section [3](#SecMickAlg){reference-type="ref" reference="SecMickAlg"}. Section [4](#secAlgRout){reference-type="ref" reference="secAlgRout"} is devoted to Hasse diagrams associated with representations of quantum groups and to the algebra of routes. The construction of $Z(\mathcal{A},\mathfrak{g})$ via the right Shapovalov matrix is presented in Section [5](#secRigthShMat){reference-type="ref" reference="secRigthShMat"}. An alternative construction via the left Shapovalov matrix is given in Section [6](#secLeftShMat){reference-type="ref" reference="secLeftShMat"}. # Basic notation and quantum group conventions {#SecQuantGroups} For details on quantum groups, the reader is referred either to Drinfeld's original talk [@D1] or to the textbook [@ChP]. 
In this exposition, we also use the notation of [@MS]. Here we recall the basics for convenience. Throughout the paper, $\mathfrak{g}$ is a semi-simple complex Lie algebra with a triangular decomposition $\mathfrak{g}=\mathfrak{g}_-\oplus\mathfrak{h}\oplus\mathfrak{g}_+$, where $\mathfrak{g}_\pm$ are maximal nilpotent Lie subalgebras and $\mathfrak{h}$ is the Cartan subalgebra. Let $\mathrm{R}\subset \mathfrak{h}^*$ denote the root system of $\mathfrak{g}$ and $\mathrm{R}^+$ the subset of positive roots with basis $\Pi$ of simple roots. The root lattice generated by $\Pi$ is denoted by $\Gamma\subset \mathfrak{h}^*$, with the positive semigroup $\Gamma_+=\mathbb{Z}_+\Pi\subset \Gamma$. The height $\mathrm{ht}(\mu)\in \mathbb{Z}_+$ of a weight $\mu\in \Gamma_+$ is the sum of its coordinates in the expansion over the basis $\Pi$. An $\mathrm{ad}$-invariant form $(\>.\>,\>.\>)$ is fixed on $\mathfrak{g}$, restricted to $\mathfrak{h}$, and transferred to $\mathfrak{h}^*$ by duality. For every $\lambda\in \mathfrak{h}^*$ there is a unique element $h_\lambda\in \mathfrak{h}$ such that $\mu(h_\lambda)=(\mu,\lambda)$ for all $\mu\in \mathfrak{h}^*$. We assume that $q\in \mathbb{C}$ is non-zero and not a root of unity. For $z\in \mathfrak{h}+\mathbb{C}$, we denote $[z]_q=\frac{q^{z}-q^{-z }}{q-q^{-1}}$. The standard Drinfeld-Jimbo quantum group $U_q(\mathfrak{g})$ is an associative $\mathbb{C}$-algebra with unit generated by $e_\alpha$, $f_\alpha$, and invertible $q^{h_\alpha}$ labeled by simple roots $\alpha$. In particular, they satisfy $$q^{h_\alpha}e_\beta=q^{ (\alpha,\beta)}e_\beta q^{ h_\alpha}, \quad [e_\alpha,f_\beta]=\delta_{\alpha,\beta}[h_\alpha]_q, \quad q^{ h_\alpha}f_\beta=q^{-(\alpha,\beta)}f_\beta q^{ h_\alpha},\quad \alpha, \beta\in \Pi.$$ The full set of relations can be found in [@ChP].
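For orientation, consider the simplest case $\mathfrak{g}=\mathfrak{sl}_2$ with a single simple root $\alpha$ normalized by $(\alpha,\alpha)=2$. Writing $e=e_\alpha$, $f=f_\alpha$, $q^h=q^{h_\alpha}$, the displayed relations specialize to the familiar presentation:

```latex
% U_q(sl_2): one simple root alpha with (alpha,alpha)=2
q^{h}e = q^{2}\,e\,q^{h}, \qquad
q^{h}f = q^{-2}\,f\,q^{h}, \qquad
[e,f] = [h]_q = \frac{q^{h}-q^{-h}}{q-q^{-1}} .
```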
We shall consider two Hopf algebra structures on $U_q(\mathfrak{g})$ with comultiplications $$\Delta(f_\alpha)= f_\alpha\otimes 1+q^{-h_\alpha}\otimes f_\alpha,\quad \Delta(q^{\pm h_\alpha})=q^{\pm h_\alpha}\otimes q^{\pm h_\alpha},\quad\Delta(e_\alpha)= e_\alpha\otimes q^{h_\alpha}+1\otimes e_\alpha,$$ $$\tilde \Delta(f_\alpha)=f_\alpha\otimes 1 +q^{h_\alpha}\otimes f_\alpha,\quad \tilde \Delta(q^{\pm h_\alpha})=q^{\pm h_\alpha}\otimes q^{\pm h_\alpha}, \quad \tilde \Delta(e_\alpha)=e_\alpha\otimes q^{-h_\alpha}+1\otimes e_\alpha,$$ defined on the generators. Their antipodes are given by $$\gamma( f_\alpha)=- q^{h_\alpha}f_\alpha, \quad \gamma( q^{\pm h_\alpha})=q^{\mp h_\alpha}, \quad \gamma( e_\alpha)=- e_\alpha q^{-h_\alpha},$$ $$\tilde \gamma( f_\alpha)=- q^{-h_\alpha}f_\alpha, \quad \tilde \gamma( q^{\pm h_\alpha})=q^{\mp h_\alpha}, \quad \tilde \gamma( e_\alpha)=- e_\alpha q^{h_\alpha}.$$ The counit homomorphism $\epsilon\colon U_q(\mathfrak{g})\to \mathbb{C}$ is the same for $\Delta$ and $\tilde \Delta$ and returns $$\epsilon(e_\alpha)=0, \quad \epsilon(f_\alpha)=0, \quad \epsilon(q^{h_\alpha})=1.$$ We denote by $U_q(\mathfrak{h})$ the Cartan subalgebra generated by $\{q^{\pm h_\alpha}\}_{\alpha\in \Pi}$. The subalgebras generated by $\{e_\alpha\}_{\alpha\in \Pi}$ and $\{f_\alpha\}_{\alpha\in \Pi}$ are denoted by $U_q(\mathfrak{g}_+)$ and $U_q(\mathfrak{g}_-)$, respectively. The quantum Borel subalgebras are defined as $U_q(\mathfrak{b}_\pm)=U_q(\mathfrak{g}_\pm)U_q(\mathfrak{h})$; they are Hopf subalgebras in $U_q(\mathfrak{g})$. Let $\mathcal{R}$ denote a quasitriangular structure relative to $\Delta$ and set $\check{\mathcal{R}}=q^{-\sum_{i}h_i\otimes h_i}\mathcal{R}$, where $\{h_i\}_{i=1}^{\mathrm{rk}\mathfrak{g}}$ is an orthonormal basis in $\mathfrak{h}$. One can choose $\mathcal{R}$ such that $\check{\mathcal{R}}\in U_q(\mathfrak{g}_+)\otimes U_q(\mathfrak{g}_-)$ (a completed tensor product).
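As a quick consistency check, the antipode axiom $m\circ(\gamma\otimes\mathrm{id})\circ\Delta=\epsilon(\cdot)1$ can be verified on the generator $e_\alpha$ directly from the above formulas:

```latex
% antipode axiom on e_alpha for the pair (Delta, gamma)
m(\gamma\otimes\mathrm{id})\Delta(e_\alpha)
  = \gamma(e_\alpha)\,q^{h_\alpha}+\gamma(1)\,e_\alpha
  = -\,e_\alpha q^{-h_\alpha}q^{h_\alpha}+e_\alpha
  = 0
  = \epsilon(e_\alpha)1 ,
```

and similarly for $f_\alpha$, $q^{\pm h_\alpha}$, and for the pair $(\tilde \Delta,\tilde \gamma)$.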
The element $\check{\mathcal{R}}$ is a twist relating the two comultiplications: $$\check{\mathcal{R}}\Delta(u)=\tilde \Delta(u)\check{\mathcal{R}}, \quad \forall u\in U_q(\mathfrak{g}).$$ The matrix $\check{\mathcal{R}}$ plays a central role in this exposition. Given a $U_q(\mathfrak{h})$-module $V$, a non-zero vector $v$ is said to be of weight $\mu=\mathrm{wt}(v)$ if $q^{h_\alpha}v=q^{(\mu,\alpha)} v$ for all $\alpha\in \Pi$. The linear span of such vectors is denoted by $V[\mu]$. A $U_q(\mathfrak{g})$-module $V$ is said to be of highest weight $\lambda$ if it is generated by a weight vector $v\in V[\lambda]$ that is killed by all $e_\alpha$. Such a vector $v$ is called a highest vector; it is defined up to a non-zero scalar multiplier. For each $\mu\in \mathfrak{h}^*$ we denote by $\tau_\mu$ the automorphism of $U_q(\mathfrak{h})$ defined by the assignment $\tau_\mu\colon q^{\pm h_\beta}\mapsto q^{\pm h_\beta}q^{\pm(\mu,\beta)}$. We extend it to an automorphism of $U_q(\mathfrak{b}_-)$ by $\tau_\mu \colon h \phi\mapsto (\tau_{\mu} h) \phi$ for $h\in U_q(\mathfrak{h})$ and $\phi\in U_q(\mathfrak{g}_-)$. It is indeed an algebra map because $\phi h= \tau_{-\nu}(h)\phi$ for $\phi$ of weight $\nu$ (with respect to the adjoint action) and $\tau_\mu\circ \tau_\nu=\tau_\nu\circ \tau_\mu$ for all $\mu, \nu\in \mathfrak{h}^*$. # Mickelsson algebras {#SecMickAlg} In this section we recall the definition and basic facts about Mickelsson algebras. Let $U$ denote either the classical or the quantum universal enveloping algebra of $\mathfrak{g}$ with triangular factorization $U^-U^0U^+$ relative to a polarization $\mathfrak{g}=\mathfrak{g}_-\oplus\mathfrak{h}\oplus\mathfrak{g}_+$. The subalgebras $U^\pm$ are endowed with a natural grading by setting $\deg(u)=1$ on their simple root generators. By $B^\pm$ we denote the quantum Borel subalgebras $U^\pm U^0$.
Suppose that $U$ is contained in an associative algebra $\mathcal{A}$, and $\mathcal{A}$ is diagonalizable over $U^0$ with finite dimensional weight spaces. Define a left ideal $\mathcal{J}=\mathcal{J}_+=\mathcal{A}\mathfrak{g}_+$ as the one generated by $e_\alpha$ with $\alpha\in \Pi$. Similarly one defines a right ideal $\mathcal{J}_-=\mathfrak{g}_- \mathcal{A}$ generated by $f_\alpha$ with $\alpha\in \Pi$. Let $N(\mathcal{J})$ denote the normalizer of $\mathcal{J}$ in $\mathcal{A}$, i.e. the set of elements $a\in \mathcal{A}$ such that $\mathcal{J}a \subset \mathcal{J}$. It is clear that $N(\mathcal{J})$ is an algebra and $\mathcal{J}$ is a two-sided ideal in it. **Definition 1**. *The quotient $N(\mathcal{J})/\mathcal{J}$ is called the Mickelsson (step, reduction) algebra of the pair $\mathcal{A}\supset U$ and is denoted by $Z(\mathcal{A},\mathfrak{g})$.* In particular, $\mathcal{A}$ can be either the classical or the quantized universal enveloping algebra of a simple Lie algebra $\mathfrak{a}$ containing $\mathfrak{g}$ (provided the embedding $\mathfrak{a}\supset \mathfrak{g}$ is quantizable in the $q\not=1$ case). Then we denote $Z(\mathcal{A},\mathfrak{g})$ by $Z(\mathfrak{a},\mathfrak{g})$. We denote by $\hat U^0$ the ring of fractions over $[h_\mu-c]_q$ ($h_\mu-c$ in the classical case), where $\mu \in \Gamma_+$ and $c$ ranges over a subset of $\mathbb{C}$ depending on the weight structure of $\mathcal{A}$. In particular, if all weights of $\mathcal{A}$ are integral, it is sufficient to require $c\in \mathbb{Q}$. Assuming that the adjoint action of $U^0$ on $\mathcal{A}$ is diagonalizable and the ideals $\mathcal{J}_\pm$ are invariant with respect to $U^0$, we extend them accordingly to $\hat \mathcal{A}$ and $\hat \mathcal{J}_\pm$. The Mickelsson algebra of the pair $(\hat \mathcal{A}, \hat \mathcal{J})$ will be denoted by $\hat Z(\mathcal{A}, \mathcal{J})$. It is clear that $\hat Z(\mathcal{A}, \mathcal{J})\simeq Z(\mathcal{A}, \mathcal{J})\otimes_{U^0}\hat U^0$.
We suppose that there exist further extensions $\hat {U}\subset \breve{U}$, $\hat {\mathcal{A}}\subset \breve{\mathcal{A}}$, $\hat \mathcal{J}_\pm\subset \breve{ \mathcal{J}}_\pm$ such that $\mathcal{J}_\pm= \breve{ \mathcal{J}}_\pm\cap \mathcal{A}$, and there is an element (extremal projector) $\wp\in \breve { U}$ of zero weight satisfying $$\begin{aligned} \mathcal{J}_+\wp=0=\wp \mathcal{J}_-, \quad \wp^2=\wp, \quad \wp-1\in \breve { \mathcal{J}}_-\cap \breve {\mathcal{J}}_+. \label{extr_proj}\end{aligned}$$ Such an extension does exist in the trivial case $\mathcal{A}=U$; then it requires a completion of $\hat U$ with certain infinite series. In general, it is sufficient to assume, for instance, that $\mathcal{A}$ is a free left or right regular $U$-module generated by a locally finite adjoint $U$-submodule. This is the case, in particular, when $\mathcal{A}=U(\mathfrak{a})$ for a finite dimensional Lie algebra $\mathfrak{a}\supset \mathfrak{g}$, or when $\mathcal{A}=U_q(\mathfrak{a})\supset U_q(\mathfrak{g})=U$, where $\mathfrak{g}$ is the semi-simple part of a Levi subalgebra $\mathfrak{g}\subset \mathfrak{a}$. The exact formulas for $\wp$ can be found in [@K]. Note that $\wp$ is a product of factors each of which is a series in $f_\alpha^k e_\alpha^k$, $\alpha\in \mathrm{R}^+$, with coefficients in $\hat U^0$. The inclusion on the right in ([\[extr_proj\]](#extr_proj){reference-type="ref" reference="extr_proj"}) implies the following isomorphisms of $\hat U^0$-modules: $$\hat \mathcal{A}/\hat \mathcal{J}_+ \simeq \hat \mathcal{A}\wp, \quad \hat \mathcal{A}/\hat \mathcal{J}_-\simeq \wp \hat \mathcal{A}, \quad \hat \mathcal{A}/(\hat \mathcal{J}_-+\hat\mathcal{J}_+) \simeq \wp \hat \mathcal{A}\wp.$$ We denote by $\hat \mathcal{V}=\hat \mathcal{A}/\hat \mathcal{J}_+$ the \"universal parabolic Verma module\" and by $1_{\hat\mathcal{V}}$ the image of $1$ in $\hat \mathcal{V}$. **Theorem 2** (cf. [@K]).
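For orientation, in the simplest classical case $\mathfrak{g}=\mathfrak{sl}_2$, with $[h,e]=2e$, $[h,f]=-2f$, $[e,f]=h$, the series of [@K] takes the familiar form (conventions may differ by the side on which the Cartan coefficients are placed):

```latex
% extremal projector of sl(2), classical case
\wp = \sum_{k=0}^{\infty}\frac{(-1)^k}{k!\,(h+2)(h+3)\cdots(h+k+1)}\,f^{k}e^{k}
    = 1-\frac{1}{h+2}\,fe+\ldots ,
```

a zero-weight series in $f^k e^k$ with coefficients in $\hat U^0$; on a weight module with generic weights it projects onto the kernel of $e$ along the image of $f$, whence $\wp^2=\wp$.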
*The projection $\varpi\colon \hat \mathcal{A}\to \breve {\mathcal{A}}$, $a\mapsto \wp a\wp$, factors through an isomorphism of algebras $$\hat Z(\mathcal{A},\mathfrak{g})\to \wp \hat \mathcal{A}\wp.$$* From now on we suppose that $\mathcal{A}$ is a free $U^-\otimes B^+$-bimodule relative to the regular actions, generated by (a basis of) a vector space $\mathcal{Z}\subset \mathcal{A}$. Then $\mathcal{Z}\hat U^0$ is isomorphically mapped onto the double coset algebra $\hat \mathcal{A}/(\hat \mathcal{J}_- + \hat \mathcal{J}_+)$. Two types of $\mathcal{Z}$ may be of interest within our approach: either a) $\mathcal{Z}$ is a locally finite adjoint $B^+$-submodule, or b) $\mathcal{Z}$ has a basis of ordered monomials $\psi_1^{m_1}\ldots \psi_k^{m_k}$ generated by weight elements $\{\psi_i\}_{i=1}^k$ of a finite dimensional $B^+$-submodule $\mathfrak{Z}$. In the latter case we say that $\mathfrak{Z}\subset \mathcal{Z}$ delivers a Poincaré-Birkhoff-Witt (PBW) basis over $U$. The set $\{\psi_i\}_{i=1}^k$ is called a PBW system and its elements PBW generators. It is known that the image of a PBW system in $\hat \mathcal{A}$ generates a PBW basis in $\wp \hat \mathcal{A}\wp\simeq \hat Z(\mathcal{A},\mathfrak{g})$, as a $\hat U^0$-module. More generally, we may assume that $\mathcal{Z}$ and $\mathfrak{Z}$ satisfy the above requirements modulo $\mathcal{J}$, that is, upon embedding in $\mathcal{V}$. In particular, we are interested in the following two cases. First suppose that $\mathcal{A}=U(\mathfrak{a})$ for a Lie algebra $\mathfrak{a}\supset \mathfrak{g}$. Let $\mathfrak{Z}\subset \mathfrak{a}$ be a $\mathfrak{g}$-module complementary to $\mathfrak{g}$. Any basis in $\mathfrak{Z}$ is a PBW system over $U$ according to the PBW theorem. The second case of our concern is when $\mathfrak{a}$ is a simple Lie algebra and $\mathfrak{g}$ is the derived Lie algebra of a Levi subalgebra in $\mathfrak{a}$.
Then we take $\mathcal{A}=U_q(\mathfrak{a})$, $U=U_q(\mathfrak{g})$, and consider the $U_q(\mathfrak{a})$-module $\mathfrak{a}_q$ deforming the adjoint module $\mathfrak{a}$. It contains a $U_q(\mathfrak{g})$-submodule $\mathfrak{g}_q$ deforming the adjoint module $\mathfrak{g}$. We realize the quotient $\mathfrak{a}_q/\mathfrak{g}_q$ as a submodule in $\mathcal{A}$ (possibly modulo $\mathcal{J}$) and take it for the role of $\mathfrak{Z}$. There is a basis in $\mathfrak{Z}$ delivering a PBW system in $U_q(\mathfrak{a})$ over $U_q(\mathfrak{g})$ upon extension of the ring of scalars to $\mathbb{C}[[\hbar]]$ with $q=e^\hbar$, see Section [6](#secLeftShMat){reference-type="ref" reference="secLeftShMat"}. Other examples may include the Heisenberg double $U^*\rtimes U$, where $U^*$ is the function algebra on the (quantum) algebraic group of $\mathfrak{g}$; the algebra $\mathcal{A}=U\otimes U$ with the diagonal embedding of $U=U_q(\mathfrak{g})$, see Section [4.3](#SecLeftZ){reference-type="ref" reference="SecLeftZ"}; the algebra $\mathcal{A}=\mathrm{T}(\mathfrak{Z})\rtimes U$, where $\mathrm{T}(\mathfrak{Z})$ is the tensor algebra of a $U$-module $\mathfrak{Z}$; *etc*. # Algebra of routes {#secAlgRout} In this section we briefly recall an algebraic structure on routes in a Hasse diagram following [@MS; @M1]; more details can be found therein. ## Hasse diagrams Let $X$ be a $U^+$-module that is extendable to a module over $B^+$ and diagonalizable over $U^0$ with $\dim X[\mu]<\infty$ for all $\mu\in \mathfrak{h}^*$. We assume that for any pair of weight vectors from $X$, their weight difference is in the root lattice $\Gamma$. We associate with $X$ a Hasse diagram $\mathfrak{H}(X)$ as follows. It is an oriented graph whose nodes are elements of a fixed weight basis $\{v_i\}_{i\in I_X}\subset X$, $\mathrm{wt}(v_i)=\nu_i$. We will identify them with elements of the index set $I_X$.
Arrows $i\stackrel{e_\alpha}{\longleftarrow} j$ are marked with simple root vectors $e_\alpha$ if $\pi^\alpha_{ij}=\pi(e_\alpha)_{ij}\not =0$; then $\nu_i-\nu_j=\alpha$. A sequence of adjacent arrows $$m_1 \stackrel{\alpha_1}{\longleftarrow} m_2\stackrel{\alpha_2}{\longleftarrow} \ldots \stackrel{\alpha_k}{\longleftarrow} m_{k+1}$$ is called a path (of length $k$) from $m_{k+1}$ to $m_1$. A node $i$ is superior to a node $j$, $i\succ j$, if there is a path from $j$ to $i$. This defines a partial order on $\mathfrak{H}(X)$. A route $\vec m =(m_1,\ldots, m_{k+1})$ from $j$ to $i$ is an arbitrary ordered sequence $$i=m_1\succ \ldots \succ m_{k+1}=j.$$ Typically we orient routes in the right-to-left ascending order. We denote $\max(\vec m)=i$ and $\min(\vec m)=j$ and say that $\vec m$ is a route $i\dashleftarrow j$, or write it as $i\stackrel{\>\>\vec m}{\dashleftarrow} j$. We will also suppress one of the nodes to emphasize the start or end node: for instance, $i\stackrel{\>\>\vec m}{\dashleftarrow}$ means that $\vec m$ is a route $i\dashleftarrow$ (terminating at $i$), whereas $\stackrel{\>\>\vec m}{\dashleftarrow} j$ is a route $\vec m$ starting from $j$. The integer $|\vec m|=k$ is called the length of $\vec m$. Thus a path to $i$ from $j$ is a route $i\dashleftarrow j$ of maximal length, equal to $\mathrm{ht}(\nu_i-\nu_j)$. Given $\stackrel{\>\>\vec m}{\dashleftarrow} j \succ i\stackrel{\>\>\vec n}{\dashleftarrow}$, we write $\vec m\succ \vec n$. Then there is a route $(\vec m,\vec n)$ that includes all nodes from $\vec m$ and $\vec n$. We drop the vector superscript for routes consisting of one node. For instance, $(\vec m, \vec n)=(i,j,k)$ if $\vec m=(i)\succ (j,k)=\vec n$. Given two routes $i\stackrel{\>\>\vec m}{\dashleftarrow} k\stackrel{\>\>\vec n}{\dashleftarrow} j$, we get a route $i\stackrel{\>\>\vec m\cdot \vec n}{\dashleftarrow} j$, the concatenation of $\vec m$ and $\vec n$. Its set of nodes is the union of the two.
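A minimal illustration: let $X$ be the natural three dimensional module of $U_q(\mathfrak{sl}_3)$ with weight basis $v_1,v_2,v_3$, on which $e_{\alpha_1}v_2=v_1$, $e_{\alpha_2}v_3=v_2$, and all other actions of simple root vectors on basis vectors vanish. The Hasse diagram $\mathfrak{H}(X)$ is then the chain

```latex
% Hasse diagram of the natural U_q(sl_3)-module
1 \;\stackrel{e_{\alpha_1}}{\longleftarrow}\; 2 \;\stackrel{e_{\alpha_2}}{\longleftarrow}\; 3 .
```

There are exactly two routes $1\dashleftarrow 3$: the path $(1,2,3)$, of maximal length $2=\mathrm{ht}(\nu_1-\nu_3)$, and the route $(1,3)$ of length $1$; concatenation recovers the path as $(1,2)\cdot(2,3)=(1,2,3)$.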
Concatenation is a partial associative operation on routes. Removing an arbitrary subset of nodes from a route yields a route again (possibly empty). ## Auxiliary module $\Phi_X$ Denote by $\Phi_X$ a free right $\hat U^0$-module generated by routes in $\mathfrak{H}(X)$. Introduce a left $\hat U^0$-action by assigning weight $\nu_j-\nu_i$ to a route $i\stackrel{\>\>\vec m}{\dashleftarrow} j$: it is regarded as a character $\hat U^0\to \mathbb{C}$ relative to the induced adjoint action on $\Phi_X$. Extend concatenation as a partial operation to $\hat U^0$-lines $\vec m \hat U^0$, by associativity. Define $\mathcal{F}=\frac{1}{q-q^{-1}}(\check{\mathcal{R}}-1\otimes 1)\in U^+\otimes U^-$ and consider a matrix $F=(\pi\otimes\mathrm{id})(\mathcal{F})=\sum_{i,j\in I_X}e_{ij}\otimes\phi_{ij}$, where $e_{ij}$ are the matrix units obeying $e_{ij}e_{mn}=\delta_{jm}e_{in}$. The entry $\phi_{ij}$ carries weight $\nu_j-\nu_i$, so the matrix $F$ is strictly lower triangular. To each route $\vec m$ from $\mathfrak{H}(X)$ we assign an element $$\phi_{\vec m}= \phi_{m_1,m_2} \ldots \phi_{m_{k-1},m_k}\in U^-$$ and extend this assignment to a $\hat U^0$-bimodule homomorphism $p_\Phi\colon \Phi_X\to \hat B^-$. For a route $(i)$ of zero length we set $\phi_i=1$. The map $p_\Phi$ is multiplicative with respect to concatenation: $$p_\Phi( \vec m\cdot \vec n)=p_\Phi( \vec m)p_\Phi( \vec n), \quad \vec m,\vec n\in \Phi_X.$$ Arrows in $\mathfrak{H}(X)$ are in bijection with ordered pairs $(l,r)$ of nodes they connect. We call such pairs simple. Set $P(\alpha)=\{(l,r)\in I_X^2|\>l\stackrel{\alpha}{\longleftarrow}r\}$ for $\alpha\in \Pi$. We define an operator $\partial_{l,r}\colon \Phi_X\to \Phi_X\otimes_{\hat U^0} \Phi_X$ for each $(l,r)\in P(\alpha)$ as follows. We set it to zero on routes of zero length.
On routes of length $1$ we put $$\begin{aligned} \begin{array}{rrccc} \partial_{l,r} (l,r)&=&(l)\otimes(r)[h_\alpha]_q ,\\ \partial_{l,r} (l,j)&=&- (l)\otimes q^{-h_\alpha}(r,j), &r\succ j, \\ \partial_{l,r}(i,r) &=&(i,l)q^{h_\alpha}\otimes(r), &i\succ l, \end{array} \label{prt on phi}\end{aligned}$$ and zero otherwise. We extend it to all routes as a homomorphism of right $\hat U^0$-modules and a derivation with respect to concatenation: $$\partial_{l,r}(\vec m\cdot \vec n)=(\partial_{l,r} \vec m)\cdot \bigl((l)\otimes\vec n\bigr) + \bigl(\vec m\otimes(r)\bigr)\cdot (\partial_{l,r} \vec n).$$ At most one concatenation factor survives the action of $\partial_{l,r}$, so the operation $\cdot$ on the right-hand side makes sense. Define $p_{\Phi\Phi}\colon \Phi_X\otimes_{ \hat U^0}\Phi_X \to \hat B^-$ as the composition of $p_\Phi\otimes p_\Phi$ with the multiplication on $\hat B^-$. It is also a $\hat U^0$-bimodule homomorphism. **Proposition 3**. *For each $\alpha\in \Pi$ and each $\xi\in \Phi_X$, $$\begin{aligned} \label{e-action} e_\alpha p_\Phi (\xi)=p_{\Phi\Phi}\circ \sum_{(l,r)\in P(\alpha)}\pi^\alpha_{lr}\partial_{l,r}(\xi)+\tau_{\alpha}^{-1}p_\Phi(\xi)e_\alpha.\end{aligned}$$* *Proof.* The proof is analogous to the proof of [@MS], Proposition 6.4, with the difference that the calculations are done exactly rather than modulo $\mathcal{J}$. That accounts for the appearance of the last term on the right-hand side of ([\[e-action\]](#e-action){reference-type="ref" reference="e-action"}). For $\xi=(i,j)$ we have $p_\Phi(\xi)=\phi_{ij}$.
Comparing the formula $$\begin{aligned} e_{\alpha}\phi_{ij}- \phi_{ij} e_{\alpha}= \sum_{k\in I_X}\phi_{ik}q^{h_{\alpha}} \pi^\alpha_{kj} -\sum_{k\in I_X} \pi^\alpha_{ik} q^{-h_{\alpha}}\phi_{kj}+\pi^\alpha_{ij}[h_\alpha]_q, \quad \alpha\in \Pi, \label{intertwiner_F}\end{aligned}$$ with ([\[prt on phi\]](#prt on phi){reference-type="ref" reference="prt on phi"}) we prove ([\[e-action\]](#e-action){reference-type="ref" reference="e-action"}) for all routes of length 1. Note that $\tau_\alpha^{-1}$ acts as the identity on $U^-$, so the first term on the right-hand side of ([\[e-action\]](#e-action){reference-type="ref" reference="e-action"}) gives an expression for the commutator with $e_\alpha$. Using the Leibniz rule we extend it to all routes, which generate $\Phi_X$ as a right $\hat U^0$-module. To complete the proof, we utilize the relation $e_\alpha f h - \tau^{-1}_\alpha(fh)e_\alpha=[e_\alpha, f]h$ for all $f\in U^-$ and $h\in \hat U^0$. ◻ Proposition [Proposition 3](#intertwing loc){reference-type="ref" reference="intertwing loc"} is the rationale for introducing the auxiliary module $\Phi_X$, as it allows one to reduce the action of the generators $e_\alpha$ on $\hat B^-$ to the action of the simpler operators $\partial_{l,r}$. ## A construction of $\hat Z(\mathcal{A},\mathfrak{g})$ {#SecLeftZ} Suppose that $X^*\subset \hat\mathcal{A}/\hat \mathcal{J}= \hat \mathcal{V}$ is a $U^+$-module that is isomorphic to the left dual to $X$. Let $\psi_X=\sum_{i\in I_X} x_i\otimes\psi_i\in X\otimes X^*$ be the $U^+$-invariant. We call it the right Mickelsson generator relative to $X$. In the previous subsection, we defined $\phi_{\vec m}$ as a ${B}^-$-valued function of routes in $\mathfrak{H}(X)$. We extend the assignment $I_X\ni i\mapsto \psi_i\in \mathcal{V}$ to a function of routes with values in the $B^-$-module $\mathcal{V}$ by setting $\psi_{\vec m}=\phi_{\vec m}\psi_{m_k}\in \mathcal{V}$, for $\vec m=(m_1,\ldots, m_k)$. Denote by $\rho\in \mathfrak{h}^*$ the half-sum of positive roots.
For each weight $\mu\in \Gamma_+$ define $$\begin{aligned} \eta_{\mu}=h_\mu+(\mu, \rho)-\frac{1}{2}(\mu,\mu), \quad \tilde \eta_{\mu}=h_\mu+(\mu,\rho)+\frac{1}{2}(\mu,\mu)\end{aligned}$$ as elements of $\mathfrak{h}+\mathbb{C}$. Then $q^{\eta_\mu}$ and $q^{\tilde\eta_\mu}$ are elements of $\hat U^0$. Clearly $\eta_\mu=\tilde \eta_\mu-||\mu||^2$. Denote $$\varphi(z)=\frac{q^{-z}}{[z]_q},$$ where $z$ is an indeterminate. Next we introduce a system of elements of $\hat U^0$ as functions of routes. They will be used as left and right multipliers in the construction of $\hat Z(\mathcal{A},\mathfrak{g})$. For each node $i\in I_X$ we denote $\eta_i=\eta_{\nu_i}$. For each ordered pair of nodes $i\succ j$ we define $\eta_{ij}=\eta_{\nu_i-\nu_j}$, $\tilde \eta_{ij}=\tilde \eta_{\nu_i-\nu_j}$, and set $$A^j_i=\varphi(-\eta_{ij}),\quad \tilde A^i_j=\varphi(\tilde \eta_{ij}),\quad B^i_j=\varphi(\eta_{i}-\eta_j).$$ Furthermore, for each route $\vec m=(m_1,\ldots, m_k)\not =\varnothing$ we define $$A^j_{\vec m}=A^j_{m_1}\ldots A^j_{m_k}, \quad \tilde A^i_{\vec m}=\tilde A^i_{m_1}\ldots \tilde A^i_{m_k}, \quad B^i_{\vec m}=B^i_{m_1}\ldots B^i_{m_k},$$ assuming $i\succ \vec m$ and $\vec m\succ j$. One can check that $\tau_{\nu_i}(\eta_i-\eta_{j})=\tilde \eta_{ij}$ for any $i\succ j$: indeed, $\tau_{\nu_i}$ shifts $h_\lambda\mapsto h_\lambda+(\nu_i,\lambda)$, and the resulting quadratic terms combine into $\frac{1}{2}(\nu_i-\nu_j,\nu_i-\nu_j)$. This gives the relation $$\begin{aligned} \tilde A^i_{\vec m} =\tau_{\nu_i}(B^i_{\vec m}). \label{A-B}\end{aligned}$$ It is also convenient to set $A^i_i=\tilde A^i_i=B^i_i=1$ for all $i\in I_X$. **Proposition 4** ([@MS]). *For each $i\in I_X$, the element $$\begin{aligned} z_i=\psi_{i}+\sum_{i\succ \vec m \not =\varnothing }\psi_{(i,\vec m)}B^i_{\vec m}=\wp \psi_i \label{norm_element}\end{aligned}$$ belongs to $\hat\mathcal{V}^+$. Upon the identification $\hat \mathcal{V}^+\simeq \hat Z(\mathcal{A},\mathfrak{g})$, $z_i$ is an element of $\hat Z(\mathcal{A},\mathfrak{g})$.* Picking appropriate $X$ with $X^*\subset \mathcal{A}$ one can obtain the entire $\hat Z(\mathcal{A},\mathfrak{g})$.
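In lowest orders, with $\psi_{(i,\vec m)}=\phi_{(i,\vec m)}\psi_{\min(\vec m)}$, formula ([\[norm_element\]](#norm_element){reference-type="ref" reference="norm_element"}) unpacks as

```latex
% the first terms of the normalized element z_i
z_i = \psi_i + \sum_{i\succ j}\phi_{ij}\,\psi_j\,B^i_j
    + \sum_{i\succ j\succ k}\phi_{ij}\phi_{jk}\,\psi_k\,B^i_j B^i_k + \ldots ,
```

where the sums run over one-node routes $\vec m=(j)$ and two-node routes $\vec m=(j,k)$ below $i$, and so on.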
Alternatively, one can use $X$ to construct a PBW basis, as in [@MS]. Next we give an example that did not enter [@MS]. It is motivated by a realization of Drinfeld's quantum double of $U$ [@D1] as a subalgebra in $U\otimes U$ [@RS]. **Example 5**. *Let $U$ be the quantum group $U_q(\mathfrak{g})$ of a simple Lie algebra $\mathfrak{g}$ and set $\mathcal{A}=U_q(\mathfrak{g})\otimes U_q(\mathfrak{g})$ with the diagonal embedding of $U$ via $\Delta$. The corresponding Mickelsson algebra is responsible for decomposing tensor products of $U_q(\mathfrak{g})$-modules. Observe that an extension $\breve \mathcal{A}$ accommodating the extremal projector of $U$ does exist. The algebra $\breve U$ is spanned by series in products $u_-u_+ h$ of the same weight, where $u_\pm\in U^\pm$ and $h\in \hat U^0$. It is sufficient to take $\breve \mathcal{A}=\breve U\otimes\breve U$.* *In order to construct a (right) Mickelsson generator, let $(Y,\varrho)$ be a fundamental $U$-module of minimal dimension and define an action $\pi$ on $E=\mathrm{End}(Y)$ by $\pi(u)(x)=\varrho(u^{(2)})x\varrho\bigl(\gamma^{-1}(u^{(1)})\bigr)$ (in the Sweedler notation for the coproduct), for $u\in U$ and $x\in E$. We restrict $\pi$ to $U^+$ and take the matrix $\psi_E=R_{12}^{-1}R_{31}\in E\otimes U\otimes U$ for a Mickelsson generator. It satisfies $$\bigl(\varrho(u^{(1)}) \otimes u^{(2)}\otimes u^{(3)}\bigr)\psi_E=\psi_E\bigl(\varrho(u^{(3)}) \otimes u^{(1)}\otimes u^{(2)}\bigr)=\psi_E\bigl(\varrho(u )\otimes 1\otimes 1\bigr) \mod E\otimes\hat \mathcal{J}, \quad \forall u\in U^+.$$ This is equivalent to $U^+$-invariance of $\psi_E$ as a tensor from $E\otimes\hat \mathcal{V}$.* *The module $E$ is completely reducible and contains a submodule $X\subset E$ quantizing the classical coadjoint module $\mathfrak{g}^*\simeq \mathfrak{g}$ complementary to the diagonally embedded $\mathfrak{g}\subset \mathfrak{g}\oplus\mathfrak{g}$ (the classical double).
The invariant projection $E\to X$ takes $\psi_E$ to $\psi_X$, whose components generate a PBW basis in $\mathcal{A}$ over $U$ upon extension of the ring of scalars to formal power series in $\hbar=\ln q$. Respectively, the components of the vector $Z_{X}=\tilde S_{X}\psi_{X}$ generate a PBW basis in $\hat Z(\mathcal{A},U)$.* # Right Shapovalov matrix and $\hat Z(\mathcal{A},\mathfrak{g})$ {#secRigthShMat} In this section we relate the construction of $\hat Z(\mathcal{A},\mathfrak{g})$ via Hasse diagrams worked out in [@MS] to the inverse Shapovalov form on $U$. Set $q=e^\hbar$ and denote by $U_\hbar(\mathfrak{g})$ the $\mathbb{C}[[\hbar]]$-extension of $U$ completed in the $\hbar$-adic topology [@D1]. Denote $d= \frac{1}{2}\sum_i h_i h_i + h_\rho \in U_\hbar(\mathfrak{h}).$ **Lemma 6**. *Let $\phi \in U^-$ be an element of weight $-\mu<0$. Then $$[d,\phi]=-\tilde \eta_\mu \phi=-\phi \eta_\mu.$$* *Proof.* Straightforward: for $\phi$ of weight $-\mu$ one has $h_i\phi=\phi\bigl(h_i-\mu(h_i)\bigr)$, whence $[\frac{1}{2}\sum_i h_ih_i,\phi]=\phi\bigl(-h_\mu+\frac{1}{2}(\mu,\mu)\bigr)$ and $[h_\rho,\phi]=-(\mu,\rho)\phi$; adding up gives $[d,\phi]=-\phi\eta_\mu$, and commuting the Cartan part through $\phi$ turns $-\phi\eta_\mu$ into $-\tilde\eta_\mu\phi$. ◻ It follows that the operator $q^{ [d, - ]}$ leaves the $\mathbb{C}[q,q^{-1}]$-submodule $\sum_{\mu>0}\hat B^-[-\mu]\subset \hat U_\hbar(\mathfrak{b}_-)$ invariant, and $\frac{q^{2[d, - ]}-\mathrm{id}}{q-q^{-1}}$ is invertible on it. The inverse $\varphi([d, -])$ acts as right multiplication by $\varphi(-\eta_\mu )$ and left multiplication by $\varphi(-\tilde \eta_\mu )$ on each subspace of weight $-\mu<0$. Then the operators $\varphi(\pm D)$ with $D=\mathrm{id}\otimes[d, -]$ are well defined on the subspace $\sum_{\mu>0} U^+[\mu]\otimes\hat B^-[-\mu]$. Introduce an element $\tilde \mathcal{S}$ in the completed tensor product $U^+ \otimes\hat B^-$ as $$\begin{aligned} \label{right S} \tilde \mathcal{S}=\sum_{n=0}^{\infty } \tilde \mathcal{S}^{(n)}, \quad \mbox{where}\quad \tilde \mathcal{S}^{(0)}=1\otimes 1,\quad \tilde \mathcal{S}^{(n+1)}= \varphi(-D)(\tilde \mathcal{S}^{(n)}\mathcal{F}), \quad n\geqslant 0.\end{aligned}$$ The series truncates when the left tensor leg is sent to $\mathrm{End}(X)$ with $\dim(X)<\infty$.
If we define $\mathrm{ht}(X)$ as the height of the difference between the highest and the lowest weights of $X$, then $\tilde \mathcal{S}^{(n)}=0$ for $n>\mathrm{ht}(X)$, so the sum contains at most $\mathrm{ht}(X)+1$ terms. Formula ([\[right S\]](#right S){reference-type="ref" reference="right S"}) gives an explicit expression for $\tilde S^{(n)}_X=(\pi\otimes\mathrm{id})(\tilde \mathcal{S}^{(n)})\in \mathrm{End}(X)\otimes\hat B^-$, and hence for $\tilde S_X=\sum_{n=0}^\infty \tilde S^{(n)}_X$, if one knows the matrix $(\pi\otimes\mathrm{id})(\check{\mathcal{R}})$. The formula simplifies greatly in the classical limit because $\lim_{q\to 1}\mathcal{F}= \sum_{\alpha\in \mathrm{R}^+}e_\alpha\otimes f_\alpha$, the inverse of the invariant pairing $\mathfrak{g}_-\otimes\mathfrak{g}_+\to \mathbb{C}$. **Proposition 7**. *The components of the vector $z_X=\tilde S_X \psi_X$ are equal to ([\[norm_element\]](#norm_element){reference-type="ref" reference="norm_element"}). [\[Mick_vector\]]{#Mick_vector label="Mick_vector"}* *Proof.* Let $(i,\vec m)$ with $\vec m=(m_1,\ldots, m_k)$ be a route. Since the weight of $\psi_{i,\vec m}$ is $-\nu_i$, formula ([\[A-B\]](#A-B){reference-type="ref" reference="A-B"}) implies $\psi_{i,\vec m}B^i_{\vec m}=\tilde A^i_{\vec m}\psi_{i,\vec m}=\tilde A^i_{\vec m} \phi_{i,\vec m}\psi_{m_k}$. But the sum of $\tilde A^i_{\vec m}\phi_{i,\vec m}$ over all routes $\stackrel{\>\>\vec m}{\dashleftarrow} j$ such that $i\succ \vec m$ and $n=|\vec m|+1$ is exactly $\tilde S^{(n)}_{ij}$, thanks to Lemma [Lemma 6](#[d,.]){reference-type="ref" reference="[d,.]"}. ◻ Our next objective is to demonstrate that $\tilde \mathcal{S}$ is almost the right Shapovalov matrix. We lift $\tilde A^i_{\vec m}\phi_{i,\vec m}$ to $[i,\vec m]=\tilde A^i_{\vec m}(i,\vec m)\in \Phi_X$ in order to use Proposition [Proposition 3](#intertwing loc){reference-type="ref" reference="intertwing loc"} and reduce the study of the $e_\alpha$-action to the action of the operators $\partial_{l,r}$ with $(l,r)\in P(\alpha)$.
Fix an ordered pair of nodes $i\succ j$. For each $\partial_{l,r}$ we define 1-, 2-, and 3-chains in $\Phi_X$ as $\hat U^0$-linear combinations of routes $i\dashleftarrow j$: - $[\vec m]$ if $\vec m\cap (l,r)=\varnothing$ or $\vec m\cap (l,r)\not=\varnothing$ but $\vec m\cup (l,r)$ is not a route $i\dashleftarrow j$, - $[l,\vec \rho]+[l,r,\vec \rho], \quad \mbox{with}\quad l=i,\quad \quad [\vec \ell,l,r]+[\vec \ell,r],\quad \mbox{with}\quad r=j,$ - $[\vec \ell,l,\vec \rho]+[\vec \ell,l,r,\vec \rho]+[\vec \ell,r,\vec \rho].$ Here we assume that $i=\max(\vec \ell)$ and $j=\min(\vec\rho)$, so neither $\vec \ell$ nor $\vec \rho$ is empty. One can prove that every route $i\dashleftarrow j$ participates in exactly one chain, cf. [@MS], Lemma 7.1. **Proposition 8**. *The operator $\partial_{l,r}$ annihilates 1-, 3-, and left 2-chains.* *Proof.* Let us focus on chains of routes $i\dashleftarrow j$. First of all, observe that $\partial_{l,r}$ kills all 1-chains. This case includes routes $\vec m$ whose smallest node $j$ equals $l$, because then $\vec m\cup (l,r)=(\vec m,r)$ is not a route $i\dashleftarrow j$; see the definition of 1-chains above. For 2- and 3-chains, we reduce the proof to [@MS], Lemma 7.2, by sending $\Phi_X$ to another auxiliary $\hat U^0$-module $\Psi_X$ and replacing $\partial_{l,r}$ with an operator $\nabla_{l,r}\colon \Phi_X\to \Phi_X\otimes_{\hat U^0}\Psi_X$. Indeed, there is a natural embedding $\gimel\colon\Phi_X\to \Psi_X$ that is a lift of the assignment $\phi_{\vec m}h\mapsto \psi_{\vec m}\tau_{\nu_j}^{-1}(h)$ for all $\vec m$ with $j=\min(\vec m)$ and $h\in \hat U^0$. It is a left $\hat U^0$-module homomorphism, but a \"dynamically twisted\" homomorphism of right $\hat U^0$-modules.
The map $\gimel$ links $\partial_{l,r}$ with $\nabla_{l,r}$ by the formula $$\nabla_{l,r}\circ\gimel (\vec m h)=(\mathrm{id}\otimes\gimel) \circ \partial_{l,r}(\vec m h)+\bigl(\vec m\otimes\mathrm{id}\bigr) \cdot \nabla_{l,r}(j)\tau_{\nu_j}^{-1}(h),$$ for all $i\stackrel{\vec m}{\longleftarrow} j$ and $h\in \hat U^0$. In particular, $\partial_{l,r}$ and $\nabla_{l,r}$ are intertwined on the $\hat U^0$-submodule generated by routes whose smallest node $j$ is annihilated by $\nabla_{l,r}$, that is, exactly when $j\not =l$, cf. [@MS]. All 3- and left 2-chains are in that submodule and go over to chains in $\Psi_X$, which are killed by $\nabla_{l,r}$, so the statement follows. ◻ Note carefully that a right 2-chain is not killed by $\partial_{l,r}$. It is not mapped to a chain in $\Psi_X$, so the above reasoning is inapplicable in this case. This is not incidental; it underlies the following quasi-invariance of $\tilde \mathcal{S}$. **Proposition 9**. *The matrix $\tilde \mathcal{S}\in U^+\otimes\hat B^-$ satisfies the identity $$\begin{aligned} (1\otimes e_\alpha)\tilde \mathcal{S}=(\mathrm{id}\otimes\tau_\alpha^{-1})(\tilde \mathcal{S})\tilde \Delta(e_\alpha),\quad \forall \alpha\in \Pi. \label{quasi-inv}\end{aligned}$$* *Proof.* For each $i\succ \vec m\succ j$ we introduce $C^i_{\vec m,j}=\tau_{\nu_j}(B^i_{\vec m,j})\in \hat U^0$, so that $$[i,\vec m,j]=(i,\vec m,j)C^i_{\vec m,j}\in \Phi_X.$$ One can check that, in particular, $C^i_j=\varphi(\eta_{ij})$. We write down the commutation relation between $e_\alpha$ and the matrix entry $\tilde S_{ij}$ as $$\begin{aligned} e_\alpha\tilde S_{ij}=p_{\Phi\Phi}\sum_{i\succ \vec m\succ j}\sum_{(l,r)\in P(\alpha)}\pi^\alpha_{lr}\partial_{l,r} [i,\vec m,j]+\tau_\alpha^{-1} \tilde S_{ij}e_\alpha, \label{quasi_invariant}\end{aligned}$$ thanks to Proposition [Proposition 3](#intertwing loc){reference-type="ref" reference="intertwing loc"}.
Let us change the order of summation and examine the terms $$\sum_{i\succ \vec m\succ j} \partial_{l,r} [i,\vec m,j]= \partial_{l,r}\sum_{i\succ \vec m\succ j} (i,\vec m,j)C^i_{\vec m,j}$$ separately for each pair $(l,r)\in P(\alpha)$. This summation can be rearranged over $(l,r)$-chains, of which only right 2-chains survive the action of $\partial_{l,r}$, by Proposition [Proposition 8](#chain-killer){reference-type="ref" reference="chain-killer"}. For such a chain $[i,\vec \ell, l,r]+[i,\vec \ell, r]$ with $r=j$, applying $\partial_{l,r}$ we get $$\begin{aligned} \label{essent} \bigl(\partial_{l,r}(i,\vec \ell,l,r)C^i_{l}+ \partial_{l,r}(i,\vec \ell,r)\bigr)C^i_{r}C^i_{\vec \ell}= \bigl(\partial_{l,r}(i,\vec \ell,l,r)+ \partial_{l,r}(i,\vec \ell,r)(C^i_l)^{-1}\bigr)C^i_{r}C^i_{\vec \ell,l}.\end{aligned}$$ Explicitly, $C^i_l=\varphi(c)$, where $c=\eta_{il}-(\alpha,\nu_i-\nu_l)$, and $C^i_r=\varphi(\eta_{ir})$. Notice also that $$\begin{aligned} c+h_\alpha&=& h_{i}-h_l+h_\alpha+ (\nu_i-\nu_l,\rho)-\frac{1}{2}(\nu_i-\nu_l)^2-(\alpha,\nu_i-\nu_l) = \eta_{ir}. \nonumber\end{aligned}$$ Using these equalities we write down the factor before $C^i_{\vec \ell,l}$ in ([\[essent\]](#essent){reference-type="ref" reference="essent"}) explicitly: $$\bigl((i,\vec \ell, l)\otimes(r)\bigr)\bigl([h_\alpha]_q+ q^{h_\alpha}\varphi(c)^{-1}\bigr)C^i_r =\bigl((i,\vec \ell, l)\otimes(r)\bigr) \frac{(q^{h_\alpha}-q^{-h_\alpha})+ q^{h_\alpha}(q^{2c}-1)}{q-q^{-1}} \varphi(\eta_{ir}).$$ The Cartan factor on the right amounts to $q^{-h_\alpha}$. Thus ([\[essent\]](#essent){reference-type="ref" reference="essent"}) equals $[i,\vec\ell,l]\otimes(r)\,q^{-h_\alpha}$ because zero-length routes in $\Phi_X$ carry zero weights.
Returning from $\Phi_X\otimes_{\hat U^0} \Phi_X$ to $\hat B^-$ via $p_{\Phi\Phi}$ we obtain $$\begin{aligned} \label{quasi-invariance} e_\alpha\tilde S_{ij}=\sum_{l} \sum_{i\succ \vec \ell \succ l}\tau_\alpha^{-1}\tilde A^i_{i,\vec \ell,l}\phi_{(i,\vec \ell, l)}\pi^\alpha_{lj}q^{-h_\alpha}+\tau_\alpha^{-1} \tilde S_{ij}e_\alpha = \sum_{l} \tau_\alpha^{-1}\tilde S_{il}\pi^\alpha_{lj}q^{-h_\alpha}+\tau_\alpha^{-1} \tilde S_{ij}e_\alpha.\end{aligned}$$ This is a coordinate form of equation ([\[quasi-inv\]](#quasi-inv){reference-type="ref" reference="quasi-inv"}) in the representation $(X,\pi)$. ◻

Recall that a (left) Verma module $V_\lambda$ of highest weight $\lambda\in \mathfrak{h}^*$ is induced from a one-dimensional $B^+$-module $\mathbb{C}_\lambda$ of weight $\lambda$ that is trivial on $U^+\subset B^+$. Let $v_\lambda\in V_\lambda$ denote the highest weight generator and let $X$ be a finite dimensional $U$-module. The universal Shapovalov matrix $\mathcal{S}$ is the unique element of a completed tensor product $U^+\otimes\hat B^-$ that sends $X\otimes v_\lambda$ onto the subspace of $U^+$-invariants in $X\otimes V_\lambda$, for every $X$ and generic $\lambda$. The matrix $\mathcal{S}$ delivers the inverse invariant pairing between $V_\lambda$ and the Verma module of lowest weight $-\lambda$. This pairing is equivalent to the canonical contravariant form on $V_\lambda$, which is a specialization of the Shapovalov form $U^-\otimes U^-\to U^0$ at $\lambda$. Similarly, one can consider right Verma modules $\mathbb{C}_\lambda\otimes_{B^+} U$ and define the right Shapovalov matrix, which of course depends on the comultiplication on $U$. This matrix turns out to be $\tilde \mathcal{S}$ up to a twist by the squared antipode $\tilde \gamma^2$. Recall that the latter acts by $e_\alpha\mapsto q^{-(\alpha,\alpha)}e_\alpha$ on the positive generator of root $\alpha$. **Corollary 10**.
*The element $(\tilde \gamma^{-2}\otimes\mathrm{id})(\tilde \mathcal{S})$ is the universal right Shapovalov matrix with respect to $\tilde \Delta$. In particular, it satisfies $$\begin{aligned} \label{Shap-intertw} (1\otimes e_\alpha)(\tilde \gamma^{-2}\otimes\tau_\alpha)(\tilde \mathcal{S})= (\tilde \gamma^{-2}\otimes\mathrm{id})(\tilde \mathcal{S})\tilde \Delta (e_\alpha)\end{aligned}$$ for each $\alpha\in \Pi$.* *Proof.* To verify ([\[Shap-intertw\]](#Shap-intertw){reference-type="ref" reference="Shap-intertw"}), apply the automorphism $\tau_\alpha$ to ([\[quasi-invariance\]](#quasi-invariance){reference-type="ref" reference="quasi-invariance"}) and get $$e_\alpha\tau_\alpha(\tilde S_{ij})= \sum_{l} \tilde S_{il}\pi^\alpha_{lj}q^{-(\alpha,\alpha)}q^{-h_\alpha}+ \tilde S_{ij}e_\alpha.$$ This is a coordinate presentation of an identity in $\mathrm{End}(X)\otimes\hat B^-$ that gives rise to a universal form $$(1\otimes e_\alpha)(\pi\otimes\tau_\alpha)(\tilde \mathcal{S})=(\pi\otimes\mathrm{id})(\tilde \mathcal{S})(\pi\circ \tilde \gamma^{2}\otimes\mathrm{id})\bigl(\tilde \Delta (e_\alpha)\bigr),$$ whence ([\[Shap-intertw\]](#Shap-intertw){reference-type="ref" reference="Shap-intertw"}) follows. It implies that $(\tilde \gamma^{-2}\otimes\mathrm{id})(\tilde \mathcal{S})$ is the right Shapovalov matrix. ◻ Another corollary of Proposition [\[quasi-inv\]](#quasi-inv){reference-type="ref" reference="quasi-inv"} is Proposition [\[Mick_vector\]](#Mick_vector){reference-type="ref" reference="Mick_vector"}, which thereby becomes a consequence of quasi-invariance ([\[quasi-inv\]](#quasi-inv){reference-type="ref" reference="quasi-inv"}) of the matrix $\tilde \mathcal{S}$ and invariance of the vector $\psi_X$. This explains the construction of $\hat Z(\mathcal{A},\mathfrak{g})$ via Hasse diagrams presented in [@MS]. 
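The key power-counting step in the proof of Proposition 9, namely that $[h_\alpha]_q+q^{h_\alpha}\varphi(c)^{-1}$ collapses to a single Cartan factor $q^{-h_\alpha}$ against $\varphi(\eta_{ir})$ once $c+h_\alpha=\eta_{ir}$, rests on one purely algebraic identity. Reading $\varphi(x)^{-1}$ as the factor $(q^{2x}-1)/(q-q^{-1})$, as the displayed computation does, that identity can be checked symbolically. The following snippet is our own sanity check (certainly not part of the original argument) in `sympy`:

```python
import sympy as sp

q, h, c = sp.symbols('q h c', positive=True)

# left-hand side: [h]_q + q**h * phi(c)**(-1), with phi(x)**(-1)
# read as (q**(2*x) - 1)/(q - 1/q), as in the displayed computation
lhs = ((q**h - q**-h) + q**h * (q**(2*c) - 1)) / (q - q**-1)

# right-hand side: q**(-h) * phi(c + h)**(-1)
rhs = q**-h * (q**(2*(c + h)) - 1) / (q - q**-1)

diff = sp.simplify(sp.powsimp(sp.expand(lhs - rhs), force=True))
assert diff == 0  # the leftover Cartan factor is exactly q**(-h)
```

Together with the equality $c+h_\alpha=\eta_{ir}$, this is precisely the statement that the Cartan factor on the right amounts to $q^{-h_\alpha}$.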
Our studies demonstrate that the Shapovalov matrix has one more incarnation besides the inverse contravariant form and intertwining operators for Verma modules [@EV]. A morphism from the category of right modules acts as an operator in the category of left modules. The key role here belongs to quasi-invariance ([\[Shap-intertw\]](#Shap-intertw){reference-type="ref" reference="Shap-intertw"}), where vanishing of the left-hand side modulo $\langle e_\alpha\rangle_{\alpha\in \Pi}$ is already sufficient for an intertwiner. The exact identity ([\[Shap-intertw\]](#Shap-intertw){reference-type="ref" reference="Shap-intertw"}) has not been employed before, to the best of our knowledge. We thereby conclude that the Shapovalov matrix is a construct in its own right and should be distinguished from the inverse Shapovalov form.

# An alternative construction of $\hat Z(\mathcal{A},\mathfrak{g})$ {#secLeftShMat}

## Mickelsson algebras and left Shapovalov matrix {#SecHasseDiag}

We are going to present an alternative description of the algebra $\hat Z(\mathcal{A},\mathfrak{g})$ in terms of the Shapovalov matrix $\mathcal{S}$ relative to left Verma modules. It is the unique element of a completed tensor product $U^+\otimes\hat B^-$ that satisfies $$\Delta(e_\alpha) \mathcal{S}\in U^+\otimes\hat B^-e_\alpha, \quad \forall \alpha\in \Pi.$$ If $V_\lambda$ is an irreducible Verma module with highest vector $v_\lambda$, and $X$ is a finite dimensional $U$-module, then the tensor $\mathcal{S}(v\otimes v_\lambda)$ is $U^+$-invariant for any $v\in X$, and any $U^+$-invariant tensor in $X\otimes V_\lambda$ is obtained this way.
Like the matrix $\tilde \mathcal{S}$, the left Shapovalov matrix can be expressed explicitly through the universal R-matrix: $$\begin{aligned} \label{geom F} \mathcal{S}=\sum_{n=0}^{\infty } \mathcal{S}^{(n)}, \quad \mbox{where}\quad \mathcal{S}^{(0)}=1\otimes 1,\quad \mathcal{S}^{(n+1)}=\varphi(D)\bigl( \mathcal{F}\mathcal{S}^{(n)}\bigr), \quad n\geqslant 0.\end{aligned}$$ For a graded $U^+$-module $X$ with representation homomorphism $\pi\colon U^+\to \mathrm{End}(X)$, the entries of the matrix $S_X=(\pi\otimes\mathrm{id})(\mathcal{S})$ in a weight basis in $X$ read $$\begin{aligned} S_{ij }= \sum_{i\succ \vec m\succ j }\phi_{i,\vec m, j}A^j_{i,\vec m}. \nonumber\end{aligned}$$ The summation is performed over all possible routes $i\dashleftarrow j$, see [@M1] for details. We regard $\mathcal{A}$ as an adjoint $U$-module under the action $$x\triangleright a=\mathrm{ad}(x)(a)=x^{(1)}a \gamma(x^{(2)}), \quad \forall x\in U, \quad \forall a\in \mathcal{A}.$$ Suppose that $X$ is realized as a submodule in $\mathcal{A}$ with respect to the adjoint action of $U^+$ on $\mathcal{A}$. Let $X^*$ be the right dual to $X$ with action $\raise 1.2pt\hbox{$\scriptstyle\blacktriangleright$}\hspace{2pt}$. Define a right $U^+$-action on $X^*$ by $y\triangleleft u=\gamma(u)\raise 1.2pt\hbox{$\scriptstyle\blacktriangleright$}\hspace{2pt}y$ for $y\in X^*$ and consider $X^*\otimes\hat \mathcal{A}$ as a right $U^+\otimes\hat B^-$-module with the regular action of $B^-$ on $\hat \mathcal{A}$. Let $\psi_{X^*}\in X^*\otimes X$ denote the invariant tensor, fixed up to a scalar factor. We call it the left Mickelsson generator relative to $X$. It satisfies the identities $$\begin{aligned} \label{psi-invar} \bigl(\mathrm{id}_X \otimes\mathrm{ad}(u)\bigr)(\psi_{X^*})=\psi_{X^*}(u\otimes\mathrm{id}), \quad (\mathrm{id}_X \otimes u)(\psi_{X^*})=\psi_{X^*}\Delta(u), \quad \forall u\in U^+,\end{aligned}$$ which are equivalent to $U^+$-invariance.
In a weight basis $\{\psi_i\}_{i\in I_X}\subset X$ and its dual basis $\{y_i\}_{i\in I_X}\subset X^*$, we have $\psi_{X^*}=\sum_{i\in I_X}y_{i}\otimes\psi_{i}$. **Proposition 11**. *The vector $$Z_{X^*}=\psi_{X^*} \mathcal{S}(1\otimes 1_\mathcal{V})=\sum_{i,k\in I_X}y_i\otimes\psi_{k}S_{ki} 1_{\hat \mathcal{V}}$$ belongs to $X^*\otimes\hat \mathcal{V}^+$.* *Proof.* Let us first check the coordinate form of $Z_{X^*}$ given by the rightmost equality. Using symbolic Sweedler notation $\mathcal{S}= \mathcal{S}^+\otimes\mathcal{S}^-$ we write $S_{ki}= \pi(\mathcal{S}^+)_{ki} \mathcal{S}^-$ for all $k,i\in I_X$. Then the left formula in ([\[psi-invar\]](#psi-invar){reference-type="ref" reference="psi-invar"}) gives $$\psi_{X^*} \mathcal{S}=\sum_{i\in I_X}y_i \mathcal{S}^+\otimes\psi_i \mathcal{S}^-=\sum_{i\in I_X}y_i \otimes\bigl(\mathrm{ad}(\mathcal{S}^+)(\psi_i)\bigr) \mathcal{S}^-= \sum_{i\in I_X}y_i \otimes\sum_{k\in I_X} \pi (\mathcal{S}^+)_{ki} \psi_k\mathcal{S}^-,$$ which immediately implies the required equality. Furthermore, for all $u\in U^+$ we have $$(1\otimes u)Z_{X^*}=\bigl(\psi_{X^*}\Delta(u)\bigr)\mathcal{S}=\psi_{X^*}\bigl(\Delta(u)\mathcal{S}\bigr )=\epsilon(u)\psi_{X^*}\mathcal{S}=\epsilon(u)Z_{X^*},$$ modulo $X^*\otimes\hat \mathcal{J}_+$. We conclude that the entries of $Z_{X^*}$ are in $\hat \mathcal{V}^+$, whence the assertion follows. ◻ Thus constructing elements of $\hat Z(\mathcal{A},\mathfrak{g})$ out of $\mathcal{A}$ reduces to finding $\psi_{X^*}$ for appropriate $X$. Note that the invariance condition can be relaxed to the quasi-invariance $$(\mathrm{id}_X \otimes u)(\psi_{X^*})=\psi'_X\Delta(u), \quad \forall u\in U^+,$$ for some $\psi'_X$ not necessarily equal to $\psi_{X^*}$ (and possibly depending on $u$). **Example 12**. **Suppose that $\mathcal{A}$ is a universal enveloping algebra $U(\mathfrak{a})$ of a Lie algebra $\mathfrak{a}\supset \mathfrak{g}$. Let $\mathfrak{a}=X\oplus\mathfrak{g}$ be a decomposition into a direct sum of modules.
Let $\mathcal{Z}$ be the sum $\sum_{n=1}^{\infty}S^n(X)$ of symmetrized powers of $X$. It freely generates $\mathcal{A}$ over $U$ thanks to the PBW theorem. It is a locally finite $\mathfrak{g}$-module, and the canonical element $\psi_{\mathcal{Z}^*}\in \mathcal{Z}^*\otimes\mathcal{Z}$ is a universal left Mickelsson generator. Its reduction to $X^*\otimes X$ delivers a PBW system in $\hat Z(\mathfrak{a},\mathfrak{g})$ over $\hat U(\mathfrak{h})$.**

In order to relate the above construction with the extremal projector, consider the situation when $X$ is a $U$-module and the element $\psi_{X^*}$ is $U$-invariant. Let $\Theta\in U\otimes\hat U^0$ be the image of $\mathcal{S}$ under the map $$U^+\otimes\hat B^-\to U^+\otimes U^-\otimes\hat U^0\to U\otimes\hat U^0,$$ where the left arrow is the triangular decomposition isomorphism, and the right one acts by $x\otimes y\otimes h\mapsto \gamma^{-1}(y)x\otimes h$. It is known that $\Theta$ is invertible. When specialized at a generic $\lambda\in \mathfrak{h}^*$, it becomes the universal extremal twist [@M3]; it is also related to the dynamical Weyl group [@EV]. Now we derive the action of $\wp$ on the components of the vector $\psi_{X^*}$: $$\begin{aligned} \label{Mick-el} Z_{X^*}=(\mathrm{id}\otimes\wp) \psi_{X^*} \Theta = \psi_{X^*} S_X\end{aligned}$$ modulo $X^*\otimes\mathcal{J}_+$. Thus the left Shapovalov matrix does not provide an immediate evaluation of the $\wp$-action, but only one mediated by the extremal twist $\Theta$. This is a distinction from the approach based on the right Shapovalov matrix.

## Quantum Lax operators

In this section we construct left Mickelsson generators for the case of quantum reductive pairs. Suppose that $\mathfrak{a}$ is a simple Lie algebra and $\mathfrak{g}\subset \mathfrak{a}$ is a (commutant of a) Levi subalgebra.
Let $\mathfrak{k}\supset \mathfrak{h}$ denote the Cartan subalgebra of $\mathfrak{a}$, $\mathfrak{a}_\pm \supset \mathfrak{g}_\pm$ the maximal nilpotent subalgebras, and $\mathfrak{c}$ the orthogonal complement of $\mathfrak{h}$ in $\mathfrak{k}$. The quantum group $U_q(\mathfrak{g})$ is a natural Hopf subalgebra in $U_q(\mathfrak{a})$, for which the conventions of Section [2](#SecQuantGroups){reference-type="ref" reference="SecQuantGroups"} are in effect. It is known that $U_q(\mathfrak{a})$ has a locally finite $U_q(\mathfrak{g})$-submodule $\mathcal{Z}$ such that $$U_q(\mathfrak{a})=U_q(\mathfrak{g}_-)\mathcal{Z}U_q(\mathfrak{c}) U_q(\mathfrak{b}_+).$$ It can be constructed from quantized nilradicals of the parabolic subalgebras $\mathfrak{g}+\mathfrak{c}+\mathfrak{a}_\pm$. They are locally finite as $U_q(\mathfrak{g})$-modules and their existence is proved in [@Ke]. The corresponding Mickelsson generator $\psi_{\mathcal{Z}^*}$ gives rise to the entire $\hat Z(\mathfrak{a},\mathfrak{g})$, as a module over $\hat U_q(\mathfrak{k})$. Next we address the question of a PBW basis in $\hat Z(\mathfrak{a},\mathfrak{g})$. By $\mathcal{R}$ we now understand the universal R-matrix of $U_q(\mathfrak{a})$, and $\check{\mathcal{R}}$ is obtained from it in the same way as in the case of $U_q(\mathfrak{g})$ considered before. The intertwining identities for $\mathcal{R}$ translate to the following relations for the matrix $\check{\mathcal{R}}$.
$$\begin{aligned} \check{R}(e_{\alpha}\otimes q^{h_{\alpha}} + 1\otimes e_{\alpha})&=& (e_{\alpha}\otimes q^{-h_{\alpha}} + 1\otimes e_{\alpha})\check{R}, \label{IRe} \\ \check{R}(f_\alpha\otimes 1 + q^{-h_\alpha}\otimes f_\alpha)&=&(q^{h_\alpha}\otimes f_\alpha+ f_\alpha\otimes 1)\check{R}, \label{IRf} \\ (e_{\alpha}\otimes q^{h_{\alpha}} + 1\otimes e_{\alpha}) \check{R}^{-1}&=& \check{R}^{-1}(e_{\alpha}\otimes q^{-h_{\alpha}} + 1\otimes e_{\alpha}), \label{IReInv} \\ (f_\alpha\otimes 1 + q^{-h_\alpha}\otimes f_\alpha) \check{R}^{-1}&=& \check{R}^{-1} (q^{h_\alpha}\otimes f_\alpha+ f_\alpha\otimes 1). \label{IRfInv}\end{aligned}$$ Here $\alpha$ is a simple root from $\Pi_\mathfrak{a}$, although we need these identities only for $\alpha\in \Pi_\mathfrak{g}$. Note that $\check{\mathcal{R}}^{-1}\in U_q(\mathfrak{a}_+)\otimes U_q(\mathfrak{a}_-)$. Recall that, for the given comultiplication $\Delta$, the adjoint action on the generators of the quantum group explicitly reads $$\mathrm{ad}(e_\alpha) u= e_\alpha u q^{-h_\alpha} - u e_\alpha q^{-h_\alpha}, \quad \mathrm{ad}(f_\alpha) u=f_\alpha u - q^{-\left(\alpha,\mathrm{wt}(u)\right)} u f_\alpha,$$ for all $\alpha\in \Pi_\mathfrak{g}$ and $u\in U_q(\mathfrak{a})$. Fix a $U_q(\mathfrak{a})$-module $V$ with representation homomorphism $\varrho\colon U_q(\mathfrak{a})\to \mathrm{End}(V)$ and consider the quantum Lax operators $$L^-_V=(\varrho\otimes\mathrm{id})(\check{\mathcal{R}}^{-1}) \in \mathrm{End}(V)\otimes U_q(\mathfrak{a}_-), \quad L^+_V=(\mathrm{id}\otimes\varrho)(\check{\mathcal{R}})\in U_q(\mathfrak{a}_+)\otimes\mathrm{End}(V).$$ Suppose that $V^\mathfrak{g}\not=\{0\}$ and pick a $U_q(\mathfrak{g})$-invariant non-zero weight vector $v_0\in V^\mathfrak{g}$. Define $$\psi^-_{i}=L^-_{i0} ,\quad \psi^+_{i}=L^+_{i0}q^{h_{\nu_i}}.$$ These elements of $U_q(\mathfrak{a})$ carry weights $\mathrm{wt}(\psi^-_{i})=\nu_0-\nu_i< 0$ and $\mathrm{wt}(\psi^+_{i})=\nu_0-\nu_i> 0$.
The weight $\nu_0$ is orthogonal to $\Pi_\mathfrak{g}$. **Proposition 13**. *For each $v_0\in V^\mathfrak{g}$ the vector spaces $\mathrm{Span}\{\psi^-_{i}\}_{0\prec i}$ and $\mathrm{Span}\{\psi^+_{i}\}_{i\prec 0}$ are invariant under the adjoint action of $U_q(\mathfrak{g})$ on $U_q(\mathfrak{a})$. Specifically, $$\begin{aligned} \mathrm{ad}(e_\alpha)(\psi^+_{i})&=& - \sum_{k\prec 0}q^{(\alpha,\nu_i)}\varrho(e_{\alpha})_{ik} \psi^+_{k}=\sum_{k\prec 0}\varrho\bigl(\tilde \gamma^{-1}(e_{\alpha})\bigr)_{ik}\psi^+_{k}, \nonumber\\ \mathrm{ad}(f_\alpha)(\psi^+_{i})&=& -\sum_{k\prec 0} \varrho(f_\alpha)_{ik} q^{-(\alpha,\nu_k)} \psi^+_{k}=\sum_{k\prec 0} \varrho\bigl(\tilde \gamma^{-1}(f_\alpha)\bigr)_{ik} \psi^+_{k}, \nonumber\\ \mathrm{ad}(e_\alpha)(\psi^-_{i})&=& -\sum_{0\prec k}\varrho(e_{\alpha})_{ik}q^{-(\alpha,\nu_k)}\psi^-_{k}= \sum_{0\prec k}\varrho\bigl( \gamma(e_{\alpha})\bigr)_{ik}\psi^-_{k}, \nonumber\\ \mathrm{ad}(f_\alpha)(\psi^-_{i})&=& -\sum_{0\prec k} q^{(\alpha,\nu_i)}\varrho(f_\alpha)_{ik} \psi^-_{k}= \sum_{0\prec k} \varrho\bigl( \gamma(f_\alpha)\bigr)_{ik} \psi^-_{k}. \nonumber\end{aligned}$$* *Proof.* The statement is obvious with regard to the subalgebra $U^0$, since the elements $\psi^\pm_i$ carry definite weights. To complete the proof, apply the representation $\varrho$ to the appropriate tensor factor in ([\[IRe\]](#IRe){reference-type="ref" reference="IRe"})--([\[IRfInv\]](#IRfInv){reference-type="ref" reference="IRfInv"}) and take into account that $\varrho(f_\alpha)_{k0}=\varrho(e_\alpha)_{k0}=(\alpha,\nu_0)=0$ for all $\alpha\in \Pi_\mathfrak{g}$ and all $k\in I_V$.
◻ Under the assumptions of Proposition [Proposition 13](#psi-adjoint){reference-type="ref" reference="psi-adjoint"}, one can construct Mickelsson generators by restricting the representation $\varrho$ to $U_q(\mathfrak{g})$ and taking $X=\mathrm{Span}\{\psi^-_{i}\}_{0\prec i}$ with the representation homomorphism $\pi=\varrho^t\circ \gamma$, or $X=\mathrm{Span}\{\psi^+_{i}\}_{i\prec 0}$ with $\pi=\varrho^t\circ \tilde\gamma^{-1}$. Here $t$ stands for matrix transposition. In order to construct a PBW basis in $\hat Z(\mathfrak{a},\mathfrak{g})$, one can take for $V$ the irreducible finite dimensional $U_q(\mathfrak{a})$-module $\mathfrak{a}_q$ whose highest weight is the maximal root. Let $\mathfrak{g}_q\subset \mathfrak{a}_q$ be the analogous $U_q(\mathfrak{g})$-module. Then $\mathfrak{a}_q^\mathfrak{g}\subset \mathfrak{a}_q[0]$. The sum $\sum_{\alpha\in \mathrm{R}_\mathfrak{a}\backslash \mathrm{R}_\mathfrak{g}} \mathfrak{a}_q[\alpha]$ forms a self-dual $U_q(\mathfrak{g})$-submodule isomorphic to $\mathfrak{a}_q/(\mathfrak{a}_q[0]+\mathfrak{g}_q)$. In the classical limit, it goes over to $\mathfrak{a}/(\mathfrak{c}\oplus\mathfrak{g})$, whose elements generate a PBW basis in $U(\mathfrak{a})/\bigl(\mathfrak{g}_-U(\mathfrak{g})+U(\mathfrak{g})\mathfrak{g}_+\bigr)$ as a $U(\mathfrak{k})$-module. Each irreducible $U_q(\mathfrak{g})$-submodule in $\mathfrak{a}_q/(\mathfrak{a}_q[0]+\mathfrak{g}_q)$ can be constructed along the lines of Proposition [Proposition 13](#psi-adjoint){reference-type="ref" reference="psi-adjoint"} by picking an appropriate vector $v_0\in \mathfrak{a}_q^\mathfrak{g}$. Taking the direct sum of such modules for $X$, one arrives at a Mickelsson generator $\psi_{X^*}$. Multiplying it by the left Shapovalov matrix on the right yields a vector whose components generate a PBW basis in each weight of $\hat Z(\mathfrak{a},\mathfrak{g})$, for generic $q$ or, equivalently, over $\mathbb{C}[[\hbar]]$.
For a particular value of $q$, the problem reduces to the question of whether the entries of the L-operators generate a PBW basis in $U_q(\mathfrak{a})$.

[**Acknowledgement**]{.ul}\
This work was done at the Center of Pure Mathematics MIPT, with financial support under project FSMG-2023-0013. It is also supported by Russian Science Foundation grant 23-21-00282.

## Declarations {#declarations .unnumbered}

### Data Availability {#data-availability .unnumbered}

Data sharing not applicable to this article as no datasets were generated or analysed during the current study.

### Funding {#funding .unnumbered}

This research is supported by Russian Science Foundation grant 23-21-00282.

### Competing interests {#competing-interests .unnumbered}

The authors have no competing interests to declare that are relevant to the content of this article.

Mickelsson, J.: *Step algebras of semisimple Lie algebras*, Rep. Math. Phys., **4** (1973), 307--318.

Nagel, J. G., Moshinsky, M.: *Operators that lower or raise the irreducible vector spaces of $U_{n-1}$ contained in an irreducible vector space of $U_n$*, J. Math. Phys., **6** (1965), 682--694.

Zhelobenko, D. P.: *S-algebras and Harish-Chandra modules over symmetric Lie algebras*, Mathematics of the USSR-Izvestiya, **37**, \# 1 (1991), 1--17.

De Bie, H., Eelbode, D., Roels, M.: *The Harmonic Transvector Algebra in Two Vector Variables*, J. Alg., **473** (2017), 247--282.

Molev, A. I.: *Gelfand-Tsetlin bases for classical Lie algebras*, in Handbook of Algebra, **4**, 109--170, Elsevier/North-Holland, Amsterdam, 2006.

Khoroshkin, S., Nazarov, M.: *Yangians and Mickelsson algebras, I*, Transform. Groups, **11**, \# 4 (2006), 625--658.

Khoroshkin, S., Nazarov, M.: *Yangians and Mickelsson algebras, II*, Moscow Math. J., **6**, \# 1 (2006), 477--504.

Khoroshkin, S., Nazarov, M.: *Mickelsson algebras and representations of Yangians*, Transactions of AMS, **364**, \# 3 (2012), 1293--1367.
Kekäläinen, P.: *Step algebras of quantum algebras of type $A$, $B$ and $D$*, J. Phys. A: Math. Gen., **29** (1996), 1045--1053.

Khoroshkin, S., Ogievetsky, O.: *Diagonal reduction algebras of $\mathfrak{g}\mathfrak{l}$ type*, Funct. Anal. Appl., **44** (2010), 182--198.

Gelfand, I. M., Tsetlin, M. L.: *Finite dimensional representations of the group of unimodular matrices*, Dokl. Akad. Nauk SSSR, **71**, \# 5 (1950), 825--828.

Asherova, R. M., Smirnov, Yu. F., Tolstoy, V. N.: *Projection operators for the simple Lie groups*, Theor. Math. Phys., **8** (1971), 813--825.

Zhelobenko, D. P.: *Extremal projectors and generalized Mickelsson algebras over reductive Lie algebras*, Mathematics of the USSR-Izvestiya, **33**, \# 1 (1989), 85--100.

Zhelobenko, D. P.: *S-algebras and Harish-Chandra modules over reductive Lie algebras*, Mathematics of the USSR-Izvestiya, **37**, \# 1 (1991), 1--17.

Mudrov, A., Stukopin, V.: *Mickelsson algebras via Hasse diagrams*, arXiv:2306.01592.

Mudrov, A.: *R-matrix and inverse Shapovalov form*, J. Math. Phys., **57** (2016), 051706.

Shapovalov, N.: *On a bilinear form on the universal enveloping algebra of a complex semisimple Lie algebra*, Funkt. Anal. Appl., **6** (1972), 65--70.

Etingof, P., Varchenko, A.: *Exchange dynamical quantum groups*, Comm. Math. Phys., **205** (1999), 19--52.

Alekseev, A., Lachowska, A.: *Invariant $*$-product on coadjoint orbits and the Shapovalov pairing*, Comment. Math. Helv., **80** (2005), 795--810.

Mudrov, A.: *Vector bundles on quantum conjugacy classes*, arXiv:2201.04568.

Brundan, J.: *Lowering operators for GL(n) and quantum GL(n)*, Group representations: cohomology, group actions and topology (Seattle, WA, 1996), Proc. Sympos. Pure Math., **63**, Amer. Math. Soc., Providence, RI, 1998, 95--114.
Carter, R.: *Raising and lowering operators for $\mathfrak{s}\mathfrak{l}_n$, with applications to orthogonal bases of $\mathfrak{s}\mathfrak{l}_n$-modules*, The Arcata Conference on Representations of Finite Groups (Arcata, Calif., 1986), Proc. Sympos. Pure Math., **47**, Amer. Math. Soc., Providence, RI, (1987), 351--366.

Mudrov, A.: *Contravariant forms and extremal projectors*, J. Pure Appl. Algebra, **226**, \# 4 (2022), 106902, 18pp.

Drinfeld, V.: *Quantum Groups*, in Proc. Int. Congress of Mathematicians, Berkeley 1986, Gleason, A. M. (ed.), 798--820, AMS, Providence (1987).

Chari, V., Pressley, A.: *A guide to quantum groups*, Cambridge University Press, Cambridge, 1994.

Khoroshkin, S.: *Extremal Projector and Dynamical Twist*, TMF, **139**, \# 1 (2004), 158--176.

Reshetikhin, N., Semenov-Tian-Shansky, M.: *Quantum R-matrices and factorization problem*, J. Geom. Phys., **5** (1988), 533--550.

Etingof, P., Varchenko, A.: *Dynamical Weyl groups and applications*, Adv. Math., **167** (2002), 74--127.

Kébé, M. S.: *$\mathcal{O}$-algèbres quantiques*, C. R. Acad. Sci. Paris, **322**, Série I (1996), 1--4.
{ "id": "2309.05318", "title": "Mickelsson algebras and inverse Shapovalov form", "authors": "Andrey Mudrov, Vladimir Stukopin", "categories": "math.QA math.RT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | The study of *random segments* is a classic topic in geometrical probability, whose complexity depends on how the segments are defined. Even in apparently simple models, the random behavior is not immediately apparent. In the present manuscript the following setting is considered. Consider four independent random points that follow a uniform distribution on the unit disk. Two random segments are built from them, which always lie inside the disk. We compute the density function of the angle between these two random segments when they intersect each other. This type of problem tends to be complex due to the high stochastic dependence between the elements that form it. The expression obtained is in terms of integrals; however, it allows us to understand the behavior of the distribution of the random angle between the two random segments. author: - Paulo Manrique-Mirón title: Angle between two random segments ---

# **Introduction** {#sec:Intro}

Geometrical probability deals with the study of classical geometry objects (points, segments, lines, planes, circles, spheres, etc.) which are generated through some random mechanism [@kendall1963geometrical; @mathai1999introduction]. The development of this area dates back at least to 1733, when Georges-Louis Leclerc, Comte de Buffon, wrote "Mémoire sur le jeu de franc-carreau", where he proposed and solved (not always correctly) three problems formulated as mathematical games: the clean tiles problem, the needle problem, and the mesh problem [@mathai1999introduction]. Among them, possibly the best known is the needle problem, which consists of randomly throwing a needle of length $l$ onto a set of equidistant parallel lines, with separation $h$, on the plane. The question is to determine the probability that the needle cuts a line. In fact, for $l\leq h$, the value of this probability is $\frac{2l}{\pi h}$. This game allows us to set up a simulation method to determine the value of $\pi$.
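The simulation scheme mentioned above is easy to sketch. The following snippet is our own illustration (function name and parameters are ours, not from the manuscript) of the short-needle case $l\leq h$: the needle's centre lies at a uniform distance from the nearest line and at a uniform angle, and inverting $\mathbb{P}(\text{cross})=\frac{2l}{\pi h}$ gives an estimate of $\pi$.

```python
import random
import math

def buffon_pi(n_throws=200_000, l=1.0, h=1.0, seed=7):
    """Estimate pi by Buffon's needle (short-needle case l <= h).

    The needle's centre sits at a uniform distance x in [0, h/2] from
    the nearest line, at a uniform acute angle t in [0, pi/2]; it
    crosses a line when x <= (l/2) sin t.  Since P(cross) = 2l/(pi h),
    the estimate is pi ~ 2 l n / (h * crossings).
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_throws):
        x = rng.uniform(0.0, h / 2)        # distance to nearest line
        t = rng.uniform(0.0, math.pi / 2)  # angle with the lines
        if x <= (l / 2) * math.sin(t):
            hits += 1
    return 2 * l * n_throws / (h * hits)

print(buffon_pi())  # close to pi for large n_throws
```

The estimator converges slowly (error of order $n^{-1/2}$), which is why the needle game is a pedagogical rather than a practical way to compute $\pi$.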
The analysis of a random geometric object depends on the way in which it is generated by a certain random mechanism. For example, Kendall and Moran [@kendall1963geometrical] illustrate how to solve a problem proposed by Bertrand, which consists of finding the probability that a *random chord* of a circle is longer than the side of the equilateral triangle inscribed in it. To do this, three different ways of understanding what a *random chord* is are proposed. First, the chord is formed by joining two points generated independently and uniformly on the circumference. In the second model, the chord is perpendicular to a fixed diameter and its point of intersection is uniformly distributed over the diameter. In the third, a point uniformly distributed on the disk is chosen and the chord is the segment through this point perpendicular to the radius which passes through it. The probability of the considered event is $\frac{1}{3}$, $\frac{1}{2}$, $\frac{1}{4}$, respectively [@garwood1966distance]. Garwood and Holroyd [@garwood1966distance] interpret a *random chord* as the segment passing through two independent and uniformly distributed points $P,Q$ on the disk of radius one. They computed the density function of the distance $L$ of the chord to the center of the disk, $$f_L(l) = \frac{16}{3\pi} (1-l^2)^{3/2} \mathds{1}_{\left\{l\in[0,1]\right\}},$$ since this distance determines the length of the chord. Previously, Garwood and Tanner [@garwood19582800] found the density of the distance $D$ between $P$ and $Q$, $$f_D(d) = \frac{2d}{\pi}\left(2 \arccos\left(\frac{d}{2}\right) - \sin\left(2\arccos\left(\frac{d}{2}\right) \right)\right) \mathds{1}_{\left\{d \in [0,2]\right\}}.$$ In both works the *infinitesimal strategy* is used to determine the densities of the considered lengths; it consists of the following idea: if $f(w)$ is the density of the random variable $W$ then, intuitively, $f(w)dw$ is the probability that $W\in[w,w+dw]$.
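The Garwood--Tanner density $f_D$ can be sanity-checked by simulation. The sketch below (ours; all names are illustrative) rejection-samples pairs of uniform points in the unit disk and compares the empirical probability $\mathbb{P}(D\leq 1)$ with the midpoint-rule integral $\int_0^1 f_D(d)\,\mathrm{d}d$.

```python
import random
import math

def f_D(d):
    """Garwood-Tanner density of the distance between two independent
    uniform points in the unit disk, for d in [0, 2]."""
    a = math.acos(d / 2)
    return (2 * d / math.pi) * (2 * a - math.sin(2 * a))

def sample_disk(rng):
    # rejection sampling of a uniform point in the unit disk
    while True:
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y <= 1:
            return x, y

rng = random.Random(1)
n = 100_000
hits = sum(
    1 for _ in range(n)
    if math.dist(sample_disk(rng), sample_disk(rng)) <= 1.0
)
empirical = hits / n

# midpoint rule for P(D <= 1), the integral of f_D over [0, 1]
m = 10_000
analytic = sum(f_D((k + 0.5) / m) for k in range(m)) / m

print(empirical, analytic)  # the two values should agree to about 1e-2
```

The same scheme checks $f_L$: project the origin onto the sampled chord and histogram the resulting distances.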
In this manuscript, the *random segments* are defined by the same mechanism proposed by Garwood and Holroyd: two independent and uniformly distributed points on the unit disk are joined to form a segment. Generating two independent segments in this way, we compute the density of the angle between them when they intersect, see Figure [1](#fig:mie24may0940){reference-type="ref" reference="fig:mie24may0940"}.a. Formally, the following framework is considered. Let $\mathds{D} = \left\{x\in\mathbb{R}^2: \left|\left|x\right|\right|\leq 1\right\}$ and let $X,Y$ be two independent random points which are uniformly distributed on $\mathds{D}$. From $X$ and $Y$ we define a *random segment* as $$S_{XY}:=\left\{w\in\mathds{D} : w =(1-\alpha)X+\alpha Y\, \mbox{ for }\, \alpha\in[0,1]\right\}.$$ We consider four independent random points $A,B,C,D$, all uniformly distributed on $\mathds{D}$, and let $S_{AB}, S_{CD}$ be the random segments associated with $(A,B)$ and $(C,D)$, respectively. Note that $S_{AB}\cap S_{CD}$ could be empty. Our objective is to compute the distribution of the angle between $S_{AB}$ and $S_{CD}$ when they intersect, i.e., if $\Theta :=\angle \left(S_{AB}, S_{CD}\right)$, $$\label{eqn:mie24may0932} \mathbb{P}\left(\Theta \leq \theta | S_{AB} \cap S_{CD} \not=\O\right),$$ with $\theta\in[0,\pi]$. The angle $\Theta$ is measured counterclockwise from $S_{AB}$ to $S_{CD}$, see Figure  [1](#fig:mie24may0940){reference-type="ref" reference="fig:mie24may0940"}.b. ![](figura1.pdf){#fig:mie24may0940} In order to compute  [\[eqn:mie24may0932\]](#eqn:mie24may0932){reference-type="ref" reference="eqn:mie24may0932"}, we perform a change of variables which permits us to find an expression for it, and which also permits us to recover the results of Garwood and Holroyd and of Garwood and Tanner. The manuscript consists of two more sections.
In the Main Result Section, the density of $\mathbb{P}\left(\Theta \leq \theta | S_{AB} \cap S_{CD} \not=\O\right)$ is presented, as well as some consequences of the proposed change of variables. The last section gives the proof of this result. Finally, we are grateful for the comments of Víctor Pérez-Abreu and the support of Alberto Saucedo Lara.

# **Main Result** {#sec:MainResult}

The main result of this manuscript is presented below. **Theorem 1**. *The density function of the conditional distribution  [\[eqn:mie24may0932\]](#eqn:mie24may0932){reference-type="ref" reference="eqn:mie24may0932"} is given by $g(\theta)=\frac{1}{c}g^*(\theta) \mathds{1}_{\left\{\theta\in [0,\pi]\right\}}$, where $c= \int_{0}^{\pi} g^*(\theta) d\theta$, $$g^*(\theta) := \int_0^1 \int_0^1 \frac{1}{\pi} g^*_1(\rho_{AB},\rho_{CD},\theta) \mathds{1}_{ \left\{\sqrt{1-\rho_{AB}^2} \left|\sin(\theta)\right| \geq \left|\rho_{AB} \cos(\theta)+\rho_{CD}\right|\right\} } \textnormal{d}\rho_{AB} \textnormal{d}\rho_{CD},$$ and $$\begin{aligned} g^{*}_{1}(\rho_{AB},\rho_{CD}, \theta) &:= \left(\frac{8}{\pi}\right)^2 \sqrt{1-\rho_{AB}^2}\sqrt{1-\rho_{CD}^2}\\ & \hspace{1cm} \times \left[1- \frac{\rho_{AB}^2 + \rho_{CD}^2+2\rho_{AB}\rho_{CD}\cos\left(\theta\right)}{\sin^2\left(\theta\right)}\right]^2 \mathds{1}_{\rho_{AB}\in[0,1]} \mathds{1}_{\rho_{CD}\in[0,1]}.\end{aligned}$$* $\Box$ Figure  [\[fig:vie25agosto1218\]](#fig:vie25agosto1218){reference-type="ref" reference="fig:vie25agosto1218"} shows the graph of $g(\theta)$. The proof of Theorem  [Theorem 1](#thm:mar29ago1553){reference-type="ref" reference="thm:mar29ago1553"} is based on the following change of variables. Observe that $X=\sqrt{R_X}(\cos(\Gamma_X),\sin(\Gamma_X))^T$, where $v^T$ is the transpose of $v$, has the uniform distribution on $\mathds{D}$, where $R_X, \Gamma_X$ are independent random variables such that $R_X\sim \mbox{Unif}[0,1]$ and $\Gamma_X\sim\mbox{Unif}[0,2\pi]$.
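The formulas of Theorem 1 can be transcribed numerically almost verbatim. Below is a minimal sketch (ours; the grid sizes are arbitrary) that evaluates $g^*(\theta)$ by a midpoint Riemann sum over $(\rho_{AB},\rho_{CD})\in[0,1]^2$ and then normalizes by $c$; the resulting array `g` can be used to plot the density of the random angle.

```python
import numpy as np

def g_star(theta, m=200):
    """Unnormalized density g*(theta) of Theorem 1, computed by a
    midpoint Riemann sum over (rho_AB, rho_CD) in [0, 1]^2."""
    r = (np.arange(m) + 0.5) / m
    ra, rc = np.meshgrid(r, r, indexing="ij")
    s, c = np.sin(theta), np.cos(theta)
    # indicator of the intersection region in the double integral
    region = np.sqrt(1 - ra**2) * abs(s) >= np.abs(ra * c + rc)
    g1 = ((8 / np.pi) ** 2 * np.sqrt(1 - ra**2) * np.sqrt(1 - rc**2)
          * (1 - (ra**2 + rc**2 + 2 * ra * rc * c) / s**2) ** 2)
    return (g1 * region).sum() / (np.pi * m * m)

thetas = (np.arange(100) + 0.5) * np.pi / 100  # midpoints of [0, pi]
vals = np.array([g_star(t) for t in thetas])
c_norm = vals.sum() * (np.pi / 100)            # Riemann sum for c
g = vals / c_norm                              # normalized density
```

Note that inside the indicator region the bracket in $g^*_1$ stays in $[0,1]$, so the apparent singularity of $1/\sin^2\theta$ near $\theta=0$ and $\theta=\pi$ causes no numerical trouble.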
Thus, we can express $S_{AB}$ as $$S_{AB} = \left\{w\in\mathbb{R}^2 : w=(1-\alpha) \sqrt{R_A}\begin{pmatrix} \cos(\Gamma_A)\\ \sin(\Gamma_A) \end{pmatrix} + \alpha \sqrt{R_B}\begin{pmatrix} \cos(\Gamma_B)\\ \sin(\Gamma_B) \end{pmatrix}, \; \alpha\in[0,1]\right\},$$ where $R_A,R_B,\Gamma_A,\Gamma_B$ are independent random variables with $R_A,R_B\sim \mbox{Unif}[0,1]$ and $\Gamma_A,\Gamma_B\sim\mbox{Unif}[0,2\pi]$. Consider the perpendicular $OF$ from the origin $O$ to the segment $S_{AB}$ (or its prolongation). Let $\Gamma_{AB}$ be the angle this perpendicular makes with the $x$-axis and $R_{AB}$ be the perpendicular distance of the segment $S_{AB}$ from the origin. The points $A, B$ are determined by $R_{AB},\Gamma_{AB}, T_{A}, T_{B}$, where $\left|T_{A}\right|$ and $\left|T_{B}\right|$ are the distances of $A$ and $B$ from $F$, respectively. See Figure [2](#fig:mie24may1115){reference-type="ref" reference="fig:mie24may1115"}. We denote particular values of $R_{AB},\Gamma_{AB}, T_{A}, T_{B}$ by $\rho_{AB},\gamma_{AB}, t_{A}, t_{B}$, respectively. ![](figura2.pdf){#fig:mie24may1115} Note that $$\begin{aligned} \label{eqn:mie24may1126} \sqrt{\rho_j} \cos\gamma_j & = \rho_{AB} \cos\gamma_{AB} - t_{j}\sin\gamma_{AB},\\ \sqrt{\rho_j} \sin\gamma_j & = \rho_{AB} \sin\gamma_{AB} + t_{j}\cos\gamma_{AB},\nonumber \end{aligned}$$ for $j\in\left\{A,B\right\}$. 
Thus, the joint density of $(R_A,\Gamma_A,R_B,\Gamma_B)$ is $$f(\rho_A,\gamma_A,\rho_B,\gamma_B)= \frac{1}{(2\pi)^2} \mathds{1}_{\left\{\rho_A\in[0,1]\right\}} \mathds{1}_{\left\{\gamma_A\in [0,2\pi]\right\}} \mathds{1}_{\left\{\rho_B\in [0,1]\right\}} \mathds{1}_{\left\{\gamma_B\in [0,2\pi]\right\}},$$ and it can be expressed in terms of $\rho_{AB}, \gamma_{AB}, t_A, t_B$ as $$\begin{aligned} \label{eqn:mie24may1226} & \left(\frac{2}{2\pi}\right)^2\left|t_A-t_B\right| \mathds{1}_{\left\{\rho_{AB}\in[0,1]\right\}} \mathds{1}_{\left\{\gamma_{AB}\in[0,2\pi]\right\}} \mathds{1}_{\left\{t_A\in\left[-\sqrt{1-\rho^2_{AB}},\sqrt{1-\rho^2_{AB}}\right]\right\}} \mathds{1}_{\left\{t_B\in\left[-\sqrt{1-\rho^2_{AB}},\sqrt{1-\rho^2_{AB}}\right]\right\}}. \end{aligned}$$ This change of variables allows us to obtain information about the random segment that is not apparent from its original definition. For example, the results of Garwood and Holroyd and of Garwood and Tanner can be deduced directly from it. 
The marginal density of $\rho_{AB}$ recovers the result of Garwood and Holroyd, $$\begin{aligned} f(\rho_{AB}) & = \mathds{1}_{\left\{\rho_{AB}\in[0,1]\right\}} \frac{2}{\pi} \int \int \left|t_A-t_B\right| \mathds{1}_{\left\{t_A\in\left[-\sqrt{1-\rho^2_{AB}},\sqrt{1-\rho^2_{AB}}\right]\right\}} \mathds{1}_{\left\{t_B\in\left[-\sqrt{1-\rho^2_{AB}},\sqrt{1-\rho^2_{AB}}\right]\right\}} \textnormal{d}t_A \textnormal{d}t_B \\ & = \frac{16}{3\pi} (1-\rho_{AB}^2)^{3/2} \mathds{1}_{\left\{\rho_{AB}\in[0,1]\right\}}.\end{aligned}$$ Meanwhile, the marginal density of $(t_A,t_B)$, $$\begin{aligned} f(t_A,t_B) & = \frac{2}{\pi} \left|t_A-t_B\right| \min\left\{\sqrt{1-t_A^2},\sqrt{1-t_B^2}\right\} \mathds{1}_{\left\{t_A\in[-1,1]\right\}} \mathds{1}_{\left\{t_B\in[-1,1]\right\}},\end{aligned}$$ allows us to retrieve the result of Garwood and Tanner, $$\begin{aligned} \mathbb{P}\left(\left|S_{AB}\right| \leq d\right) & = \mathbb{P}\left(\left|T_A-T_B\right| \leq d\right) \\ & = \int_{\left\{\left|t_A-t_B\right| \leq d\right\}} f(t_A,t_B) \textnormal{d}(t_A,t_B) \\ & = 4\times \frac{2}{\pi}\left[ \int_0^{d/2} \int_{-t_B}^{t_B} (t_B-t_A)\sqrt{1-t_B^2} \textnormal{d}t_A \textnormal{d}t_B +\int_{d/2}^1 \int_{t_B-d}^{t_B} (t_B-t_A)\sqrt{1-t_B^2} \textnormal{d}t_A \textnormal{d}t_B \right] \\ & = \int_0^d \frac{s}{\pi} \frac{-4s+s^3+8\sqrt{4-s^2}\,\textnormal{arccot}\left(\frac{2+s}{\sqrt{4-s^2}}\right)}{\sqrt{4-s^2}} \mathds{1}_{\left\{s\in[0,2]\right\}} \textnormal{d}s,\end{aligned}$$ with a little extra algebraic work. In the context of the main result of this work, the *interaction* between the independent random variables $\Gamma_{AB},\Gamma_{CD}, R_{AB}, R_{CD}$ induced by the condition that the segments $S_{AB}$ and $S_{CD}$ intersect makes the expression for the density $g(\theta)$ hard to reduce, as can be seen in the proof of Theorem [Theorem 1](#thm:mar29ago1553){reference-type="ref" reference="thm:mar29ago1553"}. 
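Both marginals, and the normalising constant $c$ from Theorem 1, can be checked numerically. The sketch below is our own illustration (function names, sample sizes, and grid sizes are our choices, not from the manuscript): a Monte Carlo estimates the mean of $\rho_{AB}$, which for a density proportional to $(1-\rho_{AB}^2)^{3/2}$ on $[0,1]$ equals $16/(15\pi)\approx 0.3395$, and the mean segment length, whose classical value for two uniform points in the unit disc is $128/(45\pi)\approx 0.9054$; it then evaluates $\int_0^\pi g^*(\theta)\,\textnormal{d}\theta$ by a midpoint rule.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def disc_points(k):
    """k independent points, uniform on the unit disc."""
    r, ang = np.sqrt(rng.random(k)), 2 * np.pi * rng.random(k)
    return r * np.cos(ang), r * np.sin(ang)

ax, ay = disc_points(n)
bx, by = disc_points(n)
seg_len = np.hypot(bx - ax, by - ay)
rho = np.abs(ax * by - ay * bx) / seg_len   # distance from O to the line through A, B

print(rho.mean(), 16 / (15 * np.pi))        # both approx 0.3395
print(seg_len.mean(), 128 / (45 * np.pi))   # both approx 0.9054

def g_star(theta, m=400):
    """Midpoint-rule evaluation of the double integral defining g*(theta)."""
    r = (np.arange(m) + 0.5) / m
    ra, rc = np.meshgrid(r, r, indexing="ij")          # rho_AB, rho_CD grids
    s, co = np.sin(theta), np.cos(theta)
    mask = np.sqrt(1 - ra**2) * abs(s) >= np.abs(ra * co + rc)   # ||z||^2 <= 1
    g1 = ((8 / np.pi)**2 * np.sqrt(1 - ra**2) * np.sqrt(1 - rc**2)
          * (1 - (ra**2 + rc**2 + 2 * ra * rc * co) / s**2)**2)
    return float((g1 * mask).sum()) / (np.pi * m * m)

k = 200
thetas = (np.arange(k) + 0.5) * np.pi / k
c = sum(g_star(t) for t in thetas) * np.pi / k
print(c)   # approx 0.9394
```

The last printed value matches the estimate $\int_0^\pi g^*(\theta)\,\textnormal{d}\theta \approx 0.9394$ reported in the next paragraph.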
However, it allows us to build a clear simulation scheme to approximate its shape; see Figure  [\[fig:vie25agosto1218\]](#fig:vie25agosto1218){reference-type="ref" reference="fig:vie25agosto1218"}. Additionally, we are able to estimate the probability of the event $\left\{S_{AB} \cap S_{CD} \not=\O\right\}$, i.e., $$\mathbb{P}\left(S_{AB} \cap S_{CD} \not=\O\right) = \int_{0}^{\pi} g^*(\theta) d\theta \approx 0.9393598,$$ which means that the random segments intersect quite often. An adequate change of variables permits a clearer analysis of a random geometric object. Nevertheless, despite the apparent simplicity of the events involved, the expressions remain complex because of the strong dependence between the elements that make up the geometric object. # **Proof** {#sec:Proof} In this section the proof of Theorem  [Theorem 1](#thm:mar29ago1553){reference-type="ref" reference="thm:mar29ago1553"} is presented. From the expressions  [\[eqn:mie24may1126\]](#eqn:mie24may1126){reference-type="ref" reference="eqn:mie24may1126"} we have the following relationships: $$\begin{aligned} \label{eqn:mie24may1128} \rho_j &= \rho_{AB}^2 + t_j^2, \\ \cos\gamma_j & = \frac{\rho_{AB}\cos\gamma_{AB}-t_j\sin\gamma_{AB}}{\sqrt{\rho^2_{AB} + t_j^2}}, \nonumber \\ \sin\gamma_j & = \frac{\rho_{AB}\sin\gamma_{AB}+t_j\cos\gamma_{AB}}{\sqrt{\rho^2_{AB} + t_j^2}}, \nonumber \\ \tan\gamma_j & = \frac{\sin\gamma_j}{\cos\gamma_j} = \frac{\rho_{AB}\tan\gamma_{AB}+t_j}{\rho_{AB}-t_j\tan\gamma_{AB}}, \nonumber \\ \gamma_j & = \arctan\left( \frac{\rho_{AB}\tan\gamma_{AB}+t_j}{\rho_{AB}-t_j\tan\gamma_{AB}}\right), \nonumber\end{aligned}$$ for $j\in\left\{A,B\right\}$. 
The joint density of $(R_A,\Gamma_A,R_B,\Gamma_B)$, $$f(\rho_A,\gamma_A,\rho_B,\gamma_B)= \frac{1}{(2\pi)^2} \mathds{1}_{\left\{\rho_A\in[0,1]\right\}} \mathds{1}_{\left\{\gamma_A\in [0,2\pi]\right\}} \mathds{1}_{\left\{\rho_B\in [0,1]\right\}} \mathds{1}_{\left\{\gamma_B\in [0,2\pi]\right\}},$$ is written in terms of $\rho_{AB},\gamma_{AB}, t_{A}, t_{B}$ as $$\begin{aligned} & f(\rho_{AB},\gamma_{AB}, t_{A}, t_{B}) \\ & \;\;\; = \frac{1}{(2\pi)^2} \left|J\right| \mathds{1}_{\left\{\rho_{AB}\in[0,1]\right\}} \mathds{1}_{\left\{\gamma_{AB}\in[0,2\pi]\right\}} \mathds{1}_{\left\{t_A\in\left[-\sqrt{1-\rho^2_{AB}},\sqrt{1-\rho^2_{AB}}\right]\right\}} \mathds{1}_{\left\{t_B\in\left[-\sqrt{1-\rho^2_{AB}},\sqrt{1-\rho^2_{AB}}\right]\right\}},\end{aligned}$$ where $\left|J\right|$ is the absolute value of the determinant of the Jacobian matrix $J$, which is $$\renewcommand*{\arraystretch}{1.5} J = \begin{pmatrix} \frac{\partial \rho_A}{\partial t_A} & \frac{\partial \rho_A}{\partial t_B} & \frac{\partial \rho_A}{\partial \rho_{AB}} & \frac{\partial \rho_A}{\partial \gamma_{AB}}\\ \frac{\partial \gamma_A}{\partial t_A} & \frac{\partial \gamma_A}{\partial t_B} & \frac{\partial \gamma_A}{\partial \rho_{AB}} & \frac{\partial \gamma_A}{\partial \gamma_{AB}}\\ \frac{\partial \rho_B}{\partial t_A} & \frac{\partial \rho_B}{\partial t_B} & \frac{\partial \rho_B}{\partial \rho_{AB}} & \frac{\partial \rho_B}{\partial \gamma_{AB}}\\ \frac{\partial \gamma_B}{\partial t_A} & \frac{\partial \gamma_B}{\partial t_B} & \frac{\partial \gamma_B}{\partial \rho_{AB}} & \frac{\partial \gamma_B}{\partial \gamma_{AB}} \end{pmatrix} = \begin{pmatrix} 2t_A & 0 & 2\rho_{AB} & 0 \\ \frac{\rho_{AB}}{\rho_{AB}^2+t_A^2} & 0 & -\frac{t_A}{\rho_{AB}^2+t_A^2} & 1 \\ 0 & 2t_B & 2\rho_{AB} & 0 \\ 0 & \frac{\rho_{AB}}{\rho_{AB}^2+t_B^2} & -\frac{t_B}{\rho_{AB}^2+t_B^2} & 1 \end{pmatrix}.$$ Then $\left|J\right| = 4\left|t_A-t_B\right|$. 
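The determinant $\left|J\right| = 4\left|t_A-t_B\right|$ can be cross-checked by differentiating the change of variables numerically with central finite differences at an arbitrary sample point; the helper name `forward` and the point `v0` below are our own choices:

```python
import numpy as np

def forward(v):
    """Map (t_A, t_B, rho_AB, gamma_AB) -> (rho_A, gamma_A, rho_B, gamma_B)."""
    ta, tb, rho, gam = v
    out = []
    for t in (ta, tb):
        x = rho * np.cos(gam) - t * np.sin(gam)
        y = rho * np.sin(gam) + t * np.cos(gam)
        out += [x * x + y * y, np.arctan2(y, x)]   # rho_j = rho_AB^2 + t_j^2
    return np.array(out)

v0 = np.array([0.3, -0.2, 0.5, 0.7])   # (t_A, t_B, rho_AB, gamma_AB)
h = 1e-6
J = np.empty((4, 4))
for k in range(4):
    e = np.zeros(4); e[k] = h
    J[:, k] = (forward(v0 + e) - forward(v0 - e)) / (2 * h)

det = np.linalg.det(J)
print(det, 4 * (v0[1] - v0[0]))   # both approx -2.0
```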
Thus, the joint density $f(\rho_{AB},\gamma_{AB}, t_{A}, t_{B})$ is $$\begin{aligned} & \left(\frac{2}{2\pi}\right)^2\left|t_A-t_B\right| \mathds{1}_{\left\{\rho_{AB}\in[0,1]\right\}} \mathds{1}_{\left\{\gamma_{AB}\in[0,2\pi]\right\}} \mathds{1}_{\left\{t_A\in\left[-\sqrt{1-\rho^2_{AB}},\sqrt{1-\rho^2_{AB}}\right]\right\}} \mathds{1}_{\left\{t_B\in\left[-\sqrt{1-\rho^2_{AB}},\sqrt{1-\rho^2_{AB}}\right]\right\}}. \end{aligned}$$ From here, we consider $S_{AB}$ and $S_{CD}$ in terms of the variables $\rho_{AB},\gamma_{AB}, t_{A}, t_{B}$ and $\rho_{CD},\gamma_{CD}, t_{C}, t_{D}$. Note that the cardinality of $S_{AB}\cap S_{CD}$ satisfies $|S_{AB}\cap S_{CD}|\in\left\{0,1\right\}$ with probability $1$: if $S_{AB}$ is fixed, then the probability that $C$ and $D$ both lie on the chord induced by $S_{AB}$ is zero, since $C$ and $D$ have continuous distributions. Let $l_{AB}, l_{CD}$ be the lines induced by $S_{AB}, S_{CD}$, respectively. Let $$F_{AB} = \rho_{AB} \begin{pmatrix} \cos\gamma_{AB}\\ \sin\gamma_{AB} \end{pmatrix}, \;\;\; F_{CD} = \rho_{CD} \begin{pmatrix} \cos\gamma_{CD}\\ \sin\gamma_{CD} \end{pmatrix}.$$ Note that the system of equations  [\[eqn:vie26may1244\]](#eqn:vie26may1244){reference-type="ref" reference="eqn:vie26may1244"} almost surely has a unique solution $z=(z_x,z_y)^T$ because the random variables considered have continuous distributions. $$\begin{aligned} \label{eqn:vie26may1244} y + \frac{1}{\tan\gamma_{AB}} x & = \rho_{AB}\sin\gamma_{AB} + \frac{1}{\tan\gamma_{AB}} \rho_{AB}\cos\gamma_{AB}, \\ y + \frac{1}{\tan\gamma_{CD}} x & = \rho_{CD}\sin\gamma_{CD} + \frac{1}{\tan\gamma_{CD}} \rho_{CD}\cos\gamma_{CD}. \nonumber\end{aligned}$$ The point $z$ is the intersection point of $l_{AB}$ and $l_{CD}$; see Figure [3](#fig:vie26may1213){reference-type="ref" reference="fig:vie26may1213"}. 
![](figura3.pdf){#fig:vie26may1213} From the system [\[eqn:vie26may1244\]](#eqn:vie26may1244){reference-type="ref" reference="eqn:vie26may1244"}, we have that $$\label{eqn:vie26may1246} z=\frac{1}{\sin\gamma_{AB}\cos\gamma_{CD}-\cos\gamma_{AB}\sin\gamma_{CD}}\begin{pmatrix} \rho_{CD}\sin\gamma_{AB}-\rho_{AB}\sin\gamma_{CD}\\ \rho_{AB}\cos\gamma_{CD}-\rho_{CD}\cos\gamma_{AB} \end{pmatrix},$$ and the norm of $z$ satisfies $$\label{eqn:vie26may1324} \left|\left|z\right|\right|^2 = \frac{\rho_{AB}^2 + \rho_{CD}^2-2\rho_{AB}\rho_{CD}\cos\left(\gamma_{AB} - \gamma_{CD}\right)}{\sin^2\left(\gamma_{AB} - \gamma_{CD}\right)}.$$ In order that $S_{AB}\cap S_{CD}\neq \O$, the point $z$ must satisfy the condition $\left|\left|z\right|\right|^2\leq 1$, so that it lies inside $\mathds{D}$, and there must exist $\alpha_{AB},\alpha_{CD}\in[0,1]$ such that $$\begin{aligned} \left(1-\alpha_{AB}\right)A + \alpha_{AB} B = z,\\ \left(1-\alpha_{CD}\right)C + \alpha_{CD} D = z.\end{aligned}$$ Note that $$\begin{aligned} z & = \left(1-\alpha_{AB}\right)A + \alpha_{AB}B \\ & = \left(1-\alpha_{AB}\right) \begin{pmatrix} \cos\gamma_{AB} & -\sin\gamma_{AB} \\ \sin\gamma_{AB} & \cos\gamma_{AB} \end{pmatrix} \begin{pmatrix} \rho_{AB} \\ t_A \end{pmatrix} + \alpha_{AB} \begin{pmatrix} \cos\gamma_{AB} & -\sin\gamma_{AB} \\ \sin\gamma_{AB} & \cos\gamma_{AB} \end{pmatrix} \begin{pmatrix} \rho_{AB} \\ t_B \end{pmatrix},\end{aligned}$$ which means $$\left|\left|z\right|\right|^2 = \rho_{AB}^2 +\left[(1-\alpha_{AB})t_A + \alpha_{AB}t_B\right]^2.$$ Observe that $\left|\left|z\right|\right|^2-\rho_{AB}^2\geq 0$ is always satisfied; solving for $\alpha_{AB}$ gives $$\label{eqn:dom28may0731} \alpha^{(1)}_{AB} = \frac{t_A(t_A-t_B)-\left|t_A-t_B\right|\sqrt{\left|\left|z\right|\right|^2-\rho_{AB}^2}}{(t_A-t_B)^2}, \;\;\; \alpha^{(2)}_{AB} = \frac{t_A(t_A-t_B)+\left|t_A-t_B\right|\sqrt{\left|\left|z\right|\right|^2-\rho_{AB}^2}}{(t_A-t_B)^2}.$$ Similarly, for the pair $(C,D)$ we obtain $\left|\left|z\right|\right|^2-\rho_{CD}^2\geq 0$ 
and $$\label{eqn:dom28may0732} \alpha^{(1)}_{CD} =\frac{t_C(t_C-t_D)-\left|t_C-t_D\right|\sqrt{\left|\left|z\right|\right|^2-\rho_{CD}^2}}{(t_C-t_D)^2}, \;\;\; \alpha^{(2)}_{CD} = \frac{t_C(t_C-t_D)+\left|t_C-t_D\right|\sqrt{\left|\left|z\right|\right|^2-\rho_{CD}^2}}{(t_C-t_D)^2}.$$ In order to compute the value of  [\[eqn:mie24may0932\]](#eqn:mie24may0932){reference-type="ref" reference="eqn:mie24may0932"}, we only need to consider the event $$\mathcal{E} :=\left\{\Theta \leq \theta, S_{AB} \cap S_{CD} \not=\O\right\}.$$ From the expressions  [\[eqn:dom28may0731\]](#eqn:dom28may0731){reference-type="ref" reference="eqn:dom28may0731"},  [\[eqn:dom28may0732\]](#eqn:dom28may0732){reference-type="ref" reference="eqn:dom28may0732"} and Figure  [3](#fig:vie26may1213){reference-type="ref" reference="fig:vie26may1213"}, we have that the event $\mathcal{E}$ can be expressed in the following way: $$\begin{aligned} \mathcal{E} & = \left\{\Theta \leq \theta, \exists s_1,s_2\in[0,1] : (1-s_1)A+s_1 B=(1-s_2)C+ s_2 D\right\} \nonumber \\ & = \cup_{i,j\in\left\{1,2\right\}}\left\{\left|\left|\gamma_{AB}-\gamma_{CD}\right| - \pi\right| \leq \theta, \left|\left|z\right|\right|^2\leq 1 , \alpha^{(i)}_{AB}, \alpha^{(j)}_{CD}\in[0,1]\right\} \nonumber \\ & = {\mathcal{E}_0 \cap \mathcal{E}_1} \cap \mathcal{E}_2, \nonumber\end{aligned}$$ where $$\mathcal{E}_0:= \left\{\left|\left|\gamma_{AB}-\gamma_{CD}\right| - \pi\right| \leq \theta\right\},\, \mathcal{E}_1:=\left\{\left|\left|z\right|\right|^2\leq 1\right\}, \mbox{ and } \mathcal{E}_2:= {\cup_{i,j\in\left\{1,2\right\}}\left\{\alpha^{(i)}_{AB}, \alpha^{(j)}_{CD}\in[0,1]\right\}}.$$ Let $f_1:= f(\rho_{AB},\gamma_{AB}, t_{A}, t_{B})$ and $f_2:=f(\rho_{CD},\gamma_{CD}, t_{C}, t_{D})$ be the densities associated with $(A,B)$ and $(C,D)$, respectively. 
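The closed-form quantities entering $\mathcal{E}$ can be spot-checked numerically: the point $z$ is collinear with $A,B$ and with $C,D$, its squared norm equals $\left(\rho_{AB}^2+\rho_{CD}^2-2\rho_{AB}\rho_{CD}\cos(\gamma_{AB}-\gamma_{CD})\right)/\sin^2(\gamma_{AB}-\gamma_{CD})$, and one of the two roots $\alpha^{(1)}_{AB},\alpha^{(2)}_{AB}$ recovers $z$ on the parametrisation of $S_{AB}$'s line. A sketch with arbitrary test values of our own:

```python
import numpy as np

def rot(g):
    """Rotation matrix by angle g."""
    return np.array([[np.cos(g), -np.sin(g)], [np.sin(g), np.cos(g)]])

def cross2(u, v):
    """Scalar cross product of planar vectors."""
    return u[0] * v[1] - u[1] * v[0]

# arbitrary non-degenerate test values (rho, gamma, t) for both segments
rho_ab, gam_ab, ta, tb = 0.4, 0.3, 0.5, -0.6
rho_cd, gam_cd, tc, td = 0.2, 2.0, 0.7, -0.3

A = rot(gam_ab) @ np.array([rho_ab, ta])
B = rot(gam_ab) @ np.array([rho_ab, tb])
C = rot(gam_cd) @ np.array([rho_cd, tc])
D = rot(gam_cd) @ np.array([rho_cd, td])

# closed-form intersection point of the lines l_AB and l_CD
den = np.sin(gam_ab) * np.cos(gam_cd) - np.cos(gam_ab) * np.sin(gam_cd)
z = np.array([rho_cd * np.sin(gam_ab) - rho_ab * np.sin(gam_cd),
              rho_ab * np.cos(gam_cd) - rho_cd * np.cos(gam_ab)]) / den

# z is collinear with A, B and with C, D
print(cross2(B - A, z - A), cross2(D - C, z - C))  # both approx 0

# norm identity, with the angle gamma_AB - gamma_CD
g = gam_ab - gam_cd
nz = (rho_ab**2 + rho_cd**2 - 2 * rho_ab * rho_cd * np.cos(g)) / np.sin(g)**2
print(z @ z - nz)  # approx 0

# the two candidate roots alpha^{(1)}, alpha^{(2)}; one reproduces z exactly
s = np.sqrt(z @ z - rho_ab**2)
roots = [(ta * (ta - tb) - abs(ta - tb) * s) / (ta - tb)**2,
         (ta * (ta - tb) + abs(ta - tb) * s) / (ta - tb)**2]
err = min(np.linalg.norm((1 - a) * A + a * B - z) for a in roots)
print(err)  # approx 0
```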
Thus, the probability of the event $\mathcal{E}$ is $$\begin{aligned} \label{eqn:vie30jun1100} \mathbb{P}\left(\mathcal{E}\right) & = \int_{\mathcal{E}} f_1f_2 d(\rho_{AB},\gamma_{AB}, t_{A}, t_{B},\rho_{CD},\gamma_{CD}, t_{C}, t_{D}) \\ & = \int f_1f_2 \mathds{1}_{\mathcal{E}_0} \mathds{1}_{\mathcal{E}_1}\mathds{1}_{\mathcal{E}_2} d(\rho_{AB},\gamma_{AB}, t_{A}, t_{B},\rho_{CD},\gamma_{CD}, t_{C}, t_{D}). \nonumber\end{aligned}$$ In order to compute the integral  [\[eqn:vie30jun1100\]](#eqn:vie30jun1100){reference-type="ref" reference="eqn:vie30jun1100"}, we assume that $\left|\left|z\right|\right|^2$, $\rho_{AB}$, $\rho_{CD}$, $\gamma_{AB}$, and $\gamma_{CD}$ are fixed values satisfying the conditions described by $\mathcal{E}_0$ and $\mathcal{E}_1$. We note that $$\begin{aligned} f_1 f_2 \mathds{1}_{\mathcal{E}_2} & = \sum_{i,j\in\left\{1,2\right\}} f_1 f_2 \mathds{1}_{\left\{\alpha_{AB}^{(i)}, \alpha_{CD}^{(j)}\in[0,1] \right\}} = \sum_{i,j\in\left\{1,2\right\}} f_1\mathds{1}_{\left\{\alpha_{AB}^{(i)}\in[0,1]\right\}} f_2 \mathds{1}_{\left\{\alpha_{CD}^{(j)}\in[0,1]\right\}}.\end{aligned}$$ As $f_1\mathds{1}_{\left\{\alpha_{AB}^{(i)}\in[0,1]\right\}}$ does not share variables with $f_2 \mathds{1}_{\left\{\alpha_{CD}^{(j)}\in[0,1]\right\}}$, under the previous assumption, we have $$\begin{aligned} & \int\int\int\int f_1 f_2 \mathds{1}_{\mathcal{E}_2} \textnormal{d}t_A \textnormal{d}t_B \textnormal{d}t_C \textnormal{d}t_D \\ & = \sum_{i,j\in\left\{1,2\right\}} \int\int\int\int f_1\mathds{1}_{\left\{\alpha_{AB}^{(i)}\in[0,1]\right\}} f_2 \mathds{1}_{\left\{\alpha_{CD}^{(j)}\in[0,1]\right\}} \textnormal{d}t_A \textnormal{d}t_B \textnormal{d}t_C \textnormal{d}t_D \\ & = \sum_{i,j\in\left\{1,2\right\}} \left[\int\int f_1\mathds{1}_{\left\{\alpha_{AB}^{(i)}\in[0,1]\right\}} \textnormal{d}t_A \textnormal{d}t_B\right] \left[\int \int f_2 \mathds{1}_{\left\{\alpha_{CD}^{(j)}\in[0,1]\right\}} \textnormal{d}t_C \textnormal{d}t_D\right].\end{aligned}$$ We observe that 
$$\begin{aligned} & \int\int f_1\mathds{1}_{\left\{\alpha_{AB}^{(i)}\in[0,1]\right\}} \textnormal{d}t_A \textnormal{d}t_B = \int\int f_1\mathds{1}_{\left\{\alpha_{AB}^{(i)}\in[0,1]\right\}}\left(\mathds{1}_{\left\{t_A\geq t_B\right\}} + \mathds{1}_{\left\{t_A < t_B\right\}}\right) \textnormal{d}t_A \textnormal{d}t_B \\ & \;\;\; = \int\int f_1\mathds{1}_{\left\{\alpha_{AB}^{(i)}\in[0,1]\right\}} \mathds{1}_{\left\{t_A\geq t_B\right\}} \textnormal{d}t_A \textnormal{d}t_B + \int\int f_1\mathds{1}_{\left\{\alpha_{AB}^{(i)}\in[0,1]\right\}} \mathds{1}_{\left\{t_A< t_B\right\}} \textnormal{d}t_A \textnormal{d}t_B.\end{aligned}$$ For $i=1$, $$\begin{aligned} & \int\int f_1\mathds{1}_{\left\{\alpha_{AB}^{(1)}\in[0,1]\right\}} \textnormal{d}t_A \textnormal{d}t_B \\ & \;\;\; = \int\int f_1\mathds{1}_{\left\{t_A \geq \sqrt{\left|\left|z\right|\right|^2-\rho^2_{AB}} \geq t_B\right\}} \textnormal{d}t_A \textnormal{d}t_B + \int\int f_1\mathds{1}_{\left\{t_B \geq -\sqrt{\left|\left|z\right|\right|^2-\rho^2_{AB}} \geq t_A\right\}} \textnormal{d}t_A \textnormal{d}t_B,\end{aligned}$$ and for $i=2$, $$\begin{aligned} & \int\int f_1\mathds{1}_{\left\{\alpha_{AB}^{(2)}\in[0,1]\right\}}\textnormal{d}t_A \textnormal{d}t_B \\ & \;\;\; = \int\int f_1\mathds{1}_{\left\{t_A \geq -\sqrt{\left|\left|z\right|\right|^2-\rho^2_{AB}} \geq t_B\right\}} \textnormal{d}t_A \textnormal{d}t_B + \int\int f_1\mathds{1}_{\left\{t_B \geq \sqrt{\left|\left|z\right|\right|^2-\rho^2_{AB}} \geq t_A\right\}} \textnormal{d}t_A \textnormal{d}t_B.\end{aligned}$$ Similar expressions are obtained for $f_2$. 
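For fixed $\rho_{AB}$ and $\left|\left|z\right|\right|^2$, the sum of the four double integrals above can also be evaluated numerically; it equals $(2/\pi)^2\sqrt{1-\rho_{AB}^2}\,(1-\left|\left|z\right|\right|^2)$, the factor contributed by $(A,B)$ to the expression displayed next. A midpoint-rule check (sample values and grid size are our own choices):

```python
import numpy as np

rho, z2 = 0.3, 0.5          # sample values with rho^2 <= ||z||^2 <= 1
a = np.sqrt(1 - rho**2)     # range limit of t_A, t_B
s = np.sqrt(z2 - rho**2)

n = 2000
t = -a + (np.arange(n) + 0.5) * (2 * a / n)      # midpoints of [-a, a]
ta, tb = np.meshgrid(t, t, indexing="ij")
w = np.abs(ta - tb) / np.pi**2                   # f_1 without its indicators

regions = ((ta >= s) & (s >= tb), (tb >= -s) & (-s >= ta),   # i = 1
           (ta >= -s) & (-s >= tb), (tb >= s) & (s >= ta))   # i = 2
total = sum((w * r).sum() for r in regions) * (2 * a / n)**2

print(total, (2 / np.pi)**2 * a * (1 - z2))   # both approx 0.193
```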
Carrying out the corresponding calculations, we have $$\begin{aligned} f^{**}_{3} &:= \int f_1 f_2 \mathds{1}_{\mathcal{E}_2} d(t_A,t_B,t_C,t_D) \\ & \,\, = \left(\frac{2}{\pi}\right)^4 \sqrt{1-\rho_{AB}^2} \sqrt{1-\rho_{CD}^2} \left(1-\left|\left|z\right|\right|^2\right)^2 \mathds{1}_{\left\{\rho_{AB}\in [0,1]\right\}} \mathds{1}_{\left\{\rho_{CD}\in [0,1]\right\}} \mathds{1}_{\left\{\gamma_{AB}\in[0,2\pi]\right\}} \mathds{1}_{\left\{\gamma_{CD}\in[0,2\pi]\right\}}.\end{aligned}$$ Observe that $\gamma_{AB}, \gamma_{CD}$ are independent variables, which are also independent of the other variables considered. As they are uniformly distributed on $[0,2\pi]$, the density $h(\gamma)$ of $\Gamma:= \Gamma_{AB}-\Gamma_{CD}$ is $$h(\gamma) = \left\{ \begin{array}{cl} \frac{1}{2\pi} - \frac{\gamma}{(2\pi)^2} & \gamma\in [0,2\pi]\\ \frac{1}{2\pi} + \frac{\gamma}{(2\pi)^2} & \gamma\in [-2\pi,0]\\ 0 & \gamma\not\in [-2\pi,2\pi] \end{array} \right..$$ From this observation, if we define $$\begin{aligned} f^{*}_{3}(\rho_{AB},\rho_{CD},\gamma) &:= \left(\frac{8}{\pi}\right)^2 \sqrt{1-\rho_{AB}^2}\sqrt{1-\rho_{CD}^2} \\ & \hspace{1cm} \times \left[1- \frac{\rho_{AB}^2 + \rho_{CD}^2-2\rho_{AB}\rho_{CD}\cos\left(\gamma\right)}{\sin^2\left(\gamma\right)}\right]^2 \mathds{1}_{\left\{\rho_{AB}\in [0,1]\right\}} \mathds{1}_{\left\{\rho_{CD}\in [0,1]\right\}}, \\ f_{3}(\rho_{AB},\rho_{CD},\gamma) & := h(\gamma)f^{*}_{3}(\rho_{AB},\rho_{CD},\gamma),\end{aligned}$$ then the probability of the event $\mathcal{E}$ can be expressed as $$\begin{aligned} \mathbb{P}\left(\mathcal{E}\right) & = \int f_3(\rho_{AB},\rho_{CD},\gamma) \mathds{1}_{\mathcal{E}_0} \mathds{1}_{\mathcal{E}_1} d(\gamma,\rho_{AB},\rho_{CD}) = \int f_3(\rho_{AB},\rho_{CD},\gamma) \mathds{1}_{ \left\{\left|\left|\gamma\right| - \pi\right| \leq \theta\right\}} \mathds{1}_{\mathcal{E}_1} d(\gamma,\rho_{AB},\rho_{CD}) \\ & = 2\int f_3(\rho_{AB},\rho_{CD},\gamma) \mathds{1}_{ \left\{\left|\gamma - \pi\right| \leq 
\theta\right\}} \mathds{1}_{\left\{\gamma\in [0,2\pi]\right\}} \mathds{1}_{\mathcal{E}_1} d(\gamma,\rho_{AB},\rho_{CD}) \\ & = 2\int f_3(\rho_{AB},\rho_{CD},\gamma) \left[\mathds{1}_{ \left\{\pi - \gamma \leq \theta\right\}} \mathds{1}_{\left\{\gamma\in [0,\pi]\right\}} +\mathds{1}_{ \left\{\gamma - \pi \leq \theta\right\}} \mathds{1}_{\left\{\gamma\in (\pi,2\pi]\right\}} \right] \mathds{1}_{\mathcal{E}_1} d(\gamma,\rho_{AB},\rho_{CD}). \end{aligned}$$ Now, if we take the change of variable $\beta = \pi - \gamma$, then $$\begin{aligned} & 2\int f_3(\rho_{AB},\rho_{CD},\gamma) \mathds{1}_{ \left\{\pi - \gamma \leq \theta\right\}} \mathds{1}_{\left\{\gamma\in [0,\pi]\right\}} \mathds{1}_{\mathcal{E}_1} d(\gamma,\rho_{AB},\rho_{CD}) \\ &\;\;\; = 2 \int f_3(\rho_{AB},\rho_{CD},\pi-\beta) \mathds{1}_{ \left\{\sqrt{1-\rho_{AB}^2} \left|\sin(\pi-\beta)\right| \geq \left|\rho_{AB} \cos(\pi-\beta)-\rho_{CD}\right|\right\} } \mathds{1}_{ \left\{\beta \leq \theta\right\}} \mathds{1}_{\left\{\beta\in [0,\pi]\right\}} d(\beta,\rho_{AB},\rho_{CD}) \\ &\;\;\; = \int_0^\theta\left[ 2 \int_0^1\int_0^1 f_3(\rho_{AB},\rho_{CD},\pi-\beta) \mathds{1}_{ \left\{\sqrt{1-\rho_{AB}^2} \left|\sin(\beta)\right| \geq \left|\rho_{AB} \cos(\beta)+\rho_{CD}\right|\right\} } \textnormal{d}\rho_{AB} \textnormal{d}\rho_{CD} \right] \mathds{1}_{\left\{\beta\in [0,\pi]\right\}} \textnormal{d}\beta.\end{aligned}$$ Similarly, if we take $\beta = \gamma - \pi$, then $$\begin{aligned} & 2\int f_3(\rho_{AB},\rho_{CD},\gamma) \mathds{1}_{ \left\{\gamma -\pi \leq \theta\right\}} \mathds{1}_{\left\{\gamma\in (\pi,2\pi]\right\}} \mathds{1}_{\mathcal{E}_1} d(\gamma,\rho_{AB},\rho_{CD}) \\ &\;\;\; = 2 \int f_3(\rho_{AB},\rho_{CD},\pi+\beta) \mathds{1}_{ \left\{\sqrt{1-\rho_{AB}^2} \left|\sin(\pi+\beta)\right| \geq \left|\rho_{AB} \cos(\pi+\beta)-\rho_{CD}\right|\right\} } \mathds{1}_{ \left\{\beta \leq \theta\right\}} \mathds{1}_{\left\{\beta\in (0,\pi]\right\}} d(\beta,\rho_{AB},\rho_{CD}) \\ &\;\;\; = 
\int_0^\theta\left[ 2 \int_0^1\int_0^1 f_3(\rho_{AB},\rho_{CD},\pi+\beta) \mathds{1}_{ \left\{\sqrt{1-\rho_{AB}^2} \left|\sin(\beta)\right| \geq \left|\rho_{AB} \cos(\beta)+\rho_{CD}\right|\right\} } \textnormal{d}\rho_{AB} \textnormal{d}\rho_{CD} \right] \mathds{1}_{\left\{\beta\in (0,\pi]\right\}} \textnormal{d}\beta.\end{aligned}$$ Note that $f^*_3(\rho_{AB},\rho_{CD},\pi-\beta) = f^*_3(\rho_{AB},\rho_{CD},\pi+\beta)$. Thus, we have $$\begin{aligned} & f_3(\rho_{AB},\rho_{CD},\pi-\beta) + f_3(\rho_{AB},\rho_{CD},\pi+\beta) \\ &\;\;\; = h(\pi-\beta)f^*_3(\rho_{AB},\rho_{CD},\pi-\beta) + h(\pi+\beta)f^*_3(\rho_{AB},\rho_{CD},\pi+\beta) \\ &\;\;\; = \frac{1}{2\pi} f^*_3(\rho_{AB},\rho_{CD},\pi-\beta).\end{aligned}$$ From the above, we can write the probability of the event $\mathcal{E}$ as $$\begin{aligned} \mathbb{P}\left(\mathcal{E}\right) = \int_0^\theta g^*(\beta) \mathds{1}_{\left\{\beta\in [0,\pi]\right\}} \textnormal{d}\beta,\end{aligned}$$ where $$\label{eq:mie23ago20230956} g^*(\beta) := \int_0^1 \int_0^1 \frac{1}{\pi} f^*_3(\rho_{AB},\rho_{CD},\pi-\beta) \mathds{1}_{ \left\{\sqrt{1-\rho_{AB}^2} \left|\sin(\beta)\right| \geq \left|\rho_{AB} \cos(\beta)+\rho_{CD}\right|\right\} } \textnormal{d}\rho_{AB} \textnormal{d}\rho_{CD}.$$ Let $c:=\int_0^\pi g^*(\beta) \textnormal{d}\beta$ and $g(\beta) = \frac{1}{c}g^*(\beta)$. Then $$\mathbb{P}\left(\Theta \leq \theta | S_{AB} \cap S_{CD} \not=\O\right) = \int_0^\theta g(\beta) \mathds{1}_{\left\{\beta\in [0,\pi]\right\}} \textnormal{d}\beta,$$ i.e., $g(\beta)$ is the density of the angle between the random segments $S_{AB}$ and $S_{CD}$ when they intersect. F. Garwood and E. M. Holroyd. The distance of a "random chord" of a circle from the centre. *The Mathematical Gazette*, 50(373):283--286, 1966. F. Garwood and J. C. Tanner. On Note 2754: a repeated integral. *The Mathematical Gazette*, 42(342):292--293, 1958. M. G. Kendall and P. A. P. Moran. *Geometrical Probability*. Griffin, London, 1963. A. M. Mathai. *An Introduction to Geometrical Probability: Distributional Aspects with Applications*, volume 1. CRC Press, 1999.
--- abstract: | We study quasi-quadratic modules in a pseudo-valuation domain $A$ whose strict units admit a square root. Let $\mathfrak X_R^N$ denote the set of quasi-quadratic modules in an $R$-module $N$, where $R$ is a commutative ring. It is known that there exists a unique overring $B$ of $A$ such that $B$ is a valuation ring with the valuation group $(G,\leq)$ and the maximal ideal of $B$ coincides with that of $A$. Let $F$ be the residue field of $B$ and $F_0$ that of $A$. In the above setting, we find a one-to-one correspondence between $\mathfrak X_A^A$ and a subset of $\prod_{g \in G,g \geq e} \mathfrak X_{F_0}^F$. address: - Department of Liberal Arts, Japan Coast Guard Academy, 5-1 Wakaba-cho, Kure, Hiroshima 737-8512, Japan - Department of Architectural Engineering, Faculty of Engineering, Hiroshima Institute of Technology, 2-1-1 Miyake, Saeki-ku, Hiroshima 731-5193, Japan author: - Masato Fujita - Masaru Kageyama title: Quasi-quadratic modules in pseudo-valuation domain --- # Introduction {#sec:intro} Quadratic modules in the ring of univariate formal power series $E[\![X]\!]$ in the indeterminate $X$ were completely classified in [@AK] when $E$ is a euclidean field. This result is due to the simple form of the elements of the ring $E[\![X]\!]$ and to the fact that a quadratic module there is finitely generated. In [@FK], the authors considerably generalized it to the case in which the ring is an overring of a valuation ring whose strict units admit a square root. They first introduced the notion of quasi-quadratic modules, which is a slight generalization of quadratic modules, and gave a complete classification of the quasi-quadratic modules of the ring under the assumption that the quasi-quadratic modules in the residue class field are already given. A pseudo-angular component map was the key tool used in the classification. 
This paper tackles the same problem, that is, the classification of quasi-quadratic modules, but we consider pseudo-valuation domains, defined in [@He], rather than overrings of a valuation ring. Let us recall the definition of a pseudo-valuation domain. **Definition 1**. Let $A$ be a domain with the quotient field $Q(A)=K$. A prime ideal $\mathfrak{p}$ of $A$ is called *strongly prime* if $x,y\in K$ and $xy\in\mathfrak{p}$ imply that $x\in \mathfrak{p}$ or $y\in \mathfrak{p}$. A domain $A$ is called a *pseudo-valuation domain* if every prime ideal of $A$ is strongly prime. It immediately follows from the definition that every valuation domain is a pseudo-valuation domain and that a pseudo-valuation domain is a local ring. See [@He Proposition 1.1 and Corollary 1.3]. We fix a pseudo-valuation domain $A$ throughout the paper. We use the following notation in this paper.

  Notation        Description
  --------------- -------------------------------------------------
  $\mathfrak m$   the maximal ideal of $A$.
  $F_0$           the residue field of $A$, i.e. $A/\mathfrak m$.
  $\pi_0$         the residue map $A \to F_0$.
  $K$             the quotient field of the domain $A$.

We give a fundamental example of a pseudo-valuation domain which is not a valuation domain. *Example 2*. Let us consider the local domain $$A=\left\{\left.\sum_{l=0}^\infty a_lX^l \in \mathbb C[\![X]\!]\;\right|\; a_0 \in \mathbb R\right\}$$ with the maximal ideal $\mathfrak{m}=X\mathbb C[\![X]\!]$. Set $B=\mathbb C[\![X]\!]$. The local domain $A$ is not a valuation domain of the quotient field $K=\mathbb{C}(\!(X)\!)$, but it is a pseudo-valuation domain. In fact, $A$ is not a valuation domain of $K$ because $\sqrt{-1}\not\in A$ and $(\sqrt{-1})^{-1}\not\in A$. The remaining task is to show that every prime ideal of $A$ is strongly prime. By [@He Theorem 1.4], we have only to show that $\mathfrak{m}$ is strongly prime. Take elements $x, y\in K$ with $xy\in \mathfrak{m}$. Note that $B$ is a valuation domain of $K$ with the maximal ideal $\mathfrak{m}$. 
Since every valuation domain is a pseudo-valuation domain, $B$ is a pseudo-valuation domain of $K$. We have $x\in\mathfrak{m}$ or $y\in\mathfrak{m}$ because $\mathfrak{m}$ is strongly prime as an ideal of $B$. Thus it follows that $\mathfrak{m}$ is strongly prime as an ideal of $A$. We employ a technical assumption similar to the one employed in [@FK]. **Definition 3**. An element $x$ in $A$ is a *strict unit* if $\pi_0(x)=1$. The element $x$ *admits a square root* if there exists $u \in A$ such that $x=u^2$. We assume that every strict unit in $A$ admits a square root. It is known that there exists a unique overring $B$ of $A$ which is a valuation ring in $K$ such that the maximal ideal of $B$ coincides with $\mathfrak m$ [@He Theorem 2.7]. We also use the following notation:

  Notation                              Description
  ------------------------------------- -------------------------------------------------------------------------------------------
  $(K,\operatorname{\mbox{\bf val}})$   the valued field whose valuation ring is $B$.
  $(G,<)$                               the valuation group of $(K,\operatorname{\mbox{\bf val}})$ with the identity element $e$.
  $F$                                   the residue class field of $(K,\operatorname{\mbox{\bf val}})$.
  $\pi$                                 the residue class map $B \to F$ of $(K,\operatorname{\mbox{\bf val}})$.
  $\overline{g}$                        the equivalence class of $g \in G$ in $G/G^2$.

For simplicity, we write $h = \overline{g}$ when the equivalence class of $h \in G$ in $G/G^2$ coincides with an element $\overline{g} \in G/G^2$. These notations are also used in [@FK]. It is easy to demonstrate that $F_0$ is a subfield of $F$ and that $\pi_0$ is the restriction of $\pi$ to $A$. Hence we have the following commutative diagram $$\xymatrix@M=8pt{ A \ar[r]^{\pi_0} \ar@{^{(}->}[d] & F_0=A/\mathfrak{m} \ar@{^{(}->}[d] \\ B \ar[r]_{\pi} & F=B/\mathfrak{m} \ar@{}[lu]|{\circlearrowright} }$$ with the natural ring homomorphisms. We also have to slightly generalize the notion of quasi-quadratic modules. 
In [@FK], a quasi-quadratic module is a subset of the given ring. We redefine a quasi-quadratic module so that it is a subset of a module. **Definition 4**. Let $R$ be a commutative ring and $N$ be an $R$-module. A subset $M$ of $N$ is a *quasi-quadratic $R$-module* in $N$ if $M+M \subseteq M$ and $a^2 M \subseteq M$ for all $a \in R$. Note that we always have $0 \in M$. A quasi-quadratic $R$-module is simply called a quasi-quadratic module when the ring $R$ is clear from the context. Let $\mathfrak X_R^N$ denote the set of quasi-quadratic $R$-modules in $N$. We simply write $\mathfrak X_R$ when $N=R$. We are now ready to describe the purpose of this paper. Our goal is to construct a one-to-one correspondence between $\mathfrak X_A$ and a subset of $\prod_{g \in G, g \geq e} \mathfrak X_{F_0}^F$. The pseudo-angular component map $\operatorname{\mbox{\bf p.an}}:K^\times \rightarrow F^\times$ defined in [@FK] is also a very useful tool in this study. In Section [2](#sec:previous){reference-type="ref" reference="sec:previous"}, we recall the results of the previous study [@FK], including the pseudo-angular component map. Several basic lemmas which follow immediately from them are also discussed in this section. Section [3](#sec:qqmodule){reference-type="ref" reference="sec:qqmodule"} is the main part of this paper. We prove the above one-to-one correspondence and several properties of $\mathfrak X_A$ in this section. As a special case, we study $\mathfrak X_A$ for the pseudo-valuation domain $A$ introduced in Example [Example 2](#fund-ex){reference-type="ref" reference="fund-ex"} in Section [4](#sec:example){reference-type="ref" reference="sec:example"}. In the above sections, we assume that $F_0$ is not of characteristic two. We consider the case in which $F_0$ is of characteristic two in Section [5](#sec:char2){reference-type="ref" reference="sec:char2"}. Finally, we summarize the notations used in this paper. 
For a commutative ring $R$, let $R^\times$ denote the set of units. When $(S,<)$ is a linearly ordered set and $c$ is an element of $S$, the notation $S_{\geq c}$ denotes the set of elements of $S$ not smaller than $c$. # Review of the previous results {#sec:previous} Recall that an element $x \in K$ is a unit in $B$ if and only if $\operatorname{\mbox{\bf val}}(x)=e$. A unit $x$ in $B$ is called a *strict unit* if $\pi(x)=1$. We say that an element $x$ in $K$ *admits a square root* if there exists $y \in K$ with $x=y^2$. In [@FK Proposition 2.14], we defined a pseudo-angular component map $\operatorname{\mbox{\bf p.an}}:K^{\times} \rightarrow F^{\times}$ and demonstrated its existence when strict units in $B$ always admit a square root. Here is the definition of a pseudo-angular component map. **Definition 5**. A *pseudo-angular component map* is a map $\operatorname{\mbox{\bf p.an}}:K^{\times} \rightarrow F^{\times}$ satisfying the following conditions: 1. We have $\operatorname{\mbox{\bf p.an}}(u)=\pi(u)$ for any $u \in B^{\times}$; 2. The equality $\operatorname{\mbox{\bf p.an}}(ux)=\pi(u) \cdot \operatorname{\mbox{\bf p.an}}(x)$ holds true for all $u \in B^{\times}$ and $x \in K^{\times}$; 3. For any $g \in G$ and nonzero $c \in F$, there exists an element $w \in K$ with $\operatorname{\mbox{\bf val}}(w)=g$ and $\operatorname{\mbox{\bf p.an}}(w)=c$; 4. 
For any nonzero elements $x_1,x_2 \in K$ with $x_1+x_2 \not=0$, we have - $\operatorname{\mbox{\bf val}}(x_1+x_2)=\operatorname{\mbox{\bf val}}(x_1)$ and $\operatorname{\mbox{\bf p.an}}(x_1+x_2)=\operatorname{\mbox{\bf p.an}}(x_1)$ if $\operatorname{\mbox{\bf val}}(x_1) < \operatorname{\mbox{\bf val}}(x_2)$; - $\operatorname{\mbox{\bf val}}(x_1+x_2)=\operatorname{\mbox{\bf val}}(x_1)$ and $\operatorname{\mbox{\bf p.an}}(x_1+x_2)=\operatorname{\mbox{\bf p.an}}(x_1)+\operatorname{\mbox{\bf p.an}}(x_2)$ if $\operatorname{\mbox{\bf val}}(x_1) = \operatorname{\mbox{\bf val}}(x_2)$ and $\operatorname{\mbox{\bf p.an}}(x_1)+\operatorname{\mbox{\bf p.an}}(x_2) \not=0$; 5. For any $x,y \in K^{\times}$ with $\overline{\operatorname{\mbox{\bf val}}(x)}=\overline{\operatorname{\mbox{\bf val}}(y)}$ and $\operatorname{\mbox{\bf p.an}}(x)=\operatorname{\mbox{\bf p.an}}(y)$, there exists $u \in K^{\times}$ such that $y=u^2x$; 6. For any $a,u \in K^{\times}$, there exists $k \in F^{\times}$ such that $\operatorname{\mbox{\bf p.an}}(au^2)=\operatorname{\mbox{\bf p.an}}(a)k^2$. We now give several lemmas needed in this paper. **Lemma 6**. *Let $\operatorname{\mbox{\bf p.an}}:K^{\times} \rightarrow F^{\times}$ be a pseudo-angular component map. For any $a \in K^{\times}$, $g \in G$ and $k \in F^{\times}$, there exists $u \in K^{\times}$ such that $\operatorname{\mbox{\bf val}}(u)=g$ and $\operatorname{\mbox{\bf p.an}}(au^2)=\operatorname{\mbox{\bf p.an}}(a)k^2$.* *Proof.* Take an element $u_0 \in K^{\times}$ such that $\operatorname{\mbox{\bf val}}(u_0)=g$. Such an element always exists because the valuation is surjective. There exists $k_0 \in F^\times$ such that $\operatorname{\mbox{\bf p.an}}(au_0^2)=\operatorname{\mbox{\bf p.an}}(a)k_0^2$ by Definition [Definition 5](#def:pseudo){reference-type="ref" reference="def:pseudo"}(6). Take an element $v \in B$ with $\pi(v)=k/k_0$ and set $u=u_0v$. It is obvious that $\operatorname{\mbox{\bf val}}(v)=e$ and $\operatorname{\mbox{\bf val}}(u)=g$.
By Definition [Definition 5](#def:pseudo){reference-type="ref" reference="def:pseudo"}(2), we have $\operatorname{\mbox{\bf p.an}}(au^2)=\operatorname{\mbox{\bf p.an}}(au_0^2v^2)=\operatorname{\mbox{\bf p.an}}(au_0^2)\pi(v)^2=\operatorname{\mbox{\bf p.an}}(a) \cdot k_0^2 \cdot (k/k_0)^2=\operatorname{\mbox{\bf p.an}}(a)k^2$. ◻ **Lemma 7**. *Let $\operatorname{\mbox{\bf p.an}}:K^{\times} \rightarrow F^{\times}$ be a pseudo-angular component map. Let $x,y \in K^{\times}$. If $\operatorname{\mbox{\bf val}}(x)=\operatorname{\mbox{\bf val}}(y)$ and $\operatorname{\mbox{\bf p.an}}(x)+\operatorname{\mbox{\bf p.an}}(y)=0$, then $\operatorname{\mbox{\bf val}}(x+y)>\operatorname{\mbox{\bf val}}(x)$.* *Proof.* It is [@FK Corollary 2.16]. ◻ # Quasi-quadratic modules in a pseudo-valuation domain {#sec:qqmodule} We come back to the study of the pseudo-valuation domain $A$. We use the notations given in Section [1](#sec:intro){reference-type="ref" reference="sec:intro"} without notice. **Lemma 8**. *Let $A$ be a pseudo-valuation domain. Let $x \in B$. If $\pi(x) \in F_0$, then $x \in A$. If, in addition, $\pi(x) \neq 0$, then $x \in A^\times$.* *Proof.* When $\pi(x)=0$, we have $x \in \mathfrak m \subset A$. When $\pi(x) \neq 0$, there exists $y \in A$ with $\pi_0(y)=\pi(x)$ by the definition of $F_0$. Since $\pi_0(y) \neq 0$, we have $y \in A^{\times}$. In particular, we get $1/y \in A^\times$. Set $z=x/y-1$. We have $\pi(z)=\pi(x/y)-1=\pi(x)/\pi(y)-1=\pi(x)/\pi_0(y)-1=0$. It implies $z \in \mathfrak m \subset A$. Therefore, we get $x = (x/y) \cdot y =(1+z)y \in A$. Moreover, $\pi((1+z)^{-1})=1 \in F_0$, so $(1+z)^{-1} \in A$ by the first assertion, and hence $x=(1+z)y \in A^\times$. ◻ **Corollary 9**. *Every strict unit in $B$ admits a square root if and only if every strict unit in $A$ admits a square root.* *Proof.* Strict units in $B$ and their square roots belong to $A$ by Lemma [Lemma 8](#lem:basic){reference-type="ref" reference="lem:basic"} because their images under $\pi$ are $\pm 1$, which belong to $F_0$.
◻ There exists a pseudo-angular component map $\operatorname{\mbox{\bf p.an}}:K^\times \rightarrow F^\times$ when every strict unit in the valuation ring $B$ admits a square root by [@FK Proposition 2.14]. Therefore, by the above corollary, if every strict unit in the pseudo-valuation domain $A$ admits a square root, then a pseudo-angular component map $\operatorname{\mbox{\bf p.an}}:K^\times \rightarrow F^\times$ exists. We fix a pseudo-angular component map $\operatorname{\mbox{\bf p.an}}$ in this section. Let $S$ be a subset of a quasi-quadratic $A$-module $M$. The smallest quasi-quadratic $A$-submodule of $M$ containing the set $S$ is called the *quasi-quadratic closure* of $S$ in $M$ and denoted by $\operatorname{\mbox{\rm cl}}_A(S;M)$. We have $$\operatorname{\mbox{\rm cl}}_A(S;M)=\Bigl\{\sum_{i=1}^n a_i^2s_i\;\Bigm|\; n \in \mathbb Z_{> 0},\ a_i \in A, \ s_i \in S\ (1 \leq i \leq n)\Bigr\}.$$ It is denoted by $\operatorname{\mbox{\rm cl}}_A(S)$ when $M=A$. The subscript $A$ is omitted when it is clear from the context. We construct a one-to-one correspondence between $\mathfrak X_A$ and a subset of $\prod_{g \in G, g \geq e} \mathfrak X_{F_0}^F$ step by step. We first define important notions. **Definition 10**.
For any $g \in G_{\geq e}$ and a subset $\mathcal M$ of $A$, we set $$M_g(\mathcal M)=\{\operatorname{\mbox{\bf p.an}}(x) \in F \setminus \{0\}\;|\; x \in \mathcal M \setminus \{0\}, \operatorname{\mbox{\bf val}}(x)=g\} \cup \{0\}.$$ For $g \in G_{\geq e}$ and a subset $M$ of $F$, we put $$\begin{aligned} \Psi_1(M,g)&=\{x \in A \setminus \{0\}\;|\; \overline{\operatorname{\mbox{\bf val}}(x)} = \overline{g} \text{ and } ((\operatorname{\mbox{\bf val}}(x)=g \text{ and }\operatorname{\mbox{\bf p.an}}(x) \in M )\\ &\qquad \text{ or } (\operatorname{\mbox{\bf val}}(x)>g \text{ and } \operatorname{\mbox{\bf p.an}}(x) \in \operatorname{\mbox{\rm cl}}_F(M)))\},\\ \Psi_2(M,g)&=\{x \in A \setminus \{0\}\;|\; \exists u \in \Psi_1(M,g) \text{ such that } \pm u \in \Psi_1(M,g)\\ &\qquad \text{ and }\operatorname{\mbox{\bf val}}(u)<\operatorname{\mbox{\bf val}}(x)\} \text{ and }\\ \Psi(M,g)&=\Psi_1(M,g) \cup \Psi_2(M,g) \cup \{0\}.\end{aligned}$$ We want to show that both $M_g(\mathcal M)$ and $\Psi(M,g)$ are quasi-quadratic modules. To this end, we prepare several lemmas. **Lemma 11**. *Let $A$ be a pseudo-valuation domain such that every strict unit admits a square root. Let $\mathcal M$ be a quasi-quadratic $A$-module in $A$. For any $g,h \in G$ with $g \geq e$ and $h>e$, and $0 \neq c \in M_g(\mathcal M)$, the inclusions $F_0^2c \subseteq M_g(\mathcal M)$ and $F^2c \subseteq M_{gh^2}(\mathcal M)$ hold true.* *Proof.* Fix $x \in \mathcal M$ with $\operatorname{\mbox{\bf val}}(x)=g$ and $\operatorname{\mbox{\bf p.an}}(x)=c$. We first show the inclusion $F_0^2c \subseteq M_g(\mathcal M)$. It is obvious that $0 \in M_g(\mathcal M)$. Fix an arbitrary element $k \in F_0^{\times}$. Take $u \in B^{\times}$ such that $\pi(u)=k$. We have $u \in A^{\times}$ by Lemma [Lemma 8](#lem:basic){reference-type="ref" reference="lem:basic"}. We get $u^2x \in \mathcal M$ and $\operatorname{\mbox{\bf p.an}}(u^2x)=k^2c$ by Definition [Definition 5](#def:pseudo){reference-type="ref" reference="def:pseudo"}(2).
It implies that $k^2c \in M_g(\mathcal M)$. The first inclusion has been demonstrated. We next demonstrate the inclusion $F^2c \subseteq M_{gh^2}(\mathcal M)$. It is obvious that $0 \in M_{gh^2}(\mathcal M)$. Fix an arbitrary element $k \in F^{\times}$. By Lemma [Lemma 6](#lem:lcp_basic1){reference-type="ref" reference="lem:lcp_basic1"}, there exists $u \in K^{\times}$ with $\operatorname{\mbox{\bf val}}(u)=h$ and $\operatorname{\mbox{\bf p.an}}(xu^2)=k^2c$. Since $h>e$, we have $u \in \mathfrak m \subset A$. Therefore, we have $xu^2 \in \mathcal M$. We obtain $k^2c \in M_{gh^2}(\mathcal M)$ because $\operatorname{\mbox{\bf val}}(xu^2)=gh^2$ and $\operatorname{\mbox{\bf p.an}}(xu^2)=k^2c$. ◻ **Lemma 12**. *Let $A$ be a pseudo-valuation domain such that $2$ is a unit and every strict unit admits a square root. Let $\mathcal M$ be a quasi-quadratic $A$-module in $A$. Assume that there exists $0 \neq u \in A$ with $\pm u \in \mathcal M$. Then, any element $x \in A$ with $\operatorname{\mbox{\bf val}}(x)>\operatorname{\mbox{\bf val}}(u)$ belongs to the quasi-quadratic module $\mathcal M$.* *Proof.* Set $y=x/u$. We have $y \in \mathfrak m$ because $\operatorname{\mbox{\bf val}}(y)>e$. Since $y \in \mathfrak m$, the elements $\dfrac{y+1}{2}$ and $\dfrac{y-1}{2}$ are elements in $A$. We have $x = uy = \left(\dfrac{y+1}{2}\right)^2u+\left(\dfrac{y-1}{2}\right)^2\cdot(-u) \in \mathcal M$. ◻ **Lemma 13**. *Let $A$ be a pseudo-valuation domain such that every strict unit admits a square root. Let $x_1$ and $x_2$ be nonzero elements in $A$. If $\operatorname{\mbox{\bf val}}(x_1)=\operatorname{\mbox{\bf val}}(x_2)$ and $\operatorname{\mbox{\bf p.an}}(x_1)=\operatorname{\mbox{\bf p.an}}(x_2)$, we have $x_2=x_1u^2$ for some $u \in A^\times$ with $\pi(u^2)=1$.* *Proof.* There exists $u \in B^\times$ with $x_2=x_1u^2$ and $\pi(u^2)=1$ by [@FK Lemma 2.15]. We have $\pi(u)=\pm 1 \in F_0$.
Lemma [Lemma 8](#lem:basic){reference-type="ref" reference="lem:basic"} implies that $u \in A^\times$. ◻ **Lemma 14**. *Let $A$ be a pseudo-valuation domain such that every strict unit admits a square root. Let $\mathcal M$ be a quasi-quadratic $A$-module in $A$. Let $x$ be a nonzero element in $\mathcal M$ and $y \in A$ with $y \neq 0$. We assume that $\operatorname{\mbox{\bf val}}(x)=\operatorname{\mbox{\bf val}}(y)$ and $\operatorname{\mbox{\bf p.an}}(x)=\operatorname{\mbox{\bf p.an}}(y)$. Then, we have $y \in \mathcal M$.* *Proof.* There exists $u\in A^\times$ such that $y=u^2x$ by Lemma [Lemma 13](#lem:basic4){reference-type="ref" reference="lem:basic4"}. It implies that $y \in \mathcal M$. ◻ We are now ready to demonstrate that both $M_g(\mathcal M)$ and $\Psi(M,g)$ are quasi-quadratic modules. **Proposition 15**. *Let $A$ be a pseudo-valuation domain such that every strict unit admits a square root. The set $M_g(\mathcal M)$ is a quasi-quadratic $F_0$-module in $F$ whenever $g \in G_{\geq e}$ and $\mathcal M$ is a quasi-quadratic $A$-module in $A$.* *Proof.* Set $M=M_g(\mathcal M)$ for simplicity. The proposition is obvious when $g \not\in \operatorname{\mbox{\bf val}}(\mathcal M)$. We consider the case in which $g \in \operatorname{\mbox{\bf val}}(\mathcal M)$. We have already demonstrated that $M$ is closed under multiplication by the squares of elements in $F_0$ in Lemma [Lemma 11](#lem:basic2){reference-type="ref" reference="lem:basic2"}. We have only to show that $M$ is closed under addition. Let $a,b$ be arbitrary elements in $M$. It is obvious that $a+b \in M$ when $a=0$, $b=0$ or $a+b=0$. Let us consider the other case. Take $x,y \in \mathcal M$ with $\operatorname{\mbox{\bf val}}(x)=\operatorname{\mbox{\bf val}}(y)=g$, $a=\operatorname{\mbox{\bf p.an}}(x)$ and $b=\operatorname{\mbox{\bf p.an}}(y)$.
We have $\operatorname{\mbox{\bf val}}(x+y)=g$ and $a+b=\operatorname{\mbox{\bf p.an}}(x)+\operatorname{\mbox{\bf p.an}}(y)=\operatorname{\mbox{\bf p.an}}(x+y)$ by Definition [Definition 5](#def:pseudo){reference-type="ref" reference="def:pseudo"}(4). It means that $a+b \in M$. ◻ **Proposition 16**. *Let $A$ be a pseudo-valuation domain such that $2$ is a unit and every strict unit admits a square root. The set $\Psi(M,g)$ is a quasi-quadratic $A$-module in $A$ whenever $g \in G_{\geq e}$ and $M$ is a quasi-quadratic $F_0$-module in $F$.* *Proof.* Set $\Psi_i=\Psi_i(M,g)$ for $i=1,2$ and $\Psi=\Psi(M,g)$ for simplicity. Note that $\operatorname{\mbox{\bf val}}(\Psi) \subseteq G_{\geq g} \cup \{\infty\}$. We first demonstrate that $\Psi$ is closed under addition. Take $x,y \in \Psi$. It is obvious that $x+y \in \Psi$ when $x=0$, $y=0$ or $x+y=0$. We consider the other case. Consider first the case in which $\operatorname{\mbox{\bf val}}(x) \neq \operatorname{\mbox{\bf val}}(y)$. We may assume that $\operatorname{\mbox{\bf val}}(x)<\operatorname{\mbox{\bf val}}(y)$. We have $\operatorname{\mbox{\bf val}}(x+y)=\operatorname{\mbox{\bf val}}(x)$ and $\operatorname{\mbox{\bf p.an}}(x+y)=\operatorname{\mbox{\bf p.an}}(x)$ by Definition [Definition 5](#def:pseudo){reference-type="ref" reference="def:pseudo"}(4). It is easy to check that $x+y \in \Psi_i$ when $x \in \Psi_i$ for $i=1,2$ in this case. We omit the details. The next target is the case in which $\operatorname{\mbox{\bf val}}(x)=\operatorname{\mbox{\bf val}}(y)$. We first consider the case in which $x \in \Psi_1 \setminus \Psi_2$. By the definition of $\Psi_2$ and the equality $\operatorname{\mbox{\bf val}}(x)=\operatorname{\mbox{\bf val}}(y)$, we get $y \not\in \Psi_2$. It implies that $y$ also belongs to $\Psi_1$. When $\operatorname{\mbox{\bf p.an}}(x)+\operatorname{\mbox{\bf p.an}}(y)=0$, it follows that $\operatorname{\mbox{\bf p.an}}(-x)=-\operatorname{\mbox{\bf p.an}}(x)=\operatorname{\mbox{\bf p.an}}(y)$. It is obvious that $\operatorname{\mbox{\bf val}}(-x)=\operatorname{\mbox{\bf val}}(x)$.
Therefore, the element $-x$ also belongs to $\Psi_1$ both when $\operatorname{\mbox{\bf val}}(y)=g$ and when $\operatorname{\mbox{\bf val}}(y)>g$. On the other hand, we have $\operatorname{\mbox{\bf val}}(x+y)>\operatorname{\mbox{\bf val}}(x)$ by Lemma [Lemma 7](#lem:lcp_basic2){reference-type="ref" reference="lem:lcp_basic2"}. They imply that $x+y \in \Psi_2 \subseteq \Psi$. When $\operatorname{\mbox{\bf p.an}}(x)+\operatorname{\mbox{\bf p.an}}(y) \neq 0$, Definition [Definition 5](#def:pseudo){reference-type="ref" reference="def:pseudo"}(4) implies that $\operatorname{\mbox{\bf val}}(x+y)=\operatorname{\mbox{\bf val}}(x)$ and $\operatorname{\mbox{\bf p.an}}(x+y)=\operatorname{\mbox{\bf p.an}}(x)+\operatorname{\mbox{\bf p.an}}(y)$. They imply that $x+y \in \Psi_1$ both when $\operatorname{\mbox{\bf val}}(x)=g$ and when $\operatorname{\mbox{\bf val}}(x)>g$. We next consider the case in which $x \in \Psi_2$. Since $\operatorname{\mbox{\bf val}}(x+y) \geq \operatorname{\mbox{\bf val}}(x)$, the sum $x+y$ also belongs to $\Psi_2$. We have proven that $\Psi$ is closed under addition. We next show that $\Psi$ is closed under multiplication by the square of an element in $A$. Take $a \in A$ and $x \in \Psi$. It is obvious that $a^2x \in \Psi$ when $a=0$ or $x=0$. Assume that $a$ and $x$ are nonzero. We first treat the case in which $x \in \Psi_1$. When $\operatorname{\mbox{\bf val}}(x)=g$ and $a \in A^\times$, we get $\operatorname{\mbox{\bf val}}(a^2x)=\operatorname{\mbox{\bf val}}(x)=g$ and $\operatorname{\mbox{\bf p.an}}(a^2x)=\pi_0(a)^2\operatorname{\mbox{\bf p.an}}(x) \in M$ by Definition [Definition 5](#def:pseudo){reference-type="ref" reference="def:pseudo"}(2) because $M$ is a quasi-quadratic $F_0$-module in $F$. In the other case, we obtain $\operatorname{\mbox{\bf val}}(a)>e$ or $\operatorname{\mbox{\bf val}}(x)>g$. We have $\operatorname{\mbox{\bf val}}(a^2x)>g$. There exists $k \in F$ such that $\operatorname{\mbox{\bf p.an}}(a^2x)=k^2\operatorname{\mbox{\bf p.an}}(x)$.
Since $\operatorname{\mbox{\bf p.an}}(x) \in \operatorname{\mbox{\rm cl}}_F(M)$, we get $\operatorname{\mbox{\bf p.an}}(a^2x) \in \operatorname{\mbox{\rm cl}}_F(M)$. It together with the equality $\overline{\operatorname{\mbox{\bf val}}(a^2x)}=\overline{g}$ implies that $a^2x \in \Psi_1$. The remaining case is the one in which $x \in \Psi_2$. However, it immediately follows from the definition of $\Psi_2$ that $a^2x \in \Psi_2$ because $\operatorname{\mbox{\bf val}}(a^2x) \geq \operatorname{\mbox{\bf val}}(x)$. ◻ The following structure theorem says that a quasi-quadratic $A$-module in $A$ is the union of quasi-quadratic modules of the form $\Psi(M,g)$. **Theorem 17**. *Let $A$ be a pseudo-valuation domain such that $2$ is a unit and every strict unit admits a square root. Consider a quasi-quadratic $A$-module $\mathcal M$ in $A$. The equality $$\mathcal M = \bigcup_{g \in G_{\geq e}}\Psi(M_g(\mathcal M),g)$$ holds true.* *Proof.* Set $\mathcal N=\bigcup_{g \in G_{\geq e}}\Psi(M_g(\mathcal M),g)$ for simplicity. The inclusion $\mathcal M \subseteq \mathcal N$ is obvious. We demonstrate the opposite inclusion. Take $x \in \mathcal N$. We obviously have $x \in \mathcal M$ when $x=0$. So, we concentrate on the other case. We have $x \in \Psi(M_g(\mathcal M),g)$ for some $g \in G_{\geq e}$. Set $M=M_g(\mathcal M)$ and $\Psi=\Psi(M,g)$. We also define $\Psi_1$ and $\Psi_2$ in the same manner. When $x \in \Psi_1$ and $\operatorname{\mbox{\bf val}}(x)=g$, there exists $y \in \mathcal M$ such that $\operatorname{\mbox{\bf val}}(x)=\operatorname{\mbox{\bf val}}(y)$ and $\operatorname{\mbox{\bf p.an}}(x)=\operatorname{\mbox{\bf p.an}}(y)$. It implies that $x \in \mathcal M$ by Lemma [Lemma 14](#lem:basic5){reference-type="ref" reference="lem:basic5"}. We then consider the case in which $x \in \Psi_1$ and $\operatorname{\mbox{\bf val}}(x)>g$. We have $\operatorname{\mbox{\bf val}}(x)=gh^2$ for some $h \in G_{>e}$ by the definition of $\Psi_1$.
There exist a positive integer $n$, elements $u_1, \ldots, u_n \in F$ and $v_1, \ldots, v_n \in M$ such that $\operatorname{\mbox{\bf p.an}}(x)=\sum_{i=1}^n u_i^2v_i$. We may assume without loss of generality that (\*) $\sum_{i=1}^ju_i^2v_i \neq 0$ for any $1 \leq j \leq n$. We can take $y_i \in \mathcal M$ with $\operatorname{\mbox{\bf val}}(y_i)=g$ and $v_i=\operatorname{\mbox{\bf p.an}}(y_i)$ for each $1 \leq i \leq n$ by the definition of $M$. We can choose $w_i \in K^\times$ such that $\operatorname{\mbox{\bf val}}(w_i)=h$ and $\operatorname{\mbox{\bf p.an}}(w_i^2y_i)=u_i^2v_i$ for each $1 \leq i \leq n$ by Lemma [Lemma 6](#lem:lcp_basic1){reference-type="ref" reference="lem:lcp_basic1"}. Since $\operatorname{\mbox{\bf val}}(w_i)=h>e$, we get $w_i \in \mathfrak m \subset A$. Put $z=\sum_{i=1}^n w_i^2y_i$; then $z \in \mathcal M$. On the other hand, applying Definition [Definition 5](#def:pseudo){reference-type="ref" reference="def:pseudo"}(4) inductively together with the assumption (\*), we get $\operatorname{\mbox{\bf val}}(z)=gh^2=\operatorname{\mbox{\bf val}}(x)$ and $\operatorname{\mbox{\bf p.an}}(z)=\sum_{i=1}^nu_i^2v_i=\operatorname{\mbox{\bf p.an}}(x)$. It implies that $x \in \mathcal M$ by Lemma [Lemma 14](#lem:basic5){reference-type="ref" reference="lem:basic5"}. The remaining case is the one in which $x \in \Psi_2$. By the definition of $\Psi_2$, there exists $u \in \Psi_1$ such that $\pm u\in\Psi_1$ and $\operatorname{\mbox{\bf val}}(u)<\operatorname{\mbox{\bf val}}(x)$. We have already demonstrated that any element in $\Psi_1$ belongs to $\mathcal M$. In particular, we have $\pm u \in \mathcal M$. We immediately get $x \in \mathcal M$ by Lemma [Lemma 12](#lem:basic3){reference-type="ref" reference="lem:basic3"}. ◻ **Proposition 18**. *Let $A$ be a pseudo-valuation domain such that $2$ is a unit and every strict unit admits a square root.
Consider a quasi-quadratic $A$-module $\mathcal M$ in $A$ and a nonzero element $x\in A$ with $\operatorname{\mbox{\bf val}}(x)=g$. Then the following conditions are equivalent:* 1. *$x\in \operatorname{\mbox{\rm supp}}(\mathcal M):=\mathcal M \cap (-\mathcal M)$;* 2. *$\pm\operatorname{\mbox{\bf p.an}}(x)\in M_g(\mathcal M)$.* *Furthermore, $M_h(\mathcal M)=F$ whenever $\pm\operatorname{\mbox{\bf p.an}}(x)\in M_g(\mathcal M)$ and $g<h$.* *Proof.* (1) $\Rightarrow$ (2): There is nothing to prove. \(2\) $\Rightarrow$ (1): It is immediate from Lemma [Lemma 14](#lem:basic5){reference-type="ref" reference="lem:basic5"}. We next demonstrate the 'furthermore' part. Take a nonzero element $c\in F$. We want to show that $c\in M_h(\mathcal M)$. By Definition [Definition 5](#def:pseudo){reference-type="ref" reference="def:pseudo"}(3), there exists an element $u\in K$ with $\operatorname{\mbox{\bf p.an}}(u)=c$ and $\operatorname{\mbox{\bf val}}(u)=h$. Since $\operatorname{\mbox{\bf val}}(u)>g\geq e$, we have $u\in \mathfrak m\subset A$. It follows from the assumption that $\pm x\in \mathcal M$. Hence we have $u\in \mathcal M$ by Lemma [Lemma 12](#lem:basic3){reference-type="ref" reference="lem:basic3"}. This implies that $c=\operatorname{\mbox{\bf p.an}}(u)\in M_h(\mathcal M)$, as desired. ◻ **Corollary 19**. *Let $A$ be a pseudo-valuation domain such that $2$ is a unit and every strict unit admits a square root. Consider a quasi-quadratic $A$-module $\mathcal M$ in $A$. Then the following conditions are equivalent:* 1. *$\mathcal M =A$;* 2. *$M_g(\mathcal M)=F$ for any $g\in G_{>e}$ and $M_e(\mathcal M)=F_0$;* 3. *$M_e(\mathcal M)=F_0$.* *Proof.* (1) $\Rightarrow$ (2): By the assumption, we have $\pm 1\in M_e(\mathcal M)$. It follows from Proposition [Proposition 18](#supp1){reference-type="ref" reference="supp1"} that $M_g(\mathcal M)=F$ for any $g>e$ and $1\in \operatorname{\mbox{\rm supp}}(\mathcal M)$.
Note that $M_e(\mathcal M)$ is a quasi-quadratic $F_0$-module in $F_0$ because we have $\operatorname{\mbox{\bf p.an}}(x)=\pi(x) \in F_0$ for any nonzero element $x \in A$ with $\operatorname{\mbox{\bf val}}(x)=e$. Take an element $d\in F_0$. We have $d=\Bigl(\dfrac{d+1}{2}\Bigr)^2\cdot 1+\Bigl(\dfrac{d-1}{2}\Bigr)^2\cdot (-1)\in M_e(\mathcal M)$. This implies that $M_e(\mathcal M)=F_0$. \(2\) $\Rightarrow$ (3): This is obvious. \(3\) $\Rightarrow$ (1): We get $\pm 1\in M_e(\mathcal M)$. By Proposition [Proposition 18](#supp1){reference-type="ref" reference="supp1"}, it follows that $1\in\operatorname{\mbox{\rm supp}}(\mathcal M)$. Hence we have $a=\Bigl(\dfrac{a+1}{2}\Bigr)^2\cdot 1+\Bigl(\dfrac{a-1}{2}\Bigr)^2\cdot (-1)\in \mathcal M$ for any $a\in A$. ◻ **Corollary 20**. *Let $A$ be a pseudo-valuation domain such that $2$ is a unit and every strict unit admits a square root. Consider a quasi-quadratic $A$-module $\mathcal M$ in $A$. Then the following assertions hold:* 1. *$\operatorname{\mbox{\rm supp}}(\mathcal M)=\{x\in A\setminus \{0\}\;|\;\pm\operatorname{\mbox{\bf p.an}}(x)\in M_{\operatorname{\mbox{\bf val}}(x)}(\mathcal M)\} \cup\{0\}$.* 2. *When $\mathcal M\neq \{0\}$, the quasi-quadratic $A$-module $\mathcal M$ becomes an ideal of $A$ if and only if $\pm\operatorname{\mbox{\bf p.an}}(x)\in M_{\operatorname{\mbox{\bf val}}(x)}(\mathcal M)$ for any nonzero $x\in \mathcal M$.* *Proof.* (1) It is obvious from Proposition [Proposition 18](#supp1){reference-type="ref" reference="supp1"}. \(2\) Assume that $\mathcal M$ is an ideal of $A$. Since $\operatorname{\mbox{\rm supp}}(\mathcal M)=\mathcal M$, the 'only if' part follows from (1). We demonstrate the opposite implication. Let $x$ be a nonzero element of $\mathcal M$. It is enough to prove that $ax\in \mathcal M$ for any $a\in A$. By the assumption we get $-\operatorname{\mbox{\bf p.an}}(x)\in M_{\operatorname{\mbox{\bf val}}(x)}(\mathcal M)$.
Hence it follows from Lemma [Lemma 14](#lem:basic5){reference-type="ref" reference="lem:basic5"} that $-x\in\mathcal M$. Thus we have $ax=\Bigl(\dfrac{a+1}{2}\Bigr)^2\cdot x+\Bigl(\dfrac{a-1}{2}\Bigr)^2\cdot (-x)\in \mathcal M$. ◻ We can show that an ideal of a pseudo-valuation domain is represented as a union of sets of the form $\Psi(F,g)$ for $g\in G_{\geq e}$. To see this, we first note that the following equalities hold true: $$\begin{aligned} \Psi_1(F,g)&=\{x \in A \setminus \{0\}\;|\; \overline{\operatorname{\mbox{\bf val}}(x)} = \overline{g} \text{ and } \operatorname{\mbox{\bf val}}(x)\geq g \},\\ \Psi_2(F,g)&=\{x \in A \setminus \{0\}\;|\; \exists u \in A\setminus\{0\} \text{ with } \overline{\operatorname{\mbox{\bf val}}(u)}=\overline{g} \text{ and } g\leq \operatorname{\mbox{\bf val}}(u)<\operatorname{\mbox{\bf val}}(x)\}.\end{aligned}$$ **Corollary 21**. *Let $A$ be a pseudo-valuation domain such that $2$ is a unit and every strict unit admits a square root. Let $I$ be an ideal of $A$. Set $S=\operatorname{\mbox{\bf val}}(I)$. Then the following assertions hold true:* (1) *If the set $S$ has no smallest element, we have $$I=\bigcup_{x\in I\setminus\{0\}}\Psi(F,\operatorname{\mbox{\bf val}}(x))\cup\{0\}.$$* (2) *If the set $S$ has the smallest element $m(I)$, we have $$I=\Psi(M_{m(I)}(I),m(I)).$$* *Proof.* (1) Assume that $I\neq \{0\}$. It can easily be seen that the ideal $I$ is included in the right-hand side of the equality. We next show the opposite inclusion. Take a nonzero element $y\in \Psi(F,\operatorname{\mbox{\bf val}}(x))$ for some $x\in I\setminus\{0\}$. We first consider the case in which $y\in \Psi_1(F,\operatorname{\mbox{\bf val}}(x))$. It follows that $\overline{\operatorname{\mbox{\bf val}}(y)}=\overline{\operatorname{\mbox{\bf val}}(x)}$ and $\operatorname{\mbox{\bf val}}(y)\geq \operatorname{\mbox{\bf val}}(x)$. Since $S$ has no smallest element, there exists an element $w\in I$ such that $\operatorname{\mbox{\bf val}}(x)>\operatorname{\mbox{\bf val}}(w)$.
It follows from Proposition [Proposition 18](#supp1){reference-type="ref" reference="supp1"} that $M_{\operatorname{\mbox{\bf val}}(y)}(I)=F$ because $\pm\operatorname{\mbox{\bf p.an}}(w)\in M_{\operatorname{\mbox{\bf val}}(w)}(I)$ by Corollary [Corollary 20](#supp3){reference-type="ref" reference="supp3"}. Since $\operatorname{\mbox{\bf p.an}}(y)$ belongs to $M_{\operatorname{\mbox{\bf val}}(y)}(I)$, there exists an element $z\in I$ such that $\operatorname{\mbox{\bf p.an}}(y)=\operatorname{\mbox{\bf p.an}}(z)$ and $\operatorname{\mbox{\bf val}}(y)=\operatorname{\mbox{\bf val}}(z)$. Hence we have $y\in I$ from Lemma [Lemma 14](#lem:basic5){reference-type="ref" reference="lem:basic5"}. We next consider the remaining case in which $y\in \Psi_2(F, \operatorname{\mbox{\bf val}}(x))$. There exists a nonzero $u\in A$ such that $\operatorname{\mbox{\bf val}}(x)\leq\operatorname{\mbox{\bf val}}(u)<\operatorname{\mbox{\bf val}}(y)$. Thus we have $M_{\operatorname{\mbox{\bf val}}(y)}(I)=F$ by Proposition [Proposition 18](#supp1){reference-type="ref" reference="supp1"} because $\pm \operatorname{\mbox{\bf p.an}}(x) \in M_{\operatorname{\mbox{\bf val}}(x)}(I)$. We get $y\in I$ by Lemma [Lemma 14](#lem:basic5){reference-type="ref" reference="lem:basic5"} and the definition of $M_{\operatorname{\mbox{\bf val}}(y)}(I)$ because $\operatorname{\mbox{\bf p.an}}(y) \in M_{\operatorname{\mbox{\bf val}}(y)}(I)$. We have shown the assertion (1). \(2\) By Theorem [Theorem 17](#thm:decomp){reference-type="ref" reference="thm:decomp"}, we see that the ideal $I$ includes the right hand side of the equality. We show the opposite inclusion. Take a nonzero element $x\in I$. There exists an element $x_0\in I$ such that $m(I)=\operatorname{\mbox{\bf val}}(x_0)$. When $\operatorname{\mbox{\bf val}}(x)=\operatorname{\mbox{\bf val}}(x_0)$, it is immediate that $x\in\Psi_1(M_{m(I)}(I),m(I))\subseteq \Psi(M_{m(I)}(I),m(I))$. 
We next suppose that $\operatorname{\mbox{\bf val}}(x)>\operatorname{\mbox{\bf val}}(x_0)$. It follows from Corollary [Corollary 20](#supp3){reference-type="ref" reference="supp3"}(2) that $\pm x_0\in\Psi_1(M_{m(I)}(I),m(I))$. Thus we have $x\in \Psi_2(M_{m(I)}(I),m(I))\subseteq\Psi(M_{m(I)}(I),m(I))$. ◻ The set $\mathfrak X_F$ is naturally a subset of $\mathfrak X_{F_0}^F$. We want to show that there is a bijection between $\mathfrak X_A$ and $$\begin{aligned} \mathfrak S_{F_0}^F&:=\Bigl\{(M_g)_{g \in G_{\geq e}} \in \prod_{g \in G_{\geq e}} \mathfrak X_{F_0}^F \Bigm| (M_g)_{g \in G_{\geq e}} \text{ satisfies the following conditions (i) through (iii)}\Bigr\}.\end{aligned}$$ (i) $M_e \subseteq F_0$; (ii) $\operatorname{\mbox{\rm cl}}_F(M_g) \subseteq M_h$ whenever $e \leq g < h$ and $\overline{g}=\overline{h}$; (iii) $M_h=F$ whenever there exist $g \in G_{\geq e}$ and $u \in F\setminus\{0\}$ with $e \leq g<h$ and $\pm u \in M_g$. For that purpose, we first demonstrate the following proposition: **Proposition 22**. *Let $A$ be a pseudo-valuation domain such that $2$ is a unit and every strict unit admits a square root. The union $\bigcup_{g\in G_{\geq e}}\Psi(M_{g},g)$ is a quasi-quadratic $A$-module in $A$ for each $(M_g)_{g \in G_{\geq e}} \in \mathfrak S_{F_0}^F$.* *Proof.* We have only to demonstrate that $\mathcal M=\bigcup_{g\in G_{\geq e}}\Psi(M_{g},g)$ is closed under addition and multiplication by the square of elements in $A$. The closedness under multiplication by the square immediately follows from Proposition [Proposition 16](#prop:Psi){reference-type="ref" reference="prop:Psi"}. We demonstrate the closedness under addition. Let $x_1,x_2$ be elements in $\mathcal M$. We prove that $x_1+x_2 \in \mathcal M$. It is obvious when at least one of $x_1$, $x_2$ and $x_1+x_2$ is zero. So we assume that they are nonzero elements. Take $g_1,g_2 \in G_{\geq e}$ with $x_i \in \Psi(M_{g_i},g_i)$ for $i=1,2$.
When $\operatorname{\mbox{\bf val}}(x_1) \neq \operatorname{\mbox{\bf val}}(x_2)$, we may assume that $\operatorname{\mbox{\bf val}}(x_1)<\operatorname{\mbox{\bf val}}(x_2)$. We have $\operatorname{\mbox{\bf val}}(x_1+x_2)=\operatorname{\mbox{\bf val}}(x_1)$ and $\operatorname{\mbox{\bf p.an}}(x_1+x_2)=\operatorname{\mbox{\bf p.an}}(x_1)$ by Definition [Definition 5](#def:pseudo){reference-type="ref" reference="def:pseudo"}(4). These equalities imply that $x_1+x_2 \in \Psi(M_{g_1},g_1)$. We next consider the case in which $\operatorname{\mbox{\bf val}}(x_1)=\operatorname{\mbox{\bf val}}(x_2)$. When $g_1=g_2$, we immediately get $x_1+x_2 \in \Psi(M_{g_1},g_1)$ by Proposition [Proposition 16](#prop:Psi){reference-type="ref" reference="prop:Psi"}. Consider the case in which $g_1 \neq g_2$. We demonstrate that $x_i \in \Psi(M_{g_{3-i}},g_{3-i})$ for some $i \in \{1,2\}$. Once this is proven, we obviously have $x_1+x_2 \in \Psi(M_{g_{3-i}},g_{3-i})$ by Proposition [Proposition 16](#prop:Psi){reference-type="ref" reference="prop:Psi"}, and the proof is complete. Set $h=\operatorname{\mbox{\bf val}}(x_1)=\operatorname{\mbox{\bf val}}(x_2)$. We first consider the case in which $x_1 \in \Psi_1(M_{g_1},g_1)$ and $x_2 \in \Psi_1(M_{g_2},g_2)$. We may assume that $g_1<g_2$ without loss of generality. Since $\operatorname{\mbox{\rm cl}}_F(M_{g_1}) \subseteq M_{g_2}$ by the assumption, we have $\operatorname{\mbox{\bf p.an}}(x_1) \in M_{g_2}$. It implies that $x_1 \in \Psi_1(M_{g_2},g_2) \subseteq \Psi(M_{g_2},g_2)$. The remaining case is the one in which at least one of $x_1 \in \Psi_2(M_{g_1},g_1)$ and $x_2 \in \Psi_2(M_{g_2},g_2)$ holds. We may assume that $x_1 \in \Psi_2(M_{g_1},g_1)$ without loss of generality. By the definition of $\Psi_2(M_{g_1},g_1)$, there exists $u \in \Psi_1(M_{g_1},g_1)$ with $\pm u \in \Psi_1(M_{g_1},g_1)\subseteq \Psi(M_{g_1},g_1)$ and $\operatorname{\mbox{\bf val}}(u)<h$. Recall that $\operatorname{\mbox{\bf val}}(x_2)=h$ by the assumption.
We have $x_2 \in \Psi_2(M_{g_1},g_1) \subseteq \Psi(M_{g_1},g_1)$ using the definition of $\Psi_2(M_{g_1},g_1)$ again. ◻ We are now ready to demonstrate our main theorem. **Theorem 23**. *Let $A$ be a pseudo-valuation domain such that $2$ is a unit and every strict unit admits a square root. Consider a quasi-quadratic $A$-module $\mathcal M$ in $A$. We define the map $\sigma:\mathfrak X_A \rightarrow \mathfrak S_{F_0}^F$ by $$\sigma(\mathcal M)=(M_{g}(\mathcal M))_{g \in G_{\geq e}}.$$ The map $\sigma$ is a bijection.* *Proof.* We first show that $\sigma$ is well-defined. We have to check that the family $\sigma(\mathcal M)$ satisfies the conditions (i) through (iii) in the definition of $\mathfrak S_{F_0}^F$. Condition (i) follows immediately from Definition [Definition 5](#def:pseudo){reference-type="ref" reference="def:pseudo"}(1), the definition of $M_e(\mathcal M)$ and the fact that $\pi(A) =F_0$. We consider the condition (ii). By Proposition [Proposition 15](#prop:M){reference-type="ref" reference="prop:M"}, $M_g(\mathcal M)$ is a quasi-quadratic $F_0$-module in $F$ for any $g\in G_{\geq e}$. Take elements $g, h\in G_{\geq e}$ with $g<h$ and $\overline{g}=\overline{h}$. There exists an element $h_1\in G_{>e}$ such that $h=gh_1^2$. It follows from Lemma [Lemma 11](#lem:basic2){reference-type="ref" reference="lem:basic2"} that $F^2c \subseteq M_{gh_1^2}(\mathcal M)=M_h(\mathcal M)$ for any $c\in M_g(\mathcal M)$. This implies that $\operatorname{\mbox{\rm cl}}_F(M_g(\mathcal M))\subseteq M_h(\mathcal M)$. Next we check that $\sigma(\mathcal M)$ satisfies the condition (iii). Assume that there exist $g,h\in G_{\geq e}$ and a nonzero $u\in F$ with $g<h$ and $\pm u\in M_g(\mathcal M)$. There exists an element $x\in\mathcal M$ such that $\operatorname{\mbox{\bf p.an}}(x)=u$ and $\operatorname{\mbox{\bf val}}(x)=g$.
Since $\pm\operatorname{\mbox{\bf p.an}}(x)=\pm u\in M_g(\mathcal M)$, we have $M_h(\mathcal M)=F$ from Proposition [Proposition 18](#supp1){reference-type="ref" reference="supp1"}, as desired. We define the map $\rho:\mathfrak S_{F_0}^F \rightarrow \mathfrak X_{A}$ by $$\rho\left((M_{g})_{g\in G_{\geq e}}\right) =\bigcup_{g\in G_{\geq e}}\Psi(M_{g},g)\text{.}$$ The map $\rho$ is well-defined by Proposition [Proposition 22](#prop:sigma_inverse){reference-type="ref" reference="prop:sigma_inverse"}. The composition $\rho \circ \sigma$ is the identity map by Theorem [Theorem 17](#thm:decomp){reference-type="ref" reference="thm:decomp"}. We demonstrate that $\sigma \circ \rho$ is also the identity map. Fix $g \in G_{\geq e}$. Let $N$ be the $g$-th coordinate of $\sigma \circ \rho\left((M_g)_{g\in G_{\geq e}}\right)$. We want to show that $N=M_{g}$. We first demonstrate the inclusion $N \subseteq M_g$. We have $$\begin{aligned} N &= M_{g}\left(\bigcup_{h\in G_{\geq e}}\Psi(M_{h}, h)\right)\\ &=\left\{\operatorname{\mbox{\bf p.an}}(x)\;\Bigl|\; x \in \bigcup_{h\in G_{\geq e}}\Psi(M_{h},h) \setminus\{0\} \text{ and } \operatorname{\mbox{\bf val}}(x)=g\right\} \cup \{0\}.\end{aligned}$$ Take an arbitrary nonzero element $x \in A$ with $\operatorname{\mbox{\bf p.an}}(x) \in N$ and $\operatorname{\mbox{\bf val}}(x)=g$. We also take $h \in G_{\geq e}$ such that $x \in \Psi(M_{h},h)$. If $x \in \Psi_2(M_{h},h)$, there exists $y\in\Psi_1(M_h,h)$ with $\pm y \in\Psi_1(M_h,h)$ and $\operatorname{\mbox{\bf val}}(y)<\operatorname{\mbox{\bf val}}(x)$. We first consider the case in which $\operatorname{\mbox{\bf val}}(y)=h$. It follows that $\pm\operatorname{\mbox{\bf p.an}}(y)\in M_h$. Since $h=\operatorname{\mbox{\bf val}}(y)<\operatorname{\mbox{\bf val}}(x)=g$, we have $M_g=F$ from the condition (iii) in the definition of $\mathfrak{S}_{F_0}^F$. The inclusion $N \subseteq M_g$ is obvious in this case. 
We next consider the remaining case in which $\operatorname{\mbox{\bf val}}(y)>h$. Recall that we have $\operatorname{\mbox{\bf val}}(y)=\overline{h}$ from the definition of $\Psi_1(M_h,h)$. We get $\pm\operatorname{\mbox{\bf p.an}}(y)\in\operatorname{\mbox{\rm cl}}_F(M_h)\subseteq M_{\operatorname{\mbox{\bf val}}(y)}$ by the condition (ii). This, together with $\operatorname{\mbox{\bf val}}(y)<g$ and the condition (iii), implies that $M_g=F$, as desired. When $x \in \Psi_1(M_{h},h)$, we have $\operatorname{\mbox{\bf p.an}}(x) \in M_h=M_g$ if $g=h$ and $\operatorname{\mbox{\bf p.an}}(x) \in \operatorname{\mbox{\rm cl}}_F(M_h) \subseteq M_g$ otherwise. We demonstrate the opposite inclusion. Take a nonzero element $c \in M_{g}$. There exists a nonzero element $w \in B$ with $\operatorname{\mbox{\bf val}}(w)=g$ and $\operatorname{\mbox{\bf p.an}}(w)=c$ by Definition [Definition 5](#def:pseudo){reference-type="ref" reference="def:pseudo"}(3). We have $\pi(w)=c \in M_e \subseteq F_0$ by the condition (i) when $g=e$ and $\pi(w)=0 \in F_0$ otherwise. These imply that $w \in A$ by Lemma [Lemma 8](#lem:basic){reference-type="ref" reference="lem:basic"}. Since $w\in\Psi(M_g,g)$, we have $c \in N$. This completes the proof that $\sigma$ and $\rho$ are mutually inverse. ◻ We next consider the intersection and the sum of two quasi-quadratic modules. **Theorem 24**. *Let $A$ be a pseudo-valuation domain such that $2$ is a unit and any strict units admit a square root. Consider quasi-quadratic $A$-modules $\mathcal M_1$ and $\mathcal M_2$ in $A$. Set $M_{1,g}=M_g(\mathcal M_1)$ and $M_{2,g}=M_g(\mathcal M_2)$ for $g \in G_{\geq e}$. For each $g \in G_{\geq e}$, the following equalities hold true:* 1. *$M_g(\mathcal M_1 \cap \mathcal M_2)=M_{1,g} \cap M_{2,g}$.* 2. *$M_g(\mathcal M_1+\mathcal M_2)= \left\{\begin{array}{ll} F & \text{when } \exists h \in G_{\geq e} \text{ with } h<g \\ & \qquad\quad\!\! 
\text{and } M_{1,h} \cap (-M_{2,h}) \neq \{0\},\\ M_{1,g}+M_{2,g} & \text{elsewhere.} \end{array}\right.$* *Proof.* (1) The inclusion $M_g(\mathcal M_1 \cap \mathcal M_2) \subseteq M_{1,g} \cap M_{2,g}$ is obvious. We demonstrate the opposite inclusion. Take an arbitrary nonzero element $a \in M_{1,g} \cap M_{2,g}$. Since $a \in M_{i,g}$, there exists $w_i \in \mathcal M_i$ such that $\operatorname{\mbox{\bf val}}(w_i)=g$ and $\operatorname{\mbox{\bf p.an}}(w_i)=a$ for $i=1,2$. Lemma [Lemma 14](#lem:basic5){reference-type="ref" reference="lem:basic5"} implies that $w_2$ is an element of $\mathcal M_1$. We have $a \in M_g(\mathcal M_1 \cap \mathcal M_2)$ because $w_2 \in \mathcal M_1 \cap \mathcal M_2$. \(2\) We first consider the case in which there exists $h \in G_{\geq e}$ with $h<g$ and $M_{1,h} \cap (-M_{2,h}) \neq \{0\}$. Take a nonzero element $a \in M_{1,h} \cap (-M_{2,h})$. There exist $w_1 \in \mathcal M_1$ and $w_2 \in \mathcal M_2$ with $\operatorname{\mbox{\bf val}}(w_1)=\operatorname{\mbox{\bf val}}(w_2)=h$ and $\operatorname{\mbox{\bf p.an}}(w_1)=-\operatorname{\mbox{\bf p.an}}(w_2)=a$. By Lemma [Lemma 14](#lem:basic5){reference-type="ref" reference="lem:basic5"}, we get $-w_2 \in \mathcal M_1$. It implies that $\pm w_2$ belongs to $\mathcal M_1+\mathcal M_2$. This means that $w_2\in\operatorname{\mbox{\rm supp}}(\mathcal M_1+\mathcal M_2)$. Thus it follows from Proposition [Proposition 18](#supp1){reference-type="ref" reference="supp1"} that $M_g(\mathcal M_1+\mathcal M_2)=F$. We next consider the remaining case. The inclusion $M_{1,g}+M_{2,g} \subseteq M_g(\mathcal M_1+\mathcal M_2)$ immediately follows from Definition [Definition 5](#def:pseudo){reference-type="ref" reference="def:pseudo"}(4). We demonstrate the opposite inclusion. Take a nonzero element $a \in M_g(\mathcal M_1+\mathcal M_2)$ and an element $w \in \mathcal M_1+\mathcal M_2$ with $\operatorname{\mbox{\bf val}}(w)=g$ and $\operatorname{\mbox{\bf p.an}}(w)=a$ as usual. 
There exist $w_1 \in \mathcal M_1$ and $w_2 \in \mathcal M_2$ with $w=w_1+w_2$. It is obvious that $a \in M_{1,g}+M_{2,g}$ when either $w_1=0$ or $w_2=0$. Therefore, we assume that both $w_1$ and $w_2$ are nonzero. We may assume that $\operatorname{\mbox{\bf val}}(w_1) \leq \operatorname{\mbox{\bf val}}(w_2)$ without loss of generality. Set $h=\operatorname{\mbox{\bf val}}(w_1)$. We have $g=\operatorname{\mbox{\bf val}}(w_1+w_2)\geq\min(\operatorname{\mbox{\bf val}}(w_1), \operatorname{\mbox{\bf val}}(w_2))=h$. Assume for contradiction that $h<g$. We have $\operatorname{\mbox{\bf val}}(w_1)=\operatorname{\mbox{\bf val}}(w_2)$ because $\operatorname{\mbox{\bf val}}(w_1+w_2)>\operatorname{\mbox{\bf val}}(w_1)$. We also have $\operatorname{\mbox{\bf p.an}}(w_2)=-\operatorname{\mbox{\bf p.an}}(w_1)$ by Definition [Definition 5](#def:pseudo){reference-type="ref" reference="def:pseudo"}(4). We get $-w_1 \in \mathcal M_2$ by Lemma [Lemma 14](#lem:basic5){reference-type="ref" reference="lem:basic5"}. We have $\operatorname{\mbox{\bf p.an}}(w_1) \in M_{1,h} \cap (-M_{2,h})$, which contradicts the assumption. We have demonstrated that $h=g$. We have either $\operatorname{\mbox{\bf val}}(w_1)<\operatorname{\mbox{\bf val}}(w_2)$ or $\operatorname{\mbox{\bf val}}(w_1)=\operatorname{\mbox{\bf val}}(w_2)$. In the former case, we obviously have $a=\operatorname{\mbox{\bf p.an}}(w)=\operatorname{\mbox{\bf p.an}}(w_1+w_2)=\operatorname{\mbox{\bf p.an}}(w_1) \in M_{1,g} \subseteq M_{1,g}+M_{2,g}$ by Definition [Definition 5](#def:pseudo){reference-type="ref" reference="def:pseudo"}(4). In the latter case, we have $\operatorname{\mbox{\bf p.an}}(w_1)+\operatorname{\mbox{\bf p.an}}(w_2) \neq 0$ by Lemma [Lemma 7](#lem:lcp_basic2){reference-type="ref" reference="lem:lcp_basic2"} and the equality $\operatorname{\mbox{\bf val}}(w_1)=\operatorname{\mbox{\bf val}}(w_1+w_2)$. 
We have $a=\operatorname{\mbox{\bf p.an}}(w)=\operatorname{\mbox{\bf p.an}}(w_1)+\operatorname{\mbox{\bf p.an}}(w_2) \in M_{1,g}+M_{2,g}$ by Definition [Definition 5](#def:pseudo){reference-type="ref" reference="def:pseudo"}(4). ◻ # Special cases. {#sec:example} ## Examples. {#subsec:example} We consider the correspondence between quasi-quadratic modules and cones in some special cases. *Example 25*. Let $(E,<)$ be a Euclidean field, which is an ordered field in which every positive element is a square. Let $V$ be an $E$-vector space. A subset $S$ of $V$ is *convex* if $\lambda x + (1-\lambda)y \in S$ whenever $x,y \in S$ and $\lambda \in E$ with $0 \leq \lambda \leq 1$. A subset $S$ of $V$ is a *ray* if $ax \in S$ whenever $a$ is a nonnegative element of $E$ and $x \in S$. A subset $Q$ of $V$ is a quasi-quadratic $E$-module in $V$ if and only if it is a convex ray. It is obvious that $\sum E^2=E_{\geq 0}$. Using this fact, the equivalence is straightforward to verify. Indeed, let $Q$ be a quasi-quadratic $E$-module in $V$. We first show that $Q$ is a ray. Take a positive element $a\in E$ and an element $x\in Q$. There exists an element $b\in E$ such that $a=b^2$. Since $Q$ is a quasi-quadratic $E$-module in $V$, we have $ax\in Q$. The case $a=0$ is trivial because $0\in Q$. Next we show that $Q$ is convex. Take elements $x,y\in Q$. We have $1\cdot x+0\cdot y=x\in Q$ and $0\cdot x+1\cdot y=y\in Q$. Take an element $\lambda\in E$ with $0<\lambda<1$. Since $\lambda\in E^2$ and $1-\lambda \in E^2$, we get $\lambda x\in Q$ and $(1-\lambda)y\in Q$. Hence we have $\lambda x+(1-\lambda)y\in Q$. We demonstrate the opposite implication. Assume that $Q$ is a convex ray. Take elements $x,y\in Q$. It follows that $a^2 x\in Q$ for any $a\in E$ because $a^2$ is nonnegative. Since $Q$ is convex and $0<\dfrac{1}{2}<1$ in $E$, we get $\dfrac{1}{2}x+\Bigl(1-\dfrac{1}{2}\Bigl)y=\dfrac{x+y}{2}\in Q$. Thus we have $x+y\in Q$ because the element $2$ is positive in $E$ and $Q$ is a ray. 
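The equivalence in Example 25 can be spot-checked numerically over $E=\mathbb R$ (a Euclidean field) with $V=\mathbb R^2$. The following sketch verifies the two defining closures of a quasi-quadratic module, together with convexity, on random samples drawn from a closed cone; the cone parameters and the numerical tolerance are assumptions of the demo, not data from the text.

```python
import math
import random

# Closed cone of directions [T1, T1 + T2] in R^2, with T2 <= pi
# (arbitrary parameters chosen for the demo).
T1, T2 = 0.3, 2.0

def in_cone(v, eps=1e-9):
    x, y = v
    if abs(x) < eps and abs(y) < eps:
        return True  # 0 belongs to every quasi-quadratic module
    t = math.atan2(y, x) % (2 * math.pi)
    return T1 - eps <= t <= T1 + T2 + eps

def sample():
    t = random.uniform(T1, T1 + T2)
    r = random.uniform(0.1, 5.0)
    return (r * math.cos(t), r * math.sin(t))

random.seed(0)
for _ in range(1000):
    u, v = sample(), sample()
    a = random.uniform(-3, 3)
    lam = random.uniform(0, 1)
    # (i) closed under addition
    assert in_cone((u[0] + v[0], u[1] + v[1]))
    # (ii) closed under multiplication by squares a^2
    assert in_cone((a * a * u[0], a * a * u[1]))
    # convexity, as in Example 25
    assert in_cone((lam * u[0] + (1 - lam) * v[0],
                    lam * u[1] + (1 - lam) * v[1]))
print("closure checks passed")
```

The same predicate rejects vectors outside the angular sector, so the cone is a proper subset of $\mathbb R^2$ rather than the trivial module.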
Let us recall a fundamental example introduced in Section 1. *Example 26*. We consider the pseudo-valuation domain $$A=\left\{\left.\sum_{l=0}^\infty a_lX^l \in \mathbb C[\![X]\!]\;\right|\; a_0 \in \mathbb R\right\}.$$ We have $B=\mathbb C[\![X]\!]$, $F=\mathbb C$, $F_0=\mathbb R$ and $G=(\mathbb Z,+)$ in this case. The pseudo-angular component map $\operatorname{\mbox{\bf p.an}}$ is defined by taking the coefficient of the term of minimum degree. Since $\mathbb C$ is algebraically closed, any element is the square of some element. Therefore, quasi-quadratic $\mathbb C$-modules in $\mathbb C$ are trivial ones, that is, $\{0\}$ and $\mathbb C$. Under the identification of $\mathbb C$ with $\mathbb R\times \mathbb R$, only three types of quasi-quadratic $\mathbb R$-modules in $\mathbb C$ are possible by Example [Example 25](#ex:ex1){reference-type="ref" reference="ex:ex1"}. (i) Trivial quasi-quadratic modules, that is, $\{0\}$ and $\mathbb C$; (ii) Lines. For any $z \in \mathbb C$ with $|z|=1$, set $M_{\text{line}}(z)=\{tz \in \mathbb C\;|\; t \in \mathbb R\}$; (iii) Convex fans. For any $\theta_1,\theta_2$ with $0 \leq \theta_2 \leq \pi$, set $$\begin{aligned} &M_{\text{fan,oo}}(\theta_1,\theta_2)=\{0\} \cup \{re^{i\theta} \in \mathbb C\;|\; r>0,\ \theta_1 < \theta < \theta_1+\theta_2\}, \\ &M_{\text{fan,oc}}(\theta_1,\theta_2)=\{0\} \cup \{re^{i\theta} \in \mathbb C\;|\; r>0,\ \theta_1 < \theta \leq \theta_1+\theta_2\}, \\ &M_{\text{fan,co}}(\theta_1,\theta_2)=\{0\} \cup \{re^{i\theta} \in \mathbb C\;|\; r>0,\ \theta_1 \leq \theta < \theta_1+\theta_2\}, \\ &M_{\text{fan,cc}}(\theta_1,\theta_2)=\{0\} \cup \{re^{i\theta} \in \mathbb C\;|\; r>0,\ \theta_1 \leq \theta \leq \theta_1+\theta_2\}.\end{aligned}$$ The notation $i$ denotes the imaginary unit. The subscript 'o' means open, and 'c' means closed. The case in which $\theta_2=0$ is allowed only for $M_{\text{fan,cc}}(\theta_1,\theta_2)$. 
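The objects of Example 26 are easy to experiment with on truncated power series. The sketch below implements $\operatorname{\bf val}$ and $\operatorname{\bf p.an}$ for truncations of elements of $A$ and checks, on random samples, that the set of $x\in A$ with either $\operatorname{\bf val}(x)=1$ and $\operatorname{\bf p.an}(x)\in M_{\text{fan,cc}}(0,\pi)$, or $\operatorname{\bf val}(x)>1$, behaves like a quasi-quadratic $A$-module; the truncation order and the choice $m=1$ are assumptions of the demo.

```python
import random

N = 8  # truncation order: a series is a list [a_0, ..., a_{N-1}] (assumption)

def val(f):
    for k, c in enumerate(f):
        if c != 0:
            return k
    return None  # the zero (truncated) series

def pan(f):
    v = val(f)
    return None if v is None else f[v]  # pseudo-angular component

def add(f, g):
    return [a + b for a, b in zip(f, g)]

def mul(f, g):
    h = [0j] * N
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j < N:
                h[i + j] += a * b
    return h

def rand_A():
    # random element of A: the constant coefficient must be real
    f = [complex(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(N)]
    f[0] = complex(f[0].real, 0.0)
    return f

M_INDEX = 1  # m = minimum valuation of the module (demo choice)

def in_M(f):
    # module of type (a): M_m is the closed upper half-plane M_fan,cc(0, pi)
    v = val(f)
    if v is None or v > M_INDEX:
        return True
    return v == M_INDEX and pan(f).imag >= 0

def rand_M():
    f = rand_A()
    f[0] = 0.0                       # force val >= 1
    if f[1].imag < 0:
        f[1] = f[1].conjugate()      # push p.an into the upper half-plane
    return f

random.seed(1)
for _ in range(500):
    x, y, a = rand_M(), rand_M(), rand_A()
    assert in_M(add(x, y))           # closed under addition
    assert in_M(mul(mul(a, a), x))   # closed under multiplication by squares
print("quasi-quadratic module checks passed")
```

Note that `in_M` encodes exactly the description of type (a) modules given below: membership is decided by the valuation and, on the minimal level, by the fan containing the pseudo-angular component.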
![ $M_{\text{fan, oo}}(\theta_1,\theta_2)$, $M_{\text{fan, oc}}(\theta_1,\theta_2)$, $M_{\text{fan, co}}(\theta_1,\theta_2)$ and $M_{\text{fan, cc}}(\theta_1,\theta_2)$. ](fan_oo.pdf "fig:") ![](fan_oc.pdf "fig:") ![](fan_co.pdf "fig:") ![](fan_cc.pdf "fig:") Let $\mathcal M$ be a nonzero quasi-quadratic $A$-module in $A$. Set $m=\min\operatorname{\mbox{\bf val}}(\mathcal M)$. Since $\operatorname{\mbox{\rm cl}}_{\mathbb C}(M)=\mathbb C$ for any nonzero quasi-quadratic $\mathbb R$-module $M$ in $\mathbb C$, by Theorem [Theorem 17](#thm:decomp){reference-type="ref" reference="thm:decomp"}, the quasi-quadratic $A$-module $\mathcal M$ in $A$ is one of the following forms: 1. When $M_m(\mathcal M)$ is either $\mathbb C$, a line (in the case (ii)) or $M_{\text{fan,cc}}(\theta,\pi)$ for some $\theta \in \mathbb R$, we have $M_n(\mathcal M)=\mathbb C$ for all $n>m$ and $$\begin{aligned} \mathcal M= &\{x \in A\;|\; (\operatorname{\mbox{\bf val}}(x)=m \text{ and } \operatorname{\mbox{\bf p.an}}(x) \in M_m(\mathcal M)) \text{ or } \operatorname{\mbox{\bf val}}(x)>m\}.\end{aligned}$$ 2. 
Otherwise, $M_{m+1}(\mathcal M)$ is possibly any one of (i) through (iii), $M_n(\mathcal M)=\mathbb C$ for all $n>m+1$ and $$\begin{aligned} \mathcal M= &\{x \in A\;|\; (\operatorname{\mbox{\bf val}}(x)=m \text{ and } \operatorname{\mbox{\bf p.an}}(x) \in M_m(\mathcal M)) \\ & \qquad \text{ or } (\operatorname{\mbox{\bf val}}(x)=m+1 \text{ and } \operatorname{\mbox{\bf p.an}}(x) \in M_{m+1}(\mathcal M)) \\ & \qquad \text{ or } \operatorname{\mbox{\bf val}}(x)>m+1\}.\end{aligned}$$ ## Finitely generated quasi-quadratic modules of the pseudo-valuation ring. {#subsec:finite-qq-mod} In this subsection, we only consider the pseudo-valuation ring $$A=\left\{\left.\sum_{l=0}^\infty a_lX^l \in \mathbb C[\![X]\!]\;\right|\; a_0 \in \mathbb R\right\}.$$ We give a necessary and sufficient condition for a quasi-quadratic module of the pseudo-valuation ring $A$ to be finitely generated as a quasi-quadratic $A$-module. **Lemma 27**. *Let $A$ be as above. Then the valuation ring $\mathbb{C}[\![X]\!]$ of $\mathbb{C}(\!(X)\!)$ is finitely generated as a quasi-quadratic $A$-module.* *Proof.* We demonstrate that $\mathbb{C}[\![X]\!]=(\pm 1)A^2+(\pm i)A^2+(\pm X)A^2+(\pm iX)A^2$. It immediately follows that the right-hand side of the equality is included in the left-hand side. We demonstrate the opposite inclusion. Take a nonzero element $f=\sum_{k=0}^{\infty}c_kX^k\in \mathbb{C}[\![X]\!]$. When $\operatorname{\mbox{\bf val}}(f)=2m$ for some $m\geq 0$, there exists a nonzero element $g \in A$ such that $f=c_{2m}g^2$. We can take real numbers $\alpha$, $\beta\in\mathbb{R}$ with $c_{2m}=\alpha+i\beta$. Hence we have $f=\alpha g^2+i(\beta g^2)$. Since $\alpha, \beta\in \mathbb{R}^2\cup(-\mathbb{R}^2)$, the element $f$ is included in the right-hand side of the equality. When $\operatorname{\mbox{\bf val}}(f)=2m+1$ for some $m\geq 0$, there exists a nonzero element $h\in A$ such that $f=c_{2m+1}X^{2m+1}h^2$. 
We can take real numbers $\gamma$, $\delta\in\mathbb{R}$ with $c_{2m+1}=\gamma+i\delta$. Hence we have $f=\gamma X (X^{m}h)^2+i\bigl(\delta X(X^{m}h)^2\bigl)$. Since $\gamma, \delta\in \mathbb{R}^2\cup(-\mathbb{R}^2)$, the element $f$ is included in the right-hand side of the equality. ◻ **Lemma 28**. *Let $A$ be as above. With the same notation as in Example [Example 26](#ex:ex2){reference-type="ref" reference="ex:ex2"}, the following statements hold true:* (1) *Convex fans $M_{\text{\rm fan,oo}}(\theta_1, \theta_2)$, $M_{\text{\rm fan,oc}}(\theta_1, \theta_2)$ and $M_{\text{\rm fan,co}}(\theta_1, \theta_2)$ for any $\theta_1, \theta_2$ with $0< \theta_2\leq \pi$ are not finitely generated as quasi-quadratic $\mathbb{R}$-modules in $\mathbb{C}$.* (2) *A line $M_{\text{\rm line}}(z)$ for any $z\in\mathbb{C}$ with $|z|=1$ and a convex fan $M_{\text{\rm fan,cc}}(\theta_1, \theta_2)$ for any $\theta_1, \theta_2$ with $0\leq \theta_2\leq \pi$ are finitely generated as quasi-quadratic $\mathbb{R}$-modules in $\mathbb{C}$.* *Proof.* (1) We only demonstrate the case of $M_{\text{\rm fan,oo}}(\theta_1, \theta_2)$. The other cases follow in the same way. Since $M_{\text{\rm fan,oo}}(\theta_1, \theta_2)$ is a convex ray, the following inequality holds true: $\arg (z+w)\leq \max \{\arg z, \arg w\}$ for any $z,w\in M_{\text{\rm fan,oo}}(\theta_1, \theta_2)$, where the notation $\arg v$ denotes the argument for $v\in\mathbb{C}$. Assume for contradiction that $M_{\text{\rm fan,oo}}(\theta_1, \theta_2)$ is a finitely generated quasi-quadratic $\mathbb{R}$-module. There exist elements $z_1,\ldots,z_n \in M_{\text{\rm fan,oo}}(\theta_1, \theta_2)$ such that $M_{\text{\rm fan,oo}}(\theta_1, \theta_2)=\mathbb{R}^2z_1+\cdots+\mathbb{R}^2z_n$. It follows that $\arg z_l=\max\{\arg z_1, \ldots, \arg z_n\}$ for some $1\leq l\leq n$. We can take an element $\theta_0\in\mathbb{R}$ with $\arg z_l< \theta_0<\theta_1+\theta_2$. 
Since $e^{i\theta_0}\in M_{\text{\rm fan,oo}}(\theta_1, \theta_2)$, we have $e^{i\theta_0}=r_1^2z_1+\cdots+r_n^2 z_n$ for some $r_1,\ldots,r_n\in\mathbb{R}$. Hence it follows that $\theta_0=\arg(r_1^2z_1+\cdots+r_n^2 z_n)\leq \max_{1\leq i\leq n}\{\arg z_i\}\leq \arg z_l$. This is a contradiction. \(2\) We obviously have $M_{\text{\rm line}}(z)=\mathbb R^2 z+\mathbb R^2(-z)$. This means that $M_{\text{\rm line}}(z)$ is finitely generated. We next show that a convex fan $M_{\text{\rm fan,cc}}(\theta_1, \theta_2)$ is finitely generated. Set $z_1=e^{i\theta_1}$, $z_2=e^{i(\theta_1+\theta_2/2)}$ and $z_3=e^{i(\theta_1+\theta_2)}$. Then we have the following equalities: $$\begin{aligned} M_{\text{\rm fan,cc}}(\theta_1, \theta_2) &=&\{sz_1+tz_2\;|\;s,t\in\mathbb{R}^2\} \cup\{uz_2+vz_3\;|\;u,v\in\mathbb{R}^2\}\\ &=&\mathbb{R}^2z_1+\mathbb{R}^2z_2+\mathbb{R}^2z_3.\end{aligned}$$ ◻ We are now ready to demonstrate our main theorem in this section. **Theorem 29**. *Let $\mathcal M$ be a quasi-quadratic $A$-module in $A$. Set $m=\min(\operatorname{\mbox{\bf val}}(\mathcal M))$. 
The quasi-quadratic module $\mathcal M$ is finitely generated as a quasi-quadratic $A$-module in $A$ if and only if the quasi-quadratic modules $M_m(\mathcal M)$ and $M_{m+1}(\mathcal M)$ are finitely generated as quasi-quadratic $\mathbb{R}$-modules in $\mathbb{C}$ if and only if one of the following conditions is satisfied:* - *$M_m(\mathcal M)$ is either $\mathbb C$, a line or a convex fan of the form $M_{\text{\rm fan,cc}}(\theta, \pi)$ for some $\theta \in \mathbb R$.* - *$M_m(\mathcal M)=M_{\text{\rm fan,cc}}(\theta_1, \theta_2)$ for some $\theta_1 \in \mathbb R$ and $0<\theta_2<\pi$ and $M_{m+1}(\mathcal M)$ is either $\mathbb C$, a line or a convex fan of the form $M_{\text{\rm fan,cc}}(\theta'_1, \theta'_2)$ for some $\theta'_1 \in \mathbb R$ and $0 \leq \theta'_2 \leq \pi$.* *Proof.* The second 'if and only if' part follows from Example [Example 26](#ex:ex2){reference-type="ref" reference="ex:ex2"} and Lemma [Lemma 28](#conv-fans){reference-type="ref" reference="conv-fans"} once the first 'if and only if' part is proved. We prove the first 'if and only if' part in the rest of the proof. We first assume that $\mathcal M$ is a finitely generated quasi-quadratic $A$-module in $A$. There are finitely many nonzero elements $g_1,\ldots,g_n\in \mathcal M$ with $\mathcal M=A^2g_1+\cdots+A^2g_n$. Set $S_k:=\{i\;|\; 1\leq i\leq n, \operatorname{\mbox{\bf val}}(g_i)=k\}$ for $k \geq m$. We first prove that $M_m(\mathcal M)$ is finitely generated. Note that $S_m$ is not empty because $\min(\operatorname{\mbox{\bf val}}(\mathcal M))=m$. It is immediate that $\operatorname{\mbox{\bf p.an}}(g_i)\in M_m(\mathcal M)$ for any $i\in S_m$. By Proposition [Proposition 15](#prop:M){reference-type="ref" reference="prop:M"}, we have $\sum_{i\in S_m}\mathbb{R}^2\operatorname{\mbox{\bf p.an}}(g_i)\subset M_m(\mathcal M)$. We next prove the opposite inclusion. Take a nonzero element $c\in M_m(\mathcal M)$. 
We can take a nonzero element $f\in \mathcal M$ with $c=\operatorname{\mbox{\bf p.an}}(f)$ and $\operatorname{\mbox{\bf val}}(f)=m$. For each $1\leq i\leq n$, there are elements $a_i(X)\in A$ such that $f=a_1(X)^2g_1+\cdots+a_n(X)^2 g_n$ by the assumption. Then we have $$c=\operatorname{\mbox{\bf p.an}}(f)=\sum_{i\in S_m}a_i(0)^2\operatorname{\mbox{\bf p.an}}(g_i)\in \sum_{i\in S_m}\mathbb{R}^2\operatorname{\mbox{\bf p.an}}(g_i).$$ This implies that $M_m(\mathcal M)$ is finitely generated. We next show that $M_{m+1}(\mathcal M)$ is finitely generated. When $\mathcal M$ is one of the quasi-quadratic $A$-modules given in (a) of Example [Example 26](#ex:ex2){reference-type="ref" reference="ex:ex2"}, we have $M_{m+1}(\mathcal M)=\mathbb C$. It is finitely generated because $\mathbb C = \mathbb R^2 + (-\mathbb R^2) + (i \mathbb R^2)+(-i\mathbb R^2)$. We concentrate on the case in which $\mathcal M$ is one of the quasi-quadratic $A$-modules given in (b) of Example [Example 26](#ex:ex2){reference-type="ref" reference="ex:ex2"}. Let $f$ be an arbitrary element of $\mathcal M$ with $\operatorname{\mbox{\bf val}}(f)=m+1$. There are $a_1(X), \ldots, a_n(X)\in A$ such that $f=a_1(X)^2g_1+\cdots+a_n(X)^2 g_n$ by the assumption. Since $\operatorname{\mbox{\bf val}}(f)=m+1$, we have $\sum_{i \in S_m}a_i(0)^2\operatorname{\mbox{\bf p.an}}(g_i)=0$. If $a_k(0) \neq 0$ for some $k \in S_m$, we have $0 \neq a_k(0)^2\operatorname{\mbox{\bf p.an}}(g_k) \in M_m(\mathcal M)$ and $$-a_k(0)^2\operatorname{\mbox{\bf p.an}}(g_k) = \sum_{i \in S_m, i \neq k}a_i(0)^2\operatorname{\mbox{\bf p.an}}(g_i) \in M_m(\mathcal M).$$ This is a contradiction because there are no elements $u \neq 0$ in $M_m(\mathcal M)$ with $- u \in M_m(\mathcal M)$ in this case. Therefore, we have $a_i(0)=0$ for every $i \in S_m$. It implies that $\operatorname{\mbox{\bf val}}(a_i(X)^2g_i)>m$ for every $1 \leq i \leq n$. 
Using these inequalities, we finally get the equality $$\operatorname{\mbox{\bf p.an}}(f)=\sum_{i \in S_{m+1}}a_i(0)^2\operatorname{\mbox{\bf p.an}}(g_i).$$ This means that $M_{m+1}(\mathcal M)$ is generated by $\{\operatorname{\mbox{\bf p.an}}(g_i)\;|\; i \in S_{m+1}\}$ by the definition of $M_{m+1}(\mathcal M)$. We demonstrate the opposite implication. Suppose that $M_m(\mathcal M)$ and $M_{m+1}(\mathcal M)$ are finitely generated quasi-quadratic $\mathbb{R}$-modules in $\mathbb{C}$. Let $a_{i1}, \ldots, a_{il_i}$ be generators of $M_{m+i}(\mathcal M)$ for $i=0,1$. Set $$\mathcal N_i=\{f \in A\;|\; \operatorname{\mbox{\bf val}}(f)=m+i \ \mbox{and}\ \operatorname{\mbox{\bf p.an}}(f)\in M_{m+i}(\mathcal M)\}$$ for $i=0,1$. We have $\mathcal N_i \subseteq \sum_{j=1}^{l_i}a_{ij}X^{m+i}A^2$. In fact, take a nonzero element $f \in \mathcal N_i$. We have $f=\operatorname{\mbox{\bf p.an}}(f)X^{m+i}g$ for some $g \in A$ with $\pi(g)=1$. Since any strict units in $A$ admit a square root, we have $$f \in \operatorname{\mbox{\bf p.an}}(f)X^{m+i}A^2 \subseteq \sum_{j=1}^{l_i} a_{ij}\mathbb R^2X^{m+i}A^2 =\sum_{j=1}^{l_i}a_{ij}X^{m+i}A^2.$$ By Example [Example 26](#ex:ex2){reference-type="ref" reference="ex:ex2"}, we have $$\begin{aligned} \mathcal M= &\{x \in A\;|\; (\operatorname{\mbox{\bf val}}(x)=m \text{ and } \operatorname{\mbox{\bf p.an}}(x) \in M_m(\mathcal M)) \text{ or } \operatorname{\mbox{\bf val}}(x)>m\}\end{aligned}$$ or $$\begin{aligned} \mathcal M= &\{x \in A\;|\; (\operatorname{\mbox{\bf val}}(x)=m \text{ and } \operatorname{\mbox{\bf p.an}}(x) \in M_m(\mathcal M)) \\ & \qquad \text{ or } (\operatorname{\mbox{\bf val}}(x)=m+1 \text{ and } \operatorname{\mbox{\bf p.an}}(x) \in M_{m+1}(\mathcal M)) \\ & \qquad \text{ or } \operatorname{\mbox{\bf val}}(x)>m+1\}.\end{aligned}$$ Even in the former case, we may assume that $\mathcal M$ is of the latter form because $M_{m+1}(\mathcal M)=\mathbb C$ in the former case. 
Therefore, we only treat the case in which $\mathcal M$ is of the latter form. It is obvious that $\mathcal N_i \subseteq \mathcal M$ for $i=0,1$. These inclusions imply $\sum_{j=1}^{l_i}a_{ij}X^{m+i}A^2 \subseteq \mathcal M$ for $i=0,1$ because $a_{ij}X^{m+i} \in \mathcal N_i$ and $\mathcal M$ is a quasi-quadratic $A$-module. We finally have $$\mathcal M = \mathcal N_0 \cup \mathcal N_1 \cup X^{m+2}\mathbb C[\![X]\!] \subseteq \sum_{i=0}^1 \sum_{j=1}^{l_i}a_{ij}X^{m+i}A^2 + X^{m+2}\mathbb C[\![X]\!] \subseteq \mathcal M.$$ It implies that $$\mathcal M = \sum_{i=0}^1 \sum_{j=1}^{l_i}a_{ij}X^{m+i}A^2 + X^{m+2}\mathbb C[\![X]\!].$$ This equality implies that $\mathcal M$ is finitely generated by Lemma [Lemma 27](#fg_val_ring){reference-type="ref" reference="fg_val_ring"}. ◻ # Appendix: When $F_0$ is of characteristic two {#sec:char2} We consider the case in which the residue field $F_0$ is of characteristic two. From now on, let $A$ be a pseudo-valuation ring whose residue field is of characteristic two such that any strict units admit a square root. We use the following lemma: **Lemma 30** (Basic lemma). *Any element in $\mathfrak m$ is the sum of squares of two elements in $A$.* *Proof.* Let $x \in \mathfrak m$. Since $\pi_0(1+x)=1$ and $\pi_0(-1)=-1=1$, both $1+x$ and $-1$ are strict units in $A$. There exist $u,v \in A$ with $u^2=1+x$ and $v^2=-1$ because strict units admit a square root in $A$. We have $x=u^2+v^2$. ◻ We give a corollary of the basic lemma. **Corollary 31**. *Let $\mathcal M$ be a quasi-quadratic $A$-module in $A$. Let $x \in \mathcal M$ and $y \in A$. If $\operatorname{\mbox{\bf val}}(x)<\operatorname{\mbox{\bf val}}(y)$, then $y \in \mathcal M$.* *Proof.* By the assumption, we have $\operatorname{\mbox{\bf val}}(y/x)>e$ and $y/x \in \mathfrak m$. There exist $u,v \in A$ with $y/x=u^2+v^2$. It implies that $y=(u^2+v^2)x \in \mathcal M$. ◻ For presenting structure theorems of quasi-quadratic modules, we introduce several notations. 
**Definition 32**. A subset $S$ of $G_{\geq e}$ is called a *cut* if $y \in S$ whenever $y>x$ for some $x \in S$. Let $\mathfrak S$ denote the set of cuts and $\mathfrak S_{\min}$ denote the set of cuts having a smallest element. For any cut $S$, we set $$\Delta_1(S)=\{0\} \cup\{0 \neq x \in A\;|\; \operatorname{\mbox{\bf val}}(x) \in S \}$$ when $S$ does not have a smallest element. Otherwise, for any nonzero quasi-quadratic $F_0$-module $M$ in $F$, we set $$\begin{aligned} \Delta_2(S,M)&=\{0\}\cup\{0 \neq x \in A\;|\; \operatorname{\mbox{\bf val}}(x) >\min S \text{ or }\\ &\qquad (\operatorname{\mbox{\bf val}}(x)=\min S \text{ and } \operatorname{\mbox{\bf p.an}}(x) \in M)\}. \end{aligned}$$ Here, $\min S$ denotes the unique smallest element of $S$. The following lemma asserts that the sets $\Delta_1(S)$ and $\Delta_2(S,M)$ are quasi-quadratic $A$-modules in $A$. **Lemma 33**. *The following assertions hold true:* 1. *The set $\Delta_1(S)$ is a quasi-quadratic module in $A$ for any $S\in\mathfrak S \setminus \mathfrak S_{\min}$.* 2. *The set $\Delta_2(S,M)$ is a quasi-quadratic module in $A$ for any $S\in\mathfrak S_{\min}$ and for any nonzero quasi-quadratic $F_0$-module $M$ in $F$.* *Proof.* (1) It is routine to demonstrate that $\Delta_1(S)$ is closed under multiplication by the squares of elements in $A$. We have only to demonstrate that $\Delta_1(S)$ is closed under addition. Take nonzero elements $x,y\in\Delta_1(S)$ with $x+y\neq 0$. It immediately follows that $x+y\in\Delta_1(S)$, since $\operatorname{\mbox{\bf val}}(x+y)\geq\min(\operatorname{\mbox{\bf val}}(x),\operatorname{\mbox{\bf val}}(y))\in S$ and $S$ is a cut. \(2\) We first demonstrate the closedness under addition. Take nonzero elements $x,y\in\Delta_2(S,M)$ with $x+y\neq 0$. We may assume that $\operatorname{\mbox{\bf val}}(x)\geq \operatorname{\mbox{\bf val}}(y)$. 
We first consider the case in which $\operatorname{\mbox{\bf val}}(x)=\operatorname{\mbox{\bf val}}(y)$. When $\operatorname{\mbox{\bf p.an}}(x)+\operatorname{\mbox{\bf p.an}}(y)\neq 0$, it follows from Definition [Definition 5](#def:pseudo){reference-type="ref" reference="def:pseudo"}(4) that $\operatorname{\mbox{\bf val}}(x+y)=\operatorname{\mbox{\bf val}}(x)$ and $\operatorname{\mbox{\bf p.an}}(x+y)=\operatorname{\mbox{\bf p.an}}(x)+\operatorname{\mbox{\bf p.an}}(y)$. This implies that $x+y\in\Delta_2(S,M)$. When $\operatorname{\mbox{\bf p.an}}(x)+\operatorname{\mbox{\bf p.an}}(y)= 0$, we have that $\operatorname{\mbox{\bf val}}(x+y)>\operatorname{\mbox{\bf val}}(x)\geq \min S$ by using Lemma [Lemma 7](#lem:lcp_basic2){reference-type="ref" reference="lem:lcp_basic2"}. This means that $x+y\in\Delta_2(S,M)$. We next consider the remaining case in which $\operatorname{\mbox{\bf val}}(x)>\operatorname{\mbox{\bf val}}(y)$. We get $\operatorname{\mbox{\bf val}}(x+y)=\operatorname{\mbox{\bf val}}(y)$ and $\operatorname{\mbox{\bf p.an}}(x+y)=\operatorname{\mbox{\bf p.an}}(y)$ from Definition [Definition 5](#def:pseudo){reference-type="ref" reference="def:pseudo"}(4). Hence it follows that $x+y\in\Delta_2(S,M)$. We demonstrate that $\Delta_2(S,M)$ is closed under multiplication by the squares of elements in $A$. Take nonzero elements $x\in\Delta_2(S,M)$ and $a\in A$. When $\operatorname{\mbox{\bf val}}(x)>\min S$, it is obvious that $\operatorname{\mbox{\bf val}}(a^2x)\geq\operatorname{\mbox{\bf val}}(x)>\min S$. Thus we have $a^2x\in\Delta_2(S,M)$. We next consider the case in which $\operatorname{\mbox{\bf val}}(x)=\min S$ and $\operatorname{\mbox{\bf p.an}}(x)\in M$. When $\operatorname{\mbox{\bf val}}(a)>e$, it follows that $\operatorname{\mbox{\bf val}}(a^2x)>\operatorname{\mbox{\bf val}}(x)=\min S$. This implies that $a^2x\in\Delta_2(S,M)$. 
When $\operatorname{\mbox{\bf val}}(a)=e$, we have $\operatorname{\mbox{\bf val}}(a^2x)=\operatorname{\mbox{\bf val}}(x)=\min S$ and $\operatorname{\mbox{\bf p.an}}(a^2x)=\pi_0(a)^2\operatorname{\mbox{\bf p.an}}(x)\in M$ from Definition [Definition 5](#def:pseudo){reference-type="ref" reference="def:pseudo"}(2). ◻ **Theorem 34**. *Let $\mathcal M$ be a quasi-quadratic $A$-module in $A$.* 1. *$S=\operatorname{\mbox{\bf val}}(\mathcal M)$ is a cut.* 2. *If $S$ does not have a smallest element, we have $\mathcal M=\Delta_1(S)$. Otherwise, $\mathcal M=\Delta_2(S,M_{\min S}(\mathcal M))$.* *Proof.* The assertion (1) immediately follows from Corollary [Corollary 31](#cor:char_two1){reference-type="ref" reference="cor:char_two1"}. We demonstrate the assertion (2) when $S$ does not have a smallest element. The inclusion $\mathcal M \subseteq \Delta_1(S)$ is obvious. We demonstrate the opposite inclusion. Take a nonzero element $x \in \Delta_1(S)$. We have $\operatorname{\mbox{\bf val}}(x) \in S$. Since $S$ does not have a smallest element, we can take $y \in \mathcal M$ with $\operatorname{\mbox{\bf val}}(y)<\operatorname{\mbox{\bf val}}(x)$. We get $x \in \mathcal M$ by Corollary [Corollary 31](#cor:char_two1){reference-type="ref" reference="cor:char_two1"}. We finally consider the remaining case. Set $M=M_{\min S}(\mathcal M)$ for simplicity. The inclusion $\mathcal M \subseteq \Delta_2(S,M)$ is easy. We omit the proof. We prove the opposite inclusion. Take a nonzero $x \in \Delta_2(S,M)$. When $\operatorname{\mbox{\bf val}}(x)>\min S$, we can prove $x \in \mathcal M$ similarly to the previous case. When $\operatorname{\mbox{\bf val}}(x)=\min S$, we have $\operatorname{\mbox{\bf p.an}}(x) \in M$. By the definition of $M$, there exists $y \in \mathcal M$ such that $\operatorname{\mbox{\bf val}}(y)=\operatorname{\mbox{\bf val}}(x)$ and $\operatorname{\mbox{\bf p.an}}(y)=\operatorname{\mbox{\bf p.an}}(x)$. 
We have $x \in \mathcal M$ by Lemma [Lemma 13](#lem:basic4){reference-type="ref" reference="lem:basic4"}. ◻ **Proposition 35**. *The following assertions hold true:* 1. *Let $S_1$ and $S_2$ be cuts in $\operatorname{\mbox{\bf val}}(A)$. Then, at least one of the inclusions $S_1 \subseteq S_2$ and $S_2 \subseteq S_1$ holds true.* 2. *Let $S_1$ and $S_2$ be cuts in $\operatorname{\mbox{\bf val}}(A)$. Let $M_1$ and $M_2$ be nonzero quasi-quadratic $F_0$-modules in $F$.* 1. *We have $\Delta_1(S_1) \cap \Delta_1(S_2)= \Delta_1(S_1)$ and $\Delta_1(S_1)+\Delta_1(S_2)=\Delta_1(S_2)$ when $S_1 \subseteq S_2$;* 2. *When $S_1, S_2 \in \mathfrak S_{\min}$, we have $$\begin{aligned} &\Delta_2(S_1,M_1) \cap \Delta_2(S_2,M_2) \\ &= \left\{ \begin{array}{ll} \Delta_2(S_1,M_1) & \text{ if } S_1 \subsetneq S_2,\\ \Delta_2(S_1, M_1 \cap M_2) & \text{ if } S_1=S_2 \text{ and } M_1 \cap M_2 \neq \{0\},\\ \Delta_1(S') & \text{ if } S_1=S_2 \text{ and } M_1 \cap M_2 = \{0\}, \end{array} \right. \end{aligned}$$ where $g_{\min}$ is a smallest element of $S_1$ and $S'=\{g \in S_1\;|\;g > g_{\min}\}$. We also have $$\begin{aligned} &\Delta_2(S_1,M_1) + \Delta_2(S_2,M_2) = \left\{ \begin{array}{ll} \Delta_2(S_2,M_2) & \text{ if } S_1 \subsetneq S_2,\\ \Delta_2(S_1, M_1 + M_2) & \text{ if } S_1=S_2; \end{array} \right. \end{aligned}$$* 3. *When $S_1 \in \mathfrak S_{\min}$, we have $$\begin{aligned} &\Delta_2(S_1,M_1) \cap \Delta_1(S_2) = \left\{ \begin{array}{ll} \Delta_2(S_1,M_1) & \text{ if } S_1 \subseteq S_2,\\ \Delta_1(S_2) & \text{ if } S_2 \subsetneq S_1,\\ \end{array} \right. \end{aligned}$$ and $$\begin{aligned} &\Delta_2(S_1,M_1) + \Delta_1(S_2) = \left\{ \begin{array}{ll} \Delta_1(S_2) & \text{ if } S_1 \subseteq S_2,\\ \Delta_2(S_1,M_1) & \text{ if } S_2 \subsetneq S_1.\\ \end{array} \right. \end{aligned}$$* *Proof.* The assertion (1) is obvious from the definition. Our next task is to prove assertion (2). 
The proof is similar to that of [@FK Proposition 5.10(2)], but we give it for the reader's convenience. When $S_1 \subseteq S_2$, we have $\Delta_1(S_1) \subseteq \Delta_1(S_2)$. The assertion (a) is obvious from this inclusion. We investigate the intersection and the sum of $\Delta_2(S_1,M_1)$ and $\Delta_2(S_2,M_2)$ discussed in assertion (b). When $S_1 \subsetneq S_2$, $\min S_2$ is smaller than any element in $S_1$ by the definition of cuts. Hence we obviously have $\Delta_2(S_1,M_1) \subseteq \Delta_2(S_2,M_2)$. The equalities in the assertion (b) are obvious from this inclusion in this case. We next consider the case in which $S_1=S_2$. Set $g_{\min}=\min S_1$. The equalities for the intersection $\Delta_2(S_1,M_1) \cap \Delta_2(S_2,M_2)$ are not hard to derive. We omit the details. For the sum $\Delta_2(S_1,M_1)+\Delta_2(S_2,M_2)$, we first demonstrate the inclusion $\Delta_2(S_1,M_1)+\Delta_2(S_2,M_2) \subseteq \Delta_2(S_1,M_1+M_2)$. Take arbitrary elements $x_i \in \Delta_2(S_i,M_i)$ for $i=1,2$. We want to demonstrate $x_1+x_2 \in \Delta_2(S_1,M_1+M_2)$. It is obvious when at least one of $x_i$ is zero. It is also true when $\operatorname{\mbox{\bf val}}(x_1)\neq \operatorname{\mbox{\bf val}}(x_2)$. We next consider the case in which $\operatorname{\mbox{\bf val}}(x_1)=\operatorname{\mbox{\bf val}}(x_2) > g_{\min}$. It follows that $\operatorname{\mbox{\bf val}}(x_1+x_2)\geq\min\{\operatorname{\mbox{\bf val}}(x_1),\operatorname{\mbox{\bf val}}(x_2)\}>g_{\min}$. Thus we get $x_1+x_2 \in \Delta_2(S_1,M_1+M_2)$. We next consider the case in which $\operatorname{\mbox{\bf val}}(x_1)=\operatorname{\mbox{\bf val}}(x_2)=g_{\min}$ and $\operatorname{\mbox{\bf p.an}}(x_1)+\operatorname{\mbox{\bf p.an}}(x_2) \neq 0$. The claim immediately follows from Definition [Definition 5](#def:pseudo){reference-type="ref" reference="def:pseudo"}(4). 
In the remaining case, we have $\operatorname{\mbox{\bf val}}(x_1)=\operatorname{\mbox{\bf val}}(x_2)=g_{\min}$ and $\operatorname{\mbox{\bf p.an}}(x_1)+\operatorname{\mbox{\bf p.an}}(x_2) = 0$. We have $\operatorname{\mbox{\bf val}}(x_1+x_2)>g_{\min}$ by Lemma [Lemma 7](#lem:lcp_basic2){reference-type="ref" reference="lem:lcp_basic2"} in this case. We also get $x_1+x_2 \in \Delta_2(S_1,M_1+M_2)$. We next prove the opposite inclusion $\Delta_2(S_1,M_1+M_2) \subseteq \Delta_2(S_1,M_1)+\Delta_2(S_2,M_2)$. Take $x \in \Delta_2(S_1,M_1+M_2)$. When $\operatorname{\mbox{\bf val}}(x) > g_{\min}$, we obviously have $x \in \Delta_2(S_1,M_1)$. When $\operatorname{\mbox{\bf val}}(x)=g_{\min}$ and $\operatorname{\mbox{\bf p.an}}(x) \in M_i$ for some $i=1,2$, we obviously have $x \in \Delta_2(S_i,M_i)$. The final case is the case in which $\operatorname{\mbox{\bf val}}(x)=g_{\min}$ and $\operatorname{\mbox{\bf p.an}}(x) \not\in M_i$ for $i=1,2$. We can take nonzero $b_i \in M_i$ with $b_1+b_2 = \operatorname{\mbox{\bf p.an}}(x)$ in this case. We can also take $x_1 \in A$ such that $\operatorname{\mbox{\bf val}}(x_1) = g_{\min}$ and $\operatorname{\mbox{\bf p.an}}(x_1)=b_1$ by Definition [Definition 5](#def:pseudo){reference-type="ref" reference="def:pseudo"}(3). The element $x_1$ belongs to $\Delta_2(S_1,M_1)$. Set $x_2 = x - x_1\neq 0$. We have $\operatorname{\mbox{\bf val}}(x_2)=g_{\min}$ and $\operatorname{\mbox{\bf p.an}}(x_2)=b_2$ by Definition [Definition 5](#def:pseudo){reference-type="ref" reference="def:pseudo"}(4). It means that $x_2 \in \Delta_2(S_2,M_2)$. We have proven the inclusion $\Delta_2(S_1,M_1+M_2) \subseteq \Delta_2(S_1,M_1)+\Delta_2(S_2,M_2)$ and finished the proof of the assertion (b). The assertion (c) is easily seen because we have $\Delta_2(S_1, M_1) \subseteq \Delta_1(S_2)$ when $S_1 \subseteq S_2$ and the opposite inclusion holds true otherwise by the definition of cuts, $\Delta_2(S_1, M_1)$ and $\Delta_1(S_2)$. 
◻ Recall that $\mathfrak X_{R}^N$ denotes the set of all quasi-quadratic $R$-modules in an $R$-module $N$. We define $$\begin{aligned} \mathcal O_{F_0,F} &= (\mathfrak{S}\setminus\mathfrak{S}_{\min}) \sqcup \Bigl((\mathfrak{S}_{\min} \setminus \{G_{\geq e}\}) \times (\mathfrak{X}_{F_0}^F\setminus\{0\})\Bigr) \sqcup \Bigl(\{G_{\geq e}\} \times (\mathfrak{X}_{F_0}^{F_0}\setminus\{0\})\Bigr).\end{aligned}$$ **Theorem 36**. *The map $\Phi:\mathfrak{X}_A \to \mathcal O_{F_0,F}$ given by $$\begin{aligned} &\Phi(\mathcal M) = \left\{ \begin{array}{ll} \operatorname{\mbox{\bf val}}(\mathcal M) & \text{ if } \operatorname{\mbox{\bf val}}(\mathcal M)\not\in\mathfrak{S}_{\min},\\ (\operatorname{\mbox{\bf val}}(\mathcal M), M_{g_{\min}}(\mathcal M)) & \text{ if } \operatorname{\mbox{\bf val}}(\mathcal M)\in\mathfrak{S}_{\min}\\ \end{array} \right. \end{aligned}$$ is a bijection, where $g_{\min}$ denotes the smallest element in $\operatorname{\mbox{\bf val}}(\mathcal M)$.* *Proof.* The proof is very similar to that of [@FK Theorem 5.11(2)]. We give it for the reader's convenience. The map $\Phi$ is well-defined by Proposition [Proposition 15](#prop:M){reference-type="ref" reference="prop:M"} and the inclusion $M_e(\mathcal M) \subseteq \pi(\mathcal M) \subseteq \pi(A) = F_0$. By using Lemma [Lemma 33](#lem:char_two_qq){reference-type="ref" reference="lem:char_two_qq"}, we can define the map $\Psi:\mathcal O_{F_0,F}\rightarrow \mathfrak{X}_A$ by sending $S$ to $\Delta_1(S)$ when $S\in\mathfrak{S}\setminus\mathfrak{S}_{\min}$ and by sending $(S,M)$ to $\Delta_2(S, M)$ when $(S,M)\in\Bigl((\mathfrak{S}_{\min} \setminus \{G_{\geq e}\}) \times (\mathfrak{X}_{F_0}^F\setminus\{0\})\Bigr)\sqcup \Bigl(\{G_{\geq e}\} \times (\mathfrak{X}_{F_0}^{F_0}\setminus\{0\})\Bigr)$. For any $S\in\mathfrak{S}\setminus\mathfrak{S}_{\min}$, we obviously have $\Phi(\Psi(S))=S$. 
For any $(S,M)\in\Bigl((\mathfrak{S}_{\min} \setminus \{G_{\geq e}\}) \times (\mathfrak{X}_{F_0}^F\setminus\{0\})\Bigr) \sqcup \Bigl(\{G_{\geq e}\} \times (\mathfrak{X}_{F_0}^{F_0}\setminus\{0\})\Bigr)$, we get $\operatorname{\mbox{\bf val}}(\Delta_2(S,M))=S$ by Definition [Definition 5](#def:pseudo){reference-type="ref" reference="def:pseudo"}(3). We next demonstrate that $M_{g_{\min}}(\Delta_2(S,M))=M$. Since $M_{g_{\min}}(\Delta_2(S,M))=\{\operatorname{\mbox{\bf p.an}}(x)\;|\;x\in\Delta_2(S,M)\;\text{and}\;\operatorname{\mbox{\bf val}}(x)=g_{\min}\}\cup\{0\}$, we get $M_{g_{\min}}(\Delta_2(S,M))\subseteq M$. To show the opposite inclusion, take a nonzero element $c\in M$. By Definition [Definition 5](#def:pseudo){reference-type="ref" reference="def:pseudo"}(3), there exists an element $x\in K$ with $\operatorname{\mbox{\bf p.an}}(x)=c$ and $\operatorname{\mbox{\bf val}}(x)=g_{\min}$. When $g_{\min}>e$, it follows that $x\in\mathfrak{m}\subset A$. When $g_{\min}=e$ and $M\in\mathfrak{X}_{F_0}^{F_0}$, we have $x\in A$ from Lemma [Lemma 8](#lem:basic){reference-type="ref" reference="lem:basic"}. These imply that $c\in M_{g_{\min}}(\Delta_2(S,M))$. Hence we have $$\Phi(\Psi(S,M))=(\operatorname{\mbox{\bf val}}(\Delta_2(S,M)),M_{g_{\min}}(\Delta_2(S,M)))=(S,M).$$ Take a quasi-quadratic module $\mathcal M\in\mathfrak{X}_A$. We next demonstrate that $\Psi(\Phi(\mathcal M))=\mathcal M$. When $\operatorname{\mbox{\bf val}}(\mathcal{M})\not\in\mathfrak{S}_{\min}$, we have $$\Psi(\Phi(\mathcal M))=\Psi(\operatorname{\mbox{\bf val}}(\mathcal M))=\Delta_1(\operatorname{\mbox{\bf val}}(\mathcal M))=\mathcal M$$ from Theorem [Theorem 34](#thm:str_chartwo){reference-type="ref" reference="thm:str_chartwo"}(2). 
When $\operatorname{\mbox{\bf val}}(\mathcal M)\in\mathfrak{S}_{\min}$, it follows from Theorem [Theorem 34](#thm:str_chartwo){reference-type="ref" reference="thm:str_chartwo"}(2) that $$\Psi(\Phi(\mathcal M))=\Psi(\operatorname{\mbox{\bf val}}(\mathcal M),M_{g_{\min}}(\mathcal M))= \Delta_2(\operatorname{\mbox{\bf val}}(\mathcal M),M_{g_{\min}}(\mathcal M))=\mathcal M.$$ We have thus proven that $\Phi$ and $\Psi$ are mutually inverse. ◻ Augustin, D., Knebusch, M. (2010). Quadratic modules in $R[\![X]\!]$. *Proc. Amer. Math. Soc.* 138:75-84. Fujita, M., Kageyama, M. (2023). Quasi-quadratic modules in valuation ring and valued field. *Acta Math. Hungarica* 170(1):33-83. Hedstrom, J. R., Houston, E. G. (1978). Pseudo-valuation domains. *Pacific J. Math.* 75:137-142.
arXiv:2310.04116, "Quasi-quadratic modules in pseudo-valuation domain", Masato Fujita and Masaru Kageyama (math.AC).
--- abstract: | Gradient methods are experiencing a growth in methodological and theoretical developments owing to the challenges of optimization problems arising in data science. Focusing on data science applications with expensive objective function evaluations yet inexpensive gradient function evaluations, gradient methods that never make objective function evaluations are either being rejuvenated or actively developed. However, as we show, such gradient methods are all susceptible to catastrophic divergence under realistic conditions for data science applications. In light of this, gradient methods which make use of objective function evaluations become more appealing, yet, as we show, can result in an exponential increase in objective evaluations between accepted iterates. As a result, existing gradient methods are poorly suited to the needs of optimization problems arising from data science. In this work, we address this gap by developing a generic methodology that economically uses objective function evaluations in a problem-driven manner to prevent catastrophic divergence and avoid an explosion in objective evaluations between accepted iterates. Our methodology allows for specific procedures that can make use of particular step size selection methodologies or search direction strategies, and we develop a novel step size selection methodology that is well-suited to data science applications. We show that a procedure resulting from our methodology is highly competitive with standard optimization methods on CUTEst test problems. We then show that a procedure resulting from our methodology compares highly favorably to standard optimization methods on optimization problems arising in our target data science applications. Thus, we provide a novel gradient methodology that is better suited to optimization problems arising in data science. 
author: - Christian Varner and Vivak Patel bibliography: - ref.bib title: A Novel Gradient Methodology with Economical Objective Function Evaluations for Data Science Applications --- # Introduction {#sec:Introduction} Gradient methods are experiencing a growth in methodological and theoretical developments in order to address the needs of data science applications, in which full objective function and gradient function evaluations are expensive to compute [@bottou2018optimization §3]. Focusing on the important case of expensive objective evaluations yet inexpensive gradient evaluations [see @hardin2012gee Chapter 3], gradient methods that never make objective function evaluations are being rejuvenated and actively developed (see ). Such gradient methods generally enjoy inexpensive per-iteration costs, and global convergence guarantees when the gradient function is Lipschitz continuous with a common rank for any compact set in the domain.[^1] For a canonical example, the gradient method with diminishing step-sizes never evaluates the objective function, and converges globally under the aforementioned global Lipschitz smoothness condition [@bertsekas1999 Proposition 1.2.4]. 
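For concreteness, here is a minimal sketch of ours (not from any cited work) of the canonical diminishing step-size scheme on a globally Lipschitz-smooth quadratic, the setting in which its global convergence guarantee applies:

```python
import numpy as np

def gradient_descent_diminishing(grad, theta0, alpha0, iters):
    """Gradient descent with diminishing step sizes alpha0 / (k + 1);
    note that it never evaluates the objective function."""
    theta = np.asarray(theta0, dtype=float)
    for k in range(iters):
        theta = theta - (alpha0 / (k + 1)) * grad(theta)
    return theta

# A globally Lipschitz-smooth test objective: F(theta) = 0.5 * theta^T A theta.
A = np.diag([1.0, 10.0])
grad = lambda theta: A @ theta

theta = gradient_descent_diminishing(grad, [5.0, 5.0], alpha0=0.5, iters=5000)
print(np.linalg.norm(grad(theta)))  # the gradient norm has decayed
```

The decay is slow (the step sizes are summable in reciprocal only logarithmically), which is part of why such schemes are attractive mainly for their low per-iteration cost rather than their speed.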
**Method** **Convergence** **Divergence** -------------------------------- ------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------- Diminishing Step-Size [@bertsekas1999 Proposition 1.2.4], [@patel2023gradient Theorem 3.7] [@patel2023gradient Proposition 4.4] Constant Step-Size [@armijo1966minimization Corollary 1], [@zhang2020gradientclipping Theorem 3], [@li2023convex Theorem 5.1] Barzilai-Borwein Methods [@barzilai1988 §4], [@burdakov2019stabilized Theorem 3.3] Nesterov's Acceleration Method [@nesterov2012GradientMF Theorem 6], [@li2023convex Theorem 4.4] Bregman Distance Method [@bauschke2017descentlemma Theorem 1] Negative Curvature Method [@curtis2018exploiting Theorem 1] Lipschitz Approximation [@malitsky2020adaptive Theorem 1] Weighted Gradient-Norm Damping [@wu2020wngrad Theorem 2.3], [@grapiglia2022AdaptiveTrust Corollary 1] Adaptively Scaled Trust Region [@gratton2022firstorderOFFO Corollary 3.3], [@gratton2022firstorderOFFO Theorem 3.10], [@gratton2023OFFOSecondOrderOpt Theorem 3.3] : Gradient methods that do not use objective function information. The method, convergence result reference(s), and a reference to a catastrophic divergence example are listed. Unfortunately, this global Lipschitz smoothness condition does not apply to objective functions arising in data science (see ). Similarly, the recent generalized smoothness condition of [@li2023convex], in which the modulus of continuity of the gradient function is allowed to grow sub-quadratically, also does not apply to common objective functions arising in data science (see ). 
To make matters worse, under the more appropriate local Lipschitz continuity condition on the gradient function (see ), the canonical gradient method with diminishing step-sizes was shown to diverge catastrophically: for any diminishing step-size, there exists an optimization problem for which the objective function evaluated at the iterates diverges and the gradient function evaluated at the iterates remains bounded away from zero [@patel2023gradient Proposition 4.4]. To exacerbate this further, the aforementioned gradient methods that do not use objective function evaluations all are susceptible to such catastrophic behavior (see ). **Problem** **Ref.** **GRA** **HES** **SUB** **QUA** **LOC** **Details** ----------------------------------------------------------- ----------------------------------------- --------- --------- --------- --------- --------- ------------- Factor Analysis for Pattern Extraction [@lawley1962factor §4] No No No Yes Yes Classification via Feed Forward Neural Network [@rumelhart1987 Chapter 8] No No No Yes Yes Correlation Modeling via Generalized Estimating Equations [@lipsitz2008geelongitudinal Chapter 3] No No No Yes Yes : Below are canonical data science problems, a reference for the problem, and different smoothness conditions: globally Lipschitz smooth (GRA); globally Lipschitz Hessian smooth (HES); subquadratically Lipschitz smooth (SUB); quadratically Lipschitz smooth (QUA); and local Lipschitz continuity of the gradient (LOC). See [9.1](#subsec:smoothness-overview){reference-type="ref" reference="subsec:smoothness-overview"} for definitions. We indicate "yes" if the problem satisfies the condition and "no" if not. Details can be found in the indicated section. 
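The failure of global Lipschitz smoothness for such problems can be seen numerically. The sketch below (an illustrative computation of ours, using the scalar quartic $F(t) = t^4/4$ as a stand-in for the quartic growth indicated in the table) shows the Lipschitz ratio of the gradient growing without bound as the compact set enlarges:

```python
# Gradient of the scalar quartic F(t) = t^4 / 4: dF/dt = t^3.
grad = lambda t: t ** 3

def lipschitz_ratio(phi, psi):
    """The ratio |grad(phi) - grad(psi)| / |phi - psi| from the Lipschitz
    condition; bounded on small sets, unbounded globally."""
    return abs(grad(phi) - grad(psi)) / abs(phi - psi)

ratios = {r: lipschitz_ratio(r, r - 0.1) for r in (1.0, 10.0, 100.0)}
for r, ratio in ratios.items():
    print(r, ratio)  # grows roughly like 3 r^2: no global constant exists
```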
In light of such results, gradient methods that make use of objective function evaluations to ensure descent naturally avoid this catastrophic divergence, making them more attractive than their objective-evaluation-free alternatives when repeated, failed optimization attempts are too costly. On one extreme, there are gradient descent methods that may use several evaluations of the objective function before accepting a proposed iteration (see ). Such gradient methods can be shown to require only a finite number of accepted iterations to achieve a certain threshold for the gradient function, when the gradient function is globally Lipschitz continuous. Interestingly, such guarantees for this class of gradient methods appear to be readily extendable to the case where the gradient function is locally Lipschitz continuous [@zhang2020firstorder Theorem 5]. Furthermore, under the assumption of a globally Lipschitz continuous Hessian function, these gradient methods seem amenable to results that control the number of objective function evaluations prior to an accepted iterate [@cartis2011cubic2 Theorem 2.1]. However, under the more general and appropriate local Lipschitz continuity of the gradient function, these gradient methods can incur an exponentially growing number of objective function evaluations per accepted iteration (see ), which can severely blunt their utility for data science applications. 
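To make the evaluation-count concern concrete, here is a minimal sketch (standard textbook backtracking under our own naming, not the paper's method) of Armijo backtracking instrumented to count objective evaluations per accepted iterate:

```python
import numpy as np

def armijo_step(F, grad, theta, alpha=1.0, c=1e-4, rho=0.5, max_backtracks=60):
    """One accepted iterate of Armijo backtracking, returning the new point
    and the number of objective evaluations spent to accept it."""
    g = grad(theta)
    f0 = F(theta)
    evals = 1  # the reference value F(theta)
    for _ in range(max_backtracks):
        trial = theta - alpha * g
        evals += 1
        if F(trial) <= f0 - c * alpha * np.dot(g, g):  # Armijo condition
            return trial, evals
        alpha *= rho
    return theta, evals  # give up (not reached for this smooth example)

F = lambda t: 0.25 * np.sum(t ** 4)  # gradient is only locally Lipschitz
grad = lambda t: t ** 3

theta = np.array([10.0])
for k in range(3):
    theta, evals = armijo_step(F, grad, theta)
    print(k, float(theta[0]), evals)  # several evaluations per acceptance
```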
**Method** **Convergence** **Exploding Evaluations** ------------------------------------------- ------------------------------------------------------------------------- --------------------------- Armijo's Backtracking Method [@armijo1966minimization Corollary 2], [@zhang2020firstorder Theorem 5] Newton's Method with Cubic Regularization [@nesterov2006cubic Theorem 1] Lipschitz Constant(s) Line Search Methods [@nesterov2012GradientMF Theorem 3], [@curtis2018exploiting Theorem 2] Adaptive Cubic Regularization [@cartis2011cubic1 Theorem 2.5], [@cartis2011cubic2 Theorem 2.1] : Gradient methods that may use multiple objective function evaluations per accepted iterate. For each method, a short description, a convergence result reference, and an exploding evaluation example are given. To reiterate, for realistic assumptions on optimization problems arising in data science applications, gradient methods that do not use objective evaluations can diverge catastrophically, while those that avoid such situations can require an exponentially-increasing number of objective function evaluations per accepted iteration. Thus, gradient methods that use objective evaluations periodically or in a problem-driven manner seem to be the best choices to mitigate catastrophic divergence and the high costs of objective function evaluations. Notable gradient methods in this category are enumerated presently. 1. In [@polyak1969], the gradient method's step is determined by the (estimated) optimality gap and can be shown to converge for convex functions. However, this method readily experiences catastrophic divergence for nonconvex optimization problems which have locally Lipschitz continuous gradients. See [10.10](#subsec:cd-polyaks){reference-type="ref" reference="subsec:cd-polyaks"}. 2. In [@grippo1991class], the gradient method switches between using a (nonmonotone) line search in some iterations and not using objective function evaluations in other iterations. 
This gradient method uses a line search to ensure that the iterates remain in a fixed level set of the objective function, and assumes the compactness of the level set. This method is the most promising for data science applications, yet seems to suffer in two ways. First, this method requires a rapid decay in the gradient function to avoid the line search, which may not be possible as the method approaches a solution for a poorly conditioned problem. As a result, the method would begin to incur many expensive objective function evaluations. Second, the method requires the compactness of the level set, which fails to hold, say, for linearly separating two perfectly separable sets of data points as any scaling of the linear discriminant is a solution. In summary, existing gradient methods are poorly suited for optimization problems in data science. To address this gap, we introduce a novel methodology that allows for many procedures with different step size strategies and search direction strategies (e.g., Quasi-Newton directions). Importantly, any procedure of our methodology prevents catastrophic divergence by using objective function evaluations in an economical, problem-driven manner (see [Theorem 2](#result-objective-function){reference-type="ref" reference="result-objective-function"}), while also having some guarantees of convergence of the gradient function (see [Theorem 3](#result-gradient){reference-type="ref" reference="result-gradient"}). Our gradient methodology is based on a novel analysis technique developed in [@patel2022globalconvergence; @patel2022stoppingcriteria; @patel2023gradient], which makes use of triggering events as a theoretical tool to understand how gradient descent with diminishing step-sizes behaves on general nonconvex functions. Specifically, our gradient methodology converts these theoretical triggering events into practical tests, which are used to determine when to evaluate the objective function and modify algorithm parameters. 
We also introduce a novel step size scheme that produces a highly effective procedure when paired with a negative gradient search direction (see [4](#sec:StepSize){reference-type="ref" reference="sec:StepSize"}). Through experiments, we show that our procedure is highly competitive with standard approaches on unconstrained optimization problems for the CUTEst suite (see [5](#sec:Results-cutest){reference-type="ref" reference="sec:Results-cutest"}). We then show that our procedure is dominant on optimization problems for finding estimates from generalized estimating equations (see [6](#sec:Results-gee){reference-type="ref" reference="sec:Results-gee"}). Consequently, to the best of our knowledge, we provide a general gradient methodology---along with a novel, sophisticated step size procedure---that is practical and rigorously justified for solving important optimization problems arising in data science applications. #### Summary of our contributions. 1. Excluding the case of gradient descent with diminishing step-sizes, we show that objective-function-evaluation-free methods can diverge catastrophically. Of note, we show that the Barzilai-Borwein method can diverge catastrophically, which seems to have been an outstanding question in the literature [see @burdakov2019stabilized], but was known to happen in practice. See [1](#table-objective-free){reference-type="ref" reference="table-objective-free"}. 2. We show that methods which use objective function information to determine when to accept iterates can incur an exponentially increasing number of objective function evaluations per accepted iteration, which is an important consideration for optimization problems with expensive objective function evaluations (up to an arbitrarily specified number of accepted iterations in light of [@zhang2020firstorder Theorem 5]). See [3](#table-multiple-objective-evaluation){reference-type="ref" reference="table-multiple-objective-evaluation"}. 3. 
We show that some common assumptions about the smoothness of objective functions, such as globally Lipschitz continuous Hessian functions and the more recent sub-quadratic modulus of continuity condition on the gradient function, are inappropriate for optimization problems arising in data science. See [2](#table-data-science-problems){reference-type="ref" reference="table-data-science-problems"}. 4. We introduce a novel nonmonotone Armijo condition that does not require a local model of the objective function, and which is distinct from the standard Armijo condition and the one developed by [@grippo1986nonmonotone; @grippo1989truncated; @grippo1991class]. See the Nonmonotone Armijo Condition heading in [3.2](#subsec-method){reference-type="ref" reference="subsec-method"}. 5. We introduce a novel methodology that economically uses objective function evaluations to avoid optimality-gap divergence, while still providing a global convergence analysis under realistic assumptions on optimization problems for data science. See [\[alg-general-algorithm,result-objective-function,result-gradient\]](#alg-general-algorithm,result-objective-function,result-gradient){reference-type="ref" reference="alg-general-algorithm,result-objective-function,result-gradient"}. 6. We provide rather strong results when we restrict our assumptions from arbitrary local Lipschitz continuity of the gradient function to the case where the local Lipschitz rank is allowed to grow with the optimality gap and the square of the gradient function. We note that this is a substantial improvement over [@li2023convex], which is only able to do so when the Lipschitz rank grows sub-quadratically. See [Theorem 3](#result-gradient){reference-type="ref" reference="result-gradient"}. 7. We provide a novel, sophisticated step size methodology that, when paired with a negative gradient search direction and within our methodology, produces a procedure that is quite useful in practice. 
See [4](#sec:StepSize){reference-type="ref" reference="sec:StepSize"}. #### Organization of this work. In [2](#sec:Problem){reference-type="ref" reference="sec:Problem"}, we introduce the optimization problem of interest. In [3](#sec:Algo){reference-type="ref" reference="sec:Algo"}, we introduce the intuition of our methodology using a specific procedure, discuss our general methodology, and provide its global convergence analysis. In [4](#sec:StepSize){reference-type="ref" reference="sec:StepSize"}, we introduce a novel step size procedure that generates a rather practical procedure from our methodology. In [5](#sec:Results-cutest){reference-type="ref" reference="sec:Results-cutest"}, we compare procedures from our methodology to standard methods on optimization problems from the CUTEst test suite. In [6](#sec:Results-gee){reference-type="ref" reference="sec:Results-gee"}, we compare procedures from our methodology to standard methods on optimization problems arising from Generalized Estimating Equations, and we provide a brief overview of this class of data science problems. In [7](#sec:Conclusion){reference-type="ref" reference="sec:Conclusion"}, we conclude. In [8](#sec:Notation){reference-type="ref" reference="sec:Notation"}, we provide a table of notation. # Problem Formulation {#sec:Problem} Motivated by the properties of common data science problems (see ), we aim to (locally) solve $$\label{problem} \min_{\theta \in \mathbb{R}^n} F(\theta),$$ where the objective function, $F: \mathbb{R}^{n} \to \mathbb{R}$, satisfies the following assumptions. [\[as-bounded below\]]{#as-bounded below label="as-bounded below"} The objective function, $F$, is bounded below by some constant $F_{l.b.} > -\infty$. [\[as-loc-lip-cont\]]{#as-loc-lip-cont label="as-loc-lip-cont"} $\forall \theta \in \mathbb{R}^n$, the gradient function $\dot F(\theta) := \nabla F(\psi) \vert_{\psi=\theta}$ exists and is locally Lipschitz continuous. 
For clarity, we define local Lipschitz continuity as follows. **Definition 1**. *A function $G: \mathbb{R}^n \to \mathbb{R}^n$ is locally Lipschitz continuous if for every $\theta \in \mathbb{R}^n$, there exist an open ball $\mathcal{N}$ containing $\theta$ and a constant $\mathcal L \geq 0$ such that $\forall \phi, \psi \in \mathcal{N}$, $$\label{eqn-lipschitz-bound} \left\Vert G(\phi) - G(\psi) \right\Vert_2 \leq \mathcal L \left\Vert \phi - \psi \right\Vert_2.$$* Equivalently, a function $G$ is locally Lipschitz continuous if and only if for every compact set $C \subset \mathbb{R}^n$, there exists an $\mathcal L(C) \geq 0$ such that [\[eqn-lipschitz-bound\]](#eqn-lipschitz-bound){reference-type="ref" reference="eqn-lipschitz-bound"} holds for all $\phi, \psi \in C$. We never assume that we have knowledge of the local Lipschitz rank in our gradient method. # Our General Method {#sec:Algo} Motivated by the theoretical analysis technique developed in [@patel2022globalconvergence; @patel2022stoppingcriteria; @patel2023gradient] around triggering events, we present a novel gradient method that turns these theoretical triggering events into practical tests to monitor the behavior of the iterate sequence, determine when to check the objective function, and determine when to adapt algorithm parameters. In [3.1](#subsec-intuition){reference-type="ref" reference="subsec-intuition"}, we present the intuition of this procedure and an example procedure of the method. In [3.2](#subsec-method){reference-type="ref" reference="subsec-method"}, we present our general method and important categories of variations such as the choice of the step size procedure. In [3.3](#subsec-global-convergence){reference-type="ref" reference="subsec-global-convergence"}, we conduct a global convergence analysis of our general method. 
## Intuition and An Example of Our Method {#subsec-intuition} $F, \dot F, \theta_{0}, \epsilon > 0$ $\delta \leftarrow 1$ $k \leftarrow 0$ $\tau_{\mathop{\mathrm{obj}}} \leftarrow F(\theta_0)$ $j, \psi_0 \leftarrow 0, \theta_k$ $\tau_{\mathop{\mathrm{iter}},\mathop{\mathrm{exit}}},~\tau_{\mathop{\mathrm{iter}},\max} \leftarrow 3 \Vert \dot F(\theta_k) \Vert_2, 100$ $\tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{lower}}},~\tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{upper}}} \leftarrow 0.5 \Vert \dot F(\theta_k) \Vert_2, 1.5 \Vert \dot F(\theta_k) \Vert_2$ $\theta_{k+1}, \delta, k \leftarrow \theta_k, 0.5 \delta, k+1$ $\theta_{k+1}, k, \tau_{\mathop{\mathrm{obj}}} \leftarrow \psi_j, k+1, F(\psi_j)$ $\theta_{k+1}, \delta, k, \tau_{\mathop{\mathrm{obj}}} \leftarrow \psi_j, 1.5\delta, k+1, F(\psi_j)$ Exit Inner Loop $\psi_{j+1}, j \leftarrow \psi_j - \delta \dot F(\psi_j), j+1$ $\theta_k$ We will begin by discussing the standard gradient method. We will then reason about the short-term behavior of this gradient method in order to introduce a specific case of our procedure. Specifically, we will use our reasoning about the short-term behavior of the gradient method to introduce inexpensive tests. When the tests are triggered (i.e., are evaluated as true), we will evaluate the objective function at this triggering iterate and modify the standard gradient method based on this information. Consider an initial point, $\theta_0 \in \mathbb{R}^n$. Then, the standard gradient method generates a sequence, $\lbrace \theta_k : k\in\mathbb{N} \rbrace$, according to $$\theta_{k} = \theta_{k-1} - \alpha \dot F(\theta_{k-1}), ~ k \in \mathbb{N}.$$ This gradient method's iterates may eventually converge to a solution, oscillate in a cycle, approach a limit cycle, or diverge. While this asymptotic iterate behavior is important, the short-term iterate behavior is what we need to reason about to build our algorithm. 
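To illustrate the divergence possibility just mentioned, here is a small experiment of ours (not from the paper) on $F(\theta)=\theta^4/4$, whose gradient is only locally Lipschitz: the same constant step size converges from one starting point and diverges from another.

```python
def constant_step_gd(grad, theta0, alpha, iters):
    """The standard gradient method: theta_k = theta_{k-1} - alpha * grad."""
    theta = theta0
    for _ in range(iters):
        theta = theta - alpha * grad(theta)
    return theta

grad = lambda t: t ** 3  # gradient of F(t) = t^4 / 4: locally Lipschitz only

print(constant_step_gd(grad, 1.0, 0.1, 1000))  # settles near zero
print(constant_step_gd(grad, 10.0, 0.1, 5))    # magnitude explodes
```

The step size that is stable near the origin is far too large where the local Lipschitz constant of the gradient is big, which is exactly the failure mode a fixed rule cannot detect without objective information.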
The short-term iterate behavior can be reasoned about by imagining two constructions: a closed ball centered at $\theta_0$ of arbitrary, finite radius; and a positive open interval containing $\Vert \dot F(\theta_0) \Vert_2$. Then, trivially, the iterates will either exit, or remain inside of, the ball; and the iterates' gradient norms will either exit, or remain inside, the interval containing $\Vert \dot F(\theta_0) \Vert_2$. Based on these trivial constructions and the behavior of the iterates relative to them, our gradient algorithm tests them at each iteration. Specifically, at each iterate, our algorithm tests: 1. Has the iterate exited the closed ball around $\theta_0$? 2. Has the iterate's gradient norm escaped the interval containing $\Vert \dot F(\theta_0) \Vert_2$? 3. If the iterate has not exited the closed ball around $\theta_0$ and its gradient norm has remained in the interval containing $\Vert \dot F(\theta_0) \Vert_2$, has it been a "long time" since any of the above situations occurred? If any of the above tests returns true at a given iterate, then our algorithm checks whether the objective function value has decreased sufficiently at the triggering iterate relative to the reference value, $F(\theta_0)$. If the objective function value at the triggering iterate has not decreased sufficiently, then we are concerned about non-convergence, and we address this by restarting the procedure at $\theta_0$ and by scaling the step size down. If the objective function value at the triggering iterate has decreased relative to $F(\theta_0)$, then we suspect that this iterate is closer to a solution than $\theta_0$. In this case, we will shift the closed ball to be around the triggering iterate, shift the interval to be positive and contain the gradient norm of the triggering iterate, and we will take the objective function value at this triggering iterate as our new reference value. 
Moreover, we can adjust the step size as well. For example, if we observe that the gradient norm has dropped below the interval at the triggering iterate, then we might conservatively keep the step size the same. If we observe that the gradient norm has not dropped below the interval at the triggering iterate, then we might choose to increase the step size to take advantage of large gradient values to make more progress. As a specific example of our method, the above procedure is formalized in with supplied example values for the radius of the closed ball, $\tau_{\mathop{\mathrm{iter}},\mathop{\mathrm{exit}}}$; the gradient-norm interval $(\tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{lower}}},\tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{upper}}})$; and the number of iterates since a triggering iterate, $\tau_{\mathop{\mathrm{iter}},\max}$. The procedure in differs from the above description as it is reorganized into an inner loop---the iterates between positive tests---and an outer loop---the iterates whenever a positive test is triggered. By examining , our procedure has two important properties (which we formalize later). First, our procedure guarantees $F(\theta_k) \leq F(\theta_0)$ for all $k \in \mathbb{N}$. Hence, our procedure cannot experience the catastrophic divergence that is endemic to objective-function-evaluation-free methods (see ). Second, by and the use of a fixed ball around the current iterate, our procedure must produce an outer loop iterate distinct from previous iterates with a finite worst-case number of objective function evaluations. Moreover, this number can only increase multiplicatively between two distinct iterates if the local Lipschitz constant of the gradient function is decaying multiplicatively. In turn, the iterates are entering a shallow region of the objective function, which will rapidly trigger the overall stopping condition for the outer loop. 
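The inner/outer-loop organization just described can be sketched as follows. This is our simplified rendition (plain gradient steps, no step-size increases, and illustrative parameter values), not the paper's full procedure:

```python
import numpy as np

def triggered_gd(F, grad, theta0, eps=1e-6, max_outer=200):
    """Outer loop: objective evaluations only at triggering iterates.
    Inner loop: objective-free gradient steps until a test triggers."""
    theta = np.asarray(theta0, dtype=float)
    delta, tau_obj = 1.0, F(theta)          # step scaling, reference value
    for _ in range(max_outer):
        g0 = np.linalg.norm(grad(theta))
        if g0 <= eps:
            break
        r_exit, j_max = 3.0 * g0, 100       # ball radius, "long time" cap
        g_lo, g_hi = 0.5 * g0, 1.5 * g0     # gradient-norm interval
        psi = theta.copy()
        for _ in range(j_max):              # objective-free inner loop
            psi = psi - delta * grad(psi)
            gn = np.linalg.norm(grad(psi))
            if np.linalg.norm(psi - theta) > r_exit or not (g_lo < gn < g_hi):
                break                       # a test triggered
        if F(psi) < tau_obj:                # objective check at the trigger
            theta, tau_obj = psi, F(psi)    # accept; recenter the tests
        else:
            delta *= 0.5                    # reject; restart, shrink step

    return theta

F = lambda t: 0.25 * np.sum(t ** 4)
grad = lambda t: t ** 3
theta = triggered_gd(F, grad, np.array([10.0]))
print(theta, F(theta))
```

Note that the objective is evaluated only once per outer iteration, no matter how many inner gradient steps were taken before a test fired.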
As a result, our procedure hedges against the exponentially increasing number of objective evaluations that can plague the methods in .

## Our Method {#subsec-method}

**Inputs:** $F$, $\dot F$, $\theta_{0}$, $\epsilon > 0$; subroutines $\mathrm{StepDirection()}$ and $\mathrm{StepSize()}$; parameters $\sigma_{\mathop{\mathrm{lower}}} \in (0,1)$, $\sigma_{\mathop{\mathrm{upper}}} \geq 1$, $w \in \mathbb{N}$, and $\rho \in (0,1)$.

**Initialization:** $k \leftarrow 0$; $\delta_k \leftarrow 1$; $\tau_{\mathop{\mathrm{obj}}}^0 \leftarrow F(\theta_0)$; select $\tau_{\mathop{\mathrm{iter}},\mathop{\mathrm{exit}}}^0, \tau_{\mathop{\mathrm{iter}},\max}^0, \tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{lower}}}^0, \tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{upper}}}^0$.

**Outer loop:** set $j, \psi_0^k \leftarrow 0, \theta_k$ and enter the inner loop.

**Inner loop:** compute $\gamma_j^k, \alpha_j^k \leftarrow \mathrm{StepDirection()}, \mathrm{StepSize()}$. If one of the tests is triggered at $\psi_j^k$:

- if the nonmonotone Armijo condition fails, set $\theta_{k+1}, \delta_{k+1} \leftarrow \theta_k, \sigma_{\mathop{\mathrm{lower}}} \delta_k$ and select $\tau_{\mathop{\mathrm{iter}},\mathop{\mathrm{exit}}}^{k+1}, \tau_{\mathop{\mathrm{iter}},\max}^{k+1}, \tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{lower}}}^{k+1}, \tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{upper}}}^{k+1}$;

- if it holds and the step size is kept, set $\theta_{k+1}, \delta_{k+1} \leftarrow \psi_j^k, \delta_k$, set $\tau_{\mathop{\mathrm{obj}}}^{k+1}$ by [\[eqn-nonmonotone-threshold\]](#eqn-nonmonotone-threshold){reference-type="ref" reference="eqn-nonmonotone-threshold"}, and select $\tau_{\mathop{\mathrm{iter}},\mathop{\mathrm{exit}}}^{k+1}, \tau_{\mathop{\mathrm{iter}},\max}^{k+1}, \tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{lower}}}^{k+1}, \tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{upper}}}^{k+1}$;

- if it holds and the step size is increased, set $\theta_{k+1}, \delta_{k+1} \leftarrow \psi_j^k, \sigma_{\mathop{\mathrm{upper}}}\delta_k$, set $\tau_{\mathop{\mathrm{obj}}}^{k+1}$ by [\[eqn-nonmonotone-threshold\]](#eqn-nonmonotone-threshold){reference-type="ref" reference="eqn-nonmonotone-threshold"}, and select $\tau_{\mathop{\mathrm{iter}},\mathop{\mathrm{exit}}}^{k+1}, \tau_{\mathop{\mathrm{iter}},\max}^{k+1}, \tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{lower}}}^{k+1}, \tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{upper}}}^{k+1}$.

In each of these cases, set $k \leftarrow k+1$ and exit the inner loop. Otherwise, update $\psi_{j+1}^k, j \leftarrow \psi_j^k + \delta_k \alpha_j^k \gamma_j^k, j+1$ and continue the inner loop.

**Return:** $\theta_k$.

 is a generalization of that allows for certain choices of subroutines and parameters. These subroutines and parameters can be *designed* to satisfy sensible properties that also enable a reasonable convergence analysis. The properties for these subroutines and parameters are discussed next.

#### Step Direction

In , the step direction is the negative gradient direction, $-\dot F(\psi)$, at the current point, $\psi$. In , the step direction at a point $\psi$ is generated by a procedure $\mathrm{StepDirection()}$, which---owing to our motivation of avoiding objective function evaluations---cannot make use of additional objective function evaluations and may be stateful. Because of the latter property, $\mathrm{StepDirection}()$ may generate two distinct step directions at the same point $\psi$ if $\psi$ is visited at two distinct times in the algorithm. Thus, to control $\mathrm{StepDirection()}$, we require the following properties.

[\[property-stepdirection-negative\]]{#property-stepdirection-negative label="property-stepdirection-negative"} For any compact set $C \subset \mathbb{R}^n$, $\exists \underline g(C) > 0$ such that any $\gamma$ generated by $\mathrm{StepDirection()}$ at $\psi \in C$ satisfies $-\underline g(C) \Vert \dot F(\psi) \Vert_2^2 \geq \dot F(\psi)^\intercal \gamma$.

[\[property-stepdirection-propgrad\]]{#property-stepdirection-propgrad label="property-stepdirection-propgrad"} For any compact set $C \subset \mathbb{R}^n$, $\exists \overline g(C) > 0$ such that any step direction $\gamma$ generated by $\mathrm{StepDirection()}$ at $\psi \in C$ satisfies $\Vert \gamma \Vert_2 \leq \overline g(C) \Vert \dot F(\psi) \Vert_2$.
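As a concrete illustration (our own sketch, not part of the formal development), these two properties can be checked numerically for the negative-gradient choice of $\mathrm{StepDirection()}$, for which the constants $\underline g(C) = \overline g(C) = 1$ suffice on any compact set:

```python
import numpy as np

def step_direction(grad):
    """Negative-gradient step direction, as in the baseline algorithm."""
    return -grad

g_lower, g_upper = 1.0, 1.0  # valid constants for the negative-gradient rule
rng = np.random.default_rng(0)
for _ in range(100):
    g = rng.standard_normal(3)
    gamma = step_direction(g)
    # Property 1 (sufficient descent): F'(psi)^T gamma <= -g_lower * ||F'(psi)||^2.
    assert g @ gamma <= -g_lower * (g @ g) + 1e-12
    # Property 2 (proportional to gradient): ||gamma|| <= g_upper * ||F'(psi)||.
    assert np.linalg.norm(gamma) <= g_upper * np.linalg.norm(g) + 1e-12
```

A stateful routine (e.g., one with momentum) would need its internal state bounded on compact sets for the same checks to pass uniformly.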
We now contextualize [\[property-stepdirection-negative,property-stepdirection-propgrad\]](#property-stepdirection-negative,property-stepdirection-propgrad){reference-type="ref" reference="property-stepdirection-negative,property-stepdirection-propgrad"} using some specific examples. First, consider the case when $\mathrm{StepDirection}()$ is simply the negative gradient direction as in . Then, [\[property-stepdirection-negative,property-stepdirection-propgrad\]](#property-stepdirection-negative,property-stepdirection-propgrad){reference-type="ref" reference="property-stepdirection-negative,property-stepdirection-propgrad"} are satisfied with $\underline g(C) = \overline g(C) = 1$ for any compact $C \subset \mathbb{R}^n$. As another example, consider an objective function that is twice differentiable, strongly convex, and globally Lipschitz smooth. If, furthermore, the step direction is generated by Newton's method, then $\underline g(C)$ is the reciprocal of the Lipschitz rank and $\overline g(C)$ is the reciprocal of the strong convexity constant for any compact $C \subset \mathbb{R}^n$ [cf. @bertsekas1999 Eq. 1.27]. Thus, [\[property-stepdirection-negative,property-stepdirection-propgrad\]](#property-stepdirection-negative,property-stepdirection-propgrad){reference-type="ref" reference="property-stepdirection-negative,property-stepdirection-propgrad"} are generalizations of such important special cases, and allow for a broad and useful selection of step directions. Moreover, by [\[property-stepdirection-propgrad\]](#property-stepdirection-propgrad){reference-type="ref" reference="property-stepdirection-propgrad"}, if a first-order stationary point is found and accepted, then the $\gamma$ generated at this point will be zero and the algorithm will terminate.

#### Step Size

In , the step size is controlled by $\delta$ and how it is changed. In , $\mathrm{StepSize()}$ and $\lbrace \delta_k \rbrace$ determine the step size.
As $\mathrm{StepSize()}$ may be stateful, the same behavior can occur as described for $\mathrm{StepDirection()}$. Hence, to control $\mathrm{StepSize()}$, we require the following properties.

[\[property-stepsize-upper\]]{#property-stepsize-upper label="property-stepsize-upper"} For any compact set $C \subset \mathbb{R}^n$, $\exists \overline{\alpha}(C) > 0$ such that any $\alpha$ generated by $\mathrm{StepSize()}$ at any $\psi \in C$ satisfies $\alpha \leq \overline\alpha(C)$.

[\[property-stepsize-lower\]]{#property-stepsize-lower label="property-stepsize-lower"} For any compact set $C \subset \mathbb{R}^n$, $\exists \underline{\alpha}(C) > 0$ such that any $\alpha$ generated by $\mathrm{StepSize()}$ at any $\psi \in C$ satisfies $\alpha \geq \underline\alpha(C)$.

Trivially, [\[property-stepsize-upper,property-stepsize-lower\]](#property-stepsize-upper,property-stepsize-lower){reference-type="ref" reference="property-stepsize-upper,property-stepsize-lower"} are satisfied for the step size procedure used in [\[alg-naive-algorithm\]](#alg-naive-algorithm){reference-type="ref" reference="alg-naive-algorithm"}, which is implicitly $\mathrm{StepSize()}=1$. They can be readily checked for a number of more sophisticated schemes in a variety of contexts. That is to say, [\[property-stepsize-upper,property-stepsize-lower\]](#property-stepsize-upper,property-stepsize-lower){reference-type="ref" reference="property-stepsize-upper,property-stepsize-lower"} appear to encompass a wide spectrum of step size selection schemes. Finally, $\lbrace \delta_k \rbrace$ also contributes to the choice of step size. Accordingly, $\delta_{k+1}$ is either kept the same as $\delta_k$, reduced by a factor of $\sigma_{\mathop{\mathrm{lower}}} \in (0,1)$, or increased by a factor of $\sigma_{\mathop{\mathrm{upper}}} \geq 1$. In , the values of $\sigma_{\mathop{\mathrm{lower}}}$ and $\sigma_{\mathop{\mathrm{upper}}}$ are set to $0.5$ and $1.5$, respectively.
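One simple way to satisfy both properties by construction is to clip any candidate step size into a fixed interval. The sketch below (ours; the class name and the Barzilai-Borwein-style estimate are illustrative, not prescribed by the text) is a stateful $\mathrm{StepSize()}$ whose output always lies in $[\underline\alpha, \overline\alpha]$:

```python
import numpy as np

class ClippedBBStepSize:
    """Stateful step-size rule: a Barzilai-Borwein-style estimate clipped
    into [alpha_lo, alpha_up], so the upper/lower bound properties hold
    by construction on any compact set."""

    def __init__(self, alpha_lo=1e-4, alpha_up=1.0):
        self.alpha_lo, self.alpha_up = alpha_lo, alpha_up
        self.prev_psi, self.prev_grad = None, None  # internal state

    def __call__(self, psi, grad):
        alpha = self.alpha_up  # default before any history exists
        if self.prev_psi is not None:
            s, y = psi - self.prev_psi, grad - self.prev_grad
            if abs(s @ y) > 1e-16:
                alpha = (s @ s) / abs(s @ y)  # BB1-style estimate
        self.prev_psi, self.prev_grad = psi.copy(), grad.copy()
        return float(np.clip(alpha, self.alpha_lo, self.alpha_up))

ss = ClippedBBStepSize()
a = ss(np.array([0.0]), np.array([2.0]))
assert ss.alpha_lo <= a <= ss.alpha_up
```

Note that no objective function evaluations are used, consistent with the motivation of the method.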
#### Nonmonotone Armijo Condition

Recall that the standard Armijo condition accepts a proposed iterate $\psi \in \mathbb{R}^n$ from a current iterate $\theta \in \mathbb{R}^n$ if $F(\psi) \leq F(\theta) + \rho \dot F(\theta)^\intercal(\psi - \theta)$, where $\rho > 0$ is a relaxation parameter that is typically chosen to be small [see @bertsekas1999 Eq. 1.11]. In [@grippo1986nonmonotone; @grippo1989truncated; @grippo1991class], the standard Armijo condition is generalized such that, given the $w \in \mathbb{N}$ most recent iterates, $\lbrace \theta_{k},\ldots,\theta_{k-w+1} \rbrace \subset \mathbb{R}^n$, a proposed iterate $\psi \in \mathbb{R}^n$ is accepted if $F(\psi) \leq \max \lbrace F(\theta_j) : j=k-w+1,\ldots, k \rbrace + \rho \dot F(\theta_k)^\intercal (\psi - \theta_k)$, where $\rho > 0$ again plays the role of a relaxation parameter. This generalized Armijo condition, called a nonmonotone Armijo condition, allows for a nonmonotonic change in the objective function between iterates when $w > 1$, and reduces to the standard Armijo condition when $w = 1$. To compare our nonmonotone Armijo condition to the standard version and the aforementioned nonmonotone Armijo condition, we need to define a subsequence of $\mathbb{N}$ for when our outer loop iterates are distinct. Let $$\label{eqn-iterate-subsequence} \ell_0 = 0\quad\mathrm{and}\quad \ell_t = \min\lbrace k > \ell_{t-1}: \theta_k \neq \theta_{\ell_{t-1}} \rbrace, ~ \forall t \in \mathbb{N},$$ with the convention that $\ell_t = \infty$ if no finite $k$ satisfies the property, and $\ell_{t} = \infty$ if $\ell_{t-1} = \infty$. Also, let $L: \mathbb{N} \cup \lbrace 0 \rbrace \to \mathbb{N} \cup \lbrace 0 \rbrace$ be such that $$\label{eqn-iterate-subsequence-map} L(k) = \max \lbrace t : \ell_t \leq k \rbrace,$$ which specifies the element of $\lbrace \ell_t \rbrace$ that produced the most recent distinct iterate up to iterate $k$.
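The quantities $\lbrace \ell_t \rbrace$ and $L(k)$ can be computed directly from a recorded sequence of outer loop iterates, as in this minimal sketch of ours (scalar iterates are used purely for illustration; any comparable iterate type works):

```python
def distinct_subsequence(thetas):
    """ell_t: indices at which the outer loop iterate changes value."""
    ell = [0]  # ell_0 = 0 by definition
    for k in range(1, len(thetas)):
        if thetas[k] != thetas[ell[-1]]:  # first k with theta_k != theta_{ell_{t-1}}
            ell.append(k)
    return ell

def L(k, ell):
    """Index of the most recent distinct iterate up to iterate k."""
    return max(t for t, lk in enumerate(ell) if lk <= k)

# theta_0 repeated once (a rejection), then two accepted new iterates.
thetas = [5.0, 5.0, 3.0, 3.0, 2.0]
ell = distinct_subsequence(thetas)
assert ell == [0, 2, 4]
assert L(3, ell) == 1                        # ell_1 = 2 is the latest index <= 3
assert thetas[ell[L(3, ell)]] == thetas[3]   # theta_k = theta_{ell_{L(k)}}
```

The last assertion is exactly the identity $\theta_k = \theta_{\ell_{L(k)}}$ established in the lemma that follows.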
Moreover, with this notation, define $$\label{eqn-nonmonotone-threshold} \tau_{\mathop{\mathrm{obj}}}^k = \max\left\lbrace F(\theta_{\ell_{\max\lbrace L(k) - w + 1, 0 \rbrace}}),F(\theta_{\ell_{\max\lbrace L(k) - w + 1, 0 \rbrace+1}}),\ldots ,F(\theta_{\ell_{L(k)}}) \right\rbrace,$$ which sets $\tau_{\mathop{\mathrm{obj}}}^k$ to be the maximum objective value over (up to) the $w$ most recent, distinct outer loop iterates. To develop a familiarity with these quantities, two of their simple properties are collected in the following lemma, and a toy example of their behavior is shown in [\[figure-objective-threshold-diagram\]](#figure-objective-threshold-diagram){reference-type="ref" reference="figure-objective-threshold-diagram"} with several cases pointed out after.

**Lemma 1**. *Let $\lbrace \ell_t : t+1 \in \mathbb{N}\rbrace$, $L: \mathbb{N} \cup \lbrace 0 \rbrace \to \mathbb{N} \cup \lbrace 0 \rbrace$, and $\lbrace \tau_{\mathop{\mathrm{obj}}}^k : k+1 \in \mathbb{N} \rbrace$ be defined as in [\[eqn-iterate-subsequence,eqn-iterate-subsequence-map,eqn-nonmonotone-threshold\]](#eqn-iterate-subsequence,eqn-iterate-subsequence-map,eqn-nonmonotone-threshold){reference-type="ref" reference="eqn-iterate-subsequence,eqn-iterate-subsequence-map,eqn-nonmonotone-threshold"}, respectively. Then, the following properties hold.*

1. *For any $t+1 \in \mathbb{N}$, if $\ell_t, \ell_{t+1} < \infty$, then $\theta_{\ell_t} = \theta_{\ell_t + 1} = \cdots = \theta_{\ell_{t+1} - 1}$. Hence, $\forall k +1 \in \mathbb{N}$, $\theta_{k} = \theta_{\ell_{L(k)}}$.*

2. *For any $t+1 \in \mathbb{N}$, if $\ell_t, \ell_{t+1} < \infty$, then $\tau_{\mathop{\mathrm{obj}}}^{\ell_t} = \tau_{\mathop{\mathrm{obj}}}^{\ell_t + 1} = \cdots = \tau_{\mathop{\mathrm{obj}}}^{\ell_{t+1}-1}$. Hence, $\forall k +1 \in \mathbb{N}$, $\tau_{\mathop{\mathrm{obj}}}^{k} = \tau_{\mathop{\mathrm{obj}}}^{\ell_{L(k)}}$.*

 is a toy example of the possible behaviors of the objective function and threshold at different outer loop iterates.
We take as given that the values of $\lbrace \ell_t \rbrace$ are correct and that our nonmonotone Armijo condition is satisfied at these points. We now discuss several interesting behaviors from this toy example.[^2]

1. At $k=1$, a new iterate is *not* accepted based on the figure. Hence, at $k=1$, $\theta_1 = \theta_0$, which implies $F(\theta_1) = F(\theta_0)$ (i.e., the blue points are at the same level), $\tau_{\mathop{\mathrm{obj}}}^1 = \tau_{\mathop{\mathrm{obj}}}^0$, $L(1) = 0$ (i.e., the red points are at the same level), and $\theta_1 = \theta_{\ell_{L(1)}} = \theta_{\ell_0} = \theta_0$.

2. At $k=2$, a new iterate is accepted according to the figure, which has $F(\theta_2)$ distinct from $F(\theta_1) = F(\theta_0)$ (i.e., a change in the level of the blue point relative to $k=1$). Yet, at $k=2$, the threshold remains the same since, according to the figure, $\tau_{\mathop{\mathrm{obj}}}^2 = \max\lbrace F(\theta_{\ell_0}), F(\theta_{\ell_1}) \rbrace = F(\theta_{\ell_0}) = \tau_{\mathop{\mathrm{obj}}}^1$, which renders the red points at the same level. Moreover, at $k=2$, because a new point is accepted, $\ell_1 = 2$, $L(2) = 1$, and $\ell_{L(2)} = 2$.

3. At $k=13$, a new iterate is accepted and so $\ell_6$ is well defined and equal to $13$. Moreover, $\tau_{\mathop{\mathrm{obj}}}^{13} = \max \lbrace F(\theta_{\ell_4}), F(\theta_{\ell_5}), F(\theta_{\ell_6}) \rbrace = \max \lbrace F(\theta_7), F(\theta_9), F(\theta_{13}) \rbrace = F(\theta_{13})$---hence, the red and blue points coincide.

4. At $k=16$, a new iterate is accepted according to the figure, and so $\ell_9$ is well defined and equal to $k=16$. Interestingly, $F(\theta_{15})= F(\theta_{16})$ even though $\theta_{15} \neq \theta_{16}$, which is rendered as the blue points being at the same level.
However, the thresholds change from $k=15$ to $k=16$: $\tau_{\mathop{\mathrm{obj}}}^{15} = \max \lbrace F(\theta_{\ell_{6}}), F(\theta_{\ell_{7}}), F(\theta_{\ell_{8}}) \rbrace = \max \lbrace F(\theta_{13}), F(\theta_{14}), F(\theta_{15}) \rbrace = F(\theta_{13})$ according to the plot, while $\tau_{\mathop{\mathrm{obj}}}^{16} = F(\theta_{14})$.

With this understanding of the notation, we can compare our nonmonotone Armijo condition to the standard Armijo condition and to that of [@grippo1986nonmonotone; @grippo1989truncated; @grippo1991class]. First, unlike the two aforementioned conditions, when the triggering index $j$ exceeds $1$, our condition for acceptance, $F(\psi_{j}^k) < \tau_{\mathop{\mathrm{obj}}}^k + \rho \delta_k \alpha_0^k \dot F(\theta_k)^\intercal \gamma_0^k$, has a fixed right-hand side that does not depend on $\psi_j^k - \theta_k$. Second, when $j>1$ and $w=1$, our condition does not reduce to the standard Armijo condition, whereas the condition in [@grippo1986nonmonotone; @grippo1989truncated; @grippo1991class] does. When $j = 1$, our condition reduces to that of [@grippo1986nonmonotone; @grippo1989truncated; @grippo1991class]. When $j = 1$ and $w=1$, our condition reduces to the standard Armijo condition.

#### Test Parameter Selection

In , the test parameter selection procedure should be designed to satisfy the following properties. We begin with requirements for $\lbrace \tau_{\mathop{\mathrm{iter}},\mathop{\mathrm{exit}}}^k : k+1 \in \mathbb{N} \rbrace$.

[\[property-radius-nonnegative\]]{#property-radius-nonnegative label="property-radius-nonnegative"} For all $k+1 \in \mathbb{N}$, $\tau_{\mathop{\mathrm{iter}},\mathop{\mathrm{exit}}}^k \geq 0$.
[\[property-radius-bounded\]]{#property-radius-bounded label="property-radius-bounded"} For any $\theta \in \mathbb{R}^n$, $\exists \overline \tau_{\mathop{\mathrm{exit}}}(\theta) \geq 0$ such that if $\theta = \theta_k$ then $\tau_{\mathop{\mathrm{iter}},\mathop{\mathrm{exit}}}^k \leq \overline{\tau}_{\mathop{\mathrm{exit}}}(\theta)$.

We now consider requirements on $\tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{lower}}}^k$ and $\tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{upper}}}^k$.

[\[property-grad-lower\]]{#property-grad-lower label="property-grad-lower"} For any $\theta \in \mathbb{R}^n$ and all $k+1 \in \mathbb{N}$ with $\theta_k = \theta$, $\tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{lower}}}^k \in (0, \Vert \dot F(\theta) \Vert_2)$.

[\[property-grad-upper\]]{#property-grad-upper label="property-grad-upper"} For any $\theta \in \mathbb{R}^n$ and all $k+1 \in \mathbb{N}$ with $\theta_k = \theta$, $\tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{upper}}}^k > \Vert \dot F(\theta) \Vert_2$.

Finally, we consider requirements on $\tau_{\mathop{\mathrm{iter}},\max}^k$.

[\[property-iter-max\]]{#property-iter-max label="property-iter-max"} There exists an $\overline \tau_{\max}$ such that $1 \leq \tau_{\mathop{\mathrm{iter}},\max}^k \leq \overline \tau_{\max}$ for all $k + 1 \in \mathbb{N}$.

We see that the choices of these parameters in [\[alg-naive-algorithm\]](#alg-naive-algorithm){reference-type="ref" reference="alg-naive-algorithm"} satisfy the stated properties.
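As an illustrative sketch (the function name and the multipliers are our own example values, not prescribed by the text), one admissible selection rule at an outer iterate with gradient norm $\Vert \dot F(\theta_k) \Vert_2 > 0$ is:

```python
def select_test_parameters(grad_norm, radius=1.0, bar_tau_max=100):
    """Example test-parameter selection satisfying the stated properties:
    a nonnegative, bounded ball radius; a gradient-norm interval strictly
    bracketing ||F'(theta)||_2; and a patience in [1, bar_tau_max]."""
    assert grad_norm > 0  # only needed before a stationary point is found
    tau_exit = radius                # >= 0 and bounded for each theta
    tau_lo = 0.5 * grad_norm         # in (0, ||F'(theta)||_2)
    tau_up = 2.0 * grad_norm         # > ||F'(theta)||_2
    tau_max = min(20, bar_tau_max)   # in [1, bar_tau_max]
    return tau_exit, tau_lo, tau_up, tau_max

te, tl, tu, tm = select_test_parameters(grad_norm=1.5)
assert te >= 0 and 0 < tl < 1.5 < tu and 1 <= tm <= 100
```

Any rule whose interval strictly brackets the current gradient norm and whose radius and patience are bounded would serve equally well.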
Moreover, the above properties also prevent the inner-loop tests from being triggered at $j = 0$ since $\Vert \psi_0^k - \theta_k \Vert = 0 \not > \tau_{\mathop{\mathrm{iter}},\mathop{\mathrm{exit}}}^k \geq 0$ by [\[property-radius-nonnegative\]](#property-radius-nonnegative){reference-type="ref" reference="property-radius-nonnegative"}; $\Vert \dot F(\psi_0^k) \Vert_2 = \Vert \dot F(\theta_k) \Vert_2 \in (\tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{lower}}}^k, \tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{upper}}}^k)$ by [\[property-grad-lower,property-grad-upper\]](#property-grad-lower,property-grad-upper){reference-type="ref" reference="property-grad-lower,property-grad-upper"}; and $\tau_{\mathop{\mathrm{iter}},\max}^k \geq 1$ by [\[property-iter-max\]](#property-iter-max){reference-type="ref" reference="property-iter-max"}.

## Global Convergence Analysis {#subsec-global-convergence}

We now establish the asymptotic properties of our general methodology from a global convergence perspective (**by ignoring the outer loop stopping condition in [\[alg-general-algorithm\]](#alg-general-algorithm){reference-type="ref" reference="alg-general-algorithm"}**). Our analysis proceeds in the following general steps.

1. *Accepting new iterates.* We show that the procedure cannot stop accepting new iterates unless it has already found a first-order stationary point. Mathematically, we prove $\ell_{t+1} < \infty$ if $\dot F(\theta_{\ell_t}) \neq 0$.

2. *Objective function remains bounded.* We next establish that the objective function at the iterates cannot diverge (somewhat obviously) even if the iterates diverge, which addresses our concerns about catastrophic divergence. Less trivially, we will go into detail about the behavior of the thresholds, $\lbrace \tau_{\mathop{\mathrm{obj}}}^k : k+1 \in \mathbb{N} \rbrace$, generated by our procedure.

3.
*Analysis of a gradient subsequence.* We analyze the asymptotic behavior of the gradients at the iterates to conclude that either a stationary point is found in finite time, or a subsequence of the gradients tends to zero if certain growth conditions hold on the constants arising in [\[as-loc-lip-cont,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded\]](#as-loc-lip-cont,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded){reference-type="ref" reference="as-loc-lip-cont,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded"}.

We underscore several points. First, under our rather general problem setting, we cannot produce a useful complexity analysis. Second, under our general properties, we also do not provide a local convergence analysis, as this would be specialized to the design choices of the end-user. Thus, while we can strengthen our analysis under more restrictive settings and more restrictive properties, we leave such specialization to future work for when a sufficiently important problem or algorithm design warrants such restrictions.

#### Accepting new iterates.

For each $k+1 \in \mathbb{N}$, let $j_k \in \mathbb{N}$ denote the triggering iterate for the inner loop, which is bounded by $\overline{\tau}_{\max}$ if [\[property-iter-max\]](#property-iter-max){reference-type="ref" reference="property-iter-max"} holds and is nonzero if [\[property-radius-nonnegative,property-grad-lower,property-grad-upper,property-iter-max\]](#property-radius-nonnegative,property-grad-lower,property-grad-upper,property-iter-max){reference-type="ref" reference="property-radius-nonnegative,property-grad-lower,property-grad-upper,property-iter-max"} hold.
To show that the procedure will accept a new iterate satisfying our nonmonotone Armijo condition if $\dot F(\theta_k) \neq 0$, we first show that the inner loop iterates remain in a compact set relative to the outer loop iterate and that, if two sequential outer loop iterates are identical, their inner loop iterates remain in nested compact sets, as [\[alg-general-algorithm\]](#alg-general-algorithm){reference-type="ref" reference="alg-general-algorithm"} enforces a reduction in the step-size scaling. We then show that, for any compact set in which the outer loop and inner loop iterates remain, there is a sufficiently small step-size scaling parameter that will satisfy our nonmonotone Armijo condition. Hence, as [\[alg-general-algorithm\]](#alg-general-algorithm){reference-type="ref" reference="alg-general-algorithm"} reduces the step-size scaling parameter whenever a triggering iterate is rejected, this sufficiently small step-size scaling parameter will eventually be reached and a new iterate will be accepted.

**Lemma 2**. *Suppose [\[problem\]](#problem){reference-type="ref" reference="problem"} satisfies [\[as-loc-lip-cont\]](#as-loc-lip-cont){reference-type="ref" reference="as-loc-lip-cont"}, and is solved using [\[alg-general-algorithm\]](#alg-general-algorithm){reference-type="ref" reference="alg-general-algorithm"} with [\[property-iter-max,property-stepsize-upper,property-stepdirection-propgrad,property-radius-bounded,property-radius-nonnegative,property-grad-upper\]](#property-iter-max,property-stepsize-upper,property-stepdirection-propgrad,property-radius-bounded,property-radius-nonnegative,property-grad-upper){reference-type="ref" reference="property-iter-max,property-stepsize-upper,property-stepdirection-propgrad,property-radius-bounded,property-radius-nonnegative,property-grad-upper"} initialized at any $\theta_0 \in \mathbb{R}^n$.
Then, for every $k+1 \in \mathbb{N}$, there exists a compact $C_k \subset \mathbb{R}^n$ such that $\theta_k \in C_k$, $\lbrace \psi_{1}^k,\ldots, \psi_{j_k}^k \rbrace \subset C_k$, and, if $\theta_{k} = \theta_{k+1}$ then $C_{k} \supset C_{k+1}$.*

*Proof.* For any $\theta \in \mathbb{R}^n$, let $\overline \tau_{\mathop{\mathrm{exit}}}(\theta)$ be as in [\[property-radius-bounded\]](#property-radius-bounded){reference-type="ref" reference="property-radius-bounded"}. For any $\theta \in \mathbb{R}^n$, define $\mathcal{B}(\theta) = \lbrace \psi: \Vert \psi - \theta \Vert_2 \leq \overline \tau_{\mathop{\mathrm{exit}}}(\theta) \rbrace$; and define $\mathcal G(\theta) = \sup_{\psi \in \mathcal{B}(\theta)} \Vert \dot F(\psi) \Vert_2$, which is finite because of the continuity of the gradient function under [\[as-loc-lip-cont\]](#as-loc-lip-cont){reference-type="ref" reference="as-loc-lip-cont"}. Using [\[property-stepsize-upper,property-stepdirection-propgrad,property-radius-bounded\]](#property-stepsize-upper,property-stepdirection-propgrad,property-radius-bounded){reference-type="ref" reference="property-stepsize-upper,property-stepdirection-propgrad,property-radius-bounded"}, define $$\label{eqn-compactset-outerloop} C_k = \lbrace \psi : \Vert \psi - \theta_k \Vert_2 \leq \overline \tau_{\mathop{\mathrm{exit}}}(\theta_k) + \delta_k \overline\alpha (\mathcal{B}(\theta_k)) \overline g( \mathcal{B}(\theta_k)) \mathcal G(\theta_k) \rbrace, ~\forall k+1 \in \mathbb{N}.$$ We now show that $C_k$ satisfies the desired properties. First, $\theta_k \in C_k$ as the radius of the ball defining $C_k$ is non-negative. Second, by definition, the inner loop iterate $\psi_{i}^k$ satisfies $\Vert \psi_{i}^k - \theta_k \Vert_2 \leq \tau_{\mathop{\mathrm{iter}},\mathop{\mathrm{exit}}}^k \leq \overline \tau_{\mathop{\mathrm{exit}}}(\theta_k)$ for $i=1,\ldots,j_k-1$. In other words, $\psi_{i}^k \in \mathcal{B}(\theta_k)$ for $i=1,\ldots,j_k-1$.
As a result, by [\[property-stepsize-upper\]](#property-stepsize-upper){reference-type="ref" reference="property-stepsize-upper"}, $\alpha_{j_k-1}^k \leq \overline \alpha (\mathcal{B}(\theta_k))$; and, by [\[property-stepdirection-propgrad,as-loc-lip-cont\]](#property-stepdirection-propgrad,as-loc-lip-cont){reference-type="ref" reference="property-stepdirection-propgrad,as-loc-lip-cont"}, $\Vert \gamma_{j_k-1}^k \Vert_2 \leq \overline g(\mathcal{B}(\theta_k)) \Vert \dot F(\psi_{j_k-1}^k) \Vert_2 \leq \overline g(\mathcal{B}(\theta_k))\mathcal G(\theta_k)$. Putting these pieces together, $$\begin{aligned} \Vert \psi_{j_k}^k - \theta_k \Vert_2 &\leq \Vert \psi_{j_k}^k - \psi_{j_k-1}^k \Vert_2 + \Vert \psi_{j_k -1}^k - \theta_k \Vert_2 \\ &\leq \Vert \delta_k \alpha_{j_k-1}^k \gamma_{j_k-1}^k \Vert_2 + \overline \tau_{\mathop{\mathrm{exit}}}(\theta_k) \\ &\leq \delta_k \overline \alpha (\mathcal{B}(\theta_k)) \overline g (\mathcal{B}(\theta_k)) \mathcal G(\theta_k) + \overline \tau_{\mathop{\mathrm{exit}}}(\theta_k).\end{aligned}$$ To summarize, $\psi_{j_k}^k \in C_k$ and $\psi_{i}^k \in \mathcal{B}(\theta_k) \subset C_k$ for $i=1,\ldots,j_k-1$. Finally, when $\theta_{k} = \theta_{k+1}$, $\delta_{k+1} = \sigma_{\mathop{\mathrm{lower}}} \delta_k \leq \delta_k$ since $\sigma_{\mathop{\mathrm{lower}}} \in (0,1)$ as required by [\[alg-general-algorithm\]](#alg-general-algorithm){reference-type="ref" reference="alg-general-algorithm"}. Plugging this information into [\[eqn-compactset-outerloop\]](#eqn-compactset-outerloop){reference-type="ref" reference="eqn-compactset-outerloop"}, it follows that $C_{k} \supset C_{k+1}$. ◻

With the existence of $\lbrace C_k : k+1 \in \mathbb{N} \rbrace$ established, we now show that there is a sufficiently small choice of $\delta_k$ such that the triggering iterate of the inner loop, $\psi_{j_k}^k$, will satisfy our nonmonotone Armijo condition *despite* the condition not depending on the distance between the terminal iterate and the initial iterate.

**Lemma 3**.
*Suppose [\[problem\]](#problem){reference-type="ref" reference="problem"} satisfies [\[as-loc-lip-cont\]](#as-loc-lip-cont){reference-type="ref" reference="as-loc-lip-cont"}, and is solved using [\[alg-general-algorithm\]](#alg-general-algorithm){reference-type="ref" reference="alg-general-algorithm"} with [\[property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper\]](#property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper){reference-type="ref" reference="property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper"} initialized at any $\theta_0 \in \mathbb{R}^n$. Let $C \subset \mathbb{R}^n$ be a compact set such that $C_k \subset C$ for some $k+1 \in \mathbb{N}$. Let $\mathcal L(C)$ denote the Lipschitz rank of the gradient function over $C$. If $\dot F(\theta_k) \neq 0$ and $$\delta_k < \frac{2 (1 - \rho) \underline g(C) }{\overline g(C)^2 \mathcal L(C) \overline \alpha(C)},$$ then $F(\psi_{j_k}^k) < F(\theta_k) + \rho \delta_k \alpha_{0}^k \dot F(\theta_k)^\intercal \gamma_0^k \leq \tau_{\mathop{\mathrm{obj}}}^k + \rho \delta_k \alpha_{0}^k \dot F(\theta_k)^\intercal \gamma_0^k$.* *Proof.* We recall several facts. By [Lemma 2](#result-iterates-in-compactsets){reference-type="ref" reference="result-iterates-in-compactsets"}, $\lbrace \psi_{j}^k : j=0,\ldots, j_k \rbrace \subset C_k \subset C$. 
Hence, by [\[property-stepsize-lower,property-stepsize-upper,property-radius-nonnegative\]](#property-stepsize-lower,property-stepsize-upper,property-radius-nonnegative){reference-type="ref" reference="property-stepsize-lower,property-stepsize-upper,property-radius-nonnegative"}, $0 < \underline\alpha(C) \leq \alpha_j^k \leq \overline\alpha(C) < \infty$ for $j = 0,\ldots, j_k-1$. Moreover, by [\[property-stepdirection-negative,property-stepdirection-propgrad\]](#property-stepdirection-negative,property-stepdirection-propgrad){reference-type="ref" reference="property-stepdirection-negative,property-stepdirection-propgrad"}, $-\underline g(C) \Vert \dot F(\psi_j^k) \Vert_2^2 \geq \dot F(\psi_j^k)^\intercal \gamma_j^k$ and $\overline g(C) \Vert \dot F(\psi_j^k) \Vert_2 \geq \Vert \gamma_j^k \Vert_2$ for $j=0,\ldots,j_k-1$. Note that $j_k \neq 0$ under the given properties, as described at the end of [3.2](#subsec-method){reference-type="ref" reference="subsec-method"}. Now, we use these facts to convert the hypothesis on $\delta_k$ into a more useful form. Specifically, by [\[property-stepsize-upper\]](#property-stepsize-upper){reference-type="ref" reference="property-stepsize-upper"}, $\delta_k \alpha_j^k \leq \delta_k \overline\alpha(C) < [2(1-\rho) \underline g(C)]/[ \overline g^2 (C) \mathcal L(C)]$ for $j=0,\ldots,j_k -1$. By [\[alg-general-algorithm\]](#alg-general-algorithm){reference-type="ref" reference="alg-general-algorithm"} and [\[property-stepsize-lower\]](#property-stepsize-lower){reference-type="ref" reference="property-stepsize-lower"}, $\delta_k \alpha_j^k > 0$, which implies $(\delta_k \alpha_j^k)^2 \overline g(C)^2 \mathcal L(C)/2 < (\delta_k \alpha_j^k)(1-\rho) \underline g(C)$ (note that we can take $\mathcal L(C) > 0$). By hypothesis and [\[property-grad-lower\]](#property-grad-lower){reference-type="ref" reference="property-grad-lower"}, $\Vert \dot F(\psi_j^k) \Vert_2^2 > 0$ for $j=0,\ldots,j_k-1$.
Hence, $(\delta_k \alpha_j^k)^2 \overline g(C)^2 \Vert \dot F(\psi_j^k) \Vert_2^2 \mathcal L(C)/2 < (\delta_k \alpha_j^k) (1-\rho) \underline g(C) \Vert \dot F(\psi_j^k) \Vert_2^2$. Using [\[property-stepdirection-negative,property-stepdirection-propgrad\]](#property-stepdirection-negative,property-stepdirection-propgrad){reference-type="ref" reference="property-stepdirection-negative,property-stepdirection-propgrad"}, $\Vert \delta_k \alpha_j^k \gamma_j^k \Vert_2^2 \mathcal L(C)/2 < -(\delta_k \alpha_j^k)(1-\rho) \dot F(\psi_j^k)^\intercal \gamma_j^k$. In other words, for $j=0,\ldots,j_k-1$, $$\label{eqn-objective-reduction} (\delta_k \alpha_j^k) (1-\rho) \dot F(\psi_j^k)^\intercal \gamma_j^k + \frac{\mathcal L(C)}{2} \Vert \delta_k \alpha_j^k \gamma_j^k \Vert_2^2 < 0.$$ In fact, since $(1-\rho) \in (0,1)$ and $\dot F(\psi_j^k)^\intercal \gamma_j^k < 0$ by [\[property-stepdirection-negative\]](#property-stepdirection-negative){reference-type="ref" reference="property-stepdirection-negative"}, the $(1-\rho)$ can be replaced with $1$ and the inequality will hold. We use this relationship [\[eqn-objective-reduction\]](#eqn-objective-reduction){reference-type="ref" reference="eqn-objective-reduction"} in two ways. 
First, by Taylor's theorem, [\[as-loc-lip-cont\]](#as-loc-lip-cont){reference-type="ref" reference="as-loc-lip-cont"} and [\[eqn-objective-reduction\]](#eqn-objective-reduction){reference-type="ref" reference="eqn-objective-reduction"}, for $j=0,\ldots,j_k-1$, $$F(\psi_{j+1}^k) \leq F(\psi_j^k) + \alpha_j^k \delta_k \dot F(\psi_j^k)^\intercal \gamma_j^k + \frac{\mathcal L(C)}{2} \Vert \delta_k \alpha_j^k \gamma_j^k \Vert_2^2 < F(\psi_j^k).$$ In particular, at $j=0$, by [\[eqn-objective-reduction\]](#eqn-objective-reduction){reference-type="ref" reference="eqn-objective-reduction"}, $$\begin{aligned} F(\psi_1^k) &\leq F(\psi_0^k) + (1-\rho)\alpha_0^k \delta_k \dot F(\psi_0^k)^\intercal \gamma_0^k + \frac{\mathcal L(C)}{2} \Vert \delta_k \alpha_0^k \gamma_0^k \Vert_2^2 + \rho \alpha_0^k \delta_k \dot F(\psi_0^k)^\intercal \gamma_0^k \\ &< F(\psi_0^k) + \rho \alpha_0^k \delta_k \dot F(\psi_0^k)^\intercal \gamma_0^k.\end{aligned}$$ Putting this together with $\psi_0^k = \theta_k$, $F(\psi_{j_k}^k) < F(\psi_{j_k-1}^k) < \cdots < F(\psi_1^k) < F(\theta_k) + \rho \alpha_0^k \delta_k \dot F(\theta_k)^\intercal \gamma_0^k$. Finally, $\theta_k = \theta_{\ell_{L(k)}}$ and so $F(\theta_k) = F(\theta_{\ell_{L(k)}}) \leq \tau_{\mathop{\mathrm{obj}}}^k$ by [\[eqn-nonmonotone-threshold\]](#eqn-nonmonotone-threshold){reference-type="ref" reference="eqn-nonmonotone-threshold"}. ◻ We now combine these two facts to show that the procedure must always accept a new iterate as long as a stationary point has yet to be found. **Theorem 1**. 
*Suppose [\[problem\]](#problem){reference-type="ref" reference="problem"} satisfies [\[as-loc-lip-cont\]](#as-loc-lip-cont){reference-type="ref" reference="as-loc-lip-cont"}, and is solved using [\[alg-general-algorithm\]](#alg-general-algorithm){reference-type="ref" reference="alg-general-algorithm"} with [\[property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper\]](#property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper){reference-type="ref" reference="property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper"} initialized at any $\theta_0 \in \mathbb{R}^n$. Let $\lbrace \ell_t : t+1 \in \mathbb{N} \rbrace$ be defined as in [\[eqn-iterate-subsequence\]](#eqn-iterate-subsequence){reference-type="ref" reference="eqn-iterate-subsequence"}. Then, for any $t+1\in \mathbb{N}$, if $\ell_t < \infty$ and $\dot F(\theta_{\ell_t}) \neq 0$, then $\ell_{t+1} < \infty$.* *Proof.* The proof is by induction. As the proof of the base case (i.e., $\ell_1 < \infty$) uses the same argument as the conclusion (i.e., if $\ell_t < \infty$ then $\ell_{t+1} < \infty$), we show the conclusion. To this end, suppose $\lbrace \ell_0,\ldots,\ell_{t} \rbrace$ are finite and $\dot F(\theta_{\ell_t}) \neq 0$. For a contradiction, suppose $\theta_k = \theta_{\ell_t}$ for all $k \geq \ell_t$. Then, by [Lemma 2](#result-iterates-in-compactsets){reference-type="ref" reference="result-iterates-in-compactsets"}, the compact sets $C_{k} \subset C_{\ell_t}$ for all $k \geq \ell_t$. 
By [\[alg-general-algorithm\]](#alg-general-algorithm){reference-type="ref" reference="alg-general-algorithm"} and $\sigma_{\mathop{\mathrm{lower}}} \in (0,1)$, we have $\delta_k = \sigma_{\mathop{\mathrm{lower}}}^{k-\ell_t} \delta_{\ell_t}$. There exists a $k \geq \ell_t$ such that $$\delta_k = \sigma_{\mathop{\mathrm{lower}}}^{k - \ell_t} \delta_{\ell_t} < \frac{2 (1 - \rho) \underline g (C_{\ell_t}) }{ \overline g(C_{\ell_t})^2 \mathcal L(C_{\ell_t}) \overline \alpha(C_{\ell_t})},$$ which, by [Lemma 3](#result-sufficient-scaling){reference-type="ref" reference="result-sufficient-scaling"} and using $\dot F(\theta_k) = \dot F(\theta_{\ell_t}) \neq 0$, implies $\psi_{j_k}^k$ satisfies our nonmonotone Armijo condition. Hence, $\theta_{k+1} = \psi_{j_k}^k \neq \theta_{\ell_t}$. Therefore, $\ell_{t+1} = k+1 < \infty$. ◻ #### Objective function remains bounded. We begin by establishing properties of $\tau_{\mathop{\mathrm{obj}}}^k$. Recall, from [\[eqn-nonmonotone-threshold\]](#eqn-nonmonotone-threshold){reference-type="ref" reference="eqn-nonmonotone-threshold"}, the value of $\tau_{\mathop{\mathrm{obj}}}^k$ is determined by the maximum $F(\theta_{\ell_i})$ over $i=\max\lbrace L(k) - w + 1, 0 \rbrace,\ldots,L(k)$. To keep track of which accepted iterate corresponds to $\tau_{\mathop{\mathrm{obj}}}^k$, it is useful to define $O:\mathbb{N} \cup \lbrace 0 \rbrace \to \mathbb{N} \cup \lbrace 0 \rbrace$ such that $$\label{eqn-maxobj-index-func} O(k) = \max \lbrace s \leq L(k) : F(\theta_{\ell_s}) = \tau_{\mathop{\mathrm{obj}}}^k \rbrace, ~k+1 \in \mathbb{N}.$$ We collect some simple facts about this function in the following lemma, and then provide some additional intuition using [\[figure-objective-threshold-diagram\]](#figure-objective-threshold-diagram){reference-type="ref" reference="figure-objective-threshold-diagram"}. **Lemma 4**.
*Let $O:\mathbb{N} \cup \lbrace 0 \rbrace \to \mathbb{N} \cup \lbrace 0 \rbrace$ be defined as in [\[eqn-maxobj-index-func\]](#eqn-maxobj-index-func){reference-type="ref" reference="eqn-maxobj-index-func"}. If, for $k+1 \in \mathbb{N}$, $\dot F(\theta_k) \neq 0$, then the following properties hold.* 1. *$O(k) \in \lbrace \max\lbrace L(k) - w + 1,0 \rbrace,\ldots,L(k) \rbrace$;* 2. *$\tau_{\mathop{\mathrm{obj}}}^k = F(\theta_{\ell_{O(k)}})$;* 3. *If $O(k) \neq L(k)$, then $F(\theta_{\ell_i}) < \tau_{\mathop{\mathrm{obj}}}^k$ for $i \in \lbrace O(k)+1,\ldots,L(k) \rbrace$; and* 4. *$F(\theta_{\ell_i}) \leq \tau_{\mathop{\mathrm{obj}}}^k$ for $i \in \lbrace \max\lbrace L(k) - w + 1, 0 \rbrace, \ldots, O(k) \rbrace$.* Recall, [\[figure-objective-threshold-diagram\]](#figure-objective-threshold-diagram){reference-type="ref" reference="figure-objective-threshold-diagram"} is a toy example of the possible behaviors of the objective function and threshold at different outer loop iterates. We take as given that the values of $\lbrace \ell_t \rbrace$ are as displayed, and that our nonmonotone Armijo condition is satisfied at these points. We now discuss several informative behaviors of $O(k)$ from this toy example. 1. For $k=0,1,2,3,4,5$, $\tau_{\mathop{\mathrm{obj}}}^k = F(\theta_0)$ as $F(\theta_{0}) > F(\theta_{\ell_1}) = F(\theta_2)$ and $F(\theta_0) > F(\theta_{\ell_2}) = F(\theta_3)$. Hence, $O(k) = 0$. 2. For $k=6,7,8$, $\tau_{\mathop{\mathrm{obj}}}^k = F(\theta_{\ell_2}) = F(\theta_3)$ even though the sets defining $\tau_{\mathop{\mathrm{obj}}}^6$ versus $\tau_{\mathop{\mathrm{obj}}}^7$ and $\tau_{\mathop{\mathrm{obj}}}^8$ are distinct. For all $k=6,7,8$, $O(k) = 2$. 3. At $k=13$, $\tau_{\mathop{\mathrm{obj}}}^{13} = F(\theta_{\ell_6}) = F(\theta_{13})$ as this is the largest objective function value over the $3$ most recently accepted iterates (including $k=13=\ell_{6}$). Hence, $O(13) = 6$. 4.
At $k=17$, $\tau_{\mathop{\mathrm{obj}}}^{17} = F(\theta_{\ell_{9}}) = F(\theta_{\ell_8})$. By the definition of $O$, $O(17) = 9$ as this is the larger of $8$ and $9$. [\[figure-objective-threshold-diagram\]](#figure-objective-threshold-diagram){reference-type="ref" reference="figure-objective-threshold-diagram"} also indicates that $\lbrace \tau_{\mathop{\mathrm{obj}}}^k \rbrace$ is a nonincreasing sequence. In fact, [\[figure-objective-threshold-diagram\]](#figure-objective-threshold-diagram){reference-type="ref" reference="figure-objective-threshold-diagram"} and [\[eqn-nonmonotone-threshold\]](#eqn-nonmonotone-threshold){reference-type="ref" reference="eqn-nonmonotone-threshold"} indicate that $\tau_{\mathop{\mathrm{obj}}}^{k} > \tau_{\mathop{\mathrm{obj}}}^{k+1}$ only when $k+1$ is an accepted iterate and when $F(\theta_{\ell_{O(k)}})$ is no longer in the set defining $\tau_{\mathop{\mathrm{obj}}}^{k+1}$. In other words, $\tau_{\mathop{\mathrm{obj}}}^{k} > \tau_{\mathop{\mathrm{obj}}}^{k+1}$ when $k+1 = \ell_{L(k+1)}$ and $O(k) \not\in \lbrace \max\lbrace L(k+1) - w + 1, 0 \rbrace,\ldots,L(k+1) \rbrace$. We now show that this is not just true of the toy example, but holds in all cases. **Lemma 5**.
*Suppose [\[problem\]](#problem){reference-type="ref" reference="problem"} satisfies [\[as-loc-lip-cont\]](#as-loc-lip-cont){reference-type="ref" reference="as-loc-lip-cont"}, and is solved using [\[alg-general-algorithm\]](#alg-general-algorithm){reference-type="ref" reference="alg-general-algorithm"} with [\[property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper\]](#property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper){reference-type="ref" reference="property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper"} initialized at any $\theta_0 \in \mathbb{R}^n$. Let $\lbrace \ell_t : t+1 \in \mathbb{N} \rbrace$ be defined as in [\[eqn-iterate-subsequence\]](#eqn-iterate-subsequence){reference-type="ref" reference="eqn-iterate-subsequence"}, and let $O:\mathbb{N} \cup \lbrace 0 \rbrace \to \mathbb{N} \cup \lbrace 0 \rbrace$ be defined as in [\[eqn-maxobj-index-func\]](#eqn-maxobj-index-func){reference-type="ref" reference="eqn-maxobj-index-func"}. For any $k+1 \in \mathbb{N}$, if $\dot F(\theta_k) \neq 0$, then only one of the following holds.* 1. *$k+1 = \ell_{L(k+1)}$ (i.e., $\theta_{k+1} \neq \theta_k$), $O(k) = L(k) - w + 1$, and $\tau_{\mathop{\mathrm{obj}}}^k > \tau_{\mathop{\mathrm{obj}}}^{k+1}$;* 2. *$k+1 = \ell_{L(k+1)}$ (i.e., $\theta_{k+1} \neq \theta_k$), $O(k) \neq L(k) - w + 1$, $\tau_{\mathop{\mathrm{obj}}}^k = \tau_{\mathop{\mathrm{obj}}}^{k+1}$, and $O(k+1) = O(k)$; or* 3. 
*$k+1 \neq \ell_{L(k+1)}$ (i.e., $\theta_{k+1} = \theta_k$) and $\tau_{\mathop{\mathrm{obj}}}^k = \tau_{\mathop{\mathrm{obj}}}^{k+1}$.* *Proof.* We recall several facts. First, $\theta_k = \theta_{\ell_{L(k)}}$. Second, either the triggering iterate is rejected, which is equivalent to $L(k+1) = L(k)$; or the triggering iterate is accepted, which is equivalent to $L(k+1) = L(k)+1$ and $\ell_{L(k+1)} = k+1$. Now, there are three cases to consider. First, $\theta_k = \theta_{k+1}$. Then, $\tau_{\mathop{\mathrm{obj}}}^k = \tau_{\mathop{\mathrm{obj}}}^{k+1}$ by [\[eqn-nonmonotone-threshold\]](#eqn-nonmonotone-threshold){reference-type="ref" reference="eqn-nonmonotone-threshold"}. Second, consider when $\theta_k \neq \theta_{k+1}$ and $O(k) > L(k) - w + 1$. Then, $O(k) \geq L(k+1) - w + 1$ since $L(k+1) = L(k)+1$. Using this fact with [Lemma 4](#result-properties-maxobj-index-func){reference-type="ref" reference="result-properties-maxobj-index-func"}, $\tau_{\mathop{\mathrm{obj}}}^{k+1} = \max\lbrace F(\theta_{k+1}), F(\theta_{\ell_{O(k)}}) \rbrace$. If $F(\theta_{k+1}) < F(\theta_{\ell_{O(k)}})$, then $\tau_{\mathop{\mathrm{obj}}}^{k+1} = \tau_{\mathop{\mathrm{obj}}}^{k}$ and $O(k+1) = O(k)$. By our nonmonotone Armijo condition, since $\dot F(\theta_k) \neq 0$, $F(\theta_{\ell_{L(k+1)}}) = F(\theta_{k+1}) < \tau_{\mathop{\mathrm{obj}}}^k = F(\theta_{\ell_{O(k)}})$. Hence, $\tau_{\mathop{\mathrm{obj}}}^{k+1} = \tau_{\mathop{\mathrm{obj}}}^{k}$ and $O(k+1) = O(k)$. Third, consider when $\theta_k \neq \theta_{k+1}$ and $O(k) = L(k) - w + 1$. Then, $O(k) = L(k) - w + 1 \geq 0$. By our facts about $O(k)$, $F(\theta_{\ell_i}) < \tau_{\mathop{\mathrm{obj}}}^k = F(\theta_{\ell_{O(k)}})$ for $i = L(k) - w + 2,\ldots, L(k)$. Using $L(k) + 1 = L(k+1)$ when $\theta_k \neq \theta_{k+1}$ and [Lemma 4](#result-properties-maxobj-index-func){reference-type="ref" reference="result-properties-maxobj-index-func"}, $F(\theta_{\ell_i}) < \tau_{\mathop{\mathrm{obj}}}^k$ for $i = L(k+1) - w + 1,\ldots, L(k+1) - 1$.
Moreover, by our nonmonotone Armijo condition and $\dot F(\theta_k) \neq 0$, $F(\theta_{\ell_{L(k+1)}}) = F(\theta_{k+1}) < \tau_{\mathop{\mathrm{obj}}}^k$. Hence, by the definition of $\tau_{\mathop{\mathrm{obj}}}^{k+1}$, $\tau_{\mathop{\mathrm{obj}}}^{k+1} < \tau_{\mathop{\mathrm{obj}}}^k$. ◻ Part of the content of [Lemma 5](#result-threshold-decrease){reference-type="ref" reference="result-threshold-decrease"} is as follows: $\tau_{\mathop{\mathrm{obj}}}^{k+1} < \tau_{\mathop{\mathrm{obj}}}^{k}$ only when $k+1$ is an accepted iterate and $O(k) = L(k+1) - w$. In other words, the threshold can only decrease at $k+1$ if $k+1$ is an accepted iterate that is $w$ accepted iterates away from $O(k)$. In [\[figure-objective-threshold-diagram\]](#figure-objective-threshold-diagram){reference-type="ref" reference="figure-objective-threshold-diagram"}, we see exactly this behavior occur, as we point out presently. 1. The threshold remains constant until there are $w=3$ accepted iterates after $\ell_0$; that is, when $\ell_3$ is defined. In [\[figure-objective-threshold-diagram\]](#figure-objective-threshold-diagram){reference-type="ref" reference="figure-objective-threshold-diagram"}, we see a drop in the threshold level (i.e., the red points) at $k=\ell_3$. 2. The threshold remains constant again until there are $w=3$ accepted iterates after $\ell_{O(\ell_3)} = \ell_{2}$; that is, when $\ell_5$ is defined. In [\[figure-objective-threshold-diagram\]](#figure-objective-threshold-diagram){reference-type="ref" reference="figure-objective-threshold-diagram"}, we see a drop in the threshold level (i.e., the red points) at $k=\ell_5$. 3. The threshold remains constant again until there are $w=3$ accepted iterates after $\ell_{O(\ell_5)} = \ell_3$; that is, when $\ell_6$ is defined. In [\[figure-objective-threshold-diagram\]](#figure-objective-threshold-diagram){reference-type="ref" reference="figure-objective-threshold-diagram"}, we see a drop in the threshold level (i.e., the red points) at $k=\ell_6$. This behavior motivates us to define the following sequence.
$$\label{eqn-maxobj-index} o_0 = 0\quad\text{and}\quad o_s = O(\ell_{o_{s-1} + w}),~\forall s \in \mathbb{N},$$ with the convention of $o_s = \infty$ if $\ell_{o_{s-1} + w} = \infty$. With this notation, we formalize the above discussion. **Lemma 6**. *Suppose [\[problem\]](#problem){reference-type="ref" reference="problem"} satisfies [\[as-loc-lip-cont\]](#as-loc-lip-cont){reference-type="ref" reference="as-loc-lip-cont"}, and is solved using [\[alg-general-algorithm\]](#alg-general-algorithm){reference-type="ref" reference="alg-general-algorithm"} with [\[property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper\]](#property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper){reference-type="ref" reference="property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper"} at any $\theta_0 \in \mathbb{R}^n$. Let $\lbrace \ell_t : t+1 \in \mathbb{N} \rbrace$ be defined as in [\[eqn-iterate-subsequence\]](#eqn-iterate-subsequence){reference-type="ref" reference="eqn-iterate-subsequence"}, and let $\lbrace o_s : s + 1 \in \mathbb{N} \rbrace$ be defined as in [\[eqn-maxobj-index\]](#eqn-maxobj-index){reference-type="ref" reference="eqn-maxobj-index"} and let $o_{-1} = -w$. For any $s+1 \in \mathbb{N}$, if $o_s < \infty$ then one of the two holds.* 1. 
*$\exists \bar i \in \lbrace o_{s-1} + w - o_{s},\ldots,w \rbrace$ such that $\dot F(\theta_{\ell_{o_s + \bar i}}) = 0$, $\ell_{o_s+i} < \infty$ for all $i \in \lbrace o_{s-1}+w-o_s,\ldots,\bar i \rbrace$, and $\tau_{\mathop{\mathrm{obj}}}^k = F(\theta_{\ell_{o_s}})$ for all $k \in [\ell_{o_{s-1}+w},\max\lbrace\ell_{o_s+\bar{i}}-1, \ell_{o_{s-1}+w} \rbrace] \cap (\mathbb{N} \cup \lbrace 0 \rbrace)$; or* 2. *$\forall i \in \lbrace o_{s-1}+w-o_s,\ldots, w \rbrace$, $\ell_{o_s+i} < \infty$ and $\dot F(\theta_{\ell_{o_s + i}}) \neq 0$, and $\tau_{\mathop{\mathrm{obj}}}^k = F(\theta_{\ell_{o_s}})$ for all $k \in [\ell_{o_{s-1}+w},\ell_{o_{s}+w}-1] \cap (\mathbb{N} \cup \lbrace 0 \rbrace)$.* *Proof.* We proceed by induction. At each step, we verify that $\ell_{o_s+i} < \infty$ and $\tau_{\mathop{\mathrm{obj}}}^{k} = F(\theta_{\ell_{o_s}})$ for all $k \in [ \ell_{o_{s-1} + w},\max\lbrace \ell_{o_s+i}-1, \ell_{o_{s-1} + w} \rbrace] \cap (\mathbb{N} \cup \lbrace 0 \rbrace)$. Then, there are two cases to deal with: either $\dot F(\theta_{\ell_{o_s+i}}) = 0$ or not. In the former case, we produce the first part of the result. In the latter case, we proceed with induction. For the base case, $i=o_{s-1} + w - o_s$. Now, $o_s < \infty$ by hypothesis, which implies $\ell_{o_s + i} = \ell_{o_{s-1} + w} < \infty$. Moreover, $$o_s = O(\ell_{o_{s-1}+w}) = \max\left \lbrace t \leq o_{s-1} + w : F(\theta_{\ell_t}) = \tau_{\mathop{\mathrm{obj}}}^{\ell_{o_{s-1} + w}} \right \rbrace,$$ which requires $F(\theta_{\ell_{o_s}}) = \tau_{\mathop{\mathrm{obj}}}^{\ell_{o_{s-1} + w}}$. Hence, $\forall k \in [ \ell_{o_{s-1} + w}, \max \lbrace \ell_{o_s+i}-1, \ell_{o_{s-1} + w} \rbrace ] = \lbrace \ell_{o_{s-1} + w} \rbrace$, $F(\theta_{\ell_{o_s}}) = \tau_{\mathop{\mathrm{obj}}}^k$. Now, either $\dot F(\theta_{\ell_{o_{s-1} + w}}) = 0$ and $\bar i = o_{s-1} + w - o_s$ or $\dot F(\theta_{\ell_{o_{s-1} + w}}) \neq 0$ and we can increment $i$.
For the hypothesis, we assume for some $i \in \lbrace o_{s-1} +w-o_s,\ldots, w - 1 \rbrace$, $\ell_{o_s+i} < \infty$, $\tau_{\mathop{\mathrm{obj}}}^{\ell_{o_s+i}} = F(\theta_{\ell_{o_s}})$, and $\dot F(\theta_{\ell_{o_s + i}}) \neq 0$. Furthermore, for any $\tilde i \in \lbrace o_{s-1}+w-o_s,\ldots, i \rbrace$, we assume $\tau_{\mathop{\mathrm{obj}}}^{\ell_{o_s+\tilde i}} = F(\theta_{\ell_{o_s}})$ and $\dot F(\theta_{\ell_{o_s + \tilde i}}) \neq 0$. We now generalize the result to $i+1$. Since $\dot F(\theta_{\ell_{o_s+i}}) \neq 0$, $\ell_{o_s+i+1} < \infty$ by [Theorem 1](#result-accept-new-iterates){reference-type="ref" reference="result-accept-new-iterates"}. Moreover, by [Lemma 1](#result-simple-properties-accepted-iterates){reference-type="ref" reference="result-simple-properties-accepted-iterates"} and the induction hypothesis, $\tau_{\mathop{\mathrm{obj}}}^k = \tau_{\mathop{\mathrm{obj}}}^{\ell_{o_s+i}} = F(\theta_{\ell_{o_s}})$ for $k \in \lbrace \ell_{o_s+i},\ldots, \ell_{o_s+i + 1} - 1 \rbrace$. There are now two cases to consider. If $i = w- 1$, then either $\dot F(\theta_{\ell_{o_s + i + 1}})$ is zero or not (note, $o_s + i + 1 = o_s + w$). If it is zero, then we set $\bar i = w$ and the first part of the result is proven. If it is not zero, then the second part of the result is proven. If $i < w - 1$, we must verify $\tau_{\mathop{\mathrm{obj}}}^{\ell_{o_s+i+1}} = F(\theta_{\ell_{o_s}})$. Suppose this is not true. By [Lemma 5](#result-threshold-decrease){reference-type="ref" reference="result-threshold-decrease"}, $\tau_{\mathop{\mathrm{obj}}}^{\ell_{o_s+i+1}} < \tau_{\mathop{\mathrm{obj}}}^{\ell_{o_s + i + 1}-1} = F(\theta_{\ell_{o_s}})$ if and only if $O(\ell_{o_s + i + 1}-1) = L( \ell_{o_s+i+1} - 1) - w + 1 = o_s+i - w + 1$. When $i < w-1$, $O(\ell_{o_s+i+1}-1) = o_s + i - w + 1 < o_s$. 
Thus, $o_s \in \lbrace o_s + i - w + 2,\ldots, o_s + i + 1\rbrace$ and so $$\tau_{\mathop{\mathrm{obj}}}^{\ell_{o_s+i+1}} = \max \lbrace F(\theta_{\ell_{o_s+i - w + 2}}),\ldots, F(\theta_{\ell_{o_s+i+1}}) \rbrace \geq F(\theta_{\ell_{o_s}}).$$ This contradicts $\tau_{\mathop{\mathrm{obj}}}^{\ell_{o_s+i+1}} < F(\theta_{\ell_{o_s}})$. Hence, $\tau_{\mathop{\mathrm{obj}}}^{\ell_{o_s+i+1}} = F(\theta_{\ell_{o_s}})$. One of two options can now occur. First, $\dot F(\theta_{\ell_{o_s+i+1}}) = 0$ and $\bar i = i + 1$, which produces the first part of the result. Second, $\dot F(\theta_{\ell_{o_{s} + i + 1}}) \neq 0$, which concludes the induction proof. ◻ With the relevant information about the thresholds established, we can now conclude as follows about the behavior of the objective function. **Theorem 2**. *Suppose [\[problem\]](#problem){reference-type="ref" reference="problem"} satisfies [\[as-loc-lip-cont\]](#as-loc-lip-cont){reference-type="ref" reference="as-loc-lip-cont"}, and is solved using [\[alg-general-algorithm\]](#alg-general-algorithm){reference-type="ref" reference="alg-general-algorithm"} with [\[property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper\]](#property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper){reference-type="ref" reference="property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper"} at any $\theta_0 \in \mathbb{R}^n$.
Let $\lbrace \ell_t : t+1 \in \mathbb{N} \rbrace$ be defined as in [\[eqn-iterate-subsequence\]](#eqn-iterate-subsequence){reference-type="ref" reference="eqn-iterate-subsequence"}, and let $\lbrace o_s : s + 1 \in \mathbb{N} \rbrace$ be defined as in [\[eqn-maxobj-index\]](#eqn-maxobj-index){reference-type="ref" reference="eqn-maxobj-index"}. Then, one of the following occurs.* 1. *There exists a $t + 1 \in \mathbb{N}$ such that $\ell_t < \infty$ and $F(\theta_{\ell_t}) \leq F(\theta_0)$ and $\dot F(\theta_{\ell_t}) = 0$.* 2. *The elements of $\lbrace o_s : s+1 \in \mathbb{N} \rbrace$ are all finite; for any $s+1 \in \mathbb{N}$ and $\forall k \in [\ell_{o_s},\ell_{o_{s+1}}] \cap (\mathbb{N} \cup \lbrace 0 \rbrace)$, $F(\theta_k) \leq F(\theta_{\ell_{o_s}})$; and the sequence $\lbrace F(\theta_{\ell_{o_s}}) : s + 1 \in \mathbb{N} \rbrace$ is strictly decreasing.* *Proof.* Let $o_{-1}=-w$. We proceed by induction on $s \in \mathbb{N} \cup \lbrace 0 \rbrace$. For the base case, $s=0$, [Lemma 6](#result-maxobj-threshold-properties){reference-type="ref" reference="result-maxobj-threshold-properties"} specifies two cases. The first case of [Lemma 6](#result-maxobj-threshold-properties){reference-type="ref" reference="result-maxobj-threshold-properties"} supplies the first case of the present claim with $t \in \lbrace 0,\ldots,w\rbrace$. If the second case of [Lemma 6](#result-maxobj-threshold-properties){reference-type="ref" reference="result-maxobj-threshold-properties"} holds, then two statements are true: 1. $\ell_w < \infty$; 2. $F(\theta_{\ell_{0}}) = \tau_{\mathop{\mathrm{obj}}}^k$ for all $k \in [0,\ell_{w}-1] \cap (\mathbb{N} \cup \lbrace 0 \rbrace)$. By the first statement, $o_1$ is finite.
By both statements, our nonmonotone Armijo condition, and [\[property-stepdirection-negative\]](#property-stepdirection-negative){reference-type="ref" reference="property-stepdirection-negative"}, $F(\theta_{k+1}) \leq \tau_{\mathop{\mathrm{obj}}}^k = F(\theta_{\ell_0})$ for all $k \in [0,\ell_{w}-1] \cap (\mathbb{N} \cup \lbrace 0 \rbrace)$. In other words, $\forall k \in [\ell_{0},\ell_{w}] \cap (\mathbb{N} \cup \lbrace 0 \rbrace)$, $F(\theta_k) \leq F(\theta_{\ell_{0}})$. As $o_1 \leq w$, $\forall k \in [\ell_{0},\ell_{o_1}] \cap (\mathbb{N} \cup \lbrace 0 \rbrace)$, $F(\theta_k) \leq F(\theta_{\ell_0})$. Finally, by [\[result-threshold-decrease,result-maxobj-threshold-properties\]](#result-threshold-decrease,result-maxobj-threshold-properties){reference-type="ref" reference="result-threshold-decrease,result-maxobj-threshold-properties"}, $F(\theta_{\ell_0}) = \tau_{\mathop{\mathrm{obj}}}^{\ell_w - 1} > \tau_{\mathop{\mathrm{obj}}}^{\ell_w} = F(\theta_{\ell_{o_1}})$. For the induction hypothesis, for some $s \in \mathbb{N} \cup \lbrace 0 \rbrace$, we assume the elements of $\lbrace o_t : t \in \lbrace 0,\ldots, s \rbrace \rbrace$ are finite; for all $t \in \lbrace 0,\ldots,\max\lbrace s-1, 0 \rbrace \rbrace$ and for all $k \in [\ell_{o_t},\ell_{o_{t+1}}] \cap (\mathbb{N} \cup \lbrace 0 \rbrace)$, $F(\theta_k) \leq F(\theta_{\ell_{o_t}})$; and for all $t \in \lbrace 0,\ldots,\max\lbrace s-1, 0 \rbrace \rbrace$, $F(\theta_{\ell_{o_t}}) > F(\theta_{\ell_{o_{t+1}}})$. We now generalize to $s+1$. Since $o_s < \infty$ by the induction hypothesis, [Lemma 6](#result-maxobj-threshold-properties){reference-type="ref" reference="result-maxobj-threshold-properties"} specifies two cases. In the first case of [Lemma 6](#result-maxobj-threshold-properties){reference-type="ref" reference="result-maxobj-threshold-properties"}, there is a $\bar i \in \lbrace o_{s-1} + w - o_s,\ldots, w\rbrace$ such that 1. $\dot F(\theta_{\ell_{o_s + \bar i}}) = 0$, 2.
$\ell_{o_s+i} < \infty$ for all $i \in \lbrace o_{s-1}+w-o_s,\ldots,\bar i \rbrace$, and 3. $\tau_{\mathop{\mathrm{obj}}}^k = F(\theta_{\ell_{o_s}})$ for all $k \in [\ell_{o_{s-1}+w},\max\lbrace \ell_{o_s+\bar i}-1, \ell_{o_{s-1}+w} \rbrace ] \cap (\mathbb{N} \cup \lbrace 0 \rbrace)$. Let $t = o_s + \bar i$. Then, by the first and second statements, $\ell_t < \infty$ and $\dot F(\theta_{\ell_t}) = 0$. By the third statement, our nonmonotone Armijo condition, and [\[property-stepdirection-negative\]](#property-stepdirection-negative){reference-type="ref" reference="property-stepdirection-negative"}, $F(\theta_{\ell_t}) = F(\theta_{\ell_{o_s+\bar i}}) < \tau_{\mathop{\mathrm{obj}}}^{\ell_{o_s+\bar i}-1} = F(\theta_{\ell_{o_s}})$. Combined with the induction hypothesis, this gives $F(\theta_{\ell_t}) < F(\theta_{\ell_{o_s}}) \leq F(\theta_{0})$. Hence, in the first case of [Lemma 6](#result-maxobj-threshold-properties){reference-type="ref" reference="result-maxobj-threshold-properties"}, we conclude the first part of the current result. In the second case of [Lemma 6](#result-maxobj-threshold-properties){reference-type="ref" reference="result-maxobj-threshold-properties"}, we need to verify the three claims of the induction hypothesis for $s+1$. First, by [Lemma 6](#result-maxobj-threshold-properties){reference-type="ref" reference="result-maxobj-threshold-properties"}, $\ell_{o_s + w} < \infty$, which implies $o_{s+1} < \infty$. Thus, the elements of $\lbrace o_t : t \in \lbrace 0,\ldots,s+1 \rbrace \rbrace$ are finite. Second, we must show $\forall k \in [\ell_{o_s}, \ell_{o_{s+1}}] \cap (\mathbb{N} \cup \lbrace 0 \rbrace)$, $F(\theta_k) \leq F(\theta_{\ell_{o_s}})$. By [Lemma 4](#result-properties-maxobj-index-func){reference-type="ref" reference="result-properties-maxobj-index-func"} and the definition of $o_s$, $F(\theta_{\ell_t}) < F(\theta_{\ell_{o_s}})$ for all $t \in \lbrace \min\lbrace o_s+1, o_{s-1}+w \rbrace,\ldots, o_{s-1}+w \rbrace$.
Therefore, $F(\theta_k) \leq F(\theta_{\ell_{o_s}})$ for all $k \in [\ell_{o_s},\ell_{o_{s-1}+w}] \cap (\mathbb{N} \cup \lbrace 0 \rbrace)$. By the second part of [Lemma 6](#result-maxobj-threshold-properties){reference-type="ref" reference="result-maxobj-threshold-properties"}, $F(\theta_{k}) \leq \tau_{\mathop{\mathrm{obj}}}^{k-1} = F(\theta_{\ell_{o_s}})$ for all $k \in [\ell_{o_{s-1}+w}+1,\ell_{o_s+w}] \cap ( \mathbb{N} \cup \lbrace 0 \rbrace )$. Hence, by the induction hypothesis, for all $t \in \lbrace 0,\ldots,\max\lbrace s, 0 \rbrace \rbrace$ and for all $k \in [\ell_{o_t},\ell_{o_{t+1}}] \cap (\mathbb{N} \cup \lbrace 0 \rbrace)$, $F(\theta_k) \leq F(\theta_{\ell_{o_t}})$. Finally, by [\[result-threshold-decrease,result-maxobj-threshold-properties\]](#result-threshold-decrease,result-maxobj-threshold-properties){reference-type="ref" reference="result-threshold-decrease,result-maxobj-threshold-properties"}, $F(\theta_{\ell_{o_s}}) = \tau_{\mathop{\mathrm{obj}}}^{\ell_{o_s+w} - 1} > \tau_{\mathop{\mathrm{obj}}}^{\ell_{o_s+w}} = F(\theta_{\ell_{o_{s+1}}})$. This concludes the proof by induction. ◻ #### Analysis of a gradient subsequence. 
We study a specific subsequence of the accepted iterates to show that the gradient function evaluated along this subsequence *can* be well-behaved based on the constants in [\[property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper\]](#property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper){reference-type="ref" reference="property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper"} and the local properties of the Lipschitz rank, $\mathcal{L}(\cdot)$. To specify this sequence, letting $\lbrace \ell_t : t+1 \in \mathbb{N} \rbrace$ and $\lbrace o_s : s + 1 \in \mathbb{N} \rbrace$ be defined as in [\[eqn-iterate-subsequence,eqn-maxobj-index\]](#eqn-iterate-subsequence,eqn-maxobj-index){reference-type="ref" reference="eqn-iterate-subsequence,eqn-maxobj-index"} (respectively), define $$\label{eqn-gradient-index} g_0 = 0 \quad\text{and}\quad g_u = \min\lbrace o_s : s+1 \in \mathbb{N},~ o_s \geq g_{u-1} + w \rbrace, ~\forall u \in \mathbb{N},$$ with the convention $g_u = \infty$ if $g_{u-1} = \infty$ or if no finite such $o_s$ can be found (see [\[figure-objective-threshold-diagram\]](#figure-objective-threshold-diagram){reference-type="ref" reference="figure-objective-threshold-diagram"}).
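To make this bookkeeping concrete, the following Python sketch (our own illustration, not code from any referenced implementation) computes the threshold $\tau_{\mathop{\mathrm{obj}}}^k$, the map $O(k)$, and the index sequences $\lbrace o_s \rbrace$ and $\lbrace g_u \rbrace$ on a toy trajectory of accepted objective values, under the simplifying assumption that every outer iterate is accepted (so $\ell_t = t$ and $L(k) = k$); the trajectory and the window $w = 3$ are hypothetical.

```python
# Illustration only: the nonmonotone threshold tau_obj^k, the argmax map O(k),
# and the index sequences {o_s}, {g_u}, under the simplifying assumption that
# every outer iterate is accepted (ell_t = t, L(k) = k). The trajectory below
# is a hypothetical toy example, not data from the paper.

def tau_obj(F, k, w):
    """Max objective value over the last w accepted iterates up to k."""
    return max(F[max(k - w + 1, 0):k + 1])

def O(F, k, w):
    """Largest index s <= k whose objective value attains tau_obj^k."""
    tau = tau_obj(F, k, w)
    return max(s for s in range(max(k - w + 1, 0), k + 1) if F[s] == tau)

def index_sequences(F, w):
    """o_0 = 0 and o_s = O(o_{s-1} + w); g_0 = 0 and
    g_u = min{o_s : o_s >= g_{u-1} + w}; both truncated at the data's end."""
    o = [0]
    while o[-1] + w < len(F):
        o.append(O(F, o[-1] + w, w))
    g = [0]
    while any(v >= g[-1] + w for v in o):
        g.append(min(v for v in o if v >= g[-1] + w))
    return o, g

# Each entry lies below the threshold of the preceding step, as the
# nonmonotone Armijo condition requires of an accepted iterate.
F = [10.0, 7.0, 8.0, 6.0, 7.5, 5.0, 4.0]
o, g = index_sequences(F, w=3)
print(o, g)  # -> [0, 2, 4] [0, 4]
assert all(F[a] > F[b] for a, b in zip(o, o[1:]))  # F(theta_{o_s}) strictly decreases
```

Under this simplification, the final assertion exhibits the strictly decreasing behavior of $F(\theta_{\ell_{o_s}})$ established above.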
With this notation, we have the following result, which we emphasize depends not on the scaling constants $\lbrace \delta_k \rbrace$ but only on the user-designed constants in [\[property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper\]](#property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper){reference-type="ref" reference="property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper"} and the local properties of the Lipschitz rank, $\mathcal{L}(\cdot)$. **Lemma 7**. *Suppose [\[problem\]](#problem){reference-type="ref" reference="problem"} satisfies [\[as-bounded below,as-loc-lip-cont\]](#as-bounded below,as-loc-lip-cont){reference-type="ref" reference="as-bounded below,as-loc-lip-cont"}, and is solved using [\[alg-general-algorithm\]](#alg-general-algorithm){reference-type="ref" reference="alg-general-algorithm"} with [\[property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper\]](#property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper){reference-type="ref"
reference="property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper"} at any $\theta_0 \in \mathbb{R}^n$. Let $\lbrace \ell_t : t+1 \in \mathbb{N} \rbrace$ be defined as in [\[eqn-iterate-subsequence\]](#eqn-iterate-subsequence){reference-type="ref" reference="eqn-iterate-subsequence"}, let $\lbrace o_s : s + 1 \in \mathbb{N} \rbrace$ be defined as in [\[eqn-maxobj-index\]](#eqn-maxobj-index){reference-type="ref" reference="eqn-maxobj-index"}, and let $\lbrace g_u : u+1 \in \mathbb{N} \rbrace$ be defined as in [\[eqn-gradient-index\]](#eqn-gradient-index){reference-type="ref" reference="eqn-gradient-index"}. Let $\lbrace C_k : k+1 \in \mathbb{N} \rbrace$ be a sequence of compact sets in $\mathbb{R}^n$ satisfying: $\theta_{k} \in C_k$; $\lbrace \psi_{1}^k,\ldots,\psi_{j_k}^k \rbrace \subset C_k$; and if $\theta_{k+1} = \theta_k$ then $C_{k+1} \subset C_{k}$ (see [Lemma 2](#result-iterates-in-compactsets){reference-type="ref" reference="result-iterates-in-compactsets"}). Then, one of the following occurs.* 1. *There exists a $t + 1 \in \mathbb{N}$ such that $\ell_t < \infty$ and $F(\theta_{\ell_t}) \leq F(\theta_0)$ and $\dot F(\theta_{\ell_t}) = 0$.* 2. *The elements of $\lbrace g_u: u+1 \in \mathbb{N} \rbrace$ are all finite, and $$\sum_{u=1}^\infty \frac{\underline{g}(C_{\ell_{g_u-1}})^2}{\overline{g}(C_{\ell_{g_u-1}})^2} \frac{\underline\alpha (C_{\ell_{g_u-1}})}{\overline\alpha (C_{\ell_{g_u-1}})} \frac{ \Vert \dot F(\theta_{\ell_{g_u-1}}) \Vert_2^2 }{\mathcal{L}(C_{\ell_{g_u-1}})} < \infty.$$* *Proof.* By [Theorem 2](#result-objective-function){reference-type="ref" reference="result-objective-function"}, either we are in the first case of the result or $\lbrace o_s : s+1 \in \mathbb{N} \rbrace$ are all finite. 
Thus, if all elements of $\lbrace o_s : s+1 \in \mathbb{N} \rbrace$ are finite, then the elements of $\lbrace g_u : u+1 \in \mathbb{N} \rbrace$ are all finite. Now, since $\ell_{g_u}$ is an accepted point, by our nonmonotone Armijo condition, $$F(\theta_{\ell_{g_{u}}}) < \tau_{\mathop{\mathrm{obj}}}^{\ell_{g_{u}}-1} + \rho \delta_{\ell_{g_{u}}-1} \alpha_0^{\ell_{g_{u}}-1} \dot F(\theta_{\ell_{g_u}-1})^\intercal \gamma_0^{\ell_{g_{u}}-1}, ~\forall u \in \mathbb{N}.$$ Note that, since $\theta_{\ell_{g_u}-1}$ is not an accepted iterate, $\theta_{\ell_{g_u}-1} = \theta_{\ell_{g_u-1}}$ and $C_{\ell_{g_u}-1} \subset C_{\ell_{g_u-1}}$. We now make several substitutions into the preceding inequality. First, by [\[property-stepdirection-negative\]](#property-stepdirection-negative){reference-type="ref" reference="property-stepdirection-negative"}, $$F(\theta_{\ell_{g_{u}}}) < \tau_{\mathop{\mathrm{obj}}}^{\ell_{g_{u}}-1} - \rho \delta_{\ell_{g_{u}}-1} \alpha_0^{\ell_{g_{u}}-1} \underline g (C_{\ell_{g_u-1}}) \Vert \dot F(\theta_{\ell_{g_u-1}}) \Vert_2^2, ~\forall u \in \mathbb{N}.$$ Second, by [\[property-stepsize-lower\]](#property-stepsize-lower){reference-type="ref" reference="property-stepsize-lower"}, $$F(\theta_{\ell_{g_u}}) < \tau_{\mathop{\mathrm{obj}}}^{\ell_{g_{u}}-1} - \rho \delta_{\ell_{g_u}-1} \underline \alpha (C_{\ell_{g_u-1}}) \underline g (C_{\ell_{g_u-1}}) \Vert \dot F(\theta_{\ell_{g_u-1}}) \Vert_2^2, ~\forall u \in \mathbb{N}.$$ Third, by [Lemma 3](#result-sufficient-scaling){reference-type="ref" reference="result-sufficient-scaling"}, $$\delta_{\ell_{g_u} -1} \geq \frac{2(1-\rho)\sigma_{\mathop{\mathrm{lower}}} \underline g (C_{\ell_{g_u-1}})}{\overline g (C_{\ell_{g_u-1}})^2 \overline \alpha (C_{\ell_{g_u-1}}) \mathcal L (C_{\ell_{g_u-1}})},$$ which implies (after rearranging), $$\frac{ \underline g (C_{\ell_{g_u-1}})^2 \underline \alpha (C_{\ell_{g_u-1}}) \Vert \dot F(\theta_{\ell_{g_u-1}}) \Vert_2^2} { \overline g (C_{\ell_{g_u-1}})^2 \overline \alpha
(C_{\ell_{g_u-1}}) \mathcal L (C_{\ell_{g_u-1}}) } < \frac{ \tau_{\mathop{\mathrm{obj}}}^{\ell_{g_{u}}-1} - F(\theta_{\ell_{g_u}}) }{ 2\rho(1-\rho) \sigma_{\mathop{\mathrm{lower}}} } , ~\forall u \in \mathbb{N}.$$ Now, $\tau_{\mathop{\mathrm{obj}}}^{\ell_{g_u}-1} = F(\theta_{\ell_{o_s}})$ where $o_s \in \lbrace g_u - w,\ldots,g_u-1 \rbrace$. Since $g_{u-1} \leq g_{u} - w$ by construction, the second part of [Theorem 2](#result-objective-function){reference-type="ref" reference="result-objective-function"} states $F(\theta_{\ell_{o_s}}) \leq F(\theta_{\ell_{g_{u-1}}})$. Hence, $$\frac{ \underline g (C_{\ell_{g_u-1}})^2 \underline \alpha (C_{\ell_{g_u-1}}) \Vert \dot F(\theta_{\ell_{g_u-1}}) \Vert_2^2} { \overline g (C_{\ell_{g_u-1}})^2 \overline \alpha (C_{\ell_{g_u-1}}) \mathcal L (C_{\ell_{g_u-1}}) } < \frac{ F(\theta_{\ell_{g_{u-1}}}) - F(\theta_{\ell_{g_u}}) }{ 2\rho(1-\rho) \sigma_{\mathop{\mathrm{lower}}} } , ~\forall u \in \mathbb{N}.$$ Taking the sum over all $u \in \mathbb{N}$ and using [\[as-bounded below\]](#as-bounded below){reference-type="ref" reference="as-bounded below"}, the right-hand side is bounded by $[F(\theta_{0}) - F_{l.b.}]/[2\rho(1-\rho) \sigma_{\mathop{\mathrm{lower}}}]$, which is finite. The result follows. ◻ We now provide sufficient conditions which guarantee that a procedure within our methodology will find a region of the objective function whose gradient is nearly zero. Roughly speaking, either the procedure terminates in finite time; or, if the iterates remain in a bounded region, they come arbitrarily close to a first-order stationary point; or, if the Lipschitz rank grows at most quadratically (which improves over the results of [@li2023convex]), the iterates find a region where the gradient of the objective function is arbitrarily close to zero. **Theorem 3**.
*Suppose [\[problem\]](#problem){reference-type="ref" reference="problem"} satisfies [\[as-bounded below,as-loc-lip-cont\]](#as-bounded below,as-loc-lip-cont){reference-type="ref" reference="as-bounded below,as-loc-lip-cont"}, and is solved using [\[alg-general-algorithm\]](#alg-general-algorithm){reference-type="ref" reference="alg-general-algorithm"} with [\[property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper\]](#property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper){reference-type="ref" reference="property-iter-max,property-stepsize-lower,property-stepsize-upper,property-stepdirection-negative,property-stepdirection-propgrad,property-radius-bounded,property-grad-lower,property-radius-nonnegative,property-grad-upper"} at any $\theta_0 \in \mathbb{R}^n$. Then, either there exists a $k + 1 \in \mathbb{N}$ such that $F(\theta_{k}) \leq F(\theta_0)$ and $\dot F(\theta_{k}) = 0$, or $\lbrace F(\theta_{k}) : k+1 \in \mathbb{N} \rbrace$ is bounded and has a strictly decreasing subsequence. If the procedure does not terminate in finite time (i.e., we are in the second case), let $\lbrace C_k : k+1 \in \mathbb{N} \rbrace$ be a sequence of compact sets in $\mathbb{R}^n$ satisfying: $\theta_{k} \in C_k$; $\lbrace \psi_{1}^k,\ldots,\psi_{j_k}^k \rbrace \subset C_k$; and if $\theta_{k+1} = \theta_k$ then $C_{k+1} \subset C_{k}$ (see [Lemma 2](#result-iterates-in-compactsets){reference-type="ref" reference="result-iterates-in-compactsets"}). Then, we have the following results.* 1. 
*If there exists $K \in \mathbb{N}$ such that $\cup_{k\geq K} C_k$ is bounded, then $\lbrace \theta_k \rbrace$ are bounded and $\liminf_{k \to \infty} \Vert \dot F(\theta_k) \Vert_2 = 0$.* 2. *If $\cup_{k \geq K} C_k$ is unbounded for all $K \in \mathbb{N}$, $$\liminf_{k \to \infty} \frac{\underline g (C_k)^2 \underline \alpha (C_k)}{\overline g(C_k)^2 \overline \alpha (C_k)} > 0,$$ and, for some $w_1 \geq 0$ and $w_2 \in [0,2]$, $\exists c_0,c_1, c_2 \geq 0$ such that $\mathcal{L}(C_k) \leq c_0 + c_1 (F(\theta_k) - F_{l.b.})^{w_1} + c_2 \Vert \dot F(\theta_k) \Vert_2^{w_2}$, then $\liminf_{k \to \infty} \Vert \dot F(\theta_k) \Vert_2 = 0$.* *Proof.* By [Theorem 2](#result-objective-function){reference-type="ref" reference="result-objective-function"}, one of the two cases of the result holds. We now study the special cases of the second case. If, for some $K \in \mathbb{N}$, $\cup_{k \geq K} C_k$ is bounded, then there exists a compact set $C$ such that $\cup_{k \geq K} C_k \subset C$. Since $\theta_k \in C_k$, $\limsup_k \Vert \theta_k \Vert_2 < \infty$. To show that the limit infimum of the gradient goes to zero, note that, for all $k \geq K$, $$\frac{\underline g(C)^2}{\overline g(C)^2} \frac{\underline \alpha(C)}{\overline \alpha(C)} \frac{1}{\mathcal{L}(C)} \leq \frac{\underline g(C_k)^2}{\overline g(C_k)^2} \frac{\underline \alpha(C_k)}{\overline \alpha(C_k)} \frac{1}{\mathcal{L}(C_k)}.$$ Using this relationship and [Lemma 7](#result-zoutendjik){reference-type="ref" reference="result-zoutendjik"}, $\sum_{u : \ell_{g_u-1} \geq K} \Vert \dot F(\theta_{\ell_{g_u-1}}) \Vert_2^2 < \infty$. Hence, $\liminf_{k} \Vert \dot F(\theta_k) \Vert_2 = 0$, as needed.
For the second result, there exists a $\kappa > 0$ and a $K \in \mathbb{N}$ such that $$\frac{\underline g (C_k)^2 \underline \alpha (C_k)}{\overline g(C_k)^2 \overline \alpha (C_k)} \geq \kappa, ~\forall k \geq K.$$ Moreover, since $F(\theta_k) \leq F(\theta_0)$ by [Theorem 2](#result-objective-function){reference-type="ref" reference="result-objective-function"}, $\exists c_0' > 0$ such that $\mathcal{L}(C_k) \leq c_0' + c_2 \Vert \dot F(\theta_k) \Vert_2^{w_2}$ for all $k + 1 \in \mathbb{N}$. Using these two facts and [Lemma 7](#result-zoutendjik){reference-type="ref" reference="result-zoutendjik"}, $$\sum_{u : \ell_{g_u-1} \geq K} \frac{ \Vert \dot F(\theta_{\ell_{g_u-1}}) \Vert_2^2}{c_0' + c_2 \Vert \dot F(\theta_{\ell_{g_u-1}}) \Vert_2^{w_2} } < \infty.$$ Now, if $c_2 = 0$ the result follows. Suppose $c_2 > 0$. We now verify that $\limsup_u \Vert \dot F(\theta_{\ell_{g_u-1}}) \Vert_2 < \infty$ by contradiction, which will yield an upper bound for the denominator in the preceding inequality and, in turn, imply $\lim_u \Vert \dot F(\theta_{\ell_{g_u-1}}) \Vert_2 = 0$. To this end, suppose $\limsup_u \Vert \dot F(\theta_{\ell_{g_u-1}}) \Vert_2 = \infty$. Then, there exists a subsequence $\lbrace g_u' \rbrace \subset \lbrace g_u \rbrace$ such that $$0 < \frac{c_0'}{2 c_2} \leq \Vert \dot F(\theta_{\ell_{g_u'-1}}) \Vert_2^2 - \frac{1}{2} \Vert \dot F(\theta_{\ell_{g_u'-1}}) \Vert_2^{w_2}.$$ Rearranging, $$0 < \frac{1}{2 c_2} \leq \frac{ \Vert \dot F(\theta_{\ell_{g_u'-1}}) \Vert_2^2}{c_0' + c_2 \Vert \dot F(\theta_{\ell_{g_u'-1}}) \Vert_2^{w_2} }.$$ Hence, we arrive at the contradiction $\sum_{u : \ell_{g_u' - 1} \geq K} (2c_2)^{-1} < \infty$. The result follows.
◻ # A Novel Step Size Procedure {#sec:StepSize} Having introduced our general framework and discussed its theoretical properties in [3](#sec:Algo){reference-type="ref" reference="sec:Algo"}, we now present a complete instantiation of our framework that is equipped with a novel step size technique using negative gradient directions. The entire algorithm is outlined in [\[alg:novel-step-size\]](#alg:novel-step-size){reference-type="ref" reference="alg:novel-step-size"}, and a helper function required in step size computation is presented in [\[alg:update\]](#alg:update){reference-type="ref" reference="alg:update"}. $F, \dot F, \theta_{0}, \epsilon > 0$ $k \leftarrow 0$ $\delta_k \leftarrow 1$ $w \leftarrow 10$ $\tau_{\mathop{\mathrm{obj}}}^0 \leftarrow F(\theta_0)$ $\tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{lower}}}^0, \tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{upper}}}^0 \leftarrow \Vert \dot{F}(\theta_0) \Vert/\sqrt{2}, \sqrt{20}\tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{lower}}}^0$ $j, \psi_0^k \leftarrow 0, \theta_k$ $\hat{L}_j^k \leftarrow \Call{Update}{j, k}$ $\alpha^k_{j} \leftarrow \min\left( \frac{(\tau^{k}_{\mathop{\mathrm{grad}},\mathop{\mathrm{lower}}})^2}{ \Vert \dot{F}(\psi_j^k) \Vert_2^3 + .5 \Vert \dot{F}(\psi_j^k) \Vert_2^2 \hat{L}_j^k + 10^{-16 }}, \frac{1}{ \Vert \dot{F}(\psi_j^k) \Vert_2 + .5 \hat{L}_j^k + 10^{-16} } \right) + 10^{-16}$ $\theta_{k+1}, \delta_{k+1} \leftarrow \theta_k, .5 \delta_k$ $\theta_{k+1}, \delta_{k+1} \leftarrow \psi_j^k, \delta_k$ Set $\tau_{\mathop{\mathrm{obj}}}^{k+1} \leftarrow$ by [\[eqn-nonmonotone-threshold\]](#eqn-nonmonotone-threshold){reference-type="ref" reference="eqn-nonmonotone-threshold"} $\tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{lower}}}^{k+1}, \tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{upper}}}^{k+1} \leftarrow \Vert \dot{F}(\theta_{k+1}) \Vert/\sqrt{2}, \sqrt{20}\tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{lower}}}^{k+1}$ $\theta_{k+1}, \delta_{k+1}, \leftarrow \psi_j^k, \min\{1.5\delta_k, 
1\}$ Set $\tau_{\mathop{\mathrm{obj}}}^{k+1} \leftarrow$ by [\[eqn-nonmonotone-threshold\]](#eqn-nonmonotone-threshold){reference-type="ref" reference="eqn-nonmonotone-threshold"} $\tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{lower}}}^{k+1}, \tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{upper}}}^{k+1} \leftarrow \Vert \dot{F}(\theta_{k+1}) \Vert/\sqrt{2}, \sqrt{20}\tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{lower}}}^{k+1}$ $\theta_{k+1}, \delta_{k+1}, \leftarrow \psi_j^k, \min\{1.5\delta_k, 1\}$ Set $\tau_{\mathop{\mathrm{obj}}}^{k+1} \leftarrow$ by [\[eqn-nonmonotone-threshold\]](#eqn-nonmonotone-threshold){reference-type="ref" reference="eqn-nonmonotone-threshold"} $\tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{lower}}}^{k+1}, \tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{upper}}}^{k+1} \leftarrow \tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{lower}}}^{k} \tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{upper}}}^{k}$ $k \leftarrow k+1$ Exit Inner Loop $\psi_{j+1}^k, j \leftarrow \psi_j^k - \delta_k \alpha_j^k \dot{F}(\psi^k_j), j+1$ $\theta_k$ $\dot{F}, \psi_{j}^k, \psi_{j-1}^k, \psi_{j_{k-1}}^{k-1}, L(k)$ $\hat{L}_0^0 \leftarrow 1$ $\hat{L}_0^k \leftarrow \hat{L}_{j_{k-1}}^{k-1}$ $\hat{L}_{j}^k \leftarrow \frac{ \Vert \dot{F}(\psi_j^k) - \dot{F}(\psi_{j-1}^k) \Vert_2}{ \Vert \psi_j^k-\psi_{j-1}^k \Vert_2}$ $\hat{L}_{j}^k \leftarrow \max\left( \frac{ \Vert \dot{F}(\psi_j^k) - \dot{F}(\psi_{j-1}^k) \Vert_2}{ \Vert \psi_j^k-\psi_{j-1}^k \Vert_2}, \hat{L}_{j-1}^k \right)$ $\hat{L}_j^k$ This method pairs the novel step size routine with negative gradient directions to produce quite a competitive and practical algorithm for solving general unconstrained optimization problems, as well as key data science problems (see [\[sec:Results-cutest,sec:Results-gee\]](#sec:Results-cutest,sec:Results-gee){reference-type="ref" reference="sec:Results-cutest,sec:Results-gee"}). 
As the only difference between [\[alg:novel-step-size\]](#alg:novel-step-size){reference-type="ref" reference="alg:novel-step-size"} and our general method is the specification of parameters and subroutines, and the inclusion of one more "else-if" statement, we only discuss these components, and highlight key differences when compared to other step size methods (see [3](#sec:Algo){reference-type="ref" reference="sec:Algo"} for an in-depth discussion of the general method, [\[alg-general-algorithm\]](#alg-general-algorithm){reference-type="ref" reference="alg-general-algorithm"}). To begin, the extra "else-if" statement separates out the case when the gradient is above the upper threshold, and is exactly the same as accepting iterates normally (Lines 23-24), except for the addition of gradient threshold updating. This ensures that the core behavior of [\[alg:novel-step-size\]](#alg:novel-step-size){reference-type="ref" reference="alg:novel-step-size"} is the same as our general method (see [\[alg-general-algorithm\]](#alg-general-algorithm){reference-type="ref" reference="alg-general-algorithm"}), with this inclusion only restricting threshold updates to occur when absolutely needed. Next, to discuss the novel step size routine, we break it into two distinct steps --- one for local Lipschitz approximation, and a second for step size computation. We discuss key aspects of each of these stages next. #### Local Lipschitz Approximation. Before calculation of the step size, a local Lipschitz estimator is computed by $\mathrm{Update()}$ in [\[alg:update\]](#alg:update){reference-type="ref" reference="alg:update"}; its logic can be decomposed into four cases based on key states of [\[alg:novel-step-size\]](#alg:novel-step-size){reference-type="ref" reference="alg:novel-step-size"}.
Specifically, there is an initialization phase, a re-initialization phase for subsequent initial inner loop iterations, and an aggressive/conservative switch based on whether the previous iterate was accepted (see [\[alg:update\]](#alg:update){reference-type="ref" reference="alg:update"}). The use of local Lipschitz approximation is similar to other methods such as [@nesterov2012GradientMF; @curtis2018exploiting; @malitsky2020adaptive; @zhang2020firstorder]; however, our method differs in that the approximation is used irrespective of any local model in the region, eliminating the need for objective evaluations to verify the accuracy of such a model. Furthermore, we do not explicitly assume any type of global behavior, or exact knowledge of the local Lipschitz constant, throughout the algorithm. #### Step Size Computation. After the local Lipschitz approximation step, the step size is calculated, and the calculation can be decomposed into three parts. First, the step size is the minimum of two quantities: the first will almost always be selected throughout the course of the algorithm, while the second provides a convenient upper bound for arbitrary points when proving theoretical properties (see [Lemma 8](#lemma:novel-step-size-satisfies-properties){reference-type="ref" reference="lemma:novel-step-size-satisfies-properties"}). Second, each term dampens its numerator by a function of the gradient at the current iterate and the local Lipschitz approximation in order to condition the step size. Lastly, constants $10^{-16}$ appear in the denominators, and are added to the final result of the minimum, in order to guarantee that a division by $0$ does not occur (see [Lemma 8](#lemma:novel-step-size-satisfies-properties){reference-type="ref" reference="lemma:novel-step-size-satisfies-properties"}).
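To make these two stages concrete, here is a minimal Python sketch (our own illustrative rendering, not the reference implementation): `update_lipschitz` collapses the four cases of the helper routine into a single aggressive/conservative branch controlled by a hypothetical `accepted` flag, and `step_size` reproduces the minimum of the two damped quantities with their $10^{-16}$ safeguards.

```python
import math

def norm(v):
    """Euclidean norm of a vector given as a list of floats."""
    return math.sqrt(sum(x * x for x in v))

def update_lipschitz(grad_curr, grad_prev, psi_curr, psi_prev, L_prev, accepted):
    """Secant-based local Lipschitz estimate.

    If the previous iterate was accepted, restart from the fresh secant
    ratio (aggressive); otherwise keep the running maximum (conservative).
    Collapsing the four cases into this single branch is our simplification.
    """
    dist = norm([a - b for a, b in zip(psi_curr, psi_prev)])
    if dist == 0.0:
        return L_prev
    secant = norm([a - b for a, b in zip(grad_curr, grad_prev)]) / dist
    return secant if accepted else max(secant, L_prev)

def step_size(grad, L_hat, tau_grad_lower):
    """Minimum of the two gradient- and Lipschitz-damped quantities,
    safeguarded by 1e-16 terms against division by zero."""
    g = norm(grad)
    a1 = tau_grad_lower ** 2 / (g ** 3 + 0.5 * g ** 2 * L_hat + 1e-16)
    a2 = 1.0 / (g + 0.5 * L_hat + 1e-16)
    return min(a1, a2) + 1e-16
```

Note that the $10^{-16}$ added outside the minimum also gives the strictly positive lower bound on the step size used in the theory.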
This novel step size computation, as already remarked, has similarities to other Lipschitz approximation methods; however, the inclusion of the gradient terms also relates it to Weighted Gradient-Norm Damping methods [@wu2020wngrad; @grapiglia2022AdaptiveTrust]. The difference here is that our step size is more aggressive, because it only takes into account the current iterate, and not all previous iterates, in forming a weight. Additionally, compared with [@grapiglia2022AdaptiveTrust] alone, we do not try to minimize some upper-bound model within a trust region, but directly try to reduce the optimality gap. #### Theory. Having discussed the computational aspects of our novel step size method, we now present theory to show that [\[alg:novel-step-size\]](#alg:novel-step-size){reference-type="ref" reference="alg:novel-step-size"} enjoys the global convergence properties proved for our general framework in [3](#sec:Algo){reference-type="ref" reference="sec:Algo"} (see [Theorem 4](#thm:convergence-theorem-novel-step){reference-type="ref" reference="thm:convergence-theorem-novel-step"}). We will first prove that the subroutines and parameters in [\[alg:novel-step-size\]](#alg:novel-step-size){reference-type="ref" reference="alg:novel-step-size"} satisfy [\[property-stepdirection-negative,property-stepdirection-propgrad,property-stepsize-upper,property-stepsize-lower,property-radius-nonnegative,property-radius-bounded,property-grad-lower,property-grad-upper,property-iter-max\]](#property-stepdirection-negative,property-stepdirection-propgrad,property-stepsize-upper,property-stepsize-lower,property-radius-nonnegative,property-radius-bounded,property-grad-lower,property-grad-upper,property-iter-max){reference-type="ref" reference="property-stepdirection-negative,property-stepdirection-propgrad,property-stepsize-upper,property-stepsize-lower,property-radius-nonnegative,property-radius-bounded,property-grad-lower,property-grad-upper,property-iter-max"}.
Once shown, we automatically obtain convergence results when $\Vert \theta_k \Vert_2$ is bounded, by part 1 of [Theorem 3](#result-gradient){reference-type="ref" reference="result-gradient"} for the general method. Finally, some further analysis will show that our novel step size satisfies the conditions in part 2 of [Theorem 3](#result-gradient){reference-type="ref" reference="result-gradient"}, providing guarantees when the local Lipschitz constants grow at most quadratically. **Lemma 8**. *[\[alg:novel-step-size\]](#alg:novel-step-size){reference-type="ref" reference="alg:novel-step-size"} has subroutines $\mathrm{StepDirection()}$ and $\mathrm{StepSize()}{}$ that satisfy [\[property-stepdirection-negative,property-stepdirection-propgrad,property-stepsize-upper,property-stepsize-lower\]](#property-stepdirection-negative,property-stepdirection-propgrad,property-stepsize-upper,property-stepsize-lower){reference-type="ref" reference="property-stepdirection-negative,property-stepdirection-propgrad,property-stepsize-upper,property-stepsize-lower"} and parameters $\tau_{\mathop{\mathrm{iter}}, \mathop{\mathrm{exit}}}^k, \tau^k_{\mathop{\mathrm{grad}},\mathop{\mathrm{lower}}}, \tau^k_{\mathop{\mathrm{grad}}, \mathop{\mathrm{upper}}}, \tau_{\mathop{\mathrm{iter}}, \max}^k$ that satisfy [\[property-radius-nonnegative,property-radius-bounded,property-grad-lower,property-grad-upper,property-iter-max\]](#property-radius-nonnegative,property-radius-bounded,property-grad-lower,property-grad-upper,property-iter-max){reference-type="ref" reference="property-radius-nonnegative,property-radius-bounded,property-grad-lower,property-grad-upper,property-iter-max"}.* *Proof.* Take any compact set $C \subset \mathbb{R}^n$. In [\[alg:novel-step-size\]](#alg:novel-step-size){reference-type="ref" reference="alg:novel-step-size"}, $\mathrm{StepDirection(\psi)}$ returns the negative gradient direction at $\psi$, and therefore satisfies [\[property-stepdirection-negative,property-stepdirection-propgrad\]](#property-stepdirection-negative,property-stepdirection-propgrad){reference-type="ref" reference="property-stepdirection-negative,property-stepdirection-propgrad"} with $\underbar{$g$}(C) = \bar{g}(C) = 1$.
Next to prove [\[property-stepsize-upper,property-stepsize-lower\]](#property-stepsize-upper,property-stepsize-lower){reference-type="ref" reference="property-stepsize-upper,property-stepsize-lower"}, recall we must show that $\forall \psi \in C$, any $\alpha$ computed by $\mathrm{StepSize()}{}$ at $\psi$ must satisfy $\underbar{$\alpha$}(C) < \alpha < \bar{\alpha}(C)$ for some constants $\bar{\alpha}(C), \underbar{$\alpha$}(C) > 0$. From [\[alg:novel-step-size\]](#alg:novel-step-size){reference-type="ref" reference="alg:novel-step-size"}, $\mathrm{StepSize()}{}$ returns for any $\psi$ $$\alpha = \min\left( \frac{(\tau^k_{\mathop{\mathrm{grad}},\mathop{\mathrm{lower}}})^2}{ \Vert \dot{F}(\psi) \Vert_2^3 + .5 \Vert \dot{F}(\psi) \Vert_2^2 \hat{L}_j^k + 10^{-16 }}, \frac{1}{ \Vert \dot{F}(\psi) \Vert_2 + .5 \hat{L}_j^k + 10^{-16} } \right) + 10^{-16},$$ where $\hat{L}_j^k \ge 0$ and $(\tau^k_{\mathop{\mathrm{grad}},\mathop{\mathrm{lower}}})^2 \ge 0$. Since everything inside the minimum is positive, $\underbar{$\alpha$}(C) \geq 10^{-16}$. To upper bound $\alpha$, $$\alpha \leq \frac{1}{ \Vert \dot{F}(\psi) \Vert_2 + .5 \hat{L}_j^k + 10^{-16} } + 10^{-16} \le \frac{1}{10^{-16}} + 10^{-16} = \bar{\alpha}(C).$$ To prove our other parameters satisfy the required properties, note that $\tau_{\mathop{\mathrm{iter}}, \mathop{\mathrm{exit}}}^k = 10$, which is non-negative and bounded above ([\[property-radius-nonnegative,property-radius-bounded\]](#property-radius-nonnegative,property-radius-bounded){reference-type="ref" reference="property-radius-nonnegative,property-radius-bounded"}); and $\tau_{\mathop{\mathrm{iter}}, \max}^k = 100$, which is greater than 1 and bounded above ([\[property-iter-max\]](#property-iter-max){reference-type="ref" reference="property-iter-max"}). 
Lastly, in [\[alg:novel-step-size\]](#alg:novel-step-size){reference-type="ref" reference="alg:novel-step-size"}, we iteratively define the gradient upper and lower thresholds, whenever the gradient thresholds are violated, as $$\begin{aligned} 0 < \tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{lower}}}^k = \Vert \dot{F}(\theta_k) \Vert_2/\sqrt{2} < \Vert \dot{F}(\theta_k) \Vert_2, \text{\quad} \Vert \dot{F}(\theta_k) \Vert_2 < \tau_{\mathop{\mathrm{grad}},\mathop{\mathrm{upper}}}^k = \sqrt{10} \Vert \dot{F}(\theta_k) \Vert_2. \end{aligned}$$ Whenever these parameters are not updated, the gradient is strictly within the previous interval, which satisfies [\[property-grad-lower,property-grad-upper\]](#property-grad-lower,property-grad-upper){reference-type="ref" reference="property-grad-lower,property-grad-upper"}. ◻ We are now ready to prove the main convergence results for [\[alg:novel-step-size\]](#alg:novel-step-size){reference-type="ref" reference="alg:novel-step-size"}. The first conclusion will follow directly from [Theorem 3](#result-gradient){reference-type="ref" reference="result-gradient"}, while the special cases will require some analysis to bound all iterates, and prove a property of our novel step size.
The theorem and proof that follow are an example of how to use [Theorem 3](#result-gradient){reference-type="ref" reference="result-gradient"}, and this technique can be used to prove global convergence for any user-defined subroutines and parameters, assuming [\[property-stepdirection-negative,property-stepdirection-propgrad,property-stepsize-upper,property-stepsize-lower,property-radius-nonnegative,property-radius-bounded,property-grad-lower,property-grad-upper,property-iter-max\]](#property-stepdirection-negative,property-stepdirection-propgrad,property-stepsize-upper,property-stepsize-lower,property-radius-nonnegative,property-radius-bounded,property-grad-lower,property-grad-upper,property-iter-max){reference-type="ref" reference="property-stepdirection-negative,property-stepdirection-propgrad,property-stepsize-upper,property-stepsize-lower,property-radius-nonnegative,property-radius-bounded,property-grad-lower,property-grad-upper,property-iter-max"} are satisfied. **Theorem 4**. *Suppose [\[problem\]](#problem){reference-type="ref" reference="problem"} satisfies [\[as-bounded below,as-loc-lip-cont\]](#as-bounded below,as-loc-lip-cont){reference-type="ref" reference="as-bounded below,as-loc-lip-cont"}, and is solved using [\[alg:novel-step-size\]](#alg:novel-step-size){reference-type="ref" reference="alg:novel-step-size"}. Then, either there exists a $k+1 \in \mathbb{N}$ such that $F(\theta_k) \le F(\theta_0)$ and $\dot{F}(\theta_k)=0$, or $\{F(\theta_k) : k+1 \in \mathbb{N}\}$ is bounded and has a strictly decreasing subsequence. In the latter case, we have the following results.* 1. *If $\{\theta_k\}$ are bounded, then $\liminf_{k\to\infty} \Vert \dot F(\theta_k) \Vert_2 = 0$.* 2.
*If $\{\theta_k\}$ are unbounded and, for some $w_1 \geq 0$ and $w_2 \in [0,2]$, $\exists c_0,c_1, c_2 \geq 0$ such that $\mathcal{L}(C_k) \leq c_0 + c_1 (F(\theta_k) - F_{l.b.})^{w_1} + c_2 \Vert \dot F(\theta_k) \Vert_2^{w_2}$, then $\liminf_{k \to \infty} \Vert \dot F(\theta_k) \Vert_2 = 0$.* *Proof.* By [Lemma 8](#lemma:novel-step-size-satisfies-properties){reference-type="ref" reference="lemma:novel-step-size-satisfies-properties"} and [Theorem 2](#result-objective-function){reference-type="ref" reference="result-objective-function"}, one of the two cases holds. We now study the special cases when the procedure does not terminate in finite time. In the case where $\{\theta_k\}$ is bounded, we will first bound each $\theta_k$ and the iterates $\{\psi_1^k,...,\psi_{j_k}^k\}$ in suitable compact regions $C_k$ in order to get an overall compact set $C$ that bounds all iterates of [\[alg:novel-step-size\]](#alg:novel-step-size){reference-type="ref" reference="alg:novel-step-size"}. Specifically, by our parameter initializations and construction of the triggering events, for any $k+1 \in \mathbb{N}$, if we define $\mathcal{B}(\theta_k, 10) = \{\psi : \Vert \psi-\theta_k \Vert_2 \le 10\}$, then $\{\psi_1^k,...,\psi_{j_k-1}^k\} \subset \mathcal{B}(\theta_k, 10)$.
To bound the distance between $\psi_{j_k}^k$ and $\psi_{j_k-1}^k$, as the last iterate could leave $\mathcal{B}(\theta_k, 10)$, define $\mathcal{G}(\theta, R) = \sup_{\psi \in \mathcal{B}(\theta, R)} \Vert \dot{F}(\psi) \Vert_2$; then, using that $\delta_k \le 1$ and $\alpha_{j_k-1}^k \le 10^{-16} + 1/( \Vert \dot{F}(\psi_{j_k-1}^k) \Vert_2 + .5 \hat{L}_{j_k-1}^k + 10^{-16})$, we obtain $$\left\Vert \psi_{j_k}^k - \psi_{j_k-1}^k \right\Vert_2 = \left\Vert \delta_k \alpha_{j_k-1}^k \dot{F}(\psi_{j_k-1}^k) \right\Vert_2 \le 1 + 10^{-16} \left\Vert \dot{F}(\psi_{j_k-1}^k) \right\Vert_2 \le 1 + 10^{-16}\mathcal{G}(\theta_k, 10).$$ Therefore, $$\left\Vert \theta_k - \psi_{j_k}^k \right\Vert_2 \leq \left\Vert \theta_k - \psi_{j_k-1}^k \right\Vert_2 + \left\Vert \psi_{j_k-1}^k - \psi_{j_k}^k \right\Vert_2 \le 11 + 10^{-16}\mathcal{G}(\theta_k, 10).$$ Now, by defining $C_k = \{\psi : \Vert \psi-\theta_k \Vert_2 \le 11 + 10^{-16}\mathcal{G}(\theta_k, 10)\}$, we get the desired property that $\{\psi_1^k,...,\psi_{j_k}^k\} \subset C_k$. Now, by our assumptions $\exists R > 0$ such that for all $k+1 \in\mathbb{N}, ~ \Vert \theta_k \Vert_2 \leq R$. Therefore, for all $k+1 \in\mathbb{N}$ it must be the case that $\mathcal{G}(\theta_k, 10) \le \mathcal{G}(0, 10+R)$. This implies that $$\bigcup_{k+1 \in \mathbb{N}} C_k \subseteq \{\psi: \Vert \psi \Vert_2 \le R + 11 + 10^{-16}\mathcal{G}(0, 10+R) \} =\vcentcolon C.$$ Therefore, by the rest of the proof of part 1 of [Theorem 3](#result-gradient){reference-type="ref" reference="result-gradient"}, we obtain $\liminf_{k\to\infty} \Vert \dot F(\theta_k) \Vert_2 = 0$.
To obtain the second special case, by part 2 of [Theorem 3](#result-gradient){reference-type="ref" reference="result-gradient"} we must prove that $$\liminf_{k\to\infty} \frac{\underline g (C_k)^2 \underline \alpha (C_k)}{\overline g(C_k)^2 \overline \alpha (C_k)} > 0.$$ Recall that negative gradient directions satisfy [\[property-stepdirection-negative,property-stepdirection-propgrad\]](#property-stepdirection-negative,property-stepdirection-propgrad){reference-type="ref" reference="property-stepdirection-negative,property-stepdirection-propgrad"} with $\underline g (C_k) = \overline g(C_k) = 1$. Finally, by [Lemma 8](#lemma:novel-step-size-satisfies-properties){reference-type="ref" reference="lemma:novel-step-size-satisfies-properties"}, for all $k+1 \in \mathbb{N}$, $$\frac{\underline \alpha (C_k)}{\overline \alpha (C_k)} \geq \frac{10^{-16}}{10^{16} + 10^{-16}} > 0.$$ Therefore, we obtain the required result by [Theorem 3](#result-gradient){reference-type="ref" reference="result-gradient"}. ◻ In summary, we have just presented a novel step size procedure utilizing negative gradient directions within our framework in [\[alg:novel-step-size\]](#alg:novel-step-size){reference-type="ref" reference="alg:novel-step-size"}, and shown how to apply the theory developed for our general method to specific subroutines and parameters. Throughout this process, we have shown that our novel step size procedure is well-behaved ([Lemma 8](#lemma:novel-step-size-satisfies-properties){reference-type="ref" reference="lemma:novel-step-size-satisfies-properties"}), and enjoys global convergence results under assumptions that are broader than those of many alternative methods ([Theorem 4](#thm:convergence-theorem-novel-step){reference-type="ref" reference="thm:convergence-theorem-novel-step"}).
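As an illustration of how the pieces fit together, the following is a stripped-down Python sketch of the outer/inner loop (our simplification, not the authors' implementation: the triggering events are collapsed into a fixed inner-iteration cap, and a plain monotone Armijo test stands in for the nonmonotone threshold).

```python
import math

def minimize_sketch(F, dF, theta0, max_outer=200, tol=1e-8, rho=1e-4, inner_cap=10):
    """Stripped-down sketch: negative gradient steps with the novel step
    size, a running secant-based Lipschitz estimate, and a scaling factor
    delta halved on rejection and grown (capped at 1) on acceptance."""
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    theta, delta, L_hat = list(theta0), 1.0, 1.0
    for _ in range(max_outer):
        g0 = dF(theta)
        g0n = norm(g0)
        if g0n <= tol:
            break
        tau = g0n / math.sqrt(2.0)          # gradient lower threshold
        psi, g, alpha0 = list(theta), g0, None
        for _ in range(inner_cap):
            gn = norm(g)
            alpha = min(tau ** 2 / (gn ** 3 + 0.5 * gn ** 2 * L_hat + 1e-16),
                        1.0 / (gn + 0.5 * L_hat + 1e-16)) + 1e-16
            if alpha0 is None:
                alpha0 = alpha
            psi_new = [p - delta * alpha * gi for p, gi in zip(psi, g)]
            g_new = dF(psi_new)
            step = norm([a - b for a, b in zip(psi_new, psi)])
            if step > 0.0:                  # conservative secant update
                L_hat = max(norm([a - b for a, b in zip(g_new, g)]) / step, L_hat)
            psi, g = psi_new, g_new
        # monotone Armijo-type acceptance of the inner loop's candidate
        if F(psi) < F(theta) - rho * delta * alpha0 * g0n ** 2:
            theta, delta = psi, min(1.5 * delta, 1.0)
        else:
            delta *= 0.5
    return theta
```

On a simple quadratic such as $F(\theta) = \tfrac{1}{2}\Vert\theta\Vert_2^2$, this sketch drives the gradient norm below the tolerance in a handful of outer iterations.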
With this solid theoretical foundation, we will now numerically show that this procedure is very competitive against a range of first- and second-order methods on CUTEst problems ([5](#sec:Results-cutest){reference-type="ref" reference="sec:Results-cutest"}), and has superior performance on several data science problems ([6](#sec:Results-gee){reference-type="ref" reference="sec:Results-gee"}).

# Numerical Experiments on CUTEst Problems {#sec:Results-cutest}

We now present a numerical experiment using our methodology (see [\[alg:novel-step-size\]](#alg:novel-step-size){reference-type="ref" reference="alg:novel-step-size"} in [4](#sec:StepSize){reference-type="ref" reference="sec:StepSize"} for details) on a set of unconstrained optimization problems from the CUTEst library [@gould2015]. The details of the experiment are in [4](#table:cutest-exper){reference-type="ref" reference="table:cutest-exper"}.

|                   |                                                                                                                                                                                                                                                                                                                                                                                          |
|:------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **Problems**      | All[\[footnote-alg-exception\]](#footnote-alg-exception){reference-type="ref" reference="footnote-alg-exception"} unconstrained CUTEst problems with objective, gradient, and Hessian information on the smallest dimension setting (117 problems).                                                                                                                                        |
| **Algorithms**    | Gradient Descent with Armijo and Wolfe line search [@nocedal2006numerical Chapter 3], Cubic Regularization of Newton's method (CRN, [@nesterov2006cubic]), Adaptive Cubic Regularization (ACR, [@cartis2011cubic1]), Dynamic Method (DMC, [@curtis2018exploiting]), and our method ([\[alg:novel-step-size\]](#alg:novel-step-size){reference-type="ref" reference="alg:novel-step-size"}). |
| **Termination**   | $20{,}000$ iterations, or gradient tolerance of $10^{-5}$.                                                                                                                                                                                                                                                                                                                                |
| **Data Recorded** | Number of objective and gradient evaluations, and CPU time.                                                                                                                                                                                                                                                                                                                               |

: CUTEst numerical experiment overview. {#table:cutest-exper}

To summarize the information in [4](#table:cutest-exper){reference-type="ref" reference="table:cutest-exper"}, we run three first-order methods and three second-order methods for $20{,}000$ iterations, or until a gradient tolerance of $10^{-5}$ is reached, on all unconstrained problems that have objective, gradient, and Hessian information.[^3] For algorithms that have an optimization sub-problem to compute the next iterate, we utilize a Krylov-based trust region method accessed through SciPy [see @virtanen2020scipy; @gould1999trustregion], with a limit of $500$ iterations, or until a specific gradient tolerance is reached ([see @cartis2011cubic1 §7] for such gradient tolerances).[^4] All methods that have some type of inner loop have an iteration limit of $100$ for that loop.[^5] Finally, to compare these methods we record the number of objective, gradient, and Hessian evaluations, as well as CPU time; in our analysis we concentrate on comparing these quantities only on problems where *all* algorithms reached an iterate with gradient norm below $10^{-5}$ (i.e., a successful termination), leaving a total of 76 problems in the analysis.
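For concreteness, the success filter and the relative-change comparison just described can be sketched as follows (illustrative only; the record layout and field names are our own invention, not the authors' code).

```python
def successful_problems(results, tol=1e-5):
    """Problems on which *every* algorithm drove the final gradient norm
    below `tol`, i.e. terminated successfully."""
    return sorted(p for p, runs in results.items()
                  if all(r["final_grad_norm"] <= tol for r in runs.values()))

def relative_change(results, ours, other, key, problems):
    """Relative change of a recorded quantity (e.g. combined objective and
    gradient evaluations) of `ours` versus `other`; negative values mean
    our method used fewer."""
    return {p: (results[p][ours][key] - results[p][other][key])
               / results[p][other][key]
            for p in problems}
```

Restricting the comparison to the jointly successful problems avoids rewarding an algorithm that stops early without reaching the gradient tolerance.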
Before jumping into the results, we remark that the list of algorithms in [4](#table:cutest-exper){reference-type="ref" reference="table:cutest-exper"} (and what will be tested in [6](#sec:Results-gee){reference-type="ref" reference="sec:Results-gee"}) is missing an important class of methods that have recently gained attention called Objective Function Free Optimization algorithms [@gratton2022firstorderOFFO; @gratton2023OFFORegularized; @gratton2023OFFOSecondOrderOpt; @gratton2022ParametricComplexityAnalysis; @gratton2023ComplexityOfFirstOrderOFFO]. We leave these algorithms out of our comparison, because many of the assumptions required in their theory are not satisfied by problems in data science; specifically, these methods require global Lipschitz continuity of the gradient, and that the infinity norm of the gradient is bounded by some constant over its entire domain [see @gratton2022firstorderOFFO AS.2 and AS.3], which are unrealistic assumptions for many problems (for examples, [see @gratton2022firstorderOFFO §5] and [2](#table-data-science-problems){reference-type="ref" reference="table-data-science-problems"}). Finally, the analysis is structured as follows: we present graphs that illustrate the relative change in objective and gradient evaluations between our method and the competing algorithms to gauge computational cost, then supplement this information with a graph comparing CPU time to uncover additional expenses incurred throughout each algorithm. We split this analysis into first and second order methods, beginning with the first order methods. #### Comparison with First Order Methods. The relative change in objective and gradient evaluations between our method and the competing first order methods is presented in [\[fig:first-order-rel-change-total\]](#fig:first-order-rel-change-total){reference-type="ref" reference="fig:first-order-rel-change-total"}.
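Since "relative change" admits several conventions, the comparisons below can be read with a sketch like the following in mind; the baseline-normalized convention and the sample counts are our illustrative assumptions, not necessarily the paper's exact bookkeeping:

```python
# Sketch of a baseline-normalized relative-change metric (an assumed convention).
def relative_change(ours, baseline):
    """(ours - baseline) / baseline; negative values mean our method
    used fewer evaluations than the baseline on that problem."""
    return (ours - baseline) / baseline

# Illustrative per-problem totals of objective plus gradient evaluations.
ours_totals = [120, 300, 45]
baseline_totals = [200, 280, 90]
changes = [relative_change(o, b) for o, b in zip(ours_totals, baseline_totals)]
```

Under this convention, a value of $-0.4$ on a problem means our method needed $40\%$ fewer evaluations than the baseline there.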
As illustrated from these graphs, our method is extremely competitive when comparing on this metric, as it uses fewer combined objective and gradient evaluations on all but 5 of the 76 problems. When comparing just objective evaluations in [\[fig:first-order-rel-change-func-grad\]](#fig:first-order-rel-change-func-grad){reference-type="ref" reference="fig:first-order-rel-change-func-grad"}, our method uses evaluations economically owing to the triggering events for the inner loop, whereas line search methods rely heavily on repeated objective evaluations for iterate acceptance. On the other hand, our method requires more gradient evaluations on many problems, as some inner loop iterations are bound to be wasted in comparison to the one evaluation per descent guarantee for line search methods. Lastly, we compare CPU time between our method and the other first order methods in [\[fig:first-order-cpu-time-comparison\]](#fig:first-order-cpu-time-comparison){reference-type="ref" reference="fig:first-order-cpu-time-comparison"}. As can be seen in [\[fig:first-order-cpu-time-comparison\]](#fig:first-order-cpu-time-comparison){reference-type="ref" reference="fig:first-order-cpu-time-comparison"}, our method compares favorably against line search algorithms on only slightly more than half of the problems. While [\[fig:first-order-rel-change-total,fig:first-order-rel-change-func-grad\]](#fig:first-order-rel-change-total,fig:first-order-rel-change-func-grad){reference-type="ref" reference="fig:first-order-rel-change-total,fig:first-order-rel-change-func-grad"} might lead one to believe that our method would be outright faster on all problems, there does appear to be some non-trivial computational cost in checking the triggering events in the inner loop. #### Comparison with Second Order Methods.
To compare our method against second order methods, we present the same relative change graphics in [\[fig:second-order-rel-change-total\]](#fig:second-order-rel-change-total){reference-type="ref" reference="fig:second-order-rel-change-total"} and [\[fig:second-order-rel-change-grad-func\]](#fig:second-order-rel-change-grad-func){reference-type="ref" reference="fig:second-order-rel-change-grad-func"}. As can be seen from [\[fig:second-order-rel-change-total\]](#fig:second-order-rel-change-total){reference-type="ref" reference="fig:second-order-rel-change-total"}, cubic regularized Newton's method and the adaptive cubic regularization method require substantially fewer objective and gradient evaluations compared to our first-order algorithm. Furthermore, when comparing relative change of just the objective or the gradient evaluations separately in [\[fig:second-order-rel-change-grad-func\]](#fig:second-order-rel-change-grad-func){reference-type="ref" reference="fig:second-order-rel-change-grad-func"}, cubic regularized Newton's method and adaptive cubic regularization require fewer evaluations than our method on all but a handful of problems. Such results are not surprising, as second-order methods make use of Hessian information in order to find a minimizer [see @nesterov2006cubic], information that our procedure does not use. However, when we use CPU times to account for the cost of using Hessian information and of solving the sub-problems created by using Hessian information (see [\[fig:second-order-cpu-time-comparison\]](#fig:second-order-cpu-time-comparison){reference-type="ref" reference="fig:second-order-cpu-time-comparison"}), our method does better on almost all of the problems. #### In Summary. When comparing our method against two first order methods and three second order methods on a set of CUTEst problems, we see that our method is extremely competitive.
In comparison to the first order methods, we see that our novel procedure economically uses objective evaluations at the price of a small increase in gradient evaluations; in comparison to the second order methods, our algorithm has superior CPU time performance even when taking more objective and gradient evaluations. This makes our method a competitive and practical alternative to traditional line search techniques and second order methods in addressing general unconstrained optimization problems, and, with the possible exception of Armijo line search, it is the only algorithm that enjoys a sufficiently general theory for problems in data science. # Numerical Experiments on GEE Problems {#sec:Results-gee} One important characteristic of our general framework (see [\[alg-general-algorithm\]](#alg-general-algorithm){reference-type="ref" reference="alg-general-algorithm"}) that was showcased when using our novel step size routine on the CUTEst problems is the economical use of objective function evaluations (see [5](#sec:Results-cutest){reference-type="ref" reference="sec:Results-cutest"}). This makes our method the preferred candidate for data science applications where accurate objective evaluations are needed, yet are expensive to obtain, and gradient evaluations are inexpensive [see @berahas2021global; @gratton2023ComplexityOfFirstOrderOFFO]. A key data science application where objective evaluations are exactly such a hindrance is estimation via generalized estimating equations (GEEs). Often, this method is used when data exhibit complex grouped structure, as in repeated measurements in biomedical studies [see @lipsitz2008geelongitudinal Chapter 3]. This modeling procedure *first* specifies the gradient function in some standard, computable form, whereas the objective must then be calculated, or approximated via numerical integration (i.e., using quadrature), through path integrals over the vector field defined by the gradient function [see @mccullagh1989glm Chapter 9].
Specifically, by letting $Y_i \in \mathbb{R}^{n_i}$, for groups $i = 1, \dots, N$, be a vector of $n_i$ measurements from group $i$, with covariates $X_i \in \mathbb{R}^{n_i \times n}$, the average response of the group, $\mu_i(\theta)$, is modeled using unknown weights, $\theta \in \mathbb{R}^{n}$, and a (possibly non-linear) function of $\theta^\intercal X_i$. Then, by specifying how group $i$ is inter-correlated through a covariance matrix, $V_i(\theta) \in \mathbb{R}^{n_i \times n_i}$, and letting $D_i(\theta)$ be the Jacobian of $\mu_i(\theta)$, $\theta$ can be estimated by finding the root of $$\label{eq:general-gee} \dot{F}(\theta) = -\sum_{i=1}^N D_i(\theta)^\intercal V_i(\theta)^{-1}(Y_i-\mu_i(\theta)).$$ For statistical reasons [see @mccullagh1989glm Chapter 9], finding the root of [\[eq:general-gee\]](#eq:general-gee){reference-type="ref" reference="eq:general-gee"} is generally not enough, and a minimizer of the objective (when it exists) is desired; therefore, estimation of $\theta$ can be accomplished by minimizing some appropriate functional. One formulation is the following: given some reference value $\theta_{\mathop{\mathrm{ref}}}$ and path $C \subset \mathbb{R}^n$, with smooth parametrization $p(t) : [0,1] \to \mathbb{R}^{n}$, with $p(0) = \theta_{\mathop{\mathrm{ref}}}$ and $p(1) = \theta$, the objective function can be defined as $$\label{eq:general-gee-path-form} F(\theta) = \int_C \dot{F} \cdot dp(t) = \int_0^1 \dot{F}(p(t))^\intercal \dot{p}(t)dt.$$ Provided $\dot{F}$ is an irrotational vector field, the integral will be independent of the path, $p(t)$, making $F(\theta)$ well-defined and "act" like a likelihood function. Unfortunately, owing to the expressivity of the variance function and non-linear components, the integral, [\[eq:general-gee-path-form\]](#eq:general-gee-path-form){reference-type="ref" reference="eq:general-gee-path-form"}, might not have a closed form expression, requiring expensive approximation techniques to evaluate.
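To make the quadrature step concrete, the following minimal sketch evaluates the path-integral objective along a straight-line path from $\theta_{\mathop{\mathrm{ref}}}$ to $\theta$; the gradient `F_dot` passed in is a hypothetical stand-in (the gradient of $\tfrac{1}{2}\Vert x \Vert_2^2$), not one of the GEE gradients above:

```python
import numpy as np
from scipy.integrate import quad


def path_objective(F_dot, theta, theta_ref):
    """Approximate F(theta) = int_0^1 F_dot(p(t))^T p'(t) dt along the
    straight-line path p(t) = theta_ref + t * (theta - theta_ref)."""
    direction = theta - theta_ref               # p'(t) is constant for this path
    integrand = lambda t: F_dot(theta_ref + t * direction) @ direction
    value, _ = quad(integrand, 0.0, 1.0)        # adaptive quadrature
    return value


# Stand-in gradient: for F(x) = 0.5 * ||x||^2 the gradient is x itself, so the
# path integral from 0 to theta should recover 0.5 * ||theta||^2.
theta = np.array([3.0, 4.0])
approx = path_objective(lambda x: x, theta, np.zeros(2))
```

Because this stand-in gradient field is irrotational, any smooth path gives the same value; in the GEE setting this is exactly the path-independence condition discussed above.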
This characteristic makes GEE estimates hard to compute for objective-evaluation-heavy methods, while our algorithm is an excellent candidate as it uses evaluations economically. Having introduced GEEs, we now present numerical experiments on two canonical GEE examples, Wedderburn's Leaf Blotch Model and a simplified Fieller-Creasy estimation problem. The details of the experiments are in [5](#table:gee-expr){reference-type="ref" reference="table:gee-expr"}. --------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- **Problems** Wedderburn's Leaf Blotch Model ([\[eq:wedderburn-obj\]](#eq:wedderburn-obj){reference-type="ref" reference="eq:wedderburn-obj"}), and simplified Fieller-Creasy estimation ([\[eq:fieller-creasy-obj\]](#eq:fieller-creasy-obj){reference-type="ref" reference="eq:fieller-creasy-obj"}). **Algorithms** Gradient Descent with Armijo and Wolfe line search ([@nocedal2006numerical Chapter 3]), Cubic Regularization Newton's (CRN, [@nesterov2006cubic]), Adaptive Cubic Regularization (ACR, [@cartis2011cubic1]), Dynamic Method (DMC, [@curtis2018exploiting]), Our method ([\[alg:novel-step-size\]](#alg:novel-step-size){reference-type="ref" reference="alg:novel-step-size"}), Root Finding - Armijo, and Powell's Dogleg ([@nocedal2006numerical Chapter 11]). **Starting Points** Wedderburn's Example: Initial components of $\theta$ are randomly generated between $-1$ and $1$, except the first and eleventh were set to $0$. Fieller-Creasy Example: Initial $\theta$ randomly generated between $0$ and $1$.
**Trials** We randomly generate a set of $1{,}000$ starting points as described, and run each algorithm once at every point. **Termination** $1{,}000$ iterations, or gradient tolerance of $10^{-5}$. **Data Recorded** Number of objective and gradient evaluations, and CPU Time. --------------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- : GEE numerical experiment overview. To summarize the information in [5](#table:gee-expr){reference-type="ref" reference="table:gee-expr"}, for each of our GEE problems, we run our method, two first order methods, three second order methods, and two root finding methods on a set of $1{,}000$ randomly generated starting points for a maximum of $1{,}000$ iterations, or until a gradient tolerance of $10^{-5}$ is attained (a successful termination). We test the same first and second order methods as in [5](#sec:Results-cutest){reference-type="ref" reference="sec:Results-cutest"}, with the inclusion of two root finding algorithms, Gradient Descent with Armijo line search on the objective $.5 \Vert \dot{F}(\theta) \Vert_2^2$ (Root Finding - Armijo), and Powell's Dogleg method [see @nocedal2006numerical Chapter 11], to compare against this type of estimation procedure. Lastly, to compare the methods, we record the number of objective and gradient evaluations, along with the CPU time until termination. The discussion of the numerical experiments is separated by problem, with a short introduction in the beginning, followed by numerical results. We start the analysis with Wedderburn's Leaf Blotch example. 
#### Wedderburn's Leaf Blotch Example. First introduced in Wedderburn's seminal paper on analyzing a leaf disease occurring in barley plants [see @wedderburn1974quasilikelihood], the Leaf Blotch estimating equations, defined for $\theta \in \mathbb{R}^{20}$, are $$\dot{F}(\theta) = -\sum_{i=1}^{90} \frac{y_i-\mu_i(\theta)}{\mu_i(\theta)(1-\mu_i(\theta))} x_i, \text{\quad} \mu_i(\theta) = \frac{\exp(\theta^\intercal x_i)}{1 + \exp(\theta^\intercal x_i)}.$$ To formulate an objective, any reference value, $\theta_{\mathop{\mathrm{ref}}}$, and any smooth path, $p(t)$, can be selected. We use the true minimizer, $\theta^*$, and set the path of integration to be $p_{\theta}(t) = t\theta + (1-t)\theta^*$, so that the optimization problem is $$\label{eq:wedderburn-obj} \min_{\theta\in\mathbb{R}^{20}} F(\theta) \vcentcolon=\min_{\theta\in\mathbb{R}^{20}} \int_0^1 \dot{F}(p_{\theta}(t))^\intercal (\theta-\theta^*)dt.$$ Wedderburn's example has an objective function that can be found in closed form; however, for the sake of illustration, we use quadrature to approximate [\[eq:wedderburn-obj\]](#eq:wedderburn-obj){reference-type="ref" reference="eq:wedderburn-obj"}. Finally, to begin the discussion of numerical results, we present plots with the CPU time for each algorithm in [\[fig:wedderburn-cpu-time-results\]](#fig:wedderburn-cpu-time-results){reference-type="ref" reference="fig:wedderburn-cpu-time-results"}, and the number of objective and gradient evaluations for the optimization algorithms in [\[fig:wedderburn-evaluation-boxplots\]](#fig:wedderburn-evaluation-boxplots){reference-type="ref" reference="fig:wedderburn-evaluation-boxplots"}. In both graphs, for each method, we only take into account trials where termination occurred by satisfaction of the gradient tolerance condition (i.e., successful termination).
Additionally, the method of root finding using Armijo line search is missing from the plots because it was found that more than $10{,}000$ iterations were needed for this method to reach the gradient tolerance condition. From [\[fig:wedderburn-cpu-time-results\]](#fig:wedderburn-cpu-time-results){reference-type="ref" reference="fig:wedderburn-cpu-time-results"}, DMC takes the longest; a surprising cluster of equally performing methods --- CRN, ACR, and Gradient Descent using Armijo and Wolfe line search --- are faster than DMC; our procedure is faster than all those using objective function information; and Powell's Dogleg method, which does not require any objective function information, is the quickest. For first-order methods, these relative CPU times are readily explained by the number of objective evaluations required (see [\[fig:wedderburn-evaluation-boxplots\]](#fig:wedderburn-evaluation-boxplots){reference-type="ref" reference="fig:wedderburn-evaluation-boxplots"}); whereas, for second-order methods, the slower CPU times (despite fewer evaluations) are caused by expensive sub-problem solvers. For this example, Powell's dogleg method seems to be the best choice, followed by our procedure; however, as we will see in the next example, Powell's dogleg method often finds roots corresponding to maximizers, whereas our solutions correspond to the minimizer. #### Fieller-Creasy Ratio Estimation Example. To begin, we consider the simplified Fieller-Creasy problem [see @mccullagh1989glm Chapter 9, §4]. 
The estimating equation for $\theta \in \mathbb{R}$, provided $N \in \mathbb{N}$ datapoints, $\{(Y_{i1}, Y_{i2})\}_{i=1}^N$, is $$\dot{F}(\theta) = -\sum_{i=1}^N \frac{(Y_{i2}+\theta Y_{i1})(Y_{i1} - \theta Y_{i2})}{\sigma^2(1+\theta^2)^2}.$$ Since this problem is one dimensional, we use the fundamental theorem of calculus to define the objective as $$\label{eq:fieller-creasy-obj} \min_{\theta\in\mathbb{R}} F(\theta) \vcentcolon=\min_{\theta\in\mathbb{R}} \int_{0}^\theta \dot{F}(t) dt.$$ The Fieller-Creasy problem has an objective function that can be found in closed form; however, for illustration, we use quadrature to approximate [\[eq:fieller-creasy-obj\]](#eq:fieller-creasy-obj){reference-type="ref" reference="eq:fieller-creasy-obj"}. We use the same experimental design as in [5](#table:gee-expr){reference-type="ref" reference="table:gee-expr"}, and use a simulated dataset of 50 points, where $\{(Y_{i1}, Y_{i2})\}_{i=1}^{50}$ are independent, normally distributed random variables, with means $\{(\mu_i, \mu_i/5)\}_{i=1}^{50}$, and standard deviation of $.05$. The means are generated by taking $50$ points evenly spaced between $1$ and $3$. Generating the random datapoints in this way, there will be two stationary points, one at $\approx 5$ and another at $\approx -.2$, the first corresponding to a minimum and the second corresponding to a maximum. To begin the analysis, CPU times are plotted in [\[fig:fieller-creasy-cpu\]](#fig:fieller-creasy-cpu){reference-type="ref" reference="fig:fieller-creasy-cpu"}, and the number of objective and gradient evaluations are plotted for the optimization methods in [\[fig:fieller-creasy-evaluations\]](#fig:fieller-creasy-evaluations){reference-type="ref" reference="fig:fieller-creasy-evaluations"}. In both plots and for each algorithm, we only incorporate counts where the algorithm had a successful termination event and the final iterate was close to the approximate minimizer.
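The simulation design above, and its two stationary points, can be reproduced with a short script (a sketch under the stated design; the random seed and plugging the simulation standard deviation in for $\sigma$ are our assumptions):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

rng = np.random.default_rng(0)            # fixed seed (our choice)
sigma = 0.05
mu = np.linspace(1.0, 3.0, 50)            # 50 means evenly spaced in [1, 3]
Y1 = rng.normal(mu, sigma)                # simulated responses with means (mu_i, mu_i / 5)
Y2 = rng.normal(mu / 5.0, sigma)


def F_dot(theta):
    """Simplified Fieller-Creasy estimating equation."""
    return -np.sum((Y2 + theta * Y1) * (Y1 - theta * Y2)) / (sigma**2 * (1 + theta**2) ** 2)


def F(theta):
    """Objective via the fundamental theorem of calculus, approximated by quadrature."""
    value, _ = quad(F_dot, 0.0, theta)
    return value


# Bracket the two stationary points: near 5 (a minimizer) and near -0.2 (a maximizer).
root_min = brentq(F_dot, 3.0, 8.0)
root_max = brentq(F_dot, -1.0, 0.0)
```

Note that a root finder applied to $\dot{F}$ alone cannot distinguish these two roots, which is exactly the reliability issue observed for the root-finding methods in these experiments.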
Furthermore, notice that root finding using gradient descent with Armijo line search is represented in the comparison as well. In [\[fig:fieller-creasy-cpu\]](#fig:fieller-creasy-cpu){reference-type="ref" reference="fig:fieller-creasy-cpu"}, there is much the same trend as in the previous example: root finding methods are fastest, followed by our procedure, and trailed by the remaining first-order and second-order algorithms. Also as in the previous example, the same explanations seem to persist in describing these relative differences (see [\[fig:fieller-creasy-evaluations\]](#fig:fieller-creasy-evaluations){reference-type="ref" reference="fig:fieller-creasy-evaluations"}). Again, the root finding methods would seem to be the more favorable choice as they are fastest; however, the root finding methods are less reliable in finding a minimizer. Specifically, in [6](#table:gee-terminal-iterate-state){reference-type="ref" reference="table:gee-terminal-iterate-state"}, we count the number of times a solver approximately finds a minimizer, a maximizer, or neither.[^6] We readily observe that the root-finding methods tend to be substantially less reliable in comparison to our procedure. Hence, despite the root finding methods' faster speed, our procedure is preferable once reliability is taken into account. **Algorithm** **Approximate Minimizer** **Approximate Maximizer** **Neither** ----------------------- --------------------------- --------------------------- ------------- DMC 655 0 345 CRN 856 0 144 ACR 405 0 595 Powell's Dogleg 601 399 0 Root Finding - Armijo 414 125 461 Our Method **1000** 0 0 GD with Armijo 186 0 814 GD with Wolfe 174 0 826 : Categorization of Terminal Points #### In Summary. Using two GEE examples, we have compared our procedure against two first order optimization algorithms, three second order optimization algorithms, and two root finding methods. In comparison to the optimization methods, our procedure is faster and at least as reliable in finding a local minimizer.
In comparison to the root finding methods, our procedure is slower but is more reliable in finding a local minimizer. Thus, for the needs of such data science problems, our procedure provides the best combination of speed and reliability. # Conclusion {#sec:Conclusion} To meet the growing demands of data science applications, there has been a renewed interest in theoretical and methodological developments for gradient descent methods that address the scenario of expensive objective evaluations, yet cheap gradient evaluations. In this work, we have shown that many contemporary and traditional approaches are ill-suited to address these problems (see [\[table-objective-free,table-multiple-objective-evaluation\]](#table-objective-free,table-multiple-objective-evaluation){reference-type="ref" reference="table-objective-free,table-multiple-objective-evaluation"}). To address this gap, we present a new, general optimization methodology ([\[alg-general-algorithm\]](#alg-general-algorithm){reference-type="ref" reference="alg-general-algorithm"}) that economically uses objective function evaluations, and allows for a variety of subroutines and problem-specific adaptations. We have shown global convergence results (see [Theorem 3](#result-gradient){reference-type="ref" reference="result-gradient"}) under reasonable assumptions for data science scenarios. Furthermore, we specialized this framework and developed a novel step size procedure using negative gradient directions ([\[alg:novel-step-size\]](#alg:novel-step-size){reference-type="ref" reference="alg:novel-step-size"}) that is not only extremely competitive on general optimization problems from the CUTEst library ([5](#sec:Results-cutest){reference-type="ref" reference="sec:Results-cutest"}), but is superior to the alternatives on target data science applications (see [6](#sec:Results-gee){reference-type="ref" reference="sec:Results-gee"}). Several open questions arise from this work.
Methodologically, we hope to extend our general framework to develop novel step size routines under realistic assumptions for other algorithms, such as coordinate descent, Newton's methods, Quasi-Newton methods, etc. Theoretically, we hope to provide local convergence results for [\[alg-general-algorithm,alg:novel-step-size\]](#alg-general-algorithm,alg:novel-step-size){reference-type="ref" reference="alg-general-algorithm,alg:novel-step-size"}, and study how the gradient behaves around stationary points, and when iterates are diverging. Lastly, in the application setting, it would be interesting to scale our algorithm to large instances of GEE problems to develop new insights to applied questions. # Notation {#sec:Notation} -------------------------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------- $F(\theta), \dot{F}(\theta), \ddot{F}(\theta)$ Objective, gradient of objective, and Hessian of objective, respectively. $k,j$ Outer and inner loop iteration counters. $\theta_k$ $k$th outer loop iterate. $\psi_j^k$ $j$th inner loop iterate for outer loop iteration $k$. $\tau^k_{\mathop{\mathrm{iter}},\mathop{\mathrm{exit}}}$ Radius for bounding ball triggering event. $\tau^k_{\mathop{\mathrm{iter}},\max}$ Maximum inner loop iteration allowance. $\tau^k_{\mathop{\mathrm{grad}},\mathop{\mathrm{upper}}}, \tau^k_{\mathop{\mathrm{grad}},\mathop{\mathrm{lower}}}$ Upper and lower thresholds for gradient triggering event. $\delta, \delta_k$ Step size scaling term. $\tau^k_{\mathop{\mathrm{obj}}}$ Nonmonotone search threshold. $w$ Number of objective values in nonmonotone search. $\rho$ Relaxation term in nonmonotone Armijo condition. $\sigma_{\mathop{\mathrm{upper}}}, \sigma_{\mathop{\mathrm{lower}}}$ Inflation and Reduction factors for step size scaling term. 
$\alpha, \alpha_j^k, \mathrm{StepSize}$ Step size, and the function that produces the step size. $\gamma, \gamma_j^k, \mathrm{StepDirection}$ Step applied to each inner loop iterate, and the function that produces the step. $\epsilon$ Gradient threshold for termination. $\hat{L}_j^k$ Approximation of local Lipschitz constant -------------------------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------- : Notation for Algorithm --------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- $\mathbb{N}, \mathbb{R}^n$ Natural numbers, n-dimensional Euclidean space. $\Vert \cdot \Vert_2$ Euclidean norm. $C, C_k$ Compact sets. $\underline g(C)$ Constant in compact set $C$ to ensure step direction is a descent direction. See [\[property-stepdirection-negative\]](#property-stepdirection-negative){reference-type="ref" reference="property-stepdirection-negative"}. $\overline g(C)$ Constant in compact set $C$ to ensure the norm of step direction is within some factor of the norm gradient. See [\[property-stepdirection-propgrad\]](#property-stepdirection-propgrad){reference-type="ref" reference="property-stepdirection-propgrad"}. $\overline \alpha(C)$ Upper bound on step size in compact set $C$. See [\[property-stepsize-upper\]](#property-stepsize-upper){reference-type="ref" reference="property-stepsize-upper"}. $\underline \alpha(C)$ Lower bound on step size in compact set $C$. See [\[property-stepsize-lower\]](#property-stepsize-lower){reference-type="ref" reference="property-stepsize-lower"}. $\overline \tau_{\mathop{\mathrm{exit}}}(\theta)$ Upper bound on radius for bounding ball. 
See [\[property-radius-bounded\]](#property-radius-bounded){reference-type="ref" reference="property-radius-bounded"}. $\overline \tau_{\max}$ Upper bound on maximum number of inner loop iterations. See [\[property-iter-max\]](#property-iter-max){reference-type="ref" reference="property-iter-max"}. $\mathcal{B}(\theta), \mathcal{B}(\theta, R)$ Ball around $\theta$. Radius either specified implicitly or with $R$. $\mathcal{G}(\theta), \mathcal{G}(\theta, R)$ Max norm gradient within the ball $\mathcal{B}(\theta)$ or $\mathcal{B}(\theta, R)$. $\mathcal{L}(C)$ Local Lipschitz constant within compact set $C$. $\ell_k$ Index of iterates that satisfied the nonmonotone Armijo Condition. $L(k)$ Sub-index of last accepted iterate. $O(k)$ Largest sub-index smaller than $L(k)$ such that $\ell_{O(k)}$ is the index of the iterate, whose objective value is used as the nonmonotone threshold. $o_s$ Sub-index such that $\ell_{o_s}$ are the indices of the iterates, whose objective values are used as nonmonotone thresholds. $g_u$ Subindices for $\ell$ of distinct iterates used in analysis of the gradient. --------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- : Notation for Theory # Canonical Data Science Problems and Their Smoothness ## Notions of Smoothness {#subsec:smoothness-overview} We briefly overview several notions of smoothness that are relevant to our discussion. First, let $F: \Theta \to \mathbb{R}$ where $\Theta \subset \mathbb{R}^n$. Moreover, let $F$ be twice differentiable on its domain, and we denote the first derivative by $\dot F$ and the second by $\ddot F$. Note, we can specify all of the notions of smoothness below (excluding the globally Lipschitz Hessian smooth case) using only the gradient function. 
**Definition 2** ([@li2023convex], Definition 1). *A real-valued differentiable function, $F: \Theta \to \mathbb{R}$, is $\kappa$-smooth for a non-decreasing function $\kappa: [0,\infty) \to (0,\infty)$ if $\Vert \ddot F (\theta) \Vert \leq \kappa ( \Vert \dot F(\theta) \Vert )$.[^7]* This definition encompasses several notions of smoothness. Here, we specify a few of them. 1. $F$ is globally Lipschitz smooth if $\Vert \ddot F(\theta) \Vert$ is bounded. 2. $F$ is globally Lipschitz Hessian smooth if $\exists \kappa \geq 0$ such that, for all $\theta, \psi \in \Theta$, $$\left\Vert \ddot F(\theta) - \ddot F(\psi) \right\Vert \leq \kappa \left\Vert \theta - \psi \right\Vert.$$ 3. $F$ is subquadratically Lipschitz smooth if for any sequence, $\lbrace \theta_k : k \in \mathbb{N} \rbrace$, for which $\lim_{k} \Vert \dot F(\theta_k) \Vert = \infty$, $\limsup_{k} \Vert \ddot F(\theta_k) \Vert/ \Vert \dot F(\theta_k) \Vert^2 = 0$. 4. $F$ is quadratically Lipschitz smooth if for any sequence, $\lbrace \theta_k : k \in \mathbb{N} \rbrace$, for which $\lim_k \Vert \dot F(\theta_k) \Vert = \infty$, $\limsup_k \Vert \ddot F(\theta_k) \Vert/ \Vert \dot F(\theta_k) \Vert^2 < \infty$; and there exists a sequence, $\lbrace \theta_k : k \in \mathbb{N} \rbrace$, such that $\lim_k \Vert \dot F(\theta_k) \Vert = \infty$ and $0 < \liminf_k \Vert \ddot F(\theta_k) \Vert/ \Vert \dot F(\theta_k) \Vert^2$. 5. $F$ is superquadratically Lipschitz smooth if there exists a sequence, $\lbrace \theta_k : k \in \mathbb{N} \rbrace$, for which $\lim_k \Vert \dot F(\theta_k) \Vert = \infty$ and $\limsup_k \Vert \ddot F(\theta_k) \Vert/ \Vert \dot F(\theta_k) \Vert^2 = \infty$. In all cases, the gradient function is locally Lipschitz continuous as specified in [Definition 1](#def-lipschitz){reference-type="ref" reference="def-lipschitz"}. In [@li2023convex], results were shown for the subquadratically Lipschitz smooth case.
In [@cartis2011cubic1; @cartis2011cubic2], results were shown for the globally Lipschitz Hessian smooth case. In our previous works [@patel2022globalconvergence; @patel2023gradient], results were shown for the general locally Lipschitz continuous case. In particular, results were established for a subset of superquadratically Lipschitz smooth functions [see @patel2022globalconvergence Assumption 5]. We use these statements to establish that certain problems are either quadratically Lipschitz smooth or superquadratically Lipschitz smooth. As a result, our methodology and results are relevant to these problems. ## Factor Analysis for Pattern Extraction {#subsec:factor-analysis} Factor analysis is a statistical method for data reduction and pattern extraction, often employed to identify a set of latent variables (usually of lower dimension) to explain the observations. To be specific, factor analysis models observations, $\lbrace X_1,\ldots,X_N \rbrace$, as normally distributed with covariance given by the sum of a rank deficient matrix and a diagonal matrix: $X_i \sim \mathcal{N}(0, WW^\intercal + V)$, where $W$ usually has fewer than $N$ columns, and $V$ is a diagonal matrix. In factor analysis, we then estimate $W$ and $V$ either using the likelihood function or the method of moments (cf. Principal Component Analysis). The canonical form of factor analysis is to estimate the precision of an observation $x \in \mathbb{R}$ assuming that it is generated from a normal distribution with mean zero and unknown precision, $\theta^2$.
In this case, the objective function is the negative log-likelihood, $F: \mathbb{R} \setminus \lbrace 0 \rbrace \to \mathbb{R}$, such that $$F(\theta) = \frac{1}{2} \log (2\pi) - \frac{1}{2} \log(\theta^2) + \frac{1}{2}\theta^2 x^2.$$ A direct computation of the gradient and Hessian functions yields $$\dot F(\theta) = -\frac{1}{\theta} + \theta x^2,\quad\text{and}\quad \ddot F(\theta) = \frac{1}{\theta^2} + x^2.$$ We now consider each type of smoothness outlined above. 1. $|\ddot F(\theta)|$ is unbounded as $\theta$ tends to zero. Hence, $F$ is **not** globally Lipschitz smooth. 2. Consider $$\frac{|\ddot F(\theta) - \ddot F(1)|}{| \theta - 1|} = \frac{| \theta^{-2} - 1 |}{|\theta - 1|}.$$ As $\theta$ tends to zero, the ratio is unbounded. Hence, $F$ is **not** globally Lipschitz Hessian smooth. 3. Now, $$\frac{|\ddot F(\theta) |}{\dot F(\theta)^2} = \frac{\theta^{-2} + x^2}{\theta^{-2} - 2x^2 + \theta^2 x^4} = \frac{1 + \theta^2 x^2}{1 - 2x^2\theta^2 + \theta^4 x^4}.$$ Moreover, $| \dot F(\theta) | \to \infty$ only when $\theta \to 0$, $\theta \to \infty$ or $\theta \to -\infty$. When $\theta \to 0$, the above ratio tends to $1$. When $\theta$ diverges, the ratio goes to zero. Hence, $F$ is quadratically Lipschitz smooth, but is **neither** subquadratically **nor** superquadratically Lipschitz smooth. ## Classification via Feed Forward Neural Network {#subsec:classification-nn} In this section, we focus on the simplified example of a one-dimensional, four-layer feed forward neural network with linear activations, except for the last layer, which is a sigmoid. Additional details can be found in [@patel2022globalconvergence Appendix A.2]. The dataset for classification is $\{(1,1), (0,0)\}$ (each occurring with equal probability), where the first element of each tuple is the label, and the second element is the feature. We train the network using binary cross-entropy loss.
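As a numerical sanity check of this setup (the helper names below are ours), summing the binary cross-entropy loss over the two data points — summing rather than averaging only rescales the objective — recovers $\log(1+\exp(-w_4w_3w_2w_1))$ up to the additive constant $\log 2$:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def total_bce_loss(w1, w2, w3, w4):
    # Three linear layers feeding a sigmoid output; binary cross-entropy
    # summed over the dataset of (label, feature) pairs {(1, 1), (0, 0)}.
    loss = 0.0
    for label, x in [(1.0, 1.0), (0.0, 0.0)]:
        p = sigmoid(w4 * w3 * w2 * w1 * x)
        loss += -(label * math.log(p) + (1.0 - label) * math.log(1.0 - p))
    return loss
```

Here the sigmoid is applied to the product $w_4w_3w_2w_1 x$, since each hidden layer is linear with a single unit.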
The resulting risk function, which is the objective function, is (up to an additive constant) $$\label{eq:binary-cross-entropy} F(\theta) = \log(1+\exp\left( {-w_4w_3w_2w_1} \right)),$$ where $\theta \in \mathbb{R}^4$ is the vector whose entries are $w_1,w_2,w_3,$ and $w_4$. The gradient and Hessian of [\[eq:binary-cross-entropy\]](#eq:binary-cross-entropy){reference-type="ref" reference="eq:binary-cross-entropy"} are $$\begin{aligned} \dot{F}(\theta) &= \frac{-1}{\exp(w_4w_3w_2w_1) + 1} \begin{bmatrix} w_4w_3w_2\\ w_4w_3w_1\\ w_4w_2w_1\\ w_3w_2w_1 \end{bmatrix}, \end{aligned}$$ and $$\begin{aligned} \ddot{F}(\theta) &= \frac{-1}{\exp(w_4w_3w_2w_1)+1} \begin{bmatrix} 0 & w_3w_4 & w_4w_2 & w_3w_2 \\ w_3w_4 & 0 & w_1w_4 & w_1w_3\\ w_4w_2 & w_1w_4 & 0 & w_2w_1 \\ w_3w_2 & w_1w_3 & w_2w_1 & 0 \end{bmatrix}\\ & + \frac{\exp(w_4w_3w_2w_1)}{\left( \exp(w_4w_3w_2w_1) + 1 \right)^2} \begin{bmatrix} w_4w_3w_2\\ w_4w_3w_1\\ w_4w_2w_1\\ w_3w_2w_1 \end{bmatrix} \begin{bmatrix} w_4w_3w_2\\ w_4w_3w_1\\ w_4w_2w_1\\ w_3w_2w_1 \end{bmatrix}^\intercal . \end{aligned}$$ We now consider each type of smoothness outlined above. 1. Note, the Frobenius norm of the Hessian is at least the absolute value of its $(1,1)$ entry. This entry, evaluated when $w_1 = 0$ and $w_2 = w_3 = w_4$, is $w_2^6/4$, which becomes unbounded as $w_2 \to \infty$. Hence, $F$ is **not** globally Lipschitz smooth. 2. Note, $\ddot F(0) = 0$. Hence, using the Frobenius norm, $\Vert \ddot F(\theta) - \ddot F(0) \Vert = \Vert \ddot F(\theta) \Vert$, which is at least the absolute value of the $(1,1)$ entry of $\ddot F(\theta)$. Consider $\theta$ with $w_1 = 0$ and $w_2=w_3=w_4$; then $$\frac{ \Vert \ddot F(\theta) - \ddot F(0) \Vert}{ \sqrt{ 3w_2^2}} \geq \frac{w_2^6}{4 \sqrt{ 3w_2^2}},$$ where we have used the Euclidean distance between $\theta$ and $0$ as the denominator. The right hand side of the inequality diverges as $w_2 \to \infty$. Thus, $F$ is **not** globally Lipschitz Hessian smooth. 3.
We again use the Frobenius norm for the Hessian and the Euclidean distance for the vectors. Consider $\theta$ with $w_1 = 0$ and $w_2 = w_3 = w_4$. Then, just as before, $$\frac{ \Vert \ddot F(\theta) \Vert }{ \Vert \dot F (\theta) \Vert^2} \geq \frac{w_2^6/4}{w_2^6/4 } = 1.$$ Hence, $F$ is **not** subquadratically Lipschitz smooth. 4. With the same norms as above, $$\Vert \dot F(\theta) \Vert^2 = \left( \frac{1}{1 + \exp(w_4 w_3 w_2 w_1)} \right)^2 \left( (w_4 w_3 w_2)^2 + (w_4 w_3 w_1)^2 + (w_4 w_2 w_1)^2 + (w_3 w_2 w_1)^2 \right).$$ In order for this quantity to diverge, $w_4w_3w_2w_1 \leq 0$ eventually. Moreover, at most one of $\lbrace w_1,w_2,w_3,w_4 \rbrace$ can be zero, while at least one must diverge to infinity. Since $w_4w_3w_2w_1 \leq 0$ implies $1/(1 + \exp(w_4w_3w_2w_1)) \geq 1/2$, eventually, $$\Vert \dot F(\theta) \Vert^2 \geq \frac{(w_4 w_3 w_2)^2 + (w_4 w_3 w_1)^2 + (w_4 w_2 w_1)^2 + (w_3 w_2 w_1)^2}{4}.$$ Moreover, using $1/(1 + \exp(w_4w_3w_2w_1)) \leq 1$, $\exp(w_4w_3w_2w_1)/(1 + \exp(w_4w_3w_2w_1))^2 \leq 1/4$, and the triangle inequality, $$\Vert \ddot F(\theta) \Vert \leq \sqrt{ \sum_{i \neq j} (w_i w_j)^2 } + \frac{(w_4 w_3 w_2)^2 + (w_4 w_3 w_1)^2 + (w_4 w_2 w_1)^2 + (w_3 w_2 w_1)^2}{4}.$$ Hence, when $\Vert \dot F(\theta) \Vert$ diverges, $$\frac{ \Vert \ddot F(\theta) \Vert }{ \Vert \dot F(\theta) \Vert^2 } \leq \frac{4\sqrt{ \sum_{i \neq j} (w_i w_j)^2 }}{(w_4 w_3 w_2)^2 + (w_4 w_3 w_1)^2 + (w_4 w_2 w_1)^2 + (w_3 w_2 w_1)^2} + 1.$$ Now, since at most three of the $w_i$ can diverge and at most one can be zero, the first term on the right hand side tends to zero, so the limit of the right hand side is one. Hence, together with the previous item, $F$ is quadratically Lipschitz smooth. ## Correlation Modeling via Generalized Estimating Equations {#subsec:gee} Generalized estimating equations for correlation modeling are a statistical method that posits a model directly for the gradient function [@lipsitz2008geelongitudinal Chapter 3].
As a simple example, consider an observation $y > 0$ estimated by a parameter $\theta > 0$ using the estimating equation, $$\dot F(\theta) = -\frac{y - \sqrt{\theta}}{\theta}.$$ Then, the root of this equation corresponds to the minimizer of the function (up to an additive constant) $$F(\theta) = -y \log( \theta) + 2 \sqrt{\theta},$$ whose Hessian is $$\ddot F(\theta) = \frac{ y - 0.5 \sqrt{\theta}}{\theta^2}.$$ We now check the smoothness conditions. 1. As $\theta \to 0$, $\ddot F(\theta) \to \infty$. Hence, $F$ is **not** globally Lipschitz smooth. 2. Note, for any $\theta \in (0, 4y^2)$, $$\frac{| \ddot F(\theta) - \ddot F(4y^2) | }{|\theta - 4y^2|} = \frac{|\ddot F(\theta)| }{|\theta - 4y^2|} \geq \frac{ |y - 0.5 \sqrt{\theta}|}{4y^2\theta^2}.$$ The last term on the right diverges as $\theta \to 0$. Hence, $F$ is **not** globally Lipschitz Hessian smooth. 3. Now, $$\frac{|\ddot F(\theta)|}{\dot F(\theta)^2} = \frac{ |y - 0.5 \sqrt{\theta}|}{\theta^2} \frac{\theta^2}{(y - \sqrt{\theta})^2} = \frac{|y - 0.5 \sqrt{\theta}|}{(y - \sqrt\theta)^2}.$$ As $\dot F(\theta)$ diverges only as $\theta \to 0$, the preceding ratio tends to $1/y$. Hence, $F$ is **not** subquadratically Lipschitz smooth; rather, $F$ is quadratically Lipschitz smooth. # Divergence Examples for Gradient Methods without Objective Evaluations ## Preliminaries {#subsec:cd-preliminaries} Below, we show, for each group of methods listed in [1](#table-objective-free){reference-type="ref" reference="table-objective-free"} and for any allowed choice of the algorithm parameters, there exists an objective function for which the objective values along the iterates diverge (and, in most cases, the gradient function remains bounded away from zero). While not always, we will often use the following function (as pieces of another function) to define our objective.
Let $m > 0$ and $d, \delta \in (0,1]$, and define $$\label{eqn-cd-building-function} f(\theta; m, d, \delta) = \begin{cases} -d\theta & \theta \in \left( 0, \frac{2-d}{16}m \right) \\ \frac{8}{m}\left( \theta - \frac{m}{8} \right)^2 - m \left(\frac{-d^2 + 4d}{32}\right) & \theta \in \left[ \frac{2-d}{16}m, \frac{3}{16}m \right] \\ \frac{-5m}{16}\exp\left( \frac{5/16}{\theta/m - 1/2} + 1 \right) + m \left(\frac{11 + d^2 -4d}{32} \right) & \theta \in \left( \frac{3}{16}m, \frac{1}{2}m \right) \\ m \left( \frac{11 + d^2 - 4d}{32} \right) & \theta = \frac{1}{2}m \\ \frac{5m}{16} \exp\left(\frac{-5/16}{\theta/m - 1/2} + 1 \right) + m \left(\frac{11 + d^2 -4d}{32} \right) & \theta \in \left( \frac{1}{2}m, \frac{13}{16}m \right) \\ \frac{-8}{m} \left( \theta - \frac{7m}{8} \right)^2 + m\left( \frac{22 + d^2 -4d}{32} \right) & \theta \in \left[ \frac{13}{16}m, \frac{\delta + 14}{16}m \right) \\ -\delta \theta + m \left(\frac{22 + d^2 + \delta^2 -4d + 28\delta}{32}\right) & \theta \in \left[ \frac{\delta + 14}{16}m, m \right] . \end{cases}$$ This function was constructed in [@patel2023gradient eq. E.4]; an example is plotted in [\[plot-cd-building-function\]](#plot-cd-building-function){reference-type="ref" reference="plot-cd-building-function"}; and it was shown to have the following properties [@patel2023gradient Proposition E.1]. [\[result-cd-building-properties\]]{#result-cd-building-properties label="result-cd-building-properties"} Let $m > 0$ and $d,\delta \in (0,1]$. The continuous extension of [\[eqn-cd-building-function\]](#eqn-cd-building-function){reference-type="ref" reference="eqn-cd-building-function"} to $[0,m]$ is continuous on its domain; bounded from below by $-m/8$; differentiable on $(0,m)$; admits one-sided derivatives of $-d$ at $\theta=0$ and $-\delta$ at $\theta = m$; has a locally Lipschitz continuous derivative function; and satisfies $f(m;m,d,\delta) \geq 7m/16$. We will use this building block function to define the following function.
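Before assembling the pieces, the gluing claims above can be checked numerically. The sketch below (function name and sample parameter values are ours) implements [\[eqn-cd-building-function\]](#eqn-cd-building-function){reference-type="ref" reference="eqn-cd-building-function"} — taking the value at $\theta = 3m/16$ to be the common one-sided limit of the adjacent pieces — so that continuity at the breakpoints and the endpoint bound $f(m; m, d, \delta) \geq 7m/16$ can both be verified directly.

```python
import math

def f(theta, m, d, delta):
    # Building-block function; theta must lie in (0, m].
    t = theta / m
    if 0 < theta < (2 - d) / 16 * m:
        return -d * theta
    if (2 - d) / 16 * m <= theta <= 3 / 16 * m:
        return 8 / m * (theta - m / 8) ** 2 - m * (-d**2 + 4 * d) / 32
    if 3 / 16 * m < theta < m / 2:
        return -5 * m / 16 * math.exp((5 / 16) / (t - 1 / 2) + 1) + m * (11 + d**2 - 4 * d) / 32
    if theta == m / 2:
        return m * (11 + d**2 - 4 * d) / 32
    if m / 2 < theta < 13 / 16 * m:
        return 5 * m / 16 * math.exp((-5 / 16) / (t - 1 / 2) + 1) + m * (11 + d**2 - 4 * d) / 32
    if 13 / 16 * m <= theta < (delta + 14) / 16 * m:
        return -8 / m * (theta - 7 * m / 8) ** 2 + m * (22 + d**2 - 4 * d) / 32
    if (delta + 14) / 16 * m <= theta <= m:
        return -delta * theta + m * (22 + d**2 + delta**2 - 4 * d + 28 * delta) / 32
    raise ValueError("theta outside (0, m]")
```

Continuity at $\theta = m/2$ relies on the exponential terms vanishing as $\theta \to m/2$ from either side, which the floating-point exponential reproduces by underflowing to zero.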
Let $\lbrace S_{j} : j + 1 \in \mathbb{N} \rbrace$ be a strictly increasing sequence in $\mathbb{R}$ and let $\lbrace d_j \in (0,1]: j+1 \in \mathbb{N} \rbrace$. Let $F: \mathbb{R} \to \mathbb{R}$ be defined by $$\label{eqn-cd-objective} F(\theta) = \begin{cases} - d_0(\theta - S_0) & \theta \leq S_0 \\ f(\theta - S_0; S_1 - S_0, d_0, d_1) & \theta \in (S_0,S_1] \\ f(\theta - S_j; S_{j+1} - S_j, d_j, d_{j+1}) + F(S_j) & \theta \in (S_j, S_{j+1}], ~\forall j \in \mathbb{N}. \end{cases}$$ Note, [\[eqn-cd-objective\]](#eqn-cd-objective){reference-type="ref" reference="eqn-cd-objective"} is defined recursively as $F(S_j)$ is needed in order for $F$ to be defined on $(S_j, S_{j+1}]$. We now show that this objective satisfies [\[as-bounded below,as-loc-lip-cont\]](#as-bounded below,as-loc-lip-cont){reference-type="ref" reference="as-bounded below,as-loc-lip-cont"}, along with some other properties. [\[result-cd-objective\]]{#result-cd-objective label="result-cd-objective"} Let $\lbrace S_{j} : j + 1 \in \mathbb{N} \rbrace$ be a strictly increasing sequence in $\mathbb{R}$ and let $\lbrace d_j \in (0,1]: j + 1 \in \mathbb{N} \rbrace$. Let $F: \mathbb{R} \to \mathbb{R}$ be defined as in [\[eqn-cd-objective\]](#eqn-cd-objective){reference-type="ref" reference="eqn-cd-objective"}. Then, $F$ satisfies [\[as-loc-lip-cont\]](#as-loc-lip-cont){reference-type="ref" reference="as-loc-lip-cont"}. Moreover, $F(\theta) \geq 7(S_j - S_0)/16 - (S_{j+1} - S_j)/8$ for $\theta \in (S_j, S_{j+1}]$, $F(S_j) \geq 7 (S_j - S_0)/ 16$ and $\dot F(S_j) = -d_j$ for all $j +1 \in \mathbb{N}$. *Proof.* By the properties of $-d_0\theta$ and $f(\theta; m, d, \delta)$ (see [\[result-cd-building-properties\]](#result-cd-building-properties){reference-type="ref" reference="result-cd-building-properties"}), we need only check the behavior of $F$ at the points $\lbrace S_j : j+1 \in \mathbb{N} \rbrace$. 
At $S_0$, $F(S_0) = 0$ and $\lim_{\theta \downarrow S_0} F(\theta) = \lim_{\theta \downarrow S_0} f(\theta - S_0; S_1 - S_0, d_0, d_1) = 0$. Moreover, the derivative from the left at $\theta = S_0$ is $-d_0$ (i.e., the derivative of $-d_0\theta$), and the derivative from the right is $-d_0$ by [\[result-cd-building-properties\]](#result-cd-building-properties){reference-type="ref" reference="result-cd-building-properties"}. In other words, $\dot F(S_0) = -d_0$. Moreover, the derivative is constant in a neighborhood of $\theta = S_0$, which implies that the derivative of $F$ is locally Lipschitz continuous at $\theta = S_0$. Finally, by [\[result-cd-building-properties\]](#result-cd-building-properties){reference-type="ref" reference="result-cd-building-properties"}, $F(\theta) \geq -(S_1 - S_0)/8$ on $\theta \in (S_0, S_1]$ and $F(S_1) \geq 7 (S_1 - S_0)/ 16$. Suppose for some $j \in \mathbb{N}$, we have verified, for $\ell=0,\ldots,j-1$, $F$ is continuously differentiable at $\theta = S_\ell$; the derivative is locally Lipschitz continuous at this point; $\dot F(S_\ell) = -d_\ell$; $F(\theta) \geq 7(S_{\ell} - S_0)/16 - (S_{\ell+1} - S_{\ell})/8$ for all $\theta \in (S_\ell,S_{\ell+1}]$; and $F(S_{\ell+1}) \geq 7 (S_{\ell+1} - S_0)/ 16$. Now, at $S_j$, $\lim_{\theta \downarrow S_j} F(\theta) = \lim_{\theta\downarrow S_j} f(\theta - S_j; S_{j+1} - S_j, d_{j}, d_{j+1}) + F(S_j) = F(S_j)$. So $F$ is continuous at $\theta = S_j$. By [\[result-cd-building-properties\]](#result-cd-building-properties){reference-type="ref" reference="result-cd-building-properties"}, the derivatives from the left and right at $\theta = S_j$ are both $-d_j$. As the derivative is constant in a small neighborhood of $\theta = S_j$, $\dot F$ is locally Lipschitz continuous at $\theta = S_j$.
Finally, by [\[result-cd-building-properties\]](#result-cd-building-properties){reference-type="ref" reference="result-cd-building-properties"}, $F(\theta) \geq F(S_j) - (S_{j+1} - S_j)/8 \geq 7(S_j - S_0)/16 - (S_{j+1} - S_j)/8$ on $(S_j, S_{j+1}]$ and $F(S_{j+1}) \geq F(S_j) + 7(S_{j+1} - S_j)/16 \geq 7(S_{j+1} - S_0)/16$. This completes the proof by induction. ◻ ## Constant Step Size {#subsec:ce-constant} Consider gradient descent with constant step size: given $\theta_0 \in \mathbb{R}^n$, $\lbrace \theta_k : k \in \mathbb{N} \rbrace$ are recursively generated by $$\theta_{k} = \theta_{k-1} - m \dot F(\theta_{k-1}), ~\forall k \in \mathbb{N},$$ where $F:\mathbb{R}^n \to \mathbb{R}$. Gradient descent with constant step size is known to generate divergent iterates on a convex, one-dimensional quadratic function (which produces a divergent objective function sequence) if the step size exceeds a multiple of the reciprocal of the Lipschitz constant of the gradient function. Here, we show a slightly stronger statement. Let $F(\theta) = \theta^4/4$, which satisfies [\[as-bounded below,as-loc-lip-cont\]](#as-bounded below,as-loc-lip-cont){reference-type="ref" reference="as-bounded below,as-loc-lip-cont"}. Then, for gradient descent with any constant step size $m > 0$, $\exists \theta_0 \in \mathbb{R}$ such that the iterates of gradient descent diverge, the objective function at the iterates diverges, and the absolute value of the gradient function at the iterates diverges. *Proof.* Let $m > 0$. Choose $\theta_0^2 > 2/m$. We proceed by induction to show $\theta_{k}^2 > \theta_{k-1}^2$ for all $k \in \mathbb{N}$. As the base case and generalization follow the same strategy, we state the induction hypothesis and generalization step. The induction hypothesis is $\theta_{k}^2 > \theta_{k-1}^2 > 2/m$, for some $k \in \mathbb{N}$. We now show $\theta_{k+1}^2 > \theta_k^2$. Note, $\theta_{k+1} = \theta_k - m \theta_k^3$.
If $\theta_{k} > 0$, then $\theta_{k+1} < \theta_{k}$, and $$\theta_k^2 > 2/m \Leftrightarrow m\theta_k^3 > 2 \theta_k \Leftrightarrow -(\theta_k - m\theta_k^3) > \theta_k.$$ Hence, if $\theta_k > 0$, then $\theta_{k+1}^2 > \theta_{k}^2 > 2/m$. If $\theta_k < 0$, then $\theta_{k} < \theta_{k+1}$, and $$\theta_{k}^2 > 2/m \Leftrightarrow m \theta_k^3 < 2 \theta_k \Leftrightarrow - \theta_k < \theta_k - m \theta_k^3.$$ Hence, if $\theta_k < 0$, then $\theta_{k+1}^2 > \theta_k^2 > 2/m$. With this established, note that $|\theta_{k+1}| = |\theta_k| | m\theta_k^2 - 1| \geq (m\theta_0^2 - 1)|\theta_k|$ with $m\theta_0^2 - 1 > 1$, so $\lbrace | \theta_k | \rbrace$ diverges geometrically. Therefore, $\lbrace F(\theta_k) = \theta_k^4 / 4 \rbrace$ diverges, as does $\lbrace |\dot F(\theta_k) | = |\theta_k|^3 \rbrace$. ◻ ## Barzilai-Borwein Method {#subsec:ce-bbb} We construct a one-dimensional objective function on which the iterates generated by the Barzilai-Borwein method will diverge and cause the optimality gap to diverge. In this context, the Barzilai-Borwein method begins with an iterate $\theta_0$ and an initial step $m_0 > 0$ and generates iterates $\lbrace \theta_{k} : k \in \mathbb{N} \rbrace$ by the recursion $\theta_{k+1} = \theta_k - m_k \dot F(\theta_k)$ for $k + 1 \in \mathbb{N}$, where $F: \mathbb{R} \to \mathbb{R}$; and, for $k \in \mathbb{N}$, $$m_k = \frac{\theta_{k} - \theta_{k-1}}{\dot F(\theta_k) - \dot F(\theta_{k-1})}.$$ Note, both variations of the Barzilai-Borwein method reduce to the same case in one dimension. We now construct our objective function, and show the Barzilai-Borwein method will produce iterates whose optimality gap diverges. Let $m_0 > 0$, $S_j = m_0 j$ for all $j+1 \in \mathbb{N}$, and let $d_j = 2^{-j}$ for all $j + 1 \in \mathbb{N}$. Let $F: \mathbb{R} \to \mathbb{R}$ be defined as in [\[eqn-cd-objective\]](#eqn-cd-objective){reference-type="ref" reference="eqn-cd-objective"}.
Let $\theta_0 = 0$, $\theta_1 = \theta_0 - m_0 \dot F(\theta_0)$, and $$\theta_{k+1} = \theta_k - \frac{\theta_k - \theta_{k-1}}{\dot F(\theta_k) - \dot F(\theta_{k-1})} \dot F(\theta_k), ~k \in \mathbb{N}.$$ Then $\lim_k F(\theta_k) = \infty$. *Proof.* The initial step is a special case. We then proceed by induction from the second update. For the initial step, $\theta_1 = \theta_0 - m_0 \dot F(\theta_0) = 0 + m_0 = m_0 = S_1$. The second step is the base case for our proof by induction. By [\[result-cd-objective\]](#result-cd-objective){reference-type="ref" reference="result-cd-objective"}, $$\theta_2 = \theta_1 - \frac{\theta_1 - \theta_{0}}{\dot F(\theta_1) - \dot F(\theta_{0})} \dot F(\theta_1) = m_0 - \frac{m_0}{-2^{-1} + 1}(- 2^{-1}) = 2m_0 = S_2.$$ Suppose $\theta_\ell = m_0 \ell = S_{\ell}$ for all $\ell = 0,\ldots,j$. Then, by [\[result-cd-objective\]](#result-cd-objective){reference-type="ref" reference="result-cd-objective"}, $$\theta_{j+1} = \theta_j - \frac{\theta_j - \theta_{j-1}}{\dot F(\theta_j) - \dot F(\theta_{j-1}) } \dot F(\theta_j) = m_0 j - \frac{m_0}{-2^{-j} + 2^{-j+1}} (-2^{-j}) = m_0 j + \frac{m_0}{2-1} = m_0 (j+1) = S_{j+1}.$$ By induction, $\theta_j = m_0 j = S_j$ for all $j + 1\in \mathbb{N}$. By [\[result-cd-objective\]](#result-cd-objective){reference-type="ref" reference="result-cd-objective"}, $\lim_{j} F(\theta_j) = \lim_{j} F(S_j) \geq \lim_{j} 7S_j / 16 = \infty$. ◻ ## Nesterov Accelerated Gradient Descent {#subsec:ce-nag} Nesterov accelerated gradient descent, first proposed in [@nesterov1983], is known to achieve the optimal rate of convergence among all methods using only gradient information [@nesterov2004convexopt Chapter 2]. In this section, we focus on a slight variation of the original method discussed in [@li2023convex], as it was shown under relaxed Lipschitz smoothness conditions ($\ell$-smoothness) that the modified algorithm retains the optimal rate of convergence.
For completeness, we present the algorithm in [@li2023convex] below (see [\[alg:NAG\]](#alg:NAG){reference-type="ref" reference="alg:NAG"}). $\theta_0 \in \mathbb{R}^n, m \in (0, \infty)$ $z_0 = \theta_0, B_0 = 0, A_0 = 1/m$ $B_{t+1} = B_t + .5(1+\sqrt{4B_t+1})$ $A_{t+1} = B_{t+1} + \frac{1}{m}$ $y_t = \theta_t + (1-\frac{A_t}{A_{t+1}})(z_t-\theta_t)$ $\theta_{t+1} = y_t - m\dot{F}(y_t)$ $z_{t+1} = z_t - m(A_{t+1}-A_t)\dot{F}(y_t)$ Before constructing the objective function and showing the divergence behavior of [\[alg:NAG\]](#alg:NAG){reference-type="ref" reference="alg:NAG"}, we note some properties of $\lbrace B_t \rbrace$. First, $B_0 = 0$, $B_1 = 1$, and $B_{t+1} - B_t > 1$ for all $t \in \mathbb{N}$. As a result, $A_{t+1} - A_t = B_{t+1} - B_t > 1$ for all $t \in \mathbb{N}$ and $A_t/A_{t+1} \in (0,1)$ for all $t + 1 \in \mathbb{N}$. We now define three sequences that mimic the behavior of [\[alg:NAG\]](#alg:NAG){reference-type="ref" reference="alg:NAG"}. Let $m > 0$, $\Theta_0 = 0$, $Z_0 = 0$, $Y_0 = 0$, $\Theta_t = Y_{t-1} + m$ for $t \in \mathbb{N}$, $Z_t = Z_{t-1} + m (A_{t} - A_{t-1})$ for $t \in \mathbb{N}$, and $$Y_t = \Theta_t + \left( 1 - \frac{A_{t}}{A_{t+1}} \right) (Z_t - \Theta_t), ~t\in \mathbb{N}.$$ For these sequences, we have the following property. **Lemma 9**. *For $t > 1$, $\Theta_t < Y_t < Z_t$. $\lbrace Y_t : t+1 \in \mathbb{N} \rbrace$ is a strictly increasing sequence.* *Proof.* We begin with the first statement. Note, $\Theta_0 = Z_0 = Y_0 = 0$ and $\Theta_1 = Z_1 = Y_1 = m$. We proceed by induction with the base case of $t = 2$. $\Theta_2 = Y_1 + m = 2m$; $Z_2 = m + m (A_2 - A_1) > \Theta_2$. As $Y_2$ is a strict convex combination of $\Theta_2$ and $Z_2$, $\Theta_2 < Y_2 < Z_2$. Suppose, for all $\ell \in \mathbb{N}$ with $1 < \ell < t$, where $t \in \mathbb{N}_{\geq 3}$, $\Theta_\ell < Y_\ell < Z_\ell$. Then, $$Z_t = Z_{t-1} + m(A_t - A_{t-1}) > Y_{t-1} + m = \Theta_t.$$ Since $Y_t$ is a strict convex combination of $\Theta_t$ and $Z_t$, the statement holds.
Now, we verify that $\lbrace Y_t : t+1 \in \mathbb{N} \rbrace$ is strictly increasing. Note, $Y_0 = 0$, $Y_1 = m$ and $Y_2 > \Theta_2 = Y_1 + m > Y_1$. Suppose this holds up to and including some index $t \in \mathbb{N}$. Then, $Y_{t+1} > \Theta_{t+1} = Y_{t} + m > Y_t$. The conclusion follows. ◻ We now construct the counterexample and show that [\[alg:NAG\]](#alg:NAG){reference-type="ref" reference="alg:NAG"} produces iterates that diverge, whose optimality gap diverges, and whose derivatives at $\lbrace y_t \rbrace$ and at $\lbrace \theta_t \rbrace$ remain bounded away from zero. To do so, let $S_0 = \Theta_0 = 0$, $S_1 = Y_1 = m$, $$\label{eqn-cd-nag-sequence} S_{2j} = \Theta_{j+1}, \quad\text{and}\quad S_{2j+1} = Y_{j+1}, ~\forall j \in \mathbb{N}.$$ By [Lemma 9](#result-cd-nag-sequence){reference-type="ref" reference="result-cd-nag-sequence"}, $\lbrace S_j : j+1 \in \mathbb{N} \rbrace$ is a strictly increasing sequence. We now have the following example. Let $m > 0$. Define $\lbrace S_j : j + 1 \in \mathbb{N} \rbrace$ as in [\[eqn-cd-nag-sequence\]](#eqn-cd-nag-sequence){reference-type="ref" reference="eqn-cd-nag-sequence"}, and let $d_j = 1$ for all $j + 1 \in \mathbb{N}$. Let $F: \mathbb{R} \to \mathbb{R}$ be defined as in [\[eqn-cd-objective\]](#eqn-cd-objective){reference-type="ref" reference="eqn-cd-objective"}. Let $\theta_0 = 0$, and let $\lbrace \theta_j : j \in \mathbb{N} \rbrace$ and $\lbrace y_j : j+1 \in \mathbb{N} \rbrace$ be defined as in [\[alg:NAG\]](#alg:NAG){reference-type="ref" reference="alg:NAG"}. Then, $\lim_k F(\theta_k) = \infty$, $\lim_k F(y_k) = \infty$, and $\inf_{k} | \dot F(\theta_k) | = \inf_k |\dot F(y_k) | = 1$. *Proof.* We need to verify that $\theta_{j+1} = S_{2j}$ and $y_{j+1} = S_{2j+1}$ for all $j \in \mathbb{N}$, with $\theta_0 = y_0 = S_0$ and $\theta_1 = y_1 = S_1$.
Once this is verified, by [\[result-cd-objective\]](#result-cd-objective){reference-type="ref" reference="result-cd-objective"}, $F(S_j) \geq 7S_j/16$ and $\dot F(S_j) = -1$ for all $j+1 \in \mathbb{N}$, and the result will follow. To achieve this, we show $\Theta_j = \theta_j$, $Y_j = y_j$ and $Z_j = z_j$ for all $j+1 \in \mathbb{N}$, so that every $\theta_j$ and every $y_j$ coincides with some $S_k$. We proceed by induction. Since $\theta_0 = 0$, we have $z_0 = 0$ and $y_0 = 0$. Thus, $\Theta_0 = \theta_0$, $Y_0 = y_0$ and $Z_0 = z_0$. Suppose for some $j + 1 \in \mathbb{N}$, $\Theta_j = \theta_j$, $Y_j = y_j$ and $Z_j = z_j$. Then, $\theta_{j+1} = y_j - m \dot F(y_j) = Y_j - m \dot F(Y_j) = Y_j + m = \Theta_{j+1}$, since $Y_j$ coincides with some $S_k$ and $\dot F(S_k) = -1$ for all $k + 1 \in \mathbb{N}$ by [\[result-cd-objective\]](#result-cd-objective){reference-type="ref" reference="result-cd-objective"} and our choice of $d_k = 1$. Moreover, $z_{j+1} = z_j - m (A_{j+1} - A_j) \dot F(Y_j) = Z_j + m(A_{j+1} - A_j) = Z_{j+1}$. Finally, $y_{j+1} = \theta_{j+1} + (1 - A_{j+1}/A_{j+2})(z_{j+1} - \theta_{j+1}) = \Theta_{j+1} + (1 - A_{j+1}/A_{j+2}) (Z_{j+1} - \Theta_{j+1}) = Y_{j+1}$. This concludes the proof by induction, and the result follows. ◻ ## Bregman Distance Method {#subsec:ce-bregman} Roughly, the Bregman Distance Method optimizes a composite nonconvex objective over closed, convex sets, where one component of the objective is differentiable and is $L$-smooth adaptable with respect to some proper, differentiable, convex function [@bauschke2017descentlemma]. The key parameter is a weight on the Bregman distance given by the convex function, denoted by $m > 0$. For our example, let $m > 0$. Define $S_j = m j$ for all $j+1 \in \mathbb{N}$, and $d_{j} = 1$ for all $j+1 \in \mathbb{N}$. Let $F: \mathbb{R} \to \mathbb{R}$ be defined as in [\[eqn-cd-objective\]](#eqn-cd-objective){reference-type="ref" reference="eqn-cd-objective"}.
If we let this $F$ be the objective function (and the second part of the composite function be identically zero), and we use the quadratic function to define the Bregman distance, then the Bregman distance method reduces to a standard proximal procedure as specified in [\[alg:bregman-dist\]](#alg:bregman-dist){reference-type="ref" reference="alg:bregman-dist"}. $\theta_0 \in \mathbb{R}$, $m > 0$ $\theta_k = \arg\min_{\theta \in \mathbb{R}} \left\{ F(\theta) + \frac{1}{2m}\left\Vert \theta - \theta_{k-1} \right\Vert_2^2 \right\}$ In [\[alg:bregman-dist\]](#alg:bregman-dist){reference-type="ref" reference="alg:bregman-dist"}, the inner loop is as difficult as the original problem. Hence, we assume that it can be solved to a local minimizer of our choosing. In this case, we have the following result. Let $m > 0$. Let $S_j = mj$ and $d_j = 1$ for all $j+1 \in \mathbb{N}$. Let $F: \mathbb{R} \to \mathbb{R}$ be defined as in [\[eqn-cd-objective\]](#eqn-cd-objective){reference-type="ref" reference="eqn-cd-objective"}. If $\theta_0 = 0$ and the inner loop can be solved to a local minimizer, then there is a choice of local minimizers such that $\lim_k F(\theta_k) = \infty$ and $\inf_k |\dot F(\theta_k)| = 1$. *Proof.* We proceed by induction to show $S_j = \theta_j$. First, $S_0 = \theta_0 = 0$. Suppose the relationship holds up to some $j \in \mathbb{N}$; that is, $\theta_j = S_j$. Then, we locally minimize $F(\theta) + \frac{1}{2m}(\theta - \theta_j)^2$. We now verify that $S_{j+1}$ is a local minimizer and let $\theta_{j+1} = S_{j+1}$, which will complete the proof by induction. We check second-order optimality conditions. The first derivative of the local objective at $S_{j+1}$ is $\dot F(S_{j+1}) + m^{-1}(S_{j+1} - S_j) = -1 + m^{-1}m = 0$. Since $F$ is locally linear around $S_{j+1}$, the second derivative of the local objective is $1/m > 0$. Hence, we can set $\theta_{j+1} = S_{j+1}$. The result follows by [\[result-cd-objective\]](#result-cd-objective){reference-type="ref" reference="result-cd-objective"}.
◻ ## Negative Curvature Method {#subsec:ce-negative} One method explored in [@curtis2018exploiting] on exploiting negative curvature in gradient methods is a two-step algorithm that alternates between negative curvature and gradient steps with pre-defined constant step sizes $m, m' \in (0, \infty)$. The pseudo-code is presented in [\[alg:neg-curv-two-step\]](#alg:neg-curv-two-step){reference-type="ref" reference="alg:neg-curv-two-step"}. $\theta_0, m > 0, m' > 0$ $s_k' = 0$ $s_k' =$ $s_k = 0$ $s_k = -\dot{F}(\theta_k)$ $\theta_{k+1} = \theta_k + m s_k + m' s_k'$ Note that in [\[alg:neg-curv-two-step\]](#alg:neg-curv-two-step){reference-type="ref" reference="alg:neg-curv-two-step"}, $s_k'$ is selected to be a descent direction and a direction of negative curvature (see [@curtis2018exploiting §2] for details). Furthermore, in full generality, $s_k$ needs only to be a descent direction; however, as a simplification, we select the negative gradient. Finally, the step sizes, $m$ and $m'$, are constants that should be based on the global smoothness parameters of the gradient and Hessian. We have the following example. [\[prop:neg-div\]]{#prop:neg-div label="prop:neg-div"} Let $m, m' > 0$. Let $S_j = mj$ and $d_j = 1$ for all $j+1 \in \mathbb{N}$. Define $F:\mathbb{R} \to \mathbb{R}$ as in [\[eqn-cd-objective\]](#eqn-cd-objective){reference-type="ref" reference="eqn-cd-objective"}. Let $\theta_0 = 0$ and let $\lbrace \theta_j : j \in \mathbb{N} \rbrace$ be generated by [\[alg:neg-curv-two-step\]](#alg:neg-curv-two-step){reference-type="ref" reference="alg:neg-curv-two-step"}. Then, $\lim_{j} F(\theta_j) = \infty$ and $\inf_j |\dot{F}(\theta_j)| = 1$. *Proof.* We proceed by induction to show that $\theta_j = S_j$, from which the result follows by [\[result-cd-objective\]](#result-cd-objective){reference-type="ref" reference="result-cd-objective"}. First, $\theta_0 = 0 = S_0$. Suppose this relationship holds up to some $j \in \mathbb{N}$.
Since $F$ is locally linear around $S_j$, $\ddot F(S_j) = 0$ and $s_j' = 0$. Since $\dot F(S_j) = -1$, $s_j = 1$. Hence, $\theta_{j+1} = S_j + m + 0 = S_{j+1}$. ◻ ## Lipschitz Approximation {#subsec:ce-lipschitz} [\[alg:lip-approx\]](#alg:lip-approx){reference-type="ref" reference="alg:lip-approx"} presents a scheme for a Lipschitz-approximation-based optimization procedure introduced in [@malitsky2020adaptive §2]. We then provide an example in which the optimality gap diverges. $\theta_0 \in \mathbb{R}^n$, $m_0 > 0$ $w_0 = +\infty$ $\theta_1 = \theta_0 - m_0 \dot{F}(\theta_0)$ $m_k = \min\left\{ \sqrt{1+w_{k-1}} m_{k-1}, \frac{\left\Vert \theta_{k} - \theta_{k-1} \right\Vert_2}{2\left\Vert \dot{F}(\theta_k) - \dot{F}(\theta_{k-1}) \right\Vert_2} \right\}$ $\theta_{k+1} = \theta_{k} - m_k \dot{F}(\theta_k)$ $w_k = \frac{m_k}{m_{k-1}}$ Let $m_0 > 0$. Let $S_0 = 0$ and $S_{j+1} = S_j + m_0 (\sqrt{5}/2)^j$ for all $j+1 \in \mathbb{N}$. Let $d_{j} = (\sqrt{5}/[\sqrt{5} + 1])^j$ for all $j+1 \in \mathbb{N}$. Let $F: \mathbb{R}\to\mathbb{R}$ be defined by [\[eqn-cd-objective\]](#eqn-cd-objective){reference-type="ref" reference="eqn-cd-objective"}. If $\theta_0 = 0$ and $\lbrace \theta_j : j \in \mathbb{N} \rbrace$ are generated by [\[alg:lip-approx\]](#alg:lip-approx){reference-type="ref" reference="alg:lip-approx"}, then $\lim_{j} F(\theta_j) = \infty$. *Proof.* Note, $\theta_0 = 0 = S_0$. We proceed by induction. For the base case, $j=1$, $\theta_1 = 0 - m_0 \dot F(S_0) = m_0 = S_1$. Moreover, $m_0 = m_0([\sqrt{5}+1]/2)^0$. Suppose, for $j+1 \in \mathbb{N}$, $\theta_{\ell} = S_\ell$ for all $\ell \in \lbrace 0,\ldots,j \rbrace$; and $m_{\ell} = m_0([\sqrt{5}+1]/2)^\ell$ for all $\ell \in \lbrace 0,\ldots, j-1 \rbrace$. We now establish the claim for $j+1$. First, we calculate $w_{j-1}$.
By the induction hypothesis (for $j > 1$; when $j = 1$, $w_0 = +\infty$, so the minimum below is attained by its second entry, which gives the same value), $$w_{j-1} = \frac{m_0([\sqrt{5}+1]/2)^{j-1} }{m_0([\sqrt{5}+1]/2)^{j-2}} = \frac{\sqrt{5} + 1}{2} = \sqrt{ 1 + \frac{\sqrt{5} + 1}{2}} = \sqrt{1 + w_{j-1}}.$$ Hence, $$\begin{aligned} m_j &= \min\left\lbrace \frac{\sqrt{5} + 1}{2} m_{j-1}, \frac{|\dot F(\theta_{j-1})|}{2|\dot F(\theta_{j}) - \dot F(\theta_{j-1})|} m_{j-1} \right\rbrace = \min\left\lbrace \frac{\sqrt{5} + 1}{2} m_{j-1}, \frac{d_{j-1}}{2(d_{j-1} - d_j)} m_{j-1} \right\rbrace \\ &= \frac{\sqrt{5} + 1}{2} m_{j-1} = \left(\frac{\sqrt{5} + 1}{2}\right)^{j} m_0, \end{aligned}$$ where the last line uses $d_{j-1}/(d_{j-1} - d_j) = \sqrt{5} + 1$, so the two entries of the minimum coincide. Finally, $$\theta_{j+1} = \theta_j - m_j \dot F(\theta_j) = S_j + m_0 \left(\frac{\sqrt{5} + 1}{2}\right)^{j}\left( \frac{\sqrt{5}}{\sqrt{5} + 1} \right)^j = S_j + m_0 \left(\frac{\sqrt{5}}{2} \right)^j = S_{j+1}.$$ To conclude, $\theta_j = S_j$ for all $j+1 \in \mathbb{N}$. By [\[result-cd-objective\]](#result-cd-objective){reference-type="ref" reference="result-cd-objective"}, $\lim_{j} F(\theta_j) = \infty$. ◻ ## Weighted Gradient-Norm Damping {#subsec:ce-weighted} In this section, we show that weighted gradient-norm damping methods such as [\[alg:wngrad\]](#alg:wngrad){reference-type="ref" reference="alg:wngrad"} from [@wu2020wngrad] will have a divergent optimality gap. The method from [@grapiglia2022AdaptiveTrust] can be handled similarly. $\theta_0 \in \mathbb{R}^n, b_0 > 0$ $\theta_{k} = \theta_{k-1} - \frac{1}{b_{k-1}} \dot{F}(\theta_{k-1})$ $b_k = b_{k-1} + \frac{||\dot{F}(\theta_{k})||_2^2}{b_{k-1}}$ Let $b_0 > 0$. Define $S_0 = 0$, and for all $j \in \mathbb{N}$, $S_j = S_{j-1} + \frac{1}{B_{j-1}}$ where $B_{0} = b_0$, and $B_{j} = B_{j-1} + \frac{1}{B_{j-1}}$. Let $d_j = 1$ for all $j+1 \in\mathbb{N}$. Let $F:\mathbb{R} \to \mathbb{R}$ be defined by [\[eqn-cd-objective\]](#eqn-cd-objective){reference-type="ref" reference="eqn-cd-objective"}.
Let $\theta_0$ and $\{\theta_j : j \in \mathbb{N}\}$ be defined by [\[alg:wngrad\]](#alg:wngrad){reference-type="ref" reference="alg:wngrad"}. Then, $\lim_j F(\theta_j) = \infty$ and $\inf_j |\dot{F}(\theta_j)| = 1$.

*Proof.* First, we show that $\theta_j = S_j$ and $b_j = B_{j}$ by induction, and then show that $S_j$ diverges. To this end, $\theta_0 = S_0$ and $b_0 = B_{0}$ by assumption; therefore, assume that $\theta_j = S_j$ and $b_j = B_j$ for some $j \in \mathbb{N} \cup \{0\}$. Then, since $\dot{F}(\theta_j) = \dot{F}(S_j) = -1$, $\theta_{j+1} = \theta_j - \frac{1}{b_j}\dot{F}(\theta_j) = S_j - \frac{1}{B_j}\dot{F}(S_j) = S_j + \frac{1}{B_j} = S_{j+1}$, and $b_{j+1} = b_{j} + \frac{ \Vert \dot{F}(\theta_{j+1}) \Vert_2^2 }{b_{j}} = B_{j} + \frac{1}{B_{j}} = B_{j+1}$. Lastly, to show that $S_j$ diverges, we show that $1 \leq B_j \leq jB_1$ for $j \in \mathbb{N}$ by induction. The base case is true since either $b_0 > 1$, implying that $B_1 = b_0 + \frac{1}{b_0} \geq 1$, or $0 < b_0 \leq 1$ and $B_1 \geq \frac{1}{b_0} \geq 1$. Suppose this is true for $j \in \mathbb{N}$; then by the inductive hypothesis $\frac{1}{B_j} \leq 1 \leq B_1$, so that $B_{j+1} = B_j + \frac{1}{B_j} \leq jB_1 + B_1 = (j+1)B_1$. Moreover, $B_{j+1} \geq B_j \geq 1$ by the inductive hypothesis. Therefore, by definition, $S_j = \sum_{i=0}^{j-1} \frac{1}{B_i} \geq \frac{1}{B_0}+\frac{1}{B_1}\sum_{i=1}^{j-1} \frac{1}{i}$, which implies $S_j$ diverges. ◻

## Adaptively Scaled Trust Region {#subsec:ce-adaptiveTR}

In this section, we show that the Adagrad-like method described in [@gratton2022firstorderOFFO §3] (see [\[algo:adagrad\]](#algo:adagrad){reference-type="ref" reference="algo:adagrad"}) has a divergent optimality gap. Note, a variation of this method described in [@gratton2022firstorderOFFO §4] can be treated similarly.
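The mechanism behind this counterexample can be previewed numerically: on the constructed objective every gradient has magnitude one, so the damping sequence grows too slowly to halt the iterates. A minimal sketch of the Adagrad-like update, using the stand-in objective $F(\theta) = -\theta$ (constant gradient $-1$, mimicking the unit-slope segments of the construction; $\zeta$ and $\mu$ below are illustrative choices, not values from the text):

```python
# Adagrad-like update: w_k = (zeta + sum_{j<=k} F'(theta_j)^2)^mu,
# theta_{k+1} = theta_k - F'(theta_k) / w_k.
# Stand-in gradient F'(t) = -1 everywhere (hypothetical, mimicking the
# unit-slope pieces of the constructed counterexample objective).
def adagrad_like(grad, theta0, zeta, mu, n):
    theta, g2 = theta0, 0.0
    for _ in range(n):
        g = grad(theta)
        g2 += g * g                        # running sum of squared gradients
        theta -= g / (zeta + g2) ** mu     # damped gradient step
    return theta

theta = adagrad_like(lambda t: -1.0, 0.0, zeta=1.0, mu=0.5, n=10_000)
# With unit gradients, theta_n = sum_{k=1}^{n} (zeta + k)^(-mu),
# a divergent series for mu < 1: the iterates escape to infinity.
```

Since each step has size $(\zeta + k)^{-\mu}$ and $\sum_k k^{-\mu}$ diverges for $\mu < 1$, the iterates, and hence the optimality gap on the constructed objective, grow without bound.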
$\zeta \in (0, 1], \mu \in (0,1), \theta_0 \in \mathbb{R}^n$
$w_k = \left( \zeta + \sum_{j=0}^k \dot F(\theta_j)^2 \right)^{\mu}$
$\theta_{k+1} = \theta_k - \frac{1}{w_k} \dot F(\theta_k)$

Let $\zeta \in (0,1]$ and $\mu \in (0,1)$. Let $S_0 = 0$ and $S_j = S_{j-1} + (\zeta +j)^{-\mu}$ for all $j \in \mathbb{N}$. Let $d_j = 1$ for all $j+1 \in \mathbb{N}$. Let $F: \mathbb{R} \to \mathbb{R}$ be defined by [\[eqn-cd-objective\]](#eqn-cd-objective){reference-type="ref" reference="eqn-cd-objective"}. Let $\theta_0 = 0$ and $\lbrace \theta_j : j\in\mathbb{N} \rbrace$ be defined by [\[algo:adagrad\]](#algo:adagrad){reference-type="ref" reference="algo:adagrad"}. Then, $\lim_{j} F(\theta_j) = \infty$ and $\inf_{j} |\dot F(\theta_j)| = 1$.

*Proof.* First, we verify $\theta_j = S_j$ by induction. Then, we verify $\lbrace S_j \rbrace$ diverges. For the base case, $S_0 = \theta_0$. Suppose $S_\ell = \theta_{\ell}$ for all $\ell \in \lbrace 0,\ldots, j \rbrace$. We now generalize to $j+1$. First, since $\dot F(\theta_{\ell}) = \dot F(S_{\ell}) = -1$, $w_j = (\zeta + \sum_{\ell=0}^j 1)^{\mu} = (\zeta + j + 1)^{\mu}$. Then, $\theta_{j+1} = \theta_j + (\zeta+j+1)^{-\mu} = S_j + (\zeta+j+1)^{-\mu} = S_{j+1}$. Hence, $\theta_{j} = S_j$ for all $j+1 \in \mathbb{N}$. Now, $S_j = \sum_{\ell=1}^j (\zeta + \ell)^{-\mu}$. Since $\mu < 1$, $\lbrace S_j \rbrace$ diverges. By [\[result-cd-objective\]](#result-cd-objective){reference-type="ref" reference="result-cd-objective"}, the result follows. ◻

## Polyak's Method {#subsec:cd-polyaks}

We construct a one-dimensional example where applying gradient descent with Polyak's step size rule results in the iterates diverging, objective value diverging, and gradients that stay bounded away from 0.
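For contrast, Polyak's step size $m_k = (F(\theta_k) - F_{l.b.})/\dot{F}(\theta_k)^2$ behaves well on benign objectives. A minimal sketch of gradient descent with this rule on an illustrative quadratic (the objective, starting point, and iteration count are our own choices, not the counterexample constructed below):

```python
# Gradient descent with Polyak's step size:
#   theta_{k+1} = theta_k - (F(theta_k) - F_lb) / F'(theta_k)^2 * F'(theta_k).
def polyak_gd(F, dF, F_lb, theta0, n):
    theta = theta0
    for _ in range(n):
        g = dF(theta)
        if g == 0.0:                       # already at a stationary point
            break
        theta -= (F(theta) - F_lb) * g / (g * g)
    return theta

# Illustrative quadratic F(t) = t^2 with exact lower bound F_lb = 0:
# each step halves theta, so the iterates converge to the minimizer 0.
theta = polyak_gd(lambda t: t * t, lambda t: 2.0 * t, 0.0, 5.0, 60)
```

The construction that follows shows how badly this behaviour can fail once the objective is built adversarially while still being smooth and bounded below.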
In this context, iterates $\{\theta_k : k \in \mathbb{N}\}$ are generated through $\theta_{k+1} = \theta_k - m_k \dot{F}(\theta_k)$, where $F : \mathbb{R} \to \mathbb{R}$; and, for all $k+1 \in \mathbb{N}$, $$m_k = \frac{F(\theta_k) - F_{l.b.}}{\dot{F}(\theta_k)^2}.$$ Here, we consider $F_{l.b.}$ to be the tight lower bound, that is, $F_{l.b.} = \min_{\theta\in\mathbb{R}}F(\theta)$. Let $S_0 = 0$, $S_1 = 1$, $\mathcal{O}_0 = 0$, and for all $j\in\mathbb{N}_{\geq 2}$, $$S_j = S_{j-1}+8\mathcal{O}_{j-1}+\frac{248}{2048}, \text{\quad} \mathcal{O}_{j-1} = \frac{1346}{2048} (S_{j-1} - S_{j-2}) + \mathcal{O}_{j-2}.$$ Additionally, let $d_j = -\frac{1}{8}$ for all $j+1 \in \mathbb{N}$, and define $F : \mathbb{R} \to \mathbb{R}$ as in [\[eqn-cd-objective\]](#eqn-cd-objective){reference-type="ref" reference="eqn-cd-objective"}. Let $\theta_0 = 1$ and let $\{\theta_j : j \in \mathbb{N}\}$ be defined as $$\label{eq:polyak-iterates} \theta_j = \theta_{j-1} - \frac{F(\theta_{j-1}) - F_{l.b.}}{\dot{F}(\theta_{j-1})^2} \dot{F}(\theta_{j-1}),$$ then $\lim_{j\to\infty} \theta_j = \infty$, $\lim_{j\to\infty} F(\theta_j) = \infty$, and $\inf_j |\dot{F}(\theta_j)| = \frac{1}{8}$.

*Proof.* First, we verify that $\mathcal{O}_{j-1} = F(S_{j-1})$ for all $j \in \mathbb{N}$. Clearly, $\mathcal{O}_0 = 0 = F(0) = F(S_0)$ by our assumptions and definition of $F$. Suppose this is true for some $j-1 \in \mathbb{N} \cup \{0\}$; we will show it is true for $j$. Specifically, by the definition of $\mathcal{O}_j$ and $F$, and the inductive hypothesis, $$\mathcal{O}_j = \frac{1346}{2048}(S_{j}-S_{j-1}) + \mathcal{O}_{j-1} = f(S_{j}-S_{j-1};S_j-S_{j-1},d_{j-1},d_{j}) + F(S_{j-1}) = F(S_j).$$ Having established this equality, we now verify that $F_{l.b.} = -\frac{31}{2048}$. Indeed, for $\theta \leq 0$, $F(\theta) \geq 0$, and for $\theta \in (S_0,S_1]$ the minimum value is $-\frac{31}{2048}$, attained at $1/8$.
For $\theta \in (S_j,S_{j+1}]$, by [\[eqn-cd-objective\]](#eqn-cd-objective){reference-type="ref" reference="eqn-cd-objective"}, [\[result-cd-building-properties\]](#result-cd-building-properties){reference-type="ref" reference="result-cd-building-properties"}, the relationship $F(S_j) = \mathcal{O}_j$, and the definition of $S_{j+1}$, $$F(\theta) = f(\theta-S_j; S_{j+1} - S_j, d_j, d_{j+1}) + F(S_j) \geq -(S_{j+1}-S_j)/8 + \mathcal{O}_j = -\frac{31}{2048}.$$ Now we show that $\theta_j = S_{j+1}$ for all $j+1 \in \mathbb{N}$. This is true by assumption for $\theta_0$, so by induction, suppose that $\theta_{j-1} = S_j$. Then, using [\[eq:polyak-iterates\]](#eq:polyak-iterates){reference-type="ref" reference="eq:polyak-iterates"}, the minimum value $F_{l.b.}$, $\dot{F}(S_j) = \dot{F}(\theta_{j-1}) = d_j = \frac{-1}{8}$, and $F(S_j) = \mathcal{O}_j$, $$\theta_{j} = \theta_{j-1} - m_{j-1} \dot{F}(\theta_{j-1}) = \theta_{j-1} + 8F(\theta_{j-1})+\frac{248}{2048} = S_{j} + 8F(S_{j}) + \frac{248}{2048} = S_{j+1}.$$ Note, this proves that for all $j+1 \in \mathbb{N}$, $\dot{F}(\theta_j) = \frac{-1}{8}$. To finish, we show that $S_j \to \infty$ as $j \to \infty$. To do this, we show that $S_{j+1}-S_j \geq 1$. For the base case, $S_{1} - S_{0} = 1$. Now, suppose for $j \in \mathbb{N}$ that $S_{j}-S_{j-1} \geq 1$; then by [\[eqn-cd-objective\]](#eqn-cd-objective){reference-type="ref" reference="eqn-cd-objective"} and [\[eqn-cd-building-function\]](#eqn-cd-building-function){reference-type="ref" reference="eqn-cd-building-function"}, $S_{j+1}-S_{j} = 8\mathcal{O}_{j}+\frac{248}{2048} = 8\left( \frac{1346}{2048} (S_{j}-S_{j-1}) + \mathcal{O}_{j-1} \right)+\frac{248}{2048} \geq 8\mathcal{O}_{j-1} +\frac{248}{2048} = (S_{j}-S_{j-1}) \geq 1$. This implies that the iterates diverge, and the function values diverge.
◻ # Objective Function Evaluation Explosion for Gradient Methods with Descent Conditions ## Preliminaries {#subsec:exploding-obj-eval-preliminaries} We show that the methods from [3](#table-multiple-objective-evaluation){reference-type="ref" reference="table-multiple-objective-evaluation"} will require an exponentially increasing number of objective function evaluations per iteration up to an arbitrary iterate $J \in \mathbb{N}$. To construct the objective functions, we will make use of an interpolating polynomial of degree $9$. **Lemma 10**. *Let $m > 0$. Let $f_{-}, f_{0}, f_+, f_-', f_{0}', f_{+}', f_-'', f_{0}'', f_{+}'' \in \mathbb{R}$. Then, there exists a polynomial function of degree $9$, $P:\mathbb{R} \to \mathbb{R}$ such that: (1) $f_- = P(-m)$, $f_{0} = P(0)$, $f_+ = P(m)$; (2) $f_-' = \dot P(-m)$, $f_{0}' = \dot P(0)$, $f_+' = \dot P(m)$; and (3) $f_-'' = \ddot P(-m), f_{0}'' = \ddot P(0), f_{+}'' = \ddot P(m)$.* *Proof.* Let $P(\theta) = \sum_{i=0}^9 c_i \theta^i$. Then, from $P(0) = c_0, \dot P(0) = c_1, \ddot P(0) = 2c_2$ we set $c_0 = f_0, c_1 = f_0', c_2 = f_0''/2$. 
For the rest of the coefficients, $\{c_i\}_{i=3}^9$, we look at the set of solutions to the following linear system $$\begin{bmatrix} f_+ - f_0 - f_0' m - (f_0''/2) m^2 \\ f_{-} - f_0 + f_0' m - (f_0''/2) m^2 \\ f_+' - f_0' - f_0''m \\ f_{-}' - f_0' + f_0''m \\ f_{+}'' - f_0''\\ f_{-}'' - f_0'' \end{bmatrix} = \begin{bmatrix} m^9 & m^8 & m^7 & m^6 & m^5 & m^4 & m^3 \\ -m^9 & m^8 & -m^7 & m^6 & -m^5 & m^4 & -m^3 \\ 9m^8 & 8m^7 & 7m^6 & 6m^5 & 5m^4 & 4m^3 & 3m^2 \\ 9m^8 & -8m^7 & 7m^6 & -6m^5 & 5m^4 & -4m^3 & 3m^2 \\ 72m^7 & 56m^6 & 42m^5 & 30m^4 & 20m^3 & 12m^2 & 6m \\ -72m^7 & 56m^6 & -42m^5 & 30m^4 & -20m^3 & 12m^2 & -6m \end{bmatrix} \begin{bmatrix} c_9\\ c_8\\ c_7\\ c_6\\ c_5\\ c_4\\ c_3 \end{bmatrix}.$$ The above coefficient matrix has row echelon form $$\begin{bmatrix} m^9 & m^8 & m^7 & m^6 & m^5 & m^4 & m^3 \\ 0 & 2m^8 & 0 & 2m^6 & 0 & 2m^4 & 0 \\ 0 & 0 & -2m^6 & 2m^5 & -4m^4 & -4m^3 & -6m^2 \\ 0 & 0 & 0 & 4m^5 & 0 & 8m^3 & 0 \\ 0 & 0 & 0 & 0 & 8m^3 & 8m^2 & 24m \\ 0 & 0 & 0 & 0 & 0 & 16m^2 & 0 \end{bmatrix}.$$ Therefore, there is an infinite number of solutions to the linear system, any of which satisfy properties $(1), (2), \text{and } (3)$. ◻ Now, for any $J \in \mathbb{N}$, let $\{S_j : j \in \{0,1,...,J\} \}$ be a strictly increasing sequence and $\Delta > 0$ such that the intervals $\{ [S_j - \Delta, S_j + \Delta] : j \in \{0,1,...,J\} \}$ are disjoint. Let $\{(f_j, f_j', f_j'') \in \mathbb{R}^3 : j \in \{0,1,...,J\}\}$ be values we select later. For $j \in \lbrace 0,\ldots, J \rbrace$, let $P_j(\theta)$ be a polynomial of degree $9$ such that $$0 = P_j(S_j - \Delta) = \dot P_j (S_j - \Delta) = \ddot P_j (S_j - \Delta) = P_j(S_j + \Delta) = \dot P_j (S_j + \Delta) = \ddot P_j(S_j + \Delta),$$ and $P_j (S_j) = f_j$, $\dot P_j(S_j) = f_j'$ and $\ddot P_j(S_j) = f_j''$. By [Lemma 10](#lemma-ee-interp-poly){reference-type="ref" reference="lemma-ee-interp-poly"}, this polynomial exists. 
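A polynomial satisfying Lemma 10's nine conditions can also be produced numerically. A sketch using NumPy: we assemble the nine interpolation equations directly and let a least-squares solve pick one of the infinitely many coefficient vectors (the node spacing and target values below are illustrative, not taken from the constructions that follow):

```python
import numpy as np

def hermite_deg9(m, conditions):
    """Coefficients c_0..c_9 of a degree-9 polynomial matching the value,
    first, and second derivative at each node in `conditions`
    (node -> (f, f', f'')), as in Lemma 10 with nodes -m, 0, m."""
    rows, rhs = [], []
    for x, (f, fp, fpp) in conditions.items():
        rows.append([x ** i for i in range(10)])                                  # P(x)
        rows.append([i * x ** (i - 1) if i >= 1 else 0.0 for i in range(10)])     # P'(x)
        rows.append([i * (i - 1) * x ** (i - 2) if i >= 2 else 0.0
                     for i in range(10)])                                         # P''(x)
        rhs += [f, fp, fpp]
    # Nine consistent equations in ten unknowns: lstsq returns the
    # minimum-norm exact solution, one member of the lemma's solution family.
    c, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs, dtype=float), rcond=None)
    return c

# Illustrative data: flat to second order at the edges, a bump at the centre.
m = 0.5
P = np.polynomial.Polynomial(
    hermite_deg9(m, {-m: (0, 0, 0), 0.0: (1, -1, 0), m: (0, 0, 0)}))
```

Checking `P`, `P.deriv()`, and `P.deriv(2)` at the nodes $-m$, $0$, $m$ recovers the prescribed values, mirroring properties (1)-(3) of the lemma.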
Using these polynomials, we define our objective function, $F_J : \mathbb{R} \to \mathbb{R}$, by $$\label{eqn-ee-objective} F_J(\theta) = \begin{cases} P_j( \theta ) & \theta \in \left[S_j - \Delta, S_j + \Delta \right], ~j=0,\ldots,J \\ 0 & \text{otherwise}. \end{cases}$$ This function has the properties stated in [\[prop-ee-properties\]](#prop-ee-properties){reference-type="ref" reference="prop-ee-properties"}.

[\[prop-ee-properties\]]{#prop-ee-properties label="prop-ee-properties"} For any $J \in \mathbb{N}$, let $\{S_j : j \in \{0,1,...,J\} \}$ be a strictly increasing sequence and $\Delta > 0$ be such that the intervals $\{ [S_j - \Delta, S_j + \Delta] : j \in \{0,1,...,J\} \}$ are disjoint. Let $\{(f_j, f_j', f_j'') \in \mathbb{R}^3 : j \in \{0,1,...,J\}\}$. Let $F_J: \mathbb{R} \to \mathbb{R}$ be defined as in [\[eqn-ee-objective\]](#eqn-ee-objective){reference-type="ref" reference="eqn-ee-objective"}. Then,

1. For all $j \in \{0,1,...,J\}$, $F_J(S_j) = f_j, \dot F_J(S_j) = f_j', \ddot F_J(S_j) = f_j''$;

2. For all $j \in \{0,1,...,J\}$, $F_J(S_j - \Delta) = \dot F_J(S_j - \Delta) = \ddot F_J(S_j - \Delta) = F_J(S_j + \Delta) = \dot F_J(S_j + \Delta) = \ddot F_J(S_j + \Delta) = 0$;

3. $F_J$ satisfies [\[as-bounded below,as-loc-lip-cont\]](#as-bounded below,as-loc-lip-cont){reference-type="ref" reference="as-bounded below,as-loc-lip-cont"}.

*Proof.* First, properties 1 and 2 follow directly from the construction of $F_J(\theta)$ and $P_j(\theta)$, and [Lemma 10](#lemma-ee-interp-poly){reference-type="ref" reference="lemma-ee-interp-poly"}. To check the other properties, we begin by checking the boundary points in $F_J$. By construction, the $P_j$ are zero at these points and have one-sided derivatives of value zero. As $F_J$ is zero everywhere else, the objective is continuous and differentiable everywhere. We now verify that the objective is locally Lipschitz continuous.
The $P_j$ are polynomial functions and hence twice continuously differentiable, which implies that their derivatives are locally Lipschitz continuous. The zero function is twice continuously differentiable. Hence, we need only check the local Lipschitz continuity of the derivative at the boundary points. By assumption, the intervals $\{ [S_j - \Delta, S_j + \Delta] : j \in \{0,1,...,J\} \}$ are disjoint and $\Delta > 0$. Therefore, there exists an $\epsilon_1 > 0$ such that for any boundary point $S_{j} - \Delta$, $F_J(\theta) = 0$ for all $\theta \in (S_{j} - \Delta - \epsilon_1, S_{j} - \Delta]$. Now, let $\epsilon_2 \in (0, 2\Delta]$. Let $L$ be the local Lipschitz rank of $\dot P_j$ within $[S_j - \Delta, S_j - \Delta +\epsilon_2)$. Then, for any $\theta \in (S_j - \Delta - \epsilon_1, S_j - \Delta]$ and $\psi \in [S_j - \Delta, S_j - \Delta + \epsilon_2)$, $$|\dot F_J(\theta) - \dot F_J(\psi)| = |\dot P_j(\psi)| \leq L \left|\psi - S_j + \Delta \right| \leq L |\psi - \theta|.$$ The boundary point $S_j + \Delta$ can be treated in the same way. Finally, $F_J$ is bounded from below since $F_J$ is continuous everywhere and only nonzero on a compact set. ◻

## Armijo's Backtracking Method {#subsec:armijo-backtracking}

[\[alg:armijo\]](#alg:armijo){reference-type="ref" reference="alg:armijo"} shows Armijo's backtracking method [@armijo1966minimization].

$\alpha >0$, $\delta \in (0,1)$, $\rho \in (0,1)$, $\theta_0 \in \mathbb{R}^n$
$j = 0$
$j = j + 1$ while $F(\theta_k - \alpha \delta^j \dot F(\theta_k)) > F(\theta_k) - \rho \alpha \delta^j \Vert \dot F(\theta_k) \Vert_2^2$
$\theta_{k+1} = \theta_k - \alpha \delta^j \dot F(\theta_k)$.

We now show that there is an exponential increase in the number of objective function evaluations between accepted iterates up to $J$. [\[result-ee-armijo\]]{#result-ee-armijo label="result-ee-armijo"} Let $\delta \in (0,1)$, $\alpha > 0$, $\rho \in (0,1)$, and $J \in \mathbb{N}$.
Let $S_0 = 0$; for $j=1,\ldots,J$, let $S_j = S_{j-1} + \delta^{j - 2^{j-1}}$; and, for $j=0,\ldots,J$, $$( f_j, f_j', f_j'' ) = \begin{cases} \left( 0, -1/\alpha, 0 \right) & j = 0 \\ \left( \frac{-\rho}{\alpha}\sum_{k=0}^{j-1} \delta^{2k+4 - 2^{k+2}}, -\frac{\delta^{j+2 - 2^{j+1}}}{\alpha}, 0 \right) & j = 1,\ldots, J \end{cases}.$$ Let $\Delta = \frac{1-\delta}{2}$, and $F_J: \mathbb{R} \to \mathbb{R}$ be defined as in [\[eqn-ee-objective\]](#eqn-ee-objective){reference-type="ref" reference="eqn-ee-objective"}. Let $\theta_0 = 0$ and let $\lbrace \theta_1,\ldots,\theta_J \rbrace$ be generated by [\[alg:armijo\]](#alg:armijo){reference-type="ref" reference="alg:armijo"}. Then, the total number of objective function evaluations taken by [\[alg:armijo\]](#alg:armijo){reference-type="ref" reference="alg:armijo"} upon acceptance of iterate $\theta_j$ is $2^{j}$ for $j=1,\ldots,J$.

*Proof.* First, note that $S_j$ is a strictly increasing finite sequence, and that for all $j \in \mathbb{N}$, $\delta^{j-2^{j-1}} > 1-\delta > 0$, so that our constructed function satisfies the assumptions in [\[prop-ee-properties\]](#prop-ee-properties){reference-type="ref" reference="prop-ee-properties"}. Now, to prove the statement in the proposition, we proceed by induction. By construction, $\theta_0 = S_0$. For the base case, since $F(\theta_0) = F(S_0) = 0$, $$F(\theta_0 - \alpha \dot F(\theta_0)) = F(\theta_0 + 1) = F(S_1) = - \frac{\rho}{\alpha} = F(\theta_0) - \alpha \rho \dot F(\theta_0)^2.$$ Hence, the algorithm accepts $\theta_1 = S_1$. Moreover, the algorithm computed $F(\theta_0)$ and $F(S_1)$ for the line search, totaling $2 = 2^{1}$ objective evaluations. If $J = 1$ we are done. Otherwise, we continue. Suppose for $j \in \lbrace 1,\ldots, J-1 \rbrace$, $\theta_j = S_j$ and the total number of objective evaluations taken by the algorithm upon acceptance of $\theta_j$ is $2^{j}$.
We now generalize by showing that $$\theta_j - \alpha \delta^{2^j - 1} \dot F(\theta_j) = S_j - \alpha \delta^{2^j - 1} \dot F(S_j) = S_j + \delta^{2^j - 1}\delta^{j+2-2^{j+1}} = S_{j} + \delta^{j +1 - 2^{j}} = S_{j+1}$$ is the accepted iterate (i.e., $\theta_{j+1} = S_{j+1}$). Hence, we must show that $F(\theta_{j} - \alpha \delta^{\ell} \dot F(\theta_j) ) > F(\theta_j) - \rho \alpha \delta^{\ell} \dot F(\theta_j)^2$ for all $\ell \in \lbrace 0,1,\ldots,2^{j}-2 \rbrace$, and then the opposite holds at $\ell = 2^{j} - 1$. To show this, we first verify $\theta_j - \alpha \delta^{\ell} \dot F(\theta_j) \in [ S_{j+1} + \frac{1-\delta}{2}, S_{j+2} - \frac{1-\delta}{2} ]$ for $\ell \in \lbrace 0,1,\ldots,2^{j}-2 \rbrace$, which implies $F(\theta_j - \alpha \delta^{\ell} \dot F(\theta_j)) = 0$. As the sequence is decreasing with increasing $\ell$, it is enough to verify that the two end points are in the interval. For the smallest term (i.e., $\ell = 2^j - 2$), $$\theta_j - \alpha \delta^{2^j - 2} \dot F(\theta_j) = S_j + \delta^{2^j - 2 + j + 2 - 2^{j+1}} = S_j + \delta^{j - 2^{j}} > S_{j} + \delta^{j +1 - 2^{j}} + \frac{1-\delta}{2} = S_{j+1} + \frac{1-\delta}{2},$$ where the inequality follows from $\delta^{j - 2^j} > 1/2$ for all $j \in \mathbb{N} \cup \lbrace 0 \rbrace$, which provides the necessary inequality when rearranged. For the largest term (i.e., $\ell = 0$), since $\delta^{j+1-2^j}> 1/2 > \frac{1-\delta}{2}$ for all $j \in \mathbb{N} \cup \{0\}$, $$\theta_j - \alpha \dot F(\theta_j) =S_j + \delta^{j+2-2^{j+1}} = S_{j+2} + \delta^{j+2-2^{j+1}} - \delta^{j+1 - 2^{j}} - \delta^{j+2 - 2^{j+1}} \le S_{j+2} - \frac{1-\delta}{2}.$$ Therefore, since for all $j \in \lbrace 1,\ldots, J-1 \rbrace$, $F(\theta_j) = P(S_j) < 0$, $\dot F(\theta_j) = \dot P(\theta_j) < 0$, and $\delta, \alpha, \rho > 0$ we have for all $\ell \in \{0,1,...,2^j-2\}$, $F(\theta_j - \alpha \delta^{\ell} \dot F(\theta_j)) = 0 > F(\theta_j) - \rho\alpha\delta^\ell\dot{F}(\theta_j)^2$. 
Lastly, for $\ell = 2^j-1$ (i.e., $\theta_j - \alpha \delta^{\ell} \dot F(\theta_j) = S_{j+1}$), since $\delta < 1$ we obtain that $\delta^{2j+4-2^{j+2}} \geq \delta^{2^j-1} \delta^{2j+4-2^{j+2}}$ for all $j \in \mathbb{N} \cup \{0\}$, which implies by adding the extra terms and multiplying by $-\rho/\alpha$ that $$F(S_{j+1}) = -\frac{\rho}{\alpha}\sum_{k=0}^j \delta^{2k+4-2^{k+2}} \leq -\frac{\rho}{\alpha}\sum_{k=0}^{j-1} \delta^{2k+4-2^{k+2}} - \frac{\rho}{\alpha} \delta^{2^j-1} \delta^{2j+4-2^{j+2}} = F(\theta_j) - \rho\alpha\delta^{2^j-1}\dot{F}(\theta_j)^2.$$ Therefore, $2^j$ objective function evaluations were required to accept $\theta_{j+1}$ (excluding $F(\theta_j)$ as that was computed by the previous iteration). By the inductive hypothesis, $2^j$ total objective evaluations are required for $\theta_j$, thus the total number of objective evaluations taken by the algorithm upon acceptance of $\theta_{j+1}$ is $2^j + 2^j = 2^{j+1}$. ◻

## Newton's Method with Cubic Regularization {#subsec:cubic-newton-method}

[\[alg:crn\]](#alg:crn){reference-type="ref" reference="alg:crn"} shows Cubic Regularized Newton's Method from [@nesterov2006cubic].

$L_0 > 0$, $\delta_1 > 1$, $\theta_0 \in \mathbb{R}^n$
$\ell \leftarrow 0$
$M^k_\ell \leftarrow L_0$
$U_k(\psi; M^k_\ell) \leftarrow \dot F(\theta_k)^\intercal (\psi - \theta_k) + .5 (\psi - \theta_k)^\intercal \ddot F(\theta_k) (\psi - \theta_k) + \frac{1}{6}M^k_\ell \Vert \psi - \theta_k \Vert_2^3$
$\psi^k_\ell \leftarrow \arg\min_{\psi \in \mathbb{R}^n} U_k(\psi; M^k_\ell)$
$\textbf{exit loop}$ if $F(\psi^k_\ell) \leq F(\theta_k) + U_k(\psi^k_\ell; M^k_\ell)$
$M^k_{\ell+1} \leftarrow \delta_1 M^k_\ell$
$\ell \leftarrow \ell + 1$
$\theta_{k+1} \leftarrow \psi^k_\ell$

We now provide an objective function such that there is an exponential increase in objective function evaluations between accepted iterates up to the $J$th acceptance. Let $L_0 > 0$, $\delta_1 \in (1,\infty)$, and $J \in \mathbb{N}$.
Set $\delta = \frac{1}{\sqrt{\delta_1}}$, and $S_0 = 0$; for $j = 1,...,J$, let $S_j = S_{j-1} + \delta^{j-2^{j-1}}$; and, for $j = 0,\ldots, J$, $$(f_j, f_j', f_j'') = \begin{cases} \left(0, -L_0/2, 0 \right) & j = 0 \\ \left( -\frac{L_0}{3} \sum_{k=0}^{j-1} (\delta^{k+2-2^{k+1}})^3, - \frac{L_0}{2}\left(\delta^{j+2-2^{j+1}}\right)^2, 0 \right) & j = 1,\ldots,J \end{cases}.$$ Let $\Delta = \frac{1-\delta}{2}$, and $F_J : \mathbb{R}\to\mathbb{R}$ be defined as in [\[eqn-ee-objective\]](#eqn-ee-objective){reference-type="ref" reference="eqn-ee-objective"}. Let $\theta_0 = 0$ and let $\{\theta_1,...,\theta_J\}$ be generated by [\[alg:crn\]](#alg:crn){reference-type="ref" reference="alg:crn"}. Then, the total number of objective function evaluations taken by [\[alg:crn\]](#alg:crn){reference-type="ref" reference="alg:crn"} upon acceptance of iterate $\theta_j$ is $2^j$ for $j = 1,...,J$.

*Proof.* We first provide a global minimizer of $U_j(\psi; M)$ for any penalty parameter, $M > 0$, in the special case that $\theta_j$ satisfies $\ddot F(\theta_j) = 0$ and $\dot F(\theta_j) \not= 0$. Specifically, $U_j$, $\dot U_j$, and $\ddot U_j$ are $$\begin{aligned} &U_j(\psi; M) = \dot F(\theta_j)^\intercal (\psi - \theta_j) + \begin{cases} \frac{M}{6}(\psi-\theta_j)^3 & (\psi-\theta_j) \geq 0 \\ -\frac{M}{6}(\psi-\theta_j)^3 & (\psi-\theta_j) < 0 \end{cases},\\ \text{\quad} &\dot U_j(\psi; M) = \dot F(\theta_j) + \begin{cases} \frac{M}{2} (\psi-\theta_j)^2 & (\psi-\theta_j) \geq 0 \\ -\frac{M}{2} (\psi-\theta_j)^2 & (\psi-\theta_j) < 0 \end{cases},\\ &\ddot U_j(\psi; M) = \begin{cases} M(\psi-\theta_j) & (\psi-\theta_j) \geq 0\\ -M(\psi-\theta_j) & (\psi-\theta_j) < 0 \end{cases}.
\end{aligned}$$ From these equations, one can verify by checking first and second order sufficient conditions that two possible minimizers are $\theta_j + \sqrt{-2\dot F(\theta_j)/M}$ and $\theta_j - \sqrt{2\dot F(\theta_j)/M}$, only one of which is an element of $\mathbb{R}$ depending on the sign of $\dot F(\theta_j)$. Next, the function $F_J$ has all the properties in [\[prop-ee-properties\]](#prop-ee-properties){reference-type="ref" reference="prop-ee-properties"} (see the proof of [\[result-ee-armijo\]](#result-ee-armijo){reference-type="ref" reference="result-ee-armijo"}). We now proceed by induction to prove the statement in the current proposition. For the base case, by construction $\theta_0 = S_0$, and by [\[alg:crn\]](#alg:crn){reference-type="ref" reference="alg:crn"}, $M^0_0 = L_0$. By the above discussion, this implies the first trial point is $\psi^0_0 = \theta_0 + \sqrt{-2\dot F(S_0)/L_0} = S_0 + 1 = S_1$. Therefore, $$F(S_1) = \frac{-L_0}{3} = F(\theta_0) - \sqrt{2/L_0}(2/3)(-\dot F(S_0))^{3/2} = F(\theta_0) + U_0(\psi_0^0; M_0^0).$$ This implies that $\theta_1 = \psi^0_0 = S_1$. Upon acceptance of $S_1$, the algorithm took two objective function evaluations ($F(\theta_0)$ and $F(S_1)$). If $J=1$ we are done, otherwise suppose that for $j \in \{1,...,J-1\}$, $\theta_j = S_j$ and $2^j$ total objective function evaluations have been taken by the algorithm upon acceptance of $\theta_j$. Using the identity $\delta = \sqrt{1/\delta_1}$, we generalize this to $j+1$ by showing that $$\theta_j + \sqrt{-2\dot F(\theta_j)/(\delta_1^{2^j-1}L_0)} = S_j + \sqrt{1/\delta_1^{2^j-1}} \delta^{j+2-2^{j+1}} = S_j + \delta^{2^j-1}\delta^{j+2-2^{j+1}} = S_{j+1}$$ is the accepted iterate (i.e., $\theta_{j+1} = S_{j+1}$). We first show that, for all $\ell \in \{0,...,2^{j}-2\}$, $\psi^j_\ell \in [S_{j+1} + (1-\delta)/2, S_{j+2}-(1-\delta)/2]$, which will imply that $\{\psi^j_\ell\}_{\ell = 0}^{2^j-2}$ are all rejected.
To prove this, we only check the cases $\ell = 0$ and $\ell = 2^{j}-2$, as the optimal solution to the subproblem with penalty parameter $\delta^\ell_1 L_0$, namely $\theta_j + \sqrt{-2\dot F(\theta_j)/(\delta^\ell_1 L_0)} = \theta_j + \delta^\ell \sqrt{-2\dot F(\theta_j)/L_0}$, is a decreasing quantity in $\ell$. Now taking $\ell = 2^j-2$ (i.e., we are in the smallest case), using the inductive hypothesis and $\delta < 1$, $$\theta_j + \delta^{2^j-2} \sqrt{-2\dot F(\theta_j)/L_0} = S_j + \delta^{2^j-2}\delta^{j+2-2^{j+1}} \geq S_{j+1} + \frac{1-\delta}{2}.$$ In the case $\ell = 0$ (i.e., the largest case), using the inductive hypothesis and $\delta < 1$, $$\theta_j + \sqrt{-2\dot F(\theta_j)/L_0} = S_j + \delta^{j+2-2^{j+1}} = S_{j+2} - \delta^{j+1-2^j} \leq S_{j+2} - \frac{1-\delta}{2}.$$ This proves that, for all $\ell \in \{0,...,2^{j}-2\}$, $\psi^{j}_\ell \in [S_{j+1} + (1-\delta)/2, S_{j+2}-(1-\delta)/2]$, and thus, $F(\psi^j_\ell) = 0$ by construction. To show that this results in rejection of all trial iterates $\{\psi^j_\ell\}_{\ell = 0}^{2^j-2}$, note that $$F(\theta_j)+U_j(\psi^j_\ell; \delta^\ell_1 L_0) = F(\theta_j) - \sqrt{2/(\delta^\ell_1 L_0)} (2/3) (-\dot F(\theta_j))^{3/2}.$$ By the inductive hypothesis, $F(\theta_j) = F(S_j) < 0$, which, by the above equation, implies that for all $\ell \in \{0,...,2^{j}-2\}$, $F(\psi^j_\ell) = 0 > F(\theta_j)+U_j(\psi_\ell^j; \delta^\ell_1 L_0)$. Lastly, we show that $\psi_{2^j-1}^j = S_{j+1}$ is accepted. Since $\delta < 1$, $(\delta^{j+2-2^{j+1}})^3 \geq \delta^{2^j-1}(\delta^{j+2-2^{j+1}})^3$ for all $j \in \mathbb{N}\cup\{0\}$, which by adding the extra terms in the sum for $F(S_{j+1})$ and multiplying by $-L_0/3$ implies $$\begin{aligned} F(S_{j+1}) = -\frac{L_0}{3} \sum_{k=0}^{j} (\delta^{k+2-2^{k+1}})^3 &\leq -\frac{L_0}{3} \sum_{k=0}^{j-1} (\delta^{k+2-2^{k+1}})^3 - \delta^{2^j-1}\frac{L_0}{3} (\delta^{j+2-2^{j+1}})^3 \\ &= F(S_j) - \delta^{2^j-1} \sqrt{\frac{2}{L_0}} \frac{2}{3} (-\dot F(\theta_j))^{3/2}.
\end{aligned}$$ Therefore, $S_{j+1}$ is accepted. To finish the proof, every time the algorithm computes a trial iterate, $\psi^{j}_\ell$, $F(\psi^{j}_\ell)$ is computed once to check satisfaction of the second-order cubic regularized upper bound model. Therefore, to accept $\theta_{j+1}$ it takes $2^j$ objective function evaluations (excluding $F(\theta_j)$ as that was computed in the previous iteration). By the inductive hypothesis, the algorithm already took $2^{j}$ for $\theta_j$, so the total is $2^{j+1}$. ◻

## Lipschitz Constant Line Search Methods {#subsec:lipschitz-line-search}

For this class of methods, we construct an objective function such that the dynamic method from [@curtis2018exploiting] (see [\[alg:dynamic_method\]](#alg:dynamic_method){reference-type="ref" reference="alg:dynamic_method"}) exhibits an exponential increase in objective function evaluations. The method from [@nesterov2012GradientMF] can be handled similarly.

$\delta_1 \in (1,\infty)$, $L_0 \in (0, \infty), \sigma_0 \in (0, \infty)$, $\theta_0 \in \mathbb{R}^n$
$\mathrm{NegativeCurvatureDirection()}$
$s'_k \leftarrow 0$
$s'_k \leftarrow \mathrm{NegativeCurvatureDirection()}$
$s_k \leftarrow -\dot F(\theta_k)$
$m_k(L) \leftarrow -\dot F(\theta_k)^\intercal s_k / (L \Vert s_k \Vert_2^2)$
$c_k \leftarrow (s'_k)^\intercal \ddot F(\theta_k)s'_k$
$m'_k(\sigma) \leftarrow \left( -c_k + \sqrt{c_k^2-2\sigma \Vert s'_k \Vert_2^3 \dot F(\theta_k)^\intercal s'_k} \right)/(\sigma \Vert s'_k \Vert_2^3)$
$U_k(m; L) \leftarrow m\dot F(\theta_k)^\intercal s_k + \frac{1}{2} L m^2 \Vert s_k \Vert_2^2$
$U_k'(m'; \sigma) \leftarrow m'\dot F(\theta_k)^\intercal s'_k + \frac{1}{2} (m')^2 (s'_k)^\intercal \ddot F(\theta_k) s_k' + \frac{\sigma}{6}(m')^3 \Vert s'_k \Vert_2^3$
$\ell, L_\ell^k, \sigma_\ell^k \leftarrow 0, L_k, \sigma_k$
$\gamma_k \leftarrow m_k(L_\ell^k)s_k$
$\textbf{exit loop}$
$L_{\ell+1}^k \leftarrow \delta_1 L_\ell^k$
$\gamma_k \leftarrow m'_k(\sigma_\ell^k)s'_k$
$\textbf{exit loop}$
$\sigma_{\ell+1}^k \leftarrow \delta_1 \sigma_\ell^k$ $\ell \leftarrow \ell + 1$ $\theta_{k+1} \leftarrow \theta_{k} + \gamma_k$ $L_{k+1} \in (0, L_\ell^k]$ $\sigma_{k+1} \in (0, \sigma_\ell^k]$ We now provide a class of objective functions such that there is an exponential increase in objective function evaluations between accepted iterates up to the $J$th acceptance. Let $\delta_1 \in (1,\infty)$, $L_0 \in (0, \infty), \sigma_0 \in (0, \infty)$, and $J \in \mathbb{N}$. Set $\delta = \frac{1}{\delta_1}$, and $S_0 = 0$; for $j = 1,...,J$, let $S_j = S_{j-1} + \delta^{j-2^{j-1}}$; and, for $j = 0,\ldots,J$ $$(f_j, f_j', f_j'') = \begin{cases} \left( 0, -L_0, d_0' \right) & j = 0 \\ \left(-\frac{L_0}{2} \sum_{k = 0}^{j-1} \delta^{2k + 4 - 2^{k+2}}, -L_0\delta^{j+2-2^{j+1}}, d_j' \right) & j = 1,\ldots, J \end{cases},$$ where $d_j' \in \mathbb{R}_{\ge 0}$ is any non-negative real number. Let $\Delta = (1-\delta)/2$, and $F_J : \mathbb{R}\to\mathbb{R}$ be defined as in [\[eqn-ee-objective\]](#eqn-ee-objective){reference-type="ref" reference="eqn-ee-objective"}. Let $\theta_0 = 0$ and let $\{\theta_1,...,\theta_J\}$ be generated by [\[alg:dynamic_method\]](#alg:dynamic_method){reference-type="ref" reference="alg:dynamic_method"}. Then, the total number of objective function evaluations taken by [\[alg:dynamic_method\]](#alg:dynamic_method){reference-type="ref" reference="alg:dynamic_method"} upon acceptance of iterate $\theta_j$ is $2^j$ for $j = 1,...,J$. *Proof.* First the function $F_J$ has all the properties in [\[prop-ee-properties\]](#prop-ee-properties){reference-type="ref" reference="prop-ee-properties"}, as our components satisfy all assumptions. We now proceed by induction. By construction, $\theta_0 = S_0$ and $\ddot F(\theta_0) = \ddot F(S_0) \geq 0$, therefore $s_0' = 0$. The algorithm also sets $L_0^0 = L_0$, so by our choice of $s_0$, $m_0(L_0^0) = 1/L_0$ and $U_0(m_0(L^0_0); L^0_0) = -\dot F(\theta_0)^2 / (2L_0)$. 
Since $U_0(m_0(L^0_0);L^0_0) < 0 = U'_0(m_0'(\sigma^0_0);\sigma^0_0)$, the algorithm checks to see if the first order upper bound model is satisfied. Indeed, $$F(\theta_0 - (1/L_0)\dot F(\theta_0) ) = F(S_1) = -\frac{L_0}{2} = F(S_0) - \frac{1}{2L_0} \dot F(S_0)^2 = F(S_0) + U_0(m_0(L_0^0); L_0^0).$$ Therefore, $S_1$ is accepted with two objective function evaluations ($F(S_1)$ and $F(\theta_0)$). Furthermore, we can set $L_1 = L_0 \in (0,L^0_0]$. If $J = 1$ we are done, otherwise suppose that for $j \in \{1,...,J-1\}$ that $\theta_j = S_j$, $L_j = L_0$, and $2^j$ objective functions have been taken to accept $\theta_j$. We generalize to the case of $j+1$ by showing that $$\theta_j - \frac{1}{\delta_1^{2^j-1} L_0}\dot F(\theta_j) = S_j + \delta^{2^j-1} \delta^{j+2-2^{j+1}} = S_{j+1}$$ is the accepted iterate. We do this by showing that all iterations $\ell \in \{0,...,2^j-2\}$ result in a rejection, and only $L_j$ is inflated. To start, from the inductive hypothesis we have $\ddot F(\theta_j) \geq 0$, therefore $s_j' = 0$. By our choice of $s_j$, for any $L \in \mathbb{R}$, $m_j(L) = 1/L$. This implies that for all $\ell \in \{0,...,2^j-2\}$, $U_j(m_j(\delta^\ell_1 L_0); \delta^\ell_1 L_0) = - (1/2\delta^\ell_1 L_0) \dot F(\theta_j)^2$; thus, for any $\sigma > 0$, $U_j(m_j(\delta^\ell_1 L_0); \delta^\ell_1 L_0) < 0 = U_j'(m_j'(\sigma); \sigma)$. We now show by induction that for all $\ell \in \{0,..., 2^j-2\}$, the trial iterate on iteration $\ell$ is indeed $\theta_j + m_k(\delta^\ell_1 L_0)s_k$, and it is rejected. For the base case, $\ell = 0$ and $L_0^j = L_0$, which by $U_j(m_j(L_0); L_0) < U_j'(m_j'(\sigma_j); \sigma_j)$, means the algorithm checks the first order upper bound model. 
Note that the proposed iterate $\theta_j - (1/L_0^j) \dot F(\theta_j) = \theta_j - (1/L_0) \dot F(\theta_j)$ satisfies $$S_{j+1} + \frac{1-\delta}{2} \leq S_j + \delta^{2^j-2}\delta^{j+2-2^{j+1}} \leq \theta_j - (1/L_0)\dot F(\theta_j) \leq S_j + \delta^{j+2-2^{j+1}} = S_{j+2}-\delta^{j+1-2^j} \leq S_{j+2}-\frac{1-\delta}{2}.$$ This implies that $F(\theta_j - (1/L_0)\dot F(\theta_j)) = 0$, yet by the inductive hypothesis $F(\theta_j) + U_j(m_j(L_0); L_0) < 0$. Therefore, the proposed iterate is rejected, and $L_1^j = \delta_1 L_0^j = \delta_1 L_0$. The inductive step is handled in exactly the same way. Finally, we show that for $\ell = 2^j-1$ we accept the proposed iterate. By the above inductive proof, at iteration $\ell = 2^j-1$, $L_{2^j-1}^j = \delta_1^{2^j-1}L_0$. Recall, $\theta_j - (1/\delta_1^{2^j-1} L_0)\dot F(\theta_j) = S_{j+1}$, and since $\delta^{2j+4-2^{j+2}} \geq \delta^{2^j-1}\delta^{2j+4-2^{j+2}}$ for all $j \in \mathbb{N}$, by adding the extra terms from $F(S_{j+1})$ and multiplying by $-L_0/2$ we obtain $$\begin{aligned} F(S_{j+1}) = -\frac{L_0}{2} \sum_{k = 0}^{j} \delta^{2k + 4 - 2^{k+2}} &\leq -\frac{L_0}{2} \sum_{k = 0}^{j-1} \delta^{2k + 4 - 2^{k+2}} - \frac{L_0}{2}\delta^{2^j-1}\delta^{2j+4-2^{j+2}}\\ &= F(S_j) - \frac{\delta^{2^j-1}}{2L_0} \dot F(S_j)^2 = F(S_j) + U_j(m_j(L_{2^j-1}^j); L_{2^j-1}^j). \end{aligned}$$ Therefore, $\theta_{j+1} = S_{j+1}$ is the accepted iterate. Additionally, since $\delta_1 > 1$, we can set $L_{j+1} = L_0 \in (0, \delta_1^{2^j-1}L_0]$. Finally, the algorithm took $2^j$ evaluations to accept this point (excluding $F(\theta_j)$ as this was calculated at the previous point). By the inductive hypothesis, $2^j$ have already been taken, implying a total of $2^{j+1}$ objective function evaluations upon acceptance of $\theta_{j+1}$. ◻

## Adaptive Cubic Regularization {#subsec:adaptive-cubic-reg}

[\[alg:acr\]](#alg:acr){reference-type="ref" reference="alg:acr"} shows the Adaptive Cubic Regularization method from [@cartis2011cubic1].
**Inputs:** $\theta_0 \in \mathbb{R}^n$; $\delta_1, \delta_2 \in \mathbb{R}$ such that $\delta_2 \geq \delta_1 > 1$; $\eta_1, \eta_2 \in \mathbb{R}$ such that $1 > \eta_2 \geq \eta_1 > 0$; $\sigma_0 > 0$; $B_0 \in \mathbb{R}^{n \times n}$ and $\mathrm{UpdateHessianApproximation()}$. At each iteration $k$:

1. Set $\ell \leftarrow 0$ and $\sigma_\ell^k \leftarrow \sigma_k$.
2. Form the cubic model $U_k(\gamma; \sigma_\ell^k) \leftarrow F(\theta_k) + \gamma^\intercal \dot F(\theta_k) + \frac{1}{2} \gamma^\intercal B_k \gamma + \frac{1}{3}\sigma_\ell^k \Vert \gamma \Vert_2^3$, set $m'_k \leftarrow \arg\min_{m \in \mathbb{R}_{+}} U_k(-m \dot F(\theta_k); \sigma_\ell^k)$ and $\gamma'_k \leftarrow -m_k' \dot F(\theta_k)$, and compute the trial step $\gamma_\ell^k$.
3. Compute the ratio $\rho_\ell^k \leftarrow \frac{F(\theta_k) - F(\theta_k + \gamma_\ell^k)}{F(\theta_k) - U_k(\gamma_\ell^k; \sigma_\ell^k)}$.
4. If $\rho_\ell^k \geq \eta_1$, set $\gamma_k \leftarrow \gamma_\ell^k$ and exit the loop; otherwise, inflate the penalty parameter by choosing $\sigma_{\ell+1}^{k} \in [\delta_1\sigma_\ell^k, \delta_2 \sigma_\ell^k]$, set $\ell \leftarrow \ell + 1$, and return to step 2.
5. Set $\theta_{k+1} = \theta_{k} + \gamma_k$; choose $\sigma_{k+1} \in [0,\sigma_\ell^k]$ if $\rho_\ell^k \geq \eta_2$ and $\sigma_{k+1} \in [\sigma_\ell^k, \delta_1 \sigma_\ell^k]$ otherwise; set $B_{k+1} \leftarrow \mathrm{UpdateHessianApproximation()}$.

We now show that there is an exponential increase in the number of objective function evaluations between accepted iterates up until the $J$th acceptance. Let $\delta_1, \delta_2 \in \mathbb{R}$ be such that $\delta_2 \geq \delta_1 > 1$, let $\eta_1, \eta_2 \in \mathbb{R}$ be such that $1 > \eta_2 \geq \eta_1 > 0$, and let $\sigma_0 > 0$ and $J \in \mathbb{N}$. 
Define $\delta = \frac{1}{\delta_2^{1/2}}$ and $S_0 = 0$; for $j = 1,\ldots, J$, let $S_j = S_{j-1} + \delta^{j - 2^{j-1}}$; and, for $j = 0,\ldots,J$ $$( f_j, f_j', f_j'' ) = \begin{cases} \left(0, -\sigma_0, 0 \right) & j = 0\\ \left( -\frac{2(\eta_2+1)\sigma_0}{3} \sum_{k=0}^{j-1} \left( \delta^{k+2 - 2^{k+1}} \right)^{3}, -\sigma_0\left( \delta^{j+2 - 2^{j+1}} \right)^2, 0 \right) & j = 1,\ldots,J \end{cases}.$$ Let $\Delta = \frac{1-\delta}{2}$, and $F_J: \mathbb{R} \to \mathbb{R}$ be defined as in [\[eqn-ee-objective\]](#eqn-ee-objective){reference-type="ref" reference="eqn-ee-objective"}, and define $B_j = \ddot F(\theta_j)$ (i.e., use the true Hessian). Let $\theta_0 = 0$ and let $\lbrace \theta_1,\ldots,\theta_J \rbrace$ be generated by [\[alg:acr\]](#alg:acr){reference-type="ref" reference="alg:acr"}. Then, the total number of objective function evaluations taken by [\[alg:acr\]](#alg:acr){reference-type="ref" reference="alg:acr"} upon acceptance of iterate $\theta_j$ is $2^{j}$ for $j=1,\ldots,J$. *Proof.* First, the function $F_J$ has all the properties in [\[prop-ee-properties\]](#prop-ee-properties){reference-type="ref" reference="prop-ee-properties"}, as our components satisfy all assumptions. Now, before proceeding to the main proof, we provide a global minimizer of $U_j(\gamma; \sigma)$ for any $\sigma > 0$, in the special case that $\theta_j$ satisfies $B_j = \ddot F(\theta_j) = 0$ and $\dot F(\theta_j) \not= 0$. 
Checking first and second order sufficient conditions, and writing the scalar step as $\gamma = s \in \mathbb{R}$, we can rewrite $U_j$ and derive $\dot U_j, \ddot U_j$ as $$\begin{aligned} U_j(\gamma; \sigma) = F(\theta_j) + \gamma^\intercal \dot F(\theta_j) +& \begin{cases} \frac{\sigma}{3}s^3 & s \geq 0 \\ -\frac{\sigma}{3}s^3 & s < 0 \end{cases}, \text{\quad} \dot U_j(\gamma; \sigma) = \dot F(\theta_j) + \begin{cases} \sigma s^2 & s \geq 0 \\ -\sigma s^2 & s < 0 \end{cases}\\ &\ddot U_j(\gamma; \sigma) = \begin{cases} 2\sigma s & s \geq 0\\ -2\sigma s & s < 0 \end{cases} \end{aligned}$$ One can verify that the two possible minimizers are $\sqrt{-\dot F(\theta_j)/\sigma}$ and $-\sqrt{\dot F(\theta_j)/\sigma}$, only one of which is real, depending on the sign of the gradient. We now proceed to prove the statement in the proposition by induction. By construction, $\theta_0 = S_0$, so that $B_0 = \ddot F(S_0) = 0$ and $\dot F(S_0) = -\sigma_0 < 0$. To compute $\gamma^0_0$, we globally minimize $U_0(\gamma; \sigma^0_0)$, which by the above yields $\gamma^0_0 = \sqrt{-\dot F(\theta_0)/\sigma^0_0} = 1$. Therefore, since $\sigma^0_0 = \sigma_0$, $$F(\theta_0) - F(\theta_0 + \gamma_0) = -F(S_1) = \frac{2(\eta_2+1)\sigma_0}{3} \text{,\quad} F(\theta_0) - U_0(\gamma^0_0; \sigma_0) = \frac{2\sigma_0}{3}.$$ This implies that $\rho^0_0 = (\eta_2+1) > \eta_2$, and we accept $\theta_1 = S_1$ using two objective function evaluations ($F(\theta_0)$ and $F(S_1)$). Additionally, since $\rho^0_0 > \eta_2$, we can set $\sigma_1 = \sigma^0_0 = \sigma_0$. If $J = 1$ we are done; otherwise, suppose that, for $j \in \{1,\ldots,J-1\}$, $\theta_j = S_j$, $\sigma_j = \sigma_0$, and the algorithm has taken a total of $2^j$ objective function evaluations. We generalize to the case of $j+1$ by showing that $$\theta_j + \delta^{2^j-1} \frac{1}{\sqrt{\sigma_0}} \sqrt{-\dot F(\theta_j)} = \theta_j + \delta^{2^j-1} \delta^{j+2-2^{j+1}} = S_j + \delta^{j+1-2^j} = S_{j+1}$$ is the accepted iterate. 
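As a quick numerical sanity check of the minimizer formula just derived (an illustrative aside, not part of the argument; the values of $g$ and $\sigma$ below are arbitrary choices): with $B_j = 0$ and gradient $g < 0$, a grid search over $U(\gamma) = \gamma g + (\sigma/3)|\gamma|^3$ should locate the minimum near $\sqrt{-g/\sigma}$.

```python
import math

# Illustrative check (not from the paper): with zero curvature the model
# U(gamma; sigma) = gamma*g + (sigma/3)*|gamma|^3 has its global minimum
# at gamma* = sqrt(-g/sigma) when the gradient g is negative.
g, sigma = -4.0, 2.0
gamma_star = math.sqrt(-g / sigma)

def U(gamma):
    return gamma * g + (sigma / 3.0) * abs(gamma) ** 3

# Fine grid search over [-5, 5] as an independent confirmation.
grid = [i / 1000.0 for i in range(-5000, 5001)]
gamma_grid = min(grid, key=U)
```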
To do this, we prove that for all $\ell \in \{0,1,...,2^j-2\}$, the trial step $\gamma_\ell^j$ results in a ratio $\rho_{\ell}^j < \eta_1$. First, for any $\ell \in \{0,1,...,2^j-2\}$, since $\dot F(\theta_j) = \dot F(S_j) < 0$ and $\ddot F(\theta_j) = \ddot F(S_j) = 0$ the solution of $\arg\min U_j(\gamma; \delta_2^\ell \sigma_0)$ is $\gamma_\ell^j = \sqrt{\frac{-\dot F(\theta_j)}{\delta_2^\ell \sigma_0}} = \delta^\ell \sqrt{\frac{-\dot F(\theta_j)}{\sigma_0}}$ using the identity $\delta = \frac{1}{\sqrt{\delta_2}}$. We now proceed by induction. For the base case, suppose $\ell = 0$, then our penalty parameter is $\sigma_0^j = \sigma_0$ by the inductive hypothesis. Therefore, the trial step is $\gamma_0^j = \sqrt{\frac{-\dot F(\theta_j)}{\sigma_0}}$ and using the fact that $\delta^{2^j-2} \leq 1$, we obtain that $\theta_j + \gamma_0^j$ satisfies $$S_{j+1} + \frac{1-\delta}{2} \leq S_j + \delta^{2^j-2}\delta^{j+2-2^{j+1}} \leq \theta_j + \sqrt{\frac{-\dot F(\theta_j)}{\sigma_0}} \leq S_j + \delta^{j+2-2^{j+1}} \leq S_{j+2} - \frac{1-\delta}{2}.$$ This implies that $F(\theta_j + \gamma_0^j) = 0$. Since $F(S_j) < 0$ this implies $\rho_0^j < 0$, so the algorithm rejects the iterate and inflates the penalty parameter. Since $\sigma_1^j$ can be any value within the interval $[\delta_1\sigma_0^j, \delta_2\sigma_0^j]$, let $\sigma_{1}^j = \delta_2\sigma_0^j = \delta_2\sigma_0$. The inductive step is handled in exactly the same way, therefore all trial iterates $\theta_j + \gamma_\ell^j$ are rejected for $\ell \in \{0,1,...,2^j-2\}$. 
Finally, the iterate $S_{j+1}$ is accepted, because after $\ell = 2^j-2$ the penalty parameter is increased to $\delta_2^{2^j-1} \sigma_0$, so by letting $\gamma_{2^j-1}^j = \delta^{2^j-1} \sqrt{\frac{-\dot F(\theta_j)}{\sigma_0}}$ we obtain $$\frac{F(S_j) - F(S_{j+1})}{F(S_j) - U_j\left( \gamma_{2^j-1}^j; \delta_2^{2^j-1}\sigma_0 \right)} = \frac{\frac{2(\eta_2+1)\sigma_0}{3} \left( \delta^{j+2 - 2^{j+1}} \right)^{3} }{ \delta^{2^j-1} \frac{2}{3\sqrt{\sigma_0}} \sigma_0^{3/2} \left( \delta^{j+2 - 2^{j+1}} \right)^{3} } = \frac{\eta_2+1}{\delta^{2^j-1}} > \eta_2.$$ Since $\rho_{2^j-1}^j > \eta_2$, we can reset $\sigma_{j+1} = \sigma_0$. To finish the original inductive proof, we have just shown that, to accept $\theta_{j+1}$, [\[alg:acr\]](#alg:acr){reference-type="ref" reference="alg:acr"} requires $2^j$ objective function evaluations (excluding $F(\theta_j)$, as it was calculated at the previous iteration). Therefore, in total, the algorithm has taken $2^{j+1}$ function evaluations upon acceptance of $\theta_{j+1}$, as by the inductive hypothesis the algorithm had taken $2^j$ to accept $\theta_j$. ◻ [^1]: Such objective functions are referred to as Lipschitz smooth in the literature. Here we will refer to such functions as globally Lipschitz smooth to distinguish them from the relevant locally Lipschitz continuous case. [^2]: For now, we ignore the other sequences, and will reference the figure to explain these quantities when they are introduced. [^3]: [\[footnote-alg-exception\]]{#footnote-alg-exception label="footnote-alg-exception"}Except for SCURLY10, SCURLY20, SCURLY30, and TESTQUAD, as the first three gave initialization errors and the last was excluded due to computational time constraints. [^4]: Necessary for CRN, ACR. [^5]: Necessary for Line search, CRN, DMC, Ours. [^6]: The criterion for an approximate minimizer is successful termination with the terminal iterate between $4.9$ and $5$. 
For an approximate maximizer, the criterion is successful termination with the terminal iterate between $-0.21$ and $-0.2$. [^7]: *The norms may be arbitrary, as all norms are equivalent in finite dimensions.*
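To make the mechanics of the Adaptive Cubic Regularization loop analyzed above concrete, here is a minimal one-dimensional sketch. This is an illustrative reimplementation, not the code used in the experiments; the parameter defaults and the closed-form step along the steepest-descent ray are our own choices for the scalar case.

```python
import math

def acr_1d(f, df, ddf, theta0, sigma0=1.0, delta1=2.0,
           eta1=0.1, eta2=0.9, max_iter=50, tol=1e-10):
    """Minimal 1-D sketch of an Adaptive-Cubic-Regularization-style loop.

    Trial steps minimize the cubic model
        U(gamma; sigma) = f(t) + gamma*f'(t) + 0.5*B*gamma**2
                          + (sigma/3)*|gamma|**3,  with B = f''(t),
    and the ratio rho of actual to predicted decrease drives accept/inflate.
    """
    theta, sigma = theta0, sigma0
    for _ in range(max_iter):
        g, B = df(theta), ddf(theta)
        if abs(g) < tol:
            break
        for _ in range(64):  # inner accept/inflate loop (bounded for safety)
            # Step length t >= 0 solves |g| = B*t + sigma*t**2, the model's
            # first-order condition along the steepest-descent ray.
            t = (-B + math.sqrt(B * B + 4.0 * sigma * abs(g))) / (2.0 * sigma)
            gamma = -math.copysign(t, g)
            U = f(theta) + gamma * g + 0.5 * B * gamma ** 2 \
                + sigma * abs(gamma) ** 3 / 3.0
            rho = (f(theta) - f(theta + gamma)) / (f(theta) - U)
            if rho >= eta1:              # successful: accept the step
                theta += gamma
                if rho >= eta2:          # very successful: deflate sigma
                    sigma = max(sigma / delta1, 1e-8)
                break
            sigma *= delta1              # unsuccessful: inflate sigma
    return theta
```

On a simple strongly convex quadratic every trial step is accepted and $\sigma$ deflates, so the step approaches the Newton step and the iterates converge rapidly.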
--- abstract: | We show how the bialgebra cohomologies of two Hopf algebras involved in an exact sequence are related, when the third factor is finite-dimensional cosemisimple. As an application, we provide a short proof of the computation of the bialgebra cohomology of the universal cosovereign Hopf algebras in the generic (cosemisimple) case, done recently by Baraquin, Franz, Gerhold, Kula and Tobolski. address: Université Clermont Auvergne, CNRS, LMBP, F-63000 CLERMONT-FERRAND, FRANCE author: - Julien Bichon title: Bialgebra cohomology and exact sequences --- # Introduction Gerstenhaber-Schack cohomology, which includes bialgebra cohomology as a special instance, is a cohomology theory adapted to Hopf algebras. It was introduced in [@gs1; @gs2] by means of an explicit bicomplex modeled on the Hochschild complex of the underlying algebra and the Cartier complex of the underlying coalgebra, with deformation theory as a motivation. See [@shst] for an exposition, with the original coefficients being Hopf bimodules, but in view of the equivalence between Hopf bimodules and Yetter-Drinfeld modules [@sc94], one can work in the simpler framework of Yetter-Drinfeld modules. Gerstenhaber-Schack cohomology has been useful in proving some fundamental results in Hopf algebra theory [@ste2; @eg], but few concrete computations were known (see [@pw; @shst]) until it was shown by Taillefer [@tai04] that Gerstenhaber-Schack cohomology can be identified with the ${\rm Ext}$ functor on the category of Yetter-Drinfeld modules: if $A$ is a Hopf algebra, $V$ is a Yetter-Drinfeld module over $A$ and $k$ is the trivial Yetter-Drinfeld module, one has $$H_{\rm GS}^*(A,V)\simeq \mathrm{Ext}^*_{\mathcal{YD}_A^A}(k, V).$$ The bialgebra cohomology of $A$ is then defined by $H_b^*(A)=H_{\rm GS}^*(A,k)$. We will use this ${\rm Ext}$ description, which opens the way to use classical tools of homological algebra, as a definition. 
Note that the category $\mathcal{YD}_A^A$ has enough injective objects [@camizh97; @tai04], so the above ${\rm Ext}$ spaces can be studied using injective resolutions of $V$, and when $\mathcal{YD}_A^A$ has enough projective objects (for example if $A$ is cosemisimple, or more generally if $A$ is co-Frobenius), they can also be computed by using projective resolutions of the trivial module. This note is a contribution to the study of Gerstenhaber-Schack cohomology: we show how the bialgebra (and Gerstenhaber-Schack) cohomologies of two Hopf algebras involved in an exact sequence of Hopf algebras are related when the third factor is a finite-dimensional cosemisimple Hopf algebra, see Theorem [Theorem 6](#thm:hgssubL){reference-type="ref" reference="thm:hgssubL"}. When the third factor is the semisimple group algebra of a finite abelian group, the result even takes a nicer form, see Corollary [Corollary 7](#cor:hgssub){reference-type="ref" reference="cor:hgssub"}. We apply our result to provide a computation of the bialgebra cohomology of the universal cosovereign Hopf algebras [@bi18] in the generic (cosemisimple) case, a class of Hopf algebras that we believe to be of particular interest in view of their universal property, see [@bi07]. Such a computation has just been done by Baraquin, Franz, Gerhold, Kula and Tobolski [@bfgkt], but the present proof is shorter. # Preliminaries We work over an algebraically closed field $k$, and use standard notation from Hopf algebra theory, for which a standard reference is [@mon]. ## Exact sequences of Hopf algebras Recall that a sequence of Hopf algebra maps $$k \to B \overset{i}\to A \overset{p}\to L \to k$$ is said to be exact [@ad] if the following conditions hold: 1. $i$ is injective and $p$ is surjective, 2. ${\rm Ker}(p) =Ai(B)^+ =i(B)^+A$, where $i(B)^+=i(B)\cap{\rm Ker}(\varepsilon)$, 3. 
$i(B) = A^{\operatorname{co}L} = \{ a \in A:\, (\mathop{\mathrm{id}}\otimes p)\Delta(a) = a \otimes 1 \} = {^{\operatorname{co}L}A} = \{ a \in A:\, (p \otimes \mathop{\mathrm{id}})\Delta(a) = 1 \otimes a \}$. Note that condition (2) implies $pi= \varepsilon 1$. In an exact sequence as above, we can assume, without loss of generality, that $B$ is a Hopf subalgebra and $i$ is the inclusion map. A Hopf algebra exact sequence $k \to B \overset{i}\to A \overset{p}\to L \to k$ is said to be cocentral if the Hopf algebra map $p$ is cocentral, that is, for any $a\in A$, we have $p(a_{(1)})\otimes a_{(2)}= p(a_{(2)})\otimes a_{(1)}$. ## Yetter-Drinfeld modules Recall that a (right-right) Yetter-Drinfeld module over a Hopf algebra $A$ is a right $A$-comodule and right $A$-module $V$ satisfying the condition, for all $v \in V$ and $a \in A$, $$(v \cdot a)_{(0)} \otimes (v \cdot a)_{(1)} = v_{(0)} \cdot a_{(2)} \otimes S(a_{(1)}) v_{(1)} a_{(3)}.$$ The category of Yetter-Drinfeld modules over $A$ is denoted $\mathcal{YD}_A^A$: the morphisms are the $A$-linear and $A$-colinear maps. The category $\mathcal{YD}_A^{A}$ is obviously abelian, and, endowed with the usual tensor product of modules and comodules, is a tensor category, with unit the trivial Yetter-Drinfeld module, denoted $k$. *Example 1*. Let $B \subset A$ be a Hopf subalgebra, and consider the quotient coalgebra $L = A /B^+A$. Endow $L$ with the right $A$-module structure induced by the quotient map $p : A\to L$, i.e. $p(a)\cdot b = p(ab)$, and with the coadjoint $A$-comodule structure given by $p(a)\mapsto p(a_{(2)})\otimes S(a_{(1)})a_{(3)}$. Then $L$, endowed with these two structures, is a Yetter-Drinfeld module over $A$. In particular, if $k \to B \overset{i}\to A \overset{p}\to L \to k$ is an exact sequence of Hopf algebras, then $L$ inherits a Yetter-Drinfeld module structure over $A$. *Example 2*. Let $\psi : A\to k$ be an algebra map satisfying $\psi(a_{(1)}) a_{(2)}= \psi(a_{(2)}) a_{(1)}$ for any $a\in A$. 
Endow $k$ with the trivial $A$-comodule structure and with the $A$-module structure induced by $\psi$. Then $k$, endowed with these two structures, is a Yetter-Drinfeld module over $A$, that we denote $k_\psi$. Examples [Example 1](#ex:ydquot){reference-type="ref" reference="ex:ydquot"} and [Example 2](#ex:yd1d){reference-type="ref" reference="ex:yd1d"} are related by the following lemma. **Lemma 3**. *Let $p : A \to k\Gamma$ be a surjective cocentral Hopf algebra map, where $\Gamma$ is a group. For $\psi \in \widehat{\Gamma} ={\rm Hom}(\Gamma, k^*)$, we still denote by $\psi$ the composition of the unique extension of $\psi$ to $k\Gamma$ with $p$. If $\Gamma$ is finite abelian and $|\Gamma|\not = 0$ in $k$, the Fourier transform is an isomorphism $$k\Gamma \simeq \bigoplus_{\psi \in \widehat{\Gamma}}k_\psi$$ in the category $\mathcal{YD}_A^A$, where $k\Gamma$ has the coadjoint Yetter-Drinfeld structure given in Example [Example 1](#ex:ydquot){reference-type="ref" reference="ex:ydquot"}, and the right-hand term has the Yetter-Drinfeld structure from Example [Example 2](#ex:yd1d){reference-type="ref" reference="ex:yd1d"}.* *Proof.* The Fourier transform is defined by $$\mathcal F : k\Gamma \longrightarrow \bigoplus_{\psi \in \widehat{\Gamma}}k_\psi, \quad \Gamma \ni g \longmapsto \sum_{\psi\in \widehat{\Gamma}} \psi(g)e_\psi$$ where $e_\psi$ denotes the basis element in $k_\psi$, and since $k$ is algebraically closed, the assumption $|\Gamma|\not = 0$ in $k$ ensures that $\mathcal F$ is a linear isomorphism. The cocentrality assumption on $p$ ensures that the $A$-comodule structure on $k\Gamma$ from Example [Example 1](#ex:ydquot){reference-type="ref" reference="ex:ydquot"} is trivial, so $\mathcal F$ is a comodule map as well. 
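(A purely numerical aside, not part of the proof: over $k=\mathbb C$ the invertibility of $\mathcal F$ is the classical orthogonality of characters. For $\Gamma = \mathbb Z_n$ the characters are $\psi_j(g)=\omega^{jg}$ with $\omega = e^{2\pi i/n}$, so the matrix $(\psi_j(g))_{j,g}$ is the discrete Fourier transform matrix, and orthogonality is easy to check; the value $n=6$ below is an arbitrary choice.)

```python
import cmath

# Numeric illustration over C: for Gamma = Z_n the character matrix is the
# DFT matrix, and column orthogonality,
#   sum_g psi_j(g) * conj(psi_l(g)) = n * delta_{jl},
# is exactly what makes the Fourier transform invertible when n != 0 in k.
n = 6
omega = cmath.exp(2j * cmath.pi / n)

def inner(j, l):
    return sum(omega ** (j * g) * (omega ** (l * g)).conjugate()
               for g in range(n))

orthogonal = all(
    abs(inner(j, l) - (n if j == l else 0)) < 1e-9
    for j in range(n) for l in range(n)
)
```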
To prove the $A$-linearity of $\mathcal F$, recall first that $p : A \to k\Gamma$ induces an algebra grading $$A = \bigoplus_{g\in \Gamma} A_g$$ where $A_g = \{ a \in A \ | \ a_{(1)}\otimes p(a_{(2)}) = a\otimes g\}$, with $p(a) =\varepsilon(a) g$ for $a \in A_g$. For $g \in \Gamma$, pick $a\in A_g$ such that $p(a)=g$. For $h\in \Gamma$ and $a'\in A_h$, we have $aa'\in A_{gh}$ and hence $$\begin{aligned} \mathcal F(g\cdot a') & = \mathcal F(p(aa')) =\mathcal F(\varepsilon(aa')gh)=\varepsilon(a') \sum_{\psi\in \widehat{\Gamma}} \psi(gh)e_\psi =\varepsilon(a') \sum_{\psi\in \widehat{\Gamma}} \psi(g)\psi(h)e_\psi \\ & = \sum_{\psi\in \widehat{\Gamma}} \psi(g)\varepsilon(a')\psi(h)e_\psi = \sum_{\psi\in \widehat{\Gamma}} \psi(g)\psi(p(a'))e_\psi = \sum_{\psi\in \widehat{\Gamma}} \psi(g)e_\psi\cdot a' = \mathcal F(g)\cdot a'\end{aligned}$$ and this concludes the proof. ◻ # Main results The main tool to prove our main results will be induction and restriction of Yetter-Drinfeld modules, which we first recall. Let $B \subset A$ be a Hopf subalgebra. Recall [@camizh; @bi18] that we have a pair of adjoint functors $$\begin{aligned} \mathcal{YD}^A_A \longrightarrow \mathcal{YD}^B_B &\quad \quad \mathcal{YD}_B^B \longrightarrow \mathcal{YD}_A^A \\ X \longmapsto X^{(B)}& \quad \quad V \longmapsto V\otimes_B A\end{aligned}$$ constructed as follows: 1. For an object $X$ in $\mathcal{YD}_A^A$, $X^{(B)}=\{x \in X \ | \ x_{(0)} \otimes x_{(1)} \in X \otimes B\}$ is equipped with the obvious $B$-comodule structure, and is a $B$-submodule of $X$. We have $X^{(B)}\simeq X\square_A B$, where the right term is the cotensor product, and we say that $B\subset A$ is (right) coflat when the above functor is exact. 2. 
For an object $V \in \mathcal{YD}_B^B$, the induced $A$-module $V\otimes_B A$ has the $A$-comodule structure given by the map $$v \otimes_B a \mapsto v_{(0)} \otimes_B a_{(2)} \otimes S(a_{(1)}) v_{(1)} a_{(3)}$$ We then have the following result [@bi18 Proposition 3.3], which follows from the general machinery of pairs of adjoint functors. **Proposition 4**. *Let $B \subset A$ be a Hopf subalgebra. If $B \subset A$ is coflat and $A$ is flat as a left $B$-module, we have, for any object $X$ in $\mathcal{YD}_A^A$ and any object $V$ in $\mathcal{YD}_B^B$, natural isomorphisms $${\rm Ext}_{\mathcal{YD}_A^A}^*(V \otimes_B A, X) \simeq {\rm Ext}_{\mathcal{YD}_B^B}^*(V, X^{(B)})$$* *Remark 5*. Let $B \subset A$ be a Hopf subalgebra, and consider the quotient coalgebra $L = A /B^+A$. Recall from Example [Example 1](#ex:ydquot){reference-type="ref" reference="ex:ydquot"} that $L$ has a natural Yetter-Drinfeld module structure over $A$. The induced Yetter-Drinfeld module $k\otimes_B A$ is isomorphic to $L$ in $\mathcal{YD}_A^A$. **Theorem 6**. *Let $k\to B \to A \to L \to k$ be an exact sequence of Hopf algebras, with $L$ finite-dimensional and cosemisimple. We have, for any $X \in \mathcal{YD}_A^A$, $$H^*_{\rm GS}(B, X^{(B)}) \simeq H_{\rm GS}^*(A, X\otimes L^*)$$ and hence in particular $$H^*_b(B) \simeq H_{\rm GS}^*(A, L^*)$$ where $L^*$ is the dual Yetter-Drinfeld module of $L$* *Proof.* Since $L=A/B^+A$ is cosemisimple, $B\subset A$ is coflat [@bi18 Proposition 3.4]. Moreover, still because $L$ is cosemisimple, the quotient map $A\to L$ is faithfully coflat, and hence $A$ is (faithfully) flat as a $B$-module by the left version of [@tak Theorem 2]. 
Hence we can use Proposition [Proposition 4](#prop:adjyd){reference-type="ref" reference="prop:adjyd"}, applied to $V=k$, to get $${\rm Ext}_{\mathcal{YD}_A^A}^*(k \otimes_B A, X) \simeq {\rm Ext}_{\mathcal{YD}_B^B}^*(k, X^{(B)})$$ and hence, by Remark [Remark 5](#rem:indtriv){reference-type="ref" reference="rem:indtriv"}, $${\rm Ext}_{\mathcal{YD}_A^A}^*(L, X) \simeq {\rm Ext}_{\mathcal{YD}_B^B}^*(k, X^{(B)})$$ Since $L$ is assumed to be finite-dimensional, the usual adjunction between the exact functors $-\otimes L$ and $-\otimes L^*$ provides the announced isomorphism. ◻ **Corollary 7**. *Let $k\to B \to A \to k\Gamma \to k$ be a cocentral exact sequence of Hopf algebras. If $\Gamma$ is a finite abelian group with $|\Gamma|\not = 0$ in $k$, then we have, for any $X \in \mathcal{YD}_A^A$, $$H^*_{\rm GS}(B, X^{(B)}) \simeq \bigoplus_{\psi \in \widehat{\Gamma}} H_{\rm GS}^*(A, X\otimes k_{\psi})$$ and hence in particular $$H^*_b(B) \simeq \bigoplus_{\psi \in \widehat{\Gamma}} H_{\rm GS}^*(A, k_{\psi})$$* *Proof.* We are in the situation of Theorem [Theorem 6](#thm:hgssubL){reference-type="ref" reference="thm:hgssubL"}, hence $$H^*_{\rm GS}(B, X^{(B)}) \simeq H_{\rm GS}^*(A, X\otimes L^*)$$ for $L=k\Gamma$. The assumption on $\Gamma$ ensures, by Lemma [Lemma 3](#lemm){reference-type="ref" reference="lemm"}, that $L\simeq \oplus_{\psi \in \widehat{\Gamma}}k_\psi$ as Yetter-Drinfeld modules over $A$, and hence in particular $L\simeq L^*$. The statement follows. ◻ *Remark 8*. Recall that the Gerstenhaber-Schack cohomological dimension of a Hopf algebra $A$ is defined by $${\rm cd}_{\rm GS}(A)= {\rm sup}\{n : \ H_{\rm GS}^n(A, V) \not=0 \ {\rm for} \ {\rm some} \ V \in \mathcal{YD}_A^A\}\in \mathbb N \cup \{\infty\}$$ Let $k\to B \to A \to k\Gamma \to k$ be a cocentral exact sequence with $\Gamma$ a finite abelian group such that $|\Gamma|\not = 0$ in $k$. 
Then it follows from Corollary [Corollary 7](#cor:hgssub){reference-type="ref" reference="cor:hgssub"} that $\mathrm{cd}_{\rm GS}(B)\geq \mathrm{cd}_{\rm GS}(A)$. If $A$ is cosemisimple, then $\mathrm{cd}_{\rm GS}(B)=\mathrm{cd}_{\rm GS}(A)$ by [@bi18 Theorem 4.9]. We expect that equality holds in general. # Application to the bialgebra cohomology of universal cosovereign Hopf algebras ## Universal cosovereign Hopf algebras Recall that for $n \geq 2$ and $F \in {\rm GL}_n(k)$, the universal cosovereign Hopf algebra $H(F)$ is the algebra presented by generators $(u_{ij})_{1 \leq i,j \leq n}$ and $(v_{ij})_{1 \leq i,j \leq n}$, and relations: $${u} {v^t} = { v^t} u = I_n ; \quad {vF} {u^t} F^{-1} = {F} {u^t} F^{-1}v = I_n,$$ where $u= (u_{ij})$, $v = (v_{ij})$ and $I_n$ is the identity $n \times n$ matrix. The algebra $H(F)$ has a Hopf algebra structure defined by $$\begin{gathered} \Delta(u_{ij}) = \sum_k u_{ik} \otimes u_{kj}, \quad \Delta(v_{ij}) = \sum_k v_{ik} \otimes v_{kj}, \\ \varepsilon (u_{ij}) = \varepsilon (v_{ij}) = \delta_{ij}, \quad S(u) = {v^t}, \quad S(v) = F { u^t} F^{-1}.\end{gathered}$$ We refer the reader to [@bi07; @bi18] for more information and background on the Hopf algebras $H(F)$. Recall from [@bi07] that a matrix $F \in \mathop{\mathrm{GL}}_n(k)$ is said to be $\bullet$ normalizable if ${\rm tr}(F) \not= 0$ and ${\rm tr} (F^{-1})\not=0$ or ${\rm tr}(F)=0={\rm tr} (F^{-1})$; $\bullet$ generic if it is normalizable and the solutions of the equation $q^2 -\sqrt{{\rm tr}(F){\rm tr}(F^{-1})}q +1 = 0$ are generic, i.e. are not roots of unity of order $\geq 3$ (this property does not depend on the choice of the above square root); $\bullet$ an asymmetry if there exists $E \in {\rm GL}_n(k)$ such that $F=E^tE^{-1}$. ## Hopf algebras of bilinear forms Let $E \in {\rm GL}_n(k)$. 
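Returning for a moment to the genericity condition defined above: over $k = \mathbb C$ it can be tested numerically. The following sketch is illustrative only; the function name and the root-of-unity cutoff `max_order` are our own choices, and exact arithmetic would be needed for a rigorous test.

```python
import cmath

def is_generic(trF, trFinv, max_order=100, tol=1e-9):
    """Numerically check whether the roots of q^2 - sqrt(trF*trFinv)*q + 1 = 0
    avoid roots of unity of order >= 3 (the genericity condition)."""
    s = cmath.sqrt(trF * trFinv)       # one choice of square root suffices
    d = cmath.sqrt(s * s - 4)
    for q in ((s + d) / 2, (s - d) / 2):
        if abs(q - 1) < tol or abs(q + 1) < tol:
            continue                   # orders 1 and 2 are allowed
        for m in range(3, max_order + 1):
            if abs(q ** m - 1) < tol:
                return False           # root of unity of order >= 3
    return True
```

For instance, $F = \mathrm{diag}(1,2)$ gives real roots off the unit circle and is generic, while traces with $\mathrm{tr}(F)\,\mathrm{tr}(F^{-1}) = 1$ give primitive sixth roots of unity and are not.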
The Hopf algebra $\mathcal B(E)$ defined by Dubois-Violette and Launer [@dvl] is presented by generators $a_{ij}$, $1 \leq i,j \leq n$, and relations $E^{-1}a^tEa=I_n=aE^{-1}a^tE$, where $a$ is the matrix $(a_{ij})$. The Hopf algebra structure is given by $$\Delta(a_{ij}) = \sum_k a_{ik} \otimes a_{kj}, \quad \varepsilon (a_{ij}) = \delta_{ij}, \quad S(a) = E^{-1} { a^t} E$$ For an appropriate matrix $E_q$, one has $\mathcal B(E_q)=\mathcal O_q({\rm SL}_2(k))$, the coordinate algebra on quantum ${\rm SL}_2$. The Hopf algebra $\mathcal B(E)$ is cosemisimple if and only if $F=E^tE^{-1}$ is generic in the sense of the previous subsection: this follows from [@bi03] and the classical result for $\mathcal O_q({\rm SL}_2(k))$. Denote by $\mathcal B_+(E)$ the subalgebra of $\mathcal B(E)$ generated by the products $a_{ij}a_{kl}$, $1\leq i,j,k,l\leq n$. This is a Hopf subalgebra of $\mathcal B(E)$, that fits into a cocentral exact sequence $$k\to \mathcal B_+(E) \to \mathcal B(E) \to k\mathbb Z_2 \to k$$ where the projection on the right is given by $p(a_{ij}) =\delta_{ij}g$, with $g$ being the generator of the cyclic group $\mathbb Z_2$. By Example [Example 1](#ex:ydquot){reference-type="ref" reference="ex:ydquot"}, $k\mathbb Z_2$ inherits a Yetter-Drinfeld module structure over $\mathcal B(E)$, whose module structure is induced by $p$, and comodule structure is trivial. The bialgebra cohomology of $\mathcal B(E)$ was computed in the cosemisimple case in [@bic Theorem 6.5] with $\mathbb C$ as a base field. We record and supplement the result here, taking care of the characteristic of the base field, together with another computation of Gerstenhaber-Schack cohomology, with coefficients in $k\mathbb Z_2$. **Theorem 9**. *Let $E \in {\rm GL}_n(k)$, $n\geq 2$, and assume that $E^tE^{-1}$ is generic.* 1. *We have $$H_{\rm GS}^p(\mathcal B (E),k\mathbb Z_2) \simeq \begin{cases} k \ \text{if} \ p=0,3 \\ \{0\} \ \text{otherwise} \end{cases}$$* 2. 
*If ${\rm char}(k)\not=2$, then $$H_b^p(\mathcal B (E))\simeq \begin{cases} k \ \text{if} \ p=0,3 \\ \{0\} \ \text{otherwise} \end{cases}$$* 3. *If ${\rm char}(k)=2$, then $$H_b^p(\mathcal B (E))\simeq \begin{cases} k \ \text{if} \ p=0,1,2,3 \\ \{0\} \ \text{otherwise} \end{cases}$$* *Proof.* The resolution given in [@bic Theorem 5.1] is valid over any field, and can be used to compute the above cohomologies, since the involved Yetter-Drinfeld modules are free, and hence projective by the cosemisimplicity assumption on $\mathcal B(E)$. The result is then obtained by direct computations, which depend on whether $k$ has, or not, characteristic $2$. ◻ As a first application of the results of Section 3, we recover in a shorter way the bialgebra cohomology computation of $\mathcal B_+(E)$ in the cosemisimple case [@bi16 Theorem 6.4], that we supplement in the characteristic $2$ case. **Corollary 10**. *Let $E \in {\rm GL}_n(k)$, $n\geq 2$, and assume that $E^tE^{-1}$ is generic. We have $$H_b^p(\mathcal B_+ (E))\simeq \begin{cases} k \ \text{if} \ p=0,3 \\ \{0\} \ \text{otherwise} \end{cases}$$* *Proof.* The Yetter-Drinfeld module $k\mathbb Z_2$ is self dual, hence the result is the combination of the first part of Theorem [Theorem 9](#thm:hbbe){reference-type="ref" reference="thm:hbbe"} and of Theorem [Theorem 6](#thm:hgssubL){reference-type="ref" reference="thm:hgssubL"}. ◻ ## Relation between $H(F)$ and $\mathcal B(E)$ The first relation between $H(F)$ and $\mathcal B(E)$ was observed by Banica in [@ba97], when $F=E^tE^{-1}\in {\rm GL}_n(\mathbb C)$ is a positive matrix, and a key result from [@ba97] in that case is the existence of a Hopf algebra embedding $$\label{banembedd} H(F)\hookrightarrow \mathcal B(E)*\mathbb C \mathbb Z$$ which, according to [@tw Proposition 6.20], can be refined to an embedding $$\label{twembedd} H(F)\hookrightarrow \mathcal B(E)*\mathbb C \mathbb Z_2$$ This is strengthened in [@bfgkt Theorem 4.11], where it is shown that the embedding is 
still valid for any generic asymmetry $F$. In fact, there is a simple proof of this result, valid over any field $k$ and any asymmetry $F=E^tE^{-1}$. **Proposition 11**. *Let $E \in {\rm GL}_n(k)$ and let $F = E^tE^{-1}$. There exists a $\mathbb Z_2$-action on $H(F)$ such that one gets a Hopf algebra isomorphism $$H(F)\rtimes k\mathbb{Z}_2 \simeq \mathcal{B}(E)*k\mathbb{Z}_2$$* *Proof.* The announced $\mathbb Z_2$-action, from [@bny Example 2.18], is provided by the order $2$ Hopf algebra automorphism of $H(F)$ given in matrix form as follows: $$\tau(u)= (E^t)^{-1} v E^t, \quad \tau(v) =E^t u (E^t)^{-1}$$ We therefore form the usual crossed product Hopf algebra $H(F)\rtimes k\mathbb{Z}_2$. Denoting by $g$ the generator of $\mathbb Z_2$, it is a straightforward verification to check the existence of a Hopf algebra map, written in matrix form $$\begin{aligned} H(F)\rtimes k\mathbb{Z}_2 &\longrightarrow \mathcal{B}(E)*k\mathbb{Z}_2 \\ u, \ v , \ g &\longmapsto ag, \ E^t ga (E^t)^{-1}, \ g \end{aligned}$$ Similarly, it is straightforward to construct an inverse isomorphism $$\begin{aligned} \mathcal{B}(E)*k\mathbb{Z}_2 &\longrightarrow H(F)\rtimes k\mathbb{Z}_2 \\ a, \ g &\longmapsto \ ug, \ g \end{aligned}$$ We leave the detailed verification to the reader. ◻ ## Bialgebra cohomology of $H(F)$ in the generic case **Theorem 12**. *Let $F \in {\rm GL}_n(k)$, $n\geq 2$, with $F$ generic. The bialgebra cohomology of $H(F)$ is $$H_b^p(H(F))\simeq \begin{cases} k \ \text{if} \ p=0,1,3 \\ \{0\} \ \text{otherwise} \end{cases}$$* *Proof.* First notice that one always has $H^0_b(A)=k$ for any Hopf algebra, while the computation of $H^1_b(H(F))$ is extremely easy (see the complex in [@bi16 Proposition 5.3]), so we concentrate on degree $p\geq 2$. First consider the asymmetry case: $F=E^tE^{-1}$. 
Consider the $\mathbb Z_2$-action of Proposition [Proposition 11](#prop:iso){reference-type="ref" reference="prop:iso"} and the Hopf algebra map $$\varepsilon \otimes {\rm id} : H(F)\rtimes k\mathbb Z_2\to k\mathbb Z_2$$ This is cocentral, and the associated Hopf subalgebra $B$ is clearly the image of the natural embedding $H(F)\hookrightarrow \mathrm{H}(F)\rtimes k \mathbb Z_2$. Theorem [Theorem 6](#thm:hgssubL){reference-type="ref" reference="thm:hgssubL"} gives an isomorphism $$H^*_b(H(F)) \simeq H_{\rm GS}^*(H(F)\rtimes k\mathbb Z_2, k\mathbb Z_2 )$$ Considering now the isomorphism of Proposition [Proposition 11](#prop:iso){reference-type="ref" reference="prop:iso"}, we obtain the isomorphism $$H^*_b(H(F)) \simeq H_{\rm GS}^*(\mathcal B(E)*k\mathbb Z_2 , k\mathbb Z_2)$$ Since $\mathcal B(E)$ is cosemisimple as well, [@bi18 Theorem 5.9] yields, for $p\geq 2$, $$H^p_b(H(F)) \simeq H_{\rm GS}^p(\mathcal B(E), k\mathbb Z_2)\oplus H_{\rm GS}^p(k\mathbb Z_2 , k\mathbb Z_2)$$ Since $k\mathbb Z_2$ is cosemisimple and cocommutative, we have $H_{\rm GS}^p(k \mathbb Z_2, k\mathbb Z_2)\simeq {\rm Ext}^p_{k\mathbb Z_2}(k, k\mathbb Z_2)$, and the latter ${\rm Ext}$-space is easily seen to vanish if $p\geq 1$. We conclude by the first part of Theorem [Theorem 9](#thm:hbbe){reference-type="ref" reference="thm:hbbe"}. For a general matrix $F$, by [@bi07 Theorem 1.1] there always exists an asymmetry $F(q) \in {\rm GL}_2(k)$ such that the tensor categories of comodules of $H(F)$ and of $H(F(q))$ are equivalent, hence the monoidal invariance of bialgebra cohomology (see e.g. [@bic14 Theorem 7.10]) gives the result. ◻ *Remark 13*. 
One can also compute the usual Hochschild cohomology for $H(F)$ in the asymmetry case, for particular choices of coefficients, by combining Proposition [Proposition 11](#prop:iso){reference-type="ref" reference="prop:iso"} and the usual adjunction relation for ${\mathop{\mathrm{Ext}}}$ (see e.g. [@hs IV.12]). The computation is done in greater generality in [@bfgkt Theorem B], and is valid for any normalizable $F$ over any field, since Proposition [Proposition 11](#prop:iso){reference-type="ref" reference="prop:iso"} is. Notice also that it follows from [@bfgkt] that $\mathrm{cd}(H(F))=3$ for any normalizable $F$, which was only known for $F$ an asymmetry [@bi18] or $F$ generic [@bi21]. Here $\mathrm{cd}$ is the cohomological dimension, i.e. the global dimension, which, for Hopf algebras, coincides as well with the Hochschild cohomological dimension. N. Andruskiewitsch, J. Devoto, Extensions of Hopf algebras, *St. Petersburg Math. J.* **7** (1996), no. 1, 17-52. T. Banica, Le groupe quantique compact libre ${\rm U}(n)$, *Comm. Math. Phys.* **190** (1997), no. 1, 143-172. I. Baraquin, U. Franz, M. Gerhold, A. Kula, M. Tobolski, A free resolution for the free unitary quantum group, arXiv:2309.07767. J. Bichon, The representation category of the quantum group of a non-degenerate bilinear form, *Comm. Algebra* **31** (2003), no. 10, 4831-4851. J. Bichon, Co-representation theory of universal co-sovereign Hopf algebras, *J. Lond. Math. Soc.* **75** (2007), no. 1, 83-98. J. Bichon, Hochschild homology of Hopf algebras and free Yetter-Drinfeld resolutions of the counit, *Compos. Math.* **149** (2013), no. 4, 658-678. J. Bichon, Hopf-Galois objects and cogroupoids, *Rev. Un. Mat. Argentina* **55** (2014), no. 2, 11-69. J. Bichon, Gerstenhaber-Schack and Hochschild cohomologies of Hopf algebras, *Doc. Math.* **21** (2016), 955-986. J. Bichon, Cohomological dimensions of universal cosovereign Hopf algebras, *Publ. Mat.* **62** (2018), no. 2, 301-330. J. 
Bichon, On the monoidal invariance of the cohomological dimension of Hopf algebras, *C. R. Math. Acad. Sci. Paris* **360** (2022), 561--582. J. Bichon, S. Neshveyev, M. Yamashita, Graded twisting of categories and quantum groups by group actions, *Ann. Inst. Fourier (Grenoble)* **66** (2016), no. 6, 2299-2338. S. Caenepeel, G. Militaru, S. Zhu, Crossed modules and Doi-Hopf modules, *Israel J. Math.* **100** (1997), 221-247. S. Caenepeel, G. Militaru, S. Zhu, Frobenius and separable functors for generalized module categories and nonlinear equations, Lecture Notes in Mathematics 1787. Springer-Verlag, 2002. M. Dubois-Violette, G. Launer, The quantum group of a non-degenerate bilinear form, *Phys. Lett. B* **245**, no.2 (1990), 175-177. P. Etingof, S. Gelaki, On finite-dimensional semisimple and cosemisimple Hopf algebras in positive characteristic, *Internat. Math. Res. Notices* **1998**, no. 16, 851-864. M. Gerstenhaber, S. Schack, Bialgebra cohomology, deformations and quantum groups, *Proc. Nat. Acad. Sci. USA* **87** (1990), no. 1, 78-81. M. Gerstenhaber, S. Schack, Algebras, bialgebras, quantum groups, and algebraic deformations, *Contemp. Math.* **134** (1992), 51-92. P. Hilton, U. Stammbach, A course in homological algebra, Graduate Texts in Mathematics 4. Springer, 1971. S. Montgomery, Hopf algebras and their actions on rings. CBMS Regional Conference Series in Mathematics, 82. American Mathematical Society, 1993. B. Parshall, J. Wang, On bialgebra cohomology, *Bull. Soc. Math. Belg. Sér. A* **42** (1990), no. 3, 607-642. P. Schauenburg, Hopf modules and Yetter-Drinfeld modules, *J. Algebra* **169** (1994), no. 3, 874-890. S. Shnider, S. Sternberg, Quantum groups. From coalgebras to Drinfeld algebras. A guided tour. Graduate Texts in Mathematical Physics, II. International Press, Cambridge, MA, 1993. D. Stefan, The set of types of n-dimensional semisimple and cosemisimple Hopf algebras is finite, *J. Algebra* **193** (1997), no. 2, 571-580. R. 
Taillefer, Injective Hopf bimodules, cohomologies of infinite dimensional Hopf algebras and graded-commutativity of the Yoneda product, *J. Algebra* **276** (2004), no. 1, 259-279. M. Takeuchi, Relative Hopf modules-equivalences and freeness criteria, *J. Algebra* **60** (1979), no. 2, 452-471. P. Tarrago, M. Weber, Unitary easy quantum groups: the free case and the group case. *Int. Math. Res. Not. IMRN* **2017**, no. 18, 5710-5750.
{ "id": "2309.10434", "title": "Bialgebra cohomology and exact sequences", "authors": "Julien Bichon (LMBP)", "categories": "math.KT math.QA", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- address: Department of Mathematics, University of California, Riverside CA, 92521, USA author: - John C. Baez title: The Icosidodecahedron --- ![image](icosidodecahedron.jpg){width="15em"} The icosidodecahedron can be built by truncating either a regular icosahedron or a regular dodecahedron. It has 30 vertices, one at the center of each edge of the icosahedron---or equivalently, one at the center of each edge of a dodecahedron. It is a beautiful, highly symmetrical shape. But the icosidodecahedron is just a shadow of a more symmetrical shape with twice as many vertices, which lives in a space with twice as many dimensions! Namely, it is a projection down to 3d space of a 6-dimensional polytope with 60 vertices. Even better, it is also a slice of a still more symmetrical 4d polytope with 120 vertices, which in turn is the projection down to 4d space of an even more symmetrical 8-dimensional polytope with 240 vertices. Note how the numbers keep doubling: 30, 60, 120 and 240. To understand all this, start with the group of rotational symmetries of the icosahedron. This is a 60-element subgroup of the rotation group ${\rm SO}(3)$. The group of unit quaternions is a double cover of ${\rm SO}(3)$, so this 60-element subgroup has a double cover, called the *binary icosahedral group*, consisting of 120 unit quaternions. With a suitable choice of coordinates, we can take these to be $$\pm 1 , \quad \frac{\pm 1 \pm i \pm j \pm k}{2}, \quad \frac{\pm i \pm \phi j \pm \Phi k}{2}$$ together with everything obtained from these by even permutations of $1, i, j,$ and $k$, where $$\phi = \frac{\sqrt{5} - 1}{2}, \quad \Phi = \frac{\sqrt{5} + 1}{2},$$ are the 'little' and 'big' golden ratios, respectively. These 120 unit quaternions are the vertices of a convex polytope in 4 dimensions. In fact this is a regular polytope, called the *600-cell*, since it has 600 regular tetrahedra as faces [@Coxeter].
Here is the 600-cell projected down to 3d space, as drawn using Robert Webb's Stella software [@Webb]: ![image](600-cell.png){width="18em"} If we slice the 600-cell halfway between two of its opposite vertices, we get an icosidodecahedron. This is easiest to see by intersecting the 600-cell with the space of *pure imaginary quaternions*: $$\{ ai + bj + ck : \; a,b,c \in \mathbb{R} \}$$ Of the 600-cell's vertices, those that lie in this 3-dimensional space are $$\pm i, \pm j, \pm k,$$ which are the vertices of an octahedron, and the points $$\displaystyle{ \frac{\pm i \pm \phi j \pm \Phi k}{2} , \quad \frac{\pm j \pm \phi k \pm \Phi i}{2} , \quad \frac{\pm k \pm \phi i \pm \Phi j}{2} }$$ which are the vertices of three 'golden boxes'. A *golden box* is the 3d analogue of a golden rectangle: its three sides are in the proportions $\phi, 1$ and $\Phi.$ It is well-known that the vertices of an octahedron and the golden boxes, taken together in this way, are the vertices of an icosidodecahedron: ![image](icosidodecahedron_golden_boxes_and_octahedron_narain.png){width="15em"} But we are not done with the binary icosahedral group---far from it! Since these 120 quaternions are closed under quaternion multiplication, their integer linear combinations form a subring of the quaternions, which Conway and Sloane [@CS] call the *icosians*. Since any icosian can be written as $a + bi + cj + dk$ where the numbers $a,b,c,d \in {\mathbb R}$ are of the form $x + y \sqrt{5}$ with $x,y$ rational, any icosian gives an 8-tuple of rational numbers. However, we do not get all 8-tuples of rationals this way, only those lying in a certain lattice in ${\mathbb R}^8$. There is a way to think of this lattice as a rescaled copy of the famous $\mathrm{E}_8$ lattice. To do this, Conway and Sloane put a new norm on the icosians as follows. The usual quaternionic norm is $$\|a + bi + cj + dk\|^2 = a^2 + b^2 + c^2 + d^2.$$ But for an icosian this norm is always of the form $x + \sqrt{5} y$ for some rationals $x$ and $y$.
Conway and Sloane define a new norm on the icosians by setting $$|a + bi + cj + dk|^2 = x + y.$$ With this new norm, Conway and Sloane show the icosians are isomorphic to a rescaled version of the lattice in ${\mathbb R}^8$ that gives the densest packing of spheres in 8 dimensions: the $\mathrm{E}_8$ lattice [@Viazovska]. The 240 shortest nonzero vectors in the $\mathrm{E}_8$ lattice are the vertices of an 8-dimensional convex polytope called the *$\mathrm{E}_8$ root polytope*: ![image](E8_roots.png){width="18em"} However, if we remember that each of these 240 vectors came from a quaternion, we can also think of them as 240 quaternions. These turn out to be the vertices of *two* 600-cells in the quaternions! In the usual quaternionic norm, one of these 600-cells is larger than the other by a factor of $\Phi$. In fact, there is an orthogonal projection from ${\mathbb R}^8$ down to ${\mathbb R}^4$ that maps the $\mathrm{E}_8$ root polytope to the 600-cell. So, in a very real sense, the 600-cell is the 'shadow' of a polytope with twice as many vertices, living in a space whose dimension is twice as large. And as a spinoff, this fact gives the same sort of relationship between the icosidodecahedron and a 6-dimensional polytope. The key is to look at the *pure imaginary icosians*: icosians of the form $a i + b j + c k$ for real $a,b,c$. Since $a,b$ and $c$ are each of the form $x + \sqrt{5}y$ with $x$ and $y$ rational, any pure imaginary icosian gives a 6-tuple of rational numbers. We do not get all 6-tuples of rationals this way, but only those lying in a certain lattice. We have $$\|ai + bj + ck\|^2 = a^2 + b^2 + c^2.$$ For a pure imaginary icosian this is always of the form $x + \sqrt{5} y$ for some rationals $x$ and $y$. So, we can define a new norm on the pure imaginary icosians by $$|ai + bj + ck|^2 = x + y.$$ With this new norm, the pure imaginary icosians are isomorphic to a rescaled version of a lattice in ${\mathbb R}^6$ called the $\mathrm{D}_6$ *lattice*.
The 60 shortest nonzero vectors in the $\mathrm{D}_6$ lattice are called the *roots* of this lattice, and they are the vertices of a 6-dimensional convex polytope called the $\mathrm{D}_6$ *root polytope*. There is an orthogonal projection from ${\mathbb R}^6$ to ${\mathbb R}^3$ that maps this polytope to an icosidodecahedron. In fact 30 vertices of the $\mathrm{D}_6$ root polytope map to the vertices of this icosidodecahedron, while the other 30 map to vertices of a second, smaller icosidodecahedron. Let us see some details. The usual coordinatization of the $\mathrm{D}_6$ lattice in Euclidean ${\mathbb R}^6$ is $$\mathrm{D}_6 = \left\{ (x_1, \dots, x_6) : \; x_i \in \mathbb{Z}, \; \sum_i x_i \in 2{\mathbb Z}\right\} \subset {\mathbb R}^6 .$$ Its roots are the vectors $$(\pm 1, \pm 1, 0, 0, 0, 0)$$ and all vectors obtained by permuting the six coordinates. We shall see that these vectors are sent to the vertices of an icosidodecahedron by the linear map $T \colon{\mathbb R}^6 \to {\mathbb R}^3$ given as a $3 \times 6$ matrix by $$\left( \begin{array}{cccccc} \Phi & \Phi & -1 & -1 & 0 & 0 \\ 0 & 0 & \Phi & -\Phi & -1 & 1 \\ -1 & 1 & 0 & 0 & \Phi & \Phi \end{array} \right)$$ The rows of this matrix are orthogonal, all with the same norm, so after rescaling it by a constant factor we obtain an orthogonal projection. The columns of this matrix are six vertices of an icosahedron, chosen so that we never have a vertex and its opposite. For any pair of columns, they are either neighboring vertices of the icosahedron, or a vertex and the opposite of a neighboring vertex. The map $T$ thus sends any $\mathrm{D}_6$ root to either the sum or the difference of two neighboring icosahedron vertices. In this way we obtain all possible sums and differences of neighboring vertices of the icosahedron. It is easy to see that the sums of neighboring vertices give the vertices of an icosidodecahedron, since by definition the icosidodecahedron has vertices at the midpoints of the edges of a regular icosahedron.
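The stated properties of $T$ can be spot-checked numerically. The following sketch (Python, not from the article; all names are illustrative) builds the 60 roots of $\mathrm{D}_6$, verifies that the rows of $T$ are orthogonal with equal norms and that its columns are icosahedron vertices of the form $(0, \pm 1, \pm\Phi)$ up to permutation, and confirms that the 60 images are distinct and fall into two norm classes of 30 points each, coming from the sums and the differences of neighboring vertices.

```python
from itertools import combinations, product

Phi = (5 ** 0.5 + 1) / 2

# the 3x6 matrix T from the text
T = [[Phi,  Phi, -1.0, -1.0,  0.0,  0.0],
     [0.0,  0.0,  Phi, -Phi, -1.0,  1.0],
     [-1.0, 1.0,  0.0,  0.0,  Phi,  Phi]]

# the 60 roots of D6: (+-1, +-1, 0, 0, 0, 0) and all permutations of the coordinates
roots = []
for i, j in combinations(range(6), 2):
    for si, sj in product((1, -1), repeat=2):
        r = [0] * 6
        r[i], r[j] = si, sj
        roots.append(r)

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# project every root down to 3d and record the norms that occur
images = [tuple(dot(row, r) for row in T) for r in roots]
norms = sorted(round(dot(w, w) ** 0.5, 9) for w in images)
```

The two norm values correspond to the two icosidodecahedra: edge-midpoint sums for the larger one and edge differences for the smaller one.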
It is less obvious that the differences of neighboring vertices of the icosahedron give the vertices of a second, smaller icosidodecahedron. But thanks to the symmetry of the situation, we can check this by considering just one example. In fact the vectors defining the vertices of the larger icosidodecahedron turn out to be precisely $\Phi$ times the vectors defining the vertices of the smaller one! The beauties we have just seen are part of an even larger pattern relating all the non-crystallographic Coxeter groups to crystallographic Coxeter groups. For more, see the work of Fring and Korff [@FringKorff], Boehm, Dechant and Twarock [@DBT], and the many papers they refer to. Fring and Korff apply these ideas to integrable systems in physics, while the latter authors explore connections to affine Dynkin diagrams. For more relations between the icosahedron and $\mathrm{E}_8$, see [@Baez2]. ### Acknowledgements {#acknowledgements .unnumbered} I thank Greg Egan for help with these ideas. The image of an icosidodecahedron was created by Cyp and was put on [Wikicommons](https://commons.wikimedia.org/wiki/File:Icosidodecahedron.jpg) with a Creative Commons Attribution-Share Alike 3.0 Unported license. The 600-cell was made using [Robert Webb's Stella software](http://www.software3d.com/Stella.php) and is available on [Wikicommons](https://commons.wikimedia.org/wiki/File:Schlegel_wireframe_600-cell_vertex-centered.png) with a Creative Commons Attribution-Share Alike 3.0 Unported license. The icosidodecahedron with three golden boxes and an octahedron inscribed in it was created by Rahul Narain on [Mathstodon](https://mathstodon.xyz/web/@narain/109342508956447823). The projection of the $\mathrm{E}_8$ roots to the plane was created by Claudio Rocchini and put on [Wikicommons](https://commons.wikimedia.org/wiki/File:E8_graph.svg) with a Creative Commons Attribution 3.0 Unported license. This article is an expanded version of an earlier article [@Baez1].
The only thing I left out here is a rotating image of two icosidodecahedra prepared by Greg Egan and the matrix describing a linear map $S \colon{\mathbb R}^8 \to {\mathbb R}^4$ that when suitably rescaled gives a projection mapping the $\mathrm{E}_8$ lattice in its usual coordinatization $$\{ x \in {\mathbb R}^8: \, \textrm{all } x_i \in {\mathbb Z}\textrm{ or all } x_i \in {\mathbb Z}+ \frac{1}{2} \textrm{ and } \sum_i x_i \in 2{\mathbb Z}\}$$ to the icosians, and thus mapping the 240 $\mathrm{E}_8$ roots to two 600-cells. For completeness, here is that matrix: $$\left( \begin{array}{cccccccc} \Phi+1 & \Phi -1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \Phi & \Phi & -1 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & \Phi & -\Phi & -1 & 1 \\ 0 & 0 & -1 & 1 & 0 & 0 & \Phi & \Phi \end{array} \right)$$

J. Baez, Icosidodecahedron from $\mathrm{D}_6$, *Visual Insight*, January 1, 2015. Available at <https://blogs.ams.org/visualinsight/2015/01/01/icosidodecahedron-from-projected-d6-root-polytope/>.
J. Baez, From the icosahedron to $\mathrm{E}_8$, *London Math. Soc. Newsletter* **476** (2018), 18--23. Available as [arXiv:1712.06436](https://arxiv.org/abs/1712.06436).
J. H. Conway and N. J. A. Sloane, *Sphere Packings, Lattices and Groups*, Springer, Berlin, 2013.
H. S. M. Coxeter, *Regular Polytopes*, Dover, Mineola, New York, 1974.
C. Bœhm, P.-P. Dechant and R. Twarock, Affine extensions of non-crystallographic Coxeter groups induced by projection, *J. Math. Phys.* **54**, 093508 (2013). Available as [arXiv:1110.5228](http://arxiv.org/abs/1110.5228).
A. Fring and C. Korff, Affine Toda field theories related to Coxeter groups of non-crystallographic type, *Nucl. Phys. B* **729** (2005), 361--386. Available as [arXiv:hep-th/0506226](http://arxiv.org/abs/hep-th/0506226).
M. Viazovska, The sphere packing problem in dimension 8, *Ann. Math.* **185** (2017), 991--1015. Available as [arXiv:1603.04246](http://arxiv.org/abs/arXiv:1603.04246).
R. Webb, Stella, <http://www.software3d.com/Stella.php>.
{ "id": "2309.15774", "title": "The Icosidodecahedron", "authors": "John C. Baez", "categories": "math.CO math.GR math.HO", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- address: BCAM -- Basque Center for Applied Mathematics, Bilbao, Spain and IKERBASQUE, Basque Foundation for Science, Bilbao, Spain author: - Ilya Smirnov bibliography: - refs.bib title: An invitation to equimultiplicity of F-invariants --- *Dedicated to Craig and Mel for putting the wind in our sails.* # Introduction This note grew from the lectures that I delivered at the ICTP school, but the content has changed a lot. Since there are several good introductions to F-invariants (such as [@MaPolstraBook; @CaminataDe; @HunekeSurvey]), I decided to omit the proofs of the basic results and instead to focus on the notion of *equimultiplicity* and its role in the theory of Hilbert--Kunz multiplicity and F-signature. My motivation was the lack of a good introduction to equimultiplicity in general and the feeling that there is still a lot to be done in the study of equimultiplicity for *F-invariants*, i.e., invariants derived from the Frobenius. *Equimultiplicity* is defined for a numerical invariant of local rings; let $\mathop{\mathrm{\epsilon}}$ be such an invariant. By localizing at primes, $\mathop{\mathrm{\epsilon}}$ defines a function on the spectrum. A prime $\mathfrak p \in \mathop{\mathrm{Spec}}R$ is $\mathop{\mathrm{\epsilon}}$-*equimultiple* if $\mathop{\mathrm{\epsilon}}(\mathfrak p) = \mathop{\mathrm{\epsilon}}(\mathfrak q)$ for all $\mathfrak p \subseteq \mathfrak q$; this condition appears in the stratification of the spectrum by the values of $\mathop{\mathrm{\epsilon}}$. In order to work with $\mathop{\mathrm{\epsilon}}$-equimultiplicity it is essential to find convenient necessary and sufficient conditions, perhaps a criterion, for $\mathfrak p$ to be $\mathop{\mathrm{\epsilon}}$-equimultiple. In Section [2](#sec: equimultiplicity intro){reference-type="ref" reference="sec: equimultiplicity intro"}, I will discuss in detail an equimultiplicity approach to the following three problems:

1. showing that an extremal value (which is $1$ for the invariants considered here) of $\mathop{\mathrm{\epsilon}}$ determines whether a ring is regular,

2. proving that $\mathop{\mathrm{\epsilon}}$ is strongly upper semicontinuous,

3. resolving singularities.

This note pursues the first two applications for Hilbert--Kunz multiplicity and F-signature. It should be stressed that equimultiplicity theories for multiplicity and the Hilbert--Samuel function were crucial in the resolution of singularities in characteristic $0$. It could happen that one day F-invariants will serve a similar role in the resolution of singularities in positive characteristic -- this is a good enough reason to study their equimultiplicity. With this point of view, the applications in this note are a useful byproduct and a test of tools. ## The structure of this note {#the-structure-of-this-note .unnumbered} In Section [2](#sec: equimultiplicity intro){reference-type="ref" reference="sec: equimultiplicity intro"}, I will start with a brief overview of equimultiplicity and its main uses. Section [3](#sec: F-intro){reference-type="ref" reference="sec: F-intro"} introduces F-signature and Hilbert--Kunz multiplicity and can be safely skipped by experienced readers. Section [4](#sec: F-signature){reference-type="ref" reference="sec: F-signature"} considers equimultiplicity for F-signature, largely based on my work with Polstra [@PolstraSmirnov]. The following two sections are more novel. In Section [5](#convergence){reference-type="ref" reference="convergence"} I will prove a uniform convergence estimate for Hilbert--Kunz multiplicity and present its applications; the treatment here is a bit different and inspired by Huneke's survey [@HunekeSurvey]. Building on the tools obtained through uniform convergence, Section [6](#sec: Hilbert-Kunz){reference-type="ref" reference="sec: Hilbert-Kunz"} develops a theory of Hilbert--Kunz multiplicity for primes of dimension $1$.
An important difference with [@SmirnovEqui] is that it is not assumed that $R/\mathfrak p$ is regular. This leads to two novel applications: a rigidity theorem for Hilbert--Kunz multiplicity in weakly F-regular rings ([@PolstraSmirnov] dealt with strongly F-regular rings) and a different proof of the Watanabe--Yoshida theorem. This proof is a modification of the proof given by Huneke and Yao in [@HunekeYao] -- I replaced their use of the symbolic power $\mathfrak p^{(n)}$ by $({\mathfrak p}^{[p^e]})^*$; this ideal is $\mathfrak p$-primary due to equimultiplicity. This change makes my proof more technical, but it also makes the proof more *conceptual* as it fits in the framework that can be applied to other invariants. The note closes with suggestions for further research. ## Acknowledgments {#acknowledgments .unnumbered} I thank Linquan Ma for many fruitful conversations, Thomas Polstra for collaborating on [@PolstraSmirnov], and Austyn Simpson and Craig Huneke for providing comments on this manuscript. I am indebted to Craig Huneke and Mel Hochster for their mathematics and mentorship through the years. Like most papers in "characteristic $p$ methods", this note has something from Mel and Craig on almost every page. I want to highlight that the presented necessary condition for Hilbert--Kunz equimultiplicity is given in terms of *tight closure*. This note grew from my thesis, the part developed while Craig was still in Kansas, and it is influenced by every weekly meeting we had back then. I think I still remember clearly that during one meeting I showed him that equimultiplicity implies that the saturation of Frobenius powers was contained in their tight closures and Craig suggested that this should imply that the tight closure itself should be saturated, see Theorem [Theorem 36](#thm: equimultiple property){reference-type="ref" reference="thm: equimultiple property"}.
At that time, Craig also introduced me to Lipman's article on equimultiplicity [@Lipman] which had a lot of influence on me through the years. # Equimultiplicity {#sec: equimultiplicity intro} The purpose of this section is to briefly discuss the notion of equimultiplicity and its potential applications. First, in the following a local ring $(R, \mathfrak m)$ is defined as a Noetherian unital ring such that the set of all non-invertible elements forms an ideal $\mathfrak m$. Let $\mathop{\mathrm{\epsilon}}$ be an invariant of local rings; then $\mathop{\mathrm{\epsilon}}$ defines a function on the spectrum of a Noetherian ring $R$ by setting $\mathop{\mathrm{\epsilon}}(\mathfrak p) = \mathop{\mathrm{\epsilon}}(R_\mathfrak p)$. If $\mathop{\mathrm{\epsilon}}$ takes values in a totally ordered set $\Lambda$, it is desirable that $\mathop{\mathrm{\epsilon}}\colon \mathop{\mathrm{Spec}}R \to \Lambda$ is a map of posets. This property comes from the intuitive expectation that if $\mathfrak p \subset \mathfrak q \in \mathop{\mathrm{Spec}}R$, then the singularity of $R_\mathfrak p$ cannot be worse than the singularity of $R_\mathfrak q$. For example, the Hilbert--Samuel multiplicity, $\operatorname{e}(R)$, is a positive integer, so the inequality becomes $\operatorname{e}(\mathfrak p) \leq \operatorname{e}(\mathfrak q)$. For some other invariants, such as F-signature, the inequality should be opposite. A prime $\mathfrak p$ is $\mathop{\mathrm{\epsilon}}$-equimultiple if $\mathop{\mathrm{\epsilon}}(\mathfrak p) = \mathop{\mathrm{\epsilon}}(\mathfrak q)$ for all $\mathfrak p \subseteq \mathfrak q$; the intuitive meaning is that the singularity, as measured by $\mathop{\mathrm{\epsilon}}$, does not change along $\mathfrak p$. In order to work with this condition, it is necessary to characterize it in a more intrinsic way. For example, the classical case of Hilbert--Samuel multiplicity is presented in the following way.
**Theorem 1** ([@Dade Theorem 2], see also [@Lipman Corollary, page 121]). *Let $(R, \mathfrak m)$ be a formally equidimensional local ring and $\mathfrak p$ be a prime ideal such that $R/\mathfrak p$ is regular. Then $\operatorname{e}(\mathfrak m) = \operatorname{e}(\mathfrak p)$ if and only if the analytic spread of $\mathfrak p$, $\mathop{\mathrm{\ell}}(\mathfrak p)$, is minimal, i.e., the dimension of the fiber cone $\oplus \mathfrak p^n/\mathfrak m\mathfrak p^n$ is equal to $\operatorname{ht}\mathfrak p$.* **Remark 2**. In the literature, any ideal $I$ such that $\mathop{\mathrm{\ell}}(I) = \operatorname{ht}(I)$ is called equimultiple. While this condition has a connection with multiplicities (see [@Lipman Theorem 4]), it is not equivalent to $\operatorname{e}$-equimultiplicity if $R/\mathfrak p$ is not regular. If $R/\mathfrak p$ is not regular, I do not know of any necessary condition for Hilbert--Samuel equimultiplicity. For example, any prime ideal in a regular ring is $\operatorname{e}$-equimultiple by definition, but the height and the analytic spread need not be equal. A family of examples is given by defining ideals of a monomial curve with $3$ generators, such as the defining prime $\mathfrak p = (x^3 - yz, y^2- xz, z^2 - x^2 y) \subset k[x,y, z]$ of the monomial curve $k[t^3, t^4, t^5]$. As discussed on page 258 of [@HunekeDseq] the defining primes are generated by a $d$-sequence, so the fiber cone is isomorphic to a polynomial ring in $3$ variables by [@HunekeDseq Theorem 2.2]. While the two invariants of this note, Hilbert--Kunz multiplicity and F-signature, take values in real numbers, it should be noted that a theory of equimultiplicity can be developed for invariants with values in a more general poset. Two classically studied examples are the Hilbert function, which takes values in sequences of positive integers with an appropriate order, and the Hilbert polynomial. ## Connection with semicontinuity **Definition 3**.
Let $\mathop{\mathrm{\epsilon}}\colon X \to \mathbb{R}$ be a function on a topological space $X$. We say that $\mathop{\mathrm{\epsilon}}$ is *upper semicontinuous* if the subset $X_{< a} := \{x \in X \mid \mathop{\mathrm{\epsilon}}(x) < a\}$ is open for all $a \in \mathbb{R}$. We say that $\mathop{\mathrm{\epsilon}}$ is *strongly upper semicontinuous* if any set of the form $X_{\leq a} := \{x \in X \mid \mathop{\mathrm{\epsilon}}(x) \leq a\}$ is open. Since $X_{< a} = \cup_{b < a} X_{\leq b}$, strong upper semicontinuity is indeed a stronger notion. Lower semicontinuity and strong lower semicontinuity are defined similarly, by using $X_{> a}$ and $X_{\geq a}$. Semicontinuity is a notion of continuity appropriate for algebraic geometry and it has a number of important consequences in the theory of F-invariants, see for example [@EnescuShimomoto; @SPYGlobal; @SmirnovTucker] and the discussion in [2.2](#sub: blowup){reference-type="ref" reference="sub: blowup"}. In addition, strong semicontinuity provides a constructible stratification, *i.e.,* the sets $X_{= a} \coloneqq X_{\leq a} \cap X_{\geq a}$ are locally closed. In practice, it might be easier to establish strong semicontinuity, for example, due to the following connection with equimultiplicity that comes from Nagata's criterion for openness. The criterion asserts that a set $U \subseteq \mathop{\mathrm{Spec}}R$ is open if and only if for all pairs $\mathfrak p \subseteq \mathfrak q \in \mathop{\mathrm{Spec}}R$ the following conditions hold:

1. if $\mathfrak q \in U$ then $\mathfrak p \in U$, and

2. if $\mathfrak p \in U$ there exists an element $s \notin \mathfrak p$ such that whenever $s \notin \mathfrak q$ then $\mathfrak q \in U$.

By applying this to $X_{\leq a}$, the following characterization follows immediately. **Proposition 4**.
*The function $\mathop{\mathrm{\epsilon}}\colon \mathop{\mathrm{Spec}}R \to \mathbb {R}$ is strongly upper semicontinuous if and only if $\mathop{\mathrm{\epsilon}}$ satisfies the localization inequality, *i.e.,* $\mathop{\mathrm{\epsilon}}(R_\mathfrak p) \leq \mathop{\mathrm{\epsilon}}(R_\mathfrak q)$ when $\mathfrak p \subseteq \mathfrak q$, and for any prime $\mathfrak p$ there is $s\notin \mathfrak p$ such that $\mathfrak pR_s$ is $\mathop{\mathrm{\epsilon}}$-equimultiple.* With a convenient sufficient condition for $\mathop{\mathrm{\epsilon}}$-equimultiplicity, such as Theorem [Theorem 1](#thm: multiplicity criterion){reference-type="ref" reference="thm: multiplicity criterion"}, Proposition [Proposition 4](#prop: Nagata){reference-type="ref" reference="prop: Nagata"} can be easily used to show semicontinuity. **Corollary 5**. *Multiplicity is upper semicontinuous on the spectrum of an excellent ring.* *Proof.* It was shown in [@Dade] that multiplicity localizes if $R$ is excellent[^1]. Furthermore, for any prime $\mathfrak p$ the quotient $R/\mathfrak p$ is still excellent, so its regular locus is open by definition. Second, the analytic spread of $\mathfrak pR_\mathfrak p$ is $\dim R_\mathfrak p = \operatorname{ht}\mathfrak p$, so it is possible to find $v \notin \mathfrak p$ so that the same equality holds in $R_v$. Thus, there is an element $u \notin \mathfrak p$ such that $(R/\mathfrak p)_u$ is regular. It remains to apply Theorem [Theorem 1](#thm: multiplicity criterion){reference-type="ref" reference="thm: multiplicity criterion"} in $R_{uv}$. ◻ On the other hand, Proposition [Proposition 4](#prop: Nagata){reference-type="ref" reference="prop: Nagata"} will be used to show that Hilbert--Kunz multiplicity is *not* strongly upper semicontinuous in Corollary [Corollary 46](#not locally constant){reference-type="ref" reference="not locally constant"}.
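To make the stratification concrete, here is a toy example (a numerical sketch, not taken from the text; all names are illustrative): for the cuspidal cubic $R = k[x,y]/(y^2 - x^3)$, the localization at any prime other than the origin is regular, so $\operatorname{e}$ equals $1$ there, while $\operatorname{e}= 2$ at the origin; hence $X_{\leq 1}$ is the open complement of the origin, as upper semicontinuity predicts. The value at the origin can be computed from the Hilbert--Samuel function of the isomorphic semigroup ring $k[t^2, t^3]$:

```python
B = 200                       # truncation bound for t-exponents
S = {0} | set(range(2, B))    # exponent semigroup of k[t^2, t^3]: <2, 3> = {0, 2, 3, 4, ...}

def power_of_max_ideal(n):
    # exponents of t occurring in m^n, where m = (t^2, t^3), truncated at B
    gens = {2 * a + 3 * (n - a) for a in range(n + 1)}
    return {g + s for g in gens for s in S if g + s < B}

def colength(n):
    # lg(R / m^n): semigroup elements surviving modulo m^n
    I = power_of_max_ideal(n)
    return sum(1 for s in S if s not in I)

lengths = [colength(n) for n in range(1, 30)]
```

Here $\mathop{\mathrm{lg}}(R/\mathfrak m^n) = 2n - 1$, and in dimension $1$ the leading coefficient of this linear function is the multiplicity, giving $\operatorname{e}(R) = 2$ at the origin.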
## Connection to resolution of singularities {#sub: blowup} The notion of equimultiplicity is also closely connected to the main approach to the resolution of singularities, and essentially originates, in the case of the Hilbert function, from Hironaka's work [@Hironaka]. I will briefly outline this connection, and, for simplicity, I will assume that $\mathop{\mathrm{\epsilon}}(R)$ is upper semicontinuous (a lower semicontinuous function is completely analogous). Since $X = \mathop{\mathrm{Spec}}R$ is quasi-compact, the open cover $X = \cup X_{< a}$ admits a finite subcover, hence $\mathop{\mathrm{\epsilon}}$ must attain its supremum on $X$. Let $M$ denote the maximal value of $\mathop{\mathrm{\epsilon}}$ on $X$. By semicontinuity, the maximal locus of $\mathop{\mathrm{\epsilon}}$, i.e., the set $X_{\max} = \{\mathfrak p \mid \mathop{\mathrm{\epsilon}}(\mathfrak p) = M\} = X \setminus X_{< M}$, is closed. As measured by $\mathop{\mathrm{\epsilon}}$, $X_{\max}$ is the locus of the worst singularity; thus a natural attempt to improve the singularity is to blow up $X_{\max}$ or its component. The key difficulty is to show that the singularity improves after blowing up. Because any prime ideal in $X_{\max}$ is $\mathop{\mathrm{\epsilon}}$-equimultiple, $\mathop{\mathrm{\epsilon}}$-equimultiplicity should help to understand the effect of the blowing up. The theories of equimultiplicity for Hilbert--Samuel multiplicity and Hilbert functions were developed for this reason. As a guideline consider the result of Dade that controls the multiplicity under the blowing up of $\operatorname{e}$-equimultiple ideals. **Theorem 6** ([@Dade Theorem 3], [@EquimultiplicityBook Corollary 31.2]). *Let $(R, \mathfrak m)$ be a formally equidimensional local ring.
If $\mathfrak p$ is a prime ideal such that $\mathop{\mathrm{\ell}}(\mathfrak p) = \operatorname{ht}\mathfrak p$ then any local ring $S$ of the blowing up[^2] of $\mathfrak p$ that dominates $R$ satisfies the inequality $$\operatorname{e}(S) \leq \operatorname{e}(R/\mathfrak p) \operatorname{e}(R_\mathfrak p).$$* This theorem only says that the singularity does not get worse, but, in fact, in characteristic $0$, it is possible to find an invariant $\mathop{\mathrm{\epsilon}}$ such that the resolution is obtained by consecutively blowing up the maximal locus of $\mathop{\mathrm{\epsilon}}$. The proofs originating from Hironaka's approach use the Hilbert--Samuel function as a part of $\mathop{\mathrm{\epsilon}}$. In fact, $\mathop{\mathrm{\epsilon}}$ is a sequence of invariants that refines the Hilbert--Samuel function so that $\mathop{\mathrm{\epsilon}}$ decreases with each blow-up; the additional pieces are found through studying when $\mathop{\mathrm{\epsilon}}$ does not change after blowing up. I refer to [@EquimultiplicityBook] for more details about equimultiplicity in the Hilbert--Samuel theory. ## Detection of singularities Equimultiplicity also provides an approach to showing that a given invariant $\mathop{\mathrm{\epsilon}}$ detects singularities. For simplicity, assume $\mathop{\mathrm{\epsilon}}(R) \in [1, \infty)$, so that the goal is to show that if $(R, \mathfrak m)$ is a local ring (perhaps with extra assumptions) and $\mathop{\mathrm{\epsilon}}(\mathfrak m) = 1$, then $R$ is regular. If $\mathop{\mathrm{\epsilon}}$ localizes well, then the condition $\mathop{\mathrm{\epsilon}}(R) = 1$ implies that every prime ideal in $R$ is $\mathop{\mathrm{\epsilon}}$-equimultiple. This opens an approach by induction: we may take a suitable $\mathfrak p \neq \mathfrak m$; then $R_\mathfrak p$ is regular by induction.
Thus the goal becomes to show that if $\mathfrak p$ is $\mathop{\mathrm{\epsilon}}$-equimultiple and $R_\mathfrak p$ is regular, then $R$ must be regular. In this note, this strategy is applied to Hilbert--Kunz multiplicity and F-signature. Interestingly, as Theorem [Theorem 1](#thm: multiplicity criterion){reference-type="ref" reference="thm: multiplicity criterion"} is limited to the situation where $R/\mathfrak p$ is regular, one cannot use the same strategy to recover Nagata's original criterion of $\operatorname{e}(R) = 1$. # An introduction to F-invariants {#sec: F-intro} The purpose of this section is to discuss the basics of positive characteristic commutative algebra and introduce Hilbert--Kunz multiplicity and F-signature. The proofs and more comprehensive treatments can be found in [@HunekeSurvey; @MaPolstraBook; @CaminataDe]. A key result for the study of singularities in positive characteristic is the characterization of regular rings due to Kunz. Essentially all proofs showing that a certain invariant detects singularities reduce to this famous theorem. In the theorem, $F_*^e R$ denotes an $R$-module obtained as the target of the $e$-th iterate of the Frobenius morphism $F^e \colon R \to R$. **Theorem 7** ([@Kunz1]). *Let $(R, \mathfrak m)$ be a local ring of prime characteristic $p > 0$. Then $R$ is regular if and only if $F^e_* R$ is a free $R$-module for some (equivalently, all) $e \in \mathbb{N}$.* If, in addition, $R$ is *F-finite* (that is, $F_*^e R$ is finitely generated), then we could detect regularity by measuring numerically the "failure of freeness" of $F_*^e R$. As an example, we could assume that $R$ is a domain with the field of fractions $L$ and define the generic rank of $M$ as its rank at the generic point, $\dim_{L} M \otimes_R L$.
Define two functions $$f(M) = \frac{\min \{N \mid \text{ there is } R^{\oplus N} \to M \to 0\}} {\text{generic rank of } M} = \frac{\text{minimal number of generators of } M} {\text{generic rank of } M}$$ and $$g(M) = \frac{\max \{N \mid \text{ there is } M \to R^{\oplus N} \to 0\}}{\text{generic rank of } M} = \frac{\max \{N \mid M = \oplus^N R \oplus M'\}}{\text{generic rank of } M}.$$ It is easy to see that $f(M) \in [1, \infty)$, $g(M) \in [0, 1]$, and $f(M) = 1$ if and only if $M$ is free if and only if $g(M) = 1$. Kunz's theorem gives numerical tests for singularity through the application of either function to $F_*^e R$ for any $e$. But it turns out that the values change with $e$ in a remarkable way, leading to the following definitions. **Definition 8** (Kunz [@Kunz2], Monsky [@Monsky]). Let $(R, \mathfrak m)$ be an F-finite local domain of characteristic $p > 0$. Then the Hilbert--Kunz multiplicity of $R$ is $$\operatorname{e}_{HK}(R) = \lim_{e \to \infty} \frac{\text{minimal number of generators of } F_*^e R}{\text{generic rank of } F_*^e R}.$$ **Definition 9** (Smith--Van den Bergh [@SmithVdB], Huneke--Leuschke [@HunekeLeuschke]). Let $(R, \mathfrak m)$ be an F-finite local domain of characteristic $p > 0$. Then the F-signature of $R$ is $$\mathop{\mathrm{s}}(R) = \lim_{e \to \infty} \frac{\max \{N \mid F_*^e R = \oplus^N R \oplus M\}}{\text{generic rank of } F_*^e R}.$$ It takes some work to show that these limits exist and that they indeed detect singularities. This will be discussed in Section [5](#convergence){reference-type="ref" reference="convergence"}. ## The generic rank theorem In order to work with these invariants it is convenient to restate them in a different way. The starting place is another fundamental result of Kunz [@Kunz2] that computes the generic rank. **Theorem 10** (Kunz). *Let $(R, \mathfrak m, k)$ be an F-finite local domain of dimension $d$.
Then for any $e > 0$ the generic rank of $F_*^e R$ is $p^{ed} [k^{1/p^e} : k]$.* *Proof.* The idea of the proof is that it reduces, via Cohen's structure theorem, to the easiest case where $R = k[[x_1, \ldots, x_d]]$. In this case the computation is very explicit, because $F_*^e R \cong R^{1/p^e} = k^{1/p^e}[[x_1^{1/p^e}, \ldots, x_d^{1/p^e}]]$ as $R$-algebras by $F_*^e r \mapsto r^{1/p^e}$. Let $B = k^{1/p^e}[[x_1, \ldots, x_d]]$. Since $k^{1/p^e}$ is free over $k$, $B$ is a free $R$-module of rank $[k^{1/p^e} : k]$. Furthermore, $k^{1/p^e}[[x_1^{1/p^e}, \ldots, x_d^{1/p^e}]]$ is a free $B$-module of rank $p^{ed}$ with the explicit basis $x_1^{a_1/p^e}\cdots x_d^{a_d/p^e}$, where $0 \leq a_i \leq p^e - 1$. ◻ **Corollary 11**. *Let $R$ be an F-finite ring and $\mathfrak p \subset \mathfrak q$ be prime ideals. Then for any $e > 0$ $$[k(\mathfrak p)^{1/p^e} : k(\mathfrak p)] = p^{e\dim R_\mathfrak q/\mathfrak pR_\mathfrak q} [k(\mathfrak q)^{1/p^e} : k(\mathfrak q)].$$* *Proof.* Follows from the theorem by passing to $R_\mathfrak q/\mathfrak pR_\mathfrak q$. ◻ ## Hilbert--Kunz multiplicity via colengths In order to employ ideal-theoretic techniques, it is convenient to restate the first definition of Hilbert--Kunz multiplicity. In the rest of the paper $\mathop{\mathrm{lg}}(M)$ denotes the length of a module $M$. **Lemma 12**. *Let $(R, \mathfrak m)$ be an F-finite local ring with the residue field $k$. Then for any $e > 0$ the minimal number of generators of $F_*^e R$ is $[k^{1/p^e} : k]\mathop{\mathrm{lg}}(R/{\mathfrak m}^{[p^e]})$, where ${\mathfrak m}^{[p^e]}$ is the ideal generated by $\{x^{p^e} \mid x \in \mathfrak m\}$.* *Proof.* Since $R$ is local, the minimal number of generators of any module $M$ is $\mathop{\mathrm{lg}}(M/\mathfrak mM)$.
Then $\mathfrak m \cdot F_*^e R$ consists of the elements of the form $F_*^e \left (\sum_i r_i m_i^{p^e} \right )$ with $r_i \in R$ and $m_i \in \mathfrak m$, hence $$\mathop{\mathrm{lg}}(F_*^e R/\mathfrak m F_*^e R) = \mathop{\mathrm{lg}}(F_*^e (R/{\mathfrak m}^{[p^e]})).$$ By the definition of length, for $L = \mathop{\mathrm{lg}}(R/{\mathfrak m}^{[p^e]})$ there is a filtration $${\mathfrak m}^{[p^e]} = M_0 \subset \cdots \subset M_L = R$$ with $M_{i + 1}/M_i \cong k$. Since $F_*^e \bullet$ is exact, it follows that $\mathop{\mathrm{lg}}(F_*^e R/\mathfrak m F_*^e R) = [F_*^e k: k ] L.$ Since $k$ is a field, $F_*^e k \cong k^{1/p^e}$ as $k$-algebras. ◻ It follows from Theorem [Theorem 10](#thm: Kunz rank){reference-type="ref" reference="thm: Kunz rank"} and the definition that $$\frac{\text{minimal number of generators of } F_*^e R}{\text{generic rank of } F_*^e R} =\frac{[k^{1/p^e}:k] \mathop{\mathrm{lg}}(R/{\mathfrak m}^{[p^e]})}{p^{e\dim R} [k^{1/p^e} : k]} = \frac{\mathop{\mathrm{lg}}(R/{\mathfrak m}^{[p^e]})}{p^{e\dim R}}.$$ This immediately extends the definition of Hilbert--Kunz multiplicity to all local rings of positive characteristic and all $\mathfrak m$-primary ideals. **Definition 13**. If $(R, \mathfrak m)$ is a local ring of characteristic $p > 0$, $I$ is an $\mathfrak m$-primary ideal, and $M$ is a finitely generated $R$-module, then the Hilbert--Kunz multiplicity of $I$ on $M$ is $$\operatorname{e}_{HK}(I; M) = \lim_{e \to \infty} \frac{\mathop{\mathrm{lg}}(M/{I}^{[p^e]}M)}{p^{e\dim R}}.$$ From this definition one obtains a number of useful properties. **Proposition 14** (Some properties of Hilbert--Kunz multiplicity). *Let $(R, \mathfrak m)$ be a local ring of characteristic $p > 0$ and dimension $d$, and let $I$ be an $\mathfrak m$-primary ideal. Then the following properties hold.* 1. *$\operatorname{e}_{HK}(I; M) = 0$ if $\dim M < \dim R$.* 2. *$\operatorname{e}_{HK}(I^{[p]}; M) = p^d \operatorname{e}_{HK}(I; M)$.* 3.
*(Additive) If $0 \to L \to M \to N \to 0$ is an exact sequence of finitely generated $R$-modules, then $\operatorname{e}_{HK}(I; M) = \operatorname{e}_{HK}(I; L) + \operatorname{e}_{HK}(I; N)$.[\[item: exact sequence\]]{#item: exact sequence label="item: exact sequence"}* 4. *(Associativity) $\operatorname{e}_{HK}(I; M) = \sum_\mathfrak p \operatorname{e}_{HK}(I; R/\mathfrak p) \mathop{\mathrm{lg}}(M_\mathfrak p)$, where the sum ranges over primes such that $\dim R/\mathfrak p = \dim R$. [\[item: associative\]]{#item: associative label="item: associative"}* 5. *$\operatorname{e}_{HK}(I) \leq \mathop{\mathrm{lg}}(J/I) \operatorname{e}_{HK}(\mathfrak m) + \operatorname{e}_{HK}(J)$ for any ideal $I \subseteq J \subseteq R$.[\[item: filtration\]]{#item: filtration label="item: filtration"}* *Proof.* Most of these properties were observed in [@WatanabeYoshida]. I want to comment on the last two. First, recall that $M$ has a prime filtration: $0 = M_0 \subsetneq M_1 \subsetneq \cdots \subsetneq M_L = M$ such that $M_{i+1}/M_i \cong R/\mathfrak p_i$. Thus $\operatorname{e}_{HK}(I; M) = \sum_{i} \operatorname{e}_{HK}(I; R/\mathfrak p_i)$ by ([\[item: exact sequence\]](#item: exact sequence){reference-type="ref" reference="item: exact sequence"}). However, by the first property, $\operatorname{e}_{HK}(I; R/\mathfrak p_i) = 0$ unless $\dim R/\mathfrak p_i = \dim R$. Since any prime $\mathfrak p$ such that $\dim R/\mathfrak p = \dim R$ is minimal, the only quotients surviving in the localized filtration $(M_i)_\mathfrak p$ are those with $\mathfrak p_i = \mathfrak p$. Thus the localized filtration is a composition series of $M_\mathfrak p$, so its length must be $\mathop{\mathrm{lg}}(M_\mathfrak p)$. For the last property, take a composition series of $J/I$: it is a chain $I = I_0 \subsetneq I_1 \subsetneq \cdots \subsetneq I_\ell = J$ where $I_{k + 1}/I_k \cong R/\mathfrak m$, i.e., $I_{k + 1} = (I_k, u_{k})$ where $\mathfrak m u_{k} \subseteq I_k$.
It follows that ${\mathfrak m}^{[p^e]} u_{k}^{p^e} \subseteq {I_{k}}^{[p^e]}$, so $$\mathop{\mathrm{lg}}(R/{I}^{[p^e]}) = \sum_{k = 0}^{\ell - 1} \mathop{\mathrm{lg}}({I_{k+1}}^{[p^e]}/{I_{k}}^{[p^e]}) + \mathop{\mathrm{lg}}(R/{J}^{[p^e]}) \leq \sum_{k = 0}^{\ell - 1} \mathop{\mathrm{lg}}(R/{\mathfrak m}^{[p^e]}) + \mathop{\mathrm{lg}}(R/{J}^{[p^e]})$$ and the assertion follows. ◻ # Equimultiplicity for F-signature {#sec: F-signature} I will now present the results on F-signature that were obtained in [@PolstraSmirnov]. As a first step, we should verify that the invariant satisfies the localization property. In order to shorten Definition [Definition 9](#def: Fsig){reference-type="ref" reference="def: Fsig"}, denote $a_e (R) = \max \{N \mid F_*^e R = \oplus^N R \oplus M\}$. **Lemma 15**. *If $(R, \mathfrak m)$ is an F-finite local domain then F-signature satisfies the localization property: if $\mathfrak p$ is a prime ideal then $\mathop{\mathrm{s}}(R_\mathfrak p) \geq \mathop{\mathrm{s}}(R)$.* *Proof.* If $M$ is an arbitrary finitely generated $R$-module then the free rank of $M_\mathfrak p$ may only increase: if $M = \oplus^n R \oplus M'$ then $M'_\mathfrak p$ may gain free summands. Hence, $$\mathop{\mathrm{s}}(R) = \lim_{e \to \infty} \frac{a_e(R) }{\text{generic rank of } F_*^e R} \leq \lim_{e \to \infty} \frac{a_e(R_\mathfrak p)}{\text{generic rank of } F_*^e R} = \mathop{\mathrm{s}}(R_\mathfrak p).$$ ◻ The key ingredient of the rigidity theorem below is the following lemma, which produces an adequate number of direct summands. **Lemma 16**. *Suppose $(R,\mathfrak{m})$ is an $F$-finite domain and $\mathop{\mathrm{s}}(R) > 0$.
Suppose $M$ is a finitely generated $R$-module and $e_0$ is a positive integer such that there is a decomposition $$F_*^{e_0} R \cong M \oplus N_{0}.$$ If $M$ has no free direct summand, then for any $e \geq e_0$ there is a decomposition $F_*^{e} R \cong R^{\oplus a_e(R)}\oplus M^{\oplus b_e} \oplus N_{e}$ such that $$\lim_{e\to \infty}\frac{b_e}{\mathop{\mathrm{rank}}(F^e_*R)} = \frac{\mathop{\mathrm{s}}(R)}{\mathop{\mathrm{rank}}(F^{e_0}_*R)} > 0.$$* *Proof.* For simplicity, I will write $a_e$ instead of $a_e(R)$. For any $e\in \mathbb{N}$ write $F^e_*R\cong R^{\oplus a_e}\oplus M_e$. Then $$F^{e+e_0}_*R\cong (F^{e_0}_*R)^{\oplus a_e}\oplus F^{e_0}_*M_e \cong \oplus^{a_e} \left (M \oplus N_0 \right) \oplus F^{e_0}_*M_e \cong \oplus^{a_e} M \oplus \left( \oplus^{a_e} N_{0} \oplus F^{e_0}_*M_e\right).$$ Therefore via this decomposition $$\lim_{e\to \infty}\frac{b_e}{\mathop{\mathrm{rank}}(F^e_*R)} = \lim_{e\to \infty}\frac{a_{e-e_0}}{\mathop{\mathrm{rank}}(F^e_*R)}=\frac{\mathop{\mathrm{s}}(R)}{\mathop{\mathrm{rank}}(F^{e_0}_*R)}.$$ It remains to observe that the maximal rank of a free summand of any finitely generated $R$-module is an invariant independent of the choice of a direct sum decomposition. This is because $R$ is a direct summand of $N$ if and only if $\hat{R}$ is a direct summand of $\hat{N}$ and a complete local ring satisfies the Krull--Schmidt condition. Therefore, as $M$ has no free summand, it is possible to decompose $\oplus^{a_e} N_{0} \oplus F^{e_0}_*M_e \cong R^{\oplus a_{e + e_0}} \oplus N_{e + e_0}$. ◻ The proof of the localization property leads quite naturally to the following observation on $\mathop{\mathrm{s}}$-equimultiplicity. While a truly intrinsic characterization of $\mathop{\mathrm{s}}$-equimultiplicity is not known, the result still suffices for showing that F-signature detects singularities. **Theorem 17** (Rigidity property). *Let $(R,\mathfrak{m})$ be an $F$-finite local domain and $\mathfrak p\in \mathop{\mathrm{Spec}}R$ be a prime ideal.
Suppose that $\mathop{\mathrm{s}}(R) > 0$. Then $\mathop{\mathrm{s}}(R)=\mathop{\mathrm{s}}(R_\mathfrak p)$ if and only if $a_e(R)=a_e(R_\mathfrak p)$ for every $e\in \mathbb{N}$.* *Proof.* If $a_e(R)=a_e(R_\mathfrak p)$ for every $e\in \mathbb{N}$ then clearly $\mathop{\mathrm{s}}(R)=\mathop{\mathrm{s}}(R_\mathfrak p)$. Otherwise, suppose that $a_{e_0}(R_\mathfrak p)>a_{e_0}(R)$ for some $e_0$ and write $F^{e_0}_*R\cong R^{\oplus a_{e_0}}\oplus M_{e_0}$. Then $(M_{e_0})_\mathfrak p$ has a free $R_\mathfrak p$-summand. By Lemma [Lemma 16](#lemma: summands){reference-type="ref" reference="lemma: summands"} for each $e\in \mathbb{N}$ we may decompose $$F^e_*R\cong R^{\oplus a_e(R)}\oplus M_{e_0}^{\oplus b_e}\oplus N_e$$ where $b_e$ grows as fast as $\mathop{\mathrm{rank}}(F_*^e R)$. Localizing at the prime $P$ we see that $a_e(R_\mathfrak p)\geq a_e(R)+b_e$ and $$\mathop{\mathrm{s}}(R_\mathfrak p) = \lim_{e\to \infty}\frac{a_e(R_\mathfrak p)}{\mathop{\mathrm{rank}}(F^e_*R)} \geq \lim_{e\to \infty}\frac{a_e(R) + b_e}{\mathop{\mathrm{rank}}(F^e_*R)} = \mathop{\mathrm{s}}(R) \frac{1 + \mathop{\mathrm{rank}}(F^{e_0}_*R)}{\mathop{\mathrm{rank}}(F^{e_0}_*R)} > \mathop{\mathrm{s}}(R).$$ ◻ An application useful for experiments is that if there is a single $e$ such that $a_e(R) \neq a_e(R_\mathfrak p)$ then $\mathop{\mathrm{s}}(R) \neq \mathop{\mathrm{s}}(R_\mathfrak p)$. **Remark 18**. In the case where $\mathop{\mathrm{s}}(R) = 0$ the F-signature functions can be different. As an example, take $R = \mathbb{F}_p [[x,y, z]]/(x^2 - y^2z)$ with $p > 2$. It was computed in [@BlickleSchwedeTucker 4.3.2] that $a_e(R) \approx p^e/2$. On the other hand, the prime ideal $\mathfrak p = (x, y)$ is the F-splitting prime of $R$ in the sense of [@AberbachEnescu]. Then by [@AberbachEnescu Proposition 3.6, Corollary 3.4] $a_e(R_\mathfrak p) = p^e$. ## F-signature and non-singularity In contrast with the multiplicity-like invariants, F-signature has two distinguished values, $0$ and $1$. 
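To put these distinguished values in context, here is a brief illustration (a sketch: the regular case follows from Kunz's computation above, while the quotient-singularity value is quoted from the literature, e.g. Huneke--Leuschke and Watanabe--Yoshida, rather than derived here):

```latex
% Regular rings: if R = k[[x,y]] with k perfect, then F_*^e R is free
% on the p^{2e} elements F_*^e(x^a y^b), 0 <= a, b <= p^e - 1, so
% a_e(R) = p^{2e} equals the generic rank and
\[
  \mathop{\mathrm{s}}\bigl(k[[x,y]]\bigr)
    = \lim_{e \to \infty} \frac{p^{2e}}{p^{2e}} = 1.
\]
% An intermediate value: for the A_1-singularity
% V = k[[x^2, xy, y^2]] = k[[x,y]]^{\mathbb{Z}/2} with p odd, only
% "half" of F_*^e V splits off as free summands, and
\[
  \mathop{\mathrm{s}}(V) = \tfrac{1}{2},
\]
% an instance of the value 1/|G| for a quotient singularity by a
% small group G whose order is invertible in k.
```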
It was shown by Aberbach--Leuschke in [@AberbachLeuschke] (see also [@PolstraTucker] for alternative proofs) that F-signature is positive if and only if $R$ is *strongly F-regular*. In the proof below we will use the fact that a strongly F-regular local ring is a domain [@MaPolstraBook Lemma 3.2]. **Theorem 19** ([@HunekeLeuschke Corollary 16]). *Let $(R,\mathfrak m)$ be an $F$-finite local ring of prime characteristic $p>0$. Then $\mathop{\mathrm{s}}(R)=1$ if and only if $R$ is a regular local ring.* *Proof.* If $R$ is regular, then $F_*^e R$ is free and $\mathop{\mathrm{s}}(R) = 1$. Conversely, suppose that $\mathop{\mathrm{s}}(R) = 1$. Since $\mathop{\mathrm{s}}(R) > 0$, $R$ is a domain, so $R_{(0)}$ is a field and $a_e(R_{(0)})=\mathop{\mathrm{rank}}(F^e_*R)$. By the localization inequality, $1 = \mathop{\mathrm{s}}(R) \leq \mathop{\mathrm{s}}(R_{(0)}) \leq 1$, so $\mathop{\mathrm{s}}(R_{(0)}) = \mathop{\mathrm{s}}(R)$ and $a_e(R)=a_e(R_{(0)})=\mathop{\mathrm{rank}}(F^e_*R)$ by Theorem [Theorem 17](#theorem: F-signature is rigid){reference-type="ref" reference="theorem: F-signature is rigid"}. Therefore $F^e_*R$ is a free $R$-module and $R$ is a regular local ring by Theorem [Theorem 7](#theorem Kunz){reference-type="ref" reference="theorem Kunz"}. ◻ # Uniform convergence of F-invariants {#convergence} The theory of $\operatorname{e}_{HK}$-equimultiplicity requires a result on interchange of limits. Following [@SmirnovEqui] the formula will be derived from a uniform convergence estimate. This type of convergence estimate first appeared in [@Tucker]. The proof presented here borrows from Huneke's survey [@HunekeSurvey]; these arguments are slightly different from [@Tucker; @SmirnovEqui]. ## Convergence estimates One way to think about the existence of Hilbert--Kunz multiplicity is through an equivalence relation on modules, where $M \sim N$ if there exists a module $T$ such that $\dim T < \dim R$ and $|\mathop{\mathrm{lg}}(M/IM) - \mathop{\mathrm{lg}}(N/IN)| \leq \mathop{\mathrm{lg}}(T/IT)$ for all $\mathfrak m$-primary ideals $I$. The uniform convergence argument imposes a further control on the difference module $T$. **Lemma 20**.
*Let $(R, \mathfrak m)$ be a local ring and $M, N$ be finitely generated $R$-modules. Suppose that $M_\mathfrak p \cong N_\mathfrak p$ for every prime $\mathfrak p$ with $\dim R/\mathfrak p = \dim R$. Then there exist exact sequences $$M \to N \to T_1 \to 0 \text{ and } N \to M \to T_2 \to 0$$ such that $\dim T_1, \dim T_2 < \dim R$. In particular, if $T = T_1 \oplus T_2$, then for any $\mathfrak m$-primary ideal $I$ $$|\mathop{\mathrm{lg}}(M/IM) - \mathop{\mathrm{lg}}(N/IN)| \leq \mathop{\mathrm{lg}}(T/IT).$$* *Proof.* There are only finitely many primes $\mathfrak p_i$ with $\dim R/\mathfrak p_i = \dim R$. Set $S = R \setminus \cup \mathfrak p_i$. By the assumption $S^{-1} M \cong S^{-1} N$. Since $\operatorname{Hom}$ commutes with localization for finitely presented modules, there are maps $\psi\colon M \to N$ and $\phi \colon N \to M$ which become isomorphisms after localizing at $S$. Hence $\dim \mathop{\mathrm{coker}}\phi, \dim \mathop{\mathrm{coker}}\psi < \dim R$ and the assertion follows. ◻ **Corollary 21**. *Let $(R, \mathfrak m, k)$ be an F-finite reduced local ring and $M$ be a finitely generated $R$-module. Then for $p^\gamma = p^{\dim R} [k^{1/p} : k]$ we have exact sequences $$F_* M \to \oplus^{p^\gamma} M \to T_1 \to 0 \text{ and }$$ $$\oplus^{p^\gamma} M \to F_* M \to T_2 \to 0,$$ where $\dim T_1, \dim T_2 < \dim R$.* *Proof.* Let $\mathfrak p \in \mathop{\mathrm{Minh}}(R)$, i.e., $\dim R/\mathfrak p = \dim R$. Then $R_\mathfrak p$ is a field, $k(\mathfrak p)$, so $M_\mathfrak p$ is a vector space over it. If $r$ is the dimension of this vector space then we can compute $$\dim_{k(\mathfrak p)} (F_* M)_\mathfrak p = \dim_{k(\mathfrak p)} F_* M_\mathfrak p = r \dim_{k(\mathfrak p)} F_* k(\mathfrak p) = r p^\gamma,$$ by Theorem [Theorem 10](#thm: Kunz rank){reference-type="ref" reference="thm: Kunz rank"} combined with Corollary 11. Therefore $\oplus^{p^\gamma} M_\mathfrak p$ and $F_* M_\mathfrak p$ are isomorphic for any $\mathfrak p \in \mathop{\mathrm{Minh}}(R)$.
◻ The following result is very much inspired by the treatment of Dutta's lemma ([@Dutta]) in [@HunekeSurvey Lemma 3.4]. **Lemma 22**. *Let $(R, \mathfrak m, k)$ be an F-finite local ring with a residue field $k$ and let $M$ be a finitely generated $R$-module. Denote $p^{\dim M}[k : k^p] = p^{\gamma}$. Then there exists a finitely generated $R$-module $T$ such that $\mathop{\mathrm{Supp}}M = \mathop{\mathrm{Supp}}T$ and for all $\mathfrak m$-primary ideals $I$ and $e \geq 0$ $$\mathop{\mathrm{lg}}(R/I \otimes_R F_*^e M) \leq p^{e\gamma} \mathop{\mathrm{lg}}(T/IT).$$* *Proof.* Note that for all $e \gg 0$ the annihilator of $F_*^e M$ is a radical ideal. We may replace $M$ with $F_*^{e_0} M$: if $T$ is such that $\mathop{\mathrm{lg}}(R/I \otimes_R F_*^{e} M) \leq p^{e \gamma} \mathop{\mathrm{lg}}(T/IT)$ for $e \geq e_0$, then the assertion holds for all $e$ with $T' = \oplus_{e = 0}^{e_0} F_*^e M \oplus T$. Hence, replacing $R$ by $R/\mathop{\mathrm{Ann}}M$, we may assume that $R$ is reduced and $\dim M = \dim R$. The proof proceeds by induction on $\dim M$. In the base case, $\dim M = 0$, $M$ is a $k$-vector space. Let $m$ be its dimension. Then $$\mathop{\mathrm{lg}}(R/I \otimes_R F_*^{e} M) = \mathop{\mathrm{lg}}(F_*^{e} M) = m [k^{1/p^e} : k] = m p^{e \gamma}$$ and the base case follows by taking $T = M$.
In the general case, by Corollary [Corollary 21](#cor: Frob rank equivalence){reference-type="ref" reference="cor: Frob rank equivalence"} there is a module $N$ such that $\dim N < \dim R$ and $$\mathop{\mathrm{lg}}(R/I \otimes_R F_*^{e + 1} M) \leq p^{\gamma} \mathop{\mathrm{lg}}(R/I \otimes_R F_*^e M) + \mathop{\mathrm{lg}}(R/I \otimes_R F_*^e N).$$ These inequalities add up and give that $$\label{eq: defect} \mathop{\mathrm{lg}}(R/I \otimes F_*^e M) \leq p^{e\gamma} \mathop{\mathrm{lg}}(M/IM) + \sum_{n = 0}^{e - 1} p^{(e-1-n)\gamma}\mathop{\mathrm{lg}}(R/I \otimes F_*^n N).$$ Since $\dim N \leq \dim R - 1$, by induction there is a module $T$ such that $\mathop{\mathrm{lg}}(R/I \otimes F_*^n N) \leq p^{n(\gamma - 1)} \mathop{\mathrm{lg}}(T/IT)$; plugging this estimate in ([\[eq: defect\]](#eq: defect){reference-type="ref" reference="eq: defect"}) yields that $$\begin{aligned} \mathop{\mathrm{lg}}(R/I \otimes F_*^e M) &\leq p^{e \gamma} \mathop{\mathrm{lg}}(M/IM) + \sum_{n = 0}^{e - 1} p^{(e-1-n)\gamma}\mathop{\mathrm{lg}}(R/I \otimes F_*^n N) \\ &\leq p^{e\gamma} \mathop{\mathrm{lg}}(M/IM) + \sum_{n = 0}^{e - 1} p^{e\gamma - n} \mathop{\mathrm{lg}}(T/IT) \\ &\leq p^{e \gamma}\left( \mathop{\mathrm{lg}}(M/IM) + \mathop{\mathrm{lg}}(T/IT) \sum_{n= 0}^{e -1} p^{-n} \right) \\&\leq p^{e \gamma}\left( \mathop{\mathrm{lg}}(M/IM) + 2\mathop{\mathrm{lg}}(T/IT) \right) \end{aligned}$$ and the assertion follows for $M \oplus T \oplus T$. ◻ **Corollary 23**. *Let $(R, \mathfrak m, k)$ be an F-finite local ring and $M$ be a finitely generated $R$-module. Denote $p^{\dim M}[k : k^p] = p^{\gamma}$.
There exist a nonnegative integer $e_0$ (which can be taken to be $0$ if $R$ is reduced) and a finitely generated $R$-module $D$ such that $\dim D < \dim R$ and for any $\mathfrak m$-primary ideal $I$ and all $e \geq 1$ $$\left |\mathop{\mathrm{lg}}(R/I \otimes_R F_*^{e + e_0} M) - p^{e\gamma} \mathop{\mathrm{lg}}(R/I \otimes_R F_*^{e_0} M)\right | \leq p^{e\gamma} \mathop{\mathrm{lg}}(D/ID).$$* *Proof.* There is no harm in assuming that $\dim M = \dim R$. Fix $e_0$ to be such that $\sqrt{0}^{[p^{e_0}]} = 0$. Then $F_*^{e + e_0} M$ is an $R_{\operatorname {red}}$-module for any $e\geq 0$. Hence Corollary [Corollary 21](#cor: Frob rank equivalence){reference-type="ref" reference="cor: Frob rank equivalence"} applies and yields as in ([\[eq: defect\]](#eq: defect){reference-type="ref" reference="eq: defect"}) that $$\left |\mathop{\mathrm{lg}}(R/I \otimes_R F_*^{e +1 + e_0} M) - p^{\gamma} \mathop{\mathrm{lg}}(R/I \otimes_R F_*^{e + e_0} M)\right | \leq \mathop{\mathrm{lg}}(R/I \otimes_R F_*^e T),$$ where $\dim T < \dim R$. Therefore, $$\left |\mathop{\mathrm{lg}}(R/I \otimes_R F_*^{e + e_0} M) - p^{e\gamma} \mathop{\mathrm{lg}}(R/I \otimes_R F_*^{ e_0} M)\right | \leq \sum_{i = 0}^{e - 1} p^{(e - i)\gamma}\mathop{\mathrm{lg}}(R/I \otimes_R F_*^i T).$$ Since $\dim T < \dim R$, Lemma [Lemma 22](#lem: measure defect){reference-type="ref" reference="lem: measure defect"} provides $T'$ such that $\mathop{\mathrm{lg}}(R/I \otimes_R F_*^i T) \leq p^{i (\gamma - 1)} \mathop{\mathrm{lg}}(T'/IT')$. Set $D = T' \oplus T'$.
It follows that $$\left |\mathop{\mathrm{lg}}(R/I \otimes_R F_*^{e + e_0} M) - p^{e\gamma} \mathop{\mathrm{lg}}(R/I \otimes_R F_*^{ e_0} M)\right | \leq \sum_{i = 0}^{e - 1} \frac{p^{e\gamma - i}}{2}\mathop{\mathrm{lg}}(D/ID) \leq p^{e\gamma}\mathop{\mathrm{lg}}(D/ID).$$ ◻ This corollary, combined with the computation in Lemma [Lemma 12](#lem: numgen){reference-type="ref" reference="lem: numgen"}, immediately shows that the sequence $\mathop{\mathrm{lg}}(R/{I}^{[p^e]})/p^{e\dim R}$, appearing in the definition of Hilbert--Kunz multiplicity, converges by the Cauchy criterion. A convergence estimate then follows by passing to the limit and using a standard field extension argument. **Corollary 24**. *Let $(R, \mathfrak m, k)$ be a local ring of characteristic $p > 0$. There exist a faithfully flat local $R$-algebra $S$ such that $S/\mathfrak mS$ is an algebraically closed field, a finitely generated $S$-module $T$ such that $\dim T < \dim R$, and a nonnegative integer $e_0$ (which can be taken to be $0$ if $R$ is F-finite and reduced) so that for any $\mathfrak m$-primary ideal $I$ $$\left |\operatorname{e}_{HK}(I) - p^{- e_0 \dim R} \mathop{\mathrm{lg}}(R/I^{[p^{e_0}]})\right | \leq \mathop{\mathrm{lg}}_S (T/IT).$$* *Proof.* By Cohen's structure theorem, it is possible to write $\widehat{R}$ as a quotient of a power series ring, $\widehat{R} = k[[x_1, \ldots, x_n]]/J$. Then $S = \overline{k} [[x_1, \ldots, x_n]]/J$ satisfies the assumptions of the corollary. By tensoring a composition series of a finite length $R$-module $M$ with $S$, it is easy to see that $\mathop{\mathrm{lg}}_R (M) = \mathop{\mathrm{lg}}_S (M \otimes_R S)$. In particular, $\operatorname{e}_{HK}(IS) = \operatorname{e}_{HK}(I)$ for any $\mathfrak m$-primary ideal $I$. Hence we may replace $R$ with $S$, and the assertion follows after passing to the limit as $e \to \infty$ in Corollary [Corollary 23](#cor: intermediate convergence){reference-type="ref" reference="cor: intermediate convergence"}.
◻ ## Applications ### Existence of F-signature The estimate in Corollary [Corollary 24](#cor: convergence formula){reference-type="ref" reference="cor: convergence formula"} provides uniform convergence for various families of ideals. This applies to F-signature, as it can be expressed as a limit via a sequence of ideals that was introduced in [@AberbachEnescu; @Yao]. **Lemma 25**. *Let $(R, \mathfrak m, k)$ be an F-finite local domain of characteristic $p > 0$ and dimension $d$, and denote $$I_e = \left\{x \in R \mid \phi (F_*^e x) \in \mathfrak m \text{ for all } \phi \in \operatorname{Hom}(F_*^e R, R) \right\}.$$ Then $$\frac{\mathop{\mathrm{lg}}(R/I_e)}{p^{ed}} = \frac{\max \{N \mid F_*^e R = \oplus^N R \oplus M\}}{\text{generic rank of } F_*^e R}.$$* *In particular, $$\mathop{\mathrm{s}}(R) = \lim_{e \to \infty} \frac{\mathop{\mathrm{lg}}(R/I_e)}{p^{ed}}.$$* *Proof.* Take a decomposition $F_*^e R = R^{\oplus a_e} \oplus M_e$, where $a_e$ is the maximal possible integer. The key observation is that for any $\phi \in \operatorname{Hom}(F_*^e R, R)$ there is a containment $\phi (M_e) \subseteq \mathfrak m$, since otherwise there would be at least one more splitting. Therefore, $\mathfrak m F_*^e R + M_e = \mathfrak m^{\oplus a_e} \oplus M_e \subseteq F_*^e I_e$, so $k^{\oplus a_e}$ surjects onto $F_*^e (R/I_e)$, giving a bound on the length. On the other hand, no element of $R^{\oplus a_e}$ with a unit coordinate lies in $F_*^e I_e$, because by the definition of $I_e$ every coordinate projection maps $F_*^e I_e$ into $\mathfrak m$; hence this surjection is an isomorphism and $$a_e = \mathop{\mathrm{lg}}_R (F_*^e (R/I_e)) = [k^{1/p^e} : k] \mathop{\mathrm{lg}}(R/I_e)$$ and the assertion follows from Theorem [Theorem 10](#thm: Kunz rank){reference-type="ref" reference="thm: Kunz rank"}. ◻ **Theorem 26** (Tucker, [@Tucker]). *The limit in the definition of $\mathop{\mathrm{s}}(R)$ exists.* *Proof.* First of all, note that the definition implies two properties of $I_e$: $I_e^{[p]}\subseteq I_{e + 1}$ and $\phi (F_* I_{e + 1}) \subseteq I_e$ for any $\phi \in \operatorname{Hom}(F_* R, R)$.
In particular, ${\mathfrak m}^{[p^e]} \subseteq I_e$ and it follows from this containment and Corollary [Corollary 24](#cor: convergence formula){reference-type="ref" reference="cor: convergence formula"} that $\left |\operatorname{e}_{HK}(I_e) - \mathop{\mathrm{lg}}(R/I_e)\right | \leq \mathop{\mathrm{lg}}(T/\mathfrak m^{[p^e]} T)$. If $\mathfrak m$ can be generated by $\mu$ elements, then $\mathfrak m^{\mu p^e} \subseteq \mathfrak m^{[p^e]}$. Hence the existence of the Hilbert--Samuel polynomial, $n \mapsto \mathop{\mathrm{lg}}(T/\mathfrak m^nT)$ for $n \gg 0$, allows us to find a constant $C$ such that $\mathop{\mathrm{lg}}(T/\mathfrak m^{[p^e]} T) < C p^{e (\dim R - 1)}$, because $\dim T < \dim R$. The proof is finished after noting that $\lim_{e \to \infty} \operatorname{e}_{HK}(I_e)/p^{e\dim R}$ exists, as this sequence is nonincreasing due to the containment $I_e^{[p]} \subseteq I_{e + 1}$. ◻ ### The localization inequality for Hilbert--Kunz multiplicity as a swap of limits The main application needed in the next section is a formula presenting the Hilbert--Kunz multiplicity of $R_\mathfrak p$ as a limit of Hilbert--Kunz multiplicities of $\mathfrak m$-primary ideals. For this, consider a two-parameter family of ideals $I_{n, e} = ({I}^{[p^e]}, x^{np^e})$ where $I$ is an ideal such that $\dim R/I = 1$ and $x\in \mathfrak m$ is such that $(I,x)$ is $\mathfrak m$-primary. The first step is to get control on the right-hand side of Corollary [Corollary 24](#cor: convergence formula){reference-type="ref" reference="cor: convergence formula"}. **Lemma 27**. *Let $(R, \mathfrak m)$ be a local ring of characteristic $p > 0$ and dimension $d > 0$, $I$ be an ideal such that $\dim R/I = 1$, and $M$ be a finitely generated $R$-module of dimension $d$. If $x \in \mathfrak m$ is such that $(I, x)$ is $\mathfrak m$-primary then there exists a constant $C$ such that $\mathop{\mathrm{lg}}(M/({I}^{[p^e]}, x^{np^e})M) < C p^{ed}n$.* *Proof.* It is enough to show the statement for $M = R$, by passing to a prime filtration of $M$.
In $A = R/{I}^{[p^e]}$ we have exact sequences $$0 \to A/(xA + 0:_A x^n) \xrightarrow{x^n} A/x^{n+1}A \to A/x^{n}A \to 0,$$ so $\mathop{\mathrm{lg}}(A/x^{n+1}A) \leq \mathop{\mathrm{lg}}(A/x^{n}A) + \mathop{\mathrm{lg}}(A/xA)$. Hence, by induction (applied with $x^{p^e}$ in place of $x$) $$\mathop{\mathrm{lg}}(R/({I}^{[p^e]}, x^{np^e})R) \leq n \mathop{\mathrm{lg}}(R/({I}^{[p^e]}, x^{p^e})R)$$ and it remains to note that the Hilbert--Kunz function of $(I, x)$ is bounded by some $Cp^{ed}$ as discussed before. ◻ If $x$ is an element and $I$ is an ideal of a ring, then, with a slight abuse of notation, I will use $xR/I$ to denote the ideal generated by the image of $x$ in $R/I$. **Theorem 28**. *Let $(R, \mathfrak m)$ be a local ring of characteristic $p > 0$, $I$ be an ideal such that $\dim R/I = 1$ and $\operatorname{ht}I = \dim R - 1$, and let $x \in \mathfrak m$ be such that $(I, x)$ is $\mathfrak m$-primary. Then we have $$\lim_{n \to \infty} \frac{\operatorname{e}_{HK}((I, x^n))}{n} = \sum_{\mathfrak p \in \Lambda} \operatorname{e}(xR/\mathfrak p) \operatorname{e}_{HK}(IR_\mathfrak p),$$ where the sum is taken over minimal primes $\mathfrak p$ of $I$ such that $\dim R/\mathfrak p = 1$ and $\dim R_\mathfrak p = \dim R - 1$.* *Proof.* Plugging Lemma [Lemma 27](#lem: two parameter){reference-type="ref" reference="lem: two parameter"} into Corollary [Corollary 24](#cor: convergence formula){reference-type="ref" reference="cor: convergence formula"} gives a convergence estimate $$\left |\operatorname{e}_{HK}((I, x^n)) - \frac{1}{p^{ed}}\mathop{\mathrm{lg}}(R/(I^{[p^{e}]},x^{np^{e}})) \right | < n\frac{C}{p^{e - e_0}} = n\frac{Cp^{e_0}}{p^{e}}.$$ Therefore, the bisequence $a_{n,e} = \frac{1}{np^{ed}}\mathop{\mathrm{lg}}(R/({I}^{[p^e]},x^{np^e}))$ converges as $e \to \infty$ uniformly in $n$. Since the limits $\lim_{e \to \infty} a_{n,e}$ and $\lim_{n \to \infty} a_{n, e} = \operatorname{e}(x^{p^e}R/{I}^{[p^e]})/p^{ed}$ exist, a standard result on interchanging limits (the Moore--Osgood theorem) says that the double limits exist and are interchangeable.
Hence, by the associativity formula for multiplicity applied in $R/{I}^{[p^e]}$ $$\begin{aligned} \lim_{e \to \infty} \lim_{n \to \infty} a_{n, e} &= \lim_{e \to \infty} \frac{1}{p^{e(d-1)}} \operatorname{e}((x)R/{I}^{[p^e]}) = \lim_{e \to \infty} \frac{1}{p^{e(d-1)}} \sum_{\mathfrak p \in \mathop{\mathrm{Minh}}(R/{I}^{[p^e]})} \operatorname{e}(xR/\mathfrak p) \mathop{\mathrm{lg}}(R_\mathfrak p/{I}^{[p^e]}R_\mathfrak p) \\ &=\sum_{\mathfrak p \in \mathop{\mathrm{Minh}}(R/I)} \operatorname{e}(xR/\mathfrak p) \lim_{e \to \infty} \frac{1}{p^{e(d-1)}} \mathop{\mathrm{lg}}(R_\mathfrak p/{I}^{[p^e]}R_\mathfrak p) = \sum_{\mathfrak p \in \Lambda} \operatorname{e}(xR/\mathfrak p) \operatorname{e}_{HK}(IR_\mathfrak p), \end{aligned}$$ where we first used that $\mathop{\mathrm{Minh}}(R/{I}^{[p^e]}) = \mathop{\mathrm{Minh}}(R/I)$ and then replaced this set by $\Lambda$ because, with the normalization by $p^{e(d-1)}$, the terms with $\dim R_\mathfrak p < d - 1$ contribute zero in the limit. ◻ An easy consequence is a proof of the localization property for Hilbert--Kunz multiplicity. **Corollary 29**. *Let $(R, \mathfrak m)$ be a local ring of characteristic $p > 0$, and $\mathfrak p$ be a prime ideal such that $\dim R_\mathfrak p + \dim R/\mathfrak p = \dim R$. Then $\operatorname{e}_{HK}(R_\mathfrak p) \leq \operatorname{e}_{HK}(R)$.* *Proof.* By localizing at appropriate primes in a saturated chain $\mathfrak p \subset \mathfrak q \subset \cdots \subset \mathfrak m$, it is enough to consider the case $\dim R/\mathfrak p = 1$. Let $x$ be any element such that $(\mathfrak p, x)$ is $\mathfrak m$-primary.
By Theorem [Theorem 28](#thm: Hilbert-Kunz descent){reference-type="ref" reference="thm: Hilbert-Kunz descent"} $$\lim_{n \to \infty} \frac{\operatorname{e}_{HK}((\mathfrak p, x^n))}{n} = \operatorname{e}(xR/\mathfrak p) \operatorname{e}_{HK}(R_\mathfrak p).$$ On the other hand, by ([\[item: filtration\]](#item: filtration){reference-type="ref" reference="item: filtration"}) of Proposition [Proposition 14](#prop: HK properties){reference-type="ref" reference="prop: HK properties"} $$\operatorname{e}_{HK}((\mathfrak p, x^n)) \leq \mathop{\mathrm{lg}}(R/(\mathfrak p, x^n)) \operatorname{e}_{HK}(R).$$ Thus, after dividing by $n$ and taking the limit as $n \to \infty$, using that $\mathop{\mathrm{lg}}(R/(\mathfrak p, x^n))/n \to \operatorname{e}(xR/\mathfrak p)$, we obtain $\operatorname{e}(xR/\mathfrak p) \operatorname{e}_{HK}(R_\mathfrak p) \leq \operatorname{e}(xR/\mathfrak p) \operatorname{e}_{HK}(R)$; dividing by $\operatorname{e}(xR/\mathfrak p) > 0$ finishes the proof. ◻ # Equimultiplicity for Hilbert--Kunz multiplicity {#sec: Hilbert-Kunz} I will describe a necessary condition for $\operatorname{e}_{HK}$-equimultiplicity in the special case where $\dim R/\mathfrak p = 1$. This case is easier to treat but still suffices for the applications. The ideas originate from [@SmirnovEqui]; however, it was assumed there that $R/\mathfrak p$ is regular as in Theorem [Theorem 1](#thm: multiplicity criterion){reference-type="ref" reference="thm: multiplicity criterion"}. ## A review of tight closure The characterization of equimultiplicity will be given in terms of tight closure, so let me list the needed facts from [@HochsterHuneke1]. A good introduction to the tight closure theory is [@Hochster]. **Theorem 30**. *Let $R$ be a Noetherian ring of characteristic $p > 0$.* 1. *$x \in I^*$ if and only if there exists $c\in R$, not contained in any minimal prime, such that $cx^{p^e} \in {I}^{[p^e]}$ for all $e \gg 0$.* 2. *An element $c$ not contained in any minimal prime is called a *test element* if $cx^{p^e} \in {I}^{[p^e]}$ for all $I$, all $x \in I^*$, and all $e \geq 1$.* 3. *Test elements exist if $R$ is complete and reduced [@HHSmooth].* 4.
*If $R$ is formally unmixed (i.e., $\mathop{\mathrm{Ass}}(\widehat{R}) = \mathop{\mathrm{Minh}}(\widehat{R})$) then for a pair of $\mathfrak m$-primary ideals $I \subset J$ we have $\operatorname{e}_{HK}(I) = \operatorname{e}_{HK}(J)$ if and only if $J \subseteq I^*$.* **Lemma 31**. *Suppose that $(R, \mathfrak m)$ has a test element $c$. Let $I$ be an ideal and $x \in \mathfrak m$ be an element. Then $$\cap_{n \geq 1} (I + x^n)^* = I^*.$$* *Proof.* Take an element $z$ in the intersection on the left. Since $c$ is a test element, $cz^{p^e} \in ({I}^{[p^e]}, x^{np^e})$ for all $e, n \geq 1$. But ${I}^{[p^e]} = \cap_n ({I}^{[p^e]}, x^{np^e})$ by Krull's intersection theorem in $R/{I}^{[p^e]}$, so it follows that $z \in I^*$. ◻ **Lemma 32**. *Let $(R, \mathfrak m)$ be a formally unmixed local ring of dimension $d$ and $I$ be an $\mathfrak m$-primary ideal. Suppose that $L_e$ is a sequence of ideals such that* 1. *${I}^{[p^e]} \subseteq L_e$,* 2. *$L_e^{[p]} \subseteq L_{e + 1}$,* 3. *$\lim_{e \to \infty} \mathop{\mathrm{lg}}(R/L_e)/p^{ed} = \operatorname{e}_{HK}(I)$.* *Then $L_e \subseteq ({I}^{[p^e]})^*$.* *Proof.* The assumptions give containments $I^{[p^{e + e'}]} \subseteq L_e^{[p^{e'}]} \subseteq L_{e + e'}$. Then it is easy to see that $\operatorname{e}_{HK}(L_e) = \operatorname{e}_{HK}({I}^{[p^e]})$ and the assertion follows from Theorem [Theorem 30](#thm: tc review){reference-type="ref" reference="thm: tc review"}. ◻ There is also a converse. **Lemma 33**. *Let $(R, \mathfrak m)$ be a local ring of characteristic $p > 0$ and $I$ be an $\mathfrak m$-primary ideal. Let $I_e$ be a sequence of ideals such that $I^{[p^e]} \subseteq I_e \subseteq (I^{[p^e]})^*$.
If $R$ has a test element $c$, then $$\lim\limits_{e \to \infty} \frac{1}{p^{e\dim R}} \mathop{\mathrm{lg}}(R/I_e) = \operatorname{e}_{HK}(I).$$* *Proof.* Consider an exact sequence $$R \xrightarrow{c} R \to R/(c) \to 0.$$ Since $cI_e \subseteq c({I}^{[p^e]})^* \subseteq {I}^{[p^e]}$, we obtain that the sequence $$R/I_e \xrightarrow{c} R/{I}^{[p^e]} \to R/(c, {I}^{[p^e]}) \to 0$$ is still exact and implies the estimate $$\mathop{\mathrm{lg}}(R/I_e) \leq \mathop{\mathrm{lg}}(R/{I}^{[p^e]}) \leq \mathop{\mathrm{lg}}(R/I_e) + \mathop{\mathrm{lg}}(R/(c, {I}^{[p^e]})).$$ Note that $\dim R/cR < \dim R$ since $c$ is not contained in any minimal prime, so the estimate gives that $0 = \lim\limits_{e \to \infty} \frac{1}{p^{e\dim R}} \left ( \mathop{\mathrm{lg}}(R/I_e) - \mathop{\mathrm{lg}}(R/{I}^{[p^e]}) \right)$ and the assertion follows. ◻ ## Equimultiplicity {#equimultiplicity} As a starting point recall that Theorem [Theorem 28](#thm: Hilbert-Kunz descent){reference-type="ref" reference="thm: Hilbert-Kunz descent"} gives a descent formula for Hilbert--Kunz multiplicity: if $\dim R/\mathfrak p = 1$ and $\dim R_\mathfrak p = \dim R - 1$ then $$\lim_{n \to \infty} \frac{\operatorname{e}_{HK}((\mathfrak p, x^n))}{n} = \operatorname{e}(xR/\mathfrak p) \operatorname{e}_{HK}(R_\mathfrak p).$$ A key observation is that the sequence on the left-hand side is nonincreasing. **Lemma 34**. *Suppose that $\dim R/\mathfrak p = 1$ and $x \notin \mathfrak p$. Then $$\frac{\operatorname{e}_{HK}((\mathfrak p, x^n))}{n} \geq \frac{\operatorname{e}_{HK}((\mathfrak p, x^{n+1}))}{n+1}.$$ Moreover, the equality holds if and only if $$\operatorname{e}_{HK}((\mathfrak p, x^n)) = n \lim_{e \to \infty} \frac{\mathop{\mathrm{lg}}(R/(x^{p^e}R + \mathfrak p^{[p^e]}:_Rx^{np^e}))}{p^{e\dim R}}.$$* *Proof.* First, take $I$ to be any ideal such that $\dim R/I = 1$ and the image of $x$ is a parameter in $R/I$.
We have the exact sequence $$\label{eq: filtration} 0 \to R/(xR + I:_R x^k) \to R/(I, x^{k+1}) \to R/(I, x^k) \to 0.$$ From these exact sequences for various values of $k$ we deduce that $$\begin{aligned} \mathop{\mathrm{lg}}(R/(I, x^n)) &= \mathop{\mathrm{lg}}(R/(I, x^{n-1})) + \mathop{\mathrm{lg}}(R/(xR + I:_R x^{n-1})) = \sum_{i = 0}^{n-1} \mathop{\mathrm{lg}}(R/(xR + I:_Rx^i)) \\&\geq n \mathop{\mathrm{lg}}(R/(xR + I:_Rx^{n})). \end{aligned}$$ Thus, one may estimate from ([\[eq: filtration\]](#eq: filtration){reference-type="ref" reference="eq: filtration"}) that $$\mathop{\mathrm{lg}}(R/(I, x^{n+1})) = \mathop{\mathrm{lg}}(R/(I, x^n)) + \mathop{\mathrm{lg}}(R/(xR + I:_Rx^{n})) \leq \mathop{\mathrm{lg}}(R/(I, x^n)) + \frac{1}{n} \mathop{\mathrm{lg}}(R/(I, x^n)).$$ Plugging $I = \mathfrak p^{[p^e]}$ and replacing $x$ by $x^{p^e}$ yields the estimate $$\begin{aligned} %\label{eq: filtering} \mathop{\mathrm{lg}}(R/(\mathfrak p^{[p^e]}, x^{p^e(n+1)})) &= \mathop{\mathrm{lg}}(R/(\mathfrak p^{[p^e]}, x^{np^e})) + \mathop{\mathrm{lg}}(R/(x^{p^e}R + \mathfrak p^{[p^e]}:_Rx^{np^e})) \\ &\leq \frac{n+1}{n} \mathop{\mathrm{lg}}(R/(\mathfrak p^{[p^e]}, x^{np^e})) \end{aligned}$$ which implies both assertions after taking the limits. ◻ **Remark 35**. Before the next proof let me recall that a parameter $x$ in a one-dimensional ring $(S, \mathfrak m)$ satisfies the inequality $\operatorname{e}(xS) \leq \mathop{\mathrm{lg}}(S/xS)$ and the equality holds if and only if $S$ is Cohen-Macaulay. These two facts easily follow from ([\[eq: filtration\]](#eq: filtration){reference-type="ref" reference="eq: filtration"}). Note that if $S = R/I$ then it is Cohen-Macaulay if and only if $\mathfrak m$ is not an associated prime of $I$, i.e., $I :_R \mathfrak m^\infty \coloneqq \cup_n I:_R \mathfrak m^n = I$. **Theorem 36**.
*Let $(R, \mathfrak m)$ be a formally unmixed local ring of characteristic $p > 0$ and dimension $d$, $c \in R$ be a test element, and $\mathfrak p$ be a prime ideal such that $\dim R/\mathfrak p = 1$ and $\dim R_\mathfrak p = d - 1$. If $\operatorname{e}_{HK}(R) = \operatorname{e}_{HK}(R_\mathfrak p)$ then $(\mathfrak p^{[p^e]})^* :_R \mathfrak m^{\infty} = (\mathfrak p^{[p^e]})^*$ for all $e$.* *Proof.* Let $x \in \mathfrak m\setminus \mathfrak p$. Note that by ([\[item: filtration\]](#item: filtration){reference-type="ref" reference="item: filtration"}) of Proposition [Proposition 14](#prop: HK properties){reference-type="ref" reference="prop: HK properties"} $\operatorname{e}_{HK}((\mathfrak p, x)) \leq \operatorname{e}_{HK}(R) \mathop{\mathrm{lg}}(R/(\mathfrak p + xR)) = \operatorname{e}_{HK}(R)\operatorname{e}(xR/\mathfrak p)$ where we have used that $x$ is a parameter in the one-dimensional domain $R/\mathfrak p$. Thus by Lemma [Lemma 34](#lem: HK filtration){reference-type="ref" reference="lem: HK filtration"} for all $n$ $$\label{eq: equimultiplicity inequalities} \operatorname{e}_{HK}(R)\operatorname{e}(xR/\mathfrak p) \geq \operatorname{e}_{HK}((\mathfrak p, x)) \geq \frac{\operatorname{e}_{HK}((\mathfrak p, x^n))}{n} \geq \frac{\operatorname{e}_{HK}((\mathfrak p, x^{n+1}))}{n+1} \geq \operatorname{e}(xR/\mathfrak p) \operatorname{e}_{HK}(R_\mathfrak p).$$ Since $\operatorname{e}_{HK}(R) = \operatorname{e}_{HK}(R_\mathfrak p)$ we have equality throughout, so Lemma [Lemma 34](#lem: HK filtration){reference-type="ref" reference="lem: HK filtration"} gives that for all $n$ $$\label{eq: colons HK} \operatorname{e}_{HK}((\mathfrak p, x)) = \frac{1}{n}\operatorname{e}_{HK}((\mathfrak p, x^n)) = \lim_{e \to \infty} \frac{\mathop{\mathrm{lg}}(R/(x^{p^e}R + \mathfrak p^{[p^e]}:_Rx^{np^e}))}{p^{e\dim R}}.$$ It is then easy to check that for any $n$ the sequence of ideals $L_e = x^{p^e}R + \mathfrak p^{[p^e]}:_Rx^{np^e}$ satisfies the assumptions of Lemma [Lemma
32](#lemma: filtration in tight closure){reference-type="ref" reference="lemma: filtration in tight closure"} for $I = (\mathfrak p, x)$. Therefore, for all $e$ and $n$ $$(x^{p^e}R + \mathfrak p^{[p^e]}:_Rx^{np^e}) \subseteq (x^{p^e}R + \mathfrak p^{[p^e]})^*.$$ Thus $\mathfrak p^{[p^e]}:_R \mathfrak m^{\infty} = \mathfrak p^{[p^e]}:_R x^{\infty} \subseteq (x^{p^e}R + \mathfrak p^{[p^e]})^*$. Because $x$ was arbitrary, we may replace it with its powers, so for all $e$ $$\mathfrak p^{[p^e]}:_R \mathfrak m^{\infty} \subseteq \cap_n (x^{np^e}R + \mathfrak p^{[p^e]})^* = (\mathfrak p^{[p^e]})^*.$$ Suppose that there is $a \in R$ such that $ax \in (\mathfrak p^{[p^e]})^*$, i.e., $ca^{p^{e'}} x^{p^{e'}} \in \mathfrak p^{[p^{e + e'}]}$ for all $e' \gg 0$. Hence, $ca^{p^{e'}} \in \mathfrak p^{[p^{e + e'}]} : \mathfrak m^\infty \subseteq (\mathfrak p^{[p^{e + e'}]})^*$, which shows that $c^2 a^{p^{e'}} \in \mathfrak p^{[p^{e + e'}]}$ for all $e' \gg 0$, i.e., $a \in (\mathfrak p^{[p^e]})^*$. Therefore, $(\mathfrak p^{[p^e]})^* : x \subseteq (\mathfrak p^{[p^e]})^*$ and the assertion follows. ◻ **Corollary 37**. *Let $(R, \mathfrak m)$ be a formally unmixed local ring of characteristic $p > 0$ and dimension $d$, $c$ be a test element, and $\mathfrak p$ be a prime ideal such that $\dim R/\mathfrak p = 1$ and $\dim R_\mathfrak p = d - 1$. If $\operatorname{e}_{HK}(R) = \operatorname{e}_{HK}(R_\mathfrak p)$ then for any $x \in \mathfrak m \setminus \mathfrak p$ we have equality $$\mathop{\mathrm{lg}}(R/((\mathfrak p^{[p^e]})^* + xR)) = \operatorname{e}(xR/(\mathfrak p^{[p^e]})^*) = \operatorname{e}(xR/\mathfrak p) \mathop{\mathrm{lg}}(R_\mathfrak p/(\mathfrak p^{[p^e]})^*R_\mathfrak p).$$* *Proof.* First, if $A$ is a one-dimensional local ring and $x$ is a parameter, then $\operatorname{e}(xA) = \mathop{\mathrm{lg}}(A/xA) - \mathop{\mathrm{lg}}(0:_A x)$.
Now $A = R/(\mathfrak p^{[p^e]})^*$ is Cohen-Macaulay by Theorem [Theorem 36](#thm: equimultiple property){reference-type="ref" reference="thm: equimultiple property"}, so $\mathop{\mathrm{lg}}(A/xA) = \operatorname{e}(xA)$. Second, by the associativity formula for multiplicity, because $A$ has only one minimal prime $\mathfrak pA$, $\operatorname{e}(xA) = \operatorname{e}(xA/\mathfrak p)\mathop{\mathrm{lg}}(A_\mathfrak p)$. ◻ In the case where $R/\mathfrak p$ is regular, the tight closure condition is not just necessary but is also sufficient. **Corollary 38**. *Let $(R, \mathfrak m)$ be a formally unmixed local ring of characteristic $p > 0$ and dimension $d$, $c$ be a test element, and $\mathfrak p$ be a prime ideal such that $R/\mathfrak p$ is a one-dimensional regular ring and $\dim R_\mathfrak p = d - 1$. Then $\operatorname{e}_{HK}(R) = \operatorname{e}_{HK}(R_\mathfrak p)$ if and only if $(\mathfrak p^{[p^e]})^* :_R \mathfrak m^{\infty} = (\mathfrak p^{[p^e]})^*$ for all $e$.* *Proof.* The goal is to show the converse to Theorem [Theorem 36](#thm: equimultiple property){reference-type="ref" reference="thm: equimultiple property"}. By the assumption, there is an element $x \in R$ such that $\mathfrak m = \mathfrak p + xR$ and the image of $x$ is a regular element in all $R/(\mathfrak p^{[p^e]})^*$, a one-dimensional Cohen--Macaulay ring. 
Hence by Lemma [Lemma 33](#lemma: tc limit){reference-type="ref" reference="lemma: tc limit"} $$\begin{aligned} \operatorname{e}_{HK}(\mathfrak m) &= \lim_{e \to \infty} \frac{\mathop{\mathrm{lg}}(R/((\mathfrak p^{[p^e]})^* + x^{p^e}R))}{p^{ed}} = \lim_{e \to \infty} \frac{\operatorname{e}(x^{p^e}R/(\mathfrak p^{[p^e]})^*)}{p^{ed}} \\&= \lim_{e \to \infty} \frac{\operatorname{e}(x^{p^e}R/\mathfrak p) \mathop{\mathrm{lg}}(R_\mathfrak p/(\mathfrak p^{[p^e]})^*R_\mathfrak p)}{p^{ed}} = \lim_{e \to \infty} \frac{\mathop{\mathrm{lg}}(R_\mathfrak p/(\mathfrak p^{[p^e]})^*R_\mathfrak p)}{p^{e(d-1)}} = \operatorname{e}_{HK}(R_\mathfrak p).\end{aligned}$$ ◻ While it looks different, this formula is a positive characteristic analogue of Theorem [Theorem 1](#thm: multiplicity criterion){reference-type="ref" reference="thm: multiplicity criterion"}. Namely, by the characterization of the analytic spread in terms of integral closure given by Burch in [@Burch], Theorem [Theorem 1](#thm: multiplicity criterion){reference-type="ref" reference="thm: multiplicity criterion"} asserts that if $R/\mathfrak p$ is a DVR then $\operatorname{e}(R_\mathfrak p) = \operatorname{e}(R)$ if and only if $\overline{\mathfrak p^n}$ is $\mathfrak p$-primary for all $n$. **Remark 39**. In the case where $\dim R/\mathfrak p > 1$, the characterization is slightly more complicated. Loosely speaking, if $\mathfrak p$ is an $\operatorname{e}_{HK}$-equimultiple prime then $\mathfrak p^{[p^e]}$ should be "Cohen-Macaulay up to tight closure": if $m = \dim R/\mathfrak p$, then there exist elements $x_1, \ldots, x_m$ such that $x_{i + 1}$ is not a zerodivisor modulo $(\mathfrak p^{[p^e]}, x_1^{p^e}, \ldots, x_i^{p^e})^*$ for all $e$ and all $i < m$. See [@SmirnovEqui]. ## Applications ### Rigidity in weakly F-regular rings Theorem [Theorem 17](#theorem: F-signature is rigid){reference-type="ref" reference="theorem: F-signature is rigid"} shows that F-signature is rigid. 
A similar result holds for Hilbert--Kunz multiplicity provided that tight closure is a trivial operation. **Definition 40**. We say that a ring $R$ is weakly F-regular if $I^* = I$ for all ideals $I$. **Proposition 41** (Rigidity). *Let $(R, \mathfrak m)$ be a weakly F-regular local ring and $\mathfrak p$ be a prime ideal such that $\dim R/\mathfrak p + \dim R_\mathfrak p = \dim R$. Then $\operatorname{e}_{HK}(\mathfrak m) = \operatorname{e}_{HK}(\mathfrak p)$ if and only if $\mathop{\mathrm{lg}}(R/\mathfrak m^{[p^e]}) = p^{e \dim R/\mathfrak p} \mathop{\mathrm{lg}}(R_\mathfrak p/\mathfrak p^{[p^e]}R_\mathfrak p)$ for all $e \geq 1$.* *In particular, if $R$ is a weakly F-regular F-finite domain then $\operatorname{e}_{HK}(\mathfrak m) = \operatorname{e}_{HK}(\mathfrak p)$ if and only if the numbers of minimal generators of $F_*^e R$ and $F_*^e R_\mathfrak p$ are equal for all $e \geq 1$.* *Proof.* Suppose that $\operatorname{e}_{HK}(\mathfrak m) = \operatorname{e}_{HK}(\mathfrak p)$ and use induction on $\dim R/\mathfrak p$ to reduce to the case where $\dim R/\mathfrak p = 1$. Let $x \in \mathfrak m$ be such that $\mathfrak p + xR$ is $\mathfrak m$-primary. By Corollary [Corollary 37](#cor: colength and multiplicity){reference-type="ref" reference="cor: colength and multiplicity"} for all $e \geq 1$ $$\mathop{\mathrm{lg}}(R/(\mathfrak p^{[p^e]} + x^{p^e}R)) = p^e \operatorname{e}(xR/\mathfrak p) \mathop{\mathrm{lg}}(R_\mathfrak p/\mathfrak p^{[p^e]}R_\mathfrak p).$$ In particular, $\operatorname{e}_{HK}(\mathfrak p + xR) = \operatorname{e}(xR/\mathfrak p) \operatorname{e}_{HK}(\mathfrak p) = \operatorname{e}(xR/\mathfrak p) \operatorname{e}_{HK}(\mathfrak m)$.
Now, proceed as in the proof of Proposition [Proposition 14](#prop: HK properties){reference-type="ref" reference="prop: HK properties"} and take a composition series of $\mathfrak m/(\mathfrak p + xR)$: $$I_0 = \mathfrak p + xR \subset I_1 \subset \cdots \subset I_\ell = \mathfrak m,$$ where $I_{k+1} = I_k + u_kR$ with $\mathfrak m u_k \subseteq I_k$. Raising to a Frobenius power preserves the containments, hence $$\mathop{\mathrm{lg}}(R/(\mathfrak p^{[p^e]} + x^{p^e}R)) = \mathop{\mathrm{lg}}(R/\mathfrak m^{[p^e]}) + \sum_{k = 0}^{\ell - 1} \mathop{\mathrm{lg}}(R/(I_k^{[p^e]} : u_k^{p^e})) \leq \mathop{\mathrm{lg}}(R/(\mathfrak p + xR)) \mathop{\mathrm{lg}}(R/\mathfrak m^{[p^e]}).$$ Since the limits of the left-hand side and the right-hand side agree, for all $k$ we must have $$\lim_{e \to \infty} \frac{\mathop{\mathrm{lg}}(R/(I_k^{[p^e]} : u_k^{p^e}))}{p^{e \dim R}} = \operatorname{e}_{HK}(\mathfrak m).$$ By Lemma [Lemma 32](#lemma: filtration in tight closure){reference-type="ref" reference="lemma: filtration in tight closure"} $\mathfrak m^{[p^e]} \subseteq I_k^{[p^e]} : u_k^{p^e} \subseteq (\mathfrak m^{[p^e]})^* = \mathfrak m^{[p^e]}$, hence the equality holds and it follows that $$p^e \operatorname{e}(xR/\mathfrak p) \mathop{\mathrm{lg}}(R_\mathfrak p/\mathfrak p^{[p^e]}R_\mathfrak p) = \mathop{\mathrm{lg}}(R/(\mathfrak p^{[p^e]} + x^{p^e}R)) = \mathop{\mathrm{lg}}(R/(\mathfrak p + xR)) \mathop{\mathrm{lg}}(R/\mathfrak m^{[p^e]})$$ and one direction follows. The converse is clear. ◻ ### Hilbert--Kunz multiplicity and singularity I will now present a new proof of the celebrated theorem of Watanabe and Yoshida which opened the use of Hilbert--Kunz theory in the study of singularities. **Theorem 42** (Watanabe--Yoshida). *Let $(R, \mathfrak m)$ be a local ring of characteristic $p > 0$.
Then $R$ is regular if and only if $\operatorname{e}_{HK}(R) = 1$ and $R$ is formally unmixed (i.e., $\mathop{\mathrm{Ass}}\widehat{R} = \mathop{\mathrm{Minh}}(\widehat{R})$).* It seems that all proofs of this statement pass through the following crucial lemma which reduces the assertion to the Kunz theorem. **Lemma 43** (Watanabe--Yoshida). *Let $(R, \mathfrak m)$ be a local ring such that $\operatorname{e}_{HK}(R) = 1$. Suppose that there exists an ideal $I \subseteq \mathfrak m^{[p]}$ such that $\operatorname{e}_{HK}(I) \geq \mathop{\mathrm{lg}}(R/I)$. Then $R$ is regular.* *Proof.* By Proposition [Proposition 14](#prop: HK properties){reference-type="ref" reference="prop: HK properties"}, $\operatorname{e}_{HK}(I) \leq \mathop{\mathrm{lg}}(\mathfrak m^{[p]}/I) \operatorname{e}_{HK}(R) + \operatorname{e}_{HK}(\mathfrak m^{[p]}) = \mathop{\mathrm{lg}}(\mathfrak m^{[p]}/I) + p^d$. Therefore, $$\mathop{\mathrm{lg}}(\mathfrak m^{[p]}/I) + p^d = \operatorname{e}_{HK}(I) \geq \mathop{\mathrm{lg}}(R/I) = \mathop{\mathrm{lg}}(\mathfrak m^{[p]}/I) + \mathop{\mathrm{lg}}(R/\mathfrak m^{[p]})$$ and we obtain that $p^d \geq \mathop{\mathrm{lg}}(R/\mathfrak m^{[p]})$. This implies that $R$ is regular by Theorem [Theorem 7](#theorem Kunz){reference-type="ref" reference="theorem Kunz"}. ◻ Thus the Watanabe--Yoshida theorem amounts to finding an ideal $I$ fitting the assumptions of the lemma. There are several possible options. The original proof of Watanabe and Yoshida uses a *parameter* ideal, but the inequality $\operatorname{e}_{HK}(I) \geq \mathop{\mathrm{lg}}(R/I)$ requires $R$ to be Cohen-Macaulay and this is hard to show. Second, in [@MQSLength2] we showed that $\operatorname{e}_{HK}(I) \geq \mathop{\mathrm{lg}}(R/I)$ holds for any *integrally closed* ideal, so one can take for example $I = \overline{\mathfrak m^n}$ for any $n \gg 0$. The third construction, due to Huneke and Yao in [@HunekeYao], is the simplest of all proofs that I know. 
In this case, $I = \mathfrak p^{(n)} + xR$ where $\mathfrak p$ is a prime such that $\dim R/\mathfrak p = 1$ and $R_\mathfrak p$ is regular, $x \in \mathfrak m^{[p]} \setminus \mathfrak p$, and $n$ is taken sufficiently large for $I$ to be contained in $\mathfrak m^{[p]}$; such $n$ exists by Chevalley's lemma. Such primes exist: if $\mathfrak p$ is such that $\dim R/\mathfrak p = 1$ and $\dim R_\mathfrak p = \dim R - 1$, then a) $\operatorname{e}_{HK}(R_\mathfrak p) = \operatorname{e}_{HK}(R)$ by the localization property, and b) $R_\mathfrak p$ is regular by induction on $\dim R$. The proof below also starts with a prime $\mathfrak p$ with the above properties, but it uses $\operatorname{e}_{HK}$-equimultiplicity to construct the ideal $I$. This approach conceptualizes the construction of Huneke and Yao, although it requires a lot more work to build the equimultiplicity machinery. Last, I want to note that if $R$ is strongly F-regular, then one can avoid the Watanabe--Yoshida lemma and modify the proof of Theorem [Theorem 19](#theorem signature 1 iff regular){reference-type="ref" reference="theorem signature 1 iff regular"} to get an easy proof; see [@PolstraSmirnov]. However, there does not seem to be an easy way to show that a complete domain with $\operatorname{e}_{HK}(R) = 1$ is strongly F-regular. *The proof of the Watanabe--Yoshida theorem.* If $R$ is regular, then $\operatorname{e}_{HK}(R) = 1$ by the Kunz theorem; the main direction is the converse. The proof is by induction on $\dim R$. The base case $\dim R = 0$ is trivial since $1 = \operatorname{e}_{HK}(R) = \mathop{\mathrm{lg}}(R)$, so $R$ is a field. It is enough to assume that $R$ is complete, as the assumptions pass to the completion. It also suffices to assume that $R$ is reduced. Namely, $\operatorname{e}_{HK}(R) \geq \operatorname{e}_{HK}(R_{\operatorname {red}}) \geq 1$, so $\operatorname{e}_{HK}(R_{\operatorname {red}}) = 1$.
Now, if $R_{\operatorname {red}}$ is regular, then $\mathfrak p = \sqrt{0}$ is prime and, by the associativity formula, $$1 = \operatorname{e}_{HK}(R) = \operatorname{e}_{HK}(R/\mathfrak p) \mathop{\mathrm{lg}}(R_\mathfrak p) = \mathop{\mathrm{lg}}(R_\mathfrak p).$$ Hence, $R_\mathfrak p$ is a field. Because $\mathop{\mathrm{Ass}}(R)= \{\mathfrak p\}$, $R\setminus \mathfrak p$ consists of nonzerodivisors, so $\mathfrak p = 0$ and $R = R_{\operatorname {red}}$ is regular. It is known by [@HHSmooth] that a complete reduced ring has a test element. As explained before the proof, there is a prime ideal $\mathfrak p$ such that $\dim R/\mathfrak p = 1$ and $\operatorname{e}_{HK}(R_\mathfrak p) = 1$, so $R_\mathfrak p$ is regular by induction. **Claim 1**. For $L_e = (\mathfrak p^{[p^e]})^* + x^{p^e}R$ we have $\mathop{\mathrm{lg}}(R/L_e) = \operatorname{e}_{HK}(L_e)$. *Proof.* First, by Corollary [Corollary 37](#cor: colength and multiplicity){reference-type="ref" reference="cor: colength and multiplicity"} $$\mathop{\mathrm{lg}}(R/((\mathfrak p^{[p^e]})^* + x^{p^e}R)) = \operatorname{e}(x^{p^e}R/(\mathfrak p^{[p^e]})^*) = p^e\operatorname{e}(xR/\mathfrak p) \mathop{\mathrm{lg}}(R_\mathfrak p/(\mathfrak p^{[p^e]})^*R_\mathfrak p).$$ Since $R_\mathfrak p$ is regular, tight closure is trivial in $R_\mathfrak p$ and we compute $\mathop{\mathrm{lg}}(R_\mathfrak p/(\mathfrak p^{[p^e]})^*R_\mathfrak p) = \mathop{\mathrm{lg}}(R_\mathfrak p/\mathfrak p^{[p^e]}R_\mathfrak p) = p^{e\dim R_\mathfrak p}$. Therefore $$\mathop{\mathrm{lg}}(R/((\mathfrak p^{[p^e]})^* + x^{p^e}R)) = p^{e\dim R} \operatorname{e}(xR/\mathfrak p).$$ Second, $\operatorname{e}_{HK}((\mathfrak p^{[p^e]})^* + x^{p^e}R) = \operatorname{e}_{HK}((\mathfrak p+ xR)^{[p^e]}) = p^{e\dim R} \operatorname{e}_{HK}(\mathfrak p + xR)$, because $(\mathfrak p^{[p^e]})^* + x^{p^e}R$ is contained in the tight closure of ${(\mathfrak p + xR)}^{[p^e]}$.
But then the inequalities ([\[eq: equimultiplicity inequalities\]](#eq: equimultiplicity inequalities){reference-type="ref" reference="eq: equimultiplicity inequalities"}) in the proof of Theorem [Theorem 36](#thm: equimultiple property){reference-type="ref" reference="thm: equimultiple property"} show that $\operatorname{e}_{HK}(\mathfrak p + xR) = \operatorname{e}(xR/\mathfrak p)\operatorname{e}_{HK}(R_\mathfrak p) = \operatorname{e}(xR/\mathfrak p)$ and the claim follows. ◻ It remains to show that $L_e \subseteq \mathfrak m^{[p]}$ for a large $e$. This follows from Chevalley's lemma or from the properties of integral closure: there is a constant $c$ such that $L_e \subseteq \overline{(\mathfrak p, x)^e} \subseteq (\mathfrak p, x)^{e - c}$ because $R$ is a complete domain; see [@HunekeSwanson Proposition 5.3.4]. The assertion now follows from Lemma [Lemma 43](#lem: WY condition){reference-type="ref" reference="lem: WY condition"}. ◻ ### An application to semicontinuity The study of semicontinuity in the theory of F-invariants originates from [@EnescuShimomoto]. It is known by [@SmirnovSemi; @Lyu] that Hilbert--Kunz multiplicity is upper semicontinuous on the spectrum of an excellent ring; see also [@Polstra; @PolstraTucker] for F-signature, and [@SPYGlobal] for Frobenius Betti numbers. However, it was originally believed that Hilbert--Kunz multiplicity would be strongly upper semicontinuous, just as the multiplicity is. I will now present an argument from [@SmirnovEqui] that shows that this is not the case. The counter-example is based on Monsky's computation of Hilbert--Kunz multiplicity for quartics in $3$ variables. The quartics split into two families, which were treated by Monsky in [@MonskyQP; @MonskyQL]. **Theorem 44** (Monsky). *Let $K$ be an algebraically closed field of characteristic $2$. For $\alpha \in K$ let $R_\alpha = K[[x,y,z]]/(z^4 +xyz^2 + (x^3 + y^3)z + \alpha x^2y^2)$. Then* 1.
*$\operatorname{e}_{HK}(R_\alpha) = 3 + \frac 12$, if $\alpha = 0$,* 2. *$\operatorname{e}_{HK}(R_\alpha) = 3 + 4^{-m}$, if $\alpha \neq 0$ is algebraic over $\mathbb Z/2\mathbb Z$, where $m = [\mathbb Z/2\mathbb Z(\lambda): \mathbb Z/2\mathbb Z]$ for $\lambda$ such that $\alpha = \lambda^2 + \lambda$,* 3. *$\operatorname{e}_{HK}(R_\alpha) = 3$ if $\alpha$ is transcendental over $\mathbb Z/2\mathbb Z$.* *Proof.* The last two cases are computed by Monsky in [@MonskyQP]. In the first case there is a factorization $$z^4 +xyz^2 + (x^3 + y^3)z = z (x + y + z) (x^2 + y^2 + z^2 + zx + zy + xy) = z(x + y + z) ((x + y + z)^2 + zy).$$ Hence, by the associativity formula $$\begin{aligned} \operatorname{e}_{HK}(R_0) &= \operatorname{e}_{HK}(K[x,y]) + \operatorname{e}_{HK}(K[x,y,z]/(x + y + z)) + \operatorname{e}_{HK}(K[x, y, z]/((x + y + z)^2 + zy)) \\ &= 2 + \operatorname{e}_{HK}(K[x, y, z]/(x^2 + zy)) = 3 + \frac{1}{2},\end{aligned}$$ since the Hilbert--Kunz multiplicity of the last hypersurface, an $(A_1)$-singularity, was computed in [@WatanabeYoshida Theorem 5.4]. ◻ Building on this computation, in [@BrennerMonsky] Brenner and Monsky showed that tight closure does not commute with localization on the hypersurface[^3] $$R = F[x, y, z, t]/(z^4 +xyz^2 + (x^3 + y^3)z + tx^2y^2),$$ where $F$ is an algebraic closure of $\mathbb Z/2\mathbb Z$. By the Jacobian criterion, the singular locus of this hypersurface is $V((x,y,z))$ and $\mathfrak p = (x,y, z)$ is prime; in fact $R/\mathfrak p \cong F[t]$. **Proposition 45**. *Let $F$ be the algebraic closure of $\mathbb Z/2\mathbb Z$, $R = F[x, y, z, t]/(z^4 +xyz^2 + (x^3 + y^3)z + tx^2y^2)$, and $\mathfrak p = (x,y,z)$, a prime ideal of $R$. Then $\operatorname{e}_{HK}(\mathfrak p) = 3$, but $\operatorname{e}_{HK}(\mathfrak m) > 3$ for any maximal ideal $\mathfrak m$ containing $\mathfrak p$.* *Proof.* Cohen's structure theorem shows that $\widehat{R_\mathfrak p} \cong R_t$ for $K = F(t)$, so $\operatorname{e}_{HK}(R_\mathfrak p) = 3$.
Since $F$ is algebraically closed, all maximal ideals containing $\mathfrak p$ are of the form $\mathfrak m_\alpha = (\mathfrak p, t - \alpha)$ for $\alpha \in F$. By way of contradiction, suppose that $\operatorname{e}_{HK}(\mathfrak m_\alpha) = \operatorname{e}_{HK}(\mathfrak p)$. Then by Corollary [Corollary 37](#cor: colength and multiplicity){reference-type="ref" reference="cor: colength and multiplicity"} $$\mathop{\mathrm{lg}}(R_{\mathfrak m_\alpha}/(t-\alpha, (\mathfrak p^{[p^e]})^*)) = \mathop{\mathrm{lg}}(R_\mathfrak p/(\mathfrak p^{[p^e]})^*R_\mathfrak p)$$ for all $e \geq 1$. Now, Lemma [Lemma 33](#lemma: tc limit){reference-type="ref" reference="lemma: tc limit"} gives that $$\operatorname{e}_{HK}(R_{\mathfrak m_\alpha}/(t - \alpha)) = \lim_{e \to \infty} \frac{\mathop{\mathrm{lg}}(R_{\mathfrak m_\alpha}/(t-\alpha, (\mathfrak p^{[p^e]})^*))}{p^{2e}} = \lim_{e \to \infty} \frac{\mathop{\mathrm{lg}}(R_\mathfrak p/(\mathfrak p^{[p^e]})^*R_\mathfrak p)}{p^{2e}} = 3.$$ This contradicts Theorem [Theorem 44](#thm: Monsky computation){reference-type="ref" reference="thm: Monsky computation"} as it asserts that $\operatorname{e}_{HK}(R_{\mathfrak m_\alpha}/(t - \alpha)) > 3$. ◻ **Corollary 46**. *Let $R = F[x, y, z, t]/(z^4 +xyz^2 + (x^3 + y^3)z + tx^2y^2)$ where $F$ is a field of characteristic $2$. Then the set $\{\mathfrak q \in \mathop{\mathrm{Spec}}R \mid \operatorname{e}_{HK}(\mathfrak q) \leq 3\}$ is not open. In particular, $\operatorname{e}_{HK}$ is not strongly upper semicontinuous.* *Proof.* If the set were open, its intersection with $V((x,y,z))$ would be open and non-empty. In particular, all but finitely many maximal ideals $\mathfrak m$ containing $(x,y,z)$ would belong to the open set. ◻ # Further directions There are many other invariants derived from Frobenius for which one could study equimultiplicity.
In my opinion, the most promising are dual F-signature ([@Sannai; @SmirnovTucker]), the Frobenius--Betti numbers ([@DHNB; @AberbachLi]), and the Frobenius--Euler characteristics ([@SPYGlobal]) because they are already known to be semicontinuous. In addition, our understanding of equimultiplicity of F-signature is still unsatisfactory, because there is no intrinsic characterization similar to Theorem [Theorem 36](#thm: equimultiple property){reference-type="ref" reference="thm: equimultiple property"}. In particular, the strong semicontinuity of other F-invariants might still be possible, although this is unlikely. Second, we have no analog of Theorem [Theorem 6](#thm: Dade blowup){reference-type="ref" reference="thm: Dade blowup"} as it is known (see Section 5 of [@MPST]) that F-signature and Hilbert--Kunz multiplicity do not behave well after blowing up the maximal ideal of an isolated singularity. However, it would be naive to expect that Hilbert--Kunz multiplicity will suffice alone; the resolving invariant in characteristic $0$ is far more complex. Last, there is a related notion of equimultiplicity for singularities in families; see [@Lipman]. This side has not been touched much for F-invariants; it is only known that Hilbert--Kunz multiplicity is semicontinuous [@Affine]. [^1]: By a surprising result of [@LL], the localization property for all rings is equivalent to the notorious conjecture of Lech on multiplicity in faithfully flat extensions. [^2]: *In other words, $S$ is a homogeneous localization of the Rees algebra at a relevant prime that contracts to $\mathfrak m$.* [^3]: A recent work [@Simpson] establishes the failure of localization in the second family of quartics.
{ "id": "2309.04322", "title": "An invitation to equimultiplicity of F-invariants", "authors": "Ilya Smirnov", "categories": "math.AG math.AC", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We study the asymptotics of bounded lecture hall tableaux. Limit shapes form when the bounds of the lecture hall tableaux go to infinity linearly in the lengths of the partitions describing the large-scale shapes of these tableaux. We prove Conjecture 6.1 in [@SKN21], stating that the slopes of the rescaled height functions in the scaling limit satisfy a complex Burgers equation. We also show that the fluctuations of the unrescaled height functions converge to the Gaussian free field. The proof is based on a new construction and analysis of Schur generating functions for the lecture hall tableaux, whose corresponding particle configurations do not form a Gelfand-Tsetlin scheme and whose corresponding dimer models are not doubly periodic. author: - Zhongyang Li - David Keating - István Prause bibliography: - lht.bib title: Asymptotics of Bounded Lecture-Hall Tableaux --- # Introduction Lecture hall tableaux were introduced in [@SK20] as fillings of Young diagrams satisfying certain conditions, which generalize both lecture hall partitions ([@BE971; @BE972]) and anti-lecture hall compositions ([@SC03]), and also contain reverse semistandard Young tableaux as a limit case. Lecture hall partitions and anti-lecture hall compositions have attracted considerable interest among combinatorialists in the last two decades; see the recent survey [@SA16] and references therein. We now define the lecture hall tableaux. Recall that a partition $\lambda=(\lambda_1,\ldots,\lambda_k)$ is a sequence of nonnegative integers $\lambda_1\geq \lambda_2\geq \ldots\geq \lambda_k\geq 0$. Each integer $\lambda_i$ is called a part of $\lambda$. The length $l(\lambda)$ of $\lambda$ is the number of parts. A partition $\lambda=(\lambda_1,\ldots,\lambda_k)$ can be identified with its Young diagram, which consists of unit squares (cells) with integer coordinates $(i,j)$ satisfying $1\leq i\leq k$ and $1\leq j\leq \lambda_i$.
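In code, this coordinate convention for the cells of a Young diagram can be sketched as follows (a minimal illustration; the helper `cells` and the sample partitions are our own choices, not from the paper):

```python
def cells(shape):
    # cells (i, j) of the Young diagram of a partition:
    # 1 <= i <= k (row index) and 1 <= j <= shape[i-1] (column index)
    return [(i + 1, j + 1) for i, part in enumerate(shape) for j in range(part)]

# the partition (3, 1) has rows of lengths 3 and 1
assert cells((3, 1)) == [(1, 1), (1, 2), (1, 3), (2, 1)]
# the diagram has one cell per unit square, so len(cells(shape)) = sum(shape)
assert len(cells((4, 2, 1))) == 7
```

The number of cells equals $|\lambda| = \sum_i \lambda_i$, the size of the partition.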
For two partitions $\lambda$ and $\mu$ we write $\mu\subset\lambda$ to mean that the Young diagram of $\mu$ is contained in that of $\lambda$ as a set. In this case, a skew shape $\lambda/\mu$ is defined to be the set-theoretic difference of their Young diagrams. We denote by $|\lambda/\mu|$ the number of cells in $\lambda/\mu$. A partition $\lambda$ is also considered as a skew shape by $\lambda/\emptyset$, where $\emptyset$ represents the empty partition. A tableau of shape $\lambda/\mu$ is a filling of the cells in $\lambda/\mu$ with nonnegative integers. In other words, a tableau is a map $T : \lambda/\mu\rightarrow {\mathbb N}$, where ${\mathbb N}$ is the set of nonnegative integers. **Definition 1**. *An $n$-lecture hall tableau of shape $\lambda/\mu$ is a tableau $L$ of shape $\lambda/\mu$ satisfying the following conditions: $$\begin{aligned} \frac{L(i,j)}{n+c(i,j)}\geq \frac{L(i,j+1)}{n+c(i,j+1)},\qquad \frac{L(i,j)}{n+c(i,j)}> \frac{L(i+1,j)}{n+c(i+1,j)},\end{aligned}$$ where $c(i,j)=j-i$ is the content of the cell $(i,j)$. The set of $n$-lecture hall tableaux is denoted by $LHT_n(\lambda/\mu)$. For $L\in LHT_n(\lambda/\mu)$, let $\lfloor L\rfloor$ be the tableau of shape $\lambda/\mu$ whose $(i,j)$th entry is $\lfloor\frac{L(i,j)}{(n-i+j)} \rfloor$.* See the left graph of Figure [1](#fig:tpd){reference-type="ref" reference="fig:tpd"} for an example of a lecture hall tableau. In this paper we study lecture hall tableaux with an extra condition as follows: $$\begin{aligned} L(i,j)<t(n+j-i).\end{aligned}$$ We say that these tableaux are bounded by $t>0$. These tableaux are called bounded lecture hall tableaux and are enumerated in [@CK20]. The main aim of this paper is to study the asymptotics of bounded $n$-lecture hall tableaux as $n\rightarrow\infty$.
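The defining inequalities of Definition 1 together with the bound $L(i,j)<t(n+j-i)$ can be checked mechanically on small examples. The following sketch (the function names and the tiny shapes are illustrative choices of ours, not taken from the paper) verifies one small bounded lecture hall tableau and brute-forces the count for two very small shapes:

```python
from fractions import Fraction
from itertools import product

def is_bounded_lht(L, n, t):
    """Check Definition 1 and the bound L(i,j) < t*(n+j-i) for a tableau
    given as a dict {(i, j): value} over the (1-indexed) cells of a shape."""
    r = lambda i, j: Fraction(L[(i, j)], n + j - i)  # entry over n + content
    return all(
        L[(i, j)] < t * (n + j - i)                            # boundedness
        and ((i, j + 1) not in L or r(i, j) >= r(i, j + 1))    # weak along rows
        and ((i + 1, j) not in L or r(i, j) > r(i + 1, j))     # strict down columns
        for (i, j) in L)

def count_bounded_lht(shape, n, t):
    # brute-force enumeration of all bounded n-lecture hall tableaux of a shape
    cells = [(i + 1, j + 1) for i, row in enumerate(shape) for j in range(row)]
    ranges = [range(t * (n + j - i)) for (i, j) in cells]
    return sum(is_bounded_lht(dict(zip(cells, vals)), n, t)
               for vals in product(*ranges))

# an illustrative tableau of shape (2,1) with n = 2, bounded by t = 2
assert is_bounded_lht({(1, 1): 3, (1, 2): 4, (2, 1): 1}, n=2, t=2)
# a single cell with n = 1: the inequalities are vacuous, the entry is 0..t-1
assert count_bounded_lht((1,), n=1, t=3) == 3
assert count_bounded_lht((1, 1), n=2, t=2) == 4
```

Exact rational arithmetic via `Fraction` avoids floating-point issues in the strict column inequality; the brute-force counter is only feasible for tiny shapes, which is all that is needed for sanity checks.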
We shall first recall a bijection between lecture hall tableaux and non-intersecting path configurations in [@CK20], and then investigate the asymptotics (limit shape and height fluctuations) of the corresponding non-intersecting path configurations. We first define the graph on which the non-intersecting path configurations correspond to the lecture hall tableaux. **Definition 2**. 1. *Given a positive integer $t$, the lecture hall graph is a graph $\mathcal{G}_t=(V_t,E_t)$. This graph can be described through an embedding in the plane with vertex set $V_t$ given by* - *$\left(i,\frac{j}{i+1}\right)$ for $i\geq 0$ and $0\leq j<t(i+1)$,* *and the directed edges given by* - *from $\left(i,k+\frac{r}{i+1}\right)$ to $\left(i+1,k+\frac{r}{i+2}\right)$ for $i\geq 0$, $0\leq r\leq i$ and $0\leq k<t$,* - *from $\left(i,k + \frac{r + 1}{i + 1}\right)$ to $\left(i,k + \frac{r}{i + 1}\right)$ for $i\geq 0$ and $0\leq r \leq i$ and $0 \leq k < t-1$, or for $i\geq 0$ and $0\leq r <i$ and $k = t-1$.* 2. *Given a positive integer $t$ and a partition $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_n)$ with $\lambda_1\geq \lambda_2\geq\ldots\geq \lambda_n\geq 0$, a non-intersecting path configuration is a system of $n$ paths on the graph $\mathcal{G}_t$. For each integer $i$ satisfying $1\leq i\leq n$, the $i$th path starts at $\left(n-i,t-\frac{1}{n-i+1}\right)$, ends at $(n-i+\lambda_i,0)$ and moves only downwards and rightwards. The paths are said to be non-intersecting if they do not share a vertex.* See the middle graph of Figure [1](#fig:tpd){reference-type="ref" reference="fig:tpd"} for an example of $\mathcal{G}_3$ and a configuration of non-intersecting paths on $\mathcal{G}_3$. 
**Theorem 3**. *([@CK20]) There is a bijection between the lecture hall tableaux of shape $\lambda$ bounded by $t$ and non-intersecting path configurations on $\mathcal{G}_t$ starting at $\left(n-i,t-\frac{1}{n-i+1}\right)$ and ending at $(n-i+\lambda_i,0)$ for $i=1,2,\ldots,n$.* *More precisely, there are exactly $|\lambda|$ non-vertical edges present in the non-intersecting path configuration in $\mathcal{G}_t$ corresponding to a lecture hall tableau of shape $\lambda$. These edges have left endpoints located at $\left(n+j-i-1,\frac{L(i,j)}{n+j-i}\right)$. The non-intersecting path configuration corresponding to the lecture hall tableau is the unique non-intersecting path configuration joining $\left(n-i,t-\frac{1}{n-i+1}\right)$ and $(n-i+\lambda_i,0)$ for $i=1,2,\ldots,n$ obtained by adding only vertical edges to these present non-vertical edges.* One can see that for an $n$-lecture hall tableau bounded by $t$, $t$ is also the height of the corresponding lecture hall graph $\mathcal{G}_t$, and $n$ is also the total number of paths in the corresponding non-intersecting path configuration on $\mathcal{G}_t$. See Figure [1](#fig:tpd){reference-type="ref" reference="fig:tpd"} for an example of such a correspondence. ![Tableau, non-intersecting paths, and dimers. The left graph represents a lecture hall tableau $L$ of shape $\lambda=(2,2)$ with $L(1,1)=5$, $L(1,2)=5$, $L(2,1)=2$, $L(2,2)=3$ and $n=2$. Then $\frac{L(1,1)}{n+1-1}=\frac{5}{2}$, $\frac{L(2,1)}{n+1-2}=2$, $\frac{L(1,2)}{n+2-1}=\frac{5}{3}$, $\frac{L(2,2)}{n+2-2}=\frac{3}{2}$. The tableau is bounded by $t=3$. The middle graph represents the corresponding non-intersecting path configuration. 
The right graph represents a dimer configuration on a graph which is not doubly periodic.](tpd.pdf){#fig:tpd} We shall investigate the asymptotics of bounded lecture hall tableaux as $n,t\rightarrow\infty$ by studying the asymptotics of the corresponding non-intersecting paths. These asymptotics were studied in [@SKN21] using the (not fully rigorous) tangent method; here we attack this problem by analyzing Schur polynomials. The tangent method gives the frozen boundary without the full limit shape; instead, Conjecture 6.1 was made in [@SKN21], stating that the slopes of the rescaled height functions in the scaling limit satisfy the complex Burgers equation. The complex Burgers equation was proved to be the governing equation of height functions in the scaling limit for uniform lozenge tilings and for other doubly periodic dimer models [@ko07]. This equation naturally arises through a variational problem; we refer to [@ADPZ23] for a detailed study of the variational problem. Here we note that for lecture hall tableaux no variational principle has been established, and although lecture hall tableaux naturally correspond to non-intersecting path configurations and dimer configurations on a hexagon-octagon lattice ([@SKN21]), the corresponding hexagon-octagon lattice in this case is not doubly periodic as in the setting of [@ko07]; see the right graph of Figure [1](#fig:tpd){reference-type="ref" reference="fig:tpd"}. The Schur generating function approach was applied to study the uniform dimer model on a hexagonal lattice in a trapezoid domain in [@bg; @bg16], and the uniform dimer model on a rectangular square grid in [@bk]. A generalized version of the Schur generating function was defined to study the non-uniform dimer model on rail-yard graphs in [@BL17; @ZL18; @ZL201; @ZL20; @Li21]. 
Schur processes are specializations of the Macdonald processes when $q=t$, hence the asymptotics of Schur processes can also be obtained by investigating the more general Macdonald processes; see [@LV21; @ZL22]. All the existing Schur generating functions seem to be defined in the setting of the Gelfand-Tsetlin scheme; the lecture hall tableaux, however, are novel in the sense that on a skew shape they cannot be computed by skew Schur functions, and the corresponding particle configurations induced by the non-intersecting path configurations of the lecture hall tableaux do not satisfy the interlacing conditions required by the Gelfand-Tsetlin scheme; see Figure [\[fig:lambdakappaEx\]](#fig:lambdakappaEx){reference-type="ref" reference="fig:lambdakappaEx"} for an example. By constructing a novel Schur generating function specifically for the lecture hall tableaux and analyzing its asymptotics, in this paper we obtain a full description of the limit shape, including the moment formulas for the counting measures and the complex Burgers equation, resolving Conjecture 6.1 in [@SKN21]. The Gaussian free field, as a higher-dimensional-time analog of Brownian motion, was proved to govern the height fluctuations of dimer models on a large class of graphs ([@RK01; @Li13]). In this paper we show that the unrescaled height fluctuations of the lecture hall tableaux converge to the Gaussian free field when $t$ goes to infinity linearly in $n$ as $n$ goes to infinity. The main results (with exact statements given in later sections after a number of precise definitions) and the organization of the paper are as follows. - In Section [2](#lsti){reference-type="ref" reference="lsti"}, we prove the moment formula for the limit counting measure when $n\rightarrow\infty$, $t\rightarrow\infty$ and $\frac{t}{n}\rightarrow\alpha\in (0,\infty)$; the main theorem in Section [2](#lsti){reference-type="ref" reference="lsti"} is Theorem [Theorem 9](#t15){reference-type="ref" reference="t15"}. 
- In Section [3](#sect:be){reference-type="ref" reference="sect:be"}, we prove that the slopes of the (rescaled) height function in the scaling limit satisfy the complex Burgers equation, confirming Conjecture 6.1 in [@SKN21]. The main theorem proved in Section [3](#sect:be){reference-type="ref" reference="sect:be"} is Theorem [Theorem 15](#thm:m31){reference-type="ref" reference="thm:m31"}. - In Section [4](#gff){reference-type="ref" reference="gff"}, we prove the convergence of the (unrescaled) height fluctuations to the Gaussian free field (GFF) as $n\rightarrow\infty$, $t\rightarrow\infty$ and $\frac{t}{n}\rightarrow\alpha\in (0,\infty)$; the main theorem in Section [4](#gff){reference-type="ref" reference="gff"} is Theorem [Theorem 20](#thm:gff){reference-type="ref" reference="thm:gff"}. - In Appendix [5](#sa){reference-type="ref" reference="sa"}, we discuss some technical results. # Limit Shape when $t\rightarrow\infty$ {#lsti} In this section, we prove the moment formula of the limit counting measure when $n\rightarrow\infty$, $t\rightarrow\infty$ and $\frac{t}{n}\rightarrow\alpha\in (0,\infty)$ by defining and analyzing a novel Schur generating function for lecture hall tableaux, which correspond to neither Gelfand-Tsetlin schemes nor doubly-periodic dimer models. The main theorem in this section is Theorem [Theorem 9](#t15){reference-type="ref" reference="t15"}. Let $\mathcal{M}$ be a random non-intersecting path configuration on $\mathcal{G}=\mathcal{G}_t$. Let $n$ be the total number of non-intersecting paths. Let $\kappa\geq 0$ be an integer. Let $\epsilon>0$ be sufficiently small such that the region $y\in(\kappa,\kappa+\epsilon]$ does not intersect any non-vertical edge of $\mathcal{G}$. We associate a partition $\lambda^{(\kappa)}$ as follows: - $\lambda^{(\kappa)}_1$ is the number of absent vertical edges of $\mathcal{M}$ intersecting $y=\kappa+\epsilon$ to the left of the rightmost vertical edge present in $\mathcal{M}$. 
- for $j\geq 2$, $\lambda^{(\kappa)}_j$ is the number of absent vertical edges of $\mathcal{M}$ intersecting $y=\kappa+\epsilon$ to the left of the $j$th rightmost vertical edge present in $\mathcal{M}$. See Figure [\[fig:lambdakappaEx\]](#fig:lambdakappaEx){reference-type="ref" reference="fig:lambdakappaEx"} for an example. For $\mathbf{x}=(x_0,x_1,\ldots)$ let $s_{\lambda/\mu}(\mathbf{x})$ be the skew Schur function. For any tableau $T$ of shape $\lambda/\mu$, let $$\begin{aligned} \mathbf{x}^T=\prod_{(i,j)\in \lambda/\mu}x_{T(i,j)};\end{aligned}$$ we define $$\begin{aligned} L_{\lambda/\mu}^n(\mathbf{x})=\sum_{T\in LHT_n(\lambda/\mu)}\mathbf{x}^{\lfloor T \rfloor}.\end{aligned}$$ **Definition 4**. *Let $\rho_{\kappa}$ be the probability distribution of $\lambda^{(\kappa)}$. Define the Schur generating function for $\rho_{\kappa}$ as follows: $$\begin{aligned} \mathcal{S}_{\rho_{\kappa}}(|\mathbf{x}|,\mathbf{u})=\sum_{\lambda\in {\mathbb {Y}}}\rho_{\kappa}(\lambda)\frac{s_{\lambda}(|\mathbf{x}|+\mathbf{u})}{s_{\lambda}(|\mathbf{x}|)},\end{aligned}$$ where $$\begin{aligned} \mathbf{u}=(u_1,u_2,\ldots,u_n);\qquad \mathbf{x}=(x_1,x_2,\ldots,x_t);\qquad |\mathbf{x}|=x_1+x_2+\ldots+x_t\end{aligned}$$ and $$\begin{aligned} &&s_{\lambda}(|\mathbf{x}|+\mathbf{u}):=s_{\lambda}(|\mathbf{x}|+u_1,|\mathbf{x}|+u_2,\ldots,|\mathbf{x}|+u_n)\label{sxu}\\ &&s_{\lambda}(|\mathbf{x}|):=s_{\lambda}(|\mathbf{x}|,\ldots,|\mathbf{x}|)\end{aligned}$$* **Lemma 5**. *Let $\lambda\in{\mathbb {Y}}$ with $l(\lambda)\leq n$. 
Let $$\begin{aligned} \mathbf{a}=(a_1,\ldots,a_t);\qquad \mathbf{b}=(b_1,\ldots,b_n).\end{aligned}$$ Then $$\begin{aligned} s_{\lambda}(|\mathbf{a}|+\mathbf{b})=\sum_{\nu\subset\lambda}L_{\lambda/\nu}(\mathbf{a})s_{\nu}(\mathbf{b});\end{aligned}$$ where $s_{\lambda}(|\mathbf{a}|+\mathbf{b})$ is defined as in ([\[sxu\]](#sxu){reference-type="ref" reference="sxu"}), and $$\begin{aligned} s_{\nu}(\mathbf{b})= s_{\nu}(b_1,b_2,\ldots,b_n).\end{aligned}$$* *Proof.* The lemma follows from Theorem 1.6 of [@CK20] by letting $\nu=\emptyset$. ◻ **Lemma 6**. *Assume the partition on the bottom boundary $\lambda^{(0)}$ is fixed. Then for any $\kappa>0$, $$\begin{aligned} \mathcal{S}_{\rho_{\kappa}}(|\mathbf{x}_{\kappa}|,\mathbf{u})= \frac{s_{\lambda^{(0)}}(|\mathbf{x}|+\mathbf{u})}{s_{ \lambda^{(0)}}(|\mathbf{x}|)},\end{aligned}$$ where $$\begin{aligned} \mathbf{x}_{\kappa}=(x_{\kappa}, x_{\kappa+1},\ldots,x_t).\end{aligned}$$* *Proof.* Let $$\begin{aligned} \mathbf{x}\setminus \mathbf{x}_{\kappa}=(x_1,x_2,\ldots,x_{\kappa-1}).\end{aligned}$$ By Definition [Definition 4](#df01){reference-type="ref" reference="df01"}, we have $$\begin{aligned} \mathcal{S}_{\rho_{\kappa}}(|\mathbf{x}_{\kappa}|,\mathbf{u})&=&\sum_{\lambda\in {\mathbb {Y}}}\rho_{\kappa}(\lambda)\frac{s_{\lambda}(|\mathbf{x}_{\kappa}|+\mathbf{u})}{s_{\lambda}(|\mathbf{x}_{\kappa}|)}\\ &=&\sum_{\lambda\in {\mathbb {Y}}}\frac{L_{\lambda}(\mathbf{x}_{\kappa})L_{\lambda^{(0)}/ \lambda}(\mathbf{x}\setminus \mathbf{x}_{\kappa} )}{L_{\lambda^{(0)}}(\mathbf{x})}\frac{s_{\lambda}(|\mathbf{x}_{\kappa}|+\mathbf{u})}{s_{\lambda}(|\mathbf{x}_{\kappa}|)}\\ &=&\sum_{\lambda\in {\mathbb {Y}}}\frac{L_{\lambda^{(0)}/ \lambda}(\mathbf{x}\setminus \mathbf{x}_{\kappa} )s_{\lambda}(|\mathbf{x}_{\kappa}|+\mathbf{u})}{L_{\lambda^{(0)}}(\mathbf{x})}\\ &=&\frac{s_{\lambda^{(0)}}(|\mathbf{x}|+\mathbf{u})}{s_{ \lambda^{(0)}}(|\mathbf{x}|)},\end{aligned}$$ where the last identity follows from Lemma [Lemma 
5](#l02){reference-type="ref" reference="l02"}. Then the lemma follows. ◻ Define a differential operator on Schur generating functions $$\begin{aligned} \mathcal{D}_{j,\kappa} \mathcal{S}_{\rho_{\kappa}}(|\mathbf{x}_{\kappa}|,\mathbf{u}):=\frac{1}{V(\mathbf{u})} \left[\sum_{i}\left((|\mathbf{x}_{\kappa}|+u_i)\frac{\partial}{\partial u_i}\right)^j\right] V(\mathbf{u})\mathcal{S}_{\rho_{\kappa}}(|\mathbf{x}_{\kappa}|,\mathbf{u});\end{aligned}$$ where $$\begin{aligned} V(\mathbf{u})=\prod_{i<j}(u_i-u_j).\end{aligned}$$ We shall omit the index $\kappa$ in the differential operator $\mathcal{D}$ when there is no confusion. We introduce the following definition to study the distribution of random partitions. **Definition 7**. *Let $\lambda$ be a length-$N$ partition. We define the counting measure $m(\lambda)$ as a probability measure on ${\mathbb R}$ as follows: $$\begin{aligned} m(\lambda)=\frac{1}{N}\sum_{i=1}^{N}\delta\left(\frac{\lambda_i+N-i}{N}\right).\end{aligned}$$ If $\lambda$ is random, then we can define the corresponding random counting measure.* **Lemma 8**. *Let $j,m\in{\mathbb N}$. 
Then $$\begin{aligned} \left.\frac{1}{n^{(j+1)m}}\mathcal{D}_j^m \mathcal{S}_{\rho_{\kappa}}(|\mathbf{x}_{\kappa}|,\mathbf{u})\right|_{\mathbf{u}=0}=\mathbb{E}\left(\int_{{\mathbb R}} x^{j}d\mathbf{m}_{\rho_{\kappa}}\right)^m,\end{aligned}$$ where $\mathbf{m}_{\rho_{\kappa}}$ is the random counting measure for the random partition $\lambda^{(\kappa)}$.* *Proof.* By Definition [Definition 4](#df01){reference-type="ref" reference="df01"}, we obtain $$\begin{aligned} \left.\mathcal{D}_j^m \mathcal{S}_{\rho_{\kappa}}(|\mathbf{x}_{\kappa}|,\mathbf{u})\right|_{\mathbf{u}=\mathbf{0}} =\sum_{\lambda\in {\mathbb {Y}}}\rho_{\kappa}(\lambda)\frac{1}{V(\mathbf{u})} \left[\sum_{i=1}^{n}\left((|\mathbf{x}_{\kappa}|+u_i)\frac{\partial}{\partial u_i}\right)^j\right]^m V(\mathbf{u})\frac{s_{\lambda}(|\mathbf{x}_{\kappa}|+\mathbf{u})}{s_{\lambda}(|\mathbf{x}_{\kappa}|)}.\end{aligned}$$ Explicit computations show that $$\begin{aligned} \frac{1}{V(\mathbf{u})} \sum_{i=1}^{n}\left((|\mathbf{x}_{\kappa}|+u_i)\frac{\partial}{\partial u_i}\right)^j V(\mathbf{u})s_{\lambda}(|\mathbf{x}_{\kappa}|+\mathbf{u}) =\left[\sum_{i=1}^{n}(\lambda_i+n-i)^j\right]s_{\lambda}(|\mathbf{x}_{\kappa}|+\mathbf{u}).\end{aligned}$$ Hence we have $$\begin{aligned} \left.\mathcal{D}_j^m \mathcal{S}_{\rho_{\kappa}}(|\mathbf{x}_{\kappa}|,\mathbf{u})\right|_{\mathbf{u}=\mathbf{0}} =\sum_{\lambda\in {\mathbb {Y}}_{n}}\rho_{\kappa}(\lambda) \left[\sum_{i=1}^{n}(\lambda_i+n-i)^j\right]^m.\end{aligned}$$ Then the lemma follows. ◻ **Theorem 9**. *Let $n$ be the total number of non-intersecting paths in $\mathcal{G}$, and let $t$ be the height of $\mathcal{G}$. Let $\rho_{\kappa}(n)$ be the probability distribution of $\lambda^{(\kappa)}$. 
Assume $$\begin{aligned} y:=\lim_{n\rightarrow\infty}\frac{\kappa}{n};\qquad s:=\lim_{n\rightarrow\infty}\frac{|\mathbf{x}_{\kappa}|}{|\mathbf{x}|};\qquad \alpha:=\lim_{n\rightarrow\infty}\frac{t}{n};\label{lms1}\end{aligned}$$ with $$\begin{aligned} s\in(0,1);\qquad y\in(0,\alpha).\end{aligned}$$ Then the random measures $\mathbf{m}_{\rho_{\kappa}(n)}$ converge as $n\rightarrow\infty$, in probability and in the sense of moments, to a deterministic measure $\mathbf{m}_{y}$ on ${\mathbb R}$, whose moments are given by $$\begin{aligned} \int_{{\mathbb R}}x^j\mathbf{m}_{y}(dx)=\frac{1}{2(j+1)\pi\mathbf{i}}\oint_1\frac{dz}{z-1+s}\left((z-1+s)H_{\mathbf{m}_0}'(z)+\frac{z-1+s}{z-1}\right)^{j+1}.\end{aligned}$$ Here $\mathbf{m}_0$ is the limit counting measure for the boundary partition $\lambda^{(0)}\in {\mathbb {Y}}_n$ as $n\rightarrow\infty$, and $H_{\mathbf{m}_0}$ is defined as in ([\[hmz\]](#hmz){reference-type="ref" reference="hmz"}).* *Proof.* By Lemma [Lemma 6](#l03){reference-type="ref" reference="l03"}, $$\begin{aligned} &&\lim_{n\rightarrow\infty}\frac{1}{n}\log \mathcal{S}_{\rho_{\kappa}(n)}(|\mathbf{x}_{\kappa}|,u_1,\ldots,u_j,0,\ldots,0)\\ &=&\lim_{n\rightarrow\infty}\frac{1}{n}\log\frac{s_{\lambda^{(0)}}(|\mathbf{x}|+(u_1,\ldots,u_j,0,\ldots,0))}{s_{\lambda^{(0)}}(|\mathbf{x}|)}\\ &=&\lim_{n\rightarrow\infty}\frac{1}{n}\log\frac{s_{\lambda^{(0)}}\left(1+\frac{u_1}{|\mathbf{x}|},\ldots,1+\frac{u_j}{|\mathbf{x}|},1,\ldots,1\right)}{s_{\lambda^{(0)}}(1,\ldots,1)}\\ &=&H_{\mathbf{m}_0}\left(1+\frac{u_1}{|\mathbf{x}|}\right)+\ldots+H_{\mathbf{m}_0}\left(1+\frac{u_j}{|\mathbf{x}|}\right),\end{aligned}$$ where the last identity follows from Lemma [Lemma 21](#la1){reference-type="ref" reference="la1"}. 
Then we can write $$\begin{aligned} && \mathcal{S}_{\rho_{\kappa}(n)}\left(|\mathbf{x}_{\kappa}|,u_1,\ldots,u_n\right) =e^{n\left[\sum_{i\in [n]}H_{\mathbf{m}_0}\left(1+\frac{u_i}{|\mathbf{x}|}\right)\right]}T_{n}\left(u_1,\ldots,u_n\right)\label{rst}\end{aligned}$$ such that $$\begin{aligned} \lim_{n\rightarrow \infty}\frac{1}{n}\log T_{n}\left(u_1,\ldots,u_j,0,\ldots,0\right)=0\label{llt1}\end{aligned}$$ and $$\begin{aligned} T_{n}\left(0,\ldots,0\right)=1,\label{tnx1}\end{aligned}$$ where the convergence is uniform when each $\frac{u_i}{|\mathbf{x}|}$ is in a small complex neighborhood of $0$ for $i\in [j]$. Then by Lemma [Lemma 8](#l04){reference-type="ref" reference="l04"}, $$\begin{aligned} &&{\mathbb E}\left(\int_{{\mathbb R}}x^j d\mathbf{m}_{\rho_{\kappa}(n)}\right)^m= \left.\frac{1}{n^{m(j+1)}} (\mathcal{D}_j)^m\mathcal{S}_{\rho_{\kappa}(n)}\left(|\mathbf{x}_{\kappa}|,u_1,\ldots,u_n\right)\right|_{\mathbf{u}=0} \\ &=&\frac{1}{n^{m(j+1)}} \left.\left[T_{n}\left(u_1,\ldots,u_n\right)(\mathcal{D}_j)^me^{n\left[\sum_{i\in [n]}H_{\mathbf{m}_0}\left(1+\frac{u_i}{|\mathbf{x}|}\right)\right]}\right|_{(u_1,\ldots,u_n)=(0,\ldots,0)}+R\right],\end{aligned}$$ where $R$ collects the terms in $(\mathcal{D}_j)^m\mathcal{S}_{\rho_{\kappa}(n)}\left(|\mathbf{x}_{\kappa}|,\mathbf{u}\right)|_{\mathbf{u}=0}$ in which the differential operator $(\mathcal{D}_j)^m$ also acts on $T_{n}\left(\mathbf{u}\right)$. 
From ([\[llt1\]](#llt1){reference-type="ref" reference="llt1"}) we see that the leading term of ${\mathbb E}\left(\int_{{\mathbb R}}x^jd\mathbf{m}_{\rho_{\kappa}(n)}\right)^m$ as $n\rightarrow \infty$ is the same as that of $$\begin{aligned} &&\frac{1}{n^{m(j+1)}} \left.T_{n}\left(u_1,\ldots,u_n\right)(\mathcal{D}_j)^me^{n\left[\sum_{i\in [n]}H_{\mathbf{m}_0}\left(1+\frac{u_i}{|\mathbf{x}|}\right)\right]}\right|_{(u_1,\ldots,u_n)=(0,\ldots,0)}\label{ltm2}\\ &=&\frac{1}{n^{m(j+1)}} \left.(\mathcal{D}_j)^me^{n\left[\sum_{i\in [n]}H_{\mathbf{m}_0}\left(1+\frac{u_i}{|\mathbf{x}|}\right)\right]}\right|_{(u_1,\ldots,u_n)=(0,\ldots,0)}\notag\end{aligned}$$ where the last identity follows from ([\[tnx1\]](#tnx1){reference-type="ref" reference="tnx1"}). When $m=1$, ([\[ltm2\]](#ltm2){reference-type="ref" reference="ltm2"}) can be computed as follows $$\begin{aligned} &&\frac{1}{n^{j+1}} \frac{1}{\prod_{i,l\in[n]:i<l}(u_i-u_l)} \sum_{r\in [n]}\left((|\mathbf{x}_{\kappa}|+u_r)\frac{\partial}{\partial u_r}\right)^j\left.\left[e^{n\left[\sum_{i\in [n]}H_{\mathbf{m}_0}\left(1+\frac{u_i}{|\mathbf{x}|}\right)\right]}\prod_{i,l\in[n]:i<l}(u_i-u_l)\right]\right|_{\mathbf{u}=\mathbf{0}},\end{aligned}$$ whose leading term as $n\rightarrow\infty$ is the same as that of $$\mathcal{M}_{j}:= \lim_{\frac{\mathbf{u}}{|\mathbf{x}|}\rightarrow\mathbf{0}}\sum_{r\in[n]}\sum_{g=0}^{j}n^{-g-1} \binom{j}{g} \frac{(|\mathbf{x}_{\kappa}|+u_r)^j}{|\mathbf{x}|^{j-g}}\left[H'_{\mathbf{m}_0}\left(1+\frac{u_r}{|\mathbf{x}|}\right)\right]^{j-g}\left(\sum_{l\in [n]\setminus\{r\}}\frac{1}{u_r-u_l}\right)^{g}.$$ By ([\[lms1\]](#lms1){reference-type="ref" reference="lms1"}), we obtain $$\begin{gathered} \mathcal{M}_{j}=\lim_{\frac{\mathbf{u}}{|\mathbf{x}|}\rightarrow\mathbf{0}}\sum_{r\in[n]}\sum_{g=0}^{j}\frac{1}{n}{j\choose 
g}\left(s+\frac{u_r}{|\mathbf{x}|}\right)^j\left[H'_{\mathbf{m}_0}\left(1+\frac{u_r}{|\mathbf{x}|}\right)\right]^{j-g}\left(\frac{1}{n}\sum_{l\in [n]\setminus\{r\}}\frac{1}{\left(\frac{u_r}{|\mathbf{x}|}+1\right)-\left(\frac{u_l}{|\mathbf{x}|}+1\right)}\right)^{g}.\end{gathered}$$ Let $$\begin{aligned} z_i:=\frac{u_i}{|\mathbf{x}|}+1.\end{aligned}$$ By Lemma [Lemma 22](#la2){reference-type="ref" reference="la2"}, we obtain $$\begin{aligned} \mathcal{M}_{j}&=&\lim_{(z_1,\ldots,z_n)\rightarrow 1}\sum_{r\in[n]}\sum_{g=0}^{j}\frac{1}{n}{j\choose g}\left(z_r-1+s\right)^j\left[H'_{\mathbf{m}_0}\left(z_r\right)\right]^{j-g}\left(\frac{1}{n}\sum_{l\in [n]\setminus\{r\}}\frac{1}{z_r-z_l}\right)^{g}\\ &=&\lim_{(z_1,\ldots,z_n)\rightarrow 1}\sum_{g=0}^{j}\frac{1}{g+1}{j\choose g}\frac{1}{g!}\left.\frac{\partial^{g}\left[\left(z-1+s\right)^jH'_{\mathbf{m}_0}\left(z\right)^{j-g}\right]}{\partial z^{g}}\right|_{z=1}\end{aligned}$$ Then the theorem follows from the Residue Theorem. ◻ **Definition 10**. *Assume that as $n\rightarrow\infty$, the rescaled graph $\frac{1}{n}\mathcal{G}$ approximates a bounded simply-connected region $\mathcal{R}\subset {\mathbb R}^2$. Let $\mathcal{L}$ be the set of $(\chi,y)$ inside $\mathcal{R}$ such that the density $d\mathbf{m}_y(\frac{\chi}{1-y})$ is not equal to 0 or 1. Then $\mathcal{L}$ is called the liquid region. Its boundary $\partial\mathcal{L}$ is called the frozen boundary. Let $$\begin{aligned} \widetilde{\mathcal{L}}:=\{(\chi,s):(\chi,y)\in \mathcal{L}\},\end{aligned}$$ where $s,y$ are given as in Theorem [Theorem 9](#t15){reference-type="ref" reference="t15"}.* **Definition 11**. *Let $\eta$ be a compactly supported measure on ${\mathbb R}$. The Stieltjes transform of $\eta$ is defined by $$\begin{aligned} \mathrm{St}_{\eta}(w):=\int_{{\mathbb R}}\frac{\eta[ds]}{w-s}\end{aligned}$$ for $w\in \mathbb{C}\setminus \mathrm{supp}(\eta)$.* **Theorem 12**. 
*Let $$\begin{aligned} U_y(z):=(z-1+s)H'_{\mathbf{m}_0}(z)+\frac{z-1+s}{z-1}.\label{duyz}\end{aligned}$$ Assume the liquid region is nonempty, and assume that for any $x\in {\mathbb R}$, the equation $U_y(z)=x$ has at most one pair of complex conjugate roots. Then for any point $(\chi,y)$ lying on the frozen boundary, the equation $U_y(z)=\chi$ has a double root.* *Proof.* The density of the measure $\mathbf{m}_y$ can be computed from its Stieltjes transform: $$\begin{aligned} \frac{d\mathbf{m}_y(x)}{dx}=-\lim_{\epsilon\rightarrow 0+}\frac{1}{\pi}\Im(\mathrm{St}_{\mathbf{m}_y}(x+\mathbf{i}\epsilon)),\label{dst}\end{aligned}$$ where $\Im(\cdot)$ represents the imaginary part of a complex number and $\mathrm{St}_{\mathbf{m}_y}$ is the Stieltjes transform of the measure $\mathbf{m}_y$. Then the theorem follows from arguments similar to those in the proof of Lemma 8.1 of [@BL17]. ◻ **Example 13**. *Assume the bottom boundary partition is given by $$\begin{aligned} \lambda^{(0)}(n):=((p-1)n,(p-1)(n-1),\ldots,p-1)\in {\mathbb {Y}}_{n},\end{aligned}$$ where $p,n$ are positive integers. We have $$\begin{aligned} \frac{d\mathbf{m}_0}{dx}=\frac{1}{p},\ \forall x\in(0,p).\end{aligned}$$ Then the $k$th moment of $\mathbf{m}_0$ can be computed as follows $$\begin{aligned} M_k(\mathbf{m}_0)=\frac{p^k}{k+1},\end{aligned}$$ and therefore $$\begin{aligned} S_{\mathbf{m}_0}(z)=-\frac{1}{p}\log(1-pz).\end{aligned}$$ Hence we have $$\begin{aligned} S_{\mathbf{m}_0}^{(-1)}(u)=\frac{1-e^{-pu}}{p}\end{aligned}$$ and $$\begin{aligned} H_{\mathbf{m}_0}'(u)=\frac{pu^{p-1}}{u^{p}-1}-\frac{1}{u-1}.\end{aligned}$$ Then $$\begin{aligned} U_y(z)=\frac{pz^{p-1}(z-1+s)}{z^p-1}.\end{aligned}$$ Assume $p=3$. Then for each $\chi\in {\mathbb R}$ the equation $U_y(z)=\chi$ has at most one pair of nonreal conjugate roots. The condition that $U_y(z)=\chi$ has a double root gives $$\begin{aligned} \begin{cases} U_y(z)=\chi,\\ U_y'(z)=0, \end{cases}\end{aligned}$$ which gives the parametric equation for $(\chi,s)$ as follows. 
$$\begin{aligned} \begin{cases} \chi=\frac{3z^3}{z^3+2}\\ s=\frac{z^3-3z+2}{z^3+2} \end{cases}\end{aligned}$$* 1. *When $x_1=x_2=\ldots=x_t$ and $\alpha=1$, we have $s=1-y$. The frozen boundary is given by the blue curve of Figure [2](#fig:fb1){reference-type="ref" reference="fig:fb1"}.* ![Frozen boundary for the scaling limit of weighted non-intersecting paths. The blue curve is for the uniform weight; the red curve is for when the limit weight function $s$ satisfies $y=(1-s)^2$.](LHFB.eps){#fig:fb1 width=".8\\textwidth"} 2. *When $\alpha=1$ and $y=(1-s)^2$, the frozen boundary is given by the red curve of Figure [2](#fig:fb1){reference-type="ref" reference="fig:fb1"}.* **Example 14**. *Assume the bottom boundary partition is given by $$\begin{aligned} \lambda^{(0)}(n):=(n,\ldots,n,\frac{n}{2},\frac{n}{2}-1,\ldots,1)\in {\mathbb {Y}}_{n},\end{aligned}$$ where $n$ is a positive even integer. We have $$\begin{aligned} \frac{d\mathbf{m}_0}{dx}=\begin{cases}\frac{1}{2}&\mathrm{if}\ x\in\left(0,1\right);\\ 1&\mathrm{if}\ x\in\left(\frac{3}{2},2\right);\\ 0&\mathrm{otherwise}. \end{cases}\end{aligned}$$ Then the $k$th moment of $\mathbf{m}_0$ can be computed as follows $$\begin{aligned} M_k(\mathbf{m}_0)=\frac{1}{k+1}\left(2^{k+1}-\left(\frac{3}{2}\right)^{k+1}+\frac{1}{2}\right).\end{aligned}$$ Hence we have $$\begin{aligned} S_{\mathbf{m}_0}(z)&=&\log\frac{1-\frac{3z}{2}}{(1-2z)\sqrt{1-z}}.\end{aligned}$$* # Rescaled Height Function and Complex Burgers Equation {#sect:be} In this section, we prove that the slopes of the (rescaled) height function in the scaling limit satisfy the complex Burgers equation, confirming Conjecture 6.1 in [@SKN21]. The main idea is to differentiate the moment formula obtained in Section [2](#lsti){reference-type="ref" reference="lsti"} to obtain the slope of the limit (rescaled) height function, and then verify the complex Burgers equation. 
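Before proceeding, the double-root parametrization of Example 13 (the case $p=3$) can be sanity-checked numerically. The sketch below is our own illustrative code, not part of the proofs: for a few test points $z$ away from the poles $z^3=1$, it sets $s$ and $\chi$ by the stated parametric equations and confirms that $U_y(z)=\chi$ and $U_y'(z)\approx 0$ (the derivative is approximated by a central difference).

```python
# Numerical check (ours) of the double-root parametrization in Example 13, p = 3.

def U(z, s):
    # U_y(z) = 3 z^2 (z - 1 + s) / (z^3 - 1), the p = 3 case of Example 13.
    return 3 * z**2 * (z - 1 + s) / (z**3 - 1)

def check(z0, h=1e-6):
    s = (z0**3 - 3*z0 + 2) / (z0**3 + 2)   # stated parametrization for s
    chi = 3 * z0**3 / (z0**3 + 2)          # stated parametrization for chi
    dU = (U(z0 + h, s) - U(z0 - h, s)) / (2 * h)  # central difference for U'(z0)
    return abs(U(z0, s) - chi), abs(dU)

# Test points chosen (arbitrarily) away from the poles z^3 = 1.
for z0 in (1.7 + 0.9j, -0.4 + 1.3j, 2.5 - 0.2j):
    err_val, err_der = check(z0)
    assert err_val < 1e-9 and err_der < 1e-4, (z0, err_val, err_der)
print("parametrization consistent: U_y(z) = chi and U_y'(z) = 0 at each test point")
```

Since $U_y$ is holomorphic away from its poles, the central difference along the real direction approximates the complex derivative, so both double-root conditions are verified simultaneously.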
On the lecture hall graph $\mathcal{G}$, define a random height function $h$ associated to a random non-intersecting path configuration as follows. The height at the lower left corner is 0, and the height increases by 1 whenever crossing a path from the left to the right. Define the rescaled height function by $$\begin{aligned} h_n(\chi,y):=\frac{1}{n}h(n\chi,ny).\end{aligned}$$ Then by ([\[dst\]](#dst){reference-type="ref" reference="dst"}), we obtain $$\begin{aligned} \lim_{n\rightarrow\infty}\frac{d h_n(\chi,y)}{d\chi}&=&\frac{d\mathbf{m}_y(\chi)}{d\chi}=-\lim_{\epsilon\rightarrow 0+}\frac{1}{\pi}\Im(\mathrm{St}_{\mathbf{m}_y}(\chi+\mathbf{i}\epsilon)).\end{aligned}$$ Under the assumption of Theorem [Theorem 12](#t17){reference-type="ref" reference="t17"}, following computations similar to those before Lemma 8.1 of [@BL17], we obtain that when $(\chi,y)$ is in the liquid region, $$\lim_{n\rightarrow\infty}\frac{dh_n(\chi,y)}{d\chi}=\frac{1}{\pi}\mathrm{Arg}(\mathbf{z}_+(\chi,y)-1+s),\label{sjl}$$ where $\mathbf{z}_+(\chi,y)$ is the unique root in the upper half plane of the equation $U_y(z)=\chi$. Let $\mathbf{h}$ be the limit of $h_n$ as $n\rightarrow\infty$. Assume $$\begin{aligned} \lim_{n\rightarrow\infty}\frac{\lambda_1^{(0)}+n-1}{n}=\beta\in [1,\infty).\end{aligned}$$ In this case the measure $\mathbf{m}_y$ has compact support $[0,\beta]\subset{\mathbb R}$. 
Note that $$\begin{aligned} \int_{{\mathbb R}}x^j \mathbf{m}_y(dx)= \int_{0}^{\beta}x^j \mathbf{m}_y(dx)= \int_0^{\beta}x^jd \mathbf{h} =\int_0^{\beta}d\left(x^j\mathbf{h}(x,y)\right) -j\int_0^{\beta}\mathbf{h}(x,y)x^{j-1}dx.\end{aligned}$$ Since $\int_{0}^{\beta}d( x^j\mathbf{h}(x,y))$ is a finite constant independent of $y$, we have $$\begin{aligned} \frac{d\int_{0}^{\beta}d( x^j\mathbf{h}(x,y))}{dy}=0.\end{aligned}$$ Then by Theorem [Theorem 9](#t15){reference-type="ref" reference="t15"} we have $$\begin{aligned} \int_{0}^{\beta}\frac{\partial\mathbf{h}(x,y)}{\partial y}x^{j-1}dx&=&-\frac{1}{j}\frac{d}{dy}\int_{0}^{\beta}x^j\mathbf{m}_y(dx)=-\frac{1}{j}\frac{d}{dy}\int_{{\mathbb R}}x^j\mathbf{m}_y(dx) \\ &=&-\frac{1}{2j(j+1)\pi\mathbf{i}}\frac{d}{dy}\oint_1\frac{dz}{z-1+s}\left((z-1+s)H_{\mathbf{m}_0}'(z)+\frac{z-1+s}{z-1}\right)^{j+1}\end{aligned}$$ Making the change of variables $w=z-1+s$, we obtain $$\begin{aligned} \int_{0}^{\beta}\frac{\partial\mathbf{h}(x,y)}{\partial y}x^{j-1}dx&=&-\frac{1}{2j(j+1)\pi\mathbf{i}} \frac{ds}{dy} \oint_s\frac{dw}{w}\frac{\partial\left(wH_{\mathbf{m}_0}'(w+1-s)+\frac{w}{w-s}\right)^{j+1}}{\partial s}\\ &=&\frac{1}{2j\pi\mathbf{i}} \frac{ds}{dy} \oint_s dw \left(wH_{\mathbf{m}_0}'(w+1-s)+\frac{w}{w-s}\right)^j \frac{\partial\left(H_{\mathbf{m}_0}'(w+1-s)+\frac{1}{w-s}\right)}{\partial w}\\ &=&\frac{1}{2j\pi\mathbf{i}} \frac{ds}{dy} \oint_s \left(wH_{\mathbf{m}_0}'(w+1-s)+\frac{w}{w-s}\right)^j d\left(H_{\mathbf{m}_0}'(w+1-s)+\frac{1}{w-s}\right)\end{aligned}$$ For each fixed $y$, we can consider $$\begin{aligned} d\xi_y(x):=\frac{\partial \mathbf{h}(x,y)}{\partial y}dx\end{aligned}$$ as a measure on ${\mathbb R}$. Note that this measure has compact support in $[0,\beta]$. The density $\frac{\partial \mathbf{h}(x,y)}{\partial y}$ of the measure $\xi_y$ can be computed from its Stieltjes transform; i.e. 
$$\begin{aligned} \frac{\partial\mathbf{h}(x,y)}{\partial y}&=& -\lim_{\epsilon\rightarrow 0+}\frac{1}{\pi}\Im\mathrm{St}_{\xi_y}(x+\mathbf{i}\epsilon).\end{aligned}$$ Moreover, $$\begin{aligned} \mathrm{St}_{\xi_y}(x)&=&\sum_{j=1}^{\infty}x^{-j} \int_{{\mathbb R}}u^{j-1}d\xi_y(u)\\ &=&\sum_{j=1}^{\infty} \frac{1}{2j\pi\mathbf{i}}\frac{ds}{dy}\oint_{s}\left(\frac{wH_{\mathbf{m}_0}'(w+1-s)+\frac{w}{w-s}}{x}\right)^{j}d\left(H_{\mathbf{m}_0}'(w+1-s)+\frac{1}{w-s}\right)\\ &=&-\frac{1}{2\pi\mathbf{i}}\frac{ds}{dy}\oint_{s}\log\left(1-\frac{wH_{\mathbf{m}_0}'(w+1-s)+\frac{w}{w-s}}{x}\right)d\left(H_{\mathbf{m}_0}'(w+1-s)+\frac{1}{w-s}\right)\\ &=&-\frac{1}{2\pi\mathbf{i}}\frac{ds}{dy}\oint_{s}d\left[\left(H_{\mathbf{m}_0}'(w+1-s)+\frac{1}{w-s}\right)\log\left(1-\frac{wH_{\mathbf{m}_0}'(w+1-s)+\frac{w}{w-s}}{x}\right)\right]\\ &&+\frac{1}{2\pi\mathbf{i}}\frac{ds}{dy}\oint_{s}\frac{H_{\mathbf{m}_0}'(w+1-s)+\frac{1}{w-s}}{1-\frac{wH_{\mathbf{m}_0}'(w+1-s)+\frac{w}{w-s}}{x}}\frac{d\left(1-\frac{wH_{\mathbf{m}_0}'(w+1-s)+\frac{w}{w-s}}{x}\right)}{dw}dw\end{aligned}$$ By ([\[hmd\]](#hmd){reference-type="ref" reference="hmd"}) we obtain $$\begin{aligned} H_{\mathbf{m}_0}'(w+1-s)+\frac{1}{w-s}=\frac{1}{(w+1-s) S_{\mathbf{m}_0}^{(-1)}(\ln (w+1-s))}.\end{aligned}$$ When $|x|$ is sufficiently large, in the region of the complex plane enclosed by a small circle centered at $s$, the number of zeros of the equation $wH_{\mathbf{m}_0}'(w+1-s)+\frac{w}{w-s}=x$ is equal to the number of poles of $wH_{\mathbf{m}_0}'(w+1-s)+\frac{w}{w-s}$. 
Therefore we have $$\begin{aligned} -\frac{1}{2\pi\mathbf{i}}\oint_{s}d\left[\left(H_{\mathbf{m}_0}'(w+1-s)+\frac{1}{w-s}\right)\log\left(1-\frac{wH_{\mathbf{m}_0}'(w+1-s)+\frac{w}{w-s}}{x}\right)\right]=0.\end{aligned}$$ In the uniformly weighted case we have $s=1-y$, and then $$\begin{aligned} \mathrm{St}_{\xi_y}(x)=- H_{\mathbf{m}_0}'(\mathbf{z}_+(x,y))-\frac{1}{\mathbf{z}_+(x,y)-1}=-\frac{1}{\mathbf{z}_+(x,y)S_{\mathbf{m}_0}^{(-1)}(\ln \mathbf{z}_+(x,y))},\end{aligned}$$ where $\mathbf{z}_+(x,y)$ is the unique root of the equation $U_y(z)=x$ that converges to $1$ as $x\rightarrow\infty$. Hence we have the following theorem: **Theorem 15**. *Assume $\mathcal{G}$ is uniformly weighted such that $s=1-y$. Suppose that the assumptions of Theorem [Theorem 12](#t17){reference-type="ref" reference="t17"} hold. Let $$\begin{aligned} u=\frac{1}{\mathbf{z}_+(\chi,y)S_{\mathbf{m}_0}^{(-1)}(\ln \mathbf{z}_+(\chi,y))}.\end{aligned}$$ Then $$\begin{aligned} \frac{\partial h}{\partial x}=\frac{1}{\pi}\left(2\pi-\mathrm{Arg}(u)\right);\qquad \frac{\partial h}{\partial y}=\frac{1}{\pi}\Im u,\label{s21}\end{aligned}$$ where $\mathrm{Arg}(\cdot)$ is the branch of the argument function taking values in $[0,2\pi)$. 
Moreover, $u$ satisfies the complex Burgers equation $$\begin{aligned} u_x-uu_y=0.\label{be}\end{aligned}$$* *Proof.* The above arguments show that $$\begin{aligned} \nabla \mathbf{h} =\left(\frac{1}{\pi}\mathrm{Arg}(\mathbf{z}_+(\chi,y)-y),\frac{1}{\pi}\Im\frac{1}{\mathbf{z}_+(\chi,y)S_{\mathbf{m}_0}^{(-1)}(\ln \mathbf{z}_+(\chi,y))}\right).\end{aligned}$$ Moreover, the equation $U_y(z)=x$ gives $$\begin{aligned} \frac{\mathbf{z}_+(\chi,y)-y}{\mathbf{z}_+(\chi,y)S_{\mathbf{m}_0}^{(-1)}(\ln \mathbf{z}_+(\chi,y))}=x.\label{s11}\end{aligned}$$ Since $x\in {\mathbb R}$, we have $$\begin{aligned} \mathrm{Arg}\left(\mathbf{z}_+(\chi,y)-y\right)+\mathrm{Arg}\left(\frac{1}{\mathbf{z}_+(\chi,y)S_{\mathbf{m}_0}^{(-1)}(\ln \mathbf{z}_+(\chi,y))}\right)=2\pi.\end{aligned}$$ Then ([\[s21\]](#s21){reference-type="ref" reference="s21"}) follows. For simplicity, we shall use $z$ to denote $\mathbf{z}_{+}(\chi,y)$. Let $$\begin{aligned} \zeta:=\frac{1}{S_{\mathbf{m}_0}^{(-1)}(\ln z)};\end{aligned}$$ then $$\begin{aligned} z=e^{\mathrm{St}_{\mathbf{m}_0}(\zeta)}=e^{\int_{{\mathbb R}}\frac{\mathbf{m}_0(dt)}{\zeta-t}};\qquad u=\frac{\zeta}{z}.\end{aligned}$$ Since $$\begin{aligned} u_x=\frac{-z_x\zeta+z\zeta_x}{z^2};\qquad u_y=\frac{-z_y\zeta+z\zeta_y}{z^2};\end{aligned}$$ and $$\begin{aligned} z_x=-z\zeta_x\int\frac{\mathbf{m}_0(dt)}{(\zeta-t)^2};\qquad z_y=-z\zeta_y\int\frac{\mathbf{m}_0(dt)}{(\zeta-t)^2};\end{aligned}$$ we obtain $$\begin{aligned} \frac{u_x}{u_y}=\frac{\zeta_x}{\zeta_y}=\frac{z_x}{z_y}.\label{qq1}\end{aligned}$$ Moreover, by ([\[s11\]](#s11){reference-type="ref" reference="s11"}) we have $$\begin{aligned} z-y=xu.\end{aligned}$$ By taking derivatives we infer that $$\begin{aligned} z_x=xu_x+u;\qquad z_y-1=xu_y.\end{aligned}$$ Hence $$\begin{aligned} \frac{u_x}{u_y}=\frac{z_x-u}{z_y-1}.\label{qq2}\end{aligned}$$ Now ([\[qq1\]](#qq1){reference-type="ref" reference="qq1"}) and ([\[qq2\]](#qq2){reference-type="ref" reference="qq2"}) imply that $\frac{u_x}{u_y}=u$, and the
complex Burgers equation ([\[be\]](#be){reference-type="ref" reference="be"}) follows. ◻ # Height fluctuations and the Gaussian free field (GFF) when $t\rightarrow\infty$ {#gff} In Section [4](#gff){reference-type="ref" reference="gff"}, we prove the convergence of the (unrescaled) height fluctuations to the Gaussian free field (GFF) as $n\rightarrow\infty$, $t\rightarrow\infty$ and $\frac{t}{n}\rightarrow\alpha\in (0,\infty)$. The main idea is as follows. (1) Using the Schur difference operator defined in Section [2](#lsti){reference-type="ref" reference="lsti"} to act on the Schur generating functions, we obtain the moments of the height functions; we then verify Wick's formula in the scaling limit to obtain a Gaussian distribution. (2) We find an explicit diffeomorphism from the liquid region to the upper half plane, such that the image of the limit of the fluctuations of the (unrescaled) height function under the diffeomorphism has the correlation kernel given by the Green's function in the upper half plane. Combining this with (1), we then conclude that the limit of the fluctuations of the (unrescaled) height function is the pull-back of the GFF in the upper half plane under this mapping. The main theorem in Section [4](#gff){reference-type="ref" reference="gff"} is Theorem [Theorem 20](#thm:gff){reference-type="ref" reference="thm:gff"}. Let $C_0^{\infty}$ be the space of smooth real-valued functions with compact support in the upper half plane ${\mathbb H}$.
The **Gaussian free field** (GFF) $\Xi$ on ${\mathbb H}$ with the zero boundary condition is a collection of Gaussian random variables $\{\xi_{f}\}_{f\in C_0^{\infty}}$ indexed by functions in $C_0^{\infty}$, such that the covariance of two Gaussian random variables $\xi_{f_1}$, $\xi_{f_2}$ is given by $$\begin{aligned} \mathrm{Cov}(\xi_{f_1},\xi_{f_2})=\int_{{\mathbb H}}\int_{{\mathbb H}}f_1(z)f_2(w)G_{{\mathbb H}}(z,w)dzd\overline{z}dwd\overline{w}, \end{aligned}$$ where $$\begin{aligned} G_{{\mathbb H}}(z,w):=-\frac{1}{2\pi}\ln\left|\frac{z-w}{z-\overline{w}}\right|,\qquad z,w\in {\mathbb H} \end{aligned}$$ is the Green's function of the Dirichlet Laplacian operator on ${\mathbb H}$. The Gaussian free field $\Xi$ can also be considered as a random distribution on $C_0^{\infty}({\mathbb H})$, such that for any $f\in C_0^{\infty}$, we have $$\begin{aligned} \Xi(f)=\int_{{\mathbb H}}f(z)\Xi(z)dz:=\xi_f; \end{aligned}$$ where $\Xi(z)$ is the generalized function corresponding to the linear functional $\Xi$. Note that the GFF is conformally invariant in the following sense: for any simply connected domain $\mathcal{D}\subsetneq \mathbb{C}$ and any conformal map $\phi:\mathcal{D}\rightarrow{\mathbb H}$, the GFF on $\mathcal{D}$ is defined by $$\begin{aligned} \Xi_{\mathcal{D}}(z):=\Xi(\phi(z)). \end{aligned}$$ See [@SS07] for more about the GFF. Let $f$ be a function of $r$ variables. Define the symmetrization of $f$ as follows: $$\begin{aligned} \mathrm{Sym}_{x_1,\ldots,x_r}f(x_1,\ldots,x_r):=\frac{1}{r!}\sum_{\sigma\in S_r}f(x_{\sigma(1)},\ldots,x_{\sigma(r)}). \label{dsym} \end{aligned}$$ **Theorem 16**.
*Under the assumptions of Theorem [Theorem 9](#t15){reference-type="ref" reference="t15"}, for $\alpha_r\in [0,\alpha]$, let $$\begin{aligned} p_k^{(\lfloor \alpha_r n \rfloor)}=\sum_{i=1}^{n}\left(\lambda^{(t-\lfloor\alpha_r n\rfloor)}_i+ n -i \right)^k;\ k=1,2,\ldots\end{aligned}$$ Then the collection of random variables $$\begin{aligned} \left\{ n^{-k}\left[p_k^{(\lfloor \alpha_r n\rfloor)}-\mathbb{E}p_k^{(\lfloor \alpha_r n\rfloor)}\right]\right\}_{r=1,2,\ldots,g,k\geq 1}\end{aligned}$$ converges to a Gaussian vector, in the sense of moments, with 0 mean and covariance $$\begin{aligned} &\lim_{n\rightarrow\infty}\frac{\mathrm{cov}\left[p_{k_1}^{\lfloor\alpha_{r_1}n\rfloor},p_{k_2}^{\lfloor\alpha_{r_2}n\rfloor}\right]}{n^{k_1+k_2}} = \frac{1}{(2\pi\mathbf{i})^2}\oint_{|z-1|=\epsilon}\oint_{|w-1|=\epsilon}\left((z-1+s_{r_1})H_{\mathbf{m}_0}'(z)+\frac{z-1+s_{r_1}}{z-1}\right)^{k_1}\\ &\times\left((w-1+s_{r_2})H_{\mathbf{m}_0}'(w)+\frac{w-1+s_{r_2}}{w-1}\right)^{k_2}Q(z,w)dzdw \end{aligned}$$ where for $i\in\{1,2\}$, $$\begin{aligned} s_{r_i}=\lim_{n\rightarrow\infty}\frac{|\mathbf{x}_{t-\lfloor\alpha_{r_i}n\rfloor}|}{|\mathbf{x}|}\end{aligned}$$ and $$\begin{aligned} &Q(z,w)=\frac{1}{(z-w)^2}+\frac{\partial^2}{\partial z\partial w}\mathrm{log}\left(1-\frac{(z-1)(w-1)}{z-w}\left[zH_{\mathbf{m}_0}'(z)-wH_{\mathbf{m}_0}'(w)\right]\right) \end{aligned}$$* *Proof.* For $r=1,\ldots,g$, let $\kappa_r=t-\alpha_r n$. 
Note that $$\begin{aligned} \mathbb{E}\left[\left(p_{k_1}^{(\lfloor\alpha_1 n\rfloor)}\right)^{m_1} \cdots \left(p_{k_g}^{(\lfloor\alpha_g n\rfloor)}\right)^{m_g}\right]=\left.\mathcal{D}_{k_1,\kappa_1}^{m_1}\cdots \mathcal{D}_{k_g,\kappa_g}^{m_g}\mathcal{S}_{\rho_{\kappa_g}}(|\mathbf{x}_{\kappa_g}|,\mathbf{u})\right|_{\mathbf{u}=0}.\end{aligned}$$ Then $$\begin{aligned} &\frac{\mathrm{cov}\left[p_{k_1}^{\lfloor\alpha_{t_1}n\rfloor},p_{k_2}^{\lfloor\alpha_{t_2}n\rfloor}\right]}{n^{k_1+k_2}}\\ &=\left.\frac{1}{n^{k_1+k_2}}\left[\mathcal{D}_{k_1,\kappa_1}\mathcal{D}_{k_2,\kappa_2}\mathcal{S}_{\rho_{\kappa_2}}(|\mathbf{x}_{\kappa_2}|,\mathbf{u})-\mathcal{D}_{k_1,\kappa_1}\mathcal{S}_{\rho_{\kappa_1}}(|\mathbf{x}_{\kappa_1}|,\mathbf{u})\mathcal{D}_{k_2,\kappa_2}\mathcal{S}_{\rho_{\kappa_2}}(|\mathbf{x}_{\kappa_2}|,\mathbf{u})\right]\right|_{\mathbf{u}=0}.\end{aligned}$$ We have $$\begin{aligned} &\mathbb{E}\left[p_{k_1}^{\lfloor\alpha_{t_1}n\rfloor}p_{k_2}^{\lfloor\alpha_{t_2}n\rfloor}\right]\\ &=\left.\frac{1}{V(\mathbf{u})}\left[\sum_i\left((|\mathbf{x}_{\kappa_1}|+u_i)\frac{\partial}{\partial u_i}\right)^{k_1}\right]\left[\sum_j\left((|\mathbf{x}_{\kappa_2}|+u_j)\frac{\partial}{\partial u_j}\right)^{k_2}\right]V(\mathbf{u})\frac{s_{\lambda^{(0)}}\left(1^n+\frac{\mathbf{u}}{|\mathbf{x}|}\right)}{s_{\lambda^{(0)}}(1^n)}\right|_{\mathbf{u}=0}.\end{aligned}$$ Write $S_n:=\frac{s_{\lambda^{(0)}}\left(1^n+\frac{\mathbf{u}}{|\mathbf{x}|}\right)}{s_{\lambda^{(0)}}(1^n)}$, $u_{j,\kappa}=\frac{u_j}{|\mathbf{x}_{\kappa}|}$, $\mathbf{u}_\kappa=\frac{\mathbf{u}}{|\mathbf{x}_{\kappa}|}$ and define $$\begin{aligned} \mathcal{F}_{k,\kappa}:&= \left.\frac{1}{V(\mathbf{u})S_n}\left[\sum_j\left((|\mathbf{x}_{\kappa}|+u_j)\frac{\partial}{\partial u_j}\right)^{k}\right]V(\mathbf{u})S_n\right|_{\mathbf{u}=0}\\ &= \left.\frac{1}{V(\mathbf{u})S_n}\left[\sum_j\left((1+u_{j,\kappa})\frac{\partial}{\partial u_{j,\kappa}}\right)^{k}\right]V(\mathbf{u})S_n\right|_{\mathbf{u}=0}.\end{aligned}$$ Using the fact that $\frac{\partial S_n}{\partial u_i}=\exp(\log S_n)\frac{\partial \log S_n}{\partial u_i}$, we obtain $$\begin{aligned} &\mathbb{E}\left[p_{k_1}^{\lfloor\alpha_{t_1}n\rfloor}p_{k_2}^{\lfloor\alpha_{t_2}n\rfloor}\right]=\left.\frac{1}{V(\mathbf{u})S_n}\left[\sum_i\left((1+u_{i,\kappa_1})\frac{\partial}{\partial u_{i,\kappa_1}}\right)^{k_1}\right]V(\mathbf{u})S_n\mathcal{F}_{k_2,\kappa_2}\right|_{\mathbf{u}=0},\end{aligned}$$ which is the sum of terms of the form $$\begin{aligned} \mathrm{Sym}_{a_1,\ldots,a_{r+1}}\frac{c_0(1+u_{a_1,\kappa_1})^{k_1-m_0} \frac{\partial^{m_1}\mathcal{F}_{k_2,\kappa_2}}{\partial u_{a_1,\kappa_1}^{m_1}}\left[\frac{\partial^{m_2}\log S_n}{\partial u_{a_1,\kappa_1}^{m_2}}\right]^{d_2}\cdots \left[\frac{\partial^{m_q}\log S_n}{\partial u_{a_1,\kappa_1}^{m_q}}\right]^{d_q} } {(u_{a_1,\kappa_1}-u_{a_2,\kappa_1})\cdots(u_{a_1,\kappa_1}-u_{a_{r+1},\kappa_1})}, \label{ex1}\end{aligned}$$ where $r,m_0,\ldots,m_q,d_2,\ldots,d_q$ are nonnegative integers satisfying $$\begin{aligned} &m_2<m_3<\ldots<m_q;\ \mathrm{and}\ \\ &m_0+m_1+m_2d_2+\ldots+m_qd_q+r=k_1;\label{csd}\end{aligned}$$ and $\mathrm{Sym}_{a_1,\ldots,a_{r+1}}$ is defined as in ([\[dsym\]](#dsym){reference-type="ref" reference="dsym"}). From the terms with $m_1=0$ we obtain $\mathcal{F}_{k_1,\kappa_1}\mathcal{F}_{k_2,\kappa_2}$, which is exactly $\mathbb{E}\left[p_{k_1}^{\lfloor\alpha_{t_1}n\rfloor}\right]\mathbb{E}\left[p_{k_2}^{\lfloor\alpha_{t_2}n\rfloor}\right]$ when $\mathbf{u}=0$. We now consider the terms with $m_1\geq 1$. From Lemma [Lemma 21](#la1){reference-type="ref" reference="la1"} we obtain that when $n$ is large, the asymptotic degree of $n$ in $\frac{\partial^{j}\log S_n}{\partial u_{i,\kappa}^{j}}$ is 1; and the asymptotic degree of $n$ in $\frac{\partial^{j}\mathcal{F}_{k,\kappa}}{\partial u_{i,\kappa}^{j}}$ is $k$.
Hence when $n$ is large, the asymptotic degree of $n$ in the sum of ([\[ex1\]](#ex1){reference-type="ref" reference="ex1"}) over all $a_1,\ldots,a_{r+1}\in\{1,2,\ldots,n\}$ is at most $$\begin{aligned} k_2+d_2+\ldots+d_q+(r+1). \label{ddn}\end{aligned}$$ Given $m_1\geq 1$ and ([\[csd\]](#csd){reference-type="ref" reference="csd"}), ([\[ddn\]](#ddn){reference-type="ref" reference="ddn"}) is maximal when $m_0=0$, $m_1=1$, $m_2=1$ and $d_2=k_1-1-r$. Hence the leading terms of $\mathrm{cov}\left[p_{k_1}^{\lfloor\alpha_{t_1}n\rfloor},p_{k_2}^{\lfloor\alpha_{t_2}n\rfloor}\right]$ are the same as those of $$\begin{aligned} k_1\sum_{r=0}^{k_1-1}\sum_{\{a_1,\ldots,a_{r+1}\}\subset[n]}{k_1-1\choose r}(r+1)! \mathrm{Sym}_{a_1,\ldots,a_{r+1}}\frac{(1+u_{a_1,\kappa_1})^{k_1} \frac{\partial \mathcal{F}_{k_2,\kappa_2}}{\partial u_{a_1,\kappa_1}}\left[\frac{\partial\log S_n}{\partial u_{a_1,\kappa_1}}\right]^{k_1-1-r}} {(u_{a_1,\kappa_1}-u_{a_2,\kappa_1})\cdots(u_{a_1,\kappa_1}-u_{a_{r+1},\kappa_1})}, \label{ex2}\end{aligned}$$ as $n\rightarrow\infty$, both of which are asymptotically of order $n^{k_1+k_2}$. Expanding $\mathcal{F}_{k_2,\kappa_2}$ and analyzing leading terms as $n\rightarrow\infty$ in a similar way, we obtain that the leading terms of ([\[ex2\]](#ex2){reference-type="ref" reference="ex2"}) are the same as those of $$\begin{aligned} &k_1\sum_{r=0}^{k_1-1}\sum_{\{a_1,\ldots,a_{r+1}\}\subset[n]}{k_1-1\choose r}(r+1)! \mathrm{Sym}_{a_1,\ldots,a_{r+1}}\frac{(1+u_{a_1,\kappa_1})^{k_1} \left[\frac{\partial\log S_n}{\partial u_{a_1,\kappa_1}}\right]^{k_1-1-r}} {(u_{a_1,\kappa_1}-u_{a_2,\kappa_1})\cdots(u_{a_1,\kappa_1}-u_{a_{r+1},\kappa_1})}\label{cc1}\\ &\times\frac{\partial }{\partial u_{a_1,\kappa_1}} \left[\sum_{q=0}^{k_2}\sum_{\{b_1,\ldots,b_{q+1}\}\subset[n]}{k_2\choose q}(q+1)!
\mathrm{Sym}_{b_1,\ldots,b_{q+1}}\frac{(1+u_{b_1,\kappa_2})^{k_2} \left[\frac{\partial\log S_n}{\partial u_{b_1,\kappa_2}}\right]^{k_2-q}} {(u_{b_1,\kappa_2}-u_{b_2,\kappa_2})\cdots(u_{b_1,\kappa_2}-u_{b_{q+1},\kappa_2})}\right]\notag\end{aligned}$$ Note that for those terms corresponding to $|\{a_1,\ldots,a_{r+1}\}\cap\{b_1,\ldots,b_{q+1}\}|\geq 2$, the asymptotic degree of $n$ is at most $k_1+k_2-1$ as $n\rightarrow\infty$. Hence in the limit we only need to consider those terms with $|\{a_1,\ldots,a_{r+1}\}\cap\{b_1,\ldots,b_{q+1}\}|\leq 1$. The following cases might occur: 1. $\{a_1,\ldots,a_{r+1}\}\cap\{b_1,\ldots,b_{q+1}\}=\emptyset$. Then $$\begin{aligned} &\frac{\partial }{\partial u_{a_1,\kappa_1}} \left[\sum_{q=0}^{k_2}\sum_{\{b_1,\ldots,b_{q+1}\}\subset[n]}{k_2\choose q}(q+1)! \mathrm{Sym}_{b_1,\ldots,b_{q+1}}\frac{(1+u_{b_1,\kappa_2})^{k_2} \left[\frac{\partial\log S_n}{\partial u_{b_1,\kappa_2}}\right]^{k_2-q}} {(u_{b_1,\kappa_2}-u_{b_2,\kappa_2})\cdots(u_{b_1,\kappa_2}-u_{b_{q+1},\kappa_2})}\right]\\ &= \sum_{q=0}^{k_2}\sum_{\{b_1,\ldots,b_{q+1}\}\subset[n]}{k_2\choose q}(q+1)!
\mathrm{Sym}_{b_1,\ldots,b_{q+1}}\frac{(1+u_{b_1,\kappa_2})^{k_2} \left[\frac{\partial\log S_n}{\partial u_{b_1,\kappa_2}}\right]^{k_2-q-1} \frac{\partial^2 \log S_n}{\partial u_{a_1,\kappa_1}\partial u_{b_1,\kappa_2}} } {(u_{b_1,\kappa_2}-u_{b_2,\kappa_2})\cdots(u_{b_1,\kappa_2}-u_{b_{q+1},\kappa_2})}\end{aligned}$$ By Lemma [Lemma 21](#la1){reference-type="ref" reference="la1"}, we obtain $$\begin{aligned} \frac{\partial\log S_n}{\partial u_{i,\kappa_j}}\approx n s_{r_j} H_{\mathbf{m}_0}'\left(1+\frac{u_i}{|\mathbf{x}|}\right)\end{aligned}$$ where $j\in\{1,2\}.$ By Lemma [Lemma 23](#la3){reference-type="ref" reference="la3"}, we obtain $$\begin{aligned} &\frac{\partial^2\log S_n}{\partial u_{a_1,\kappa_1}\partial u_{b_1,\kappa_2}}\\ &=\frac{\partial^2}{\partial u_{a_1,\kappa_1}\partial u_{b_1,\kappa_2}}\log\left(1- \frac{u_{a_1}u_{b_1}}{|\mathbf{x}|^2}\frac{\left(1+\frac{u_{a_1}}{|\mathbf{x}|}\right)H_{\mathbf{m}_0}'\left(1+\frac{u_{a_1}}{|\mathbf{x}|}\right)-\left(1+\frac{u_{b_1}}{|\mathbf{x}|}\right)H_{\mathbf{m}_0}'\left(1+\frac{u_{b_1}}{|\mathbf{x}|}\right)}{\frac{u_{a_1}-u_{b_1}}{|\mathbf{x}|}}\right)\end{aligned}$$ Then by the residue theorem and Lemma [Lemma 22](#la2){reference-type="ref" reference="la2"}, the contribution to $\lim_{n\rightarrow\infty}\frac{(\ref{cc1})}{n^{k_1+k_2}}$ when $\{a_1,\ldots,a_{r+1}\}\cap\{b_1,\ldots,b_{q+1}\}=\emptyset$ is $$\begin{aligned} &\frac{1}{(2\pi \mathbf{i})^2}\oint_{|z-1|=\epsilon}\oint_{|w-1|=2\epsilon} dzdw \left((z-1+s_{\kappa_1})H_{\mathbf{m}_0}'(z)+\frac{z-1+s_{\kappa_1}}{z-1}\right)^{k_1}\\ &\times\left((w-1+s_{\kappa_2})H_{\mathbf{m}_0}'(w)+\frac{w-1+s_{\kappa_2}}{w-1}\right)^{k_2} \frac{\partial^2}{\partial z\partial w}\log\left(1-(z-1)(w-1)\frac{zH_{\mathbf{m}_0}'(z)-wH_{\mathbf{m}_0}'(w)}{z-w}\right)\end{aligned}$$ 2. $|\{a_1,\ldots,a_{r+1}\}\cap\{b_1,\ldots,b_{q+1}\}|=1$.
Again by the residue theorem and Lemma [Lemma 22](#la2){reference-type="ref" reference="la2"}, the contribution to $\lim_{n\rightarrow\infty}\frac{(\ref{cc1})}{n^{k_1+k_2}}$ when $|\{a_1,\ldots,a_{r+1}\}\cap\{b_1,\ldots,b_{q+1}\}|=1$ is $$\begin{aligned} &\frac{1}{(2\pi \mathbf{i})^2}\oint_{|z-1|=\epsilon}\oint_{|w-1|=2\epsilon} dzdw \left((z-1+s_{\kappa_1})H_{\mathbf{m}_0}'(z)+\frac{z-1+s_{\kappa_1}}{z-1}\right)^{k_1}\\ &\times\left((w-1+s_{\kappa_2})H_{\mathbf{m}_0}'(w)+\frac{w-1+s_{\kappa_2}}{w-1}\right)^{k_2}\frac{1}{(z-w)^2} \end{aligned}$$ Hence the leading terms of $\mathrm{cov}\left[p_{k_1}^{\lfloor\alpha_{t_1}n\rfloor},p_{k_2}^{\lfloor\alpha_{t_2}n\rfloor}\right]$ are the same as those of $$\begin{aligned} &k_1\sum_{r=0}^{k_1-1}\sum_{\{a_1,\ldots,a_{r+1}\}\subset[n]}{k_1-1\choose r}(r+1)! \mathrm{Sym}_{a_1,\ldots,a_{r+1}}\frac{(1+u_{a_1,\kappa_1})^{k_1} n^{k_1-1-r}s_{\kappa_1}^{k_1-1-r}[H_{\mathbf{m}_0}'\left(1+\frac{u_{a_1}}{|\mathbf{x}|}\right)]^{k_1-1-r}} {(u_{a_1,\kappa_1}-u_{a_2,\kappa_1})\cdots(u_{a_1,\kappa_1}-u_{a_{r+1},\kappa_1})}\\ &\times\frac{\partial }{\partial u_{a_1,\kappa_1}} \left[\sum_{q=0}^{k_2}\sum_{\{b_1,\ldots,b_{q+1}\}\subset[n]}{k_2\choose q}(q+1)! \mathrm{Sym}_{b_1,\ldots,b_{q+1}}\frac{(1+u_{b_1,\kappa_2})^{k_2} n^{k_2-q}s_{\kappa_2}^{k_2-q}[H_{\mathbf{m}_0}'\left(1+\frac{u_{b_1}}{|\mathbf{x}|}\right)]^{k_2-q}} {(u_{b_1,\kappa_2}-u_{b_2,\kappa_2})\cdots(u_{b_1,\kappa_2}-u_{b_{q+1},\kappa_2})}\right]\end{aligned}$$ Then the theorem follows from explicit computations of the moments and Wick's theorem, which yields the Gaussian fluctuations. ◻ **Assumption 17**. *Let $l$ be a fixed positive integer.
Assume there exist real numbers $$\begin{aligned} 0=a_1<b_1<a_2<b_2<\ldots<a_{l}<b_l\end{aligned}$$ such that $\mathbf{m}_0$, the limit counting measure corresponding to the partitions on the bottom boundary, satisfies $$\begin{aligned} \frac{d\mathbf{m}_0}{dx}=\begin{cases}1&\mathrm{if}\ a_i<x<b_i\\ 0&\mathrm{if}\ b_j<x<a_{j+1}\end{cases}\end{aligned}$$ where $i\in[l]$ and $j\in[l-1]$.* **Lemma 18**. *Suppose Assumption [Assumption 17](#ap32){reference-type="ref" reference="ap32"} holds. For any $\chi\in{\mathbb R}$, the equation $U_y(z)=\chi$ has at most one pair of complex conjugate roots, where $U_y(z)$ is defined by ([\[duyz\]](#duyz){reference-type="ref" reference="duyz"}).* *Proof.* Under Assumption [Assumption 17](#ap32){reference-type="ref" reference="ap32"}, by ([\[hmd\]](#hmd){reference-type="ref" reference="hmd"}) we have $$\begin{aligned} H_{\mathbf{m}_0}'(z)=-\frac{1}{z-1}+\frac{\zeta}{z}\end{aligned}$$ where $$\begin{aligned} z=\prod_{i=1}^{l}\frac{(\zeta-a_i)}{(\zeta-b_i)}\label{ee1}\end{aligned}$$ Hence by ([\[duyz\]](#duyz){reference-type="ref" reference="duyz"}) we obtain $$\begin{aligned} U_y(z)=\frac{(z-1+s)\zeta}{z}\label{ee2}\end{aligned}$$ It suffices to show that the equation $U_y(z)=\chi$ has at most one pair of complex conjugate roots in $\zeta$. By $U_y(z)=\chi$ and ([\[ee2\]](#ee2){reference-type="ref" reference="ee2"}) we obtain $$\begin{aligned} \zeta=\frac{\chi z}{z-1+s}\label{ee3}\end{aligned}$$ Plugging ([\[ee3\]](#ee3){reference-type="ref" reference="ee3"}) into ([\[ee1\]](#ee1){reference-type="ref" reference="ee1"}) we obtain $$\begin{aligned} z=C\prod_{i=1}^{l}\frac{z+\frac{a_i(1-s)}{\chi-a_i}}{z+\frac{b_i(1-s)}{\chi-b_i}}:=G(z).\end{aligned}$$ Between any two consecutive poles of $G(z)$, $G(z)$ either increases from $-\infty$ to $\infty$ or decreases from $\infty$ to $-\infty$; hence the equation $z=G(z)$ has at least one real root between any two consecutive poles of $G(z)$.
Then we infer that $z=G(z)$ has at least $l-1$ real roots; since the degree of the equation is at most $l+1$, we deduce that it has at most one pair of complex conjugate roots. ◻ **Lemma 19**. *Suppose that Assumption [Assumption 17](#ap32){reference-type="ref" reference="ap32"} holds. Let $y,s$ be given as in Theorem [Theorem 9](#t15){reference-type="ref" reference="t15"}. Let $$\begin{aligned} V_y(\zeta):=\zeta\left[1-(1-s)e^{-\mathrm{St}_{\mathbf{m}_0}(\zeta)}\right]\label{dvy}\end{aligned}$$ Then for any $\chi\in {\mathbb R}$, the equation $V_y(\zeta)=\chi$ has zero or one root in the upper half plane ${\mathbb H}$. The map $\mathcal{T}_{\mathcal{L}}:\tilde{\mathcal{L}}\rightarrow{\mathbb H}$ which maps each point of $\tilde{\mathcal{L}}$ to the unique root of the equation $V_y(\zeta)=\chi$ in ${\mathbb H}$ is a diffeomorphism from $\tilde{\mathcal{L}}$ to ${\mathbb H}$ with inverse map given by $$\begin{aligned} &\chi_{\mathcal{L}}(\zeta):=\zeta\left[1+\frac{e^{-\mathrm{St}_{\mathbf{m}_0}(\zeta)}(\zeta-\overline{\zeta})} {\overline{\zeta}e^{-\mathrm{St}_{\mathbf{m}_0}(\overline{\zeta})}-\zeta e^{-\mathrm{St}_{\mathbf{m}_0}(\zeta)}}\right]\label{dc}\\ &s_{\mathcal{L}}(\zeta):=1+\frac{\zeta-\overline{\zeta}} {\overline{\zeta}e^{-\mathrm{St}_{\mathbf{m}_0}(\overline{\zeta})}-\zeta e^{-\mathrm{St}_{\mathbf{m}_0}(\zeta)}},\label{ds}\end{aligned}$$ where $\mathrm{St}_{\mathbf{m}_0}$ denotes the Stieltjes transform of the measure $\mathbf{m}_0$.* *Proof.* The proof is an adaptation of the proof of Theorem 2.1 in [@dm14]. We shall show that 1. $\tilde{\mathcal{L}}$ is nonempty; 2. $\tilde{\mathcal{L}}$ is open; 3. $T_{\mathcal{L}}:\widetilde{\mathcal{L}}\rightarrow{\mathbb H}$ is continuous; 4. $T_{\mathcal{L}}:\widetilde{\mathcal{L}}\rightarrow{\mathbb H}$ is injective; 5. $T_{\mathcal{L}}:\widetilde{\mathcal{L}}\rightarrow T_{\mathcal{L}}(\widetilde{\mathcal{L}})$ has an inverse for each $\zeta\in T_{\mathcal{L}}(\widetilde{\mathcal{L}})$. 6.
$T_{\mathcal{L}}(\widetilde{\mathcal{L}})={\mathbb H}$. **Proof of (1).** Note that $$\begin{aligned} \mathrm{St}_{\mathbf{m}_0}(\zeta)=\frac{1}{\zeta}+\frac{M_1}{\zeta^2}+\frac{M_2}{\zeta^3}+\cdots\label{est}\end{aligned}$$ where $$\begin{aligned} M_1=\int_{{\mathbb R}}x\mathbf{m}_0[dx];\qquad M_2=\int_{{\mathbb R}}x^2\mathbf{m}_0[dx].\end{aligned}$$ Plugging ([\[est\]](#est){reference-type="ref" reference="est"}) into ([\[dc\]](#dc){reference-type="ref" reference="dc"}) and ([\[ds\]](#ds){reference-type="ref" reference="ds"}) we obtain $$\begin{aligned} \chi_{\mathcal{L}}(\zeta)&=1+O\left(\frac{1}{|\zeta|}\right)\\ s_{\mathcal{L}}(\zeta)&=\frac{M_1-\frac{1}{2}}{|\zeta|^2}+O\left(\frac{1}{|\zeta|^3}\right)\end{aligned}$$ When $\mathbf{m}_0$ satisfies Assumption [Assumption 17](#ap32){reference-type="ref" reference="ap32"} we have $$\begin{aligned} M_1=\int_{0}^{b_l}x\mathbf{m}_0[dx]> \int_0^1xdx=\frac{1}{2}\end{aligned}$$ It follows that when $|\zeta|$ is large, $(\chi_{\mathcal{L}}(\zeta),s_{\mathcal{L}}(\zeta))\in (0,b)\times (0,1)$; hence $\widetilde{\mathcal{L}}$ is nonempty. **Proofs of (2) and (3).** Let $(\chi_1,s_1)\in \widetilde{\mathcal{L}}$ and $\zeta_1=T_{\mathcal{L}}(\chi_1,s_1)$. We shall prove that $(\chi_2,s_2)\in\widetilde{\mathcal{L}}$ whenever $|\chi_1-\chi_2|+|s_1-s_2|$ is small. Let $y_1,y_2$ correspond to $s_1,s_2$ as in Theorem [Theorem 9](#t15){reference-type="ref" reference="t15"}. Let $\epsilon>0$ be such that $B(\zeta_1,\epsilon)\subset {\mathbb H}$; then $\inf_{\zeta\in\partial B(\zeta_1,\epsilon)}|V_{y_1}(\zeta)-\chi_1|>0$ when $\epsilon$ is small. Fix such an $\epsilon$; by continuity we have $|(V_{y_1}(\zeta)-\chi_1)-(V_{y_2}(\zeta)-\chi_2)|<\epsilon$ for all $\zeta\in \partial B(\zeta_1,\epsilon)$ when $|s_1-s_2|+|\chi_1-\chi_2|$ is sufficiently small.
Hence when $|s_1-s_2|+|\chi_1-\chi_2|$ is sufficiently small, $$\begin{aligned} |(V_{y_1}(\zeta)-\chi_1)-(V_{y_2}(\zeta)-\chi_2)|<|V_{y_1}(\zeta)-\chi_1|,\qquad \forall \zeta\in \partial B(\zeta_1,\epsilon).\end{aligned}$$ By Rouché's theorem, $V_{y_2}(\zeta)-\chi_2$ has a unique root in $B(\zeta_1,\epsilon)$; hence $(\chi_2,s_2)\in \tilde{\mathcal{L}}$. **Proofs of (4) and (5).** These follow from ([\[dc\]](#dc){reference-type="ref" reference="dc"}) and ([\[ds\]](#ds){reference-type="ref" reference="ds"}). **Proof of (6).** From (1)-(5) we see that $T_{\mathcal{L}}(\widetilde{\mathcal{L}})$ is open and homeomorphic to $\widetilde{\mathcal{L}}$. Assume there exists $\zeta\in \partial T_{\mathcal{L}}(\widetilde{\mathcal{L}})$ with $\zeta\in {\mathbb H}\setminus T_{\mathcal{L}}(\widetilde{\mathcal{L}})$. Let $\zeta_n\in T_{\mathcal{L}}(\widetilde{\mathcal{L}})$ be such that $\lim_{n\rightarrow\infty}\zeta_n=\zeta$. Then there exists a subsequence along which $(\chi_{\mathcal{L}}(\zeta_n),s_{\mathcal{L}}(\zeta_n))$ converges to some $(\chi,s)\in {\mathbb R}\times[0,1]$, and $\zeta=T_{\mathcal{L}}(\chi,s)$. Since $\zeta\in {\mathbb H}$, we obtain $(\chi,s)\in \widetilde{\mathcal{L}}$ and $\zeta\in T_{\mathcal{L}}(\widetilde{\mathcal{L}})$, a contradiction. ◻ **Theorem 20**. *Suppose that Assumption [Assumption 17](#ap32){reference-type="ref" reference="ap32"} holds.
For each $z\in {\mathbb H}$, let $$\begin{aligned} \mathbf{\Delta}_{n}(z):=\Delta_n(n\chi_{\widetilde{\mathcal{L}}}(z),ns_{\widetilde{\mathcal{L}}}(z)):=\sqrt{\pi}\left|\{g\in[n]:\lambda_g^{(n-ny(s_{\widetilde{\mathcal{L}}}(z)))}-n+g\geq n \chi_{\widetilde{\mathcal{L}}}(z)\}\right|\end{aligned}$$ Under the assumptions of Theorem [Theorem 9](#t15){reference-type="ref" reference="t15"}, $\mathbf{\Delta}_n(z)-\mathbb{E}\mathbf{\Delta}_n(z)$ converges to the GFF in the upper half plane in the sense that for each $s\in(0,1)$ $$\begin{aligned} \lim_{n\rightarrow\infty}\int_{-\infty}^{\infty}\chi^j\left(\Delta_n(n\chi,ns)-\mathbb{E}\Delta_n(n\chi,ns)\right)d\chi =\int_{z\in{\mathbb H}: s_{\widetilde{\mathcal{L}}}(z)=s}\chi^j_{\widetilde{\mathcal{L}}}(z)\frac{d\chi_{\widetilde{\mathcal{L}}}(z)}{dz}\Xi(z)dz\end{aligned}$$* *Proof.* Explicit computations show that $$\begin{aligned} &\lim_{n\rightarrow\infty}\int_{-\infty}^{\infty}\chi^j\left(\Delta_n(n\chi,ns)-\mathbb{E}\Delta_n(n\chi,ns)\right)d\chi\\ &=\frac{\sqrt{\pi}}{j+1}\sum_{i=1}^n\left([\lambda_i^{(n-ns)}-n+i]^{j+1}-\mathbb{E}[\lambda_i^{(n-ns)}-n+i]^{j+1}\right)\end{aligned}$$ Note that $$\begin{aligned} \frac{\partial^2}{\partial z\partial w}\log[z-w]=\frac{1}{(z-w)^2}\end{aligned}$$ Hence we have $$\begin{aligned} &Q(z,w)= \frac{\partial^2}{\partial z\partial w}\log\left(\left[wH_{\mathbf{m}_0}'(w)+\frac{w}{w-1}\right]-\left[zH_{\mathbf{m}_0}'(z)+\frac{z}{z-1}\right]\right)\end{aligned}$$ Let $$\begin{aligned} \tilde{z}:&=zH_{\mathbf{m}_0}'(z)+\frac{z}{z-1}=\mathrm{St}^{(-1)}_{\mathbf{m}_0}(\log z)\\ \tilde{w}:&=wH_{\mathbf{m}_0}'(w)+\frac{w}{w-1}=\mathrm{St}^{(-1)}_{\mathbf{m}_0}(\log w)\end{aligned}$$ Then by Theorem [Theorem 16](#t31){reference-type="ref" reference="t31"}, we obtain $$\begin{aligned} &\lim_{n\rightarrow\infty}\frac{\mathrm{cov}\left[p_{k_1}^{\lfloor\alpha_{r_1}n\rfloor},p_{k_2}^{\lfloor\alpha_{r_2}n\rfloor}\right]}{n^{k_1+k_2}} =
\frac{1}{(2\pi\mathbf{i})^2}\oint_{|\tilde{z}|=C}\oint_{|\tilde{w}|=2C}\left(V_{y_{r_1}}(\tilde{z})\right)^{k_1}\left(V_{y_{r_2}}(\tilde{w})\right)^{k_2}\frac{1}{(\tilde{z}-\tilde{w})^2}d\tilde{z}d\tilde{w}. \end{aligned}$$ After a contour deformation and integration by parts, we obtain $$\begin{aligned} &\lim_{n\rightarrow\infty}\frac{\mathrm{cov}\left[p_{k_1}^{\lfloor\alpha_{r_1}n\rfloor},p_{k_2}^{\lfloor\alpha_{r_2}n\rfloor}\right]}{n^{k_1+k_2}}\\ &= \frac{1}{(2\pi\mathbf{i})^2}\oint_{\tilde{z}\in{\mathbb H}:s_{\widetilde{\mathcal{L}}}(z)=s_{r_1}}\oint_{\tilde{w}\in{\mathbb H}:s_{\widetilde{\mathcal{L}}}(w)=s_{r_2}}\left(\chi_{\widetilde{\mathcal{L}}}(\tilde{z})\right)^{k_1}\left(\chi_{\widetilde{\mathcal{L}}}(\tilde{w})\right)^{k_2}\frac{\partial^2}{\partial z\partial w}\left[2\log\frac{|\tilde{z}-\tilde{w}|}{|\tilde{z}-\overline{\tilde{w}}|}\right]d\tilde{z}d\tilde{w}\\ &=\frac{k_1k_2}{\pi}\oint_{\tilde{z}\in{\mathbb H}:s_{\widetilde{\mathcal{L}}}(z)=s_{r_1}}\oint_{\tilde{w}\in{\mathbb H}:s_{\widetilde{\mathcal{L}}}(w)=s_{r_2}}\left(\chi_{\widetilde{\mathcal{L}}}(\tilde{z})\right)^{k_1-1}\left(\chi_{\widetilde{\mathcal{L}}}(\tilde{w})\right)^{k_2-1} \frac{\partial\chi_{\widetilde{\mathcal{L}}}(\tilde{z})}{\partial\tilde{z}}\frac{\partial\chi_{\widetilde{\mathcal{L}}}(\tilde{w})}{\partial\tilde{w}} G_{\mathbb{H}}(z,w) d\tilde{z}d\tilde{w}. \end{aligned}$$ Then the theorem follows. ◻ # Technical Results {#sa} We use ${\mathbb {Y}}$ to denote the set of all partitions and ${\mathbb {Y}}_N$ to denote the set of all partitions of length $N$. **Lemma 21**. *Suppose that $(\lambda(N))\in {\mathbb {Y}}_N$ is a regular sequence of partitions and that the sequence of counting measures $m(\lambda(N))$ converges weakly to a measure $\mathbf{m}$ with compact support.
When the parameters $\beta_i$ are all equal to 1, there exists an explicit function $H_{\mathbf{m}}$, analytic in a neighborhood of 1 and depending on the weak limit $\mathbf{m}$, such that $$\lim_{N\rightarrow\infty} \frac{1}{N} \log\left(% \frac{s_{\lambda(N)}(u_1,\ldots,u_k,1,\ldots,1)}{s_{\lambda(N)}(1,\ldots,1)} \right) = H_{\mathbf{m}}(u_1)+\cdots+H_{\mathbf{m}}(u_k), \label{nlc}$$ and the convergence is uniform when $(u_1,\dotsc,u_k)$ is in a neighborhood of $(1,\dots,1)$.* *Proof.* See Theorem 4.2 of [@bg]. ◻ Precisely, $H_{\mathbf{m}}$ is constructed as follows: let $S_{\mathbf{m}}(z)=z+\sum_{k=1}^\infty M_k(\mathbf{m}) z^{k+1}$ be the moment generating function of the measure $\mathbf{m}$, where $M_k(\mathbf{m})=\int x^k d\mathbf{m}(x)$, and let $S_{\mathbf{m}}^{(-1)}$ be its compositional inverse. Let $R_{\mathbf{m}}(z)$ be the *Voiculescu R-transform* of $\mathbf{m}$ defined as $$R_{\mathbf{m}}(z) = \frac{1}{S_\mathbf{m}^{(-1)}(z)} - \frac{1}{z}.$$ Then $$\label{hmz} H_{\mathbf{m}}(u) = \int_{0}^{\ln u} R_\mathbf{m}(t)dt+ \ln\left( \frac{\ln u}{u-1} \right).$$ In particular, $H_{\mathbf{m}}(1)=0$, and $$H'_\mathbf{m}(u) = \frac{1}{u S_\mathbf{m}^{(-1)}(\ln u)} - \frac{1}{u-1}.\label{hmd}$$ **Lemma 22**. *Let $n$ be a positive integer and let $g(z)$ be an analytic function defined in a neighborhood of 1. Then $$\begin{aligned} \lim_{(z_1,\ldots,z_n)\rightarrow (1,\ldots,1)}\left(\sum_{i=1}^n\frac{g(z_i)}{\prod_{j\in[n],j\neq i}(z_i-z_j)}\right)=\left.\frac{\partial^{n-1}}{\partial z^{n-1}}\left(\frac{g(z)}{(n-1)!}\right)\right|_{z=1}\end{aligned}$$* *Proof.* See Lemma 5.5 of [@bg]. ◻ **Lemma 23**. *Suppose that $(\lambda(N))\in {\mathbb {Y}}_N$ is a regular sequence of partitions and that the sequence of counting measures $m(\lambda(N))$ converges weakly to a measure $\mathbf{m}$ with compact support.
Then $$\begin{aligned} &\lim_{N\rightarrow\infty}\frac{\partial^2}{\partial x_1\partial x_2}\log\frac{s_{\lambda(N)}(x_1,\ldots,x_{k},1^{N-k})}{s_{\lambda(N)}(1^N)} \\ &=\frac{\partial^2}{\partial x_1\partial x_2}\log\left(1-(x_1-1)(x_2-1)\frac{x_1 H_{\mathbf{m}}'(x_1)-x_2H_{\mathbf{m}}'(x_2)}{x_1-x_2}\right)\end{aligned}$$ and $$\begin{aligned} \lim_{N\rightarrow\infty}\frac{\partial^3}{\partial x_1\partial x_2\partial x_3}\log\frac{s_{\lambda(N)}(x_1,\ldots,x_{k},1^{N-k})}{s_{\lambda(N)}(1^N)}=0.\end{aligned}$$* *Proof.* See Theorem 8.2 of [@bg16]. ◻ **Acknowledgements.** We thank Sylvie Corteel for asking the questions solved in this paper and for helpful discussions. ZL acknowledges support from the National Science Foundation under grant 1608896 and from the Simons Foundation under grant 638143. DK and IP are grateful to the Workshop on 'Randomness, Integrability, and Universality', held in Spring 2022 at the Galileo Galilei Institute for Theoretical Physics, for hospitality and support at some stage of this work. IP acknowledges support from the Academy of Finland under grant 355839.
--- abstract: | The Eichler-Shimura isomorphism describes a certain cohomology group with coefficients in a space of polynomials by using holomorphic modular/cusp forms. It determines a canonical decomposition of the corresponding de Rham cohomology group associated to a specific oper on a Riemann surface. One purpose of the present paper is to establish its analogue for opers in positive characteristic. We first discuss some basic properties of the (parabolic) de Rham cohomology groups and deformation spaces of $G$-opers (where $G$ is a semisimple algebraic group of adjoint type) in a general formulation. In particular, it is shown that the deformation space of a $G$-oper induced from an $\mathrm{SL}_2$-oper decomposes into a direct sum of the (parabolic) de Rham cohomology groups of its symmetric products. As a consequence, we obtain an Eichler-Shimura-type decomposition for dormant opers on general pointed stable curves by considering a transversal intersection of related spaces in the de Rham moduli space. author: - Yasuhiro Wakabayashi title: | Infinitesimal deformations of opers\ in positive characteristic and\ the de Rham cohomology of symmetric products --- # Introduction ## The Eichler-Shimura isomorphism for a modular curve {#S01} The Eichler-Shimura isomorphism describes a certain cohomology group with coefficients in a space of polynomials by using holomorphic modular/cusp forms. To be precise, let $N$ be an integer with $N \geq 5$, and $\Gamma$ the level $N$ congruence subgroup of $\mathrm{SL}_2 (\mathbb{Z})$. Given each $l \geq 2$, we have the $\mathbb{C}$-vector space $$\begin{aligned} V_{l-2} := \mathrm{Sym}^{l-2}(\mathbb{C}X + \mathbb{C}Y)\end{aligned}$$ of homogeneous degree $l-2$ polynomials in $\mathbb{C}[X, Y]$. It has a $\Gamma$-action given by $\gamma (P) (X, Y) =P(a X + c Y, bX + d Y)$ for any $P \in V_{l-2}$ and $\gamma := \begin{pmatrix} a & b \\ c & d\end{pmatrix} \in \Gamma$. The Eichler-Shimura isomorphism (cf.
[@Eic],  [@Shi]) asserts a canonical isomorphism for the cohomology group $H^1 (\Gamma, V_{l-2})$ of the resulting $\mathbb{C}[\Gamma]$-module $V_{l-2}$: $$\begin{aligned} \label{Eq121} M_l (\Gamma, \mathbb{C}) \oplus \overline{S_l (\Gamma, \mathbb{C})} \stackrel{\sim}{\rightarrow}H^1 (\Gamma, V_{l-2}),\end{aligned}$$ where $\overline{(-)}$ denotes complex conjugation and $M_l (\Gamma, \mathbb{C})$ (resp., $S_l (\Gamma, \mathbb{C})$) denotes the $\mathbb{C}$-vector space of modular forms (resp., cusp forms) of weight $l$ and level $N$. This isomorphism can be interpreted in geometric terms, as follows. Let $Y_\Gamma$ denote the Riemann surface defined as the quotient $\Gamma \backslash \mathbb{H}$ of the upper half plane $\mathbb{H}:= \left\{z \in \mathbb{C}\, | \, \mathrm{Im}(z) > 0\right\}$ by the natural $\Gamma$-action. Also, let $X_\Gamma := \Gamma \backslash \mathbb{H}^*$, where $\mathbb{H}^* := \mathbb{H}\cup \mathbb{P}^1 (\mathbb{Q})$ (as a subset of $\mathbb{P}^1 (\mathbb{C})$, equipped with a natural $\mathrm{SL}_2 (\mathbb{R})$-action) has a topology inducing a structure of compact Riemann surface on the quotient $X_\Gamma$. The moduli interpretation of $Y_\Gamma$ provides a universal family of complex tori $\pi : E \rightarrow Y_\Gamma$ of level $N$. 
The relative de Rham cohomology of this family forms a holomorphic vector bundle $\mathcal{F}_Y := \mathbb{R}^1 \pi_* \Omega_{E/Y_\Gamma}^\bullet$ of rank $2$; it carries a (flat) connection $\nabla_Y$ (called the *Gauss-Manin connection*), as well as a $2$-step decreasing filtration $\{ \mathcal{F}_Y^j \}_{j=0}^2$ (called the *Hodge filtration*) given by $\mathcal{F}^0_Y := \mathcal{F}_Y$, $\mathcal{F}^{2}_Y := 0$, and $\mathcal{F}^1_Y := \pi_* (\Omega_{E/Y_\Gamma})$, considered as a line subbundle of $\mathcal{F}_Y$ via the injection $\mathcal{F}^1_Y \hookrightarrow\mathcal{F}_Y$ arising from the Hodge to de Rham spectral sequence $'E_1^{a, b}:= \mathbb{R}^b \pi_* \Omega^a_{E/Y_\Gamma} \Rightarrow \mathbb{R}^{a + b}\pi_* \Omega_{E/Y_\Gamma}^\bullet$. Each object in the data $(\mathcal{F}_Y, \nabla_Y, \{ \mathcal{F}^j_Y \}_{j=0}^2)$ has a natural extension over $X_\Gamma$ by admitting simple poles along the cusps $D := X_\Gamma \setminus Y_\Gamma$; the resulting collection $$\begin{aligned} \label{Eq124} \mathscr{F}_X^\heartsuit := (\mathcal{F}_X, \nabla_X, \{ \mathcal{F}_X^j \}_{j=0}^2) \end{aligned}$$ specifies a canonical example of an $\mathrm{SL}_2$-oper (cf. Remark [Remark 2](#Rem667){reference-type="ref" reference="Rem667"}), and the line bundle $\varTheta := \mathcal{F}_X^1$ satisfies $\varTheta^{\otimes 2} \cong \Omega_{X} (D)$. We shall write $\mathrm{Sym}^{l-2}\mathscr{F}_X$ for the $(l-2)$-nd symmetric power of the flat bundle $\mathscr{F}_X := (\mathcal{F}_X, \nabla_X)$. 
The $1$-st de Rham cohomology group $H_{\mathrm{dR}}^1 (X, \mathrm{Sym}^{l-2}\mathscr{F}_X)$ of $\mathrm{Sym}^{l-2}\mathscr{F}_X$ fits into the short exact sequence $$\begin{aligned} \label{Eq122} 0 \longrightarrow H^0 (X, \varTheta^{\otimes l}) \longrightarrow H_{\mathrm{dR}}^1 (X, \mathrm{Sym}^{l-2}\mathscr{F}_X) \longrightarrow H^0 (X, \varTheta^{\otimes l}(-D))^\vee \longrightarrow 0\end{aligned}$$ arising from the filtration on $\mathrm{Sym}^{l-2}\mathscr{F}_X$ induced by $\{ \mathcal{F}_X^j \}_j$. Since $\mathrm{Sym}^{l-2}\mathscr{F}_X$ corresponds to the local system determined by the $\mathbb{C}[\Gamma]$-module $V_{l-2}$ via the Riemann-Hilbert correspondence, there exists a canonical identification $$\begin{aligned} H^1_{\mathrm{dR}} (X, \mathrm{Sym}^{l-2} \mathscr{F}_X) = H^1 (\Gamma, V_{l-2}) \end{aligned}$$ (cf.  [@D Chap. II, Corollaire 6.10]). On the other hand, the elements of $H^0 (X, \varTheta^{\otimes l})$ can be interpreted, via descent along the covering map $\mathbb{H}\twoheadrightarrow Y_\Gamma$, as those of $M_l (\Gamma, \mathbb{C})$, and moreover, the Petersson inner product yields $H^0 (X, \varTheta^{\otimes l} (-D))^\vee = \overline{S_l (\Gamma, \mathbb{C})}$. Under these identifications, the Eichler-Shimura isomorphism ([\[Eq121\]](#Eq121){reference-type="ref" reference="Eq121"}) may be regarded as a canonical choice of a splitting of the short exact sequence ([\[Eq122\]](#Eq122){reference-type="ref" reference="Eq122"}) given by taking periods. See, e.g.,  [@BrHa],  [@KaSc],  [@Sch] for algebraic treatments of the Eichler-Shimura isomorphism in terms of de Rham cohomology. 
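For orientation, we sketch how the splitting by periods arises (in one common normalization; sign and action conventions vary among the cited references, so the formula below should be read as a sketch rather than a precise statement). To a cusp form $f \in S_l (\Gamma, \mathbb{C})$ one attaches the $V_{l-2}$-valued map $$\begin{aligned} \gamma \longmapsto \int_{z_0}^{\gamma z_0} f(\tau) \, (X - \tau Y)^{l-2} \, d\tau \end{aligned}$$ for a fixed base point $z_0 \in \mathbb{H}$. The modularity of $f$ implies that this map is a $1$-cocycle on $\Gamma$, and changing $z_0$ alters it only by a coboundary, so its class in $H^1 (\Gamma, V_{l-2})$ is well-defined. This transcendental integration is precisely the non-algebraic ingredient that has no direct counterpart in positive characteristic. 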
## Analogous decomposition in positive characteristic {#S034} The present paper aims to investigate the short exact sequence ([\[Eq122\]](#Eq122){reference-type="ref" reference="Eq122"}) for opers in positive characteristic (including the case of parabolic de Rham cohomology) in relation to their deformation spaces, as well as to establish an analogue of the Eichler-Shimura type decomposition for such opers. The present paper contains many arguments based on those in  [@Wak8] and can be positioned as a continuation of that work. Let $p$ be an odd prime, $l$ an integer with $3 \leq l \leq p-1$, $k$ an algebraically closed field of characteristic $p$, $(g, r)$ a pair of nonnegative integers with $2g-2 +r > 0$, and $\mathscr{X}:= (X, \{ \sigma_i \}_{i=1}^r)$ an $r$-pointed stable curve over $k$ of genus $g$, where $X$ denotes a nodal curve over $k$ and $\{ \sigma_i \}_i$ is a set of marked points on $X$. (We deal not only with smooth curves but also with *pointed* and *stable* curves, since the same arguments apply; it is, however, not essential to work in this level of generality.) Suppose that we are given an $\mathrm{SL}_2$-oper $\mathscr{F}^\heartsuit := (\mathcal{F}, \nabla, \{ \mathcal{F}^j \}_j)$ on $\mathscr{X}$. Then, the $2$-fold tensor product of the line bundle $\varTheta := \mathcal{F}^1 \left(\subseteq \mathcal{F}\right)$ is isomorphic to the sheaf $\Omega_{X^\mathrm{log}}$ of logarithmic $1$-forms on the log curve $X^\mathrm{log}$ determined by $\mathscr{X}$. 
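Since $\varTheta^{\otimes 2} \cong \Omega_{X^\mathrm{log}}$ and $\mathrm{deg}(\Omega_{X^\mathrm{log}}) = 2g - 2 + r$, the dimensions of the section spaces $H^0 (X, \varTheta^{\otimes l})$ and $H^0 (X, \varTheta^{\otimes l}(-D))$ can be tracked by Riemann-Roch. The following bookkeeping is our own illustrative sketch (assuming $X$ smooth, $l$ even, and all degrees large enough that $h^0 = \mathrm{deg} - g + 1$ applies; the function names are ours):

```python
def h0(deg, g):
    # h^0 of a line bundle of degree `deg` on a smooth curve of genus g,
    # valid when deg > 2g - 2 (Riemann-Roch plus vanishing of h^1)
    assert deg > 2 * g - 2
    return deg - g + 1

def dimension_count(g, r, l):
    # deg Theta^{tensor 2} = deg Omega_{X^log} = 2g - 2 + r, so for l even:
    deg_theta_l = (l // 2) * (2 * g - 2 + r)
    a = h0(deg_theta_l, g)        # dim H^0(X, Theta^{tensor l})
    b = h0(deg_theta_l - r, g)    # dim H^0(X, Theta^{tensor l}(-D))
    return a, b, a + b

# e.g. g = 2, r = 3, l = 4: the two dimensions sum to (l-1)(2g-2+r) = 15
a, b, total = dimension_count(2, 3, 4)
assert total == (4 - 1) * (2 * 2 - 2 + 3)
```

The common total $(l-1)(2g-2+r)$ is the dimension one expects for the first (log) de Rham cohomology of a flat bundle of rank $l-1$ on $\mathscr{X}$, consistent with the exactness of the sequence recalled next.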
Just as in the complex case, $\mathscr{F}^\heartsuit$ gives rise to the $1$-st de Rham cohomology $H^1_{\mathrm{dR}} (X, \mathrm{Sym}^{l-2}\mathscr{F})$ of the $(l-2)$-nd symmetric power $\mathrm{Sym}^{l-2}\mathscr{F}$ of $\mathscr{F}:= (\mathcal{F}, \nabla)$, fitting into the short exact sequence $$\begin{aligned} \label{Eq120} 0 \longrightarrow H^0 (X, \varTheta^{\otimes l}) \longrightarrow H^1_{\mathrm{dR}} (X, \mathrm{Sym}^{l-2}\mathscr{F}) \longrightarrow H^0 (X, \varTheta^{\otimes l}(-D))^\vee \longrightarrow 0\end{aligned}$$ (cf. Theorem [Theorem 19](#Th67){reference-type="ref" reference="Th67"}, (i)), where $D$ denotes the divisor on $X$ given by the union of the $\sigma_i$'s. (Similarly, a version for the parabolic de Rham cohomology of $\mathrm{Sym}^{l-1}\mathscr{F}$ holds, as asserted in Theorem [Theorem 19](#Th67){reference-type="ref" reference="Th67"}, (ii).) Our question regarding this sequence is the following: > *For what kind of $\mathrm{SL}_2$-opers $\mathscr{F}^\heartsuit$ can we construct a **canonical** splitting of ([\[Eq120\]](#Eq120){reference-type="ref" reference="Eq120"}) arising from geometry, as the classical Eichler-Shimura isomorphism does?* Even when $\mathscr{F}^\heartsuit$ is taken to be the mod $p$ reduction of the "$\mathscr{F}_X^\heartsuit$\" defined on a modular curve as above, the classical Eichler-Shimura isomorphism would not be available for constructing a canonical splitting, because its construction is non-algebraic. To answer the above question, we discuss opers with vanishing $p$-curvature, called *dormant opers*. Here, recall that $p$-curvature is an invariant for (flat) connections measuring the obstruction to the compatibility of $p$-power structures appearing in certain associated spaces of infinitesimal symmetries; this is a key ingredient of the $p$-adic Teichmüller theory developed by S. Mochizuki (cf. 
[@Mzk1],  [@Mzk2]), in which various moduli spaces of $\mathrm{SL}_2$-opers (or $\mathrm{PGL}_2$-opers) characterized by the degree of vanishing of the $p$-curvature were investigated. That work suggests that $\mathrm{SL}_2$-opers whose $p$-curvature is close to zero may be thought of as candidates for suitable analogues of the canonical $\mathrm{SL}_2$-opers on modular curves. In fact, we prove the following assertion, which is one of the main results in the present paper (cf. Theorem [Theorem 32](#Co12){reference-type="ref" reference="Co12"} for a similar assertion with slightly relaxed assumptions). **Theorem 1** (cf. Corollary [Corollary 33](#ThAgg){reference-type="ref" reference="ThAgg"}). *Let us keep the above notation. Moreover, suppose that $l$ is even, $\mathscr{F}^\heartsuit$ is dormant (i.e., has vanishing $p$-curvature), and $\mathscr{X}$ is general in the moduli space $\overline{\mathcal{M}}_{g, r}$ of $r$-pointed stable curves of genus $g$. Then, there exists a **canonical** splitting of ([\[Eq120\]](#Eq120){reference-type="ref" reference="Eq120"}). 
In particular, we obtain a **canonical** decomposition $$\begin{aligned} \label{Eqff131} H^1_{\mathrm{dR}} (X, \mathrm{Sym}^{l-2}\mathscr{F}) = H^0 (X, \varTheta^{\otimes l}) \oplus H^0 (X, \varTheta^{\otimes l}(-D))^\vee\end{aligned}$$ of the $1$-st de Rham cohomology $H^1_{\mathrm{dR}} (X, \mathrm{Sym}^{l-2}\mathscr{F})$ of $\mathrm{Sym}^{l-2}\mathscr{F}$.* Other relatively important results are Theorems [Theorem 3](#Pr2){reference-type="ref" reference="Pr2"}, [Theorem 14](#Th9m){reference-type="ref" reference="Th9m"}, [Theorem 17](#Prc1){reference-type="ref" reference="Prc1"}, [Theorem 19](#Th67){reference-type="ref" reference="Th67"},  [Theorem 20](#Th66){reference-type="ref" reference="Th66"}, [Theorem 27](#Th77){reference-type="ref" reference="Th77"}, [Theorem 32](#Co12){reference-type="ref" reference="Co12"}, and Corollaries [Corollary 30](#Co34){reference-type="ref" reference="Co34"}, [Corollary 31](#Co34f){reference-type="ref" reference="Co34f"}. ## Outline of our argument {#SS0009} The present paper develops a general theory of opers on pointed stable curves and their infinitesimal deformations, which may be positioned as a continuation of the related work in  [@Wak8]. (The first half of our discussion proceeds without assuming that the characteristic of the base field is positive.) Here, we briefly describe the arguments and results leading to the above theorem. (Many assertions proved in this text are not directly involved in proving Theorem [Theorem 1](#ThA){reference-type="ref" reference="ThA"}. For example, the study of parabolic de Rham cohomology groups and a self-duality assertion for them are included; see Corollary [Corollary 11](#Coo78){reference-type="ref" reference="Coo78"}. Regarding the self-duality, related previous results can be found in, e.g.,  [@Can Theorem 5.2, (a)],  [@Og2 Remark 3.2], and  [@Sch Theorem 2.7, (i)].) Let $G$ be a semisimple algebraic group of adjoint type with certain properties (cf. 
the assumption imposed at the beginning of § [1.4](#Ssde){reference-type="ref" reference="Ssde"}). The GIT quotient of its Lie algebra $\mathfrak{g}:= \mathrm{Lie} (G)$ by the adjoint $G$-action defines a $k$-scheme $\mathfrak{c}$. We shall fix an element $\rho$ of $\mathfrak{c}(k)^{\times r}$, which lifts to an element $\widetilde{\rho} \in \mathfrak{g}^{\times r}$ via the Kostant section (cf. ([\[Eq202\]](#Eq202){reference-type="ref" reference="Eq202"})). Denote by $$\begin{aligned} \mathcal{C}onn_{G} \end{aligned}$$ (cf. ([\[Eq3334\]](#Eq3334){reference-type="ref" reference="Eq3334"})) the moduli stack classifying flat $G$-bundles on $\mathscr{X}$. As proved in Theorem [Theorem 3](#Pr2){reference-type="ref" reference="Pr2"}, the moduli stack $$\begin{aligned} \mathcal{O}p_{G} \ \left(\text{resp.,} \ \mathcal{O}p_{G, \rho} \right) \end{aligned}$$ classifying $G$-opers (resp., $G$-opers of radii $\rho$) forms a locally closed substack of $\mathcal{C}onn_G$. If we are given a $G$-oper $\mathscr{E}^\spadesuit$ classified by a closed point $q \in \mathcal{O}p_{G, \rho}$, then the inclusion $\mathcal{O}p_{G} \hookrightarrow\mathcal{C}onn_G$ induces, via differentiation, a $k$-linear inclusion $T_q\mathcal{O}p_G \hookrightarrow T_q \mathcal{C}onn_G$ between the tangent spaces at $q$. We investigate these spaces, as well as the relationship between them, by describing relevant infinitesimal deformations in terms of various (de Rham) cohomology groups associated to $\mathscr{E}^\spadesuit$. The resulting description allows us to obtain a short exact sequence $$\begin{aligned} \label{Et4} 0 \longrightarrow T_q \mathcal{O}p_G &\xrightarrow{\mathrm{inclusion}} T_q \mathcal{C}onn_{G} \longrightarrow T_q^\vee \mathcal{O}p_{G, \rho} \longrightarrow 0 \end{aligned}$$ (cf. Theorem [Theorem 14](#Th9m){reference-type="ref" reference="Th9m"}), where $T^\vee_q (-)$ denotes the cotangent space at $q$. 
Next, since the base field $k$ is of characteristic $p$, one can define the closed substack $$\begin{aligned} \mathcal{C}onn_G^{\psi = 0}\end{aligned}$$ (cf. ([\[Eq220\]](#Eq220){reference-type="ref" reference="Eq220"})) of $\mathcal{C}onn_G$ classifying flat $G$-bundles with vanishing $p$-curvature. The intersection $$\begin{aligned} \label{Eq35ggh} \mathcal{O}p_G^{^\mathrm{Zzz...}} := \mathcal{O}p_G \cap \mathcal{C}onn_G^{\psi = 0} \ \end{aligned}$$ (cf. ([\[Eq212\]](#Eq212){reference-type="ref" reference="Eq212"}), ([\[Eq211\]](#Eq211){reference-type="ref" reference="Eq211"})) is nothing but the moduli space of dormant $G$-opers and was substantially investigated in  [@Wak8] from the viewpoint of enumerative geometry. One important fact proved in *loc. cit.* is the étaleness of $\mathcal{O}p_G^{^\mathrm{Zzz...}}\!$ for a sufficiently general $\mathscr{X}$ (cf.  [@Wak8 Theorem 8.19]). (This fact also played an essential role in the proof of Joshi's conjecture; see  [@Wak8 Theorem 9.10].) Since the equality of dimensions $$\begin{aligned} \label{Ghr33} \mathrm{dim}(T_q\mathcal{O}p_G) + \mathrm{dim}(T_q\mathcal{C}onn_G^{\psi = 0}) = \mathrm{dim}(T_q\mathcal{C}onn_G)\end{aligned}$$ holds (cf. Proposition [Proposition 21](#Eqw3){reference-type="ref" reference="Eqw3"}), the étaleness means that $\mathcal{O}p_G$ and $\mathcal{C}onn_G^{\psi = 0}$ intersect transversally. Hence, the composite $$\begin{aligned} T_q \mathcal{C}onn_{G}^{\psi = 0} \hookrightarrow T_q \mathcal{C}onn_{G} \twoheadrightarrow T^\vee_q \mathcal{O}p_{G, \rho} \end{aligned}$$ is an isomorphism, and the composite of its inverse and the inclusion $T_q \mathcal{C}onn_{G}^{\psi = 0} \hookrightarrow T_q \mathcal{C}onn_{G}$ determines a split injection of ([\[Et4\]](#Et4){reference-type="ref" reference="Et4"}). 
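The linear algebra behind this step deserves to be made explicit. If $V$ is a finite-dimensional vector space with subspaces $U$ and $W$ satisfying $$\begin{aligned} U \cap W = 0, \hspace{10mm} \mathrm{dim}(U) + \mathrm{dim}(W) = \mathrm{dim}(V), \end{aligned}$$ then the composite $W \hookrightarrow V \twoheadrightarrow V/U$ is an isomorphism, and its inverse followed by the inclusion $W \hookrightarrow V$ splits the sequence $0 \rightarrow U \rightarrow V \rightarrow V/U \rightarrow 0$; in particular, $V = U \oplus W$. Here this is applied with $U = T_q \mathcal{O}p_G$, $W = T_q \mathcal{C}onn_G^{\psi = 0}$, and $V = T_q \mathcal{C}onn_G$: the transversality gives $U \cap W = 0$, and ([\[Ghr33\]](#Ghr33){reference-type="ref" reference="Ghr33"}) gives the dimension count. 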
It follows that there exists a canonical decomposition $$\begin{aligned} \label{Erkij} T_q \mathcal{O}p_G \oplus T_q^\vee \mathcal{O}p_{G, \rho} = T_q \mathcal{C}onn_{G} \end{aligned}$$ of $T_q \mathcal{C}onn_{G}$. Finally, let us consider the case of $G = \mathrm{PGL}_n$ with $n = \frac{p-1}{2}$. (We restrict to this setting here only for simplicity; the main text deals with a general $G$.) Suppose further that $\mathscr{E}^\spadesuit$ comes from the "$\mathscr{F}^\heartsuit$\" as in the statement of Theorem [Theorem 1](#ThA){reference-type="ref" reference="ThA"} via change of structure group $\mathrm{SL}_2 \rightarrow\mathrm{PGL}_n$ obtained by integrating a suitable principal $\mathfrak{s}\mathfrak{l}_2$-subalgebra of $\mathfrak{p}\mathfrak{g}\mathfrak{l}_n$. Then, a natural affine structure on $\mathcal{O}p_G$ yields direct sum decompositions $$\begin{aligned} T_q \mathcal{O}p_G = \bigoplus_{j=1}^{n-1} H^0 (X, \varTheta^{\otimes 2j}), \hspace{10mm} T_q^\vee \mathcal{O}p_{G, \rho} = \bigoplus_{j=1}^{n-1} H^0 (X, \varTheta^{\otimes 2j}(-D))^\vee\end{aligned}$$ (cf.  [@Wak8 Theorem 2.24]). On the other hand, we show (cf. ([\[Eq3033\]](#Eq3033){reference-type="ref" reference="Eq3033"}), ([\[Eq834\]](#Eq834){reference-type="ref" reference="Eq834"})) that the $k$-vector space $T_q \mathcal{C}onn_{G}$ decomposes into a direct sum $$\begin{aligned} T_q \mathcal{C}onn_{G} = \bigoplus_{j=1}^{n-1} H_{\mathrm{dR}}^1 (X, \mathrm{Sym}^{2j} \mathscr{F})\end{aligned}$$ according to the decomposition of $\mathfrak{p}\mathfrak{g}\mathfrak{l}_n$ into irreducible $\mathfrak{s}\mathfrak{l}_2$-submodules. By the definitions involved, ([\[Erkij\]](#Erkij){reference-type="ref" reference="Erkij"}) is compatible with these direct sum decompositions. Thus, the required decomposition ([\[Eqff131\]](#Eqff131){reference-type="ref" reference="Eqff131"}) can be obtained by considering its $(l/2)$-th factor. **Remark 1**. 
As observed in  [@Wak13], the construction of the canonical decomposition ([\[Eqff131\]](#Eqff131){reference-type="ref" reference="Eqff131"}) discussed above is entirely parallel to that of the classical Eichler-Shimura isomorphism, once one regards $\mathcal{C}onn_G^{\psi = 0}$ as a positive characteristic analogue of the moduli space of $G$-local systems with real monodromy. ## Notation and Conventions {#Ssde} The basic terminology and notation in our discussion follow  [@Wak8]. Throughout the present paper, we fix an algebraically closed field $k$, a semisimple algebraic group $G$ over $k$, a maximal torus $T$ of $G$, and a Borel subgroup $B$ of $G$ containing $T$. Write $\mathfrak{g}$, $\mathfrak{b}$, and $\mathfrak{t}$ for the Lie algebras of $G$, $B$, and $T$, respectively. Whenever we discuss the case of $G=\mathrm{SL}_n$ or $\mathrm{PGL}_n$, the group $B$ (resp., $T$) is assumed to be the subgroup of $G$ consisting of upper triangular matrices (resp., diagonal matrices). Regarding the algebraic group $G$, we assume that either "$\mathrm{char}(k)=0$\" or "$\mathrm{char}(k) = p >2h_G$\" or "$G = \mathrm{PGL}_n$ with $1 < n < \mathrm{char}(k)$\" is fulfilled, where $h_G$ denotes $1$ plus the maximal height of a root, as defined in  [@Wak8 § 1.4.1]. (If $G$ is simple, then $h_G$ coincides with its Coxeter number.) In particular, when $\mathrm{char}(k)= p>0$, the prime $p$ is very good for $G$, in the sense of  [@KW Chap. VI, Definition 1.6] (cf. the proof of Lemma [Lemma 4](#Lem1){reference-type="ref" reference="Lem1"}). 
Next, given an fs log scheme $T^\mathrm{log}$ and an fs log scheme $Y^\mathrm{log}$ over $T^\mathrm{log}$, we shall write $\Omega_{Y^\mathrm{log}/T^\mathrm{log}}$ for the sheaf of logarithmic $1$-forms on $Y^\mathrm{log}$ over $T^\mathrm{log}$, and write $\mathcal{T}_{Y^\mathrm{log}/T^\mathrm{log}}$ for its dual, i.e., the sheaf of locally defined logarithmic derivations on $\mathcal{O}_Y$ relative to $T^\mathrm{log}$. We fix a pair of nonnegative integers $(g, r)$ with $2g-2 +r >0$ and an $r$-pointed stable curve $\mathscr{X}:= (X, \{ \sigma_i \}_{i=1}^r)$ of genus $g$ over $k$, where $X$ denotes the underlying curve and $\{ \sigma_i \}_{i=1}^r$ denotes the set of its marked points. Recall from  [@KaFu Theorem 2.6] that $\mathrm{Spec}(k)$ and $X$ admit canonical log structures pulled back, via the classifying map of $\mathscr{X}$, from the moduli stack of $r$-pointed stable curves of genus $g$ and the universal family of curves over that stack, respectively; the resulting log schemes are denoted by $\mathrm{Spec}(k)^\mathrm{log}$ and $X^\mathrm{log}$, respectively. For simplicity, we write $\Omega := \Omega_{X^\mathrm{log}/\mathrm{Spec}(k)^\mathrm{log}}$ and $\mathcal{T}:= \mathcal{T}_{X^\mathrm{log}/\mathrm{Spec}(k)^\mathrm{log}}$. Also, write $d$ for the logarithmic universal derivation $\mathcal{O}_X \rightarrow\Omega$. Let $D$ denote the divisor defined as the union of the $\sigma_i$'s. For each $\mathcal{O}_X$-module $\mathcal{F}$, we set ${^c}\mathcal{F}:= \mathcal{F}\otimes_{\mathcal{O}_X} \mathcal{O}_X (-D)$. In particular, ${^c}\Omega$ is isomorphic to the dualizing sheaf $\omega_X$ of $X/k$. # Flat $G$-bundles and opers on a pointed stable curve {#S1} In this section, we recall the definition of a flat $G$-bundle, as well as a $G$-oper, on a pointed stable curve. We refer the reader to, e.g.,  [@Wak8] for detailed discussions on $G$-opers in such a general formulation. See also  [@BD1],  [@JP] and  [@Mzk2]. 
## Flat $G$-bundles on a pointed stable curve {#SS0111} We shall fix a (right) $G$-bundle $\pi : \mathcal{E}\rightarrow X$ on $X$. Given a $k$-vector space $\mathfrak{h}$ equipped with a $G$-action, we write $\mathfrak{h}_\mathcal{E}$ for the vector bundle on $X$ whose total space is the quotient $\mathcal{E}\times^G \mathfrak{h}\left(:= (\mathcal{E}\times_k \mathfrak{h})/G \right)$. In the case where $\mathfrak{h}$ is taken to be the Lie algebra $\mathfrak{g}$ equipped with the $G$-action via the adjoint representation $\mathrm{Ad} : G \rightarrow\mathrm{GL}(\mathfrak{g})$, the resulting vector bundle $\mathfrak{g}_\mathcal{E}$ is called the **adjoint bundle** of $\mathcal{E}$. By pulling back the log structure of $X^\mathrm{log}$ via $\pi$, one obtains a log structure on $\mathcal{E}$; the resulting log scheme is denoted by $\mathcal{E}^\mathrm{log}$. The $G$-action on $\mathcal{E}$ induces a $G$-action on the direct image $\pi_* (\mathcal{T}_{\mathcal{E}^\mathrm{log}/\mathrm{Spec}(k)^\mathrm{log}})$ of $\mathcal{T}_{\mathcal{E}^\mathrm{log}/\mathrm{Spec}(k)^\mathrm{log}}$. We shall set $\widetilde{\mathcal{T}}_{\mathcal{E}^\mathrm{log}} := \pi_* (\mathcal{T}_{\mathcal{E}^\mathrm{log}/\mathrm{Spec}(k)^\mathrm{log}})^G$, i.e., the subsheaf of $G$-invariant sections of $\pi_* (\mathcal{T}_{\mathcal{E}^\mathrm{log}/\mathrm{Spec}(k)^\mathrm{log}})$. The differential of $\pi$ gives rise to a canonical short exact sequence of $\mathcal{O}_X$-modules $$\begin{aligned} \label{Eqq10} 0 \longrightarrow\mathfrak{g}_\mathcal{E}\longrightarrow\widetilde{\mathcal{T}}_{\mathcal{E}^\mathrm{log}} \xrightarrow{d_\mathcal{E}} \mathcal{T}\longrightarrow 0 \end{aligned}$$ (cf.  [@Wak8 § 1.2.5]). 
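For the trivial $G$-bundle $\mathcal{E}= X \times G$ (a standard special case, recorded here only for illustration), the sequence ([\[Eqq10\]](#Eqq10){reference-type="ref" reference="Eqq10"}) splits canonically, i.e., $$\begin{aligned} \widetilde{\mathcal{T}}_{\mathcal{E}^\mathrm{log}} \cong (\mathfrak{g}\otimes_k \mathcal{O}_X) \oplus \mathcal{T}, \end{aligned}$$ and a splitting $\nabla$ of $d_\mathcal{E}$ (i.e., a log connection in the sense recalled below) amounts, under this identification, to a $\mathfrak{g}$-valued logarithmic $1$-form $\omega \in H^0 (X, \Omega \otimes_k \mathfrak{g})$ via $\nabla (\partial) = (\omega (\partial), \partial)$. 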
Recall that a *log $k$-connection* (or, a *log connection*, for short) on $\mathcal{E}$ is, by definition, an $\mathcal{O}_X$-linear morphism $\nabla : \mathcal{T}\rightarrow\widetilde{\mathcal{T}}_{\mathcal{E}^\mathrm{log}}$ with $d_\mathcal{E}\circ \nabla = \mathrm{id}_{\mathcal{T}}$. Since $\Omega$ is a line bundle, any log connection is flat, i.e., has vanishing curvature (cf.  [@Wak8 Definition 1.23] for the definition of curvature). By a **flat $G$-bundle** on $\mathscr{X}$, we shall mean a pair $(\mathcal{E}, \nabla)$ consisting of a (right) $G$-bundle $\mathcal{E}$ on $X$ and a (flat) log connection $\nabla$ on $\mathcal{E}$. For example, any flat $\mathrm{SL}_n$-bundle (where $n \geq 1$) on $\mathscr{X}$ can be identified with a collection $$\begin{aligned} \label{WWW19} (\mathcal{F}, \nabla, \delta)\end{aligned}$$ consisting of a rank $n$ vector bundle $\mathcal{F}$ on $X$, a log connection $\nabla$ on $\mathcal{F}$ in the usual sense (i.e., a $k$-linear morphism $\nabla : \mathcal{F}\rightarrow\Omega \otimes_{\mathcal{O}_X} \mathcal{F}$ satisfying the Leibniz rule), and an isomorphism of flat bundles $\delta : (\mathrm{det}(\mathcal{F}), \mathrm{det}(\nabla)) \stackrel{\sim}{\rightarrow}(\mathcal{O}_X, d)$, where $\mathrm{det}(\nabla)$ denotes the log connection on the determinant bundle $\mathrm{det}(\mathcal{F})$ of $\mathcal{F}$ induced naturally from $\nabla$. We denote by $$\begin{aligned} \label{Eq3334} \mathcal{C}onn_G \end{aligned}$$ the moduli stack classifying flat $G$-bundles on $\mathscr{X}$. It is verified (from, e.g.,  [@Wan Theorem 1.0.1]) that $\mathcal{C}onn_{G}$ may be represented by an algebraic stack over $k$ in the sense of Artin that is locally of finite type over $k$. ## The Cartan decomposition of a Lie algebra {#SS051} Denote by $\Gamma$ the set of simple roots in $B$ with respect to $T$. 
For each character $\beta$ of $T$, we set $$\begin{aligned} \mathfrak{g}^\beta := \left\{ x \in \mathfrak{g}\, | \, \mathrm{Ad}(t)(x) = \beta (t) \cdot x \ \text{for all} \ t \in T\right\}.\end{aligned}$$ Recall that the Cartan decomposition of $\mathfrak{g}$ is a Lie algebra grading $\mathfrak{g}= \bigoplus_{j \in \mathbb{Z}} \mathfrak{g}_j$ such that $\mathfrak{g}_j = \mathfrak{g}_{-j} = 0$ for all $j \geq h_G$ (with $h_G$ as in § [1.4](#Ssde){reference-type="ref" reference="Ssde"}), and $\mathfrak{g}_0 = \mathfrak{t}$, $\mathfrak{g}_1 = \bigoplus_{\alpha \in \Gamma} \mathfrak{g}^\alpha$, $\mathfrak{g}_{-1} = \bigoplus_{\alpha \in \Gamma} \mathfrak{g}^{-\alpha}$. It induces the decreasing filtration $\{ \mathfrak{g}^j \}_{j \in \mathbb{Z}}$ on $\mathfrak{g}$ given by $\mathfrak{g}^j := \bigoplus_{l \geq j} \mathfrak{g}_l$. Let us fix a collection of data $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}:= \{ x_\alpha \}_{\alpha\in \Gamma}$, where each $x_\alpha$ denotes a generator of $\mathfrak{g}^\alpha$, and write $q_{1} := \sum_{\alpha \in \Gamma} x_\alpha$. (If we take two such collections, then they are conjugate by an element of $T$. For that reason, the results obtained in the subsequent discussions are essentially independent of the choice of $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$.) Also, by regarding the fundamental coweight $\check{\omega}_\alpha$ of $\alpha$ as an element of $\mathfrak{t}$ via differentiation, we obtain $\check{\rho} := \sum_{\alpha \in \Gamma} \check{\omega}_\alpha \in \mathfrak{t}$. Then, there is a unique collection $\{ y_\alpha \}_{\alpha \in \Gamma}$ of generators $y_\alpha \in \mathfrak{g}^{-\alpha}$, such that the set $\{ q_{-1}, 2 \check{\rho}, q_1 \}$, where $q_{-1} := \sum_{\alpha \in \Gamma} y_\alpha \in \mathfrak{g}_{-1}$, forms an $\mathfrak{s}\mathfrak{l}_2$-triple. 
If $G =\mathrm{PGL}_n$, then we identify its Lie algebra $\mathfrak{p}\mathfrak{g}\mathfrak{l}_n$ with $\mathfrak{s}\mathfrak{l}_n$ via the composite isomorphism $\mathfrak{s}\mathfrak{l}_n \hookrightarrow\mathfrak{g}\mathfrak{l}_n \twoheadrightarrow\mathfrak{p}\mathfrak{g}\mathfrak{l}_n$, and assume that the triple $\{ q_{-1}, 2 \check{\rho}, q_1 \}$ in $\mathfrak{s}\mathfrak{l}_n$ is always taken as $$\begin{aligned} \label{QQ0230} & q_{-1} = \begin{pmatrix} 0 & 0 & \cdots & 0 & 0 \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix}, \ 2 \check{\rho} = \begin{pmatrix} n-1 & 0 & 0 & \cdots & 0 \\ 0 & n-3 & 0 & \cdots & 0 \\ 0 & 0 & n-5 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & - (n-1) \end{pmatrix}, \end{aligned}$$ $$\begin{aligned} & q_1 = \begin{pmatrix} 0 & n-1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 2(n-2) & 0 & \cdots & 0 \\ 0 & 0 & 0 & 3(n-3) & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & 0& \cdots & n-1 \\ 0 & 0 & 0 & 0& \cdots &0 \end{pmatrix}. \notag\end{aligned}$$ Denote by $\mathfrak{c}_G$ (or $\mathfrak{c}$, for simplicity) the GIT quotient of $\mathfrak{g}$ by the adjoint $G$-action. In particular, there exists a natural projection $\chi : \mathfrak{g}\twoheadrightarrow\mathfrak{c}$. This morphism factors through the projection $\mathfrak{g}\rightarrow[\mathfrak{g}/G]$ to the quotient stack $[\mathfrak{g}/G]$ of $\mathfrak{g}$ by the adjoint $G$-action. The resulting morphism of $k$-stacks $[\mathfrak{g}/G] \rightarrow\mathfrak{c}$ will be denoted by $[\chi]$. Next, let us consider the space $\mathfrak{g}^{\mathrm{ad}(q_1)}$ of $\mathrm{ad}(q_1)$-invariants, i.e., $\mathfrak{g}^{\mathrm{ad}(q_1)} := \left\{ x \in \mathfrak{g}\, | \, \mathrm{ad}(q_1) (x) = 0\right\}$. 
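As a concrete sanity check (purely illustrative; the helper functions below are our own and not part of the main text), one can verify numerically, say for $n = 4$, that the displayed matrices, whose $j$-th superdiagonal entry of $q_1$ equals $j (n-j)$, satisfy the $\mathfrak{s}\mathfrak{l}_2$-triple relations $[q_1, q_{-1}] = 2 \check{\rho}$ and $[2 \check{\rho}, q_{\pm 1}] = \pm 2 q_{\pm 1}$, and that $\mathrm{ker}(\mathrm{ad}(q_1))$ in $\mathfrak{s}\mathfrak{l}_n$ has dimension $n - 1 = \mathrm{rk}(\mathfrak{s}\mathfrak{l}_n)$:

```python
from fractions import Fraction

n = 4

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bracket(A, B):  # Lie bracket [A, B] = AB - BA
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(n)] for i in range(n)]

def scale(c, A):
    return [[c * x for x in row] for row in A]

# q_{-1}: ones on the subdiagonal; 2*rho-check: diag(n-1, n-3, ..., -(n-1));
# q_1: j-th superdiagonal entry equal to j(n-j)
q_minus = [[1 if i == j + 1 else 0 for j in range(n)] for i in range(n)]
two_rho = [[n - 1 - 2 * i if i == j else 0 for j in range(n)] for i in range(n)]
q_plus = [[j * (n - j) if j == i + 1 else 0 for j in range(n)] for i in range(n)]

assert bracket(q_plus, q_minus) == two_rho
assert bracket(two_rho, q_plus) == scale(2, q_plus)
assert bracket(two_rho, q_minus) == scale(-2, q_minus)

# Basis of sl_n: off-diagonal E_{ij} and the coroot matrices E_{ii} - E_{i+1,i+1}
def E(i, j):
    return [[1 if (a, b) == (i, j) else 0 for b in range(n)] for a in range(n)]

basis = [E(i, j) for i in range(n) for j in range(n) if i != j]
basis += [[[E(i, i)[a][b] - E(i + 1, i + 1)[a][b] for b in range(n)]
           for a in range(n)] for i in range(n - 1)]

def rank(rows):  # Gaussian elimination over the rationals
    rows = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

# dim ker(ad(q_1)|_{sl_n}) = dim(sl_n) - rank(ad(q_1)) = n - 1
ad_rows = [[x for row in bracket(q_plus, B) for x in row] for B in basis]
dim_kernel = len(basis) - rank(ad_rows)
assert dim_kernel == n - 1
```
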
It has dimension equal to $\mathrm{rk}(\mathfrak{g})$, and the Cartan decomposition $\mathfrak{g}= \bigoplus_{j \in \mathbb{Z}} \mathfrak{g}_j$ restricts to a decomposition $\mathfrak{g}^{\mathrm{ad}(q_1)} = \bigoplus_{j \in \mathbb{Z}} \mathfrak{g}_j^{\mathrm{ad}(q_1)}$ on $\mathfrak{g}^{\mathrm{ad}(q_1)}$. By the assumption on $G$ imposed in § [1.4](#Ssde){reference-type="ref" reference="Ssde"}, the composite $$\begin{aligned} \label{Eq141} \mathrm{Kos} : \mathfrak{g}^{\mathrm{ad}(q_1)} \xrightarrow{v \mapsto v + q_{-1}} \mathfrak{g}\xrightarrow{\chi} \mathfrak{c}\end{aligned}$$ becomes an isomorphism of $k$-schemes. Denote by $[0]_G$ (or simply $[0]$ when there is no risk of confusion) the element of $\mathfrak{c}(k)$ determined by the image of the zero section $0$ of $\mathfrak{g}^{\mathrm{ad}(q_1)}$ via $\mathrm{Kos}$. ## Residues and radii {#SS0542} Let $(\mathcal{E}, \nabla)$ be a flat $G$-bundle on $\mathscr{X}$. By a **marking** on $(\mathcal{E}, \nabla)$, we mean a collection of Lie algebra isomorphisms $\sigma_i^*(\mathfrak{g}_\mathcal{E})\stackrel{\sim}{\rightarrow}\mathfrak{g}$ ($i = 1, \cdots, r$). The notion of an isomorphism between flat $G$-bundles with marking can be defined in an evident manner. Under the assumption that $r > 0$, let us choose $i \in \{1, \cdots, r \}$, and moreover choose a local function $t \in \mathcal{O}_X$ defining the closed subscheme $\mathrm{Im}(\sigma_i)$ of $X$. Then, the element $$\begin{aligned} \label{Eq148} \mu_i^\nabla := \overline{\nabla \left( t \frac{d}{d t}\right)} \in \sigma_i^*(\widetilde{\mathcal{T}}_{\mathcal{E}^\mathrm{log}}) \end{aligned}$$ lies in $\sigma_i^* (\mathfrak{g}_\mathcal{E})$ and does not depend on the choice of $t$ (cf. the discussion in  [@Wak8 § 1.6.3]). We refer to $\mu_i^\nabla$ as the **residue** (or, the **monodromy operator**, in the terminology of  [@Mzk2] and  [@Wak8]) of $\nabla$ at $\sigma_i$. 
If $(\mathcal{E}, \nabla)$ is equipped with a marking, then $\mu_i^\nabla$ can be regarded as an element of $\mathfrak{g}$ by passing to the isomorphism $\sigma^*_i (\mathfrak{g}_\mathcal{E}) \stackrel{\sim}{\rightarrow}\mathfrak{g}$ given by this marking. In the case of $r = 0$, any log connection is regarded as being equipped with the trivial marking and having *residue $\emptyset$*. Thus, for each $r$-tuple $\mu := (\mu_i)_{i=1}^r$ of elements of $\mathfrak{g}$ (where $\mu := \emptyset$ if $r = 0$), we obtain the moduli stack $$\begin{aligned} \label{Eq4493} \mathcal{C}onn_{G, \mu} \end{aligned}$$ classifying flat $G$-bundles on $\mathscr{X}$ with marking whose residue at $\sigma_i$ coincides with $\mu_i$ for every $i=1, \cdots, r$. Next, note that, for each $i=1, \cdots, r$, the pair $(\sigma_i^* (\mathcal{E}), \mu_i^\nabla)$ specifies a $k$-rational point of $[\mathfrak{g}/G]$; it defines a $k$-rational point $$\begin{aligned} \label{Eq4492} \rho_i^\nabla := [\chi] \left( (\sigma_i^* (\mathcal{E}), \mu_i^\nabla)\right) \in \mathfrak{c}(k)\end{aligned}$$ via the morphism $[\chi] : [\mathfrak{g}/G] \rightarrow\mathfrak{c}$. This element is well-defined without choosing any marking, and we refer to $\rho_i^\nabla$ as the **radius** of $\nabla$ at $\sigma_i$. Given an $r$-tuple $\rho := (\rho_i)_{i=1}^r \in \mathfrak{c}^{\times r}$, we say that $\nabla$ is **of radii $\rho$** if its radius at $\sigma_i$ coincides with $\rho_i$ for every $i =1, \cdots, r$. In the case of $r = 0$, any log connection is regarded as having *radius $\emptyset$*. The stack $\mathcal{C}onn_{G}$ admits a closed substack $$\begin{aligned} \label{Eq4491} \mathcal{C}onn_{G, \rho} \end{aligned}$$ classifying flat $G$-bundles on $\mathscr{X}$ of radii $\rho$. 
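To illustrate these notions in the simplest nontrivial case (a hedged example of ours, not taken from the main text): take $G = \mathrm{SL}_2$, let $\mathcal{E}$ be the trivial bundle near $\sigma_i$, and let $\nabla$ act on sections by $$\begin{aligned} \nabla \left( t \frac{d}{d t}\right) = t \frac{d}{d t} + A \end{aligned}$$ for a constant matrix $A \in \mathfrak{s}\mathfrak{l}_2$, where $t$ is a local parameter at $\sigma_i$. Then the residue is $\mu_i^\nabla = A$ under the evident marking, while the radius $\rho_i^\nabla \in \mathfrak{c}(k)$ remembers only the image of $A$ in the adjoint quotient; for $\mathfrak{s}\mathfrak{l}_2$ one has $\mathfrak{c}\cong \mathbb{A}^1$ (in one standard normalization, via the determinant), so the radius amounts to the eigenvalues of the residue matrix up to sign. 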
By passing from the residues of each flat bundle to its radii via $\chi : \mathfrak{g}\rightarrow\mathfrak{c}$, we obtain a morphism of $k$-stacks $$\begin{aligned} \label{Eq4490} \mathcal{C}onn_{G, \mu} \rightarrow\mathcal{C}onn_{G, \chi (\mu)}, \end{aligned}$$ where $\chi (\mu):= (\chi (\mu_i))_{i=1}^r$. ## $G$-opers on a pointed stable curve {#SS052} Let $\mathcal{E}_B$ be a $B$-bundle on $X$. Denote by $\mathcal{E}_G$ the $G$-bundle obtained from $\mathcal{E}_B$ via change of structure group by the inclusion $B \hookrightarrow G$. (Hence, $\mathcal{E}_B$ specifies a $B$-reduction of $\mathcal{E}_G$.) By using the injection $\widetilde{\mathcal{T}}_{\mathcal{E}_B^\mathrm{log}} \hookrightarrow\widetilde{\mathcal{T}}_{\mathcal{E}_G^\mathrm{log}}$ induced by the natural inclusion $\mathcal{E}_B \hookrightarrow\mathcal{E}_G$, we regard $\widetilde{\mathcal{T}}_{\mathcal{E}_B^\mathrm{log}}$ as an $\mathcal{O}_X$-submodule of $\widetilde{\mathcal{T}}_{\mathcal{E}_G^\mathrm{log}}$. Since the $k$-vector subspace $\mathfrak{g}^j$ of $\mathfrak{g}$ (for each $j \in \mathbb{Z}$) is closed under the adjoint $B$-action, it induces a subbundle $\mathfrak{g}_{\mathcal{E}_B}^j$ of $\mathfrak{g}_{\mathcal{E}_B} \left(= \mathfrak{g}_{\mathcal{E}_G} \right)$. 
For each $j \in \mathbb{Z}$, we shall set $$\begin{aligned} \widetilde{\mathcal{T}}_{\mathcal{E}_G^\mathrm{log}}^j := \widetilde{\mathcal{T}}_{\mathcal{E}_B^\mathrm{log}} + \mathfrak{g}_{\mathcal{E}_B}^j \left(\subseteq \widetilde{\mathcal{T}}_{\mathcal{E}_G^\mathrm{log}} \right).\end{aligned}$$ The inclusion $\mathfrak{g}_{\mathcal{E}_B} \left(= \mathfrak{g}_{\mathcal{E}_G} \right) \hookrightarrow\widetilde{\mathcal{T}}_{\mathcal{E}_G^\mathrm{log}}$ yields an isomorphism $$\begin{aligned} \label{Eqq49} \mathfrak{g}_{\mathcal{E}_B}^{j}/\mathfrak{g}_{\mathcal{E}_B}^{j+1} \stackrel{\sim}{\rightarrow}\widetilde{\mathcal{T}}_{\mathcal{E}_G^\mathrm{log}}^{j}/\widetilde{\mathcal{T}}_{\mathcal{E}_G^\mathrm{log}}^{j+1}.\end{aligned}$$ On the other hand, since each $\mathfrak{g}^{-\alpha}$ ($\alpha \in \Gamma$) is closed under the adjoint $B$-action, the decomposition $\mathfrak{g}^{-1}/\mathfrak{g}^0 = \bigoplus_{\alpha \in \Gamma} \mathfrak{g}^{-\alpha}$ yields a canonical isomorphism $\mathfrak{g}^{-1}_{\mathcal{E}_B}/\mathfrak{g}^{0}_{\mathcal{E}_B} \stackrel{\sim}{\rightarrow}\bigoplus_{\alpha \in \Gamma} \mathfrak{g}_{\mathcal{E}_B}^{- \alpha}$. By composing it with the inverse of ([\[Eqq49\]](#Eqq49){reference-type="ref" reference="Eqq49"}) for $j=-1$, we obtain a decomposition $$\begin{aligned} \label{Eq10} \widetilde{\mathcal{T}}_{\mathcal{E}_G^\mathrm{log}}^{-1}/\widetilde{\mathcal{T}}_{\mathcal{E}_G^\mathrm{log}}^0 \stackrel{\sim}{\rightarrow}\bigoplus_{\alpha \in \Gamma} \mathfrak{g}_{\mathcal{E}_B}^{- \alpha}.\end{aligned}$$ Now, let us consider a pair $\mathscr{E}^\spadesuit := (\mathcal{E}_B, \nabla)$ consisting of a $B$-bundle $\mathcal{E}_B$ on $X$ and a log connection $\nabla$ on the $G$-bundle $\mathcal{E}_G := \mathcal{E}_B \times_B G$ induced by $\mathcal{E}_B$. Recall that $\mathscr{E}^\spadesuit$ is called a **$G$-oper** on $\mathscr{X}$(cf.  
[@Wak8 Definition 2.1]) if it satisfies the following two conditions: - $\nabla (\mathcal{T}) \subseteq \widetilde{\mathcal{T}}_{\mathcal{E}_G^\mathrm{log}}^{-1}$; - For any $\alpha \in \Gamma$, the composite $$\begin{aligned} \mathcal{T}\xrightarrow{\nabla} \widetilde{\mathcal{T}}_{\mathcal{E}_G^\mathrm{log}}^{-1} \twoheadrightarrow\widetilde{\mathcal{T}}^{-1}_{\mathcal{E}_G^\mathrm{log}}/\widetilde{\mathcal{T}}^0_{\mathcal{E}_G^\mathrm{log}} \twoheadrightarrow\mathfrak{g}^{-\alpha}_{\mathcal{E}_B}\end{aligned}$$ is an isomorphism, where the third arrow denotes the natural projection with respect to the decomposition ([\[Eq10\]](#Eq10){reference-type="ref" reference="Eq10"}). If $r >0$ and $\rho$ is an element of $\mathfrak{c}(k)^{\times r}$, then we say that a $G$-oper $\mathscr{E}^\spadesuit := (\mathcal{E}_B, \nabla)$ is **of radii $\rho$** if the flat $G$-bundle $(\mathcal{E}_G, \nabla)$ is of radii $\rho$ (cf.  [@Wak8 Definition 2.32]). In the case of $r = 0$, any $G$-oper is regarded as being **of radius $\emptyset$**. **Remark 2** (Case of $G = \mathrm{SL}_n$). Each $\mathrm{SL}_n$-oper can be described in terms of vector bundles because a $B$-reduction of an $\mathrm{SL}_n$-bundle corresponds to a complete flag on the associated vector bundle. 
To be precise, a pair $(\mathcal{E}_B, \nabla)$ of a $B$-bundle $\mathcal{E}_B$ and a connection $\nabla$ on $\mathcal{E}_{\mathrm{SL}_n} := \mathcal{E}_B \times_B \mathrm{SL}_n$ can be translated into a collection of data $$\begin{aligned} \label{Eq58} \mathscr{F}^\heartsuit := (\mathcal{F}, \nabla, \{ \mathcal{F}^j \}_{j=0}^n, \delta)\end{aligned}$$ such that $(\mathcal{F}, \nabla, \delta)$ is a triple as in ([\[WWW19\]](#WWW19){reference-type="ref" reference="WWW19"}) and $\{ \mathcal{F}^j \}_{j=0}^n$ denotes a decreasing filtration $0= \mathcal{F}^n \subseteq \mathcal{F}^{n-1} \subseteq \cdots \subseteq \mathcal{F}^1 \subseteq \mathcal{F}^0 = \mathcal{F}$ of $\mathcal{F}$ whose graded pieces $\mathcal{F}^j /\mathcal{F}^{j+1}$ ($j=0, \cdots, n-1$) are line bundles. Moreover, such a collection corresponds to an $\mathrm{SL}_n$-oper if it satisfies the following two conditions: - For every $j = 1, \cdots, n-1$, the inclusion relation $\nabla (\mathcal{F}^j) \subseteq \Omega \otimes_{\mathcal{O}_X} \mathcal{F}^{j-1}$ holds; - For every $j= 1, \cdots, n-1$, the $\mathcal{O}_X$-linear morphism $$\begin{aligned} \label{Eqq212} \mathrm{KS}_{\mathscr{F}^\heartsuit}^j : \mathcal{F}^j/\mathcal{F}^{j+1} \rightarrow\Omega \otimes_{\mathcal{O}_X} (\mathcal{F}^{j-1}/\mathcal{F}^{j})\end{aligned}$$ induced from $\nabla$ (by taking account of the former condition) is an isomorphism. (The isomorphism $\mathrm{KS}_{\mathscr{F}^\heartsuit}^j$ is called the $j$-th *Kodaira-Spencer map* for $\mathscr{F}^\heartsuit$.) Also, such a collection without a choice of an isomorphism $\delta$ is called a **$\mathrm{GL}_n$-oper** (cf.  [@Wak8 Definition 4.17]). Even when dealing with an $\mathrm{SL}_n$-oper, we will omit the datum "$\delta$\" in its notation for simplicity. # The moduli space of opers and change of structure group {#S2} This section deals with the moduli stack of $G$-opers and recalls a construction of $G$-opers using $\mathrm{PGL}_2$-opers (or $\mathrm{SL}_2$-opers).
## The moduli space of $G$-opers {#SS055} Denote by $$\begin{aligned} \label{Eq5009} \mathcal{O}p_G \end{aligned}$$ the moduli stack classifying $G$-opers on $\mathscr{X}$. *In the rest of the present paper, we assume that $G$ is of adjoint type unless otherwise stated.* Then, it follows from  [@Wak8 Theorem A] that $\mathcal{O}p_{G}$ may be represented by an affine scheme over $k$ (cf. § [3.3](#SS036){reference-type="ref" reference="SS036"} for more details). The assignment $(\mathcal{E}_B, \nabla) \mapsto (\mathcal{E}_G, \nabla)$ (for each $G$-oper $(\mathcal{E}_B, \nabla)$) determines a morphism of $k$-stacks $$\begin{aligned} \label{Eq201} \mathrm{Imm}_G : \mathcal{O}p_{G} \rightarrow\mathcal{C}onn_{G}.\end{aligned}$$ Next, suppose that we are given an element $\rho := (\rho_i)_{i=1}^r$ of $\mathfrak{c}(k)^{\times r}$ (where $\rho := \emptyset$ if $r = 0$). It determines a closed subscheme $$\begin{aligned} \label{Eq5007} \mathcal{O}p_{G, \rho}\end{aligned}$$ of $\mathcal{O}p_G$ classifying $G$-opers of radii $\rho$. The morphism $\mathrm{Imm}_G$ restricts to a morphism of $k$-stacks $$\begin{aligned} \label{Eq5001} \mathrm{Imm}_{G, \rho} : \mathcal{O}p_{G, \rho} \rightarrow\mathcal{C}onn_{G, \rho}.\end{aligned}$$ Let us take a $G$-oper $\mathscr{E}^\spadesuit$ on $\mathscr{X}$ of radii $\rho$. According to  [@Wak8 Proposition 2.19], there exists a unique (up to isomorphism) pair $({^\dagger}\mathscr{E}^\spadesuit, \mathrm{nor}_{\mathscr{E}^\spadesuit})$ consisting of a $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$-normal $G$-oper ${^\dagger}\mathscr{E}^\spadesuit := ({^\dagger}\mathcal{E}_B, \nabla)$ on $\mathscr{X}$ and an isomorphism of $G$-opers $\mathrm{nor}_{\mathscr{E}^\spadesuit} : {^\dagger}\mathscr{E}^\spadesuit \stackrel{\sim}{\rightarrow}\mathscr{E}^\spadesuit$ (cf.  [@Wak8 Definition 2.14] for the definition of $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$-normality). 
Here, recall from  [@Wak8 § 2.4.3] that, for each $i =1, \cdots, r$, there exists a canonical $G$-equivariant isomorphism $\sigma^*_i ({^\dagger}\mathcal{E}_G) \stackrel{\sim}{\rightarrow}G$, where ${^\dagger}\mathcal{E}_G := {^\dagger}\mathcal{E}_B \times^B G$, that induces a Lie algebra isomorphism $\left(\sigma^*_i (\mathfrak{g}_{{^\dagger}\mathcal{E}_G}) = \right) \sigma^*_i (\mathfrak{g}_{{^\dagger}\mathcal{E}_B}) \stackrel{\sim}{\rightarrow}\mathfrak{g}$. In this way, the underlying flat $G$-bundle $({^\dagger}\mathcal{E}_G, \nabla)$ of ${^\dagger}\mathscr{E}^\spadesuit$ admits a canonical choice of a marking. Also, by the definition of $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$-normality, the residue of $\nabla$ at $\sigma_i$ coincides with $q_{-1} + \mathrm{Kos}^{-1}(\rho_i)$. If we write $$\begin{aligned} \label{Eq202} \widetilde{\rho} \left(= (\widetilde{\rho}_i)_{i=1}^r \right) := (q_{-1}+ \mathrm{Kos}^{-1}(\rho_i))_{i=1}^r,\end{aligned}$$ then the resulting assignment $\mathscr{E}^\spadesuit \mapsto ({^\dagger}\mathcal{E}_G, \nabla)$ determines a morphism of $k$-stacks $$\begin{aligned} \label{Eq200} \mathrm{Imm}_{G, \widetilde{\rho}} : \mathcal{O}p_{G, \rho} \rightarrow\mathcal{C}onn_{G, \widetilde{\rho}}.\end{aligned}$$ The following assertion generalizes the fact in  [@BD1 § 3.1.11, Remark, (iii)], dealing with the case where the underlying curve is an unpointed smooth projective curve over the field of complex numbers. **Theorem 3**. *(Recall that $G$ is assumed to be of adjoint type.) The morphism of $k$-stacks $\mathrm{Imm}_G$ (resp., $\mathrm{Imm}_{G, \rho}$; resp., $\mathrm{Imm}_{G, \widetilde{\rho}}$) is schematic and an immersion.* *Proof.* It suffices to consider the non-resp'd assertion because the proofs of the remaining ones are similar. Denote by $\mathcal{B}un_G$ (resp., $\mathcal{B}un_B$) the moduli stack of $G$-bundles (resp., $B$-bundles) on $X$. 
By forgetting the data of log connections, we obtain a morphism of $k$-stacks $\mathcal{C}onn_G \rightarrow\mathcal{B}un_G$. Note that the morphism $\mathcal{B}un_B \rightarrow\mathcal{B}un_G$ induced by the change of structure group using $B \hookrightarrow G$ is schematic (cf.  [@Wan Corollary 3.2.4]), and $\mathcal{O}p_G$ can be represented by a locally closed substack of the fiber product $\mathcal{C}onn_G \times_{\mathcal{B}un_G} \mathcal{B}un_B$. Hence, the morphism $\mathrm{Imm}_G$ is verified to be schematic. The result of Lemma [Lemma 4](#Lem1){reference-type="ref" reference="Lem1"} described below implies that $\mathrm{Imm}_G$ is a monomorphism. By Lemma [Lemma 5](#Lem145){reference-type="ref" reference="Lem145"}, this morphism also satisfies the valuative criterion $(*)$ in the sense of  [@Mzk2 Chap I, § 2.4]. In particular, it is a radimmersion (cf.  [@Mzk2 Chap I, Theorem 2.12]). Consequently, it follows from  [@Mzk2 Chap I, Corollary 2.13] that $\mathrm{Imm}_G$ defines an immersion. ◻ The following two lemmas were applied in the proof of the above theorem. **Lemma 4**. *Let $\mathscr{E}^\spadesuit_i := (\mathcal{E}_{B, i}, \nabla_i)$ ($i=1,2$) be $G$-opers on $\mathscr{X}$ and $\alpha_G : (\mathcal{E}_{G, 1}, \nabla_1) \stackrel{\sim}{\rightarrow}(\mathcal{E}_{G, 2}, \nabla_2)$ (where $\mathcal{E}_{G, i} := \mathcal{E}_{B, i} \times^B G$) an isomorphism of flat $G$-bundles. Then, there exists a unique isomorphism of $G$-opers $\alpha_B : \mathscr{E}^\spadesuit_1 \stackrel{\sim}{\rightarrow}\mathscr{E}^\spadesuit_2$ inducing $\alpha_G$ via change of structure group by $B \hookrightarrow G$.
In particular, if $(\mathcal{E}_G, \nabla)$ is a flat $G$-bundle on $\mathscr{X}$, then a $B$-reduction $\mathcal{E}_B$ of $\mathcal{E}_G$ for which the pair $(\mathcal{E}_B, \nabla)$ defines a $G$-oper is (if it exists) uniquely determined up to isomorphism.* *Proof.* We may assume, without loss of generality, that $\mathcal{E}:= \mathcal{E}_{G, 1} = \mathcal{E}_{G, 2}$, $\nabla := \nabla_1 = \nabla_2$, and that $\alpha_G$ coincides with the identity morphism of $\mathcal{E}$. The two Borel reductions $\mathcal{E}_{B, 1}$, $\mathcal{E}_{B, 2}$ of $\mathcal{E}$ define the $\mathcal{O}_X$-submodules of $\mathfrak{g}_\mathcal{E}$, i.e., $\mathfrak{g}^0_{\mathcal{E}_{B, 1}} \left(= \mathfrak{b}_{\mathcal{E}_{B, 1}}\right)$ and $\mathfrak{g}^0_{\mathcal{E}_{B, 2}} \left(= \mathfrak{b}_{\mathcal{E}_{B, 2}}\right)$. Notice here that, for each $i \in \{1, 2\}$ and $j \in \mathbb{Z}$, the subquotient $\mathfrak{g}^j_{\mathcal{E}_{B, i}}/\mathfrak{g}^{j+1}_{\mathcal{E}_{B, i}}$ is isomorphic to $\Omega^{\otimes j} \otimes_k\mathfrak{g}_{j}$. Hence, by using $\mathrm{deg}(\Omega)> 0$, we can verify the equality $\mathfrak{g}^1_{\mathcal{E}_{B, 1}} = \mathfrak{g}^1_{\mathcal{E}_{B, 2}}$. Next, let us take a closed point $x$ of $X$, and fix an identification $\mathcal{E}_{B, 1}|_x = B$, which extends to $\mathcal{E}|_x = G$ and moreover induces $\mathfrak{g}^1_{\mathcal{E}_{B, 1}} |_x = \mathfrak{g}^1$ via differentiation. Then, we can write $\mathcal{E}_{B, 2}|_x = h B \left(\subseteq G \right)$ for some $h \in G$. It follows that $\mathfrak{g}^1_{\mathcal{E}_{B, 2}} |_x$ coincides with the image $\mathrm{ad}(h)(\mathfrak{g}^1)$ of $\mathfrak{g}^1$ via the adjoint operator $\mathrm{ad}(h)$ given by $h$. Since the characteristic of the base field $k$ is very good for $G$ if it is of positive characteristic, the equality $\mathfrak{g}^1_{\mathcal{E}_{B, 1}} = \mathfrak{g}^1_{\mathcal{E}_{B, 2}}$ proved above together with the Springer isomorphism (cf. 
e.g.,  [@KW Theorem 3.3]) implies $N = h N h^{-1}$, where $N$ denotes the unipotent radical of $B$. It follows that $B = h B h^{-1}$ (cf.  [@Hum § 23.1, Corollary D]), and we have $h \in B$ (cf.  [@Hum § 23.1, Theorem]). Therefore, the equality $\mathcal{E}_{B, 1}|_x = \mathcal{E}_{B, 2}|_x$ holds for all $x$. This means $\mathcal{E}_{B, 1} = \mathcal{E}_{B, 2}$, completing the proof of this assertion. ◻ **Lemma 5**. *Let $T$ be a trait, i.e., the spectrum of a discrete valuation ring. Denote by $\eta$ its generic point and by $t$ its closed point. Now, let us consider a collection $$\begin{aligned} (\mathscr{E}_T, \mathcal{E}_{B, \eta}, \mathcal{E}_{B, t}),\end{aligned}$$ where* - *$\mathscr{E}_T := (\mathcal{E}_T, \nabla_T)$ denotes a flat $G$-bundle on the base-change $\mathscr{X}_T := (X_T, \{ \sigma_{i, T} \}_{i=1}^r)$ of $\mathscr{X}$ over $T$ (cf.  [@Wak8 § 1] for the formulation of a flat $G$-bundle on a family of pointed stable curves);* - *$\mathcal{E}_{B, \eta}$ and $\mathcal{E}_{B, t}$ are $B$-reductions of $\mathcal{E}_\eta := \mathcal{E}_T \times_T \eta$ and $\mathcal{E}_{t} := \mathcal{E}_T \times_T t$, respectively, such that both $(\mathcal{E}_{B, \eta}, \nabla_T |_{\mathcal{E}_\eta})$ and $(\mathcal{E}_{B, t}, \nabla_T |_{\mathcal{E}_t})$ form $G$-opers.* *Then, there exists a $B$-reduction $\mathcal{E}_{B, T}$ on $\mathcal{E}_T$ such that $\mathcal{E}_{B, T} \times_T \eta = \mathcal{E}_{B, \eta}$, $\mathcal{E}_{B, T} \times_T t = \mathcal{E}_{B, t}$, and the pair $(\mathcal{E}_{B, T}, \nabla_T)$ forms a $G$-oper.* *Proof.* Denote by $X_\eta$ the generic fiber of $X_T$. Note that $\{ \mathfrak{g}_{\mathcal{E}_\eta}^j \}_{j}$ and $\{ \mathfrak{g}_{\mathcal{E}_t}^j \}_j$ coincide with the Harder-Narasimhan filtrations on $\mathfrak{g}_{\mathcal{E}_\eta}$ and $\mathfrak{g}_{\mathcal{E}_t}$, respectively.
According to  [@HL Theorem 2.3.2], $\{ \mathfrak{g}_{\mathcal{E}_\eta}^j \}_{j}$ extends to a decreasing filtration $\{ \mathfrak{g}_{\mathcal{E}_T}^{j} \}_j$ (i.e., $\mathfrak{g}_{\mathcal{E}_T}^{j} |_{X_\eta} = \mathfrak{g}_{\mathcal{E}_\eta}^j$ for every $j$) on $\mathfrak{g}_{\mathcal{E}_T}$ such that the subquotients $\mathfrak{g}_{\mathcal{E}_T}^{j}/\mathfrak{g}_{\mathcal{E}_T}^{j+1}$ are all $T$-flat. By the upper semicontinuity of the Harder-Narasimhan type, the special fiber of $\{\mathfrak{g}_{\mathcal{E}_T}^j \}_j$ must coincide with $\{ \mathfrak{g}_{\mathcal{E}_t}^j \}_j$. Here, recall that the isomorphism $G \stackrel{\sim}{\rightarrow}\mathrm{Aut} (\mathfrak{g})^0$ (where $\mathrm{Aut} (\mathfrak{g})^0$ denotes the identity component of the group of Lie algebra automorphisms of $\mathfrak{g}$) induced from the adjoint representation of $G$ restricts to an isomorphism of algebraic groups $B \stackrel{\sim}{\rightarrow}\left\{ h \in \mathrm{Aut}(\mathfrak{g})^0 \, | \, h (\mathfrak{b}) \subseteq \mathfrak{b}\right\}$ (cf.  [@Wak8 Remark 1.29]). It follows that the Lie subalgebra $\mathfrak{g}_{\mathcal{E}_T}^0$ of $\mathfrak{g}_{\mathcal{E}_T}$ determines a $B$-reduction $\mathcal{E}_{B, T}$ of $\mathcal{E}_{T}$, which satisfies $\mathcal{E}_{B, T} \times_T \eta = \mathcal{E}_{B, \eta}$ and $\mathcal{E}_{B, T} \times_T t = \mathcal{E}_{B, t}$. Since the generic and special fibers of $(\mathcal{E}_{B, T}, \nabla_T)$ coincide with the $G$-opers $(\mathcal{E}_{B, \eta}, \nabla_T |_{\mathcal{E}_\eta})$ and $(\mathcal{E}_{B, t}, \nabla_T |_{\mathcal{E}_t})$, respectively, the pair $(\mathcal{E}_{B, T}, \nabla_T)$ turns out to form a $G$-oper. This completes the proof of this lemma. ◻ ## Change of structure group {#SS0r2} In the case of $G = \mathrm{PGL}_2$, we use the notations $G^\odot$, $B^\odot$, $\mathfrak{g}^\odot$, and $\mathfrak{b}^\odot$ instead of $G$, $B$, $\mathfrak{g}$, and $\mathfrak{b}$, respectively, for simplicity.
The $\mathfrak{s}\mathfrak{l}_2$-triple $\{ q_{-1}, 2 \check{\rho}, q_1 \}$ in $\mathfrak{g}$ associated to the fixed collection $\ \, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$ determines an injection $$\begin{aligned} \label{injg} \iota_\mathfrak{g}: \mathfrak{g}^\odot \hookrightarrow\mathfrak{g}. \end{aligned}$$ To be precise, $\iota_\mathfrak{g}$ is given by $\iota_\mathfrak{g}\left(\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \right) = q_1$, $\iota_\mathfrak{g}\left(\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \right) = 2 \check{\rho}$, and $\iota_\mathfrak{g}\left(\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} \right) = q_{-1}$. In particular, we have $\iota_\mathfrak{g}(\mathfrak{b}^\odot) \subseteq \mathfrak{b}$. According to  [@Wak8 Proposition 2.10], there exists a unique injective morphism of algebraic $k$-groups $\iota_B : B^\odot \hookrightarrow B$ whose differential induces the restriction $\mathfrak{b}^\odot \hookrightarrow\mathfrak{b}$ of $\iota_\mathfrak{g}$. Let $\mathscr{E}^\spadesuit_\odot := (\mathcal{E}_{B^\odot}, \nabla_\odot)$ be a $G^\odot$-oper on $\mathscr{X}$. We shall write $\mathcal{E}_{G^\odot} := \mathcal{E}_{B^\odot} \times^{B^\odot} G^\odot$, $\mathcal{E}_{B} := \mathcal{E}_{B^\odot} \times^{B^\odot, \iota_B} B$, and $\mathcal{E}_G := \mathcal{E}_B \times^{B} G$. The injection $\widetilde{\mathcal{T}}_{\mathcal{E}^\mathrm{log}_{B^\odot}} \hookrightarrow\widetilde{\mathcal{T}}_{\mathcal{E}^\mathrm{log}_{B}}$ induced by $\iota_B$ extends to an $\mathcal{O}_X$-linear injection $d \iota_{G} : \widetilde{\mathcal{T}}_{\mathcal{E}^\mathrm{log}_{G^\odot}} \hookrightarrow \widetilde{\mathcal{T}}_{\mathcal{E}^\mathrm{log}_{G}}$. 
The composite $\iota_{G*}(\nabla_\odot) : \mathcal{T}_{X^\mathrm{log}} \xrightarrow{\nabla_\odot} \widetilde{\mathcal{T}}_{\mathcal{E}^\mathrm{log}_{G^\odot}} \xrightarrow{d \iota_{G}} \widetilde{\mathcal{T}}_{\mathcal{E}^\mathrm{log}_{G}}$ specifies a log connection on $\mathcal{E}_G$, and the resulting pair $$\begin{aligned} \label{associated} \iota_{G *}(\mathscr{E}^\spadesuit_\odot) := (\mathcal{E}_{B}, \iota_{G*}(\nabla_\odot)) \end{aligned}$$ forms a $G$-oper on $\mathscr{X}$. The assignment $\mathscr{E}_\odot^\spadesuit \mapsto \iota_{G *}(\mathscr{E}_\odot^\spadesuit)$ determines a closed immersion between $k$-schemes $$\begin{aligned} \label{Eq405} %\iota_G^{\mcO p} : \mathcal{O}p_{G^\odot} \hookrightarrow\mathcal{O}p_G\end{aligned}$$ (cf.  [@Wak8 Theorem 2.24, (ii)]). If $\iota_\mathfrak{c}$ denotes the morphism $\mathfrak{c}_{G^\odot} \rightarrow\mathfrak{c}_{G}$ induced by $\iota_\mathfrak{g}$ via the adjoint quotients, then, for each $\rho_\odot := (\rho_{\odot, i})_{i=1}^r \in \mathfrak{c}_{G^\odot} (k)^{\times r}$, the morphism ([\[Eq405\]](#Eq405){reference-type="ref" reference="Eq405"}) restricts to a closed immersion $$\begin{aligned} \label{Eq281} %\iota_{G, \rho_\odot}^{\mcO p} : \mathcal{O}p_{G^\odot, \rho_\odot} \hookrightarrow\mathcal{O}p_{G, \iota_\mathfrak{c}(\rho_\odot)}\end{aligned}$$ by putting $\iota_{\mathfrak{c}} (\rho_\odot) := (\iota_{\mathfrak{c}}(\rho_{\odot, i}))_{i=1}^r$, where $\rho_\odot := \emptyset$ and $\iota_{\mathfrak{c}} (\rho_\odot) := \emptyset$ if $r = 0$ (cf.  [@Wak8 Remark 2.37]). In particular, we obtain a closed immersion $\mathcal{O}p_{G^\odot, [0]^{\times r}} \hookrightarrow\mathcal{O}p_{G, [0]^{\times r}}$, where $[0]^{\times r} := ([0], \cdots, [0])$ in $\mathfrak{c}_{G^\odot} (k)^{\times r}$ or $\mathfrak{c}_{G} (k)^{\times r}$. ## The affine structure on the moduli space {#SS036} Note that $\mathfrak{g}^{\mathrm{ad}(q_1)}$ is closed under the $B^\odot$-action via $\iota_B$. 
Therefore, if ${^\dagger}\mathcal{E}_{B^\odot}$ denotes the $B^\odot$-bundle on $X$ constructed in  [@Wak8 (190)], then it induces a rank $\mathrm{rk}(\mathfrak{g})$ vector bundle $$\begin{aligned} \label{QQ801} \mathcal{V}_{G} := \Omega\otimes_{\mathcal{O}_X} (\mathfrak{g}^{\mathrm{ad}(q_1)})_{{^\dagger}\mathcal{E}_{B^\odot}}\end{aligned}$$ on $X$ (cf.  [@Wak8 § 2.4.5]). For example, $\mathcal{V}_{G^\odot}$ is canonically isomorphic to $\Omega^{\otimes 2}$. The decomposition $\mathfrak{g}^{\mathrm{ad}(q_1)} = \bigoplus_{j \in \mathbb{Z}} \mathfrak{g}_j^{\mathrm{ad}(q_1)}$ determines a decomposition $\mathcal{V}_{G} = \bigoplus_{j \in \mathbb{Z}} \mathcal{V}_{G, j}$; it gives a decreasing filtration $\{ \mathcal{V}^j_{G} \}_{j \in \mathbb{Z}}$ on $\mathcal{V}_G$ by putting $\mathcal{V}^j_{G} := \bigoplus_{j' \geq j} \mathcal{V}_{G, j'}$. Also, for each $j \in \mathbb{Z}$, there exists an isomorphism $$\begin{aligned} \label{QR020} \mathcal{V}_{G, j} \stackrel{\sim}{\rightarrow}\Omega^{\otimes (j+1)} \otimes_k \mathfrak{g}^{\mathrm{ad}(q_1)}_j.\end{aligned}$$ Recall that $\mathcal{O}p_{G^\odot}$ admits a structure of affine space modeled on the space of quadratic log differentials $H^0 (X, \Omega^{\otimes 2}) \left(= H^0 (X, \mathcal{V}_{G^\odot}) \right)$ (cf.  [@Wak8 § 2.3.4]). In what follows, let us review a canonical affine structure on $\mathcal{O}p_G$ generalizing this fact (cf.  [@Wak8 Theorem 2.24, (i)]). The $B^\odot$-equivariant inclusion $\mathfrak{g}^{\mathrm{ad}(q_1)} \hookrightarrow\mathfrak{g}$ yields an $\mathcal{O}_X$-linear injection $$\begin{aligned} \label{inclV} \varsigma : \mathcal{V}_{G} \hookrightarrow\left( \Omega\otimes_{\mathcal{O}_X} \mathfrak{g}_{{^\dagger}\mathcal{E}_{B^\odot}}= \right)\Omega\otimes_{\mathcal{O}_X} \mathfrak{g}_{{^\dagger}\mathcal{E}_{B}},\end{aligned}$$ where ${^\dagger}\mathcal{E}_B := {^\dagger}\mathcal{E}_{B^\odot} \times^{B^\odot, \iota_B} B$. 
Also, $\iota_\mathfrak{g}$ induces an $\mathcal{O}_X$-linear injection $\Omega^{\otimes 2} \left(= \mathcal{V}_{G^\odot} \right) \hookrightarrow\mathcal{V}_{G}$. By using these injections, we shall regard $\mathcal{V}_{G}$ and $\Omega^{\otimes 2}$ as $\mathcal{O}_X$-submodules of $\Omega\otimes_{\mathcal{O}_X} \mathfrak{g}_{{^\dagger}\mathcal{E}_{B}}$ and $\mathcal{V}_{G}$, respectively. Next, let us take a $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$-normal $G^\odot$-oper ${^\dagger}\mathscr{E}_{\odot}^\spadesuit := ({^\dagger}\mathcal{E}_{B^\odot}, \nabla_\odot)$ on $\mathscr{X}$. Given an element $R$ of $H^0 (X, \mathcal{V}_G)$, regarded as an $\mathcal{O}_X$-linear morphism $\mathcal{T}\rightarrow(\mathfrak{g}^{\mathrm{ad}(q_1)})_{{^\dagger}\mathcal{E}_{B^\odot}} \left(\subseteq \widetilde{\mathcal{T}}_{{^\dagger}\mathcal{E}^\mathrm{log}_{G}} \right)$, one may verify that the collection $$\begin{aligned} \iota_{G*}(\mathscr{E}^\spadesuit_{\odot})_{+R} := ({^\dagger}\mathcal{E}_{B}, \iota_{G*}(\nabla_\odot) + R)\end{aligned}$$ forms a $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$-normal $G$-oper. The resulting assignment $(\mathscr{E}^\spadesuit_\odot, R) \mapsto \iota_{G*}(\mathscr{E}^\spadesuit_{\odot})_{+R}$ defines an isomorphism of $k$-schemes $$\begin{aligned} \label{Eq42} \mathcal{O}p_{G^\odot} \times^{H^0 (X, \Omega^{\otimes 2})} H^0 (X, \mathcal{V}_{G}) \stackrel{\sim}{\rightarrow}\mathcal{O}p_{G}. \end{aligned}$$ In particular, the natural $H^0 (X, \mathcal{V}_G)$-action on $\mathcal{O}p_{G^\odot} \times^{H^0 (X, \Omega^{\otimes 2})} H^0 (X, \mathcal{V}_{G})$ yields, via ([\[Eq42\]](#Eq42){reference-type="ref" reference="Eq42"}), an affine structure on $\mathcal{O}p_G$ modeled on that space (cf.  [@Wak8 Theorem 2.24]). 
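To make the model space explicit (a standard consequence of Kostant's theory on the centralizer $\mathfrak{g}^{\mathrm{ad}(q_1)}$ of a principal nilpotent, recorded here only as an illustration): each graded piece $\mathfrak{g}^{\mathrm{ad}(q_1)}_j$ is $1$-dimensional precisely when $j$ is an exponent of $\mathfrak{g}$ and vanishes otherwise, so the isomorphisms ([\[QR020\]](#QR020){reference-type="ref" reference="QR020"}) yield $$\begin{aligned} \mathcal{V}_G \stackrel{\sim}{\rightarrow}\bigoplus_{i=1}^{\mathrm{rk}(\mathfrak{g})} \Omega^{\otimes (m_i + 1)},\end{aligned}$$ where $m_1 \leq \cdots \leq m_{\mathrm{rk}(\mathfrak{g})}$ denote the exponents of $\mathfrak{g}$. For $G = \mathrm{PGL}_n$, the exponents are $1, \cdots, n-1$, so $\mathcal{O}p_{\mathrm{PGL}_n}$ is an affine space modeled on $\bigoplus_{j=2}^{n} H^0 (X, \Omega^{\otimes j})$, recovering the classical description of $\mathrm{PGL}_n$-opers in terms of $j$-differentials.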
Moreover, if $\rho$ is an element of $\mathfrak{c}(k)^{\times r}$ (where $\rho := \emptyset$ if $r = 0$), then this structure restricts to an affine structure on $\mathcal{O}p_{G, \rho} \left(\subseteq \mathcal{O}p_G \right)$ modeled on $H^0(X, {^c}\mathcal{V}_G) \left(\subseteq H^0(X, \mathcal{V}_G) \right)$ (cf.  [@Wak8 Theorem 2.36]). ## The $\mathrm{SL}_{n}$-oper associated to an $\mathrm{SL}_2$-oper {#SS03g61} Let $n$ be a positive integer with $1 < n < p$ and $\mathscr{F}^\heartsuit_\odot := (\mathcal{F}_\odot, \nabla_\odot, \{ \mathcal{F}_\odot^j \}_{j=0}^2)$ an $\mathrm{SL}_2$-oper on $\mathscr{X}$ (cf. Remark [Remark 2](#Rem667){reference-type="ref" reference="Rem667"}). Write $\varTheta := \mathcal{F}^1_\odot$, which is a line bundle on $X$. Then, we obtain a composite isomorphism $$\begin{aligned} \label{Eq803} \Omega \stackrel{\sim}{\rightarrow}\varTheta \otimes_{\mathcal{O}_X} (\mathcal{F}_\odot/\varTheta)^\vee \stackrel{\sim}{\rightarrow}\varTheta^{\otimes 2},\end{aligned}$$ where the first arrow denotes the isomorphism given by the $1$-st Kodaira-Spencer map $\mathrm{KS}_{\mathscr{F}_\odot^\heartsuit}^1 : \varTheta \stackrel{\sim}{\rightarrow}\Omega \otimes_{\mathcal{O}_X} (\mathcal{F}_\odot/ \varTheta)$ for $\mathscr{F}^\heartsuit_\odot$ (cf. ([\[Eqq212\]](#Eqq212){reference-type="ref" reference="Eqq212"})) and the second arrow arises from the fixed isomorphism $\left(\varTheta \otimes_{\mathcal{O}_X} (\mathcal{F}_\odot/\varTheta) = \right) \mathrm{det}(\mathcal{F}_\odot) \stackrel{\sim}{\rightarrow}\mathcal{O}_X$. Thus, the line bundle $\varTheta$ together with ([\[Eq803\]](#Eq803){reference-type="ref" reference="Eq803"}) specifies a theta characteristic of $X^\mathrm{log}$ in the sense of  [@Wak8 Example 4.34]; we shall refer to it as the **theta characteristic associated to $\mathscr{F}^\heartsuit_\odot$**.
The $(n-1)$-st symmetric product $\mathrm{Sym}^{n-1} \mathcal{F}_\odot$ of $\mathcal{F}_\odot$ over $\mathcal{O}_X$ forms a rank $n$ vector bundle on $X$; it admits a log connection $\mathrm{Sym}^{n-1}\nabla_\odot$ induced naturally by $\nabla_\odot$. Moreover, $\mathrm{Sym}^{n-1}\mathcal{F}_\odot$ has an $n$-step decreasing filtration $\{ (\mathrm{Sym}^{n-1}\mathcal{F}_\odot)^j \}_{j=0}^n$ defined in such a way that $(\mathrm{Sym}^{n-1}\mathcal{F}_\odot)^0 := \mathrm{Sym}^{n-1}\mathcal{F}_\odot$, $(\mathrm{Sym}^{n-1} \mathcal{F}_\odot)^n = 0$, and $(\mathrm{Sym}^{n-1}\mathcal{F}_\odot)^j$ (for each $j =1, \cdots, n-1$) is defined as the image of $\varTheta^{\otimes j} \otimes \mathcal{F}_\odot^{\otimes (n-1-j)}$ via the natural quotient $\mathcal{F}_\odot^{\otimes (n-1)} \twoheadrightarrow\mathrm{Sym}^{n-1}\mathcal{F}_\odot$. For each $j = 0, \cdots, n-1$, there exists a canonical composite of isomorphisms $$\begin{aligned} \label{Eq901} (\mathrm{Sym}^{n-1}\mathcal{F}_\odot)^j /(\mathrm{Sym}^{n-1}\mathcal{F}_\odot)^{j+1} \stackrel{\sim}{\rightarrow}\varTheta^{\otimes j} \otimes_{\mathcal{O}_X} (\mathcal{F}_\odot/\varTheta)^{\otimes (n-1 - j)} \stackrel{\sim}{\rightarrow}\varTheta^{\otimes (2j -n +1)}.\end{aligned}$$ Moreover, by the assumption $n < p$, the collection $$\begin{aligned} \label{Eq288} \mathrm{Sym}^{n-1}\mathscr{F}_\odot^\heartsuit := (\mathrm{Sym}^{n-1}\mathcal{F}_\odot, \mathrm{Sym}^{n-1}\nabla_\odot, \{ (\mathrm{Sym}^{n-1}\mathcal{F}_\odot)^j \}_{j=0}^n)\end{aligned}$$ forms an $\mathrm{SL}_n$-oper on $\mathscr{X}$, called the **$(n-1)$-st symmetric product of $\mathscr{F}^\heartsuit_\odot$**. 
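As a consistency check on ([\[Eq901\]](#Eq901){reference-type="ref" reference="Eq901"}): tensoring the graded pieces computes the determinant, and the exponents of $\varTheta$ sum to zero, $$\begin{aligned} \mathrm{det}(\mathrm{Sym}^{n-1}\mathcal{F}_\odot) \stackrel{\sim}{\rightarrow}\bigotimes_{j=0}^{n-1} \varTheta^{\otimes (2j-n+1)} = \varTheta^{\otimes \sum_{j=0}^{n-1} (2j-n+1)} = \varTheta^{\otimes 0} = \mathcal{O}_X,\end{aligned}$$ since $\sum_{j=0}^{n-1} (2j-n+1) = n(n-1) - n(n-1) = 0$. This is in accordance with the fixed trivialization $\mathrm{det}(\mathcal{F}_\odot) \stackrel{\sim}{\rightarrow}\mathcal{O}_X$ and with $\mathrm{Sym}^{n-1}\mathscr{F}^\heartsuit_\odot$ being an $\mathrm{SL}_n$-oper.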
The resulting assignment $\mathscr{F}^\heartsuit_\odot \mapsto \mathrm{Sym}^{n-1} \mathscr{F}^\heartsuit_\odot$ determines a morphism $$\begin{aligned} \label{Eq289} \mathcal{O}p_{\mathrm{SL}_2} \rightarrow\mathcal{O}p_{\mathrm{SL}_n}.\end{aligned}$$ This morphism makes the following square diagram commute: $$\begin{aligned} \label{Eq345} \vcenter{\xymatrix@C=46pt@R=36pt{ \mathcal{O}p_{\mathrm{SL}_2} \ar[r]^-{(\ref{Eq289})} \ar[d] & \mathcal{O}p_{\mathrm{SL}_n}\ar[d] \\ \mathcal{O}p_{G^\odot} \ar[r]_-{(\ref{Eq405})} & \mathcal{O}p_{\mathrm{PGL}_n}, }}\end{aligned}$$ where the right-hand and left-hand vertical arrows are obtained via projectivization. ## The $\mathrm{SL}_{2l+1}$-oper associated to a $\mathrm{PGL}_2$-oper {#SS03f61} This subsection discusses a construction, compatible with ([\[Eq405\]](#Eq405){reference-type="ref" reference="Eq405"}) and ([\[Eq289\]](#Eq289){reference-type="ref" reference="Eq289"}), of an $\mathrm{SL}_{n}$-oper for odd $n$ starting from a $G^\odot$-oper. The advantage of considering this construction is that $\mathscr{X}$ always admits $G^\odot$-opers in contrast to the case of $\mathrm{SL}_{2}$-opers. (In fact, there is a nodal curve that does not have theta characteristics; this implies $\mathcal{O}p_{\mathrm{SL}_2} = \emptyset$ for such a curve.) Let $\mathscr{E}^\spadesuit_\odot := (\mathcal{E}_{B^\odot}, \nabla_\odot)$ be a $G^\odot$-oper on $\mathscr{X}$. Write $\mathcal{E}_\odot := \mathcal{E}_{B^\odot} \times^{B^\odot} G^\odot$ and $\mathscr{E}_\odot := (\mathcal{E}_\odot, \nabla_\odot)$. We can choose a $\mathrm{GL}_2$-oper $\mathscr{F}^\heartsuit_\odot := (\mathcal{F}_\odot, \nabla_{\mathcal{F}_\odot}, \{ \mathcal{F}^j_\odot \}_{j=0}^2)$ on $\mathscr{X}$ whose projectivization is isomorphic to $\mathscr{E}^\spadesuit_\odot$ (cf.  [@Wak8 Theorem 4.66]).
For an integer $l$ with $1 \leq l < \frac{p-1}{2}$, the previous discussion can be applied to obtain a $\mathrm{GL}_{2l+1}$-oper $$\begin{aligned} \mathrm{Sym}^{2l}\mathscr{F}^\heartsuit_\odot := (\mathrm{Sym}^{2l}\mathcal{F}_\odot, \mathrm{Sym}^{2l}\nabla_{\mathcal{F}_\odot}, \{ (\mathrm{Sym}^{2l}\mathcal{F}_\odot)^j \}_{j=0}^{2l+1}).\end{aligned}$$ In particular, there exists a sequence of natural isomorphisms $$\begin{aligned} \label{Eq877} \mathrm{det}(\mathrm{Sym}^{2l}\mathcal{F}_\odot) &\stackrel{\sim}{\rightarrow}\bigotimes_{j=0}^{2l} (\mathrm{Sym}^{2l}\mathcal{F}_\odot)^j/(\mathrm{Sym}^{2l}\mathcal{F}_\odot)^{j+1} \\ &\stackrel{\sim}{\rightarrow}\bigotimes_{j=0}^{2l} (\mathcal{F}^1_\odot)^{\otimes 2l} \otimes_{\mathcal{O}_X} \mathcal{T}^{\otimes 2l-j} \notag \\ &\stackrel{\sim}{\rightarrow} (\mathcal{F}^1_\odot)^{\otimes 2l(2l+1)} \otimes_{\mathcal{O}_X} \mathcal{T}^{\otimes l (2l+1)} \notag \\ & \stackrel{\sim}{\rightarrow}((\mathcal{F}^{1}_\odot)^{\otimes 2l} \otimes_{\mathcal{O}_X} \mathcal{T}^{\otimes l})^{\otimes (2l+1)}. \notag\end{aligned}$$ Since $p\nmid 2l+1$, there exists a log connection $\nabla_\mathcal{L}$ on $\mathcal{L}:= (\mathcal{F}^{1}_\odot)^{\otimes 2l} \otimes_{\mathcal{O}_X} \mathcal{T}^{\otimes l}$ whose $(2l+1)$-fold tensor product corresponds, via ([\[Eq877\]](#Eq877){reference-type="ref" reference="Eq877"}), to the log connection on $\mathrm{det}(\mathrm{Sym}^{2l}\mathcal{F}_\odot)$ induced by $\mathrm{Sym}^{2l}\nabla_{\mathcal{F}_\odot}$ (cf.  [@Wak8 Proposition 4.22, (i)]). Let us write $$\begin{aligned} \mathrm{Sym}^{2l}\mathcal{E}_\odot := \mathcal{L}^\vee \otimes_{\mathcal{O}_X} \mathrm{Sym}^{2l} \mathcal{F}_\odot \ \ \ \text{and} \ \ \ (\mathrm{Sym}^{2l}\mathcal{E}_\odot)^j := \mathcal{L}^\vee \otimes_{\mathcal{O}_X} (\mathrm{Sym}^{2l} \mathcal{F}_\odot)^j\end{aligned}$$ ($j=0, \cdots, 2l+1$). Then, there exist canonical isomorphisms $$\begin{aligned} \label{Eq913} (\mathrm{Sym}^{2l}\mathcal{E}_\odot)^j /(\mathrm{Sym}^{2l}\mathcal{E}_\odot)^{j+1} \stackrel{\sim}{\rightarrow}\Omega^{\otimes j-l} \end{aligned}$$ ($j=0, \cdots, 2l$).
If $\nabla^\vee_\mathcal{L}$ denotes the log connection on the dual $\mathcal{L}^\vee$ induced naturally by $\nabla_\mathcal{L}$, the log connection $\mathrm{det}(\nabla_\mathcal{L}^\vee \otimes \mathrm{Sym}^{2l}\nabla_{\mathcal{F}_\odot})$ on the determinant $\mathrm{det}(\mathrm{Sym}^{2l}\mathcal{E}_\odot)$ is compatible with the trivial connection $d$ on $\mathcal{O}_X$ via the composite isomorphism $$\begin{aligned} \mathrm{det}(\mathrm{Sym}^{2l}\mathcal{E}_\odot) \stackrel{\sim}{\rightarrow}\bigotimes_{j=0}^{2l} (\mathrm{Sym}^{2l}\mathcal{E}_\odot)^j /(\mathrm{Sym}^{2l}\mathcal{E}_\odot)^{j+1} \stackrel{\sim}{\rightarrow}\bigotimes_{j=0}^{2l} \Omega^{\otimes j-l} \stackrel{\sim}{\rightarrow}\mathcal{O}_X.\end{aligned}$$ Write $\mathrm{Sym}^{2l}\nabla_\odot$ for the log connection $\nabla_\mathcal{L}^\vee \otimes \mathrm{Sym}^{2l}\nabla_{\mathcal{F}_\odot}$ on $\mathrm{Sym}^{2l}\mathcal{E}_\odot$. Then, the resulting pair $\mathrm{Sym}^{2l}\mathscr{E}_\odot := (\mathrm{Sym}^{2l}\mathcal{E}_\odot, \mathrm{Sym}^{2l}\nabla_\odot)$ specifies a flat bundle, and moreover, the collection of data $$\begin{aligned} \label{Eq9d00} \mathrm{Sym}^{2l}\mathscr{E}_\odot^\spadesuit := (\mathrm{Sym}^{2l}\mathcal{E}_\odot, \mathrm{Sym}^{2l}\nabla_\odot, \{ (\mathrm{Sym}^{2l}\mathcal{E}_\odot)^j \}_{j=0}^{2l+1})\end{aligned}$$ has a structure of $\mathrm{SL}_{2l+1}$-oper. The isomorphism classes of $\mathrm{Sym}^{2l}\mathscr{E}_\odot$ and $\mathrm{Sym}^{2l}\mathscr{E}_\odot^\spadesuit$ depend only on $\mathscr{E}^\spadesuit_\odot$ (i.e., not on the choice of $\mathscr{F}^\heartsuit_\odot$). We refer to $\mathrm{Sym}^{2l}\mathscr{E}_\odot^\spadesuit$ as the **$2l$-th symmetric product of $\mathscr{E}_\odot^\spadesuit$**.
The resulting assignment $\mathscr{E}^\spadesuit_\odot \mapsto \mathrm{Sym}^{2l}\mathscr{E}^\spadesuit_\odot$ determines a well-defined morphism $$\begin{aligned} \label{Eq903} \mathcal{O}p_{G^\odot} \rightarrow\mathcal{O}p_{\mathrm{SL}_{2l+1}},\end{aligned}$$ which makes the following diagram commute: $$\begin{aligned} \label{Eq34g5} \vcenter{\xymatrix@C=46pt@R=36pt{ \mathcal{O}p_{\mathrm{SL}_2} \ar[r]^-{(\ref{Eq289})} \ar[d]_-{\mathrm{projectivization}} & \mathcal{O}p_{\mathrm{SL}_{2l+1}}\ar[d]^-{\mathrm{projectivization}} \\ \mathcal{O}p_{G^\odot} \ar[r]_-{(\ref{Eq405})} \ar[ur]^-{(\ref{Eq903})} & \mathcal{O}p_{\mathrm{PGL}_{2l+1}}. }}\end{aligned}$$ # Infinitesimal deformations of opers {#S230} This section deals with infinitesimal deformations of a flat $G$-bundle, as well as a $G$-oper. In particular, we verify dualities between certain deformation spaces of a flat $G$-bundle with prescribed residues/radii. These results are probably already known, at least in characteristic $0$ or in some restricted situations; however, the author could not find suitable references consistent with our situation, so we discuss them here. ## De Rham cohomology/Parabolic de Rham cohomology {#SSa22} Let $\nabla' : \mathcal{K}^0 \rightarrow\mathcal{K}^1$ be a morphism of sheaves on $X$. It may be regarded as a complex concentrated at degrees $0$ and $1$; this complex is denoted by $\mathcal{K}^\bullet [\nabla']$. (In particular, we have $\mathcal{K}^j [\nabla'] := \mathcal{K}^j$ for $j= 0,1$.) For each $l \geq 0$, we shall write $\mathbb{H}^l (X, \mathcal{K}^\bullet [\nabla'])$ for the $l$-th hypercohomology group of the complex $\mathcal{K}^\bullet [\nabla']$. Given an integer $l$ and a sheaf $\mathcal{F}$ on $X$, we define the complex $\mathcal{F}[l]$ to be $\mathcal{F}$ (considered as a complex concentrated at degree $0$) shifted down by $l$, so that $\mathcal{F}[l]^{-l} = \mathcal{F}$ and $\mathcal{F}[l]^i = 0$ for $i \neq -l$. 
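For instance (our illustration of this convention), if $\mathcal{F}$ is a sheaf on $X$, then $$\begin{aligned} \mathcal{F}[1]^{-1} = \mathcal{F}, \ \ \ \mathcal{F}[1]^{i} = 0 \ (i \neq -1), \ \ \ \mathbb{H}^{n}(X, \mathcal{F}[1]) = H^{n+1}(X, \mathcal{F})\end{aligned}$$ for every $n$, since shifting a complex concentrated in a single degree shifts its hypercohomology accordingly.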
For a flat bundle $\mathscr{F}:= (\mathcal{F}, \nabla)$ on $\mathscr{X}$, we shall set $\Omega^{0}_{\mathrm{par}} (\mathcal{F}) := \mathcal{F}$ and $$\begin{aligned} \label{Eq651} \Omega^{1}_{\mathrm{par}}(\mathcal{F}) := \Omega \otimes_{\mathcal{O}_X} {^c}\mathcal{F}+ \mathrm{Im}(\nabla) \left(\subseteq \Omega \otimes_{\mathcal{O}_X} \mathcal{F}\right).\end{aligned}$$ Then, $\nabla$ restricts to a $k$-linear morphism $$\begin{aligned} \label{Eq650} \nabla_{\mathrm{par}} : \Omega^{0}_{\mathrm{par}}(\mathcal{F}) \rightarrow\Omega^{1}_{\mathrm{par}} (\mathcal{F}).\end{aligned}$$ The $k$-vector space $$\begin{aligned} \label{Eq5010} H^l_{\mathrm{dR}} (X, \mathscr{F}) := \mathbb{H}^l (X, \mathcal{K}^\bullet [\nabla]) \ \left(\text{resp.,} \ H^l_{\mathrm{dR}, \mathrm{par}} (X, \mathscr{F}) := \mathbb{H}^l (X, \mathcal{K}^\bullet [\nabla_{\mathrm{par}}]) \right)\end{aligned}$$ is called the **$l$-th de Rham cohomology group** (resp., the **$l$-th parabolic de Rham cohomology group**) of $\mathscr{F}$. The $k$-linear morphism $\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla_\mathrm{par}]) \rightarrow\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla])$ induced by the inclusion of complexes $\mathcal{K}^\bullet [\nabla_\mathrm{par}] \rightarrow\mathcal{K}^\bullet [\nabla]$ is injective. By using this injection, we occasionally regard $\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla_\mathrm{par}])$ as a subspace of $\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla])$. Here, we prove a comparison between the parabolic de Rham cohomology and the hypercohomology of the restriction $$\begin{aligned} \label{Eqqs} {^c}\nabla_{\mathrm{par}} : \Omega_\mathrm{par}^{0, c} (\mathcal{F}) \rightarrow\Omega_\mathrm{par}^{1, c} (\mathcal{F}) \end{aligned}$$ of $\nabla$, where $\Omega_\mathrm{par}^{0, c} (\mathcal{F}) := \nabla^{-1}(\Omega\otimes_{\mathcal{O}_X} {^c}\mathcal{F})$ and $\Omega_\mathrm{par}^{1, c} (\mathcal{F}) := \Omega \otimes_{\mathcal{O}_X} {^c}\mathcal{F}$. **Proposition 6**. 
*For each $l \geq 0$, the morphism of $k$-vector spaces $$\begin{aligned} \label{Eq8010} \mathbb{H}^l (X, \mathcal{K}^\bullet [{^c}\nabla_{\mathrm{par}}]) \stackrel{\sim}{\rightarrow} \mathbb{H}^l (X, \mathcal{K}^\bullet [\nabla_{\mathrm{par}}])\end{aligned}$$ induced by the inclusion $\mathcal{K}^\bullet [{^c}\nabla_{\mathrm{par}}] \hookrightarrow\mathcal{K}^\bullet [\nabla_{\mathrm{par}}]$ is an isomorphism.* *Proof.* Let us choose $i \in \{1, \cdots, r \}$. Then, $\sigma^*_i (\Omega \otimes_{\mathcal{O}_X} \mathcal{F})$ may be identified with $\sigma^*_i (\mathcal{F})$ via $\sigma_i^*(\Omega \otimes_{\mathcal{O}_X} \mathcal{F}) \cong \sigma_i^* (\Omega) \otimes_k \sigma_i^* (\mathcal{F}) \cong \sigma_i^*(\mathcal{F})$, where the second "$\cong$\" follows from the residue map $\sigma^*_i (\Omega) \stackrel{\sim}{\rightarrow}k$. Under this identification, the log connection $\nabla$ specifies an endomorphism $\nu_i : \sigma^*_i (\mathcal{F}) \rightarrow\sigma_i^* (\mathcal{F}) \left(= \sigma_i^*(\Omega \otimes_{\mathcal{O}_X} \mathcal{F}) \right)$ of $\sigma_i^* (\mathcal{F})$. 
By taking quotients of both sides of the equality $\Omega_{\mathrm{par}}^0 (\mathcal{F})/{^c}\mathcal{F}= \bigoplus_{i=1}^r \sigma_{i*}(\sigma_i^*(\mathcal{F}))$, we obtain $$\begin{aligned} \label{Eq8012i} \Omega_{\mathrm{par}}^0 (\mathcal{F}) / \Omega_{\mathrm{par}}^{0, c} (\mathcal{F}) = \bigoplus_{i=1}^r \sigma_{i*}(\sigma_i^*(\mathcal{F})/\mathrm{Ker}(\nu_i)).\end{aligned}$$ Also, the identification $(\Omega \otimes_{\mathcal{O}_X} \mathcal{F})/\Omega_\mathrm{par}^{1, c} (\mathcal{F}) = \left( \bigoplus_{i=1}^r \sigma_{i*} (\sigma^*_i (\Omega) \otimes_k \sigma^*_i (\mathcal{F})) =\right) \bigoplus_{i=1}^r \sigma_{i*}(\sigma_i^*(\mathcal{F}))$ restricts to $$\begin{aligned} \label{Eq222ii} \Omega_{\mathrm{par}}^1 (\mathcal{F})/ \Omega_{\mathrm{par}}^{1, c} (\mathcal{F}) = \bigoplus_{i=1}^r \sigma_{i*}(\mathrm{Im}(\nu_i)).\end{aligned}$$ By using ([\[Eq8012i\]](#Eq8012i){reference-type="ref" reference="Eq8012i"}) and ([\[Eq222ii\]](#Eq222ii){reference-type="ref" reference="Eq222ii"}), we obtain a morphism of short exact sequences $$\begin{aligned} \label{Eqq300} \vcenter{\xymatrix@C=46pt@R=36pt{ 0 \ar[r] & \Omega_\mathrm{par}^{0, c} (\mathcal{F}) \ar[r]^-{\mathrm{inclusion}} \ar[d]^-{{^c}\nabla_{\mathrm{par}}} & \Omega_{\mathrm{par}}^0 (\mathcal{F}) \ar[r] \ar[d]^-{\nabla_\mathrm{par}} & \bigoplus_{i=1}^r \sigma_{i*}(\sigma^*_i(\mathcal{F}) / \mathrm{Ker}(\nu_i)) \ar[r] \ar[d]^-{\bigoplus_{i=1}^r \sigma_{i*}(\nu_i)}& 0 \\ 0 \ar[r] & \Omega^{1, c}_\mathrm{par} (\mathcal{F}) \ar[r]_-{\mathrm{inclusion}} & \Omega_{\mathrm{par}}^1 (\mathcal{F}) \ar[r]& \bigoplus_{i=1}^r \sigma_{i*}(\mathrm{Im}(\nu_i)) \ar[r] & 0. }}\end{aligned}$$ Since the right-hand vertical arrow in this diagram is an isomorphism, the inclusion $\mathcal{K}^\bullet [{^c}\nabla_{\mathrm{par}}] \hookrightarrow\mathcal{K}^\bullet [\nabla_{\mathrm{par}}]$ turns out to be a quasi-isomorphism. 
This means that the induced morphism $\mathbb{H}^l (X, \mathcal{K}^\bullet [{^c}\nabla_{\mathrm{par}}]) \rightarrow \mathbb{H}^l (X, \mathcal{K}^\bullet [\nabla_{\mathrm{par}}])$ is an isomorphism for every $l \geq 0$. ◻ ## Duality for de Rham cohomology {#SS055} Now, let $(\mathcal{E}, \nabla)$ be a flat $G$-bundle on $\mathscr{X}$. The log connection $\nabla$ induces a log connection $\nabla^\mathrm{ad} : \mathfrak{g}_{\mathcal{E}} \rightarrow\Omega\otimes_{\mathcal{O}_X} \mathfrak{g}_{\mathcal{E}}$ on the adjoint bundle $\mathfrak{g}_{\mathcal{E}}$; it restricts to a log connection $$\begin{aligned} \label{Eq454} {^c} \nabla^\mathrm{ad} : {^c}\mathfrak{g}_\mathcal{E}\rightarrow\Omega \otimes_{\mathcal{O}_X} {^c}\mathfrak{g}_\mathcal{E}\end{aligned}$$ on ${^c}\mathfrak{g}_\mathcal{E}\left(= \mathfrak{g}_\mathcal{E}(-D)\right)$. If ${^c}d$ denotes the restriction ${^c}\mathcal{O}_X \rightarrow\Omega \otimes_{\mathcal{O}_X} {^c}\mathcal{O}_X \left(= \omega_X \right)$ of the universal logarithmic derivation $d$, then the pairing $({^c}\mathfrak{g}_\mathcal{E}, {^c}\nabla^\mathrm{ad}) \otimes_{\mathcal{O}_X} (\mathfrak{g}_\mathcal{E}, \nabla^\mathrm{ad})\rightarrow({^c}\mathcal{O}_X, {^c}d)$ arising from the Killing form $\kappa$ on $\mathfrak{g}$ yields a composite $k$-linear pairing $$\begin{aligned} \label{Eq24012} \mathbb{H}^1 (X, \mathcal{K}^\bullet [{^c}\nabla^\mathrm{ad}]) \times \mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}]) \xrightarrow{\cup} \mathbb{H}^2 (X, \mathcal{K}^\bullet [{^c}d]) \stackrel{\sim}{\rightarrow}H^1 (X, \omega_X) \stackrel{\sim}{\rightarrow}k.\end{aligned}$$ The following assertion for the case of flat $G$-bundles with vanishing $p$-curvature can be found in  [@Wak8 Corollary 6.16]. **Proposition 7**. *The pairing ([\[Eq24012\]](#Eq24012){reference-type="ref" reference="Eq24012"}) is nondegenerate. 
That is to say, the $k$-linear morphism $$\begin{aligned} \label{Eww3} \mathbb{H}^1 (X, \mathcal{K}^\bullet [{^c}\nabla^\mathrm{ad}]) \rightarrow\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^{\mathrm{ad}}])^\vee\end{aligned}$$ determined by ([\[Eq24012\]](#Eq24012){reference-type="ref" reference="Eq24012"}) is an isomorphism.* *Proof.* Let us consider the diagram $$\begin{aligned} \label{Eq3i90} \vcenter{\xymatrix@C=7pt@R=36pt{ H^0 (X, {^c}\mathfrak{g}_\mathcal{E}) \ar[r] \ar[d] & H^0 (X, \Omega\otimes_{\mathcal{O}_X} {^c}\mathfrak{g}_\mathcal{E}) \ar[r] \ar[d] & \mathbb{H}^1 (X, \mathcal{K}^\bullet [{^c}\nabla^\mathrm{ad}]) \ar[r] \ar[d]^-{(\ref{Eww3})} &H^1 (X, {^c}\mathfrak{g}_\mathcal{E}) \ar[r] \ar[d] &H^1 (X, \Omega\otimes_{\mathcal{O}_X} {^c}\mathfrak{g}_\mathcal{E}) \ar[d] \\ H^1 (X, \Omega\otimes_{\mathcal{O}_X} \mathfrak{g}_\mathcal{E})^\vee \ar[r] & H^1 (X, \mathfrak{g}_\mathcal{E})^\vee \ar[r] &\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}])^\vee \ar[r] & H^0 (X, \Omega\otimes_{\mathcal{O}_X} \mathfrak{g}_\mathcal{E})^\vee \ar[r] & H^0 (X, \mathfrak{g}_\mathcal{E})^\vee, }}\end{aligned}$$ where the upper and lower horizontal sequences are the exact sequences arising from the Hodge to de Rham spectral sequences $H^b (X, \mathcal{K}^a[{^c}\nabla^\mathrm{ad}]) \Rightarrow \mathbb{H}^{a+b}(X, \mathcal{K}^\bullet[{^c}\nabla^\mathrm{ad}])$ and $H^b (X, \mathcal{K}^a[\nabla^\mathrm{ad}]) \Rightarrow \mathbb{H}^{a+b}(X, \mathcal{K}^\bullet[\nabla^\mathrm{ad}])$, respectively. The assumed condition on $G$ described in § [1.4](#Ssde){reference-type="ref" reference="Ssde"} implies that the Killing form $\kappa$ is nondegenerate, so it induces an isomorphism $\mathfrak{g}_\mathcal{E}\stackrel{\sim}{\rightarrow}\mathfrak{g}_\mathcal{E}^\vee$. All the vertical arrows in ([\[Eq3i90\]](#Eq3i90){reference-type="ref" reference="Eq3i90"}) except for the middle one are isomorphisms because they come from the pairings given by Serre duality. 
It follows that ([\[Eww3\]](#Eww3){reference-type="ref" reference="Eww3"}) is an isomorphism by the five lemma, and this completes the proof of this assertion. ◻ Denote by $q$ the $k$-rational point of $\mathcal{C}onn_G$ classifying $(\mathcal{E}, \nabla)$. Also, let $T_q \mathcal{C}onn_G$ (resp., $T_q^\vee \mathcal{C}onn_G$) denote the tangent space (resp., the cotangent space) of $\mathcal{C}onn_G$ at $q$. As discussed in  [@Wak8 § 6.1.4] (or, e.g.,  [@O3 Proposition 3.6] for the case where $\mathscr{X}$ is unpointed and smooth), the underlying set of $\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}])$ may be identified with the space of first-order deformations of $(\mathcal{E}, \nabla)$. That is to say, there exists a canonical isomorphism of $k$-vector spaces $$\begin{aligned} \label{Eq3033} \gamma : T_q \mathcal{C}onn_G \stackrel{\sim}{\rightarrow}\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}]).\end{aligned}$$ If $(\mathcal{E}, \nabla)$ is equipped with a structure of marking and its residues are given by $\mu := (\mu_i )_{i=1}^r \in \mathfrak{g}^{\times r}$ (where $\mu := \emptyset$ if $r = 0$), then a similar discussion yields a canonical isomorphism of $k$-vector spaces $$\begin{aligned} \label{Eq350} \gamma_\mu : T_q \mathcal{C}onn_{G, \mu} \stackrel{\sim}{\rightarrow}\mathbb{H}^1 (X, \mathcal{K}^\bullet [{^c}\nabla^\mathrm{ad}]),\end{aligned}$$ where we use the same notation "$q$\" to denote the $k$-rational point of $\mathcal{C}onn_{G, \mu}$ classifying $(\mathcal{E}, \nabla)$ (with marking). 
In particular, by using the isomorphisms ([\[Eww3\]](#Eww3){reference-type="ref" reference="Eww3"}), ([\[Eq3033\]](#Eq3033){reference-type="ref" reference="Eq3033"}), and ([\[Eq350\]](#Eq350){reference-type="ref" reference="Eq350"}), we obtain an isomorphism $$\begin{aligned} \label{Er67} T_q \mathcal{C}onn_{G, \mu} \stackrel{\sim}{\rightarrow}T_q^\vee \mathcal{C}onn_G.\end{aligned}$$ ## Duality for parabolic de Rham cohomology {#SS055g} Next, let us consider the case of parabolic de Rham cohomology. We shall denote by $$\begin{aligned} \label{Eqq338} \kappa^\triangleright : \left((\Omega \otimes_{\mathcal{O}_X} {^c}\mathfrak{g}_\mathcal{E}) \times \mathfrak{g}_\mathcal{E}= \right) \Omega^{1, c}_\mathrm{par} (\mathfrak{g}_\mathcal{E}) \times \Omega_\mathrm{par}^0 (\mathfrak{g}_\mathcal{E}) \rightarrow{^c}\Omega\end{aligned}$$ the $\mathcal{O}_X$-bilinear pairing induced from the Killing form $\kappa$ of $\mathfrak{g}$. Since $\kappa$ is nondegenerate, we see that $\kappa^\triangleright$ is nondegenerate. By applying Serre duality to $\kappa^\triangleright$, we obtain a nondegenerate pairing of $k$-vector spaces $$\begin{aligned} \label{Eqw5} H^{1-l} (X, \Omega_\mathrm{par}^{1, c} (\mathfrak{g}_\mathcal{E})) \otimes_k H^l (X, \Omega^0_\mathrm{par}(\mathfrak{g}_\mathcal{E})) \rightarrow\left(H^1 (X, {^c}\Omega) = \right) k \end{aligned}$$ for $l =0,1$. Next, we shall write $U := X \setminus \bigcup_{i=1}^{r} \mathrm{Im}(\sigma_i)$ and write $\iota$ for the inclusion $U \hookrightarrow X$. The $\mathcal{O}_X$-module $\iota_*(\iota^* (\mathfrak{g}_\mathcal{E}))$ (resp., $\iota_*(\iota^* (\Omega \otimes_{\mathcal{O}_X} \mathfrak{g}_\mathcal{E}))$) contains $\Omega^{0, c}_\mathrm{par}(\mathfrak{g}_\mathcal{E})$ (resp., $\Omega^1_\mathrm{par}(\mathfrak{g}_\mathcal{E})$) as an $\mathcal{O}_X$-submodule. 
Here, let us consider the natural skew-symmetric $\mathcal{O}_X$-bilinear pairing $$\begin{aligned} \label{Eqw4} \widetilde{\kappa}^\blacktriangleright : \iota_*(\iota^* (\mathfrak{g}_\mathcal{E})) \times \iota_*(\iota^* (\Omega \otimes_{\mathcal{O}_X} \mathfrak{g}_\mathcal{E})) \rightarrow\iota_*(\iota^*(\Omega))\end{aligned}$$ arising from $\kappa$. **Proposition 8**. *Assume (cf. Remark [Remark 9](#Rem55){reference-type="ref" reference="Rem55"}) that there exists a $B$-reduction $\mathcal{E}_B$ of $\mathcal{E}$ for which the pair $\mathscr{E}^\spadesuit := (\mathcal{E}_B, \nabla)$ forms a $G$-oper of radii $[0]^{\times r} := ([0], [0], \cdots, [0]) \in \mathfrak{c}(k)^{\times r}$ (where $[0]^{\times r} := \emptyset$ if $r = 0$). Then, $\widetilde{\kappa}^\blacktriangleright$ restricts to a nondegenerate $\mathcal{O}_X$-bilinear pairing $$\begin{aligned} \label{Eqw3} \kappa^\blacktriangleright : \Omega^{0, c}_\mathrm{par}(\mathfrak{g}_\mathcal{E}) \times \Omega^1_\mathrm{par}(\mathfrak{g}_\mathcal{E}) \rightarrow{^c}\Omega.\end{aligned}$$ In particular, for $l =0, 1$, the morphism $$\begin{aligned} \label{Eqw1} H^{1-l} (X, \Omega^{0, c}_\mathrm{par}(\mathfrak{g}_\mathcal{E})) \otimes_k H^l (X, \Omega^1_\mathrm{par}(\mathfrak{g}_\mathcal{E})) \rightarrow\left(H^1 (X, {^c}\Omega) = \right) k\end{aligned}$$ obtained by applying Serre duality to this pairing is nondegenerate.* *Proof.* After replacing $\mathscr{E}^\spadesuit$ with its $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$-normalization (cf.  [@Wak8 Proposition 2.18]), we may assume that $\mathscr{E}^\spadesuit$ is $\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$-normal. In particular, since $\nabla$ is of radii $[0]^{\times r}$, its residue at $\sigma_i$ coincides with $q_{-1} \in \mathfrak{g}$ for every $i=1, \cdots, r$. We first consider the former assertion. 
Just as in the case of $\kappa^\triangleright$, the restriction of $\widetilde{\kappa}^\blacktriangleright$ to $U$ is nondegenerate. Thus, the problem is reduced to examining the pairing $\widetilde{\kappa}^\blacktriangleright$ around the marked points. Let us choose $i \in \{1, \cdots, r \}$ and choose a local function $t \in \mathcal{O}_X$ defining the closed subscheme $\mathrm{Im}(\sigma_i) \subseteq X$. Then, the formal neighborhood $Q$ of $\sigma_i$ is naturally isomorphic to $\mathrm{Spec}(k[\![t]\!])$, and we have ${^c}\Omega |_Q = k[\![t]\!] dt$. The choice of such a function $t$ specifies a canonical identification $\mathfrak{g}_\mathcal{E}|_Q = k[\![t]\!] \otimes_k \mathfrak{g}$ (cf.  [@Wak8 § 2.4.3]), which also gives $\iota_* (\iota^* (\mathfrak{g}_\mathcal{E})) |_{Q} = k (\!(t)\!) \otimes_k \mathfrak{g}$. Under these identifications, the restriction of $\widetilde{\kappa}^\blacktriangleright$ to $Q$ coincides with the Killing form $\kappa$ tensored with $k(\!(t)\!)$. Here, recall (cf.  [@Wak8 § 2.4.4 and § 6.2.3]) that there exist direct sum decompositions of $\mathfrak{g}$: $$\begin{aligned} \label{Eqq721} \mathfrak{g}= \mathrm{Ker}(\mathrm{ad}(q_{-1})) \oplus \mathrm{Im}(\mathrm{ad}(q_1)) = \mathrm{Ker}(\mathrm{ad}(q_{1})) \oplus \mathrm{Im}(\mathrm{ad}(q_{-1})),\end{aligned}$$ and that the isomorphism $\mathfrak{g}\stackrel{\sim}{\rightarrow}\mathfrak{g}^\vee$ arising from $\kappa$ restricts to $\mathrm{Ker}(\mathrm{ad}(q_{-1})) \stackrel{\sim}{\rightarrow}\mathrm{Ker}(\mathrm{ad}(q_{1}))^\vee$, as well as to $\mathrm{Im}(\mathrm{ad}(q_1))\stackrel{\sim}{\rightarrow}\mathrm{Im}(\mathrm{ad}(q_{-1}))^\vee$. In particular, by the equality $\mu_i^\nabla = q_{-1}$, the first (resp., second) decomposition of ([\[Eqq721\]](#Eqq721){reference-type="ref" reference="Eqq721"}) yields $$\begin{aligned} \label{Eqq291} \Omega_\mathrm{par}^{0, c} (\mathfrak{g}_\mathcal{E}) |_Q = k[\![t]\!] \otimes_k\mathrm{Ker}(\mathrm{ad}(q_{-1})) \oplus k[\![t]\!] 
t\otimes_k\mathrm{Im}(\mathrm{ad}(q_1))\hspace{11mm} \\ \left(\text{resp.,} \ \Omega_\mathrm{par}^{1} (\mathfrak{g}_\mathcal{E}) |_Q = k[\![t]\!] dt \otimes_k\mathrm{Ker}(\mathrm{ad}(q_{1})) \oplus k[\![t]\!] \frac{dt}{t}\otimes_k\mathrm{Im}(\mathrm{ad}(q_{-1}))\right). \notag\end{aligned}$$ According to this description, each section $v$ (resp., $u$) of $\Omega_\mathrm{par}^{0, c} (\mathfrak{g}_\mathcal{E}) |_Q$ (resp., $\Omega_\mathrm{par}^{1} (\mathfrak{g}_\mathcal{E}) |_Q$) may be described as the sum $v = v_1 + t \cdot v_2$ (resp., $u = u_1 + \frac{1}{t} \cdot u_2$) for some $v_1 \in k[\![t]\!] \otimes_k\mathrm{Ker}(\mathrm{ad}(q_{-1}))$, $v_2 \in k[\![t]\!]\otimes_k\mathrm{Im}(\mathrm{ad}(q_1))$ (resp., $u_1 \in k[\![t]\!] dt \otimes_k\mathrm{Ker}(\mathrm{ad}(q_{1}))$, $u_2 \in k[\![t]\!] dt \otimes_k\mathrm{Im}(\mathrm{ad}(q_{-1}))$). Then, we have $$\begin{aligned} \widetilde{\kappa}^\blacktriangleright (v, u) &= \widetilde{\kappa}^\blacktriangleright (v_1, u_1) + \widetilde{\kappa}^\blacktriangleright (t \cdot v_2, \frac{1}{t} \cdot u_2) \\ &= \widetilde{\kappa}^\blacktriangleright (v_1, u_1) + \widetilde{\kappa}^\blacktriangleright (v_2, u_2) \notag \\ &= \widetilde{\kappa}^\blacktriangleright (v_1 + v_2, u_1 + u_2), \notag\end{aligned}$$ where the cross terms $\widetilde{\kappa}^\blacktriangleright (v_1, \frac{1}{t} \cdot u_2)$ and $\widetilde{\kappa}^\blacktriangleright (t \cdot v_2, u_1)$ vanish because the compatibilities of $\kappa$ with the decompositions ([\[Eqq721\]](#Eqq721){reference-type="ref" reference="Eqq721"}) observed above imply $\kappa (\mathrm{Ker}(\mathrm{ad}(q_{-1})), \mathrm{Im}(\mathrm{ad}(q_{-1}))) = \kappa (\mathrm{Im}(\mathrm{ad}(q_{1})), \mathrm{Ker}(\mathrm{ad}(q_{1}))) = 0$. It follows that $\widetilde{\kappa}^\blacktriangleright (v, u)$ lies in $k[\![t]\!] dt \left(= {^c}\Omega |_Q\right)$, and the restriction of $\widetilde{\kappa}^\blacktriangleright$ to $Q$ is nondegenerate. This completes the proof of the former assertion. The latter assertion follows directly from the former assertion. ◻ **Remark 9** (Flat $G$-bundles with parabolic structure). 
Since the proof is similar, the assertion of Proposition [Proposition 8](#Prop99){reference-type="ref" reference="Prop99"} remains true after replacing the assumption (i.e., the existence of "$\mathcal{E}_B$\") with the following condition (regarded as a kind of parabolic structure): the flat $G$-bundle $(\mathcal{E}, \nabla)$ is equipped with a marking, and its residue at $\sigma_i$ coincides with $q_{-1} \in \mathfrak{g}$ (via the fixed marking) for every $i=1, \cdots, r$. Until the end of this subsection, we keep the assumption in the above proposition. Let $\Box$ denote either the absence or presence of "$c\,$\". One may calculate $\mathbb{H}^1 (X, \mathcal{K}^\bullet [{^\Box}\nabla_\mathrm{par}^\mathrm{ad}])$ as the cohomology of the total complex $\mathrm{Tot}^\bullet (\check{C} (\mathscr{U}, \mathcal{K}^\bullet [{^\Box}\nabla_\mathrm{par}^\mathrm{ad}]))$ of the Čech double complex associated to $\mathcal{K}^\bullet [{^\Box}\nabla_\mathrm{par}^\mathrm{ad}]$, where $\mathscr{U}:= \{ U_\alpha \}_{\alpha \in I}$ is a finite affine open covering of $X$. Denote by $I_2$ the set of pairs $(\alpha, \beta) \in I \times I$ with $U_{\alpha \beta} := U_\alpha \cap U_\beta \neq \emptyset$. 
Then, each element $v$ of $\mathbb{H}^1 (X, \mathcal{K}^\bullet [{^\Box}\nabla_\mathrm{par}^\mathrm{ad}])$ may be represented by a collection of data $$\begin{aligned} \label{Eqq321} v = (\{ \partial_{\alpha \beta} \}_{(\alpha, \beta)}, \{ \delta_\alpha \}_{\alpha})\end{aligned}$$ consisting of a Čech $1$-cocycle $\{ \partial_{\alpha \beta}\}_{(\alpha, \beta) \in I_2} \in \check{C}^1 (\mathscr{U}, \Omega_\mathrm{par}^{0, \Box} (\mathfrak{g}_\mathcal{E}))$ with $\partial_{\alpha \beta} \in H^0 (U_{\alpha \beta}, \Omega_\mathrm{par}^{0, \Box} (\mathfrak{g}_\mathcal{E}))$ and a Čech $0$-cochain $\{ \delta_\alpha \}_{\alpha \in I} \in \check{C}^0 (\mathscr{U}, \Omega^{1, \Box}_\mathrm{par}(\mathfrak{g}_\mathcal{E}))$ with $\delta_\alpha \in H^0 (U_\alpha, \Omega_{\mathrm{par}}^ {1, \Box} (\mathfrak{g}_\mathcal{E}))$, which are compatible in the sense that ${^\Box}\nabla_\mathrm{par}^\mathrm{ad} (\partial_{\alpha \beta}) = \delta_\beta - \delta_\alpha$ on each $U_{\alpha \beta}$. Using the description in terms of Čech double complexes, one can obtain a skew-symmetric $k$-bilinear pairing $$\begin{aligned} \label{Eqq1} \mathbb{H}^1 (X, \mathcal{K}^\bullet [{^c}\nabla^\mathrm{ad}_\mathrm{par}]) \times \mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}_\mathrm{par}]) \rightarrow \left(H^1 (X, {^c}\Omega) =\right) k\end{aligned}$$ given by assigning to $(\{ \partial_{\alpha \beta} \}_{(\alpha, \beta)}, \{ \delta_\alpha \}_{\alpha}) \otimes (\{ \partial'_{\alpha \beta} \}_{(\alpha, \beta)}, \{ \delta'_\alpha \}_{\alpha})$ the class of the Čech $1$-cocycle $\{ \kappa^\blacktriangleright (\partial_{\alpha \beta}, \delta'_\beta) - \kappa^{\triangleright} (\delta_\alpha, \partial'_{\alpha \beta}) \}_{(\alpha, \beta) \in I_2}$. **Proposition 10**. *Let us keep the assumption in Proposition [Proposition 8](#Prop99){reference-type="ref" reference="Prop99"} (or impose the condition described in Remark [Remark 9](#Rem55){reference-type="ref" reference="Rem55"} instead). Then, the pairing ([\[Eqq1\]](#Eqq1){reference-type="ref" reference="Eqq1"}) is nondegenerate. 
In particular, the morphism of $k$-vector spaces $$\begin{aligned} \label{Eqq233} \mathbb{H}^1 (X, \mathcal{K}^\bullet [{^c}\nabla^\mathrm{ad}_\mathrm{par}]) \rightarrow\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}_\mathrm{par}])^\vee\end{aligned}$$ induced by ([\[Eqq1\]](#Eqq1){reference-type="ref" reference="Eqq1"}) is an isomorphism.* *Proof.* Note that the morphism ([\[Eqq233\]](#Eqq233){reference-type="ref" reference="Eqq233"}) fits into the morphism of exact sequences $$\begin{aligned} \label{Eqq301} \vcenter{\xymatrix@C=7pt@R=36pt{ H^0 (X, \Omega_\mathrm{par}^{0, c}(\mathfrak{g}_\mathcal{E})) \ar[r] \ar[d]^-{\wr} & H^0 (X, \Omega_\mathrm{par}^{1, c} (\mathfrak{g}_\mathcal{E})) \ar[r] \ar[d]^-{\wr} & \mathbb{H}^1 (X, \mathcal{K}^\bullet [{^c}\nabla_\mathrm{par}^\mathrm{ad}]) \ar[r] \ar[d]^-{ (\ref{Eqq233})} & H^1 (X, \Omega_\mathrm{par}^{0, c}(\mathfrak{g}_\mathcal{E})) \ar[d]^-{\wr} \ar[r] & H^1 (X, \Omega_\mathrm{par}^{1, c} (\mathfrak{g}_\mathcal{E})) \ar[d]^-{\wr} \\ H^1 (X, \Omega_\mathrm{par}^1 (\mathfrak{g}_\mathcal{E}))^\vee \ar[r] &H^1 (X, \Omega_\mathrm{par}^0 (\mathfrak{g}_\mathcal{E}))^\vee \ar[r]& \mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla_\mathrm{par}^\mathrm{ad}])^\vee \ar[r] & H^0 (X, \Omega_\mathrm{par}^1 (\mathfrak{g}_\mathcal{E}))^\vee \ar[r] & H^0 (X, \Omega_\mathrm{par}^0 (\mathfrak{g}_\mathcal{E}))^\vee, }}\end{aligned}$$ where - the upper and lower horizontal sequences are the exact sequences induced from the Hodge to de Rham spectral sequences $E_1^{a, b} = H^b (X, \mathcal{K}^a[{^c}\nabla^\mathrm{ad}_\mathrm{par}]) \Rightarrow \mathbb{H}^{a+b}(X, \mathcal{K}^\bullet[{^c}\nabla^\mathrm{ad}_\mathrm{par}])$ and $E_1^{a, b} = H^b (X, \mathcal{K}^a[\nabla_\mathrm{par}^\mathrm{ad}]) \Rightarrow \mathbb{H}^{a+b}(X, \mathcal{K}^\bullet[\nabla^\mathrm{ad}_\mathrm{par}])$, respectively; - the second and fifth vertical arrows from the left are the isomorphisms arising from the nondegenerate pairing 
([\[Eqw5\]](#Eqw5){reference-type="ref" reference="Eqw5"}) for $l=1$ and $0$, respectively; - the first and fourth vertical arrows from the left are the isomorphisms arising from the nondegenerate pairing ([\[Eqw1\]](#Eqw1){reference-type="ref" reference="Eqw1"}) for $l=1$ and $0$, respectively. Thus, by applying the five lemma to this diagram, we see that the middle vertical arrow ([\[Eqq233\]](#Eqq233){reference-type="ref" reference="Eqq233"}) is an isomorphism. ◻ **Corollary 11**. *There exists a canonical nondegenerate skew-symmetric pairing $$\begin{aligned} \label{HHH89} \mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}_\mathrm{par}]) \times \mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}_\mathrm{par}]) \rightarrow k.\end{aligned}$$ In particular, we obtain an isomorphism of $k$-vector spaces $$\begin{aligned} \label{Eqqw9} \mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}_\mathrm{par}]) \stackrel{\sim}{\rightarrow}\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}_\mathrm{par}])^\vee.\end{aligned}$$* *Proof.* The required pairing ([\[HHH89\]](#HHH89){reference-type="ref" reference="HHH89"}) is obtained by composing ([\[Eqq1\]](#Eqq1){reference-type="ref" reference="Eqq1"}) and the inverse of ([\[Eq8010\]](#Eq8010){reference-type="ref" reference="Eq8010"}) in the case where "$\nabla$\" is taken to be $\nabla^\mathrm{ad}$. ◻ Next, denote by $T_q \mathcal{C}onn_{G, [0]^{\times r}}$ (resp., $T_q^\vee \mathcal{C}onn_{G, [0]^{\times r}}$) the tangent space (resp., the cotangent space) of $\mathcal{C}onn_{G, [0]^{\times r}}$ at $q$. In particular, $T_q \mathcal{C}onn_{G, [0]^{\times r}}$ may be identified with the set of deformations of $(\mathcal{E}, \nabla)$ fixing the radii. Then, we can prove the following assertion. **Proposition 12**. *Let us keep the assumption in Proposition [Proposition 8](#Prop99){reference-type="ref" reference="Prop99"}. 
Then, there exists a canonical isomorphism of $k$-vector spaces $$\begin{aligned} \label{K02} \gamma_{[0]^{\times r}} : T_q \mathcal{C}onn_{G, [0]^{\times r}} \stackrel{\sim}{\rightarrow}\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla_{\mathrm{par}}^{\mathrm{ad}}]).\end{aligned}$$ Moreover, the following diagram is commutative: $$\begin{aligned} \label{Eq340} \vcenter{\xymatrix@C=46pt@R=36pt{ T_q \mathcal{C}onn_{G, [0]^{\times r}} \ar[r]^-{\mathrm{inclusion}} \ar[d]^-{\wr}_-{\gamma_{[0]^{\times r}}} & T_q \mathcal{C}onn_{G} \ar[d]^-{\gamma}_-{\wr} \\ \mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla_{\mathrm{par}}^{\mathrm{ad}}]) \ar[r]_-{\mathrm{inclusion}} & \mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^{\mathrm{ad}}]). }}\end{aligned}$$* *Proof.* After possibly replacing $\mathscr{E}^\spadesuit$ with its $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$-normalization, we may assume that $\mathscr{E}^\spadesuit$ is $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$-normal. In particular, the assumption implies that the residue of $\nabla$ at every marked point coincides with $q_{-1}$ under the canonical local trivialization of $\mathcal{E}\left(= {^\dagger}\mathcal{E}_G \right)$ (cf.  [@Wak8 § 2.4.3]). Let us take a deformation $(\mathcal{E}_\varepsilon, \nabla_\varepsilon)$ of $(\mathcal{E}, \nabla)$ over $k_\varepsilon :=k [\varepsilon]/(\varepsilon^2)$. It corresponds to an element $v$ of $\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}])$ via $\gamma$. 
This deformation is represented by a collection of data $(\{ \partial_{\alpha \beta}\}_{(\alpha, \beta) \in I_2}, \{ \delta_\alpha \}_{\alpha \in I})$ (as displayed in ([\[Eqq321\]](#Eqq321){reference-type="ref" reference="Eqq321"})) in the total complex $\mathrm{Tot}^\bullet (\check{C}(\mathscr{U}, \mathcal{K}^\bullet [\nabla^\mathrm{ad}]))$ of the Čech double complex associated to $\mathcal{K}^\bullet [\nabla^\mathrm{ad}]$ and an affine open covering $\mathscr{U}:= \{ U_\alpha \}_{\alpha \in I}$ of $X$ indexed by a finite set $I$. Denote by $\pi$ the projection $X \times_k k_\varepsilon \rightarrow X$. Then, for each $\alpha \in I$, the flat $G$-bundle $(\mathcal{E}_\varepsilon, \nabla_\varepsilon) |_{U_\alpha}$ (i.e., the restriction of $(\mathcal{E}_\varepsilon, \nabla_\varepsilon)$ to $U_\alpha$) is isomorphic to $(\pi^*(\mathcal{E})|_{U_\alpha}, \pi^*(\nabla)|_{U_\alpha} + \delta_\alpha \cdot \varepsilon)$. If $\sigma_i \in U_{\alpha} (k)$ for some $i \in \{1, \cdots, r \}$, then the residue of $\nabla_\varepsilon$ at $\sigma_i$ coincides with $q_{-1} + \overline{\delta}_\alpha \cdot \varepsilon \in \mathfrak{g}(k_\varepsilon)$, where $\overline{\delta}_\alpha$ denotes the image of $\delta_\alpha$ in $\mathfrak{g}$ via the composite surjection $\Omega \otimes_{\mathcal{O}_X} \mathfrak{g}_\mathcal{E}\twoheadrightarrow\sigma^*_i (\Omega) \otimes_k \sigma_i^* (\mathfrak{g}_\mathcal{E}) \stackrel{\sim}{\rightarrow}\mathfrak{g}$, in which the second morphism is given by both the residue map $\sigma^*_i (\Omega)\stackrel{\sim}{\rightarrow}k$ and the isomorphism $\sigma^*_i (\mathfrak{g}_\mathcal{E}) \stackrel{\sim}{\rightarrow}\mathfrak{g}$ induced from the canonical trivialization $\sigma_i^*(\mathcal{E}) \stackrel{\sim}{\rightarrow}G$. Now, suppose that $v$ lies in $\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}_\mathrm{par}])$. 
After possibly replacing $(\{ \partial_{\alpha \beta}\}_{(\alpha, \beta)}, \{ \delta_\alpha \}_{\alpha})$ with another representative, we may assume that $\delta_{\alpha} \in H^0 (U_\alpha, {^c}\Omega \otimes_{\mathcal{O}_X} \mathfrak{g}_\mathcal{E})$ for every $\alpha \in I$. Hence, the equality $\overline{\delta}_\alpha = 0$ holds, and the radius at each marked point coincides with $\left(\chi (q_{-1} + 0 \cdot \varepsilon) =\right) [0]$. This means that $(\mathcal{E}_\varepsilon, \nabla_\varepsilon)$ determines an element of $T_q \mathcal{C}onn_{G, [0]^{\times r}}$. Conversely, suppose that $(\mathcal{E}_\varepsilon, \nabla_\varepsilon)$ is of radii $[0]^{\times r}$. Let us choose $i \in \{ 1, \cdots, r \}$ and choose $\alpha_i \in I$ with $\sigma_i \in U_{\alpha_i} (k)$. Recall from  [@Kos Theorem 2] (or  [@Ngo Lemme 1.2.3]) that if $\mathfrak{g}^{\mathrm{reg}}$ denotes the set of regular elements in $\mathfrak{g}$, then the fiber of the projection $\chi |_{\mathfrak{g}^{\mathrm{reg}}} : \mathfrak{g}^{\mathrm{reg}} \rightarrow\mathfrak{c}$ over $[0]$ forms a homogeneous space with respect to the adjoint $G$-action. Since both $q_{-1}$ and $q_{-1} + \overline{\delta}_{\alpha_i} \cdot \varepsilon$ belong to this fiber, there exists an element in $G (k_\varepsilon)$ of the form $e + w_i \cdot \varepsilon$, where $e$ denotes the identity element of $G$ and $w_i$ is an element of $\mathfrak{g}$, satisfying $\mathrm{Ad}(e + w_i \cdot \varepsilon) (q_{-1}) \left( =q_{-1} + [w_i, q_{-1}]\cdot\varepsilon \right) = q_{-1} + \overline{\delta}_{\alpha_i} \cdot \varepsilon$. Hence, we have $[w_i, q_{-1}] = \overline{\delta}_{\alpha_i}$. 
Let us take a section $\lambda_i \in H^0 (U_{\alpha_i}, \mathfrak{g}_\mathcal{E})$ whose image in $\left(H^0(U_{\alpha_i}, \mathfrak{g}_\mathcal{E})/H^0 (U_{\alpha_i}, {^c}\mathfrak{g}_\mathcal{E}) = H^0 (U_{\alpha_i}, \sigma_{i*}(\sigma^*_i(\mathfrak{g}_\mathcal{E}))) = \right) \mathfrak{g}$ coincides with $w_i$, and replace $\delta_{\alpha_i}$ (resp., $\partial_{\alpha_i \beta}$) with the section $\delta_{\alpha_i} - \nabla^\mathrm{ad}(\lambda_i)$ (resp., $\partial_{\alpha_i \beta} - \lambda_i |_{U_{\alpha_i \beta}}$). Then, the resulting collection $(\{ \partial_{\alpha \beta} \}_{(\alpha, \beta)}, \{ \delta_\alpha \}_{\alpha})$ still represents $v$ and furthermore lies in $\mathrm{Tot}^\bullet (\check{C} (\mathscr{U}, \mathcal{K}^\bullet [\nabla_\mathrm{par}^\mathrm{ad}]))$. This means that $v$ lies in $\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}_\mathrm{par}])$. As a consequence, we conclude that $\gamma$ restricts to an isomorphism of $k$-vector spaces $T_q \mathcal{C}onn_{G, [0]^{\times r}}$ $\stackrel{\sim}{\rightarrow}\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}_\mathrm{par}])$, and this completes the proof of both the former and latter assertions of this proposition. ◻ By applying the above proposition together with the isomorphism ([\[Eqqw9\]](#Eqqw9){reference-type="ref" reference="Eqqw9"}), we obtain an isomorphism of $k$-vector spaces $$\begin{aligned} \label{K09} T_q \mathcal{C}onn_{G, [0]^{\times r}} \stackrel{\sim}{\rightarrow}T_q^\vee \mathcal{C}onn_{G, [0]^{\times r}}.\end{aligned}$$ ## The deformation space of a $G$-oper I {#SSa1} Let $\mathscr{E}^\spadesuit := (\mathcal{E}_B, \nabla)$ be a $G$-oper on $\mathscr{X}$, and write $\mathcal{E}:= \mathcal{E}_B \times^B G$. **Proposition 13**. *Let $\Box$ denote either the absence or presence of "$c$\". 
Then, there exists a short exact sequence $$\begin{aligned} 0 \longrightarrow H^0 (X, {^\Box}\mathcal{V}_G)\xrightarrow{'e_{\sharp, \Box}} \mathbb{H}^1 (X, \mathcal{K}^\bullet [{^\Box}\nabla^\mathrm{ad}]) \xrightarrow{{'e}_{\flat, \Box}} H^1(X, \Omega\otimes_{\mathcal{O}_X} {^\Box}\mathcal{V}_G^{\vee}) \longrightarrow 0. \label{HDeq1} \end{aligned}$$ Moreover, the following diagram forms an isomorphism of short exact sequences: $$\begin{aligned} \label{Ed3h7} \vcenter{\xymatrix@C=26pt@R=36pt{ 0 \ar[r] & H^0 (X, {^c}\mathcal{V}_G) \ar[r]^-{'e_{\sharp, c}} \ar[d]_-{\wr}^{\mathrm{Serre \ duality}} & \mathbb{H}^1 (X, \mathcal{K}^\bullet [{^c}\nabla^\mathrm{ad}]) \ar[r]^-{{'e}_{\flat, c}} \ar[d]^-{(\ref{Eww3})} & H^1(X, \Omega\otimes_{\mathcal{O}_X} {^c}(\mathcal{V}_G^{\vee})) \ar[r] \ar[d]_-{\wr}^{\mathrm{Serre \ duality}} & 0 \\ 0 \ar[r] & H^1 (X, \Omega \otimes_{\mathcal{O}_X} \mathcal{V}_G^\vee)^\vee \ar[r]_-{({'e}_{\flat})^\vee} & \mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}])^\vee \ar[r]_-{('e_\sharp)^\vee} & H^0 (X, \mathcal{V}_G)^\vee \ar[r]_-{} & 0, }}\end{aligned}$$ where the left-hand and right-hand vertical arrows are the isomorphisms arising from Serre duality.* *Proof.* After possibly replacing $\mathscr{E}^\spadesuit$ with its $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$-normalization, we may assume that $\mathscr{E}^\spadesuit$ is $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$-normal. 
The Hodge to de Rham spectral sequence $${'E}^{a,b}_{1} :=H^b (X, \mathcal{K}^a[{^\Box}\nabla^\mathrm{ad}]) \Rightarrow \mathbb{H}^{a+b}(X, \mathcal{K}^\bullet[{^\Box}\nabla^\mathrm{ad}]), \label{HDsp}$$ of the complex $\mathcal{K}^\bullet [{^\Box}\nabla^\mathrm{ad}]$ yields a short exact sequence $$\begin{aligned} 0 \longrightarrow\mathrm{Coker}(H^0({^\Box}\nabla^\mathrm{ad})) \longrightarrow\mathbb{H}^1(X, \mathcal{K}^\bullet [{^\Box}\nabla^\mathrm{ad}]) \longrightarrow\mathrm{Ker}(H^1({^\Box}\nabla^\mathrm{ad})) \longrightarrow 0, \label{HDeq} \end{aligned}$$ where $H^j ({^\Box}\nabla^\mathrm{ad})$ ($j = 0, 1$) denotes the morphism $H^j (X, \mathcal{K}^0[{^\Box}\nabla^\mathrm{ad}]) \rightarrow H^j(X, \mathcal{K}^1[{^\Box}\nabla^\mathrm{ad}])$ induced by ${^\Box}\nabla^\mathrm{ad}$. (Namely, this is obtained from the dual of the lower horizontal sequence in ([\[Eq3i90\]](#Eq3i90){reference-type="ref" reference="Eq3i90"}).) Here, observe that the following square diagram is commutative: $$\begin{aligned} \label{Ed3g7} \vcenter{\xymatrix@C=46pt@R=36pt{ H^0(X, {^c}\mathcal{V}_G) \ar[r]^-{H^0 (\varsigma |_{{^c}\mathcal{V}_G})} \ar[d]^-{\wr}_-{\mathrm{Serre \ duality}} & H^0(X, \Omega\otimes_{\mathcal{O}_X} {^c}\mathfrak{g}_{\mathcal{E}}) \ar[r]^-{\mathrm{quotient}} \ar[d]^-{\wr}_-{\mathrm{Serre \ duality}} & \mathrm{Coker}(H^0 ({^c}\nabla^{\mathrm{ad}})) \ar[d]^-{\wr} \\ H^1 (X, \Omega \otimes_{\mathcal{O}_X} \mathcal{V}_G^\vee)^\vee \ar[r]_-{H^1 (\varsigma^\veebar)^\vee} & H^1 (X, \mathfrak{g}_\mathcal{E}^\vee)^\vee \ar[r]_-{\mathrm{quotient}} & \mathrm{Ker}(H^1 (\nabla^\mathrm{ad}))^\vee, }}\end{aligned}$$ where - $\varsigma^{\veebar}$ denotes the morphism $\mathfrak{g}_{\mathcal{E}}^\vee \rightarrow\Omega\otimes_{\mathcal{O}_X} \mathcal{V}_G^{\vee}$ defined as the tensor product of the dual $\varsigma^\vee : \Omega^\vee \otimes_{\mathcal{O}_X} \mathfrak{g}_\mathcal{E}^\vee \rightarrow\mathcal{V}_G^\vee$ of $\varsigma$ and the identity morphism of $\Omega$; - the 
left-hand and middle vertical arrows denote the isomorphisms arising from Serre duality; - the right-hand vertical arrow denotes the isomorphism induced from ([\[Eq3i90\]](#Eq3i90){reference-type="ref" reference="Eq3i90"}) (via $\mathfrak{g}_\mathcal{E}\cong \mathfrak{g}_\mathcal{E}^\vee$). It follows from  [@Wak8 Lemma 6.4, (ii)] that the composite of the lower horizontal arrows is an isomorphism, so the composite of the upper horizontal arrows turns out to be an isomorphism. Also, by a similar argument together with  [@Wak8 Lemma 6.4, (i)], we obtain the following commutative square diagram all of whose arrows are isomorphisms: $$\begin{aligned} \label{Ed3gg7} \vcenter{\xymatrix@C=46pt@R=36pt{ \mathrm{Ker} (H^1 ({^c}\nabla^\mathrm{ad})) \ar[r]^-{\sim} \ar[d]_-{\wr} & H^1 (X, \Omega \otimes_{\mathcal{O}_X} {^c}(\mathcal{V}_G^\vee)) \ar[d]^-{\wr} \\ \mathrm{Coker} (H^0 (\nabla^\mathrm{ad}))^\vee \ar[r]_-{\sim} & H^0 (X, \mathcal{V}_G)^\vee. }}\end{aligned}$$ Under the identifications of $k$-vector spaces given by the isomorphisms in ([\[Ed3g7\]](#Ed3g7){reference-type="ref" reference="Ed3g7"}) and ([\[Ed3gg7\]](#Ed3gg7){reference-type="ref" reference="Ed3gg7"}), the short exact sequence ([\[HDeq\]](#HDeq){reference-type="ref" reference="HDeq"}) coincides with the desired sequence ([\[HDeq1\]](#HDeq1){reference-type="ref" reference="HDeq1"}). This completes the proof of the former assertion. Moreover, the latter assertion follows from the various definitions of morphisms involved. ◻ In the resp'd portion of the following discussion, we suppose that the $G$-oper $\mathscr{E}^\spadesuit$ is of radii $\rho \in \mathfrak{c}(k)^{\times r}$ (where $\rho := \emptyset$ if $r = 0$). 
Denote by $q$ the $k$-rational point of $\mathcal{O}p_G$ (resp., $\mathcal{O}p_{G, \rho}$) classifying $\mathscr{E}^\spadesuit$; we use the same notation "$q$\" to denote the image of this point via the immersion $\mathrm{Imm}_G : \mathcal{O}p_G \hookrightarrow\mathcal{C}onn_G$ (resp., $\mathrm{Imm}_{G, \widetilde{\rho}} : \mathcal{O}p_{G, \rho} \hookrightarrow\mathcal{C}onn_{G, \widetilde{\rho}}$). The affine structure on $\mathcal{O}p_G$ (resp., $\mathcal{O}p_{G, \rho}$) recalled in § [3.3](#SS036){reference-type="ref" reference="SS036"} gives an identification between $H^0 (X, \mathcal{V}_G)$ (resp., $H^0 (X, {^c}\mathcal{V}_G)$) and the space of first-order deformations of the $G$-oper $\mathscr{E}^\spadesuit$ (resp., the $G$-oper $\mathscr{E}^\spadesuit$ fixing the radii). Hence, if $T_q \mathcal{O}p_G$ (resp., $T_q \mathcal{O}p_{G, \rho}$) denotes the tangent space of $\mathcal{O}p_G$ (resp., $\mathcal{O}p_{G, \rho}$) at $q$, then there exists a canonical isomorphism of $k$-vector spaces $$\begin{aligned} \label{Eq300} \gamma_\sharp : T_q \mathcal{O}p_G \stackrel{\sim}{\rightarrow}H^0 (X, \mathcal{V}_G) \ \left(\text{resp.,} \ \gamma_{\sharp, \rho} : T_q \mathcal{O}p_{G, \rho} \stackrel{\sim}{\rightarrow}H^0 (X, {^c}\mathcal{V}_G) \right).\end{aligned}$$ Moreover, $\gamma$ and $\gamma_\sharp$ (resp., $\gamma_{\widetilde{\rho}}$ and $\gamma_{\sharp, \rho}$) make the following square diagram commute: $$\begin{aligned} \label{Ed37} \vcenter{\xymatrix@C=33pt@R=36pt{ T_q \mathcal{O}p_G \ar[r] \ar[d]_-{\gamma_\sharp}^-{\wr} & T_q \mathcal{C}onn_G \ar[d]^-{\gamma}_-{\wr} \\ H^0 (X, \mathcal{V}_G) \ar[r]_-{{'}e_\sharp} & \mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^{\mathrm{ad}}]) }} \ \left(\text{resp.,} \ \vcenter{\xymatrix@C=33pt@R=36pt{ T_q \mathcal{O}p_{G, \rho} \ar[r] \ar[d]_-{\gamma_{\sharp, \rho}}^-{\wr} & T_q \mathcal{C}onn_{G, \widetilde{\rho}} \ar[d]^-{\gamma_{\widetilde{\rho}}}_-{\wr} \\ H^0 (X, {^c}\mathcal{V}_G) \ar[r]_-{{'}e_{\sharp, c}} & \mathbb{H}^1 
(X, \mathcal{K}^\bullet [{^c}\nabla^{\mathrm{ad}}]) }} \right),\end{aligned}$$ where the upper horizontal arrow denotes the differential at $q$ of $\mathrm{Imm}_G$ (resp., $\mathrm{Imm}_{G, \widetilde{\rho}}$). Thus, we have obtained the following assertion. **Theorem 14**. *There exist canonical short exact sequences $$\begin{aligned} \label{WWW45} 0 \longrightarrow T_q \mathcal{O}p_G &\longrightarrow T_q \mathcal{C}onn_{G} \longrightarrow T_q^\vee \mathcal{O}p_{G, \rho} \longrightarrow 0, \\ 0 \longrightarrow T_q \mathcal{O}p_{G, \rho} &\longrightarrow T_q \mathcal{C}onn_{G, \widetilde{\rho}} \longrightarrow T_q^\vee \mathcal{O}p_{G} \longrightarrow 0, \notag\end{aligned}$$ where the second arrows in the upper and lower sequences are the differentials at $q$ of $\mathrm{Imm}_G$ and $\mathrm{Imm}_{G, \rho}$, respectively. Moreover, the second sequence in ([\[WWW45\]](#WWW45){reference-type="ref" reference="WWW45"}) is compatible with the dual of the first one via the isomorphism ([\[Er67\]](#Er67){reference-type="ref" reference="Er67"}).* *Proof.* The assertion follows from Proposition [Proposition 13](#Prop62){reference-type="ref" reference="Prop62"} together with the commutativity of the diagrams displayed in ([\[Ed37\]](#Ed37){reference-type="ref" reference="Ed37"}). ◻ ## The deformation space of a $G$-oper II {#SSa5} Next, we prove the self-duality of the parabolic de Rham cohomology group $\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}_\mathrm{par}])$. To this end, let us first consider the following lemma. **Lemma 15**. *Suppose that $\mathscr{E}^\spadesuit$ is $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$-normal and of radii $[0]^{\times r}$. 
For each $j \in \mathbb{Z}$, we shall set $$\begin{aligned} \label{Er40} \Omega_\mathrm{par}^{0, \Box} (\mathfrak{g}_\mathcal{E})^j := \Omega_\mathrm{par}^{0, \Box} (\mathfrak{g}_\mathcal{E}) \cap \mathfrak{g}_\mathcal{E}^j \ \ \text{and} \ \ \Omega_\mathrm{par}^{1, \Box} (\mathfrak{g}_\mathcal{E})^j := \Omega_\mathrm{par}^{1, \Box} (\mathfrak{g}_\mathcal{E}) \cap (\Omega \otimes_{\mathcal{O}_X} \mathfrak{g}_\mathcal{E}^j),\end{aligned}$$ where $\Box$ denotes either the absence or presence of "$c$\". Also, denote by $$\begin{aligned} \label{Er11} {^\Box}\nabla_\mathrm{par}^{\mathrm{ad}(j)} : \Omega_\mathrm{par}^{0, \Box} (\mathfrak{g}_\mathcal{E})^j \rightarrow\Omega_\mathrm{par}^{1, \Box} (\mathfrak{g}_\mathcal{E})^{j-1} \end{aligned}$$ the morphism obtained by restricting $\nabla_\mathrm{par}^\mathrm{ad}$. Then, the following assertions hold:* - *For each integer $j$, the composite $$\begin{aligned} \label{Er10} H^0 (X, {^c}\mathcal{V}_G^j) \rightarrow H^0 (X, \Omega^{1, c}_\mathrm{par}(\mathfrak{g}_\mathcal{E})^j) \xrightarrow{\mathrm{quotient}} \mathrm{Coker}(H^0 ({^c}\nabla_\mathrm{par}^{\mathrm{ad} (j+1)}))\end{aligned}$$ is an isomorphism, where the first arrow arises from the natural inclusion ${^c}\mathcal{V}_G^j \hookrightarrow\Omega_\mathrm{par}^{1, c} (\mathfrak{g}_\mathcal{E})^j$. 
In particular, we have $$\begin{aligned} \label{Er2} H^0 (X, {^c}\mathcal{V}_G) \cong \mathrm{Coker}(H^0 ({^c}\nabla_\mathrm{par}^\mathrm{ad})).\end{aligned}$$* - *For each integer $j$, the composite $$\begin{aligned} \label{Er17} \mathrm{Ker}(H^1 (\nabla^{\mathrm{ad} (j)}_\mathrm{par})) \xrightarrow{\mathrm{inclusion}} H^1 (X, \Omega_\mathrm{par}^{0}(\mathfrak{g}_{\mathcal{E}})^j) \rightarrow H^1 (X, \Omega \otimes_{\mathcal{O}_X} \mathcal{V}_G^{\vee, j})\end{aligned}$$ is an isomorphism, where $\mathcal{V}_G^{\vee, j} := (\mathcal{V}_G/\mathcal{V}_G^{-j +1})^\vee \left(\subseteq \mathcal{V}_G^\vee \right)$ and the second arrow arises from the natural surjection $\Omega_\mathrm{par}^{0}(\mathfrak{g}_{\mathcal{E}})^j \twoheadrightarrow\Omega \otimes_{\mathcal{O}_X} \mathcal{V}_G^{\vee, j}$. In particular, we have $$\begin{aligned} \label{Er18} \mathrm{Ker} (H^1 (\nabla^\mathrm{ad}_\mathrm{par})) \cong H^1 (X, \Omega \otimes_{\mathcal{O}_X} \mathcal{V}_G^\vee).\end{aligned}$$* *Proof.* We only consider assertion (ii) since assertion (i) follows from a similar argument. For each integer $j$, we shall write $\nabla_\mathrm{par}^{\mathrm{ad}(j/j+1)}$ for the $\mathcal{O}_X$-linear morphism $\Omega_\mathrm{par}^0 (\mathfrak{g}_\mathcal{E}^j)/\Omega_\mathrm{par}^0 (\mathfrak{g}_\mathcal{E}^{j+1}) \rightarrow\Omega_\mathrm{par}^1 (\mathfrak{g}_\mathcal{E})^{j-1}/\Omega_\mathrm{par}^1 (\mathfrak{g}_\mathcal{E})^{j}$ induced from $\nabla_\mathrm{par}^\mathrm{ad}$. First, let us consider the case of $j \geq 0$. 
Since the residue of $\nabla$ at every marked point coincides with $q_{-1}$ (by the assumption that $\mathscr{E}^\spadesuit$ is $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$-normal and of radii $[0]^{\times r}$), one may verify that the resp'd portion of  [@Wak8 (768)] restricts to a *split* exact sequence $$\begin{aligned} \label{Er31} 0 \longrightarrow \Omega_\mathrm{par}^0 (\mathfrak{g}_\mathcal{E})^j/\Omega_\mathrm{par}^0 (\mathfrak{g}_\mathcal{E})^{j+1} \xrightarrow{\nabla_\mathrm{par}^{\mathrm{ad}(j/j+1)}} \Omega_\mathrm{par}^1 (\mathfrak{g}_\mathcal{E})^{j-1}/\Omega_\mathrm{par}^1 (\mathfrak{g}_\mathcal{E})^{j} \longrightarrow \mathcal{V}_{G, j-1} \longrightarrow 0.\end{aligned}$$ In particular, the equality $\mathrm{Ker}(H^1 (\nabla_\mathrm{par}^{\mathrm{ad}(j/j+1)})) =0$ holds (by which we obtain $\mathrm{Ker}(H^1 (\nabla_\mathrm{par}^{\mathrm{ad}(j)})) = 0$). Hence, the assertion for $j \geq 0$ follows from the fact that $\mathcal{V}_G^{\vee, j} \left(= (\mathcal{V}_G^{}/\mathcal{V}_G^{-j +1})^\vee \right) = 0$. Next, we shall prove the case of $j \leq -1$. Consider the following morphism of sequences: $$\begin{aligned} \label{MorSeq} \vcenter{\xymatrix@C=21pt@R=36pt{ 0 \ar[r]& H^1 (\Omega_\mathrm{par}^0(\mathfrak{g}_{\mathcal{E}})^{j+1}) \ar[r] \ar[d]^-{ H^1(\nabla_\mathrm{par}^{\mathrm{ad}(j+1)})} &H^1 (\Omega_\mathrm{par}^0 (\mathfrak{g}_\mathcal{E})^{j}) \ar[r] \ar[d]^-{H^1(\nabla_\mathrm{par}^{\mathrm{ad}(j)})} & H^1 (\Omega_\mathrm{par}^0 (\mathfrak{g}_\mathcal{E}^j)/\Omega_\mathrm{par}^0 (\mathfrak{g}_\mathcal{E}^{j+1})) \ar[r] \ar[d]^-{H^1(\nabla_\mathrm{par}^{\mathrm{ad}(j/j+1)})} & 0 \\ 0 \ar[r]& H^1 (\Omega_\mathrm{par}^1 (\mathfrak{g}_\mathcal{E})^j) \ar[r] & H^1 (\Omega_\mathrm{par}^1 (\mathfrak{g}_\mathcal{E})^{j-1}) \ar[r] & H^1 (\Omega_\mathrm{par}^1 (\mathfrak{g}_\mathcal{E})^{j-1}/\Omega_\mathrm{par}^1 (\mathfrak{g}_\mathcal{E})^{j}) \ar[r]& 0. 
}}\end{aligned}$$ Since the residue of $\nabla$ at every marked point coincides with $q_{-1}$, the line bundles $\Omega_\mathrm{par}^0 (\mathfrak{g}_\mathcal{E}^j)/\Omega_\mathrm{par}^0 (\mathfrak{g}_\mathcal{E}^{j+1})$ and $\Omega_\mathrm{par}^1 (\mathfrak{g}_\mathcal{E})^{j-1}/\Omega_\mathrm{par}^1 (\mathfrak{g}_\mathcal{E})^{j}$ coincide with $\mathfrak{g}_\mathcal{E}^j/\mathfrak{g}_\mathcal{E}^{j+1}$ and $\Omega \otimes_{\mathcal{O}_X} (\mathfrak{g}_\mathcal{E}^{j-1}/\mathfrak{g}_\mathcal{E}^{j})$, respectively. In particular, these are isomorphic to direct sums of finitely many copies of $\Omega^{\otimes j}$ (cf.  [@Wak8 § 2.1.4]), so we have $$\label{f=0} H^0 (X, \Omega_\mathrm{par}^0 (\mathfrak{g}_\mathcal{E}^j)/\Omega_\mathrm{par}^0 (\mathfrak{g}_\mathcal{E}^{j+1})) = H^0 (X, \Omega_\mathrm{par}^1 (\mathfrak{g}_\mathcal{E})^{j-1}/\Omega_\mathrm{par}^1 (\mathfrak{g}_\mathcal{E})^{j}) = 0.$$ By the above equalities together with the fact that $\mathrm{dim}(X) =1$ (which implies $H^2 (-) =0$), both the upper and lower horizontal sequences in ([\[MorSeq\]](#MorSeq){reference-type="ref" reference="MorSeq"}) turn out to be exact. Also, the right-hand vertical arrow in ([\[MorSeq\]](#MorSeq){reference-type="ref" reference="MorSeq"}) is surjective because of the long exact sequence arising from the non-resp'd portion of  [@Wak8 (768)]. By descending induction on $j$, one can verify the surjectivity of both the left-hand and middle vertical arrows in ([\[MorSeq\]](#MorSeq){reference-type="ref" reference="MorSeq"}). Thus, the snake lemma applied to ([\[MorSeq\]](#MorSeq){reference-type="ref" reference="MorSeq"}) shows that the natural sequence $$0 \rightarrow\mathrm{Ker}(H^1 (\nabla_\mathrm{par}^{\mathrm{ad}(j+1)}))\rightarrow\mathrm{Ker}(H^1 (\nabla_\mathrm{par}^{\mathrm{ad}(j)})) \rightarrow\mathrm{Ker}(H^1 (\nabla_\mathrm{par}^{\mathrm{ad}(j/j+1)})) \rightarrow 0 \label{Ker} \hspace{-3mm}$$ is exact. 
Let us consider the morphism of short exact sequences $$\begin{aligned} \label{MorSeq2} \vcenter{\xymatrix@C=21pt@R=36pt{ 0 \ar[r]& \mathrm{Ker}(H^1(\nabla_\mathrm{par}^{\mathrm{ad}(j+1)})) \ar[d] \ar[r] &\mathrm{Ker}(H^1(\nabla_\mathrm{par}^{\mathrm{ad}(j)})) \ar[r] \ar[d]^-{(\ref{Er17})} & \mathrm{Ker}(H^1 (\nabla_\mathrm{par}^{\mathrm{ad}(j/j+1)})) \ar[d] \ar[r] & 0 \\ 0 \ar[r]& H^1(\Omega\otimes_{\mathcal{O}_X} \mathcal{V}_G^{\vee, j+1}) \ar[r]& H^1(\Omega\otimes_{\mathcal{O}_X} \mathcal{V}_G^{\vee, j}) \ar[r] & H^1(\Omega\otimes_{\mathcal{O}_X} (\mathcal{V}_{G, -j})^\vee) \ar[r]& 0, }}\hspace{-4mm}\end{aligned}$$ where the left-hand vertical arrow is ([\[Er17\]](#Er17){reference-type="ref" reference="Er17"}) with $j$ replaced by $j+1$ and the exactness of the lower horizontal sequence follows from the natural decomposition $\mathcal{V}_G^{\vee, j} = \mathcal{V}_G^{\vee, j+1} \oplus (\mathcal{V}_{G, -j})^\vee$. The right-hand vertical arrow in this diagram is an isomorphism because it coincides with the inverse of the isomorphism $\mathbb{R}^1 f_*(\Omega \otimes_{\mathcal{O}_X} (\mathcal{V}_{G, -j})^\vee) \stackrel{\sim}{\rightarrow}\mathrm{Ker}(\mathbb{R}^1 f_*(\nabla^{\mathrm{ad}(j/j+1)}))$ arising from the splitting of the non-resp'd portion of  [@Wak8 (768)]. Hence, by descending induction on $j$, we see that the middle vertical arrow ([\[Er17\]](#Er17){reference-type="ref" reference="Er17"}) is an isomorphism. This completes the proof of the assertion. ◻ By using the above lemma, we obtain the following proposition. **Proposition 16**. *Suppose that the $G$-oper $\mathscr{E}^\spadesuit$ is $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$-normal and of radii $[0]^{\times r}$ (where $[0]^{\times r} := \emptyset$ if $r = 0$). 
Then, the following assertions hold:* - *$\mathbb{H}^0 (X, \mathcal{K}^\bullet [\nabla_{\mathrm{par}}^\mathrm{ad}]) = \mathbb{H}^2 (X, \mathcal{K}^\bullet [\nabla_{\mathrm{par}}^\mathrm{ad}]) = 0$.* - *There exists a short exact sequence $$\begin{aligned} \label{Eqr} 0 \longrightarrow H^0 (X, {^c}\mathcal{V}_G) \xrightarrow{{'}e_{\sharp, \mathrm{par}}} \mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}_\mathrm{par}]) \xrightarrow{{'}e_{\flat, \mathrm{par}}} H^1 (X, \Omega \otimes_{\mathcal{O}_X} \mathcal{V}_G^\vee) \longrightarrow 0,\end{aligned}$$ which fits into the following isomorphism of short exact sequences: $$\begin{aligned} \label{Ed3hh7} \vcenter{\xymatrix@C=31pt@R=36pt{ 0 \ar[r] & H^0 (X, {^c}\mathcal{V}_G) \ar[r]^-{'e_{\sharp, \mathrm{par}}} \ar[d]^-{\mathrm{Serre \ duality}}_-{\wr} & \mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}_\mathrm{par}]) \ar[r]^-{{'e}_{\flat, \mathrm{par}}} \ar[d]^-{(\ref{Eqqw9})} & H^1(X, \Omega\otimes_{\mathcal{O}_X} \mathcal{V}_G^{\vee}) \ar[r] \ar[d]^-{\mathrm{Serre \ duality}}_-{\wr} & 0 \\ 0 \ar[r] & H^1 (X, \Omega \otimes_{\mathcal{O}_X} \mathcal{V}_G^\vee)^\vee \ar[r]_-{({'e}_{\flat, \mathrm{par}})^\vee} & \mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}_\mathrm{par}])^\vee \ar[r]_-{('e_{\sharp, \mathrm{par}})^\vee} & H^0 (X, {^c}\mathcal{V}_G)^\vee \ar[r]_-{} & 0. }}\end{aligned}$$ In particular, $\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla_{\mathrm{par}}^\mathrm{ad}])$ is a $k$-vector space of dimension $(2g-2 +r) \cdot \mathrm{dim}(\mathfrak{g}) - r \cdot \mathrm{rk} (\mathfrak{g})$.* *Proof.* First, we shall prove assertion (i). To this end, it suffices to verify the equality $\mathbb{H}^2 (X, \mathcal{K}^\bullet [\nabla_{\mathrm{par}}^\mathrm{ad}]) = 0$ because the proof of the remaining one is similar. 
Since $H^1 (X, \mathcal{V}_{G, j-1}) = 0$ for every $j$, it follows from the exactness of ([\[Er31\]](#Er31){reference-type="ref" reference="Er31"}) that the morphism $$\begin{aligned} H^1 (\nabla_{\mathrm{par}}^{\mathrm{ad} (j/j+1)}) : H^1 (X, \Omega_\mathrm{par}^0 (\mathfrak{g}_\mathcal{E})^{j}/\Omega_\mathrm{par}^0 (\mathfrak{g}_\mathcal{E})^{j+1}) \rightarrow H^1 (X, \Omega_\mathrm{par}^1 (\mathfrak{g}_\mathcal{E})^{j-1}/\Omega_\mathrm{par}^1 (\mathfrak{g}_\mathcal{E})^{j})\end{aligned}$$ induced from $\nabla_{\mathrm{par}}^{\mathrm{ad} (j/j+1)}$ is surjective when $j \geq 0$. Next, consider the following morphism of short exact sequences defined for each $j \in \mathbb{Z}$: $$\begin{aligned} \label{f} \vcenter{\xymatrix@C=31pt@R=36pt{ 0 \ar[r] & \Omega_\mathrm{par}^0 (\mathfrak{g}_\mathcal{E})^{j+1} \ar[r] \ar[d]^-{\nabla_\mathrm{par}^{\mathrm{ad}(j+1)}} & \Omega_\mathrm{par}^0 (\mathfrak{g}_\mathcal{E})^{j} \ar[r] \ar[d]^-{\nabla_\mathrm{par}^{\mathrm{ad}(j)}} & \Omega_\mathrm{par}^0 (\mathfrak{g}_\mathcal{E})^{j}/\Omega_\mathrm{par}^0 (\mathfrak{g}_\mathcal{E})^{j+1} \ar[r] \ar[d]^-{\nabla_{\mathrm{par}}^{\mathrm{ad} (j/j+1)}} & 0 \\ 0 \ar[r] & \Omega_\mathrm{par}^1 (\mathfrak{g}_\mathcal{E})^{j} \ar[r] & \Omega_\mathrm{par}^1 (\mathfrak{g}_\mathcal{E})^{j-1} \ar[r] & \Omega_\mathrm{par}^1 (\mathfrak{g}_\mathcal{E})^{j-1}/\Omega_\mathrm{par}^1 (\mathfrak{g}_\mathcal{E})^{j} \ar[r] & 0. }}\end{aligned}$$ By descending induction on $j$, the surjectivity of $H^1 (\nabla_{\mathrm{par}}^{\mathrm{ad} (j/j+1)})$ implies that of $H^1 (\nabla_{\mathrm{par}}^{\mathrm{ad}(j)}) : H^1 (X, \Omega_\mathrm{par}^0 (\mathfrak{g}_\mathcal{E})^j) \rightarrow H^1 (X, \Omega_\mathrm{par}^0 (\mathfrak{g}_\mathcal{E})^{j-1})$. 
Then, the required equality $\mathbb{H}^2 (X, \mathcal{K}^\bullet [\nabla_{\mathrm{par}}^\mathrm{ad}]) = 0$ follows from the surjectivity of $H^1 (\nabla_{\mathrm{par}}^{\mathrm{ad}(j)})$ for $j= -\mathrm{rk}(\mathfrak{g})$ together with the Hodge to de Rham spectral sequence $E_1^{a, b} = H^b(X, \mathcal{K}^a [\nabla_{\mathrm{par}}^{\mathrm{ad}}]) \Rightarrow \mathbb{H}^{a+b} (X, \mathcal{K}^\bullet [\nabla_\mathrm{par}^\mathrm{ad}])$. Assertion (ii) follows from the commutativity of the diagram ([\[Eqq301\]](#Eqq301){reference-type="ref" reference="Eqq301"}), the definition of ([\[Eqqw9\]](#Eqqw9){reference-type="ref" reference="Eqqw9"}), Lemma [Lemma 15](#Lem55){reference-type="ref" reference="Lem55"}, (i) and (ii), and  [@Wak8 Proposition 2.23, (ii)]. ◻ Let us keep the assumption in the above proposition. As mentioned at the end of § [3.3](#SS036){reference-type="ref" reference="SS036"}, there exists a canonical affine structure on $\mathcal{O}p_{G, [0]^{\times r}}$ modeled on $H^0 (X, {^c}\mathcal{V}_G)$. This affine structure induces an isomorphism $$\begin{aligned} \gamma_{\sharp, [0]^{\times r}} : T_q \mathcal{O}p_{G, [0]^{\times r}} \stackrel{\sim}{\rightarrow}H^0 (X, {^c}\mathcal{V}_G).\end{aligned}$$ It follows from the various constructions involved that the following square diagram is commutative: $$\begin{aligned} \label{Eq340} \vcenter{\xymatrix@C=46pt@R=36pt{ T_q \mathcal{O}p_{G, [0]^{\times r}} \ar[r] \ar[d]_-{\gamma_{\sharp, [0]^{\times r}}}^-{\wr} & T_q \mathcal{C}onn_{G, [0]^{\times r}} \ar[d]^-{\gamma_{[0]^{\times r}}}_-{\wr} \\ H^0 (X, {^c}\mathcal{V}_G) \ar[r]_-{{'}e_{\sharp, \mathrm{par}}} & \mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla_\mathrm{par}^{\mathrm{ad}}]), }}\end{aligned}$$ where the upper horizontal arrow denotes the differential at $q$ of $\mathrm{Imm}_{G, [0]^{\times r}}$. **Theorem 17**. 
- *There exists a canonical short exact sequence $$\begin{aligned} 0 \longrightarrow T_q \mathcal{O}p_{G, [0]^{\times r}} \longrightarrow T_q \mathcal{C}onn_{G, [0]^{\times r}} \longrightarrow T_q^\vee \mathcal{O}p_{G, [0]^{\times r}} \longrightarrow 0,\end{aligned}$$ where the second arrow is the differential of $\mathrm{Imm}_{G, [0]^{\times r}}$ at $q$. Moreover, this sequence is, in an evident sense, invariant under taking the dual via ([\[K09\]](#K09){reference-type="ref" reference="K09"}).* - *Let us consider $T_q \mathcal{O}p_{G, [0]^{\times r}}$ as a $k$-vector subspace of $T_q \mathcal{C}onn_{G, [0]^{\times r}}$ via the second arrow in the lower sequence of ([\[WWW45\]](#WWW45){reference-type="ref" reference="WWW45"}). Then, $T_q \mathcal{O}p_{G, [0]^{\times r}}$ is Lagrangian with respect to the nondegenerate pairing $T_q \mathcal{C}onn_{G, [0]^{\times r}} \times T_q \mathcal{C}onn_{G, [0]^{\times r}} \rightarrow k$ induced by ([\[K09\]](#K09){reference-type="ref" reference="K09"}).* *Proof.* Assertion (i) follows from Proposition [Proposition 16](#Er35){reference-type="ref" reference="Er35"}, (ii), together with the isomorphisms $\gamma_{[0]^{\times r}}$ and $\gamma_{\sharp, [0]^{\times r}}$. Next, by the definition of ${'}e_{\sharp, \mathrm{par}}$, we see that $T_q \mathcal{O}p_{G, [0]^{\times r}}$ is isotropic. Hence, assertion (ii) follows from the equality $\mathrm{dim}(T_q \mathcal{O}p_{G, [0]^{\times r}}) = \frac{1}{2} \cdot \mathrm{dim}(T_q \mathcal{C}onn_{G, [0]^{\times r}})$ resulting from assertion (i). ◻ # Cohomology of the symmetric products of opers {#S23g0} In this section, we examine the (parabolic) de Rham cohomology of symmetric products of a $\mathrm{PGL}_2$-oper (or an $\mathrm{SL}_2$-oper). We will show that, if a given $G$-oper comes from a $\mathrm{PGL}_2$-oper via change of structure group, then the (parabolic) de Rham cohomology group of the induced adjoint bundle decomposes into the direct sum of such cohomology groups (cf. 
([\[Eq83f4\]](#Eq83f4){reference-type="ref" reference="Eq83f4"})). ## The de Rham cohomology of an $\mathrm{SL}_n$-oper {#SS08700} We shall fix an integer $n$ with $1 < n < p$, and let $\mathscr{F}^\heartsuit := (\mathcal{F}, \nabla, \{ \mathcal{F}^j \}_{j=0}^n)$ be an $\mathrm{SL}_n$-oper on $\mathscr{X}$ (cf. Remark [Remark 2](#Rem667){reference-type="ref" reference="Rem667"}). Write $\mathscr{F}:= (\mathcal{F}, \nabla)$ and ${^c}\mathscr{F}:= ({^c}\mathcal{F}, {^c}\nabla)$, where ${^c}\nabla$ denotes the log connection on ${^c}\mathcal{F}$ obtained by restricting $\nabla$. Let $\Box$ denote either the absence or presence of "$c$\". Then, the natural morphisms of complexes $\Omega\otimes_{\mathcal{O}_X} {^\Box}\mathcal{F}^{n-1}[-1] \rightarrow\mathcal{K}^\bullet [{^\Box}\nabla]$ and $\mathcal{K}^\bullet [{^\Box}\nabla] \rightarrow{^\Box}\mathcal{F}/{^\Box}\mathcal{F}^1 [0]$ together induce a sequence of $k$-vector spaces $$\begin{aligned} \label{Eq116} 0 \longrightarrow H^0 (X, \Omega\otimes_{\mathcal{O}_X} {^\Box}\mathcal{F}^{n-1}) \longrightarrow H_{\mathrm{dR}}^1 (X, {^\Box}\mathscr{F}) \longrightarrow H^1 (X, {^\Box}\mathcal{F}/{^\Box}\mathcal{F}^1) \longrightarrow 0.\end{aligned}$$ Also, the natural morphisms $\Omega \otimes_{\mathcal{O}_X} {^c}\mathcal{F}^{n-1}[-1] \rightarrow\mathcal{K}^\bullet [\nabla_{\mathrm{par}}]$ and $\mathcal{K}^\bullet [\nabla_{\mathrm{par}}] \rightarrow\mathcal{F}/\mathcal{F}^1 [0]$ induce a sequence of $k$-vector spaces $$\begin{aligned} \label{Eq648} 0 \longrightarrow H^0 (X, \Omega\otimes_{\mathcal{O}_X} {^c}\mathcal{F}^{n-1}) \longrightarrow H_{\mathrm{dR}, \mathrm{par}}^1 (X, \mathscr{F}) \longrightarrow H^1 (X, \mathcal{F}/\mathcal{F}^1) \longrightarrow 0.\end{aligned}$$ Then, the following assertion holds. **Proposition 18**. 
- *The sequence ([\[Eq116\]](#Eq116){reference-type="ref" reference="Eq116"}) is exact.* - *Suppose that $\mathscr{F}^\heartsuit$ is of radii $[0]^{\times r} \in \mathfrak{c}(k)^{\times r}$ (where $[0]^{\times r} := \emptyset$ if $r = 0$). Then, the sequence ([\[Eq648\]](#Eq648){reference-type="ref" reference="Eq648"}) is exact.* *Proof.* We only consider assertion (ii) because the proof of assertion (i) is simpler than that of assertion (ii). For each $j = 0, \cdots, n$, we shall set $\Omega_{\mathrm{par}}^0 (\mathcal{F})^j := \mathcal{F}^j$ and $\Omega_{\mathrm{par}}^1 (\mathcal{F})^j := \Omega \otimes_{\mathcal{O}_X} {^c}\mathcal{F}^j + \nabla (\mathcal{F}^{j+1})$, where $\mathcal{F}^{n+1} := 0$. Also, denote by $\nabla^j_{\mathrm{par}}$ ($j = 1, \cdots, n$) the morphism $\Omega^0_{\mathrm{par}} (\mathcal{F})^j \rightarrow\Omega^1_{\mathrm{par}} (\mathcal{F})^{j-1}$ obtained by restricting $\nabla$. By descending induction on $j \geq 1$, we shall prove the claim that the morphism $$\begin{aligned} \label{Eq400} H^l (X, \Omega \otimes_{\mathcal{O}_X} {^c}\mathcal{F}^{n-1}) \rightarrow\mathbb{H}^{l+1} (X, \mathcal{K}^\bullet [\nabla^j_{\mathrm{par}}])\end{aligned}$$ induced by the inclusion $\Omega \otimes_{\mathcal{O}_X} {^c}\mathcal{F}^{n-1}[-1] \hookrightarrow\mathcal{K}^\bullet [\nabla^j_{\mathrm{par}}]$ is an isomorphism for every $l$. The base step, i.e., the case of $j = n$, is clear because $\nabla^n_\mathrm{par}$ coincides with the zero map $0 \rightarrow\Omega \otimes_{\mathcal{O}_X} {^c}\mathcal{F}^{n-1}$. To prove the induction step, we assume that the case of $j = j_0$ ($1 < j_0 \leq n$) has been proved. Note that the Kodaira-Spencer map $\mathrm{KS}_{\mathscr{F}^\heartsuit}^{j_0 -1}$ (cf. 
([\[Eqq212\]](#Eqq212){reference-type="ref" reference="Eqq212"})) decomposes into the composite of morphisms between line bundles $$\begin{aligned} \left(\mathcal{F}^{j_0-1}/\mathcal{F}^{j_0} = \right)\Omega_{\mathrm{par}}^0 (\mathcal{F})^{j_0-1} /\Omega_{\mathrm{par}}^0 (\mathcal{F})^{j_0} &\xrightarrow{\mathrm{KS}_{\mathscr{F}^\heartsuit, \mathrm{par}}^{j_0 -1}} \Omega_{\mathrm{par}}^1 (\mathcal{F})^{j_0-2} /\Omega_{\mathrm{par}}^1 (\mathcal{F})^{j_0-1} \\ & \xrightarrow{\mathrm{inclusion}} \Omega \otimes_{\mathcal{O}_X} (\mathcal{F}^{j_0-2}/\mathcal{F}^{j_0-1}), \notag \end{aligned}$$ where the first arrow, i.e., $\mathrm{KS}_{\mathscr{F}^\heartsuit, \mathrm{par}}^{j_0 -1}$, is the morphism induced from $\nabla^{j_0 -1}_{\mathrm{par}}$. Since $\mathrm{KS}_{\mathscr{F}^\heartsuit}^{j_0 -1}$ is an isomorphism by assumption, the morphism $\mathrm{KS}_{\mathscr{F}^\heartsuit, \mathrm{par}}^{j_0 -1}$ is verified to be an isomorphism. The morphism $\mathbb{H}^{l+1} (X, \mathcal{K}^\bullet [\nabla^{j_0 }_{\mathrm{par}}]) \rightarrow\mathbb{H}^{l+1} (X, \mathcal{K}^\bullet [\nabla^{j_0 -1}_{\mathrm{par}}])$ induced by the inclusion $\mathcal{K}^\bullet [\nabla^{j_0 }_{\mathrm{par}}] \hookrightarrow\mathcal{K}^\bullet [\nabla^{j_0 -1}_{\mathrm{par}}]$ is an isomorphism because the following diagram forms a morphism of short exact sequences: $$\begin{aligned} \label{Eq340} \vcenter{\xymatrix@C=36pt@R=36pt{ 0\ar[r] & \Omega_{\mathrm{par}}^0 (\mathcal{F})^{j_0}\ar[r]^-{\mathrm{inclusion}} \ar[d]^-{\nabla^{j_0}_{\mathrm{par}}} &\Omega_{\mathrm{par}}^0 (\mathcal{F})^{j_0-1} \ar[r]^-{\mathrm{quotient}} \ar[d]^-{\nabla^{j_0-1}_{\mathrm{par}}} & \Omega_{\mathrm{par}}^0 (\mathcal{F})^{j_0-1}/\Omega_{\mathrm{par}}^0 (\mathcal{F})^{j_0} \ar[r] \ar[d]_{\wr}^-{\mathrm{KS}_{\mathscr{F}^\heartsuit, \mathrm{par}}^{j_0-1}} & 0 \\ 0\ar[r] & \Omega_{\mathrm{par}}^1 (\mathcal{F})^{j_0 -1}\ar[r]_-{\mathrm{inclusion}} & \Omega_{\mathrm{par}}^1 (\mathcal{F})^{j_0-2} \ar[r]_-{\mathrm{quotient}} & 
\Omega_{\mathrm{par}}^1 (\mathcal{F})^{j_0-2}/\Omega_{\mathrm{par}}^1 (\mathcal{F})^{j_0-1}\ar[r] & 0. }}\end{aligned}$$ Hence, the induction hypothesis implies that the morphism $H^l (X, \Omega \otimes_{\mathcal{O}_X} {^c}\mathcal{F}^{n-1}) \rightarrow\mathbb{H}^{l+1} (X, \mathcal{K}^\bullet [\nabla^{j_0-1}_{\mathrm{par}}])$ induced by the composite of natural inclusions $$\begin{aligned} \Omega \otimes_{\mathcal{O}_X} {^c}\mathcal{F}^{n-1}[-1] \hookrightarrow\mathcal{K}^\bullet [\nabla_{\mathrm{par}}^{j_0}] \hookrightarrow\mathcal{K}^\bullet [\nabla_{\mathrm{par}}^{j_0-1}]\end{aligned}$$ is an isomorphism. This completes the proof of the claim. In particular, for every $l$, we obtain an isomorphism $$\begin{aligned} \label{Eq5423} H^l (X, \Omega \otimes_{\mathcal{O}_X} {^c}\mathcal{F}^{n-1}) \stackrel{\sim}{\rightarrow} \mathbb{H}^{l+1} (X, \mathcal{K}^\bullet [\nabla^1_{\mathrm{par}}]).\end{aligned}$$ We go back to the proof of assertion (ii). According to  [@Wak8 Proposition 4.55], we may assume that the $\mathrm{PGL}_n$-oper induced by $\mathscr{F}^\heartsuit$ via projectivization is normal in the sense of  [@Wak8 Definition 4.53]. Let us fix $i \in \{1, \cdots, r \}$, and moreover, choose a local function $t$ defining $\sigma_i \in X (k)$. Then, around the marked point $\sigma_i$, the log connection $\nabla$ may be described as $$\begin{aligned} \label{Eq998} \nabla = d + \frac{dt}{t} \otimes \begin{pmatrix} 0 & 0 & 0 & \cdots & 0 & a_n \\ 1 & 0 & 0 & \cdots & 0 & a_{n-1} \\ 0 & 1 & 0 & \cdots & 0 & a_{n-2} \\ 0 & 0 & 1 & \cdots & 0 & a_{n-3} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & a_1 \end{pmatrix}\end{aligned}$$ for some local functions $a_1, \cdots, a_{n}$ (cf.  [@Wak8 Remark 4.30]). Then, the assumption in (ii) together with  [@Wak8 Theorem 4.49] implies that $a_j \equiv 0$ mod $(t)$ for every $j = 1, \cdots, n$. This implies $\Omega^1_{\mathrm{par}}(\mathcal{F})^{0} = \Omega^1_{\mathrm{par}}(\mathcal{F})$. 
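For orientation, the normal form displayed above can be written out in the smallest case $n = 2$; this is a direct specialization of the matrix form of $\nabla$, stated here only as an illustration:

```latex
% Normal form of the log connection near the marked point \sigma_i for n = 2:
\nabla \;=\; d + \frac{dt}{t} \otimes
\begin{pmatrix}
0 & a_2 \\
1 & a_1
\end{pmatrix},
\qquad
a_1 \equiv a_2 \equiv 0 \ \mathrm{mod}\ (t),
```

so the residue of $\nabla$ at $\sigma_i$ is exactly the principal nilpotent $\bigl(\begin{smallmatrix} 0 & 0 \\ 1 & 0 \end{smallmatrix}\bigr)$, in agreement with the requirement that the residue at every marked point coincides with $q_{-1}$ when the radii are $[0]^{\times r}$.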
Hence, the diagram $$\begin{aligned} \label{Eq340b} \vcenter{\xymatrix@C=36pt@R=36pt{ 0\ar[r] & \Omega_{\mathrm{par}}^0 (\mathcal{F})^{1}\ar[r]^-{\mathrm{inclusion}} \ar[d]^-{\nabla^{1}_{\mathrm{par}}} & \Omega_{\mathrm{par}}^0 (\mathcal{F}) \ar[r]^-{\mathrm{quotient}} \ar[d]^-{\nabla_{\mathrm{par}}} & \mathcal{F}/\mathcal{F}^{1} \left(= \Omega_{\mathrm{par}}^0 (\mathcal{F})/ \Omega_{\mathrm{par}}^0 (\mathcal{F})^{1} \right) \ar[r] \ar[d] & 0 \\ 0\ar[r] &\Omega^1_{\mathrm{par}}(\mathcal{F})^{0}\ar[r]_-{\sim} &\Omega_{\mathrm{par}}^1 (\mathcal{F})\ar[r]_-{} & 0\ar[r] & 0 }}\end{aligned}$$ forms a short exact sequence of complexes, which induces the long exact sequence of $k$-vector spaces $$\begin{aligned} \label{Eq399} H^0 (X, \mathcal{F}/\mathcal{F}^1) &\rightarrow\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^1_{\mathrm{par}}]) \rightarrow\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla_{\mathrm{par}}]) \\ & \rightarrow H^1 (X, \mathcal{F}/\mathcal{F}^1) \rightarrow\mathbb{H}^2 (X, \mathcal{K}^\bullet [\nabla^1_{\mathrm{par}}]). \notag\end{aligned}$$ Since $\mathrm{deg}(\mathcal{F}/\mathcal{F}^1) < 0$, the equality $H^0 (X, \mathcal{F}/\mathcal{F}^1) = 0$ holds. On the other hand, by the claim proved above, the $k$-vector space $\mathbb{H}^2 (X, \mathcal{K}^\bullet [\nabla^1_{\mathrm{par}}]) \left(\cong H^1 (X, \Omega \otimes_{\mathcal{O}_X} {^c}\mathcal{F}^{n-1}) \cong H^0 (X, (\mathcal{F}^{n-1})^\vee)^\vee \right)$ is verified to be zero because of the fact that $\mathrm{deg}(\mathcal{F}^{n-1}) >0$. Hence, ([\[Eq399\]](#Eq399){reference-type="ref" reference="Eq399"}) together with ([\[Eq5423\]](#Eq5423){reference-type="ref" reference="Eq5423"}) for $l=0$ implies the exactness of ([\[Eq648\]](#Eq648){reference-type="ref" reference="Eq648"}), thus completing the proof of this proposition.
◻ ## Duality for symmetric products of opers {#SSa2} Next, let us describe the short exact sequences (and their dualities) discussed in the previous subsection for the $\mathrm{SL}_n$-oper arising from an $\mathrm{SL}_2$-oper/a $\mathrm{PGL}_2$-oper via change of structure group. If the underlying space is a compact Riemann surface, then the assertion corresponding to the following theorem can be found in  [@Gun Theorem 6] (for $n=3$) and  [@Wen § 4.4.4]. The case of the uniformizing $\mathrm{SL}_2$-opers on modular curves was already proved in  [@Sch Theorem 2.7]. (Note that, by the discussion at the beginning of § 2.4 in *loc. cit.*, the radius of the uniformizing $\mathrm{SL}_2$-oper at every marked point coincides with $[0]$; hence, the following Theorem [Theorem 19](#Th67){reference-type="ref" reference="Th67"}, (ii), can be regarded as a generalization of that result.) Also, see  [@Og2 Theorem 3.1, Remark 3.2] for a version in the context of elliptic $F$-$T$-crystals. **Theorem 19**. *Let $\mathscr{F}^\heartsuit_\odot := (\mathcal{F}_\odot, \nabla_\odot, \{ \mathcal{F}^j_\odot \}_{j=0}^2)$ be an $\mathrm{SL}_2$-oper on $\mathscr{X}$. Write $\varTheta$ for the theta characteristic associated to $\mathscr{F}^\heartsuit_\odot$ (cf. § [3.4](#SS03g61){reference-type="ref" reference="SS03g61"}). Also, denote by $\mathrm{Sym}^{n-1}\mathscr{F}_\odot$ the $(n-1)$-st symmetric product of the underlying flat $\mathrm{SL}_2$-bundle $\mathscr{F}_\odot$ of $\mathscr{F}^\heartsuit_\odot$. Then, the following assertions hold:* - *Let $\Box$ denote either the absence or presence of "$c$\".
Then, there exists a canonical short exact sequence of $k$-vector spaces $$\begin{aligned} \label{Eq8177} 0 \rightarrow H^0 (X, {^\Box}(\varTheta^{\otimes (n+1)})) \rightarrow H^1_{\mathrm{dR}} (X, {^\Box}(\mathrm{Sym}^{n-1}\mathscr{F}_\odot)) \rightarrow H^1 (X, {^\Box}(\varTheta^{\otimes (-n+1)})) \rightarrow 0.\end{aligned}$$ Moreover, there exists a canonical isomorphism $$\begin{aligned} \label{KKK98} H^1_{\mathrm{dR}} (X, {^c}(\mathrm{Sym}^{n-1}\mathscr{F}_\odot)) \stackrel{\sim}{\rightarrow} H^1_{\mathrm{dR}} (X, \mathrm{Sym}^{n-1}\mathscr{F}_\odot)^\vee\end{aligned}$$ via which the short exact sequences ([\[Eq8177\]](#Eq8177){reference-type="ref" reference="Eq8177"}) for $\Box$'s in both cases are compatible with each other.* - *Suppose further that $\mathscr{F}^\heartsuit_\odot$ is of radii $[0]^{\times r} \in \mathfrak{c}(k)^{\times r}$ (where $[0]^{\times r} := \emptyset$ if $r = 0$). Then, there exists a canonical short exact sequence $$\begin{aligned} \label{Eq810} 0 \rightarrow H^0 (X, \varTheta^{\otimes (n+1)}(-D)) \rightarrow H^1_{\mathrm{dR}, \mathrm{par}} (X, \mathrm{Sym}^{n-1}\mathscr{F}_\odot) \rightarrow H^1 (X, \varTheta^{\otimes (-n+1)}) \rightarrow 0. \end{aligned}$$ Moreover, there exists a canonical isomorphism $$\begin{aligned} \label{Ett38} H^1_{\mathrm{dR}, \mathrm{par}} (X, \mathrm{Sym}^{n-1}\mathscr{F}_\odot) \stackrel{\sim}{\rightarrow}H^1_{\mathrm{dR}, \mathrm{par}} (X, \mathrm{Sym}^{n-1}\mathscr{F}_\odot)^\vee\end{aligned}$$ via which the short exact sequence ([\[Eq810\]](#Eq810){reference-type="ref" reference="Eq810"}) is compatible with its dual.* *Proof.* We only prove assertion (ii) because the proof of assertion (i) is similar.
For simplicity, we shall write $(\mathcal{F}, \nabla) := (\mathrm{Sym}^{n-1}\mathcal{F}_\odot, \mathrm{Sym}^{n-1}\nabla_\odot)$. First, we shall consider the former assertion. Under the identifications $\mathcal{F}^{n-1} = \varTheta^{\otimes (n-1)}$ and $\mathcal{F}/ \mathcal{F}^1 = \varTheta^{\otimes (-n+1)}$ given by ([\[Eq901\]](#Eq901){reference-type="ref" reference="Eq901"}), the sequence ([\[Eq648\]](#Eq648){reference-type="ref" reference="Eq648"}) defined for the $\mathrm{SL}_n$-oper $\mathrm{Sym}^{n-1} \mathscr{F}_\odot^\heartsuit$ becomes a short exact sequence $$\begin{aligned} \label{Eq888} 0 \rightarrow H^0 (X, \Omega\otimes_{\mathcal{O}_X} \varTheta^{\otimes (n-1)}(-D)) \rightarrow H_{\mathrm{dR}, \mathrm{par}}^1 (X, \mathrm{Sym}^{n-1}\mathscr{F}_\odot) \rightarrow H^1 (X, \varTheta^{\otimes (-n+1)}) \rightarrow 0. \end{aligned}$$ Moreover, since ([\[Eq803\]](#Eq803){reference-type="ref" reference="Eq803"}) gives an identification $\Omega = \varTheta^{\otimes 2}$, we have $H^0 (X, \Omega \otimes_{\mathcal{O}_X} \varTheta^{\otimes (n-1)}(-D)) \cong H^0 (X, \varTheta^{\otimes (n+1)}(-D))$. Hence, the assertion follows immediately from Proposition [Proposition 18](#Prop15){reference-type="ref" reference="Prop15"}, (ii). Next, we shall prove the latter assertion of (ii). To do this, we may assume that the $\mathrm{SL}_2$-oper $\mathscr{F}_\odot^\heartsuit$ comes, via the isomorphism "$\Lambda_\vartheta^{\clubsuit \Rightarrow \diamondsuit}$\" obtained in  [@Wak8 § 4.6.6, (553)], from a $(2, \vartheta)$-projective connection (cf.  [@Wak8 Definition 4.37, (ii)]), where $\vartheta$ denotes a $2$-theta characteristic of $X^\mathrm{log}$ (cf.  [@Wak8 Definition 4.31, (i)]) determined by $\varTheta$ in the manner of  [@Wak8 Example 4.34]. Then, by considering the matrix form of the log connection $\nabla_\odot$ (cf.  [@Wak8 Remarks 4.30, 4.39]), the dual $\mathscr{F}_\odot^\vee$ of $\mathscr{F}_\odot$ is isomorphic to $\mathscr{F}_\odot$ itself. 
We fix an isomorphism $\mathscr{F}_\odot \stackrel{\sim}{\rightarrow}\mathscr{F}_\odot^\vee$, which induces an isomorphism $\varpi : (\mathcal{F}, \nabla) \stackrel{\sim}{\rightarrow}(\mathcal{F}^\vee, \nabla^\vee)$ (cf.  [@Can § 3]). Now, let $U$ and $\iota$ be as in § [4.3](#SS055g){reference-type="ref" reference="SS055g"}, and consider the $\mathcal{O}_X$-bilinear pairing $$\begin{aligned} \widetilde{\varpi}^{\blacktriangleright} : \iota_* (\iota^* (\mathcal{F})) \times \iota_* (\iota^* (\Omega \otimes_{\mathcal{O}_X} \mathcal{F})) \rightarrow\iota_* (\iota^* (\Omega))\end{aligned}$$ arising from the isomorphism $\varpi$. In the following discussion, we shall prove (by an argument entirely similar to the proof of Proposition [Proposition 8](#Prop99){reference-type="ref" reference="Prop99"}) the claim that *$\widetilde{\varpi}^{\blacktriangleright}$ restricts to a nondegenerate $\mathcal{O}_X$-bilinear pairing* $$\begin{aligned} \varpi^{\blacktriangleright} : \Omega_\mathrm{par}^{0, c}(\mathcal{F}) \times \Omega_\mathrm{par}^1 (\mathcal{F}) \rightarrow{^c}\Omega.\end{aligned}$$ Since the restriction of $\widetilde{\varpi}^{\blacktriangleright}$ to $U$ is nondegenerate, the problem is reduced to examining the pairing $\widetilde{\varpi}^{\blacktriangleright}$ around the marked points. Let us choose $i \in \{1, \cdots, r \}$ and choose a local function $t \in \mathcal{O}_X$ defining the closed subscheme $\mathrm{Im}(\sigma_i) \subseteq X$. Then, the formal neighborhood $Q$ of $\sigma_i$ is naturally isomorphic to $\mathrm{Spec}(k[\![t]\!])$, and we have ${^c}\Omega |_Q = k[\![t]\!] dt$. After fixing a trivialization $\varTheta |_Q = k[\![t]\!]$, we can associate, to the local function $t$, a canonical identification $\mathcal{F}|_Q = k[\![t]\!]^{\oplus n}$ (cf.  [@Wak8 Remark 4.30]); it restricts to $\mathcal{F}^j |_Q = k[\![t]\!]^{\oplus n -j} \oplus \{ 0 \}^{\oplus j} \left( \subseteq k[\![t]\!]^{\oplus n} \right)$. 
This identification gives $$\begin{aligned} \label{Egio} \iota_* (\iota^* (\mathcal{F})) |_{Q} = k (\!(t)\!)^{\oplus n} \ \left(\text{resp.,} \ \iota_* (\iota^* (\Omega \otimes_{\mathcal{O}_X} \mathcal{F})) |_{Q} = (k (\!(t)\!) dt)^{\oplus n} \right). \end{aligned}$$ Moreover, by the equality $\mu_i^{\mathrm{Sym}^{n-1}\nabla_\odot} = q_{-1}$, ([\[Egio\]](#Egio){reference-type="ref" reference="Egio"}) restricts to an identification $$\begin{aligned} \label{Eqq291g} \Omega_\mathrm{par}^{0, c} (\mathcal{F}) |_Q = (k[\![t]\!]t)^{\oplus (n-1)} \oplus k[\![t]\!] \ \left(\text{resp.,} \ \Omega_\mathrm{par}^{1} (\mathcal{F}) |_Q = k[\![t]\!]dt \oplus \left(k[\![t]\!] \frac{dt}{t}\right)^{\oplus (n-1)}\right). \end{aligned}$$ According to this description, each section $v$ (resp., $u$) of $\Omega_\mathrm{par}^{0, c} (\mathcal{F}) |_Q$ (resp., $\Omega_\mathrm{par}^{1} (\mathcal{F}) |_Q$) may be described as the sum $v = t \cdot \sum_{i=1}^{n-1} v_i + v_n$ (resp., $u = u_1 + \frac{1}{t} \cdot \sum_{i=2}^n u_i$) for some $v_1, \cdots, v_n \in k[\![t]\!]$ (resp., $u_1, \cdots, u_n \in k[\![t]\!] dt$). Then, up to a multiplicative constant factor, the following equalities hold: $$\begin{aligned} \widetilde{\varpi}^\blacktriangleright (v, u) &= \widetilde{\varpi}^\blacktriangleright \left(t \cdot \sum_{i=1}^{n-1} v_i, \frac{1}{t} \cdot \sum_{i=2}^n u_i \right) + \widetilde{\varpi}^\blacktriangleright (v_n, u_1) \\ &= \widetilde{\varpi}^\blacktriangleright \left(\sum_{i=1}^{n-1} v_i, \sum_{i=2}^n u_i\right) + \widetilde{\varpi}^\blacktriangleright (v_n, u_1) \notag \\ &= \sum_{i=1}^n\widetilde{\varpi}^\blacktriangleright (v_i, u_{n+1 - i}). \notag\end{aligned}$$ It follows that $\widetilde{\varpi}^\blacktriangleright (v, u)$ lies in $k[\![t]\!] dt \left(= {^c}\Omega |_Q\right)$, and that the restriction of $\widetilde{\varpi}^\blacktriangleright$ over $Q$ is nondegenerate. This completes the proof of the claim. 
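To illustrate the cancellation of poles, consider the smallest case $n = 2$ (a sketch using the local descriptions above, and up to the same multiplicative constant factor): here $v = (t\,v_1, v_2)$ and $u = (u_1, \frac{1}{t}u_2)$, and

```latex
\widetilde{\varpi}^{\blacktriangleright}(v, u)
  \;=\; (t\,v_1)\cdot\frac{u_2}{t} \;+\; v_2\, u_1
  \;=\; v_1 u_2 \;+\; v_2 u_1
  \;\in\; k[\![t]\!]\,dt \;\left(= {^c}\Omega|_Q\right).
```

The zero of $v$ in its first entry cancels the pole of $u$ in its second entry, and the residual pairing is visibly nondegenerate over $k[\![t]\!]$.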
Just as in the case of ([\[Eqq1\]](#Eqq1){reference-type="ref" reference="Eqq1"}), the pairing $\varpi^\blacktriangleright$ together with the natural nondegenerate pairing $\varpi^\triangleright : \Omega_\mathrm{par}^{1, c}(\mathcal{F}) \times \Omega_\mathrm{par}^{0}(\mathcal{F}) \rightarrow{^c}\Omega$ arising from $\varpi$ yields a $k$-bilinear pairing $$\begin{aligned} \mathbb{H}^1 (X, \mathcal{K}^\bullet [{^c}\nabla_\mathrm{par}]) \times \mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla_\mathrm{par}]) \rightarrow k.\end{aligned}$$ The induced morphism $\varpi_H : \mathbb{H}^1 (X, \mathcal{K}^\bullet [{^c}\nabla_\mathrm{par}]) \rightarrow\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla_\mathrm{par}])^\vee$ fits into the morphism of exact sequences $$\begin{aligned} \label{Eqqwer} \vcenter{\xymatrix@C=11pt@R=36pt{ H^0 (X, \Omega_\mathrm{par}^{0, c}(\mathcal{F})) \ar[r] \ar[d]^-{\wr} & H^0 (X, \Omega_\mathrm{par}^{1, c} (\mathcal{F})) \ar[r] \ar[d]^-{\wr} & \mathbb{H}^1 (X, \mathcal{K}^\bullet [{^c}\nabla_\mathrm{par}]) \ar[d]^-{\varpi_{H}}\ar[r] & H^1 (X, \Omega_\mathrm{par}^{0, c}(\mathcal{F})) \ar[d]^-{\wr} \ar[r] & H^1 (X, \Omega_\mathrm{par}^{1, c} (\mathcal{F})) \ar[d]^-{\wr} \\ H^1 (X, \Omega_\mathrm{par}^1 (\mathcal{F}))^\vee \ar[r] &H^1 (X, \Omega_\mathrm{par}^0 (\mathcal{F}))^\vee \ar[r]& \mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla_\mathrm{par}])^\vee \ar[r] & H^0 (X, \Omega_\mathrm{par}^1 (\mathcal{F}))^\vee \ar[r] & H^0 (X, \Omega_\mathrm{par}^0 (\mathcal{F}))^\vee }} \end{aligned}$$ (cf. ([\[Eqq301\]](#Eqq301){reference-type="ref" reference="Eqq301"}) for a similar diagram), where the horizontal sequences arise from the Hodge to de Rham spectral sequences involved, and the vertical arrows except for the middle one arise from the pairings $\varpi^\blacktriangleright$ and $\varpi^\triangleright$. By applying the five lemma to this diagram, we see that the middle vertical arrow $\varpi_H$ is an isomorphism.
Under the equality $\mathbb{H}^1 (X, \mathcal{K}^\bullet [{^c}\nabla_\mathrm{par}]) = H^1_{\mathrm{dR}, \mathrm{par}} (X, \mathrm{Sym}^{n-1}\mathscr{F}_\odot) \left(= \mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla_\mathrm{par}]) \right)$ resulting from Proposition [Proposition 6](#Prp7){reference-type="ref" reference="Prp7"}, $\varpi_H$ defines an isomorphism $$\begin{aligned} \label{EHju} H^1_{\mathrm{dR}, \mathrm{par}} (X, \mathrm{Sym}^{n-1}\mathscr{F}_\odot) \stackrel{\sim}{\rightarrow}H^1_{\mathrm{dR}, \mathrm{par}} (X, \mathrm{Sym}^{n-1}\mathscr{F}_\odot)^\vee.\end{aligned}$$ Moreover, by the various definitions involved and the local description of $\widetilde{\varpi}^\blacktriangleright$ described above, we see that the short exact sequence ([\[Eq810\]](#Eq810){reference-type="ref" reference="Eq810"}) is compatible with its dual via ([\[EHju\]](#EHju){reference-type="ref" reference="EHju"}). This completes the proof of this theorem. ◻ Next, if $n = 2l+1$ for some positive integer $l$, then the following assertion holds. (We leave to the reader the formulation of the relevant duality as asserted in the above theorem.) **Theorem 20**. *Let $\mathscr{E}^\spadesuit_\odot$ be a $G^\odot$-oper on $\mathscr{X}$. Recall from the discussion in § [3.5](#SS03f61){reference-type="ref" reference="SS03f61"} that the $2l$-th symmetric product $\mathrm{Sym}^{2l}\mathscr{E}_\odot^\spadesuit$ of $\mathscr{E}^\spadesuit_\odot$ can be obtained (cf. ([\[Eq9d00\]](#Eq9d00){reference-type="ref" reference="Eq9d00"})).
Then, the following assertions hold:* - *There exists a canonical short exact sequence of $k$-vector spaces $$\begin{aligned} \label{Eq817} 0 \longrightarrow H^0 (X, \Omega^{\otimes (l+1)}) \longrightarrow H^1_{\mathrm{dR}} (X, \mathrm{Sym}^{2l}\mathscr{E}_\odot) \longrightarrow H^1 (X, \Omega^{\otimes (-l)}) \longrightarrow 0,\end{aligned}$$ where $\mathrm{Sym}^{2l}\mathscr{E}_\odot$ denotes the underlying flat $\mathrm{SL}_{2l+1}$-bundle of $\mathrm{Sym}^{2l}\mathscr{E}^\spadesuit_\odot$.* - *Suppose further that $\mathscr{E}^\spadesuit_\odot$ is of radii $[0]^{\times r} \in \mathfrak{c}(k)^{\times r}$ (where $[0]^{\times r} := \emptyset$ if $r = 0$). Then, there exists a canonical short exact sequence $$\begin{aligned} \label{Eq810F} 0 \longrightarrow H^0 (X, \Omega^{\otimes (l+1)}(-D)) \longrightarrow H^1_{\mathrm{dR}, \mathrm{par}} (X, \mathrm{Sym}^{2l}\mathscr{E}_\odot) \longrightarrow H^1 (X, \Omega^{\otimes (-l)}) \longrightarrow 0. \end{aligned}$$* *Finally, when $\mathscr{E}^\spadesuit_\odot$ comes from an $\mathrm{SL}_2$-oper via projectivization, these short exact sequences coincide with the corresponding ones in the statement of Theorem [Theorem 19](#Th67){reference-type="ref" reference="Th67"}.* *Proof.* Assertion (i) (resp., (ii)) follows from Proposition [Proposition 18](#Prop15){reference-type="ref" reference="Prop15"}, (i) (resp., (ii)) and an argument similar to the proof of Theorem [Theorem 19](#Th67){reference-type="ref" reference="Th67"}, (i) (resp., (ii)). ◻ ## The irreducible decomposition of the adjoint bundle {#SSa4} Let $\mathscr{E}^\spadesuit_\odot := (\mathcal{E}_{B^\odot}, \nabla_\odot)$ be a $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$-normal $\mathrm{PGL}_2$-oper.
We shall write $\mathscr{E}^\spadesuit \left(= (\mathcal{E}_B, \nabla) \right) :=\iota_{G*}(\mathscr{E}^\spadesuit_\odot)$, which is a $G$-oper. Also, write $\mathcal{E}_\odot := \mathcal{E}_{B^\odot} \times^{B^\odot} G^\odot$ and $\mathcal{E}:= \mathcal{E}_B \times^B G$. In what follows, we consider a decomposition of the flat adjoint bundle $(\mathfrak{g}_\mathcal{E}, \nabla^\mathrm{ad})$ according to the irreducible decomposition of $\mathfrak{g}$, regarded as an $\mathfrak{s}\mathfrak{l}_2$-module via $\iota_\mathfrak{g}$. Let us fix an integer $l$ with $- \mathrm{rk}(\mathfrak{g}) \leq l \leq \mathrm{rk} (\mathfrak{g})$. Under the natural identification $\Omega^{\otimes l}\otimes_k \mathfrak{g}_l^{\mathrm{ad}(q_1)}= \mathcal{V}_{G, l} \left(\subseteq \mathfrak{g}_\mathcal{E}\right)$, each nonzero element $a \in \mathfrak{g}_l^{\mathrm{ad}(q_1)}$ determines an inclusion $\gamma_a : \Omega^{\otimes l} \hookrightarrow\mathfrak{g}_\mathcal{E}$. Note that, for each $j = 0, \cdots, 2l+1$, the $k$-vector subspace $$\begin{aligned} H_a^j := \sum_{s=0}^{2l-j} k \cdot \mathrm{ad}(q_{-1})^s(a) \left(= \bigoplus_{s=0}^{2l-j} k \cdot \mathrm{ad}(q_{-1})^s(a) \right)\end{aligned}$$ of $\mathfrak{g}$ is closed under the adjoint $B^\odot$-action via $\iota_B$, and that $H_a^0$ defines an $\mathfrak{s}\mathfrak{l}_2$-submodule of $\mathfrak{g}$. Hence, $H_a^0$ determines a rank $(2l+1)$ flat subbundle $(\mathcal{H}_a, \nabla_a)$ of $(\mathfrak{g}_\mathcal{E}, \nabla^\mathrm{ad})$, and the subspaces $\{ H_a^{j}\}_{j=0}^{2l+1}$ of $H_a^0$ determine a decreasing filtration $\{ \mathcal{H}_a^j \}_{j=0}^{2l+1}$ on $\mathcal{H}_a$. One may verify that the resulting collection $$\begin{aligned} \label{Eqq100} \mathscr{H}_a^\heartsuit := (\mathcal{H}_a, \nabla_a, \{ \mathcal{H}_a^j \}_{j=0}^{2l+1}) \end{aligned}$$ forms an $\mathrm{SL}_{2l+1}$-oper on $\mathscr{X}$.
Next, let $\mathcal{D}$ (resp., $\mathcal{D}_{< m}$ for each nonnegative integer $m$) denote the sheaf of logarithmic differential operators (resp., logarithmic differential operators of order $< m$) of $X^\mathrm{log}$ (cf.  [@Wak8 § 4.2.1]). Then, it follows from the definition of an $\mathrm{SL}_{2l+1}$-oper that the composite $$\begin{aligned} \widetilde{\gamma}_a : \mathcal{D}_{< 2l+1} \otimes_{\mathcal{O}_X} \Omega^{\otimes l} \longrightarrow \mathcal{D}\otimes_{\mathcal{O}_X} \mathcal{H}_a^0 \longrightarrow\mathcal{H}_a \hspace{20mm} \\ \left(\text{resp.,} \ \widetilde{\gamma}_{\mathrm{sym}} : \mathcal{D}_{< 2l+1} \otimes_{\mathcal{O}_X}\Omega^{\otimes l} \longrightarrow \mathcal{D}\otimes_{\mathcal{O}_X} \mathrm{Sym}^{2l}\mathcal{E}_\odot\longrightarrow\mathrm{Sym}^{2l}\mathcal{E}_\odot \notag \right) \end{aligned}$$ is an isomorphism, where the first arrow denotes the injection arising from both the natural inclusion $\mathcal{D}_{< 2l+1} \hookrightarrow\mathcal{D}$ and $\gamma_a$ (resp., the injection $\Omega^{\otimes l} \hookrightarrow\mathrm{Sym}^{2l}\mathcal{E}_\odot$ deduced from ([\[Eq913\]](#Eq913){reference-type="ref" reference="Eq913"})), and the second arrow arises from the $\mathcal{D}$-module structure on $\mathcal{H}_a^0$ (resp., $\mathrm{Sym}^{2l}\mathcal{E}_\odot$) determined by $\nabla_a$ (resp., $\mathrm{Sym}^{2l}\nabla_\odot$). The composite $$\begin{aligned} \mathrm{Sym}^{2l} \mathcal{E}_\odot \xrightarrow{\widetilde{\gamma}_{\mathrm{sym}}^{-1}} \mathcal{D}_{< 2l+1} \otimes_{\mathcal{O}_X} \Omega^{\otimes l} \xrightarrow{\widetilde{\gamma}_a} \mathcal{H}_a \xrightarrow{\mathrm{inclusion}} \mathfrak{g}_\mathcal{E} \end{aligned}$$ is compatible with the respective connections, i.e., $\mathrm{Sym}^{2l}\nabla_\odot$ and $\nabla^\mathrm{ad}$. That is, we have obtained an injective morphism of flat bundles $\xi_a : \mathrm{Sym}^{2l} \mathscr{E}_\odot \hookrightarrow(\mathfrak{g}_\mathcal{E}, \nabla^\mathrm{ad})$.
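As a dimension sanity check for this decomposition (our illustration, assuming $\mathrm{char}(k) \nmid 3$ so that $\mathfrak{p}\mathfrak{g}\mathfrak{l}_3 \cong \mathfrak{s}\mathfrak{l}_3$): for $G = \mathrm{PGL}_3$ the rank is $2$, the exponents are $1, 2$, each $\mathfrak{g}_l^{\mathrm{ad}(q_1)}$ is one-dimensional, and $\mathfrak{g}$ decomposes under the principal $\mathfrak{s}\mathfrak{l}_2$ as

```latex
\mathfrak{s}\mathfrak{l}_3 \;\cong\; \mathrm{Sym}^{2}(k^{\oplus 2}) \,\oplus\, \mathrm{Sym}^{4}(k^{\oplus 2}),
\qquad
\dim_k \mathfrak{s}\mathfrak{l}_3 \;=\; 3 + 5 \;=\; 8.
```

More generally, $\dim_k \mathfrak{s}\mathfrak{l}_n = \sum_{l=1}^{n-1} (2l+1) = n^2 - 1$, matching the ranks $2l+1$ of the subbundles $\mathcal{H}_a$.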
The morphism of flat bundles $$\begin{aligned} \xi_l : \mathrm{Sym}^{2l} \mathscr{E}_\odot \otimes_k \mathfrak{g}_l^{\mathrm{ad}(q_1)} \hookrightarrow(\mathfrak{g}_\mathcal{E}, \nabla^\mathrm{ad})\end{aligned}$$ given by $v \otimes a \mapsto \xi_a (v)$ for $a \in \mathfrak{g}_l^{\mathrm{ad}(q_1)}$ and $v \in \mathrm{Sym}^{2l}\mathcal{E}_\odot$ is well-defined, and the various $\xi_l$'s define an isomorphism $$\begin{aligned} \label{Eq160} \bigoplus_{l=1}^{\mathrm{rk}(\mathfrak{g})} \xi_l : \bigoplus_{l=1}^{\mathrm{rk}(\mathfrak{g})} \mathrm{Sym}^{2l}\mathscr{E}_\odot \otimes_k \mathfrak{g}^{\mathrm{ad}(q_1)}_l \stackrel{\sim}{\rightarrow} (\mathfrak{g}_\mathcal{E}, \nabla^\mathrm{ad}). \end{aligned}$$ By applying the $1$-st hypercohomology functor to this isomorphism, we obtain an isomorphism of $k$-vector spaces $$\begin{aligned} \label{Eq834} \bigoplus_{l=1}^{\mathrm{rk}(\mathfrak{g})} H^1_{\mathrm{dR}} (X, \mathrm{Sym}^{2l}\mathscr{E}_\odot) \otimes_k \mathfrak{g}^{\mathrm{ad}(q_1)}_l \stackrel{\sim}{\rightarrow}\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}]).\end{aligned}$$ This isomorphism fits into the isomorphism of short exact sequences $$\begin{aligned} \label{Eq16tt} \vcenter{\xymatrix@C=11pt@R=36pt{ 0 \ar[r] & {\displaystyle \bigoplus_{l=1}^{\mathrm{rk}(\mathfrak{g})}} H^0 (X, \Omega^{\otimes (l+1)}) \ar[r] \ar[d]^-{\wr} & {\displaystyle \bigoplus_{l=1}^{\mathrm{rk}(\mathfrak{g})}} H^1_{\mathrm{dR}} (X, \mathrm{Sym}^{2l}\mathscr{E}_\odot) \otimes_k \mathfrak{g}_l^{\mathrm{ad}(q_1)} \ar[r] \ar[d]_-{\wr}^-{(\ref{Eq834})} & {\displaystyle \bigoplus_{l=1}^{\mathrm{rk}(\mathfrak{g})}} H^1 (X, \Omega^{\otimes (-l)}) \ar[r] \ar[d]^-{\wr} & 0 \\ 0 \ar[r] & H^0 (X, \mathcal{V}_G)\ar[r]_-{{'}e_\sharp} & \mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}]) \ar[r]_-{{'}e_\flat} & H^1 (X, \Omega \otimes_{\mathcal{O}_X} \mathcal{V}_G^\vee) \ar[r] & 0
}}\end{aligned}$$ (cf. ([\[Eq817\]](#Eq817){reference-type="ref" reference="Eq817"}) for the upper horizontal sequence and ([\[HDeq1\]](#HDeq1){reference-type="ref" reference="HDeq1"}) for the lower horizontal sequence), where the left-hand and right-hand vertical arrows are the isomorphisms induced from $\mathcal{V}_G \cong \bigoplus_{l=1}^{\mathrm{rk}(\mathfrak{g})} \Omega^{\otimes (l+1)} \otimes_k \mathfrak{g}^{\mathrm{ad}(q_1)}_l$ (cf. ([\[QR020\]](#QR020){reference-type="ref" reference="QR020"})). Moreover, suppose that $\mathscr{E}^\spadesuit_\odot$ is of radii $[0]^{\times r}$ (where $[0]^{\times r} := \emptyset$ if $r = 0$). Then, ([\[Eq160\]](#Eq160){reference-type="ref" reference="Eq160"}) yields an isomorphism $$\begin{aligned} \label{Eq83f4} \bigoplus_{l=1}^{\mathrm{rk}(\mathfrak{g})} H^1_{\mathrm{dR}, \mathrm{par}} (X, \mathrm{Sym}^{2l}\mathscr{E}_\odot) \otimes_k \mathfrak{g}^{\mathrm{ad}(q_1)}_l \stackrel{\sim}{\rightarrow}\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla_\mathrm{par}^\mathrm{ad}]),\end{aligned}$$ which fits into the isomorphism of short exact sequences $$\begin{aligned} \label{Eq16ttt} \vcenter{\xymatrix@C=11pt@R=36pt{ 0 \ar[r] & {\displaystyle \bigoplus_{l=1}^{\mathrm{rk}(\mathfrak{g})}} H^0 (X, \Omega^{\otimes (l+1)}(-D)) \ar[r] \ar[d]^-{\wr} & {\displaystyle \bigoplus_{l=1}^{\mathrm{rk}(\mathfrak{g})}} H^1_{\mathrm{dR}, \mathrm{par}} (X, \mathrm{Sym}^{2l}\mathscr{E}_\odot) \otimes_k \mathfrak{g}_l^{\mathrm{ad}(q_1)} \ar[r] \ar[d]_-{\wr}^-{(\ref{Eq83f4})} & {\displaystyle \bigoplus_{l=1}^{\mathrm{rk}(\mathfrak{g})}} H^1 (X, \Omega^{\otimes (-l)}) \ar[r] \ar[d]^-{\wr} & 0 \\ 0 \ar[r] & H^0 (X, {^c}\mathcal{V}_G)\ar[r]_-{{'}e_{\sharp, \mathrm{par}}} & \mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla_\mathrm{par}^\mathrm{ad}]) \ar[r]_-{{'}e_{\flat, \mathrm{par}} } & H^1 (X, \Omega \otimes_{\mathcal{O}_X} \mathcal{V}_G^\vee) \ar[r] & 0 }} \end{aligned}$$ (cf.
([\[Eq810\]](#Eq810){reference-type="ref" reference="Eq810"}) for the upper horizontal sequence and ([\[Eqr\]](#Eqr){reference-type="ref" reference="Eqr"}) for the lower horizontal sequence). Once a canonical self-duality for $H^1_{\mathrm{dR}, \mathrm{par}} (X, \mathrm{Sym}^{2l}\mathscr{E}_\odot)$ has been constructed as asserted in Theorem [Theorem 19](#Th67){reference-type="ref" reference="Th67"}, (ii), this duality is verified to be compatible with ([\[Eqqw9\]](#Eqqw9){reference-type="ref" reference="Eqqw9"}) via ([\[Eq83f4\]](#Eq83f4){reference-type="ref" reference="Eq83f4"}). # Dormant opers and their deformations This section discusses opers in positive characteristic, in particular dormant opers (= opers with vanishing $p$-curvature). The study of the moduli space of dormant opers was substantially developed in  [@Jo14],  [@JP],  [@Wak],  [@Wak2], and  [@Wak8]. One of the important features of this moduli space is the generic étaleness (cf.  [@Wak8 Theorem G]). In other words, if the underlying curve is sufficiently general (and $G$ satisfies a certain additional assumption), then the moduli spaces of opers and $p$-flat bundles intersect transversally. This fact will be essential in the proof of our main theorem, as discussed in the next section. In the rest of the present paper, we assume that $\mathrm{char}(k)= p> 0$ for a prime $p$, and that either "$p >2h_G$\" or "$G = \mathrm{PGL}_n$ with $1 < n < p$\" is fulfilled. ## Dormant opers on pointed stable curves {#SS02} Let $(\mathcal{E}, \nabla)$ be a flat $G$-bundle on $\mathscr{X}$. Recall (cf.
[@Wak8 Definition 3.8]) that the $p$-curvature of $\nabla$ is the $\mathcal{O}_X$-linear morphism ${^p}\psi^\nabla : \mathcal{T}^{\otimes p} \rightarrow\widetilde{\mathcal{T}}_{\mathcal{E}^\mathrm{log}}$ uniquely determined by the condition that any local section in $\mathcal{T}^{\otimes p}$ of the form $\partial^{\otimes p}$ for some $\partial \in \mathcal{T}$ is mapped to $\nabla (\partial)^{[p]}- \nabla (\partial^{[p]})$, where $(-)^{[p]}$ denotes the result of applying the $p$-power operations (cf.  [@Wak8 § 3.2]). In the case where $G$ is a matrix group, this morphism agrees with the classical definition of $p$-curvature described in terms of vector bundles (cf. e.g.,  [@Kal § 5]). By a **$p$-flat $G$-bundle**, we mean a flat $G$-bundle $(\mathcal{E}, \nabla)$ such that $\nabla$ has vanishing $p$-curvature. We shall denote by $$\begin{aligned} \label{Eq220} \mathcal{C}onn_G^{\psi = 0} \ \left(\text{resp.,} \ \mathcal{C}onn_{G, \mu}^{\psi = 0} \ \text{for each $\mu \in \mathfrak{g}^{\times r}$} \right)\end{aligned}$$ the closed substack of $\mathcal{C}onn_G$ (resp., $\mathcal{C}onn_{G, \mu}$) classifying $p$-flat $G$-bundles. Next, a $G$-oper $\mathscr{E}^\spadesuit := (\mathcal{E}_B, \nabla)$ is called **dormant** if $\nabla$ has vanishing $p$-curvature (cf.  [@Wak8 Definition 3.15]). Then, we obtain a closed subscheme $$\begin{aligned} \label{Eq212} \mathcal{O}p^{^\mathrm{Zzz...}}_{G} \ \left(\text{resp.}, \ \mathcal{O}p^{^\mathrm{Zzz...}}_{G, \rho} \right)\end{aligned}$$ of $\mathcal{O}p_{G}$ (resp., $\mathcal{O}p_{G, \rho}$ for each $\rho \in \mathfrak{c}(k)^{\times r}$) classifying dormant $G$-opers.
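To make the classical matrix description of the $p$-curvature concrete, here is a minimal computational sketch (our illustration, not taken from the cited references): on the trivial rank-$2$ bundle over $\mathbb{A}^1$ in characteristic $p$ with connection $\nabla = d/dt + A$ for a *constant* matrix $A$, the operators $d/dt$ and $A$ commute and $(d/dt)^p = 0$, so the $p$-curvature sends $(d/dt)^{\otimes p}$ to $(d/dt + A)^p = A^p$, which indeed acts $\mathcal{O}_X$-linearly; in particular, such a $\nabla$ is $p$-flat precisely when $A^p = 0$.

```python
# Sketch: p-curvature of nabla = d/dt + A (A a constant 2x2 matrix) over F_p.
# Since d/dt and A commute and (d/dt)^p = 0 on F_p[t], applying nabla p times
# agrees with multiplication by the matrix power A^p. We verify this for p = 5.

p = 5
A = [[0, 1], [1, 0]]  # constant connection matrix over F_5

def deriv(poly):
    """Formal derivative of a polynomial given as a coefficient list mod p."""
    return [(i * c) % p for i, c in enumerate(poly)][1:] or [0]

def add(f, g):
    n = max(len(f), len(g))
    f, g = f + [0] * (n - len(f)), g + [0] * (n - len(g))
    return [(a + b) % p for a, b in zip(f, g)]

def scal(c, f):
    return [(c * x) % p for x in f]

def nabla(v):
    """Apply d/dt + A to a vector of polynomials v = (v_0, v_1)."""
    return [add(deriv(v[i]), add(scal(A[i][0], v[0]), scal(A[i][1], v[1])))
            for i in range(2)]

def matvec(M, v):
    return [add(scal(M[i][0], v[0]), scal(M[i][1], v[1])) for i in range(2)]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) % p for j in range(2)]
            for i in range(2)]

def psi(v):
    """p-curvature applied to v: iterate nabla p times (here nabla((d/dt)^[p]) = 0)."""
    for _ in range(p):
        v = nabla(v)
    return v

Ap = A
for _ in range(p - 1):
    Ap = matmul(Ap, A)  # matrix power A^p mod p

def norm(v):
    # strip trailing zero coefficients before comparing polynomials
    return [f[:max((i + 1 for i, c in enumerate(f) if c), default=0)] or [0] for f in v]

for v in ([[1], [0]], [[0, 1], [0]], [[2, 3], [0, 0, 1]]):
    assert norm(psi(v)) == norm(matvec(Ap, v))  # psi acts O_X-linearly as A^p
print("p-curvature equals A^p:", Ap)
```

Here $A^2$ is the identity, so $A^5 = A$ and the $p$-curvature is nonzero: this $\nabla$ is not $p$-flat, whereas taking $A$ nilpotent would give a $p$-flat (dormant-type) example.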
The following equality between substacks of $\mathcal{C}onn_G$ (resp., $\mathcal{C}onn_{G, \widetilde{\rho}}$) holds: $$\begin{aligned} \label{Eq211} \mathcal{O}p_G^{^\mathrm{Zzz...}} = \mathcal{O}p_G \cap \mathcal{C}onn_G^{\psi = 0} \ \left(\text{resp.,} \ \mathcal{O}p_{G, \rho}^{^\mathrm{Zzz...}} = \mathcal{O}p_{G, \rho} \cap \mathcal{C}onn_{G, \widetilde{\rho}}^{\psi = 0} \right).\end{aligned}$$ Since $G$ has been assumed to be of adjoint type, we can apply  [@Wak8 Theorem C] to see that $\mathcal{O}p^{^\mathrm{Zzz...}}_G$ (resp., $\mathcal{O}p^{^\mathrm{Zzz...}}_{G, \rho}$) is a nonempty finite $k$-scheme (resp., a possibly empty finite $k$-scheme). Now, let $\mathscr{E}^\spadesuit_\odot := (\mathcal{E}_{B^\odot}, \nabla_\odot)$ be a $G^\odot$-oper on $\mathscr{X}$. The log connection $\iota_{G*}(\nabla_\odot)$ of the $G$-bundle $\mathcal{E}_G := \mathcal{E}_{B^\odot} \times^{B^\odot, \iota_B}G$ (cf. ([\[associated\]](#associated){reference-type="ref" reference="associated"})) satisfies the equality ${^p}\psi^{\iota_{G*}(\nabla_\odot)} = (d \iota_G)_\mathcal{E}\circ {^p}\psi^{\nabla_\odot}$, where $(d \iota_{G})_\mathcal{E}$ denotes the natural injection $\widetilde{\mathcal{T}}_{\mathcal{E}^\mathrm{log}_{G^\odot}} \hookrightarrow\widetilde{\mathcal{T}}_{\mathcal{E}_{G}^\mathrm{log}}$ (cf.  [@Wak8 § 3.3.2]). In particular, $\mathscr{E}^\spadesuit_{\odot}$ is dormant if and only if the associated $G$-oper $\iota_{G*}(\mathscr{E}^\spadesuit_\odot)$ is dormant.
It follows that ([\[Eq405\]](#Eq405){reference-type="ref" reference="Eq405"}) (resp., ([\[Eq281\]](#Eq281){reference-type="ref" reference="Eq281"})) restricts to a closed immersion $$\begin{aligned} \label{E2} \mathcal{O}p^{^\mathrm{Zzz...}}_{G^\odot} \hookrightarrow\mathcal{O}p^{^\mathrm{Zzz...}}_{G} \ \left(\text{resp.,} \ \mathcal{O}p^{^\mathrm{Zzz...}}_{G^\odot, \rho_\odot} \hookrightarrow\mathcal{O}p^{^\mathrm{Zzz...}}_{G, \iota_\mathfrak{c}(\rho_\odot)} \right).\end{aligned}$$ ## Example: a dormant $\mathrm{PGL}_2$-oper on a Shimura curve {#SS0123} In this subsection, we shall illustrate an example of a dormant $\mathrm{PGL}_2$-oper on a Shimura curve resulting from the discussions by H. Reimann (cf.  [@Rei]) and M. Sheng, J. Zhang, K. Zuo (cf.  [@SZZ]). See  [@Wak10] for other examples of dormant $\mathrm{PGL}_2$-opers, constructed by using the Gauss maps for Fermat curves. Let $F$ be a totally real number field of degree $\geq 2$ in which $p$ is unramified. Also, let $A$ be a quaternion algebra over $F$ which is split at one infinite place $\tau : F \hookrightarrow\mathbb{R}$ and ramified at all remaining infinite places. After fixing an embedding $\overline{\mathbb{Q}} \hookrightarrow\overline{\mathbb{Q}}_p$, one obtains a bijection between $\mathrm{Hom}_\mathbb{Q}(F, \overline{\mathbb{Q}})$ and $\coprod_{i=1}^r \mathrm{Hom}_{\mathbb{Q}_p} (F_{\mathfrak{p}_i}, \overline{\mathbb{Q}}_p)$, where $\mathfrak{p}_1 \left(=:\mathfrak{p}\right), \mathfrak{p}_2 , \mathfrak{p}_3, \cdots, \mathfrak{p}_r$ are all the primes in $F$ dividing $p$, indexed so that $\tau$ lies in $\mathrm{Hom}_{\mathbb{Q}_p} (F_{\mathfrak{p}}, \overline{\mathbb{Q}}_p)$.
Also, we shall choose a pair $(L, K)$ consisting of a totally imaginary quadratic extension $L$ of $F$ contained in $A$ such that all $\mathfrak{p}_i$ stay prime in $L$, and an imaginary quadratic extension $K$ of $F$ in which all the primes $\mathfrak{p}_1, \cdots, \mathfrak{p}_r$ split. By regarding both $F^\times$ and $K^\times$ as $\mathbb{Q}$-groups, one may obtain a $\mathbb{Q}$-group $G$ which makes the following sequence exact: $$\begin{aligned} 1 \longrightarrow F^\times \xrightarrow{f \mapsto (f, f^{-1})} A^\times \times K^\times \longrightarrow G \longrightarrow 1.\end{aligned}$$ Suppose $A$ is split at $\mathfrak{p}_1$, and write $\mathcal{O}_B := \mathcal{O}_A \otimes_{\mathcal{O}_F} \mathcal{O}_K$, where $\mathcal{O}_A$ is an order of $A$ containing $\mathcal{O}_L$ with certain additional properties (cf.  [@Rei § 2, (2.9)]). Then, for every level structure $C = C_p \times C^p \subseteq G (\mathbb{A}_f)$ (where $\mathbb{A}_f$ denotes the ring of finite adèles of $\mathbb{Q}$) with $C_p = G(\mathbb{Z}_p)$ and $C^p$ sufficiently small (cf.  [@Rei § 2, (2.11)]), there exists a smooth proper $\mathcal{O}_{F_\mathfrak{p}}$-scheme $\mathcal{M}_C$ of relative dimension $1$ which is the coarse moduli space of a certain moduli functor of PEL type with the endomorphism algebra $\mathcal{O}_B$ (cf.  [@Rei Definition 2.12, Proposition 2.14, and Corollary 3.14]). Let us take one of the connected components of the base-change of $\mathcal{M}_C$ to $k$ (via fixed inclusions $\mathcal{O}_{F_\mathfrak{p}}/p\mathcal{O}_{F_\mathfrak{p}}\hookrightarrow\overline{\mathbb{F}}_p \hookrightarrow k$), which we denote by $X$. After possibly replacing $C$ with another, we may assume that the genus of $X$ is greater than $1$ and that there exists a universal abelian scheme $f : Y \rightarrow X$.
The first de Rham cohomology sheaf $\mathcal{H}:= R^1 f_* (\mathcal{K}^\bullet [\mathcal{O}_Y \xrightarrow{d}\Omega_{Y /X}])$ on $X$ is equipped with the Gauss-Manin connection $\nabla$, as well as a $2$-step decreasing filtration $\{ \mathcal{H}^j \}_{j=0}^2$ on $\mathcal{H}$ arising from the Hodge filtration. If we write $\mathfrak{p}_i \mathcal{O}_K = \mathfrak{q}_i \overline{\mathfrak{q}}_i$ ($i=1, \cdots, r$) for distinct primes $\mathfrak{q}_i$, $\overline{\mathfrak{q}}_i$ in $K$, then we have $p \mathcal{O}_{LK} = \prod_{i=1}^r \mathfrak{q}_i \overline{\mathfrak{q}}_i$; it induces a natural isomorphism $LK \otimes_\mathbb{Q}\mathbb{Q}_p \stackrel{\sim}{\rightarrow}\prod_{i=1}^r LK_{\mathfrak{q}_i} \times LK_{\overline{\mathfrak{q}}_i}$. This isomorphism tensored with $\mathbb{Q}^{\mathrm{un}}_p$ (= the maximal unramified extension of $\mathbb{Q}_p$) restricts to a decomposition $$\begin{aligned} \label{Eq201rt} \prod_{\phi \in \Phi} w (\phi) \times \overline{w} (\phi) : \mathcal{O}_{LK} \otimes_\mathbb{Z}W (\overline{\mathbb{F}}_p) \stackrel{\sim}{\rightarrow}\prod_{\phi \in \Phi} W (\overline{\mathbb{F}}_p) \times W (\overline{\mathbb{F}}_p),\end{aligned}$$ where $\Phi := \mathrm{Hom}_{\mathbb{Q}} (L, \overline{\mathbb{Q}})$ and for every $\phi \in \Phi$ we denote by $w (\phi)$ (resp., $\overline{w} (\phi)$) the unique extension of $\phi$ in $\mathrm{Hom}_\mathbb{Q}(LK, \overline{\mathbb{Q}})$ whose restriction to $K$ lies in $\mathrm{Hom}_{\mathbb{Q}_p}(K_{\mathfrak{q}_i}, \overline{\mathbb{Q}}_p)$ (resp., $\mathrm{Hom}_{\mathbb{Q}_p}(K_{\overline{\mathfrak{q}}_i}, \overline{\mathbb{Q}}_p)$) for some $i$. By the injection $\left(\mathcal{O}_{LK} \subseteq \right) \mathcal{O}_B \hookrightarrow\mathrm{End}_X (Y)$, $\mathcal{H}$ is equipped with a structure of $\mathcal{O}_{LK}\otimes_\mathbb{Z}W (\overline{\mathbb{F}}_p)$-module.
Hence the decomposition ([\[Eq201rt\]](#Eq201rt){reference-type="ref" reference="Eq201rt"}) gives rise to a decomposition of $(\mathcal{H}, \nabla)$ into a direct sum of rank $2$ flat subbundles $$\begin{aligned} \label{Eq107} (\mathcal{H}, \nabla) = \bigoplus_{\phi \in \Phi} (\mathcal{H}_{\phi}, \nabla_\phi) \oplus (\mathcal{H}_{\overline{\phi}}, \nabla_{\overline{\phi}}),\end{aligned}$$ (cf.  [@SZZ Proposition 4.1]); it restricts to a decomposition $\mathcal{H}^1 = \bigoplus_{\phi \in \Phi} \mathcal{H}^1_\phi \oplus \mathcal{H}_{\overline{\phi}}^1$, where $\mathcal{H}_{\phi}^{1} := \mathcal{H}^{1} \cap \mathcal{H}_{ \phi}$ and $\mathcal{H}_{\overline{\phi}}^{1} := \mathcal{H}^{1} \cap \mathcal{H}_{\overline{\phi}}$. Also, the structure of Dieudonné crystal on $\mathcal{H}$ corresponding to the abelian scheme $Y/X$ determines, via reduction modulo $p$, an $\mathcal{O}_{X}$-linear morphism $\xi : F_{X}^* \mathcal{H}\rightarrow\mathcal{H}$ (where $F_X$ denotes the absolute Frobenius endomorphism of $X$). Let us take $\widetilde{\tau} \in \Phi$ with $\widetilde{\tau} |_{F} = \tau$. If $\sigma$ denotes the automorphism induced by the Frobenius automorphism of $\overline{\mathbb{F}}_p$ over $\mathbb{F}_p$, then $\xi$ restricts to an isomorphism $$\begin{aligned} \xi_{\widetilde{\tau}} : F_{X}^*\mathcal{H}_{\sigma^{-1}\widetilde{\tau}} \stackrel{\sim}{\rightarrow}\mathcal{H}_{\widetilde{\tau}}\end{aligned}$$ (cf.  [@SZZ Proposition 3.1, (iii)]). Since this isomorphism is compatible with the connection, we see that $\nabla_{\widetilde{\tau}}$ has vanishing $p$-curvature. By the latter assertion of  [@SZZ Proposition 4.1], both $\mathcal{H}_{\widetilde{\tau}}^{1}$ and $\mathcal{H}_{\widetilde{\tau}}/\mathcal{H}_{\widetilde{\tau}}^{1}$ are line bundles. 
Moreover, if $p \geq 2g$, then the Higgs field defined by the Kodaira-Spencer map $\mathcal{H}_{\widetilde{\tau}}^{1} \rightarrow\Omega_{X} \otimes_{\mathcal{O}_X} \left(\mathcal{H}_{\widetilde{\tau}}/\mathcal{H}_{\widetilde{\tau}}^{1}\right)$ is an isomorphism (cf.  [@SZZ Proposition 4.4]). It follows that the collection $$\begin{aligned} (\mathcal{H}_{\widetilde{\tau}}, \nabla_{\widetilde{\tau}}, \{ \mathcal{H}^j_{\widetilde{\tau}} \}_{j=0}^2)\end{aligned}$$ specifies a dormant $\mathrm{GL}_2$-oper. In particular, by taking its projectivization, one obtains a dormant $\mathrm{PGL}_2$-oper on $X$. ## The deformation space of a dormant $G$-oper {#SS030} Let $(\mathcal{E}, \nabla)$ be a flat $G$-bundle on $\mathscr{X}$ with vanishing $p$-curvature. In the resp'd portion of the following discussion, we suppose that $(\mathcal{E}, \nabla)$ is equipped with a marking, and that $\nabla$ has residues $\mu \in \mathfrak{g}^{\times r}$ (where $\mu := \emptyset$ if $r = 0$). Consider the conjugate spectral sequence $$\begin{aligned} \label{Csp} ''E_{2}^{a,b} := H^a(X, \mathcal{H}^b(\mathcal{K}^\bullet [\nabla^\mathrm{ad}])) \Rightarrow \mathbb{H}^{a+b}(X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}]) \hspace{4mm} \\ \left(\text{resp.,} \ ''E_{2}^{a,b} := H^a(X, \mathcal{H}^b(\mathcal{K}^\bullet [{^c}\nabla^\mathrm{ad}])) \Rightarrow \mathbb{H}^{a+b}(X, \mathcal{K}^\bullet [{^c}\nabla^\mathrm{ad}]) \right)\end{aligned}$$ associated to $\mathcal{K}^\bullet [\nabla^\mathrm{ad}]$ (resp., $\mathcal{K}^\bullet [{^c}\nabla^\mathrm{ad}]$), where $\mathcal{H}^b(\mathcal{K}^\bullet [-])$ denotes the $b$-th cohomology sheaf of the complex $\mathcal{K}^\bullet [-]$.
This spectral sequence induces a short exact sequence $$\begin{aligned} \label{Ceq} 0 \rightarrow H^1(X, \mathrm{Ker}(\nabla^\mathrm{ad})) \xrightarrow{{''e_\sharp}} \mathbb{H}^1(X, \mathcal{K}^\bullet[\nabla^\mathrm{ad}]) \xrightarrow{{''e}_\flat} H^0 (X, \mathrm{Coker}(\nabla^\mathrm{ad})) \rightarrow 0 \hspace{10mm}\\ \left(\text{resp.,} \ 0 \rightarrow H^1(X, \mathrm{Ker}({^c}\nabla^\mathrm{ad})) \xrightarrow{{''e_{\sharp, c}}} \mathbb{H}^1(X, \mathcal{K}^\bullet[{^c}\nabla^\mathrm{ad}]) \xrightarrow{{''e}_{\flat, c}} H^0 (X, \mathrm{Coker}({^c}\nabla^\mathrm{ad})) \rightarrow 0 \right). \notag\end{aligned}$$ Recall that the first-order deformation of the $p$-curvature may be described in terms of the Cartier map of the flat bundle $(\mathfrak{g}_\mathcal{E}, \nabla^\mathrm{ad})$ (cf.  [@Wak8 Proposition 6.11]), and that this map (resp., this map restricted to ${^c}\mathfrak{g}_\mathcal{E}$) factors through the quotient $\Omega \otimes_{\mathcal{O}_X} \mathfrak{g}_\mathcal{E}\twoheadrightarrow\mathrm{Coker}(\nabla^\mathrm{ad})$ (resp., the quotient $\Omega \otimes_{\mathcal{O}_X} {^c}\mathfrak{g}_\mathcal{E}\twoheadrightarrow\mathrm{Coker}({^c}\nabla^\mathrm{ad})$) (cf.  [@Og Proposition 2.2.4]). Hence, the space of first-order deformations of the flat $G$-bundle $(\mathcal{E}, \nabla)$ preserving the condition of vanishing $p$-curvature (resp., the condition of vanishing $p$-curvature and fixing residues) may be identified with $H^1 (X, \mathrm{Ker}(\nabla^\mathrm{ad}))$ (resp., $H^1 (X, \mathrm{Ker}({^c}\nabla^\mathrm{ad}))$).
That is, the isomorphism $\gamma$ (resp., $\gamma_\mu$) restricts, via ${''}e_\sharp$ (resp., ${''}e_{\sharp, c}$), to an isomorphism of $k$-vector spaces $$\begin{aligned} \label{Eq303} {^p} \gamma : T_q \mathcal{C}onn_G^{\psi = 0} \stackrel{\sim}{\rightarrow} H^1 (X, \mathrm{Ker}(\nabla^\mathrm{ad})) \ \left(\text{resp.,} \ {^p} \gamma_\mu : T_q \mathcal{C}onn_{G, \mu}^{\psi = 0} \stackrel{\sim}{\rightarrow}H^1 (X, \mathrm{Ker}({^c}\nabla^\mathrm{ad})) \right).\end{aligned}$$ Next, suppose further that there exists a structure of $B$-reduction $\mathcal{E}_B$ on $\mathcal{E}$ for which $\mathscr{E}^\spadesuit := (\mathcal{E}_B, \nabla)$ specifies a dormant $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$-normal $G$-oper (resp., a dormant $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$-normal $G$-oper of radii $\rho := \chi (\mu)$, where $\rho := \emptyset$ if $r = 0$). Denote by $q$ the $k$-rational point of $\mathcal{O}p_G^{^\mathrm{Zzz...}}$ (resp., $\mathcal{O}p_{G, \rho}^{^\mathrm{Zzz...}}$) classifying $\mathscr{E}^\spadesuit$; we use the same symbol "$q$" to denote the point of $\mathcal{C}onn_G^{\psi = 0}$ (resp., $\mathcal{C}onn_{G, \widetilde{\rho}}^{\psi = 0}$) classifying $(\mathcal{E}, \nabla)$. We already know the following assertion. **Proposition 21**. *The following equalities hold: $$\begin{aligned} \label{WWW8} \mathrm{dim}(T_q \mathcal{C}onn_{G}) &= 2 \cdot \mathrm{dim}(T_q \mathcal{O}p_G) - r \cdot \mathrm{rk}(\mathfrak{g})\\ & = 2 \cdot \mathrm{dim}(T_q \mathcal{C}onn_{G}^{\psi = 0}) + r \cdot \mathrm{rk}(\mathfrak{g}) \notag \\ & = (2g-2 +r)\cdot \mathrm{dim}(\mathfrak{g}).
\notag\end{aligned}$$ In particular, we have $$\begin{aligned} \label{WWW9} \mathrm{dim}(T_q \mathcal{O}p_G) + \mathrm{dim}(T_q \mathcal{C}onn_{G}^{\psi = 0}) = \mathrm{dim}(T_q \mathcal{C}onn_{G}).\end{aligned}$$* *Proof.* The assertion follows from ([\[Eq3033\]](#Eq3033){reference-type="ref" reference="Eq3033"}), ([\[Eq300\]](#Eq300){reference-type="ref" reference="Eq300"}), ([\[Eq303\]](#Eq303){reference-type="ref" reference="Eq303"}), and  [@Wak8 Propositions 2.23, 6.5, and 6.18]. ◻ We shall set $$\begin{aligned} \label{Eq302} e_\flat^{(0)} : H^0 (X, \mathcal{V}_G) \xrightarrow{{'}e_\sharp} \mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}]) \xrightarrow{{''}e_\flat } H^0 (X, \mathrm{Coker}(\nabla^\mathrm{ad}))\hspace{10mm}\\ \left(\text{resp.,} \ {^c}e_\flat^{(0)} : H^0 (X, {^c}\mathcal{V}_G) \xrightarrow{{'}e_{\sharp, c}} \mathbb{H}^1 (X, \mathcal{K}^\bullet [{^c}\nabla^\mathrm{ad}]) \xrightarrow{{''}e_{\flat, c} } H^0 (X, \mathrm{Coker}({^c}\nabla^\mathrm{ad})) \right). \notag\end{aligned}$$ The discussion preceding the above proposition (together with the equality ([\[Eq211\]](#Eq211){reference-type="ref" reference="Eq211"})) also shows that the tangent space $T_q \mathcal{O}p_G^{^\mathrm{Zzz...}}$ (resp., $T_q \mathcal{O}p_{G, \rho}^{^\mathrm{Zzz...}}$) of $\mathcal{O}p_G^{^\mathrm{Zzz...}}$ (resp., $\mathcal{O}p_{G, \rho}^{^\mathrm{Zzz...}}$) at $q$ may be identified with $\mathrm{Ker}(e_\flat^{(0)}) = \mathrm{Im}({'}e_\sharp) \cap \mathrm{Im}({''}e_\sharp)$ (resp., $\mathrm{Ker}({^c}e_\flat^{(0)}) = \mathrm{Im}({'}e_{\sharp, c}) \cap \mathrm{Im}({''}e_{\sharp, c})$).
That is to say, we obtain an isomorphism of $k$-vector spaces $$\begin{aligned} \label{Eq301} {^p}\gamma_\sharp := \gamma_\sharp \cap {^p}\gamma : T_q \mathcal{O}p_G^{^\mathrm{Zzz...}} \!\stackrel{\sim}{\rightarrow}\mathrm{Ker}(e_\flat^{(0)}) \ \left(\text{resp.,} \ {^p}\gamma_{\sharp, \rho} := \gamma_{\sharp, \widetilde{\rho}} \cap {^p}\gamma_\rho : T_q \mathcal{O}p_{G, \rho}^{^\mathrm{Zzz...}}\! \stackrel{\sim}{\rightarrow}\mathrm{Ker}({^c}e_\flat^{(0)}) \right) \hspace{-3mm}\end{aligned}$$ (cf.  [@Wak8 Corollary 6.20]). ## Ordinariness for dormant $G$-opers {#SS0101} Next, we shall denote by $$\begin{aligned} \label{Eq100} \mathcal{O}p^{^\mathrm{Zzz...}}_{G, \text{\'{e}t}}\end{aligned}$$ the étale locus of $\mathcal{O}p^{^\mathrm{Zzz...}}_{G}$ over $k$. It follows from  [@Wak8 Theorem G] that $\mathcal{O}p^{^\mathrm{Zzz...}}_{G, \text{\'{e}t}} = \mathcal{O}p^{^\mathrm{Zzz...}}_{G}$ if $\mathscr{X}$ is a general pointed stable curve and $G$ is either $\mathrm{PGL}_n$ with $2n < p$, $\mathrm{SO}_{2l+1}$ with $4l +2 < p$, or $\mathrm{PCSp}_{2m}$ with $4m< p$. For example, if $G$ is one of these groups, then this equality holds for every totally degenerate curve, in the sense of  [@Wak8 Definition 7.15]. **Definition 22**. We shall say that a dormant $G$-oper $\mathscr{E}^\spadesuit$ on $\mathscr{X}$ is **ordinary** if it is classified by a point of $\mathcal{O}p^{^\mathrm{Zzz...}}_{G, \text{\'{e}t}}$, or equivalently, the morphism $e_\flat^{(0)}$ associated to $\mathscr{E}^\spadesuit$ is injective (cf. ([\[Eq301\]](#Eq301){reference-type="ref" reference="Eq301"})). **Remark 23** (Previous studies). In several earlier papers by the author, ordinariness for opers and related notions was discussed in various settings. For example, we refer the reader to  [@Wak9 Definitions 2.1.2, 3.4.1] for the ordinariness of $\mathrm{PGL}_n$-opers.
On the other hand, the case of dormant indigenous $(G, H)$-bundles (where $G$ denotes a connected smooth algebraic group and $H$ denotes a suitable closed subgroup of $G$) on a smooth algebraic variety can be found in  [@Wak11 Definition 6.7.1]. **Proposition 24**. *Let $\mathscr{E}^\spadesuit := (\mathcal{E}_B, \nabla)$ be a dormant $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$-normal $G$-oper on $\mathscr{X}$. Then, the following three conditions are equivalent to each other:* - *$\mathscr{E}^\spadesuit$ is ordinary in the sense of Definition [Definition 22](#Def3){reference-type="ref" reference="Def3"}.* - *The morphism $e_\flat^{(0)}$ associated to $\mathscr{E}^\spadesuit$ is an isomorphism.* - *The composite $$\begin{aligned} \label{Eq876} e_\sharp^{(0)} : H^1 (X, \mathrm{Ker}(\nabla^\mathrm{ad})) \xrightarrow{{''}e_\sharp} \mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}]) \xrightarrow{{'}e_\flat } H^1 (X, \Omega \otimes_{\mathcal{O}_X} \mathcal{V}_G^\vee)\end{aligned}$$ is an isomorphism.* *Proof.* The equivalence (b) $\Leftrightarrow$ (c) follows from the exactness of the sequences ([\[HDeq1\]](#HDeq1){reference-type="ref" reference="HDeq1"}) and ([\[Ceq\]](#Ceq){reference-type="ref" reference="Ceq"}). Also, there is nothing to prove for the implication (b) $\Rightarrow$ (a). Finally, the implication (a) $\Rightarrow$ (b) follows from the equality $$\begin{aligned} \mathrm{dim}H^0 (X, \mathcal{V}_G) = \mathrm{dim}H^0 (X, \mathrm{Coker}(\nabla^\mathrm{ad})) \left( = \frac{2g-2 + r}{2} \cdot \mathrm{dim}( \mathfrak{g}) + \frac{r}{2} \cdot \mathrm{rk}(\mathfrak{g}) \right), \end{aligned}$$ that was already proved in  [@Wak8 Propositions 2.23 and 6.18] (see also Proposition [Proposition 21](#Eqw3){reference-type="ref" reference="Eqw3"} described above). ◻ **Corollary 25**. *Let $\mathscr{E}^\spadesuit$ be an ordinary dormant $G$-oper on $\mathscr{X}$. 
Then, there exists a canonical decomposition of $\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}])$ into a direct sum $$\begin{aligned} \label{Eq110} \mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}]) &= H^0 (X, \mathcal{V}_G) \oplus H^1 (X, \Omega \otimes_{\mathcal{O}_X} \mathcal{V}_G^\vee) \left(\cong H^0 (X, \mathcal{V}_G) \oplus H^0 (X, {^c}\mathcal{V}_G)^\vee \right). \end{aligned}$$* *Proof.* After possibly replacing $\mathscr{E}^\spadesuit$ with its $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$-normalization (cf.  [@Wak8 Proposition 2.19]), we may assume that $\mathscr{E}^\spadesuit$ is $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$-normal. Since $e_\flat^{(0)}$ is bijective by the equivalence (a) $\Leftrightarrow$ (b) asserted in Proposition [Proposition 24](#Thm4){reference-type="ref" reference="Thm4"}, we obtain the composite surjection $$\begin{aligned} \label{Eq308} \mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}]) \xrightarrow{{''}e_\flat} H^0 (X, \mathrm{Coker}(\nabla^\mathrm{ad})) \xrightarrow{(e_\flat^{(0)})^{-1}} H^0 (X, \mathcal{V}_G).\end{aligned}$$ It specifies a split surjection of the short exact sequence ([\[HDeq1\]](#HDeq1){reference-type="ref" reference="HDeq1"}) in the case where $\Box$ denotes the absence of "$c$". In particular, this split surjection determines the desired decomposition ([\[Eq110\]](#Eq110){reference-type="ref" reference="Eq110"}). ◻ Let $\mathscr{E}^\spadesuit$ and $q$ be as in § [6.3](#SS030){reference-type="ref" reference="SS030"}. Suppose further that $\mathscr{E}^\spadesuit$ is ordinary, which means that the two substacks $\mathcal{O}p_G$, $\mathcal{C}onn_G^{\psi =0}$ of $\mathcal{C}onn_G$ intersect transversally at $q$. Since $e_\flat^{(0)}$ is an isomorphism (cf.
Proposition [Proposition 24](#Thm4){reference-type="ref" reference="Thm4"}), the morphism $$\begin{aligned} \label{Eq304} ({'}e_\sharp, {''}e_\sharp) : H^0 (X, \mathcal{V}_G) \oplus H^1 (X, \mathrm{Ker}(\nabla^\mathrm{ad})) \rightarrow\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}])\end{aligned}$$ turns out to be an isomorphism. Under the identifications between various $k$-vector spaces given by $\gamma$ (cf. ([\[Eq3033\]](#Eq3033){reference-type="ref" reference="Eq3033"})), $\gamma_\sharp$ (cf. ([\[Eq300\]](#Eq300){reference-type="ref" reference="Eq300"})), and ${^p}\gamma$ (cf. ([\[Eq303\]](#Eq303){reference-type="ref" reference="Eq303"})), the isomorphism ([\[Eq304\]](#Eq304){reference-type="ref" reference="Eq304"}) yields a decomposition $$\begin{aligned} \label{Eq306} T_q \mathcal{O}p_G \oplus T_q \mathcal{C}onn_G^{\psi = 0} \stackrel{\sim}{\rightarrow}T_q \mathcal{C}onn_G\end{aligned}$$ of the tangent space $T_q \mathcal{C}onn_G$; it can be obtained by differentiating the immersions $\mathcal{O}p_G \hookrightarrow\mathcal{C}onn_G$ and $\mathcal{C}onn_G^{\psi = 0} \hookrightarrow\mathcal{C}onn_G$ at $q$. # Eichler-Shimura-type isomorphisms for dormant opers This final section is devoted to proving Theorem [Theorem 1](#ThA){reference-type="ref" reference="ThA"}, which establishes an Eichler-Shimura-type decomposition for a dormant $G$-oper on a general pointed stable curve. The desired decomposition is constructed by relating it to the tangent-space decomposition already obtained in ([\[Eq306\]](#Eq306){reference-type="ref" reference="Eq306"}). ## The irreducible decomposition of the deformation space {#SS0134} Let $\mathscr{E}^\spadesuit := (\mathcal{E}_B, \nabla)$ be a dormant $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$-normal $G$-oper on $\mathscr{X}$. Suppose that $\mathscr{E}^\spadesuit$ is of radii $\rho := (\rho_i)_{i=1}^r\in \mathfrak{c}(k)^{\times r}$, where $\rho := \emptyset$ if $r = 0$.
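For the reader's convenience, we record here the dimension count behind Proposition [Proposition 21](#Eqw3){reference-type="ref" reference="Eqw3"}; this computation is only a consistency check based on the dimension formulas quoted from  [@Wak8 Propositions 2.23 and 6.18], and is not used in the sequel. Those formulas read $\mathrm{dim}(T_q \mathcal{O}p_G) = \frac{2g-2+r}{2} \cdot \mathrm{dim}(\mathfrak{g}) + \frac{r}{2} \cdot \mathrm{rk}(\mathfrak{g})$ and $\mathrm{dim}(T_q \mathcal{C}onn_{G}^{\psi = 0}) = \frac{2g-2+r}{2} \cdot \mathrm{dim}(\mathfrak{g}) - \frac{r}{2} \cdot \mathrm{rk}(\mathfrak{g})$, whence $$\begin{aligned} \mathrm{dim}(T_q \mathcal{O}p_G) + \mathrm{dim}(T_q \mathcal{C}onn_{G}^{\psi = 0}) = (2g-2+r) \cdot \mathrm{dim}(\mathfrak{g}),\end{aligned}$$ the $r \cdot \mathrm{rk}(\mathfrak{g})$ contributions cancelling in the sum.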
Note that the dual $(\mathfrak{g}_\mathcal{E}^\vee, \nabla^{\mathrm{ad}\vee})$ of the flat bundle $(\mathfrak{g}_\mathcal{E}, \nabla^\mathrm{ad})$ is isomorphic to $(\mathfrak{g}_\mathcal{E}, \nabla^\mathrm{ad})$ itself because of the nondegeneracy of the Killing form on $\mathfrak{g}$. Hence, by applying  [@Wak8 Corollary 6.16] to the case where "$(\mathcal{V}, \nabla)$\" is taken to be $(\mathfrak{g}_\mathcal{E}, \nabla^\mathrm{ad})$, we obtain an isomorphism of short exact sequences: $$\begin{aligned} \label{Eq240} \vcenter{\xymatrix@C=21pt@R=36pt{ 0 \ar[r] & H^1 (X, \mathrm{Ker}({^c}\nabla^\mathrm{ad})) \ar[r]^-{{''}e_{\sharp, c}} \ar[d]_-{\wr}^-{} &\mathbb{H}^1 (X, \mathcal{K}^\bullet [{^c}\nabla^\mathrm{ad}]) \ar[r]^-{{''}e_{\flat, c}} \ar[d]_-{\wr}^-{(\ref{Eww3})} & H^0 (X, \mathrm{Coker}({^c}\nabla^\mathrm{ad})) \ar[r] \ar[d]_-{\wr}^-{} & 0 \\ 0 \ar[r]& H^0 (X, \mathrm{Coker}(\nabla^\mathrm{ad}))^{\vee} \ar[r]_-{({''}e_\flat)^\vee } &\mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}])^\vee \ar[r]_-{({''}e_\sharp)^\vee} &H^1(X, \mathrm{Ker}(\nabla^\mathrm{ad}))^\vee \ar[r] & 0. }}\end{aligned}$$ **Lemma 26**. *The isomorphism $e_{\flat}^{(0)}$ associated to $\mathscr{E}^\spadesuit$ (cf. Proposition [Proposition 24](#Thm4){reference-type="ref" reference="Thm4"}) restricts to an isomorphism of $k$-vector spaces $$\begin{aligned} %{^c}e_{\flat}^{(0)} : H^0 (X, {^c}\mathcal{V}_G) \stackrel{\sim}{\rightarrow}H^0 (X, \mathrm{Coker}({^c}\nabla^\mathrm{ad})).\end{aligned}$$* *Proof.* Note that both the log connections $\nabla^\mathrm{ad}$ and ${^c}\nabla^\mathrm{ad}$ have vanishing $p$-curvature. 
Hence, the morphism $\delta : \mathrm{Coker}({^c}\nabla^\mathrm{ad}) \rightarrow\mathrm{Coker}(\nabla^\mathrm{ad})$ induced by the inclusion of complexes $\mathcal{K}^\bullet [{^c}\nabla^\mathrm{ad}] \hookrightarrow\mathcal{K}^\bullet [\nabla^\mathrm{ad}]$ is injective because of the commutativity of the following square diagram: $$\begin{aligned} \label{Eq24011} \vcenter{\xymatrix@C=46pt@R=36pt{ \mathrm{Coker}({^c}\nabla^\mathrm{ad}) \ar[r]^-{\delta} \ar[d]_-{\wr} & \mathrm{Coker}(\nabla^\mathrm{ad}) \ar[d]^-{\wr} \\ \Omega^{(1)} \otimes_{\mathcal{O}_{X^{(1)}}} \mathrm{Ker}({^c}\nabla^\mathrm{ad}) \ar[r]_-{\mathrm{inclusion}} & \Omega^{(1)} \otimes_{\mathcal{O}_{X^{(1)}}} \mathrm{Ker}(\nabla^\mathrm{ad}), }}\end{aligned}$$ where - $X^{(1)}$ denotes the Frobenius twist of $X$ over $k$, and $\Omega^{(1)}$ denotes the pull-back of $\Omega$ along the base-change morphism $X^{(1)} \stackrel{\sim}{\rightarrow}X$ induced by the Frobenius automorphism of $k$; - we regard both $\mathrm{Ker}({^c}\nabla^\mathrm{ad})$ and $\mathrm{Ker}(\nabla^\mathrm{ad})$ as $\mathcal{O}_{X^{(1)}}$-modules via the underlying homeomorphism of the relative Frobenius morphism $X \rightarrow X^{(1)}$; - the right-hand and left-hand vertical arrows denote the isomorphisms induced by the Cartier operators of $(\mathfrak{g}_\mathcal{E}, \nabla^\mathrm{ad})$ and $({^c}\mathfrak{g}_\mathcal{E}, {^c}\nabla^\mathrm{ad})$, respectively (cf. the discussion following  [@Og Proposition 1.2.4]). In particular, the morphism $$\begin{aligned} H^0(\delta) : H^0 (X, \mathrm{Coker}({^c}\nabla^\mathrm{ad})) \rightarrow H^0 (X, \mathrm{Coker}(\nabla^\mathrm{ad}))\end{aligned}$$ induced by $\delta$ is injective. 
It follows that the composite of natural morphisms ${^c}\mathcal{V}_G [-1] \rightarrow\mathcal{K}^\bullet [{^c}\nabla^\mathrm{ad}] \rightarrow\mathrm{Coker}({^c}\nabla^\mathrm{ad})[-1]$ gives a restriction $$\begin{aligned} \label{Fgr4} {^c}e_{\flat}^{(0)} : H^0 (X, {^c}\mathcal{V}_G) \hookrightarrow H^0 (X, \mathrm{Coker}({^c}\nabla^\mathrm{ad}))\end{aligned}$$ of $e_{\flat}^{(0)}$ via the injection $H^0 (\delta)$ and the natural inclusion $H^0 (X, {^c}\mathcal{V}_G) \hookrightarrow H^0 (X, \mathcal{V}_G)$. Since, as noted at the beginning of this subsection, $(\mathfrak{g}_\mathcal{E}, \nabla^\mathrm{ad})$ is isomorphic to its own dual because of the nondegeneracy of the Killing form on $\mathfrak{g}$, it follows from  [@Wak8 Corollary 6.16] that $H^0 (X, \mathrm{Coker}({^c}\nabla^\mathrm{ad}))$ is isomorphic to $H^1 (X, \mathrm{Ker}(\nabla^\mathrm{ad}))^\vee$. Hence, this fact together with  [@Wak8 Propositions 2.23 and 6.18] implies the equality $$\begin{aligned} \label{Eq325} \mathrm{dim}( H^0 (X, {^c}\mathcal{V}_G)) = \mathrm{dim} (H^0 (X, \mathrm{Coker}({^c}\nabla^\mathrm{ad}))) \left(= \frac{2g-2 +r}{2} \cdot \mathrm{dim}(\mathfrak{g}) - \frac{r}{2} \cdot \mathrm{rk}(\mathfrak{g}) \right).\end{aligned}$$ Thus, ([\[Fgr4\]](#Fgr4){reference-type="ref" reference="Fgr4"}) is verified to be an isomorphism, as desired. ◻ Next, suppose that $\mathscr{E}^\spadesuit$ is ordinary. The dual of the composite $$\begin{aligned} H^0 (X, \mathrm{Coker}(\nabla^\mathrm{ad})) \xrightarrow{(e_\flat^{(0)})^{-1}} H^0(X, \mathcal{V}_G) \xrightarrow{{'}e_\sharp} \mathbb{H}^1 (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}])\end{aligned}$$ determines a split surjection of the lower horizontal sequence in ([\[Eq240\]](#Eq240){reference-type="ref" reference="Eq240"}). Hence, it induces a splitting of the upper horizontal one.
Under the identifications between various $k$-vector spaces given by $\gamma_{\widetilde{\rho}}$ (cf. ([\[Eq350\]](#Eq350){reference-type="ref" reference="Eq350"})), ${^p}\gamma_{\widetilde{\rho}}$ (cf. ([\[Eq303\]](#Eq303){reference-type="ref" reference="Eq303"})), and ${^c}e_\flat^{(0)}\circ \gamma_{\sharp, \rho}$ (cf. ([\[Eq300\]](#Eq300){reference-type="ref" reference="Eq300"}), ([\[Eq302\]](#Eq302){reference-type="ref" reference="Eq302"})), the resulting decomposition of $\mathbb{H}^1 (X, \mathcal{K}^\bullet [{^c}\nabla^\mathrm{ad}])$ determines a direct sum decomposition $$\begin{aligned} \label{Ek8} T_q \mathcal{C}onn_{G, \widetilde{\rho}}^{\psi = 0} \oplus T_q \mathcal{O}p_{G, \rho} \stackrel{\sim}{\rightarrow}T_q \mathcal{C}onn_{G, \widetilde{\rho}}\end{aligned}$$ of $T_q \mathcal{C}onn_{G, \widetilde{\rho}}$; it coincides with the morphism induced by differentiating the inclusions $\mathcal{O}p_{G, \rho} \hookrightarrow\mathcal{C}onn_{G, \widetilde{\rho}}$ and $\mathcal{C}onn_{G, \widetilde{\rho}}^{\psi = 0} \hookrightarrow\mathcal{C}onn_{G, \widetilde{\rho}}$. Then, the following assertion holds. **Theorem 27**. *Let $\mathscr{E}^\spadesuit$ be an ordinary dormant $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$-normal $G$-oper on $\mathscr{X}$. 
Let us consider the following two composite isomorphisms: $$\begin{aligned} \label{Ek1} T_q \mathcal{C}onn_{G, \widetilde{\rho}}^{\psi = 0} & \xrightarrow{{^p}\gamma_{\widetilde{\rho}}} H^1 (X, \mathrm{Ker}({^c}\nabla^\mathrm{ad})) \xrightarrow{\sim} H^0(X, \mathrm{Coker}(\nabla^\mathrm{ad}))^\vee \\ & \xrightarrow{(e_\flat^{(0)})^\vee} H^0 (X, \mathcal{V}_G)^\vee \xrightarrow{\gamma_\sharp^\vee} T^\vee_q \mathcal{O}p_G, \notag\end{aligned}$$ $$\begin{aligned} \label{Ek2} T_q \mathcal{O}p_{G, \rho} &\xrightarrow{\gamma_{\sharp, \rho}}H^0 (X, {^c}\mathcal{V}_G) \xrightarrow{{^c}e_\flat^{(0)}}H^0 (X, \mathrm{Coker}({^c}\nabla^\mathrm{ad})) \\ & \xrightarrow{\sim} H^1 (X, \mathrm{Ker}(\nabla^\mathrm{ad}))^\vee \xrightarrow{({^p}\gamma)^\vee} T_q^\vee \mathcal{C}onn_{G}^{\psi = 0} \notag\end{aligned}$$ (cf. ([\[Eq240\]](#Eq240){reference-type="ref" reference="Eq240"}) for the definitions of the second arrow in ([\[Ek1\]](#Ek1){reference-type="ref" reference="Ek1"}) and the third arrow in ([\[Ek2\]](#Ek2){reference-type="ref" reference="Ek2"})). Then, these isomorphisms make the following square diagram commute: $$\begin{aligned} \label{Eq223} \vcenter{\xymatrix@C=46pt@R=36pt{ T_q \mathcal{C}onn_{G, \widetilde{\rho}}^{\psi = 0} \oplus T_q \mathcal{O}p_{G, \rho} \ar[r]^-{(\ref{Ek8})}_-{\sim} \ar[d]_-{(\ref{Ek1}) \oplus (\ref{Ek2})}^-{\wr} & T_q \mathcal{C}onn_{G, \widetilde{\rho}} \ar[d]^-{(\ref{Er67})}_-{\wr} \\ T_q^\vee \mathcal{O}p_{G} \oplus T_q^\vee \mathcal{C}onn_{G}^{\psi = 0} \ar[r]_-{(\ref{Eq306})^\vee}^-{\sim} & T_q^\vee \mathcal{C}onn_{G}. 
}}\end{aligned}$$ In particular, if $r = 0$, then the subspace $T_q \mathcal{C}onn_G^{\psi = 0}$ of $T_q \mathcal{C}onn_G$ is Lagrangian with respect to the bilinear form on $T_q \mathcal{C}onn_G$ determined by ([\[Er67\]](#Er67){reference-type="ref" reference="Er67"}).* *Proof.* The assertion follows from Theorem [Theorem 17](#Prc1){reference-type="ref" reference="Prc1"}, (ii), and the definitions of various morphisms involved. ◻ **Remark 28** (Case of the parabolic de Rham cohomology). When $r > 0$, we cannot construct a direct sum decomposition of $T_q \mathcal{C}onn_{G, [0]^{\times r}}$ using a dormant $G$-oper in the same manner as ([\[Eq306\]](#Eq306){reference-type="ref" reference="Eq306"}) or ([\[Ek8\]](#Ek8){reference-type="ref" reference="Ek8"}). In fact, as discussed in  [@Wak8 Remark 3.37], no dormant $G$-oper can have radii $[0]^{\times r}$. ## The canonical decomposition derived from ordinariness {#SS0893} Next, let $\mathscr{E}^\spadesuit_\odot := (\mathcal{E}_{B^\odot}, \nabla_\odot)$ be a dormant $G^\odot$-oper on $\mathscr{X}$, and write $\mathcal{E}_\odot := \mathcal{E}_{B^\odot} \times^{B^\odot} G^\odot$ and $\mathscr{E}_\odot := (\mathcal{E}_\odot, \nabla_\odot)$. **Proposition 29**. *The following two conditions are equivalent to each other:* - *The dormant $G$-oper $\iota_{G *}(\mathscr{E}^\spadesuit_\odot)$ (cf. ([\[associated\]](#associated){reference-type="ref" reference="associated"})) is ordinary;* - *For each integer $l$ with $1\leq l \leq \mathrm{rk}(\mathfrak{g})$, the composite morphism $$\begin{aligned} \label{Eq838} \kappa_{l} : H^1 (X, \mathrm{Ker} (\mathrm{Sym}^{2l}\nabla_\odot)) \xrightarrow{\theta_l } H^1_{\mathrm{dR}} (X, \mathrm{Sym}^{2l}\mathscr{E}_\odot) \longrightarrow H^1 (X, \Omega^{\otimes (-l)})\end{aligned}$$ (cf.
([\[Eq817\]](#Eq817){reference-type="ref" reference="Eq817"}) for the definition of the second arrow) is an isomorphism, where the first arrow $\theta_l$ arises from the natural inclusion $\mathrm{Ker} (\mathrm{Sym}^{2l}\nabla_\odot) [0] \rightarrow\mathcal{K}^\bullet [\mathrm{Sym}^{2l}\nabla_\odot]$.* *Proof.* After possibly replacing $\mathscr{E}^\spadesuit_\odot$ with its $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$-normalization, we may assume that $\mathscr{E}^\spadesuit_\odot$ is $\,\, q_1\hspace{-5.0mm}\rotatebox[origin=c]{-10}{$\swarrow$}\hspace{0.5mm}$-normal. We shall write $\iota_{G*}(\mathscr{E}^\spadesuit_\odot) := (\mathcal{E}_B, \nabla)$. The isomorphism ([\[Eq160\]](#Eq160){reference-type="ref" reference="Eq160"}) restricts to an isomorphism $$\begin{aligned} \bigoplus_{l=1}^{\mathrm{rk}(\mathfrak{g})}\mathrm{Ker}(\mathrm{Sym}^{2l}\nabla_\odot) \otimes_k \mathfrak{g}_l^{\mathrm{ad}(q_1)} \stackrel{\sim}{\rightarrow}\mathrm{Ker}(\nabla^\mathrm{ad}).\end{aligned}$$ It induces a commutative square diagram $$\begin{aligned} \label{Eq166} \vcenter{\xymatrix@C=26pt@R=36pt{ \displaystyle{\bigoplus_{l=1}^{\mathrm{rk}(\mathfrak{g})}}H^1 (X, \mathrm{Ker}(\mathrm{Sym}^{2l}\nabla_\odot)) \otimes_k \mathfrak{g}_l^{\mathrm{ad}(q_1)} \ar[r]^-{\sim} \ar[d]_-{\bigoplus_l \theta_l \otimes \mathrm{id}} & H^1 (X, \mathrm{Ker}(\nabla^\mathrm{ad})) \ar[d]^-{{''}e_\sharp } \\ \displaystyle{\bigoplus_{l=1}^{\mathrm{rk}(\mathfrak{g})}} H^1_{\mathrm{dR}}(X, \mathrm{Sym}^{2l}\mathscr{E}_\odot) \otimes_k\mathfrak{g}_l^{\mathrm{ad}(q_1)}\ar[r]_-{(\ref{Eq834})} & \mathbb{H}^1_{} (X, \mathcal{K}^\bullet [\nabla^\mathrm{ad}]). 
}}\end{aligned}$$ By composing ([\[Eq166\]](#Eq166){reference-type="ref" reference="Eq166"}) and the right-hand square diagram in ([\[Eq16tt\]](#Eq16tt){reference-type="ref" reference="Eq16tt"}), we obtain a commutative square $$\begin{aligned} \label{Eq16ji} \vcenter{\xymatrix@C=26pt@R=36pt{ \displaystyle{\bigoplus_{l=1}^{\mathrm{rk}(\mathfrak{g})}}H^1 (X, \mathrm{Ker}(\mathrm{Sym}^{2l}\nabla_\odot)) \otimes_k \mathfrak{g}_l^{\mathrm{ad}(q_1)} \ar[r]^-{\sim} \ar[d]_-{\bigoplus_{l}\kappa_{l} \otimes \mathrm{id}} & H^1 (X, \mathrm{Ker}(\nabla^\mathrm{ad})) \ar[d]^-{e_\sharp^{(0)}} \\ \displaystyle{\bigoplus_{l=1}^{\mathrm{rk}(\mathfrak{g})}}H^1 (X, \Omega^{\otimes (-l)})\otimes_k\mathfrak{g}_l^{\mathrm{ad}(q_1)}\ar[r]_-{\sim} & H^1 (X, \Omega \otimes_{\mathcal{O}_X} \mathcal{V}_G^\vee) }}\end{aligned}$$ (cf. ([\[Eq876\]](#Eq876){reference-type="ref" reference="Eq876"}) for the definition of $e_\sharp^{(0)}$). The commutativity of this diagram implies that $\kappa_{l}$ is an isomorphism for every $l$ if and only if $e_\sharp^{(0)}$ is an isomorphism. Hence, the assertion follows from the equivalence (a) $\Leftrightarrow$ (c) asserted in Proposition [Proposition 24](#Thm4){reference-type="ref" reference="Thm4"}. ◻ **Corollary 30**. *Let $\mathscr{E}_\odot^\spadesuit$ be a dormant $G^\odot$-oper on $\mathscr{X}$. Also, let $G'$ be another algebraic group satisfying the same conditions imposed on $G$ (cf. § [1.4](#Ssde){reference-type="ref" reference="Ssde"}). Suppose that $\mathrm{rk}(\mathfrak{g}) \geq \mathrm{rk}(\mathfrak{g}')$ and that the dormant $G$-oper $\iota_{G*}(\mathscr{E}^\spadesuit_\odot)$ is ordinary. Then, the dormant $G'$-oper $\iota_{G'*}(\mathscr{E}^\spadesuit_\odot)$ is ordinary.* *Proof.* The assertion can be proved immediately by applying Proposition [Proposition 29](#T01){reference-type="ref" reference="T01"} to both $\iota_{G*}(\mathscr{E}^\spadesuit_\odot)$ and $\iota_{G'*}(\mathscr{E}^\spadesuit_\odot)$. ◻ **Corollary 31**.
*Let $\mathscr{E}_\odot^\spadesuit$ be a dormant $G^\odot$-oper on $\mathscr{X}$. Suppose that $\mathscr{X}$ is general in the moduli stack $\overline{\mathcal{M}}_{g, r}$ of $r$-pointed stable curves of genus $g$ over $k$. (To be precise, $\mathscr{X}$ specifies a point of $\overline{\mathcal{M}}_{g, r}$ that lies outside some fixed closed substack.)* - *Suppose that $\mathrm{rk}(\mathfrak{g}) \leq \frac{p-3}{2}$. Then, the dormant $G$-oper $\iota_{G *}(\mathscr{E}^\spadesuit_\odot)$ associated to any dormant $G^\odot$-oper $\mathscr{E}^\spadesuit_\odot$ on $\mathscr{X}$ is ordinary.* - *For each integer $l$ with $1 \leq l \leq \frac{p-3}{2}$, the morphism $\kappa_{l}$ associated to $\mathscr{E}^\spadesuit_\odot$ (introduced in Proposition [Proposition 29](#T01){reference-type="ref" reference="T01"}) is an isomorphism.* *Proof.* First, let us prove assertion (i). For simplicity, we shall set $G^\circledast := \mathrm{PGL}_\frac{p-1}{2}$, which has rank $\frac{p-3}{2}$. By applying Corollary [Corollary 30](#Co34){reference-type="ref" reference="Co34"} to both $G$ and $G^\circledast$, we can reduce the problem to the case of $G = G^\circledast$. Denote by $\mathfrak{O}\mathfrak{p}^{^\mathrm{Zzz...}}_{G^\odot}$ (resp., $\mathfrak{O}\mathfrak{p}^{^\mathrm{Zzz...}}_{G^\circledast}$) the Deligne-Mumford stack classifying pairs $(\mathscr{X}, \mathscr{E}^\spadesuit)$ consisting of $\mathscr{X}\in \mathrm{Ob}(\overline{\mathcal{M}}_{g, r})$ and a dormant $G^\odot$-oper (resp., a dormant $G^\circledast$-oper) $\mathscr{E}^\spadesuit$ on $\mathscr{X}$. According to  [@Mzk2 Chap. II, Theorem 2.8], the stack $\mathfrak{O}\mathfrak{p}^{^\mathrm{Zzz...}}_{G^\odot}$ is irreducible, and the natural projection $\mathfrak{O}\mathfrak{p}^{^\mathrm{Zzz...}}_{G^\odot} \rightarrow\overline{\mathcal{M}}_{g, r}$ is finite and faithfully flat. 
Hence, one can find a unique irreducible component $\mathcal{N}$ of $\mathfrak{O}\mathfrak{p}_{G^\circledast}^{^\mathrm{Zzz...}}$ containing the image of the $\overline{\mathcal{M}}_{g, r}$-morphism $$\begin{aligned} \label{Mor44} \mathfrak{O}\mathfrak{p}_{G^\odot}^{^\mathrm{Zzz...}} \rightarrow\mathfrak{O}\mathfrak{p}_{G^\circledast}^{^\mathrm{Zzz...}} \end{aligned}$$ given by assigning $(\mathscr{X}, \mathscr{E}^\spadesuit) \mapsto (\mathscr{X}, \iota_{G*}(\mathscr{E}^\spadesuit))$. If $\mathfrak{O}\mathfrak{p}_{G^\circledast, \text{\'{e}t}}^{^\mathrm{Zzz...}}$ denotes the étale locus of $\mathfrak{O}\mathfrak{p}_{G^\circledast}^{^\mathrm{Zzz...}}$ over $\overline{\mathcal{M}}_{g, r}$, then the intersection $\mathfrak{O}\mathfrak{p}_{G^\circledast, \text{\'{e}t}}^{^\mathrm{Zzz...}} \cap \mathcal{N}$ defines a dense open substack of $\mathcal{N}$ (cf.  [@Wak8 Theorem G]). Since $\mathfrak{O}\mathfrak{p}_{G^\circledast}^{^\mathrm{Zzz...}}$ is finite over $\overline{\mathcal{M}}_{g, r}$ (cf.  [@Wak8 Theorem 3.34]), the restriction $\mathfrak{O}\mathfrak{p}_{G^\odot}^{^\mathrm{Zzz...}} \rightarrow\mathcal{N}$ of ([\[Mor44\]](#Mor44){reference-type="ref" reference="Mor44"}) is dominant. This implies that the inverse image $\mathcal{Q}$ of $\mathfrak{O}\mathfrak{p}_{G^\circledast, \text{\'{e}t}}^{^\mathrm{Zzz...}} \cap \mathcal{N}$ via ([\[Mor44\]](#Mor44){reference-type="ref" reference="Mor44"}) is a dense open substack of $\mathfrak{O}\mathfrak{p}_{G^\odot}^{^\mathrm{Zzz...}}$. The complement in $\overline{\mathcal{M}}_{g, r}$ of the image of $\mathfrak{O}\mathfrak{p}_{G^\odot}^{^\mathrm{Zzz...}} \setminus \mathcal{Q}$ forms a dense open substack of $\overline{\mathcal{M}}_{g, r}$ and by definition classifies pointed stable curves $\mathscr{X}$ such that $\iota_{G^\circledast *}(\mathscr{E}^\spadesuit)$ is ordinary for any dormant $G^\odot$-oper $\mathscr{E}^\spadesuit$ on $\mathscr{X}$. This completes the proof of assertion (i).
Moreover, assertion (ii) follows from assertion (i) together with Proposition [Proposition 29](#T01){reference-type="ref" reference="T01"}. ◻ We conclude this paper with the following assertions. **Theorem 32**. *Let $\mathscr{E}^\spadesuit_\odot$ be a dormant $G^\odot$-oper on $\mathscr{X}$, and write $\mathscr{E}_\odot$ for the underlying flat $G^\odot$-bundle of $\mathscr{E}^\spadesuit_\odot$. Also, let $l$ be an integer with $1 \leq l \leq \frac{p-3}{2}$. Suppose that $\mathscr{X}$ is general in $\overline{\mathcal{M}}_{g, r}$. Then, there exists a canonical splitting of ([\[Eq817\]](#Eq817){reference-type="ref" reference="Eq817"}). In particular, we obtain a canonical decomposition $$\begin{aligned} H^1_{\mathrm{dR}} (X, \mathrm{Sym}^{2l}\mathscr{E}_\odot) &= H^0 (X, \Omega^{\otimes (l+1)}) \oplus H^1 (X, \Omega^{\otimes (-l)}) \\ & \hspace{-1mm}\left(\cong H^0 (X, \Omega^{\otimes (l+1)}) \oplus H^0 (X, \Omega^{\otimes (l+1)}(-D))^\vee\right) \notag\end{aligned}$$ of the $1$-st de Rham cohomology group $H^1_{\mathrm{dR}} (X, \mathrm{Sym}^{2l}\mathscr{E}_\odot)$ of $\mathrm{Sym}^{2l}\mathscr{E}_\odot$.* *Proof.* It follows from Corollary [Corollary 31](#Co34f){reference-type="ref" reference="Co34f"}, (ii), that $\kappa_{l}$ is an isomorphism for every $l = 1, \cdots, \frac{p-3}{2}$. Then, the composite injection $$\begin{aligned} H^1 (X, \Omega^{\otimes (-l)}) \xrightarrow{\kappa_{l}^{-1}} H^1 (X, \mathrm{Ker}(\mathrm{Sym}^{2l}\nabla_\odot)) \xrightarrow{\theta_l}H_\mathrm{dR}^1 (X, \mathrm{Sym}^{2l}\mathscr{E}_\odot)\end{aligned}$$ specifies the desired splitting. ◻ **Corollary 33** (cf. Theorem [Theorem 1](#ThA){reference-type="ref" reference="ThA"}). *Let $\mathscr{F}^\heartsuit_\odot$ be a dormant $\mathrm{SL}_2$-oper on $\mathscr{X}$, and $l$ an integer with $1 \leq l \leq \frac{p-3}{2}$.
Denote by $\varTheta$ the theta characteristic of $\mathscr{X}$ associated to $\mathscr{F}^\heartsuit_\odot$, and by $\mathscr{F}_\odot$ the underlying flat $\mathrm{SL}_2$-bundle of $\mathscr{F}^\heartsuit_\odot$. Suppose that $\mathscr{X}$ is general in $\overline{\mathcal{M}}_{g, r}$. Then, there exists a canonical splitting of ([\[Eq810\]](#Eq810){reference-type="ref" reference="Eq810"}) for $n= 2l +1$. In particular, we obtain a canonical decomposition $$\begin{aligned} \label{Eq131} H^1_{\mathrm{dR}} (X, \mathrm{Sym}^{2l}\mathscr{F}_\odot) & = H^0 (X, \varTheta^{\otimes 2(l+1)}) \oplus H^1 (X, \varTheta^{\otimes (-2l)}) \\ & \hspace{-1mm} \left(\cong H^0 (X, \varTheta^{\otimes 2(l+1)}) \oplus H^0 (X, \varTheta^{\otimes 2(l+1)}(-D))^\vee \right) \notag\end{aligned}$$ of the $1$-st de Rham cohomology $H^1_{\mathrm{dR}} (X, \mathrm{Sym}^{2l}\mathscr{F}_\odot)$ of $\mathrm{Sym}^{2l}\mathscr{F}_\odot$.* *Proof.* The assertion follows from Theorem [Theorem 32](#Co12){reference-type="ref" reference="Co12"} and the comment at the end of the statement in Theorem [Theorem 20](#Th66){reference-type="ref" reference="Th66"}. ◻ **Remark 34**. In  [@Wak7 § 2.4], the author constructed a $p$-adic version of the Eichler-Shimura isomorphism for the $2$-nd symmetric product of an ordinary nilpotent indigenous bundle, i.e., a certain $G^\odot$-oper with nilpotent $p$-curvature (cf.  [@Mzk1 Chap. II, Definition 3.1]). This result is an essential ingredient for proving the main theorem in  [@Wak7], which compares canonical symplectic structures on the related moduli spaces. Despite the similarity in concept, this result seems to have no (at least direct) relation with the decomposition asserted in the above theorem because the $G^\odot$-opers treated there never have vanishing $p$-curvature.
However, we also expect that the previous $p$-adic version extends to the higher symmetric products, generalizing Faltings' construction of the $p$-adic Eichler-Shimura isomorphisms for modular curves (cf.  [@Fal Theorem 6]). ## Acknowledgements {#acknowledgements .unnumbered} We would like to thank modular curves for their helpful comments on the Eichler-Shimura isomorphism. Our work was partially supported by Grant-in-Aid for Scientific Research (KAKENHI No. 21K13770). A. A. Beilinson, V. Drinfeld, *Quantization of Hitchin's integrable system and Hecke eigensheaves,* Available at: https://inspirehep.net/literature/1885368. F. Brown, R. Hain, Algebraic de Rham theory for weakly holomorphic modular forms of level one, *Algebra Number Theory* **12** (2018), pp. 723-750. L. Candelori, Harmonic weak Maass forms of integral weight: a geometric approach, *Math. Ann.* **360** (2014), pp. 489-517. P. Deligne, *Equations différentielles à points singuliers réguliers,* Lecture Notes in Math., Vol. 163, Springer-Verlag, Berlin-New York (1970), iii+133 pp. M. Eichler, Eine Verallgemeinerung der Abelschen Integrale, *Math. Z.* **67** (1957), pp. 267-298. G. Faltings, Hodge-Tate structures and modular forms, *Math. Ann.* **278** (1987), pp. 133-149. R. Gunning, Special coordinate coverings of Riemann surfaces, *Math. Ann.* **170** (1967), pp. 67-86. J. E. Humphreys, *Linear algebraic groups*, Grad. Texts in Math., No. 21, Springer-Verlag, New York-Heidelberg (1975), xiv+247 pp. D. Huybrechts, M. Lehn, *The geometry of moduli spaces of sheaves*, Cambridge Math. Lib., Cambridge University Press, Cambridge (2010), xviii+325 pp. K. Joshi, The degree of the dormant operatic locus, *Int. Math. Res. Not. IMRN* (2017), pp. 2599-2613. K. Joshi, C. Pauly, Hitchin-Mochizuki morphism, opers and Frobenius-destabilized vector bundles over curves, *Adv. Math.* **274** (2015), pp. 39-75. F. Kato, Log smooth deformation and moduli of log smooth curves, *Internat. J.
Math.* **11** (2000), pp. 215-232. N. M. Katz, Nilpotent connections and the monodromy theorem: Applications of a result of Turrittin, *Inst. Hautes Etudes Sci. Publ. Math.* **39** (1970), pp. 175-232. M. Kazalicki, A. J. Scholl, Modular forms, de Rham cohomology and congruences, *Trans. Amer. Math. Soc.* **368** (2016), pp. 7097-7117. R. Kiehl, R. Weissauer, *Weil conjectures, Perverse Sheaves and $\ell$-adic Fourier Transform*, Ergeb. Math. Grenzgeb. (3) **42**, Springer-Verlag, Berlin (2001), xii+375 pp. B. Kostant, Lie group representations on polynomial rings, *Amer. J. Math.* **85** (1963), pp. 327-404. S. Mochizuki, A theory of ordinary $p$-adic curves, *Publ. Res. Inst. Math. Sci.* **32** (1996), pp. 957-1152. S. Mochizuki, *Foundations of $p$-adic Teichmüller theory*, AMS/IP Stud. Adv. Math., **11**, American Mathematical Society, Providence, RI; International Press, Cambridge, MA (1999), xii+529 pp. B. C. Ngô, Le lemme fondamental pour les algèbres de Lie, *Publ. Math. Inst. Hautes Études Sci.* **111** (2010), pp. 1-169. A. Ogus, *$F$-Crystals, Griffiths Transversality, and the Hodge Decomposition*, Astérisque **221** (1994), ii+183 pp. A. Ogus, Elliptic crystals and modular motives, *Adv. Math.* **162** (2001), pp. 173-216. B. Osserman, The generalized Verschiebung map for curves of genus $2$, *Math. Ann.* **336** (2006), pp. 963-986. H. Reimann, *The semi-simple zeta function of quaternionic Shimura varieties*, Lecture Notes in Math., *1657*, Springer-Verlag, Berlin (1997), viii+143 pp. A. J. Scholl, Modular forms and de Rham cohomology; Atkin-Swinnerton-Dyer congruences, *Invent. math.* **79** (1985), pp. 49-77. M. Sheng, J. Zhang, K. Zuo, Higgs bundles over the good reduction of a quaternionic Shimura curve, *J. Reine Angew. Math.* **671** (2012), pp. 223-248. G. Shimura, Sur les intégrales attachées aux formes automorphes, *J. Math. Soc. Japan* **11** (1959), pp. 291-311. Y.
Wakabayashi, An explicit formula for the generic number of dormant indigenous bundles, *Publ. Res. Inst. Math. Sci.* **50** (2014), pp. 383-409. Y. Wakabayashi, Spin networks, Ehrhart quasi-polynomials, and combinatorics of dormant indigenous bundles, *Kyoto J. Math.* **59** (2019), pp. 649-684. Y. Wakabayashi, Symplectic geometry of $p$-adic Teichmüller uniformization for ordinary nilpotent indigenous bundles, *Tunis. J. Math.* **4** (2022), pp. 203-247. Y. Wakabayashi, *A theory of dormant opers on pointed stable curves*, Astérisque **432** (2022), ix+296 pp. Y. Wakabayashi, Dormant opers and Gauss maps in positive characteristic, *arXiv:math.AG/2209.08526* (2022). Y. Wakabayashi, Cyclic étale coverings of generic curves and ordinariness of dormant opers, *J. Algebra* **623** (2023), pp. 154-192. Y. Wakabayashi, Frobenius-Ehresmann structures and Cartan geometries in positive characteristic, *Indag. Math. (N.S.)* **34** (2023), pp. 488-580. Y. Wakabayashi, Opers with real monodromy and Eichler-Shimura isomorphism, Preprint. J. Wan, The moduli stack of $G$-bundles, *arXiv:math.AG/1104.4828* (2011). R. A. Wentworth, Higgs bundles and local systems on Riemann surfaces, *Adv. Courses Math. CRM Barcelona*, Birkhäuser/Springer, Cham (2016), pp. 165-219.
--- abstract: | Define the minimal excludant of an overpartition $\pi$, denoted $\overline{\text{mex}}({\pi})$, to be the smallest positive integer that is not a part of the non-overlined parts of $\pi$. For a positive integer $n$, the function $\sigma \overline{\text{mex}}({n})$ is the sum of the minimal excludants over all overpartitions of $n$. In this paper, we prove that $\sigma \overline{\text{mex}}({n})$ equals the number of partitions of $n$ into distinct parts using three colors. We also provide an asymptotic formula for $\sigma \overline{\text{mex}}({n})$ and show that $\sigma \overline{\text{mex}}({n})$ is almost always even and is odd exactly when $n$ is a triangular number. Moreover, we generalize $\overline{\text{mex}}({\pi})$ using the least $r$-gaps, denoted $\overline{\text{mex}}_r({\pi})$, defined as the smallest part of the non-overlined parts of the overpartition $\pi$ appearing less than $r$ times. Similarly, for a positive integer $n$, the function $\sigma_r \overline{\text{mex}}({n})$ is the sum of the least $r$-gaps over all overpartitions of $n$. We derive a generating function and an asymptotic formula for $\sigma_r \overline{\text{mex}}({n})$. Lastly, we study the arithmetic density of $\sigma_r \overline{\text{mex}}({n})$ modulo $2^k$, where $r=2^a\cdot3^b$ with $a,b \in \mathbb{Z}_{\geq 0}.$\       **Keywords**: partitions, minimal excludant, least gaps, modular forms **AMS Classification:** 05A17, 11F11, 11F20, 11P83 author: - "Victor Manuel R. Aricheta[^1]    and Judy Ann L. Donato[^2]" title: Minimal Excludant over Overpartitions --- # Introduction The minimal excludant (mex) of a subset $S$ of a well-ordered set $U$ is the smallest value in $U$ that is not in $S$.
In particular, the minimal excludant of a set $S$ of positive integers, denoted mex($S$), is the least positive integer not in $S$, i.e., $\text{mex}(S)=\text{min}(\mathbb{Z}^+ \backslash S).$ The history of the minimal excludant dates back to the 1930s, when it was first used in combinatorial game theory by Sprague and Grundy [@sprague], [@grundy].\ In 2019, Andrews and Newman [@andrewsnewman] studied the minimal excludant of an integer partition $\pi$, denoted $\text{mex}(\pi)$, which is defined as the smallest positive integer that is not a part of $\pi.$ Moreover, they introduced the arithmetic function $$\sigma \text{mex}(n):=\displaystyle \sum_{\pi \in \mathcal{P}(n)} \text{mex}(\pi),$$ where $\mathcal{P}(n)$ is the set of all partitions of $n$.\ They proved the following interesting relationship between $\sigma \text{mex}(n)$ and $D_2(n)$, the number of partitions of $n$ into distinct parts using two colors: $$\sigma \text{mex}(n)=D_2(n).$$ Moreover, it was shown that $\sigma \text{mex}(n)$ is almost always even and is odd exactly when $n=j(3j\pm 1)$ for some $j \in \mathbb{N}.$\ In [@ballantinemerca], Ballantine and Merca explored the least $r$-gap of a partition $\pi$, denoted $g_r(\pi)$, which is the smallest part of $\pi$ appearing less than $r$ times. In particular, $g_1(\pi)$ is the minimal excludant of $\pi.$ They defined the arithmetic function $$\sigma_r \text{mex}(n)=\displaystyle \sum_{\pi \in \mathcal{P}(n)} g_r(\pi),$$ which is the sum of the least $r$-gaps over all partitions of $n$.
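The Andrews-Newman identity $\sigma \text{mex}(n)=D_2(n)$ is easy to check numerically for small $n$. The following Python sketch (our illustration; the helper names `sigma_mex` and `D2` are not from the literature) brute-forces the left-hand side over all partitions of $n$ and computes the right-hand side as the coefficient of $q^n$ in $(-q;q)_\infty^2$:

```python
def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def mex(parts):
    """Smallest positive integer that is not a part."""
    s, m = set(parts), 1
    while m in s:
        m += 1
    return m

def sigma_mex(n):
    """Sum of mex(pi) over all partitions pi of n."""
    return sum(mex(p) for p in partitions(n))

def D2(n):
    """Partitions of n into distinct parts using two colors:
    coefficient of q^n in prod_{j>=1} (1+q^j)^2."""
    coeffs = [1] + [0] * n
    for j in range(1, n + 1):
        for _ in range(2):                  # two colors
            for i in range(n, j - 1, -1):   # multiply by (1 + q^j)
                coeffs[i] += coeffs[i - j]
    return coeffs[n]

# Andrews-Newman: sigma_mex(n) = D2(n)
print(all(sigma_mex(n) == D2(n) for n in range(1, 13)))  # prints True
```

The truncated product expansion in `D2` is the standard knapsack-style convolution; the brute-force enumeration is feasible only for small $n$, which is all a sanity check needs.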
The following generating function for $\sigma_r\text{mex}(n)$ was also derived: $$\displaystyle \sum_{n=0}^\infty \sigma_r\text{mex}(n)q^n=\dfrac{(q^{2r};q^{2r})_\infty}{(q;q)_\infty(q^r;q^{2r})_\infty}.$$ Furthermore, Chakraborty and Ray [@chakrabortyray] studied the arithmetic density of $\sigma_2\text{mex}(n)$ and $\sigma_3\text{mex}(n)$ modulo $2^k$ for any positive integer $k$ and proved that for almost every nonnegative integer $n$ lying in an arithmetic progression, the integer $\sigma_r \text{mex}(n)$ is a multiple of $2^k$, where $r \in \{2,3\}.$\ Now, recall that an overpartition of a positive integer $n$ is a non-increasing sequence of natural numbers whose sum is $n$ in which the first occurrence of a number may be overlined. We denote by $\overline{p}(n)$ the number of overpartitions of $n$. For example, $\overline{p}(3)=8$ since there are 8 overpartitions of $3$, namely: $$3, \overline{3}, 2+1, \overline{2}+1, 2+\overline{1}, \overline{2}+\overline{1}, 1+1+1, \overline{1}+1+1.$$ The goal of this paper is to extend the notion of minimal excludant of partitions to overpartitions. There are several ways to obtain such a generalization, but we propose the following definition (Definition [Definition 2](#def2){reference-type="ref" reference="def2"}). We justify using this definition through the results we obtain, results that are manifestly analogues of results concerning the classical partition function (see Proposition [Proposition 2](#Tk){reference-type="ref" reference="Tk"} for example). **Definition 1**. *The **minimal excludant of an overpartition** $\pi$, denoted $\overline{\text{mex}}(\pi)$, is the smallest positive integer that is not a part of the non-overlined parts of $\pi$.
For a positive integer $n$, denote the sum of $\overline{\text{mex}}(\pi)$ over all overpartitions $\pi$ of $n$ by $\sigma \overline{\text{mex}}(n):$ $$\sigma \overline{\text{mex}}(n)=\displaystyle \sum_{\pi \in \overline{\mathcal{P}}(n)} \overline{\text{mex}}(\pi),$$ where $\overline{\mathcal{P}}(n)$ is the set of all overpartitions of $n$. Moreover, we set $\sigma \overline{\text{mex}}({0})=1.$* For example, consider $n=3.$ The table below shows all overpartitions of $3$ and their corresponding minimal excludants.

  $\pi$                           $\overline{\text{mex}}(\pi)$
  ------------------------------- ------------------------------
  3                               1
  $\overline{3}$                  1
  $2 + 1$                         3
  $\overline{2} + 1$              2
  $2 + \overline{1}$              1
  $\overline{2} + \overline{1}$   1
  1+1+1                           2
  $\overline{1}+1+1$              2

Thus, $\sigma \overline{\text{mex}}(3)= 13.$\ In Section 2, we derive the generating function of $\sigma \overline{\text{mex}}({n})$ and prove the following theorem relating $\sigma \overline{\text{mex}}({n})$ and $D_3(n)$, the number of partitions of $n$ into distinct parts using three colors. This theorem may be viewed as a generalization of the result of Andrews and Newman relating $\sigma\text{mex}(n)$ and $D_2(n).$ **Theorem 1**. *For every positive integer $n$, we have $$\sigma\overline{\text{mex}}(n)=D_3(n).$$* We then derive an asymptotic formula for $\sigma \overline{\text{mex}}({n})$ and prove a theorem regarding the parity of $\sigma \overline{\text{mex}}({n})$. **Theorem 2**. *We have $$\sigma \overline{\text{mex}}({n}) \sim \dfrac{e^{\pi\sqrt{n}}}{8n^{3/4}}$$ as $n \rightarrow \infty.$* **Theorem 3**. *For a positive integer $n$, we have $$\sigma \overline{\text{mex}}({n}) \equiv \begin{cases} 1 \mod 2, \text{ if } n=\frac{j(j+1)}{2} \text{ for some } j \in \mathbb{N}\\ 0 \mod 2, \text{ otherwise.} \end{cases}$$* In Section 3, we generalize $\overline{\text{mex}}({\pi})$ to the least $r$-gap, which we define as follows. **Definition 2**.
*The **least $r$-gap of an overpartition $\pi$**, denoted $\overline{\text{mex}}_r({\pi})$, is the smallest part of the non-overlined parts of $\pi$ appearing less than $r$ times. Moreover, the function $$\sigma_r \overline{\text{mex}}({n}) = \displaystyle \sum_{\pi \in \overline{\mathcal{P}}(n)} \overline{\text{mex}}_r({\pi})$$ is the sum of the least $r$-gaps over all overpartitions of $n.$* We then derive the generating function and an asymptotic formula for $\sigma_r \overline{\text{mex}}({n})$. **Theorem 4**. *For every positive integer $n$, we have $$\displaystyle\sum_{n=0}^{\infty} \sigma_r \overline{\text{mex}}({n})q^n = \dfrac{(-q;q)_\infty(q^{2r};q^{2r})_\infty}{(q;q)_\infty(q^{r};q^{2r})_\infty}.$$* **Theorem 5**. *We have $$\sigma_r \overline{\text{mex}}({n}) \sim \dfrac{e^{\pi\sqrt{n}}}{8\sqrt{r}\,n^{3/4}}$$ as $n \rightarrow \infty.$* Lastly, in Section 4 we prove our main result about the distribution of $\sigma_r \overline{\text{mex}}({n})$ when $r=2^a \cdot 3^b$ with $a,b \in \mathbb{Z}_{\geq 0}$. **Theorem 6**. *Let $r = 2^a \cdot 3^b$, where $a,b \in \mathbb{Z}_{\geq 0}$, and let $k$ be a positive integer. Then $$\displaystyle \lim_{X \to +\infty} \dfrac{\#\{n\leq X: \sigma_r\overline{\text{mex}}(n) \equiv 0 \ (\mathrm{mod}\ 2^k)\}}{X}=1.$$* Equivalently, for almost every nonnegative integer $n$ lying in an arithmetic progression, the integer $\sigma_r \overline{\text{mex}}({n})$ is a multiple of $2^k$ when $r=2^a \cdot 3^b$ with $a,b \in \mathbb{Z}_{\geq 0}$.
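Theorem 1 and the value $\sigma \overline{\text{mex}}(3)=13$ computed above can be checked by brute force for small $n$. In the sketch below (our illustration; the helper names are not from the paper), an overpartition is encoded as an ordinary partition together with a choice of which distinct part sizes have an overlined first occurrence:

```python
from itertools import combinations
from collections import Counter

def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def sigma_mex_bar(n):
    """Sum of the minimal excludant over all overpartitions of n.
    The mex is taken over the NON-overlined parts only."""
    total = 0
    for lam in partitions(n):
        mult = Counter(lam)
        # overline any subset of the distinct part sizes
        for r in range(len(mult) + 1):
            for overlined in combinations(mult, r):
                nonover = {v for v in mult
                           if mult[v] - (v in overlined) > 0}
                m = 1
                while m in nonover:
                    m += 1
                total += m
    return total

def D3(n):
    """Partitions of n into distinct parts using three colors:
    coefficient of q^n in prod_{j>=1} (1+q^j)^3."""
    coeffs = [1] + [0] * n
    for j in range(1, n + 1):
        for _ in range(3):                  # three colors
            for i in range(n, j - 1, -1):   # multiply by (1 + q^j)
                coeffs[i] += coeffs[i - j]
    return coeffs[n]

print(sigma_mex_bar(3))                                     # prints 13
print(all(sigma_mex_bar(n) == D3(n) for n in range(1, 9)))  # prints True
```

The output 13 matches the table of overpartitions of 3, and the second line is Theorem 1 for small $n$; Theorem 3 can be spot-checked the same way, since the computed values are odd exactly at the triangular numbers.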
# Minimal Excludant of an Overpartition ## Generating Function of $\sigma\overline{\text{mex}}(n)$ **Proof of Theorem [Theorem 1](#D3){reference-type="ref" reference="D3"}:** *Proof.* Let $\overline{p^{\text{mex}}}(m,n)$ be the number of overpartitions $\pi$ of $n$ with $\overline{\text{mex}}(\pi)=m.$ Then we have the following double series $M(z,q)$ in which the coefficient of $z^mq^n$ is $\overline{p^{\text{mex}}}(m,n)$: $$\begin{aligned} M(z,q):=\displaystyle\sum_{n=0}^\infty \sum_{m=1}^\infty \overline{p^{\text{mex}}}(m,n) z^mq^n &= \displaystyle \sum_{m=1}^\infty z^mq^1\cdot q^2 \cdots q^{m-1} \cdot \dfrac{\displaystyle\prod_{n=1}^\infty (1+q^n)}{\displaystyle\prod_{\substack{n=1 \\ n \neq m}}^\infty (1-q^n)}\\ &=\displaystyle \sum_{m=1}^\infty z^mq^{m \choose 2} \cdot \dfrac{(-q;q)_\infty}{(q;q)_\infty} \cdot (1-q^m)\\ &=\dfrac{(-q;q)_\infty}{(q;q)_\infty} \displaystyle \sum_{m=1}^\infty z^mq^{m \choose 2} \cdot (1-q^m).\end{aligned}$$ Thus, $$\begin{aligned} \displaystyle \sum_{n \geq 0} \sigma \overline{\text{mex}}(n)q^n &= \dfrac{\partial}{\partial z}\Big|_{z=1} M(z,q)\\ &=\dfrac{(-q;q)_\infty}{(q;q)_\infty} \displaystyle \sum_{m=1}^\infty mq^{m \choose 2} (1-q^m)\\ &=\dfrac{(-q;q)_\infty}{(q;q)_\infty} \left( \displaystyle \sum_{m=1}^\infty mq^{m \choose 2} -\displaystyle \sum_{m=1}^\infty mq^{m \choose 2}\cdot q^m\right)\\ &= \dfrac{(-q;q)_\infty}{(q;q)_\infty} \left( \displaystyle \sum_{m=1}^\infty mq^{m \choose 2} -\displaystyle \sum_{m=1}^\infty (m-1)q^{m \choose 2}\right)\\ &=\dfrac{(-q;q)_\infty}{(q;q)_\infty} \displaystyle \sum_{m=0}^\infty q^{m+1 \choose 2}\\ &=\dfrac{(-q;q)_\infty}{(q;q)_\infty} \cdot \dfrac{(q^2;q^2)_\infty}{(q;q^2)_\infty}\\ &=(-q;q)_\infty \cdot \dfrac{(q^2;q^2)_\infty}{(q;q)_\infty(q;q^2)_\infty}\\ &=(-q;q)_\infty \cdot (-q;q)^2_\infty\\ &=(-q;q)^3_\infty\\ &=\displaystyle \sum_{n \geq 0} D_3(n)q^n.\end{aligned}$$ ◻ As an illustration, observe that the thirteen partitions of 3 into distinct parts using three colors are: $3_1, 3_2, 3_3, 2_1+1_1,
2_1+1_2, 2_1+1_3, 2_2+1_1, 2_2+1_2, 2_2+1_3, 2_3+1_1, 2_3+1_2, 2_3+1_3, 1_1+1_2+1_3.$ Indeed, $D_3(3)=13=\sigma\overline{\text{mex}}(3).$ ## Asymptotic Formula for $\sigma \overline{\text{mex}}({n})$ To derive an asymptotic formula for $\sigma \overline{\text{mex}}({n})$, we will be using the following asymptotic result by Ingham [@ingham] about the coefficients of a power series. **Proposition 1**. *Let $A(q)= \sum_{n=0}^{\infty} a(n)q^n$ be a power series with radius of convergence 1. Assume that $\{a(n)\}$ is a weakly increasing sequence of nonnegative real numbers. If there are constants $\alpha, \beta \in \mathbb{R},$ and $C>0$ such that $$A(e^{-t})\sim \alpha t^{\beta}e^{\frac{C}{t}}, \text{ as } t \rightarrow 0^{+}$$ then we have $$a(n) \sim \dfrac{\alpha}{2\sqrt{\pi}}\dfrac{C^{\frac{2\beta+1}{4}}}{n^{\frac{2\beta+3}{4}}}e^{2\sqrt{Cn}}, \text{ as } n \rightarrow \infty.$$* **Proof of Theorem [Theorem 2](#asym1){reference-type="ref" reference="asym1"}:** *Proof.* Note that $\sigma \overline{\text{mex}}({n})=D_3(n)$ and $\{D_3(n)\}$ is an increasing sequence of nonnegative real numbers, thus $\sigma \overline{\text{mex}}({n})$ is also an increasing sequence of nonnegative real numbers.\ Let $A(q)=(-q;q)^3_\infty$, where $a(n)=\sigma \overline{\text{mex}}({n})$ as in Proposition [Proposition 1](#asymingham){reference-type="ref" reference="asymingham"}.\ From [@bhorjaetal], $$\begin{aligned} \dfrac{1}{(e^{-t}; e^{-t})_\infty} \sim \sqrt{\dfrac{t}{2\pi}} e^{\frac{\pi^2}{6t}} \text{ as } t \rightarrow 0^+. 
\label{1}\end{aligned}$$ Moreover, we will use the following identity $$\begin{aligned} (-q;q)_{\infty}=\dfrac{1}{(q;q^2)_\infty}=\dfrac{(q^2;q^2)_\infty}{(q;q)_\infty}.\label{2}\end{aligned}$$ By ([\[1\]](#1){reference-type="ref" reference="1"}) and ([\[2\]](#2){reference-type="ref" reference="2"}), as $t \rightarrow 0^{+},$ $$(-e^{-t};e^{-t})_\infty = \dfrac{(e^{-2t};e^{-2t})_\infty}{(e^{-t};e^{-t})_\infty}\sim \frac{\sqrt{\frac{t}{2\pi}} e^{\frac{\pi^2}{6t}}}{\sqrt{\frac{2t}{2\pi}} e^{\frac{\pi^2}{12t}}}=\dfrac{1}{\sqrt{2}}e^{\frac{\pi^2}{12t}}.$$ Hence, as $t \rightarrow 0^{+},$ $$\begin{aligned} A(e^{-t})=(-e^{-t}; e^{-t})_\infty^3 \sim \left(\dfrac{1}{\sqrt{2}}e^{\frac{\pi^2}{12t}}\right)^3 = \dfrac{1}{2\sqrt{2}}e^{\frac{\pi^2}{4t}}. \label{3}\end{aligned}$$ Taking $\alpha=\dfrac{1}{2\sqrt{2}}$, $\beta=0$, and $C=\frac{\pi^2}{4}$ in Proposition [Proposition 1](#asymingham){reference-type="ref" reference="asymingham"}, we obtain $$\sigma \overline{\text{mex}}({n}) \sim \dfrac{\frac{1}{2\sqrt{2}}}{2\sqrt{\pi}} \dfrac{\left(\frac{\pi^2}{4}\right)^{1/4}}{n^{3/4}}e^{2\sqrt{\frac{\pi^2}{4}n}} = \dfrac{e^{\pi\sqrt{n}}}{8n^{3/4}}$$ as $n \rightarrow \infty$. ◻ ## Parity of $\sigma \overline{\text{mex}}({n})$ **Proof of Theorem [Theorem 3](#parity){reference-type="ref" reference="parity"}:** *Proof.* We have $$\begin{aligned} \displaystyle \sum_{n \geq 0} \sigma \overline{\text{mex}}({n})q^n &=(-q;q)_\infty^3 \\ &= \displaystyle \prod_{n=1}^{\infty} (1+q^n)^3\\ &\equiv \displaystyle \prod_{n=1}^{\infty} (1-q^n)^3 \ (\mathrm{mod}\ 2)\\ &=(q;q)_\infty^3\\ &=\displaystyle \sum_{j=0}^\infty (-1)^j(2j+1)q^{\frac{j(j+1)}{2}}, \text{ by Jacobi's identity \cite{jacobi}.} \end{aligned}$$ ◻ Comparing coefficients, we have that $\sigma \overline{\text{mex}}({n}) \equiv 1 \ (\mathrm{mod}\ 2)$ if $n = \frac{j(j+1)}{2}$ for some $j \in \mathbb{N}$, and $\sigma \overline{\text{mex}}({n}) \equiv 0 \ (\mathrm{mod}\ 2)$ otherwise.
This shows that $\sigma \overline{\text{mex}}({n})$ is almost always even and is odd exactly when $n$ is a triangular number. # Least $r$-gaps ## Generating Function of $\sigma_r \overline{\text{mex}}({n})$ In [@ballantinemerca], Ballantine and Merca proved that for $n \geq 0$ and $r \geq 1$, $$\displaystyle \sum_{k=0}^{\infty} p(n-rT_k) = \sigma_r \text{mex}(n),$$ where $T_k=\frac{k(k+1)}{2}$ denotes the $k$-th triangular number. We extend this result to overpartitions and present an analogous proof for the following proposition. **Proposition 2**. *For $n \geq 0$ and $r \geq 1,$ $$\displaystyle\sum_{k=0}^{\infty} \overline{p}(n-rT_k)=\sigma_r \overline{\text{mex}}({n}).$$* *Proof.* Fix $r \geq 1$. For each $k \geq 0,$ consider the staircase partition $$\delta_r(k)=(1^r, 2^r,..., (k-1)^r, k^r)$$ in which each part from 1 to $k$ is repeated $r$ times. We create an injection from the set of overpartitions of $n-rT_k$ into the set of overpartitions of $n$ with the following mapping: $$\phi_{r,n,k}: \overline{\mathcal{P}}(n-rT_k) \xhookrightarrow{} \overline{\mathcal{P}}(n)$$ where for an overpartition $\pi$ of $n-rT_k$, $\phi_{r,n,k}(\pi)$ is the overpartition obtained by inserting the parts of the staircase partition $\delta_r(k)$ into $\pi$ as non-overlined parts.\ For example, if $\pi= 4+\overline{3}+2+\overline{1}+1=11$, we have $\phi_{2,11,3}(\pi)=4+\overline{3}+2+\overline{1}+1+3+3+2+2+1+1=23.$\ Let $\mathcal{A}_{r,n,k}$ be the image of the overpartitions of $n-rT_k$ under $\phi_{r,n,k}$. We have $\overline{p}(n-rT_k)=|\mathcal{A}_{r,n,k}|$, and $\mathcal{A}_{r,n,k}$ consists of the overpartitions $\pi$ of $n$ satisfying $\overline{\text{mex}}_r({\pi})>k$.\ Now, suppose $\pi$ is an overpartition of $n$ with $\overline{\text{mex}}_r({\pi})=k$. Then $\pi \in \mathcal{A}_{r,n,i}$ for $i=0,1,...,k-1$ and $\pi \notin \mathcal{A}_{r,n,j}$ for $j \geq k.$ Thus, each overpartition $\pi$ of $n$ is counted by the summation $\displaystyle\sum_{k=0}^{\infty} \overline{p}(n-rT_k)$ exactly $\overline{\text{mex}}_r({\pi})$ times, so the sum equals $\displaystyle\sum_{\pi \in \overline{\mathcal{P}}(n)} \overline{\text{mex}}_r({\pi})=\sigma_r \overline{\text{mex}}({n}).$
◻ **Proof of Theorem [Theorem 4](#gen){reference-type="ref" reference="gen"}:** *Proof.* Adopting the convention that $\overline{p}(m)=0$ for $m<0$, we have $$\begin{aligned} \displaystyle \sum_{n=0}^{\infty} \sigma_r \overline{\text{mex}}({n})q^n &= \displaystyle \sum_{n=0}^{\infty} \left(\displaystyle\sum_{k=0}^{\infty} \overline{p}(n-rT_k)\right)q^n, \text{ by Proposition \ref{Tk}}\\ &= \displaystyle \sum_{n=0}^{\infty} \displaystyle\sum_{k=0}^{\infty} \overline{p}(n-rT_k)q^n\\ &=\displaystyle \sum_{n=0}^\infty \displaystyle \sum_{k=0}^\infty \overline{p}(n)q^{n+rT_k} \\ &=\left(\displaystyle \sum_{n=0}^\infty \overline{p}(n)q^n \right)\left(\displaystyle \sum_{k=0}^\infty q^{rT_k} \right).\end{aligned}$$ Note that the generating function for $\overline{p}(n)$ is $$\displaystyle \sum_{n=0}^\infty \overline{p}(n)q^n = \dfrac{(-q;q)_\infty}{(q;q)_\infty}.$$ Moreover, from [@ballantinemerca], $$\displaystyle \sum_{k=0}^\infty q^{rT_k} = \dfrac{(q^{2r};q^{2r})_\infty}{(q^r;q^{2r})_\infty}.$$ Thus, $$\displaystyle \sum_{n=0}^{\infty} \sigma_r \overline{\text{mex}}({n})q^n = \left(\displaystyle \sum_{n=0}^\infty \overline{p}(n)q^n \right)\left(\displaystyle \sum_{k=0}^\infty q^{rT_k} \right) = \dfrac{(-q;q)_\infty(q^{2r};q^{2r})_\infty}{(q;q)_\infty(q^{r};q^{2r})_\infty}.$$ ◻ ## Asymptotic Formula for $\sigma_r \overline{\text{mex}}({n})$ Here, we generalize the asymptotic result of Theorem [Theorem 2](#asym1){reference-type="ref" reference="asym1"} to least $r$-gaps.\ **Proof of Theorem [Theorem 5](#asym2){reference-type="ref" reference="asym2"}:** *Proof.* The map that appends a non-overlined part $1$ to an overpartition $a_1+a_2+\cdots+a_l$ of $n$ is an injection into the set of overpartitions of $n+1$, and appending a non-overlined $1$ never decreases the least $r$-gap. Since $\sigma_r \overline{\text{mex}}({n})$ is the sum of the least $r$-gaps taken over all overpartitions of $n$, we conclude that $\sigma_r \overline{\text{mex}}({n})$ is a weakly increasing sequence.\ Let
$A(q)=\dfrac{(-q;q)_\infty(q^{2r};q^{2r})_\infty}{(q;q)_\infty(q^{r};q^{2r})_\infty}$, where $a(n)=\sigma_r \overline{\text{mex}}({n})$ as in Proposition [Proposition 1](#asymingham){reference-type="ref" reference="asymingham"}.\ First, $$A(q)=\dfrac{(-q;q)_\infty(q^{2r};q^{2r})_\infty}{(q;q)_\infty(q^{r};q^{2r})_\infty}=\dfrac{(-q;q)_\infty(-q^r;q^r)^2_\infty (q^r;q^r)_\infty}{(q;q)_\infty}.$$ Hence, using ([\[3\]](#3){reference-type="ref" reference="3"}), as $t \rightarrow 0^{+},$ $$\begin{aligned} A(e^{-t}) & = \dfrac{(-e^{-t};e^{-t})_\infty(-e^{-rt};e^{-rt})_\infty^2(e^{-rt};e^{-rt})_\infty}{(e^{-t};e^{-t})_\infty}\\ & \sim \dfrac{\frac{1}{\sqrt{2}}e^{\frac{\pi^2}{12t}}\left(\frac{1}{\sqrt{2}}e^{\frac{\pi^2}{12rt}}\right)^2\sqrt{\frac{t}{2\pi}}e^{\frac{\pi^2}{6t}}}{\sqrt{\frac{rt}{2\pi}}e^{\frac{\pi^2}{6rt}}}\\ &=\dfrac{1}{2\sqrt{2r}}e^{\frac{\pi^2}{4t}}.\end{aligned}$$ Taking $\alpha=\dfrac{1}{2\sqrt{2r}}$, $\beta=0$, and $C=\frac{\pi^2}{4}$ in Proposition [Proposition 1](#asymingham){reference-type="ref" reference="asymingham"}, we obtain $$\sigma_r \overline{\text{mex}}({n}) \sim \dfrac{\frac{1}{2\sqrt{2r}}}{2\sqrt{\pi}} \dfrac{\left(\frac{\pi^2}{4}\right)^{1/4}}{n^{3/4}}e^{2\sqrt{\frac{\pi^2}{4}n}} = \dfrac{e^{\pi\sqrt{n}}}{8\sqrt{r}\,n^{3/4}}$$ as $n \rightarrow \infty$. ◻ # Distribution of $\sigma_r \overline{\text{mex}}({n})$ ## Preliminaries We first discuss some preliminaries about modular forms.
We define the complex upper half-plane $$\mathbb{H}=\{z \in \mathbb{C} \mid \text{Im}(z)>0\}$$ and the modular group $$\text{SL}_2(\mathbb{Z}) = \left\{\begin{pmatrix} a & b\\ c & d \end{pmatrix} \Big| \, ad-bc = 1; a,b,c,d \in \mathbb{Z}\right\}.$$ For $A={\begin{pmatrix} a & b\\ c & d \end{pmatrix}} \in \text{SL}_2(\mathbb{Z}),$ the modular group $\text{SL}_2(\mathbb{Z})$ acts on $\mathbb{H}$ by the following linear fractional transformation: $$Az= {\begin{pmatrix} a & b\\ c & d \end{pmatrix}}z =\dfrac{az+b}{cz+d}.$$ Moreover, if $N \in \mathbb{Z}^+$, we define the following **congruence subgroups** of $\text{SL}_2(\mathbb{Z})$ of level $N$: $$\Gamma_0(N):=\left\{\begin{pmatrix} a & b\\ c & d \end{pmatrix} \in \text{SL}_2(\mathbb{Z}) \Big| \begin{pmatrix} a & b\\ c & d \end{pmatrix} \equiv \begin{pmatrix} * & *\\ 0 & * \end{pmatrix} \mod N \right\}$$ $$\Gamma_1(N):=\left\{\begin{pmatrix} a & b\\ c & d \end{pmatrix} \in \text{SL}_2(\mathbb{Z}) \Big| \begin{pmatrix} a & b\\ c & d \end{pmatrix} \equiv \begin{pmatrix} 1 & *\\ 0 & 1 \end{pmatrix} \mod N \right\}$$ $$\Gamma(N):=\left\{\begin{pmatrix} a & b\\ c & d \end{pmatrix} \in \text{SL}_2(\mathbb{Z}) \Big| \begin{pmatrix} a & b\\ c & d \end{pmatrix} \equiv \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix} \mod N\right\}.$$ Note that the following inclusions hold: $$\Gamma(N) \subseteq \Gamma_1(N) \subseteq \Gamma_0(N) \subseteq \text{SL}_2(\mathbb{Z}).$$ Modular forms are complex functions on $\mathbb{H}$ that transform nicely under these congruence subgroups of $\text{SL}_2({\mathbb{Z}}).$ For this paper, we are interested in modular forms that transform nicely with respect to $\Gamma_0(N)$ and have a Nebentypus character $\chi$, defined as follows. **Definition 3**. *Let $\chi$ be a Dirichlet character modulo $N$ (a positive integer).
Then a modular form $f \in M_k(\Gamma_1(N))$ has Nebentypus character $\chi$ if $$f\left(\dfrac{az+b}{cz+d}\right)=\chi(d)(cz+d)^k f(z)$$ for all $z \in \mathbb{H}$ and all $\begin{pmatrix} a & b\\ c & d \end{pmatrix} \in \Gamma_0(N)$. The space of all such modular forms is denoted $M_k(N,\chi)$.* In particular, we look at modular forms involving the Dedekind eta function, which is defined as follows. **Definition 4**. *The Dedekind eta function is the function $\eta(z)$ defined for $z \in \mathbb{H}$ by $$\eta(z) = e^{\frac{\pi iz}{12}} \prod_{n=1}^{\infty} (1-e^{2\pi i nz}).$$* *Defining $q:=e^{2\pi i z}$, we have: $$\eta(z)=q^{\frac{1}{24}} \prod_{n=1}^{\infty}(1-q^n).$$* **Definition 5**. *A function $f(z)$ is called an **eta-product** if it is expressible as a finite product of the form $$f(z)=\prod_{\delta \mid N} \eta^{r_\delta}(\delta z)$$ where $N$ is a positive integer and each $r_\delta$ is an integer.* The next two theorems will be used to prove that an eta-product is a holomorphic modular form. **Theorem 7** (Gordon, Hughes, Newman). *If $f(z)= \prod_{\delta \mid N} \eta^{r_\delta}(\delta z)$ is an eta-product for which $$\begin{aligned} \displaystyle \sum_{\delta \mid N} \delta r_{\delta} \equiv 0 \ (\mathrm{mod}\ 24) \label{4} \end{aligned}$$ and $$\begin{aligned} \displaystyle \sum_{\delta \mid N} \dfrac{N}{\delta}r_{\delta} \equiv 0 \ (\mathrm{mod}\ 24) \label{5}\end{aligned}$$ hold, then $f(z)$ satisfies $$f(Az)=\chi(d)(cz+d)^kf(z)$$ for all $A=\begin{pmatrix} a & b\\ c & d \end{pmatrix} \in \Gamma_0(N)$ where $k=\dfrac{1}{2}\displaystyle \sum_{\delta \mid N} r_\delta.$ Here the character $\chi$ is defined by $\chi(d)=\left(\dfrac{(-1)^ks}{d}\right)$ and $s=\displaystyle\prod_{\delta \mid N} \delta^{r_\delta}.$* **Theorem 8** (Ligozat).
*Let $c,d$ and $N$ be positive integers with $d \mid N$ and $\text{gcd}(c,d)=1.$ With the notation as above, if the eta-product $f(z)$ satisfies ([\[4\]](#4){reference-type="ref" reference="4"}) and ([\[5\]](#5){reference-type="ref" reference="5"}), then the order of vanishing of $f(z)$ at the cusp $\frac{c}{d}$ is $$\dfrac{1}{24}\displaystyle\sum_{\delta \mid N} \dfrac{N\text{gcd}(d,\delta)^2r_\delta}{\text{gcd}\left(d,\frac{N}{d}\right)d\delta}.$$* ## Proof of Main Result Before we prove Theorem [Theorem 6](#main){reference-type="ref" reference="main"}, we first prove two propositions. **Proposition 3**. *Let $k$ be a positive integer. Then $$f_{r,k}(z):=\dfrac{\eta(48z)\eta(24rz)^{2^k-1}}{\eta(24z)^2\eta(48rz)^{2^{k-1}-2}} \equiv \displaystyle\sum_{n=0}^\infty \sigma_r\overline{\text{mex}}(n)q^{24n+3r} \ (\text{mod } 2^k).$$* **Proof:** Consider $$g(z)=\dfrac{\eta(24rz)^2}{\eta(48rz)}= \dfrac{(q^{24r};q^{24r})^2_\infty}{(q^{48r};q^{48r})_\infty}.$$\ By the binomial theorem, $(q^r;q^r)^{2^k}_\infty \equiv (q^{2r};q^{2r})^{2^{k-1}}_\infty \ (\mathrm{mod}\ 2^k).$\ Thus, $(q^{24r};q^{24r})^{2^k}_\infty \equiv (q^{48r};q^{48r})^{2^{k-1}}_\infty \ (\mathrm{mod}\ 2^k)$, and so $$g^{2^{k-1}}(z)=\dfrac{(q^{24r};q^{24r})^{2^k}_\infty}{(q^{48r};q^{48r})^{2^{k-1}}_\infty} \equiv 1 \ (\mathrm{mod}\ 2^k).$$ Now, consider $$\begin{aligned} \dfrac{\eta(48z)\eta(48rz)^2}{\eta(24z)^2\eta(24rz)}\cdot g^{2^{k-1}}(z)&= \dfrac{\eta(48z)\eta(48rz)^2}{\eta(24z)^2\eta(24rz)}\cdot \dfrac{\eta(24rz)^{2^k}}{\eta(48rz)^{2^{k-1}}}\\ &=\dfrac{\eta(48z)\eta(24rz)^{2^k-1}}{\eta(24z)^2\eta(48rz)^{2^{k-1}-2}}\\ &=f_{r,k}(z).\end{aligned}$$ Observe that $$\begin{aligned} f_{r,k}(z)&=\dfrac{\eta(48z)\eta(48rz)^2}{\eta(24z)^2\eta(24rz)}\cdot g^{2^{k-1}}(z)\\ &\equiv\dfrac{\eta(48z)\eta(48rz)^2}{\eta(24z)^2\eta(24rz)} \ (\mathrm{mod}\ 2^k)\\ &=q^{3r} \dfrac{(q^{48};q^{48})_\infty(q^{48r};q^{48r})^2_\infty}{(q^{24};q^{24})^2_\infty (q^{24r};q^{24r})_\infty}.\end{aligned}$$ Note that $$\begin{aligned}
\displaystyle\sum_{n=0}^\infty \sigma_r\overline{\text{mex}}(n)q^n &= \dfrac{(-q;q)_\infty (q^{2r};q^{2r})_\infty}{(q;q)_\infty (q^{r};q^{2r})_\infty}\\ &=\dfrac{(-q;q)_\infty (q^{2r};q^{2r})^2_\infty}{(q;q)_\infty (q^{r};q^{r})_\infty}\\ &=\dfrac{(q^2;q^2)_\infty (q^{2r};q^{2r})^2_\infty}{(q;q)^2_\infty (q^{r};q^{r})_\infty}.\end{aligned}$$ Hence, $$f_{r,k}(z) \equiv q^{3r} \displaystyle\sum_{n=0}^\infty \sigma_r\overline{\text{mex}}(n)q^{24n} = \displaystyle\sum_{n=0}^\infty \sigma_r\overline{\text{mex}}(n)q^{24n+3r} \ (\mathrm{mod}\ 2^k). \qquad \text{◻}$$ **Proposition 4**. *Let $r=2^m \cdot 3^n$ where $m,n \in \mathbb{Z}_{\geq 0}$ and $k \geq m+2n+1$ be an integer greater than 3. Then $f_{r,k}(z) \in M_{2^{k-2}}(\Gamma_0(N),\chi)$, where $$N=\begin{cases} 2^7 \cdot 3^{n+1}, & m=0,1,2,\\ 2^{m+4}\cdot 3^{n+1}, & m \geq 3. \end{cases}$$* *Proof.* Let $r=2^m \cdot 3^n$ where $m,n \in \mathbb{Z}_{\geq 0}.$\ First, the weight of $f_{r,k}(z)$ is: $$\ell=\dfrac{1}{2}\displaystyle \sum_{\delta |N} r_{\delta} = \dfrac{1}{2}\left[1+(2^k-1)-2-(2^{k-1}-2)\right]=2^{k-1}-2^{k-2}=2^{k-2}.$$ Second, since $f_{r,k}(z)=\dfrac{\eta(48z)\eta(24rz)^{2^k-1}}{\eta(24z)^2\eta(48rz)^{2^{k-1}-2}},$ we have $\delta_1=48, \delta_2=24r, \delta_3=24$ and $\delta_4=48r$ with $r_{48}=1, r_{24r}=2^k-1, r_{24}=-2,$ and $r_{48r}=2-2^{k-1}.$\ Clearly, $f_{r,k}(z)$ satisfies equation ([\[4\]](#4){reference-type="ref" reference="4"}) since $$\displaystyle \sum_{\delta | N} \delta r_\delta = 48\cdot 1 + 24r \cdot (2^k-1) + 24 \cdot (-2) + 48r \cdot (2-2^{k-1}) \equiv 0 \ (\mathrm{mod}\ 24).$$ Moreover, to satisfy equation ([\[5\]](#5){reference-type="ref" reference="5"}), we can let $N=48ru,$ where $u$ is the smallest positive integer satisfying $$\displaystyle \sum_{\delta | N} \dfrac{N}{\delta} r_{\delta} \equiv 0 \ (\mathrm{mod}\ 24).$$ Then, $$\begin{aligned} \displaystyle \sum_{\delta | N} \dfrac{N}{\delta}r_\delta
&=\dfrac{48ru}{48}+\dfrac{48ru}{24r}(2^k-1)-\dfrac{48ru}{24}(2)-\dfrac{48ru}{48r}(2^{k-1}-2)\\ &=ru+2u(2^k-1)-4ru-u(2^{k-1}-2)\\ &=u(2^{k+1}-2^{k-1}-3r)\\ &=u(3\cdot 2^{k-1} -3r) \equiv 0 \ (\mathrm{mod}\ 24). \end{aligned}$$ We have the following: - If $m=0,$ then $u=8$, and so $N=48 \cdot (2^0\cdot3^n) \cdot 8 =2^7\cdot3^{n+1}.$ - If $m=1,$ then $u=4$, and so $N=48 \cdot (2^1\cdot3^n) \cdot 4 =2^7\cdot3^{n+1}.$ - If $m=2,$ then $u=2$, and so $N=48 \cdot (2^2\cdot3^n) \cdot 2 =2^7\cdot3^{n+1}.$ - If $m\geq 3,$ then $u=1$, and so $N=48\cdot (2^m\cdot3^n)\cdot 1 =2^{m+4}\cdot3^{n+1}.$ To prove that $f_{r,k}(z) \in M_{2^{k-2}}(\Gamma_0(N),\chi)$, it suffices to show that $f_{r,k}(z)$ is holomorphic at all cusps of $\Gamma_0(N).$\ From Theorem [Theorem 8](#ligozat){reference-type="ref" reference="ligozat"}, the order of vanishing of $f_{r,k}(z)$ at the cusp $\frac{c}{d}$ where $d|N$ and gcd$(c,d)=1$, is: $$\dfrac{N}{24} \displaystyle \sum_{\delta |N} \dfrac{\text{gcd}(d,\delta)^2r_\delta}{\text{gcd}\left(d, \frac{N}{d}\right)d\delta}.$$ Hence, $f_{r,k}(z)=\dfrac{\eta(48z)\eta(24rz)^{2^k-1}}{\eta(24z)^2\eta(48rz)^{2^{k-1}-2}}$ is holomorphic at the cusp $\frac{c}{d}$ if and only if $$\dfrac{N}{24} \displaystyle \sum_{\delta |N} \dfrac{\text{gcd}(d,\delta)^2r_\delta}{\text{gcd}\left(d, \frac{N}{d}\right)d\delta} \geq 0 \iff \displaystyle \sum_{\delta |N} \dfrac{\text{gcd}(d,\delta)^2r_\delta}{\delta} \geq 0.$$ That is, $$\dfrac{\text{gcd}(d,48)^2}{48}-2\dfrac{\text{gcd}(d,24)^2}{24}+(2^k-1)\dfrac{\text{gcd}(d,24r)^2}{24r}-(2^{k-1}-2)\dfrac{\text{gcd}(d,48r)^2}{48r} \geq 0.$$ Equivalently, $$r\text{gcd}(d,48)^2-4r\text{gcd}(d,24)^2+(2^{k+1}-2)\text{gcd}(d,24r)^2-(2^{k-1}-2)\text{gcd}(d,48r)^2 \geq 0. \label{LHS}$$ Now, if $N=2^7\cdot 3^{n+1},$ then $d=2^t\cdot 3^s, 0 \leq t \leq 7, 0 \leq s \leq n+1$.
Similarly, if $N=2^{m+4}\cdot 3^{n+1},$ then $d=2^t\cdot 3^s, 0 \leq t \leq m+4, 0 \leq s \leq n+1$.\ Let $(\star)$ be the left-hand side of inequality ([\[LHS\]](#LHS){reference-type="ref" reference="LHS"}). We now prove that $(\star) \geq 0$ for $k\geq m+2n+1$. We divide our proof into 6 cases.\ **Case 1: $d=1$**\ We have $\text{gcd}(d,48)=1$, $\text{gcd}(d,24)=1,$ $\text{gcd}(d,24r)=1$, and $\text{gcd}(d,48r)=1$. $$\begin{aligned} (\star) &=(2^m\cdot3^n)-4(2^m\cdot3^n)+(2^{k+1}-2)-(2^{k-1}-2)\\ &=2^{k+1}-2^{k-1}-3\cdot(2^m\cdot 3^n)\\ &=3\cdot 2^{k-1} - 2^{m} \cdot 3^{n+1}.\end{aligned}$$ If we let $k \geq m+2n+1$, then $$\begin{aligned} 3\cdot 2^{k-1} &\geq 3\cdot 2^{m+2n}\\ &= 3\cdot 2^{m} \cdot 2^{2n}\\ &\geq 3\cdot 2^{m} \cdot 3^{n}\\ & = 2^{m} \cdot 3^{n+1},\end{aligned}$$ proving that $(\star) \geq 0$ for $k \geq m+2n+1$.\ **Case 2: $d=3^s, 1 \leq s \leq n+ 1$**\ We have $\text{gcd}(d,48)=3$, $\text{gcd}(d,24)=3,$ $\text{gcd}(d,24r)=\text{gcd}(d, 2^{m+3}\cdot3^{n+1})=3^s$, and $\text{gcd}(d,48r)=\text{gcd}(d, 2^{m+4}\cdot3^{n+1})=3^s$. $$\begin{aligned} (\star) &=(2^m\cdot3^n)\cdot9 -4(2^m\cdot3^n)\cdot 9 +(2^{k+1}-2) \cdot 3^{2s}-(2^{k-1}-2)\cdot 3^{2s}\\ &=3^{2s}\cdot 3\cdot 2^{k-1} - 3 \cdot 9 \cdot (2^m\cdot3^n)\\ &=2^{k-1} \cdot 3^{2s+1} -2^m\cdot 3^{n+3}.\end{aligned}$$ If we let $k \geq m+2n-4s+5$, then $$\begin{aligned} 2^{k-1} \cdot 3^{2s+1} & \geq 2^{m+2n-4s+4} \cdot 3^{2s+1} \\ &= 2^{m} \cdot 2^{2n-4s+4} \cdot 3^{2s+1} \\ &\geq 2^{m} \cdot 3^{n-2s+2} \cdot 3^{2s+1} \\ & = 2^{m} \cdot 3^{n+3},\end{aligned}$$ proving that $(\star) \geq 0$ for $k \geq m+2n-4s+5$. Moreover, since $m+2n-4s+5 \leq m+2n+1$, it follows that $(\star) \geq 0$ for $k \geq m+2n+1$.\ **Case 3: $d=2^t, 0 < t \leq 3$**\ We have $\text{gcd}(d,48)=2^t$, $\text{gcd}(d,24)=2^t,$ $\text{gcd}(d,24r)=2^t$, and $\text{gcd}(d,48r)=2^t$.
$$\begin{aligned} (\star) &=(2^m\cdot3^n)\cdot2^{2t} -4(2^m\cdot3^n)\cdot 2^{2t} +(2^{k+1}-2) \cdot 2^{2t}-(2^{k-1}-2)\cdot 2^{2t}\\ &=2^{2t}\cdot 3\cdot 2^{k-1} - 3 \cdot 2^{2t} \cdot (2^m\cdot3^n)\\ &=2^{k+2t-1} \cdot 3 -2^{m+2t}\cdot 3^{n+1}\end{aligned}$$ If we let $k \geq m+2n+1$, then $$\begin{aligned} 2^{k+2t-1} \cdot 3 & \geq 2^{m+2n+2t} \cdot 3 \\ &= 2^{m+2t} \cdot 2^{2n} \cdot 3\\ &\geq 2^{m+2t} \cdot 3^{n} \cdot 3 \\ & = 2^{m+2t} \cdot 3^{n+1},\end{aligned}$$ proving that $(\star) \geq 0$ for $k \geq m+2n+1$.\ **Case 4: $d=2^t, 3< t \leq m+4$**\ We have $\text{gcd}(d,48)=2^4$, $\text{gcd}(d,24)=2^3,$ $\text{gcd}(d,24r)=\text{gcd}(d, 2^{m+3}\cdot3^{n+1})=2^t$, and $\text{gcd}(d,48r)=\text{gcd}(d, 2^{m+4}\cdot3^{n+1})=2^t$. $$\begin{aligned} (\star)&=(2^m\cdot3^n)\cdot2^{8} -4(2^m\cdot3^n)\cdot 2^{6} +(2^{k+1}-2) \cdot 2^{2t}-(2^{k-1}-2)\cdot 2^{2t}\\ &=2^{2t}\cdot 3\cdot 2^{k-1}\\ &=2^{k+2t-1} \cdot 3 \geq 0 \text{ for } k \geq 1\end{aligned}$$ **Case 5: $d=2^t \cdot 3^s, 0 < t \leq 3, 1 \leq s \leq n+1$**\ We have $\text{gcd}(d,48)=2^t\cdot 3$, $\text{gcd}(d,24)=2^t \cdot 3,$ $\text{gcd}(d,24r)=\text{gcd}(d, 2^{m+3}\cdot3^{n+1})=2^t \cdot 3^s$, and $\text{gcd}(d,48r)=\text{gcd}(d, 2^{m+4}\cdot3^{n+1})=2^t \cdot 3^s$. $$\begin{aligned} (\star)=&(2^m\cdot3^n)\cdot(2^{2t}\cdot 3^2) -4(2^m\cdot3^n)\cdot(2^{2t}\cdot 3^2)+(2^{k+1}-2)\cdot(2^{2t}\cdot 3^{2s})\\ &-(2^{k-1}-2)\cdot(2^{2t}\cdot 3^{2s})\\ =&(2^{2t}\cdot 3^{2s}) \cdot 3\cdot 2^{k-1} - 3 \cdot(2^{2t}\cdot 3^2) \cdot (2^m\cdot3^n)\\ =&2^{k+2t-1} \cdot 3^{2s+1} -2^{m+2t}\cdot 3^{n+3}\end{aligned}$$ If we let $k \geq m+2n-4s+5$, then $$\begin{aligned} 2^{k+2t-1} \cdot 3^{2s+1} & \geq 2^{m+2n+2t-4s+4} \cdot 3^{2s+1} \\ &= 2^{m+2t} \cdot 2^{2n-4s+4} \cdot 3^{2s+1}\\ &\geq 2^{m+2t} \cdot 3^{n-2s+2} \cdot 3^{2s+1} \\ & = 2^{m+2t} \cdot 3^{n+3},\end{aligned}$$ proving that $(\star) \geq 0$ for $k \geq m+2n-4s+5$. 
Moreover, since $m+2n-4s+5 \leq m+2n+1$, it follows that $(\star) \geq 0$ for $k \geq m+2n+1$.\ **Case 6: $d=2^t \cdot 3^s, 3 < t \leq m+4, 1 \leq s \leq n+1$**\ We have $\text{gcd}(d,48)=2^4\cdot 3$, $\text{gcd}(d,24)=2^3 \cdot 3,$ $\text{gcd}(d,24r)=\text{gcd}(d, 2^{m+3}\cdot3^{n+1})=2^t \cdot 3^s$, and $\text{gcd}(d,48r)=\text{gcd}(d, 2^{m+4}\cdot3^{n+1})=2^t \cdot 3^s$. $$\begin{aligned} (\star)=&(2^m\cdot3^n)\cdot(2^8\cdot 3^2) -4(2^m\cdot3^n)\cdot(2^6\cdot 3^2)+(2^{k+1}-2)\cdot(2^{2t}\cdot 3^{2s})\\ &-(2^{k-1}-2)\cdot(2^{2t}\cdot 3^{2s})\\ =&(2^{2t}\cdot 3^{2s}) \cdot 3\cdot 2^{k-1}\\ =&2^{k+2t-1}\cdot 3^{2s+1} \geq 0 \text{ for } k \geq 1.\end{aligned}$$ In all possible cases, we have that $(\star) \geq 0$ for $k \geq m+2n+1$ where $k$ is a positive integer greater than 3. Hence, by Theorem [Theorem 8](#ligozat){reference-type="ref" reference="ligozat"}, $f_{r,k}(z)$ is holomorphic at all cusps of $\Gamma_0(N)$, and therefore $f_{r,k}(z)$ is a modular form of weight $2^{k-2}.$ ◻ Lastly, we will use Serre's Theorem [@serre] regarding the coefficients of the Fourier expansion of a holomorphic modular form to prove our final result. **Theorem 9** (Serre). *Let $k,m$ be positive integers. If $f(z) \in M_k(\Gamma_0(N), \chi)$ has Fourier expansion $f(z) = \sum_{n=0}^{\infty} c(n)q^n \in \mathbb{Z}[[q]],$ then there is a constant $\alpha >0$ such that $$\#\{n\leq X: c(n) \not\equiv 0\mod m\} = \mathcal{O}\left(\dfrac{X}{\log^\alpha X}\right).$$* **Proof of Theorem [Theorem 6](#main){reference-type="ref" reference="main"}** *Proof.* Let $r = 2^m \cdot 3^n$ where $m,n \in \mathbb{Z}_{\geq 0}$, and let $k \geq m+2n+1$ be an integer greater than 3.
Since $f_{r,k}(z) \in M_{2^{k-2}}(\Gamma_0(N),\chi)$ and the Fourier coefficients of $f_{r,k}(z)$ are integers, by Serre's Theorem we can find a constant $\alpha >0$ such that $$\#\{n\leq X: \sigma_r\overline{\text{mex}}(n) \not\equiv 0\mod 2^k\} = \mathcal{O}\left(\dfrac{X}{\log^\alpha X}\right)$$ for $k \geq m+2n+1.$\ Then $$\displaystyle \lim_{X \to +\infty} \dfrac{\#\{n\leq X: \sigma_r\overline{\text{mex}}(n) \equiv 0\mod 2^k\}}{X}=1.$$ Equivalently, since the nonzero Fourier coefficients of $f_{r,k}(z)$ lie along the arithmetic progression $24n+3r$, for almost every nonnegative integer $n$ the value $\sigma_r\overline{\text{mex}}(n)$ is a multiple of $2^k$, where $r = 2^m \cdot 3^n$ with $m,n \in \mathbb{Z}_{\geq 0}$ and $k \geq m+2n+1$ is an integer greater than 3. Since divisibility by a higher power of $2$ implies divisibility by every lower one, it follows that for almost every $n$, $\sigma_r\overline{\text{mex}}(n)$ is a multiple of $2^k$ for every $k \geq 1$. ◻ G. E. Andrews, *The Theory of Partitions,* Addison-Wesley Publishing Company, 1976. G. E. Andrews and D. Newman, *Partitions and the minimal excludant*, Ann. Comb., 23 (2019), 249--254. C. Ballantine and M. Merca, *Bisected theta series, least r-gaps in partitions, and polygonal numbers*, Ramanujan J. 52.2 (2020), 433--444. S. C. Bhoria, P. Eyyunni, P. S. Kaur and B. Maji, *Minimal excludant over partitions into distinct parts*, arXiv:2105.13875v1, 2021. K. Chakraborty and C. Ray, *Distribution of generalized mex-related integer partitions*, Hardy-Ramanujan Journal, Hardy-Ramanujan Society (Special Commemorative volume in honour of Srinivasa Ramanujan), 43 (2021), 122--128. P. Grundy, *Mathematics and Games*, Eureka 2 (1939), 6--9. A. E. Ingham, *A Tauberian theorem for partitions*, Ann. of Math. (2) 42 (1941), 1075--1090. C. Jacobi, *Fundamenta nova theoriae functionum ellipticarum*, Mathematische Werke 1 (1829), 49--239. L. J. P. Kilford, *Modular Forms: A Classical and Computational Introduction*, Imperial College Press, 2008. R. Sprague, *Über mathematische Kampfspiele*, Tohoku Mathematical Journal, 41 (1935), 438--444. J.-P. Serre, *Divisibilité des coefficients des formes modulaires de poids entier*, C.R. Acad. Sci.
Paris (A) 279 (1974), 679--682. [^1]: Institute of Mathematics, University of the Philippines, Diliman, Quezon City 1101, Philippines; `vmaricheta@math.upd.edu.ph` [^2]: Institute of Mathematics, University of the Philippines, Diliman, Quezon City 1101, Philippines; `jadonato@math.upd.edu.ph`
{ "id": "2309.04398", "title": "Minimal Excludant over Overpartitions", "authors": "Victor Manuel R. Aricheta, Judy Ann L. Donato", "categories": "math.NT", "license": "http://creativecommons.org/licenses/by-sa/4.0/" }
--- abstract: | Let $G$ be the Grothendieck group of a commutative monoid $M$. We prove that for any multiplicative set $S$ of homogeneous elements of an $M$-graded ring $R=\bigoplus\limits_{m\in M}R_{m}$, the localization ring $S^{-1}R=\bigoplus\limits_{x\in G}(S^{-1}R)_{x}$ is a $G$-graded ring where each homogeneous component $(S^{-1}R)_{x}$ is an additive subgroup of $S^{-1}R$ consisting of all fractions $f\in S^{-1}R$ (including the zero element) such that if $f\neq0$ then it has a presentation of the form $f=r/s$ where $r\in R$ and $s\in S$ are homogeneous elements for which $x=[\operatorname{deg}(r),\operatorname{deg}(s)]$. As an application, for any multiplicative set $S$ of a ring $R$ we then have the canonical isomorphism of $G$-graded rings $T^{-1}(R[M])\simeq(S^{-1}R)[G]$ where $T=\{s\epsilon_{m}: s\in S, m\in M\}$ is a multiplicative set of the monoid-ring $R[M]$. Next, we show that a cancellative monoid is a totally ordered monoid if and only if its Grothendieck group is a torsion-free group. As an application, Levi's famous theorem (which asserts that every torsion-free Abelian group is a totally ordered group) is easily deduced. Finally, we show that the group of units of any localization ring $S^{-1}R$ is canonically isomorphic to the Grothendieck group of the saturation of $S$. address: | Department of Mathematics, Faculty of Basic Sciences, University of Maragheh\ P. O. Box 55136-553, Maragheh, Iran. author: - Abolfazl Tarizadeh title: Grading of homogeneous localization by the Grothendieck group --- # Introduction and Preliminaries The Grothendieck group of a commutative monoid is a fundamental construction of mathematics. In fact, several of the basic structures of mathematics derive from Grothendieck groups. For instance, the set of integers (especially the negative integers) is indeed the Grothendieck group of the additive monoid of natural numbers.
The role of Grothendieck groups in some other advanced structures of mathematics is also very influential, although the importance of this hidden role is sometimes easy to overlook. In this article, in particular, we will observe that the Grothendieck group plays a main role in graded ring theory. Let $G$ be the Grothendieck group of a commutative monoid $M$. We prove that for any multiplicative set $S$ of homogeneous elements of an $M$-graded ring $R=\bigoplus\limits_{m\in M}R_{m}$, the localization ring $S^{-1}R=\bigoplus\limits_{x\in G}(S^{-1}R)_{x}$ is a $G$-graded ring where each homogeneous component $(S^{-1}R)_{x}$ is an additive subgroup of $S^{-1}R$ consisting of all fractions $f\in S^{-1}R$ (including the zero element) such that if $f\neq0$ then it has a presentation of the form $f=r/s$ where $r\in R$ and $s\in S$ are homogeneous elements for which $x=[\operatorname{deg}(r),\operatorname{deg}(s)]$. In particular, we easily deduce the well-known result that for any multiplicative set $S$ of homogeneous elements of an $\mathbb{N}$-graded ring $R=\bigoplus\limits_{n\geqslant0}R_{n}$, the localization $S^{-1}R=\bigoplus\limits_{n\in \mathbb{Z}}(S^{-1}R)_{n}$ is a $\mathbb{Z}$-graded ring. As a second application, for any multiplicative set $S$ of a ring $R$ we obtain the following canonical isomorphism of $G$-graded rings $T^{-1}(R[M])\simeq(S^{-1}R)[G]$ where $T=\{s\epsilon_{m}: s\in S, m\in M\}$ is a multiplicative set of the monoid-ring $R[M]$. In particular, we recover the well-known fact that the ring of Laurent polynomials $R[x,x^{-1}]$ is canonically isomorphic to the group-ring $R[\mathbb{Z}]$. As further evidence of the role of the Grothendieck group, we prove that a cancellative monoid is a totally ordered monoid if and only if its Grothendieck group is a torsion-free group.
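For a concrete instance of the isomorphism $R[x,x^{-1}]\simeq R[\mathbb{Z}]$ just mentioned, elements of the group-ring $R[\mathbb{Z}]$ can be modelled as finitely supported functions $\mathbb{Z}\to R$ with multiplication given by convolution; under the isomorphism the basis element $\epsilon_{n}$ corresponds to $x^{n}$. A minimal sketch (the dictionary representation and the helper name `gmul` are our own illustrative choices, not anything from the text):

```python
def gmul(f, g):
    # convolution product in the group-ring R[Z]: eps_i * eps_j = eps_{i+j},
    # extended R-bilinearly; zero coefficients are dropped from the result
    h = {}
    for i, a in f.items():
        for j, b in g.items():
            h[i + j] = h.get(i + j, 0) + a * b
    return {k: v for k, v in h.items() if v != 0}

# the Laurent polynomial x + x^{-1}, viewed as 1*eps_1 + 1*eps_{-1}
f = {1: 1, -1: 1}
square = gmul(f, f)  # corresponds to (x + x^{-1})^2 = x^2 + 2 + x^{-2}
```

Allowing negative exponents in the dictionary keys is exactly what distinguishes $R[\mathbb{Z}]$ from the polynomial ring $R[\mathbb{N}]=R[x]$.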
As an application, Levi's famous theorem (which asserts that every torsion-free Abelian group is a totally ordered group) is easily deduced. Finally, we show that the group of units of any localization ring $S^{-1}R$ is canonically isomorphic to the Grothendieck group of the saturation of $S$. In particular, if $\mathfrak{p}$ is a prime ideal of a ring $R$ then the Grothendieck group of the multiplicative monoid $R\setminus\mathfrak{p}$ is canonically isomorphic to the group of units of the local ring $R_{\mathfrak{p}}$. As another application, the Grothendieck group of the set of non-zero-divisors of $R$ is canonically isomorphic to the group of units of the total ring of fractions of $R$. In this article, all monoids and rings are assumed to be commutative. Recall that the Grothendieck group of a commutative monoid $M$ is constructed in the following way. We first define a relation over the set $M\times M$ as $(a,b)\sim(c,d)$ if there exists some $m\in M$ such that $(a+d)+m=(b+c)+m$. It can be seen that this is an equivalence relation. Here we denote the equivalence class containing an ordered pair $(a,b)$ simply by $[a,b]$, and we denote by $G=\{[a,b]: a,b\in M\}$ the set of all equivalence classes obtained by this relation. Then the set $G$, equipped with the operation $[a,b]+[c,d]=[a+c,b+d]$, is an Abelian group. Indeed, $[0,0]$ is the identity element of $G$ where $0$ is the identity of $M$, and for each $[a,b]\in G$ its inverse is $[b,a]$. The group $G$ is called the *Grothendieck group of $M$*. Note that in the above construction, the commutativity of the monoid $M$ plays a vital role. The canonical map $f:M\rightarrow G$ given by $m\mapsto[m,0]$ is a morphism of monoids and the pair $(G,f)$ satisfies the following universal property: for any such pair $(H,g)$, i.e., $H$ an Abelian group and $g:M\rightarrow H$ a morphism of monoids, there exists a unique morphism of groups $h:G\rightarrow H$ such that $g=h\circ f$. In fact, $h([a,b])=g(a)-g(b)$.
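For the cancellative monoid $(\mathbb{N},+)$ the construction above becomes completely explicit: there $(a,b)\sim(c,d)$ if and only if $a+d=b+c$, so every class $[a,b]$ has a unique representative with $\min(a,b)=0$. A small sketch of this special case (function names are our own):

```python
def norm(a, b):
    # canonical representative of the class [a, b] in the Grothendieck
    # group of (N, +): since (a, b) ~ (c, d) iff a + d = b + c, subtracting
    # min(a, b) from both entries picks one representative per class
    m = min(a, b)
    return (a - m, b - m)

def add(x, y):
    # [a, b] + [c, d] = [a + c, b + d]
    return norm(x[0] + y[0], x[1] + y[1])

def neg(x):
    # the inverse of [a, b] is [b, a]
    return norm(x[1], x[0])

def canonical(m):
    # the canonical monoid morphism M -> G, m |-> [m, 0]
    return (m, 0)
```

Here $[a,b]$ plays the role of the integer $a-b$, recovering $\mathbb{Z}$ as the Grothendieck group of $\mathbb{N}$; injectivity of `canonical` reflects the cancellation property of $\mathbb{N}$.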
The canonical map $f:M\rightarrow G$ is injective if and only if $M$ has the cancellation property. Note that the Grothendieck group of a commutative monoid $M$ is trivial if and only if $M=M^{0}$ where we call $M^{0}=\{x\in M:\exists y\in M, x+y=y\}$ the quasi-zero submonoid of $M$. For example, the Grothendieck group of the multiplicative monoid of natural numbers $\mathbb{N}=\{0,1,2,\ldots\}$ is trivial. # Grading of homogeneous localization Let $R=\bigoplus\limits_{m\in M}R_{m}$ be an $M$-graded ring with $M$ a monoid. Recall that every nonzero element $0\neq r\in R_{m}$ is called a homogeneous element of $R$ of degree $m\in M$, and we write $\operatorname{deg}(r)=m$. If $r,r'\in R$ are homogeneous elements with $rr'\neq0$, then $rr'$ is homogeneous and $\operatorname{deg}(rr')=\operatorname{deg}(r)+\operatorname{deg}(r')$. Let $S$ be a multiplicative set of homogeneous elements of $R$. Let $G=\{[a,b]: a,b\in M\}$ be the Grothendieck group of $M$. For each $x\in G$, by $(S^{-1}R)_{x}$ we mean the set of all fractions $f\in S^{-1}R$ (including the zero element) such that if $f\neq0$ then it has a presentation of the form $f=r/s$ where $r\in R$ and $s\in S$ are homogeneous elements for which $x=[\operatorname{deg}(r),\operatorname{deg}(s)]$. Then we show that every homogeneous localization of a graded ring is graded by the Grothendieck group with the homogeneous components $(S^{-1}R)_{x}$. We first need to treat the cancellative case: **Theorem 1**. *Let $S$ be a multiplicative set of homogeneous elements of an $M$-graded ring $R=\bigoplus\limits_{m\in M}R_{m}$ with $M$ a cancellative monoid. Then $S^{-1}R=\bigoplus\limits_{x\in G}(S^{-1}R)_{x}$ is a $G$-graded ring where $G$ is the Grothendieck group of $M$.* *Proof.* We have $0\notin S$ and so $st\neq0$ for all $s,t\in S$. We first show that $(S^{-1}R)_{x}$ is an additive subgroup of $S^{-1}R$ for all $x\in G$. Clearly $(S^{-1}R)_{x}$ is nonempty, because, by definition, it contains the zero element.
Take $f,g\in(S^{-1}R)_{x}$. We will show that $-f$ and $f+g$ are members of $(S^{-1}R)_{x}$. We may assume that $f,g\neq0$. Then $f=r/s$ and $g=r'/s'$ where $r,r'\in R$ and $s,s'\in S$ are homogeneous elements with $[\operatorname{deg}(r),\operatorname{deg}(s)]=x=[\operatorname{deg}(r'),\operatorname{deg}(s')]$. It is clear that $-r$ is homogeneous with $\operatorname{deg}(-r)=\operatorname{deg}(r)$ and so $[\operatorname{deg}(-r),\operatorname{deg}(s)]=x$. This shows that $-f=(-r)/s\in(S^{-1}R)_{x}$. Both $rs'$ and $r's$ are nonzero and hence are homogeneous. We show that $\operatorname{deg}(rs')=\operatorname{deg}(r's)$. Since $M$ has the cancellation property, the canonical map $M\rightarrow G$ given by $m\mapsto[m,0]$ is injective. The images of both $\operatorname{deg}(rs')=\operatorname{deg}(r)+\operatorname{deg}(s')$ and $\operatorname{deg}(r's)=\operatorname{deg}(r')+\operatorname{deg}(s)$ under this map are the same, because $[\operatorname{deg}(r),\operatorname{deg}(s)]=[\operatorname{deg}(r'),\operatorname{deg}(s')]$. Hence, $\operatorname{deg}(rs')=\operatorname{deg}(r's)$. We may assume $f+g\neq0$. Thus $rs'+r's\neq0$ and so it is homogeneous of degree $\operatorname{deg}(rs')$. Then $[\operatorname{deg}(rs'+r's),\operatorname{deg}(ss')]=[\operatorname{deg}(r)+\operatorname{deg}(s'),\operatorname{deg}(s)+\operatorname{deg}(s')]= [\operatorname{deg}(r),\operatorname{deg}(s)]+[\operatorname{deg}(s'),\operatorname{deg}(s')]=x$. This shows that $f+g=(rs'+r's)/ss'\in(S^{-1}R)_{x}$. Hence, $(S^{-1}R)_{x}$ is an additive subgroup of $S^{-1}R$. If $f\in(S^{-1}R)_{x}$ and $g\in(S^{-1}R)_{y}$ for some $x,y\in G$, then we show that $fg\in(S^{-1}R)_{x+y}$. We may assume $fg\neq0$. Then $rr'\neq0$ and so it is a homogeneous element and we have $[\operatorname{deg}(rr'),\operatorname{deg}(ss')]=[\operatorname{deg}(r)+\operatorname{deg}(r'),\operatorname{deg}(s)+\operatorname{deg}(s')]=x+y$. It can be easily seen that $S^{-1}R=\sum\limits_{x\in G}(S^{-1}R)_{x}$.
We have to show that this is a direct sum. Take some $f\in(S^{-1}R)_{x}\cap\sum \limits_{\substack{y\in G,\\y\neq x}}(S^{-1}R)_{y}$. Suppose $f\neq0$. We may write $f=\sum\limits_{k=1}^{n}f_{k}$ where $0\neq f_{k}\in(S^{-1}R)_{y_{k}}$ for all $k$, and $x\in G\setminus\{y_{1},\ldots,y_{n}\}$. We may assume the elements $y_{1},\ldots,y_{n}\in G$ are pairwise distinct. Each $f_{k}=r_{k}/s_{k}$ where $r_{k}\in R$ and $s_{k}\in S$ are homogeneous elements with $y_{k}=[\operatorname{deg}(r_{k}),\operatorname{deg}(s_{k})]$. Then by the calculus of fractions, there exists some $t\in S$ such that $rs't=st(\sum\limits_{k=1}^{n}r_{k}t_{k})$ where $s'=\prod\limits_{i=1}^{n}s_{i}$ and each $t_{k}=\prod\limits_{\substack{i=1,\\i\neq k}}^{n}s_{i}$. We claim that if $k\neq d$, then $\operatorname{deg}(r_{k}t_{k})\neq\operatorname{deg}(r_{d}t_{d})$. Indeed, if $\operatorname{deg}(r_{k}t_{k})=\operatorname{deg}(r_{d}t_{d})$ then $\operatorname{deg}(r_{k})+\operatorname{deg}(s_{d})+\sum\limits_{\substack{i=1,\\i\neq d,k}}^{n}\operatorname{deg}(s_{i})=\operatorname{deg}(r_{d})+\operatorname{deg}(s_{k})+ \sum\limits_{\substack{i=1,\\i\neq d,k}}^{n}\operatorname{deg}(s_{i})$. Using the cancellation property of $M$, we get that $\operatorname{deg}(r_{k})+\operatorname{deg}(s_{d})=\operatorname{deg}(r_{d})+\operatorname{deg}(s_{k})$. Then consider their images under the canonical map $M\rightarrow G$, we will have $y_{k}=[\operatorname{deg}(r_{k}),\operatorname{deg}(s_{k})]=[\operatorname{deg}(r_{d}),\operatorname{deg}(s_{d})]=y_{d}$ which is a contradiction, because $y_{k}\neq y_{d}$. This establishes the claim. Now this observation together with the cancellation property of $M$ and the direct sum hypothesis in $R$, gives us that $rs't=str_{k}t_{k}$ for some $k$. 
It follows that $\operatorname{deg}(r)+\operatorname{deg}(s_{k})+\sum\limits_{\substack{i=1,\\i\neq k}}^{n}\operatorname{deg}(s_{i})+\operatorname{deg}(t)=\operatorname{deg}(s)+\operatorname{deg}(r_{k})+ \sum\limits_{\substack{i=1,\\i\neq k}}^{n}\operatorname{deg}(s_{i})+\operatorname{deg}(t)$. Then using the cancellation property of $M$ once more, we obtain that $\operatorname{deg}(r)+\operatorname{deg}(s_{k})=\operatorname{deg}(s)+\operatorname{deg}(r_{k})$. Then taking their images under the canonical map $M\rightarrow G$, we get that $x=[\operatorname{deg}(r),\operatorname{deg}(s)]=[\operatorname{deg}(r_{k}),\operatorname{deg}(s_{k})]=y_{k}$ which is a contradiction. Hence, $f=0$. ◻ The Grothendieck group of the additive monoid of natural numbers $\mathbb{N}=\{0,1,2,\ldots\}$ is called the *additive group of integers* and is denoted by $\mathbb{Z}$. This allows us to define the integers (especially the negative integers) in a quite formal way. In fact, $n:=[n,0]$ and so $-n=[0,n]$ and $[m,n]=m-n$ for all $m,n\in\mathbb{N}$.\ If $S$ is a multiplicative set of homogeneous elements of an $\mathbb{N}$-graded ring $R=\bigoplus\limits_{n\geqslant0}R_{n}$, then for each $n\in\mathbb{Z}$ by $(S^{-1}R)_{n}$ we mean the set of all fractions $f\in S^{-1}R$ (including the zero element) such that if $f\neq0$ then it has a presentation of the form $f=r/s$ where $r\in R$ and $s\in S$ are homogeneous and $\operatorname{deg}(r)-\operatorname{deg}(s)=n$. Then the following well-known result is recovered: **Corollary 2**. *Let $S$ be a multiplicative set of homogeneous elements of an $\mathbb{N}$-graded ring $R=\bigoplus\limits_{n\geqslant0}R_{n}$. Then $S^{-1}R=\bigoplus\limits_{n\in\mathbb{Z}}(S^{-1}R)_{n}$ is a $\mathbb{Z}$-graded ring.* *Proof.* It is an immediate consequence of Theorem [Theorem 1](#Lemma 1-bir){reference-type="ref" reference="Lemma 1-bir"}. ◻ **Remark 3**.
Let $\{H_{x}\}$ be a family of subgroups of an Abelian group $G$ with the direct sum decomposition $G=\bigoplus\limits_{x}H_{x}$. Assume for each $x$, $S_{x}$ is a subset of $G$ contained in $H_{x}$, and every element $f\in G$ can be written as a (finite) sum $f=\sum\limits_{x}f_{x}$ with $f_{x}\in S_{x}$ for all $x$. Then $S_{x}=H_{x}$ for all $x$. Indeed, if $f\in H_{y}$ for some $y$, then $f-f_{y}=\sum\limits_{x\neq y}f_{x}\in H_{y}\cap\sum\limits_{x\neq y}H_{x}=0$. Hence, $f=f_{y}\in S_{y}$. In the following result we generalize Theorem [Theorem 1](#Lemma 1-bir){reference-type="ref" reference="Lemma 1-bir"} to arbitrary monoids (not necessarily cancellative). **Theorem 4**. *Let $S$ be a multiplicative set of homogeneous elements of an $M$-graded ring $R=\bigoplus\limits_{m\in M}R_{m}$ with $M$ a monoid. Then $S^{-1}R=\bigoplus\limits_{x\in G}(S^{-1}R)_{x}$ is a $G$-graded ring where $G$ is the Grothendieck group of $M$.* *Proof.* To prove the assertion, we reduce the problem to the cancellative case and then apply Theorem [Theorem 1](#Lemma 1-bir){reference-type="ref" reference="Lemma 1-bir"}. Let $M'=\{[m,0]: m \in M\}$ be the image of the canonical map $M\rightarrow G$. Then $M'$ is a submonoid of the group $G$ and hence $M'$ is cancellative. It can be easily seen that the Grothendieck group of $M'$ is canonically isomorphic to $G$ (in fact, the Grothendieck group of every submonoid $N$ of $G$ with $M'\subseteq N\subseteq G$ is isomorphic to $G$). By changing the grading, we can view $R=\bigoplus\limits_{x\in M'}T_{x}$ as an $M'$-graded ring with the homogeneous components $T_{x}=\sum\limits_{[m,0]=x}R_{m}$. Every element $s\in S$ is also homogeneous in this new grading of $R$, because $s\in R_{\operatorname{deg}(s)}\subseteq T_{[\operatorname{deg}(s),0]}$.
Then by Theorem [Theorem 1](#Lemma 1-bir){reference-type="ref" reference="Lemma 1-bir"}, $S^{-1}R=\bigoplus\limits_{x\in G}B_{x}$ is a $G$-graded ring where each $B_{x}$ is an additive subgroup of $S^{-1}R$ consisting of all fractions $f\in S^{-1}R$ (including the zero element) such that if $f\neq0$ then it has a presentation of the form $f=r/s$ with $r\in R$ and $s\in S$ are homogeneous elements in the new grading (i.e. $r\in T_{y}$ for some $y=[m,0]\in M'$ and $s\in T_{[\operatorname{deg}(s),0]}$) for which $x=y-[\operatorname{deg}(s),0]=[m,\operatorname{deg}(s)]$. For each $x\in G$ the set $(S^{-1}R)_{x}$ is contained in $B_{x}$. It is also clear that each element $f\in S^{-1}R$ can be written as a (finite) sum $f=\sum\limits_{x\in G}f_{x}$ with $f_{x}\in(S^{-1}R)_{x}$ for all $x$. Then by Remark [Remark 3](#Remark iv dort){reference-type="ref" reference="Remark iv dort"}, $(S^{-1}R)_{x}=B_{x}$ for all $x\in G$. ◻ **Remark 5**. Let $R=\bigoplus\limits_{m\in M}R_{m}$ be an $M$-graded ring with $M$ a monoid. If $M$ is cancellative, then it can be seen that $1\in R_{0}$ where 0 is the identity element of $M$ (in other words, $R_{0}$ is a subring of $R$). But in general (when $M$ is not necessarily cancellative), this may not happen. In fact in general, the unit of $R$ is an element of the subring $\bigoplus\limits_{m\in M^{0}}R_{m}$ where $M^{0}=\{x\in M:\exists y\in M, x+y=y\}$ is the quasi-zero submonoid of $M$ (even if 1 is homogeneous, it may happen that $\operatorname{deg}(1)\neq0$). In this regard see also [@Abolfazl; @Tarizadeh; @2 Example 3.7, Lemma 3.8]. **Corollary 6**. *Let $S$ be a multiplicative set of homogeneous elements of an $M$-graded ring $R=\bigoplus\limits_{m\in M}R_{m}$ with $M$ a monoid. 
If $G$ is the Grothendieck group of $M$, then $L=\{[m,\operatorname{deg}(s)]\in G: m\in M, s\in S\}$ is a submonoid of $G$ containing the image of $M$, and $S^{-1}R=\bigoplus\limits_{x\in L}(S^{-1}R)_{x}$ is an $L$-graded ring.* *Proof.* In fact, $L$ is the set of all elements $x\in G$ which have a presentation of the form $x=[m,\operatorname{deg}(s)]$ with $m\in M$ and $s \in S$. If $y=[m',\operatorname{deg}(s')]$ is a second element of $L$, then $x+y=[m+m',\operatorname{deg}(s)+\operatorname{deg}(s')]=[m+m',\operatorname{deg}(ss')]\in L$. If 0 denotes the identity element of $M$, then $[0,0]=[\operatorname{deg}(1),\operatorname{deg}(1)]\in L$. Hence $L$ is a submonoid of the group $G$. If $m\in M$, then $[m,0]=[m+\operatorname{deg}(1),\operatorname{deg}(1)]\in L$. By Theorem [Theorem 4](#Theorem 20 altin){reference-type="ref" reference="Theorem 20 altin"}, $S^{-1}R=\bigoplus\limits_{x\in G}(S^{-1}R)_{x}$ is a $G$-graded ring. But it can easily be seen that $(S^{-1}R)_{x}=0$ for all $x\in G\setminus L$. Hence, $S^{-1}R=\bigoplus\limits_{x\in L}(S^{-1}R)_{x}$ is an $L$-graded ring. ◻ **Remark 7**. In relation to Corollary [Corollary 6](#Coro 10 on){reference-type="ref" reference="Coro 10 on"}, note that the identity element 0 of $M$ is not necessarily a member of $\{\operatorname{deg}(s): s \in S\}$. Although $1\in S$ and so 1 is homogeneous, it may happen that $\operatorname{deg}(1)\neq0$ (see Remark [Remark 5](#Remark iii uch){reference-type="ref" reference="Remark iii uch"}). **Proposition 8**. *Let $S$ be a multiplicative set of homogeneous elements of an $M$-graded ring $R=\bigoplus\limits_{m\in M}R_{m}$ with $M$ a cancellative monoid. Then $M'=\{\operatorname{deg}(s): s\in S\}$ is a submonoid of $M$.* *Proof.* Since $M$ is cancellative, by [@Abolfazl; @Tarizadeh; @2 Lemma 3.6], $1\in R_{0}$ where $0$ is the identity element of $M$. Thus $\operatorname{deg}(1)=0\in M'$. 
If $s,t\in S$ then $st \in S$ and we have $\operatorname{deg}(s)+\operatorname{deg}(t)=\operatorname{deg}(st)\in M'$. Hence, $M'$ is a submonoid of $M$. ◻ For the definition of the monoid-ring see e.g. [@Abolfazl; @Tarizadeh §2.4]. For any ring $R$ and a monoid $M$, the monoid-ring $R[M]=\bigoplus\limits_{m\in M}S_{m}$ is an $M$-graded ring with the homogeneous components $S_{m}=R\epsilon_{m}=\{r\epsilon_{m}: r \in R\}$ where $\epsilon_{m}:=(\delta_{a,m})_{a\in M}\in R[M]$ and $\delta_{a,m}$ is the Kronecker delta (we call $\epsilon_{m}$ a monomial of $R[M]$ of degree $m$). **Theorem 9**. *If $S$ is a multiplicative subset of a ring $R$ and $M$ a monoid, then we have the canonical isomorphism of $G$-graded rings $T^{-1}(R[M])\simeq (S^{-1}R)[G]$, with $T=\{s\epsilon_{m}: s\in S, m\in M\}$, where $G$ is the Grothendieck group of $M$.* *Proof.* By the universal property of monoid-rings, there exists a (unique) morphism of rings $f:R[M]\rightarrow(S^{-1}R)[G]$ which sends each $r\epsilon_{m}$ into $(r/1)\epsilon_{[m,0]}$. But $T$ is a multiplicative set of homogeneous elements of the $M$-graded ring $R[M]$. Thus by Theorem [Theorem 4](#Theorem 20 altin){reference-type="ref" reference="Theorem 20 altin"}, $T^{-1}(R[M])$ is a $G$-graded ring. The image of each $s\epsilon_{m}\in T$ under $f$ is invertible, because $\big((s/1)\epsilon_{[m,0]}\big) \big((1/s)\epsilon_{[0,m]}\big)=\epsilon_{[m,m]}=1$. Then by the universal property of localization, there exists a (unique) morphism of rings $h:T^{-1}(R[M])\rightarrow(S^{-1}R)[G]$ which sends each $r\epsilon_{m}/s\epsilon_{n}$ into $(r/s)\epsilon_{[m,n]}$. The map $h$ is surjective. It is also a morphism of $G$-graded rings and so its kernel is a graded ideal. Hence, to show that $\operatorname{Ker}h=0$, it suffices to check it for homogeneous elements. If $r\epsilon_{m}/s\epsilon_{n}\in\operatorname{Ker}h$ then $r/s=0$ and so $rs'=0$ for some $s'\in S$. 
This yields that $r\epsilon_{m}/s\epsilon_{n}= rs'\epsilon_{m}/ss'\epsilon_{n}=0$. Hence, $h$ is injective. ◻ **Corollary 10**. *For any ring $R$ and a monoid $M$, we have the canonical isomorphism of $G$-graded rings $T^{-1}(R[M])\simeq R[G]$ with $T=\{\epsilon_{m}: m\in M\}$ and $G$ is the Grothendieck group of $M$.* *Proof.* It follows from Theorem [Theorem 9](#Theorem 6 alti){reference-type="ref" reference="Theorem 6 alti"} by taking $S=\{1\}$. ◻ **Remark 11**. If $\{M_{k}: k\in I\}$ is a family of monoids, then the Grothendieck group of the direct sum monoid $\bigoplus\limits_{k\in I}M_{k}$ is canonically isomorphic to the direct sum group $\bigoplus\limits_{k\in I}G_{k}$ where each $G_{k}$ is the Grothendieck group of $M_{k}$. **Corollary 12**. *For any ring $R$ and an index set $I$, the ring of Laurent polynomials $R[x^{\pm1}_{k}: k\in I]$ is canonically isomorphic to the group-ring $R[\bigoplus\limits_{k\in I}\mathbb{Z}]$.* *Proof.* We know that $R[x_{k}: k\in I]=R[M]$ and by the above remark the Grothendieck group of the additive monoid $M=\bigoplus\limits_{k\in I}\mathbb{N}$ is canonically isomorphic to the additive group $\bigoplus\limits_{k\in I}\mathbb{Z}$. We also have $R[x^{\pm1}_{k}: k\in I]=T^{-1}(R[M])$ where $T=\{\epsilon_{m}: m\in M\}$. Hence, the assertion follows from Corollary [Corollary 10](#Theorem I){reference-type="ref" reference="Theorem I"}. ◻ # Cancellative monoids and torsion-free groups In the following result, we characterize cancellative monoids by adding two new equivalences (iii) and (iv). **Proposition 13**. 
*For a given monoid $M$ with the Grothendieck group $G$, the following assertions are equivalent.\ $\mathbf{(i)}$ $M$ has the cancellation property.\ $\mathbf{(ii)}$ The canonical map $M\rightarrow G$ is injective.\ $\mathbf{(iii)}$ For any nonzero ring $R$, the monomial $\epsilon_{m}$ is a non-zero-divisor of $R[M]$ for all $m\in M$.\ $\mathbf{(iv)}$ For any ring $R$, the canonical ring map $R[M]\rightarrow R[G]$ given by $\sum\limits_{m\in M}r_{m}\epsilon_{m}\mapsto\sum\limits_{m\in M}r_{m}\epsilon_{[m,0]}$ is injective.* *Proof.* (i)$\Leftrightarrow$(ii): It is well-known and easy.\ (i)$\Rightarrow$(iii): Suppose $(\epsilon_{m})f=0$ for some $f=\sum\limits_{x\in M}r_{x}\epsilon_{x}\in R[M]$. Then $\sum\limits_{x\in M}r_{x}\epsilon_{x+m}=0$. Since $M$ is cancellative, $r_{x}\epsilon_{x+m}$ is the *only* homogeneous element of degree $x+m$ on the left hand side of the above equality. Hence, $r_{x}\epsilon_{x+m}=0$ and so $r_{x}=0$ for all $x\in M$. This shows that $f=0$.\ (iii)$\Rightarrow$(i): Suppose $m+a=m+b$ for some $a,b,m\in M$. Then in $\mathbb{Z}[M]$ we have $\epsilon_{m}(\epsilon_{a}-\epsilon_{b})=0$. By hypothesis, $\epsilon_{m}$ is a non-zero-divisor, thus $\epsilon_{a}=\epsilon_{b}$ and so $a=b$.\ (i)$\Rightarrow$(iv): Suppose $\sum\limits_{m\in M}r_{m}\epsilon_{[m,0]}=0$. Since $M$ is cancellative, $r_{m}\epsilon_{[m,0]}$ is the *only* homogeneous element of degree $[m,0]$ on the left hand side of the above equality. This yields that $r_{m}\epsilon_{[m,0]}=0$ and so $r_{m}=0$ for all $m\in M$. This shows that $\sum\limits_{m\in M}r_{m}\epsilon_{m}=0$.\ (iv)$\Rightarrow$(i): Suppose $m+a=m+b$ for some $a,b,m\in M$. This shows that $[a,0]=[b,0]$ and so $\epsilon_{a}-\epsilon_{b}$ is in the kernel of the canonical ring map $\mathbb{Z}[M]\rightarrow\mathbb{Z}[G]$. Thus by hypothesis, $\epsilon_{a}=\epsilon_{b}$ and so $a=b$. 
◻ Recall that by a *totally* (*linearly*) *ordered monoid* we mean a monoid $M$ equipped with a total ordering $<$ such that its operation is compatible with its ordering, i.e. if $a<b$ for some $a,b\in M$, then $a+c\leqslant b+c$ for all $c\in M$. If moreover $M$ is cancellative, then $a<b$ yields that $a+c<b+c$. **Remark 14**. Let $\{M_{i}: i\in I\}$ be a family of totally ordered monoids and let $M=\prod\limits_{i\in I}M_{i}$ be their direct product monoid. Then $M$ can be made into a totally ordered monoid via the lexicographical ordering induced by the orderings on the $M_{i}$. In fact, using the well-ordering theorem (which asserts that every set can be well-ordered), the index set $I$ can be well-ordered. Take $a=(a_{i}), b=(b_{i})\in M$. If $a\neq b$, then the set $\{i\in I: a_{i}\neq b_{i}\}$ is nonempty. Let $k$ be the least element of this set. Then the lexicographical ordering $<_{\mathrm{lex}}$ is defined on $M$ as $a<_{\mathrm{lex}}b$ or $b<_{\mathrm{lex}}a$, depending on whether $a_{k}< b_{k}$ or $b_{k}< a_{k}$, where $<$ is the ordering on $M_{k}$. Hence, $(M,<_{\mathrm{lex}})$ is a totally ordered monoid. In particular, the direct sum monoid $\bigoplus\limits_{i\in I}M_{i}$ is also a totally ordered monoid, because every submonoid of a totally ordered monoid is itself a totally ordered monoid. Now we prove the main result of this section: **Theorem 15**. *A cancellative monoid is a totally ordered monoid if and only if its Grothendieck group is a torsion-free group.* *Proof.* Let $M$ be a cancellative monoid with the Grothendieck group $G$. If $M$ is a totally ordered monoid (by an ordering $<$), then it can be easily seen that $G$ is a totally ordered group whose order is defined by $[a,b]<[c,d]$ if $a+d<b+c$ in $M$. It is also easy to check that every totally ordered group is a torsion-free group. As a second proof for the implication "$\Rightarrow$\", suppose some non-identity element $x$ of $G$ is of finite order. 
So there exists a natural number $n\geqslant2$ such that $nx=0$. We may write $x=[a,b]$ where $a,b\in M$ with $a\neq b$, and we may assume without loss of generality that $a<b$. It follows that $na<nb$. But $nx=0$ yields that $[na,nb]=[0,0]$ and so $na=nb$, which is a contradiction. Hence, $G$ is a torsion-free group (i.e. every non-identity element is of infinite order). We prove the reverse implication as follows. Consider $G$ as a $\mathbb{Z}$-module and put $S:=\mathbb{Z}\setminus\{0\}$. Since $G$ is a torsion-free group, the canonical map $G\rightarrow S^{-1}G$ is injective. Note that $S^{-1}G$ is an $S^{-1}\mathbb{Z}$-module. But $\mathbb{Q}:=S^{-1}\mathbb{Z}$ is a field (the field of rational numbers). Hence, the $\mathbb{Q}$-vector space $S^{-1}G$ is isomorphic to a direct sum of copies of $\mathbb{Q}$. The additive group of rational numbers $\mathbb{Q}$ is a totally ordered group whose order is defined by $r/s <r'/s'$ if $rs'<r's$ in $\mathbb{Z}$. Then by Remark [Remark 14](#Remark I){reference-type="ref" reference="Remark I"}, any direct sum of copies of $\mathbb{Q}$ is a totally ordered group with the lexicographical ordering. Therefore $S^{-1}G$, and hence also $G$, are totally ordered groups. But the canonical map $M\rightarrow G$ is injective. Therefore $M$ is also a totally ordered monoid. ◻ As an application, Levi's theorem (see [@Levi §3] or [@Lam Theorem 6.31]) is easily deduced: **Corollary 16**. *Every torsion-free abelian group is a totally ordered group.* *Proof.* The Grothendieck group of every Abelian group is canonically isomorphic to itself. Hence, the assertion immediately follows from Theorem [Theorem 15](#Theorem III){reference-type="ref" reference="Theorem III"}. ◻ # Group of units as the Grothendieck group The group of units (invertible elements) of a ring $R$ is denoted by $R^{\ast}$. If $S$ is a multiplicative set of a ring $R$, then $S$ is a submonoid of the multiplicative monoid of $R$. We then have the following result: **Lemma 17**. 
*The Grothendieck group of a multiplicative set $S$ of a ring $R$ can be canonically embedded in $(S^{-1}R)^{\ast}$.* *Proof.* Let $G=\{[s,t]: s,t\in S\}$ be the Grothendieck group of the multiplicative monoid $S$. The map $S\rightarrow(S^{-1}R)^{\ast}$ given by $s\mapsto s/1$ is a morphism of monoids. Then by the universal property of Grothendieck groups, there exists a morphism of groups $f:G\rightarrow(S^{-1}R)^{\ast}$ which sends each $[s,t]$ into $s/t$. If $[s,t]\in\operatorname{Ker}f$, then $ss'=ts'$ for some $s'\in S$. Thus $[s,t]=[1,1]$ is the identity element of $G$. Hence, $f$ is injective. ◻ Let $S$ be a multiplicative set of a ring $R$. Then $\overline{S}=\{a\in R:\exists b\in R, ab\in S\}$ is a multiplicative set of $R$ and $S\subseteq\overline{S}$. It can also be seen that $\overline{S}=\{a\in R: a/1\in(S^{-1}R)^{\ast}\}=R\setminus\bigcup\limits_ {\substack{\mathfrak{p}\in\operatorname{Spec}(R),\\\mathfrak{p}\cap S= \emptyset}}\mathfrak{p}$. The set $\overline{S}$ is called the saturation of $S$, and $S$ is called saturated if $S=\overline{S}$. For example, if $\mathfrak{p}$ is a prime ideal of a ring $R$, then $R\setminus\mathfrak{p}$ is saturated. The set of non-zero-divisors of every ring is saturated. The multiplicative set $S=\{4^{n}:n\geqslant0\}=\{1,4,16,\ldots\}$ of $\mathbb{Z}$ is not saturated, because $2\in\overline{S}$ but $2\notin S$. **Lemma 18**. *If $S$ is a saturated multiplicative set of a ring $R$, then the invertible elements of $S^{-1}R$ are precisely of the form $s/t$ with $s,t\in S$.* *Proof.* It is straightforward. ◻ **Example 19**. The converse of the above lemma does not hold. In fact, we exhibit a non-saturated multiplicative set $S$ in a ring $R$ such that the invertible elements of the nonzero ring $S^{-1}R$ are precisely of the form $s/t$ with $s,t\in S$. 
The multiplicative set $S=\{ax^{n}: a\in K^{\ast},\ n=0 \text{ or } n\geqslant2\}$ of the polynomial ring $R=K[x]$ (with $K$ a field) is non-saturated, because $x\in\overline{S}\setminus S$. We know that $K[x]=\bigoplus\limits_{n\geqslant0}Kx^{n}$ is an $\mathbb{N}$-graded ring and so by Corollary [Corollary 2](#Coro 11 onbir){reference-type="ref" reference="Coro 11 onbir"}, $S^{-1}R$ is a $\mathbb{Z}$-graded ring. We know that in a $\mathbb{Z}$-graded integral domain, every invertible element is homogeneous (see e.g. [@Abolfazl; @Tarizadeh Lemma 5.2]). Thus if $f$ is an invertible element in $S^{-1}R$, then $f=ax^{m}/bx^{n}$ where $bx^{n}\in S$, $a\neq0$ and $m\geqslant0$. We can take $ax^{m}\in S$, because $K$ is a field and so $a\in K^{\ast}$, and if $m=1$ then we may write $f=ax^{m+2}/bx^{n+2}$. **Corollary 20**. *The Grothendieck group of a saturated multiplicative set $S$ of a ring $R$ is canonically isomorphic to $(S^{-1}R)^{\ast}$.* *Proof.* Let $G=\{[s,t]: s,t\in S\}$ be the Grothendieck group of the multiplicative monoid $S$. Then by Lemma [Lemma 17](#Lemma 2 two-iki){reference-type="ref" reference="Lemma 2 two-iki"}, the canonical morphism of groups $f:G\rightarrow(S^{-1}R)^{\ast}$ given by $[s,t]\mapsto s/t$ is injective. If $r/s\in(S^{-1}R)^{\ast}$ then $rr't=ss't\in S$ where $r'\in R$ and $s',t\in S$. It follows that $r\in\overline{S}=S$ and so $f$ is surjective. ◻ **Corollary 21**. *If $\mathfrak{p}$ is a prime ideal of a ring $R$, then the Grothendieck group of the multiplicative monoid $R\setminus\mathfrak{p}$ is canonically isomorphic to $(R_{\mathfrak{p}})^{\ast}= R_{\mathfrak{p}}\setminus\mathfrak{p}R_{\mathfrak{p}}$.* *Proof.* It follows from Corollary [Corollary 20](#Coro 7 yedi){reference-type="ref" reference="Coro 7 yedi"}. ◻ For any ring $R$, by $T(R)$ we mean the total ring of fractions of $R$. In fact, $T(R)=S^{-1}R$ where $S$ is the set of non-zero-divisors of $R$. **Corollary 22**. 
*The Grothendieck group of the set of non-zero-divisors of a ring $R$ is canonically isomorphic to $T(R)^{\ast}$.* *Proof.* It follows from Corollary [Corollary 20](#Coro 7 yedi){reference-type="ref" reference="Coro 7 yedi"}. ◻ In particular, if $R$ is an integral domain with field of fractions $F$, then the Grothendieck group of the multiplicative monoid $R\setminus\{0\}$ is canonically isomorphic to $F^{\ast}=F\setminus\{0\}$. **Corollary 23**. *If $S$ is a multiplicative set of a ring $R$, then the Grothendieck group of $\overline{S}$ is canonically isomorphic to $(S^{-1}R)^{\ast}$.* *Proof.* Let $G$ be the Grothendieck group of the multiplicative monoid $T:=\overline{S}$. Since $T$ is saturated, by Corollary [Corollary 20](#Coro 7 yedi){reference-type="ref" reference="Coro 7 yedi"}, $G$ is canonically isomorphic to $(T^{-1}R)^{\ast}$. But it can be seen that the map $S^{-1}R\rightarrow T^{-1}R$ given by $r/s\mapsto r/s$ is an isomorphism of rings and so it induces an isomorphism between the corresponding groups of units. Hence, $G$ is isomorphic to $(S^{-1}R)^{\ast}$. ◻ **Corollary 24**. *If $I$ is an ideal of a ring $R$, then the Grothendieck group of the multiplicative monoid $T=R\setminus\bigcup\limits_ {\substack{\mathfrak{m}\in\operatorname{Max}(R),\\ I\subseteq\mathfrak{m}}}\mathfrak{m}$ is canonically isomorphic to $(S^{-1}R)^{\ast}$ with $S=1+I$.* *Proof.* We first show that $\overline{S}=T$. It is clear that $\overline{S}\subseteq T$. If $x\in T$, then $x/1$ is invertible in $S^{-1}R$; if not, then $x\in\mathfrak{m}$ for some maximal ideal $\mathfrak{m}$ of $R$ with $\mathfrak{m}\cap S=\emptyset$. If $I$ is not contained in $\mathfrak{m}$, then $\mathfrak{m}+I=R$ and so $\mathfrak{m}\cap S\neq\emptyset$, which is a contradiction. Hence, $I\subseteq\mathfrak{m}$. But this also contradicts the choice of $x\in T$. Thus $x/1\in(S^{-1}R)^{\ast}$ and so $x\in\overline{S}$. 
Now the assertion follows from Corollary [Corollary 23](#Coro 8 sekiz){reference-type="ref" reference="Coro 8 sekiz"}. ◻ T.Y. Lam, A First Course in Noncommutative Rings, Springer-Verlag (2001). F.W. Levi, Ordered groups, Proc. Indian Acad. Sci. A **16**(4) (1942) 256--263. A. Tarizadeh, Homogeneity of zero-divisors, units and colon ideals in a graded ring, submitted to Compositio Mathematica, https://doi.org/10.48550/arXiv.2108.10235 (2021). A. Tarizadeh, Homogeneity of the Jacobson radical and idempotents in a graded ring, submitted to Duke Math. J., https://doi.org/10.48550/arXiv.2309.02880 (2023).
--- abstract: | We consider the 3D or 2D primitive equations for oceans and the atmosphere in the isothermal setting. In this paper, we establish a new conditional uniqueness result for weak solutions to the primitive equations, that is, if a weak solution belongs to certain scaling invariant function spaces and satisfies some additional assumptions, then the weak solution is unique. In particular, our result differs from the $z$-weak solutions framework and is obtained by adopting anisotropic approaches based on the homogeneous toroidal Besov spaces. As an application of the proof, we establish the energy equality for weak solutions in the uniqueness class given in the main theorem. address: - | Technische Universität Darmstadt\ Fachbereich Mathematik\ Schlossgartenstr. 7\ 64289 Darmstadt\ Germany - | Department of Pure and Applied Mathematics\ Graduate School of Fundamental Science and Engineering\ Waseda University\ 169-8555 Tokyo\ Japan author: - Tim Binz - Yoshiki Iida bibliography: - ref.bib title: Uniqueness of weak solutions to the primitive equations in some anisotropic spaces --- # Introduction {#sec:intro} In this paper, we consider the following three-dimensional (full-viscous) primitive equations for oceans and the atmosphere with no Coriolis force in the isothermal setting: $$\begin{aligned} \left\{ \begin{aligned}\label{eq:PE} \partial_{t}v + (u\cdot \nabla) v - \Delta v + \nabla_{\mathrm{H}}\Pi &= 0, \\ \partial_{z}\Pi &= 0, \\ \mathrm{div} \, u &= 0. \end{aligned} \right. \tag{3D PE}\end{aligned}$$ The unknown functions $v: \Omega \to \mathbb{R}^2$, $w: \Omega \to \mathbb{R}$ and $\Pi: \Omega \to \mathbb{R}$ denote the horizontal velocity, the vertical velocity, and the scalar pressure of the fluid, respectively. We write $u={}^\top(v, w)$. The operator $\nabla_\mathrm{H}$ denotes the horizontal gradient. The domain $\Omega$ is described below. 
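As a brief sketch (in the fully periodic setting introduced below, and assuming enough smoothness to integrate in $z$), the divergence-free condition determines $w$ from $v$ up to its boundary value and imposes a compatibility condition on $v$:

```latex
% From div u = div_H v + \partial_z w = 0 we may solve for w:
\begin{aligned}
  w(x_{\mathrm{H}}, z, t)
    = w(x_{\mathrm{H}}, -\pi, t)
    - \int_{-\pi}^{z} \mathrm{div}_{\mathrm{H}}\, v(x_{\mathrm{H}}, \zeta, t)\, d\zeta .
\end{aligned}
% Periodicity of w in z, i.e. w(\cdot, \pi, \cdot) = w(\cdot, -\pi, \cdot),
% then forces the compatibility condition
\begin{aligned}
  \mathrm{div}_{\mathrm{H}} \int_{-\pi}^{\pi} v(x_{\mathrm{H}}, z, t)\, dz = 0 ,
\end{aligned}
% which is precisely the solenoidal constraint defining the hydrostatic
% solenoidal subspace used in the main theorem.
```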
The primitive equations are derived from the (rotating) incompressible Navier--Stokes equations with the hydrostatic balance assumption for the pressure term in the vertical direction. The rigorous justification of the hydrostatic approximation of the primitive equations is studied in [@FGHHKW2020; @FGK2021; @LT2018]. The physical boundary problem for the primitive equations [\[eq:PE\]](#eq:PE){reference-type="eqref" reference="eq:PE"} is usually studied in the layer domain $$D_h \equiv \{ (x_\mathrm{H}, z) \in \mathbb{R}^{2}\times \mathbb{R}\,;\,-h<z<0 \} \qquad (h>0)$$ subject to the horizontally periodic condition $$\text{$v$, $w$ and $p$ are $2L$ periodic in $x_\mathrm{H}$ for some $L>0$,}$$ and the no-vertical flow conditions $w|_{z=-h, 0}=0$. Furthermore, the boundary conditions $\partial_{z}v|_{z=-h, 0}=0$ are sometimes imposed. In this setting, extending $v$ (resp. $w$) evenly (resp. oddly) in the $z$ component with respect to $z=0$ so as to be defined on $(-L, L)^{2}\times (-h, h)$, the extended velocity $v$ (resp. $w$) is periodic and even (resp. odd). In this paper, we focus only on the periodicity of the velocities $v$ and $w$, and we consider the system [\[eq:PE\]](#eq:PE){reference-type="eqref" reference="eq:PE"} on $\Omega = \mathbb{T}^2 \times \mathbb{T}= \mathbb{T}^3$. We also consider the following two-dimensional primitive equations for the *three*-dimensional geophysical fluid motion: $$\begin{aligned} \left\{ \begin{aligned}\label{eq:PE2} \partial_{t}v + ({}^\top(v^1, w)\cdot \nabla_{x_1, z}) v - \Delta v + (\partial_{x_1} \Pi)e_1 &= 0, \\ \partial_{z}\Pi &= 0, \\ \partial_{x_1} v^1 + \partial_{z}w &= 0. \end{aligned} \right. \tag{2D PE}\end{aligned}$$ Here, $v: \Omega' \ni (x_1, z)\mapsto (v^1(x_1, z), v^2(x_1, z))\in \mathbb{R}^2$ and $w: \Omega' \ni (x_1, z)\mapsto w(x_1, z)\in\mathbb{R}$ denote the horizontal velocity and the vertical velocity, respectively. 
The operator $\nabla_{x_1, z}$ is the gradient $\nabla_{x_1, z}={}^\top(\partial_{x_1}, \partial_{z})$, and the unit vector $e_1$ is defined as $e_1={}^\top(1, 0)$. The domain $\Omega'$ is described as follows. As in the three-dimensional case, we originally consider [\[eq:PE2\]](#eq:PE2){reference-type="eqref" reference="eq:PE2"} in the layer domain $$D_h' \equiv \{ (x_1, z) \in \mathbb{R}\times \mathbb{R}\,;\,-h<z<0 \} \qquad (h>0)$$ under the horizontal boundary conditions $$\text{$v$, $w$ and $p$ are $2L$ periodic in $x_1$ for some $L>0$,}$$ the no-vertical flow conditions $w|_{z=-h, 0}=0$, and the Neumann conditions in $z$, $\partial_{z}v|_{z=-h, 0}=0$. After that, extending $v$ and $w$ to $(-L, L)\times (-h, h)$ in the $z$ direction as in the three-dimensional case, and taking $L=h=\pi$, we consider [\[eq:PE2\]](#eq:PE2){reference-type="eqref" reference="eq:PE2"} on $\Omega'=\mathbb{T}^2$. Lions--Temam--Wang [@LTW1992-NewPE; @LTW1992-OnLSO; @LTW1995] initiated the mathematical study of the primitive equations, showing the existence of global weak solutions in the three-dimensional case by the Galerkin method. Later, Guillén González--Masmoudi--Rodríguez Bellido [@GMR2001] established the local well-posedness for large initial data in $H^1$ and the global well-posedness for small data in $H^1$. In their seminal work, Cao--Titi [@CT2007] showed the global well-posedness of the three-dimensional primitive equations for *arbitrarily large* initial data in $H^1$, by deriving a suitable a priori estimate for the strong solution obtained in [@GMR2001]. Hieber--Kashiwabara [@HK2016] showed the global well-posedness in the three-dimensional case for large initial data in $H^{2/p, p}$ ($6/5\leq p <\infty$). The assumption $6/5\leq p <\infty$ was relaxed to $1 < p < \infty$ in the work of Hieber--Hussein--Kashiwabara [@HHK2016]. 
In [@HK2016], they introduced an analytic semigroup (the so-called hydrostatic Stokes semigroup) which yields the solution to the linearized equation of [\[eq:PE\]](#eq:PE){reference-type="eqref" reference="eq:PE"}, and they proved the local well-posedness according to the method of Kato [@K1984]. Further properties of the hydrostatic semigroup were investigated in [@GGHHK2017; @GGHHK2020; @GGHHK2021]. If we take the limit $p\to \infty$ for the initial value space $H^{2/p, p}$ in [@HK2016], then $H^{2/p, p}$ tends to $L^\infty$. Giga--Gries--Hieber--Hussein--Kashiwabara [@GGHHK2021] considered the initial value problem in the scaling invariant space $L^\infty_\mathrm{H}(L^1_z)$, which is larger than $L^\infty$. The existence of time-periodic solutions to the primitive equations for time-periodic external forces was discussed in [@GHK2017; @HS2013; @M2010-TPE]. However, the uniqueness of weak solutions to the primitive equations is still an open problem, even in the two-dimensional case, unlike the Navier--Stokes equations. Bresch--Guillén González--Masmoudi--Rodríguez Bellido [@BGMR2003], Bresch--Kazhikhov--Lemoine [@BKL2005], Petcu [@P2007], and Tachim Medjo [@M2010] introduced the notion of $z$-weak solutions, a class of weak solutions for which global existence and uniqueness hold. Roughly speaking, a weak solution $v$ to the primitive equations is called a $z$-weak solution if $v$ satisfies the additional regularity $$\partial_{z}v \in L^\infty(0, \infty; L^2) \cap L^2(0, \infty; \dot{H}^1).$$ In particular, for the three-dimensional case, [@M2010] showed the global existence and uniqueness of $z$-weak solutions for initial data $v_0\in \{ V \in L^6(\Omega) \,;\,\partial_{z}V \in L^2(\Omega)\}$. Kukavica--Pei--Rusin--Ziane [@KPRZ2014] proved the uniqueness for continuous initial data, and Li--Titi [@LT2017] proved it for initial data $v_0\in \{ V\in L^6(\Omega) \,;\,\partial_{z}V \in L^2(\Omega) \}$ including a small perturbation in $L^\infty$-norm. 
[@LT2017] also proved that any weak solution is smooth for $t>0$, by a weak-strong uniqueness argument. Following [@LT2017], Ju [@J2021] showed that, for a weak solution $v$ to the two-dimensional primitive equations, if $\partial_{z}v$ belongs to $L^4(0, T; L^2)$ or $L^2(0, T; L_{x_1}^{\infty}L^2_z)$, then the uniqueness holds. This result differs from the $z$-weak solutions framework. The uniqueness problem for weak solutions to the primitive equations was also discussed in [@BGMR2003; @BKL2005; @J2017]. Recently, the preprint by Boutros--Markfelder--Titi [@BMT2023] constructed infinitely many generalized weak solutions for the same initial data in $H^1$. Their result also shows the nonuniqueness of generalized weak solutions to the primitive equations. Besides, they treated the hydrostatic Euler equations (also known as the inviscid primitive equations) and the two-dimensional Prandtl equations. Their proof followed the convex integration scheme, which was used for solving Onsager's conjecture by Isett [@I2018] and Buckmaster--de Lellis--Székelyhidi--Vicol [@BLSV2019], and for proving the nonuniqueness of very weak solutions to the 3D Navier--Stokes equations by Buckmaster--Vicol [@BV2019]. In this paper, we establish a new class in which the uniqueness of weak solutions holds, outside the framework of $z$-weak solutions. We emphasize that all previous works consider weak solutions *a priori* satisfying the energy inequality, whereas this article deals with mere weak solutions, which satisfy the energy (in)equality *a posteriori*. More specifically, we derive the uniqueness by focusing on the scale invariance of the solution space. This approach is based on the point of view of Serrin [@S1963] for the Navier--Stokes equations. Indeed, [@KS1996; @KT2000; @M1984; @S1963] showed that weak solutions to the Navier--Stokes equations in a scaling invariant space (the so-called Serrin class) are unique and smooth. 
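For orientation, the scale invariance underlying the Serrin class can be verified by a direct change of variables (a standard computation, stated here on $\mathbb{R}^3$ for simplicity): with the Navier--Stokes scaling $u_\lambda(x, t) = \lambda u(\lambda x, \lambda^2 t)$, one finds

```latex
\begin{aligned}
  \| u_\lambda \|_{L^s(0, T/\lambda^2; L^q(\mathbb{R}^3))}
    = \lambda^{\,1 - \frac{3}{q} - \frac{2}{s}}\,
      \| u \|_{L^s(0, T; L^q(\mathbb{R}^3))},
\end{aligned}
% so the mixed norm is invariant for every \lambda > 0 exactly when
\begin{aligned}
  \frac{2}{s} + \frac{3}{q} = 1 ,
\end{aligned}
% the critical case of the Serrin condition; the exponent relation in the
% main theorem plays the analogous role for the primitive equations.
```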
Since the primitive equations essentially have anisotropy in the vertical direction, it is difficult to treat the vertical velocity $w$. Indeed, in the three-dimensional case, $w$ is determined by $$w=\int_z^\pi \mathrm{div}_{\mathrm{H}} \,{v} \,d\zeta,$$ so there is a loss of one derivative for the horizontal velocity $v$. Here, the operator $\mathrm{div} \, _\mathrm{H}$ denotes the two-dimensional horizontal divergence. In order to overcome this difficulty, it is necessary to use some anisotropic approaches. Therefore, we introduce homogeneous-type Besov spaces on $\mathbb{T}$, that is, the toroidal Besov spaces defined by Tsurumi [@T2019]. Then, a scaling-invariant condition appears for the uniqueness to hold. However, when we treat the low-frequency part of a weak solution, another condition appears. For the two-dimensional case, this condition seems to improve on that of Ju [@J2021]. In addition, for the three-dimensional case, it seems to be related to the concept of $z$-weak solutions by a formal consideration (see (5) below). Our precise main result now reads: **Theorem 1** (weak-strong uniqueness). *Let $n=3$ or $2$, and let $v_1$ and $v_2$ be weak solutions to the $n$-dimensional primitive equations ([\[eq:PE\]](#eq:PE){reference-type="eqref" reference="eq:PE"} for $n=3$ or [\[eq:PE2\]](#eq:PE2){reference-type="eqref" reference="eq:PE2"} for $n=2$) with the same initial value $v_1(0)=v_2(0)=v_0\in \mathcal{H}$, where the hydrostatic solenoidal subspace $\mathcal{H}$ is given by $${ \mathcal{H} } \equiv \begin{dcases} \left\{ v\in L^2(\mathbb{T}^3) \,;\, \text{ $v$ is even in $z$ and }\ \mathrm{div} \, _\mathrm{H}\left( \int_{-\pi}^{\pi} v(x_\mathrm{H}, z) \, dz \right) = 0 \right\} & (n=3), \\ { \left\{ v\in L^2(\mathbb{T}^2) \,;\, \text{ $v$ is even in $z$ and}\ \partial_{x_1}\left( \int_{-\pi}^{\pi} v(x_1, z) \, dz \right) = 0 \right\} } & { (n=2). 
} \end{dcases}$$ Furthermore, suppose that there exists $T=T_{v_1}>0$ such that $$\label{eq:asp} v_1 \in L^\beta(0, T; \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} ) \cap L^{\gamma}(0, T; \dot{B}^{{2/\gamma}}_{{2}, {2}, {z}}L_{\mathrm{H}}^{\infty}),$$ where $$\label{eq:scaling invariant} \frac{2}{\beta} + \frac{2}{p} = 1 \qquad ( 2 < p,\, \beta < \infty)$$ and for some $2<\gamma<\infty$.* *We also suppose that $v_1$ and $v_2$ satisfy the energy inequality $$\label{eq:energy inequality vj} \frac{1}{2}\| v_j(t) \|_{L^2(\mathbb{T}^n)}^2 + \int_0^t \| \nabla v_j (\tau) \|_{L^2(\mathbb{T}^n)}^2 \,d\tau \leq \frac{1}{2}\| v_0 \|_{L^2(\mathbb{T}^n)}^2 \qquad (j=1, 2)$$ for a.e. $t\in (0, \infty)$.* *Then, $v_1\equiv v_2$ on $[0, T]$ holds.* **Remark 1**. (1) The assumption $v_1\in X_T^{\beta, p}\equiv L^\beta(0, T; \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} )$ means that $v_1$ belongs to scaling invariant spaces in the sense that $$\| (v_1)_\lambda \|_{X_{T, \lambda}^{\beta, p}} = \| v_1 \|_{X_T^{\beta, p}} \qquad \text{for $\lambda>0$},$$ where $(v_1)_\lambda(x_1, x_2, z, t) = \lambda v_1(\lambda x_1, \lambda x_2, \lambda z, \lambda^2 t)$ and $X_{T, \lambda}^{\beta, p} = L^\beta(0, T/\lambda^2 ; \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}(\mathbb{T}_\lambda ; L_{\mathrm{H}}^{\infty}({ \mathbb{T}_\lambda^{n-1} }) ) )$. 
Indeed, the condition [\[eq:scaling invariant\]](#eq:scaling invariant){reference-type="eqref" reference="eq:scaling invariant"} can be immediately rewritten as $$\frac{2}{\beta} + \frac{1}{p} + \frac{n-1}{\infty} = 1 + \left( -\frac{1}{p} \right).$$ (2) We can see from the proof that the uniqueness is valid even if the assumption [\[eq:asp\]](#eq:asp){reference-type="eqref" reference="eq:asp"} is replaced by $$v_1 \in L^\beta(0, T; L_{\mathrm{H}}^{\infty}\dot{B}^{{-1/p}}_{{p}, {\infty}, {z}} ) \cap L^{\gamma}(0, T; L_{\mathrm{H}}^{\infty}\dot{B}^{{2/\gamma}}_{{2}, {2}, {z}}).$$ (3) (comparison with the $z$-weak solution framework in the 3D case) As we stated, letting $\gamma\to 2+0$, $L^{\gamma}(0, T; \dot{B}^{{2/\gamma}}_{{2}, {2}, {z}}L_{\mathrm{H}}^{\infty})$ tends to $L^2(0, T; \dot{H}^1_zL_{\mathrm{H}}^{\infty})$. For the three-dimensional case, if we formally see $L_{\mathrm{H}}^{\infty}(\mathbb{T}^2)$ as $\dot{H}^1_\mathrm{H}(\mathbb{T}^2)$ [^1], the condition $$v_1\in L^2(0, T; \dot{H}^1_zL_{\mathrm{H}}^{\infty})$$ can be seen as $$\partial_{z}\nabla_{\mathrm{H}}v_1 \in L^2(0, T; L^2),$$ which corresponds to the definition of $z$-weak solutions. (4) (comparison with Ju [@J2021] for the 2D case) Letting $\gamma\to 2+0$, $L^{\gamma}(0, T; \dot{B}^{{2/\gamma}}_{{2}, {2}, {z}}L_{\mathrm{H}}^{\infty})$ tends to $L^2(0, T; \dot{H}^1_zL_{\mathrm{H}}^{\infty})$. This corresponds to the condition $$\partial_{z}v_1 \in L^2(0, T; L^2_zL_{\mathrm{H}}^{\infty}),$$ which is one of those derived by Ju [@J2021]. Previous results by [@J2021; @LT2017] showed the energy equality for weak solutions *satisfying the energy inequality*, whereas we prove the energy equality for weak solutions in a slightly larger class than that given in [\[eq:asp\]](#eq:asp){reference-type="eqref" reference="eq:asp"}, without assuming the energy inequality. **Theorem 2** (energy equality). 
*If a weak solution $v$ with $v(0)=v_0\in \mathcal{H}$ to [\[eq:PE\]](#eq:PE){reference-type="eqref" reference="eq:PE"} satisfies $$\label{eq:asp EE} v \in L^\beta( 0, T; \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} ) \cap L^{\gamma}( 0, T; \dot{B}^{{2/\gamma}}_{{2}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} )$$ for some $T=T_v>0$, where $$\frac{2}{\beta} + \frac{2}{p} = 1 \qquad ( 2 < p,\, \beta < \infty),$$ and for some $2<\gamma<\infty$.* *Then, the weak solution $v$ satisfies the following energy equality: $$\label{eq:energy equality} \frac{1}{2}\| v(t) \|_{L^2}^2 + \int_0^t \| \nabla v (\tau) \|_{L^2}^2 \,d\tau = \frac{1}{2}\| v_0 \|_{L^2}^2$$ for $0<t\leq T$.* **Remark 2**. We can see from the proof that the energy equality [\[eq:energy equality\]](#eq:energy equality){reference-type="eqref" reference="eq:energy equality"} is valid even if the assumption [\[eq:asp EE\]](#eq:asp EE){reference-type="eqref" reference="eq:asp EE"} is replaced by $$v \in L^\beta( 0, T; L_{\mathrm{H}}^{\infty}\dot{B}^{{-1/p}}_{{p}, {\infty}, {z}} ) \cap L^{\gamma}( 0, T; L_{\mathrm{H}}^{\infty}\dot{B}^{{2/\gamma}}_{{2}, {\infty}, {z}} ).$$ The proof of the energy equality [\[eq:energy equality\]](#eq:energy equality){reference-type="eqref" reference="eq:energy equality"} is an application of the technique essentially used in the proof of the uniqueness (). We will prove in . Combining and we obtain uniqueness of strong solutions. More precisely, we obtain **Corollary 1** (strong uniqueness). *Let $v_1,v_2 \in C_w([0, \infty); \mathcal{H}) \cap L^2_\mathrm{loc}([0, \infty); (H^1\cap \mathcal{H}))$ with the same initial value $v_1(0)=v_2(0)=v_0\in \mathcal{H}$ be weak solutions to [\[eq:PE\]](#eq:PE){reference-type="eqref" reference="eq:PE"} or [\[eq:PE2\]](#eq:PE2){reference-type="eqref" reference="eq:PE2"}. 
Suppose that there exists $T=T_{v_1,v_2}>0$ such that $$\label{eq:asp2} v_1, v_2 \in L^\beta(0, T; \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} ) \cap L^{\gamma}(0, T; \dot{B}^{{2/\gamma}}_{{2}, {2}, {z}}L_{\mathrm{H}}^{\infty}),$$ where $\beta$ and $p$ satisfy [\[eq:scaling invariant\]](#eq:scaling invariant){reference-type="eqref" reference="eq:scaling invariant"} and $2<\gamma<\infty$.* *Then, $v_1\equiv v_2$ holds on $[0, T]$.* # Preliminaries {#sec:prelim} ## Definition and important properties of the toroidal Besov spaces {#subsec:toroidal Besov} In this subsection, we mainly refer to [@T2019]; we also refer to Schmeisser--Triebel [@ST1987], Triebel [@T1983] and Xiong--Xu--Yin [@XXY2018]. In the following, $\mathbb{T}_\lambda=\mathbb{R}/(2\pi\lambda\mathbb{Z})$ denotes the one-dimensional torus whose period is $2\pi\lambda$ for $\lambda>0$. We define $\mathcal{D}(\mathbb{T}_\lambda)$ and its subspace $\mathcal{D}_0(\mathbb{T}_\lambda)$ as $$\begin{aligned} & \mathcal{D}(\mathbb{T}_\lambda) = \left\{ \phi\in C^\infty (\mathbb{R}) \,;\,\text{ $ \phi(z)=\phi(z') $ for $z-z'\in 2\pi\lambda\mathbb{Z}$ } \right\}, \\ & \mathcal{D}_0(\mathbb{T}_\lambda) = \left\{ \phi\in \mathcal{D}(\mathbb{T}_\lambda) \,;\,\int_{[-\pi\lambda, \pi\lambda]} \phi(z) \, dz = 0 \right\}.\end{aligned}$$ Here the locally convex topology in $\mathcal{D}(\mathbb{T}_\lambda)$ is generated by the semi-norms $$[ \phi ]_{l} = \sup_{z\in \mathbb{T}_\lambda} | \partial_z^{l}\phi (z) | \qquad \text{for $l\in\mathbb{N}\cup\{0\}$},$$ and we equip $\mathcal{D}_0(\mathbb{T}_\lambda)$ with the relative topology of $\mathcal{D}(\mathbb{T}_\lambda)$. We define $\mathcal{D}^\prime (\mathbb{T}_\lambda)$ (resp. $\mathcal{D}_0^\prime (\mathbb{T}_\lambda)$) as the topological dual space of $\mathcal{D}(\mathbb{T}_\lambda)$ (resp. $\mathcal{D}_0(\mathbb{T}_\lambda)$).
Any $\phi\in\mathcal{D}(\mathbb{T}_\lambda)$ has the Fourier series representation $$\phi(z) = \sum_{k\in\mathbb{Z}} \, [\mathcal{F}_{\mathbb{T}_\lambda} \phi](k) e^{i k\cdot (\lambda^{-1} z)} \qquad \text{in $\mathcal{D}(\mathbb{T}_\lambda)$},$$ where $\mathcal{F}_{\mathbb{T}_\lambda}\phi$ is defined by $$[\mathcal{F}_{\mathbb{T}_\lambda} \phi](k) \equiv \frac{1}{2\pi\lambda} \int_{[-\pi\lambda, \pi\lambda]} \phi(z) e^{-ik\cdot (\lambda^{-1} z)} \, dz \qquad \text{for $k\in\mathbb{Z}$}.$$ We can easily see that the Fourier coefficients $[\mathcal{F}_{\mathbb{T}_\lambda} \phi] (k)$ are rapidly decreasing as $|k|\to\infty$, namely, $\mathcal{F}_{\mathbb{T}_\lambda} \phi \in \mathcal{S}_\lambda(\mathbb{Z})$, where $$\mathcal{S}_\lambda(\mathbb{Z}) \equiv \left\{ \{a_k\}_{k\in\mathbb{Z}}\subset \mathbb{C} \,;\,\text{for any $m\in\mathbb{N}\cup\{0\}$, there exists $C_m>0$ s.t.~ $\displaystyle|a_k| \leq \frac{C_m}{(1+|k/\lambda|)^m} $ }\right\}.$$ We equip $\mathcal{S}_\lambda(\mathbb{Z})$ with the locally convex topology generated by the norms $$\| \{a_k\}_{k\in\mathbb{Z}} \|_{\ell^\infty_{m, \lambda}} \equiv \sup_{k\in\mathbb{Z}} \left( (1+|k/\lambda|)^m|a_k| \right), \qquad \text{for $m\in\mathbb{N}\cup\{0\}$}.$$ We also define $\mathcal{F}_{\mathbb{T}_{\lambda}}^{-1}: \mathcal{S}_\lambda(\mathbb{Z})\to \mathcal{D}(\mathbb{T}_\lambda)$ as $$[\mathcal{F}^{-1}_{\mathbb{T}_\lambda}g](z) \equiv \sum_{k\in\mathbb{Z}} g(k) e^{ik\cdot(\lambda^{-1}z)}.$$ Here we also define $\mathcal{F}_{\mathbb{T}_\lambda}: \mathcal{D}^\prime(\mathbb{T}_\lambda)\to \mathcal{S}^\prime_\lambda(\mathbb{Z})$ and $\mathcal{F}_{\mathbb{T}_{\lambda}}^{-1}: \mathcal{S}^\prime_\lambda(\mathbb{Z})\to \mathcal{D}^\prime(\mathbb{T}_\lambda)$ as $$\langle{ \mathcal{F}_{\mathbb{T}_{\lambda}}f , \alpha }\rangle_{\mathcal{S}_\lambda^\prime(\mathbb{Z})\times \mathcal{S}_\lambda(\mathbb{Z})} \equiv \langle{ f, \mathcal{F}^{-1}_{\mathbb{T}_\lambda}[\alpha(-\cdot )] 
}\rangle_{\mathcal{D}^\prime(\mathbb{T}_\lambda)\times\mathcal{D}(\mathbb{T}_\lambda)} \qquad \text{for any $\alpha\in\mathcal{S}_\lambda(\mathbb{Z})$},$$ and $$\langle{ \mathcal{F}^{-1}_{\mathbb{T}_{\lambda}}g , \phi }\rangle_{\mathcal{D}^\prime(\mathbb{T}_\lambda)\times\mathcal{D}(\mathbb{T}_\lambda)} \equiv \langle{ g, \mathcal{F}_{\mathbb{T}_\lambda}[\phi(-\cdot)] }\rangle_{\mathcal{S}_\lambda^\prime(\mathbb{Z})\times \mathcal{S}_\lambda(\mathbb{Z})} \qquad \text{for any $\phi\in\mathcal{D}(\mathbb{T}_\lambda)$},$$ respectively, where $\mathcal{S}^\prime_\lambda(\mathbb{Z})$ is the topological dual space of $\mathcal{S}_\lambda(\mathbb{Z})$. We also define the convolution of $\phi, \psi\in \mathcal{D}(\mathbb{T}_\lambda)$ (or $\phi, \psi\in \mathcal{D}_0(\mathbb{T}_\lambda)$) as $$(\phi\ast \psi)(z) = \int_{[-\pi\lambda, \pi\lambda]} \phi(z-\zeta) \psi(\zeta)\,d\zeta.$$ Using this definition, we define the convolution of $\phi\in \mathcal{D}(\mathbb{T}_\lambda)$ and $f\in \mathcal{D}^\prime(\mathbb{T}_\lambda)$ (resp. $\phi\in \mathcal{D}_0(\mathbb{T}_\lambda)$ and $f\in \mathcal{D}_0^\prime(\mathbb{T}_\lambda)$) as $$\langle{ \phi \ast f, \psi }\rangle \equiv \langle{ f, \phi(-\cdot)\ast \psi }\rangle = \left\langle{ f, \int_{[-\pi\lambda, \pi\lambda]} \phi(y-\cdot)\psi(y)\,dy }\right\rangle$$ for any $\psi\in\mathcal{D}(\mathbb{T}_\lambda)$ (resp. $\psi\in\mathcal{D}_0(\mathbb{T}_\lambda)$). Next, we recall some properties of the Littlewood--Paley decomposition. We first take a smooth, radially symmetric function $\chi\in C_0^\infty(\mathbb{R})$ satisfying $$\chi(\xi) = \chi (|\xi|) =\begin{cases} 1 & (|\xi|\leq1/2), \\ 0 & (|\xi|\geq1). \end{cases}$$ We define the function $\rho$ on $\mathbb{R}$ as $\rho(\xi) \equiv \chi(\xi/2)-\chi(\xi)$. Then, it is easy to see that $\{\rho_{\lambda, j}\}_{j\in\mathbb{Z}}$ defined as $$\label{eq: rhoj} \rho_{\lambda, j}(\xi) \equiv \rho(2^{-j}\lambda^{-1}\xi)$$ satisfies the following properties. **Proposition 1**.
*Let $\lambda>0$, and let $\{\rho_{\lambda, j}\}_{j\in\mathbb{Z}}$ be the partition of unity defined as [\[eq: rhoj\]](#eq: rhoj){reference-type="eqref" reference="eq: rhoj"}.* (1) *$\sum\limits_{j\in\mathbb{Z}}\rho_{\lambda, j}(\xi)=1 \qquad \text{for $\xi\neq0$}$.* (2) *$\mathop{\mathrm{supp}}{ \rho_{\lambda, j} } \subset \left\{ \xi\in\mathbb{R}\,;\,2^{j-1}\leq |\lambda^{-1}\xi|\leq 2^{j+1} \right\}$.* (3) *$\mathop{\mathrm{supp}}{ \rho_{\lambda, j} }\cap \mathop{\mathrm{supp}}{ \rho_{\lambda, j'} } =\emptyset \qquad \text{for $|j-j'|\geq2$}.$* Let $\varphi_{\lambda, j} = \mathcal{F}_{\mathbb{T}_\lambda}^{-1}[\rho_{\lambda, j}|_{\mathbb{Z}}]$, where $\rho_{\lambda, j}|_{\mathbb{Z}}$ denotes the restriction of $\rho_{\lambda, j}$ to $\mathbb{Z}$. From , it follows that $\varphi_{\lambda, j}\in \mathcal{D}(\mathbb{T}_\lambda)$. Next, we give the definition of the homogeneous toroidal Besov spaces on $\mathbb{T}_\lambda$. **Definition 1** (homogeneous toroidal Besov spaces, cf. [@T2019]). Let $s\in \mathbb{R}$, $1\leq p \leq \infty$ and $1\leq q \leq \infty$. We define the homogeneous toroidal Besov space $\dot{B}^{{s}}_{{p}, {q}}(\mathbb{T}_\lambda)$ as the set of all distributions $u\in \mathcal{D}_0^\prime(\mathbb{T}_\lambda)$ satisfying $$\| u \|_{\dot{B}^{{s}}_{{p}, {q}}(\mathbb{T}_\lambda)} = \begin{cases} \vspace{10pt} \displaystyle\left( \sum_{j\in\mathbb{Z}} \left| 2^{sj} \| \varphi_{\lambda, j} \ast u \|_{L^p(\mathbb{T}_\lambda)} \right|^q \right)^{1/q} < \infty & (1\leq q<\infty), \\ \displaystyle\sup_{j\in\mathbb{Z}} \left( 2^{sj} \| \varphi_{\lambda, j} \ast u \|_{L^p(\mathbb{T}_\lambda)} \right) < \infty & (q=\infty). \end{cases}$$ **Remark 3**. For the definition of inhomogeneous Besov spaces on a torus, readers can refer to [@ST1987; @T1983; @XXY2018]. By the definition of the norm of Besov spaces, it is easy to see that the following property holds. **Lemma 1**. *Let $s\in\mathbb{R}$, $1\leq p\leq \infty$ and $1\leq q_1 \leq q_2 \leq \infty$.
Then, the following inclusion relation holds: $$\dot{B}^{{s}}_{{p}, {q_1}}(\mathbb{T}_\lambda) \subset \dot{B}^{{s}}_{{p}, {q_2}}(\mathbb{T}_\lambda).$$* For $p=2$, the following remarkable properties hold. **Proposition 2**. (1) *Let $l\in\mathbb{N}\cup\{0\}$. Then, there exist absolute constants $C_1=C_1(l, \lambda)$ and $C_2=C_2(l, \lambda)$ such that $$\label{eq: equiv} C_1\| u \|_{\dot{W}^{l, 2}(\mathbb{T}_\lambda)} \leq \| u \|_{\dot{B}^{{l}}_{{2}, {2}}(\mathbb{T}_\lambda)} \leq C_2 \| u \|_{\dot{W}^{l, 2}(\mathbb{T}_\lambda)}$$ for any $u\in \dot{W}^{l, 2}(\mathbb{T}_\lambda)$, where $$\begin{aligned} &\, \dot{W}^{l, 2}(\mathbb{T}_\lambda) \\ =&\, \begin{cases} \vspace{10pt} \left\{ u\in L_{\mathrm{loc}}^1(\mathbb{T}_\lambda) \,;\,\| u \|_{\dot{W}^{l, 2}(\mathbb{T}_\lambda)} \equiv \| \partial_{z}^l u \|_{L^2(\mathbb{T}_\lambda)}<\infty \right\} & (l\geq1), \\ \displaystyle \dot{L}^2(\mathbb{T}_\lambda) = \left\{ u\in L^2(\mathbb{T}_\lambda) \,;\, \| u \|_{\dot{L}^2(\mathbb{T}_\lambda)} \equiv \left\| u - [\mathcal{F}_{\mathbb{T}_\lambda}u](0) \right\|_{L^2(\mathbb{T}_\lambda)}<\infty \right\} & (l=0), \end{cases} \end{aligned}$$ and $\partial_{z}^l$ denotes the $l$-th weak derivative.* (2) *In particular, it holds that $$\label{eq: equiv L^2} \| u \|_{\dot{B}^{{0}}_{{2}, {2}}(\mathbb{T}_\lambda)} \leq C \| u \|_{L^2(\mathbb{T}_\lambda)}$$ for any $u \in L^2(\mathbb{T}_\lambda)$, where $C=C(\lambda)>0$.* **Remark 4**. In the homogeneous Besov spaces on the whole space $\mathbb{R}$ (or more generally, $\mathbb{R}^n$), it is well known that the same property as in (1) is valid. For example, Bahouri--Chemin--Danchin [@BCD2011 p.63] mention this for the Riesz potential spaces $\dot{H}^s$. We also list the important properties of the toroidal Besov spaces. As in , the following proposition is also valid for the whole space case. **Proposition 3**.
(1) *We define the one-dimensional fractional derivative operator $(-\partial_{zz})^{s/2}$ ($s\in\mathbb{R}$) on $\mathbb{T}_\lambda$ as $$(-\partial_{zz})^{s/2}f \equiv \mathcal{F}_{\mathbb{T}_\lambda}^{-1}[ |\lambda^{-1}\cdot|^{s} \mathcal{F}_{\mathbb{T}_\lambda}f ] \qquad \text{for $f\in\mathcal{D}_0^\prime(\mathbb{T}_\lambda)$}.$$ For $s\in\mathbb{R}$ and $1\leq p\leq \infty$, Bernstein's inequality holds in the sense that $$\| (-\partial_{zz})^{s/2} \varphi_{\lambda, j} \ast u \|_{L^p(\mathbb{T}_\lambda)} \leq C 2^{sj} \| \varphi_{\lambda, j} \ast u \|_{L^p(\mathbb{T}_\lambda)},$$ where $C=C(s)>0$ does not depend on $p$ and $\lambda$.* (2) *Let $1\leq p \leq q \leq \infty$. It holds that $$\| \varphi_{\lambda, j} \ast u \|_{L^q} \leq C 2^{(1/p-1/q)j} \| \varphi_{\lambda, j} \ast u \|_{L^p},$$ where $C=C(p, q, \lambda)>0$.* (3) *Let $-\infty<s_0\leq s_1<\infty$, $1\leq p_1\leq p_0\leq \infty$ and $1\leq q \leq \infty$ satisfy $s_1-1/p_1 = s_0-1/p_0$. Then, the Sobolev-type continuous embedding $$\dot{B}^{{s_1}}_{{p_1}, {q}} \hookrightarrow \dot{B}^{{s_0}}_{{p_0}, {q}}$$ holds with the estimate $$\label{eq: Sobolev} \| u \|_{\dot{B}^{{s_0}}_{{p_0}, {q}}} \leq C \| u \|_{\dot{B}^{{s_1}}_{{p_1}, {q}}},$$ where $C=C(p_0, p_1, q, s_0, s_1, \lambda)>0$.* ## Weak solutions to the primitive equations {#subsec:weak sol} **Definition 2** (weak solution to the 3D primitive equations).
A function $v: \mathbb{T}^3\times (0, \infty)\to \mathbb{R}^{2}$ is called a *weak solution* to the primitive equations [\[eq:PE\]](#eq:PE){reference-type="eqref" reference="eq:PE"} if the following conditions hold: (i) $v\in C_w([0, \infty); \mathcal{H} ) \cap L^2_\mathrm{loc}([0, \infty); (H^1\cap \mathcal{H} )(\mathbb{T}^3))$; (ii) $v$ satisfies [\[eq:PE\]](#eq:PE){reference-type="eqref" reference="eq:PE"} in the weak sense: $$- \int_0^\infty \int_{\mathbb{T}^3} v \cdot \partial_\tau \phi \,dx d\tau + \int_0^\infty \int_{\mathbb{T}^3} \nabla v \, \colon \nabla \phi \,dx d\tau - \int_0^\infty \int_{\mathbb{T}^3} (u \otimes v) \, \colon \nabla \phi \,dx d\tau = (v_0, \phi(0))$$ for any $\phi \in C([0, \infty); (C^1\cap \mathcal{H})(\mathbb{T}^3))\cap C^1([0, \infty); C(\mathbb{T}^3))$ that is compactly supported in time. Here, $A\,\colon B$ denotes the inner product of the $3 \times 2$ matrices $A$ and $B$. **Definition 3** (weak solution to the 2D primitive equations). A function $v: \mathbb{T}^2\times (0, \infty)\to \mathbb{R}^2$ is called a *weak solution* to the two-dimensional primitive equations [\[eq:PE2\]](#eq:PE2){reference-type="eqref" reference="eq:PE2"} if the following conditions hold: (i) $v={}^\top(v^1, v^2)\in C_w([0, \infty); \mathcal{H}) \cap L^2_\mathrm{loc}([0, \infty); (H^1\cap \mathcal{H})(\mathbb{T}^2))$; (ii) $v$ satisfies [\[eq:PE2\]](#eq:PE2){reference-type="eqref" reference="eq:PE2"} in the weak sense: $$\begin{aligned} & - \int_0^\infty \int_{\mathbb{T}^2} v \cdot \partial_\tau \phi \,dx d\tau + \int_0^\infty \int_{\mathbb{T}^2} \nabla_{x_1, z} v \, \colon\nabla_{x_1, z} \phi \,dx d\tau \\ & \hspace{130pt} - \int_0^\infty \int_{\mathbb{T}^2} ({}^\top(v^1, w)\otimes v) \, \colon \nabla_{x_1, z} \phi \,dx d\tau = (v_0, \phi(0)) \end{aligned}$$ for any $\phi \in C([0, \infty); (C^1\cap \mathcal{H})(\mathbb{T}^2))\cap C^1([0, \infty); C(\mathbb{T}^2))$ that is compactly supported in time.
In this two-dimensional case, $A\,\colon B$ denotes the inner product of the $2\times 2$ matrices $A$ and $B$. The following proposition states that any weak solution on $[0, \infty)$ is also a weak solution on $[s, t]$ ($0\leq s<t<\infty$). For the proof, see [@LT2017 Proposition 3.2]. The two-dimensional case may be proved in the same way. **Proposition 4** (cf. [@LT2017 Proposition 3.2]). (1) *Let $v$ be a weak solution to the three-dimensional primitive equations [\[eq:PE\]](#eq:PE){reference-type="eqref" reference="eq:PE"}. Then, for any $0\leq s < t<\infty$, it holds that $$\label{eq:weak solution PE} \left. \int_{\mathbb{T}^3} v\cdot\phi\,dx \right|_{\tau=s}^{\tau=t} - \int_s^t \int_{\mathbb{T}^3} v\cdot \partial_\tau\phi\,dxd\tau + \int_s^t \int_{\mathbb{T}^3} \nabla v : \nabla\phi \, dxd\tau - \int_s^t \int_{\mathbb{T}^3} (u\otimes v): \nabla\phi \,dxd\tau = 0$$ for any $\phi \in C([s, t]; (C^1\cap \mathcal{H})(\mathbb{T}^3))\cap C^1([s, t]; C(\mathbb{T}^3))$.* (2) *Let $v$ be a weak solution to the two-dimensional primitive equations [\[eq:PE2\]](#eq:PE2){reference-type="eqref" reference="eq:PE2"}. Then, for any $0\leq s < t<\infty$, it holds that $$\begin{aligned} \begin{aligned}\label{eq:weak solution PE2} & \left. \int_{\mathbb{T}^2} v\cdot\phi\,dx \right|_{\tau=s}^{\tau=t} - \int_s^t \int_{\mathbb{T}^2} v\cdot \partial_\tau\phi \,dxd\tau + \int_s^t \int_{\mathbb{T}^2} \nabla_{x_1, z} v : \nabla_{x_1, z}\phi \, dxd\tau \\ &\hspace{150pt} - \int_s^t \int_{\mathbb{T}^2} ( {}^\top (v^1, w)\otimes v): \nabla_{x_1, z}\phi \,dxd\tau = 0 \end{aligned} \end{aligned}$$ for any $\phi \in C([s, t]; (C^1\cap \mathcal{H})(\mathbb{T}^2))\cap C^1([s, t]; C(\mathbb{T}^2))$.* In , we imposed the energy inequality [\[eq:energy inequality vj\]](#eq:energy inequality vj){reference-type="eqref" reference="eq:energy inequality vj"} on the weak solutions $v_1$ and $v_2$. This is because the following remarkable lemma can be deduced.
For the detailed proof, see [@J2021; @LT2017]. **Lemma 2** (cf. [@J2021 Theorem 3.1, Theorem 3.3], [@LT2017 Proposition 3.1, Proposition 3.4]). *Let $v$ be a weak solution to the three-dimensional primitive equations [\[eq:PE\]](#eq:PE){reference-type="eqref" reference="eq:PE"} or the two-dimensional one [\[eq:PE2\]](#eq:PE2){reference-type="eqref" reference="eq:PE2"}. Suppose that $$\label{eq:energy inequality} \frac{1}{2}\| v(t) \|_{L^2(\mathbb{T}^n)}^2 + \int_0^t \| \nabla v (\tau) \|_{L^2(\mathbb{T}^n)}^2 \,d\tau \leq \frac{1}{2}\| v_0 \|_{L^2(\mathbb{T}^n)}^2 \qquad \text{($n=3$ or $2$)}$$ for a.e. $t\in (0, \infty)$.* *Then, the following statements hold.* 1. *There exists a subset $E\subset (0, \infty)$ of measure zero such that $$\lim_{\substack{t\to 0+,\\ t\in (0, \infty)\setminus E}} v(t)= v_0 \qquad \text{in $L^2(\mathbb{T}^n)$}.$$* 2. *If $v_0\in H^1\cap \mathcal{H}$, then $v$ coincides with the unique global strong solution with the same initial data $v_0$.* 3. *$v$ is right continuous in $L^2(\mathbb{T}^n)$ at $t=0$, i.e., $v(t)\to v_0$ as $t\to +0$ in $L^2(\mathbb{T}^n)$.* **Remark 5**. (1) Note that (3) implies additional time regularity $v~\in~BC([0, \infty); L^2(\mathbb{T}^n))$. (2) It should be noted that Li--Titi [@LT2017] use a stronger definition of weak solution (see [@LT2017 Definition 1.1]) which requires the energy inequality and an energy-type differential inequality. In particular, this differential inequality is not used in the proof. # Proof of the uniqueness {#sec:proof} ## 3D case {#subsec:uniqueness 3D} In this subsection, let $v_d \equiv v_1 - v_2$. In order to prove the uniqueness, we introduce the following lemma, which is a consequence of [@LT2017 Corollary 3.1 (i)]. **Lemma 3**.
*Let $v_1$ and $v_2$ be weak solutions to the three-dimensional primitive equations [\[eq:PE\]](#eq:PE){reference-type="eqref" reference="eq:PE"} with the energy inequality.* *Then, we have $$\begin{aligned} \begin{aligned}\label{eq:1st form} &\, \frac{1}{2}\| v_d (t) \|_{L^2}^2 - \frac{1}{2}\| v_d(s) \|_{L^2}^2 + \int_s^t \| \nabla v_d (\tau) \|^2_{L^2} \,d \tau \\ =&\, \int_s^t \int_{\mathbb{T}^3} ( u_d \otimes v_1 )(x, \tau) \colon \nabla v_d (x, \tau) \,d x d \tau \\ =&\, \int_s^t \int_{\mathbb{T}^3} ( v_d \otimes v_1 )(x, \tau) \colon \nabla_{\mathrm{H}}v_d (x, \tau) \,d x d \tau + \int_s^t \int_{\mathbb{T}^3} w_d(x, \tau) v_1(x, \tau) \cdot \partial_{z}v_d (x, \tau) \,d x d \tau \end{aligned}\end{aligned}$$ for $0<s\leq t\leq T$.* *Proof.* From [@LT2017 Corollary 3.1 (i)], it follows that every weak solution with the energy inequality is smooth. In particular, $v_1$ and $v_2$ are smooth. Therefore, we obtain [\[eq:1st form\]](#eq:1st form){reference-type="eqref" reference="eq:1st form"} by a direct calculation. 
◻ With the aid of the Littlewood--Paley decomposition and Bony's decomposition, we can decompose the right-hand side of [\[eq:1st form\]](#eq:1st form){reference-type="eqref" reference="eq:1st form"} into the following six terms: $$\begin{aligned} \begin{aligned}\label{eq:RHS 1st form} \text{R.H.S.~ of \eqref{eq:1st form}} \ &= \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \int_{\mathbb{T}^3} \left[ \widetilde{\varphi}_j \ast \left( ( \varphi_{j+k} \ast v_d ) \otimes ( P_{j+k}v_1 ) \right) \right] \colon \left[ \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \right] \,dx d\tau \\ %1 &\quad + \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \int_{\mathbb{T}^3} \left[ \widetilde{\varphi}_j \ast \left( ( P_{j+k} v_d ) \otimes ( \varphi_{j+k}\ast v_1 ) \right) \right] \colon \left[ \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \right] \,dx d\tau \\ %2 &\quad +\sum_{j\in\mathbb{Z}} \sum_{ \substack{ \max{ \{ k, l \} } \\ \qquad \geq j-3 } } \sum_{ |k-l| \leq 2 } \int_s^t \int_{\mathbb{T}^3} \left[ \widetilde{\varphi}_j \ast \left( ( \varphi_{k} \ast v_d ) \otimes ( \varphi_{l} \ast v_1 ) \right) \right] \colon \left[ \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \right] \,dx d\tau \\ %3 &\quad + \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \int_{\mathbb{T}^3} \left[ \widetilde{\varphi}_j \ast \left( ( \varphi_{j+k} \ast w_d ) ( P_{j+k}v_1 ) \right) \right] \cdot \left[ \varphi_j \ast ( \partial_{z}v_d ) \right] \,dx d\tau \\ %4 &\quad + \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \int_{\mathbb{T}^3} \left[ \widetilde{\varphi}_j \ast \left( ( P_{j+k}w_d ) ( \varphi_{j+k} \ast v_1 ) \right) \right] \cdot \left[ \varphi_j \ast ( \partial_{z}v_d ) \right] \,dx d\tau \\ %5 &\quad + \sum_{j\in\mathbb{Z}} \sum_{ \substack{ \max{ \{ k, l \} } \\ \qquad \geq j-3 } } \sum_{ |k-l| \leq 2 } \int_s^t \int_{\mathbb{T}^3} \left[ \widetilde{\varphi}_j \ast \left( ( \varphi_{k} \ast w_d ) ( \varphi_{l} \ast v_1 ) \right) \right] \cdot \left[ \varphi_j \ast ( \partial_{z}v_d ) \right] \,dx d\tau \\
%6 &\eqqcolon J_1 + J_2 + J_3 + J_4 + J_5 + J_6, \end{aligned}\end{aligned}$$ where $\ast$ denotes the one-dimensional convolution on $\mathbb{T}$ in the $z$-direction, and where $\displaystyle P_{m} f \equiv \sum_{j\leq m-3} \varphi_j \ast f$. In the following, we will show $$\label{eq: desired} J_1 + \cdots + J_6 \leq \int_s^t \| \nabla v_d (\tau) \|_{L^2}^2 \,d\tau + C \int_s^t \| v_d (\tau) \|_{L^2}^2 \left( \| v_1 (\tau) \|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} }^\beta + \| v_1 (\tau) \|_{\dot{B}^{{2/\gamma}}_{{2}, {2}, {z}}L_{\mathrm{H}}^{\infty}}^{\gamma} \right) \, d\tau,$$ where the absolute constant $C$ does not depend on $s$ and $t$, so that we obtain $$\begin{aligned} \begin{aligned}\label{eq: desired 2} \frac{1}{2}\| v_d (t) \|_{L^2}^2 - \frac{1}{2}\| v_d(s) \|_{L^2}^2 &\leq C \int_s^t \| v_d (\tau) \|_{L^2}^2 \left( \| v_1 (\tau) \|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} }^\beta + \| v_1 (\tau) \|_{\dot{B}^{{2/\gamma}}_{{2}, {2}, {z}}L_{\mathrm{H}}^{\infty}}^{\gamma} \right) \, d\tau \\ &\leq C\int_0^t \| v_d (\tau) \|_{L^2}^2 \left( \| v_1 (\tau) \|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} }^\beta + \| v_1 (\tau) \|_{\dot{B}^{{2/\gamma}}_{{2}, {2}, {z}}L_{\mathrm{H}}^{\infty}}^{\gamma} \right) \, d\tau \end{aligned}\end{aligned}$$ for $0 < s \leq t \leq T$.
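The passage from [\[eq:1st form\]](#eq:1st form){reference-type="eqref" reference="eq:1st form"} and [\[eq: desired\]](#eq: desired){reference-type="eqref" reference="eq: desired"} to [\[eq: desired 2\]](#eq: desired 2){reference-type="eqref" reference="eq: desired 2"} is a standard absorption argument, which we spell out for the reader's convenience: substituting [\[eq: desired\]](#eq: desired){reference-type="eqref" reference="eq: desired"} into the right-hand side of [\[eq:1st form\]](#eq:1st form){reference-type="eqref" reference="eq:1st form"} gives $$\frac{1}{2}\| v_d (t) \|_{L^2}^2 - \frac{1}{2}\| v_d(s) \|_{L^2}^2 + \int_s^t \| \nabla v_d (\tau) \|^2_{L^2} \,d\tau \leq \int_s^t \| \nabla v_d (\tau) \|_{L^2}^2 \,d\tau + C \int_s^t \| v_d (\tau) \|_{L^2}^2 \left( \| v_1 (\tau) \|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} }^\beta + \| v_1 (\tau) \|_{ \dot{B}^{{2/\gamma}}_{{2}, {2}, {z}}L_{\mathrm{H}}^{\infty} }^\gamma \right) \, d\tau,$$ and the dissipation term $\int_s^t \| \nabla v_d (\tau) \|_{L^2}^2 \,d\tau$ cancels from both sides.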
If [\[eq: desired\]](#eq: desired){reference-type="eqref" reference="eq: desired"} and [\[eq: desired 2\]](#eq: desired 2){reference-type="eqref" reference="eq: desired 2"} are proved, then we have $$\begin{aligned} &\, \frac{1}{2}\| v_d (t) \|_{L^2}^2 - \frac{1}{2}\| v_d(0) \|_{L^2}^2 \\ \leq&\, C \int_0^t \| v_d (\tau) \|_{L^2}^2 \left( \| v_1 (\tau) \|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} }^\beta + \| v_1 (\tau) \|_{ \dot{B}^{{2/\gamma}}_{{2}, {2}, {z}}L_{\mathrm{H}}^{\infty} }^\gamma \right) \, d\tau + \frac{1}{2} \left( \| v_d (s) \|_{L^2}^2 - \| v_d(0) \|_{L^2}^2 \right).\end{aligned}$$ We can deduce from (3) that $$\frac{1}{2}\| v_d (t) \|_{L^2}^2 - \frac{1}{2}\| v_d(0) \|_{L^2}^2 \leq C \int_0^t \| v_d (\tau) \|_{L^2}^2 \left( \| v_1 (\tau) \|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} }^\beta + \| v_1 \|_{\dot{B}^{{2/\gamma}}_{{2}, {2}}L_{\mathrm{H}}^{\infty}}^{\gamma} \right) \, d\tau.$$ Applying Gronwall's inequality, we have by the assumption $v_1\in L^\beta(0, T; \dot{B}^{-1/p}_{p, \infty, z}L_{\mathrm{H}}^{\infty})\cap L^\gamma(0, T; \dot{B}^{2/\gamma}_{2, 2, z}L_{\mathrm{H}}^{\infty})$ and $v_d(0)=v_1(0)-v_2(0)=v_0-v_0=0$ that $$\| v_d (t) \|_{L^2}^2 \leq C \operatorname{exp}{ \left( \int_0^t \left( \| v_1 (\tau) \|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} }^\beta + \| v_1 \|_{\dot{B}^{{2/\gamma}}_{{2}, {2}}L_{\mathrm{H}}^{\infty}}^{\gamma} \right) \, d\tau \right) } \cdot \| v_d(0) \|_{L^2}^2 = 0.$$ Therefore, we have $v_d\equiv 0$ on $[0, T]$. 
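For completeness, the version of Gronwall's inequality invoked above is the standard integral form (any equivalent formulation suffices): if $f\in L^\infty(0, T)$ and $g\in L^1(0, T)$ are nonnegative and $$f(t) \leq f(0) + \int_0^t g(\tau) f(\tau) \,d\tau \qquad \text{for a.e.~$t\in(0, T)$},$$ then $f(t) \leq f(0) \operatorname{exp}{ \left( \int_0^t g(\tau) \,d\tau \right) }$ for a.e. $t\in(0, T)$. Here it is applied with $f(\tau) = \| v_d(\tau) \|_{L^2}^2$ and $g(\tau) = 2C \left( \| v_1 (\tau) \|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} }^\beta + \| v_1 (\tau) \|_{ \dot{B}^{{2/\gamma}}_{{2}, {2}, {z}}L_{\mathrm{H}}^{\infty} }^\gamma \right)$; the assumption [\[eq:asp\]](#eq:asp){reference-type="eqref" reference="eq:asp"} is exactly what guarantees $g\in L^1(0, T)$, and $f(0) = \| v_d(0) \|_{L^2}^2 = 0$ then forces $f\equiv 0$.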
*[Step 1.]{.ul} Estimate of $J_1$.* We can estimate $J_1$ by Hölder's inequality as follows: $$\begin{aligned} \begin{aligned}\label{eq: J1_1} J_1 &= \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \int_{\mathbb{T}^3} \left[ \widetilde{\varphi}_j \ast \left( ( \varphi_{j+k} \ast v_d ) \otimes ( P_{j+k}v_1 ) \right) \right] \colon \left[ \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \right] \,dx d\tau \\ & \leq \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \int_{\mathbb{T}^2} \| \widetilde{\varphi}_j \ast \left( ( \varphi_{j+k} \ast v_d ) \otimes ( P_{j+k}v_1 ) \right) \|_{L^2_z} \| \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \|_{L^2_z} \, d x_\mathrm{H}d\tau \\ & \leq C \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \int_{\mathbb{T}^2} \| \varphi_{j+k} \ast v_d \|_{L^q_z} \| P_{j+k}v_1 \|_{L^{p}_z} \cdot \| \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \|_{L^2_z} \, d x_\mathrm{H}d\tau \\ &\leq C \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \int_{\mathbb{T}^2} \| \varphi_{j+k} \ast v_d \|_{L^q_z} \left[ \sum_{l \leq j+k-3} \| \varphi_l \ast v_1 \|_{L^{p}_z} \right] \cdot \| \varphi_j \ast ( \nabla_{\mathrm{H}}v_d) \|_{L^2_z} \, d x_\mathrm{H}d\tau \\ &\leq C \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \left\| \| \varphi_{j+k} \ast v_d \|_{L^q_z} \right\|_{L_{\mathrm{H}}^{2}} \left\| \sum_{l \leq j+k-3} { \| \varphi_l \ast v_1 \|_{L^p_z} } \right\|_{ L_{\mathrm{H}}^{\infty} } \cdot \| \varphi_j \ast (\nabla_{\mathrm{H}}v_d ) \|_{L^2} \, d\tau \end{aligned}\end{aligned}$$ where $C=C(n, p)>0$. 
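The auxiliary exponent $q$ in the Hölder step above is not fixed explicitly at this point; as the remark "$1/2-1/q=1/p$" in the estimate of $J_3$ below indicates, it is determined by $$\frac{1}{q} = \frac{1}{2} - \frac{1}{p}, \qquad \text{so that} \qquad \| ab \|_{L^2_z} \leq \| a \|_{L^q_z} \| b \|_{L^p_z},$$ and $2<q<\infty$ since $2<p<\infty$. The interpolation inequality invoked next, $\| f \|_{L^q_z} \leq \| f \|_{L^2_z}^{1-2/p} \| f \|_{L^\infty_z}^{2/p}$, is then the one with weights $\frac{1}{q} = \frac{1-2/p}{2} + \frac{2/p}{\infty}$.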
Interpolation inequality, (1) for $l=1$, and (2) yield that, for any $j' \in \mathbb{Z}$, $$\begin{aligned} \| \varphi_{j'} \ast v_d \|_{L^q_z} &\leq \| \varphi_{j'} \ast v_d \|_{L^2_z}^{1-2/p} \| \varphi_{j'} \ast v_d \|_{L^\infty_z}^{2/p} \\ &\leq C \| \varphi_{j'} \ast v_d \|_{L^2_z}^{1-2/p} \left[ 2^{j'/2} \| \varphi_{j'} \ast v_d \|_{L^2_z} \right]^{2/p} \\ &\leq C 2^{-j'/p} \| \varphi_{j'} \ast v_d \|_{L^2_z}^{1-2/p} \left[ 2^{j'} \| \varphi_{j'} \ast v_d \|_{L^2_z} \right]^{2/p} \\ &\leq C 2^{-j'/p} \| \varphi_{j'} \ast v_d \|_{L^2_z}^{1-2/p} \| \varphi_{j'} \ast ( \partial_{z}v_d ) \|_{L^2_z}^{2/p}.\end{aligned}$$ Therefore, it follows from Hölder's inequality that $$\begin{aligned} \begin{aligned}\label{eq: Lq-interpolation} \left\| \| \varphi_{j'} \ast v_d \|_{L^q_z} \right\|_{ L_{\mathrm{H}}^{2} } &\leq C2^{-j'/p} \left\| \| \varphi_{j'} \ast v_d \|_{L^2_z}^{1-2/p} \| \varphi_{j'} \ast ( \partial_{z}v_d) \|_{L^2_z}^{2/p} \right\|_{ L_{\mathrm{H}}^{2} } \\ &\leq C2^{-{j'}/p} \| \varphi_{j'} \ast v_d \|_{L^2}^{1-2/p} \| \varphi_{j'} \ast ( \partial_{z}v_d) \|_{L^2}^{2/p} . \end{aligned}\end{aligned}$$ Minkowski's inequality and the definition of the toroidal Besov space yield that $$\begin{aligned} \begin{aligned}\label{eq: v1-lowfreq} \left\| \sum_{l \leq j+k-3} { \| \varphi_l \ast v_1 \|_{L^p_z} } \right\|_{ L_{\mathrm{H}}^{\infty} } &\leq \sum_{l \leq j+k-3} \| \varphi_l \ast v_1 \|_{ L^p_zL_{\mathrm{H}}^{\infty} } \\ &\leq C 2^{(j+k-3)/p} \|v_1\|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} } \\ &\leq C 2^{(j+k)/p} \|v_1\|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} }. 
\end{aligned}\end{aligned}$$ Applying the estimates [\[eq: Lq-interpolation\]](#eq: Lq-interpolation){reference-type="eqref" reference="eq: Lq-interpolation"} and [\[eq: v1-lowfreq\]](#eq: v1-lowfreq){reference-type="eqref" reference="eq: v1-lowfreq"} for [\[eq: J1_1\]](#eq: J1_1){reference-type="eqref" reference="eq: J1_1"}, we have that $$\begin{aligned} J_1 &\leq C \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t 2^{-(j+k)/p} \| \varphi_{j+k} \ast v_d \|_{L^2}^{1-2/p} \| \varphi_{j+k} \ast ( \partial_{z}v_d) \|_{L^2}^{2/p} \\ & \hspace{150pt} \cdot 2^{(j+k)/p} \|v_1\|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} } \cdot \| \varphi_j \ast (\nabla_{\mathrm{H}}v_d ) \|_{L^2}\, d\tau \\ &\leq C \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \| \varphi_j \ast ( \nabla_{\mathrm{H}}{v_d} ) \|_{L^2} \cdot \| \varphi_{j+k} \ast (\partial_{z}v_d) \|_{L^2}^{2/p} \cdot \| \varphi_{j+k} \ast v_d \|_{L^2}^{1-2/p} \cdot \|v_1\|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} } \, d\tau.\end{aligned}$$ Young's inequality and (2) yield that $$\begin{aligned} \begin{aligned}\label{eq: J1} J_1 &\leq \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \frac{1}{42c^2} \| \varphi_j \ast ( \nabla_{\mathrm{H}}{v_d} ) \|_{L^2}^2 + \frac{1}{42c^2} \| \varphi_{j+k} \ast (\partial_{z}v_d) \|_{L^2}^2 \, d\tau \\ & \hspace{150pt} + C \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \| \varphi_{j+k} \ast v_d \|_{L^2}^2 \|v_1\|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} }^\beta \, d\tau \\ &\leq \frac{1}{6c^2} \sum_{j\in\mathbb{Z}} \int_s^t \left( \| \varphi_j \ast ( \nabla_{\mathrm{H}}{v_d} ) \|_{L^2}^2 + \| \varphi_j \ast (\partial_{z}v_d) \|_{L^2}^2 \right) \, d\tau \\ &\hspace{150pt} + C \sum_{j\in\mathbb{Z}} \int_s^t \| \varphi_j \ast v_d \|_{L^2}^2 \|v_1\|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} }^\beta \, d\tau \\ &= \frac{1}{6c^2} \sum_{j\in\mathbb{Z}} \int_s^t \| \varphi_j \ast ( \nabla{v_d} ) \|_{L^2}^2 \, d\tau + C 
\sum_{j\in\mathbb{Z}} \int_s^t \| \varphi_j \ast v_d \|_{L^2}^2 \|v_1\|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} }^\beta \, d\tau \\ &= \frac{1}{6c^2} \int_s^t \left\| \| \nabla{v_d} \|_{ \dot{B}^{{0}}_{{2}, {2}, {z}} } \right\|_{L_{\mathrm{H}}^{2}}^2 \, d\tau + C \int_s^t \left\| \| v_d \|_{ \dot{B}^{{0}}_{{2}, {2}, {z}} } \right\|_{L_{\mathrm{H}}^{2}}^2 \|v_1\|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} }^\beta \, d\tau \\ &\leq \frac{1}{6} \int_s^t \| \nabla{v_d} \|_{L^2}^2 \, d\tau + C \int_s^t \| v_d \|_{L^2}^2 \|v_1\|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} }^\beta \, d\tau \end{aligned}\end{aligned}$$ where the absolute constant $c$ is given by [\[eq: equiv L\^2\]](#eq: equiv L^2){reference-type="eqref" reference="eq: equiv L^2"}. *[Step 2.]{.ul} Estimate of $J_2$.* In the same way, Hölder's inequality yields $$\begin{aligned} \begin{aligned}\label{eq: J2_1} J_2 &= \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \int_{\mathbb{T}^3} \left[ \widetilde{\varphi}_j \ast \left( ( P_{j+k} v_d ) \otimes ( \varphi_{j+k}\ast v_1 ) \right) \right] \colon \left[ \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \right] \,dx d\tau \\ & \leq \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \int_{\mathbb{T}^2} \| \widetilde{\varphi}_j \ast \left( ( P_{j+k} v_d ) \otimes ( \varphi_{j+k}\ast v_1 ) \right) \|_{L^2_z} \| \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \|_{L^2_z} \, d x_\mathrm{H}d\tau \\ & \leq C \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \int_{\mathbb{T}^2} \| P_{j+k} v_d \|_{L^\infty_z} \| \varphi_{j+k} \ast v_1 \|_{L^2_z} \cdot \| \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \|_{L^2_z} \, d x_\mathrm{H}d\tau \\ &\leq C \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 2^{-k/2} \int_s^t \int_{\mathbb{T}^2} \| P_{j+k} v_d \|_{L^\infty_z} \| \varphi_{j+k} \ast v_1 \|_{L^2_z} \cdot \| \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \|_{L^2_z} \, d x_\mathrm{H}d\tau.
\end{aligned}\end{aligned}$$ Here, Hölder's inequality and (2) yield that, for $2<\gamma<\infty$, $$\begin{aligned} \begin{aligned}\label{eq: J2_2} \| P_{j+k} v_d \|_{L^\infty_z} &\leq \sum_{l\leq j+k-3} \| \varphi_l \ast v_d \|_{L^\infty_z} \\ &\leq C \sum_{l\leq j+k-3} 2^l \| \varphi_l \ast v_d \|_{L^1_z} \\ &\leq C \sum_{l\leq j+k-3} 2^l \| \varphi_l \ast v_d \|_{L^2_z} \qquad \text{(H\"{o}lder's inequality)} \\ &\leq C \sum_{l\leq j+k-3} 2^{{2/\gamma} l} \| \varphi_l \ast v_d \|_{L^2_z}^{2/\gamma} \| \varphi_l \ast (\partial_{z}v_d) \|_{L^2_z}^{1-2/\gamma} \\ &\leq C 2^{(j+k)2/\gamma} \| v_d \|_{L^2_z}^{2/\gamma} \| \partial_{z}v_d \|_{L^2_z}^{1-2/\gamma}. \end{aligned}\end{aligned}$$ Therefore, we have from [\[eq: J2_2\]](#eq: J2_2){reference-type="eqref" reference="eq: J2_2"}, Hölder's inequality, and Minkowski's inequality that $$\begin{aligned} J_2 &\leq C \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 2^{-k/2} \int_s^t \int_{\mathbb{T}^2} \| v_d \|_{L^2_z}^{2/\gamma} \| \partial_{z}v_d \|_{L^2_z}^{1-2/\gamma} \\ & \hspace{150pt} \cdot \left[ 2^{2(j+k)/\gamma} \| \varphi_{j+k} \ast v_1 \|_{L^2_z} \right] \cdot \| \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \|_{L^2_z} \, d x_\mathrm{H}d\tau \\ &\leq C \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 2^{-k/2} \int_s^t \| v_d \|_{L^2}^{2/\gamma} \| \partial_{z}v_d \|_{L^2}^{1-2/\gamma} \\ &\hspace{150pt} \cdot \left[ 2^{2(j+k)/\gamma} \left\| \| \varphi_{j+k} \ast v_1 \|_{L^2_z} \right\|_{L_{\mathrm{H}}^{\infty}} \right] \cdot \| \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \|_{L^2} \, d\tau\\ &\leq C \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 2^{-k/2} \int_s^t \| v_d \|_{L^2}^{2/\gamma} \| \partial_{z}v_d \|_{L^2}^{1-2/\gamma} \\ &\hspace{150pt} \cdot \left[ 2^{2(j+k)/\gamma} \| \varphi_{j+k} \ast v_1 \|_{L^2_zL_{\mathrm{H}}^{\infty}} \right] \cdot \| \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \|_{L^2} \, d\tau.\end{aligned}$$ Hölder's inequality for number sequences, , and Young's inequality yield $$\begin{aligned} \begin{aligned}\label{eq: J2} 
J_2 &\leq C \sum_{k=-3}^3 2^{-k/2} \int_s^t \| v_d \|_{L^2}^{2/\gamma} \| \partial_{z}v_d \|_{L^2}^{1-2/\gamma} \cdot \| v_1 \|_{\dot{B}^{{2/\gamma}}_{{2}, {2}, {z}} L_{\mathrm{H}}^{\infty}} \cdot \left\| \| \nabla_{\mathrm{H}}v_d \|_{\dot{B}^{{0}}_{{2}, {2}, {z}}} \right\|_{L_{\mathrm{H}}^{2}} \,d\tau \\ &\leq C \int_s^t \| v_d \|_{L^2}^{2/\gamma} \| \partial_{z}v_d \|_{L^2}^{1-2/\gamma} \cdot \| v_1 \|_{\dot{B}^{{2/\gamma}}_{{2}, {2}, {z}} L_{\mathrm{H}}^{\infty}} \cdot \| \nabla_{\mathrm{H}}v_d \|_{L^2} \,d\tau \\ &\leq \frac{1}{6}\int_s^t \| \nabla v_d\|_{L^2}^2 \,d\tau + C \int_s^t \| v_d \|_{L^2}^2 \cdot \| v_1 \|_{\dot{B}^{{2/\gamma}}_{{2}, {2}, {z}} L_{\mathrm{H}}^{\infty}}^{\gamma} \,d\tau. \end{aligned}\end{aligned}$$ $$\begin{aligned} \begin{aligned}\label{eq: J3_1} J_3 &= \sum_{j\in\mathbb{Z}} \sum_{ \substack{ \max{ \{ k, l \} } \\ \qquad \geq j-3 } } \sum_{ |k-l| \leq 2 } \int_s^t \int_{\mathbb{T}^3} \left[ \widetilde{\varphi}_j \ast \left( ( \varphi_{k} \ast v_d ) \otimes ( \varphi_{l} \ast v_1 ) \right) \right] \colon \left[ \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \right] \,dx d\tau \\ &\leq \sum_{j\in\mathbb{Z}} \sum_{ \substack{ \max{ \{ k, l \} } \\ \qquad \geq j-3 } } \sum_{ |k-l| \leq 2 } \int_s^t \int_{\mathbb{T}^2} \| \widetilde{\varphi}_j \ast \left( ( \varphi_{k} \ast v_d ) \otimes ( \varphi_{l} \ast v_1 ) \right) \|_{L_z^{q'}} \| \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \|_{L_z^q} \, d x_\mathrm{H}d\tau \\ &\leq \sum_{j\in\mathbb{Z}} \sum_{ k \geq j-5 } \sum_{ l = -2 }^{2} \int_s^t \int_{\mathbb{T}^2} \| \widetilde{\varphi}_j \ast \left( ( \varphi_{k} \ast v_d ) \otimes ( \varphi_{k+l} \ast v_1 ) \right) \|_{L_z^{q'}} \| \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \|_{L_z^q} \, d x_\mathrm{H}d\tau \\ &\leq C \sum_{j\in\mathbb{Z}} \sum_{ k \geq j-5 } \sum_{ l = -2 }^{2} \int_s^t \int_{\mathbb{T}^2} \| \varphi_{k} \ast v_d \|_{L^2_z} \| \varphi_{k+l} \ast v_1 \|_{L_z^p} \| \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \|_{L_z^q} \, d
x_\mathrm{H}d\tau \\ &= C \sum_{j\in\mathbb{Z}} \sum_{ k \geq -5 } \sum_{ l = -2 }^{2} \int_s^t \int_{\mathbb{T}^2} \| \varphi_{j+k} \ast v_d \|_{L^2_z} \| \varphi_{j+k+l} \ast v_1 \|_{L_z^p} \| \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \|_{L_z^q} \, d x_\mathrm{H}d\tau. \end{aligned}\end{aligned}$$ Here, we have $$\begin{aligned} \begin{aligned}\label{eq: J3_2} \| \varphi_{j+k} \ast v_d \|_{L^2_z} &= \| \varphi_{j+k} \ast v_d \|_{L^2_z}^{1-2/p} \| \varphi_{j+k} \ast v_d \|_{L^2_z}^{2/p} \\ &\leq 2^{ - (j+k) \cdot 2/p } \| \varphi_{j+k} \ast v_d \|_{L^2_z}^{1-2/p} \left[ 2^{j+k} \| \varphi_{j+k} \ast v_d \|_{L^2_z} \right]^{2/p} \\ &\leq C \cdot 2^{ - (j+k) \cdot 2/p } \| \varphi_{j+k} \ast v_d \|_{L^2_z}^{1-2/p} \| \varphi_{j+k} \ast ( \partial_{z}v_d ) \|_{L^2_z}^{2/p}. \end{aligned}\end{aligned}$$ By Bernstein's inequality (note that $1/2-1/q=1/p$), $$\label{eq: J3_3} \| \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \|_{L_z^q} \leq C 2^{j/p} \| \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \|_{L^2_z}.$$ Applying [\[eq: J3_2\]](#eq: J3_2){reference-type="eqref" reference="eq: J3_2"}, [\[eq: J3_3\]](#eq: J3_3){reference-type="eqref" reference="eq: J3_3"} and Minkowski's inequality, we have $$\begin{aligned} J_3 &\leq C \sum_{j\in\mathbb{Z}} \sum_{ k \geq -5 } \sum_{ l = -2 }^{2} \int_s^t \int_{\mathbb{T}^2} 2^{ -(j+k)\cdot 2/p }\| \varphi_{j+k} \ast v_d \|_{L^2_z}^{1-2/p} \| \varphi_{j+k} \ast ( \partial_{z}v_d ) \|_{L^2_z}^{2/p} \\ &\hspace{150pt} \cdot \| \varphi_{j+k+l} \ast v_1 \|_{L_z^p} \cdot 2^{j/p} \| \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \|_{L_z^2} \, d x_\mathrm{H}d\tau \\ &\leq C \sum_{j\in\mathbb{Z}} \sum_{ k \geq -5 } 2^{-k/p} \sum_{ l = -2 }^{2} 2^{l/p} \int_s^t \| \varphi_{j+k} \ast v_d \|_{L^2}^{1-2/p} \| \varphi_{j+k} \ast ( \partial_{z}v_d ) \|_{L^2}^{2/p} \\ &\hspace{150pt} \cdot \left[ 2^{ -(j+k+l)/p } \| \varphi_{j+k+l} \ast v_1 \|_{ L_z^p L_{\mathrm{H}}^{\infty} } \right] \cdot \| \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \|_{L^2} \,d\tau \\ &\leq C \sum_{j\in\mathbb{Z}} \sum_{ k \geq
-5 } 2^{-k/p} \int_s^t \| \varphi_{j+k} \ast v_d \|_{L^2}^{1-2/p} \| \varphi_{j+k} \ast ( \partial_{z}v_d ) \|_{L^2}^{2/p} \| \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \|_{L^2} \| v_1 \|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} }.\end{aligned}$$ Let $\displaystyle C_p = \sum_{k\geq-5} 2^{-k/p} \in (0, \infty)$; then Young's inequality again yields that $$\begin{aligned} \begin{aligned}\label{eq: J3} J_3 & \leq \sum_{ k \geq -5 } 2^{-k/p} \left( \sum_{j\in\mathbb{Z}} \int_s^t \frac{1}{6c^2C_p} \| \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \|_{L^2}^2 \,d\tau + \sum_{j\in\mathbb{Z}} \int_s^t \frac{1}{6c^2C_p} \| \varphi_{j+k} \ast ( \partial_{z}v_d ) \|_{L^2}^2 \,d\tau \right. \\ & \hspace{250pt} \left. + C \sum_{j\in\mathbb{Z}} \int_s^t \| \varphi_{j+k} \ast v_d \|_{L^2}^2 \| v_1 \|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} }^\beta \,d\tau \right) \\ &= \sum_{ k \geq -5 } 2^{-k/p} \left( \sum_{j\in\mathbb{Z}} \int_s^t \frac{1}{6c^2C_p} \| \varphi_j \ast ( \nabla_{\mathrm{H}}v_d ) \|_{L^2}^2 \,d\tau + \sum_{j\in\mathbb{Z}} \int_s^t \frac{1}{6c^2C_p} \| \varphi_{j} \ast ( \partial_{z}v_d ) \|_{L^2}^2 \,d\tau \right. \\ & \hspace{250pt} \left.
+ C \sum_{j\in\mathbb{Z}} \int_s^t \| \varphi_{j} \ast v_d \|_{L^2}^2 \| v_1 \|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} }^\beta \,d\tau \right) \\ &\leq \frac{1}{6c^2} \sum_{j\in\mathbb{Z}} \int_s^t \| \varphi_j \ast ( \nabla v_d ) \|_{L^2}^2 \,d\tau + C \sum_{j\in\mathbb{Z}} \int_s^t \| \varphi_{j} \ast v_d \|_{L^2}^2 \| v_1 \|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} }^\beta \,d\tau \\ &= \frac{1}{6c^2} \int_s^t \left\| \| \nabla v_d \|_{ \dot{B}^{{0}}_{{2}, {2}, {\mathrm{H} }} } \right\|_{L^2_z}^2 \,d\tau + C \int_s^t \left\| \| v_d \|_{ \dot{B}^{{0}}_{{2}, {2}, {\mathrm{H} }} } \right\|_{L^2}^2 \| v_1 \|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} }^\beta \,d\tau \\ &\leq \frac{1}{6} \int_s^t \| \nabla v_d \|_{L^2}^2 \,d\tau + C \int_s^t \| v_d \|_{L^2}^2 \| v_1 \|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} }^\beta \,d\tau. \end{aligned}\end{aligned}$$ $$\begin{aligned} \begin{aligned}\label{eq: J4_1} J_4 &= \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \int_{\mathbb{T}^3} \left[ \widetilde{\varphi}_j \ast \left( ( \varphi_{j+k} \ast w_d ) ( P_{j+k}v_1 ) \right) \right] \cdot \left[ \varphi_j \ast ( \partial_{z}v_d ) \right] \,dx d\tau \\ & \leq \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \int_{\mathbb{T}^2} \| \widetilde{\varphi}_j \ast \left( ( \varphi_{j+k} \ast w_d ) ( P_{j+k}v_1 ) \right) \|_{L^{q'}_z} \| \varphi_j \ast ( \partial_{z}v_d ) \|_{L^q_z} \, d x_\mathrm{H}d\tau \\ & \leq C \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \int_{\mathbb{T}^2} \| \varphi_{j+k} \ast w_d \|_{L^2_z} \| P_{j+k}v_1 \|_{L^{p}_z} \cdot 2^j \| \varphi_j \ast v_d \|_{L^q_z} \, d x_\mathrm{H}d\tau \\ &\leq C \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \int_{\mathbb{T}^2} \| \varphi_{j+k} \ast ( \partial_{z}w_d) \|_{L^2_z} \left[ \sum_{l \leq j+k-3} \| \varphi_l \ast v_1 \|_{L^{p}_z} \right] \cdot \| \varphi_j \ast v_d \|_{L^q_z} \, d x_\mathrm{H}d\tau \\ &\leq C \sum_{j\in\mathbb{Z}} 
\sum_{k=-3}^3 \int_s^t \| \varphi_{j+k} \ast ( \mathrm{div}_{\mathrm{H}} \,v_d) \|_{L^2} \left\| \sum_{l \leq j+k-3} { \| \varphi_l \ast v_1 \|_{L^p_z} } \right\|_{ L_{\mathrm{H}}^{\infty} } \cdot \left\| \| \varphi_j \ast v_d \|_{L^q_z} \right\|_{ L_{\mathrm{H}}^{2} } \, d\tau. \end{aligned}\end{aligned}$$ Applying [\[eq: Lq-interpolation\]](#eq: Lq-interpolation){reference-type="eqref" reference="eq: Lq-interpolation"} and [\[eq: v1-lowfreq\]](#eq: v1-lowfreq){reference-type="eqref" reference="eq: v1-lowfreq"} for [\[eq: J4_1\]](#eq: J4_1){reference-type="eqref" reference="eq: J4_1"}, we have $$\begin{aligned} J_4 &\leq C \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \| \varphi_{j+k} \ast ( \mathrm{div}_{\mathrm{H}} \,v_d ) \|_{L^2} \cdot 2^{(j+k)/p} \|v_1\|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} } \\ & \hspace{150pt} \cdot 2^{-j/p} \| \varphi_j \ast v_d \|_{L^2}^{1-2/p} \| \varphi_j \ast (\partial_{z}v_d) \|_{L^2}^{2/p} \, d\tau \\ &\leq C \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \| \varphi_{j+k} \ast ( \nabla_{\mathrm{H}}{v_d} ) \|_{L^2} \cdot \| \varphi_j \ast (\partial_{z}v_d) \|_{L^2}^{2/p} \cdot \| \varphi_j \ast v_d \|_{L^2}^{1-2/p} \cdot \|v_1\|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} } \, d\tau.\end{aligned}$$ It follows by the same calculation as for [\[eq: J1\]](#eq: J1){reference-type="eqref" reference="eq: J1"} and $J_1$ that $$\begin{aligned} \begin{aligned}\label{eq: J4} J_4 &\leq \frac{1}{6} \int_s^t \| \nabla v_d \|_{L^2}^2 \, d\tau + C \int_s^t \| v_d \|_{L^2}^2 \|v_1\|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} }^\beta \, d\tau.
\end{aligned}\end{aligned}$$ $$\begin{aligned} \begin{aligned}\label{eq: J5_1} J_5 &= \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \int_{\mathbb{T}^3} \left[ \widetilde{\varphi}_j \ast \left( ( P_{j+k}w_d ) ( \varphi_{j+k} \ast v_1 ) \right) \right] \cdot \left[ \varphi_j \ast ( \partial_{z}v_d ) \right] \,dx d\tau \\ & \leq \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \int_{\mathbb{T}^2} \| \widetilde{\varphi}_j \ast \left( ( P_{j+k}w_d ) ( \varphi_{j+k} \ast v_1 ) \right) \|_{L^2_z} \| \varphi_j \ast ( \partial_{z}v_d ) \|_{L^2_z} \, d x_\mathrm{H}d\tau \\ & \leq C \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \int_{\mathbb{T}^2} \| P_{j+k}w_d \|_{L^\infty_z} \| \varphi_{j+k} \ast v_1 \|_{L^2_z} \\ &\hspace{150pt} \cdot 2^{2j/\gamma} \| \varphi_j \ast v_d \|_{L^2_z}^{2/\gamma} \cdot \| \varphi_j \ast ( \partial_{z}v_d ) \|_{L^2_z}^{1-2/\gamma} \, d x_\mathrm{H}d\tau. \end{aligned}\end{aligned}$$ Here, $$\begin{aligned} \| P_{j+k} w_d \|_{L^\infty_z} &\leq C \| w_d \|_{L^\infty_z} \\ &\leq C \| \mathrm{div}_{\mathrm{H}} \,v_d \|_{L^1_z} \qquad \text{(definition of $w_d$)} \\ &\leq C \| \mathrm{div}_{\mathrm{H}} \,v_d \|_{L^2_z} \qquad \text{(H\"{o}lder's inequality)} \\ &\leq C \| \nabla_{\mathrm{H}}v_d \|_{L^2_z}.\end{aligned}$$ Therefore, we have $$\begin{aligned} \begin{aligned}\label{eq: J5} J_5 &\leq C \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 2^{-2k/\gamma} \int_s^t \int_{\mathbb{T}^2} \| \nabla_{\mathrm{H}}v_d \|_{L^2_z} \cdot \left[ 2^{2(j+k)/\gamma} \| \varphi_{j+k} \ast v_1 \|_{L^2_z} \right] \\ &\hspace{200pt} \cdot \| \varphi_j \ast v_d \|_{L^2_z}^{2/\gamma} \cdot \| \varphi_j \ast ( \partial_{z}v_d ) \|_{L^2_z}^{1-2/\gamma} \, d x_\mathrm{H}d\tau \\ &\leq C \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 2^{-2k/\gamma} \int_s^t \| \nabla_{\mathrm{H}}v_d \|_{L^2} \cdot \left[ 2^{2(j+k)/\gamma} \left\| \| \varphi_{j+k} \ast v_1 \|_{L^2_z} \right\|_{L_{\mathrm{H}}^{\infty}} \right] \\ &\hspace{200pt} \cdot \| \varphi_j \ast v_d \|_{L^2}^{2/\gamma} \cdot \| \varphi_j \ast ( \partial_{z}v_d )
\|_{L^2}^{1-2/\gamma} \, d\tau \\ &\leq C \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 2^{-2k/\gamma} \int_s^t \| \nabla_{\mathrm{H}}v_d \|_{L^2} \cdot \left[ 2^{2(j+k)/\gamma} \| \varphi_{j+k} \ast v_1 \|_{L^2_zL_{\mathrm{H}}^{\infty}} \right] \\ &\hspace{200pt} \cdot \| \varphi_j \ast v_d \|_{L^2}^{2/\gamma} \cdot \| \varphi_j \ast ( \partial_{z}v_d ) \|_{L^2}^{1-2/\gamma} \, d\tau \\ &\leq C \sum_{k=-3}^3 2^{-2k/\gamma} \int_s^t \| \nabla_{\mathrm{H}}v_d \|_{L^2} \cdot \| v_1 \|_{\dot{B}^{{2/\gamma}}_{{2}, {2}, {z}}L_{\mathrm{H}}^{\infty}} \left\| \| v_d \|_{\dot{B}^{{0}}_{{2}, {2}, {z}}} \right\|_{L_{\mathrm{H}}^{2}}^{2/\gamma} \cdot \left\| \| \partial_{z}v_d \|_{\dot{B}^{{0}}_{{2}, {2}, {z}}} \right\|_{L_{\mathrm{H}}^{2}}^{1-2/\gamma} \, d\tau \\ &\leq C \int_s^t \| \nabla_{\mathrm{H}}v_d \|_{L^2} \cdot \| v_1 \|_{\dot{B}^{{2/\gamma}}_{{2}, {2}, {z}}L_{\mathrm{H}}^{\infty}} \| v_d \|_{L^2}^{2/\gamma} \| \partial_{z}v_d \|_{L^2}^{1-2/\gamma} \, d\tau \\ &\leq \frac{1}{6} \int_s^t \| \nabla v_d \|_{L^2}^2 \, d\tau + C \int_s^t \| v_d \|_{L^2}^2 \| v_1 \|_{\dot{B}^{{2/\gamma}}_{{2}, {2}, {z}}L_{\mathrm{H}}^{\infty}}^{\gamma} \, d\tau.
\end{aligned}\end{aligned}$$ $$\begin{aligned} \begin{aligned}\label{eq: J6_1} J_6 &= \sum_{j\in\mathbb{Z}} \sum_{ \substack{ \max{ \{ k, l \} } \\ \qquad \geq j-3 } } \sum_{ |k-l| \leq 2 } \int_s^t \int_{\mathbb{T}^3} \left[ \widetilde{\varphi}_j \ast \left( ( \varphi_{k} \ast w_d ) ( \varphi_{l} \ast v_1 ) \right) \right] \cdot \left[ \varphi_j \ast ( \partial_{z}v_d ) \right] \,dx d\tau \\ &\leq \sum_{j\in\mathbb{Z}} \sum_{ k \geq j-5 } \sum_{ l = -2 }^{2} \int_s^t \int_{\mathbb{T}^2} \| \widetilde{\varphi}_j \ast \left( ( \varphi_{k} \ast w_d ) ( \varphi_{k+l} \ast v_1 ) \right) \|_{L_z^{q'}} \| \varphi_j \ast ( \partial_{z}v_d ) \|_{L_z^q} \, d x_\mathrm{H}d\tau \\ &\leq C \sum_{j\in\mathbb{Z}} \sum_{ k \geq j-5 } \sum_{ l = -2 }^{2} \int_s^t \int_{\mathbb{T}^2} \| \varphi_{k} \ast w_d \|_{L^2_z} \| \varphi_{k+l} \ast v_1 \|_{L_z^p} \cdot 2^j \| \varphi_j \ast v_d \|_{L_z^q} \, d x_\mathrm{H}d\tau \\ &\leq C \sum_{j\in\mathbb{Z}} \sum_{ k \geq -5 } 2^{-k} \sum_{ l = -2 }^{2} \int_s^t \int_{\mathbb{T}^2} \| \varphi_{j+k} \ast (\partial_{z}w_d) \|_{L^2_z} \| \varphi_{j+k+l} \ast v_1 \|_{L_z^p} \| \varphi_j \ast v_d \|_{L_z^q} \, d x_\mathrm{H}d\tau \\ &\leq C \sum_{j\in\mathbb{Z}} \sum_{ k \geq -5 } 2^{-k} \sum_{ l = -2 }^{2}
\int_s^t \| \varphi_{j+k} \ast (\mathrm{div}_{\mathrm{H}} \,v_d) \|_{L^2} \left\| \| \varphi_{j+k+l} \ast v_1 \|_{L_z^p} \right\|_{L_{\mathrm{H}}^{\infty}} \left\| \| \varphi_j \ast v_d \|_{L_z^q} \right\|_{ L_{\mathrm{H}}^{2} } \, d\tau . \end{aligned}\end{aligned}$$ Applying Minkowski's inequality and [\[eq: Lq-interpolation\]](#eq: Lq-interpolation){reference-type="eqref" reference="eq: Lq-interpolation"} for [\[eq: J6_1\]](#eq: J6_1){reference-type="eqref" reference="eq: J6_1"}, we have $$\begin{aligned} J_6 &\leq C \sum_{j\in\mathbb{Z}} \sum_{ k \geq -5 } 2^{-(1-1/p)k} \sum_{ l = -2 }^{2} 2^{l/p} \int_s^t \| \varphi_{j+k} \ast (\mathrm{div}_{\mathrm{H}} \,v_d) \|_{L^2} \\ &\hspace{100pt} \cdot \left[ 2^{-(j+k+l)/p} \| \varphi_{j+k+l} \ast v_1 \|_{L_z^p L_{\mathrm{H}}^{\infty} } \right] \| \varphi_j \ast v_d \|_{L^2}^{1-2/p} \| \varphi_j \ast ( \partial_{z}v_d ) \|_{L^2}^{2/p} \, d\tau \\ &\leq C \sum_{j\in\mathbb{Z}} \sum_{ k \geq -5 } 2^{-(1-1/p)k} \int_s^t \| \varphi_{j+k} \ast (\mathrm{div}_{\mathrm{H}} \,v_d) \|_{L^2} \\ &\hspace{100pt} \cdot \| v_1 \|_{\dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty}} \| \varphi_j \ast v_d \|_{L^2}^{1-2/p} \| \varphi_j \ast ( \partial_{z}v_d ) \|_{L^2}^{2/p} \, d\tau.\end{aligned}$$ In a similar manner as [\[eq: J3\]](#eq: J3){reference-type="eqref" reference="eq: J3"}, we have $$\label{eq: J6} J_6 \leq \frac{1}{6} \int_s^t \| \nabla v_d \|_{L^2}^2 \,d\tau + C \int_s^t \| v_d \|_{L^2}^2 \| v_1 \|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} }^\beta \,d\tau.$$ Collecting the estimates [\[eq: J1\]](#eq: J1){reference-type="eqref" reference="eq: J1"}, [\[eq: J2\]](#eq: J2){reference-type="eqref" reference="eq: J2"}, [\[eq: J3\]](#eq: J3){reference-type="eqref" reference="eq: J3"}, [\[eq: J4\]](#eq: J4){reference-type="eqref" reference="eq: J4"}, [\[eq: J5\]](#eq: J5){reference-type="eqref" reference="eq: J5"} and [\[eq: J6\]](#eq: J6){reference-type="eqref" reference="eq: J6"}, we finally obtain 
[\[eq: desired\]](#eq: desired){reference-type="eqref" reference="eq: desired"}: $$J_1 + \cdots + J_6 \leq \int_s^t \| \nabla v_d (\tau) \|_{L^2}^2 \,d\tau + C \int_s^t \| v_d (\tau) \|_{L^2}^2 \left( \| v_1 (\tau) \|_{ \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty} }^\beta + \| v_1 (\tau) \|_{\dot{B}^{{2/\gamma}}_{{2}, {2}, {z}}L_{\mathrm{H}}^{\infty}}^{\gamma} \right) \, d\tau. \qed$$ ## 2D case {#subsec:uniqueness 2D} In this subsection, we use the same notation $v_d = v_1 - v_2$ for weak solutions $v_j=(v_j^1, v_j^2)$ ($j=1, 2$). Corresponding to the three-dimensional case, we introduce the following lemma, which is a consequence of [@J2021 eq. (4.8)]. **Lemma 4**. *Let $v_1$ and $v_2$ be weak solutions to the two-dimensional primitive equations [\[eq:PE2\]](#eq:PE2){reference-type="eqref" reference="eq:PE2"} satisfying the energy inequality.* *Then, we have $$\begin{aligned} \begin{aligned}\label{eq:1st form 2D} &\, \frac{1}{2}\| v_d (t) \|_{L^2}^2 - \frac{1}{2}\| v_d(s) \|_{L^2}^2 + \int_s^t \| \nabla v_d (\tau) \|^2_{L^2} \,d \tau \\ =&\, \int_s^t \int_{\mathbb{T}^2} ( {}^\top(v^1_d, w_d) \otimes v_1 )(x, \tau) \colon \nabla v_d (x, \tau) \,d x d \tau \\ =&\, \int_s^t \int_{\mathbb{T}^2} v_d^1(x, \tau)\, v_1(x, \tau) \cdot \partial_{x_1} v_d (x, \tau) \,d x d \tau + \int_s^t \int_{\mathbb{T}^2} w_d(x, \tau)\, v_1(x, \tau) \cdot \partial_{z}v_d (x, \tau) \,d x d \tau \end{aligned}\end{aligned}$$ for $0<s\leq t\leq T$.* As in [\[eq:RHS 1st form\]](#eq:RHS 1st form){reference-type="eqref" reference="eq:RHS 1st form"}, we have that the following representation holds: $$\begin{aligned} &\, \text{R.H.S.~ of \eqref{eq:1st form 2D}} \\ =&\, \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \int_{\mathbb{T}^2} \left[ \widetilde{\varphi}_j \ast \left( ( \varphi_{j+k} \ast v_d^1 ) ( P_{j+k}v_1 ) \right) \right] \cdot \left[ \varphi_j \ast ( \partial_{x_1} v_d ) \right] \,dx d\tau \\ &\quad + \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \int_{\mathbb{T}^2} \left[ \widetilde{\varphi}_j \ast \left( (
P_{j+k} v_d^1 ) ( \varphi_{j+k}\ast v_1 ) \right) \right] \cdot \left[ \varphi_j \ast ( \partial_{x_1} v_d ) \right] \,dx d\tau \\ &\quad +\sum_{j\in\mathbb{Z}} \sum_{ \substack{ \max{ \{ k, l \} } \\ \qquad \geq j-3 } } \sum_{ |k-l| \leq 2 } \int_s^t \int_{\mathbb{T}^2} \left[ \widetilde{\varphi}_j \ast \left( ( \varphi_{k} \ast v_d^1 ) ( \varphi_{l} \ast v_1 ) \right) \right] \cdot \left[ \varphi_j \ast ( \partial_{x_1} v_d ) \right] \,dx d\tau \\ &\quad + \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \int_{\mathbb{T}^2} \left[ \widetilde{\varphi}_j \ast \left( ( \varphi_{j+k} \ast w_d ) ( P_{j+k}v_1 ) \right) \right] \cdot \left[ \varphi_j \ast ( \partial_{z}v_d ) \right] \,dx d\tau \\ &\quad + \sum_{j\in\mathbb{Z}} \sum_{k=-3}^3 \int_s^t \int_{\mathbb{T}^2} \left[ \widetilde{\varphi}_j \ast \left( ( P_{j+k}w_d ) ( \varphi_{j+k} \ast v_1 ) \right) \right] \cdot \left[ \varphi_j \ast ( \partial_{z}v_d ) \right] \,dx d\tau \\ &\quad + \sum_{j\in\mathbb{Z}} \sum_{ \substack{ \max{ \{ k, l \} } \\ \qquad \geq j-3 } } \sum_{ |k-l| \leq 2 } \int_s^t \int_{\mathbb{T}^2} \left[ \widetilde{\varphi}_j \ast \left( ( \varphi_{k} \ast w_d ) ( \varphi_{l} \ast v_1 ) \right) \right] \cdot \left[ \varphi_j \ast ( \partial_{z}v_d ) \right] \,dx d\tau.\end{aligned}$$ Therefore, under the assumption [\[eq:asp\]](#eq:asp){reference-type="eqref" reference="eq:asp"}, we obtain the uniqueness result in the same manner as the proof for the three-dimensional case. ◻ # Proof of the energy equality {#sec:proof of EE} ## 3D case {#d-case} In order to prove the energy equality, let $$\label{eq: vN} v_{\leq N} \equiv \psi_{N} \ast_{z} \chi_{N} \ast_{\mathrm{H}} v = \sum_{j, j' \leq N}\varphi_{j} \ast_{z} \theta_{j'} \ast_{\mathrm{H}} v,$$ where $\{\varphi_j\}_{j\in\mathbb{Z}}$ (resp. $\{\theta_j\}_{j\in\mathbb{Z}}$) is the one-dimensional (resp.
two-dimensional) Littlewood--Paley decomposition of unity and $$\psi_N = \sum_{j\leq N} \varphi_j \qquad \left( \text{resp.}\ \chi_N = \sum_{j\leq N} \theta_j \right).$$ In this section, $\ast_z$ (resp. $\ast_\mathrm{H}$) denotes the one-dimensional (resp. two-dimensional) convolution in the $z$-direction (resp. $x_\mathrm{H}$-direction). For the three-dimensional case, choosing $\phi = \left( v_{\leq N} \right)_{\leq N}$ in [\[eq:weak solution PE\]](#eq:weak solution PE){reference-type="eqref" reference="eq:weak solution PE"}, we have $$\begin{aligned} \begin{aligned}\label{eq:energy eq error} &\, \frac{1}{2}\| v_{\leq N}(t) \|_{L^2}^2 - \frac{1}{2}\| ( v_0 )_{\leq N} \|_{L^2}^2 + \int_0^t \int_{\mathbb{T}^3} |\nabla v_{\leq N}|^2\,dx d\tau \\ =&\, \int_0^t \int_{\mathbb{T}^3} \left[ (v\otimes v)_{\leq N} - v_{\leq N} \otimes v_{\leq N} \right]: \nabla_{\mathrm{H}}v_{\leq N} \, dx d\tau + \int_0^t \int_{\mathbb{T}^3} \left[ (w v)_{\leq N} - w_{\leq N} v_{\leq N} \right]\cdot \partial_{z}v_{\leq N} \, dx d\tau.
\end{aligned}\end{aligned}$$ We define the functions $I_j$ ($1\leq j\leq 8$) as follows: $$\begin{aligned} I_1(x, t) &= \left[ \chi_N \ast_{\mathrm{H}} \left( \int_{\mathbb{R}} \psi(\zeta) { \left( v(\cdot, z-2^{-N}\zeta, t) -v(\cdot, z, t) \right) \otimes \left( v(\cdot, z-2^{-N}\zeta, t) -v(\cdot, z, t) \right) } \,d\zeta \right) \right] (x_\mathrm{H}); \\ I_2(x, t) &= - \left[ \chi_N \ast_{\mathrm{H}} \left( \left( \psi_N \ast_z v - v \right) \otimes \left( \psi_N \ast_z v - v \right) \right) \right] (x, t); \\ I_3(x, t) &= \int_{\mathbb{R}^2} \chi(y_\mathrm{H}) \, \left[ \psi_N \ast_z \left( v(x_\mathrm{H}- 2^{-N}y_\mathrm{H}, \cdot, t) - v(x_\mathrm{H}, \cdot, t) \right) \right] (z) \\ & \hspace{150pt} \otimes \left[ \psi_N \ast_z \left( v(x_\mathrm{H}- 2^{-N}y_\mathrm{H}, \cdot , t) - v(x_\mathrm{H}, \cdot, t) \right) \right] (z) \,dy_\mathrm{H}; \\ I_4(x, t) &= - \left[ \left( \psi_N \ast_z \left( \chi_N\ast_{\mathrm{H}} v - v \right) \right) \otimes \left( \psi_N \ast_z \left( \chi_N\ast_{\mathrm{H}} v - v \right) \right) \right] (x, t); \\ I_5(x, t) &= \left[ \chi_N \ast_{\mathrm{H}} \left( \int_{\mathbb{R}} \psi(\zeta) { \left( w(\cdot, z-2^{-N}\zeta, t) -w(\cdot, z, t) \right) \left( v(\cdot, z-2^{-N}\zeta, t) -v(\cdot, z, t) \right) } \,d\zeta \right) \right] (x_\mathrm{H}); \\ I_6(x, t) &= - \left[ \chi_N \ast_{\mathrm{H}} \left( \left( \psi_N \ast_z w - w \right) \left( \psi_N \ast_z v - v \right) \right) \right] (x, t); \\ I_7(x, t) &= \int_{\mathbb{R}^2} \chi(y_\mathrm{H}) \, \left[ \psi_N \ast_z \left( w(x_\mathrm{H}- 2^{-N}y_\mathrm{H}, \cdot, t) - w(x_\mathrm{H}, \cdot, t) \right) \right] (z) \\ & \hspace{150pt} \cdot \left[ \psi_N \ast_z \left( v(x_\mathrm{H}- 2^{-N}y_\mathrm{H}, \cdot, t) - v(x_\mathrm{H}, \cdot, t) \right) \right] (z) \,dy_\mathrm{H}; \\ I_8(x, t) &= - \left[ \left( \psi_N \ast_z \left( \chi_N\ast_{\mathrm{H}} w - w \right) \right) \left( \psi_N \ast_z \left( \chi_N\ast_{\mathrm{H}} v - v \right) \right) \right] (x, t).\end{aligned}$$ Then, $$\label{eq:error rhs}
\text{R.H.S.~ of \eqref{eq:energy eq error}} \, = \sum_{m=1}^4 \int_0^t \int_{\mathbb{T}^3} I_m : \nabla_{\mathrm{H}}v_{\leq N} \,dx d\tau + \sum_{m=5}^8 \int_0^t \int_{\mathbb{T}^3} I_m \cdot \partial_{z}v_{\leq N} \,dx d\tau.$$ We first show the following estimate. **Lemma 5**. *Let $1\leq p\leq \infty$. Then, it holds that $$\| \nabla_{\mathrm{H}}v_{\leq N} \|_{L^\infty} \leq C 2^{ (1+2/p)N} \| v \|_{\dot{B}^{{-1/p}}_{{p}, {\infty}, {z}} L_{\mathrm{H}}^{\infty}}$$ for any $v\in \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty}$.* *Proof.* Since $$\| \nabla_{\mathrm{H}}v_{\leq N} \|_{L_{\mathrm{H}}^{\infty}} = \left\| (\nabla_{\mathrm{H}}\chi_N ) \ast_\mathrm{H}\left( \psi_N \ast_z v \right) \right\|_{L_{\mathrm{H}}^{\infty}} \leq C 2^N \| \psi_N \ast_z v \|_{L_{\mathrm{H}}^{\infty}},$$ we have $$\begin{aligned} \| \nabla_{\mathrm{H}}v_{\leq N} \|_{L^\infty} &\leq C2^N \left\| \| \psi_N \ast_z v \|_{L^\infty_z} \right\|_{L_{\mathrm{H}}^{\infty}}.\end{aligned}$$ Since it holds by (2) that $$\| \psi_N \ast_z v \|_{L^\infty_z} \leq \sum_{j\leq N} \| \varphi_j \ast_z v \|_{L^\infty_z} \leq C \sum_{j\leq N} 2^{2j/p} \cdot \left[ 2^{-j/p}\| \varphi_j \ast_z v \|_{L^p_z} \right],$$ we have by Minkowski's inequality that $$\begin{aligned} \| \nabla_{\mathrm{H}}v_{\leq N} \|_{L^\infty} &\leq C 2^N \left\| \sum_{j\leq N} 2^{2j/p} \cdot \left[ 2^{-j/p}\| \varphi_j \ast_z v \|_{L^p_z} \right] \right\|_{L_{\mathrm{H}}^{\infty}} \\ &\leq C 2^N \cdot \sum_{j\leq N} 2^{2j/p} \cdot \left[ 2^{-j/p}\| \varphi_j \ast_z v \|_{L_{\mathrm{H}}^{\infty}(L^p_z)} \right] \\ &\leq C 2^N \cdot \sum_{j\leq N} 2^{2j/p} \cdot \left[ 2^{-j/p}\| \varphi_j \ast_z v \|_{L^p_z(L_{\mathrm{H}}^{\infty})} \right] \\ &\leq C 2^N \cdot \sum_{j\leq N} 2^{2j/p} \cdot \| v \|_{\dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty}} \\ &\leq C 2^{(1+2/p)N} \| v \|_{\dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty}}.\end{aligned}$$ ◻ We list the essential estimates for $I_j$ ($j=1, 
2, 3, 4$) to control the first part of the energy flux. **Lemma 6**. *Let $2\leq p\leq \infty$. Assume that $v\in H^1(\mathbb{T}^3)$. Then, we have the following estimates.* (1) *$\| I_1 \|_{L^1} \leq C 2^{-(1+2/p)N} \left( \int_{\mathbb{R}} |\psi(\zeta)| |\zeta|^{1+2/p} \| v(\cdot, \cdot-2^{-N}\zeta) -v \|_{L^2}^{1-2/p} \,d\zeta \right) \| \partial_{z}v \|_{L^2}^{1+2/p}.$* (2) *$\| I_2 \|_{L^1} \leq C 2^{-(1+2/p)N} \| \psi_N \ast_z v - v \|_{L^2}^{1-2/p} \| \partial_{z}v \|_{L^2}^{1+2/p}.$* (3) *$\| I_3 \|_{L^1} \leq C 2^{-(1+2/p)N} \left( \int_{\mathbb{R}^{2}} |\chi(y_\mathrm{H})| |y_\mathrm{H}|^{1+2/p} \| v(\cdot - 2^{-N}y_\mathrm{H}, \cdot ) - v \|_{L^2}^{1-2/p} \,dy_\mathrm{H} \right) \| \nabla_{\mathrm{H}}v \|_{L^2}^{1+2/p}.$* (4) *$\| I_4 \|_{L^1} \leq C 2^{-(1+2/p)N} \| \chi_N\ast_{\mathrm{H}} v - v \|_{L^2}^{1-2/p} \| \nabla_{\mathrm{H}}v \|_{L^2}^{1+2/p}.$* *Proof.* The estimates (1)-(4) can be shown in a similar manner, in which Hausdorff--Young's inequality for convolution and the interpolation inequality are applied. The terms $I_j$ ($j=1, 2, 3, 4$) can be estimated as follows.
(1) $$\begin{aligned} &\, \| I_1 \|_{L^1} \\ =&\, \left\| \chi_N \ast_{\mathrm{H}} \left( \int_{\mathbb{R}} \psi(\zeta) { \left( v(\cdot, \cdot-2^{-N}\zeta) -v \right) \otimes \left( v(\cdot, \cdot-2^{-N}\zeta) -v \right) } \,d\zeta \right) \right\|_{ L_{\mathrm{H}}^{1} L^{1}_z } \\ \leq&\, C\, \int_{\mathbb{R}} |\psi(\zeta)| \cdot \left\| \| v(\cdot, \cdot-2^{-N}\zeta) -v \|_{L^2_z}^2 \right\|_{L_{\mathrm{H}}^{1}} \,d\zeta \\ =&\, C\, \int_{\mathbb{R}} |\psi(\zeta)| \cdot \left\| \| v(\cdot, \cdot-2^{-N}\zeta) -v \|_{L^2_z}^{1-2/p} \| v(\cdot, \cdot-2^{-N}\zeta) -v \|_{L^2_z}^{1+2/p} \right\|_{L_{\mathrm{H}}^{1}} \,d\zeta \\ \leq&\, \int_{\mathbb{R}} |\psi(\zeta)| \cdot \left\| \| v(\cdot, \cdot-2^{-N}\zeta) -v \|_{L^2_z}^{1-2/p} \cdot \left[ 2^{-N}|\zeta| \| \partial_{z}v \|_{L^2_z} \right]^{1+2/p} \right\|_{L_{\mathrm{H}}^{1}} \,d\zeta \\ \leq&\, C 2^{-(1+2/p)N} \left( \int_{\mathbb{R}} |\psi(\zeta)| |\zeta|^{1+2/p} \cdot \| v(\cdot, \cdot-2^{-N}\zeta) -v \|_{L^2}^{1-2/p} \,d\zeta \right) \cdot \| \partial_{z}v \|_{L^2}^{1+2/p}.\end{aligned}$$ (2) $$\begin{aligned} \| I_2 \|_{L^1} &= \left\| \chi_N \ast_H \left( \left( \psi_N \ast_z v - v \right) \otimes \left( \psi_N \ast_z v - v \right) \right) \right\|_{L^1} \\ &\leq C \left\| \| \psi_N \ast_z v - v \|_{L^2_z}^2 \right\|_{L_{\mathrm{H}}^{1}} \\ &= C \left\| \| \psi_N \ast_z v - v \|_{L^2_z}^{1-2/p} \| \psi_N \ast_z v - v \|_{L^2_z}^{1+2/p} \right\|_{L_{\mathrm{H}}^{1}} \\ &\leq C \left\| \| \psi_N \ast_z v - v \|_{L^2_z}^{1-2/p} \cdot \left[ 2^{-N} \| \partial_{z}v \|_{L^2_z} \right]^{1+2/p} \right\|_{L_{\mathrm{H}}^{1}} \\ &\leq C 2^{-(1+2/p)N} \| \psi_N \ast_z v - v \|_{L^2}^{1-2/p} \| \partial_{z}v \|_{L^2}^{1+2/p}.\end{aligned}$$ (3) $$\begin{aligned} &\, \| I_3 \|_{L^1} \\ =&\, \left\| \int_{\mathbb{R}^2} \chi(y_\mathrm{H}) \, \left[ \psi_N \ast_z \left( v(\cdot - 2^{-N}y_\mathrm{H}, \cdot ) - v\right) \right] \otimes \left[ \psi_N \ast_z \left( v(\cdot - 2^{-N}y_\mathrm{H}, \cdot ) - v \right) \right] 
\,dy_\mathrm{H} \right\|_{L^1} \\ \leq &\, \int_{\mathbb{R}^2} |\chi(y_\mathrm{H})| \| v(\cdot - 2^{-N}y_\mathrm{H}, \cdot ) - v \|_{L^2}^2 \,dy_\mathrm{H}\\ =&\, \int_{\mathbb{R}^2} |\chi(y_\mathrm{H})| \| v(\cdot - 2^{-N}y_\mathrm{H}, \cdot ) - v \|_{L^2}^{1-2/p} \| v(\cdot - 2^{-N}y_\mathrm{H}, \cdot ) - v \|_{L^2}^{1+2/p} \,dy_\mathrm{H}\\ \leq&\, C \int_{\mathbb{R}^2} |\chi(y_\mathrm{H})| \| v(\cdot - 2^{-N}y_\mathrm{H}, \cdot ) - v \|_{L^2}^{1-2/p} \cdot \left[ 2^{-N}|y_H| \| \nabla_{\mathrm{H}}v \|_{L^2} \right]^{1+2/p} \,dy_\mathrm{H}\\ \leq&\, C 2^{-(1+2/p)N} \left( \int_{\mathbb{R}^2} |\chi(y_\mathrm{H})| |y_\mathrm{H}|^{1+2/p} \| v(\cdot - 2^{-N}y_\mathrm{H}, \cdot ) - v \|_{L^2}^{1-2/p} \,dy_\mathrm{H} \right) \cdot \| \nabla_{\mathrm{H}}v \|_{L^2}^{1+2/p}.\end{aligned}$$ (4) $$\begin{aligned} \| I_4 \|_{L^1} &= \left\| \left( \psi_N \ast_z \left( \chi_N\ast_{\mathrm{H}} v - v \right) \right) \otimes \left( \psi_N \ast_z \left( \chi_N\ast_{\mathrm{H}} v - v \right) \right) \right\|_{L^1} \\ &\leq \left\| \psi_N \ast_z \left( \chi_N\ast_{\mathrm{H}} v - v \right) \right\|_{L^2}^2 \\ &\leq C \| \chi_N\ast_{\mathrm{H}} v - v \|_{L^2}^2 \\ &= C \| \chi_N\ast_{\mathrm{H}} v - v \|_{L^2}^{1-2/p} \| \chi_N\ast_{\mathrm{H}} v - v \|_{L^2}^{1+2/p} \\ &\leq C 2^{-(1+2/p)N} \| \chi_N\ast_{\mathrm{H}} v - v \|_{L^2}^{1-2/p} \| \nabla_{\mathrm{H}}v \|_{L^2}^{1+2/p}. \qedhere \end{aligned}$$  ◻ **Lemma 7**. 
*If $v\in L^\infty(0, T; L^2) \cap L^2(0, T; H^1) \cap L^\beta (0, T; \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty})$ for some $2<\beta,\, p<\infty$ with $2/\beta+2/p=1$, then it holds that $$\sum_{m=1}^4 \int_0^t \int_{\mathbb{T}^3} I_m : \nabla_{\mathrm{H}}v_{\leq N} \,dx d\tau \to 0 \qquad \text{as $N\to\infty$}$$ for $0<t\leq T$.* *Proof.* Lemma 5 and [Lemma 6](#lem:3-2){reference-type="ref" reference="lem:3-2"} yield $$\begin{aligned} &\, \left| \sum_{m=1}^4 \int_0^t \int_{\mathbb{T}^3} I_m : \nabla_{\mathrm{H}}v_{\leq N} \,dxd\tau \right| \\ \leq &\, \sum_{m=1}^4 \int_0^t \| I_m(\tau) \|_{L^1} \| \nabla_{\mathrm{H}}v_{\leq N} (\tau) \|_{L^\infty} \,d\tau \\ \leq &\, C \int_0^t \left( \int_{\mathbb{R}} |\psi(\zeta)| |\zeta|^{1+2/p} \cdot \| v(\cdot, \cdot-2^{-N}\zeta) -v \|_{L^2}^{1-2/p} \,d\zeta \right) \cdot \| \partial_{z}v \|_{L^2}^{1+2/p} \| v \|_{\dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty}} \,d\tau \\ &+ C \int_0^t \| \psi_N \ast_z v - v \|_{L^2}^{1-2/p} \| \partial_{z}v \|_{L^2}^{1+2/p} \| v \|_{\dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty}} \,d\tau \\ & + C \int_0^t \left( \int_{\mathbb{R}^2} |\chi(y_\mathrm{H})| |y_\mathrm{H}|^{1+2/p} \| v(\cdot - 2^{-N}y_\mathrm{H}, \cdot ) - v \|_{L^2}^{1-2/p} \,dy_\mathrm{H} \right) \cdot \| \nabla_{\mathrm{H}}v \|_{L^2}^{1+2/p} \| v \|_{\dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty}} \,d\tau \\ & + C \int_0^t \| \chi_N\ast_{\mathrm{H}} v - v \|_{L^2}^{1-2/p} \| \nabla_{\mathrm{H}}v \|_{L^2}^{1+2/p} \cdot \| v \|_{\dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty}} \,d\tau.\end{aligned}$$ By the assumption $v\in L^\infty(0, T; L^2) \cap L^2(0, T; H^1)$ and $v\in L^\beta(0, T; \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty})$, we can apply the Lebesgue convergence theorem to each term above. The first term tends to 0 as $N\to \infty$ due to the continuity of translations in $L^2_z(\mathbb{T})$. The third term also tends to 0.
The second term tends to 0 due to $\psi_N \ast_z v \to v$ in $L^2_z(\mathbb{T})$. The fourth term also tends to 0. Consequently, we have that $$\left| \sum_{m=1}^4 \int_0^t \int_{\mathbb{T}^3} I_m : \nabla_{\mathrm{H}}v_{\leq N} \,dxd\tau \right| \to 0. \qedhere$$ ◻ Next, we deal with the second term of the right hand side of [\[eq:energy eq error\]](#eq:energy eq error){reference-type="eqref" reference="eq:energy eq error"}. We introduce two types of estimates for $\partial_{z}v_{\leq N}$. **Lemma 8**. (1) *Let $1\leq p\leq \infty$. Then it holds that $$\| \partial_{z}v_{\leq N} \|_{L_{\mathrm{H}}^{\infty}L^p_z} \leq C 2^{(1+1/p)N} \|v\|_{\dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty}}$$ for any $v\in \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty}$.* (2) *Let $2\leq \gamma\leq \infty$. Then it holds that $$\| \partial_{z}v_{\leq N} \|_{L_{\mathrm{H}}^{\infty}L^2_z} \leq C 2^{(1-2/\gamma)N} \| v \|_{\dot{B}^{{2/\gamma}}_{{2}, {\infty}, {z}}L_{\mathrm{H}}^{\infty}}$$ for any $v \in \dot{B}^{{2/\gamma}}_{{2}, {\infty}, {z}}L_{\mathrm{H}}^{\infty}$.* *Proof.* The proof is straightforward and similar to that of Lemma 5; thus we omit it. ◻ **Lemma 9**. *Let $2\leq p \leq \infty$ and $1/p+1/{p'}=1$. Assume that $v\in H^1(\mathbb{T}^3)$.
Then, the following estimates hold.* (1) *$\| I_5 \|_{L_{\mathrm{H}}^{1}L^{p'}_z} \leq C 2^{-(1+1/p)N} \left( \int_{\mathbb{R}} |\psi(\zeta)| |\zeta|^{1+2/p} \| v(\cdot, \cdot-2^{-N}\zeta) -v \|_{L^2}^{1-2/p} \,d\zeta \right) \| \nabla v \|_{L^2}^{1+2/p}.$* (2) *$\| I_6 \|_{L_{\mathrm{H}}^{1}L^{p'}_z} \leq C 2^{-(1+1/p)N} \| \psi_N \ast_z v - v \|_{L^2}^{1-2/p} \| \nabla v \|_{L^2}^{1+2/p} .$* *Proof.* (1) We can estimate $\| I_5 \|_{ L_{\mathrm{H}}^{1} L^{p'}_z }$ by Hausdorff--Young's inequality for convolution, Minkowski's inequality and Hölder's inequality as $$\begin{aligned} \begin{aligned}\label{eq:I5_1} \| I_5 \|_{ L_{\mathrm{H}}^{1} L^{p'}_z } &= \left\| \chi_N \ast_{\mathrm{H}} \left( \int_{\mathbb{R}} \psi(\zeta) { \left( w(\cdot, \cdot-2^{-N}\zeta) -w \right) \left( v(\cdot, \cdot-2^{-N}\zeta) -v \right) } \,d\zeta \right) \right\|_{ L_{\mathrm{H}}^{1} L^{p'}_z } \\ &\leq C\, \int_{\mathbb{R}} |\psi(\zeta)| \cdot \left\| \| w(\cdot, \cdot-2^{-N}\zeta) -w \|_{L^2_z} \| v(\cdot, \cdot-2^{-N}\zeta) -v \|_{L^q_z} \right\|_{L_{\mathrm{H}}^{1}} \,d\zeta. 
\end{aligned}\end{aligned}$$ Here the exponent $q$ is given by $1/q = 1/2 - 1/p$. Since $$\| w(\cdot, \cdot-2^{-N}\zeta) -w \|_{L^2_z} \leq C|\zeta| \cdot 2^{-N} \| \partial_{z}w \|_{L^2_z} = C|\zeta| \cdot 2^{-N} \| \mathrm{div}_{\mathrm{H}} \,v \|_{L^2_z},$$ we have $$\label{eq:I5_2} \| w(\cdot, \cdot-2^{-N}\zeta) -w \|_{L^2} \leq C |\zeta| \cdot 2^{-N} \| \mathrm{div}_{\mathrm{H}} \,v \|_{L^2}.$$ Since we can estimate $$\begin{aligned} \| v(\cdot, \cdot-2^{-N}\zeta) -v \|_{L^q_z} &\leq \| v(\cdot, \cdot-2^{-N}\zeta) -v \|_{L^2_z}^{1-2/p} \| v(\cdot, \cdot-2^{-N}\zeta) -v \|_{L^\infty_z}^{2/p} \\ &\leq C \| v(\cdot, \cdot-2^{-N}\zeta) -v \|_{L^2_z}^{1-2/p} \left[ 2^{-N/2}|\zeta| \|v\|_{\dot{C}^{1/2} } \right]^{2/p} \\ &\leq C |\zeta|^{2/p} \cdot 2^{-N/p} \| v(\cdot, \cdot-2^{-N}\zeta) -v \|_{L^2_z}^{1-2/p} \|\partial_{z}v\|_{L^2_z}^{2/p},\end{aligned}$$ we have $$\label{eq:I5_3} \| v(\cdot, \cdot-2^{-N}\zeta) -v \|_{L_{\mathrm{H}}^{2} L^q_z} \leq C |\zeta|^{2/p} \cdot 2^{-N/p} \| v(\cdot, \cdot-2^{-N}\zeta) -v \|_{L^2}^{1-2/p} \|\partial_{z}v\|_{L^2}^{2/p}.$$ Consequently, it follows from [\[eq:I5_1\]](#eq:I5_1){reference-type="eqref" reference="eq:I5_1"}, [\[eq:I5_2\]](#eq:I5_2){reference-type="eqref" reference="eq:I5_2"} and [\[eq:I5_3\]](#eq:I5_3){reference-type="eqref" reference="eq:I5_3"} that $$\begin{aligned} &\, \| I_5 \|_{L_{\mathrm{H}}^{1}L^{p'}_z} \\ \leq&\, C 2^{-(1+1/p)N} \left( \int_{\mathbb{R}} |\psi(\zeta)| |\zeta|^{1+2/p} \cdot \| v(\cdot, \cdot-2^{-N}\zeta) -v \|_{L^2}^{1-2/p} \,d\zeta \right) \| \mathrm{div}_{\mathrm{H}} \,v \|_{L^2} \|\partial_{z}v\|_{L^2}^{2/p} \\ \leq&\, C 2^{-(1+1/p)N} \left( \int_{\mathbb{R}} |\psi(\zeta)| |\zeta|^{1+2/p} \cdot \| v(\cdot, \cdot-2^{-N}\zeta) -v \|_{L^2}^{1-2/p} \,d\zeta \right) \| \nabla v \|_{L^2}^{1+2/p}.\end{aligned}$$ (2) Hausdorff--Young's inequality for convolution and Hölder's inequality yield $$\begin{aligned} \begin{aligned}\label{eq:I6_1} \| I_6 \|_{L_{\mathrm{H}}^{1}L^{p'}_z} &= \left\| \chi_N \ast_{\mathrm{H}} \left( \left( \psi_N \ast_z w - w \right) 
\left( \psi_N \ast_z v - v \right) \right) \right\|_{L_{\mathrm{H}}^{1}L^{p'}_z} \\ &\leq C \left\| \| \psi_N \ast_z w - w \|_{L^2_z} \| \psi_N \ast_z v - v \|_{L^q_z} \right\|_{L_{\mathrm{H}}^{1}}. \end{aligned}\end{aligned}$$ Since $$\| \psi_N \ast_z w - w \|_{L^2_z} \leq C 2^{-N} \| \partial_{z}w \|_{L^2_z} = C2^{-N} \| \mathrm{div}_{\mathrm{H}} \,v \|_{L^2_z},$$ we have $$\label{eq:I6_2} \| \psi_N \ast_z w - w \|_{L_{\mathrm{H}}^{2}L^2_z} \leq C2^{-N} \| \mathrm{div}_{\mathrm{H}} \,v \|_{L^2}.$$ By interpolation, it holds that $$\| \psi_N \ast_z v - v \|_{L^q_z} \leq \| \psi_N \ast_z v - v \|_{L^2_z}^{1-2/p} \| \psi_N \ast_z v - v \|_{L^\infty_z}^{2/p},$$ and moreover $$\begin{aligned} \| \psi_N \ast_z v - v \|_{L^\infty_z} &\leq \sum_{j>N} \| \varphi_j \ast_z v \|_{L^\infty_z} \\ &\leq C \sum_{j>N} 2^{-j/2} \| \varphi_j \ast_z (\partial_{z}v) \|_{L^2_z} \\ &\leq C 2^{-N/2} \| \partial_{z}v \|_{L^2_z}.\end{aligned}$$ Therefore, we have $$\begin{aligned} \begin{aligned}\label{eq:I6_3} \left\| \| \psi_N \ast_z v - v \|_{L^q_z} \right\|_{L_{\mathrm{H}}^{2}} &\leq C 2^{-N/p} \| \psi_N \ast_z v - v \|_{L^2}^{1-2/p} \| \partial_{z}v \|_{L^2}^{2/p}. \end{aligned}\end{aligned}$$ Consequently, it follows from [\[eq:I6_1\]](#eq:I6_1){reference-type="eqref" reference="eq:I6_1"}, [\[eq:I6_2\]](#eq:I6_2){reference-type="eqref" reference="eq:I6_2"} and [\[eq:I6_3\]](#eq:I6_3){reference-type="eqref" reference="eq:I6_3"} that $$\begin{aligned} \| I_6 \|_{L_{\mathrm{H}}^{1}L^{p'}_z} &\leq C 2^{-(1+1/p)N} \| \nabla v \|_{L^2}^{1+2/p} \| \psi_N \ast_z v - v \|_{L^2}^{1-2/p}. \qedhere \end{aligned}$$  ◻ **Lemma 10**. *Let $2\leq \gamma \leq \infty$. Assume that $v\in H^1(\mathbb{T}^3)$. 
Then, the following estimates hold.* (1) *$\| I_7 \|_{L_{\mathrm{H}}^{1}L^{2}_z} \leq C 2^{-(1-2/\gamma) N} \left( \int_{\mathbb{R}^2} |\chi(y_\mathrm{H})| |y_\mathrm{H}|^{1-2/\gamma} \| v(\cdot - 2^{-N}y_\mathrm{H}, \cdot ) - v \|_{L^2}^{2/\gamma} \,dy_\mathrm{H} \right) \cdot \| \nabla_{\mathrm{H}}v \|_{L^2}^{2-2/\gamma}.$* (2) *$\| I_8 \|_{L_{\mathrm{H}}^{1}L^2_z} \leq C2^{-(1-2/\gamma)N} \| \chi_N\ast_{\mathrm{H}} v - v \|_{L^2}^{2/\gamma} \| \nabla_{\mathrm{H}}v \|_{L^2}^{2-2/\gamma}.$* *Proof.* (1) We can estimate $\| I_7 \|_{ L_{\mathrm{H}}^{1} L^2_z }$ by Minkowski's inequality and Hölder's inequality as $$\begin{aligned} &\, \| I_7 \|_{L_{\mathrm{H}}^{1}L^2_z} \\ =&\, \left\| \int_{\mathbb{R}^2} \chi(y_\mathrm{H}) \, \left[ \psi_N \ast_z \left( w(\cdot - 2^{-N}y_\mathrm{H}, \cdot ) - w \right) \right] \cdot \left[ \psi_N \ast_z \left( v(\cdot - 2^{-N}y_\mathrm{H}, \cdot ) - v \right) \right] \right\|_{L_{\mathrm{H}}^{1}L^2_z} \\ \leq &\, \int_{\mathbb{R}^2} | \chi(y_\mathrm{H}) | \| \psi_N \ast_z \left( w(\cdot - 2^{-N}y_\mathrm{H}, \cdot ) - w \right) \|_{L_{\mathrm{H}}^{2}L^\infty_z} \left\| \| \psi_N \ast_z \left( v(\cdot - 2^{-N}y_\mathrm{H}, \cdot ) - v \right) \|_{L^2_z} \right\|_{L_{\mathrm{H}}^{2}} \,dy_\mathrm{H}.\end{aligned}$$ Here, it holds that $$\begin{aligned} &\, \| \psi_N \ast_z \left( w(\cdot - 2^{-N}y_\mathrm{H}, \cdot) - w \right) \|_{L_{\mathrm{H}}^{2} L^\infty_z} \\ \leq &\, C \left\| \| w\|_{L^{\infty}_z} \right\|_{L_{\mathrm{H}}^{2}} \\ \leq &\, C \left\| \int_{-\pi}^{\pi} |\mathrm{div}_{\mathrm{H}} \,{v}|\,d\zeta \right\|_{L_{\mathrm{H}}^{2}} \\ \leq &\, C \| \mathrm{div}_{\mathrm{H}} \,{v} \|_{L^2} \qquad \left( L^2_z(\mathbb{T}) \hookrightarrow L^1_z(\mathbb{T}) \right) \\ \leq &\, C \| \nabla_{\mathrm{H}}{v} \|_{L^2},\end{aligned}$$ and $$\begin{aligned} &\, \left\| \| \psi_N \ast_z \left( v(\cdot - 2^{-N}y_\mathrm{H}, \cdot ) - v \right) \|_{L^2_z} \right\|_{L_{\mathrm{H}}^{2}} \\ \leq&\, C \| v(\cdot - 
2^{-N}y_\mathrm{H}, \cdot ) - v \|_{L^2} \\ =&\, C \| v(\cdot - 2^{-N}y_\mathrm{H}, \cdot ) - v \|_{L^2}^{2/\gamma} \left\| \| v(\cdot - 2^{-N}y_\mathrm{H}, \cdot ) - v \|_{L_{\mathrm{H}}^{2}} \right\|_{L^2_z}^{1-2/\gamma} \\ \leq&\, C \| v(\cdot - 2^{-N}y_\mathrm{H}, \cdot ) - v \|_{L^2}^{2/\gamma} \cdot \left[ 2^{-N}|y_\mathrm{H}| \left\| \| \nabla_{\mathrm{H}}v \|_{L_{\mathrm{H}}^{2}} \right\|_{L^2_z} \right]^{1-2/\gamma} \\ =&\, C |y_\mathrm{H}|^{1-2/\gamma} \cdot 2^{-(1-2/\gamma)N} \| v(\cdot - 2^{-N}y_\mathrm{H}, \cdot ) - v \|_{L^2}^{2/\gamma} \| \nabla_{\mathrm{H}}v \|_{L^2}^{1-2/\gamma}.\end{aligned}$$ Therefore, we have $$\begin{aligned} \| I_7 \|_{L_{\mathrm{H}}^{1}L_z^{2}} \leq\, C 2^{-(1-2/\gamma)N} \left( \int_{\mathbb{R}^2} |\chi(y_\mathrm{H})| |y_\mathrm{H}|^{1-2/\gamma} \| v(\cdot - 2^{-N}y_\mathrm{H}, \cdot ) - v \|_{L^2}^{2/\gamma} \,dy_\mathrm{H} \right) \cdot \| \nabla_{\mathrm{H}}v \|_{L^2}^{2-2/\gamma}.\end{aligned}$$ (2) We can estimate $\| I_8 \|_{ L_{\mathrm{H}}^{1} L^{2}_z }$ by Hölder's inequality as $$\begin{aligned} &\, \left\| \left( \psi_N \ast_z \left( \chi_N\ast_{\mathrm{H}} w - w \right) \right) \left( \psi_N \ast_z \left( \chi_N\ast_{\mathrm{H}} v - v \right) \right) \right\|_{L_{\mathrm{H}}^{1}L^{2}_z} \\ \leq &\, \| \psi_N \ast_z \left( \chi_N\ast_{\mathrm{H}} w - w \right) \|_{L_{\mathrm{H}}^{2} L^\infty_z} \| \psi_N \ast_z \left( \chi_N\ast_{\mathrm{H}} v - v \right) \|_{L^2}.\end{aligned}$$ Here, it holds that $$\begin{aligned} &\, \| \psi_N \ast_z \left( \chi_N\ast_{\mathrm{H}} w - w \right) \|_{L_{\mathrm{H}}^{2} L^\infty_z} \\ \leq &\, C \left\| \| w\|_{L^{\infty}_z} \right\|_{L_{\mathrm{H}}^{2}} \\ \leq &\, C \left\| \int_{-\pi}^{\pi} |\mathrm{div}_{\mathrm{H}} \,{v}|\,d\zeta \right\|_{L_{\mathrm{H}}^{2}} \\ \leq &\, C \| \mathrm{div}_{\mathrm{H}} \,{v} \|_{L^2} \qquad \left( L^2_z(\mathbb{T}) \hookrightarrow L^1_z(\mathbb{T}) \right) \\ \leq &\, C \| \nabla_{\mathrm{H}}{v} \|_{L^2},\end{aligned}$$ and 
$$\begin{aligned} &\, \| \psi_N \ast_z \left( \chi_N\ast_{\mathrm{H}} v - v \right) \|_{L^2} \\ = & \| \psi_N \ast_z \left( \chi_N\ast_{\mathrm{H}} v - v \right) \|_{L^2}^{2/\gamma} \| \psi_N \ast_z \left( \chi_N\ast_{\mathrm{H}} v - v \right) \|_{L^2}^{1-2/\gamma} \\ \leq &\, C \| \chi_N\ast_{\mathrm{H}} v - v \|_{L^2}^{2/\gamma} \cdot \left[ 2^{-N} \left\| \| \nabla_{\mathrm{H}}v \|_{L^2_z} \right\|_{L_{\mathrm{H}}^{2}} \right]^{1-2/\gamma} \\ \leq &\, C 2^{-(1-2/\gamma)N} \| \chi_N\ast_{\mathrm{H}} v - v \|_{L^2}^{2/\gamma} \| \nabla_{\mathrm{H}}v \|_{L^2}^{1-2/\gamma}.\end{aligned}$$ Therefore, we have $$\| I_8 \|_{L_{\mathrm{H}}^{1}L^2_z} \leq C 2^{-(1-2/\gamma)N} \| \chi_N\ast_{\mathrm{H}} v - v \|_{L^2}^{2/\gamma} \| \nabla_{\mathrm{H}}v \|_{L^2}^{2-2/\gamma}.\qedhere$$  ◻ **Lemma 11**. *If $v\in L^\infty(0, T; L^2) \cap L^2(0, T; H^1) \cap L^\beta (0, T; \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty}) \cap L^\gamma(0, T; \dot{B}^{{2/\gamma}}_{{2}, {\infty}, {z}}L_{\mathrm{H}}^{\infty})$ for some $2<\beta,\, p<\infty$ with $2/\beta + 2/p = 1$ and for some $2<\gamma<\infty$, then it holds that* *$$\sum_{m=5}^8 \int_0^t \int_{\mathbb{T}^3} I_m \cdot \partial_{z}v_{\leq N} \,dx d\tau \to 0 \qquad \text{as $N\to\infty$}$$ for $0< t \leq T$.* *Proof.* Applying Hölder's inequality together with the preceding lemmas, we have $$\begin{aligned} &\, \left| \int_0^t \int_{\mathbb{T}^3} I_5 \cdot \partial_{z}v_{\leq N} \,dxd\tau + \int_0^t \int_{\mathbb{T}^3} I_6 \cdot \partial_{z}v_{\leq N} \,dxd\tau \right| \\ \leq &\, \int_0^t \| I_5(\tau) \|_{L_{\mathrm{H}}^{1}L^{p'}_z} \| \partial_{z}v_{\leq N} (\tau) \|_{L_{\mathrm{H}}^{\infty}L^p_z} \,d\tau + \int_0^t \| I_6(\tau) \|_{L_{\mathrm{H}}^{1}L^{p'}_z} \| \partial_{z}v_{\leq N} (\tau) \|_{L_{\mathrm{H}}^{\infty}L^p_z} \,d\tau\\ \leq &\, C \int_0^t \left( \int_{\mathbb{R}} |\psi(\zeta)| |\zeta|^{1+2/p} \| v(\cdot, \cdot-2^{-N}\zeta) -v \|_{L^2}^{1-2/p} \,d\zeta \right) \| \nabla v \|_{L^2}^{1+2/p} \| v \|_{\dot{B}^{{-1/p}}_{{p}, 
{\infty}, {z}}L_{\mathrm{H}}^{\infty}} \,d\tau \\ &+ C \int_0^t \| \psi_N \ast_z v - v \|_{L^2}^{1-2/p} \| \nabla v \|_{L^2}^{1+2/p} \| v \|_{\dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty}} \,d\tau .\end{aligned}$$ By the assumption $v\in L^\infty(0, T; L^2) \cap L^2(0, T; H^1)$ and $v\in L^\beta (0, T; \dot{B}^{{-1/p}}_{{p}, {\infty}, {z}}L_{\mathrm{H}}^{\infty})$, we can apply the Lebesgue dominated convergence theorem to each term above. The first term tends to 0 as $N\to \infty$ due to the continuity of the translation in $L^2_z(\mathbb{T})$. The second term tends to 0 due to $\psi_N \ast_z v \to v$ in $L^2_z(\mathbb{T})$. Therefore, we have that $$\label{eq:lem3-7-1} \left| \int_0^t \int_{\mathbb{T}^3} I_5 \cdot \partial_{z}v_{\leq N} \,dxd\tau + \int_0^t \int_{\mathbb{T}^3} I_6 \cdot \partial_{z}v_{\leq N} \,dxd\tau \right| \to 0.$$ On the other hand, applying Hölder's inequality together with the preceding lemmas, we have $$\begin{aligned} &\, \left| \int_0^t \int_{\mathbb{T}^3} I_7 \cdot \partial_{z}v_{\leq N} \,dxd\tau + \int_0^t \int_{\mathbb{T}^3} I_8 \cdot \partial_{z}v_{\leq N} \,dxd\tau \right| \\ \leq &\, \int_0^t \| I_7(\tau) \|_{L_{\mathrm{H}}^{1}L^2_z} \| \partial_{z}v_{\leq N} (\tau) \|_{L_{\mathrm{H}}^{\infty}L^2_z} \,d\tau + \int_0^t \| I_8(\tau) \|_{L_{\mathrm{H}}^{1}L^2_z} \| \partial_{z}v_{\leq N} (\tau) \|_{L_{\mathrm{H}}^{\infty}L^2_z} \,d\tau\\ \leq &\, C \int_0^t \left( \int_{\mathbb{R}^2} |\chi(y_\mathrm{H})| |y_\mathrm{H}|^{1-2/\gamma} \| v(\cdot - 2^{-N}y_\mathrm{H}, \cdot ) - v \|_{L^2}^{2/\gamma} \,dy_\mathrm{H} \right) \| \nabla_{\mathrm{H}}v \|_{L^2}^{2-2/\gamma} \| v \|_{\dot{B}^{{2/\gamma}}_{{2}, {\infty}, {z}}L_{\mathrm{H}}^{\infty}} \,d\tau \\ &+ C \int_0^t \| \chi_N\ast_{\mathrm{H}} v - v \|_{L^2}^{2/\gamma} \| \nabla_{\mathrm{H}}v \|_{L^2}^{2-2/\gamma} \| v \|_{\dot{B}^{{2/\gamma}}_{{2}, {\infty}, {z}}L_{\mathrm{H}}^{\infty}} \,d\tau .\end{aligned}$$ As in [\[eq:lem3-7-1\]](#eq:lem3-7-1){reference-type="eqref" 
reference="eq:lem3-7-1"}, we have from the assumptions $v\in L^\infty(0, T; L^2(\mathbb{T}^3)) \cap L^2(0, T; H^1(\mathbb{T}^3))$ and $v\in L^\gamma(0, T; \dot{B}^{{2/\gamma}}_{{2}, {\infty}, {z}}L_{\mathrm{H}}^{\infty})$ that $$\left| \int_0^t \int_{\mathbb{T}^3} I_7 \cdot \partial_{z}v_{\leq N} \,dxd\tau + \int_0^t \int_{\mathbb{T}^3} I_8 \cdot \partial_{z}v_{\leq N} \,dxd\tau \right| \to 0.$$ ◻ *Proof of .* Since $v_{\leq N}$ and $\nabla v_{\leq N}$ approximate $v$ and $\nabla v$ in the sense of $L^2(\mathbb{T}^3)$, respectively, we can see that $$\begin{aligned} \begin{aligned}\label{eq:prf ee lhs} \text{L.H.S.~of \eqref{eq:energy eq error}}\, &= \frac{1}{2}\| v_{\leq N}(t) \|_{L^2}^2 - \frac{1}{2}\| ( v_0 )_{\leq N} \|_{L^2}^2 + \int_0^t \int_{\mathbb{T}^3} |\nabla v_{\leq N}|^2\,dx d\tau \\ &\to \frac{1}{2}\| v(t) \|_{L^2}^2 - \frac{1}{2}\| v_0 \|_{L^2}^2 + \int_0^t \int_{\mathbb{T}^3} |\nabla v|^2\,dx d\tau \qquad \text{as $N\to\infty$} \end{aligned}\end{aligned}$$ with the aid of the assumption $v\in L^\infty(0, T; L^2(\mathbb{T}^3)) \cap L^2(0, T; H^1(\mathbb{T}^3))$. Applying the corresponding convergence result for $I_1, \ldots, I_4$ and [Lemma 11](#lem:3-7){reference-type="ref" reference="lem:3-7"} to [\[eq:error rhs\]](#eq:error rhs){reference-type="eqref" reference="eq:error rhs"}, we have $$\begin{aligned} \begin{aligned}\label{eq:prf ee rhs} \text{R.H.S.~of \eqref{eq:energy eq error}}\, &= \sum_{m=1}^4 \int_0^t \int_{\mathbb{T}^3} I_m : \nabla_{\mathrm{H}}v_{\leq N} \,dx d\tau + \sum_{m=5}^8 \int_0^t \int_{\mathbb{T}^3} I_m \cdot \partial_{z}v_{\leq N} \,dx d\tau \\ &\to 0 \qquad \text{as $N\to\infty$}. \end{aligned}\end{aligned}$$ Consequently, [\[eq:prf ee lhs\]](#eq:prf ee lhs){reference-type="eqref" reference="eq:prf ee lhs"} and [\[eq:prf ee rhs\]](#eq:prf ee rhs){reference-type="eqref" reference="eq:prf ee rhs"} yield the energy equality [\[eq:energy equality\]](#eq:energy equality){reference-type="eqref" reference="eq:energy equality"}. 
◻ ## 2D case {#d-case-1} In order to prove , let $$\label{eq: vN2} v_{\leq N} \equiv \psi_{N} \ast_{z} \widetilde{\chi_{N}} \ast_{x_1} v = \sum_{j, j' \leq N}\varphi_{j} \ast_{z} \widetilde{\theta_{j'}} \ast_{x_1} v,$$ where $\{\varphi_j\}_{j\in\mathbb{Z}}$ (resp. $\left\{\widetilde{\theta_j}\right\}_{j\in\mathbb{Z}}$) is the one-dimensional Littlewood--Paley decomposition of unity in the $z$-direction (resp. $x_1$-direction) and $$\psi_N = \sum_{j\leq N} \varphi_j \qquad \left( \text{resp.}\ \widetilde{\chi_N} = \sum_{j\leq N} \widetilde{\theta_j} \right).$$ In this section, $\ast_z$ (resp. $\ast_{x_1}$) denotes the one-dimensional convolution in the $z$-direction (resp. $x_1$-direction). As in the three-dimensional case, choosing $\phi = \left( v_{\leq N} \right)_{\leq N}$, with $v_{\leq N}$ given in [\[eq: vN2\]](#eq: vN2){reference-type="eqref" reference="eq: vN2"}, in [\[eq:weak solution PE2\]](#eq:weak solution PE2){reference-type="eqref" reference="eq:weak solution PE2"}, we have $$\begin{aligned} \begin{aligned}\label{eq:energy eq error 2D} &\, \frac{1}{2}\| v_{\leq N}(t) \|_{L^2}^2 - \frac{1}{2}\| ( v_0 )_{\leq N} \|_{L^2}^2 + \int_0^t \int_{\mathbb{T}^2} |\nabla v_{\leq N}|^2\,dx d\tau \\ =&\, \int_0^t \int_{\mathbb{T}^2} \left[ (v^1 v)_{\leq N} - (v^1)_{\leq N} v_{\leq N} \right] \cdot \partial_{x_1} v_{\leq N} \, dx d\tau + \int_0^t \int_{\mathbb{T}^2} \left[ (w v)_{\leq N} - w_{\leq N} v_{\leq N} \right]\cdot \partial_{z}v_{\leq N} \, dx d\tau \\ =&\, \sum_{m=1}^4 \int_0^t \int_{\mathbb{T}^2} \widetilde{I_m} \cdot \partial_{x_1} v_{\leq N} \,dx d\tau + \sum_{m=5}^8 \int_0^t \int_{\mathbb{T}^2} \widetilde{I_m} \cdot \partial_{z}v_{\leq N} \,dx d\tau, \end{aligned}\end{aligned}$$ where $\widetilde{I_1}, \widetilde{I_2}, \ldots, \widetilde{I_8}$ are given by $$\begin{aligned} \widetilde{I_1}(x, t) &= \left[ \widetilde{\chi_N} \ast_{x_1} \left( \int_{\mathbb{R}} \psi(\zeta) { \left( v^1(\cdot, z-2^{-N}\zeta, t) -v^1(\cdot, z, t) \right) \left( v(\cdot, z-2^{-N}\zeta, t) 
-v(\cdot, z, t) \right) } \,d\zeta \right) \right] (x_1); \\ \widetilde{I_2} (x, t) &= - \left[ \widetilde{\chi_N} \ast_{x_1} \left( \left( \psi_N \ast_z v^1 - v^1 \right) \left( \psi_N \ast_z v - v \right) \right) \right] (x, t); \\ \widetilde{I_3}(x, t) &= \int_{\mathbb{R}} \widetilde{\chi}(y_1) \, \left[ \psi_N \ast_z \left( v^1(x_1 - 2^{-N}y_1, \cdot, t) - v^1(x_1, \cdot, t) \right) \right] (z) \\ & \hspace{150pt} \cdot \left[ \psi_N \ast_z \left( v(x_1 - 2^{-N}y_1, \cdot , t) - v(x_1, \cdot, t) \right) \right] (z) \,dy_1; \\ \widetilde{I_4}(x, t) &= - \left[ \left( \psi_N \ast_z \left( \widetilde{\chi_N}\ast_{x_1} v^1 - v^1 \right) \right) \left( \psi_N \ast_z \left( \widetilde{\chi_N}\ast_{x_1} v - v \right) \right) \right] (x, t); \\ \widetilde{I_5}(x, t) &= \left[ \widetilde{\chi_N} \ast_{x_1} \left( \int_{\mathbb{R}} \psi(\zeta) { \left( w(\cdot, z-2^{-N}\zeta, t) -w(\cdot, z, t) \right) \left( v(\cdot, z-2^{-N}\zeta, t) -v(\cdot, z, t) \right) } \,d\zeta \right) \right] (x_1); \\ \widetilde{I_6}(x, t) &= - \left[ \widetilde{\chi_N} \ast_{x_1} \left( \left( \psi_N \ast_z w - w \right) \left( \psi_N \ast_z v - v \right) \right) \right] (x, t); \\ \widetilde{I_7}(x, t) &= \int_{\mathbb{R}} \widetilde{\chi}(y_1) \, \left[ \psi_N \ast_z \left( w(x_1 - 2^{-N}y_1, \cdot, t) - w(x_1, \cdot, t) \right) \right] (z) \\ & \hspace{150pt} \cdot \left[ \psi_N \ast_z \left( v(x_1 - 2^{-N}y_1, \cdot, t) - v(x_1, \cdot, t) \right) \right] (z) \,dy_1; \\ \widetilde{I_8}(x, t) &= - \left[ \left( \psi_N \ast_z \left( \widetilde{\chi_N}\ast_{x_1} w - w \right) \right) \left( \psi_N \ast_z \left( \widetilde{\chi_N}\ast_{x_1} v - v \right) \right) \right] (x, t).\end{aligned}$$ Therefore, we also obtain the energy equality in the same manner as the proof for the three-dimensional case. ◻ The authors express their sincere thanks to Professor Hideo Kozono for his encouragement. This project started while the first-named author was visiting Waseda University in Tokyo. 
He is grateful to the Department of Mathematics for its kind hospitality during this time. This work was supported by JST SPRING, Grant Number JPMJSP2128. [^1]: Strictly speaking, the embedding $\dot{H}^1_\mathrm{H}\hookrightarrow L^\infty_\mathrm{H}$ does NOT hold due to the marginal case of the Sobolev embedding in two dimensions, but we regard these spaces as comparable with respect to scaling and embedding here.
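The failure of this marginal embedding is witnessed by the classical counterexample $u(x) = \log(1-\log|x|)$ on the unit disc, which has finite Dirichlet energy but is unbounded at the origin. The following quick numerical illustration is not from the paper; it only confirms the textbook computation:

```python
import numpy as np
from scipy.integrate import quad

# Classical counterexample to H^1 -> L^infty in two dimensions:
# u(x) = log(1 - log|x|) on the unit disc is unbounded at the origin
# yet has finite Dirichlet energy.
def u(r):
    return np.log(1.0 - np.log(r))

# |grad u|(r) = 1 / (r (1 - log r)), so in polar coordinates
#   E = 2 pi * int_0^1 dr / (r (1 - log r)^2),
# which the substitution t = 1 - log r turns into 2 pi * int_1^inf t^{-2} dt.
integral, _ = quad(lambda t: t ** -2, 1.0, np.inf)
E = 2.0 * np.pi * integral

print(E)          # finite Dirichlet energy (the exact value is 2*pi)
print(u(1e-300))  # u blows up as r -> 0, albeit doubly-logarithmically
```

The energy stays at $2\pi$ while $\sup u = \infty$, which is exactly the marginal failure the footnote refers to.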
arxiv_math
{ "id": "2309.03443", "title": "Uniqueness of weak solutions to the primitive equations in some\n anisotropic spaces", "authors": "Tim Binz, Yoshiki Iida", "categories": "math.AP", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | In this paper, we determine the transverse instability of periodic standing wave solutions for the generalized Schrödinger equation with fractional power nonlinearity. The existence of periodic waves is determined by using a constrained minimization problem in the complex setting, and it is shown that the corresponding real solution, depending on the power nonlinearity, is always positive or changes its sign. The transverse instability results are then determined by applying the main result given in [@RoussetTzvetkov] for the periodic case. title: Transverse instability of periodic standing waves for the generalized nonlinear Schrödinger equation --- **Gabriel E. Bittencourt Moraes$^1$ Fábio Natali$^2$** Departamento de Matemática - Universidade Estadual de Maringá\ Avenida Colombo, 5790, CEP 87020-900, Maringá, PR, Brazil.\ $^1$pg54546\@uem.br\ $^2$fmanatali\@uem.br # Introduction Consider the nonlinear Schrödinger equation (NLS) $$\label{NLS-equation} i u_t + \Delta u + |u|^\alpha u = 0,$$ where $u:\mathbb{T}_L \times \mathbb{R}\times \mathbb{R}\rightarrow\mathbb{C}$, $\alpha>0$ and $\mathbb{T}_L$ is called the $L$-torus. In our paper, functions defined in the set $\mathbb{T}_L$ satisfy $h(x+L)=h(x)$ for all $x\in\mathbb{R}$. Transverse instability for the nonlinear Schrödinger equation has been studied by several authors. Indeed, in [@RoussetTzvetkov2008] and [@RoussetTzvetkov2009] the authors proved the transverse instability for the cubic nonlinear Schrödinger equation on $\mathbb{R}^2$ and $\mathbb{R} \times \mathbb{T}_L$, respectively. In [@Yamazaki-System], the author used ideas similar to those in [@RoussetTzvetkov2009] to study the instability of standing waves of a system of nonlinear Schrödinger equations posed on $\mathbb{R} \times \mathbb{T}_L$ for $L>0$ in a convenient open interval. 
Moreover, in [@Yamazaki-Potential] the author studied transverse instability results associated with the equation $$\label{yamazaki} i \partial_t u = - \Delta u + V(x) u - |u|^{p-1} u, \; \; (x,y,t) \in \mathbb{T}_L \times \mathbb{R} \times \mathbb{R},$$ where $p>1$ and $V:\mathbb{R} \rightarrow \mathbb{R}$ is a smooth potential. The potential $V$ appearing in $(\ref{yamazaki})$ affects the transverse instability of the standing wave solution $u(x,y,t) = e^{i\omega t}\tilde{\varphi}_\omega(x,y)$ in the following sense: there exist two critical values $\omega_{*,1} > \omega_{*,0}>0$ such that, for $\omega \in (\omega_{*,0}, \omega_{*,1})$ and $0 < L \leq (\lambda_\omega)^{-\frac{1}{2}}$, the standing wave is stable. If $L > (\lambda_\omega)^{-\frac{1}{2}}$, the standing wave is unstable. Here $-\lambda_{\omega}<0$ indicates the first negative eigenvalue of a convenient linear operator. In [@Yamazaki-NullPotential] the same author studied the transversal stability of standing wave solutions for the equation [\[yamazaki\]](#yamazaki){reference-type="eqref" reference="yamazaki"} with null potential $V=0$. Concerning the case $\mathbb{T}_{L_1}\times \mathbb{T}_{L_2}$, the authors in [@HakkaevStanislavovaStefanov] studied the transverse instability of standing waves of the form $u(x,y,t)=e^{i\omega t}\tilde{\varphi}(x)$ for the nonlinear Schrödinger equation $(\ref{NLS-equation})$ in the case where $\alpha=1$ (quadratic nonlinearity) and $\alpha=2$ (cubic nonlinearity). In both specific cases, it is well known that explicit periodic solutions $\tilde{\varphi}$ depending on the Jacobi elliptic functions are solutions of the nonlinear equation $$\label{eqhakka} -\tilde{\varphi}''+\omega\tilde{\varphi}-|\tilde{\varphi}|^{\alpha}\tilde{\varphi}=0,$$ with $\alpha=1$ or $\alpha=2$. 
They obtained the existence of a large period $L_{2}$ depending on the periodic solution $\tilde{\varphi}$ such that the standing wave is transversally unstable with respect to perturbations of the same period. The fact that solutions to $(\ref{eqhakka})$ are given explicitly in the cases of $\alpha=1$ and $\alpha=2$ is crucial to obtain the transversal instability result. We shall describe our results. In equation [\[NLS-equation\]](#NLS-equation){reference-type="eqref" reference="NLS-equation"}, we can consider periodic standing waves $u(x,y,t) = e^{i\omega t} \varphi(x)$, where $\omega>0$. Substituting this form into [\[NLS-equation\]](#NLS-equation){reference-type="eqref" reference="NLS-equation"}, we obtain $$\label{edo} -\varphi'' + \omega \varphi - |\varphi|^\alpha \varphi = 0,$$ where $\varphi$ is a real function with minimal period $L>0$. Now, we consider the perturbation of $u(x,y,t)$ associated with the equation [\[NLS-equation\]](#NLS-equation){reference-type="eqref" reference="NLS-equation"} given by $$\label{perturbation} u(x,y,t) = e^{i \omega t} \left( \varphi(x) + v(x,y,t)\right).$$ Substituting [\[perturbation\]](#perturbation){reference-type="eqref" reference="perturbation"} into [\[NLS-equation\]](#NLS-equation){reference-type="eqref" reference="NLS-equation"}, using [\[edo\]](#edo){reference-type="eqref" reference="edo"} and neglecting the nonlinear terms in $v$, we have that $$\label{NLS-equation1} i v_t - \omega v + v_{xx} + v_{yy} + |\varphi|^\alpha v + \alpha |\varphi|^\alpha {\rm Re}(v) = 0.$$ Next, let us consider the separation of variables of the form $$\label{u-V} v(x,y,t) = e^{\lambda t} \cos(\kappa y) {\rm v}(x).$$ Substituting [\[u-V\]](#u-V){reference-type="eqref" reference="u-V"} into [\[NLS-equation1\]](#NLS-equation1){reference-type="eqref" reference="NLS-equation1"}, we have $$\label{eq-32} i \lambda {\rm v}(x) - \omega {\rm v}(x) + {\rm v}''(x) - \kappa^2 {\rm v}(x) + |\varphi|^\alpha {\rm v}(x) + \alpha |\varphi|^\alpha {\rm 
Re}({\rm v}(x)) = 0.$$ On the other hand, by considering ${\rm v}(x) = v_1(x) + i v_2(x)$, we obtain by $(\ref{eq-32})$ the following spectral problem $$\left( \begin{array}{cc} 0 & \mathcal{L}_2 + \kappa^2 \\ - (\mathcal{L}_1 + \kappa^2) & 0 \\ \end{array} \right) \left( \begin{array}{c} v_1 \\ v_2 \end{array}\right)=\lambda \left( \begin{array}{c} v_1 \\ v_2 \end{array} \right),$$ where the operators $\mathcal{L}_1, \mathcal{L}_2: H_{per}^2\subset L^2_{per} \rightarrow L^2_{per}$ are typical Hill operators defined as $$\label{L1-L2} \mathcal{L}_1 := -\partial_x^2 + \omega - (\alpha+1)|\varphi|^\alpha \; \; \text{ and } \; \; \mathcal{L}_2:= -\partial_x^2 + \omega - |\varphi|^\alpha.$$ Let us consider $J = \left( \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right)$ and $\mathbb{L}_{per}^2 = L^2_{per} \times L^2_{per}$. We have that $J^{-1} = J^* = -J$ and for $\left( \begin{array}{c} v_1 \\ v_2 \end{array} \right) = J \left( \begin{array}{c} w_1 \\ w_2 \end{array} \right)$, we get $$\label{Sk} \underbrace{ J^{-1} \left( \begin{array}{cc} \mathcal{L}_1 + \kappa^2 & 0 \\ 0 & \mathcal{L}_2 + \kappa^2 \end{array} \right) J}_{:=\mathcal{S}(\kappa)} \left( \begin{array}{c} w_1 \\ w_2 \end{array} \right)=\lambda J \left( \begin{array}{c} w_1 \\ w_2 \end{array} \right),$$ that is, for ${\rm w} = \left( \begin{array}{c} w_1 \\ w_2 \end{array} \right)\equiv ( w_1, w_2)$, we obtain the following spectral problem $$\label{L(k)A(k)} \mathcal{S}(\kappa) {\rm w} = \lambda J {\rm w}.$$ **Definition 1**. *The periodic wave $\varphi$ is said to be transversally stable if $\sigma(\mathcal{S}(\kappa)) \subset i \mathbb{R}$ for all $\kappa > 0$. 
Otherwise, if $\sigma(\mathcal{S}(\kappa))$ contains a point $\lambda\in\mathbb{C}$ with ${\rm Re}(\lambda) > 0$ and there exists ${\rm w}\neq0$ such that $(\ref{L(k)A(k)})$ is satisfied for some $\kappa>0$, the periodic wave $\varphi$ is said to be transversally unstable.* From $(\ref{L(k)A(k)})$ and Definition $\ref{defi}$, we can see that the problem of transverse stability reduces, in fact, to a spectral stability problem. To obtain that the solution $\varphi$ is transversally unstable, we are going to use the main result given in [@RoussetTzvetkov], whose main assumptions are: - **(H0)** The linear operator $\mathcal{S}(\kappa)$ defined in $\mathbb{L}_{per}^2$ with domain in $\mathbb{H}_{per}^2$ is self-adjoint for all $\kappa\in\mathbb{R}$. - **(H1)** There exist $K>0$ and $\beta > 0$ such that $\mathcal{S}(\kappa) \geq \beta {\rm Id}$ for $|\kappa| \geq K$; - **(H2)** The essential spectrum $\sigma_{ess}(\mathcal{S}(\kappa))$ is included in $[c_\kappa,+\infty)$ with $c_\kappa > 0$ for $\kappa \neq 0$; - **(H3)** For every $\kappa_1 \geq \kappa_2 \geq 0$, we have $\mathcal{S}(\kappa_1) \geq \mathcal{S}(\kappa_2)$. In addition, if for some $\kappa > 0$ and ${\rm w} \neq 0$, we have $\mathcal{S}(\kappa) {\rm w} = 0$, then $(\mathcal{S}'(\kappa) {\rm w},{\rm w}) > 0$ (with $\mathcal{S}'(\kappa)$ the derivative of $\mathcal{S}$ with respect to $\kappa$); - **(H4)** The spectrum $\sigma(\mathcal{S}(0))$ of $\mathcal{S}(0)$ is of the form $\{-\lambda_0\} \cup I$, where $-\lambda_0 < 0$ is an isolated simple eigenvalue and $I$ is included in $[0,+\infty)$. Assumptions **(H0)**-**(H4)** imply that the wave $\varphi$ is transversally unstable according to Definition $\ref{defi}$. In order to verify assumptions **(H0)**-**(H4)**, we first need to show the existence of $L$-periodic solutions $\varphi$ for the equation [\[edo\]](#edo){reference-type="eqref" reference="edo"}. 
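For orientation, in the cubic case $\alpha = 2$ explicit periodic solutions are classical: the dnoidal profile $\varphi(x) = \sqrt{2}\,b\,\mathrm{dn}(bx, k)$ with $\omega = b^2(2 - k^2)$ solves the profile equation. The sketch below is only a numerical sanity check of this well-known parameterization; the values of $b$ and $m = k^2$ are arbitrary sample choices:

```python
import numpy as np
from scipy.special import ellipj

# Check numerically that the dnoidal profile solves
#   -phi'' + omega*phi - phi^3 = 0   (cubic case alpha = 2).
# Parameters b, m are arbitrary sample values; m = k^2 in SciPy's convention.
b, m = 1.0, 0.5
omega = b ** 2 * (2.0 - m)  # frequency forced by the profile equation

x = np.linspace(-3.0, 3.0, 20001)
h = x[1] - x[0]
_, _, dn, _ = ellipj(b * x, m)   # Jacobi dn(b x, k)
phi = np.sqrt(2.0) * b * dn

# Central second difference and residual of the profile equation
phi_xx = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / h ** 2
residual = -phi_xx + omega * phi[1:-1] - phi[1:-1] ** 3
print(np.max(np.abs(residual)))  # small: finite-difference error only
```

The residual vanishes up to discretization error, consistent with the Jacobi identity $\mathrm{dn}'' = (2-m)\,\mathrm{dn} - 2\,\mathrm{dn}^3$; a similar explicit family in terms of squared Jacobi functions is available for $\alpha = 1$.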
For even periodic solutions, we need to solve the following constrained minimization problem $$\label{minP1} \inf \left\{ \mathcal{B}_\omega(u) := \frac{1}{2} \int_0^L |u_x|^2 + \omega |u|^2 dx : u \in \mathbb{H}^1_{per,even} \text{ and } \int_0^L |u|^{\alpha+2} dx = \tau \right\},$$ where $\tau > 0$ is a fixed number and $\mathbb{H}^1_{per,even}$ denotes the space $\mathbb{H}^1_{per} = H^1_{per} \times H^1_{per}$ restricted to the even periodic functions. Furthermore, for $\alpha>0$ an even integer, we can show the existence of odd periodic solutions $\varphi$ satisfying equation [\[edo\]](#edo){reference-type="eqref" reference="edo"} by solving the constrained minimization problem $$\label{minP2} \inf \left\{ \mathcal{B}_\omega(u) := \frac{1}{2} \int_0^L |u_x|^2 + \omega |u|^2 dx : u \in \mathbb{H}^1_{per,odd} \text{ and } \int_0^L |u|^{\alpha+2} dx = \upsilon \right\},$$ where $\upsilon > 0$ is a fixed number and $\mathbb{H}^1_{per,odd}$ denotes the space $\mathbb{H}^1_{per}$ restricted to the odd periodic functions. The second requirement in our paper to obtain the transversal instability concerns a suitable spectral analysis of the operators $\mathcal{S}(\kappa)$ for all $\kappa\geq0$. Tools related to Floquet theory for Hill operators of the form $\mathcal{P} = -\partial_x^2 + Q(x)$ can be used to obtain the requirements outlined in **(H0)-(H4)**. Here, $Q(x)$ represents a smooth, real, and even periodic potential. Our main results are the following. **Theorem 1**. *Let $L>0$ and $\alpha > 0$ be fixed. Let $\varphi$ be an even positive periodic solution for the equation [\[edo\]](#edo){reference-type="eqref" reference="edo"} obtained from the minimization problem $(\ref{minP1})$. There exist $\lambda > 0$, $\kappa >0$ and ${\rm w} \in \mathbb{H}^2_{per}$ such that the spectral problem [\[L(k)A(k)\]](#L(k)A(k)){reference-type="eqref" reference="L(k)A(k)"} is verified. 
In other words, the positive even periodic standing wave solution $\varphi$ for the equation [\[edo\]](#edo){reference-type="eqref" reference="edo"} is transversally unstable in the sense of [Definition 1](#defi){reference-type="ref" reference="defi"}.* **Theorem 2**. *Let $L>0$ be fixed and consider $\alpha > 0$ a fixed even integer. Let $\varphi$ be a periodic solution that changes its sign for the equation [\[edo\]](#edo){reference-type="eqref" reference="edo"} obtained from the minimization problem $(\ref{minP2})$. There exist $\lambda > 0$, $\kappa > 0$ and ${\rm w} \in \mathbb{H}^2_{per}$ such that the spectral problem [\[L(k)A(k)\]](#L(k)A(k)){reference-type="eqref" reference="L(k)A(k)"} is verified. In other words, the odd periodic standing wave solution $\varphi$ for the equation [\[edo\]](#edo){reference-type="eqref" reference="edo"} is transversally unstable in the sense of [Definition 1](#defi){reference-type="ref" reference="defi"}.* Our paper is organized as follows: in Section [2](#section-existence){reference-type="ref" reference="section-existence"}, we establish the existence of $L$-periodic solutions $\varphi$ that solve [\[edo\]](#edo){reference-type="eqref" reference="edo"}. The spectral analysis for the operator $\mathcal{S}(0)$ is obtained in Section [3](#section-spectral){reference-type="ref" reference="section-spectral"}. Finally, the transverse instability of periodic standing waves $\varphi$ is shown in Section [4](#section-transverseinstability){reference-type="ref" reference="section-transverseinstability"}.\ **Notation.** For $s\geq0$ and $L>0$, the Sobolev space $H^s_{per}:=H^s_{per}(\mathbb{T}_L)$ consists of all real periodic distributions $f$ such that $$\|f\|^2_{H^s_{per}}:= L \sum_{k=-\infty}^{\infty}(1+k^2)^s|\hat{f}(k)|^2 <\infty$$ where $\hat{f}$ is the periodic Fourier transform of $f$. The space $H^s_{per}$ is a Hilbert space with the inner product denoted by $(\cdot, \cdot)_{H^s}$. 
When $s=0$, the space $H^s_{per}$ is isometrically isomorphic to the space $L_{per}^2$ and will be denoted by $L^2_{per}:=H^0_{per}$ (see, e.g., [@Iorio]). The norm and inner product in $L^2_{per}$ will be denoted by $\|\cdot \|_{L^2}$ and $(\cdot, \cdot)_{L^2}$. For a complex function $f=f_1+if_2\equiv(f_1,f_2)$, we denote the space $\mathbb{H}^s_{per} := H^s_{per} \times H^s_{per}$ for all $s \geq 0$. The notation $H_{per,even}^s$ indicates the subspace of $H^s_{per}$ constituted by even periodic functions, and $H_{per,odd}^s$ is the subspace of $H^s_{per}$ constituted by odd periodic functions. Let $\mathcal{A}$ be a linear operator. We denote by ${\rm n}(\mathcal{A})$ and ${\rm z}(\mathcal{A})$ the number of negative eigenvalues and the dimension of the kernel of $\mathcal{A}$, respectively. Moreover, we denote by $\mathcal{A}_{even}$ ($\mathcal{A}_{odd}$) the operator $\mathcal{A}$ restricted to the even (odd) sector of its domain. Given $z=\xi+i\varsigma \in \mathbb{C}$, we denote $|z|=\sqrt{\xi^2+\varsigma^2}.$ # Existence of periodic waves {#section-existence} In this section, we prove the existence of periodic wave solutions $\varphi$ for the equation [\[edo\]](#edo){reference-type="eqref" reference="edo"}. First, we obtain an even positive solution $\varphi$. Second, we prove the existence of an odd periodic solution $\varphi$ for the equation [\[edo\]](#edo){reference-type="eqref" reference="edo"}. ## Even positive solutions Let $L>0$ and $\alpha > 0$ be fixed. To obtain an even periodic wave that solves [\[edo\]](#edo){reference-type="eqref" reference="edo"}, we use a variational approach in order to minimize a suitable constrained functional.
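As a numerical aside before carrying out this program (a sketch with assumed parameters $L=2\pi$, $\omega=1$, $\alpha=2$, $\tau=1$, and a real-valued test family, not the method used in the proof), one can already see that the constrained minimizer is not expected to be the constant state: rescaling the family $u_b=c_b(1+b\cos x)$ onto the constraint set and evaluating $\mathcal{B}_\omega$ yields a strictly smaller value at a nonconstant profile.

```python
import numpy as np

# Numerical sketch (assumed parameters, not part of the proof): evaluate B_w over
# the test family u_b = c_b * (1 + b*cos x) on [0, 2*pi], with w = 1, alpha = 2,
# and c_b chosen so that int |u_b|^4 dx = tau = 1.
L, w, tau, N = 2 * np.pi, 1.0, 1.0, 2048
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N

def constrained_energy(b):
    v = 1.0 + b * np.cos(x)
    c = (tau / (dx * np.sum(v**4))) ** 0.25            # rescale onto the constraint set
    u = c * v
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)   # periodic central difference
    return 0.5 * dx * np.sum(ux**2 + w * u**2)

bs = np.linspace(0.0, 2.0, 201)
Es = np.array([constrained_energy(b) for b in bs])
print(Es[0], Es.min())   # constant profile (b = 0) vs best profile in the family
```

Within this family the minimum occurs near $b\approx 2/3$, strictly below the constant value $\sqrt{\pi/2}$, consistent with the minimizer being a genuinely nonconstant periodic wave.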
Indeed, let $\tau > 0$ be fixed and consider the set $$\mathcal{Y}_\tau := \left\{ u \in \mathbb{H}^1_{per,even};\ \int_{0}^{L} |u|^{\alpha + 2} dx = \tau \right\}.$$ For $\omega > 0$, we define the functional $\mathcal{B}_\omega: \mathbb{H}^1_{per,even} \rightarrow \mathbb{R}$ given by $$\mathcal{B}_\omega(u) := \frac{1}{2} \int_{0}^L |u_x|^2 + \omega |u|^2 dx.$$ Thus, we have the following result: **Proposition 2**. *Let $L>0$ be fixed and consider $\tau>0$ and $\omega > 0$. The minimization problem $$\label{min-problem} \Gamma_\omega:= \inf_{u \in \mathcal{Y}_\tau} \mathcal{B}_\omega(u)$$ has at least one solution, that is, there exists a complex-valued function $\Phi \in \mathcal{Y}_\tau$ such that $\mathcal{B}_\omega(\Phi) = \Gamma_\omega$. Moreover, $\Phi$ satisfies the equation $$\label{eq-min-problem} -\Phi'' + \omega \Phi - |\Phi|^\alpha \Phi = 0.$$* *Proof.* First, it is easy to see that the functional $\mathcal{B}_\omega$ induces an equivalent norm on $\mathbb{H}^1_{per,even}$, that is, there exist positive constants $c_0$ and $c_1$ such that $$0 \leq c_0 \|u\|_{\mathbb{H}^1_{per}} \leq \sqrt{2 \mathcal{B}_\omega (u)} \leq c_1 \|u\|_{\mathbb{H}^1_{per}}.$$ In addition, since $\mathcal{B}_\omega(u) \geq 0$ for all $u \in \mathbb{H}^1_{per,even}$, we have that $\Gamma_\omega \geq 0$. We may consider a minimizing sequence $(u_n)_{n \in \mathbb{N}} \subset \mathcal{Y}_\tau$, that is, a sequence satisfying $$\label{convergenceBw} \mathcal{B}_\omega (u_n) \rightarrow \Gamma_\omega.$$ From [\[convergenceBw\]](#convergenceBw){reference-type="eqref" reference="convergenceBw"} and the norm equivalence above, the sequence $(u_n)_{n \in \mathbb{N}}$ is bounded in $\mathbb{H}^1_{per,even}$.
Since $\mathbb{H}^1_{per,even}$ is a reflexive Hilbert space, there exists $\Phi \in \mathbb{H}^1_{per,even}$ such that (modulo a subsequence) we have $$u_n \rightharpoonup \Phi \text{ weakly in } \mathbb{H}^1_{per,even}.$$ Using the compact embedding $\mathbb{H}^1_{per,even} \hookrightarrow \mathbb{L}^{\alpha+2}_{per,even}$ for all $\alpha > 0$, we have that $$u_n \rightarrow \Phi \text{ in } \mathbb{L}^{\alpha+2}_{per,even}.$$ Moreover, by using the mean value theorem and Hölder's inequality, we obtain $$\bigg| \int_{0}^L |u_n|^{\alpha+2} - |\Phi|^{\alpha+2} dx \bigg| \leq \int_{0}^L \left| |u_n|^{\alpha+2} - |\Phi|^{\alpha+2} \right| dx \leq 2(\alpha+2) \left( \|u_n\|_{\mathbb{L}^{\alpha+2}_{per}}^{\alpha+1} + \|\Phi\|_{\mathbb{L}^{\alpha+2}_{per}}^{\alpha+1} \right) \|u_n - \Phi\|_{\mathbb{L}^{\alpha+2}_{per}},$$ implying that $\int_{0}^L |\Phi|^{\alpha+2} dx = \tau$, that is, $\Phi \in \mathcal{Y}_\tau$. Since $\mathcal{B}_\omega$ is weakly lower semi-continuous, $$\mathcal{B}_\omega(\Phi) \leq \liminf_{n \rightarrow \infty} \mathcal{B}_\omega(u_n),$$ that is, $\mathcal{B}_\omega(\Phi) \leq \Gamma_\omega$. On the other hand, since $\Phi \in \mathcal{Y}_\tau$, we conclude that $\mathcal{B}_\omega(\Phi) \geq \Gamma_\omega$. Therefore, $\Phi \in \mathcal{Y}_\tau$ is a minimizer of $\mathcal{B}_\omega$ over $\mathcal{Y}_\tau$, that is, $$\mathcal{B}_\omega(\Phi) = \Gamma_\omega = \inf_{u \in \mathcal{Y}_\tau} \mathcal{B}_\omega(u).$$ Next, by the Lagrange multiplier theorem, there exists a constant $c_2 = \frac{2 \mathcal{B}_\omega(\Phi)}{\tau} > 0$ such that $$\label{eqtiwhc2} - \Phi'' + \omega \Phi = c_2 |\Phi|^\alpha \Phi.$$ A scaling argument $\Psi = \sqrt[\alpha]{c_2} \Phi$ allows us to choose $c_2 = 1$ in [\[eqtiwhc2\]](#eqtiwhc2){reference-type="eqref" reference="eqtiwhc2"}.
Therefore, the complex function $\Phi$ is a periodic minimizer of the problem [\[min-problem\]](#min-problem){reference-type="eqref" reference="min-problem"} and satisfies the equation [\[eq-min-problem\]](#eq-min-problem){reference-type="eqref" reference="eq-min-problem"}. ◻ **Remark 3**. *Let $\Phi \in \mathbb{H}^1_{per,even}$ be the minimizer obtained in Proposition [Proposition 2](#minimizationproblem){reference-type="ref" reference="minimizationproblem"}. It is easy to check that the function $e^{-i \theta} \Phi$ satisfies $\mathcal{B}_\omega(e^{-i\theta} \Phi) = \Gamma_\omega$ for all $\theta\in\mathbb{R}$. Consequently, $e^{-i\theta} \Phi$ satisfies equation $(\ref{eq-min-problem})$ for all $\theta\in\mathbb{R}$.* We now show that the minimizer $\Phi$ obtained in Proposition $\ref{minimizationproblem}$ can be written in the form $\Phi=e^{i\theta_0}\varphi$, where $\theta_0\in\mathbb{R}$ and $\varphi$ is a real even periodic function. Indeed, since $\Phi = \phi_1 + i\phi_2$ satisfies the equation [\[eq-min-problem\]](#eq-min-problem){reference-type="eqref" reference="eq-min-problem"}, we have that $$\label{1} -\phi_1 '' + \omega \phi_1 - \left( \phi_1^2 + \phi_2^2 \right)^{\frac{\alpha}{2}} \phi_1 = 0,$$ and $$\label{2} -\phi_2 '' + \omega \phi_2 - \left( \phi_1^2 + \phi_2^2 \right)^{\frac{\alpha}{2}} \phi_2 = 0.$$ Multiplying the equations [\[1\]](#1){reference-type="eqref" reference="1"} and [\[2\]](#2){reference-type="eqref" reference="2"} by $\phi_2$ and $\phi_1$, respectively, and subtracting the results, we obtain $$-\phi_1 '' \phi_2 + \phi_2'' \phi_1 = 0.$$ Integrating once gives $$-\phi_1 ' \phi_2 + \phi_2 ' \phi_1 = \tilde{c},$$ where $\tilde{c}$ is an integration constant. Since $\phi_1$ and $\phi_2$ are even functions, their derivatives vanish at the origin, so $\tilde{c} = 0$, that is, $-\phi_1' \phi_2 + \phi_2' \phi_1 = 0$, implying that $\phi_1 = r \phi_2$ for some $r \in \mathbb{R}$.
Thus, $\Phi = (r+i)\phi_2 = e^{i \theta_0} \sqrt{1+r^2}\phi_2$, where $\theta_0$ is the principal argument of the complex number $r+i$. Therefore, if $\varphi = \sqrt{1+r^2} \phi_2$, we conclude that the minimizer $\Phi$ can be rewritten in the form $\Phi = e^{i \theta_0} \varphi$ for some $\theta_0 \in \mathbb{R}$. The next step is to show that $\varphi$ is positive. In fact, notice that the operator $\mathcal{L}$ can be obtained by defining the functional $G(u) = E(u) + \omega F(u)$, where $E$ and $F$ are conserved quantities associated with the equation $(\ref{NLS-equation})$ given by $$E(u) = \frac{1}{2} \int_0^L |u_x|^2 - \frac{2}{\alpha + 2} |u|^{\alpha+2} dx \; \; \text{ and } \; \; F(u) = \frac{1}{2} \int_0^L |u|^2 dx.$$ Then we have $G'(\varphi,0) = 0$, that is, $(\varphi,0)$ is a critical point of $G$. In addition, we have that $G''(\varphi,0) = \mathcal{L}$. By Proposition [Proposition 2](#minimizationproblem){reference-type="ref" reference="minimizationproblem"} and the min-max principle (see [@ReedSimon Theorem XIII.2]), we obtain ${\rm n}(\mathcal{L}_{even}) \leq 1$. Moreover, using the equation [\[edo\]](#edo){reference-type="eqref" reference="edo"}, we get $$(\mathcal{L}_{1,even} \varphi, \varphi)_{L^2_{per}} = (\mathcal{L}_1 \varphi, \varphi)_{L^2_{per}} = - \alpha \int_0^L |\varphi|^{\alpha+2} dx = - \alpha \tau < 0,$$ that is, ${\rm n}(\mathcal{L}_{1,even}) \geq 1$, and we conclude that in fact ${\rm n}(\mathcal{L}_{even}) = 1$. Moreover, since $\mathcal{L}$ has a diagonal form, we automatically obtain $$\label{Leven-even} {\rm n}(\mathcal{L}_{1,even}) = 1 \; \; \text{ and } \; \; {\rm n}(\mathcal{L}_{2,even}) = 0.$$ By Krein-Rutman's theorem, we deduce that $\varphi > 0$ and ${\rm z}(\mathcal{L}_{2,even}) = 1$. ## Odd solutions Let $\alpha>0$ be a fixed even integer. Using arguments similar to those of the previous subsection, we can obtain the existence of an odd solution $\varphi$ that satisfies [\[edo\]](#edo){reference-type="eqref" reference="edo"}.
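As an aside, the spectral facts for the even wave established above can be probed numerically. The sketch below is an assumption-laden stand-in: it takes $\omega=1$, $\alpha=2$ and replaces the periodic wave by the explicit line soliton $\varphi(x)=\sqrt{2}\,\operatorname{sech}(x)$ on a large period (an accurate surrogate because of the exponential decay), then checks the identity $\mathcal{L}_1\varphi=-\alpha\varphi^{\alpha+1}$ used in the quadratic-form computation, the count ${\rm n}(\mathcal{L}_1)=1$, and that $\varphi$ generates the kernel of $\mathcal{L}_2$.

```python
import numpy as np

# Numerical sketch (stated assumptions: omega = 1, alpha = 2, soliton surrogate):
# phi(x) = sqrt(2) sech(x) solves -phi'' + phi - phi^3 = 0 on the line.
Lbox, N = 50.0, 512
dx = Lbox / N
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
phi = np.sqrt(2.0) / np.cosh(x)

# periodic second-difference matrix
D2 = (np.diag(np.full(N - 1, 1.0), 1) + np.diag(np.full(N - 1, 1.0), -1)
      - 2.0 * np.eye(N)) / dx**2
D2[0, -1] = D2[-1, 0] = 1.0 / dx**2

L1 = -D2 + np.diag(1.0 - 3.0 * phi**2)   # acts on the real part of a perturbation
L2 = -D2 + np.diag(1.0 - phi**2)         # acts on the imaginary part

res_ode = L2 @ phi                       # = -phi'' + phi - phi^3, should vanish
res_id = L1 @ phi + 2.0 * phi**3         # identity L1 phi = -alpha phi^(alpha+1)

e1 = np.linalg.eigvalsh(L1)              # expect one simple negative eigenvalue
e2 = np.linalg.eigvalsh(L2)              # expect lowest eigenvalue 0 (kernel phi)
print(e1[:2], e2[:1])
```

Both residuals vanish to discretization accuracy, $\mathcal{L}_1$ shows exactly one negative eigenvalue followed by an (approximate) zero eigenvalue, and the lowest eigenvalue of $\mathcal{L}_2$ is zero, in agreement with the counts derived above.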
In fact, for $\upsilon > 0$, let us consider $$\mathcal{X}_\upsilon := \left\{ u \in \mathbb{H}^1_{per,odd};\ \int_{0}^{L} |u|^{\alpha + 2} dx = \upsilon \right\}.$$ Define the functional $\mathcal{B}_\omega: \mathbb{H}^1_{per,odd} \rightarrow \mathbb{R}$ given by $$\mathcal{B}_\omega(u) := \frac{1}{2} \int_{0}^L |u_x|^2 + \omega |u|^2 dx,$$ where $\omega>0$. We have the following proposition concerning the existence of odd periodic standing wave solutions for the equation $(\ref{NLS-equation})$. The proof follows from arguments similar to those in the proof of Proposition $\ref{minimizationproblem}$. **Proposition 4**. *Let $L>0$ be fixed and consider $\upsilon>0$ and $\omega > 0$. If $\alpha > 0$ is an even integer, the minimization problem $$\label{min-problem-odd} \Omega_\omega:= \inf_{u \in \mathcal{X}_\upsilon} \mathcal{B}_\omega(u)$$ has at least one solution, that is, there exists a complex-valued function $\Psi \in \mathcal{X}_\upsilon$ such that $\mathcal{B}_\omega(\Psi) = \Omega_\omega$. Moreover, $\Psi$ satisfies the equation $$\label{eq-min-problem-odd} -\Psi'' + \omega \Psi - |\Psi|^\alpha \Psi = 0.$$* $\blacksquare$ **Remark 5**. *The arguments used at the end of the previous subsection can be repeated in this case to prove the existence of $\theta_1\in\mathbb{R}$ such that $\Psi = e^{i \theta_1} \varphi$, where $\varphi$ is an odd real solution that satisfies equation [\[edo\]](#edo){reference-type="eqref" reference="edo"}.* # Spectral Analysis {#section-spectral} From the definition of $\mathcal{S}(\kappa)$ in $(\ref{Sk})$ and the fact that $$\label{operator-Lcal} \mathcal{L}:= \left( \begin{array}{cc} \mathcal{L}_1 & 0 \\ 0 & \mathcal{L}_2 \end{array} \right)$$ is a diagonal operator, we immediately obtain ${\rm n}(\mathcal{L})={\rm n}(\mathcal{S}(0))$ and ${\rm z}(\mathcal{L})={\rm z}(\mathcal{S}(0))$. ## Spectral analysis for even positive periodic solutions Let $L>0$ and $\alpha > 0$ be fixed.
Consider $\varphi$, the even positive solution of [\[edo\]](#edo){reference-type="eqref" reference="edo"} obtained in Proposition [Proposition 2](#minimizationproblem){reference-type="ref" reference="minimizationproblem"}. By [\[Leven-even\]](#Leven-even){reference-type="eqref" reference="Leven-even"} we obtain that ${\rm n}(\mathcal{L}_{1,even}) = 1$ and ${\rm n}(\mathcal{L}_{2,even}) = 0$. Since $\mathcal{L}_{1,odd}\varphi' = 0$, we can conclude that $\lambda = 0$ is the first eigenvalue of the operator $\mathcal{L}_{1,odd}$, and thus ${\rm n}(\mathcal{L}_{1,odd}) = {\rm n}(\mathcal{L}_{2,odd}) = 0$. Therefore, we have $${\rm n}(\mathcal{L}) = {\rm n}(\mathcal{L}_{even}) + {\rm n}(\mathcal{L}_{odd}) = 1.$$ On the other hand, since $\varphi$ is positive, we can use [@AlvesNatali Lemma 3.7] to deduce ${\rm z}(\mathcal{L}_1) = 1$. Now, we have that $0$ is a simple eigenvalue associated with the linear operator $\mathcal{L}_2$, and thus $${\rm Ker}(\mathcal{L}) = \left[ (\varphi',0), (0,\varphi) \right].$$ Summarizing the above, we have the following result: **Proposition 6**. *Let $L>0$ and $\alpha > 0$ be fixed. Consider $\varphi$ as the positive solution of [\[edo\]](#edo){reference-type="eqref" reference="edo"} obtained in Proposition [Proposition 2](#minimizationproblem){reference-type="ref" reference="minimizationproblem"}. The operator $\mathcal{L}$ defined in [\[operator-Lcal\]](#operator-Lcal){reference-type="eqref" reference="operator-Lcal"} has exactly one negative eigenvalue, which is simple, and zero is a double eigenvalue with eigenfunctions $(\varphi',0)$ and $(0,\varphi)$. Moreover, the remainder of the spectrum consists of a discrete set of eigenvalues.* $\blacksquare$ ## Spectral analysis for odd periodic solutions Let $L>0$ be fixed and consider $\alpha > 0$ an even integer.
Let $\varphi$ be the odd solution of [\[edo\]](#edo){reference-type="eqref" reference="edo"} obtained in Proposition [Proposition 4](#minimizationproblem-odd){reference-type="ref" reference="minimizationproblem-odd"}. The arguments in [@AlvesNatali Lemma 3.6] enable us to conclude that ${\rm n}(\mathcal{L}_1) = 2$; however, according to the assumption **(H4)**, the linear operator $\mathcal{S}(0)$ cannot have more than one negative eigenvalue. To overcome this problem, and since ${\rm n}(\mathcal{L})={\rm n}(\mathcal{S}(0))$, we shall consider the restriction $\mathcal{L}_{odd}$ of the operator $\mathcal{L}$. First, we see from Krein-Rutman's theorem that the first eigenvalue of $\mathcal{L}_1$ is simple and is associated with an even positive eigenfunction. Since ${\rm n}(\mathcal{L}_{1}) = {\rm n}(\mathcal{L}_{1,odd}) + {\rm n}(\mathcal{L}_{1,even})$ and $1\leq{\rm n}(\mathcal{L}_{1,even}) \leq 2$, we obtain that ${\rm n}(\mathcal{L}_{1,odd}) \leq 1$. On the other hand, we have $$( \mathcal{L}_{1,odd} \varphi,\varphi)_{L^2_{per,odd}} = (\mathcal{L}_1 \varphi,\varphi)_{L^2_{per}} = - \alpha \int_0^L |\varphi|^{\alpha+2} dx = - \alpha \upsilon < 0,$$ so that ${\rm n}(\mathcal{L}_{1,odd}) = 1$. Let $\lambda_0^{(i)}$ and $\lambda_1^{(i)}$ be the first two simple eigenvalues associated with the linearized operator $\mathcal{L}_{i,odd}$, $i=1,2$. Since $\mathcal{L}_{1,odd} < \mathcal{L}_{2,odd}$, we can use the standard comparison theorem (see [@Eastham Theorem 2.2.2]) to obtain $$\lambda_0^{(1)} < \lambda_0^{(2)} \; \; \; \text{ and } \; \; \; \lambda_1^{(1)} < \lambda_1^{(2)}.$$ The fact that ${\rm n}(\mathcal{L}_{1,odd}) = 1$ implies automatically that $\lambda_0^{(1)} < 0$ and $\lambda_1^{(1)} = 0$. Thus, $\lambda_1^{(2)} > 0$ and, since $\mathcal{L}_2 \varphi = 0$, we obtain $\lambda_0^{(2)} =0$, so that ${\rm n}(\mathcal{L}_{2,odd})=0$.
Therefore, since ${\rm n}(\mathcal{L}_{1,odd}) = 1$ and ${\rm n}(\mathcal{L}_{2,odd}) = 0$, we conclude that ${\rm n}(\mathcal{L}_{odd})=1$, and we have the following result: **Proposition 7**. *Let $L>0$ be fixed and consider $\alpha > 0$ an even integer. If $\varphi$ is the odd solution of the equation [\[edo\]](#edo){reference-type="eqref" reference="edo"} obtained in Proposition [Proposition 4](#minimizationproblem-odd){reference-type="ref" reference="minimizationproblem-odd"}, then the restricted operator $\mathcal{L}_{odd}$ has exactly one negative eigenvalue, which is simple, and zero is a simple eigenvalue with eigenfunction $(0,\varphi)$. Moreover, the remainder of the spectrum consists of a discrete set of eigenvalues.* $\blacksquare$ # Transverse instability {#section-transverseinstability} Here, we prove our main results about transverse instability of periodic standing wave solutions for the NLS equation [\[NLS-equation\]](#NLS-equation){reference-type="eqref" reference="NLS-equation"}. To do so, we need to check that all assumptions $\textbf{(H0)}-\textbf{(H4)}$ are verified. Initially, we prove the following result concerning the coercivity of the operator $\mathcal{S}(\kappa)$ for $|\kappa|$ large enough. Since the linear operator $\mathcal{S}(\kappa)$ depends on $\kappa$ only through the square term $\kappa^2$, we may consider only $\kappa\geq0$ by symmetry. **Lemma 8**. *Let $L>0$ and $\alpha > 0$ be fixed. Consider the wave solution $\varphi$ of the equation [\[edo\]](#edo){reference-type="eqref" reference="edo"} given by Proposition $\ref{minimizationproblem}$ or $\ref{minimizationproblem-odd}$. There exist $K>0$ and $\beta > 0$ such that $\mathcal{S}(\kappa) \geq \beta {\rm Id}$ for $\kappa \geq K.$* *Proof.* Consider the periodic wave solution $\varphi$ of the equation [\[edo\]](#edo){reference-type="eqref" reference="edo"} given by Proposition $\ref{minimizationproblem}$ or $\ref{minimizationproblem-odd}$.
In addition, let $-\lambda_0$ be the first eigenvalue of the operator $\mathcal{L}$ defined in [\[operator-Lcal\]](#operator-Lcal){reference-type="eqref" reference="operator-Lcal"}. We obtain by the min-max theorem $$\begin{aligned} \left( \mathcal{S}(\kappa)(f,g), (f,g) \right)_{\mathbb{L}^2_{per}} & = \left( (\mathcal{L}_2 + \kappa^2) f, f\right)_{L^2_{per}} + \left( (\mathcal{L}_1 + \kappa^2) g,g \right)_{L^2_{per}} \\ & = (\mathcal{L}_2 f, f)_{L^2_{per}} + (\mathcal{L}_1 g,g )_{L^2_{per}} + \kappa^2 (f,f)_{L^2_{per}} + \kappa^2 (g,g)_{L^2_{per}} \\ & = \left(\mathcal{L} (g,f), (g,f) \right)_{\mathbb{L}^2_{per}} + \kappa^2 \left( (f,g), (f,g) \right)_{\mathbb{L}^2_{per}} \\ & \geq - \lambda_0 \|(f,g)\|_{\mathbb{L}^2_{per}}^2 + \kappa^2 \|(f,g)\|_{\mathbb{L}^2_{per}}^2 \\ & = (\kappa^2 - \lambda_0) \|(f,g)\|_{\mathbb{L}^2_{per}}^2. \end{aligned}$$ Choosing $K > \sqrt{\lambda_0}$ and $\beta = K^2 - \lambda_0 > 0$, we obtain $\mathcal{S}(\kappa) \geq \beta\, {\rm Id}$ for all $\kappa \geq K$. ◻ The next result establishes the monotonicity of the operator $\mathcal{S}(\kappa)$ with respect to $\kappa$, together with the positivity of the derivative of $\mathcal{S}(\kappa)$ with respect to $\kappa.$ **Lemma 9**. *For every $\kappa_1 \geq \kappa_2 \geq 0$, we have $\mathcal{S}(\kappa_1) \geq \mathcal{S}(\kappa_2)$. In addition, if $\mathcal{S}'(\kappa)$ denotes the derivative of $\mathcal{S}(\kappa)$ with respect to $\kappa$, we have that $(\mathcal{S}'(\kappa) {\rm w},{\rm w}) > 0$ for all nonzero ${\rm w} \in \mathbb{H}^2_{per}$ and all $\kappa > 0$.* *Proof.* For $\kappa_1 \geq \kappa_2 \geq 0$, we see that $$\left( ( \mathcal{S}(\kappa_1) - \mathcal{S}(\kappa_2) ) (f,g), (f,g) \right)_{\mathbb{L}^2_{per}} = (\kappa_1^2 - \kappa_2^2) \|(f,g)\|_{\mathbb{L}^2_{per}}^2 \geq 0,$$ for all $(f,g) \in \mathbb{L}^2_{per}$, that is, $\mathcal{S}(\kappa_1) \geq \mathcal{S}(\kappa_2)$ for all $\kappa_1 \geq \kappa_2 \geq 0$.
Moreover, for all $\kappa >0$ we have $$\mathcal{S}'(\kappa) = \left( \begin{array}{cc} 2 \kappa & 0 \\ 0 & 2 \kappa \end{array} \right),$$ so that $(\mathcal{S}'(\kappa) {\rm w}, {\rm w})_{\mathbb{L}^2_{per}} > 0$ for all nonzero ${\rm w} \in \mathbb{H}^2_{per}$ and all $\kappa > 0$. ◻ Lemmas [Lemma 8](#lema-H1){reference-type="ref" reference="lema-H1"} and [Lemma 9](#lema-H3){reference-type="ref" reference="lema-H3"}, together with the spectral analysis obtained in Section [3](#section-spectral){reference-type="ref" reference="section-spectral"}, yield the proofs of Theorems $\ref{main-teo-evensolution}$ and $\ref{main-teo-oddsolution}$.\ *Proof of Theorems [Theorem 1](#main-teo-evensolution){reference-type="ref" reference="main-teo-evensolution"} and [Theorem 2](#main-teo-oddsolution){reference-type="ref" reference="main-teo-oddsolution"}.* Let $L>0$ and $\alpha > 0$ be fixed. First of all, we see that condition (**H0**) is verified since $\mathcal{L}+\kappa^2$, and hence $\mathcal{S}(\kappa)$, is a self-adjoint operator in $\mathbb{L}_{per}^2$ for all $\kappa\in\mathbb{R}$. Consider then $\varphi$ as the even positive periodic solution obtained in Proposition [Proposition 2](#minimizationproblem){reference-type="ref" reference="minimizationproblem"}. From Lemma [Lemma 8](#lema-H1){reference-type="ref" reference="lema-H1"} we have that (**H1**) is satisfied. As we are working in the periodic scenario, the essential spectrum $\sigma_{ess}(\mathcal{S}(\kappa))$ is empty for all $\kappa\in\mathbb{R}$, and assumption $\textbf{(H2)}$ is established. Condition (**H3**) is obtained by Lemma [Lemma 9](#lema-H3){reference-type="ref" reference="lema-H3"}. We also have that condition (**H4**) is verified by Proposition [Proposition 6](#proposition-spectralanalysis-even){reference-type="ref" reference="proposition-spectralanalysis-even"} and the fact that ${\rm n}(\mathcal{L})={\rm n}(\mathcal{S}(0))$.
On the other hand, let $\alpha > 0$ be an even integer. If $\varphi$ is the odd periodic solution obtained in Proposition [Proposition 4](#minimizationproblem-odd){reference-type="ref" reference="minimizationproblem-odd"}, we can use the same arguments as above for positive solutions to verify assumptions (**H0**)-(**H3**). In addition, by Proposition [Proposition 7](#proposition-spectralanalysis-odd){reference-type="ref" reference="proposition-spectralanalysis-odd"}, we have ${\rm n}(\mathcal{L}_{odd})={\rm n}(\mathcal{S}(0)_{odd})=1$, as required in assumption (**H4**). The theorem is now proved. ◻ # Acknowledgments {#acknowledgments .unnumbered} F. Natali is partially supported by CNPq/Brazil (grant 303907/2021-5). G. E. Bittencourt Moraes is supported by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)/Brazil - Finance code 001. Alves, G. and Natali, F., *Monotonicity of the period map for the equation $-\varphi'' + \varphi - \varphi^k = 0$*, preprint, 2023. Eastham, M.S.P., *The spectral theory of periodic differential equations*, Scottish Academic Press, Edinburgh, 1973. Hakkaev, S., Stanislavova, M. and Stefanov, A., *Transverse instability for periodic waves of KP-I and Schrödinger equations*, Ind. Univ. Math. J., 61 (2012), 461-492. Iório Jr., R. and Iório, V., *Fourier Analysis and Partial Differential Equations*, Cambridge Stud. in Adv. Math., 2001. Reed, M. and Simon, B., *Methods of modern mathematical physics IV. Analysis of operators*, Academic Press, New York-London, 1978. Rousset, F. and Tzvetkov, N., *A simple criterion of transverse linear instability for solitary waves*, Math. Res. Lett., 17 (2010), 157-169. Rousset, F. and Tzvetkov, N., *Transverse nonlinear instability of solitary waves for some Hamiltonian PDE's*, J. Math. Pures. Appl., 90 (2008), 550-590. Rousset, F. and Tzvetkov, N., *Transverse nonlinear instability for two-dimensional dispersive models*, Ann. Inst. Poinc. Anal.
Non., 26 (2009), 477--496. Yamazaki, Y., *Transverse instability for a system of nonlinear Schrödinger equations*, Disc. Contin. Dyn. Syst. Series B, 19 (2014), 565--588. Yamazaki, Y., *Stability of line standing waves near the bifurcation point for nonlinear Schrödinger equations*, Kodai Math. J., 38 (2015), 65-96. Yamazaki, Y., *Transverse instability for nonlinear Schrödinger equation with a linear potential*, Adv. Diff. Equat., 21 (2016), 429-462.
--- abstract: |
  Considering the damped wave equation with a Gaussian noise $F$ where $F$ is white in time and has a covariance function depending on spatial variables, we will see that this equation has a mild solution which is stationary in time $t$. We define a weakly self-avoiding polymer with intrinsic length $J$ associated with this SPDE. Our main result is that the polymer has an effective radius of approximately $J^{5/3}$. author: - Yuanyuan Pan bibliography: - bibdatabase.bib title: The damped wave equation and associated polymer --- # Introduction Polymers are studied intensively in many fields. There are many works studying different aspects of polymers. For example, *Random polymer models* deals with equilibrium statistical mechanics of a class of polymers [@Gia07]. *Random polymers* focuses on the interface between probability theory and equilibrium statistical physics [@dHo07]. *The theory of polymer dynamics* concentrates on the dynamics of polymers in the liquid state [@DoEd86]. Our work is inspired by that of Mueller and Neuman [@MN22]. Their work studied the radius of polymers without self-intersection and showed that, considering the heat equation with white noise, the effective radius of the polymers is approximately $J^{5/3}$. As stated in [@MN22], the simplest model for a polymer is the random walk. If self-intersection is prohibited, we are led to study the self-avoiding random walk. We are interested in finding the macroscopic extension of a polymer. Such an extension is often measured by the variance of the end-to-end distance, ${\textrm{E}}\,[\lvert S_n\rvert^2]$, where $S_n$ is the location of a polymer at $n$ units from its beginning $S_0$. One famous problem is to show that ${\textrm{E}}\,[\lvert S_n\rvert^2]\approx Cn^{2\nu}$, where $(S_n)_{n\in\mathbb{N}}$ is the simple random walk on $\mathbb{Z}^d$ with self-avoiding paths, and $\nu$ is a constant depending on $d$. 1. When $d\geq 5$, $\nu=\frac{1}{2}$.
Hara and Slade followed the idea of Brydges and Spencer [@BS85], and verified the result in [@TS91] and [@TS92]. 2. Almost nothing is known rigorously about $\nu$ in dimensions 2, 3, and 4. 1. When $d=2$, based on non-rigorous Coulomb gas methods, Nienhuis [@NienhuisBernard1982ECPa] predicted that $\nu=\frac{3}{4}$. This predicted value has been confirmed numerically by Monte Carlo simulation, e.g. [@LMS95], and exact enumeration of self-avoiding walks up to length $n = 71$ [@JensenIwan2004Eosw]. 2. For $d=3$, $\nu$ is expected to be 0.588$\cdots$. An early prediction for the values of $\nu$, referred to as the Flory values [@FloryP.J.1951Tcor], was $\nu=\frac{3}{d+2}$ for $1\leq d\leq 4$. This does give the correct answer for $d=1,2,4$, but it is not accurate when $d=3$. The Flory argument is very remote from a rigorous mathematical proof. 3. When $d=4$, $\nu$ is expected to be $\frac{1}{2}$, and there should be a logarithmic correction. Dimension four is the upper critical dimension for the self-avoiding walk. The expected number of intersections between two independent random walks tends to infinity, but only logarithmically in the length. Such considerations are related to the logarithmic corrections. Partial results for this case can be found in [@BDCGS12] and [@BSTW17] and the references therein. 3. When $d=1$, it is obvious that $\nu=1$. The case $d=1$ is the simplest, but it still presents challenging questions. For example, if we consider the weakly self-avoiding one-dimensional simple random walks $(S_n)_{n\in\mathbb{N}}$ with $S_0=0$, there is a complete answer [@GdH93] to characterize the limiting speed, $\underset{n\rightarrow\infty}{\lim}\frac{1}{n}\left({\textrm{E}}\,[S_n^2]\right)^{1/2}$. There has also been work on the continuous-time situation, see [@HdHK97]. We study the radius of polymers that satisfy the damped wave equation in one space dimension.
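For reference, the Flory prediction $\nu=3/(d+2)$ quoted above can be tabulated directly; as a small illustrative sketch:

```python
# Flory's prediction nu = 3/(d+2), tabulated for d = 1..4; it matches the known or
# expected values at d = 1, 2, 4 but not at d = 3 (where nu is expected ~ 0.588).
def flory_nu(d: int) -> float:
    return 3.0 / (d + 2)

values = {d: flory_nu(d) for d in range(1, 5)}
print(values)   # {1: 1.0, 2: 0.75, 3: 0.6, 4: 0.5}
```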
The wave equation can be used to study the propagation of mechanical waves or vibrational modes within the polymer structure. Polymers are composed of long chains of repeating molecular units, and these chains can exhibit certain vibrational modes when excited. In [@DoEd86], Chapter 4, if we consider the discrete case, we can use the Rouse model to describe the motion of internal beads of a polymer, that is $$dX_i(t)=\Delta X_i(t)dt+dB_i(t)$$[\[rousemodel\]]{#rousemodel label="rousemodel"} where $\Delta$ is the discrete Laplacian. If $F$ is a force acting on a bead along a polymer chain, then, ideally, $F=ma$ where $m$ is the mass of the bead and $a$ is the acceleration. By ([\[rousemodel\]](#rousemodel){reference-type="ref" reference="rousemodel"}), $a$ is proportional to the second difference of the position. To be specific, we work with the damped wave equation and a noise that is white in time and colored in space, which will be introduced later. The rigorous definition of white noise is discussed in many references, for example [@WalshSPDE86] and [@alma9946044013405216]. The outline of the proof of our theorem, and the proofs of some lemmas, are similar to those in [@MN22].\ ## Acknowledgement The author is very grateful to Carl Mueller for his valuable insights and advice. # Preliminary Let $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\geq0}, P)$ be a probability space where $\mathcal{F}_t$ is the filtration generated by the white noise in time $(W(s))_{s\leq t}$.
In other words, we have $\mathcal{F}_t=\sigma\{W(s):s\leq t\}$.\ Let $\mathbb{N}=\{0, 1, 2,\dots\}$ and $\big(u(t,x)\big)_{t\geq0, x\in\mathbb{R}}$ be a solution of the following wave equation with the colored noise satisfying initial conditions and the Neumann boundary condition: Let $u_0\in\mathcal{H}^1(\mathbb{R})$ and $u_1\in\mathcal{L}^2(\mathbb{R})$, $$\begin{split} \partial_{t}^{2}u(t,x)+\partial_{t}u(t,x)&=\Delta u(t,x)+\dot{F}(t,x)\\ u(0,x)=u_{0}(x), &\quad \partial_{t}u(0,x)=u_{1}(x) \quad (x, t)\in[0,J]\times\mathbb{R}_+\\ \partial_{x}u(t,0)&=\partial_{x}u(t,J)=0. \end{split}\label{main system}$$ Heuristically, the Fourier series of the noise $F$ is $$\dot{F}(t,x)=\sum_{n\in\mathbb{N}}\gamma_n\dot{W}_n(t)\varphi_n(x)$$ where $(W_n)_{n\in\mathbb{N}}$ are independent and identically distributed white noises in time, and $(\gamma_n)_{n\in\mathbb{N}}$ is a collection of real numbers such that $(\gamma_n^2)_{n\in\mathbb{N}}$ is a decreasing sequence satisfying the following conditions, $$\sum_{n\in\mathbb{N}}\gamma_n^2<+\infty \quad \text{ and }\quad \gamma^2_n\leq \dfrac{c}{n^\alpha}, \forall n\neq0,$$ where $c$ and $\alpha$ are positive constants independent of $n$.
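The structure of $F$ — white in time, colored in space — can be illustrated by simulation. The sketch below makes several assumptions not fixed by the text: $J=1$, $\gamma_n=1/(n+1)$, and the Neumann eigenfunctions $\varphi_0=1/\sqrt{J}$, $\varphi_n=\sqrt{2/J}\cos(n\pi x/J)$. It compares the empirical covariance of the increments of $F$ over one time step with the (truncated) kernel $f(x,y)=\sum_n\gamma_n^2\varphi_n(x)\varphi_n(y)$.

```python
import numpy as np

# Simulation sketch (assumed choices: J = 1, gamma_n = 1/(n+1), cosine Neumann basis):
# increments of F over a step dt are Gaussian, independent in time, and their
# spatial covariance is dt * f(x,y), f(x,y) = sum_n gamma_n^2 phi_n(x) phi_n(y).
rng = np.random.default_rng(0)
J, Ncut, dt, M = 1.0, 40, 0.01, 40000
gamma = 1.0 / (1.0 + np.arange(Ncut))    # gamma_n^2 <= c/n^2, hence summable

def phi(n, x):
    if n == 0:
        return 1.0 / np.sqrt(J)
    return np.sqrt(2.0 / J) * np.cos(n * np.pi * x / J)

x1, x2 = 0.3, 0.7
p1 = np.array([phi(n, x1) for n in range(Ncut)])
p2 = np.array([phi(n, x2) for n in range(Ncut)])

xi = rng.standard_normal((M, Ncut))      # Delta W_n / sqrt(dt), M independent steps
dF1 = np.sqrt(dt) * xi @ (gamma * p1)    # Delta F(x1)
dF2 = np.sqrt(dt) * xi @ (gamma * p2)    # Delta F(x2)
emp = np.mean(dF1 * dF2) / dt            # empirical covariance / dt
exact = np.sum(gamma**2 * p1 * p2)       # truncated kernel f(x1, x2)
print(emp, exact)
```

With $M=40000$ samples the Monte Carlo estimate agrees with the truncated kernel to within a few percent, illustrating the decomposition of $\dot F$ over the spatial eigenbasis.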
By Theorem 1.1 of [@Nu20], the fundamental solution of the damped wave equation on $\mathbb{R}$ is $$G_t^\mathbb{R}(x)=\frac{1}{2}e^{-t/2}\text{sgn}(t)\text{ }I_0\bigg(\frac{1}{2}\sqrt{t^2-x^2}\bigg)\chi_{[-|t|, |t|]}(x),$$ where $I_0$ is the modified Bessel function of the first kind with parameter 0.\ If we consider the Neumann boundary condition, the fundamental solution of system ([\[main system\]](#main system){reference-type="ref" reference="main system"}) is $$G_t(x,y)=\sum_{n\in\mathbb{Z}}G_t^\mathbb{R}(x+y-2nJ)+G_t^\mathbb{R}(x-y-(2n+1)J),\quad x, y\in[0,J].$$ Then the mild solution of ([\[main system\]](#main system){reference-type="ref" reference="main system"}) is $$\begin{split} u(t,x)= &\int_{0}^{J}\partial_{t}G_t(x,y)u_0(y)dy+\int_{0}^{J}G_t(x,y)\big(\frac{1}{2}u_0(y)+u_1(y)\big)dy\\ &+\int_{0}^{t}\int_{0}^{J}G_{t-s}(x,y)F(dyds). \end{split}$$ According to Theorem 1.3 of [@Nu20], this mild solution is the unique solution in $\mathcal{C}^1(\mathbb{R}, \mathcal{D}'(\mathbb{R}))$. By Theorem 5.3 of [@Nu20] and Young's inequality, we can show that the Fourier series of $u$ converges in $L^2\left([0,J]\times\Omega\right)$. That is, $$u(t,x)=\sum_{n\in\mathbb{N}}a_n(t)\varphi_n(x)$$ where $$a_n(t)=\begin{cases} \frac{\gamma_{n}}{w_n}\int_{0}^{t}e^{\frac{-1+w_n}{2}(t-s)}-e^{\frac{-1-w_n}{2}(t-s)}dW_n(s) & J>2\pi n \\ \gamma_{n}\int_{0}^{t}e^{-\frac{1}{2}(t-s)}(t-s)dW_n(s) & J=2\pi n \\ \frac{\gamma_{n}}{\omega_n}\int_{0}^{t}e^{-\frac{1}{2}(t-s)}\sin(\omega_n(t-s))dW_n(s) & 0<J<2\pi n. \end{cases}$$ Details of the derivation of $a_n$ are given in Appendix [5.1](#dampedwaveequationwiththenoiseF){reference-type="ref" reference="dampedwaveequationwiththenoiseF"}. Let $m(\cdot)$ be the Lebesgue measure on $\mathbb{R}$. Then we define an occupation measure and a local time as follows, $$\begin{split} L_t(A)&=m\{x\in[0,J]: u(t,x)\in A\}\\ l_t(y)&=\dfrac{L_t(dy)}{dy}.
\end{split}$$ If $P_{T,J}$ is the original probability measure of $\left(u(t,x)\right)_{t\in[0,T],x\in[0,J]}$, we define the probability measure $Q_{T,J,\beta}$ as follows: Let $E^{P_{T,J}}$ and $E^{Q_{T,J,\beta}}$ be the expectation with respect to $P_{T,J}$ and $Q_{T,J,\beta}$ respectively. We write $E$ for $E^{P_{T,J}}$. Let $$\begin{split} \mathcal{E}_{T,J,\beta}&=\exp\left(-\beta\int_{0}^{T}\int_{-\infty}^{\infty}l_t(y)^2dydt\right),\\ Z_{T,J,\beta}&=E[\mathcal{E}_{T,J,\beta}]=E^{P_{T,J}}[\mathcal{E}_{T,J,\beta}], \end{split}$$ where $\beta$ is a positive parameter. Then we define $$Q_{T,J,\beta}(A)=\dfrac{1}{Z_{T,J,\beta}}E[\mathcal{E}_{T,J,\beta}\mathbbm{1}_A].$$ For ease of notation, we will write $$P_T=P_{T,J},\quad Q_T=Q_{T,J,\beta},\quad \mathcal{E}_T=\mathcal{E}_{T,J,\beta},\quad Z_T=Z_{T,J,\beta}.$$ We define the radius of $\left(u(t,x)\right)_{t\in[0,T],x\in[0,J]}$ to be $$R(T,J)=\left[\dfrac{1}{TJ}\int_{0}^{T}\int_{0}^{J}\left(u(t,x)-\bar{u}(t)\right)^2dxdt\right]^{1/2}.$$ **Theorem 1**. *There are constants $\epsilon_0$, $K_0$ and $K_1$ not depending on $\beta$ and $J$ such that $$\underset{T\rightarrow\infty}{\lim}Q_T\left[\epsilon_0K_0J^{5/3}\leq R(T,J)\leq K_1J^{5/3}\right]=1.$$* To prove the theorem, we define $$\begin{split} A^{(1)}_{T,J}:=\left\{R(T,J)<\epsilon_0K_0J^{5/3}\right\} \quad \text{ and } \quad A^{(2)}_{T,J}:=\left\{R(T,J)>K_1J^{5/3}\right\}. \end{split}$$ It suffices to show that for $i=1, 2$, $$\underset{T\rightarrow\infty}{\lim}Q_T\left(A^{(i)}_{T,J}\right)=0.$$ # Lower bound {#lowerbound} We will show that $Q_T\left(A^{(1)}_{T,J}\right)$ approaches $0$ as $T$ goes to infinity. First, we need to find a lower bound of $Z_T$. ## Stationary solution We define a measure $\hat{P}_T^{(a)}$ that adds a drift depending on $x$ to the colored noise. We add a drift $a\varphi_1(\cdot)$ to the noise $\dot{F}$, where $a$ is a nonzero constant.
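Before proceeding, the radius $R(T,J)$ defined above can be computed concretely on a grid. The field used below is hypothetical (it is not a solution of the SPDE); it only illustrates the formula and the role of the spatial average $\bar{u}(t)$.

```python
import numpy as np

# Toy computation of R(T, J) for the hypothetical field u(t,x) = sin(t) cos(pi x / J)
# on a uniform grid; the means over t and x of sin^2 and cos^2 are both 1/2.
T, J, Nt, Nx = 2 * np.pi, 2.0, 400, 200
t = np.linspace(0.0, T, Nt, endpoint=False)
x = np.linspace(0.0, J, Nx, endpoint=False)
u = np.sin(t)[:, None] * np.cos(np.pi * x / J)[None, :]

ubar = u.mean(axis=1, keepdims=True)     # spatial average \bar{u}(t)
R = np.sqrt(np.mean((u - ubar)**2))      # discrete (1/(TJ)) * iint (u - ubar)^2 dx dt
print(R)                                 # ≈ 0.5
```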
Recall that $\varphi_1$ is one of the eigenfunctions of the Laplacian operator.\ Fixing $T>0$, by Theorem 5.1 from [@Da78], we have $$\begin{split} \frac{d\hat{P}_T^{(a)}}{dP_T}= \exp\bigg(&\int_{0}^{T}\int_{0}^{J}a\varphi_1(x)F(dxdt)\\ &-\frac{1}{2}\int_{0}^{T}\int_{0}^{J}\int_{0}^{J}a^2\varphi_1(x)\varphi_1(y)f(x,y)dxdydt\bigg) \end{split}$$ where $f(x,y)=\sum_{n\in\mathbb{N}}\gamma_n^2\varphi_n(x)\varphi_n(y)$. Let $\hat{{\textrm{E}}\,}$ be the expectation with respect to $\hat{P}_T^{(a)}$. By Jensen's inequality $$\begin{split} \log Z_T &= \log\hat{{\textrm{E}}\,}\bigg[\exp\bigg(-\beta\int_{0}^{T}\int_{-\infty}^{\infty}\ell_t(y)^2dydt-\log\frac{d\hat{P}_T^{(a)}}{dP_T}\bigg)\bigg]\\ &\geq-\beta\hat{{\textrm{E}}\,}\bigg[\int_{0}^{T}\int_{-\infty}^{\infty}\ell_t(y)^2dydt\bigg]-\hat{{\textrm{E}}\,}\bigg[\log\frac{d\hat{P}_T^{(a)}}{dP_T}\bigg]. \end{split}$$[\[logZT\]]{#logZT label="logZT"} Recall the Fourier series of $u$ is $$u(t,x)=\sum_{n\in\mathbb{N}}a_n(t)\varphi_{n}(x).$$ Let $\bar{u}(t)=\frac{1}{J}\int_{0}^{J}u(t,x)dx$. By Fubini's theorem, it is not hard to find that $$\begin{split} \bar{u}(t) &=a_0(t)\varphi_0,\label{ubaroft} \end{split}$$ where $\varphi_0$ is the constant eigenfunction. Then we have $$u(t,x)-\bar{u}(t)=\sum_{n\neq 0}a_n(t)\varphi_{n}(x).$$ Let $$U(t,x)=u(t,x)-\bar{u}(t).$$ For $n\neq 0$, we consider the stationary analogue of the process $a_n$, driven by the noise from the infinite past. We define $\tilde{a}_n$ and $\tilde{U}$ as follows: $$\tilde{a}_n(t)=\begin{cases} \frac{\gamma_{n}}{w_n}\int_{-\infty}^{t}\big(e^{\frac{-1+w_n}{2}(t-s)}-e^{\frac{-1-w_n}{2}(t-s)}\big)dW_n(s) & J>2\pi n\\ \gamma_{n}\int_{-\infty}^{t}e^{-\frac{1}{2}(t-s)}(t-s)dW_n(s) & J=2\pi n\\ \frac{\gamma_{n}}{\omega_n}\int_{-\infty}^{t}e^{-\frac{1}{2}(t-s)}\sin(\omega_n(t-s))dW_n(s) & 0<J<2\pi n, \end{cases}$$ and $\tilde{U}$ has the same relation to $\tilde{a}_n$ as $U$ has to $a_n$. That is, $\tilde{U}(t,x)=\sum_{n\in\mathbb{N}_+}\tilde{a}_n(t)\varphi_n(x)$. 
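For the oscillatory case $0<J<2\pi n$, the Itô isometry gives $\mathrm{Var}\big(\tilde{a}_n(t)\big)=\frac{\gamma_n^2}{\omega_n^2}\int_0^\infty e^{-r}\sin^2(\omega_n r)\,dr$, which does not depend on $t$; this is the (weak) stationarity of $\tilde{a}_n$ used below. A numerical sketch of the underlying integral identity (the closed form $\frac{1}{2}\big(1-\frac{1}{1+4\omega^2}\big)$ is our own elementary computation, not taken from the references):

```python
import math

def var_integral(omega, R=60.0, n=400000):
    # By the Ito isometry, Var of \int_{-infty}^t e^{-(t-s)/2} sin(omega(t-s)) dW(s)
    # equals \int_0^infty e^{-r} sin^2(omega r) dr, independent of t.
    # Midpoint rule on [0, R]; the truncated tail is of size e^{-R}.
    h = R / n
    return sum(math.exp(-(k + 0.5) * h) * math.sin(omega * (k + 0.5) * h) ** 2
               for k in range(n)) * h

def var_closed_form(omega):
    # sin^2 = (1 - cos(2 omega r))/2 and \int_0^infty e^{-r} cos(2 omega r) dr
    # = 1/(1 + 4 omega^2).
    return 0.5 * (1.0 - 1.0 / (1.0 + 4.0 * omega ** 2))

omega = 1.7  # arbitrary sample frequency
print(var_integral(omega), var_closed_form(omega))
```

The same time-independence argument applies to the other two cases of $\tilde{a}_n$.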
After adding the drift, let $\hat{a}_n$ and $\hat{U}$ be the expressions corresponding to $\tilde{a}_n$ and $\tilde{U}$ respectively. So we have $$\hat{U}(t,x)=\sum_{n\in\mathbb{N}_+}\hat{a}_n(t)\varphi_n(x).$$ Heuristically, $$\dot{F}(t,x)+a\varphi_1(x)=\gamma_1\left(\dot{W_1}(t)+\dfrac{a}{\gamma_1}\right)\varphi_1(x)+\sum_{n\neq1}\gamma_n\dot{W_n}(t)\varphi_n(x).$$ When $n\neq1$, we have $\hat{a}_n=\tilde{a}_n$. When $n=1$, $$\hat{a}_1(t)=\begin{cases} \frac{\gamma_{1}}{w_1}\int_{-\infty}^{t}\big(e^{\frac{-1+w_1}{2}(t-s)}-e^{\frac{-1-w_1}{2}(t-s)}\big)\big(dW_1(s)+\dfrac{a}{\gamma_1}ds\big) & J>2\pi \\ \gamma_{1}\int_{-\infty}^{t}e^{-\frac{1}{2}(t-s)}(t-s)\big(dW_1(s)+\dfrac{a}{\gamma_1}ds\big) & J=2\pi \\ \frac{\gamma_{1}}{\omega_1}\int_{-\infty}^{t}e^{-\frac{1}{2}(t-s)}\sin(\omega_1(t-s))\big(dW_1(s)+\dfrac{a}{\gamma_1}ds\big) & 0<J<2\pi . \end{cases}$$ It is easy to check that for each $n\in\mathbb{N}_+$, $\hat{a}_n$ is weakly stationary. Since $\{\hat{a}_n\}_{n\in\mathbb{N}_+}$ is jointly Gaussian, $\{\hat{a}_n\}_{n\in\mathbb{N}_+}$ is strictly stationary. Then $\hat{U}(t,\cdot)$ is stationary in time $t$. Let $g_{t, x_1, x_2}$ be the density function of $\tilde{U}(t, x_1)-\tilde{U}(t, x_2)$ under $P$, and $\hat{g}_{t, x_1, x_2}$ be the density function of $\hat{U}(t, x_1)-\hat{U}(t, x_2)$ under $\hat{P}^{(a)}$. Since $\hat{U}(t,\cdot)$ is stationary in $t$ and $$\hat{U}(t,x_1)-\hat{U}(t,x_2) =\sum_{n\in\mathbb{N_+}}\hat{a}_n(t)\big(\varphi_n(x_1)-\varphi_n(x_2)\big),$$ we have $$\hat{g}_{t,x_1, x_2}=\hat{g}_{0,x_1, x_2}, \quad\forall t\in\mathbb{R}.$$ To simplify our notation, we write $\hat{g}_{x_1, x_2}$ instead of $\hat{g}_{0, x_1, x_2}$, and similarly $g_{x_1, x_2}$ for $g_{0, x_1, x_2}$. Recall $$\begin{split} L_t(A)&=m\big\{x\in[0, J] : u(t,x)\in A\big\},\\ l_t(y)&=\frac{L_t(dy)}{dy}. 
\end{split}$$ By Lemma 2.5 of [@MN22], we have $$\begin{split} \hat{{\textrm{E}}\,}\bigg[\int_{-\infty}^{\infty}l_t(y)^2dy\bigg] &=\int_{0}^{J}\int_{0}^{J}\hat{g}_{t, x_1, x_2}(0)dx_2dx_1\\ &=\int_{0}^{J}\int_{0}^{J}\hat{g}_{ x_1, x_2}(0)dx_2dx_1\\ &=2\int_{0}^{J}\int_{x_1}^{J}\hat{g}_{x_1, x_2}(0)dx_2dx_1\\ &=2\int_{0}^{J}\int_{x_1}^{J}g_{x_1, x_2}\bigg(D(x_1,x_2)\bigg)dx_2dx_1\\ &=C_1\int_{0}^{J}\int_{x_1}^{J}\frac{1}{\sigma(x_1, x_2)}\exp\bigg(-\frac{D(x_1, x_2)^2}{2\sigma(x_1, x_2)^2}\bigg)dx_2dx_1, \end{split}$$ where $C_1$ is a constant, $\sigma(x_1, x_2)$ is the standard deviation of $\tilde{U}(0,x_1)-\tilde{U}(0,x_2)$, and $D(x_1,x_2)$ is the drift term, given by $D(x_1, x_2)=d\left(\varphi_1(x_1)-\varphi_1(x_2)\right)$. In Appendix [5.3](#driftterm){reference-type="ref" reference="driftterm"}, we showed that $$d=\begin{cases} \dfrac{a}{w_1}\left(\dfrac{2}{1-w_1}-\dfrac{2}{1+w_1}\right) & J>2\pi \\ 2a & J=2\pi \\ \dfrac{4a}{5\omega_1} & 0<J<2\pi . \end{cases}$$ Since $\exp\bigg(-\frac{D(x_1, x_2)^2}{2\sigma(x_1, x_2)^2}\bigg)\leq 1$ for all $0\leq x_1\leq x_2\leq J$, in order to get an upper bound of $\hat{{\textrm{E}}\,}\bigg[\int_{-\infty}^{\infty}l_t(y)^2dy\bigg]$, it is enough to find a lower bound of $\sigma$. **Lemma 1**. *$\sigma(x_1,x_2)^2$ is bounded below by $\tilde{C}\frac{\left|x_1-x_2\right|}{J^2}$ for some constant $\tilde{C}$.* The proof of this lemma is in Appendix [5.4](#lowerboundofsigmax1x2square){reference-type="ref" reference="lowerboundofsigmax1x2square"}. Now we go back to $\hat{{\textrm{E}}\,}\bigg[\int_{-\infty}^{\infty}l_t(y)^2dy\bigg]$. 
$$\begin{split} \hat{{\textrm{E}}\,}\bigg[\int_{-\infty}^{\infty}l_t(y)^2dy\bigg] &=2\int_{0}^{J}\int_{x_1}^{J}g_{x_1, x_2}\bigg(D(x_1, x_2)\bigg)dx_2dx_1\\ &\leq C_1\int_{0}^{J}\int_{x_1}^{J}\frac{1}{\sigma(x_1, x_2)}\exp\bigg(-\frac{D(x_1, x_2)^2}{2\sigma(x_1, x_2)^2}\bigg)dx_2dx_1\\ &\leq C_1\mathcal{I}_1+C_1\mathcal{I}_2, \end{split}$$ where $$\mathcal{I}_1=\int_0^J\int_{x_1}^J\frac{1}{\sigma(x_1, x_2)}\exp\bigg(-\frac{D(x_1, x_2)^2}{2\sigma(x_1, x_2)^2}\bigg)\mathbbm{1}_{\{x_1+x_2\leq J\}}dx_2dx_1$$ and $$\mathcal{I}_2=\int_0^J\int_{x_1}^J\frac{1}{\sigma(x_1, x_2)}\exp\bigg(-\frac{D(x_1, x_2)^2}{2\sigma(x_1, x_2)^2}\bigg)\mathbbm{1}_{\{x_1+x_2\geq J\}}dx_2dx_1.$$ Then $$\begin{split} \mathcal{I}_1 &\leq C_1\int_0^J\int_{x_1}^J\frac{\sqrt{J}}{\sqrt{\left|x_2-x_1\right|}}\mathbbm{1}_{\{x_1+x_2\leq J\}}dx_2dx_1\\ &\leq C_1\sqrt{J}\int_0^{J}\int_{0}^{J-x_1}\frac{1}{\sqrt{p}}dpdx_1\\ &=\dfrac{4}{3}C_1J^2\\ &=C_2J^2, \end{split}$$ where $C_2=\frac{4}{3}C_1$ and, in the second inequality, we substituted $p=x_2-x_1$. Now we work with $\mathcal{I}_2$. Since in this case $x_2\geq\min\{x_1, J-x_1\}$, we have $$\begin{split} \mathcal{I}_2 \leq C_1\int_0^J\int_{x_1}^J\frac{\sqrt{J}}{\sqrt{\left|x_2-x_1\right|}}dx_2dx_1. \end{split}$$ Substituting $p=x_2-x_1$ again, we have $$\begin{split} C_1\int_0^J\int_{x_1}^J\frac{\sqrt{J}}{\sqrt{\left|x_2-x_1\right|}}dx_2dx_1 &\leq C_1\sqrt{J}\int_0^{J}\int_{0}^{J-x_1}\frac{1}{\sqrt{p}}dpdx_1\\ &=C_2J^2. \end{split}$$ We have $$\hat{{\textrm{E}}\,}\bigg[\int_{-\infty}^{\infty}l_t(y)^2dy\bigg]\leq C_3J^2,$$ where $C_3=2C_2$ is a constant. 
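The constant $\frac{4}{3}$ above comes from the elementary computation $\int_0^{J-x_1}p^{-1/2}dp=2\sqrt{J-x_1}$ followed by $\int_0^J 2\sqrt{J-x_1}\,dx_1=\frac{4}{3}J^{3/2}$, so the extra factor $\sqrt{J}$ gives $\frac{4}{3}J^2$. A quick numerical confirmation (the sample value $J=3.5$ is arbitrary):

```python
import math

def iterated_integral(J, n=2000):
    # sqrt(J) * \int_0^J \int_0^{J-x1} p^{-1/2} dp dx1,
    # with the inner integral evaluated in closed form.
    h = J / n
    inner = lambda x1: 2.0 * math.sqrt(J - x1)  # = \int_0^{J-x1} p^{-1/2} dp
    outer = sum(inner((k + 0.5) * h) for k in range(n)) * h
    return math.sqrt(J) * outer

J = 3.5
print(iterated_integral(J), 4.0 / 3.0 * J ** 2)
```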
Since $$\hat{{\textrm{E}}\,}\bigg[\int_0^T\int_{-\infty}^{\infty}l_t(y)^2dydt\bigg]=\hat{{\textrm{E}}\,}\bigg[\int_0^T\int_{-\infty}^{\infty}l_0(y)^2dydt\bigg]=T\hat{{\textrm{E}}\,}\bigg[\int_{-\infty}^{\infty}l_0(y)^2dy\bigg],$$ going back to ([\[logZT\]](#logZT){reference-type="ref" reference="logZT"}), we have $$\log Z_T\geq -\beta T\hat{{\textrm{E}}\,}\bigg[\int_{-\infty}^{\infty}l_0(y)^2dy\bigg] -\hat{{\textrm{E}}\,}\bigg[\log\frac{d\hat{P}_T^{(a)}}{dP_T}\bigg].$$ Now we work on $\hat{{\textrm{E}}\,}\bigg[\log\frac{d\hat{P}_T^{(a)}}{dP_T}\bigg]$. By Theorem 5.1 of [@Da78], we have $$\frac{d\hat{P}_T^{(a)}}{dP_T}=\exp\bigg(\int_0^T\int_0^Ja\varphi_1(x)F(dxdt)-\frac{1}{2}\int_0^T\int_0^J\int_0^Ja^2\varphi_1(x)\varphi_1(y)f(x,y)dxdydt\bigg)$$ where $f(x,y)=\sum_{n=0}^{\infty}\gamma_n^2\varphi_n(x)\varphi_n(y)$. Then $$\hat{{\textrm{E}}\,}\bigg[\log\frac{d\hat{P}_T^{(a)}}{dP_T} \bigg] =\hat{{\textrm{E}}\,}\bigg[\int_0^T\int_0^Ja\varphi_1(x)F(dxdt)\bigg] -\frac{1}{2}\int_0^T\int_0^J\int_0^Ja^2\varphi_1(x)\varphi_1(y)f(x,y)dxdydt.$$ Let $$\begin{split} \hat{\zeta}(T,a)&=\exp\bigg(\frac{1}{2}\int_0^T\int_0^J\int_0^Ja^2\varphi_1(x)\varphi_1(y)f(x,y)dxdydt\bigg),\\ F(T, a\varphi_1)&=\int_0^T\int_0^Ja\varphi_1(x)F(dxdt). \end{split}$$ Then $$\begin{split} \hat{{\textrm{E}}\,}\bigg[\log\frac{d\hat{P}_T^{(a)}}{dP_T} \bigg] &=\hat{{\textrm{E}}\,}\big[F(T, a\varphi_1)\big]-\log\hat{\zeta}(T,a)\\ &={\textrm{E}}\,\big[F(T,a\varphi_1)\frac{d\hat{P}_T^{(a)}}{dP_T}\big]-\log\hat{\zeta}(T,a)\\ &=\frac{a}{\hat{\zeta}(T,a)}{\textrm{E}}\,\big[F(T,\varphi_1)\exp(aF(T,\varphi_1))\big]-\log\hat{\zeta}(T,a)\\ &=\frac{a}{\hat{\zeta}(T,a)}\frac{d}{da}{\textrm{E}}\,\big[\exp(aF(T,\varphi_1))\big]-\log\hat{\zeta}(T,a).\label{exp(aF)} \end{split}$$ Now, let $X=F(T,\varphi_1)$ and $\psi(a)={\textrm{E}}\,[\exp(aX)]$. 
Then $X$ is normally distributed with mean zero and variance $\sigma_X^2$, where $$\sigma_X^2=\int_0^T\int_0^J\int_0^J\varphi_1(x)\varphi_1(y)f(x,y)dxdydt=\frac{2}{a^2}\log\hat{\zeta}(T,a).$$ Then $$\begin{split} \psi(a)&=\exp\bigg(\frac{a^2}{2}\sigma_X^2\bigg),\\ \frac{d}{da}\psi(a)&=\sigma_X^2a\exp\bigg(\frac{a^2}{2}\sigma_X^2\bigg). \end{split}$$ Substituting these two expressions into ([\[exp(aF)\]](#exp(aF)){reference-type="ref" reference="exp(aF)"}), we get $$\begin{split} \hat{{\textrm{E}}\,}\big[F(T, a\varphi_1)\big] &=\frac{a}{\hat{\zeta}(T,a)}\sigma_X^2a\exp\bigg(\frac{a^2}{2}\sigma_X^2\bigg)\\ &=\frac{a^2\exp\bigg(\frac{a^2}{2}\sigma_X^2\bigg)}{\hat{\zeta}(T,a)}\cdot\frac{2}{a^2}\log\hat{\zeta}(T,a)\\ &=2\exp\bigg(\frac{a^2}{2}\sigma_X^2\bigg)\frac{\log\hat{\zeta}(T,a)}{\hat{\zeta}(T,a)}. \end{split}$$ Back to ([\[exp(aF)\]](#exp(aF)){reference-type="ref" reference="exp(aF)"}), we get $$\begin{split} \hat{{\textrm{E}}\,}\bigg[\log\frac{d\hat{P}_T^{(a)}}{dP_T} \bigg] &=\hat{{\textrm{E}}\,}\big[F(T, a\varphi_1)\big]-\log\hat{\zeta}(T,a)\\ &=2\exp\bigg(\frac{a^2}{2}\sigma_X^2\bigg)\frac{\log\hat{\zeta}(T,a)}{\hat{\zeta}(T,a)}-\log\hat{\zeta}(T,a)\\ &=2\exp\bigg(\log\hat{\zeta}(T,a)\bigg)\frac{\log\hat{\zeta}(T,a)}{\hat{\zeta}(T,a)}-\log\hat{\zeta}(T,a)\\ &=\log\hat{\zeta}(T,a). 
\end{split}$$ Since $$\begin{split} \log\hat{\zeta}(T,a) &=\frac{1}{2}\int_0^T\int_0^J\int_0^Ja^2\varphi_1(x)\varphi_1(y)f(x,y)dxdydt\\ &=\frac{a^2T}{2}\int_0^J\int_0^J\varphi_1(x)\varphi_1(y)f(x,y)dxdy\\ &=\frac{a^2TJ\gamma_1^2}{4}, \end{split}$$ we have $$\hat{{\textrm{E}}\,}\bigg[\log\frac{d\hat{P}_T^{(a)}}{dP_T}\bigg]=\frac{a^2TJ\gamma_1^2}{4}.$$ Due to the inequality ([\[logZT\]](#logZT){reference-type="ref" reference="logZT"}), we get $$\underset{T\rightarrow\infty}{\liminf}\frac{1}{T}\log Z_T \geq -C_3\beta J^2-\frac{a^2J\gamma_1^2}{4}.\label{1/TlogZT}$$ ## Estimate Recall $$\bar{u}(t)=\frac{1}{J}\int_{0}^{J}u(t,x)dx.$$ In [@MN22], $\theta_u$ and $R_u$ are defined as follows: $$\begin{split} \theta_u(t,J)&:=\bigg[\frac{1}{J}\int_{0}^{J}\big(u(t,x)-\bar{u}(t)\big)^2dx\bigg]^{1/2}, \quad 0\leq t\leq T,\\ R_u(T,J)&=\bigg(\frac{1}{T}\int_{0}^{T}\theta_u(t,J)^2dt\bigg)^{1/2}. \end{split}$$ Also, we have the probability measure $Q_T(A)$ and set $A_{T,J}^{(1)}$ as $$\begin{split} Q_T(A)&=\frac{1}{Z_{T,J,\beta}}{\textrm{E}}\,\big[\mathcal{E}_{T,J,\beta}\mathbbm{1}_A\big],\\ A_{T,J}^{(1)}&=\{R(T,J)<\epsilon_0CJ^{5/3}\}, \end{split}$$ where $\epsilon_0$ is a constant not depending on $\beta$ and $J$, and $C$ is also a constant. We will show $$\underset{T\rightarrow\infty}{\lim}Q_T\big(A_{T,J}^{(1)}\big)=0.$$ For $K>0$, we define the event $$A=A_{K,T,J}=\{R(T,J)\leq K\}.$$ **Lemma 2**. *On the set $A$, we have $$\left|\{t\in[0,T]:\theta_u^2(t,J)\leq2K^2\}\right|\geq\frac{T}{2}.$$* #### Proof: We prove this by contradiction.\
Suppose on $A$, $$\left|\{t\in[0,T]:\theta_u^2(t,J)>2K^2\}\right|>\frac{T}{2}.$$ Then $$\begin{split} \int_{0}^{T}\theta_u^2(t,J)dt &\geq\int_{0}^{T}\theta_u^2(t,J)\mathbbm{1}_{\{\theta_u^2(t,J)>2K^2\}}dt\\ &>2K^2\cdot\frac{T}{2}\\ &=K^2T. 
\end{split}$$ But $$R(T,J)\leq K$$ is equivalent to $$\frac{1}{T}\int_{0}^{T}\theta_u^2(t,J)dt\leq K^2,$$ which is equivalent to $$\int_{0}^{T}\theta_u^2(t,J)dt\leq K^2T.$$ This is a contradiction. $\square$ **Lemma 3**. *If $\theta_u(t,J)^2\leq 2K^2$, we have $$\left|\{x\in[0, J]: u(t,x)\in[\bar{u}(t)-2K, \bar{u}(t)+2K]\}\right|>\frac{J}{2}.$$* #### Proof: We prove this by contradiction. Suppose that $\theta_u(t,J)^2\leq 2K^2$ but $$\left|\{x\in[0, J]: u(t,x)\in[\bar{u}(t)-2K, \bar{u}(t)+2K]\}\right|\leq\frac{J}{2}.$$ Then we also have $$\left|\{x\in[0, J]: \left|u(t,x)-\bar{u}(t)\right|>2K\}\right|\geq\frac{J}{2},$$ so we get $$\int_{0}^{J}\big(u(t,x)-\bar{u}(t)\big)^2dx>(2K)^2\cdot\frac{J}{2}=2K^2J.$$ But by definition of $\theta_u$, if $\theta_u^2(t,J)\leq 2K^2$, we have $$\frac{1}{J}\int_{0}^{J}\big(u(t,x)-\bar{u}(t)\big)^2dx\leq2K^2,$$ that is, $$\int_{0}^{J}\big(u(t,x)-\bar{u}(t)\big)^2dx\leq2K^2J.$$ This is a contradiction. $\square$ Let $d_t^{\pm}=\bar{u}(t)\pm2K$, then $d_t^+-d_t^-=4K$. Recall $$\begin{split} L_t(A)&=m\{x\in[0,J]:u(t,x)\in A\},\\ l_t^u(y)&=L_t(dy)/dy. \end{split}$$ By Lemma 2 and Lemma 3, on the set $A$ we have $$\begin{split} \bigg|\{t\in[0,T]:&\int_{d_t^-}^{d_t^+}l_t^u(y)dy\geq\frac{J}{2}\}\bigg| =\left|\{t\in[0,T]:L_t\big((d_t^-,d_t^+)\big)\geq\frac{J}{2}\}\right|\\ &\geq\left|\{t\in[0,T]:\left|\{x\in[0, J]: u(t,x)\in[\bar{u}(t)-2K, \bar{u}(t)+2K]\}\right|>\frac{J}{2}\}\right| \\ &\geq\frac{T}{2}. \end{split}$$ Then we get $$\begin{split} \int_{0}^{T}\int_{-\infty}^{+\infty}l_t^u(y)^2dydt &\geq \int_{0}^{T}4K\bigg(\int_{d_t^-}^{d_t^+}l_t^u(y)^2\frac{dy}{4K}\bigg)\mathbbm{1}_{\{L_t((d_t^-,d_t^+))\geq J/2\}}dt\\ &\geq \int_{0}^{T}4K\bigg(\int_{d_t^-}^{d_t^+}l_t^u(y)\frac{dy}{4K}\bigg)^2\mathbbm{1}_{\{L_t((d_t^-,d_t^+))\geq J/2\}}dt\\ &\geq \frac{T}{2}\cdot4K\cdot\bigg(\frac{J}{8K}\bigg)^2=\frac{TJ^2}{32K}. \end{split}$$ The second inequality is due to Jensen's inequality. 
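The Jensen step says that, against the normalised measure $\frac{dy}{4K}$ on $(d_t^-,d_t^+)$, the mean of $(l_t^u)^2$ dominates the squared mean, i.e. $\int_{d_t^-}^{d_t^+}l_t^u(y)^2dy\geq\frac{1}{4K}\big(\int_{d_t^-}^{d_t^+}l_t^u(y)dy\big)^2$. A discretised sketch with an arbitrary nonnegative stand-in profile (not the actual local time):

```python
# Discretised check of the Jensen/Cauchy-Schwarz step on an interval of length 4K.
K = 0.8
width = 4 * K
n = 1000
h = width / n

# Arbitrary nonnegative stand-in for the local time profile on (d^-, d^+).
l = [1.0 + 0.5 * ((k * h) % 1.0) for k in range(n)]

mass   = sum(l) * h                  # \int l dy
energy = sum(v * v for v in l) * h   # \int l^2 dy

print(energy, mass ** 2 / width)     # energy should dominate
```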
We let $K=\epsilon_0C(\beta,J)J^{5/3}$; then, on the set $A$, $$\int_{0}^{T}\int_{-\infty}^{+\infty}l_t^u(y)^2dydt\geq \frac{TJ^2}{32\epsilon_0C(\beta,J)J^{5/3}}.$$ Recall the definitions $$\begin{split} \mathcal{E}_{T, J, \beta}&=\exp\bigg(-\beta\int_{0}^{T}\int_{-\infty}^{+\infty}l_t^u(y)^2dydt\bigg),\\ Z_{T, J, \beta}&={\textrm{E}}\,\big[\mathcal{E}_{T,J,\beta}\big],\\ Q_{T,J,\beta}(A)&=\frac{1}{Z_{T, J, \beta}}{\textrm{E}}\,\big[\mathcal{E}_{T,J,\beta}\mathbbm{1}_A\big]. \end{split}$$ Since $\{R(T,J)<\epsilon_0C(\beta,J)J^{5/3}\}\subset A$, we get $$\begin{split} {\textrm{E}}\,\big[\mathcal{E}_{T,J,\beta}&\mathbbm{1}_{\{R(T,J)<\epsilon_0C(\beta,J)J^{5/3}\}}\big]\\ &\leq\exp\bigg(-\beta\frac{TJ^2}{32\epsilon_0C(\beta,J)J^{5/3}}\bigg). \end{split}$$ By ([\[1/TlogZT\]](#1/TlogZT){reference-type="ref" reference="1/TlogZT"}), we have $$\begin{split} \underset{T\rightarrow\infty}{\lim}\frac{1}{T}\log Q_T(A_{T,J}^{(1)}) &\leq\underset{T\rightarrow\infty}{\lim}\frac{1}{T}\log{\textrm{E}}\,\big[\mathcal{E}_{T,J,\beta}\mathbbm{1}_{\{R(T,J)<\epsilon_0C(\beta,J)J^{5/3}\}}\big]-\underset{T\rightarrow\infty}{\liminf}\frac{1}{T}\log Z_T\\ &\leq\underset{T\rightarrow\infty}{\lim}\frac{1}{T}\bigg(-\beta\frac{TJ^2}{32\epsilon_0C(\beta,J)J^{5/3}}\bigg)+C_3\beta J^2+\frac{a^2J\gamma_1^2}{4}\\ &=-\beta\frac{J^2}{32\epsilon_0C(\beta,J)J^{5/3}}+C_3\beta J^2+\frac{a^2J\gamma_1^2}{4}. \end{split}$$ Hence, by choosing $\epsilon_0$ small enough, we get the result. # Upper bound In this section, we will show $\underset{T\rightarrow\infty}{\lim}Q_T\left(A^{(2)}_{T,J}\right)=0$. 
Since we have found a lower bound of $\log Z_T$ in section [3](#lowerbound){reference-type="ref" reference="lowerbound"}, it is enough to prove that for a large enough constant $K$, we have $\underset{T\rightarrow+\infty}{\lim}P\left(R(T,J)\geq K\right)=0.$ Recall $$\tilde{a}_n(t)=\begin{cases} \frac{\gamma_{n}}{w_n}\int_{-\infty}^{t}\big(e^{\frac{-1+w_n}{2}(t-s)}-e^{\frac{-1-w_n}{2}(t-s)}\big)dW_n(s) & J>2\pi n \\ \gamma_{n}\int_{-\infty}^{t}e^{-\frac{1}{2}(t-s)}(t-s)dW_n(s) & J=2\pi n \\ \frac{\gamma_{n}}{\omega_n}\int_{-\infty}^{t}e^{-\frac{1}{2}(t-s)}\sin(\omega_n(t-s))dW_n(s) & 0<J<2\pi n, \end{cases}$$ where $\tilde{a}_n$ is stationary in time $t$ for each $n$. We also defined $\tilde{U}(t,x)=\sum_{n\neq 0}\tilde{a}_n(t)\varphi_n(x)$.\
Recall the radius of the polymer $$R(T,J)=\bigg[\frac{1}{TJ}\int_{0}^{T}\int_{0}^{J}(\tilde{U}(t,x))^2dxdt\bigg]^{1/2}.$$ We define $$S_T^{(n)}=\int_{0}^{T}\big(\tilde{a}_n(t)\big)^2dt.$$ Then $$\begin{split} R^2(T,J) &=\frac{1}{TJ}\int_{0}^{T}\int_{0}^{J}(\tilde{U}(t,x))^2dxdt\\ &=\sum_{n=1}^{\infty}\frac{1}{TJ}\int_{0}^{T}\big(\tilde{a}_n(t)\big)^2dt\\ &=\sum_{n=1}^{\infty}\frac{1}{TJ}S_T^{(n)}. \end{split}$$ **Proposition 1**. *We can choose a $K$ large enough so that $$\underset{T\rightarrow+\infty}{\lim}P\left(R(T,J)\geq K\right)=0.$$* #### Proof: Let $c_0=\left(\sum_{n=1}^{+\infty}\frac{1}{n^2}\right)^{-1}=\frac{6}{\pi^2}$. We have $$\begin{split} P\left(R(T,J)\geq K\right) &=P\left(R(T,J)^2\geq K^2\right)\\ &\leq \sum_{n=1}^{\infty}P\left(\dfrac{1}{TJ}S_T^{(n)}>c_0K^2n^{-2}\right). \end{split}$$ We will split the work into three cases: (1) $J<2\pi n$; (2) $J=2\pi n$; and (3) $J>2\pi n$. The proofs of the three cases are similar, so we will omit some details in cases (2) and (3). **Case (1): $J<2\pi n$.** **Step 1.** For each $n>\frac{J}{2\pi}$, we will find an upper bound of $P\left(\dfrac{1}{TJ}S_T^{(n)}>Kn^{-2}\right).$ Let $A_n(t)=\int_{-\infty}^{t}e^{-\frac{1}{2}(t-s)}\sin(\omega_n(t-s))dW_n(s)$. 
Then $$\tilde{a}_n(t)=\frac{\gamma_n}{\omega_n}A_n(t)$$ and $$\dfrac{1}{TJ}S_T^{(n)}=\dfrac{1}{TJ}\bigg(\dfrac{\gamma_n}{\omega_n}\bigg)^2\int_{0}^{T}A_n^2(t)dt.$$ We have $$\begin{split} A_n(t) &=\int_{-\infty}^{t}e^{-\frac{1}{2}(t-s)}\sin(\omega_n(t-s))dW_n(s)\\ &=e^{-\frac{1}{2}t}\sin(\omega_nt)\int_{-\infty}^{t}e^{\frac{1}{2}s}\cos(\omega_ns)dW_n(s) -e^{-\frac{1}{2}t}\cos(\omega_nt)\int_{-\infty}^{t}e^{\frac{1}{2}s}\sin(\omega_ns)dW_n(s). \end{split}$$ The process $\int_{-\infty}^{t}e^{\frac{1}{2}s}\cos(\omega_ns)dW_n(s)$ is a time-changed Brownian motion $B_{\tau(t)}$ with clock $\tau(t)=\int_{-\infty}^{t}e^{s}\cos^2(\omega_ns)ds$, since $$E\big[\big(\int_{-\infty}^{t}e^{\frac{1}{2}s}\cos(\omega_ns)dW_n(s)\big)^2\big]=\int_{-\infty}^{t}e^{s}\cos^2(\omega_ns)ds$$ and $$\int_{-\infty}^{t}e^{s}\cos^2(\omega_ns)ds\leq \int_{-\infty}^{t}e^{s}ds=e^t.$$ Define $$\begin{split} B^c_t&:=B_{\int_{-\infty}^{t}e^{s}\cos^2(\omega_ns)ds}=\int_{-\infty}^{t}e^{\frac{1}{2}s}\cos(\omega_ns)dW_n(s)\\ B^s_t&:=B_{\int_{-\infty}^{t}e^{s}\sin^2(\omega_ns)ds}=\int_{-\infty}^{t}e^{\frac{1}{2}s}\sin(\omega_ns)dW_n(s)\\ \tilde{B}_{T}&:=\underset{0\leq t\leq T}{\sup}|B_t|. \end{split}$$ Since the clocks are bounded by $e^t$, for $t\geq 0$ we have $$|B^c_t|\leq \tilde{B}_{e^t} \quad\text{and}\quad |B^s_t|\leq \tilde{B}_{e^t}.$$ Let $\lambda$ be a positive real number. Then $$\begin{split} P\bigg(\big(A_n(t)\big)^2>\lambda\bigg) &=P\bigg(\big(e^{-\frac{1}{2}t}\sin(\omega_nt)B^c_t -e^{-\frac{1}{2}t}\cos(\omega_nt)B^s_t\big)^2>\lambda\bigg)\\ &\leq P\bigg(2e^{-t}(B^c_t)^2+2e^{-t}(B^s_t)^2>\lambda\bigg)\\ &\leq P\bigg(16e^{-t}\tilde{B}_{e^t}^2>\lambda\bigg). 
\end{split}$$ Given $0\leq s\leq t$, since $\tilde{B}_s\geq |B_s|$, $$\begin{split} \tilde{B}_t &=\underset{0\leq r\leq t}{\sup}|B_r|=\underset{0\leq r\leq t}{\sup}|B_r-B_s+B_s|\\ &\leq \underset{0\leq r\leq t}{\sup}|B_r-B_s|+|B_s|\\ &\leq \underset{ 0\leq r\leq s}{\sup}|B_r-B_s|+\underset{s\leq r\leq t}{\sup}|B_r-B_s|+|B_s|\\ &\leq 2\underset{0\leq r\leq s}{\sup}|B_r|+\underset{s\leq r\leq t}{\sup}|B_r-B_s|+|B_s|\\ &\leq 3\underset{0\leq r\leq s}{\sup}|B_r|+\underset{s\leq r\leq t}{\sup}|B_r-B_s|.\label{ineqoftildeB} \end{split}$$ Let $$\tilde{B}_{s,t}:=\underset{s\leq r\leq t}{\sup}|B_r-B_s|.$$ Note that $\tilde{B}_{s,t}$ is independent of $\mathcal{F}_s$, and we have $$\tilde{B}_t\leq \tilde{B}_{s,t}+3\tilde{B}_s, \quad \text{and }\quad \tilde{B}_{s,t}\overset{D}{=}\tilde{B}_{t-s}.$$ Let $\tau >0$. By Markov's inequality, $$\begin{split} P\bigg(\frac{1}{TJ}S_T^{(n)}>Kn^{-2}\bigg) &= P\bigg(\int_{0}^{T}A_n^2(t)dt>\frac{K}{n^2}\big(\frac{\omega_n}{\gamma_n}\big)^2TJ\bigg)\\ &=P\bigg(e^{\tau\int_{0}^{T}A_n^2(t)dt}>e^{\tau\frac{K}{n^2}(\frac{\omega_n}{\gamma_n})^2TJ}\bigg)\\ &\leq E\big[e^{\tau\int_{0}^{T}A_n^2(t)dt}\big]\cdot e^{-\tau\frac{K}{n^2}(\frac{\omega_n}{\gamma_n})^2TJ}\\ &\leq E\big[e^{16\tau\int_{0}^{T}e^{-t}\tilde{B}_{e^t}^2dt}\big]\cdot e^{-\tau\frac{K}{n^2}(\frac{\omega_n}{\gamma_n})^2TJ}.\label{tauMI} \end{split}$$ **Step 2.** We will find an upper bound of $\int_{0}^{T}e^{-t}\tilde{B}_{e^t}^2dt$. **Lemma 4**. *Let $T>2$ be a positive integer and $a,b>0. 
Then $$\begin{split} \int_{0}^{T}e^{-at}\tilde{B}^2_{e^{bt}}dt &\leq\int_{0}^{1}e^{-at}\tilde{B}^2_{e^{bt}}dt+\tilde{B}_{e^{b}}^2\sum_{k=1}^{T-1}\int_{k}^{k+1}2^k\cdot3^{2k}e^{-at}dt\\ &\quad +\sum_{m=1}^{T-2}\bigg[\bigg(\sum_{k=m+1}^{T-1}\int_{k}^{k+1}2^{k-m+1}\cdot3^{2(k-m)}e^{-at}\tilde{B}^2_{e^{bm},e^{b(m+1)}}dt\bigg)\\ &\qquad\qquad+2\int_{m}^{m+1}e^{-at}\tilde{B}^2_{e^{bm},e^{bt}}dt\bigg].\label{inteB} \end{split}$$* [\[upperboundlemma1\]]{#upperboundlemma1 label="upperboundlemma1"} The proof of Lemma 4.1 is in Appendix [5.5](#proofofintegraleexpbrownianmotion){reference-type="ref" reference="proofofintegraleexpbrownianmotion"}. Now, we take $a=b=1$. Then Lemma 4.1 gives $$\begin{split} \int_{0}^{T}e^{-t}\tilde{B}^2_{e^{t}}dt &\leq\int_{0}^{1}e^{-t}\tilde{B}^2_{e^{t}}dt+\tilde{B}_{e}^2\sum_{k=1}^{T-1}\int_{k}^{k+1}2^k\cdot3^{2k}e^{-t}dt\\ &\quad +\sum_{m=1}^{T-2}\bigg[\bigg(\sum_{k=m+1}^{T-1}\int_{k}^{k+1}2^{k-m+1}\cdot3^{2(k-m)}e^{-t}\tilde{B}^2_{e^{m},e^{(m+1)}}dt\bigg)\\ &\qquad\qquad+2\int_{m}^{m+1}e^{-t}\tilde{B}^2_{e^{m},e^{t}}dt\bigg]. \end{split}$$ Let $$\begin{split} (\text{I})&:=\int_{0}^{1}e^{-t}\tilde{B}^2_{e^{t}}dt+\tilde{B}_{e}^2\sum_{k=1}^{T-1}\int_{k}^{k+1}2^k\cdot3^{2k}e^{-t}dt\\ (\text{II})&:=\sum_{m=1}^{T-2}\bigg[\bigg(\sum_{k=m+1}^{T-1}\int_{k}^{k+1}2^{k-m+1}\cdot3^{2(k-m)}e^{-t}\tilde{B}^2_{e^{m},e^{(m+1)}}dt\bigg)\\ &\qquad\quad+2\int_{m}^{m+1}e^{-t}\tilde{B}^2_{e^{m},e^{t}}dt\bigg]. \end{split}$$ Recall that $\mathcal{F}_t$ is the filtration generated by $\{B_s | \hspace{0.05cm} s\leq t\}$. Clearly, $(\text{I})\in\mathcal{F}_e$. And in (I), $$\begin{split} \sum_{k=1}^{T-1}\int_{k}^{k+1}2^k\cdot3^{2k}e^{-t}dt &=\sum_{k=1}^{T-1}18^k\int_{k}^{k+1}e^{-t}dt\\ &\leq \sum_{k=1}^{T-1}\big(\frac{18}{e}\big)^k\\ &\leq\frac{7^T-7}{6}, \qquad \text{since }\hspace{1ex} \frac{18}{e}\approx6.62<7. 
\end{split}$$ Then we have $$(\text{I})\leq \int_{0}^{1}e^{-t}\tilde{B}^2_{e^t}dt+\bigg(\frac{7^T-7}{6}\bigg)\tilde{B}^2_e\leq \int_{0}^{1}e^{-t}\tilde{B}^2_{e^t}dt+7^T\tilde{B}^2_e.$$ Since $\tilde{B}_{e^t}\leq \tilde{B}_e, \forall t\in[0,1]$, $$(\text{I})\leq \tilde{B}^2_e(1+7^T).$$ Now we work with (II).\ For each $m$ in $\{1,\dots, T-2\}$, $$\bigg[\bigg(\sum_{k=m+1}^{T-1}\int_{k}^{k+1}2^{k-m+1}\cdot3^{2(k-m)}e^{-t}\tilde{B}^2_{e^m,e^{m+1}}dt\bigg)+2\int_{m}^{m+1}e^{-t}\tilde{B}^2_{e^m,e^t}dt\bigg]\in\mathcal{F}_{e^{m+1}}$$ and $$\bigg[\bigg(\sum_{k=m+1}^{T-1}\int_{k}^{k+1}2^{k-m+1}\cdot3^{2(k-m)}e^{-t}\tilde{B}^2_{e^m,e^{m+1}}dt\bigg)+2\int_{m}^{m+1}e^{-t}\tilde{B}^2_{e^m,e^t}dt\bigg]\perp \!\!\! \perp\mathcal{F}_{e^{m}}.$$ Also we have $$\begin{split} \sum_{k=m+1}^{T-1}&\int_{k}^{k+1}2^{k-m+1}\cdot3^{2(k-m)}e^{-t}\tilde{B}^2_{e^m,e^{m+1}}dt\\ &=\tilde{B}^2_{e^m,e^{m+1}}\sum_{k=m+1}^{T-1}2^{k-m+1}\cdot3^{2(k-m)}\int_{k}^{k+1}e^{-t}dt\\ &=\tilde{B}^2_{e^m,e^{m+1}}\cdot2^{1-m}\cdot9^{-m}\sum_{k=m+1}^{T-1}18^k\int_{k}^{k+1}e^{-t}dt\\ &\leq \tilde{B}^2_{e^m,e^{m+1}}\cdot2^{1-m}\cdot9^{-m}\sum_{k=m+1}^{T-1}(\frac{18}{e})^k\\ &\leq \tilde{B}^2_{e^m,e^{m+1}}\cdot2^{1-m}\cdot9^{-m}\frac{7^T-7^{m+1}}{6}\\ &\leq \tilde{B}^2_{e^m,e^{m+1}}\cdot2^{1-m}\cdot9^{-m}\cdot 7^T. \end{split}$$ Let $\alpha_m=2^{1-m}\cdot 9^{-m}\cdot 7^T$. Recall $\tilde{B}_{s,t}=\underset{s\leq r\leq t}{\sup}|B_r-B_s|$. Then if $t\leq p$, we have $\tilde{B}_{s,t}\leq\tilde{B}_{s,p}$. Therefore, $$\begin{split} (\text{II}) &\leq\sum_{m=1}^{T-2}\bigg[\tilde{B}^2_{e^m,e^{m+1}}\cdot\alpha_m+2\int_{m}^{m+1}e^{-t}\tilde{B}^2_{e^m,e^t}dt\bigg]\\ &\leq\sum_{m=1}^{T-2}\tilde{B}^2_{e^m,e^{m+1}}\big(\alpha_m+2e^{-m}\big). \end{split}$$ Thus, we get $$\begin{split} \int_{0}^{T}e^{-t}\tilde{B}^2_{e^t}dt &\leq \tilde{B}^2_e(1+7^T)+\sum_{m=1}^{T-2}\tilde{B}^2_{e^m,e^{m+1}}\big(\alpha_m+2e^{-m}\big). 
\end{split}$$ **Step 3.** We will choose a proper value for $\tau$ and find an explicit upper bound of\ $E\big[e^{16\tau\int_{0}^{T}e^{-t}\tilde{B}_{e^t}^2dt}\big]$.\ Back to ([\[tauMI\]](#tauMI){reference-type="ref" reference="tauMI"}), we have $$\begin{split} E\big[&e^{16\tau\int_{0}^{T}e^{-t}\tilde{B}_{e^T}^2dt}\big] \leq E\big[e^{16\tau(\tilde{B}^2_e(1+7^T)+\sum_{m=1}^{T-2}\tilde{B}^2_{e^m,e^{m+1}}\big(\alpha_m+2e^{-m}\big))}\big]\\ &=E\big[e^{16\tau\tilde{B}^2_e(1+7^T)}\big] \cdot \prod_{m=1}^{T-2}E\big[e^{16\tau\tilde{B}^2_{e^m,e^{m+1}}\big(\alpha_m+2e^{-m}\big))}\big].\label{mterms} \end{split}$$ Since $e^{16\tau\tilde{B}^2_e(1+7^T)}$ is a nonnegative random variable, we have $$\begin{split} E\big[e^{16\tau(1+7^T)\tilde{B}_e^2}\big] &=\int_{1}^{+\infty}P\big(e^{16\tau(1+7^T)\tilde{B}_e^2}\geq x\big)dx\\ &=\int_{1}^{+\infty}4P\bigg(B_e\geq \sqrt{\frac{\log x}{16\tau(1+7^T)}}\bigg)dx. \end{split}$$ **Lemma 5**. *Let $X\sim \mathcal{N}(0, \sigma^2)$ and $\gamma>0$ with $1-\dfrac{1}{2\gamma\sigma^2}<0$, then $$\int_{1}^{+\infty}P\left(X\geq\sqrt{\dfrac{\log x}{\gamma}}\right)dx\leq\dfrac{\sigma^2\gamma\sqrt{2\pi\sigma^2}}{\sqrt{1-2\gamma\sigma^2}}.$$* The proof of the lemma is in Appendix [5.6](#integraloftailprobability){reference-type="ref" reference="integraloftailprobability"}. Let $\gamma=16\tau(1+7^T)$ with $1-\dfrac{1}{2\gamma e}<0$. Since $B_e\sim\mathcal{N}(0, e)$, by Lemma 4.2, we have $$E\big[e^{16\tau(1+7^T)\tilde{B}_e^2}\big] \leq4\cdot\dfrac{e\gamma}{\sqrt{1-2\gamma e}}=\dfrac{64e\tau(1+7^T)}{\sqrt{1-32e\tau(1+7^T)}}$$ where $0<\tau<\dfrac{1}{32e(1+7^T)}$. Now we work on the other factors in ([\[mterms\]](#mterms){reference-type="ref" reference="mterms"}). 
For $m$ in $\{1, 2, \dots, T-3, T-2\}$, we have $$\begin{split} E\big[\exp\bigg(16\tau\tilde{B}^2_{e^m,e^{m+1}}&\big(\alpha_m+2e^{-m}\big)\bigg)\big]\\ &=\int_{1}^{+\infty}P\big(\exp\big(16\tau\tilde{B}^2_{e^m,e^{m+1}}(\alpha_m+2e^{-m})\big)\geq x\big)dx\\ &\leq\int_{1}^{+\infty}4P\bigg(B_{e^{m+1}-e^m}\geq \sqrt{\frac{\log x}{16\tau(\alpha_m+2e^{-m})}}\bigg)dx. \end{split}$$ Let $\gamma_m=16\tau(\alpha_m+2e^{-m})$ and $\sigma_m^2=e^{m+1}-e^m$. When $1-\dfrac{1}{2\sigma_m^2\gamma_m}<0$, by Lemma 4.2, we have $$\begin{split} E\big[\exp\bigg(16\tau\tilde{B}^2_{e^m,e^{m+1}}\big(\alpha_m+2e^{-m}\big)\bigg)\big] &\leq 4\cdot \dfrac{\sigma_m^2\gamma_m}{\sqrt{1-2\gamma_m\sigma_m^2}}\\ &=\dfrac{64\tau(e^{m+1}-e^m)(\alpha_m+2e^{-m})}{\sqrt{1-32\tau(e^{m+1}-e^m)(\alpha_m+2e^{-m})}}, \end{split}$$ where $\tau<\dfrac{1}{32\sigma_m^2(\alpha_m+2e^{-m})}$. Recall $\alpha_m=2^{1-m}\cdot9^{-m}\cdot 7^T$ and $m\in\{1,\dots, T-2\}$. Let $\tau=\frac{1}{32e^T\cdot8^T}$. Then $\tau<\frac{1}{32(1+7^T)e}$ and $\tau<\frac{1}{32(2^{1-m}\cdot9^{-m}\cdot 7^T+2e^{-m})(e^{m+1}-e^m)}$ for all $m$.\
Back to ([\[mterms\]](#mterms){reference-type="ref" reference="mterms"}), when $\tau=\frac{1}{32e^T\cdot8^T}$ we have $$\begin{split} E\big[&e^{16\tau\int_{0}^{T}e^{-t}\tilde{B}_{e^t}^2dt}\big] \leq\dfrac{64e\tau(1+7^T)}{\sqrt{1-32e\tau(1+7^T)}} \cdot\prod_{m=1}^{T-2}\dfrac{64\sigma_m^2\tau(\alpha_m+2e^{-m})}{\sqrt{1-32\sigma_m^2\tau(\alpha_m+2e^{-m})}}\\ &=\dfrac{\frac{2(1+7^T)}{e^{T-1}8^T}}{\sqrt{1-\frac{(1+7^T)}{e^{T-1}8^T}}}\cdot\prod_{m=1}^{T-2}\dfrac{\frac{2(\alpha_m+2e^{-m})\sigma_m^2}{e^T8^T}}{\sqrt{1-\frac{(\alpha_m+2e^{-m})\sigma_m^2}{e^T8^T}}}\\ &=\dfrac{2(1+7^T)}{\sqrt{e^{T-1}8^T}\sqrt{e^{T-1}8^T-(1+7^T)}}\\ &\quad\cdot\prod_{m=1}^{T-2}\dfrac{2(2^{1-m}\cdot9^{-m}\cdot 7^T+2e^{-m})(e^{m+1}-e^m)}{\sqrt{e^{T}8^T}\sqrt{e^{T}8^T-(2^{1-m}\cdot9^{-m}\cdot 7^T+2e^{-m})(e^{m+1}-e^m)}}. 
\end{split}$$ Since $1+7^T<8^T$ and, for all $m\in\{1,\dots, T-2\}$, $(2^{1-m}\cdot9^{-m}\cdot 7^T+2e^{-m})<8^T$ and $e^{m+1}-e^m<e^{T-1}$ when $T>2$, we have $$\begin{split} E\big[e^{16\tau\int_{0}^{T}e^{-t}\tilde{B}_{e^t}^2dt}\big] &\leq\dfrac{2(1+7^T)}{\sqrt{e^{T-1}8^T}\sqrt{e^{T-1}8^T-8^T}}\\ &\quad\cdot\prod_{m=1}^{T-2}\dfrac{2(2^{1-m}\cdot9^{-m}\cdot 7^T+2e^{-m})e^{m+1}}{\sqrt{e^{T}8^T}\sqrt{e^{T}8^T-8^Te^{T-1}}}\\ &=\dfrac{2\cdot2^{T-2}(1+7^T)}{8^T\sqrt{e^{2T-2}-e^{T-1}}}\cdot\dfrac{1}{8^{T(T-2)}(\sqrt{e^{2T}-e^{2T-1}})^{(T-2)}}\\ &\quad\cdot\prod_{m=1}^{T-2}\bigg[\big(2^{1-m}\cdot9^{-m}\cdot e^{m+1}\big)\cdot 7^T+2e\bigg]. \end{split}$$ Since $2^{1-m}\cdot9^{-m}\cdot e^{m+1}=\dfrac{2e\cdot e^m}{18^m}\leq 1$ and $2e\leq7^{T}$ when $T\geq3$, we have $$\prod_{m=1}^{T-2}\bigg[\big(2^{1-m}\cdot9^{-m}\cdot e^{m+1}\big)\cdot 7^T+2e\bigg] \leq2^T\cdot7^{T(T-2)}=8^{T/3}\cdot7^{T(T-2)}.$$ Then we have $$\begin{split} E\big[e^{16\tau\int_{0}^{T}e^{-t}\tilde{B}_{e^t}^2dt}\big] &\leq \dfrac{2\cdot2^{T-2}(1+7^T)8^{T/3}\cdot7^{T(T-2)}}{8^{T^2-T}\cdot\sqrt{e^{2T-2}-e^{T-1}}\cdot e^{T^2-2T}\cdot(\sqrt{1-e^{-1}})^{T-2}}\\ &\leq \dfrac{2(1+7^T)\cdot7^{T(T-2)}}{8^{T^2-\frac{5}{3}T}\cdot\sqrt{e^{2T-2}-e^{T-1}}\cdot e^{T^2-2T}\cdot(\sqrt{1-e^{-1}})^{T-2}}. \end{split}$$ **Step 4.** We are ready to find upper bounds of $P\bigg(\frac{1}{TJ}S_T^{(n)}>Kn^{-2}\bigg)$ for each $n>\frac{J}{2\pi}$ and show that $\underset{T\rightarrow+\infty}{\lim}\sum_{n\geq N}P\bigg(\frac{1}{TJ}S_T^{(n)}>Kn^{-2}\bigg)=0$.\
Going back to ([\[tauMI\]](#tauMI){reference-type="ref" reference="tauMI"}), $$\begin{split} P\bigg(&\frac{1}{TJ}S_T^{(n)}>Kn^{-2}\bigg) \leq E\big[e^{16\tau\int_{0}^{T}e^{-t}\tilde{B}_{e^t}^2dt}\big]\cdot e^{-\tau\frac{K}{n^2}(\frac{\omega_n}{\gamma_n})^2TJ}\\ &\leq \dfrac{2(1+7^T)\cdot7^{T^2-2T}}{8^{T^2-\frac{5}{3}T}\cdot\sqrt{e^{2T-2}-e^{T-1}}\cdot e^{T^2-2T}\cdot(\sqrt{1-e^{-1}})^{T-2}}\cdot e^{-\tau\frac{K}{n^2}(\frac{\omega_n}{\gamma_n})^2TJ}, \end{split}$$ where $\tau=\dfrac{1}{32e^T8^T}$. 
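The prefactor in the last bound vanishes super-exponentially as $T\rightarrow+\infty$: in its logarithm the $T^2$ coefficient is $\ln7-\ln8-1<0$, which dominates all lower-order terms. A numerical sketch, working in log scale to avoid overflow:

```python
import math

def log_prefactor(T):
    # log of  2(1+7^T) 7^{T^2-2T} / (8^{T^2-(5/3)T} sqrt(e^{2T-2}-e^{T-1})
    #                                 e^{T^2-2T} (sqrt(1-1/e))^{T-2})
    ln7, ln8 = math.log(7.0), math.log(8.0)
    out = math.log(2.0)
    out += T * ln7 + math.log1p(7.0 ** (-T))                 # ln(1 + 7^T)
    out += (T * T - 2 * T) * ln7                             # ln 7^{T^2-2T}
    out -= (T * T - (5.0 / 3.0) * T) * ln8                   # ln 8^{T^2-(5/3)T}
    out -= (T - 1) + 0.5 * math.log1p(-math.exp(-(T - 1)))   # (1/2) ln(e^{2T-2}-e^{T-1})
    out -= (T * T - 2 * T)                                   # ln e^{T^2-2T}
    out -= 0.5 * (T - 2) * math.log1p(-math.exp(-1.0))       # ln (sqrt(1-1/e))^{T-2}
    return out

for T in (5, 10, 20):
    print(T, log_prefactor(T))
```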
Since $\omega_n=\dfrac{\sqrt{4k_n^2-1}}{2}$ and $k_n=\dfrac{n\pi}{J}$, if $\ceil[\big]{\dfrac{J}{2\pi}}>\dfrac{J}{2\pi}$, then for all $n>\dfrac{J}{2\pi}$ with $n\in\mathbb{Z}$, we have $$\dfrac{\omega_n^2}{n^2}=\dfrac{4k_n^2-1}{4n^2}=\dfrac{\frac{4n^2\pi^2}{J^2}-1}{4n^2}=\dfrac{4n^2\pi^2-J^2}{4n^2J^2}=\dfrac{\pi^2}{J^2}-\dfrac{1}{4n^2}\geq\dfrac{\pi^2}{J^2}-\dfrac{1}{(\ceil{J/\pi})^2}>0.$$ If $\ceil[\big]{\dfrac{J}{2\pi}}=\dfrac{J}{2\pi}$, the minimum value that $n$ can take is $\ceil[\big]{\dfrac{J}{2\pi}}+1$, and $$\dfrac{\omega_n^2}{n^2}=\dfrac{4k_n^2-1}{4n^2}=\dfrac{\frac{4n^2\pi^2}{J^2}-1}{4n^2}=\dfrac{4n^2\pi^2-J^2}{4n^2J^2}=\dfrac{\pi^2}{J^2}-\dfrac{1}{4n^2}\geq\dfrac{\pi^2}{J^2}-\dfrac{1}{(\ceil{J/\pi}+1)^2}>0.$$ We define $\epsilon(J)=\dfrac{\pi^2}{J^2}-\dfrac{1}{(\ceil{J/\pi})^2}$ (with the obvious modification in the second case); then $$P\bigg(\frac{1}{TJ}S_T^{(n)}>Kn^{-2}\bigg)\leq \dfrac{2(1+7^T)7^{T^2-2T}}{8^{T^2-\frac{5}{3}T}\cdot\sqrt{e^{2T-2}-e^{T-1}}\cdot e^{T^2-2T}\cdot(\sqrt{1-e^{-1}})^{T-2}}\cdot e^{-\tau K\epsilon(J)(\frac{1}{\gamma_n})^2TJ}.$$ Since $\gamma_{n}^2\leq c'n^{-\alpha}$ for some positive constants $c'$ and $\alpha$, we have $-\dfrac{1}{\gamma_{n}^2}\leq -cn^{\alpha}$, where $c=1/c'$. 
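The identity $\frac{\omega_n^2}{n^2}=\frac{\pi^2}{J^2}-\frac{1}{4n^2}$ and its positivity for integers $n>\frac{J}{2\pi}$ are easy to check numerically; a sketch with the arbitrary sample value $J=10$:

```python
import math

def omega_sq_over_n_sq(n, J):
    # omega_n = sqrt(4 k_n^2 - 1)/2 with k_n = n*pi/J, so
    # omega_n^2 / n^2 = pi^2/J^2 - 1/(4 n^2).
    k = n * math.pi / J
    return (4.0 * k * k - 1.0) / (4.0 * n * n)

J = 10.0
n_min = int(J / (2 * math.pi)) + 1      # smallest integer n with n > J/(2*pi)
for n in range(n_min, n_min + 4):
    lhs = omega_sq_over_n_sq(n, J)
    rhs = math.pi ** 2 / J ** 2 - 1.0 / (4.0 * n * n)
    print(n, lhs, rhs)
```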
Let $N=\dfrac{J+1}{2\pi}$, then $$\begin{split} \sum_{n\geq N}&P\bigg(\frac{1}{TJ}S_T^{(n)}>Kn^{-2}\bigg) \leq \dfrac{2(1+7^T)7^{T^2-2T}}{8^{T^2-\frac{5}{3}T}\cdot\sqrt{e^{2T-2}-e^{T-1}}\cdot e^{T^2-2T}\cdot(\sqrt{1-e^{-1}})^{T-2}}\\ &\qquad\qquad\qquad\qquad\cdot\bigg(\exp\bigg(-\tau K\epsilon(J)\big(\frac{1}{\gamma_{N}}\big)^2TJ\bigg)+\int_{N}^{+\infty}\exp(-\tau K\epsilon(J) cx^{\alpha}TJ)dx\bigg)\\ &=\dfrac{2(1+7^T)7^{T^2-2T}}{8^{T^2-\frac{5}{3}T}\cdot\sqrt{e^{2T-2}-e^{T-1}}\cdot e^{T^2-2T}\cdot(\sqrt{1-e^{-1}})^{T-2}\cdot\exp(\frac{1}{32e^T8^T}K\epsilon(J)(\frac{1}{\gamma_{N}})^2TJ) }\\ &\quad +\left(\dfrac{2(1+7^T)7^{T^2-2T}}{8^{T^2-\frac{5}{3}T}\cdot\sqrt{e^{2T-2}-e^{T-1}}\cdot e^{T^2-2T}\cdot(\sqrt{1-e^{-1}})^{T-2}}\right)\\ &\qquad\cdot\bigg(\int_{N}^{+\infty}\exp(-\tau K\epsilon(J) cx^{\alpha}TJ)dx\bigg)\\ &=:\tikz[baseline=(char.base)]{ \node[shape=circle,draw,inner sep=2pt] (char) {1};}+\tikz[baseline=(char.base)]{ \node[shape=circle,draw,inner sep=2pt] (char) {2};}. \end{split}$$ First, we note that $$\underset{T\rightarrow+\infty}{\lim}\exp\bigg(\frac{1}{32e^T8^T}K\epsilon(J)\bigg(\frac{1}{\gamma_{N}}\bigg)^2TJ\bigg)=1.$$ Secondly, since $\sqrt{1-e^{-1}}\geq\dfrac{3}{4}$ and $e^{1/2}\geq 1.6$, we have $$\begin{split} \dfrac{1}{\sqrt{e^{2T-2}-e^{T-1}}\cdot(\sqrt{1-e^{-1}})^{T-2}} &\leq\bigg(\dfrac{4}{3}\bigg)^{T-2}\cdot\dfrac{1}{e^{\frac{T-1}{2}}\sqrt{e^{T-1}-1}}\\ &\leq \bigg(\dfrac{4}{3}\bigg)^{T-2}\cdot\bigg(\dfrac{1}{1.6}\bigg)^{T-1}\cdot\dfrac{1}{\sqrt{e^{T-1}-1}}\\ &=\dfrac{1}{1.6}\cdot\bigg(\dfrac{4}{4.8}\bigg)^{T-2}\cdot\dfrac{1}{\sqrt{e^{T-1}-1}}. \end{split}$$ Letting $T\rightarrow+\infty$, we have $\dfrac{1}{\sqrt{e^{2T-2}-e^{T-1}}\cdot(\sqrt{1-e^{-1}})^{T-2}}\rightarrow0$.\
Now, we work on $\tikz[baseline=(char.base)]{ \node[shape=circle,draw,inner sep=2pt] (char) {1};}$. 
$$\begin{split} \underset{T\rightarrow+\infty}{\lim}\tikz[baseline=(char.base)]{ \node[shape=circle,draw,inner sep=2pt] (char) {1};} &= \underset{T\rightarrow+\infty}{\lim}\bigg[\dfrac{1}{\sqrt{e^{2T-2}-e^{T-1}}\cdot(\sqrt{1-e^{-1}})^{T-2}\cdot\exp(\frac{1}{32e^T8^T}K\epsilon(J)(\frac{1}{\gamma_{N}})^2TJ)}\\ &\qquad\qquad\cdot\dfrac{2(1+7^T)7^{T^2-2T}}{8^{T^2-\frac{5}{3}T}\cdot e^{T^2-2T}}\bigg]\\ &=\bigg(\underset{T\rightarrow+\infty}{\lim}\hspace{0.1in}\dfrac{1}{\exp(\frac{1}{32e^T8^T}K\epsilon(J)(\frac{1}{\gamma_{N}})^2TJ)}\bigg)\\ &\quad\cdot \bigg(\underset{T\rightarrow+\infty}{\lim}\hspace{0.1in}\dfrac{1}{\sqrt{e^{2T-2}-e^{T-1}}\cdot(\sqrt{1-e^{-1}})^{T-2}}\bigg)\\ &\quad\cdot\bigg[\underset{T\rightarrow+\infty}{\lim}\bigg(\dfrac{2\cdot7^{T^2-2T}}{8^{T^2-\frac{5}{3}T}\cdot e^{T^2-2T}}\bigg)+\underset{T\rightarrow+\infty}{\lim}\bigg(\dfrac{2\cdot7^{T^2-T}}{8^{T^2-\frac{5}{3}T}\cdot e^{T^2-2T}}\bigg)\bigg]. \end{split}$$ Let $y=\dfrac{7^{T^2-T}}{8^{T^2-\frac{5}{3}T}\cdot e^{T^2-2T}}$. $$\begin{split} \ln y &=\ln7^{T^2-T}-\ln8^{T^2-\frac{5}{3}T}-\ln e^{T^2-2T}\\ &=T^2\big(\ln7-\ln8-1\big)-T\big(\ln7-\frac{5}{3}\ln8-2\big). \end{split}$$ Since $\ln7-\ln8-1<0$, the quadratic term dominates the linear one, so $\ln y\rightarrow -\infty$ as $T\rightarrow+\infty$, which implies $y\rightarrow0$. We showed that $$\underset{T\rightarrow+\infty}{\lim}\bigg(\dfrac{2\cdot7^{T^2-T}}{8^{T^2-\frac{5}{3}T}\cdot e^{T^2-2T}}\bigg)=0.$$ Since $\dfrac{2\cdot7^{T^2-2T}}{8^{T^2-\frac{5}{3}T}\cdot e^{T^2-2T}}<\dfrac{2\cdot7^{T^2-T}}{8^{T^2-\frac{5}{3}T}\cdot e^{T^2-2T}}$ when $T>2$, we have $$\underset{T\rightarrow+\infty}{\lim}\bigg(\dfrac{2\cdot7^{T^2-2T}}{8^{T^2-\frac{5}{3}T}\cdot e^{T^2-2T}}\bigg)=0.$$ The above limits imply that $\underset{T\rightarrow+\infty}{\lim}\tikz[baseline=(char.base)]{ \node[shape=circle,draw,inner sep=2pt] (char) {1};}=0$. Next, we consider $\tikz[baseline=(char.base)]{ \node[shape=circle,draw,inner sep=2pt] (char) {2};}$.
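(As a quick standalone check of the sign analysis for $\ln y$ above — an illustration only, not part of the proof — one can evaluate $\ln y$ numerically and confirm it is negative and decreasing:)

```python
import math

def log_y(T):
    # ln y for y = 7^{T^2-T} / (8^{T^2-(5/3)T} * e^{T^2-2T})
    return ((T ** 2 - T) * math.log(7)
            - (T ** 2 - 5 * T / 3) * math.log(8)
            - (T ** 2 - 2 * T))

# the quadratic coefficient ln7 - ln8 - 1 is negative, so log_y should
# be negative and strictly decreasing for large T
vals = [log_y(T) for T in (5, 10, 20, 40)]
```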
We have $$\int_{N}^{+\infty}\exp(-\tau TK\epsilon(J) cx^{\alpha}J)dx\leq\tilde{K}\cdot\Gamma\bigg(\dfrac{1}{\alpha}\bigg)(8^Te^T)^{\frac{1}{\alpha}}\dfrac{1}{T^{1/\alpha}},$$ where $\tilde{K}=\dfrac{1}{\alpha}\bigg(\dfrac{32}{K\epsilon(J) cJ}\bigg)^{\frac{1}{\alpha}}$ and $\Gamma$ is the gamma function. When $T>1$, $T^{1/\alpha}>1$ for all $\alpha>0$, so $$\int_{N}^{+\infty}\exp(-\tau TK\epsilon(J) cx^{\alpha}J)dx\leq\tilde{K}\cdot\Gamma\bigg(\dfrac{1}{\alpha}\bigg)(8^Te^T)^{\frac{1}{\alpha}}.$$ Then we have $$\begin{split} \tikz[baseline=(char.base)]{ \node[shape=circle,draw,inner sep=2pt] (char) {2};} &=\dfrac{2(1+7^T)7^{T^2-2T}}{8^{T^2-\frac{5}{3}T}\cdot\sqrt{e^{2T-2}-e^{T-1}}\cdot e^{T^2-2T}\cdot(\sqrt{1-e^{-1}})^{T-2}}\cdot\bigg(\int_{N}^{+\infty}\exp(-\tau K\epsilon(J) cx^{\alpha}TJ)dx\bigg)\\ &\leq\tilde{K}\Gamma\bigg(\dfrac{1}{\alpha}\bigg)\dfrac{2(1+7^T)7^{T^2-2T}8^{\frac{T}{\alpha}}e^{\frac{T}{\alpha}}}{8^{T^2-\frac{5}{3}T}\cdot\sqrt{e^{2T-2}-e^{T-1}}\cdot e^{T^2-2T}\cdot(\sqrt{1-e^{-1}})^{T-2}}\\ &=:\tilde{K}\Gamma\bigg(\dfrac{1}{\alpha}\bigg)\tikz[baseline=(char.base)]{ \node[shape=circle,draw,inner sep=2pt] (char) {2a};}. \end{split}$$ Since $\tilde{K}$ and $\Gamma\left(\dfrac{1}{\alpha}\right)$ are independent of $T$, it suffices to show that $\underset{T\rightarrow+\infty}{\lim}\tikz[baseline=(char.base)]{ \node[shape=circle,draw,inner sep=2pt] (char) {2a};}=0$. Now $$\underset{T\rightarrow+\infty}{\lim}\tikz[baseline=(char.base)]{ \node[shape=circle,draw,inner sep=2pt] (char) {2a};} =\underset{T\rightarrow+\infty}{\lim}\dfrac{2(1+7^T)7^{T^2-2T}\cdot8^{\frac{T}{\alpha}}\cdot e^{\frac{T}{\alpha}}}{8^{T^2-\frac{5}{3}T}\cdot\sqrt{e^{2T-2}-e^{T-1}}\cdot e^{T^2-2T}\cdot(\sqrt{1-e^{-1}})^{T-2}}.$$ We know that $\underset{T\rightarrow+\infty}{\lim}\dfrac{1}{\sqrt{e^{2T-2}-e^{T-1}}\cdot e^{T^2-2T}}=0$.
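The gamma-function bound above rests on the standard identity $\int_{0}^{+\infty}e^{-ax^{\alpha}}dx=\frac{1}{\alpha}\Gamma\big(\frac{1}{\alpha}\big)a^{-1/\alpha}$ for $a,\alpha>0$. As a numerical illustration (truncated midpoint Riemann sum, sample values of $a$ and $\alpha$ chosen only for demonstration):

```python
import math

def tail_integral(a, alpha, upper=400.0, n=400000):
    # midpoint Riemann sum for the integral of exp(-a x^alpha) over [0, upper],
    # truncating the infinite upper limit
    h = upper / n
    return sum(math.exp(-a * ((i + 0.5) * h) ** alpha) for i in range(n)) * h

a, alpha = 0.7, 0.5
closed_form = math.gamma(1.0 / alpha) * a ** (-1.0 / alpha) / alpha  # equals 2/a^2 here
approx = tail_integral(a, alpha)
```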
Rearranging factors, we get $$\begin{split} \underset{T\rightarrow+\infty}{\lim}\tikz[baseline=(char.base)]{ \node[shape=circle,draw,inner sep=2pt] (char) {2a};}&=2\bigg(\underset{T\rightarrow+\infty}{\lim}\dfrac{1}{\sqrt{e^{2T-2}-e^{T-1}}\cdot(\sqrt{1-e^{-1}})^{T-2}}\bigg)\\ &\quad\cdot\bigg(\underset{T\rightarrow+\infty}{\lim}\dfrac{(1+7^T)7^{T^2-2T}\cdot8^{\frac{T}{\alpha}}\cdot e^{\frac{T}{\alpha}}}{8^{T^2-\frac{5}{3}T}\cdot e^{T^2-2T}}\bigg)\\ &=2\bigg(\underset{T\rightarrow+\infty}{\lim}\dfrac{1}{\sqrt{e^{2T-2}-e^{T-1}}\cdot(\sqrt{1-e^{-1}})^{T-2}}\bigg)\\ &\quad\cdot\left[\bigg(\underset{T\rightarrow+\infty}{\lim}\dfrac{7^{T^2-2T}\cdot8^{\frac{T}{\alpha}}\cdot e^{\frac{T}{\alpha}}}{8^{T^2-\frac{5}{3}T}\cdot e^{T^2-2T}}\bigg) +\bigg(\underset{T\rightarrow+\infty}{\lim}\dfrac{7^{T^2-T}\cdot8^{\frac{T}{\alpha}}\cdot e^{\frac{T}{\alpha}}}{8^{T^2-\frac{5}{3}T}\cdot e^{T^2-2T}}\bigg)\right]. \end{split}$$ Since $\dfrac{7^{T^2-2T}\cdot8^{\frac{T}{\alpha}}\cdot e^{\frac{T}{\alpha}}}{8^{T^2-\frac{5}{3}T}\cdot e^{T^2-2T}}\leq\dfrac{7^{T^2-T}\cdot8^{\frac{T}{\alpha}}\cdot e^{\frac{T}{\alpha}}}{8^{T^2-\frac{5}{3}T}\cdot e^{T^2-2T}}$ when $T>2$, if we can show $$\underset{T\rightarrow+\infty}{\lim}\dfrac{7^{T^2-T}\cdot8^{\frac{T}{\alpha}}\cdot e^{\frac{T}{\alpha}}}{8^{T^2-\frac{5}{3}T}\cdot e^{T^2-2T}}=0,$$ then by the squeeze theorem, we must have $$\underset{T\rightarrow+\infty}{\lim}\dfrac{7^{T^2-2T}\cdot8^{\frac{T}{\alpha}}\cdot e^{\frac{T}{\alpha}}}{8^{T^2-\frac{5}{3}T}\cdot e^{T^2-2T}}=0.$$ Now, $$\begin{split} \underset{T\rightarrow+\infty}{\lim}\dfrac{7^{T^2-T}\cdot8^{\frac{T}{\alpha}}\cdot e^{\frac{T}{\alpha}}}{8^{T^2-\frac{5}{3}T}\cdot e^{T^2-2T}} &=\bigg(\underset{T\rightarrow+\infty}{\lim}\dfrac{1}{\exp\big(T^2-T\big(2+\frac{1}{\alpha}\big)\big)}\bigg)\\ &\quad\cdot\bigg(\underset{T\rightarrow+\infty}{\lim}\dfrac{7^{T^2-T}}{8^{T^2-T\big(\frac{5}{3}+\frac{1}{\alpha}\big)}}\bigg).
\end{split}$$ We have $\underset{T\rightarrow+\infty}{\lim}\dfrac{1}{\exp\big(T^2-T\big(2+\frac{1}{\alpha}\big)\big)}=0$. Now let $y=\dfrac{7^{T^2-T}}{8^{T^2-T\big(\frac{5}{3}+\frac{1}{\alpha}\big)}}$. Then $$\begin{split} \ln y &=(T^2-T)\ln7-\big(T^2-T\big(\frac{5}{3}+\frac{1}{\alpha}\big)\big)\ln8\\ &=T\big(T(\ln7-\ln8)+\big(\frac{5}{3}+\dfrac{1}{\alpha}\big)\ln8-\ln7\big). \end{split}$$ Since $\ln7<\ln8$ and $\alpha>0$, as $T\rightarrow+\infty$, $\ln y\rightarrow-\infty$. So we have $y\rightarrow0$. We showed that $$\underset{T\rightarrow+\infty}{\lim}\tikz[baseline=(char.base)]{ \node[shape=circle,draw,inner sep=2pt] (char) {2a};}=0.$$ Then we have $\underset{T\rightarrow+\infty}{\lim}\tikz[baseline=(char.base)]{ \node[shape=circle,draw,inner sep=2pt] (char) {2};}=0.$ We proved that $$\underset{T\rightarrow+\infty}{\lim}\sum_{n> J/2\pi}P\bigg(\dfrac{1}{TJ}S_n>Kn^{-2}\bigg)=0.$$ In Cases 2 and 3, we follow the same steps; since there are finitely many $n$ in each case, we can skip Step 4.\ **Case (2) When $J>2\pi n$**, recall that $$\tilde{a}_n(t)=\dfrac{\gamma_n}{\omega_n}\int_{-\infty}^{t}\Big(e^{\frac{-1+\omega_n}{2}(t-s)}-e^{\frac{-1-\omega_n}{2}(t-s)}\Big)dW_n(s)$$ where $\omega_n=\frac{\sqrt{1-4k_n^2}}{2}$ and $k_n=\frac{n\pi}{J}$.
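In this case $0<k_n<\tfrac{1}{2}$, so $0<\omega_n<\tfrac{1}{2}$; in particular $1-\omega_n>0$, which is used repeatedly below. A quick numerical illustration (sample $J$, all admissible $n$):

```python
import math

J = 100.0  # sample period; Case 2 concerns integers 0 < n < J/(2*pi)
ns = [n for n in range(1, 200) if n < J / (2 * math.pi)]
# omega_n = sqrt(1 - 4 k_n^2)/2 with k_n = n*pi/J
omegas = [math.sqrt(1 - 4 * (n * math.pi / J) ** 2) / 2 for n in ns]
in_range = all(0 < w < 0.5 for w in omegas)
```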
**Step 1.** For each $n<\frac{J}{2\pi}$, we will find an upper bound of $P\left(\dfrac{1}{TJ}S_n>Kn^{-2}\right)$. Let $C_n(t)=\int_{-\infty}^{t}\Big(e^{\frac{-1+\omega_n}{2}(t-s)}-e^{\frac{-1-\omega_n}{2}(t-s)}\Big)dW_n(s)$; then $$C_n(t)=e^{\frac{-1+\omega_n}{2}t}\int_{-\infty}^{t}e^{\frac{1-\omega_n}{2}s}dW_n(s)-e^{\frac{-1-\omega_n}{2}t}\int_{-\infty}^{t}e^{\frac{1+\omega_n}{2}s}dW_n(s).$$ Each integral above has mean $0$, and by the Itô isometry, $$\begin{split} {\textrm{E}}\,\big[\big(\int_{-\infty}^{t}e^{\frac{1-\omega_n}{2}s}dW_n(s)\big)^2\big]&=\int_{-\infty}^{t}e^{(1-\omega_n)s}ds=\dfrac{1}{1-\omega_n}e^{(1-\omega_n)t},\\ {\textrm{E}}\,\big[\big(\int_{-\infty}^{t}e^{\frac{1+\omega_n}{2}s}dW_n(s)\big)^2\big]&=\int_{-\infty}^{t}e^{(1+\omega_n)s}ds=\dfrac{1}{1+\omega_n}e^{(1+\omega_n)t}. \end{split}$$ Then we have the following $$\begin{split} \int_{-\infty}^{t}e^{\frac{1-\omega_n}{2}s}dW_n(s)&\stackrel{D}{=}B_{\int_{-\infty}^{t}e^{(1-\omega_n)s}ds}\stackrel{D}{=}B_{\frac{1}{1-\omega_n}e^{(1-\omega_n)t}}=:\tilde{B}_t^-,\\ \int_{-\infty}^{t}e^{\frac{1+\omega_n}{2}s}dW_n(s)&\stackrel{D}{=}B_{\int_{-\infty}^{t}e^{(1+\omega_n)s}ds}\stackrel{D}{=}B_{\frac{1}{1+\omega_n}e^{(1+\omega_n)t}}=:\tilde{B}_t^+. \end{split}$$ We define $$\tilde{B}_t:=\underset{0\leq s\leq t}{\sup}\left|B_{\frac{1}{1-\omega_n}s}\right|.$$ Since $\dfrac{1}{1\pm\omega_n}e^{(1\pm\omega_n)t}\leq\dfrac{1}{1-\omega_n}e^{(1+\omega_n)t}$ for $t\geq0$, we have $$\big|\tilde{B}_t^\pm\big|\leq\underset{0\leq s\leq e^{(1+\omega_n)t}}{\sup}\left|B_{\frac{1}{1-\omega_n}s}\right|=\tilde{B}_{e^{(1+\omega_n)t}},$$ and hence, given any $\lambda\in\mathbb{R}$, $$\begin{split} P(C_n^2(t)>\lambda) \leq P\big(4e^{(-1+\omega_n)t}\tilde{B}^2_{e^{(1+\omega_n)t}}>\lambda\big).
\end{split}$$ By ([\[ineqoftildeB\]](#ineqoftildeB){reference-type="ref" reference="ineqoftildeB"}), given $0\leq s\leq t$, $$\tilde{B}_t\leq 3\tilde{B}_s+\tilde{B}_{s,t}$$ where $$\tilde{B}_{s,t}:=\underset{s\leq r\leq t}{\sup}\left|B_{\frac{1}{1-\omega_n}r}-B_{\frac{1}{1-\omega_n}s}\right|$$ and $$\tilde{B}_{s,t}\stackrel{D}{=}\tilde{B}_{t-s}.$$ Let $\tau>0$; then by Markov's inequality, $$\begin{split} P\bigg(\dfrac{1}{TJ}S_n>Kn^{-2}\bigg) &=P\bigg(\int_{0}^{T}C_n^2(t)dt>\dfrac{K}{n^2}\bigg(\dfrac{\omega_n}{\gamma_n}\bigg)^2TJ\bigg)\\ &=P\bigg(e^{\tau\int_{0}^{T}C_n^2(t)dt}>e^{\tau\frac{K}{n^2}(\frac{\omega_n}{\gamma_n})^2TJ}\bigg)\\ &\leq {\textrm{E}}\,\bigg[e^{\tau\int_{0}^{T}C_n^2(t)dt}\bigg]e^{-\tau\frac{K}{n^2}(\frac{\omega_n}{\gamma_n})^2TJ}\\ &\leq{\textrm{E}}\,\bigg[e^{4\tau\int_{0}^{T}e^{(-1+\omega_n)t}\tilde{B}^2_{e^{(1+\omega_n)t}}dt}\bigg]e^{-\tau\frac{K}{n^2}(\frac{\omega_n}{\gamma_n})^2TJ}.\label{probofJ>2pin} \end{split}$$ **Step 2.** We will find an upper bound of $\int_{0}^{T}e^{(-1+\omega_n)t}\tilde{B}^2_{e^{(1+\omega_n)t}}dt$.
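The pathwise inequality $\tilde{B}_t\leq 3\tilde{B}_s+\tilde{B}_{s,t}$ invoked above can be illustrated on simulated paths. The snippet below (an illustration only; a discrete Gaussian random walk stands in for the time-changed Brownian motion) checks it over many sample paths:

```python
import random

random.seed(0)

def sup_inequality_holds(n_steps=1000, split=400):
    # one discrete random-walk path; check that the running sup of |B| over
    # [0, t] is at most 3 * (sup over [0, s]) + (sup of |B_r - B_s| over [s, t])
    path = [0.0]
    for _ in range(n_steps):
        path.append(path[-1] + random.gauss(0.0, 1.0))
    sup_t = max(abs(x) for x in path)
    sup_s = max(abs(x) for x in path[: split + 1])
    sup_inc = max(abs(x - path[split]) for x in path[split:])
    return sup_t <= 3 * sup_s + sup_inc

ok = all(sup_inequality_holds() for _ in range(200))
```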
Let $T>2$ be an integer. By Lemma 4.1, $$\begin{split} \int_{0}^{T}&e^{(-1+\omega_n)t}\tilde{B}^2_{e^{(1+\omega_n)t}}dt \leq\int_{0}^{1}e^{(-1+\omega_n)t}\tilde{B}^2_{e^{(1+\omega_n)t}}dt +\tilde{B}^2_{e^{(1+\omega_n)}}\sum_{k=1}^{T-1}\int_{k}^{k+1}2^k\cdot 3^{2k}e^{(-1+\omega_n)t}dt\\ & +\sum_{m=1}^{T-2}\bigg[\bigg(\sum_{k=m+1}^{T-1}\int_{k}^{k+1}2^{k-m+1}\cdot3^{2(k-m)}e^{(-1+\omega_n)t}\tilde{B}^2_{e^{(1+\omega_n)m},e^{(1+\omega_n)(m+1)}}dt\bigg)\\ &\qquad\qquad+2\int_{m}^{m+1}e^{(-1+\omega_n)t}\tilde{B}^2_{e^{m(1+\omega_n)},e^{(1+\omega_n)t}}dt\bigg].\label{J>2pininteB} \end{split}$$ Let $$\begin{split} (\text{I})&=\int_{0}^{1}e^{(-1+\omega_n)t}\tilde{B}^2_{e^{(1+\omega_n)t}}dt +\tilde{B}^2_{e^{(1+\omega_n)}}\sum_{k=1}^{T-1}\int_{k}^{k+1}2^k\cdot 3^{2k}e^{(-1+\omega_n)t}dt,\\ (\text{II})&=\sum_{m=1}^{T-2}\bigg[\bigg(\sum_{k=m+1}^{T-1}\int_{k}^{k+1}2^{k-m+1}\cdot3^{2(k-m)}e^{(-1+\omega_n)t}\tilde{B}^2_{e^{(1+\omega_n)m},e^{(1+\omega_n)(m+1)}}dt\bigg)\\ &\qquad\qquad+2\int_{m}^{m+1}e^{(-1+\omega_n)t}\tilde{B}^2_{e^{m(1+\omega_n)},e^{(1+\omega_n)t}}dt\bigg]. \end{split}$$ We start with (I). Note that (I)$\in\mathcal{F}_{e^{(1+\omega_n)}}$. We have $$\begin{split} \sum_{k=1}^{T-1}\int_{k}^{k+1}2^k\cdot3^{2k}e^{(-1+\omega_n)t}dt \leq\dfrac{1}{1-\omega_n}\sum_{k=1}^{T-1}\bigg(\dfrac{18}{e^{1-\omega_n}}\bigg)^k. \end{split}$$ Since $\omega_n=\dfrac{\sqrt{1-4k_n^2}}{2}$ where $k_n=\dfrac{n\pi}{J}$, we have $1-\omega_n>0$, and $$\sum_{k=1}^{T-1}\int_{k}^{k+1}2^k\cdot3^{2k}e^{(-1+\omega_n)t}dt \leq\dfrac{1}{1-\omega_n}\sum_{k=1}^{T-1}18^k=\dfrac{1}{1-\omega_n}\dfrac{18^T-18}{17}.$$ Then we have $$\begin{split} (\text{I}) \leq \int_{0}^{1}e^{(-1+\omega_n)t}\tilde{B}^2_{e^{(1+\omega_n)t}}dt+\dfrac{18^T}{1-\omega_n}\tilde{B}^2_{e^{1+\omega_n}}.
\end{split}$$ Since $e^{(-1+\omega_n)t}\leq1$ and $\tilde{B}^2_{e^{(1+\omega_n)t}}\leq\tilde{B}^2_{e^{(1+\omega_n)}}, \forall t\in[0,1]$, $$(\text{I})\leq \tilde{B}^2_{e^{(1+\omega_n)}}\bigg(1+\dfrac{18^T}{1-\omega_n}\bigg).$$ Next, we consider $$\begin{split} (\text{II})&=\sum_{m=1}^{T-2}\bigg[\bigg(\sum_{k=m+1}^{T-1}\int_{k}^{k+1}2^{k-m+1}\cdot3^{2(k-m)}e^{(-1+\omega_n)t}\tilde{B}^2_{e^{(1+\omega_n)m},e^{(1+\omega_n)(m+1)}}dt\bigg)\\ &\qquad\qquad+2\int_{m}^{m+1}e^{(-1+\omega_n)t}\tilde{B}^2_{e^{m(1+\omega_n)},e^{(1+\omega_n)t}}dt\bigg]. \end{split}$$ For each $m\in\{1, 2,\dots, T-2\}$, let $$\begin{split} M_m=&\bigg[\bigg(\sum_{k=m+1}^{T-1}\int_{k}^{k+1}2^{k-m+1}\cdot3^{2(k-m)}e^{(-1+\omega_n)t}\tilde{B}^2_{e^{(1+\omega_n)m},e^{(1+\omega_n)(m+1)}}dt\bigg)\\ &\qquad\qquad+2\int_{m}^{m+1}e^{(-1+\omega_n)t}\tilde{B}^2_{e^{m(1+\omega_n)},e^{(1+\omega_n)t}}dt\bigg]. \end{split}$$ Then $M_m\in\mathcal{F}_{e^{(1+\omega_n)(m+1)}}$ and $M_m\perp \!\!\! \perp\mathcal{F}_{e^{(1+\omega_n)m}}$. For a fixed $m$, we have $$\sum_{k=m+1}^{T-1}\int_{k}^{k+1}2^{k-m+1}\cdot3^{2(k-m)}e^{(-1+\omega_n)t}\tilde{B}^2_{e^{(1+\omega_n)m},e^{(1+\omega_n)(m+1)}}dt$$ $$\hspace{1.5in}\leq \dfrac{\tilde{B}^2_{e^{(1+\omega_n)m},e^{(1+\omega_n)(m+1)}}\cdot2^{1-m}\cdot9^{-m}\cdot18^T}{1-\omega_n}.\hspace{3.2cm}$$ Let $\alpha'_m=2^{1-m}\cdot9^{-m}\cdot18^T$. Then we have $$\begin{split} (\text{II})&\leq \sum_{m=1}^{T-2}\left[\dfrac{\tilde{B}^2_{e^{(1+\omega_n)m},e^{(1+\omega_n)(m+1)}}\cdot\alpha'_m}{1-\omega_n}+2\tilde{B}^2_{e^{(1+\omega_n)m},e^{(1+\omega_n)(m+1)}}\dfrac{e^{(-1+\omega_n)m}}{1-\omega_n}\right]\\ &=\dfrac{1}{1-\omega_n}\sum_{m=1}^{T-2}\tilde{B}^2_{e^{(1+\omega_n)m},e^{(1+\omega_n)(m+1)}}\left(\alpha'_m+2e^{(-1+\omega_n)m}\right).
\end{split}$$ **Step 3.** We will choose a proper value for $\tau$ and find an explicit upper bound of\ ${\textrm{E}}\,\big[e^{4\tau\int_{0}^{T}e^{(-1+\omega_n)t}\tilde{B}_{e^{(1+\omega_n)t}}^2dt}\big]$.\ Back to ([\[J\>2pininteB\]](#J>2pininteB){reference-type="ref" reference="J>2pininteB"}), we have $$\begin{split} \int_{0}^{T}e^{(-1+\omega_n)t}\tilde{B}^2_{e^{(1+\omega_n)t}}dt &\leq\left(\dfrac{18^T}{1-\omega_n}+1\right)\tilde{B}^2_{e^{1+\omega_n}}\\ &\quad+\dfrac{1}{1-\omega_n}\sum_{m=1}^{T-2}\tilde{B}^2_{e^{(1+\omega_n)m},e^{(1+\omega_n)(m+1)}}\left(\alpha'_m+2e^{(-1+\omega_n)m}\right). \end{split}$$ Then, using the independence of $\tilde{B}^2_{e^{1+\omega_n}}$ and the increments $\tilde{B}^2_{e^{(1+\omega_n)m},e^{(1+\omega_n)(m+1)}}$, $m=1,\dots,T-2$, $$\begin{split} {\textrm{E}}\,&\left[e^{4\tau\int_{0}^{T}e^{(-1+\omega_n)t}\tilde{B}^2_{e^{(1+\omega_n)t}}dt}\right] \leq{\textrm{E}}\,\left[\exp\left(4\tau\left(\dfrac{18^T}{1-\omega_n}+1\right)\tilde{B}^2_{e^{1+\omega_n}} \right)\right]\\ &\quad\cdot\prod_{m=1}^{T-2}{\textrm{E}}\,\left[\exp\left(4\tau\dfrac{1}{1-\omega_n}\tilde{B}^2_{e^{(1+\omega_n)m},e^{(1+\omega_n)(m+1)}}\left(\alpha'_m+2e^{(-1+\omega_n)m}\right)\right)\right].\label{J>2pinexpectation} \end{split}$$ Let $$\begin{split} \text{(i)}&={\textrm{E}}\,\left[\exp\left(4\tau\left(\dfrac{18^T}{1-\omega_n}+1\right)\tilde{B}^2_{e^{1+\omega_n}} \right)\right]\\ \text{(ii)}&=\prod_{m=1}^{T-2}{\textrm{E}}\,\left[\exp\left(4\tau\dfrac{1}{1-\omega_n}\tilde{B}^2_{e^{(1+\omega_n)m},e^{(1+\omega_n)(m+1)}}\left(\alpha'_m+2e^{(-1+\omega_n)m}\right)\right)\right]. \end{split}$$ We start with (i). $$\begin{split} {\textrm{E}}\,\left[\exp\left(4\tau\left(\dfrac{18^T}{1-\omega_n}+1\right)\tilde{B}^2_{e^{1+\omega_n}}\right)\right] &=\int_{1}^{+\infty}P\left(\exp\left(4\tau\left(\dfrac{18^T}{1-\omega_n}+1\right)\tilde{B}^2_{e^{1+\omega_n}}\right)\geq x\right)dx\\ &=4\int_{1}^{+\infty}P\left(B_{e^{(1+\omega_n)}}\geq\sqrt{\dfrac{\log x}{4\tau\left(\dfrac{18^T}{1-\omega_n}+1\right)}}\right)dx.
\end{split}$$ When $1-\dfrac{1}{8\tau e^{1+\omega_n}(\frac{18^T}{1-\omega_n}+1)}<0$, that is $\tau<\dfrac{1}{8\left(\frac{18^T}{1-\omega_n}+1\right)e^{1+\omega_n}}$, by Lemma 4.2, we have $${\textrm{E}}\,\left[\exp\left(4\tau\left(\dfrac{18^T}{1-\omega_n}+1\right)\tilde{B}^2_{e^{1+\omega_n}}\right)\right] \leq \dfrac{16\tau e^{1+\omega_n}\left(\frac{18^T}{1-\omega_n}+1\right)}{\sqrt{1-8\tau\left(\frac{18^T}{1-\omega_n}+1\right)e^{(1+\omega_n)}}}.$$ Next we work on (ii). For each $m\in\{1, 2, \dots, T-2\}$, $$\begin{split} {\textrm{E}}\,&\left[\exp\left(4\tau\dfrac{1}{1-\omega_n}\tilde{B}^2_{e^{(1+\omega_n)m},e^{(1+\omega_n)(m+1)}}\left(\alpha'_m+2e^{(-1+\omega_n)m}\right)\right)\right]\\ &=4\int_{1}^{\infty}P\left(B_{e^{(1+\omega_n)(m+1)}-e^{(1+\omega_n)m}}\geq\sqrt{\dfrac{\log x}{4\tau\frac{1}{1-\omega_n}(\alpha'_m+2e^{(-1+\omega_n)m})}}\right)dx. \end{split}$$ Let $\sigma_m^2=e^{(1+\omega_n)(m+1)}-e^{(1+\omega_n)m}$ and $\rho_m=4\tau\frac{1}{1-\omega_n}(\alpha'_m+2e^{(-1+\omega_n)m})$. When $1-\dfrac{1}{2\rho_m\sigma_m^2}<0$, that is $\tau<\dfrac{1}{8\frac{1}{1-\omega_n}(2^{1-m}\cdot9^{-m}\cdot18^T+2e^{(-1+\omega_n)m})(e^{(1+\omega_n)(m+1)}-e^{(1+\omega_n)m})}$, by Lemma 4.2, we have $$\begin{split} {\textrm{E}}\,&\left[\exp\left(4\tau\dfrac{1}{1-\omega_n}\tilde{B}^2_{e^{(1+\omega_n)m},e^{(1+\omega_n)(m+1)}}\left(\alpha'_m+2e^{(-1+\omega_n)m}\right)\right)\right]\\ &\leq\dfrac{16\tau\frac{1}{1-\omega_n}(\alpha'_m+2e^{(-1+\omega_n)m})\sigma_m^2}{\sqrt{1-8\tau\frac{1}{1-\omega_n}(\alpha'_m+2e^{(-1+\omega_n)m})\sigma_m^2}}. \end{split}$$ Considering (i) and (ii) together, we want $\tau$ to satisfy the following conditions: $$\begin{cases} \tau<\dfrac{1}{8\frac{1}{1-\omega_n}(2^{1-m}\cdot9^{-m}\cdot18^T+2e^{(-1+\omega_n)m})(e^{(1+\omega_n)(m+1)}-e^{(1+\omega_n)m})}, & \forall m\in\{1,\dots, T-2\}\\ \tau<\dfrac{1}{8\left(\frac{18^T}{1-\omega_n}+1\right)e^{1+\omega_n}}.
\end{cases}$$ So, we let $$\tau=\dfrac{1}{8\cdot\frac{1}{1-\omega_n}\cdot19^T\cdot e^{(1+\omega_n)T}}.$$ Back to ([\[J\>2pinexpectation\]](#J>2pinexpectation){reference-type="ref" reference="J>2pinexpectation"}), $$\begin{split} {\textrm{E}}\,\bigg[\exp\bigg(4\tau\int_{0}^{T}e^{(-1+\omega_n)t}&\tilde{B}^2_{e^{(1+\omega_n)t}}dt\bigg)\bigg]\\ &\leq\dfrac{\frac{16 e^{1+\omega_n}\left(\frac{18^T}{1-\omega_n}+1\right)}{8\cdot\frac{1}{1-\omega_n}\cdot19^T\cdot e^{(1+\omega_n)T}}}{\sqrt{1-8\left(\frac{18^T}{1-\omega_n}+1\right)e^{(1+\omega_n)}\cdot\frac{1}{8\cdot\frac{1}{1-\omega_n}\cdot19^T\cdot e^{(1+\omega_n)T}}}}\\ &\quad\cdot\prod_{m=1}^{T-2}\dfrac{\frac{16\frac{1}{1-\omega_n}(\alpha'_m+2e^{(-1+\omega_n)m})\sigma_m^2}{8\cdot\frac{1}{1-\omega_n}\cdot19^T\cdot e^{(1+\omega_n)T}}}{\sqrt{1-\frac{1}{19^T\cdot e^{(1+\omega_n)T}}(\alpha'_m+2e^{(-1+\omega_n)m})\sigma_m^2}}. \end{split}$$ Recall $\sigma_m^2=e^{(1+\omega_n)(m+1)}-e^{(1+\omega_n)m}$ and $\alpha'_m=2^{1-m}\cdot9^{-m}\cdot18^T$, then $$\begin{split} {\textrm{E}}\,&\left[\exp\left(4\tau\int_{0}^{T}e^{(-1+\omega_n)t}\tilde{B}^2_{e^{(1+\omega_n)t}}dt\right)\right]\\ &\leq\dfrac{2\left(\frac{18^T}{1-\omega_n}+1\right)}{\sqrt{\frac{1}{1-\omega_n}\cdot19^T\cdot e^{(1+\omega_n)(T-1)}}\sqrt{\frac{1}{1-\omega_n}\cdot19^T\cdot e^{(1+\omega_n)(T-1)}-\frac{19^T}{1-\omega_n}}}\\ &\quad\cdot\prod_{m=1}^{T-2}\dfrac{2(\alpha'_m+2e^{(-1+\omega_n)m})\cdot e^{(1+\omega_n)(m+1)}}{\sqrt{19^T\cdot e^{(1+\omega_n)T}}\sqrt{19^T\cdot e^{(1+\omega_n)T}-19^T\cdot e^{(1+\omega_n)(T-1)}}}. 
\end{split}$$ Since $0<\omega_n<1$, for $T>5$ we have the following $$\begin{split} {\textrm{E}}\,&\left[\exp\left(4\tau\int_{0}^{T}e^{(-1+\omega_n)t}\tilde{B}^2_{e^{(1+\omega_n)t}}dt\right)\right]\\ &\leq \dfrac{2(18^T+1-\omega_n)}{19^T\sqrt{e^{(1+\omega_n)(2T-2)}-e^{(1+\omega_n)(T-1)}}}\prod_{m=1}^{T-2}\dfrac{2\left(2\cdot18^Te^{(1+\omega_n)}\frac{e^{(1+\omega_n)m}}{18^m}+2e^{(1+\omega_n)}\cdot e^{2\omega_nm}\right)}{19^T\sqrt{e^{(1+\omega_n)2T}-e^{(1+\omega_n)(2T-1)}}}. \end{split}$$ Since $\frac{e^{(1+\omega_n)m}}{18^m}<1$ and $e^{2\omega_nm}<18^T$ for $T$ sufficiently large, $$\begin{split} {\textrm{E}}\,&\left[\exp\left(4\tau\int_{0}^{T}e^{(-1+\omega_n)t}\tilde{B}^2_{e^{(1+\omega_n)t}}dt\right)\right]\\ &\leq \dfrac{4\cdot18^T}{19^T\sqrt{e^{(1+\omega_n)(2T-2)}-e^{(1+\omega_n)(T-1)}}}\cdot\dfrac{8^{T-2}\cdot e^{(1+\omega_n)(T-2)}\cdot 18^{T(T-2)} }{19^{T(T-2)}e^{(1+\omega_n)T(T-2)}\left(\sqrt{1-e^{-(1+\omega_n)}}\right)^{T-2}}. \end{split}$$ **Step 4.** We are ready to get an upper bound of $P\left(\dfrac{1}{TJ}S_n>Kn^{-2}\right)$ and will show that $$\underset{T\rightarrow+\infty}{\lim}P\left(\dfrac{1}{TJ}S_n>Kn^{-2}\right)=0.$$ Back to ([\[probofJ\>2pin\]](#probofJ>2pin){reference-type="ref" reference="probofJ>2pin"}), we have $$\begin{split} P\left(\dfrac{1}{TJ}S_n>Kn^{-2}\right) &\leq\dfrac{4\cdot18^T\cdot 8^{T-2}\cdot e^{(1+\omega_n)(T-2)}}{19^T\sqrt{e^{(1+\omega_n)(2T-2)}-e^{(1+\omega_n)(T-1)}}}\\ &\quad\cdot\dfrac{18^{T(T-2)}}{19^{T(T-2)}e^{(1+\omega_n)T(T-2)}\left(\sqrt{1-e^{-(1+\omega_n)}}\right)^{T-2}}\\ &\quad\cdot\dfrac{1}{\exp\left(\frac{1}{8\frac{1}{1-\omega_n}19^Te^{(1+\omega_n)T}}\frac{K}{n^2}\left(\frac{\omega_n}{\gamma_n}\right)^2TJ\right)}.
\end{split}$$ Let $$\text{(1)}=\dfrac{18^{T(T-2)}}{19^{T(T-1)}},\quad \text{(2)}=\dfrac{4\cdot18^T\cdot 8^{T-2}\cdot e^{(1+\omega_n)(T-2)}}{e^{(1+\omega_n)T(T-2)}},$$ $$\text{(3)}=\dfrac{1}{\sqrt{e^{(1+\omega_n)(2T-2)}-e^{(1+\omega_n)(T-1)}}\cdot\left(\sqrt{1-e^{-(1+\omega_n)}}\right)^{T-2}},$$ and $$\text{(4)}=\dfrac{1}{\exp\left(\frac{1}{8\frac{1}{1-\omega_n}19^Te^{(1+\omega_n)T}}\frac{K}{n^2}\left(\frac{\omega_n}{\gamma_n}\right)^2TJ\right)}.$$ Then $$P\left(\dfrac{1}{TJ}S_n>Kn^{-2}\right)\leq \text{(1)}\cdot\text{(2)}\cdot\text{(3)}\cdot\text{(4)}.$$ For fixed $n$ such that $0<n<\dfrac{J}{2\pi}$, $$\underset{T\rightarrow+\infty}{\lim}\text{(4)}= \underset{T\rightarrow+\infty}{\lim}\dfrac{1}{\exp\left(\frac{1}{8\frac{1}{1-\omega_n}19^Te^{(1+\omega_n)T}}\frac{K}{n^2}\left(\frac{\omega_n}{\gamma_n}\right)^2TJ\right)}=\dfrac{1}{e^0}=1.$$ Since $1-e^{-(1+\omega_n)}>1-e^{-1}$, $\sqrt{1-e^{-1}}\geq\dfrac{3}{4}$ and $e^{1/2}\geq 1.6$, $$\begin{split} (3)\leq\dfrac{1}{1.6}\left(\dfrac{4}{4.8}\right)^{T-2}\dfrac{1}{\sqrt{e^{(1+\omega_n)(T-1)}-1}}. \end{split}$$ Then $$\underset{T\rightarrow+\infty}{\lim}(3)= \underset{T\rightarrow+\infty}{\lim}\dfrac{1}{\sqrt{e^{(1+\omega_n)(2T-2)}-e^{(1+\omega_n)(T-1)}}\cdot \left(\sqrt{1-e^{-(1+\omega_n)}}\right)^{T-2}}=0.$$ Also $$\underset{T\rightarrow+\infty}{\lim}(1)=\underset{T\rightarrow+\infty}{\lim}\dfrac{18^{T^2-2T}}{19^{T^2-T}}=0.$$ Finally, $$\begin{split} \underset{T\rightarrow+\infty}{\lim}(2) &=\underset{T\rightarrow+\infty}{\lim}\dfrac{4\cdot18^T\cdot 8^{T-2}\cdot e^{(1+\omega_n)(T-2)}}{e^{(1+\omega_n)(T-2)T}}\\ &=\underset{T\rightarrow+\infty}{\lim}4\cdot18^2\cdot\left(\dfrac{8\cdot18\cdot e^{(1+\omega_n)}}{e^{(1+\omega_n)T}}\right)^{T-2}=0.
\end{split}$$ For each $0<n<\dfrac{J}{2\pi}$, we showed that $$\underset{T\rightarrow+\infty}{\lim}P\left(\dfrac{1}{TJ}S_n>Kn^{-2}\right)=0.$$ **Case (3) When $J=2\pi n$,** recall $$\begin{split} \tilde{a}_n(t) &=\int_{-\infty}^{t}e^{-\frac{1}{2}(t-s)}\gamma_n(t-s)dW_n(s)\\ &=\gamma_n\int_{-\infty}^{t}e^{-\frac{1}{2}(t-s)}(t-s)dW_n(s). \end{split}$$ **Step 1.** We will find an upper bound of $P\left(\dfrac{1}{TJ}S_n>Kn^{-2}\right)$ for $n=\dfrac{J}{2\pi}$. Let $$D_n(t)=\int_{-\infty}^{t}e^{-\frac{1}{2}(t-s)}(t-s)dW_n(s),$$ then $$\begin{split} D_n(t) &=e^{-\frac{1}{2}t}\int_{-\infty}^{t}e^{\frac{1}{2}s}(t-s)dW_n(s)\\ &=te^{-\frac{1}{2}t}\int_{-\infty}^{t}e^{\frac{1}{2}s}dW_n(s)-e^{-\frac{1}{2}t}\int_{-\infty}^{t}s\cdot e^{\frac{1}{2}s}dW_n(s).\label{covJ=2pin} \end{split}$$ The variance of each Itô integral in ([\[covJ=2pin\]](#covJ=2pin){reference-type="ref" reference="covJ=2pin"}) is $${\textrm{E}}\,\left[\left(\int_{-\infty}^{t}e^{\frac{1}{2}s}dW_n(s)\right)^2\right] =\int_{-\infty}^{t}e^sds=e^t$$ and $$\begin{split} {\textrm{E}}\,\left[\left(\int_{-\infty}^{t}s\cdot e^{\frac{1}{2}s}dW_n(s)\right)^2\right] =\int_{-\infty}^{t}s^2e^sds=e^t(t^2-2t+2). \end{split}$$ Next, we let $$I_t^{(1)}:=\int_{-\infty}^{t}e^{\frac{1}{2}s}dW_n(s)$$ and $$I_t^{(2)}:=\int_{-\infty}^{t}s\cdot e^{\frac{1}{2}s}dW_n(s).$$ Then $$\begin{split} D_n(t)&=te^{-\frac{1}{2}t}I_t^{(1)}-e^{-\frac{1}{2}t}I_t^{(2)}\\ I_t^{(1)}&\stackrel{D}{=}B_{e^t}\\ I_t^{(2)}&\stackrel{D}{=}B_{e^t(t^2-2t+2)}. \end{split}$$ Since $$e^t\leq 2e^{2t}\quad\text{and}\quad e^t(t^2-2t+2)\leq 2e^{2t}\quad\text{for all }t\geq0,$$ we define $$\tilde{B}_t:=\underset{0\leq s\leq t}{\sup}\left|B_{2s}\right|,$$ then for each $i\in\{1, 2\}$, we have $$\left|I_t^{(i)}\right|\leq\underset{0\leq s\leq e^{2t}}{\sup}\left|B_{2s}\right|=\tilde{B}_{e^{2t}}.$$ Recall $$\begin{split} S_n(T) &=\int_{0}^{T}\left(\tilde{a}_n(t)\right)^2dt\\ &=\gamma_n^2\int_{0}^{T}D_n^2(t)dt.
\end{split}$$ Let $\tau>0$, $$\begin{split} P\left(\dfrac{1}{TJ}S_n>Kn^{-2}\right) \leq{\textrm{E}}\,\big[\exp\left(\tau\int_{0}^{T}D_n^2(t)dt\right)\big]\cdot \exp\left(-\dfrac{\tau KTJ}{n^2\gamma_n^2}\right).\label{markovJ=2pin} \end{split}$$ Since $\exp\left(\tau\int_{0}^{T}D_n^2(t)dt\right)$ is a nonnegative random variable, $$\begin{split} {\textrm{E}}\,\left[\exp\left(\tau\int_{0}^{T}D_n^2(t)dt\right)\right] =\int_{0}^{+\infty}P\left(\int_{0}^{T}\left(te^{-\frac{1}{2}t}I_t^{(1)}-e^{-\frac{1}{2}t}I_t^{(2)}\right)^2dt>\dfrac{\log s}{\tau}\right)ds.\label{markovJ=2pin-1} \end{split}$$ By the Cauchy-Schwarz inequality, $$\begin{split} {\textrm{E}}\,\left[\exp\left(\tau\int_{0}^{T}D_n^2(t)dt\right)\right] \leq \int_{0}^{+\infty}2P\left(\int_{0}^{T}2(t^2+1)e^{-t}\left(\tilde{B}_{e^{2t}}\right)^2dt>\dfrac{\log s}{2\tau}\right)ds. \end{split}\label{markovJ=2pin-2}$$ **Lemma 6**. *$t^2+1\leq\exp\left(\dfrac{9}{10}t\right)$ for all $t\geq0$.* The proof of Lemma 4.3 is in Appendix [5.8](#exp9/10t){reference-type="ref" reference="exp9/10t"}. By Lemma 4.3, we have $$\begin{split} P\left(\int_{0}^{T}2(t^2+1)e^{-t}\left(\tilde{B}_{e^{2t}}\right)^2dt>\dfrac{\log s}{2\tau}\right) \leq P\left(4\tau\int_{0}^{T}e^{-\frac{1}{10}t}\left(\tilde{B}_{e^{2t}}\right)^2dt>\log s\right). \end{split}$$ Back to ([\[markovJ=2pin-1\]](#markovJ=2pin-1){reference-type="ref" reference="markovJ=2pin-1"}) and ([\[markovJ=2pin-2\]](#markovJ=2pin-2){reference-type="ref" reference="markovJ=2pin-2"}), we have $$\begin{split} {\textrm{E}}\,\left[\exp\left(\tau\int_{0}^{T}D_n^2(t)dt\right)\right] \leq 2{\textrm{E}}\,\left[\exp\left(4\tau\int_{0}^{T}e^{-\frac{1}{10}t}\left(\tilde{B}_{e^{2t}}\right)^2dt\right)\right].
\end{split}$$ Then continuing from ([\[markovJ=2pin\]](#markovJ=2pin){reference-type="ref" reference="markovJ=2pin"}), $$P\left(\dfrac{1}{TJ}S_n>Kn^{-2}\right) \leq2{\textrm{E}}\,\left[\exp\left(4\tau\int_{0}^{T}e^{-\frac{1}{10}t}\left(\tilde{B}_{e^{2t}}\right)^2dt\right)\right]\cdot \exp\left(-\dfrac{\tau KTJ}{n^2\gamma_n^2}\right).\label{markovJ=2pin_2}$$ **Step 2.** We will find an upper bound of $\int_{0}^{T}e^{-\frac{1}{10}t}\tilde{B}^2_{e^{2t}}dt$. Recall $$\tilde{B}_t:=\underset{0\leq s\leq t}{\sup}\left|B_{2s}\right|.$$ Since $n$ is fixed, by ([\[ineqoftildeB\]](#ineqoftildeB){reference-type="ref" reference="ineqoftildeB"}), given $0\leq s\leq t$, $$\tilde{B}_t\leq 3\tilde{B}_s+\tilde{B}_{s,t}$$ where $$\tilde{B}_{s,t}:=\underset{s\leq r\leq t}{\sup}\left|B_{2r}-B_{2s}\right|$$ and $$\tilde{B}_{s,t}\stackrel{D}{=}\tilde{B}_{t-s}.$$ Let $T>2$ be an integer. By Lemma 4.1, $$\begin{split} \int_{0}^{T}&e^{-\frac{1}{10}t}\tilde{B}^2_{e^{2t}}dt \leq\int_{0}^{1}e^{-\frac{1}{10}t}\tilde{B}^2_{e^{2t}}dt +\tilde{B}^2_{e^{2}}\sum_{k=1}^{T-1}\int_{k}^{k+1}2^k\cdot 3^{2k}e^{-\frac{1}{10}t}dt\\ & +\sum_{m=1}^{T-2}\bigg[\bigg(\sum_{k=m+1}^{T-1}\int_{k}^{k+1}2^{k-m+1}\cdot3^{2(k-m)}e^{-\frac{1}{10}t}\tilde{B}^2_{e^{2m},e^{2(m+1)}}dt\bigg)\\ &\qquad\qquad+2\int_{m}^{m+1}e^{-\frac{1}{10}t}\tilde{B}^2_{e^{2m},e^{2t}}dt\bigg].\label{J=2pininteB} \end{split}$$ Let $$\begin{split} (\text{I})&=\int_{0}^{1}e^{-\frac{1}{10}t}\tilde{B}^2_{e^{2t}}dt +\tilde{B}^2_{e^{2}}\sum_{k=1}^{T-1}\int_{k}^{k+1}2^k\cdot 3^{2k}e^{-\frac{1}{10}t}dt,\\ (\text{II})&=\sum_{m=1}^{T-2}\bigg[\bigg(\sum_{k=m+1}^{T-1}\int_{k}^{k+1}2^{k-m+1}\cdot3^{2(k-m)}e^{-\frac{1}{10}t}\tilde{B}^2_{e^{2m},e^{2(m+1)}}dt\bigg)\\ &\qquad\qquad+2\int_{m}^{m+1}e^{-\frac{1}{10}t}\tilde{B}^2_{e^{2m},e^{2t}}dt\bigg]. \end{split}$$ We start with (I). Note that (I)$\in\mathcal{F}_{e^2}$. $$\begin{split} \sum_{k=1}^{T-1}\int_{k}^{k+1}2^k\cdot3^{2k}e^{-\frac{1}{10}t}dt \leq10\sum_{k=1}^{T-1}\bigg(\dfrac{18}{e^{\frac{1}{10}}}\bigg)^k.
\end{split}$$ Then we have $$\sum_{k=1}^{T-1}\int_{k}^{k+1}2^k\cdot3^{2k}e^{-\frac{1}{10}t}dt \leq10\sum_{k=1}^{T-1}18^k=10\cdot\dfrac{18^T-18}{17}$$ and $$\begin{split} (\text{I}) \leq \int_{0}^{1}e^{-\frac{1}{10}t}\tilde{B}^2_{e^{2t}}dt+10\cdot18^T\tilde{B}^2_{e^{2}}. \end{split}$$ Since $e^{-\frac{1}{10}t}\leq1$ and $\tilde{B}^2_{e^{2t}}\leq\tilde{B}^2_{e^{2}}, \forall t\in[0,1]$, $$(\text{I})\leq \tilde{B}^2_{e^{2}}\bigg(1+10\cdot18^T\bigg).$$ Next, we consider $$\begin{split} (\text{II})&=\sum_{m=1}^{T-2}\bigg[\bigg(\sum_{k=m+1}^{T-1}\int_{k}^{k+1}2^{k-m+1}\cdot3^{2(k-m)}e^{-\frac{1}{10}t}\tilde{B}^2_{e^{2m},e^{2(m+1)}}dt\bigg)\\ &\qquad\qquad+2\int_{m}^{m+1}e^{-\frac{1}{10}t}\tilde{B}^2_{e^{2m},e^{2t}}dt\bigg]. \end{split}$$ For each $m\in\{1, 2,\dots, T-2\}$, let $$\begin{split} M_m=&\bigg[\bigg(\sum_{k=m+1}^{T-1}\int_{k}^{k+1}2^{k-m+1}\cdot3^{2(k-m)}e^{-\frac{1}{10}t}\tilde{B}^2_{e^{2m},e^{2(m+1)}}dt\bigg)\\ &\qquad\qquad+2\int_{m}^{m+1}e^{-\frac{1}{10}t}\tilde{B}^2_{e^{2m},e^{2t}}dt\bigg]. \end{split}$$ Then $M_m\in\mathcal{F}_{e^{2(m+1)}}$ and $M_m\perp \!\!\! \perp\mathcal{F}_{e^{2m}}$. $$\begin{split} \sum_{k=m+1}^{T-1}&\int_{k}^{k+1}2^{k-m+1}\cdot3^{2(k-m)}e^{-\frac{1}{10}t}\tilde{B}^2_{e^{2m},e^{2(m+1)}}dt\\ &\leq 10\tilde{B}^2_{e^{2m},e^{2(m+1)}}\cdot2^{1-m}\cdot9^{-m}\cdot18^T. \end{split}$$ We have $$\begin{split} (\text{II})&\leq \sum_{m=1}^{T-2}\left[10\tilde{B}^2_{e^{2m},e^{2(m+1)}}\cdot2^{1-m}\cdot9^{-m}\cdot18^T+2\tilde{B}^2_{e^{2m},e^{2(m+1)}}\cdot10e^{-\frac{1}{10}m}\right]\\ &=10\sum_{m=1}^{T-2}\tilde{B}^2_{e^{2m},e^{2(m+1)}}\left(2^{1-m}\cdot9^{-m}\cdot18^T+2e^{-\frac{1}{10}m}\right). \end{split}$$ Going back to ([\[J=2pininteB\]](#J=2pininteB){reference-type="ref" reference="J=2pininteB"}), we have $$\begin{split} \int_{0}^{T}e^{-\frac{1}{10}t}\tilde{B}^2_{e^{2t}}dt &\leq\left(10\cdot18^T+1\right)\tilde{B}^2_{e^{2}}\\ &\quad+10\sum_{m=1}^{T-2}\tilde{B}^2_{e^{2m},e^{2(m+1)}}\left(2^{1-m}\cdot9^{-m}\cdot18^T+2e^{-\frac{1}{10}m}\right).
\end{split}$$ **Step 3.** We will choose a proper value for $\tau$ and find an explicit upper bound of\ $E\big[e^{4\tau\int_{0}^{T}e^{-\frac{1}{10}t}\tilde{B}^2_{e^{2t}}dt}\big]$.\ From ([\[markovJ=2pin_2\]](#markovJ=2pin_2){reference-type="ref" reference="markovJ=2pin_2"}) $$\begin{split} {\textrm{E}}\,&\left[e^{4\tau\int_{0}^{T}e^{-\frac{1}{10}t}\tilde{B}^2_{e^{2t}}dt}\right] \leq{\textrm{E}}\,\left[\exp\left(4\tau\left(10\cdot18^T+1\right)\tilde{B}^2_{e^{2}} \right)\right]\\ &\quad\cdot\prod_{m=1}^{T-2}{\textrm{E}}\,\left[\exp\left(4\tau\cdot10\cdot\tilde{B}^2_{e^{2m},e^{2(m+1)}}\left(2^{1-m}\cdot9^{-m}\cdot18^T+2e^{-\frac{1}{10}m}\right)\right)\right].\label{318} \end{split}$$ Let $$\begin{split} \text{(i)}&={\textrm{E}}\,\left[\exp\left(4\tau\left(10\cdot18^T+1\right)\tilde{B}^2_{e^{2}} \right)\right]\\ \text{(ii)}&=\prod_{m=1}^{T-2}{\textrm{E}}\,\left[\exp\left(4\tau\cdot10\cdot\tilde{B}^2_{e^{2m},e^{2(m+1)}}\left(2^{1-m}\cdot9^{-m}\cdot18^T+2e^{-\frac{1}{10}m}\right)\right)\right]. \end{split}$$ We start with (i). $$\begin{split} {\textrm{E}}\,\left[\exp\left(4\tau\left(10\cdot18^T+1\right)\tilde{B}^2_{e^{2}}\right)\right] &=\int_{1}^{+\infty}P\left(\exp\left(4\tau\left(10\cdot18^T+1\right)\tilde{B}^2_{e^{2}}\right)\geq x\right)dx\\ &=4\int_{1}^{+\infty}P\left(B_{e^{2}}\geq\sqrt{\dfrac{\log x}{4\tau\left(10\cdot18^T+1\right)}}\right)dx. \end{split}$$ When $1-\dfrac{1}{8e^2\tau\left(10\cdot18^T+1\right)}<0$, that is $\tau<\dfrac{1}{8\left(10\cdot18^T+1\right)e^{2}}$, by Lemma 4.2, $${\textrm{E}}\,\left[\exp\left(4\tau\left(10\cdot18^T+1\right)\tilde{B}^2_{e^{2}}\right)\right]\leq \dfrac{16\tau e^{2}\left(10\cdot18^T+1\right)}{\sqrt{1-8\tau\left(10\cdot18^T+1\right)e^{2}}}.$$ Next, we work on (ii). Let $\alpha'_m=2^{1-m}\cdot9^{-m}\cdot18^T$ and $\sigma_m^2=e^{2(m+1)}-e^{2m}$. 
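The $(1-8\tau\cdots)^{-1/2}$ factors appearing in (i) and (ii) have the shape of the Gaussian moment identity ${\textrm{E}}\,[e^{cZ^2}]=(1-2c\sigma^2)^{-1/2}$ for $Z\sim N(0,\sigma^2)$ and $2c\sigma^2<1$ (Lemma 4.2 itself is not restated here). As a standalone numerical illustration of that identity, with sample values of $c$ and $\sigma^2$:

```python
import math

def mgf_of_square(c, sigma2, upper=40.0, n=200000):
    # midpoint rule for E[exp(c Z^2)] with Z ~ N(0, sigma2), truncated at +-upper
    h = 2 * upper / n
    norm = 1.0 / math.sqrt(2 * math.pi * sigma2)
    total = 0.0
    for i in range(n):
        z = -upper + (i + 0.5) * h
        total += math.exp(c * z * z - z * z / (2 * sigma2))
    return total * h * norm

c, sigma2 = 0.05, 2.0  # 2*c*sigma2 = 0.2 < 1, so the identity applies
exact = 1.0 / math.sqrt(1 - 2 * c * sigma2)
approx = mgf_of_square(c, sigma2)
```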
For each $m\in\{1, 2, \dots, T-2\}$, $$\begin{split} {\textrm{E}}\,&\left[\exp\left(4\tau\cdot10\cdot\tilde{B}^2_{e^{2m},e^{2(m+1)}}\left(2^{1-m}\cdot9^{-m}\cdot18^T+2e^{-\frac{1}{10}m}\right)\right)\right]\\ &=4\int_{1}^{\infty}P\left(B_{e^{2(m+1)}-e^{2m}}\geq\sqrt{\dfrac{\log x}{4\tau\cdot10\cdot(\alpha'_m+2e^{-\frac{1}{10}m})}}\right)dx. \end{split}$$ When $1-\dfrac{1}{2\sigma_m^2(4\tau\cdot10\cdot(\alpha'_m+2e^{-\frac{1}{10}m}))}<0$, that is $$\tau<\dfrac{1}{8\cdot10\cdot(2^{1-m}\cdot9^{-m}\cdot18^T+2e^{-\frac{1}{10}m})(e^{2(m+1)}-e^{2m})},$$ by Lemma 4.2, $$\begin{split} {\textrm{E}}\,&\left[\exp\left(4\tau\cdot10\cdot\tilde{B}^2_{e^{2m},e^{2(m+1)}}\left(2^{1-m}\cdot9^{-m}\cdot18^T+2e^{-\frac{1}{10}m}\right)\right)\right]\\ &\leq\dfrac{16\tau\cdot10\cdot(\alpha'_m+2e^{-\frac{1}{10}m})\sigma_m^2}{\sqrt{1-8\tau\cdot10\cdot(\alpha'_m+2e^{-\frac{1}{10}m})\sigma_m^2}}. \end{split}$$ By considering (i) and (ii), we want $\tau$ to satisfy the following conditions: $$\begin{cases} \tau<\dfrac{1}{8\cdot10\cdot(2^{1-m}\cdot9^{-m}\cdot18^T+2e^{-\frac{1}{10}m})(e^{2(m+1)}-e^{2m})}, & \forall m\in\{1,\dots, T-2\}\\ \tau<\dfrac{1}{8\left(10\cdot18^T+1\right)e^{2}}. \end{cases}$$ Then we let $$\tau=\dfrac{1}{8\cdot10\cdot19^T\cdot e^{2T}}.$$ Back to ([\[318\]](#318){reference-type="ref" reference="318"}), we get $$\begin{split} {\textrm{E}}\,\left[\exp\left(4\tau\int_{0}^{T}e^{-\frac{1}{10}t}\tilde{B}^2_{e^{2t}}dt\right)\right] &\leq\dfrac{16\tau e^{2}\left(10\cdot18^T+1\right)}{\sqrt{1-8\tau\left(10\cdot18^T+1\right)e^{2}}}\\ &\quad\cdot\prod_{m=1}^{T-2}\dfrac{16\tau\cdot10\cdot(\alpha'_m+2e^{-\frac{1}{10}m})\sigma_m^2}{\sqrt{1-8\tau\cdot10\cdot(\alpha'_m+2e^{-\frac{1}{10}m})\sigma_m^2}}.
\end{split}$$ Now we replace $\tau$ by its value, $$\begin{split} {\textrm{E}}\,\left[\exp\left(4\tau\int_{0}^{T}e^{-\frac{1}{10}t}\tilde{B}^2_{e^{2t}}dt\right)\right] &\leq \dfrac{\frac{16 e^{2}\left(10\cdot18^T+1\right)}{8\cdot10\cdot19^T\cdot e^{2T}}}{\sqrt{1-8\left(10\cdot18^T+1\right)e^{2}\cdot\frac{1}{8\cdot10\cdot19^T\cdot e^{2T}}}}\\ &\quad\cdot\prod_{m=1}^{T-2}\dfrac{\frac{16\cdot10\cdot(\alpha'_m+2e^{-\frac{1}{10}m})\sigma_m^2}{8\cdot10\cdot19^T\cdot e^{2T}}}{\sqrt{1-\frac{1}{19^T\cdot e^{2T}}(\alpha'_m+2e^{-\frac{1}{10}m})\sigma_m^2}}. \label{J=2pin2} \end{split}$$ We simplify the right hand side of ([\[J=2pin2\]](#J=2pin2){reference-type="ref" reference="J=2pin2"}), $$\begin{split} {\textrm{E}}\,&\left[\exp\left(4\tau\int_{0}^{T}e^{-\frac{1}{10}t}\tilde{B}^2_{e^{2t}}dt\right)\right]\\ &\leq \dfrac{2\left(10\cdot18^T+1\right)}{\sqrt{10\cdot19^T\cdot e^{2(T-1)}}\sqrt{10\cdot19^T\cdot e^{2(T-1)}-\left(10\cdot18^T+1\right)}}\\ &\quad\cdot\prod_{m=1}^{T-2}\bigg[\dfrac{2(\alpha'_m+2e^{-\frac{1}{10}m})(e^{2(m+1)}-e^{2m})}{\sqrt{19^T\cdot e^{2T}}}\\ &\qquad\cdot \dfrac{1}{\sqrt{19^T\cdot e^{2T}-(2^{1-m}\cdot9^{-m}\cdot18^T+2e^{-\frac{1}{10}m})(e^{2(m+1)}-e^{2m})}}\bigg]. 
\end{split}$$ Since, for $T\geq3$, $\left(10\cdot18^T+1\right)<10\cdot19^T$ and, for all $m\in\{1,\dots, T-2\}$,\
$(2^{1-m}\cdot9^{-m}\cdot18^T+2e^{-\frac{1}{10}m})(e^{2(m+1)}-e^{2m})<19^T\cdot e^{2(T-1)}$, $$\begin{split} {\textrm{E}}\,\bigg[\exp\bigg(4\tau\int_{0}^{T}e^{-\frac{1}{10}t}&\tilde{B}^2_{e^{2t}}dt\bigg)\bigg]\\ &\leq \dfrac{2\left(10\cdot18^T+1\right)}{\sqrt{10\cdot19^T\cdot e^{2(T-1)}}\sqrt{10\cdot19^T\cdot e^{2(T-1)}-10\cdot19^T}}\\ &\quad\cdot\prod_{m=1}^{T-2}\dfrac{2(\alpha'_m+2e^{-\frac{1}{10}m})\cdot e^{2(m+1)}}{\sqrt{19^T\cdot e^{2T}}\sqrt{19^T\cdot e^{2T}-19^T\cdot e^{2(T-1)}}}\\ &\leq \dfrac{2\cdot10(18^T+\frac{1}{10})}{10\cdot19^T\sqrt{e^{2(2T-2)}-e^{2(T-1)}}}\\ &\quad\cdot\prod_{m=1}^{T-2}\dfrac{2(\alpha'_m e^{2(m+1)}+2e^{(2-\frac{1}{10})m+2})}{19^T\sqrt{e^{2\cdot2T}-e^{2\cdot(2T-1)}}} . \end{split}$$ Since, for all $m\in\{1,\dots, T-2\}$, $$\alpha'_m=2^{1-m}\cdot9^{-m}\cdot18^T=2\cdot\dfrac{18^T}{18^m},$$ we have $$\begin{split} {\textrm{E}}\,&\left[\exp\left(4\tau\int_{0}^{T}e^{-\frac{1}{10}t}\tilde{B}^2_{e^{2t}}dt\right)\right]\\ &\leq \dfrac{2(18^T+\frac{1}{10})}{19^T\sqrt{e^{2(2T-2)}-e^{2(T-1)}}} \cdot\prod_{m=1}^{T-2}\dfrac{2\left(2\cdot18^Te^{2}\frac{e^{2m}}{18^m}+2e^{2}\cdot e^{(2-\frac{1}{10})m}\right)}{19^T\sqrt{e^{2\cdot2T}-e^{2\cdot(2T-1)}}}. \end{split}$$ Since $\dfrac{e^{2m}}{18^m}<1$ and $e^{(2-\frac{1}{10})m}<18^T$ for all $m\in\{1,\dots, T-2\}$, $$\begin{split} {\textrm{E}}\,&\left[\exp\left(4\tau\int_{0}^{T}e^{-\frac{1}{10}t}\tilde{B}^2_{e^{2t}}dt\right)\right]\\ &\leq\dfrac{2(18^T+\frac{1}{10})}{19^T\sqrt{e^{2(2T-2)}-e^{2(T-1)}}}\prod_{m=1}^{T-2}\dfrac{2\cdot2\cdot\left(2\cdot e^{2}\cdot18^T\right)}{19^Te^{2T}\sqrt{1-e^{-2}}}\\ &\leq\dfrac{4\cdot18^T}{19^T\sqrt{e^{2(2T-2)}-e^{2(T-1)}}}\cdot\dfrac{8^{T-2}\cdot e^{2(T-2)}\cdot 18^{T(T-2)} }{19^{T(T-2)}e^{2T(T-2)}\left(\sqrt{1-e^{-2}}\right)^{T-2}}.
\end{split}$$ **Step 4.** We are ready to get an upper bound of $P\left(\dfrac{1}{TJ}S_n>Kn^{-2}\right)$ and will show that $$\underset{T\rightarrow+\infty}{\lim}P\left(\dfrac{1}{TJ}S_n>Kn^{-2}\right)=0.$$ Back to ([\[markovJ=2pin_2\]](#markovJ=2pin_2){reference-type="ref" reference="markovJ=2pin_2"}), $$\begin{split} P\left(\dfrac{1}{TJ}S_n>Kn^{-2}\right) &\leq\dfrac{4\cdot18^T\cdot 8^{T-2}\cdot e^{2(T-2)}\cdot 18^{T(T-2)}}{19^T\sqrt{e^{2(2T-2)}-e^{2(T-1)}}}\\ &\quad\cdot\dfrac{1}{19^{T(T-2)}e^{2T(T-2)}\left(\sqrt{1-e^{-2}}\right)^{T-2}}\\ &\quad\cdot\dfrac{1}{\exp\left(\frac{1}{8\cdot10\cdot19^Te^{2T}}\frac{K}{n^2\gamma_n^2}TJ\right)}\\ &=\dfrac{4\cdot18^T\cdot 8^{T-2}\cdot e^{2(T-2)}\cdot 18^{T(T-2)}}{19^{T^2-T}e^{2T(T-2)}}\\ &\quad\cdot\dfrac{1}{\sqrt{e^{2(2T-2)}-e^{2(T-1)}}\cdot\left(\sqrt{1-e^{-2}}\right)^{T-2}}\\ &\quad\cdot\dfrac{1}{\exp\left(\frac{1}{8\cdot10\cdot19^Te^{2T}}\frac{K}{n^2\gamma_n^2}TJ\right)}\\ &=:(1)\cdot(2)\cdot(3). \end{split}$$ Since $n=\dfrac{J}{2\pi}$, $$\underset{T\rightarrow+\infty}{\lim}(3)=\underset{T\rightarrow+\infty}{\lim}\dfrac{1}{\exp\left(\frac{1}{8\cdot10\cdot19^Te^{2T}}\frac{K}{n^2\gamma_n^2}TJ\right)}=\dfrac{1}{e^0}=1.$$ Next, since $1-e^{-2}>1-e^{-1}$, $\sqrt{1-e^{-1}}\geq\dfrac{3}{4}$ and $e\geq 2.6$, $$\begin{split} &\dfrac{1}{\sqrt{e^{2(2T-2)}-e^{2(T-1)}}\cdot \left(\sqrt{1-e^{-2}}\right)^{T-2}}\\ &\qquad\leq\left(\dfrac{4}{3}\right)^{T-2}\dfrac{1}{e^{(T-1)}\sqrt{e^{2(T-1)}-1}}\\ &\qquad\leq\left(\dfrac{4}{3}\right)^{T-2}\left(\dfrac{1}{2.6}\right)^{(T-1)}\dfrac{1}{\sqrt{e^{2(T-1)}-1}}\\ &\qquad=\dfrac{1}{2.6}\left(\dfrac{4}{7.8}\right)^{T-2}\dfrac{1}{\sqrt{e^{2(T-1)}-1}}. 
\end{split}$$ Then $$\underset{T\rightarrow+\infty}{\lim}(2)=\underset{T\rightarrow+\infty}{\lim}\dfrac{1}{\sqrt{e^{2(2T-2)}-e^{2(T-1)}}\cdot \left(\sqrt{1-e^{-2}}\right)^{T-2}}=0.$$ We simplify (1), $$\begin{split} (1) &=\dfrac{4\cdot18^T\cdot 8^{T-2}\cdot e^{2(T-2)}\cdot 18^{T(T-2)}}{19^{T^2-T}e^{2T(T-2)}}\\ &=4\cdot\dfrac{8^{T-2}}{e^{2(T-1)(T-2)}}\cdot\dfrac{18^{T^2-T}}{19^{T^2-T}}. \end{split}$$ Since $$\begin{split} \underset{T\rightarrow+\infty}{\lim}\dfrac{18^{T^2-T}}{19^{T^2-T}}=0 \end{split}$$ and $$\underset{T\rightarrow+\infty}{\lim}\dfrac{8^{T-2}}{e^{2(T-1)(T-2)}}=\underset{T\rightarrow+\infty}{\lim}\left(\dfrac{8}{e^{2(T-1)}}\right)^{(T-2)}=0,$$ we have $$\underset{T\rightarrow+\infty}{\lim}(1)=0.$$ Therefore, when $n=\dfrac{J}{2\pi}$, $$\underset{T\rightarrow+\infty}{\lim}P\left(\dfrac{1}{TJ}S_n>Kn^{-2}\right)=0.$$ $\square$ # Appendix ## Damped wave equation with the noise F and its solution {#dampedwaveequationwiththenoiseF} Let $\phi\in\mathcal{C}^\infty(\mathbb{R})$ with $\phi '(0)=\phi ' (J)=0$. Multiplying ([\[main system\]](#main system){reference-type="ref" reference="main system"}) by $\phi(x)$ and integrating in both variables, we get $$\begin{split} \int_{0}^{J}[\partial_tu(t,x)&-\partial_tu(0,x)]\phi(x)dx+\int_{0}^{J}[u(t,x)-u(0,x)]\phi(x)dx\\ &=\int_{0}^{t}\int_{0}^{J}u(s,x)\phi''(x)dxds+\int_{0}^{t}\int_{0}^{J}\phi(x)F(dxds).\label{integ_form} \end{split}$$ Let $\{\varphi_{n}\}_{n\in\mathbb{Z}}$ be a complete set of eigenfunctions of the Laplacian $\Delta$ satisfying the Neumann boundary condition: $$\begin{split} \varphi_{n}(x)&=c_n\cos\big(\frac{n\pi}{J}x\big)\quad n\in\mathbb{Z}\\ \Delta\varphi_n&=\lambda_n\varphi_n \end{split}$$ where $$c_n= \begin{cases} \sqrt{\frac{2}{J}},\quad n\neq 0\\ \sqrt{\frac{1}{J}}, \quad n=0 \end{cases}$$ and $$\lambda_n= \begin{cases} -\left(\dfrac{n\pi}{J}\right)^2,&\quad n\neq 0\\ 0, &\quad n=0.
\end{cases}$$ Consider the Fourier series of $u$: $$u(t,x)=\underset{n\in\mathbb{N}}{\sum}a_n(t)\varphi_{n}(x),\text{ where } \Delta\varphi_{n}=\lambda_{n}\varphi_{n}.\label{ufourierseries}$$ The series converges in $L^2\left([0,J]\times\Omega\right)$; the proof is in Appendix [5.2](#fourierseriesofmildsolutionu){reference-type="ref" reference="fourierseriesofmildsolutionu"}. Let $\phi$ in ([\[integ_form\]](#integ_form){reference-type="ref" reference="integ_form"}) be the eigenfunction $\varphi_n$. According to the Fourier series of $u$, we have that for each $n\in\mathbb{N}$, $$\partial_ta_n(t)-\partial_ta_n(0)+a_n(t)-a_n(0)=\int_{0}^{t}\lambda_na_n(s)ds+\int_{0}^{t}\gamma_nW_n(ds).$$ Let $$\begin{cases} X^n_t&=a_n(t)=\text{the position of the above stochastic oscillator}\\ V^n_t&=\frac{d}{dt}a_n(t)=\text{the velocity}. \end{cases}$$ We have the following stochastic differential equation: $$\begin{cases} dX^n_t&=V^n_t dt\\ dV^n_t&=\big(-V^n_t+\lambda_n X^n_t\big)dt+\gamma_ndW_n(t). \end{cases}$$ ### When n is nonzero When $n\neq 0$, we have the following stochastic differential equation, $$d\begin{pmatrix} X^n_t\\ V^n_t \end{pmatrix}=\begin{pmatrix}0 & 1\\ \lambda_n & -1\end{pmatrix}\begin{pmatrix}X^n_t \\ V^n_t\end{pmatrix}dt+\begin{pmatrix}0 \\ \gamma_n \end{pmatrix}dW_n(t). \label{SDE}$$ Defining $M_n=\begin{pmatrix} 0&1\\\lambda_n&-1 \end{pmatrix}$, $\alpha_n=\begin{pmatrix} 0 \\ \gamma_n \end{pmatrix}$, and $\vec{X}^n_t=\begin{pmatrix}X^n_t\\V^n_t\end{pmatrix}$, we have $$d\vec{X}^n_t=M_n\cdot\vec{X}^n_tdt+\alpha_ndW_n(t).\label{SDEnnot0}$$ Multiplying both sides of ([\[SDEnnot0\]](#SDEnnot0){reference-type="ref" reference="SDEnnot0"}) by $e^{-M_nt}$, we have $$\begin{split} e^{-M_nt}d\vec{X}^n_t&=e^{-M_nt}\cdot M_n\cdot\vec{X}^n_tdt+e^{-M_nt}\alpha_ndW_n(t)\\ e^{-M_nt}d\vec{X}^n_t-e^{-M_nt}\cdot M_n\cdot\vec{X}^n_tdt&=e^{-M_nt}\alpha_ndW_n(t)\\ d(e^{-M_nt}\vec{X}^n_t)&=e^{-M_nt}\alpha_ndW_n(t).
\end{split}$$ Then we have $$\begin{split} &d\bigg(e^{-M_nt}\begin{pmatrix}X^n_t\\V^n_t\end{pmatrix}\bigg)=e^{-M_nt}\begin{pmatrix}0\\\gamma_n\end{pmatrix}dW_n(t)\\ &\begin{pmatrix}X^n_t\\V^n_t\end{pmatrix}=e^{M_nt}\begin{pmatrix}x^n_0\\v^n_0\end{pmatrix}+e^{M_nt}\int_{0}^{t}e^{-M_ns}\begin{pmatrix}0\\\gamma_n\end{pmatrix}dW_n(s),\label{Xsoln} \end{split}$$ where $x^n_0$ and $v^n_0$ are the initial data of $X^n$ and $V^n$ respectively.\
Note that since $tM_n$ and $(-s)M_n$ commute for all $t, s\in\mathbb{R}$, $e^{tM_n}\cdot e^{-sM_n}=e^{M_n(t-s)}$.\
Now we start to solve for $X^n$.\
Recall that $M_n=\begin{pmatrix} 0&1\\\lambda_n&-1 \end{pmatrix}$, where $\lambda_n=-k_n^2=-\big(\frac{n\pi}{J}\big)^2$.\
First, we need to find $e^{M_nt}$ using the eigenvalues and eigenvectors of the matrix $M_n$, where $M_n$ has characteristic polynomial: $$p(t)=\det(M_n-tI)=\bigg|\begin{matrix}-t&1\\-k_n^2&-1-t\end{matrix}\bigg|=t^2+t+k_n^2.$$ Setting $p(t)=0$, we find that the two eigenvalues of $M_n$ are: $$c_{n}^{(1)}=\frac{-1+\sqrt{1-4k_n^2}}{2},\text{ and } c_{n}^{(2)}=\frac{-1-\sqrt{1-4k_n^2}}{2}.$$ There are three cases to discuss: 1. $1-4k_{n}^2>0$, which is equivalent to $J>2n\pi$; 2. $1-4k_{n}^2=0$, which is equivalent to $J=2n\pi$; 3. $1-4k_{n}^2<0$, which is equivalent to $J<2n\pi$. 1. when $1-4k_n^2>0$, i.e. $J>2n\pi$.\
In this case, $c^{(1)}_n$ and $c^{(2)}_n$ are two distinct eigenvalues. Their corresponding eigenvectors are $v^{(1)}_n=\begin{pmatrix} 1 ,&c^{(1)}_n \end{pmatrix}^T$ and $v^{(2)}_n=\begin{pmatrix} 1 , &c^{(2)}_n \end{pmatrix}^T$.\
Let $V_n=\begin{pmatrix} v^{(1)}_n ,& v^{(2)}_n \end{pmatrix}$, and $D_n$ be the diagonal matrix with diagonal entries $c^{(1)}_n$ and $c^{(2)}_n$.
Then we have $$\begin{split} M_n=V_nD_nV^{-1}_n,\qquad e^{M_nt}=V_ne^{D_nt}V^{-1}_n. \end{split}$$ We get $$\begin{split} X^n_t= &-\frac{1}{w_n}\bigg(\bigg(\frac{-1-w_n}{2}e^{\frac{-1+w_n}{2}t}-\frac{-1+w_n}{2}e^{\frac{-1-w_n}{2}t}\bigg)x^n_0+\bigg(e^{\frac{-1-w_n}{2}t}-e^{\frac{-1+w_n}{2}t}\bigg)v^n_0\bigg)\\ &-\frac{\gamma_{n}}{w_n}\int_{0}^{t}e^{\frac{-1-w_n}{2}(t-s)}-e^{\frac{-1+w_n}{2}(t-s)}dW_n(s), \end{split}$$ where $w_n=\sqrt{1-4k_n^2}$. 2. when $1-4k_n^2=0$, i.e. $J=2n\pi$. $\lambda_{n}=-k_n^2=-\frac{1}{4}$. Then $M_n=\begin{pmatrix}0&1\\-\frac{1}{4}&-1\end{pmatrix}$ has a repeated eigenvalue, $c^{(1)}_n=c^{(2)}_n=-\frac{1}{2}$. We can write $M_n=PAP^{-1}$.\
We have $$A=\begin{pmatrix} -\frac{1}{2}&1\\0&-\frac{1}{2} \end{pmatrix}, \quad P=\begin{pmatrix} 1&0\\-\frac{1}{2}&1 \end{pmatrix}, \quad P^{-1}=\begin{pmatrix} 1&0\\\frac{1}{2}&1 \end{pmatrix}.$$ Thus, we have $$e^{M_nt}=Pe^{At}P^{-1}=Pe^{(c^{(1)}_nI+N)t}P^{-1}=Pe^{c^{(1)}_ntI}e^{tN}P^{-1}$$ where $A=c^{(1)}_nI+N$ and $N=\begin{pmatrix} 0&1\\0&0 \end{pmatrix}$ is nilpotent with index 2. $N$ commutes with $c^{(1)}_nI$.\
Since $(tN)^2=\begin{pmatrix} 0&0\\0&0 \end{pmatrix}$ for all $t\in\mathbb{R}$, $$e^{tN}=I+\sum_{k=1}^{+\infty}\dfrac{(tN)^k}{k!}=I+tN=\begin{pmatrix} 1&t\\0&1 \end{pmatrix}.$$ Then $\forall t\in\mathbb{R}$, $$e^{M_nt}=\frac{1}{4}e^{-\frac{1}{2}t}\begin{pmatrix}4+2t & 4t \\ -t & 4-2t\end{pmatrix}.$$ Then we have $$X^n_t=\frac{1}{4}e^{-\frac{1}{2}t}[(4+2t)x_0^n+4tv_0^n]+\int_{0}^{t}e^{-\frac{1}{2}(t-s)}\gamma_{n}(t-s)dW_n(s).$$ 3. when $1-4k_n^2<0$, i.e. $0<J<2\pi n.$ $M_n=\begin{pmatrix} 0 & 1\\ \lambda_{n} & -1 \end{pmatrix}$ has two complex eigenvalues $c_n^{(1)}=\frac{-1+(\sqrt{4k_n^2-1})i}{2}$ and $c_n^{(2)}=\frac{-1-(\sqrt{4k_n^2-1})i}{2}$, where $\lambda_n=-k_n^2$.\
Let $\alpha_n=-\frac{1}{2}$ and $\omega_n=\frac{\sqrt{4k_n^2-1}}{2}$.
Then $M_n=P_nD_nP_n^{-1}$, where $$D_n=\begin{pmatrix} \alpha_n & \omega_n\\ -\omega_n & \alpha_n \end{pmatrix}, \quad P_n=\begin{pmatrix} 1 & 0 \\ \alpha_n & \omega_n \end{pmatrix}, \quad \text{ and } P_n^{-1}=\begin{pmatrix} 1 & 0 \\ -\frac{\alpha_n}{\omega_n} & \frac{1}{\omega_n} \end{pmatrix}.$$ We have $$e^{D_nt}=e^{\alpha_nt}\begin{pmatrix} \cos(\omega_nt)&\sin(\omega_nt)\\-\sin(\omega_nt)&\cos(\omega_nt) \end{pmatrix}.$$ Then $$\begin{split} e^{M_nt} &=P_ne^{D_nt}P_n^{-1}\\ &=\begin{pmatrix}1 & 0 \\ \alpha_n & \omega_n\end{pmatrix}e^{\alpha_nt} \begin{pmatrix}\cos\omega_nt & \sin\omega_nt\\ -\sin\omega_nt & \cos\omega_nt\end{pmatrix} \begin{pmatrix}1 & 0 \\ -\frac{\alpha_n}{\omega_n} & \frac{1}{\omega_n}\end{pmatrix}\\ &=e^{-\frac{1}{2}t}\begin{pmatrix} \cos(\omega_nt)+\frac{1}{2\omega_n}\sin(\omega_nt) & \frac{1}{\omega_n}\sin(\omega_nt)\\ -\omega_n\sin(\omega_nt)-\frac{1}{4\omega_n}\sin(\omega_nt) & -\frac{1}{2\omega_n}\sin(\omega_nt)+\cos(\omega_nt) \end{pmatrix}. \end{split}$$ From ([\[Xsoln\]](#Xsoln){reference-type="ref" reference="Xsoln"}), $$\begin{split} X^n_t= &e^{-\frac{1}{2}t}\big[\big(\cos(\omega_nt)+\frac{1}{2\omega_n}\sin(\omega_nt)\big)x_0^n+\frac{1}{\omega_n}\sin(\omega_nt)v_0^n\big]\\ &+\int_{0}^{t}e^{-\frac{1}{2}(t-s)}\frac{\gamma_{n}}{\omega_n}\sin(\omega_n(t-s))dW_n(s). \end{split}$$ ### n=0 When $n=0$, we have the following, $$\begin{cases} dX^0_t&=V^0_t dt\\ dV^0_t&=-V^0_tdt+\gamma_0dW_0(t).
\end{cases}$$ Multiplying the equation for $dV^0_t$ by $e^t$, we get $$d\big(e^tV^0_t\big)=\gamma_0e^tdW_0(t).$$ Integrating both sides with respect to time, we have $$\begin{split} V^0_t&=\gamma_0\int_{0}^{t}e^{s-t}dW_0(s)+v_0e^{-t}\\ X^0_t&=\int_{0}^{t}V_s^0ds\\ &=\gamma_0\int_{0}^{t}\int_{0}^{s}e^{\alpha-s}dW_0(\alpha)ds+v_0(1-e^{-t})+x_0 \end{split}$$ where $x_0$ and $v_0$ are the initial data.\
Since for every $s\in[0,t]$ where $t\in[0,T]$, $e^{\alpha-s}\leq1$ uniformly for $\alpha\in[0,s]$, we can apply the stochastic Fubini theorem to the integral term of $X_t^0$. In other words, $$X^0_t=\gamma_0\int_{0}^{t}\int_{\alpha}^{t}e^{\alpha-s}dsdW_0(\alpha)+v_0(1-e^{-t})+x_0.$$ ## Fourier series of the mild solution u. {#fourierseriesofmildsolutionu} Since $$G_t^{\mathbb{R}}(x)=\dfrac{1}{2}e^{-t/2}\text{sgn}(t)\text{I}_0\left(\dfrac{1}{2}\sqrt{t^2-x^2}\right)\chi_{[-|t|, |t|]}(x)$$ is supported on $\left[-|t|, |t|\right]$ for $t\in[0,T]$, $$\begin{split} G_t(x,y) &=\sum_{n\in\mathbb{Z}}\left(G_t^{\mathbb{R}}(y+x-2nJ)+G_t^{\mathbb{R}}(y-x-2nJ)\right)\\ &=\sum_{|n|\leq M_T, n\in\mathbb{Z}}\left(G_t^{\mathbb{R}}(y+x-2nJ)+G_t^{\mathbb{R}}(y-x-2nJ)\right) \end{split}$$ where $M_T=\ceil[\bigg]{\dfrac{T}{J}}+1$. Then, in the mild form, we can expand $G$ in terms of $G^\mathbb{R}$: $$\begin{split} u(t,x) &=\int_{0}^{J}\partial_{t}G_t(x,y)u_0(y)dy+\int_{0}^{J}G_t(x,y)\big(\frac{1}{2}u_0(y)+u_1(y)\big)dy\\ &\quad+\int_{0}^{t}\int_{0}^{J}G_{t-s}(x,y)F(dyds)\\ &=\sum_{|n|\leq M_T, n\in\mathbb{Z}}\bigg[\int_{0}^{J}\partial_{t}G_t^{\mathbb{R}}(y+x-2nJ)u_0(y)dy\\ &\quad+\int_{0}^{J}\partial_{t}G_t^{\mathbb{R}}(y-x-2nJ)u_0(y)dy\\ &\quad+\int_{0}^{J}G_t^{\mathbb{R}}(y+x-2nJ)\big(\frac{1}{2}u_0(y)+u_1(y)\big)dy\\ &\quad+\int_{0}^{J}G_t^{\mathbb{R}}(y-x-2nJ)\big(\frac{1}{2}u_0(y)+u_1(y)\big)dy\\ &\quad+\int_{0}^{t}\int_{0}^{J}G_{t-s}^{\mathbb{R}}(y+x-2nJ)F(dyds)\\ &\quad+\int_{0}^{t}\int_{0}^{J}G_{t-s}^{\mathbb{R}}(y-x-2nJ)F(dyds)\bigg].
\end{split}$$ By Theorem 5.3 of [@Nu20], Young's inequality, and $\Vert I_0\Vert<+\infty$, we know that $${\textrm{E}}\,\left[\int_{0}^{J}|u(t,x)|^2dx\right]<+\infty,$$ so the Fourier series of $u$ in ([\[ufourierseries\]](#ufourierseries){reference-type="ref" reference="ufourierseries"}) converges in $L^2\left([0,J]\times\Omega\right)$. ## Drift term {#driftterm} We add the drift term $a\varphi_1$ to the noise $F$. Let $v$ be the solution of the following model $$\begin{split} \partial_{t}^{2}v+\partial_{t}v&=\Delta v+a\varphi_1\\ v(0,x)=v_{0}(x), &\quad \partial_{t}v(0,x)=v_{1}(x) \quad (x, t)\in[0,J]\times\mathbb{R}_+\\ \partial_{x}v(t,0)&=\partial_{x}v(t,J)=0 \end{split}$$ where $\varphi_1=\sqrt{\dfrac{2}{J}}\cos\left(\dfrac{\pi}{J}x\right)$. We can write $v(t,x)=d(t)\varphi_1(x)$ where $d$ is a function of $t$. To solve the above model, we apply the same technique as in Appendix [5.1](#dampedwaveequationwiththenoiseF){reference-type="ref" reference="dampedwaveequationwiththenoiseF"}. Considering the process started from the infinite past, we get $$d(t)=\begin{cases} \frac{a}{\omega_1}\int_{-\infty}^{t}e^{\frac{-1+\omega_1}{2}(t-s)}-e^{\frac{-1-\omega_1}{2}(t-s)}ds & J>2\pi \\ a\int_{-\infty}^{t}e^{-\frac{1}{2}(t-s)}(t-s)ds & J=2\pi \\ a\int_{-\infty}^{t}e^{-\frac{1}{2}(t-s)}\frac{1}{\omega_1}\sin(\omega_1(t-s))ds & 0<J<2\pi . \end{cases}$$ That is $$d(t)=\begin{cases} \dfrac{a}{\omega_1}\left(\dfrac{2}{1-\omega_1}-\dfrac{2}{1+\omega_1}\right) & J>2\pi \\ 4a & J=2\pi \\ \dfrac{a}{\frac{1}{4}+\omega_1^2} & 0<J<2\pi . \end{cases}$$ ## Proof of Claim {#lowerboundofsigmax1x2square} **Claim:** $\sigma(x_1,x_2)^2$ is bounded below by $\tilde{C}\frac{\left|x_1-x_2\right|}{J^2}$ for some constant $\tilde{C}$.\
The proof basically follows the proof of Lemma 2.7 of [@MN22]. The difference is that we need to set $x=\frac{x_1+x_2}{2J}$ and $h=\frac{x_2-x_1}{J}$. Let $\mathcal{U}(x_1, x_2)=\tilde{U}(0,x_1)-\tilde{U}(0,x_2)$.
Then $${\textrm{Var}}\,\big[\mathcal{U}(x_1, x_2)\big]={\textrm{E}}\,\big[\mathcal{U}(x_1, x_2)^2\big].$$ We have $$\begin{split} \mathcal{U}(x_1, x_2)^2 &=\bigg(\sum_{n\neq 0}\tilde{a}_n(0)\big(\varphi_n(x_1)-\varphi_n(x_2)\big)\bigg)^2\\ &=\sum_{n, m\neq 0}\tilde{a}_n(0)\tilde{a}_m(0)\big(\varphi_n(x_1)-\varphi_n(x_2)\big)\big(\varphi_m(x_1)-\varphi_m(x_2)\big)\\ &=\sum_{n\neq0}\tilde{a}_n(0)^2\big(\varphi_n(x_1)-\varphi_n(x_2)\big)^2\\ &+2\sum_{m>n>0}\tilde{a}_n(0)\tilde{a}_m(0)\big(\varphi_n(x_1)-\varphi_n(x_2)\big)\big(\varphi_m(x_1)-\varphi_m(x_2)\big). \end{split}$$ Taking expectations on both sides and using that the cross moments ${\textrm{E}}\,\big[\tilde{a}_n(0)\tilde{a}_m(0)\big]$ vanish for $n\neq m$, $$\begin{split} {\textrm{E}}\,\bigg[\mathcal{U}(x_1, x_2)^2\bigg] &=\sum_{n\neq0}{\textrm{E}}\,\bigg[\tilde{a}_n(0)^2\bigg]\big(\varphi_n(x_1)-\varphi_n(x_2)\big)^2\\ &+2\sum_{m>n>0}{\textrm{E}}\,\bigg[\tilde{a}_n(0)\tilde{a}_m(0)\bigg]\big(\varphi_n(x_1)-\varphi_n(x_2)\big)\big(\varphi_m(x_1)-\varphi_m(x_2)\big)\\ &=\sum_{n\neq0}{\textrm{E}}\,\bigg[\tilde{a}_n(0)^2\bigg]\big(\varphi_n(x_1)-\varphi_n(x_2)\big)^2. \end{split}$$ Recall that 1. When $J<2\pi n$, ${\textrm{E}}\,[\mid\tilde{a}_n(t)\mid^2]=\frac{\gamma_n^2}{2(1+4\omega_n^2)}$. 2. When $J=2\pi n$, ${\textrm{E}}\,[\mid\tilde{a}_n(t)\mid^2]=2\gamma_n^2$. 3. When $J>2\pi n$, ${\textrm{E}}\,[\mid\tilde{a}_n(t)\mid^2]=\frac{2\gamma_n^2}{\omega_n(1-\omega_n^2)}$. Since $\gamma_n\rightarrow0$ as $n\rightarrow\infty$ and $\omega_n=\sqrt{\left|1-4\big(\frac{n\pi}{J}\big)^2\right|}$, ${\textrm{E}}\,[\mid\tilde{a}_n(t)\mid^2]$ is bounded uniformly in $n$ and $t\in\mathbb{R}$.
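The disappearance of the cross terms in the expansion above can be sanity-checked by simulation. The sketch below assumes, purely for illustration, that the coefficients $\tilde{a}_n(0)$ are independent centered Gaussians; the variances $c_n$ and the truncation level are hypothetical choices, not values from the paper.

```python
import math, random

random.seed(0)
J = 2.0 * math.pi
N = 8
c = [1.0 / n ** 2 for n in range(1, N + 1)]      # hypothetical variances E[a_n(0)^2]

def phi(n, x):
    # Neumann cosine eigenfunctions on [0, J]
    return math.sqrt(2.0 / J) * math.cos(n * math.pi * x / J)

x1, x2 = 0.3, 1.1
dphi = [phi(n, x1) - phi(n, x2) for n in range(1, N + 1)]

# prediction with the cross terms dropped: sum of c_n (phi_n(x1) - phi_n(x2))^2
predicted = sum(cn * d ** 2 for cn, d in zip(c, dphi))

# Monte Carlo estimate of E[U^2] with independent centered coefficients
samples, acc = 200_000, 0.0
for _ in range(samples):
    u = sum(random.gauss(0.0, math.sqrt(cn)) * d for cn, d in zip(c, dphi))
    acc += u * u
empirical = acc / samples
print(predicted, empirical)                      # agree up to Monte Carlo error
```

The empirical second moment matches the cross-term-free prediction, as the independence of the coefficients dictates.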
Let $c_n={\textrm{E}}\,[\tilde{a}_n(0)^2]$ and recall that $\varphi_n(x)=\sqrt{\dfrac{2}{J}}\cos\left(\dfrac{n\pi}{J}x\right)$; then $$\begin{split} \sigma^2&={\textrm{E}}\,\bigg[\mathcal{U}(x_1, x_2)^2\bigg]=\sum_{n\neq0}{\textrm{E}}\,\bigg[\tilde{a}_n(0)^2\bigg]\big(\varphi_n(x_1)-\varphi_n(x_2)\big)^2\\ &=\dfrac{2}{J}\sum_{n\neq0}c_n\left[\cos\left(\dfrac{n\pi}{J}x_1\right)-\cos\left(\dfrac{n\pi}{J}x_2\right)\right]^2. \end{split}$$ Since $\cos(a)-\cos(b)=-2\sin\left(\frac{a+b}{2}\right)\sin\left(\frac{a-b}{2}\right)$, we have $$\sigma^2=\dfrac{16}{J}\sum_{n=1}^{+\infty}c_n\sin^2\left(\dfrac{n\pi}{2J}(x_1-x_2)\right)\sin^2\left(\dfrac{n\pi}{2J}(x_1+x_2)\right).$$ Let $x=\dfrac{x_1+x_2}{2J}$ and $h=\dfrac{x_1-x_2}{J}$. By symmetry, we may assume $x\in\left[0,\dfrac{1}{2}\right]$ and $h>0$.\
It suffices to prove the estimate for $h<\delta_0$ where $\delta_0>0$ is small and to be chosen later. Since $c_n>0, \forall n$, for any $N>0$, $$\begin{split} \sigma^2 &=\dfrac{16}{J}\sum_{n=1}^{+\infty}c_n\sin^2\left(\dfrac{n\pi}{2}h\right)\sin^2\left(n\pi x\right)\\ &\geq\dfrac{16}{J}\sum_{n=1}^{N}c_n\sin^2\left(\dfrac{n\pi}{2}h\right)\sin^2\left(n\pi x\right). \end{split}$$ Note that $x_2=J\left(x-\dfrac{h}{2}\right)$ and $x_2\geq0$, so $x\geq\dfrac{h}{2}$. Let $\delta_1>0$ be a small number to be chosen later, and $$N=[2h^{-1}(1-\delta_1)], \text{ where }[\cdot] \text{ represents the greatest integer function.}$$ Then we have $$\pi(1-\delta_1)-1\leq \pi N2^{-1}h\leq \pi(1-\delta_1). \label{*}$$ Given any $n$ such that $1\leq n\leq N$, we have $$\sin\left(\dfrac{\pi}{2}nh\right)\geq cnh$$ where $c$ is a constant. Let $m_1=16\underset{1\leq n\leq N}{\min}c_n$, then $$\begin{split} \sigma^2 &\geq cm_1h^2\frac{1}{J}\sum_{n=1}^{N}\sin^2(n\pi x)\\ &=\dfrac{cm_1h^2}{4J}\left[2N+1-\dfrac{\sin((2N+1)\pi x)}{\sin(\pi x)}\right]. \end{split}$$ We want to show $2N+1-\dfrac{\sin((2N+1)\pi x)}{\sin(\pi x)}$ is of order $N$.
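The closed form used in the last display comes from the Dirichlet-kernel identity $\sum_{n=1}^{N}\sin^2(n\pi x)=\frac{1}{4}\left[2N+1-\frac{\sin((2N+1)\pi x)}{\sin(\pi x)}\right]$, which can be checked numerically:

```python
import math

def lhs(N, x):
    # direct partial sum of sin^2(n*pi*x)
    return sum(math.sin(n * math.pi * x) ** 2 for n in range(1, N + 1))

def rhs(N, x):
    # closed form via the Dirichlet kernel (requires sin(pi*x) != 0)
    return 0.25 * (2 * N + 1 - math.sin((2 * N + 1) * math.pi * x) / math.sin(math.pi * x))

# check over a grid of N and x, with x away from integers
for N in (1, 5, 37, 200):
    for x in (0.013, 0.2, 0.31459, 0.499):
        assert abs(lhs(N, x) - rhs(N, x)) < 1e-9
print("identity verified")
```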
So we need to show that for some small number $\delta_2>0$ to be chosen later, $$\dfrac{\sin((2N+1)\pi x)}{\sin(\pi x)}\leq 2N(1-\delta_2). \label{**}$$ Let $\delta_3>0$. Since $\dfrac{h}{2}\leq x\leq \dfrac{1}{2}$ and $h<\delta_0$, we can choose $\delta_0$ small enough such that $$\sin(\pi x)\geq\sin\left(\frac{\pi}{2}h\right)\geq\frac{\pi}{2}h(1-\delta_3).$$ By ([\[\*\]](#*){reference-type="ref" reference="*"}), we have $$\begin{split} \dfrac{\sin((2N+1)\pi x)}{\sin(\pi x)} &\leq\dfrac{1}{\frac{\pi}{2}h(1-\delta_3)}\\ &=N\dfrac{1}{(N\pi2^{-1}h)(1-\delta_3)}\\ &\leq N\dfrac{1}{[\pi(1-\delta_1)-1](1-\delta_3)}. \end{split}$$ The above inequality verifies ([\[\*\*\]](#**){reference-type="ref" reference="**"}) provided $\delta_1$, $\delta_2$ and $\delta_3$ are small enough. So we have $$\begin{split} \sigma^2 &\geq \dfrac{cm_1h^2}{4J}\left[2N+1-\dfrac{\sin((2N+1)\pi x)}{\sin(\pi x)}\right]\\ &\geq \dfrac{cm_1h^2N\delta_2}{2J}\\ &\geq \dfrac{cm_1h\delta_2}{2J}, \quad\text{ by } (\ref{*})\\ &=\dfrac{\tilde{c}\,\delta_2}{2}\cdot\dfrac{|x_1-x_2|}{J^2}, \quad\text{ for all }|x_1-x_2|\leq \delta_0 J \end{split}$$ where $\tilde{c}(J)=cm_1$. ## Proof of Lemma 4.1 {#proofofintegraleexpbrownianmotion} **Proof:** By ([\[ineqoftildeB\]](#ineqoftildeB){reference-type="ref" reference="ineqoftildeB"}), we have the following $$\begin{split} \int_{0}^{T}&e^{-at}\tilde{B}_{e^{bt}}^2dt =\int_{0}^{1}e^{-at}\tilde{B}_{e^{bt}}^2dt+\int_{1}^{2}e^{-at}\tilde{B}_{e^{bt}}^2dt+\cdots+\int_{T-1}^{T}e^{-at}\tilde{B}_{e^{bt}}^2dt\\ &\leq\int_{0}^{1}e^{-at}\tilde{B}^2_{e^{bt}}dt +\sum_{k=1}^{T-1}\int_{k}^{k+1}e^{-at}\left(\sum_{l=1}^{k-1}3^{k-l}\tilde{B}_{e^{lb},e^{(l+1)b}}+3^k\tilde{B}_{e^b}+\tilde{B}_{e^{kb},e^{bt}}\right)^2dt. \end{split}$$ Then we apply the Cauchy-Schwarz inequality to each integral from above.
We get $$\begin{split} \int_{0}^{T}&e^{-at}\tilde{B}_{e^{bt}}^2dt \leq \int_{0}^{1}e^{-at}\tilde{B}_{e^{bt}}^2dt\\ &\hspace{1.7cm} +\sum_{k=1}^{T-1}\int_{k}^{k+1}e^{-at}\left[\sum_{l=1}^{k-1}2^{k+1-l}\left(3^{k-l}\tilde{B}_{e^{lb},e^{(l+1)b}}\right)^2+2^k\left(3^k\tilde{B}_{e^b}\right)^2+2\left(\tilde{B}_{e^{kb},e^{bt}}\right)^2\right]dt. \end{split}$$ We rearrange the order of the above integrals by grouping similar integrands. Then, we have $$\begin{split} \int_{0}^{T}e^{-at}\tilde{B}^2_{e^{bt}}dt &\leq\int_{0}^{1}e^{-at}\tilde{B}^2_{e^{bt}}dt+\tilde{B}_{e^{b}}^2\sum_{k=1}^{T-1}\int_{k}^{k+1}2^k\cdot3^{2k}e^{-at}dt\\ &\quad +\sum_{m=1}^{T-2}\bigg[\bigg(\sum_{k=m+1}^{T-1}\int_{k}^{k+1}2^{k-m+1}\cdot3^{2(k-m)}e^{-at}\tilde{B}^2_{e^{bm},e^{b(m+1)}}dt\bigg)\\ &\qquad\qquad+2\int_{m}^{m+1}e^{-at}\tilde{B}^2_{e^{bm},e^{bt}}dt\bigg]. \end{split}$$ ## Proof of Lemma 4.2 {#integraloftailprobability} **Proof:** We start by changing variables. Let $y=\log x$, then $dy=\dfrac{1}{x}dx$, that is, $e^ydy=dx$. Then $$\begin{split} \int_{1}^{+\infty}P\left(X\geq\sqrt{\dfrac{\log x}{\gamma}}\right)dx &=\int_{0}^{+\infty}P\left(X\geq\sqrt{\dfrac{y}{\gamma}}\right)e^ydy\\ &=\int_{0}^{+\infty}\left(\int_{\sqrt{\frac{y}{\gamma}}}^{+\infty}\dfrac{1}{\sqrt{2\pi \sigma^2}}e^{-\frac{z^2}{2\sigma^2}}dz\right)e^ydy. \end{split}$$ By ([\[upperboundofesquareintegral\]](#upperboundofesquareintegral){reference-type="ref" reference="upperboundofesquareintegral"}), we have $$\begin{split} \int_{1}^{+\infty}P\left(X\geq\sqrt{\dfrac{\log x}{\gamma}}\right)dx &\leq\dfrac{1}{\sqrt{2\pi\sigma^2}}\int_{0}^{+\infty}\dfrac{\sigma^2}{\sqrt{\frac{y}{\gamma}}}\exp\left(y-\dfrac{y}{2\gamma\sigma^2}\right)dy\\ &=\dfrac{1}{\sqrt{2\pi\sigma^2}}\int_{0}^{+\infty}\dfrac{\sqrt{\gamma}\sigma^2}{\sqrt{y}}\exp\left(\left(1-\dfrac{1}{2\gamma\sigma^2}\right)y\right)dy.
\end{split}$$ Let $u=\sqrt{y}$, then $du=\dfrac{1}{2\sqrt{y}}dy$, $$\int_{1}^{+\infty}P\left(X\geq\sqrt{\dfrac{\log x}{\gamma}}\right)dx \leq\dfrac{2\sigma^2\sqrt{\gamma}}{\sqrt{2\pi\sigma^2}}\int_{0}^{+\infty}\exp\left(\left(1-\dfrac{1}{2\gamma\sigma^2}\right)u^2\right)du.$$ Since $1-\dfrac{1}{2\gamma\sigma^2}<0$, the above integral converges, and for all $b>0$ we have $\int_{0}^{+\infty}e^{-bx^2}dx=\dfrac{\sqrt{\pi}}{2\sqrt{b}}$. Then $$\begin{split} \int_{1}^{+\infty}P\left(X\geq\sqrt{\dfrac{\log x}{\gamma}}\right)dx &\leq\dfrac{2\sigma^2\sqrt{\gamma}}{\sqrt{2\pi\sigma^2}}\cdot\dfrac{\sqrt{\pi}}{2\sqrt{\frac{1}{2\gamma\sigma^2}-1}}\\ &=\dfrac{\sigma^2\gamma}{\sqrt{1-2\gamma\sigma^2}}. \end{split}$$ ## Proof of Claim: {#claimDurrett} By [@Dubook], we can get an upper bound of $\int_{a}^{+\infty}e^{-\frac{z^2}{2c}}dz$ where $a>0$ and $c>0$: $$\int_{a}^{+\infty}e^{-\frac{z^2}{2c}}dz \leq \int_{a}^{+\infty}\dfrac{z}{a}e^{-\frac{z^2}{2c}}dz.$$ Let $y=z^2$, then $dy=2zdz$. We have $$\begin{split} \int_{a}^{+\infty}e^{-\frac{z^2}{2c}}dz &\leq\int_{a^2}^{+\infty}\dfrac{1}{2a}e^{-\frac{y}{2c}}dy\\ &=-\dfrac{c}{a}e^{-\frac{y}{2c}}\bigg\lvert_{a^2}^{+\infty}\\ &=\dfrac{c}{a}e^{-\frac{a^2}{2c}}.\label{upperboundofesquareintegral} \end{split}$$ ## Proof of Lemma 4.3 {#exp9/10t} **Proof:** Let $M(t)=\sum_{n=3}^{\infty}\dfrac{\left(\frac{9}{10}t\right)^n}{n!}$ and $f(t)= e^{\frac{9}{10}t}-t^2-1$, then $$\begin{split} f(t) &=\sum_{n=0}^{\infty}\dfrac{\left(\frac{9}{10}t\right)^n}{n!}-t^2-1\\ &=1+\dfrac{9}{10}t+\dfrac{1}{2}\cdot\dfrac{81}{100}t^2+M(t)-t^2-1.\label{taylor} \end{split}$$ We simplify ([\[taylor\]](#taylor){reference-type="ref" reference="taylor"}), $$\begin{split} f(t) &=\dfrac{9}{10}t+\left(\dfrac{81}{200}-1\right)t^2+M(t)\\ &=\dfrac{t}{200}(180-119t)+M(t). \end{split}$$ When $180-119t\geq0$, i.e.
$t\leq \dfrac{180}{119}$, we have $f(t)\geq0$ since $M(t)\geq0$ for $t\geq0$.\
It remains to show that $f(t)\geq0$ also holds for $t\geq\dfrac{180}{119}$.\
Both $e^{\frac{9}{10}t}$ and $t^2+1$ are increasing functions for $t\geq0$. At $t=\dfrac{180}{119}$, $$f\left(\dfrac{180}{119}\right)=e^{\frac{9}{10}\cdot\frac{180}{119}}-\left(\left(\dfrac{180}{119}\right)^2+1\right)\approx3.901-3.288>0.$$ The first derivative is $$f'\left(\dfrac{180}{119}\right)=\dfrac{9}{10}e^{\frac{9}{10}\cdot\frac{180}{119}}-2\cdot \dfrac{180}{119}\approx 3.511-3.0252>0.$$ The second derivative is $$f''\left(\dfrac{180}{119}\right)=\left(\dfrac{9}{10}\right)^2e^{\frac{9}{10}\cdot\frac{180}{119}}-2\approx 3.160-2>0.$$ Since $f''$ is an increasing function, $f''(t)\geq0$ for $t\geq\dfrac{180}{119}$.\
Since $f'\left(\dfrac{180}{119}\right)>0$ and $f''(t)\geq0$ on this interval, $f'(t)\geq0$ for all $t\geq\dfrac{180}{119}$. Then we know that $f(t)\geq 0$ for $t\in\left[\dfrac{180}{119},+\infty\right)$.\
This proves the claim. ## An intuitive justification for the Theorem {#anintuitivejustificationfortheorem} We assume that $l_t(y)$ is constant over $y\in[-R,R]$. Then $l_t(y)=\frac{J}{2R}$. We have $$\int_{0}^{T}\int_{-R}^{R}l_t^2(y)dydt=2TR\left(\dfrac{J}{2R}\right)^2=\dfrac{CTJ^2}{R}\label{A1}$$ where $C$ is a constant independent of $T$ and $J$. We want to find an approximate probability for the colored noise $\dot{F}$, namely $\exp\left(-\frac{1}{2}\int_{0}^{T}\int_{0}^{J}\left(\dot{F}(t,x)\right)^2dxdt\right)$. From ([\[main system\]](#main system){reference-type="ref" reference="main system"}), we substitute for $\dot{F}$ and get an approximate probability of $$\exp\left(-\frac{1}{2}\int_{0}^{T}\int_{0}^{J}\left[\partial_{t}^{2}u(t,x)+\partial_{t}u(t,x)-\Delta u(t,x)\right]^2dxdt\right).$$ The minimizer $u$ is expected to be essentially constant in $t$, giving us $$\exp\left(-\frac{T}{2}\int_{0}^{J}\left[\Delta u(t,x)\right]^2dx\right).$$ One may further expect the minimizer $u$ to have a constant value of $|\Delta u|$.
Considering the Neumann boundary conditions, a candidate function is $$u(x)= \begin{cases} aJ^2/4-ax^2\quad & x\in[0,J/2]\\ a(J-x)^2-aJ^2/4\quad & x\in[J/2,J]. \end{cases}$$ Taking $u(0)=R$ and $u(J)=-R$, we get $a=4R/J^2$ and $|u''(x)|=2a=8R/J^2$. Then $$\frac{T}{2}\int_{0}^{J}\left[\Delta u(t,x)\right]^2dx=CTJ\left(\dfrac{R}{J^2}\right)^2=\dfrac{CTR^2}{J^3}.\label{A2}$$ Equating ([\[A1\]](#A1){reference-type="ref" reference="A1"}) and ([\[A2\]](#A2){reference-type="ref" reference="A2"}), we get $$R=CJ^{5/3}.$$ ## Noise F Let $\{W_n\}$ be independent, identically distributed white noises in time, and let $\{\gamma_n\}_{n\in\mathbb{N}}$ be a collection of real numbers such that $$\sum_{n\in\mathbb{N}}\gamma_n^2<+\infty \quad \text{ and }\quad \gamma_n^2\leq\dfrac{c}{n^\alpha}\ \ \forall n\in\mathbb{N}$$ where $c$ and $\alpha$ are positive constants. Intuitively, we have $$\begin{split} {\textrm{Cov}}\,[\dot{F}&(t,x),\dot{F}(s,y)]\\ &={\textrm{E}}\,\left[\left(\sum_{n\in\mathbb{N}}\gamma_n\dot{W}_n(t)\varphi_n(x)\right)\cdot\left(\sum_{m\in\mathbb{N}}\gamma_m\dot{W}_m(s)\varphi_m(y)\right)\right]\\ &=\sum_{n,m\in\mathbb{N}}\gamma_n\gamma_m{\textrm{E}}\,\left[\dot{W}_n(t)\dot{W}_m(s)\right]\varphi_n(x)\varphi_m(y)\\ &=\delta(t-s)\sum_{n\in\mathbb{N}}\gamma_n^2\varphi_n(x)\varphi_n(y). \end{split}$$
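The spatial part of the covariance, $\sum_{n}\gamma_n^2\varphi_n(x)\varphi_n(y)$, can be sanity-checked by simulating the mode amplitudes directly. The sketch below uses an illustrative choice $\gamma_n=n^{-1}$ (which satisfies $\sum\gamma_n^2<+\infty$) and a truncated series; none of these numerical choices come from the paper.

```python
import math, random

random.seed(1)
J = 1.0
N = 12                                       # series truncation (illustrative)
gamma = [1.0 / n for n in range(1, N + 1)]   # hypothetical gamma_n

def phi(n, x):
    return math.sqrt(2.0 / J) * math.cos(n * math.pi * x / J)

def kernel(x, y):
    # truncated spatial covariance: sum of gamma_n^2 phi_n(x) phi_n(y)
    return sum(gamma[n - 1] ** 2 * phi(n, x) * phi(n, y) for n in range(1, N + 1))

x, y = 0.2, 0.7
px = [phi(n, x) for n in range(1, N + 1)]
py = [phi(n, y) for n in range(1, N + 1)]

samples, acc = 100_000, 0.0
for _ in range(samples):
    xi = [random.gauss(0.0, 1.0) for _ in range(N)]   # independent mode amplitudes
    gx = sum(g * z * p for g, z, p in zip(gamma, xi, px))
    gy = sum(g * z * p for g, z, p in zip(gamma, xi, py))
    acc += gx * gy
emp = acc / samples
print(emp, kernel(x, y))                     # empirical vs. predicted covariance
```

The empirical covariance of the two field evaluations matches the kernel up to Monte Carlo error, reflecting the cancellation of the $n\neq m$ terms in the double sum above.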
--- address: | ТПО « Северный Очаг»\ Россия --- **О финитной отделимости конечно порожденных коммутативных колец** **©  Станислав Кублановский** Установлены необходимые и достаточные условия финитной отделимости конечно порожденных коммутативных колец. Доказано, что каждое такое кольцо суть конечное расширение некоторого своего идеала кручения $I_{k}$ ($k$ число свободное от квадратов), который является подпрямым произведением конечного кольца и некоторого конечного набора колец без делителей нуля простых характеристик, являющихся целыми расширениями любого своего бесконечного моногенного подкольца. # Введение Понятие финитной аппроксимируемости и финитной отделимости в алгебраических системах вызывает постоянный интерес исследователей. Одной из причин этого интереса является связь с алгоритмическими проблемами. На эту связь указал еще академик А.И. Мальцев в работе 1958 [@mal]. А именно: в финитно аппроксимируемых (финитно отделимых) конечно определенных системах разрешима проблема равенства (проблема вхождения). Все это относится и к основным алгебраическим системам: полугруппам, группам и кольцам. Напомним, что алгебраическая система $A$ называется финитно отделимой, если для любого ее элемента $a$ и для любой подсистемы $A^{'}$ такой, что $a\not\in A'$, существует конечная система $F$ и гомоморфизм $\varphi :A\rightarrow F$ такой, что $\varphi \left({a}\right)\not\in \varphi \left({A'}\right)$ Алгебраическая система $A$ называется финитно аппроксимируемой, если для любых ее двух различных элементов $a,b$ существует конечная система $F$ и гомоморфизм $\varphi :A\rightarrow F$ такой, что $\varphi \left({a}\right)\ne \varphi \left({b}\right)$. Свойство финитной отделимости хорошо изучено в группах и полугруппах. Но для колец многие вопросы в этой тематике еще далеки от разрешения. Одним из таких вопросов был критерий финитной отделимости моногенных колец (напомним, что моногенным называют кольцо, порожденное одним элементом). 
Разрешение этого вопроса являлось бы отправной точкой для исследований. Оказалось, что в отличие от групп и полугрупп в кольцах дело обстоит значительно сложнее. Описание финитно отделимых моногенных колец получено автором в работе [@kyblv]. Данная работа является продолжением указанной работы. Целью исследования автора здесь является описание финитно отделимых конечно порожденных коммутативных колец. Основным результатом настоящей работы является следующая теорема. ** 1**. *Для того, чтобы конечно порожденное коммутативное кольцо было финитно отделимо, необходимо и достаточно, чтобы оно было конечным расширением некоторого идеала кручения $I_{k}$ ($k$ --- число свободное от квадратов), который является подпрямым произведением конечного кольца и некоторого конечного набора колец без делителей нуля простых характеристик, в которых любые два трансцендентные элемента целозависимы.* ** 2**. *Конечно порожденное коммутативное кольцо является финитно отделимым в том и только в том случае, когда каждое его двупорожденное подкольцо финитно отделимо.* Из теоремы Гильберта о базисе следует, что каждое конечно порожденное коммутативное кольцо является конечно определенным, то есть может быть задано конечным набором определяющих соотношений. В настоящей работе приведен контрпример кольца простой характеристики без делителей нуля, показывающий, что из целой зависимости образующих кольца не следует, вообще говоря, целая зависимость произвольных двух трансцендентных элементов. Также в работе приведены примеры колец, в которых это имеет место. Поэтому представляет интерес описание условий финитной отделимости конечно порожденных коммутативных колец на языке образующих и определяющих соотношений. В связи с особой ролью в этой проблематике двупорожденных колец простой характеристики автором доказано следующее ** 3**. 
*A two-generated ring of prime characteristic $K=Z_{p}\langle a,b\mid f(a,b)=0\rangle$, where $f(x,y)$ is a homogeneous polynomial in two variables, is finitely separable if and only if $f(x,y)$ is a separable polynomial, that is, a product of distinct irreducible polynomials.* §2 gives the basic definitions and notation used in the paper. §3 contains the proofs of the auxiliary statements. §4 presents the proofs of the main results. §5 gives an example and a counterexample to natural conjectures related to finite separability. The concluding §6 formulates a number of open problems on the subject. # Definitions and notation The notions of ring and field, ideal, subring, quotient ring and the canonical homomorphism of a ring onto its quotient, and subdirect product of rings are assumed known. Rings are not assumed to contain an identity unless stated otherwise. $Z$ denotes the ring of integers. If $p$ is a prime, then $Z_{p}$ denotes the ring of residues modulo $p$ (which, as is well known, is a prime field). Every ring of prime characteristic $p$ is regarded as an algebra over the prime field $Z_{p}$. A natural number $k$ is called square-free if it is not divisible by the square of a prime; clearly the square-free numbers are exactly the products of distinct primes, together with $1$. **1.** $Z_{p}[x]$ and $Z_{p}[x,y]$ denote the rings of polynomials in one and two variables with coefficients in the prime field $Z_{p}$. **2.** Every polynomial $f(x,y)\in Z_{p}[x,y]$ can be viewed as a polynomial in the single variable $x$ with coefficients in the ring $Z_{p}[y]$. If the leading coefficient of this polynomial equals $1$, then $f(x,y)$ is called monic in $x$. A polynomial monic in $y$ is defined analogously.
A polynomial $f(x,y)$ is called monic if it is monic in each variable. **3.** A homogeneous polynomial $f(x,y)$ is a polynomial all of whose monomials have the same total degree. **4.** An element $a$ of a ring $K$ of prime characteristic $p$ is called integral if it is a root in this ring of some nonzero polynomial (without constant term) from $Z_{p}[x]$ (such elements are often called algebraic over the field $Z_{p}$). An element that is not integral is called transcendental. **5.** Elements $a$ and $b$ of a ring $K$ of prime characteristic $p$ are called integrally dependent if $f(a,b)=0$ for some monic polynomial $f(x,y)\in Z_{p}[x,y]$ without constant term. **6.** An ideal $I$ of a ring $K$ is called prime if $I\ne K$ and, for any elements $a,b$ of $K$, $ab\in I$ implies $a\in I$ or $b\in I$. Clearly $I$ is a prime ideal if and only if the quotient $K/I$ is a ring without zero divisors. **7.** An ideal $I$ of a commutative ring $K$ is called primary if $I\ne K$ and, for any elements $a,b$ of $K$, $ab\in I$ implies $a\in I$ or $b^{n}\in I$ for some natural number $n$. **8.** An ideal $I$ of a ring $K$ is called an ideal of finite index if the quotient ring $K/I$ is finite. **9.** A ring $K$ is called a finite extension of its ideal $I$ if the additive group of the quotient $K/I$ is finitely generated. **10.** For a fixed natural number $k$, the set of all elements $a\in K$ with $k\cdot a=0$ forms an ideal, denoted $I_{k}$ and called the $k$-torsion ideal. **11.** We shall use the **Lasker–Noether theorem**: *every ideal of a Noetherian ring can be written as a finite intersection of primary ideals.* A proof of this theorem can be found in [@zarc], §4.
**12.** We shall also use standard properties of fields and of polynomial algebras over a field, as well as the notions of field of fractions and algebraic closure of a field [@byrbv]. **13.** We use Maltsev's result that the class of finitely separable rings is closed under finite direct products, subrings, and homomorphic images [@mal]. # Auxiliary statements **Lemma 1**. *If a ring $K$ of prime characteristic is finitely separable and $a^{2}=0$ for some element $a\in K$, then for every element $b\in K$ one has $a\cdot f(b)=0$ for some monic polynomial $f(x)$ with integer coefficients and without constant term.* *Proof.* Suppose the contrary: for some elements $a$ and $b$ of a ring $K$ of prime characteristic $p$ we have $a^{2}=0$ and $a\cdot f(b)\ne 0$ for every monic polynomial $f(x)$ with integer coefficients and without constant term. Consider the elements $a_{n}=a+a(b^{2n}-b^{n})$ and the subring $A$ of $K$ generated by the family $M=\{a_{n}\mid n=1,2,\dots\}$. It is easy to see that $A$ coincides with the additive subgroup generated by $M$, that is, $A=Za_{1}+Za_{2}+\dots+Za_{n}+\dots$ (since the product of any two elements of $M$ is zero). If $a$ belonged to $A$, then $a=z_{1}a_{1}+\dots+z_{n}a_{n}$ for some natural $n$ and some integers $z_{1},z_{2},\dots,z_{n}$ (with $z_{n}$ not divisible by $p$). Then $ag(b)=0$ for some polynomial $g(x)$ of degree $2n$ with leading coefficient $z_{n}$ and without constant term. In the prime residue ring $Z_{p}$ one has $z\cdot z_{n}\equiv 1\pmod{p}$ for some integer $z$, whence $zag(b)=0$. Let $f(x)$ be the polynomial obtained from $z\cdot g(x)$ by replacing the leading coefficient $zz_{n}$ with $1$.
It is easy to see that $a\cdot f(b)=zag(b)=0$ and that $f(x)$ is a monic polynomial with integer coefficients and without constant term, which contradicts the assumption. Conclusion: $a\not\in A$. Then, by the finite separability of $K$, there must exist a homomorphism $\varphi\colon K\rightarrow F$ to some finite ring $F$ such that $\varphi(a)\not\in\varphi(A)$. But in a finite ring some power of every element is an idempotent, so $\varphi(b^{2n})=\varphi(b^{n})$ for some natural number $n$. Hence $\varphi(a_{n})=\varphi(a)$, that is, $\varphi(a)\in\varphi(A)$. This contradiction proves Lemma 1. ◻ **Lemma 2**. *Every primary ideal in a finitely separable finitely generated commutative ring of prime characteristic is a prime ideal or an ideal of finite index.* *Proof.* Let $I$ be a primary ideal in a finitely separable finitely generated commutative ring $K=Z\langle a_{1},a_{2},\dots,a_{n}\rangle$ of prime characteristic $p$. Suppose $K/I$ is an infinite ring. We show that the quotient $K/I$ is a ring without zero divisors. Suppose the contrary: $cd=0$ for some nonzero elements $c,d$ of $K/I$. By the definition of a primary ideal, $d^{n}=0$ in $K/I$ for some natural $n$; since $c\ne 0$ and $d\ne 0$, we have $n>1$. Choose the least $n>1$ with $d^{n}=0$. Apply Lemma 1, taking for $b$ any $\overline{a}_{i}$ (where $\overline{a}_{i}$ is the image of $a_{i}$ under the canonical homomorphism $K\rightarrow K/I$) and for $a$ the element $d^{n-1}$. Then $a^{2}=0$, hence $a\cdot f(b)=0$ for some monic polynomial $f(x)$ (without constant term). Note that $a\ne 0$, so by the definition of a primary ideal we conclude that $f(b)^{m}=0$ for some natural number $m$.
Observe that $f(x)$ can be regarded as a nonzero polynomial in $Z_{p}[x]$, so $f(x)^{m}$ is a nonzero polynomial. We conclude that $\overline{a}_{i}$ is an integral element of the ring $K/I$ by definition. Thus $K/I$ is generated by a finite set of integral elements, and it is easy to see that its additive group is finitely generated. A finitely generated group of prime characteristic is a finite set, so $K/I$ is a finite ring, contradicting the assumption. The contradiction arose from the assumption that $K/I$ is not a ring without zero divisors. Lemma 2 is proved. ◻ **Lemma 3**. *In a finitely separable ring of prime characteristic without zero divisors, any two transcendental elements are integrally dependent.* *Proof.* Let $K$ be a finitely separable ring of prime characteristic $p$ without zero divisors, and let $a,b$ be two arbitrary transcendental elements of $K$. By Lemma 2 of [@kyblv] we have $$f_{0}(b)a^{n}+f_{1}(b)a^{n-1}+\dots+f_{n-1}(b)a=0 \tag{$*$}$$ for some natural number $n$ and some polynomials $f_{i}(x)\in Z_{p}[x]$ without constant terms ($i=0,\dots,n-1$), where $f_{0}(x)\in Z_{p}[x]$ is a nonzero polynomial. Without loss of generality, $n$ is chosen least possible. Denote this number $n$ by $D_{b}|a|$ (that is, $D_{b}|a|$ is the least possible degree of a nonzero polynomial $\sigma_{b}(x)$ without constant term, with coefficients in the ring $Z\langle b\rangle$, satisfying $\sigma_{b}(a)=0$ in $K$). This number is called the algebraic degree of $a$ relative to $b$ in the ring $K$. Let $d(x)=\gcd(f_{0}(x),f_{1}(x),\dots,f_{n-1}(x))$ and $k_{i}=f_{i}(x)/d(x)$ for $i=0,1,\dots,n-1$.
Clearly $k_{i}\in Z_{p}[x]$, and by the definition of the greatest common divisor, $\gcd(k_{0},k_{1},\dots,k_{n-1})=1$. We show that $k_{0}=1$. Suppose the contrary, that is, $k_{0}\ne 1$. Let $g(x)=xd(x)$; then $g(x)$ is a polynomial without constant term. The ring $K$ can be regarded as a module over the polynomial ring $Z_{p}[x,y]$ with the operation $k(x,y)\cdot c=k^{*}(b,a)c+qc$, where $k(x,y)$ is an arbitrary polynomial in $Z_{p}[x,y]$, $c$ is an arbitrary element of $K$, $q$ is the constant term of $k(x,y)$, and $k^{*}(x,y)=k(x,y)-q$. Multiplying $(*)$ by $b$ yields the equality $$g(b)\cdot(k_{0}\cdot a^{n}+k_{1}\cdot a^{n-1}+\dots+k_{n-1}\cdot a)=0.$$ Since $b$ is a transcendental element and $K$ has no zero divisors, the last equality implies $$k_{0}\cdot a^{n}+k_{1}\cdot a^{n-1}+\dots+k_{n-1}\cdot a=0$$ and $$D_{b}|a|=n. \tag{**}$$ Then for some natural number $m$ not exceeding $n-1$, the polynomial $k_{m}$ is not divisible by some irreducible divisor of the polynomial $k_{0}$ in the polynomial ring $Z_{p}[x]$ (otherwise the gcd of the polynomials $k_{i}$ would differ from $1$). In what follows we take $m$ least possible with this property.
Rewrite the equality $(*)$ in two forms: $$\begin{gathered} k_{0}\cdot a^{n}+\dots+k_{m-1}\cdot a^{n-m+1}=-k_{m}\cdot a^{n-m}-\dots-k_{n-1}\cdot a \tag{1}\\ k_{0}\cdot a^{n}=-k_{1}\cdot a^{n-1}-\dots-k_{n-1}\cdot a \tag{2}\end{gathered}$$ Denote by $f(x,y)$ and $\varphi(x,y)$ the following polynomials in $Z_{p}[x,y]$: $$\begin{gathered} f(x,y)=k_{0}y^{n}+k_{1}y^{n-1}+\dots+k_{m-1}y^{n-m+1},\\ \varphi(x,y)=-k_{m}y^{n-m}-\dots-k_{n-1}y.\end{gathered}$$ Denote by $A$ the subset of $K$ defined by $$A=Z_{p}[x]\cdot\{k_{0}\cdot a\}+Z_{p}[x]\cdot\{k^{2}_{0}\cdot a^{2}\}+\dots+Z_{p}[x]\cdot\{k^{n-1}_{0}\cdot a^{n-1}\}.$$ Clearly $A$ is closed under addition (and subtraction), so $A$ is an additive subgroup of $K$. The equality $(2)$ implies that $A$ is closed under multiplication; to see this it suffices to check that $$(k_{0}\cdot a)(k^{n-1}_{0}a^{n-1})\in A.$$ Indeed, $$(k_{0}\cdot a)\cdot(k^{n-1}_{0}\cdot a^{n-1})=(k^{n-1}_{0}k_{0})\cdot a^{n}=k^{n-1}_{0}\cdot(-k_{1}\cdot a^{n-1}-\dots-k_{n-1}\cdot a)\in A.$$ Hence $A$ is a subring of $K$. Note that $A$ contains all elements of the form $k^{s}_{0}\cdot a^{s}$ for every natural $s$. From the definitions, $A$ is a submodule of the $Z_{p}[x]$-module $K$. Denote by $M$ the subset of $K$ defined by $$M=\{w=z_{n-1}\cdot a^{n-1}+\dots+z_{i}\cdot a^{i}+\dots+z_{1}\cdot a \mid z_{i}\in Z_{p}[x],\ \deg z_{i}<\deg k^{i}_{0}\}.$$ Note that $M$ is a finite set. Let $M^{*}$ be the set of nonzero elements of $M$. We show that $M^{*}\cap A=\emptyset$: indeed, otherwise we would obtain $D_{b}|a|<n$, contradicting $(**)$. By hypothesis the ring $K$ is finitely separable.
This means that there exist a finite ring $\Omega$ and a homomorphism $\chi\colon K\rightarrow\Omega$ such that $\chi(M^{*})\cap\chi(A)=\emptyset$. Then in the finite ring there are natural numbers $t$ and $h$ with $\chi(k^{t}_{0}\cdot a)=\chi(k^{t+h}_{0}\cdot a)$. The last equality implies $$\chi(k^{t}_{0}\cdot a)=\chi(k^{t+sh}_{0}\cdot a)$$ for every natural number $s$, whence $$\chi(k^{t}_{0}\cdot a^{s})=\chi(k^{t+sh}_{0}\cdot a^{s})=\chi(k^{t+sh-s}_{0}\cdot k_{0}a)\,\chi(k^{s-1}_{0}\cdot a^{s-1}).$$ As noted above, $k^{s}_{0}a^{s}\in A$ for all $s$, so the last equality yields $$\chi(k^{t}_{0}a^{s})\in\chi(A) \ \text{for all } s\in\mathbb{N}. \tag{3}$$ Let $L=\{k_{0},k_{1},\dots,k_{m-1}\}$ and let $q$ be some natural number. Denote by $I(L^{q})$ the ideal of $Z_{p}[x]$ generated by the set $L^{q}=L\cdot L\cdot L\cdots L$ ($q$ times). For the subsequent argument we prove the following implication:\ if for some $c_{i}\in I(L^{q})$ $(i>n-m)$ and some nonnegative integer $z$ the equality $$\sum\limits_{i>n-m}{c_{i}\cdot a^{i}}=k^{z}_{m}\cdot\varphi(b,a)+r(b,a)$$ holds for some polynomial $r(x,y)$ without constant term from the ring $Z_{p}[x,y]$ of degree less than $n-m$ in $y$, then an analogous equality $$\sum\limits_{i>n-m}{c'_{i}\cdot a^{i}}=k^{z'}_{m}\cdot\varphi(b,a)+r'(b,a)$$ holds for some $c'_{i}\in I(L^{q+1})$ $(i>n-m)$, some natural number $z'$, and some polynomial $r'(x,y)$ from $Z_{p}[x,y]$ of degree less than $n-m$ in the variable $y$ and without constant term.
To prove this implication, observe the following.\ a) For some sufficiently large natural number $l$, division with remainder is possible in the polynomial ring $Z_{p}[x,y]$, namely $$k^{l}_{m}y^{i}=U_{i}(x,y)\varphi(x,y)+R_{i}(x,y), \qquad \deg_{y}R_{i}(x,y)<\deg_{y}\varphi(x,y)=n-m$$ for all $i$ in the range $n\ge i>n-m$ with $c_{i}\ne 0$. Note that all the polynomials $R_{i}(x,y)$ have no constant term. b) Multiplying the hypothesis of the implication by $k^{l}_{m}$ then gives $$\sum\limits_{i>n-m}{c_{i}k^{l}_{m}\cdot a^{i}}=k^{z+l}_{m}\cdot\varphi(b,a)+k^{l}_{m}\cdot r(b,a),$$ whence by a) we obtain $$\sum\limits_{i>n-m}{c_{i}\cdot\left(U_{i}(x,y)\cdot\varphi(b,a)+R_{i}(b,a)\right)}=k^{z+l}_{m}\cdot\varphi(b,a)+k^{l}_{m}\cdot r(b,a).$$ c) The equality $(1)$ gives $\varphi(b,a)=f(b,a)$. Replacing $\varphi(b,a)$ by $f(b,a)$ in the previous equality, we obtain $$\sum\limits_{i>n-m}{c_{i}\left(U_{i}(x,y)f(b,a)+R_{i}(b,a)\right)}=k^{z+l}_{m}\varphi(b,a)+k^{l}_{m}r(b,a).$$ The coefficients of the polynomial $f(x,y)$ with respect to the variable $y$ belong to the set $L$ by definition. Hence the coefficients of the polynomial $c_{i}\,U_{i}(x,y)f(x,y)$ with respect to $y$ belong to $I(L^{q+1})$, since by hypothesis all $c_{i}\in I(L^{q})$. To finish the proof of the implication it remains to expand the brackets on the left-hand side of the last equality and move all the terms $c_{i}R_{i}(b,a)$ to the right-hand side. The implication is proved.
From this implication, by mathematical induction, we conclude: for every natural number $s$ there are elements $c_{i}\in I(L^{s})$ $(i>n-m)$, a nonnegative integer $z$, and a polynomial $r(x,y)$ from $Z_{p}[x,y]$ of degree less than $n-m$ in $y$ and without constant term such that $$\sum\limits_{i>n-m}{c_{i}a^{i}}=k^{z}_{m}\varphi(b,a)+r(b,a).$$ The base of the induction is the equality $(1)$. Note that $$k^{z}_{m}\varphi(x,y)+r(x,y)=q_{1}y+q_{2}y^{2}+\dots+q_{n-m}y^{n-m}$$ for some polynomials $q_{j}$ ($j=1,\dots,n-m$) from the ring $Z_{p}[x]$, with $q_{n-m}=-k^{z+1}_{m}$. Divide each $q_{j}$ with remainder by $k^{j}_{0}$ in the polynomial ring $Z_{p}[x]$, obtaining $q_{j}=p_{j}k^{j}_{0}+q^{*}_{j}$, where $q^{*}_{j}=0$ or $\deg q^{*}_{j}<\deg k^{j}_{0}$ ($j=1,\dots,n-m$). Note that $q^{*}_{n-m}\ne 0$, since $k^{z+1}_{m}$ is not divisible by $k_{0}$. We have $$k^{z}_{m}\cdot\varphi(b,a)+r(b,a)=\sum\limits_{j=1}^{n-m}{(p_{j}k^{j}_{0}+q^{*}_{j})\cdot a}^{j}=\sum\limits_{j=1}^{n-m}{p_{j}k^{j}_{0}a}^{j}+\sum\limits_{j=1}^{n-m}{q^{*}_{j}a}^{j}.$$ Applying $\chi$, we get $$\chi\left({\sum\limits_{i>n-m}{c_{i}a^{i}}}\right)=\chi\left({k^{z}_{m}\varphi(b,a)+r(b,a)}\right),$$ and from the previous equality $$\chi\left({\sum\limits_{i>n-m}{c_{i}a^{i}}}\right)=\chi\left({\sum\limits_{j=1}^{n-m}{p_{j}k^{j}_{0}\cdot a}^{j}}\right)+\chi\left({\sum\limits_{j=1}^{n-m}{q^{*}_{j}\cdot a}^{j}}\right). \tag{4}$$ For a sufficiently large number $s$, every element $c_{i}$ of the ideal $I(L^{s})$ is divisible by $k^{t}_{0}$. Hence by $(3)$ we conclude that $\chi\left({\sum\limits_{i>n-m}{c_{i}a^{i}}}\right)\in\chi(A)$. From the definition of the set $A$ we have $\sum\limits_{j=1}^{n-m}{p_{j}k^{j}_{0}a}^{j}\in A$.
Hence the equality $(4)$ yields $\chi\left({\sum\limits_{j=1}^{n-m}{q^{*}_{j}a}^{j}}\right)\in\chi(A)$. From the definition of the set $M$ we have $\sum\limits_{j=1}^{n-m}{q^{*}_{j}a}^{j}\in M$. Since $\chi(M^{*})\cap\chi(A)=\emptyset$, we conclude that $\sum\limits_{j=1}^{n-m}{q^{*}_{j}a}^{j}=0$. But then we would have $D_{b}|a|\le n-m<n$, contradicting the choice of $n$. The contradiction arose from the assumption $k_{0}\ne 1$; therefore $k_{0}=1$. Thus we have proved the equality $a^{n}+k_{1}\cdot a^{n-1}+\dots+k_{n-1}\cdot a=0$. Analogously one proves the equality $b^{n'}+l_{1}\cdot b^{n'-1}+\dots+l_{n'-1}\cdot b=0$ for some natural $n'$ and some polynomials $l_{1},l_{2},\dots,l_{n'-1}$ from $Z_{p}[y]$. Without loss of generality we may assume that $n$ exceeds the maximal degree of the polynomials $l_{1},l_{2},\dots,l_{n'-1}$, and that $n'$ exceeds the maximal degree of the polynomials $k_{1},k_{2},\dots,k_{n-1}$. Then $$\varphi(x,y)=y^{n}+k_{1}y^{n-1}+\dots+k_{n-1}y+x^{n'}+l_{1}x^{n'-1}+\dots+l_{n'-1}x$$ is a monic polynomial with $\varphi(b,a)=0$. Conclusion: the elements $a$ and $b$ are integrally dependent. Lemma 3 is proved. ◻ **Lemma 4**. *If a finitely generated commutative ring $K$ is finitely separable, then it is a finite extension of one of its torsion ideals $I_{k}$, and $I_{k}$ is a direct product of a finite collection of rings of distinct prime characteristics.* *Proof.* Let $K$ be a finitely generated commutative finitely separable ring, and let $\{a_{1},\dots,a_{n}\}$ be a generating set of $K$. By Lemmas 3 and 4 of [@kyblv], every element of a finitely separable ring has finite square-free integer torsion.
Hence $k_{i}\cdot f_{i}(a_{i})=0$ for some collection of monic polynomials $f_{i}(x)$ with integer coefficients and no constant terms and some collection of square-free natural numbers $k_{i}$ ($i=1,\dots,n$). Let $k=\operatorname{lcm}(k_{1},\dots,k_{n})$ and $I_{k}=\{b\in K\mid kb=0\}$. In the quotient ring $K/I_{k}$ we have $f_{i}(\overline{a}_{i})=0$, where $\overline{a}_{i}$ is the image of $a_{i}$ under the canonical homomorphism $K\rightarrow K/I_{k}$ ($i=1,2,\dots,n$). This means that all the $\overline{a}_{i}$ are integral algebraic elements of the ring $K/I_{k}$, whence the additive group of $K/I_{k}$ is finitely generated. Since all the numbers $k_{1},\dots,k_{n}$ are square-free, so is $k=\operatorname{lcm}(k_{1},\dots,k_{n})$. Let $k=p_{1}\cdot p_{2}\cdots p_{n}$ be its prime factorization. From the definitions, $\gcd(k/p_{1},\dots,k/p_{n})=1$. By the Euclidean theorem on the linear representation of the greatest common divisor, there is a family of integers $z_{i}$ ($i=1,\dots,n$) such that $$z_{1}(k/p_{1})+\dots+z_{n}(k/p_{n})=1. \tag{5}$$ Consider the family of ideals $(k/p_{i})\cdot I_{k}$ of the ring $I_{k}$. From the equality (5) it follows that $I_{k}=z_{1}(k/p_{1})I_{k}+\dots+z_{n}(k/p_{n})I_{k}$. Each ring $(k/p_{i})\cdot I_{k}$ is a ring of prime characteristic $p_{i}$, and a sum of ideals of distinct prime characteristics is a direct sum. It remains to note that a direct sum of a finite collection of ideals is isomorphic to their direct product. Lemma 4 is proved. ◻ **Corollary 1**. *The statements of Lemma 4 remain valid for finitely generated PI-rings, thanks to Shirshov's height theorem [@shir].* **Lemma 5**.
*If a finitely generated commutative ring of prime characteristic is finitely separable, then it is a subdirect product of a finite ring and a finite collection of rings without zero divisors.* *Proof.* Let a finitely generated commutative ring $K$ of prime characteristic be finitely separable. By Hilbert's theorem, every finitely generated commutative ring is Noetherian. By the Lasker–Noether theorem [@zarc], every ideal of the ring $K$ is an intersection of some finite family of primary ideals. In particular, $\bigcap\limits_{i=1}^{n}{I_{i}}=0$ for some family of primary ideals $I_{i}$ of $K$. This means that $K$ decomposes into a subdirect product of the rings $K/I_{i}$. It remains to apply Lemma 2, by which each ring $K/I_{i}$ is either finite or a ring without zero divisors. Lemma 5 is proved. ◻ **Lemma 6**. *A finitely generated commutative ring without zero divisors of prime characteristic in which any two transcendental elements are integrally dependent is finitely separable.* *Proof.* Let $K$ be a finitely generated commutative ring without zero divisors of prime characteristic, and suppose that any two transcendental elements of $K$ are integrally dependent. We show that $K$ is finitely separable. Let $a\notin A$ for some element $a$ and some subring $A$ of $K$. If the ring $A$ contains no transcendental elements, then every element $b$ of $A$ is regular, that is, representable as $b=b^{2}b'$ for some element $b'\in A$. Under multiplication, the set $A^{*}$ of nonzero elements of $A$ forms a commutative regular cancellative semigroup; as is well known, $A^{*}$ is then a group, so $A$ is a field. Every finitely generated commutative ring is finitely approximable [@orz], [@kybl], so $K$ is finitely approximable, and hence so is the field $A$. We conclude that $A$ is a finite field.
Then the possibility of separating the element $a$ from the finite subring $A$ follows from the finite approximability of $K$. If the ring $A$ contains some transcendental element $b$, then by Lemma 3 all the generators of $K$ are either integral or integrally dependent on $b$. Hence the ring $K$ is a finitely generated module over the monogenic subring $Z\langle b\rangle$. By Proposition 5 of [8], in the ring $K$ every element $a$ can be finitely separated from any subring containing $b$ but not containing $a$; the subring $A$ is such a subring. Lemma 6 is proved. ◻ **Lemma 7**. *The polynomial ring $C_{p}[x]$ in one variable with coefficients in an arbitrary finite field $C_{p}$ is finitely separable.* *Proof.* Let $f(x)\notin A$, where $f(x)$ is some polynomial from the ring $C_{p}[x]$ and $A$ is some subring of $C_{p}[x]$. If $A$ consists of polynomials of degree zero, then $A$ is a finite set, and the possibility of finitely separating any element $f(x)\notin A$ from the finite subring $A$ follows from the finite approximability of finitely generated commutative rings [@orz], [@kybl]. Now suppose the ring $A$ contains some polynomial $b=g(x)$ of degree $n$ with $n\ge 1$. Regard $C_{p}[x]$ as a module over the monogenic subring $Z\langle b\rangle$. We show that this module is finitely generated, with generating set the finite set $B=\{h(x)\mid \deg h(x)\le n\}$ of all polynomials $h(x)$ from $C_{p}[x]$ of degree at most $n$. Let $[B]$ be the submodule of the $Z\langle b\rangle$-module $C_{p}[x]$ generated by the set $B$. We show that $C_{p}[x]=[B]$, that is, that every polynomial $\alpha(x)$ from $C_{p}[x]$ belongs to $[B]$. If $\deg\alpha(x)\le n$, then $\alpha(x)\in B\subset [B]$. Suppose $\deg\alpha(x)>n$.
We argue by induction: assume that every polynomial $\beta(x)$ from $C_{p}[x]$ whose degree is less than the degree of $\alpha(x)$ belongs to $[B]$. Divide the polynomial $\alpha(x)$ by $g(x)$ with remainder in $C_{p}[x]$: $\alpha(x)=g(x)\cdot\beta(x)+r(x)$ for some polynomials $\beta(x)$ and $r(x)$ from $C_{p}[x]$ with $\deg r(x)<\deg g(x)$. Then by definition $r(x)\in B\subset [B]$. Comparing degrees gives $\deg\alpha(x)=\deg g(x)+\deg\beta(x)$, whence $\deg\beta(x)=\deg\alpha(x)-\deg g(x)=\deg\alpha(x)-n<\deg\alpha(x)$. By the induction hypothesis $\beta(x)\in [B]$, and then $g(x)\cdot\beta(x)=b\cdot\beta(x)\in b[B]\subset [B]$. Since $g(x)\cdot\beta(x)\in [B]$ and $r(x)\in [B]$, we get $\alpha(x)=g(x)\cdot\beta(x)+r(x)\in [B]$. Thus we have proved by induction that $C_{p}[x]=[B]$. Since $B$ is a finite set, $C_{p}[x]$ is a finitely generated module over its subring $Z\langle b\rangle$. Since $b\in A$, by Proposition 5 of [@kyblv] there exist a finite ring $F$ and a homomorphism $\gamma\colon C_{p}[x]\rightarrow F$ such that $\gamma(f(x))\notin\gamma(A)$. Conclusion: the ring $C_{p}[x]$ is finitely separable. Lemma 7 is proved. ◻ # Main results In this section we present the proofs of the main results. The principal result of this paper is the following theorem. **Theorem 4**.
*A finitely generated commutative ring is finitely separable if and only if it is a finite extension of some torsion ideal $I_{k}$ ($k$ a square-free number) which is a subdirect product of a finite ring and a finite collection of rings without zero divisors of prime characteristics in which any two transcendental elements are integrally dependent.* *Proof.* Necessity follows from Lemmas 2–5 and the closure of the class of finitely separable rings under subrings and homomorphic images. Sufficiency follows from Lemma 6 of the present paper, Proposition 2 of [@kyblv], and the closure of the class of finitely separable rings under finite direct products and subrings. ◻ **Theorem 5**. *A finitely generated commutative ring is finitely separable if and only if each of its two-generated subrings is finitely separable.* *Proof.* Follows from the theorem and the closure of the class of finitely separable rings under subrings and homomorphic images. ◻ **Theorem 6**. *A two-generated ring of prime characteristic $K=Z_{p}\langle a,b\mid f(a,b)=0\rangle$, where $f(x,y)$ is a homogeneous polynomial in two variables over the field $Z_{p}$, is finitely separable if and only if $f(x,y)$ is a separable polynomial, that is, a product of distinct irreducible polynomials.* *Proof.* Necessity. Let the ring $K=Z_{p}\langle a,b\mid f(a,b)=0\rangle$ be finitely separable, where $f(x,y)$ is a homogeneous polynomial in two variables over the field $Z_{p}$ of some degree $n\ge 1$. Suppose that the polynomial $f(x,y)$ is not separable. Then $f(x,y)=g(x,y)^{2}h(x,y)$ for some homogeneous polynomials $g(x,y)$ and $h(x,y)$ of nonzero degree from $Z_{p}[x,y]$. Let $I=\{u\in K\mid uh(a,b)=0\}$.
Clearly $I$ is an ideal of the ring $K$, and in the quotient $K/I$ we have $\overline{g(a,b)}^{2}=0$, where the bar denotes the image of an element under the canonical homomorphism $K\rightarrow K/I$. Since the class of finitely separable rings is closed under homomorphic images, applying Lemma 1 to the ring $K/I$ gives $\overline{g(a,b)\varphi(b)}=0$ for some monic polynomial $\varphi(x)$ from $Z_{p}[x]$ without constant term. This means that $g(a,b)\varphi(b)\in I$ in the ring $K$, whence $g(a,b)\varphi(b)h(a,b)=0$. The last equality implies $g(x,y)\varphi(y)h(x,y)=f(x,y)\psi(x,y)$ for some polynomial $\psi(x,y)$ from $Z_{p}[x,y]$. We obtain in the polynomial ring $Z_{p}[x,y]$ the equality $g(x,y)\varphi(y)h(x,y)=g(x,y)^{2}h(x,y)\psi(x,y)$, and cancelling gives $\varphi(y)=g(x,y)\psi(x,y)$, which is impossible. The contradiction arose from the assumption that the polynomial $f(x,y)$ is not separable. Conclusion: $f(x,y)$ is a separable polynomial. Necessity is proved. Sufficiency. Let the two-generated ring $K$ of prime characteristic $p$ be given by the relation $K=Z_{p}\langle a,b\mid f(a,b)=0\rangle$, where $f(x,y)$ is a homogeneous separable polynomial in two variables over the field $Z_{p}$ of some degree $n\ge 1$. A homogeneous separable polynomial factors uniquely into a product of distinct irreducible homogeneous polynomials $f(x,y)=f_{1}(x,y)\cdot f_{2}(x,y)\cdots f_{m}(x,y)$.
The principal ideal $I\left({f}\right)$ of the polynomial ring $\mathbb{Z}_{p}[x,y]$ generated by the polynomial $f=f\left({x,y}\right)$ is the intersection of the family of principal ideals $I\left({f_{i}}\right)$ generated by the polynomials $f_{i}=f_{i}\left({x,y}\right)$, that is, $$I\left({f}\right)=\bigcap_{i=1}^{m}I\left({f_{i}}\right).$$ It follows that the ring $K=\mathbb{Z}^{* }_{p}[x,y]/I\left({f}\right)$ is a subdirect product of the family of rings $\mathbb{Z}^{* }_{p}[x,y]/I\left({f_{i}}\right)$. The finite separability of the ring $K$ would follow from the finite separability of each of the rings $\mathbb{Z}^{* }_{p}[x,y]/I\left({f_{i}}\right)$, by the closure of the class of finitely separable rings under finite direct products, homomorphic images, and subrings. Therefore, without loss of generality, we may assume from now on that $f\left({x,y}\right)$ is an irreducible polynomial which is monic in $x$. This means that $\mathbb{Z}^{* }_{p}[x,y]/I\left({f}\right)$ is a ring without zero divisors, and therefore $K$ is a ring without zero divisors (since $K$ is isomorphic to the ring $\mathbb{Z}^{* }_{p}[x,y]/I\left({f}\right)$). Let $\overline{K}$ be the field of fractions of the ring $K$ and let $\Lambda$ be the algebraic closure of the field $\overline{K}$. The ring $K$ is a subring of the field $\Lambda$. Let $\overline{\mathbb{Z}_{p}}$ be the algebraic closure of the prime field $\mathbb{Z}_{p}$; the field $\overline{\mathbb{Z}_{p}}$ may be regarded as a subfield of $\Lambda$. Over the field $\overline{\mathbb{Z}_{p}}$ the monic polynomial $f\left({t,1}\right)$ factors into a product of linear factors $$f\left({t,1}\right)=\left({t-\lambda _{1}}\right)\left({t-\lambda _{2}}\right)\cdots\left({t-\lambda _{n}}\right)$$ for some $\lambda _{1},\lambda _{2}, \dots, \lambda _{n}$ in $\overline{\mathbb{Z}_{p}}$ (see [@byrbv]).
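As a small concrete illustration of such a splitting (our own example, using $f(t)=t^{2}+1$ over $\mathbb{Z}_{3}$ rather than the $f$ of the proof): $t^{2}+1$ has no root modulo $3$, hence is irreducible over $\mathbb{Z}_{3}$, but over the quadratic extension $\mathbb{Z}_{3}[i]$ with $i^{2}=-1$ it splits as $(t-i)(t+i)$. A minimal sketch checking that $\pm i$ are indeed roots:

```python
p = 3
# Model GF(9) as GF(3)[i] with i^2 = -1; this is a field because
# t^2 + 1 has no root mod 3 (1^2 = 1 and 2^2 = 1).
def add(u, v):
    return ((u[0] + v[0]) % p, (u[1] + v[1]) % p)

def mul(u, v):
    (a, b), (c, d) = u, v          # u = a + b*i, v = c + d*i
    return ((a * c - b * d) % p, (a * d + b * c) % p)

i = (0, 1)
one = (1, 0)
neg_i = (0, p - 1)

# i and -i are roots of f(t) = t^2 + 1, so f splits over GF(9)
# even though it is irreducible over GF(3).
print(add(mul(i, i), one))          # (0, 0)
print(add(mul(neg_i, neg_i), one))  # (0, 0)
```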
Therefore $$\begin{gathered} f\left({x,y}\right)=y^{n}\left({\frac{x}{y}-\lambda _{1}}\right)\left({\frac{x}{y}-\lambda _{2}}\right)\dots\left({\frac{x}{y}-\lambda _{n}}\right) ={}\\{}= \left({x-\lambda _{1}y}\right)\left({x-\lambda _{2}y}\right)\dots\left({x-\lambda _{n}y}\right).\end{gathered}$$ Hence in the field $\Lambda$ we have the equality $$f\left({a,b}\right)=\left({a-\lambda _{1}b}\right)\left({a-\lambda _{2}b}\right)\cdots\left({a-\lambda _{n}b}\right).$$ Since $f\left({a,b}\right)=0$, also $\left({a-\lambda _{1}b}\right)\left({a-\lambda _{2}b}\right)\cdots\left({a-\lambda _{n}b}\right)=0$. This means that $a-\lambda _{i}b=0$ for some $i$, whence $a=\lambda _{i}b$. Let $C_{p}$ be the subfield of the field $\overline{\mathbb{Z}_{p}}$ generated by the element $\lambda _{i}$. It follows from the definitions that $\lambda _{i}$ is integral over $\mathbb{Z}_{p}$, that is, $\varphi \left({\lambda _{i}}\right)=0$ for some nonzero polynomial $\varphi \left({x}\right)$ in the ring $\mathbb{Z}_{p}[x]$. Therefore $C_{p}$ coincides with the subring generated by $\lambda _{i}$, which in turn is generated as an abelian group by the finite set of elements $1,\lambda _{i},\lambda ^{2}_{i},\dots, \lambda ^{m}_{i}$, where $m$ is the degree of the polynomial $\varphi \left({x}\right)$. A finitely generated abelian group of prime characteristic is finite. Conclusion: $C_{p}$ is a finite field. Note that the element $b$ is transcendental over the field $C_{p}$, that is, $g\left({b}\right)\ne 0$ for every nonzero polynomial $g\left({x}\right)$ in the ring $C_{p}[x]$. Otherwise $b$ would be integral over the ring $\mathbb{Z}_{p}$, that is, $h\left({b}\right)=0$ for some monic polynomial $h\left({x}\right)$ with zero constant term in $\mathbb{Z}_{p}[x]$ (since every element of the ring $C_{p}$ is integral over $\mathbb{Z}_{p}$). But in the ring $K$ the equality $h\left({b}\right)=0$ would mean that the polynomial $h\left({y}\right)$ is divisible by the polynomial $f\left({x,y}\right)$ in the polynomial ring $\mathbb{Z}_{p}[x,y]$.
The latter is obviously false. It follows that in the field $\Lambda$ the subring $K^{* }$ generated by the element $\lambda _{i}$ and the element $b$ is isomorphic to the polynomial ring $C_{p}[x]$. By Lemma 7 the ring $C_{p}[x]$ is finitely separable. Thus the ring $K^{* }$ is finitely separable. It remains to note that the ring $K$ is a subring of the ring $K^{* }$, since $a=\lambda _{i}b\in K^{* }$. The proposition is proved. ◻ There is a rather simple criterion for the separability of a polynomial in one variable, and in two variables, over a field. ** 7**. *A polynomial $\varphi \left({t}\right)$ of nonzero degree in one variable is separable if and only if it is coprime to its derivative $\varphi '\left({t}\right)$. (This is an easy exercise.)* It is easy to see that a homogeneous polynomial $f\left({x,y}\right)$ in two variables of nonzero degree in $\mathbb{Z}_{p}[x,y]$ is separable if and only if the one-variable polynomial $\varphi \left({t}\right)=f\left({t,1}\right)$ is separable. # Examples and Counterexamples ** 8**. *A finitely generated commutative ring of prime characteristic without zero divisors is finitely separable if and only if any two of its transcendental generators are integrally dependent.* The following example shows that, in this formulation, this conjecture is in general false. ** 9**. *The ring $K=\mathbb{Z}_{3}\left<{a,b\mid a^{2}+b-b^{2}=0}\right>$ is not finitely separable.* *Proof.* First, it is easy to verify that the polynomial $f\left({x,y}\right)=x^{2}+y-y^{2}$ is irreducible (that is, it does not factor into factors of nonzero degree) in the ring $\mathbb{Z}_{3}[x,y]$. Therefore the quotient ring $\mathbb{Z}_{3}[x,y]/I\left({x^{2}+y-y^{2}}\right)$, where $I\left({x^{2}+y-y^{2}}\right)$ is the principal ideal generated by the polynomial $x^{2}+y-y^{2}$, is a ring without zero divisors.
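The irreducibility claim for $f(x,y)=x^{2}+y-y^{2}$ over $\mathbb{Z}_{3}$ is small enough to confirm by exhaustive search: since the nonzero constants of $\mathbb{Z}_{3}$ are units, any nontrivial factorization of this degree-two polynomial would have to be a product of two linear polynomials. The following sketch (our own illustration, not part of the paper) enumerates all such products:

```python
from itertools import product

p = 3
# Coefficients of f(x,y) = x^2 + y - y^2 over GF(3), keyed by (deg_x, deg_y).
f = {(2, 0): 1, (0, 1): 1, (0, 2): (-1) % p}

def mul_linear(u, v):
    """Multiply two linear polynomials a*x + b*y + c over GF(p),
    dropping zero coefficients."""
    (a, b, c), (d, e, g) = u, v
    coeffs = {(2, 0): a * d, (1, 1): a * e + b * d, (0, 2): b * e,
              (1, 0): a * g + c * d, (0, 1): b * g + c * e, (0, 0): c * g}
    return {k: w % p for k, w in coeffs.items() if w % p}

# Try all 3^6 pairs of linear polynomials over GF(3).
factorable = any(
    mul_linear(u, v) == f
    for u in product(range(p), repeat=3)
    for v in product(range(p), repeat=3)
)
print(factorable)  # False: no linear factorization exists, so f is irreducible
```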
Since the ring $K$ embeds naturally into the ring $\mathbb{Z}_{3}[x,y]/I\left({x^{2}+y-y^{2}}\right)$, we conclude that $K$ is a ring without zero divisors. Let $c=a-b$, so that $a=c+b$. Then $$c^{2}+2bc+b^{2}=b^{2}-b,$$ whence $c^{2}+2bc+b=0$. From the last equality we obtain $$\left({2c+1}\right)b+c^{2}=0.$$ Suppose that $b\in Z\left<c\right>$. This means that $b=f\left({c}\right)$ for some nonzero polynomial $f\left({x}\right)$ with zero constant term in the polynomial ring $\mathbb{Z}_{3}[x]$. If the degree of $f\left({x}\right)$ equals $1$, that is, $f\left({x}\right)=zx$ for $z=1$ or $z=2$, then in the ring $K$ we have $$\left({2c+1}\right)zc+c^{2}=0,$$ whence $$\left({2z+1}\right)c^{2}+zc=0.$$ If $z=1$, then we obtain $c=0$, which contradicts the choice of $c$. If $z=2$, then we obtain $2c^{2}+2c=0$, whence $c^{2}+c=0$. This means that $c$ is an integral algebraic element. If the degree of $f\left({x}\right)$ is greater than $1$, then the equality $$\left({2c+1}\right)f\left({c}\right)+c^{2}=0$$ likewise implies that $c$ is an integral algebraic element. Then the additive group of the ring $Z\left<c\right>$ is finitely generated and has characteristic $3$. We conclude that $Z\left<c\right>$ is a finite ring. Since by assumption $b=f\left({c}\right)\in Z\left<c\right>$, we conclude that $Z\left<b\right>$ is a finite ring (as a subring of a finite one). Then $g\left({b}\right)=0$ for some nonzero polynomial $g\left({x}\right)$ with zero constant term in the polynomial ring $\mathbb{Z}_{3}[x]$. From the isomorphism of the rings $K$ and $\mathbb{Z}^{* }_{3}[x,y]/\left({x^{2}+y-y^{2}}\right)$ we conclude that the polynomial $g\left({y}\right)$ is divisible by the polynomial $x^{2}+y-y^{2}$ in the polynomial ring $\mathbb{Z}_{3}[x,y]$. But this is false! The contradiction came from the assumption $b\in Z\left<c\right>$. Conclusion: $b\notin Z\left<c\right>$. Now suppose that the ring $K$ is finitely separable.
Then $\varphi \left({b}\right)\notin \varphi \left(Z\left<c\right>\right)$ for some homomorphism $\varphi:K\rightarrow F$ of the ring $K$ into some finite ring $F$. Since $F$ is finite, $\varphi \left({b^{2n}}\right)=\varphi \left({b^{n}}\right)$ for some natural number $n$. Hence $$\varphi \left({\left({2c+1}\right)^{2n-1}b^{2n}}\right)=\varphi \left({\left({2c+1}\right)^{2n-1}b^{n}}\right).$$ From the last equality it follows that $$\varphi \left({\left({\left({2c+1}\right)b}\right)^{2n-1}b}\right)=\varphi \left({\left({2c+1}\right)^{n-1}\left({\left({2c+1}\right)b}\right)^{n}}\right),$$ whence $$\varphi \left({\left({-c^{2}}\right)^{2n-1}b}\right)=\varphi \left({\left({2c+1}\right)^{n-1}\left({-c^{2}}\right)^{n}}\right)\in \varphi \left({Z<c>}\right).$$ We thus have $\varphi \left({\left({-c^{2}}\right)^{2n-1}b}\right)\in \varphi \left({Z<c>}\right)$ and $\varphi \left({\left({2c+1}\right)b}\right)=\varphi \left({-c^{2}}\right)\in \varphi \left({Z<c>}\right)$. Since the polynomials $u=\left({-x^{2}}\right)^{2n-1}$ and $v=2x+1$ are coprime in the polynomial ring $\mathbb{Z}_{3}[x]$, their greatest common divisor is $\gcd\left({u,v}\right)=1$. Then, by the theorem on the linear representation of the greatest common divisor (Bézout's identity), the equality $$\alpha \left({x}\right)u\left({x}\right)+\beta \left({x}\right)v\left({x}\right)=1$$ holds in the polynomial ring $\mathbb{Z}_{3}[x]$ for suitable polynomials $\alpha \left({x}\right),\beta \left({x}\right)$.
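For the smallest case $n=1$ one can write such a representation down explicitly: with $u=-x^{2}\equiv 2x^{2}$ and $v=2x+1$ in $\mathbb{Z}_{3}[x]$, the extended Euclidean algorithm gives $\alpha(x)=2$ and $\beta(x)=1+x$. A quick coefficient-level check (our own sketch; polynomials are coefficient lists in increasing degree):

```python
p = 3  # coefficient lists over GF(3), lowest degree first

def polymul(a, b):
    """Multiply two polynomials over GF(p)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def polyadd(a, b):
    """Add two polynomials over GF(p)."""
    n = max(len(a), len(b))
    return [((a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)) % p
            for i in range(n)]

u = [0, 0, 2]   # u(x) = -x^2 = 2x^2, the case n = 1 of (-x^2)^(2n-1)
v = [1, 2]      # v(x) = 2x + 1
alpha = [2]     # alpha(x) = 2
beta = [1, 1]   # beta(x) = 1 + x

bezout = polyadd(polymul(alpha, u), polymul(beta, v))
print(bezout)   # [1, 0, 0], i.e. the constant polynomial 1
```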
Then we obtain $\varphi \left({\alpha \left({c}\right)\left({-c^{2}}\right)^{2n-1}b}\right)\in \varphi \left({Z\left<c\right>}\right)$ and $\varphi \left({\beta \left({c}\right)\left({2c+1}\right)b}\right)\in \varphi \left({Z\left<c\right> }\right)$, whence $$\varphi \left({\alpha \left({c}\right)\left({-c^{2}}\right)^{2n-1}b+\beta \left({c}\right)\left({2c+1}\right)b}\right)\in \varphi \left({Z\left<c\right>}\right),$$ that is, $$\varphi \left({\left({\alpha \left({c}\right)\left({-c^{2}}\right)^{2n-1}+\beta \left({c}\right)\left({2c+1}\right)}\right)b}\right)\in \varphi \left({Z\left<c\right>}\right).$$ From the last relation we obtain $\varphi \left({b}\right)\in \varphi \left(Z\left<c\right>\right)$, which contradicts the choice of $\varphi$. The contradiction came from the assumption that the ring $K$ is finitely separable. Conclusion: $K$ is not a finitely separable ring, which is what we needed to prove. ◻ The example above shows that integral dependence of the generators of a ring does not, in general, imply integral dependence of an arbitrary pair of transcendental elements. ** 10**. *The ring $K=\mathbb{Z}_{2}\left<{a,b\mid a^{2}+b-b^{2}=0}\right>$ is finitely separable.* *Proof.* First, it is easy to verify that the polynomial $f\left({x,y}\right)=x^{2}+y-y^{2}$ is irreducible (that is, it does not factor into factors of nonzero degree) in the ring $\mathbb{Z}_{2}[x,y]$. Therefore the quotient ring $\mathbb{Z}_{2}[x,y]/I\left({x^{2}+y-y^{2}}\right)$, where $I\left({x^{2}+y-y^{2}}\right)$ is the principal ideal generated by the polynomial $x^{2}+y-y^{2}$, is a ring without zero divisors. Since the ring $K$ embeds naturally into the ring $\mathbb{Z}_{2}[x,y]/I\left({x^{2}+y-y^{2}}\right)$, we conclude that $K$ is a ring without zero divisors. Let $\varphi \left({x}\right)=x^{2}-x$.
Note that every element $c$ of the ring $K$ can be written in the form $c=f\left({b}\right)a+g\left({b}\right)$ for some polynomials $f\left({x}\right),g\left({x}\right)$ in the polynomial ring $\mathbb{Z}_{2}[x]$, where $g\left({x}\right)$ has zero constant term. Let $c$ be a nonzero element; then at least one of the polynomials $f\left({x}\right),g\left({x}\right)$ is nonzero. Then $c-g\left({b}\right)=f\left({b}\right)a$, whence $$\left({c-g\left({b}\right)}\right)^{2}=f\left({b}\right)^{2}a^{2}=-f\left({b}\right)^{2}\varphi \left({b}\right),$$ and therefore $$c^{2}+g\left({b}\right)^{2}+f\left({b}\right)^{2}\varphi \left({b}\right)=0.$$ If $g\left({x}\right)^{2}+f\left({x}\right)^{2}\varphi \left({x}\right)$ is a nonzero polynomial, then $b$ is integral algebraic over the ring $Z\left<c\right>$. But $a$ is integral algebraic over the ring $Z\left<b\right>$, since in the ring $K$ the relation $a^{2}+\varphi \left({b}\right)=0$ holds by hypothesis. We may then conclude that $a$ is also integral algebraic over the ring $Z\left<c\right>$. Conclusion: the ring $K$ of prime characteristic $2$ is a finitely generated module over the monogenic subring $Z\left<{c}\right>$. By Proposition 5 of [@kyblv], the ring $K$ is then finitely separable from subrings containing $c$. Since any nonzero element of the ring $K$ may play the role of $c$, we conclude that in this case the ring $K$ is finitely separable. If $g\left({x}\right)^{2}+f\left({x}\right)^{2}\varphi \left({x}\right)$ is the zero polynomial, then $g\left({x}\right)^{2}+f\left({x}\right)^{2}\left({x^{2}-x}\right)$ is the zero polynomial in $\mathbb{Z}_{2}[x]$. Then $xf\left({x}\right)^{2}=g\left({x}\right)^{2}+f\left({x}\right)^{2}x^{2}$. The left-hand side is a polynomial of odd degree, while the right-hand side is a polynomial of even degree. Such polynomials cannot be equal in the ring $\mathbb{Z}_{2}[x]$, so this subcase does not occur. Conclusion: the ring $K$ is finitely separable, which is what we needed to prove. ◻ # Conclusion. Open Questions.
The results obtained in the present paper show that the problem of describing finitely generated commutative rings with the finite separability property reduces to the description of finitely separable rings of prime characteristic without zero divisors. The description of the latter is in turn connected with the description of monic irreducible polynomials in two variables, with coefficients in a prime field, that define finitely separable two-generated rings. In this direction the following questions are of interest. ** 11**. *Describe the finitely separable two-generated rings of prime characteristic given by a single defining relation.* ** 12**. *Describe the finitely separable simple quadratic extensions of monogenic rings of prime characteristic, that is, rings of the form $K=\mathbb{Z}_{p}\left<{a,b\mid a^{2}=f\left({b}\right)}\right>$, where $p$ is a prime number and $f\left({x}\right)$ is a polynomial with zero constant term in the polynomial ring $\mathbb{Z}_{p}[x]$.* A. I. Mal'tsev. On homomorphisms onto finite groups. Uchen. Zap. Ivanov. Gos. Ped. Inst. **18** (1958), 49--60. A. I. Shirshov. On rings with identity relations. Mat. Sb. **43**(2) (1957), 277--283. N. Bourbaki. Algebra: Modules, Rings, Forms. Moscow: Mir, 1966 (Russian translation). O. Zariski, P. Samuel. Commutative Algebra. Moscow: IL, 1963 (Russian translation). N. Bourbaki. Algebra, Part 2: Polynomials and Fields. Ordered Groups. Moscow: Mir, 1965 (Russian translation). M. Orzech, L. Ribes. Residual finiteness and the Hopf property in rings. J. Algebra **15** (1970), 81--88. S. I. Kublanovsky. On varieties of associative algebras with local finiteness conditions. Algebra i Analiz **9**:4 (1997), 119--174; English transl. in St. Petersburg Math. J. **9**:4 (1998), 763--813. S. I. Kublanovsky. On the finite separability of finitely generated associative rings. arXiv:2310.00308 (2023). https://doi.org/10.48550/arXiv.2310.00308
--- abstract: | A dimer model is a quiver with faces embedded in a surface. We define and investigate notions of consistency for dimer models on general surfaces with boundary which restrict to well-studied consistency conditions in the disk and torus case. We define weak consistency in terms of the associated dimer algebra and show that it is equivalent to the absence of bad configurations on the strand diagram. In the disk and torus case, weakly consistent models are nondegenerate, meaning that every arrow is contained in a perfect matching; this is not true for general surfaces. Strong consistency is defined to require weak consistency as well as nondegeneracy. We prove that the completed dimer algebra of a weakly consistent dimer model as well as the noncompleted dimer algebra of a strongly consistent dimer model are bimodule internally 3-Calabi-Yau with respect to their boundary idempotents. As a consequence, the Gorenstein-projective module category of the completed boundary algebra of suitable dimer models categorifies the cluster algebra given by their underlying quiver. We provide additional consequences of weak and strong consistency, including that one may reduce a strongly consistent dimer model by removing digon-faces and that consistency behaves well under taking dimer submodels. author: - Jonah Berggren$^1$ and Khrystyna Serhiyenko$^2$ bibliography: - biblio.bib date: | $^1$University of Kentucky <jrberggren@uky.edu>\ $^2$University of Kentucky <khrystyna.serhiyenko@uky.edu> nocite: - "[@BBMTCYFADQ]" - "[@BFTDBI]" - "[@BFT2]" title: Consistent Dimer Models on Surfaces with Boundary --- # Introduction Dimer models were introduced as a model to study phase transitions in solid state physics. In this setting, a dimer model is a bicolored graph embedded into a surface, representing a configuration of particles which may bond to one another. The physics of this system is described by *perfect matchings* of the graph. 
Moreover, to a dimer model one may associate a *dimer algebra*, which is the Jacobian algebra of a certain quiver with potential, whose combinatorics and representation theory relate to the physics of the dimer model. In the physics literature, dimer models on tori have seen the most study, especially those satisfying certain *consistency conditions* [@XHV]. Under these conditions, Mozgovoy and Reineke showed in [@XMR] that the dimer algebra is 3-Calabi-Yau. Ishii and Ueda [@XIU2007] showed that the moduli space $\mathcal M_\theta$ of stable representations of the dimer algebra with dimension vector $(1,\dots,1)$ and a generic stability condition $\theta$ in the sense of King [@XKing] is a smooth toric Calabi-Yau 3-fold. Moreover, the center $Z$ of the dimer algebra $A_Q$ is a Gorenstein affine 3-fold, the dimer algebra $A_Q$ is a non-commutative crepant resolution of $Z$, and $\mathcal M_\theta$ is a crepant resolution of $Z$ [@XIU2007]. Properties of the category of coherent sheaves over $\mathcal M_\theta$ may be understood through the combinatorics of the dimer model, opening a rich connection to mirror symmetry [@XBocklandt2015], [@XBocklandt2013], [@XFU]. Many equivalent definitions of consistency have been introduced for torus dimer models. See, for example, [@XBocklandt2011 Theorem 10.2], [@XBocklandt2009], [@XBroomhead2009], [@XMR]. In particular, consistency of a dimer model is equivalent to the absence of certain bad configurations in the strand diagram of the dimer model [@XBocklandt2015 Theorem 1.37]. Dimer models on disks have been studied separately, and are of particular interest to the theory of cluster algebras. Postnikov introduced plabic graphs and strand diagrams in [@Postnikov]. Scott [@XScott] showed that the homogeneous coordinate ring of the Grassmannian $\textup{Gr}(k,n)$ is a cluster algebra, in which certain seeds are indexed by $(k,n)$-Postnikov diagrams. 
Jensen-King-Su [@XJKS] gave an additive categorification for this cluster structure, and Baur-King-Marsh [@BKMX] interpreted this categorification as the Gorenstein-projective module category over the completed boundary algebra of the associated dimer model. Pressland extended these results to arbitrary Postnikov diagrams in [@XPressland2019] and observed that a dimer model coming from a Postnikov diagram satisfies a *thinness condition*, which is analogous to the algebraic consistency conditions in the torus literature. A systematic study of dimer models on more general surfaces was initiated by Franco in [@BFTFDBPTSA]. This study is largely concerned with the master and mesonic moduli spaces of dimer models, which may be computed using the combinatorics of perfect matchings. Operations such as removing an edge and the dual untwisting map were investigated in [@BFTFDBPTSA; @NDIBFT; @BFTFDB]. Quiver mutation and square moves were connected with cluster mutation in [@CTFBFT], and further connected with combinatorial mutation of polytopes in [@Twin]. Dimer models on general surfaces were connected with matroid polytopes and used to obtain a partial matroid stratification of the Grassmannian, generalizing the place of dimer models in disks in the matroid stratification of the Grassmannian [@TGOOSD; @BFTCAATG; @NPOSD; @SOSCVTGONPOSD; @HCCAQFTD]. Various generalizations of the notions of consistency in the disk and torus case have been considered in this body of work. We define a new notion of consistency for dimer models on compact orientable surfaces with or without boundary which are not spheres. We call a dimer model *path-consistent* if, for any fixed vertices $v_1$ and $v_2$ and a fixed homotopy class $C$ of paths from $v_1$ to $v_2$, there is a unique (up to path-equivalence) *minimal path* $r$ from $v_1$ to $v_2$ in $C$ such that any path from $v_1$ to $v_2$ in $C$ is equivalent to $r$ composed with some number of face-paths.
When $Q$ is a dimer model on a torus, path-consistency is equivalent to the many consistency conditions in the literature. When $Q$ is a dimer model on a higher genus surface without boundary, path-consistency is equivalent to the weaker notions of consistency rather than the stronger algebraic consistency. See [@XBocklandt2011 Theorem 10.2]. When $Q$ is on a disk, path-consistency is the thinness condition appearing in Pressland [@XPressland2019]. We associate a strand diagram to a dimer model and define *bad configurations*. We say that a dimer model is *strand-consistent* if it has no bad configurations. This matches the notion of zigzag consistency of general dimer models on surfaces with boundary briefly considered in the first section of [@XBocklandt2015]. In particular, it agrees with the well-studied notions of consistency in the torus case. Our first main theorem is as follows. A key idea of the proof is to observe that either notion of consistency of a dimer model is equivalent to consistency of its (possibly infinite) universal cover model, which enables the assumption of simple connectedness. **Theorem 1** (Theorem [\[thm:cons-alg-str\]](#thm:cons-alg-str){reference-type="ref" reference="thm:cons-alg-str"}). *Let $Q$ be a dimer model not on a sphere. The following are equivalent:* 1. *[\[q1\]]{#q1 label="q1"} The dimer model $Q$ is path-consistent.* 2. *[\[q2\]]{#q2 label="q2"} The dimer model $Q$ is strand-consistent.* 3. *The dimer algebra $A_Q$ is cancellative.* We may thus say that a *weakly consistent dimer model* is one satisfying any of the above equivalent conditions. This generalizes results in the case of the torus [@XBocklandt2011 Theorem 10.1], [@XIU]. This was also shown for dimer models on the disk corresponding to $(k,n)$-diagrams in [@BKMX].
The implication [\[q2\]](#q2){reference-type="eqref" reference="q2"}$\implies$[\[q1\]](#q1){reference-type="eqref" reference="q1"} for general dimer models on disks appears in [@XPressland2019 Proposition 2.11]. A corollary of our result is the reverse direction in the disk case (Corollary [Corollary 53](#cor:main-cor-disk){reference-type="ref" reference="cor:main-cor-disk"}). As an application, we use the strand diagram characterization of consistency to prove that *dimer submodels* of weakly consistent dimer models are weakly consistent (Corollary [Corollary 55](#cor:dimer-submodel-consistent){reference-type="ref" reference="cor:dimer-submodel-consistent"}). This gives us practical ways to get new weakly consistent models from old and to understand equivalence classes of minimal paths. Next, we study *perfect matchings* of weakly consistent dimer models. In the torus case, perfect matchings of the dimer model feature prominently [@XIU], [@XBroomhead2009], [@XBocklandt2011]. Perfect matchings of a torus dimer model generate the cone of R-symmetries, which have applications in physics. Perfect matchings may be used to calculate the perfect matching polygon of the dimer model, which is related to the center of the dimer algebra. Perfect matchings of a dimer model on a disk [@CKP], [@XLam2015] are the natural analog and may be connected with certain perfect matching modules of the completed dimer algebra to understand the categorification given by the boundary algebra of a dimer model on a disk [@CKP]. Over arbitrary compact surfaces with boundary, perfect matchings may be used to describe the master and mesonic moduli spaces associated to the dimer model. Moreover, perfect matchings of a dimer model on a general surface may be calculated by taking determinants of Kasteleyn matrices [@BFTFDBPTSA §5]. 
In Theorem [Theorem 63](#thm:geo-consistent-then-almost-perfect-matching){reference-type="ref" reference="thm:geo-consistent-then-almost-perfect-matching"}, we show that any (possibly infinite) simply connected weakly consistent dimer model has a perfect matching. This means that the universal cover model of any weakly consistent dimer model has a perfect matching. On the other hand, we give an example of a (non-simply-connected) weakly consistent dimer model which has no perfect matching (Example [Example 64](#ex:cons-tor-no-perf){reference-type="ref" reference="ex:cons-tor-no-perf"}). One important notion for dimer models in the disk and torus is *nondegeneracy*, which requires that all arrows are contained in a perfect matching. We extend this definition to general surfaces and show that nondegeneracy gives a positive grading to the dimer algebra. We define a *(strongly) consistent* dimer model as one which is weakly consistent and nondegenerate. In the disk and torus case, weak and strong consistency are equivalent, but this is not true for more general surfaces. We then use [@XPressland2019] to prove the following result. **Theorem 2** (Theorem [Theorem 73](#thm:Calabi-Yau){reference-type="ref" reference="thm:Calabi-Yau"}). *Let $Q$ be a finite dimer model. If $Q$ is strongly consistent, then the dimer algebra $A_Q$ is bimodule internally 3-Calabi-Yau with respect to its boundary idempotent. If $Q$ is weakly consistent, then the completed dimer algebra $\widehat A_Q$ is bimodule internally 3-Calabi-Yau with respect to its boundary idempotent.* When $Q$ is a dimer model on a disk, we recover [@XPressland2019 Theorem 3.7]. When $Q$ has no boundary, this translates to the algebra being 3-Calabi-Yau [@XPressland2015 Remark 2.2]. Hence, we recover the statement in the torus (and closed surface of higher genus) literature that consistent dimer models are 3-Calabi-Yau proven in [@XDavison Corollary 4.4].
Using [@AIRX Theorem 4.1 and Theorem 4.10], Theorem [Theorem 2](#thm:AAAz){reference-type="ref" reference="thm:AAAz"} immediately implies the following. **Corollary 3** (Corollary [Corollary 75](#thm:cat-thm){reference-type="ref" reference="thm:cat-thm"}). *Let $Q$ be a weakly consistent, Noetherian, and boundary-finite (Definition [Definition 77](#defn:bf){reference-type="ref" reference="defn:bf"}) dimer model with no digons. Then the Gorenstein-projective module category of the completed boundary algebra of $Q$ categorifies the cluster algebra given by the ice quiver of $Q$.* We use the term "categorification" for brevity during the introduction; see Corollary [Corollary 75](#thm:cat-thm){reference-type="ref" reference="thm:cat-thm"} for a more rigorous statement. We give some examples of strongly consistent dimer models on annuli satisfying the requirements of Corollary [Corollary 3](#cor:GFDS){reference-type="ref" reference="cor:GFDS"}. We use the theory of dimer submodels to get some interesting results about equivalence classes of minimal paths in (weakly and strongly) consistent dimer models. We prove that in a strongly consistent dimer model, minimal *leftmost* and *rightmost* paths in a given homotopy class between two vertices are unique when they exist. If we further assume nondegeneracy, then they always exist. Finally, we study the reduction of dimer models. In the disk case, consistent dimer models with at least three boundary vertices may be *reduced* in order to obtain a dimer model with no digons and an isomorphic dimer algebra [@XPressland2019 §2]. We show in Proposition [Proposition 83](#prop:reduce-dimer-model){reference-type="ref" reference="prop:reduce-dimer-model"} that a similar process may be used to remove certain, but not all, digon-faces in the non-simply-connected case. 
Figure [\[fig:non-red-annulus\]](#fig:non-red-annulus){reference-type="ref" reference="fig:non-red-annulus"} gives a weakly (but not strongly) consistent dimer model with a digon-face which may not be removed in this way. On the other hand, Corollary [Corollary 85](#cor:degenreduce){reference-type="ref" reference="cor:degenreduce"} states that if we require strong consistency, then we may remove all digon-faces from a dimer model. The article is organized as follows. In Section [2](#sec:first-section){reference-type="ref" reference="sec:first-section"}, we define dimer models and prove that path-consistency is equivalent to cancellativity. We also show that these notions behave well when passing to the universal cover of a dimer model. In Section [\[sec:mac\]](#sec:mac){reference-type="ref" reference="sec:mac"}, we develop some technical theory of basic and cycle-removing morphs in order to prove that a path-consistent and simply connected dimer model has no *irreducible pairs* (Theorem [Theorem 40](#prop:no-irreducible-pairs){reference-type="ref" reference="prop:no-irreducible-pairs"}). This result is used in Section [4](#sec:pdsc){reference-type="ref" reference="sec:pdsc"} to complete the proof of Theorem [Theorem 1](#thm:main1){reference-type="ref" reference="thm:main1"} by showing that path-consistency and strand-consistency are equivalent. Next, in Section [5](#sec:ds){reference-type="ref" reference="sec:ds"}, we introduce dimer submodels and prove that dimer submodels of weakly consistent dimer models are weakly consistent (Corollary [Corollary 55](#cor:dimer-submodel-consistent){reference-type="ref" reference="cor:dimer-submodel-consistent"}). This gives us practical ways to get new weakly consistent models from old and to understand equivalence classes of minimal paths. Section [6](#sec:perfect-matching){reference-type="ref" reference="sec:perfect-matching"} is dedicated to perfect matchings of weakly consistent dimer models. 
In Section [\[sec:3cy\]](#sec:3cy){reference-type="ref" reference="sec:3cy"}, we prove that the completed dimer algebra of a weakly consistent dimer model and the noncompleted dimer algebra of a strongly consistent dimer model are bimodule internally 3-Calabi-Yau with respect to their boundary idempotents (Theorem [Theorem 2](#thm:AAAz){reference-type="ref" reference="thm:AAAz"}). As a result, we obtain our categorification result Corollary [Corollary 3](#cor:GFDS){reference-type="ref" reference="cor:GFDS"}. In Section [\[sec:eraec\]](#sec:eraec){reference-type="ref" reference="sec:eraec"}, we use the results of Section [5](#sec:ds){reference-type="ref" reference="sec:ds"} to understand the equivalence classes of minimal paths in consistent dimer models. Lastly, in Section [9](#sec:red){reference-type="ref" reference="sec:red"}, we discuss the process of reducing a dimer model by removing digon-faces. We prove that if $Q$ is strongly consistent, then all digon-faces may be removed. ## Acknowledgments {#acknowledgments .unnumbered} The authors thank Matthew Pressland for useful discussions and for comments on preliminary versions. The authors were supported by the NSF grant DMS-2054255. # Covers and Consistency {#sec:first-section} In this section we define a dimer model on an arbitrary surface with boundary. We introduce path-consistency, which generalizes notions of consistency of dimer models on the disk and torus. We show that path-consistency is equivalent to cancellativity for dimer models not on a sphere. Moreover, we prove that these notions work well with taking the universal cover of a dimer model. ## Dimer Models We begin by defining dimer models, following [@BKMX §3]. A *quiver* is a directed graph. A *cycle* of $Q$ is a nonempty oriented path of $Q$ which starts and ends at the same vertex. If $Q$ is a quiver, we write $Q_{cyc}$ for the set of cycles in $Q$ up to cyclic equivalence. An element of $Q_{cyc}$ is the set of arrows in some cycle. 
**Definition 1**. A *quiver with faces* is a triple $Q=(Q_0,Q_1,Q_2)$, where $(Q_0,Q_1)$ are the vertices and arrows of a quiver and $Q_2\subseteq Q_{cyc}$ is a set of *faces* of $Q$. A *digon-face* of $Q$ is a face in $Q_2$ consisting of two arrows. Given a vertex $i\in Q_0$, we define the *incidence graph* of $Q$ at $i$ to be the graph whose vertices are given by the arrows incident to $i$ and whose arrows $\alpha\to\beta$ correspond to paths $$\xrightarrow{\alpha}i\xrightarrow{\beta}$$ which occur in faces of $Q$. **Definition 2**. A (locally finite, oriented) *dimer model with boundary* is given by a quiver with faces $Q=(Q_0,Q_1,Q_2)$, where $Q_2$ is written as a disjoint union $Q_2=Q_2^{cc}\cup Q_2^{cl}$, satisfying the following properties:

1. The quiver $Q$ has no loops.

2. Each arrow of $Q_1$ is in either one face or two faces of $Q$. An arrow which is in one face is called a *boundary arrow* and an arrow which is in two faces is called an *internal arrow*.

3. Each internal arrow lies in a cycle bounding a face in $Q_2^{cc}$ and in a cycle bounding a face in $Q_2^{cl}$.

4. [\[ddm:4\]]{#ddm:4 label="ddm:4"} The incidence graph of $Q$ at each vertex is connected.

5. Any vertex of $Q$ is incident with a finite number of arrows.

Given a dimer model with boundary $Q$ we may associate each face $F$ of $Q$ with a polygon whose edges are labeled by the arrows in $F$ and glue the edges of these polygons together as indicated by the directions of the arrows to form a surface with boundary $S(Q)$ into which $Q$ may be embedded. The surface $S(Q)$ is oriented such that the cycles of faces in $Q_2^{cc}$ are oriented positively (counter-clockwise) and the cycles of faces in $Q_2^{cl}$ are oriented negatively (clockwise). The boundary of $S(Q)$ runs along the boundary arrows of $Q$. If $S(Q)$ is a disk, then we say that $Q$ is a *dimer model on a disk*. If $S(Q)$ is simply connected, then we say that $Q$ is a *simply connected dimer model*.
A dimer model $Q$ is *finite* if its vertex set is finite. Note that $Q$ is finite if and only if $S(Q)$ is compact. Suppose that $Q$ is a finite quiver with no loops such that every vertex has finite degree. Suppose further that $Q$ has an embedding into an oriented surface $\Sigma$ with boundary such that the complement of $Q$ in $\Sigma$ is a disjoint union of discs, each of which is bounded by a cycle of $Q$. We may then view $Q$ as a dimer model with boundary by declaring $Q_2^{cc}$ (respectively $Q_2^{cl}$) to be the set of positively (respectively, negatively) oriented cycles of $Q$ which bound a connected component of the complement of $Q$ in $\Sigma$. All dimer models may be obtained in this way. Let $Q$ be a dimer model and let $p$ be a path in $Q$. We write $t(p)$ and $h(p)$ for the start and end vertex of $p$, respectively. If a path $q$ can be factored in the form $q=q_2pq_1$, where $h(q_1)=t(p)$ and $t(q_2)=h(p)$, we say that $p$ is in $q$ or that $q$ contains $p$ as a subpath and we write $p\in q$. Corresponding to any vertex $v$ is a *constant path* $e_v$ from $v$ to itself which has no arrows. Any arrow $\alpha$ in $Q$ is associated with at most one clockwise and one counter-clockwise face of the dimer model. We refer to the set of arrows in these faces as $F_\alpha^{cl}$ and $F_\alpha^{cc}$, respectively, when they exist. Let $R_\alpha^{cl}$ (respectively $R_\alpha^{cc}$) be the path in $Q$ from $h(\alpha)$ to $t(\alpha)$ consisting of all arrows in $F_\alpha^{cl}$ (respectively $F_\alpha^{cc}$) except for $\alpha$. A path in $Q$ of the form $R_\alpha^{cl}$ (respectively $R_\alpha^{cc}$) for some $\alpha$ is called a *clockwise return path* (respectively a *counterclockwise return path*) of $\alpha$. **Definition 3**. Given a dimer model with boundary $Q$, the *dimer algebra* $A_Q$ is defined as the quotient of the path algebra $\mathbb CQ$ by the relations $$[R_\alpha^{cc}]-[R_\alpha^{cl}]$$ for every internal arrow $\alpha\in Q_1$. 
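Since the relations of Definition 3 identify the two return paths of each internal arrow, equality of path classes in $A_Q$ can be tested on small examples by brute-force search over such swaps. A sketch, assuming paths are tuples of arrow labels and using a single hypothetical relation of our own devising:

```python
from collections import deque

def basic_morphs(path, relations):
    """All paths obtained from `path` by one swap of a return path:
    each relation is a pair (R_cl, R_cc) of tuples of arrow labels."""
    out = []
    for r1, r2 in relations:
        for a, b in ((r1, r2), (r2, r1)):
            for i in range(len(path) - len(a) + 1):
                if path[i:i + len(a)] == a:
                    out.append(path[:i] + b + path[i + len(a):])
    return out

def equivalent(p, q, relations, bound=1000):
    """Breadth-first search over swaps; a False answer only means no
    swap sequence was found within the exploration bound."""
    seen, queue = {p}, deque([p])
    while queue and len(seen) <= bound:
        cur = queue.popleft()
        if cur == q:
            return True
        for nxt in basic_morphs(cur, relations):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return q in seen

# Toy relation for one internal arrow: R_cl = (x, y), R_cc = (u, v).
rel = [(("x", "y"), ("u", "v"))]
```

This is only a finite search, not a decision procedure for the algebra; it merely illustrates that the relations are generated by local swaps.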
We now introduce some further terminology. We say that two paths $p$ and $q$ in $Q$ are *path-equivalent* if their associated elements in the dimer algebra $A_Q$ are equal. If $p$ is a path in $Q$, we write $[p]$ for the path-equivalence class of $p$ under these relations. The set of *left-morphable* (respectively *right-morphable*) arrows of $p$ is the set of internal arrows $\alpha\in Q_1$ such that $R_\alpha^{cc}$ (respectively $R_\alpha^{cl}$) is in $p$. The set of *morphable arrows* of $p$ is the set of arrows which are left-morphable or right-morphable with respect to $p$.

Let $\alpha$ be a morphable arrow of $p$. Then $p$ contains $R_\alpha^{cl}$ or $R_\alpha^{cc}$ as a subpath, and may possibly contain multiple such subpaths. If $p'$ is a path obtained from $p$ by replacing a single subpath $R_\alpha^{cl}$ with $R_\alpha^{cc}$ (respectively, $R_\alpha^{cc}$ with $R_\alpha^{cl}$), then we say that $p'$ is a *basic right-morph* (respectively, *basic left-morph*) of $p$. We omit the word "basic" when the context is clear. If $p$ has only one subpath which is a copy of $R_\alpha^{cl}$ or $R_\alpha^{cc}$, then we say that $p'$ is an *unambiguous basic (right or left) morph* of $p$ and we write $p'=m_\alpha(p)$. We say that $\alpha$ is an *unambiguous* morphable arrow of $p$ in this case. Since the relations of $A_Q$ are generated by the relations $\{[R_\alpha^{cc}]-[R_\alpha^{cl}]\ :\ \alpha\textup{ is an internal arrow of }Q\}$, two paths $p$ and $q$ are path-equivalent if and only if there is a sequence of paths $p=r_1,\dots,r_m=q$ such that $r_{i+1}$ is a basic morph of $r_i$ for $i\in[m-1]$.

Suppose $p$ is a cycle in $Q$ which starts and ends at some vertex $v$ and travels around a face of $Q$ once. Then we say that $p$ is a *face-path* of $Q$ starting at $v$. The terminology is justified by the following observation, which follows from the defining relations.

**Remark 4**. Any two face-paths of $Q$ starting at $v$ are path-equivalent.

**Definition 5**.
For all $v\in Q_0$, fix some face-path $f_v$ at $v$. Then define $$f:=\sum_{v\in Q_0}f_v.$$ If $|Q_0|$ is finite, then $[f]$ is an element of $A_Q$. It follows from Remark [Remark 4](#faces-are-equivalent){reference-type="ref" reference="faces-are-equivalent"} that the path-equivalence class $[f]$ is independent of the choice of $f_v$ for all $v\in Q_0$. Moreover, the dimer algebra relations imply that $[f]$ commutes with every arrow. Hence, if $|Q_0|$ is finite, then $[f]$ is in the center of $A_Q$. If $|Q_0|$ is not finite, then $[f]$ is not an element of the dimer algebra $A_Q$. However, every element $x$ of $A_Q$ has a well-defined product with $f$, so we use notation such as $[xf^m]$ in this case as well.

The *completed path algebra* $\mathbb C\langle\langle Q\rangle\rangle$ has as its underlying set the *possibly infinite* linear combinations of finite paths in $Q$, with multiplication induced by composition. See [@XPressland2020 Definition 2.6].

**Definition 6**. The *completed dimer algebra* $\widehat A_Q$ is the quotient of the completed path algebra $\mathbb C\langle\langle Q\rangle\rangle$ by the closure, with respect to the arrow ideal, of the ideal generated by the relations $R_\alpha^{cc}-R_\alpha^{cl}$ for each internal arrow $\alpha$. Elements of $\widehat A_Q$ are possibly infinite linear combinations of (finite) paths of $Q$, with multiplication induced by composition.

## Path-Consistency {#sec:hc}

We now define path-consistency, which is a nice condition on the equivalence classes of paths between two vertices. We prove some short lemmas about path-consistent models. A path $p$ in $Q$ is also a path in the surface $S(Q)$. We thus say that paths $p$ and $q$ of $Q$ are *homotopic* if they are homotopic as paths in $S(Q)$.

**Definition 7**. A path $p$ in a dimer model $Q$ is *minimal* if we may not write $[p]=[qf^m]$ for any $m\geq1$.

**Definition 8**.
A dimer model $Q=(Q_0,Q_1,Q_2)$ such that $S(Q)$ is not a sphere is *path-consistent* if it satisfies the following *path-consistency condition*: Fix vertices $v_1$ and $v_2$ of $Q$. For any homotopy class $C$ of paths in $S(Q)$ from $v_1$ to $v_2$ there is a minimal path $p_{v_2v_1}^C$, unique up to path-equivalence, with the property that any path $p$ from $v_1$ to $v_2$ in $Q$ in the homotopy class $C$ satisfies $[p]=[f^mp_{v_2v_1}^C]$ for a unique nonnegative integer $m$. We call $m$ the *c-value* of $p$. **Remark 9**. We require a path-consistent model not to be on a sphere in order to show in Theorem [\[thm:cons-alg-str\]](#thm:cons-alg-str){reference-type="ref" reference="thm:cons-alg-str"} that path-consistency is equivalent to strand-consistency. This result is not true if we allow dimer models on spheres (see Remark [Remark 46](#remk:alg-cons-no-sphere){reference-type="ref" reference="remk:alg-cons-no-sphere"}). Many of the other results we prove about path-consistent models will still be true if we allow dimer models on spheres. We remark that equivalent paths of a general dimer model must be homotopic, so in some sense this is the "lowest number of path equivalence classes" that one could hope for in a dimer model. **Lemma 10**. *If $p$ and $q$ are paths in a path-consistent dimer model $Q$ with $h(p)=t(q)$, then the c-value of the composition $qp$ is greater than or equal to the c-value of $p$ plus the c-value of $q$.* *Proof.* If $p$ and $q$ are paths in a path-consistent dimer model $Q$ with $h(p)=t(q)$, then we may write $[p]=[f^{m_p}r_p]$ and $[q]=[f^{m_q}r_q]$ for some minimal paths $r_p$ and $r_q$. Then $m_p$ is the c-value of $p$ and $m_q$ is the c-value of $q$. Then, using the fact that $[f]$ is central, we calculate $$[qp]=[f^{m_q}r_qf^{m_p}r_p]=[f^{m_q+m_p}r_qr_p].$$ We have shown that $[f^{m_q+m_p}]$ may be factored out of $[qp]$, hence the c-value of $[qp]$ is greater than or equal to $m_q+m_p$. 
◻

Two paths are equivalent if and only if there is a sequence of basic morphs taking one to the other. Since a basic morph cannot remove some arrows without replacing them with other arrows, the constant path is the unique minimal path from a vertex to itself. This leads to the following remark.

**Remark 11**. If $Q$ is path-consistent and $p$ is a nonconstant null-homotopic cycle, then the c-value of $p$ is positive.

It is an important fact that all face-paths of a dimer model are null-homotopic. This lets us show the following.

**Lemma 12**. *Let $Q$ be a path-consistent dimer model. Any proper subpath of a face-path of $Q$ is minimal.*

*Proof.* Suppose $p$ is a proper subpath of a face-path $f_v$ starting at $v:=t(p)$. Let $p'$ be the subpath of $f_v$ such that $p'p=f_v$. If $p$ is not minimal, then by definition of path-consistency, $[p]=[rf^m]$ for some positive integer $m$ and some minimal path $r$ from $v$ to $h(p)$ homotopic to $p$. Then $[p'p]=[p'rf^m]=[p'rf_v^m]$. Moreover, $r$ is homotopic to $p$, hence $p'r$ is homotopic to the face-path $p'p$ and is therefore null-homotopic. Then by Remark [Remark 11](#rem:cycle-cant-be-constant){reference-type="ref" reference="rem:cycle-cant-be-constant"} it has some positive c-value $m'$. By definition of path-consistency, $[p'r]=[f_v^{m'}]$. It follows that $$[f_v]=[p'p]=[p'rf_v^m]=[f_v^{m'}f_v^m]=[f_v^{m'+m}],$$ which is a contradiction since $m+m'\geq1+1=2$ but all face-paths trivially have a c-value of 1. It follows that $p$ is minimal. ◻

**Lemma 13**. *Let $Q$ be a simply connected path-consistent dimer model. No proper subpath of a face-path of $Q$ is a cycle.*

*Proof.* Suppose $p$ is a proper subpath of a face-path of $Q$ which is a cycle. By Lemma [Lemma 12](#lem:subpath-of-facepath-is-thin){reference-type="ref" reference="lem:subpath-of-facepath-is-thin"}, the path $p$ is minimal. Since $Q$ is simply connected, $p$ is null-homotopic.
The only minimal null-homotopic path from a vertex to itself is the constant path, so this is a contradiction. ◻ ## Universal Covers We define the notion of a *universal cover dimer model* and show that it behaves well with respect to path-consistency and the cancellation property. Let $Q$ be a dimer model. We construct a dimer model $\widetilde Q$ over the universal cover $\widetilde{S(Q)}$ of $S(Q)$. We consider $Q$ to be embedded into $S(Q)$, so that a vertex $v\in Q$ may be considered as a point of $S(Q)$. Similarly, we describe $\widetilde Q$ embedded into $\widetilde{S(Q)}$. The vertices of $\widetilde Q$ are the points $\tilde v\in\widetilde{S(Q)}$ which descend to vertices $v$ of $S(Q)$. For any arrow $\alpha$ from $v$ to $w$ in $Q$ and any $\tilde v\in\widetilde Q_0$, there is an arrow $\alpha_{\tilde v}$ obtained by lifting $\alpha$ as a path in $Q$ up to a path in $\widetilde Q$ starting at $\tilde v$. The face-paths of $\widetilde Q$ are similarly induced by lifting the face-paths of $Q$. It is not hard to see that $\widetilde Q$ is a (locally finite) dimer model. The following facts follow by universal cover theory. 1. If $S(Q)$ is not a sphere, then $\widetilde{S(Q)}$ is not a sphere. 2. The surface $\widetilde{S(Q)}$ is simply connected. 3. Let $\tilde p$ and $\tilde q$ be paths in $\widetilde Q$ with the same start and end vertices which are lifts of paths $p$ and $q$ of $Q$, respectively. Then $[p]=[q]$ in $A_Q$ if and only if $[\tilde p]=[\tilde q]$ in $A_{\widetilde Q}$. Universal covers are useful to consider because simple cycles on the universal cover have well-defined interiors. The following remark gives another advantage of universal covers. **Remark 14**. Choose vertices $\tilde v_1$ and $\tilde v_2$ of $\widetilde Q$. Any two paths from $\tilde v_1$ to $\tilde v_2$ are homotopic, hence descend to homotopic paths in $Q$. 
Then this choice of vertices gives a homotopy class $C$ of paths between the corresponding vertices $v_1$ and $v_2$ of $Q$. The paths from $\tilde v_1$ to $\tilde v_2$ in $\widetilde Q$ correspond precisely to the paths from $v_1$ to $v_2$ in the homotopy class $C$. Equivalence classes of paths in the dimer algebra are respected by this correspondence. Remark [Remark 14](#remk:Q-hat-Q-path-correspondence){reference-type="ref" reference="remk:Q-hat-Q-path-correspondence"} relates $Q$ and $\widetilde Q$ in a useful way. Many of our technical results require simple connectedness. Passing to the universal cover model allows us to prove things about general dimer models $Q$ by considering their simply connected universal cover models. In particular, we may study path-consistency of $Q$ by studying path-consistency of $\widetilde Q$. **Proposition 15**. *A dimer model $Q$ is path-consistent if and only if $\widetilde Q$ is path-consistent.* *Proof.* Suppose $Q$ is path-consistent. Choose vertices $\tilde v_1$ and $\tilde v_2$ of $\widetilde Q$; these correspond to vertices $v_1$ and $v_2$ of $Q$ and induce a homotopy class $C$ of paths between them. By Remark [Remark 14](#remk:Q-hat-Q-path-correspondence){reference-type="ref" reference="remk:Q-hat-Q-path-correspondence"}, the paths in $\widetilde Q$ from $\tilde v_1$ to $\tilde v_2$ correspond to the paths in $Q$ from $v_1$ to $v_2$ in $C$. By path-consistency of $Q$, each such path in $Q$ is equivalent to $p_{v_2v_1}^C$ composed with some power of a face-path, hence $\widetilde Q$ is path-consistent. The other direction is similar. ◻ **Definition 16**. A path algebra with relations $A=kQ/I$ is called a *cancellation algebra* (or *cancellative*) if for paths $p,q,a,b$ of $Q$ with $h(a)=t(p)=t(q)$ and $t(b)=h(p)=h(q)$, we have $[pa]=[qa]\iff [p]=[q]$ and $[bp]=[bq]\iff [p]=[q]$. We call this the *cancellation property*. **Lemma 17**. 
*$A_Q$ is a cancellation algebra if and only if $A_{\widetilde Q}$ is a cancellation algebra.* *Proof.* This follows because $[p]=[q]$ in $A_Q$ if and only if $[\tilde p]=[\tilde q]$ in $A_{\widetilde Q}$, where $\tilde p$ and $\tilde q$ are any lifts of $p$ and $q$ to $\widetilde Q$ with $t(\tilde p)=t(\tilde q)$. ◻ **Lemma 18**. *Let $p$ be a path in $Q$ of length $m$. Then the composition of face-paths $f^m_{t(p)}$ is equivalent to a path beginning with $p$.* *Proof.* Let $p=\gamma_m\dots\gamma_1$ be a product of arrows. For each $\gamma_i$, let $R_{\gamma_i}$ be a return path of $\gamma_i$. The path $R_{\gamma_1}\dots R_{\gamma_m}\gamma_m\dots\gamma_1$ is equivalent to $f^m_{t(p)}$ and begins with $p$. ◻ We now show that the notions of path-consistency and cancellativity coincide. **Theorem 19**. *A dimer model $Q$ is path-consistent if and only if $A_Q$ is a cancellation algebra and $S(Q)$ is not a sphere.* *Proof.* By Lemma [Lemma 17](#lem:Q-canc-iff-hat-Q-canc){reference-type="ref" reference="lem:Q-canc-iff-hat-Q-canc"} and Proposition [Proposition 15](#prop:Q-consistent-iff-hat-Q-consistent){reference-type="ref" reference="prop:Q-consistent-iff-hat-Q-consistent"}, it suffices to show the result on the universal cover $\widetilde Q$. Suppose that $\widetilde Q$ is path-consistent. Then $S(Q)$ is not a sphere by definition. We prove that $A_{\widetilde Q}$ is a cancellation algebra. Accordingly, take paths $p,q,a$ of ${\widetilde Q}$ with $h(a)=t(p)=t(q)$ and $h(p)=h(q)$. We show that $[pa]=[qa]\implies [p]=[q]$. The case of left composition is symmetric. By path-consistency, we may write $[p]=[rf^{m_p}]$ and $[q]=[rf^{m_q}]$, where $r$ is a minimal path from $t(p)$ to $h(p)$, necessarily homotopic to $p$ and $q$ by simple connectedness. Given $[pa]=[qa]$, we have $[f^{m_p}ra]=[f^{m_q}ra]$. Then $m_p=m_q$ by path-consistency. We have shown that $$[q]=[rf^{m_q}]=[rf^{m_p}]=[p]$$ and the proof of this direction is complete. 
Suppose now that $A_{\widetilde Q}$ is a cancellation algebra and $S(Q)$ is not a sphere. We first show that only a finite number of face-paths may be factored out of any path $p$, and that this number is bounded by the number of arrows in $p$. Suppose to the contrary that there is some path $p$ of $\widetilde Q$ with $m$ arrows such that we may write $[p]=[p'f^{m'}]$ for some $m'>m$. By Lemma [Lemma 18](#lem:many-cycles-factor-through-path){reference-type="ref" reference="lem:many-cycles-factor-through-path"}, $[p'f^{m'}]=[lp]$ for some nonconstant cycle $l$ at $h(p)$. Applying the cancellation property to the equation $[p]=[p'f^{m'}]=[lp]$ gives that $l$ is equivalent to the constant path, which is a contradiction. This shows that only a finite number of face-paths may be factored out of any path of ${\widetilde Q}$. Then any path $p$ of ${\widetilde Q}$ is equivalent to $rf^m$ for a minimal path $r$ and a nonnegative integer $m$. Suppose $[p]=[rf^m]=[r'f^{m'}]$ for some nonnegative integers $m$ and $m'$ and minimal paths $r$ and $r'$. Without loss of generality suppose $m\leq m'$. By the cancellation property, $[r]=[r'f^{m'-m}]$. Then if $m'>m$ we have factored a face-path out of $r$, contradicting minimality of $r$, hence $m'=m$ and $[r]=[r']$.

Then if ${\widetilde Q}$ is not path-consistent, there must be minimal paths $p$ and $q$ between the same vertices which are not equivalent. Take $m$ which is greater than the length of $p$ and the length of $q$. By Lemma [Lemma 18](#lem:many-cycles-factor-through-path){reference-type="ref" reference="lem:many-cycles-factor-through-path"}, $[pf^m]=[pq'q]$ for some path $q'$ from $h(q)$ to $t(q)$. Suppose we have shown that the cycle $pq'$ is equivalent to $f^{m'}$ for some $m'$ (this is established in the final paragraph below). Then $$[pf^m]=[pq'q]=[qf^{m'}].$$ By the cancellation property, we get either $[p]=[qf^{m'-m}]$ or $[q]=[pf^{m-m'}]$. Since $p$ and $q$ are minimal, we have $m=m'$ and $[p]=[q]$, contradicting our initial assumption.
The proof is then complete if we show that any cycle is equivalent to a composition of face-paths. We do so now. Suppose otherwise, and take a simple cycle $l=\gamma_{s'}\dots\gamma_1$ which is not equivalent to a composition of face-paths and which bounds minimal area with respect to this property. For any arrow $\gamma_i\in l$, let $R_{\gamma_i}$ be the return path of $\gamma_i$ inside of the disk bounded by $l$. As in Lemma [Lemma 18](#lem:many-cycles-factor-through-path){reference-type="ref" reference="lem:many-cycles-factor-through-path"}, set $l':=R_{\gamma_1}\dots R_{\gamma_{s'}}$. Then $l'l=R_{\gamma_1}\dots R_{\gamma_{s'}}\gamma_{s'}\dots\gamma_1$ is equivalent to $f_{t(l)}^{s'}$. Moreover, $l'$ is a cycle lying in the area bounded by $l$. If $l'$ is a simple cycle strictly contained in $l$, then $l'$ is equivalent to a composition of face-paths by the choice of $l$. If $l'$ is not a simple cycle, then one by one we remove simple proper subcycles of $l'$, each of which is strictly contained in the area defined by $l$ and hence is equivalent to a composition of face-paths; we then replace them with compositions of face-paths until we get $[l']=[f_{t(l)}^{s}]$ for some $s$. Either way, $l'$ is equivalent to some composition of face-paths $f_{t(l)}^s$. Then $[f_{t(l)}^{s'}]=[l'l]=[f_{t(l)}^sl]$. Since $l'$ is a subpath of $l'l$, we must have that ${s'}\geq s$. Then the cancellation property gives $[f_{t(l)}^{{s'}-s}]=[l]$ and $l$ is equivalent to a composition of face-paths, contradicting the choice of $l$. This completes the proof that all cycles are equivalent to a composition of face-paths and yields the theorem. ◻

If $A_Q$ is a cancellation algebra but $S(Q)$ is a sphere, then the proof of the above theorem still shows that $Q$ satisfies the path-consistency condition of Definition [Definition 8](#defn:algebraic-consistency){reference-type="ref" reference="defn:algebraic-consistency"}.
See Remark [Remark 46](#remk:alg-cons-no-sphere){reference-type="ref" reference="remk:alg-cons-no-sphere"} for an explanation of why we require $S(Q)$ to not be a sphere in this definition.

## Winding Numbers {#ssec:wnt}

In later sections, we will make use of the winding number of a cycle around a point in a simply connected dimer model. We now set up notation and prove a lemma.

**Definition 20**. Let $p$ be a path in $\widetilde Q$ and let $F$ be a face of $\widetilde Q$. Let $q$ be a walk in the underlying undirected graph of $\widetilde Q$ from $h(p)$ to $t(p)$. We write $\textup{Wind}(qp,F)$ for the winding number of the path $qp$ around a point in the interior of $F$; this is independent of the chosen point, since $qp$ lies in the graph $\widetilde Q$ and hence avoids the interior of $F$.

**Lemma 21**. *Let $p$ be a path in $\widetilde Q$ and let $q$ be an undirected path in $\widetilde Q$ such that $qp$ is a cycle. Let $\alpha$ be a left-morphable arrow of $p$. Then for any face $F$ of $\widetilde Q$, $$\textup{Wind}(m_\alpha(p)q,F)= \begin{cases} \textup{Wind}(qp,F)-1 & F\in\{F_\alpha^{cl},F_\alpha^{cc}\}\\ \textup{Wind}(qp,F) & \text{else}. \end{cases}$$*

*Proof.* If $F=F_\alpha^{cc}$ or $F=F_\alpha^{cl}$, then $R_\alpha^{cc}$ winds around $F$ through some angle $0\leq\theta\leq2\pi$, while $R_\alpha^{cl}$ winds around $F$ through an angle of $\theta-2\pi$. Left-morphing at $\alpha$ switches the former for the latter, leading to a net decrease of $2\pi$ radians. Then $\textup{Wind}(m_\alpha(p)q,F)=\textup{Wind}(qp,F)-1$ in this case. If $F$ is any other face, then $R_\alpha^{cl}$ and $R_\alpha^{cc}$ do not wind differently around $F$ and the winding number does not change.
◻

# Morphs and Chains {#sec:morphsandchains}

[\[sec:actual-argument\]]{#sec:actual-argument label="sec:actual-argument"}[\[sec:mac\]]{#sec:mac label="sec:mac"} In this section, we prove some technical results about basic morphs with the goal of proving that a path-consistent and simply connected dimer model has no irreducible pairs (Theorem [Theorem 40](#prop:no-irreducible-pairs){reference-type="ref" reference="prop:no-irreducible-pairs"}). This result will be used in Section [4](#sec:pdsc){reference-type="ref" reference="sec:pdsc"} to characterize path-consistency in terms of the strand diagram of a dimer model.

In the preceding, we have used the fact that two paths $p$ and $q$ are equivalent if and only if there is a sequence of basic morphs taking $p$ to $q$. We now introduce the idea of a *chain* of morphable arrows, which allows us to talk about sequences of morphs applied to a path in special cases. Recall that a morphable arrow $\alpha$ of $p$ is *unambiguous* if $p$ has only one subpath which is a copy of $R_\alpha^{cl}$ or $R_\alpha^{cc}$. In this case, there is a unique path $p'=m_\alpha(p)$ obtained by replacing the subpath $R_\alpha^{cl}$ with $R_\alpha^{cc}$, or vice versa. If $\alpha_1,\dots,\alpha_r\in Q_1$ are such that each $\alpha_i$ is an unambiguous morphable arrow of $m_{\alpha_{i-1}}\circ\dots\circ m_{\alpha_1}(p)$ (interpreted as $p$ itself when $i=1$), we call the sequence $a=\alpha_r\dots\alpha_1$ a *morphable chain*, or simply a *chain*, of $p$. We introduce the notation $m_{a}(p):=m_{\alpha_r}\circ\dots\circ m_{\alpha_1}(p)$ and we say that $a$ is a chain from $p$ to $m_a(p)$. For some $i\in[r]$, we say that $\alpha_i$ is a left-morph (respectively right-morph) of $a$ if $\alpha_i$ is left-morphable (respectively right-morphable) with respect to $m_{\alpha_{i-1}\dots\alpha_{1}}(p)$. Note that since $\alpha_i$ is an *unambiguous* morphable arrow of this path, $\alpha_i$ is either a left-morph or a right-morph of $a$, but not both.
If $\alpha_i$ is a left-morph (respectively right-morph) for all $i$, we say that $a$ is a *left-chain* (respectively *right-chain*). Two chains $a$ and $b$ of $p$ are *equivalent* if $m_a(p)=m_b(p)$. Since we require morphable arrows of a chain to be unambiguous, it may be the case that paths $p$ and $q$ are equivalent despite there being no chain from $p$ to $q$. For example, this is true if $p$ and $q$ are equivalent but distinct and every morphable arrow of $p$ is ambiguous. In reasonable circumstances, however, the notion of a chain is often sufficient. For example, elementary paths have no ambiguous morphs, so two minimal paths are equivalent if and only if there is a chain from one to the other.

## Cycle-Removing Morphs

In this subsection, we let $\widetilde Q$ be a path-consistent and simply connected dimer model and we define a new type of morph which weakly decreases the c-value of a path and preserves the property of being an elementary path, a notion which we define now.

**Definition 22**. An *elementary* path in a dimer model $Q$ is a (possibly constant) path which is not a face-path and which contains no cycles as proper subpaths.

Note that an elementary path may never contain all arrows in a given face-path. Then if $p$ is elementary, no morphable arrow of $p$ is in $p$. Moreover, every morphable arrow of $p$ is unambiguous. We also have the following.

**Definition 23**. Let $p$ be an elementary path in a path-consistent dimer model ${\widetilde Q}$. Let $\alpha$ be a right-morphable arrow of $p$. If $m_\alpha(p)$ is elementary, we define the *cycle-removing right-morph* $\omega_\alpha(p)$ to be $m_\alpha(p)$. If not, write $p=p''R_\alpha^{cl}p'$ for subpaths $p'$ and $p''$ of $p$. This decomposition is unique because morphable arrows of elementary paths are unambiguous. Let $v_0:=h(\alpha)$ and number the vertices of $F_\alpha^{cc}$ counter-clockwise $h(\alpha)=v_0,v_1,\dots,v_m=t(\alpha)$. Let $a$ be the largest integer less than $m$ such that $v_a\in p'$.
Note that if $v_m\in p'$, then $p''$ is constant since $p$ is elementary. Let $b$ be the smallest integer greater than 0 such that $v_b\in p''$. Since $p=p''R_\alpha^{cl}p'$ is elementary, $p'$ and $p''$ do not intersect except possibly at the endpoints $t(p'),h(p'')$ if they coincide. Then any proper subcycle of $m_\alpha(p)=p''R_\alpha^{cc}p'$ must involve some $v_i$ for $i\in\{1,\dots,m-1\}$, hence either $a>0$ or $b<m$ or both. Moreover, $a\leq b$. Let $q'$ be the subpath of $p'$ from $t(p')$ to $v_a$. Let $R'$ be the subpath of $R_\alpha^{cc}$ from $v_a$ to $v_b$. Let $q''$ be the subpath of $p''$ from $v_b$ to $h(p'')$. If $q''R'q'$ is not a face-path, define the *cycle-removing right-morph* $\omega_\alpha(p)$ to be $q''R'q'$. Otherwise, define $\omega_\alpha(p)$ to be the constant path. For example, see Figures [\[fig:cycle-removing-example\]](#fig:cycle-removing-example){reference-type="ref" reference="fig:cycle-removing-example"} and [\[fig:cycle-removing-to-constant\]](#fig:cycle-removing-to-constant){reference-type="ref" reference="fig:cycle-removing-to-constant"}. We similarly define *cycle-removing left-morphs*.

Intuitively, the cycle-removing right-morph $\omega_\alpha(p)$ is obtained by removing the proper subcycles from $m_\alpha(p)$ to get an elementary path. Since $\widetilde Q$ is path-consistent, any cycle is equivalent to a composition of face-paths, hence $m_\alpha(p)$ is equivalent to $\omega_\alpha(p)f^m$ for some $m\geq0$. We define cycle-removing morphs only for simply connected and path-consistent dimer models because without these hypotheses, there may be cycles which are not equivalent to a composition of face-paths. Hence, we may have that $m_\alpha(p)$ is not equivalent to $\omega_\alpha(p)f^m$ for any $m\geq0$. If $Q$ is path-consistent but not simply connected, we can pass to $\widetilde Q$, do a cycle-removing morph, and pass back to $Q$.
The result is that we do the corresponding basic morph and remove *null-homotopic* cycles of $Q$. If $p$ is elementary and $m_\alpha(p)$ contains a proper subcycle, we say that $\alpha$ *creates a proper subcycle* of $p$. Observe that $\alpha$ creates a proper subcycle if and only if $\omega_\alpha(p)\neq m_\alpha(p)$. **Lemma 24**. *Let $p$ be an elementary path in a path-consistent quiver ${\widetilde Q}$ and let $\alpha$ be a right-morphable arrow of $p$. Then we have the following:* *[\[lrr:1\]]{#lrr:1 label="lrr:1"} The cycle-removing right-morph $\omega_\alpha(p)$ is elementary.* *[\[lrr:2\]]{#lrr:2 label="lrr:2"} The cycle-removing right-morph $\omega_\alpha(p)$ contains some arrow of $R_\alpha^{cc}$ if and only if $\omega_\alpha(p)$ is nonconstant.* *[\[lrr:3\]]{#lrr:3 label="lrr:3"} The arrow $\alpha$ creates a proper subcycle if and only if $\omega_\alpha(p)$ does not contain all of $R_\alpha^{cc}$.* *Proof.* Parts [\[lrr:1\]](#lrr:1){reference-type="eqref" reference="lrr:1"} and [\[lrr:3\]](#lrr:3){reference-type="eqref" reference="lrr:3"} follow from the definitions. We prove [\[lrr:2\]](#lrr:2){reference-type="eqref" reference="lrr:2"}. Certainly if $\omega_\alpha(p)$ is constant then it contains no arrow of $R_\alpha^{cc}$. On the other hand, if $\omega_\alpha(p)$ is nonconstant, then in the notation of Definition [Definition 23](#defn:cycle-removing-morph){reference-type="ref" reference="defn:cycle-removing-morph"} we must have $a<b$. Then some arrow of $R_\alpha^{cc}$ is contained in $R'$, and hence is contained in $\omega_\alpha(p)=q''R'q'$. ◻ **Remark 25**. Many of the definitions and results appearing above as well as later in the text are symmetric. If one switches "left" for "right" and "clockwise" for "counter-clockwise" in the statements and proofs, the analogous arguments and results hold. We will refer to these as dual results without stating them separately. 
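The subcycle condition on elementary paths (Definition 22) depends only on the vertex sequence of a path. A small sketch; the vertex-list representation is our own, and the separate "not a face-path" requirement is not checked here:

```python
def has_proper_subcycle(vertex_seq):
    """Whether a path, given by its vertex sequence v_0, ..., v_n,
    contains a cycle as a *proper* subpath: some vertex repeats, apart
    from the single allowed coincidence v_0 == v_n when the whole path
    is itself a cycle."""
    last = len(vertex_seq) - 1
    first_seen = {}
    for i, v in enumerate(vertex_seq):
        if v in first_seen and not (first_seen[v] == 0 and i == last):
            return True
        first_seen.setdefault(v, i)
    return False
```

Under this convention, a full cycle such as `["a", "b", "c", "a"]` has no proper subcycle, while any interior repetition produces one.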
## Left, Right, Good, and Bad

We now define a notion of one path being to the right of another. We obtain some conditions under which cycle-removing morphs behave well with respect to this concept of left and right.

**Definition 26**. Let $Q=(Q_0,Q_1,Q_2)$ be a dimer model. A *disk submodel* of $Q$ is a dimer model on a disk $Q'=(Q'_0,Q'_1,Q'_2)$ such that $Q'_0\subseteq Q_0$ and $Q'_1$ (respectively $Q'_2$) consists of all arrows in $Q_1$ (respectively faces in $Q_2$) containing only vertices in $Q'_0$.

If $Q$ is a dimer model, then there is an embedding of $Q$ into an orientable surface. A disk submodel is obtained by choosing a set of faces of $Q$ in this embedding which make up a disk. All disk submodels of $Q$ are obtained in this way.

**Definition 27**. Suppose $p$ and $q$ are elementary paths in a simply connected quiver ${\widetilde Q}$ which is not on a sphere with $t(p)=t(q)$ and $h(p)=h(q)$. We say that $p$ is *to the right of* $q$ if the following conditions are satisfied.

1. The shared vertices of $p$ and $q$ may be ordered $v_1,\dots,v_m$ such that $v_i$ is the $i$th vertex among $\{v_1,\dots,v_m\}$ to appear in $p$ and is the $i$th such vertex to appear in $q$. We remark that if $p$ is an elementary cycle and $q$ is constant, then $m=2$ and $v_1=v_2$.

2. For all $i\in[m-1]$, if $p_i$ (respectively $q_i$) is the subpath of $p$ (respectively $q$) from $v_i$ to $v_{i+1}$, then either $p_i$ and $q_i$ are the same arrow or $q_i^{-1}p_i$ is a counter-clockwise (necessarily elementary) cycle. In the latter case, we say that $p$ and $q$ bound the disk $q_i^{-1}p_i$. See Figure [\[ex:disksbounded\]](#ex:disksbounded){reference-type="ref" reference="ex:disksbounded"}.

We remark that if $q$ is constant and $p$ is an elementary counter-clockwise cycle, then $p$ is to the right of $q$ with $m=2$ and $v_1=v_2$. Similarly, an elementary counter-clockwise cycle at $v$ is to the left of the constant path at $v$.
We warn the reader that this does not form a partial order on paths in $\widetilde Q$ with the same start and end vertices as the relation is not transitive. See Figure [\[fig:lr-not-trans\]](#fig:lr-not-trans){reference-type="ref" reference="fig:lr-not-trans"} for an example. **Definition 28**. Let $p$ be an elementary path of $\widetilde Q$ and let $\alpha$ be a morphable arrow of $p$. We say that $\alpha$ is *good* if $\alpha$ is a left-morphable arrow or if $\alpha$ is a right-morphable arrow such that $m_\alpha(p)$ does not contain a proper counter-clockwise subcycle. If $\alpha$ is a right-morphable arrow such that $m_\alpha(p)$ contains a proper counter-clockwise subcycle, then we say that $\alpha$ is *bad*. A cycle-removing chain $a=\alpha_r\dots\alpha_1$ of $p$ is *good* if each $\alpha_i$ is a good morphable arrow of $\omega_{\alpha_{i-1}\dots\alpha_{1}}(p)$. Otherwise, $a$ is *bad*. Good morphs are useful because a good cycle-removing morph behaves reasonably well with respect to the notion of one path being to the left or right of another. Consider the following. **Lemma 29**. 1. *[\[lrl:1\]]{#lrl:1 label="lrl:1"} Let $q$ be an elementary path in $\widetilde Q$ and let $p$ be any elementary path to the right of $q$. [\[rlrrl1\]]{#rlrrl1 label="rlrrl1"} Let $\beta$ be a left-morphable arrow of $p$ which is not a left-morphable arrow of $q$. Then $\omega_\beta(p)$ is to the right of $q$.* 2. *[\[lrl:2\]]{#lrl:2 label="lrl:2"} Let $p$ be an elementary counter-clockwise cycle in $\widetilde Q$ and let $\alpha$ be a right-morphable arrow of $p$ such that $m_\alpha(p)$ has no proper counter-clockwise subcycle. Then $p$ is contained in the area enclosed by $\omega_\alpha(p)$.* *Proof.* To see [\[lrl:1\]](#lrl:1){reference-type="eqref" reference="lrl:1"}, it suffices to reduce to the case where $p$ and $q$ share only their start and end vertices. 
In this case, they bound one disk $q^{-1}p$ and a left-morph of $p$ at $\beta$ results in a path contained in the area enclosed by $q^{-1}p$. We now prove [\[lrl:2\]](#lrl:2){reference-type="eqref" reference="lrl:2"}. We show that $\textup{Wind}(\omega_\alpha(p),F)\geq\textup{Wind}(p,F)$ for any face $F$. It follows that $\omega_\alpha(p)$ is also an elementary counter-clockwise cycle. Since any face $F$ is in the interior of $p$ (respectively $\omega_\alpha(p)$) if and only if $\textup{Wind}(p,F)>0$ (respectively $\textup{Wind}(\omega_\alpha(p),F)>0$), it further follows that if $F$ is in the interior of $p$ then $F$ is in the interior of $\omega_\alpha(p)$ and the statement is proven. Let $p$ be an elementary counter-clockwise cycle in $\widetilde Q$ and let $\alpha$ be a right-morphable arrow of $p$ such that $m_\alpha(p)$ has no proper counter-clockwise subcycle. By Lemma [Lemma 21](#lem:rotation-number-formula){reference-type="ref" reference="lem:rotation-number-formula"}, $\textup{Wind}(m_\alpha(p),F)\geq\textup{Wind}(p,F)$. The path $\omega_\alpha(p)$ is obtained from $m_\alpha(p)$ by deleting some number of proper elementary subcycles. All of these are clockwise, and hence have a winding number less than or equal to zero around $F$, by assumption. It follows that their deletion can only increase the winding number around $F$ and we have $$\textup{Wind}(\omega_\alpha(p),F)\geq\textup{Wind}(m_\alpha(p),F)\geq\textup{Wind}(p,F).$$ If $F$ is enclosed within $p$, then $\textup{Wind}(p,F)=1$ and the above inequality forces $\textup{Wind}(\omega_\alpha(p),F)=1$. It follows that $F$ is enclosed within $\omega_\alpha(p)$. This ends the proof. ◻ Note that the conditions of [\[lrl:1\]](#lrl:1){reference-type="eqref" reference="lrl:1"} and [\[lrl:2\]](#lrl:2){reference-type="eqref" reference="lrl:2"} necessitate that the morphable arrows considered are good. 
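The winding-number bookkeeping above uses the combinatorial $\textup{Wind}$ of Lemma 21, but the quantity being tracked behaves like the classical winding number of a closed curve around a point. The following minimal numeric sketch of that classical notion is purely illustrative; the function name and the polygonal encoding are ours, not part of the dimer-model formalism.

```python
import math

def winding_number(cycle, point):
    """Winding number of a closed polygonal path around a point,
    computed as the total signed angle swept, divided by 2*pi."""
    total = 0.0
    n = len(cycle)
    for i in range(n):
        # Vectors from the point to consecutive vertices of the cycle.
        x1, y1 = cycle[i][0] - point[0], cycle[i][1] - point[1]
        x2, y2 = cycle[(i + 1) % n][0] - point[0], cycle[(i + 1) % n][1] - point[1]
        # Signed angle from (x1, y1) to (x2, y2): atan2(cross, dot).
        total += math.atan2(x1 * y2 - y1 * x2, x1 * x2 + y1 * y2)
    return round(total / (2 * math.pi))

# A counter-clockwise square winds once around an interior point,
# zero times around an exterior one, and -1 times when reversed.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
```

In this picture, deleting a clockwise subcycle (winding number at most zero around a point it does not enclose) can only increase the total winding number, which is exactly the monotonicity used in the proof above.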
Some caution must be shown when considering cycle-removing morphs and moving paths to the right or left, particularly when the morphs are bad. For example, Figure [\[fig:right-creates-counterclockwise\]](#fig:right-creates-counterclockwise){reference-type="ref" reference="fig:right-creates-counterclockwise"} shows an example of a path $p$ with a right-morphable arrow $\alpha$ such that $\omega_\alpha(p)$ is to the left of $p$, justifying the limited scope of Lemma [Lemma 29](#lem:right-left){reference-type="ref" reference="lem:right-left"} [\[lrl:1\]](#lrl:1){reference-type="eqref" reference="lrl:1"}. Figure [\[fig:right-creates-counterclockwise\]](#fig:right-creates-counterclockwise){reference-type="ref" reference="fig:right-creates-counterclockwise"} also shows that Lemma [Lemma 29](#lem:right-left){reference-type="ref" reference="lem:right-left"} [\[lrl:2\]](#lrl:2){reference-type="eqref" reference="lrl:2"} may fail when $\alpha$ is bad. ## Irreducible Pairs The goal of this section is to show that certain pairs of paths called *irreducible pairs* (Definition [Definition 38](#defn:irreducible-pair){reference-type="ref" reference="defn:irreducible-pair"}) cannot appear in simply connected path-consistent dimer models. We begin with some technical lemmas about cycle-removing morphs. **Lemma 30**. *Let $p$ be an elementary path in a path-consistent quiver $\widetilde Q$. Let $\alpha$ be a right-morphable arrow of $p$ and let $\beta$ be a left-morphable arrow of $\omega_\alpha(p)$ distinct from $\alpha$. Then $\beta$ is a left-morphable arrow of $p$.* *Proof.* Any subpath of $\omega_\alpha(p)$ which is not a subpath of $p$ must contain some arrow $\gamma$ of $R_\alpha^{cc}$. Since $\omega_\alpha(p)$ does not contain $\alpha$, the only counter-clockwise return path which $\gamma$ could be a part of is $R_\alpha^{cc}$. This shows that $\omega_\alpha(p)$ contains no counter-clockwise return paths which are not in $p$, other than possibly $R_\alpha^{cc}$. 
The result follows. ◻

**Definition 31**. We say that a chain $a=\alpha_r\dots\alpha_1$ of an elementary path $p$ is *crossing-creating* if $m_{\alpha_i\dots\alpha_1}(p)$ is elementary for $i<r$ but $m_a(p)$ is not elementary.

If a path $p$ of a path-consistent dimer model is not minimal then there must be a crossing-creating chain of $p$. Our goal is now to show that there must be a *good* crossing-creating chain of $p$ (Proposition [Proposition 36](#lem:cycle-removing-no-right-cc-cycle){reference-type="ref" reference="lem:cycle-removing-no-right-cc-cycle"}). Our first step is Lemmas [Lemma 32](#lem:right-no-creates-cc-left-ccc-cycle-version){reference-type="ref" reference="lem:right-no-creates-cc-left-ccc-cycle-version"} and [Lemma 33](#lem:right-creates-cc-left-ccc-cycle-version){reference-type="ref" reference="lem:right-creates-cc-left-ccc-cycle-version"}. These show that an elementary path must have a crossing-creating left-chain under certain technical conditions.

**Lemma 32**. *Let $p$ be an elementary counter-clockwise cycle in $\widetilde Q$. Let $\alpha$ be a right-morphable arrow of $p$ such that $m_\alpha(p)$ has no proper counter-clockwise subcycle. Suppose there is a crossing-creating left-chain of $\omega_\alpha(p)$. Then there is a crossing-creating left-chain of $p$.*

*Proof.* Let $b=\beta_s\dots\beta_1$ be a crossing-creating left-chain of $\omega_\alpha(p)$. Let $j$ be maximal such that $b':=\beta_j\dots\beta_1$ is a left-chain of $p$ as well as $\omega_\alpha(p)$. If $b'$ is crossing-creating for $p$, then we are done, so suppose $b'$ is not a crossing-creating chain of $p$. Suppose first that $b$ is a left-chain of $p$, and hence that $j=s$ and $b'=b$. Since $b$ creates a crossing of $\omega_\alpha(p)$ but not of $p$, there must be a proper subcycle of $m_{b}(\omega_\alpha(p))$ starting at a vertex $v$ of $\omega_\alpha(p)$ which is not a vertex of $p$.
Then left-morphing $m_{\beta_{s-1}\dots\beta_{1}}(p)$ at $\beta_s$ must add $v$ to $p$. By Lemma [Lemma 29](#lem:right-left){reference-type="ref" reference="lem:right-left"} [\[lrl:2\]](#lrl:2){reference-type="eqref" reference="lrl:2"}, the area bounded by $p$ is contained in the area bounded by $\omega_\alpha(p)$. In particular, vertices of $\omega_\alpha(p)$ which are not vertices of $p$ are not in the area bounded by $p$. Since left-morphing $p$ at $b$ does not create any crossings, $p=\omega_b(\omega_b(p))$ is obtained by applying the (cycle-removing) right-chain $b$ to $\omega_b(p)$. Then Lemma [Lemma 29](#lem:right-left){reference-type="ref" reference="lem:right-left"} [\[lrl:2\]](#lrl:2){reference-type="eqref" reference="lrl:2"} applied to $\omega_b(p)$ shows that $\omega_b(p)$ is strictly contained in the area bounded by $p$. Since $v$ is not in the area bounded by $p$, the vertex $v$ cannot be in $\omega_b(p)$. This is a contradiction. It follows that $b$ is not a left-chain of $p$ and hence that $j<s$. We claim that $R_\alpha^{cl}\in m_{\beta_{j'}\dots\beta_{1}}(p)$ and that no arrow of $F_\alpha^{cl}$ is in $m_{\beta_{j'}\dots\beta_{1}}(\omega_\alpha(p))$ for any $1\leq j'\leq j$. Since $R_\alpha^{cl}\in p$ and $\beta_1$ does not create a crossing of $p$, we must have $\beta_1\not\in F_\alpha^{cl}$. Since no arrow of $F_\alpha^{cl}$ is in $\omega_\alpha(p)$ and $R_{\beta_1}^{cc}\in\omega_\alpha(p)$, for any $\alpha'\in F_\alpha^{cl}$ we must have $\beta_1\not\in F_{\alpha'}^{cc}$. It follows that $R_\alpha^{cl}$ is in $m_{\beta_1}(p)$ and that no arrow of $F_\alpha^{cl}$ is in $m_{\beta_1}(\omega_\alpha(p))$. We repeat this argument to see that $R_\alpha^{cl}\in m_{\beta_{j'}\dots\beta_{1}}(p)$ and that no arrow of $F_\alpha^{cl}$ is in $m_{\beta_{j'}\dots\beta_{1}}(\omega_\alpha(p))$ for any $1\leq j'\leq j$, proving the claim.
First, we suppose that $m_\alpha(p)$ is not elementary and hence that $\omega_\alpha(p)$ does not contain all of $R_\alpha^{cc}$. In particular, either the arrow $\alpha'$ following $\alpha$ in $R_\alpha^{cc}$ or the arrow $\alpha''$ preceding $\alpha$ in $R_\alpha^{cc}$ (or both) are absent from $\omega_\alpha(p)$. Suppose that $\alpha'$ is not in $\omega_\alpha(p)$; the $\alpha''$ case is the same. Since $b'$ is a left-chain of $p$ and $R_\alpha^{cl}$ is in $m_{\beta_{j'}\dots\beta_{1}}(p)$ by the claim above, the arrows $\alpha'$ and $\alpha$ may not be added by any morph in $b'$ without creating a crossing in $p$, which would contradict our choice of $b'$. Hence, neither $\alpha'$ nor $\alpha$ is in $m_{\beta_j\dots\beta_1}(\omega_\alpha(p))$. In particular, no return path of an arrow in $F_\alpha^{cc}$ is a subpath of $m_{\beta_j\dots\beta_1}(\omega_\alpha(p))$. Since $\beta_{j+1}$ is a left-morphable arrow of $m_{b'}(\omega_\alpha(p))$ but not of $m_{b'}(p)$, it must be the case that $R_{\beta_{j+1}}^{cc}$ is a subpath of $m_{b'}(\omega_\alpha(p))$ but not of $m_{b'}(p)$. Since the only such subpaths contain some arrow of $R_\alpha^{cc}$, we must have that $\beta_{j+1}\in F_\alpha^{cc}$. This contradicts the fact that no return path in $F_\alpha^{cc}$ is a subpath of $m_{\beta_j\dots\beta_1}(\omega_\alpha(p))$. On the other hand, suppose that $m_\alpha(p)$ is elementary, and hence that $\omega_\alpha(p)=m_\alpha(p)$. As above, the arrow $\beta_{j+1}$ must be in $F_\alpha^{cc}$. If $\beta_{j+1}\neq\alpha$, then $\alpha\in m_{\beta_j\dots\beta_1}(\omega_\alpha(p))$, contradicting the claim in the third paragraph, so $\beta_{j+1}=\alpha$. Consider the chain $\beta_{j+1}\beta_j\dots\beta_1\alpha$ of $p$. The left-morph $\beta_{j+1}$ cancels out the right-morph $\alpha$ in this chain, hence the left-chain $\beta_{j+2}\beta_j\dots\beta_1$ of $p$ is equivalent to $\beta_{j+2}\dots\beta_2\beta_1\alpha$. 
Then $\beta_s\dots\beta_{j+2}\beta_j\dots\beta_1$ is a crossing-creating left-chain of $p$. This completes the proof. ◻

**Lemma 33**. *Let $p$ be an elementary path in $\widetilde Q$. Let $\alpha$ be a right-morphable arrow of $p$ and let $l$ be a proper elementary counter-clockwise subcycle of $m_\alpha(p)$. Suppose there is a crossing-creating left-chain of $l$. Then there is a crossing-creating left-chain of $p$.*

*Proof.* The proof is the same as that of Lemma [Lemma 32](#lem:right-no-creates-cc-left-ccc-cycle-version){reference-type="ref" reference="lem:right-no-creates-cc-left-ccc-cycle-version"}, with $l$ taking the place of $\omega_\alpha(p)$. ◻

**Lemma 34**. *Let $p$ be an elementary path and let $\alpha$ be a right-morphable arrow of $p$. Then $m_\alpha(p)$ does not contain a counter-clockwise face-path.*

*Proof.* Any arrow added by right-morphing at $\alpha$ is only a part of one counter-clockwise face, which is $F_\alpha^{cc}$. Since $p$ is elementary and contains $R_\alpha^{cl}$, the arrow $\alpha$ is not in $p$ and hence is not in $m_\alpha(p)$. The result follows. ◻

Before finally proving that any elementary path has a good crossing-creating chain (Proposition [Proposition 36](#lem:cycle-removing-no-right-cc-cycle){reference-type="ref" reference="lem:cycle-removing-no-right-cc-cycle"}), we first prove the special case that any counter-clockwise elementary cycle has a good crossing-creating chain.

**Proposition 35**. *Let $p$ be a counter-clockwise elementary cycle in a path-consistent dimer model $\widetilde Q$. Then $p$ has a crossing-creating left-chain.*

*Proof.* We induct on the c-value of $p$. For the base case, let $p$ be an elementary counter-clockwise cycle with a minimal c-value among elementary counter-clockwise cycles. By path-consistency, $p$ must be equivalent to a composition of face-paths and hence must have a crossing-creating chain.
If $p$ has a good crossing-creating chain $a=\alpha_r\dots\alpha_1$ such that $\alpha_r$ is a right-morph, then $\omega_a(p)$ is an elementary counter-clockwise cycle by Lemma [Lemma 29](#lem:right-left){reference-type="ref" reference="lem:right-left"} [\[lrl:2\]](#lrl:2){reference-type="eqref" reference="lrl:2"}, and necessarily has a lower c-value than $p$, contradicting our choice of $p$. Suppose $p$ has a good crossing-creating chain $a=\alpha_r\dots\alpha_1$ such that $\alpha_r$ is a left-morph. If $a$ is a left-chain, then there is nothing to show; otherwise, let $j$ be maximal such that $\alpha_j$ is a right-morph. Applying Lemma [Lemma 32](#lem:right-no-creates-cc-left-ccc-cycle-version){reference-type="ref" reference="lem:right-no-creates-cc-left-ccc-cycle-version"} to $m_{\alpha_{j-1}\dots\alpha_1}(p)$ shows that there is a crossing-creating left-chain of $m_{\alpha_{j-1}\dots\alpha_1}(p)$. Repeating this process for each right-morph of $a$ gives that there is a crossing-creating left-chain of $p$. If $p$ has a bad crossing-creating chain $a=\alpha_r\dots\alpha_1$, then $\alpha_r$ is a right-morph and $m_a(p)$ contains a proper subpath $l$ which is a simple counter-clockwise cycle. By Lemma [Lemma 34](#lem:right-morph-no-cc-facepath){reference-type="ref" reference="lem:right-morph-no-cc-facepath"}, $l$ is not a face-path. This contradicts our choice of $p$, since $l$ has a strictly lower c-value than $p$. This completes the proof of the base case.

Now suppose that $p$ is an elementary counter-clockwise cycle which does not have a minimal c-value. There must be a crossing-creating chain $c=\gamma_r\dots\gamma_1$ of $p$. We first show that there must be a crossing-creating chain $ab$ of $p$ where $b$ is a left-chain of $p$ and $a$ is a (possibly empty) right-chain of $m_b(p)$. Suppose $c$ is not already of this form and let $j$ be minimal such that $\gamma_{j-1}$ is a right-morph in $c$ and $\gamma_j$ is a left-morph in $c$.
If $\gamma_{j-1}$ and $\gamma_j$ are the same arrow, then $\gamma_j$ cannot create a crossing and the term $\gamma_{j}\gamma_{j-1}$ may be removed from $c$ to get an equivalent chain $\gamma_r\dots\gamma_{j+1}\gamma_{j-2}\dots\gamma_1$. If $\gamma_{j-1}$ and $\gamma_j$ are not the same arrow, then by Lemma [Lemma 30](#lem:cycleless-switcheroo){reference-type="ref" reference="lem:cycleless-switcheroo"}, $\gamma_j$ is a left-morphable arrow of $m_{\gamma_{j-2}\dots\gamma_1}(p)=\omega_{\gamma_{j-2}\dots\gamma_1}(p)$. If this left-morph creates a crossing, then let $c_1:=\gamma_j\gamma_{j-2}\dots\gamma_1$. Otherwise, note that $\gamma_j\gamma_{j-1}$ and $\gamma_{j-1}\gamma_{j}$ are equivalent chains of $m_{\gamma_{j-2}\dots\gamma_1}(p)$, hence $$c_1:=\gamma_r\dots\gamma_{j+1}\gamma_{j-1}\gamma_j\gamma_{j-2}\dots\gamma_1$$ is a crossing-creating chain of $p$ equivalent to $c$. We apply the above logic repeatedly to move all left-morphs in $c$ to the front. We end up with some crossing-creating chain $c_m=ab$ where $b$ is a left-chain of $p$ and $a$ is a right-chain of $m_b(p)$. If $a$ is trivial, then $b$ is a crossing-creating left-chain and we are done. Otherwise, let $a=\alpha_r\dots\alpha_1$. Lemma [Lemma 33](#lem:right-creates-cc-left-ccc-cycle-version){reference-type="ref" reference="lem:right-creates-cc-left-ccc-cycle-version"} or Lemma [Lemma 32](#lem:right-no-creates-cc-left-ccc-cycle-version){reference-type="ref" reference="lem:right-no-creates-cc-left-ccc-cycle-version"} (depending on whether $m_{ab}(p)$ contains a proper counter-clockwise subcycle) along with the induction hypothesis shows that $m_{\alpha_{r-1}\dots\alpha_1b}(p)$ has a crossing-creating left-chain. We now repeatedly apply Lemma [Lemma 32](#lem:right-no-creates-cc-left-ccc-cycle-version){reference-type="ref" reference="lem:right-no-creates-cc-left-ccc-cycle-version"} to see that $m_b(p)$ has a crossing-creating left-chain $b'$. Then $b'b$ is a crossing-creating left-chain of $p$.
◻

**Proposition 36**. *Let $p$ be an elementary path in a path-consistent dimer model $\widetilde Q$. Then there exists a good cycle-removing chain from $p$ to a minimal path.*

*Proof.* We proceed by induction on the c-value of $p$. The base case when $p$ is minimal is trivial. Suppose the result has been shown for paths with a strictly lower c-value than that of some non-minimal path $p$. We first show that $p$ has a good crossing-creating chain. Since $p$ is not minimal, there is some crossing-creating chain $a=\alpha_r\dots\alpha_1$ of $p$. If $\alpha_r$ is a left-morph or if $m_a(p)$ contains no proper counter-clockwise subcycle, then $a$ is a good crossing-creating chain of $p$. If not, then $m_a(p)$ contains a proper counter-clockwise simple subcycle $l$, which is not a face-path by Lemma [Lemma 34](#lem:right-morph-no-cc-facepath){reference-type="ref" reference="lem:right-morph-no-cc-facepath"}. By Proposition [Proposition 35](#prop:simple-cc-cycle-c-left-chain){reference-type="ref" reference="prop:simple-cc-cycle-c-left-chain"}, $l$ has a crossing-creating left-chain. By Lemma [Lemma 33](#lem:right-creates-cc-left-ccc-cycle-version){reference-type="ref" reference="lem:right-creates-cc-left-ccc-cycle-version"}, $m_{\alpha_{r-1}\dots\alpha_{1}}(p)$ has a crossing-creating left-chain $a'$. Then $a'\alpha_{r-1}\dots\alpha_1$ is a good crossing-creating chain of $p$. We may therefore suppose that $p$ has a good crossing-creating chain $a$. Since $\omega_a(p)$ is an elementary path with a lower c-value than $p$, by the induction hypothesis there is a good cycle-removing chain $a''$ from $\omega_a(p)$ to a minimal path. Then $a''a$ is a good cycle-removing chain from $p$ to a minimal path. ◻

We are now ready to prove the main result of this section, after defining the necessary terms.

**Definition 37**. A path $p$ is a *rightmost path* (respectively *leftmost path*) if $p$ has no right-morphable (respectively left-morphable) arrows.
**Definition 38**. An *irreducible pair* is a pair of paths $(p,q)$ in $\widetilde Q$ such that $q^{-1}p$ is a simple counter-clockwise cycle, $p$ is leftmost, and $q$ is rightmost. Note that $p$ or $q$ may be constant. If this is true, then the other path in the pair cannot be a face-path by elementariness. The notion of an irreducible pair appears in [@XBocklandt2011] when $Q$ is a dimer model on a torus. **Lemma 39**. *Let $(p,q)$ be an irreducible pair of a path-consistent dimer model $\widetilde Q$. Let $r$ be an elementary path from $t(p)$ to $h(p)$ which does not enter the interior of $q^{-1}p$ and let $\alpha$ be a morphable arrow of $r$. Then $m_\alpha(r)$ does not enter the interior of $q^{-1}p$.* *Proof.* Suppose to the contrary that $m_\alpha(r)$ enters the interior of $q^{-1}p$. Since $p$ is leftmost and $q$ is rightmost, it must be the case that $\alpha$ is a right-morph at an arrow of $p$ or a left-morph at an arrow of $q$. Suppose the former is true; the latter case is symmetric. The segment $r'$ of $r$ from $t(p)$ to $h(\alpha)$ either winds right or left around $q^{-1}p$. See Figure [\[fig:fffff\]](#fig:fffff){reference-type="ref" reference="fig:fffff"}. In either case, $R_\alpha^{cl}r'$ may not be completed to a path from $t(p)$ to $h(p)$ without entering the interior of $q^{-1}p$ or causing a self-intersection. ◻ **Theorem 40**. *A simply connected path-consistent dimer model has no irreducible pairs.* *Proof.* Let $\widetilde Q$ be path-consistent and simply connected. Take a pair $(p,q)$ of paths such that $q^{-1}p$ is an elementary counter-clockwise cycle. We show that $p$ has a left-morphable arrow or $q$ has a right-morphable arrow. If $p$ is a cycle and $q$ is trivial, then Proposition [Proposition 35](#prop:simple-cc-cycle-c-left-chain){reference-type="ref" reference="prop:simple-cc-cycle-c-left-chain"} shows the desired result. 
If $p$ is trivial and $q$ is a cycle, the dual of Proposition [Proposition 35](#prop:simple-cc-cycle-c-left-chain){reference-type="ref" reference="prop:simple-cc-cycle-c-left-chain"} shows the desired result. We may now assume that $p$ and $q$ are not cycles. Suppose for the sake of contradiction that $(p,q)$ is an irreducible pair. Choose a face $F$ in the interior of $q^{-1}p$. By Proposition [Proposition 36](#lem:cycle-removing-no-right-cc-cycle){reference-type="ref" reference="lem:cycle-removing-no-right-cc-cycle"}, there is a good cycle-removing chain $a=\alpha_r\dots\alpha_1$ from $p$ to some minimal path $\omega_a(p)$. By repeated application of Lemma [Lemma 39](#lem:ffl){reference-type="ref" reference="lem:ffl"}, no intermediate path of this chain enters the interior of $q^{-1}p$. In particular, no arrow $\alpha_i$ is an arrow of $F$. Then Lemma [Lemma 21](#lem:rotation-number-formula){reference-type="ref" reference="lem:rotation-number-formula"} tells us that $\textup{Wind}\big(q^{-1}m_{\alpha_i}(\omega_{\alpha_{i-1}\dots\alpha_{1}}(p)),F\big)=\textup{Wind}\big(q^{-1}\omega_{\alpha_{i-1}\dots\alpha_{1}}(p),F\big)$ for each $i$. There may be some clockwise cycles removed from $m_{\alpha_i}(\omega_{\alpha_{i-1}\dots\alpha_{1}}(p))$ to get $\omega_{\alpha_i\dots\alpha_1}(p)$, but since $a$ is good no counter-clockwise cycles are removed. Hence, $$\textup{Wind}\big(q^{-1}\omega_{\alpha_i\dots\alpha_1}(p),F\big)\geq\textup{Wind}\big(q^{-1}m_{\alpha_i}(\omega_{\alpha_{i-1}\dots\alpha_1}(p)),F\big)=\textup{Wind}\big(q^{-1}\omega_{\alpha_{i-1}\dots\alpha_1}(p),F\big).$$ By applying this result for each $i$, we see that $$\textup{Wind}(q^{-1}\omega_a(p),F)\geq\textup{Wind}(q^{-1}p,F)=1.$$ Dually, there is a cycle-removing chain $b=\beta_s\dots\beta_1$ of $q$ removing only counter-clockwise cycles such that $\omega_b(q)$ is minimal and $$\textup{Wind}(q^{-1}\omega_b(q),F)\leq\textup{Wind}(q^{-1}q,F)=0.$$ By path-consistency, $\omega_a(p)$ and $\omega_b(q)$ are equivalent.
Then there is a chain $c=\gamma_t\dots\gamma_1$ such that $m_c(\omega_a(p))=\omega_b(q)$. As above, Lemma [Lemma 39](#lem:ffl){reference-type="ref" reference="lem:ffl"} shows that no arrow $\gamma_i$ of $c$ is an arrow of $F$, hence repeated application of Lemma [Lemma 21](#lem:rotation-number-formula){reference-type="ref" reference="lem:rotation-number-formula"} gives that $1\leq\textup{Wind}(q^{-1}\omega_a(p),F)=\textup{Wind}(q^{-1}\omega_b(q),F)\leq 0$, a contradiction. ◻

# Strand Diagrams and Strand-Consistency {#sec:pdsc}

In this section, we use *zigzag paths* to associate to a dimer model a *strand diagram* on its surface. Our goal is to prove that a finite dimer model is path-consistent if and only if there are no bad configurations in its strand diagram. This generalizes ideas of Bocklandt [@XBocklandt2011] and Ishii and Ueda [@XIU]. The theory of cycle-removing morphs developed in Section [\[sec:actual-argument\]](#sec:actual-argument){reference-type="ref" reference="sec:actual-argument"}, in particular Theorem [Theorem 40](#prop:no-irreducible-pairs){reference-type="ref" reference="prop:no-irreducible-pairs"}, is necessary to prove this result.

## Strand Diagrams

We define strand diagrams and connect them to dimer models. The definition below is a reformulation of [@XBocklandt2015 Definition 1.10].

**Definition 41**. Let $\Sigma$ be an oriented surface, with or without boundary, with a discrete set of marked points on its boundary. A (connected) *strand diagram* $D$ in $\Sigma$ consists of a collection of oriented strands in $\Sigma$, each of which is either a closed cycle in the interior of $\Sigma$ or starts and ends at marked boundary points, subject to the following conditions.

1. [\[pd:1\]]{#pd:1 label="pd:1"} Each boundary marked point is the start point of exactly one strand, and the end point of exactly one strand.

2. [\[pd:2\]]{#pd:2 label="pd:2"} Any two strands intersect in finitely many points, and each intersection involves only two strands. Each intersection not at a marked boundary point is transversal.

3. [\[pd:3\]]{#pd:3 label="pd:3"} Moving along a strand, the signs of its crossings with other strands alternate. This includes intersections at a marked boundary point. See the figure below, where the bold segment is boundary. [\[fig:alternating-strands\]]{#fig:alternating-strands label="fig:alternating-strands"}

4. [\[dpd:4\]]{#dpd:4 label="dpd:4"} Any connected component $C$ of the complement of $D$ in the interior of $\Sigma$ is an open disk. The boundary of $C$ may contain either zero or one one-dimensional "boundary segment" of the boundary of $\Sigma$. In the former case, $C$ is an *internal region*. In the latter, $C$ is a *boundary region*. It follows from [\[pd:3\]](#pd:3){reference-type="eqref" reference="pd:3"} that $C$ is either an *oriented region* (i.e., all strands on the boundary of the component are oriented in the same direction) or an *alternating region* (i.e., the strands on the boundary of the component alternate directions). See the left and right sides of Figure [\[fig:oriented-alternating\]](#fig:oriented-alternating){reference-type="ref" reference="fig:oriented-alternating"}, respectively. Note that by [\[pd:3\]](#pd:3){reference-type="eqref" reference="pd:3"}, any boundary region with multiple strands must be alternating. We consider a boundary region with a single strand to be alternating.

5. [\[pd:5\]]{#pd:5 label="pd:5"} The union of the strands is connected.
The diagram $D$ is called a *Postnikov diagram* if in addition it satisfies the following conditions.

1. [\[pd:6nointeriorcycles\]]{#pd:6nointeriorcycles label="pd:6nointeriorcycles"} No subpath of a strand is a null-homotopic closed cycle.

2. [\[spd:2\]]{#spd:2 label="spd:2"} [\[pd:5badlens\]]{#pd:5badlens label="pd:5badlens"} If two strand segments intersect twice and are oriented in the same direction between these intersection points, then they must not be homotopic.

In other words, *bad configurations* shown in Figure [\[fig:bad-configurations\]](#fig:bad-configurations){reference-type="ref" reference="fig:bad-configurations"} and described below must not appear in order for $D$ to be a Postnikov diagram:

1. [\[gg1\]]{#gg1 label="gg1"} A strand which intersects itself through a null-homotopic cycle, as forbidden in [\[pd:6nointeriorcycles\]](#pd:6nointeriorcycles){reference-type="eqref" reference="pd:6nointeriorcycles"}, called a *null-homotopic self-intersecting strand*.

2. [\[gg2\]]{#gg2 label="gg2"} A null-homotopic strand in the interior, as forbidden in [\[pd:6nointeriorcycles\]](#pd:6nointeriorcycles){reference-type="eqref" reference="pd:6nointeriorcycles"}, called a *null-homotopic closed cycle*.

3. [\[gg3\]]{#gg3 label="gg3"} Two strand segments which intersect twice in the same order null-homotopically, as forbidden in [\[pd:5badlens\]](#pd:5badlens){reference-type="eqref" reference="pd:5badlens"}, called a *bad lens*.

When $Q=\widetilde Q$ is simply connected, any cycle is null-homotopic and we often omit "null-homotopic" from [\[gg1\]](#gg1){reference-type="eqref" reference="gg1"} and [\[gg2\]](#gg2){reference-type="eqref" reference="gg2"}.

**Remark 42**.
In fact, if $\Sigma$ is simply connected then conditions [\[pd:5\]](#pd:5){reference-type="eqref" reference="pd:5"} and [\[pd:3\]](#pd:3){reference-type="eqref" reference="pd:3"} of a strand diagram along with the lack of bad lenses imply that if there are multiple strands, then there is no strand which starts and ends at the same marked boundary point, and hence no strand contains a cycle. To see this, suppose there is a strand $z$ which starts and ends at the same marked boundary point. Consider the first time this strand intersects with another strand $w$. Then by connectedness [\[pd:5\]](#pd:5){reference-type="eqref" reference="pd:5"}, $w$ must enter the area defined by $z$ at this intersection. Then $w$ must eventually leave the area bounded by $z$, creating a bad lens.

**Definition 43**.  [\[defn:dimer-model-from-strand-diagram\]]{#defn:dimer-model-from-strand-diagram label="defn:dimer-model-from-strand-diagram"} Let $D$ be a strand diagram in a surface. We associate to $D$ a dimer model $Q_D$ as follows. The vertices of $Q_D$ are the alternating regions of $D$. When the closures of two different alternating regions $v_1$ and $v_2$ meet in a crossing point between strands of $D$, or at one of the marked boundary points, we draw an arrow between $v_1$ and $v_2$, oriented in a way consistent with these strands, as shown in Figure [\[fig:strand-arrows\]](#fig:strand-arrows){reference-type="ref" reference="fig:strand-arrows"}. The counter-clockwise (respectively clockwise) faces of $Q_D$ are the arrows around a counter-clockwise (respectively clockwise) region of $D$.

We may also go in the other direction.

**Definition 44**. Let $Q$ be a dimer model. We associate a strand diagram $D_Q$ to $Q$ embedded in the surface $S(Q)$ as follows. For any arrow $\alpha$ of $Q$, let $v_\alpha$ be the point in the center of $\alpha$ in the embedding of $Q$ into $S(Q)$.
For any two arrows $\alpha$ and $\beta$ of $Q$ such that $\beta\alpha$ is a subpath of a face-path, we draw a path from $v_\alpha$ to $v_\beta$ along the interior of the face containing $\beta\alpha$. The union of these strand segments forms a connected strand diagram $D_Q$. See Figure [\[fig:submodel-disk-annulus\]](#fig:submodel-disk-annulus){reference-type="ref" reference="fig:submodel-disk-annulus"} for an example on the disk and annulus. The above constructions are mutual inverses, and hence establish a correspondence between strand diagrams and dimer models. This is implicit in the work of Bocklandt [@XBocklandt2015]. **Definition 45**. A dimer model $Q$ is *strand-consistent* if its strand diagram $D_Q$ does not have any bad configurations. In other words, $Q$ is strand-consistent if $D_Q$ is a Postnikov diagram. **Remark 46**. A dimer model on a sphere is never strand-consistent since there must be a null-homotopic closed cycle in the interior. On the other hand, a dimer model on a sphere can still satisfy the path-consistency condition of Definition [Definition 8](#defn:algebraic-consistency){reference-type="ref" reference="defn:algebraic-consistency"}. Hence, in order to prove that the notions of path-consistency and strand-consistency are equivalent (Theorem [Theorem 52](#thm:not-a-cons-then-not-g-cons){reference-type="ref" reference="thm:not-a-cons-then-not-g-cons"}), it is necessary to throw out the case where $S(Q)$ is a sphere in Definition [Definition 8](#defn:algebraic-consistency){reference-type="ref" reference="defn:algebraic-consistency"}. Since a cycle on a surface $\Sigma$ lifts to a cycle on the universal cover if and only if it is null-homotopic in $\Sigma$, bad configurations in $Q$ correspond precisely to bad configurations in $\widetilde Q$. The following result is a consequence of this observation. **Proposition 47**. 
*$Q$ is strand-consistent if and only if $\widetilde Q$ is strand-consistent.*

Proposition [Proposition 47](#prop:Q-g-consistent-iff-hat-Q-g-consistent){reference-type="ref" reference="prop:Q-g-consistent-iff-hat-Q-g-consistent"} is useful because bad configurations are easier to work with on the simply connected dimer model $\widetilde Q$. In particular, the null-homotopic conditions appearing in each of the three bad configurations may be ignored in the simply connected case. As such, we often pass to the universal cover model when working with strand diagrams.

## Path-Consistency Implies Strand-Consistency

We now prove that path-consistency implies strand-consistency for finite dimer models. We must first define zigzag paths and their return paths.

**Definition 48**. Let $Q$ be a dimer model. A *zigzag path* of $Q$ is a maximal (possibly infinite) path $z=\dots\gamma_{i+1}\gamma_{i}\dots$ such that one of the following holds.

1. $\gamma_{i+1}\gamma_{i}$ is part of a counter-clockwise face if $i$ is odd and a clockwise face if $i$ is even. In the former case, $\gamma_{i+1}\gamma_{i}$ is called a *zig* and $h(\gamma_i)$ is a *zig vertex*. In the latter, $\gamma_{i+1}\gamma_{i}$ is a *zag* and $h(\gamma_i)$ is a *zag vertex*.

2. $\gamma_{i+1}\gamma_i$ is part of a clockwise face if $i$ is odd and a counter-clockwise face if $i$ is even. In the former case, $\gamma_{i+1}\gamma_i$ is called a *zag* and $h(\gamma_i)$ is a *zag vertex*. In the latter, $\gamma_{i+1}\gamma_i$ is a *zig* and $h(\gamma_i)$ is a *zig vertex*.

If $z$ is a finite path, then we write $z=\gamma_t\dots\gamma_0$ and we note that $\gamma_0$ and $\gamma_t$ must be boundary arrows. We may similarly have infinite zigzag paths $\gamma_0\gamma_{-1}\dots$ or $\dots\gamma_{1}\gamma_0$ ending or starting at boundary vertices, respectively.
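To make the alternation in Definition 48 concrete, the sketch below traces a finite zigzag path from successor maps: `ccw_next[a]` (respectively `cw_next[a]`) stands for the arrow following `a` in its counter-clockwise (respectively clockwise) face. This dictionary encoding and the function name are our own toy simplification, not notation from the paper, and the sketch assumes the traced path is finite.

```python
def zigzag(start, ccw_next, cw_next, start_with_zig=True):
    """Trace a finite zigzag path from the arrow `start`.

    Successive two-arrow compositions alternate between lying on a
    counter-clockwise face (a zig) and a clockwise face (a zag), as in
    Definition 48. The walk stops at a boundary arrow with no successor.
    """
    path = [start]
    use_ccw = start_with_zig
    while True:
        step = (ccw_next if use_ccw else cw_next).get(path[-1])
        if step is None:  # boundary arrow: the zigzag path ends here
            return path
        path.append(step)
        use_ccw = not use_ccw  # alternate zig and zag

# Toy successor maps on four arrows a, b, c, d with boundary at d:
ccw = {"a": "b", "c": "d"}
cw = {"b": "c"}
```

Here `zigzag("a", ccw, cw)` traverses `a, b, c, d`: the composition of `b` after `a` is a zig, `c` after `b` a zag, and `d` after `c` a zig, after which `d` has no clockwise successor and the path terminates at the boundary.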
It is immediate by the constructions of Definitions [Definition 44](#defn:strand-diagram-from-dimer-model){reference-type="ref" reference="defn:strand-diagram-from-dimer-model"} and [\[defn:dimer-model-from-strand-diagram\]](#defn:dimer-model-from-strand-diagram){reference-type="ref" reference="defn:dimer-model-from-strand-diagram"} that zigzag paths of $Q$ correspond to strands of $D_Q$. Intersections of strands in $D_Q$ correspond to shared arrows of zigzag paths in the quiver. We may thus view the bad configurations of Definition [Definition 41](#defn:postnikov-diagram){reference-type="ref" reference="defn:postnikov-diagram"} as bad configurations of zigzag paths. 1. A zigzag path has a *null-homotopic self-intersection* if it passes through the same arrow twice, first as the start of a zig and then as the start of a zag (or vice versa), and the segment between these occurrences is null-homotopic. 2. A zigzag path is a *null-homotopic closed cycle* if it is cyclic, infinitely repeating, and null-homotopic. 3. Two homotopic segments of (possibly the same) zigzag paths $z'$ and $w'$ form a *bad lens* if $z'=\beta z_s\dots z_0\alpha$ and $w'=\beta w_t\dots w_0 \alpha$ and $z_i\neq w_j$ for all $i$ and $j$. The following definition appears in the literature on dimer models in tori. See, for example, [@XBocklandt2011] and [@XBroomhead2009]. **Definition 49**. Take a subpath $z':=\gamma_t\dots\gamma_1$ of a zigzag path $z$ such that $\gamma_i\neq\gamma_j$ for $i\neq j$. Suppose that $t(z')$ and $h(z')$ are both zag vertices of $z$. For each zig $\gamma_{j+1}\gamma_j$ (for $j<t$ odd), let $p_j$ be the subpath of $F_{\gamma_j}^{cc}$ from $h(\gamma_{j+1})$ to $t(\gamma_j)$. Then $p_j$ consists of all arrows in $F_{\gamma_j}^{cc}$ except for $\gamma_{j+1}$ and $\gamma_j$. The *right return path* is the composition $p_{j_{t-1}}\dots p_{j_1}$ of all such $p_j$'s. 
The *elementary right return path* is the path obtained by removing all proper elementary subcycles of the right return path in order of their appearance. If the result is a face-path, then the elementary right return path is constant. We dually define *(elementary) left return paths*. For examples, see Figure [\[fig:triple\]](#fig:triple){reference-type="ref" reference="fig:triple"}. In particular, the left of this figure gives an example where the right return path and the elementary right return path differ. It is immediate that right return paths are leftmost and left return paths are rightmost. **Theorem 50**. *A path-consistent dimer model is strand-consistent.* *Proof.* Given a path-consistent dimer model $Q$, Proposition [Proposition 15](#prop:Q-consistent-iff-hat-Q-consistent){reference-type="ref" reference="prop:Q-consistent-iff-hat-Q-consistent"} implies that $\widetilde Q$ is path-consistent. By Proposition [Proposition 47](#prop:Q-g-consistent-iff-hat-Q-g-consistent){reference-type="ref" reference="prop:Q-g-consistent-iff-hat-Q-g-consistent"}, it suffices to show that $\widetilde Q$ is strand-consistent. We will do this by showing that self-intersecting strands, closed cycles, and bad lenses in the strand diagram give rise to irreducible pairs, which cannot occur in path-consistent models by Theorem [Theorem 40](#prop:no-irreducible-pairs){reference-type="ref" reference="prop:no-irreducible-pairs"}. See Figure [\[fig:triple\]](#fig:triple){reference-type="ref" reference="fig:triple"}. First, suppose there is some self-intersecting strand $z$ of $D_{\widetilde Q}$. Recall from the discussion following Definition [Definition 48](#defn:zigzag-path){reference-type="ref" reference="defn:zigzag-path"} that intersections of the strand $z$ correspond to multiple incidences of an arrow in its associated zigzag path in ${\widetilde Q}$.
Then there is some segment $C=\gamma_0\gamma_t\dots\gamma_1\gamma_0$ of the zigzag path associated to $z$ such that $\gamma_i\neq\gamma_j$ for $i\neq j$. Suppose that the segment of the strand $z$ corresponding to $\gamma_t\dots\gamma_0$ is a clockwise cycle; the counter-clockwise case is symmetric. See the left of Figure [\[fig:triple\]](#fig:triple){reference-type="ref" reference="fig:triple"}. Let $v_1:=h(\gamma_0)$ and $v_2:=t(\gamma_0)$. Since the cycle runs clockwise, $v_1$ and $v_2$ are both zag vertices of $z$. Let $z':=\gamma_t\dots\gamma_1$ and let $p$ be the elementary right return path of $z'$. Then $p$ is a leftmost path from $v_2$ to $v_1$ and $\gamma_0^{-1}p$ is an elementary counter-clockwise cycle winding counter-clockwise around $z'$, hence $(p,\gamma_0)$ is an irreducible pair. This contradicts Theorem [Theorem 40](#prop:no-irreducible-pairs){reference-type="ref" reference="prop:no-irreducible-pairs"}. Now suppose there is a strand $z$ of $D_{\widetilde Q}$ which is a closed cycle. By the above, we may suppose that $z$ contains no self-intersections. Then we may realize $z$ as a path $z':=\gamma_t\dots\gamma_0$ of ${\widetilde Q}$ such that $t(\gamma_t)=h(\gamma_0)$ is a zag vertex of $z$ and $\gamma_i\neq\gamma_j$ for $i\neq j$. See the middle of Figure [\[fig:triple\]](#fig:triple){reference-type="ref" reference="fig:triple"}. As above, we assume that $z'$ winds clockwise. Let $p$ be the elementary right return path of $z'$. Then $p$ winds counter-clockwise around $z'$ and thus is a nontrivial counter-clockwise path which is leftmost, contradicting Theorem [Theorem 40](#prop:no-irreducible-pairs){reference-type="ref" reference="prop:no-irreducible-pairs"}. Now, suppose there is a bad lens in $D_{\widetilde Q}$. Accordingly, we may take subpaths of zigzag paths $z':=\beta\gamma_t\dots\gamma_1\alpha$ and $w':=\beta\delta_s\dots\delta_1\alpha$ such that $\gamma_i\neq\delta_j$ for all $i$ and $j$.
By the above, the strands have no self-intersections and hence $z'$ and $w'$ have no repeated arrows. Suppose without loss of generality that $z'$ is to the left of $w'$. Then $t(\alpha)$ and $h(\beta)$ are zig vertices of $w$ and zag vertices of $z$. See the right of Figure [\[fig:triple\]](#fig:triple){reference-type="ref" reference="fig:triple"}. Let $p$ be the right elementary return path of $z'$ and let $q$ be the left elementary return path of $w'$. Let $p_0:=p$ and $q_0:=q$. Choose a face $F$ in the interior of the bad lens. Then $p_0$ is a leftmost elementary path and $q_0$ is a rightmost elementary path such that $\textup{Wind}(q_0^{-1}p_0,F)=1>0$, since $q^{-1}p$ winds counter-clockwise around the lens. If $q_0^{-1}p_0$ is simple, then $(p_0,q_0)$ is an irreducible pair and we are done. If not, then $q_0^{-1}p_0$ has some simple proper subcycle $l$. The paths $p_0$ and $q_0$ are elementary, so $l$ must be of the form $(q'_0)^{-1}p'_0$, where $p'_0,q'_0,p_1,q_1$ are paths such that $p_0=p'_0p_1$ and $q_0=q_1q'_0$. If $(q'_0)^{-1}p'_0$ is counter-clockwise, then $(p'_0,q'_0)$ forms an irreducible pair, since any subpath of $p$ is leftmost and any subpath of $q$ is rightmost. If not, then the removal of $(q'_0)^{-1}p'_0$ from $q_0^{-1}p_0$ may only increase the winding number around $F$, hence $\textup{Wind}(q_1^{-1}p_1,F)\geq\textup{Wind}(q_0^{-1}p_0,F)>0$. We now start the process over with $p_1$ and $q_1$ in place of $p_0$ and $q_0$. This process must eventually terminate when some $(p'_i,q'_i)$ forms an irreducible pair, contradicting Theorem [Theorem 40](#prop:no-irreducible-pairs){reference-type="ref" reference="prop:no-irreducible-pairs"}. 
◻ ## Strand-Consistency Implies Path-Consistency We now prove the converse of Theorem [Theorem 50](#thm:good-quiver-no-bad-configurations){reference-type="ref" reference="thm:good-quiver-no-bad-configurations"}, completing the proof that the notions of path-consistency and strand-consistency are equivalent for finite dimer models on surfaces which are not spheres. In the case where $Q$ is a dimer model on a disk, this is proven in [@CKP Proposition 2.15]. In the case where $Q$ is a dimer model on a compact surface without boundary, it appears in [@XBocklandt2011 Theorem 10.1, Theorem 10.2]. **Proposition 51**. *A strand-consistent dimer model is path-consistent.* *Proof.* By Theorem [Theorem 19](#thm:consistent-iff-cancellation){reference-type="ref" reference="thm:consistent-iff-cancellation"}, $Q$ is path-consistent if and only if $A_Q$ is cancellative. By Lemma [Lemma 17](#lem:Q-canc-iff-hat-Q-canc){reference-type="ref" reference="lem:Q-canc-iff-hat-Q-canc"}, this is true if and only if $A_{\widetilde Q}$ is cancellative. Hence, it suffices to show that $A_{\widetilde Q}$ is a cancellation algebra. Accordingly, suppose $p,q,a$ are paths of $\widetilde Q$ such that $h(a)=t(p)=t(q)$ and $h(p)=h(q)$ and $[pa]=[qa]$. Then there is a finite sequence of morphs taking $pa$ to $qa$ in $\widetilde Q$. Let $Q'$ be a disk submodel of $\widetilde Q$ containing every intermediate path in this sequence. Since $D_{Q'}$ is a restriction of $D_{\widetilde Q}$, which has no bad configurations, the former also has no bad configurations. By [@CKP Proposition 2.15], $Q'$ is path-consistent. By Theorem [Theorem 19](#thm:consistent-iff-cancellation){reference-type="ref" reference="thm:consistent-iff-cancellation"}, $A_{Q'}$ is cancellative, hence $[p]=[q]$ in $Q'$. Then there is a sequence of morphs taking $p$ to $q$ in $Q'$; this is necessarily a sequence of morphs in $\widetilde Q$, so $[p]=[q]$ in $\widetilde Q$. 
It may similarly be shown that if $p,q,b$ are paths of $\widetilde Q$ such that $t(b)=h(p)=h(q)$ and $t(p)=t(q)$, then $[bp]=[bq]$ implies $[p]=[q]$. This completes the proof that $A_{\widetilde Q}$ is cancellative, and thus that $Q$ is path-consistent. ◻ **Theorem 52**. *[\[thm:cons-alg-str\]]{#thm:cons-alg-str label="thm:cons-alg-str"} Let $Q$ be a dimer model not on a sphere. The following are equivalent:* 1. *$Q$ is path-consistent,* 2. *$Q$ is strand-consistent,* 3. *The dimer algebra $A_Q$ is cancellative.* *Proof.* Path-consistency implies strand-consistency by Theorem [Theorem 50](#thm:good-quiver-no-bad-configurations){reference-type="ref" reference="thm:good-quiver-no-bad-configurations"}. Strand-consistency implies path-consistency by Proposition [Proposition 51](#thm:not-a-cons-then-not-g-conss){reference-type="ref" reference="thm:not-a-cons-then-not-g-conss"}. Path-consistency is equivalent to cancellativity by Theorem [Theorem 19](#thm:consistent-iff-cancellation){reference-type="ref" reference="thm:consistent-iff-cancellation"}. ◻ Theorem [Theorem 52](#thm:not-a-cons-then-not-g-cons){reference-type="ref" reference="thm:not-a-cons-then-not-g-cons"} is known for dimer models on tori. It was shown for dimer models on the disk corresponding to $(k,n)$-diagrams in [@BKMX]. The implication [\[q2\]](#q2){reference-type="eqref" reference="q2"}$\implies$[\[q1\]](#q1){reference-type="eqref" reference="q1"} for general dimer models on disks appears in [@XPressland2019 Proposition 2.11]. The authors are not aware of a proof of the other implication in the case of the disk, hence we include the following corollary. **Corollary 53**. *Let $Q$ be a dimer model on a disk. Then $Q$ is path-consistent if and only if $Q$ is strand-consistent.* In light of Theorem [\[thm:cons-alg-str\]](#thm:cons-alg-str){reference-type="ref" reference="thm:cons-alg-str"}, we use the word *weakly consistent* to refer to both path-consistent and strand-consistent dimer models.
We will define strong consistency in Section [6](#sec:perfect-matching){reference-type="ref" reference="sec:perfect-matching"}. # Dimer Submodels {#sec:ds} Recall the definition of a *disk submodel* in Definition [Definition 26](#defn:disk-submodel){reference-type="ref" reference="defn:disk-submodel"}. We now define dimer submodels more generally and show that the dimer submodel of a weakly consistent model is weakly consistent, and moreover that the path equivalence classes of a dimer submodel may be understood in terms of the original dimer model. This is a useful result that will lead to some nice corollaries in Section [8](#sec:thin-quivers){reference-type="ref" reference="sec:thin-quivers"}. **Definition 54**. Let $Q=(Q_0,Q_1,Q_2)$ be a dimer model. Let $\mathcal F\subseteq Q_2$ be a set of faces of $Q$ whose union forms a connected surface with boundary contained as a subspace of the surface $S(Q)$ of $Q$. We define the *dimer submodel $Q^\mathcal F$ of $Q$ induced by $\mathcal F$* as the dimer model $Q^\mathcal F=(Q^\mathcal F_0,Q^\mathcal F_1,\mathcal F)$, where $Q^\mathcal F_0$ and $Q^\mathcal F_1$ are the sets of vertices and arrows of $Q$ appearing in some face of $\mathcal F$. Intuitively, $Q^\mathcal F$ is obtained by deleting all faces of $Q$ which are not in $\mathcal F$. See Figure [\[fig:submodel-disk-annulus\]](#fig:submodel-disk-annulus){reference-type="ref" reference="fig:submodel-disk-annulus"}. It is immediate that if $\mathcal G\subseteq\mathcal F\subseteq Q_2$ induce dimer submodels, then $Q^\mathcal G=(Q^\mathcal F)^\mathcal G$. **Corollary 55**. *Let $Q$ be a dimer model. Let $\mathcal F$ be a set of faces of $Q$ forming a surface $S$ such that the restriction of the strand diagram $D_Q$ to $S$ has no bad configurations. Then $Q^\mathcal F$ is weakly consistent.
In particular, if $Q$ is weakly consistent then any dimer submodel of $Q$ is weakly consistent.* *Proof.* Since weak consistency is characterized by the absence of bad configurations by Theorem [Theorem 52](#thm:not-a-cons-then-not-g-cons){reference-type="ref" reference="thm:not-a-cons-then-not-g-cons"}, the first statement is trivial. Passing from $Q$ to a dimer submodel $Q^{\mathcal F}$ corresponds to restricting the strand diagram of $Q$ to the surface given by the union of the faces in $\mathcal F$. This cannot create any bad configurations, so the second statement follows. ◻ See Figure [\[fig:submodel-disk-annulus\]](#fig:submodel-disk-annulus){reference-type="ref" reference="fig:submodel-disk-annulus"} for an example of how Corollary [Corollary 55](#cor:dimer-submodel-consistent){reference-type="ref" reference="cor:dimer-submodel-consistent"} may be used in practice to obtain weakly consistent dimer models from existing (not necessarily weakly consistent) models. **Theorem 56**. *Let $Q$ be a weakly consistent dimer model and let $Q^{\mathcal F}$ be a weakly consistent dimer submodel of $Q$. Then two paths in $Q^{\mathcal F}$ are equivalent in $Q^{\mathcal F}$ if and only if they are equivalent in $Q$ and homotopic in $S(Q^{\mathcal F})$.* *Proof.* If $[p]=[q]$ in $Q^\mathcal F$, then certainly $[p]=[q]$ in $Q$. Moreover, in this case $p$ is homotopic to $q$ in $S(Q^\mathcal F)$, hence in $Q$. On the other hand, suppose that $[p]=[q]$ in $Q$ and that $p$ is homotopic to $q$ in $S(Q^{\mathcal F})$. By path-consistency of $Q^\mathcal F$, without loss of generality we have $[p]=[qf^m]$ in $Q^{\mathcal F}$ for some nonnegative integer $m$. Then we have $[p]=[qf^m]$ in $Q$. By path-consistency of $Q$, since $[p]=[q]$ in $Q$ we must have $m=0$ and hence $[p]=[q]$ in $Q^\mathcal F$. ◻ **Remark 57**. 
Theorem [Theorem 56](#thm:submodel-path-equivalence){reference-type="ref" reference="thm:submodel-path-equivalence"} could be stated more generally without changing the proof. We do not need $Q$ to be weakly consistent; we merely need to be able to "cancel face-paths" in $Q$. In other words, we require that $[pf^m]=[p]$ cannot hold for positive $m$. This is a weaker condition than cancellativity (and hence weak consistency) and is satisfied, for example, if $Q$ has a perfect matching. See Section [6](#sec:perfect-matching){reference-type="ref" reference="sec:perfect-matching"} and Lemma [Lemma 58](#lem:perfect-matching-intersection-num){reference-type="ref" reference="lem:perfect-matching-intersection-num"}. # Perfect Matchings {#sec:perfect-matching} A *perfect matching* of a dimer model $Q$ is a collection of arrows $\mathcal M$ of $Q$ such that every face of $Q$ contains exactly one arrow in $\mathcal M$. See Figure [\[fig:plabic-quiver-strand\]](#fig:plabic-quiver-strand){reference-type="ref" reference="fig:plabic-quiver-strand"} for two examples. Dimer models on the torus with perfect matchings have been studied in, for example, [@XIU], [@XBroomhead2009], and [@XBocklandt2011]; in this setting, perfect matchings are often called *dimer configurations*. The Gorenstein affine toric threefold obtained by putting the *perfect matching polygon* at height one is the center of the dimer algebra, and the dimer algebra is viewed as a non-commutative crepant resolution of this variety [@XBocklandt2015]. Perfect matchings of dimer models on a more general surface, often called *almost perfect matchings*, are a natural generalization. Perfect matching polygons may be extended to dimer models over arbitrary compact surfaces with boundary, and capture the data of the master and mesonic moduli spaces [@BFTFDBPTSA]. Perfect matchings may be calculated by taking determinants of Kasteleyn matrices [@BFTFDBPTSA §5].
In the present paper, we give some basic results, prove existence of perfect matchings in weakly consistent simply connected dimer models, and give a counterexample to the existence of perfect matchings in arbitrary weakly consistent dimer models. We use the combinatorial theory of matchings of (undirected) graphs to prove the main result of this section. In order to do so, we associate to $Q$ a bipartite *plabic graph* $\mathcal G=(\mathcal G_0^b,\mathcal G_0^w,\mathcal G_1)$ embedded into $S(Q)$ as follows. To each face $F$ of $Q$, we associate an *internal vertex* $v_F$ embedded in the interior of $F$. If $F$ is a clockwise face, we say that $v_F$ is a *black vertex* and we write $v_F\in\mathcal G_0^b$. If $F$ is a counter-clockwise face, we say that $v_F$ is a *white vertex* and we write $v_F\in\mathcal G_0^w$. For any boundary arrow $\alpha$ of $Q$, we draw a *boundary vertex* $v_\alpha$ embedded in the middle of $\alpha$. We consider $v_\alpha$ to be a *white vertex* if $\alpha$ is part of a clockwise face, and a *black vertex* if $\alpha$ is part of a counter-clockwise face. For each internal arrow $\alpha$ of $Q$, we draw an edge between $v_{F_\alpha^{cl}}$ and $v_{F_\alpha^{cc}}$. For each boundary arrow $\alpha$ of $Q$, we draw an edge between $v_\alpha$ and the vertex corresponding to the unique face incident to $\alpha$. See Figure [\[fig:plabic-quiver-strand\]](#fig:plabic-quiver-strand){reference-type="ref" reference="fig:plabic-quiver-strand"}. A perfect matching $\mathcal M$ of $Q$ is dual to a collection of edges $\mathcal N$ of the plabic graph $\mathcal G$ of $Q$ such that every internal vertex of $\mathcal G$ is contained in exactly one edge of $\mathcal N$. We refer to both $\mathcal M$ and $\mathcal N$ as perfect matchings of $Q$. If $Q$ has no boundary, then a perfect matching is dual to a perfect matching of the dual (plabic) graph of $Q$ in the usual sense.
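The defining condition of a perfect matching — every face contains exactly one arrow of $\mathcal M$ — is easy to check directly on small examples. Here is a minimal Python sketch, assuming faces are encoded as sets of arrow names (our own toy encoding, not notation from the paper):

```python
def is_perfect_matching(faces, matching):
    """A perfect matching selects exactly one arrow from every face.

    `faces` is a list of sets of arrow names (one set per face of Q);
    `matching` is a set of arrow names.  The encoding is illustrative.
    """
    return all(len(face & matching) == 1 for face in faces)

# Toy example: two square faces glued along the shared arrow 'e'.
faces = [{'a', 'b', 'e', 'd'}, {'e', 'f', 'g', 'h'}]
print(is_perfect_matching(faces, {'a', 'g'}))   # True: one arrow per face
print(is_perfect_matching(faces, {'e'}))        # True: 'e' covers both faces
print(is_perfect_matching(faces, {'a'}))        # False: second face uncovered
```

The same check, applied to the dual edge set $\mathcal N$ with "face" replaced by "internal vertex", verifies a perfect matching of the plabic graph.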
See Figure [\[fig:plabic-quiver-strand\]](#fig:plabic-quiver-strand){reference-type="ref" reference="fig:plabic-quiver-strand"}. If $\mathcal M$ is a perfect matching of $Q$, we say that the *intersections* of a path $p$ with $\mathcal M$ are the arrows of $p$ which are in $\mathcal M$. The *intersection number* is the number of intersections of $p$ with $\mathcal M$ (counting "multiplicities" if $p$ has multiple instances of the same arrow). **Lemma 58**. *Suppose $Q$ has a perfect matching $\mathcal M$. If $[p]=[q]$, then $p$ and $q$ have the same intersection numbers with respect to $\mathcal M$. In particular, $[p]\neq[pf^m]$ for $m>0$.* *Proof.* Since $R_\alpha^{cl}$ contains an arrow of $\mathcal M$ if and only if $R_\alpha^{cc}$ does, it follows that basic morphs preserve intersection number. The first statement follows. The second follows because any face-path has an intersection number of 1 with respect to any perfect matching. ◻ **Lemma 59**. *Suppose $Q$ has a perfect matching $\mathcal M$ and let $p$ be a path in $Q$. The set $$\{m\ |\ [p]=[f^mq]\textrm{ for some path }q:t(p)\to h(p)\}$$ is bounded above by the number of intersections of $p$ with $\mathcal M$. In particular, only a finite number of face-paths can be factored out of $p$.* *Proof.* Any face-path has an intersection number of one with any perfect matching. Hence, if $[p]=[f^mq]$, then by Lemma [Lemma 58](#lem:perfect-matching-intersection-num){reference-type="ref" reference="lem:perfect-matching-intersection-num"} the intersection number of $p$ must be at least $m$. ◻ The condition that only a finite number of face-paths can be factored out of any path $p$ is implied by path-consistency. In fact, it is a strictly weaker property than weak consistency. We show the stronger statement that the existence of a perfect matching does not imply weak consistency. 
Indeed, Figure [\[fig:perf-match-not-cons\]](#fig:perf-match-not-cons){reference-type="ref" reference="fig:perf-match-not-cons"} shows a dimer model which has a perfect matching but is not weakly consistent. More generally, if $Q$ is a weakly consistent dimer model, let $Q'$ be the dimer model obtained by replacing some internal arrow $\alpha$ of $Q$ with two consecutive arrows $\gamma\beta$ such that $h(\gamma)=h(\alpha)$ and $t(\beta)=t(\alpha)$ and $h(\beta)=t(\gamma)$ is a new vertex of $Q'$. Then the strand diagram of $Q'$ has a bad digon, hence is not weakly consistent. On the other hand, $Q$ has a perfect matching by Theorem [Theorem 63](#thm:geo-consistent-then-almost-perfect-matching){reference-type="ref" reference="thm:geo-consistent-then-almost-perfect-matching"}, hence $Q'$ has a perfect matching. ## Consistency and Existence of Perfect Matchings We have seen that the existence of a perfect matching does not guarantee weak consistency. We now investigate whether weak consistency guarantees the existence of a perfect matching. We show in Theorem [Theorem 63](#thm:geo-consistent-then-almost-perfect-matching){reference-type="ref" reference="thm:geo-consistent-then-almost-perfect-matching"} that perfect matchings exist in simply connected weakly consistent dimer models. On the other hand, we see in Example [Example 64](#ex:cons-tor-no-perf){reference-type="ref" reference="ex:cons-tor-no-perf"} that perfect matchings need not exist in arbitrary weakly consistent dimer models. **Definition 60**. Let $(V_1,V_2,E)$ be a possibly infinite bipartite graph. Let $\{i,j\}=\{1,2\}$. Let $S\subseteq V_i$. A *matching* from $S$ into $V_j$ is a set $\mathcal N$ of pairwise disjoint edges in $E$ such that every vertex of $S$ is incident to precisely one edge in $\mathcal N$.
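For finite graphs, a matching from $S$ into $V_j$ in the sense of Definition 60 can be computed by the standard augmenting-path argument underlying Hall's theorem. The sketch below is our own illustration; the adjacency-dictionary encoding and the function name are assumptions, not part of the paper.

```python
def matching_into(S, adj):
    """Match every vertex of S to a distinct neighbour, if possible.

    `adj` maps each vertex of S to a list of its neighbours on the
    other side of the bipartite graph.  Returns {s: neighbour} covering
    all of S, or None when Hall's condition fails.
    """
    match = {}  # neighbour -> vertex of S currently using it

    def augment(s, visited):
        # Try to assign s a neighbour, re-routing earlier choices if needed.
        for v in adj[s]:
            if v not in visited:
                visited.add(v)
                if v not in match or augment(match[v], visited):
                    match[v] = s
                    return True
        return False

    for s in S:
        if not augment(s, set()):
            return None
    return {s: v for v, s in match.items()}

print(matching_into(['b1', 'b2'], {'b1': ['w1'], 'b2': ['w1', 'w2']}))
# {'b1': 'w1', 'b2': 'w2'}
print(matching_into(['b1', 'b2'], {'b1': ['w1'], 'b2': ['w1']}))
# None
```

The second call fails precisely because the two vertices of $S$ have only one common neighbour, violating Hall's condition.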
A perfect matching of a dimer model, then, is a matching onto some full subgraph of the plabic graph $\mathcal G$ of $Q$ induced by all of the internal vertices and some subset of the boundary vertices. We use the following formulation of Hall's marriage theorem for locally finite graphs. **Theorem 61** ([@XXClark Theorem 6]). *Let $(V_1,V_2,E)$ be a bipartite graph in which every vertex has finite degree. The following are equivalent.* 1. *There is a matching from $V_1$ into $V_2$.* 2. *Any $m$ vertices of $V_1$ have at least $m$ distinct neighbors in $V_2$.* **Theorem 62** ([@AharoniX Theorem 1.1]). *Let $(V_1,V_2,E)$ be a bipartite graph. Let $A\subseteq V_1$ and $B\subseteq V_2$. If there exists a matching from $A$ into $V_2$ and a matching from $B$ into $V_1$, then there exists a disjoint set of edges $\mathcal N$ in $E$ such that each vertex in $A\cup B$ is incident to precisely one edge in $\mathcal N$.* **Theorem 63**. *If a simply connected dimer model $\widetilde Q$ is weakly consistent, then it has a perfect matching.* *Proof.* We will use the dual definition of a perfect matching. We must then show that there is a set $\mathcal N$ of edges of the plabic graph $\mathcal G$ of $\widetilde Q$ such that every vertex of $\mathcal G$ is incident to exactly one edge of $\mathcal N$. We first claim that it suffices to show that any collection of $m$ internal black vertices is connected to at least $m$ white vertices and that any collection of $m$ internal white vertices is connected to at least $m$ black vertices. Suppose this is true. By applying Theorem [Theorem 61](#thm:IBLFHT){reference-type="ref" reference="thm:IBLFHT"} to the internal black vertices, we see that there is a matching from the set of internal black vertices into the white vertices. Symmetrically, we get a matching from the set of internal white vertices into the black vertices. 
Then Theorem [Theorem 62](#thm:aharoni){reference-type="ref" reference="thm:aharoni"} shows that there exists a perfect matching. This ends the proof of the claim. We show that any collection of $m$ internal white vertices is connected to at least $m$ black vertices. The remaining case is symmetric. Take a set $S$ of $m$ internal white vertices of $\mathcal G_{\widetilde Q}$. These correspond to $m$ internal faces of $\widetilde Q$. Let $Q'$ be a disk submodel of $\widetilde Q$ containing all faces of $S$. Such a disk submodel must exist since $\widetilde Q$ is simply connected. Since $D_{\widetilde Q}$ has no bad configurations and $D_{Q'}$ is a restriction of $D_{\widetilde Q}$, the latter also has no bad configurations. By [@CKP Proposition 2.15], $Q'$ is a path-consistent dimer model. By [@CKP Corollary 4.6], $\mathcal G_{Q'}$ has a perfect matching. In particular, by Hall's marriage theorem (Theorem [Theorem 61](#thm:IBLFHT){reference-type="ref" reference="thm:IBLFHT"}) the set $S$ of white vertices considered as vertices of $\mathcal G_{Q'}$ has at least $m$ neighbors in $\mathcal G_{Q'}$, hence $S$ has at least $m$ neighbors in $\mathcal G_{\widetilde Q}$. This completes the proof. ◻ Example [Example 64](#ex:cons-tor-no-perf){reference-type="ref" reference="ex:cons-tor-no-perf"} shows that Theorem [Theorem 63](#thm:geo-consistent-then-almost-perfect-matching){reference-type="ref" reference="thm:geo-consistent-then-almost-perfect-matching"} does not work for dimer models which are not simply connected. **Example 64**. Figure [\[fig:cons-tor-no-perf\]](#fig:cons-tor-no-perf){reference-type="ref" reference="fig:cons-tor-no-perf"} gives a weakly consistent dimer model with no perfect matching. Any perfect matching of this dimer model must contain one of the arrows of its digon-face. A short check verifies that no such perfect matching exists. 
We remark that this dimer model is obtained by taking a weakly consistent dimer model on a torus, and replacing a counter-clockwise square with a variant of the dimer model of Figure [\[fig:non-red-annulus\]](#fig:non-red-annulus){reference-type="ref" reference="fig:non-red-annulus"}. Example [Example 64](#ex:cons-tor-no-perf){reference-type="ref" reference="ex:cons-tor-no-perf"} raises a question: what sort of conditions may we impose on a weakly consistent dimer model to necessitate the existence of some perfect matching? In particular, does any weakly consistent dimer model with no digon-faces have a perfect matching? ## Nondegeneracy {#sec:nondeg} In the disk and torus case, an important idea is *nondegeneracy* of dimer models. We will define nondegeneracy and prove a simple result which will be used in Section [7](#sec:3CY){reference-type="ref" reference="sec:3CY"}. **Definition 65**. A dimer model $Q$ is *nondegenerate* if every arrow is contained in a perfect matching. Otherwise, it is *degenerate*. In the disk and torus case, nondegeneracy is implied by weak consistency [@XIUM Proposition 6.2] [@CKP]. In the general case, this is not true. For example, the weakly consistent dimer model in Figure [\[fig:cons-tor-no-perf\]](#fig:cons-tor-no-perf){reference-type="ref" reference="fig:cons-tor-no-perf"} has no perfect matchings, hence is certainly degenerate. The right of Figure [\[fig:noeth-ann\]](#fig:noeth-ann){reference-type="ref" reference="fig:noeth-ann"} shows a weakly consistent dimer model which has a perfect matching but is still degenerate. Nondegeneracy will feature prominently in Section [7](#sec:3CY){reference-type="ref" reference="sec:3CY"} and Section [\[sec:reduce\]](#sec:reduce){reference-type="ref" reference="sec:reduce"}. Figure [\[fig:perf-match-not-cons\]](#fig:perf-match-not-cons){reference-type="ref" reference="fig:perf-match-not-cons"} gives an example of a disk model which is nondegenerate but not weakly consistent. 
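For very small examples, nondegeneracy in the sense of Definition 65 can be tested by brute force: enumerate all perfect matchings and check that their union covers every arrow. The following Python sketch assumes faces are encoded as sets of arrow names (our own toy encoding, not the paper's notation):

```python
from itertools import combinations

def perfect_matchings(faces, arrows):
    """Yield every subset of `arrows` meeting each face exactly once.
    Brute force over all subsets, so only suitable for tiny examples."""
    for k in range(len(arrows) + 1):
        for sub in combinations(sorted(arrows), k):
            if all(len(set(sub) & face) == 1 for face in faces):
                yield set(sub)

def is_nondegenerate(faces, arrows):
    """Definition 65: every arrow is contained in some perfect matching."""
    matchings = list(perfect_matchings(faces, arrows))
    if not matchings:
        return False        # no perfect matching at all: degenerate
    return set(arrows) == set().union(*matchings)

# Two square faces glued along the arrow 'e': every arrow lies in some
# perfect matching, so this toy model is nondegenerate.
faces = [{'a', 'b', 'e', 'd'}, {'e', 'f', 'g', 'h'}]
print(is_nondegenerate(faces, ['a', 'b', 'd', 'e', 'f', 'g', 'h']))   # True
```

A model with a perfect matching can still be degenerate: it suffices that some arrow is avoided by every matching the enumeration finds.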
See Example [Example 78](#ex:noeth-fin){reference-type="ref" reference="ex:noeth-fin"} for multiple examples of nondegenerate weakly consistent dimer models on annuli. In the rest of this paper, we will see that nondegeneracy is a useful condition that allows us to generalize results from the disk and torus case, motivating the following definition. **Definition 66**. A dimer model is *(strongly) consistent* if it is weakly consistent and nondegenerate. The following lemma generalizes the well-known situation in the torus and disk literature. See, for example, [@XBroomhead2009 §2.3] in the torus case and [@XPressland2019 Proposition 3.1] in the disk case. **Lemma 67**. *If $Q$ is strongly consistent, then $A_Q$ (and hence $\widehat A_Q$) admits a $\mathbb Z$-grading such that* 1. *Every nonconstant path has a positive degree, and* 2. *Every face-path has the same degree.* *Proof.* Let $C$ be the collection of all perfect matchings on $Q$. Given a path $p$ of $Q$, we give $p$ the grading $$G(p)=\sum_{\mathcal M\in C}\mathcal M(p),$$ where $\mathcal M(p)$ is the number of arrows of $p$ which are in $\mathcal M$. Note that, for any perfect matching $\mathcal M$, the quantity $\mathcal M(p)$ is unchanged by applying a basic morph to $p$. This means that the quantity $G(p)$ is a well-defined invariant of the equivalence class of $p$. It is clear that if $p$, $q$, and $qp$ are paths, then $G(p)+G(q)=G(qp)$. It follows that $G$ gives a $\mathbb Z$-grading on $A_Q$. Since $Q$ is nondegenerate, every arrow lies in some perfect matching and therefore has positive degree, so every nonconstant path has positive degree. The second statement follows because the degree of any face-path is equal to the number of perfect matchings on $Q$. ◻ # Bimodule Internal 3-Calabi-Yau Property {#sec:3CY} [\[sec:3cy\]]{#sec:3cy label="sec:3cy"} We show that finite strongly consistent (completed or noncompleted) dimer models are bimodule internally 3-Calabi-Yau with respect to their boundary idempotent in the sense of [@XPressland2015].
As an application, we use a result from [@AIRX] to show that the Gorenstein-projective module category over the completed boundary algebra of a finite strongly consistent dimer model $Q$ satisfying some extra conditions categorifies the cluster algebra given by the ice quiver of $Q$. We give new examples of suitable dimer models. The technical part of this section follows [@XPressland2019 §3] (in the disk case) and [@XBroomhead2009 §7] (in the torus case). Note that the former writes $A$ for the completed dimer algebra (or Jacobian algebra) $\widehat A_Q$, and the latter deals only with the noncompleted dimer algebra. Throughout this section, let $Q$ be a finite strongly consistent dimer model. We write ${\mathcal A}$ to denote simultaneously the noncompleted dimer algebra $A_Q$ and the completed dimer algebra $\widehat A_Q$, since the arguments are the same. ## One-Sided and Two-Sided Complexes We begin by defining some $({\mathcal A},{\mathcal A})$-bimodules. If $v$ is a vertex of $Q$, define $T_v$ (respectively $H_v$) to be the set of all arrows with tail (respectively head) $v$. Define $Q_0^m$ to be the set of mutable vertices of $Q$. Let $Q_1^m$ be the set of mutable arrows of $Q$. We define vector spaces $$T_3:=\oplus_{v\in Q_0^m}\mathbb C\omega_v,\ \ T_2:=\oplus_{\alpha\in Q_1^m}\mathbb C\rho_\alpha,\ \ T_1:=\oplus_{\alpha\in Q_1}\mathbb C\alpha,\ \ T_0:=\oplus_{v\in Q_0}\mathbb C e_v.$$ The $({\mathcal A},{\mathcal A})$-bimodule structures are given by $$e_v\cdot\omega_v\cdot e_v=\omega_v,\ \ e_{t(\alpha)}\cdot\rho_\alpha\cdot e_{h(\alpha)}=\rho_\alpha,\ \ e_{h(\alpha)}\cdot\alpha\cdot e_{t(\alpha)}=\alpha,\ \ e_v\cdot e_v\cdot e_v=e_v.$$ All other products with the generators of $T_i$ are zero. In this section, all tensor products are over $T_0$ unless otherwise specified. We consider the following complex, which we will call the *two-sided complex*. 
$$\label{eqn:cmp2}0\to {\mathcal A}\otimes T_3\otimes {\mathcal A}\xrightarrow{\mu_3}{\mathcal A}\otimes T_2\otimes {\mathcal A}\xrightarrow{\mu_2}{\mathcal A}\otimes T_1\otimes {\mathcal A}\xrightarrow{\mu_1}{\mathcal A}\otimes T_0\otimes {\mathcal A}\xrightarrow{\mu_0}{\mathcal A}\to0$$ We consider ${\mathcal A}\otimes T_0\otimes {\mathcal A}$ to be the degree-zero term of the complex and we define the maps $\mu_i$ as follows. First, define a function $\partial$ on the arrows of $Q$ by $$\partial(\alpha)=\begin{cases}R_\alpha^{cc}-R_\alpha^{cl}&\alpha\text{ is an internal arrow}\\ R_\alpha^{cc}&\alpha\text{ is a boundary arrow in a counter-clockwise face}\\ -R_\alpha^{cl}&\alpha\text{ is a boundary arrow in a clockwise face}.\end{cases}$$ For a path $p=\alpha_m\dots\alpha_1$, we define $$\Delta_\alpha(p)=\sum_{\alpha_i=\alpha}\alpha_m\dots\alpha_{i+1}\otimes\alpha\otimes\alpha_{i-1}\dots\alpha_1$$ and extend by linearity and continuity to obtain a map $\Delta_\alpha:\mathbb C\langle\langle Q\rangle\rangle\to {\mathcal A}\otimes T_1\otimes {\mathcal A}$. Then we define $$\mu_3(x\otimes\omega_v\otimes y)=\sum_{\alpha\in T_v^{m}}x\otimes\rho_\alpha\otimes \alpha y-\sum_{\beta\in H_v^{m}}x\beta\otimes\rho_\beta\otimes y,$$ $$\mu_2(x\otimes\rho_\alpha\otimes y)=\sum_{\beta\in Q_1}x\Delta_\beta\big(\partial(\alpha)\big)y,\textup{ and }$$ $$\mu_1(x\otimes\alpha\otimes y)=x\otimes e_{h(\alpha)}\otimes\alpha y-x\alpha\otimes e_{t(\alpha)}\otimes y.$$ Since the tensor products are over $T_0$, there is a natural isomorphism ${\mathcal A}\otimes T_0\otimes {\mathcal A}\cong {\mathcal A}\otimes {\mathcal A}$. This may be composed with the multiplication map to obtain $\mu_0$. Dimer models are a special case of a *Jacobian ice quiver*. 
It was shown in [@XPressland2015 Lemma 5.5] that the sequence [\[eqn:cmp2\]](#eqn:cmp2){reference-type="eqref" reference="eqn:cmp2"} is a complex of bimodules for any Jacobian ice quiver and, moreover, that it is exact everywhere except possibly in degrees $-3$ and $-2$. We further have the following theorem. **Theorem 68** ([@XPressland2015 Theorem 5.6]). *If the complex [\[eqn:cmp2\]](#eqn:cmp2){reference-type="eqref" reference="eqn:cmp2"} is exact, then ${\mathcal A}$ is bimodule internally 3-Calabi-Yau with respect to the idempotent given by the sum of all frozen vertex idempotents.* Exactness in degrees other than $-2$ and $-3$ may be seen by observing that in degrees $1$, $0$, and $-1$, the complex [\[eqn:cmp2\]](#eqn:cmp2){reference-type="eqref" reference="eqn:cmp2"} is the standard projective bimodule presentation of an algebra given by a quiver with relations. Our goal is to show that when $Q$ is strongly consistent, the complex [\[eqn:cmp2\]](#eqn:cmp2){reference-type="eqref" reference="eqn:cmp2"} is exact. To do this, we will first define a version of [\[eqn:cmp2\]](#eqn:cmp2){reference-type="eqref" reference="eqn:cmp2"} which is merely a complex of modules, rather than bimodules, and prove that exactness of this one-sided complex is equivalent to exactness of the two-sided complex [\[eqn:cmp2\]](#eqn:cmp2){reference-type="eqref" reference="eqn:cmp2"} in the nondegenerate case. We will then show that the one-sided complex is exact to finish the proof. Use the quotient map ${\mathcal A}\to {\mathcal A}/\textup{rad}\ {\mathcal A}\cong T_0$ to consider $T_0$ as an $({\mathcal A},{\mathcal A})$-bimodule. Using this bimodule structure we consider the functor $\mathcal F=-\otimes_{\mathcal A}T_0$ from the category of $({\mathcal A},{\mathcal A})$-bimodules to itself.
We apply this to the complex [\[eqn:cmp2\]](#eqn:cmp2){reference-type="eqref" reference="eqn:cmp2"} and note that $T_i\otimes{\mathcal A}\otimes_{\mathcal A}T_0\cong T_i$ and ${\mathcal A}\otimes_{T_0}T_0\cong{\mathcal A}$ to get the complex $$\label{eqn:cmp1}0\to {\mathcal A}\otimes T_3\xrightarrow{\mathcal F(\mu_3)}{\mathcal A}\otimes T_2\xrightarrow{\mathcal F(\mu_2)}{\mathcal A}\otimes T_1\xrightarrow{\mathcal F(\mu_1)} {\mathcal A}\xrightarrow{\mu_0}T_0\to0$$ We forget the right ${\mathcal A}$-module structure and treat this as a complex of left ${\mathcal A}$-modules, which we refer to as the *one-sided complex*. We would like to show exactness in degrees $-3$ and $-2$, so we explicitly write the maps $\mathcal F(\mu_3)$ and $\mathcal F(\mu_2)$. Since a nonconstant path acts as zero on $T_0$, only those summands whose third tensor factor is a constant path survive the application of $\mathcal F$, and we obtain $$\begin{aligned} \mathcal F(\mu_3):x\otimes\omega_v&\mapsto -\sum_{\beta\in H_v^m}x\beta\otimes\rho_\beta\\ \mathcal F(\mu_2):x\otimes\rho_\alpha&\mapsto\sum_{\beta\in Q_1}x\Delta_\beta^r(\partial(\alpha))\end{aligned}$$ where, for $p=\alpha_m\dots\alpha_1$, $$\Delta_\beta^r(p)=\begin{cases}\alpha_m\dots\alpha_2\otimes\alpha_1&\alpha_1=\beta\\0&\alpha_1\neq\beta.\end{cases}$$ We now do some calculations for $\mu_3$. $$\begin{aligned} \mu_3:x\otimes\omega_v\otimes y&\mapsto \sum_{\alpha\in T_v^{m}}x\otimes\rho_\alpha\otimes\alpha y-\sum_{\beta\in H_v^{m}}x\beta\otimes\rho_\beta\otimes y\\ &=\sum_{\alpha\in T_v^{m}}x\otimes\rho_\alpha\otimes\alpha y-\left(\sum_{\beta\in H_v^m}x\beta\otimes\rho_\beta\right)\otimes y\\ &=\left(\sum_{\alpha\in T_v^{m}}x\otimes\rho_\alpha\otimes\alpha\right)y+(\mathcal F(\mu_3)(x\otimes\omega_v))\otimes y\end{aligned}$$ We perform a similar calculation for $\mu_2$.
$$\begin{aligned} \mu_2:x\otimes\rho_\alpha\otimes y&\mapsto\sum_{\beta\in Q_1}x\Delta_\beta\big(\partial(\alpha)\big)y\\ &=(\mathcal F(\mu_2)(x\otimes\rho_\alpha))\otimes y+\{\textup{terms whose third tensor factor is a nonconstant path times }y\}\end{aligned}$$ We have shown that, for $j\in\{2,3\}$, we have $$\label{eqn:idd}\mu_j:u\otimes y\mapsto(\mathcal F(\mu_j)(u))\otimes y+\left(\sum_{v,y'}v\otimes y'\right)y,$$ where - $u$ is in either ${\mathcal A}\otimes T_3$ (if $j=3$) or ${\mathcal A}\otimes T_2$ (if $j=2$), - $v$ ranges across some elements of ${\mathcal A}\otimes T_2$ (if $j=3$) or ${\mathcal A}\otimes T_1$ (if $j=2$), and - $y'$ ranges across some nonconstant paths of ${\mathcal A}$. ## Proving 3-Calabi-Yau Property for Strongly Consistent Models We now show that exactness of the bimodule complex [\[eqn:cmp2\]](#eqn:cmp2){reference-type="eqref" reference="eqn:cmp2"} is equivalent to exactness of the one-sided complex [\[eqn:cmp1\]](#eqn:cmp1){reference-type="eqref" reference="eqn:cmp1"} when $Q$ is strongly consistent. We will then show that the one-sided complex is exact, and hence that the completed dimer algebra is bimodule internally 3-Calabi-Yau with respect to its boundary idempotent. We consider the bimodules $T_i$ to be $\mathbb Z$-graded as follows. All elements of $T_3$ and $T_0$ have degree 0. An element $\rho_\alpha$ in $T_2$ corresponding to an arrow $\alpha\in Q$ is given the negative of the grading of $\alpha$ in $Q$. An element $\alpha$ in $T_1$ corresponding to an arrow $\alpha\in Q$ is given the grading of $\alpha$ in $Q$. We extend the grading on ${\mathcal A}$ given by Lemma [Lemma 67](#lem:posgrad){reference-type="ref" reference="lem:posgrad"} with the gradings on $T_i$ described above to a $\mathbb Z$-grading on the bimodule complex ${\mathcal A}\otimes T_*\otimes {\mathcal A}$ by adding the grading in each of the three positions. We remark that the grading on ${\mathcal A}\otimes T_2\otimes {\mathcal A}$ is not positive.
The minimum possible degree of an element of ${\mathcal A}\otimes T_2\otimes {\mathcal A}$ is $-m$, where $m$ is the maximum possible degree of an arrow of ${\mathcal A}$. Moreover, every face-path has degree equal to the number of perfect matchings on $Q$. **Lemma 69**. *The maps $\mu_3$ and $\mu_2$ are maps of graded bimodules. In other words, they map homogeneous elements to homogeneous elements.* *Proof.* First, we consider $\mu_3$. Any summand of $\mu_3(x\otimes\omega_v\otimes y)$ is of the form $x\otimes\rho_\alpha\otimes\alpha y$ or $x\alpha\otimes\rho_\alpha\otimes y$ for some arrow $\alpha$. The factor $\alpha$ in the first or third tensor position adds the degree of $\alpha$, and the $\rho_\alpha$ in the middle subtracts that same degree, so the degree of $\mu_3(x\otimes\omega_v\otimes y)$ is the same as the degree of $x\otimes\omega_v\otimes y$. We now consider $\mu_2$. Any summand of $\mu_2(x\otimes\rho_\alpha\otimes y)$ is of the form $xR''\otimes\beta\otimes R'y$ for some arrow $\beta$ and some paths $R'$ and $R''$ such that $R''\beta R'$ is some return path $R_\alpha$ of $\alpha$. Compared to $x\otimes\rho_\alpha\otimes y$, we are replacing the (negative) grading of the middle term $\rho_\alpha$ with the (positive) grading of the path $R_\alpha=R''\beta R'$. The result is that the grading has increased by the grading of $\alpha R_\alpha$ in ${\mathcal A}$. This is a face-path, and all face-paths have the same grading. It follows that $\mu_2$ has the effect of increasing the grading of a homogeneous element by the grading of a face-path in ${\mathcal A}$. ◻ **Theorem 70** ([@XPressland2017 Lemma 4.7]).
*If the one-sided complex [\[eqn:cmp1\]](#eqn:cmp1){reference-type="eqref" reference="eqn:cmp1"} is exact for ${\mathcal A}=\widehat A_Q$, then the two-sided complex [\[eqn:cmp2\]](#eqn:cmp2){reference-type="eqref" reference="eqn:cmp2"} is exact for ${\mathcal A}=\widehat A_Q$.* To show a version of Theorem [Theorem 70](#thm:wherebefore){reference-type="ref" reference="thm:wherebefore"} when ${\mathcal A}$ is the noncompleted dimer algebra $A_Q$, we additionally assume nondegeneracy. **Proposition 71**. *Suppose $Q$ is strongly consistent. If the one-sided complex [\[eqn:cmp1\]](#eqn:cmp1){reference-type="eqref" reference="eqn:cmp1"} is exact, then the two-sided complex [\[eqn:cmp2\]](#eqn:cmp2){reference-type="eqref" reference="eqn:cmp2"} is exact.* *Proof.* We follow the proof of [@XBroomhead2009 Proposition 7.5]. Suppose the one-sided complex [\[eqn:cmp1\]](#eqn:cmp1){reference-type="eqref" reference="eqn:cmp1"} is exact. We know that the bimodule complex [\[eqn:cmp2\]](#eqn:cmp2){reference-type="eqref" reference="eqn:cmp2"} is exact in every degree except for, possibly, $-3$ and $-2$. It remains to show that $\ker\mu_2\subseteq\textup{im}\,\mu_3$ and $\ker\mu_3\subseteq\textup{im}\,\mu_4=0$, where $T_4:=0$ and $\mu_4:T_4\to {\mathcal A}\otimes T_3\otimes {\mathcal A}$ is the zero map. Let $j\in\{3,2\}$. Let $\phi_0$ be a nonzero element of ${\mathcal A}\otimes T_j\otimes {\mathcal A}$ which is in the kernel of $\mu_j$. We show that $\phi_0$ is in the image of $\mu_{j+1}$. Since the grading is respected by the map $\mu_j$ (Lemma [Lemma 69](#lem:gradres){reference-type="ref" reference="lem:gradres"}), we may assume that $\phi_0$ is homogeneous of some degree $d$. We may organize the terms of $\phi_0$ by the degree of the term in the third position.
$$\phi_0=\sum_{y\in Y}u_y\otimes y+\{\textup{terms with strictly higher degree in the third position}\},$$ where $Y$ is a nonempty linearly independent set of monomials in the graded piece ${\mathcal A}^{(d_0)}$ of least possible degree $d_0$, and $u_y\in({\mathcal A}\otimes T_j)^{(d-d_0)}$. Applying the map $\mu_j$ and using [\[eqn:idd\]](#eqn:idd){reference-type="eqref" reference="eqn:idd"} and Lemma [Lemma 67](#lem:posgrad){reference-type="ref" reference="lem:posgrad"}, we see that $\mu_j(\phi_0)=0$ is equivalent to the condition $$0=\sum_{y\in Y}(\mathcal F(\mu_j)(u_y))\otimes y+\{\textup{terms with strictly higher degree in the third position}\}.$$ Since the monomials $y\in Y$ are linearly independent, this implies that for all $y\in Y$ we have $\mathcal F(\mu_j)(u_y)=0$. Using the exactness of the one-sided complex, we conclude that there exist elements $v_y\in({\mathcal A}\otimes T_{j+1})^{(d-d_0)}$ such that $\mathcal F(\mu_{j+1})(v_y)=u_y$ for each $y\in Y$. We construct an element $$\psi_1=\sum_{y\in Y}v_y\otimes y\in({\mathcal A}\otimes T_{j+1}\otimes{\mathcal A})^{(d)}$$ and apply $\mu_{j+1}$ to get $$\mu_{j+1}(\psi_1)=\sum_{y\in Y}u_y\otimes y+\{\textup{terms with strictly higher degree in the third position}\}.$$ We observe that $\phi_1:=\phi_0-\mu_{j+1}(\psi_1)$ is in the kernel of $\mu_{j}$ and that its terms have strictly higher degree in the third position than $\phi_0$. We iterate the procedure, noting that the degree in the third position is strictly increasing but is bounded above by the total degree $d$ if $j=3$, and by $d$ plus the maximum degree of an arrow of ${\mathcal A}$ if $j=2$. Hence, after a finite number of iterations we get $\phi_r=\phi_0-\sum_{i=1}^r\mu_{j+1}(\psi_i)=0$. We conclude that $\phi_0=\mu_{j+1}(\sum_{i=1}^r\psi_i)$ and that the complex [\[eqn:cmp2\]](#eqn:cmp2){reference-type="eqref" reference="eqn:cmp2"} is exact at ${\mathcal A}\otimes T_j\otimes {\mathcal A}$.
◻ We now know that in order to show exactness of the bimodule complex [\[eqn:cmp2\]](#eqn:cmp2){reference-type="eqref" reference="eqn:cmp2"}, it suffices to show exactness of the one-sided complex [\[eqn:cmp1\]](#eqn:cmp1){reference-type="eqref" reference="eqn:cmp1"}. We follow [@XPressland2019 §3] and prove this vertex by vertex. If $v$ is a vertex, let $S_v:=e_vT_0e_v$ be the simple module at $v$. We may consider the complex [\[eqn:cmp1\]](#eqn:cmp1){reference-type="eqref" reference="eqn:cmp1"} as a complex of $({\mathcal A},T_0)$-bimodules, which we denote by $\mathbf P^1$. Since $T_0=\oplus_{v\in Q_0}S_v$, we have $$\mathbf P^1=\oplus_{v\in Q_0}\mathbf P^1e_v$$ as a complex of left ${\mathcal A}$-modules. Hence, in order to show exactness of the complex of left ${\mathcal A}$-modules [\[eqn:cmp1\]](#eqn:cmp1){reference-type="eqref" reference="eqn:cmp1"}, it suffices to show that for any vertex $v$ of $Q$, the complex of left ${\mathcal A}$-modules $\mathbf P^1e_v$ is exact. We rewrite this complex $\mathbf P^1e_v$ as $$\label{eqn:cmp4} 0\to X_3\xrightarrow{\bar\mu_3}X_2\xrightarrow{\bar\mu_2}X_1\xrightarrow{\bar\mu_1}{\mathcal A}\otimes T_0\otimes S_v\to S_v\to0,$$ where the spaces $X_i$ are defined as $$\begin{aligned} X_1&:=\bigoplus_{\beta\in T_v}{\mathcal A}e_{h(\beta)},\\ X_2&:=\bigoplus_{\alpha\in H_v^m}{\mathcal A}e_{t(\alpha)},\\ X_3&:=\begin{cases} {\mathcal A}e_v&v\in Q_0^m\\ 0&\textup{else}\end{cases},\end{aligned}$$ and the maps $\bar\mu_j$ are induced by $\mathcal F(\mu_j)$ under the relevant isomorphisms. We will make explicit the maps $\bar\mu_3$ and $\bar\mu_2$ after introducing some notation.
We write a general element $x$ of $\oplus_{\alpha\in H_v^{m}}{\mathcal A}e_{t(\alpha)}$ as $x=\sum_{\alpha\in H_v^{m}}x_\alpha\otimes[\alpha]$, where for any $\alpha\in H_v^{m}$, the summand $x_\alpha\otimes[\alpha]$ refers to the element $x_\alpha\in{\mathcal A}e_{t(\alpha)}$ in the summand of $\oplus_{\alpha\in H_v^{m}}{\mathcal A}e_{t(\alpha)}$ indexed by $\alpha$. Similarly, a general element of $\oplus_{\beta\in T_v}{\mathcal A}e_{h(\beta)}$ will be written as $y=\sum_{\beta\in T_v}y_\beta\otimes[\beta]$. We define the *right derivative $\partial_\beta^r$ with respect to $\beta$* on a path $\gamma_k\dots\gamma_1$ by $$\partial_\beta^r(\gamma_k\dots\gamma_1)=\begin{cases}\gamma_k\dots\gamma_2&\gamma_1=\beta\\0&\gamma_1\neq\beta\end{cases}$$ and extend linearly and continuously. Similarly, there is a *left derivative*, defined on paths by $$\partial_\beta^l(\gamma_k\dots\gamma_1)=\begin{cases}\gamma_{k-1}\dots\gamma_1&\gamma_k=\beta\\0&\gamma_k\neq\beta.\end{cases}$$ Given two arrows $\alpha$ and $\beta$ of $Q$, we observe that $$\partial_\beta^r(\partial(\alpha))=\partial_\alpha^l(\partial(\beta)).$$ We now calculate: $$\begin{aligned} \bar\mu_2(x)&=\sum_{\beta\in T_v}\left(\sum_{\alpha\in H_v^m}x_\alpha\partial_\beta^r(\partial(\alpha))\right)\otimes[\beta]\\ \bar\mu_3(x)&=\sum_{\alpha\in H_v^{m}}x\alpha\otimes[\alpha]\end{aligned}$$ We now finally prove the main result of this section after citing the disk version proven by Pressland. Note that in [@XPressland2019], dimer models on a disk are required to have at least three boundary vertices to avoid degenerate cases. This condition is not used in the cited result (see [@XPressland2019 Remark 2.2]), so we cite it without this limitation. The results and proofs of [@XPressland2015 §5] and [@XPressland2019 §3] are stated only for completed Jacobian algebras, but work also for their noncompleted variants. Hence, we cite this result for the completed and noncompleted algebras simultaneously. 
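The right and left derivatives act on a monomial path simply by stripping one arrow. Before stating the theorem, here is a toy sketch fixing the conventions; the arrow names are hypothetical, a path $\gamma_k\dots\gamma_1$ is encoded as the list `[gamma_k, ..., gamma_1]`, and `None` stands in for the zero element.

```python
# Toy sketch of the right and left derivatives on paths.  A path
# gamma_k ... gamma_1 is encoded as the list [gamma_k, ..., gamma_1],
# so gamma_1 (the first arrow traversed) sits at the end of the list.
# Arrow names are hypothetical; None stands in for the zero element.

def d_right(path, beta):
    """partial_beta^r: strip gamma_1 if gamma_1 == beta, else return 0."""
    if path and path[-1] == beta:
        return path[:-1]
    return None

def d_left(path, beta):
    """partial_beta^l: strip gamma_k if gamma_k == beta, else return 0."""
    if path and path[0] == beta:
        return path[1:]
    return None

p = ["c", "b", "a"]  # the path cba, traversing a, then b, then c

print(d_right(p, "a"))  # → ['c', 'b'], i.e. the path cb
print(d_right(p, "b"))  # → None, i.e. 0
print(d_left(p, "c"))   # → ['b', 'a'], i.e. the path ba
```

On linear combinations (and, in the completed case, on limits) both derivatives are extended linearly and continuously, as in the text.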
**Theorem 72** ([@XPressland2019 Theorem 3.7], [@XPressland2015 Theorem 5.6]). *If $Q$ is a path-consistent dimer model on a disk, then the sequence [\[eqn:cmp4\]](#eqn:cmp4){reference-type="eqref" reference="eqn:cmp4"} is exact for all $v$, and hence ${\mathcal A}$ is bimodule internally 3-Calabi-Yau with respect to its boundary idempotent.* In fact, Pressland's proof of Theorem [Theorem 72](#thm:Calabi-Yau-disk){reference-type="ref" reference="thm:Calabi-Yau-disk"} works in the general setting, with the stipulation that all computations must be performed as a sum over homotopy classes. We give a shorter proof here, which uses Pressland's result for disk models as well as the theory of dimer submodels developed in Section [5](#sec:ds){reference-type="ref" reference="sec:ds"}. **Theorem 73**. *Let $Q$ be a finite dimer model. If $Q$ is weakly consistent, then $\widehat A_Q$ is bimodule internally 3-Calabi-Yau with respect to its boundary idempotent. If $Q$ is strongly consistent, then $A_Q$ is bimodule internally 3-Calabi-Yau with respect to its boundary idempotent.* *Proof.* By Theorem [Theorem 68](#thm:bim3cy){reference-type="ref" reference="thm:bim3cy"}, it suffices to prove exactness of the two-sided complex [\[eqn:cmp2\]](#eqn:cmp2){reference-type="eqref" reference="eqn:cmp2"}. By Proposition [Proposition 71](#prop:1iff2){reference-type="ref" reference="prop:1iff2"} or Theorem [Theorem 70](#thm:wherebefore){reference-type="ref" reference="thm:wherebefore"} (depending on whether ${\mathcal A}=A_Q$ or ${\mathcal A}=\widehat A_Q$), it suffices to prove exactness of the one-sided complex [\[eqn:cmp1\]](#eqn:cmp1){reference-type="eqref" reference="eqn:cmp1"}. As argued in the text following Proposition [Proposition 71](#prop:1iff2){reference-type="ref" reference="prop:1iff2"}, we may prove this by showing that the complex [\[eqn:cmp4\]](#eqn:cmp4){reference-type="eqref" reference="eqn:cmp4"} is exact for any choice of $v\in Q_0$. 
We need only show that $\bar\mu_3$ is injective and that $\ker\bar\mu_2\subseteq\textup{im}\ \bar\mu_3$. First, we show injectivity of $\bar\mu_3$. If $v$ is a boundary vertex, then this is trivial, so suppose $v\in Q_0^{m}$. Suppose there is some nonzero $x\in{\mathcal A}e_v$ with $0=\bar\mu_3(x)=\sum_{\alpha\in H_v^{m}}x\alpha\otimes[\alpha]$. Write $x=\sum_Cx_C$, where the sum is over homotopy classes of paths in $Q$ starting at $v$ and $x_C=\sum_{m=0}^\infty a_{C,m}r_Cf^m$, where $r_C$ is a minimal path in the homotopy class $C$ and $a_{C,m}\in\mathbb C$. If $C$ and $C'$ are different homotopy classes, then $C\alpha$ and $C'\alpha$ are different homotopy classes for any arrow $\alpha$. In particular, the summands of $\bar\mu_3(x_C)$ and $\bar\mu_3(x_{C'})$ corresponding to each arrow $\alpha\in H_v^{m}$ are in different homotopy classes. Since $\bar\mu_3(x)=0$, this means that $\bar\mu_3(x_C)=0$ for all homotopy classes $C$. Since $x\neq0$, we may choose a homotopy class $C$ such that $a_{C,m}\neq0$ for some $m$. Then $0=\bar\mu_3(x_C)=\sum_{\alpha\in H_v^{m}}x_C\alpha\otimes[\alpha]$, so $0=x_C\alpha=\sum_{m=0}^\infty a_{C,m}r_C\alpha f^m$ for all $\alpha\in H_v^{m}$. Hence, $a_{C,m}=0$ for all $m\geq0$ and we have $x_C=0$. This contradicts our choice of $C$ and completes the proof of injectivity of $\bar\mu_3$. We now prove that the image of $\bar\mu_3$ contains the kernel of $\bar\mu_2$. Take a nonzero element $x=\sum_Cx_C$ of $\ker\bar\mu_2$, where the sum is over homotopy classes of paths in $Q$ starting at the tail of some arrow $\alpha\in H_v^{m}$ and $x_C=\sum_{\alpha\in H_v^{m}}\sum_{m=0}^\infty\left(a_{C,\alpha,m}r_Cf^m\otimes[\alpha]\right)$, where $r_C$ is a minimal path in $Q$ homotopic to $C$ and the $a_{C,\alpha,m}$ are coefficients in $\mathbb C$. We wish to find $y\in X_3$ such that $\bar\mu_3(y)=x$.
It suffices to find $y_C$ for each homotopy class $C$ with $\bar\mu_3(y_C)=x_C$ (note that elements of $y_C$ will not be in the homotopy class $C$), so fix a homotopy class $C$. Pick a vertex $\tilde v$ of $\widetilde Q$ corresponding to $v$. For $\alpha\in H_v^{m}$, let $\tilde\alpha$ be a corresponding arrow in $\widetilde Q$ ending at $\tilde v$. Choose a (finite) disk submodel $Q^C$ of $\widetilde Q$ containing $\tilde v$ and a minimal path $r_{C\alpha}$ in $Q$ homotopic to $C\alpha$ for each $\alpha\in H_v^{m}$. Lift each $r_{C\alpha}$ to a minimal path $\tilde r_{C\alpha}$ beginning at $t(\tilde\alpha)$. Similarly, lift $x_C$ to $\tilde x_C:=\sum_{\alpha\in H_v^{m}}\sum_{m=0}^\infty(a_{C,\alpha,m}\tilde r_Cf^m\otimes[\alpha])$, where $\tilde r_C$ is a lift of $r_C$ beginning at $\tilde v$. Then $\tilde x_C$ is in the kernel of the lift $\widetilde{\bar\mu_2}$. By Corollary [Corollary 55](#cor:dimer-submodel-consistent){reference-type="ref" reference="cor:dimer-submodel-consistent"}, $Q^C$ is weakly consistent. By Theorem [Theorem 56](#thm:submodel-path-equivalence){reference-type="ref" reference="thm:submodel-path-equivalence"}, two paths of $Q^C$ are equivalent in $Q^C$ if and only if they are equivalent as paths of $\widetilde Q$. By Theorem [Theorem 72](#thm:Calabi-Yau-disk){reference-type="ref" reference="thm:Calabi-Yau-disk"}, the sequence corresponding to [\[eqn:cmp4\]](#eqn:cmp4){reference-type="eqref" reference="eqn:cmp4"} for $Q^C$ is exact. In particular, we get some $\tilde y_C$ in $\tilde X_3$ which maps to $\tilde x_C$ through $\widetilde{\bar\mu_3}$. Then $\tilde y_C$, considered as a sum of paths in $\widetilde Q$, descends to some $y_C$ in $X_3$ with $\bar\mu_3(y_C)=x_C$ and the proof is complete. ◻ ## Categorification In certain special cases Theorem [Theorem 73](#thm:Calabi-Yau){reference-type="ref" reference="thm:Calabi-Yau"} provides examples of categorifications of cluster algebras. We loosely model this section after [@XPressland2019 §4].
We avoid defining technical terms used in this subsection, and instead refer to [@XPressland2019] for more information. **Theorem 74** ([@AIRX Theorem 4.1 and Theorem 4.10][\[AIR\]]{#AIR label="AIR"}[\[thm:cat-lemma\]]{#thm:cat-lemma label="thm:cat-lemma"}). *Let $A$ be an algebra and $e\in A$ an idempotent. If $A$ is Noetherian, $\underline A=A/\langle e\rangle$ is finite dimensional, and $A$ is bimodule 3-Calabi-Yau with respect to $e$, then* 1. *$B=eAe$ is Iwanaga-Gorenstein with Gorenstein dimension at most 3,* 2. *$eA$ is a cluster-tilting object in the Frobenius category of Gorenstein projective modules $\textup{GP}(B)$,* 3. *The stable category $\underline{\textup{GP}}(B)$ is a 2-Calabi-Yau triangulated category, and* 4. *The natural maps $A\to\textup{End}_B(eA)^{op}$ and $\underline{A}\to\underline{\textup{End}}_B(eA)^{op}$ are isomorphisms.* If $Q$ is a finite weakly consistent dimer model and $e$ is its boundary idempotent, then $\widehat A_Q$ is bimodule internally 3-Calabi-Yau with respect to $e$ by Theorem [Theorem 73](#thm:Calabi-Yau){reference-type="ref" reference="thm:Calabi-Yau"}. Hence, in order to apply Theorem [\[thm:cat-lemma\]](#thm:cat-lemma){reference-type="ref" reference="thm:cat-lemma"}, it suffices to check that $\widehat A_Q$ is Noetherian and that $\widehat A_Q/\langle e\rangle$ is finite dimensional as a vector space. **Corollary 75**. *Let $Q$ be a finite weakly consistent dimer model with boundary idempotent $e$. Suppose that $\widehat A_Q/\langle e\rangle$ is finite dimensional (hence that $\widehat A_Q/\langle e\rangle\cong A_Q/\langle e\rangle$), and that $\widehat A_Q$ is Noetherian. Write $\widehat B_Q:=e\widehat A_Qe$ for the completed boundary algebra of $Q$.* 1. *$\widehat B_Q$ is Iwanaga-Gorenstein with Gorenstein dimension at most 3,* 2. *$e\widehat A_Q$ is a cluster-tilting object in the Frobenius category of Gorenstein projective modules $\textup{GP}(\widehat B_Q)$,* 3.
*The stable category $\underline{\textup{GP}}(\widehat B_Q)$ is a 2-Calabi-Yau triangulated category, and* 4. *[\[en:4\]]{#en:4 label="en:4"}The natural maps $\widehat A_Q\to\textup{End}_{\widehat B_Q}(e\widehat A_Q)^{op}$ and $A_Q/\langle e\rangle\to\widehat A_Q/\langle e\rangle\cong\underline{\textup{End}}_{\widehat B_Q}(e\widehat A_Q)^{op}$ are isomorphisms.* If $Q$ has no digons, then it follows from Corollary [Corollary 75](#thm:cat-thm){reference-type="ref" reference="thm:cat-thm"} [\[en:4\]](#en:4){reference-type="eqref" reference="en:4"} and [@KR Theorem 2.1] that ${\textup{GP}}(\widehat B_Q)$ is a categorification of the cluster algebra whose initial seed is given by the underlying ice quiver of $Q$. Section [9](#sec:red){reference-type="ref" reference="sec:red"} gives us tools to *reduce* a dimer model by removing digon-faces while preserving the dimer algebra. **Remark 76**. In the setting of Theorem [\[thm:cat-lemma\]](#thm:cat-lemma){reference-type="ref" reference="thm:cat-lemma"}, if $e''$ is an idempotent orthogonal to $e$ then $A$ is bimodule 3-Calabi-Yau with respect to $e':=e+e''$ by [@XPressland2015 Remark 2.2]. This means that if $Q$ is a weakly consistent dimer model and $\widehat A_Q$ is Noetherian, then even if $\widehat A_Q/\langle e\rangle$ is not finite dimensional we may use a larger idempotent $e'$, in which some of the internal vertices are treated as frozen, in order to apply Theorem [\[thm:cat-lemma\]](#thm:cat-lemma){reference-type="ref" reference="thm:cat-lemma"}. In this way, we can get categorifications even from, for example, a consistent dimer model on a torus. See [@AIRX §6]. **Definition 77**. We say that a weakly consistent dimer model $Q$ is *boundary-finite* if $\widehat A_Q/\langle e\rangle$ is finite-dimensional. We say that $Q$ is *Noetherian* if $\widehat A_Q$ is Noetherian.
Hence, if $Q$ is a finite weakly consistent dimer model, then in order to apply Corollary [Corollary 75](#thm:cat-thm){reference-type="ref" reference="thm:cat-thm"} we need only check Noetherianness and boundary-finiteness of $Q$. The authors are not aware of an example of a consistent Noetherian dimer model on a surface other than a disk, annulus, or torus. Dimer models on tori and disks have been studied extensively. - Any consistent dimer model on a torus is Noetherian, but since the boundary idempotent is zero such a dimer model is never boundary-finite. On the other hand, as in Remark [Remark 76](#remk:gggg){reference-type="ref" reference="remk:gggg"} we may use a larger idempotent to apply Theorem [\[thm:cat-lemma\]](#thm:cat-lemma){reference-type="ref" reference="thm:cat-lemma"}. - If $Q$ is a consistent dimer model on a disk with no digon-faces, Pressland showed in [@XPressland2019 Proposition 4.4] that $Q$ is Noetherian and boundary-finite. Hence, Corollary [Corollary 75](#thm:cat-thm){reference-type="ref" reference="thm:cat-thm"} can be applied to any consistent dimer model on a disk. Moreover, in the same paper it is shown that $(\textup{GP}(\widehat B_Q),e\widehat A_Q)$ is a Frobenius 2-Calabi-Yau realization of the cluster algebra $\mathscr A_Q$ given by the ice quiver $Q$. On the other hand, dimer models on annuli have received comparatively little attention. In a future paper, we will study consistent dimer models on annuli. For now, we give a few examples pertaining to Corollary [Corollary 75](#thm:cat-thm){reference-type="ref" reference="thm:cat-thm"}. Figure [\[fig:noeth-ann\]](#fig:noeth-ann){reference-type="ref" reference="fig:noeth-ann"} shows that a dimer model on an annulus may be Noetherian but not boundary-finite, or boundary-finite but not Noetherian.
Example [Example 78](#ex:noeth-fin){reference-type="ref" reference="ex:noeth-fin"} shows that there exist (strongly) consistent dimer models on annuli satisfying both of these conditions simultaneously. **Example 78**. Figure [\[fig:noeth-fin\]](#fig:noeth-fin){reference-type="ref" reference="fig:noeth-fin"} and Figure [\[fig:noeth-finn\]](#fig:noeth-finn){reference-type="ref" reference="fig:noeth-finn"} show strongly consistent dimer models satisfying both Noetherianness and boundary-finiteness. Both figures suggest general constructions. One may, as in Figure [\[fig:noeth-fin\]](#fig:noeth-fin){reference-type="ref" reference="fig:noeth-fin"}, construct a strongly consistent dimer model on an annulus such that the subquiver of internal vertices looks like "some number of layers of an alternating affine type $\widetilde A_{2n}$ quiver" where corresponding vertices of adjacent layers are connected by arrows. Similarly, one may, as in Figure [\[fig:noeth-finn\]](#fig:noeth-finn){reference-type="ref" reference="fig:noeth-finn"}, construct a strongly consistent dimer model on an annulus such that the subquiver of internal vertices looks like some number of layers of any affine type $\widetilde A_n$ quiver. Note that in all such examples, no strand starts and ends at the same boundary component. Corollary [Corollary 75](#thm:cat-thm){reference-type="ref" reference="thm:cat-thm"} may be applied to the annulus models of Example [Example 78](#ex:noeth-fin){reference-type="ref" reference="ex:noeth-fin"} to show that the Gorenstein-projective module categories over their boundary algebras categorify the cluster algebras given by their underlying quivers. # Extra Results about Equivalence Classes {#sec:thin-quivers} [\[sec:eraec\]]{#sec:eraec label="sec:eraec"} For this section, we suppose that $Q$ is a weakly consistent dimer model unless otherwise specified.
We use Theorem [Theorem 56](#thm:submodel-path-equivalence){reference-type="ref" reference="thm:submodel-path-equivalence"} to get some interesting results about the path equivalence classes in weakly consistent quivers. These results are not used anywhere else in the paper. **Proposition 79**. *Let $p$ and $q$ be distinct elementary paths in a weakly consistent dimer model $\widetilde Q$ with the same start and end vertices. If $p$ is to the right of $q$, then either $p$ has a left-morphable arrow or $q$ has a strictly higher c-value than $p$.* *Proof.* We may reduce to the case where $p$ and $q$ share no vertices except for the start and end vertices. Then $q^{-1}p$ defines a disk submodel $Q'$ of $\widetilde Q$ in which $p$ consists only of boundary arrows in counter-clockwise faces and $q$ consists of boundary arrows in clockwise faces. If $q$ has a strictly higher c-value than $p$ in $\widetilde Q$, then we are done. Suppose the c-value of $q$ is less than or equal to that of $p$ in $\widetilde Q$. Then $[p]=[qf^m]$ for some $m\geq0$ in $\widetilde Q$. By Theorem [Theorem 56](#thm:submodel-path-equivalence){reference-type="ref" reference="thm:submodel-path-equivalence"}, the same is true in $Q'$. It follows that $p$ has a left-morphable arrow in $Q'$, and hence in $\widetilde Q$. This proves the desired result. ◻ **Corollary 80**. *Let $Q=\widetilde Q$ be a simply connected dimer model. Let $v$ and $w$ be vertices of $Q$. Suppose that there is a leftmost minimal path $p$ from $v$ to $w$. Then $p$ is to the left of every minimal path and to the right of every leftmost path.* *Proof.* If $p'$ is any other minimal path from $v$ to $w$, Proposition [Proposition 79](#prop:catch-all-right-left-c-value-thing){reference-type="ref" reference="prop:catch-all-right-left-c-value-thing"} shows that $p$ is to the left of $p'$.
If $q$ is any other leftmost path from $v$ to $w$, Proposition [Proposition 79](#prop:catch-all-right-left-c-value-thing){reference-type="ref" reference="prop:catch-all-right-left-c-value-thing"} shows that $q$ is to the left of $p$. ◻ If $Q$ is not simply connected, we may still pass to the universal cover and apply Corollary [Corollary 80](#cor:lrcool){reference-type="ref" reference="cor:lrcool"} to gain insight into the equivalence classes of paths of $Q$. **Corollary 81**. *Suppose $Q$ is a weakly consistent dimer model. If $p$ and $q$ are minimal homotopic leftmost paths with the same start and end vertices, then $p=q$.* *Proof.* Suppose $p$ and $q$ are homotopic minimal leftmost paths with the same start and end vertices. Lift them to minimal leftmost paths $\tilde p$ and $\tilde q$ of $\widetilde Q$. Since $p$ and $q$ are homotopic, we may choose these lifts to have the same start and end vertices. It suffices to show that $\tilde p=\tilde q$. Suppose this is not the case. By taking subpaths, we may reduce to the case where $\tilde p$ and $\tilde q$ share only their start and end vertices. Say $\tilde p$ is to the right of $\tilde q$. Since both paths have the same c-value, Proposition [Proposition 79](#prop:catch-all-right-left-c-value-thing){reference-type="ref" reference="prop:catch-all-right-left-c-value-thing"} shows that $\tilde p$ has a left-morphable arrow, a contradiction. ◻ Corollary [Corollary 81](#cor:thin-leftmost-path-unique){reference-type="ref" reference="cor:thin-leftmost-path-unique"} indicates that there is at most one minimal leftmost path in each homotopy class of paths between any two vertices of $Q$. In general, leftmost paths need not exist. Figure [\[fig:non-red-annulus\]](#fig:non-red-annulus){reference-type="ref" reference="fig:non-red-annulus"} gives an example of a finite weakly consistent dimer model on an annulus where a minimal path has an infinite equivalence class. Let $\alpha$ be one of the arrows of the digon-face of $Q$.
Then an arbitrarily long series of left-morphs at the other arrow of the digon-face may be applied to $\alpha$. In other words, there is no leftmost path from the outer boundary ring to the inner boundary ring. The dimer model on the right of Figure [\[fig:noeth-ann\]](#fig:noeth-ann){reference-type="ref" reference="fig:noeth-ann"} has no digons and displays the same behavior: a minimal path from a vertex of the inner boundary to a vertex of the outer boundary may be left-morphed or right-morphed indefinitely, depending on the path. On the other hand, by assuming nondegeneracy we guarantee the existence of leftmost and rightmost paths between any given vertices. **Theorem 82**. *Let $Q$ be a strongly consistent dimer model. Let $v$ and $w$ be distinct vertices of $Q$ and let $C$ be a homotopy class of paths from $v$ to $w$. Then there is a unique minimal leftmost path $p$ from $v$ to $w$ in $C$.* *Proof.* We show the existence of a leftmost path from $v$ to $w$. Uniqueness follows from Corollary [Corollary 81](#cor:thin-leftmost-path-unique){reference-type="ref" reference="cor:thin-leftmost-path-unique"}. Lift $v$ and $w$ to vertices $\tilde v$ and $\tilde w$ of the universal cover model $\widetilde Q$ such that a path from $\tilde v$ to $\tilde w$ descends to a path of $Q$ in homotopy class $C$. We consider paths on $\widetilde Q$ to have the grading of their corresponding paths on $Q$. By path-consistency, all minimal paths from $\tilde v$ to $\tilde w$ are equivalent and hence have the same grading $k$. If $\tilde q=\alpha_m\dots\alpha_1$ is a minimal path from $\tilde v$ to $\tilde w$, then since the degree of each arrow is positive we must have $m\leq k$. This shows that the length of a minimal path from $\tilde v$ to $\tilde w$ is bounded by $k$. This condition guarantees that there are only finitely many minimal paths from $\tilde v$ to $\tilde w$. Now consider again the minimal path $\tilde q$ from $\tilde v$ to $\tilde w$.
Let $\tilde r$ be any path from $\tilde w$ to $\tilde v$. If $\tilde q$ is not leftmost, then left-morph it at some arrow $\alpha$ to get a path $\tilde q_1$. Then Lemma [Lemma 21](#lem:rotation-number-formula){reference-type="ref" reference="lem:rotation-number-formula"} gives that $\textup{Wind}(\tilde r\tilde q_1,F)<\textup{Wind}(\tilde r\tilde q,F)$ if $F$ is one of the faces containing $\alpha$, and $\textup{Wind}(\tilde r\tilde q_1,F)=\textup{Wind}(\tilde r\tilde q,F)$ otherwise. Continue to left-morph to get some sequence of paths $\tilde q,\tilde q_1,\tilde q_2,\dots$. By the above inequalities, for any $j$ the inequality $\textup{Wind}(\tilde r\tilde q_{j+1},F)\leq\textup{Wind}(\tilde r\tilde q_j,F)$ holds for all faces $F$, and is strict for some choice of $F$. Repeating this argument shows that $\tilde q_i\neq\tilde q_j$ for $i\neq j$. Since there are only finitely many minimal paths from $\tilde v$ to $\tilde w$ and the sequence never repeats itself, the sequence must terminate with some leftmost path $\tilde p:=\tilde q_l$ from $\tilde v$ to $\tilde w$. This descends to a leftmost path $p$ from $v$ to $w$ in homotopy class $C$. ◻ Theorem [Theorem 82](#cor:c){reference-type="ref" reference="cor:c"} and Corollary [Corollary 80](#cor:lrcool){reference-type="ref" reference="cor:lrcool"} are useful tools to understand equivalence classes of paths in nondegenerate dimer models. In particular, if $Q$ is a consistent (hence also nondegenerate) dimer model in a disk and $e$ is the boundary idempotent of $Q$, the *boundary algebra* $eA_Qe$ may be used to obtain an additive Frobenius categorification of the ice quiver of $Q$ [@XPressland2019]. In a future paper, we will use these results to study boundary algebras of consistent dimer models on disks. # Reduction of a Dimer Model {#sec:red} [\[sec:reduce\]]{#sec:reduce label="sec:reduce"} There has been some interest, particularly in the disk case, in *reducing* a dimer model by removing digon-faces.
See [@XPressland2019] and [@XBroomhead2009 Remark 2.9]. In particular, if a weakly consistent, Noetherian, boundary-finite dimer model $Q$ has no digons, our categorification result Corollary [Corollary 75](#thm:cat-thm){reference-type="ref" reference="thm:cat-thm"} shows that $\textup{GP}(Q)$ categorifies the cluster algebra $\mathscr A_Q$ given by the underlying quiver of $Q$. On the other hand, if $Q$ has digons, then it may not be immediately clear what cluster algebra is being categorified. Reducing a dimer model helps to avoid this issue. In [@XPressland2019], Pressland works with consistent dimer models on a disk with more than three boundary vertices. He observes that, in this case, the removal of digon-faces corresponds to untwisting moves in the strand diagram, and hence is straightforward. Moreover, the resulting *reduced dimer models* do not have any digons, and the disk version of Corollary [Corollary 75](#thm:cat-thm){reference-type="ref" reference="thm:cat-thm"} can be used to prove results about categorification. The situation for more general weakly consistent dimer models is more complicated, as there may be configurations similar to that of Figure [\[fig:non-red-annulus\]](#fig:non-red-annulus){reference-type="ref" reference="fig:non-red-annulus"} contained in the dimer model, whose digon-faces may not be removed. However, such occurrences will force the dimer model to be degenerate. Hence, in the nondegenerate case we may freely remove internal digon-faces from a dimer model. First, we consider the general situation. **Proposition 83**. *Let $Q$ be a weakly consistent dimer model with a finite number of digon-faces. There exists a *reduced dimer model $Q_{red}$ of $Q$* satisfying the following:* 1. *[\[prdm:0\]]{#prdm:0 label="prdm:0"} $S(Q_{red})=S(Q)$,* 2. *[\[prdm:1\]]{#prdm:1 label="prdm:1"} $A_{Q_{red}}\cong A_Q$,* 3. *[\[prdm:2\]]{#prdm:2 label="prdm:2"} $Q_{red}$ is weakly consistent,* 4.
*[\[prdm:55\]]{#prdm:55 label="prdm:55"} If $Q$ is nondegenerate, then $Q_{red}$ is nondegenerate, and* 5. *[\[prdm:-1\]]{#prdm:-1 label="prdm:-1"} Either $Q_{red}$ is a dimer model on a disk composed of a single digon-face, or every digon-face of $Q_{red}$ is an internal face incident to only one other face.* The condition [\[prdm:-1\]](#prdm:-1){reference-type="eqref" reference="prdm:-1"} means that, as long as $Q_{red}$ is not composed of a single digon-face, every digon-face of $Q_{red}$ is in a configuration like that of Figure [\[fig:non-red-annulus\]](#fig:non-red-annulus){reference-type="ref" reference="fig:non-red-annulus"}. See also Figure [\[fig:cons-tor-no-perf\]](#fig:cons-tor-no-perf){reference-type="ref" reference="fig:cons-tor-no-perf"}. In contrast, the configuration of Figure [\[fig:tr-annulus\]](#fig:tr-annulus){reference-type="ref" reference="fig:tr-annulus"} is not possible in a weakly consistent dimer model. *Proof.* Note that [\[prdm:2\]](#prdm:2){reference-type="eqref" reference="prdm:2"} follows from [\[prdm:1\]](#prdm:1){reference-type="eqref" reference="prdm:1"}. Suppose that $Q$ is not merely a digon-face and let $\alpha\beta$ be a digon-face of $Q$ which is either a boundary face or is incident to two distinct faces. Then $\alpha$ and $\beta$ cannot both be boundary arrows. Suppose $\alpha\beta$ is a clockwise face; the counter-clockwise case is symmetric. If $\alpha$ and $\beta$ are both internal arrows, then by assumption the faces $F_\alpha^{cc}$ and $F_\beta^{cc}$ are the distinct neighbors of the digon $\alpha\beta$. Let $Q^1=(Q^1_0,Q^1_1,Q^1_2)$ be the dimer model whose underlying quiver is $Q$ without $\alpha$ and $\beta$ and whose set of faces is the same as $Q_2$, but with the faces $\{\alpha\beta,F_\alpha^{cc},F_\beta^{cc}\}$ replaced with one face whose arrows are those of $R_\alpha^{cc}$ and $R_\beta^{cc}$.
Since the dimer algebra relations of $Q$ give $[\alpha]=[R_\beta^{cc}]$ and $[\beta]=[R_\alpha^{cc}]$, the dimer algebra is not changed by this operation. The surface is also unchanged by this operation. If $\mathcal M$ is a perfect matching of $Q$, then $\mathcal M$ contains either $\alpha$ or $\beta$, and the removal of this arrow from $\mathcal M$ gives a perfect matching of $Q^1$. Hence, if $Q$ is nondegenerate then $Q^1$ is nondegenerate. See Figure [\[fig:digon-remove1\]](#fig:digon-remove1){reference-type="ref" reference="fig:digon-remove1"}. On the other hand, suppose without loss of generality that $\alpha$ is a boundary arrow and $\beta$ is internal. Then let $Q^1$ be the dimer model obtained by removing the arrow $\alpha$ from $Q_1$ and the face $\alpha\beta$ from $Q_2$. As above, $[\alpha]=[R_\beta^{cc}]$ in $A_Q$ and hence the dimer algebra is unchanged by this operation. If $\mathcal M$ is a perfect matching of $Q$, then $\mathcal M$ contains $\alpha$ or $\beta$. If $\mathcal M$ contains $\alpha$, then removing this arrow gives a perfect matching of $Q^1$. If $\mathcal M$ contains $\beta$, then $\mathcal M$ is also a perfect matching of $Q^1$. Hence, if $Q$ is nondegenerate, then $Q^1$ is nondegenerate. Furthermore, $S(Q^1)=S(Q)$. See Figure [\[fig:digon-remove2\]](#fig:digon-remove2){reference-type="ref" reference="fig:digon-remove2"}. In either case, we have defined a quiver $Q^1$ such that $S(Q^1)=S(Q)$ and $A_{Q^1}\cong A_Q$. Furthermore, $Q^1$ has strictly fewer digon-faces than $Q$. We may now apply this process repeatedly to remove all digon-faces of $Q$. Since $Q$ has a finite number of digon-faces, this process must terminate with some $Q_{red}$ such that either $Q_{red}$ only has two arrows or every digon-face of $Q_{red}$ is an internal face incident to only one other face.
In the former case, $Q_{red}$ must be composed of a single digon-face since if it has two digon-faces then $S(Q_{red})=S(Q)$ is a sphere, hence $Q$ is not weakly consistent, a contradiction. ◻ Figure [\[fig:non-red-annulus\]](#fig:non-red-annulus){reference-type="ref" reference="fig:non-red-annulus"} shows a weakly consistent model with an internal digon-face which may not be removed by the process of the above theorem. Indeed, if the digon-face is removed then the resulting "dimer model" would have a face which is not homeomorphic to an open disk. On the level of strand diagrams, removing the digon-face corresponds to an untwisting move that disconnects the strand diagram. If $Q$ has an infinite number of digon-faces, a new problem appears. See Figure [\[fig:cant-reduce\]](#fig:cant-reduce){reference-type="ref" reference="fig:cant-reduce"}, which shows an infinite model with an infinite number of digons. While any finite number of digons may be removed, not all of them can be removed at once. The universal cover of the dimer model of Figure [\[fig:non-red-annulus\]](#fig:non-red-annulus){reference-type="ref" reference="fig:non-red-annulus"} displays the same behavior. **Remark 84**. In [@XPressland2020 §3], Pressland outlines a method of removing digon-faces from general Jacobian ice quivers without changing the completed algebra. If this process is applied to the dimer model of Figure [\[fig:non-red-annulus\]](#fig:non-red-annulus){reference-type="ref" reference="fig:non-red-annulus"}, the resulting ice quiver is not a dimer model. Note that neither Figure [\[fig:non-red-annulus\]](#fig:non-red-annulus){reference-type="ref" reference="fig:non-red-annulus"} nor Figure [\[fig:cant-reduce\]](#fig:cant-reduce){reference-type="ref" reference="fig:cant-reduce"} is nondegenerate. In fact, any nondegenerate dimer model may be reduced, since the configuration of condition [\[prdm:-1\]](#prdm:-1){reference-type="eqref" reference="prdm:-1"} forces degeneracy, as the following result shows. **Corollary 85**.
*Let $Q$ be a strongly consistent dimer model. There exists a *reduced dimer model $Q_{red}$ of $Q$* satisfying the following:* 1. *$S(Q_{red})=S(Q)$,* 2. *$A_{Q_{red}}\cong A_Q$,* 3. *$Q_{red}$ is strongly consistent, and* 4. *Either $Q_{red}$ is a dimer model on a disk composed of a single digon-face, or $Q_{red}$ has no digon-faces.* *Proof.* Apply Proposition [Proposition 83](#prop:reduce-dimer-model){reference-type="ref" reference="prop:reduce-dimer-model"}. Suppose that $Q_{red}$ is not a dimer model on a disk composed of a single digon-face, and that $Q_{red}$ has a digon-face. By [\[prdm:-1\]](#prdm:-1){reference-type="eqref" reference="prdm:-1"}, this digon-face is an internal face $\alpha\beta$ which is incident to a single other face $F$. Let $\gamma$ be an arrow of $F$ which is not $\alpha$ or $\beta$. Any perfect matching $\mathcal M$ must contain $\alpha$ or $\beta$, since $\alpha\beta$ is a face. Then $\mathcal M$ cannot contain $\gamma$, since $\gamma$ shares a face with these arrows. We have shown that no perfect matching contains $\gamma$. Then $Q_{red}$, and by extension $Q$, is degenerate. ◻ We remark that $Q_{red}$ may have digons, even if it has no digon-faces. Consider the dimer model on a torus pictured on the left of Figure [\[fig:dignotdig\]](#fig:dignotdig){reference-type="ref" reference="fig:dignotdig"}. While the quiver $Q$ has digons, it has no *null-homotopic* digons. As we see by looking at the universal cover model on the right, this means that there are no digon-faces in $Q$, hence $Q=Q_{red}$ is reduced.
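The digon-removal step in the proof of Proposition 83 is essentially algorithmic. The following Python sketch implements the local merge operation for an internal digon-face with two distinct neighboring faces, on a toy representation in which a face is a cyclic list of arrows $(\mathrm{tail},\mathrm{head})$; the data structure and all function names are our own illustration, not part of the paper, and the deletion of the arrows from the underlying quiver is elided.

```python
def merge_digon(faces, digon_id):
    """One digon-removal step, following the proof of Proposition 83:
    the digon-face and its two distinct neighboring faces are replaced
    by a single face whose arrows are the two return paths R^cc.
    A face is a list of arrows (tail, head) in cyclic order."""
    alpha, beta = faces[digon_id]
    # each internal arrow lies in exactly one face besides the digon
    f_alpha = next(f for f, arr in faces.items() if f != digon_id and alpha in arr)
    f_beta = next(f for f, arr in faces.items() if f != digon_id and beta in arr)
    assert f_alpha != f_beta, "digon must be incident to two distinct faces"

    def return_path(face, arrow):
        # rotate the cycle so that `arrow` comes first, then drop it
        i = faces[face].index(arrow)
        cyc = faces[face][i:] + faces[face][:i]
        return cyc[1:]

    merged = return_path(f_alpha, alpha) + return_path(f_beta, beta)
    new_faces = {f: a for f, a in faces.items()
                 if f not in (digon_id, f_alpha, f_beta)}
    new_faces[f_alpha] = merged
    return new_faces

# toy local configuration: digon-face D = alpha*beta between two triangles
faces = {
    "D":  [(1, 2), (2, 1)],
    "Fa": [(1, 2), (2, 3), (3, 1)],
    "Fb": [(2, 1), (1, 3), (3, 2)],
}
reduced = merge_digon(faces, "D")
```

Since the head of each return path is the tail of the other (the digon is a face), the merged list of arrows again forms a closed cycle, mirroring the fact that the operation produces a single well-defined face.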
--- abstract: | In this paper we study natural reconfiguration spaces associated to the problem of distributing a fixed number of resources to labeled nodes of a tree network, so that no node is left empty. These spaces turn out to be cubical complexes, which can be thought of as higher-dimensional geometric extensions of the combinatorial Stirling problem of partitioning a set of named objects into non-empty labeled parts. As our main result, we prove that these Stirling complexes are always homotopy equivalent to wedges of spheres of the same dimension. Furthermore, we provide several combinatorial formulae to count these spheres. Somewhat surprisingly, the homotopy type of the Stirling complexes turns out to depend only on the number of resources and the number of the labeled nodes, not on the actual structure of the tree network. address: - Department of Mathematics, University of Bremen, 28334 Bremen, Federal Republic of Germany. - Okinawa Institute of Science and Technology Graduate University, 1919-1 Tancha, Onna-son, Kunigami-gun, Okinawa, Japan. author: - | Dmitry N. Kozlov\ with an appendix by title: Stirling complexes --- # Stirling complexes ## Motivation Consider the situation where $n$ unique resources need to be distributed among $m$ locations. Clearly, subject to the only condition that $n\geq m$, this can be done in many different ways. Specifically, the number of solutions is equal to $m!\genfrac{\{}{\}}{0pt}1{n}{m}$, where $\genfrac{\{}{\}}{0pt}1{n}{m}$ is the *Stirling number of the second kind*, which is a classical combinatorial function, counting the number of ways $n$ objects can be partitioned into $m$ non-empty groups, see [@GKP; @Kn; @S]. Imagine furthermore, that the locations, to which the resources are distributed, are connected by a tree network, and that each resource can be shifted from its location to a neighboring one. 
Simultaneous multiple shifts of different resources are allowed, as long as, at any point of the shifting procedure, each node retains some resource which is not being moved. We would like to model this situation by introducing a higher-dimensional parameter space which encodes the interplay of such shifts. In what follows we introduce a family of combinatorial cubical complexes, which fulfill this task. We shall call these complexes the *Stirling complexes*. In recent years topology has increasingly been used in applications, most notably in data analysis, see [@Ca] and the references therein. The idea of using higher-dimensional cell complexes to record transformations of combinatorial objects has been a further major thread in the tapestry of applied topology. For instance, a family of prodsimplicial complexes has been constructed in [@BaK06], see also [@BaK07; @Ko07; @Ko08], to find topological obstructions to graph colorings, a notoriously difficult problem. Another example is provided by the so-called protocol complexes, which have been introduced as a part of the topological approach to questions in theoretical distributed computing, see [@HKR] and the numerous references therein. Optimally, such constructions provide deeper insight into the original combinatorial questions, yielding at the same time interesting, often highly symmetric families of combinatorial cell complexes. In what follows, we shall use standard facts and terminology of graph theory, as well as algebraic topology. If the need arises, the reader is invited to consult [@Har] for graph theory, and [@FFG; @Fu; @GH; @Hat; @Ko08; @Ko20; @Mu] for algebraic topology. ## Definition of the Stirling complexes Let $m$ be an arbitrary integer, $m\geq 2$, and let $T$ be an arbitrary tree on $m$ vertices, labeled with numbers $1$ through $m$. This tree models our network. Assume furthermore we have $n\geq m$.
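The count $m!\genfrac{\{}{\}}{0pt}1{n}{m}$ of admissible distributions from the motivation above is easy to evaluate; in the sketch below, computed via the standard inclusion-exclusion expansion, the helper names `stirling2` and `surjections` are our own, not standard notation.

```python
from math import comb, factorial

def stirling2(n, m):
    """Stirling number of the second kind, by inclusion-exclusion:
    S(n, m) = (1/m!) * sum_j (-1)^j * C(m, j) * (m - j)^n."""
    return sum((-1)**j * comb(m, j) * (m - j)**n for j in range(m + 1)) // factorial(m)

def surjections(n, m):
    """Number of ways to distribute n named resources to m labeled
    nodes with no node left empty, i.e. m! * S(n, m)."""
    return factorial(m) * stirling2(n, m)
```

For instance, $3$ resources on $2$ nodes admit $2!\cdot S(3,2)=6$ distributions, while for $n<m$ the count is $0$, matching the emptiness of the complexes defined below.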
We can view $T$ as a $1$-dimensional simplicial complex, which leads us to considering the cubical complex $T^n$. Let us make the following observations about this complex. - The cubes of $T^n$ are indexed by the $n$-tuples $c=(c_1,\dots,c_n)$, where each $c_i$ is either a vertex or an edge of $T$. - The dimension of $c$ is equal to the number of $c_i$'s which are edges. Accordingly, the vertices of $T^n$ are indexed by the $n$-tuples of the vertices of $T$, the dimension of $T^n$ is equal to $n$, and the top-dimensional cubes are indexed by the $n$-tuples of the edges. - The boundary cubes of $c$ are obtained by replacing edges in the indexing $n$-tuple with adjacent vertices. The number of replaced edges is precisely the codimension of the corresponding boundary cube. We are now ready to define our main objects of study. **Definition 1**. *Given a tree $T$ with $m\geq 2$ vertices, and a positive integer $n$, the **Stirling complex** $\mathcal{S}tr(T,n)$ is the subcomplex of $T^n$ consisting of all $n$-tuples $c=(c_1,\dots,c_n)$, such that each vertex of $T$ occurs as an entry in $c$ at least once.* Since the condition of Definition 1 is preserved by taking the boundary, the Stirling complexes are well-defined. The following facts hold for Stirling complexes. - If $n<m$, the condition in Definition 1 cannot be fulfilled, so $\mathcal{S}tr(T,n)$ is empty in this case. - The complex $\mathcal{S}tr(T,m)$ consists of $m!$ vertices, indexed by all permutations of the set $[m]=\{1,\dots,m\}$. - In general, the vertices of $\mathcal{S}tr(T,n)$ are indexed by all ways to partition the set $[n]$ into $m$ non-empty labeled parts. Accordingly, the number of vertices of $\mathcal{S}tr(T,n)$ is equal to $m!\genfrac{\{}{\}}{0pt}1{n}{m}$. - The dimension of $\mathcal{S}tr(T,n)$ is equal to $n-m$, since this is the maximal number of resources which can be assigned to the edges of $T$.
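For small trees these observations can be verified by brute force. The sketch below enumerates the cells of $\mathcal{S}tr(T,n)$ directly from Definition 1; the representation of cells as tagged tuples and the function name are our own choices.

```python
from itertools import product

def stirling_complex_cells(tree_edges, n):
    """Enumerate the cells of Str(T, n): n-tuples whose entries are
    vertices or edges of T, such that every vertex of T occurs among
    the vertex entries at least once (Definition 1)."""
    vertices = sorted({v for e in tree_edges for v in e})
    symbols = [("v", v) for v in vertices] + [("e", e) for e in tree_edges]
    for c in product(symbols, repeat=n):
        support = {x for kind, x in c if kind == "v"}
        if support == set(vertices):
            yield c

# the path tree 1 - 2 - 3 (m = 3) with n = 4 resources
cells = list(stirling_complex_cells([(1, 2), (2, 3)], 4))
dims = [sum(kind == "e" for kind, _ in c) for c in cells]
```

Here one finds $3!\genfrac{\{}{\}}{0pt}1{4}{3}=36$ vertices and top dimension $n-m=1$, as predicted.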
For each $0\leq d\leq n-m$, the Stirling complex $\mathcal{S}tr(T,n)$ has $\binom{n}{d}(m-1)^dm!\genfrac{\{}{\}}{0pt}1{n-d}{m}$ cubes of dimension $d$. To see this, first choose $d$ resources among $n$, then assign each resource to one of the $m-1$ edges, and then finally distribute the rest of the resources to the nodes, so that no node is left empty. This gives us the following formula for the Euler characteristic: $$\label{eq:chistc} \chi(\mathcal{S}tr(T,n))=\sum_{d=0}^{n-m}(-1)^d\binom{n}{d}(m-1)^dm!\genfrac{\{}{\}}{0pt}0{n-d}{m}.$$ In what follows, we shall derive a better formula for $\chi(\mathcal{S}tr(T,n))$. ## Examples To acquaint ourselves with the Stirling complexes, let us consider a few further examples. *Example 1.* The first interesting example is $\mathcal{S}tr(T,m+1)$. The dimension of this Stirling complex is 1, so it is a graph. The numerical data of this graph is the following. - The number of vertices of $\mathcal{S}tr(T,m+1)$ is $$m!\genfrac{\{}{\}}{0pt}0{m+1}{m}=m!\binom{m+1}{2}=\frac{m}{2}(m+1)!.$$ The vertices of $\mathcal{S}tr(T,m+1)$ are indexed by the $(m+1)$-tuples of the vertices of $T$, with one vertex repeating twice and all other vertices occurring exactly once. - As a graph $\mathcal{S}tr(T,m+1)$ has $(m-1)(m+1)!$ edges; the edges are indexed by $(m+1)$-tuples consisting of one edge and $m$ vertices of $T$, with each vertex repeating exactly once. Accordingly, the Euler characteristic of this Stirling complex is given by $$\chi(\mathcal{S}tr(T,m+1))=-\dfrac{1}{2}(m-2)(m+1)!.$$ It is easy to see using a direct argument that $\mathcal{S}tr(T,m+1)$ is always connected. Therefore it is homotopy equivalent to a wedge of $\dfrac{1}{2}(m-2)(m+1)!+1$ circles. Consider now the special case when $m=4$. Let $T_1$ be the tree with one vertex of degree $3$ and $3$ leaves. Let $T_2$ be the string with $3$ edges: it has $2$ vertices of degree $2$ and $2$ leaves. 
Both $\mathcal{S}tr(T_1,5)$ and $\mathcal{S}tr(T_2,5)$ are connected and have $240$ vertices and $360$ edges. However, these two graphs are different: $\mathcal{S}tr(T_1,5)$ has $60$ vertices with valency $6$, and the rest of the vertices with valency $2$, whereas all vertices of $\mathcal{S}tr(T_2,5)$ have valency $2$ or $4$. We see therefore that, while the topology of $\mathcal{S}tr(T_1,5)$ and $\mathcal{S}tr(T_2,5)$ is the same, the spaces themselves depend on the actual tree structure of $T_1$ and $T_2$. *Example 2.* Next, consider the cubical complexes $\mathcal{S}tr(T,m+2)$, for $m\geq 2$. These are $2$-dimensional. The number of vertices is given by $$f_0:=m!\genfrac{\{}{\}}{0pt}1{m+2}{m}=m!\frac{1}{24}m(m+1)(m+2)(3m+1)= m(3m+1)\frac{(m+2)!}{24}.$$ The number of edges is given by $$f_1:=(m+2)(m-1)\frac{m}{2}(m+1)!=12m(m-1)\frac{(m+2)!}{24}.$$ Finally, the number of squares is given by $$f_2:=\binom{m+2}{2}(m-1)^2m!=12(m-1)^2\frac{(m+2)!}{24}.$$ So, $$\chi(\mathcal{S}tr(T,m+2))=f_0+f_2-f_1=(3m^2-11m+12)\frac{(m+2)!}{24}.$$ *Example 3.* We now consider small values of $m$. Set $m:=2$, so $T$ is just an edge. The complex $\mathcal{S}tr(T,n)$ is a cubical subdivision of the $(n-2)$-dimensional sphere. $\mathcal{S}tr(T,3)$ is a hexagon. $\mathcal{S}tr(T,4)$ is a rhombic dodecahedron, whose $f$-vector is $(14,24,12)$. In general, the $f$-vector of $\mathcal{S}tr(T,n)$, when $T$ is a single edge, is $(f_0,\dots,f_{n-2})$, where $$f_k=\binom{n}{k}(2^{n-k}-2),\text{ for }k=0,\dots,n-2.$$ The cubical complex $\mathcal{S}tr(T,n)$ can be obtained by starting with an $n$-cube $K$ and then deleting two opposite vertices $a$ and $b$, together with all smaller cubes in $K$ containing $a$ or $b$. The author is not aware whether there exists some established terminology for these complexes, beyond the cases $n=3$ and $n=4$.
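The closed formula of Example 3 can be sanity-checked against the hexagon and the rhombic dodecahedron; the function name below is our own.

```python
from math import comb

def f_vector_edge_tree(n):
    """f-vector (f_0, ..., f_{n-2}) of Str(T, n) for T a single edge,
    via f_k = C(n, k) * (2^(n-k) - 2) from Example 3."""
    return [comb(n, k) * (2**(n - k) - 2) for k in range(n - 1)]
```

For $n=3$ this gives $(6,6)$, the hexagon, and for $n=4$ it gives $(14,24,12)$, the rhombic dodecahedron; in each case the alternating sum of the $f$-vector equals the Euler characteristic of the $(n-2)$-sphere.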
# The topology of the Stirling complexes ## The formulation of the main theorem Somewhat surprisingly, our main theorem implies that the homotopy type of the Stirling complexes $\mathcal{S}tr(T,n)$ only depends on $n$ and on the number of vertices in $T$, not on the actual tree structure. **Theorem 2**. *Assume $T$ is a tree with $m$ vertices, $m\geq 2$, and $n$ is an integer, $n\geq m$. The cubical complex $\mathcal{S}tr(T,n)$ is homotopy equivalent to a wedge of $(n-m)$-dimensional spheres.* *Let $f(m,n)$ denote the number of these spheres. Then $f(m,n)$ is given by the following formula $$\begin{gathered} \label{eq:fmn} f(m,n)=(m-1)^n-\binom{m}{1}(m-2)^n+\binom{m}{2}(m-3)^n+\dots \\ +(-1)^{m-1}\binom{m}{m-3}2^n+(-1)^m\binom{m}{m-2}.\end{gathered}$$* In particular, we have $f(2,n)=1$, confirming our observation that $\mathcal{S}tr(T,n)$ is a sphere in this case. Further values of $f(-,-)$ are $$\begin{aligned} f(3,n) &=2^n-3, \\ f(4,n) &=3^n-4\cdot 2^n+6.\end{aligned}$$ The following table gives the values of $f(m,n)$ for small $m$ and $n$. $$\begin{tabular}{ c|cccccc } $n\setminus m$ & 2 & 3 & 4 & 5 & 6 & \dots \\ [2pt] \hline \\ [-5pt] 2 & 1 & & & & & \\ [2pt] 3 & 1 & 5 & & & &\\ [2pt] 4 & 1 & 13 & 23 & & & \\ [2pt] 5 & 1 & 29 & 121 & 119 & & \\ [2pt] 6 & 1 & 61 & 479 & 1081 & 719 & \\ %[2pt] $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\ddots$ \end{tabular}$$ It is interesting to think about the implications of Theorem 2 for the original problem of resource distribution. Clearly, the fact that $\mathcal{S}tr(T,n)$ is connected, when $n>m$, means that, starting from any distribution one can get to any other one by moving the resources. When $n>m+1$, the space $\mathcal{S}tr(T,n)$ is simply connected. This means that when two distributions are fixed, any two redistribution schemes from the first distribution to the second one are homotopic, i.e., there is a simultaneous redistribution scheme, connecting the two.
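The alternating sum in Theorem 2 is easy to evaluate mechanically; the sketch below (our own helper names) computes $f(m,n)$ and cross-checks it against surjection counts.

```python
from math import comb, factorial

def f(m, n):
    """f(m, n) from Theorem 2, written compactly as
    sum_{j=0}^{m-2} (-1)^j * C(m, j) * (m - 1 - j)^n."""
    return sum((-1)**j * comb(m, j) * (m - 1 - j)**n for j in range(m - 1))

def sf(a, b):
    """Number SF(a, b) of surjections from [b] onto [a]."""
    return sum((-1)**j * comb(a, j) * (a - j)**b for j in range(a + 1))
```

In particular, $f(m,m)=m!-1$, consistent with $\mathcal{S}tr(T,m)$ being $m!$ isolated points, i.e., a wedge of $m!-1$ zero-spheres; and $f(m,n)$ agrees with the alternating sum ${\tt SF}(m-1,n)-{\tt SF}(m-2,n)+\dots$ in the checked range.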
Even higher connectivity of $\mathcal{S}tr(T,n)$ means the presence of these higher-dimensional redistribution schemes. Finally, the fact that the homotopy of $\mathcal{S}tr(T,n)$ is not trivial in the top dimension means that in this dimension there is a number of fundamentally different higher-dimensional redistribution schemes. The number $f(m,n)$ tells us, in a certain sense, just how many of these different schemes there are. Let us make a few comments on the numerical side. First, by the Euler-Poincaré formula, [\[eq:chistc\]](#eq:chistc){reference-type="eqref" reference="eq:chistc"} could be used instead of [\[eq:fmn\]](#eq:fmn){reference-type="eqref" reference="eq:fmn"}, although the latter is clearly simpler. Second, let ${\tt{SF}}(m,n)$ denote the number of surjective functions from $[n]$ to $[m]$. We have ${\tt{SF}}(m,n)=m!\genfrac{\{}{\}}{0pt}1{n}{m}$. We can then rewrite [\[eq:fmn\]](#eq:fmn){reference-type="eqref" reference="eq:fmn"} as follows. **Proposition 3**. *For all $n\geq m\geq 2$, we have $$\label{eq:chistar} f(m,n)={\tt{SF}}(m-1,n)-{\tt{SF}}(m-2,n)+\dots+(-1)^m{\tt{SF}}(1,n).$$* As a simple corollary of the principle of inclusion-exclusion we have the following well-known formula $$\label{eq:sf} {\tt{SF}}(a,b)=a^b-\binom{a}{a-1}(a-1)^b+\dots+(-1)^{a-1}\binom{a}{1}.$$ Substituting the right hand side of [\[eq:sf\]](#eq:sf){reference-type="eqref" reference="eq:sf"} into [\[eq:chistar\]](#eq:chistar){reference-type="eqref" reference="eq:chistar"}, and using the Pascal triangle addition rule for the binomial coefficients, shows that it is equivalent to [\[eq:fmn\]](#eq:fmn){reference-type="eqref" reference="eq:fmn"}. ◻ Finally, for future reference, we record the following fact. **Proposition 4**. *We have $$\sum_{\alpha=1}^{m-1}(-1)^{m+\alpha+1}\binom{m}{\alpha+1}\alpha^m=m!-1.$$* This follows from the following well-known polynomial identity $$m!=\sum_{k=0}^m\binom{m}{k}(-1)^k(x-k)^m,$$ where $x$ is a variable, by substituting $x:=m+1$. ◻ In particular, Proposition 4 shows that Theorem 2 holds for $m=n$. ## Relaxing the occupancy requirement Our proof of Theorem 2 proceeds by induction. As it often happens in such situations, it is opportune to deal with a more general class of complexes. In our case, we relax the requirement that each node must have at least one allocated resource. **Definition 5**.
*For any cell $c$ of $T^n$, we define $$\text{\rm supp}\,c=\{v\in V(T)\,|\,\exists k\in[n],\textrm{ such that }c_k=v\}\subseteq V(T).$$* *Let $S$ be an arbitrary subset of the vertex set of $T$. We define $\mathcal{S}tr(T,S,n)$ to be the subcomplex of $T^n$, consisting of all cells $c$ whose support contains $S$.* Note that whenever $b,c\in T^n$ are cubes, such that $b\subseteq c$, we have $\text{\rm supp}\,c\subseteq\text{\rm supp}\,b$. In other words, the support of a cell $c$ either stays the same or increases when taking the boundary. This implies that the cubical complex $\mathcal{S}tr(T,S,n)$ is well-defined. Extreme values for $S$ give us two special cases: - for $S=V(T)$, we have $\mathcal{S}tr(T,S,n)=\mathcal{S}tr(T,n)$; - for $S=\emptyset$, we have $\mathcal{S}tr(T,S,n)=T^n$, which is contractible as a topological space. Rather than attacking Theorem 2 directly, we shall prove the following, more general result. **Theorem 6**. *The complex $\mathcal{S}tr(T,S,n)$ is homotopy equivalent to a wedge of $(n-|S|)$-dimensional spheres. The number of spheres is $f(|S|,n)$.* Clearly, Theorem 2 is the special case of Theorem 6 where $S=V(T)$. # Homotopy colimits ## The diagrams of topological spaces Our strategy to prove Theorem 6 is to decompose the spaces $\mathcal{S}tr(T,S,n)$ into simpler pieces and then to manipulate this decomposition, while preserving the homotopy type of the total space. Although there are different ways to formulate our argument, we find it handy to phrase it using the language of homotopy colimits. Let us introduce the corresponding terminology, see also [@BoK; @Hat; @V]. We assume that the reader is familiar with basic category theory, [@ML; @Mi]. Recall that, given a poset $P$, we can always view $P$ as a category so that - the objects of that category are precisely the elements of $P$; - for any two elements $p,q\in P$, such that $p\geq q$, there exists a *unique* morphism from $p$ to $q$.
The composition rule in this category is clearly uniquely defined, since there is at most one morphism between any two objects. Recall that $\mathbf{Top}$ denotes the category of topological spaces and continuous maps. **Definition 7**. *Assume we are given a poset $P$, and we view it as a category. A functor from $P$ to $\mathbf{Top}$ is called a **diagram of topological spaces over $P$.*** Specifically, a diagram ${\mathcal D}$ is a collection of topological spaces ${\mathcal D}(p)$, where $p\in P$, together with continuous maps ${\mathcal D}_{p,q}:{\mathcal D}(p)\rightarrow{\mathcal D}(q)$, where $p>q$. These maps are subject to the condition ${\mathcal D}_{q,r}\circ{\mathcal D}_{p,q}={\mathcal D}_{p,r}$, whenever $p>q>r$. ## Homotopy colimits of diagrams over $P^T$ Let $T$ be an arbitrary tree with $m$ vertices, where $m\geq 2$. We assume that the vertices are indexed by the set $[m]=\{1,\dots,m\}$. A poset $P^T$ is defined as follows: - the elements of $P^T$ are indexed by the vertices and the edges of $T$; - the partial order on $P^T$ is given by saying that each edge is larger than its adjacent vertices. This poset has $2m-1$ elements. The elements indexed by the vertices are minimal, while the elements indexed by the edges are maximal, and each one is larger than exactly $2$ minimal elements. A diagram ${\mathcal D}$ of topological spaces over $P^T$ is then given by the following data, subject to no further conditions: - spaces ${\mathcal D}(v)$ for all vertices of $T$; - spaces ${\mathcal D}(e)$ for all edges of $T$; - continuous maps ${\mathcal D}_{e,v}:{\mathcal D}(e)\rightarrow{\mathcal D}(v)$, whenever $v$ is a vertex adjacent to the edge $e$. **Definition 8**. *Assume ${\mathcal D}$ is a diagram of topological spaces over a poset $P^T$.
We define the **homotopy colimit** of ${\mathcal D}$, denoted ${\tt hocolim}\,{\mathcal D}$, as the quotient space $${\tt hocolim}\,{\mathcal D}=\left(\coprod_{v\in V(T)} {\mathcal D}(v) \coprod_{e\in E(T)}({\mathcal D}(e)\times[0,1])\right)/\sim,$$ where the equivalence relation $\sim$ is generated by $(x,0)\sim {\mathcal D}_{e,v}(x)$, and $(x,1)\sim {\mathcal D}_{e,w}(x)$, whenever $x\in{\mathcal D}(e)$, $e=(v,w)$, $v<w$.* Let us mention that the notion of homotopy colimit can be defined more generally, including homotopy colimits of diagrams of topological spaces over arbitrary posets. Here, we restrict ourselves to Definition 8, which will be sufficient for our purposes. ## Homotopy independence of the homotopy colimits of diagrams of CW complexes over $P^T$ From now on, we assume that the spaces ${\mathcal D}(p)$ are CW complexes, for all $p\in P^T$, and the maps ${\mathcal D}_{e,v}$ are cellular. The next proposition says that changing these maps up to homotopy does not change the homotopy type of the homotopy colimit. **Proposition 9**. *Assume ${\mathcal D}$ and ${\mathcal E}$ are diagrams of CW complexes over $P^T$, such that* 1. *${\mathcal D}(p)={\mathcal E}(p)$, for all $p\in P^T$;* 2. *the maps ${\mathcal D}_{e,v}$ and ${\mathcal E}_{e,v}$ are homotopic, whenever $e$ is an edge of $T$, and $v$ is a vertex adjacent to $e$.* *Then ${\tt hocolim}\,{\mathcal D}$ and ${\tt hocolim}\,{\mathcal E}$ are homotopy equivalent.* Since $T$ is finite, it is enough to consider the case where ${\mathcal D}_{e,v}$ and ${\mathcal E}_{e,v}$ coincide for all but one single instance of an edge $e$ and a vertex $v$. Decompose the tree $T$ into a union of trees $T'$ and $T''$, such that the intersection of $T'$ and $T''$ is vertex $v$, $v$ is a leaf of $T'$, and $T'$ contains the edge $e$, see . Let ${\mathcal D}'$ be the diagram of CW complexes on $P^{T'}$, which is a restriction of ${\mathcal D}$ with a slight change at $v$.
Specifically, it is defined as follows: - for any vertex $w\in V(T')$, we have ${\mathcal D}'(w):=\begin{cases} {\mathcal D}(w),&\text{ if } w\neq v; \\ {\mathcal D}(e),&\text{ otherwise.} \end{cases}$ - ${\mathcal D}'(r)={\mathcal D}(r)$, for all $r\in E(T')$; - for any edge $r\in E(T')$ and an adjacent vertex $w$, we have $${\mathcal D}'_{r,w}=\begin{cases} {\mathcal D}_{r,w},&\text{ if } (r,w)\neq (e,v);\\ \textrm{id}_{{\mathcal D}(e)},&\text{ otherwise.} \end{cases}$$ Let ${\mathcal D}''$ be the restriction of ${\mathcal D}$ to $P^{T''}$. Set $X:={\tt hocolim}\,{\mathcal D}'$, $Y:={\tt hocolim}\,{\mathcal D}''$, $A:={\mathcal D}'(v)={\mathcal D}(e)$. Note that $X$ and $Y$ are CW complexes, and $A$ is a CW subcomplex of $X$. Set $f:={\mathcal D}_{e,v}$ and $g:={\mathcal E}_{e,v}$. Clearly, ${\tt hocolim}\,{\mathcal D}$ is obtained from $Y$ by attaching $X$ over $f$, whereas ${\tt hocolim}\,{\mathcal E}$ is obtained from $Y$ by attaching $X$ over $g$. We assumed that $f$ is homotopic to $g$. It is then a general fact, see e.g. [@Hat], that the homotopy type of the adjunction space does not change, when the attachment map is replaced by a homotopic one. This implies that ${\tt hocolim}\,{\mathcal D}$ and ${\tt hocolim}\,{\mathcal E}$ are homotopy equivalent. 0◻ ## Special homotopy colimits As above, let $T$ be an arbitrary tree with at least $2$ vertices. Let us fix a nonempty subset $S\subseteq V(T)$. Assume we have a diagram of CW complexes over $P^T$ satisfying the following conditions: - ${\mathcal D}(v)$ are single points, for all $v\in S$; - ${\mathcal D}(e)=X$, for all $e\in E(T)$, and ${\mathcal D}(v)=X$, for any $v\notin S$, where $X$ is some fixed CW complex; - the maps ${\mathcal D}_{e,v}$ are identity maps, for all $v\notin S$. **Proposition 10**. *Under the conditions above, the homotopy colimit of ${\mathcal D}$ is homotopy equivalent to the wedge of $|S|-1$ copies of ${\tt susp}\,X$.* The proof is by induction on the number of vertices of $T$. 
The induction base is when $m=2$. We have two cases. If $|S|=1$, then ${\tt hocolim}\,{\mathcal D}$ is a cone over $X$, hence contractible. If $|S|=2$, then ${\tt hocolim}\,{\mathcal D}$ is obtained by taking a cylinder over $X$ and shrinking each of the end copies of $X$ to a point. This is precisely the suspension space ${\tt susp}\,X$. From now on we can assume $m\geq 3$. We break our argument into the following cases. First, assume that there exists an internal vertex $v\in T$, such that $v\in S$. Let $e_1,\dots,e_k$ be the edges adjacent to $v$, $k\geq 2$. Cutting $T$ at $v$ will decompose $T$ into the trees $T_1,\dots,T_k$, $v$ is a leaf in each of them, and $e_i$ is adjacent to $v$ in $T_i$, for all $i=1,\dots,k$; see . Let ${\mathcal D}_i$ be the restriction of ${\mathcal D}$ to $T_i$, for $i=1,\dots,k$. Each homotopy colimit ${\tt hocolim}\,{\mathcal D}_i$ has a marked point $x_i$ corresponding to the copy of ${\mathcal D}(v)$. The homotopy colimit ${\tt hocolim}\,{\mathcal D}$ is obtained by gluing the homotopy colimits ${\tt hocolim}\,{\mathcal D}_i$ together along these points, for $i=1,\dots,k$. Accordingly, we see that $$\label{eq:gl1} {\tt hocolim}\,{\mathcal D}\cong\vee_{i=1}^k{\tt hocolim}\,{\mathcal D}_i.$$ Set $S_i:=S\cap V(T_i)$. The vertex $v$ is in $S$, so $v\in S_i$, for all $i$. This means that $S\setminus v=\coprod_{i=1}^k(S_i\setminus v)$, and hence $|S|-1=\sum_{i=1}^k(|S_i|-1)$. Since each $T_i$ has fewer vertices than $T$, we know by the induction assumption that ${\tt hocolim}\,{\mathcal D}_i$ is homotopy equivalent to a wedge of $|S_i|-1$ copies of ${\tt susp}\,X$. Accordingly, [\[eq:gl1\]](#eq:gl1){reference-type="eqref" reference="eq:gl1"} implies that ${\tt hocolim}\,{\mathcal D}$ is homotopy equivalent to a wedge of $|S|-1$ copies of ${\tt susp}\,X$. Next, assume that all the vertices in $S$ are leaves of $T$ and that there exists a further leaf $w\notin S$. Assume $w$ is connected to the vertex $u$. Since $m\geq 3$ and all the vertices in $S$ are leaves, we must have $u\notin S$. 
Let $T'$ be the tree obtained from $T$ by deleting $w$ and the adjacent edge. Let ${\mathcal D}'$ be the restriction of ${\mathcal D}$ to $T'$. By the induction assumption ${\tt hocolim}\,{\mathcal D}'$ is homotopy equivalent to a wedge of $|S|-1$ copies of ${\tt susp}\,X$. The space ${\tt hocolim}\,{\mathcal D}$ is obtained from ${\tt hocolim}\,{\mathcal D}'$ by attaching a cylinder with base $X$ at one of its ends. Clearly ${\tt hocolim}\,{\mathcal D}'$ is a strong deformation retract of ${\tt hocolim}\,{\mathcal D}$, so the latter is also homotopy equivalent to a wedge of $|S|-1$ copies of ${\tt susp}\,X$. Finally, assume that $S$ is precisely the set of all leaves of $T$. Since $m\geq 3$, we have at least $3$ leaves. Fix $v\in S$. Say $v$ is connected to $w$ by an edge. We have $w\notin S$. Let $T'$ be the tree obtained from $T$ by deleting $v$, and let ${\mathcal D}'$ be the restriction of ${\mathcal D}$ to $T'$. The topological space ${\tt hocolim}\,{\mathcal D}$ is obtained from ${\tt hocolim}\,{\mathcal D}'$ by attaching a cone over $X={\mathcal D}(w)$. Let $u\in S$ be any other leaf of $T$, $u\neq v$. There is a unique path inside of $T'$ connecting $w$ with $u$. The homotopy colimit of the restriction of ${\mathcal D}$ to that path is a cone with apex at ${\mathcal D}(u)$ and base at ${\mathcal D}(w)$. This cone lies inside ${\tt hocolim}\,{\mathcal D}'$, therefore the inclusion map ${\mathcal D}(w)\hookrightarrow{\tt hocolim}\,{\mathcal D}'$ is null-homotopic. It follows that, up to homotopy equivalence, attaching a cone over ${\mathcal D}(w)$ to ${\tt hocolim}\,{\mathcal D}'$ is the same as wedging ${\tt hocolim}\,{\mathcal D}'$ with ${\tt susp}\,X$. The result now follows by induction. 0◻ Let us now consider slightly more general diagrams. 
These satisfy the same conditions outside of $S$; however, for any $v\in S$, the spaces ${\mathcal D}(v)$ are now arbitrary connected CW complexes, and each ${\mathcal D}_{e,v}$ maps everything to some point in ${\mathcal D}(v)$, whenever $e$ is an adjacent edge. In this case, can be generalized as follows. **Proposition 11**. *Under the conditions above, the homotopy colimit of ${\mathcal D}$ is homotopy equivalent to the wedge $$\vee_{v\in S}{\mathcal D}(v)\vee_\Omega{\tt susp}\,X,$$ where $|\Omega|=|S|-1$.* For each $v\in V(T)$ we select a base point $x_v\in{\mathcal D}(v)$. Since ${\mathcal D}(v)$ is connected, any continuous map $\varphi:Y\rightarrow{\mathcal D}(v)$, mapping everything to a point, is homotopic to a map $\psi:Y\rightarrow{\mathcal D}(v)$ mapping everything to the base point $x_v$. By we can therefore assume that each map ${\mathcal D}_{e,v}$ maps everything to $x_v$, whenever $v\in S$ and $e$ is an adjacent edge, without changing the homotopy type of ${\tt hocolim}\,{\mathcal D}$. Let ${\mathcal D}'$ be the diagram which we obtain from ${\mathcal D}$ by replacing each ${\mathcal D}(v)$ by a point, for $v\in S$. Clearly, the homotopy colimit ${\tt hocolim}\,{\mathcal D}$ is obtained from ${\tt hocolim}\,{\mathcal D}'$ by wedging it with all the ${\mathcal D}(v)$, for $v\in S$. The result follows from . 0◻ # Structural decomposition of Stirling complexes and consequences ## Representing Stirling complexes as homotopy colimits. $\,$ Let us fix $n$ and $S$. We define a diagram ${\mathcal D}$ of topological spaces over $P^T$ as follows: - for each vertex $v\in V(T)$, we set ${\mathcal D}(v)$ to be the subcomplex of $\mathcal{S}tr(T,S,n)$, consisting of all cells $c$, such that $c_n=v$; - for each edge $e\in E(T)$, we set ${\mathcal D}(e)$ to be the Stirling complex $\mathcal{S}tr(T,S,n-1)$; - finally, for each edge $e\in E(T)$, $e=(v,w)$, we define the map ${\mathcal D}_{e,v}:{\mathcal D}(e)\rightarrow{\mathcal D}(v)$ by setting $c_n:=v$. 
**Proposition 12**. *The homotopy colimit ${\tt hocolim}\,{\mathcal D}$ is homeomorphic to the cubical complex $\mathcal{S}tr(T,S,n)$.* Whenever $e=(v,w)$ is an edge of $T$, let $B_e$ denote the subcomplex of $\mathcal{S}tr(T,S,n)$ consisting of all cells $c\in\mathcal{S}tr(T,S,n)$ such that one of the following holds 1. $c_n=e$; 2. $c_n=v$, and there exists $1\leq k\leq n-1$, such that $c_k=v$; 3. $c_n=w$, and there exists $1\leq k\leq n-1$, such that $c_k=w$. It is easy to see that this set of cells is closed under taking the boundary, hence the subcomplex $B_e$ is well-defined. Furthermore, the complex $\mathcal{S}tr(T,S,n)$ is the union of the subcomplexes ${\mathcal D}(v)$, for $v\in V(T)$, and $B_e$, for $e\in E(T)$. To see this just take any cube $(c_1,\dots,c_n)$ and sort it according to the value of $c_n$. Recording the value of $c_n$ separately, we can see that, as a cubical complex, each $B_e$ is isomorphic to the direct product of $\mathcal{S}tr(T,S,n-1)$ with the closed interval $[0,1]$. This can be seen as a cylinder with base $\mathcal{S}tr(T,S,n-1)$. The entire complex $\mathcal{S}tr(T,S,n)$ is obtained by taking the disjoint union of ${\mathcal D}(v)$, for $v\in V(T)$, and connecting them by these cylinders. For each cylinder $B_e$, $e=(v,w)$, its bases are identified with corresponding subcomplexes of ${\mathcal D}(v)$ and ${\mathcal D}(w)$ by assigning $c_n:=v$ or $c_n:=w$. These are precisely the maps ${\mathcal D}_{e,v}$ and ${\mathcal D}_{e,w}$. Comparing this gluing procedure with the definition of ${\tt hocolim}\,{\mathcal D}$ we see that we obtain a homeomorphic space. 0◻ ## The proof of the main theorem We are now ready to show our main result. First, when $|S|=n$, the complex $\mathcal{S}tr(T,S,n)$ is a disjoint union of $n!$ points. This can be viewed as a wedge of $n!-1$ copies of a $0$-dimensional sphere, so the result follows from . Assume from now on that $n\geq |S|+1$. 
By we can replace $\mathcal{S}tr(T,S,n)$ by ${\tt hocolim}\,{\mathcal D}$. Consider now a map ${\mathcal D}_{e,v}:{\mathcal D}(e)\rightarrow{\mathcal D}(v)$. By induction, we know that ${\mathcal D}(e)=\mathcal{S}tr(T,S,n-1)$ is homotopy equivalent to a wedge of spheres of dimension $n-1-|S|$. We make 2 observations. 1. If $v\notin S$, the cubical complex ${\mathcal D}(v)$ is isomorphic to $\mathcal{S}tr(T,S,n-1)$, and the map ${\mathcal D}_{e,v}$ is the identity map. 2. If $v\in S$, the cubical complex ${\mathcal D}(v)$ is isomorphic to $\mathcal{S}tr(T,S\setminus v,n-1)$. This is because we know that $c_n=v$, so there is no need to request that $v$ is occupied by some other resource. By induction assumption, the space $\mathcal{S}tr(T,S\setminus v,n-1)$ is homotopy equivalent to a wedge of spheres of dimension $n-1-(|S|-1)=n-|S|$. In particular, it is $(n-|S|-1)$-connected. Therefore, the map ${\mathcal D}_{e,v}$ is homotopic to a trivial map, which takes everything to a point. We now apply to shift our consideration to the diagram ${\mathcal D}'$, which is obtained from ${\mathcal D}$ by replacing the maps ${\mathcal D}_{e,v}$ by trivial ones, whenever $v\in S$. This diagram has the same homotopy type as ${\mathcal D}$. On the other hand, it now satisfies the conditions of , where the connectivity of the spaces ${\mathcal D}(v)$ is a consequence of the fact that $n\geq|S|+1$. It follows from that proposition that $${\tt hocolim}\,{\mathcal D}\simeq \vee_{v\in S}{\mathcal D}(v)\vee_\Omega{\tt susp}\,\mathcal{S}tr(T,S,n-1),$$ where $|\Omega|=|S|-1$. Counting spheres on both sides, we obtain the recursive formula $$f(|S|,n)=(|S|-1)f(|S|,n-1)+|S|f(|S|-1,n-1).$$ The validity of the formula now follows from . 0◻ **Remark 13**. *After this paper was submitted for publication, a shorter proof of [Theorem 6](#thm:main2){reference-type="ref" reference="thm:main2"} was found by one of the referees. It is included in the appendix.* **Proposition 14**. 
*Let $\Gamma=\{(m,n)\in{\mathbb Z}\times{\mathbb Z}\,|\,n\geq m\geq 2\}$. Assume we have a function $f:\Gamma\rightarrow{\mathbb Z}$, which satisfies the following:* 1. *for all $n>m\geq 3$ we have recursive formula $$\label{eq:rec} f(m,n)=(m-1)f(m,n-1)+m f(m-1,n-1);$$* 2. *we have the boundary conditions $f(2,n)=1$, $f(m,m)=m!-1$.* *Then for all $(m,n)\in\Gamma$, the value $f(m,n)$ is given by , which we rewrite as $$\label{eq:fmn2} f(m,n)=\sum_{\alpha=1}^{m-1}(-1)^{m+\alpha+1}\binom{m}{\alpha+1}\alpha^n.$$* Clearly, the recursive rule together with the boundary conditions defines the values of the function of the function $f(-,-)$ uniquely. Therefore, to show that $f$ is given by the formula we just need to know that this formula satisfies our boundary conditions and the recursion. Substituting $m=2$ into immediately yields $1$ on the right hand side, as there is only one summand, with $\alpha=1$. The case $m=n$ follows from . To show that satisfies the recursion we need to check that $$\begin{gathered} \label{eq:recal} \sum_{\alpha=1}^{m-1}(-1)^{m+\alpha+1}\binom{m}{\alpha+1}\alpha^n=\\ (m-1)\sum_{\alpha=1}^{m-1}(-1)^{m+\alpha+1}\binom{m}{\alpha+1}\alpha^{n-1}+ m\sum_{\alpha=1}^{m-2}(-1)^{m+\alpha}\binom{m-1}{\alpha+1}\alpha^{n-1}.\end{gathered}$$ We do that simply by comparing the coefficients of $\alpha^{n-1}$ on each side of . For $\alpha=m-1$, the coefficient on each side is $m-1$. For $\alpha=1,\dots,m-2$, we need to show that $$(-1)^{m+\alpha+1}\binom{m}{\alpha+1}\alpha=(m-1)(-1)^{m+\alpha+1}\binom{m}{\alpha+1} +m(-1)^{m+\alpha}\binom{m-1}{\alpha+1}.$$ This follows from the formula $$(m-\alpha-1)\binom{m}{\alpha+1}=m\binom{m-1}{\alpha+1}. \qed$$ We finish with an open question. **Open Question 1**. *Let $T$ be a tree with a single internal vertex of valency $r$, where $r\geq 2$, and let $n$ be any integer, $n\geq r+1$. The symmetric group ${\mathcal S}_r$ acts on $T$ by permuting its $r$ leaves. 
This induces an ${\mathcal S}_r$-action on the Stirling complex $\mathcal{S}tr(T,n)$, and hence also an ${\mathcal S}_r$-action on $H_{n-r-1}(\mathcal{S}tr(T,n);{\mathbb R})$. It would be interesting to decompose this representation of ${\mathcal S}_r$ into irreducible ones.* # Appendix: Stirling Complexes via the Wedge Lemma {#appendix-stirling-complexes-via-the-wedge-lemma .unnumbered} by [Roy Meshulam [^1]]{.smallcaps} In this appendix we prove a generalization of . Let $X=X_0$ be a finite simplicial complex, and let $S=\{X_i\}_{i \in [m]}$ be a family of subcomplexes of $X$. For $n \geq m$ let $$A_{m,n}=\left\{ (i_1,\ldots,i_n) \in (\{0\}\cup [m])^n : \{i_1,\ldots,i_n\} \supset [m] \right\}.$$ Slightly extending the setup considered in [Definition 5](#df:str){reference-type="ref" reference="df:str"}, we define the *Stirling Complex* associated with the triple $(X,S,n)$ by $$\mathcal{S}tr(X,S,n)= \bigcup_{(i_1,\ldots,i_n) \in A_{m,n}} X_{i_1} \times \cdots \times X_{i_{n}}.$$ Let ${\mathbb S}^k$ denote the $k$-sphere. asserts that if $T$ is a finite tree and $S$ is a set of $m \geq 2$ distinct vertices of $T$, then $$\mathcal{S}tr(T,S,n)\simeq \bigvee_{i=1}^{f(m,n)} {\mathbb S}^{n-m}.$$ Here we give a simple proof of a generalization of that perhaps clarifies why the homotopy type of $\mathcal{S}tr(T,S,n)$ does not depend on the structure of $T$. **Theorem 15**. *Let $X$ be a finite contractible complex and let $S=\{X_i\}_{i=1}^m$ be a family of $m \geq 2$ pairwise disjoint contractible subcomplexes of $X$. Then $$\mathcal{S}tr(X,S,n)\simeq \bigvee_{i=1}^{f(m,n)} {\mathbb S}^{n-m}.$$* The main tool in the proof of Theorem [Theorem 15](#t:app){reference-type="ref" reference="t:app"} is the Wedge Lemma of Ziegler and Živaljević (Lemma 1.8 in [@ZZ]). The version below appears in [@HRW]. For a poset $(P,\prec)$ and $p \in P$ let $P_{\prec p}=\{q \in P: q \prec p\}$. Let $\Delta(P)$ denote the order complex of $P$. 
Let $Y$ be a regular CW-complex and let $\{Y_i\}_{i=1}^m$ be subcomplexes of $Y$ such that $\bigcup_{i=1}^m Y_i=Y$. Let $(P,\prec)$ be the poset whose elements index all distinct partial intersections $\bigcap_{j \in J} Y_j$, where $\emptyset \neq J \subset [m]$. Let $U_p$ denote the partial intersection indexed by $p \in P$, and let $\prec$ denote reverse inclusion, i.e. $p \prec q$ if $U_q \subsetneqq U_p$. **Wedge Lemma [@ZZ; @HRW].** Suppose that for any $p \in P$ there exists a $c_p \in U_p$ such that the inclusion $\bigcup_{q \succ p} U_q \hookrightarrow U_p$ is homotopic to the constant map to $c_p$. Then $$\label{e:wlemma} Y \simeq \bigvee_{p \in P} \Delta(P_{\prec p})*U_p.$$ **Proof of Theorem [Theorem 15](#t:app){reference-type="ref" reference="t:app"}.** If $m=n$ then $\mathcal{S}tr(X,S,n)$ is a union of $m!$ disjoint contractible sets and hence homotopy equivalent to $\bigvee_{i=1}^{m!-1} {\mathbb S}^0$. Suppose $n>m \geq 2$. In view of the recursion ([\[eq:rec\]](#eq:rec){reference-type="ref" reference="eq:rec"}), it suffices, as in the proof of , to establish the following homotopy decomposition: $$\label{e:homdec} \mathcal{S}tr(X,S,n) \simeq \bigvee_{i=1}^m \mathcal{S}tr(X,S \setminus\{ X_i \},n-1) \vee \bigvee_{i=1}^{m-1} {\mathbb S}^0 * \mathcal{S}tr(X,S,n-1) .$$ We proceed with the proof of ([\[e:homdec\]](#e:homdec){reference-type="ref" reference="e:homdec"}). For $1 \leq i \leq m$ let $$Y_i=\big(X_i \times \mathcal{S}tr(X,S \setminus \{X_i\},n-1)\big) \cup \big(X \times \mathcal{S}tr(X,S,n-1) \big).$$ Then $\bigcup_{i=1}^m Y_i=\mathcal{S}tr(X,S,n)$. Next note that $$\label{e:xsubs} \mathcal{S}tr(X,S,n-1) \subset \mathcal{S}tr(X,S \setminus \{X_i\},n-1).$$ As $X_i \subset X$ are both contractible, it follows that $X_i$ is a deformation retract of $X$. Together with ([\[e:xsubs\]](#e:xsubs){reference-type="ref" reference="e:xsubs"}) it follows that $X_i \times \mathcal{S}tr(X,S \setminus \{X_i\},n-1)$ is a deformation retract of $Y_i$. 
Therefore $$\label{e:yv} Y_i \simeq \mathcal{S}tr(X,S \setminus \{X_i\},n-1).$$ Let $Z=X \times \mathcal{S}tr(X,S,n-1)$. Then for any $1 \leq i \neq j \leq m$ $$\label{e:capyv} Y_{i} \cap Y_{j}= \bigcap_{k=1}^m Y_k = Z \simeq \mathcal{S}tr(X,S,n-1).$$ Eq. ([\[e:capyv\]](#e:capyv){reference-type="ref" reference="e:capyv"}) implies that the intersection poset $(P,\prec)$ of the cover $\{Y_i\}_{i=1}^m$ is $P=[m] \cup \{\widehat{1}\}$, where $i \in [m]$ represents $Y_i$, $\widehat{1}$ represents $Z$, $[m]$ is an antichain and $i \prec \widehat{1}$ for all $i \in [m]$. Note that $\Delta(P_{\prec i})=\emptyset$ for all $i \in [m]$ and $\Delta(P_{\prec \widehat{1}})$ is the discrete space $[m]$. By induction, $Y_i$ is homotopy equivalent to a wedge of $(n-m)$-spheres and $Z$ is homotopy equivalent to a wedge of $(n-m-1)$-spheres. Hence the inclusion $Z \hookrightarrow Y_i$ is null homotopic. Using the Wedge Lemma together with ([\[e:yv\]](#e:yv){reference-type="ref" reference="e:yv"}) and ([\[e:capyv\]](#e:capyv){reference-type="ref" reference="e:capyv"}), it follows that $$\label{e:wedgel} \begin{split} \mathcal{S}tr(X,S,n) &\simeq \left( \bigvee_{i \in [m]} \Delta(P_{\prec i}) * Y_i\right) \vee \left( \Delta(P_{\prec \widehat{1}}) * Z\right) =\left(\bigvee_{i \in [m]}Y_i\right) \vee ([m]*Z) \\ &\simeq \bigvee_{i \in [m]} \mathcal{S}tr(X,S \setminus \{X_i\},n-1) \vee \bigvee_{i=1}^{m-1} {\mathbb S}^0 * \mathcal{S}tr(X,S,n-1). \end{split}$$ This completes the proof of ([\[e:homdec\]](#e:homdec){reference-type="ref" reference="e:homdec"}) and hence of Theorem [Theorem 15](#t:app){reference-type="ref" reference="t:app"}. 0◻ E. Babson, D.N. Kozlov, *Complexes of graph homomorphisms*, Israel J. Math. **152** (2006), 285--312. E. Babson, D.N. Kozlov, *Proof of the Lovász Conjecture*, Annals of Math. (2) **165** (2007), no. 3, 965--1007. A.K. Bousfield, D.M. Kan, *Homotopy Limits, Completions and Localizations*, Springer Lect. Notes Math.  
**304**, Berlin-Heidelberg-New York 1972. G. Carlsson, *Topology and data*, Bull. Amer. Math. Soc. (N.S.) **46** (2009), no. 2, 255--308. A.T. Fomenko, D.B. Fuks, V.L. Gutenmacher, *Homotopic topology*, Translated from the Russian by K. Mályusz. Akadémiai Kiadó (Publishing House of the Hungarian Academy of Sciences), Budapest, 1986. W. Fulton, *Algebraic topology*, Graduate Texts in Mathematics **153**, Springer-Verlag, New York, 1995. xviii+430 pp. R.L. Graham, D.E. Knuth, O. Patashnik, *Concrete Mathematics: A Foundation for Computer Science,* 2nd ed. Reading, MA: Addison-Wesley, pp. 257--267, 1994. M.J. Greenberg, J.R. Harper, *Algebraic Topology*, Mathematics Lecture Note Series **58**, Benjamin/Cummings Publishing Co., Inc., Advanced Book Program, Reading, Mass., 1981. xi+311 pp. F. Harary, *Graph Theory*, Addison-Wesley Series in Mathematics, Reading, MA, 1969. J. Herzog, V. Reiner and V. Welker, *The Koszul property in affine semigroup rings*, Pacific J. Math. **186** (1998), pp. 39--65. A. Hatcher, *Algebraic topology*, Cambridge University Press, Cambridge, 2002. M. Herlihy, D.N. Kozlov, S. Rajsbaum, *Distributed computing through combinatorial topology*, Elsevier/Morgan Kaufmann, Waltham, MA, 2014. xiv+319 pp. D.E. Knuth, *The Art of Computer Programming, Vol. 1: Fundamental Algorithms,* 3rd ed. Reading, MA: Addison-Wesley, 1997. D.N. Kozlov, *Chromatic numbers, morphism complexes, and Stiefel-Whitney characteristic classes*, in: *Geometric Combinatorics* (eds. E. Miller, V. Reiner, B. Sturmfels), pp. 249--315, IAS/Park City Mathematics Series **13**, American Mathematical Society, Providence, RI; Institute for Advanced Study (IAS), Princeton, NJ; 2007. D.N. Kozlov, *Combinatorial Algebraic Topology*, Algorithms and Computation in Mathematics **21**, Springer, Berlin, 2008, xx+389 pp. D.N. 
Kozlov, *Organized collapse: an introduction to discrete Morse theory*, Graduate Studies in Mathematics **207**, American Mathematical Society, Providence, RI, 2020, xxiii+312 pp. S. Mac Lane, *Categories for the Working Mathematician*, Second edition, Graduate Texts in Mathematics, **5**, Springer-Verlag, New York, 1998. B. Mitchell, *Theory of categories*, Pure and Applied Mathematics, Vol. XVII, Academic Press, New York-London, 1965. J.R. Munkres, *Elements of algebraic topology*, Addison-Wesley Publishing Company, Menlo Park, CA, 1984, ix+454 pp. J. Stirling, *Methodus differentialis, sive tractatus de summatione et interpolatione serierum infinitarum.* London, 1730. English translation by Holliday, J. The Differential Method: A Treatise of the Summation and Interpolation of Infinite Series. 1749. R.M. Vogt, *Homotopy limits and colimits*, Math. Z. **134** (1973), pp. 11--52. G.M. Ziegler and R. Živaljević, *Homotopy types of subspace arrangements via diagrams of spaces*, Math. Ann. **295** (1993) pp. 527--548. [^1]: Department of Mathematics, Technion, Haifa 32000, Israel. e-mail: meshulam\@technion.ac.il. Supported by ISF grant 686/20.
--- abstract: | Iterative gradient-based optimization algorithms are widely used to solve difficult or large-scale optimization problems. There are many algorithms to choose from, such as gradient descent and its accelerated variants such as Polyak's Heavy Ball method or Nesterov's Fast Gradient method. It has long been observed that iterative algorithms can be viewed as dynamical systems, and more recently, as robust controllers. Here, the "uncertainty" in the dynamics is the gradient of the function being optimized. Therefore, worst-case or average-case performance can be analyzed using tools from robust control theory, such as integral quadratic constraints (IQCs). In this tutorial paper, we show how such an analysis can be carried out using an alternative Lyapunov-based approach. This approach recovers the same performance bounds as with IQCs, but with the added benefit of constructing a Lyapunov function. author: - "Bryan Van Scoy[^1]" - "Laurent Lessard[^2]" bibliography: - lyaplift.bib title: | A Tutorial on a Lyapunov-Based Approach to the\ Analysis of Iterative Optimization Algorithms --- [^3] # Introduction In this paper, we consider unconstrained optimization problems of the form $\min_{x\in \mathbb{R}^d}\, f(x),$ where $f:\mathbb{R}^d\to\mathbb{R}$ is a continuously differentiable function. Iterative gradient-based optimization algorithms attempt to solve such problems by starting from some initial guess $x_0\in\mathbb{R}^d$ and iteratively updating $x_k$ in an effort to converge to a local minimizer $x_\star \in \mathop{\mathrm{\arg\min}}_x\, f(x)$. The simplest such algorithm is *gradient descent* [\[GD\]](#GD){reference-type="eqref" reference="GD"}, which uses the update rule $$\label{GD} x_{k+1} = x_k - \alpha \nabla f(x_k). \tag{GD}$$ Intuitively, each step moves in the direction of the negative gradient (steepest descent direction). Here, $\alpha$ is the *stepsize*, which is a tunable parameter. 
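To make the role of the stepsize concrete, here is a minimal sketch (our own, not from the paper) of running gradient descent on the scalar function $f(x) = x^2$, where $\nabla f(x) = 2x$. The update reduces to $x_{k+1} = (1 - 2\alpha)\,x_k$, so the iteration converges exactly when $0 < \alpha < 1$, larger admissible stepsizes converge faster, and stepsizes beyond the stability limit diverge:

```python
# Gradient descent on f(x) = x^2, where grad f(x) = 2x.
# The update x_{k+1} = x_k - alpha * 2 * x_k = (1 - 2*alpha) * x_k
# converges iff |1 - 2*alpha| < 1, i.e. iff 0 < alpha < 1.
def gd(alpha, x0=1.0, iters=50):
    x = x0
    for _ in range(iters):
        x = x - alpha * 2 * x
    return x

assert abs(gd(0.4)) < 1e-6           # a well-chosen stepsize converges quickly
assert abs(gd(0.01)) > abs(gd(0.1))  # a smaller stepsize converges more slowly
assert abs(gd(1.2)) > 1e6            # too large a stepsize diverges
```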
Generally, a larger stepsize can result in faster convergence, but if the stepsize is too large, the algorithm may fail to converge. Gradient descent may converge slowly when the function is poorly conditioned (when the condition number of the Hessian $\nabla^2 f$ is large). This is due to the fact that the contours of $f$ are elongated so the iterates tend to oscillate in a non-productive manner. One way to alleviate this problem is by using *accelerated algorithms*. For example, Polyak's Heavy Ball [\[HB\]](#HB){reference-type="eqref" reference="HB"} [@polyak_book §3.2.1] uses an additional *momentum term* compared to [\[GD\]](#GD){reference-type="eqref" reference="GD"}: $$\label{HB} x_{k+1} = x_k -\alpha \nabla f(x_k) + \beta (x_k - x_{k-1}). \tag{HB}$$ Alternatively, Nesterov's Fast Gradient [\[FG\]](#FG){reference-type="eqref" reference="FG"} [@nesterov_book §2.2.1] evaluates the gradient at an interpolated point $y_k$: $$\label{FG}\tag{FG} \begin{aligned} x_{k+1} &= y_k -\alpha \nabla f(y_k), \\ y_{k+1} &= x_{k+1} + \beta(x_{k+1} - x_{k}). \end{aligned}$$ In [1](#fig:convergence){reference-type="ref" reference="fig:convergence"}, we compare the convergence of [\[GD\]](#GD){reference-type="eqref" reference="GD"}, [\[HB\]](#HB){reference-type="eqref" reference="HB"}, and [\[FG\]](#FG){reference-type="eqref" reference="FG"} on a simple quadratic function. ![Comparison of different iterative algorithms applied to $f(u,v) = u^2 + 10 v^2$ (contour lines shown) with initial point $(-5,1.5)$. Each algorithm is tuned to have the fastest possible convergence rate. Steps (in brackets) indicate the number of iterations needed to achieve $\lVert{x_k - x_\star}\rVert < 10^{-6}$. Gradient descent (GD) is less efficient than the Heavy Ball (HB) or Fast Gradient (FG) accelerated methods.](convergence){#fig:convergence} The tuning of an algorithm (the choice of $\alpha$ and $\beta$ in the examples above) can have a dramatic effect on the convergence behavior. 
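The comparison in [1](#fig:convergence){reference-type="ref" reference="fig:convergence"} can be reproduced with a short simulation. The sketch below is our own (the tuning formulas used are the rate-optimal ones for this quadratic, with $m = 2$ and $L = 20$); it counts the iterations each method needs to reach $\lVert x_k - x_\star\rVert < 10^{-6}$, where $x_\star$ is the origin:

```python
from math import sqrt

def grad(x):                       # gradient of f(u, v) = u^2 + 10 v^2
    return (2 * x[0], 20 * x[1])

def run(step, x0, tol=1e-6, max_iter=1000):
    """Iterate until ||x_k|| < tol (the minimizer is the origin)."""
    x = xp = x0                    # current and previous iterate
    for k in range(max_iter):
        if sqrt(x[0] ** 2 + x[1] ** 2) < tol:
            return k
        x, xp = step(x, xp), x
    return max_iter

L, m = 20.0, 2.0                   # extreme Hessian eigenvalues, kappa = 10
a_gd = 2 / (L + m)
a_hb = 4 / (sqrt(L) + sqrt(m)) ** 2
b_hb = ((sqrt(L / m) - 1) / (sqrt(L / m) + 1)) ** 2
a_fg = 4 / (3 * L + m)
b_fg = (sqrt(3 * L / m + 1) - 2) / (sqrt(3 * L / m + 1) + 2)

def gd(x, xp):
    g = grad(x)
    return (x[0] - a_gd * g[0], x[1] - a_gd * g[1])

def hb(x, xp):
    g = grad(x)
    return (x[0] - a_hb * g[0] + b_hb * (x[0] - xp[0]),
            x[1] - a_hb * g[1] + b_hb * (x[1] - xp[1]))

def fg(x, xp):
    y = (x[0] + b_fg * (x[0] - xp[0]), x[1] + b_fg * (x[1] - xp[1]))
    g = grad(y)
    return (y[0] - a_fg * g[0], y[1] - a_fg * g[1])

x0 = (-5.0, 1.5)
steps = {alg.__name__: run(alg, x0) for alg in (gd, hb, fg)}
print(steps)                       # GD needs noticeably more steps than HB or FG
```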
However, it is typically not feasible to find the optimal tuning, as this would depend on $f$, which presumably is a function complicated enough to warrant being minimized numerically. Instead, we seek performance guarantees over a *class of functions*. For example, we may know a priori that $f$ is convex, or that it possesses a certain structural property, such as being quadratic. In the optimization literature, this sort of worst-case analysis is known as *algorithm analysis*. ## The quadratic case When $f$ is a quadratic function, as in least squares problems and as illustrated in [1](#fig:convergence){reference-type="ref" reference="fig:convergence"}, we can explicitly parameterize $f$, which makes the algorithm analysis straightforward. Specifically, $\nabla f(x) = Qx$ for some symmetric matrix $Q$. The dynamical systems for GD, HB, FG are therefore linear, and the convergence rate can be established via eigenvalue analysis. This leads to the following result. **Proposition 1**. Suppose $f:\mathbb{R}^d\to\mathbb{R}$ is quadratic and satisfies $0 \prec m I_d \preceq \nabla^2 f(x) \preceq L I_d$. 
The smallest $\rho$ such that the iterates of the algorithms GD, HB, FG applied to $f$ satisfy $$\lVert{x_k - x_\star}\rVert \leq C \rho^k \lVert{x_0-x_\star}\rVert \quad\text{for some $C > 0$}$$ is given by the following tunings (where $\kappa \colonequals L/m$):

| Method | Optimal tuning | Rate bound |
|---|---|---|
| [\[GD\]](#GD){reference-type="eqref" reference="GD"} | $\alpha = \frac{2}{L+m}$ | $\rho = \frac{\kappa-1}{\kappa+1}$ |
| [\[HB\]](#HB){reference-type="eqref" reference="HB"} | $\alpha = \frac{4}{(\sqrt L + \sqrt m)^2},\, \beta = \bigl(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\bigr)^2$ | $\rho = \frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}$ |
| [\[FG\]](#FG){reference-type="eqref" reference="FG"} | $\alpha = \frac{4}{3L+m},\, \beta = \frac{\sqrt{3\kappa+1}-2}{\sqrt{3\kappa+1}+2}$ | $\rho = \frac{\sqrt{3\kappa+1}-2}{\sqrt{3\kappa+1}}$ |

The tunings from [Proposition 1](#prop:quadratic){reference-type="ref" reference="prop:quadratic"} are the same as those used to generate the iterates shown in [1](#fig:convergence){reference-type="ref" reference="fig:convergence"}. ## The smooth and strongly convex case {#sec:FmL} Another popular function class is the set of *smooth and strongly convex functions*, which we denote $\mathcal{F}_{m,L}$. These are continuously differentiable functions $f$ that satisfy: (i) $L$-Lipschitz gradients: $\lVert{\nabla f(x)\!-\!\nabla f(y)}\rVert \!\leq\! L \lVert{x\!-\!y}\rVert$ for all $x,y\in\mathbb{R}^d$. (ii) $m$-strong convexity: $f(x) - \frac{m}{2}\lVert{x}\rVert^2$ is convex. Here, we assume $m$ and $L$ can be estimated or are otherwise known a priori. This class of functions includes the quadratic functions from [Proposition 1](#prop:quadratic){reference-type="ref" reference="prop:quadratic"}, but also includes non-quadratic functions. 
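For a quadratic $f$, the error dynamics decouple along the eigenvectors of the Hessian, so the worst-case rate is the largest spectral radius of a small per-eigenvalue iteration matrix as the curvature $\lambda$ ranges over $[m, L]$. The sketch below is our own numerical check of the three rate bounds above ($m = 1$ and $L = 10$ are illustrative choices):

```python
from cmath import sqrt as csqrt
from math import sqrt, isclose

def specrad(t, d):
    """Largest root magnitude of z^2 - t*z + d (a 2x2 matrix with trace t, det d)."""
    disc = csqrt(t * t - 4 * d)
    return max(abs((t + disc) / 2), abs((t - disc) / 2))

m, L = 1.0, 10.0
kap = L / m
grid = [m + (L - m) * i / 2000 for i in range(2001)]   # curvatures in [m, L]

# GD: scalar contraction factor |1 - alpha*lam| per curvature
a = 2 / (L + m)
rho_gd = max(abs(1 - a * lam) for lam in grid)
assert isclose(rho_gd, (kap - 1) / (kap + 1), rel_tol=1e-9)

# HB: 2x2 matrix [[1 + b - a*lam, -b], [1, 0]] per curvature
a = 4 / (sqrt(L) + sqrt(m)) ** 2
b = ((sqrt(kap) - 1) / (sqrt(kap) + 1)) ** 2
rho_hb = max(specrad(1 + b - a * lam, b) for lam in grid)
assert isclose(rho_hb, (sqrt(kap) - 1) / (sqrt(kap) + 1), rel_tol=1e-6)

# FG: 2x2 matrix [[(1 + b)(1 - a*lam), -b*(1 - a*lam)], [1, 0]] per curvature
a = 4 / (3 * L + m)
b = (sqrt(3 * kap + 1) - 2) / (sqrt(3 * kap + 1) + 2)
rho_fg = max(specrad((1 + b) * (1 - a * lam), b * (1 - a * lam)) for lam in grid)
assert isclose(rho_fg, (sqrt(3 * kap + 1) - 2) / sqrt(3 * kap + 1), rel_tol=1e-6)
```

The grid maxima match the closed-form rate bounds; note also the ordering $\rho_{\textnormal{HB}} < \rho_{\textnormal{FG}} < \rho_{\textnormal{GD}}$, consistent with the figure.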
Functions of this type occur for example in regularized logistic regression or support vector machines with a smoothed hinge loss [@statistical_learning_book]. The class of smooth strongly convex functions cannot easily be parameterized, and the closed-loop dynamics of algorithms such as GD, HB, FG are generally nonlinear. Therefore, performing algorithm analysis requires a different approach from the quadratic case. Classical approaches typically involve clever manipulation of the inequalities that characterize Lipschitz gradients or strong convexity. One such approach is Nesterov's *estimating sequences* [@nesterov_book §2.2.1], which yield the following result. **Proposition 2**. Suppose $f:\mathbb{R}^d\to\mathbb{R}$ is smooth and strongly convex with parameters $0<m\leq L$. The method [\[FG\]](#FG){reference-type="eqref" reference="FG"} with parameters $\alpha = \frac{1}{L}$ and $\beta = \frac{\sqrt{L}-\sqrt{m}}{\sqrt{L}+\sqrt{m}}$ achieves $\lVert{x_k - x_\star}\rVert \leq C \rho^k \lVert{x_0-x_\star}\rVert$ for some $C>0$, with $\rho = \sqrt{1-\sqrt{\tfrac{m}{L}}}$. ## Automated algorithm analysis In recent years, there has been an effort to systematize, unify, and automate the process of algorithm analysis. Drori and Teboulle [@drori-teboulle] pioneered the idea that the worst-case performance of an algorithm (after a fixed number of iterations) could be characterized by a semidefinite program (SDP) whose size scaled with the number of iterations. It was later shown that this SDP could be convexified, leading to the so-called *Performance Estimation Problem* (PEP) formulation [@taylor2017smooth]. Another popular approach is to express iterative methods as discretizations of gradient flows [@su2014differential]. This view led to the unification of various accelerated algorithms, including [\[FG\]](#FG){reference-type="eqref" reference="FG"}, and the derivation of an associated Lyapunov-based proof of convergence [@wilson2021lyapunov]. 
Yet another approach is to look for an asymptotic performance guarantee by leveraging tools from robust control [@lessard16]. Here, the idea is to view the algorithm as a Lur'e problem [@lure_postnikov] (a linear time-invariant system in feedback with a static nonlinearity) and to adapt the integral quadratic constraint (IQC) approach [@iqc] to derive performance bounds. This approach also requires solving a convex SDP, but unlike PEP, its size is small and fixed (does not depend on the number of iterations). ## Tutorial overview In this tutorial, we will present an alternative Lyapunov-based approach to algorithm analysis, described in more detail in [@optalg; @dissalg; @van2022absolute], that shares features in common with each of the aforementioned works. In the sections that follow, we will: 1) frame algorithm analysis as a Lur'e problem (as in the IQC approach); 2) use interpolation conditions to describe the set of smooth strongly convex functions (as in the PEP approach); 3) describe the lifting procedure we will use; 4) show how Lyapunov functions can be used to certify performance via solving convex SDPs; and 5) present numerical examples that illustrate the effectiveness of this approach for certifying both convergence rate and robustness to gradient noise. # Iterative algorithms as Lur'e problems To express the iterative algorithm as a Lur'e problem, we begin by defining signals corresponding to the input $y_k$ and output $u_k$ of the gradient $\nabla f$. 
For example, [\[FG\]](#FG){reference-type="eqref" reference="FG"} can be rewritten as $$\begin{aligned} x_{k+1} &= y_k - \alpha u_k, \\ y_{k+1} &= x_{k+1} + \beta (x_{k+1} - x_k), \\ u_k &= \nabla f(y_k).\end{aligned}$$ Now define the augmented state $\xi_k \colonequals(x_k,x_{k-1})$ and put the dynamics into standard state-space form to obtain [\[eq:ss_nest\]]{#eq:ss_nest label="eq:ss_nest"} $$\begin{aligned} \xi_{k+1} &= \begin{bmatrix} (1+\beta)I_d & -\beta I_d \\ I_d & 0_d\end{bmatrix} \xi_k + \begin{bmatrix}-\alpha I_d \\ 0_d\end{bmatrix} u_k, \\ y_k &= \begin{bmatrix} (1+\beta)I_d & -\beta I_d \end{bmatrix} \xi_k, \\ u_k &= \nabla f(y_k).\end{aligned}$$ To avoid the use of Kronecker products, it is convenient to express states, inputs, and outputs as *row vectors*, where there are $d$ columns corresponding to the $d$ dimensions of the original system. This transforms [\[eq:ss_nest\]](#eq:ss_nest){reference-type="eqref" reference="eq:ss_nest"} into [\[eq:ss_nest2\]]{#eq:ss_nest2 label="eq:ss_nest2"} $$\begin{aligned} \xi_{k+1} &= \begin{bmatrix} 1+\beta & -\beta \\ 1& 0\end{bmatrix} \xi_k + \begin{bmatrix}-\alpha \\ 0\end{bmatrix} u_k, \\ y_k &= \begin{bmatrix} 1+\beta & -\beta \end{bmatrix} \xi_k, \\ u_k &= \nabla f(y_k), \end{aligned}$$ where $\xi_k \in \mathbb{R}^{2\times d}$, $u_k \in \mathbb{R}^{1\times d}$, $y_k \in \mathbb{R}^{1\times d}$, and the gradient maps row vectors to row vectors: $\nabla f : \mathbb{R}^{1\times d} \to \mathbb{R}^{1\times d}$. Because of this convention, $\lVert{\cdot}\rVert$ denotes the Frobenius norm, for example, $\lVert{\xi_k}\rVert^2 \colonequals\sum_{i=1}^2\sum_{j=1}^d (\xi_k)_{ij}^2$. We will use this convention from now on. Viewed as a block diagram, [\[eq:ss_nest2\]](#eq:ss_nest2){reference-type="ref" reference="eq:ss_nest2"} is the feedback interconnection of a linear time-invariant system $G$ with the static nonlinearity $\nabla f$. Here, $G$ is a single-input single-output (SISO) system that depends on the algorithm, and it is understood that $G$ acts separately on each of the $d$ dimensions of the input. 
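As a sanity check (our own, with an arbitrary diagonal quadratic as the test function), the realization [\[eq:ss_nest2\]](#eq:ss_nest2){reference-type="eqref" reference="eq:ss_nest2"} can be simulated alongside the direct FG recursion; with the row-vector convention and initial state $\xi_0 = (x_0, x_0)$ (so that $y_0 = x_0$), the two produce identical iterates.

```python
import numpy as np

m, L = 1.0, 10.0
alpha = 1 / L
beta = (np.sqrt(L) - np.sqrt(m)) / (np.sqrt(L) + np.sqrt(m))

# state-space matrices of [eq:ss_nest2]
A = np.array([[1 + beta, -beta], [1.0, 0.0]])
B = np.array([[-alpha], [0.0]])
C = np.array([[1 + beta, -beta]])

rng = np.random.default_rng(0)
d = 4
Q = np.diag(rng.uniform(m, L, d))
gradf = lambda y: y @ Q                 # gradient as a map on row vectors

x0 = rng.standard_normal((1, d))
xi = np.vstack([x0, x0])                # xi_k = (x_k, x_{k-1}) stacked as rows
x_prev = x = x0.copy()
for _ in range(50):
    # state-space step
    y = C @ xi
    xi = A @ xi + B @ gradf(y)
    # direct FG recursion
    y2 = x + beta * (x - x_prev)
    x_prev, x = x, y2 - alpha * gradf(y2)

assert np.allclose(xi[0], x[0]) and np.allclose(xi[1], x_prev[0])
```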
The system $G$ has different realizations depending on the algorithm: $$\text{GD: } \left[\begin{array}{c|c} 1 & -\alpha \\ \hline 1 & 0 \end{array}\right]\!, \qquad \text{HB: } \left[\begin{array}{cc|c} 1+\beta & -\beta & -\alpha \\ 1 & 0 & 0 \\ \hline 1 & 0 & 0 \end{array}\right]\!, \qquad \text{FG: } \left[\begin{array}{cc|c} 1+\beta & -\beta & -\alpha \\ 1 & 0 & 0 \\ \hline 1+\beta & -\beta & 0 \end{array}\right]\!.$$ The idea of representing iterative algorithms as dynamical systems is not new; see for example [@polyak_book]. It is not restricted to the three algorithms above either; it can be used to represent distributed optimization algorithms [@distralg], operator-splitting methods [@iqcadmm_ICML; @dissalg], and a variety of other numerical algorithms [@bhaya2006control]. # Lifted algorithm dynamics Suppose the algorithm dynamics satisfy the state space equations (again using the row vector convention) [\[eq:ssalg\]]{#eq:ssalg label="eq:ssalg"} $$\begin{aligned} \xi_{k+1} &= A \xi_k + B \left( u_k + w_k \right), \\ y_k &= C \xi_k, \\ u_k &= \nabla f( y_k ). \end{aligned}$$ In contrast with [\[eq:ss_nest2\]](#eq:ss_nest2){reference-type="eqref" reference="eq:ss_nest2"}, we have included gradient noise $w_k$, which will allow us to analyze algorithm sensitivity in the sequel. In order to find the tightest possible performance bounds, we will search for a Lyapunov function that depends on a finite history of past algorithm iterates and function values. 
To this end, define the *lifted* iterates $$\label{eq:lifted_iterates} \bm{y}_k \colonequals\begin{bmatrix}y_k\\y_{k-1}\\\vdots\\y_{k-\ell}\end{bmatrix}\!,\; \bm{u}_k \colonequals\begin{bmatrix}u_k\\u_{k-1}\\\vdots\\u_{k-\ell}\end{bmatrix}\!,\; \bm{f}_k \colonequals\begin{bmatrix}f_k\\f_{k-1}\\\vdots\\f_{k-\ell}\end{bmatrix}\!.$$ Also, define the truncation matrices $Z_+$, $Z$ as follows. $$\label{eq:Z} Z_+ \colonequals\begin{bmatrix}I_\ell & 0_{\ell \times 1}\end{bmatrix} \quad\text{and}\quad Z \colonequals\begin{bmatrix}0_{\ell \times 1} & I_\ell\end{bmatrix}.$$ Multiplying a lifted iterate by $Z$ on the left removes the most recent iterate (time $k$), while using $Z_+$ removes the oldest iterate (time $k-\ell$). Now define the lifted state $$\label{eq:state_aug} \bm{x}_k \colonequals\begin{bmatrix} \xi_k-\xi_\star \\ Z \bm{y}_k \\ Z \bm{u}_k\end{bmatrix} \in \mathbb{R}^{(n+2\ell)\times d},$$ which is the current state $\xi_k$ and the $\ell$ previous inputs $u_{k-1},\ldots,u_{k-\ell}$ and outputs $y_{k-1},\ldots,y_{k-\ell}$ of the original system. The *lifted system* is a system with state $\bm{x}_k$, input $(u_k,w_k)$, and output $(\bm{y}_k,\bm{u}_k)$ that is consistent with the original dynamics [\[eq:ssalg\]](#eq:ssalg){reference-type="eqref" reference="eq:ssalg"}. These turn out to be $$\begin{aligned} \bm{x}_{k+1}\! &= \!\underbrace{\addtolength{\arraycolsep}{-3pt} \begin{bmatrix} A & 0 & 0 \\ Z_+ \bm{e}_1 C & Z_+ Z^\mathsf{T}& 0 \\ 0 & 0 & Z_+ Z^\mathsf{T}\end{bmatrix}}_{\bm{A}}\!\!\bm{x}_k + \! \underbrace{\begin{bmatrix}B \\ 0 \\ Z_+ \bm{e}_1\!\end{bmatrix}}_{\bm{B}}\!\! u_k +\! \underbrace{\begin{bmatrix}B \\ 0 \\ 0\end{bmatrix}}_{\bm{H}}\!\! w_k \notag \\ \begin{bmatrix}\bm{y}_k \\ \bm{u}_k\end{bmatrix}\! 
&= \!\underbrace{\begin{bmatrix} \bm{e}_1 C & Z^\mathsf{T}& 0 \\ 0 & 0 & Z^\mathsf{T}\end{bmatrix}}_{\bm{C}} \bm{x}_k + \underbrace{\begin{bmatrix} 0 \\ \bm{e}_1\end{bmatrix}}_{\bm{D}} u_k, \label{eq:sslifted} \end{aligned}$$ where $\bm{e}_1=\begin{bmatrix}1 & 0 & \cdots & 0\end{bmatrix}^\mathsf{T}\in\mathbb{R}^{\ell+1}$. We can recover the iterates of the original system (shifted to the fixed point) by projecting the augmented state and the input as $$\begin{gathered} \tilde \xi_k = \underbrace{\begin{bmatrix}I_n & 0_{n\times(2\ell+1)}\end{bmatrix}}_{\bm{X}} \!\begin{bmatrix}\bm{x}_k \\ u_k\end{bmatrix}\!, \; \tilde y_k = \underbrace{\begin{bmatrix}C & 0_{1\times(2\ell+1)}\end{bmatrix}}_{\bm{Y}} \!\begin{bmatrix}\bm{x}_k \\ u_k\end{bmatrix}\!, \\ \text{and}\quad \tilde u_k = \underbrace{\begin{bmatrix}0_{1\times(n+2\ell)} & 1\end{bmatrix}}_{\bm{U}} \begin{bmatrix}\bm{x}_k \\ u_k\end{bmatrix}. \label{eq:sslifted2}\end{gathered}$$ Note that when $\ell=0$, the lifted system reduces to the original system. That is, [\[eq:sslifted\]](#eq:sslifted){reference-type="eqref" reference="eq:sslifted"} becomes [\[eq:ssalg\]](#eq:ssalg){reference-type="eqref" reference="eq:ssalg"}. # Interpolation Conditions {#sec:interpolation} For the remainder of this paper, we will restrict our attention to smooth strongly convex functions, $f\in\mathcal{F}_{m,L}$. The characterization of $\mathcal{F}_{m,L}$ from [1.2](#sec:FmL){reference-type="ref" reference="sec:FmL"} depends on a continuum of points $x,y\in\mathbb{R}^d$. However, we will be analyzing the algorithm at a discrete set of iterates, so we will instead characterize $\mathcal{F}_{m,L}$ using *interpolation conditions*. These conditions were developed in [@taylor2017smooth] and provide necessary and sufficient conditions under which a given finite set of input-output data can be interpolated by a function $f\in\mathcal{F}_{m,L}$. **Theorem 3** ([@taylor2017smooth Thm. 4]). 
Consider the set $\{(y_k,u_k,f_k)\}$ for $k=1,\dots,N$. The following are equivalent. (i) There exists a function $f\in\mathcal{F}_{m,L}$ satisfying $$f(y_k) = f_k\quad\text{and}\quad \nabla f(y_k) = u_k\quad\text{for }k=1,\dots,N.$$ (ii) The following inequality holds for $i,j \in \{1,\dots,N\}$.[\[interpii\]]{#interpii label="interpii"} $$\begin{gathered} 2(L-m)(f_i-f_j)-mL\lVert{y_i-y_j}\rVert^2\\ +2(y_i-y_j)^\mathsf{T}(mu_i-Lu_j)-\lVert{u_i-u_j}\rVert^2\geq 0. \end{gathered}$$ Although we restrict our attention to smooth strongly convex functions, interpolation conditions can be derived for other function classes as well. See for example [@taylor2017smooth; @bousselmi2023interpolation]. Using our row vector convention, if we suppose that $y_k,u_k\in\mathbb{R}^{1\times d}$, [\[interpii\]](#interpii){reference-type="ref" reference="interpii"} in [Theorem 3](#thm:interp){reference-type="ref" reference="thm:interp"} can be written as $$\label{eq:qij} q_{ij} \colonequals \mathop{\mathrm{\mathrm{tr}}}\begin{bmatrix}y_i\\y_j\\u_i\\u_j\end{bmatrix}^\mathsf{T}\!\!\!H \begin{bmatrix}y_i\\y_j\\u_i\\u_j\end{bmatrix} + h^\mathsf{T}\begin{bmatrix}f_i \\ f_j\end{bmatrix} \geq 0,\quad\text{where}$$ $$H \colonequals \addtolength{\arraycolsep}{-3pt} \begin{bmatrix} -mL & mL & m & -L\\ mL & -mL & -m & L\\ m & -m & -1 & 1\\ -L & L & 1 & -1\end{bmatrix} \;\text{and}\;\; h \colonequals 2(L\!-\!m)\!\begin{bmatrix}1 \\ -1\end{bmatrix}\!.$$ Consider an algorithm in lifted form [\[eq:sslifted\]](#eq:sslifted){reference-type="eqref" reference="eq:sslifted"}. We are interested in writing down as many valid inequalities as we can that relate its iterates. To this end, we will consider nonnegative linear combinations of the inequalities [\[eq:qij\]](#eq:qij){reference-type="eqref" reference="eq:qij"}. The computation is presented in the following corollary. **Corollary 4**. 
Consider a function $f\in \mathcal{F}_{m,L}$, and let $y_\star\in\mathbb{R}^{1\times d}$ denote its optimizer, $u_\star=0\in\mathbb{R}^{1\times d}$ the optimal gradient, and $f_\star\in\mathbb{R}$ the optimal function value. Let $y_k,\dots,y_{k-\ell} \in \mathbb{R}^{1\times d}$ be a sequence of iterates, and define $u_{k-i} \colonequals\nabla f(y_{k-i})$ and $f_{k-i}\colonequals f(y_{k-i})$ for $i=0,\dots,\ell$. Using these values, define the augmented vectors $\bm{y}_k$, $\bm{u}_k$, $\bm{f}_k$ as in [\[eq:lifted_iterates\]](#eq:lifted_iterates){reference-type="eqref" reference="eq:lifted_iterates"}. Finally, define the index set $\mathcal{I} \colonequals\{1,2,\ldots,\ell+1,\star\}$ and let $\bm{e}_i$ denote the $i\textsuperscript{th}$ unit vector in $\mathbb{R}^{\ell+1}$ with $\bm{e}_\star \colonequals 0\in\mathbb{R}^{\ell+1}$. Then the inequality $$\label{eq:cvx_ineq} \mathop{\mathrm{\mathrm{tr}}}\begin{bmatrix}\bm{y}_k \\ \bm{u}_k\end{bmatrix}^\mathsf{T}\Pi(\Lambda) \begin{bmatrix}\bm{y}_k \\ \bm{u}_k\end{bmatrix} + \pi(\Lambda)^\mathsf{T}\bm{f}_k \ge 0$$ holds for all $\Lambda\in\mathbb{R}^{(\ell+2)\times(\ell+2)}$ such that $\Lambda_{ij}\geq 0$, where $\Pi(\Lambda)$ and $\pi(\Lambda)$ are defined in [\[eq:multipliers\]](#eq:multipliers){reference-type="ref" reference="eq:multipliers"}. 
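Corollary 4 can be spot-checked numerically. The sketch below (our own construction, with an arbitrarily chosen quadratic) builds $\Pi(\Lambda)$ and $\pi(\Lambda)$ from [\[eq:multipliers\]](#eq:multipliers){reference-type="eqref" reference="eq:multipliers"}, samples gradient data from a quadratic in $\mathcal{F}_{m,L}$ minimized at the origin (so $y_\star = 0$, $u_\star = 0$, $f_\star = 0$, consistent with the $\bm{e}_\star = 0$ convention), and verifies [\[eq:cvx_ineq\]](#eq:cvx_ineq){reference-type="eqref" reference="eq:cvx_ineq"} for a random nonnegative $\Lambda$.

```python
import numpy as np

def multipliers(Lam, m, L, ell):
    # Build Pi(Lam) and pi(Lam) from [eq:multipliers]; index set
    # I = {1, ..., ell+1, star}, with e_star = 0 stored as the last row of E.
    n = ell + 1
    E = np.vstack([np.eye(n), np.zeros((1, n))])
    Pi = np.zeros((2 * n, 2 * n))
    pi = np.zeros(n)
    for i in range(ell + 2):
        for j in range(ell + 2):
            e, g = E[i] - E[j], m * E[i] - L * E[j]
            Pi += Lam[i, j] * np.block(
                [[-m * L * np.outer(e, e), np.outer(e, g)],
                 [np.outer(g, e), -np.outer(e, e)]])
            pi += 2 * (L - m) * Lam[i, j] * e
    return Pi, pi

rng = np.random.default_rng(0)
m, L, ell, d = 1.0, 10.0, 2, 3
V, _ = np.linalg.qr(rng.standard_normal((d, d)))
Q = V @ np.diag(rng.uniform(m, L, d)) @ V.T   # Hessian with spectrum in [m, L]
Y = rng.standard_normal((ell + 1, d))         # rows y_k, ..., y_{k-ell}
U = Y @ Q                                     # gradients of f(y) = 0.5 y Q y^T
F = 0.5 * np.sum((Y @ Q) * Y, axis=1)         # function values (f_star = 0)

Lam = rng.uniform(0, 1, (ell + 2, ell + 2))   # arbitrary nonnegative multipliers
Pi, pi = multipliers(Lam, m, L, ell)
yu = np.vstack([Y, U])
lhs = np.trace(yu.T @ Pi @ yu) + pi @ F
assert lhs >= -1e-8                            # [eq:cvx_ineq] holds
```

The left-hand side equals $\sum_{i,j} \Lambda_{ij}\, q_{ij}$, a nonnegative combination of the interpolation inequalities, so nonnegativity is expected for any data generated by a function in $\mathcal{F}_{m,L}$.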
$$\label{eq:multipliers} \Pi(\Lambda) \colonequals\sum_{i,j\in \mathcal{I}} \Lambda_{ij} \begin{bmatrix} -mL\,(\bm{e}_i-\bm{e}_j)(\bm{e}_i-\bm{e}_j)^\mathsf{T}& (\bm{e}_i-\bm{e}_j)(m\bm{e}_i-L\bm{e}_j)^\mathsf{T}\\[2pt] (m\bm{e}_i-L\bm{e}_j)(\bm{e}_i-\bm{e}_j)^\mathsf{T}& -(\bm{e}_i-\bm{e}_j)(\bm{e}_i-\bm{e}_j)^\mathsf{T}\end{bmatrix} ,\;\; \pi(\Lambda) \colonequals 2\,(L-m) \sum_{i,j\in \mathcal{I}} \Lambda_{ij}\,(\bm{e}_i-\bm{e}_j)$$ # Lyapunov performance certification In [\[prop:quadratic,prop:estimating_sequences\]](#prop:quadratic,prop:estimating_sequences){reference-type="ref" reference="prop:quadratic,prop:estimating_sequences"}, we used *convergence rate* as a proxy for algorithm performance. For an algorithm $G$ of the form [\[eq:ssalg\]](#eq:ssalg){reference-type="eqref" reference="eq:ssalg"} with fixed point $\xi_\star$, we define convergence rate formally as follows. Assume $w_k=0$ for all $k$ and let $$\begin{gathered} \mathop{\mathrm{\textsc{rate}}}(G) \colonequals\\ \inf\left\{ r > 0 \;\middle|\; \sup_{f\in\mathcal{F}_{m,L}}\, \sup_{\xi_0\in\mathbb{R}^{n\times d}}\,\sup_{k\geq 0}\, \frac{\lVert{\xi_k - \xi_\star}\rVert}{r^k \lVert{\xi_0-\xi_\star}\rVert}<\infty \right\}.\end{gathered}$$ A smaller $\mathop{\mathrm{\textsc{rate}}}(G)$ is desirable because it means that the algorithm is guaranteed to converge faster to its fixed point. Another performance metric of interest is *sensitivity to additive gradient noise*. Suppose the noise inputs $w = (w_0,w_1,\dots)$ are random, zero-mean, bounded variance, and independent across timesteps (not necessarily identically distributed). Specifically, $\mathop{\mathrm{\mathbb{E}}}w_k = 0$, $\mathop{\mathrm{\mathbb{E}}}w_k^\mathsf{T}w_k \preceq \sigma^2 I_d$, and $\mathop{\mathrm{\mathbb{E}}}w_i^\mathsf{T}w_j = 0$ for all $i\neq j$. We denote the set of all such joint distributions over $w$ as $\mathcal{P}_\sigma$. 
For any fixed algorithm $G$, function $f\in\mathcal{F}_{m,L}$, initial point $\xi_0$, and noise distribution $w\sim \mathbb{P} \in \mathcal{P}_\sigma$, consider the stochastic iterate sequence $y_0,y_1,\dots$ produced by $G$ and let $y_\star \colonequals\mathop{\mathrm{\arg\min}}_{y\in\mathbb{R}^d} f(y)$ be the unique minimizer of $f$. We define the noise sensitivity to be: $$\begin{gathered} \label{def:gamma} \mathop{\mathrm{\textsc{sens}}}(G,\sigma^2) \colonequals\sup_{f\in\mathcal{F}_{m,L}}\, \sup_{\xi_0\in\mathbb{R}^{n\times d}}\, \sup_{\mathbb{P}\in \mathcal{P}_\sigma}\\ \limsup_{T\to \infty} \sqrt{\mathop{\mathrm{\mathbb{E}}}_{w\sim \mathbb{P}}\, \frac{1}{T} \sum_{k=0}^{T-1} \bigl\lVert{ y_k-y_\star }\bigr\rVert^2}.\end{gathered}$$ A smaller sensitivity is desirable because it means that the algorithm is more robust to gradient noise. Both convergence rate and noise sensitivity can be computed exactly using eigenvalue analysis for the case of quadratic functions [@mihailo; @optalg]. For smooth strongly convex functions, IQC theory can instead be used to bound the rate and sensitivity [@scherer]. We will now show an alternative Lyapunov approach, originally proposed in [@dissalg; @optalg; @van2022absolute]. Specifically, we will use a Lyapunov function of the Lur'e--Postnikov form (quadratic in the state plus integral of the nonlinearity). For our lifted system representation, this takes the form $$\label{eq:lyap} V(\bm{x},\bm{f}) \colonequals\mathop{\mathrm{\mathrm{tr}}}\bigl(\bm{x}^\mathsf{T}P \bm{x}\bigr) + p^\mathsf{T}Z\bm{f},$$ where $P$ and $p$ are parameters that must be optimized ($P$ need not be positive definite), and the matrix $Z$ is defined in [\[eq:Z\]](#eq:Z){reference-type="eqref" reference="eq:Z"}. Both convergence rate and noise sensitivity can be verified by finding $P$ and $p$ such that the Lyapunov function [\[eq:lyap\]](#eq:lyap){reference-type="eqref" reference="eq:lyap"} satisfies some algebraic conditions. **Lemma 5**. 
Consider the algorithm dynamics [\[eq:ssalg\]](#eq:ssalg){reference-type="eqref" reference="eq:ssalg"} and let $V$ be of the form [\[eq:lyap\]](#eq:lyap){reference-type="eqref" reference="eq:lyap"}. If the iterates of the lifted system [\[eq:sslifted\]](#eq:sslifted){reference-type="eqref" reference="eq:sslifted"} with $w_k=0$ satisfy the conditions (i) [\[ratei\]]{#ratei label="ratei"} $V(\bm{x}_k,\bm{f}_k) \geq \lVert{\xi_k-\xi_\star}\rVert^2$ and (ii) [\[rateii\]]{#rateii label="rateii"} $V(\bm{x}_{k+1},\bm{f}_{k+1}) \leq r^2\,V(\bm{x}_k,\bm{f}_k)$ for some $r>0$, then $\mathop{\mathrm{\textsc{rate}}}(G) \leq r$. **Proof.** Applying [\[ratei,rateii\]](#ratei,rateii){reference-type="ref" reference="ratei,rateii"} for $k \geq \ell$, we obtain $$\begin{aligned} \lVert{\xi_k-\xi_\star}\rVert^2 &\leq V(\bm{x}_k,\bm{f}_k) \leq \cdots \leq r^{2k}V(\bm{x}_0,\bm{f}_0) \\ &\leq r^{2k} \bigl(C_0 \lVert{\xi_0 - \xi_\star}\rVert^2 + C_1 \bigr),\end{aligned}$$ where $C_0$ and $C_1$ are constants that depend on the initialization of the algorithm and the parameters $P,p$. The result follows from applying the definition of $\mathop{\mathrm{\textsc{rate}}}(G)$.   ------------------------------------------------------------------------ **Lemma 6**. Consider the algorithm dynamics [\[eq:ssalg\]](#eq:ssalg){reference-type="eqref" reference="eq:ssalg"} and let $V$ be of the form [\[eq:lyap\]](#eq:lyap){reference-type="eqref" reference="eq:lyap"}. 
If the iterates of the lifted system [\[eq:sslifted\]](#eq:sslifted){reference-type="eqref" reference="eq:sslifted"} satisfy the conditions (i) [\[sensi\]]{#sensi label="sensi"} $\mathop{\mathrm{\mathbb{E}}}V(\bm{x}_k,\bm{f}_k) \geq 0$ (ii) [\[sensii\]]{#sensii label="sensii"} $\mathop{\mathrm{\mathbb{E}}}V(\bm{x}_{k+1},\bm{f}_{k+1}) - \mathop{\mathrm{\mathbb{E}}}V(\bm{x}_k,\bm{f}_k) + \mathop{\mathrm{\mathbb{E}}}\,\lVert{y_k-y_\star}\rVert^2 \leq \gamma^2$ for some $\gamma > 0$, then $\mathop{\mathrm{\textsc{sens}}}(G,\sigma^2) \leq \gamma$. **Proof.** Applying [\[sensi\]](#sensi){reference-type="ref" reference="sensi"} and averaging [\[sensii\]](#sensii){reference-type="ref" reference="sensii"} over $k=0,\dots,T-1$, we obtain $$\frac{1}{T} \mathop{\mathrm{\mathbb{E}}}\sum_{k=0}^{T-1} \lVert{y_k-y_\star}\rVert^2 \leq \frac{1}{T} \mathop{\mathrm{\mathbb{E}}}V(\bm{x}_0,\bm{f}_0) + \gamma^2.$$ Taking the limit superior as $T\to\infty$ implies that $\gamma$ is an upper bound on the sensitivity to gradient noise.   ------------------------------------------------------------------------ We will now show how [\[lem:rate,lem:sens\]](#lem:rate,lem:sens){reference-type="ref" reference="lem:rate,lem:sens"}, together with the interpolation conditions of [4](#sec:interpolation){reference-type="ref" reference="sec:interpolation"}, can be used to efficiently certify convergence rate and noise sensitivity for iterative algorithms. **Theorem 7**. Consider an algorithm $G$ of the form [\[eq:ssalg\]](#eq:ssalg){reference-type="eqref" reference="eq:ssalg"} with fixed point $(\xi_\star,y_\star,u_\star)$ applied to a function $f \in \mathcal{F}_{m,L}$. Suppose there is additive gradient noise with distribution in $\mathcal{P}_\sigma$. 
Define the truncation matrices in [\[eq:Z\]](#eq:Z){reference-type="eqref" reference="eq:Z"}, the augmented state space and projection matrices in [\[eq:sslifted\]](#eq:sslifted){reference-type="eqref" reference="eq:sslifted"}--[\[eq:sslifted2\]](#eq:sslifted2){reference-type="eqref" reference="eq:sslifted2"}, and the valid inequality matrices in [\[eq:multipliers\]](#eq:multipliers){reference-type="eqref" reference="eq:multipliers"}. a) If there exist $P=P^\mathsf{T}\in\mathbb{R}^{(n+2\ell)\times (n+2\ell)}$ and $p\in\mathbb{R}^\ell$ and $\Lambda_1,\Lambda_2\geq 0$ (elementwise) and $r>0$ such that [\[eq:lmi_cvx_rate\]]{#eq:lmi_cvx_rate label="eq:lmi_cvx_rate"} $$\begin{aligned} \addtolength{\arraycolsep}{-2pt}\hspace{-5mm} \begin{bmatrix} \bm{A}& \bm{B}\\ I & 0 \\ \bm{C}& \bm{D}\end{bmatrix}^\mathsf{T}\! \begin{bmatrix} P & 0 & 0\\ 0 & -r^2 P & 0 \\ 0 & 0 & \Pi(\Lambda_1)\end{bmatrix}\! \begin{bmatrix} \bm{A}& \bm{B}\\ I & 0 \\ \bm{C}& \bm{D}\end{bmatrix} &\preceq 0 \label{eq:lmi_cvx_1} \\ (Z_+ - r^2 Z)^\mathsf{T}p + \pi(\Lambda_1) &\leq 0 \label{eq:lmi_cvx_2} \\ \hspace{-1cm}\bm{X}^\mathsf{T}\bm{X}+ \addtolength{\arraycolsep}{-2pt} \begin{bmatrix}I & 0 \\ \bm{C}& \bm{D}\end{bmatrix}^\mathsf{T}\! \begin{bmatrix}-P & 0 \\ 0 & \Pi(\Lambda_2)\end{bmatrix}\! \begin{bmatrix}I & 0 \\ \bm{C}& \bm{D}\end{bmatrix} &\preceq 0 \label{eq:lmi_cvx_3} \\ -Z^\mathsf{T}p + \pi(\Lambda_2) &\leq 0 \label{eq:lmi_cvx_4} \end{aligned}$$ then $\mathop{\mathrm{\textsc{rate}}}(G) \leq r$. b) If there exist $P=P^\mathsf{T}\in\mathbb{R}^{(n+2\ell)\times(n+2\ell)}$ and $p\in\mathbb{R}^\ell$ and $\Lambda_1,\Lambda_2\geq 0$ (elementwise) such that [\[eq:lmi_cvx_sensitivity\]]{#eq:lmi_cvx_sensitivity label="eq:lmi_cvx_sensitivity"} $$\begin{aligned} \addtolength{\arraycolsep}{-2pt}\hspace{-1cm} \begin{bmatrix} \bm{A}& \bm{B}\\ I & 0 \\ \bm{C}& \bm{D}\end{bmatrix}^\mathsf{T}\! \begin{bmatrix} P & 0 & 0\\ 0 & -P & 0\\ 0 & 0 & \Pi(\Lambda_1)\end{bmatrix} \! 
\begin{bmatrix} \bm{A}& \bm{B}\\ I & 0 \\ \bm{C}& \bm{D}\end{bmatrix}\! + \bm{Y}^\mathsf{T}\bm{Y}&\preceq 0 \label{eq:lmi_cvx_sensitivity_1} \\ (Z_+ - Z)^\mathsf{T}p + \pi(\Lambda_1) &\leq 0 \label{eq:lmi_cvx_sensitivity_2} \\ \addtolength{\arraycolsep}{-2pt} \begin{bmatrix}I & 0 \\ \bm{C}& \bm{D}\end{bmatrix}^\mathsf{T}\! \begin{bmatrix}-P & 0 \\ 0 & \Pi(\Lambda_2)\end{bmatrix} \! \begin{bmatrix}I & 0 \\ \bm{C}& \bm{D}\end{bmatrix} &\preceq 0 \label{eq:lmi_cvx_sensitivity_3} \\ -Z^\mathsf{T}p + \pi(\Lambda_2) &\leq 0 \label{eq:lmi_cvx_sensitivity_4} \end{aligned}$$ then $\mathop{\mathrm{\textsc{sens}}}(G,\sigma^2) \le \sqrt{\sigma^2 d\cdot (\bm{H}^\mathsf{T}P \bm{H})}$. **Proof.** Consider a trajectory $(\xi_k,u_k,y_k,w_k)$ of the dynamics [\[eq:ssalg\]](#eq:ssalg){reference-type="eqref" reference="eq:ssalg"} with $w_k=0$. Multiply [\[eq:lmi_cvx_1\]](#eq:lmi_cvx_1){reference-type="eqref" reference="eq:lmi_cvx_1"} and [\[eq:lmi_cvx_3\]](#eq:lmi_cvx_3){reference-type="eqref" reference="eq:lmi_cvx_3"} on the right and left by $(\bm{x}_k,u_k)\in\mathbb{R}^{(n+2\ell+1)\times d}$ and its transpose, respectively, and take the trace. Also, take the inner product of [\[eq:lmi_cvx_2\]](#eq:lmi_cvx_2){reference-type="eqref" reference="eq:lmi_cvx_2"} and [\[eq:lmi_cvx_4\]](#eq:lmi_cvx_4){reference-type="eqref" reference="eq:lmi_cvx_4"} with $\bm{f}_k$, which is valid because $\bm{f}_k$ is elementwise nonnegative. Next, sum the resulting [\[eq:lmi_cvx_1\]](#eq:lmi_cvx_1){reference-type="eqref" reference="eq:lmi_cvx_1"}+[\[eq:lmi_cvx_2\]](#eq:lmi_cvx_2){reference-type="eqref" reference="eq:lmi_cvx_2"} and [\[eq:lmi_cvx_3\]](#eq:lmi_cvx_3){reference-type="eqref" reference="eq:lmi_cvx_3"}+[\[eq:lmi_cvx_4\]](#eq:lmi_cvx_4){reference-type="eqref" reference="eq:lmi_cvx_4"}, and the result immediately follows from [Lemma 5](#lem:rate){reference-type="ref" reference="lem:rate"} and [\[eq:cvx_ineq\]](#eq:cvx_ineq){reference-type="ref" reference="eq:cvx_ineq"}. 
For the second part of the proof, we do not restrict $w_k=0$, and perform similar operations to the inequalities [\[eq:lmi_cvx_sensitivity\]](#eq:lmi_cvx_sensitivity){reference-type="eqref" reference="eq:lmi_cvx_sensitivity"} as in the first part, this time taking expected values, and applying [Lemma 6](#lem:sens){reference-type="ref" reference="lem:sens"} and [\[eq:cvx_ineq\]](#eq:cvx_ineq){reference-type="ref" reference="eq:cvx_ineq"}.   ------------------------------------------------------------------------ ## Efficient numerical solutions For fixed $r>0$, the conditions [\[eq:lmi_cvx_rate\]](#eq:lmi_cvx_rate){reference-type="eqref" reference="eq:lmi_cvx_rate"} and [\[eq:lmi_cvx_sensitivity\]](#eq:lmi_cvx_sensitivity){reference-type="eqref" reference="eq:lmi_cvx_sensitivity"} are linear matrix inequalities (LMIs) in the variables $P,p,\Lambda_1,\Lambda_2$. Each LMI has a size that depends on the number of algorithm states $n$ and the lifting dimension $\ell$, both of which are typically small. Critically, the size of the LMIs *does not* depend on $d$ (the domain dimension of $f$), which can be very large in practice. To find the best bound on $\mathop{\mathrm{\textsc{rate}}}(G)$, one can perform a bisection search on $r$, at each step checking feasibility of [\[eq:lmi_cvx_rate\]](#eq:lmi_cvx_rate){reference-type="ref" reference="eq:lmi_cvx_rate"}. To find the best bound on $\mathop{\mathrm{\textsc{sens}}}(G,\sigma^2)$, we can directly minimize $\bm{H}^\mathsf{T}P \bm{H}$ subject to [\[eq:lmi_cvx_sensitivity\]](#eq:lmi_cvx_sensitivity){reference-type="ref" reference="eq:lmi_cvx_sensitivity"}. # Numerical examples {#sec:numerical} In [2](#fig:rate_comparison){reference-type="ref" reference="fig:rate_comparison"}, we plot bounds on $\mathop{\mathrm{\textsc{rate}}}(G)$ for smooth strongly convex functions for different algorithms and choices of $L/m$, computed using [Theorem 7](#thm:lmi_cvx){reference-type="ref" reference="thm:lmi_cvx"}. 
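The bisection search described above requires an SDP solver to test feasibility of [\[eq:lmi_cvx_rate\]](#eq:lmi_cvx_rate){reference-type="eqref" reference="eq:lmi_cvx_rate"} at each step. The sketch below illustrates only the bisection logic, substituting (as our own simplification) the exact scalar feasibility test for GD on quadratics, whose contraction factor is $\max(|1-\alpha m|, |1-\alpha L|)$.

```python
import numpy as np

def feasible(r, alpha, m, L):
    # stand-in for the LMI feasibility check [eq:lmi_cvx_rate]:
    # for GD on quadratics, rate r is achievable iff the exact
    # contraction factor max(|1 - alpha*m|, |1 - alpha*L|) is at most r
    return max(abs(1 - alpha * m), abs(1 - alpha * L)) <= r

def best_rate(alpha, m, L, tol=1e-9):
    lo, hi = 0.0, 1.0          # lo infeasible, hi feasible throughout
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid, alpha, m, L):
            hi = mid
        else:
            lo = mid
    return hi

m, L = 1.0, 10.0
r = best_rate(2 / (m + L), m, L)
# the tuning alpha = 2/(L+m) balances the two terms at (L-m)/(L+m)
assert abs(r - (L - m) / (L + m)) < 1e-6
```

In the actual method of [Theorem 7](#thm:lmi_cvx){reference-type="ref" reference="thm:lmi_cvx"}, `feasible` would instead solve the LMIs [\[eq:lmi_cvx_rate\]](#eq:lmi_cvx_rate){reference-type="eqref" reference="eq:lmi_cvx_rate"} with a semidefinite programming solver; the surrounding bisection is unchanged.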
For all curves, a lifting dimension $\ell=1$ was enough to get the best results. We tuned [\[GD\]](#GD){reference-type="eqref" reference="GD"} and [\[HB\]](#HB){reference-type="eqref" reference="HB"} as in [Proposition 1](#prop:quadratic){reference-type="ref" reference="prop:quadratic"}, which recovers the result of [@lessard16 §4.6] whereby HB tuned in this manner may not be globally convergent in $\mathcal{F}_{m,L}$. We tuned [\[FG\]](#FG){reference-type="eqref" reference="FG"} as in [Proposition 1](#prop:quadratic){reference-type="ref" reference="prop:quadratic"}, which yields a tighter rate bound than that of [Proposition 2](#prop:estimating_sequences){reference-type="ref" reference="prop:estimating_sequences"} (shown as FG\*). Finally, we show the *Triple Momentum Method* [@van2017fastest], which has the fastest known convergence rate for this function class. ![Application of [Theorem 7](#thm:lmi_cvx){reference-type="ref" reference="thm:lmi_cvx"} to find bounds on $\mathop{\mathrm{\textsc{rate}}}(G)$ for different algorithms applied to smooth strongly convex functions. See the text of [6](#sec:numerical){reference-type="ref" reference="sec:numerical"} for details.](rate_comparison){#fig:rate_comparison} In [3](#fig:perf_tradeoff){reference-type="ref" reference="fig:perf_tradeoff"}, we plot sensitivity to gradient noise versus convergence rate for various algorithms applied to smooth strongly convex functions with $m=1$ and $L=8$, computed using [Theorem 7](#thm:lmi_cvx){reference-type="ref" reference="thm:lmi_cvx"}. We used $\ell=1$ for convergence rate and $\ell=6$ for noise sensitivity. Also shown is GD\*, which explores other tunings of GD with $0 \leq \alpha \leq \tfrac{2}{L}$ and shows that the regime $\alpha > \frac{2}{L+m}$ is always Pareto-suboptimal. Finally, we show RAM, which is the *Robust Accelerated Method* of [@optalg] and achieves the best known trade-off between convergence rate and noise robustness. 
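To connect the plotted quantities with the definition [\[def:gamma\]](#def:gamma){reference-type="eqref" reference="def:gamma"}, the following Monte-Carlo sketch (our own illustration, not one of the plotted algorithms) estimates the steady-state average of $\lVert y_k - y_\star\rVert^2$ for noisy GD on a diagonal quadratic and compares it with the stationary variance of the resulting scalar AR(1) recursions.

```python
import numpy as np

# Gradient descent (y_k = x_k) on f(x) = 0.5 x^T diag(lam) x with i.i.d.
# zero-mean gradient noise of covariance sigma^2 I. In the eigenbasis each
# coordinate follows the AR(1) recursion x <- (1 - alpha*lam_i) x - alpha*w_i,
# whose stationary variance is alpha^2 sigma^2 / (1 - (1 - alpha*lam_i)^2).
rng = np.random.default_rng(3)
m, L, d, sigma = 1.0, 8.0, 5, 0.1
alpha = 2 / (m + L)
lam = np.linspace(m, L, d)
mu = 1 - alpha * lam
stationary = np.sum(alpha ** 2 * sigma ** 2 / (1 - mu ** 2))

x = np.zeros(d)
acc, T = 0.0, 50000
for _ in range(T):
    acc += x @ x
    x = mu * x - alpha * sigma * rng.standard_normal(d)

# the time-average of ||y_k - y_star||^2 approaches the stationary value
assert 0.8 * stationary < acc / T < 1.2 * stationary
```

The square root of this time-average is exactly the quantity whose supremum over $f$, $\xi_0$, and noise distributions defines $\mathop{\mathrm{\textsc{sens}}}(G,\sigma^2)$; the SDP of [Theorem 7](#thm:lmi_cvx){reference-type="ref" reference="thm:lmi_cvx"} bounds it without simulation.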
![Application of [Theorem 7](#thm:lmi_cvx){reference-type="ref" reference="thm:lmi_cvx"} to plot the trade-off between sensitivity to additive gradient noise and convergence rate in $\mathcal{F}_{1,8}$. See the text of [6](#sec:numerical){reference-type="ref" reference="sec:numerical"} for details.](perf_tradeoff){#fig:perf_tradeoff} # Concluding remarks In this tutorial, we presented a Lyapunov-based approach for algorithm analysis, which is covered in more technical detail in [@optalg; @dissalg; @van2022absolute]. Showcased in [\[fig:perf_tradeoff,fig:rate_comparison\]](#fig:perf_tradeoff,fig:rate_comparison){reference-type="ref" reference="fig:perf_tradeoff,fig:rate_comparison"}, this approach attains numerical results that empirically match those obtained using IQCs [@lessard16; @scherer], but does so using a familiar Lur'e--Postnikov Lyapunov function [\[eq:lyap\]](#eq:lyap){reference-type="eqref" reference="eq:lyap"}, and with results such as [Theorem 7](#thm:lmi_cvx){reference-type="ref" reference="thm:lmi_cvx"} that have straightforward proofs. [^1]: B. Van Scoy is with the Department of Electrical and Computer Engineering at Miami University, Oxford, OH 45056, USA `bvanscoy@miamioh.edu` [^2]: L. Lessard is with the Department of Mechanical and Industrial Engineering at Northeastern University, Boston, MA 02115, USA `l.lessard@northeastern.edu` [^3]: This material is based upon work supported by the National Science Foundation under Grants No. 2136945, 2139482.
--- abstract: | The $2$-packing number $\rho_2(G)$ of a graph $G$ is the cardinality of a largest $2$-packing of $G$ and the open packing number $\rho^{\rm o}(G)$ is the cardinality of a largest open packing of $G$, where an open packing (resp. $2$-packing) is a set of vertices in $G$ no two open (resp. closed) neighborhoods of which intersect. It is proved that if $G$ is bipartite, then $\rho^{\rm o}(G\,\square\,K_2) = 2\rho_2(G)$. For hypercubes, the lower bounds $\rho_2(Q_n) \ge 2^{n - \lfloor \log n\rfloor -1}$ and $\rho^{\rm o}(Q_n) \ge 2^{n - \lfloor \log (n-1)\rfloor -1}$ are established. These findings are applied to injective colorings of hypercubes. In particular, it is demonstrated that $Q_9$ is the smallest hypercube which is not perfect injectively colorable. It is also proved that $\gamma_t(Q_{2^k}\times H) = 2^{2^k-k}\gamma_t(H)$, where $H$ is an arbitrary graph with no isolated vertices. author: - Boštjan Brešar$^{a,b}$ - Sandi Klavžar$^{a,b,c}$ - | Douglas F. Rall$^{d}$\ title: Packings in bipartite prisms and hypercubes --- $^a$ Faculty of Natural Sciences and Mathematics, University of Maribor, Slovenia\ $^b$ Institute of Mathematics, Physics and Mechanics, Ljubljana, Slovenia\ $^c$ Faculty of Mathematics and Physics, University of Ljubljana, Slovenia\ $^d$ Department of Mathematics, Furman University, Greenville, SC, USA\ **Keywords:** $2$-packing number, open packing number, bipartite prism, hypercube, injective coloring, (total) domination number\ **AMS Subj. Class. (2020)**: 05C69, 05C76 # Introduction {#sec:intro} For many reasons, hypercubes are ubiquitous in theoretical computer science and in combinatorics. Understanding their structure is therefore a fundamental problem. Although hypercubes have a seemingly simple structure, we quickly encounter very complex problems. For instance, one of them was the middle levels problem, which was successfully resolved [@mutze-2016]. 
On the other hand, the problem of determining the domination number of hypercubes is beyond the reach of existing methods. To date, exact values of $\gamma(Q_n)$ are only known for $n \le 9$, where the value $\gamma(Q_9) = 62$ was obtained in [@ostergard-2001], and for the following two infinite families. **Theorem 1**. *([@harary-1993; @wee-1988]) [\[thm:infinite-families\]]{#thm:infinite-families label="thm:infinite-families"} If $k\ge 1$, then $\gamma(Q_{2^k-1}) = 2^{2^k-k-1}$ and $\gamma(Q_{2^k}) = 2^{2^k-k}$.* The values $\gamma(Q_{2^k-1}) = 2^{2^k-k-1}$ can be obtained from the fact that hypercubes $Q_{2^k-1}$ admit 1-perfect codes, in which case the domination number coincides with the cardinality of a 1-perfect code. The most important variation of the domination number is the total domination number; see a recent monograph [@bookHHH] surveying domination theory with the two invariants in the central role. Roughly speaking, domination operates with closed neighborhoods while total domination with open neighborhoods, which often causes a different behavior of the invariants. However, as proved in [@azarija-2017] by using hypergraph transversals, $\gamma_t(Q_{n+1})= 2\gamma(Q_n)$ for all $n$, which makes the domination number and the total domination number in hypercubes tightly connected. More generally, the authors of [@azarija-2017] proved that $\gamma_t(G\,\square\,K_2)=2\gamma(G)$ as soon as $G$ is a bipartite graph. The concepts of packing number and open packing number of a graph are often used in domination theory, since they present natural lower bounds on the domination number and the total domination number, respectively, of the graph. The concept of packing was used back in 1975 by Meir and Moon in their classical theorem stating that in a tree the domination number equals the packing number [@mm-1975]. 
On the other hand, open packing was introduced by Henning and Slater [@hs], and was later used in [@Rall] to prove a canonical formula for the total domination number of the direct product of two graphs, which holds if one of the factors has the total domination number equal to its open packing number. Similarly as total domination is related to domination, open packing can be regarded as a version of packing in which closed neighborhoods are replaced with open neighborhoods. See [@mohammadi-2019; @mojdeh-2020; @mojdeh-2022] for some recent studies of (open) packings as well as [@gao-2017] for their application. Open packings are also related to the so-called injective colorings of graphs, cf. [@pp]. More precisely, an injective coloring of a graph is exactly a partition of its vertex set into open packings. In a recent paper [@bsy], graphs that admit injective colorings such that each of the color classes is a maximum open packing were considered. While proving this property for hypercubes of some small dimensions, it was also proved for those whose dimension is a power of $2$. Yet, nothing else was known, including whether there exists a hypercube that does not satisfy this property. One of the reasons for the difficulty of this question is that the open packing number (i.e., the cardinality of a maximum open packing) has not been known. We proceed as follows. In the remainder of this introduction, we provide the definitions and concepts we need for the following. In Section [2](#sec:bip){reference-type="ref" reference="sec:bip"} we prove that the open packing number of a prism over a bipartite graph $G$ is twice the $2$-packing number of $G$. This result nicely complements [@azarija-2017 Theorem 1] which states that the total domination number of a prism over a bipartite graph $G$ is twice the domination number of $G$. 
We also demonstrate that in general, the open packing number of a prism over a graph $G$ can be arbitrarily larger than the $2$-packing number of $G$. In Section [3](#sec:hyp){reference-type="ref" reference="sec:hyp"} we prove lower bounds on the $2$-packing number and the open packing number of hypercubes. The bounds are sharp for small dimensions and for two infinite families, but are not sharp in general. In the subsequent section we apply these findings to injective colorings of hypercubes. In particular, we demonstrate that $Q_9$ is the smallest hypercube that is not perfect injectively colorable. In the concluding remarks, we give an overview of the known values for the hypercube invariants considered here and also derive the total domination number of the direct product of $Q_{2^k}$ and an arbitrary graph.

## Preliminaries {#sec:prelim}

Let $G = (V(G), E(G))$ be a graph and $x\in V(G)$. The *open neighborhood* $N(x)$ is the set of vertices adjacent to $x$ and the *closed neighborhood* is $N[x] = N(x)\cup \{x\}$. A set $D\subseteq V(G)$ is a *dominating set* of $G$ if each vertex of $V(G)\setminus D$ has a neighbor in $D$. The cardinality of a smallest dominating set of $G$ is the *domination number* $\gamma(G)$ of $G$. Similarly, $D\subseteq V(G)$ is a *total dominating set* of $G$ if each vertex of $V(G)$ has a neighbor in $D$. The cardinality of a smallest total dominating set of $G$ is the *total domination number* $\gamma_t(G)$ of $G$. Let $X\subseteq V(G)$. Then $X$ is a *$2$-packing* of $G$ if $N[x] \cap N[y] = \emptyset$ for every pair of distinct vertices $x,y\in X$. Similarly, if $N(x) \cap N(y) = \emptyset$ for every pair of distinct vertices $x,y\in X$, then $X$ is an *open packing* of $G$. The cardinality of a largest $2$-packing of $G$ is the *$2$-packing number* $\rho_2(G)$ of $G$ and the cardinality of a largest open packing of $G$ is the *open packing number* $\rho^{\rm o}(G)$ of $G$.
By a *$\rho_2$-set* of $G$ we mean a $2$-packing of $G$ of cardinality $\rho_2(G)$. A *$\rho^{\rm o}$-set* of $G$ is defined analogously. If $X$ is a $2$-packing such that $V(G) = \cup_{x\in X} N[x]$, then we say that $X$ is a *$1$-perfect code* of $G$. In domination theory, $1$-perfect codes are known as *efficient dominating sets*; see [@bookHHH Chapter 9] and [@KP]. Note that a $1$-perfect code of $G$ is, by definition, also a dominating set of $G$. Combined with the inequality $\gamma(G) \ge \rho_2(G)$, which holds for every graph $G$, this observation leads to the following well-known fact.

**Proposition 2**. *If $G$ admits a $1$-perfect code, then $\gamma(G) = \rho_2(G)$. If in addition $G$ is $r$-regular, then $\gamma(G) = \rho_2(G)=\frac{n(G)}{r+1}$.*

The *Cartesian product* $G\,\square\,H$ of graphs $G$ and $H$ is the graph whose vertex set is $V(G) \times V(H)$, and two vertices $(g_1,h_1)$ and $(g_2,h_2)$ are adjacent in $G \,\square\,H$ if either $g_1=g_2$ and $h_1h_2$ is an edge in $H$ or $h_1=h_2$ and $g_1g_2$ is an edge in $G$. For a vertex $g$ of $G$, the subgraph of $G\,\square\,H$ induced by the set $\{(g,h):\, h\in V(H)\}$ is an *$H$-fiber* and is denoted by $^g\!H$. Similarly, for $h \in V(H)$, the *$G$-fiber*, $G^h$, is the subgraph induced by $\{(g,h):\, g\in V(G)\}$. The Cartesian product is commutative and associative. The *hypercube* of dimension $n$, or the *$n$-cube*, is isomorphic to $K_2\,\square\,\cdots\,\square\,K_2$, where there are $n$ factors $K_2$, and is denoted by $Q_n$. The equality $Q_n=Q_{n-1}\,\square\,K_2$ will be used (at least implicitly) several times in the paper. Finally, the *direct product* $G\times H$ of graphs $G$ and $H$ has the vertex set $V(G) \times V(H)$, and two vertices $(g_1,h_1)$ and $(g_2,h_2)$ are adjacent in $G \times H$ if $g_1g_2$ is an edge in $G$ and $h_1h_2$ is an edge in $H$.

# Packing vs. open packing in bipartite prisms {#sec:bip}

In [@azarija-2017] it was proved that if $G$ is a bipartite graph, then $\gamma_t(G\,\square\,K_2)=2\gamma(G)$.
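To make these invariants concrete: since $Q_n$ is bipartite and triangle-free, two distinct vertices of $Q_n$ have a common neighbor exactly when their Hamming distance is $2$, so a $2$-packing of $Q_n$ is a set of vertices with pairwise Hamming distances at least $3$, while an open packing only excludes pairs at distance exactly $2$. The following brute-force Python sketch (function name ours) reproduces some of the small values collected in the table of the concluding remarks:

```python
from itertools import combinations

def max_hypercube_packing(n, open_packing=False):
    """Largest 2-packing (pairwise Hamming distance >= 3) or open packing
    (no pair at Hamming distance exactly 2) in Q_n, by exhaustive search."""
    bad = {2} if open_packing else {1, 2}
    V = list(range(2 ** n))
    for k in range(2 ** n, 0, -1):
        for S in combinations(V, k):
            if all(bin(u ^ v).count("1") not in bad for u, v in combinations(S, 2)):
                return k

assert max_hypercube_packing(3) == 2                      # rho_2(Q_3) = 2
assert max_hypercube_packing(4) == 2                      # rho_2(Q_4) = 2
assert max_hypercube_packing(4, open_packing=True) == 4   # rho^o(Q_4) = 4
```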
In this section we prove an analogous result that connects the open packing number and the packing number. We begin with the following simple lemma, which holds in all graphs.

**Lemma 3**. *If $G$ is a graph, then $\rho^{\rm o}(G\,\square\,K_2)\ge 2\rho_2(G)$.*

*Proof.* Let $G$ be a graph, and let $P$ be a $\rho_2$-set of $G$. Then $P\times V(K_2)$ is an open packing of $G\,\square\,K_2$, hence the result. ◻

In general, $\rho^{\rm o}(G\,\square\,K_2)$ can be arbitrarily larger than $2 \rho_2(G)$. As an example, consider the family of graphs $G_k$, $k\ge 1$, defined as follows. $G_k$ contains $2k$ disjoint cycles $C_5$ connected in a row by an edge between two consecutive $5$-cycles. This informal definition of $G_k$ should be clear from Fig. [\[fig:G2\]](#fig:G2){reference-type="ref" reference="fig:G2"}, where $G_2\,\square\,K_2$ is drawn. Since any $2$-packing of $G_k$ contains at most one vertex of each $C_5$, we infer that $\rho_2(G_k) = 2k$. On the other hand, repeating the pattern as shown in Fig. [\[fig:G2\]](#fig:G2){reference-type="ref" reference="fig:G2"} for $k=2$, we get $\rho^{\rm o}(G_k\,\square\,K_2) \ge 5k$. For bipartite graphs, however, the above phenomenon cannot occur, as the main result of this section asserts.

**Theorem 4**. *If $G$ is a bipartite graph, then $\rho^{\rm o}(G\,\square\,K_2)=2\rho_2(G)$.*

*Proof.* Let $G$ be a bipartite graph with parts $A$ and $B$ forming the natural partition of $V(G)$. By Lemma [Lemma 3](#lem:lower-bound){reference-type="ref" reference="lem:lower-bound"}, we have $\rho^{\rm o}(G\,\square\,K_2)\ge 2\rho_2(G)$. To prove the reversed inequality, consider an open packing $O$ in $G\,\square\,K_2$ such that $|O|=\rho^{\rm o}(G\,\square\,K_2)$. We will show that $O$ can be transformed into an open packing $O'$ of the form $P'\times V(K_2)$, where $P'$ is a subset of $V(G)$. (Clearly, the latter also implies that $P'$ is a $2$-packing.)
Note that $O$ can be presented as the disjoint union $I\cup R$, where $I$ is the set of vertices that are isolated in the subgraph of $G\,\square\,K_2$ induced by $O$, while $R$ is the set of vertices that have exactly one neighbor in $O$. Clearly, at least one of the sets $I$ or $R$ is non-empty. Set $V(K_2)=\{1,2\}$, and let $I_i=I\cap V(G^i)$ and $R_i=R\cap V(G^i)$ for all $i\in [2]$. In addition, let $I_i^A=\{(u,i)\in I_i:\, u\in A\}$, $I_i^B=\{(u,i)\in I_i:\, u\in B\}$ for $i\in [2]$, and similarly let $R_i^A=\{(u,i)\in R_i:\, u\in A\}$, $R_i^B=\{(u,i)\in R_i:\, u\in B\}$ for $i\in [2]$. Next, we compare the two quantities $|I_1^A|+|I_2^B|$ and $|I_2^A|+|I_1^B|$. We may assume with no loss of generality that $|I_1^A|+|I_2^B|\ge|I_2^A|+|I_1^B|$. Now, the announced transformation of $O$ to $O'$ is defined as follows:

- if $(u,t)\in I_1^A\cup I_2^B$, then let $\{u\}\times V(K_2)\subseteq O'$;

- if $(u,t)\in I_2^A\cup I_1^B$, then let $(\{u\}\times V(K_2))\cap O'=\emptyset$;

- if $(u,1)\in R_1$ and $(u,2)\in R_2$, then let $\{u\}\times V(K_2)\subseteq O'$;

- if $(u,1)\in R_1^A$ and $(v,1)\in R_1^B$, where $uv\in E(G)$, then let $\{u\}\times V(K_2)\subseteq O'$ and $(\{v\}\times V(K_2))\cap O'=\emptyset$;

- if $(u,2)\in R_2^A$ and $(v,2)\in R_2^B$, where $uv\in E(G)$, then let $\{v\}\times V(K_2)\subseteq O'$ and $(\{u\}\times V(K_2))\cap O'=\emptyset$.

We claim that $|O'|\ge |O|$. Indeed, the first two rows in the above transformation show that for every vertex $(u,t)\in I_1^A\cup I_2^B$ we get two vertices in $O'$, while for every vertex $(u,t)\in I_2^A\cup I_1^B$ we get no vertices in $O'$, and $|I_1^A\cup I_2^B|\ge|I_2^A\cup I_1^B|$ by the earlier assumption. By the last three rows of the above transformation, every pair of vertices in $R$ is replaced by two vertices in $O'$. This altogether implies that $|O'|\ge |O|$, so it remains to prove that $O'$ is an open packing in $G\,\square\,K_2$.
If $(u,1)\in I_1^A$ and $(v,1)\in I_1^A$, then $d_G(u,v)\ge 4$, because the vertices belong to $O$, which is an open packing, and $u$ and $v$ are both in $A$. Thus vertices in $\{u\}\times V(K_2)$ will be at distance at least $4$ from the vertices in $\{v\}\times V(K_2)$. By symmetry, we get the same conclusion for vertices $(u,2)\in I_2^B$ and $(v,2)\in I_2^B$. If $(u,1)\in I_1^A$ and $(v,2)\in I_2^B$, then $d_G(u,v)\ge 3$, because $u$ and $v$ belong to different parts, $A$ and $B$ respectively, of the bipartition of $V(G)$ and they belong to $O$, which is an open packing. Thus, vertices in $\{u\}\times V(K_2)$ will be at distance at least $3$ from the vertices in $\{v\}\times V(K_2)$, as desired. Clearly, if $(u,t)\in I_1^A\cup I_2^B$, then $d_G(u,v)\ge 3$ for any $v\in V(G)$ such that $\{(v,1),(v,2)\}\subset R$. This yields that vertices in $\{u\}\times V(K_2)$ will be at distance at least $3$ from the vertices in $\{v\}\times V(K_2)$. If $(u,1)\in I_1^A$ and $(v,1)\in R_1^A$, we have $d_G(u,v)\ge 4$. On the other hand, if $(u,1)\in I_1^A$ and $(v,2)\in R_2^B$, we have $d_G(u,v)\ge 3$. In either case, the corresponding vertices in $O'$ are at least three apart. By symmetry, the distances between vertices in $I_2^B$ and vertices in $R_1^A\cup R_2^B$ are sufficiently large, so that the corresponding $K_2$-fibers that lie in $O'$ will be at distance at least $3$. This completes the proof that the distance from the vertices in $O'$ that appear in the first row of the above transformation to all other vertices in $O'$ is at least $3$, except of course for two vertices in $O'$ that belong to the same $K_2$-fiber and are adjacent. Vertices of $O'$ that appear in the third row of the transformation remain at distance at least $3$ from all other vertices in $O'$ (with the clear exception of two adjacent such vertices). Therefore, it remains to consider the vertices in $O'$ that appear in the last two rows of the above transformation.
Suppose there are two nonadjacent vertices in $R_1^A$, say $(u,1)$ and $(v,1)$. Then $d_G(u,v)\ge 4$, and so $\{u\}\times V(K_2)$ will be at distance at least $4$ from the vertices in $\{v\}\times V(K_2)$; by symmetry, the same conclusion applies if $(u,2)$ and $(v,2)$ are nonadjacent vertices in $R_2^B$. Finally, let $(u,1)\in R_1^A$ and $(v,2)\in R_2^B$. Since $O$ is an open packing, we have $d_G(u,v)>1$, and since they are in different parts of the bipartition, we get $d_G(u,v)\ge 3$. We derive that $\{u\}\times V(K_2)$ will be at distance at least $3$ from the vertices in $\{v\}\times V(K_2)$, which concludes the proof that $O'$ is an open packing. Since $|O|=\rho^{\rm o}(G\,\square\,K_2)$ and $|O'|\ge |O|$, we derive $|O'|=|O|=\rho^{\rm o}(G\,\square\,K_2)$. In addition, there exists a set $P'\subset V(G)$ such that $O'=P'\times [2]$, where $P'$ is a $2$-packing of $G$. Hence, $|P'|\le \rho_2(G)$, and so $|O'|=2|P'|\le 2\rho_2(G)$, implying $\rho^{\rm o}(G\,\square\,K_2)\le 2\rho_2(G)$. ◻

# (Open) packings in hypercubes {#sec:hyp}

The following lemma follows by observing that the restriction of a $2$-packing in $G\,\square\,K_2$ to a $G$-fiber is a $2$-packing of that fiber.

**Lemma 5**. *If $G$ is a graph, then $\rho_2(G\,\square\,K_2) \le 2 \rho_2(G)$.*

We can now bound $\rho_2$ and $\rho^{\rm o}$ of hypercubes as follows.

**Theorem 6**. *If $n\ge 2$, then*

1. *$\rho_2(Q_n) \ge 2^{n - \lfloor \log n\rfloor -1}$ and*

2. *$\rho^{\rm o}(Q_n) \ge 2^{n - \lfloor \log (n-1)\rfloor -1}$.*

*Proof.* (i) Suppose first that $n = 2^k - 1$, where $k\ge 2$. As already mentioned, in these cases $Q_n$ admits a $1$-perfect code, say $S$. Then $|S| = 2^{2^k - 1}/2^k = 2^{2^k - k - 1}$ and consequently $$\rho_2(Q_n) = |S| = 2^{2^k - k - 1} = 2^{2^k -1 - (k - 1) - 1} = 2^{n - \lfloor \log n\rfloor -1}\,.$$ Consider now the hypercubes $Q_n$, where $k\ge 3$ and $2^{k-1} - 1 < n < 2^k - 1$.
In particular, if $n = 2^k - 2$, then since $Q_{2^k - 1} = Q_{2^k - 2} \,\square\,K_2$, Lemma [Lemma 5](#lem:prism-rotwo){reference-type="ref" reference="lem:prism-rotwo"} implies that $$\rho_2(Q_{n}) = \rho_2(Q_{2^k - 2}) \ge \frac{1}{2} \rho_2(Q_{2^k - 1}) = 2^{2^k-k-2} = 2^{2^k-2 - (k-1)-1} = 2^{n - \lfloor \log n\rfloor -1} \,.$$ Applying the lemma inductively, the result holds for all $n$ such that $2^{k-1} - 1 < n < 2^k - 1$. Therefore, (i) holds for all $n\ge 2$.

\(ii\) Applying Theorem [Theorem 4](#thm:bip-prisms){reference-type="ref" reference="thm:bip-prisms"} and (i), we have $$\rho^{\rm o}(Q_n) = 2\rho_2(Q_{n-1}) \ge 2\cdot 2^{(n-1) - \lfloor \log (n-1)\rfloor -1} = 2^{n - \lfloor \log (n-1)\rfloor -1}$$ for all $n\ge 2$ and we are done. ◻

If $n\le 7$, then equality holds in Theorem [Theorem 6](#thm:lower-bound){reference-type="ref" reference="thm:lower-bound"}(i). The cases when $n\in \{2,3,4\}$ can be easily argued by case analysis. The equality for $n\in \{5,6\}$ then follows by combining Lemma [Lemma 5](#lem:prism-rotwo){reference-type="ref" reference="lem:prism-rotwo"} and Theorem [Theorem 6](#thm:lower-bound){reference-type="ref" reference="thm:lower-bound"}(i). For $n = 7$, the equality holds because $Q_7$ has a $1$-perfect code. One is thus tempted to conjecture that the lower bound in Theorem [Theorem 6](#thm:lower-bound){reference-type="ref" reference="thm:lower-bound"}(i) holds with equality for all $n$. However, with the help of a computer, we found the set $$\begin{aligned} T =\ & \{00000000, 00001110, 00110010, 00111100, 01010110, 01011000, \\ & \ 01100100, 01101001, 01111111, 10010100, 10100101, 10101011, \\ & \ 11000111, 11001100, 11011011, 11100010, 11110001\}\end{aligned}$$ which is a $2$-packing in $Q_8$ with $|T| = 17$, hence $\rho_2(Q_8) \ge 17$. By Theorem [Theorem 4](#thm:bip-prisms){reference-type="ref" reference="thm:bip-prisms"}, this in turn implies that $\rho^{\rm o}(Q_9) \ge 34$.
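The claim about $T$ is straightforward to re-check: a set of binary $8$-tuples is a $2$-packing of $Q_8$ precisely when all pairwise Hamming distances are at least $3$. In Python:

```python
from itertools import combinations

T = ["00000000", "00001110", "00110010", "00111100", "01010110", "01011000",
     "01100100", "01101001", "01111111", "10010100", "10100101", "10101011",
     "11000111", "11001100", "11011011", "11100010", "11110001"]
vecs = [int(s, 2) for s in T]

assert len(set(vecs)) == 17
# 2-packing in Q_8 <=> all pairwise Hamming distances are at least 3
assert all(bin(u ^ v).count("1") >= 3 for u, v in combinations(vecs, 2))
# the lower bound of Theorem 6 (i) for n = 8 only gives 2^(8-3-1) = 16 < 17
assert 2 ** (8 - 3 - 1) == 16 < len(vecs)
```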
Hence also the lower bound in Theorem [Theorem 6](#thm:lower-bound){reference-type="ref" reference="thm:lower-bound"}(ii) is not sharp in general. It is, however, sharp for all $n\le 8$, because the lower bound in Theorem [Theorem 6](#thm:lower-bound){reference-type="ref" reference="thm:lower-bound"}(i) is sharp for $n\le 7$ and because of Theorem [Theorem 4](#thm:bip-prisms){reference-type="ref" reference="thm:bip-prisms"}. Furthermore, by using Theorem [Theorem 4](#thm:bip-prisms){reference-type="ref" reference="thm:bip-prisms"} and the fact that the lower bound in Theorem [Theorem 6](#thm:lower-bound){reference-type="ref" reference="thm:lower-bound"}(i) is sharp when $n=2^k - 1$, it follows that the lower bound in Theorem [Theorem 6](#thm:lower-bound){reference-type="ref" reference="thm:lower-bound"}(ii) is sharp for each value of $n$ that is a power of $2$.

# Application to injective colorings {#sec:injective}

An *injective coloring* of a graph $G$ is a partition of the vertex set of $G$ into open packings. The *injective chromatic number*, $\chi_i(G)$, of $G$ is the minimum cardinality of an injective coloring in $G$. The concept was introduced by Hahn, Kratochvíl, Širáň and Sotteau [@hkss] back in 2002, and has been considered by a number of authors, cf. [@brause-2022; @chen-2012]. In the recent paper [@bsy], graphs that admit special types of injective colorings were considered: a graph $G$ is a *perfect injectively colorable graph* if it has an injective coloring in which every color class forms a $\rho^{\rm o}$-set of $G$. The authors of [@bsy] considered hypercubes that are perfect injectively colorable. They observed that the hypercubes $Q_n$ with $n\in [5]$ are such, and proved that for all $k\in \mathbb{N}$, the hypercube $Q_{2^k}$ is a perfect injectively colorable graph. Apart from the mentioned cases, it was asked in [@bsy Problem 1] in which other dimensions the hypercube is perfect injectively colorable.
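As an explicit illustration (our own construction, not taken from [@bsy]), the following Python snippet verifies a partition of $V(Q_4)$ into four open packings of cardinality $4=\rho^{\rm o}(Q_4)$, that is, a perfect injective coloring of $Q_4$. It uses the fact that in $Q_n$ two distinct vertices have a common neighbor precisely when their Hamming distance is $2$:

```python
from itertools import combinations

def is_open_packing(S, n):
    # in the triangle-free bipartite graph Q_n, two distinct vertices have a
    # common neighbor exactly when their Hamming distance is 2
    return all(bin(u ^ v).count("1") != 2 for u, v in combinations(S, 2))

n, mask = 4, 0b1111
classes = [{v, v ^ 1, v ^ mask, v ^ mask ^ 1} for v in (0b0000, 0b0010, 0b0100, 0b1000)]

assert sorted(x for c in classes for x in c) == list(range(2 ** n))  # partition of V(Q_4)
assert all(len(c) == 4 and is_open_packing(c, n) for c in classes)   # each class a rho^o-set
```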
Since an answer to the question is closely related to computing the value of the open packing number of hypercubes, it was also asked in [@bsy Problem 2] what the value of $\rho^{\rm o}(Q_n)$ is for $n\ge 6$. In this note, we give some partial answers to the above two questions. One can easily find that $\rho_2(Q_5)=4$, which by Theorem [Theorem 4](#thm:bip-prisms){reference-type="ref" reference="thm:bip-prisms"} implies that $\rho^{\rm o}(Q_6)=8$. In addition, Fig. [\[fig:Q6\]](#fig:Q6){reference-type="ref" reference="fig:Q6"} shows a coloring of $Q_6$ with colors from $[8]$ in which the vertices of each color form a maximum $2$-packing of cardinality $8$. This gives, again by Theorem [Theorem 4](#thm:bip-prisms){reference-type="ref" reference="thm:bip-prisms"}, that $\rho^{\rm o}(Q_7)=16$. In addition, recall that $\rho^{\rm o}(Q_8)=32$, which follows, via Theorem [Theorem 4](#thm:bip-prisms){reference-type="ref" reference="thm:bip-prisms"}, from the fact that $Q_7$ has a $1$-perfect code. Now, by the observation from Section [3](#sec:hyp){reference-type="ref" reference="sec:hyp"}, we have $\rho_2(Q_8)\ge 17$. On the other hand, we claim that $\rho_2(Q_8)\le 30$. Suppose to the contrary that $\rho_2(Q_8)> 30$, and let $P$ be a $\rho_2$-set of $Q_8$. Then, partitioning $V(Q_8)$ into $Q$ and $Q'$, each of which induces a $Q_7$, we infer that either $|Q\cap P|$ or $|Q'\cap P|$ is equal to $16$. We may assume that $|Q\cap P|=16$, and noting that $Q\cap P$ is a $2$-packing of $Q_7$, this implies that $Q\cap P$ corresponds to a $1$-perfect code of $Q_7$, thus $Q\cap P$ is a dominating set of $Q$. This in turn implies that every vertex in $Q'$ is at distance at most $2$ from a vertex in $Q\cap P$, which yields that $P=Q\cap P$, and so $|P|=16$, a contradiction proving that $\rho_2(Q_8)\le 30$. Now, using Theorem [Theorem 4](#thm:bip-prisms){reference-type="ref" reference="thm:bip-prisms"}, we get $34\le \rho^{\rm o}(Q_9)\le 60$.
In particular, $\rho^{\rm o}(Q_9)$ is not a power of $2$, which readily implies that $Q_9$ does not admit a partition into $\rho^{\rm o}$-sets (the cardinality of a $\rho^{\rm o}$-set would have to divide $|V(Q_9)| = 2^9$), and is consequently not a perfect injectively colorable graph. On the other hand, refer to Fig. [\[fig:Q6\]](#fig:Q6){reference-type="ref" reference="fig:Q6"} again, which shows a coloring of $Q_6$ in which each color class is a $2$-packing of cardinality $\rho_2(Q_6)$. By applying Theorem [Theorem 4](#thm:bip-prisms){reference-type="ref" reference="thm:bip-prisms"} and the first part of its proof, one can construct an injective coloring of $Q_7$ in which each color class is an open packing of cardinality $\rho^{\rm o}(Q_7)$. Therefore, $Q_7$ is a perfect injectively colorable graph. Summarizing the above, the hypercubes $Q_n$, where $n\le 8$, are perfect injectively colorable graphs, and so $Q_9$ is the first instance of a hypercube that is not in this class of graphs.

# Concluding remarks

Table [1](#tab:hypercubes){reference-type="ref" reference="tab:hypercubes"} presents values of or bounds on the main domination and packing invariants in hypercubes $Q_n$ for all $n\le 9$. The values for $\gamma$ and $\gamma_t$ have been known earlier, while some of the values and bounds for $\rho_2$ and $\rho^{\rm o}$ have been obtained in this paper.

[\[tab:hypercubes\]]{#tab:hypercubes label="tab:hypercubes"}

  $n$              1   2   3   4   5   6    7    8       9
  ---------------- --- --- --- --- --- ---- ---- ------- -------
  $\gamma$         1   2   2   4   7   12   16   32      62
  $\gamma_t$       2   2   4   4   8   14   24   32      64
  $\rho_2$         1   1   2   2   4   8    16   17-30   ?
  $\rho^{\rm o}$   2   2   2   4   4   8    16   32      34-60

  : Packing and domination invariants in hypercubes $Q_n$, where $n<10$.

In addition, consider the value $\gamma_t(Q_{2^k}) = 2^{2^k-k}$, which follows from Theorem [\[thm:infinite-families\]](#thm:infinite-families){reference-type="ref" reference="thm:infinite-families"} combined with the formula $\gamma_t(G\,\square\,K_2)=2\gamma(G)$ from [@azarija-2017].
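For the smallest members of this family, the value $2^{2^k-k}$ can be confirmed by exhaustive search; note that in total domination every vertex, including the members of the candidate set $D$ itself, must have a neighbor in $D$. A Python sketch (the function name is ours):

```python
from itertools import combinations

def total_domination_number(n):
    """Brute-force gamma_t(Q_n): the smallest k such that every vertex of Q_n,
    including the members of the chosen set D, has a neighbor in D."""
    V = list(range(2 ** n))
    nbrs = [{v ^ (1 << i) for i in range(n)} for v in V]
    for k in range(1, 2 ** n + 1):
        for D in combinations(V, k):
            Dset = set(D)
            if all(nbrs[v] & Dset for v in V):
                return k

# k = 1 and k = 2: gamma_t(Q_2) = 2^{2-1} = 2 and gamma_t(Q_4) = 2^{4-2} = 4
assert total_domination_number(2) == 2
assert total_domination_number(4) == 4
```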
Now, compare this with the bound $\rho^{\rm o}(Q_{2^k}) \ge 2^{2^k-k}$, which follows from Theorem [Theorem 6](#thm:lower-bound){reference-type="ref" reference="thm:lower-bound"}(ii) when plugging in $n=2^k$. Since $\gamma_t(G)\ge \rho^{\rm o}(G)$ for every graph $G$ with no isolated vertices, we infer that $$\label{eq:cubes} \gamma_t(Q_{2^k})=2^{2^k-k}=\rho^{\rm o}(Q_{2^k}), \textrm{ for all } k\in \mathbb{N}.$$ Recall the result from [@Rall] stating that $\gamma_t(G\times H) = \gamma_t(G)\gamma_t(H)$ whenever $G$ is a graph with $\rho^{\rm o}(G)=\gamma_t(G)$ and graphs $G$ and $H$ have no isolated vertices. Therefore, from the discussion above we get that $$\gamma_t(Q_{2^k}\times H) = 2^{2^k-k}\gamma_t(H)\,,$$ where $k\in \mathbb{N}$ and $H$ is an arbitrary graph with no isolated vertices. An additional family of graphs with this property (that $\gamma_t=\rho^{\rm o}$) can be found in [@mohammadi-2019]. It would be interesting to establish whether there are any hypercubes $Q_n$ of dimensions other than those in [\[eq:cubes\]](#eq:cubes){reference-type="eqref" reference="eq:cubes"} that satisfy the equality $\gamma_t(Q_n)=\rho^{\rm o}(Q_n)$.

# Acknowledgments {#acknowledgments .unnumbered}

This work was performed within the bilateral grant "Domination in graphs, digraphs and their products" (BI-US/22-24-038). B.B. and S.K. were supported by the Slovenian Research Agency (ARRS) under the grants P1-0297, J1-2452, N1-0285, and J1-3002.

J. Azarija, M.A. Henning, S. Klavžar, (Total) domination in prisms, Electron. J. Combin. 24 (2017) Paper 1.19. C. Brause, P. Golovach, B. Martin, P. Ochem, D. Paulusma, S. Smith, Acyclic, star, and injective colouring: bounding the diameter, Electron. J. Combin. 29 (2022) Paper 2.43. B. Brešar, B. Samadi, I.G. Yero, Injective coloring of graphs revisited, Discrete Math. 346 (2023) Paper 113348. M. Chen, G. Hahn, A. Raspaud, W. Wang, Some results on the injective chromatic number of graphs, J. Comb. Optim. 24 (2012) 299--318. Y. Gao, E.
Zhu, Z. Shao, I. Gutman, A. Klobučar, Total domination and open packing in some chemical graphs, J. Math. Chem. 56 (2018) 1481--1492. G. Hahn, J. Kratochvíl, J. Širáň, D. Sotteau, On the injective chromatic number of graphs, Discrete Math. 256 (2002) 179--192. F. Harary, M. Livingston, Independent domination in hypercubes, Appl. Math. Lett. 6 (1993) 27--28. T.W. Haynes, S.T. Hedetniemi, M.A. Henning, *Domination in Graphs--Core Concepts*, Springer Monographs in Mathematics, Springer, Cham, 2023. M.A. Henning, P.J. Slater, Open packing in graphs, J. Combin. Math. Combin. Comput. 28 (1999) 5--18. M. Knor, P. Potočnik, Efficient domination in cubic vertex-transitive graphs, European J. Combin. 33 (2012) 1755--1764. A. Meir, J.W. Moon, Relations between packing and covering numbers of a tree, Pacific J. Math. 61 (1975) 225--233. M. Mohammadi, M. Maghasedi, A note on the open packing number in graphs, Math. Bohem. 144 (2019) 221--224. D.A. Mojdeh, I. Peterin, B. Samadi, I.G. Yero, (Open) packing number of some graph products, Discrete Math. Theor. Comput. Sci. 22 (2020) Paper 1. D.A. Mojdeh, B. Samadi, I.G. Yero, Further results on packing related parameters in graphs, Discuss. Math. Graph Theory 42 (2022) 333--348. T. Mütze, Proof of the middle levels conjecture, Proc. Lond. Math. Soc. 112 (2016) 677--713. P.R.J. Östergård, U. Blass, On the size of optimal binary codes of length $9$ and covering radius $1$, IEEE Trans. Inform. Theory 47 (2001) 2556--2557. B.S. Panda, Priyamvada, Injective coloring of some subclasses of bipartite graphs and chordal graphs, Discrete Appl. Math. 291 (2021) 68--87. D.F. Rall, Total domination in categorical products of graphs, Discuss. Math. Graph Theory 25 (2005) 35--44. G.J.M. van Wee, Improved sphere bounds on the covering radius of codes, IEEE Trans. Inform. Theory 34 (1988) 237--245.
{ "id": "2309.04963", "title": "Packings in bipartite prisms and hypercubes", "authors": "Bo\\v{s}tjan Bre\\v{s}ar, Sandi Klav\\v{z}ar, Douglas F. Rall", "categories": "math.CO", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | The Reynolds equation, combined with the Elrod algorithm for including the effect of cavitation, resembles a nonlinear convection-diffusion-reaction (CDR) equation. Its solution by finite elements is prone to oscillations in convection-dominated regions, which are present whenever cavitation occurs. We propose a stabilized finite-element method that is based on the variational multiscale method and exploits the concept of orthogonal subgrid scales. We demonstrate that this approach only requires one additional term in the weak form to obtain a stable method that converges optimally when performing mesh refinement. address: - International Centre for Numerical Methods in Engineering (CIMNE), 08034 Barcelona, Spain - Otto von Guericke University Magdeburg, Institute of Mechanics, 39106 Magdeburg, Germany - Universitat Politècnica de Catalunya, 08034 Barcelona, Spain author: - Hauke Gravenkamp - Simon Pfeil - Ramon Codina bibliography: - reynoldsOSS.bib - reynoldsOSS_Pfeil.bib title: | Stabilized finite elements for the solution of the\ Reynolds equation considering cavitation --- hydrodynamic bearings, Reynolds equation, stabilized finite elements, variational multiscale method, orthogonal subgrid scales # Introduction The Reynolds equation is essential in modeling hydrodynamic lubrication processes in bearings [@Reynolds1886]. The basic form of this partial differential equation (PDE) is derived from the Navier-Stokes equations, describing the special case of pressure generation in thin fluid films. Early approaches to the numerical solution of the Reynolds equation date back several decades and include conventional techniques, such as the finite-element method [@Reddi1969], finite differences [@Gnanadoss1964], and the finite-volume method [@Arghir2002]. A semi-analytical technique has recently been described for obtaining efficient solutions under some simplifying assumptions [@Pfeil2020a]. 
It is well known that the Reynolds equation, in its simplest version, can lead to unphysical negative pressure results in regions where the physical pressure becomes small enough for cavitation to occur. As a consequence, different variants have been proposed to obtain a more realistic model that includes effects due to cavitation. A trivial solution consists in setting any negative values to zero in a postprocessing step and, as a consequence, violating mass conservation. This method is often referred to as Gümbel or half-Sommerfeld condition, see, e.g., [@Li2017b]. More physically accurate cavitation models enforce mass conservation through either the boundary conditions or the differential equation, in both cases leading to a nonlinear boundary value problem (BVP). In the former case, the computational domain is limited to the non-cavitated region (pressure zone) or the two flow regimes are interpreted as two separate domains coupled through interface conditions. The nonlinearity arises from the fact that the locations of the pressure and cavitation zones are initially unknown. This concept leads to the Swift-Stieber (or Reynolds) boundary condition [@Swift1932; @Stieber1933] and -- in a more sophisticated variant -- to the Jakobsson-Floberg-Olsson (JFO) conditions [@Jakobsson1957; @Floberg1957; @Olsson1965]. See, e.g., [@Schweizer2008; @Schweizer2008a] for analyses and numerical implementations of these two approaches. The other class of cavitation models incorporates the entire fluid film, i.e., pressure as well as cavitation zones, into one common computational domain, usually in combination with linear boundary conditions. The Reynolds equation is then formulated as a globally valid nonlinear differential equation considering the different physical properties of both regimes. 
The Elrod (or Elrod-Adams) algorithm [@Elrod1974; @Elrod1981; @Fesanghary2011], which is the most prevalent cavitation model, and the bi-phase approach [@Feng1986; @Zeidan1989] fall into this category. These cavitation algorithms still satisfy (and are motivated by) the interface conditions stipulated by the JFO model without enforcing them explicitly. The cavitation model used in the study at hand is based on the assumptions proposed by Kumar and Booker [@Kumar1991]. Our current work will focus on the numerical treatment of this existing model; hence, we will not discuss in great detail the peculiarities of the modeling approach. Section [2](#sec:problem){reference-type="ref" reference="sec:problem"} will provide a brief summary of the BVP; for a more detailed outline of its derivation, the reader may consult [@Pfeil2023], in addition to the original works cited above. Note that this cavitation model, like many others, is usually classified as a variant of the Elrod algorithm, as the assumptions are similar (although not identical) to the original work by Elrod and Adams. The numerical solution of the Reynolds equation is prone to instabilities and severe oscillations, which can severely hinder the convergence of the nonlinear terms and require infeasibly small mesh sizes to obtain realistic solutions. A conventional approach to improving stability and convergence relies on including an additional unphysical diffusion term that is nonzero in the cavitation domain. This method is referred to as artificial diffusion or artificial dissipation and is used in, e.g., [@Shi2002; @vanostayen2009; @Alakhramsing2015]. Alternatively, numerical diffusion can be introduced through upwind schemes. 
In the context of finite element models, this is achieved through Petrov-Galerkin formulations [@Hajjam2007; @Lengiewicz2014] -- in particular the streamline upwind Petrov-Galerkin (SUPG) method [@Habchi2012; @Liu2020] -- where the test functions are modified, or by a special quadrature approach [@Bertocchi2013] where the positions of the integration points are shifted depending on the flow direction. In finite difference or finite volume schemes, upwind differences are used, meaning that the convective term is discretized with a backward difference relative to the flow direction instead of a central difference [@Vijayaraghavan1989; @Shyu2008; @Ausas2009]. The amount of diffusion needs to be large enough to stabilize the solution but also small enough to avoid overly diffusive results. Since finer meshes require less diffusion to achieve stability, the tuning parameters in the artificial diffusion method are usually defined as functions of the element size. Similar mesh-dependent behavior is obtained when employing upwind schemes. In this study, we propose a stabilization technique based on the variational multiscale (VMS) method that is able to achieve optimal error convergence under mesh refinement and, to our knowledge, has not been applied to the Reynolds equation as of yet. We may stress that the objective of this study is not the development of a new cavitation model but rather the design of a stabilization technique for an existing one. The oscillatory behavior, as observed in the solution of the Reynolds equation, is also well-known from the analysis of the (linear) convection-diffusion-reaction (CDR) equation or the Navier-Stokes equations. In particular, instabilities are observed in 'convection-dominated flows,' i.e., where convective terms (involving first-order spatial derivatives) are significantly larger than diffusive terms (derivatives of second order) [@Codina2000a]. 
We will demonstrate that the Reynolds equation in the particular version discussed here can be cast into the form of a nonlinear and inhomogeneous CDR equation such that methods developed for the linear CDR equation can be adapted to our application. In particular, we devise a stabilized finite element method to prevent global oscillations and improve the convergence of the nonlinear problem. The proposed approach is based on the concept of the VMS method [@Hughes1995a; @Hughes1998; @Codina2017a]. The general idea of this framework is based on splitting the unknown solution into a contribution that is in the finite element space and a remainder that cannot be represented by the chosen discretization. Within this family of approaches, existing variants differ in the assumed model for this remainder, often referred to as *subscales*. An overview of several methods for solving the CDR equation is presented in [@Codina1998]. An important subclass can be summarized as residual-based methods [@Lins2010], which include stabilization terms involving the strong residual of the governing equation to ensure the error introduced by the stabilization vanishes as the element size approaches zero. We will use here an extension of this idea that involves the projection of the residual onto a space orthogonal to the finite element space. Details on this approach (now coined *orthogonal subgrid scales*, OSGS) for the linear CDR equation were presented in [@Codina2000]. A major advantage of the OSGS approach lies in the fact that it allows neglecting terms in the residual that are not essential for achieving stability while retaining optimal convergence of the method. This idea is sometimes referred to as *term-by-term stabilization* [@Codina2008; @Coppola-Owen2011; @Castillo2017; @Castillo2019]. Based on this concept, we are able to propose a stable and optimally converging method by including only one additional term in the discretized weak form. 
In addition to deriving the stabilized finite-element method, we will briefly discuss the application of a shock-capturing method that is beneficial in problems involving large gradients of the solution. In the following section, we will briefly summarize the BVP that will be the focus of this paper before discussing a suitable variational form in Section [3](#variational){reference-type="ref" reference="variational"}. The main idea -- the proposed stabilized finite element method for the Reynolds equation -- is explained in Section [4](#stabilization){reference-type="ref" reference="stabilization"}. In the ensuing sections, we discuss some essential numerical aspects, namely the computation of the projections (Section [5](#projection){reference-type="ref" reference="projection"}), the linearization of the discretized weak form (Section [6](#linearization){reference-type="ref" reference="linearization"}), and the application of a shock-capturing algorithm (Section [7](#shockCapturing){reference-type="ref" reference="shockCapturing"}). Finally, Section [8](#numex){reference-type="ref" reference="numex"} presents three numerical studies demonstrating stable and optimally converging results. # Problem statement {#sec:problem} The model assumed here has been described in much detail in [@Pfeil2023] and the references therein. Here, it shall suffice to state the resulting BVP and provide the details essential for deriving the stabilized finite-element method. Furthermore, we limit the discussion to homogeneous Dirichlet boundary conditions for notational simplicity. Consequently, the BVP can be summarized as follows. Let $\Omega$ be an open, bounded, and polygonal domain of $\mathds{R}^2$ and $\partial \Omega$ the domain's boundary.
We want to find $u$ such that $$\begin{aligned} \label{eq:reynolds} -\tfrac{1}{12}\nabla\!\cdot\!\big(H^3(x,y)\,\nabla \left(g(u)\,u(x,y)\right)\big) -\partial_x\big(\left(g(u)-1\right)H(x,y)\,u(x,y)\big) &=f(x,y) -\partial_x H(x,y) \quad && \text{in}\ \Omega, \\ u &= 0 \quad &&\text{on\ } \partial\Omega.\end{aligned}$$ The *gap function* $H(x,y)$ is assumed to be known, typically as a result of a previous simulation step. Hence, the problem can be considered quasi-static,[^1] i.e., we aim to find a solution $u(x,y)$ for a given $H(x,y)$. In our numerical examples, we will assume the gap function to be of the form $$\label{eq:gapFunction} H = 1-\zeta\,\cos(x-x_a)$$ with constants $\zeta$ and $x_a$. Furthermore, $g(u)$ is a *switch function*, describing the transition between the pressure and cavitation zones. Here, we will use the regularized version [@Nitzschke2016a] $$g(u)=\frac{1}{\uppi} \arctan \left(\frac{u}{1-\overline{u}}\right)+\frac{1}{2},$$ where the constant $\overline{u}$ is usually chosen in the range $0.9< \overline{u}<1$. For other regularization approaches in cavitation algorithms, see, e.g., [@Fesanghary2011; @bayada2001; @Zhang2017; @nilsson2007]. A few remarks may help to clarify the BVP in this particular setting: - Physically, the computational domain represents a thin cylindrical layer (assuming a journal bearing) and is often formulated in cylindrical coordinates $(z,\theta)$. As the radial dependency and the curvature are commonly neglected, the computational domain reduces to a plane surface. Thus, we write the PDE in Cartesian coordinates to avoid any confusion regarding the differential operators. - The above version of the Reynolds equation is already non-dimensionalized; see [@Pfeil2023] for details. - The continuous unknown field $u(x,y)$ has a different physical interpretation depending on its sign.
In regions where $u\geq 0$ ('pressure zone'), it represents the hydrodynamic pressure, while it is related to a film fraction[^2] where $u< 0$ ('cavitation zone'). More specifically, the non-dimensionalized pressure $p$ and the film fraction $\vartheta$ can be derived from $u$ as $p=gu$ and $\vartheta=(1-g)u+1$, respectively.[^3] - The first term of the PDE is dominant in the pressure zone; the second is dominant in the cavitation zone. The regularization function ensures a smooth transition between both regimes. - The function $f(x,y)$ incorporates any additional forcing terms. In this study, we include it solely for the purpose of creating manufactured solutions. - The signs in Eq. [\[eq:reynolds\]](#eq:reynolds){reference-type="eqref" reference="eq:reynolds"} are chosen such that the diffusion term in the weak form will be positive definite (in contrast to, e.g., [@Pfeil2023]), as this is a common assumption in the analysis of similar problems. Introducing shorthand notations for the left-hand side and right-hand side of [\[eq:reynolds\]](#eq:reynolds){reference-type="eqref" reference="eq:reynolds"}, the PDE is of the generic form $$\label{eq:strongFormGeneric} \mathcal{L}(u,u) = \hat{f}.$$ Note that $\mathcal{L}(u,u)$ is a quasi-linear operator; its second slot accounts for the spatial derivatives and the first one for the dependence of their coefficients on the unknown. # Variational form {#variational} We denote by $\mathcal{W}\subset H^1_0(\Omega)$ the adequate function space where the continuous problem is well posed, which depends on the choice of $g(u)$. Here, the standard notation is used for the Sobolev spaces, i.e., $H^m(\Omega)$ is the space of functions whose distributional derivatives up to integer order $m \geq 0$ are square-integrable in $\Omega$. The space of square-integrable functions is denoted as $L^2(\Omega)$, and $H_0^1(\Omega)$ consists of functions in $H^1(\Omega)$ that vanish on $\partial \Omega$. Multiplying Eq.
[\[eq:reynolds\]](#eq:reynolds){reference-type="eqref" reference="eq:reynolds"} by test functions $v\in \mathcal{W}$ (i.e., taken from the same space as $u$) and integrating over the computational domain yields $$\label{eq:weak} -\tfrac{1}{12} \big( v,\, \nabla\!\cdot\!(H^3\,\nabla (g\,u)) \big) - \big(v,\, \partial_x (\left(g\negthinspace\shortminus\negthinspace 1\right)Hu)\big) = \big(v,\, f- \partial_x H \big).$$ Here and in the following, we use the symbol $(\scalebox{1}{\textbullet},\scalebox{1}{\textbullet})$ to indicate integration over the computational domain, irrespective of whether or not this expression represents the $\mathrm{L}^2$-inner product in $\Omega$. Integrating the first term by parts, we obtain $$\label{eq:weak2} \tfrac{1}{12}\big(\nabla v,\, H^3\nabla (gu)\big) -\big(v,\, \partial_x ((g\negthinspace\shortminus\negthinspace 1 )Hu)\big) =\big(v,\, f- \partial_x H \big).$$ It is convenient to define a function $\phi(u)=g(u)\,u$, such that $$\label{eq:definePhi} \nabla (g(u)\,u) = \frac{\partial}{\partial u}(g(u)\,u)\,\nabla u=\phi'(u)\,\nabla u,$$ where the prime symbol denotes a derivative with respect to $u$. Substituting Eq. [\[eq:definePhi\]](#eq:definePhi){reference-type="eqref" reference="eq:definePhi"} into the first term of Eq. [\[eq:weak2\]](#eq:weak2){reference-type="eqref" reference="eq:weak2"} and expanding the second term leads to $$\label{eq:weak3} \tfrac{1}{12}\big(\nabla v,\, H^3\phi'\,\nabla u\big) -\big(v,\, (g\negthinspace\shortminus\negthinspace 1)H\, \partial_x u\big) -\big(v,\, \partial_x (\left(g\negthinspace\shortminus\negthinspace 1\right)H)u\big) =\big(v,\, f-\partial_x H \big).$$ The above weak form resembles that of the standard convection-diffusion-reaction (CDR) equation, and we will use this terminology throughout the paper, regardless of the physical meaning of these terms. 
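For concreteness, the switch function $g(u)$ and the derivative $\phi'(u)$ entering the diffusion coefficient can be evaluated as follows. This is a minimal Python sketch, not part of the published implementation; the value $\overline{u}=0.98$ matches the choice used later in the numerical examples, and the finite-difference cross-check of $\phi'$ is our own addition.

```python
import numpy as np

U_BAR = 0.98  # regularization parameter, chosen in (0.9, 1)

def g(u):
    """Regularized switch function: ~0 in the cavitation zone, ~1 in the pressure zone."""
    return np.arctan(u / (1.0 - U_BAR)) / np.pi + 0.5

def phi(u):
    """phi(u) = g(u)*u, so that grad(g(u) u) = phi'(u) grad u."""
    return g(u) * u

def phi_prime(u):
    """Analytical derivative phi'(u) = g'(u) u + g(u)."""
    eps = 1.0 - U_BAR
    g_prime = (1.0 / np.pi) * eps / (eps**2 + u**2)
    return g_prime * u + g(u)

u = np.linspace(-2.0, 2.0, 4001)
# the analytical phi' should agree with a centered finite difference of phi
fd = np.gradient(phi(u), u)
assert np.max(np.abs(phi_prime(u) - fd)) < 1e-2
```

Note how $\phi'(u)$ is close to one deep in the pressure zone and nearly vanishes in the cavitation zone, which is exactly why the problem becomes 'convection'-dominated there.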
However, observe that all terms on the left-hand side involve nonlinearities (since $g=g(u)$) and spatial variation of the coefficients ($H=H(x,y)$). We may also note that, in this particular setting, the 'convection' term involves only derivatives with respect to $x$. For easier comparison with previous developments for the (linear) CDR equation, let us introduce the abbreviations $$\begin{aligned} &k(u,x,y) = \tfrac{1}{12}H^3(x,y)\,\phi'(u),\quad &&\bm{a}(u,x,y) = [(g(u)-1)\,H(x,y),\,0]^\mathrm{T},\\ &s(u,x,y) = \partial_x \big((g(u)\negthinspace\shortminus\negthinspace 1)\,H(x,y)\big), \quad &&\hat{f}(x,y) = f(x,y)-\partial_x H(x,y). \end{aligned}$$ Hence, the weak form of the problem reads: Find $u\in \mathcal{W}$ such that $$\label{eq:weakAbb} \big(\nabla v,\, k(u,x,y) \nabla u\big) -\big(v, \bm{a}(u,x,y) \!\cdot\!\nabla u\big) -\big(v,\, s(u,x,y)\, u\big) =\big(v, \hat{f}(x,y) \big)$$ for all $v\in \mathcal{W}$. More compactly, we may write $$\label{eq:bilinearFormEquation} B(u; u, v) = L(v),$$ where, for a given $\hat{u}\in \mathcal{W}$, we denote by $B(\hat{u};\scalebox{1}{\textbullet}, \scalebox{1}{\textbullet})$ the bilinear form defined on $\mathcal{W}\times \mathcal{W}$ as $$\label{eq:B} B(\hat{u}; u, v) = \big(\nabla v,\, k(\hat{u},x,y) \nabla u\big) -\big(v,\, \bm{a}(\hat{u},x,y) \!\cdot\!\nabla u\big) -\big(v,\, s(\hat{u},x,y)\, u\big),$$ and $L(v)$ is given as $$\label{eq:L} L(v) = \big(v, \hat{f}(x,y) \big).$$ [\[rmk:artificialDiffusivity\]]{#rmk:artificialDiffusivity label="rmk:artificialDiffusivity"} As mentioned in the introduction, the most popular remedy to avoid unphysical oscillations in the solution of the Reynolds equation consists in including a term representing artificial diffusion (or using an upwind scheme that adds this diffusion indirectly), thus ensuring that the problem remains well-posed even in the cavitation region where $k(u,x,y)$ (nearly) vanishes. 
For the solution to converge to the correct result under mesh refinement, the artificial diffusivity must decrease with the element size $h$. The value of the artificial diffusivity can be justified, e.g., by the streamline diffusion method [@Lengiewicz2014], which is an upwind scheme based on a Petrov-Galerkin formulation. This approach leads to the form $$\label{eq:artificialdiffusion} D(\hat{u}; u, v) = \big(\hat{\bm{a}}\cdot\nabla v,\, \tfrac{h}{2} \nabla\cdot (\bm{a}\,u)\big) = -\big(\partial_xv,\, \tfrac{h}{2} \partial_x (a_x\,u)\big)$$ with the convection vector $\bm{a}=[a_x,\,0]^\mathrm{T}$ defined above and $\hat{\bm{a}} = \bm{a}/|\bm{a}|$. Note that, in the case of the Reynolds equation, the artificial diffusivity takes significant values only in the cavitation zone. As shown in [@Pfeil2023], the term derived in Eq. [\[eq:artificialdiffusion\]](#eq:artificialdiffusion){reference-type="eqref" reference="eq:artificialdiffusion"} is also exactly equivalent to the numerical diffusion obtained by the upwind difference technique, which is commonly used for the stabilization of FDM and FVM solutions of the Reynolds equation. Here, we will only use this artificial diffusion formulation for comparison and validation of our results in the numerical examples. # Stabilized finite element formulation {#stabilization} In this work, we consider only two-dimensional plane geometries and hence require a polygonal finite element partition of the computational domain $\Omega$. We may assume the partition to be quasi-uniform with an element diameter denoted as $h$. We construct the finite element spaces $\mathcal{W}_h \subset \mathcal{W}$ in the usual manner employing polynomial interpolation of uniform order throughout the computational domain. 
Assuming the same finite element spaces for the trial and test functions, we obtain the Galerkin discretization as $$\label{eq:weakFEM} B(u_h; u_h, v_h) = L(v_h).$$ It is well-known that the discretized CDR equation is prone to nonphysical oscillations in the case of convection-dominated problems. The same issue is observed when solving the Reynolds equation. In fact, the influence of cavitation within the model employed here always leads to the problem being 'convection'-dominated in a cavitation domain. Note that, without regularization, the diffusion term would exactly vanish in the cavitation domain. We choose a stabilization analogous to that used for the linear CDR equation, which has been studied extensively in previous work [@Codina1998; @Codina2000a; @Codina2000; @Principe2010]. A particularly detailed discussion of the approach employed here is presented in [@Principe2008]. The underlying idea is based on the variational multiscale (VMS) method, which relies on splitting the unknown function $u$ into the component that belongs to the chosen finite element space and a remainder (referred to as subscale and denoted by $\widetilde{u}$) that cannot be resolved by the finite element discretization. The corresponding space of subscales is denoted as $\widetilde{\mathcal{W}}$. Thus, we have $$u = u_h + \widetilde{u}, \quad \mathcal{W} = \mathcal{W}_h \oplus \widetilde{\mathcal{W}}.$$ Substituting the above decomposition into the variational form and separating equations with respect to the test functions yields [\[eq:stabilizedWFseparated\]]{#eq:stabilizedWFseparated label="eq:stabilizedWFseparated"} $$\begin{aligned} {6} \label{eq:stabilizedWFseparatedGT} &B(u_h;u_h, v_h) &\ +\ &B(u_h;\widetilde{u}, v_h)\, &= L(v_h), \\ \label{eq:stabilizedWFseparatedSS} &B(u_h;u_h, \widetilde{v}) &\ +\ &B(u_h;\widetilde{u}, \widetilde{v})\, &= L(\widetilde{v})\,.
\end{aligned}$$ Note that we used $B(u_h;\scalebox{1}{\textbullet}, \scalebox{1}{\textbullet})$, thereby computing the coefficients of the weak form based on the finite element approximation $u_h$. An alternative, referred to as *nonlinear subscales*, consists in including the contributions from the subscales in the computation of the coefficients, i.e., using instead $B(u;\scalebox{1}{\textbullet}, \scalebox{1}{\textbullet})$. Following the standard procedure of the VMS method (see, e.g., [@Codina2017a]), the second term in Eq. [\[eq:stabilizedWFseparatedGT\]](#eq:stabilizedWFseparatedGT){reference-type="eqref" reference="eq:stabilizedWFseparatedGT"} is integrated by parts. In this step, it is a common assumption that the subscales vanish on the element boundaries. Thus, we obtain $$\label{eq:stabilizedWFadjoint} B(u_h;u_h, v_h) + \sum_K \big( \widetilde{u}, \mathcal{L}^*(u,v_h)\big) _K \approx L(v_h),$$ where the subscript $K$ indicates integration and summation over all elements in the mesh. In the above equation, $\mathcal{L}^*$ is the adjoint of the differential operator $\mathcal{L}$. Several different approaches exist within this family of VMS methods, which mainly differ in the approximation of the subscale equation [\[eq:stabilizedWFseparatedSS\]](#eq:stabilizedWFseparatedSS){reference-type="eqref" reference="eq:stabilizedWFseparatedSS"}. As an important subclass of methods, residual-based approaches assume a subscale model of the form $$\label{eq:subscaleEvolution} \tau^{\shortminus 1}\,\widetilde{u} = \widetilde{P}\left( \hat{f} - \mathcal{L}(u,u_h) \right)$$ with a stabilization parameter $\tau$ that approximates the effect of the differential operator on the subscales. Furthermore, $\widetilde{P}$ denotes the projection onto the subscale space [@Castillo2019], here applied to the residual of the finite-element approximation.
In this work, we follow the concept of *orthogonal subgrid scales* (OSGS) [@Codina2000], i.e., we choose $\widetilde{P}=P^\perp$ to be the projection onto the space orthogonal to the finite element space. It can be rewritten as $P^\perp=I-P_h$, where $P_h$ is the projection onto the finite element space. Substituting Eq. [\[eq:subscaleEvolution\]](#eq:subscaleEvolution){reference-type="eqref" reference="eq:subscaleEvolution"} into [\[eq:stabilizedWFadjoint\]](#eq:stabilizedWFadjoint){reference-type="eqref" reference="eq:stabilizedWFadjoint"} then yields $$\label{eq:stabilizedSubsti} B(u_h;u_h, v_h) + \sum_K \big( \tau\,P^\perp( \hat{f} - \mathcal{L}(u,u_h)), \mathcal{L}^*(u,v_h)\big) _K \approx L(v_h).$$ However, when employing orthogonal subgrid scales, we can drastically simplify the above formulation by considering only those terms in the residual as well as the adjoint operator $\mathcal{L}^*$ that are essential for achieving stability. As discussed in detail in the literature on the OSGS approach, the orthogonal projection of the relevant terms is sufficient for achieving stability while maintaining optimal convergence [@Codina2008; @Castillo2019]. In our application, where we want to stabilize convection, it suffices to include the convection term in $\mathcal{L}$ and $\mathcal{L}^*$. 
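To make the role of $P_h$ (and hence $P^\perp = I - P_h$) concrete, the following self-contained sketch computes the $L^2$-projection onto a finite element space. It is our own one-dimensional illustration with piecewise-linear elements, not the two-dimensional implementation used later; the orthogonal component is then simply $f - P_h f$.

```python
import numpy as np

def p1_mass_matrix(x):
    """Assemble the 1D P1 (piecewise-linear) mass matrix on the nodes x."""
    n = len(x)
    M = np.zeros((n, n))
    for e in range(n - 1):
        h = x[e + 1] - x[e]
        M[e:e+2, e:e+2] += h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
    return M

def l2_project(f, x, nq=4):
    """Nodal coefficients of P_h f: solve M c = b with b_i = int f N_i dx."""
    n = len(x)
    b = np.zeros(n)
    xi, w = np.polynomial.legendre.leggauss(nq)  # Gauss rule on [-1, 1]
    for e in range(n - 1):
        h = x[e + 1] - x[e]
        xq = x[e] + (xi + 1.0) * h / 2.0
        wq = w * h / 2.0
        N = np.vstack([(x[e + 1] - xq) / h, (xq - x[e]) / h])  # local shape functions
        b[e:e+2] += N @ (wq * f(xq))
    return np.linalg.solve(p1_mass_matrix(x), b)

x = np.linspace(0.0, 1.0, 11)
c = l2_project(lambda t: np.sin(2.0 * np.pi * t), x)
# by construction, the remainder f - P_h f is L2-orthogonal to every basis function
```

A quick consistency check is that projecting a function already in the finite element space reproduces its nodal values exactly, so that its orthogonal component vanishes.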
Hence, we can expect to obtain a stable formulation by approximating the stabilization terms as[^4] $$\label{eq:stabi} \sum_K \big( \tau\,P^\perp\negthickspace\left( \hat{f} - \mathcal{L}(u,u_h) \right), \mathcal{L}^*(u,v_h)\big) _K \approx\big(\bm{a}\!\cdot\!\nabla v_h,\, \tau\, P^\perp(\bm{a}\!\cdot\!\nabla u_h)\big) \eqqcolon S(u_h;\, u_h, v_h)$$ or, considering $P^\perp=I-P_h$, $$\label{eq:stabTerm} S(u_h;\,u_h, v_h) = \big(\bm{a}\!\cdot\!\nabla v_h,\, \tau\, \bm{a}\!\cdot\!\nabla u_h\big) - \big(\bm{a}\!\cdot\!\nabla v_h,\, \tau\, P_h\big(\bm{a}\!\cdot\!\nabla u_h\big)\big).$$ The first term of the stabilization $S(u_h;\,u_h,v_h)$ can be interpreted as additional nonlinear diffusion and leads to a stable method. The second term, involving the projection onto the finite-element space, ensures optimal convergence under mesh refinement. Details on the computation of this projection are provided in the next section. The stabilization parameter $\tau$ is adapted from previous work on the linear CDR equation [@Codina2000; @Codina2000a]. It needs to be chosen such that the residual of the governing equation vanishes as the element size $h$ approaches zero, and the optimal convergence rate is attained in the asymptotic regime. A common approach to deriving a computable expression for the stabilization parameter is based on a Fourier analysis of the residual, see, e.g., [@Codina2002; @Principe2010; @Castillo2019]. From the discussion in the papers mentioned before, it is known that the stabilization parameter for the linear CDR equation, say $-\hat{k}\,\Delta u + \hat{\mathbf{a}}\cdot\nabla u + \hat{s}\,u=0$, can be chosen of the form $(c_1\,\hat{k}/h^2 + c_2\,|\hat{\mathbf{a}}|/h + \hat{s})^{\shortminus1}$.
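In code, this generic parameter for the linear CDR equation is a one-liner; the sketch below (our own illustration, assuming nonnegative coefficients that do not all vanish) also exhibits the two limiting regimes.

```python
def tau_linear_cdr(k, a_norm, s, h, c1=4.0, c2=2.0):
    """Stabilization parameter (c1*k/h^2 + c2*|a|/h + s)^(-1) for the linear
    CDR equation -k*Laplace(u) + a.grad(u) + s*u = f (c1, c2: linear elements).
    Assumes k, a_norm, s >= 0 with at least one of them positive."""
    return 1.0 / (c1 * k / h**2 + c2 * a_norm / h + s)

# limiting regimes: diffusion-dominated tau ~ h^2/(c1 k),
#                   convection-dominated tau ~ h/(c2 |a|)
assert abs(tau_linear_cdr(1.0, 0.0, 0.0, 0.1) - 0.1**2 / 4.0) < 1e-15
assert abs(tau_linear_cdr(0.0, 1.0, 0.0, 0.1) - 0.1 / 2.0) < 1e-15
```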
Here, we need to take into account the nonlinear and inhomogeneous nature of $k(u,x,y)$, $\bm{a}(u,x,y)$, and $s(u,x,y)$; thus, $\tau(u,x,y)$ will be evaluated at every Gauss point when integrating the stabilization term [\[eq:stabi\]](#eq:stabi){reference-type="eqref" reference="eq:stabi"}. We obtain $$\begin{aligned} \nonumber \tau(u,x,y) &=\left(\frac{c_1}{h^2} \left|k(u,x,y)\right| + \frac{c_2}{h} \left| \bm{a}(u,x,y)\right| + \left|s(u,x,y)\right|\right)^{-1} \\ &= \left(\frac{c_1}{12\,h^2} \left|H^3(x,y) \phi'(u)\right| + \frac{c_2}{h} \left| (g(u)\negthinspace\shortminus\negthinspace 1)H(x,y)\right| + \left|\partial_x \big((g(u)\negthinspace\shortminus\negthinspace 1)\,H(x,y)\big)\right|\right)^{-1}, \label{eq:tau} \end{aligned}$$ where $c_1,\,c_2$ are algorithmic constants commonly chosen (for linear elements) as $$c_1=4,\quad c_2=2.$$ In summary, the stabilized finite element method reads: Find $u_h\in \mathcal{W}_h$ such that $$\label{eq:stabFEMfinal} B(u_h;u_h, v_h) + S(u_h;\,u_h, v_h) = L(v_h),$$ for all $v_h\in \mathcal{W}_h$ with $B$, $L$, and $S$ given in Eqs. [\[eq:B\]](#eq:B){reference-type="eqref" reference="eq:B"}, [\[eq:L\]](#eq:L){reference-type="eqref" reference="eq:L"}, and [\[eq:stabTerm\]](#eq:stabTerm){reference-type="eqref" reference="eq:stabTerm"} and the stabilization parameter $\tau$ defined in [\[eq:tau\]](#eq:tau){reference-type="eqref" reference="eq:tau"}. # Computing the projection {#projection} The projections of the convection term present in the stabilized weak form are a priori unknown functions in the finite-element space; we will denote them as $$\xi_h = P_h\big(\bm{a}\!\cdot\!\nabla u_h\big).$$ Computing these functions $\xi_h$ corresponds to solving $$\label{eq:projection} \big(\eta_h,\, \bm{a}\!\cdot\!\nabla u_h\big) - \big(\eta_h, \xi_h \big) = 0$$ with test functions $\eta_h\in \mathcal{W}_h$ (i.e., they consist of the same basis functions as $v_h$). Combining Eqs.
[\[eq:stabFEMfinal\]](#eq:stabFEMfinal){reference-type="eqref" reference="eq:stabFEMfinal"} and [\[eq:projection\]](#eq:projection){reference-type="eqref" reference="eq:projection"}, we obtain the system of equations [\[eq:system\]]{#eq:system label="eq:system"} $$\begin{aligned} B_s(u_h;u_h, v_h) - \big(\bm{a}\!\cdot\!\nabla v_h,\, \tau\, \xi_h \big) &=L(v_h)\label{eq:system_a}\\ \big(\eta_h,\, \bm{a}\!\cdot\!\nabla u_h\big) - \big(\eta_h, \xi_h \big) &= 0, \label{eq:system_b}\end{aligned}$$ where we introduced, for notational convenience, $$B_s(u_h; u_h, v_h) = B(u_h;u_h, v_h) + \big(\bm{a}\!\cdot\!\nabla v_h,\, \tau\, \bm{a}\!\cdot\!\nabla u_h\big).$$ In this work, we compute the projections $\xi_h$ *implicitly* along the lines of the approach suggested in [@Codina2008]. That is to say, at a given iteration $i$, the solution $u_h^{i+1}$ as well as the projections $\xi_h^{i+1}$ are both treated as unknowns and obtained by solving the following coupled system of equations: [\[eq:system_imp\]]{#eq:system_imp label="eq:system_imp"} $$\begin{aligned} B_s(u_h^i;u_h^{i+1}, v_h) - \Big(\bm{a}\!\cdot\!\nabla v_h,\, \tau\, \xi_h^{i+1} \Big) &=L(v_h), \label{eq:system_imp_a}\\ \Big(\eta_h,\, \bm{a}\!\cdot\!\nabla u_h^{i+1} \Big) - \Big(\eta_h, \xi_h^{i+1} \Big) &= 0. \label{eq:system_imp_b}\end{aligned}$$ Details on the linearization of the Galerkin terms in $B_s(u_h^i;u_h^{i+1}, v_h)$ will be presented in Section [6](#linearization){reference-type="ref" reference="linearization"}.
For clarity, we rewrite the system [\[eq:system_imp\]](#eq:system_imp){reference-type="eqref" reference="eq:system_imp"} in matrix form, assuming, for instance, a finite-element version using standard piece-wise polynomial shape functions: $$\left[\begin{array}{ll} \mathbf{K} & -\mathbf{P}_\tau \\ \mathbf{P} & -\mathbf{M} \end{array}\right] \left[\begin{array}{rr} \mathbf{u}\\ \bm{\xi} \end{array}\right] = \left[\begin{array}{rr} \mathbf{F}\\ \mathbf{0} \end{array}\right].$$ Here, the symbols $\mathbf{u}$ and $\bm{\xi}$ denote the vectors of unknowns, i.e., the coefficients of the finite-element representation (typically nodal values) of $u_h^{i+1}$ and $\xi_h^{i+1}$. The definition of the matrices $\mathbf{K},\,\mathbf{P}_\tau,\,\mathbf{P},\,\mathbf{M}$ follows directly from comparing with Eqs. [\[eq:system_imp\]](#eq:system_imp){reference-type="eqref" reference="eq:system_imp"}.[^5] Thus, in the implicit version of the linearization, we formally introduced additional degrees of freedom representing the projection of the stabilization term onto the finite-element space. Nevertheless, these additional degrees of freedom can be eliminated by static condensation, yielding $$\left(\mathbf{K} - \mathbf{P}_\tau \mathbf{M}^{\negthinspace\shortminus\negthinspace 1}\mathbf{P}\right)\mathbf{u} = \mathbf{F}\,.$$ When using a finite element version that allows *mass lumping* -- i.e., the Gram matrix $\mathbf{M}$ can be diagonalized -- the cost of computing the additional term $\mathbf{P}_\tau \mathbf{M}^{\negthinspace\shortminus\negthinspace 1}\mathbf{P}$ is small compared to the solution of the final system of equations. We note that alternatives to this implicit approach are generally possible. In particular, a straightforward explicit version consists in replacing ${i+1}$ by $i$ in Eq. [\[eq:system_imp_b\]](#eq:system_imp_b){reference-type="eqref" reference="eq:system_imp_b"} and the second term in Eq. 
[\[eq:system_imp_a\]](#eq:system_imp_a){reference-type="eqref" reference="eq:system_imp_a"}, thus using the projections at the previous iteration $i$ when computing $u^{i+1}$. This approach is essentially a staggered scheme involving alternating computations of $u$ and $\xi$. However, it is known from numerical studies on the current and previous applications that this explicit scheme leads to poor convergence of the nonlinear terms. In transient problems, on the other hand, the usually preferred option relies on using the projections at the previous *time step*; hence, they are not updated by the nonlinear solver and, consequently, do not affect the convergence of the nonlinear solver. The resulting error in the projections is typically negligible for sufficiently small time steps. Such an approach may also prove beneficial for the solution of the Reynolds equation when it is part of a larger system in which the gap function $H$ varies in time. However, for the current work, where we are interested in the stationary case, we retain the implicit version outlined above. # Linearization of the Galerkin terms {#linearization} As all terms in the weak form [\[eq:bilinearFormEquation\]](#eq:bilinearFormEquation){reference-type="eqref" reference="eq:bilinearFormEquation"} are nonlinear, we must choose a suitable linearization and employ an iterative solver for obtaining a converged solution. The simplest approach is obtained by a fixed-point iteration (also referred to as Picard's method): $$B_{\mathrm{P}}(u^{i}; u^{i+1}, v) = \big(\nabla v,\, k(u^{i},x,y) \nabla u^{i+1}\big) -\big(v,\, \bm{a}(u^{i},x,y) \!\cdot\!\nabla u^{i+1}\big) -\big(v,\, s(u^{i},x,y)\, u^{i+1}\big).$$ This approach simply corresponds to using the result $u^i$ of a previous iteration $i$ in the integration of the stiffness matrix for computing the solution $u^{i+1}$.
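Each such iteration requires solving the coupled linear system from the previous section, or equivalently its statically condensed form $(\mathbf{K}-\mathbf{P}_\tau\mathbf{M}^{-1}\mathbf{P})\,\mathbf{u}=\mathbf{F}$. The equivalence of the two is easy to verify on a small schematic system; the matrices below are random stand-ins, not an actual finite element assembly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# schematic stand-ins for the assembled matrices (NOT an actual FE assembly):
K  = 4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # Galerkin + first stabilization term
Pt = 0.1 * rng.standard_normal((n, n))                    # tau-weighted projection matrix P_tau
P  = 0.1 * rng.standard_normal((n, n))                    # projection-equation matrix
M  = np.diag(rng.uniform(1.0, 2.0, size=n))               # lumped (diagonal) Gram matrix
F  = rng.standard_normal(n)

# coupled system [[K, -P_tau], [P, -M]] [u; xi] = [F; 0] ...
A = np.block([[K, -Pt], [P, -M]])
u_coupled = np.linalg.solve(A, np.concatenate([F, np.zeros(n)]))[:n]

# ... versus static condensation (K - P_tau M^{-1} P) u = F;
# with a lumped M, applying M^{-1} is just a diagonal scaling
u_cond = np.linalg.solve(K - Pt @ np.diag(1.0 / np.diag(M)) @ P, F)
assert np.allclose(u_coupled, u_cond)
```

The diagonal structure of the lumped Gram matrix is precisely what keeps the cost of forming the condensed operator small.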
Analogously, the stabilization terms are linearized by approximating $$\begin{gathered} S_{\mathrm{P}}(u^i;\,u_h^{i+1},v_h) = \big(\bm{a}(u^i,x,y)\!\cdot\!\nabla v_h,\, \tau(u^i,x,y)\, \bm{a}(u^i,x,y)\!\cdot\!\nabla u_h^{i+1}\big) \\ - \big(\bm{a}(u^i,x,y)\!\cdot\!\nabla v_h,\, \tau(u^i,x,y)\, P_h\big(\bm{a}(u^i,x,y)\!\cdot\!\nabla u_h^{i+1}\big)\big). \label{eq:stabTermPicard}\end{gathered}$$ In addition, we make use of the classical Newton-Raphson method in order to obtain significantly faster convergence when the initial guess is sufficiently close to the converged solution. To this end, we approximate the diffusive term by a first-order Taylor expansion at $u^i$: $$\begin{aligned} k(u)\,\nabla u &\approx k(u^i)\,\nabla u^i + \left.\partial_u\left(k(u)\,\nabla u\right)\right|_{u^i} (u^{i+1}-u^i) \\ & = k(u^i)\,\nabla u^i + k'(u^i)\,\nabla u^i\,(u^{i+1}-u^i) + k(u^i)\,\nabla (u^{i+1}-u^i) \\ & = k(u^i)\,\nabla u^{i+1} +k'(u^i)\,\nabla u^i\,u^{i+1} -k'(u^i)\,\nabla u^i\,u^i.\end{aligned}$$ For convenience, we rewrite the convective and reactive terms (the second term in Eq. 
[\[eq:weak\]](#eq:weak){reference-type="eqref" reference="eq:weak"}) as $$\partial_x \big(\left(g\negthinspace\shortminus\negthinspace 1\right)Hu\big) = \partial_x \big(\psi(u)\,H\big) %= \psi(\uk)\,\partial_x H + H\,\partial_x \psi(\uk) = \psi(u)\,\partial_x H + H\, \psi'(u)\,\partial_xu$$ with $\psi(u) = (g-1)\,u$, and we obtain an analogous linearization as $$\begin{aligned} &\psi(u)\,\partial_x H + H\, \psi'(u)\,\partial_xu\\ &\approx \psi(u^i)\,\partial_x H + H\, \psi'(u^i)\,\partial_xu^i + \left.\partial_u\left( \psi(u)\,\partial_x H + H\, \psi'(u)\,\partial_xu\right)\right|_{u^i} (u^{i+1}-u^i)\\ &= \psi(u^i)\,\partial_x H + H\, \psi'(u^i)\,\partial_xu^i + \left( \psi'(u^i)\,\partial_x H + H\, \psi''(u^i)\,\partial_xu^i \right)(u^{i+1}-u^i) + H\, \psi'(u^i)\,\partial_x (u^{i+1}-u^i)\\ &= H\, \psi'(u^i)\,\partial_x u^{i+1} + \left( \psi'(u^i)\,\partial_x H + H\, \psi''(u^i)\,\partial_xu^i \right)(u^{i+1}-u^i) + \psi(u^i)\,\partial_x H.\end{aligned}$$ Hence, the linearization of the (unstabilized) weak form [\[eq:bilinearFormEquation\]](#eq:bilinearFormEquation){reference-type="eqref" reference="eq:bilinearFormEquation"} based on the Newton-Raphson method reads $$\begin{gathered} B_\mathrm{N}^{i+1}(v,\,u) = \left(\nabla v,\, k^{\,i}\,\nabla u^{i+1}\right) +\left(\nabla v,\, k'^{\,i}\,\nabla u^i\,u^{i+1} \right) -\left(\nabla v,\, k'^{\,i}\,\nabla u^i\,u^i\right) \\ -\left(v,\, H\, \psi'^{\,i}\,\partial_x u^{i+1}\right) - \left(v,\, \chi^iu^{i+1} \right) +\left(v,\, \chi^iu^i \right) - \left(v,\, \psi^{\,i}\,\partial_x H\right) = L(v)\end{gathered}$$ with the abbreviations $$\chi^i \coloneqq \psi'^{\,i}\,\partial_x H(x,y) + H(x,y)\, \psi''^{\,i}\,\partial_xu^i, \qquad k^i \coloneqq k(u^i,x,y), \qquad \psi^i \coloneqq \psi(u^i).$$ Implementing higher-order approximations of the stabilization terms $S(v_h,u_h)$ and the involved projections is cumbersome, mainly due to the relatively complicated dependency of the stabilization parameter $\tau$ on the solution $u$. 
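The derivatives $\psi'$ and $\psi''$ appearing in $\chi^i$ are tedious to derive by hand for the regularized switch function. A small sympy sketch (our own aid, using the example value $\overline{u}=0.98$) generates them symbolically and cross-checks $\psi'$ against a central finite difference:

```python
import sympy as sp

u = sp.symbols('u', real=True)
ubar = sp.Rational(98, 100)            # regularization parameter (example value)
g = sp.atan(u / (1 - ubar)) / sp.pi + sp.Rational(1, 2)
psi = (g - 1) * u                      # psi(u) = (g(u) - 1) u

psi1 = sp.diff(psi, u)                 # psi'(u): convection/reaction coefficient
psi2 = sp.diff(psi, u, 2)              # psi''(u): enters the Newton coefficient chi^i

f, f1, f2 = (sp.lambdify(u, e, 'math') for e in (psi, psi1, psi2))

# cross-check the symbolic first derivative with central finite differences
h = 1e-6
for v in (-1.5, -0.3, 0.0, 0.4, 1.2):
    fd1 = (f(v + h) - f(v - h)) / (2 * h)
    assert abs(fd1 - f1(v)) < 1e-4
```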
We will adhere to using the linearization by Picard's method for the stabilization terms as described above (Eq. [\[eq:stabTermPicard\]](#eq:stabTermPicard){reference-type="eqref" reference="eq:stabTermPicard"}) even when employing Newton's method for the standard Galerkin terms. This simplification is made in the vast majority of works on subgrid-scale methods, and our numerical results confirm the rapid convergence of this approximated Newton-Raphson scheme. # Shock-capturing {#shockCapturing} The stabilization outlined before yields a stable method that converges at an optimal rate and avoids unphysical global oscillations. On the other hand, local oscillations may still arise in confined regions where the solution changes abruptly. While this behavior is typical in boundary layers, it can, in our current application, also occur in the transition between cavitation and pressure zones, where the behavior of the solution changes significantly. Such oscillations manifest themselves in 'over- and undershooting' phenomena, which are ubiquitous in many interpolation problems. The presence of such oscillations is not necessarily problematic, and their magnitude decays under mesh refinement in the stabilized method. Nevertheless, it is sometimes desirable to suppress oscillations and obtain a smoother solution, particularly on relatively coarse meshes. The corresponding techniques are often referred to as 'shock-capturing,' and their further advancement is still an active research area not restricted to stabilized finite-element methods. Essentially, many of these techniques are based on adding local artificial diffusion of a magnitude that depends on the gradient of the solution. Hence, the shock-capturing term is usually nonlinear, and its magnitude must be carefully selected in order to yield smoother results without deviating too far from the correct solution [@Castillo2014; @Badia2014].
Here, we will not delve into the details of different shock-capturing approaches but simply apply one established version with proven capabilities to reduce local oscillations without significantly deteriorating accuracy. The idea -- as introduced in [@Codina1993] and slightly modified in [@Knopp2002] -- is to introduce artificial diffusion that is scaled by the strong residual of the governing PDE. Hence, we add a term of the form $$D(\hat{u}_h;u_h,v_h)= \sum_K \big(\nabla v_h,\, \tau_s(\hat{u}_h) \nabla u_h\big)_K$$ with $$\tau_s(u) = R_K^*(u) \, \sigma_K(u)$$ and $$R_K^*(u) = \frac{|| \mathcal{L}(u,u) - \hat{f}||_{L^2(K)}}{\alpha\,||\hat{f}||_{L^2(K)} + ||u||_{H^1(K)}}, \qquad \sigma_K(u) = \frac{h_K}{2}\,\operatorname{max}\negthinspace\left(0,\,\beta-\frac{1}{P_K}\right), \qquad P_K = \frac{h_K\,R_K^*(u)}{2\,k(u)},$$ where the subscript $K$ again indicates evaluation on each element. Thus, the artificial diffusivity is scaled by the element-wise $L^2$ norm of the strong residual, adequately normalized by the element-wise $H^1$ norm of the solution and the element-wise $L^2$ norm of the right-hand side. The factor $\alpha$ is introduced here for dimensional consistency and chosen as $$\alpha = \frac{\sum_K ||u||_{H^1(K)}}{\sum_K ||\hat{f}||_{L^2(K)}}.$$ Furthermore, $\sigma_K(u)$ is a limiter function, ensuring positive bounded values and yielding small artificial diffusivity in regions where the existing diffusivity $k(u)$ is large. The algorithmic constant $\beta$ is chosen as 0.7 in our numerical examples. For details on this approach, the reader is referred to [@Codina1993]. # Numerical examples {#numex} ## Convergence test {#sec:convergence} We begin the numerical studies by assessing a simple example with a known smooth solution in the computational domain $$\Upomega=\left\{\left. (x,\,y) \in \mathbb{R}^2\ \right|\ 0\leq x\leq 2\uppi,\ \shortminus 1\leq y\leq 1\right\}.$$ We construct this problem using a so-called manufactured solution.
That is to say, we choose $u(x,y)$ and substitute this function into the governing PDE to obtain the corresponding forcing term. In our example, we use $$\label{eq:manuCosSin} u(x,y) = (1-\cos(2x)) \sin(x)(1+\cos(\uppi\,y))/6$$ as depicted in Fig. [\[fig:manu_u\]](#fig:manu_u){reference-type="ref" reference="fig:manu_u"}. Note that the solution takes positive and negative values (i.e., involves a pressure zone and a cavitation zone) and vanishes at the domain's boundary. The gap function is chosen according to Eq. [\[eq:gapFunction\]](#eq:gapFunction){reference-type="eqref" reference="eq:gapFunction"} with the parameters $\zeta = \tfrac{1}{2}$, $x_a = \uppi$. In order to ensure the method's robustness, different values of the regularization parameter $\overline{u}$ have been tested, namely $\overline{u} = 0.9,\,0.91,\,...\,,\,0.99$, leading to almost identical results. The solutions presented in this section have been computed using $\overline{u}=0.98$. Substituting Eq. [\[eq:manuCosSin\]](#eq:manuCosSin){reference-type="eqref" reference="eq:manuCosSin"} into the governing PDE yields the forcing function $f(x,y)$ depicted in Fig. [\[fig:manu_f\]](#fig:manu_f){reference-type="ref" reference="fig:manu_f"}. We refrain from printing the lengthy expressions describing this function; however, a Matlab function for computing $f(x,y)$ for this example is publicly available [@Gravenkamp2023e]. Homogeneous Dirichlet conditions are prescribed at the boundary, and the initial guess is chosen as $u_0(x,y) = 1$ inside the domain. The computational domain is discretized by starting with three quadrilateral elements along the $x$-direction (of size $\sfrac{2}{3}\,\uppi \times 2$) and consecutively dividing each element into four to obtain a series of consistently refined meshes. As an example, Fig. 
[\[fig:manu_uVSx_stab\]](#fig:manu_uVSx_stab){reference-type="ref" reference="fig:manu_uVSx_stab"} displays the computed solution along the line $y=0$ when using a mesh of $96\times 32$ elements. The comparison against the analytical solution shows no significant discrepancies when employing the stabilized finite element method. In contrast, if we do not include the stabilization term (but leave all other parameters unchanged), we are unable to find a converged solution. Figure [\[fig:manu_uVSx_nostab\]](#fig:manu_uVSx_nostab){reference-type="ref" reference="fig:manu_uVSx_nostab"} shows the solution after 50 fixed-point iterations, revealing strong oscillations in the cavitation region, which prevent convergence in this example. However, it should be noted that this behavior is strongly mesh-dependent, and in this simple example with a smooth solution, the fixed-point iteration did converge for some of the meshes we tested. To get a notion of the amount of stabilization included in the method, Fig. [\[fig:manu_tau\]](#fig:manu_tau){reference-type="ref" reference="fig:manu_tau"} shows the variation of the stabilization parameter $\tau(x,y)$ within the entire domain for the coarsest and finest mesh utilized in this example. To better assess the accuracy of the stabilized method, we compute the normalized $L^2$ norm of the error for different mesh sizes, i.e., $$\epsilon(u_h) = \frac{||u-u_h||_{L^2}}{||u||_{L^2}}.$$ This error measure decreases proportionally to $h^2$, which is the optimal convergence rate in the case of linear elements; see Fig. [\[fig:manu_convergence\]](#fig:manu_convergence){reference-type="ref" reference="fig:manu_convergence"}. For comparison, we show the corresponding results when employing artificial diffusion (as described in Remark [\[rmk:artificialDiffusivity\]](#rmk:artificialDiffusivity){reference-type="ref" reference="rmk:artificialDiffusivity"}) instead of the proposed OSGS approach. 
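For illustration, the observed order of convergence can be extracted from the normalized errors on two successively refined meshes; the error values in this sketch are illustrative, not taken from our figures:

```python
import math

# Estimating the observed order of convergence from the normalized L2 errors
# on two successively refined meshes (the error values here are illustrative).

def observed_order(err_coarse, err_fine, refinement=2.0):
    return math.log(err_coarse / err_fine) / math.log(refinement)

# second-order behavior: halving h divides the error by ~4
assert abs(observed_order(4.0e-3, 1.0e-3) - 2.0) < 1e-12
# first-order behavior (e.g., plain artificial diffusion): the error only halves
assert abs(observed_order(4.0e-3, 2.0e-3) - 1.0) < 1e-12
```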
While the artificial diffusion does allow the method to converge, it leads to only first-order convergence and overall significantly larger errors. However, it should be acknowledged that the OSGS approach causes slightly larger computational costs for the same mesh, as it requires the computation of the additional matrices related to the projections (cf. Eq. [\[eq:system_imp\]](#eq:system_imp){reference-type="eqref" reference="eq:system_imp"}), as well as their static condensation. In Fig. [\[fig:manu_residual\]](#fig:manu_residual){reference-type="ref" reference="fig:manu_residual"}, we depict the convergence of the residual when employing fixed-point iteration and Newton's method in the proposed approach -- again for the discretization using $96\times 32$ elements. We obtain the expected asymptotic convergence rates of approximately one and two, respectively, when averaging the last three points of the depicted graphs. Note that Newton's method tends to fail when starting with an initial guess far from the true solution. For this reason, we usually perform a few fixed-point iterations (in this example four) before switching to Newton's method for faster convergence. ## Example involving large gradients {#sec:boundaryLayer} As a second example, we consider another manufactured solution that involves large gradients both in the field $u(x,y)$ as well as the forcing term $f(x,y)$, see Fig. [\[fig:boundaryLayer\]](#fig:boundaryLayer){reference-type="ref" reference="fig:boundaryLayer"}. The solution is chosen as $$u(x,y) = \frac{1}{4}\Big(\frac{1-\operatorname{e}^{\frac{100}{2\uppi} x}}{1-\operatorname{e}^{100}}-1+\tfrac{1}{2}(\cos(\tfrac{x}{2})+1)\Big) (1+\cos(\uppi y)).$$ The computational domain, gap function, regularization, as well as the boundary conditions, initial guess, and meshes, are identical to the previous example in Section [8.1](#sec:convergence){reference-type="ref" reference="sec:convergence"}. 
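As a quick sanity check, the manufactured solution above can be evaluated directly to confirm that it vanishes on the boundary of the domain (a minimal sketch; note that $\operatorname{e}^{100}\approx 2.7\cdot 10^{43}$ is still well within double-precision range):

```python
import math

# Direct evaluation of the manufactured solution above, confirming that it
# vanishes on the boundary of the domain 0 <= x <= 2*pi, -1 <= y <= 1.

def u(x, y):
    layer = (1.0 - math.exp(100.0 / (2.0 * math.pi) * x)) / (1.0 - math.exp(100.0))
    return 0.25 * (layer - 1.0 + 0.5 * (math.cos(x / 2.0) + 1.0)) \
        * (1.0 + math.cos(math.pi * y))

for x, y in [(0.0, 0.3), (2.0 * math.pi, 0.5), (1.0, 1.0), (3.0, -1.0)]:
    assert abs(u(x, y)) < 1e-12  # zero on all four edges

assert abs(u(math.pi, 0.0)) > 0.1  # nonzero in the interior
```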
Again, modifying the regularization parameter has little effect on the convergence of the method, and the presented results are computed using $\overline{u} = 0.98$. The solution along the line $y=0$ when using the mesh of $96\times 32$ elements is depicted in Fig. [\[fig:boundaryLayer_uVSx\]](#fig:boundaryLayer_uVSx){reference-type="ref" reference="fig:boundaryLayer_uVSx"}. When employing the stabilized method, this discretization yields a sufficiently accurate solution. In contrast, severe oscillations are still visible without stabilization. The effect of incorporating the shock-capturing term is demonstrated using a coarser mesh of $24\times 8$ elements in Fig. [\[fig:boundaryLayer_uVSx_h3\]](#fig:boundaryLayer_uVSx_h3){reference-type="ref" reference="fig:boundaryLayer_uVSx_h3"}. While the stabilization leads to optimally convergent results and avoids global oscillations, local oscillations of large amplitude can still occur near the boundary where large gradients are present in the exact solution. In this case, the shock-capturing approach helps reduce these oscillations. On the other hand, note that the stabilized method gives accurate results on sufficiently fine meshes even without shock-capturing -- at least in this simple example. The graph in Fig. [\[fig:boundaryLayer_uVSx\]](#fig:boundaryLayer_uVSx){reference-type="ref" reference="fig:boundaryLayer_uVSx"} was computed without shock-capturing, and there is no visible difference when incorporating it. It should also be noted that shock-capturing can have undesired effects on the convergence both with respect to mesh refinement and convergence of the nonlinear terms. In the approach we used, this effect depends strongly on the parameter $\beta$. Large values of $\beta$ lead to smoother results but increase the overall error of the numerical solution and may deteriorate the convergence of the nonlinear solver. 
Here, we chose a value of $\beta = 0.7$, which has been found to yield a suitable trade-off between effective shock-capturing and accuracy. From the results of the convergence study in Fig. [\[fig:boundaryLayer_convergence\]](#fig:boundaryLayer_convergence){reference-type="ref" reference="fig:boundaryLayer_convergence"}, we can observe that the shock-capturing then leads to only a slight increase in the overall error. Regarding the convergence of the nonlinear solver, we mainly see an effect on Newton's method; see Fig. [\[fig:boundaryLayer_residual\]](#fig:boundaryLayer_residual){reference-type="ref" reference="fig:boundaryLayer_residual"}. This can be attributed to the fact that the nonlinear coefficient (involving element-wise integration of the strong residual) must be computed based on the previous iteration; hence, this term is still linearized by a simple fixed-point iteration, even when using Newton's method for all other terms. The results in this figure are again evaluated for the finest mesh, but those for coarser meshes show a similar trend. ## Realistic example {#sec:realExample} As a final numerical study, we consider a more realistic scenario of a bearing, leading to a typical distribution of pressure and cavitation zones. The example is based on one described in detail in [@Pfeil2023]. Only the boundary conditions are simplified here for conciseness. The computational domain is the same as in the previous examples (i.e., a rectangle of dimensions $2\uppi\times 2$). The gap function is defined by the parameters $\zeta = 0.6$, $x_a = \tfrac{7}{9}\uppi$ ($140^\circ$ in a cylindrical coordinate system). Homogeneous Dirichlet boundary conditions are applied. The initial guess is again chosen as $u_0(x,y)=1$. Figure [\[fig:real_u\]](#fig:real_u){reference-type="ref" reference="fig:real_u"} shows the solution in the entire computational domain, computed using a mesh of $100\times 32$ elements. 
The result displays the typical smooth behavior inside the pressure zone and a rather abrupt transition to the cavitation zone. For a more in-depth discussion of the underlying physics and the relevance within the scope of bearing simulations, we refer the interested reader to [@Pfeil2023]. Similar to the example in Section [8.2](#sec:boundaryLayer){reference-type="ref" reference="sec:boundaryLayer"}, the solution features large gradients near the boundary $x=2\uppi$. This gives rise to oscillations that are mitigated when employing the shock-capturing approach. Figure [\[fig:real_uVSx\]](#fig:real_uVSx){reference-type="ref" reference="fig:real_uVSx"} shows the solution along the line $y=0$. It can be observed that the proposed stabilized method avoids global oscillations inside the domain. The shock-capturing approach successfully suppresses the local oscillations near the boundary without causing any visible deviation elsewhere in the computational domain. These results were once again computed using a regularization parameter of $\overline{u}=0.98$. In addition, we present in Fig. [1](#fig:real_uVSx_ubar){reference-type="ref" reference="fig:real_uVSx_ubar"} the solution along $y=0$ using different values of $\overline{u}$. Only small deviations can be noticed for rather low values of $\overline{u}\leq 0.94$, confirming that the model is robust for a sufficient range of reasonable values. ![Solution along the line $y=0$ for different values of the regularization parameter $\overline{u}$.[\[fig:real_uVSx_ubar\]]{#fig:real_uVSx_ubar label="fig:real_uVSx_ubar"}](reynoldsCavitation_real_uVSx_ubar.eps){#fig:real_uVSx_ubar width="50%"} # Conclusion The proposed stabilized finite element method for the Reynolds equation follows rather straightforwardly from the existing approaches available for the CDR equation when taking into account the nonlinearities inherent in the cavitation model. 
We showed that it is sufficient to include one additional term in order to stabilize convection and demonstrated numerically the optimal convergence of the resulting method. Furthermore, we have found that local oscillations, which can occur in regions exhibiting large gradients, can effectively be suppressed by existing shock-capturing techniques. # Acknowledgements {#acknowledgements .unnumbered} H. Gravenkamp acknowledges grant CEX2018-000797-S funded by the Ministerio de Ciencia e Innovación, MCIN/AEI/10.13039/501100011033. S. Pfeil acknowledges the financial support by the German Research Foundation, Germany (DFG), project no. 490625563, grant no. WO 2085/8-1. R. Codina acknowledges the support received from the ICREA Acadèmia Research Program of the Catalan Government. [^1]: In general, terms involving explicit time-dependency may also appear in the Reynolds equation [\[eq:reynolds\]](#eq:reynolds){reference-type="eqref" reference="eq:reynolds"} in some applications, but we will not discuss the transient cases in this work. [^2]: The film fraction (also known as density ratio) $\vartheta$ is a measure of the relative amount of liquid in the cavitated fluid film. [^3]: The idea of describing the complementary unknowns $p$ and $\vartheta$ by a common global unknown $u$ was proposed by Shi and Paranjpe [@Shi2002]. In their work, they use $p$ and $\vartheta$ in the strong form of the differential equation and substitute these quantities by $u$ after discretization, meaning that $p$ and $\vartheta$ are both interpolated separately, and the substitution with $u$ is performed afterwards with respect to the discrete nodal quantities. In contrast, we interpret $u$ as a continuous field constituting the global unknown in our strong form. Alternatively, it is possible to employ concepts of mixed finite element formulations and treat $p$ and $\vartheta$ as separate unknown fields [@Lengiewicz2014]. 
[^4]: Keep in mind that $\bm{a} = \bm{a}(u,x,y)$ and $\tau = \tau(u,x,y)$, which is omitted in the following equations for readability. Hence, the stabilization term is nonlinear. [^5]: In some applications of orthogonal subgrid scales, the stabilization parameter $\tau$ is assumed constant in the entire domain (hence $\mathbf{P}_\tau = \tau\, \mathbf{P}^\mathrm{T}$) or approximated as being constant within each element. In our case, $\tau = \tau(u,x,y)$ is a rather complicated function of the solution $u$ as well as the spatial coordinates; hence, it is evaluated at every Gauss point.
--- abstract: | Explicit formulae are given for the nine possible induced matrix norms corresponding to the $1$-, $2$-, and $\infty$-norms for Euclidean space. The complexity of computing these norms is investigated. author: - "Andrew D. Lewis[^1]" date: 2010/03/20 title: "A top nine list: Most popular induced matrix norms" --- **Keywords.** Induced norm. **AMS Subject Classifications (2010).** 15A60 # Introduction Arguably the most commonly used norms for real Euclidean space $\mathbb{R}^n$ are the norms $\lVert\cdot\rVert_1$, $\lVert\cdot\rVert_2$, and $\lVert\cdot\rVert_\infty$ defined by $$\lVert\boldsymbol{x}\rVert_1=\sum_{j=1}^n\lvert x_j\rvert,\quad \lVert\boldsymbol{x}\rVert_2=\left(\sum_{j=1}^n\lvert x_j\rvert^2\right)^{1/2},\quad \lVert\boldsymbol{x}\rVert_\infty=\max\{\lvert x_1\rvert,\dots,\lvert x_n\rvert\},$$ respectively, for $\boldsymbol{x}=(x_1,\dots,x_n)\in\mathbb{R}^n$. Let $\textup{L}(\mathbb{R}^n;\mathbb{R}^m)$ be the set of linear maps from $\mathbb{R}^n$ to $\mathbb{R}^m$, which we identify with the set of $m\times n$ matrices in the usual way. If $\boldsymbol{A}\in\textup{L}(\mathbb{R}^n;\mathbb{R}^m)$ and if $p,q\in\{1,2,\infty\}$ then the norm of $\boldsymbol{A}$ induced by the $p$-norm on $\mathbb{R}^n$ and the $q$-norm on $\mathbb{R}^m$ is $$\lVert\boldsymbol{A}\rVert_{p,q}=\sup\{\lVert\boldsymbol{A}(\boldsymbol{x})\rVert_q\;|\enspace\lVert\boldsymbol{x}\rVert_p=1\}.$$ This is well-known to define a norm on $\textup{L}(\mathbb{R}^n;\mathbb{R}^m)$. There are other equivalent characterisations of the induced norm, but the one given above is the only one we will need. We refer to [@RAH/CRJ:13] for a general discussion of induced matrix norms. For certain combinations of $(p,q)$, explicit expressions for $\lVert\cdot\rVert_{p,q}$ are known. For example, in [@RAH/CRJ:13] expressions are given in the cases $(1,1)$ (in §5.6.4), $(2,2)$ (§5.6.6), and $(\infty,\infty)$ (§5.6.5). 
In [@JR:00] the case $(\infty,1)$ is studied, and its computation is shown to be NP-hard. The case $(2,1)$ is given by @KD/BAP:09, although the details of the degenerate case given there are a little sketchy. @KD/BAP:09 also list all of the other combinations except $(2,\infty)$, for which no expression seems to be available, and which we give here, apparently for the first time. The formula given by @KD/BAP:09 for $(\infty,2)$ is presented without reference or proof, and is incorrect, probably a typographical error. Here we present the correct formulae for all nine of the induced norms. Although most of these formulae are known in the literature, we give proofs in all nine cases so that, for the first time, all proofs for all cases are given in one place. We also analyse the computational complexity of computing these various norms. Here is the notation we use. By $\{\boldsymbol{e}_1,\dots,\boldsymbol{e}_n\}$ we denote the standard basis for $\mathbb{R}^n$. For a matrix $\boldsymbol{A}\in\textup{L}(\mathbb{R}^n;\mathbb{R}^m)$, $\boldsymbol{r}(\boldsymbol{A},a)\in\mathbb{R}^n$ denotes the $a$th row and $\boldsymbol{c}(\boldsymbol{A},j)\in\mathbb{R}^m$ denotes the $j$th column. The components of $\boldsymbol{A}$ are denoted by $A_{aj}$, $a\in\{1,\dots,m\}$, $j\in\{1,\dots,n\}$. The transpose of $\boldsymbol{A}$ is denoted by $\boldsymbol{A}^T$. The Euclidean inner product is denoted by $\langle\cdot,\cdot\rangle$. For a differentiable map $\boldsymbol{f}\colon\mathbb{R}^n\rightarrow\mathbb{R}^m$, $\mathrm{D}\boldsymbol{f}(\boldsymbol{x})\in\textup{L}(\mathbb{R}^n;\mathbb{R}^m)$ denotes the derivative of $\boldsymbol{f}$ at $\boldsymbol{x}$. For a set $X$, $\boldsymbol{2}^X$ denotes the power set of $X$. # Formulae for induced norms **Theorem 1**. *Let $p,q\in\{1,2,\infty\}$ and let $\boldsymbol{A}\in\textup{L}(\mathbb{R}^n;\mathbb{R}^m)$. 
The induced norm $\lVert\cdot\rVert_{p,q}$ satisfies the following formulae:* *[\[pl:matnorm11\]]{#pl:matnorm11 label="pl:matnorm11"} $\lVert\boldsymbol{A}\rVert_{1,1}= \max\{\lVert\boldsymbol{c}(\boldsymbol{A},j)\rVert_1\;|\enspace j\in\{1,\dots,n\}\}$;* *[\[pl:matnorm12\]]{#pl:matnorm12 label="pl:matnorm12"} $\lVert\boldsymbol{A}\rVert_{1,2}= \max\{\lVert\boldsymbol{c}(\boldsymbol{A},j)\rVert_2\;|\enspace j\in\{1,\dots,n\}\}$;* *[\[pl:matnorm1infty\]]{#pl:matnorm1infty label="pl:matnorm1infty"} $\displaystyle\begin{aligned}[t] \lVert\boldsymbol{A}\rVert_{1,\infty}=&\; \max\{\lvert A_{aj}\rvert\;|\enspace a\in\{1,\dots,m\},\ j\in\{1,\dots,n\}\}\\ =&\;\max\{\lVert\boldsymbol{c}(\boldsymbol{A},j)\rVert_\infty\;|\enspace j\in\{1,\dots,n\}\}\\ =&\;\max\{\lVert\boldsymbol{r}(\boldsymbol{A},a)\rVert_\infty\;|\enspace a\in\{1,\dots,m\}\} \end{aligned}$;* *[\[pl:matnorm21\]]{#pl:matnorm21 label="pl:matnorm21"} $\lVert\boldsymbol{A}\rVert_{2,1}= \max\{\lVert\boldsymbol{A}^T(\boldsymbol{u})\rVert_2\;|\enspace\boldsymbol{u}\in\{-1,1\}^m\}$;* *[\[pl:matnorm22\]]{#pl:matnorm22 label="pl:matnorm22"} $\lVert\boldsymbol{A}\rVert_{2,2}= \max\{\sqrt{\lambda}\;|\enspace\lambda\ \textrm{is an eigenvalue for}\ \boldsymbol{A}^T\boldsymbol{A}\}$;* *[\[pl:matnorm2infty\]]{#pl:matnorm2infty label="pl:matnorm2infty"} $\lVert\boldsymbol{A}\rVert_{2,\infty}= \max\{\lVert\boldsymbol{r}(\boldsymbol{A},a)\rVert_2\;|\enspace a\in\{1,\dots,m\}\}$;* *[\[pl:matnorminfty1\]]{#pl:matnorminfty1 label="pl:matnorminfty1"} $\lVert\boldsymbol{A}\rVert_{\infty,1}= \max\{\lVert\boldsymbol{A}(\boldsymbol{u})\rVert_1\;|\enspace\boldsymbol{u}\in\{-1,1\}^n\}$;* *[\[pl:matnorminfty2\]]{#pl:matnorminfty2 label="pl:matnorminfty2"} $\lVert\boldsymbol{A}\rVert_{\infty,2}= \max\{\lVert\boldsymbol{A}(\boldsymbol{u})\rVert_2\;|\enspace\boldsymbol{u}\in\{-1,1\}^n\}$;* *[\[pl:matnorminftyinfty\]]{#pl:matnorminftyinfty label="pl:matnorminftyinfty"} $\lVert\boldsymbol{A}\rVert_{\infty,\infty}= 
\max\{\lVert\boldsymbol{r}(\boldsymbol{A},a)\rVert_1\;|\enspace a\in\{1,\dots,m\}\}$.* *[\[pl:matnorm11\]](#pl:matnorm11){reference-type="eqref" reference="pl:matnorm11"} We compute $$\begin{aligned} \lVert\boldsymbol{A}\rVert_{1,1}=&\;\sup\{\lVert\boldsymbol{A}(\boldsymbol{x})\rVert_1\;|\enspace\lVert\boldsymbol{x}\rVert_1=1\}\\ =&\;\sup\left\{\sum_{a=1}^m\lvert\langle\boldsymbol{r}(\boldsymbol{A},a),\boldsymbol{x}\rangle\rvert\;\middle|\enspace\lVert\boldsymbol{x}\rVert_1=1\right\}\\ \le&\;\sup\left\{\sum_{a=1}^m\sum_{j=1}^n\lvert A_{aj}\rvert\lvert x_j\rvert\;\middle|\enspace\lVert\boldsymbol{x}\rVert_1=1\right\}\\ =&\;\sup\left\{\sum_{j=1}^n\lvert x_j\rvert\left(\sum_{a=1}^m\lvert A_{aj}\rvert\right)\;\middle|\enspace\lVert\boldsymbol{x}\rVert_1=1\right\}\\ \le&\;\max\left\{\sum_{a=1}^m\lvert A_{aj}\rvert\;\middle|\enspace j\in\{1,\dots,n\}\right\}\\ =&\;\max\{\lVert\boldsymbol{c}(\boldsymbol{A},j)\rVert_1\;|\enspace j\in\{1,\dots,n\}\}.\end{aligned}$$ To establish the opposite inequality, suppose that $k\in\{1,\dots,n\}$ is such that $$\lVert\boldsymbol{c}(\boldsymbol{A},k)\rVert_1= \max\{\lVert\boldsymbol{c}(\boldsymbol{A},j)\rVert_1\;|\enspace j\in\{1,\dots,n\}\}.$$ Then, $$\lVert\boldsymbol{A}(\boldsymbol{e}_k)\rVert_1=\sum_{a=1}^m \left\lvert\left(\sum_{j=1}^nA_{aj}\boldsymbol{e}_{k,j}\right)\right\rvert= \sum_{a=1}^m\lvert A_{ak}\rvert=\lVert\boldsymbol{c}(\boldsymbol{A},k)\rVert_1.$$ Thus $$\lVert\boldsymbol{A}\rVert_{1,1}\ge \max\{\lVert\boldsymbol{c}(\boldsymbol{A},j)\rVert_1\;|\enspace j\in\{1,\dots,n\}\},$$ since $\lVert\boldsymbol{e}_k\rVert_1=1$.* *[\[pl:matnorm12\]](#pl:matnorm12){reference-type="eqref" reference="pl:matnorm12"} We compute $$\begin{aligned} \lVert\boldsymbol{A}\rVert_{1,2}=&\;\sup\{\lVert\boldsymbol{A}(\boldsymbol{x})\rVert_2\;|\enspace\lVert\boldsymbol{x}\rVert_1=1\}\\ =&\;\sup\left\{\left(\sum_{a=1}^m \langle\boldsymbol{r}(\boldsymbol{A},a),\boldsymbol{x}\rangle^2\right)^{1/2}\;\middle|\enspace\lVert\boldsymbol{x}\rVert_1=1\right\}\\ \le&\;\sup\left\{\left(\sum_{a=1}^m\left(\sum_{j=1}^n \lvert A_{aj}x_j\rvert\right)^2\right)^{1/2}\;\middle|\enspace\lVert\boldsymbol{x}\rVert_1=1\right\}\\ \le&\;\sup\left\{\left(\sum_{a=1}^m (\max\{\lvert A_{aj}\rvert\;|\enspace j\in\{1,\dots,n\}\})^2 \left(\sum_{j=1}^n\lvert x_j\rvert\right)^2\right)^{1/2}\;\middle|\enspace\lVert\boldsymbol{x}\rVert_1=1\right\}\\ =&\;\left(\sum_{a=1}^m(\max\{\lvert A_{aj}\rvert\;|\enspace j\in\{1,\dots,n\}\})^2\right)^{1/2}\\ =&\;\left(\max\left\{\sum_{a=1}^mA_{aj}^2\;\middle|\enspace j\in\{1,\dots,n\}\right\}\right)^{1/2}= \max\{\lVert\boldsymbol{c}(\boldsymbol{A},j)\rVert_2\;|\enspace j\in\{1,\dots,n\}\},\end{aligned}$$ using the fact that $$\sup\{\lVert\boldsymbol{x}\rVert_2\;|\enspace\lVert\boldsymbol{x}\rVert_1=1\}=1.$$ To 
establish the other inequality, note that if we take $k\in\{1,\dots,n\}$ such that $$\lVert\boldsymbol{c}(\boldsymbol{A},k)\rVert_2=\max\{\lVert\boldsymbol{c}(\boldsymbol{A},j)\rVert_2\;|\enspace j\in\{1,\dots,n\}\},$$ then we have $$\lVert\boldsymbol{A}(\boldsymbol{e}_k)\rVert_2=\left(\sum_{a=1}^m\left(\sum_{j=1}^n A_{aj}\boldsymbol{e}_{k,j}\right)^2\right)^{1/2}= \left(\sum_{a=1}^mA_{ak}^2\right)^{1/2}=\lVert\boldsymbol{c}(\boldsymbol{A},k)\rVert_2.$$ Thus $$\lVert\boldsymbol{A}\rVert_{1,2}\ge\max\{\lVert\boldsymbol{c}(\boldsymbol{A},j)\rVert_2\;|\enspace j\in\{1,\dots,n\}\},$$ since $\lVert\boldsymbol{e}_k\rVert_1=1$.* *[\[pl:matnorm1infty\]](#pl:matnorm1infty){reference-type="eqref" reference="pl:matnorm1infty"} Here we compute $$\begin{aligned} \lVert\boldsymbol{A}\rVert_{1,\infty}=&\;\sup\{\lVert\boldsymbol{A}(\boldsymbol{x})\rVert_\infty\;|\enspace\lVert\boldsymbol{x}\rVert_1=1\}\\ =&\;\sup\left\{\max\left\{\left\lvert\sum_{j=1}^nA_{aj}x_j\right\rvert\;\middle|\enspace a\in\{1,\dots,m\}\right\}\;\middle|\enspace\lVert\boldsymbol{x}\rVert_1=1\right\}\\ \le&\;\sup\left\{\max\{\lvert A_{aj}\rvert\;|\enspace j\in\{1,\dots,n\},\ a\in\{1,\dots,m\}\}\left(\sum_{j=1}^n\lvert x_j\rvert\right)\;\middle|\enspace\lVert\boldsymbol{x}\rVert_1=1\right\}\\ =&\;\max\{\lvert A_{aj}\rvert\;|\enspace j\in\{1,\dots,n\},\ a\in\{1,\dots,m\}\}.\end{aligned}$$ For the converse inequality, let $k\in\{1,\dots,n\}$ be such that $$\max\{\lvert A_{ak}\rvert\;|\enspace a\in\{1,\dots,m\}\}= \max\{\lvert A_{aj}\rvert\;|\enspace j\in\{1,\dots,n\},\ a\in\{1,\dots,m\}\}.$$ Then $$\begin{aligned} \lVert\boldsymbol{A}(\boldsymbol{e}_k)\rVert_\infty=&\; \max\left\{\left\lvert\sum_{j=1}^nA_{aj}\boldsymbol{e}_{k,j}\right\rvert\;\middle|\enspace a\in\{1,\dots,m\}\right\}\\ =&\;\max\{\lvert A_{ak}\rvert\;|\enspace a\in\{1,\dots,m\}\}.\end{aligned}$$ Thus $$\lVert\boldsymbol{A}\rVert_{1,\infty}\ge \max\{\lvert A_{aj}\rvert\;|\enspace j\in\{1,\dots,n\},\ a\in\{1,\dots,m\}\},$$ since $\lVert\boldsymbol{e}_k\rVert_1=1$.* *[\[pl:matnorm21\]](#pl:matnorm21){reference-type="eqref" reference="pl:matnorm21"} In this case we maximise the function $\boldsymbol{x}\mapsto\lVert\boldsymbol{A}(\boldsymbol{x})\rVert_1$ subject to the constraint that $\lVert\boldsymbol{x}\rVert_2^2=1$. We shall do this using the Lagrange Multiplier Theorem [e.g., @CHE:73 §II.5], defining $$f(\boldsymbol{x})=\lVert\boldsymbol{A}(\boldsymbol{x})\rVert_1,\quad g(\boldsymbol{x})=\lVert\boldsymbol{x}\rVert_2^2-1.$$ Let us first assume that none of the rows of $\boldsymbol{A}$ are zero. We must exercise some care because $f$ is not differentiable on $\mathbb{R}^n$. 
However, $f$ is differentiable at points off the set $$B_{\boldsymbol{A}}=\{\boldsymbol{x}\in\mathbb{R}^n\;|\enspace\textrm{there exists}\ a\in\{1,\dots,m\}\ \textrm{such that}\ \langle\boldsymbol{r}(\boldsymbol{A},a),\boldsymbol{x}\rangle=0\}.$$ To facilitate computations, let us define $\boldsymbol{u}_{\boldsymbol{A}}\colon\mathbb{R}^n\rightarrow\mathbb{R}^m$ by asking that $$u_{\boldsymbol{A},a}(\boldsymbol{x})=\operatorname{sign}(\langle\boldsymbol{r}(\boldsymbol{A},a),\boldsymbol{x}\rangle).$$ Note that $B_{\boldsymbol{A}}=\boldsymbol{u}_{\boldsymbol{A}}^{-1}(\boldsymbol{0})$ and that on $\mathbb{R}^n\setminus B_{\boldsymbol{A}}$ the function $\boldsymbol{u}_{\boldsymbol{A}}$ is locally constant. Moreover, it is clear that $$f(\boldsymbol{x})=\langle\boldsymbol{u}_{\boldsymbol{A}}(\boldsymbol{x}),\boldsymbol{A}(\boldsymbol{x})\rangle.$$* *Now let $\boldsymbol{x}_0\in\mathbb{R}^n\setminus B_{\boldsymbol{A}}$ be a maximum of $f$ subject to the constraint that $g(\boldsymbol{x})=0$. One easily verifies that $\mathrm{D}g$ has rank $1$ at points that satisfy the constraint. 
Thus, by the Lagrange Multiplier Theorem, there exists $\lambda\in\mathbb{R}$ such that $$\mathrm{D}(f-\lambda g)(\boldsymbol{x}_0)=\boldsymbol{0}.$$ We compute $$\mathrm{D}f(\boldsymbol{x}_0)\cdot\boldsymbol{v}= \langle\boldsymbol{u}_{\boldsymbol{A}}(\boldsymbol{x}_0),\boldsymbol{A}(\boldsymbol{v})\rangle,\quad \mathrm{D}g(\boldsymbol{x})\cdot\boldsymbol{v}=2\langle\boldsymbol{x},\boldsymbol{v}\rangle.$$ Thus $\mathrm{D}(f-\lambda g)(\boldsymbol{x}_0)=\boldsymbol{0}$ if and only if $$\boldsymbol{A}^T(\boldsymbol{u}_{\boldsymbol{A}}(\boldsymbol{x}_0))=2\lambda\boldsymbol{x}_0\quad \implies\quad\lvert\lambda\rvert=\frac{1}{2} \lVert\boldsymbol{A}^T(\boldsymbol{u}_{\boldsymbol{A}}(\boldsymbol{x}_0))\rVert_2,$$ since $\lVert\boldsymbol{x}_0\rVert_2=1$. Thus $\lambda=0$ if and only if $\boldsymbol{A}^T(\boldsymbol{u}_{\boldsymbol{A}}(\boldsymbol{x}_0))=\boldsymbol{0}$. Therefore, if $\lambda=0$, then $f(\boldsymbol{x}_0)=0$. We can disregard this possibility since $f$ cannot have a maximum of zero as we are assuming that $\boldsymbol{A}$ has no zero rows. As $\lambda\not=0$ we have $$f(\boldsymbol{x}_0)=\langle\boldsymbol{A}^T (\boldsymbol{u}_{\boldsymbol{A}}(\boldsymbol{x}_0)),\boldsymbol{x}_0\rangle= \frac{1}{2\lambda}\lVert\boldsymbol{A}^T (\boldsymbol{u}_{\boldsymbol{A}}(\boldsymbol{x}_0))\rVert_2^2=2\lambda.$$ We conclude that, at solutions of the constrained maximisation problem, we must have $$f(\boldsymbol{x}_0)=\lVert\boldsymbol{A}^T(\boldsymbol{u})\rVert_2,$$ where $\boldsymbol{u}$ varies over the nonzero points in the image of $\boldsymbol{u}_{\boldsymbol{A}}$, i.e., over points from $\{-1,1\}^m$.* *This would conclude the proof of this part of the theorem in the case that $\boldsymbol{A}$ has no zero rows, but for the fact that it is possible that $f$ attains its maximum on $B_{\boldsymbol{A}}$. 
We now show that this does not happen. Let $\boldsymbol{x}_0\in B_{\boldsymbol{A}}$ satisfy $\lVert\boldsymbol{x}_0\rVert_2=1$ and denote $$A_0=\{a\in\{1,\dots,m\}\;|\enspace u_{\boldsymbol{A},a}(\boldsymbol{x}_0)=0\}.$$ Let $A_1=\{1,\dots,m\}\setminus A_0$. Let $a_0\in A_0$. For $\epsilon\in\mathbb{R}$ define $$\boldsymbol{x}_\epsilon=\frac{\boldsymbol{x}_0+\epsilon\boldsymbol{r}(\boldsymbol{A},a_0)} {\sqrt{1+\epsilon^2\lVert\boldsymbol{r}(\boldsymbol{A},a_0)\rVert_2^2}}.$$ Note that $\boldsymbol{x}_\epsilon$ satisfies the constraint $\lVert\boldsymbol{x}_\epsilon\rVert_2^2=1$. Now let $\epsilon_0\in\mathbb{R}_{>0}$ be sufficiently small that $$\langle\boldsymbol{r}(\boldsymbol{A},a),\boldsymbol{x}_\epsilon\rangle\not=0$$ for all $a\in A_1$ and $\epsilon\in[-\epsilon_0,\epsilon_0]$. Then we compute $$\begin{aligned} \notag \lVert\boldsymbol{A}(\boldsymbol{x}_\epsilon)\rVert_1=&\; \sum_{a=1}^m\lvert\langle\boldsymbol{r}(\boldsymbol{A},a),\boldsymbol{x}_0\rangle+ \epsilon\langle\boldsymbol{r}(\boldsymbol{A},a),\boldsymbol{r}(\boldsymbol{A},a_0)\rangle\rvert+O(\epsilon^2)\\ \notag =&\;\sum_{a\in A_0}\lvert\epsilon\rvert \lvert\langle\boldsymbol{r}(\boldsymbol{A},a),\boldsymbol{r}(\boldsymbol{A},a_0)\rangle\rvert\\ \label{eq:matnorms1} &\;+\sum_{a\in A_1}\lvert\langle\boldsymbol{r}(\boldsymbol{A},a),\boldsymbol{x}_0\rangle+\epsilon\langle\boldsymbol{r}(\boldsymbol{A},a),\boldsymbol{r}(\boldsymbol{A},a_0)\rangle\rvert+ O(\epsilon^2).\end{aligned}$$ Since we are assuming that none of the rows of $\boldsymbol{A}$ are zero, $$\label{eq:matnorms2} \sum_{a\in A_0}\lvert\epsilon\rvert \lvert\langle\boldsymbol{r}(\boldsymbol{A},a),\boldsymbol{r}(\boldsymbol{A},a_0)\rangle\rvert>0$$ for nonzero $\epsilon\in[-\epsilon_0,\epsilon_0]$, as long as $\epsilon_0$ is sufficiently small. Now take $a\in A_1$. 
If $\epsilon$ is sufficiently small we can write $$\lvert\langle\boldsymbol{r}(\boldsymbol{A},a),\boldsymbol{x}_0\rangle+\epsilon\langle\boldsymbol{r}(\boldsymbol{A},a),\boldsymbol{r}(\boldsymbol{A},a_0)\rangle\rvert= \lvert\langle\boldsymbol{r}(\boldsymbol{A},a),\boldsymbol{x}_0\rangle\rvert+\epsilon C_a$$ for some $C_a\in\mathbb{R}$. As a result, and using [\[eq:matnorms1\]](#eq:matnorms1){reference-type="eqref" reference="eq:matnorms1"}, we have $$\lVert\boldsymbol{A}(\boldsymbol{x}_\epsilon)\rVert_1=\lVert\boldsymbol{A}(\boldsymbol{x}_0)\rVert_1+ \sum_{a\in A_0}\lvert\epsilon\rvert \lvert\langle\boldsymbol{r}(\boldsymbol{A},a),\boldsymbol{r}(\boldsymbol{A},a_0)\rangle\rvert+ \epsilon\sum_{a\in A_1}C_a+O(\epsilon^2).$$ It therefore follows, by choosing $\epsilon_0$ to be sufficiently small, that we have $$\lVert\boldsymbol{A}(\boldsymbol{x}_\epsilon)\rVert_1>\lVert\boldsymbol{A}(\boldsymbol{x}_0)\rVert_1$$ either for all $\epsilon\in[-\epsilon_0,0)$ or for all $\epsilon\in(0,\epsilon_0]$, taking [\[eq:matnorms2\]](#eq:matnorms2){reference-type="eqref" reference="eq:matnorms2"} into account. Thus if $\boldsymbol{x}_0\in B_{\boldsymbol{A}}$ then $\boldsymbol{x}_0$ is not a local maximum for $f$ subject to the constraint $g^{-1}(0)$.* *Finally, suppose that $\boldsymbol{A}$ has some rows that are zero. Let $$A_0=\{a\in\{1,\dots,m\}\;|\enspace\boldsymbol{r}(\boldsymbol{A},a)=\boldsymbol{0}\}$$ and let $A_1=\{1,\dots,m\}\setminus A_0$. Let $A_1=\{a_1,\dots,a_k\}$ with $a_1<\dots<a_k$, and define $\hat{\boldsymbol{A}}\in\textup{L}(\mathbb{R}^n;\mathbb{R}^k)$ by $$\hat{\boldsymbol{A}}(\boldsymbol{x})=\sum_{r=1}^k \langle\boldsymbol{r}(\boldsymbol{A},a_r),\boldsymbol{x}\rangle\boldsymbol{e}_r,$$ and note that $\lVert\boldsymbol{A}(\boldsymbol{x})\rVert_1=\lVert\hat{\boldsymbol{A}}(\boldsymbol{x})\rVert_1$ for every $\boldsymbol{x}\in\mathbb{R}^n$.
If $\boldsymbol{y}\in\mathbb{R}^m$, define $\hat{\boldsymbol{y}}\in\mathbb{R}^k$ by removing from $\boldsymbol{y}$ the elements corresponding to the zero rows of $\boldsymbol{A}$: $$\hat{\boldsymbol{y}}=(y_{a_1},\dots,y_{a_k}).$$ Then we easily determine that $\boldsymbol{A}^T(\boldsymbol{y})= \hat{\boldsymbol{A}}^T(\hat{\boldsymbol{y}})$. Therefore, $$\begin{aligned} \lVert\boldsymbol{A}\rVert_{2,1}=&\;\sup\{\lVert\boldsymbol{A}(\boldsymbol{x})\rVert_1\;|\enspace\lVert\boldsymbol{x}\rVert_2=1\}\\ =&\;\sup\{\lVert\hat{\boldsymbol{A}}(\boldsymbol{x})\rVert_1\;|\enspace\lVert\boldsymbol{x}\rVert_2=1\}=\lVert\hat{\boldsymbol{A}}\rVert_{2,1}\\ =&\;\max\{\lVert\hat{\boldsymbol{A}}^T(\hat{\boldsymbol{u}})\rVert_2\;|\enspace\hat{\boldsymbol{u}}\in\{-1,1\}^k\}\\ =&\;\max\{\lVert\boldsymbol{A}^T(\boldsymbol{u})\rVert_2\;|\enspace\boldsymbol{u}\in\{-1,1\}^m\},\end{aligned}$$ and this finally gives the result.* *[\[pl:matnorm22\]](#pl:matnorm22){reference-type="eqref" reference="pl:matnorm22"} Note that, in this case, we wish to maximise the function $\boldsymbol{x}\mapsto\lVert\boldsymbol{A}(\boldsymbol{x})\rVert^2_2$ subject to the constraint that $\lVert\boldsymbol{x}\rVert^2_2=1$. Here, the function we are maximising and the function defining the constraint are infinitely differentiable. Therefore, we can use the Lagrange Multiplier Theorem to determine the character of the maxima.
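Before continuing, we note that the sign-vector formula just established for $\lVert\cdot\rVert_{2,1}$ lends itself to a quick numerical sanity check. The following is a minimal sketch in Python; the matrix is arbitrary test data, not taken from the text.

```python
import itertools
import math

# A 3x2 matrix with no zero rows; the rows play the role of r(A, a).
A = [[1.0, 2.0], [3.0, -1.0], [0.5, 1.0]]

def norm2(v):
    return math.sqrt(sum(x * x for x in v))

def apply(M, x):
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

# The formula: maximum of ||A^T u||_2 over sign vectors u in {-1,1}^m.
rhs = max(norm2(apply(transpose(A), u))
          for u in itertools.product([-1.0, 1.0], repeat=len(A)))

# The definition: maximise ||A x||_1 over the unit circle, by dense sampling.
lhs = max(sum(abs(y) for y in apply(A, [math.cos(t), math.sin(t)]))
          for t in (2 * math.pi * k / 100000 for k in range(100000)))

# The sampled maximum agrees with the sign-vector formula up to sampling error.
assert abs(lhs - rhs) < 1e-3, (lhs, rhs)
```

The agreement here reflects the interchange of maxima $\max_x\max_u\langle u,\boldsymbol{A}x\rangle=\max_u\max_x\langle\boldsymbol{A}^Tu,x\rangle$ used in the proof.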
Thus we define $$f(\boldsymbol{x})=\lVert\boldsymbol{A}(\boldsymbol{x})\rVert_2^2,\quad g(\boldsymbol{x})=\lVert\boldsymbol{x}\rVert_2^2-1.$$ As $Dg$ has rank $1$ at points satisfying the constraint, if a point $\boldsymbol{x}_0\in\mathbb{R}^n$ solves the constrained maximisation problem, then there exists $\lambda\in\mathbb{R}$ such that $$D(f-\lambda g)(\boldsymbol{x}_0)=0.$$ Since $f(\boldsymbol{x})=\langle\boldsymbol{A}^T\circ\boldsymbol{A}(\boldsymbol{x}),\boldsymbol{x}\rangle$, we compute $$Df(\boldsymbol{x})\cdot\boldsymbol{v}= 2\langle\boldsymbol{A}^T\circ\boldsymbol{A}(\boldsymbol{x}),\boldsymbol{v}\rangle.$$ As above, $Dg(\boldsymbol{x})\cdot\boldsymbol{v}=2\langle\boldsymbol{x},\boldsymbol{v}\rangle$. Thus $D(f-\lambda g)(\boldsymbol{x}_0)=0$ implies that $$\boldsymbol{A}^T\circ\boldsymbol{A}(\boldsymbol{x}_0)=\lambda\boldsymbol{x}_0.$$ Thus it must be the case that $\lambda$ is an eigenvalue for $\boldsymbol{A}^T\circ\boldsymbol{A}$ with eigenvector $\boldsymbol{x}_0$. Since $\boldsymbol{A}^T\circ\boldsymbol{A}$ is symmetric and positive-semidefinite, all eigenvalues are real and nonnegative.
Thus there exist $\lambda_1,\dots,\lambda_n\in\mathbb{R}_{\ge 0}$ with $$\lambda_1\le\cdots\le\lambda_n$$ and vectors $\boldsymbol{x}_1,\dots,\boldsymbol{x}_n$ such that $\boldsymbol{A}^T\circ\boldsymbol{A}(\boldsymbol{x}_j)=\lambda_j\boldsymbol{x}_j$, $j\in\{1,\dots,n\}$, and such that a solution to the problem of maximising $f$ with the constraint $g^{-1}(0)$ is obtained by evaluating $f$ at one of the points $\boldsymbol{x}_1,\dots,\boldsymbol{x}_n$. Thus the problem can be solved by evaluating $f$ at this finite collection of points, and determining at which of these $f$ has its largest value. A computation gives $f(\boldsymbol{x}_j)=\lambda_j$, and this part of the result follows.* *[\[pl:matnorm2infty\]](#pl:matnorm2infty){reference-type="eqref" reference="pl:matnorm2infty"} First of all, we note that this part of the theorem certainly holds when $\boldsymbol{A}=\boldsymbol{0}$. Thus we shall freely assume that $\boldsymbol{A}$ is nonzero when convenient. We maximise the function $\boldsymbol{x}\mapsto\lVert\boldsymbol{A}(\boldsymbol{x})\rVert_\infty$ subject to the constraint that $\lVert\boldsymbol{x}\rVert^2_2=1$. We shall again use the Lagrange Multiplier Theorem, defining $$f(\boldsymbol{x})=\lVert\boldsymbol{A}(\boldsymbol{x})\rVert_\infty,\quad g(\boldsymbol{x})=\lVert\boldsymbol{x}\rVert_2^2-1.$$ Note that $f$ is not differentiable on all of $\mathbb{R}^n$, so we first restrict to a subset where $f$ is differentiable.
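The spectral characterisation of $\lVert\cdot\rVert_{2,2}$ obtained above is easy to confirm numerically. A minimal sketch, assuming numpy is available; the matrix is arbitrary test data:

```python
import numpy as np

# Arbitrary test matrix (not from the text).
A = np.array([[1.0, 2.0], [3.0, -1.0], [0.5, 1.0]])

# The eigenvalues of A^T A are real and nonnegative; the induced
# (2,2)-norm is the square root of the largest one.
eigvals = np.linalg.eigvalsh(A.T @ A)
norm_22 = np.sqrt(eigvals.max())

# Compare with numpy's built-in spectral norm.
assert np.isclose(norm_22, np.linalg.norm(A, 2))
```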
Let us define $$S_{\boldsymbol{A}}\colon\mathbb{R}^n\rightarrow 2^{\{1,\dots,m\}},\qquad S_{\boldsymbol{A}}(\boldsymbol{x})=\{a\in\{1,\dots,m\}\;|\enspace\lvert\langle\boldsymbol{r}(\boldsymbol{A},a),\boldsymbol{x}\rangle\rvert=\lVert\boldsymbol{A}(\boldsymbol{x})\rVert_\infty\}.$$ Then denote $$B_{\boldsymbol{A}}=\{\boldsymbol{x}\in\mathbb{R}^n\;|\enspace\operatorname{card}(S_{\boldsymbol{A}}(\boldsymbol{x}))>1\}.$$ We easily see that $f$ is differentiable at points that are not in the set $B_{\boldsymbol{A}}$.* *Let us first suppose that $\boldsymbol{x}_0\in\mathbb{R}^n\setminus B_{\boldsymbol{A}}$ is a maximum of $f$ subject to the constraint that $g(\boldsymbol{x})=0$. Then there exists a unique $a_0\in\{1,\dots,m\}$ such that $f(\boldsymbol{x}_0)=\lvert\langle\boldsymbol{r}(\boldsymbol{A},a_0),\boldsymbol{x}_0\rangle\rvert$. Since we are assuming that $\boldsymbol{A}$ is nonzero, it must be that $\boldsymbol{r}(\boldsymbol{A},a_0)$ is nonzero. Moreover, there exists a neighbourhood $U$ of $\boldsymbol{x}_0$ such that $$\operatorname{sign}(\langle\boldsymbol{r}(\boldsymbol{A},a_0),\boldsymbol{x}\rangle)= \operatorname{sign}(\langle\boldsymbol{r}(\boldsymbol{A},a_0),\boldsymbol{x}_0\rangle)$$ and $f(\boldsymbol{x})=\lvert\langle\boldsymbol{r}(\boldsymbol{A},a_0),\boldsymbol{x}\rangle\rvert$ for each $\boldsymbol{x}\in U$. Abbreviating $$u_{\boldsymbol{A},a_0}(\boldsymbol{x})= \operatorname{sign}(\langle\boldsymbol{r}(\boldsymbol{A},a_0),\boldsymbol{x}\rangle),$$ we have $$f(\boldsymbol{x})=u_{\boldsymbol{A},a_0}(\boldsymbol{x}_0)\langle\boldsymbol{r}(\boldsymbol{A},a_0),\boldsymbol{x}\rangle$$ for every $\boldsymbol{x}\in U$.
Note that, as in the proofs of parts [\[pl:matnorm21\]](#pl:matnorm21){reference-type="eqref" reference="pl:matnorm21"} and [\[pl:matnorm22\]](#pl:matnorm22){reference-type="eqref" reference="pl:matnorm22"} above, $Dg(\boldsymbol{x})$ has rank $1$ for $\boldsymbol{x}\not=0$. Therefore there exists $\lambda\in\mathbb{R}$ such that $$D(f-\lambda g)(\boldsymbol{x}_0)=\boldsymbol{0}.$$ We compute $$D(f-\lambda g)(\boldsymbol{x}_0)\cdot\boldsymbol{v}= u_{\boldsymbol{A},a_0}(\boldsymbol{x}_0)\langle\boldsymbol{r}(\boldsymbol{A},a_0),\boldsymbol{v}\rangle- 2\lambda\langle\boldsymbol{x}_0,\boldsymbol{v}\rangle$$ for every $\boldsymbol{v}\in\mathbb{R}^n$. Thus we must have $$2\lambda\boldsymbol{x}_0=u_{\boldsymbol{A},a_0}(\boldsymbol{x}_0)\boldsymbol{r}(\boldsymbol{A},a_0).$$ This implies that $\boldsymbol{x}_0$ and $\boldsymbol{r}(\boldsymbol{A},a_0)$ are collinear and that $$\lvert\lambda\rvert=\frac{1}{2}\lVert\boldsymbol{r}(\boldsymbol{A},a_0)\rVert_2$$ since $\lVert\boldsymbol{x}_0\rVert_2=1$. Therefore, $$f(\boldsymbol{x}_0)=u_{\boldsymbol{A},a_0}(\boldsymbol{x}_0)\langle\boldsymbol{r}(\boldsymbol{A},a_0),\tfrac{1}{2\lambda}u_{\boldsymbol{A},a_0}(\boldsymbol{x}_0)\boldsymbol{r}(\boldsymbol{A},a_0)\rangle= 2\lambda.$$ Since $\lvert\lambda\rvert=\frac{1}{2}\lVert\boldsymbol{r}(\boldsymbol{A},a_0)\rVert_2$ it follows that $$f(\boldsymbol{x}_0)=\lVert\boldsymbol{r}(\boldsymbol{A},a_0)\rVert_2.$$* *This completes the proof, but for the fact that maxima of $f$ may occur at points in $B_{\boldsymbol{A}}$. Thus let $\boldsymbol{x}_0\in B_{\boldsymbol{A}}$ be such that $\lVert\boldsymbol{x}_0\rVert_2=1$. For $a\in S_{\boldsymbol{A}}(\boldsymbol{x}_0)$ let us write $$\boldsymbol{r}(\boldsymbol{A},a)=\rho_a\boldsymbol{x}_0+\boldsymbol{y}_a,$$ where $\langle\boldsymbol{x}_0,\boldsymbol{y}_a\rangle=0$.
Therefore, $\langle\boldsymbol{r}(\boldsymbol{A},a),\boldsymbol{x}_0\rangle=\rho_a$. We claim that if there exists $a_0\in S_{\boldsymbol{A}}(\boldsymbol{x}_0)$ for which $\boldsymbol{y}_{a_0}\not=\boldsymbol{0}$, then $\boldsymbol{x}_0$ cannot be a maximum of $f$ subject to the constraint $g^{-1}(0)$. Indeed, if $\boldsymbol{y}_{a_0}\not=\boldsymbol{0}$ then define $$\boldsymbol{x}_\epsilon=\frac{\boldsymbol{x}_0+\epsilon\boldsymbol{y}_{a_0}} {\sqrt{1+\epsilon^2\lVert\boldsymbol{y}_{a_0}\rVert_2^2}}.$$ As in the proof of part [\[pl:matnorm21\]](#pl:matnorm21){reference-type="eqref" reference="pl:matnorm21"} above, one shows that $\boldsymbol{x}_\epsilon$ satisfies the constraint for every $\epsilon\in\mathbb{R}$. Also as in the proof of part [\[pl:matnorm21\]](#pl:matnorm21){reference-type="eqref" reference="pl:matnorm21"}, we have $$\boldsymbol{x}_\epsilon=\boldsymbol{x}_0+\epsilon\boldsymbol{y}_{a_0}+O(\epsilon^2).$$ Thus, for $\epsilon$ sufficiently small, $$\lvert\langle\boldsymbol{r}(\boldsymbol{A},a_0),\boldsymbol{x}_\epsilon\rangle\rvert= \lvert\langle\boldsymbol{r}(\boldsymbol{A},a_0),\boldsymbol{x}_0\rangle\rvert+ \epsilon C_{a_0}+O(\epsilon^2),$$ where $C_{a_0}$ is nonzero. Therefore, there exists $\epsilon_0\in\mathbb{R}_{>0}$ such that $$\lvert\langle\boldsymbol{r}(\boldsymbol{A},a_0),\boldsymbol{x}_\epsilon\rangle\rvert> \lvert\langle\boldsymbol{r}(\boldsymbol{A},a_0),\boldsymbol{x}_0\rangle\rvert$$ either for all $\epsilon\in[-\epsilon_0,0)$ or for all $\epsilon\in(0,\epsilon_0]$. In either case, $\boldsymbol{x}_0$ cannot be a maximum for $f$ subject to the constraint $g^{-1}(0)$.* *Finally, suppose that $\boldsymbol{x}_0\in B_{\boldsymbol{A}}$ is a maximum for $f$ subject to the constraint $g^{-1}(0)$.
Then, as we saw in the preceding paragraph, for each $a\in S_{\boldsymbol{A}}(\boldsymbol{x}_0)$, we must have $$\boldsymbol{r}(\boldsymbol{A},a)=\langle\boldsymbol{r}(\boldsymbol{A},a),\boldsymbol{x}_0\rangle\boldsymbol{x}_0.$$ It follows that $\lVert\boldsymbol{r}(\boldsymbol{A},a)\rVert_2^2= \langle\boldsymbol{r}(\boldsymbol{A},a),\boldsymbol{x}_0\rangle^2$. Moreover, by definition of $S_{\boldsymbol{A}}(\boldsymbol{x}_0)$ and since we are supposing that $\boldsymbol{x}_0$ is a maximum for $f$ subject to the constraint $g^{-1}(0)$, we have $$\label{eq:matnorms3} \lVert\boldsymbol{r}(\boldsymbol{A},a)\rVert_2=\lVert\boldsymbol{A}\rVert_{2,\infty}.$$ Now, if $a\in\{1,\dots,m\}$, we claim that $$\label{eq:matnorms4} \lVert\boldsymbol{r}(\boldsymbol{A},a)\rVert_2\le\lVert\boldsymbol{A}\rVert_{2,\infty}.$$ Indeed suppose that $a\in\{1,\dots,m\}$ satisfies $$\lVert\boldsymbol{r}(\boldsymbol{A},a)\rVert_2>\lVert\boldsymbol{A}\rVert_{2,\infty}.$$ Define $\boldsymbol{x}=\frac{\boldsymbol{r}(\boldsymbol{A},a)}{\lVert\boldsymbol{r}(\boldsymbol{A},a)\rVert_2}$ so that $\boldsymbol{x}$ satisfies the constraint $g(\boldsymbol{x})=0$. Moreover, $$f(\boldsymbol{x})\ge\langle\boldsymbol{r}(\boldsymbol{A},a),\boldsymbol{x}\rangle= \lVert\boldsymbol{r}(\boldsymbol{A},a)\rVert_2>\lVert\boldsymbol{A}\rVert_{2,\infty},$$ contradicting the assumption that $\boldsymbol{x}_0$ is a maximum for $f$. 
Thus, given that [\[eq:matnorms3\]](#eq:matnorms3){reference-type="eqref" reference="eq:matnorms3"} holds for every $a\in S_{\boldsymbol{A}}(\boldsymbol{x}_0)$ and [\[eq:matnorms4\]](#eq:matnorms4){reference-type="eqref" reference="eq:matnorms4"} holds for every $a\in\{1,\dots,m\}$, we have $$\lVert\boldsymbol{A}\rVert_{2,\infty}=\max\{\lVert\boldsymbol{r}(\boldsymbol{A},a)\rVert_2\;|\enspace a\in\{1,\dots,m\}\},$$ as desired.* *For the last three parts of the theorem, the following result is useful.* *Let $\lVert\cdot\rVert$ be a norm on $\mathbb{R}^n$ and let $|||\cdot|||_\infty$ be the norm induced on $\textup{L}(\mathbb{R}^n;\mathbb{R}^m)$ by the norm $\lVert\cdot\rVert_\infty$ on $\mathbb{R}^n$ and the norm $\lVert\cdot\rVert$ on $\mathbb{R}^m$. Then $$|||\boldsymbol{A}|||_\infty=\max\{\lVert\boldsymbol{A}(\boldsymbol{u})\rVert\;|\enspace\boldsymbol{u}\in\{-1,1\}^n\}.$$* *Note that the set $$\{\boldsymbol{x}\in\mathbb{R}^n\;|\enspace\lVert\boldsymbol{x}\rVert_\infty\le 1\}$$ is a convex polytope. Therefore, this set is the convex hull of the vertices $\{-1,1\}^n$; see [@RJW:94 Theorem 2.6.16]. 
Thus, if $\lVert\boldsymbol{x}\rVert_\infty=1$ we can write $$\boldsymbol{x}=\sum_{\boldsymbol{u}\in\{-1,1\}^n}\lambda_{\boldsymbol{u}}\boldsymbol{u}$$ where $\lambda_{\boldsymbol{u}}\in[0,1]$ for each $\boldsymbol{u}\in\{-1,1\}^n$ and $$\sum_{\boldsymbol{u}\in\{-1,1\}^n}\lambda_{\boldsymbol{u}}=1.$$ Therefore, $$\begin{aligned} \lVert\boldsymbol{A}(\boldsymbol{x})\rVert=&\; \left\lVert\sum_{\boldsymbol{u}\in\{-1,1\}^n}\lambda_{\boldsymbol{u}}\boldsymbol{A}(\boldsymbol{u})\right\rVert\le \sum_{\boldsymbol{u}\in\{-1,1\}^n}\lambda_{\boldsymbol{u}} \lVert\boldsymbol{A}(\boldsymbol{u})\rVert\\ \le&\;\left(\sum_{\boldsymbol{u}\in\{-1,1\}^n}\lambda_{\boldsymbol{u}}\right) \max\{\lVert\boldsymbol{A}(\boldsymbol{u})\rVert\;|\enspace\boldsymbol{u}\in\{-1,1\}^n\}\\ =&\;\max\{\lVert\boldsymbol{A}(\boldsymbol{u})\rVert\;|\enspace\boldsymbol{u}\in\{-1,1\}^n\}.\end{aligned}$$ Therefore, $$\sup\{\lVert\boldsymbol{A}(\boldsymbol{x})\rVert\;|\enspace\lVert\boldsymbol{x}\rVert_\infty=1\}\le \max\{\lVert\boldsymbol{A}(\boldsymbol{u})\rVert\;|\enspace\boldsymbol{u}\in\{-1,1\}^n\} \le\sup\{\lVert\boldsymbol{A}(\boldsymbol{x})\rVert\;|\enspace\lVert\boldsymbol{x}\rVert_\infty=1\},$$ the last inequality holding since if $\boldsymbol{u}\in\{-1,1\}^n$ then $\lVert\boldsymbol{u}\rVert_\infty=1$.
The result follows since the previous inequalities must be equalities.* *[\[pl:matnorminfty1\]](#pl:matnorminfty1){reference-type="eqref" reference="pl:matnorminfty1"} This follows immediately from the preceding lemma.* *[\[pl:matnorminfty2\]](#pl:matnorminfty2){reference-type="eqref" reference="pl:matnorminfty2"} This too follows immediately from the preceding lemma.* *[\[pl:matnorminftyinfty\]](#pl:matnorminftyinfty){reference-type="eqref" reference="pl:matnorminftyinfty"} Note that for $\boldsymbol{u}\in\{-1,1\}^n$ we have $$\lvert\langle\boldsymbol{r}(\boldsymbol{A},a),\boldsymbol{u}\rangle\rvert= \left\lvert\sum_{j=1}^nA_{aj}u_j\right\rvert\le\sum_{j=1}^n\lvert A_{aj}\rvert= \lVert\boldsymbol{r}(\boldsymbol{A},a)\rVert_1.$$ Therefore, using the previous lemma, $$\begin{aligned} \lVert\boldsymbol{A}\rVert_{\infty,\infty}=&\;\max\{\lVert\boldsymbol{A}(\boldsymbol{u})\rVert_\infty\;|\enspace\boldsymbol{u}\in\{-1,1\}^n\}\\ =&\;\max\{\max\{\lvert\langle\boldsymbol{r}(\boldsymbol{A},a),\boldsymbol{u}\rangle\rvert\;|\enspace a\in\{1,\dots,m\}\}\;|\enspace\boldsymbol{u}\in\{-1,1\}^n\}\\ \le&\;\max\{\lVert\boldsymbol{r}(\boldsymbol{A},a)\rVert_1\;|\enspace a\in\{1,\dots,m\}\}.\end{aligned}$$ To establish the other inequality, for $a\in\{1,\dots,m\}$ define $\boldsymbol{u}_a\in\{-1,1\}^n$ by $$u_{a,j}=\begin{cases}1,&A_{aj}\ge0,\\-1,&A_{aj}<0\end{cases}$$ and note that a direct computation gives the $a$th component of $\boldsymbol{A}(\boldsymbol{u}_a)$ as $\lVert\boldsymbol{r}(\boldsymbol{A},a)\rVert_1$. 
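The sign-vector construction just described, and the resulting row-wise formula for $\lVert\cdot\rVert_{\infty,\infty}$, can be verified numerically. A small sketch with arbitrary test data (not from the text):

```python
import itertools

# Arbitrary test matrix with rows r(A, a).
A = [[1.0, -2.0, 0.5], [3.0, 1.0, -1.0]]

def apply(M, x):
    return [sum(m * xj for m, xj in zip(row, x)) for row in M]

row_1norms = [sum(abs(e) for e in row) for row in A]

# For each row a, the sign vector u_a makes the a-th entry of A(u_a)
# equal to the 1-norm of that row.
for a, row in enumerate(A):
    u_a = [1.0 if e >= 0 else -1.0 for e in row]
    assert abs(apply(A, u_a)[a] - row_1norms[a]) < 1e-12

# The brute-force maximum over the vertices {-1,1}^n agrees with the
# maximum row 1-norm, as the theorem asserts.
brute = max(max(abs(y) for y in apply(A, u))
            for u in itertools.product([-1.0, 1.0], repeat=len(A[0])))
assert abs(brute - max(row_1norms)) < 1e-12
```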
Therefore, $$\begin{aligned} \max\{\lVert\boldsymbol{r}(\boldsymbol{A},a)\rVert_1\;|\enspace a\in\{1,\dots,m\}\}=&\; \max\{\lvert\boldsymbol{A}(\boldsymbol{u}_a)_a\rvert\;|\enspace a\in\{1,\dots,m\}\}\\ \le&\;\max\{\lVert\boldsymbol{A}(\boldsymbol{u})\rVert_\infty\;|\enspace\boldsymbol{u}\in\{-1,1\}^n\}= \lVert\boldsymbol{A}\rVert_{\infty,\infty},\end{aligned}$$ giving this part of the theorem.* # Complexity of induced norm computations Let us consider a comparison of the nine induced matrix norms in terms of the computational effort required. One would like to know how many operations are required to compute any of the norms. We shall do this by making the following assumptions on our computational model. > Floating point operations are carried out to an accuracy of $\epsilon=2^{-N}$ for some fixed $N\in\mathbb{Z}_{>0}$. By $M(N)$ we denote the number of operations required to multiply integers $j_1$ and $j_2$ satisfying $0\le j_1,j_2\le 2^N$. We assume that addition and multiplication of floating point numbers can be performed with a relative error of $O(2^{-N})$ using $O(M(N))$ operations. With this assumption, we can deduce the computational complexity of the basic operations we will need. Computing a square root takes $O(M(N))$ operations; see [@RPB:76]. Computing the absolute value of a number is $1$ operation (a bit flip). Comparing two numbers takes $O(N)$ operations. Finding the maximum number in a list of $k$ numbers takes $O(kN)$ operations; see [@MB/RWF/VP/RLR/RET:73]. If $\boldsymbol{A}\in\textup{L}(\mathbb{R}^n;\mathbb{R}^m)$ and $\boldsymbol{B}\in\textup{L}(\mathbb{R}^m;\mathbb{R}^p)$ then the matrix multiplication $\boldsymbol{B}\boldsymbol{A}$ takes $O(mnpM(N))$ operations. Faster matrix multiplication algorithms than the direct one whose complexity we describe here are possible [e.g., @DC/SW:90], but we are mainly interested in the fact that matrix multiplication has polynomial complexity in the size of the matrices.
Computation of the $QR$-decomposition of a $k\times k$ matrix $\boldsymbol{A}$ has computational complexity $O(k^3M(N))$. Note that the $QR$-decomposition can be used to determine the Gram-Schmidt orthogonalisation of a finite number of vectors. We refer to [@GHG/CFVL:96 §5.2] for details. Let us describe deterministic bounds for the operations needed to compute the eigenvalues and eigenvectors of a $k\times k$ matrix $\boldsymbol{A}$, following @VYP/ZQC:99. Let us fix some norm $|||\cdot|||$ on $\textup{L}(\mathbb{R}^k;\mathbb{R}^k)$. Given $\epsilon=2^{-N}$ as above, let $\beta\in\mathbb{R}_{>0}$ be such that $2^{-\beta}|||\boldsymbol{A}|||\le\epsilon$. Then @VYP/ZQC:99 show that the eigenvalues and eigenvectors of $\boldsymbol{A}$ can be computed using an algorithm of complexity $$\label{eq:eigencomplexity} O(k^3M(N))+O((k\log^2k)(\log\beta+\log^2k)M(N)).$$ There are stochastic, iterative, or gradient flow algorithms that will generically perform computations with fewer operations than predicted by this bound. However, the complexity of such algorithms is difficult to understand, or they require unbounded numbers of operations in the worst case. In any event, here we only care that the complexity of the eigenproblem is polynomial. The previous two computational complexity results can be combined to show that finding the square root of a symmetric positive-definite matrix has computational complexity given by [\[eq:eigencomplexity\]](#eq:eigencomplexity){reference-type="eqref" reference="eq:eigencomplexity"}. This is no doubt known, but let us see how this works since it is simple. First compute the eigenvalues and eigenvectors of $\boldsymbol{A}$ using an algorithm with complexity given by [\[eq:eigencomplexity\]](#eq:eigencomplexity){reference-type="eqref" reference="eq:eigencomplexity"}. The eigenvectors can be made into an orthonormal basis of eigenvectors using the Gram-Schmidt procedure.
This decomposition can be performed using an algorithm of complexity $O(n^3M(N))$. Assembling the orthogonal eigenvectors into the columns of a matrix gives an orthogonal matrix $\boldsymbol{U}\in\textup{L}(\mathbb{R}^n;\mathbb{R}^n)$ and a diagonal matrix $\boldsymbol{D}\in\textup{L}(\mathbb{R}^n;\mathbb{R}^n)$ with positive diagonal entries such that $\boldsymbol{A}=\boldsymbol{U}\boldsymbol{D}\boldsymbol{U}^T$. Then the matrix $\boldsymbol{D}^{1/2}$ with diagonal entries equal to the square roots of the diagonal of $\boldsymbol{D}$ can be constructed with complexity $O(nM(N))$. Finally, $\boldsymbol{A}^{1/2}=\boldsymbol{U}\boldsymbol{D}^{1/2}\boldsymbol{U}^T$ is computed using matrix multiplication with complexity $O(n^3M(N))$. Using these known computational complexity results, it is relatively straightforward to assess the complexity of the computations of the various norms in Theorem [Theorem 1](#the:matnorms){reference-type="ref" reference="the:matnorms"}. In Table [1](#tab:complexity){reference-type="ref" reference="tab:complexity"} we display this data, recording only the dependency of the computations on the number of rows $m$ and columns $n$ of the matrix.

             $1$          $2$          $\infty$
  ---------- ------------ ------------ ----------
  $1$        $O(mn)$      $O(mn)$      $O(mn)$
  $2$        $O(mn2^m)$   $O(n^3)$     $O(mn)$
  $\infty$   $O(mn2^n)$   $O(mn2^n)$   $O(mn)$

  : Complexity of computing the norms $\lVert\cdot\rVert_{p,q}$

Note that the cases of $(p,q)\in\{(2,1),(\infty,1),(\infty,2)\}$ are exceptional in that the required operations grow exponentially with the size of $\boldsymbol{A}$. One must exercise some care in drawing conclusions here. For example, as we show in the proof of Theorem [Theorem 1](#the:matnorms){reference-type="ref" reference="the:matnorms"}, $$\label{eq:inftyinfty} \lVert\boldsymbol{A}\rVert_{\infty,\infty}=\max\{\lVert\boldsymbol{A}(\boldsymbol{u})\rVert_\infty\;|\enspace\boldsymbol{u}\in\{-1,1\}^n\},$$ and this computation has complexity $O(mn2^n)$.
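As an aside, the square-root construction for symmetric positive-definite matrices described above can be sketched in a few lines. The following minimal sketch uses numpy's symmetric eigensolver; the example matrix is arbitrary test data:

```python
import numpy as np

def spd_sqrt(A):
    """Square root of a symmetric positive-definite matrix via A = U D U^T."""
    eigvals, U = np.linalg.eigh(A)  # orthonormal eigenvectors in the columns of U
    assert (eigvals > 0).all(), "matrix must be positive-definite"
    return U @ np.diag(np.sqrt(eigvals)) @ U.T

# Example: a symmetric positive-definite test matrix.
A = np.array([[3.0, -1.0, 0.0], [-1.0, 3.0, -1.0], [0.0, -1.0, 3.0]])
R = spd_sqrt(A)

# The result is symmetric and squares back to A.
assert np.allclose(R @ R, A)
assert np.allclose(R, R.T)
```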
However, it turns out that the norm can be determined with a formula that is actually less complex. Indeed, our proof of the formula for $\lVert\cdot\rVert_{\infty,\infty}$ (which is not the usual proof) starts with the formula [\[eq:inftyinfty\]](#eq:inftyinfty){reference-type="eqref" reference="eq:inftyinfty"} and produces a result with complexity $O(mn)$ as stated in Table [1](#tab:complexity){reference-type="ref" reference="tab:complexity"}. One is then led to ask: are there similar simplifications of the norms corresponding to the cases $(p,q)\in\{(2,1),(\infty,1),(\infty,2)\}$? @JR:00 shows that the computation of $\lVert\cdot\rVert_{\infty,1}$ is NP-hard. We shall show here, using his ideas, that the computations of the norms $\lVert\cdot\rVert_{2,1}$ and $\lVert\cdot\rVert_{\infty,2}$ are likewise difficult, perhaps impossible, to reduce to algorithms with polynomial complexity. **Theorem 2**. *If there exists an algorithm to compute $\lVert\boldsymbol{A}\rVert_{2,1}$ or $\lVert\boldsymbol{A}\rVert_{\infty,2}$ whose computational complexity is polynomial in the number of rows and the number of columns of $\boldsymbol{A}$, then P=NP.* *First note that $\lVert\boldsymbol{A}\rVert_{2,1}=\lVert\boldsymbol{A}^T\rVert_{\infty,2}$, so it suffices to prove the theorem only for $(p,q)=(\infty,2)$.* *Following @JR:00 we introduce the notion of an ***$MC$-matrix*** ("$MC$" stands for "max-cut" since these matrices are related to the "max-cut problem" in graph theory) as a symmetric matrix $\boldsymbol{A}\in\textup{L}(\mathbb{R}^n;\mathbb{R}^n)$ with the property that the diagonal elements are equal to $n$ and the off-diagonal elements are either $0$ or $-1$. [@JR:94] shows that $MC$-matrices are positive-definite.
@SP/JR:93 also prove the following.* > **The following decision problem is NP-complete:\ > Given an $n\times n$ $MC$-matrix $\boldsymbol{A}$ and $M\in\mathbb{Z}_{>0}$, is $\langle\boldsymbol{A}(\boldsymbol{u}),\boldsymbol{u}\rangle\ge M$ for some $\boldsymbol{u}\in\{-1,1\}^n$?** *We will use this fact crucially in our proof.* *Let us call a symmetric matrix $\boldsymbol{A}\in\textup{L}(\mathbb{R}^n;\mathbb{R}^n)$ a ***$\sqrt{MC}$-matrix*** if $\boldsymbol{A}\circ\boldsymbol{A}$ is an $MC$-matrix. Note that the map $\boldsymbol{A}\mapsto\boldsymbol{A}\circ\boldsymbol{A}$ from the set of $\sqrt{MC}$-matrices to the set of $MC$-matrices is surjective since $MC$-matrices have symmetric positive-definite square roots by virtue of their being themselves symmetric and positive-definite.* *Now suppose that there exists an algorithm for determining the $(\infty,2)$-norm of a matrix, the computational complexity of which is of polynomial order in the number of rows and columns of the matrix. Let $\boldsymbol{A}$ be an $n\times n$ $MC$-matrix and let $M\in\mathbb{Z}_{>0}$. As we pointed out prior to stating the theorem, one can determine the $\sqrt{MC}$-matrix $\boldsymbol{A}^{1/2}$ using an algorithm with computational complexity that is polynomial in $n$. Then, by assumption, we can compute $$\begin{aligned} \lVert\boldsymbol{A}^{1/2}\rVert_{\infty,2}^2=&\; \max\{\lVert\boldsymbol{A}^{1/2}(\boldsymbol{u})\rVert_2^2\;|\enspace\boldsymbol{u}\in\{-1,1\}^n\}\\ =&\;\max\{\langle\boldsymbol{A}(\boldsymbol{u}),\boldsymbol{u}\rangle\;|\enspace\boldsymbol{u}\in\{-1,1\}^n\}\end{aligned}$$ in polynomial time. In particular, we can determine whether $\langle\boldsymbol{A}(\boldsymbol{u}),\boldsymbol{u}\rangle\ge M$ in polynomial time.
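The identity driving this reduction, $\lVert\boldsymbol{A}^{1/2}\rVert_{\infty,2}^2=\max\{\langle\boldsymbol{A}(\boldsymbol{u}),\boldsymbol{u}\rangle\;|\enspace\boldsymbol{u}\in\{-1,1\}^n\}$, can be checked on a small $MC$-matrix. A sketch, assuming numpy is available and reusing the eigendecomposition square root described earlier:

```python
import itertools
import numpy as np

# A 3x3 MC-matrix: diagonal entries n = 3, off-diagonal entries 0 or -1.
A = np.array([[3.0, -1.0, -1.0], [-1.0, 3.0, 0.0], [-1.0, 0.0, 3.0]])

# Symmetric positive-definite square root via eigendecomposition.
eigvals, U = np.linalg.eigh(A)
R = U @ np.diag(np.sqrt(eigvals)) @ U.T

# ||R||_{inf,2}^2 = max over u in {-1,1}^3 of ||R u||_2^2 = max <A u, u>.
lhs = max(np.dot(R @ u, R @ u) for u in itertools.product([-1.0, 1.0], repeat=3))
rhs = max(np.dot(A @ u, u) for u in itertools.product([-1.0, 1.0], repeat=3))
assert abs(lhs - rhs) < 1e-8
```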
As we stated above, this latter decision problem is NP-complete, and so we must have P=NP.* [^1]: Professor, Department of Mathematics and Statistics, Queen's University, Kingston, ON K7L 3N6, Canada, email: `andrew.lewis@queensu.ca`
{ "id": "2309.07190", "title": "A top nine list: Most popular induced matrix norms", "authors": "Andrew D. Lewis", "categories": "math.FA", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: |   Consider the polynomial $p:\mathbb{R}_+^2 \rightarrow (0,\infty)$ in two variables given by $p(x,y)=a(x)+b(x)y+y^2,$ where $a(x)=a_0(x+a_1)(x+a_2), b(x)=b_0(x+b_1)$, $a_0, a_1,a_2,b_0,b_1 \in \mathbb{R}$ with $a_1 \leqslant a_2$. In this paper, we obtain a set of necessary and sufficient conditions under which $\{\frac{1}{p(m,n)}\}_{(m,n) \in \mathbb{Z}_+^2}$ is a joint completely monotone net. As an application, we present a solution to the Cauchy dual subnormality problem for torally expansive toral $3$-isometric weighted $2$-shifts under some positivity assumption. address: | Department of Mathematics and Statistics\ Indian Institute of Technology Kanpur, India author: - Rajkamal Nailwal title: Joint complete monotonicity in two variables with an application to the Cauchy dual subnormality problem --- # Introduction The present paper can be considered as a sequel to [@ACN]. In the latter paper, it was shown that if $p(x, y)=a+b x+c y+d xy$ with $a > 0$ and $b, c, d \geqslant 0,$ then $\frac{1}{p}$ is joint completely monotone if and only if $a d - b c \leqslant 0$ (see [@ACN Theorem 3.1]). This completely determines polynomials of bi-degree $(1,1)$ whose reciprocal is a joint completely monotone net. In the same paper, a solution to the Cauchy dual subnormality problem (for short, CDSP) was presented for toral $3$-isometric weighted $2$-shifts, which are separate $2$-isometries (see [@ACN Theorem 4.9]; for solutions to CDSP for various classes of $m$-isometries, see [@AC2017; @ACJS; @BS2019; @CGR; @RZ2019]). In this paper, we continue this study and discuss the joint complete monotonicity of the reciprocal of polynomials of bi-degree $(2, 1)$ and $(2, 2)$. We also discuss an application to the CDSP for torally expansive toral $3$-isometric weighted $2$-shifts. Before we state the main results of this paper, let us fix some notations and recall the relevant notions. Let $n$ be a positive integer and $X$ be a set.
The notation $X^n$ represents the Cartesian product of $X$ with itself, taken $n$ times. Denote by $\mathbb{Z}_+$ and $\mathbb{R}_+,$ the set of nonnegative integers and nonnegative real numbers respectively. Let $\alpha = (\alpha_1, \ldots, \alpha_n), \beta = (\beta_1, \ldots, \beta_n) \in \mathbb{Z}^n_+.$ Set $|\alpha|=\alpha_1 + \cdots + \alpha_n$ and $(\beta)_\alpha = \prod_{j=1}^n (\beta_j)_{\alpha_j},$ where $(\beta_j)_{0}=1,$ $(\beta_j)_{1}=\beta_j$ and $$(\beta_j)_{\alpha_j} = \beta_j(\beta_j -1) \cdots (\beta_j -\alpha_j+1), \quad \alpha_j \geqslant 2, ~j=1, \ldots, n.$$ We denote $\alpha \leqslant\beta$ if $\alpha_j \leqslant\beta_j$ for every $j=1, \ldots, n.$ For $\alpha \leqslant\beta$, we let $\binom{\beta}{\alpha}=\prod_{j=1}^n \binom{\beta_j}{\alpha_j}.$ For a net $\{a_\alpha\}_{\alpha \in \mathbb Z^n_+}$ and $j=1, \ldots, n,$ let $\triangle_j$ denote the *forward difference operator* given by $$\begin{aligned} \triangle_j a_\alpha = a_{\alpha + \varepsilon_j} - a_\alpha, \quad \alpha \in \mathbb Z^n_+,\end{aligned}$$ where $\varepsilon_j$ stands for the $n$-tuple with $j$th entry equal to $1$ and $0$ elsewhere. Note that for any $i,j \in \{1,2,\ldots, n\}, \triangle_i \triangle_j =\triangle_j \triangle_i.$ For $\alpha=(\alpha_1, \ldots, \alpha_n) \in \mathbb Z^n_+,$ let $\triangle^\alpha$ denote the operator $\prod_{j=1}^n \triangle^{\alpha_j}_j.$ For a polynomial $p$ in one variable, let $\deg p$ denote the degree of $p.$ A polynomial $p$ of two variables is said to be of *bi-degree* $\alpha=(\alpha_1, \alpha_2) \in \mathbb Z^2_+$ if for each $j=1, 2,$ $\alpha_j$ is the largest integer for which $\partial^{\alpha_j}_j p \neq 0.$ A net $\mathfrak a = \{a_\alpha\}_{\alpha \in \mathbb{Z}^n_+}$ is said to be *joint completely monotone* if $$\begin{aligned} (-1)^{|\beta|} \triangle^{\beta} a_{\alpha} \geqslant 0, \quad \alpha, \beta \in \mathbb Z^n_+. \end{aligned}$$ When $n=1,$ we simply refer to $\mathfrak a$ as a *completely monotone* sequence. 
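These defining sign conditions can be checked mechanically for small multi-indices. The following sketch, in Python with exact rational arithmetic, tests the net $\{1/p(m,n)\}$ for the polynomial $p(x,y)=(x+1)(x+3)+2(x+2)y+y^2$; this is one admissible example chosen for illustration, not a polynomial taken from the text.

```python
from fractions import Fraction
from itertools import product

# p(x, y) = (x+1)(x+3) + 2(x+2)y + y^2 = (x + y + 2)^2 - 1 > 0 on Z_+^2.
def p(x, y):
    return (x + 1) * (x + 3) + 2 * (x + 2) * y + y * y

# The net a_{(m,n)} = 1/p(m, n), with exact arithmetic to avoid sign errors.
def a(m, n):
    return Fraction(1, p(m, n))

def diff(f, j):
    """Forward difference operator in variable j of a function of (m, n)."""
    if j == 1:
        return lambda m, n: f(m + 1, n) - f(m, n)
    return lambda m, n: f(m, n + 1) - f(m, n)

# Check (-1)^{|beta|} (Delta^beta a)_alpha >= 0 for small alpha, beta.
for beta1, beta2 in product(range(5), repeat=2):
    g = a
    for _ in range(beta1):
        g = diff(g, 1)
    for _ in range(beta2):
        g = diff(g, 2)
    sign = (-1) ** (beta1 + beta2)
    for m, n in product(range(4), repeat=2):
        assert sign * g(m, n) >= 0, (beta1, beta2, m, n)
```

This polynomial satisfies the conditions of Theorem 3 below (both inequalities with equality), so the observed signs are consistent with joint complete monotonicity.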
We say $\mathfrak a$ is *separately completely monotone* if for every $j\in \{1, \ldots, n\}$ and $k \in \mathbb{Z}_+,$ $$\begin{aligned} (-1)^{k} \triangle_j^{k} a_{\alpha} \geqslant 0, \quad \alpha \in \mathbb Z^n_+.\end{aligned}$$ It is readily seen that a joint completely monotone net is separately completely monotone. For more information on complete monotonicity in one and several variables, the reader is referred to [@Kball; @BCR1984; @BD2005; @RZ2019]. *Remark 1*. If $\phi$ is a completely monotone function on $\mathbb{Z}_+^n,$ then for any $\beta \in \mathbb{Z}_+^n,$ the function $\alpha \mapsto \phi(\alpha+\beta)$ is also completely monotone on $\mathbb{Z}_+^n.$ $\diamondsuit$ We are now ready to state the main results of this paper. **Theorem 2** (Special case of bi-degree $(2, 1)$). *Let $p:\mathbb{R}_+^2 \rightarrow (0,\infty)$ be a polynomial given by $p(x,y)=b(x)+a(x)y$ where $a(x)=a_0(x+a_1)$ and $b(x)=b_0(x+b_1)(x+b_2)$, $a_0, a_1, b_0, b_1, b_2 \in \mathbb{R},$ with $b_1\leqslant b_2$ and $a_0,a_1 \neq 0.$ Then the net $\left\{ \frac{1}{p(m,n)}\right\}_{m,n \in \mathbb{Z}_+}$ is joint completely monotone if and only if $b_1 \leqslant a_1 \leqslant b_2.$* **Theorem 3** (Special case of bi-degree $(2,2)$). *Let $p:\mathbb{R}_+^2 \rightarrow (0,\infty)$ be a polynomial given by $p(x,y)=a(x)+b(x)y+y^2,$ where $a(x)=a_0(x+a_1)(x+a_2), b(x)=b_0(x+b_1)$, $a_0, a_1,a_2,b_0,b_1 \in \mathbb{R}$ with $a_1 \leqslant a_2$. Then $\left\{ \frac{1}{p(m,n)}\right\}_{m,n \in \mathbb{Z}_+}$ is a joint completely monotone net if and only if $$\begin{aligned} \label{Impeq} 4a_0 \leqslant b_0^2 ~\mbox{and}~ a_0(a_2-a_1)^2\leqslant b_0^2(b_1-a_1)(a_2-b_1).\end{aligned}$$* ## Plan of the paper {#plan-of-the-paper .unnumbered} In Section 2, we present a proof of Theorem [Theorem 2](#bi-deg-2-1-second){reference-type="ref" reference="bi-deg-2-1-second"}.
We also present a consequence of this theorem (see Corollary [Corollary 4](#consequence-1){reference-type="ref" reference="consequence-1"}). In Section 3, we provide a proof of Theorem [Theorem 3](#MT){reference-type="ref" reference="MT"}, which is fairly long and consists of several lemmas. We begin the proof by providing some necessary conditions for a class of polynomials in two variables taking positive values (see Lemma [Lemma 6](#obv){reference-type="ref" reference="obv"}(i)). We also obtain necessary conditions for a class of polynomials of bi-degree $(k,2),$ $k \in \mathbb{Z}_+,$ whose reciprocal is joint completely monotone (see Lemma [Lemma 7](#prop-n-condition){reference-type="ref" reference="prop-n-condition"}). Another ingredient in the proof of Theorem [Theorem 3](#MT){reference-type="ref" reference="MT"} is Lemma [Lemma 8](#Ext-cm){reference-type="ref" reference="Ext-cm"}. In Section 4, we provide a solution to the Cauchy dual subnormality problem for torally expansive toral $3$-isometric weighted $2$-shifts under some mild positivity conditions (see Theorem [Theorem 14](#CDSP-T3I){reference-type="ref" reference="CDSP-T3I"}). The proof of Theorem [Theorem 14](#CDSP-T3I){reference-type="ref" reference="CDSP-T3I"} relies on Theorems [Theorem 2](#bi-deg-2-1-second){reference-type="ref" reference="bi-deg-2-1-second"} and [Theorem 3](#MT){reference-type="ref" reference="MT"} and a characterization of toral $3$-isometries (see Proposition [Proposition 11](#3-iso-wt-char){reference-type="ref" reference="3-iso-wt-char"}). # A special case of bi-degree $(2, 1)$ Recall that for a positive real number $\nu,$ the *Bessel function $J_{\nu}(z)$ of the first kind of order $\nu$* is given by $$\begin{aligned} J_{\nu}(z)=\Big(\frac{z}{2}\Big)^{\nu} \sum_{k=0}^\infty \Big(\frac{-z^{2}}{4}\Big)^k\frac{1}{k!\Gamma(\nu+k+1)}, \quad z \in \mathbb C \setminus (-\infty, 0],\end{aligned}$$ where $\Gamma$ denotes the Gamma function.
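As a quick numerical aside (an illustration only; the helper `bessel_j` is hypothetical and not from the paper), truncating the series above after a few dozen terms already pins down the value $J_0(2\sqrt{5})=\sum_{k\geqslant 0}(-5)^k/(k!)^2,$ which plays a role in the necessity argument below.

```python
import math

def bessel_j(nu, z, terms=40):
    # Truncation of the defining series for the Bessel function J_nu(z);
    # the series converges rapidly, so 40 terms are ample for moderate z.
    return (z / 2) ** nu * sum(
        (-z * z / 4) ** k / (math.factorial(k) * math.gamma(nu + k + 1))
        for k in range(terms)
    )

# J_0(2*sqrt(5)) = sum_k (-5)^k / (k!)^2:
print(round(bessel_j(0, 2 * math.sqrt(5)), 4))  # -0.3269
```

The point of the computation is only that this value is strictly negative, which is what the proof of Theorem [Theorem 2](#bi-deg-2-1-second){reference-type="ref" reference="bi-deg-2-1-second"} exploits.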
*Proof of Theorem [Theorem 2](#bi-deg-2-1-second){reference-type="ref" reference="bi-deg-2-1-second"}.* Since the range of $p$ is contained in $(0,\infty)$ and $a_0,a_1 \neq 0,$ an elementary check shows that $a_0,a_1,b_0,b_1,b_2 >0$ (see the discussion prior to [@ACN Proposition 3.2]). It was implicitly recorded in the proof of [@ACN Theorem 3.6] that for $m,n \in \mathbb{Z}_+$ $$\begin{aligned} \frac{1}{p(m,n)} &=& \int_{[0, 1]^2}t^{n} s^m_1 \, \frac{(s_1/t^{c_0})^{a_1-1}t^{c_0(b_1+b_2-a_1)-1}}{a_0t^{c_0}}\\ && \sum_{k=0}^{\infty}\frac{(-c_0c_1\log {(s_1/t^{c_0})}\log t)^k}{k!^2}\mathbbm{1}_{[0,t^{c_0}]}(s_1)ds_1dt,\end{aligned}$$ where $c_0=b_0/a_0>0$ and $c_1=(a_1-b_2)(a_1-b_1)$. So the weight function for the net $\left\{\frac{1}{p(m,n)}\right\}_{(m,n)\in \mathbb{Z}_+^2}$ is $$\begin{aligned} \notag w(s,t)=\frac{(s/t^{c_0})^{a_1-1}t^{c_0(b_1+b_2-a_1)-1}}{a_0t^{c_0}} \sum_{k=0}^{\infty}\frac{(-c_2\log {(s/t^{c_0})}\log t)^k}{k!^2}\mathbbm{1}_{[0,t^{c_0}]}(s),\end{aligned}$$ where $s,t \in (0,1)$ and $c_2=c_0c_1.$ The sufficiency part follows from [@ACN Theorem 3.6]. To prove the necessity part, assume that $a_1 \notin [b_1,b_2].$ We will show that $w(s,t)<0$ on some open set contained in $(0,1)^2$. Since, by the pasting lemma, $w(s,t)$ is continuous on $(0,1)^2,$ it suffices to show that $w(s,t)<0$ for some $s,t \in (0,1)$. For this, it is enough to check that $$\begin{aligned} \notag \sum_{k=0}^{\infty}\frac{(-c_2\log {(s/t^{c_0})}\log t)^k}{k!^2}\mathbbm{1}_{[0,t^{c_0}]}(s) < 0,\end{aligned}$$ for some $s,t \in (0,1)$. Observe that $c_2=c_0c_1>0$. Take $t_0=1/2$ and $s_0=\frac{e^{-\frac{5}{c_2\log(2)}}}{2^{c_0}}<\frac{1}{2^{c_0}}$. It is easy to see that $$\begin{aligned} \sum_{k=0}^{\infty}\frac{(-c_2\log {(s_0/t_0^{c_0})}\log t_0)^k}{k!^2}\mathbbm{1}_{[0,{t_0}^{c_0}]}(s_0)= \sum_{k=0}^{\infty}\frac{(-5)^k}{k!^2} = J_{0}(2\sqrt{5}) \approx -0.3268,\end{aligned}$$ where $J_{0}(x)$ is the Bessel function of the first kind of order $0$.
This, together with the continuity of $w(s,t)$ on $(0,1)^2,$ implies that $\left\{\frac{1}{p(m,n)}\right\}_{(m,n)\in \mathbb{Z}_+^2}$ is not a joint completely monotone net. Therefore, we have $b_1 \leqslant a_1 \leqslant b_2.$ This completes the proof. ◻ The following corollary is a consequence of Theorem [Theorem 2](#bi-deg-2-1-second){reference-type="ref" reference="bi-deg-2-1-second"}. **Corollary 4**. *For $a_0, b_0, b_1, b_2 \in (0,\infty)$ and $a_1 \in \mathbb{R},$ let $a(x)=a_0(x+a_1)$ and $b(x)=b_0(x+b_1)(x+b_2)$ with $b_1\leqslant b_2.$ If the net $\left\{ \frac{1}{b(m)+a(m)n}\right\}_{m,n \in \mathbb{Z}_+}$ is joint completely monotone, then $a_1>0.$* *Proof.* Assume that the net $\left\{ \frac{1}{b(m)+a(m)n}\right\}_{m,n \in \mathbb{Z}_+}$ is joint completely monotone. Suppose, if possible, that $a_1 \leqslant 0.$ By Remark [Remark 1](#remark-cm-gen){reference-type="ref" reference="remark-cm-gen"}, for $k \in \mathbb{Z}_+,$ the net $\left\{ \frac{1}{b(m+k)+a(m+k)n}\right\}_{m,n \in \mathbb{Z}_+}$ is a joint completely monotone net. Choose $k$ such that $a_1+k>0.$ By Theorem [Theorem 2](#bi-deg-2-1-second){reference-type="ref" reference="bi-deg-2-1-second"}, $b_1+k \leqslant a_1+k \leqslant b_2 +k.$ This yields $b_1 \leqslant 0,$ which is a contradiction. Hence $a_1 > 0.$ ◻ # A special case of bi-degree $(2, 2)$ In this section, we consider a class of polynomials with bi-degree $(2,2)$ and characterize joint complete monotonicity of their reciprocals. The following lemma, which is a consequence of [@AC2017 Proposition 4.3], is stated for frequent use. **Lemma 5**. *Let $p:\mathbb{R}_+ \rightarrow (0,\infty)$ be a quadratic polynomial. If $p$ is irreducible, then $\{1/p(n)\}_{n\in \mathbb{Z}_+}$ can never be a completely monotone sequence.* We need several lemmas in the proof of Theorem [Theorem 3](#MT){reference-type="ref" reference="MT"}. **Lemma 6**.
*Let $p:\mathbb{R}_+^2 \rightarrow (0,\infty)$ be a polynomial given by $p(x,y)=a(x)+b(x)y+y^2,$ where $a(x)=a_0(x+a_1)(x+a_2),$ $b(x)=b_0(x+b_1)$, $a_0,a_1,a_2,b_0,b_1 \in \mathbb{R}$ with $a_1 \leqslant a_2$. Then the following hold:* 1. *$a_0,a_1,a_2 >0$ and $b_0,b_1 \geqslant 0,$* 2. *if $\left\{\frac{1}{p(m,n)}\right\}_{(m,n)\in \mathbb{Z}_+^2}$ is a joint completely monotone net, then $b_0 \neq 0.$* *Proof.* (i) Note that for all $x,y\in \mathbb{R}_+,$ $p(x,y) > 0.$ Thus $p(x,0)>0,$ or equivalently, $a(x)>0$ for every $x \in \mathbb R_+.$ This shows that $a_0,a_1,a_2 >0$. If $b_0<0,$ then for sufficiently large values of $x,$ $p(x,\cdot)$ will have a positive root, which is not possible since $p(x,y) \in (0, \infty)$ for all $x,y \in \mathbb{Z}_+.$ This shows that $b_0 \geqslant 0.$ If $b_1<0$, then $p(0,y)$ will have a positive real root, and one may argue as above to get a contradiction. \(ii\) Assume that $\left\{\frac{1}{p(m,n)}\right\}_{(m,n)\in \mathbb{Z}_+^2}$ is a joint completely monotone net. If $b_0=0,$ then for large values of $n,$ $p(x,n)$ is irreducible in $x.$ Therefore, by Lemma [Lemma 5](#freq){reference-type="ref" reference="freq"}, $\left\{\frac{1}{p(m,n)}\right\}_{(m,n)\in \mathbb{Z}_+^2}$ is never joint completely monotone. This shows that $b_0 \neq 0.$ ◻ The following lemma provides necessary conditions for a class of polynomials in two variables whose reciprocal is joint completely monotone. **Lemma 7**. *Let $p:\mathbb{R}_+^2\rightarrow (0,\infty)$ be a polynomial in two variables given by $p(x,y)=q(x)+r(x)y+s(x)y^2,$ where $q,r$ and $s$ are polynomials in one variable. If the net $\{1/p(m,n)\}_{(m,n)\in \mathbb{Z}_+^2}$ is joint completely monotone, then $$\begin{aligned} \label{p1-sqrt} && 4q(m)s(m) \leqslant r^2(m), \quad m \in \mathbb{Z}_+, \\ && \deg (q)+\deg (s) \leqslant 2 \deg (r). \label{inq-new}\end{aligned}$$* *Proof.* Assume that $\{1/p(m,n)\}_{(m,n)\in \mathbb{Z}_+^2}$ is joint completely monotone.
As noted earlier, $\left\{\frac{1}{p(m,n)}\right\}_{(m,n)\in \mathbb{Z}_+^2}$ is separate completely monotone. Therefore, by Lemma [Lemma 5](#freq){reference-type="ref" reference="freq"}, for any $m \in \mathbb{Z}_+,$ the roots of $p(m,\cdot)$ are real numbers. Thus, we can apply the formula for the roots of a quadratic equation to obtain [\[p1-sqrt\]](#p1-sqrt){reference-type="eqref" reference="p1-sqrt"}. Note that [\[p1-sqrt\]](#p1-sqrt){reference-type="eqref" reference="p1-sqrt"} yields [\[inq-new\]](#inq-new){reference-type="eqref" reference="inq-new"}. ◻ We need the following lemma in the proof of the necessity part of Theorem [Theorem 3](#MT){reference-type="ref" reference="MT"}. **Lemma 8**. *Let $p:\mathbb R_+^2\rightarrow (0,\infty)$ be a polynomial in two variables given by $p(x,y)=a(x)+b(x)y+y^2,$ where $a$ and $b$ are polynomials in one variable. Assume that $b^2(m)\neq 4a(m)$ for every $m\in \mathbb{Z}_+$. Let $\left\{1/p(m,n)\right\}_{(m,n)\in \mathbb{Z}_+^2}$ be a joint completely monotone net. Then, for any positive real numbers $\alpha$ and $\beta,$ $\left\{\frac{1}{p(m,\alpha m +\beta)}\right\}_{m \in \mathbb{Z}_+}$ is a completely monotone sequence.* *Proof.* By Lemma [Lemma 7](#prop-n-condition){reference-type="ref" reference="prop-n-condition"} and the assumption that $b^2(m)\neq 4a(m)$ for every $m \in \mathbb{Z}_+,$ $$b^2(m)-4a(m)> 0, \quad m \in \mathbb Z_+.$$ Also, for $m\in \mathbb Z_+$ and $y \in \mathbb R_+,$ $$p(m,y)=(y+r_1(m))(y+r_2(m)),$$ where $r_1$ and $r_2$ are given by $$\begin{aligned} r_1(m)=\frac{b(m)+\sqrt{b^2(m)-4a(m)}}{2}, \quad r_2(m)=\frac{b(m)-\sqrt{b^2(m)-4a(m)}}{2}. \end{aligned}$$ Note that for every $m \in \mathbb Z_+$ and $y \in \mathbb R_+,$ $$\begin{aligned} \frac{1}{p(m,y)}&=&\frac{1}{(y+r_1(m))(y+r_2(m))} \\ &=&\frac{1}{r_2(m)-r_1(m)}\left(\frac{1}{y+r_1(m)}-\frac{1}{y+r_2(m)}\right)\\ &=&\int_0^1t^y\left(\frac{t^{r_1(m)-1}-t^{r_2(m)-1}}{r_2(m)-r_1(m)}\right)dt.
\end{aligned}$$ Therefore, $$\begin{aligned} \label{t-w-int} \frac{1}{p(m,y)}=\int_0^1t^yw_m(t)dt, \quad m\in \mathbb Z_+,\,y \in \mathbb R_+, \end{aligned}$$ where $w_m$ is given by $$\begin{aligned} w_m(t)=\frac{t^{r_1(m)-1}-t^{r_2(m)-1}}{r_2(m)-r_1(m)}, \quad t \in (0,1). \end{aligned}$$ Since the net $\left\{\frac{1}{p(m,n)}\right\}_{m,n \in \mathbb Z_+}$ is joint completely monotone, by [\[t-w-int\]](#t-w-int){reference-type="eqref" reference="t-w-int"} $$\begin{aligned} (-1)^{i}\triangle_1^{i}\frac{1}{p(m,n)}=\int_{[0, 1]}t^n(-1)^{i}\triangle_1^{i}w_m(t)dt \geqslant 0, \quad i,m \in \mathbb Z_+. \end{aligned}$$ This implies that for every $i,m \in \mathbb Z_+$ and $t \in (0,1),$ $$(-1)^{i}\triangle_1^{i}w_m(t) \geqslant 0.$$ Therefore, for each $t\in (0,1),$ $\{w_{m}(t)\}_{m\in \mathbb Z_+}$ is completely monotone. Let $\alpha$ and $\beta$ be positive real numbers. Note that for every $t\in (0,1),$ $\{t^{\alpha m+\beta}\}_{m\in \mathbb Z_+}$ is a completely monotone sequence. By [@BCR1984 Lemma 8.2.1(v)], for every $t \in (0,1),$ we have $$\begin{aligned} (-1)^{i}\triangle ^{i}t^{\alpha m +\beta}w_m(t) \geqslant 0, \quad m \in \mathbb Z_+. \end{aligned}$$ This, combined with [\[t-w-int\]](#t-w-int){reference-type="eqref" reference="t-w-int"}, yields $$\begin{aligned} (-1)^{i}\triangle^{i}\frac{1}{p(m,\alpha m+\beta)}&=&\int_0^1(-1)^{i}\triangle^{i}t^{\alpha m +\beta}w_m(t)dt \geqslant 0, \quad m \in \mathbb Z_+. \end{aligned}$$ This shows that $\left\{\frac{1}{p{(m,\alpha m + \beta)}}\right\}_{m\in \mathbb{Z}_+}$ is a completely monotone sequence. ◻ *Proof of Theorem [Theorem 3](#MT){reference-type="ref" reference="MT"}.* Assume that the net $\left\{\frac{1}{p(m,n)}\right\}_{(m,n)\in \mathbb{Z}_+^2}$ is joint completely monotone.
A routine calculation shows that $$\begin{aligned} \label{b-square-4a} && b^2(m)-4a(m) \\ \notag &=& (b_0^2-4a_0)m^2+(2b_0^2b_1-4a_0(a_1+a_2))m+b_0^2b_1^2-4a_0a_1a_2,\end{aligned}$$ which, by [\[p1-sqrt\]](#p1-sqrt){reference-type="eqref" reference="p1-sqrt"} (applied to $q=a$, $r=b$ and $s=1$), is nonnegative for every $m \in \mathbb Z_+$. It follows that $b_0^2-4a_0 \geqslant 0$ and $b_0^2b_1^2-4a_0a_1a_2 \geqslant 0.$ By Lemma [Lemma 6](#obv){reference-type="ref" reference="obv"}, $a_0,a_1,a_2, b_0>0$ and $b_1\geqslant 0.$ Before proceeding, consider the functions given by $$\begin{aligned} \label{f-g} f(m)= \frac{b(m)}{2}, ~g(m)=\frac{\sqrt{b^2(m)-4a(m)}}{2}, \quad m \in \mathbb Z_+, \end{aligned}$$ ($g$ is real-valued since $b^2 \geqslant 4a$) and note that $$\begin{aligned} \label{1byp-f-g} \frac{1}{p(m,n)}=\frac{1}{(n+f(m))^2-g^2(m)}, \quad m,n \in \mathbb{Z}_+.\end{aligned}$$ We will divide the verification of [\[Impeq\]](#Impeq){reference-type="eqref" reference="Impeq"} into the following cases. *Case 1*. $\deg(b^2-4a) \leqslant 1$ If $\deg(b^2-4a) =0,$ then by [\[b-square-4a\]](#b-square-4a){reference-type="eqref" reference="b-square-4a"}, $b_0^2=4a_0$ and $2b_1=a_1+a_2$, and hence [\[Impeq\]](#Impeq){reference-type="eqref" reference="Impeq"} holds. Assume, if possible, that $b^2(m)-4a(m)$ is a linear polynomial.
By [\[b-square-4a\]](#b-square-4a){reference-type="eqref" reference="b-square-4a"}, $b_0^2=4a_0,$ and hence for every $m \in \mathbb Z_+,$ $$\begin{aligned} \label{b-0-square} b^2(m)-4a(m)=b_0^2\big(2b_1-(a_1+a_2)\big)m+b_0^2(b_1^2-a_1a_2)=c_0m+c_1,\end{aligned}$$ where $c_0=b_0^2\big(2b_1-(a_1+a_2)\big)$ and $c_1=b_0^2(b_1^2-a_1a_2).$ Since $b^2(m)-4a(m)$ is a nonnegative linear polynomial (see [\[p1-sqrt\]](#p1-sqrt){reference-type="eqref" reference="p1-sqrt"}), we have $$\begin{aligned} \label{a1plusa2} a_1+a_2 < 2b_1.\end{aligned}$$ A routine calculation (completing the square) using [\[f-g\]](#f-g){reference-type="eqref" reference="f-g"} and [\[b-0-square\]](#b-0-square){reference-type="eqref" reference="b-0-square"} shows that for $m,n \in \mathbb{Z}_+,$ $$\begin{aligned} p(m, n) &\overset{\eqref{1byp-f-g}}=& (n+f(m))^2-g^2(m) \\ &=& \Big(\frac{b_0m}{2}+n+\frac{b_0b_1}{2}-\frac{c_0}{b_0}\Big)^2 +4b_0\Big(b_1-\frac{a_1+a_2}{2}\Big)n + c_0b_1-c_1-\frac{c_0^2}{b_0^2}. \end{aligned}$$ Since $a_1+a_2 < 2b_1$ (see [\[a1plusa2\]](#a1plusa2){reference-type="eqref" reference="a1plusa2"}) and $b_0 >0,$ we note that there exists $n_0 \in \mathbb Z_+$ such that $$\begin{aligned} \frac{c_0^2}{b_0^2}+c_1-c_0b_1 < 4b_0\Big(b_1-\frac{a_1+a_2}{2}\Big)n, \quad n \geqslant n_0. \end{aligned}$$ It follows that $p(m,n_0)$ has no real roots in $m$, so $p(m,n_0)$ is irreducible in $m$, and hence, by Lemma [Lemma 5](#freq){reference-type="ref" reference="freq"}, $\left\{\frac{1}{p(m,n)}\right\}_{(m,n)\in \mathbb{Z}_+^2}$ is not separate completely monotone. Hence $\deg(b^2-4a) =0,$ which completes the proof in this case. *Case 2*.
$b^2(x)-4a(x)$ is a quadratic polynomial Note that by [\[p1-sqrt\]](#p1-sqrt){reference-type="eqref" reference="p1-sqrt"} and [\[b-square-4a\]](#b-square-4a){reference-type="eqref" reference="b-square-4a"}, $b_0^2-4a_0 > 0.$ It is easy to see using [\[f-g\]](#f-g){reference-type="eqref" reference="f-g"} that for every $m \in \mathbb{Z}_+,$ $$\begin{aligned} g^2(m) &=&(c_0m+c_1)^2+c_2, \end{aligned}$$ where $c_0, c_1, c_2$ are given by $$\begin{aligned} \label{c2-inq} \notag c_0&=&\frac{\sqrt{b_0^2-4a_0}}{2}, \quad \notag c_1= \frac{b_0^2b_1-2a_0(a_1+a_2)}{2\sqrt{b_0^2-4a_0}},\\ c_2&=&-\frac{a_0(a_2-a_1)^2-b_0^2(b_1-a_1)(a_2-b_1)}{b_0^2-4a_0}. \end{aligned}$$ Now, we choose a sufficiently large $\alpha_0 \in \mathbb{Z}_+$ such that $c_0 \alpha_0 +c_1 >0$ and $b^2(m+\alpha_0) \neq 4a(m+\alpha_0)$ for every $m\in \mathbb{Z}_+.$ We also choose a sufficiently large natural number, say $N_0>1,$ such that $$\begin{aligned} \label{l_1-l_2} l_1:=N_0c_0-\frac{b_0}{2}> 0, \quad l_2:=N_0(c_0\alpha_0+c_1)-\frac{b_0\alpha_0}{2}-\frac{b_0b_1}{2}>0. \end{aligned}$$ Take $n=l_1m+l_2$ and consider $$\begin{aligned} && \frac{1}{p(m+\alpha_0,l_1m+l_2)}\\&\overset{\eqref{1byp-f-g}}=& \frac{1}{(l_1m+l_2+f(m+\alpha_0))^2-g^2(m+\alpha_0)}\\&\overset{\eqref{f-g},\eqref{l_1-l_2}}=&\frac{1}{(N_0c_0m+N_0(c_0\alpha_0+c_1))^2-(c_0m+c_0\alpha_0+c_1)^2-c_2}\\ &=&\frac{1}{(N_0^2-1)(c_0m+c_0\alpha_0+c_1)^2-c_2}. \end{aligned}$$ Assume that [\[Impeq\]](#Impeq){reference-type="eqref" reference="Impeq"} does not hold. By [\[c2-inq\]](#c2-inq){reference-type="eqref" reference="c2-inq"}, we obtain $c_2<0.$ Therefore, the polynomial $(N_0^2-1)(c_0m+c_0 \alpha_0+c_1)^2-c_2$ is irreducible in $m.$ One may see, using Lemma [Lemma 5](#freq){reference-type="ref" reference="freq"}, that the sequence $\left\{\frac{1}{p(m+ \alpha_0, l_1m+l_2)}\right\}_{m \in \mathbb{Z}_+}$ is not completely monotone.
This is not possible in view of Lemma [Lemma 8](#Ext-cm){reference-type="ref" reference="Ext-cm"} and Remark [Remark 1](#remark-cm-gen){reference-type="ref" reference="remark-cm-gen"}. This completes the proof of the necessity part. We will divide the verification of the sufficiency part into several cases. *Case 1*. $b^2(m)-4a(m)$ is a constant By [\[b-square-4a\]](#b-square-4a){reference-type="eqref" reference="b-square-4a"}, we have $2b_1=a_1+a_2$ and $b_0^2 = 4a_0.$ This implies that for every $m\in \mathbb Z_+,$ $$\begin{aligned} b^2(m)-4a(m)\overset{\eqref{b-square-4a}}=b_0^2b_1^2-4a_0a_1a_2 = a_0(a_1-a_2)^2 \geqslant 0. \end{aligned}$$ It follows that for $m,n\in \mathbb{Z}_+,$ $$\begin{gathered} \frac{1}{p(m,n)} \overset{\eqref{1byp-f-g}}= \\ \notag \frac{1}{\Big(n+\frac{b_0}{2}m+\frac{b_0b_1}{2}+\frac{\sqrt{b_0^2b_1^2-4a_0a_1a_2}}{2}\Big)\Big(n+\frac{b_0}{2}m+\frac{b_0b_1}{2}-\frac{\sqrt{b_0^2b_1^2-4a_0a_1a_2}}{2}\Big)}.\end{gathered}$$ Clearly, $\frac{b_0b_1}{2}+\frac{\sqrt{b_0^2b_1^2-4a_0a_1a_2}}{2} \geqslant 0$ and $\frac{b_0b_1}{2}-\frac{\sqrt{b_0^2b_1^2-4a_0a_1a_2}}{2} \geqslant 0.$ In this case, $\{1/p(m,n)\}_{(m,n)\in \mathbb{Z}_+^2}$ is a joint completely monotone net since it is the product of two joint completely monotone nets (see [@BCR1984 Lemma 8.2.1(v)]). *Case 2*. $b^2(m)-4a(m)$ is a linear polynomial Note that from [\[Impeq\]](#Impeq){reference-type="eqref" reference="Impeq"}, $$\begin{aligned} a_0(a_2-a_1)^2 \leqslant b_0^2(b_1-a_1)(a_2-b_1). \end{aligned}$$ Since $b^2(m)-4a(m)$ is a linear polynomial, we have $4a_0=b_0^2,$ and hence $$\begin{aligned} (a_2-b_1+b_1-a_1)^2 \leqslant 4(b_1-a_1)(a_2-b_1). \end{aligned}$$ It now follows that $$\begin{aligned} (a_2-b_1)^2+(b_1-a_1)^2+2(a_2-b_1)(b_1-a_1) \leqslant 4(b_1-a_1)(a_2-b_1), \end{aligned}$$ which clearly yields $(a_2-2b_1+a_1)^2 \leqslant 0,$ or equivalently, $a_1+a_2=2b_1.$ Thus, this case reduces to Case 1.
Therefore, the net $\left\{\frac{1}{p(m,n)}\right\}_{(m,n)\in \mathbb{Z}_+^2}$ is joint completely monotone. *Case 3*. $b^2(x)-4a(x)$ is a quadratic polynomial Note that $b_0^2-4a_0 > 0.$ For every $m \in \mathbb{Z}_+,$ we already noted that $$\begin{aligned} g^2(m)&=& (c_0m+c_1)^2+c_2, \end{aligned}$$ where $c_0, c_1, c_2$ are given by $$\begin{aligned} \notag c_0&=&\frac{\sqrt{b_0^2-4a_0}}{2}, \quad c_1=\frac{b_0^2b_1-2a_0(a_1+a_2)}{2\sqrt{b_0^2-4a_0}},\\ \label{c2} c_2&=& -\frac{a_0(a_2-a_1)^2-b_0^2(b_1-a_1)(a_2-b_1)}{b_0^2-4a_0}. \end{aligned}$$ Also note that by [\[1byp-f-g\]](#1byp-f-g){reference-type="eqref" reference="1byp-f-g"}, for $m,n \in \mathbb{Z}_+,$ $$\begin{aligned} \frac{1}{p(m, n)} &=& \frac{1}{(n+b(m)/2)^2-((c_0m+c_1)^2+c_2)} \\ &=& \frac{1}{(n+\frac{b(m)}{2}+c_0m+c_1)(n+\frac{b(m)}{2}-c_0m-c_1)-c_2}\\ &=& \frac{1}{p_1(m,n)p_2(m,n)-c_2}, \end{aligned}$$ where $p_1$ and $p_2$ are given by $$\begin{aligned} p_1(m,n):=n+(b_0/2+c_0)m+(b_0b_1/2+c_1), \quad m, n \in \mathbb{Z}_+ , \\ p_2(m,n):=n+(b_0/2-c_0)m+(b_0b_1/2-c_1), \quad m,n \in \mathbb{Z}_+. \end{aligned}$$ By [\[Impeq\]](#Impeq){reference-type="eqref" reference="Impeq"} and [\[c2\]](#c2){reference-type="eqref" reference="c2"}, $c_2 \geqslant 0.$ If $c_1 \geqslant 0,$ then $b_0b_1/2+c_1 \geqslant 0,$ and since $$\begin{aligned} (b_0b_1/2+c_1)(b_0b_1/2-c_1) = p_1(0,0)p_2(0,0) > c_2\geqslant 0,\end{aligned}$$ we must have $b_0b_1/2-c_1 > 0.$ Similarly, if $c_1<0,$ then $b_0b_1/2-c_1 > 0,$ and hence $b_0b_1/2+c_1 > 0.$ Thus, for any real value of $c_1,$ $\{1/p_1(m,n)\}_{(m,n)\in \mathbb{Z}_+^2}$ and $\{1/p_2(m,n)\}_{(m,n)\in \mathbb{Z}_+^2}$ are joint completely monotone nets (see [@ACN Theorem 3.1]). 
Note that for $m,n \in \mathbb{Z}_+,$ $p(m,n)=p_1(m,n)p_2(m,n)-c_2> 0.$ Thus, we have $$\begin{aligned} \label{lessthan1} \frac{c_2}{p_1(m,n)p_2(m,n)}<1, \quad m,n \in \mathbb{Z}_+.\end{aligned}$$ Therefore, for all $m,n \in \mathbb{Z}_+,$ $$\begin{aligned} \frac{1}{p_1(m,n)p_2(m,n)-c_2}&=&\frac{1}{p_1(m,n)p_2(m,n)}\Big(\frac{1}{1-\frac{c_2}{p_1(m,n)p_2(m,n)}}\Big)\\ &\overset{\eqref{lessthan1}}=&\sum_{k=0}^{\infty}\frac{c_2^k}{(p_1(m,n)p_2(m,n))^{k+1}}.\end{aligned}$$ Since, for each $k\in \mathbb{Z}_+$, $\left\{\frac{c_2^k}{(p_1(m,n)p_2(m,n))^{k+1}}\right\}_{(m,n)\in \mathbb{Z}_+^2}$ is a joint completely monotone net, the finite sum $\left\{\sum_{k=0}^\ell\frac{c_2^k}{(p_1(m,n)p_2(m,n))^{k+1}}\right\}_{(m,n)\in \mathbb{Z}_+^2},$ where $\ell \in \mathbb Z_+,$ is also joint completely monotone. Since the pointwise limit of joint completely monotone nets is joint completely monotone (see [@BCR1984 p. 130]), we conclude that the net $\left\{\sum_{k=0}^{\infty}\frac{c_2^k}{(p_1(m,n)p_2(m,n))^{k+1}}\right\}_{(m,n)\in \mathbb{Z}_+^2}$ is joint completely monotone. This completes the proof of the sufficiency part. ◻ # An application to the Cauchy dual subnormality problem Let $n$ be a positive integer. An operator tuple $T=(T_1, \ldots, T_n)$ on a complex separable Hilbert space $H$ is said to be a *commuting $n$-tuple* if $T_1, \ldots, T_n$ are bounded linear operators on $H$ and $T_i T_j = T_j T_i$ for every $1 \leqslant i \neq j \leqslant n.$ Following [@Ag1990; @Athavale2001; @R1988], we now give the definition of a *toral $m$-isometry*. **Definition 9**. Let $m$ be a positive integer.
A commuting $n$-tuple $T$ is said to be a *toral $m$-isometry* if $$\begin{aligned} \sum_{\overset{\alpha \in \mathbb Z^n_+}{0 \leqslant\alpha \leqslant\beta}} (-1)^{|\alpha|} \binom{\beta}{\alpha}T^{*\alpha}T^{\alpha} = 0, \quad \beta \in \mathbb Z^n_+, ~|\beta|=m,\end{aligned}$$ where $T^{\alpha}$ denotes the bounded linear operator $\prod_{j=1}^nT^{\alpha_j}_j$ and $T^{*\alpha}$ stands for the Hilbert space adjoint of $T^{\alpha}.$ *Remark 10*. It is easy to see from the definition that a *toral $m$-isometric tuple* $T=(T_1, \ldots, T_n)$ is a *separate $m$-isometric tuple*, that is, each of $T_1, \ldots, T_n$ is an $m$-isometry. Therefore one can apply [@AS1995 Lemma 1.21] to see that the $T_{j}$'s are left invertible for $1 \leqslant j \leqslant n.$ The reader is referred to [@Ag1990; @AS1995; @Athavale2001; @AS1999; @CC2012; @JJS2020; @S2001] for the basic theory of toral $m$-isometries. Let $\mathscr H$ denote a Hilbert space over the complex numbers, which is separable and has an orthonormal basis $\mathscr E = \{e_{\alpha}:\alpha \in \mathbb{Z}^n_+\}.$ Let ${\bf w} = \big\{w^{(j)}_{\alpha} : j=1, \ldots, n, ~\alpha \in \mathbb{Z}^n_+ \big\}$ be a bounded collection of complex numbers. We define a *weighted $n$-shift* $\mathscr W = (\mathscr W_1, \ldots, \mathscr W_n)$ on $\mathscr E$ as follows: for $j=1, \ldots, n$ and $\alpha \in \mathbb{Z}^n_+$, $$\mathscr W_j e_\alpha = w^{(j)}_\alpha e_{\alpha + \varepsilon_j},$$ where $\varepsilon_j$ is a vector with a $1$ in the $j$th position and zeros elsewhere. Since the weights are bounded, the operators $\mathscr W_j,$ defined linearly on $\mathscr E,$ have bounded extensions to $\mathscr H.$ Also, for any $i, j \in \{ 1, \ldots, n \},$ $\mathscr W_i$ and $\mathscr W_j$ commute if and only if $$\begin{aligned} w^{(i)}_\alpha w^{(j)}_{\alpha + \varepsilon_i}=w^{(j)}_\alpha w^{(i)}_{\alpha + \varepsilon_j}, \quad \alpha \in {\mathbb Z}^n_+. \end{aligned}$$ Let $\mathscr W$ be a commuting weighted $n$-shift.
Note that for any $\beta \in \mathbb Z^n_+,$ there exists a positive scalar $m(\beta)$ such that $$\begin{aligned} \label{W-beta} \mathscr W^{\beta} e_0= m(\beta)e_\beta. \end{aligned}$$ For more information on the basic theory of weighted multi-shifts, the reader is referred to [@AS1999; @JS2001; @JL1979]. From this point forward, we will make the following assumptions about $\mathscr W$: the weight multi-sequence ${\bf w}$ of $\mathscr W$ comprises positive numbers, $\mathscr W$ has a bounded extension to $\mathscr H$, and $\mathscr W$ is a commuting $n$-tuple. We will denote the weighted $n$-shift $\mathscr W$ with weight multi-sequence ${\bf w}$ by $\mathscr W : \{w^{(j)}_\alpha\}$. To simplify the calculations, we introduce the following notation: for $i,j \in \{0,1,2\},$ $$\begin{aligned} \rho_{ij}={\triangle_1^i\triangle_2^j(\|\mathscr W^{\alpha}e_0\|^2)|_{\alpha =0} }.\end{aligned}$$ **Proposition 11**. *For a weighted $2$-shift $\mathscr W : \{w^{(j)}_\alpha\},$ the following statements are valid$:$* 1. *$\mathscr W$ is a toral $3$-isometry if and only if for $\alpha=(\alpha_1,\alpha_2)\in \mathbb Z_+^2,$ $$\begin{aligned} \|\mathscr W^\alpha e_0\|^2 = 1+a_1\alpha_1 +a_2\,\alpha_1^2+ (b_1\alpha_1+b_2) \alpha_2+ c_1\,\alpha_2^2, \end{aligned}$$ where $a_1, a_2,b_1,b_2$ and $c_1$ are given by $$\begin{aligned} \label{coef-expre} a_1=\rho_{10}-\frac{\rho_{20}}{2},\, a_2= \frac{\rho_{20}}{2},\, b_1=\rho_{11},\, b_2 = \rho_{01}-\frac{\rho_{02}}{2},\, c_1 = \frac{\rho_{02}}{2}. \end{aligned}$$* 2.
*$\mathscr W$ is a toral $3$-isometry with $\mathscr W_2$ being a $2$-isometry if and only if for $\alpha=(\alpha_1, \alpha_2)\in \mathbb Z_+^2,$ $$\begin{aligned} \label{wt-3-iso-new} \|\mathscr W^\alpha e_0\|^2 = 1+a\,\alpha_1+b\,\alpha_1^2 + (c +d\, \alpha_1\,) \,\alpha_2,\end{aligned}$$ where $a,b,c$ and $d$ are as follows$:$ $$\begin{aligned} a = \rho_{10}-\frac{\rho_{20}}{2}, \quad b =\frac{\rho_{20}}{2}, \quad c = \rho_{01}, \quad d = \rho_{11}.\end{aligned}$$* *Proof.* This follows from [@ACN Proposition 4.6] (the case of $m = 3$). ◻ We say that a commuting $n$-tuple $T=(T_1, \ldots, T_n)$ is *jointly subnormal* if there exist a Hilbert space $K$ containing $H$ and a commuting $n$-tuple $N$ of normal operators $N_1, \ldots, N_n$ on $K$ such that $$\begin{aligned} T_j = {N_j}|_{H}, \quad j=1, \ldots, n.\end{aligned}$$ Assume that $T^*_jT_j$ is invertible for every $j=1, \ldots, n.$ Following [@CC2012; @S2001], we refer to the $n$-tuple $T^{\mathfrak{t}}:= (T^{\mathfrak{t}}_1, \ldots, T^{\mathfrak{t}}_n)$ as the *operator tuple torally Cauchy dual* to $T$, where $T^{\mathfrak{t}}_j := T_j(T^*_jT_j)^{-1}$ for $j=1, \ldots, n.$ In view of Remark [Remark 10](#Li-rem){reference-type="ref" reference="Li-rem"}, one can see that the toral Cauchy dual of a toral $m$-isometric $n$-tuple exists. Let $\mathscr W : \{w^{(j)}_\alpha\}$ represent a weighted $n$-shift, where $\mathscr W^*_j \mathscr W_j$ is invertible for each $j=1, \ldots, n$. The operator tuple $\mathscr W^{\mathfrak{t}}$ torally Cauchy dual to the weighted $n$-shift $\mathscr W$ satisfies the following relation: $$\begin{aligned} \label{dual-action} \mathscr W^{\mathfrak{t}}_j e_\alpha = \frac{1}{w^{(j)}_\alpha} \, e_{\alpha + \varepsilon_j}, \quad j=1, \ldots, n.
\end{aligned}$$ By [\[dual-action\]](#dual-action){reference-type="eqref" reference="dual-action"}, it is easy to see that $$\begin{aligned} \label{reciprocal} \|(\mathscr W^{\mathfrak{t}})^\alpha e_0\|^2 = \frac{1}{\|\mathscr W^{\alpha} e_0\|^2}, \quad \alpha \in \mathbb Z^n_+. \end{aligned}$$ **Proposition 12**. *Let $\mathscr W$ be a torally expansive weighted $n$-shift and let $\mathscr W^{\mathfrak{t}}$ be its toral Cauchy dual. Then $\mathscr W^{\mathfrak{t}}$ is jointly subnormal if and only if the net $\left\{ \frac{1}{ \|\mathscr W^{\alpha} e_0\|^2}\right\}_{ \alpha \in \mathbb Z^n_+}$ is joint completely monotone.* *Proof.* Recall that for a torally contractive weighted $n$-shift $U$, the following holds$:$ $$\begin{aligned} \label{subnor-if-r} \begin{minipage}{67ex} \text{$ U$ is jointly subnormal if and only if $\{\| U^\alpha e_0\|^2\}_{\alpha \in \mathbb Z^n_+}$ is a joint} \\ \text{completely monotone net.} \end{minipage} \end{aligned}$$ This fact can be deduced from [@Athavale1987 Theorem 4.4], together with [\[W-beta\]](#W-beta){reference-type="eqref" reference="W-beta"} (see also the discussion prior to [@Athavale2001 Eqn (E)]). By [\[dual-action\]](#dual-action){reference-type="eqref" reference="dual-action"} and the discussion following it, $\mathscr W^{\mathfrak t}$ is a commuting weighted $n$-shift. Since $\mathscr W$ is torally expansive, a routine calculation shows that $\mathscr W^{\mathfrak t}$ is torally contractive. This, combined with [\[reciprocal\]](#reciprocal){reference-type="eqref" reference="reciprocal"} and [\[subnor-if-r\]](#subnor-if-r){reference-type="eqref" reference="subnor-if-r"}, completes the proof of the proposition. ◻ In what follows, we will assume that $$\begin{aligned} \label{assumption-1} \rho_{10}>\frac{\rho_{20}}{2}, \quad \rho_{11} >0, \quad \rho_{01}>\frac{\rho_{02}}{2}\quad \text{and}\quad \Big(\rho_{10}-\frac{\rho_{20}}{2}\Big)^2\geqslant 2\rho_{20}.\end{aligned}$$ *Remark 13*.
(i) By [\[coef-expre\]](#coef-expre){reference-type="eqref" reference="coef-expre"} and [\[assumption-1\]](#assumption-1){reference-type="eqref" reference="assumption-1"}, we have $a_1 > 0, b_1 > 0, b_2 > 0$ and $a_1^2\geqslant 4a_2.$ \(ii\) The condition $a_1^2\geqslant 4a_2$ is necessary for the joint subnormality of the toral Cauchy dual of a torally expansive toral $3$-isometric weighted $2$-shift. This follows from the fact that the joint complete monotonicity of $$\left\{\frac{1}{1+a_1\alpha_1 +a_2\,\alpha_1^2+ (b_1\alpha_1+b_2) \alpha_2+ c_1\,\alpha_2^2}\right\}_{\alpha_1,\alpha_2\in \mathbb{Z}_+}$$ implies the complete monotonicity of the sequence $\left\{\frac{1}{1+a_1\alpha_1+a_2\alpha_1^2}\right\}_{\alpha_1\in \mathbb{Z}_+}$ (see Lemma [Lemma 5](#freq){reference-type="ref" reference="freq"}). $\diamondsuit$ We now present a solution to the Cauchy dual subnormality problem (CDSP) for torally expansive toral $3$-isometric weighted $2$-shifts under the additional assumption [\[assumption-1\]](#assumption-1){reference-type="eqref" reference="assumption-1"}. **Theorem 14**. *Let $\mathscr W : \{w^{(j)}_\alpha\}$ be a torally expansive toral $3$-isometric weighted $2$-shift and let $\mathscr W^{\mathfrak{t}}$ be the operator tuple torally Cauchy dual to $\mathscr W$. If [\[assumption-1\]](#assumption-1){reference-type="eqref" reference="assumption-1"} holds, then the following statements hold$:$* - *Assume that $\mathscr W_2$ is a 2-isometry but $\mathscr W_1$ is not a 2-isometry. Then the operator tuple $\mathscr W^{\mathfrak{t}}$ is jointly subnormal if and only if $$\begin{aligned} \label{big-eqn-2-new}\notag \left(\frac{2\rho_{10}-\rho_{20}}{{\rho_{20}}} - \frac{\rho_{01}}{\rho_{11}}\right)^2 \leqslant\left(\frac{2\rho_{10}-{\rho_{20}}}{{\rho_{20}}}\right)^2-\frac{8}{\rho_{20}}.\end{aligned}$$* - *Assume that neither $\mathscr W_1$ nor $\mathscr W_2$ is a $2$-isometry.
Then the operator tuple $\mathscr W^{\mathfrak{t}}$ is jointly subnormal if and only if ${{\rho_{20}}}{\rho_{02}}\leqslant{\rho^2_{11}}$ and $$\begin{aligned} &&\frac{\rho_{02}}{2}((\rho_{10}-\frac{\rho_{20}}{2})^2-2\rho_{20})+\frac{\rho_{20}}{2}(\rho_{01}-\frac{\rho_{02}}{2})^2 \\ &\leqslant& \rho_{11}\Big(\rho_{10}-\frac{\rho_{20}}{2}\Big)\Big(\rho_{01}-\frac{\rho_{02}}{2}\Big)-\rho_{11}^2. \end{aligned}$$* *Proof.* Note that by Proposition [Proposition 11](#3-iso-wt-char){reference-type="ref" reference="3-iso-wt-char"}(i), for $\alpha=(\alpha_1, \alpha_2) \in \mathbb Z^2_+,$ $$\begin{aligned} \label{pos-1-coef} \|(\mathscr W)^\alpha e_0\|^2=1+a_1\alpha_1+a_2\alpha_1^2+(b_1\alpha_1+b_2)\alpha_2+c_1\alpha_2^2,\end{aligned}$$ where $a_1,a_2,b_1,b_2$ and $c_1$ are given by $$\begin{aligned} \label{coef-rho} a_1=\rho_{10}-\frac{\rho_{20}}{2}, \quad a_2= \frac{\rho_{20}}{2}, \quad b_1=\rho_{11}, \quad b_2 = \rho_{01}-\frac{\rho_{02}}{2}, \quad c_1 = \frac{\rho_{02}}{2}. \end{aligned}$$ By Remark [Remark 13](#rem-assump){reference-type="ref" reference="rem-assump"}(i), $a_1>0,\, b_1>0,\, b_2 > 0$ and $a_1^2 \geqslant 4a_2.$ It is clear from [\[pos-1-coef\]](#pos-1-coef){reference-type="eqref" reference="pos-1-coef"} that $a_2\geqslant 0$ and $c_1\geqslant 0.$ \(a\) Since $\mathscr W_2$ is a $2$-isometry and $\mathscr W_1$ is not a $2$-isometry, $c_1=0$ and $a_2>0$. By [\[reciprocal\]](#reciprocal){reference-type="eqref" reference="reciprocal"}, for $\alpha= (\alpha_1, \alpha_2) \in \mathbb{Z}_+^2,$ $$\begin{aligned} \|(\mathscr W^{\mathfrak{t}})^\alpha e_0\|^2 = \frac{1}{1+a_1\alpha_1+a_2\alpha_1^2+(b_1\alpha_1+b_2)\alpha_2}.\end{aligned}$$ By Proposition [Proposition 12](#subnormal-reciprocal){reference-type="ref" reference="subnormal-reciprocal"}, $\mathscr W^{\mathfrak{t}}$ is jointly subnormal if and only if $$\left\{\frac{1}{1+a_1\alpha_1+a_2\alpha_1^2+(b_1\alpha_1+b_2)\alpha_2}\right\}_{\alpha_1,\alpha_2 \in \mathbb Z_+}$$ is a joint completely monotone net.
Since $a_1^2 \geqslant 4a_2,$ the polynomial $p(x,y)=1+a_1x+a_2x^2+(b_1x+b_2)y$ factors as $$p(x,y)=a_2\Big(x+\frac{a_1-\sqrt{a_1^2-4a_2}}{2a_2}\Big)\Big(x+\frac{a_1+\sqrt{a_1^2-4a_2}}{2a_2}\Big)+b_1\Big(x+\frac{b_2}{b_1}\Big)y.$$ Routine calculations using Theorem [Theorem 2](#bi-deg-2-1-second){reference-type="ref" reference="bi-deg-2-1-second"} and [\[coef-rho\]](#coef-rho){reference-type="eqref" reference="coef-rho"} yield the desired result. \(b\) Since neither $\mathscr W_1$ nor $\mathscr W_2$ is a $2$-isometry, $a_2$ and $c_1$ are non-zero. Hence $a_2>0$ and $c_1>0$. By [\[reciprocal\]](#reciprocal){reference-type="eqref" reference="reciprocal"}, for $\alpha= (\alpha_1, \alpha_2) \in \mathbb{Z}_+^2,$ $$\begin{aligned} \|(\mathscr W^{\mathfrak{t}})^\alpha e_0\|^2 = \frac{1}{1+a_1\alpha_1+a_2\alpha_1^2+(b_1\alpha_1+b_2)\alpha_2+c_1\alpha_2^2}.\end{aligned}$$ By Proposition [Proposition 12](#subnormal-reciprocal){reference-type="ref" reference="subnormal-reciprocal"}, $\mathscr W^{\mathfrak{t}}$ is jointly subnormal if and only if the net $$\left\{\frac{1}{1+a_1\alpha_1+a_2\alpha_1^2+(b_1\alpha_1+b_2)\alpha_2+c_1\alpha_2^2}\right\}_{\alpha_1,\alpha_2 \in \mathbb Z_+}$$ is joint completely monotone, or equivalently, $$\begin{aligned} \label{mod-JCM} \left\{\frac{1}{1/c_1+(a_1/c_1)\alpha_1+(a_2/c_1)\alpha_1^2+((b_1\alpha_1+b_2)/c_1)\alpha_2+\alpha_2^2}\right\}_{\alpha_1,\alpha_2 \in \mathbb Z_+}\end{aligned}$$ is a joint completely monotone net. Note that the polynomial $$p(x,y)=\frac{1}{c_1}+\frac{a_1}{c_1}x+\frac{a_2}{c_1}x^2+\Big(\frac{b_1}{c_1}x+\frac{b_2}{c_1}\Big)y+y^2$$ maps $\mathbb{R}_+^2$ to $(0,\infty).$ Since $a_1^2 \geqslant 4a_2,$ we have $$p(x,y)=\frac{a_2}{c_1}\Big(x+\frac{a_1-\sqrt{a_1^2-4a_2}}{2a_2}\Big)\Big(x+\frac{a_1+\sqrt{a_1^2-4a_2}}{2a_2}\Big)+\frac{b_1}{c_1}\Big(x+\frac{b_2}{b_1}\Big)y+y^2.$$ Routine calculations using Theorem [Theorem 3](#MT){reference-type="ref" reference="MT"} and [\[coef-rho\]](#coef-rho){reference-type="eqref" reference="coef-rho"} yield the desired result.
◻ It is not clear whether the conclusion of Theorem [Theorem 14](#CDSP-T3I){reference-type="ref" reference="CDSP-T3I"} holds if we relax the assumption [\[assumption-1\]](#assumption-1){reference-type="eqref" reference="assumption-1"}.

*Declaration of competing interest:* None.

*Acknowledgements.* I would like to express my sincere gratitude to Prof. Akash Anand and Prof. Sameer Chavan for their invaluable guidance and support throughout the preparation of this paper.

J. Agler, A disconjugacy theorem for Toeplitz operators, *Amer. J. Math.* **112** (1990), 1--14.

J. Agler, M. Stankus, $m$-isometric transformations of Hilbert space. I, *Integral Equations Operator Theory* **21** (1995), 383--429.

A. Anand, S. Chavan, A moment problem and joint $q$-isometry tuples, *Complex Anal. Oper. Theory* **11** (2017), 785--810.

A. Anand, S. Chavan, Z. J. Jabłoński, J. Stochel, A solution to the Cauchy dual subnormality problem for 2-isometries, *J. Funct. Anal.* **277** (2019), 108292, 51 pp.

A. Anand, S. Chavan, R. Nailwal, Joint complete monotonicity of rational functions in two variables and toral $m$-isometric pairs, *J. Operator Theory* (2023).

A. Athavale, Holomorphic kernels and commuting operators, *Trans. Amer. Math. Soc.* **304** (1987), 101--110.

A. Athavale, Alternatingly hyperexpansive operator tuples, *Positivity* **5** (2001), 259--273.

A. Athavale, V. Sholapurkar, Completely hyperexpansive operator tuples, *Positivity* **3** (1999), 245--257.

C. Badea, L. Suciu, The Cauchy dual and 2-isometric liftings of concave operators, *J. Math. Anal. Appl.* **472** (2019), 1458--1474.

K. Ball, Completely monotonic rational functions and Hall's marriage theorem, *J. Comb. Theory* **61** (1994), 118--124.

C. Berg, J. P. R. Christensen, P. Ressel, *Harmonic analysis on semigroups. Theory of positive definite and related functions.* Graduate Texts in Mathematics, 100. Springer-Verlag, New York, 1984. x+289 pp.

C. Berg, A. Durán, Some transformations of Hausdorff moment sequences and harmonic numbers, *Canad. J. Math.* **57** (2005), 941--960.

S. Chavan, S. Ghara, Md. R. Reza, The Cauchy dual subnormality problem via de Branges-Rovnyak spaces, *Studia Math.* **265** (2022), 315--341.

S. Chavan, R. Curto, Operators Cauchy dual to $2$-hyperexpansive operators: the multivariable case, *Integral Equ. Oper. Theory* **73** (2012), 481--516.

Z. Jabloński, I. B. Jung, J. Stochel, $m$-isometric operators and their local properties, *Linear Algebra Appl.* **596** (2020), 49--70.

Z. Jabloński, J. Stochel, Unbounded $2$-hyperexpansive operators, *Proc. Edinb. Math. Soc.* **44** (2001), 613--629.

N. P. Jewell, A. R. Lubin, Commuting weighted shifts and analytic function theory in several variables, *J. Operator Theory* **1** (1979), 207--223.

Md. R. Reza, G. Zhang, Hausdorff moment sequences induced by rational functions, *Complex Anal. Oper. Theory* **13** (2019), 4117--4142.

S. Richter, Invariant subspaces of the Dirichlet shift, *J. Reine Angew. Math.* **386** (1988), 205--220.

S. Shimorin, Wold-type decompositions and wandering subspaces for operators close to isometries, *J. Reine Angew. Math.* **531** (2001), 147--189.
--- abstract: | Let $G$ be an undirected simple connected graph. We say a vertex $u$ is eccentric to a vertex $v$ in $G$ if $d(u,v)=\max\{d(v,w): w\in V(G)\}$. The eccentric graph $E(G)$ of $G$ is a graph defined on the same vertex set as $G$, where two vertices are adjacent if one is eccentric to the other. We find the structure and the girth of the eccentric graph of trees and see that the girth of the eccentric graph of a tree can either be zero, three, or four. Further, we study the structure of the eccentric graph of the Cartesian product of graphs and prove that the girth of the eccentric graph of the Cartesian product of trees can only be zero, three, four or six. Furthermore, we provide a comprehensive classification of when the eccentric girth assumes these values. We also give the structure of the eccentric graph of the grid graphs and the Cartesian product of cycles. Finally, we determine the conditions under which the eccentricity matrix of the Cartesian product of trees becomes invertible. author: - "Anita Arora[^1]" - "Rajiv Mishra[^2]" bibliography: - ecc.bib title: Eccentric graph of trees and their Cartesian products --- Eccentric graph; Eccentric girth; Cartesian product; Trees. 05C05; 05C12; 05C50; 05C75 # Introduction {#sec: basicEG} Let $G$ be a simple undirected graph on $n$ vertices with $m$ edges, and let $V(G)$ denote the set of vertices of $G$. If two vertices $v,w\in V(G)$ are adjacent, we will write $v\sim_{G} w$. The *neighbourhood* of a vertex $v$ in $G$ is defined as $N_G(v)=\{w:v\sim_{G} w\}$. If the graph $G$ is connected, the *distance* $d_G(v, w)$ between two vertices $v$ and $w$ is the length of a shortest path in $G$ connecting them. The *distance matrix* of a connected graph $G$, denoted as $D(G)$, is the $n\times n$ matrix indexed by $V(G)$ whose $(v,w)$th entry is equal to $d_G(v, w)$. We will only consider simple, undirected graphs on *at least* two vertices in this paper.
The *eccentricity*, $e_G(v)$, of a vertex $v\in V(G)$ is defined as $$e_G(v)=\max\{d(u,v): u\in V(G)\};$$ we will use $e(v)$ instead of $e_G(v)$ whenever there is no confusion about the underlying graph. If $d(u,v)=e(v)$, then we will say $u$ is *eccentric* to $v$, and a shortest path between $u$ and $v$ is called an *eccentric path* (starting from $v$). The *diameter* of $G$, $diam(G)$, is the maximum of the eccentricities of the vertices in $G$. A *diametrical path* is a longest path among all eccentric paths in the graph $G$. The *eccentricity matrix* of a connected graph $G$, denoted by $\mathcal{E}_G$, is constructed from the distance matrix $D(G)$ by retaining the largest distances in each row and each column, while the other elements of the distance matrix are set to zero. In other words, $$(\mathcal{E}_G)_{i\,j}=\begin{cases} d(u_i,u_j) & \text{ if } d(u_i,u_j)=\min\{e(u_i),e(u_j)\},\\ 0 & \text{ otherwise.} \end{cases}$$ **Definition 1**. *The *eccentric graph* $E(G)$ of a connected graph $G$ is the simple graph with the same vertex set as that of $G$, where $(v,u)$ is an edge in $E(G)$ if either $v$ is eccentric to $u$ or $u$ is eccentric to $v$. If $u$ is adjacent to $v$ in $E(G)$, we will write $u\sim_{\scriptscriptstyle E(G)} v$.* Note that the adjacency matrix of the eccentric graph $E(G)$ is obtained by replacing the non-zero entries in the eccentricity matrix $\mathcal{E}_G$ by 1. Recall that the girth of a graph $G$ is the length of the shortest cycle present in $G$. If a graph $G$ has no cycles, we will say that $G$ has girth 0. We will call the girth of the eccentric graph the *eccentric girth*. Girth is the dual concept to edge connectivity, in the sense that the girth of a planar graph is the edge connectivity of its dual graph, and vice versa. Calculating the girth of a graph is an important task in graph theory, as it helps us understand the graph's structure and properties.
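Since these notions are entirely computational, it may help to see them in code. The sketch below (a brute-force illustration, not taken from the paper; graphs are plain adjacency dictionaries, and the function names are ours) builds the edge set of $E(G)$ from BFS distances exactly as in Definition 1:

```python
from collections import deque
from itertools import combinations

def bfs_distances(adj, s):
    """Distances from s in an unweighted graph given as {vertex: set of neighbours}."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def eccentric_graph(adj):
    """Edge set of E(G): {u, v} is an edge iff u is eccentric to v or v is eccentric to u."""
    dist = {v: bfs_distances(adj, v) for v in adj}
    ecc = {v: max(dist[v].values()) for v in adj}  # e(v) = largest distance from v
    return {frozenset((u, v)) for u, v in combinations(adj, 2)
            if dist[u][v] == ecc[u] or dist[u][v] == ecc[v]}

# The path P_4 with vertices 1-2-3-4: E(P_4) is the double star S_{1,1},
# here realised as the path 3-1-4-2.
P4 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
assert eccentric_graph(P4) == {frozenset(e) for e in [(1, 3), (1, 4), (2, 4)]}
```

The $P_4$ output agrees with the closed-form description of $E(P_n)$ for even $n>3$ recalled below.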
The notion of the eccentricity matrix was first introduced by Randić as the $D_{max}$-matrix in 2013 [@Dmax2013]; subsequently, Wang et al. renamed it the eccentricity matrix in 2018 [@EccMatrix2018]. The eccentricity matrix of a graph is also called the anti-adjacency matrix, in the following sense. The eccentricity matrix is obtained from the distance matrix by preserving only the largest distances in each row and column; on the other hand, the adjacency matrix is obtained from the distance matrix by preserving only the smallest non-zero distances in each row and column. Unlike the adjacency matrix and the distance matrix, the eccentricity matrix of a connected graph need not be irreducible. The eccentricity matrix of a complete bipartite graph is reducible, while the eccentricity matrix of a tree is irreducible [@EccMatrix2018; @Mahato2020]. Spectra of the eccentricity matrix for some graphs are studied by Mahato et al. [@Mahato2020] and Wang et al. [@EccMatrix2018]; lower and upper bounds for the $\mathcal{E}$-spectral radius of graphs are also discussed in [@EccMatrix2018]. J. Wang et al. studied the non-isomorphic co-spectral graphs with respect to the eccentricity matrix [@wang2022spectral]. The eccentricity matrix has interesting applications; its main application is in the field of chemical graph theory [@Dmax2013; @DmaxApplication2013]. A necessary and sufficient condition for $E(G)$ to be isomorphic to $G$ or the complement of $G$ is given by Akiyama et al. [@EccGraph1985]. Kaspar et al. gave the complete structure of the eccentric graph for some well-known graphs like paths and cycles [@PathCycleEG2018]. A *star graph* $S_{n}$ on $(n+1)$ vertices is a graph with $n$ vertices of degree 1 and one vertex, called the center, of degree $n$. A *double star* $S_{s,t}$ is a graph obtained by adding an edge between the center vertices of two stars, $S_{s}$ and $S_{t}$.
Let $P_n$ denote the *path* graph on $n$ vertices with the natural labelling $1,2,\ldots,n$. Then, $$E(P_n)= \begin{cases} K_n,\text{ the complete graph }& \text{ if }n\leq 3,\\ S_{\frac{n-2}{2},\frac{n-2}{2}},\text{ a double star }& \text{ if }n>3 \text{ and is even},\\ H_{\frac{n-3}{2}}& \text{ if }n>3 \text{ and is odd}, \end{cases}$$ where $K_t$ denotes the complete graph on $t$ vertices and $H_t$ is a graph obtained by adding $t$ pendant vertices to each of any two of the vertices of a triangle (see ). Let $C_n$ denote the cycle graph on $n$ vertices, with the vertices labelled $1,2,\ldots,n$. Then, $$\label{eqn:eccGraphOfCycle} E(C_n)= \begin{cases} \frac{n}{2}K_2 &\text{ if } n \text{ is even,}\\ C_n &\text{ if } n \text{ is odd.} \end{cases}$$ Also, $E(K_n)=K_n$ and $E(K_{s,t})=K_s\cup K_t$ for $s,t>1$ [@PathCycleEG2018]. Throughout the paper, we will use the notation $P_n$ and $C_n$ to denote the path graph and the cycle graph on $n$ vertices. Numerous interesting properties of the eccentricity matrix of a tree have been established so far. For instance, Mahato showed that the eccentricity matrix of a tree is invertible only if the tree is a star [@Mahato-2023]. Additionally, the diameter of the tree is odd if and only if the eigenvalues of its eccentricity matrix are symmetric about the origin [@MahatoSymmetry2022]. In , we will give a complete structure of the eccentric graph of a general tree and point out one more structural property in . In , we will prove that the eccentric girth of a tree can either be zero, three or four. In , we will present some structural properties of the eccentric graph of the Cartesian product of graphs and classify all the possible values of the eccentric girth of the Cartesian product of trees.
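The closed-form description of $E(C_n)$ above is easy to verify by brute force. The following sketch (an illustration, not part of the paper; vertices of $C_n$ are labelled $0,1,\dots,n-1$ rather than $1,\dots,n$) uses the cycle distance $d(i,j)=\min\{|i-j|,\, n-|i-j|\}$ and the fact that every vertex of $C_n$ has eccentricity $\lfloor n/2\rfloor$:

```python
def eccentric_graph_cycle(n):
    """Edge set of E(C_n), computed directly from cycle distances."""
    d = lambda i, j: min(abs(i - j), n - abs(i - j))
    ecc = n // 2  # every vertex of C_n has eccentricity floor(n/2)
    return {frozenset((i, j)) for i in range(n) for j in range(i + 1, n)
            if d(i, j) == ecc}

# Even n: each vertex is eccentric only to its antipode, so E(C_n) is a
# perfect matching, i.e. n/2 copies of K_2.
assert eccentric_graph_cycle(6) == {frozenset((i, i + 3)) for i in range(3)}

# Odd n: each vertex has exactly two eccentric vertices, so E(C_n) is
# 2-regular; stepping by floor(n/2) traverses a single n-cycle, i.e. C_n.
assert all(sum(v in e for e in eccentric_graph_cycle(7)) == 2 for v in range(7))
```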
Lastly, in [5](#sec:Invertibilty of EG){reference-type="ref" reference="sec:Invertibilty of EG"}, generalising the result of Mahato [@Mahato-2023 Theorem 2.1], we will analyze and classify the conditions under which the eccentricity matrix of the Cartesian product of trees becomes invertible. # Structure of eccentric graph of a tree {#sec:Structure of $E(T)$} In this section, we will focus on the structure of the eccentric graph of a tree. Recall that a *tree* is a connected graph with no cycles, and the *degree* of a vertex $v$ in a simple graph $G$ is the number of vertices adjacent to it. A vertex of degree $1$ is called a *leaf* or a *pendant* vertex. The *union* of two graphs $G_1$ and $G_2$ is the simple graph whose vertex set and edge set are formed by taking the union of the vertex sets of $G_1$ and $G_2$ and the edge sets of $G_1$ and $G_2$, respectively. **Definition 2**. *Let $T$ be a tree and $v$ be a leaf vertex in $T$. We define the path from $v$ to the nearest vertex of degree greater than two as the *stem* at $v$.* *Note that a path graph $P_n$ has no stems.* **Definition 3**. *Let $P$ be a diametrical path in a tree $T$. We define the *tree induced from the path $P$* as the subtree of $T$ obtained by removing stems at those leaves (except endpoints of $P$) which are an endpoint of some diametrical path other than $P$.* Consider the tree $T$ shown in . $T$ has three diametrical paths, and the subtrees induced by these are shown in . Note that the structure of the eccentric graph of a subtree induced from a diametrical path in $T$ depends on the diameter of $T$. In the case of an even diameter, it looks as shown on the left of , and in the case of an odd diameter, it looks as shown on the right of . For example, the eccentric graphs of the subtrees in have been shown in . The following result shows that the graphs shown in are the building blocks for the eccentric graph of a tree. **Theorem 1**.
*Let $Q_1,\cdots,Q_k$ be the diametrical paths in $T$ with starting points $v_0^1,\dots,v_0^k$ and ending points $v_n^1,\dots,v_n^k$, respectively. Let $T_1,\dots,T_k$ be the trees induced from $Q_1,\dots,Q_k$, respectively. Then, $E(T)=\cup_{i=1}^k E(T_i)$.* *Proof.* It is clear that each vertex of $T$ lies in at least one tree induced from a diametrical path. For $i\in [k]$, let $e$ be an edge in the eccentric graph $E(T_i)$. As $Q_i$ is the unique diametrical path in $T_i$, it follows that one of the endpoints of $e$ is either $v_0^i$ or $v_n^i$; assume $e=(v,v_n^i)$. Thus, $e_{T_i}(v)=d_{T_i}(v,v_n^i)=d_{T}(v,v_n^i)=e_{T}(v)$. Hence, $E(T_i)$ is a subgraph of $E(T)$. Now, let $v\sim_{E(T)} w$; if both $v$ and $w$ lie on the same diametrical path, then we are done. Otherwise, it is enough to show that $v$ and $w$ both lie on the same tree $T_s$ for some $s\in [k]$. Let $v$ be a vertex of $T_j$, the tree induced from the path $Q_j$, for some $j\in [k]$. If $w$ is not a vertex of $T_j$, then $w$ lies on a stem at some leaf $z$ in $T$. In that case, either the path joining $v_0^j$ to $z$ or the path joining $v_n^j$ to $z$ is a diametrical path. Consequently, either the vertex $v$ lies on the tree induced by this diametrical path, or both the vertices $v$ and $w$ lie on another diametrical path. In both cases, we get the adjacency relation between $v$ and $w$ in the eccentric graph of a tree induced by some diametrical path. ◻ The following example illustrates . **Example 2**. *Let $T$ be the tree shown in . The eccentric graph of $T$ (see ) is the union of the eccentric graphs (shown in ) of the subtrees (shown in ) induced from the three diametrical paths of $T$.* In the remaining part of this section, we will highlight more structural information about the eccentric graph of a tree. **Proposition 1**. *Let $T$ be a tree.
There does not exist $v_1,v_2,v_3\in V(T)$ such that $v_1\sim_{E(T)} v_2$, $v_2\sim_{E(T)} v_3$ and $e_T(v_1)<e_T(v_2)<e_T(v_3)$.* **Proof:** On the contrary, assume such $v_1,v_2,v_3\in V(T)$ exist. Then $d_T(v_1,v_2)=e_T(v_1)$, $d_T(v_2,v_3)=e_T(v_2)$, and $v_2$ and $v_3$ are pendant vertices in $T$. Let $P$ be the path between $v_1$ and $v_2$ in $T$, and let $v_k$ be the middle vertex on $P$ if the length of $P$ is even, and the right-middle vertex on $P$ otherwise. Now, as $v_2$ and $v_3$ are pendants, $v_3$ cannot lie on the path $P$; rather, $v_3$ lies on some branch emerging from a vertex $w$ on the path $P$ other than $v_2$. Two cases arise depending on whether $w$ is positioned to the right of $v_k$ or lies strictly to the left of $v_k$. If $w$ lies to the right of $v_k$, as shown on the left of , then $d_T(w, v_2)\leq d_T(w,v_1)$. As $d_T(v_2,v_1)=e_T(v_1)<e_T(v_2)=d_T(v_2,v_3)$, we get $d_T(w,v_1)<d_T(w,v_3)$. Thus, $$\begin{aligned} e_T(v_1)~=~d_T(v_1,v_2)&=d_T(v_1,w)+d_T(w,v_2),\\ &\leq d_T(v_1,w)+d_T(v_1,w),\\ &< d_T(v_1,w)+d_T(w,v_3),\\ &= d_T(v_1,v_3),\end{aligned}$$ which is a contradiction. Now suppose $w$ lies strictly to the left of $v_k$, as shown on the right of . Since $e_T(v_3)>e_T(v_2)$, there exists a vertex $w_1\in V(T)$ such that $e_T(v_3)=d_T(v_3,w_1)$. In particular, $$\label{eqn:case2} d_T(v_3,v_2)<d_T(v_3,w_1).$$ Note that $w_1$ must lie on some branch emerging from a vertex on $P$, else the eccentricity of $v_2$ would increase. This leads to the following two subcases (see ): First, suppose $w_1$ lies on a branch emerging from a vertex $w_2$ situated to the left of $w$. By [\[eqn:case2\]](#eqn:case2){reference-type="eqref" reference="eqn:case2"}, $d_T(w,v_2)<d_T(w,w_1)$, and $d_T(w,v_2)\geq d_T(w,v_3)$; otherwise $d_T(v_1,v_2)=e_T(v_1)<d_T(v_1, v_3)$. Thus $$\begin{aligned} d_T(v_2,w_1)&=d_T(w,v_2)+d_T(w,w_1)\\ &> d_T(w, v_3)+d_T(w,v_2),\\ &= d_T(v_2,v_3),\\ &=e_T(v_2),\end{aligned}$$ which is a contradiction.
Second, suppose $w_2$ is to the right of $w$, as shown on the right of . Again by [\[eqn:case2\]](#eqn:case2){reference-type="eqref" reference="eqn:case2"}, $d_T(w_2,v_2)<d_T(w_2,w_1)$. Hence, $e_T(v_1)=d_T(v_1,v_2)<d_T(v_1,w_1)$, which is absurd. ◻ *The essence of Proposition [Proposition 1](#prop:Ecc_is_smallest_or_largest_in_tree){reference-type="ref" reference="prop:Ecc_is_smallest_or_largest_in_tree"} can be summarized as follows: the eccentricity of a vertex $v\in V(T)$ is either the smallest or the largest among the eccentricities of its neighbours in the eccentric graph of $T$.* # Eccentric girth of a tree {#sec:EccGir of a tree} In this section, we will determine the eccentric girth of a tree and its potential values. In addition, we will classify the instances in which these possible values of the eccentric girth can be achieved. It is well-known that two paths of maximum length in a connected graph must pass through a common vertex. Thus, it is evident that two diametrical paths in a tree must intersect. However, this is not true in general: the graph in has two diametrical paths (dashed), but they do not intersect. Now, we will present the main result of this section, which classifies the eccentric girth of a tree. **Theorem 3**. *Let $T$ be a tree. Then the girth of the eccentric graph $E(T)$ is either zero, three, or four. Moreover, $$\text{the girth of } E(T)= \begin{cases} 3&\text{ if the diameter of $T$ is even,}\\ 0&\text{ if the diameter of $T$ is odd with unique diametrical path,}\\ 4& \text{ otherwise}. \end{cases}$$* **Proof:** The proof is divided into the following cases depending on the parity of the diameter of $T$. First, suppose the diameter of $T$ is even, and let $P= v_0\,v_1\ldots v_k\,v_{k+1}\ldots v_{2k}$ be a diametrical path. Note that $e(v_0)=2k=e(v_{2k})$ and $d(v_0,v_{2k})=2k$, therefore $v_0\sim_{E(T)} v_{2k}$. If $e(v_k)>k$, then one of $e(v_0)$ or $e(v_{2k})$ will be greater than $2k$, which is not possible.
Also, $d(v_0,v_k)=k=d(v_k,v_{2k})$, therefore $e(v_k)=k$ and $v_k\sim_{E(T)} v_0$, $v_k\sim_{E(T)} v_{2k}$. Thus, $v_0,v_k,$ and $v_{2k}$ form a triangle in $E(T)$. Second, suppose the diameter of $T$ is odd and $P= v_0\,v_1\ldots v_k\,v_{k+1}\ldots v_{2k+1}$ is the unique diametrical path in $T$. It is sufficient to show that for any vertex $i\in V(T)$ exactly one of $v_0$ or $v_{2k+1}$ is eccentric to $i$ and no other vertex is eccentric to $i$. Note that, in a tree, if a vertex $j$ is eccentric to some vertex then $j$ must be a pendant vertex. Let $i\in V(P)$ and suppose, if possible, that there exists a vertex $j\in V(T)$ other than $v_0$ and $v_{2k+1}$ which is eccentric to $i$, that is, $d(i, j)=e(i)$. Then $j$ is a leaf of a branch emerging from some vertex $p\in V(P)$. Assume $p$ is to the left of $i$ on $P$; then $d(i,j)\geq d(i, v_0)$, which implies $d(v_{2k+1},j)=d(v_{2k+1},i)+d(i, j)\geq d(v_{2k+1},i)+d(i, v_0)=2k+1$, contradicting the fact that $P$ is the only diametrical path. A similar argument can be given when $p$ is on the right of $i$. Now suppose $i\in V(T)\setminus V(P)$ lies on some branch emerging from a vertex $i'\in V(P)$. Again, suppose there exists $j\in V(T)$ other than $v_0$ and $v_{2k+1}$ which is eccentric to $i$. Note that $j$ cannot lie on the same branch; otherwise the eccentricity of one of $v_0$ or $v_{2k+1}$ would increase. Thus, $j$ must be eccentric to $i'$, which cannot happen, as proved in the preceding paragraph. Moreover, because the diameter is odd, exactly one of $v_0$ or $v_{2k+1}$ can be eccentric to $i$. For illustration, $E(T)$ in this scenario is shown in . Third, suppose the diameter of $T$ is odd and let $P= v_0\,v_1\ldots v_k\,v_{k+1}\ldots v_{2k+1}$, $P^{\prime}= w_0\,w_1\ldots w_k\allowbreak \,w_{k+1}\ldots w_{2k+1}$ be two diametrical paths in $T$. As mentioned at the start of , they must intersect. Therefore, it is reasonable to assume that $P$ and $P'$ have one common endpoint, say $v_0=w_0$; otherwise we can create two such diametrical paths.
Hence, $(v_0,v_{2k+1},v_1,w_{2k+1})$ forms a 4-cycle in $E(T)$. Now, suppose there is a triangle $(z_1, z_2, z_3)$ in $E(T)$ with $e(z_1)\leq e(z_2)\leq e(z_3)$. Without loss of generality, assume $z_1$ is a vertex on some branch emerging from $w_p$, $1\leq p\leq k$ (note that $z_1$ can be $w_0$). If $z$ is any vertex eccentric to $z_1$, then $z$ must be a vertex on some branch emerging from $w_i$ for some $k+1\leq i\leq 2k$; if not, then $d(z, w_{2k+1})>2k+1$, the diameter, which is impossible. Now $z_2$, being eccentric to $z_1$, must lie on some branch emerging from $w_q$, $k+1\leq q \leq 2k$ (note that $z_2$ can be $w_{2k+1}$). Again, as $z_3$ is eccentric to $z_2$, $z_3$ is a vertex on some branch emerging from $w_r$, $1\leq r \leq k$; but then $z_3$ cannot be eccentric to $z_1$. Hence $E(T)$ cannot have a triangle. ◻ # Eccentric graph of the Cartesian product of graphs {#sec:EccGofCProd} In this section, we will examine some properties of the eccentric graph of the Cartesian product of general graphs and calculate the girth of the Cartesian product of trees in . We begin by recalling the definition of the *Cartesian product* and the *Kronecker product* of two graphs. **Definition 4**. *Let $G_1$ and $G_2$ be two simple connected graphs. The *Cartesian product* of $G_1$ and $G_2$, denoted as $G_1\square G_2$, is a graph with vertex set $V(G_1)\times V(G_2)$, and two vertices $(u_1,u_2)$ and $(v_1,v_2)$ are adjacent if and only if either $u_1=v_1$ and $u_2 \sim_{G_2} v_2$ or $u_1\sim_{G_1} v_1$ and $u_2=v_2$.* The following equations follow directly from . $$\label{eqn:dis is sum of dis} d_{G_1\square G_2}\big((u_1,u_2),(v_1,v_2)\big)=d_{G_1}(u_1,v_1)+d_{G_2}(u_2,v_2),$$ and $$\label{eqn:ecc is sum of ecc} e_{G_1\square G_2}\big((u_1,u_2)\big)=e_{G_1}(u_1)+e_{G_2}(u_2).$$ The above definition and the equations can be generalised to the Cartesian product of $k$ graphs $G_1,\dots, G_k$, denoted as $G_1\square \cdots\square G_k$. **Definition 5**.
*Let $G_1$ and $G_2$ be two simple connected graphs. The *Kronecker product* of $G_1$ and $G_2$, denoted as $G_1\times G_2$, is a graph with vertex set $V(G_1)\times V(G_2)$, and two vertices $(u_1,u_2)$ and $(v_1,v_2)$ are adjacent if and only if $u_1\sim_{G_1} v_1$ and $u_2 \sim_{G_2} v_2$.* **Lemma 1**. *Let $G_1, \ldots, G_k$ be simple connected graphs and $G=G_1\square \cdots \square G_k$ be their Cartesian product. Let $u=(u_1, \ldots, u_k)$, $v=(v_1, \ldots, v_k)\in V(G)$ where $u_i, v_i \in V(G_i)$ for $i\in [k]$. Then, $v$ is eccentric to $u$ if and only if $v_i$ is eccentric to $u_i$ for all $i\in [k]$.* **Proof:** Suppose $v$ is eccentric to $u$, i.e. $d_G\big( u, v\big)= \max \{d_G\big(u, x\big): x\in V(G)\}$. Then by [\[eqn:dis is sum of dis\]](#eqn:dis is sum of dis){reference-type="eqref" reference="eqn:dis is sum of dis"} we can express this as $$\sum\limits_{i=1}^{k} d_{G_i}\big( u_i, v_i\big)=\max \left\{ \sum\limits_{i=1}^{k} d_{G_i}\big(u_i, x_i\big): x_i \in V(G_i)\right\},$$ which holds only if $$d_{G_i}\big( u_i, v_i\big)=\max \{ d_{G_i}\big(u_i, x_i\big): x_i \in V(G_i)\} \text{ for all } i\in [k].$$ Thus, $v_i$ is eccentric to $u_i$ for all $i\in [k]$. Furthermore, we can reverse the steps of this argument to establish the converse part. ◻ *Note that if $(u_1,\ldots, u_k)\sim_{E(G_1\square\cdots \square G_k)} (v_1,\ldots , v_k)$ then $u_i \neq v_i$ for all $i\in [k]$. Also, it is clear from that if $u \sim_{E(G)} v$ then $u_i \sim_{E(G_i)} v_i$ for all $i\in [k]$, but the converse is not true.* For example, $1\sim_{E(P_4)} 3$ and $2\sim_{E(P_4)} 4$ but $(1, 2)\nsim_{E(P_4 \square P_4)} (3,4)$ (see ). **Corollary 1**. *Let $G_1$ and $G_2$ be simple graphs such that all the vertices in both $G_1$ and $G_2$ have the same eccentricities. Then $E(G_1\square G_2)$ is isomorphic to $E(G_1)\times E(G_2)$, the Kronecker product of $E(G_1)$ and $E(G_2)$.* **Lemma 2**. *Let $G_1, \ldots, G_k$ be simple connected graphs and $G=G_1\square \cdots \square G_k$.
If for some $s, t\in [k]$ there exist $u_s, v_s, w_s\in V(G_s)$ such that $u_s\sim_{E(G_s)} v_s$, $v_s\sim_{E(G_s)} w_s$ and $e_{G_s}(v_s)\geq \max\{e_{G_s}(u_s),e_{G_s}(w_s)\}$, and there exist $u_t, v_t, w_t\in V(G_t)$ such that $u_t\sim_{E(G_t)} v_t$, $v_t\sim_{E(G_t)} w_t$ and $e_{G_t}(v_t)\leq \min\{e_{G_t}(u_t),e_{G_t}(w_t)\}$, then there exists a $4$-cycle in $E(G)$.* **Proof:** Without loss of generality, assume $s=1$ and $t=2$, and for $i=3, \ldots, k$, let $\{u_i, v_i\}$ be an edge in $E(G_i)$ such that $e_{G_i}(u_i)\geq e_{G_i}(v_i)$. Using , $a=(u_1, v_2, v_3, \ldots v_k)$, $b=(v_1, w_2, u_3, \ldots,\allowbreak u_k)$, $c=(w_1, v_2, v_3, \ldots, v_k)$ and $d=(v_1, u_2,u_3, \ldots, u_k)$ form a $4$-cycle in $E(G)$. ◻ We will now prove that there is a triangle in the eccentric graph of the Cartesian product of $k$ graphs if and only if there is a triangle in the eccentric graph of each of the individual graphs. **Theorem 4**. *Let $G_1, \ldots, G_k$ be simple connected graphs and $G=G_1\square \cdots \square G_k$ be their Cartesian product. Then the girth of $E(G)=3$ if and only if the girth of $E(G_i)=3$ for all $i\in [k]$.* **Proof:** First, suppose there is a triangle in $E(G_i)$ for all $i\in [k]$. Let $\{u_i, v_i, w_i\}$ be a triangle in $E(G_i)$ such that $e_{G_i}(u_i)\leq e_{G_i}(v_i)\leq e_{G_i}(w_i)$ for all $i\in [k]$. Therefore by , $(u_1, \ldots, u_k), (v_1, \ldots, v_k), (w_1, \ldots, w_k)$ form a triangle in $E(G)$. Conversely, suppose $(u_1, \ldots, u_k), (v_1, \ldots, v_k), (w_1, \ldots, w_k)$ form a triangle in $E(G)$; then, again by , $\{u_i, v_i, w_i\}$ forms a triangle in $E(G_i)$ for all $i\in [k]$. ◻ **Theorem 5**. *Let $G_1, \ldots, G_k$ be simple connected graphs such that the eccentric girth of at least two of them is greater than two.
Let $G=G_1\square \cdots \square G_k$. Then the girth of $E(G)$ is four, except when the girth of $E(G_i)$ is exactly three for all $i\in[k]$.* **Proof:** Suppose $E(G_1)$ and $E(G_2)$ have girth greater than two, and let $C_1$ and $C_2$ be cycles in $E(G_1)$ and $E(G_2)$, respectively. Let $v_1$ be a vertex of the largest eccentricity on $C_1$ and $v_2$ be a vertex of the smallest eccentricity on $C_2$. In particular, if $u_1,w_1$ are neighbours of $v_1$ in $C_1$ and $u_2, w_2$ are neighbours of $v_2$ in $C_2$, then $$e_{G_1}(v_1)\geq \max\{e_{G_1}(u_1),e_{G_1}(w_1)\} \,\text{ and }\, e_{G_2}(v_2)\leq \min\{e_{G_2}(u_2),e_{G_2}(w_2)\}.$$ Hence, the result follows from and . ◻ Based on the above-stated theorems, it can be concluded that the eccentric girth of the Cartesian product of graphs, in which at least two have non-zero eccentric girth, is either three or four. ## Eccentric girth of the Cartesian product of trees {#sec:EccGirthOfCPofTrees} Recall that in [3](#sec:EccGir of a tree){reference-type="ref" reference="sec:EccGir of a tree"}, we observed that the eccentric girth of a tree could either be zero, three or four. Now, we will prove that for the Cartesian product of trees, it can also be six in addition to the above values. We will now completely characterize the eccentric girth of the Cartesian product of trees and present an analogous result to . **Theorem 6**. *Let $T_1, \ldots, T_k$ be trees and $G=T_1\square \cdots \square T_k$. Then* *$$\text{ the girth of }E(G)=\begin{cases} 0 & \text{ if the girth of } E(T_i)=0 \text{ for all } i\in [k],\\ 3 & \text{ if the girth of } E(T_i)=3 \text{ for all } i\in [k],\\ 6 & \text{ if }G=T_1\square P_2\square \cdots \square P_2 \text{ and } E(T_1) \text{ is } C_4\text{-free with girth three,}\\ 4 & \text{ otherwise.} \end{cases}$$* **Proof:** First, assume $T_1,\dots,T_k$ are trees with eccentric girth 0.
By , there exists a unique diametrical path of odd length in $T_i$ with endpoints $u_i$ and $v_i$ for all $i\in[k]$. Now consider the set of vertices $S=\{(x_1, \ldots, x_k): x_i\in \{ u_i, v_i\}, i\in[k]\}$ in $V(G)$; then any vertex $u\in V(G)\setminus S$ is adjacent to exactly one vertex in the eccentric graph $E(G)$, and that neighbour lies in $S$. Also, note that any two vertices in $S$ are adjacent if and only if they differ at each component; therefore $E(G)$ is an acyclic graph with $2^{k-1}$ connected components. Second, suppose only one of the $T_i$'s, say $T_1$, has non-zero eccentric girth. Now there are two cases: either at least one of $T_i$, $i=2,\ldots, k$, is not $P_2$, or $T_i=P_2$ for all $i=2,\ldots, k$. Suppose $T_2\neq P_2$. Since $E(T_2)$ has girth zero, by there exists a unique diametrical path with endpoints $u_2$ and $v_2$, and $u_2\sim_{E(T_2)} v_2$. Now, as $T_2\neq P_2$ and $E(T_2)$ is connected [@EccMatrix2018], there is a vertex $w_2$ adjacent to either $u_2$ or $v_2$; let's say $v_2\sim_{E(T_2)} w_2$. Clearly, $e_{T_2}(v_2)\geq \max \{e_{T_2}(u_2), e_{T_2}(w_2)\}$. Additionally, as the girth of $E(T_1)$ is nonzero, it is possible to choose $u_1, v_1, w_1\in V(T_1)$ such that $u_1\sim_{E(T_1)} v_1$, $v_1\sim_{E(T_1)} w_1$ and $e_{T_1}(v_1)\leq \min\{e_{T_1}(u_1),e_{T_1}(w_1)\}$. Therefore, by and , the girth of $E(G)$ is four. Now let $T_i=P_2$ with endpoints $\{u_i, v_i\}$ for $i=2, \ldots, k$. If $E(T_1)$ contains a 4-cycle, $\{u_1,v_1,w_1,x_1\}$, then $\{(u_1,u_2,\ldots, u_k),(v_1,v_2,\ldots, v_k),(w_1,u_2,\ldots, u_k),(x_1,v_2,\ldots, v_k)\}$ forms a $4$-cycle in $E(G)$. Therefore the girth of $E(G)$ is four, as $E(G)$ cannot contain any odd cycle (because $T_2=P_2$). If $E(T_1)$ does not contain a $4$-cycle, then by , the girth of $E(T_1)$ is $3$.
Let $\{u_1, v_1, w_1\}$ be a $3$-cycle in $E(T_1)$. Then $\{(u_1,u_2,\ldots, u_k),(v_1,v_2,\ldots, v_k),(w_1,u_2,\ldots, u_k),\allowbreak(u_1,v_2,\ldots, v_k),(v_1,u_2,\ldots, u_k),(w_1,v_2,\ldots, v_k)\}$ forms a $6$-cycle in $E(G)$. If $E(G)$ contains a $4$-cycle, then so does $E(T_1)$, as $T_i=P_2$ for all $i=2, \ldots, k$. Finally, the rest of the cases follow from . 0◻ As an illustration, we will now discuss the structure and the girth of the eccentric graph of the graphs obtained as the Cartesian product of two path graphs and of two cycle graphs. ## Cartesian product of two path graphs An $m\times n$ *grid graph* is the Cartesian product of the path graphs $P_m$ and $P_n$, denoted as $P_m\square P_n$. Let the vertices of $P_m\square P_n$ be $\{(i, j): 1\leq i \leq m,\, 1\leq j \leq n\}$. For the sake of simplicity in figures, we label a vertex $(i, j)$ by $(i-1)n+j$. shows the mentioned labelling for the grid graph $P_3\square P_5$. Let $G=P_m\square P_n$ be a grid. Then the eccentricity of the vertices is given by $$e\big((i,j)\big)=\begin{cases} d\big((i,j), (m, n)\big) & \text{~if~} 1\leq i\leq \lceil\frac{m}{2}\rceil,\;1\leq j\leq \lceil\frac{n}{2}\rceil,\\ d\big((i,j), (1,1)\big) & \text{~if~} \lfloor \frac{m}{2}\rfloor< i\leq m,\; \lfloor\frac{n}{2}\rfloor< j\leq n,\\ d\big((i,j), (m, 1)\big) & \text{~if~} 1\leq i\leq \lceil\frac{m}{2}\rceil,\; \lfloor\frac{n}{2}\rfloor< j\leq n,\\ d\big((i,j), (1, n)\big) & \text{~if~} \lfloor\frac{m}{2}\rfloor< i\leq m,\;1\leq j\leq \lceil\frac{n}{2}\rceil. \end{cases}$$ Note that $(1, 1), (1, n), (m, 1)$ and $(m, n)$ have the maximum eccentricity, which is $m+n-2$. 
Therefore, $$(i,j) \sim_{E(G)} \begin{cases} (m, n) & \text{~if~} 1\leq i\leq \lceil\frac{m}{2}\rceil,\;1\leq j\leq \lceil\frac{n}{2}\rceil,\\ (1,1) & \text{~if~} \lfloor \frac{m}{2}\rfloor< i\leq m,\; \lfloor\frac{n}{2}\rfloor< j\leq n,\\ (m, 1) & \text{~if~} 1\leq i\leq \lceil\frac{m}{2}\rceil,\; \lfloor\frac{n}{2}\rfloor< j\leq n,\\ (1, n) & \text{~if~} \lfloor\frac{m}{2}\rfloor< i\leq m,\;1\leq j\leq \lceil\frac{n}{2}\rceil. \end{cases}$$ From the above adjacency relations, it is clear that the eccentric graph of $P_m\square P_n$ has a specific structure depending on the parity of $m$ and $n$. *Further, note that for $m,n\geq 3$, the girth of the eccentric graph $E(P_m\square P_n)$ is zero if both $m$ and $n$ are even, four if exactly one of $m$ and $n$ is even, and three if both $m$ and $n$ are odd.* Illustrations for all three cases are provided in . ## Cartesian product of two cycle graphs As discussed in , $E(C_n)$ is isomorphic to $\frac{n}{2}$ disjoint copies of $K_2$ for even $n$. Thus when $n$ and $m$ are both even, each vertex in $E(C_n\square C_m)$ has degree 1. In other words, $E(C_n\square C_m)$ is isomorphic to the disjoint union of $\frac{nm}{2}$ copies of $K_2$. For an even $n$ and an odd $m$, each vertex in $E(C_n)$ and $E(C_m)$ has degree $1$ and $2$, respectively. Therefore, $E(C_n\square C_m)$ is a $2$-regular graph. Consequently, $E(C_n\square C_m)$ is either a cycle or a union of cycles. Moreover, $E(C_n\square C_m)$ consists of $\frac{n}{2}$ cycles of length $2m$, namely, $$\bigg( (i,1),\Big(\frac{n}{2}+i,2\Big), \dots,(i,m),\Big(\frac{n}{2}+i,1\Big),(i,2),\dots, \Big(\frac{n}{2}+i,m\Big) \bigg)$$ for $i\in[\frac{n}{2}]$. shows the eccentric graph of the Cartesian product of $C_4$ and $C_3$. When $m$ and $n$ are both $3$, the eccentric graph of $C_n\square C_m$ is shown in , and its girth is $3$ by , as can be seen in the figure. 
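The girth values discussed above, for grids with $m,n\geq 3$ and for products of cycles, can be checked by brute force. The following Python sketch (our own, not part of the paper) builds the Cartesian product, forms the eccentric graph using the adjacency rule $u\sim v$ iff $d(u,v)$ equals $e(u)$ or $e(v)$, and computes the girth, with the convention that an acyclic graph has girth zero; all helper names are ours:

```python
from collections import deque
from itertools import product

def bfs_dist(adj, s):
    """Breadth-first-search distances from s in an unweighted graph."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def path(n):   # P_n on vertices 0, ..., n-1
    return {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}

def cyc(n):    # C_n on vertices 0, ..., n-1
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def cartesian(g, h):
    """Cartesian product: move in exactly one coordinate at a time."""
    return {(u, x): [(v, x) for v in g[u]] + [(u, y) for y in h[x]]
            for u, x in product(g, h)}

def eccentric_graph(adj):
    """u ~ v iff d(u, v) = e(u) or d(u, v) = e(v)."""
    dist = {u: bfs_dist(adj, u) for u in adj}
    ecc = {u: max(dist[u].values()) for u in adj}
    e = {u: set() for u in adj}
    for u in adj:
        for v in adj:
            if u != v and dist[u][v] in (ecc[u], ecc[v]):
                e[u].add(v)
                e[v].add(u)
    return e

def girth(adj):
    """Length of a shortest cycle; 0 if the graph is acyclic."""
    best = None
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    for uv in edges:
        u, v = tuple(uv)
        # shortest cycle through edge uv = 1 + shortest u-v path avoiding uv
        pruned = {w: [x for x in adj[w] if {w, x} != {u, v}] for w in adj}
        d = bfs_dist(pruned, u)
        if v in d:
            best = d[v] + 1 if best is None else min(best, d[v] + 1)
    return 0 if best is None else best

# grids with m, n >= 3: both odd -> 3, exactly one even -> 4, both even -> 0
assert girth(eccentric_graph(cartesian(path(3), path(3)))) == 3
assert girth(eccentric_graph(cartesian(path(4), path(3)))) == 4
assert girth(eccentric_graph(cartesian(path(4), path(4)))) == 0
# products of cycles: C_4 x C_3 -> 2m = 6, C_3 x C_3 -> 3, C_4 x C_4 -> 0
assert girth(eccentric_graph(cartesian(cyc(4), cyc(3)))) == 6
assert girth(eccentric_graph(cartesian(cyc(3), cyc(3)))) == 3
assert girth(eccentric_graph(cartesian(cyc(4), cyc(4)))) == 0
```

The first three assertions check the three parity cases for grids; the last three check $C_4\square C_3$, $C_3\square C_3$, and $C_4\square C_4$ against the case formula below.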
Finally, for the remaining case, it follows from that the girth of $E(C_n\square C_m)$ is four. The following statement summarizes the above discussion: The eccentric girth of the Cartesian product of two cycle graphs is even except when both cycles are triangles. Moreover, $$\text{girth of }E(C_n\square C_m)=\begin{cases} 0 &\text{if both }n \text{ and } m \text{ are even,}\\ 3 &\text{if }n,m=3,\\ 2m&\text{if }n \text{ is even and } m \text{ is odd,}\\ 4 &\text{otherwise.}\\ \end{cases}$$ We will end this section with the following observation. **Proposition 2**. *For an odd value of $n$, $E(C_n\square C_n)$ is isomorphic to $C_n\square C_n$.* **Proof:** By , it is enough to show that $C_n\square C_n$ is isomorphic to $C_n \times C_n$ for an odd $n$. We assume the natural labelling on the vertices of $C_n$. Now, we define an isomorphism $f$ from $C_n\square C_n$ to $C_n \times C_n$ as follows: $$\begin{aligned} f\big((1,1)\big)&= (1,1),\\ f\big((i,1)\big)&= (n+2-i,n+2-i)\text{ for } i=2,\ldots,n,\\ f\big((i,j)\big)&= \big[f\big((i,1)\big)+(j-1,1-j)\big]\,(\text{mod}\; n).\end{aligned}$$ We will write $0$ as $n$ in the computation of $f$. To see that $f$ is a bijection, first note that $f\big((i,1)\big)\neq f\big((j,1)\big)$ for $i\neq j$. Now assume $(i,j)\neq (k,l)$; this happens in one of three cases: $(a)$ $i\neq k$ and $j=l$, $(b)$ $i=k$ and $j\neq l$, or $(c)$ $i\neq k$ and $j\neq l$. Consider the first case, $i\neq k$ and $j=l$, and let $f\big((i,1)\big)=(s,s)$ and $f\big((k,1)\big)=(t,t)$; clearly $s\neq t$. Now, if $f\big((i,j)\big)= f\big((k,l)\big)$, then $s+j-1 \equiv t+j-1 \ (\text{mod}\;{n})$, which leads to $s=t$, a contradiction. Therefore $f\big((i,j)\big)\neq f\big((k,l)\big)$. The second case is similar. Now consider the third case, $i\neq k$ and $j\neq l$, and again let $f\big((i,1)\big)=(s,s)$ and $f\big((k,1)\big)=(t,t)$; clearly $s\neq t$. 
Now, if $f\big((i,j)\big)= f\big((k,l)\big)$, then $s+j-1 \equiv t+l-1 \ (\text{mod}\;{n})$ and $s+1-j \equiv t+1-l \ (\text{mod}\;{n})$; adding the two congruences gives $2s\equiv 2t \ (\text{mod}\;{n})$, and hence again $s=t$ (because $n$ is odd), a contradiction. Therefore, $f$ is a bijection. Now, let $(i,j)\in V(C_n \square C_n)$ and $f\big((i,j)\big)= (s,t)$. Then $f\big((i\pm 1,j)\big)=(s\pm 1, t\pm 1)(\text{mod}\;{n})$ and $f\big((i,j\pm 1)\big)=(s\pm 1, t\mp 1)(\text{mod}\;{n})$. This proves that $f$ preserves adjacency. 0◻ # Invertibility of the eccentricity matrix of the Cartesian product of trees {#sec:Invertibilty of EG} In this section, we will focus on the invertibility of the eccentricity matrix for the Cartesian product of trees. First, recall the definition of the *Kronecker product* of two matrices. **Definition 6**. *Let $A=(a_{i,j})$ be an $m\times n$ matrix and $B=(b_{i,j})$ be a $p\times q$ matrix. Then the *Kronecker product* $A \otimes B$ is the $mp\times nq$ block matrix defined as $$\begin{pmatrix} b_{11}A &\cdots&b_{1q}A \\ \vdots &\ddots&\vdots \\ b_{p1}A &\cdots&b_{pq}A \\ \end{pmatrix}.$$* The Kronecker product of two matrices is non-commutative in general. If $A$ and $B$ are square matrices of order $n$ and $p$, respectively, then $$\det (A\otimes B)=(\det A )^p(\det B)^n.$$ **Lemma 3**. *Let $T_1$ be a tree that is neither a star nor $P_4$. Then the eccentricity matrix of $T_1\square \underbrace{ P_2\square \cdots \square P_2}_{k-1}$ is not invertible.* **Proof:** Let $G=T_1\square P_2\square\cdots \square P_2$, where the $i$-th factor in this product is the path $P_2$ with endpoints $\{u_i, v_i\}$ for $i=2,\ldots,k$. Note that a vertex $(x_1,x_2,\ldots,x_k)$ is adjacent to $(u_1,u_2,\ldots,u_k)$ in $E(G)$ if and only if $x_i=v_i$ for $i=2,\ldots,k$ and either $x_1$ is eccentric to $u_1$ in $T_1$ or $u_1$ is eccentric to $x_1$ in $T_1$. In other words, adjacency with $(u_1,u_2,\ldots,u_k)$ in $E(G)$ solely depends on the adjacency of $u_1$ in $E(T_1)$. Now we consider three cases. 
*Case 1*: Let the diameter of $T_1$ be $3$ and $P= a_1\,b_1\,c_1\,d_1$ be a diametrical path in $T_1$. As $T_1\neq P_4$, there must be a leaf vertex, say $e_1$, adjacent to either $b_1$ or $c_1$. Assume $e_1$ is adjacent to $b_1$. Now we claim that $N_{E(G)}\big((a_1,u_2,\ldots,u_k)\big)=N_{E(G)}\big((e_1,u_2,\ldots,u_k)\big)$. If a vertex $f_1$ is eccentric to $a_1$, then $f_1$ is also eccentric to $e_1$ because $d_{T_1}(a_1,f_1)=d_{T_1}(e_1,f_1)$, and if $a_1$ is eccentric to some vertex $f_1$, then so is $e_1$ for the same reason. This proves our claim, and hence the rows corresponding to these two vertices in $\mathcal{E}_G$ are exactly the same; therefore $\det(\mathcal{E}_G)=0$. *Case 2*: Let the diameter of $T_1$ be $4$ and $P= a_1\,b_1\,c_1\,d_1\, e_1$ be a diametrical path in $T_1$. Let $\{b_1,d_1,p_1,\ldots, p_{\ell}\}$ be the set of neighbours of $c_1$. Note that if a vertex $x$ is eccentric to a neighbour of $c_1$, then it is also eccentric to $c_1$. Further, note that none of $c_1$ or its neighbours can be eccentric to any vertex in $T_1$. Therefore, the row corresponding to $(c_1,u_2,\ldots,u_k)$ in the matrix $\mathcal{E}_G$ is a constant multiple of the sum of the rows corresponding to $(b_1,u_2,\ldots,u_k),(d_1,u_2,\ldots,u_k), (p_1,u_2,\ldots,u_k),\dots, (p_{\ell},u_2,\ldots,u_k)$. *Case 3*: Let the diameter of $T_1$ be greater than $4$ and $P= a_1\,b_1\,c_1\,d_1\ldots\, z_1$ be a diametrical path in $T_1$. By using similar arguments as in Case 1 and Case 2, we get that the rows corresponding to $(b_1,u_2,\ldots,u_k)$ and $(c_1,u_2,\ldots,u_k)$ in $\mathcal{E}_G$ are constant multiples of each other, and hence $\det(\mathcal{E}_G)=0$. 0◻ Now, we will present the main result of this section. **Theorem 7**. *Let $T_1, \dots, T_k$ be trees and let $G =T_1\square \cdots \square T_k$ be their Cartesian product. 
Then the eccentricity matrix of $G$, $\mathcal{E}_G$, is invertible if and only if one of them is a star or $P_4$ and the rest are $P_2$.* **Proof:** Let $T_1, \dots, T_k$ be trees with at least two vertices and $G=T_1\square \cdots \square T_k$. Assume that $T_1$ is a star on $n+1$ vertices and $T_2,\dots ,T_k=P_2$. Then the eccentricity matrix of $G$ is $$\mathcal{E}_G=\begin{pmatrix} 0&k+1 & k+1&\cdots &k+1\\ k+1& 0&k+2 &\cdots &k+2\\ \vdots&\vdots & \ddots&\\ k+1&k+2 &\cdots & 0&k+2\\ k+1&k+2 &\cdots &k+2 &0 \end{pmatrix} \otimes J_{2^k},$$ where $J_{2^k}$ is the $2^k\times 2^k$ anti-diagonal matrix with all anti-diagonal entries equal to $1$.\ \ Note that $\det \begin{pmatrix} 0&k+1 & k+1&\cdots &k+1\\ k+1& 0&k+2 &\cdots &k+2\\ \vdots&\vdots & \ddots&\\ k+1&k+2 &\cdots & 0&k+2\\ k+1&k+2 &\cdots &k+2 &0 \end{pmatrix}$ is $(-1)^{n+1}n(k+1)^2(k+2)^{n-1}$, and $\det J_{2^k} \neq 0$. Therefore $\det \mathcal{E}_G\neq 0$. Now if $T_1=P_4$, then the eccentricity matrix of $G$ is $$\mathcal{E}_G=\begin{pmatrix} 0&0& 2+k &3+k\\ 0& 0&0 &2+k\\ 2+k&0 & 0&0\\ 3+k&2+k&0 &0 \end{pmatrix} \otimes J_{2^k}.$$ Again, $\det \mathcal{E}_G\neq 0$, as $\det \begin{pmatrix} 0&0& 2+k &3+k\\ 0& 0&0 &2+k\\ 2+k&0 & 0&0\\ 3+k&2+k&0 &0 \end{pmatrix}=(k+2)^4$. For the converse part, let $T_1$ be neither a star nor $P_4$. Thus the diameter of $T_1$ is greater than $2$, and let $P=u_1u_2\dots u_s$ be a diametrical path in $T_1$. If each of $T_2,\dots ,T_k$ contains only pendant vertices, then the conclusion follows from . Therefore, we can assume without loss of generality that $T_2$ has a non-pendant vertex $v$. Now we want to show that $\det \mathcal{E}_{G}$ is zero. This assertion holds if we can show that, in general, $\det \mathcal{E}_{K}$ is zero, where $K$ is the Cartesian product of $T_1$, $T_2$ and a simple connected graph $H$. Let $(u_i, v, x)\in V(K)$. Note that $(u_i, v, x)$ cannot be farthest from (and hence, eccentric to) any vertex in $K$, because $v$ is a non-pendant vertex. 
Consequently, the only vertices adjacent to $(u_i,v,x)$ in $E(K)$ are those eccentric to $(u_i,v,x)$. Thus, $$\label{eqn:EccinNonpendantCase} N_{E(K)}(u_i,v, x)=\{(w_i,w, y): w_i,w, y \text{ are eccentric to } u_i,v,x \text{ respectively} \}.$$ Now if any vertex is eccentric to $u_1$ in $T_1$, then the same vertex is also eccentric to $u_2$ in $T_1$, leading to $$N_{E(K)}(u_1,v, x)= N_{E(K)}(u_2,v,x).$$ Thus, the row corresponding to $(u_1,v, x)$ in $\mathcal{E}_K$ is a constant multiple of that of $(u_2,v,x)$, proving the non-invertibility of $\mathcal{E}_K$. 0◻ # Acknowledgements {#acknowledgements .unnumbered} The authors thank Professor Arvind Ayyer for his valuable comments. The first author thanks the Prime Minister Research Fellowship, India, for the funding. The second author acknowledges the support of the Council of Scientific & Industrial Research, India (File number: 09/921(0347)/2021-EMR-I). [^1]: Department of Mathematics, Indian Institute of Science, Bangalore, India (anitaarora\@iisc.ac.in). [^2]: Department of Mathematics and Statistics, IISER Kolkata, Kolkata, India (rm20rs017\@iiserkol.ac.in).
{ "id": "2309.06338", "title": "Eccentric graph of trees and their Cartesian products", "authors": "Anita Arora, Rajiv Mishra", "categories": "math.CO", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We exhibit closed hyperbolic manifolds with arbitrarily small systole in each dimension that are not quasi-arithmetic in the sense of Vinberg, and are thus not commensurable to those constructed by Agol, Belolipetsky--Thomson, and Bergeron--Haglund--Wise. This is done by taking hybrids of the manifolds constructed by the latter authors. address: Institut des Hautes Études Scientifiques, Université Paris-Saclay, 35 route de Chartres, 91440 Bures-sur-Yvette, France author: - Sami Douba bibliography: - bantingbib.bib title: Systoles of hyperbolic hybrids --- Given a lattice $\Gamma$ in $\mathrm{Isom}(\mathbb{H}^n)$, the *systole* of the hyperbolic orbifold $M = \Gamma \backslash \mathbb{H}^n$ is the minimal length of a closed geodesic in $M$, i.e., the minimal translation length of a loxodromic element in $\Gamma$. Using the geometry of hyperbolic right-angled polygons, one can easily construct for each sufficiently small $\epsilon > 0$ a closed oriented hyperbolic surface of genus $2$ and systole precisely $\epsilon$. By Mostow rigidity, the topology of a closed hyperbolic $n$-manifold determines the systole as soon as $n\geq 3$; in particular, the systoles of such manifolds take on only countably many values. It is nevertheless true that for every $n \geq 3$ and every $\epsilon > 0$, there is a closed hyperbolic $n$-manifold of systole $< \epsilon$. For $n = 3$, this follows from Thurston's theory of hyperbolic Dehn filling.[^1] No such tool is available in dimension $4$, and it was in this dimension that Agol [@agol2006systoles] provided a strategy for producing closed hyperbolic manifolds of arbitrarily small systole by "inbreeding" arithmetic manifolds. The separability arguments required to implement Agol's strategy in every dimension were later supplied independently by Belolipetsky--Thomson [@MR2821431] and Bergeron--Haglund--Wise [@bergeron2011hyperplane]. 
An interesting feature of the manifolds arising from this "inbreeding" construction is that they are all quasi-arithmetic in the sense of Vinberg [@MR207853], as observed by Thomson [@MR3451458]. If $\Gamma < \mathrm{Isom}(\mathbb{H}^n)$ is a lattice, then $\Gamma$ is said to be *quasi-arithmetic* if there is a totally real number field $k \subset \mathbb{R}$ and a $k$-group ${\bf G}$ such that ${\bf G}(\mathbb{R})$ is isogenous to $\mathrm{Isom}(\mathbb{H}^n)$ with $\Gamma$ virtually contained in ${\bf G}(k)$ via this isogeny, but ${\bf G}^\sigma(\mathbb{R})$ is compact for every embedding $\sigma: k \rightarrow \mathbb{R}$ distinct from the inclusion $k \subset \mathbb{R}$. In this case, the adjoint trace field of $\Gamma$ coincides with $k$, and $\Gamma$ is arithmetic if and only if $\mathrm{trAd}(\gamma)$ is an algebraic integer for each $\gamma \in \Gamma$. The purpose of this note is to demonstrate the following. **Theorem 1**. *For each $n \geq 2$ and $\epsilon > 0$, there is a closed hyperbolic $n$-manifold of systole $< \epsilon$ that is not quasi-arithmetic.* It follows that the manifolds $M$ with small systole that we exhibit below are not commensurable to those constructed by Agol, Belolipetsky--Thomson, and Bergeron--Haglund--Wise. These $M$ are nevertheless pseudo-arithmetic in the sense of Emery and Mila [@MR4237964], and hence we do not address the question of the latter authors as to whether every lattice in $\mathrm{Isom}(\mathbb{H}^n)$ for $n \geq 4$ is pseudo-arithmetic. *Proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"}.* Set $k = \mathbb{Q}(\sqrt{2})$, and let $a \in \mathbb{Q}_{>0}$ be a non-square in $k$. For $n \geq 2$, let $f_1$ and $f_2$ be the quadratic forms in $n+1$ variables with coefficients in $k$ given by $$\begin{aligned} {4} x_1^2 + x_2^2 + \ldots + x_n^2 - \sqrt{2}x_{n+1}^2, \\ ax_1^2+x_2^2 +\ldots +x_n^2 - \sqrt{2}x_{n+1}^2,\end{aligned}$$ respectively. 
For $i=1,2$, denote by $\mathbb{H}_{f_i}$ the $f_i$-hyperboloid model of $\mathbb{H}^n$, and by $\mathrm{O}'(f_i; \mathbb{R}) = \mathrm{Isom}(\mathbb{H}_{f_i})$ the index-$2$ subgroup of $\mathrm{O}(f_i; \mathbb{R})$ preserving $\mathbb{H}_{f_i}$. Let $A_i$ be the $1$-dimensional subgroup of $\mathrm{O}'(f_i; \mathbb{R})$ consisting of all matrices of the form $$\begin{aligned} {4} \begin{pmatrix} * & & * \\ & I_{n-1} & \\ * & & * \end{pmatrix}.\end{aligned}$$ Since the $k$-points of $A_i$ are dense in the identity component $A_i^\circ$ of $A_i$, we can find $g_i \in A_i^\circ$ with entries in $k$ such that the leading eigenvalue $\lambda_i$ of $g_i$ satisfies $\mathrm{log}(\lambda_i) < \epsilon/4$. Denote by $H_i \subset \mathbb{H}_{f_i}$ the geodesic hyperplane $\{x_1 = 0\}$. Belolipetsky and Thomson [@MR2821431] show that there is an ideal $I \subset \mathbb{Z}[\sqrt{2}]$ such that for $i=1,2$, the hyperplanes $H_i$ and $g_i H_i$ project to disjoint embedded $2$-sided totally geodesic hypersurfaces $\Sigma_i$ and $\Sigma_i'$, respectively, in the orbifold $M_i := \Gamma_i(I) \backslash \mathbb{H}_{f_i}$, where $\Gamma_i(I)$ denotes the principal congruence subgroup of level $I$ in $\Gamma_i := \mathrm{O}'(f_i ; \mathbb{Z}[\sqrt{2}])$. Up to diminishing $I$, we can assume moreover that the $\Gamma_i(I)$ are torsion-free, so that the $M_i$ are manifolds, and that the projection $\omega_i$ to $M_i$ of the orthogeodesic $\tilde{\omega}_i \subset \mathbb{H}_{f_i}$ joining the hyperplanes $H_i$ and $g_i H_i$ does not cross $\Sigma_i$ or $\Sigma_i'$. Let $N_i$ be the component containing $\omega_i$ of the hyperbolic manifold with totally geodesic boundary obtained by cutting $M_i$ along $\Sigma_i$ and $\Sigma_i'$. Identify $\Sigma_i$ and $\Sigma_i'$ with the two boundary components of $N_i$ joined by $\omega_i$. 
Finally, let $M$ be the manifold obtained by first gluing the $N_i$ along the $\Sigma_i$ via a canonical isometry in the language of Mila [@MR4230399], so that the $\omega_i$ glue to an orthogeodesic $\omega$ joining the $\Sigma_i'$ of length $\mathrm{log}(\lambda_1)+ \mathrm{log}(\lambda_2)$, and then doubling the resulting manifold along its boundary; see Figure [1](#fig:systolefigure){reference-type="ref" reference="fig:systolefigure"}. Then $M$ contains a closed geodesic of length $2(\mathrm{log}(\lambda_1)+ \mathrm{log}(\lambda_2)) < \epsilon$, namely, the double of $\omega$. To see that $M$ is not quasi-arithmetic, one can for instance use the fact that a Zariski-dense subgroup of a quasi-arithmetic lattice has the same adjoint trace field as the lattice itself; see [@bdr Cor. 2]. In particular, since the $\pi_1(N_i)$ are Zariski-dense (see [@MR932135 Lem. 1.7.A]), they have adjoint trace field $k$. On the other hand, as explained in [@MR4230399], the adjoint trace field of $\pi_1(M)$ is $k(\sqrt{a}) \neq k$. Since $\pi_1(N_1) \subset \pi_1(M)$, we conclude that $\pi_1(M)$ is not quasi-arithmetic. ◻ ![A schematic of the construction of the manifold $M$ in the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"}.](systolefigure.pdf){#fig:systolefigure width="100%"} *Remark 2*. We define a *classical hybrid* to be any closed hyperbolic orbifold $V$ obtained by gluing two (possibly disconnected) compact hyperbolic orbifolds $O_1$ and $O_2$ with totally geodesic boundary along their boundaries via isometries, such that each component of the double $2O_i$ of $O_i$ along $\partial O_i$ is arithmetic for $i=1,2$, and such that $2O_1$ is not commensurable to $2O_2$. 
Note that to say $2O_i$ is arithmetic is to say that there is an arithmetic lattice containing $\pi_1(O_i)$ and the reflections in the boundary components of the universal cover $\widetilde{O_i}$, so that the particular examples of nonarithmetic hyperbolic manifolds constructed by Gromov and Piatetski-Shapiro in [@MR932135 2.8.C and 2.9] are classical hybrids in the above sense. For fixed $D$ and $\mu$, there are only finitely many monic integer polynomials of degree $\leq D$ and Mahler measure $\leq \mu$. Thus, for fixed $d$ and $n$, there is some $\epsilon_{d,n}$ such that any arithmetic hyperbolic $n$-orbifold with adjoint trace field of degree $\leq d$ has systole $\geq \epsilon_{d,n}$. It then follows that any classical hybrid of dimension $n$ with adjoint trace field of degree $\leq d$ has systole $\geq \epsilon_{d,n}$. Indeed, let $\gamma$ be a closed geodesic in a classical hybrid $V$ as above. If $\gamma$ is contained entirely in $O_i$ for some $i=1,2$, then $\gamma$ has length $\geq \mathrm{sys}(2O_i) \geq \epsilon_{d,n}$. Otherwise, $$\begin{aligned} {4} \mathrm{length}(\gamma) = \mathrm{length}(\gamma \cap O_1) + \mathrm{length}(\gamma \cap O_2) \geq \frac{1}{2} \mathrm{sys}(2O_1) + \frac{1}{2} \mathrm{sys}(2O_2) \geq \epsilon_{d,n}.\end{aligned}$$ By [@MR3090707 Lem. 3.3], for $n \geq 3$, any $n$-orbifold commensurable to a classical hybrid is again a classical hybrid. Since all the manifolds $M$ as in the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"} share the same adjoint trace field $k(\sqrt{a})$, we obtain from the above discussion that for $n \geq 3$ and $\epsilon < \epsilon_{4,n}$, the output $n$-manifold $M$ is not commensurable to any classical hybrid in the above sense. *Remark 3*. 
The approach in the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"} can also be used to construct a lattice in $\mathrm{O}'(n,1)$ for each $n \geq 2$ that is neither quasi-arithmetic nor has integral traces.[^2] For instance, if in the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"} we pick the $g_i$ so that $\lambda_1\lambda_2$ is not an algebraic integer (regardless of how small the $\log(\lambda_i)$ are), then $\Gamma = \pi_1(M)$ has the desired property. Indeed, by an argument similar to [@MR3892248 Lem. 1.1], up to conjugation within $\mathrm{O}'(f_1 ; \mathbb{R})$, we have $\Gamma \subset \mathrm{O}'(f_1; K)$, where $K = k(\sqrt{a})$. Thus, if $\mathrm{tr}\mathrm{Ad} (g) \in \mathcal{O}_K$ for each $g \in \Gamma$, then the same is true of $\mathrm{tr}(g)$ for each $g \in \Gamma$ by [@MR279206 Theorems 1 and 2 and Lem. 2]. On the other hand, if $\gamma \in \Gamma$ is an element representing the double of $\omega$ in $M$, then since $\gamma$ is a pure hyperbolic element, we have that $\mathrm{tr}(\gamma)$ is a monic palindromic Laurent polynomial in $\lambda_1\lambda_2$ with integer coefficients, so that if $\mathrm{tr}(\gamma)$ is an algebraic integer, then the same is true of $\lambda_1\lambda_2$. For instance, one can take $a = 3$ and $$\begin{aligned} {4} g_1 &= \begin{pmatrix} 3+2\sqrt{2} & & 4 + 2\sqrt{2} \\ & I_{n-1} & \\ 2+2\sqrt{2} & & 3+2\sqrt{2} \end{pmatrix}, \\ g_2 &= \begin{pmatrix} \frac{11+6\sqrt{2}}{7} & & \frac{4+6\sqrt{2}}{7} \\ & I_{n-1} & \\ \frac{18+6\sqrt{2}}{7} & & \frac{11+6\sqrt{2}}{7} \end{pmatrix}.\end{aligned}$$ One can arrange in the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"} for the double of $\omega$ to be the shortest closed geodesic in $M$. Indeed, a similar approach yields the following (compare, for instance, [@MR3702548 Thm. 1.5]). **Theorem 4**. 
*For every $n \geq 3$ and $m \in \mathbb{Z}_{>0}$, one can find $\epsilon_m < \frac{1}{m}$ and $m$ pairwise incommensurable closed hyperbolic $n$-manifolds of the same volume, each possessing a unique shortest closed geodesic of length precisely $\epsilon_m$.* *Proof.* Fix $n$ and $m$. In the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"}, take $\epsilon \leq 2^{-m}\min\{\frac{1}{m}, \epsilon_{2,n}\}$, where $\epsilon_{2,n}$ is as in Remark [Remark 2](#classicalhybrid){reference-type="ref" reference="classicalhybrid"}. For $i=1,2$, let $p_i \in H_i$ be the point at which the axis of $g_i$ meets $H_i$, and let $q_i$ be the midpoint of the geodesic segment $\tilde{\omega}_i \subset \mathbb{H}_{f_i}$. For a point $y_i \in \mathbb{H}_{f_i}$ and a subgroup $\Delta_i < \Gamma_i$, let $D_i(\Delta_i, y_i)$ be the Dirichlet domain for $\Delta_i$ in $\mathbb{H}_{f_i}$ centered at $y_i$. Given an ideal $J \subset \mathbb{Z}[\sqrt{2}]$, denote by $\Delta_i(J)$ (resp., $\Delta_i'(J)$) the stabilizer in $\Gamma_i(J)$ of $H_i$ (resp., $g_iH_i$), and let $\Phi_i(J) = \langle \Delta_i(J), \Delta_i'(J) \rangle < \Gamma_i(J)$. We pass to an ideal $J \subset I$ such that 1. [\[gf\]]{#gf label="gf"} the walls of $D_i(\Delta_i(J), p_i)$ and the walls of $D_i(\Delta_i'(J), g_ip_i)$ together comprise the walls of $D_i(\Phi_i(J), q_i)$; 2. [\[bananas\]]{#bananas label="bananas"} no wall of $D_i(\Delta_i(J), p_i)$ enters the $\epsilon_{2,n}$-neighborhood of $g_iH_i$ in $\mathbb{H}_{f_i}$, and no wall of $D_i(\Delta_i'(J), g_ip_i)$ enters the $\epsilon_{2,n}$-neighborhood of $H_i$. From [\[gf\]](#gf){reference-type="ref" reference="gf"}, we have that $\Phi_i(J)$ is geometrically finite, and hence separable in $\Gamma_i(J)$ by the aforementioned work of Bergeron--Haglund--Wise [@bergeron2011hyperplane]. 
It follows that there is a finite-index subgroup $\Lambda_i < \Gamma_i(J)$ containing $\Phi_i(J)$ such that no wall of $D_i(\Lambda_i, q_i)$ that is not a wall of $D_i(\Phi_i(J), q_i)$ enters the $\epsilon_{2,n}$-neighborhood of $H_i \cup g_iH_i$. We replace $M_i$ with the manifold $\Lambda_i \backslash \mathbb{H}_{f_i}$. Then any orthogeodesic segment in $M_i$ with endpoints on $\Sigma_i \cup \Sigma_i'$ apart from $\omega_i$ now has length $\geq \epsilon_{2,n}$. Let $N_i'$ be the double of $N_i$ across all boundary components of $N_i$ apart from $\Sigma_i$, and $\omega_i' \subset N_i'$ the double of $\omega_i$. Then any connected closed manifold $C$ built out of a total of $2^m$ copies of the $N_i'$ by gluing boundary components via isometries so that the copies of $\omega_i'$ match up has a single shortest closed geodesic, namely, the closed geodesic obtained by gluing up all the copies of $\omega_i'$, and this geodesic has length $< 2^m\frac{\epsilon}{2} < \frac{1}{m}$. Moreover, the systole and volume of such $C$ depend only on the number of times each of $N_1'$ and $N_2'$ features in the construction of $C$. Now we claim that if we set $a = 17$ in the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"}, then the manifolds $C$ as above represent at least $m$ distinct commensurability classes, even under the stipulation that $N_1'$ and $N_2'$ feature equally in the construction of $C$. Indeed, for our choice of $a$, Gelander and Levit [@MR3261631] show that the forms $f_1$ and $f_2$ are not similar over $k$. Since $n \geq 3$, it then follows from work of Raimbault[^3] [@MR3090707] that if $C_1$ and $C_2$ are commensurable manifolds of the form $C$ above, and $\alpha_j$ is the cyclic sequence of $1$'s and $2$'s dictating the decomposition of $C_j$ into copies of $N_1'$ and $N_2'$ for $j=1,2$, then $\alpha_1$ and $\alpha_2$ are related by a dihedral symmetry. 
Now it suffices to observe that one can easily produce $m$ cyclic sequences of $1$'s and $2$'s of length $2^m$ no two of which are related by such a symmetry and each of which features an equal number of $1$'s and $2$'s. ◻ *Remark 5*. In the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"}, if there is no other orthogeodesic in $M_i = \Gamma_i(I) \backslash \mathbb{H}_{f_i}$ joining $\Sigma_i$ to $\Sigma_i'$ of precisely the same length as $\omega_i$, then the argument of Belolipetsky and Thomson [@MR2821431] (more precisely, the proof of their generalized Margulis--Vinberg lemma [@MR2821431 Lem. 3.1]) in fact shows that, for any $\delta > 0$, it is possible to pass to an ideal $I_\delta \subset I \subset \mathbb{Z}[\sqrt{2}]$ such that, replacing $M_i$ with $\Gamma_i(I_\delta) \backslash \mathbb{H}_{f_i}$, every orthogeodesic in $M_i$ with endpoints on $\Sigma_i \cup \Sigma_i'$ apart from $\omega_i$ now has length $\geq \delta$. Taking $\delta = \epsilon_{2,n}$, one could then instead use the above congruence covers $M_i$ in the proof of Theorem [Theorem 4](#shortest){reference-type="ref" reference="shortest"}. It has been suggested to us by Misha Belolipetsky that this can potentially be arranged for an appropriate (indeed, generic) choice of $g_i$, but we have not pursued this here. ## Acknowledgements {#acknowledgements .unnumbered} I am grateful to Misha Belolipetsky, Nikolay Bogachev, and Jean Raimbault for instructive discussions, and for their comments on an earlier draft of this note. The author was supported by the Huawei Young Talents Program. [^1]: In retrospect, there is a more elementary construction again involving hyperbolic right-angled polyhedra [@bogachevdouba], though this also can ultimately be conceptualized as resulting from "Dehn filling" of Coxeter $3$-polyhedra. 
[^2]: Note that, even having fixed the adjoint trace field, small systoles are not enough to guarantee nonintegral traces, since the ambient groups of the relevant lattices are not admissible. [^3]: Raimbault's work [@MR3090707] deals with manifolds constructed cyclically out of pieces of *arithmetic* manifolds the ambient groups of whose associated lattices are distinct. However, his arguments persist if "arithmetic" is weakened to "quasi-arithmetic" in the previous sentence, as observed in [@bdr], and as is the case in our context.
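As a supplementary check (not part of the paper), one can verify in exact arithmetic that the matrices $g_1$ and $g_2$ proposed in Remark 3 do preserve the relevant quadratic forms: their nontrivial $2\times 2$ blocks, acting on the $(x_1,x_{n+1})$-coordinates, should satisfy $g^{T}Dg=D$ for $D=\mathrm{diag}(1,-\sqrt 2)$ and $D=\mathrm{diag}(3,-\sqrt 2)$ (taking $a=3$), respectively, and have determinant $1$. A minimal Python sketch over $\mathbb{Q}(\sqrt 2)$, with all helper names ours:

```python
from fractions import Fraction as F

class Q2:
    """Element p + q*sqrt(2) of the field Q(sqrt(2)), with p, q rational."""
    def __init__(self, p, q=0):
        self.p, self.q = F(p), F(q)
    def __add__(self, o):
        return Q2(self.p + o.p, self.q + o.q)
    def __mul__(self, o):
        # (p1 + q1 r)(p2 + q2 r) = p1 p2 + 2 q1 q2 + (p1 q2 + q1 p2) r
        return Q2(self.p * o.p + 2 * self.q * o.q, self.p * o.q + self.q * o.p)
    def __neg__(self):
        return Q2(-self.p, -self.q)
    def __eq__(self, o):
        return self.p == o.p and self.q == o.q

def mat_mul(A, B):
    return [[A[i][0] * B[0][j] + A[i][1] * B[1][j] for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

def preserves(g, D):
    """Check g^T D g == D entrywise, i.e. g preserves the form with matrix D."""
    M = mat_mul(transpose(g), mat_mul(D, g))
    return all(M[i][j] == D[i][j] for i in range(2) for j in range(2))

def det(A):
    return A[0][0] * A[1][1] + (-(A[0][1] * A[1][0]))

r2 = Q2(0, 1)          # sqrt(2)
zero = Q2(0)

# the (x_1, x_{n+1})-blocks of g_1 and g_2 from Remark 3
g1 = [[Q2(3, 2), Q2(4, 2)],
      [Q2(2, 2), Q2(3, 2)]]
g2 = [[Q2(F(11, 7), F(6, 7)), Q2(F(4, 7), F(6, 7))],
      [Q2(F(18, 7), F(6, 7)), Q2(F(11, 7), F(6, 7))]]

D1 = [[Q2(1), zero], [zero, -r2]]   # x_1^2 - sqrt(2) x_{n+1}^2
D2 = [[Q2(3), zero], [zero, -r2]]   # 3 x_1^2 - sqrt(2) x_{n+1}^2  (a = 3)

assert preserves(g1, D1) and det(g1) == Q2(1)
assert preserves(g2, D2) and det(g2) == Q2(1)
```

Since `Fraction` arithmetic is exact, the assertions confirm that $g_1\in\mathrm{O}'(f_1;k)$ and $g_2\in\mathrm{O}'(f_2;k)$ on the relevant $2$-plane, as claimed.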
{ "id": "2309.16051", "title": "Systoles of hyperbolic hybrids", "authors": "Sami Douba", "categories": "math.GR math.GT math.NT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | In this paper, we first study the dual fractional parabolic equation $$\partial^\alpha_t u(x,t)+(-\Delta)^s u(x,t) = f(u(x,t))\ \ \mbox{in}\ \ B_1(0)\times\mathbb{R},$$ subject to the vanishing exterior condition. We show that for each $t\in\mathbb{R}$, the positive bounded solution $u(\cdot,t)$ must be radially symmetric and strictly decreasing about the origin in the unit ball in $\mathbb{R}^n$. To overcome the challenges caused by the dual non-locality of the operator $\partial^\alpha_t+(-\Delta)^s$, some novel techniques were introduced. Then we establish the Liouville theorem for the homogeneous equation in the whole space $$\label{B} \partial^\alpha_t u(x,t)+(-\Delta)^s u(x,t) = 0\ \ \mbox{in}\ \ \mathbb{R}^n\times\mathbb{R}.$$ We first prove a maximum principle in unbounded domains for anti-symmetric functions to deduce that $u(x,t)$ must be constant with respect to $x.$ Then it suffices for us to establish the Liouville theorem for the Marchaud fractional equation $$\partial^\alpha_t u(t) = 0\ \ \mbox{in}\ \ \mathbb{R}.$$ To circumvent the difficulties arising from the nonlocal and one-sided nature of the operator $\partial_t^\alpha$, we bring in some new ideas and simpler approaches. Instead of disturbing the anti-symmetric function, we employ a perturbation technique directly on the solution $u(t)$ itself. This method provides a more concise and intuitive route to establish the Liouville theorem for one-sided operators $\partial_t^\alpha$, including even more general Marchaud time derivatives. **Mathematics Subject classification (2020):** 35R11; 35B06, 47G30; 35B50; 35B53. 
**Keywords:** dual fractional parabolic equations; direct method of moving planes; narrow region principle; radial symmetry; monotonicity; Liouville theorem.\ author: - Yahong Guo - Lingwei Ma - "Zhenqiu Zhang[^1]" title: A Liouville Theorem and Radial Symmetry for Dual Fractional Parabolic Equations --- # Introduction  The primary objective of this paper is to investigate the qualitative properties of solutions to dual nonlocal parabolic equations associated with the operator $\partial_t^\alpha+(-\Delta)^s$. More precisely, we first investigate the radial symmetry and monotonicity of solutions to the following equation in the unit ball $$\label{A} \left\{ \begin{array}{ll} \partial^\alpha_t u(x,t)+(-\Delta)^s u(x,t) = f(u(x,t))\ \ &\mbox{in}\ \ B_1(0)\times\mathbb{R},\\ u(x,t)\equiv 0 \ \ &\mbox{in}\ \ B_1^c(0)\times\mathbb{R}. \end{array} \right.$$ Then we establish the Liouville theorem for the homogeneous equation in the whole space $$\label{B} \partial^\alpha_t u(x,t)+(-\Delta)^s u(x,t) = 0\ \ \mbox{in}\ \ \mathbb{R}^n\times\mathbb{R}.$$ The one-sided nonlocal time derivative $\partial_t^\alpha$ considered here is known as the Marchaud fractional derivative of order $\alpha$, defined as $$\label{1.00} \partial^\alpha_t u(x,t)=C_\alpha\displaystyle\int_{-\infty}^t\displaystyle\frac{u(x,t)-u(x,\tau)}{(t-\tau)^{1+\alpha}}d\tau,$$ where $0<\alpha<1$, $C_\alpha=\displaystyle\frac{\alpha}{\Gamma({1-\alpha})}$, and $\Gamma$ denotes the Gamma function. By definition, this fractional time derivative depends on the values of the function from the past; it is sometimes also denoted by $(D_{\rm{left}})^\alpha$. 
The spatial nonlocal elliptic pseudo-differential operator, the fractional Laplacian $(-\Delta)^s$, is defined as $$\label{1.0} (-\Delta)^su(x,t)= C_{n,s}P.V.\displaystyle\int_{\mathbb{R}^{n}}\displaystyle\frac{u(x,t)-u(y,t)}{\lvert x-y\rvert^{n+2s}}dy,$$ where $0 < s < 1$, $C_{n,s} := \frac{4^s \Gamma\left(\frac{n+2s}{2}\right)}{\pi^{n/2} \left|\Gamma(-s)\right|}$ is a positive normalization constant, and $P.V.$ stands for the Cauchy principal value. To guarantee that the singular integrals in [\[1.00\]](#1.00){reference-type="eqref" reference="1.00"} and [\[1.0\]](#1.0){reference-type="eqref" reference="1.0"} are well defined, we assume that $$u(x,t)\in\left(\mathcal{L}_{2s}\cap C^{1,1}_{loc}(\mathbb{R}^n)\right)\times\left(C^1(\mathbb{R})\cap \mathcal{L}^{-}_\alpha(\mathbb{R})\right).$$ Here, the slowly increasing function spaces $\mathcal{L}_{2s}$ and $\mathcal{L}^{-}_\alpha(\mathbb{R})$ are defined respectively by $$\mathcal{L}_{2s}:=\left\{v\in L^1_{loc}(\mathbb{R}^n) \ | \int_{\mathbb{R}^n} \frac{|v(x)|}{1+|x|^{n+2s}} dx < +\infty\right\}$$ and $$\mathcal{L}^{-}_\alpha(\mathbb{R}):=\left\{v\in L^1_{loc}(\mathbb{R})\ | \int_{-\infty}^{t} \frac{|v(\tau)|}{1+|\tau|^{1+\alpha}} d\tau < +\infty \text{ for\ each}\ t\in\mathbb{R}\right\}.$$ A typical application of equation [\[A\]](#A){reference-type="eqref" reference="A"} is in modeling continuous time random walks [@33], which generalize Brownian random walks. This fractional kinetic equation introduces nonlocality in time, leading to history dependence due to unusually large waiting times, and nonlocality in space, accounting for unusually large jumps connecting distant regions, such as Lévy flights. In financial applications, it can also be used to model situations where the waiting time between transactions is correlated with the ensuing price jump (cf. [@37]). 
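Returning to the constant $C_{n,s}$ defined above: it is exactly the normalization that makes the Fourier symbol of $(-\Delta)^s$ equal to $|\xi|^{2s}$; equivalently, in dimension $n=1$, $C_{1,s}\int_{\mathbb{R}}(1-\cos y)\,|y|^{-1-2s}\,dy=1$. The sketch below is our own illustration (ad hoc quadrature parameters; it uses the Gamma recurrence $|\Gamma(-s)|=\Gamma(1-s)/s$) and checks this numerically.

```python
import numpy as np
from math import gamma, pi, sqrt

def symbol_check(s, R=500.0, N=400_000, p=3):
    """Check C_{1,s} * int_R (1 - cos y)/|y|^{1+2s} dy = 1, i.e. that
    C_{n,s} normalizes the symbol of (-Delta)^s to |xi|^{2s} (here n = 1,
    |xi| = 1). The mesh y_i = R (i/N)^p is graded toward the integrable
    singularity at 0; the tail over |y| > R is added as 2*R^{-2s}/(2s),
    its oscillatory remainder being O(R^{-1-2s})."""
    i = np.arange(1, N + 1)
    y = R * (i / N) ** p
    f = (1.0 - np.cos(y)) / y ** (1.0 + 2.0 * s)
    core = 2.0 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y))  # both half-lines
    I = core + R ** (-2.0 * s) / s
    C_1s = 4.0 ** s * gamma(0.5 + s) * s / (sqrt(pi) * gamma(1.0 - s))
    return C_1s * I

for s in (0.25, 0.5, 0.75):
    print(s, symbol_check(s))  # each value should be close to 1
```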
Another model is presented in [@dCN] to simulate transport of tracer particles in plasma, where the function $u$ is the probability density function for tracer particles, representing the probability of finding a particle at time $t$ and position $x$, and the right-hand side $f$ is a source term. In this case, the nonlocal space operator $(-\Delta)^s$ accounts for the avalanche-like transport that can occur, while the Marchaud time derivative $\partial^\alpha_t$ accounts for the trapping of the tracer particles in turbulent eddies. It is worth mentioning that the nonlocal operator $\partial^\alpha_t + (-\Delta)^s$ in problem [\[A\]](#A){reference-type="eqref" reference="A"} reduces to the local heat operator $\partial_t - \Delta$ as $\alpha \to 1$ and $s \to 1$. The method of moving planes, initially introduced by Alexandroff in [@28] and simplified by Berestycki and Nirenberg [@HB3], is a widely used technique for studying the monotonicity of solutions to local elliptic and parabolic equations. However, this approach cannot be applied directly to pseudo-differential equations involving the fractional Laplacian due to its nonlocality. To circumvent this difficulty, Caffarelli and Silvestre [@14] introduced an extension method that turns a nonlocal equation into a local one in higher dimensions. Thereby the traditional method of moving planes designed for local equations can be applied to the extended problem to establish qualitative properties of solutions, and a series of interesting results have been obtained in [@Chang; @11; @13; @18; @Sun1; @20; @Sun2; @LC2; @31] and the references therein. However, this method is applicable only to equations involving the fractional Laplacian, and additional restrictions sometimes need to be imposed on the problems, restrictions that are unnecessary when dealing with the fractional equations directly. To remove these restrictions, Chen, Li, and Li [@11] introduced a direct method of moving planes nearly ten years later. 
This method significantly simplifies the proof and has been widely applied to establish the symmetry, monotonicity, and non-existence of solutions for various elliptic equations and systems involving the fractional Laplacian, fully nonlinear nonlocal operators, fractional $p$-Laplacians, as well as higher order fractional operators; we refer to [@CL; @CLL; @CW; @DQ; @LC3; @34] and the references therein. Recently, this method has also gradually been used to study the geometric behavior of solutions of fractional parabolic equations with the usual local time derivative $\partial_t u(x,t)$ (cf. [@10; @CM; @JW; @WuC] and the references therein). In particular, the authors of [@10] established symmetry and monotonicity of positive solutions on a unit ball for the classical parabolic problem $$\label{Local} \left\{ \begin{array}{ll} \partial_t u(x,t)+(-\Delta)^s u(x,t) = f(u(x,t))\ \ &\mbox{in}\ \ B_1(0)\times\mathbb{R},\\ u(x,t)\equiv 0 \ \ &\mbox{in}\ \ B_1^c(0)\times\mathbb{R}. \end{array} \right.$$ However, as far as we know, there is still a lack of research on the geometric properties of solutions to the nonlocal parabolic equation [\[A\]](#A){reference-type="eqref" reference="A"} with the Marchaud fractional time derivative $\partial_t^\alpha u(x,t)$ and the fractional Laplacian $(-\Delta)^s$. 
Recently, Guo, Ma and Zhang [@24] employed a suitable sliding method, first introduced by Berestycki and Nirenberg [@HB3], to demonstrate a generalized version of Gibbons' conjecture in the setting of the dual nonlocal parabolic equation $$\partial^\alpha_t u(x,t)+\mathcal{L}u(x,t) = f(t,u(x,t))\ \ \mbox{in}\ \ \mathbb{R}^n\times\mathbb{R}.$$ Here the spatial nonlocal elliptic operator of integro-differential type is defined as $$\label{D} \mathcal{L}u(x,t)= P.V.\displaystyle\int_{\mathbb{R}^{n}}\left[u(x,t)-u(y,t)\right]\cdot K(x,y)dy.$$ Chen and Ma [@cm] carried out a suitable direct method of moving planes to obtain the monotonicity of positive solutions for the following problem $$\left\{ \begin{array}{ll} \partial^\alpha_t u(x,t)+(-\Delta)^s u(x,t) = f(u(x,t))\ \ &\mbox{in}\ \ \mathbb{R}^n_+ \times \mathbb{R},\\ u(x,t)\equiv 0 \ \ &\mbox{in}\ \ (\mathbb{R}^n \setminus \mathbb{R}^n_+) \times \mathbb{R}. \end{array} \right.$$ Therefore, our first main interest here is to apply a direct method of moving planes to establish the radial symmetry and monotonicity of solutions to problem [\[A\]](#A){reference-type="eqref" reference="A"} in the unit ball. Our second main objective is to establish the Liouville theorem for equation [\[B\]](#B){reference-type="eqref" reference="B"}. The classical Liouville theorem states that any bounded harmonic function defined in the entire space $\mathbb{R}^n$ must be identically constant. This theorem plays a crucial role in deriving a priori estimates and establishing the qualitative properties of solutions, including their existence, nonexistence, and uniqueness. As a result, it has been extensively studied in the analysis of partial differential equations, and has been further extended to various types of elliptic and fractional elliptic equations, even to $k$-Hessian equations, using diverse methods including Harnack inequalities, blow-up and compactness arguments, as well as Fourier analysis (cf. 
[@Chang1; @BK; @18; @DQ; @F; @M; @SZ] and the references therein). In the context of the nonlocal homogeneous parabolic equation [\[B\]](#B){reference-type="eqref" reference="B"}, when the domain of $t$ is restricted to $(-\infty, 0]$, Widder [@W] proved that all bounded solutions $u(x, t)$ must be constant in the case $\alpha=1$, $s=1$; while for $\alpha=1$, $0<s<1$, Serra [@S] showed that solutions satisfying a certain growth condition are constant. More recently, Ma, Guo and Zhang [@MGZ] demonstrated that bounded entire solutions of the homogeneous master equation $$\label{C}(\partial_t -\Delta)^s u(x, t) = 0 \mbox{~in~} \mathbb{R}^n \times \mathbb{R},$$ must be constant. Here the fully fractional heat operator $(\partial_t -\Delta)^s$ was first proposed by Riesz [@R], and it can be defined pointwise by the singular integral $$(\partial_t -\Delta)^s u(x, t) := C_{n,s} \displaystyle\int_{-\infty}^{t} \displaystyle\int_{\mathbb{R}^n} \frac{u(x, t) - u(y, \tau)}{(t - \tau)^{\frac{n}{2}+1+s}} e^{-\frac{|x - y|^2 }{ 4(t - \tau)}} dy d\tau,$$ where $0 < s < 1$ and $C_{n,s} = \frac{1}{(4\pi)^{n/2} |\Gamma(-s)|}$. It is essential to emphasize that in [@MGZ], we first established maximum principles for the operator $(\partial_t -\Delta)^s$ to conclude that any bounded solution $u(x, t)$ must be constant with respect to the spatial variable $x$, i.e. $u(x,t)=u(t)$. This reduces equation [\[C\]](#C){reference-type="eqref" reference="C"} to a one-sided one-dimensional fractional equation $$\label{time}\partial^\alpha_t u(t)=0 ~ \mbox{in}\ \ \mathbb{R}.$$ Then we obtained that the bounded solution $u(t)$ must be constant with respect to $t$ by employing Fourier analysis, which applies to more general distributions beyond bounded functions. 
However, this method does not fully capture the one-sided nature of the operator $\partial_t^\alpha$. Taking inspiration from these findings, our second main objective is to develop an alternative and more straightforward method to generalize the Liouville theorem to the dual fractional parabolic operator $\partial_t^\alpha+(-\Delta)^s$ in the whole space. We now explain the novelty and challenges of our approach in deriving the radial symmetry of solutions for problem [\[A\]](#A){reference-type="eqref" reference="A"} in the unit ball and the Liouville theorem for equation [\[B\]](#B){reference-type="eqref" reference="B"} in the whole space, by analysing the characteristics of the one-sided fractional time operator $\partial_t^\alpha$ and the (double-sided) fractional Laplacian $(-\Delta)^s$. In comparison with [@10] for the operator $\partial_t+(-\Delta)^s$ and [@MGZ] for the operator $(\partial_t -\Delta)^s$, a notable difference in this paper is that all perturbations are novel and constructed from different scalings and shiftings of smooth cut-off functions $\eta_k$, to match the dual fractional parabolic operator $\partial_t^\alpha+(-\Delta)^s$. Then by applying the $\mathbf{Translation~and~Rescaling~Invariance}$ $$\label{TS}\mathcal{L}\left[u\left(\displaystyle\frac{x-\bar{x}}{r}\right)\right] = \frac{1}{r^{\beta}}\mathcal{L}u\left(\displaystyle\frac{x-\bar{x}}{r}\right),$$ to the specific operators $\mathcal{L}=(-\Delta)^s$ with $\beta=2s$ and $\mathcal{L}=\partial_t^\alpha$ with $\beta=\alpha$, we derive $$\mathcal{L}\eta_k\lesssim \displaystyle\frac{1}{r^\beta},$$ which is a key estimate in proving the maximum principle for anti-symmetric functions with respect to $x$ as well as the Liouville theorem for the Marchaud fractional operator $\partial_t^\alpha.$ Utilizing these essential tools, we can develop the direct method of moving planes and obtain the Liouville theorem for the dual fractional operator $\partial_t^\alpha+(-\Delta)^s$. 
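As a quick sanity check (not part of the original exposition), the invariance [\[TS\]](#TS){reference-type="eqref" reference="TS"} for $\mathcal{L}=\partial_t^\alpha$ with $\beta=\alpha$ follows, taking $\bar{t}=0$ for brevity, from the change of variables $\tau=r\sigma$ in the defining integral: $$\partial^\alpha_t\left[u\left(\frac{t}{r}\right)\right] =C_\alpha\displaystyle\int_{-\infty}^{t}\frac{u(t/r)-u(\tau/r)}{(t-\tau)^{1+\alpha}}d\tau =C_\alpha\displaystyle\int_{-\infty}^{t/r}\frac{u(t/r)-u(\sigma)}{r^{1+\alpha}\left(t/r-\sigma\right)^{1+\alpha}}\,r\,d\sigma =\frac{1}{r^{\alpha}}\left(\partial^\alpha_t u\right)\left(\frac{t}{r}\right).$$ The computation for $\mathcal{L}=(-\Delta)^s$ with $\beta=2s$ is identical, with the substitution $y=\bar{x}+rz$ in the singular integral [\[1.0\]](#1.0){reference-type="eqref" reference="1.0"}.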
On the one hand, we point out the distinction between the local time derivative $\partial_t$ and the nonlocal operator $\partial_t^\alpha$ in the process of establishing the radial symmetry of solutions of equation [\[A\]](#A){reference-type="eqref" reference="A"} through the direct method of moving planes combined with a limiting argument. Differing from the traditional approaches employed for the classical parabolic equation [\[Local\]](#Local){reference-type="eqref" reference="Local"} (cf. [@10]), we repeatedly use the following two key $\mathbf{observations}$ arising from the nonlocal and one-sided nature of the one-dimensional fractional time operator $\partial_t^\alpha$. $\mathbf{Observation~A.}$ If $u(\bar{t})=\min\limits_{t\in\mathbb{R}}u(t) ~(\mbox{or~}\max\limits_{t\in\mathbb{R}}u(t))$, then $\partial_t^\alpha u(\bar{t})\leq 0~(\mbox{or~}\geq 0).$ $\mathbf{Observation~B.}$ Assume that $u(\bar{t})=\min\limits_{t\in\mathbb{R}}u(t) ~(\mbox{or~}\max\limits_{t\in\mathbb{R}}u(t))$. Then $\partial_t^\alpha u(\bar{t})= 0$ if and only if $$u(t)\equiv u(\bar{t}) \mbox{~for~}t<\bar{t}.$$ On the other hand, we emphasize the different challenges posed by the one-sided operator $\partial_t^\alpha$ and the fractional Laplacian $(-\Delta)^s$ in the process of deriving the Liouville theorem for the homogeneous equation [\[B\]](#B){reference-type="eqref" reference="B"}. It is well-known that the (double-sided) fractional Laplacian $(-\Delta)^s$ satisfies the $\mathbf{Reflection~Invariance}$ (or chain rule) $$\label{R}(-\Delta)^s\left[u(x^\lambda)\right] = (-\Delta)^s u\left(x^\lambda\right),$$ where $x^\lambda = (2\lambda - x_1, x')$ denotes the reflection of $x$ with respect to the hyperplane $x_1=\lambda.$ However, this is no longer valid for the fractional time derivative $\mathcal{L}=\partial_t^\alpha$ due to its one-sided nature. 
Indeed, if we denote $(D_{\rm{left}})^\alpha:=\partial_t^\alpha$ and $t^\lambda=2\lambda-t$, then instead of [\[R\]](#R){reference-type="eqref" reference="R"} we obtain $$\label{R1}(D_{\rm{left}})^\alpha\left[u(t^\lambda)\right] = (D_{\rm{right}})^\alpha u\left(t^\lambda\right).$$ Here $(D_{\rm{right}})^\alpha$ is also a fractional derivative, based on the values of the function in the future, defined as $$(D_{\rm{right}})^\alpha u(t):=C_\alpha\displaystyle\int_t^{+\infty}\displaystyle\frac{u(t)-u(\tau)}{(\tau-t)^{1+\alpha}}d\tau.$$ The property [\[R\]](#R){reference-type="eqref" reference="R"} plays a crucial role in establishing the symmetry of solutions with respect to spatial hyperplanes and in further deriving the Liouville theorem. Let us compare equation [\[B\]](#B){reference-type="eqref" reference="B"} with the classical fractional parabolic equation $$\label{R2} \partial_tu(x,t)+(-\Delta)^su(x,t)=0 \mbox{~in~} \mathbb{R}^n\times\mathbb{R}.$$ By establishing the maximum principle for the anti-symmetric function $w(x,t)=u(x^\lambda,t)-u(x,t),$ one concludes that any bounded solution $u(\cdot, t)$ is symmetric with respect to any hyperplane in $\mathbb{R}^n$ for each fixed $t\in\mathbb{R}$, i.e., $$u(x,t)=u(t) \mbox{~in~}\mathbb{R}^n\times\mathbb{R},$$ and hence $\partial_tu(t)=0.$ From this one immediately deduces that any bounded solution $u$ of equation [\[R2\]](#R2){reference-type="eqref" reference="R2"} is constant. For the dual fractional parabolic equation [\[B\]](#B){reference-type="eqref" reference="B"}, however, one still needs to prove the Liouville theorem for the Marchaud fractional equation [\[time\]](#time){reference-type="eqref" reference="time"}. 
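For completeness (this short computation is implicit in the discussion above), the identity [\[R1\]](#R1){reference-type="eqref" reference="R1"} follows from the substitution $\sigma=\tau^\lambda=2\lambda-\tau$, which reverses orientation and maps $(-\infty,t)$ onto $(t^\lambda,+\infty)$, with $t-\tau=\sigma-t^\lambda$: $$(D_{\rm{left}})^\alpha\left[u(t^\lambda)\right] =C_\alpha\displaystyle\int_{-\infty}^{t}\frac{u(t^\lambda)-u(\tau^\lambda)}{(t-\tau)^{1+\alpha}}d\tau =C_\alpha\displaystyle\int_{t^\lambda}^{+\infty}\frac{u(t^\lambda)-u(\sigma)}{(\sigma-t^\lambda)^{1+\alpha}}d\sigma =(D_{\rm{right}})^\alpha u\left(t^\lambda\right).$$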
Due to the lack of the reflection invariance [\[R\]](#R){reference-type="eqref" reference="R"} for the one-sided operator $\partial_t^\alpha$, one cannot establish a maximum principle for the anti-symmetric function $w(t)= u(t^\lambda) - u(t)$ in the same way as for double-sided operators like the fractional Laplacian. To circumvent this difficulty, in this paper we introduce some new ideas and simpler approaches. Inspired by the aforementioned $\mathbf{Observation~}\mathbf{B}$ satisfied by the one-sided operator itself, we start directly from the definition of the operator $\partial_t^\alpha$ and employ a perturbation technique on the solution $u(t)$ itself instead of on the anti-symmetric function $w(t)$. This provides a more concise and intuitive method for establishing the Liouville theorem for the one-sided operator $\partial_t^\alpha$, which is precisely a novel aspect of our work. In contrast to the Fourier analysis method used in our recent work [@MGZ], this refined approach highlights more directly the distinctions between one-sided and double-sided operators. Needless to say, focusing on the nonlocal time operator, we work mainly with $(D_{\rm{left}})^\alpha$, while all our results remain equally valid for the right fractional time derivative $(D_{\rm{right}})^\alpha$. In addition, it is worth emphasizing that the proofs presented here for the radial symmetry and monotonicity of solutions, as well as for the Liouville theorem, can be adapted to various nonlocal equations involving the spatial nonlocal elliptic operators $\mathcal{L}$ defined in [\[D\]](#D){reference-type="eqref" reference="D"} and the general fractional time derivative (cf. [@1; @2]) of the form $$\label{GMD} \int_{-\infty}^{t} [u(t)-u(s)]\mathcal{K}(t,s) ds,$$ provided that the kernel $\mathcal{K}$ here and the kernel $K$ in [\[D\]](#D){reference-type="eqref" reference="D"} possess suitable radial decreasing properties. 
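To illustrate the content of the Liouville theorem for $\partial_t^\alpha$ numerically (our own illustration, not part of the proofs; quadrature parameters are ad hoc), one can check that a non-constant bounded function is not annihilated by $\partial_t^\alpha$: the classical formula $\partial_t^\alpha \cos t=\cos(t+\alpha\pi/2)$ gives, at $t=0$ (a global maximum of $\cos$), the positive value $\cos(\alpha\pi/2)$, which is also consistent with Observation A above. By contrast, for a constant function the integrand in the definition of $\partial_t^\alpha$ vanishes identically, so $\partial_t^\alpha c\equiv 0$.

```python
import numpy as np
from math import gamma, cos, pi

def marchaud_cos_at_zero(alpha, R=500.0, N=400_000, p=3):
    """Approximate the left Marchaud derivative of cos(t) at t = 0:
    C_alpha * int_0^inf (cos 0 - cos(-r)) / r^{1+alpha} dr.
    The mesh r_i = R (i/N)^p is graded toward r = 0; the tail over
    (R, inf) contributes R^{-alpha}/alpha from the constant part, the
    oscillatory remainder being O(R^{-1-alpha})."""
    i = np.arange(1, N + 1)
    r = R * (i / N) ** p
    f = (1.0 - np.cos(r)) / r ** (1.0 + alpha)
    core = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))  # trapezoid rule
    I = core + R ** (-alpha) / alpha
    return alpha / gamma(1.0 - alpha) * I

alpha = 0.5
# expected: cos(alpha * pi / 2), strictly positive for 0 < alpha < 1
print(marchaud_cos_at_zero(alpha), cos(alpha * pi / 2))
```

Since the output is nonzero, $\cos t$ is not a solution of $\partial^\alpha_t u=0$; the Liouville theorem established below asserts that, among bounded $C^1$ functions, only constants are.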
Before presenting the main results of this paper, we introduce the notation that will be used throughout the subsequent sections. Let $x_1$ be any given direction in $\mathbb{R}^n$, let $$T_\lambda = \{x=(x_1, x') \in \mathbb{R}^n \mid x_1 = \lambda\},\quad \lambda\in\mathbb{R},$$ be the moving plane perpendicular to the $x_1$-axis, let $$\Sigma_\lambda = \{x \in \mathbb{R}^n \mid x_1 < \lambda\}$$ be the region to the left of the hyperplane $T_\lambda$ in $\mathbb{R}^n$, and let $$\Omega_\lambda = \Sigma_\lambda\cap B_1(0).$$ Furthermore, we denote the reflection of $x$ with respect to the hyperplane $T_\lambda$ by $$x^\lambda = (2\lambda - x_1, x_2, \ldots, x_n).$$ Letting $u_\lambda(x, t) = u(x^\lambda, t)$, we define $$w_\lambda(x, t) = u_\lambda(x, t) - u(x, t).$$ It is evident that $w_\lambda(x, t)$ is an antisymmetric function of $x$ with respect to the hyperplane $T_\lambda$. We are now ready to state the main results of this paper. **Theorem 1**. *Let $u(x,t)\in\left(C^{1,1}(B_1(0))\cap C(\overline{B_1(0)})\right)\times C^1(\mathbb{R})$ be a positive bounded solution of $$\label{1.11} \left\{ \begin{array}{ll} \partial^\alpha_t u(x,t)+(-\Delta)^s u(x,t) = f(u(x,t))\ \ &\mbox{in}\ \ B_1(0)\times\mathbb{R},\\ u(x,t)\equiv 0 \ \ &\mbox{in}\ \ B_1^c(0)\times\mathbb{R}. \end{array} \right.$$ Suppose that $f\in C^1([0,+\infty))$ satisfies $f(0)\geq 0$ and $f'(0)\leq 0.$ Then for each $t\in\mathbb{R},$ $u(\cdot,t)$ is radially symmetric and strictly decreasing about the origin in $B_1(0)$.* Theorem [Theorem 1](#thm2){reference-type="ref" reference="thm2"} is proved using the direct method of moving planes for the dual fractional operator $\partial_t^\alpha+(-\Delta)^s$, which relies primarily on the following narrow region principle for anti-symmetric functions. **Theorem 2**. *Let $\Omega$ be a bounded domain contained in the slab $\{x\in\Sigma_{\lambda}\mid\lambda-l<x_1<\lambda\}$. 
Assume that $w(x,t)\in\left(\mathcal{L}_{2s}\cap C^{1,1}_{loc}(\Omega)\right)\times\left(C^1(\mathbb{R})\cap \mathcal{L}^{-}_\alpha(\mathbb{R})\right)$ is bounded from below in $\overline{\Omega}\times \mathbb{R}$ and that, for each $t\in \mathbb{R},$ $w(\cdot,t)$ is lower semi-continuous up to the boundary $\partial \Omega$. Suppose that $$\label{1.1} \left\{ \begin{array}{ll} \partial^\alpha_t w(x,t)+(-\Delta)^s w(x,t)=c(x,t)w(x,t) ,~ &(x,t) \in \Omega\times\mathbb{R} , \\ w(x,t)\geq 0 , ~ &(x,t) \in (\Sigma_\lambda\backslash\Omega)\times\mathbb{R},\\ w(x,t)=-w(x^\lambda,t) , ~ &(x,t) \in \Sigma_\lambda\times\mathbb{R}, \end{array} \right.$$ where the coefficient function $c(x,t)$ is bounded from above.\ Then $$\label{1.2} w(x,t)\geq 0~\mbox{in}~\Sigma_\lambda\times\mathbb{R},$$ for sufficiently small $l$. Furthermore, if $w(x,t)$ vanishes at some point $(x^0,t_0)\in\Omega\times\mathbb{R},$ then $$\label{1.3} w(x,t)\equiv0~\mbox{in}~\mathbb{R}^n\times(-\infty,t_0].$$* It is worth noting that in Theorem [Theorem 2](#thm1){reference-type="ref" reference="thm1"}, $\Omega$ is a bounded narrow domain within $\Sigma_\lambda$ and $c(x,t)$ is merely bounded from above. However, on the whole unbounded region $\Sigma_\lambda$, restricted to the set where $w>0$ and especially when $c(x,t)$ is nonpositive, we also have a second maximum principle for anti-symmetric functions with respect to $x$. This serves as a fundamental tool in establishing the Liouville theorem for the dual fractional operator $\partial_t^\alpha+(-\Delta)^s$. **Theorem 3**. 
*Assume that $w(x,t)\in\left(\mathcal{L}_{2s}\cap C^{1,1}_{loc}(\Sigma_\lambda)\right)\times\left(C^1(\mathbb{R})\cap \mathcal{L}^{-}_\alpha(\mathbb{R})\right)$ is bounded from above in $\Sigma_\lambda\times \mathbb{R}$ and satisfies $$\label{2.1} \left\{ \begin{array}{ll} \partial^\alpha_t w(x,t)+(-\Delta)^s w(x,t)\leq 0 ,~ &\mbox{in}\ \ \{(x,t)\in \Sigma_\lambda\times\mathbb{R}\ | \ w(x,t)>0\}\,, \\[0.05cm] w(x,t)=-w(x^\lambda,t) , ~ & \mbox{in}\ \ \ \Sigma_\lambda\times\mathbb{R}. \end{array} \right.$$ Then $$\label{2.2} w(x,t)\leq 0~\mbox{in}~\Sigma_\lambda\times\mathbb{R}.$$* Since $w(x,t)=u(x^\lambda,t)-u(x,t)$ is an anti-symmetric function with respect to $x$, Theorem [Theorem 3](#thm3){reference-type="ref" reference="thm3"} only yields that a bounded entire solution $u(x,t)$ of the homogeneous equation associated with the operator $\partial_t^\alpha+(-\Delta)^s$ in the whole space $\mathbb{R}^n\times\mathbb{R}$ must be constant with respect to the spatial variable $x$, i.e. $u(x,t)=u(t)$. To further show that it is also constant with respect to the time variable $t$, it suffices to establish the following Liouville theorem involving the one-sided Marchaud fractional time operator $\partial^\alpha_t$. **Theorem 4**. *Let $u(t)\in C^1(\mathbb{R})$ be a bounded solution of $$\label{anti1} \partial^\alpha_t u(t) = 0\ \ \mbox{in}\ \ \mathbb{R}.$$ Then it must be constant.* As an immediate application of the maximum principle in unbounded domains stated in Theorem [Theorem 3](#thm3){reference-type="ref" reference="thm3"} and the Liouville theorem for the Marchaud operator $\partial^\alpha_t$ in $\mathbb{R}$, Theorem [Theorem 4](#thm5){reference-type="ref" reference="thm5"}, we derive the second main result of this paper --- the Liouville theorem for the dual fractional operator $\partial_t^\alpha+(-\Delta)^s$ in the whole space. **Theorem 5**. 
*Let $u(x,t)\in C_{loc}^{1,1}(\mathbb{R}^n)\times C^1(\mathbb{R})$ be a bounded solution of $$\label{2.9} \partial^\alpha_t u(x,t)+(-\Delta)^s u(x,t) = 0\ \ \mbox{in}\ \ \mathbb{R}^n\times\mathbb{R}.$$ Then it must be constant.* *Remark 6*. The above theorem can be regarded as a generalization of the classical Liouville theorems for elliptic and parabolic equations involving the (fractional) Laplacian in the whole space, where the boundedness condition may not be optimal but is still reasonable. Relaxing this boundedness condition is the focus of our upcoming work. The remainder of this paper is organized as follows. In Sec.2, we first establish two maximum principles: the narrow region principle (Theorem [Theorem 2](#thm1){reference-type="ref" reference="thm1"}) and the maximum principle in unbounded domains (Theorem [Theorem 3](#thm3){reference-type="ref" reference="thm3"}) for the dual fractional operator $\partial^\alpha_t +(-\Delta)^s$. Based on the narrow region principle, we then carry out a direct method of moving planes for the nonlocal operator $\partial^\alpha_t +(-\Delta)^s$ to prove the radial symmetry of solutions announced in Theorem [Theorem 1](#thm2){reference-type="ref" reference="thm2"} in Sec.3. Moving on to Sec.4, we first establish the Liouville theorem for the Marchaud operator $\partial^\alpha_t$ (Theorem [Theorem 4](#thm5){reference-type="ref" reference="thm5"}), and subsequently, in combination with the maximum principle in unbounded domains developed in Sec.2, we prove the Liouville theorem for the dual fractional operator $\partial_t^\alpha+(-\Delta)^s$ stated in Theorem [Theorem 5](#thm4){reference-type="ref" reference="thm4"}. Throughout this paper, we use $C$ to denote a general constant whose value may vary from line to line. 
# Maximum Principles for Antisymmetric Functions In this section, we establish various maximum principles for antisymmetric functions, including Theorem [Theorem 2](#thm1){reference-type="ref" reference="thm1"} and Theorem [Theorem 3](#thm3){reference-type="ref" reference="thm3"}. We explain in the subsequent sections how these principles play vital roles in carrying out the direct method of moving planes to establish the symmetry and monotonicity of solutions. ## Narrow region principle in bounded domains Our first key tool is a narrow region principle for antisymmetric functions in bounded domains, which plays a crucial role in deriving the radial symmetry and monotonicity of solutions for the dual fractional equation. *Proof of Theorem [Theorem 2](#thm1){reference-type="ref" reference="thm1"}.* First, we argue by contradiction to derive [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"}. If it fails, then since $\Omega$ is bounded, $w$ is bounded from below in $\Omega\times\mathbb{R}$, and $w(\cdot,t)$ is lower semi-continuous up to the boundary $\partial\Omega$ for each fixed $t\in\mathbb{R}$, there must exist $x(t)\in\Omega$ and $m>0$ such that $$\label{1.4}\inf\limits_{(x,t)\in\Omega\times\mathbb{R}}w(x,t)=\inf\limits_{t\in\mathbb{R}}w(x(t),t)=-m<0.$$ Then there exist a minimizing sequence $\{t_k\}\subset \mathbb{R}$ and a sequence $\{m_k\}\nearrow m$ such that $$w(x(t_k),t_k)=-m_k\searrow-m\ \mbox{ as }\ k\to\infty.$$ Since the infimum of $w$ with respect to $t$ may not be attained, we perturb $w$ with respect to $t$ so that the infimum $-m$ can be attained by the perturbed function. For this purpose, we introduce the following auxiliary function $$v_k(x,t)=w(x,t)-\varepsilon_k\eta_k(t),$$ where $\varepsilon_k=m-m_k$ and $\eta_k(t)=\eta(t-t_k)$ with $\eta\in C_0^\infty(-1,1)$, $0\leq\eta\leq1$, satisfying $$\eta( t) = \begin{cases} 1, & \lvert t\rvert\leq \frac{1}{2}, \\ 0, & \lvert t\rvert\geq 1. 
\end{cases}$$ Clearly $\mathrm{supp}\,\eta_k\subset (-1+t_k,1+t_k)$ and $\eta_k(t_k)=1$. By [\[1.4\]](#1.4){reference-type="eqref" reference="1.4"} and the exterior condition in [\[1.1\]](#1.1){reference-type="eqref" reference="1.1"}, we have $$\left.\begin{array}{r@{\ \ }c@{\ \ }ll} v_k(x(t_k),t_k)&=&-m\,, \\[0.15cm] v_k(x,t)=w(x,t)&\geq& -m\ \mbox{in}\ \Omega\times(\mathbb{R}\backslash (-1+t_k,1+t_k))\,, \\[0.15cm] v_k(x,t)\geq-\varepsilon_k\eta_k(t) &>& -m\ \mbox{in}\ (\Sigma_\lambda\backslash\Omega)\times \mathbb{R}\,. \\[0.05cm] \end{array}\right.$$ Since $w$ is lower semi-continuous on $\overline\Omega\times \mathbb{R}$, $v_k$ must attain its minimum value, which is at most $-m$, in $\Omega\times (-1+t_k,1+t_k)$; that is, $$\label{1.5} \exists\ \{(\bar{{x}}^k,\bar{t}_k)\}\subset\Omega\times (-1+t_k,1+t_k)\ \ s.t.\ \ -m-\varepsilon_k\leq v_k(\bar{{x}}^k,\bar{t}_k)=\inf\limits_{\Sigma_\lambda\times\mathbb{R}}v_k(x,t)\leq -m.$$ Consequently, $$-m\leq w(\bar{x}^k,\bar{t}_k)\leq-m_k<0.$$ Now applying [\[1.5\]](#1.5){reference-type="eqref" reference="1.5"}, the definition of $v_k$, and the anti-symmetry of $w$ in $x$, we derive $$\left.\begin{array}{r@{\ \ }c@{\ \ }ll} {\partial^\alpha_t v_k}(\bar{x}^k,\bar{t}_k)&=&C_\alpha\displaystyle\int_{-\infty}^{\bar{t}_k}\displaystyle\frac{v_k(\bar{x}^k,\bar{t}_k)-v_k(\bar{x}^k,\tau)} {(\bar{t}_k-\tau)^{1+\alpha}}d\tau\leq 0\,. 
\\[0.2cm] (-\Delta)^s v_k(\bar{x}^k,\bar{t}_k)&=&C_{n,s}P.V.\displaystyle\int_{\mathbb{R}^{n}}\frac{v_k(\bar{x}^k,\bar{t}_k)-v_k(y,\bar{t}_k)}{\lvert\bar{x}^k-y\rvert^{n+2s}}dy \\[0.3cm] &=&C_{n,s}P.V.\displaystyle\int_{\Sigma_\lambda}\frac{v_k(\bar{x}^k,\bar{t}_k)-v_k(y,\bar{t}_k)}{\lvert\bar{x}^k-y\rvert^{n+2s}}dy+C_{n,s}\displaystyle\int_{\Sigma_\lambda}\frac{v_k(\bar{x}^k,\bar{t}_k)-v_k(y^\lambda,\bar{t}_k)}{\lvert\bar{x}^k-y^\lambda\rvert^{n+2s}}dy \\[0.3cm] &\leq&C_{n,s}\displaystyle\int_{\Sigma_\lambda}\frac{2v_k(\bar{x}^k,\bar{t}_k)-v_k(y,\bar{t}_k)-v_k(y^\lambda,\bar{t}_k)}{\lvert\bar{x}^k-y^{\lambda}\rvert^{n+2s}}dy \\[0.3cm] &=&2C_{n,s}w(\bar{x}^k,\bar{t}_k)\displaystyle\int_{\Sigma_\lambda}\frac{1}{\lvert\bar{x}^k-y^{\lambda}\rvert^{n+2s}}dy \\ &\leq&-\displaystyle\frac{Cm_k}{l^{2s}}.\end{array}\right.$$ It follows that $$\label{1.6}\partial^\alpha_t v_k(\bar{x}^k,\bar{t}_k)+(-\Delta)^s v_k(\bar{x}^k,\bar{t}_k)\leq-\displaystyle\frac{Cm_k}{l^{2s}}.$$ In addition, substituting $v_k$ into the differential equation in [\[1.1\]](#1.1){reference-type="eqref" reference="1.1"} and using the assumption $c(x,t)\leq C_0$, we obtain $$\label{1.7} \partial^\alpha_t v_k(\bar{x}^k,\bar{t}_k)+(-\Delta)^s v_k(\bar{x}^k,\bar{t}_k)=c(\bar{x}^k,\bar{t}_k)w(\bar{x}^k,\bar{t}_k)-\varepsilon_k\partial^\alpha_t\eta_k(\bar{t}_k)\geq-C_0m-C\varepsilon_k.$$ Then a combination of [\[1.6\]](#1.6){reference-type="eqref" reference="1.6"} and [\[1.7\]](#1.7){reference-type="eqref" reference="1.7"} yields that $$-C_0m\leq-\displaystyle\frac{Cm_k}{l^{2s}}+C\varepsilon_k\to-\displaystyle\frac{Cm}{l^{2s}},$$ as $k\to\infty,$ which is a contradiction for sufficiently small $l$. Hence we complete the proof of [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"}. Next, we show the validity of [\[1.3\]](#1.3){reference-type="eqref" reference="1.3"}. 
If $w(x,t)$ vanishes at $(x^0,t_0)\in\Omega\times\mathbb{R},$ then by [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"}, we derive that $$w(x^0,t_0)=\min\limits_{\Sigma_\lambda\times\mathbb{R}}w(x,t)=0.$$ The equation in [\[1.1\]](#1.1){reference-type="eqref" reference="1.1"} then implies that $$\label{1.8}\partial^\alpha_t w(x^0,t_0)+(-\Delta)^s w(x^0,t_0)=0.$$ On the other hand, since $w(x,t)\geq 0$ in $\Sigma_\lambda\times\mathbb{R}$ and $$\lvert x^0-y^{\lambda}\rvert>\lvert x^0-y\rvert~\mbox{for~} y\in\Sigma_{\lambda},$$ we obtain $$\label{1.9} \left.\begin{array}{r@{\ \ }c@{\ \ }ll} (-\Delta)^s w(x^0,t_0)&=&C_{n,s}P.V.\displaystyle\int_{\mathbb{R}^{n}}\frac{-w(y,t_0)}{\lvert x^0-y\rvert^{n+2s}}dy \\[0.3cm] &=&C_{n,s}P.V.\displaystyle\int_{\Sigma_\lambda}w(y,t_0)\left[\frac{1}{\lvert x^0-y^{\lambda}\rvert^{n+2s}}-\frac{1}{\lvert x^0-y\rvert^{n+2s}}\right]dy \\[0.3cm]&\leq&0 \end{array}\right.$$ and $$\label{1.10} \left.\begin{array}{r@{\ \ }c@{\ \ }ll} \partial^\alpha_t w(x^0,t_0)&=&C_\alpha\displaystyle\int_{-\infty}^{t_0}\frac{-w(x^0,\tau)}{(t_0-\tau)^{1+\alpha}}d\tau\leq0.\\[0.3cm] \end{array}\right.$$ So it follows from [\[1.8\]](#1.8){reference-type="eqref" reference="1.8"}, [\[1.9\]](#1.9){reference-type="eqref" reference="1.9"} and [\[1.10\]](#1.10){reference-type="eqref" reference="1.10"} that $$0=\partial^\alpha_t w(x^0,t_0)=C_\alpha\displaystyle\int_{-\infty}^{t_0}\frac{-w(x^0,\tau)}{(t_0-\tau)^{1+\alpha}}d\tau,$$ and hence we must have $$w(x^0,\tau)\equiv 0=\min_{\Sigma_\lambda\times\mathbb{R}}w(x,t) \ \mbox{ for all } \tau\in (-\infty,t_0],$$ that is, for each $\tau\in(-\infty,t_0]$, $w$ attains its minimum value zero at $(x^0,\tau)\in\Omega\times\mathbb{R}$. Now, repeating the previous argument, we further obtain $$\left.\begin{array}{r@{\ \ }c@{\ \ }ll} 0=(-\Delta)^s w(x^0,\tau) &=&C_{n,s}P.V.\displaystyle\int_{\Sigma_\lambda}w(y,\tau)\left[\frac{1}{\lvert x^0-y^{\lambda}\rvert^{n+2s}}-\frac{1}{\lvert x^0-y\rvert^{n+2s}}\right]dy. 
\end{array}\right.$$ Together with the anti-symmetry of $w(y,\tau)$ with respect to $y$, we derive $$w(y,\tau)\equiv 0\ \mbox{ for all } y\in \mathbb{R}^n.$$ Therefore, $$w(y,\tau)\equiv 0 \mbox{~in~} \mathbb{R}^n\times(-\infty,t_0].$$ This completes the proof of Theorem [Theorem 2](#thm1){reference-type="ref" reference="thm1"}. ◻ ## Maximum principle in unbounded domains We now prove Theorem [Theorem 3](#thm3){reference-type="ref" reference="thm3"}, the maximum principle for antisymmetric functions in unbounded domains. This is also an essential ingredient in proving the Liouville theorem for the dual fractional operator. *Proof of Theorem [Theorem 3](#thm3){reference-type="ref" reference="thm3"}.* We argue by contradiction. If [\[2.2\]](#2.2){reference-type="eqref" reference="2.2"} is not true, then since $w(x,t)$ is bounded from above in $\Sigma_\lambda\times \mathbb{R}$, there exists a constant $A>0$ such that $$\label{2.3} \sup\limits_{(x,t)\in \Sigma_\lambda\times \mathbb{R}}w(x,t):=A>0.$$ Since the domain $\Sigma_\lambda\times\mathbb{R}$ is unbounded, the supremum of $w(x,t)$ may not be attained in $\Sigma_\lambda\times\mathbb{R}$; however, by [\[2.3\]](#2.3){reference-type="eqref" reference="2.3"}, there exists a maximizing sequence $\{(x^k,t_k)\}\subset \Sigma_\lambda\times\mathbb{R}$ such that $$w(x^k,t_k)\to A\ \mbox{as}\ k\to \infty.$$ More precisely, there exists a sequence $\{\varepsilon_k\}\searrow 0$ such that $$\label{2.4} w(x^k,t_k)=A-\varepsilon_k>0.$$ Now we introduce a perturbation of $w$ near $(x^k,t_k)$ as follows: $$\label{2.5} v_k(x,t)=w(x,t)+\varepsilon_k\eta_k(x,t) \ \mbox{in}\ \mathbb{R}^n\times \mathbb{R},$$ where $$\eta_k(x,t)=\eta\left(\displaystyle\frac{x-x^k}{r_k/2},\displaystyle\frac{t-t_k}{(r_k/2)^{2s/\alpha}}\right),$$ with $r_k=\mathrm{dist}(x^k,T_{\lambda})>0$, and $\eta\in C^\infty_0(\mathbb{R}^n\times\mathbb{R})$ is a smooth cut-off function satisfying $$\left\{\begin{array}{r@{\ \ }c@{\ \ }ll} 0\leq \eta\leq 1 &\mbox{in}&\ \ 
\mathbb{R}^n\times\mathbb{R}\,, \\[0.05cm] \eta= 1 &\mbox{in}&\ \ B_{1/2}(0)\times[-\displaystyle\frac{1}{2},\displaystyle\frac{1}{2}]\,, \\[0.05cm] \eta= 0 &\mbox{in}&\ \ \left(\mathbb{R}^n\times\mathbb{R}\right) \backslash\left(B_{1}(0)\times[-1,1]\right)\,. \\[0.05cm] \end{array}\right.$$ Denote $$Q_k(x^k,t_k):=B_{r_k/2}(x^k)\times\left[t_k-\left(\displaystyle\frac{r_k}{2}\right)^{2s/\alpha}, t_k+\left(\displaystyle\frac{r_k}{2}\right)^{2s/\alpha}\right]\subset\Sigma_\lambda\times\mathbb{R}.$$ By [\[2.3\]](#2.3){reference-type="eqref" reference="2.3"}, [\[2.4\]](#2.4){reference-type="eqref" reference="2.4"} and [\[2.5\]](#2.5){reference-type="eqref" reference="2.5"}, we have $$\left.\begin{array}{r@{\ \ }c@{\ \ }ll} v_k(x^k,t_k)&=&A\,, \\[0.15cm] v_k(x,t)=w(x,t)&\leq& A\ \mbox{in}\ \left(\Sigma_\lambda\times\mathbb{R}\right) \backslash Q_k(x^k,t_k)\,, \\[0.15cm] v_k(x,t)=\varepsilon_k\eta_k(x,t) &<& A\ \mbox{on}\ \ T_\lambda\times\mathbb{R}\,. \\[0.05cm] \end{array}\right.$$ Since $w$ is upper semi-continuous on $\overline{\Sigma}_\lambda\times \mathbb{R}$, $v_k$ must attain its maximum value, which is at least $A$, on $\overline{Q_k(x^k,t_k)}\subset{\Sigma}_\lambda\times \mathbb{R}$; that is, $$\label{2.6} \exists\ \{(\bar{{x}}^k,\bar{t}_k)\}\subset\overline{Q_k(x^k,t_k)}\ \ \ s.t.\ \ \ A+\varepsilon_k\geq v_k(\bar{{x}}^k,\bar{t}_k)=\sup\limits_{\Sigma_\lambda\times\mathbb{R}}v_k(x,t)\geq A,$$ where we have used [\[2.3\]](#2.3){reference-type="eqref" reference="2.3"} and [\[2.5\]](#2.5){reference-type="eqref" reference="2.5"}. Now, applying [\[2.6\]](#2.6){reference-type="eqref" reference="2.6"}, we derive $$\left.\begin{array}{r@{\ \ }c@{\ \ }ll} w(\bar{x}^k,\bar{t}_k)&\geq&A-\varepsilon_k>0\,, \\[0.2cm] {\partial^\alpha_t v_k}(\bar{x}^k,\bar{t}_k)&=&C_\alpha\displaystyle\int_{-\infty}^{\bar{t}_k}\displaystyle\frac{v_k(\bar{x}^k,\bar{t}_k)-v_k(\bar{x}^k,\tau)} {(\bar{t}_k-\tau)^{1+\alpha}}d\tau\geq 0\,.
\\[0.2cm] \end{array}\right.$$ Next, we derive a contradiction by estimating the value of $(-\Delta)^s v_k$ at the maximum point $(\bar{x}^k,\bar{t}_k)$ of $v_k$ in $\Sigma_\lambda\times\mathbb{R}.$ On the one hand, taking into account the differential inequality in [\[2.1\]](#2.1){reference-type="eqref" reference="2.1"}, [\[2.5\]](#2.5){reference-type="eqref" reference="2.5"}, and the translation and scaling invariance of the operator $\partial_t^\alpha+(-\Delta)^s$ (see [\[TS\]](#TS){reference-type="eqref" reference="TS"}), we obtain $$\begin{aligned} \nonumber\label{2.7} (-\Delta)^s v_k(\bar{x}^k,\bar{t}_k)&=&(-\Delta)^s w(\bar{x}^k,\bar{t}_k)+\varepsilon_k(-\Delta)^s \eta_k(\bar{x}^k,\bar{t}_k) \\ \nonumber &\leq& -\partial^\alpha_t w(\bar{x}^k,\bar{t}_k)+\varepsilon_k(-\Delta)^s \eta_k(\bar{x}^k,\bar{t}_k) \\ \nonumber &\leq&\varepsilon_k\left[ {\partial^\alpha_t \eta_k} (\bar{x}^k,\bar{t}_k)+(-\Delta)^s \eta_k(\bar{x}^k,\bar{t}_k)\right] \\ &\leq& C\displaystyle\frac{\varepsilon_k }{r_k^{2s}}.\end{aligned}$$ On the other hand, starting from the definition of the operator $(-\Delta)^s$ and utilizing the antisymmetry of $w$ in $x$, the fact that $\lvert\bar{x}^k-y^\lambda\rvert>\lvert\bar{x}^k-y\rvert$, and [\[2.6\]](#2.6){reference-type="eqref" reference="2.6"}, we compute $$\label{2.8} \left.\begin{array}{r@{\ \ }c@{\ \ }ll} (-\Delta)^s v_k(\bar{x}^k,\bar{t}_k)&=&C_{n,s}P.V.\displaystyle\int_{\mathbb{R}^{n}}\frac{v_k(\bar{x}^k,\bar{t}_k)-v_k(y,\bar{t}_k)}{\lvert\bar{x}^k-y\rvert^{n+2s}}dy \\[0.3cm] &=&C_{n,s}P.V.\displaystyle\int_{\Sigma_\lambda}\frac{v_k(\bar{x}^k,\bar{t}_k)-v_k(y,\bar{t}_k)}{\lvert\bar{x}^k-y\rvert^{n+2s}}dy+C_{n,s}\displaystyle\int_{\Sigma_\lambda}\frac{v_k(\bar{x}^k,\bar{t}_k)-v_k(y^\lambda,\bar{t}_k)}{\lvert\bar{x}^k-y^\lambda\rvert^{n+2s}}dy
\\[0.3cm] &\geq&C_{n,s}\displaystyle\int_{\Sigma_\lambda}\frac{2v_k(\bar{x}^k,\bar{t}_k)-v_k(y,\bar{t}_k)-v_k(y^\lambda,\bar{t}_k)}{\lvert\bar{x}^k-y^{\lambda}\rvert^{n+2s}}dy \\[0.3cm] &\geq&2C_{n,s}\left(v_k(\bar{x}^k,\bar{t}_k)-\varepsilon_k\right)\displaystyle\int_{\Sigma_\lambda}\frac{1}{\lvert\bar{x}^k-y^{\lambda}\rvert^{n+2s}}dy \\ &\geq&\displaystyle\frac{C(A-\varepsilon_k)}{r_k^{2s}}.\end{array}\right.$$ Finally, a combination of [\[2.7\]](#2.7){reference-type="eqref" reference="2.7"} and [\[2.8\]](#2.8){reference-type="eqref" reference="2.8"} yields that $$A-\varepsilon_k\leq C\varepsilon_k,$$ which leads to a contradiction for sufficiently large $k$. Hence we conclude that [\[2.2\]](#2.2){reference-type="eqref" reference="2.2"} is valid. ◻ # Radial symmetry of solutions In this section, we employ the narrow region principle (Theorem [Theorem 2](#thm1){reference-type="ref" reference="thm1"}) as a fundamental tool to initiate the direct method of moving planes. Then, by combining perturbation techniques and limit arguments, we show that, under suitable assumptions on the nonlinear term $f$, any solution $u(\cdot,t)$ of the dual fractional equation $$\partial^\alpha_t u(x,t)+(-\Delta)^s u(x,t) = f(u(x,t))\ \ \mbox{in}\ \ B_1(0)\times\mathbb{R},$$ with the vanishing exterior condition is radially symmetric and strictly decreasing with respect to the origin in the unit ball. *Proof of Theorem [Theorem 1](#thm2){reference-type="ref" reference="thm2"}.* Let $x_1$ be an arbitrary direction. For any $\lambda\in\mathbb{R},$ we define $T_\lambda,\ \Sigma_{\lambda},\ \Omega_{\lambda},\ x^{\lambda},\ w_\lambda$ as described in Section 1.
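The moving-plane computations above and below repeatedly use the elementary fact that reflecting $y$ across $T_\lambda$ increases its distance to any point $x$ on the same side of the plane: with the convention (assumed here, following Section 1) that $\Sigma_\lambda=\{x\in\mathbb{R}^n: x_1<\lambda\}$ and $y^\lambda=(2\lambda-y_1,y_2,\dots,y_n)$, one has $\lvert x-y^\lambda\rvert^2-\lvert x-y\rvert^2=4(\lambda-x_1)(\lambda-y_1)>0$. A small numerical sanity check of this identity (a sketch; the function names are ours):

```python
import random

def reflect(x, lam):
    """Reflection x^lam of x across the hyperplane T_lam = {x_1 = lam}."""
    return [2 * lam - x[0]] + list(x[1:])

def dist2(a, b):
    """Squared Euclidean distance."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

random.seed(0)
lam = -0.3
for _ in range(1000):
    # Sample x, y in Sigma_lam = {x_1 < lam} (assumed convention).
    x = [lam - random.random() - 1e-6, random.uniform(-1, 1), random.uniform(-1, 1)]
    y = [lam - random.random() - 1e-6, random.uniform(-1, 1), random.uniform(-1, 1)]
    gap = dist2(x, reflect(y, lam)) - dist2(x, y)
    # |x - y^lam|^2 - |x - y|^2 = 4 (lam - x_1)(lam - y_1) > 0
    assert abs(gap - 4 * (lam - x[0]) * (lam - y[0])) < 1e-9
    assert gap > 0
```

The strict inequality is what gives the kernel differences $\lvert x-y^\lambda\rvert^{-n-2s}-\lvert x-y\rvert^{-n-2s}$ appearing in the proofs their fixed sign.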
Substituting the definition of $w_\lambda$ into the equation [\[1.11\]](#1.11){reference-type="eqref" reference="1.11"}, we have $$\label{1.12} \left\{ \begin{array}{ll} \partial^\alpha_t w_\lambda(x,t)+(-\Delta)^s w_\lambda(x,t)=c_\lambda(x,t)w_{\lambda}(x,t) ,~ &(x,t) \in \Omega_\lambda\times\mathbb{R} , \\ w_\lambda(x,t)\geq 0 , ~ &(x,t) \in (\Sigma_\lambda\backslash\Omega_\lambda)\times\mathbb{R},\\ w_\lambda(x,t)=-w_\lambda(x^\lambda,t) , ~ &(x,t) \in \Sigma_\lambda\times\mathbb{R}, \end{array} \right.$$ where the coefficient function $$c_\lambda(x,t)=\displaystyle\frac{f(u_\lambda(x,t))-f(u(x,t))}{u_\lambda(x,t)-u(x,t)}$$ is bounded in $\Omega_\lambda\times\mathbb{R}$ due to $f\in C^1\left([0,+\infty)\right).$ Now we carry out the direct method of moving planes, which is divided into two steps as outlined below. $\rm{\mathbf{Step~1.}}$ Start moving the plane $T_\lambda$ from $x_1=-1$ to the right along the $x_1$-axis. When $\lambda$ is sufficiently close to $-1$, $\Omega_\lambda$ is a narrow region. Then, by applying the narrow region principle, Theorem [Theorem 2](#thm1){reference-type="ref" reference="thm1"}, to problem [\[1.12\]](#1.12){reference-type="eqref" reference="1.12"}, we deduce that $$\label{1.13} w_\lambda(x,t)\geq 0 \mbox{~in~} \Sigma_\lambda\times\mathbb{R}.$$ This provides a starting point to move the plane $T_\lambda.$ $\rm{\mathbf{Step~2.}}$ Continue to move the plane $T_\lambda$ to the right along the $x_1$-axis as long as inequality [\[1.13\]](#1.13){reference-type="eqref" reference="1.13"} holds, until it reaches its limiting position.
Denote $$\lambda_0:=\sup\{\lambda<0\mid w_\mu(x,t)\geq 0, (x,t)\in\Sigma_{\mu}\times\mathbb{R}\mbox{~for~any~}\mu\leq \lambda\}.$$ We are going to employ a contradiction argument to verify that $$\label{1.14} \lambda_0=0.$$ Suppose, on the contrary, that $\lambda_0<0.$ According to the definition of $\lambda_0$, there exist a sequence of negative numbers $\{\lambda_k\}\searrow \lambda_0$ and a sequence of positive numbers $\{m_k\}\searrow0$ such that $$\inf\limits_{\Omega_{\lambda_k}\times\mathbb{R}}w_{\lambda_k}(x,t)=\inf\limits_{\Sigma_{\lambda_k}\times\mathbb{R}}w_{\lambda_k}(x,t)=-m_k .$$ This implies that for each fixed $k,$ there exists a point $(x^{k},t_{k}) \in\Omega_{\lambda_k}\times\mathbb{R}$ such that $$-m_k\leq w_{\lambda_k}(x^k,t_k)= -m_k+m_k^2<0.$$ Since $\mathbb{R}$ is an unbounded interval, the infimum of $w_{\lambda_k}$ with respect to $t$ may not be attained. In order to estimate $\partial_t^\alpha w_{\lambda_k}$, we need to introduce a perturbation of $w_{\lambda_k}$ near $t_k$ as follows: $$\label{1.15} v_{k}(x,t)=w_{\lambda_k}(x,t)-m_k^2\eta_k(t) \ \mbox{in}\ \Sigma_{\lambda_k}\times \mathbb{R},$$ where $\eta_k(t)=\eta(t-t_k)$ and $\eta\in C_0^\infty(-1,1)$ is a cut-off function as in the proof of Theorem [Theorem 3](#thm3){reference-type="ref" reference="thm3"}. Based on the above analysis and the exterior condition in [\[1.12\]](#1.12){reference-type="eqref" reference="1.12"} satisfied by $w_{\lambda_k}$, we have $$\left\{\begin{array}{r@{\ \ }c@{\ \ }ll} v_{k}(x^k,t_k)&=&-m_k\,, \\[0.15cm] v_{k}(x,t)=w_{\lambda_k}(x,t)&\geq& -m_k\ \mbox{in}\ \Omega_{\lambda_k}\times(\mathbb{R}\backslash (-1+t_k,1+t_k))\,, \\[0.15cm] v_{k}(x,t)\geq-m_k^2\eta_k(t) &>& -m_k\ \mbox{in}\ (\Sigma_{\lambda_k}\backslash\Omega_{\lambda_k})\times \mathbb{R}\,.
\\[0.05cm] \end{array}\right.$$ Since $u$ is continuous on $\overline\Omega_{\lambda_k}\times \mathbb{R}$, $v_{k}$ must attain its minimum value, which is at most $-m_k$, in $\Omega_{\lambda_k}\times (-1+t_k,1+t_k)$; that is, $$\exists\ \{(\bar{{x}}^k,\bar{t}_k)\}\subset\Omega_{\lambda_k}\times (-1+t_k,1+t_k)\ \ s.t.\ \ -m_k-m_k^2\leq v_{k}(\bar{{x}}^k,\bar{t}_k)=\inf\limits_{\Sigma_{\lambda_k}\times\mathbb{R}}v_{k}(x,t)\leq -m_k,$$ which implies that $$\label{1.16}-m_k\leq w_{\lambda_k}(\bar{x}^k,\bar{t}_k)\leq-m_k+m_k^2<0.$$ Proceeding similarly to the proof of Theorem [Theorem 2](#thm1){reference-type="ref" reference="thm1"}, we have $$\label{1.24}\left. \begin{array}{ll}\partial^\alpha_t v_{k}(\bar{x}^k,\bar{t}_k)+(-\Delta)^s v_{k}(\bar{x}^k,\bar{t}_k)&\leq 2C_{n,s}w_{\lambda_k}(\bar{x}^k,\bar{t}_k)\displaystyle\int_{\Sigma_{\lambda_k}}\frac{1}{\lvert\bar{x}^k-y^{\lambda_k}\rvert^{n+2s}}dy \\[0.3cm]&\leq-\displaystyle\frac{C(m_k-m_k^2)}{{\mathrm{dist}}(\bar{x}^k,T_{\lambda_k})^{2s}} .\end{array} \right.$$ Furthermore, it follows from the differential equation in [\[1.12\]](#1.12){reference-type="eqref" reference="1.12"} and [\[1.16\]](#1.16){reference-type="eqref" reference="1.16"} that $$\left. \begin{array}{ll} \partial^\alpha_t v_{k}(\bar{x}^k,\bar{t}_k)+(-\Delta)^s v_{k}(\bar{x}^k,\bar{t}_k)&=c_{\lambda_k}(\bar{x}^k,\bar{t}_k)w_{\lambda_k}(\bar{x}^k,\bar{t}_k)-m_k^2\partial^\alpha_t\eta_k(\bar{t}_k) \\ &\geq-c_{\lambda_k}(\bar{x}^k,\bar{t}_k)m_k-Cm_k^2. \end{array} \right.$$ Here we may assume $c_{\lambda_k}(\bar{x}^k,\bar{t}_k)\geq 0$ without loss of generality. Otherwise, a contradiction can be derived from [\[1.24\]](#1.24){reference-type="eqref" reference="1.24"}.
Consequently, $$\label{dist}-c_{\lambda_k}(\bar{x}^k,\bar{t}_k)-Cm_k\leq -\displaystyle\frac{C(1-m_k)}{{\mathrm{dist}}(\bar{x}^k,T_{\lambda_k})^{2s}} \leq-\displaystyle\frac{C(1-m_k)}{2^{2s}}.$$ By virtue of $m_{k}\to 0$ as $k\to\infty$, we derive that for sufficiently large $k$, $$c_{\lambda_k}(\bar{x}^k,\bar{t}_k)\geq C_0>0.$$ By the mean value theorem, this implies that there exists some $\xi_{k}\in\left(u_{\lambda_k}(\bar{x}^k,\bar{t}_k),u(\bar{x}^k,\bar{t}_k)\right)$ such that $$f'(\xi_{k})\geq C_0.$$ Thus, owing to [\[1.16\]](#1.16){reference-type="eqref" reference="1.16"} and the assumption $f'(0)\leq 0$, after extracting a subsequence, we obtain $$\label{1.17}u(\bar{x}^k,\bar{t}_k)\geq C_1>0,$$ for sufficiently large $k$. In order to simplify the notation, we denote $$\tilde{w}_{k}(x,t)=w_{\lambda_k}(x,t+\bar{t}_k)\mbox{~and~}\tilde{c}_{k}(x,t)=c_{\lambda_k}(x,t+\bar{t}_k).$$ It follows from the Arzelà-Ascoli theorem that there exist two continuous functions $\tilde{w}$ and $\tilde{c}$ such that $$\lim_{{k \to \infty}} \tilde{w}_{k}(x, t) = \tilde{w}(x, t)$$ and $$\lim_{{k \to \infty}} \tilde{c}_{k}(x, t) = \tilde{c}(x, t)$$ uniformly in $B_1(0)\times\mathbb{R}.$ Moreover, taking into account the equation $${\partial_t^\alpha}\tilde{w}_{k}(x, t) +(- \Delta)^s \tilde{w}_{k}(x, t) = \tilde{c}_{k}(x, t)\tilde{w}_{k}(x, t), \quad \text{in } \Omega_{\lambda_k} \times \mathbb{R},$$ we conclude that the limit function $\tilde{w}$ satisfies $$\label{1.18} {\partial_t^\alpha}\tilde{w}(x, t) +(- \Delta)^s \tilde{w}(x, t) = \tilde{c}(x, t)\tilde{w}(x, t), \quad \text{in } \Omega_{\lambda_0} \times \mathbb{R}.$$ As mentioned in [\[dist\]](#dist){reference-type="eqref" reference="dist"}, combining the uniform boundedness of $c_{\lambda_k}(\bar{x}^k,\bar{t}_k)$ with $\Omega_{\lambda_k}\subset B_1(0)$ and $\lambda_k\to\lambda_0,$ we may assume that $\bar{x}^k\to x^0\in {\Sigma}_{\lambda_0}\cap \overline{B}_1(0).$ Then applying [\[1.16\]](#1.16){reference-type="eqref" reference="1.16"} and the
continuity of $u$, we obtain $$\label{mini}\tilde{w}(x^0, 0)=0=\inf\limits_{\Sigma_{\lambda_0}\times\mathbb{R}}{w}_{\lambda_0}(x,t)=\inf\limits_{\Sigma_{\lambda_0}\times\mathbb{R}}\tilde{w}(x,t).$$ Substituting this into the limit equation [\[1.18\]](#1.18){reference-type="eqref" reference="1.18"} yields $$\left. \begin{array}{ll}0&={\partial_t^\alpha}\tilde{w}(x^0, 0) +(- \Delta)^s \tilde{w}(x^0,0)\\[0.3cm] &=C_\alpha\displaystyle\int_{-\infty}^0\displaystyle\frac{-\tilde{w}(x^0,\tau)}{(-\tau)^{1+\alpha}}d\tau +C_{n,s}P.V.\displaystyle\int_{\Sigma_{\lambda_0}}\tilde{w}(y,0)\left[\frac{1}{\lvert x^0-y^{\lambda_0}\rvert^{n+2s}}-\frac{1}{\lvert x^0-y\rvert^{n+2s}}\right]dy.\end{array} \right.$$ As a result of [\[mini\]](#mini){reference-type="eqref" reference="mini"}, the antisymmetry of $\tilde{w}(x, t)$ with respect to $x$ and the fact that $\lvert x^0-y^{\lambda_0}\rvert>\lvert x^0-y\rvert$, we conclude $$\label{1.19}\tilde{w}(x, t) \equiv 0,~~(x, t) \in \mathbb{R}^n \times (-\infty, 0].$$ Correspondingly, we define $$u_k(x, t) = u(x, t + \bar{t}_k).$$ Similarly to the previous discussion regarding $\tilde{w}_{k}$, we also have $$\lim_{k \to \infty} u_k(x, t) = \tilde{u}(x, t),$$ and $$\label{1.20} {\partial_t^\alpha}\tilde{u}(x, t) +(- \Delta)^s \tilde{u}(x, t) = f\left(\tilde{u}(x, t)\right) \quad \text{in } B_1(0) \times \mathbb{R}.$$ In addition, by using [\[1.17\]](#1.17){reference-type="eqref" reference="1.17"}, we infer that $$\label{1.21}\tilde{u}(x^0, 0) = \lim_{k \to \infty} u(\bar{x}^k, \bar{t}_k)\geq C_1 > 0.$$ Next, we will show that $$\label{1.22}\tilde{u}(x,0)> 0~\mbox{in}~B_1(0).$$ If this is not true, then, since $\tilde{u}\geq 0$ in $\mathbb{R}^n\times\mathbb{R}$, there exists a point $\bar{x} \in B_1(0)$ such that $$\tilde{u}(\bar{x}, 0) =\inf\limits_{\mathbb{R}^n\times\mathbb{R}}\tilde{u}(x,t)= 0,$$ which, together with the limit equation [\[1.20\]](#1.20){reference-type="eqref" reference="1.20"} and the assumption $f(0)\geq0$, leads to
$$0=(- \Delta)^s \tilde{u}(\bar{x},0)=C_{n,s}P.V.\displaystyle\int_{\mathbb{R}^n}\frac{-\tilde{u}(y,0)}{\lvert\bar{x}-y\rvert^{n+2s}}dy.$$ Indeed, at the global minimum point $(\bar{x},0)$ we have $\partial_t^\alpha\tilde{u}(\bar{x},0)\leq 0$ and $(-\Delta)^s\tilde{u}(\bar{x},0)\leq 0$, while their sum equals $f(\tilde{u}(\bar{x},0))=f(0)\geq 0$, so both terms must vanish. Thus, $\tilde{u}(x,0)\equiv 0\mbox{~in~} \mathbb{R}^n$ due to $\tilde{u}\geq 0.$ This contradicts [\[1.21\]](#1.21){reference-type="eqref" reference="1.21"} and thus verifies the assertion [\[1.22\]](#1.22){reference-type="eqref" reference="1.22"}. Due to the condition $\tilde{u}(x, 0) \equiv 0$ in $B_{1}^{c}(0)$, [\[1.22\]](#1.22){reference-type="eqref" reference="1.22"} and $\lambda_0 < 0$, we further conclude that there must exist a point $\tilde{x} \in B_{1}^{c}(0)$ such that $\tilde{x}^{\lambda_0} \in B_1(0)$ and $$\tilde{w}(\tilde{x}, 0) = \tilde{u}(\tilde{x}^{\lambda_0}, 0) - \tilde{u}(\tilde{x}, 0) = \tilde{u}(\tilde{x}^{\lambda_0}, 0) > 0.$$ However, this contradicts [\[1.19\]](#1.19){reference-type="eqref" reference="1.19"}. Hence, we have established that the limiting position must be $T_0$. By choosing $x_1$ arbitrarily and considering the definition of $\lambda_0$, we deduce that $u(\cdot,t)$ must be radially symmetric and monotone nonincreasing about the origin in the unit ball $B_1(0)$.
Now we are ready to demonstrate the strict monotonicity; more specifically, it is sufficient to prove that $$\label{1.23}w_{\lambda}(x,t)>0 \mbox{~in~} \Omega_\lambda\times\mathbb{R}, ~\forall\lambda\in(-1,0).$$ If not, then there exists some $\lambda_* \in (-1, 0)$ and a point $(x^0, t_0) \in \Omega_{\lambda_*} \times \mathbb{R}$ such that $$w_{\lambda_*}(x^0, t_0)=\min\limits_{\Sigma_{\lambda_*}\times\mathbb{R}} w_{\lambda_*}= 0.$$ Combining the differential equation in [\[1.12\]](#1.12){reference-type="eqref" reference="1.12"} with the definition of the dual fractional operator $\partial_t^{\alpha} + (-\Delta)^s$, similar to the previous argument, we must have $$w_{\lambda_*}(x, t) \equiv 0 \mbox{~in~} \Sigma_{\lambda_*}\times(-\infty,t_0].$$ This is a contradiction due to the fact that $u(\cdot,t)>0$ in $B_1(0)$ and $u(\cdot,t)\equiv 0$ in $B_1^c(0)$ for each fixed $t\in\mathbb{R}.$ Hence, we have verified the assertion [\[1.23\]](#1.23){reference-type="eqref" reference="1.23"} and thus completed the proof of Theorem [Theorem 1](#thm2){reference-type="ref" reference="thm2"}. ◻ # Liouville Theorem In this section, we begin by employing perturbation techniques and analyzing the nonlocal one-sided nature of the one-dimensional operator $\partial^\alpha_t$ to establish the Liouville theorem for the Marchaud fractional time operator $\partial^\alpha_t$, Theorem [Theorem 4](#thm5){reference-type="ref" reference="thm5"}. Following this, by incorporating the maximum principle in unbounded domains as stated in Theorem [Theorem 3](#thm3){reference-type="ref" reference="thm3"}, we will be able to derive our second main result, Theorem [Theorem 5](#thm4){reference-type="ref" reference="thm4"}.
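The arguments that follow rest on the one-sided (memory) structure of the Marchaud derivative $\partial_t^\alpha u(t)=C_\alpha\int_{-\infty}^{t}(u(t)-u(\tau))(t-\tau)^{-1-\alpha}\,d\tau$: it vanishes identically on constants, and it is strictly positive at any point where $u$ strictly exceeds all of its past values. A rough numerical illustration of these two facts (our own truncated midpoint quadrature, with the constant $C_\alpha$ normalized to $1$):

```python
import math

def marchaud(u, t, alpha=0.5, T=50.0, n=20000):
    """Truncated midpoint quadrature for the (normalized, C_alpha := 1)
    Marchaud derivative: int_{t-T}^{t} (u(t)-u(tau)) / (t-tau)^(1+alpha) dtau.
    Near tau = t the integrand behaves like (t-tau)^(-alpha), an integrable
    singularity which the midpoint rule never evaluates at tau = t itself."""
    h = T / n
    ut = u(t)
    return sum(
        (ut - u(t - (k + 0.5) * h)) / ((k + 0.5) * h) ** (1 + alpha) * h
        for k in range(n)
    )

# Constants are annihilated exactly (every difference quotient vanishes):
assert marchaud(lambda s: 3.0, 0.0) == 0.0

# At a point dominating all past values, the derivative is strictly positive:
assert marchaud(math.tanh, 0.0) > 0.0
assert marchaud(lambda s: 1.0 / (1.0 + s * s), 0.0) > 0.0
```

This is exactly the mechanism exploited in Cases 1-3 below: at an extremum of a bounded function, $\partial_t^\alpha u$ can vanish only if $u$ agrees with the extremal value on the whole past.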
## Liouville Theorem for the Marchaud fractional time operator $\partial^\alpha_t$ Let us begin by recalling the definition of the Marchaud derivative $$\label{M}\partial_t^\alpha u(t)=C_\alpha\displaystyle\int_{-\infty}^{t}\displaystyle\frac{u(t)-u(\tau)}{(t-\tau)^{1+\alpha}}d\tau.$$ Now we show that a bounded solution of the equation $\partial_t^\alpha u(t)=0$ in $\mathbb{R}$ must be constant. *Proof of Theorem [Theorem 4](#thm5){reference-type="ref" reference="thm5"}.* The proof goes by contradiction. Since $u(t)$ is bounded in $\mathbb{R},$ its supremum and infimum are finite; suppose, to the contrary, that $$\label{max}M:=\sup\limits_{t\in\mathbb{R}}u(t)>\inf\limits_{t\in\mathbb{R}}u(t)=:m.$$ Now we divide the proof into three cases based on whether the maximum and minimum values are attained and proceed to derive a contradiction for each case. $\mathbf{Case ~1}$: The extrema (maximum and minimum) of $u$ are both attained in $\mathbb{R}$. Suppose that $u$ attains its maximum at $\bar{t}$ and its minimum at $\underline{t}$ with $\underline{t}<\bar{t}.$ Owing to equation [\[anti1\]](#anti1){reference-type="eqref" reference="anti1"} and the nonlocal one-sided nature of $\partial_t^\alpha,$ see [\[M\]](#M){reference-type="eqref" reference="M"}, we have $$u(t)\equiv u(\underline{t})=m ~\mbox{for}~t<\underline{t}$$ and $$u(t)\equiv u(\bar{t})=M ~\mbox{for}~t<\bar{t}.$$ Since $\underline{t}<\bar{t}$, the second identity gives $u(\underline{t})=M$, which contradicts $u(\underline{t})=m<M$. We can derive a similar contradiction in the case $\overline{t}<\underline{t}$. $\mathbf{Case ~2}$: Only one of the extrema (maximum or minimum) of $u$ is attained in $\mathbb{R}$.
Without loss of generality, we may assume that $u$ attains its maximum at $t_0$ and there exists a minimizing sequence $\{t_k\}\searrow-\infty$ such that $$\label{case2}\lim\limits_{k\to\infty}u(t_k)=m.$$ Then, applying equation [\[anti1\]](#anti1){reference-type="eqref" reference="anti1"} and the definition [\[M\]](#M){reference-type="eqref" reference="M"} of $\partial_t^\alpha$, we have $$u(t)\equiv u(t_0)=M ~\mbox{for}~t<t_0,$$ which contradicts [\[case2\]](#case2){reference-type="eqref" reference="case2"} since $M>m.$ $\mathbf{Case ~3}$: Neither the maximum nor the minimum of $u$ is attained in $\mathbb{R}$. We assume without loss of generality that there exist a minimizing sequence $\{\underline{t}_k\}\searrow-\infty$, a maximizing sequence $\{\bar{t}_k\}\searrow-\infty$, and a sequence $\{\varepsilon_k\}\searrow0$ such that $$u(\overline{t}_k)=M-\varepsilon_k$$ and $$u(\underline{t}_k)=m+\varepsilon_k.$$ By extracting subsequences, we may assume $\overline{t}_k-\underline{t}_k>1.$ Now we introduce a perturbation of $u$ near $\underline{t}_k$ and $\overline{t}_k$ as follows: $$v_k(t)=u(t)+\varepsilon_k\eta_k(t) \ \mbox{in}\ \mathbb{R},$$ where $$\eta_k(t)=\eta\left(\frac{t-\overline{t}_k}{r_k}\right)-\eta\left(\frac{t-\underline{t}_k}{r_k}\right),$$ with $r_k=\frac{1}{4}(\overline{t}_k-\underline{t}_k)>0$ and $\eta\in C^\infty_0(\mathbb{R})$ a smooth cut-off function as described in the proof of Theorem [Theorem 3](#thm3){reference-type="ref" reference="thm3"}.
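A one-dimensional cut-off with the properties used here ($0\leq\eta\leq1$, $\eta\equiv1$ on $[-1/2,1/2]$, $\eta\equiv0$ outside $(-1,1)$) can be written down explicitly through the classical smooth transition function. A minimal sketch (the particular formula and names are ours; any standard construction works):

```python
import math

def f(t):
    """f(t) = exp(-1/t) for t > 0 and 0 otherwise; smooth on all of R."""
    return math.exp(-1.0 / t) if t > 0 else 0.0

def g(t):
    """Smooth transition: g = 0 for t <= 0, g = 1 for t >= 1, increasing between.
    The denominator f(t) + f(1 - t) is positive for every t."""
    return f(t) / (f(t) + f(1.0 - t))

def eta(t):
    """Smooth cut-off: 0 <= eta <= 1, eta = 1 on [-1/2, 1/2], eta = 0 outside (-1, 1).
    eta is smooth at t = 0 despite abs(), since eta is constant near 0."""
    return g(2.0 - 2.0 * abs(t))

assert eta(0.0) == 1.0 and eta(0.5) == 1.0      # flat top
assert eta(1.0) == 0.0 and eta(-3.0) == 0.0     # supported in [-1, 1]
assert 0.0 < eta(0.75) < 1.0                    # transition region
```

The rescaled and shifted copies $\eta((t-\overline{t}_k)/r_k)$ used in the proof inherit the same properties on the corresponding intervals.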
Clearly, $\mathrm{supp}\,\eta_k\subset (-r_k+\underline{t}_k,r_k+\underline{t}_k)\cup(-r_k+\overline{t}_k,r_k+\overline{t}_k)$ and there holds $$\eta_k(\overline{t}_k)=1,~ \eta_k(\underline{t}_k)=-1,$$ $$\eta_k(t)=-\eta\left(\frac{t-\underline{t}_k}{r_k}\right)\leq 0~\mbox{in}~\mathbb{R}\backslash(-r_k+\overline{t}_k,r_k+\overline{t}_k)$$ and $$\eta_k(t)=\eta\left(\frac{t-\overline{t}_k}{r_k}\right)\geq 0~\mbox{in}~\mathbb{R}\backslash(-r_k+\underline{t}_k,r_k+\underline{t}_k).$$ Then we have $$\left\{\begin{array}{r@{\ \ }c@{\ \ }ll} v_k(\overline{t}_k)&=&M,~v_k(\underline{t}_k)=m\,, \\[0.1cm] v_k(t)&\leq& M \ ~\mbox{in}~\mathbb{R}\backslash(-r_k+\overline{t}_k,r_k+\overline{t}_k)\,, \\[0.1cm] v_k(t)&\geq& m\ \ ~\mbox{in}~\mathbb{R}\backslash(-r_k+\underline{t}_k,r_k+\underline{t}_k)\,. \\[0.05cm]\end{array}\right.$$ Subsequently, $v_k$ must attain its maximum value, which is at least $M$, on $[-r_k+\overline{t}_k,r_k+\overline{t}_k]$ and its minimum value, which is at most $m$, on $[-r_k+\underline{t}_k,r_k+\underline{t}_k]$; more specifically, $$\exists\ \{\bar{s}_k\}\subset[-r_k+\overline{t}_k,r_k+\overline{t}_k]\ \ \ s.t.\ \ \ M+\varepsilon_k\geq v_k(\bar{s}_k)=\sup\limits_{t\in\mathbb{R}}v_k(t)\geq M$$ and $$\exists\ \{\underline{s}_k\}\subset[-r_k+\underline{t}_k,r_k+\underline{t}_k]\ \ \ s.t.\ \ \ m-\varepsilon_k\leq v_k(\underline{s}_k)=\inf\limits_{t\in\mathbb{R}}v_k(t)\leq m.$$ Consequently, $$\label{2.12}\left.\begin{array}{r@{\ \ }c@{\ \ }ll}\partial^\alpha_t v_k(\bar{s}_k)&=&C_\alpha\displaystyle\int_{-\infty}^{\bar{s}_k}\displaystyle\frac{v_k(\bar{s}_k)-v_k(\tau)} {(\bar{s}_k-\tau)^{1+\alpha}}d\tau\\[0.3cm] &\geq&C_\alpha\displaystyle\int_{-\infty}^{\underline{s}_k}\displaystyle\frac{v_k(\bar{s}_k)-v_k(\tau)}
{(\bar{s}_k-\tau)^{1+\alpha}}d\tau \\[0.4cm] &=&C_\alpha\left\{\displaystyle\int_{-\infty}^{\underline{s}_k}\displaystyle\frac{v_k(\bar{s}_k)-v_k(\underline{s}_k)} {(\bar{s}_k-\tau)^{1+\alpha}}d\tau+ \displaystyle\int_{-\infty}^{\underline{s}_k}\displaystyle\frac{v_k(\underline{s}_k)-v_k(\tau)} {(\bar{s}_k-\tau)^{1+\alpha}}d\tau\right\} \\[0.4cm] &\geq&C_\alpha\left\{(M-m)\displaystyle\int_{\underline{s}_k-r_k}^{\underline{s}_k}\displaystyle\frac{1} {(\overline{s}_k-\tau)^{1+\alpha}}d\tau+\displaystyle\int_{-\infty}^{\underline{s}_k}\displaystyle\frac{v_k(\underline{s}_k)-v_k(\tau)} {(\underline{s}_k-\tau)^{1+\alpha}}d\tau\right\} \\[0.4cm] &\geq&\displaystyle\frac{C_0} {{r_k}^\alpha}+\partial^\alpha_t v_k(\underline{s}_k). \end{array}\right.$$ In addition, owing to the equation in [\[anti1\]](#anti1){reference-type="eqref" reference="anti1"} and the rescaling and translation properties of $\partial_t^\alpha$ applied to $\eta$ (see [\[TS\]](#TS){reference-type="eqref" reference="TS"}), it is easily derived that $$\label{2.13}\partial^\alpha_t v_k(\bar{s}_k),\ \partial^\alpha_t v_k(\underline{s}_k)\sim{\displaystyle\frac{\varepsilon_k}{{r_k}^\alpha}}.$$ It follows from [\[2.12\]](#2.12){reference-type="eqref" reference="2.12"} and [\[2.13\]](#2.13){reference-type="eqref" reference="2.13"} that $$C\varepsilon_k\geq C_0-C \varepsilon_k,$$ which leads to a contradiction for sufficiently large $k$. This contradiction disproves [\[max\]](#max){reference-type="eqref" reference="max"}; hence $u$ must be constant, which completes the proof of Theorem [Theorem 4](#thm5){reference-type="ref" reference="thm5"}.
◻ ## Liouville Theorem for the dual fractional operator $\partial^\alpha_t +(-\Delta)^s$ In the rest of this section, we employ the maximum principle for antisymmetric functions in unbounded domains (Theorem [Theorem 3](#thm3){reference-type="ref" reference="thm3"}), along with the Liouville theorem for the Marchaud fractional time operator $\partial^\alpha_t$ just established in Section 4.1, to complete the proof of the Liouville theorem (Theorem [Theorem 5](#thm4){reference-type="ref" reference="thm4"}) for the dual fractional operator $\partial^\alpha_t +(-\Delta)^s.$ *Proof of Theorem [Theorem 5](#thm4){reference-type="ref" reference="thm4"}.* For each fixed $t\in\mathbb{R}$, we first claim that $u(\cdot, t)$ is symmetric with respect to any hyperplane in $\mathbb{R}^n$. Let $x_1$ be any given direction in $\mathbb{R}^n$, and we keep the notation $T_\lambda,\Sigma_\lambda,w_\lambda(x,t),$ $u_\lambda(x,t),$ $x^\lambda$ defined in Section 1. For any $\lambda\in\mathbb{R}$, on account of equation [\[2.9\]](#2.9){reference-type="eqref" reference="2.9"}, we derive $$\left\{\begin{array}{ll} \partial^\alpha_t w_\lambda(x,t)+(-\Delta)^s w_\lambda(x,t) =0,\ \ &\mbox{in}\ \ \Sigma_\lambda\times\mathbb{R},\\[0.3cm] w_\lambda(x,t)=-w_\lambda(x^\lambda,t), &\mbox{in}\ \ \Sigma_\lambda\times\mathbb{R}. \end{array}\right.$$ It follows from Theorem [Theorem 3](#thm3){reference-type="ref" reference="thm3"} that $$w_\lambda(x,t)\equiv 0 \mbox{~in~} \Sigma_\lambda\times\mathbb{R}.$$ As a result, the arbitrariness of $\lambda$ indicates that $u(\cdot, t)$ is symmetric with respect to any hyperplane perpendicular to the $x_1$-axis. Moreover, since the selection of the $x_1$ direction is arbitrary, we conclude that $u(\cdot, t)$ is symmetric with respect to any hyperplane in $\mathbb{R}^n$ for each fixed $t\in\mathbb{R}$.
Thus, we deduce that $u(x, t)$ depends only on $t$, i.e., $$u(x,t)=u(t) \mbox{~in~}\mathbb{R}^n\times\mathbb{R}.$$ Now equation [\[2.9\]](#2.9){reference-type="eqref" reference="2.9"} reduces to the following one-dimensional one-sided fractional equation $$\partial^\alpha_t u(t) = 0\ \ \mbox{in}\ \ \mathbb{R}.$$ Then Theorem [Theorem 4](#thm5){reference-type="ref" reference="thm5"} yields that $u(t)$ must be constant. Thus, we have confirmed that any bounded solution of equation [\[2.9\]](#2.9){reference-type="eqref" reference="2.9"} must be constant. This completes the proof of Theorem [Theorem 5](#thm4){reference-type="ref" reference="thm4"}. ◻ $\mathbf{Acknowledgement}$ The work of the second author is partially supported by the National Natural Science Foundation of China (NSFC Grant No. 12101452), and the work of the third author is partially supported by the National Natural Science Foundation of China (NSFC Grant No. 12071229).
[^1]: Corresponding author.
--- abstract: | The primary purpose of this paper is to study the Wiener-type regularity criteria for non-linear equations driven by integro-differential operators, whose model is the fractional $p-$Laplace equation. In doing so, with the help of tools from potential analysis, such as fractional relative Sobolev capacities, Wiener type integrals, Wolff potentials, $(\alpha,p)-$barriers, and $(\alpha,p)-$balayages, we first prove the characterizations of the fractional thinness and the Perron boundary regularity. Then, we establish a Wiener test and a generalized fractional Wiener criterion. Furthermore, we also prove the continuity of the fractional superharmonic function, the fractional resolutivity, a connection between $(\alpha,p)-$potentials and $(\alpha,p)-$Perron solutions, and the existence of a capacitary function for an arbitrary condenser. address: - School of mathematics and statistics, Linyi University, Linyi 276005, China - School of mathematics and statistics, Linyi University, Linyi 276005, China - Department of Mathematics and Statistics, MacEwan University, Edmonton, Alberta T5J2P2, Canada author: - Shaoguang Shi - Guanglan Wang - ZhiChun Zhai title: Wiener type regularity for non-linear integro-differential equations --- [^1] # Introduction {#s1} ## Aim of the paper The theory of the boundary regularity of a domain, initiated by Wiener [@Wi], is fundamental in potential analysis and is closely connected with harmonic functions. This topic has been studied extensively; we mention a few of these works. In [@GZ], Gariepy and Ziemer established a regularity condition at the boundary of weak solutions of the Dirichlet problem for quasilinear elliptic equations of second order in an open set $O\subset \mathbb{R}^n.$ In [@M2], Maz'ya established the sufficient part of the Wiener test for quasilinear elliptic equations. 
In [@KM], Kilpeläinen and Malý proved the necessary part of the Wiener test for quasilinear elliptic equations. In this paper, we focus on the Wiener type regularity for the following non-linear integro-differential equation, $$\mathfrak{L}_{\alpha}u=0\quad\hbox{in}\quad \Omega\subset \mathbb R^n$$ and its related Dirichlet problems. We assume throughout this paper that $\Omega\subset \mathbb{R}^{n}$ is an open domain unless otherwise specified. For any $\varphi\in C^\infty_c(\Omega)$ (the class of all infinitely differentiable functions with compact support in $\Omega$), the operator $\mathfrak{L}_{\alpha}$ can be defined as $$\langle\mathfrak{L}_{\alpha}u, \varphi\rangle:=\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}|u(x)-u(y)|^{p-2}(u(x)-u(y))(\varphi(x)-\varphi(y))K_{\alpha}(x,y)dxdy$$ and the kernel function $K_{\alpha}: \mathbb R^n\times \mathbb R^n\longrightarrow \mathbb R$ is assumed to be measurable and to satisfy $$\frac{1}{C|x-y|^{n+\alpha p}}\leq K_{\alpha}(x,y)\leq \frac{C}{|x-y|^{n+\alpha p}}\quad \hbox{for}\quad (\alpha,p)\in (0,1)\times (1,\infty)\quad \hbox{and}\quad C\geq 1.$$ Much work has been carried out recently in studying $\mathfrak{L}_{\alpha}$ and the Dirichlet problems associated with $\mathfrak{L}_{\alpha}.$ Let us only review a few. For the linear case, when $p = 2$ and when the kernel $K_\alpha$ equals the Gagliardo kernel $K_\alpha(x, y) = |x - y|^{-(n+2\alpha)},$ Caffarelli and his collaborators have studied the fractional obstacle problem in [@CF; @CRS; @CSS]. In [@KKL], the authors studied the solutions defined via integration by parts with test functions, as viscosity solutions or via comparison, and proved that the three notions coincide for bounded solutions. For the non-linear and possibly degenerate case, in [@CKP1; @CKP2], Di Castro et al. 
proved a general Harnack inequality and the regularity results for $\mathfrak{L}_{\alpha}u=0$ in $\Omega$ while $u=g$ in $\Omega^c:=\mathbb{R}^{n}\setminus \Omega.$ In [@KKP1; @KKP2; @KKP3; @KKP4], Korvenpää et al. systematically studied the corresponding obstacle problems. They established the existence and uniqueness of the solutions; the boundedness, continuity and Hölder continuity up to the boundary from the obstacle; the fact that the minimum of two weak supersolutions is again a weak supersolution; a comparison principle; a priori bounds; and the lower semicontinuity of supersolutions. In [@KMS], Kuusi et al. established the existence, regularity and potential theory for the non-linear integro-differential equations involving measure data. In [@P], Palatucci studied $(\alpha, p)$-superharmonic functions, the nonlocal counterpart of the Perron method in non-linear potential theory, and the connection among the fractional viscosity solutions, the weak solutions and the $(\alpha, p)-$superharmonic functions. In particular, when $K_{\alpha}(x,y)=|x-y|^{-(n+\alpha p)}$, $\mathfrak{L}_{\alpha}$ becomes the fractional $p-$Laplace operator $(-\Delta_{p})^{\alpha}$, which can be understood as $$\langle(-\Delta_{p})^{\alpha}u, \varphi\rangle:=\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))(\varphi(x)-\varphi(y))}{|x-y|^{n+\alpha p}}dxdy,\quad \forall\varphi\in C_{c}^{\infty}(\mathbb R^n).$$ Namely, $\mathfrak{L}_{\alpha}=(-\Delta_{p})^{\alpha}$ in the sense of distributions in this case. 
Alternatively, there exists a constant $C(n,\alpha,p)$ such that $$(-\Delta_{p})^{\alpha}u(x)=C(n,\alpha,p)\lim_{\epsilon\to 0}\int_{|y-x|\ge\epsilon}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{n+\alpha p}}\,dy.$$ Then the classical $(\alpha, p)-$Laplace equation $(-\Delta_{p})^{\alpha}u=0$ is understood in the sense of $$\label{(1.1)} \langle(-\Delta_{p})^{\alpha}u, \varphi\rangle =0\quad \forall\varphi\in C_{c}^{\infty}(\mathbb R^n)$$ and the function $u$ is said to be a weak solution to $(\ref{(1.1)})$. Supersolutions and subsolutions of $(\ref{(1.1)})$ are defined by $\langle(-\Delta_{p})^{\alpha}u, \varphi\rangle \geq0$ and $\langle(-\Delta_{p})^{\alpha}u, \varphi\rangle \leq0$ for all nonnegative $\varphi\in C_{c}^{\infty}(\Omega),$ respectively. The related Dirichlet problem is often defined as $$\label{(1.2)} \begin{cases} (-\Delta_{p})^{\alpha}u=0\quad\hbox{in}\quad\Omega;\\ u=f\quad\hbox{in}\quad\Omega^{c}. \end{cases}$$ For simplicity, in this paper, we will study the fractional thinness and the fractional regularity of the integro-differential equations associated with $(-\Delta_{p})^{\alpha}.$ This means $\mathfrak{L}_{\alpha}=(-\Delta_{p})^{\alpha}$ for the rest of this paper. The operator $(-\Delta_{p})^{\alpha}$ can be viewed as a nonlocal version of the classical $p-$Laplace operator $-\Delta_{p}$, and many studies have been carried out on $(\ref{(1.1)})$ and $(\ref{(1.2)})$ owing to their wide applications in physics, biology and other fields. To see this, we refer the interested readers to [@BL; @CS1; @CS2; @CSS; @CS; @ILPS; @IS; @LL1; @LL2; @MRT; @PXZ; @S; @TX; @V; @W; @ZTZ] and the references therein. Motivated by the above-mentioned excellent works on the regularity of non-linear integro-differential equations, this paper aims to establish the Wiener type regularity for non-linear integro-differential equations using a newly developed theory of the fractional relative Sobolev capacity, an approach quite different from those of previously known results. 
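For orientation, it may help to record the linear case explicitly; this is only an illustrative specialization of the formulas above. When $p=2$, the weight $|u(x)-u(y)|^{p-2}$ is identically $1$, so the weak formulation reduces to the bilinear form of the usual fractional Laplacian, $$\langle(-\Delta)^{\alpha}u, \varphi\rangle=\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{(u(x)-u(y))(\varphi(x)-\varphi(y))}{|x-y|^{n+2\alpha}}dxdy,$$ and the equation $(-\Delta_{p})^{\alpha}u=0$ becomes linear; the non-linear difficulties addressed in this paper are therefore genuinely due to the case $p\neq 2$.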
Before stating our main results, we review some basic definitions and preliminaries. ## Preliminaries The working space of this paper is the fractional Sobolev space $W^{\alpha, p}(\Omega)$ on a domain $\Omega\subset\mathbb R^n$, which is defined as $$W^{\alpha, p}(\Omega)=L^{p}(\Omega)\cap \dot{W}^{\alpha, p}(\Omega)$$ with $L^{p}(\Omega)$ the classical $p-$Lebesgue space on $\Omega$ and $\dot{W}^{\alpha, p}(\Omega)$ the space with the Gagliardo semi-norm $$[u]_{\dot{W}^{\alpha, p}(\Omega)}^{p}= \int_{\Omega}\int_{\Omega}\frac{|u(x)-u(y)|^{p}}{|x-y|^{n+\alpha p}}dxdy \quad\hbox{for every measurable function}\quad u\quad\hbox{on}\quad\Omega.$$ $W_{0}^{\alpha,p}(\Omega)$ and $\dot{W}_{0}^{\alpha,p}(\Omega)$ are the completions of $C^{\infty}_c(\Omega)$ under $\|\cdot\|_{W^{\alpha,p}(\Omega)}$ and $[\cdot]_{\dot{W}^{\alpha,p}(\Omega)}$, respectively. ### $(\alpha,p)$-harmonic function and $(\alpha,p)$-boundary regularity {#1.2} A function $u: \Omega\longrightarrow \mathbb{R}$ is said to be $(\alpha, p)-$harmonic if $u\in C(\Omega)$ (all continuous functions on $\Omega$) is a solution to $(\ref{(1.1)})$ in the weak sense. As a concept closely related to supersolution, the $(\alpha, p)-$superharmonic function is given by a function $u$ satisfying the following three conditions (see [@KKP4]): $$\begin{cases} {\rm(i)} \quad u \ \hbox{is lower semicontinuous (l.s.c.)};\\ {\rm(ii)} \quad u\not\equiv \infty\ \hbox{in each component of}\ \Omega;\\ {\rm(iii)}\quad u\geq v \ \hbox{on} \ \partial O\Longrightarrow u\geq v \ \hbox{in}\ O\quad\hbox{for every open set}\ O\Subset \Omega\ \& \ \forall (\alpha, p)-\hbox{harmonic function}\ v\in C(\overline{O}). \end{cases}$$ Here, $O\Subset \Omega$ means $\overline{O}$ is a compact subset of $\Omega$. More precisely, an $(\alpha, p)-$superharmonic function is a viscosity supersolution of $(\ref{(1.1)})$ (see [@KKP1] for more details). A function $u$ is called $(\alpha, p)-$subharmonic if $-u$ is $(\alpha, p)-$superharmonic. 
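As a simple sanity check on these definitions (immediate from the weak formulation of $(\ref{(1.1)})$): every constant function is $(\alpha,p)-$harmonic. Indeed, if $u\equiv c$, then $u(x)-u(y)=0$ for all $x,y\in \mathbb{R}^{n}$, so $$\langle(-\Delta_{p})^{\alpha}u, \varphi\rangle=\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))(\varphi(x)-\varphi(y))}{|x-y|^{n+\alpha p}}dxdy=0\quad \forall\varphi\in C_{c}^{\infty}(\mathbb R^n),$$ and $u$ is continuous, so $u$ solves $(\ref{(1.1)})$ in the weak sense.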
For abbreviation, we use $H_{\alpha}(\Omega)$, $H_{\alpha}^{+}(\Omega)$ and $H_{\alpha}^{-}(\Omega)$ to represent the class of $(\alpha, p)-$harmonic functions, $(\alpha, p)-$superharmonic functions and $(\alpha, p)-$subharmonic functions in $\Omega$, respectively. It is immediate that $H_{\alpha}(\Omega)=H_{\alpha}^{+}(\Omega)\cap H_{\alpha}^{-}(\Omega)$. In potential theory, a basic problem is the boundary regularity of $\Omega$. We recall that a point $x_{0}\in \partial\Omega$ (the boundary of $\Omega$) is $(\alpha,p)-$regular if for every $u\in W^{\alpha,p}(\Omega)\cap C(\overline{\Omega})$, there exists a function $f\in H_{\alpha}(\Omega)$ with $$f-u\in W^{\alpha,p}_{0}(\Omega)\quad \hbox{and}\quad \lim_{x\longrightarrow x_{0}}f(x)=u(x_{0}).$$ The existence and uniqueness of such $f$ can be seen from [@Sh Theorem 2.7] and [@SX3 Theorem 1.1]. $\Omega$ is called $(\alpha,p)-$regular if $x$ is $(\alpha,p)-$regular for each $x\in \partial\Omega$, and $(\alpha,p)-$irregular otherwise. A considerable amount of research has been performed on the problem of fractional regularity during the last decade: Ros-Oton and Serra [@ROS] developed a fractional analogue of the Krylov boundary Harnack method, Lindgren and Lindqvist [@LL2] used Perron's method, Iannizzotto, Mosconi and Squassina [@IMS] utilized barriers, and Giacomoni, Kumar and Sreenadh [@GKS] used a suitable Caccioppoli inequality and the weak Harnack inequality. In this paper, the fractional regularity is studied by using a newly developed theory of the fractional relative Sobolev capacity. Traditionally, regularity is defined in connection with Perron solutions. For the fractional case, the $(\alpha,p)-$Perron solution was considered, for example, in [@KKP4; @LL2]. We recall some related definitions as follows. 
The upper $(\alpha,p)-$Perron solution $\overline{H}_{f}^{\alpha}$ and the lower $(\alpha,p)-$Perron solution $\underline{H}_{f}^{\alpha}$ of a function $f:\partial\Omega\rightarrow [-\infty,+\infty]$ in $\Omega$ are given by $$\overline{H}_{f}^{\alpha}=\overline{H}_{f}^{\alpha}(\Omega)=\inf\left\{u: u\in \mathcal{U}_{f}^{\alpha}\right\},\quad\underline{H}_{f}^{\alpha}=\underline{H}_{f}^{\alpha}(\Omega)=\sup\left\{u: u\in \mathcal{L}_{f}^{\alpha}\right\},$$ where the upper class $\mathcal{U}_{f}^{\alpha}$ and the lower class $\mathcal{L}_{f}^{\alpha}$ are defined as $$\mathcal{U}_{f}^{\alpha}=\left\{u: u\in H_{\alpha}^{+}(\Omega),\ u \,\,\hbox{is bounded below},\,\,\liminf_{x\longrightarrow y}u(x)\geq f(y)\,\,\hbox{for all}\,\, y\in \partial \Omega\right\}$$ and $$\mathcal{L}_{f}^{\alpha}=\left\{u: u\in H_{\alpha}^{-}(\Omega), \,\,u \,\,\hbox{is bounded above},\,\,\limsup_{x\longrightarrow y}u(x)\leq f(y)\,\,\hbox{for all}\,\, y\in \partial \Omega\right\}.$$ The following three properties follow immediately from the definitions: - $u\in \mathcal{U}_{f}^{\alpha}$ if and only if $-u\in \mathcal{L}_{-f}^{\alpha}$, - $\underline{H}_{f}^{\alpha}\leq \overline{H}_{f}^{\alpha}$, - $\overline{H}_{f}^{\alpha}\leq \overline{H}_{g}^{\alpha}$ if $f\leq g.$ The $(\alpha,p)-$Perron solution is an important tool to solve the Dirichlet problem ([\[(1.2)\]](#(1.2)){reference-type="ref" reference="(1.2)"}). It is a natural question to ask which one of the two $(\alpha,p)-$Perron solutions is the "correct\" solution to ([\[(1.2)\]](#(1.2)){reference-type="ref" reference="(1.2)"}). It follows from the comparison principle [@KKP4 Theorem 16] that the $(\alpha,p)-$Perron solution coincides with the classical solution of ([\[(1.2)\]](#(1.2)){reference-type="ref" reference="(1.2)"}). 
That is, $\underline{H}_{f}^{\alpha}$ and $\overline{H}_{f}^{\alpha}$ are local solutions; see, for example, [@LL2 Theorem 22] and also [@KKP4 Theorem 2], which implies that each of $\underline{H}_{f}^{\alpha}$ and $\overline{H}_{f}^{\alpha}$ is either identically $-\infty$ in $\Omega$, identically $+\infty$ in $\Omega$, or $(\alpha,p)-$harmonic in $\Omega$. We say that a boundary point $x_{0}$ of $\Omega$ is Perron regular if $$\lim_{x\longrightarrow x_{0}}\overline{H}_{f}^{\alpha}(x)=f(x_{0})\quad \forall f\in C(\partial\Omega).$$ $\Omega$ is called Perron regular if all points $x_{0}\in \partial\Omega$ are Perron regular. The same notion of regularity is obtained if we replace $\overline{H}_{f}^{\alpha}$ with $\underline{H}_{f}^{\alpha}$, since $\overline{H}_{f}^{\alpha}=-\underline{H}_{-f}^{\alpha}$. We will show in Theorem [Theorem 2](#Theorem 1.3){reference-type="ref" reference="Theorem 1.3"} that the $(\alpha,p)-$regular boundary points agree with the Perron regular boundary points if $\Omega$ is bounded. ### $(\alpha,p)$-Wiener type integral and $(\alpha,p)$-thinness {#1.3} In the non-linear case, a central device in modern potential theory is the Wiener test (or Wiener criterion), introduced by Wiener [@Wi] to measure boundary regularity in terms of capacity densities. 
We first adopt the following form of the Wiener type integral defined for an arbitrary set $E$ as $$\mathcal{W}_{p}^{\alpha}(E,x_{0})=\int_{0}^{1}\left(F_{\alpha, p}(x_{0},E,r)\right)^{\frac{1}{p-1}}\frac{dr}{r}:=\int_{0}^{1}\left(\frac{C_{\alpha, p}(E\cap B(x_{0},r), B(x_{0},2r))}{C_{\alpha, p}(B(x_{0},r), B(x_{0},2r))}\right)^{\frac{1}{p-1}}\frac{dr}{r},$$ where $F_{\alpha, p}(x_{0},E,r)$ is the $(\alpha,p)$-capacity density function, $B(x_0,r)$ is the $x_0$-centered Euclidean ball with radius $r$, and $C_{\alpha, p}(E, \Omega)$ is the $(\alpha,p)$-capacity for any set $E\subset \Omega\subseteq \mathbb{R}^{n}$, which is defined as $$C_{\alpha, p}(E, \Omega)=\inf _{\text{open}\ O\supset E}C_{\alpha, p}(O, \Omega)=\inf _{\text{open}\ O\supset E}\sup_{\text{compact}\ K\subset O}C_{\alpha, p}(K, \Omega),$$ where $$C_{\alpha, p}(K, \Omega):=\inf_{u\in \mathcal{X}^{\alpha, p}_{0}(K, \Omega)}[u]_{\dot{W}^{\alpha, p}(\Omega)}^{p} \quad\hbox{ with}\quad \mathcal{X}^{\alpha, p}_{0}(K, \Omega):= \left\{u: u\in \dot{W}^{\alpha, p}_{0}(\Omega)\quad\& \quad u\geq \chi_{K}\right\}.$$ Here $\chi_E$ stands for the characteristic function of a set $E$. In practice, $\mathcal{X}^{\alpha, p}_{0}(K, \Omega)$ can be replaced by $$\mathcal{Y}^{\alpha, p}_{0}(K, \Omega):= \left\{u: u\in \dot{W}^{\alpha, p}_{0}(\Omega)\quad\& \quad 0\leq u\leq 1,\quad u=1\quad\hbox{on}\quad K\right\}$$ (see for example [@SX1]). Accordingly, a set $E$ is said to have zero $(\alpha,p)$-capacity, denoted by $C_{\alpha, p}(E)=0,$ if $C_{\alpha, p}(E\cap \Omega, \Omega)=0$ for all open sets $\Omega\subset \mathbb{R}^{n}$. A property holds quasi everywhere (denoted by q.e. in the following) if it holds except for a set of zero $(\alpha,p)$-capacity. 
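A worked computation may clarify the normalization in the Wiener type integral; this is a standard scaling argument, recorded here only for intuition. The map $u\mapsto u\big(\frac{\cdot-x_{0}}{r}\big)$ is a bijection between the admissible classes for $\big(B(0,1),B(0,2)\big)$ and $\big(B(x_{0},r),B(x_{0},2r)\big)$, and the substitution $x=x_{0}+rx'$, $y=x_{0}+ry'$ gives $$\Big[u\big(\tfrac{\cdot-x_{0}}{r}\big)\Big]_{\dot{W}^{\alpha, p}(B(x_{0},2r))}^{p}=r^{2n}\cdot r^{-(n+\alpha p)}[u]_{\dot{W}^{\alpha, p}(B(0,2))}^{p}=r^{n-\alpha p}[u]_{\dot{W}^{\alpha, p}(B(0,2))}^{p},$$ whence $$C_{\alpha, p}\big(B(x_{0},r), B(x_{0},2r)\big)=r^{n-\alpha p}\,C_{\alpha, p}\big(B(0,1), B(0,2)\big).$$ In particular, the capacity density function $F_{\alpha, p}(x_{0},E,r)$ is a dimensionless ratio, invariant under a simultaneous dilation of $E$ about $x_{0}$.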
As we will also see from this paper (Theorem [Theorem 4](#Theorem 1.2){reference-type="ref" reference="Theorem 1.2"}-Theorem [Theorem 18](#Theorem 1.6){reference-type="ref" reference="Theorem 1.6"}) and the known results [@AH; @AX; @HKM2; @LWXY; @M; @W1; @X; @X2; @XY], Sobolev type capacity is a powerful and useful tool in the study of potential theory, function spaces, harmonic analysis, partial differential equations and so on. A set $E$ is $$\begin{cases} (\alpha,p)-\hbox{thick at} \quad x_{0}\quad\hbox{if}\quad \mathcal{W}_{p}^{\alpha}(E,x_{0})=\infty,\quad\hbox{fractional thickness};\\ (\alpha,p)-\hbox{thin at} \quad x_{0}\quad\hbox{if}\quad\mathcal{W}_{p}^{\alpha}(E,x_{0})<+\infty,\quad\hbox{fractional thinness}. \end{cases}$$ Thinness was introduced into non-linear potential theory by Adams and Meyer [@AM] and then further studied in [@AH; @HKM1; @HW] for quasi-linear elliptic equations. In this paper, we adapt it to fractional integro-differential equations, partly inspired by ideas from [@HKM2] for non-linear elliptic equations. ### $(\alpha,p)$-barrier and $(\alpha,p)$-balayage {#1.4} A function $u$ is called a fractional barrier (denoted by $(\alpha,p)$-barrier) [@LL2 Definition 25] relative to $\Omega$ at $x_{0}$ if $$u\in H_{\alpha}^{+}(\Omega);\quad \liminf_{x\longrightarrow y}u(x)>0\quad\hbox{for each}\quad y\in \partial\Omega\backslash \{x_{0}\};\quad \lim_{x\longrightarrow x_{0}}u(x)=0.$$ By the minimum principle [@SX3 Lemma 2.5], an $(\alpha,p)$-barrier is always nonnegative and only $\mathbb{R}^{n}$ admits an $(\alpha,p)$-barrier that is not strictly positive. Furthermore, if the open set $O\subset \Omega$ and $u$ is a strictly positive $(\alpha,p)$-barrier relative to $\Omega$, then $u$ is an $(\alpha,p)$-barrier relative to $O$. 
One of the main results of this paper, Theorem [Theorem 2](#Theorem 1.3){reference-type="ref" reference="Theorem 1.3"}, characterizes the Perron regular boundary points in terms of the $(\alpha,p)$-barrier. To introduce our main results, for a function $f$ which is locally bounded from below, we recall the $(\alpha,p)$-balayage from [@SX3] as $$\widehat{\mathcal{B}}^{f}(x):=\widehat{\mathcal{B}}^{f}(\Omega)(x)= \lim_{r\longrightarrow 0}\inf_{\Omega\cap B(x,r)}\mathcal{B}^{f}\big(\Omega\cap B(x,r)\big),$$ where $$\mathcal{B}^{f}:=\mathcal{B}^{f}(\Omega)=\inf \Psi_{f}:=\inf\left\{u: u\in H_{\alpha}^{+}(\Omega)\quad\hbox{and}\quad u\geq f\quad\hbox{in}\quad \Omega\right\}.$$ A relative version is defined for a nonnegative function $g$ on a set $E\subset \Omega$ as $\widehat{\mathcal{B}}^{g}_{E}=\widehat{\mathcal{B}}^{f}$ (the $(\alpha,p)$-balayage relative to $E$) for $$f=\begin{cases}g \quad \hbox{on} \quad E;\\ 0\quad \hbox{on}\quad\Omega\setminus E. \end{cases}$$ $\widehat{\mathcal{B}}^{1}_{E}$ is called the $(\alpha,p)$-potential of the set $E$ in $\Omega$. It follows from [@SX3 Theorem 1.2] that $\widehat{\mathcal{B}}^{1}_{E}\in H_{\alpha}^{+}(\Omega)\cap H_{\alpha}(\Omega\backslash \overline{E}).$ Now, we are ready to state our main results. ## Statement of main results Our first result gives two equivalent characterizations of $(\alpha,p)-$thinness. **Theorem 1**. *Assume that $E\subset \mathbb{R}^{n}$ and $U(x_{0})$ denotes a neighborhood of $x_{0}$. Then the following three statements are equivalent* - *$E$ is $(\alpha,p)$-thin at $x_{0}$;* - *There is a function $u\in H_{\alpha}^{+}(U(x_{0}))$ with $\liminf_{x\longrightarrow x_{0}\ \&\ x\in E\backslash \{x_{0}\}} u(x)>u(x_{0});$* - *There is a nonnegative function $u\in H_{\alpha}^{+}(U(x_{0}))$ such that $\widehat{\mathcal{B}}^{u}_{E\cap V(x_{0})}(x_{0})<u(x_{0})$ for $V(x_{0})\Subset U(x_{0})$.* The second result characterizes the fractional regularity when $\Omega$ is bounded. **Theorem 2**. 
*Assume that $C_{\alpha, p}(\{x_{0}\}, \Omega)=0$ for a finite point $x_{0}\in \partial\Omega$ and $\Omega$ is bounded. Then the following statements are equivalent* - *$x_{0}$ is Perron regular;* - *There is an $(\alpha,p)-$barrier at $x_{0}$ relative to $\Omega$;* - *$\widehat{\mathcal{B}}^{u}_{\overline{U}\backslash \Omega}(V)(x_{0})=u(x_{0})$ for nonnegative $u\in H_{\alpha}^{+}(V)$ and $U\Subset V$ are bounded open sets with $x_{0}\in U$;* - *$\widehat{\mathcal{B}}^{1}_{\overline{B}\backslash \Omega}(2B)(x_{0})=1$ for all balls $B$ with $x_{0}\in B$;* - *$x_{0}$ is $(\alpha,p)-$regular.* If $\Omega$ is unbounded, then $\infty\in \partial\Omega$, and all topological notions are then understood with respect to the space $\overline{\mathbb{R}^{n}}=\mathbb{R}^{n}\cup \{\infty\}$. It is a classical problem to classify the Riemann surfaces or Riemannian manifolds that carry nonconstant bounded $(\alpha, p)-$superharmonic functions. The following theorem shows that the existence of such functions is connected closely with the regularity of $\infty$ for the Dirichlet problem $(\ref{(1.2)})$. **Theorem 3**. *Assume that $\Omega$ is unbounded and the ball $B\subset \Omega$. 
Then the following statements are equivalent* - *$\infty$ is Perron regular for each $\Omega$;* - *$\infty$ is Perron regular for ${\overline B}^{c}$;* - *There is a nonconstant bounded function $u\in H_{\alpha}^{+}(\mathbb{R}^{n})$;* - *$C_{\alpha, p}(B, \mathbb{R}^{n})>0$ for each ball $B\Subset \Omega$;* - *$C_{\alpha, p}(B, \mathbb{R}^{n})>0$ for some ball $B\Subset \Omega$.* With the help of the local nature of $(\alpha,p)-$superharmonic functions, the continuity of $(\alpha,p)-$balayage, and the existence of the solution to ([\[(1.2)\]](#(1.2)){reference-type="ref" reference="(1.2)"}) with Sobolev boundary values that will be established in Section [2.4](#s5){reference-type="ref" reference="s5"}, we derive the following fractional Wiener test from Theorem [Theorem 1](#Theorem 1.1){reference-type="ref" reference="Theorem 1.1"}. **Theorem 4**. *Let $x_{0}\in \partial\Omega$ be a finite boundary point. Then $$x_{0} \, \hbox{is}\,(\alpha,p)-\hbox{regular} \Longleftrightarrow \mathcal{W}_{p}^{\alpha}(\Omega^{c}, x_{0})=+\infty \Longleftrightarrow \Omega^{c}\, \hbox{is}\, (\alpha,p)-\hbox{thick at}\, x_{0}.$$* The last main result is a generalization of the classical Wiener criterion, which requires two basic notions. Write $$(\alpha,p)-\operatornamewithlimits{ess\,sup}f_{B(x_{0},r)}=\inf \left\{t:f\leq t\quad q.e.\, \hbox{in}\quad B(x_{0},r)\right\},\,\,\,\,\overline{f(x_{0})}=\inf_{r>0}\left((\alpha,p)-\operatornamewithlimits{ess\,sup}f_{B(x_{0},r)}\right)$$ and $$F_{\varepsilon}=\left\{x: f(x)\geq \overline{f(x_{0})}-\varepsilon\right\}\quad\hbox{for}\quad\varepsilon>0.$$ Then, the point $x_{0}$ is called an $(\alpha,p)-$Wiener point of $f$ if $F_{\varepsilon}$ is not $(\alpha,p)-$thin at $x_{0}$. 
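Two model situations illustrate the fractional Wiener test of Theorem [Theorem 4](#Theorem 1.2){reference-type="ref" reference="Theorem 1.2"}; both are standard consequences of the scale invariance of $F_{\alpha, p}$ and are recorded here only as illustrations. First, a puncture is invisible: if $\Omega=B(0,1)\backslash \{0\}$ and $C_{\alpha, p}(\{0\})=0$, then $\Omega^{c}\cap B(0,r)=\{0\}$ for $0<r<1$, so $F_{\alpha, p}(0,\Omega^{c},r)=0$, $\mathcal{W}_{p}^{\alpha}(\Omega^{c},0)<+\infty$, and the origin is $(\alpha,p)-$irregular. Second, an exterior cone guarantees regularity: if $\Omega^{c}$ contains a solid cone $K$ with vertex $x_{0}$, then $K\cap B(x_{0},r)$ is the dilate of $K\cap B(x_{0},1)$ about $x_{0}$, so $F_{\alpha, p}(x_{0},K,r)=F_{\alpha, p}(x_{0},K,1)=:c_{0}>0$ for $0<r\leq 1$, and by the monotonicity of the capacity $$\mathcal{W}_{p}^{\alpha}(\Omega^{c},x_{0})\geq \mathcal{W}_{p}^{\alpha}(K,x_{0})=\int_{0}^{1}c_{0}^{\frac{1}{p-1}}\frac{dr}{r}=+\infty,$$ so $x_{0}$ is $(\alpha,p)-$regular.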
Set $$\Psi_{f,u_{0}}(\Omega)=\left\{u\in W^{\alpha,p}(\Omega):u\geq f\quad \hbox{in}\quad \Omega,\quad u_{0}\in W^{\alpha,p}(\Omega)\quad \hbox{with}\quad u-u_{0}\in W_{0}^{\alpha,p}(\Omega)\right\}.$$ The obstacle problem with obstacle $f$ and boundary value $u_{0}$ for $(\ref{(1.1)})$ (denoted by $\Phi_{f,u_{0}}(\Omega)$) is to find a function $u\in \Psi_{f,u_{0}}(\Omega)$ such that $$\langle(-\Delta_{p})^{\alpha}u, \varphi\rangle\geq 0\quad\hbox{for}\quad\varphi\in \Psi_{f,u_{0}}(\Omega).$$ Such a function $u$ is called the solution to $\Phi_{f,u_{0}}(\Omega).$ Let $f$ be bounded. A function $u$ is said to be a local solution to the obstacle problem at the point $x_{0},$ denoted by $\Phi^{x_0}_{f,u_{0}}(\Omega),$ if there is an open neighborhood $\Omega$ of $x_{0}$ such that $$u\in W^{\alpha,p}(\Omega),\quad u\geq f\quad q.e.$$ and $$\langle(-\Delta_{p})^{\alpha}u, \varphi\rangle\geq 0\quad \hbox{for}\quad \varphi\in W_{0}^{\alpha,p}(\Omega) \quad \hbox{with}\quad u+\varphi\geq f\quad q.e.$$ For more information about the obstacle problem, see, for example, [@KKP2] and [@Sh]. **Theorem 5**. *The following two statements hold.* - *If $x_{0}$ is an $(\alpha,p)-$Wiener point of $f,$ then each local solution to the obstacle problem $\Phi^{x_0}_{f,u_{0}}(\Omega)$ is continuous at $x_{0}.$* - *If $x_{0}$ is not an $(\alpha,p)-$Wiener point of $f$, then there exists a local solution of $\Phi^{x_0}_{f,u_{0}}(\Omega)$ that is not continuous at $x_{0}$.* ## Plan of the paper The rest of this paper is organized as follows. In Section [2](#S2){reference-type="ref" reference="S2"}, we provide the proof of our main results. 
More specifically, in Section [2.1](#s2){reference-type="ref" reference="s2"}, based on one lemma concerning the $(\alpha,p)-$thinness/thickness and another one discussing when $\widehat{\mathcal{R}}^{f}=\widehat{\mathcal{B}}^{f},$ we will prove Theorem [Theorem 1](#Theorem 1.1){reference-type="ref" reference="Theorem 1.1"}, which provides some characterizations for $(\alpha,p)$-thinness and $(\alpha,p)$-boundary regularity. In Section [2.2](#s3){reference-type="ref" reference="s3"}, we establish Theorem [Theorem 2](#Theorem 1.3){reference-type="ref" reference="Theorem 1.3"}, which provides regularity characterizations when $\Omega$ is bounded. Moreover, as a byproduct, we show that the regularity is a local property. When $\Omega$ is unbounded, in Section [2.3](#s4){reference-type="ref" reference="s4"}, after establishing the weak compactness in $\dot{W}^{\alpha,p}_0(\Omega),$ we prove Theorem [Theorem 3](#Theorem 1.4){reference-type="ref" reference="Theorem 1.4"} and its corollary. In Section [2.4](#s5){reference-type="ref" reference="s5"}, based on four technical lemmas which are significant for the understanding of the $(\alpha,p)-$superharmonic functions, the Perron regularity, the $(\alpha,p)-$balayage, and the solution of ([\[(1.2)\]](#(1.2)){reference-type="ref" reference="(1.2)"}) with Sobolev boundary values, we prove Theorem [Theorem 4](#Theorem 1.2){reference-type="ref" reference="Theorem 1.2"}, which provides the fractional Wiener test. In Section [2.5](#sec2.5){reference-type="ref" reference="sec2.5"}, we derive Theorem [Theorem 5](#Theorem 5.1){reference-type="ref" reference="Theorem 5.1"} from Theorem [Theorem 1](#Theorem 1.1){reference-type="ref" reference="Theorem 1.1"}. To see the significance of our main results and technical lemmas, in Section [3](#s8){reference-type="ref" reference="s8"} we derive more regularity conditions. 
After proving a generalized comparison lemma, in Section [3.1](#s6){reference-type="ref" reference="s6"}, we prove Theorem [Theorem 16](#Theorem 1.5){reference-type="ref" reference="Theorem 1.5"} concerning the continuity of $(\alpha,p)-$superharmonic functions. Based on a technical lemma about the uniform convergence for upper $(\alpha,p)-$Perron solutions, we prove the resolutivity (Theorem [Theorem 18](#Theorem 1.6){reference-type="ref" reference="Theorem 1.6"}) in Section [3.2](#s7){reference-type="ref" reference="s7"}. In the rest of Section [3](#s8){reference-type="ref" reference="s8"}, we prove a connection between $(\alpha,p)-$potentials and $(\alpha,p)-$Perron solutions, and the existence of a capacitary function for an arbitrary condenser using Theorem [Theorem 1](#Theorem 1.1){reference-type="ref" reference="Theorem 1.1"}, Theorem [Theorem 16](#Theorem 1.5){reference-type="ref" reference="Theorem 1.5"}, Lemma [Lemma 13](#lemma 3.4){reference-type="ref" reference="lemma 3.4"}, Lemma [Lemma 15](#lemma 3.6){reference-type="ref" reference="lemma 3.6"} and Lemma [Lemma 17](#lemma 5.1){reference-type="ref" reference="lemma 5.1"}. In the forthcoming discussions, $A\lesssim B$ ($A\gtrsim B$) means $A\leqslant CB$ ($A\geqslant CB$) for a positive constant $C$ which may change from line to line, and $A\thickapprox B$ amounts to $A\lesssim B\lesssim A$. We also write $u^{+}=\max\{u, 0\}$ and $u^{-}=\min\{u, 0\}.$ # Proof of main results {#S2} This section is devoted to the proof of our main results. We begin with the proof of Theorem [Theorem 1](#Theorem 1.1){reference-type="ref" reference="Theorem 1.1"}. ## Proof of Theorem [Theorem 1](#Theorem 1.1){reference-type="ref" reference="Theorem 1.1"}: characterizations of fractional thinness {#s2} We only need to show the cycle $(i)\Longrightarrow (ii)\Longrightarrow (iii)\Longrightarrow (i).$ To do so, some technical lemmas are needed. ### Important Lemmas **Lemma 6**. 
*Suppose that $E$ is a Borel set. Then $$\begin{cases} E\,\hbox{is}\, (\alpha,p)-\hbox{thin at}\,\, x_{0}\Longrightarrow \hbox{there exists an open set}\, O\supset E\backslash \{x_{0}\}\,\hbox{and}\, O\, \hbox{is}\,\,(\alpha,p)-\hbox{thin at}\, x_{0};\\ E\, \hbox{is}\, (\alpha,p)-\hbox{thick at}\, x_{0}\Longrightarrow \hbox{there exists a compact set}\, K\subset E\backslash \{x_{0}\}\, \hbox{and}\, K\, \hbox{is}\, (\alpha,p)-\hbox{thick at}\, x_{0}. \end{cases}$$* *Proof.* Let $B_{i}=B(x_{0},2^{-i})$ for $i=1,2,\cdots,$ and pick an open set $O_{i}\subset B_{i}$ with $E\cap B_{i}\subset O_{i}$ and $$\left(F_{\alpha,p}(x_{0},E,2^{-i})\right)^{\frac{1}{p-1}}\geq \left(\frac{C_{\alpha,p}(O_{i}, B_{i-1})}{C_{\alpha,p}(B_{i}, B_{i-1})}\right)^{\frac{1}{p-1}}-2^{-i},$$ where $F_{\alpha, p}(x_{0},E,r)$ is the $(\alpha,p)$-capacity density function. Without loss of generality, we assume that the sequence $\{O_{i}\}$ is decreasing and that $E\subset B_{1}$. Therefore, $E\backslash \{x_{0}\}\subset O=\cup_{i}(O_{i}\backslash \overline{B}_{i+2})$. Moreover, $C_{\alpha,p}(O\cap B_{1}, B_{0})\leq C_{\alpha,p}(B_{1}, B_{0})$, and hence $$C_{\alpha,p}(O\cap B_{i}, B_{i-1})\leq C_{\alpha,p}(O_{i-1}\cap B_{i}, B_{i-1})\leq CC_{\alpha,p}(O_{i-1}, B_{i-2}).$$ Accordingly, $$\sum_{i=1}^{\infty}\left(F_{\alpha,p}(x_{0},O,2^{-i})\right)^{\frac{1}{p-1}}\leq C+C\sum_{i=2}^{\infty}\left(\left(F_{\alpha,p}(x_{0},E,2^{-i})\right)^{\frac{1}{p-1}}+2^{-i}\right)<+\infty,$$ which is the desired result by [@SX3 Lemma 2.10]. Similarly, we can prove the second assertion by recalling $C_{\alpha,p}(E, \Omega)=\sup_{K\subset E}C_{\alpha,p}(K, \Omega)$ with $K$ compact. ◻ In order to state the next lemma, we need another version of $(\alpha,p)$-balayage. 
For a function $f$, which is locally bounded from below, let $$\mathcal{R}^{f}=\mathcal{R}^{f}_{\Omega}=\inf \left\{u: u\in H_{\alpha}^{+}(\Omega)\quad \hbox{and}\quad u\geq f\quad q.e.\quad \hbox{in}\quad \Omega\right\}.$$ Then the quasi $(\alpha,p)$-balayage can be defined as $$\widehat{\mathcal{R}}^{f}(x)=\widehat{\mathcal{R}}^{f}_{\Omega}(x)=\liminf_{y\longrightarrow x}\mathcal{R}^{f}_{\Omega}(y).$$ If $f$ is bounded, then $\widehat{\mathcal{R}}^{f}\in H_{\alpha}^{+}(\Omega)$ by a slight modification of [@SX3 Lemma 2.7] and $\widehat{\mathcal{R}}^{f}_{\Omega}\geq f$ q.e. in $\Omega$ by [@SX3 Theorem 1.3]. $\widehat{\mathcal{R}}^{f}_{\Omega}$ is the smallest function in $H_{\alpha}^{+}(\Omega)$ lying above the obstacle $f$ q.e. in $\Omega$. If $\Omega$ is bounded, then $f$ is bounded, and hence $\widehat{\mathcal{R}}^{f}_{\Omega}\in W^{\alpha,p}(\Omega)$ by [@Sh Proposition 2.11] and [@KKP4 Theorems 11 & 13]. By recalling the definition of $(\alpha,p)-$balayage $\widehat{\mathcal{B}}^{f}$ in Section [1](#s1){reference-type="ref" reference="s1"}, it is interesting to know whether sets of zero $(\alpha,p)-$capacity can be neglected. Namely, whether $$\widehat{\mathcal{R}}^{f}=\widehat{\mathcal{B}}^{f}?$$ The following lemma gives an answer to this question. **Lemma 7**. *Let $\Omega$ be a bounded open set. Then the following statements hold.* - *If $\widehat{\mathcal{R}}^{f}\in \dot{W}^{\alpha,p}(\Omega),$ then $\widehat{\mathcal{R}}^{f}=\widehat{\mathcal{B}}^{f}$;* - *If $f$ is nonnegative with $\operatorname{supp}f\Subset \Omega$, then $\widehat{\mathcal{R}}^{f}=\widehat{\mathcal{B}}^{f}$;* - *$\widehat{\mathcal{R}}^{f}$ is a local solution of $\Phi^{x_0}_{f,u_{0}}(\Omega)$.* *Proof.* For abbreviation, we write $\widehat{\mathcal{R}}, \widehat{\mathcal{B}}$ instead of $\widehat{\mathcal{R}}^{f}$ and $\widehat{\mathcal{B}}^{f}$, respectively. It remains to prove that $\widehat{\mathcal{B}}\leq \widehat{\mathcal{R}}$ since $\widehat{\mathcal{B}}\geq \widehat{\mathcal{R}}$ q.e. 
is immediate. By letting $S=\left\{x\in \Omega: \widehat{\mathcal{R}}(x)<f(x)\right\}$, we get from the fundamental convergence theorem [@SX3 Theorem 1.3] that $C_{\alpha,p}(S, \Omega)=0.$ Hence, there exists a nonnegative l.s.c. function $u\in W_{\alpha,p}(\mathbb{R}^{n})$ with $u(x)=\infty$ for $x\in S$. Let $f_{i}=\widehat{\mathcal{R}}+\frac{1}{i}u$ and let $v_{i}$ be the solution to $\Phi_{f_{i}, f_{i}}(\Omega)$. Then [@Sh Theorem 2.5, Theorem 3.1 & Corollary 3.7] allow us to assume that $v_{i}\in H_{\alpha}^{+}(\Omega).$ Hence, since $f_{i}$ is l.s.c., we get $$v_{i}(x)=\operatorname{essliminf}_{y\longrightarrow x}v_{i}(y)\geq \operatorname{essliminf}_{y\longrightarrow x}f_{i}(y)\geq f_{i}(x)\geq f(x)\quad\forall x\in \Omega$$ according to [@Sh Proposition 2.11]. Accordingly, $\widehat{\mathcal{B}}\leq \lim v_{i}=v.$ Moreover, the fact $f_{i}\longrightarrow \widehat{\mathcal{R}}$ in $\dot{W}_{\alpha,p}(\Omega)$, the uniqueness of the solution to $\Phi_{f_{i},f_{i}}$ and [@Sh Proposition 2.15] imply that $v=\widehat{\mathcal{R}}$ q.e. Finally, according to [@Sh Corollary 3.9], we get $$\widehat{\mathcal{R}}(x)=\operatorname{essliminf}_{y\longrightarrow x}v(y)\geq \operatorname{essliminf}_{y\longrightarrow x}\widehat{\mathcal{B}}(y)=\widehat{\mathcal{B}}(x)\quad \forall x\in \Omega,$$ which proves (i). To prove (ii), by (i) we only need to show $\widehat{\mathcal{R}}\in \dot{W}_{\alpha,p}(\Omega)$. By choosing a neighborhood $U$ of $\partial \Omega$ such that $$\begin{cases} \overline{U}\cap \operatorname{supp}f=\emptyset;\\ \mathbb{R}^{n}\backslash U \, \hbox{is not}\, (\alpha,p)\hbox{-thin at every}\quad x\in \partial U, \end{cases}$$ it follows from [@SX3 Lemma 2.7] that $\widehat{\mathcal{R}}\in H_{\alpha}(\Omega\backslash \operatorname{supp}f)$ and hence $\widehat{\mathcal{R}}\in H_{\alpha}(U\cap \Omega)$. 
Let $$\left\{ \begin{array}{ll} \partial U\cap\Omega\subset \Omega_{i}\Subset \Omega_{i+1}\Subset \Omega\quad \hbox{with}\quad \cup_{i}\Omega_{i}=\Omega;\\ \varphi= \widehat{\mathcal{R}}\,\,\,\, \hbox{on}\,\,\,\, \partial U\cap\Omega\quad \hbox{with}\quad \operatorname{supp}\varphi\Subset \Omega\quad \hbox{for}\quad \varphi\in C(\Omega)\cap \dot{W}^{\alpha,p}(\Omega);\\ h_{i}\in H_{\alpha}(U\cap \Omega_{i})\quad \hbox{with}\quad h_{i}-\varphi\in \dot{W}_{0}^{\alpha,p}(U\cap \Omega_{i}). \end{array}\right.$$ Then the comparison principle [@KKP4 Lemma 6] (see also [@P Theorem 15]) gives $$\widehat{\mathcal{R}}(x)\geq h_{i+1}(x)\geq h_{i}(x)\quad\hbox{for}\quad x\in U\cap \Omega_{i}$$ since $\widehat{\mathcal{R}}\in W^{\alpha,p}(U\cap \Omega_{i})$ [@KKP4 Theorem 1, Lemma 5 & Theorem 11] with $$\widehat{\mathcal{R}}\geq 0 \quad \hbox{and}\quad \widehat{\mathcal{R}}=\varphi=h_{i} \quad \hbox{on}\quad \partial U\cap \Omega_{i}=\partial U\cap \Omega.$$ Consequently, [@KKP4 Theorem 15] implies $\lim_{i}h_{i}=h\in H_{\alpha}(U\cap \Omega).$ Furthermore, we get $$[h_{i}]_{\dot{W}_{\alpha,p}(U\cap \Omega_{i})}^{p}\leq C[\varphi]_{\dot{W}_{\alpha,p}(\Omega)}^{p},$$ which shows $h\in \dot{W}^{\alpha,p}(U\cap \Omega)$. Next, we claim that $h=\widehat{\mathcal{R}}$ in $U\cap \Omega$. In fact, $h\leq \widehat{\mathcal{R}}$ is immediate. To prove the reverse inequality, let $$\widetilde{h}=\begin{cases} \min\{h,\widehat{\mathcal{R}}\}\,\, \hbox{in}\,\, U\cap\Omega;\\ \widehat{\mathcal{R}}\,\, \hbox{in}\,\, \Omega\backslash U. \end{cases}$$ Then $\widetilde{h}$ is l.s.c. in $\Omega$ by the fact that $\lim_{y\longrightarrow x}h(y)=\widehat{\mathcal{R}}(x)$ for all $x\in \partial U\cap \Omega.$ It follows from [@Sh Proposition 2.21] that $\widetilde{h}\in H_{\alpha}^{+}(\Omega).$ Hence we get $\widetilde{h}\geq \widehat{\mathcal{R}}$ in $\Omega$, and finally $h= \widehat{\mathcal{R}}$ in $U\cap \Omega$, which completes the proof of (ii). (iii). Let $x_{0}\in \Omega$ and $B=B(x_{0},r)\Subset \Omega$. 
Then $\widehat{\mathcal{R}}^{f}\in W^{\alpha,p}(B)$ by [@KKP4]. Suppose that $u$ is the solution to $\Phi_{f, \widehat{\mathcal{R}}}(B)$. Then it follows from [@Sh Theorem 3.1 & Corollary 3.7] that $u\in H_{\alpha}^{+}(B).$ Hence we obtain $\widehat{\mathcal{R}}\geq u$ in $B$ according to [@Sh Lemma 2.6] since $\min\left\{u, \widehat{\mathcal{R}}\right\}-\widehat{\mathcal{R}}\in W_{0}^{\alpha,p}(B)$. Next, we need to show $u\geq \widehat{\mathcal{R}}$ in $B$ in order to complete the proof. Denote by $\{\varphi_{i}\}$ an increasing sequence with $\{\varphi_{i}\}\subset C_{c}^{\infty}(\mathbb{R}^{n})$ and $\varphi_{i}\longrightarrow \widehat{\mathcal{R}}$ on $\partial B.$ Let $h_{i}\in C(\overline{B})\cap H_{\alpha}(B)$ be the unique function such that $h_{i} =\varphi_{i}$ on $\partial B.$ It follows from [@Sh Lemma 2.8] that $\min\{u-h_{i},0\}\in W_{0}^{\alpha,p}(B)$ and hence $u\geq h_{i}$ in $B.$ This implies that $\liminf_{y\longrightarrow x}u(y)\geq \widehat{\mathcal{R}}(x)$ for all $x\in \partial B.$ Accordingly, the function $$v=\begin{cases} \min\{\widehat{\mathcal{R}},u\}\quad\hbox{in}\quad B;\\ \widehat{\mathcal{R}}\quad\hbox{in}\quad\Omega\backslash B \end{cases}$$ is l.s.c. and $v\in H_{\alpha}^{+}(\Omega)$ due to [@Sh Proposition 2.21]. Consequently, $u\geq v\geq \widehat{\mathcal{R}}$ in $B$ as desired. ◻ Having proved the previous two critical lemmas, we are ready to prove Theorem [Theorem 1](#Theorem 1.1){reference-type="ref" reference="Theorem 1.1"}. ### Proof of Theorem [Theorem 1](#Theorem 1.1){reference-type="ref" reference="Theorem 1.1"} {#proof-of-theorem-theorem-1.1} \(i\) $\Longrightarrow$ (ii). Suppose that $E$ is $(\alpha,p)$-thin at $x_{0}\notin E.$ Then we can assume that $E$ is open according to Lemma [Lemma 6](#lemma 2.1){reference-type="ref" reference="lemma 2.1"}. 
Taking $$\begin{cases} B_{i}=B(x_{0},r_{i})\quad\hbox{with}\quad r_{i}=2^{-i};\\ E_{i}=E\cap B_{i};\\ u=\widehat{\mathcal{B}}^{1}_{E_{j}}(B_{j-2})\quad\hbox{with}\quad(-\Delta_{p})^{\alpha}u=\mu,\quad\hbox{where}\quad j\geq 2\quad\hbox{is an integer}, \end{cases}$$ we conclude that $u\geq 1$ on $E_{j}$. In order to obtain (ii), it suffices to show that $u(x_{0})<1$. It follows from [@SX3 Lemma 2.8] and [@SX1 Theorem 2.2] that $$\left(\inf_{B_{j}}u\right)^{p-1}r_{j}^{n-\alpha p}\leq C\left(\inf_{B_{j}}u\right)^{p-1}C_{\alpha,p}\left(\left\{u>\inf_{B_{j}}u\right\}, B_{j-2}\right)\leq C\mu(B_{j-2})\leq C\mu(B_{j-1})$$ and consequently, $$\inf_{B_{j}}u\leq C\left(\frac{\mu(B_{j-1})}{r_{j-1}^{n-\alpha p}}\right)^{\frac{1}{p-1}}.$$ On the other hand, [@SX2 Lemma 5.1] shows that, for $i>j-2,$ $$\mu(B_{i})\leq C C_{\alpha,p}(E_{i}, B_{j-2})\leq C C_{\alpha,p}(E_{i}, B_{i-1}).$$ The previous inequality, together with [@KMS Theorem 1.2] and the fact $u\geq 1$, implies that there exists $C:=C(n,\alpha,p)>0$ such that $$u(x_{0})\leq C\inf_{B_{j}}u+C\textbf{W}_{\alpha,p}^{\mu}(x_{0}, r_{j-1})\leq C\sum_{i=j-1}^{\infty}\left(\frac{C_{\alpha,p}(E_{i}, B_{i-1})}{r_{i}^{n-\alpha p}}\right)^{\frac{1}{p-1}}\leq\frac{1}{2}$$ as desired, by taking $j$ large enough. \(ii\) $\Longrightarrow$ (iii). We give the proof only for the case $x\in \overline{E\backslash \{x_{0}\}}$ since the case $x\notin \overline{E\backslash \{x_{0}\}}$ is straightforward. We can assume that $\liminf_{x\rightarrow x_{0}, x\in E\backslash \{x_{0}\}}u(x)>1>u(x_{0})$ for the $u$ given in (ii). By choosing an open set $V(x_{0})\Subset U(x_{0})$ with $u|_{(E\cap V(x_{0}))\backslash \{x_{0}\}}>1$, Lemma [Lemma 7](#lemma 2.2){reference-type="ref" reference="lemma 2.2"} implies $$\widehat{\mathcal{B}}^{1}_{E\cap V(x_{0})}(x_{0})=\widehat{\mathcal{B}}^{1}_{(E\cap V(x_{0}))\backslash \{x_{0}\}}(x_{0})\leq u(x_{0})<1,$$ which is (iii). \(iii\) $\Longrightarrow$ (i). 
Without loss of generality, we assume that $U(x_{0})$ is bounded and $(\alpha, p)$-regular. The Choquet topological lemma [@HKM2 Lemma 8.3] implies the existence of a nonnegative decreasing sequence $\{v_{i}\}\subset H_{\alpha}^{+}(U(x_{0}))$ satisfying $$\begin{cases} v_{i}(x)=u(x)\quad\hbox{for all} \quad x\in E\cap V(x_{0});\\ \widehat{v}(x)=\liminf_{y\longrightarrow x}v(y)=\widehat{\mathcal{B}}^{u}_{E\cap V(x_{0})}(x)\quad\hbox{for}\quad v=\lim_{i}v_{i}\quad\hbox{and}\quad x\in U(x_{0}). \end{cases}$$ Set $\widetilde{E}=\left\{x\in V(x_{0}):v(x)=u(x)\right\}.$ We may assume that $E\subset \widetilde{E}$ is a Borel set and obtain $$\widehat{\mathcal{B}}^{u}_{\widetilde{E}}(x_{0})\leq \widehat{v}(x_{0})=\widehat{\mathcal{B}}^{u}_{E\cap V(x_{0})}(x_{0})<u(x_{0}).$$ Moreover, (ii) of Lemma [Lemma 7](#lemma 2.2){reference-type="ref" reference="lemma 2.2"} allows us to assume $x_{0}\in E$. Next, we prove (i) by contradiction. Assume that $E$ is not $(\alpha, p)$-thin at $x_{0}$. Then Lemma [Lemma 6](#lemma 2.1){reference-type="ref" reference="lemma 2.1"} gives the existence of a compact set $K\subset E\cap V(x_{0})$ which is not $(\alpha, p)$-thin at $x_{0}$. Let $$\begin{cases} \{\varphi_{i}\}\subset C_{c}^{\infty}(U(x_{0}))\quad\hbox{be increasing with}\quad u=\lim_{i}\varphi_{i}\quad\hbox{in}\quad K;\\ h_{i}\in H_{\alpha}(U(x_{0})\backslash K)\quad\hbox{be unique with}\quad h_{i}-\varphi_{i}\in W_{0}^{\alpha,p}(U(x_{0})\backslash K);\\ w\in H_{\alpha}^{+}(U(x_{0}))\quad\hbox{be nonnegative with}\quad w\geq u\quad\hbox{in}\quad K;\\ \widetilde{w}=\min\{w, \sup h_{i}\}, \quad w_{0}=\min\{\widetilde{w}+\varepsilon-h_{i},0\}\quad\hbox{for all}\quad\varepsilon>0. \end{cases}$$ Then $u\geq h_{i}$ in $U(x_{0})\backslash K$, $\widetilde{w}\in W^{\alpha,p}(V(x_{0}))$ by [@KKP4 Theorem 1] and $w_{0}\in W_{0}^{\alpha,p}(U(x_{0})\backslash K)$. 
We conclude from the comparison principle [@Sh Lemma 2.8] that $\widetilde{w}\geq h_{i}$ in $U(x_{0})\backslash K.$ Hence we get $$u\geq \mathcal{B}^{u}_{K}=\mathcal{B}^{u\chi_{K}}(U(x_{0}))\geq h_{i}\quad\hbox{in}\quad U(x_{0})\backslash K,$$ and finally $$\widehat{\mathcal{B}}^{u}_{K}(x_{0})=\sup_{r>0}\inf _{B(x_{0},r)}\mathcal{B}^{u}_{K}=\min\left\{\liminf_{y\longrightarrow x_{0}, y\in U(x_{0})\backslash K}\mathcal{B}_{K}^{u}(y), u(x_{0})\right\}\geq \min\left\{\lim_{y\longrightarrow x_{0}, y\in U(x_{0})\backslash K}h_{i}(y), u(x_{0})\right\}=\varphi_{i}(x_{0})$$ since $K$ is not $(\alpha, p)$-thin at $x_{0}$ (see for example [@KKP4]). Thus, we get $$u(x_{0})\leq \widehat{\mathcal{B}}^{u}_{K}(x_{0})\leq \widehat{\mathcal{B}}^{u}_{E\cap V(x_{0})}(x_{0})<u(x_{0}),$$ which is a contradiction. Therefore (i) is proved. **Remark 1**. (iii) of Theorem [Theorem 1](#Theorem 1.1){reference-type="ref" reference="Theorem 1.1"} shows that $x_{0}$ is not $(\alpha, p)$-regular. In fact, let $u$ be as in (iii) and $x_{0}\in V(x_{0})\Subset U(x_{0})$. Let $$\begin{cases} \{\varphi_{i}\}\subset C_{c}^{\infty}(U(x_{0}))\,\,\hbox{be increasing with}\,\, u=\lim_{i}\varphi_{i}\quad\hbox{on}\quad\partial(V(x_{0})\cap \Omega);\\ h_{i}\in H_{\alpha}(U(x_{0})\cap \Omega)\,\,\hbox{be unique with}\,\, h_{i}-\varphi_{i}\in W^{\alpha,p}_{0}(U(x_{0})\cap\Omega). \end{cases}$$ Then, by the proof of (iii) $\Longrightarrow$ (i), $h_{i}\leq \widehat{\mathcal{B}}^{u}_{E\cap V(x_{0})}$ in $U(x_{0})\cap \Omega$. If $x_{0}$ is $(\alpha, p)$-regular, then $$\varphi_{i}(x_{0})=\lim_{x\longrightarrow x_{0}}h_{i}(x)\leq \widehat{\mathcal{B}}^{u}_{E\cap V(x_{0})}(x_{0}).$$ Letting $i\longrightarrow \infty,$ we get $u(x_{0})\leq \widehat{\mathcal{B}}^{u}_{E\cap V(x_{0})}(x_{0}),$ which is a contradiction. 
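For completeness, we recall the truncated Wolff potential entering the estimate in the proof of (i) $\Longrightarrow$ (ii) above; we assume the standard normalization (cf. [@KMS]): $$\textbf{W}_{\alpha,p}^{\mu}(x_{0},r)=\int_{0}^{r}\left(\frac{\mu(B(x_{0},t))}{t^{n-\alpha p}}\right)^{\frac{1}{p-1}}\frac{dt}{t},$$ which is comparable, up to a constant depending only on $n,\alpha,p$, to the dyadic sum $\sum_{i}\left(\mu(B_{i})/r_{i}^{n-\alpha p}\right)^{\frac{1}{p-1}}$ over the radii $r_{i}=2^{-i}\leq r$. 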
## Proof of Theorem [Theorem 2](#Theorem 1.3){reference-type="ref" reference="Theorem 1.3"}: characterizations of Perron regularity for bounded domains {#s3} We prove Theorem [Theorem 2](#Theorem 1.3){reference-type="ref" reference="Theorem 1.3"} by showing (i)$\Longleftrightarrow$(ii) and (i)$\Longrightarrow$ (iii)$\Longrightarrow$ (iv)$\Longrightarrow$ (v)$\Longrightarrow$ (i). The equivalence of (i) and (ii) was given in [@LL2 Theorem 26]. (i)$\Longrightarrow$ (iii). Let $$\begin{cases} \{u_{i}\}\subset C(\mathbb{R}^{n})\,\,\hbox{be increasing},\quad i=1,2,\cdots;\\ \lim_{i}u_{i}=u\,\,\hbox{in}\,\, U\,\,\hbox{and}\quad u_{i}=0\quad\hbox{on}\quad\partial V;\\ v_{i}=\begin{cases}\overline{H}_{u_{i}}^{\alpha}(V\backslash (\overline{U}\backslash \Omega))\,\,\hbox{in}\,\, V\backslash (\overline{U}\backslash \Omega);\\ u_{i}\quad\hbox{in}\quad\overline{U}\backslash \Omega. \end{cases} \end{cases}$$ It follows from (ii) of Corollary [Corollary 8](#lemma 3.3){reference-type="ref" reference="lemma 3.3"} that $$u_{i}(x_{0})=\lim_{x\longrightarrow x_{0}}v_{i}(x)\leq \liminf_{x\longrightarrow x_{0}}\mathcal{B}^{u}_{\overline{U}\backslash \Omega}(V)(x)=\widehat{\mathcal{B}}^{u}_{\overline{U}\backslash \Omega}(V)(x_{0}),$$ and hence $u(x_{0})\leq \widehat{\mathcal{B}}^{u}_{\overline{U}\backslash \Omega}(V)(x_{0})$, which implies (iii) since it is not hard to get $u(x_{0})\geq \widehat{\mathcal{B}}^{u}_{\overline{U}\backslash \Omega}(V)(x_{0}).$ (iii)$\Longrightarrow$(iv) is straightforward. (iv)$\Longrightarrow$(v) follows from [@SX3 Proposition 3.2]. (v)$\Longrightarrow$(i). This can be done by showing (v)$\Longrightarrow$ (iv)$\Longrightarrow$ (i). Assume that $x_{0}$ is $(\alpha,p)$-regular and that $B$ is a ball containing $x_{0}$. Then [@SX3 Theorem 1.2] implies $$\widehat{\mathcal{B}}^{1}_{\overline{B}\backslash \Omega}(2B)(x_{0})=\lim_{x\longrightarrow x_{0}, x\in \Omega}\widehat{\mathcal{B}}^{1}_{\overline{B}\backslash \Omega}(2B)(x)=1,$$ which is (iv). 
(iv)$\Longrightarrow$ (i) follows immediately from Lemma [Lemma 12](#lemma 3.2){reference-type="ref" reference="lemma 3.2"}. This finishes the proof of Theorem [Theorem 2](#Theorem 1.3){reference-type="ref" reference="Theorem 1.3"}. **Regularity is a local property**. From the equivalence (i)$\Longleftrightarrow$(ii) of Theorem [Theorem 2](#Theorem 1.3){reference-type="ref" reference="Theorem 1.3"}, we obtain the following fact that regularity is a local property. **Corollary 8**. *Let $O, \Omega \subset \mathbb{R}^{n}$ be open sets, $x_{0}\in \partial\Omega \cap \partial O$ and $U(x_{0})$ be a neighborhood of $x_{0}$.* - *If $O\subset \Omega$ is such that $U(x_{0})\cap O=U(x_{0})\cap \Omega$, then there exists an $(\alpha, p)$-barrier relative to $\Omega$ at $x_{0}$ if and only if there exists an $(\alpha, p)$-barrier relative to $O$ at $x_{0}.$* - *If $U(x_{0})\cap O=U(x_{0})\cap \Omega$, then $x_{0}$ is Perron regular relative to $\Omega$ if and only if $x_{0}$ is Perron regular relative to $O.$* - *Assume that $O\subset\Omega\subset \mathbb{R}^{n}$. Then $x_{0}$ is Perron regular relative to $O$ if $x_{0}$ is Perron regular relative to $\Omega$.* *Proof.* The equivalence of (i) and (ii) in Theorem [Theorem 2](#Theorem 1.3){reference-type="ref" reference="Theorem 1.3"} shows that the existence of an $(\alpha, p)$-barrier is a local property, which gives (i); (ii) is then a consequence of (i) together with the equivalence (i)$\Longleftrightarrow$(ii) of Theorem [Theorem 2](#Theorem 1.3){reference-type="ref" reference="Theorem 1.3"}, while (iii) is a byproduct of (ii). ◻ ## Proof of Theorem [Theorem 3](#Theorem 1.4){reference-type="ref" reference="Theorem 1.4"}: characterizations of Perron regularity for unbounded domains {#s4} To prove Theorem [Theorem 3](#Theorem 1.4){reference-type="ref" reference="Theorem 1.4"}, we need the following lemma concerning weak compactness in $\dot{W}^{\alpha,p}_{0}(\Omega)$. **Lemma 9**. 
*Let $\{u_{i}\}\subset \dot{W}^{\alpha,p}_{0}(\Omega)$ be a bounded sequence in $\dot{W}^{\alpha,p}(\Omega)$ satisfying $\lim_{i}u_{i}=u$, with $\{u_{i}\}$ bounded in $\dot{W}^{\alpha,p}(O)$ for each open set $O\Subset \Omega.$ Then $u\in \dot{W}^{\alpha,p}_{0}(\Omega)$ and $u_{i}\longrightarrow u\,\,\hbox{weakly in}\,\,\dot{W}^{\alpha,p}(\Omega).$* *Proof.* From [@Sh Lemma 2.2], it follows that $u\in \dot{W}^{\alpha,p}_{loc}(\Omega)$ and $u_{i}\longrightarrow u$ weakly in $\dot{W}^{\alpha,p}(O)$ for $O\Subset \Omega$. The $\dot{W}^{\alpha,p}(\Omega)$-boundedness of $\{u_{i}\}$ gives the existence of a weakly convergent subsequence $\{u_{i_{j}}\}$ of $\{u_{i}\}$. Hence, the weak convergence $u_{i_{j}}\longrightarrow u$ implies $u\in \dot{W}^{\alpha,p}(\Omega).$ Furthermore, we have $u_{i}\longrightarrow u$ weakly in $\dot{W}^{\alpha,p}(\Omega)$ since the weak limit is independent of the choice of subsequence. We conclude from the Mazur lemma [@HKM2 Lemma 1.29] that $u\in \dot{W}^{\alpha,p}_{0}(\Omega)$. In fact, for fixed $\varepsilon>0$ we can choose a sequence $\{v_{i}\}\subset \dot{W}^{\alpha,p}_{0}(\Omega)$ of convex combinations of the $u_{i}$ with $v_{i}\longrightarrow u$ in $\dot{W}^{\alpha,p}(\Omega)$, an index $i$ and a function $\varphi\in C_{c}^{\infty}(\Omega)$ such that $$[v_{i}-u]_{\dot{W}^{\alpha,p}(\Omega)}<\left(\frac{\varepsilon}{2}\right)^{p}\quad \hbox{and}\quad[v_{i}-\varphi]_{\dot{W}^{\alpha,p}(\Omega)}<\left(\frac{\varepsilon}{2}\right)^{p}.$$ Thus, we achieve the desired result $[\varphi-u]_{\dot{W}^{\alpha,p}(\Omega)}<\varepsilon$. ◻ **Proof of Theorem [Theorem 3](#Theorem 1.4){reference-type="ref" reference="Theorem 1.4"}**. Theorem [Theorem 3](#Theorem 1.4){reference-type="ref" reference="Theorem 1.4"} will be proved by showing (i)$\Longrightarrow$(ii)$\Longrightarrow$(iii)$\Longrightarrow$(iv)$\Longrightarrow$(v)$\Longrightarrow$(i). (i)$\Longrightarrow$ (ii) is straightforward. (ii)$\Longrightarrow$ (iii). 
Assume that $\infty$ is Perron regular for $\Omega=\overline{B}^{c}$ with $B$ a ball. Then $\Omega$ is regular since $\overline{B}$ is $(\alpha,p)$-thick at each of its points. This implies the existence of a function $u\in H_{\alpha}(\Omega)$ such that $u\in C(\overline{\Omega})$, $u=1$ on $\partial B$ and $u=0$ at $\infty$. Consequently, extending $u$ by setting $u=1$ in $B$, we obtain the desired nonconstant bounded function $u\in H_{\alpha}^{+}(\mathbb{R}^{n})$ with $0\leq u\leq1$. (iii)$\Longrightarrow$ (iv). Let $u$ be the function in (iii). Without loss of generality, we assume $\inf u=0$. Choosing an open ball $B\subset \mathbb{R}^{n}$, we observe that $m=\min _{\overline{B}}u>0.$ We complete the proof by contradiction. Suppose that $C_{\alpha,p}(\overline{B}, \mathbb{R}^{n})=0.$ Then $\mathbb{R}^{n}$ can be exhausted by increasing concentric balls $\{B_{i}\}$ with $B\Subset B_{1}\Subset B_{2}\Subset\cdots$ and $C_{\alpha,p}(\overline{B}, B_{i})\longrightarrow 0$. Let $u_{i}=\widehat{\mathcal{B}}^{1}_{\overline{B}}(B_{i})$. Then $u_{i}\in H_{\alpha}(B_{i}\backslash \overline{B})$ and $u_{i}\longrightarrow v\in H_{\alpha}(\mathbb{R}^{n}\backslash \overline{B})$ by [@SX3 Theorem 1.2] and [@KKP4 Theorem 15]. It follows from the fact $v\leq u/m$ that $v$ is not constant. In fact, $\lim_{x\longrightarrow y}v(x)=1$ for $y\in \partial B$. At the same time, the quasiminimizing property [@Sh P 1056] gives $[u_{i}]_{\dot{W}^{\alpha,p}(B_{i})}\leq C_{\alpha,p}(\overline{B}, B_{i})$. Namely, $\{u_{i}\}$ is bounded in $\dot{W}^{\alpha,p}(\mathbb{R}^{n})$. 
We conclude from Lemma [Lemma 9](#lemma 4.1){reference-type="ref" reference="lemma 4.1"} that $v\in \dot{W}^{\alpha,p}(\overline{B}^{c})$ and $u_{i}\longrightarrow v$ weakly in $\dot{W}^{\alpha,p}(\overline{B}^{c}).$ Hence it follows from [@HKM2 Remark 5.25] that $$[v]_{\dot{W}^{\alpha,p}(\overline{B}^{c})}\leq \liminf_{i\longrightarrow \infty}[u_{i}]_{\dot{W}^{\alpha,p}(\overline{B}^{c})}\leq \lim_{i\longrightarrow \infty}C_{\alpha,p}(\overline{B}, B_{i})=0.$$ Finally, we derive that $v$ is constant in $\overline{B}^{c}$, which is a contradiction, and thus (iv) follows. (iv)$\Longrightarrow$(v) is straightforward. (v)$\Longrightarrow$(i). Let $$\begin{cases} C_{\alpha,p}(\overline{B}, \mathbb{R}^{n})=\delta>0\quad\hbox{for a ball}\quad B;\\ u_{i}=\widehat{\mathcal{B}}^{1}_{\overline{B}}(B_{i})\quad\hbox{with}\quad B_{i}=(i+1)B,\quad i=1,2,\cdots. \end{cases}$$ Then $u_{i}\longrightarrow u\in H_{\alpha}(\overline{B}^{c}).$ We first claim that $u$ is not constant. In fact, by the same analysis as in the proof of (iii)$\Longrightarrow$(iv), we have $u_{i}\longrightarrow u$ weakly in $\dot{W}^{\alpha,p}(\overline{B}^{c})$. The Mazur lemma [@HKM2] yields the existence of convex combinations of the $u_{i}$, $$\overline{u_{i}}=\sum_{j=1}^{i}\lambda_{j,i}u_{j},\quad \sum_{j=1}^{i}\lambda_{j,i}=1\quad \hbox{with}\quad \lambda_{j,i}\geq 0,$$ such that $\overline{u_{i}}\longrightarrow u$ in $\dot{W}^{\alpha,p}(\overline{B}^{c})$. If $u$ were constant, then there would exist an $i_{k}$ with $[\overline{u_{i_{k}}}]_{\dot{W}^{\alpha,p}(\overline{B}^{c})}<\delta$. However, $C_{\alpha,p}(\overline{B}, \mathbb{R}^{n})\leq C_{\alpha,p}(\overline{B}, B_{i_{k}})<\delta,$ a contradiction. Next, we claim that $\infty$ has a strictly positive $(\alpha,p)$-barrier relative to $\mathbb{R}^{n}$. 
Indeed, by [@SX3 Lemma 2.5], we infer that $\lim_{x\longrightarrow \infty}u(x)=\inf u<1.$ Moreover, setting $u=1$ on $\overline{B}$, we have $u\in H_{\alpha}^{+}(\mathbb{R}^{n})$ since $\lim_{x\longrightarrow y\in\partial B}u(x)=1$. Accordingly, $v=u-\inf u$ is the desired $(\alpha,p)$-barrier, which implies that $v$ is an $(\alpha,p)$-barrier at $\infty$ relative to each unbounded open set. This yields (i) by Theorem [Theorem 2](#Theorem 1.3){reference-type="ref" reference="Theorem 1.3"}. The proof of Theorem [Theorem 3](#Theorem 1.4){reference-type="ref" reference="Theorem 1.4"} is thus finished. If $\Omega$ is bounded in Theorem [Theorem 3](#Theorem 1.4){reference-type="ref" reference="Theorem 1.4"}, we have the following corollary. **Corollary 10**. *Assume that $\infty$ is a Perron irregular boundary point of an unbounded open set $E$ and $\Omega\subset \mathbb{R}^{n}$ is a bounded open set. Then the following statements are equivalent:* - *$C_{\alpha, p}(\Omega^{c})>0$;* - *There exists a bounded $u\in H_{\alpha}^{+}(\Omega)$ which is nonconstant in each component of $\Omega$;* - *$C_{\alpha, p}(B, \Omega)>0$ for each ball $B\Subset \Omega$;* - *$C_{\alpha, p}(B, \Omega)>0$ for some ball $B\Subset \Omega$.* *Proof.* (i)$\Longrightarrow$(ii)$\Longrightarrow$(iii)$\Longrightarrow$ (iv) go essentially as in the proof of Theorem [Theorem 3](#Theorem 1.4){reference-type="ref" reference="Theorem 1.4"}, due to the fact that there exists at least one Perron regular point of $\Omega\backslash \overline{B}$ on $\partial \Omega$ if $B\Subset \Omega$ is a ball, and that the balls $\{B_{i}\}$ in the argument can be replaced by polyhedra. The only difference lies in showing (iv)$\Longrightarrow$ (i), which can be done by contradiction. 
Since $\infty$ is a Perron irregular boundary point of $\Omega$, Theorem [Theorem 3](#Theorem 1.4){reference-type="ref" reference="Theorem 1.4"} gives $C_{\alpha,p}(\overline{B}, \mathbb{R}^{n})=0.$ Then, for fixed $\varepsilon>0$, we can pick a function $\varphi\in Y^{\alpha,p}_{0}(\overline{B}, \mathbb{R}^{n})$ such that $[\varphi]_{\dot{W}^{\alpha,p}(\mathbb{R}^{n})}<\varepsilon$. Assume $C_{\alpha,p}(\Omega^{c})=0$ and let $K=\operatorname{supp}\varphi\cap \Omega^{c}.$ Then we get $C_{\alpha,p}(K)=0$. We thus pick a function $\psi\in C_{c}^{\infty}(\mathbb{R}^{n})$ with $$0\leq \psi \leq 1;\quad \psi=1\quad\hbox{in a neighborhood of}\quad K;\quad \psi=0\quad\hbox{on}\quad\overline{B};\quad [\psi]_{\dot{W}^{\alpha,p}(\mathbb{R}^{n})}<\varepsilon.$$ Consequently, we get $\eta=(1-\psi)\varphi\in Y^{\alpha,p}_{0}(\overline{B}, \Omega).$ Hence, we derive $$C_{\alpha,p}(\overline{B},\Omega)\leq [\eta]_{\dot{W}^{\alpha,p}(\Omega)}\leq 2^{p-1}\left([\varphi]_{\dot{W}^{\alpha,p}(\mathbb{R}^{n})}+[\psi]_{\dot{W}^{\alpha,p}(\mathbb{R}^{n})}\right)\leq 2^{p}\varepsilon,$$ which contradicts (iv), and then (i) follows. ◻ ## Proof of Theorem [Theorem 4](#Theorem 1.2){reference-type="ref" reference="Theorem 1.2"}: the fractional Wiener test {#s5} In this section, we prove the fractional Wiener test by using Theorem [Theorem 1](#Theorem 1.1){reference-type="ref" reference="Theorem 1.1"}. The second equivalence of Theorem [Theorem 4](#Theorem 1.2){reference-type="ref" reference="Theorem 1.2"} follows immediately from the definition of $(\alpha, p)$-thickness. We only need to show that the $(\alpha, p)$-regular boundary points can be characterized by the fractional Wiener integral. The regularity of $x_{0}$ follows from the divergence of the fractional Wiener integral according to [@SX3 Theorem 1.1]. It thus suffices to show that the regularity of $x_{0}\in \partial\Omega$ implies the divergence of $\mathcal{W}^{\alpha}_{p}(\Omega^{c}, x_{0})$. 
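Here and below, we use the fractional Wiener integral in the form (a sketch of the standard normalization; any comparable normalization yields the same divergence criterion): $$\mathcal{W}^{\alpha}_{p}(E, x_{0})=\int_{0}^{1}\left(\frac{C_{\alpha,p}(E\cap B(x_{0},t), B(x_{0},2t))}{t^{n-\alpha p}}\right)^{\frac{1}{p-1}}\frac{dt}{t},$$ which is comparable, up to constants depending only on $n,\alpha,p$, to the dyadic sums appearing in the proof of Theorem [Theorem 1](#Theorem 1.1){reference-type="ref" reference="Theorem 1.1"}. 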
To do this, we prove the following lemmas, which are critical not only for the proof of Theorem [Theorem 4](#Theorem 1.2){reference-type="ref" reference="Theorem 1.2"} but also for other results in Section [3](#s8){reference-type="ref" reference="s8"}. ### Important Lemmas The class $H_{\alpha}^{+}(\Omega)$ is not defined locally, since its definition requires testing in all open sets $O\Subset \Omega$. However, we can show the local nature of $(\alpha, p)$-superharmonic functions by [@Sh Proposition 2.23, Theorem 3.6, Corollaries 3.7 & 3.9]. Indeed, we have the following result. **Lemma 11**. *Let $\Omega$ be an open set. Then* - *$u\in H_{\alpha}^{+}(\Omega)$ if and only if every $x\in \Omega$ has a neighborhood $U\subset \Omega$ with $u|_{U}\in H_{\alpha}^{+}(U)$.* - *Let $$\begin{cases} O_{i},\quad i=1,2\quad\hbox{be open sets with}\quad\Omega=O_{1}\cup O_{2};\\ u\in H_{\alpha}^{+}(O_{1}),\quad v\in H_{\alpha}^{+}(O_{2})\quad\hbox{satisfying}\quad u\leq v\quad\hbox{in}\quad O_{1}\cap O_{2}\\ \quad \hbox{and}\quad w=\begin{cases}v\quad\hbox{on}\quad\Omega\backslash O_{1};\\ u\quad\hbox{in}\quad O_{1} \end{cases} \hbox{be l.s.c.} \end{cases}$$ Then $w\in H_{\alpha}^{+}(\Omega)$.* *Proof.* Since we may assume that $u$ is bounded, it remains to consider the corresponding property of supersolutions according to [@Sh Proposition 2.16, Theorems 3.1 & 3.6]. Therefore, (i) follows by applying a partition of unity. Using (i), we can prove (ii), which can be seen as another version of the classical pasting lemma [@Sh Proposition 2.21]. In fact, it follows from [@Sh Proposition 2.21] that $w\in H_{\alpha}^{+}(O_{2})$ since $\min\{u,v\}=u$ in $O_{1}\cap O_{2}$. Hence, $w\in H_{\alpha}^{+}(\Omega)$ by (i), noticing that $O_{1}\cup O_{2}=\Omega.$ ◻ **Lemma 12**. *Let $O$ be an open set and $x_{0}\in B\cap (\partial O \backslash \{\infty\})$ for a ball $B$ with rational center and radius. 
Then* - *$x_{0}$ is Perron regular if $\widehat{\mathcal{B}}^{1}_{\overline{B}\backslash O}(2B)(x_{0})=1$.* - *$C_{\alpha,p}(S,O)=0$, where $S=\{x\in \partial O: x\,\,\hbox{is finite and}\,\,(\alpha,p)\hbox{-irregular}\}$.* - *There exists a finite point $x_{0}\in \partial O$ which is Perron regular relative to $E\subset O$ if $C_{\alpha,p}(O^{c})>0$.* - *Let $u\in H_{\alpha}^{+}(U)$ and $C_{\alpha,p}(O^{c})>0$, where $U$ is a neighborhood of $\overline{B}\subset O$. Then there exists a function $\widetilde{u}\in H_{\alpha}^{+}(O)$ such that $\widetilde{u}=u$ in $B$ and $\widetilde{u}$ is bounded below.* *Proof.* (i) is a slight modification of [@SX3 Proposition 3.2] with $v$ and $h$ replaced by $f\in C(\partial O)$ and $\overline{H}_{f}^{\alpha}$, respectively. (i)$\Longrightarrow$(ii) can be derived in a similar way to (ii) $\Longrightarrow$ (iii) in [@SX3 Proposition 3.2]. To prove (iii), we first claim that $C_{\alpha,p}(\partial E)>0$ since $C_{\alpha,p}(E^{c})=0$. In fact, if $C_{\alpha,p}(\partial E)=0$, then $E^{c}$ must have an interior point. This implies that $(\partial E)^{c}$ has at least two components, which is a contradiction since $\partial E$ does not separate $\mathbb{R}^{n}$ (see for example [@HKM2 Lemma 2.46], [@SX1 Proposition 4.4] and [@SZ Lemma 3.3]). Consequently, $C_{\alpha,p}(\partial E)>0$ and (iii) follows from (ii). (iv). Without loss of generality, we may assume that $O$ is connected, since it suffices to prove that $u$ has the desired extension to the component of $O$ which contains $B$. 
Pick a ball $B_{0}\Subset O$ satisfying $$u\in H_{\alpha}^{+}(B_{0});\quad B\Subset B_{0};\quad u>0\quad\hbox{in}\quad B_{0}$$ and replace $u$ by $v=\widehat{\mathcal{B}}^{u}_{B}(B_{0}).$ It follows from [@SX1 Lemma 2.7 & Proposition 3.2] that $$v=u\quad\hbox{in}\quad B\quad\&\quad\lim_{x\longrightarrow y}v(x)=0\quad\hbox{for all}\quad y\in \partial B_{0}.$$ Set $m=\min_{\overline{B}}v,$ $K=\left\{x\in B_{0}: v(x)\geq m\right\}\supset \overline{B},$ $$w=\begin{cases}\overline{H}_{f}^{\alpha}\quad\hbox{in}\quad O\backslash K;\\ m\quad\hbox{in}\quad K; \end{cases}\quad \hbox{and}\quad f=\begin{cases}0\quad\hbox{on}\quad\partial O;\\ m\quad\hbox{in}\quad K. \end{cases}\\ $$ We proceed by showing $$\label{3.2} \lim_{x\longrightarrow y}w(x)=m\quad \forall y\in K.$$ Choose $\widetilde{w}\in \mathcal{U}_{f}^{\alpha}$; then [@Sh Proposition 2.20] implies that $\widetilde{w}$ is nonnegative and $\widetilde{w}\geq v$ in $B_{0}\backslash K$ since $v\in H_{\alpha}(B_{0}\backslash \overline{B})$. Consequently, $\lim_{x\longrightarrow y}w(x)\geq m$ $\forall y\in \partial K$ and $(\ref{3.2})$ is obtained. Furthermore, [@Sh Proposition 2.21] implies $w\in H_{\alpha}^{+}(O)\cap H_{\alpha}(O\backslash K).$ Then (iii) yields the existence of a point $y\in \partial O$ with $\lim_{x\longrightarrow y}w(x)=0$, which shows $w<m$ on $\partial B_{0}$ by [@SX3 Lemma 2.5]. Thus, we have $$\label{3.3} \frac{m}{m-\max_{\partial B_{0}}w}\,(w-m)\leq -m=v-m\quad\hbox{on}\quad\partial B_{0}.$$ Similarly, a further application of [@Sh Proposition 2.20] implies the truth of $(\ref{3.3})$ in $B_{0}\backslash K$. Finally, Lemma [Lemma 11](#lemma 3.1){reference-type="ref" reference="lemma 3.1"} implies that the function $$\widetilde{u}=\begin{cases} v\quad\hbox{in}\quad K;\\ \frac{m}{m-\max_{\partial B_{0}}w}\,(w-m)+m\quad\hbox{in}\quad O\backslash K \end{cases}$$ is the desired function. 
◻ To prove Theorem [Theorem 4](#Theorem 1.2){reference-type="ref" reference="Theorem 1.2"}, we need the continuity of the $(\alpha, p)$-balayage, which can be formulated as follows. **Lemma 13**. *Assume that $f:\overline{\mathbb{R}^{n}}\longrightarrow \mathbb{R}$ is continuous and $u=\widehat{\mathcal{B}}^{f}_{\Omega}$. Then* - *$u$ is continuous, $u\geq f$ and $\lim_{x\longrightarrow x_{0}}u(x)=f(x_{0})$ if $x_{0}\in \partial\Omega\backslash\{\infty\}$ is Perron regular.* - *$u\in \dot{W}^{\alpha,p}(\Omega)$, $u-f\in \dot{W}_{0}^{\alpha,p}(\Omega)$ and $\langle(-\Delta_{p})^{\alpha}u, \varphi\rangle\geq 0$ if $f\in \dot{W}^{\alpha,p}_{0}(\Omega)$ and $\varphi\in \dot{W}_{0}^{\alpha,p}(\Omega)$ satisfies $\varphi\geq f-u$.* - *$u\in W^{\alpha,p}(\Omega)$, $u-f\in W_{0}^{\alpha,p}(\Omega)$ if $f\in W^{\alpha,p}(\Omega)$ and $\Omega$ is bounded.* - *$\lim_{x\longrightarrow \infty}u(x)=f(\infty)$ if $\Omega$ is unbounded and $\infty$ is a Perron regular boundary point of each unbounded open set $O\subset \Omega$.* *Proof.* It follows from [@SX3 Proposition 3.3] that $u\in C(\Omega)$ is bounded and $u\geq f$. Suppose that $x_{0}\in \partial\Omega\backslash \{\infty\}$ is Perron regular. We proceed to show that $$\label{3.1} \lim_{x\longrightarrow x_{0}}u(x)=f(x_{0}).$$ By letting $$\begin{cases} v\,\,\hbox{be an}\,\,(\alpha,p)\hbox{-barrier at}\,\, x_{0};\\ B\,\,\hbox{be a ball centered at}\,\, x_{0}\,\,\hbox{with}\,\,\partial B\cap\Omega \neq \emptyset;\\ |f(x)-f(x_{0})|<\varepsilon\,\,\hbox{for all}\,\, x\in B\,\,\hbox{and fixed}\,\,\varepsilon>0, \end{cases}$$ we obtain from [@Sh Proposition 2.21] the existence of $\xi>0$ such that $$w(x)=\begin{cases} \min\{\xi v(x)+f(x_{0})+\varepsilon, \sup |f|\}\quad\hbox{for}\quad x\in B\cap \Omega;\\ \sup |f|\quad\hbox{for}\quad x\in \Omega\backslash B, \end{cases}$$ is l.s.c. and satisfies $w\in H_{\alpha}^{+}(\Omega)$ and $w\geq f$ in $\Omega$. 
Hence, we get $$\limsup_{x\longrightarrow x_{0}}u(x)\leq \lim_{x\longrightarrow x_{0}}w(x)\leq f(x_{0})+\varepsilon,$$ which implies $(\ref{3.1})$ since $u\geq f$. To show (ii), let $$\begin{cases} f\in \dot{W}^{\alpha,p}(\Omega);\\ \{O_{i}\}\,\,\hbox{be increasing regular open sets with}\,\, O_{1}\Subset O_{2}\Subset\cdots \Subset \Omega\,\,\hbox{and}\,\, \cup_{i}O_{i}=\Omega;\\ u_{i}\,\,\hbox{be the solution to the obstacle problem}\,\,\Phi_{f,f}(O_{i}). \end{cases}$$ It follows from [@Sh Lemma 2.6] that $\{u_{i}\}$ is increasing and bounded above by $u$. Therefore, $$\lim_{i}u_{i}=\widetilde{u}\in H_{\alpha}^{+}(\Omega)\quad \hbox{with}\,\, u\geq \widetilde{u}\geq f.$$ Consequently, we derive $u=\widetilde{u}$ since $u\leq \widetilde{u}$ by the definition of $(\alpha,p)$-balayage. By letting $u_{i}=f$ on $\Omega\backslash O_{i}$, we see that $u_{i}-f\in \dot{W}^{\alpha,p}_{0}(\Omega)$. On the other hand, the quasiminimizing property (see for example [@Sh P1056]) implies that $$[u_{i}]_{\dot{W}^{\alpha,p}(O_{i})}\leq C[f]_{\dot{W}^{\alpha,p}(O_{i})}\leq C[f]_{\dot{W}^{\alpha,p}(\Omega)}.$$ Accordingly, we deduce that $(u_{i}-f)\longrightarrow (u-f)$ weakly in $\dot{W}^{\alpha,p}(\Omega)$, and hence $(u-f)\in \dot{W}^{\alpha,p}_{0}(\Omega)$ by [@Sh Lemma 2.2]. Let $\{\varphi_{i}\}\subset C_{c}^{\infty}(\Omega)$ be such that $\varphi_{i}\longrightarrow \varphi$ in $\dot{W}^{\alpha,p}(\Omega)$. Then the function $\psi_{i}=\max\{\varphi_{i}, f-u\}\in \dot{W}^{\alpha,p}_{0}(\Omega)$ and $\psi_{i}\longrightarrow \varphi$ in $\dot{W}^{\alpha,p}(\Omega)$ by [@SX3 Lemma 2.1]. For fixed $i$, we choose $j_{i}$ with $\operatorname{supp}\psi_{i}\subset O_{j_{i}}$. It follows from [@Sh Proposition 2.15] that $u$ is a solution to $\Phi_{f,u}(O_{j_{i}})$. 
Thus, $$\langle(-\Delta_{p})^{\alpha}u, \psi_{i}\rangle=\langle(-\Delta_{p})^{\alpha}u, \psi_{i}\rangle|_{O_{j_{i}}}\geq 0.$$ Consequently, we obtain $$\langle(-\Delta_{p})^{\alpha}u, \varphi\rangle=\lim_{i\longrightarrow \infty}\langle(-\Delta_{p})^{\alpha}u, \psi_{i}\rangle\geq 0,$$ which completes the proof of (ii). \(iii\) and (iv) follow by slight modifications of (ii) and (i), respectively. ◻ In fact, the continuity assumption in Lemma [Lemma 13](#lemma 3.4){reference-type="ref" reference="lemma 3.4"} can be relaxed to some extent in the following sense, which can be established in a similar way as (i) and (ii) of Lemma [Lemma 13](#lemma 3.4){reference-type="ref" reference="lemma 3.4"}. **Corollary 14**. *Let $f$ be bounded above, $u=\widehat{\mathcal{B}}^{f}_{\Omega}$ and let $x_{0}\in \partial\Omega\backslash \{\infty\}$ be Perron regular. Then* - *$\lim_{x\longrightarrow x_{0}}u(x)=f(x_{0})$ whenever $f$ is continuous at $x_{0}$ and $u\geq f$ in $B(x_{0},r)\cap\Omega$ for some $r>0$.* - *$u\in \dot{W}^{\alpha,p}(\Omega)$, $u-f\in \dot{W}_{0}^{\alpha,p}(\Omega)$ and $\langle(-\Delta_{p})^{\alpha}u, v\rangle\geq 0$ if $f\in \dot{W}_{0}^{\alpha,p}(\Omega)$ is bounded above and the solution $v$ to $\Phi_{f,f}(O)$ lies above $f$ on every open set $O\Subset \Omega$.* Utilizing Lemma [Lemma 13](#lemma 3.4){reference-type="ref" reference="lemma 3.4"}, one can obtain a solution to $(\ref{(1.2)})$ with Sobolev boundary values, which will be used to complete the proof of Theorem [Theorem 4](#Theorem 1.2){reference-type="ref" reference="Theorem 1.2"}. **Lemma 15**. *The following two statements hold.* - *Assume that $C_{\alpha,p}(\Omega^{c})>0$ and that $f\in C(\overline{\mathbb{R}^{n}})\cap \dot{W}^{\alpha,p}(\Omega)$. Then $f$ is resolutive and $\overline{H}^{\alpha}_{f}-f\in \dot{W}_{0}^{\alpha,p}(\Omega)$ if $\Omega$ is an unbounded open set.* - *Assume that $f\in C(\overline{\mathbb{R}^{n}})\cap \dot{W}^{\alpha,p}(\Omega)$.
Then $\overline{H}^{\alpha}_{f}-f\in W_{0}^{\alpha,p}(\Omega)$ if $\Omega$ is bounded. Moreover, $\overline{H}^{\alpha}_{f}$ is the unique $(\alpha,p)$-harmonic function with Sobolev boundary values $f$.* *Proof.* Let $u=\widehat{\mathcal{B}}^{f}_{\Omega}$ and $O_{i}$ be as in the proof of Lemma [Lemma 13](#lemma 3.4){reference-type="ref" reference="lemma 3.4"} (ii). Denote by $u_{i}$ the Poisson modification $P(u,O_{i})$ of $u$ in $O_{i}$ (see [@Sh P1069]). It is easy to see that $$u\geq u_{1}\geq u_{2}\geq \cdots\geq \overline{H}_{f}^{\alpha}\quad\hbox{and}\quad \overline{H}_{f}^{\alpha}\leq \lim_{i}u_{i}=u^{*}\in H_{\alpha}(\Omega)$$ according to [@Sh Proposition 2.22 & Remark 2.1]. Furthermore, we get $u_{i}\longrightarrow u^{*}$ weakly in $\dot{W}^{\alpha,p}(\Omega)$ and $f-u^{*}\in \dot{W}_{0}^{\alpha,p}(\Omega)$ by a slight modification of the proof of Lemma [Lemma 13](#lemma 3.4){reference-type="ref" reference="lemma 3.4"} (i) and (ii). Similarly, the $(\alpha,p)$-subharmonic function $v=-\widehat{\mathcal{B}}^{-f}_{\Omega}$ gives the existence of a function $u_{*}\in H_{\alpha}(\Omega)$ such that $v\leq u_{*}\leq \underline{H}_{f}^{\alpha}\leq \overline{H}_{f}^{\alpha}\leq u^{*}$ in $\Omega$ and $f-u_{*}\in \dot{W}_{0}^{\alpha,p}(\Omega)$. To complete the proof of (i), it remains to verify $u^{*}= u_{*}$. Since $u^{*}- u_{*}\in \dot{W}_{0}^{\alpha,p}(\Omega)$, we get $\langle(-\Delta_{p})^{\alpha}u^{*}-(-\Delta_{p})^{\alpha}u_{*}, u^{*}- u_{*}\rangle=0,$ and hence $u^{*}= u_{*}+C$ for some constant $C$ on each component $\Omega_{0}$ of $\Omega$.
Next, we claim that $C=0.$ In fact, Lemma [Lemma 12](#lemma 3.2){reference-type="ref" reference="lemma 3.2"} (iii) shows that $\Omega_{0}$ has a finite Perron regular boundary point $x_{0}$ since $C_{\alpha,p}(\Omega^{c})>0$, and hence Lemma [Lemma 13](#lemma 3.4){reference-type="ref" reference="lemma 3.4"} now implies, as $x\in \Omega_{0}$ goes to $x_{0},$ $$\limsup u_{*}(x)\leq \limsup u^{*}(x)\leq \lim u(x)=f(x_{0})=\lim v(x)\leq \liminf u_{*}(x)\leq \liminf u^{*}(x),$$ which is the desired result. \(ii\) can be proved similarly to (i). We may assume that $f\in C(\overline{\mathbb{R}^{n}})$ by the Tietze extension theorem. Then $(u_{i}-f)\longrightarrow \overline{H}_{f}^{\alpha}-f$ weakly in $W_{0}^{\alpha,p}(\Omega)$ as $\widehat{\mathcal{B}}^{f}_{\Omega}-f\in W_{0}^{\alpha,p}(\Omega)$ by Lemma [Lemma 13](#lemma 3.4){reference-type="ref" reference="lemma 3.4"} (iii) and (iv). On the other hand, (ii) can also be proved by the comparison principle. Let $h\in H_{\alpha}(\Omega)$ be such that $h-f\in W_{0}^{\alpha,p}(\Omega)$. Then $\lim_{x\longrightarrow y}h(x)=f(y)$ for q.e. $y\in \partial\Omega$ by Lemma [Lemma 12](#lemma 3.2){reference-type="ref" reference="lemma 3.2"}. It then follows from [@Sh Proposition 2.20] that $h\leq u$ for $u\in \mathcal{U}_{f}^{\alpha}$. On the other hand, $h\geq v$ for $v\in \mathcal{L}_{f}^{\alpha}$, which shows that $h\leq \overline{H}_{f}^{\alpha}=\underline{H}_{f}^{\alpha}\leq h$ and finishes the proof. ◻ ### Proof of Theorem [Theorem 4](#Theorem 1.2){reference-type="ref" reference="Theorem 1.2"} {#proof-of-theorem-theorem-1.2} Having established the previous technical lemmas, we are ready to prove Theorem [Theorem 4](#Theorem 1.2){reference-type="ref" reference="Theorem 1.2"}. We show by contradiction that the regularity of $x_{0}$ implies the divergence of the Wiener-type integral. Assume that $W_{\alpha,p}(\Omega^{c}, x_{0})< \infty$; we distinguish two cases. Case 1. $x_{0}$ is an isolated boundary point.
In this case, it is easy to show that $x_{0}$ is irregular by [@SX3 Lemma 2.5] and [@SZ Theorem 3.4]. Case 2. $x_{0}$ is an accumulation point of $\Omega^{c}$. Using Theorem [Theorem 1](#Theorem 1.1){reference-type="ref" reference="Theorem 1.1"}, we deduce that there exist balls $B_{i}:=B(x_{0},r_{i}), i=1,2,$ with $r_{1}<r_{2},$ and a function $u\in H_{\alpha}^{+}(B_{2})$ with $0\leq u\leq 1,$ $u=1$ in $B_{2}\cap (\Omega^{c}\backslash \{x_{0}\})$ and $u(x_{0})\leq \frac{1}{2}.$ Pick a function $\varphi\in C^{\infty}(\mathbb{R}^{n})$ satisfying $$\begin{cases} \varphi(x)\leq u(x)\quad\forall x\in \Omega^{c}\cap \overline{B}_{1}\backslash \{x_{0}\};\\ \varphi(x)=1\quad\forall x\in U(x_{0}),\,\,\hbox{a neighborhood of}\,\, x_{0}. \end{cases}$$ Then, we can derive $\overline{H}_{\varphi}^{\alpha}\in W^{\alpha,p}(B_{1}\cap \Omega)$ from Lemma [Lemma 15](#lemma 3.6){reference-type="ref" reference="lemma 3.6"} (ii) and the fact that the set of irregular boundary points is of $(\alpha,p)$-capacity zero [@SX3 Proposition 3.2]. Thus, we get $\overline{H}_{\varphi}^{\alpha}(x)\leq u(x)$ for $x\in B_{1}\cap \Omega$ according to [@KKP4 Corollary 2]. In particular, $$\liminf_{x\longrightarrow x_{0}}\overline{H}_{\varphi}^{\alpha}(x)\leq \liminf_{x\longrightarrow x_{0}, x\in \Omega}u(x)=u(x_{0})\leq \frac{1}{2}<1=\varphi(x_{0}).$$ Accordingly, $x_{0}$ is not an $(\alpha,p)$-regular boundary point of $B_{1}\cap \Omega$, which shows that $x_{0}$ is not an $(\alpha,p)$-regular boundary point of $\Omega$ since regularity is a local property (Corollary [Corollary 8](#lemma 3.3){reference-type="ref" reference="lemma 3.3"}). The proof of Theorem [Theorem 4](#Theorem 1.2){reference-type="ref" reference="Theorem 1.2"} is completed.
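To illustrate the criterion in a model case, the following sketch checks divergence of the Wiener-type integral when $\Omega^{c}$ contains a half-space near $x_{0}$. The displayed normalization of $W_{\alpha,p}(\Omega^{c},x_{0})$, the capacity scaling, and the restriction $\alpha p<n$ are assumptions made for illustration and are not asserted by the theorem.

```latex
% Model computation (sketch). Assume the Wiener-type integral has the usual form
%   W_{\alpha,p}(\Omega^{c},x_{0})
%     = \int_{0}^{1}\Big(\frac{C_{\alpha,p}(\Omega^{c}\cap B(x_{0},t),\,B(x_{0},2t))}{t^{\,n-\alpha p}}\Big)^{\frac{1}{p-1}}\frac{dt}{t},
% and that \alpha p < n. If \Omega^{c} contains a half-space H with x_{0}\in\partial H,
% then by the scaling of the relative (\alpha,p)-capacity one expects
% C_{\alpha,p}(H\cap B(x_{0},t), B(x_{0},2t)) \approx c\, t^{\,n-\alpha p} with c>0, so that
\begin{align*}
W_{\alpha,p}(\Omega^{c}, x_{0})
  \gtrsim \int_{0}^{1}\Big(\frac{c\,t^{\,n-\alpha p}}{t^{\,n-\alpha p}}\Big)^{\frac{1}{p-1}}\frac{dt}{t}
  = c^{\frac{1}{p-1}}\int_{0}^{1}\frac{dt}{t}=\infty .
\end{align*}
% The integral diverges, consistent with such boundary points being (\alpha,p)-regular.
```

The point of the sketch is that the integrand is scale-invariant for a fat complement, so regularity of half-space-like boundary points is compatible with the divergence statement.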
## Proof of Theorem [Theorem 5](#Theorem 5.1){reference-type="ref" reference="Theorem 5.1"}: the generalized fractional Wiener criterion {#sec2.5} Assume that $u\in W^{\alpha,p}(\Omega)$ is a local solution to $\Phi^{x_0}_{f,u_{0}}.$ Then $u\in H_{\alpha}^{+}(\Omega)$ is nonnegative according to [@Sh Theorem 3.1]. Let $E$ be a set as in Theorem [Theorem 16](#Theorem 1.5){reference-type="ref" reference="Theorem 1.5"} (ii) and $$E_{\varepsilon}=\{x:f(x)\geq \overline{f(x_{0})}-\varepsilon\}\cap (\Omega\backslash E)$$ for each $\varepsilon>0.$ Then $C_{\alpha,p}(B\cap E_{\varepsilon},2B)>0$ for each ball $B=B(x_{0},r)$ since $E_{\varepsilon}$ is not $(\alpha,p)$-thin at $x_{0}$. Accordingly, $u(x_{0})\geq \overline{f(x_{0})}-\varepsilon$, and hence $u(x_{0})\geq \overline{f(x_{0})}$. On the other hand, [@CKP2 Theorem 1.1 & Remark 4.2] gives $$\operatornamewithlimits{ess\,sup}_{B(x_{0},r)}(u-\xi)^{+}\leq \frac{C}{|B(x_{0},2r)|}\int_{B(x_{0},2r)}(u-\xi)^{+}\,dx=\frac{C}{|B(x_{0},2r)|}\int_{B(x_{0},2r)}(u-\min\{u,\xi\})\,dx\longrightarrow 0\quad\hbox{as}\quad r\longrightarrow 0$$ for fixed $\xi>u(x_{0})$ and $r>0$ small enough, where the convergence follows from [@KKP4 Theorem 13]. On account of the above analysis and [@Sh Corollary 3.9], we have $\limsup_{x\longrightarrow x_{0}}u(x)\leq \xi$, which gives the desired result by letting $\xi\longrightarrow u(x_{0}).$ Next, we proceed to show (ii). Assume that there exists $\varepsilon>0$ such that $F_{\varepsilon}$ is $(\alpha,p)$-thin at $x_{0}$; then Theorem [Theorem 1](#Theorem 1.1){reference-type="ref" reference="Theorem 1.1"} gives the existence of a bounded function $u\in H_{\alpha}^{+}(B(x_{0},r_{0}))$ with $u=\sup f$ on $F_{\varepsilon}\backslash \{x_{0}\}$ and $u(x_{0})=\overline{f(x_{0})}-\frac{\varepsilon}{2}$. By [@P Theorem 15], we may assume that $u\in W^{\alpha,p}(B)$ and $u>\overline{f(x_{0})}-\varepsilon$ in $B$.
Let $v\in H_{\alpha}^{+}(\Omega)$ be the solution to $\Phi^{x_0}_{f,u_{0}}$ with $\Omega=B$ such that $u-v\in W^{\alpha,p}_{0}(B)$. Then [@KKP4] implies that $v\leq u$ in $B$ since $u\geq f$ q.e. Accordingly, we get $$\liminf_{x\longrightarrow x_{0}}v(x)\leq \operatorname{essliminf}_{x\longrightarrow x_{0}}u(x)=u(x_{0})<\overline{f(x_{0})}=\inf_{r>0}\,(\alpha,p)\hbox{-}\operatornamewithlimits{ess\,sup}_{B(x_{0},r)} f\leq \limsup_{x\longrightarrow x_{0}}v(x)$$ since $v\geq f$ q.e., which shows that $v$ cannot be continuous at $x_{0}$. # Beyond {#s8} We end this paper with further regularity results: the continuity of fractional superharmonic functions, fractional resolutivity, a connection between $(\alpha,p)$-potentials and $(\alpha,p)$-Perron solutions obtained by using Lemmas [Lemma 13](#lemma 3.4){reference-type="ref" reference="lemma 3.4"} & [Lemma 15](#lemma 3.6){reference-type="ref" reference="lemma 3.6"}, and the existence of a capacitary function for an arbitrary condenser obtained by applying Theorem [Theorem 20](#theorem 3.2){reference-type="ref" reference="theorem 3.2"} and Lemma [Lemma 17](#lemma 5.1){reference-type="ref" reference="lemma 5.1"}, which can be viewed as a general version of [@SX3 Theorem 1.2 (ii)]. ## The continuity of fractional superharmonic functions {#s6} With the help of a generalized comparison lemma, we can deduce the continuity of $(\alpha,p)$-superharmonic functions from Theorem [Theorem 1](#Theorem 1.1){reference-type="ref" reference="Theorem 1.1"}. **Theorem 16**. *Let $u\in H_{\alpha}^{+}(\Omega)$.
Then the following three statements hold.* - *$u$ is real-valued and continuous at $x_{0}$ if and only if for each $\varepsilon>0$, there is an $r>0$ such that $$\textbf{W}_{\alpha,p}^{\mu}(x,r)<\frac{\varepsilon}{ C}-\hbox{Tail}(u^{-};x,r),$$ where $$\textbf{W}_{\alpha,p}^{\mu}(x,r):=\int_{0}^{r}\left(\frac{\mu(B(x,t))}{t^{n-\alpha p}}\right)^{\frac{1}{p-1}}\frac{dt}{t}$$ is the fractional Wolff potential of a Radon measure $\mu$ ([@KMS]), and $$\hbox{Tail}(u^{-};x,r):=\left(r^{\alpha p}\int_{\mathbb{R}^n\backslash B(x,r)}|u^{-}(y)|^{p-1}|x-y|^{-n-\alpha p}dy\right)^{\frac{1}{p-1}}$$ is the nonlocal tail of the function $u^{-}$ in the ball of radius $r>0$ centred at $x\in \mathbb{R}^n.$* - *There exists a set $E$ such that $E$ is $(\alpha,p)$-thin at $x_{0}\in \Omega$ and $u|_{\Omega\backslash E}$ is continuous at $x_{0}$;* - *If a finite point $x_{0}\in \partial\Omega$ is $(\alpha,p)$-regular with $C_{\alpha, p}(\{x_{0}\}, \Omega)=0$ and $u\in H_{\alpha}^{+}(U(x_{0}))$, then $\lim_{x\longrightarrow x_{0},\, x\in \Omega^{c}}u(x)=u(x_{0}).$* We begin with the generalized comparison lemma, which is critical to the proof of Theorem [Theorem 16](#Theorem 1.5){reference-type="ref" reference="Theorem 1.5"} (iii). **Lemma 17**. *Let $$\begin{cases} \Omega\subset \mathbb{R}^{n}\,\,\hbox{be bounded};\\ E\subset \partial \Omega\,\,\hbox{and}\,\, C_{\alpha,p}(E,\Omega)=0;\\ (u,v)\in (H_{\alpha}^{+}(\Omega)\times H_{\alpha}^{-}(\Omega))\cap (\dot{W}^{\alpha,p}(\Omega)\times \dot{W}^{\alpha,p}(\Omega));\\ \limsup_{y\longrightarrow x}v(y)\leq \liminf_{y\longrightarrow x}u(y)\,\,\hbox{for all}\,\, x\in \partial\Omega\backslash E. \end{cases}$$ Then $v\leq u$ in $\Omega$.* *Proof.* Without loss of generality, we assume that $E$ is compact and $u\in \dot{W}^{\alpha,p}(\Omega)$ since the set $$\left\{x\in \partial\Omega: \liminf_{y\longrightarrow x}u(y)+\varepsilon>\limsup_{y\longrightarrow x}v(y)\right\}$$ is relatively open in $\partial\Omega$ for $\varepsilon>0$.
We claim that there exists a decreasing sequence $\{\varphi_{i}\}\subset C(\Omega\cup E)\cap W^{\alpha,p}(\Omega)$ such that $$\begin{cases} \varphi_{i}\in [0,M]\quad\hbox{for}\quad M=\sup |u|+\sup |v|;\\ \varphi_{i}=M\quad\hbox{on}\quad E;\\ \|\varphi_{i}\|_{W^{\alpha,p}(\Omega)}\longrightarrow 0\quad\hbox{as}\quad i\longrightarrow \infty. \end{cases}$$ This can be done by picking a nonnegative sequence $\{\eta_{i}\}\subset C(\Omega\cup E)\cap W^{\alpha,p}(\Omega)$ with $\eta_{i}=M$ on $E$ and $\eta_{i}\longrightarrow 0$ in $W^{\alpha,p}(\Omega)$. In fact, we can pick $\{\varphi_{i}\}$ as follows: $$\begin{cases} \varphi_{1}=\min\{M, \eta_{1}\};\\ \varphi_{i+1}=\min\{\varphi_{i}, \eta_{j}\}\quad\hbox{with}\quad j\quad\hbox{chosen large enough to guarantee}\quad\|\varphi_{i+1}\|_{W^{\alpha,p}(\Omega)}\leq\frac{1}{2}\|\varphi_{i}\|_{W^{\alpha,p}(\Omega)}, \end{cases}$$ since $\min\{\varphi_{i}, \eta_{j}\}\longrightarrow 0$ in $W^{\alpha,p}(\Omega)$ as $j\longrightarrow \infty$ according to [@SX3 Lemma 2.1]. Let $$\begin{cases} \xi_{i}=u+\varphi_{i};\\ u_{i}\quad\hbox{be the solution to}\quad\Phi_{\xi_{i},\xi_{i}}(\Omega). \end{cases}$$ It follows from [@KKP4 Theorem 13] that $u_{i}\geq \xi_{i}$ in $\Omega$. Hence, we get $\limsup_{y\longrightarrow x}v(y)\leq \liminf_{y\longrightarrow x}u_{i}(y)$ for $x\in \partial\Omega$, which shows $u_{i}\geq v$ according to [@Sh Proposition 2.20]. On the other hand, [@Sh Proposition 2.15] implies $u_{i}\longrightarrow u.$ A further application of [@KKP4 Theorem 13] gives $u\geq v$ in $\Omega$ as desired. ◻ We now turn to the proof of Theorem [Theorem 16](#Theorem 1.5){reference-type="ref" reference="Theorem 1.5"}. The sufficiency of (i) is [@KMS Theorem 1.5], so we only need to prove the necessity. Without loss of generality, we assume $u(x_{0})=0$, since $u(x_{0})<\infty$ according to [@KMS Theorem 1.2 & Theorem 1.3], and choose $r_{0}>0$ with $u(x)>-\varepsilon$ for $x\in B(x_{0}, 4r_{0})$ since $u$ is l.s.c.
Then, we get, for $x\in B(x_{0},r)$ with $r<r_{0}$, $$u(x)<C\inf_{B(x,r)}u+C\textbf{W}_{\alpha,p}^{\mu}(x,r)+C\hbox{Tail}(u^{-};x_{0},r)\leq C\varepsilon$$ as desired. \(ii\) can be proved by applying Theorem [Theorem 1](#Theorem 1.1){reference-type="ref" reference="Theorem 1.1"}; the argument is divided into two cases. Case 1. $u(x_{0})=\infty$. It is easy to obtain the desired result by choosing $E=\emptyset$. Case 2. $u(x_{0})<\infty$. Set $E_{i}=\left\{x\in \Omega: u(x)-u(x_{0})>\frac{1}{i}\right\}$ for $i\in \mathbb{N}^{+}$. Then $E_{i}$ is $(\alpha,p)$-thin at $x_{0}$ by Theorem [Theorem 1](#Theorem 1.1){reference-type="ref" reference="Theorem 1.1"}. Using [@Me Proposition 3.1], we can show that there exists a decreasing sequence $r_{i}\longrightarrow 0$ such that the set $E=\cup_{i}(E_{i}\cap B(x_{0},r_{i}))$ is $(\alpha,p)$-thin at $x_{0}$, which implies the desired result since $u$ is l.s.c. Now, we prove (iii) by contradiction. If (iii) is not true, there exists a function $u\in H_{\alpha}^{+}(U(x_{0}))$ such that $\lim_{x\longrightarrow x_{0}, x\in \Omega^{c}}u(x)>u(x_{0})$. For some ball $B$ centered at $x_{0}$, we may assume that $u\in W^{\alpha, p}(3B)$ and $u=1$ in $\Omega^{c}\cap (\overline{B} \backslash\{x_{0}\})$. Lemma [Lemma 17](#lemma 5.1){reference-type="ref" reference="lemma 5.1"} now shows that $\overline{H}_{f}^{\alpha}\leq u$ in $2B\backslash (\overline{B}\cap \Omega^{c})$ by picking $$\begin{cases} f\in C^{\infty}(\mathbb{R}^{n})\quad\hbox{with}\quad f=1\quad\hbox{in}\quad\overline{B};\\ f\leq u\quad\hbox{on}\quad\partial (2B). \end{cases}$$ It follows from [@KKP4 Theorem 13] that $$1=\lim_{x\longrightarrow x_{0}, x\in \Omega}\overline{H}_{f}^{\alpha}(x)\leq \liminf_{x\longrightarrow x_{0}, x\in \Omega}u(x)=u(x_{0})<1,$$ which is a contradiction.
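To make the Wolff potential appearing in Theorem 16 (i) concrete, the following sketch evaluates $\textbf{W}_{\alpha,p}^{\mu}(x,r)$ for a unit Dirac mass; the choice $\mu=\delta_{0}$ and the restriction $\alpha p<n$ are illustrative assumptions, not part of the theorem.

```latex
% Illustrative computation (assumptions: \mu=\delta_{0}, a unit Dirac mass at the
% origin, and \alpha p<n so the exponent below is negative).
% For x=0 and 0<t\leq r we have \mu(B(0,t))=1, hence the potential diverges:
\begin{align*}
\textbf{W}_{\alpha,p}^{\delta_{0}}(0,r)
  = \int_{0}^{r}\Big(\frac{1}{t^{\,n-\alpha p}}\Big)^{\frac{1}{p-1}}\frac{dt}{t}
  = \int_{0}^{r} t^{-\frac{n-\alpha p}{p-1}-1}\,dt = \infty .
\end{align*}
% For |x|=d>0 the measure \mu(B(x,t)) vanishes when t<d, so for 0<d<r
\begin{align*}
\textbf{W}_{\alpha,p}^{\delta_{0}}(x,r)
  = \int_{d}^{r} t^{-\frac{n-\alpha p}{p-1}-1}\,dt
  = \frac{p-1}{n-\alpha p}\Big(d^{-\frac{n-\alpha p}{p-1}}-r^{-\frac{n-\alpha p}{p-1}}\Big)<\infty .
\end{align*}
```

Thus the Wolff potential blows up exactly on the support of the measure, which is where the smallness condition in (i) must fail.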
## The fractional resolutivity {#s7} A function $f:\partial\Omega\longrightarrow [-\infty,+\infty]$ is called $(\alpha,p)$-resolutive if $\overline{H}_{f}^{\alpha}= \underline{H}_{f}^{\alpha}\in H_{\alpha}(\Omega)$. [@KKP4 Theorem 2] shows that $f$ is $(\alpha,p)$-resolutive if $f$ is continuous. The resolutivity of $f$ does not imply the Perron regularity of a boundary point, but the converse is true. More generally, we have the following result. **Theorem 18**. *Let $f:\partial\Omega\longrightarrow [-\infty,+\infty]$. Then $f$ is $(\alpha,p)$-resolutive if one of the following conditions holds.* - *There is a bounded $u\in H_{\alpha}(\Omega)$ with $\lim_{x\longrightarrow x_{0}}u(x)=f(x_{0})$ for each $x_{0}\in \partial\Omega$;* - *$C_{\alpha,p}(\Omega^{c})>0$ and $f\in C(\partial\Omega)$;* - *$\Omega$ is Perron regular and $f\in C(\partial\Omega)$;* - *$\Omega$ is Perron regular, $f$ is bounded and lower semicontinuous on $\partial\Omega$.* In order to prove Theorem [Theorem 18](#Theorem 1.6){reference-type="ref" reference="Theorem 1.6"}, we need to establish the following uniform convergence lemma for upper $(\alpha,p)$-Perron solutions. ### The uniform convergence of upper $(\alpha,p)$-Perron solutions A family of functions $\mathcal{E}$ is called downward directed if, for any $g_{1}, g_{2}\in \mathcal{E}$, there exists a function $g\in \mathcal{E}$ such that $g\leq \min\{g_{1}, g_{2}\}.$ **Lemma 19**. - *Let $G$ be a downward directed family of upper semicontinuous functions $g: \partial\Omega\longrightarrow [-\infty, +\infty]$ and let $f=\inf G$. Then $\overline{H}_{f}^{\alpha}=\inf\left\{\overline{H}_{g}^{\alpha}: g\in G\right\}.$* - *Suppose that $f_{i}: \partial\Omega\longrightarrow [-\infty, +\infty]$ is a decreasing sequence of upper semicontinuous functions and $\lim_{i}f_{i}=f$.
Then $\overline{H}_{f}^{\alpha}=\lim_{i\longrightarrow \infty}\overline{H}_{f_{i}}^{\alpha}$.* - *Let $f_{i}: \partial \Omega\longrightarrow \mathbb{R}$ be resolutive and $f_{i}\longrightarrow f$ uniformly. Then $f$ is $(\alpha,p)$-resolutive and $\overline{H}_{f_{i}}^{\alpha}\longrightarrow \overline{H}_{f}^{\alpha}$ as $i\longrightarrow \infty$.* *Proof.* (i). For abbreviation, we denote $h=\inf_{G}\overline{H}_{g}^{\alpha}$ and show $\overline{H}_{f}^{\alpha}=h.$ The inequality $\overline{H}_{f}^{\alpha}\leq h$ in $\Omega$ is straightforward. To show the reverse inequality, let $u\in \mathcal{U}_{f}^{\alpha}.$ Then, for fixed $\varepsilon>0$, the sets $$\mathcal{F}_{g}=\left\{y\in \partial\Omega: \liminf_{x\longrightarrow y}u(x)+\varepsilon>g(y)\right\},\quad g\in G,$$ are open and cover $\partial \Omega$ according to the upper semicontinuity of the functions in $G$. The compactness of $\partial \Omega$ in $\overline{\mathbb{R}^{n}}$ and the downward directedness of $G$ allow us to find a function $g\in G$ with $\liminf_{x\longrightarrow y}u(x)+\varepsilon>g(y)\quad\forall y\in \partial\Omega.$ Namely, $u+\varepsilon\in \mathcal{U}_{g}^{\alpha}$, and hence $u+\varepsilon\geq \overline{H}_{g}^{\alpha}\geq h.$ Finally we derive $\overline{H}_{f}^{\alpha}+\varepsilon\geq h$, which shows that $\overline{H}_{f}^{\alpha}\geq h$ by the arbitrariness of $\varepsilon$, and the desired result is obtained. \(ii\) follows from (i). (iii). Since $|f_{i}-f|<\varepsilon$ for given $\varepsilon>0$ and $i$ large enough, $\overline{H}_{f}^{\alpha}-\varepsilon\leq \overline{H}_{f_{i}}^{\alpha}=\underline{H}_{f_{i}}^{\alpha}\leq \underline{H}_{f}^{\alpha}+\varepsilon.$ Consequently, $\lim_{i\longrightarrow \infty}\overline{H}_{f_{i}}^{\alpha}=\underline{H}_{f}^{\alpha}=\overline{H}_{f}^{\alpha}$. Furthermore, we get $\overline{H}_{f}^{\alpha}\in H_{\alpha}(\Omega)$ since $\overline{H}_{f}^{\alpha}$ is finite.
◻ ### Proof of Theorem [Theorem 18](#Theorem 1.6){reference-type="ref" reference="Theorem 1.6"} {#proof-of-theorem-theorem-1.6} \(i\) can be derived immediately from the comparison principle, see for example [@Sh Proposition 2.20]. \(ii\) follows from (iii) of Lemma [Lemma 19](#lemma 6.1){reference-type="ref" reference="lemma 6.1"}. Indeed, without loss of generality, we assume $f\in C(\overline{\mathbb{R}^{n}})$ and $f(\infty)=0$ according to the Tietze extension theorem. Then, we get the desired result since $f$ can be approximated uniformly by $\varphi_{j}\in C_{c}^{\infty}(\mathbb{R}^{n})$, and the $\varphi_{j}$ are resolutive according to (i) of Lemma [Lemma 15](#lemma 3.6){reference-type="ref" reference="lemma 3.6"}. \(iii\) is a by-product of Lemma [Lemma 12](#lemma 3.2){reference-type="ref" reference="lemma 3.2"} (iii) and Theorem [Theorem 3](#Theorem 1.4){reference-type="ref" reference="Theorem 1.4"}. \(iv\) will be proved by showing $\underline{H}_{f}^{\alpha}\geq \overline{H}_{f}^{\alpha}$. Let $\{f_{i}\}\subset C(\partial \Omega)$ be increasing such that $f_{i}\longrightarrow f$ on $\partial \Omega.$ Then, we have $$\liminf_{x\longrightarrow y}\underline{H}_{f}^{\alpha}(x)\geq \lim_{x\longrightarrow y}\underline{H}_{f_{i}}^{\alpha}(x)=f_{i}(y)\quad\forall y\in \partial \Omega.$$ Accordingly, we get $\underline{H}_{f}^{\alpha}\geq \overline{H}_{f}^{\alpha}$ by the fact that $\underline{H}_{f}^{\alpha}\in \mathcal{U}_{f}^{\alpha}$ since $f_{i}\longrightarrow f.$ ## A connection between $(\alpha,p)$-potentials and $(\alpha,p)$-Perron solutions **Theorem 20**. *Let $u\in H_{\alpha}^{+}(\Omega)$ be nonnegative and $\Omega\subset \mathbb{R}^{n}$ be open. Then the following statements hold.* - *$\widehat{\mathcal{B}}^{u}_{E}(\Omega)=\overline{H}_{f}^{\alpha}(\Omega\backslash E)$ in $\Omega\backslash E$, where $E\subset \Omega$ is relatively closed and $f=\begin{cases}u\quad\hbox{on}\quad\partial E\cap \Omega;\\ 0\quad\hbox{on}\quad\partial \Omega.
\end{cases}$* - *$$\begin{cases}\overline{H}_{f}^{\alpha}-f\in \dot{W}^{\alpha,p}_{0}(\Omega\backslash E)\quad\hbox{if}\, \, f\in C(\overline{\Omega})\cap \dot{W}^{\alpha,p}(\Omega);\\ \overline{H}_{f}^{\alpha}-f\in W^{\alpha,p}_{0}(\Omega\backslash E)\quad\hbox{if}\, \, \Omega\,\, \hbox{is bounded and}\, f\in C(\overline{\Omega})\cap \dot{W}^{\alpha,p}(\Omega). \end{cases}$$* - *$\lim_{x\longrightarrow x_{0}}\widehat{\mathcal{B}}^{u}_{E}(x)=0$ if $E\Subset \Omega$ and $x_{0}\in \partial\Omega$ is Perron regular. In particular, $\lim_{x\longrightarrow x_{0}}\widehat{\mathcal{B}}^{u}_{E}(x)=0$ q.e. on $\partial \Omega$.* *Proof.* (i). For the first assertion, it remains to prove that $\overline{H}_{f}^{\alpha}\geq \widehat{\mathcal{B}}^{u}_{E}$ in $\Omega\backslash E$, since the reverse inequality is obvious. To do so, by choosing $v\in \mathcal{U}_{f}^{\alpha}$ and writing $$w=\begin{cases}\min\{u,v\}\quad\hbox{in}\quad\Omega\backslash E;\\ u\quad\hbox{on}\quad E, \end{cases}$$ we get $w\in H_{\alpha}^{+}(\Omega)$ according to [@Sh Proposition 2.2]. Hence, we obtain $w\geq \widehat{\mathcal{B}}^{u}_{E},$ and finally $v\geq \widehat{\mathcal{B}}^{u}_{E}$ as desired. The second assertion follows from (i) of Lemma [Lemma 13](#lemma 3.4){reference-type="ref" reference="lemma 3.4"} and (ii) of Lemma [Lemma 15](#lemma 3.6){reference-type="ref" reference="lemma 3.6"}. (iii). By choosing a polyhedron $O\Subset \Omega$ with $E\Subset O$ and applying the Poisson modification in a neighborhood of $\partial O$, there exists a function $v\in \Psi_{E}^{u}(\Omega)$ with $v$ bounded on $\partial O$. Therefore, we have $0\leq \widehat{\mathcal{B}}^{u}_{E}\leq \widehat{\mathcal{B}}^{v}_{\overline{O}},$ which implies the first assertion since, by (i), $\lim_{x\longrightarrow y}\widehat{\mathcal{B}}^{v}_{\overline{O}}(x)=0$ at every Perron regular $y\in \partial\Omega.$
Furthermore, it follows from [@SX3 Proposition 3.2] that $\lim_{x\longrightarrow x_{0}}\widehat{\mathcal{B}}^{u}_{E}(x)=0$ q.e. on $\partial \Omega.$ ◻ ## The existence of a capacitary function for an arbitrary condenser **Theorem 21**. *Let $E\subset \Omega$ with $C_{\alpha,p}(E, \Omega)<\infty$ and $u=\widehat{\mathcal{B}}^{1}_{E}(\Omega).$ Then $$\begin{cases}u\in \dot{W}^{\alpha,p}(\Omega)\quad\hbox{and}\quad C_{\alpha,p}(E, \Omega)=[u]_{\dot{W}^{\alpha,p}(\Omega)};\\ u\in W^{\alpha,p}(\Omega)\quad\hbox{if}\quad\Omega\quad\hbox{is bounded}. \end{cases}$$* *Proof.* The proof is divided into three steps. Step 1: showing $[v_{i}]_{\dot{W}^{\alpha,p}(\Omega)}=C_{\alpha,p}(\overline{O}_i, \Omega)$, where $$\begin{cases}O\subset \Omega\quad\hbox{is open with}\quad C_{\alpha,p}(O, \Omega)<\infty;\\ \{O_{i}\}\quad\hbox{is a sequence of polyhedra with}\quad O_{1}\Subset O_{2}\Subset \cdots \Subset O;\\ v_{i}=\widehat{\mathcal{B}}^{1}_{\overline{O_{i}}}(\Omega),\quad i=1,2,3,\cdots . \end{cases}$$ In fact, we deduce from Theorem [Theorem 20](#theorem 3.2){reference-type="ref" reference="theorem 3.2"} (i) that $v_{i}-\varphi\in \dot{W}^{\alpha,p}_{0}(\Omega\backslash \overline{O_{i}})$ if $\varphi$ is admissible for the condenser $(\overline{O_{i}}, \Omega)$, namely, $\varphi\in X_{0}^{\alpha,p}(\overline{O_{i}}, \Omega)$. Since $v_{i}\in H_{\alpha}(\Omega\backslash \overline{O_{i}})$, we get $$C_{\alpha,p}(\overline{O_{i}}, \Omega)\leq [v_{i}]_{\dot{W}^{\alpha,p}(\Omega)}\leq [\varphi]_{\dot{W}^{\alpha,p}(\Omega)}.$$ Then, the desired result follows by taking the infimum over all $\varphi$. Step 2: proving $[u]_{\dot{W}^{\alpha,p}(\Omega)}=C_{\alpha,p}(E, \Omega)$ if $E=O$ is open.
Using Lemma [Lemma 9](#lemma 4.1){reference-type="ref" reference="lemma 4.1"}, we derive that $v_{i}\longrightarrow \widehat{\mathcal{B}}^{1}_{E}$ weakly in $\dot{W}^{\alpha,p}(\Omega)$ since $$[v_{i}]_{\dot{W}^{\alpha,p}(\Omega)}=C_{\alpha,p}(\overline{O_{i}}, \Omega)\leq C_{\alpha,p}(O, \Omega)<\infty.$$ Then $\widehat{\mathcal{B}}^{1}_{E}$ can be approximated in $\dot{W}^{\alpha,p}(\Omega)$ by functions $\varphi\in X_{0}^{\alpha,p}(\overline{O_{i}}, \Omega)$ for each $i$ by Mazur's lemma. Consequently, [@HKM2 (5.25)] gives $$C_{\alpha,p}(O, \Omega)=\lim_{i\longrightarrow \infty}C_{\alpha,p}(\overline{O_{i}}, \Omega)\leq[u]_{\dot{W}^{\alpha,p}(\Omega)}\leq \liminf_{i\longrightarrow \infty}[v_{i}]_{\dot{W}^{\alpha,p}(\Omega)}=\lim_{i\longrightarrow \infty}C_{\alpha,p}(\overline{O_{i}}, \Omega)= C_{\alpha,p}(O, \Omega)$$ as desired. Step 3: checking $[u]_{\dot{W}^{\alpha,p}(\Omega)}=C_{\alpha,p}(E, \Omega)$ for a general set $E\subset\Omega$. Let $$\begin{cases}\varphi_{i}\in C_{c}^{\infty}(\Omega)\quad\hbox{be increasing and nonnegative};\\ \varphi_{i}\longrightarrow u\quad\hbox{in}\quad\Omega;\\ E\subset O\subset \Omega\quad\hbox{for a nonempty open set}\quad O\quad\hbox{with}\quad C_{\alpha,p}(O, \Omega)<\infty. \end{cases}$$ Then we may assume that $\varphi_{i}<\widehat{\mathcal{B}}^{1}_{O}$ in $\Omega$ by replacing $\varphi_{i}$ with $(1-\delta_{i})\varphi_{i}$ if necessary, where $\delta_{i}\longrightarrow 0$ decreasingly. Exhausting $\Omega$ by polyhedra $U_{1}\Subset U_{2}\Subset \cdots\Subset \Omega$, there is an index $i_{j}$ with $\widehat{\mathcal{B}}^{1}_{O\cap U_{i_{j}}}(\Omega)>\varphi_{j}$, since $\widehat{\mathcal{B}}^{1}_{O\cap U_{i}}(\Omega)$ increases to $\widehat{\mathcal{B}}^{1}_{O}(\Omega)$ after the observation $\widehat{\mathcal{B}}^{1}_{O\cap U_{i}}(\Omega)>0$ for $i$ large enough.
Likewise, setting $O_{j}=O\cap U_{i_{j}}$, there is a $k_{j}$ with $\widehat{\mathcal{B}}^{1}_{O_{j}}(U_{k})\geq \varphi_{j}$ in $U_{k}$ for $k\geq k_{j}$, since $\widehat{\mathcal{B}}^{1}_{O_{j}}(U_{k})$ increases to $\widehat{\mathcal{B}}^{1}_{O_{j}}(\Omega)$. By setting $$\begin{cases}v_{j,k}=\widehat{\mathcal{B}}^{1}_{O_{j}}(U_{k});\\ u_{j}=\widehat{\mathcal{B}}^{\varphi_{j}}(\Omega);\\ u_{j,k}=\widehat{\mathcal{B}}^{\varphi_{j}}(U_{k}), \end{cases}$$ we obtain that $v_{j,k}$ increases to $\widehat{\mathcal{B}}^{1}_{O_{j}}(\Omega)$ as $k\longrightarrow \infty$. Then, by extending $v_{j,k}$ by zero to $\Omega\backslash U_{k}$, one has $$[v_{j,k}]_{\dot{W}^{\alpha,p}(\Omega)}=C_{\alpha,p}(O_{j}, U_{k})\longrightarrow C_{\alpha,p}(O_{j}, \Omega)<\infty,$$ and hence $v_{j,k}\longrightarrow \widehat{\mathcal{B}}^{1}_{O_{j}}(\Omega)$ weakly in $\dot{W}^{\alpha,p}(\Omega)$ by Lemma [Lemma 9](#lemma 4.1){reference-type="ref" reference="lemma 4.1"}. Furthermore, we get $\lim_{k\longrightarrow \infty}u_{j,k}=u_{j}$. Moreover, the quasi-minimizing property shows that $[u_{j,k}]_{\dot{W}^{\alpha,p}(\Omega)}\leq [v_{j,k}]_{\dot{W}^{\alpha,p}(\Omega)},$ and hence $$(v_{j,k}-u_{j,k})\longrightarrow (\widehat{\mathcal{B}}^{1}_{O_{j}}(\Omega)-u_{j})\quad\hbox{weakly in}\quad\dot{W}^{\alpha,p}(\Omega).$$ Thus, we get $$0\leq [u_{j}(v_{j,k}-u_{j,k})]_{\dot{W}^{\alpha,p}(\Omega)}\longrightarrow [u_{j}(\widehat{\mathcal{B}}^{1}_{O_{j}}(\Omega)-u_{j})]_{\dot{W}^{\alpha,p}(\Omega)}\quad\hbox{as}\quad k\longrightarrow \infty.$$ Accordingly, we obtain $$[u_{j}]_{\dot{W}^{\alpha,p}(\Omega)}\leq [\widehat{\mathcal{B}}^{1}_{O_{j}}]_{\dot{W}^{\alpha,p}(\Omega)}=C_{\alpha,p}(O_{j}, \Omega)\leq C_{\alpha,p}(O, \Omega)<\infty.$$ A further application of Lemma [Lemma 9](#lemma 4.1){reference-type="ref" reference="lemma 4.1"} shows that $u_{j}\longrightarrow u$ weakly in $\dot{W}^{\alpha,p}(\Omega)$ and $u\in \dot{W}^{\alpha,p}_{0}(\Omega)$.
Furthermore, $u\in W^{\alpha,p}_{0}(\Omega)$ if $\Omega$ is bounded according to [@Sh Lemma 2.2]. The weak lower semicontinuity of norms gives that $$[u]_{\dot{W}^{\alpha,p}(\Omega)}\leq \liminf_{j\longrightarrow \infty}[u_{j}]_{\dot{W}^{\alpha,p}(\Omega)}\leq \lim_{j\longrightarrow \infty}C_{\alpha,p}(O_{j}, \Omega)= C_{\alpha,p}(O, \Omega).$$ Consequently, we obtain $[u]_{\dot{W}^{\alpha,p}(\Omega)}\leq C_{\alpha,p}(E, \Omega)$ by taking the infimum over all open neighborhoods $O$ of $E$ in $\Omega$. What is left is to show that $[u]_{\dot{W}^{\alpha,p}(\Omega)}\geq C_{\alpha,p}(E, \Omega)$. For $0<\varepsilon<1,$ denote by $$O_{\varepsilon}=\{u>1-\varepsilon\} \quad \hbox{and}\quad E'=\{x\in E: u(x)=1\}.$$ Then $O_{\varepsilon}\supset E'$ is open and $C_{\alpha,p}(E', \Omega)= C_{\alpha,p}(E, \Omega)$ by [@SX3 Theorem 1.3]. It follows from Step 1 that $[\widehat{\mathcal{B}}^{1}_{\overline{O}}]_{\dot{W}^{\alpha,p}(\Omega)}=C_{\alpha,p}(\overline{O}, \Omega)$ for any polyhedron $O\Subset O_{\varepsilon}$. On the other hand, it can be deduced from the facts $u\in \dot{W}^{\alpha,p}_{0}(\Omega)$ and $\widehat{\mathcal{B}}^{1}_{\overline{O}}\in \dot{W}^{\alpha,p}_{0}(\Omega)$ that $\left(\min\left\{1, \frac{u}{1-\varepsilon}\right\}-\widehat{\mathcal{B}}^{1}_{\overline{O}}\right)\in \dot{W}^{\alpha,p}_{0}(\Omega\backslash \overline{O})$. Accordingly, we derive $$C_{\alpha,p}(\overline{O}, \Omega)=[\widehat{\mathcal{B}}^{1}_{\overline{O}}]_{\dot{W}^{\alpha,p}(\Omega)}\leq (1-\varepsilon)^{-p}[u]_{\dot{W}^{\alpha,p}(\Omega)}$$ since $\widehat{\mathcal{B}}^{1}_{\overline{O}}\in H_{\alpha}(\Omega\backslash \overline{O})$. Finally, we obtain $C_{\alpha,p}(E, \Omega)\leq C_{\alpha,p}(O_{\varepsilon}, \Omega)\leq (1-\varepsilon)^{-p}[u]_{\dot{W}^{\alpha,p}(\Omega)}$ by taking the supremum over all polyhedra $O$ in $O_{\varepsilon}$, which implies the desired inequality by letting $\varepsilon\longrightarrow 0$. ◻ D. R. Adams and L. I.
Hedberg, *Function Spaces and Potential Theory*. Grundlehren der mathematischen Wissenschaften, Vol. 314, Springer-Verlag, Berlin, 1996. D. R. Adams and N. G. Meyers, *Thinness and Wiener criteria for non-linear potentials*. *Indiana Univ. Math. J.*, **22** (1972), 169-197. D. R. Adams and J. Xiao, *Strong type estimates for homogeneous Besov capacities*. *Math. Ann.*, **325** (2003), 695-709. L. Brasco and E. Lindgren, *Higher Sobolev regularity for the fractional p-Laplace equation in the superquadratic case*. *Adv. Math.*, **304** (2017), 300-354. X. Cabré and Y. Sire, *Nonlinear equations for fractional Laplacians I: regularity, maximum principles, and Hamiltonian estimates*. *Ann. Inst. Henri Poincaré Anal. Non Linéaire*, **31** (2014), 23-53. X. Cabré and Y. Sire, *Nonlinear equations for fractional Laplacians II: Existence, uniqueness, and qualitative properties of solutions*. *Trans. Amer. Math. Soc.*, **367** (2015), 911-941. L. Caffarelli and A. Figalli, *Regularity of solutions to the parabolic fractional obstacle problem*. *J. Reine Angew. Math.*, **680** (2013), 191-233. L. Caffarelli, X. Ros-Oton and J. Serra, *Obstacle problems for integro-differential operators: regularity of solutions and free boundaries*. *Invent. Math.*, **208** (2017), 1155-1211. L. Caffarelli, S. Salsa and L. Silvestre, *Regularity estimates for the solution and the free boundary of the obstacle problem for the fractional Laplacian*. *Invent. Math.*, **171** (2008), 425-461. L. Caffarelli and L. Silvestre, *An extension problem related to the fractional Laplacian*. *Comm. Partial Differential Equations*, **32** (2007), 1245-1260. A. D. Castro, T. Kuusi and G. Palatucci, *Nonlocal Harnack inequalities*. *J. Funct. Anal.*, **267** (2014), 1807-1836. A. D. Castro, T. Kuusi and G. Palatucci, *Local behavior of fractional $p$-minimizers*. *Ann. Inst. H. Poincaré*, **33** (2016), 1279-1299. R. Gariepy and W. P.
Ziemer, *A regularity condition at the boundary for solutions of quasilinear elliptic equations*. *Arch. Rational Mech. Anal.*, **67** (1977), 25-39. J. Giacomoni, D. Kumar and K. Sreenadh, Global regularity results for non-homogeneous growth fractional problems. *J. Geom. Anal.*, **32** (2022), 1-41. J. Heinonen, T. Kilpeläinen and O. Martio, *Fine topology and quasilinear elliptic equations*. *Ann Inst. Fourier*, **39** (1989), 293-318. J. Heinonen, T. Kilpeläinen and O. Martio, *Nonlinear Potential Theory of Degenerate Elliptic Equations*. Oxford University Press, Oxford, 1993. L. I. Hedberg and H. Wolff, *Thin sets in nonlinear potential theory.* *Ann Inst. Fourier*, **33** (1983), 161-187. A. Iannizzotto, S. Liu, K. Perera and M. Squassina, *Existence results for fractional p-Laplacian problems via Morse theory*. *Adv. Calc. Var.*, **9** (2016), 101-125. A. Iannizzotto and M. Squassina, *Weyl-type laws for fractional p-eigenvalue problems*. *Asymptot. Anal.*, **88** (2014), 233-245. A. Iannizzotto, S. Mosconib and M. Squassina, *Fine boundary regularity for the degenerate fractional $p$-Laplacian*. *J. Funct. Anal.*, **279** (2020), 108659. T. Kilpeläinen and J. Malý, *The Wiener test and potential estimates for quasilinear elliptic equations*. *Acta. Math.*, **172** (1994), 137-161. J. Korvenpää, T. Kuusi and E. Lindgren, *Equivalence of solutions to fractional $p$-Laplace type equations.* *J. Math. Pures Appl.(9)*, **132** (2019), 1--26. J. Korvenpää, T. Kuusi and G. Palatucci, *Hölder continuity up to the boundary for a class of fractional obstacle problems*. *Atti Accad. Naz. Lincei Rend. Lincei Mat. Appl.*, **27** (2016), 355-367. J. Korvenpää, T. Kuusi and G. Palatucci, *The obstacle problem for nonlinear integro-differential operators*. *Calc. Var. Partial Differential Equations*, **55** (2016), 1-29. J. Korvenpää, T. Kuusi and G. Palatucci, *A note on fractional supersolutions.* *Electron. J. Differential Equations*, **263** (2016), 1-9. J. 
Korvenpää, T. Kuusi and G. Palatucci, *Fractional superharmonic functions and the Perron method for nonlinear integro-differential equations.* *Math. Ann.*, **369** (2017), 1443-1489. T. Kuusi, G. Mingione and Y. Sire, *Nonlocal equations with measure data.* *Comm. Math. Phys.*, **337** (2015), 1317-1368 E. Lindgren and P. Lindqvist, *Fractional eigenvalues*. *Calc. Var. Partial Differential Equations*, **49** (2014), 795-826. E. Lindgren and P. Lindqvist, *Perron's Method and Wiener's Theorem for a nonlocal equation*. *Potential Anal.*, **46** (2017), 705-737. L. Liu, S. Wu, J. Xiao and W. Yuan, *The logarithmic Sobolev capacity*. *Adv. Math.*, **392** (2021), 107993. M. J. Mazón, J. D. Rossi and J. Toledo, *Fractional p-Laplacian evolution equations*. *J. Math. Pures Appl.(9)*, **105** (2016), 810-844. V. Mazýa, *Sobolev Spaces*. Springer-Verlag, Berlin, 1980. V. Mazýa, *On the continuity at a boundary point of solutions of quasi-linear elliptic equations*. *Vestnik Leningrad Univ. Math.*, **3** (1976), 225-242. N. G. Meyers, *Continuity properties of potentials*. *Duke. Math. J.*, **42** (1975), 157-166. G. Palatucci, *The Dirichlet problem for the p-fractional Laplace equation*. *Nonlinear Anal.*, **177** (2018), 699-732. P. Pucci, M. Xiang and B. Zhang, *Multiple solutions for nonhomogeneous Schrödinger-Kirchhoff type equations involving the fractional p-Laplacian in $\mathbb{R}^{n}$*. *Calc. Var. Partial Differential Equations*, **54** (2015), 2785-2806. X. Ros-Oton and J. Serra, *The Dirichlet problem for the fractional Laplacian: regularity up to the boundary*. *J. Math. Pures Appl.(9)*, **101** (2014), 275-302. S. Shi, *Some notes on supersolutions of fractional $p$-Laplace equation.* *J. Math. Anal. Appl.*, **463** (2018), 1052-1074. S. Shi and J. Xiao, *Fractional capacities relative to bounded open Lipschitz sets.* *Potential Anal.*, **45** (2016), 261-298. S. Shi and J. 
Xiao, *Fractional capacities relative to bounded open Lipschitz sets complemented.* *Calc. Var. Partial Differential Equations*, **56** (2017), 1-22. S. Shi, L. Zhang and G. Wang, *Fractional nonlinear regularity, potential and balayage.* *J. Geom. Anal.*, **32** (2022), 1-29. S. Shi and L. Zhang, *Dual characterization of fractional capacity via solution of fractional p-Laplace equation.* *Math. Nachr.*, **293** (2020), 2233-2247. A. Schikorra, *Nonlinear commutators for the fractional p-Laplacian and applications.* *Math. Ann.*, **366** (2016), 695-720. J. Tan and J. Xiong, *A Harnack inequality for fractional Laplace equations with lower order terms.* *Discrete Contin. Dyn. Syst.*, **3** (2011), 975-983. J. L. Vàzquez, *Recent progress in the theory of nonlinear diffusion with fractional Laplacian operators.* *Discrete Contin. Dyn. Syst.*, **7** (2014), 857-885. M. Warma, *The fractional relative capacity and the fractional Laplacian with Neumann and Robin boundary conditions on open sets.* *Potential Anal.*, **42** (2015), 499-547. M. Warma, *Local Lipschitz continuity of the inverse of the fractional p-Laplacian, Hölder type continuity and continuous dependence of solutions to associated parabolic equations on bounded domains.* *Nonlinear Anal.*, **135** (2016), 129-157. N. Wiener, *The Dirichlet problem.* *J. Math. Phy.*, **1** (1976), 394-413. J. Xiao, *Homogeneous endpoint Besov space embeddings by Hausdorff capacity and heat equation*. *Adv. Math.*, **207** (2006), 828-846. J. Xiao, *Optimal geometric estimates for fractional Sobolev capacities.* *C. R. Math. Acad. Sci. Paris*, 354(2016): 149-153. J. Xiao, D. Ye, *Anisotropic Sobolev capacity with fractional order*. *Canad. J. Math.* **69**(2017): 873-889. Y. Zhang, X. Tang and J. Zhang, *Existence of infinitely many solutions for fractional p-Laplacian equations with sign-changing potential.* *Electron. J. Differential Equations*, **208** (2017), 1-14. 
[^1]: The authors were supported by the Natural Science Foundation of China (\# 12271232, 12071197) and the Natural Science Foundation of Shandong Province (\# ZR2019YQ04, \# 2020KJI002, \# ZR2021MA079). They would also like to thank Prof. Jie Xiao (Memorial University, Canada) for his suggestions and comments on this work.
{ "id": "2309.02408", "title": "Wiener type regularity for non-linear integro-differential equations", "authors": "Shaoguang Shi, Guanglan Wang, Zhichun Zhai", "categories": "math.AP", "license": "http://creativecommons.org/licenses/by-nc-sa/4.0/" }
--- abstract: | **Abstract.** Given a positive integer $n$, we consider the group algebra of the symmetric group $S_{n}$. In this algebra, we define $n$ elements $t_{1},t_{2},\ldots,t_{n}$ by the formula $$t_{\ell}:=\operatorname*{cyc}\nolimits_{\ell}+\operatorname*{cyc}% \nolimits_{\ell,\ell+1}+\operatorname*{cyc}\nolimits_{\ell,\ell+1,\ell +2}+\cdots+\operatorname*{cyc}\nolimits_{\ell,\ell+1,\ldots,n},$$ where $\operatorname*{cyc}\nolimits_{\ell,\ell+1,\ldots,k}$ denotes the cycle that sends $\ell\mapsto\ell+1\mapsto\ell+2\mapsto\cdots\mapsto k\mapsto\ell$. These $n$ elements are called the *somewhere-to-below shuffles* due to an interpretation as card-shuffling operators. In this paper, we show that their commutators $\left[ t_{i},t_{j}\right] =t_{i}t_{j}-t_{j}t_{i}$ are nilpotent, and specifically that $$\left[ t_{i},t_{j}\right] ^{\left\lceil \left( n-j\right) /2\right\rceil +1}=0\ \ \ \ \ \ \ \ \ \ \text{for any }i,j\in\left\{ 1,2,\ldots,n\right\}$$ and $$\left[ t_{i},t_{j}\right] ^{j-i+1}=0\ \ \ \ \ \ \ \ \ \ \text{for any }1\leq i\leq j\leq n.$$ We discuss some further identities and open questions. **Mathematics Subject Classifications:** 05E99, 20C30, 60J10. **Keywords:** symmetric group, permutations, card shuffling, top-to-random shuffle, group algebra, filtration, nilpotency, substitutional analysis. author: - Darij Grinberg bibliography: - biblio.bib date: version 2.0, September 20, 2023 title: Commutator nilpotency for somewhere-to-below shuffles --- # Introduction The *somewhere-to-below shuffles* $t_{1},t_{2},\ldots,t_{n}$ (and their linear combinations, called the *one-sided cycle shuffles*) are certain elements in the group algebra of a symmetric group $S_{n}$. They have been introduced in [@s2b1] by Lafrenière and the present author, and are a novel generalization of the top-to-random shuffle (also known as the *Tsetlin library*). 
They are defined by the formula $$t_{\ell}:=\operatorname*{cyc}\nolimits_{\ell}+\operatorname*{cyc}% \nolimits_{\ell,\ell+1}+\operatorname*{cyc}\nolimits_{\ell,\ell+1,\ell +2}+\cdots+\operatorname*{cyc}\nolimits_{\ell,\ell+1,\ldots,n}\in \mathbf{k}\left[ S_{n}\right] ,$$ where $\operatorname*{cyc}\nolimits_{\ell,\ell+1,\ldots,k}$ denotes the cycle that sends $\ell\mapsto\ell+1\mapsto\ell+2\mapsto\cdots\mapsto k\mapsto\ell$ (and leaves all remaining elements of $\left\{ 1,2,\ldots,n\right\}$ unchanged). One of the main results of [@s2b1] was the construction of a basis $\left( a_{w}\right) _{w\in S_{n}}$ of the group algebra in which multiplication by these shuffles acts as an upper-triangular matrix (i.e., for which $a_{w}t_{\ell}$ equals a linear combination of $a_{u}$'s with $u\leq w$ for a certain total order on $S_{n}$). Consequences of this fact (or, more precisely, of a certain filtration that entails this fact) include an explicit description of the eigenvalues of each one-sided cycle shuffle, as well as analogous properties of some related shuffles. Another consequence of the joint triangularizability of $t_{1},t_{2}% ,\ldots,t_{n}$ is the fact that the commutators $\left[ t_{i},t_{j}\right] :=t_{i}t_{j}-t_{j}t_{i}$ are nilpotent (since the commutator of two upper-triangular matrices is strictly upper-triangular and thus nilpotent). Explicitly, this means that $\left[ t_{i},t_{j}\right] ^{n!}=0$, since the $t_{1},t_{2},\ldots,t_{n}$ act on a free module of rank $n!$. However, experiments have suggested that the minimal $m\in\mathbb{N}$ satisfying $\left[ t_{i},t_{j}\right] ^{m}=0$ is far smaller than $n!$, and in fact is bounded from above by $n$. In the present paper, we shall prove this. 
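The experiments mentioned above are easy to reproduce. The following self-contained Python sketch (the dictionary encoding of $\mathbf{k}\left[ S_{n}\right]$ and the helper names `compose`, `cyc`, `t`, `mul`, `comm`, `nilpotency_index` are our own, not from [@s2b1]) computes the minimal $m$ with $\left[ t_{i},t_{j}\right] ^{m}=0$ for $n=5$ and checks it against the bounds stated below:

```python
def compose(p, q):
    """Continental product of permutations: (pq)(i) = p(q(i)).
    A permutation of {1,...,n} is stored as a 0-indexed tuple of images."""
    return tuple(p[q[i]] for i in range(len(p)))

def cyc(n, ks):
    """The cycle sending ks[0] -> ks[1] -> ... -> ks[-1] -> ks[0] (1-indexed)."""
    img = list(range(n))
    for a, b in zip(ks, ks[1:] + ks[:1]):
        img[a - 1] = b - 1
    return tuple(img)

def t(n, ell):
    """The somewhere-to-below shuffle t_ell as a dict {permutation: coefficient}."""
    elem = {}
    for w in range(ell, n + 1):
        p = cyc(n, list(range(ell, w + 1)))
        elem[p] = elem.get(p, 0) + 1
    return elem

def mul(a, b):
    """Product in the group algebra Z[S_n] (zero coefficients are dropped)."""
    c = {}
    for p, x in a.items():
        for q, y in b.items():
            r = compose(p, q)
            c[r] = c.get(r, 0) + x * y
    return {p: x for p, x in c.items() if x != 0}

def comm(a, b):
    """Commutator [a, b] = ab - ba."""
    c = dict(mul(a, b))
    for p, x in mul(b, a).items():
        c[p] = c.get(p, 0) - x
    return {p: x for p, x in c.items() if x != 0}

def nilpotency_index(c):
    """Minimal m >= 1 with c^m = 0 (assumes c is nilpotent)."""
    power, m = dict(c), 1
    while power:
        power = mul(power, c)
        m += 1
    return m

n = 5
for i in range(1, n + 1):
    for j in range(i, n + 1):
        m = nilpotency_index(comm(t(n, i), t(n, j)))
        # the two bounds proved below: m <= j - i + 1 and m <= ceil((n-j)/2) + 1
        assert m <= min(j - i + 1, (n - j + 1) // 2 + 1)
print("all commutator bounds verified for n =", n)
```

For $n=5$ the minimal exponents never exceed $\min\left\{ j-i+1,\ \left\lceil \left( n-j\right) /2\right\rceil +1\right\}$, in stark contrast to the generic bound $n!=120$.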
Concretely, we will prove the following results (the notation $\left[ m\right]$ means the set $\left\{ 1,2,\ldots,m\right\}$): - **Corollary [\[cor.right-bound\]](#cor.right-bound){reference-type="ref" reference="cor.right-bound"}.** We have $\left[ t_{i}% ,t_{j}\right] ^{\left\lceil \left( n-j\right) /2\right\rceil +1}=0$ for any $i,j\in\left[ n\right]$. - **Theorem [\[thm.right-bound\]](#thm.right-bound){reference-type="ref" reference="thm.right-bound"}.** Let $j\in\left[ n\right]$ and $m\in\mathbb{N}$ be such that $2m\geq n-j+2$. Let $i_{1},i_{2},\ldots,i_{m}$ be $m$ elements of $\left[ j\right]$ (not necessarily distinct). Then, $$\left[ t_{i_{1}},t_{j}\right] \left[ t_{i_{2}},t_{j}\right] \cdots\left[ t_{i_{m}},t_{j}\right] =0.$$ - **Corollary [\[cor.left-bound\]](#cor.left-bound){reference-type="ref" reference="cor.left-bound"}.** We have $\left[ t_{i}% ,t_{j}\right] ^{j-i+1}=0$ for any $1\leq i\leq j\leq n$. - **Theorem [\[thm.left-bound\]](#thm.left-bound){reference-type="ref" reference="thm.left-bound"}.** Let $j\in\left[ n\right]$, and let $m$ be a positive integer. Let $k_{1},k_{2},\ldots,k_{m}$ be $m$ elements of $\left[ j\right]$ (not necessarily distinct) satisfying $m\geq j-k_{m}+1$. Then, $$\left[ t_{k_{1}},t_{j}\right] \left[ t_{k_{2}},t_{j}\right] \cdots\left[ t_{k_{m}},t_{j}\right] =0.$$ Along the way, we will also prove the following helpful facts: - **Theorem [\[thm.1+sjtitj\]](#thm.1+sjtitj){reference-type="ref" reference="thm.1+sjtitj"}.** We have $\left( 1+s_{j}\right) \left[ t_{i},t_{j}\right] =0$ for any $1\leq i\leq j<n$, where $s_{j}$ denotes the transposition swapping $j$ with $j+1$. - **Theorem [\[thm.ti+1ti\]](#thm.ti+1ti){reference-type="ref" reference="thm.ti+1ti"}.** For any $i\in\left[ n-1\right]$, we have $t_{i+1}t_{i}=\left( t_{i}-1\right) t_{i}=t_{i}\left( t_{i}-1\right)$. 
- **Theorem [\[thm.ti+2ti\]](#thm.ti+2ti){reference-type="ref" reference="thm.ti+2ti"}.** For any $i\in\left[ n-2\right]$, we have $t_{i+2}\left( t_{i}-1\right) =\left( t_{i}-1\right) \left( t_{i+1}-1\right)$. - **Corollary [\[cor.titi+12=0\]](#cor.titi+12=0){reference-type="ref" reference="cor.titi+12=0"}.** For any $i\in\left[ n-1\right]$, we have $\left[ t_{i},t_{i+1}\right] =t_{i}\left( t_{i+1}-\left( t_{i}-1\right) \right)$ and $\left[ t_{i},t_{i+1}\right] t_{i}=\left[ t_{i},t_{i+1}\right] ^{2}=0$. These results can be regarded as first steps towards understanding the $\mathbf{k}$-subalgebra $\mathbf{k}\left[ t_{1},t_{2},\ldots,t_{n}\right]$ of $\mathbf{k}\left[ S_{n}\right]$ that is generated by the somewhere-to-below shuffles. So far, very little is known about this $\mathbf{k}$-subalgebra, except for its simultaneous triangularizability (a consequence of [@s2b1 Theorem 4.1]). One might ask for its dimension as a $\mathbf{k}$-module (when $\mathbf{k}$ is a field). Here is some numerical data for $\mathbf{k}=\mathbb{Q}$ and $n\leq8$: $$% \begin{tabular} [c]{|c||c|c|c|c|c|c|c|c|}\hline $n$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$\\\hline $\dim\left( \mathbb{Q}\left[ t_{1},t_{2},\ldots,t_{n}\right] \right) $ & $1$ & $2$ & $4$ & $9$ & $23$ & $66$ & $212$ & $761$\\\hline \end{tabular} \label{eq.dimQ}%$$ As of September 14th, 2023, this sequence of dimensions is not in the OEIS. We note that each single somewhere-to-below shuffle by itself is easily understood using the well-known theory of the top-to-random shuffle[^1], but this approach says nothing about the interactions between two or more of the $n$ somewhere-to-below shuffles. #### Acknowledgements The author would like to thank Sarah Brauner and Nadia Lafrenière for inspiring discussions. The SageMath CAS [@sagemath] was indispensable at every stage of the research presented here. # Notations and notions ## Basic notations Let $\mathbf{k}$ be any commutative ring. 
(The reader can safely take $\mathbf{k}=\mathbb{Z}$.) Let $\mathbb{N}:=\left\{ 0,1,2,\ldots\right\}$ be the set of all nonnegative integers. For any integers $a$ and $b$, we set $$\left[ a,b\right] :=\left\{ k\in\mathbb{Z}\ \mid\ a\leq k\leq b\right\} =\left\{ a,a+1,\ldots,b\right\} .$$ This is an empty set if $a>b$. In general, $\left[ a,b\right]$ is called an *integer interval*. For each $n\in\mathbb{Z}$, let $\left[ n\right] :=\left[ 1,n\right] =\left\{ 1,2,\ldots,n\right\}$. Fix an integer $n\in\mathbb{N}$. Let $S_{n}$ be the $n$-th symmetric group, i.e., the group of all permutations of $\left[ n\right]$. We multiply permutations in the "continental" way: that is, $\left( \pi\sigma\right) \left( i\right) =\pi\left( \sigma\left( i\right) \right)$ for all $\pi,\sigma\in S_{n}$ and $i\in\left[ n\right]$. For any $k$ distinct elements $i_{1},i_{2},\ldots,i_{k}$ of $\left[ n\right]$, we let $\operatorname*{cyc}\nolimits_{i_{1},i_{2},\ldots,i_{k}}$ be the permutation in $S_{n}$ that sends $i_{1},i_{2},\ldots,i_{k-1},i_{k}$ to $i_{2},i_{3},\ldots,i_{k},i_{1}$, respectively while leaving all remaining elements of $\left[ n\right]$ unchanged. This permutation is known as a *cycle*. Note that $\operatorname*{cyc}\nolimits_{i}=\operatorname*{id}$ for any single $i\in\left[ n\right]$. For any $i\in\left[ n-1\right]$, we let $s_{i}:=\operatorname*{cyc}% \nolimits_{i,i+1}\in S_{n}$. This permutation $s_{i}$ is called a *simple transposition*, as it swaps $i$ with $i+1$ while leaving all other elements of $\left[ n\right]$ unchanged. It clearly satisfies $$s_{i}^{2}=\operatorname*{id}. \label{eq.si.invol}%$$ Furthermore, two simple transpositions $s_{i}$ and $s_{j}$ commute whenever $\left\vert i-j\right\vert >1$. This latter fact is known as *reflection locality*. ## Some elements of $\mathbf{k}\left[ S_{n}\right]$ Consider the group algebra $\mathbf{k}\left[ S_{n}\right]$. 
In this algebra, define $n$ elements $t_{1},t_{2},\ldots,t_{n}$ by setting[^2] $$t_{\ell}:=\operatorname*{cyc}\nolimits_{\ell}+\operatorname*{cyc}% \nolimits_{\ell,\ell+1}+\operatorname*{cyc}\nolimits_{\ell,\ell+1,\ell +2}+\cdots+\operatorname*{cyc}\nolimits_{\ell,\ell+1,\ldots,n}\in \mathbf{k}\left[ S_{n}\right] \label{eq.def.tl.deftl}%$$ for each $\ell\in\left[ n\right]$. Thus, in particular, $t_{n}% =\operatorname*{cyc}\nolimits_{n}=\operatorname*{id}=1$ (where $1$ means the unity of $\mathbf{k}\left[ S_{n}\right]$). We shall refer to the $n$ elements $t_{1},t_{2},\ldots,t_{n}$ as the *somewhere-to-below shuffles*. These shuffles were studied in [@s2b1] (where, in particular, their probabilistic meaning was discussed, which explains the origin of their name). ## Commutators If $a$ and $b$ are two elements of some ring, then $\left[ a,b\right]$ shall denote their commutator $ab-ba$. This notation clashes with our above-defined notation $\left[ a,b\right]$ for the interval $\left\{ k\in\mathbb{Z}\ \mid\ a\leq k\leq b\right\}$ (when $a$ and $b$ are two integers), but we don't expect any confusion to arise in practice, since we will only use the notation $\left[ a,b\right]$ for $ab-ba$ when $a$ and $b$ are visibly elements of the ring $\mathbf{k}\left[ S_{n}\right]$ (as opposed to integers). # Elementary computations in $S_{n}$ In this section, we will perform some simple computations in the symmetric group $S_{n}$. The results of these computations will later become ingredients in some of our proofs. ## The cycles $\left( v\Longrightarrow w\right)$ **Definition 1** (). Let $v,w\in\left[ n\right]$ satisfy $v\leq w$. Then, $\left( v\Longrightarrow w\right)$ shall denote the permutation $\operatorname*{cyc}% \nolimits_{v,v+1,\ldots,w}$. The symbol "$\Longrightarrow$" in this notation $\left( v\Longrightarrow w\right)$ has nothing to do with logical implication; instead, it is meant to summon an image of a " current" flowing from $v$ to $w$. 
The symbol "$\Longrightarrow$" is understood to bind less strongly than addition or subtraction; thus, for example, the expression "$\left( v+1\Longrightarrow w\right)$"  means $\left( \left( v+1\right) \Longrightarrow w\right)$. Every $v\in\left[ n\right]$ satisfies $$\left( v\Longrightarrow v\right) =\operatorname*{cyc}\nolimits_{v}% =\operatorname*{id}=1. \label{eq.p-cyc-p=1}%$$ The following is just a little bit less obvious: **Proposition 2** (). [\[prop.cycpq.sss\]]{#prop.cycpq.sss label="prop.cycpq.sss"}Let $v,w\in\left[ n\right]$ satisfy $v\leq w$. Then, $\left( v\Longrightarrow w\right) =s_{v}s_{v+1}\cdots s_{w-1}$. **Proof.** Easy verification.   ------------------------------------------------------------------------ **Proof.** For any two distinct elements $i$ and $j$ of $\left[ n\right]$, we let $t_{i,j}$ denote the permutation in $S_{n}$ that swaps $i$ with $j$ while leaving all other elements of $\left[ n\right]$ unchanged. (This notation has nothing to do with the somewhere-to-below shuffle $t_{\ell}$.) Note that the permutation $t_{i,j}$ is called a *transposition*. It is clear that $t_{i,j}=\operatorname*{cyc}\nolimits_{i,j}$ for any two distinct elements $i$ and $j$ of $\left[ n\right]$. Thus, for each $i\in\left[ n-1\right]$, we have $$t_{i,i+1}=\operatorname*{cyc}\nolimits_{i,i+1}=s_{i} \label{pf.prop.cycpq.sss.1}%$$ (since $s_{i}$ is defined to be $\operatorname*{cyc}\nolimits_{i,i+1}$). 
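Independently of the verification below, Proposition [\[prop.cycpq.sss\]](#prop.cycpq.sss){reference-type="ref" reference="prop.cycpq.sss"} is easy to confirm by brute force for small $n$. The following Python snippet is a sanity check of our own (the helper names are ours), using the continental product convention fixed above; it tests all pairs $v\leq w$ for $n=7$:

```python
def compose(p, q):
    # continental product: (pq)(i) = p(q(i)); permutations as 0-indexed tuples
    return tuple(p[q[i]] for i in range(len(p)))

def cyc(n, ks):
    # the cycle sending ks[0] -> ks[1] -> ... -> ks[-1] -> ks[0] (1-indexed)
    img = list(range(n))
    for a, b in zip(ks, ks[1:] + ks[:1]):
        img[a - 1] = b - 1
    return tuple(img)

def s(n, i):
    # the simple transposition s_i = cyc_{i, i+1}
    return cyc(n, [i, i + 1])

def arrow(n, v, w):
    # the cycle (v ==> w) = cyc_{v, v+1, ..., w}
    return cyc(n, list(range(v, w + 1)))

n = 7
for v in range(1, n + 1):
    for w in range(v, n + 1):
        prod = tuple(range(n))             # empty product = identity
        for u in range(v, w):              # build s_v s_{v+1} ... s_{w-1}
            prod = compose(prod, s(n, u))
        assert arrow(n, v, w) == prod
print("Proposition 2 verified for n =", n)
```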
A known fact (see, e.g., [@detnotes Exercise 5.16]) shows that every $k\in\left\{ 1,2,\ldots,n\right\}$ and every $k$ distinct elements $i_{1},i_{2},\ldots,i_{k}$ of $\left[ n\right]$ satisfy $$\operatorname*{cyc}\nolimits_{i_{1},i_{2},\ldots,i_{k}}=t_{i_{1},i_{2}}\circ t_{i_{2},i_{3}}\circ\cdots\circ t_{i_{k-1},i_{k}}.$$ Applying this to $k=w-v+1$ and $\left( i_{1},i_{2},\ldots,i_{k}\right) =\left( v,v+1,\ldots,w\right)$, we obtain $$\begin{aligned} \operatorname*{cyc}\nolimits_{v,v+1,\ldots,w} & =\underbrace{t_{v,v+1}% }_{\substack{=s_{v}\\\text{(by (\ref{pf.prop.cycpq.sss.1}),}\\\text{applied to }i=v\text{)}}}\circ\underbrace{t_{v+1,v+2}}_{\substack{=s_{v+1}\\\text{(by (\ref{pf.prop.cycpq.sss.1}),}\\\text{applied to }i=v+1\text{)}}}\circ \cdots\circ\underbrace{t_{w-1,w}}_{\substack{=s_{w-1}\\\text{(by (\ref{pf.prop.cycpq.sss.1}),}\\\text{applied to }i=w-1\text{)}}}\\ & =s_{v}\circ s_{v+1}\circ\cdots\circ s_{w-1}=s_{v}s_{v+1}\cdots s_{w-1}%\end{aligned}$$ (since the product $\pi\sigma$ of two permutations $\pi,\sigma\in S_{n}$ is precisely their composition $\pi\circ\sigma$). However, the definition of $\left( v\Longrightarrow w\right)$ yields $$\left( v\Longrightarrow w\right) =\operatorname*{cyc}\nolimits_{v,v+1,\ldots ,w}=s_{v}s_{v+1}\cdots s_{w-1}.$$ This proves Proposition [\[prop.cycpq.sss\]](#prop.cycpq.sss){reference-type="ref" reference="prop.cycpq.sss"}.   ------------------------------------------------------------------------ **Proposition 3** (). [\[prop.cycpq.rec\]]{#prop.cycpq.rec label="prop.cycpq.rec"}Let $v,w\in\left[ n\right]$ satisfy $v<w$. Then: 1. We have $\left( v\Longrightarrow w\right) =s_{v}\left( v+1\Longrightarrow w\right)$. 2. We have $\left( v\Longrightarrow w\right) =\left( v\Longrightarrow w-1\right) s_{w-1}$. **Proof.** Easy verification (easiest using Proposition [\[prop.cycpq.sss\]](#prop.cycpq.sss){reference-type="ref" reference="prop.cycpq.sss"}).   
------------------------------------------------------------------------ **Proof.** We have $v<w$, so that $v\leq w-1$ (since $v$ and $w$ are integers) and thus $v+1\leq w$. **(a)** We have $v+1\leq w$. Thus, the permutation $\left( v+1\Longrightarrow w\right)$ is well-defined, and Proposition [\[prop.cycpq.sss\]](#prop.cycpq.sss){reference-type="ref" reference="prop.cycpq.sss"} (applied to $v+1$ instead of $v$) yields $$\left( v+1\Longrightarrow w\right) =s_{v+1}s_{\left( v+1\right) +1}\cdots s_{w-1}=s_{v+1}s_{v+2}\cdots s_{w-1}. \label{pf.prop.cycpq.rec.1}%$$ However, Proposition [\[prop.cycpq.sss\]](#prop.cycpq.sss){reference-type="ref" reference="prop.cycpq.sss"} yields $$\left( v\Longrightarrow w\right) =s_{v}s_{v+1}\cdots s_{w-1}=s_{v}% \underbrace{\left( s_{v+1}s_{v+2}\cdots s_{w-1}\right) }_{\substack{=\left( v+1\Longrightarrow w\right) \\\text{(by (\ref{pf.prop.cycpq.rec.1}))}}% }=s_{v}\left( v+1\Longrightarrow w\right) .$$ This proves Proposition [\[prop.cycpq.rec\]](#prop.cycpq.rec){reference-type="ref" reference="prop.cycpq.rec"} **(a)**. **(b)** We have $v\leq w-1$. Thus, the permutation $\left( v\Longrightarrow w-1\right)$ is well-defined, and Proposition [\[prop.cycpq.sss\]](#prop.cycpq.sss){reference-type="ref" reference="prop.cycpq.sss"} (applied to $w-1$ instead of $w$) yields $$\left( v\Longrightarrow w-1\right) =s_{v}s_{v+1}\cdots s_{\left( w-1\right) -1}=s_{v}s_{v+1}\cdots s_{w-2}. \label{pf.prop.cycpq.rec.2}%$$ However, Proposition [\[prop.cycpq.sss\]](#prop.cycpq.sss){reference-type="ref" reference="prop.cycpq.sss"} yields $$\left( v\Longrightarrow w\right) =s_{v}s_{v+1}\cdots s_{w-1}% =\underbrace{\left( s_{v}s_{v+1}\cdots s_{w-2}\right) }_{\substack{=\left( v\Longrightarrow w-1\right) \\\text{(by (\ref{pf.prop.cycpq.rec.2}))}% }}s_{w-1}=\left( v\Longrightarrow w-1\right) s_{w-1}.$$ This proves Proposition [\[prop.cycpq.rec\]](#prop.cycpq.rec){reference-type="ref" reference="prop.cycpq.rec"} **(b)**.   
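Both recursions of Proposition [\[prop.cycpq.rec\]](#prop.cycpq.rec){reference-type="ref" reference="prop.cycpq.rec"} can likewise be confirmed numerically; this is a sanity check of our own, with our own helper names:

```python
def compose(p, q):
    # continental product: (pq)(i) = p(q(i)); permutations as 0-indexed tuples
    return tuple(p[q[i]] for i in range(len(p)))

def cyc(n, ks):
    # the cycle sending ks[0] -> ks[1] -> ... -> ks[-1] -> ks[0] (1-indexed)
    img = list(range(n))
    for a, b in zip(ks, ks[1:] + ks[:1]):
        img[a - 1] = b - 1
    return tuple(img)

def s(n, i):
    # the simple transposition s_i = cyc_{i, i+1}
    return cyc(n, [i, i + 1])

def arrow(n, v, w):
    # the cycle (v ==> w) = cyc_{v, v+1, ..., w}
    return cyc(n, list(range(v, w + 1)))

n = 7
for v in range(1, n + 1):
    for w in range(v + 1, n + 1):          # v < w
        # part (a): (v ==> w) = s_v (v+1 ==> w)
        assert arrow(n, v, w) == compose(s(n, v), arrow(n, v + 1, w))
        # part (b): (v ==> w) = (v ==> w-1) s_{w-1}
        assert arrow(n, v, w) == compose(arrow(n, v, w - 1), s(n, w - 1))
print("Proposition 3 verified for n =", n)
```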
------------------------------------------------------------------------ ## Rewriting rules for products of cycles Next we recall how conjugation in $S_{n}$ acts on cycles: **Proposition 4** (). [\[prop.cyc-conj\]]{#prop.cyc-conj label="prop.cyc-conj"}Let $\sigma\in S_{n}$. Let $i_{1},i_{2},\ldots,i_{k}$ be $k$ distinct elements of $\left[ n\right]$. Then, $$\sigma\operatorname*{cyc}\nolimits_{i_{1},i_{2},\ldots,i_{k}}\sigma ^{-1}=\operatorname*{cyc}\nolimits_{\sigma\left( i_{1}\right) ,\sigma\left( i_{2}\right) ,\ldots,\sigma\left( i_{k}\right) }. \label{eq.prop.cyc-conj.eq}%$$ **Proof.** Well-known.   ------------------------------------------------------------------------ **Proof.** A well-known fact (see, e.g., [@detnotes Exercise 5.17 **(a)**]) says that $$\sigma\circ\operatorname*{cyc}\nolimits_{i_{1},i_{2},\ldots,i_{k}}\circ \sigma^{-1}=\operatorname*{cyc}\nolimits_{\sigma\left( i_{1}\right) ,\sigma\left( i_{2}\right) ,\ldots,\sigma\left( i_{k}\right) }. \label{pf.prop.cyc-conj.1}%$$ However, the product $\pi\sigma$ of two permutations $\pi,\sigma\in S_{n}$ is precisely their composition $\pi\circ\sigma$. Hence, $$\sigma\operatorname*{cyc}\nolimits_{i_{1},i_{2},\ldots,i_{k}}\sigma ^{-1}=\sigma\circ\operatorname*{cyc}\nolimits_{i_{1},i_{2},\ldots,i_{k}}% \circ\sigma^{-1}=\operatorname*{cyc}\nolimits_{\sigma\left( i_{1}\right) ,\sigma\left( i_{2}\right) ,\ldots,\sigma\left( i_{k}\right) }%$$ (by ([\[pf.prop.cyc-conj.1\]](#pf.prop.cyc-conj.1){reference-type="ref" reference="pf.prop.cyc-conj.1"})). This proves Proposition [\[prop.cyc-conj\]](#prop.cyc-conj){reference-type="ref" reference="prop.cyc-conj"}.   ------------------------------------------------------------------------ Proposition [\[prop.cyc-conj\]](#prop.cyc-conj){reference-type="ref" reference="prop.cyc-conj"} allows us to prove several relations between the cycles $\left( v\Longrightarrow w\right)$. 
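As a sanity check of our own, the conjugation formula ([\[eq.prop.cyc-conj.eq\]](#eq.prop.cyc-conj.eq){reference-type="ref" reference="eq.prop.cyc-conj.eq"}) can be verified exhaustively for small $n$ (all helper names below are ours):

```python
from itertools import permutations

def compose(p, q):
    # continental product: (pq)(i) = p(q(i)); permutations as 0-indexed tuples
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    # the inverse permutation: inverse(p)[p[i]] = i
    q = [0] * len(p)
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

def cyc(n, ks):
    # the cycle sending ks[0] -> ks[1] -> ... -> ks[-1] -> ks[0] (1-indexed)
    img = list(range(n))
    for a, b in zip(ks, ks[1:] + ks[:1]):
        img[a - 1] = b - 1
    return tuple(img)

n = 5
for sigma in permutations(range(n)):       # all 120 elements of S_5
    for ks in [[2], [1, 3], [2, 5, 1], [3, 1, 4, 2]]:
        lhs = compose(compose(sigma, cyc(n, ks)), inverse(sigma))
        rhs = cyc(n, [sigma[k - 1] + 1 for k in ks])   # cyc_{sigma(i_1),...,sigma(i_k)}
        assert lhs == rhs
print("conjugation formula verified for all sigma in S_5")
```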
We shall collect a catalogue of such relations now in order to have them at arm's reach in later proofs. **Lemma 5** (). [\[lem.jpiq1\]]{#lem.jpiq1 label="lem.jpiq1"}Let $i,j,v,w\in\left[ n\right]$ be such that $w\geq v>j\geq i$. Then, $$\left( j+1\Longrightarrow v\right) \left( i\Longrightarrow w\right) =\left( i\Longrightarrow w\right) \left( j\Longrightarrow v-1\right) .$$ **Proof.** Let $\sigma:=\left( i\Longrightarrow w\right)$. We have $i\leq j$ (since $j\geq i$) and $v-1\leq w-1$ (since $w\geq v$). Thus, the numbers $j,j+1,\ldots,v-1$ all belong to the interval $\left[ i,w-1\right]$. Hence, the permutation $\sigma=\left( i\Longrightarrow w\right) =\operatorname*{cyc}\nolimits_{i,i+1,\ldots,w}$ sends these numbers to $j+1,j+2,\ldots,v$, respectively. In other words, $$\left( \sigma\left( j\right) ,\sigma\left( j+1\right) ,\ldots ,\sigma\left( v-1\right) \right) =\left( j+1,j+2,\ldots,v\right) .$$ However, from $\left( i\Longrightarrow w\right) =\sigma$ and $\left( j\Longrightarrow v-1\right) =\operatorname*{cyc}\nolimits_{j,j+1,\ldots,v-1}%$, we obtain $$\begin{aligned} & \left( i\Longrightarrow w\right) \left( j\Longrightarrow v-1\right) \left( i\Longrightarrow w\right) ^{-1}\\ & =\sigma\operatorname*{cyc}\nolimits_{j,j+1,\ldots,v-1}\sigma^{-1}% =\operatorname*{cyc}\nolimits_{\sigma\left( j\right) ,\sigma\left( j+1\right) ,\ldots,\sigma\left( v-1\right) }\ \ \ \ \ \ \ \ \ \ \left( \text{by (\ref{eq.prop.cyc-conj.eq})}\right) \\ & =\operatorname*{cyc}\nolimits_{j+1,j+2,\ldots,v}\ \ \ \ \ \ \ \ \ \ \left( \text{since }\left( \sigma\left( j\right) ,\sigma\left( j+1\right) ,\ldots,\sigma\left( v-1\right) \right) =\left( j+1,j+2,\ldots,v\right) \right) \\ & =\left( j+1\Longrightarrow v\right) .\end{aligned}$$ In other words, $\left( i\Longrightarrow w\right) \left( j\Longrightarrow v-1\right) =\left( j+1\Longrightarrow v\right) \left( i\Longrightarrow w\right)$. Thus, Lemma [\[lem.jpiq1\]](#lem.jpiq1){reference-type="ref" reference="lem.jpiq1"} is proved.   
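Lemma [\[lem.jpiq1\]](#lem.jpiq1){reference-type="ref" reference="lem.jpiq1"} can also be confirmed by exhausting all admissible quadruples for a small $n$ (again a quick sanity check of ours, with our own helper names):

```python
def compose(p, q):
    # continental product: (pq)(i) = p(q(i)); permutations as 0-indexed tuples
    return tuple(p[q[i]] for i in range(len(p)))

def cyc(n, ks):
    # the cycle sending ks[0] -> ks[1] -> ... -> ks[-1] -> ks[0] (1-indexed)
    img = list(range(n))
    for a, b in zip(ks, ks[1:] + ks[:1]):
        img[a - 1] = b - 1
    return tuple(img)

def arrow(n, v, w):
    # the cycle (v ==> w) = cyc_{v, v+1, ..., w}; note (v ==> v) is the identity
    return cyc(n, list(range(v, w + 1)))

n = 6
for i in range(1, n + 1):
    for j in range(i, n + 1):              # j >= i
        for v in range(j + 1, n + 1):      # v > j
            for w in range(v, n + 1):      # w >= v
                lhs = compose(arrow(n, j + 1, v), arrow(n, i, w))
                rhs = compose(arrow(n, i, w), arrow(n, j, v - 1))
                assert lhs == rhs
print("Lemma 5 verified for n =", n)
```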
------------------------------------------------------------------------ **Lemma 6** (). [\[lem.jpiq2\]]{#lem.jpiq2 label="lem.jpiq2"}Let $i,v,w\in\left[ n\right]$ be such that $v>w\geq i$. Then, $$\left( i+1\Longrightarrow v\right) \left( i\Longrightarrow w\right) =\left( i\Longrightarrow w+1\right) \left( i\Longrightarrow v\right) .$$ **Proof.** We have $i<v$ (since $v>i$). Thus, Proposition [\[prop.cycpq.rec\]](#prop.cycpq.rec){reference-type="ref" reference="prop.cycpq.rec"} **(a)** yields $$\left( i\Longrightarrow v\right) =s_{i}\left( i+1\Longrightarrow v\right) . \label{pf.lem.jpiq2.1}%$$ On the other hand, from $v>w$, we obtain $v\geq w+1$, so that $w+1\leq v\leq n$ and therefore $w+1\in\left[ n\right]$. Furthermore, $v\geq w+1>w\geq i\geq i$. Thus, Lemma [\[lem.jpiq1\]](#lem.jpiq1){reference-type="ref" reference="lem.jpiq1"} (applied to $i$, $w+1$ and $v$ instead of $j$, $v$ and $w$) yields $$\begin{aligned} \left( i+1\Longrightarrow w+1\right) \left( i\Longrightarrow v\right) & =\left( i\Longrightarrow v\right) \left( i\Longrightarrow \underbrace{\left( w+1\right) -1}_{=w}\right) \nonumber\\ & =\left( i\Longrightarrow v\right) \left( i\Longrightarrow w\right) . \label{pf.lem.jpiq2.3}%\end{aligned}$$ However, Proposition [\[prop.cycpq.rec\]](#prop.cycpq.rec){reference-type="ref" reference="prop.cycpq.rec"} **(a)** yields $\left( i\Longrightarrow w+1\right) =s_{i}\left( i+1\Longrightarrow w+1\right)$ (since $i\leq w<w+1$). 
Hence, $$\begin{aligned} \underbrace{\left( i\Longrightarrow w+1\right) }_{=s_{i}\left( i+1\Longrightarrow w+1\right) }\left( i\Longrightarrow v\right) & =s_{i}\underbrace{\left( i+1\Longrightarrow w+1\right) \left( i\Longrightarrow v\right) }_{\substack{=\left( i\Longrightarrow v\right) \left( i\Longrightarrow w\right) \\\text{(by (\ref{pf.lem.jpiq2.3}))}}}=s_{i}\underbrace{\left( i\Longrightarrow v\right) }_{\substack{=s_{i}\left( i+1\Longrightarrow v\right) \\\text{(by (\ref{pf.lem.jpiq2.1}))}}}\left( i\Longrightarrow w\right) \\ & =\underbrace{s_{i}s_{i}}_{\substack{=s_{i}^{2}=\operatorname*{id}\\\text{(by (\ref{eq.si.invol}))}}}\left( i+1\Longrightarrow v\right) \left( i\Longrightarrow w\right) =\left( i+1\Longrightarrow v\right) \left( i\Longrightarrow w\right) .\end{aligned}$$ This proves Lemma [\[lem.jpiq2\]](#lem.jpiq2){reference-type="ref" reference="lem.jpiq2"}.   ------------------------------------------------------------------------ **Lemma 7** (). [\[lem.suip\]]{#lem.suip label="lem.suip"}Let $i,u,v\in\left[ n\right]$ be such that $i<u<v$. Then, $$s_{u}\left( i\Longrightarrow v\right) =\left( i\Longrightarrow v\right) s_{u-1}.$$ **Proof.** Let $\sigma:=\left( i\Longrightarrow v\right)$. Then, $i\leq u-1$ (since $i<u$) and $u\leq v-1$ (since $u<v$). Therefore, the numbers $u-1$ and $u$ both belong to the interval $\left[ i,v-1\right]$. Hence, the permutation $\sigma=\left( i\Longrightarrow v\right) =\operatorname*{cyc}\nolimits_{i,i+1,\ldots,v}$ sends these numbers to $u$ and $u+1$, respectively. 
In other words, $$\sigma\left( u-1\right) =u\ \ \ \ \ \ \ \ \ \ \text{and}% \ \ \ \ \ \ \ \ \ \ \sigma\left( u\right) =u+1.$$ However, from $\left( i\Longrightarrow v\right) =\sigma$ and $s_{u-1}% =\operatorname*{cyc}\nolimits_{u-1,u}$, we obtain $$\begin{aligned} \left( i\Longrightarrow v\right) s_{u-1}\left( i\Longrightarrow v\right) ^{-1} & =\sigma\operatorname*{cyc}\nolimits_{u-1,u}\sigma^{-1}% =\operatorname*{cyc}\nolimits_{\sigma\left( u-1\right) ,\sigma\left( u\right) }\ \ \ \ \ \ \ \ \ \ \left( \text{by (\ref{eq.prop.cyc-conj.eq}% )}\right) \\ & =\operatorname*{cyc}\nolimits_{u,u+1}\ \ \ \ \ \ \ \ \ \ \left( \text{since }\sigma\left( u-1\right) =u\text{ and }\sigma\left( u\right) =u+1\right) \\ & =s_{u}.\end{aligned}$$ In other words, $\left( i\Longrightarrow v\right) s_{u-1}=s_{u}\left( i\Longrightarrow v\right)$. Thus, Lemma [\[lem.suip\]](#lem.suip){reference-type="ref" reference="lem.suip"} is proved.   ------------------------------------------------------------------------ Finally, the following fact is easy to check: **Lemma 8** (). [\[lem.cyc-ijk\]]{#lem.cyc-ijk label="lem.cyc-ijk"}Let $i,j,k\in\left[ n\right]$ be such that $i\leq j\leq k$. Then, $$\left( i\Longrightarrow k\right) =\left( i\Longrightarrow j\right) \left( j\Longrightarrow k\right) .$$ **Proof.** Proposition [\[prop.cycpq.sss\]](#prop.cycpq.sss){reference-type="ref" reference="prop.cycpq.sss"} yields $\left( i\Longrightarrow j\right) =s_{i}s_{i+1}\cdots s_{j-1}$ and $\left( j\Longrightarrow k\right) =s_{j}s_{j+1}\cdots s_{k-1}$. 
Multiplying these equalities by each other, we find $$\left( i\Longrightarrow j\right) \left( j\Longrightarrow k\right) =\left( s_{i}s_{i+1}\cdots s_{j-1}\right) \left( s_{j}s_{j+1}\cdots s_{k-1}\right) =s_{i}s_{i+1}\cdots s_{k-1}.$$ Comparing this with $$\left( i\Longrightarrow k\right) =s_{i}s_{i+1}\cdots s_{k-1}% \ \ \ \ \ \ \ \ \ \ \left( \text{by Proposition \ref{prop.cycpq.sss}}\right) ,$$ we obtain $\left( i\Longrightarrow k\right) =\left( i\Longrightarrow j\right) \left( j\Longrightarrow k\right)$. This proves Lemma [\[lem.cyc-ijk\]](#lem.cyc-ijk){reference-type="ref" reference="lem.cyc-ijk"}.   ------------------------------------------------------------------------ # Basic properties of somewhere-to-below shuffles We now return to the group algebra $\mathbf{k}\left[ S_{n}\right]$. We begin by rewriting the definition of the somewhere-to-below shuffle $t_{\ell}$: **Proposition 9** (). [\[prop.tl-as-sum\]]{#prop.tl-as-sum label="prop.tl-as-sum"}Let $\ell\in\left[ n\right]$. Then, $$t_{\ell}=\sum\limits_{w=\ell}^{n}\left( \ell\Longrightarrow w\right) .$$ **Proof.** From ([\[eq.def.tl.deftl\]](#eq.def.tl.deftl){reference-type="ref" reference="eq.def.tl.deftl"}), we have $$\begin{aligned} t_{\ell} & =\operatorname*{cyc}\nolimits_{\ell}+\operatorname*{cyc}% \nolimits_{\ell,\ell+1}+\operatorname*{cyc}\nolimits_{\ell,\ell+1,\ell +2}+\cdots+\operatorname*{cyc}\nolimits_{\ell,\ell+1,\ldots,n}\\ & =\sum\limits_{w=\ell}^{n}\underbrace{\operatorname*{cyc}\nolimits_{\ell ,\ell+1,\ldots,w}}_{\substack{=\left( \ell\Longrightarrow w\right) \\\text{(by the definition of }\left( \ell\Longrightarrow w\right) \text{)}% }}=\sum\limits_{w=\ell}^{n}\left( \ell\Longrightarrow w\right) .\end{aligned}$$ This proves Proposition [\[prop.tl-as-sum\]](#prop.tl-as-sum){reference-type="ref" reference="prop.tl-as-sum"}.   ------------------------------------------------------------------------ **Corollary 10** (). 
[\[cor.tl-via-tl+1\]]{#cor.tl-via-tl+1 label="cor.tl-via-tl+1"}Let $\ell\in\left[ n-1\right]$. Then, $t_{\ell }=1+s_{\ell}t_{\ell+1}$. **Proof.** Proposition [\[prop.tl-as-sum\]](#prop.tl-as-sum){reference-type="ref" reference="prop.tl-as-sum"} yields $$\begin{aligned} t_{\ell} & =\sum\limits_{w=\ell}^{n}\left( \ell\Longrightarrow w\right) =\underbrace{\left( \ell\Longrightarrow\ell\right) }_{=1}+\sum\limits_{w=\ell +1}^{n}\underbrace{\left( \ell\Longrightarrow w\right) }_{\substack{=s_{\ell }\left( \ell+1\Longrightarrow w\right) \\\text{(by Proposition \ref{prop.cycpq.rec} \textbf{(a)})}}}\\ & =1+\sum\limits_{w=\ell+1}^{n}s_{\ell}\left( \ell+1\Longrightarrow w\right) =1+s_{\ell}\sum\limits_{w=\ell+1}^{n}\left( \ell+1\Longrightarrow w\right) .\end{aligned}$$ Comparing this with $$1+s_{\ell}\underbrace{t_{\ell+1}}_{\substack{=\sum\limits_{w=\ell+1}^{n}\left( \ell+1\Longrightarrow w\right) \\\text{(by Proposition \ref{prop.tl-as-sum}% )}}}=1+s_{\ell}\sum\limits_{w=\ell+1}^{n}\left( \ell+1\Longrightarrow w\right) ,$$ we obtain $t_{\ell}=1+s_{\ell}t_{\ell+1}$, qed.   ------------------------------------------------------------------------ We state another simple property of the $t_{\ell}$'s: **Lemma 11** (). [\[lem.commute-with-tj\]]{#lem.commute-with-tj label="lem.commute-with-tj"}Let $\ell\in\left[ n\right]$. Let $\sigma\in S_{n}$. Assume that $\sigma$ leaves all the elements $\ell,\ell+1,\ldots,n$ unchanged. Then, $\sigma$ commutes with $t_{\ell}$ in $\mathbf{k}\left[ S_{n}\right]$. **Proof.** The permutation $\sigma$ leaves all the elements $\ell,\ell+1,\ldots,n$ unchanged, and thus commutes with each cycle $\operatorname*{cyc}% \nolimits_{\ell,\ell+1,\ldots,w}$ with $w\geq\ell$ (because the latter cycle permutes only elements of $\left\{ \ell,\ell+1,\ldots,n\right\}$). Hence, the permutation $\sigma$ also commutes with the sum $\sum\limits_{w=\ell}% ^{n}\operatorname*{cyc}\nolimits_{\ell,\ell+1,\ldots,w}$ of these cycles. 
Since the definition of $t_{\ell}$ yields $$t_{\ell}=\operatorname*{cyc}\nolimits_{\ell}+\operatorname*{cyc}% \nolimits_{\ell,\ell+1}+\operatorname*{cyc}\nolimits_{\ell,\ell+1,\ell +2}+\cdots+\operatorname*{cyc}\nolimits_{\ell,\ell+1,\ldots,n}=\sum\limits_{w=\ell }^{n}\operatorname*{cyc}\nolimits_{\ell,\ell+1,\ldots,w},$$ we can rewrite this as follows: The permutation $\sigma$ commutes with $t_{\ell}$. This proves Lemma [\[lem.commute-with-tj\]](#lem.commute-with-tj){reference-type="ref" reference="lem.commute-with-tj"}.   ------------------------------------------------------------------------ Specifically, we will need only the following particular case of Lemma [\[lem.commute-with-tj\]](#lem.commute-with-tj){reference-type="ref" reference="lem.commute-with-tj"}: **Lemma 12** (). [\[lem.commute-with-tj-specific\]]{#lem.commute-with-tj-specific label="lem.commute-with-tj-specific"}Let $i,k,j\in\left[ n\right]$ be such that $i\leq k<j$. Then, $$\left( i\Longrightarrow k\right) t_{j}=t_{j}\left( i\Longrightarrow k\right) \label{eq.lem.commute-with-tj-specific.ab=ba}%$$ and $$\left[ \left( i\Longrightarrow k\right) ,\ t_{j}\right] =0. \label{eq.lem.commute-with-tj-specific.comm}%$$ **Proof.** The permutation $\left( i\Longrightarrow k\right) =\operatorname*{cyc}% \nolimits_{i,i+1,\ldots,k}$ leaves all the elements $k+1,k+2,\ldots,n$ unchanged, and thus leaves all the elements $j,j+1,\ldots,n$ unchanged (since the latter elements are a subset of the former elements (because $k<j$)). Hence, Lemma [\[lem.commute-with-tj\]](#lem.commute-with-tj){reference-type="ref" reference="lem.commute-with-tj"} (applied to $\ell=j$ and $\sigma =\left( i\Longrightarrow k\right)$) shows that $\left( i\Longrightarrow k\right)$ commutes with $t_{j}$ in $\mathbf{k}\left[ S_{n}\right]$. In other words, $\left( i\Longrightarrow k\right) t_{j}=t_{j}\left( i\Longrightarrow k\right)$. 
This proves ([\[eq.lem.commute-with-tj-specific.ab=ba\]](#eq.lem.commute-with-tj-specific.ab=ba){reference-type="ref" reference="eq.lem.commute-with-tj-specific.ab=ba"}). Now, the definition of a commutator yields $$\left[ \left( i\Longrightarrow k\right) ,\ t_{j}\right] =\left( i\Longrightarrow k\right) t_{j}-t_{j}\left( i\Longrightarrow k\right) =0$$ (since $\left( i\Longrightarrow k\right) t_{j}=t_{j}\left( i\Longrightarrow k\right)$). This proves ([\[eq.lem.commute-with-tj-specific.comm\]](#eq.lem.commute-with-tj-specific.comm){reference-type="ref" reference="eq.lem.commute-with-tj-specific.comm"}). Thus, Lemma [\[lem.commute-with-tj-specific\]](#lem.commute-with-tj-specific){reference-type="ref" reference="lem.commute-with-tj-specific"} is completely proved.   ------------------------------------------------------------------------ # The identities $t_{i+1}t_{i}=\left( t_{i}-1\right) t_{i}% =t_{i}\left( t_{i}-1\right)$ and $\left[ t_{i},t_{i+1}\right] ^{2}=0$ ## The identity $t_{i+1}t_{i}=\left( t_{i}-1\right) t_{i}% =t_{i}\left( t_{i}-1\right)$ We are now ready to prove the first really surprising result: **Theorem 13** (). [\[thm.ti+1ti\]]{#thm.ti+1ti label="thm.ti+1ti"}Let $i\in\left[ n-1\right]$. Then, $$\begin{aligned} t_{i+1}t_{i} & =\left( t_{i}-1\right) t_{i}\label{eq.thm.ti+1ti.ti-1ti}\\ & =t_{i}\left( t_{i}-1\right) . 
\label{eq.thm.ti+1ti.titi-1}%\end{aligned}$$ **Proof.** From Proposition [\[prop.tl-as-sum\]](#prop.tl-as-sum){reference-type="ref" reference="prop.tl-as-sum"}, we obtain $$\begin{aligned} t_{i} & =\sum\limits_{w=i}^{n}\left( i\Longrightarrow w\right) \label{pf.thm.ti+1ti.ti=}\\ & =\underbrace{\left( i\Longrightarrow i\right) }_{=\operatorname*{id}% =1}+\sum\limits_{w=i+1}^{n}\left( i\Longrightarrow w\right) \ \ \ \ \ \ \ \ \ \ \left( \begin{array} [c]{c}% \text{here, we have split off the}\\ \text{addend for }w=i\text{ from the sum}% \end{array} \right) \nonumber\\ & =1+\sum\limits_{w=i+1}^{n}\left( i\Longrightarrow w\right) .\nonumber\end{aligned}$$ In other words, $$t_{i}-1=\sum\limits_{w=i+1}^{n}\left( i\Longrightarrow w\right) . \label{pf.thm.ti+1ti.ti-1=}%$$ Moreover, ([\[pf.thm.ti+1ti.ti=\]](#pf.thm.ti+1ti.ti=){reference-type="ref" reference="pf.thm.ti+1ti.ti="}) becomes $$t_{i}=\sum\limits_{w=i}^{n}\left( i\Longrightarrow w\right) =\sum\limits_{v=i}^{n}\left( i\Longrightarrow v\right) . 
\label{pf.thm.ti+1ti.ti=p}%$$ Also, Proposition [\[prop.tl-as-sum\]](#prop.tl-as-sum){reference-type="ref" reference="prop.tl-as-sum"} (applied to $\ell=i+1$) yields $$t_{i+1}=\sum\limits_{w=i+1}^{n}\left( i+1\Longrightarrow w\right) =\sum\limits_{v=i+1}% ^{n}\left( i+1\Longrightarrow v\right) .\nonumber$$ Multiplying this equality by ([\[pf.thm.ti+1ti.ti=\]](#pf.thm.ti+1ti.ti=){reference-type="ref" reference="pf.thm.ti+1ti.ti="}), we obtain $$\begin{aligned} t_{i+1}t_{i} & =\sum\limits_{v=i+1}^{n}\left( i+1\Longrightarrow v\right) \cdot\sum\limits_{w=i}^{n}\left( i\Longrightarrow w\right) \\ & =\sum\limits_{v=i+1}^{n}\ \ \underbrace{\sum\limits_{w=i}^{n}\left( i+1\Longrightarrow v\right) \left( i\Longrightarrow w\right) }_{\substack{=\sum\limits_{w=i}% ^{v-1}\left( i+1\Longrightarrow v\right) \left( i\Longrightarrow w\right) +\sum\limits_{w=v}^{n}\left( i+1\Longrightarrow v\right) \left( i\Longrightarrow w\right) \\\text{(since }i<v\leq n\text{)}}}\\ & =\sum\limits_{v=i+1}^{n}\left( \sum\limits_{w=i}^{v-1}\left( i+1\Longrightarrow v\right) \left( i\Longrightarrow w\right) +\sum\limits_{w=v}^{n}\left( i+1\Longrightarrow v\right) \left( i\Longrightarrow w\right) \right) \\ & =\sum\limits_{v=i+1}^{n}\ \ \sum\limits_{w=i}^{v-1}\underbrace{\left( i+1\Longrightarrow v\right) \left( i\Longrightarrow w\right) }_{\substack{=\left( i\Longrightarrow w+1\right) \left( i\Longrightarrow v\right) \\\text{(by Lemma \ref{lem.jpiq2}}\\\text{(since }w\leq v-1<v\text{ and thus }v>w\geq i\text{))}}}+\sum\limits_{v=i+1}^{n}\ \ \sum\limits_{w=v}^{n}\underbrace{\left( i+1\Longrightarrow v\right) \left( i\Longrightarrow w\right) }% _{\substack{=\left( i\Longrightarrow w\right) \left( i\Longrightarrow v-1\right) \\\text{(by Lemma \ref{lem.jpiq1}, applied to }j=i\\\text{(since }w\geq v\geq i+1>i\geq i\text{))}}}\\ & =\sum\limits_{v=i+1}^{n}\ \ \underbrace{\sum\limits_{w=i}^{v-1}\left( i\Longrightarrow w+1\right) \left( i\Longrightarrow v\right) }_{\substack{=\sum\limits_{w=i+1}% ^{v}\left( 
i\Longrightarrow w\right) \left( i\Longrightarrow v\right) \\\text{(here, we have substituted }w\text{ for }w+1\text{)}}% }+\underbrace{\sum\limits_{v=i+1}^{n}\ \ \sum\limits_{w=v}^{n}\left( i\Longrightarrow w\right) \left( i\Longrightarrow v-1\right) }_{\substack{=\sum\limits_{v=i}% ^{n-1}\ \ \sum\limits_{w=v+1}^{n}\left( i\Longrightarrow w\right) \left( i\Longrightarrow v\right) \\\text{(here, we have substituted }v\text{ for }v-1\text{)}}}\\ & =\sum\limits_{v=i+1}^{n}\ \ \sum\limits_{w=i+1}^{v}\left( i\Longrightarrow w\right) \left( i\Longrightarrow v\right) +\sum\limits_{v=i}^{n-1}\ \ \sum\limits_{w=v+1}% ^{n}\left( i\Longrightarrow w\right) \left( i\Longrightarrow v\right) .\end{aligned}$$ Comparing this with $$\begin{aligned} \left( t_{i}-1\right) t_{i} & =\sum\limits_{w=i+1}^{n}\left( i\Longrightarrow w\right) \cdot\sum\limits_{v=i}^{n}\left( i\Longrightarrow v\right) \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left( \text{by multiplying the two equalities (\ref{pf.thm.ti+1ti.ti-1=}) and (\ref{pf.thm.ti+1ti.ti=p}% )}\right) \\ & =\sum\limits_{v=i}^{n}\ \ \underbrace{\sum\limits_{w=i+1}^{n}\left( i\Longrightarrow w\right) \left( i\Longrightarrow v\right) }_{\substack{=\sum\limits_{w=i+1}% ^{v}\left( i\Longrightarrow w\right) \left( i\Longrightarrow v\right) +\sum\limits_{w=v+1}^{n}\left( i\Longrightarrow w\right) \left( i\Longrightarrow v\right) \\\text{(since }i\leq v\leq n\text{)}}}\\ & =\sum\limits_{v=i}^{n}\left( \sum\limits_{w=i+1}^{v}\left( i\Longrightarrow w\right) \left( i\Longrightarrow v\right) +\sum\limits_{w=v+1}^{n}\left( i\Longrightarrow w\right) \left( i\Longrightarrow v\right) \right) \\ & =\underbrace{\sum\limits_{v=i}^{n}\ \ \sum\limits_{w=i+1}^{v}\left( i\Longrightarrow w\right) \left( i\Longrightarrow v\right) }_{\substack{=\sum\limits_{w=i+1}% ^{i}\left( i\Longrightarrow w\right) \left( i\Longrightarrow i\right) +\sum\limits_{v=i+1}^{n}\ \ \sum\limits_{w=i+1}^{v}\left( i\Longrightarrow w\right) \left( i\Longrightarrow v\right) 
\\\text{(here, we have split off the addend for }v=i\text{ from the sum)}}}\\ & \ \ \ \ \ \ \ \ \ \ +\underbrace{\sum\limits_{v=i}^{n}\ \ \sum\limits_{w=v+1}^{n}\left( i\Longrightarrow w\right) \left( i\Longrightarrow v\right) }% _{\substack{=\sum\limits_{w=n+1}^{n}\left( i\Longrightarrow w\right) \left( i\Longrightarrow n\right) +\sum\limits_{v=i}^{n-1}\ \ \sum\limits_{w=v+1}^{n}\left( i\Longrightarrow w\right) \left( i\Longrightarrow v\right) \\\text{(here, we have split off the addend for }v=n\text{ from the sum)}}}\\ & =\underbrace{\sum\limits_{w=i+1}^{i}\left( i\Longrightarrow w\right) \left( i\Longrightarrow i\right) }_{=\left( \text{empty sum}\right) =0}% +\sum\limits_{v=i+1}^{n}\ \ \sum\limits_{w=i+1}^{v}\left( i\Longrightarrow w\right) \left( i\Longrightarrow v\right) \\ & \ \ \ \ \ \ \ \ \ \ +\underbrace{\sum\limits_{w=n+1}^{n}\left( i\Longrightarrow w\right) \left( i\Longrightarrow n\right) }_{=\left( \text{empty sum}\right) =0}+\sum\limits_{v=i}^{n-1}\ \ \sum\limits_{w=v+1}^{n}\left( i\Longrightarrow w\right) \left( i\Longrightarrow v\right) \\ & =\sum\limits_{v=i+1}^{n}\ \ \sum\limits_{w=i+1}^{v}\left( i\Longrightarrow w\right) \left( i\Longrightarrow v\right) +\sum\limits_{v=i}^{n-1}\ \ \sum\limits_{w=v+1}% ^{n}\left( i\Longrightarrow w\right) \left( i\Longrightarrow v\right) ,\end{aligned}$$ we obtain $t_{i+1}t_{i}=\left( t_{i}-1\right) t_{i}$. This proves ([\[eq.thm.ti+1ti.ti-1ti\]](#eq.thm.ti+1ti.ti-1ti){reference-type="ref" reference="eq.thm.ti+1ti.ti-1ti"}). From this, ([\[eq.thm.ti+1ti.titi-1\]](#eq.thm.ti+1ti.titi-1){reference-type="ref" reference="eq.thm.ti+1ti.titi-1"}) follows, since $\left( t_{i}-1\right) t_{i}=t_{i}^{2}-t_{i}=t_{i}\left( t_{i}-1\right)$. Thus, Theorem [\[thm.ti+1ti\]](#thm.ti+1ti){reference-type="ref" reference="thm.ti+1ti"} is proved.   ------------------------------------------------------------------------ ## The identity $\left[ t_{i},t_{i+1}\right] ^{2}=0$ **Corollary 14** (). 
[\[cor.titi+12=0\]]{#cor.titi+12=0 label="cor.titi+12=0"}Let $i\in\left[ n-1\right]$. Then, $$\left[ t_{i},t_{i+1}\right] =t_{i}\left( t_{i+1}-\left( t_{i}-1\right) \right) \label{eq.cor.titi+12=0.comm}%$$ and $$\left[ t_{i},t_{i+1}\right] t_{i}=0 \label{eq.cor.titi+12=0.prod}%$$ and $$\left[ t_{i},t_{i+1}\right] ^{2}=0. \label{eq.cor.titi+12=0.sq}%$$ **Proof.** The definition of a commutator yields $$\left[ t_{i},t_{i+1}\right] =t_{i}t_{i+1}-\underbrace{t_{i+1}t_{i}% }_{\substack{=t_{i}\left( t_{i}-1\right) \\\text{(by (\ref{eq.thm.ti+1ti.titi-1}))}}}=t_{i}t_{i+1}-t_{i}\left( t_{i}-1\right) =t_{i}\left( t_{i+1}-\left( t_{i}-1\right) \right) .$$ This proves the equality ([\[eq.cor.titi+12=0.comm\]](#eq.cor.titi+12=0.comm){reference-type="ref" reference="eq.cor.titi+12=0.comm"}). Multiplying both sides of this equality by $t_{i}$ on the right, we obtain $$\left[ t_{i},t_{i+1}\right] t_{i}=t_{i}\underbrace{\left( t_{i+1}-\left( t_{i}-1\right) \right) t_{i}}_{=t_{i+1}t_{i}-\left( t_{i}-1\right) t_{i}% }=t_{i}\underbrace{\left( t_{i+1}t_{i}-\left( t_{i}-1\right) t_{i}\right) }_{\substack{=0\\\text{(by (\ref{eq.thm.ti+1ti.ti-1ti}))}}}=0.$$ This proves ([\[eq.cor.titi+12=0.prod\]](#eq.cor.titi+12=0.prod){reference-type="ref" reference="eq.cor.titi+12=0.prod"}). Now, $$\left[ t_{i},t_{i+1}\right] ^{2}=\left[ t_{i},t_{i+1}\right] \underbrace{\left[ t_{i},t_{i+1}\right] }_{\substack{=t_{i}\left( t_{i+1}-\left( t_{i}-1\right) \right) \\\text{(by (\ref{eq.cor.titi+12=0.comm}))}}}=\underbrace{\left[ t_{i},t_{i+1}\right] t_{i}}_{\substack{=0\\\text{(by (\ref{eq.cor.titi+12=0.prod}))}}}\left( t_{i+1}-\left( t_{i}-1\right) \right) =0.$$ This proves ([\[eq.cor.titi+12=0.sq\]](#eq.cor.titi+12=0.sq){reference-type="ref" reference="eq.cor.titi+12=0.sq"}). Thus, Corollary [\[cor.titi+12=0\]](#cor.titi+12=0){reference-type="ref" reference="cor.titi+12=0"} is proved.   
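The identities just proved lend themselves to a quick brute-force sanity check in the group algebra (in the same experimental spirit as the SageMath computations mentioned in Remark 16 below). The following Python sketch is our own illustration, not part of the text: the helper names `cyc`, `t_shuffle`, `mul` are ours, and we assume the composition convention $\left( \sigma\tau\right) \left( x\right) =\sigma\left( \tau\left( x\right) \right)$, which is consistent with Proposition [\[prop.cycpq.sss\]](#prop.cycpq.sss){reference-type="ref" reference="prop.cycpq.sss"}. Elements of $\mathbb{Q}\left[ S_{n}\right]$ are modelled as dictionaries mapping one-line permutation tuples to coefficients, and we verify $t_{i+1}t_{i}=\left( t_{i}-1\right) t_{i}$ and $\left[ t_{i},t_{i+1}\right] ^{2}=0$ for all $i\in\left[ n-1\right]$ and small $n$:

```python
# Brute-force check of Theorem 13 and Corollary 14 in Q[S_n] for small n.
# Convention (an assumption of this sketch): a permutation is a tuple p in
# one-line notation with p[x-1] = p(x), and products satisfy (p*q)(x) = p(q(x)).

def compose(p, q):
    """Product p*q of two permutations: apply q first, then p."""
    return tuple(p[q[x] - 1] for x in range(len(q)))

def cyc(n, letters):
    """The cycle cyc_{letters}: letters[0] -> letters[1] -> ... -> letters[0]."""
    p = list(range(1, n + 1))
    for a, b in zip(letters, letters[1:] + letters[:1]):
        p[a - 1] = b
    return tuple(p)

def one(n):
    """The identity element of Q[S_n]."""
    return {tuple(range(1, n + 1)): 1}

def t_shuffle(n, ell):
    """The somewhere-to-below shuffle t_ell = sum_{w=ell}^{n} cyc_{ell,...,w}."""
    elt = {}
    for w in range(ell, n + 1):
        p = cyc(n, list(range(ell, w + 1)))
        elt[p] = elt.get(p, 0) + 1
    return elt

def mul(a, b):
    """Product in the group algebra (bilinear extension of compose)."""
    out = {}
    for p, cp in a.items():
        for q, cq in b.items():
            r = compose(p, q)
            out[r] = out.get(r, 0) + cp * cq
    return {k: v for k, v in out.items() if v != 0}

def sub(a, b):
    """Difference a - b in the group algebra."""
    out = dict(a)
    for q, cq in b.items():
        out[q] = out.get(q, 0) - cq
    return {k: v for k, v in out.items() if v != 0}

def check(n):
    """Verify t_{i+1} t_i = (t_i - 1) t_i and [t_i, t_{i+1}]^2 = 0 for all i."""
    for i in range(1, n):
        ti, ti1 = t_shuffle(n, i), t_shuffle(n, i + 1)
        assert mul(ti1, ti) == mul(sub(ti, one(n)), ti)   # Theorem 13
        comm = sub(mul(ti, ti1), mul(ti1, ti))            # [t_i, t_{i+1}]
        assert mul(comm, comm) == {}                      # Corollary 14
    return True

for n in range(2, 6):
    assert check(n)
```

Since the identities are linear in the coefficients, a check over $\mathbb{Q}$ (here: exact integer arithmetic) for one $n$ at a time suffices; of course, this only confirms small cases and is no substitute for the proofs above.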
------------------------------------------------------------------------ # [\[sec.t3t1\]]{#sec.t3t1 label="sec.t3t1"}The identities $t_{i+2}\left( t_{i}-1\right) =\left( t_{i}-1\right) \left( t_{i+1}-1\right)$ and $\left[ t_{i},t_{i+2}\right] \left( t_{i}-1\right) =t_{i+1}\left[ t_{i}% ,t_{i+1}\right]$ ## The identity $t_{i+2}\left( t_{i}-1\right) =\left( t_{i}-1\right) \left( t_{i+1}-1\right)$ The next theorem is a "next-level" analogue of Theorem [\[thm.ti+1ti\]](#thm.ti+1ti){reference-type="ref" reference="thm.ti+1ti"}: **Theorem 15** (). [\[thm.ti+2ti\]]{#thm.ti+2ti label="thm.ti+2ti"}Let $i\in\left[ n-2\right]$. Then, $$t_{i+2}\left( t_{i}-1\right) =\left( t_{i}-1\right) \left( t_{i+1}% -1\right) .$$ **Proof.** From $i\in\left[ n-2\right]$, we obtain $i+1\in\left[ 2,n-1\right] \subseteq\left[ n-1\right]$. Hence, ([\[eq.thm.ti+1ti.titi-1\]](#eq.thm.ti+1ti.titi-1){reference-type="ref" reference="eq.thm.ti+1ti.titi-1"}) (applied to $i+1$ instead of $i$) yields $t_{\left( i+1\right) +1}t_{i+1}=t_{i+1}\left( t_{i+1}-1\right)$. In view of $\left( i+1\right) +1=i+2$, we can rewrite this as $$t_{i+2}t_{i+1}=t_{i+1}\left( t_{i+1}-1\right) . \label{pf.thm.ti+2ti.1}%$$ Furthermore, Corollary [\[cor.tl-via-tl+1\]](#cor.tl-via-tl+1){reference-type="ref" reference="cor.tl-via-tl+1"} (applied to $\ell=i$) yields $t_{i}=1+s_{i}t_{i+1}$ (since $i\in\left[ n-2\right] \subseteq\left[ n-1\right]$). Hence, $$t_{i}-1=s_{i}t_{i+1}. \label{pf.thm.ti+2ti.2}%$$ The definition of $\left( i\Longrightarrow i+1\right)$ yields $\left( i\Longrightarrow i+1\right) =\operatorname*{cyc}\nolimits_{i,i+1,\ldots ,i+1}=\operatorname*{cyc}\nolimits_{i,i+1}=s_{i}$.
However, ([\[eq.lem.commute-with-tj-specific.ab=ba\]](#eq.lem.commute-with-tj-specific.ab=ba){reference-type="ref" reference="eq.lem.commute-with-tj-specific.ab=ba"}) (applied to $k=i+1$ and $j=i+2$) yields $\left( i\Longrightarrow i+1\right) t_{i+2}=t_{i+2}\left( i\Longrightarrow i+1\right)$. In view of $\left( i\Longrightarrow i+1\right) =s_{i}$, we can rewrite this as $s_{i}t_{i+2}=t_{i+2}s_{i}$. In other words, $t_{i+2}s_{i}=s_{i}t_{i+2}$. Now, $$t_{i+2}\underbrace{\left( t_{i}-1\right) }_{\substack{=s_{i}t_{i+1}% \\\text{(by (\ref{pf.thm.ti+2ti.2}))}}}=\underbrace{t_{i+2}s_{i}}% _{=s_{i}t_{i+2}}t_{i+1}=s_{i}\underbrace{t_{i+2}t_{i+1}}_{\substack{=t_{i+1}% \left( t_{i+1}-1\right) \\\text{(by (\ref{pf.thm.ti+2ti.1}))}}% }=\underbrace{s_{i}t_{i+1}}_{\substack{=t_{i}-1\\\text{(by (\ref{pf.thm.ti+2ti.2}))}}}\left( t_{i+1}-1\right) =\left( t_{i}-1\right) \left( t_{i+1}-1\right) .$$ Thus, Theorem [\[thm.ti+2ti\]](#thm.ti+2ti){reference-type="ref" reference="thm.ti+2ti"} is proved.   ------------------------------------------------------------------------ **Remark 16** (). [\[rmk.ti+kti\]]{#rmk.ti+kti label="rmk.ti+kti"}The similarity between Theorem [\[thm.ti+1ti\]](#thm.ti+1ti){reference-type="ref" reference="thm.ti+1ti"} and Theorem [\[thm.ti+2ti\]](#thm.ti+2ti){reference-type="ref" reference="thm.ti+2ti"} might suggest that the two theorems are the first two instances of a general identity of the form $t_{i+k}\left( t_{i}-\ell\right) =\left( t_{i}-m\right) \left( \text{something}\right)$ for certain integers $\ell$ and $m$. Unfortunately, such an identity most likely does not exist for $k=3$. Indeed, using SageMath, we have verified that for $n=6$ and $\mathbf{k}=\mathbb{Q}$, the product $t_{4}\left( t_{1}-\ell\right)$ is not a right multiple of $t_{1}-m$ for any $m\in\left\{ 0,1,2,3,4,6\right\}$ and $\ell\in\left[ 0,12\right]$. 
(The restriction $m\in\left\{ 0,1,2,3,4,6\right\}$ ensures that $t_{1}-m$ is not invertible; otherwise, the claim would be trivial and uninteresting.) We also observe that $t_{4}t_{1}$ does not belong to the $\mathbb{Q}$-linear span of the elements $1$, $t_{i}$ and $t_{i}t_{j}$ for $i\leq j$ when $n=5$. This is another piece of evidence suggesting that the pattern of Theorems [\[thm.ti+1ti\]](#thm.ti+1ti){reference-type="ref" reference="thm.ti+1ti"} and [\[thm.ti+2ti\]](#thm.ti+2ti){reference-type="ref" reference="thm.ti+2ti"} does not continue. Generally, for $\mathbf{k}=\mathbb{Q}$, the span of all products of the form $t_{i}t_{j}$ with $i,j\in\left[ n\right]$ inside $\mathbb{Q}\left[ S_{n}\right]$ seems to have dimension $n^{2}-3n+4$ (verified using SageMath for all $n\leq14$). The discrepancy between this dimension and the naive maximum guess $n^{2}$ is fully explained by the $n-1$ identities $t_{i+1}% t_{i}=t_{i}^{2}-t_{i}$ from Theorem [\[thm.ti+1ti\]](#thm.ti+1ti){reference-type="ref" reference="thm.ti+1ti"}, the $n-2$ identities $t_{i+2}t_{i}-t_{i+2}=t_{i}t_{i+1}-t_{i}-t_{i+1}+1$ from Theorem [\[thm.ti+2ti\]](#thm.ti+2ti){reference-type="ref" reference="thm.ti+2ti"}, and the $n-1$ obvious identities $t_{i}t_{n}=t_{n}t_{i}$ that come from $t_{n}=1$ (assuming that these $3n-4$ identities are linearly independent). This suggests that Theorem [\[thm.ti+1ti\]](#thm.ti+1ti){reference-type="ref" reference="thm.ti+1ti"} and Theorem [\[thm.ti+2ti\]](#thm.ti+2ti){reference-type="ref" reference="thm.ti+2ti"} exhaust the interesting quadratic identities between the $t_{i}$. ## The identity $\left[ t_{i},t_{i+2}\right] \left( t_{i}% -1\right) =t_{i+1}\left[ t_{i},t_{i+1}\right]$ **Corollary 17** (). [\[cor.titi+2ti-1\]]{#cor.titi+2ti-1 label="cor.titi+2ti-1"}Let $i\in\left[ n-2\right]$. 
Then, $$\left[ t_{i},t_{i+2}\right] \left( t_{i}-1\right) =t_{i+1}\left[ t_{i},t_{i+1}\right] .$$ **Proof.** From $i\in\left[ n-2\right]$, we obtain $i+1\in\left[ 2,n-1\right] \subseteq\left[ n-1\right]$. Thus, ([\[eq.thm.ti+1ti.titi-1\]](#eq.thm.ti+1ti.titi-1){reference-type="ref" reference="eq.thm.ti+1ti.titi-1"}) (applied to $i+1$ instead of $i$) yields $t_{\left( i+1\right) +1}t_{i+1}=t_{i+1}\left( t_{i+1}-1\right)$. In view of $\left( i+1\right) +1=i+2$, we can rewrite this as $$t_{i+2}t_{i+1}=t_{i+1}\left( t_{i+1}-1\right) . \label{pf.cor.titi+2ti-1.1}%$$ However, we have $$\begin{aligned} t_{i}\underbrace{t_{i+2}\left( t_{i}-1\right) }_{\substack{=\left( t_{i}-1\right) \left( t_{i+1}-1\right) \\\text{(by Theorem \ref{thm.ti+2ti}% )}}} & =\underbrace{t_{i}\left( t_{i}-1\right) }_{\substack{=t_{i+1}% t_{i}\\\text{(by (\ref{eq.thm.ti+1ti.titi-1}))}}}\left( t_{i+1}-1\right) =t_{i+1}t_{i}\left( t_{i+1}-1\right) \nonumber\\ & =t_{i+1}t_{i}t_{i+1}-t_{i+1}t_{i} \label{pf.cor.titi+2ti-1.2}%\end{aligned}$$ and $$\begin{aligned} t_{i+2}\underbrace{t_{i}\left( t_{i}-1\right) }_{\substack{=t_{i+1}% t_{i}\\\text{(by (\ref{eq.thm.ti+1ti.titi-1}))}}} & =\underbrace{t_{i+2}% t_{i+1}}_{\substack{=t_{i+1}\left( t_{i+1}-1\right) \\\text{(by (\ref{pf.cor.titi+2ti-1.1}))}}}t_{i}=t_{i+1}\left( t_{i+1}-1\right) t_{i}\nonumber\\ & =t_{i+1}t_{i+1}t_{i}-t_{i+1}t_{i}. \label{pf.cor.titi+2ti-1.3}%\end{aligned}$$ Now, the definition of a commutator yields $\left[ t_{i},t_{i+1}\right] =t_{i}t_{i+1}-t_{i+1}t_{i}$ and $\left[ t_{i},t_{i+2}\right] =t_{i}% t_{i+2}-t_{i+2}t_{i}$. 
Hence, $$\begin{aligned} \underbrace{\left[ t_{i},t_{i+2}\right] }_{=t_{i}t_{i+2}-t_{i+2}t_{i}% }\left( t_{i}-1\right) & =\left( t_{i}t_{i+2}-t_{i+2}t_{i}\right) \left( t_{i}-1\right) \\ & =t_{i}t_{i+2}\left( t_{i}-1\right) -t_{i+2}t_{i}\left( t_{i}-1\right) \\ & =\left( t_{i+1}t_{i}t_{i+1}-t_{i+1}t_{i}\right) -\left( t_{i+1}% t_{i+1}t_{i}-t_{i+1}t_{i}\right) \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left( \text{here, we subtracted the equality (\ref{pf.cor.titi+2ti-1.3}) from (\ref{pf.cor.titi+2ti-1.2}% )}\right) \\ & =t_{i+1}t_{i}t_{i+1}-t_{i+1}t_{i+1}t_{i}=t_{i+1}\underbrace{\left( t_{i}t_{i+1}-t_{i+1}t_{i}\right) }_{=\left[ t_{i},t_{i+1}\right] }% =t_{i+1}\left[ t_{i},t_{i+1}\right] .\end{aligned}$$ This proves Corollary [\[cor.titi+2ti-1\]](#cor.titi+2ti-1){reference-type="ref" reference="cor.titi+2ti-1"}.   ------------------------------------------------------------------------ # The identity $\left( 1+s_{j}\right) \left[ t_{i},t_{j}\right] =0$ for all $i\leq j$ ## The identity $\left( 1+s_{j}\right) \left[ t_{j-1}% ,t_{j}\right] =0$ We shall next prove the following: **Lemma 18** (). [\[lem.1+si+1titi+1\]]{#lem.1+si+1titi+1 label="lem.1+si+1titi+1"}Let $i\in\left[ n-2\right]$. Then, $$\left( 1+s_{i+1}\right) \left[ t_{i},t_{i+1}\right] =0.$$ **Proof.** Set $a:=t_{i}$ and $b:=t_{i+1}$. From ([\[eq.si.invol\]](#eq.si.invol){reference-type="ref" reference="eq.si.invol"}), we obtain $s_{i+1}^{2}=\operatorname*{id}=1$. Hence, $s_{i+1}^{2}-1=0$. The definition of a commutator yields $$\begin{aligned} \left[ a-1,\ b-1\right] & =\left( a-1\right) \left( b-1\right) -\left( b-1\right) \left( a-1\right) \nonumber\\ & =\left( ab-a-b+1\right) -\left( ba-b-a+1\right) \nonumber\\ & =ab-ba=\left[ a,b\right] . \label{pf.lem.1+si+1titi+1.-1}%\end{aligned}$$ From $i\in\left[ n-2\right]$, we obtain $i+1\in\left[ 2,n-1\right] \subseteq\left[ n-1\right]$. 
Hence, Corollary [\[cor.tl-via-tl+1\]](#cor.tl-via-tl+1){reference-type="ref" reference="cor.tl-via-tl+1"} (applied to $\ell=i+1$) yields $t_{i+1}=1+s_{i+1}t_{\left( i+1\right) +1}=1+s_{i+1}t_{i+2}$ (since $\left( i+1\right) +1=i+2$). Hence, $b=t_{i+1}=1+s_{i+1}t_{i+2}$. Therefore, $b-1=s_{i+1}t_{i+2}$. Thus, $$s_{i+1}\underbrace{\left( b-1\right) }_{=s_{i+1}t_{i+2}}=\underbrace{s_{i+1}% s_{i+1}}_{\substack{=s_{i+1}^{2}=1}}t_{i+2}=t_{i+2}. \label{pf.lem.1+si+1titi+1.=ti+2}%$$ However, Theorem [\[thm.ti+2ti\]](#thm.ti+2ti){reference-type="ref" reference="thm.ti+2ti"} yields $$t_{i+2}\left( t_{i}-1\right) =\left( t_{i}-1\right) \left( t_{i+1}% -1\right) .$$ In view of $a=t_{i}$ and $b=t_{i+1}$, we can rewrite this as $$t_{i+2}\left( a-1\right) =\left( a-1\right) \left( b-1\right) .$$ Hence, $$\left( a-1\right) \left( b-1\right) =\underbrace{t_{i+2}}% _{\substack{=s_{i+1}\left( b-1\right) \\\text{(by (\ref{pf.lem.1+si+1titi+1.=ti+2}))}}}\left( a-1\right) =s_{i+1}\left( b-1\right) \left( a-1\right) . 
\label{pf.lem.1+si+1titi+1.3}%$$ Now, ([\[pf.lem.1+si+1titi+1.-1\]](#pf.lem.1+si+1titi+1.-1){reference-type="ref" reference="pf.lem.1+si+1titi+1.-1"}) becomes $$\begin{aligned} \left[ a,b\right] & =\left[ a-1,\ b-1\right] =\underbrace{\left( a-1\right) \left( b-1\right) }_{\substack{=s_{i+1}\left( b-1\right) \left( a-1\right) \\\text{(by (\ref{pf.lem.1+si+1titi+1.3}))}}}-\left( b-1\right) \left( a-1\right) \\ & =s_{i+1}\left( b-1\right) \left( a-1\right) -\left( b-1\right) \left( a-1\right) \\ & =\left( s_{i+1}-1\right) \left( b-1\right) \left( a-1\right) .\end{aligned}$$ Multiplying both sides of this equality by $1+s_{i+1}$ from the left, we obtain $$\left( 1+s_{i+1}\right) \left[ a,b\right] =\underbrace{\left( 1+s_{i+1}\right) \left( s_{i+1}-1\right) }_{\substack{=\left( s_{i+1}+1\right) \left( s_{i+1}-1\right) \\=s_{i+1}^{2}-1=0}}\left( b-1\right) \left( a-1\right) =0.$$ In view of $a=t_{i}$ and $b=t_{i+1}$, we can rewrite this as $\left( 1+s_{i+1}\right) \left[ t_{i},t_{i+1}\right] =0$. This proves Lemma [\[lem.1+si+1titi+1\]](#lem.1+si+1titi+1){reference-type="ref" reference="lem.1+si+1titi+1"}.   ------------------------------------------------------------------------ The following is just a restatement of Lemma [\[lem.1+si+1titi+1\]](#lem.1+si+1titi+1){reference-type="ref" reference="lem.1+si+1titi+1"}: **Lemma 19** (). [\[lem.1+sjtj-1tj\]]{#lem.1+sjtj-1tj label="lem.1+sjtj-1tj"}Let $j\in\left[ 2,n-1\right]$. Then, $$\left( 1+s_{j}\right) \left[ t_{j-1},t_{j}\right] =0.$$ **Proof.** We have $j-1\in\left[ n-2\right]$ (since $j\in\left[ 2,n-1\right]$). Hence, Lemma [\[lem.1+si+1titi+1\]](#lem.1+si+1titi+1){reference-type="ref" reference="lem.1+si+1titi+1"} (applied to $i=j-1$) yields $$\left( 1+s_{\left( j-1\right) +1}\right) \left[ t_{j-1},t_{\left( j-1\right) +1}\right] =0.$$ In view of $\left( j-1\right) +1=j$, we can rewrite this as $\left( 1+s_{j}\right) \left[ t_{j-1},t_{j}\right] =0$. 
This proves Lemma [\[lem.1+sjtj-1tj\]](#lem.1+sjtj-1tj){reference-type="ref" reference="lem.1+sjtj-1tj"}.   ------------------------------------------------------------------------ ## Expressing $\left[ t_{i},t_{j}\right]$ via $\left[ t_{j-1},t_{j}\right]$ The following lemma is useful for reducing questions about $\left[ t_{i},t_{j}\right]$ to questions about $\left[ t_{j-1},t_{j}\right]$: **Lemma 20** (). [\[lem.ti-to-tj-1\]]{#lem.ti-to-tj-1 label="lem.ti-to-tj-1"}Let $i,j\in\left[ n\right]$ satisfy $i<j$. Then: 1. We have $$\left[ t_{i},t_{j}\right] =\left[ s_{i}s_{i+1}\cdots s_{j-1},t_{j}\right] t_{j}.$$ 2. We have $$\left[ t_{i},t_{j}\right] =\left( s_{i}s_{i+1}\cdots s_{j-2}\right) \left[ t_{j-1},t_{j}\right] .$$ **Proof.** A well-known identity for commutators says that if $R$ is a ring, then any three elements $a,b,c\in R$ satisfy $$\left[ ab,c\right] =\left[ a,c\right] b+a\left[ b,c\right] . \label{pf.lem.ti-to-tj-1.b.0}%$$ Hence, if $R$ is a ring, then any two elements $a,b\in R$ satisfy $$\begin{aligned} \left[ ab,b\right] & =\left[ a,b\right] b+a\underbrace{\left[ b,b\right] }_{\substack{=bb-bb\\=0}}\ \ \ \ \ \ \ \ \ \ \left( \text{by (\ref{pf.lem.ti-to-tj-1.b.0}), applied to }c=b\right) \nonumber\\ & =\left[ a,b\right] b+a0=\left[ a,b\right] b. 
\label{pf.lem.ti-to-tj-1.b.0abb}%\end{aligned}$$ **(a)** Proposition [\[prop.tl-as-sum\]](#prop.tl-as-sum){reference-type="ref" reference="prop.tl-as-sum"} yields $$\begin{aligned} t_{i} & =\sum\limits_{w=i}^{n}\left( i\Longrightarrow w\right) =\sum\limits_{k=i}% ^{n}\left( i\Longrightarrow k\right) \\ & =\sum\limits_{k=i}^{j-1}\left( i\Longrightarrow k\right) +\sum\limits_{k=j}% ^{n}\underbrace{\left( i\Longrightarrow k\right) }_{\substack{=\left( i\Longrightarrow j\right) \left( j\Longrightarrow k\right) \\\text{(by Lemma \ref{lem.cyc-ijk},}\\\text{since }i\leq j\leq k\text{)}}% }\ \ \ \ \ \ \ \ \ \ \left( \text{since }i<j\leq n\right) \\ & =\sum\limits_{k=i}^{j-1}\left( i\Longrightarrow k\right) +\underbrace{\sum\limits _{k=j}^{n}\left( i\Longrightarrow j\right) \left( j\Longrightarrow k\right) }_{=\left( i\Longrightarrow j\right) \sum\limits_{k=j}^{n}\left( j\Longrightarrow k\right) }\\ & =\sum\limits_{k=i}^{j-1}\left( i\Longrightarrow k\right) +\left( i\Longrightarrow j\right) \underbrace{\sum\limits_{k=j}^{n}\left( j\Longrightarrow k\right) }_{\substack{=\sum\limits_{w=j}^{n}\left( j\Longrightarrow w\right) =t_{j}\\\text{(since Proposition \ref{prop.tl-as-sum}}\\\text{yields }% t_{j}=\sum\limits_{w=j}^{n}\left( j\Longrightarrow w\right) \text{)}}}\\ & =\sum\limits_{k=i}^{j-1}\left( i\Longrightarrow k\right) +\left( i\Longrightarrow j\right) t_{j}.\end{aligned}$$ Thus, $$\begin{aligned} \left[ t_{i},t_{j}\right] & =\left[ \sum\limits_{k=i}^{j-1}\left( i\Longrightarrow k\right) +\left( i\Longrightarrow j\right) t_{j}% ,\ t_{j}\right] \\ & =\sum\limits_{k=i}^{j-1}\underbrace{\left[ \left( i\Longrightarrow k\right) ,\ t_{j}\right] }_{\substack{=0\\\text{(by (\ref{eq.lem.commute-with-tj-specific.comm}) (since }i\leq k<j\text{))}% }}+\left[ \left( i\Longrightarrow j\right) t_{j},\ t_{j}\right] \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left( \text{since the commutator }\left[ a,b\right] \text{ is bilinear in }a\text{ and }b\right) \\ & 
=\underbrace{\sum\limits_{k=i}^{j-1}0}_{=0}+\left[ \left( i\Longrightarrow j\right) t_{j},\ t_{j}\right] =\left[ \left( i\Longrightarrow j\right) t_{j},\ t_{j}\right] \\ & =\left[ \left( i\Longrightarrow j\right) ,t_{j}\right] t_{j}% \ \ \ \ \ \ \ \ \ \ \left( \text{by (\ref{pf.lem.ti-to-tj-1.b.0abb}), applied to }a=\left( i\Longrightarrow j\right) \text{ and }b=t_{j}\right) \\ & =\left[ s_{i}s_{i+1}\cdots s_{j-1},t_{j}\right] t_{j}%\end{aligned}$$ (since Proposition [\[prop.cycpq.sss\]](#prop.cycpq.sss){reference-type="ref" reference="prop.cycpq.sss"} yields $\left( i\Longrightarrow j\right) =s_{i}s_{i+1}\cdots s_{j-1}$). This proves Lemma [\[lem.ti-to-tj-1\]](#lem.ti-to-tj-1){reference-type="ref" reference="lem.ti-to-tj-1"} **(a)**. **(b)** Set $a:=s_{i}s_{i+1}\cdots s_{j-2}$ and $b:=s_{j-1}$ and $c:=t_{j}$. Thus, $$ab=\left( s_{i}s_{i+1}\cdots s_{j-2}\right) s_{j-1}=s_{i}s_{i+1}\cdots s_{j-1}. \label{pf.lem.ti-to-tj-1.b.ab}%$$ However, $i\leq j-1$ (since $i<j$). Hence, Proposition [\[prop.cycpq.sss\]](#prop.cycpq.sss){reference-type="ref" reference="prop.cycpq.sss"} (applied to $v=i$ and $w=j-1$) yields $\left( i\Longrightarrow j-1\right) =s_{i}s_{i+1}\cdots s_{\left( j-1\right) -1}=s_{i}s_{i+1}\cdots s_{j-2}=a$ (since $a=s_{i}s_{i+1}\cdots s_{j-2}$). Now, $i\leq j-1<j$. Hence, ([\[eq.lem.commute-with-tj-specific.comm\]](#eq.lem.commute-with-tj-specific.comm){reference-type="ref" reference="eq.lem.commute-with-tj-specific.comm"}) (applied to $k=j-1$) yields $\left[ \left( i\Longrightarrow j-1\right) ,\ t_{j}\right] =0$. In view of $\left( i\Longrightarrow j-1\right) =a$ and $t_{j}=c$, we can rewrite this as $\left[ a,c\right] =0$. Hence, ([\[pf.lem.ti-to-tj-1.b.0\]](#pf.lem.ti-to-tj-1.b.0){reference-type="ref" reference="pf.lem.ti-to-tj-1.b.0"}) becomes $$\left[ ab,c\right] =\underbrace{\left[ a,c\right] }_{=0}b+a\left[ b,c\right] =a\left[ b,c\right] . 
\label{pf.lem.ti-to-tj-1.b.abc}%$$ On the other hand, applying Lemma [\[lem.ti-to-tj-1\]](#lem.ti-to-tj-1){reference-type="ref" reference="lem.ti-to-tj-1"} **(a)** to $j-1$ instead of $j$, we obtain $$\left[ t_{j-1},t_{j}\right] =\left[ s_{j-1},t_{j}\right] t_{j}. \label{pf.lem.ti-to-tj-1.b.1}%$$ However, Lemma [\[lem.ti-to-tj-1\]](#lem.ti-to-tj-1){reference-type="ref" reference="lem.ti-to-tj-1"} **(a)** yields $$\begin{aligned} \left[ t_{i},t_{j}\right] & =\left[ \underbrace{s_{i}s_{i+1}\cdots s_{j-1}}_{\substack{=ab\\\text{(by (\ref{pf.lem.ti-to-tj-1.b.ab}))}% }},\underbrace{t_{j}}_{=c}\right] t_{j}=\underbrace{\left[ ab,c\right] }_{\substack{=a\left[ b,c\right] \\\text{(by (\ref{pf.lem.ti-to-tj-1.b.abc}% ))}}}t_{j}=\underbrace{a}_{=s_{i}s_{i+1}\cdots s_{j-2}}\left[ \underbrace{b}% _{=s_{j-1}},\underbrace{c}_{=t_{j}}\right] t_{j}\\ & =\left( s_{i}s_{i+1}\cdots s_{j-2}\right) \underbrace{\left[ s_{j-1},t_{j}\right] t_{j}}_{\substack{=\left[ t_{j-1},t_{j}\right] \\\text{(by (\ref{pf.lem.ti-to-tj-1.b.1}))}}}=\left( s_{i}s_{i+1}\cdots s_{j-2}\right) \left[ t_{j-1},t_{j}\right] .\end{aligned}$$ This proves Lemma [\[lem.ti-to-tj-1\]](#lem.ti-to-tj-1){reference-type="ref" reference="lem.ti-to-tj-1"} **(b)**.   ------------------------------------------------------------------------ **Corollary 21** (). [\[cor.titjtj-1\]]{#cor.titjtj-1 label="cor.titjtj-1"}Let $i\in\left[ n\right]$ and $j\in\left[ 2,n\right]$ be such that $i\leq j$. Then, $$\left[ t_{i},t_{j}\right] t_{j-1}=0.$$ **Proof.** If $i=j$, then this is obvious (because $i=j$ entails $\left[ t_{i}% ,t_{j}\right] =\left[ t_{j},t_{j}\right] =t_{j}t_{j}-t_{j}t_{j}=0$ and therefore $\underbrace{\left[ t_{i},t_{j}\right] }_{=0}t_{j-1}=0$). Hence, for the rest of this proof, we WLOG assume that $i\neq j$. Combining $i\leq j$ with $i\neq j$, we obtain $i<j$. 
Hence, Lemma [\[lem.ti-to-tj-1\]](#lem.ti-to-tj-1){reference-type="ref" reference="lem.ti-to-tj-1"} **(b)** yields $$\left[ t_{i},t_{j}\right] =\left( s_{i}s_{i+1}\cdots s_{j-2}\right) \left[ t_{j-1},t_{j}\right] .$$ On the other hand, $j-1\in\left[ n-1\right]$ (since $j\in\left[ 2,n\right]$). Thus, ([\[eq.cor.titi+12=0.prod\]](#eq.cor.titi+12=0.prod){reference-type="ref" reference="eq.cor.titi+12=0.prod"}) (applied to $j-1$ instead of $i$) yields $\left[ t_{j-1},t_{j-1+1}\right] t_{j-1}=0$. In other words, $\left[ t_{j-1},t_{j}\right] t_{j-1}=0$ (since $j-1+1=j$). Thus, $$\underbrace{\left[ t_{i},t_{j}\right] }_{=\left( s_{i}s_{i+1}\cdots s_{j-2}\right) \left[ t_{j-1},t_{j}\right] }t_{j-1}=\left( s_{i}% s_{i+1}\cdots s_{j-2}\right) \underbrace{\left[ t_{j-1},t_{j}\right] t_{j-1}}_{=0}=0.$$ This proves Corollary [\[cor.titjtj-1\]](#cor.titjtj-1){reference-type="ref" reference="cor.titjtj-1"}.   ------------------------------------------------------------------------ ## The identity $\left( 1+s_{j}\right) \left[ t_{i},t_{j}\right] =0$ for all $i\leq j$ We are now ready to prove the following surprising result: **Theorem 22** (). [\[thm.1+sjtitj\]]{#thm.1+sjtitj label="thm.1+sjtitj"}Let $i,j\in\left[ n-1\right]$ satisfy $i\leq j$. Then, $$\left( 1+s_{j}\right) \left[ t_{i},t_{j}\right] =0.$$ **Proof.** If $i=j$, then this is obvious (since $i=j$ entails $\left[ t_{i}% ,t_{j}\right] =\left[ t_{j},t_{j}\right] =0$). Hence, we WLOG assume that $i\neq j$. Thus, $i<j$ (since $i\leq j$). The transpositions $s_{i},s_{i+1},\ldots,s_{j-2}$ all commute with $s_{j}$ (by reflection locality, since the numbers $i,i+1,\ldots,j-2$ differ by more than $1$ from $j$). Thus, their product $s_{i}s_{i+1}\cdots s_{j-2}$ commutes with $s_{j}$ as well. 
In other words, $$s_{j}\left( s_{i}s_{i+1}\cdots s_{j-2}\right) =\left( s_{i}s_{i+1}\cdots s_{j-2}\right) s_{j}.$$ Thus, in $\mathbf{k}\left[ S_{n}\right]$, we have $$\begin{aligned} \left( 1+s_{j}\right) \left( s_{i}s_{i+1}\cdots s_{j-2}\right) & =s_{i}s_{i+1}\cdots s_{j-2}+\underbrace{s_{j}\left( s_{i}s_{i+1}\cdots s_{j-2}\right) }_{=\left( s_{i}s_{i+1}\cdots s_{j-2}\right) s_{j}% }\nonumber\\ & =s_{i}s_{i+1}\cdots s_{j-2}+\left( s_{i}s_{i+1}\cdots s_{j-2}\right) s_{j}\nonumber\\ & =\left( s_{i}s_{i+1}\cdots s_{j-2}\right) \left( 1+s_{j}\right) . \label{pf.thm.1+sjtitj.2}%\end{aligned}$$ However, Lemma [\[lem.ti-to-tj-1\]](#lem.ti-to-tj-1){reference-type="ref" reference="lem.ti-to-tj-1"} **(b)** yields $\left[ t_{i}% ,t_{j}\right] =\left( s_{i}s_{i+1}\cdots s_{j-2}\right) \left[ t_{j-1},t_{j}\right]$ (since $i<j$). Hence, $$\begin{aligned} \left( 1+s_{j}\right) \underbrace{\left[ t_{i},t_{j}\right] }_{=\left( s_{i}s_{i+1}\cdots s_{j-2}\right) \left[ t_{j-1},t_{j}\right] } & =\underbrace{\left( 1+s_{j}\right) \left( s_{i}s_{i+1}\cdots s_{j-2}% \right) }_{\substack{=\left( s_{i}s_{i+1}\cdots s_{j-2}\right) \left( 1+s_{j}\right) \\\text{(by (\ref{pf.thm.1+sjtitj.2}))}}}\left[ t_{j-1}% ,t_{j}\right] \\ & =\left( s_{i}s_{i+1}\cdots s_{j-2}\right) \underbrace{\left( 1+s_{j}\right) \left[ t_{j-1},t_{j}\right] }_{\substack{=0\\\text{(by Lemma \ref{lem.1+sjtj-1tj})}}}=0.\end{aligned}$$ This proves Theorem [\[thm.1+sjtitj\]](#thm.1+sjtitj){reference-type="ref" reference="thm.1+sjtitj"}.   ------------------------------------------------------------------------ **Corollary 23** (). [\[cor.tn-1titn-1=0\]]{#cor.tn-1titn-1=0 label="cor.tn-1titn-1=0"}Let $n\geq2$ and $i\in\left[ n\right]$. Then, $t_{n-1}\left[ t_{i},t_{n-1}\right] =0$. **Proof.** This is true for $i=n$ (because $t_{n}=1$ and thus $\left[ t_{n}% ,t_{n-1}\right] =\left[ 1,t_{n-1}\right] =1t_{n-1}-t_{n-1}1=t_{n-1}% -t_{n-1}=0$ and therefore $t_{n-1}\underbrace{\left[ t_{n},t_{n-1}\right] }_{=0}=0$). 
Hence, we WLOG assume that $i\neq n$. Therefore, $i\in\left[ n\right] \setminus\left\{ n\right\} =\left[ n-1\right]$. Also, $n-1\in\left[ n-1\right]$ (since $n\geq2$). The definition of $t_{n-1}$ yields $t_{n-1}=\underbrace{\operatorname*{cyc}% \nolimits_{n-1}}_{=1}+\underbrace{\operatorname*{cyc}\nolimits_{n-1,n}% }_{=s_{n-1}}=1+s_{n-1}$. However, Theorem [\[thm.1+sjtitj\]](#thm.1+sjtitj){reference-type="ref" reference="thm.1+sjtitj"} (applied to $j=n-1$) yields $\left( 1+s_{n-1}\right) \left[ t_{i},t_{n-1}\right] =0$. In view of $t_{n-1}% =1+s_{n-1}$, we can rewrite this as $t_{n-1}\left[ t_{i},t_{n-1}\right] =0$. This proves Corollary [\[cor.tn-1titn-1=0\]](#cor.tn-1titn-1=0){reference-type="ref" reference="cor.tn-1titn-1=0"}.   ------------------------------------------------------------------------ # The identity $\left[ t_{i},t_{j}\right] ^{\left\lceil \left( n-j\right) /2\right\rceil +1}=0$ for all $i,j\in\left[ n\right]$ ## The elements $s_{k}^{+}$ and the left ideals $H_{k,j}$ We now introduce two crucial notions for the proof of our first main theorem: **Definition 24** (). [\[def.si+\]]{#def.si+ label="def.si+"}We set $\mathbf{A}:=\mathbf{k}\left[ S_{n}\right]$. Furthermore, for any $i\in\left[ n-1\right]$, we set $$s_{i}^{+}:=s_{i}+1\in\mathbf{A}.$$ We also set $s_{i}^{+}:=1\in\mathbf{A}$ for all integers $i\notin\left[ n-1\right]$. Thus, $s_{i}^{+}$ is defined for all integers $i$. **Definition 25** (). [\[def.Hkj\]]{#def.Hkj label="def.Hkj"}Let $k$ and $j$ be two integers. Then, we define $$H_{k,j}:=\sum\limits_{\substack{u\in\left[ j,k\right] ;\\u\equiv k\operatorname{mod}2}}\mathbf{A}s_{u}^{+}.$$ This is a left ideal of $\mathbf{A}$. Note that $$H_{k,j}=0\ \ \ \ \ \ \ \ \ \ \text{whenever }k<j. \label{eq.def.Hkj.=0}%$$ **Example 26** (). 
We have $$\begin{aligned} H_{7,3} & =\sum\limits_{\substack{u\in\left[ 3,7\right] ;\\u\equiv 7\operatorname{mod}2}}\mathbf{A}s_{u}^{+}=\mathbf{A}s_{3}^{+}+\mathbf{A}% s_{5}^{+}+\mathbf{A}s_{7}^{+}\ \ \ \ \ \ \ \ \ \ \text{and}\\ H_{7,2} & =\sum\limits_{\substack{u\in\left[ 2,7\right] ;\\u\equiv 7\operatorname{mod}2}}\mathbf{A}s_{u}^{+}=\mathbf{A}s_{3}^{+}+\mathbf{A}% s_{5}^{+}+\mathbf{A}s_{7}^{+},\end{aligned}$$ so that $H_{7,2}=H_{7,3}$. Similarly, $H_{7,4}=H_{7,5}=\mathbf{A}s_{5}% ^{+}+\mathbf{A}s_{7}^{+}$ and $H_{7,6}=H_{7,7}=\mathbf{A}s_{7}^{+}$. Let us prove some basic properties of the left ideals $H_{k,j}$: **Remark 27** (). [\[rmk.Hnj\]]{#rmk.Hnj label="rmk.Hnj"}Let $k$ be an integer such that $k\notin\left[ n-1\right]$. Let $j\in\left[ n\right]$ satisfy $j\leq k$. Then, $H_{k,j}=\mathbf{A}$. **Proof.** Since $k\notin\left[ n-1\right]$, we have $s_{k}^{+}=1$ (by the definition of $s_{k}^{+}$). Also, $k\in\left[ j,k\right]$ (since $j\leq k$). Recall that $H_{k,j}$ is defined as the sum $\sum\limits_{\substack{u\in\left[ j,k\right] ;\\u\equiv k\operatorname{mod}2}}\mathbf{A}s_{u}^{+}$. But this sum contains the addend $\mathbf{A}s_{k}^{+}$ (since $k\in\left[ j,k\right]$ and $k\equiv k\operatorname{mod}2$). Hence, $\sum\limits_{\substack{u\in\left[ j,k\right] ;\\u\equiv k\operatorname{mod}2}}\mathbf{A}s_{u}^{+}% \supseteq\mathbf{A}\underbrace{s_{k}^{+}}_{=1}=\mathbf{A}1=\mathbf{A}$. Now, $$H_{k,j}=\sum\limits_{\substack{u\in\left[ j,k\right] ;\\u\equiv k\operatorname{mod}% 2}}\mathbf{A}s_{u}^{+}\supseteq\mathbf{A},$$ so that $H_{k,j}=\mathbf{A}$. This proves Remark [\[rmk.Hnj\]](#rmk.Hnj){reference-type="ref" reference="rmk.Hnj"}.   ------------------------------------------------------------------------ **Lemma 28** (). [\[lem.Hkj.some-basics\]]{#lem.Hkj.some-basics label="lem.Hkj.some-basics"}Let $k$ and $j$ be two integers. Then, $H_{k,j}\subseteq H_{k,j-1}$. 
**Proof.** Definition [\[def.Hkj\]](#def.Hkj){reference-type="ref" reference="def.Hkj"} yields $H_{k,j}=\sum\limits_{\substack{u\in\left[ j,k\right] ;\\u\equiv k\operatorname{mod}2}}\mathbf{A}s_{u}^{+}$ and $H_{k,j-1}=\sum\limits_{\substack{u\in\left[ j-1,k\right] ;\\u\equiv k\operatorname{mod}2}}\mathbf{A}s_{u}^{+}$. But clearly, any addend of the former sum is an addend of the latter sum as well (since each $u\in\left[ j,k\right]$ satisfies $u\in\left[ j,k\right] \subseteq\left[ j-1,k\right]$). Thus, the former sum is a subset of the latter. In other words, $H_{k,j}\subseteq H_{k,j-1}$. This proves Lemma [\[lem.Hkj.some-basics\]](#lem.Hkj.some-basics){reference-type="ref" reference="lem.Hkj.some-basics"}.   ------------------------------------------------------------------------ **Lemma 29** (). [\[lem.Hkj.some-basics-2\]]{#lem.Hkj.some-basics-2 label="lem.Hkj.some-basics-2"}Let $v$, $w$ and $j$ be three integers such that $v\leq w$ and $v\equiv w\operatorname{mod}2$. Then, $H_{v,j}\subseteq H_{w,j}$. **Proof.** Definition [\[def.Hkj\]](#def.Hkj){reference-type="ref" reference="def.Hkj"} yields $$\begin{aligned} H_{v,j} & =\sum\limits_{\substack{u\in\left[ j,v\right] ;\\u\equiv v\operatorname{mod}2}}\mathbf{A}s_{u}^{+}\ \ \ \ \ \ \ \ \ \ \text{and}% \label{pf.lem.Hkj.some-basics-2.1}\\ H_{w,j} & =\sum\limits_{\substack{u\in\left[ j,w\right] ;\\u\equiv w\operatorname{mod}2}}\mathbf{A}s_{u}^{+}. \label{pf.lem.Hkj.some-basics-2.2}%\end{aligned}$$ However, each $u\in\left[ j,v\right]$ satisfying $u\equiv v\operatorname{mod}2$ is also an element of $\left[ j,w\right]$ (since $u\leq v\leq w$) and satisfies $u\equiv w\operatorname{mod}2$ (since $u\equiv v\equiv w\operatorname{mod}2$). Thus, any addend of the sum $\sum\limits _{\substack{u\in\left[ j,v\right] ;\\u\equiv v\operatorname{mod}% 2}}\mathbf{A}s_{u}^{+}$ is also an addend of the sum $\sum\limits_{\substack{u\in \left[ j,w\right] ;\\u\equiv w\operatorname{mod}2}}\mathbf{A}s_{u}^{+}$. 
Therefore, the former sum is a subset of the latter sum. In other words, $\sum\limits_{\substack{u\in\left[ j,v\right] ;\\u\equiv v\operatorname{mod}% 2}}\mathbf{A}s_{u}^{+}\subseteq\sum\limits_{\substack{u\in\left[ j,w\right] ;\\u\equiv w\operatorname{mod}2}}\mathbf{A}s_{u}^{+}$. In view of ([\[pf.lem.Hkj.some-basics-2.1\]](#pf.lem.Hkj.some-basics-2.1){reference-type="ref" reference="pf.lem.Hkj.some-basics-2.1"}) and ([\[pf.lem.Hkj.some-basics-2.2\]](#pf.lem.Hkj.some-basics-2.2){reference-type="ref" reference="pf.lem.Hkj.some-basics-2.2"}), we can rewrite this as $H_{v,j}\subseteq H_{w,j}$. This proves Lemma [\[lem.Hkj.some-basics-2\]](#lem.Hkj.some-basics-2){reference-type="ref" reference="lem.Hkj.some-basics-2"}.   ------------------------------------------------------------------------ **Lemma 30** (). [\[lem.Hkj.some-basics-3\]]{#lem.Hkj.some-basics-3 label="lem.Hkj.some-basics-3"}Let $k$ and $j$ be two integers such that $k\equiv j\operatorname{mod}2$. Then, $H_{k,j-1}=H_{k,j}$. **Proof.** Definition [\[def.Hkj\]](#def.Hkj){reference-type="ref" reference="def.Hkj"} yields $$\begin{aligned} H_{k,j} & =\sum\limits_{\substack{u\in\left[ j,k\right] ;\\u\equiv k\operatorname{mod}2}}\mathbf{A}s_{u}^{+}\ \ \ \ \ \ \ \ \ \ \text{and}% \label{pf.lem.Hkj.some-basics-3.1}\\ H_{k,j-1} & =\sum\limits_{\substack{u\in\left[ j-1,k\right] ;\\u\equiv k\operatorname{mod}2}}\mathbf{A}s_{u}^{+}. \label{pf.lem.Hkj.some-basics-3.2}%\end{aligned}$$ However, each element $u\in\left[ j,k\right]$ satisfying $u\equiv k\operatorname{mod}2$ is also an element of $\left[ j-1,k\right]$ (since $u\in\left[ j,k\right] \subseteq\left[ j-1,k\right]$). Conversely, each element $u\in\left[ j-1,k\right]$ satisfying $u\equiv k\operatorname{mod}2$ is also an element of $\left[ j,k\right]$ (since otherwise, it would equal $j-1$, so that we would have $j-1=u\equiv k\equiv j\operatorname{mod}2$, but this would contradict $j-1\not \equiv j\operatorname{mod}2$). 
Therefore, the elements $u\in\left[ j-1,k\right]$ satisfying $u\equiv k\operatorname{mod}% 2$ are precisely the elements $u\in\left[ j,k\right]$ satisfying $u\equiv k\operatorname{mod}2$. In other words, the sum on the right hand side of ([\[pf.lem.Hkj.some-basics-3.2\]](#pf.lem.Hkj.some-basics-3.2){reference-type="ref" reference="pf.lem.Hkj.some-basics-3.2"}) ranges over the same set as the sum on the right hand side of ([\[pf.lem.Hkj.some-basics-3.1\]](#pf.lem.Hkj.some-basics-3.1){reference-type="ref" reference="pf.lem.Hkj.some-basics-3.1"}). Therefore, the right hand sides of the equalities ([\[pf.lem.Hkj.some-basics-3.2\]](#pf.lem.Hkj.some-basics-3.2){reference-type="ref" reference="pf.lem.Hkj.some-basics-3.2"}) and ([\[pf.lem.Hkj.some-basics-3.1\]](#pf.lem.Hkj.some-basics-3.1){reference-type="ref" reference="pf.lem.Hkj.some-basics-3.1"}) are equal. Hence, their left hand sides must also be equal. In other words, $H_{k,j-1}=H_{k,j}$. This proves Lemma [\[lem.Hkj.some-basics-3\]](#lem.Hkj.some-basics-3){reference-type="ref" reference="lem.Hkj.some-basics-3"}.   ------------------------------------------------------------------------ The following easy property follows from Lemma [\[lem.suip\]](#lem.suip){reference-type="ref" reference="lem.suip"}: **Lemma 31** (). [\[lem.su+ip\]]{#lem.su+ip label="lem.su+ip"}Let $i,u,v\in\left[ n\right]$ be such that $i<u<v$. Then, $$s_{u}^{+}\left( i\Longrightarrow v\right) =\left( i\Longrightarrow v\right) s_{u-1}^{+}.$$ **Proof.** We have $u\in\left[ n-1\right]$ (since $u<v\leq n$ and $u>i\geq1$) and thus $s_{u}^{+}=s_{u}+1$. Also, $u>i\geq1$, so that $u-1>0$. Hence, $u-1\in\left[ n-1\right]$ (since $u-1<u<v\leq n$ and $u-1>0$) and thus $s_{u-1}^{+}=s_{u-1}+1$. 
Hence, $$\begin{aligned} & \underbrace{s_{u}^{+}}_{=s_{u}+1}\left( i\Longrightarrow v\right) -\left( i\Longrightarrow v\right) \underbrace{s_{u-1}^{+}}_{=s_{u-1}+1}\\ & =\left( s_{u}+1\right) \left( i\Longrightarrow v\right) -\left( i\Longrightarrow v\right) \left( s_{u-1}+1\right) \\ & =s_{u}\left( i\Longrightarrow v\right) +\left( i\Longrightarrow v\right) -\left( i\Longrightarrow v\right) s_{u-1}-\left( i\Longrightarrow v\right) \\ & =\underbrace{s_{u}\left( i\Longrightarrow v\right) }_{\substack{=\left( i\Longrightarrow v\right) s_{u-1}\\\text{(by Lemma \ref{lem.suip})}}}-\left( i\Longrightarrow v\right) s_{u-1}=\left( i\Longrightarrow v\right) s_{u-1}-\left( i\Longrightarrow v\right) s_{u-1}=0.\end{aligned}$$ Thus, $s_{u}^{+}\left( i\Longrightarrow v\right) =\left( i\Longrightarrow v\right) s_{u-1}^{+}$. This proves Lemma [\[lem.su+ip\]](#lem.su+ip){reference-type="ref" reference="lem.su+ip"}.   ------------------------------------------------------------------------ Let us also derive a simple consequence from Corollary [\[cor.tl-via-tl+1\]](#cor.tl-via-tl+1){reference-type="ref" reference="cor.tl-via-tl+1"}: **Lemma 32** (). [\[lem.sj-1+0\]]{#lem.sj-1+0 label="lem.sj-1+0"}Let $j\in\left[ 2,n\right]$. Then, $$s_{j-1}^{+}\left( t_{j}-\left( t_{j-1}-1\right) \right) =0.$$ **Proof.** From $j\in\left[ 2,n\right]$, we obtain $j-1\in\left[ n-1\right]$. Hence, the definition of $s_{j-1}^{+}$ yields $s_{j-1}^{+}=s_{j-1}% +1=1+s_{j-1}$. Corollary [\[cor.tl-via-tl+1\]](#cor.tl-via-tl+1){reference-type="ref" reference="cor.tl-via-tl+1"} (applied to $\ell=j-1$) yields $t_{j-1}% =1+s_{j-1}t_{\left( j-1\right) +1}=1+s_{j-1}t_{j}$ (since $\left( j-1\right) +1=j$). 
Thus, $t_{j-1}-1=s_{j-1}t_{j}$, so that $$t_{j}-\underbrace{\left( t_{j-1}-1\right) }_{=s_{j-1}t_{j}}=t_{j}% -s_{j-1}t_{j}=\left( 1-s_{j-1}\right) t_{j}.$$ Thus, $$\underbrace{s_{j-1}^{+}}_{=1+s_{j-1}}\underbrace{\left( t_{j}-\left( t_{j-1}-1\right) \right) }_{=\left( 1-s_{j-1}\right) t_{j}}% =\underbrace{\left( 1+s_{j-1}\right) \left( 1-s_{j-1}\right) }_{\substack{=1-s_{j-1}^{2}=0\\\text{(since (\ref{eq.si.invol}) yields }s_{j-1}^{2}=\operatorname*{id}=1\text{)}}}t_{j}=0.$$ This proves Lemma [\[lem.sj-1+0\]](#lem.sj-1+0){reference-type="ref" reference="lem.sj-1+0"}.   ------------------------------------------------------------------------ ## The fuse The next lemma will help us analyze the behavior of the ideals $H_{k,j}$ under repeated multiplication by $t_{j}$'s: **Lemma 33** (). [\[lem.4\]]{#lem.4 label="lem.4"}Let $j\in\left[ n\right]$ and $k\in\left[ n+1\right]$ be such that $j<k$. Then: 1. If $k\not \equiv j\operatorname{mod}2$, then $s_{k}% ^{+}t_{j}\in H_{k-1,j}$. 2. If $k\equiv j\operatorname{mod}2$, then $s_{k}^{+}\left( t_{j}-1\right) \in H_{k-1,j}$. **Proof.** If $k=n+1$, then both parts of Lemma [\[lem.4\]](#lem.4){reference-type="ref" reference="lem.4"} hold for fairly obvious reasons[^3]. Hence, for the rest of this proof, we WLOG assume that $k\neq n+1$. Therefore, $k\in\left[ n+1\right] \setminus\left\{ n+1\right\} =\left[ n\right]$, so that $k\leq n$. Recall that $H_{k-1,j}$ is a left ideal of $\mathbf{A}$, therefore an additive subgroup of $\mathbf{A}$. We have $j<k$, so that $j\leq k-1$. Hence, $k-1\in\left[ j,k-1\right]$. The definition of $H_{k-1,j}$ yields $$H_{k-1,j}=\sum\limits_{\substack{u\in\left[ j,k-1\right] ;\\u\equiv k-1\operatorname{mod}2}}\mathbf{A}s_{u}^{+}. \label{pf.lem.4.H=}%$$ Since $\mathbf{A}s_{k-1}^{+}$ is an addend of the sum on the right hand side here (because $k-1\in\left[ j,k-1\right]$ and $k-1\equiv k-1\operatorname{mod}2$), we thus conclude that $\mathbf{A}s_{k-1}% ^{+}\subseteq H_{k-1,j}$. 
Proposition [\[prop.tl-as-sum\]](#prop.tl-as-sum){reference-type="ref" reference="prop.tl-as-sum"} yields $$t_{j}=\sum\limits_{w=j}^{n}\left( j\Longrightarrow w\right) .$$ Hence, $$\begin{aligned} s_{k}^{+}t_{j} & =s_{k}^{+}\sum\limits_{w=j}^{n}\left( j\Longrightarrow w\right) =\sum\limits_{w=j}^{n}s_{k}^{+}\left( j\Longrightarrow w\right) \nonumber\\ & =\underbrace{\sum\limits_{w=j}^{k}s_{k}^{+}\left( j\Longrightarrow w\right) }_{=s_{k}^{+}\sum\limits_{w=j}^{k}\left( j\Longrightarrow w\right) }+\sum\limits _{w=k+1}^{n}\underbrace{s_{k}^{+}\left( j\Longrightarrow w\right) }_{\substack{=\left( j\Longrightarrow w\right) s_{k-1}^{+}\\\text{(by Lemma \ref{lem.su+ip},}\\\text{since }j<k<w\text{)}}}\ \ \ \ \ \ \ \ \ \ \left( \text{since }j\leq k\leq n\right) \nonumber\\ & =s_{k}^{+}\sum\limits_{w=j}^{k}\left( j\Longrightarrow w\right) +\sum\limits _{w=k+1}^{n}\left( j\Longrightarrow w\right) s_{k-1}^{+}. \label{pf.lem.4.2}%\end{aligned}$$ We now need a better understanding of the sums on the right hand side. For this purpose, we observe that every $w\in\left[ j,n-1\right]$ satisfies $$\begin{aligned} \left( j\Longrightarrow w\right) +\underbrace{\left( j\Longrightarrow w+1\right) }_{\substack{=\left( j\Longrightarrow w\right) s_{w}\\\text{(by Proposition \ref{prop.cycpq.rec} \textbf{(b)})}}} & =\left( j\Longrightarrow w\right) +\left( j\Longrightarrow w\right) s_{w}% \nonumber\\ & =\left( j\Longrightarrow w\right) \underbrace{\left( 1+s_{w}\right) }_{\substack{=s_{w}+1=s_{w}^{+}\\\text{(by the definition of }s_{w}% ^{+}\text{)}}}=\left( j\Longrightarrow w\right) s_{w}^{+}\nonumber\\ & \in\mathbf{A}s_{w}^{+}. \label{pf.lem.4.twoterms}%\end{aligned}$$ **(a)** Assume that $k\not \equiv j\operatorname{mod}2$. Thus, the integer $k-j$ is odd, so that $k-j+1$ is even. 
Now, $$\begin{aligned} \sum\limits_{w=j}^{k}\left( j\Longrightarrow w\right) & =\left( j\Longrightarrow j\right) +\left( j\Longrightarrow j+1\right) +\left( j\Longrightarrow j+2\right) +\cdots+\left( j\Longrightarrow k\right) \\ & =\underbrace{\left( \left( j\Longrightarrow j\right) +\left( j\Longrightarrow j+1\right) \right) }_{\substack{\in\mathbf{A}s_{j}% ^{+}\\\text{(by (\ref{pf.lem.4.twoterms}))}}}\\ & \ \ \ \ \ \ \ \ \ \ +\underbrace{\left( \left( j\Longrightarrow j+2\right) +\left( j\Longrightarrow j+3\right) \right) }_{\substack{\in \mathbf{A}s_{j+2}^{+}\\\text{(by (\ref{pf.lem.4.twoterms}))}}}\\ & \ \ \ \ \ \ \ \ \ \ +\underbrace{\left( \left( j\Longrightarrow j+4\right) +\left( j\Longrightarrow j+5\right) \right) }_{\substack{\in \mathbf{A}s_{j+4}^{+}\\\text{(by (\ref{pf.lem.4.twoterms}))}}}\\ & \ \ \ \ \ \ \ \ \ \ +\cdots\\ & \ \ \ \ \ \ \ \ \ \ +\underbrace{\left( \left( j\Longrightarrow k-1\right) +\left( j\Longrightarrow k\right) \right) }_{\substack{\in \mathbf{A}s_{k-1}^{+}\\\text{(by (\ref{pf.lem.4.twoterms}))}}}\\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left( \begin{array} [c]{c}% \text{here, we have split our sum into pairs of}\\ \text{consecutive addends, since }k-j+1\text{ is even}% \end{array} \right) \\ & \in\mathbf{A}s_{j}^{+}+\mathbf{A}s_{j+2}^{+}+\mathbf{A}s_{j+4}^{+}% +\cdots+\mathbf{A}s_{k-1}^{+}=\sum\limits_{\substack{u\in\left[ j,k-1\right] ;\\u\equiv k-1\operatorname{mod}2}}\mathbf{A}s_{u}^{+}\\ & =H_{k-1,j}\ \ \ \ \ \ \ \ \ \ \left( \text{by (\ref{pf.lem.4.H=})}\right) .\end{aligned}$$ Now, ([\[pf.lem.4.2\]](#pf.lem.4.2){reference-type="ref" reference="pf.lem.4.2"}) becomes $$\begin{aligned} s_{k}^{+}t_{j} & =s_{k}^{+}\underbrace{\sum\limits_{w=j}^{k}\left( j\Longrightarrow w\right) }_{\in H_{k-1,j}}+\sum\limits_{w=k+1}^{n}% \underbrace{\left( j\Longrightarrow w\right) s_{k-1}^{+}}_{\in \mathbf{A}s_{k-1}^{+}\subseteq H_{k-1,j}}\\ & \in s_{k}^{+}H_{k-1,j}+\sum\limits_{w=k+1}^{n}H_{k-1,j}\subseteq H_{k-1,j}% \ \ \ \ \ \ \ \ \ \ 
\left( \text{since }H_{k-1,j}\text{ is a left ideal of }\mathbf{A}\right) .\end{aligned}$$ This proves Lemma [\[lem.4\]](#lem.4){reference-type="ref" reference="lem.4"} **(a)**. **(b)** Assume that $k\equiv j\operatorname{mod}2$. Thus, the integer $k-j$ is even. Now, $$\begin{aligned} \sum\limits_{w=j+1}^{k}\left( j\Longrightarrow w\right) & =\left( j\Longrightarrow j+1\right) +\left( j\Longrightarrow j+2\right) +\left( j\Longrightarrow j+3\right) +\cdots+\left( j\Longrightarrow k\right) \nonumber\\ & =\underbrace{\left( \left( j\Longrightarrow j+1\right) +\left( j\Longrightarrow j+2\right) \right) }_{\substack{\in\mathbf{A}s_{j+1}% ^{+}\\\text{(by (\ref{pf.lem.4.twoterms}))}}}\nonumber\\ & \ \ \ \ \ \ \ \ \ \ +\underbrace{\left( \left( j\Longrightarrow j+3\right) +\left( j\Longrightarrow j+4\right) \right) }_{\substack{\in \mathbf{A}s_{j+3}^{+}\\\text{(by (\ref{pf.lem.4.twoterms}))}}}\nonumber\\ & \ \ \ \ \ \ \ \ \ \ +\underbrace{\left( \left( j\Longrightarrow j+5\right) +\left( j\Longrightarrow j+6\right) \right) }_{\substack{\in \mathbf{A}s_{j+5}^{+}\\\text{(by (\ref{pf.lem.4.twoterms}))}}}\nonumber\\ & \ \ \ \ \ \ \ \ \ \ +\cdots\nonumber\\ & \ \ \ \ \ \ \ \ \ \ +\underbrace{\left( \left( j\Longrightarrow k-1\right) +\left( j\Longrightarrow k\right) \right) }_{\substack{\in \mathbf{A}s_{k-1}^{+}\\\text{(by (\ref{pf.lem.4.twoterms}))}}}\nonumber\\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left( \begin{array} [c]{c}% \text{here, we have split our sum into pairs of}\\ \text{consecutive addends, since }k-j\text{ is even}% \end{array} \right) \nonumber\\ & \in\mathbf{A}s_{j+1}^{+}+\mathbf{A}s_{j+3}^{+}+\mathbf{A}s_{j+5}^{+}% +\cdots+\mathbf{A}s_{k-1}^{+}=\sum\limits_{\substack{u\in\left[ j,k-1\right] ;\\u\equiv k-1\operatorname{mod}2}}\mathbf{A}s_{u}^{+}\nonumber\\ & =H_{k-1,j}\ \ \ \ \ \ \ \ \ \ \left( \text{by (\ref{pf.lem.4.H=})}\right) . 
\label{pf.lem.4.2.b.1}%\end{aligned}$$ But $$\begin{aligned} \sum\limits_{w=j}^{k}\left( j\Longrightarrow w\right) & =\underbrace{\left( j\Longrightarrow j\right) }_{=\operatorname*{id}=1}+\sum\limits_{w=j+1}^{k}\left( j\Longrightarrow w\right) \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left( \begin{array} [c]{c}% \text{here, we have split off the}\\ \text{addend for }w=j\text{ from the sum}% \end{array} \right) \\ & =1+\sum\limits_{w=j+1}^{k}\left( j\Longrightarrow w\right) .\end{aligned}$$ Now, ([\[pf.lem.4.2\]](#pf.lem.4.2){reference-type="ref" reference="pf.lem.4.2"}) becomes $$\begin{aligned} s_{k}^{+}t_{j} & =s_{k}^{+}\underbrace{\sum\limits_{w=j}^{k}\left( j\Longrightarrow w\right) }_{=1+\sum\limits_{w=j+1}^{k}\left( j\Longrightarrow w\right) }+\sum\limits_{w=k+1}^{n}\left( j\Longrightarrow w\right) s_{k-1}^{+}\\ & =s_{k}^{+}\left( 1+\sum\limits_{w=j+1}^{k}\left( j\Longrightarrow w\right) \right) +\sum\limits_{w=k+1}^{n}\left( j\Longrightarrow w\right) s_{k-1}^{+}\\ & =s_{k}^{+}+s_{k}^{+}\sum\limits_{w=j+1}^{k}\left( j\Longrightarrow w\right) +\sum\limits_{w=k+1}^{n}\left( j\Longrightarrow w\right) s_{k-1}^{+}.\end{aligned}$$ Subtracting $s_{k}^{+}$ from both sides of this equality, we find $$\begin{aligned} s_{k}^{+}t_{j}-s_{k}^{+} & =s_{k}^{+}\underbrace{\sum\limits_{w=j+1}^{k}\left( j\Longrightarrow w\right) }_{\substack{\in H_{k-1,j}\\\text{(by (\ref{pf.lem.4.2.b.1}))}}}+\sum\limits_{w=k+1}^{n}\underbrace{\left( j\Longrightarrow w\right) s_{k-1}^{+}}_{\in\mathbf{A}s_{k-1}^{+}\subseteq H_{k-1,j}}\\ & \in s_{k}^{+}H_{k-1,j}+\sum\limits_{w=k+1}^{n}H_{k-1,j}\\ & \subseteq H_{k-1,j}\ \ \ \ \ \ \ \ \ \ \left( \text{since }H_{k-1,j}\text{ is a left ideal of }\mathbf{A}\right) .\end{aligned}$$ Hence, $s_{k}^{+}\left( t_{j}-1\right) =s_{k}^{+}t_{j}-s_{k}^{+}\in H_{k-1,j}$. This proves Lemma [\[lem.4\]](#lem.4){reference-type="ref" reference="lem.4"} **(b)**.   
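For any fixed $n$, the identities established so far (Lemma [\[lem.su+ip\]](#lem.su+ip){reference-type="ref" reference="lem.su+ip"}, Lemma [\[lem.sj-1+0\]](#lem.sj-1+0){reference-type="ref" reference="lem.sj-1+0"}, Corollary [\[cor.titjtj-1\]](#cor.titjtj-1){reference-type="ref" reference="cor.titjtj-1"} and Theorem [\[thm.1+sjtitj\]](#thm.1+sjtitj){reference-type="ref" reference="thm.1+sjtitj"}) are finite computations in $\mathbf{k}\left[ S_{n}\right]$, and thus can be double-checked by brute force. The following sketch (a Python illustration, not part of the proofs; it takes $\mathbf{k}=\mathbb{Z}$ and $n=5$, encodes a permutation as the tuple of its values, and multiplies permutations by the convention $\left( \sigma\tau\right) \left( x\right) =\sigma\left( \tau\left( x\right) \right)$, which agrees with $\left( i\Longrightarrow j\right) =s_{i}s_{i+1}\cdots s_{j-1}$) verifies all four identities:

```python
from collections import defaultdict

n = 5  # the identities are claimed for every n; we spot-check n = 5

def compose(p, q):
    """(p*q)(x) = p(q(x)); a permutation is the tuple of its 0-based values."""
    return tuple(p[q[x]] for x in range(n))

def cyc(i, j):
    """The cycle cyc_{i,i+1,...,j}, sending i -> i+1 -> ... -> j -> i (1-based)."""
    p = list(range(n))
    for x in range(i - 1, j - 1):
        p[x] = x + 1
    p[j - 1] = i - 1
    return tuple(p)

one = {tuple(range(n)): 1}  # the unity of Z[S_n]

def add(a, b):  # sum of two elements of Z[S_n] (dicts: permutation -> coefficient)
    out = defaultdict(int, a)
    for p, c in b.items():
        out[p] += c
    return {p: c for p, c in out.items() if c}

def scale(c, a):
    return {p: c * v for p, v in a.items()}

def mul(a, b):  # product in Z[S_n]
    out = defaultdict(int)
    for p, x in a.items():
        for q, y in b.items():
            out[compose(p, q)] += x * y
    return {p: c for p, c in out.items() if c}

def comm(a, b):  # the commutator [a, b] = ab - ba
    return add(mul(a, b), scale(-1, mul(b, a)))

def t(j):  # t_j = sum_{w=j}^{n} (j ==> w); in particular, t_n = 1
    out = defaultdict(int)
    for w in range(j, n + 1):
        out[cyc(j, w)] += 1
    return dict(out)

def sp(i):  # s_i^+ = s_i + 1 for i in [n-1], and s_i^+ = 1 otherwise
    return add({cyc(i, i + 1): 1}, one) if 1 <= i <= n - 1 else one

# Lemma 31: s_u^+ (i ==> v) = (i ==> v) s_{u-1}^+ whenever i < u < v.
for i in range(1, n + 1):
    for u in range(i + 1, n + 1):
        for v in range(u + 1, n + 1):
            assert mul(sp(u), {cyc(i, v): 1}) == mul({cyc(i, v): 1}, sp(u - 1))

# Lemma 32: s_{j-1}^+ (t_j - (t_{j-1} - 1)) = 0 for all j in [2, n].
for j in range(2, n + 1):
    diff = add(t(j), scale(-1, add(t(j - 1), scale(-1, one))))
    assert mul(sp(j - 1), diff) == {}

# Corollary 21: [t_i, t_j] t_{j-1} = 0 for i <= j with j in [2, n].
for j in range(2, n + 1):
    for i in range(1, j + 1):
        assert mul(comm(t(i), t(j)), t(j - 1)) == {}

# Theorem 22: (1 + s_j) [t_i, t_j] = 0 for i <= j in [n-1].
for j in range(1, n):
    for i in range(1, j + 1):
        assert mul(sp(j), comm(t(i), t(j))) == {}

print("all identities verified in Z[S_%d]" % n)
```

Raising $n$ gives further evidence (at the cost of longer sums); of course, no finite check replaces the proofs above, which hold for all $n$.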
------------------------------------------------------------------------ From Lemma [\[lem.4\]](#lem.4){reference-type="ref" reference="lem.4"} and Lemma [\[lem.sj-1+0\]](#lem.sj-1+0){reference-type="ref" reference="lem.sj-1+0"}, we can easily obtain the following: **Lemma 34** (). [\[lem.4+0\]]{#lem.4+0 label="lem.4+0"}Let $j\in\left[ 2,n\right]$ and $u\in\left[ n\right]$ be such that $u\geq j-1$ and $u\equiv j-1\operatorname{mod}2$. Then, $$s_{u}^{+}\left( t_{j}-\left( t_{j-1}-1\right) \right) \in H_{u-1,j}.$$ **Proof.** If $u=j-1$, then this follows from $$\begin{aligned} s_{j-1}^{+}\left( t_{j}-\left( t_{j-1}-1\right) \right) & =0\ \ \ \ \ \ \ \ \ \ \left( \text{by Lemma \ref{lem.sj-1+0}}\right) \\ & \in H_{\left( j-1\right) -1,j}\ \ \ \ \ \ \ \ \ \ \left( \text{since }H_{\left( j-1\right) -1,j}\text{ is a left ideal}\right) .\end{aligned}$$ Thus, for the rest of this proof, we WLOG assume that $u\neq j-1$. Combining this with $u\geq j-1$, we obtain $u>j-1$. Therefore, $u\geq\left( j-1\right) +1=j$. Moreover, $u\neq j$ (since $u\equiv j-1\not \equiv j\operatorname{mod}% 2$). Combining this with $u\geq j$, we obtain $u>j$. Thus, $u\geq j+1$. Also, from $j\in\left[ 2,n\right]$, we obtain $j-1\in\left[ n-1\right] \subseteq\left[ n\right]$. From $u>j$, we obtain $j<u$. Moreover, $u\equiv j-1\not \equiv j\operatorname{mod}2$. Hence, Lemma [\[lem.4\]](#lem.4){reference-type="ref" reference="lem.4"} **(a)** (applied to $k=u$) yields $s_{u}^{+}t_{j}\in H_{u-1,j}$ (since $j<u$). Furthermore, Lemma [\[lem.4\]](#lem.4){reference-type="ref" reference="lem.4"} **(b)** (applied to $u$ and $j-1$ instead of $k$ and $j$) yields $s_{u}^{+}\left( t_{j-1}-1\right) \in H_{u-1,j-1}$ (since $u\equiv j-1\operatorname{mod}2$ and $j-1<j<u$). Moreover, $H_{u-1,j}\subseteq H_{u-1,j-1}$ (by Lemma [\[lem.Hkj.some-basics\]](#lem.Hkj.some-basics){reference-type="ref" reference="lem.Hkj.some-basics"}, applied to $k=u-1$). 
Finally, from $u\equiv j-1\operatorname{mod}2$, we obtain $u-1\equiv\left( j-1\right) -1=j-2\equiv j\operatorname{mod}2$. Hence, Lemma [\[lem.Hkj.some-basics-3\]](#lem.Hkj.some-basics-3){reference-type="ref" reference="lem.Hkj.some-basics-3"} (applied to $k=u-1$) yields $H_{u-1,j-1}% =H_{u-1,j}$. Altogether, we now have $$s_{u}^{+}\left( t_{j}-\left( t_{j-1}-1\right) \right) =\underbrace{s_{u}% ^{+}t_{j}}_{\in H_{u-1,j}}-\underbrace{s_{u}^{+}\left( t_{j-1}-1\right) }_{\in H_{u-1,j-1}=H_{u-1,j}}\in H_{u-1,j}-H_{u-1,j}\subseteq H_{u-1,j}%$$ (since $H_{u-1,j}$ is a left ideal of $\mathbf{A}$). This proves Lemma [\[lem.4+0\]](#lem.4+0){reference-type="ref" reference="lem.4+0"}.   ------------------------------------------------------------------------ Using Lemma [\[lem.4+0\]](#lem.4+0){reference-type="ref" reference="lem.4+0"} with Lemma [\[lem.4\]](#lem.4){reference-type="ref" reference="lem.4"} **(a)**, we can obtain the following: **Lemma 35** (). [\[lem.5\]]{#lem.5 label="lem.5"}Let $j\in\left[ n\right]$ and $k\in\left[ n+1\right]$ be such that $1<j\leq k$ and $k\equiv j\operatorname{mod}2$. Then, $$s_{k}^{+}\left[ t_{j-1},t_{j}\right] \in H_{k-2,j}.$$ **Proof.** From $j\in\left[ n\right]$ and $1<j$, we obtain $j\in\left[ 2,n\right]$. Thus, $j-1\in\left[ n-1\right]$. Hence, ([\[eq.cor.titi+12=0.comm\]](#eq.cor.titi+12=0.comm){reference-type="ref" reference="eq.cor.titi+12=0.comm"}) (applied to $i=j-1$) yields $$\left[ t_{j-1},t_{j}\right] =t_{j-1}\left( t_{j}-\left( t_{j-1}-1\right) \right) . \label{pf.lem.5.comm}%$$ Multiplying this equality by $s_{k}^{+}$ from the left, we obtain $$s_{k}^{+}\left[ t_{j-1},t_{j}\right] =s_{k}^{+}t_{j-1}\left( t_{j}-\left( t_{j-1}-1\right) \right) . \label{pf.lem.5.scomm}%$$ However, $k-1\equiv j-1\operatorname{mod}2$ (since $k\equiv j\operatorname{mod}2$). Furthermore, we have $j-1\in\left[ n-1\right] \subseteq\left[ n\right]$ and $j-1<j\leq k$ and $k\equiv j\not \equiv j-1\operatorname{mod}2$. 
Thus, Lemma [\[lem.4\]](#lem.4){reference-type="ref" reference="lem.4"} **(a)** (applied to $j-1$ instead of $j$) yields $$s_{k}^{+}t_{j-1}\in H_{k-1,j-1}=\sum\limits_{\substack{u\in\left[ j-1,k-1\right] ;\\u\equiv k-1\operatorname{mod}2}}\mathbf{A}s_{u}^{+}% \ \ \ \ \ \ \ \ \ \ \left( \text{by the definition of }H_{k-1,j-1}\right) .$$ In other words, we can write $s_{k}^{+}t_{j-1}$ in the form $$s_{k}^{+}t_{j-1}=\sum\limits_{\substack{u\in\left[ j-1,k-1\right] ;\\u\equiv k-1\operatorname{mod}2}}a_{u}s_{u}^{+}, \label{pf.lem.5.asum}%$$ where $a_{u}\in\mathbf{A}$ is an element for each $u\in\left[ j-1,k-1\right]$ satisfying $u\equiv k-1\operatorname{mod}2$. Consider these elements $a_{u}$. Now, ([\[pf.lem.5.scomm\]](#pf.lem.5.scomm){reference-type="ref" reference="pf.lem.5.scomm"}) becomes $$\begin{aligned} s_{k}^{+}\left[ t_{j-1},t_{j}\right] & =s_{k}^{+}t_{j-1}\left( t_{j}-\left( t_{j-1}-1\right) \right) \nonumber\\ & =\left( \sum\limits_{\substack{u\in\left[ j-1,k-1\right] ;\\u\equiv k-1\operatorname{mod}2}}a_{u}s_{u}^{+}\right) \left( t_{j}-\left( t_{j-1}-1\right) \right) \ \ \ \ \ \ \ \ \ \ \left( \text{by (\ref{pf.lem.5.asum})}\right) \nonumber\\ & =\sum\limits_{\substack{u\in\left[ j-1,k-1\right] ;\\u\equiv k-1\operatorname{mod}2}}a_{u}s_{u}^{+}\left( t_{j}-\left( t_{j-1}-1\right) \right) . \label{pf.lem.5.sksum}%\end{aligned}$$ However, every $u\in\left[ j-1,k-1\right]$ satisfying $u\equiv k-1\operatorname{mod}2$ satisfies $$s_{u}^{+}\left( t_{j}-\left( t_{j-1}-1\right) \right) \in H_{k-2,j}. \label{pf.lem.5.p-lem}%$$ \[*Proof of ([\[pf.lem.5.p-lem\]](#pf.lem.5.p-lem){reference-type="ref" reference="pf.lem.5.p-lem"}):* Let $u\in\left[ j-1,k-1\right]$ be such that $u\equiv k-1\operatorname{mod}2$. We have $j\in\left[ 2,n\right]$. Moreover, $u\in\left[ j-1,k-1\right]$ shows that $u\geq j-1$ and $u\leq k-1\leq n$ (since $k\leq n+1$). Thus, $u\in\left[ n\right]$ (since $u\leq n$). Furthermore, $u\equiv k-1\equiv j-1\operatorname{mod}2$. 
Thus, Lemma [\[lem.4+0\]](#lem.4+0){reference-type="ref" reference="lem.4+0"} yields $s_{u}^{+}\left( t_{j}-\left( t_{j-1}-1\right) \right) \in H_{u-1,j}$. However, from $u\leq k-1$, we obtain $u-1\leq\left( k-1\right) -1=k-2$. Moreover, from $u\equiv k-1\operatorname{mod}2$, we obtain $u-1\equiv\left( k-1\right) -1=k-2\operatorname{mod}2$. These two facts entail $H_{u-1,j}% \subseteq H_{k-2,j}$ (by Lemma [\[lem.Hkj.some-basics-2\]](#lem.Hkj.some-basics-2){reference-type="ref" reference="lem.Hkj.some-basics-2"}, applied to $u-1$ and $k-2$ instead of $v$ and $w$). Hence, $s_{u}^{+}\left( t_{j}-\left( t_{j-1}-1\right) \right) \in H_{u-1,j}\subseteq H_{k-2,j}$. This proves ([\[pf.lem.5.p-lem\]](#pf.lem.5.p-lem){reference-type="ref" reference="pf.lem.5.p-lem"}).\] Now, ([\[pf.lem.5.sksum\]](#pf.lem.5.sksum){reference-type="ref" reference="pf.lem.5.sksum"}) becomes $$s_{k}^{+}\left[ t_{j-1},t_{j}\right] =\sum\limits_{\substack{u\in\left[ j-1,k-1\right] ;\\u\equiv k-1\operatorname{mod}2}}a_{u}\underbrace{s_{u}% ^{+}\left( t_{j}-\left( t_{j-1}-1\right) \right) }_{\substack{\in H_{k-2,j}\\\text{(by (\ref{pf.lem.5.p-lem}))}}}\in\sum\limits_{\substack{u\in\left[ j-1,k-1\right] ;\\u\equiv k-1\operatorname{mod}2}}a_{u}H_{k-2,j}\subseteq H_{k-2,j}%$$ (since $H_{k-2,j}$ is a left ideal of $\mathbf{A}$). This proves Lemma [\[lem.5\]](#lem.5){reference-type="ref" reference="lem.5"}.   ------------------------------------------------------------------------ **Lemma 36** (). [\[lem.6\]]{#lem.6 label="lem.6"}Let $i,j\in\left[ n\right]$ and $k\in\left[ n+1\right]$ be such that $i\leq j$ and $k\equiv j\operatorname{mod}2$. Then, $$H_{k,j}\left[ t_{i},t_{j}\right] \subseteq H_{k-2,j}.$$ **Proof.** This is obvious for $i=j$ (since we have $\left[ t_{i},t_{j}\right] =\left[ t_{j},t_{j}\right] =0$ in this case). Thus, we WLOG assume that $i\neq j$. Hence, $i<j$ (since $i\leq j$). Therefore, $1\leq i<j$. Set $a:=s_{i}s_{i+1}\cdots s_{j-2}$. We have $i<j$. 
Thus, Lemma [\[lem.ti-to-tj-1\]](#lem.ti-to-tj-1){reference-type="ref" reference="lem.ti-to-tj-1"} **(b)** yields $$\left[ t_{i},t_{j}\right] =\underbrace{\left( s_{i}s_{i+1}\cdots s_{j-2}\right) }_{=a}\left[ t_{j-1},t_{j}\right] =a\left[ t_{j-1}% ,t_{j}\right] .$$ However, it is easy to see that $$s_{u}^{+}a=as_{u}^{+}\ \ \ \ \ \ \ \ \ \ \text{for each }u\in\left[ j,k\right] . \label{pf.lem.6.su+a}%$$ \[*Proof of ([\[pf.lem.6.su+a\]](#pf.lem.6.su+a){reference-type="ref" reference="pf.lem.6.su+a"}):* Let $u\in\left[ j,k\right]$. We must prove that $s_{u}^{+}a=as_{u}^{+}$. If $u\notin\left[ n-1\right]$, then $s_{u}^{+}=1$ (by the definition of $s_{u}^{+}$), and thus this claim boils down to $1a=a1$, which is obvious. Thus, we WLOG assume that $u\in\left[ n-1\right]$. Hence, $s_{u}^{+}% =s_{u}+1$. However, from $u\in\left[ j,k\right]$, we obtain $u\geq j$. Hence, $j\leq u$, so that $j-2\leq u-2$. Thus, each of the integers $i,i+1,\ldots,j-2$ has a distance larger than $1$ from $u$. Hence, each of the transpositions $s_{i},s_{i+1},\ldots,s_{j-2}$ commutes with $s_{u}$ (by reflection locality). Therefore, the product $s_{i}s_{i+1}\cdots s_{j-2}$ of these transpositions also commutes with $s_{u}$. In other words, $a$ commutes with $s_{u}$ (since $a=s_{i}s_{i+1}\cdots s_{j-2}$). In other words, $s_{u}a=as_{u}$. Now, $$\underbrace{s_{u}^{+}}_{=s_{u}+1}a=\left( s_{u}+1\right) a=\underbrace{s_{u}% a}_{=as_{u}}+\,a=as_{u}+a=a\underbrace{\left( s_{u}+1\right) }_{=s_{u}^{+}% }=as_{u}^{+}.$$ This proves ([\[pf.lem.6.su+a\]](#pf.lem.6.su+a){reference-type="ref" reference="pf.lem.6.su+a"}).\] Using ([\[pf.lem.6.su+a\]](#pf.lem.6.su+a){reference-type="ref" reference="pf.lem.6.su+a"}), we can easily see the following: For each $u\in\left[ j,k\right]$ satisfying $u\equiv k\operatorname{mod}2$, we have $$s_{u}^{+}\left[ t_{i},t_{j}\right] \in H_{k-2,j}. 
\label{pf.lem.6.su+abra}%$$ \[*Proof of ([\[pf.lem.6.su+abra\]](#pf.lem.6.su+abra){reference-type="ref" reference="pf.lem.6.su+abra"}):* Let $u\in\left[ j,k\right]$ be such that $u\equiv k\operatorname{mod}2$. From ([\[pf.lem.6.su+a\]](#pf.lem.6.su+a){reference-type="ref" reference="pf.lem.6.su+a"}), we obtain $s_{u}^{+}a=as_{u}^{+}$. Hence, $$s_{u}^{+}\underbrace{\left[ t_{i},t_{j}\right] }_{=a\left[ t_{j-1}% ,t_{j}\right] }=\underbrace{s_{u}^{+}a}_{=as_{u}^{+}}\left[ t_{j-1}% ,t_{j}\right] =as_{u}^{+}\left[ t_{j-1},t_{j}\right] . \label{pf.lem.6.su+abra.pf.1}%$$ However, $u\in\left[ j,k\right] \subseteq\left[ k\right] \subseteq\left[ n+1\right]$ and $1<j\leq u$ and $u\equiv k\equiv j\operatorname{mod}2$. Thus, Lemma [\[lem.5\]](#lem.5){reference-type="ref" reference="lem.5"} (applied to $u$ instead of $k$) yields $s_{u}% ^{+}\left[ t_{j-1},t_{j}\right] \in H_{u-2,j}$. Furthermore, $u-2\leq k-2$ (since $u\leq k$) and $u-2\equiv k-2\operatorname{mod}2$ (since $u\equiv k\operatorname{mod}2$). Hence, Lemma [\[lem.Hkj.some-basics-2\]](#lem.Hkj.some-basics-2){reference-type="ref" reference="lem.Hkj.some-basics-2"} (applied to $v=u-2$ and $w=k-2$) yields $H_{u-2,j}\subseteq H_{k-2,j}$. 
Now, ([\[pf.lem.6.su+abra.pf.1\]](#pf.lem.6.su+abra.pf.1){reference-type="ref" reference="pf.lem.6.su+abra.pf.1"}) becomes $$\begin{aligned} s_{u}^{+}\left[ t_{i},t_{j}\right] & =a\underbrace{s_{u}^{+}\left[ t_{j-1},t_{j}\right] }_{\substack{\in H_{u-2,j}}}\in aH_{u-2,j}\subseteq H_{u-2,j}\ \ \ \ \ \ \ \ \ \ \left( \text{since }H_{u-2,j}\text{ is a left ideal}\right) \\ & \subseteq H_{k-2,j}.\end{aligned}$$ This proves ([\[pf.lem.6.su+abra\]](#pf.lem.6.su+abra){reference-type="ref" reference="pf.lem.6.su+abra"}).\] Now, $$\begin{aligned} \underbrace{H_{k,j}}_{\substack{=\sum\limits_{\substack{u\in\left[ j,k\right] ;\\u\equiv k\operatorname{mod}2}}\mathbf{A}s_{u}^{+}\\\text{(by the definition of }H_{k,j}\text{)}}}\left[ t_{i},t_{j}\right] & =\left( \sum\limits _{\substack{u\in\left[ j,k\right] ;\\u\equiv k\operatorname{mod}% 2}}\mathbf{A}s_{u}^{+}\right) \left[ t_{i},t_{j}\right] =\sum\limits _{\substack{u\in\left[ j,k\right] ;\\u\equiv k\operatorname{mod}% 2}}\mathbf{A}\underbrace{s_{u}^{+}\left[ t_{i},t_{j}\right] }_{\substack{\in H_{k-2,j}\\\text{(by (\ref{pf.lem.6.su+abra}))}}}\\ & \subseteq\sum\limits_{\substack{u\in\left[ j,k\right] ;\\u\equiv k\operatorname{mod}2}}\mathbf{A}H_{k-2,j}\subseteq H_{k-2,j}%\end{aligned}$$ (since $H_{k-2,j}$ is a left ideal). This proves Lemma [\[lem.6\]](#lem.6){reference-type="ref" reference="lem.6"}.   
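Iterating Lemma [\[lem.6\]](#lem.6){reference-type="ref" reference="lem.6"} will eventually force powers of the commutators $\left[ t_{i},t_{j}\right]$ to vanish, as we shall see shortly. As an independent sanity check (not part of any proof), this nilpotency can be confirmed by brute force for small $n$. The following Python sketch models $\mathbf{A}=\mathbf{k}\left[ S_{n}\right]$ over $\mathbb{Z}$ as dictionaries from permutations to coefficients, and constructs $t_{j}$ from the recursion $t_{n}=1$ and $t_{\ell}=1+s_{\ell}t_{\ell+1}$; all function names are ad hoc, and the crude exponent $n$ is used rather than the sharper bounds derived below.

```python
def compose(p, q):
    """Composition p∘q of permutations of {0, ..., n-1}, stored as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

def mul(a, b):
    """Product in k[S_n]; elements are dicts {permutation: coefficient}."""
    out = {}
    for p, cp in a.items():
        for q, cq in b.items():
            r = compose(p, q)
            out[r] = out.get(r, 0) + cp * cq
    return {k: c for k, c in out.items() if c}

def add(a, b):
    """Sum of two group-algebra elements."""
    out = dict(a)
    for k, c in b.items():
        out[k] = out.get(k, 0) + c
    return {k: c for k, c in out.items() if c}

def one(n):
    """The identity of k[S_n]."""
    return {tuple(range(n)): 1}

def s(i, n):
    """The adjacent transposition s_i (for i in [n-1]) as an algebra element."""
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return {tuple(p): 1}

def t(j, n):
    """t_n = 1 and t_l = 1 + s_l t_{l+1}."""
    out = one(n)
    for l in range(n - 1, j - 1, -1):
        out = add(one(n), mul(s(l, n), out))
    return out

def comm(a, b):
    """The commutator [a, b] = ab - ba."""
    return add(mul(a, b), {k: -c for k, c in mul(b, a).items()})

def power(a, m, n):
    """The m-th power of a in k[S_n]."""
    out = one(n)
    for _ in range(m):
        out = mul(out, a)
    return out

# Brute-force check for n = 4: [t_i, t_j]^n = 0 for all i, j in [n].
n = 4
for j in range(1, n + 1):
    for i in range(1, n + 1):
        assert power(comm(t(i, n), t(j, n)), n, n) == {}
```

Checking all pairs $\left( i,j\right)$, not just $i\leq j$, costs nothing here, since $\left[ t_{i},t_{j}\right] =-\left[ t_{j},t_{i}\right]$.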
------------------------------------------------------------------------ If $i,j\in\left[ n\right]$ and $k\in\left[ n+1\right]$ are such that $i\leq j$ and $k\equiv j\operatorname{mod}2$, then we can apply Lemma [\[lem.6\]](#lem.6){reference-type="ref" reference="lem.6"} recursively, yielding $$\begin{aligned} H_{k,j}\left[ t_{i},t_{j}\right] & \subseteq H_{k-2,j},\\ H_{k,j}\left[ t_{i},t_{j}\right] ^{2} & \subseteq H_{k-4,j},\\ H_{k,j}\left[ t_{i},t_{j}\right] ^{3} & \subseteq H_{k-6,j},\\ & \ldots.\end{aligned}$$ Eventually, the right hand side will be $0$, and thus we obtain $H_{k,j}% \left[ t_{i},t_{j}\right] ^{s}=0$ for some $s\in\mathbb{N}$. By picking $k$ appropriately (specifically, setting $k=n$ or $k=n+1$ depending on the parity of $n-j$), we can ensure that $H_{k,j}=\mathbf{A}$, and thus this equality $H_{k,j}\left[ t_{i},t_{j}\right] ^{s}=0$ yields $\left[ t_{i}% ,t_{j}\right] ^{s}=0$. Thus, Lemma [\[lem.6\]](#lem.6){reference-type="ref" reference="lem.6"} "lays a fuse" for proving the nilpotency of $\left[ t_{i}% ,t_{j}\right]$. We shall now elaborate on this. ## Products of $\left[ t_{i},t_{j}\right]$'s for a fixed $j$ **Lemma 37** (). [\[lem.right-bound-1\]]{#lem.right-bound-1 label="lem.right-bound-1"}Let $j\in\left[ n\right]$ and $m\in\mathbb{N}$. Let $r$ be the unique element of $\left\{ n,n+1\right\}$ that is congruent to $j$ modulo $2$. (That is, $r=% \begin{cases} n, & \text{if }n\equiv j\operatorname{mod}2;\\ n+1, & \text{otherwise.}% \end{cases}$) Let $i_{1},i_{2},\ldots,i_{m}$ be $m$ elements of $\left[ j\right]$ (not necessarily distinct). Then, $$\left[ t_{i_{1}},t_{j}\right] \left[ t_{i_{2}},t_{j}\right] \cdots\left[ t_{i_{m}},t_{j}\right] \in H_{r-2m,j}.$$ **Proof.** We induct on $m$: *Base case:* We have $r\geq n$ (by the definition of $r$), so that $r\notin\left[ n-1\right]$ and $j\leq r$ (since $j\leq n\leq r$). 
Hence, Remark [\[rmk.Hnj\]](#rmk.Hnj){reference-type="ref" reference="rmk.Hnj"} (applied to $k=r$) yields $H_{r,j}=\mathbf{A}$. Now $$\left[ t_{i_{1}},t_{j}\right] \left[ t_{i_{2}},t_{j}\right] \cdots\left[ t_{i_{0}},t_{j}\right] =\left( \text{empty product}\right) =1\in \mathbf{A}=H_{r,j}=H_{r-2\cdot0,j}%$$ (since $r=r-2\cdot0$). In other words, Lemma [\[lem.right-bound-1\]](#lem.right-bound-1){reference-type="ref" reference="lem.right-bound-1"} is proved for $m=0$. *Induction step:* Let $m\in\mathbb{N}$. Assume (as the induction hypothesis) that $$\left[ t_{i_{1}},t_{j}\right] \left[ t_{i_{2}},t_{j}\right] \cdots\left[ t_{i_{m}},t_{j}\right] \in H_{r-2m,j} \label{pf.lem.right-bound-1.IH}%$$ whenever $i_{1},i_{2},\ldots,i_{m}$ are $m$ elements of $\left[ j\right]$. We must prove that $$\left[ t_{i_{1}},t_{j}\right] \left[ t_{i_{2}},t_{j}\right] \cdots\left[ t_{i_{m+1}},t_{j}\right] \in H_{r-2\left( m+1\right) ,j} \label{pf.lem.right-bound-1.IG}%$$ whenever $i_{1},i_{2},\ldots,i_{m+1}$ are $m+1$ elements of $\left[ j\right]$. So let $i_{1},i_{2},\ldots,i_{m+1}$ be $m+1$ elements of $\left[ j\right]$. We have $r-2m\equiv r\equiv j\operatorname{mod}2$ (by the definition of $r$) and $i_{m+1}\in\left[ j\right] \subseteq\left[ n\right]$ and $i_{m+1}\leq j$ (since $i_{m+1}\in\left[ j\right]$). 
Hence, Lemma [\[lem.6\]](#lem.6){reference-type="ref" reference="lem.6"} (applied to $k=r-2m$ and $i=i_{m+1}$) yields[^4] $$H_{r-2m,j}\left[ t_{i_{m+1}},t_{j}\right] \subseteq H_{r-2m-2,j}.$$ Now, $$\begin{aligned} \left[ t_{i_{1}},t_{j}\right] \left[ t_{i_{2}},t_{j}\right] \cdots\left[ t_{i_{m+1}},t_{j}\right] & =\underbrace{\left( \left[ t_{i_{1}}% ,t_{j}\right] \left[ t_{i_{2}},t_{j}\right] \cdots\left[ t_{i_{m}}% ,t_{j}\right] \right) }_{\substack{\in H_{r-2m,j}\\\text{(by (\ref{pf.lem.right-bound-1.IH}))}}}\cdot\left[ t_{i_{m+1}},t_{j}\right] \\ & \in H_{r-2m,j}\left[ t_{i_{m+1}},t_{j}\right] \subseteq H_{r-2m-2,j}% =H_{r-2\left( m+1\right) ,j}%\end{aligned}$$ (since $r-2m-2=r-2\left( m+1\right)$). In other words, ([\[pf.lem.right-bound-1.IG\]](#pf.lem.right-bound-1.IG){reference-type="ref" reference="pf.lem.right-bound-1.IG"}) holds. This completes the induction step. Thus, Lemma [\[lem.right-bound-1\]](#lem.right-bound-1){reference-type="ref" reference="lem.right-bound-1"} is proved.   ------------------------------------------------------------------------ We can now prove our first main result: **Theorem 38** (). [\[thm.right-bound\]]{#thm.right-bound label="thm.right-bound"}Let $j\in\left[ n\right]$ and $m\in\mathbb{N}$ be such that $2m\geq n-j+2$. Let $i_{1},i_{2},\ldots,i_{m}$ be $m$ elements of $\left[ j\right]$ (not necessarily distinct). Then, $$\left[ t_{i_{1}},t_{j}\right] \left[ t_{i_{2}},t_{j}\right] \cdots\left[ t_{i_{m}},t_{j}\right] =0.$$ **Proof.** Let $r$ be the element of $\left\{ n,n+1\right\}$ defined in Lemma [\[lem.right-bound-1\]](#lem.right-bound-1){reference-type="ref" reference="lem.right-bound-1"}. Then, $r\leq n+1$, so that $$\underbrace{r}_{\leq n+1}-\underbrace{2m}_{\geq n-j+2}\leq\left( n+1\right) -\left( n-j+2\right) =j-1<j.$$ Thus, $H_{r-2m,j}=0$ (by ([\[eq.def.Hkj.=0\]](#eq.def.Hkj.=0){reference-type="ref" reference="eq.def.Hkj.=0"})). 
But Lemma [\[lem.right-bound-1\]](#lem.right-bound-1){reference-type="ref" reference="lem.right-bound-1"} yields $$\left[ t_{i_{1}},t_{j}\right] \left[ t_{i_{2}},t_{j}\right] \cdots\left[ t_{i_{m}},t_{j}\right] \in H_{r-2m,j}=0.$$ In other words, $\left[ t_{i_{1}},t_{j}\right] \left[ t_{i_{2}}% ,t_{j}\right] \cdots\left[ t_{i_{m}},t_{j}\right] =0$. This proves Theorem [\[thm.right-bound\]](#thm.right-bound){reference-type="ref" reference="thm.right-bound"}.   ------------------------------------------------------------------------ ## The identity $\left[ t_{i},t_{j}\right] ^{\left\lceil \left( n-j\right) /2\right\rceil +1}=0$ for any $i,j\in\left[ n\right]$ **Lemma 39** (). [\[lem.right-bound-m-1\]]{#lem.right-bound-m-1 label="lem.right-bound-m-1"}Let $i,j\in\left[ n\right]$ and $m\in\mathbb{N}$ be such that $2m\geq n-j+2$ and $i\leq j$. Then, $\left[ t_{i},t_{j}\right] ^{m}=0$. **Proof.** We have $i\in\left[ j\right]$ (since $i\leq j$). Hence, Theorem [\[thm.right-bound\]](#thm.right-bound){reference-type="ref" reference="thm.right-bound"} (applied to $i_{k}=i$) yields $\underbrace{\left[ t_{i},t_{j}\right] \left[ t_{i},t_{j}\right] \cdots\left[ t_{i}% ,t_{j}\right] }_{m\text{ times}}=0$. In other words, $\left[ t_{i}% ,t_{j}\right] ^{m}=0$. This proves Lemma [\[lem.right-bound-m-1\]](#lem.right-bound-m-1){reference-type="ref" reference="lem.right-bound-m-1"}.   ------------------------------------------------------------------------ **Corollary 40** (). [\[cor.right-bound-m\]]{#cor.right-bound-m label="cor.right-bound-m"}Let $i,j\in\left[ n\right]$ and $m\in\mathbb{N}$ be such that $2m\geq n-j+2$. Then, $\left[ t_{i},t_{j}\right] ^{m}=0$. **Proof.** If $i\leq j$, then Corollary [\[cor.right-bound-m\]](#cor.right-bound-m){reference-type="ref" reference="cor.right-bound-m"} follows directly from Lemma [\[lem.right-bound-m-1\]](#lem.right-bound-m-1){reference-type="ref" reference="lem.right-bound-m-1"}. Thus, we WLOG assume that we don't have $i\leq j$. 
Hence, $i>j$. Therefore, $j<i$, so that $j\leq i$. Moreover, $2m\geq n-\underbrace{j}% _{<i}+2>n-i+2$. Hence, we can apply Lemma [\[lem.right-bound-m-1\]](#lem.right-bound-m-1){reference-type="ref" reference="lem.right-bound-m-1"} to $j$ and $i$ instead of $i$ and $j$. We thus obtain $\left[ t_{j},t_{i}\right] ^{m}=0$. However, $\left[ t_{i},t_{j}\right] =-\left[ t_{j},t_{i}\right]$ (since any two elements $a$ and $b$ of a ring satisfy $\left[ a,b\right] =-\left[ b,a\right]$). Hence, $\left[ t_{i},t_{j}\right] ^{m}=\left( -\left[ t_{j},t_{i}\right] \right) ^{m}=\left( -1\right) ^{m}% \underbrace{\left[ t_{j},t_{i}\right] ^{m}}_{=0}=0$. This proves Corollary [\[cor.right-bound-m\]](#cor.right-bound-m){reference-type="ref" reference="cor.right-bound-m"}.   ------------------------------------------------------------------------ **Corollary 41** (). [\[cor.right-bound\]]{#cor.right-bound label="cor.right-bound"}For any $x\in\mathbb{R}$, let $\left\lceil x\right\rceil$ denote the smallest integer that is $\geq x$. Let $i,j\in\left[ n\right]$. Then, $\left[ t_{i},t_{j}\right] ^{\left\lceil \left( n-j\right) /2\right\rceil +1}=0$. **Proof.** We have $2\left( \underbrace{\left\lceil \left( n-j\right) /2\right\rceil }_{\geq\left( n-j\right) /2}+1\right) \geq2\left( \left( n-j\right) /2+1\right) =n-j+2$. Thus, Corollary [\[cor.right-bound-m\]](#cor.right-bound-m){reference-type="ref" reference="cor.right-bound-m"} (applied to $m=\left\lceil \left( n-j\right) /2\right\rceil +1$) yields $\left[ t_{i},t_{j}\right] ^{\left\lceil \left( n-j\right) /2\right\rceil +1}=0$. This proves Corollary [\[cor.right-bound\]](#cor.right-bound){reference-type="ref" reference="cor.right-bound"}.   ------------------------------------------------------------------------ ## [\[subsec.ikinn\]]{#subsec.ikinn label="subsec.ikinn"}Can we lift the $i_{1},i_{2},\ldots,i_{m}% \in\left[ j\right]$ restriction? **Remark 42** (). 
Theorem [\[thm.right-bound\]](#thm.right-bound){reference-type="ref" reference="thm.right-bound"} does not hold if we drop the $i_{1},i_{2}% ,\ldots,i_{m}\in\left[ j\right]$ restriction. For instance, for $n=6$ and $j=3$, we have $$\left[ t_{1},t_{3}\right] \left[ t_{5},t_{3}\right] \left[ t_{4}% ,t_{3}\right] \left[ t_{1},t_{3}\right] \neq 0\ \ \ \ \ \ \ \ \ \ \text{despite }2\cdot4\geq n-j+2.$$ Another counterexample is obtained for $n=4$ and $j=2$, since $\left[ t_{3},t_{2}\right] \left[ t_{1},t_{2}\right] \neq0$. Despite these counterexamples, the restriction can be lifted in some particular cases. Here is a particularly simple instance: **Corollary 43** (). [\[cor.titn-1\]]{#cor.titn-1 label="cor.titn-1"}Assume that $n\geq2$. Let $u,v\in\left[ n\right]$. Then, $\left[ t_{u},t_{n-1}\right] \left[ t_{v},t_{n-1}\right] =0$. **Proof.** We are in one of the following three cases: *Case 1:* We have $u=n$. *Case 2:* We have $v=n$. *Case 3:* Neither $u$ nor $v$ equals $n$. Let us first consider Case 1. In this case, we have $u=n$. Hence, $t_{u}% =t_{n}=1$ and thus $\left[ t_{u},t_{n-1}\right] =\left[ 1,t_{n-1}\right] =0$ (since $\left[ 1,x\right] =0$ for each $x$). Hence, $\underbrace{\left[ t_{u},t_{n-1}\right] }_{=0}\left[ t_{v},t_{n-1}\right] =0$. Thus, Corollary [\[cor.titn-1\]](#cor.titn-1){reference-type="ref" reference="cor.titn-1"} is proved in Case 1. A similar argument proves Corollary [\[cor.titn-1\]](#cor.titn-1){reference-type="ref" reference="cor.titn-1"} in Case 2. Let us now consider Case 3. In this case, neither $u$ nor $v$ equals $n$. In other words, $u$ and $v$ are both $\neq n$. Thus, $u$ and $v$ are elements of $\left[ n\right] \setminus\left\{ n\right\} =\left[ n-1\right]$. 
Hence, Theorem [\[thm.right-bound\]](#thm.right-bound){reference-type="ref" reference="thm.right-bound"} (applied to $j=n-1$ and $m=2$ and $\left( i_{1},i_{2},\ldots,i_{m}\right) =\left( u,v\right)$) yields $\left[ t_{u},t_{n-1}\right] \left[ t_{v},t_{n-1}\right] =0$ (since $2\cdot 2=4\geq3=n-\left( n-1\right) +2$). Thus, Corollary [\[cor.titn-1\]](#cor.titn-1){reference-type="ref" reference="cor.titn-1"} is proved in Case 3. We have now proved Corollary [\[cor.titn-1\]](#cor.titn-1){reference-type="ref" reference="cor.titn-1"} in all three Cases 1, 2 and 3.   ------------------------------------------------------------------------ **Proposition 44** (). [\[prop.tn-2-three\]]{#prop.tn-2-three label="prop.tn-2-three"}Assume that $n\geq3$. Then: 1. We have $\left[ t_{i},t_{n-2}\right] \left[ s_{n-1},s_{n-2}\right] =0$ for all $i\in\left[ n-2\right]$. 2. We have $\left[ t_{i},t_{n-2}\right] \left[ t_{n-1},t_{n-2}\right] =0$ for all $i\in\left[ n\right]$. 3. We have $\left[ t_{u},t_{n-2}\right] \left[ t_{v},t_{n-2}\right] \left[ t_{w},t_{n-2}\right] =0$ for all $u,v,w\in \left[ n\right]$. **Proof.** \[Proof sketch.\]**(a)** This is easily checked for $i=n-3$ and for $i=n-2$.    [^5] In all other cases, Lemma [\[lem.ti-to-tj-1\]](#lem.ti-to-tj-1){reference-type="ref" reference="lem.ti-to-tj-1"} **(b)** lets us rewrite $\left[ t_{i},t_{n-2}\right]$ as $\left( s_{i}s_{i+1}\cdots s_{n-4}\right) \left[ t_{n-3},t_{n-2}\right]$, and thus it remains to prove that $\left[ t_{n-3},t_{n-2}\right] \left[ s_{n-1},s_{n-2}\right] =0$, which is exactly the $i=n-3$ case. Thus, Proposition [\[prop.tn-2-three\]](#prop.tn-2-three){reference-type="ref" reference="prop.tn-2-three"} **(a)** is proved. **(b)** This is easily checked for $i=n-1$ and for $i=n$. 
In all other cases, we have $i\in\left[ n-2\right]$, and an easy computation shows that $\left[ t_{n-1},t_{n-2}\right] =\left[ s_{n-1},s_{n-2}\right] \left( 1+s_{n-1}\right)$, so that the claim follows from Proposition [\[prop.tn-2-three\]](#prop.tn-2-three){reference-type="ref" reference="prop.tn-2-three"} **(a)**. Thus, Proposition [\[prop.tn-2-three\]](#prop.tn-2-three){reference-type="ref" reference="prop.tn-2-three"} **(b)** is proved.

**(c)** Let $u,v,w\in\left[ n\right]$. We must prove that $\left[ t_{u},t_{n-2}\right] \left[ t_{v},t_{n-2}\right] \left[ t_{w},t_{n-2}\right] =0$. If any of $u,v,w$ equals $n$, then this is clear (since $t_{n}=1$ and thus $\left[ t_{n},t_{n-2}\right] =\left[ 1,t_{n-2}\right] =0$). Thus, WLOG assume that $u,v,w\in\left[ n-1\right]$. If $v=n-1$, then $\left[ t_{u},t_{n-2}\right] \left[ t_{v},t_{n-2}\right] =\left[ t_{u},t_{n-2}\right] \left[ t_{n-1},t_{n-2}\right] =0$ (by Proposition [\[prop.tn-2-three\]](#prop.tn-2-three){reference-type="ref" reference="prop.tn-2-three"} **(b)**), so that our claim holds. Likewise, our claim can be shown if $w=n-1$. Thus, WLOG assume that neither $v$ nor $w$ equals $n-1$. Hence, $v,w\in\left[ n-2\right]$. Therefore, Theorem [\[thm.right-bound\]](#thm.right-bound){reference-type="ref" reference="thm.right-bound"} (applied to $j=n-2$ and $m=2$, which is allowed since $2\cdot2\geq n-\left( n-2\right) +2$) shows that $\left[ t_{v},t_{n-2}\right] \left[ t_{w},t_{n-2}\right] =0$, so that the product $\left[ t_{u},t_{n-2}\right] \left[ t_{v},t_{n-2}\right] \left[ t_{w},t_{n-2}\right]$ is again $0$. This proves Proposition [\[prop.tn-2-three\]](#prop.tn-2-three){reference-type="ref" reference="prop.tn-2-three"} **(c)**.  

------------------------------------------------------------------------

# The identity $\left[ t_{i},t_{j}\right] ^{j-i+1}=0$ for all $i\leq j$

We now approach the proof of another remarkable theorem: the identity $\left[ t_{i},t_{j}\right] ^{j-i+1}=0$, which holds for all $i,j\in\left[ n\right]$ satisfying $i\leq j$. Some more work must be done before we can prove this.
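Before developing this machinery, the target identity can be confirmed numerically for small $n$. The Python sketch below (ad-hoc helper names; $\mathbf{k}\left[ S_{n}\right]$ modelled over $\mathbb{Z}$ as dictionaries from permutations to coefficients, with $t_{j}$ built from the recursion $t_{n}=1$ and $t_{\ell}=1+s_{\ell}t_{\ell+1}$) checks $\left[ t_{i},t_{j}\right] ^{j-i+1}=0$ for all $1\leq i\leq j\leq n$ with $n=5$, and also reconfirms the second counterexample from Remark 42.

```python
def compose(p, q):
    """Composition p∘q of permutations of {0, ..., n-1}, stored as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

def mul(a, b):
    """Product in k[S_n]; elements are dicts {permutation: coefficient}."""
    out = {}
    for p, cp in a.items():
        for q, cq in b.items():
            r = compose(p, q)
            out[r] = out.get(r, 0) + cp * cq
    return {k: c for k, c in out.items() if c}

def add(a, b):
    """Sum of two group-algebra elements."""
    out = dict(a)
    for k, c in b.items():
        out[k] = out.get(k, 0) + c
    return {k: c for k, c in out.items() if c}

def one(n):
    """The identity of k[S_n]."""
    return {tuple(range(n)): 1}

def s(i, n):
    """The adjacent transposition s_i (for i in [n-1]) as an algebra element."""
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return {tuple(p): 1}

def t(j, n):
    """t_n = 1 and t_l = 1 + s_l t_{l+1}."""
    out = one(n)
    for l in range(n - 1, j - 1, -1):
        out = add(one(n), mul(s(l, n), out))
    return out

def comm(a, b):
    """The commutator [a, b] = ab - ba."""
    return add(mul(a, b), {k: -c for k, c in mul(b, a).items()})

def power(a, m, n):
    """The m-th power of a in k[S_n]."""
    out = one(n)
    for _ in range(m):
        out = mul(out, a)
    return out

# Check [t_i, t_j]^(j - i + 1) = 0 for all 1 <= i <= j <= n, with n = 5.
n = 5
for j in range(1, n + 1):
    for i in range(1, j + 1):
        assert power(comm(t(i, n), t(j, n)), j - i + 1, n) == {}

# Remark 42's counterexample in k[S_4]: the i_k ∈ [j] restriction of
# Theorem 38 cannot simply be dropped, as [t_3, t_2] [t_1, t_2] ≠ 0.
assert mul(comm(t(3, 4), t(2, 4)), comm(t(1, 4), t(2, 4))) != {}
```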
## The elements $\mu_{i,j}$ for $i\in\left[ j-1\right]$ We first introduce a family of elements of the group algebra $\mathbf{k}% \left[ S_{n}\right]$. **Definition 45** (). Set $\mathbf{A}=\mathbf{k}\left[ S_{n}\right]$. **Definition 46** (). [\[def.muij\]]{#def.muij label="def.muij"}Let $j\in\left[ n\right]$, and let $i\in\left[ j-1\right]$. Then, $j-1\geq1$ (since $i\in\left[ j-1\right]$ entails $1\leq i\leq j-1$), so that $j-1\in\left[ n\right]$. Hence, the elements $\left( i\Longrightarrow j-1\right) \in S_{n}$ and $t_{j-1}\in\mathbf{k}\left[ S_{n}\right]$ are well-defined. Now, we define an element $$\mu_{i,j}:=\left( i\Longrightarrow j-1\right) t_{j-1}\in\mathbf{A}.$$ **Lemma 47** (). [\[lem.com-mu\]]{#lem.com-mu label="lem.com-mu"}Let $j\in\left[ n\right]$, and let $i\in\left[ j-1\right]$. Then, $$\begin{aligned} \left[ t_{i},t_{j}\right] & =\left( i\Longrightarrow j-1\right) \left[ t_{j-1},t_{j}\right] \label{eq.lem.com-mu.1}\\ & =\mu_{i,j}\left( t_{j}-t_{j-1}+1\right) . \label{eq.lem.com-mu.2}%\end{aligned}$$ **Proof.** From $i\in\left[ j-1\right]$, we obtain $1\leq i\leq j-1$, so that $j-1\geq1$. Thus, $j-1\in\left[ n-1\right]$ (since $j-1<j\leq n$). Hence, ([\[eq.cor.titi+12=0.comm\]](#eq.cor.titi+12=0.comm){reference-type="ref" reference="eq.cor.titi+12=0.comm"}) (applied to $j-1$ instead of $i$) yields $$\left[ t_{j-1},t_{j-1+1}\right] =t_{j-1}\left( t_{j-1+1}-\left( t_{j-1}-1\right) \right) .$$ Since $j-1+1=j$, we can rewrite this as $$\left[ t_{j-1},t_{j}\right] =t_{j-1}\left( t_{j}-\left( t_{j-1}-1\right) \right) . \label{pf.lem.com-mu.0}%$$ We have $i\leq j-1$. Hence, Proposition [\[prop.cycpq.sss\]](#prop.cycpq.sss){reference-type="ref" reference="prop.cycpq.sss"} (applied to $v=i$ and $w=j-1$) yields $$\left( i\Longrightarrow j-1\right) =s_{i}s_{i+1}\cdots s_{\left( j-1\right) -1}=s_{i}s_{i+1}\cdots s_{j-2}. \label{pf.lem.com-mu.1}%$$ However, $i\leq j-1<j$. 
Thus, Lemma [\[lem.ti-to-tj-1\]](#lem.ti-to-tj-1){reference-type="ref" reference="lem.ti-to-tj-1"} **(b)** yields $$\left[ t_{i},t_{j}\right] =\underbrace{\left( s_{i}s_{i+1}\cdots s_{j-2}\right) }_{\substack{=\left( i\Longrightarrow j-1\right) \\\text{(by (\ref{pf.lem.com-mu.1}))}}}\left[ t_{j-1},t_{j}\right] =\left( i\Longrightarrow j-1\right) \left[ t_{j-1},t_{j}\right] .$$ This proves ([\[eq.lem.com-mu.1\]](#eq.lem.com-mu.1){reference-type="ref" reference="eq.lem.com-mu.1"}). Furthermore, $$\begin{aligned} \left[ t_{i},t_{j}\right] & =\left( i\Longrightarrow j-1\right) \underbrace{\left[ t_{j-1},t_{j}\right] }_{\substack{=t_{j-1}\left( t_{j}-\left( t_{j-1}-1\right) \right) \\\text{(by (\ref{pf.lem.com-mu.0}% ))}}}=\underbrace{\left( i\Longrightarrow j-1\right) t_{j-1}}% _{\substack{=\mu_{i,j}\\\text{(by the definition of }\mu_{i,j}\text{)}% }}\underbrace{\left( t_{j}-\left( t_{j-1}-1\right) \right) }% _{=t_{j}-t_{j-1}+1}\\ & =\mu_{i,j}\left( t_{j}-t_{j-1}+1\right) .\end{aligned}$$ This proves ([\[eq.lem.com-mu.2\]](#eq.lem.com-mu.2){reference-type="ref" reference="eq.lem.com-mu.2"}). Thus, Lemma [\[lem.com-mu\]](#lem.com-mu){reference-type="ref" reference="lem.com-mu"} is proved.   ------------------------------------------------------------------------ **Lemma 48** (). [\[lem.abc=cab\]]{#lem.abc=cab label="lem.abc=cab"}Let $R$ be a ring. Let $a,b,c\in R$ be three elements satisfying $ca=ac$ and $cb=bc$. Then, $$c\left[ a,b\right] =\left[ a,b\right] c.$$ **Proof.** The definition of a commutator yields $\left[ a,b\right] =ab-ba$. Thus, $$\begin{aligned} c\underbrace{\left[ a,b\right] }_{=ab-ba} & =c\left( ab-ba\right) =\underbrace{ca}_{=ac}b-\underbrace{cb}_{=bc}a=a\underbrace{cb}_{=bc}% -\,b\underbrace{ca}_{=ac}\\ & =abc-bac=\underbrace{\left( ab-ba\right) }_{=\left[ a,b\right] }c=\left[ a,b\right] c.\end{aligned}$$ This proves Lemma [\[lem.abc=cab\]](#lem.abc=cab){reference-type="ref" reference="lem.abc=cab"}.   
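Lemma [\[lem.abc=cab\]](#lem.abc=cab){reference-type="ref" reference="lem.abc=cab"} is elementary, but a concrete instance may be helpful: in $\mathbf{A}=\mathbf{k}\left[ S_{3}\right]$, the sum $c$ of all six permutations is central, so it commutes with $a=t_{1}$ and $b=t_{2}$, and the lemma then gives $c\left[ a,b\right] =\left[ a,b\right] c$. The Python sketch below (purely illustrative; ad-hoc names, with $\mathbf{k}\left[ S_{n}\right]$ modelled as dictionaries from permutations to coefficients and $t_{j}$ built from the recursion $t_{n}=1$ and $t_{\ell}=1+s_{\ell}t_{\ell+1}$) verifies this instance:

```python
from itertools import permutations

def compose(p, q):
    """Composition p∘q of permutations of {0, ..., n-1}, stored as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

def mul(a, b):
    """Product in k[S_n]; elements are dicts {permutation: coefficient}."""
    out = {}
    for p, cp in a.items():
        for q, cq in b.items():
            r = compose(p, q)
            out[r] = out.get(r, 0) + cp * cq
    return {k: c for k, c in out.items() if c}

def add(a, b):
    """Sum of two group-algebra elements."""
    out = dict(a)
    for k, c in b.items():
        out[k] = out.get(k, 0) + c
    return {k: c for k, c in out.items() if c}

def one(n):
    """The identity of k[S_n]."""
    return {tuple(range(n)): 1}

def s(i, n):
    """The adjacent transposition s_i (for i in [n-1]) as an algebra element."""
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return {tuple(p): 1}

def t(j, n):
    """t_n = 1 and t_l = 1 + s_l t_{l+1}."""
    out = one(n)
    for l in range(n - 1, j - 1, -1):
        out = add(one(n), mul(s(l, n), out))
    return out

def comm(a, b):
    """The commutator [a, b] = ab - ba."""
    return add(mul(a, b), {k: -c for k, c in mul(b, a).items()})

n = 3
c = {p: 1 for p in permutations(range(n))}  # sum of all of S_3: central in k[S_3]
a, b = t(1, n), t(2, n)
assert mul(c, a) == mul(a, c) and mul(c, b) == mul(b, c)  # hypotheses of the lemma
assert mul(c, comm(a, b)) == mul(comm(a, b), c)           # its conclusion
```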
------------------------------------------------------------------------ **Lemma 49** (). [\[lem.move-mu\]]{#lem.move-mu label="lem.move-mu"}Let $i,j,k\in\left[ n\right]$ be such that $i\leq k<j-1$. Then, $$\left[ t_{i},t_{j}\right] \mu_{k,j}=\mu_{k+1,j}\left[ t_{i},t_{j-1}\right] .$$ **Proof.** We have $j-1\geq j-1>k\geq i$. Thus, Lemma [\[lem.jpiq1\]](#lem.jpiq1){reference-type="ref" reference="lem.jpiq1"} (applied to $k$, $j-1$ and $j-1$ instead of $j$, $v$ and $w$) yields $$\begin{aligned} \left( k+1\Longrightarrow j-1\right) \left( i\Longrightarrow j-1\right) & =\left( i\Longrightarrow j-1\right) \left( k\Longrightarrow \underbrace{\left( j-1\right) -1}_{=j-2}\right) \nonumber\\ & =\left( i\Longrightarrow j-1\right) \left( k\Longrightarrow j-2\right) .\label{pf.lem.move-mu.1}%\end{aligned}$$ From $i<j-1$, we obtain $i\leq j-2$ and thus $i\in\left[ j-2\right] \subseteq\left[ j-1\right]$. Likewise, $k\in\left[ j-1\right]$ (since $k<j-1$). Furthermore, Proposition [\[prop.cycpq.rec\]](#prop.cycpq.rec){reference-type="ref" reference="prop.cycpq.rec"} **(b)** (applied to $v=k$ and $w=j-1$) yields $$\begin{aligned} \left( k\Longrightarrow j-1\right) & =\left( k\Longrightarrow\left( j-1\right) -1\right) s_{\left( j-1\right) -1}\ \ \ \ \ \ \ \ \ \ \left( \text{since }k<j-1\right) \nonumber\\ & =\left( k\Longrightarrow j-2\right) s_{j-2}\label{pf.lem.move-mu.2k}%\end{aligned}$$ (since $\left( j-1\right) -1=j-2$). The same argument (applied to $i$ instead of $k$) yields $$\left( i\Longrightarrow j-1\right) =\left( i\Longrightarrow j-2\right) s_{j-2}\label{pf.lem.move-mu.2i}%$$ (since $i<j-1$). We have $k\leq j-2$ (since $k<j-1$) and $j-2<j$. 
Thus, ([\[eq.lem.commute-with-tj-specific.ab=ba\]](#eq.lem.commute-with-tj-specific.ab=ba){reference-type="ref" reference="eq.lem.commute-with-tj-specific.ab=ba"}) (applied to $k$ and $j-2$ instead of $i$ and $k$) yields $$\left( k\Longrightarrow j-2\right) t_{j}=t_{j}\left( k\Longrightarrow j-2\right) .\label{pf.lem.move-mu.3j}%$$ Furthermore, we have $k\leq j-2$ and $j-2<j-1$. Thus, ([\[eq.lem.commute-with-tj-specific.ab=ba\]](#eq.lem.commute-with-tj-specific.ab=ba){reference-type="ref" reference="eq.lem.commute-with-tj-specific.ab=ba"}) (applied to $k$, $j-2$ and $j-1$ instead of $i$, $k$ and $j$) yields $$\left( k\Longrightarrow j-2\right) t_{j-1}=t_{j-1}\left( k\Longrightarrow j-2\right) .\label{pf.lem.move-mu.3j-1}%$$ The same argument (but using $i$ instead of $k$) shows that $$\left( i\Longrightarrow j-2\right) t_{j-1}=t_{j-1}\left( i\Longrightarrow j-2\right) \label{pf.lem.move-mu.3j-1i}%$$ (since $i\leq j-2$). From ([\[pf.lem.move-mu.3j\]](#pf.lem.move-mu.3j){reference-type="ref" reference="pf.lem.move-mu.3j"}) and ([\[pf.lem.move-mu.3j-1\]](#pf.lem.move-mu.3j-1){reference-type="ref" reference="pf.lem.move-mu.3j-1"}), we obtain $$\left( k\Longrightarrow j-2\right) \left[ t_{j-1},t_{j}\right] =\left[ t_{j-1},t_{j}\right] \left( k\Longrightarrow j-2\right) \label{pf.lem.move-mu.3}%$$ (by Lemma [\[lem.abc=cab\]](#lem.abc=cab){reference-type="ref" reference="lem.abc=cab"}, applied to $R=\mathbf{A}$, $a=t_{j-1}$, $b=t_{j}$ and $c=\left( k\Longrightarrow j-2\right)$). Now, the definition of $\mu_{k,j}$ yields $\mu_{k,j}=\left( k\Longrightarrow j-1\right) t_{j-1}$. 
Hence, $$\begin{aligned} & \underbrace{\left[ t_{i},t_{j}\right] }_{\substack{=\left( i\Longrightarrow j-1\right) \left[ t_{j-1},t_{j}\right] \\\text{(by (\ref{eq.lem.com-mu.1}))}}}\ \ \underbrace{\mu_{k,j}}_{=\left( k\Longrightarrow j-1\right) t_{j-1}}\nonumber\\ & =\left( i\Longrightarrow j-1\right) \left[ t_{j-1},t_{j}\right] \underbrace{\left( k\Longrightarrow j-1\right) }_{\substack{=\left( k\Longrightarrow j-2\right) s_{j-2}\\\text{(by (\ref{pf.lem.move-mu.2k}))}% }}t_{j-1}\nonumber\\ & =\left( i\Longrightarrow j-1\right) \underbrace{\left[ t_{j-1}% ,t_{j}\right] \left( k\Longrightarrow j-2\right) }_{\substack{=\left( k\Longrightarrow j-2\right) \left[ t_{j-1},t_{j}\right] \\\text{(by (\ref{pf.lem.move-mu.3}))}}}s_{j-2}t_{j-1}\nonumber\\ & =\underbrace{\left( i\Longrightarrow j-1\right) \left( k\Longrightarrow j-2\right) }_{\substack{=\left( k+1\Longrightarrow j-1\right) \left( i\Longrightarrow j-1\right) \\\text{(by (\ref{pf.lem.move-mu.1}))}}}\left[ t_{j-1},t_{j}\right] s_{j-2}t_{j-1}\nonumber\\ & =\left( k+1\Longrightarrow j-1\right) \underbrace{\left( i\Longrightarrow j-1\right) }_{\substack{=\left( i\Longrightarrow j-2\right) s_{j-2}\\\text{(by (\ref{pf.lem.move-mu.2i}))}}}\left[ t_{j-1},t_{j}\right] s_{j-2}t_{j-1}\nonumber\\ & =\left( k+1\Longrightarrow j-1\right) \left( i\Longrightarrow j-2\right) s_{j-2}\left[ t_{j-1},t_{j}\right] s_{j-2}t_{j-1}% .\label{pf.lem.move-mu.5}%\end{aligned}$$ Next, we shall simplify the product $s_{j-2}\left[ t_{j-1},t_{j}\right] s_{j-2}t_{j-1}$ on the right hand side. 
Lemma [\[lem.ti-to-tj-1\]](#lem.ti-to-tj-1){reference-type="ref" reference="lem.ti-to-tj-1"} **(b)** (applied to $j-2$ instead of $i$) yields that $$\begin{aligned} \left[ t_{j-2},t_{j}\right] & =\underbrace{\left( s_{j-2}s_{\left( j-2\right) +1}\cdots s_{j-2}\right) }_{=s_{j-2}}\left[ t_{j-1}% ,t_{j}\right] \ \ \ \ \ \ \ \ \ \ \left( \text{since }j-2<j\right) \nonumber\\ & =s_{j-2}\left[ t_{j-1},t_{j}\right] .\label{pf.lem.move-mu.6}%\end{aligned}$$ Furthermore, $i\leq j-2$, so that $j-2\geq i\geq1$. Combining this with $j-2\leq n-2$ (since $j\leq n$), we obtain $j-2\in\left[ n-2\right] \subseteq\left[ n-1\right]$. Hence, Corollary [\[cor.tl-via-tl+1\]](#cor.tl-via-tl+1){reference-type="ref" reference="cor.tl-via-tl+1"} (applied to $\ell=j-2$) yields $t_{j-2}=1+s_{j-2}\underbrace{t_{\left( j-2\right) +1}}_{=t_{j-1}}=1+s_{j-2}t_{j-1}$. Hence, $$t_{j-2}-1=s_{j-2}t_{j-1}.\label{pf.lem.move-mu.7}%$$ Multiplying the equalities ([\[pf.lem.move-mu.6\]](#pf.lem.move-mu.6){reference-type="ref" reference="pf.lem.move-mu.6"}) and ([\[pf.lem.move-mu.7\]](#pf.lem.move-mu.7){reference-type="ref" reference="pf.lem.move-mu.7"}) together, we obtain $$\left[ t_{j-2},t_{j}\right] \left( t_{j-2}-1\right) =s_{j-2}\left[ t_{j-1},t_{j}\right] s_{j-2}t_{j-1}.\label{pf.lem.move-mu.6x7}%$$ On the other hand, Corollary [\[cor.titi+2ti-1\]](#cor.titi+2ti-1){reference-type="ref" reference="cor.titi+2ti-1"} (applied to $i=j-2$) yields $$\left[ t_{j-2},t_{j-2+2}\right] \left( t_{j-2}-1\right) =t_{j-2+1}\left[ t_{j-2},t_{j-2+1}\right] \ \ \ \ \ \ \ \ \ \ \left( \text{since }% j-2\in\left[ n-2\right] \right) .$$ In view of $j-2+2=j$ and $j-2+1=j-1$, we can rewrite this as $$\left[ t_{j-2},t_{j}\right] \left( t_{j-2}-1\right) =t_{j-1}\left[ t_{j-2},t_{j-1}\right] .$$ Comparing this with ([\[pf.lem.move-mu.6x7\]](#pf.lem.move-mu.6x7){reference-type="ref" reference="pf.lem.move-mu.6x7"}), we obtain $$s_{j-2}\left[ t_{j-1},t_{j}\right] s_{j-2}t_{j-1}=t_{j-1}\left[ t_{j-2},t_{j-1}\right] .$$ Hence, 
([\[pf.lem.move-mu.5\]](#pf.lem.move-mu.5){reference-type="ref" reference="pf.lem.move-mu.5"}) becomes $$\begin{aligned} \left[ t_{i},t_{j}\right] \mu_{k,j} & =\left( k+1\Longrightarrow j-1\right) \left( i\Longrightarrow j-2\right) \underbrace{s_{j-2}\left[ t_{j-1},t_{j}\right] s_{j-2}t_{j-1}}_{=t_{j-1}\left[ t_{j-2},t_{j-1}\right] }\nonumber\\ & =\left( k+1\Longrightarrow j-1\right) \underbrace{\left( i\Longrightarrow j-2\right) t_{j-1}}_{\substack{=t_{j-1}\left( i\Longrightarrow j-2\right) \\\text{(by (\ref{pf.lem.move-mu.3j-1i}))}% }}\left[ t_{j-2},t_{j-1}\right] \nonumber\\ & =\left( k+1\Longrightarrow j-1\right) t_{j-1}\left( i\Longrightarrow j-2\right) \left[ t_{j-2},t_{j-1}\right] . \label{pf.lem.move-mu.9}%\end{aligned}$$ But $k\leq j-2$, so that $k+1\leq j-1$. Thus, $k+1\in\left[ j-1\right]$. Hence, the definition of $\mu_{k+1,j}$ yields $$\mu_{k+1,j}=\left( k+1\Longrightarrow j-1\right) t_{j-1}. \label{pf.lem.move-mu.10}%$$ Furthermore, $i\leq j-2=\left( j-1\right) -1$, so that $i\in\left[ \left( j-1\right) -1\right]$. Hence, ([\[eq.lem.com-mu.1\]](#eq.lem.com-mu.1){reference-type="ref" reference="eq.lem.com-mu.1"}) (applied to $j-1$ instead of $j$) yields $$\begin{aligned} \left[ t_{i},t_{j-1}\right] & =\left( i\Longrightarrow\left( j-1\right) -1\right) \left[ t_{\left( j-1\right) -1},t_{j-1}\right] \nonumber\\ & =\left( i\Longrightarrow j-2\right) \left[ t_{j-2},t_{j-1}\right] \label{pf.lem.move-mu.11}%\end{aligned}$$ (since $\left( j-1\right) -1=j-2$). 
Multiplying the equalities ([\[pf.lem.move-mu.10\]](#pf.lem.move-mu.10){reference-type="ref" reference="pf.lem.move-mu.10"}) and ([\[pf.lem.move-mu.11\]](#pf.lem.move-mu.11){reference-type="ref" reference="pf.lem.move-mu.11"}), we obtain $$\mu_{k+1,j}\left[ t_{i},t_{j-1}\right] =\left( k+1\Longrightarrow j-1\right) t_{j-1}\left( i\Longrightarrow j-2\right) \left[ t_{j-2}% ,t_{j-1}\right] .$$ Comparing this with ([\[pf.lem.move-mu.9\]](#pf.lem.move-mu.9){reference-type="ref" reference="pf.lem.move-mu.9"}), we obtain $\left[ t_{i}% ,t_{j}\right] \mu_{k,j}=\mu_{k+1,j}\left[ t_{i},t_{j-1}\right]$. This proves Lemma [\[lem.move-mu\]](#lem.move-mu){reference-type="ref" reference="lem.move-mu"}.   ------------------------------------------------------------------------ We can combine Lemma [\[lem.com-mu\]](#lem.com-mu){reference-type="ref" reference="lem.com-mu"} and Lemma [\[lem.move-mu\]](#lem.move-mu){reference-type="ref" reference="lem.move-mu"} into a single result: **Lemma 50** (). [\[lem.increase-mu\]]{#lem.increase-mu label="lem.increase-mu"}Let $j\in\left[ n\right]$, and let $i\in\left[ j\right]$ and $k\in\left[ j-1\right]$. Then, we have $$\left( \left[ t_{i},t_{j}\right] \mu_{k,j}=0\right) \text{ or }\left( \left[ t_{i},t_{j}\right] \mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some }\ell\in\left[ k+1,j-1\right] \right) .$$ **Proof.** If $i=j$, then this holds for obvious reasons[^6]. Hence, for the rest of this proof, we WLOG assume that $i\neq j$. We have $i\leq j$ (since $i\in\left[ j\right]$). Combining this with $i\neq j$, we obtain $i<j$, so that $i\leq j-1$. In other words, $i\in\left[ j-1\right]$. From $k\in\left[ j-1\right]$, we obtain $1\leq k\leq j-1$, so that $j-1\geq1$ and therefore $j\geq2$. Hence, $j\in\left[ 2,n\right]$. We are in one of the following three cases: *Case 1:* We have $k\geq j-1$. *Case 2:* We have $i>k$. *Case 3:* We have neither $k\geq j-1$ nor $i>k$. Let us first consider Case 1. In this case, we have $k\geq j-1$. 
Combining this with $k\leq j-1$, we obtain $k=j-1$. The definition of $\mu_{k,j}$ yields $$\mu_{k,j}=\left( \underbrace{k}_{=j-1}\Longrightarrow j-1\right) t_{j-1}=\underbrace{\left( j-1\Longrightarrow j-1\right) }% _{\substack{=1\\\text{(by (\ref{eq.p-cyc-p=1}))}}}t_{j-1}=t_{j-1}.$$ Hence, $$\left[ t_{i},t_{j}\right] \underbrace{\mu_{k,j}}_{=t_{j-1}}=\left[ t_{i},t_{j}\right] t_{j-1}=0\ \ \ \ \ \ \ \ \ \ \left( \text{by Corollary \ref{cor.titjtj-1}}\right) .$$ Thus, we have $\left( \left[ t_{i},t_{j}\right] \mu_{k,j}=0\right)$ or $\left( \left[ t_{i},t_{j}\right] \mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some }\ell\in\left[ k+1,j-1\right] \right)$. This proves Lemma [\[lem.increase-mu\]](#lem.increase-mu){reference-type="ref" reference="lem.increase-mu"} in Case 1. Let us next consider Case 2. In this case, we have $i>k$. Hence, $i\geq k+1$. Combined with $i\leq j-1$, this entails $i\in\left[ k+1,j-1\right]$. Furthermore, ([\[eq.lem.com-mu.2\]](#eq.lem.com-mu.2){reference-type="ref" reference="eq.lem.com-mu.2"}) shows that $$\left[ t_{i},t_{j}\right] =\mu_{i,j}\underbrace{\left( t_{j}-t_{j-1}% +1\right) }_{\in\mathbf{A}}\in\mu_{i,j}\mathbf{A}.$$ We now know that $i\in\left[ k+1,j-1\right]$ and $\left[ t_{i}% ,t_{j}\right] \in\mu_{i,j}\mathbf{A}$. Therefore, $\left[ t_{i}% ,t_{j}\right] \mu_{k,j}\in\mu_{\ell,j}\mathbf{A}$ for some $\ell\in\left[ k+1,j-1\right]$ (namely, for $\ell=i$). Thus, we have $\left( \left[ t_{i},t_{j}\right] \mu_{k,j}=0\right)$ or $\left( \left[ t_{i}% ,t_{j}\right] \mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some }\ell \in\left[ k+1,j-1\right] \right)$. This proves Lemma [\[lem.increase-mu\]](#lem.increase-mu){reference-type="ref" reference="lem.increase-mu"} in Case 2. Finally, let us consider Case 3. In this case, we have neither $k\geq j-1$ nor $i>k$. In other words, we have $k<j-1$ and $i\leq k$. Thus, $i\leq k<j-1$. 
Hence, Lemma [\[lem.move-mu\]](#lem.move-mu){reference-type="ref" reference="lem.move-mu"} yields $$\left[ t_{i},t_{j}\right] \mu_{k,j}=\mu_{k+1,j}\underbrace{\left[ t_{i},t_{j-1}\right] }_{\in\mathbf{A}}\in\mu_{k+1,j}\mathbf{A}.$$ Furthermore, $k<j-1$, so that $k\leq\left( j-1\right) -1$. In other words, $k+1\leq j-1$. Hence, $k+1\in\left[ k+1,j-1\right]$. We now know that $k+1\in\left[ k+1,j-1\right]$ and $\left[ t_{i}% ,t_{j}\right] \mu_{k,j}\in\mu_{k+1,j}\mathbf{A}$. Hence, $\left[ t_{i}% ,t_{j}\right] \mu_{k,j}\in\mu_{\ell,j}\mathbf{A}$ for some $\ell\in\left[ k+1,j-1\right]$ (namely, for $\ell=k+1$). Thus, we have $\left( \left[ t_{i},t_{j}\right] \mu_{k,j}=0\right)$ or $\left( \left[ t_{i}% ,t_{j}\right] \mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some }\ell \in\left[ k+1,j-1\right] \right)$. This proves Lemma [\[lem.increase-mu\]](#lem.increase-mu){reference-type="ref" reference="lem.increase-mu"} in Case 3. We have now proved Lemma [\[lem.increase-mu\]](#lem.increase-mu){reference-type="ref" reference="lem.increase-mu"} in each of the three Cases 1, 2 and 3. Hence, this lemma is proved in all situations.   ------------------------------------------------------------------------ ## Products of $\left[ t_{i},t_{j}\right]$'s for a fixed $j$ redux For the sake of convenience, we shall restate Lemma [\[lem.increase-mu\]](#lem.increase-mu){reference-type="ref" reference="lem.increase-mu"} in a simpler form. To this purpose, we extend Definition [\[def.muij\]](#def.muij){reference-type="ref" reference="def.muij"} somewhat: **Definition 51** (). [\[def.muij2\]]{#def.muij2 label="def.muij2"}Let $j\in\left[ n\right]$, and let $i$ be a positive integer. In Definition [\[def.muij\]](#def.muij){reference-type="ref" reference="def.muij"}, we have defined $\mu_{i,j}$ whenever $i\in\left[ j-1\right]$. 
We now set $$\mu_{i,j}:=0\in\mathbf{A}\ \ \ \ \ \ \ \ \ \ \text{whenever }i\notin\left[ j-1\right] .$$ Thus, $\mu_{i,j}$ is defined for all positive integers $i$ (not just for $i\in\left[ j-1\right]$). For example, $\mu_{j,j}=0$ (since $j\notin\left[ j-1\right]$). Using this extended meaning of $\mu_{i,j}$, we can rewrite Lemma [\[lem.increase-mu\]](#lem.increase-mu){reference-type="ref" reference="lem.increase-mu"} as follows: **Lemma 52** (). [\[lem.increase-mu2\]]{#lem.increase-mu2 label="lem.increase-mu2"}Let $j\in\left[ n\right]$, and let $i\in\left[ j\right]$. Let $k$ be a positive integer. Then, $$\left[ t_{i},t_{j}\right] \mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some integer }\ell\geq k+1.$$ **Proof.** If $k\geq j$, then this holds for obvious reasons[^7]. Hence, for the rest of this proof, we WLOG assume that $k<j$. Thus, $k\in\left[ j-1\right]$ (since $k$ is a positive integer). Therefore, Lemma [\[lem.increase-mu\]](#lem.increase-mu){reference-type="ref" reference="lem.increase-mu"} yields that we have $$\left( \left[ t_{i},t_{j}\right] \mu_{k,j}=0\right) \text{ or }\left( \left[ t_{i},t_{j}\right] \mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some }\ell\in\left[ k+1,j-1\right] \right) .$$ In other words, we are in one of the following cases: *Case 1:* We have $\left[ t_{i},t_{j}\right] \mu_{k,j}=0$. *Case 2:* We have $\left[ t_{i},t_{j}\right] \mu_{k,j}\in\mu_{\ell ,j}\mathbf{A}$ for some $\ell\in\left[ k+1,j-1\right]$. Let us first consider Case 1. In this case, we have $\left[ t_{i}% ,t_{j}\right] \mu_{k,j}=0$. Hence, $\left[ t_{i},t_{j}\right] \mu _{k,j}=0=\mu_{k+1,j}\cdot\underbrace{0}_{\in\mathbf{A}}\in\mu_{k+1,j}% \mathbf{A}$. Hence, $\left[ t_{i},t_{j}\right] \mu_{k,j}\in\mu_{\ell ,j}\mathbf{A}$ for some integer $\ell\geq k+1$ (namely, for $\ell=k+1$). Thus, Lemma [\[lem.increase-mu2\]](#lem.increase-mu2){reference-type="ref" reference="lem.increase-mu2"} is proved in Case 1. Let us now consider Case 2. 
In this case, we have $\left[ t_{i},t_{j}\right] \mu_{k,j}\in\mu_{\ell,j}\mathbf{A}$ for some $\ell\in\left[ k+1,j-1\right]$. Hence, we have $\left[ t_{i},t_{j}\right] \mu_{k,j}\in\mu_{\ell ,j}\mathbf{A}$ for some integer $\ell\geq k+1$ (because any $\ell\in\left[ k+1,j-1\right]$ is an integer $\geq k+1$). Thus, Lemma [\[lem.increase-mu2\]](#lem.increase-mu2){reference-type="ref" reference="lem.increase-mu2"} is proved in Case 2. We have now proved Lemma [\[lem.increase-mu2\]](#lem.increase-mu2){reference-type="ref" reference="lem.increase-mu2"} in both Cases 1 and 2. Hence, Lemma [\[lem.increase-mu2\]](#lem.increase-mu2){reference-type="ref" reference="lem.increase-mu2"} is proved in all situations.   ------------------------------------------------------------------------ The next lemma is similar to Lemma [\[lem.right-bound-1\]](#lem.right-bound-1){reference-type="ref" reference="lem.right-bound-1"}, and will play a similar role: **Lemma 53** (). [\[lem.left-bound-1\]]{#lem.left-bound-1 label="lem.left-bound-1"}Let $j\in\left[ n\right]$. Let $k$ be a positive integer, and let $m\in\mathbb{N}$. Let $i_{1},i_{2},\ldots,i_{m}$ be $m$ elements of $\left[ j\right]$ (not necessarily distinct). Then, $$\left[ t_{i_{m}},t_{j}\right] \left[ t_{i_{m-1}},t_{j}\right] \cdots\left[ t_{i_{1}},t_{j}\right] \mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some integer }\ell\geq k+m.$$ **Proof.** We shall show that for each $v\in\left\{ 0,1,\ldots,m\right\}$, we have $$\left[ t_{i_{v}},t_{j}\right] \left[ t_{i_{v-1}},t_{j}\right] \cdots\left[ t_{i_{1}},t_{j}\right] \mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some integer }\ell\geq k+v. \label{pf.lem.left-bound-1.1}%$$ In fact, we shall prove ([\[pf.lem.left-bound-1.1\]](#pf.lem.left-bound-1.1){reference-type="ref" reference="pf.lem.left-bound-1.1"}) by induction on $v$: *Base case:* Let us check that ([\[pf.lem.left-bound-1.1\]](#pf.lem.left-bound-1.1){reference-type="ref" reference="pf.lem.left-bound-1.1"}) holds for $v=0$. 
Indeed, $$\underbrace{\left[ t_{i_{0}},t_{j}\right] \left[ t_{i_{0-1}},t_{j}\right] \cdots\left[ t_{i_{1}},t_{j}\right] }_{=\left( \text{empty product}\right) =1}\mu_{k,j}=\mu_{k,j}=\mu_{k,j}\underbrace{1}_{\in\mathbf{A}}\in\mu _{k,j}\mathbf{A}.$$ Thus, $\left[ t_{i_{0}},t_{j}\right] \left[ t_{i_{0-1}},t_{j}\right] \cdots\left[ t_{i_{1}},t_{j}\right] \mu_{k,j}\in\mu_{\ell,j}\mathbf{A}$ for some integer $\ell\geq k+0$ (namely, for $\ell=k$). In other words, ([\[pf.lem.left-bound-1.1\]](#pf.lem.left-bound-1.1){reference-type="ref" reference="pf.lem.left-bound-1.1"}) holds for $v=0$. This completes the base case. *Induction step:* Let $v\in\left\{ 0,1,\ldots,m-1\right\}$. Assume (as the induction hypothesis) that ([\[pf.lem.left-bound-1.1\]](#pf.lem.left-bound-1.1){reference-type="ref" reference="pf.lem.left-bound-1.1"}) holds for $v$. We must prove that ([\[pf.lem.left-bound-1.1\]](#pf.lem.left-bound-1.1){reference-type="ref" reference="pf.lem.left-bound-1.1"}) holds for $v+1$ instead of $v$. In other words, we must prove that $$\left[ t_{i_{v+1}},t_{j}\right] \left[ t_{i_{v}},t_{j}\right] \cdots\left[ t_{i_{1}},t_{j}\right] \mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some integer }\ell\geq k+\left( v+1\right) .$$ Our induction hypothesis says that ([\[pf.lem.left-bound-1.1\]](#pf.lem.left-bound-1.1){reference-type="ref" reference="pf.lem.left-bound-1.1"}) holds for $v$. In other words, it says that $$\left[ t_{i_{v}},t_{j}\right] \left[ t_{i_{v-1}},t_{j}\right] \cdots\left[ t_{i_{1}},t_{j}\right] \mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some integer }\ell\geq k+v.$$ Let us denote this integer $\ell$ by $w$. Thus, $w\geq k+v$ is an integer and satisfies $$\left[ t_{i_{v}},t_{j}\right] \left[ t_{i_{v-1}},t_{j}\right] \cdots\left[ t_{i_{1}},t_{j}\right] \mu_{k,j}\in\mu_{w,j}\mathbf{A}. \label{pf.lem.left-bound-1.IS.3}%$$ However, $w\geq k+v\geq k$, so that $w$ is a positive integer. Also, $i_{v+1}\in\left[ j\right]$. 
Thus, Lemma [\[lem.increase-mu2\]](#lem.increase-mu2){reference-type="ref" reference="lem.increase-mu2"} (applied to $i_{v+1}$ and $w$ instead of $i$ and $k$) yields that $$\left[ t_{i_{v+1}},t_{j}\right] \mu_{w,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some integer }\ell\geq w+1. \label{pf.lem.left-bound-1.IS.4}%$$ Consider this $\ell$. Thus, $\ell\geq\underbrace{w}_{\geq k+v}+\,1\geq k+v+1=k+\left( v+1\right)$. Furthermore, $$\begin{aligned} \underbrace{\left[ t_{i_{v+1}},t_{j}\right] \left[ t_{i_{v}},t_{j}\right] \cdots\left[ t_{i_{1}},t_{j}\right] }_{=\left[ t_{i_{v+1}},t_{j}\right] \cdot\left( \left[ t_{i_{v}},t_{j}\right] \left[ t_{i_{v-1}},t_{j}\right] \cdots\left[ t_{i_{1}},t_{j}\right] \right) }\mu_{k,j} & =\left[ t_{i_{v+1}},t_{j}\right] \cdot\underbrace{\left( \left[ t_{i_{v}}% ,t_{j}\right] \left[ t_{i_{v-1}},t_{j}\right] \cdots\left[ t_{i_{1}}% ,t_{j}\right] \right) \mu_{k,j}}_{\substack{\in\mu_{w,j}\mathbf{A}% \\\text{(by (\ref{pf.lem.left-bound-1.IS.3}))}}}\\ & \in\underbrace{\left[ t_{i_{v+1}},t_{j}\right] \mu_{w,j}}_{\substack{\in \mu_{\ell,j}\mathbf{A}\\\text{(by (\ref{pf.lem.left-bound-1.IS.4}))}% }}\mathbf{A}\subseteq\mu_{\ell,j}\underbrace{\mathbf{AA}}_{\subseteq \mathbf{A}}\subseteq\mu_{\ell,j}\mathbf{A}.\end{aligned}$$ Thus, we have found an integer $\ell\geq k+\left( v+1\right)$ that satisfies $\left[ t_{i_{v+1}},t_{j}\right] \left[ t_{i_{v}}% ,t_{j}\right] \cdots\left[ t_{i_{1}},t_{j}\right] \mu_{k,j}\in\mu_{\ell ,j}\mathbf{A}$. Hence, we have shown that $$\left[ t_{i_{v+1}},t_{j}\right] \left[ t_{i_{v}},t_{j}\right] \cdots\left[ t_{i_{1}},t_{j}\right] \mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some integer }\ell\geq k+\left( v+1\right) .$$ In other words, ([\[pf.lem.left-bound-1.1\]](#pf.lem.left-bound-1.1){reference-type="ref" reference="pf.lem.left-bound-1.1"}) holds for $v+1$ instead of $v$. This completes the induction step. 
Thus, ([\[pf.lem.left-bound-1.1\]](#pf.lem.left-bound-1.1){reference-type="ref" reference="pf.lem.left-bound-1.1"}) is proved by induction on $v$. Therefore, we can apply ([\[pf.lem.left-bound-1.1\]](#pf.lem.left-bound-1.1){reference-type="ref" reference="pf.lem.left-bound-1.1"}) to $v=m$. We obtain $$\left[ t_{i_{m}},t_{j}\right] \left[ t_{i_{m-1}},t_{j}\right] \cdots\left[ t_{i_{1}},t_{j}\right] \mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some integer }\ell\geq k+m.$$ This proves Lemma [\[lem.left-bound-1\]](#lem.left-bound-1){reference-type="ref" reference="lem.left-bound-1"}.   ------------------------------------------------------------------------ Now, we can show our second main result: **Theorem 54** (). [\[thm.left-bound\]]{#thm.left-bound label="thm.left-bound"}Let $j\in\left[ n\right]$, and let $m$ be a positive integer. Let $k_{1},k_{2},\ldots,k_{m}$ be any $m$ elements of $\left[ j\right]$ (not necessarily distinct) satisfying $m\geq j-k_{m}+1$. Then, $$\left[ t_{k_{1}},t_{j}\right] \left[ t_{k_{2}},t_{j}\right] \cdots\left[ t_{k_{m}},t_{j}\right] =0.$$ **Proof.** If $k_{m}=j$, then this claim is obvious[^8]. Hence, for the rest of this proof, we WLOG assume that $k_{m}\neq j$. Combining this with $k_{m}\leq j$ (since $k_{m}\in\left[ j\right]$), we obtain $k_{m}<j$. Hence, $k_{m}\in\left[ j-1\right]$. Therefore, ([\[eq.lem.com-mu.2\]](#eq.lem.com-mu.2){reference-type="ref" reference="eq.lem.com-mu.2"}) (applied to $i=k_{m}$) yields $$\left[ t_{k_{m}},t_{j}\right] =\mu_{k_{m},j}\left( t_{j}-t_{j-1}+1\right) . \label{pf.thm.left-bound.1}%$$ Now, we have $m-1\in\mathbb{N}$ (since $m$ is a positive integer). Let us define an $\left( m-1\right)$-tuple $\left( i_{1},i_{2},\ldots ,i_{m-1}\right)$ of elements of $\left[ j\right]$ by $$\left( i_{1},i_{2},\ldots,i_{m-1}\right) :=\left( k_{m-1},k_{m-2}% ,\ldots,k_{1}\right)$$ (that is, $i_{v}:=k_{m-v}$ for each $v\in\left[ m-1\right]$). Then, $i_{1},i_{2},\ldots,i_{m-1}$ are $m-1$ elements of $\left[ j\right]$. 
Hence, Lemma [\[lem.left-bound-1\]](#lem.left-bound-1){reference-type="ref" reference="lem.left-bound-1"} (applied to $m-1$ and $k_{m}$ instead of $m$ and $k$) yields $$\left[ t_{i_{m-1}},t_{j}\right] \left[ t_{i_{\left( m-1\right) -1}}% ,t_{j}\right] \cdots\left[ t_{i_{1}},t_{j}\right] \mu_{k_{m},j}\in\mu _{\ell,j}\mathbf{A}\text{ for some integer }\ell\geq k_{m}+\left( m-1\right) .$$ Consider this $\ell$. We have $$\ell\geq k_{m}+\left( m-1\right) =k_{m}+\underbrace{m}_{\geq j-k_{m}% +1}-\,1\geq k_{m}+j-k_{m}+1-1=j>j-1,$$ so that $\ell\notin\left[ j-1\right]$. Therefore, $\mu_{\ell,j}=0$ (by Definition [\[def.muij2\]](#def.muij2){reference-type="ref" reference="def.muij2"}). Hence, $$\left[ t_{i_{m-1}},t_{j}\right] \left[ t_{i_{\left( m-1\right) -1}}% ,t_{j}\right] \cdots\left[ t_{i_{1}},t_{j}\right] \mu_{k_{m},j}% \in\underbrace{\mu_{\ell,j}}_{=0}\mathbf{A}=0\mathbf{A}=0.$$ In other words, $$\left[ t_{i_{m-1}},t_{j}\right] \left[ t_{i_{\left( m-1\right) -1}}% ,t_{j}\right] \cdots\left[ t_{i_{1}},t_{j}\right] \mu_{k_{m},j}=0. \label{pf.thm.left-bound.4}%$$ However, from the equality $\left( i_{1},i_{2},\ldots,i_{m-1}\right) =\left( k_{m-1},k_{m-2},\ldots,k_{1}\right)$, we immediately obtain $\left( i_{m-1},i_{\left( m-1\right) -1},\ldots,i_{1}\right) =\left( k_{1},k_{2},\ldots,k_{m-1}\right)$. Therefore, $$\left[ t_{i_{m-1}},t_{j}\right] \left[ t_{i_{\left( m-1\right) -1}}% ,t_{j}\right] \cdots\left[ t_{i_{1}},t_{j}\right] =\left[ t_{k_{1}}% ,t_{j}\right] \left[ t_{k_{2}},t_{j}\right] \cdots\left[ t_{k_{m-1}}% ,t_{j}\right] .$$ Thus, we can rewrite ([\[pf.thm.left-bound.4\]](#pf.thm.left-bound.4){reference-type="ref" reference="pf.thm.left-bound.4"}) as $$\left[ t_{k_{1}},t_{j}\right] \left[ t_{k_{2}},t_{j}\right] \cdots\left[ t_{k_{m-1}},t_{j}\right] \mu_{k_{m},j}=0. 
\label{pf.thm.left-bound.6}%$$ Now, $$\begin{aligned} \left[ t_{k_{1}},t_{j}\right] \left[ t_{k_{2}},t_{j}\right] \cdots\left[ t_{k_{m}},t_{j}\right] & =\left[ t_{k_{1}},t_{j}\right] \left[ t_{k_{2}% },t_{j}\right] \cdots\left[ t_{k_{m-1}},t_{j}\right] \underbrace{\left[ t_{k_{m}},t_{j}\right] }_{\substack{=\mu_{k_{m},j}\left( t_{j}% -t_{j-1}+1\right) \\\text{(by (\ref{pf.thm.left-bound.1}))}}}\\ & =\underbrace{\left[ t_{k_{1}},t_{j}\right] \left[ t_{k_{2}}% ,t_{j}\right] \cdots\left[ t_{k_{m-1}},t_{j}\right] \mu_{k_{m},j}% }_{\substack{=0\\\text{(by (\ref{pf.thm.left-bound.6}))}}}\left( t_{j}-t_{j-1}+1\right) =0.\end{aligned}$$ This proves Theorem [\[thm.left-bound\]](#thm.left-bound){reference-type="ref" reference="thm.left-bound"}.   ------------------------------------------------------------------------ ## The identity $\left[ t_{i},t_{j}\right] ^{j-i+1}=0$ for all $i\leq j$ As a particular case of Theorem [\[thm.left-bound\]](#thm.left-bound){reference-type="ref" reference="thm.left-bound"}, we obtain the following: **Corollary 55** (). [\[cor.left-bound\]]{#cor.left-bound label="cor.left-bound"}Let $i,j\in\left[ n\right]$ be such that $i\leq j$. Then, $\left[ t_{i},t_{j}\right] ^{j-i+1}=0$. **Proof.** We have $j-i\geq0$ (since $i\leq j$) and thus $j-i+1\geq1$. Hence, $j-i+1$ is a positive integer. Moreover, $i$ is an element of $\left[ j\right]$ (since $i\leq j$) and we have $j-i+1\geq j-i+1$. Hence, Theorem [\[thm.left-bound\]](#thm.left-bound){reference-type="ref" reference="thm.left-bound"} (applied to $m=j-i+1$ and $k_{r}=i$) yields $\underbrace{\left[ t_{i}% ,t_{j}\right] \left[ t_{i},t_{j}\right] \cdots\left[ t_{i},t_{j}\right] }_{j-i+1\text{ times}}=0$. Thus, $\left[ t_{i},t_{j}\right] ^{j-i+1}% =\underbrace{\left[ t_{i},t_{j}\right] \left[ t_{i},t_{j}\right] \cdots\left[ t_{i},t_{j}\right] }_{j-i+1\text{ times}}=0$. This proves Corollary [\[cor.left-bound\]](#cor.left-bound){reference-type="ref" reference="cor.left-bound"}.   
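As a sanity check on Corollary [\[cor.left-bound\]](#cor.left-bound){reference-type="ref" reference="cor.left-bound"}, the identity $\left[ t_{i},t_{j}\right] ^{j-i+1}=0$ can be verified by brute force for small $n$. The following plain-Python sketch is our own illustration (the paper's own verifications use SageMath; all function names here are ours): it models $\mathbb{Q}\left[ S_{n}\right]$ as dictionaries from permutation tuples to coefficients. Note that nilpotency of a commutator is unaffected by the choice of composition convention, since switching conventions only flips the commutator's sign.

```python
# Brute-force check of [t_i, t_j]^(j-i+1) = 0 in Q[S_n] for small n.
# Group-algebra elements are dicts mapping a permutation (a tuple p with
# p[x] = image of x, 0-indexed) to its coefficient; zero is the empty dict.

def compose(p, q):
    # product pq of permutations, composed like functions: (pq)(x) = p(q(x))
    return tuple(p[q[x]] for x in range(len(p)))

def mul(a, b):
    # product in the group algebra, with zero coefficients stripped
    c = {}
    for p, x in a.items():
        for q, y in b.items():
            r = compose(p, q)
            c[r] = c.get(r, 0) + x * y
    return {p: v for p, v in c.items() if v != 0}

def sub(a, b):
    c = dict(a)
    for p, y in b.items():
        c[p] = c.get(p, 0) - y
    return {p: v for p, v in c.items() if v != 0}

def cyc(l, m, n):
    # the cycle cyc_{l, l+1, ..., m} sending l -> l+1 -> ... -> m -> l (1-based)
    p = list(range(n))
    for x in range(l - 1, m - 1):
        p[x] = x + 1
    p[m - 1] = l - 1
    return tuple(p)

def t(l, n):
    # the somewhere-to-below shuffle t_l = cyc_l + cyc_{l,l+1} + ... + cyc_{l,...,n}
    return {cyc(l, m, n): 1 for m in range(l, n + 1)}

def commutator(a, b):
    return sub(mul(a, b), mul(b, a))

def power(a, k, n):
    b = {tuple(range(n)): 1}  # the identity of Q[S_n]
    for _ in range(k):
        b = mul(b, a)
    return b

def corollary55_holds(n):
    # check [t_i, t_j]^(j-i+1) = 0 for all 1 <= i <= j <= n
    return all(power(commutator(t(i, n), t(j, n)), j - i + 1, n) == {}
               for j in range(1, n + 1) for i in range(1, j + 1))

assert corollary55_holds(3) and corollary55_holds(4)
```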
------------------------------------------------------------------------ # [\[sec.further\]]{#sec.further label="sec.further"}Further directions ## More identities? A few other properties of somewhere-to-below shuffles can be shown. For example, the proofs of the following two propositions are left to the reader: **Proposition 56** (). We have $t_{i}=\sum\limits_{k=i}^{j-1}s_{i}s_{i+1}\cdots s_{k-1}+s_{i}s_{i+1}\cdots s_{j-1}t_{j}$ for any $1\leq i<j\leq n$. **Proposition 57** (). Let $i,j\in\left[ n-1\right]$ be such that $i\leq j$. Then, $\left[ t_{i},t_{j}\right] =\left[ s_{i}s_{i+1}\cdots s_{j-1},\ s_{j}\right] t_{j+1}t_{j}$. **Proposition 58** (). Set $B_{i}:=\prod\limits_{k=0}^{i-1}\left( t_{1}-k\right)$ for each $i\in\left[ 0,n\right]$. Then, $B_{i}=t_{i}B_{i-1}$ for each $i\in\left[ n\right]$. We wonder to what extent the identities that hold for $t_{1},t_{2}% ,\ldots,t_{n}$ can be described. For instance, we can ask: **Question 59** ().    1. What are generators and relations for the $\mathbb{Q}%$-algebra $\mathbb{Q}\left[ t_{1},t_{2},\ldots,t_{n}\right]$ for a given $n\in\mathbb{N}$ ? 2. Fix $k\in\mathbb{N}$. What identities hold for $t_{1},t_{2},\ldots,t_{k}$ for **all** $n$ ? Is there a single algebra that "governs" the relations between $t_{1},t_{2},t_{3},\ldots$ that hold independently of $n$ ? 3. If a relation between $t_{1},t_{2},\ldots,t_{k}$ holds for all sufficiently high $n\geq k$, must it then hold for all $n\geq k$ ? We suspect that these questions are hard to answer, as we saw in Remark [\[rmk.ti+kti\]](#rmk.ti+kti){reference-type="ref" reference="rmk.ti+kti"} that even the quadratic relations between $t_{1},t_{2}% ,\ldots,t_{n}$ exhibit some rather finicky behavior. 
The dimension of $\mathbb{Q}\left[ t_{1},t_{2},\ldots,t_{n}\right]$ as a $\mathbb{Q}$-vector space does not seem to follow a simple rule either (see ([\[eq.dimQ\]](#eq.dimQ){reference-type="ref" reference="eq.dimQ"}) for the first few values), although there appear to be some patterns in how this dimension is generated[^9]. Another question, which we have already touched upon in Subsection [\[subsec.ikinn\]](#subsec.ikinn){reference-type="ref" reference="subsec.ikinn"}, is the following: **Question 60** (). Fix $j\in\left[ n\right]$. What is the smallest $h\in\mathbb{N}$ such that we have $\left[ t_{i_{1}},t_{j}\right] \left[ t_{i_{2}},t_{j}\right] \cdots\left[ t_{i_{h}},t_{j}\right] =0$ for all $i_{1},i_{2},\ldots,i_{h}% \in\left[ n\right]$ (as opposed to holding only for $i_{1},i_{2}% ,\ldots,i_{h}\in\left[ j\right]$ )? ## [\[subsec.further.opti\]]{#subsec.further.opti label="subsec.further.opti"}Optimal exponents? Corollary [\[cor.right-bound\]](#cor.right-bound){reference-type="ref" reference="cor.right-bound"} and Corollary [\[cor.left-bound\]](#cor.left-bound){reference-type="ref" reference="cor.left-bound"} give two different answers to the question "what powers of $\left[ t_{i},t_{j}\right]$ are $0$?". One might dare to ask for the **smallest** such power (more precisely, the smallest such exponent). In other words: **Question 61** (). Given $i,j\in\left[ n\right]$, what is the smallest $m\in\mathbb{N}$ such that $\left[ t_{i},t_{j}\right] ^{m}=0$ ? (We assume $\mathbf{k}=\mathbb{Z}$ here to avoid small-characteristic cancellations.) We conjecture that this smallest $m$ is $\min\left\{ j-i+1,\ \left\lceil \left( n-j\right) /2\right\rceil +1\right\}$ whenever $i<j$ (so that whichever of Corollary [\[cor.right-bound\]](#cor.right-bound){reference-type="ref" reference="cor.right-bound"} and Corollary [\[cor.left-bound\]](#cor.left-bound){reference-type="ref" reference="cor.left-bound"} gives the better bound actually gives the optimal bound). 
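For very small $n$, the conjecture can be probed directly by computing successive powers of each commutator until they vanish. The sketch below is our own plain-Python illustration (independent of the SageMath verification; the toy model of $\mathbb{Q}\left[ S_{n}\right]$ as dicts from permutation tuples to coefficients, and all names, are ours):

```python
# Compute the smallest m >= 1 with [t_i, t_j]^m = 0 over Q, and compare it
# with the conjectured value min{j-i+1, ceil((n-j)/2)+1} for all i < j.

def compose(p, q):
    # (pq)(x) = p(q(x)); permutations as 0-indexed tuples
    return tuple(p[q[x]] for x in range(len(p)))

def mul(a, b):
    c = {}
    for p, x in a.items():
        for q, y in b.items():
            r = compose(p, q)
            c[r] = c.get(r, 0) + x * y
    return {p: v for p, v in c.items() if v != 0}

def sub(a, b):
    c = dict(a)
    for p, y in b.items():
        c[p] = c.get(p, 0) - y
    return {p: v for p, v in c.items() if v != 0}

def cyc(l, m, n):
    # the cycle sending l -> l+1 -> ... -> m -> l (1-based), fixing the rest
    p = list(range(n))
    for x in range(l - 1, m - 1):
        p[x] = x + 1
    p[m - 1] = l - 1
    return tuple(p)

def t(l, n):
    return {cyc(l, m, n): 1 for m in range(l, n + 1)}

def min_exponent(i, j, n):
    # smallest m >= 1 with [t_i, t_j]^m = 0 (exists by Corollary 55)
    comm = sub(mul(t(i, n), t(j, n)), mul(t(j, n), t(i, n)))
    m, p = 1, comm
    while p != {}:
        m, p = m + 1, mul(p, comm)
    return m

def conjecture_holds(n):
    # (n - j + 1) // 2 equals ceil((n - j) / 2) for integers
    return all(min_exponent(i, j, n) == min(j - i + 1, (n - j + 1) // 2 + 1)
               for j in range(2, n + 1) for i in range(1, j))

assert conjecture_holds(3) and conjecture_holds(4)
```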
Using SageMath, this conjecture has been verified for all $n\leq12$. ## Generalizing to the Hecke algebra The *type-A Hecke algebra* (also known as the *type-A Iwahori-Hecke algebra*) is a deformation of the group algebra $\mathbf{k}\left[ S_{n}\right]$ involving a new parameter $q\in\mathbf{k}$. It is commonly denoted by $\mathcal{H}=\mathcal{H}_{q}\left( S_{n}\right)$; it has a basis $\left( T_{w}\right) _{w\in S_{n}}$ indexed by the permutations $w\in S_{n}%$, but its multiplication is more complicated than composing the indexing permutations. We refer to [@Mathas99] for the definition and a deep study of this algebra. We can define the $q$-deformed somewhere-to-below shuffles $t_{1}^{\mathcal{H}},t_{2}^{\mathcal{H}},\ldots,t_{n}^{\mathcal{H}}$ by $$t_{\ell}^{\mathcal{H}}:=T_{\operatorname*{cyc}\nolimits_{\ell}}% +T_{\operatorname*{cyc}\nolimits_{\ell,\ell+1}}+T_{\operatorname*{cyc}% \nolimits_{\ell,\ell+1,\ell+2}}+\cdots+T_{\operatorname*{cyc}\nolimits_{\ell ,\ell+1,\ldots,n}}\in\mathcal{H}.$$ Surprisingly, it seems that many of the properties of the original somewhere-to-below $t_{1},t_{2},\ldots,t_{n}$ still hold for these deformations. In particular: **Conjecture 62** (). Corollary [\[cor.left-bound\]](#cor.left-bound){reference-type="ref" reference="cor.left-bound"} and Corollary [\[cor.right-bound\]](#cor.right-bound){reference-type="ref" reference="cor.right-bound"} both seem to hold in $\mathcal{H}$ when the $t_{\ell}$ are replaced by the $t_{\ell }^{\mathcal{H}}$. This generalization is not automatic. Our above proofs do not directly apply to $\mathcal{H}$, as (for example) Lemma [\[lem.jpiq2\]](#lem.jpiq2){reference-type="ref" reference="lem.jpiq2"} does not generalize to $\mathcal{H}$. 
The $\mathcal{H}$-generalization of Theorem [\[thm.ti+1ti\]](#thm.ti+1ti){reference-type="ref" reference="thm.ti+1ti"} appears to be $$qt_{i+1}^{\mathcal{H}}t_{i}^{\mathcal{H}}=\left( t_{i}^{\mathcal{H}% }-1\right) t_{i}^{\mathcal{H}}=t_{i}^{\mathcal{H}}\left( t_{i}^{\mathcal{H}% }-1\right) \label{eq.hecke.ti+1ti}%$$ (verified using SageMath for all $n\leq11$). (The $q$ on the left hand side is necessary; the product $t_{i+1}^{\mathcal{H}}t_{i}^{\mathcal{H}}$ is not a $\mathbb{Z}$-linear combination of $1$, $t_{i}^{\mathcal{H}}$ and $\left( t_{i}^{\mathcal{H}}\right) ^{2}$ when $q=0$.) Our proof of Theorem [\[thm.ti+1ti\]](#thm.ti+1ti){reference-type="ref" reference="thm.ti+1ti"} does not seem to adapt to ([\[eq.hecke.ti+1ti\]](#eq.hecke.ti+1ti){reference-type="ref" reference="eq.hecke.ti+1ti"}), and while we suspect that proving ([\[eq.hecke.ti+1ti\]](#eq.hecke.ti+1ti){reference-type="ref" reference="eq.hecke.ti+1ti"}) won't be too difficult, it is merely the first step. ## One-sided cycle shuffles We return to $\mathbf{k}\left[ S_{n}\right]$. The $\mathbf{k}$-linear combinations $\lambda_{1}t_{1}+\lambda_{2}t_{2}% +\cdots+\lambda_{n}t_{n}$ (with $\lambda_{1},\lambda_{2},\ldots,\lambda_{n}% \in\mathbf{k}$) of the somewhere-to-below shuffles are called the *one-sided cycle shuffles*. They have been studied in [@s2b1]. Again, the main result of [@s2b1] entails that their commutators are nilpotent, but we can ask "how nilpotent?". This question remains wide open, not least due to its computational complexity (even the $n=6$ case brings SageMath to its limits). All that I can say with surety is that the commutators of one-sided cycle shuffles don't vanish as quickly (under taking powers) as the $\left[ t_{i},t_{j}\right]$'s. **Example 63** (). 
For instance, let us set $n=6$ and choose arbitrary $a,b,c,d,e,a^{\prime },b^{\prime},c^{\prime},d^{\prime},e^{\prime}\in\mathbf{k}$, and then introduce the elements $$\begin{aligned} u & :=at_{1}+bt_{2}+ct_{3}+dt_{4}+et_{5}\ \ \ \ \ \ \ \ \ \ \text{and}\\ u^{\prime} & :=a^{\prime}t_{1}+b^{\prime}t_{2}+c^{\prime}t_{3}+d^{\prime }t_{4}+e^{\prime}t_{5}%\end{aligned}$$ (two completely generic one-sided shuffles, except that we omit $t_{6}$ terms since $t_{6}=1$ does not influence the commutator). Then, 10 minutes of torturing SageMath reveals that $\left[ u,u^{\prime}\right] ^{6}=0$, but $\left[ u,u^{\prime}\right] ^{5}$ is generally nonzero. Even this example is misleadingly well-behaved. For $n=7$, it is not hard to find two one-sided cycle shuffles $u,u^{\prime}$ such that $\left[ u,u^{\prime}\right] ^{n}\neq0$. **Question 64** (). For each given $n$, what is the smallest (or at least a reasonably small) $m\in\mathbb{N}$ such that every two one-sided cycle shuffles $u,u^{\prime}$ satisfy $\left[ u,u^{\prime}\right] ^{m}=0$ ? [^1]: Indeed, the first somewhere-to-below shuffle $t_{1}$ is known as the *top-to-random shuffle*, and has been discussed, e.g., in [@Hendri72; @Donnel91; @Phatar91; @Fill96; @BiHaRo99; @DiFiPi92; @Palmes10; @Grinbe18]. More generally, for each $\ell\in\left[ n\right]$, the somewhere-to-below shuffle $t_{\ell}$ is exactly the top-to-random shuffle of the symmetric group algebra $\mathbf{k}\left[ S_{n-\ell+1}\right]$, transported into $\mathbf{k}\left[ S_{n}\right]$ using the embedding $S_{n-\ell+1}\hookrightarrow S_{n}$ that renames the numbers $1,2,\ldots ,n-\ell+1$ as $\ell,\ell+1,\ldots,n$. Thus, we know (e.g.) that the minimal polynomial of $t_{\ell}$ over a characteristic-$0$ field $\mathbf{k}$ is $\prod\limits_{i=0}^{n-\ell-1}\left( x-i\right) \cdot\left( x-\left( n-\ell+1\right) \right)$ (by [@DiFiPi92 Theorem 4.1]). [^2]: We view $S_{n}$ as a subset of $\mathbf{k}\left[ S_{n}\right]$ in the obvious way. [^3]: *Proof.* Assume that $k=n+1$. 
Then, $k-1=n\notin% \left[ n-1\right]$ and $j\leq k-1$ (since $j<k$), so that $H_{k-1,j}% =\mathbf{A}$ (by Remark [\[rmk.Hnj\]](#rmk.Hnj){reference-type="ref" reference="rmk.Hnj"}, applied to $k-1$ instead of $k$). Thus, both elements $s_{k}^{+}t_{j}$ and $s_{k}^{+}\left( t_{j}-1\right)$ belong to $H_{k-1,j}$ (since they both belong to $\mathbf{A}$). Therefore, both parts of Lemma [\[lem.4\]](#lem.4){reference-type="ref" reference="lem.4"} hold. Qed. [^4]: Strictly speaking, this argument works only if $r-2m\in\left[ n+1\right]$ (since Lemma [\[lem.6\]](#lem.6){reference-type="ref" reference="lem.6"} requires $k\in\left[ n+1\right]$). However, in all remaining cases, we can get to the same result in an even simpler way: Namely, assume that $r-2m\notin\left[ n+1\right]$. Thus, $r-2m$ is either $\leq0$ or $>n+1$. Since $r-2m$ cannot be $>n+1$ (because $r-2\underbrace{m}_{\geq0}\leq r\leq n+1$), we thus conclude that $r-2m\leq0$. Hence, $r-2m\leq0<j$ and therefore $H_{r-2m,j}=0$ (by ([\[eq.def.Hkj.=0\]](#eq.def.Hkj.=0){reference-type="ref" reference="eq.def.Hkj.=0"})). Hence, $$\underbrace{H_{r-2m,j}}_{=0}\left[ t_{i_{m+1}},t_{j}\right] =0\subseteq H_{r-2m-2,j}.$$ [^5]: Indeed, the case of $i=n-2$ is obvious (since $\left[ t_{n-2},t_{n-2}\right] =0$). The case of $i=n-3$ requires some calculations, which can be made simpler by checking that $\left[ t_{n-3},t_{n-2}\right]$ is an element $a\in\mathbf{k}\left[ S_{n}\right]$ satisfying $a=as_{n-2}=as_{n-1}$. (Explicitly, $\left[ t_{n-3},t_{n-2}% \right] =\left( 1-s_{n-2}\right) s_{n-3}b$, where $b$ is the sum of all six permutations in $S_{n}$ that fix each of $1,2,\ldots,n-3$.) [^6]: *Proof.* Assume that $i=j$. Then, $\left[ t_{i},t_{j}\right] =\left[ t_{j},t_{j}\right] =0$ (since $\left[ a,a\right] =0$ for any element $a$ of any ring). Hence, $\underbrace{\left[ t_{i},t_{j}\right] }_{=0}\mu_{k,j}=0$. 
Therefore, we clearly have $\left( \left[ t_{i},t_{j}\right] \mu_{k,j}=0\right)$ or $\left( \left[ t_{i},t_{j}\right] \mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some }\ell\in\left[ k+1,j-1\right] \right)$. Thus, Lemma [\[lem.increase-mu\]](#lem.increase-mu){reference-type="ref" reference="lem.increase-mu"} is proved under the assumption that $i=j$. [^7]: *Proof.* Assume that $k\geq j$. Thus, $k\geq j>j-1$, so that $k\notin\left[ j-1\right]$ and therefore $\mu_{k,j}=0$ (by Definition [\[def.muij2\]](#def.muij2){reference-type="ref" reference="def.muij2"}). Hence, $$\left[ t_{i},t_{j}\right] \underbrace{\mu_{k,j}}_{=0}=0=\mu_{k+1,j}% \cdot\underbrace{0}_{\in\mathbf{A}}\in\mu_{k+1,j}\mathbf{A}.$$ Hence, $\left[ t_{i},t_{j}\right] \mu_{k,j}\in\mu_{\ell,j}\mathbf{A}$ for some integer $\ell\geq k+1$ (namely, for $\ell=k+1$). Thus, Lemma [\[lem.increase-mu2\]](#lem.increase-mu2){reference-type="ref" reference="lem.increase-mu2"} is proved under the assumption that $k\geq j$. [^8]: *Proof.* Assume that $k_{m}=j$. Thus, $\left[ t_{k_{m}},t_{j}\right] =\left[ t_{j},t_{j}\right] =0$ (since $\left[ a,a\right] =0$ for any element $a$ of any ring). In other words, the last factor of the product $\left[ t_{k_{1}},t_{j}\right] \left[ t_{k_{2}},t_{j}\right] \cdots\left[ t_{k_{m}},t_{j}\right]$ is $0$. Thus, this whole product must equal $0$. In other words, $\left[ t_{k_{1}}% ,t_{j}\right] \left[ t_{k_{2}},t_{j}\right] \cdots\left[ t_{k_{m}}% ,t_{j}\right] =0$. This proves Theorem [\[thm.left-bound\]](#thm.left-bound){reference-type="ref" reference="thm.left-bound"} under the assumption that $k_{m}=j$. [^9]: Namely, for all $n\leq8$, we have verified that the algebra $\mathbb{Q}\left[ t_{1},t_{2},\ldots,t_{n}\right]$ is generated by products of $m$ somewhere-to-below shuffles with $m\in\left\{ 0,1,\ldots,n-1\right\}$, and moreover, only one such product for $m=n-1$ is needed.
Darij Grinberg, *Commutator nilpotency for somewhere-to-below shuffles*, arXiv:2309.05340 (math.CO, math.RA).
--- abstract: | The subpower membership problem $\mathop{\mathrm{SMP}}(\mathbf{A})$ of a finite algebraic structure $\mathbf{A}$ asks whether a given partial function from $A^k$ to $A$ can be interpolated by a term operation of $\mathbf{A}$, or not. While this problem can be EXPTIME-complete in general, Willard asked whether it is always solvable in polynomial time if $\mathbf{A}$ is a Mal'tsev algebra. In particular, this includes many important structures studied in abstract algebra, such as groups, quasigroups, rings, and Boolean algebras. In this paper we give an affirmative answer to Willard's question for a large class of 2-nilpotent Mal'tsev algebras. We furthermore develop tools that might be essential in answering the question for general nilpotent Mal'tsev algebras in the future. author: - Michael Kompatscher bibliography: - bibliography.bib title: The subpower membership problem of 2-nilpotent algebras --- # Introduction It is a recurring and well-studied problem in algebra to describe the closure of a given list of elements under some algebraic operations (let us only mention the affine and linear closure of a list of vectors, or the ideal generated by a list of polynomials). But also in a computational context, this problem has a rich history, appearing in many areas of computer science. In its formulation as *subalgebra membership problem*, the task is to decide whether a given finite list of elements of an algebraic structure generates another element or not. Depending on the algebraic structures studied, a variety of different problems emerges. One of the most well-known examples is the *subgroup membership problem*, in which the task is to decide if, for a given set of permutations $\alpha_1,\ldots,\alpha_n$ on a finite set $X$, another permutation $\beta$ belongs to the subgroup generated by $\alpha_1,\ldots,\alpha_n$ in $S_X$. 
This problem can be solved in polynomial time by the famous Schreier-Sims algorithm [@sims-SSalgorithm], whose runtime was analysed in [@FHL-permuationgroups] and [@knuth-SS]. The existence of such efficient algorithms is, however, not always guaranteed: if the symmetric group $S_X$ is, for instance, replaced by the full transformation semigroup on $X$, the corresponding membership problem is $\mathop{\mathrm{\mathsf{PSPACE}}}$-complete [@kozen-lowerbounds]. A common feature of many algorithms for the subalgebra membership problem is that they generate canonical generating sets of some sort (such as the basis of a vector space computed via Gaussian elimination, or a Gröbner basis computed via Buchberger's algorithm to solve the ideal membership problem [@buchberger-phd]). But, in general, this is where the similarities end: depending on the algebraic structure and the encoding of the input, the problem can span a wide range of complexities, and it has applications in vastly different areas such as cryptography [@SU-Thompsonsgroup; @SZ-sgmcrypto], computer algebra [@buchberger-phd; @MM-IMPhard], and proof complexity [@kozen-lowerbounds; @kozen-complexityalgebras]. In this paper, we study a version of the subalgebra membership problem called the *subpower membership problem*. For a fixed, finite algebraic structure $\mathbf{A}$ (henceforth also just called an *algebra*), its subpower membership problem $\mathop{\mathrm{SMP}}(\mathbf{A})$ is the problem of deciding whether a given tuple $\mathbf b \in \mathbf{A}^k$ is in the subalgebra of $\mathbf{A}^k$ generated by some other input tuples $\mathbf a_1,\ldots,\mathbf a_n \in \mathbf{A}^k$ (here $n$ and $k$ are not fixed, but part of the input). This is equivalent to checking whether the $n$-ary partial function that maps $\mathbf a_1,\ldots,\mathbf a_n$ component-wise to $\mathbf b$ can be interpolated by a term function of $\mathbf{A}$. 
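For illustration only (a toy sketch under assumed data, not an algorithm from the literature cited here): the subalgebra of $\mathbf{A}^k$ generated by the input tuples can be computed by exhaustive closure under the basic operations applied coordinate-wise, giving the obvious exponential-time decision procedure for $\mathop{\mathrm{SMP}}(\mathbf{A})$. The two-element xor algebra used below is an assumption of the example.

```python
from itertools import product

def generated_subpower(ops, gens):
    """Close a set of tuples in A^k under the basic operations of A,
    applied coordinate-wise.  Deciding SMP is then just `b in result`
    -- correct, but exponential in general."""
    closure = set(gens)
    changed = True
    while changed:
        changed = False
        for arity, op in ops:
            for args in product(tuple(closure), repeat=arity):
                new = tuple(op(*coords) for coords in zip(*args))
                if new not in closure:
                    closure.add(new)
                    changed = True
    return closure

# toy algebra ({0,1}; xor): its subpowers are the linear subspaces of Z_2^k
span = generated_subpower([(2, lambda x, y: x ^ y)],
                          [(1, 0, 1), (0, 1, 1)])
print(sorted(span))
```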
For example, if $p$ is a prime, $\mathop{\mathrm{SMP}}({\mathbb Z}_p)$ is the problem of checking whether some vector $\mathbf b \in {\mathbb Z}_p^k$ is in the linear closure of $\mathbf a_1,\ldots,\mathbf a_n \in {\mathbb Z}_p^k$; this can easily be solved by Gaussian elimination. More generally, for any finite group $\mathbf G$, $\mathop{\mathrm{SMP}}(\mathbf G)$ can be solved in polynomial time by a version of the Schreier-Sims algorithm [@willard-talk]. Besides being a natural problem in algebra, the subpower membership problem has found applications in learning algorithms [@BD-Maltsev; @DJ-learability; @IMMVW-fewsubpowers]. Moreover, an efficient algorithm for $\mathop{\mathrm{SMP}}(\mathbf{A})$ implies that it is feasible to represent relations invariant under $\mathbf{A}$ by generating sets of tuples. In particular, it was remarked (see e.g. [@BMS-SMP]) that a polynomial-time algorithm for $\mathop{\mathrm{SMP}}(\mathbf{A})$ would allow one to define infinitary constraint satisfaction problems, in which the constraint relations are given by some generating tuples (with respect to $\mathbf{A}$). This infinitary version of $\mathop{\mathrm{CSP}}$s has the benefit that most of the algebraic machinery for $\mathop{\mathrm{CSP}}$s (see e.g. [@BKW-polymorphisms]) still applies. Exhaustively generating the whole subalgebra generated by $\mathbf a_1,\ldots,\mathbf a_n$ in $\mathbf{A}^k$ gives an exponential-time algorithm for $\mathop{\mathrm{SMP}}(\mathbf{A})$. And, in general, we cannot expect to do better: in [@kozik-SMPhard] Kozik constructed a finite algebra $\mathbf{A}$ for which $\mathop{\mathrm{SMP}}(\mathbf{A})$ is $\mathop{\mathrm{\mathsf{EXP}}}$-complete. Even semigroups can have a $\mathop{\mathrm{\mathsf{PSPACE}}}$-complete subpower membership problem [@BCMS-SMPsemigroup]. However, for so-called *Mal'tsev algebras*, better upper bounds are known. Mal'tsev algebras are algebras defined by having a *Mal'tsev term* $m$, i.e. 
a term satisfying the identities $y = m(x,x,y) = m(y,x,x)$ for all $x,y$. Mal'tsev algebras lie at the intersection of many areas of mathematics: they include algebraic structures of ubiquitous importance (groups, fields, vector spaces), but also appear in logic (Boolean algebras, Heyting algebras), commutative algebra (rings, modules, $K$-algebras), and non-associative mathematics (quasigroups, loops). Mayr showed in [@mayr-SMP] that the subpower membership problem of every Mal'tsev algebra is in $\mathop{\mathrm{\mathsf{NP}}}$. His proof is based on the fact that every subalgebra $\mathbf R \leq \mathbf{A}^n$ has a small generating set, which generates every element of $\mathbf R$ in a canonical way (a so-called *compact representation*). Thus, to solve the subpower membership problem, one can "guess" a compact representation of the subalgebra generated by $\mathbf a_1,\ldots,\mathbf a_n$, and then check in polynomial time whether it generates $\mathbf b$. If such a compact representation can moreover be found in *deterministic* polynomial time, then $\mathop{\mathrm{SMP}}(\mathbf{A})$ is in $\mathop{\mathrm{\mathsf{P}}}$; this is, in fact, the dominant strategy for proving tractability. So far, the existence of such polynomial-time algorithms has been verified for groups and rings [@willard-talk; @FHL-permuationgroups], supernilpotent algebras [@mayr-SMP], and algebras that generate residually finite varieties [@BMS-SMP]. On the other hand, no examples of $\mathop{\mathrm{\mathsf{NP}}}$-hard or intermediate complexity are known. This leads to the question whether $\mathop{\mathrm{SMP}}(\mathbf{A}) \in \mathop{\mathrm{\mathsf{P}}}$ for *all* finite Mal'tsev algebras $\mathbf{A}$ [@willard-talk]. On a broader scale, this question was also posed for algebras with *few subpowers* [@IMMVW-fewsubpowers Question 8]. An elementary class of Mal'tsev algebras for which the question still remains open are *nilpotent* algebras. 
In fact, they can also be seen as an important stepping stone in answering [@IMMVW-fewsubpowers Question 8], as nilpotent Mal'tsev algebras coincide with nilpotent algebras with few subpowers. Generalizing the concept of nilpotent groups, nilpotent algebras are defined by having a central series of congruences. While they have several nice structural properties, in general nilpotent algebras do not satisfy the two finiteness conditions mentioned above (supernilpotence, residual finiteness); thus they are a natural starting point when trying to generalize known tractability results. But even for 2-nilpotent algebras not much is known, as so far polynomial-time algorithms have only been constructed by ad-hoc arguments for concrete examples (such as Vaughan-Lee's 12-element loop [@mayr-loop]). The first contribution of this paper is to prove that all 2-nilpotent algebras of size $p\cdot q$, for two primes $p \neq q$, have a tractable subpower membership problem. In fact, we prove an even stronger result in Theorem [\[theorem:main\]](#theorem:main){reference-type="ref" reference="theorem:main"}: $\mathop{\mathrm{SMP}}(\mathbf{A})$ is in $\mathop{\mathrm{\mathsf{P}}}$ whenever $\mathbf{A}$ has a central series $0_\mathbf{A}< \rho < 1_\mathbf{A}$ such that $|\mathbf{A}/\rho| = p$ is a prime, and the blocks of $\rho$ have size coprime to $p$. While this is still a relatively restricted class of nilpotent algebras, our methods have the potential to generalize to all 2-nilpotent Mal'tsev algebras and beyond. Thus, our newly developed tools for analyzing $\mathop{\mathrm{SMP}}$ can be regarded as the second main contribution. 
More specifically, in Theorem [\[theorem:SMP\]](#theorem:SMP){reference-type="ref" reference="theorem:SMP"} we show that whenever $\mathbf{L}\otimes \mathbf{U}$ is a *wreath product* (see Section [3](#sect:diffclon){reference-type="ref" reference="sect:diffclon"}), such that $\mathbf{U}$ is supernilpotent, then $\mathop{\mathrm{SMP}}(\mathbf{L}\otimes \mathbf{U})$ reduces to $\mathop{\mathrm{SMP}}(\mathbf{L}\times \mathbf{U})$ (which is polynomial-time solvable by [@mayr-SMP]) and a version of the subpower membership problem for a multi-sorted algebraic object called a *clonoid* from $\mathbf{U}$ to $\mathbf{L}$. This reduction in particular applies to all 2-nilpotent algebras; an analysis of clonoids between affine algebras then leads to Theorem [\[theorem:main\]](#theorem:main){reference-type="ref" reference="theorem:main"}. If, in future research, we could get rid of the condition of $\mathbf{U}$ being supernilpotent, this would provide a strong tool in studying general Mal'tsev algebras, as every Mal'tsev algebra with non-trivial center can be decomposed into a wreath product. Our paper is structured as follows: Section [2](#sect:prelim){reference-type="ref" reference="sect:prelim"} contains preliminaries and some background on universal algebra. In Section [3](#sect:diffclon){reference-type="ref" reference="sect:diffclon"} we discuss how Mal'tsev algebras with non-trivial center can be represented by a wreath product and we introduce the concept of *difference clonoid* of such a representation. In Section [4](#sect:reduction){reference-type="ref" reference="sect:reduction"} we discuss some situations, in which the subpower membership problem of a wreath product can be reduced to the membership problem of the corresponding difference clonoid. In particular, we prove Theorem [\[theorem:SMP\]](#theorem:SMP){reference-type="ref" reference="theorem:SMP"}. 
Section [5](#section:clonoids){reference-type="ref" reference="section:clonoids"} contains an analysis of clonoids between ${\mathbb Z}_p$ and coprime Abelian groups, which then leads to the proof of our main result, Theorem [\[theorem:main\]](#theorem:main){reference-type="ref" reference="theorem:main"}. In Section [6](#sect:discussion){reference-type="ref" reference="sect:discussion"} we discuss some possible directions for future research. # Preliminaries {#sect:prelim} In the following, we are going to discuss some necessary notions from universal algebra. For more general background we refer to the textbooks [@bergman-book; @BS-book]. For background on commutator theory we refer to [@FM-commutator-theory] and [@AM-commutator]. For an introduction to Mal'tsev algebras and compact representations we refer to [@brady-CSPnotes Chapters 1.7-1.9]. In this paper, we are going to denote tuples by lower case bold letters, e.g. $\mathbf a \in A^k$. In order to avoid double indexing in some situations, we are going to use the notation $\mathbf a(i)$ to denote the $i$-th entry of $\mathbf a$, i.e. $\mathbf a = (\mathbf a(1), \mathbf a(2), \ldots, \mathbf a(k))$. However, otherwise we are going to follow standard notation as used e.g. in [@bergman-book]. ## Basic notions for general algebras An *algebra* $\mathbf{A}= (A;(f_i^{\mathbf{A}})_{i \in I})$ is a first-order structure in a purely functional language $(f_i)_{i \in I}$ (where each symbol $f_i$ has an associated *arity*). We say $\mathbf{A}$ is finite if its domain $A$ is finite. A *subalgebra* $\mathbf{B}= (B;(f_i^{\mathbf{B}})_{i \in I})$ of an algebra $\mathbf{A}= (A;(f_i^{\mathbf{A}})_{i \in I})$ (denoted $\mathbf{B}\leq \mathbf{A}$) is an algebra obtained by restricting all *basic operations* $f_i^{\mathbf{A}}$ to a subset $B \subseteq A$ that is invariant under all $f_i^{\mathbf{A}}$'s. 
The subalgebra generated by a list of elements $a_1,\ldots,a_n$, denoted by $\mathop{\mathrm{Sg}}_\mathbf{A}(a_1,\ldots,a_n)$, is the smallest subalgebra of $\mathbf{A}$ that contains $a_1,\ldots,a_n$. The *product* $\prod_{i \in I} \mathbf{A}_i$ of a family of algebras $(\mathbf{A}_i)_{i \in I}$ in the same language is defined as the algebra with domain $\prod_{i\in I} A_i$, whose basic operations are defined coordinate-wise. The power $\mathbf{A}^n$ is the product of $n$-many copies of $\mathbf{A}$. Subalgebras of (finite) powers of $\mathbf{A}$ are sometimes also called *subpowers* of $\mathbf{A}$, which motivates the name "subpower membership problem". So, formally the subpower membership problem of $\mathbf{A}$ can be stated as follows:\ $\mathop{\mathrm{SMP}}(\mathbf{A})$\ [Input:]{.smallcaps} $\mathbf b, \mathbf a_1,\ldots,\mathbf a_n \in \mathbf{A}^k$ for some $n,k\in{\mathbb N}$\ [Question:]{.smallcaps} Is $\mathbf b \in \mathop{\mathrm{Sg}}_{\mathbf{A}^k}(\mathbf a_1,\ldots,\mathbf a_n)$?\ Note that the subpowers of $\mathbf{A}$ are exactly the relations on $A$ that are invariant under $\mathbf{A}$. A *congruence* $\alpha$ of $\mathbf{A}$ is an equivalence relation on $A$ that is invariant under $\mathbf{A}$. We write $\mathop{\mathrm{Con}}(\mathbf{A})$ for the lattice of all congruences of $\mathbf{A}$. We denote the minimal and maximal elements of this lattice by $0_\mathbf{A}= \{(x,x) \mid x \in A\}$ and $1_\mathbf{A}= \{(x,y) \mid x,y \in A\}$. For every congruence $\alpha \in \mathop{\mathrm{Con}}(\mathbf{A})$, one can form a quotient algebra $\mathbf{A}/\alpha$ in the natural way. The *term operations* of an algebra $\mathbf{A}$ are all finitary operations that can be defined by a composition of basic operations of $\mathbf{A}$. Two standard ways to represent them are by terms or circuits in the language of $\mathbf{A}$. 
For a term or circuit $t(x_1,\ldots,x_n)$ in the language of $\mathbf{A}$, we write $t^{\mathbf{A}}(x_1,\ldots,x_n)$ for the induced term operation on $A$. Occasionally, if it is clear from the context, we are not going to distinguish between a term/circuit and the corresponding term operation. The term operations of an algebra $\mathbf{A}$ are closed under composition and contain all projections, therefore they form an algebraic object called a *clone*. For short, we denote this *term clone* of an algebra $\mathbf{A}$ by $\mathop{\mathrm{\mathsf{Clo}}}(\mathbf{A})$. Note that $\mathop{\mathrm{Sg}}_{\mathbf{A}^k}(\mathbf a_1,\ldots,\mathbf a_n) = \{t(\mathbf a_1,\ldots,\mathbf a_n) \mid t \in \mathop{\mathrm{\mathsf{Clo}}}(\mathbf{A}) \}$. We call a ternary operation $m^{\mathbf{A}}(x,y,z) \in \mathop{\mathrm{\mathsf{Clo}}}(\mathbf{A})$ a *Mal'tsev term* if it satisfies the identities $m^{\mathbf{A}}(y,x,x) = m^{\mathbf{A}}(x,x,y) = y$ for all $x,y\in A$, and call $\mathbf{A}$ a *Mal'tsev algebra* if it has a Mal'tsev term. For instance, every group is a Mal'tsev algebra with Mal'tsev term $m(x,y,z)=xy^{-1}z$. Mal'tsev terms are a classic topic of study in universal algebra (see e.g. [@bergman-book Chapter 7]), and are in particular known to characterize congruence permutable varieties. ## Clonoids We are also going to rely on a multi-sorted generalisation of clones, so-called *clonoids* that were first introduced in [@AM-clonoids] (in a slightly less general way). For a set of operations between two sets $\mathcal C \subseteq \{f \colon A^n \to B \mid n\in \mathbb N\}$, and $k \in {\mathbb N}$ let us write $\mathcal C^{(k)} = \{f \colon A^k \to B \mid f\in \mathcal C\}$ for the subset of $k$-ary functions. 
Then, for two algebras $\mathbf{A}= (A,(f_i)_{i\in I})$, $\mathbf{B}= (B,(g_j)_{j\in J})$ (on possibly different domains and in possibly different languages), a set $\mathcal C \subseteq \{f \colon A^n \to B \mid n \in \mathbb N\}$ is called a *clonoid from $\mathbf{A}$ to $\mathbf{B}$*, or *$(\mathbf{A},\mathbf{B})$-clonoid*, if it is closed under composition with term operations of $\mathbf{A}$ from the inside, and of $\mathbf{B}$ from the outside, i.e. for all $n,k\in {\mathbb N}$: (1) $f \in \mathcal C^{(n)}, t_1,\ldots, t_n \in \mathop{\mathrm{\mathsf{Clo}}}(\mathbf{A})^{(k)} \Rightarrow f \circ (t_1,\ldots, t_n) \in \mathcal C^{(k)}$, and (2) $s \in \mathop{\mathrm{\mathsf{Clo}}}(\mathbf{B})^{(n)}, f_1,\ldots, f_n \in \mathcal C^{(k)} \Rightarrow s \circ (f_1,\ldots, f_n) \in \mathcal C^{(k)}$. ## Commutator theory Commutator theory is the subfield of universal algebra that tries to generalise notions such as central subgroups, nilpotence, or solvability from group theory to general algebras. The most commonly used framework is based on so-called term-conditions, which we outline in the following. Let $\mathbf{A}$ be an algebra. For congruences $\alpha,\beta,\gamma \in \mathop{\mathrm{Con}}(\mathbf{A})$ we say that *$\alpha$ centralizes $\beta$ modulo $\gamma$* (and write $C(\alpha,\beta;\gamma)$) if and only if for all $p(\mathbf x, \mathbf y) \in \mathop{\mathrm{\mathsf{Clo}}}(\mathbf{A})$, and all tuples $\mathbf a, \mathbf b \in A^n$, $\mathbf c, \mathbf d \in A^m$, such that $a_i \sim_{\alpha} b_i$ for $i=1,\ldots,n$ and $c_j \sim_{\beta} d_j$ for $j=1,\ldots,m$, the implication $$\begin{aligned} p(\mathbf a, \mathbf c) \sim_{\gamma} p(\mathbf a, \mathbf d) \Rightarrow p(\mathbf b, \mathbf c) \sim_{\gamma} p(\mathbf b, \mathbf d)\end{aligned}$$ holds. A congruence $\alpha$ is called *central* if $C(\alpha,1_{\mathbf{A}};0_{\mathbf{A}})$ holds. The *center* is the largest central congruence. An algebra $\mathbf{A}$ is called *$n$-nilpotent* if there is a *central series of length $n$*, i.e. 
a series of congruences $0_{\mathbf{A}} = \alpha_0 \leq \alpha_1 \leq \cdots \leq \alpha_n = 1_{\mathbf{A}}$, such that $C(\alpha_{i+1},1_{\mathbf{A}}; \alpha_i)$ holds for $i = 0,\ldots,n-1$. An algebra $\mathbf{A}$ is called *Abelian* if it is $1$-nilpotent, i.e. $C(1_{\mathbf{A}},1_{\mathbf{A}}; 0_{\mathbf{A}})$ holds. We are, however, not going to work directly with these definitions. There is a rich structural theory in the special case of Mal'tsev algebras (and, more generally, in congruence modular varieties [@FM-commutator-theory]) that gives us very useful characterizations of many commutator-theoretical properties. By a result of Herrmann [@herrmann-affine], a Mal'tsev algebra $\mathbf{A}$ is Abelian if and only if it is *affine*, i.e. all of its term operations are affine combinations $\sum_{i=1}^n \alpha_i x_i + c$ over some module; in particular the Mal'tsev term is then equal to $x-y+z$. More generally, we are going to use a result of Freese and McKenzie [@FM-commutator-theory] that states that a Mal'tsev algebra $\mathbf{A}$ with a central congruence $\rho$ can always be written as a *wreath product* $\mathbf{L}\otimes \mathbf{U}$, such that $\mathbf{L}$ is affine and $\mathbf{U}= \mathbf{A}/\rho$. We are going to discuss such wreath product representations in Section [3](#sect:diffclon){reference-type="ref" reference="sect:diffclon"}. Lastly, we want to mention that the definition of the relation $C$ naturally generalizes to higher arities $C(\alpha_1, \ldots,\alpha_n, \beta; \gamma)$. This notion was first introduced by Bulatov; we refer to [@FM-commutator-theory] and [@AM-commutator] for more background on *higher commutators*. In particular, an algebra is called *$k$-supernilpotent* if $C(1_{\mathbf{A}}, \ldots,1_{\mathbf{A}}; 0_{\mathbf{A}})$ holds, where $1_{\mathbf{A}}$ appears $k+1$ times. There are several known characterizations of supernilpotent Mal'tsev algebras. 
We are mainly going to use the following: [\[theorem:supernilpotent\]]{#theorem:supernilpotent label="theorem:supernilpotent"} Let $\mathbf{A}$ be a $k$-supernilpotent Mal'tsev algebra, $0 \in A$ a constant and $t,s$ two $n$-ary terms in the language of $\mathbf{A}$. Then $t^{\mathbf{A}} = s^{\mathbf{A}}$ if and only if they are equal on all tuples from the set $S = \{ \mathbf a \in A^n \mid |\{i : \mathbf a(i) \neq 0\}| \leq k \}$. (In fact, this is a characterization of $k$-supernilpotence for Mal'tsev algebras.) ## Compact representations and SMP For any subset $R \subseteq A^n$, we define its *signature* $\mathop{\mathrm{Sig}}(R)$ to be the set of all triples $(i,a,b) \in \{1,\ldots,n\} \times A^2$ such that there are $\mathbf t_a, \mathbf t_b \in R$ that agree on the first $i-1$ coordinates, and $\mathbf t_a(i) = a$ and $\mathbf t_b(i) = b$; we then also say that $\mathbf t_a, \mathbf t_b$ are *witnesses* for $(i,a,b) \in \mathop{\mathrm{Sig}}(R)$. If $\mathbf{A}$ is a Mal'tsev algebra, and $\mathbf R\leq \mathbf{A}^n$, then it is known that $\mathbf R$ is already generated by every subset $S \subseteq \mathbf R$ with $\mathop{\mathrm{Sig}}(S) = \mathop{\mathrm{Sig}}(\mathbf R)$ [@brady-CSPnotes Theorem 1.8.2.]. In fact, $\mathbf R$ is then equal to the closure of $S$ under the Mal'tsev operation $m$ alone, and a tuple $\mathbf a$ is in $\mathbf R$ iff it can be written as $m(\ldots m(\mathbf a_1,\mathbf b_2,\mathbf a_2), \ldots, \mathbf b_n,\mathbf a_n)$, for some $\mathbf a_i, \mathbf b_i \in S$. For given $\mathbf a \in \mathbf R$ such elements $\mathbf a_i, \mathbf b_i \in S$ can be found in time polynomial in $|S|$, by picking $\mathbf a_1$ such that $\mathbf a_1(1) = \mathbf a(1)$, and $\mathbf a_i, \mathbf b_i \in S$ as witnesses for $m(\ldots m(\mathbf a_1,\mathbf b_2,\mathbf a_2), \ldots, \mathbf b_{i-1},\mathbf a_{i-1})(i)$ and $\mathbf a(i)$ at position $i$. 
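The definition of the signature can be mirrored directly in code. The following toy sketch (quadratic in $|R|$, for illustration only) computes $\mathop{\mathrm{Sig}}(R)$ and extracts a small subset with the same signature by keeping one witness pair per triple; the ternary relation used as the example is an assumption of this sketch.

```python
def signature(R):
    """Sig(R): all triples (i, a, b) (1-indexed) witnessed by two
    tuples in R that agree on the first i-1 coordinates."""
    return {(i + 1, t[i], s[i])
            for t in R for s in R
            for i in range(len(t)) if t[:i] == s[:i]}

def thin(R):
    """Keep one witness pair per signature triple: the result is a
    subset of R with the same signature and at most 2*|Sig(R)| tuples."""
    kept, seen = set(), set()
    for t in sorted(R):                 # sorted: deterministic witnesses
        for s in sorted(R):
            for i in range(len(t)):
                if t[:i] == s[:i] and (i + 1, t[i], s[i]) not in seen:
                    seen.add((i + 1, t[i], s[i]))
                    kept.update((t, s))
    return kept

# the graph of addition mod 3, as a ternary relation on {0,1,2}
R = {(a, b, (a + b) % 3) for a in range(3) for b in range(3)}
S = thin(R)
print(len(S), len(R), signature(S) == signature(R))
```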
A *compact representation* of $\mathbf R \leq \mathbf{A}^n$ is a subset $S \subseteq \mathbf R$ with $\mathop{\mathrm{Sig}}(S) = \mathop{\mathrm{Sig}}(\mathbf R)$ and $|S| \leq 2|\mathop{\mathrm{Sig}}(\mathbf R)| \leq 2n|A|^2$. So, informally speaking, compact representations are small generating sets of $\mathbf R$ with the same signature. It is not hard to see that compact representations always exist. Generalizations of compact representations exist also for relations on different domains ($\mathbf R \leq \mathbf{A}_1\times \cdots \times \mathbf{A}_n$), and for relations invariant under algebras with few subpowers; we refer to [@brady-CSPnotes Chapter 2] for more background. By the above, $\mathop{\mathrm{SMP}}(\mathbf{A})$ reduces in polynomial time to the problem of finding a compact representation of $\mathop{\mathrm{Sg}}_{\mathbf{A}^k}(\mathbf a_1,\ldots,\mathbf a_n)$ for some input tuples $\mathbf a_1,\ldots,\mathbf a_n \in A^k$. We are going to denote this problem by $\mathop{\mathrm{CompRep}}(\mathbf{A})$. Conversely, it was shown in [@BMS-SMP] that finding a compact representation has a polynomial Turing reduction to $\mathop{\mathrm{SMP}}(\mathbf{A})$. Note further that, to solve $\mathop{\mathrm{CompRep}}(\mathbf{A})$, it is already enough to find a subset $S \subseteq R$ with $\mathop{\mathrm{Sig}}(S) = \mathop{\mathrm{Sig}}(R)$ of polynomial size, since such a set $S$ can then be thinned out to a compact representation. Let us call a set of pairs $\{ (\mathbf c, p_{\mathbf c}) \mid \mathbf c \in S\}$ an *enumerated compact representation* of $\mathop{\mathrm{Sg}}_{\mathbf{A}^k}(\mathbf a_1,\ldots,\mathbf a_n)$, if $S$ is a compact representation of $\mathop{\mathrm{Sg}}_{\mathbf{A}^k}(\mathbf a_1,\ldots,\mathbf a_n)$, and every $p_{\mathbf c}$ is a circuit in the language of $\mathbf{A}$ of polynomial size (in $n$ and $k$), such that $p_{\mathbf c}(\mathbf a_1,\ldots,\mathbf a_n) = \mathbf c$. 
Enumerated compact representations were already (implicitly) used in several proofs. In [@BMS-SMP Theorem 4.13] it was shown that, for algebras with few subpowers, enumerated compact representations always exist; this was used to prove that $\mathop{\mathrm{SMP}}(\mathbf{A}) \in \mathop{\mathrm{\mathsf{coNP}}}$. Moreover, all of the known polynomial-time algorithms for $\mathop{\mathrm{CompRep}}(\mathbf{A})$ in fact compute enumerated compact representations. We are in particular going to need the following result, which follows from [@mayr-SMP]: [\[theorem:SMPsupernilpotent\]]{#theorem:SMPsupernilpotent label="theorem:SMPsupernilpotent"} Let $\mathbf{A}$ be a finite supernilpotent Mal'tsev algebra. Then, there is a polynomial-time algorithm that computes an enumerated compact representation of $\mathop{\mathrm{Sg}}_{\mathbf{A}^k}(\mathbf a_1,\ldots,\mathbf a_n)$, for given $\mathbf a_1,\ldots,\mathbf a_n \in A^k$. Theorem [\[theorem:SMPsupernilpotent\]](#theorem:SMPsupernilpotent){reference-type="ref" reference="theorem:SMPsupernilpotent"} can be seen as a generalization of Gaussian elimination from affine to supernilpotent algebras. We remark that Theorem [\[theorem:SMPsupernilpotent\]](#theorem:SMPsupernilpotent){reference-type="ref" reference="theorem:SMPsupernilpotent"}, although not explicitly stated as such in [@mayr-SMP], follows directly from the algorithm computing a *group representation* $(T_1,T_2,\ldots,T_k)$ of $\mathop{\mathrm{Sg}}_{\mathbf{A}^k}(\mathbf a_1,\ldots,\mathbf a_n)$ and the fact that for such a group representation, there is a constant $q$ such that $T = (T_1+q\cdot T_2+\cdots +q\cdot T_k)$ has the same signature as $\mathop{\mathrm{Sg}}_{\mathbf{A}^k}(\mathbf a_1,\ldots,\mathbf a_n)$ (see Lemma 3.1 in [@mayr-SMP]). Thus, $T$ together with its defining circuits forms an enumerated compact representation of $\mathop{\mathrm{Sg}}_{\mathbf{A}^k}(\mathbf a_1,\ldots,\mathbf a_n)$. 
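In the affine base case the theorem specializes to ordinary Gaussian elimination. The following toy sketch (illustrative only, not the algorithm of [@mayr-SMP]) decides $\mathop{\mathrm{SMP}}({\mathbb Z}_p)$ by row-reducing the generators and then reducing the target vector against the resulting echelon basis; that $p$ is prime is assumed, so that inverses exist and can be computed via Fermat's little theorem.

```python
def smp_zp(p, gens, b):
    """Decide whether b lies in the Z_p-linear span of gens, by
    Gaussian elimination over the field Z_p (p prime assumed)."""
    rows = [list(g) for g in gens]
    k, lead = len(b), 0
    for col in range(k):
        piv = next((r for r in range(lead, len(rows)) if rows[r][col] % p), None)
        if piv is None:
            continue
        rows[lead], rows[piv] = rows[piv], rows[lead]
        inv = pow(rows[lead][col], p - 2, p)   # Fermat inverse, p prime
        rows[lead] = [x * inv % p for x in rows[lead]]
        for r in range(len(rows)):
            if r != lead and rows[r][col] % p:
                c = rows[r][col]
                rows[r] = [(x - c * y) % p for x, y in zip(rows[r], rows[lead])]
        lead += 1
    # reduce the target vector against the echelon basis
    t = list(b)
    for row in rows[:lead]:
        j = next(i for i, x in enumerate(row) if x % p)  # pivot column
        c = t[j]
        t = [(x - c * y) % p for x, y in zip(t, row)]
    return not any(x % p for x in t)

print(smp_zp(5, [(1, 2, 3), (0, 1, 4)], (2, 4, 1)))
```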
We are furthermore going to use that there is an algorithm that allows us to fix some values of a relation given by an enumerated compact representation: [\[lemma:fixvalues\]]{#lemma:fixvalues label="lemma:fixvalues"} Let $\mathbf{A}$ be a Mal'tsev algebra. Then, there is a polynomial-time algorithm `Fix-values`$(R,a_1,\ldots,a_m)$ that, for a given compact representation $R$ of $\mathbf R = \mathop{\mathrm{Sg}}_{\mathbf{A}^k}(X)$, and constants $a_1,\ldots,a_m \in A$, returns a compact representation $R'$ of $\{ \mathbf x \in \mathbf R \mid \mathbf x(1) = a_1, \ldots, \mathbf x(m) = a_m\}$ (or $\emptyset$ if the relation is empty). If $R$ is moreover enumerated, then `Fix-values` also computes polynomial-size circuits defining the elements of $R'$ from $X$. The existence of such a `Fix-values` algorithm for compact representations is a well-known result ([@BD-Maltsev], see also [@brady-CSPnotes Algorithm 5]); the additional statement about *enumerated* compact representations follows easily by keeping track of the defining circuits. We prove Lemma [\[lemma:fixvalues\]](#lemma:fixvalues){reference-type="ref" reference="lemma:fixvalues"} in Appendix [7](#appendix:fixval){reference-type="ref" reference="appendix:fixval"}. # Wreath products and difference clonoids {#sect:diffclon} In this section, we discuss how to represent Mal'tsev algebras with non-trivial center by a so-called *wreath product* $\mathbf{L}\otimes \mathbf{U}$, and associate to it its *difference clonoid*, which gives us a measure of how far it is from being the direct product $\mathbf{L}\times \mathbf{U}$. Let $\mathbf{U}= (U,(f^{\mathbf{U}})_{f\in F})$ and $\mathbf{L}= (L,(f^{\mathbf{L}})_{f\in F})$ be two algebras in the same language $F$, such that $\mathbf{L}$ is affine. Furthermore, let $0 \in L$ and let $T = (\hat f)_{f\in F}$ be a family of operations containing, for each $f\in F$ of arity $n$, an operation $\hat f \colon U^n\to L$. 
Then we define the *wreath product* $\mathbf{L}\otimes^{T,0} \mathbf{U}$ as the algebra $(L\times U,(f^{\mathbf{L}\otimes^{T,0} \mathbf{U}})_{f\in F})$ with basic operations $$f^{\mathbf{L}\otimes^{T,0} \mathbf{U}}((l_1,u_1),\ldots,(l_n,u_n)) = (f^{\mathbf{L}}(l_1,\ldots,l_n)+\hat f(u_1,\ldots,u_n),f^{\mathbf{U}}(u_1,\ldots,u_n)),$$ (where $+$ is the addition on $\mathbf{L}$ with respect to the neutral element $0$). For simplicity, we are going to write $\mathbf{L}\otimes \mathbf{U}$ if $T$ and $0$ are clear from the context. The name *wreath product* refers to the fact that this is a special case of VanderWerf's wreath products [@vanderwerf-wreathproduct]. We remark that alternative names for $\mathbf{L}\otimes \mathbf{U}$ have recently been suggested, such as *central extension* (by Mayr) and *semidirect product* (by Zhuk). By a result of Freese and McKenzie we can represent Mal'tsev algebras with non-trivial centers as wreath products: [\[theorem:wreath\]]{#theorem:wreath label="theorem:wreath"} Let $\mathbf{A}$ be a Mal'tsev algebra with a central congruence $\alpha$, and let $\mathbf{U}=\mathbf{A}/\alpha$. Then there is an affine algebra $\mathbf{L}$, an element $0 \in L$ and a set of operations $T$, such that $\mathbf{A}\cong \mathbf{L}\otimes^{T,0} \mathbf{U}$. Note that, for a fixed quotient $\mathbf{U}= \mathbf{A}/\alpha$, there is still some freedom in how to choose the operations $f^\mathbf{L}$ of $\mathbf{L}$, and the operations $\hat f \colon U^n \to L$ in $T$ (by adding/subtracting constants). To remove this ambiguity, we are from now on always going to assume that $\mathbf{L}$ preserves $0$, i.e. $f^{\mathbf{L}}(0,0,\ldots,0) = 0$ for all $f\in F$. 
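As a toy illustration of the definition (the encoding of ${\mathbb Z}_4$ below is an assumption of this sketch, not an example taken from the paper): addition on ${\mathbb Z}_4$ is a wreath product operation over $L = U = {\mathbb Z}_2$, with the carry bit playing the role of the twisting operation $\hat f$.

```python
def wreath_op(f_L, f_hat, f_U, add):
    """Basic operation of a wreath product on pairs (l, u):
    f((l1,u1),...,(ln,un)) = (f_L(ls) + f_hat(us), f_U(us))."""
    def op(*pairs):
        ls, us = zip(*pairs)
        return (add(f_L(*ls), f_hat(*us)), f_U(*us))
    return op

# Z_4-addition realized over L = U = Z_2, with the carry (u1+u2)//2 as f_hat
plus = wreath_op(f_L=lambda l1, l2: (l1 + l2) % 2,
                 f_hat=lambda u1, u2: (u1 + u2) // 2,
                 f_U=lambda u1, u2: (u1 + u2) % 2,
                 add=lambda x, y: (x + y) % 2)

enc = lambda n: (n // 2, n % 2)      # n in Z_4  ->  (high bit, low bit)
dec = lambda p: 2 * p[0] + p[1]
ok = all(dec(plus(enc(a), enc(b))) == (a + b) % 4
         for a in range(4) for b in range(4))
print(ok)
```

Note that with the zero carry function $\hat f = 0$ the same construction would instead produce the direct product ${\mathbb Z}_2 \times {\mathbb Z}_2$, so the twisting operations are exactly what distinguishes the two.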
It is then easy to observe that the wreath product $\mathbf{L}\otimes^{T,0} \mathbf{U}$ behaves nicely with respect to the direct product $\mathbf{L}\times \mathbf{U}$ in the same language: [\[observation:companion\]]{#observation:companion label="observation:companion"} Let $\mathbf{A}$ be a Mal'tsev algebra with wreath product representation $\mathbf{A}= \mathbf{L}\otimes^{T,0} \mathbf{U}$. Then $t^{\mathbf{A}} = s^{\mathbf{A}} \Rightarrow t^{\mathbf{L}\times \mathbf{U}} = s^{\mathbf{L}\times \mathbf{U}}$. *Proof.* Note that, for every term $t$ in the language of $\mathbf{A}$: $$t^{\mathbf{A}}((l_1,u_1),\ldots,(l_n,u_n)) = (t^{\mathbf{L}}(l_1,\ldots,l_n)+\hat t(u_1,\ldots,u_n),t^{\mathbf{U}}(u_1,\ldots,u_n)),$$ for some $\hat t \colon U^n \to L$ (this can be shown by induction over the height of the term tree). Clearly $t^{\mathbf{A}} = s^{\mathbf{A}}$ implies $t^{\mathbf{U}}=s^{\mathbf{U}}$, and $t^{\mathbf{L}} - s^{\mathbf{L}} = c$, $\hat t - \hat s = -c$ for some constant $c\in L$. Since, by our assumptions, the operations of $\mathbf{L}$ preserve $0$, we get $t^{\mathbf{L}} = s^{\mathbf{L}}$ and $\hat t = \hat s$. Thus $t^{\mathbf{L}\times \mathbf{U}} = s^{\mathbf{L}\times \mathbf{U}}$. ◻ In other terminology, the map $t^{\mathbf{A}} \mapsto t^{\mathbf{L}\times \mathbf{U}}$ is a surjective *clone homomorphism* from $\mathop{\mathrm{\mathsf{Clo}}}(\mathbf{A})$ to $\mathop{\mathrm{\mathsf{Clo}}}(\mathbf{L}\times\mathbf{U})$, i.e. a map that preserves arities, projections and compositions. The converse of Observation [\[observation:companion\]](#observation:companion){reference-type="ref" reference="observation:companion"} does, however, not hold, since this map is usually not injective. 
We define the *difference clonoid* $\mathop{\mathrm{Diff}}_0(\mathbf{A})$ as the kernel of this clone homomorphism in the following sense: [\[definition:diffclonoid\]]{#definition:diffclonoid label="definition:diffclonoid"} Let $\mathbf{A}= \mathbf{L}\otimes^{T,0} \mathbf{U}$ be a Mal'tsev algebra given as a wreath product. We define the equivalence relation $\sim$ on $\mathop{\mathrm{\mathsf{Clo}}}(\mathbf{A})$ by $$t^{\mathbf{A}}\sim s^{\mathbf{A}} :\Leftrightarrow t^{\mathbf{L}\times \mathbf{U}} = s^{\mathbf{L}\times \mathbf{U}}.$$ The *difference clonoid $\mathop{\mathrm{Diff}}_0(\mathbf{A})$* is defined as the set of all operations $\hat r \colon U^n \to L$, such that there are $t^{\mathbf{A}} \sim s^{\mathbf{A}} \in \mathop{\mathrm{\mathsf{Clo}}}(\mathbf{A})$ with: $$\begin{aligned} \label{eq7} t^{\mathbf{A}}((l_1,u_1),\ldots,(l_n,u_n)) &= (t^{\mathbf{L}}(\mathbf l) + \hat t(\mathbf u), t^{\mathbf{U}}(\mathbf u))\\ \label{eq8} s^{\mathbf{A}}((l_1,u_1),\ldots,(l_n,u_n)) &= (t^{\mathbf{L}}(\mathbf l) + \hat t(\mathbf u) + \hat r(\mathbf u), t^{\mathbf{U}}(\mathbf u))\end{aligned}$$ *Notation 1*. In the following, we will stick to the following convention: function symbols with a hat will always denote operations from some power of $U$ to $L$. For operations $t,s\colon A^n \to A$, and $\hat r \colon U^n \to L$ as in ([\[eq7\]](#eq7){reference-type="ref" reference="eq7"}) and ([\[eq8\]](#eq8){reference-type="ref" reference="eq8"}), we are slightly going to abuse notation and write $s = t + \hat r$ and $\hat r = (s-t)$. We next show that $\mathop{\mathrm{Diff}}_0(\mathbf{A})$ is indeed a clonoid from $\mathbf{U}$ to $\mathbf{L}$ (extended by the constant $0$). [\[lemma:difference\]]{#lemma:difference label="lemma:difference"}   Let $\mathbf{A}= \mathbf{L}\otimes^{T,0} \mathbf{U}$ be a Mal'tsev algebra given as wreath product. 
Then: (1) for all $t \in \mathop{\mathrm{\mathsf{Clo}}}(\mathbf{A})$ and $\hat r \in \mathop{\mathrm{Diff}}_0(\mathbf{A})$, also $t+\hat r \in \mathop{\mathrm{\mathsf{Clo}}}(\mathbf{A})$; (2) $\mathop{\mathrm{Diff}}_0(\mathbf{A})$ is a $(\mathbf{U},(\mathbf{L},0))$-clonoid. *Proof.* To prove (1), let $t \in \mathop{\mathrm{\mathsf{Clo}}}(\mathbf{A})$ and $\hat r \in \mathop{\mathrm{Diff}}_0(\mathbf{A})$. By definition of the difference clonoid, $\hat r = s_1 -s_2$ for two terms $s_1, s_2 \in \mathop{\mathrm{\mathsf{Clo}}}(\mathbf{A})$, with $s_1 \sim s_2$. In particular, $s_1^{\mathbf{U}} = s_2^{\mathbf{U}}$. For any Mal'tsev term $m \in \mathop{\mathrm{\mathsf{Clo}}}(\mathbf{A})$, necessarily $\hat m(u,u,v) = \hat m(v,u,u) = 0$ holds. This implies that $$t+\hat r = m(t,s_2,s_1) \in\mathop{\mathrm{\mathsf{Clo}}}(\mathbf{A}).$$ We next prove (2). We only need to verify that $\mathop{\mathrm{Diff}}_0(\mathbf{A})$ is closed under composition with $\mathop{\mathrm{\mathsf{Clo}}}(\mathbf{U})$ (from the inside), respectively $\mathop{\mathrm{\mathsf{Clo}}}((\mathbf{L},0))$ (from the outside). To see that $\mathop{\mathrm{Diff}}_0(\mathbf{A})$ is closed under $(\mathbf{L},0)$, note that $0 \in \mathop{\mathrm{Diff}}_0(\mathbf{A})$, as $t-t = 0$, for every term $t\in \mathop{\mathrm{\mathsf{Clo}}}(\mathbf{A})$. Further, $\mathop{\mathrm{Diff}}_0(\mathbf{A})$ is closed under $+$; to see this, let $\hat r_1,\hat r_2 \in \mathop{\mathrm{Diff}}_0(\mathbf{A})$. By (1), we know that $t+\hat r_1 \in \mathop{\mathrm{\mathsf{Clo}}}(\mathbf{A})$, for some term $t\in \mathop{\mathrm{\mathsf{Clo}}}(\mathbf{A})$. Again by (1), also $(t+\hat r_1) + \hat r_2 \in \mathop{\mathrm{\mathsf{Clo}}}(\mathbf{A})$, which shows that $\hat r_1+\hat r_2 \in \mathop{\mathrm{Diff}}_0(\mathbf{A})$. For all unary $e^{\mathbf{L}} \in \mathop{\mathrm{\mathsf{Clo}}}(\mathbf{L})$, and $t\sim s$ with $\hat r = t-s$, note that $e^{\mathbf{A}} t-e^{\mathbf{A}} s = e^{\mathbf{L}} \circ \hat r \in \mathop{\mathrm{Diff}}_0(\mathbf{A})$. 
Since $\mathbf{L}$ is affine, $\mathop{\mathrm{\mathsf{Clo}}}(\mathbf{L},0)$ is generated by $+$ and its unary terms, thus $\mathop{\mathrm{Diff}}_0(\mathbf{A})$ is closed under $(\mathbf{L},0)$. To see that $\mathop{\mathrm{Diff}}_0(\mathbf{A})$ is closed under $\mathbf{U}$ from the inside, simply notice that $t(x_1,\ldots,x_n) \sim s(x_1,\ldots,x_n)$ implies $t(f_1(\mathbf x),\ldots,f_n(\mathbf x)) \sim s(f_1(\mathbf x),\ldots,f_n(\mathbf x))$, for all terms $f_1,\ldots,f_n$. If $\hat r = t^{\mathbf{A}}-s^{\mathbf{A}}$, then $\hat r \circ (f_1^{\mathbf{U}},\ldots,f_n^{\mathbf{U}}) = t\circ (f_1^{\mathbf{U}},\ldots,f_n^{\mathbf{U}})- s\circ (f_1^{\mathbf{U}},\ldots,f_n^{\mathbf{U}}) \in \mathop{\mathrm{Diff}}_0(\mathbf{A})$. ◻ We remark that the choice of the constant $0 \in L$ is not relevant in this construction: for every $c \in L$ the map $\hat r \mapsto \hat r + c$ is an isomorphism between the $(\mathbf{U},(\mathbf{L},0))$-clonoid $\mathop{\mathrm{Diff}}_0(\mathbf{A})$ and the $(\mathbf{U},(\mathbf{L}',c))$-clonoid $\mathop{\mathrm{Diff}}_c(\mathbf{A})$ (where $f^{\mathbf{L}'}(\mathbf l) = f^{\mathbf{L}}(\mathbf l-(c,c,\ldots,c))+c$). Our goal in the next section is to reduce the subpower membership problem to a version of the subpower membership problem for the difference clonoid, in which we ask for membership of a tuple $\mathbf l \in L^k$ in the subalgebra of $\mathbf{L}$ given by the image of $\mathbf u_1,\ldots,\mathbf u_n \in U^k$ under the clonoid.
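The wreath-product decomposition and the resulting difference functions can be illustrated by a small toy computation. Below, $L = {\mathbb Z}_3$ and $U = {\mathbb Z}_2$, and the twist `uhat` is an arbitrary choice of ours; the single operation `f` only sketches the shape $t^{\mathbf{A}}((l_1,u_1),(l_2,u_2)) = (t^{\mathbf{L}}(\mathbf l) + \hat t(\mathbf u), t^{\mathbf{U}}(\mathbf u))$ and is not claimed to generate a full Mal'tsev clone:

```python
q, p = 3, 2          # |L| = 3, |U| = 2, coprime

def uhat(u1, u2):    # hypothetical twist U^2 -> L (deliberately asymmetric)
    return (u1 * (1 - u2)) % q

def f(x, y):
    """Binary operation on pairs (l, u), in wreath-product shape."""
    (l1, u1), (l2, u2) = x, y
    return ((l1 + l2 + uhat(u1, u2)) % q, (u1 + u2) % p)

# t(x,y) = f(x,y) and s(x,y) = f(y,x) induce the same operation on L x U
# (componentwise addition), so t ~ s; their difference rhat = s - t lives
# in the L-coordinate and depends only on the U-coordinates:
def rhat(u1, u2):
    return (uhat(u2, u1) - uhat(u1, u2)) % q

A = [(l, u) for l in range(q) for u in range(p)]
for x in A:
    for y in A:
        lt, ut = f(x, y)
        ls, us = f(y, x)
        assert us == ut                            # equal U-components
        assert ls == (lt + rhat(x[1], y[1])) % q   # s = t + rhat
assert all(rhat(u, u) == 0 for u in range(p))      # rhat vanishes on the diagonal
```

So $\hat r$ here is a typical element of a difference clonoid: it records how two $\sim$-equivalent terms disagree in the lower coordinate.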
In fact, it will be more convenient for us to ask for a compact representation, which is why we define the following problem for a clonoid $\mathcal C$ from $\mathbf{U}$ to $\mathbf{L}$.\ $\mathop{\mathrm{CompRep}}(\mathcal C)$:\ [Input:]{.smallcaps} A list of tuples $\mathbf u_1,\ldots,\mathbf u_n \in U^k$.\ [Output:]{.smallcaps} A compact representation of $\mathcal C(\mathbf u_1,\ldots,\mathbf u_n) = \{f(\mathbf u_1,\ldots,\mathbf u_n) \mid f \in \mathcal C \} \leq \mathbf{L}^{k}$\ In the case of the difference clonoid $\mathcal C = \mathop{\mathrm{Diff}}_0(\mathbf{A})$ the image algebra $\mathbf{L}$ is affine and contains a constant $0$. This problem is then equivalent to finding a generating set of $\mathcal C(\mathbf u_1,\ldots,\mathbf u_n)$ as a subgroup of $(L,+,0,-)^k$ of polynomial size. By running Gaussian elimination (generalized to finite Abelian groups), or simply applying Theorem [\[theorem:SMPsupernilpotent\]](#theorem:SMPsupernilpotent){reference-type="ref" reference="theorem:SMPsupernilpotent"}, one can then compute a compact representation of $\mathcal C(\mathbf u_1,\ldots,\mathbf u_n)$. # The subpower membership problem of wreath products {#sect:reduction} In this section we discuss our main methodological results. We show that, in some cases, the subpower membership problem $\mathop{\mathrm{SMP}}(\mathbf{L}\otimes \mathbf{U})$ of a wreath product can be reduced to $\mathop{\mathrm{CompRep}}(\mathbf{L}\times \mathbf{U})$ and $\mathop{\mathrm{CompRep}}(\mathcal C)$. We first show how such a reduction can be achieved relatively easily in the case where $\mathop{\mathrm{\mathsf{Clo}}}(\mathbf{L}\otimes \mathbf{U})$ contains $\mathop{\mathrm{\mathsf{Clo}}}(\mathbf{L}\times \mathbf{U})$ (i.e.
the identity map is a retraction of the clone homomorphism from Observation [\[observation:companion\]](#observation:companion){reference-type="ref" reference="observation:companion"}): [\[theorem:directprod\]]{#theorem:directprod label="theorem:directprod"} Let $\mathbf{A}= \mathbf{L}\otimes^{(0,T)} \mathbf{U}$ be a finite Mal'tsev algebra, and let $\mathcal C = \mathop{\mathrm{Diff}}_0(\mathbf{A})$. Further assume that $\mathop{\mathrm{\mathsf{Clo}}}(\mathbf{L}\times \mathbf{U}) \subseteq \mathop{\mathrm{\mathsf{Clo}}}(\mathbf{A})$. Then $\mathop{\mathrm{CompRep}}(\mathbf{A})$ (and hence also $\mathop{\mathrm{SMP}}(\mathbf{A})$) reduces in polynomial time to $\mathop{\mathrm{CompRep}}(\mathbf{L}\times\mathbf{U})$ and $\mathop{\mathrm{CompRep}}(\mathcal C)$. *Proof.* Let $\mathbf a_1,\ldots,\mathbf a_n \in A^k$ be an instance of $\mathop{\mathrm{CompRep}}(\mathbf{A})$; our goal is to find a compact representation of $\mathbf{B}= \mathop{\mathrm{Sg}}_{\mathbf{A}^k}(\mathbf a_1,\ldots,\mathbf a_n)$. Let us write $\mathbf l_i$ and $\mathbf u_i$ for the projection of $\mathbf a_i$ to $L^k$ and $U^k$ respectively. Let us further define $\mathbf{B}^+ = \mathop{\mathrm{Sg}}_{(\mathbf{L}\times \mathbf{U})^k}(\mathbf a_1,\ldots,\mathbf a_n)$. Then $$\begin{aligned} \mathbf{B}&= \{(t^{\mathbf{L}}(\mathbf l_1,\ldots, \mathbf l_n) + \hat t(\mathbf u_1,\ldots, \mathbf u_n), t^{\mathbf{U}}(\mathbf u_1,\ldots, \mathbf u_n)) \mid t \text{ is an }F\text{-term} \}, \text{ and }\\ \mathbf{B}^+ &= \{(t^{\mathbf{L}}(\mathbf l_1,\ldots, \mathbf l_n),t^{\mathbf{U}}(\mathbf u_1,\ldots, \mathbf u_n)) \mid t \text{ is an }F\text{-term}\}.\end{aligned}$$ Since $\mathop{\mathrm{\mathsf{Clo}}}(\mathbf{L}\times \mathbf{U}) \subseteq \mathop{\mathrm{\mathsf{Clo}}}(\mathbf{A})$, we can pick a Mal'tsev term of $\mathbf{A}$ that is of the form $m^{\mathbf{A}}((l_1,u_1),(l_2,u_2),(l_3,u_3)) = (l_1-l_2+l_3,m^{\mathbf{U}}(u_1,u_2,u_3))$.
Moreover, by Lemma [\[lemma:difference\]](#lemma:difference){reference-type="ref" reference="lemma:difference"}, every term $t^{\mathbf{A}} \in \mathop{\mathrm{\mathsf{Clo}}}(\mathbf{A})$ can be uniquely written as the sum of $t^{\mathbf{L}\times \mathbf{U}}$ (which by assumption is also in $\mathop{\mathrm{\mathsf{Clo}}}(\mathbf{A})$) and some $\hat t \in \mathcal C$. Thus, every element of $\mathbf{B}$ is equal to the sum of an element of $\mathbf{B}^+$ and an expression $\hat t(\mathbf u_1,\ldots,\mathbf u_n)$. Let $C^+$ be a compact representation of $\mathbf{B}^+$, and $\hat C$ a compact representation of $\mathcal C(\mathbf u_1,\ldots,\mathbf u_n)$. Then, it follows that every tuple in $\mathbf{B}$ can be written as $$\begin{aligned} \label{fml:red1} m(\ldots, m(\mathbf c_1,\mathbf d_2,\mathbf c_2), \ldots \mathbf d_n,\mathbf c_n) + \hat{\mathbf r}_1 - \hat{\mathbf s}_2 + \hat{\mathbf r}_2 - \ldots - \hat{\mathbf s}_n + \hat{\mathbf r}_n,\end{aligned}$$ for $\mathbf c_i, \mathbf d_i \in C^+$ and $\hat {\mathbf r}_i, \hat{\mathbf s}_i \in \hat C$. (We are aware that tuples in $C^+$ and $\hat C$ have different domains; here we follow the same convention as in Notation [Notation 1](#notation:diff){reference-type="ref" reference="notation:diff"}.) Moreover, in formula ([\[fml:red1\]](#fml:red1){reference-type="ref" reference="fml:red1"}), any pair $\mathbf c_i, \mathbf d_i$ (respectively $\hat{\mathbf r}_i, \hat{\mathbf s}_i$) witnesses a fork in the $i$-th coordinate. By our choice of $m$ it is easy to see that formula ([\[fml:red1\]](#fml:red1){reference-type="ref" reference="fml:red1"}) can be rewritten as $$\begin{aligned} m(\ldots, m(\mathbf c_1 + \hat{\mathbf r}_1,\mathbf d_2 + \hat{\mathbf s}_2,\mathbf c_2 + \hat{\mathbf r}_2), \ldots \mathbf d_n + \hat{\mathbf s}_n,\mathbf c_n + \hat{\mathbf r}_n).\end{aligned}$$ Thus the elements $\mathbf c_i + \hat{\mathbf r}_i, \mathbf d_i + \hat{\mathbf s}_i$ witness forks of $\mathbf{B}$ in the $i$-th coordinate.
If we define $D = \{\mathbf c +\hat{\mathbf r} \mid \mathbf c \in C^+,\hat{\mathbf r} \in \hat C \}$, then it follows that $\mathop{\mathrm{Sig}}(D) = \mathop{\mathrm{Sig}}(\mathbf B)$. Moreover $D \subset \mathbf B$, and it is of polynomial size in $n$ and $k$, as $|D| \leq |C^+|\cdot|\hat C|$. Thus $D$ can be thinned out in polynomial time to a compact representation of $\mathbf{B}$, which finishes our proof. ◻ We remark that, by following the proof of Theorem [\[theorem:directprod\]](#theorem:directprod){reference-type="ref" reference="theorem:directprod"}, finding *enumerated* compact representations in $\mathbf{A}$ can also be reduced to finding *enumerated* compact representations in $\mathbf{L}\times \mathbf{U}$ and $\mathcal C$ (if $\mathcal C$ is given by some finite set of operations that generate it as a clonoid). Unfortunately, the conditions of Theorem [\[theorem:directprod\]](#theorem:directprod){reference-type="ref" reference="theorem:directprod"} are not met for general wreath products, not even if $\mathbf{U}$ and $\mathbf{L}$ are both affine (the dihedral group $D_4$ can be shown to be a counterexample). But, if $\mathbf{U}$ is supernilpotent, then we are able to prove the following reduction, independently of the conditions of Theorem [\[theorem:directprod\]](#theorem:directprod){reference-type="ref" reference="theorem:directprod"}: [\[theorem:SMP\]]{#theorem:SMP label="theorem:SMP"} Let $\mathbf{A}= \mathbf{L}\otimes \mathbf{U}$ be a finite Mal'tsev algebra, and let $\mathcal C = \mathop{\mathrm{Diff}}_0(\mathbf{A})$ for some $0\in A$. Further, assume that $\mathbf{U}$ is supernilpotent. Then $\mathop{\mathrm{SMP}}(\mathbf{A})$ reduces in polynomial time to $\mathop{\mathrm{CompRep}}(\mathcal C)$. *Proof.* Let $\mathbf a_1,\ldots,\mathbf a_n , \mathbf b \in A^k$ be an instance of $\mathop{\mathrm{SMP}}(\mathbf{A})$; our goal is to check whether $\mathbf b \in \mathbf{B}= \mathop{\mathrm{Sg}}_{\mathbf{A}^k}(\mathbf a_1,\ldots,\mathbf a_n)$.
Let us write $\mathbf l_i$ and $\mathbf u_i$ for the projection of $\mathbf a_i$ to $L^k$ and $U^k$ respectively, and $\mathbf l_b$ and $\mathbf u_b$ for the projections of $\mathbf b$ to $L^k$ and $U^k$. Let $F$ be the signature of $\mathbf{A}$ and $\mathbf{L}\times \mathbf{U}$, and let $\mathbf{B}^+ = \mathop{\mathrm{Sg}}_{(\mathbf{L}\times \mathbf{U})^k}(\mathbf a_1,\ldots,\mathbf a_n)$. Then $$\begin{aligned} \mathbf{B}&= \{(t^{\mathbf{L}}(\mathbf l_1,\ldots, \mathbf l_n) + \hat t(\mathbf u_1,\ldots, \mathbf u_n), t^{\mathbf{U}}(\mathbf u_1,\ldots, \mathbf u_n)) \mid t \text{ is an }F\text{-term} \}, \text{ and }\\ \mathbf{B}^+ &= \{(t^{\mathbf{L}}(\mathbf l_1,\ldots, \mathbf l_n),t^{\mathbf{U}}(\mathbf u_1,\ldots, \mathbf u_n)) \mid t \text{ is an }F\text{-term}\}.\end{aligned}$$ Recall the definition of $t^{\mathbf{A}} \sim s^{\mathbf{A}}$ from Definition [\[definition:diffclonoid\]](#definition:diffclonoid){reference-type="ref" reference="definition:diffclonoid"}. If $T$ is a $\sim$-transversal set of $\{ t^{\mathbf{A}} \in \mathop{\mathrm{\mathsf{Clo}}}(\mathbf{A}) \mid t^{\mathbf{U}}(\mathbf u_1,\ldots, \mathbf u_n) = \mathbf u_b\}$, then clearly $\mathbf b \in \mathbf{B}$ iff there are $t \in T$ and $\mathbf d \in \mathcal C(\mathbf u_1,\ldots,\mathbf u_n)$ with $\mathbf b = t(\mathbf a_1,\ldots,\mathbf a_n) + \mathbf d$. So, intuitively speaking, the goal of this proof is to first compute such a transversal set, by computing an enumerated compact representation of $\{ (\mathbf l, \mathbf u) \in \mathbf{B}^+ \mid \mathbf u = \mathbf u_b \}$, and then use it together with a compact representation of $\mathcal C(\mathbf u_1,\ldots, \mathbf u_n)$ to check membership of $\mathbf b$ in $\mathbf{B}$. In practice, however, we need to consider a relation of higher arity than $\mathbf{B}^+$, since term operations of $\mathbf{L}\times \mathbf{U}$ are not uniquely determined by their values on $\mathbf a_1,\ldots,\mathbf a_n$.
So let $S$ be the degree of supernilpotence of $\mathbf{U}$ (and hence also of $\mathbf{L}\times \mathbf{U}$). If we think about $\mathbf a_1,\ldots,\mathbf a_n$ as the columns of a matrix of dimension $k\times n$, then let $\tilde{\mathbf a}_1,\ldots,\tilde{\mathbf a}_n \in A^l$ be its extension by rows that enumerate $H = \{(a_1,\ldots,a_n) \in A^n \mid |\{i \colon a_i \neq 0 \}| \leq S \}$ (hence $l \leq k + |A|^S \binom{n}{S}$). It follows from Theorem [\[theorem:SMPsupernilpotent\]](#theorem:SMPsupernilpotent){reference-type="ref" reference="theorem:SMPsupernilpotent"} that we can compute an enumerated compact representation $\tilde{C}$ of $\mathop{\mathrm{Sg}}_{(\mathbf{L}\times \mathbf{U})^l}(\tilde{\mathbf a}_1,\ldots,\tilde{\mathbf a}_n)$ in polynomial time in $n$ and $l$. So, every element in $\tilde{B} = \mathop{\mathrm{Sg}}_{(\mathbf{L}\times \mathbf{U})^l}(\tilde{\mathbf a}_1,\ldots,\tilde{\mathbf a}_n)$ can be written as $m(\ldots m(\tilde{\mathbf c}_1, \tilde{\mathbf d}_2, \tilde{\mathbf c}_2) \ldots \tilde{\mathbf d}_l,\tilde{\mathbf c}_l)$, for $(\tilde{\mathbf c}_i,p_{\tilde{\mathbf c_i}}) , (\tilde{\mathbf d}_i,p_{\tilde{\mathbf d_i}}) \in \tilde{C}$, where $\tilde{C}$ is of size at most $2l|A|^2$, and every element $\tilde{\mathbf c} \in \tilde{C}$ satisfies $p_{\tilde{\mathbf c}}(\tilde{\mathbf a}_1,\ldots,\tilde{\mathbf a}_n) = \tilde{\mathbf c}$ for the given circuit $p_{\tilde{\mathbf c}}$ of polynomial size. By Theorem [\[theorem:supernilpotent\]](#theorem:supernilpotent){reference-type="ref" reference="theorem:supernilpotent"}, in an $S$-supernilpotent algebra, every term operation is already completely determined by its values on the subset $H$.
It follows that every $n$-ary term operation of $\mathbf{L}\times\mathbf{U}$ can be uniquely described by a circuit $m(\ldots m(p_{\tilde{\mathbf c}_1},p_{\tilde{\mathbf d}_2},p_{\tilde{\mathbf c}_2}), \ldots p_{\tilde{\mathbf d}_l},p_{\tilde{\mathbf c}_l})$ for $\tilde{\mathbf c}_i, \tilde{\mathbf d}_i \in \tilde{C}$. By definition of $\sim$, it follows that also every $n$-ary term operation of $\mathbf{A}$ is $\sim$-equivalent to the operation given by a circuit $m(\ldots m(p_{\tilde{\mathbf c}_1},p_{\tilde{\mathbf d}_2},p_{\tilde{\mathbf c}_2}), \ldots p_{\tilde{\mathbf d}_l},p_{\tilde{\mathbf c}_l})$ for $\tilde{\mathbf c}_i, \tilde{\mathbf d}_i \in \tilde{C}$. We are, however, only interested in terms $t$ such that $t^{\mathbf{U}}$ maps $\mathbf u_1,\ldots, \mathbf u_n$ to the value $\mathbf u_b$. By Lemma [\[lemma:fixvalues\]](#lemma:fixvalues){reference-type="ref" reference="lemma:fixvalues"}, we can also compute an enumerated compact representation $\tilde{C}'$ of $\{ (\tilde{\mathbf l}, \tilde{\mathbf u}) \in \mathop{\mathrm{Sg}}_{(\mathbf{L}\times \mathbf{U})^l}(\tilde{\mathbf a}_1,\ldots,\tilde{\mathbf a}_n) \mid \tilde{\mathbf u}(i) = \mathbf u_b(i)$ for all $i =1,\ldots,k\}$ in polynomial time. (Although we only prove Lemma [\[lemma:fixvalues\]](#lemma:fixvalues){reference-type="ref" reference="lemma:fixvalues"} for fixing variables to constants, we remark that it can straightforwardly be generalized to fixing the value of the variables to domains $L\times \{u\}$. Alternatively, this can also be achieved by regarding $\mathop{\mathrm{Sg}}_{(\mathbf{L}\times \mathbf{U})^l}(\tilde{\mathbf a}_1,\ldots,\tilde{\mathbf a}_n)$ as a subalgebra of $\mathbf{U}^l\times\mathbf{L}^l$, which however would require us to work with relations on different domains.) If $\tilde{C}' = \emptyset$, then we output "False", as then $\mathbf u_b \notin \mathop{\mathrm{Sg}}_{\mathbf{U}^k}(\mathbf u_1,\ldots, \mathbf u_n)$.
Otherwise, let $C = \{ p_{\tilde{\mathbf c}}^{\mathbf{A}}(\mathbf a_1,\ldots,\mathbf a_n) \mid \tilde{\mathbf c} \in \tilde{C}' \}$. Also, let $\hat C$ be a compact representation of $\mathcal C(\mathbf u_1,\ldots, \mathbf u_n)$. By the above, every element of $\{ (\mathbf l, \mathbf u) \in \mathbf{B}\mid \mathbf u = \mathbf u_b \}$ is equal to the sum of an element $m^{\mathbf{A}}(\ldots, m^{\mathbf{A}}(\mathbf c_1,\mathbf d_2,\mathbf c_2), \ldots \mathbf d_n,\mathbf c_n)$ with $\mathbf c_i,\mathbf d_i \in C$ and an element of $\mathcal C(\mathbf u_1,\ldots, \mathbf u_n)$. Since $m$ is an affine Mal'tsev operation when restricted to $\{ (\mathbf l, \mathbf u) \in \mathbf{B}\mid \mathbf u = \mathbf u_b \}$, this means that $\mathbf b \in \mathbf{B}$ iff $\mathbf l_b$ is in the affine closure of all elements $\mathbf c + \hat{\mathbf r}$ with $\mathbf c \in C$ and $\hat{\mathbf r}\in \hat C$. But this can be checked in polynomial time (by generalized Gaussian elimination, or Theorem [\[theorem:SMPsupernilpotent\]](#theorem:SMPsupernilpotent){reference-type="ref" reference="theorem:SMPsupernilpotent"}), which finishes the proof. ◻ # Clonoids between affine algebras {#section:clonoids} We continue our paper with an analysis of clonoids between affine algebras to prove our main result, Theorem [\[theorem:main\]](#theorem:main){reference-type="ref" reference="theorem:main"}. For a prime $p$, let us write ${\mathbb Z}_p$ for the cyclic group of order $p$, i.e. ${\mathbb Z}_p = (\{0,1,\ldots,p-1\},+,0,-)$. Let us further define the idempotent reduct ${\mathbb Z}_p^{id} = (\{0,1,\ldots,p-1\}, x-y+z)$. Using the unary terms $a x = x + \cdots + x$ ($a$-times), for $a \in {\mathbb Z}_p$, we can regard ${\mathbb Z}_p$ as a vector space over the $p$-element field. More generally, using this notation, we will also consider finite Abelian groups $(L,+,0,-)$ as modules over ${\mathbb Z}_{|L|}$.
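The "generalized Gaussian elimination" invoked in the proofs above can be sketched in its simplest setting, namely subgroups of ${\mathbb Z}_q^k$ for a prime $q$. This is a sketch only: general finite Abelian groups need a Hermite/Smith-style variant, and the actual compact-representation machinery additionally tracks forks and witnessing circuits.

```python
def row_reduce(v, basis, q):
    """Reduce v against an echelon basis {pivot: vector}; the leading
    nonzero index of v strictly increases in every round."""
    v = [x % q for x in v]
    while True:
        j = next((i for i, x in enumerate(v) if x), None)
        if j is None or j not in basis:
            return v, j  # j is None iff v lies in the span of the basis
        c = v[j]         # basis[j] has leading entry 1 at position j
        v = [(x - c * y) % q for x, y in zip(v, basis[j])]

def echelon_basis(gens, q):
    """Thin a list of generators to an echelon basis of their span in Z_q^k."""
    basis = {}
    for g in gens:
        v, j = row_reduce(g, basis, q)
        if j is not None:
            inv = pow(v[j], -1, q)  # q prime, so v[j] is invertible
            basis[j] = [x * inv % q for x in v]
    return basis

def in_span(v, basis, q):
    return row_reduce(v, basis, q)[1] is None
```

For instance, over ${\mathbb Z}_3$ the generators $(1,2,0)$, $(0,1,1)$, $(1,0,1)$ thin out to a basis of size $2$, and each subsequent membership query costs a single reduction pass.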
For short, we are going to denote constant $1$-tuples by $\mathbf 1 = (1,1,\ldots,1) \in {\mathbb Z}_p^n$. For two vectors $\mathbf{a}, \mathbf x \in {\mathbb Z}_p^n$, we further denote by $\mathbf{a} \cdot \mathbf x = \sum_{i = 1}^n \mathbf{a}(i) \cdot \mathbf x(i)$ the standard inner product. Then $\mathop{\mathrm{\mathsf{Clo}}}({\mathbb Z}_p) = \{\mathbf x \mapsto \mathbf a \cdot \mathbf x \mid \mathbf a \in {\mathbb Z}_p^n \}$ and $\mathop{\mathrm{\mathsf{Clo}}}({\mathbb Z}_p^{id}) = \{ \mathbf x \mapsto \mathbf a \cdot \mathbf x \mid \mathbf a \in {\mathbb Z}_p^n, \mathbf{a} \cdot \mathbf 1 = 1 \}$. In this section, we are going to study clonoids between affine algebras $\mathbf{U}$ and $\mathbf{L}$, such that $|U| = p$ for some prime $p$, and $p \nmid |L|$. Since every such affine algebra $\mathbf{U}$ has $x-y+z$ as a term operation, it makes sense to study the special case $\mathbf{U}= {\mathbb Z}_p^{id}$. As we are in particular interested in difference clonoids, we furthermore can assume that $\mathbf{L}$ contains a constant operation $0$ (see Lemma [\[lemma:difference\]](#lemma:difference){reference-type="ref" reference="lemma:difference"}), and hence the operations of the Abelian group $(L,+,0,-)$. We remark that our analysis is structurally similar to (but not covered by) Fioravanti's classification of $({\mathbb Z}_p,{\mathbb Z}_q)$-clonoids [@fioravanti-clonoids]. ## $({\mathbb Z}_p^{id}, \mathbf{L})$-clonoids satisfying $p \nmid |L|$ and $f(x,x,\ldots,x) = 0$ Throughout this subsection, let $p$ be a prime, $\mathbf{L}= (L,+,0,-)$ an Abelian group with $p \nmid |L|$, and $\mathcal C$ a $({\mathbb Z}_p^{id}, \mathbf{L})$-clonoid satisfying $f(x,x,\ldots,x) = 0$ for all $f \in \mathcal C$ and $x \in {\mathbb Z}_p$. In other words, for every $n\in {\mathbb N}$, $\mathcal C$ maps all tuples from the diagonal $\Delta^n = \{(x,x,\ldots,x) \in {\mathbb Z}_p^n\}$ to $0$.
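The coefficient descriptions of $\mathop{\mathrm{\mathsf{Clo}}}({\mathbb Z}_p)$ and $\mathop{\mathrm{\mathsf{Clo}}}({\mathbb Z}_p^{id})$ can be checked by brute-force enumeration for small $p$ (`coeff_vectors` is our helper name for the set of vectors $\mathbf a$):

```python
from itertools import product

p = 3

def coeff_vectors(n, idempotent=False):
    """Coefficient vectors a of the n-ary term operations x -> a·x;
    the idempotent reduct keeps exactly those with a·1 = 1 (mod p)."""
    return [a for a in product(range(p), repeat=n)
            if not idempotent or sum(a) % p == 1]

assert len(coeff_vectors(2)) == p ** 2                    # all of the binary part
assert len(coeff_vectors(2, idempotent=True)) == p        # a(2) is forced by a(1)
assert (1, p - 1, 1) in coeff_vectors(3, idempotent=True)  # x - y + z
```

The last line confirms that the Mal'tsev operation $x - y + z$ itself survives in the idempotent reduct, as its coefficients sum to $1$ modulo $p$.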
We are going to prove that $\mathcal C$ is generated by its binary elements, and therefore by any set of generators $B$ of $\mathcal C^{(2)} \leq \mathbf{L}^{{\mathbb Z}_p^2}$. Moreover, from $B$, we are going to construct a canonical generating set of the $n$-ary functions $\mathcal C^{(n)} \leq \mathbf{L}^{{\mathbb Z}_p^n}$. We are, in particular, going to use the following set of coefficient vectors for every $n>2$: $$C_n = \{ \mathbf a \in {\mathbb Z}_p^n \mid \exists i > 1 \colon \mathbf a(1) = \mathbf a(2) = \ldots = \mathbf a(i-1) = 0, \mathbf a(i) = 1 \}.$$ Every 2-dimensional subspace $V\leq {\mathbb Z}_p^n$ containing the diagonal $\Delta^n$ has a unique parameterization by the map $$\begin{aligned} e_{\mathbf c}(x,y) &= x (\mathbf 1 - \mathbf c) + y \mathbf c = (x, (1-\mathbf c(2))x + \mathbf c(2)y,\ldots,(1-\mathbf c(n))x + \mathbf c(n)y),\end{aligned}$$ for some $\mathbf c \in C_n$. *Proof.* To see this, note that $V$ contains $\mathbf 1$, and can therefore be parameterized by $e_{\mathbf d}(x,y)$, for some $\mathbf d \notin \Delta^n$. So there is an index $i$ with $\mathbf d(1) = \ldots = \mathbf d(i-1) \neq \mathbf d(i)$. If $\mathbf d \notin C_n$, then we define $\mathbf c = (\mathbf d(i)-\mathbf d(1))^{-1}(\mathbf d - \mathbf d(1) \mathbf 1)$; clearly $\mathbf c \in C_n$, and $\mathbf c$ and $\mathbf 1$ still generate $V$. It is further not hard to see that different elements of $C_n$ generate different planes together with $\mathbf 1$, thus we obtain a unique parameterization of $V$ by $e_{\mathbf c}(x,y)$. ◻ [\[lemma:binary1\]]{#lemma:binary1 label="lemma:binary1"} Let $f \in \mathcal C^{(2)}$. Then, there is a function $f_n \in \mathcal C^{(n)}$, such that $$f_n(x_1,x_2,\ldots,x_n) = \begin{cases} f(x_1,x_2) \text{ if } x_2 = x_3 = \ldots = x_n\\ 0 \text{ else.} \end{cases}$$ *Proof.* We prove the lemma by induction on $n$. For $n=2$, we simply set $f_2 = f$.
For the induction step $n \to n+1$, we first define $t_{n+1}(x_1,x_2,\ldots,x_n,x_{n+1})$ as the sum $$\begin{aligned} \sum_{\mathbf a \in {\mathbb Z}_p^{n-1}} f_n(x_1,x_2 + \mathbf a(1)(x_{n+1}-x_n),\ldots,x_n+\mathbf a(n-1)(x_{n+1}-x_n)) \\ -\sum_{\mathbf a \in {\mathbb Z}_p^{n-1}} f_n(x_1,x_1 + \mathbf a(1)(x_{n+1}-x_n),\ldots,x_1+\mathbf a(n-1)(x_{n+1}-x_n)).\end{aligned}$$ Note that, if $x_{n+1} \neq x_n$, then $t_{n+1}$ evaluates to $\sum_{\mathbf a \in {\mathbb Z}_p^{n-1}}f_n(x_1,\mathbf a) - \sum_{\mathbf a \in {\mathbb Z}_p^{n-1}}f_n(x_1,\mathbf a) = 0$. On the other hand, if $x_n = x_{n+1}$, then the second sum is equal to $0$, while the first one is equal to $p^{n-1} f_n(x_1,x_2,\ldots,x_n)$. By the induction hypothesis, the function $f_{n+1} = p^{-(n-1)} t_{n+1}$ satisfies the statement of the lemma (note that $p^{-(n-1)}$ exists modulo $|L|$, since $p \nmid |L|$). ◻ We can prove an analogous statement for all 2-dimensional subspaces of ${\mathbb Z}_p^n$ containing $\Delta^n$: [\[lemma:binary2\]]{#lemma:binary2 label="lemma:binary2"} Let $f \in \mathcal C^{(2)}$, and $\mathbf c \in C_n$. Then there is a function $f^{\mathbf c} \in \mathcal C^{(n)}$, such that $$f^{\mathbf c}(x_1,x_2,\ldots,x_n) = \begin{cases} f(x,y) \text{ if } (x_1,x_2,\ldots,x_n) = e_{\mathbf c}(x, y) \\ 0 \text{ else.} \end{cases}$$ *Proof.* Let $\mathbf c \in C_n$. There is a matrix $\mathbf T \in {\mathbb Z}_p^{n\times n}$, such that $\mathbf T \cdot \mathbf 1 = \mathbf 1$ and $\mathbf T \cdot (\mathbf 1 -\mathbf c) = \mathbf e_1$. Let $f_n$ be as in Lemma [\[lemma:binary1\]](#lemma:binary1){reference-type="ref" reference="lemma:binary1"}, and set $f^{\mathbf c} := f_n\circ \mathbf T$. Note that by the first condition, all rows of $\mathbf T$ sum up to $1$, hence $\mathbf T$ can be expressed by terms of ${\mathbb Z}_p^{id}$. Then $f^{\mathbf c}(e_{\mathbf c}(x,y)) = f_n(\mathbf T(x(\mathbf 1 -\mathbf c) + y\mathbf c)) = f_n(x \mathbf e_1 + y(\mathbf 1-\mathbf e_1)) = f(x,y)$, and $f^{\mathbf c}(\mathbf x) = 0$ for $\mathbf x \notin e_{\mathbf c}({\mathbb Z}_p^2)$.
◻ We are now ready to prove the main result of this section: [\[lemma:idempotent\]]{#lemma:idempotent label="lemma:idempotent"} Let $\mathcal C$ be a $({\mathbb Z}_p^{id}, \mathbf{L})$-clonoid satisfying $\forall f \in \mathcal C, x \in {\mathbb Z}_p \colon f(x,\ldots,x) = 0$, and let $B$ be a generating set of $\mathcal C^{(2)} \leq \mathbf{L}^{{\mathbb Z}_p^2}$. Then $\mathcal C$ is the $({\mathbb Z}_p^{id}, \mathbf{L})$-clonoid generated by $B$, and $B_{n} := \{ f^{\mathbf c} \mid f \in B, \mathbf c \in C_n \}$ is a generating set of $\mathcal C^{(n)}$ in $\mathbf{L}^{{\mathbb Z}_p^n}$. *Proof.* For any $g \in \mathcal C^{(n)}$ and $\mathbf c \in C_n$, let us define the binary operation $g_{\mathbf c}(x,y) = g(e_{\mathbf c}(x,y)) \in \mathcal C^{(2)}$. By Lemma [\[lemma:binary2\]](#lemma:binary2){reference-type="ref" reference="lemma:binary2"}, $g_{\mathbf c}$ generates a function $g_{\mathbf c}^{\mathbf c} \in \mathcal C^{(n)}$ that agrees with $g$ on all tuples of the form $e_{\mathbf c}(x,y)$, and that is $0$ elsewhere. Since every point of ${\mathbb Z}_p^n \setminus \Delta^n$ is in the image of a unique map $e_{\mathbf c}$, we get $g = \sum_{\mathbf c \in C_n} g_{\mathbf c}^{\mathbf c}$. Every element of the form $g_{\mathbf c}^{\mathbf c}$ can clearly be written as a linear combination of elements $f^{\mathbf c}$, where $f \in B$. It follows that $B_{n}$ generates $\mathcal C^{(n)}$ in $\mathbf{L}^{{\mathbb Z}_p^n}$, and that the clonoid generated by $B$ is $\mathcal C$. ◻ We remark that if $\mathbf{L}= {\mathbb Z}_q$ for a prime $q\neq p$, and $B$ is a basis of the vector space $\mathcal C^{(2)} \leq \mathbf{L}^{{\mathbb Z}_p^2}$, then also $B_n$ is a basis.
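The geometric fact driving the last three lemmas, namely that the planes $e_{\mathbf c}({\mathbb Z}_p^2)$ for $\mathbf c \in C_n$ cover each point of ${\mathbb Z}_p^n \setminus \Delta^n$ exactly once, can be verified numerically for small parameters (a sketch for $p = n = 3$; `C_n` and `e` are our helper names):

```python
from itertools import product

p, n = 3, 3

def C_n():
    """Coefficient vectors: leading zeros, then a 1, then arbitrary entries."""
    vecs = []
    for a in product(range(p), repeat=n):
        i = next((j for j, x in enumerate(a) if x != 0), None)
        if i is not None and i >= 1 and a[i] == 1:
            vecs.append(a)
    return vecs

def e(c, x, y):
    """The parameterization e_c(x, y) = x(1 - c) + y c."""
    return tuple((x * (1 - ci) + y * ci) % p for ci in c)

# one vector per plane through the diagonal: |C_n| = (p^{n-1} - 1)/(p - 1)
assert len(C_n()) == (p ** (n - 1) - 1) // (p - 1)

# every off-diagonal point of Z_p^n lies in the image of exactly one e_c:
cover = {}
for c in C_n():
    for x, y in product(range(p), repeat=2):
        if x != y:
            cover.setdefault(e(c, x, y), set()).add(c)
assert len(cover) == p ** n - p
assert all(len(planes) == 1 for planes in cover.values())
```

This is exactly the partition used in the proof of Lemma [\[lemma:idempotent\]](#lemma:idempotent){reference-type="ref" reference="lemma:idempotent"} to split $g$ into the summands $g_{\mathbf c}^{\mathbf c}$.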
The generating set $B_{n}$ can be used to efficiently solve the following version of the subpower membership problem for $\mathcal C$: [\[lemma:SMP\]]{#lemma:SMP label="lemma:SMP"} Let $\mathcal C$ be a $({\mathbb Z}_p^{id},\mathbf{L})$-clonoid satisfying $\forall f \in \mathcal C, x \in {\mathbb Z}_p \colon f(x,\ldots,x) = 0$. Then we can solve $\mathop{\mathrm{CompRep}}(\mathcal C)$ in polynomial time. *Proof.* By Lemma [\[lemma:idempotent\]](#lemma:idempotent){reference-type="ref" reference="lemma:idempotent"}, $\mathcal C^{(n)}$ is the linear closure of $B_{n}$. Thus $\mathcal C(\mathbf u_1,\ldots, \mathbf u_n)$ is equal to the linear closure of $B_{n}(\mathbf u_1,\ldots, \mathbf u_n) := \{ f^{\mathbf c}(\mathbf u_1,\ldots, \mathbf u_n) \mid f \in B, \mathbf c \in C_n \}$. Note that the $i$-th entry $f^{\mathbf c}(\mathbf u_1,\ldots, \mathbf u_n)(i)$ of such a generating element can only be different from $0$ if $(\mathbf u_1,\ldots, \mathbf u_n)(i)$ lies in the 2-dimensional subspace generated by the diagonal $\Delta^n$ and $\mathbf c$. Thus, there are at most $k$ many vectors $\mathbf c \in C_n$ such that $f^{\mathbf c}(\mathbf u_1,\ldots, \mathbf u_n) \neq \mathbf 0$; let $\mathbf c_1,\ldots,\mathbf c_l$ be an enumeration of them. Clearly $D= \{ f^{\mathbf c}(\mathbf u_1,\ldots, \mathbf u_n) \mid f \in B, \mathbf c \in \{\mathbf c_1,\ldots,\mathbf c_l\} \}$ generates $\mathcal C(\mathbf u_1,\ldots, \mathbf u_n)$; note that we can compute it in time $O(kn)$. From the generating set $D$ we can compute a compact representation of $\mathcal C(\mathbf u_1,\ldots, \mathbf u_n)$ in polynomial time (by generalized Gaussian elimination, or Theorem [\[theorem:SMPsupernilpotent\]](#theorem:SMPsupernilpotent){reference-type="ref" reference="theorem:SMPsupernilpotent"}).
◻ ## General $({\mathbb Z}_p^{id}, \mathbf{L})$-clonoids For an arbitrary $({\mathbb Z}_p^{id}, \mathbf{L})$-clonoid $\mathcal C$, let us define the subclonoid $\mathcal C^{\Delta} = \{f\in \mathcal C \colon f(x,\ldots,x) = 0 \}$. We then show that every $f \in \mathcal C$ can be written in a unique way as the sum of an element of $\mathcal C^{\Delta}$, and a function that is generated by $\mathcal C^{(1)}$. For this, we need the following lemma: [\[lemma:diagonal\]]{#lemma:diagonal label="lemma:diagonal"} For any $f \in \mathcal C^{(n)}$, let us define $$f'(\mathbf x) = p^{(1-n)}\sum_{\substack{\mathbf a \in {\mathbb Z}_p^n\\ \mathbf a \cdot \mathbf 1 = 1}} f(\mathbf a \cdot \mathbf x,\mathbf a \cdot \mathbf x,\ldots,\mathbf a \cdot \mathbf x).$$ Then $f-f' \in \mathcal C^{\Delta}$, and $f'$ is generated by $\mathcal C^{(1)}$. *Proof.* By definition, $f'$ is in the clonoid generated by the unary function $f(x,x,\ldots,x) \in \mathcal C^{(1)}$. Thus, to prove the lemma, it is only left to show that $f-f' \in \mathcal C^{\Delta}$, or, in other words, that $f(\mathbf x) = f'(\mathbf x)$ for $\mathbf x \in \Delta^n$. But this is not hard to see, since $$f'(x,x,\ldots,x) = p^{(1-n)}\sum_{\substack{\mathbf a \in {\mathbb Z}_p^n\\ \mathbf a \cdot \mathbf 1 = 1}} f(x,x,\ldots,x) = f(x,x,\ldots,x).$$ ◻ It follows in particular from Lemma [\[lemma:diagonal\]](#lemma:diagonal){reference-type="ref" reference="lemma:diagonal"} and Lemma [\[lemma:idempotent\]](#lemma:idempotent){reference-type="ref" reference="lemma:idempotent"} that every $({\mathbb Z}_p^{id}, \mathbf{L})$-clonoid $\mathcal C$ is generated by any set $A \cup B$ such that $A$ generates $\mathcal C^{(1)}$ in $\mathbf{L}^{{\mathbb Z}_p}$ and $B$ generates $(\mathcal C^{\Delta})^{(2)}$ in $\mathbf{L}^{{\mathbb Z}_p^2}$. Note that the clonoid generated by $A$ does not need to be disjoint from $\mathcal C^{\Delta}$. We can, however, still prove results analogous to those of the previous subsection.
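The averaging construction of Lemma [\[lemma:diagonal\]](#lemma:diagonal){reference-type="ref" reference="lemma:diagonal"} can also be checked numerically; note that the identity $f'(x,\ldots,x) = f(x,\ldots,x)$ holds for an arbitrary function (membership of $f'$ in the clonoid is the only place where $f \in \mathcal C$ is needed). A sketch for $p = 3$, $|L| = 2$, $n = 2$, with an arbitrarily chosen test function `f`:

```python
from itertools import product

p, q, n = 3, 2, 2                  # p does not divide q = |L|
inv = pow(p ** (n - 1), -1, q)     # p^{-(n-1)} exists modulo q since p ∤ q

def fprime(f, x):
    """The averaged function f' from the lemma."""
    dot = lambda a: sum(ai * xi for ai, xi in zip(a, x)) % p
    total = sum(f((dot(a),) * n)
                for a in product(range(p), repeat=n) if sum(a) % p == 1)
    return (inv * total) % q

def f(x):                          # an arbitrary function Z_3^2 -> Z_2
    return (x[0] + x[1] * x[1] + x[0] * x[1]) % q

# f - f' vanishes on the diagonal, i.e. f'(x,x) = f(x,x) for all x:
assert all(fprime(f, (x, x)) == f((x, x)) for x in range(p))
# ... while off the diagonal, f and f' may well differ:
assert fprime(f, (0, 2)) != f((0, 2))
```

The summation ranges over the $p^{n-1}$ coefficient vectors with $\mathbf a \cdot \mathbf 1 = 1$, matching the factor $p^{(1-n)}$ in front.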
[\[lemma:linbasisgeneral\]]{#lemma:linbasisgeneral label="lemma:linbasisgeneral"} Let $\mathcal C$ be a $({\mathbb Z}_p^{id},\mathbf{L})$-clonoid, let $A$ be a generating set of $\mathcal C^{(1)} \leq \mathbf{L}^{{\mathbb Z}_p}$ and $B$ a generating set of $(\mathcal C^{\Delta})^{(2)} \leq \mathbf{L}^{{\mathbb Z}_p^2}$. For every $n$, let us define $A_n = \{ \sum_{\mathbf a \in {\mathbb Z}_p^n, \mathbf a \cdot \mathbf 1 = 1} f(\mathbf a \cdot \mathbf x) \mid f \in A \}$ and let $B_{n}$ be defined as in Lemma [\[lemma:idempotent\]](#lemma:idempotent){reference-type="ref" reference="lemma:idempotent"}. Then $A_n \cup B_{n}$ is a generating set of $\mathcal C^{(n)}$ in $\mathbf{L}^{{\mathbb Z}_p^n}$. *Proof.* We already know from Lemma [\[lemma:idempotent\]](#lemma:idempotent){reference-type="ref" reference="lemma:idempotent"} that $B_{n}$ generates $(\mathcal C^{\Delta})^{(n)} \leq \mathbf{L}^{{\mathbb Z}_p^n}$. By Lemma [\[lemma:diagonal\]](#lemma:diagonal){reference-type="ref" reference="lemma:diagonal"}, every element $f\in \mathcal C^{(n)}$ can be uniquely written as the sum of $f'$ and $f-f'$. Furthermore $f'$, by definition, is generated by $A_n$, and $f-f'$ is in $(\mathcal C^{\Delta})^{(n)}$, which finishes our proof. ◻ Lemma [\[lemma:linbasisgeneral\]](#lemma:linbasisgeneral){reference-type="ref" reference="lemma:linbasisgeneral"} allows us to straightforwardly generalize Lemma [\[lemma:SMP\]](#lemma:SMP){reference-type="ref" reference="lemma:SMP"} to arbitrary $({\mathbb Z}_p^{id},\mathbf{L})$-clonoids: [\[lemma:SMP2\]]{#lemma:SMP2 label="lemma:SMP2"} Let $\mathcal C$ be a $({\mathbb Z}_p^{id},\mathbf{L})$-clonoid. Then $\mathop{\mathrm{CompRep}}(\mathcal C) \in \mathop{\mathrm{\mathsf{P}}}$. *Proof.* Let $A_n$ and $B_n$ be defined as in Lemma [\[lemma:linbasisgeneral\]](#lemma:linbasisgeneral){reference-type="ref" reference="lemma:linbasisgeneral"}.
Our goal is to compute a compact representation of $\mathcal C(\mathbf u_1,\ldots, \mathbf u_n)$ for some given $\mathbf u_1,\ldots, \mathbf u_n \in {\mathbb Z}_p^k$. By Lemma [\[lemma:linbasisgeneral\]](#lemma:linbasisgeneral){reference-type="ref" reference="lemma:linbasisgeneral"}, every $g \in \mathcal C$ decomposes into the sum of $g'$ and $g - g'$, where $g'$ is generated by $A_n$ and $g - g'$ is generated by $B_{n}$. Thus any image $g(\mathbf u_1,\ldots, \mathbf u_n)$ is in the linear closure of the tuples $f(\mathbf u_1,\ldots, \mathbf u_n)$ for $f \in A_n$, together with $B_{n}(\mathbf u_1,\ldots, \mathbf u_n) = \{f^{\mathbf c}(\mathbf u_1,\ldots, \mathbf u_n) \mid f \in B, \mathbf c \in C_n\}$, in $\mathbf{L}^k$. There are at most $|A|$ many tuples of the first form. Furthermore, as in the proof of Lemma [\[lemma:SMP\]](#lemma:SMP){reference-type="ref" reference="lemma:SMP"} we can compute a generating set of $B_{n}(\mathbf u_1,\ldots, \mathbf u_n)$ in polynomial time. By generalized Gaussian elimination (or Theorem [\[theorem:SMPsupernilpotent\]](#theorem:SMPsupernilpotent){reference-type="ref" reference="theorem:SMPsupernilpotent"}), we can obtain a compact representation from these generators in polynomial time. ◻ Lemma [\[lemma:SMP2\]](#lemma:SMP2){reference-type="ref" reference="lemma:SMP2"} allows us to finish the proof of our main result: [\[theorem:main\]]{#theorem:main label="theorem:main"} Let $\mathbf{A}$ be a finite Mal'tsev algebra, with a central series $0_\mathbf{A}< \rho < 1_\mathbf{A}$ such that $|\mathbf{A}/\rho| = p$ is a prime, and the blocks of $\rho$ are of size coprime to $p$. Then $\mathop{\mathrm{SMP}}(\mathbf{A}) \in \mathop{\mathrm{\mathsf{P}}}$. *Proof.* By Theorem [\[theorem:wreath\]](#theorem:wreath){reference-type="ref" reference="theorem:wreath"}, $\mathbf{A}$ is isomorphic to a wreath product $\mathbf{L}\otimes \mathbf{U}$, such that $\mathbf{U}$, $\mathbf{L}$ are affine with $|U| = p$ and $|L|$ coprime to $p$.
By Theorem [\[theorem:SMP\]](#theorem:SMP){reference-type="ref" reference="theorem:SMP"}, $\mathop{\mathrm{SMP}}(\mathbf{A})$ reduces to $\mathop{\mathrm{CompRep}}(\mathop{\mathrm{Diff}}_0(\mathbf{A}))$ in polynomial time. The difference clonoid is a clonoid from $\mathbf{U}$ to $(\mathbf{L},0)$. Since both $\mathbf{L}$ and $\mathbf{U}$ are affine, and therefore have term operations describing $x-y+z$, $\mathop{\mathrm{Diff}}_0(\mathbf{A})$ is also a clonoid from ${\mathbb Z}_p^{id}$ to $(L,+,0,-)$. By Lemma [\[lemma:SMP2\]](#lemma:SMP2){reference-type="ref" reference="lemma:SMP2"}, $\mathop{\mathrm{CompRep}}(\mathop{\mathrm{Diff}}_0(\mathbf{A}))$ is solvable in polynomial time, which finishes the proof. ◻ [\[cor:main\]]{#cor:main label="cor:main"} For every nilpotent Mal'tsev algebra $\mathbf{A}$ with $|A| = pq$ for distinct primes $p \neq q$, we have $\mathop{\mathrm{SMP}}(\mathbf{A}) \in \mathop{\mathrm{\mathsf{P}}}$. *Proof.* If $\mathbf{A}$ is affine, then the result holds by (generalized) Gaussian elimination. So assume that $\mathbf{A}$ is 2-nilpotent, but not affine. Then $\mathbf{A}$ is isomorphic to $\mathbf{L}\otimes \mathbf{U}$, and wlog. $|L|=q$ and $|U|=p$. Then the result follows directly from Theorem [\[theorem:main\]](#theorem:main){reference-type="ref" reference="theorem:main"}. ◻ # Discussion {#sect:discussion} In Theorem [\[theorem:main\]](#theorem:main){reference-type="ref" reference="theorem:main"} we proved that every Mal'tsev algebra that can be written as a wreath product $\mathbf{L}\otimes \mathbf{U}$ with $|U|=p$ and $p \nmid |L|$ has a tractable subpower membership problem. But, since the reduction discussed in Theorem [\[theorem:SMP\]](#theorem:SMP){reference-type="ref" reference="theorem:SMP"} extends beyond this case, it is natural to ask whether the tractability also extends to all those cases: **Question 1**.
*Is $\mathop{\mathrm{SMP}}(\mathbf{L}\otimes \mathbf{U}) \in \mathop{\mathrm{\mathsf{P}}}$ for every supernilpotent Mal'tsev algebra $\mathbf{U}$?* In particular, if $\mathbf{U}$ is affine, Question [Question 1](#question:tractable){reference-type="ref" reference="question:tractable"} asks whether the subpower membership problem of all finite 2-nilpotent Mal'tsev algebras can be solved in polynomial time. By Theorem [\[theorem:SMP\]](#theorem:SMP){reference-type="ref" reference="theorem:SMP"}, this reduces to computing compact representations with respect to the clonoids between affine algebras. Thus answering the question requires a better understanding of such clonoids. The very recent results in [@MW-clonoidsmodules] study such clonoids in the case where $\mathbf{U}$ has a distributive congruence lattice, and $\mathbf{L}$ is coprime to $\mathbf{U}$. Such clonoids are always generated by functions of bounded arity (as in Lemma [\[lemma:binary2\]](#lemma:binary2){reference-type="ref" reference="lemma:binary2"}), thus we expect a similar argument as in Lemma [\[lemma:SMP2\]](#lemma:SMP2){reference-type="ref" reference="lemma:SMP2"} to work for solving $\mathop{\mathrm{CompRep}}(\mathcal C)$. We remark that the fact that every *full* clonoid between such $\mathbf{U}$ and $\mathbf{L}$ is finitely generated was already implicitly used in [@KKK-CEQV-2nilpotent] to obtain a polynomial-time algorithm for checking whether two circuits over a 2-nilpotent algebra are equivalent. However, [@MW-clonoidsmodules] does not cover all clonoids between affine algebras; e.g., for the case $\mathbf{U}= {\mathbb Z}_p \times {\mathbb Z}_p$ and coprime $\mathbf{L}$ nothing is known so far.
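Since generalized Gaussian elimination recurs throughout as the basic tool for turning a generating set into a compact representation in the affine case, the following minimal sketch (our own illustration; the function name and example are not from the paper) shows the underlying computation over $\mathbb{Z}_p$:

```python
def row_reduce_mod_p(rows, p):
    """Reduce a list of vectors over Z_p (p prime) to a basis in row-echelon form."""
    basis, pivot_cols = [], []
    for row in rows:
        r = [x % p for x in row]
        # eliminate the entries of r at already-found pivot columns
        for b, c in zip(basis, pivot_cols):
            if r[c]:
                coef = r[c]
                r = [(x - coef * y) % p for x, y in zip(r, b)]
        if any(r):
            c = next(i for i, x in enumerate(r) if x)
            inv = pow(r[c], p - 2, p)  # inverse mod p via Fermat's little theorem
            basis.append([(x * inv) % p for x in r])
            pivot_cols.append(c)
    return basis

# three generators in Z_3^3; the third lies in the span of the first two
B = row_reduce_mod_p([[1, 2, 0], [2, 4, 1], [0, 0, 2]], 3)
```

Each input vector is reduced against the pivots found so far; the surviving normalized rows form a basis of the generated subspace, i.e., a compact generating set of size at most the dimension.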
A reason why much emphasis is placed on coprime $\mathbf{U}$ and $\mathbf{L}$ is that their wreath products $\mathbf{L}\otimes^{0,T} \mathbf{U}$ are not supernilpotent (for non-trivial operations $T$), and therefore not covered by Theorem [\[theorem:SMPsupernilpotent\]](#theorem:SMPsupernilpotent){reference-type="ref" reference="theorem:SMPsupernilpotent"}. In fact, finite Mal'tsev algebras in a finite language are supernilpotent if and only if they decompose into the direct product of nilpotent algebras of prime power size (see e.g. [@AM-commutator Lemma 7.6.]). Moreover, it is still consistent with our current knowledge that the conditions of Theorem [\[theorem:directprod\]](#theorem:directprod){reference-type="ref" reference="theorem:directprod"} are always met for coprime $\mathbf{L}$ and $\mathbf{U}$. This naturally leads to the question: **Question 1**. *Is $\mathop{\mathrm{\mathsf{Clo}}}(\mathbf{L}\times \mathbf{U}) \subseteq \mathop{\mathrm{\mathsf{Clo}}}(\mathbf{L}\otimes \mathbf{U})$, for all finite nilpotent Mal'tsev algebras $\mathbf{L}\otimes \mathbf{U}$ where $\mathbf{L}$ and $\mathbf{U}$ are of coprime size?* In fact, in an unpublished proof [@AMW-reduct], a positive answer to Question [Question 1](#question:product){reference-type="ref" reference="question:product"} is given in the case that $\mathop{\mathrm{\mathsf{Clo}}}(\mathbf{L}\otimes \mathbf{U})$ contains a constant operation. A more general version of Question [Question 1](#question:product){reference-type="ref" reference="question:product"} would ask whether every finite nilpotent Mal'tsev algebra $\mathbf{A}$ has a Mal'tsev term $m$, such that $(A,m)$ is supernilpotent. Lastly, we would like to mention that recently the property of *short pp-definitions* was suggested as a witness for $\mathop{\mathrm{SMP}}(\mathbf{A}) \in \mathop{\mathrm{\mathsf{coNP}}}$.
While Mal'tsev algebras that generate residually finite varieties have short pp-definitions [@BK-shortpp], it is not known whether this is true in the nilpotent case. Thus we ask: **Question 1**. *Does every finite nilpotent Mal'tsev algebra $\mathbf{A}$ have short pp-definitions (and hence $\mathop{\mathrm{SMP}}(\mathbf{A}) \in \mathop{\mathrm{\mathsf{NP}}}\cap \mathop{\mathrm{\mathsf{coNP}}}$)?* Studying Question [Question 1](#question:shortpp){reference-type="ref" reference="question:shortpp"} might especially be a useful approach to discuss the complexity for algebras of high nilpotent degree, if studying the corresponding difference clonoids turns out to be too difficult or technical an endeavor. # Proof of Lemma [\[lemma:fixvalues\]](#lemma:fixvalues){reference-type="ref" reference="lemma:fixvalues"} {#appendix:fixval} In this appendix we prove the second statement of Lemma [\[lemma:fixvalues\]](#lemma:fixvalues){reference-type="ref" reference="lemma:fixvalues"}, i.e., we show that for a given *enumerated* compact representation $R$ of a subpower $\mathbf R = \mathop{\mathrm{Sg}}_{\mathbf{A}^k}(X)$ of some Mal'tsev algebra, we can obtain an *enumerated* compact representation $R'$ of $\mathbf R \cap \{ \mathbf x \in A^k \mid \mathbf x(1) = a_1, \ldots, \mathbf x(m) = a_m\}$ for a given list of constants $a_1,\ldots,a_m$. In Algorithm [\[alg:fixvalue\]](#alg:fixvalue){reference-type="ref" reference="alg:fixvalue"} we describe the algorithm `Fix-value`$(R,a)$ that fixes the first coordinate of $\mathbf R$ to $a$; iterating this algorithm $m$ times results in the statement of the Lemma. We remark that `Fix-value`$(R,a)$ is based on the `Fix-values` algorithm in [@brady-CSPnotes Algorithm 5]; although, for simplicity, we only fix the value of one coordinate.
Lines 7 and 8 correspond to the call of the subroutine `Nonempty` in [@brady-CSPnotes Algorithm 5], with the difference that we compute all the elements of the set $T_{j} = \{ (x,y) \in \mathop{\mathrm{pr}}_{1,j} \mathbf R \mid x = a \}$, instead of computing a witness for $(a,y) \in T_{j}$ one at a time. We claim that the running time of `Fix-Value`$(R,a)$ is $O(|A|^2 \cdot n)$. For this, note that the exhaustive search in lines 7 and 8 simply applies $m$ recursively to elements from $\mathop{\mathrm{pr}}_{1,j} (R)$ until the set is closed under $m$, and then selects all values with $x_1=a$. Since $|\mathop{\mathrm{pr}}_{1,j}\mathbf R| \leq |A|^2$, this takes at most $|A|^2$ steps. For the same reason, the size of the defining circuits $C_j$ (when, e.g., stored all together as a circuit with multiple out-gates) is bounded by $|R| + |A|^2$. Since the for-loop of line 6 has at most $n$ iterations, it follows that both the running time of the algorithm and the size of the defining circuits in $R'$ are bounded by $O(|A|^2 \cdot n)$. If we then repeatedly call `Fix-value` to fix the values of the first $m$ coordinates of $\mathbf R$, this results in an algorithm that runs in time $O(|A|^2 \cdot nm)$. Thus, the only thing that remains to prove is that the algorithm `Fix-Value` is correct, i.e., that it indeed outputs an enumerated $R'$ with $\mathop{\mathrm{Sig}}(R') = \mathop{\mathrm{Sig}}(\mathbf R \cap \{ \mathbf x \in A^k \mid \mathbf x(1) = a \})$ (if the output is not empty). So assume that $(i,b,c) \in \mathop{\mathrm{Sig}}(\mathbf R \cap \{ \mathbf x \in A^k \mid \mathbf x(1) = a \})$. If $i = 1$, then clearly $(i,b,c) = (1,a,a)$, which is in $\mathop{\mathrm{Sig}}(R')$. So let us assume wlog. that $i > 1$. Since $R$ is a compact representation of $\mathbf R$, there exist tuples $\mathbf r_b, \mathbf r_c \in R$ (and defining circuits $p_{\mathbf r_b}$ and $p_{\mathbf r_c}$), witnessing that $(i,b,c) \in \mathop{\mathrm{Sig}}(R)$.
Then $R'$ contains the tuples $\mathbf t$ and $\mathbf s = m(\mathbf t, \mathbf r_b, \mathbf r_c)$, as constructed in lines 12 and 13 of Algorithm [\[alg:fixvalue\]](#alg:fixvalue){reference-type="ref" reference="alg:fixvalue"}. Since $\mathbf r_b$ and $\mathbf r_c$ agree on the first $i-1$ coordinates, so do $\mathbf t$ and $\mathbf s$. Moreover $\mathbf t(1) = a$, $\mathbf t(i) = b$, and $\mathbf s(i) = m(b, b,c) = c$, thus $\mathbf t$ and $\mathbf s$ witness $(i,b,c) \in \mathop{\mathrm{Sig}}(\mathbf R \cap \{ \mathbf x \in A^k \mid \mathbf x(1) = a \})$. It follows that $\mathop{\mathrm{Sig}}(R') = \mathop{\mathrm{Sig}}(\mathbf R \cap \{ \mathbf x \in A^k \mid \mathbf x(1) = a \})$, which is what we wanted to prove. The steps of `Fix-value`$(R,a)$ are the following:

- If there is no $(\mathbf t, p_{\mathbf t}) \in R$ such that $(\mathbf t, \mathbf t)$ is a witness of $(1,a,a) \in \mathop{\mathrm{Sig}}(R)$, return $\emptyset$.
- Otherwise, let $(\mathbf t, p_{\mathbf t}) \in R$ be such a pair and set $R' = \{ (\mathbf t, p_{\mathbf t}) \}$.
- For every coordinate $j$: recursively apply $m$ to $\mathop{\mathrm{pr}}_{1,j}(R)$ to compute $T_{j} = \{ (x,y) \in \mathop{\mathrm{pr}}_{1,j} (\mathbf R) \mid x = a \}$, and circuits $C_{j} = \{ p_{(x,y)} \mid (x,y) \in T_j \}$ such that $\mathop{\mathrm{pr}}_{1,j}(p_{(x,y)}(X)) = (x,y)$.
- For every $(j,b,c) \in \mathop{\mathrm{Sig}}(R)$ with $(a,b) \in T_j$: let $(\mathbf r_b, p_{\mathbf r_b}), (\mathbf r_c, p_{\mathbf r_c}) \in R$ be witnesses of $(j,b,c) \in \mathop{\mathrm{Sig}}(R)$; let $\mathbf t = p_{(a,b)}(X)$, $\mathbf s = m(\mathbf t, \mathbf r_b, \mathbf r_c)$ and $p_{\mathbf s} = m(p_{(a,b)}, p_{\mathbf r_b},p_{\mathbf r_c})$; set $R' = R' \cup \{ (\mathbf t, p_{(a,b)}), (\mathbf s, p_{\mathbf s}) \}$.
- Return $R'$.
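To see the closure computation in action, here is a toy illustration (ours, not the paper's implementation, and without the circuit bookkeeping) in the simplest affine case $\mathbf{A} = \mathbb{Z}_p$ with Mal'tsev term $m(x,y,z) = x - y + z$: we close a generating set under $m$ by brute force and then keep the tuples whose first coordinate equals $a$:

```python
from itertools import product

p = 3

def m(x, y, z):
    """Mal'tsev term of the affine algebra Z_p: m(x, y, z) = x - y + z, componentwise."""
    return tuple((a - b + c) % p for a, b, c in zip(x, y, z))

def close_under_m(gens):
    """Smallest subset of Z_p^k containing gens and closed under m, i.e. Sg(gens)."""
    S = set(gens)
    while True:
        new = {m(x, y, z) for x, y, z in product(S, repeat=3)} - S
        if not new:
            return S
        S |= new

X = [(0, 1, 2), (1, 1, 1)]
R = close_under_m(X)                 # the affine line through the two generators
fixed = {t for t in R if t[0] == 0}  # fix the first coordinate to a = 0
```

Here `R` is the full subpower (three tuples on an affine line) and `fixed` plays the role of $\mathbf R \cap \{\mathbf x \mid \mathbf x(1) = a\}$; the actual algorithm avoids enumerating $\mathbf R$ by working only with the projections $\mathop{\mathrm{pr}}_{1,j}$.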
--- abstract: | We propose a homogenized supremal functional rigorously derived via power-law approximation by functionals of the type $\underset{x\in\Omega}{\mbox{ess-sup}}\hspace{0.03cm}f\left(\frac{x}{\varepsilon}, Du\right)$, when $\Omega$ is a bounded open set of $\mathbb R^n$ and $u\in W^{1,\infty}(\Omega;\mathbb R^d)$. The homogenized functional is also deduced directly in the case where the sublevel sets of $f(x,\cdot)$ satisfy suitable convexity properties, as a corollary of homogenization results dealing with pointwise gradient constrained integral functionals.\ **Keywords:** $L^\infty$ functionals, $L^p$-approximation, homogenization, pointwise gradient constraints, $\Gamma$-convergence **2020 Mathematics Subject Classification:** 49J45, 26B25, 74Q99 author: - Lorenza D'Elia, Michela Eleuteri, and Elvira Zappale title: Homogenization of supremal functionals in vectorial setting (via power-law approximation) --- # Introduction and statement of the problem In many applications, such as dielectric breakdown for composite conductors (see [@GNP01]) or to model plasticity problems (see [@KL99]), it is customary to consider variational problems where the energies involved are not integral and where the relevant quantities that come into play do not express a mean property. Minimization problems therefore involve functionals of the type $$\label{pb01} F(u) = \underset{x\in\Omega}{\mbox{ess-sup}}\hspace{0.03cm}f(x, u(x), D u(x)),$$ where $\Omega\subset \mathbb R^n$ is a bounded open set, $u\in W^{1,\infty}(\Omega;\mathbb R^d)$, and $f: \Omega \times \mathbb R^d \times \mathbb R^{d\times n} \to \mathbb R$ is a suitable Borel function. 
Since the paper [@ABP02], these problems have been referred to as *supremal functionals.* It is worth emphasizing that this type of functionals can be seen as the counterpart of the Euler-Lagrange (*Euler-Aronsson*) equations (or systems) of $\Delta_\infty$-type (see the pioneering papers [@A1; @A2; @A3; @A4]) and the more recent contributions [@AK; @CKM], among a wide literature.\ On the other hand, for problems of the type [\[pb01\]](#pb01){reference-type="eqref" reference="pb01"}, it is important to consider the pointwise behavior of the energy density also on very small sets; therefore it is relevant to ask whether this class of functionals is stable under $\Gamma$-convergence in $L^{\infty}.$ A first answer to this question has been derived in [@BGP04] in the framework of homogenization, a theory establishing the macroscopic behaviour of materials with highly heterogeneous microstructure. Among the vast literature, we single out the book [@BD] for an overview of homogenization of integral energies and the contributions [@BCPD21; @BFL00; @DF16] and references therein for some recent developments.\ In [@BGP04], the authors consider the following sequence of functionals $$F_\varepsilon(u):= \underset{x\in\Omega}{\mbox{ess-sup}}\hspace{0.03cm}f\left({x\over \varepsilon}, \nabla u(x)\right),$$ where $\Omega$ is a bounded set of $\mathbb{R}^n,$ $u \in W^{1,\infty}(\Omega)$ and the function $f$ is periodic in the first variable. The purpose of this contribution has been to replace this highly oscillating functional, as $\varepsilon$ goes to zero, with a simpler functional $F_{\rm hom}$, the homogenized functional, which is able to capture the relevant features of the family $\{F_\varepsilon\}_\varepsilon$.
Their work has been inspired by the papers [@CDP03; @GNP01]: the energy density of the homogenized functional can be represented by means of a cell-problem formula obtained by an approximation technique, thanks to the fact that the power-law approximation described in the already mentioned contributions (which consists of approximating the $L^\infty$ functionals (possibly depending on $\varepsilon$) by $L^p$-norm type ones, i.e. $(\int_\Omega f^p(\tfrac{x}{\varepsilon},\nabla u(x)) dx)^{\frac{1}{p}}$) holds true also in this inhomogeneous setting. Furthermore, they obtain that the homogenization (i.e. a suitable variational limit as $\varepsilon \to 0$) and the power-law approximation (i.e. a variational limit as $p \to +\infty$) commute.\ In the present paper, we would like to complement the results in [@BGP04] (see also [@BPZ] and [@Z]) by considering homogenization for supremal functionals in the *vectorial setting* by means of power-law approximation. More precisely, let $\Omega$ be a bounded open set of $\mathbb{R}^n$ with Lipschitz boundary. For any $\varepsilon>0$, we introduce the supremal functional defined on $W^{1,\infty}(\Omega; \hspace{0.03cm}\mathbb{R}^d)$ by $$\label{def:supfunctional} F_\varepsilon(u):= \underset{x\in\Omega}{\mbox{ess-sup}}\hspace{0.03cm}f\left({x\over \varepsilon}, Du(x)\right),$$ where the supremal integrand $f:\mathbb{R}^n\times \mathbb{R}^{d\times n}\to[0, +\infty)$ is a Borel function, $1$-periodic in the first variable, which satisfies the following growth conditions: there exist two positive constants $\alpha$ and $\beta$ such that $$\label{growthconditions} \alpha|Z|\leq f(x, Z)\leq \beta(|Z|+1),$$ for any $Z\in\mathbb{R}^{d\times n}$ and for every $x\in\mathbb{R}^n$. Our aim is to provide a homogenized version of the family $\{F_\varepsilon\}_\varepsilon$.
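The mechanism behind this power-law approximation can be checked numerically: on a finite-measure domain, the normalized $L^p$-means of a bounded sample increase with $p$ and approach its maximum. A quick sanity check (with hypothetical sample data, not tied to any functional of the paper):

```python
import random

random.seed(0)
# values of a bounded "density" sampled on a domain of 10_000 points
f = [random.uniform(0.5, 2.0) for _ in range(10_000)]

def lp_mean(values, p):
    """Normalized L^p norm: (average of v^p)^(1/p) for nonnegative samples."""
    return (sum(v ** p for v in values) / len(values)) ** (1.0 / p)

sup_f = max(f)
means = [lp_mean(f, p) for p in (1, 2, 4, 8, 16, 32, 64)]
# the normalized L^p-means increase with p and stay below the supremum
assert all(a <= b + 1e-12 for a, b in zip(means, means[1:]))
assert means[-1] <= sup_f
```

The monotonicity is exactly the power-mean (Jensen) inequality, and the gap `sup_f - means[-1]` shrinks as $p$ grows, mirroring the convergence of the $L^p$-approximation to the supremal functional.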
To that end, we propose two strategies: the first is a direct relaxation procedure (which we carry out exhaustively in some model cases); the second, pursued in wider generality, consists of an approximation approach by means of power-law approximation. More precisely, for any fixed $\varepsilon>0$ and $p >1$, we introduce the double indexed functional $F_{p,\varepsilon}: L^p(\Omega;\hspace{0.03cm}\mathbb{R}^d)\to \mathbb{R}\cup\{+\infty\}$ defined by $$\label{def:Functionalpeps} F_{p,\varepsilon}(u):= \begin{dcases} \left(\int_{\Omega} f^p\left({x\over \varepsilon}, Du(x) \right)dx \right)^{1/p} &\mbox{if } u\in W^{1,p}(\Omega;\hspace{0.03cm}\mathbb{R}^d),\\ &\\ +\infty&\mbox{otherwise in } L^p(\Omega;\mathbb R^d). \end{dcases}$$ Note that, in view of [\[growthconditions\]](#growthconditions){reference-type="eqref" reference="growthconditions"}, for any fixed $1<p<+\infty$, the density function $f^p(x, Z)$ satisfies the standard $p$-growth conditions in $W^{1,p}(\Omega; \hspace{0.03cm}\mathbb{R}^d)$, i.e., $$\notag%\label{pgrowth} \alpha^p |Z|^p \leq f^p(x,Z) \leq \gamma (1+|Z|^p),$$ for some constant $\gamma>0$, for every $Z \in \mathbb R^{d \times n}$ and every $x \in \mathbb{R}^n$. Applying standard results (see, e.g., [@BD Chapter 14] and [@DM93 Proposition 6.16 and Chapter 24]), the functionals $\{F_{p,\varepsilon}\}_{p,\varepsilon}$ $\Gamma(L^p)$-converge, as $\varepsilon\rightarrow 0$, to the functional $F^{\rm hom}_p$, i.e.
$$\label{GammaLp} \inf\{\liminf_{\varepsilon \to 0} F_{p,\varepsilon}(u_\varepsilon) : u_\varepsilon \to u \hbox{ in } L^p(\Omega;\mathbb R^d)\} = F^{\rm hom}_p(u),$$ where $$\label{def:Fhomp} F^{\rm hom}_p(u)\coloneqq \begin{dcases} \left(\int_{\Omega} f^{\rm hom}_p(Du(x))dx \right)^{1/p} &\hbox{ if }u \in W^{1,p}(\Omega;\mathbb R^d),\\ &\\ + \infty &\hbox{ otherwise in }L^p(\Omega;\mathbb R^d), \end{dcases}$$ and the effective energy density $f^{\rm hom}_p: \mathbb{R}^{d\times n}\to [0, +\infty)$ is characterized by the so-called *asymptotic homogenization formula* $$\label{def:fhomp} f^{\rm hom}_p(Z) :=\lim_{T\to\infty} {1\over T^n} \inf\biggl\{\int_{TY} f^p(x, Z + Du(x))dx\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,p}_{0} (TY;\hspace{0.03cm} \mathbb{R}^d) \biggr\}.$$ In addition, in light of [@BD Remark 14.6], $f^{\rm hom}_p$ also satisfies the formula $$\label{fhompdoubleinf} f^{\rm hom}_p(Z)=\inf_{j\in\mathbb{N}}\inf\left\{{1\over j^n} \int_{jY} f^p(x, Z+ Du(x))dx\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,p}_{\#} (jY;\hspace{0.03cm}\mathbb{R}^d) \right\},$$ and in [\[def:fhomp\]](#def:fhomp){reference-type="eqref" reference="def:fhomp"}, one can replace $W^{1,p}_{0}(TY;\hspace{0.03cm}\mathbb{R}^d)$ with $W^{1,p}_{\#}(TY;\hspace{0.03cm}\mathbb{R}^d),$ where $\mathbb R \ni T >0,$ and $Y=(0,1)^n$ (we refer to Section [2](#not&pre){reference-type="ref" reference="not&pre"} for adopted notation). We would like to stress that the energy density $f$ is assumed to be a Borel function since we drop the quasiconvexity assumption in the second variable.\ At this point, we take the limit as $p\to +\infty$ of the functionals in [\[def:Fhomp\]](#def:Fhomp){reference-type="eqref" reference="def:Fhomp"}. In other words, we first consider the asymptotics as the homogenization parameter vanishes in the sense of $\Gamma$-convergence with respect to the $L^p$ strong convergence, and then we study the $\Gamma$-limit in the power-law sense.
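Underlying the passage $p \to +\infty$ is the classical fact that $L^p$-norms converge to the $L^\infty$-norm on sets of finite measure; we recall the standard two-line argument (not specific to the periodic setting):

```latex
% For g \in L^\infty(\Omega) with |\Omega| < +\infty:
\|g\|_{L^p(\Omega)} \le |\Omega|^{1/p}\, \|g\|_{L^\infty(\Omega)}
  \xrightarrow[p \to +\infty]{} \|g\|_{L^\infty(\Omega)},
\qquad
\|g\|_{L^p(\Omega)} \ge |E_\delta|^{1/p}\, \big( \|g\|_{L^\infty(\Omega)} - \delta \big)
  \xrightarrow[p \to +\infty]{} \|g\|_{L^\infty(\Omega)} - \delta,
% where E_\delta := \{ x \in \Omega : |g(x)| > \|g\|_{L^\infty(\Omega)} - \delta \}
% has positive measure for every \delta > 0.
```

Letting $\delta \to 0^+$ gives $\lim_{p\to+\infty}\|g\|_{L^p(\Omega)} = \|g\|_{L^\infty(\Omega)}$.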
This is done in Section [3](#tre){reference-type="ref" reference="tre"} and the main result in this direction is Theorem [Theorem 9](#Theorem3.6){reference-type="ref" reference="Theorem3.6"}, where it is proved that $$\notag \Gamma(L^\infty)\mbox{-}\lim_{p\to+\infty} F_p(u)= F^{\rm hom}(u):= \underset{x\in \Omega}{\mbox{{\rm ess-sup}}}\hspace{0.1cm} \widetilde{f}_{\rm hom}(Du(x)),$$ for every $u \in W^{1, \infty}(\Omega; \mathbb{R}^d).$ In fact, this latter functional can be considered as the homogenized version of [\[def:supfunctional\]](#def:supfunctional){reference-type="eqref" reference="def:supfunctional"}. We also emphasize that the effective energy density $\widetilde{f}_{\rm hom}$ is characterized through an asymptotic formula $$\label{tildefhom} \widetilde{f}_{\rm hom}(Z):=\inf_{j\in\mathbb{N}} \inf \left\{ \underset{x\in jY}{\mbox{{\rm ess}-{\rm sup}}}\hspace{0.03cm} f_\infty(x, Z+Du(x)) \hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,\infty}_\# (jY; \hspace{0.03cm}\mathbb{R}^d) \right\},$$ where $$f_\infty(x, Z) := \sup_{k\geq 1} (\widetilde{f^k})^{1/k} (x, Z),$$ for a.e. $x\in\Omega$ and for every $Z\in\mathbb R^{d\times n}$, and where $\widetilde{f^k}$ denotes the relaxed energy density of $f^k$ ($k \in \mathbb N$) given by [@Buttazzo Theorem 4.4.1]. In Section [\[quattro\]](#quattro){reference-type="ref" reference="quattro"}, we show that if the function $f(x,\cdot)$ is level convex and upper semicontinuous, then [\[tildefhom\]](#tildefhom){reference-type="eqref" reference="tildefhom"} turns into a cell formula (see [\[cellformula\]](#cellformula){reference-type="eqref" reference="cellformula"}).
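Recall that $f(x,\cdot)$ is level convex when each sublevel set $\{Z : f(x,Z)\le t\}$ is convex, a condition strictly weaker than convexity; a standard example (ours, purely for illustration):

```latex
f(Z) = \sqrt{|Z|}, \qquad
\{ Z \in \mathbb{R}^{d\times n} : f(Z) \le t \} = \{ Z : |Z| \le t^{2} \}
\quad \text{(a ball of the norm } |\cdot|\text{, hence convex for every } t \ge 0),
```

while $f$ itself is not convex: along any ray, $s \mapsto f(sZ_0) = \sqrt{s}\,\sqrt{|Z_0|}$ is strictly concave for $s>0$. More generally, $h \circ g$ is level convex whenever $g$ is convex and $h$ is nondecreasing.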
It is also worth recalling that, in contrast with convexity, level convexity does not guarantee any semicontinuity (see [@RZ2]).\ The last step consists in proving a homogenization result for the family of energies $\{F_\varepsilon\}_{\varepsilon}$ given by [\[def:supfunctional\]](#def:supfunctional){reference-type="eqref" reference="def:supfunctional"}: this is done in Section [5](#cinque){reference-type="ref" reference="cinque"} under suitable technical assumptions. Hence, we can summarize our results by means of the following diagram, observing that the vertical right-hand side arrow is obtained by Theorem [Theorem 16](#Theorem5.1){reference-type="ref" reference="Theorem5.1"}: ![image](commutativediagram.png){#Fig:comdiagram width="7cm"} [\[Fig:comdiagram\]]{#Fig:comdiagram label="Fig:comdiagram"} More precisely, we are able to prove that, for any $u\in W^{1,\infty}(\Omega; \hspace{0.03cm}\mathbb{R}^d)$, $$\notag \Gamma(L^\infty)\mbox{-}\lim_{\varepsilon\to 0} F_\varepsilon(u) = F_{\rm hom}(u).$$ We stress that the proof of the $\Gamma$-$\liminf$ inequality relies on the results obtained by the $L^p$-approximation. Instead, the proof of the $\Gamma$-$\limsup$ inequality is more involved and is based on a generalization of [@CDA02 Chapter 12] to the vectorial case, requiring the convexity of $\Omega$ (strong-starshapedness would suffice) and a technical assumption on the sublevel sets of the density $f(x,\cdot)$.
Indeed, in Section [6](#secHomunb){reference-type="ref" reference="secHomunb"} we present a homogenization result, via $\Gamma$-convergence, of the family of vector-valued unbounded functionals of the form $$G_\varepsilon(u) := \begin{dcases} \int_{\Omega} g\left({x\over \varepsilon}, Du(x)\right)dx & \quad \mbox{if } u\in W^{1,q}_{{\rm loc}} (\mathbb{R}^n; \hspace{0.03cm} \mathbb{R}^d)\cap L^\infty_{{\rm loc}} (\mathbb{R}^n; \hspace{0.03cm} \mathbb{R}^d), \\ &\\ +\infty & \mbox{otherwise}, \end{dcases}$$ where $\Omega$ is a bounded convex set with Lipschitz boundary (or strongly star-shaped), $q\in [1, +\infty]$ and the energy density $g: \mathbb{R}^n\times \mathbb{R}^{d \times n} \to [0, +\infty]$ is ${\mathcal L}(\mathbb{R}^n)\otimes {\mathcal B}(\mathbb{R}^{d \times n})$-measurable, $Y$-periodic in the first variable and convex in the second one, with the effective domain ${\rm dom}(g(x, \cdot))$ of $g(x,\cdot)$ containing a fixed cube $Q$, uniformly in $x$. The main result of Section [6](#secHomunb){reference-type="ref" reference="secHomunb"}, namely the homogenization Theorem [Theorem 19](#Proposition 12.6.1){reference-type="ref" reference="Proposition 12.6.1"}, is applied, in Theorem [Theorem 16](#Theorem5.1){reference-type="ref" reference="Theorem5.1"}, to the (integral of the) indicator function $$1_{C(x)}(Z)\coloneqq \begin{dcases} 0 & \hbox{ if } Z \in C(x), \\ +\infty & \hbox{ otherwise}, \end{dcases}$$ where $C(x)\subset \mathbb R^{d\times n}$ is a bounded Borel convex set, with nonempty interior, satisfying suitable hypotheses such as those described above, uniformly with respect to $x \in \Omega$. This result also provides the limsup inequality. The main difficulty is to extend the results in [@CDA02 Chapter 12] to the vectorial setting.
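The reason indicator functions of sublevel sets enter the picture is the elementary equivalence between bounding a supremal functional and a pointwise differential inclusion: for $t > 0$,

```latex
\underset{x\in\Omega}{\mathrm{ess\text{-}sup}}\; f\Big( \frac{x}{\varepsilon}, Du(x) \Big) \le t
\quad \Longleftrightarrow \quad
Du(x) \in C_t\Big( \frac{x}{\varepsilon} \Big)
  := \Big\{ Z \in \mathbb{R}^{d\times n} : f\Big( \frac{x}{\varepsilon}, Z \Big) \le t \Big\}
\quad \text{for a.e. } x \in \Omega,
```

so the unbounded functionals $G_\varepsilon$ with $g = 1_{C(x)}$ encode exactly such a sublevel-set constraint on the gradient.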
In this respect, we extended the scalar techniques in [@CDA02], which, in turn, required us, for instance, to impose further technical assumptions (see Remark [\[remark:existenceofcube\]](#remark:existenceofcube){reference-type="ref" reference="remark:existenceofcube"}), which, on the other hand, are similar to those present in the literature in the context of relaxation for unbounded functionals depending on vector valued fields, see, for instance, [@AH; @W1; @W2]. On the other hand, the results contained in Section 6 are interesting per se, since they provide a vectorial extension of the results proven in [@AHCM; @AHM2; @CCGDA] and in the bibliography contained therein; we also refer to the recent contributions to the integral representation problem in the unbounded vectorial setting contained in [@AHM; @DG], and to the results in the mechanical framework [@CK1; @CK2]. # Notation and preliminary results {#not&pre} ## General notations and basic facts The present section is devoted to the introduction of the general notation and the basic facts that we use throughout the paper. - Let $X$ be a set. For every $S \subseteq X$, we denote by $\chi_S$ the characteristic function of $S$ defined by $$\chi_S(x) = \begin{cases} 1 & \textnormal{if } x \in S,\\ 0 & \textnormal{if } x \in X \setminus S; \end{cases}$$ - For any $x_0 \in \mathbb{R}^n$ and $r \in (0, + \infty),$ we denote by $B_r(x_0)$ the open ball of $\mathbb{R}^n$ centered in $x_0$ with radius $r$, and by $Q_r(x_0)$ the open cube centered in $x_0$ with sidelength $r;$ - For any $x_0 \in \mathbb{R}^n,$ $S \subseteq \mathbb{R}^n$ and $r \in (0, + \infty),$ we set $$\begin{aligned} && {\rm dist}(x_0, S) = \inf \{|x_0 - x|: x \in S\},\\[3mm] && S_r^- = \{x \in S: {\rm dist}(x, \partial S) > r\}, \\[3mm] && S_r^+ = \{x \in \mathbb{R}^n: {\rm dist}(x, S) < r\};\end{aligned}$$ - We denote by $Y$ the unit cell in $\mathbb{R}^n$, i.e., $Y = (0,1)^n$, and by $Q$ a generic cube in $\mathbb R^{d\times n}$ which will be specified case by case; - For every $N \in \mathbb N$, by $\mathcal L^N$ we denote the $N$-dimensional Lebesgue measure; - We say that a subset of $\mathbb{R}^n$ is a polyhedral set if it can be expressed as the intersection of a finite number of closed half-spaces; - $\mathbb{R}^{d\times n}$ denotes the space of $d\times n$ real matrices; - Given $Z\in\mathbb{R}^{d\times n}$, the norm $|Z|$ is defined by $|Z|:=\sum_{i=1}^{d}|Z_i|$, where $Z_i$ is the $i$-th row of the matrix $Z$ and $|Z_i|$ is its Euclidean norm; - For every $O \subset \mathbb R^n$, by $X(O;\mathbb R^d)$ we denote a space of functions defined in $O$ with values in $\mathbb R^d$.
$X_\#(KY; \mathbb{R}^d)$ denotes the space of $K$-periodic functions in $X_{\rm loc}(\mathbb{R}^n; \mathbb{R}^d),$ ($K \in (0,+\infty)$) where $$X_{\rm loc}(O; \mathbb{R}^d) = \{u: O \rightarrow \mathbb{R}^d: u \in X(U; \mathbb{R}^d) \,\,\, \textnormal{for all open sets} \,\,\, U \subset \subset O \}.$$ For Sobolev spaces, taking into account the notion of traces given in [@CDA02 Section 4], and for any cube $C\subset \mathbb R^n$, we denote, for every $p\in [1,+\infty]$, by $W^{1,p}_{\#}(C;\mathbb R^d)$ the set $\{v \in W^{1,p}(C;\mathbb R^d): \gamma_C v$ takes the same value on the opposite faces of $C \}$; - for every open subset $\Omega \subset \mathbb R^n$, $\mathcal A(\Omega)$ denotes the family of open subsets of $\Omega$, and $\mathcal A_0(\Omega)$ denotes those which are bounded; - $\mathcal O$ denotes the family $\mathcal A(\mathbb R^n)$ of open subsets of $\mathbb R^n$; - For any function $f: \Omega\times \mathbb{R}^{d \times n}\to [0,+\infty]$, by $f^{\rm ls}$ we denote the lower semicontinuous envelope of $f(x,\cdot)$ (i.e., with respect to the second variable); - For every $E \subseteq \mathbb{R}^n,$ every function $u$ on $E$ and $x_0 \in \mathbb{R}^n,$ we define the *translate of $u$* as $$T[x_0]u : x \in E - x_0 \mapsto u(x + x_0).$$ Let $\Phi: \mathcal{O} \times U \rightarrow (- \infty, + \infty].$ We say that $\Phi$ is *translation invariant* if $$\Phi(\Omega - x_0, T[x_0]u) = \Phi(\Omega, u) \qquad \textnormal{for every $\Omega \in \mathcal{O}, x_0 \in \mathbb{R}^n, u \in U;$}$$ - For any $A \subset \mathbb R^n$, we denote by ${\rm int}(A)$ the relative interior (relative to the smallest affine subspace which contains $A$) of $A$; - We denote by $PA(\mathbb{R}^n; \mathbb{R}^d)$ the set of the piecewise affine functions $u :\mathbb{R}^n \rightarrow \mathbb{R}^d$, i.e., $u$ is a continuous function such that $$\label{aff-u-Zi}u(x) :=\sum_{i=1}^m (u_{Z_i}(x)+ c_i) \chi_{P_i}(x)\quad \hbox{ for every }x \in \bigcup_{i=1}^m {\rm int}(P_i),$$ with $m \in \mathbb N,$ where for any
$i=1,\dots, m$, $u_{Z_i}$ is a linear function given by $u_{Z_i}(x)\coloneqq Z_ix$, with $Z_1,\dots, Z_m\in \mathbb R^{d\times n}, c_1, \dots,c_m \in \mathbb R^d$, and $P_1,\dots, P_m$ polyhedral sets with pairwise disjoint nonempty interiors such that $\bigcup_{i=1}^m P_i=\mathbb R^n$; - Given $f:\mathbb R^{d \times n} \to [0,+\infty]$, by ${\rm dom} f$ we denote the effective domain of $f$, i.e., $${\rm dom }f:=\{Z \in \mathbb R^{d \times n}: f(Z)< +\infty\};$$ ## $\Gamma$-convergence For an exhaustive introduction to $\Gamma$-convergence, we refer to [@DM93]. We only recall the sequential characterization of the $\Gamma$-limit when $X$ is a metric space. **Proposition 1** ([@DM93 Proposition 8.1]). *Let $X$ be a metric space and let $\varphi_{k}: X \to \mathbb R \cup \{\pm \infty\}$, for every $k\in \mathbb{N}$. Then $\{\varphi_k\}_{k}$ $\Gamma$-converges to $\varphi$ with respect to the strong topology of $X$ (and we write $\displaystyle \Gamma(X)\hbox{-}\lim_{k\to +\infty}\varphi_{k}=\varphi$) if and only if* - *($\Gamma$-liminf inequality) for every $x\in X$ and for every sequence $\{x_{k}\}_k$ converging to $x$, it is $$\varphi(x)\le \liminf_{k\to +\infty} \varphi_{k}(x_{k});$$* - *($\Gamma$-limsup inequality) for every $x\in X,$ there exists a sequence $\{x_{k}\}_k$ converging to $x \in X$ such that $$\varphi(x)=\lim_{k\to +\infty} \varphi_{k}(x_{k}).$$* We recall that the $\Gamma\hbox{-}\lim_{k\to +\infty}\varphi_{k}$ is lower semicontinuous on $X$ (see [@DM93 Proposition 6.8]). Since we are dealing with weak$^\ast$ convergence in compact sets, without loss of generality, we can assume that we are in a metric setting, hence we can consider the following definition as the notion of $\Gamma$-convergence for families of functionals $\{\varphi_{\varepsilon}\}_\varepsilon$ as $\mathbb R^+ \ni \varepsilon \to 0$. 
We also refer to [@CDA02 Propositions 3.2.3, 3.2.4, 3.2.6] to see that $\Gamma^-$-convergence with respect to a general topology $\tau$ (see Definition 3.2.1 therein), which is well suited for families of functionals, reduces, in the context of our subsequent analysis, to the definition below. **Definition 2**. *We say that a family $\{\varphi_{\varepsilon}\}_\varepsilon$ $\Gamma$-converges to $\varphi$, with respect to the metric in $X$ as $\varepsilon\to 0^+$, if $\{\varphi_{{\varepsilon}_{k}}\}_k$ $\Gamma$-converges to $\varphi$ for all sequences $\{{\varepsilon}_{k}\}_k$ of positive numbers converging to $0$ as $k\to +\infty$.* Furthermore, we recall the Urysohn property for the $\Gamma$-convergence of sequences of functionals defined on metric spaces $X$ (or at least on spaces satisfying the first axiom of countability). **Proposition 3**. *Under the above assumptions on $X$, let $\lambda \in [-\infty,+\infty]$, then $$\notag%\label{(3.2.11)CDA} \lambda = \Gamma\mbox{-}\lim_{k \rightarrow + \infty} \varphi_k(u)$$ if and only if $$\notag%\label{(3.2.12)CDA} \textnormal{$\forall \{k_h\} \subseteq \mathbb{N}$ strictly increasing, $\exists \{k_{h_j}\} \subseteq \{k_h\}$ such that $\lambda = \Gamma \mbox{-}\lim_{j \rightarrow + \infty} \varphi_{k_{h_j}}(u)$}.$$* # The $\boldsymbol{L^p}$-approximation {#tre} In this section, after having recalled the homogenization of the functionals $\{F_{p, \varepsilon}\}_{p,\varepsilon}$ in [\[GammaLp\]](#GammaLp){reference-type="eqref" reference="GammaLp"}, as $\varepsilon \to 0$, leading to the functional [\[def:Fhomp\]](#def:Fhomp){reference-type="eqref" reference="def:Fhomp"}, we take the limit as $p\to +\infty$. In detail, we first consider the asymptotics as the homogenization parameter vanishes in the sense of $\Gamma$-convergence with respect to the $L^p$ strong convergence, and then we study the $\Gamma$-limit in the power-law sense, i.e., as $p\to +\infty$.
To that end, we combine techniques used in [@BGP04 Lemma 3.2] and [@PZ20 Theorem 2.2]. In particular, we recall the following definition, introduced in [@BJW04], and proved to be necessary and sufficient for the lower semicontinuity of supremal functionals. **Definition 4**. *Let $f:\mathbb R^{d \times n}\to \mathbb{R}$ be a Borel measurable function. We say that $f$ is *strong Morrey quasiconvex* if $$\forall\ \varepsilon>0\ \forall\ Z\in \mathbb{R}^{d\times n}\ \forall\ K>0\ \exists\ \delta=\delta(\varepsilon, K,Z)>0:$$ $$\left.\begin{array}{l}\varphi\in W^{1,\infty}(Y;\mathbb{R}^d)\vspace{0.2cm}\\ ||D\varphi||_{L^\infty(Y;\mathbb{R}^{d\times n})}\le K\vspace{0.2cm}\\ \max_{x\in\partial Y}|\varphi(x)|\le \delta\end{array}\right\}\Longrightarrow f(Z)\le \underset{x\in Y}{\mbox{\rm ess-sup}}\hspace{0.03cm} f(Z+D\varphi(x))+\varepsilon.$$* Let $\Omega \subseteq \mathbb R^n$ and let $f:\Omega \times \mathbb R^{d\times n}\to [0,+\infty]$ be a $\mathcal L(\Omega)\otimes \mathcal B(\mathbb R^{d \times n})$-measurable function. Following [@PZ20] we set $$\label{def:finfinity} f_\infty(x, Z) := \sup_{k\geq 1} (\widetilde{f^k})^{1/k} (x, Z),$$ for a.e. $x\in\Omega$ and for every $Z\in\mathbb R^{d\times n}$, where $\widetilde{f^k}$ denotes the relaxed energy density of $f^k$, ($k \in \mathbb N$) given by [@Buttazzo Theorem 4.4.1], which is a Carathéodory function, quasiconvex in the second variable. **Lemma 5**. *Let $f:\mathbb{R}^n\times\mathbb{R}^{d\times n}\to [0,+\infty]$ be a Borel function, $1$-periodic in the first variable, satisfying [\[growthconditions\]](#growthconditions){reference-type="eqref" reference="growthconditions"}. Let $f^{\rm hom}_p$ be the homogenized energy density described by [\[fhompdoubleinf\]](#fhompdoubleinf){reference-type="eqref" reference="fhompdoubleinf"}.
Then, for any $Z\in\mathbb{R}^{d\times n}$, it holds $$\label{limfhomp} \lim_{p\to+\infty} (f^{\rm hom}_p)^{1/p}(Z) =\widetilde{f}_{\rm hom}(Z),$$ where $\widetilde{f}_{\rm hom}$ is given by the following formula $$\label{characterizationftildehom} \widetilde{f}_{\rm hom}(Z):=\inf_{j\in\mathbb{N}} \inf \left\{ \underset{x\in jY}{\mbox{{\rm ess}-{\rm sup}}}\hspace{0.03cm} f_\infty(x, Z+Du(x)) \hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,\infty}_\# (jY; \hspace{0.03cm}\mathbb{R}^d) \right\},$$ for any $Z \in\mathbb{R}^{d\times n}$, where $f_\infty$ is given by [\[def:finfinity\]](#def:finfinity){reference-type="eqref" reference="def:finfinity"}.* *Proof.* To prove [\[limfhomp\]](#limfhomp){reference-type="eqref" reference="limfhomp"}, we show that $(f^{\rm hom}_p)^{1/p}$ is a non-decreasing function with respect to $p>1$, i.e., for $p_1<p_2$, we have that, for any $Z \in\mathbb{R}^{d\times n}$, $$\label{fhomincreasingsequence} (f^{\rm hom}_{p_1})^{1/p_1}(Z)\leq(f^{\rm hom}_{p_2})^{1/p_2}(Z).$$ Fix $j\in\mathbb{N}$ and $Z\in\mathbb{R}^{d\times n}$. 
Since $f^{p_2}(\cdot, Z+Du(\cdot))\in L^1(jY)$, it follows that $f^{p_1}(\cdot, Z+Du(\cdot))\in L^{p_2/p_1}(jY)$, so that an application of the Hölder inequality yields $$\notag \int_{jY} f^{p_1}\left(x, Z+ Du(x)\right)dx\leq (j^n)^{1-p_1/p_2}\left( \int_{jY} f^{p_2}\left(x, Z+ Du(x)\right)dx\right)^{p_1/p_2}.$$ Moreover, due to $W^{1, p_2}_{\#}(jY;\hspace{0.03cm}\mathbb{R}^d)\subset W^{1, p_1}_{\#}(jY;\hspace{0.03cm}\mathbb{R}^d)$, we deduce that $$\begin{aligned} {1\over j^n}\inf&\left\{\int_{jY} f^{p_1}\left(x, Z+ Du(x)\right)dx \hspace{0.03cm}:\hspace{0.03cm} u\in W^{1, p_1}_{\#}(jY;\hspace{0.03cm}\mathbb{R}^d) \right\}\notag\\ & \leq{1\over j^n}\inf\left\{\int_{jY} f^{p_1}\left(x, Z+ Du(x)\right)dx \hspace{0.03cm}:\hspace{0.03cm} u\in W^{1, p_2}_{\#}(jY;\hspace{0.03cm}\mathbb{R}^d) \right\}\notag\\ &\leq {1\over (j^n)^{p_1/p_2}}\inf\left\{\left( \int_{jY} f^{p_2}\left(x, Z+ Du(x)\right)dx\right)^{p_1/p_2} \hspace{0.03cm}:\hspace{0.03cm} u\in W^{1, p_2}_{\#}(jY;\hspace{0.03cm}\mathbb{R}^d) \right\}\notag\\ &=\left[{1\over j^n}\inf\left\{ \int_{jY} f^{p_2}\left(x, Z+ Du(x)\right)dx\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1, p_2}_{\#}(jY;\hspace{0.03cm}\mathbb{R}^d) \right\} \right]^{p_1/p_2}\notag.
\end{aligned}$$ Taking the infimum on $j\in\mathbb{N}$ and using [\[fhompdoubleinf\]](#fhompdoubleinf){reference-type="eqref" reference="fhompdoubleinf"}, it follows that $$\begin{aligned} & f^{\rm hom}_{p_1}(Z)\notag\\ &\quad \leq \inf_{j\in\mathbb{N}} \left[{1\over j^n}\inf\left\{ \int_{jY} f^{p_2}\left(x, Z+ Du(x)\right)dx\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1, p_2}_{\#}(jY;\hspace{0.03cm}\mathbb{R}^d) \right\} \right]^{p_1/p_2}\notag\\ &\quad =\left[\inf_{j\in\mathbb{N}} {1\over j^n}\inf\left\{ \int_{jY} f^{p_2}\left(x, Z+ Du(x)\right)dx\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1, p_2}_{\#}(jY;\hspace{0.03cm}\mathbb{R}^d) \right\} \right]^{p_1/p_2}\notag\\ &\quad =(f^{\rm hom}_{p_2})^{p_1/p_2}(Z)\notag, \end{aligned}$$ for any $Z \in\mathbb{R}^{d\times n},$ which implies [\[fhomincreasingsequence\]](#fhomincreasingsequence){reference-type="eqref" reference="fhomincreasingsequence"}. Hence, the limit as $p\to+\infty$ of $(f^{\rm hom}_p)^{1/p}$ exists and we denote it with $\widetilde{f}_{\rm hom}$. In other words, $$\lim_{p\to+\infty} (f^{\rm hom}_p)^{1/p} (Z)= :\widetilde{f}_{\rm hom}(Z) \qquad\mbox{for any }Z\in\mathbb{R}^{d\times n}. %\end{equation}$$ To conclude the proof, it remains to show the characterization [\[characterizationftildehom\]](#characterizationftildehom){reference-type="eqref" reference="characterizationftildehom"}. 
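The monotonicity just established rests on the Hölder estimate above, i.e., on the fact that the normalised means $\bigl((1/m)\int g^p\bigr)^{1/p}$ are non-decreasing in $p$. The following numerical sanity check is purely illustrative: the nonnegative sample $g$ and the cell measure $m$ are our own synthetic choices, standing in for $f(\cdot, Z+Du(\cdot))$ on $jY$.

```python
import numpy as np

# Illustrative check of the Hölder estimate used above:
#   int g^{p1} dx <= m^{1 - p1/p2} * (int g^{p2} dx)^{p1/p2},
# equivalently, ((1/m) int g^p dx)^{1/p} is non-decreasing in p.
# The sample g and the cell measure m are synthetic (our choices).
rng = np.random.default_rng(0)
g = rng.uniform(0.2, 2.0, size=10_000)  # integrand samples on the cell
m = 3.0                                 # measure of the cell jY
w = m / g.size                          # quadrature weight per sample

def integral(power):
    """Riemann-sum approximation of int g^power dx over the cell."""
    return np.sum(w * g ** power)

for p1, p2 in [(1.5, 2.0), (2.0, 7.0), (3.0, 30.0)]:
    lhs = integral(p1)
    rhs = m ** (1.0 - p1 / p2) * integral(p2) ** (p1 / p2)
    assert lhs <= rhs * (1.0 + 1e-12)
    # Equivalently: the normalised means increase with the exponent.
    assert (integral(p1) / m) ** (1 / p1) <= (integral(p2) / m) ** (1 / p2) + 1e-12
```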
First, note that, thanks to growth condition [\[growthconditions\]](#growthconditions){reference-type="eqref" reference="growthconditions"} along with [@DM93 Proposition 6.11], for $p>n$, $f^{\rm hom}_p$ may be characterized through the relaxed energy density $\widetilde{f^p}$, i.e., $$\notag%\label{fphom_7_prop_6.11} f^{\rm hom}_p(Z) := \inf_{j\in\mathbb{N}}\inf\left\{{1\over j^n} \int_{jY} \widetilde{f^p}(x, Z+Du(x))dx\hspace{0.03cm} : \hspace{0.03cm} u\in W^{1,p}_{\#}(jY;\hspace{0.03cm} \mathbb{R}^d) \right\}.$$ Hence, for fixed $j\in\mathbb{N}$ and $Z\in\mathbb{R}^{d\times n}$ and for a given $u\in W^{1,\infty}_{\#} (jY; \hspace{0.03cm}\mathbb{R}^d)$, we deduce that $$\notag \int_{jY} \widetilde{f^p} (x, Z+ Du(x))dx\leq j^n \underset{x\in jY}{\mbox{ess-sup}}\hspace{0.03cm} \widetilde{f^p}(x, Z+Du(x)).$$ This, combined with the fact that $W^{1,\infty}_{\#} (jY; \hspace{0.03cm}\mathbb{R}^d) \subset W^{1,p}_{\#} (jY; \hspace{0.03cm}\mathbb{R}^d),$ implies that $$\begin{aligned} {1\over j^n}&\inf \left\{\int_{jY} \widetilde{f^p} (x, Z+ Du(x))dx \hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,p}_{\#} (jY; \hspace{0.03cm}\mathbb{R}^d) \right\} \notag\\ & \leq {1\over j^n}\inf \left\{\int_{jY} \widetilde{f^p}(x, Z+ Du(x))dx \hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,\infty}_{\#} (jY; \hspace{0.03cm}\mathbb{R}^d) \right\} \notag\\ &\leq \inf \left\{\underset{x\in jY}{\mbox{ess-sup}}\hspace{0.3cm} \widetilde{f^p}(x, Z+Du(x)) \hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,\infty}_{\#} (jY; \hspace{0.03cm}\mathbb{R}^d) \right\}. \notag%\label{ineq1} %&=\left( \inf \left\{\underset{x\in (0,j)^n}{\mbox{ess-sup}}\hspace{0.03cm} f(x, Z+Du(x)) \hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,\infty}_{\#} ((0, j)^n; \hspace{0.03cm}\R^d) \right\} \right)^p. 
\end{aligned}$$ Taking the infimum on $j\in\mathbb{N}$, we get that $$\begin{aligned} f^{\rm hom}_p(Z) &= \inf_{j\in\mathbb{N}}\inf\left\{{1\over j^n} \int_{jY} \widetilde{f^p}(x, Z+ Du(x))dx\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,p}_{\#} (jY;\hspace{0.03cm}\mathbb{R}^d) \right\}\notag\\ %&\leq \inf_{j\in\mathbb{N}}\left( \inf \left\{\underset{x\in (0,j)^n}{\mbox{ess-sup}}\hspace{0.03cm} f(x, Z+Du(x)) \hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,\infty}_\# ((0, j)^n; \hspace{0.03cm}\R^d) \right\} \right)^p\notag\\ &\leq \inf_{j\in\mathbb{N}} \inf \left\{\underset{x\in jY}{\mbox{ess-sup}}\hspace{0.03cm} \widetilde{f^p}(x, Z+Du(x)) \hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,\infty}_\# (jY; \hspace{0.03cm}\mathbb{R}^d) \right\}.\notag \end{aligned}$$ This implies that $$\begin{aligned} (f^{\rm hom}_p(Z) )^{1/p} &\leq \inf_{j\in\mathbb{N}} \inf \left\{\left(\underset{x\in jY}{\mbox{ess-sup}}\hspace{0.1cm} \widetilde{f^p}(x, Z+Du(x))\right)^{1/p} \hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,\infty}_\# (jY; \hspace{0.03cm}\mathbb{R}^d) \right\} \notag\\ &= \inf_{j\in\mathbb{N}} \inf \left\{\underset{x\in jY}{\mbox{ess-sup}}\hspace{0.1cm} (\widetilde{f^p})^{1/p} (x, Z+Du(x))\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,\infty}_\# (jY; \hspace{0.03cm}\mathbb{R}^d) \right\}.\notag%\label{ineq5} \end{aligned}$$ The same holds when $p\equiv p_k$ is an integer number, therefore it is possible to repeat the same arguments as before by considering a divergent sequence $\{p_k\}_k$ as $k\to+\infty$. 
We therefore deduce that $$\begin{aligned} (f^{\rm hom}_{p_k}(Z) )^{1/p_k} \leq \inf_{j\in\mathbb{N}} \inf \left\{\underset{x\in jY}{\mbox{ess-sup}}\hspace{0.1cm} (\widetilde{f^{p_k}})^{1/p_k} (x, Z+Du(x))\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,\infty}_\# (jY; \hspace{0.03cm}\mathbb{R}^d) \right\}.\label{ineq4} \end{aligned}$$ We introduce a ${\mathcal L}^n(\Omega)\otimes {\cal B}(\mathbb R^{d\times n})$-measurable function $h_\infty: \Omega\times\mathbb{R}^{d\times n}\to [0, +\infty)$ defined by $$\notag h_\infty (x, Z) := \sup_{k\in\mathbb{N}} \left(\widetilde{f^{p_k}} \right)^{1/p_k} (x, Z).$$ It is possible to show that, for any $u\in W^{1,\infty}(\Omega; \hspace{0.03cm} \mathbb{R}^d)$, $$\notag \underset{x\in jY}{\mbox{ess-sup}}\hspace{0.1cm} h_\infty(x, Du(x)) = \underset{x\in jY}{\mbox{ess-sup}}\hspace{0.1cm} f_\infty(x, Du(x)),$$ for a proof see [@PZ20 equations (61) and (64) in Theorem 2.2]. Hence, from [\[ineq4\]](#ineq4){reference-type="eqref" reference="ineq4"}, it follows that $$\begin{aligned} (f^{\rm hom}_{p_k}(Z) )^{1/p_k} &\leq \inf_{j\in\mathbb{N}} \inf \left\{\underset{x\in jY}{\mbox{ess-sup}}\hspace{0.1cm}h_\infty (x, Z+Du(x))\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,\infty}_\# (jY; \hspace{0.03cm}\mathbb{R}^d) \right\}\notag\\ &=\inf_{j\in\mathbb{N}} \inf \left\{\underset{x\in jY}{\mbox{ess-sup}}\hspace{0.1cm}f_\infty (x, Z+Du(x))\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,\infty}_\# (jY; \hspace{0.03cm}\mathbb{R}^d) \right\}.\notag \end{aligned}$$ Finally, taking the limit as $k\to+\infty$, taking into account [\[limfhomp\]](#limfhomp){reference-type="eqref" reference="limfhomp"}, we conclude that $$\begin{aligned} \label{1st} \widetilde{f}_{\rm hom}(Z) \leq \inf_{j\in\mathbb{N}} \inf \left\{\underset{x\in jY}{\mbox{ess-sup}}\hspace{0.1cm}f_\infty (x, Z+Du(x))\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,\infty}_\# (jY; \hspace{0.03cm}\mathbb{R}^d) \right\}. 
\end{aligned}$$ Now, we show the reverse inequality, assuming, without loss of generality, that $\widetilde f_{{\rm hom}}$ in [\[limfhomp\]](#limfhomp){reference-type="eqref" reference="limfhomp"} is finite. To that end, we set $$\label{varphi} \varphi(Z):= \inf_{j\in\mathbb{N}}\inf\left\{ \underset{x\in jY}{\mbox{ess-sup}}\hspace{0.1cm} f_\infty(x, Z+Du(x)) \hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,\infty}_\# (jY; \hspace{0.03cm}\mathbb{R}^d) \right\},$$ and we fix $\delta > 0$. In view of the characterization of $f^{\rm hom}_p$ given by [\[fhompdoubleinf\]](#fhompdoubleinf){reference-type="eqref" reference="fhompdoubleinf"}, we deduce that for any $p>1$ there exist $\overline{j}\in\mathbb{N}$ and $u_p\in W^{1,p}_{\#}(\overline{j}Y\hspace{0.03cm};\hspace{0.03cm}\mathbb{R}^d)$ such that $$\notag \left({1\over\overline{j}^n}\int_{\overline{j}Y}f^p(x, Z+Du_p(x))dx\right)^{1/p}\leq (f^{\rm hom}_p(Z))^{1/p} +\delta.$$ Using the growth conditions [\[growthconditions\]](#growthconditions){reference-type="eqref" reference="growthconditions"} as well as the triangle inequality for the $L^p$-norm, it follows that $$\begin{aligned} \left({1\over\overline{j}^n}\int_{\overline{j}Y} |Du_p(x)|^pdx\right)^{1/p}&\leq \left({1\over\overline{j}^n}\int_{\overline{j}Y} |Du_p(x)+Z|^pdx\right)^{1/p} + \left({1\over\overline{j}^n}\int_{\overline{j}Y} |Z|^pdx\right)^{1/p}\notag\\ &\leq \left({1\over\overline{j}^n}\int_{\overline{j}Y}f^p(x, Z+Du_p(x))dx\right)^{1/p}+|Z|\notag\\ & \leq (f^{\rm hom}_p(Z))^{1/p} +\delta + |Z|\notag\\ &\leq C(\delta, Z),\notag \end{aligned}$$ where the latter constant does not depend on $p$.
This, combined with the fact that, for $p>q>n$, the normalised means are non-decreasing in the exponent (in particular, $L^p(\overline{j}Y\hspace{0.03cm};\hspace{0.03cm}\mathbb{R}^{d\times n} )\subset L^q(\overline{j}Y\hspace{0.03cm};\hspace{0.03cm}\mathbb{R}^{d\times n})$), implies that $$\notag \left({1\over \overline{j}^n}\int_{\overline{j}Y} |Du_p(x)|^q dx\right)^{1/q}\leq \left({1\over \overline{j}^n}\int_{\overline{j}Y} |Du_p(x)|^p dx\right)^{1/p}\leq C.$$ It is not restrictive to assume that $u_p$ has zero average, so that, due to the Poincaré-Wirtinger inequality, the sequence $\{u_p\}_{p>q}$ is bounded in $W^{1,q}_{\#}(\overline{j}Y\hspace{0.03cm};\hspace{0.03cm}\mathbb{R}^d)$, for $q>n$. Therefore, up to subsequences, there exists $u_\infty$ such that $u_p$ converges weakly to $u_\infty$ in $W^{1,q}(\overline{j}Y\hspace{0.03cm};\hspace{0.03cm} \mathbb{R}^d)$. Thanks to the compact embedding of Sobolev spaces (see, e.g., [@Bre10 Theorem 9.16]) and since $q>n$, we conclude that $u_p$ converges uniformly to $u_\infty$ as $p\to +\infty$. Moreover, $u_\infty\in W^{1,\infty}_{\#}(\overline{j}Y\hspace{0.03cm};\hspace{0.03cm}\mathbb{R}^d)$. Indeed, from the boundedness of $\{D u_p \}_{p>q}$ in $L^q(\overline{j}Y\hspace{0.03cm}; \mathbb{R}^{d\times n})$, combined with the $W^{1,q}$-weak convergence of $\{u_p \}_{p>q}$ to $u_\infty$ and the lower semicontinuity of the norm, it follows that $$\notag \left({1\over \overline{j}^n}\int_{\overline{j}Y} |Du_\infty(x)|^q dx\right)^{1/q}\leq \liminf_{p\to+\infty} \left({1\over \overline{j}^n}\int_{\overline{j}Y} |Du_p(x)|^q dx\right)^{1/q}\leq C \qquad \mbox{for every } q>n.$$ Now, taking into account that the $L^\infty$ norm is approximated from below by the normalised $L^q$ norms as $q \to +\infty$, and computing this latter limit, we conclude that $\|D u_\infty\|_{L^\infty(\overline{j}Y\hspace{0.03cm};\hspace{0.03cm} \mathbb{R}^{d\times n})}\leq C$.
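The approximation of the $L^\infty$ norm from below by the normalised $L^q$ norms, used in the last step, can be checked numerically. The following illustration is purely a sanity check, with a synthetic bounded sample $g$ of our own choosing.

```python
import numpy as np

# Illustrative check: for a bounded sample g on a set of finite measure,
# the normalised L^q norm ((1/m) int |g|^q dx)^(1/q) stays below the sup
# norm, is non-decreasing in q, and approaches the sup norm as q grows.
# The sample g is synthetic (our own choice).
rng = np.random.default_rng(1)
g = rng.uniform(0.0, 5.0, size=50_000)
sup_norm = np.max(np.abs(g))

def mean_q_norm(q):
    """Normalised L^q norm; rescaling by sup_norm avoids overflow for large q."""
    t = np.abs(g) / sup_norm
    return sup_norm * np.mean(t ** q) ** (1.0 / q)

qs = [2, 10, 50, 1000]
norms = [mean_q_norm(q) for q in qs]
assert all(n <= sup_norm + 1e-12 for n in norms)  # approximation from below
assert norms == sorted(norms)                     # non-decreasing in q
assert sup_norm - norms[-1] < 0.1                 # q = 1000 is already close
```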
The bound above, together with the uniform convergence of $u_p$ to $u_\infty$, implies that $u_\infty\in W^{1,\infty}_{\#}(\overline{j}Y\hspace{0.03cm};\hspace{0.03cm}\mathbb{R}^d)$. Hence, we have shown that, up to subsequences, there exists $u_\infty\in W^{1,\infty}_{\#}(\overline{j}Y\hspace{0.03cm};\hspace{0.03cm}\mathbb{R}^d)$ such that $u_p$ converges uniformly to $u_\infty$, as $p\to+\infty$. Recalling [\[varphi\]](#varphi){reference-type="eqref" reference="varphi"}, thanks to [@PZ20 Theorem 2.2], it follows that $$\begin{aligned} \varphi(Z)&\leq \underset{x\in \overline{j}Y}{\mbox{ess-sup}}\hspace{0.1cm} f_\infty(x, Z+Du_\infty(x))\notag\\ &\leq \liminf_{p\to+\infty} \left({1\over \overline{j}^n} \int_{\overline{j}Y }f^p(x, Z+Du_p(x))dx \right)^{1/p}\notag\\ &\leq \liminf_{p\to+\infty}(f^{\rm hom}_p(Z))^{1/p} + \delta\notag\\ &= \widetilde{f}_{\rm hom}(Z) +\delta. \notag \end{aligned}$$ In view of the arbitrariness of $\delta$, we obtain the reverse inequality of [\[1st\]](#1st){reference-type="eqref" reference="1st"}, as desired. ◻ **Remark 6**.
*Since $f^{\rm hom}_p$ is also characterized through the asymptotic formula $$\notag f^{\rm hom}_p(Z)=\limsup_{T\to+\infty} {1\over T^n}\inf \left\{\int_{TY}\widetilde{f^p}(x, Z+Du(x))dx\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,p}_{\#}( TY; \hspace{0.03cm}\mathbb{R}^d) \right\},$$ we deduce that $$\label{upperboundwidefhom} \widetilde{f}_{\rm hom}(Z)\leq\limsup_{T\to+\infty}\inf\biggl\{ \underset{x\in TY}{\mbox{\rm ess-sup}}\hspace{0.1cm} f_\infty(x, Z + Du(x))\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,\infty}_{\#} (TY;\hspace{0.03cm}\mathbb{R}^d) \biggr \}.$$ Indeed, for fixed $T$ and $Z\in\mathbb{R}^{d\times n}$, we deduce that $$\begin{aligned} {1\over T^n} &\inf\left\{ \int_{TY} \widetilde{f^p}(x, Z+Du(x))dx \hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,p}_{\#}(TY; \hspace{0.03cm} \mathbb{R}^d) \right\}\notag\\ & \le \, \inf\left\{ \underset{x\in TY}{\mbox{\rm ess-sup}}\hspace{0.1cm} \widetilde{f^p}(x, Z + Du(x))\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,\infty}_{\#} (TY;\hspace{0.03cm}\mathbb{R}^d) \right\}.\notag \end{aligned}$$ Taking the limit as $T\to+\infty$ yields $$\begin{aligned} \notag f^{\rm hom}_p(Z) \leq \limsup_{T\to+\infty} \inf\left\{ \underset{x\in TY}{\mbox{\rm ess-sup}}\hspace{0.1cm} \widetilde{f^p}(x, Z + Du(x))\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,\infty}_{\#} (TY;\hspace{0.03cm}\mathbb{R}^d) \right\}, \end{aligned}$$ which implies that $$\begin{aligned} \notag (f^{\rm hom}_p)^{1/p} (Z) \leq \limsup_{T\to+\infty} \inf\left\{ \underset{x\in TY}{\mbox{\rm ess-sup}}\hspace{0.1cm} (\widetilde{f^p})^{1/p}(x, Z + Du(x))\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,\infty}_{\#} (TY;\hspace{0.03cm}\mathbb{R}^d) \right\}. \end{aligned}$$ If $p \equiv k$ is an integer, thanks to [\[def:finfinity\]](#def:finfinity){reference-type="eqref" reference="def:finfinity"}, it is easy to show that $$\begin{aligned} \notag (f^{\rm hom}_k)^{1/k} (Z) \leq \limsup_{T\to+\infty} \inf\left\{ \underset{x\in TY}{\mbox{\rm ess-sup}}\hspace{0.1cm} f_\infty(x, Z + Du(x))\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,\infty}_{\#} (TY;\hspace{0.03cm}\mathbb{R}^d) \right\}, \end{aligned}$$ and, taking the limit as $k\to+\infty$, we conclude that [\[upperboundwidefhom\]](#upperboundwidefhom){reference-type="eqref" reference="upperboundwidefhom"} holds. Now, consider a divergent sequence $\{p_k\}_k$ as $k\to+\infty$. By arguments similar to those in the proof of Lemma [Lemma 5](#lemma:recoveryseq){reference-type="ref" reference="lemma:recoveryseq"}, we obtain that $$\begin{aligned} (f^{\rm hom}_{p_k})^{1/p_k} (Z) \leq \limsup_{T\to+\infty} \inf\left\{ \underset{x\in TY}{\mbox{\rm ess-sup}}\hspace{0.1cm} f_\infty(x, Z + Du(x))\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,\infty}_{\#} (TY;\hspace{0.03cm}\mathbb{R}^d) \right\}.\notag \end{aligned}$$ Taking the limit as $k\to+\infty$ yields [\[upperboundwidefhom\]](#upperboundwidefhom){reference-type="eqref" reference="upperboundwidefhom"}.* **Lemma 7**. *Let $f:\mathbb{R}^n\times\mathbb{R}^{d\times n}\to [0,+\infty]$ be a Borel function, $1$-periodic in the first variable, satisfying the growth conditions [\[growthconditions\]](#growthconditions){reference-type="eqref" reference="growthconditions"}.
Then, for every bounded open set $\Omega\subset \mathbb{R}^n$ and every $u\in W^{1,\infty} (\Omega;\hspace{0.03cm}\mathbb{R}^d)$, we have that $$\notag \lim_{p\to+\infty}\left(\int_{\Omega} f^{\rm hom}_p(Du(x))dx \right)^{1/p}=\underset{x\in\Omega}{\mbox{\rm ess-sup}}\hspace{0.1cm} \widetilde{f}_{\rm hom}(Du(x)),$$ where $\widetilde{f}_{\rm hom}$ is given by [\[characterizationftildehom\]](#characterizationftildehom){reference-type="eqref" reference="characterizationftildehom"}.* *Proof.* First, note that, since $(f^{\rm hom}_p)^{1/p}$ is non-decreasing in $p$, we have that, for all $Z\in\mathbb{R}^{d\times n}$, $$\notag (f^{\rm hom}_p(Z))^{1/p}\leq \widetilde{f}_{\rm hom}(Z) \qquad\mbox{for any } p>1.$$ This implies that, for fixed $u\in W^{1, \infty}(\Omega;\hspace{0.03cm} \mathbb{R}^d)$, $$\notag \left[\int_{\Omega} f^{\rm hom}_p(Du(x))dx\right]^{1/p}\leq\left[\int_{\Omega} (\widetilde{f}_{\rm hom}(Du(x)))^pdx\right]^{1/p}= \|\widetilde{f}_{\rm hom}(Du(\cdot))\|_{L^p(\Omega)}.$$ Taking the limit as $p\to+\infty$, we have that $$\notag \lim_{p\to+\infty}\left[\int_{\Omega} f^{\rm hom}_p(Du(x))dx\right]^{1/p} \leq \|\widetilde{f}_{\rm hom}(Du(\cdot))\|_{L^\infty(\Omega)} =\underset{x\in\Omega}{\mbox{ess-sup}}\hspace{0.03cm} \widetilde{f}_{\rm hom}(Du(x)).$$ Now, we prove the reverse inequality. To that end, fix $\varepsilon>0$ and $u\in W^{1, \infty}(\Omega;\hspace{0.03cm}\mathbb{R}^d)$. We define the set $E_\varepsilon$ by $$\notag E_\varepsilon:= \left\{x\in\Omega\hspace{0.03cm}:\hspace{0.03cm} \widetilde{f}_{\rm hom}(Du(x))> \underset{y\in\Omega}{\mbox{ess-sup}}\hspace{0.03cm}\widetilde{f}_{\rm hom}(Du(y))-{\varepsilon\over 2} \right\}.$$ Note that $E_\varepsilon$ has positive measure, i.e., $m_\varepsilon:= {\mathcal L}^n(E_\varepsilon)>0$. We consider $\delta$ such that $0<\delta\ll m_\varepsilon$.
From [\[limfhomp\]](#limfhomp){reference-type="eqref" reference="limfhomp"} combined with the Egorov theorem, it follows that there exists a set $F_\delta\subset\Omega$ such that ${\mathcal L}^n(F_\delta)\leq\delta$ and $$\notag \lim_{p\to+\infty} \|(f^{\rm hom}_p)^{1/p}(Du(\cdot)) -\widetilde{f}_{\rm hom}(Du(\cdot)) \|_{L^\infty(\Omega\setminus F_\delta)}=0.$$ In particular, there exists $p_0=p_0(\varepsilon)$ such that, for all $p\geq p_0$, $$(f^{\rm hom}_p)^{1/p}(Du(x)) - \widetilde{f}_{\rm hom}(Du(x))\geq -{\varepsilon\over 2}, \qquad \forall x\in \Omega\setminus F_\delta.\notag$$ Note that ${\mathcal L}^n(E_\varepsilon\setminus F_\delta)>0$, since $0<\delta\ll m_\varepsilon$. Moreover, if $x\in E_\varepsilon\setminus F_\delta$, then $$\begin{aligned} (f^{\rm hom}_p)^{1\over p} (Du(x))&\geq \widetilde{f}_{\rm hom}(Du(x))-{\varepsilon\over 2}\notag\\ &\geq \underset{y\in\Omega}{\mbox{ess-sup}}\hspace{0.1cm} \widetilde{f}_{\rm hom}(Du (y)) -\varepsilon, \qquad\forall p\geq p_0.\notag \end{aligned}$$ Let $E_\varepsilon^p$ be the set defined by $$\notag E_\varepsilon^p:=\left\{x\in\Omega\hspace{0.03cm}:\hspace{0.03cm} (f^{\rm hom}_p)^{1/p} (Du(x))> \underset{y\in\Omega}{\mbox{ess-sup}}\hspace{0.1cm}\widetilde{f}_{\rm hom}(Du (y)) -\varepsilon\right\}.$$ The set $E_\varepsilon^p$ contains $E_\varepsilon\setminus F_\delta$, which implies that the measure of $E_\varepsilon^p$ is positive for any $p\geq p_0$.
Hence, we deduce that $$\begin{aligned} \left(\int_{\Omega} f^{\rm hom}_p(Du(x))dx \right)^{1/p} &\geq \left(\int_{E_\varepsilon^p} f^{\rm hom}_p(Du(x))dx \right)^{1/p}\notag\\ &\geq\left(\int_{E_\varepsilon^p} \Bigl(\underset{y\in\Omega}{\mbox{ess-sup}}\hspace{0.1cm}\widetilde{f}_{\rm hom}(Du(y)) -\varepsilon\Bigr)^pdx \right)^{1/p}\notag\\ &= \left({\mathcal L}^n (E^p_\varepsilon)\right)^{1/p} \left(\underset{y\in\Omega}{\mbox{ess-sup}}\hspace{0.1cm} \widetilde{f}_{\rm hom}(Du(y)) -\varepsilon\right)\notag\\ &\geq \left({\mathcal L}^n (E_\varepsilon\setminus F_\delta)\right)^{1/p} \left(\underset{y\in\Omega}{\mbox{ess-sup}}\hspace{0.1cm}\widetilde{f}_{\rm hom}(Du(y)) -\varepsilon\right)\notag, \end{aligned}$$ for any $p\geq p_0$. This implies that $$\begin{aligned} \lim_{p\to+\infty} \left(\int_{\Omega} f^{\rm hom}_p(Du(x))dx \right)^{1/p} \geq \underset{y\in\Omega}{\mbox{ess-sup}}\hspace{0.1cm}\widetilde{f}_{\rm hom}(Du(y)) -\varepsilon.\notag \end{aligned}$$ In view of the arbitrariness of $\varepsilon$, we obtain the reverse inequality, as desired. ◻ In the next proposition, we show that the function $\widetilde{f}_{\rm hom}$ turns out to be strong Morrey quasiconvex (see Definition [Definition 4](#sMqcx){reference-type="ref" reference="sMqcx"}). **Proposition 8**. *The function $\widetilde{f}_{\rm hom}$ is strong Morrey quasiconvex.* *Proof.* First, note that $f^{\rm hom}_p: \mathbb{R}^{d\times n}\to [0,+\infty)$ is quasiconvex (or Morrey convex, according to [@BJW04 Definition 1.1]), since it is the energy density of the $\Gamma$-limit of $F_p^{\rm hom}$ (e.g., see [@BD Theorem 14.5]). In view of [@BJW04 Proposition 2.4], $f^{\rm hom}_p$ is also strong Morrey quasiconvex. Now, we show that, for any $p>1$, $(f^{\rm hom}_p)^{1/p}$ is strong Morrey quasiconvex. To that end, fix $\varepsilon>0$ and $k>0$.
Since $f^{\rm hom}_p$ is strong Morrey quasiconvex, there exists $\delta=\delta(\varepsilon, k, Z)>0$ such that if $\varphi\in W^{1,\infty}(Y; \hspace{0.03cm}\mathbb{R}^d)$ satisfies $\|D\varphi\|_{L^\infty(Y; \hspace{0.03cm} \mathbb{R}^{d\times n})}\leq k$ and $\max_{x\in\partial Y}|\varphi(x)|\leq \delta$, then $$\label{ineq2} f^{\rm hom}_p(Z) \leq \underset{x\in Y}{\mbox{ess-sup}}\hspace{0.1cm}f^{\rm hom}_p(Z + D\varphi(x)) + \varepsilon^p.$$ Recall that, for any $p>1$, the function $h(t):= t^{1/p}$ is subadditive, since it is a concave function with $h(0)\geq 0$. Hence, from [\[ineq2\]](#ineq2){reference-type="eqref" reference="ineq2"}, it follows that $$\begin{aligned} (f^{\rm hom}_p(Z))^{1/p}&\leq \left(\underset{x\in Y}{\mbox{ess-sup}}\hspace{0.1cm}f^{\rm hom}_p(Z + D\varphi(x)) + \varepsilon^p \right)^{1/p}\notag\\ &\leq \left(\underset{x\in Y}{\mbox{ess-sup}}\hspace{0.1cm}f^{\rm hom}_p(Z + D\varphi(x))\right)^{1/ p} + \varepsilon\notag\\ &= \underset{x\in Y}{\mbox{ess-sup}}\hspace{0.1cm}(f^{\rm hom}_p)^{1/p}(Z + D\varphi(x)) + \varepsilon.\label{ineq3} \end{aligned}$$ Finally, inequality [\[ineq3\]](#ineq3){reference-type="eqref" reference="ineq3"} implies that $(f^{\rm hom}_p)^{1/p}$ is strong Morrey quasiconvex. To conclude the proof, recall that $(f^{\rm hom}_p)^{1/p}$ is non-decreasing in $p$. Hence, for any $Z\in\mathbb{R}^{d\times n}$, $$\notag \widetilde{f}_{\rm hom}(Z) = \lim_{p\to+\infty} (f^{\rm hom}_p(Z))^{1/p} = \sup_{p\geq 1} (f^{\rm hom}_p(Z))^{1/p}.$$ Moreover, due to the fact that, for any $p\geq 1$, $(f^{\rm hom}_p)^{1/p}$ is strong Morrey quasiconvex, we may apply [@PZ20 Proposition 5.2] to conclude that $\widetilde{f}_{\rm hom}$ is strong Morrey quasiconvex, as desired.
◻ The next result provides the second $\Gamma$-limit as $p\to+\infty$ of the functionals $\{F_{p, \varepsilon}\}_{p,\varepsilon}$ in [\[def:Functionalpeps\]](#def:Functionalpeps){reference-type="eqref" reference="def:Functionalpeps"}, after having taken the one with respect to $\varepsilon \to 0$, cf. [\[GammaLp\]](#GammaLp){reference-type="eqref" reference="GammaLp"} and [\[def:Fhomp\]](#def:Fhomp){reference-type="eqref" reference="def:Fhomp"}, and it is a generalization of [@BGP04 Theorem 3.3] in the vectorial case. **Theorem 9**. *Let $f:\mathbb{R}^n\times \mathbb{R}^{d\times n}\to [0,+\infty)$ be a Borel function, $1$-periodic in the first variable and satisfying growth conditions [\[growthconditions\]](#growthconditions){reference-type="eqref" reference="growthconditions"}. Let $\Omega$ be a bounded, open set of $\mathbb{R}^n$ with Lipschitz boundary. For any $p>1$, let $F^{\rm hom}_p$ be the functional defined by [\[def:Fhomp\]](#def:Fhomp){reference-type="eqref" reference="def:Fhomp"}, and $\widetilde{f}_{\rm hom}$ be the function in [\[characterizationftildehom\]](#characterizationftildehom){reference-type="eqref" reference="characterizationftildehom"}. Then, $$\notag \Gamma(L^\infty)\mbox{-}\lim_{p\to\infty} F_p^{\rm hom}(u)= F^{\rm hom}(u):= \underset{x\in \Omega}{\mbox{{\rm ess-sup}}}\hspace{0.1cm} \widetilde{f}_{\rm hom}(Du(x)),$$for every $u \in W^{1, \infty}(\Omega; \mathbb{R}^d)$.* *Proof.* For any $p>1$, by [\[GammaLp\]](#GammaLp){reference-type="eqref" reference="GammaLp"} and [@DM93 Proposition 6.16], the functional $$H^{\rm hom}_p(u)=\left({1\over {\mathcal L^n(\Omega) }}\int_{\Omega} f^{\rm hom}_p(Du(x)) dx\right)^{1/p}$$ is lower semicontinuous in $W^{1,p}$ with respect to $L^p$ topology, hence it turns out to be lower semicontinuous in $W^{1, \infty}$ with respect to the $L^\infty$ topology. For the sake of exposition, we assume in the rest of the proof that $\mathcal L^n(\Omega)=1$. 
Moreover, $H^{\rm hom}_p$ is an increasing family, i.e., for $p_1\leq p_2$, $H^{\rm hom}_{p_1}\leq H^{\rm hom}_{p_2}$. Indeed, since $\mathcal L^n(\Omega)=1$, using the Jensen inequality combined with the monotonicity property [\[fhomincreasingsequence\]](#fhomincreasingsequence){reference-type="eqref" reference="fhomincreasingsequence"}, we deduce that, for $p_1\leq p_2$, $$\begin{aligned} \left(\int_{\Omega}f^{\rm hom}_{p_1}(Du(x))dx\right)^{p_2/p_1}&\leq \int_{\Omega}(f^{\rm hom}_{p_1}(Du(x)))^{p_2/p_1}dx\notag\\ &\leq\int_{\Omega}f^{\rm hom}_{p_2}(Du(x))dx\notag, \end{aligned}$$ which, after raising both sides to the power $1/p_2$, implies that $H^{\rm hom}_{p_1}(u)\leq H^{\rm hom}_{p_2}(u)$. In view of Lemma [Lemma 7](#lemma:convergenceofhomdensity){reference-type="ref" reference="lemma:convergenceofhomdensity"}, it follows that, for any $u\in W^{1,\infty}(\Omega;\hspace{0.03cm}\mathbb{R}^d)$, $$\notag \lim_{p\to+\infty}H^{\rm hom}_p(u)=\underset{x\in \Omega}{\mbox{ess-sup}}\hspace{0.03cm} \widetilde{f}_{\rm hom}(Du(x)).$$ In other words, $H^{\rm hom}_p$ converges pointwise to $F^{\rm hom}$, as $p\to+\infty$. This, combined with the lower semicontinuity of $H^{\rm hom}_p$, allows us to apply [@DM93 Remark 5.5], concluding that $H^{\rm hom}_p$ $\Gamma$-converges to $F^{\rm hom}$ with respect to the $L^\infty$ topology. To conclude the proof, it remains to prove that $F^{\rm hom}_p$ $\Gamma$-converges to $F^{\rm hom}$ with respect to the $L^\infty$ topology. This is a consequence of the $\Gamma$-convergence of $H^{\rm hom}_p$ combined with the equality $$\notag \lim_{p\to+\infty} H^{\rm hom}_p(u)= \lim_{p\to+\infty} F^{\rm hom}_p(u).$$ This concludes the proof. ◻ **Remark 10**.
*The previous result along with the homogenization result, given by [\[GammaLp\]](#GammaLp){reference-type="eqref" reference="GammaLp"} and [\[def:Fhomp\]](#def:Fhomp){reference-type="eqref" reference="def:Fhomp"}, for the functionals $\{F_{p,\varepsilon}\}_{p,\varepsilon}$ in [\[def:Functionalpeps\]](#def:Functionalpeps){reference-type="eqref" reference="def:Functionalpeps"} yields $$\notag \Gamma(L^\infty)\mbox{-}\lim_{p\to +\infty}\left(\Gamma(L^p)\mbox{-}\lim_{\varepsilon\to 0} F_{p,\varepsilon}(u)\right)=\underset{x\in \Omega}{\mbox{ess-sup}}\hspace{0.03cm} \widetilde{f}_{\rm hom}(Du(x)),$$ for every $u \in W^{1, \infty}(\Omega; \mathbb{R}^d)$.* # The cell formula [\[quattro\]]{#quattro label="quattro"} The effective energy density $\widetilde{f}_{\rm hom}$ in [\[characterizationftildehom\]](#characterizationftildehom){reference-type="eqref" reference="characterizationftildehom"} is characterized through an asymptotic formula. In this section, we show that if the function $f(x,\cdot)$ is level convex and upper semicontinuous for every $x\in \mathbb{R}^n$, the asymptotic formula turns into a cell formula. First, we recall the definition of level convexity. **Definition 11**. *A function $f:\mathbb{R}^{d\times n} \to \mathbb{R}$ is *level convex* if, for any $t\in (0,1)$ and any $Z_1, Z_2\in \mathbb{R}^{d\times n}$, it holds $$\notag f(t Z_1+ (1-t)Z_2)\leq f( Z_1)\vee f(Z_2).$$* The following result will be useful in the rest of the paper. **Proposition 12**. *Let $\Omega \subseteq \mathbb R^n$ be an open set. Let $f:\Omega\times \mathbb{R}^{d\times n}\to [0,+\infty)$ be a $\mathcal L(\mathbb R^n)\otimes \mathcal B(\mathbb R^{d \times n})$-measurable function, satisfying the growth conditions [\[growthconditions\]](#growthconditions){reference-type="eqref" reference="growthconditions"}. Assume that $f(x, \cdot)$ is level convex and upper semicontinuous for a.e. $x\in\Omega$.
Then, $f_\infty$ defined by [\[def:finfinity\]](#def:finfinity){reference-type="eqref" reference="def:finfinity"} turns out to be level convex.* *Proof.* The proof follows from [@PZ20 Remark 5.2]. Indeed, we have that $$\label{finftyQinfty} f_\infty(x, \cdot) = Q_\infty f (x, \cdot) = f^{\rm ls}(x,\cdot) \quad \mbox{for a.e. } x\in\Omega,$$ and $f^{\rm ls}(x, \cdot)$ is level convex, see [@RZ (i), Proposition 2.3]. ◻ **Remark 13**. *Note that, in the above proposition, it is not possible to drop the upper semicontinuity assumption in order to get the equality in [\[finftyQinfty\]](#finftyQinfty){reference-type="eqref" reference="finftyQinfty"}. Indeed arguing as in [@Buttazzo Example 4.4.6] without upper semicontinuity it is possible to show that the sequentially weakly $W^{1,p}$ lower semicontinuous envelope of the functional $G(u):= \int_\Omega f^p(x,D u) dx$, where $$f^p(x,Z):=\left\{\begin{array}{ll} |Z|^p & \hbox{ if } Z= (x_n,0,\dots, 0),\\ &\\ (1+|Z|)^p &\hbox{ otherwise}, \end{array} \right.$$ is expressed by $\int_\Omega \widetilde{f^p}(x,D u)dx$, where $\widetilde{f^p}(x,Z):= (1+|Z|)^p > (f^p)^{\ast \ast}(x,Z)= Q (f^p)(x,Z)=|Z|^p$, consequently for this same function $Q_\infty f(x,Z)= |Z|< f_\infty(x,Z)=1+ |Z|$.* The next proposition provides sufficient conditions to ensure that the asymptotic formula [\[characterizationftildehom\]](#characterizationftildehom){reference-type="eqref" reference="characterizationftildehom"} turns into a single cell formula. **Proposition 14**. *Let $f:\mathbb{R}^n\times \mathbb{R}^{d\times n}\to [0,+\infty)$ be a Borel function, $1$-periodic in the first variable satisfying growth conditions [\[growthconditions\]](#growthconditions){reference-type="eqref" reference="growthconditions"}. Assume that $f(x, \cdot)$ is level convex for every $x\in\mathbb{R}^n$. 
Then, the asymptotic formula [\[characterizationftildehom\]](#characterizationftildehom){reference-type="eqref" reference="characterizationftildehom"} reduces to the following cell problem $$\label{cellformula} \widetilde{f}_{\rm hom}(Z) = \inf\left\{\underset{x\in Y}{\mbox{{\rm ess}-{\rm sup}}}\hspace{0.1cm} f_\infty(x, Z+Du(x))\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,\infty}_{\#} (Y; \hspace{0.03cm} \mathbb{R}^d)\right\},$$ for all $Z\in \mathbb{R}^{d\times n}$.* *Proof.* For $j\in\mathbb{N}$, we have that $W^{1,\infty}_{\#} (Y; \hspace{0.03cm} \mathbb{R}^d)\subset W^{1,\infty}_{\#} (jY; \hspace{0.03cm} \mathbb{R}^d)$, which implies that $$\begin{aligned} \inf&\left\{\underset{x\in jY}{\mbox{{\rm ess}-{\rm sup}}}\hspace{0.1cm} f_\infty(x, Z+Du(x))\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,\infty}_{\#} (jY; \hspace{0.03cm} \mathbb{R}^d) \right\}\notag\\ &\leq \inf\left\{\underset{x\in Y}{\mbox{{\rm ess}-{\rm sup}}}\hspace{0.1cm} f_\infty(x, Z+Du(x))\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,\infty}_{\#} (Y; \hspace{0.03cm} \mathbb{R}^d) \right\}.\notag \end{aligned}$$ Taking the infimum over $j\in\mathbb{N}$, we easily conclude that $$\notag \widetilde{f}_{\rm hom}(Z) \leq \inf\left\{\underset{x\in Y}{\mbox{{\rm ess}-{\rm sup}}}\hspace{0.1cm} f_\infty(x, Z+Du(x))\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1,\infty}_{\#} (Y; \hspace{0.03cm} \mathbb{R}^d) \right\}.$$ On the other hand, for $v\in W^{1, \infty}_{\#}(jY;\hspace{0.03cm} \mathbb{R}^d)$, we set $$\notag u(x):= \sum_{i\in I} {1\over j^n} v(x+i),$$ with $I=\{0, 1, \dots, j-1\}^n$. One can easily prove that $u\in W^{1, \infty}_{\#}(Y;\hspace{0.03cm} \mathbb{R}^d)$.
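The periodicity claim for the averaged competitor can be verified directly in the simplest case $n=d=1$. The following sanity check is purely illustrative; the value $j=3$ and the particular $j$-periodic $v$ are our own choices.

```python
import numpy as np

# Illustrative check (n = d = 1, j = 3) of the averaging construction
# u(x) = j^{-n} * sum_{i in I} v(x + i): if v is j-periodic, then u is
# 1-periodic, hence an admissible competitor on the unit cell Y = (0, 1).
# The particular j-periodic v below is our own choice.
j = 3

def v(x):
    return np.sin(2 * np.pi * x / j) + 0.3 * np.cos(2 * np.pi * x)

def u(x):
    # average over I = {0, ..., j-1}
    return sum(v(x + i) for i in range(j)) / j

x = np.linspace(0.0, 5.0, 1001)
assert np.allclose(v(x + j), v(x))       # v is j-periodic ...
assert not np.allclose(v(x + 1), v(x))   # ... but not 1-periodic
assert np.allclose(u(x + 1), u(x))       # the average u is 1-periodic
```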
Using the level convexity and the periodicity of $f_\infty$, we deduce that $$\begin{aligned} \underset{x\in Y}{\mbox{{\rm ess}-{\rm sup}}}\hspace{0.1cm}f_\infty(x, Z+Du(x)) &= \underset{x\in Y}{\mbox{{\rm ess}-{\rm sup}}}\hspace{0.1cm}f_\infty(x, Z+\sum_{i\in I} {1\over j^n} Dv(x+i)) \notag\\ &\leq \underset{x\in Y}{\mbox{{\rm ess}-{\rm sup}}}\hspace{0.1cm}\max_{i\in I} f_\infty(x, Z + Dv(x+i))\notag\\ &\leq \max_{i\in I} \underset{x\in Y}{\mbox{{\rm ess}-{\rm sup}}}\hspace{0.1cm} f_\infty(x, Z + Dv(x+i))\notag\\ &=\max_{i\in I} \underset{y\in (i,1+i)^n}{\mbox{{\rm ess}-{\rm sup}}}\hspace{0.1cm} f_\infty(y-i, Z + Dv(y))\notag\\ &=\max_{i\in I} \underset{y\in (i,1+i)^n}{\mbox{{\rm ess}-{\rm sup}}}\hspace{0.1cm} f_\infty(y, Z + Dv(y))\notag\\ &\leq \underset{y\in jY}{\mbox{{\rm ess}-{\rm sup}}}\hspace{0.1cm} f_\infty(y, Z + Dv(y)),\notag \end{aligned}$$ where we have performed the change of variables $y=x+i$. Now, taking the infimum yields $$\begin{aligned} \inf &\left\{ \underset{x\in Y}{\mbox{{\rm ess}-{\rm sup}}}\hspace{0.1cm}f_\infty(x, Z+Du(x))\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1, \infty}_{\#}(Y; \hspace{0.03cm} \mathbb{R}^d) \right\}\notag\\ &\leq \inf \left\{ \underset{x\in jY}{\mbox{{\rm ess}-{\rm sup}}}\hspace{0.1cm}f_\infty(x, Z+Du(x))\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1, \infty}_{\#}(jY; \hspace{0.03cm} \mathbb{R}^d) \right\}.\notag \end{aligned}$$ Finally, taking the infimum over $j\in\mathbb{N}$, we conclude that $$\begin{aligned} \inf \left\{ \underset{x\in Y}{\mbox{{\rm ess}-{\rm sup}}}\hspace{0.1cm}f_\infty(x, Z+Du(x))\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1, \infty}_{\#}(Y; \hspace{0.03cm} \mathbb{R}^d) \right\}\leq \widetilde{f}_{\rm hom}(Z),\notag \end{aligned}$$ as desired. ◻ **Remark 15**.
*[\[rmk:cellformula\]]{#rmk:cellformula label="rmk:cellformula"} If $f(x, \cdot)$ is level convex and upper semicontinuous for every $x\in\mathbb{R}^n$, in view of Proposition [Proposition 12](#lcus){reference-type="ref" reference="lcus"}, the cell formula [\[cellformula\]](#cellformula){reference-type="eqref" reference="cellformula"} may be specialized as follows $$\begin{aligned} \widetilde{f}_{\rm hom}(Z) &=\inf \left\{ \underset{x\in Y}{\mbox{{\rm ess}-{\rm sup}}}\hspace{0.1cm} Q_{\infty} f(x, Z+Du(x))\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1, \infty}_{\#}(Y; \hspace{0.03cm} \mathbb{R}^d) \right\}\notag\\ &=\inf \left\{ \underset{x\in Y}{\mbox{{\rm ess}-{\rm sup}}}\hspace{0.1cm}f^{\rm ls}(x, Z+Du(x))\hspace{0.03cm}:\hspace{0.03cm} u\in W^{1, \infty}_{\#}(Y; \hspace{0.03cm} \mathbb{R}^d) \right\}.\notag \end{aligned}$$* # Homogenization {#cinque} In this section, we show a homogenization result for the family of energies $\{F_\varepsilon\}_{\varepsilon}$ given by [\[def:supfunctional\]](#def:supfunctional){reference-type="eqref" reference="def:supfunctional"}. **Theorem 16**. *Let $\Omega$ be a bounded convex open subset of $\mathbb{R}^n$ with Lipschitz boundary. Let $f: \mathbb{R}^n\times \mathbb{R}^{d\times n}\to[0,+\infty)$ be a Borel function, $1$-periodic in the first variable satisfying the growth conditions [\[growthconditions\]](#growthconditions){reference-type="eqref" reference="growthconditions"}. For any $\varepsilon >0$, let $F_\varepsilon$ be the supremal functional defined by [\[def:supfunctional\]](#def:supfunctional){reference-type="eqref" reference="def:supfunctional"} and let $F_{\rm hom}$ be the functional defined by $$\notag F_{\rm hom}(u):= \underset{x\in \Omega}{\mbox{{\rm ess}-{\rm sup}}}\hspace{0.1cm} \widetilde{f}_{\rm hom}(Du(x)),$$ where $\widetilde{f}_{\rm hom}$ is given by [\[characterizationftildehom\]](#characterizationftildehom){reference-type="eqref" reference="characterizationftildehom"}. 
We have that* - *For any $u\in W^{1,\infty}(\Omega; \hspace{0.03cm}\mathbb{R}^d)$ and any sequence $\{u_\varepsilon\}_\varepsilon\subset W^{1,\infty}(\Omega; \hspace{0.03cm}\mathbb{R}^d)$ such that $u_\varepsilon$ uniformly converges to $u$, it holds $$\label{Gammaliminfine} \liminf_{\varepsilon\to 0} F_\varepsilon(u_\varepsilon) \geq F_{\rm hom}(u).$$* - *Assume that $f(x,\cdot)$ is level convex and continuous for every $x\in Y$, and there exist matrices $\{A_i\}_{i=1}^{nd}\subset\mathbb{R}^{d\times n}$, vertices of a cube $Q \ni 0$, such that $$\notag f(x, A_i)= \min_{Z\in\mathbb{R}^{d\times n}}f(x, Z),$$ for every $x\in\mathbb{R}^n$. Then, for any $u\in W^{1,\infty}(\Omega; \hspace{0.03cm}\mathbb{R}^d)$, there exists a sequence $\{u_\varepsilon\}_\varepsilon\subset W^{1,\infty}(\Omega; \hspace{0.03cm}\mathbb{R}^d)$ such that $u_\varepsilon$ uniformly converges to $u$ and $$\notag \lim_{\varepsilon\to 0} F_\varepsilon(u_\varepsilon) = F_{\rm hom}(u).$$* The proof of the liminf inequality relies on the results obtained via the $L^p$-approximation. The proof of the limsup inequality is trickier: it is based on a generalization of [@CDA02 Chapter 12] to the vectorial case. This is the reason why we require the convexity of the domain $\Omega$, and the level convexity and the upper semicontinuity of $f$. Once such a generalization is obtained (cf. Section [6](#secHomunb){reference-type="ref" reference="secHomunb"}), the proof of the limsup inequality is an adaptation of the techniques developed in [@BGP04 Proposition 4.4]. However, for the readers' convenience, we provide a complete proof.
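Before entering the proof, it may help to see what the homogenized density looks like in the simplest setting; the following one-dimensional computation is only an illustration and is not used in the sequel. Take $n=d=1$ and $f(x,Z)=a(x)|Z|$, with $a$ measurable, $1$-periodic and $0<\alpha\leq a\leq\beta$, so that $f(x,\cdot)$ is convex (hence level convex) and continuous. For every $u\in W^{1,\infty}_{\#}(Y)$ the mean of $Z+u'$ over $Y=(0,1)$ equals $Z$, so that, setting $s:=\mbox{ess-sup}_{y\in Y}\, a(y)|Z+u'(y)|$, we get $$\notag |Z| \leq \int_0^1 |Z+u'(y)|\, dy \leq s \int_0^1 {dy\over a(y)}.$$ Choosing $Z+u'(y)=c/a(y)$, with $c$ fixed by the mean constraint, shows that this lower bound is attained, whence $$\notag \widetilde{f}_{\rm hom}(Z) = \left(\int_0^1 {dy\over a(y)}\right)^{-1}|Z|,$$ a harmonic-mean-type formula, in contrast with the arithmetic averaging familiar from integral homogenization.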
We require that $f(x,\cdot)$ be continuous since, in view of Remark [\[rmk:cellformula\]](#rmk:cellformula){reference-type="ref" reference="rmk:cellformula"}, we can specialize the homogenized energy density as follows $$\notag \widetilde{f}_{{\rm hom}} (Z) = \inf\left\{ \underset{x\in Y}{\mbox{{\rm ess}-{\rm sup}}}\hspace{0.1cm}f(x, Z+ Du(x)) : u\in W^{1,\infty}_{\#}(Y; \mathbb{R}^d ) \right\}.$$ *Proof.* - We prove the $\Gamma$-liminf inequality. To that end, without loss of generality, we may assume that $\sup_{\varepsilon> 0} F_\varepsilon(u_\varepsilon)<\infty$. Since $f$ satisfies the growth conditions [\[growthconditions\]](#growthconditions){reference-type="eqref" reference="growthconditions"}, we get the homogenization result proven in [\[def:Fhomp\]](#def:Fhomp){reference-type="eqref" reference="def:Fhomp"} and [\[def:fhomp\]](#def:fhomp){reference-type="eqref" reference="def:fhomp"} for the functionals $\{F_{p, \varepsilon}\}_{p,\varepsilon}$ introduced in [\[def:Functionalpeps\]](#def:Functionalpeps){reference-type="eqref" reference="def:Functionalpeps"}. In particular, for any $p>1$, $$\begin{aligned} \left (\int_{\Omega}f^{\rm hom}_p(Du(x))dx \right )^{1/p} &\leq \liminf_{\varepsilon\to 0} \left( \int_{\Omega}f^p\left({x\over \varepsilon}, Du_{\varepsilon}(x)\right)dx \right)^{1/p}\notag\\ & \leq \left({\cal L}^n(\Omega)\right)^{1/p} \liminf_{\varepsilon\to 0} \underset{x\in\Omega}{\mbox{ess-sup}}\hspace{0.03cm}\hspace{0.1cm} f\left({x\over \varepsilon}, Du_{\varepsilon}(x)\right)\notag.
\end{aligned}$$ Thanks to Lemma [Lemma 7](#lemma:convergenceofhomdensity){reference-type="ref" reference="lemma:convergenceofhomdensity"}, we pass to the limit as $p\to+\infty$, obtaining that $$\begin{aligned} \underset{x\in\Omega}{\mbox{ess-sup}}\hspace{0.03cm}\hspace{0.1cm} \widetilde{f}_{\rm hom}(Du(x)) & =\lim_{p\to+\infty} \left (\int_{\Omega}f^{\rm hom}_p(Du(x))dx \right )^{1/p} \nonumber\\ & \leq \liminf_{\varepsilon\to 0} \underset{x\in\Omega}{\mbox{ess-sup}}\hspace{0.03cm}\hspace{0.1cm} f\left({x\over \varepsilon}, Du_{\varepsilon}(x)\right)\notag, \end{aligned}$$ which shows [\[Gammaliminfine\]](#Gammaliminfine){reference-type="eqref" reference="Gammaliminfine"}. - The proof of the $\Gamma$-limsup inequality relies on the homogenization Theorem [Theorem 19](#Proposition 12.6.1){reference-type="ref" reference="Proposition 12.6.1"} proved in Section [6](#secHomunb){reference-type="ref" reference="secHomunb"}. Fix $\overline{u}\in W^{1,\infty}(\Omega; \mathbb{R}^d)$ and set $M\coloneqq \underset{x\in\Omega}{\mbox{ess-sup}}\hspace{0.03cm}\hspace{0.01cm}\widetilde{f}_{{\rm hom}} (D\overline{u}(x))$. We aim at finding a sequence $\{u_\varepsilon\}_\varepsilon\subset W^{1,\infty}(\Omega; \mathbb{R}^d)$ such that $u_\varepsilon\to\overline{u}$ in $L^\infty(\Omega; \mathbb{R}^d)$ as $\varepsilon\to 0$ and $$\limsup_{\varepsilon\to 0} F_\varepsilon(u_\varepsilon) \leq M.$$ To that end, for any $x\in\Omega$, we introduce the sets $C(x)$ and $C_\infty$ defined by $$\notag C(x)\coloneqq \{Z\in\mathbb{R}^{d\times n} : f(x, Z)\leq M \}, \qquad C_\infty\coloneqq\{Z\in\mathbb{R}^{d\times n} : \widetilde{f}_{\rm hom}(Z)\leq M \}.$$ In view of the assumptions on $f$, it turns out that the set $C(x)$ is measurable, $1$-periodic and convex. Since the $A_i$ turn out to be minimum points of $f(x, \cdot)$, we deduce that $A_i\in C(x)$, for any $i=1,\dots, dn$.
All these assumptions permit us to apply Theorem [Theorem 19](#Proposition 12.6.1){reference-type="ref" reference="Proposition 12.6.1"} to the indicator function $1_{C(x)}$ obtaining $$\label{eq0.01} \Gamma(L^\infty)\mbox{-}\lim_{\varepsilon\to 0} \int_\Omega 1_{C\left( {x\over \varepsilon}\right)}(Du(x))dx =\int_\Omega \widetilde{1}^\infty_{{\rm hom}}(Du(x))dx,$$ where the homogenized energy density $\widetilde{1}^\infty_{{\rm hom}}$ is given by the cell formula $$\notag \widetilde{1}^\infty_{{\rm hom}}(Z)= \inf \left\{ \int_Y 1_{C(y)}(Z+Dv(y))dy : v\in W^{1,\infty}_{\#}(Y; \mathbb{R}^d) \right\}.$$ Thanks to Proposition [Proposition 18](#prop:Proposition 12.1.3){reference-type="ref" reference="prop:Proposition 12.1.3"} applied to $p=q=+\infty$, the infimum is actually achieved. To conclude the proof, it remains to prove that $$\notag \widetilde{1}^\infty_{{\rm hom}} (Z) = 1_{C_\infty}(Z) \qquad\mbox{for any } Z\in\mathbb{R}^{d\times n}.$$ It is enough to show that $$\notag \widetilde{1}^\infty_{{\rm hom}} (Z) =0\quad \Leftrightarrow \quad 1_{C_\infty}(Z)=0.$$ Indeed, $$\begin{aligned} \widetilde{1}^\infty_{{\rm hom}} (Z) =0 &\Leftrightarrow \mbox{there exists } v\in W^{1,\infty}_{\#}(Y; \mathbb{R}^d) : \int_Y 1_{C(y)}(Z+Dv(y))dy =0\notag\\ &\Leftrightarrow \mbox{there exists } v\in W^{1,\infty}_{\#}(Y; \mathbb{R}^d) : f(x, Z+ Dv(x)) \leq M \quad \mbox{for a.e. } x\in Y\notag\\ &\Leftrightarrow \mbox{there exists } v\in W^{1,\infty}_{\#}(Y; \mathbb{R}^d) : \underset{x\in Y}{\mbox{ess-sup}}\hspace{0.03cm}\hspace{0.01cm} f(x, Z+Dv(x)) \leq M. \label{0.01} \end{aligned}$$ Thanks to the lower semicontinuity and level convexity of $f$, the supremal functional $\mbox{ess-sup}_{x\in Y}f(x, Z+Dv(x))$ turns out to be lower semicontinuous and coercive in $W^{1, \infty}_{\#}(Y; \mathbb{R}^d)$ and thus the infimum in the definition of $\widetilde{f}_{{\rm hom}}$ is achieved.
Then, the last condition in [\[0.01\]](#0.01){reference-type="eqref" reference="0.01"} is equivalent to $\widetilde{f}_{\rm hom}(Z)\leq M$, i.e., $1_{C_\infty}(Z)=0$. Thanks to [\[eq0.01\]](#eq0.01){reference-type="eqref" reference="eq0.01"}, there exists a sequence $\{u_\varepsilon\}_\varepsilon$ of functions in $W^{1,\infty}(\Omega; \mathbb{R}^d)$ such that $u_\varepsilon$ uniformly converges to $\overline{u}$ as $\varepsilon\to 0$ and $$\notag \limsup_{\varepsilon\to 0} \int_\Omega 1_{C\left({x\over \varepsilon} \right)}(Du_\varepsilon(x))dx \leq \int_\Omega 1_{C_\infty} (D\overline{u}(x))dx =0.$$ In particular, there exists $\varepsilon_0>0$ such that, for any $\varepsilon\leq\varepsilon_0$, $$\notag \int_\Omega 1_{C\left({x\over \varepsilon} \right)}(Du_\varepsilon(x))dx =0.$$ In other words, $$\notag 1_{C\left({x\over \varepsilon} \right)}(Du_\varepsilon(x)) =0 \quad \mbox{for a.e. } x\in\Omega.$$ In view of the definition of $C(x)$, it follows that $\underset{x\in\Omega}{\mbox{ess-sup}}\hspace{0.03cm}\hspace{0.01cm} f({x\over \varepsilon}, Du_\varepsilon(x))\leq M$ which, in turn, implies that $F_\varepsilon(u_\varepsilon)\leq M$ and $$\notag \limsup_{\varepsilon\to 0} F_\varepsilon(u_\varepsilon) \leq F_{{\rm hom}} (\overline{u}),$$ as desired. 
◻ # Homogenization of unbounded functionals {#secHomunb} In this section, we provide a homogenization result via $\Gamma$-convergence of the family of unbounded integral functionals of the form $$\label{functCDA} G_\varepsilon(u) := \begin{dcases} \int_{\Omega} g\left({x\over \varepsilon}, Du(x)\right)dx & \quad \mbox{for } u\in W^{1,q}_{{\rm loc}} (\mathbb{R}^n; \hspace{0.03cm} \mathbb{R}^d)\cap L^\infty_{{\rm loc}} (\mathbb{R}^n; \hspace{0.03cm} \mathbb{R}^d), \\ &\\ +\infty & \quad \mbox{otherwise,} \end{dcases}$$ where $\Omega$ is a bounded, open, convex set with Lipschitz boundary, $q\in [1, +\infty]$ and the energy density $g: \mathbb{R}^n\times \mathbb{R}^{d \times n} \to [0, +\infty]$ satisfies the following assumptions: - $g$ is ${\mathcal L}(\mathbb{R}^n)\otimes {\mathcal B}(\mathbb{R}^{d \times n})$-measurable, - $g$ is $Y$-periodic in the first variable and convex in the second one. For any $q\in[1, +\infty]$, we introduce the homogenized energy density $\widetilde{g}^q_{\rm hom}: \mathbb{R}^{d\times n}\to [0, +\infty]$ defined by $$\label{ftildeCDA} \widetilde{g}^q_{\rm hom}(Z) \coloneqq \inf \left \{ \int_Y g(y, Z + Dv) \, dy: v \in W^{1,q}_{\#} (Y; \mathbb{R}^d) \cap L^{\infty}(Y; \mathbb{R}^d)\right \}.$$ Throughout this section, we also require the following technical assumptions: - there exists $\delta\in(0,1)$ such that $$\label{existdelta} B_{2\delta}(0)\subseteq {\rm int}({\rm dom}\widetilde{g}^q_{\rm hom});$$ - We assume that there exist matrices $\{A_i\}_{i=1}^{nd}\subset \mathbb{R}^{d\times n}$, vertices of the cube $Q$ such that ${\rm B}_\delta(0) \subset Q \subset {\rm B}_{2\delta}(0)$ and, for any $i=1, \dots, nd$, $$\label{hpL} \int_Y g(y, A_i)dy< +\infty.$$ **Remark 17**.
*[\[remark:existenceofcube\]]{#remark:existenceofcube label="remark:existenceofcube"} As in [@CDA02], assumption (H3), in the scalar case, allows us to find a cube $Q$ whose vertices $\{A_i\}_{i=1}^{nd}$ are contained in the ball $B_{2\delta}(0)$ and in the effective domain of $g(x,\cdot)$ for a.e. $x \in \Omega$. This fact is not evident in the vectorial setting. This is the reason why we introduced the technical assumption ${\rm (H4)}$.* *In turn, ${\rm (H4)}$, together with ${\rm (H3)}$ and [@CDA02 Proposition 1.1.15], due to the convexity of $\widetilde g^q_{\rm hom}$, guarantees that $$\widetilde g_{\rm hom}^q(A_i)<+ \infty,$$ for the same $A_i$ appearing in [\[hpL\]](#hpL){reference-type="eqref" reference="hpL"}.* *Analogously, the convexity of $g$ and [\[hpL\]](#hpL){reference-type="eqref" reference="hpL"} entail that $$\label{H40} g(\cdot, 0)\in L^1_{\rm loc}(\mathbb{R}^n),$$ where $0$ denotes the null matrix in $\mathbb{R}^{d\times n}$.* *The same observations made at the beginning allow us to compute the $\Gamma (L^\infty)$-limit of our sequence [\[functCDA\]](#functCDA){reference-type="eqref" reference="functCDA"} by dealing with sequences $\{\varepsilon_h\}_h$ converging to $0^+$ as $h\to +\infty$, extracted from the vanishing family $\{\varepsilon\}$. Moreover, it is sufficient to replace the generic sequence $\{\varepsilon_h\}_h$ by $\{\frac{1}{h}\}$, since, due to hypotheses $(H1)$-$(H4)$, we can argue exactly as in [@CDA02 Lemma 12.1.2].* Next, we state some properties of the homogenized energy density $\widetilde{g}_{{\rm hom}}^q$ given by [\[ftildeCDA\]](#ftildeCDA){reference-type="eqref" reference="ftildeCDA"}, that will be used in the sequel. The proof of such properties follows the same arguments as [@CDA02 Proposition 12.1.3] and, for this reason, it is omitted. **Proposition 18**. *Let $g$ be the energy density satisfying (H1) and (H2).
Let $q\in[1,+\infty]$ and let $\widetilde{g}_{{\rm hom}}^q$ be given by [\[ftildeCDA\]](#ftildeCDA){reference-type="eqref" reference="ftildeCDA"}.* - *It turns out that $\widetilde{g}^q_{{\rm hom}}$ is convex.* - *Assume that $p\in [1,+\infty]$ and $q\in[p, +\infty]$. In addition, assume that $g$ is such that $$\notag \begin{dcases} |Z|^p\leq g(x, Z) & \mbox{for a.e. } x\in\mathbb{R}^n \hspace{0.1cm} \mbox{for any } Z\in\mathbb{R}^{d\times n} \hspace{0.1cm} \mbox{if } p\in[1, +\infty),\\ {\rm dom}g(x, \cdot)\subseteq B_R(0) & \mbox{for a.e. } x\in\mathbb{R}^n \hspace{0.1cm} \mbox{if } p=+\infty. \end{dcases}$$ Then, $\widetilde{g}^q_{{\rm hom}}$ satisfies $$\notag \begin{dcases} |Z|^p\leq \widetilde{g}^q_{{\rm hom}}(Z) & \mbox{for any } Z\in\mathbb{R}^{d\times n} \hspace{0.1cm} \mbox{if } p\in[1, +\infty),\\ {\rm dom}\widetilde{g}^q_{{\rm hom}}\subseteq B_R(0) & \mbox{if } p=+\infty. \end{dcases}$$* - *Let $q\in (n,+\infty]$. Then, $$\widetilde{g}^q_{{\rm hom}}(Z)= \inf\left\{ \int_Y g(y, Z+Dv)dy : v\in W^{1, q}_{\#}(Y; \mathbb{R}^d) \right\}.$$* - *Assume that $p\in(n, +\infty]$ and $p=q$. Furthermore, assume that $g(x, \cdot)$ is lower semicontinuous for a.e. $x\in\mathbb{R}^n$. Then, $\widetilde{g}^q_{{\rm hom}}$ is lower semicontinuous and $$\notag \widetilde{g}^q_{{\rm hom}}(Z) = \min\left\{\int_Y g(y, Z+Dv)dy : v\in W^{1, q}_{\#}(Y; \mathbb{R}^d) \right\},$$ for any $Z\in\mathbb{R}^{d\times n}$.* The main result of this section is the following. **Theorem 19**. *Let $g:\mathbb R^n \times \mathbb R^{d\times n}\to [0, +\infty]$ satisfy (H1)-(H4).
Then, the functionals $\{G_\varepsilon\}_{\varepsilon}$ given by [\[functCDA\]](#functCDA){reference-type="eqref" reference="functCDA"} $\Gamma(L^\infty)$-converge to the functional $G_{{\rm hom}}$ defined by $$\notag G_{{\rm hom}}(\Omega, u)\coloneqq \int_{\Omega} (\widetilde{g}^q_{{\rm hom}}(Du))^{\rm ls}dx,$$ for any $\Omega\in{\cal A}_0$ and $u\in \bigcup_{s>n} W^{1,s}_{{\rm loc}}(\mathbb{R}^n; \mathbb{R}^d)$ and with $\widetilde{g}^q_{{\rm hom}}$ being defined by [\[ftildeCDA\]](#ftildeCDA){reference-type="eqref" reference="ftildeCDA"}.* The proof is quite long and technical; it is provided at the end of this section as a consequence of all the preliminary results proved in the sequel. Indeed, we adapt the methodology used in [@CDA02 Chapter 12]. Briefly, we investigate some measure theoretic properties of the $\Gamma$-limits. In particular, we prove that the $\Gamma$-liminf is super-additive and the $\Gamma$-limsup is sub-additive. The proof slightly differs from [@CDA02 Proposition 12.2.1], which uses arguments that are appropriate only in the scalar setting. Indeed, the proof of these properties in the scalar setting relies on the existence of a particular family of cut-off functions (see [@CDA02 Lemma 11.1.1]). The explicit construction of such cut-off functions cannot be repeated in the vectorial case and hence we need to use other techniques to prove the measure theoretic aspects of the $\Gamma$-limits (see Proposition [Proposition 20](#Proposition 12.2.1){reference-type="ref" reference="Proposition 12.2.1"}). Then, we provide an integral representation on linear functions. Once again, the proof of such a result is different from [@CDA02 Proposition 12.4.6]. Finally, we give a full representation result. ## Measure representation results of $\Gamma$-limits In this subsection, we prove some measure theoretical aspects of the functionals $\{G_\varepsilon\}_{\varepsilon}$ defined by [\[functCDA\]](#functCDA){reference-type="eqref" reference="functCDA"}.
To that end, for $h\in\mathbb{N}$, we consider a subsequence $G_{h}:= G_{\varepsilon_h}$ of the family $\{G_\varepsilon\}_{\varepsilon},$ where $\{\varepsilon_h\}_h$ is a positive decreasing sequence that goes to $0^+$ as $h \rightarrow + \infty$.\ We refer to [@CDA02 Definition 2.6.5] for the definition of increasing set functions, superadditivity on disjoint sets and subadditivity. We set $$\begin{aligned} && \widetilde{G}'(\Omega, \cdot): u \in L^{\infty}_{\rm loc}(\mathbb{R}^n; \mathbb{R}^d) \mapsto \Gamma(L^{\infty})\mbox{-}\liminf_{h \rightarrow + \infty} G_{\varepsilon_h}(\Omega, u), \label{defliminf}\\[2mm] && \widetilde{G}''(\Omega, \cdot): u \in L^{\infty}_{\rm loc}(\mathbb{R}^n; \mathbb{R}^d) \mapsto \Gamma(L^{\infty})\mbox{-}\limsup_{h \rightarrow + \infty} G_{\varepsilon_h}(\Omega, u).\label{deflimsup} \end{aligned}$$ The next result, whose proof is an adaptation of [@CDA02 Proposition 12.2.1], follows. **Proposition 20**. *Let $g: \mathbb{R}^n\times \mathbb{R}^{d \times n} \to [0, +\infty]$ satisfy (H1)-(H4), and let $q\in[1, +\infty]$.
Let $\Omega, \Omega_1, \Omega_2\in {\cal A}_0$.* - *If $\Omega_1\cap \Omega_2=\emptyset$ and $\Omega_1\cup\Omega_2\subset \Omega$, then $$\label{ine12.2.2} \widetilde{G}'_{-}(\Omega, u)\geq \widetilde{G}'_{-}(\Omega_1, u) + \widetilde{G}'_{-}(\Omega_2, u),$$ for any $u\in L^\infty_{\rm loc}(\mathbb{R}^n; \mathbb{R}^d)$, where $\widetilde{G}'_{-}$ is the inner regular envelope (in the sense of [@DM93 Chapter 15]) of the functional in [\[defliminf\]](#defliminf){reference-type="eqref" reference="defliminf"}.* - *If $\Omega\subset \Omega_1\cup \Omega_2$, then $$\label{ine12.2.3} \widetilde{G}''_{-}(\Omega, u)\leq \widetilde{G}''_{-}(\Omega_1, u) + \widetilde{G}''_{-}(\Omega_2, u),$$ for any $u\in L^\infty_{\rm loc}(\mathbb{R}^n; \mathbb{R}^d)$, with $\widetilde{G}''_{-}$ the inner regular envelope of the functional defined by [\[deflimsup\]](#deflimsup){reference-type="eqref" reference="deflimsup"}.* *Proof.* Inequality [\[ine12.2.2\]](#ine12.2.2){reference-type="eqref" reference="ine12.2.2"} follows from the definition of $\widetilde{G}'_{-}$. To show [\[ine12.2.3\]](#ine12.2.3){reference-type="eqref" reference="ine12.2.3"}, we note that, by [\[existdelta\]](#existdelta){reference-type="eqref" reference="existdelta"}, it is not restrictive to assume that $$0 \in {\rm int} ({\rm dom} \widetilde{g}^q_{\rm hom}).$$ Otherwise, taking $z_0\in {\rm int} ({\rm dom} \widetilde{g}^q_{\rm hom})$, it suffices to replace $g$ with $g(\cdot, z_0+\cdot)$. It is not restrictive to assume that $\Omega\subset\subset \Omega_1\cup\Omega_2$, so that we may simply prove that $$\label{ine12.2.4} \widetilde{G}''(\Omega, u)\leq \widetilde{G}''(\Omega_1, u) + \widetilde{G}''(\Omega_2, u),$$ for any $u\in L^\infty(\mathbb{R}^n; \mathbb{R}^d)$. To that end, fix $u\in L^\infty(\mathbb{R}^n; \mathbb{R}^d)$ and assume that the right-hand side of [\[ine12.2.4\]](#ine12.2.4){reference-type="eqref" reference="ine12.2.4"} is finite.
Therefore, for $i=1,2$, there exist two sequences $\{u_h^{i}\}\subset W^{1, q}_{\rm loc}(\mathbb{R}^n; \mathbb{R}^d)\cap L^\infty_{\rm loc}(\mathbb{R}^n; \mathbb{R}^d)$ such that $u_h^i$ converges to $u$ in $L^\infty(\Omega_i; \mathbb{R}^d)$ and, for any positive decreasing sequence $\varepsilon_h \rightarrow 0^+$ as $h \rightarrow + \infty$, $$\label{eq2Prop12.2.1} \limsup_{h\to+\infty} \int_{\Omega_i}g \left(\frac{x}{\varepsilon_h}, Du^i_h \right)dx\leq \widetilde{G}''(\Omega_i, u).$$ Since $\Omega\subset\subset \Omega_1\cup\Omega_2$, there exists $A_1\subset\subset\Omega_1$ such that $\Omega\subset\subset A_1\cup\Omega_2$. For every $h \in \mathbb N$, we construct $w_h\in W^{1,q}_{\rm loc}(\mathbb{R}^n; \mathbb{R}^d)\cap L^\infty_{\rm loc}(\mathbb{R}^n; \mathbb{R}^d)$ interpolating between the two sequences: it agrees with $u^1_h$ on $\overline{A_1}$ and with $u^2_h$ on $\mathbb{R}^n\setminus\Omega_1$. Let $\psi$ be a cut-off function such that $0\leq \psi\leq 1$ a.e. in $\Omega_1$, $\psi\equiv 1$ a.e. in $\overline{A_1}$, $\psi\equiv 0$ a.e. in $\mathbb{R}^n\setminus\Omega_1$, and such that $$\notag%\label{estigradpsi} \|\nabla \psi\|_{L^\infty(\mathbb{R}^n)} \leq {C\over \mbox{dist}(A_1, \partial \Omega_1)}$$ holds for some constant $C>0.$ For any $h\in\mathbb{N}$, we define the vector-valued function $w_h$ by $$\notag w_h\coloneqq \psi u^{1}_h + (1-\psi)u^{2}_h.$$ Note that the sequence $\{w_h\}_h$ converges to $u$ in $L^\infty(\Omega; \mathbb{R}^d)$.
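One admissible choice of $\psi$ (only a sketch; any cut-off with the listed properties works) is a truncated distance function: writing $r:=\mbox{dist}(A_1, \partial\Omega_1)>0$, set $$\notag \psi(x):= \min\left\{1, \max\left\{0, 1-{2\over r}\,\mbox{dist}(x, \overline{A_1})\right\}\right\}.$$ Since $x\mapsto \mbox{dist}(x, \overline{A_1})$ is $1$-Lipschitz, $\psi$ is Lipschitz with $\|\nabla \psi\|_{L^\infty(\mathbb{R}^n)}\leq 2/r$, it equals $1$ on $\overline{A_1}$ and vanishes outside the $(r/2)$-neighbourhood of $\overline{A_1}$, which is contained in $\Omega_1$; this realizes the gradient bound above with $C=2$.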
For fixed $t\in[0,1)$, invoking the convexity of $g$ and recalling that $\Omega\setminus A_1\subset\subset \Omega_2$, it follows that $$\begin{aligned} & \widetilde{G}''(\Omega, tu)\leq \limsup_{h\to +\infty} \int_\Omega g\left(\frac{x}{\varepsilon_h}, tDw_h\right )dx\notag\\ &= \limsup_{h\to +\infty} \int_\Omega g\left(\frac{x}{\varepsilon_h}, t\psi Du^1_h + t(1-{\psi})Du^2_h + t(u^1_h-u^2_h)\otimes \nabla \psi \right)dx\notag\\ &\leq \limsup_{h\to+\infty} t\int_\Omega g\left(\frac{x}{\varepsilon_h}, \psi Du^1_h + (1-\psi)Du^2_h\right)dx\notag\\ &\quad + \limsup_{h\to+\infty} (1-t)\int_\Omega g\left(\frac{x}{\varepsilon_h}, {t\over 1-t}(u^1_h-u^2_h)\otimes \nabla \psi\right)dx\notag\\ &\leq \limsup_{h\to+\infty} \int_\Omega \psi(x)g\left(\frac{x}{\varepsilon_h}, Du^1_h\right)dx + \limsup_{h\to +\infty} \int_\Omega (1-\psi(x))g\left(\frac{x}{\varepsilon_h}, Du^2_h\right)dx\notag\\ &\quad + \limsup_{h\to +\infty} (1-t)\int_\Omega g\left(\frac{x}{\varepsilon_h}, {t\over 1-t}(u^1_h-u^2_h)\otimes \nabla \psi\right)dx\notag\\ &\leq \limsup_{h\to+\infty} \int_{\Omega_1} g\left(\frac{x}{\varepsilon_h}, Du^1_h\right)dx + \limsup_{h\to+\infty} \int_{\Omega_2} g\left(\frac{x}{\varepsilon_h}, Du^2_h\right)dx\notag\\ &\quad + \limsup_{h\to+\infty} (1-t)\int_\Omega g\left(\frac{x}{\varepsilon_h}, {t\over 1-t}(u^1_h-u^2_h)\otimes \nabla \psi\right)dx .\notag%\label{eq1-Prop12.2.1} \end{aligned}$$ We observe that $\nabla \psi$ is not identically equal to $0$ in $\Omega\cap(\Omega_1\setminus \overline{A_1})$ while $\nabla \psi=0$ in $\Omega\setminus(\Omega_1\setminus \overline{A_1})$.
This along with [\[eq2Prop12.2.1\]](#eq2Prop12.2.1){reference-type="eqref" reference="eq2Prop12.2.1"} implies that $$\begin{aligned} \widetilde{G}''(\Omega, tu)& \leq \widetilde{G}''(\Omega_1, u) + \widetilde{G}''(\Omega_2, u)+(1-t)\limsup_{h\to+\infty} \int_{\Omega\setminus(\Omega_1\setminus \overline{A_1})} g\left(\frac{x}{\varepsilon_h}, 0\right)dx\notag\\ &\quad + (1-t)\limsup_{h\to+\infty} \int_{\Omega\cap(\Omega_1\setminus \overline{A_1})} g\left(\frac{x}{\varepsilon_h}, {t\over 1-t}(u^1_h-u^2_h)\otimes \nabla \psi \right)dx.\label{eq3Prop12.2.1} \end{aligned}$$ We estimate the last limsup in the right-hand side of [\[eq3Prop12.2.1\]](#eq3Prop12.2.1){reference-type="eqref" reference="eq3Prop12.2.1"}. First, note that $\Omega\cap(\Omega_1\setminus\overline{A_1})\subset \Omega_1\cap\Omega_2$. Hence, $u^1_h-u^2_h$ converges to $0$ in $L^\infty(\Omega\cap(\Omega_1\setminus\overline{A_1}); \mathbb{R}^d)$. Therefore, for fixed $t\in[0,1)$, there exists $h_t\in\mathbb{N}$ such that $$\notag {t\over 1-t}(u^1_h-u^2_h)\otimes \nabla \psi \in B_\delta(0),$$ for any $h\geq h_t$ and for a.e. $x\in\Omega\cap(\Omega_1\setminus\overline{A}_1)$.
Since ${t\over 1-t}(u^1_h-u^2_h)\otimes \nabla \psi$ belongs to $B_\delta(0)\subset Q$, it may be written as a convex combination of the vertices $\{A_i\}_{i=1}^{dn}$ of the cube $Q$ and $$\begin{aligned} &\int_{\Omega\cap(\Omega_1\setminus \overline{A_1})} g\left (\frac{x}{\varepsilon_h}, {t\over 1-t}(u^1_h-u^2_h)\otimes \nabla \psi\right)dx =\int_{\Omega\cap(\Omega_1\setminus \overline{A_1})} g\left(\frac{x}{\varepsilon_h}, \sum_{i=1}^{dn} s^h_i(x) A_{i} \right)dx\notag\\ &\leq \sum_{i=1}^{dn} \int_{\Omega\cap(\Omega_1\setminus \overline{A_1})} s^h_i(x) g\left(\frac{x}{\varepsilon_h}, A_i\right)dx\notag\\ &\leq \sum_{i=1}^{dn}\int_{\Omega\cap(\Omega_1\setminus \overline{A_1})} g\left(\frac{x}{\varepsilon_h}, A_i\right)dx\notag \end{aligned}$$ for any $h\geq h_t$. Now, in view of [\[hpL\]](#hpL){reference-type="eqref" reference="hpL"}, the Riemann-Lebesgue Lemma ensures that there exists a positive constant $C_g=C(g)$ such that $$\notag \limsup_{h\to+\infty} \int_{\Omega\cap(\Omega_1\setminus \overline{A_1})}g\left(\frac{x}{\varepsilon_h}, {t\over 1-t}(u^1_h-u^2_h)\otimes \nabla \psi\right)dx\leq \mathcal L^n(\Omega)\sum_{i=1}^{{dn}}\int_Y g(y, A_i)dy\leq C_g.$$ This along with [\[eq3Prop12.2.1\]](#eq3Prop12.2.1){reference-type="eqref" reference="eq3Prop12.2.1"} yields $$\begin{aligned} \widetilde{G}''(\Omega, tu)& \leq \widetilde{G}''(\Omega_1, u) + \widetilde{G}''(\Omega_2, u)+(1-t)\mathcal L^n(\Omega)\int_Yg(y, 0)dy +(1-t) C_g,\notag \end{aligned}$$ for any $t\in [0, 1)$. Finally, passing to the limit as $t\to 1$, we obtain estimate [\[ine12.2.4\]](#ine12.2.4){reference-type="eqref" reference="ine12.2.4"}, as desired.
◻ ## Finiteness conditions In this subsection, we provide some sufficient conditions to get finiteness of the functional $G''(\Omega, u)$. First, we show the finiteness on the set of piecewise affine functions. The proof is quite technical and differs from the one available in the scalar setting, where ad hoc cut-off functions may be easily constructed (see [@CDA02]). Here we propose a more generic construction leading to a slightly different estimate from the analogous one given in [@CDA02 Lemma 12.3.1]. For any $u\in PA(\mathbb{R}^n; \mathbb{R}^d)$, we set $$\label{def:functionsigmau} \sigma(u)\coloneqq \max_{i\in\{1,\dots, m\}} {\rm card} \left\{j\in\{1,\dots, m\}\hspace{0.03cm}: \hspace{0.03cm} \overline{P}_j\cap \overline{P}_i \neq\emptyset \right\}.$$ **Lemma 21**. *Let $g: \mathbb{R}^n\times \mathbb{R}^{d \times n} \to [0, +\infty]$ satisfy (H1)-(H4). Let $G''$ be the functional given by [\[deflimsup\]](#deflimsup){reference-type="eqref" reference="deflimsup"} and $q\in [1,+\infty]$. Then, $$\notag G''(\Omega, tu)\leq t\int_{\Omega}\widetilde{g}^q_{\rm hom}(Du)dx +(1-t){\cal L}^n(\Omega)\int_Y g(y,0)dy,$$ for any $\Omega\in{\cal A}_0$, $u\in PA(\mathbb{R}^n; \mathbb{R}^d)$ and $t\in \biggl[0, {2\delta\over 4\sigma(u)(2\|Du\|_{L^\infty} +1)+\delta }\biggr]$, where $\widetilde{g}^q_{\rm hom}$ is defined by [\[ftildeCDA\]](#ftildeCDA){reference-type="eqref" reference="ftildeCDA"}.* *Proof.* First, fix $\Omega\in{\cal A}_0$ and $u{(x)}=\sum_{i=1}^{m}(u_{Z_i}(x)+c_i)\chi_{P_i}(x)$, according to [\[aff-u-Zi\]](#aff-u-Zi){reference-type="eqref" reference="aff-u-Zi"}. For any $i\in\{1,\dots, m\}$, let $\Omega_i=\Omega\cap {\rm int}P_i$, so that $\{\Omega_i\}_{i=1}^{m}$ is an open cover of $\Omega$.
For $\varepsilon>0$ and $i\in\{1,\dots, m\}$, we introduce the following sets $$\begin{aligned} \Omega_{i,\varepsilon}^{-} &\coloneqq \{x\in\Omega_i : {\rm dist}(x, \partial\Omega_i)>\varepsilon\},\notag\\ \Omega_{i,\varepsilon}^{+} &\coloneqq \{x\in \mathbb{R}^n : {\rm dist}(x, \Omega_i)<\varepsilon\}.\notag \end{aligned}$$ Fix $t\in [0,1)$ to be chosen later. Assume that $$\notag \sum_{i=1}^{m} \mathcal L^n(\Omega_i)\widetilde{g}^q_{\rm hom}(Z_i) = \int_{\Omega}\widetilde{g}^q_{\rm hom}(Du)dx<\infty.$$ This implies that $Z_i\in{\rm dom}\widetilde{g}^q_{\rm hom}$ for every $i\in\{1,\dots, m\}$. Hence, for fixed $\theta\in (0, +\infty)$ and $i\in\{1, \dots, m\}$, there exists $v^i\in W^{1,q}_{\#}(Y; \mathbb{R}^d)\cap L^\infty(Y; \mathbb{R}^d)$ such that $$\notag \int_Y g(y, Z_i+ Dv^i) dy \leq \widetilde{g}^q_{\rm hom}(Z_i) +\theta.$$ For every $i\in\{1, \dots, m\}$, set $v^i_h(\cdot) = {1\over h} v^i(h\cdot)$, with $h\in\mathbb{N}$. It follows that $$\notag \lim_{h\to+\infty} \int_{\Omega\cap \Omega_{i, \varepsilon}^{+}} g\left({x\over \varepsilon_h}, Z_i+Dv^i_h\right)dx \leq {\cal L}^n(\Omega\cap \Omega^{+}_{i,\varepsilon}) (\widetilde{g}^q_{\rm hom}(Z_i) + \theta).$$ The aim is to approximate the fixed piecewise affine function $u\in PA(\mathbb{R}^n; \mathbb{R}^d)$ with a sequence of functions $\{w_{\varepsilon}\}_{\varepsilon}$ built by means of a suitable partition of unity subordinate to the open cover $\{\Omega_i\}_{i=1}^{m}$. We proceed in three steps: in the first one, we construct the partition of unity using an appropriate cut-off function and the family $\{w_\varepsilon\}_\varepsilon$; in the second one, we estimate the functional $G''(\Omega, t w_\varepsilon)$; in the last step, we show our claim passing to the limit as $\varepsilon \to 0$. **STEP 1.** For $\varepsilon>0$ sufficiently small and for $i\in\{1,\dots, m\}$, let $\psi_{i,\varepsilon}$ be a cut-off function such that $\psi_{i, \varepsilon}(x)\equiv 0$ a.e.
in $\mathbb{R}^n \setminus \Omega_{i, \varepsilon}^{+}$, $\psi_{i, \varepsilon}(x)\equiv 1$ a.e. in $\overline{\Omega_{i, {\varepsilon\over 2}}^{+}}$, $0\leq \psi_{i, \varepsilon} (x)\leq 1$ a.e. in $\Omega_{i, \varepsilon}^{+}$ and $$\notag \|\nabla \psi_{i, \varepsilon} \|_{L^\infty(\mathbb{R}^n; \mathbb{R}^n)}\leq {C\over {\rm dist}(\Omega_{i, {\varepsilon\over 2}}^{+}, \partial \Omega_{i, \varepsilon}^{+})}.$$ Note that $$\notag \sum_{i=1}^{m} \psi_{i, \varepsilon} (x)\geq 1 \qquad \mbox{for a.e. } x\in \bigcup_{i=1}^{m} \Omega_{i, {\varepsilon\over 2}}^{+}.$$ Then, for fixed $\varepsilon>0$ small enough, we define the partition of unity $\{\gamma_{i, \varepsilon}\}_{i=1}^{m}$ subordinate to $\{\Omega_i\}_{i=1}^{m}$ by $$\notag \gamma_{i, \varepsilon}(x)\coloneqq {1\over \sum_{j=1}^{m}\psi_{j, \varepsilon} (x)}\psi_{i, \varepsilon} (x).$$ For fixed $\varepsilon>0$, let $\{w_{h, \varepsilon}\}_{h\in\mathbb{N}}$ be the sequence defined by $$\notag w_{h, \varepsilon}(x) \coloneqq \sum_{i=1}^{m} \left(u_{Z_i}(x) + c_i + v^i_h(x)\right)\gamma_{i, \varepsilon}(x).$$ Since $v^i_h$ goes to $0$ in $L^\infty(\mathbb{R}^n; \mathbb{R}^d)$ as $h\to +\infty$, we immediately get that, for every $\varepsilon>0$ sufficiently small, $w_{h, \varepsilon}$ converges in $L^\infty(\mathbb{R}^n; \mathbb{R}^d)$ as $h\to+\infty$ to the function $w_\varepsilon$ given by $$\notag w_\varepsilon(x)\coloneqq \sum_{i=1}^{m} \left(u_{Z_i}(x)+ c_i\right) \gamma_{i, \varepsilon}(x).$$ **STEP 2.** Thanks to the convergence $w_{h, \varepsilon}\to w_{\varepsilon}$ as $h\to+\infty$ combined with the definition of $G''(\Omega, \cdot)$ and $(H2)$, for any $\varepsilon>0$ small enough, we deduce that $$\begin{aligned} &G''(\Omega, tw_\varepsilon) \leq \limsup_{h\to+\infty} \int_{\Omega} g\left({x\over \varepsilon_h}, tDw_{h, \varepsilon}(x)\right) dx\notag\\ &\quad=\limsup_{h\to+\infty} \int_{\Omega} g\biggl({x\over \varepsilon_h}, t\sum_{i=1}^{m} (Z_i+Dv^i_h(x))\gamma_{i, \varepsilon}(x) \notag\\
&\hspace{5cm}+ (1-t){t\over (1-t)} \sum_{i=1}^{m} (u_{Z_i}(x)+c_i+v^i_h(x))\otimes \nabla \gamma_{i, \varepsilon}(x)\biggr) dx\notag\\ &\quad \leq t\limsup_{h\to+\infty} \int_{\Omega} g\left({x\over \varepsilon_h}, \sum_{i=1}^{m}(Z_i+Dv^i_h(x))\gamma_{i, \varepsilon}(x) \right)dx\notag\\ &\qquad + (1-t) \limsup_{h\to+\infty} \int_{\Omega} g\left({x\over\varepsilon_h}, {t\over (1-t)} \sum_{i=1}^{m}(u_{Z_i}(x)+c_i+v^i_h(x))\otimes \nabla\gamma_{i, \varepsilon}(x)\right) dx\notag\\ &\quad\leq t\sum_{i=1}^{m}\limsup_{h\to+\infty}\int_{\Omega\cap\Omega_{i, \varepsilon}^{+}} g\left({x\over \varepsilon_h}, Z_i+Dv^i_h(x) \right)dx\notag\\ &\qquad + (1-t) \limsup_{h\to+\infty} \int_{\Omega} g\left({x\over\varepsilon_h}, {t\over (1-t)} \sum_{i=1}^{m}(u_{Z_i}(x)+ c_i+v^i_h(x))\otimes \nabla\gamma_{i, \varepsilon}(x)\right) dx.\label{eq1} \end{aligned}$$ In view of Step 1, we know that $\sum_{i=1}^m \nabla \gamma_{i, \varepsilon}\equiv 0$ a.e. in $\Omega$. Combining this with the decomposition $\Omega= \left[\Omega\setminus\bigcup_{j=1}^m\Omega_{j,\varepsilon}^{-} \right]\cup \left[\bigcup_{j=1}^m\Omega_{j,\varepsilon}^{-} \right]$, the last limsup in the right-hand side of [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"} may be estimated as follows: $$\begin{aligned} \limsup_{h\to+\infty} &\int_{\Omega} g\left({x\over\varepsilon_h}, {t\over (1-t)} \sum_{i=1}^{m}(u_{Z_i}(x)+c_i+v^i_h(x))\otimes \nabla \gamma_{i, \varepsilon}(x)\right) dx\notag\\ &=\limsup_{h\to+\infty}\int_{\Omega}g\left({x\over \varepsilon_h}, {t\over (1-t)} \sum_{i=1}^{m}(u_{Z_i}(x)+c_i+v^i_h(x)-u(x))\otimes \nabla \gamma_{i, \varepsilon}(x)\right) dx\notag\\ &\leq \sum_{j=1}^{m}\limsup_{h\to+\infty}\int_{\Omega_{j, \varepsilon}^{-} } g\left({x\over \varepsilon_h}, 0 \right)dx\notag\\ &\quad + \sum_{j=1}^{m}\limsup_{h\to+\infty}\int_{\Omega_j\setminus\Omega_{j, \varepsilon}^{-}}g\left({x\over \varepsilon_h} ,{t\over (1-t)} \sum_{i=1}^{m}(u_{Z_i}(x)+c_i+v^i_h(x)-u(x))\otimes \nabla \gamma_{i, \varepsilon}(x)\right)dx.
\label{eq2} \end{aligned}$$ Now, for any $j\in\{1, \dots, m\}$, let $\nu_\varepsilon(\Omega_1, \dots, \Omega_m)(\Omega_j)$ be the number of elements of $\{\Omega_1, \dots, \Omega_m \}$ whose distance from $\Omega_j$ is less than $\varepsilon$. Then, we define $\sigma_\varepsilon=\sigma_\varepsilon(\Omega_1, \dots, \Omega_m)$ by $$\notag \sigma_\varepsilon\coloneqq \sup_{j\in\{1,\dots, m\}} \nu_\varepsilon(\Omega_1, \dots, \Omega_m)(\Omega_j).$$ Since, for fixed $\varepsilon>0$ sufficiently small and for any $j\in\{1,\dots, m\}$, $\nu_\varepsilon(\Omega_1, \dots, \Omega_m)(\Omega_j)\leq \sigma_\varepsilon$, we denote by $\{\Omega_{i_1}, \dots, \Omega_{i_{\sigma_\varepsilon}}\}$ a subset of $\{\Omega_1,\dots, \Omega_m\}$ containing all the sets $\Omega_i$ satisfying ${\rm dist}(\Omega_j, \Omega_i)<\varepsilon$. Therefore, due to the convexity of $g$, we deduce that, for any $j\in\{1,\dots,m\}$ and for any $\varepsilon>0$ sufficiently small, $$\begin{aligned} &\int_{\Omega_j\setminus\Omega_{j, \varepsilon}^{-}}g\left({x\over\varepsilon_h}, {t\over (1-t)} \sum_{i=1}^{m}(u_{Z_i}(x)+c_i+v^i_h(x)-u(x))\otimes \nabla \gamma_{i, \varepsilon}(x)\right)dx\notag\\ &\quad =\int_{\Omega_j\setminus\Omega_{j, \varepsilon}^{-}}g\left({x\over\varepsilon_h}, {t\over (1-t)} \sum_{k=1}^{\sigma_\varepsilon}{1\over \sigma_\varepsilon}\sigma_\varepsilon(u_{Z_{i_k}}(x)+c_{i_k}+v^{i_k}_h(x)-u(x))\otimes \nabla \gamma_{i_k, \varepsilon}(x)\right)dx\notag\\ &\quad\leq\sum_{k=1}^{\sigma_\varepsilon}{1\over \sigma_\varepsilon}\int_{\Omega_j\setminus\Omega_{j, \varepsilon}^{-}}g\left({x\over \varepsilon_h}, {t\over (1-t)} \sigma_\varepsilon(u_{Z_{i_k}}(x)+c_{i_k}+v^{i_k}_h(x)-u(x))\otimes \nabla \gamma_{i_k, \varepsilon}(x)\right)dx\notag\\ &\quad \leq \sum_{k=1}^{\sigma_\varepsilon}{1\over \sigma_\varepsilon}\int_{(\Omega_j\setminus\Omega_{j, \varepsilon}^{-})\cap\Omega^{+}_{i,\varepsilon}}g\left({x\over \varepsilon_h}, {t\over (1-t)}
\sigma_\varepsilon(u_{Z_{i_k}}(x)+c_{i_k}+v^{i_k}_h(x)-u(x))\otimes \nabla \gamma_{i_k, \varepsilon}(x)\right)dx\notag\\ &\qquad +\int_{\Omega_j\setminus\Omega^{-}_{j, \varepsilon}} g\left({x\over\varepsilon_h}, 0\right)dx.\notag \end{aligned}$$ Now, note that there exists $\varepsilon(u)\in(0, +\infty)$ such that $\sigma_\varepsilon\leq \sigma(u)$ for any $\varepsilon\in(0, \varepsilon(u))$, where $\sigma(u)$ is given by [\[def:functionsigmau\]](#def:functionsigmau){reference-type="eqref" reference="def:functionsigmau"}. Fix $\varepsilon\in(0, \varepsilon(u))$. Since $$\begin{aligned} \left\| \sigma_\varepsilon{t\over 1-t}(u_{Z_{i}}(\cdot)+ c_i+v^i_h(\cdot)-u(\cdot)) \right\|_{L^\infty(\Omega\cap\Omega^{+}_{i,\varepsilon}; \mathbb{R}^{d}) }\notag \\\leq \sigma_\varepsilon{t\over 1-t}\left(2\|Du\|_{L^\infty(\Omega\cap\Omega^{+}_{i,\varepsilon}; \mathbb{R}^{d\times n}) } +1\right)\varepsilon\notag\\ \leq \sigma(u) {t\over 1-t}\left(2\|Du\|_{L^\infty(\Omega\cap\Omega^{+}_{i,\varepsilon}; \mathbb{R}^{d\times n}) } +1\right)\varepsilon,\label{torefer} \end{aligned}$$ we may choose $t\in [0,1)$ such that $$\notag \sigma(u){t\over 1-t} \left(2\|Du\|_{L^\infty(\Omega\cap\Omega^{+}_{i,\varepsilon}; \mathbb{R}^{d\times n}) } +1\right) \leq{\delta\over 2},$$ with $\delta$ being given by assumption (H3).
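For the reader's convenience, we sketch how the first inequality in [\[torefer\]](#torefer){reference-type="eqref" reference="torefer"} may be obtained, using that $u=u_{Z_i}+c_i$ on $\Omega_i$ with $|Z_i|\leq \|Du\|_{L^\infty}$, and that $\|v^i_h\|_{L^\infty}\leq\varepsilon$ for $h$ sufficiently large: for a.e. $x\in\Omega\cap\Omega^{+}_{i,\varepsilon}$, choosing $x_0\in\overline{\Omega_i}$ with $|x-x_0|\leq\varepsilon$, $$\notag |u_{Z_i}(x)+c_i+v^i_h(x)-u(x)| \leq |Z_i(x-x_0)| + |u(x_0)-u(x)| + \|v^i_h\|_{L^\infty} \leq \left(2\|Du\|_{L^\infty(\Omega\cap\Omega^{+}_{i,\varepsilon};\, \mathbb{R}^{d\times n})}+1\right)\varepsilon.$$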
In other words, $$\notag t\leq {2\delta \over \sigma(u)\left(2\|Du\|_{L^\infty(\Omega\cap\Omega^{+}_{i,\varepsilon}; \mathbb{R}^{d\times n}) } +1\right) +\delta}.$$ Hence, by [\[torefer\]](#torefer){reference-type="eqref" reference="torefer"}, for this choice of $t$, we obtain that $$\begin{aligned} \notag \left\| \sigma_\varepsilon{t\over 1-t}(u_{Z_{i}}(\cdot)+c_i+v^i_h(\cdot)-u(\cdot)) \right\|_{L^\infty(\Omega\cap\Omega^{+}_{i,\varepsilon};\mathbb{R}^{d})} \leq{\delta\over 2}\varepsilon, \end{aligned}$$ for $h$ sufficiently large and for any $i\in\{1, \dots, m\}$. This ensures that there exists $h_t$ such that $$\begin{aligned} \notag \sigma_\varepsilon{t\over 1-t}(u_{Z_{i}}(x)+c_i+v^i_h(x)-u(x))\otimes \nabla \gamma_{i, \varepsilon} \in B_{\delta}(0), \end{aligned}$$ for any $h>h_t$ and for any $\varepsilon>0$ sufficiently small. Thanks to [\[hpL\]](#hpL){reference-type="eqref" reference="hpL"}, for any $i\in\{1, \dots, m\}$, $\sigma_\varepsilon{t\over 1-t}(u_{Z_{i}}+c_i + v^i_h-u)\otimes \nabla \gamma_{i, \varepsilon}$ may be represented as a convex combination of the vertices $\{A_l\}_{l=1}^{{dn}}$ of the cube $Q$, i.e., for any $i\in\{1,\dots, m\}$ there exist $s^{i,h}_1(x), \dots, s^{i,h}_{dn}(x)\in(0,1)$ such that $$\notag \sigma_\varepsilon{t\over 1-t}(u_{Z_{i}}+ c_i +v^i_h-u)\otimes \nabla \gamma_{i, \varepsilon} =\sum_{l=1}^{dn} s^{i,h}_l A_l.$$ Therefore, for any $\varepsilon\in(0, \varepsilon(u))$, once more by exploiting the convexity of $g$, we deduce $$\begin{aligned} &\limsup_{h\to+\infty} \sum_{k=1}^{\sigma_\varepsilon}{1\over \sigma_\varepsilon}\int_{(\Omega_j\setminus\Omega_{j, \varepsilon}^{-})\cap\Omega^{+}_{j,\varepsilon}}g\left({x\over \varepsilon_h}, {t\over (1-t)} \sigma_\varepsilon(u_{Z_{i_k}}(x)+ c_{i_k}+v^{i_k}_h(x)-u(x))\otimes \nabla \gamma_{i_k, \varepsilon}(x)\right)dx\notag\\ &\quad = \limsup_{h\to+\infty}
\sum_{k=1}^{\sigma_\varepsilon}{1\over \sigma_\varepsilon} \int_{(\Omega_j\setminus\Omega_{j, \varepsilon}^{-})\cap\Omega^{+}_{j,\varepsilon}}g\left({x\over \varepsilon_h}, \sum_{l=1}^{{dn}} s_l^{i_k, h_k}(x) A_l \right)dx\notag\\ &\quad \leq \limsup_{h\to+\infty} \sum_{k=1}^{\sigma_\varepsilon}{1\over \sigma_\varepsilon} \sum_{l=1}^{{dn}} \int_{(\Omega_j\setminus\Omega_{j, \varepsilon}^{-})\cap\Omega^{+}_{j,\varepsilon}} s_l^{i_k,h_k}(x)g\left({x\over \varepsilon_h}, A_l \right)dx\notag\\ &\quad \leq {\cal L}^n(\Omega_j\setminus\Omega_{j,\varepsilon}^{-}) \sum_{l=1}^{{dn}}\int_Y g(y, A_l)dy.\notag \end{aligned}$$ This combined with [\[eq2\]](#eq2){reference-type="eqref" reference="eq2"} implies that $$\begin{aligned} \limsup_{h\to+\infty} &\int_{\Omega} g\left({x\over\varepsilon_h}, {t\over (1-t)} \sum_{i=1}^{m}(u_{Z_{i}}(x)+c_i+v^i_h(x))\otimes \nabla \gamma_{i, \varepsilon}(x)\right) dx\notag\\ &\leq \sum_{j=1}^{m}\limsup_{h\to+\infty}\int_{ \Omega_{j, \varepsilon}^{-} } g\left({x\over \varepsilon_h}, 0 \right)dx\notag\\ &\quad + \sum_{j=1}^{m}\limsup_{h\to+\infty}\int_{\Omega_j\setminus\Omega_{j, \varepsilon}^{-}}g\left({x\over \varepsilon_h} ,{t\over (1-t)} \sum_{i=1}^{m}(u_{Z_{i}}(x)+c_i+v^i_h(x)-u(x))\otimes \nabla \gamma_{i, \varepsilon}(x)\right)dx\notag\\ &\quad \leq\sum_{j=1}^{m}\left[ {\cal L}^n(\Omega_{j, \varepsilon}^{-}) \int_Y g(y, 0)dy + {\cal L}^n(\Omega_j\setminus\Omega_{j,\varepsilon}^{-}) \sum_{l=1}^{{dn}}\int_Y g(y, A_l)dy\right]\notag.
\end{aligned}$$ Hence, from [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"}, we conclude that $$\begin{aligned} G''(\Omega, tw_\varepsilon) &\leq t\sum_{i=1}^{m}{\cal L}^n(\Omega\cap\Omega_{i,\varepsilon}^{+})(\widetilde{g}^q_{\rm hom}(Z_i) + \theta)\notag\\ &\quad + (1-t) \sum_{j=1}^{m} \left[{\cal L}^n(\Omega_{j,\varepsilon}^{-}) \int_Y g(y, 0) dy+ {\cal L}^n(\Omega_j\setminus\Omega_{j,\varepsilon}^{-}) \sum_{l=1}^{{dn}} \int_Y g(y, A_l)dy\right], \label{eq3} \end{aligned}$$ for any $\varepsilon\in(0, \varepsilon(u))$, which concludes Step 2. **STEP 3.** First, note that, for any $\varepsilon\in(0, \varepsilon(u))$, $$\begin{aligned} \|w_\varepsilon-u\|_{L^\infty(\Omega; \mathbb{R}^d)}&=\left \|\sum_{i=1}^{m}(u_{Z_{i}}(\cdot)+c_i-u(\cdot))\gamma_{i,\varepsilon}(\cdot) \right\|_{L^\infty(\Omega; \mathbb{R}^d)}\notag\\ &\leq \sum_{i=1}^{m}\|(u_{Z_{i}}(\cdot) +c_i-u(\cdot))\gamma_{i,\varepsilon}(\cdot)\|_{L^\infty(\Omega; \mathbb{R}^d)}\notag\\ &\leq m\left(2\|Du\|_{L^\infty(\Omega; \mathbb{R}^{d\times n})}+1\right)\varepsilon.\notag \end{aligned}$$ This implies that $w_\varepsilon$ converges in $L^\infty(\Omega; \mathbb{R}^d)$ to $u$ as $\varepsilon\to 0$. Hence, using estimate [\[eq3\]](#eq3){reference-type="eqref" reference="eq3"}, we deduce that $$\begin{aligned} G''(\Omega, tu) &\leq \liminf_{\varepsilon\to 0} G''(\Omega, tw_\varepsilon)\notag\\ &\leq t\sum_{i=1}^{m}{\cal L}^n(\Omega\cap\overline{\Omega_{i}})(\widetilde{g}^q_{\rm hom}(Z_i) + \theta)+(1-t){\cal L}^n(\Omega)\int_Yg(y, 0) dy.\notag \end{aligned}$$ Taking the limit as $\theta\to 0$, we obtain our claim, as desired. ◻ In light of Lemma [Lemma 21](#lemma12.3.1){reference-type="ref" reference="lemma12.3.1"}, we show the finiteness result in the next proposition.
**Proposition 22**. *Let $g: \mathbb{R}^n\times \mathbb{R}^{d \times n} \to [0, +\infty]$ satisfy (H1)-(H4). Let $G''$ be the functional given by [\[deflimsup\]](#deflimsup){reference-type="eqref" reference="deflimsup"} and $q\in[1,+\infty]$. Then, there exist $r\in (0, \delta)$ and a constant $C>0$ such that $$\notag G''_{-}(\Omega, u)\leq C {\cal L}^n(\Omega),$$ for any $\Omega\in{\cal A}_0$ and for any $u\in W^{1,\infty}_{\#} (Y; \mathbb{R}^d)$ such that $\|Du\|_{L^\infty(\Omega; \mathbb{R}^{d \times n})}\leq r$.* *Proof.* Fix $\Omega\in{\cal A}_0$ and let $Q$ be an open cube of $\mathbb{R}^n$ such that $\Omega\subset\subset Q$. Let $r\in (0,+\infty)$ be a constant to be chosen later and let $u\in W^{1,\infty}_{\#}(\mathbb{R}^n; \mathbb{R}^d)$ be such that $\|Du\|_{L^\infty(\Omega;\mathbb{R}^{d \times n})}\leq r$.\ It is not restrictive to assume that $u=0$ in $\mathbb{R}^n\setminus Q$. For some $l\in\mathbb{N}$, let $S_1, \dots, S_l\subset \mathbb{R}^n\setminus Q$ be polyhedral sets with pairwise disjoint interiors such that ${\cal L}^n((\mathbb{R}^n\setminus Q)\setminus \bigcup_{i=1}^{l}S_i)=0$. Let $P_1, \dots, P_m\subset Q$ be $n$-simplexes with pairwise disjoint interiors such that $Q=\bigcup_{i=1}^{m} P_i$. For every $h\in\mathbb{N}$, let $P_1^h, \dots, P^h_{m^n}$ be the $n$-simplexes obtained by taking the ${1\over h}$-rescaled copies of $P_1, \dots, P_m$ repeated ${1\over h}Q$-periodically, so that $Q=\bigcup_{i=1}^{m^n}P^h_i$. For any $h\in\mathbb{N}$, let $u_h\in PA(\mathbb{R}^n; \mathbb{R}^d)$ be such that $u_h$ is affine on each $n$-simplex of $\{P_1^h, \dots, P_{m^n}^h\}$, equal to $u$ on the vertices of the elements of $\{P_1^h, \dots, P_{m^n}^h\}$ and equal to $0$ in each element of $\{S_1, \dots, S_l\}$.
Since, for any $h\in\mathbb{N}$ and for any $i\in\{1, \dots, m^n\}$, $P_i^h$ intersects at most $m^n$ elements of $\{P_1^h, \dots, P_{m^n}^h\}$, we get that - $\sigma(u_h)\leq m^n+l$, with $\sigma(u)$ being given by [\[def:functionsigmau\]](#def:functionsigmau){reference-type="eqref" reference="def:functionsigmau"}; - $u_h$ converges in $L^\infty(\mathbb{R}^n; \mathbb{R}^d)$ to $u$ as $h\to\infty$; - $\|Du_h\|_{L^\infty(\Omega; \mathbb{R}^{d\times n})}\leq C\|Du\|_{L^\infty(\Omega; \mathbb{R}^{d\times n})}$ for any $h\in\mathbb{N}$. Hence, by items (i) and (iii), we deduce that $$\notag {\delta\over 4(m^n+l)(2\delta+1)+\delta} \leq {\delta\over 4\sigma(u_h)\biggl(\|{\delta\over Cr} Du_h\|_{L^\infty(\Omega; \mathbb{R}^{d\times n})} +1 \biggr)+\delta},$$ where $C$ is the constant appearing in (iii). An application of Lemma [Lemma 21](#lemma12.3.1){reference-type="ref" reference="lemma12.3.1"} yields $$\label{eq4} G^{''} \left (\Omega, t{\delta\over C r} u_h \right ) \leq t\int_{\Omega}\widetilde{g}^q_{\rm hom}\left({\delta\over Cr}Du_h\right)dx+(1-t){\cal L}^n(\Omega)\int_Y g(y,0)dy,$$ for any $h\in\mathbb{N}$ and for any $t\in\left[0, {\delta\over 4(m^n+l)(2\delta+1)+\delta} \right]$. We can pick $t={Cr\over\delta}$ if and only if $$\notag r\leq {2\delta^2\over C\left[4(m^n+l)(2\delta+1)+\delta\right]}.$$ Thus, choosing $r$ as above in [\[eq4\]](#eq4){reference-type="eqref" reference="eq4"}, we deduce that, for any $h\in\mathbb{N}$, $$\label{eq5} G''(\Omega, u_h)\leq {Cr\over\delta} \int_{\Omega} \widetilde{g}^q_{\rm hom}\left( {\delta\over Cr}Du_h\right)dx + \left(1-{Cr\over\delta}\right) {\cal L}^n(\Omega) \int_Y g(y,0)dy.
$$ From the fact that $\|Du_h\|_{L^\infty(\Omega; \mathbb{R}^{d\times n})}\leq Cr$, it follows that $$\label{eq6} \left\| {\delta\over Cr} Du_h\right\|_{L^\infty(\Omega; \mathbb{R}^{d\times n})}\leq \delta \qquad\mbox{for any } h\in\mathbb{N},$$ namely, for any $h\in\mathbb{N}$, ${\delta\over Cr} Du_h\in B_\delta(0)$. This combined with (H3) ensures that $\widetilde{g}^q_{\rm hom}$ is bounded in $B_\delta(0)$. Finally, from [\[eq5\]](#eq5){reference-type="eqref" reference="eq5"} and [\[eq6\]](#eq6){reference-type="eqref" reference="eq6"}, we conclude that $$\notag G''(\Omega, u_h) \leq \left[\max_{\overline{B_\delta(0)}}\widetilde{g}^q_{\rm hom}+ \int_Y g(y,0)dy \right]{\cal L}^n(\Omega).$$ Since $u_h$ converges to $u$ as $h\to+\infty$ and in view of the lower semicontinuity of $G''$, it follows that $$\begin{aligned} G''(\Omega, u)&\leq \limsup_{h\to+\infty} G''(\Omega, u_h)\notag\\ &\leq \left[\max_{\overline{B_\delta(0)}}\widetilde{g}^q_{\rm hom}+ \int_Y g(y,0)dy\right]{\cal L}^n(\Omega),\notag \end{aligned}$$ which concludes the proof. ◻ ## Representation on linear functions and a blow-up condition In this subsection, we aim to show that, for any bounded open subset $\Omega$ of $\mathbb{R}^n$, $G'(\Omega, \cdot) = G''(\Omega, \cdot)$ on the space of linear functions and that it is possible to represent their common value as an integral. The next proposition provides an upper bound for the $\Gamma$-limsup on the space of linear functions; it is the analogue of [@CDA02 Lemma 12.4.1] in the vectorial case. The proof follows the same arguments as in the scalar setting, and for this reason we omit it. **Proposition 23**.
*Let $g: \mathbb R^n \times \mathbb R^{d\times n} \to [0,+\infty]$ satisfy (H1)-(H2), let $\widetilde{g}^q_{\rm hom}$ be defined in [\[ftildeCDA\]](#ftildeCDA){reference-type="eqref" reference="ftildeCDA"} and let $\{G_{\varepsilon}\}_\varepsilon$ be the functionals defined by [\[functCDA\]](#functCDA){reference-type="eqref" reference="functCDA"}, with $G''$ as in [\[deflimsup\]](#deflimsup){reference-type="eqref" reference="deflimsup"}. We have that $$\notag G''(\Omega, u_Z)\leq {\mathcal L}^n(\Omega)(\widetilde{g}^q_{\rm hom})^{\rm ls}(Z),$$ for any $\Omega\in {\cal A}_0$ and $Z\in\mathbb{R}^{d\times n}$.* To prove a similar inequality with $G'$ in place of $G''$, we gather some properties whose proofs are omitted, since they follow the same lines as [@CDA02 Lemmas 12.4.2, 12.4.3 and 12.4.4], respectively, the only difference being that we are now in the vectorial setting. We only observe that, as in [@CDA02 Lemma 12.4.2], the proof of (i) below is a consequence of the convexity of $g(x,\cdot)$ inherited by $G'$ and $G''$ (see [@CDA02 Proposition 3.4,1.]), also in the vectorial case. **Lemma 24**. *Let $g: \mathbb{R}^n\times \mathbb{R}^{d \times n} \to [0, +\infty]$ satisfy (H1) and (H2). Let $\{G_{\varepsilon}\}_\varepsilon$ be the functionals defined by [\[functCDA\]](#functCDA){reference-type="eqref" reference="functCDA"} and let $G'$ be as in [\[defliminf\]](#defliminf){reference-type="eqref" reference="defliminf"}. Then,* - *assuming also [\[H40\]](#H40){reference-type="eqref" reference="H40"}, $$\notag G'(\Omega, tu)\leq t G'(\Omega, u) + (1-t) {\mathcal L}^n(\Omega) \int_Y g(y, 0)dy,$$ for any $\Omega\in {\cal A}_0$, $u\in L^\infty_{{\rm loc}}(\mathbb{R}^n; \hspace{0.03cm} \mathbb{R}^d)$ and for any $t\in [0,1]$.
Similar inequalities hold for $G'', G'_{-}, G''_{-}$ in place of $G'$, with $G''$ defined in [\[deflimsup\]](#deflimsup){reference-type="eqref" reference="deflimsup"}, and the latter two functionals being the inner regular envelopes of the previous ones.* - *$$\notag {1\over r_1^n} G'(x_1+r_1 Y, u_Z)={1\over r_2^n} G'(x_2+r_2 Y, u_Z),$$ for any $x_1, x_2\in\mathbb{R}^n$, $r_1, r_2\in (0, +\infty)$, and $Z\in\mathbb{R}^{d\times n}$.* - *$$\notag \widetilde{g}^q_{\rm hom}(Z) = \inf\left\{\int_Y g(hx, Z+Dv)dx \hspace{0.03cm}: \hspace{0.03cm} v\in W^{1,q}_{\#}(Y; \hspace{0.03cm} \mathbb{R}^d)\cap L^\infty(Y; \hspace{0.03cm} \mathbb{R}^d) \right\},$$ for any $Z\in\mathbb{R}^{d\times n}$ and $h\in\mathbb{N}$, with $\widetilde{g}^q_{{\rm hom}}$ defined by [\[ftildeCDA\]](#ftildeCDA){reference-type="eqref" reference="ftildeCDA"} and $q\in[1,+\infty]$.* The following lemma is the analogue of [@CDA02 Lemma 12.4.5] in the vectorial case. However, the proof requires some technical changes, and for the reader's convenience we present it. **Lemma 25**. *Let $g: \mathbb{R}^n\times \mathbb{R}^{d \times n} \to [0, +\infty]$ satisfy (H1)- (H4). Let $G'$ be the functional defined by [\[defliminf\]](#defliminf){reference-type="eqref" reference="defliminf"} such that $$\label{ass-F'meno12} G'((-1, 2)^n, u_Z)< + \infty,$$ for some $Z\in\mathbb{R}^{d\times n}$.
Then, $$\notag \widetilde{g}^q_{\rm hom}(tZ)<+\infty \qquad\mbox{for all } t\in [0,1).$$* *Proof.* In view of [\[ass-F\'meno12\]](#ass-F'meno12){reference-type="eqref" reference="ass-F'meno12"}, there exist a sequence $\{v_h\}_h\subseteq W^{1,q}_{{\rm loc}}(\mathbb{R}^n; \mathbb{R}^d)\cap L^\infty_{{\rm loc}}(\mathbb{R}^n; \mathbb{R}^d)$ and a strictly increasing sequence $\{h_k\}\subset\mathbb{N}$ such that $v_h$ converges to $u_Z$ in $L^\infty((-1, 2)^n; \mathbb{R}^d)$ and $$\label{ref2} \int_{(-1,2)^n} g\left({x\over \varepsilon_{h_k}}, Dv_{h_k}\right)dx<+\infty\qquad\mbox{for any } k\in\mathbb{N}.$$ For fixed $\eta\in(0,1)$, let $\psi$ be a cut-off function such that $\psi\equiv 1$ in $[0,1]^n$, $0\leq\psi\leq 1$ in $(-\eta, 1+\eta)^n$, $\psi\equiv 0$ in $\mathbb{R}^n\setminus (-\eta, 1+\eta)^n$ and $$\label{propcutofffunction} \|\nabla \psi\|_{L^\infty(\mathbb{R}^n; \mathbb{R}^n)} \leq {C\over {\rm dist}(Y, \partial (-\eta, 1+\eta)^n)}.$$ Note that $\sum_{i\in\mathbb{Z}^n} \psi(x+i)$ is a sum over a finite set of indices and the following inequality $$\notag \sum_{i\in\mathbb{Z}^n} \psi(x+i)\geq 1 \qquad \mbox{for a.e. } x\in\mathbb{R}^n,$$ holds. Set $$\notag \widetilde{\psi}(x) \coloneqq {\psi(x)\over \sum_{j\in\mathbb{Z}^n}\psi(x+j)} \qquad\mbox{for a.e. } x\in\mathbb{R}^n.$$ Note that $\widetilde{\psi}\equiv 0$ in $\mathbb{R}^n \setminus (-\eta, 1+\eta)^n$, $0\leq\widetilde{\psi}\leq 1$ in $\mathbb{R}^n$ and $\sum_{i\in\mathbb{Z}^n}\widetilde{\psi}(x+i)=1$. We define the sequence $\{u_h\}_h$ by $$\notag u_h(x) \coloneqq u_Z(x) + \sum_{i\in\mathbb{Z}^n} (v_h(x+i)-u_Z(x+i)) \widetilde{\psi}(x+i),$$ for a.e. $x\in\mathbb{R}^n$ and every $h\in\mathbb{N}$. We stress that the above sum is extended only to a finite set of indices $i\in\mathbb{Z}^n$ and, as a by-product, $u_h\in W^{1, q}_{{\rm loc}}(\mathbb{R}^n; \mathbb{R}^d)\cap L^\infty_{{\rm loc}} (\mathbb{R}^n; \mathbb{R}^d)$.
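The periodicity of $u_h$ modulo its affine part, which is used in what follows, can be checked directly; as a sketch, for every $e\in\mathbb{Z}^n$, reindexing the sum by $i'=i+e$ gives $$\notag (u_h-u_Z)(x+e) = \sum_{i\in\mathbb{Z}^n} (v_h(x+e+i)-u_Z(x+e+i))\,\widetilde{\psi}(x+e+i) = \sum_{i'\in\mathbb{Z}^n} (v_h(x+i')-u_Z(x+i'))\,\widetilde{\psi}(x+i') = (u_h-u_Z)(x),$$ so that $u_h-u_Z$ is $Y$-periodic.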
In particular, one can show that $u_h - u_Z\in W^{1, q}_{\#}(Y; \mathbb{R}^d)\cap L^\infty (Y; \mathbb{R}^d)$. For fixed $t\in [0,1)$, we prove that there exists $k_t\in \mathbb{N}$ such that $$\label{eq1.4} \int_Y g\left({x\over \varepsilon_{h_{k}} }, tZ+tD(u_{h_{k}}(x) - u_Z(x)) \right)dx<+\infty,$$ for any $k\geq k_t$. To that end, making use of the convexity properties of $g$ as well as of the fact that $\sum_{i\in\mathbb{Z}^n}\widetilde{\psi}(x+i)=1$, we deduce that $$\begin{aligned} &\int_Y g\left({x\over \varepsilon_{h_{k}}}, tZ+tD(u_{h_{k}}-u_Z)\right)dx\notag\\ &=\int_Y g\left({x\over \varepsilon_{h_{k}}}, tZ+tD\left(\sum_{i\in\mathbb{Z}^n}(v_{h_k}(x+i) -u_Z(x+i))\widetilde{\psi}(x+i) \right) \right)dx\notag\\ & =\int_Y g\biggl({x\over \varepsilon_{h_{k}}}, tZ+t\sum_{i\in\mathbb{Z}^n}\widetilde{\psi}(x+i) D(v_{h_k}(x+i)-u_Z(x+i))\notag\\ &\hspace{5cm} + t\sum_{i\in\mathbb{Z}^n}(v_{h_k}(x+i)-u_Z(x+i))\otimes \nabla\widetilde{\psi}(x+i) \biggr)dx\notag\\ &=\int_Y g\left({x\over \varepsilon_{h_{k}}}, t\sum_{i\in\mathbb{Z}^n} \left(\widetilde{\psi}(x+i)Dv_{h_k}(x+i)+ (v_{h_k}(x+i)-u_Z(x+i))\otimes \nabla\widetilde{\psi}(x+i)\right) \right)dx\notag\\ &\leq t\underbrace{\int_Y g\left({x\over \varepsilon_{h_{k}}}, \sum_{i\in\mathbb{Z}^n} \widetilde{\psi}(x+i)Dv_{h_k}(x+i) \right) dx}_{\eqqcolon {\cal I}_1}\notag\\ &\quad + (1-t) \underbrace{\int_Y g\left({x\over \varepsilon_{h_{k}}}, {t\over 1-t} \sum_{i\in\mathbb{Z}^n}(v_{h_k}(x+i)-u_Z(x+i))\otimes \nabla\widetilde{\psi}(x+i) \right)dx}_{\eqqcolon {\cal I}_2},\label{eq1.3} \end{aligned}$$ for any $k\in\mathbb{N}$. Now we estimate ${\cal I}_1$ and ${\cal I}_2$. To that end, set $$\notag I\coloneqq \{i\in\mathbb{Z}^n\hspace{0.03cm}:\hspace{0.03cm} (Y+i)\cap (-\eta, 1+\eta)^n\neq \emptyset \}.$$ Note that $I$ has exactly $3^n$ elements and ${\mathcal L}^n((-1, 2)^n\setminus \bigcup_{i\in I}(Y+i))=0$. Moreover, we have that $$\notag \sum_{i\in I} \widetilde{\psi} (x+i)=1 \quad\mbox{for a.e.
} x\in Y.$$ This implies the finiteness of ${\cal I}_1$. Indeed, by $\rm (H2)$, $$\begin{aligned} {\cal I}_1&=\int_Y g\left({x\over \varepsilon_{h_k}}, \sum_{i\in I} \widetilde{\psi}(x+i)Dv_{h_k}(x+i) \right) dx\notag\\ &\leq \sum_{i\in I} \int_Y \widetilde{\psi}(x+i)g\left({x\over \varepsilon_{h_k}}, Dv_{h_k}(x+i) \right) dx\notag\\ &\leq \sum_{i\in I} \int_Y g\left({x\over \varepsilon_{h_k}}, Dv_{h_k}(x+i) \right) dx\notag\\ &= \sum_{i\in I} \int_{Y+i} g\left({y-i\over \varepsilon_{h_k}}, Dv_{h_k}(y) \right) dy\notag\\ &= \sum_{i\in I} \int_{Y+i} g\left({y\over \varepsilon_{h_k}}, Dv_{h_k}(y) \right) dy\notag\\ &=\int_{(-1,2)^n} g\left({y\over \varepsilon_{h_k}}, Dv_{h_k}(y) \right) dy<+\infty,\label{eq1.1} \end{aligned}$$ where in the last line we made use of [\[ref2\]](#ref2){reference-type="eqref" reference="ref2"}. Now, we estimate ${\cal I}_2$. First, note that $\nabla \widetilde{\psi} (x+i) = 2\lambda^i(x)\nabla\psi(x+i) - 2 \mu^i(x)\sum_{j\in\mathbb{Z}^n}\nabla \psi(x+i+j)$, where $\lambda^i$ and $\mu^i$ are defined respectively by $$\notag \lambda^i(x)\coloneqq {1\over 2\sum_{j\in\mathbb{Z}^n} \psi(x+i+j)}, \qquad \mu^i(x)\coloneqq {\psi(x+i)\over 2\left(\sum_{j\in\mathbb{Z}^n} \psi(x+i+j)\right)^2}.$$ It turns out that $0\leq\lambda^i(x)\leq {1\over 2}$ and $0\leq\mu^i(x)\leq {1\over 2}$ for a.e. $x\in Y$, for any $i\in I$.
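For the reader's convenience, we record how the identity for $\nabla\widetilde{\psi}$ and the splitting used in the next estimate can be verified (a sketch; the auxiliary notation $S$, $B_1$, $B_2$ is introduced only for this computation). Writing $S(z)\coloneqq \sum_{j\in\mathbb{Z}^n}\psi(z+j)$, the quotient rule gives $$\notag \nabla\widetilde{\psi}(x+i) = {\nabla\psi(x+i)\over S(x+i)} - {\psi(x+i)\over S(x+i)^2}\sum_{j\in\mathbb{Z}^n}\nabla\psi(x+i+j) = 2\lambda^i(x)\nabla\psi(x+i) - 2\mu^i(x)\sum_{j\in\mathbb{Z}^n}\nabla\psi(x+i+j),$$ and the bounds on $\lambda^i$ and $\mu^i$ follow from $S\geq 1$ and $0\leq\psi\leq 1$. Consequently, setting $B_1\coloneqq 2\cdot 3^n{t\over 1-t}(v_{h_k}(x+i)-u_Z(x+i))\otimes\nabla\psi(x+i)$ and $B_2\coloneqq -2\cdot 3^n{t\over 1-t}(v_{h_k}(x+i)-u_Z(x+i))\otimes\sum_{j\in\mathbb{Z}^n}\nabla\psi(x+i+j)$, we may write $$\notag 3^n{t\over 1-t}(v_{h_k}(x+i)-u_Z(x+i))\otimes\nabla\widetilde{\psi}(x+i) = \lambda^i(x)B_1+\mu^i(x)B_2+(1-\lambda^i(x)-\mu^i(x))\,0,$$ so that the convexity of $g({x\over\varepsilon_{h_k}},\cdot)$, together with $g\geq 0$ and $\lambda^i(x)+\mu^i(x)\leq 1$, bounds $g({x\over\varepsilon_{h_k}}, \lambda^i(x)B_1+\mu^i(x)B_2)$ by $g({x\over\varepsilon_{h_k}}, B_1)+g({x\over\varepsilon_{h_k}}, B_2)+g({x\over\varepsilon_{h_k}}, 0)$.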
Therefore, the convexity of $g$ leads us to $$\begin{aligned} {\cal I}_2&\leq \sum_{i\in I}{1\over 3^n}\int_Y g\left({x\over\varepsilon_{h_k}}, 3^n{ t\over 1-t} (v_{h_k}(x+i)-u_Z(x+i))\otimes \nabla\widetilde{\psi}(x+i) \right)dx\notag\\ &\leq \sum_{i\in I} \int_Y g\left({x\over\varepsilon_{h_k}}, 3^n{t\over 1-t} (v_{h_k}(x+i)-u_Z(x+i))\otimes2 \nabla \psi(x+i)\right) dx\notag\\ &\quad + \sum_{i\in I} \int_Y g\left({x\over\varepsilon_{h_k}}, -3^n{ t\over 1-t} 2(v_{h_k}(x+i)-u_Z(x+i))\otimes \sum_{j\in\mathbb{Z}^n}\nabla\psi(x+i+j)\right) dx\notag\\ &\quad + \sum_{i\in I} \int_Y g\left({x\over\varepsilon_{h_k}}, 0\right)dx.\label{eq1.0} \end{aligned}$$ Now, we consider the first integral in the right-hand side of [\[eq1.0\]](#eq1.0){reference-type="eqref" reference="eq1.0"}.
The periodicity of $g$ yields $$\begin{aligned} \int_Y & g\left({x\over\varepsilon_{h_k}}, 3^n {t\over 1-t} (v_{h_k}(x+i)-u_Z(x+i))\otimes2 \nabla \psi(x+i)\right) dx\notag\\ & =\int_{Y+i}g\left({y\over\varepsilon_{h_k}}, 3^n{ t\over 1-t} (v_{h_k}(y)-u_Z(y))\otimes2 \nabla \psi(y)\right) dy.\notag \end{aligned}$$ Since $Y+i\subset (-1, 2)^n$, using the fact that $v_h\to u_Z$ in $L^\infty((-1, 2)^n; \hspace{0.03cm}\mathbb{R}^d)$ as $h\to+\infty$ and estimate [\[propcutofffunction\]](#propcutofffunction){reference-type="eqref" reference="propcutofffunction"}, for any $t\in(0,1)$ there exists $k_{t}\in\mathbb{N}$ such that $$\notag 3^n2 {t\over 1-t}(v_{h_k}-u_Z)\otimes \nabla \psi\in B_\delta (0) \qquad\mbox{for } k\geq k_{ t}.$$ By [\[hpL\]](#hpL){reference-type="eqref" reference="hpL"}, we may write $3^n2 {t\over 1-t}(v_{h_k}-u_Z)\otimes\nabla \psi$ as a convex combination of the vertices $\{A_l\}_{l=1}^{dn}$ of the cube $Q$, i.e., for every $h_k \in \mathbb N$, there exist functions $s^{h_k}_1,\dots, s^{h_k}_{dn}: \Omega \to (0,1)$ such that $$\notag 3^n2 {t\over 1-t} (v_{h_k}-u_Z)(x)\otimes\nabla \psi(x) = \sum_{l=1}^{dn} s^{h_k}_l(x) A_l.$$ Thus, in view of the periodicity of $g$ in the first variable and [\[hpL\]](#hpL){reference-type="eqref" reference="hpL"}, $$\begin{aligned} \int_{Y+i}g\left({y\over\varepsilon_{h_k}}, 3^n{ t\over 1-t} (v_{h_k}(y)-u_Z(y))\otimes2\nabla \psi(y)\right) dy &\leq \sum_{l=1}^{dn} \int_{Y+i} g\left({y\over\varepsilon_{h_k}}, A_l\right)dy<+\infty,\notag \end{aligned}$$ for any $k\geq k_t$, concluding that the first integral in the right-hand side of [\[eq1.0\]](#eq1.0){reference-type="eqref" reference="eq1.0"} is finite.\ Using similar arguments (see [@CDA02 proof of Lemma 12.4.5]), and exploiting the periodicity and convexity of $g$ in (H2), together with the convergence of $v_h$ towards $u_Z$ and [\[hpL\]](#hpL){reference-type="eqref" reference="hpL"}, it is possible to show that also the second integral in the right-hand side
of [\[eq1.0\]](#eq1.0){reference-type="eqref" reference="eq1.0"} is finite, so that from [\[eq1.0\]](#eq1.0){reference-type="eqref" reference="eq1.0"}, it follows that $$\begin{aligned} {\cal I}_2&\leq \sum_{i\in I} \left[2\sum_{l=1}^{dn}\int_{Y+i} g\left({y\over \varepsilon_{h_k}}, A_l \right)dy + \int_Y g\left({y\over \varepsilon_{h_k}}, 0 \right)dy\right]<+\infty,\label{eq1.2} \end{aligned}$$ for any $k\geq k_t$. Gathering [\[eq1.1\]](#eq1.1){reference-type="eqref" reference="eq1.1"} and [\[eq1.2\]](#eq1.2){reference-type="eqref" reference="eq1.2"}, from [\[eq1.3\]](#eq1.3){reference-type="eqref" reference="eq1.3"}, we obtain [\[eq1.4\]](#eq1.4){reference-type="eqref" reference="eq1.4"}.\ Using Lemma [Lemma 24](#Lemma 12.4.2, 12.4.3, 12.4.4){reference-type="ref" reference="Lemma 12.4.2, 12.4.3, 12.4.4"} (iii) combined with the fact that $u_{h_k}-u_Z\in W^{1,q}_{\#}(Y; \mathbb{R}^d)\cap L^{\infty}(Y; \mathbb{R}^d)$ and [\[eq1.4\]](#eq1.4){reference-type="eqref" reference="eq1.4"}, we conclude that $$\begin{aligned} \widetilde{g}^q_{\rm hom}(tZ)&=\inf\left\{\int_Y g\left({x\over \varepsilon_{h}}, tZ+Dv\right)dx \hspace{0.03cm}: \hspace{0.03cm} v\in W^{1,q}_{\#}(Y; \hspace{0.03cm} \mathbb{R}^d)\cap L^\infty(Y; \hspace{0.03cm} \mathbb{R}^d) \right\}\notag\\ &\leq \int_{Y} g\left({x\over \varepsilon_{h_k}}, tZ+tD(u_{h_k}(x)-u_Z(x))\right)dx<+\infty,\notag \end{aligned}$$ as desired. ◻ The next proposition shows an inequality for $G'$ similar to the one proved in Proposition [Proposition 23](#Lemma 12.4.1){reference-type="ref" reference="Lemma 12.4.1"}. **Proposition 26**. *Let $g: \mathbb{R}^n\times \mathbb{R}^{d \times n} \to [0, +\infty]$ satisfy (H1)-(H4). Let $\widetilde{g}^q_{\rm hom}$ be defined by [\[ftildeCDA\]](#ftildeCDA){reference-type="eqref" reference="ftildeCDA"} and $q\in[1, +\infty]$.
Then, $$\notag \mathcal L^n(\Omega) (\widetilde{g}^q_{\rm hom})^{\rm ls} (Z) \leq G'(\Omega, u_Z),$$ for any open and bounded subset $\Omega\subseteq \mathbb R^n$ and $Z\in\mathbb{R}^{d\times n}$.* We skip the proof of Proposition [Proposition 26](#Proposition 12.4.6){reference-type="ref" reference="Proposition 12.4.6"}, since it follows from Lemmas [Lemma 24](#Lemma 12.4.2, 12.4.3, 12.4.4){reference-type="ref" reference="Lemma 12.4.2, 12.4.3, 12.4.4"} and [Lemma 25](#Lemma 12.4.5){reference-type="ref" reference="Lemma 12.4.5"}, arguing as in [@CDA02 Proposition 12.4.6], recalling that [\[H40\]](#H40){reference-type="eqref" reference="H40"}, crucial in Lemma [Lemma 24](#Lemma 12.4.2, 12.4.3, 12.4.4){reference-type="ref" reference="Lemma 12.4.2, 12.4.3, 12.4.4"}, is a consequence of [\[hpL\]](#hpL){reference-type="eqref" reference="hpL"} in $(H4)$, see Remark [\[remark:existenceofcube\]](#remark:existenceofcube){reference-type="ref" reference="remark:existenceofcube"}. We would only like to point out that the set appearing in [\[ass-F\'meno12\]](#ass-F'meno12){reference-type="eqref" reference="ass-F'meno12"} may seem unusual at first glance; on the other hand, it serves as an intermediate step to prove Proposition [Proposition 26](#Proposition 12.4.6){reference-type="ref" reference="Proposition 12.4.6"} first in the particular case $\Omega = Y$; in this case, [\[ass-F\'meno12\]](#ass-F'meno12){reference-type="eqref" reference="ass-F'meno12"} turns out to be a consequence of Lemma [Lemma 24](#Lemma 12.4.2, 12.4.3, 12.4.4){reference-type="ref" reference="Lemma 12.4.2, 12.4.3, 12.4.4"} item (ii).\ Finally, we observe that the values of $G'(\Omega, \cdot)$ and $G''(\Omega, \cdot)$ coincide on the space of linear functions, as stated in the next result.
The proof is omitted, being entirely similar to the one of [@CDA02 Proposition 12.4.7] and relying on Propositions [Proposition 26](#Proposition 12.4.6){reference-type="ref" reference="Proposition 12.4.6"} and [Proposition 23](#Lemma 12.4.1){reference-type="ref" reference="Lemma 12.4.1"}, taking into account the properties of inner regular envelopes of measures (see [@DM93 Chapter 15]). **Proposition 27**. *Under the same assumptions of Proposition [Proposition 26](#Proposition 12.4.6){reference-type="ref" reference="Proposition 12.4.6"}, it holds that $$\notag G'(\Omega, u_Z) = G''(\Omega, u_Z) = G'_{-}(\Omega, u_Z) = G''_{-}(\Omega, u_Z)= {\mathcal L}^n(\Omega) (\widetilde{g}^q_{\rm hom})^{\rm ls} (Z),$$ for any $\Omega\in{\cal A}_0$ and $Z\in\mathbb{R}^{d\times n}$.* The next proposition states that the inner regular envelope of the functional $G'$ given by [\[defliminf\]](#defliminf){reference-type="eqref" reference="defliminf"} satisfies a blow-up condition. It generalizes [@CDA02 Proposition 12.5.2] to the vectorial setting, and the proof, following the same lines as [@CDA02 Proposition 12.5.2], is omitted. **Proposition 28**. *Let $g: \mathbb{R}^n\times \mathbb{R}^{d \times n} \to [0, +\infty]$ satisfy (H1) and (H2), and let $q\in [1,+\infty]$. Let $G'_{-}$ be the inner regular envelope of the functional defined by [\[defliminf\]](#defliminf){reference-type="eqref" reference="defliminf"}. Then, $$\notag \limsup_{r\to 0} {1\over r^n} G'_{-}(Q_r(x_0), u) \geq G'_{-} (Q_1(x_0), u(x_0)+ Du(x_0)\cdot (\cdot-x_0)),$$ for a.e. $x_0\in\mathbb{R}^n$ and for any $u\in W^{1,\infty}_{\rm loc}(\mathbb{R}^n; \mathbb{R}^d)$.* ## Representation on Sobolev spaces The proof of Theorem [Theorem 19](#Proposition 12.6.1){reference-type="ref" reference="Proposition 12.6.1"} strongly relies on an integral representation result for unbounded functionals on Sobolev spaces.
The aim of this subsection is to recall an abstract result which provides sufficient conditions to represent an unbounded functional $G$, depending on a bounded open subset $\Omega$ of $\mathbb R^n$ and a function $u\in W^{1,p}_{\rm loc}(\mathbb{R}^n; \mathbb{R}^d)$, for $p\in[1,+\infty]$, as an integral. We state a generalization of [@CDA02 Theorem 9.3.8] in the vectorial setting. Since its proof follows the same arguments, we skip it. We start by providing the definitions of weak superadditivity and weak subadditivity (see [@CDA02 Definition 2.6.5]). **Definition 29**. *Let $\mathcal{O} \subseteq \mathcal{A}(\mathbb{R}^n)$ and $\alpha: \mathcal{O} \rightarrow [0, + \infty].$ We say that $\alpha$ is:* - *weakly superadditive if $$\alpha (\Omega_1) + \alpha(\Omega_2) \le \, \alpha(\Omega),$$ for every $\Omega_1, \Omega_2, \Omega \in \mathcal{O}$ with $\Omega_1 \cap \Omega_2 = \emptyset,$ $\Omega_1 \cup \Omega_2 \subset \subset \Omega,$* - *weakly subadditive if $$\alpha(\Omega) \le \alpha (\Omega_1) + \alpha(\Omega_2),$$ for every $\Omega, \Omega_1, \Omega_2 \in \mathcal{O}$ with $\Omega_1 \cap \Omega_2 = \emptyset,$ $\Omega \subset \subset \Omega_1 \cup \Omega_2$.* For $p\in[1,+\infty]$, let $G$ be the functional defined by $$\label{(9.3.2)CDA} G: (\Omega, u) \in \mathcal{A}_0 \times W^{1,p}_{\rm loc}(\mathbb{R}^n; \mathbb{R}^d) \mapsto G(\Omega, u) \in [0, + \infty],$$ and let $g_G$ be the function given by $$\label{(9.1.6)CDA} g_G: Z \in \mathbb{R}^{d\times n} \mapsto G(Y, u_Z) \in [0, + \infty],$$ where $u_Z(x)\coloneqq Zx$, as above. 
We introduce the following set of assumptions: - invariance property for linear functions $$\notag%\label{(9.2.5)CDA} G(\Omega, u_Z+c) = G(\Omega, u_Z),$$ for any $\Omega\in{\cal A}_0$, $Z\in\mathbb{R}^{d\times n}$, and $c\in\mathbb{R}^d$; - translation invariance $$\notag%\label{(9.3.31)CDA} G(\Omega-x_0, T[x_0]u_Z) = G(\Omega, u_Z),$$ for any $\Omega\in{\cal A}_0$, $Z\in\mathbb{R}^{d\times n}$ and $x_0\in\mathbb{R}^n$, where $T[x_0]u$ is the translate of $u$ defined by $T[x_0]u(x)\coloneqq u(x-x_0)$ (see Subsection [2](#not&pre){reference-type="ref" reference="not&pre"}); - for every $u \in W^{1,\infty}_{\rm loc}(\mathbb{R}^n; \mathbb{R}^d),$ $G(\cdot, u)$ is increasing, weakly superadditive and weakly subadditive; - the blow-up condition holds $$\notag%\label{(9.3.33)CDA} \limsup_{r \to 0^+} \frac{1}{r^n} G(Q_r(x_0),u) \geq G(Q_1(x_0), u(x_0) + Du(x_0)\cdot (\cdot-x_0)),$$ for every $u \in W^{1,\infty}_{\rm loc}(\mathbb R^n;\mathbb R^d)$ and a.e. $x_0 \in \mathbb R^n$; - for any $\Omega\in{\cal A}_0$, $G(\Omega, \cdot)$ is $W^{1,p}(\Omega;\mathbb R^d)$-lower semicontinuous if $p \in [1,+\infty)$, and $\cap _{q \in [1,+\infty)} W^{1,q}(\Omega;\mathbb R^d)$-lower semicontinuous if $p =+\infty$; - for every $u \in W^{1,p}(\mathbb R^n;\mathbb R^d)$, $G(\cdot, u)$ is inner regular; - there exist $z_0 \in {\rm dom} g_G$, $r_0 > 0$ and a positive Radon measure $\mu$ on $\mathbb{R}^n$ such that $G(\Omega, u) \le \mu(\Omega)$ whenever $\Omega \in \mathcal{A}_0, u \in PA(\mathbb{R}^n; \mathbb{R}^d)$ with $Du(x) \in {\rm dom} g_G$ for a.e. $x \in \Omega$ and $\|u - u_{z_0}\|_{W^{1, \infty}(\Omega; \mathbb{R}^d)} < r_0$; - for every $\Omega \in \mathcal{A}_0,$ $G(\Omega, \cdot)$ is convex; - there exist $z_0 \in \mathbb R^{d\times n}$ and $\delta>0$ such that $B_\delta(z_0) \subseteq {\rm dom} g_G$. 
The next result shows that under the assumptions listed above, the unbounded functional $G$ may be represented as an integral whose energy density is given by $g_G$. **Theorem 30**. *Let $p \in (1, + \infty]$. Let $G$ be a functional as in [\[(9.3.2)CDA\]](#(9.3.2)CDA){reference-type="eqref" reference="(9.3.2)CDA"}, satisfying assumptions (T1)-(T9), and let $g_G$ be given by [\[(9.1.6)CDA\]](#(9.1.6)CDA){reference-type="eqref" reference="(9.1.6)CDA"}. Then, $g_G$ turns out to be convex and lower semicontinuous and the following representation $$\notag G(\Omega, u) =\int_{\Omega} g_G(Du)dx$$ holds for any $\Omega\in{\cal A}_0$ and $u\in W^{1,p}_{{\rm loc}}(\mathbb{R}^n; \mathbb{R}^d)$.* *Proof.* It follows along the lines of [@CDA02 Theorems 9.3.7 and 9.3.8], relying first on suitable estimates on $C^1(\mathbb R^n;\mathbb R^d)$ fields as in [@CDA02 eq (9.2.1) and (9.2.2)], then on a standard density argument in $W^{1,q}(\Omega;\mathbb R^d)$ for any $q \in [1,+\infty)$ and on the lower semicontinuity of $G$ in the sense of ${\rm (T5)}$. ◻ ## The proof of Theorem [Theorem 19](#Proposition 12.6.1){reference-type="ref" reference="Proposition 12.6.1"} {#the-proof-of-theorem-proposition-12.6.1} In this subsection, we provide the proof of Theorem [Theorem 19](#Proposition 12.6.1){reference-type="ref" reference="Proposition 12.6.1"} which relies on all the results presented in the previous subsections. 
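The defining relation $g_G(Z) = G(Y, u_Z)$ behind Theorem 30 can be sanity-checked numerically on a toy model. The sketch below is an illustration only (a Dirichlet-energy functional chosen by us, not one of the unbounded functionals studied in the paper): for $G(\Omega, u)=\int_\Omega \lvert Du\rvert^2\,dx$ with $d=n=2$, a finite-difference evaluation on the unit cube $Y$ recovers the expected density $g_G(Z)=\lvert Z\rvert^2$.

```python
import numpy as np

# Toy model (an assumption for illustration): G(Omega, u) = integral of |Du|^2.
# By definition g_G(Z) = G(Y, u_Z) with u_Z(x) = Z x and Y the unit cube,
# so here g_G(Z) should equal the squared Frobenius norm |Z|^2.
def dirichlet_energy(u, h):
    """Finite-difference Dirichlet energy of a scalar field u on a uniform grid."""
    ux = np.diff(u, axis=0) / h
    uy = np.diff(u, axis=1) / h
    return np.sum(ux[:, :-1] ** 2 + uy[:-1, :] ** 2) * h**2

Z = np.array([[1.0, 2.0], [0.5, -1.0]])   # a d x n matrix, d = n = 2
N = 200
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
X, Y = np.meshgrid(x, x, indexing="ij")
# u_Z is vector valued; sum the energies of its two components Z[i,0]*x + Z[i,1]*y.
energy = sum(dirichlet_energy(Z[i, 0] * X + Z[i, 1] * Y, h) for i in range(2))
print(energy, np.sum(Z**2))   # the two values agree (exactly, for linear data)
```

Since $u_Z$ is affine, the finite differences are exact, so the computed energy matches $\lvert Z\rvert^2$ up to floating-point error.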
*Proof of Theorem [Theorem 19](#Proposition 12.6.1){reference-type="ref" reference="Proposition 12.6.1"}.* Let $\{\varepsilon_k\}_{k}$ be a strictly decreasing sequence of positive numbers converging to $0$; then [@CDA02 Proposition 3.4.3] ensures that there exists a further subsequence $\{\varepsilon_{k_j}\}_{j}$ such that $$\begin{aligned} \notag &\sup\left\{\Gamma(L^\infty(A;\mathbb{R}^d))\mbox{-}\liminf_{j\to+\infty} G_{\varepsilon_{k_j}}(A, u) : A\subset\subset\Omega \right\}\notag\\ &\quad = \sup\left\{\Gamma(L^\infty(A;\mathbb{R}^d))\mbox{-}\limsup_{j\to+\infty} G_{\varepsilon_{k_j}}(A, u) : A\subset\subset\Omega \right\}, \label{defmainthm} \end{aligned}$$ for any $\Omega\in{\cal A}_0$ and $u\in L^\infty_{\rm loc}(\mathbb{R}^n; \mathbb{R}^d)$. Fix $p\in (n, +\infty]$, so that the following embeddings $W^{1,p}_{\rm loc}(\mathbb{R}^n;\mathbb{R}^d)\subseteq C(\mathbb{R}^n;\mathbb{R}^d) \subseteq L^\infty_{\rm loc}(\mathbb{R}^n;\mathbb{R}^d)$ hold. For any $\Omega\in{\cal A}_0$, we define the functional $H(\Omega, \cdot):W^{1,p}_{\rm loc}(\mathbb{R}^n;\mathbb{R}^d)\to [0,+\infty]$ as the common value of [\[defmainthm\]](#defmainthm){reference-type="eqref" reference="defmainthm"}. Now, we show that the functional $H$ given by [\[defmainthm\]](#defmainthm){reference-type="eqref" reference="defmainthm"} satisfies assumptions (T1)-(T9). Indeed, (T1) is a consequence of the fact that the functionals $G_{\varepsilon_{k_j}}$, as well as $H$, depend only on the gradient of the function $u$.\ (T2) follows from a generalization of [@CDA02 Proposition 12.1.1] to the vectorial case, obtained with identical arguments. Proposition [Proposition 20](#Proposition 12.2.1){reference-type="ref" reference="Proposition 12.2.1"}, combined with the fact that, for any $u\in L^\infty_{\rm loc}(\mathbb{R}^n; \mathbb{R}^d)$, the functionals $G'$ and $G''$ are increasing set functions, implies (T3). 
To show (T4), first note that by definition of $H$ and standard results concerning $\Gamma$-limits and their inner regular envelopes, we deduce that $$\begin{aligned} &\limsup_{r\to 0} {\frac{1}{r^n}} H(Q_r(x_0), u)\notag\\ &\quad = \limsup_{r\to 0} {1\over r^n} \sup\left\{ \Gamma(L^\infty(A; \mathbb{R}^d))\mbox{-}\limsup_{j\to+\infty} G_{\varepsilon_{k_j}}(A, u) : A\subset\subset Q_r(x_0) \right\}\notag\\ &\quad = \limsup_{r\to 0} {1\over r^n} \sup\left\{ \Gamma(L^\infty(A; \mathbb{R}^d))\mbox{-}\liminf_{j\to+\infty} G_{\varepsilon_{k_j}}(A, u) : A\subset\subset Q_r(x_0) \right\}\notag\\ &\quad \geq \limsup_{r\to 0} {1\over r^n} \sup\left\{ \Gamma(L^\infty(A; \mathbb{R}^d))\mbox{-}\liminf_{\varepsilon\to 0} G_{\varepsilon}(A, u) : A\subset\subset Q_r(x_0) \right\}\notag\\ &\quad\geq \limsup_{r\to 0} {1\over r^n} G'_- (Q_r(x_0), u)\notag\\ &\quad\geq G'_-(Q_1(x_0), u(x_0)+ Du(x_0)\cdot(\cdot-x_0)), \label{eq2mainthm} \end{aligned}$$ where in the last inequality we have used Proposition [Proposition 28](#Prop 12.5.2){reference-type="ref" reference="Prop 12.5.2"}. 
Since for any $\Omega\in {\cal A}_0$, $G'(\Omega, u+c) = G'(\Omega, u)$ and $G''(\Omega, u+c) = G''(\Omega, u)$ for any $u\in L^\infty_{\rm loc}(\mathbb{R}^n; \mathbb{R}^d)$ and $c\in\mathbb{R}^d$, an application of Proposition [Proposition 27](#Prop 12.4.7){reference-type="ref" reference="Prop 12.4.7"} yields $$\begin{aligned} G'_-(Q_1(x_0), u(x_0)+ Du(x_0)\cdot(\cdot-x_0)) &= G'_- (Q_1(x_0), Du(x_0)\cdot(\cdot-x_0))\notag\\ &= G''_- (Q_1(x_0), Du(x_0)\cdot(\cdot-x_0))\notag\\ &= G''_- (Q_1(x_0), u(x_0)+ Du(x_0)\cdot(\cdot-x_0)).\notag \end{aligned}$$ In particular, this implies that the inner regular envelope of the $\Gamma$-limit $G$ exists, i.e., $$\begin{aligned} &G_- (Q_1(x_0), u(x_0)+ Du(x_0)\cdot(\cdot-x_0))= G'_- (Q_1(x_0), u(x_0)+ Du(x_0)\cdot(\cdot-x_0))\notag\\ &\quad = G''_-(Q_1(x_0), u(x_0)+ Du(x_0)\cdot(\cdot-x_0)).\notag \end{aligned}$$ Hence, using once again the properties of $\Gamma$-upper limits, from [\[eq2mainthm\]](#eq2mainthm){reference-type="eqref" reference="eq2mainthm"}, we get $$\begin{aligned} &\limsup_{r\to 0} {1\over r^n} H(Q_r(x_0), u)\notag\\ &\quad\geq G_- (Q_1(x_0), u(x_0)+ Du(x_0)\cdot(\cdot-x_0))\notag\\ &\quad=\sup\left\{ \Gamma(L^\infty(A; \mathbb{R}^d))\mbox{-}\limsup_{\varepsilon\to 0} G_{\varepsilon}(A, u(x_0)+ Du(x_0)\cdot(\cdot-x_0)) : A\subset\subset Q_1(x_0) \right\}\notag\\ &\quad\geq\sup\left\{ \Gamma(L^\infty(A; \mathbb{R}^d))\mbox{-}\limsup_{j\to +\infty} G_{\varepsilon_{k_j}}(A, u(x_0)+ Du(x_0)\cdot(\cdot-x_0)) : A\subset\subset Q_1(x_0) \right\}\notag\\ &\quad= H(Q_1(x_0), u(x_0)+ Du(x_0)\cdot(\cdot-x_0)),\notag \end{aligned}$$ which proves (T4).\ Assumption (T5) holds too. 
Indeed, given $\Omega\in {\cal A}_0$ and an open set with Lipschitz boundary $A$ such that $A\subset\subset \Omega$, it is known that $\Gamma(L^\infty(A; \mathbb{R}^d))\mbox{-}\liminf_{j\to +\infty} G_{\varepsilon_{k_j}}(A, \cdot)$ and $\Gamma(L^\infty(A; \mathbb{R}^d))\mbox{-}\limsup_{j\to +\infty} G_{\varepsilon_{k_j}}(A, \cdot)$ are $W^{1,p}(\Omega; \mathbb{R}^d)$ (or $\cap_{q\in[1, +\infty)}W^{1,q}(\Omega; \mathbb{R}^d)$)-lower semicontinuous in $W^{1,p}_{\rm loc}(\mathbb{R}^n; \mathbb{R}^d)$. Therefore, $H(\Omega, \cdot)$ is also lower semicontinuous, since it is the least upper bound of the family of such functionals obtained by letting $A$ vary with the above properties.\ Since, by definition, $H$ is a supremum of inner regular functionals, it follows that (T6) is also satisfied. To prove (T7), observe that, thanks to Proposition [Proposition 27](#Prop 12.4.7){reference-type="ref" reference="Prop 12.4.7"}, we deduce that $g_G$ given by Theorem [Theorem 30](#teo9.3.8CDA){reference-type="ref" reference="teo9.3.8CDA"} agrees with $(\widetilde{g}^q_{\rm hom})^{\rm ls}$. Moreover, due to (H3), there exists $\delta\in(0,1)$ such that $B_{2\delta}(0)\subseteq {\rm dom} \widetilde{g}^q_{\rm hom}\subseteq {\rm dom}\left(( \widetilde{g}^q_{\rm hom})^{\rm ls}\right)$. Hence, choosing $z_0=0$ in (T7) and exploiting the properties of $\Gamma$-limits, as well as Proposition [Proposition 22](#Prop 12.3.2){reference-type="ref" reference="Prop 12.3.2"}, we conclude that $$\begin{aligned} H(\Omega, u)&\leq \sup\left\{ \Gamma(L^\infty(A; \mathbb{R}^d))\mbox{-}\limsup_{\varepsilon\to 0} G_\varepsilon(A, u) : A\subset\subset\Omega \right\}\notag\\ &=\sup\left\{ G''(A, u) : A\subset\subset\Omega \right\}\notag\\ &\leq c \mathcal L^n(\Omega),\notag \end{aligned}$$ yielding the validity of (T7). 
(T8) follows from the fact that, for any $q\in[1,+\infty]$, $\widetilde{g}^q_{\rm hom}$ turns out to be a convex function.\ Moreover, the same considerations, based on the convexity of $g(x, \cdot)$ and the properties inherited by $\widetilde g_{\rm hom}$ proved in Proposition [Proposition 18](#prop:Proposition 12.1.3){reference-type="ref" reference="prop:Proposition 12.1.3"}, guarantee that $\widetilde g_{\rm hom}$ also satisfies (T9). Since the functional $H$ given by [\[defmainthm\]](#defmainthm){reference-type="eqref" reference="defmainthm"} satisfies all assumptions $(T1)$-$(T9)$, we can apply Theorem [Theorem 30](#teo9.3.8CDA){reference-type="ref" reference="teo9.3.8CDA"} so that $$\label{intrepmainthm} H(\Omega, u) =\int_\Omega (\widetilde{g}^q_{\rm hom})^{\rm ls} (Du)dx,$$ for any $\Omega\in{\cal A}_0$ and $u\in W^{1,p}_{\rm loc}(\mathbb{R}^n; \mathbb{R}^d)$, where we have used the fact that, due to Proposition [Proposition 27](#Prop 12.4.7){reference-type="ref" reference="Prop 12.4.7"}, $g_G$ of Theorem [Theorem 30](#teo9.3.8CDA){reference-type="ref" reference="teo9.3.8CDA"} agrees with $( \widetilde{g}^q_{\rm hom})^{\rm ls}$. Since the representation [\[intrepmainthm\]](#intrepmainthm){reference-type="eqref" reference="intrepmainthm"} holds for any subsequence $\{\varepsilon_{k_j}\}_j$ of $\{\varepsilon_k\}_k$, by Urysohn's property, we conclude that $$\notag G'_{-}(\Omega, u) = G''_{-}(\Omega, u) = \int_\Omega (\widetilde{g}^q_{\rm hom})^{\rm ls} (Du)dx,$$ for any $\Omega\in{\cal A}_0$ and $u\in W^{1,p}_{\rm loc}(\mathbb{R}^n; \mathbb{R}^d)$, as desired. The conclusion is achieved by repeating word for word the proof of [@CDA02 Proposition 2.7.4], exploiting the convexity of $\Omega$. ◻ **Acknowledgements** L. D. acknowledges the support of the Austrian Science Fund (FWF) through grants F65, V662, Y1292. M. E. has been partially supported by PRIN 2020 "Mathematics for industry 4.0 (Math4I4)" (coordinator P. Ciarletta). E. Z. 
acknowledges support received through Sapienza Progetti d'Ateneo 2022 piccoli "Asymptotic Analysis for composites, fractured materials and with defects", and through PRIN 2022 "Mathematical Modelling of Heterogeneous Systems" (coordinator E. N. M. Cirillo). The authors are members of INdAM-GNAMPA, whose support is gratefully acknowledged through GNAMPA Project 2023 "Prospettive nelle scienze dei materiali: modelli variazionali, analisi asintotica ed omogeneizzazione". E. Acerbi, G. Buttazzo, F. Prinari. The class of functionals which can be represented by a supremum. *J. Convex Anal.*, **9** (2002), pp. 225--236. O. Anza Hafsa. On the integral representation of relaxed functionals with convex bounded constraints. *ESAIM Control Optim. Calc. Var.*, **16**, (2010), pp. 37--57. O. Anza Hafsa, N. Clozeau, J.-P. Mandallena. Homogenization of nonconvex unbounded singular integrals. *Ann. Math. Blaise Pascal*, **24**, (2017), pp. 135--193. O. Anza Hafsa, J.-P. Mandallena. Homogenization of unbounded singular integrals in $W^{1,\infty}$. *Ric. Mat.*, **61**, (2012), pp. 185--217. O. Anza Hafsa, J.-P. Mandallena. Integral representation of unbounded variational functionals on Sobolev spaces. *Ric. Mat.*, **72**, (2023), pp. 193--234. G. Aronsson. Minimization problems for the functional $\sup_x F(x,f(x),f'(x))$, I. *Ark. Mat.*, **6**, (1965), pp. 33--53. G. Aronsson. Minimization problems for the functional $\sup_x F(x,f(x),f'(x))$, II. *Ark. Mat.*, **6**, (1966), pp. 409--431. G. Aronsson. Extension of functions satisfying Lipschitz conditions. *Ark. Mat.*, **6**, (1967), pp. 551--561. G. Aronsson. On the partial differential equation $u^2_xu_{xx}+2u_xu_yu_{xy}+u^2_yu_{yy}=0$. *Ark. Mat.*, **7**, (1968), pp. 395--425. B. Ayanbayev, N. Katzourakis. Vectorial variational principles in $L^\infty$ and their characterisation through PDE systems. *Appl. Math. Optim.*, **83**, (2021), pp. 833--848. J. F. Babadjian, F. Prinari, E. Zappale. 
Dimensional reduction for supremal functionals. *Discrete Contin. Dyn. Syst., Series A*, **32**, (2012), pp. 1503--1535. E. N. Barron, R. Jensen, C. Y. Wang. Lower semicontinuity of $L^\infty$ functionals. *Ann. Inst. H. Poincaré Anal. Non Linéaire*, **18**, (2004), pp. 495--517. A. Braides, V. Chiadò Piat, L. D'Elia. An extension theorem from connected sets and homogenization of non-local functionals. *Nonlinear Anal.*, **208**, (2021), 112316. A. Braides, A. Defranceschi. *Homogenization of Multiple Integrals.* Oxford University Press, Oxford, 1998. A. Braides, I. Fonseca, G. Leoni. ${\cal A}$-quasiconvexity: relaxation and homogenization. *ESAIM Control Optim. Calc. Var.*, **5**, (2000), pp. 539--577. H. Brezis. *Functional Analysis, Sobolev Spaces and Partial Differential Equations.* Universitext series, Springer, New York, 2010. A. Briani, A. Garroni, F. Prinari. Homogenization of $L^\infty$ functionals. *Math. Models Methods Appl. Sci.*, **14**, (2004), pp. 1761--1784. G. Buttazzo. *Semicontinuity, relaxation and integral representation in the calculus of variations.* Pitman Research Notes in Mathematics Series 207, Longman Scientific & Technical, Harlow, New York, 1989. L. Carbone, D. Cioranescu, R. De Arcangelis, A. Gaudiello. Homogenization of unbounded functionals and nonlinear elastomers. The general case. *Asymptot. Anal.*, **29**, (2002), pp. 221--272. L. Carbone, R. De Arcangelis. *Unbounded Functionals in the Calculus of Variations: Representation, Relaxation, and Homogenization*. Chapman and Hall, CRC, 2002. T. Champion, L. De Pascale. Homogenization of Dirichlet problems with convex bounded constraints on the gradient. *Z. Anal. Anwend.*, **22**, (2003), pp. 591--608. F. Christowiak, C. Kreisbeck. Homogenization of layered materials with rigid components in single-slip finite crystal plasticity. *Calc. Var.*, **56**, (2017), paper no. 75. F. Christowiak, C. Kreisbeck. Asymptotic rigidity of layered structures and its application in homogenization theory. *Arch. 
Ration. Mech. Anal.*, **235**, (2020), pp. 51--98. E. Clark, N. Katzourakis, B. Muha. Vectorial variational problems in $L^\infty$ constrained by the Navier-Stokes equations. *Nonlinearity*, **35**, (2022), pp. 470--491. G. Dal Maso. *An introduction to $\Gamma$-convergence.* Progress in Nonlinear Differential Equations and their Applications, Birkhäuser, Boston, 1993. E. Davoli, I. Fonseca. Homogenization of integral energies under periodically oscillating differential constraints. *Calc. Var.*, **55**, (2016), paper no. 69. M. Duerinckx, A. Gloria. Stochastic homogenization of nonconvex unbounded integral functionals with convex growth. *Arch. Ration. Mech. Anal.*, **221**, (2016), pp. 1511--1584. A. Garroni, V. Nesi, M. Ponsiglione. Dielectric breakdown: Optimal bounds. *R. Soc. Lond. Proc. Ser. A Math. Phys. Eng. Sci.*, (2001), pp. 2317--2335. R. V. Kohn, T. D. Little. Some model problems of polycrystal plasticity with deficient basic crystals. *SIAM J. Appl. Math.*, **59**, (1999), pp. 172--197. F. Prinari, E. Zappale. A relaxation result in the vectorial setting and power law approximation for supremal functionals. *J. Optim. Theory Appl.*, **186**, (2020), pp. 412--452. A. M. Ribeiro, E. Zappale. Existence of minimizers for nonlevel convex supremal functionals. *SIAM J. Control Optim.*, **52**, (2014), pp. 3341--3370. A. M. Ribeiro, E. Zappale. Revisited convexity notions for $L^\infty$ variational problems. Preprint, https://arxiv.org/abs/2309.10190. M. Wagner. On the lower semicontinuous quasiconvex envelope for unbounded integrands. I. *ESAIM Control Optim. Calc. Var.*, **15**, (2009), pp. 68--101. M. Wagner. On the lower semicontinuous quasiconvex envelope for unbounded integrands. II. Representation by generalized controls. *J. Convex Anal.*, **16**, (2009), pp. 441--472. E. Zappale. A remark on dimension reduction for supremal functionals: the case with convex domains. *Differential Integral Equations*, **26**, (2013), pp. 1077--1090.
arXiv:2310.01175 (math.AP): Lorenza D'Elia, Michela Eleuteri and Elvira Zappale, *Homogenization of supremal functionals in vectorial setting (via power-law approximation)*.
--- abstract: | We study the eigenvalue clusters of the Robin Laplacian on the 2-dimensional hemisphere with a variable Robin coefficient on the equator. The $\ell$'th cluster has $\ell+1$ eigenvalues. We determine the asymptotic density of eigenvalues in the $\ell$'th cluster as $\ell$ tends to infinity. This density is given by an explicit integral involving the even part of the Robin coefficient. address: - Department of Mathematics, King's College London, Strand, London, WC2R 2LS, U.K. - Department of Mathematics, King's College London, Strand, London, WC2R 2LS, U.K. author: - Alexander Pushnitski - Igor Wigman title: Eigenvalue clusters for the hemisphere Laplacian with variable Robin condition --- # Introduction and main result {#sec.a} ## The Robin eigenvalues {#sec:setup basic} Let $\Omega$ be the two-dimensional upper unit hemisphere, whose boundary $\partial \Omega$ is the equator of the sphere. As usual, we parameterise $\Omega$ by the spherical coordinates $\theta, \varphi$, where $\theta\in[0,\pi/2]$ is the polar angle (with the North pole corresponding to $\theta=0$, and the equator corresponding to $\theta=\pi/2$) and $\varphi\in(-\pi,\pi]$ is the azimuthal angle. Denote by $\Delta$ the Laplace-Beltrami operator on $\Omega$, which may be expressed in the spherical coordinates as $$\Delta=\frac1{\sin\theta}\frac{\partial}{\partial \theta}\sin\theta\frac{\partial}{\partial \theta}+\frac1{\sin^2\theta}\frac{\partial^2}{\partial\varphi^2}\ ,$$ and let $\sigma=\sigma(\varphi)$ be a $C^1$-smooth real-valued function on the equator. We are interested in the eigenvalues $\lambda=\lambda(\sigma)$ of the *Robin problem* $$-\Delta u=\lambda u \text{ on $\Omega$}, \qquad \frac{\partial u}{\partial n}+\sigma u=0 \text{ on $\partial\Omega$,}$$ where $\frac{\partial u}{\partial n}=\frac{\partial u}{\partial \theta}$ is the normal derivative at the boundary. 
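The coordinate expression for $\Delta$ above can be checked symbolically: applied to a spherical harmonic of degree $\ell$, it must return $-\ell(\ell+1)$ times the harmonic, which is the eigenvalue appearing in the Neumann clusters discussed below. A minimal sketch using sympy (our choice of tool, not the authors') for a degree-$1$ harmonic:

```python
import sympy as sp

theta, phi = sp.symbols("theta phi")

def laplace_beltrami(u):
    # Laplace-Beltrami operator in the spherical coordinates used in the text:
    # (1/sin t) d/dt (sin t du/dt) + (1/sin^2 t) d^2u/dphi^2
    term1 = sp.diff(sp.sin(theta) * sp.diff(u, theta), theta) / sp.sin(theta)
    term2 = sp.diff(u, phi, 2) / sp.sin(theta) ** 2
    return sp.simplify(term1 + term2)

# Y = sin(theta)*cos(phi) is proportional to the spherical harmonic Y_1^1,
# so Delta Y should equal -l(l+1) Y = -2 Y.
Y = sp.sin(theta) * sp.cos(phi)
print(sp.simplify(laplace_beltrami(Y) + 2 * Y))  # -> 0
```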
We will call these the *Robin eigenvalues* for short and enumerate them as $$\lambda_1(\sigma)\leq \lambda_2(\sigma)\leq\cdots$$ listing them repeatedly in case of multiplicity. We recall that $\lambda_n(\sigma)\to\infty$ as $n\to\infty$. The case $\sigma=0$ corresponds to the Neumann problem; this case allows for separation of variables. The corresponding eigenfunctions are spherical harmonics (those spherical harmonics that satisfy the Robin boundary condition, see Section [2.5](#sec.b2){reference-type="ref" reference="sec.b2"} below), and the eigenvalues are $\ell(\ell+1)$, $\ell\geq0$, of multiplicities $\ell+1$. The case $\sigma=\text{const}$ also allows for separation of variables, and was considered by Rudnick and Wigman [@RW] (with an additional restriction $\sigma>0$). Our first preliminary result is that the Robin eigenvalues form clusters of the size $O(\sqrt{\ell})$ around the Neumann eigenvalues $\ell(\ell+1)$. ****Theorem** 1**. *There exists a constant $C=C(\sigma)>0$ such that all Robin eigenvalues belong to the union of intervals $$\bigcup\limits_{\ell=0}^{\infty} \Lambda_\ell, \qquad \Lambda_\ell=\bigl(\ell(\ell+1)-C\sqrt{\ell+1},\ell(\ell+1)+C\sqrt{\ell+1}\bigr).$$ Moreover, for all sufficiently large $\ell$, the total number of eigenvalues (counting multiplicities) in $\cup_{k=0}^{\ell-1}\Lambda_k$ is $$L:=\ell(\ell+1)/2,$$ and there are exactly $\ell+1$ eigenvalues in $\Lambda_\ell$. Thus, for all sufficiently large $\ell$, the eigenvalues in $\Lambda_\ell$ are $\lambda_{L+k}(\sigma)$ with $k=1,\dots,\ell+1$.* Theorem [**Theorem** 1](#lma.a1){reference-type="ref" reference="lma.a1"} is proved in Section [5.4](#sec:proof Lemma a1){reference-type="ref" reference="sec:proof Lemma a1"} below. We will use the term *$\ell$'th cluster* for the Robin eigenvalues in $\Lambda_\ell$. 
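The multiplicity $\ell+1$ of the Neumann eigenvalue $\ell(\ell+1)$ can be checked directly. The excerpt defers the details to Section 2.5, so the sketch below relies on a standard fact assumed here: the Neumann condition on the equator requires the derivative of the associated Legendre function $P_\ell^m$ to vanish at $x=0$, which by parity happens exactly when $\ell+m$ is even.

```python
import sympy as sp

x = sp.symbols("x")

# Count the orders m in {-l, ..., l} for which d/dx P_l^m(x) vanishes at x = 0.
# The parity of P_l^m (a standard fact, assumed here) predicts the count l + 1,
# matching the multiplicity of the Neumann eigenvalue l(l+1) stated in the text.
def neumann_multiplicity(l):
    count = 0
    for m in range(-l, l + 1):
        dP = sp.diff(sp.assoc_legendre(l, abs(m), x), x).subs(x, 0)
        if dP == 0:
            count += 1
    return count

for l in range(0, 7):
    print(l, neumann_multiplicity(l))  # each line: l followed by l + 1
```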
The results of [@RW] for $\sigma=\text{const}>0$ assert that the estimate $O(\sqrt{\ell})$ on the size of the $\ell$'th cluster is sharp: there are Robin eigenvalues near $\ell(\ell+1)+C\sqrt{\ell}$ for large $\ell$. ## Main result Our aim is to determine the density of Robin eigenvalues in the $\ell$'th cluster. In other words, we are interested in the differences $$\lambda_{L+k}(\sigma)-\ell(\ell+1), \quad k=1,\dots,\ell+1$$ for large $\ell$; these differences are termed the *Robin-Neumann gaps* [@RWY]. We set $$\sigma_\text{even}(\varphi)=\frac12(\sigma(\varphi)+\sigma(\varphi+\pi)).$$ It turns out that, in the high energy limit, the density of Robin eigenvalues depends only on $\sigma_\text{even}$. Our main result is: ****Theorem** 2**. *Let $f\in C^\infty({\mathbb R})$ be a compactly supported test function. Then $$\lim_{\ell\to\infty} \frac1{\ell+1}\sum_{k=1}^{\ell+1}f\bigl(\lambda_{L+k}(\sigma)-\ell(\ell+1)\bigr) = \frac1{4\pi} \int_{-\pi}^{\pi} \int_{-1}^1 f\left(\frac{4\sigma_\text{\rm even}(\varphi)}{\pi\sqrt{1-\xi^2}}\right)d\xi\, d\varphi,$$ where $L=\ell(\ell+1)/2$.* ## Odd $\sigma$ An application of Theorem [**Theorem** 2](#thm.a2){reference-type="ref" reference="thm.a2"} with $\sigma$ odd, i.e. satisfying $$\sigma(\varphi+\pi)=-\sigma(\varphi), \quad \varphi\in(-\pi,\pi],$$ implies that the distribution of the Robin-Neumann gaps in the $\ell$'th cluster converges, as $\ell \rightarrow\infty$, to the delta function at the origin. This leaves open the question of what happens for odd $\sigma$ after possible further rescaling. In this direction we prove the following result: ****Theorem** 3**. *Let $\sigma$ be an odd real-valued trigonometric polynomial of degree $\le d$ (where $d$ is odd). Then for all sufficiently large $\ell$, the Robin eigenvalues in the $\ell$'th cluster satisfy $$\lambda_{L+k}(\sigma)=\ell(\ell+1), \quad L=\ell(\ell+1)/2$$ for all $k=1,\dots,\ell+1$ except possibly for $d+1$ indices $k$. 
In other words, at most $d+1$ Robin-Neumann gaps in the $\ell$'th cluster are non-zero.* This shows that, at least for odd trigonometric polynomials, it is not possible to rescale the Robin-Neumann gaps to produce a meaningful limit. The proof of Theorem [**Theorem** 3](#thm.a3){reference-type="ref" reference="thm.a3"} is given in Section [6](#sec:odd sigma){reference-type="ref" reference="sec:odd sigma"}. ## Discussion By a change of variable, the statement of Theorem [**Theorem** 2](#thm.a2){reference-type="ref" reference="thm.a2"} may be expressed as $$\lim_{\ell\to\infty} \frac1{\ell+1}\sum_{k=1}^{\ell+1}f\bigl(\lambda_{L+k}(\sigma)-\ell(\ell+1)\bigr) = \int_{-\infty}^\infty f(y)\rho(\sigma;y)dy,$$ where the density $\rho(\sigma;y)$ is given by (denoting $a_+=\max\{a,0\}$ and $a_-=\max\{-a,0\}$) $$\rho(\sigma;y) = \frac{1}{2\pi y^3} \int_{-\pi}^\pi \left(\frac{4(\sigma_\text{\rm even}(\varphi))_+}{\pi}\right)^2 \left(1-\left(\frac{4(\sigma_\text{\rm even}(\varphi))_+}{\pi y}\right)^2\right)_+^{-1/2}d\varphi \quad \text{ for $y>0$}$$ and $$\rho(\sigma;y) = \frac{1}{2\pi y^3} \int_{-\pi}^\pi \left(\frac{4(\sigma_\text{\rm even}(\varphi))_-}{\pi}\right)^2 \left(1-\left(\frac{4(\sigma_\text{\rm even}(\varphi))_-}{\pi y}\right)^2\right)_+^{-1/2}d\varphi \quad \text{ for $y<0$.}$$ If $\sigma$ is a positive constant, this gives $$\rho(\sigma;y) = \frac{16 \sigma^2}{\pi^2 y^3} (1-(4\sigma/\pi y)^2)_+^{-1/2},$$ consistent with [@RW]. In the case of *constant* $\sigma$, the Robin problem allows for separation of variables. For variable $\sigma$, this is no longer the case and we have to use some relatively advanced methods in spectral perturbation theory. For $\sigma$ of *variable sign*, the density $\rho(\sigma;\cdot)$ is supported on both sides of the origin. As a consequence, for such $\sigma$ and for all sufficiently large $\ell$, there are Robin eigenvalues on both sides of $\ell(\ell+1)$ in the $\ell$'th cluster. 
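The constant-$\sigma$ density above is a probability density: substituting $t=4\sigma/(\pi y)$ turns its integral over the support $y>4\sigma/\pi$ into $\int_0^1 t(1-t^2)^{-1/2}\,dt = 1$. A quick numerical check (using scipy quadrature, with the value $\sigma = 0.7$ chosen arbitrarily):

```python
import numpy as np
from scipy.integrate import quad

# rho(y) = (16 sigma^2)/(pi^2 y^3) * (1 - (4 sigma/(pi y))^2)^{-1/2} on y > 4 sigma/pi,
# the constant-sigma density from the text; its total mass should be 1.
sigma = 0.7
a = 4 * sigma / np.pi                        # left endpoint of the support
rho = lambda y: (16 * sigma**2) / (np.pi**2 * y**3) / np.sqrt(1 - (a / y) ** 2)

# Split the range: QUADPACK's adaptive rule handles the inverse-square-root
# endpoint singularity at y = a on the finite piece.
near = quad(rho, a, 50.0)[0]
tail = quad(rho, 50.0, np.inf)[0]
print(near + tail)   # close to 1.0, up to quadrature error
```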
The assumed condition $\sigma\in C^1(\partial\Omega)$ could be relaxed to $\sigma\in L^\infty(\partial\Omega)$ and even to $\sigma\in L^p(\partial\Omega)$ for any $p>1$ at the expense of making the proofs more technical. This direction is not pursued within this paper. ## Related results In spectral theory, results of the type of Theorem [**Theorem** 2](#thm.a2){reference-type="ref" reference="thm.a2"} originate from the classical work by A. Weinstein [@Weinstein] (see also [@G1; @CdV]). Weinstein considered the operator $-\Delta+V$, where $\Delta$ is the Laplace-Beltrami operator on an $n$-dimensional sphere (more generally, on a symmetric space of rank one) and $V$ is the operator of multiplication by a smooth real-valued function $V$ on the sphere (usually called a potential in this context). In this case, all of the eigenvalues of $-\Delta$ have finite multiplicities which grow with the eigenvalue. Adding the perturbation $V$ splits each eigenvalue into a cluster. Weinstein proved that the asymptotic eigenvalue distribution inside these clusters has a density function given by averaging $V$ along the closed geodesics of the sphere. For some follow-up work in mathematical physics see e.g. [@VB; @RPVB]. Although these results seem related to our Theorem [**Theorem** 2](#thm.a2){reference-type="ref" reference="thm.a2"}, in the next subsection it is shown that a *formal* application of Weinstein's formula in our setup produces a wrong density function due to an invalid interchange of limits w.r.t. different parameters. Observe that Weinstein's formula for the density function depends only on the *even* part of the potential $V$. In [@G3] Guillemin studied the eigenvalue clusters of $-\Delta+V$ on the sphere, where $V$ is an *odd* potential. An application of Weinstein's formula for an odd potential yields a delta-function at the origin for the density function. 
Guillemin's result suggests that, after a suitable rescaling, one can still obtain a (different) density function, at least for some odd potentials. This scenario should be compared to our Theorem [**Theorem** 3](#thm.a3){reference-type="ref" reference="thm.a3"}. ## Comparison with Weinstein's formula Here we show what happens upon *formally* applying Weinstein's formula in our setup. We will regard the Robin Laplacian on the hemisphere as a (singular) limit of the perturbed Neumann Laplacian. The geodesics of the upper hemisphere $\Omega$ corresponding to the Neumann Laplacian are the great circles that are reflected from the equator according to the law of reflection, i.e. angle of incidence equals angle of reflection. Let us parameterise the geodesics as follows. Each geodesic consists of two semi-circles, say $A_1$ and $A_2$, that meet at the equator. Let $\omega_1\in\Omega$ (resp. $\omega_2\in\Omega$) be the normal vector to $A_1$ (resp. to $A_2$). If $\omega_1$ has the spherical coordinates $(\theta,\varphi)$, then $\omega_2$ has the spherical coordinates $(\theta,\varphi+\pi)$. Let us denote the corresponding geodesic by $\Gamma(\theta,\varphi)$. It is clear that $\Gamma(\theta,\varphi+\pi)=\Gamma(\theta,\varphi)$. Thus, we will parameterise all the geodesics if we choose the point $(\theta,\varphi)$ to vary over half of the hemisphere: $\varphi\in[-\pi/2,\pi/2)$ and $\theta\in(0,\pi/2]$. Observe that the length of every such geodesic is $2\pi$. The geodesic $\Gamma(\theta,\varphi)$ gets reflected from the equator at the two points with the azimuthal angles $\varphi\pm\pi/2$, and the angle of incidence/reflection is $\theta$. Now let us consider the operator $-\Delta_N+V$, where $\Delta_N$ is the Laplacian on the hemisphere with the Neumann boundary condition on the equator (this corresponds to the Robin condition with $\sigma=0$). 
We denote by $$\widetilde{V}(\theta,\varphi)=\frac1{2\pi}\int_{\Gamma(\theta,\varphi)}V(s)ds$$ the average of $V$ over $\Gamma(\theta,\varphi)$; here $ds$ is the length element of the geodesic. Let $\{\lambda_k(V)\}_{k=1}^\infty$ be the eigenvalues of $-\Delta_N+V$ (listed in non-decreasing order). These eigenvalues form clusters around the points $\ell(\ell+1)$. Formally applying Weinstein's formula for the density of eigenvalues to this case yields $$\lim_{\ell\to\infty}\frac1{\ell+1}\sum_{k=1}^{\ell+1}f(\lambda_k(V)-\ell(\ell+1)) = \frac1\pi \int_0^\pi \int_{0}^{\pi/2}f(\widetilde{V}(\theta,\varphi))\sin\theta\, d\theta\, d\varphi; \label{a1}$$ here the integral on the right hand side effects averaging over all geodesics, and the factor $\pi$ in the denominator is the surface area of the quarter-sphere (half of the upper hemisphere). Now let $\sigma$ be a continuous function on the equator. For $\varepsilon>0$ sufficiently small we define the function $V_\varepsilon$ on the upper hemisphere by $$V_\varepsilon(\theta,\varphi) = \begin{cases} \frac1\varepsilon\sigma(\varphi), & \text{if } \theta>\frac{\pi}{2}-\varepsilon, \\ 0, & \text{if }\theta\leq\frac\pi2-\varepsilon. \end{cases}$$ It is not difficult to see that the corresponding operator $-\Delta_N+V_\varepsilon$ converges (in the strong resolvent sense) to the Robin Laplacian with the Robin parameter $\sigma$. Let us apply Weinstein's formula [\[a1\]](#a1){reference-type="eqref" reference="a1"} to $V_\varepsilon$ and *formally* interchange the limits w.r.t. $\ell\to\infty$ and $\varepsilon\to0_+$. We claim that the resulting formula for the density function differs from the one in Theorem [**Theorem** 2](#thm.a2){reference-type="ref" reference="thm.a2"} by a factor of $2$. 
Indeed, simple geometric considerations show that averaging over a geodesic $\Gamma(\theta,\varphi)$ for sufficiently small $\varepsilon$ yields $$\lim_{\varepsilon\to0_+}\widetilde{V}_\varepsilon(\theta,\varphi) =\frac{2\sigma(\varphi+\tfrac{\pi}{2})+2\sigma(\varphi-\tfrac{\pi}{2})}{2\pi\sin\theta} =\frac{2\sigma_{\mathrm{even}}(\varphi+\tfrac{\pi}{2})}{\pi\sin\theta}$$ and therefore $$\begin{aligned} \lim_{\varepsilon\to0_+} &\frac1\pi \int_0^\pi \int_{0}^{\pi/2}f(\widetilde{V}_\varepsilon(\theta,\varphi))\sin\theta\, d\theta\, d\varphi = \frac1\pi \int_0^\pi \int_{0}^{\pi/2}f\left(\frac{2\sigma_{\mathrm{even}}(\varphi+\tfrac{\pi}{2})}{\pi\sin\theta}\right)\sin\theta\, d\theta\, d\varphi \\ &=\frac1\pi \int_0^\pi \int_{0}^{1}f\left(\frac{2\sigma_{\mathrm{even}}(\varphi+\tfrac{\pi}{2})}{\pi\sqrt{1-\xi^2}}\right)\, d\xi\, d\varphi =\frac1\pi \int_0^\pi \int_{0}^{1}f\left(\frac{2\sigma_{\mathrm{even}}(\varphi)}{\pi\sqrt{1-\xi^2}}\right)\, d\xi\, d\varphi \\ &=\frac1{4\pi} \int_{-\pi}^\pi \int_{-1}^{1}f\left(\frac{2\sigma_{\mathrm{even}}(\varphi)}{\pi\sqrt{1-\xi^2}}\right)\, d\xi\, d\varphi,\end{aligned}$$ which differs from the correct result by a factor of 2 in the argument of $f$. We conclude that the "naive" application of Weinstein's formula in this case produces an incorrect result. Without going into details, we observe that a similar phenomenon can be seen in the simple one-dimensional case. The Robin realisation of $-d^2/dx^2$ on the interval $[0,1]$ can be considered as the limit of the Neumann realisations of $-d^2/dx^2+V_\varepsilon$ with a suitable family of potentials $V_\varepsilon$. However, interchanging the limits $\ell\to\infty$ with $\varepsilon\to0_+$ in the asymptotic formula for the $\ell$'th eigenvalue produces an incorrect result. ## Acknowledgements {#acknowledgements .unnumbered} The authors are grateful to Zeév Rudnick for useful remarks on the preliminary version of the text, and to Jean Lagacé for useful discussions. 
# Definitions and key steps of the proof {#sec.bb} ## Summary We consider the Robin Laplacian as a perturbation of the Neumann Laplacian. The Robin-Neumann gaps in the $\ell$'th cluster are identified (up to a small error term) with the eigenvalues of a certain operator $V_\ell[\sigma]$ acting in the $\ell$'th eigenspace of the Neumann Laplacian; this is Theorem [**Theorem** 4](#thm.b1){reference-type="ref" reference="thm.b1"} below. Next, $V_\ell[\sigma]$ is shown to be unitarily equivalent to a semiclassical pseudodifferential operator acting in the $L^2$ space of the equator $\partial\Omega$, with the semiclassical parameter $1/\ell$ (again up to a small error term). Finally, we appeal to standard trace asymptotics for semiclassical operators, which yields the result. ## Notation and preliminaries We will work with operators in $L^2(\Omega)$, which is the Hilbert space with the norm $$\lVert u\rVert^2_{L^2(\Omega)} = \int_{-\pi}^\pi\int_{0}^{\pi/2} \lvert u(\theta,\varphi)\rvert^2\sin\theta\, d\theta\, d\varphi,$$ and in $L^2(\partial\Omega)$, which is the Hilbert space with the norm $$\lVert u\rVert^2_{L^2(\partial\Omega)} = \int_{-\pi}^\pi\lvert u(\varphi)\rvert^2 d\varphi.$$ We denote the corresponding inner products (linear in the first entry and anti-linear in the second one) by $\langle\cdot,\cdot\rangle_{L^2(\Omega)}$ and $\langle\cdot,\cdot\rangle_{L^2(\partial\Omega)}$. If $A$ is a compact operator in a Hilbert space, let $\{s_n(A)\}_{n=1}^\infty$ be the sequence of singular values of $A$ (listed repeatedly in case of multiplicities). For $1\leq p<\infty$, we denote by $$\lVert A\rVert_{{\mathbf S}_p} = \left(\sum_n s_n(A)^p\right)^{1/p}$$ the norm of $A$ in the Schatten class ${\mathbf S}_p$. In fact, we will only need the trace norm $\lVert A\rVert_{{\mathbf S}_1}$ and the Hilbert-Schmidt norm $\lVert A\rVert_{{\mathbf S}_2}$. 
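In finite dimensions these norms can be computed directly from the singular values. The following minimal sketch (plain NumPy on random matrices; our own illustration, not part of the argument) computes the trace and Hilbert-Schmidt norms and verifies numerically the two inequalities recalled in the next paragraph.

```python
import numpy as np

rng = np.random.default_rng(0)

def schatten(A, p):
    """Schatten p-norm: the l^p norm of the singular values of A."""
    s = np.linalg.svd(A, compute_uv=False)
    return (s**p).sum()**(1.0/p)

A = rng.standard_normal((8, 8)) + 1j*rng.standard_normal((8, 8))
B = rng.standard_normal((8, 8)) + 1j*rng.standard_normal((8, 8))

# the trace is dominated by the trace norm: |Tr A| <= ||A||_{S_1}
assert abs(np.trace(A)) <= schatten(A, 1) + 1e-9

# Cauchy-Schwarz for Schatten norms: ||AB||_{S_1} <= ||A||_{S_2}||B||_{S_2}
assert schatten(A @ B, 1) <= schatten(A, 2)*schatten(B, 2) + 1e-9
```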
We recall the analogue of the Cauchy-Schwarz inequality for the Schatten norms, $$\lVert AB\rVert_{{\mathbf S}_1}\leq\lVert A\rVert_{{\mathbf S}_2}\lVert B\rVert_{{\mathbf S}_2}.$$ We also recall that the operator trace is a bounded linear functional on the trace class, with $\lvert\mathop{\mathrm{Tr}}A\rvert\leq \lVert A\rVert_{{\mathbf S}_1}$. ## Closed semibounded quadratic forms and the variational principle {#sec.var} We briefly recall the notions related to the variational principle; see e.g. [@BS Section 10.2.3] for the details. Let $A$ be a self-adjoint operator with the domain $\mathop{\mathrm{Dom}}A$ on a Hilbert space. We recall that $A$ is called lower semi-bounded if for any $f\in\mathop{\mathrm{Dom}}A$ we have $$\langle Af,f\rangle\geq m\lVert f\rVert^2$$ with some constant $m\in{\mathbb R}$. By adding a multiple of the identity operator to $A$ if necessary, we may always assume that $m>0$. The *quadratic form of $A$* is $a(f)=\langle Af,f\rangle$ with the domain $\mathop{\mathrm{Dom}}a=\mathop{\mathrm{Dom}}A^{1/2}$ (where the square root is defined in the sense of the functional calculus for self-adjoint operators). One can prove that the quadratic form $a$ is closed, i.e. $\mathop{\mathrm{Dom}}a$ is complete with respect to the norm generated by $a$. Conversely, every closed lower semi-bounded quadratic form corresponds to a unique self-adjoint operator. Let $A$ and $B$ be self-adjoint lower semi-bounded operators, and let $a$ and $b$ be the corresponding quadratic forms with the domains $\mathop{\mathrm{Dom}}a$ and $\mathop{\mathrm{Dom}}b$. One writes $A\leq B$ if $\mathop{\mathrm{Dom}}b\subset\mathop{\mathrm{Dom}}a$ and $$a(f)\leq b(f), \quad \forall f\in\mathop{\mathrm{Dom}}b.$$ The utility of this notion comes from the variational principle. 
One of the versions of this principle asserts that if both $A$ and $B$ have compact resolvents, then the sequences of eigenvalues of $A$ and $B$ (enumerated in non-decreasing order) satisfy $$\lambda_n(A)\leq \lambda_n(B)$$ for all indices $n$. See e.g. [@BS Theorem 10.2.4] for the details. ## The Robin Laplacian $H[\sigma]$ For a smooth function $u$ on the hemisphere, we write $$\int_{\Omega}\lvert\nabla u\rvert^2dx := \int_{-\pi}^\pi\int_{0}^{\pi/2} \left(\left\lvert\frac{\partial u}{\partial \theta}\right\rvert^2+\frac1{(\sin\theta)^2}\left\lvert\frac{\partial u}{\partial \varphi}\right\rvert^2\right)\sin\theta\, d\theta\, d\varphi$$ for the Dirichlet integral. Recall that the Sobolev space $W^1_2(\Omega)$ is the set of all functions $u\in L^2(\Omega)$ such that the Dirichlet integral is finite. Furthermore, by the Sobolev Trace Theorem (see e.g. [@Necas Theorem 1.1.2]) functions $u\in W^1_2(\Omega)$ can be restricted to the equator; the restrictions belong to $L^2(\partial\Omega)$ and satisfy the estimate $$\int_{-\pi}^\pi \lvert u(\pi/2,\varphi)\rvert^2d\varphi\leq C\left(\int_{\Omega}\lvert\nabla u\rvert^2dx+\int_{\Omega}\lvert u\rvert^2dx\right), \quad u\in W^1_2(\Omega), \label{b10}$$ with some absolute constant $C$. Moreover, the corresponding embedding operator $W^1_2(\Omega)\subset L^2(\partial\Omega)$ is compact; see e.g. [@Necas Theorem 2.6.2]. Let $h[\sigma]$ be the quadratic form $$h[\sigma](u)=\int_{\Omega}\lvert\nabla u\rvert^2dx+\int_{-\pi}^\pi\sigma(\varphi)\lvert u(\pi/2,\varphi)\rvert^2d\varphi, \quad u\in W^1_2(\Omega).$$ Since $\sigma$ is bounded, by the embedding [\[b10\]](#b10){reference-type="eqref" reference="b10"}, the second integral in the right hand side is well-defined. The form $h[\sigma]$ is closed in $L^2(\Omega)$; we denote by $H[\sigma]$ the self-adjoint operator corresponding to this form. This operator is the Robin Laplacian, whose eigenvalues have been earlier denoted by $\lambda_n(\sigma)$. 
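For orientation, the same form-based construction can be carried out in one dimension: the form $a(u)=\int_0^1\lvert u'\rvert^2dx+\sigma\lvert u(1)\rvert^2$ corresponds to $-u''=\lambda u$ with $u'(0)=0$ and $u'(1)+\sigma u(1)=0$, whose lowest eigenvalue is $k^2$, where $k$ solves $k\tan k=\sigma$. A minimal numerical sketch of this one-dimensional analogue (our own illustration, not from the text; P1 finite elements with a lumped mass matrix):

```python
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import brentq

# Robin form on [0,1]: a(u) = int |u'|^2 dx + sigma*|u(1)|^2, Neumann end at x=0.
sigma, N = 1.0, 400
h = 1.0/N

# P1 finite elements: stiffness matrix plus the boundary term sigma at x=1,
# lumped (diagonal) mass matrix.
K = (2.0*np.eye(N + 1) - np.eye(N + 1, k=1) - np.eye(N + 1, k=-1))/h
K[0, 0] = K[N, N] = 1.0/h
K[N, N] += sigma
M = np.full(N + 1, h); M[0] = M[N] = h/2.0

lam = eigh(K, np.diag(M), eigvals_only=True)

# exact lowest eigenvalue: lambda = k^2, where k solves k*tan(k) = sigma
k = brentq(lambda t: t*np.tan(t) - sigma, 1e-6, np.pi/2 - 1e-6)
assert abs(lam[0] - k*k) < 1e-3   # k ~ 0.8603, lambda ~ 0.7402
```

(With $\sigma=0$ the smallest discrete eigenvalue is $0$ and the next is close to $\pi^2$, recovering the Neumann case.)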
Note that the case $\sigma=0$ corresponds to the Neumann Laplacian $H[0]$, and so we can write $$h[\sigma](u)=h[0](u)+\int_{-\pi}^\pi\sigma(\varphi)\lvert u(\pi/2,\varphi)\rvert^2d\varphi. \label{d0}$$ Below we consider $H[\sigma]$ as a perturbation of $H[0]$ in the quadratic form sense. An important technical point here is that (by the compactness of the embedding $W^1_2(\Omega)\subset L^2(\partial\Omega)$) the quadratic form given by the integral over the equator in [\[d0\]](#d0){reference-type="eqref" reference="d0"} is compact in $W^1_2(\Omega)$, and therefore $H[\sigma]$ can be considered as an operator obtained from $H[0]$ by adding a relatively form-compact perturbation; see e.g. [@RS4] for the discussion of the concept of form-compactness. ## Spherical harmonics {#sec.b2} The solutions to the eigenvalue equation $$-\Delta Y=\ell(\ell+1) Y$$ on the sphere are given by the linear combinations of $(2\ell+1)$ spherical harmonics $$Y_\ell^m(\theta,\varphi) = (-1)^m \sqrt{\frac{(2\ell+1)}{2\pi}\frac{(\ell-m)!}{(\ell+m)!}}P_\ell^m(\cos\theta)e^{im\varphi}, \quad m=-\ell,\dots,\ell, \label{b11}$$ where $P_\ell^m$ are the associated Legendre polynomials, and the normalisation is chosen so that $Y_\ell^m$ are orthonormal in $L^2(\Omega)$: $$\int_0^{\pi/2} \int_{-\pi}^\pi Y_\ell^m(\theta,\varphi) \overline{Y_\ell^{m'}(\theta,\varphi)}\sin \theta\, d\theta\, d\varphi=\delta_{m,m'}$$ (this differs from the usual normalisation by the factor of $\sqrt{2}$ because the usual normalisation corresponds to the full sphere). The spherical harmonics with $m-\ell$ even satisfy the Neumann boundary condition at the equator, whereas the ones with $m-\ell$ odd satisfy the Dirichlet boundary condition. ## Neumann eigenspaces We are interested in the Neumann eigenfunctions. Let $\mathbf P_\ell$ be the orthogonal projection in $L^2(\Omega)$ onto the eigenspace of the Neumann Laplacian corresponding to the eigenvalue $\ell(\ell+1)$. 
In other words, $\mathbf P_\ell$ is the projection onto the subspace spanned by $Y_\ell^m$, $m=-\ell,-\ell+2,\dots,\ell-2,\ell$: $$\mathbf P_\ell: u(\theta,\varphi)\mapsto \sum_{\genfrac{}{}{0pt}{1}{m=-\ell}{\text{$m-\ell$ even}}}^\ell Y_\ell^m(\theta,\varphi)\int_0^{\pi/2}\int_{-\pi}^\pi \overline{Y_\ell^m(\theta',\varphi')}u(\theta',\varphi')\sin \theta'\, d\theta'\ d\varphi'\ . \label{b11a}$$ Clearly, the dimension of the range $\mathop{\mathrm{Ran}}\mathbf P_\ell$ is $\ell+1$. ## The operator $V_\ell[\sigma]$ For $\ell\geq0$, let $V_\ell[\sigma]$ be the operator in $\mathop{\mathrm{Ran}}{\mathbf P}_\ell$ corresponding to the quadratic form $$v_\ell[\sigma](u)= \int_{-\pi}^\pi \sigma(\varphi)\lvert u(\pi/2,\varphi)\rvert^2d\varphi, \quad u\in\mathop{\mathrm{Ran}}{\mathbf P}_\ell. \label{b8}$$ We note that for any spherical harmonic, the restriction onto the equator $\theta=\pi/2$ is a continuous function, and so the above integral is well-defined. Furthermore, by [\[b11\]](#b11){reference-type="eqref" reference="b11"}, for any function $u\in\mathop{\mathrm{Ran}}{\mathbf P}_\ell$, the restriction onto the equator can be written as $$u(\pi/2,\varphi)=e^{i\ell\varphi}w(\varphi),$$ where $w$ is even, i.e. $w(\varphi+\pi)=w(\varphi)$, since every index $m$ appearing in [\[b11a\]](#b11a){reference-type="eqref" reference="b11a"} satisfies that $m-\ell$ is even. It follows that $\lvert u(\pi/2,\varphi)\rvert^2$ is even. Therefore, $\sigma$ in [\[b8\]](#b8){reference-type="eqref" reference="b8"} can be replaced by $\sigma_\text{even}$, and so $$V_\ell[\sigma]=V_\ell[\sigma_\text{even}].$$ This explains why the dependence of the limiting density of eigenvalues on $\sigma$ in Theorem [**Theorem** 2](#thm.a2){reference-type="ref" reference="thm.a2"} is through $\sigma_\text{even}$ alone. The first key ingredient of our construction is the following result: ****Theorem** 4**. 
*For any test function $f\in C^\infty({\mathbb R})$, one has $$\lim_{\ell\to\infty}\frac1{\ell+1}\mathop{\mathrm{Tr}}f(V_\ell[\sigma]) = \frac1{4\pi} \int_{-\pi}^{\pi} \int_{-1}^1 f\left(\frac{4\sigma_\text{\rm even}(\varphi)}{\pi\sqrt{1-\xi^2}}\right)d\xi\, d\varphi.$$* The proof of Theorem [**Theorem** 4](#thm.b1){reference-type="ref" reference="thm.b1"} is given in Section [3](#sec.b){reference-type="ref" reference="sec.b"}. ## Upper and lower bounds on eigenvalues in the $\ell$'th cluster The second key ingredient of our construction is the following result. Below we denote by $\lambda_k(V_\ell[\sigma])$ the eigenvalues of $V_\ell[\sigma]$, enumerated in non-decreasing order. Since the dimension of $\mathop{\mathrm{Ran}}\mathbf P_\ell$ is $\ell+1$, there are $\ell+1$ of these eigenvalues. ****Theorem** 5**. *Let $\varepsilon>0$, and let $\sigma_\pm=\sigma\pm\varepsilon\lvert\sigma\rvert$. Then for all sufficiently large $\ell$, we have the estimates $$\lambda_k(V_\ell[\sigma_-]) \leq \lambda_{L+k}(\sigma)-\ell(\ell+1) \leq \lambda_k(V_\ell[\sigma_+]), \quad L=\ell(\ell+1)/2,$$ for $k=1,\dots,\ell+1$.* The proof is given in Section [5](#sec.d){reference-type="ref" reference="sec.d"}. ## Proof of Theorem [**Theorem** 2](#thm.a2){reference-type="ref" reference="thm.a2"} {#sec.b6} The proof is a simple combination of the upper and lower bounds of Theorem [**Theorem** 5](#thm.d2){reference-type="ref" reference="thm.d2"} and the asymptotics of Theorem [**Theorem** 4](#thm.b1){reference-type="ref" reference="thm.b1"}. 
Applying the estimate $$\lvert f(a)-f(b)\rvert\leq \lVert f'\rVert_{L^\infty}\lvert a-b\rvert$$ to the second inequality (upper bound) in Theorem [**Theorem** 5](#thm.d2){reference-type="ref" reference="thm.d2"} and summing over $k$, we find $$\begin{aligned} \biggl|\sum_{k=1}^{\ell+1}f(\lambda_{L+k}(\sigma)-\ell(\ell+1))&-\sum_{k=1}^{\ell+1}f(\lambda_k(V_\ell[\sigma_+]))\biggr| \\ &\leq \lVert f'\rVert_{L^\infty}\sum_{k=1}^{\ell+1} \biggl( \lambda_k(V_\ell[\sigma_+])-\lambda_{L+k}(\sigma)+\ell(\ell+1) \biggr) \\ &\leq \lVert f'\rVert_{L^\infty}\sum_{k=1}^{\ell+1}\biggl(\lambda_k(V_\ell[\sigma_+])-\lambda_k(V_\ell[\sigma_-])\biggr) \\ &= \lVert f'\rVert_{L^\infty}\mathop{\mathrm{Tr}}V_\ell[\sigma_+-\sigma_-] = 2\varepsilon\lVert f'\rVert_{L^\infty}\mathop{\mathrm{Tr}}V_\ell[\lvert\sigma\rvert].\end{aligned}$$ Using Theorem [**Theorem** 4](#thm.b1){reference-type="ref" reference="thm.b1"} with $f(x)=x$, we find $$\mathop{\mathrm{Tr}}V_\ell[\lvert\sigma\rvert]\leq C(\ell+1)$$ with some $C=C(\sigma)$. 
We conclude that $$\begin{aligned} \limsup_{\ell\to\infty}\frac1{\ell+1} \sum_{k=1}^{\ell+1}f(\lambda_{L+k}(\sigma)-\ell(\ell+1)) \leq \limsup_{\ell\to\infty}\frac1{\ell+1} \sum_{k=1}^{\ell+1}f(\lambda_k(V_\ell[\sigma_+])) +C\varepsilon.\end{aligned}$$ Using Theorem [**Theorem** 4](#thm.b1){reference-type="ref" reference="thm.b1"}, the upper limit in the right hand side can be computed, which gives $$\begin{aligned} \limsup_{\ell\to\infty}\frac1{\ell+1}& \sum_{k=1}^{\ell+1}f(\lambda_{L+k}(\sigma)-\ell(\ell+1)) \\ &\leq \frac1{4\pi} \int_{-\pi}^{\pi} \int_{-1}^1 f\left(\frac{4\sigma_\text{\rm even}(\varphi)+4\varepsilon\lvert\sigma_\text{\rm even}(\varphi)\rvert}{\pi\sqrt{1-\xi^2}}\right)d\xi\, d\varphi +C\varepsilon.\end{aligned}$$ Since $f$ is smooth, we find $$\int_{-\pi}^{\pi} \int_{-1}^1 f\left(\frac{4\sigma_\text{\rm even}(\varphi)+4\varepsilon\lvert\sigma_\text{\rm even}(\varphi)\rvert}{\pi\sqrt{1-\xi^2}}\right)d\xi\, d\varphi = \int_{-\pi}^{\pi} \int_{-1}^1 f\left(\frac{4\sigma_\text{\rm even}(\varphi)}{\pi\sqrt{1-\xi^2}}\right)d\xi\, d\varphi +O(\varepsilon)$$ as $\varepsilon\to0$, and so finally we obtain $$\begin{aligned} \limsup_{\ell\to\infty}\frac1{\ell+1}& \sum_{k=1}^{\ell+1}f(\lambda_{L+k}(\sigma)-\ell(\ell+1)) \\ &\leq \frac1{4\pi} \int_{-\pi}^{\pi} \int_{-1}^1 f\left(\frac{4\sigma_\text{\rm even}(\varphi)}{\pi\sqrt{1-\xi^2}}\right)d\xi\, d\varphi +C'\varepsilon\end{aligned}$$ with some constant $C'$. 
In the same way we find $$\begin{aligned} \liminf_{\ell\to\infty}\frac1{\ell+1}& \sum_{k=1}^{\ell+1}f(\lambda_{L+k}(\sigma)-\ell(\ell+1)) \\ &\geq \frac1{4\pi} \int_{-\pi}^{\pi} \int_{-1}^1 f\left(\frac{4\sigma_\text{\rm even}(\varphi)}{\pi\sqrt{1-\xi^2}}\right)d\xi\, d\varphi -C'\varepsilon.\end{aligned}$$ As $\varepsilon>0$ can be taken arbitrarily small, we finally obtain $$\begin{aligned} &\limsup_{\ell\to\infty}\frac1{\ell+1} \sum_{k=1}^{\ell+1}f(\lambda_{L+k}(\sigma)-\ell(\ell+1)) \\ =& \liminf_{\ell\to\infty}\frac1{\ell+1} \sum_{k=1}^{\ell+1}f(\lambda_{L+k}(\sigma)-\ell(\ell+1)) \\ =& \frac1{4\pi} \int_{-\pi}^{\pi} \int_{-1}^1 f\left(\frac{4\sigma_\text{\rm even}(\varphi)}{\pi\sqrt{1-\xi^2}}\right)d\xi\, d\varphi,\end{aligned}$$ which is the required statement. ◻ ## The structure of the rest of the paper In the rest of the paper, we prove Theorems [**Theorem** 4](#thm.b1){reference-type="ref" reference="thm.b1"} and [**Theorem** 5](#thm.d2){reference-type="ref" reference="thm.d2"}. In Section [3](#sec.b){reference-type="ref" reference="sec.b"}, we prove Theorem [**Theorem** 4](#thm.b1){reference-type="ref" reference="thm.b1"}. The proof of one well-known technical ingredient is postponed to Appendix A. In Section [4](#sec.cc){reference-type="ref" reference="sec.cc"}, we prove some auxiliary estimates for the resolvent of $H[\sigma]$ and related operators. In Section [5](#sec.d){reference-type="ref" reference="sec.d"}, we prove Theorem [**Theorem** 5](#thm.d2){reference-type="ref" reference="thm.d2"}, and also Lemma [**Theorem** 1](#lma.a1){reference-type="ref" reference="lma.a1"}. In Section [6](#sec:odd sigma){reference-type="ref" reference="sec:odd sigma"} we prove Theorem [**Theorem** 3](#thm.a3){reference-type="ref" reference="thm.a3"}. 
# The eigenvalues of $V_\ell[\sigma]$ {#sec.b} ## Overview Our main aim in this section is to prove Theorem [**Theorem** 4](#thm.b1){reference-type="ref" reference="thm.b1"} and also the following simple statement: ****Lemma** 6**. *We have the estimate $$\lVert V_\ell[\sigma]\rVert=O(\sqrt{\ell}), \quad \ell\to\infty. \label{b9}$$* By taking $\sigma=\mathop{\mathrm{const}}>0$, it is easy to check that this estimate is sharp, i.e. in this case $\lVert V_\ell[\sigma]\rVert\geq c\sqrt{\ell}$ for some $c>0$ and all sufficiently large $\ell$. Observe that Theorem [**Theorem** 4](#thm.b1){reference-type="ref" reference="thm.b1"} with $f(x)=x$ states $$\mathop{\mathrm{Tr}}V_\ell[\sigma]=O(\ell), \quad \ell\to\infty;$$ it is instructive to compare this with [\[b9\]](#b9){reference-type="eqref" reference="b9"}. ## Multiplications and convolutions We need some notation. Below we consider operators acting in $L^2(\partial\Omega)$. For a bounded function $a$ on $\partial\Omega$, we denote by $M[a]$ the operator of multiplication by $a(\varphi)$ in $L^2(\partial\Omega)$, and by $C[a]$ the operator of convolution with $a$: $$\label{eq:C[a] conv op def} C[a]: u(\varphi)\mapsto \frac1{2\pi}\int_{-\pi}^\pi u(\varphi')a(\varphi-\varphi')d\varphi'.$$ Recall that if $a(\varphi)=\sum_n a_ne^{in\varphi}$, then by Parseval, $C[a]$ is unitarily equivalent to the operator of multiplication by the sequence $\{a_n\}$ in $\ell^2({\mathbb Z})$. In particular, for the operator norm and the Hilbert-Schmidt norm of $C[a]$ we have $$\begin{aligned} \lVert C[a]\rVert&=\sup_n\lvert a_n\rvert, \label{b0a} \\ \lVert C[a]\rVert_{{\mathbf S}_2}^2&=\sum_{n=-\infty}^\infty\lvert a_n\rvert^2. \label{b0b}\end{aligned}$$ ## Three functions on the equator Below we will make use of three trigonometric polynomials on the equator. 
In order to define them, we first recall that by [\[b11\]](#b11){reference-type="eqref" reference="b11"}, the restrictions of the spherical harmonics onto the equator are $$\label{eq:restr equator Alm} Y_\ell^m(\pi/2,\varphi)=\frac{A_{\ell,m}}{\sqrt{2\pi}}e^{im\varphi}, \quad\text{ where }\quad A_{\ell,m}:=(-1)^m\sqrt{(2\ell+1)\cdot \frac{(\ell-m)!}{(\ell+m)!}}P_\ell^m(0).$$ Now let us denote $$\begin{aligned} x_\ell(\varphi)&=\sqrt{2\pi}\sum_{\genfrac{}{}{0pt}{1}{m=-\ell}{\text{$m-\ell$ even}}}^\ell Y_\ell^m(\pi/2,\varphi) = \sum_{\genfrac{}{}{0pt}{1}{m=-\ell}{\text{$m-\ell$ even}}}^\ell A_{\ell,m}e^{im\varphi}, \notag \\ \label{eq:y ell def} y_\ell(\varphi)&=\sum_{\genfrac{}{}{0pt}{1}{m=-\ell}{\text{$m-\ell$ even}}}^\ell A_{\ell,m}^2 e^{im\varphi}, \\ z_\ell(\varphi) &= \sum_{\genfrac{}{}{0pt}{1}{m=-\ell+2}{\text{$m-\ell$ even}}}^{\ell-2} \frac{2}{\sqrt{\pi}}(1-(m/\ell)^2)^{-1/4}e^{im\varphi}. \label{b6}\end{aligned}$$ We will show that $z_\ell$ is the asymptotic form of $x_\ell$ (with $A_{\ell,m}$ replaced by $\lvert A_{\ell,m}\rvert$) for large $\ell$. ## Reduction to an operator in $L^2(\partial\Omega)$ Recall that $V_\ell[\sigma]$ is the operator acting in the $(\ell+1)$-dimensional subspace $\mathop{\mathrm{Ran}}{\mathbf P}_\ell\subset L^2(\Omega)$. Below we reduce the analysis of the operator $V_\ell[\sigma]$ to the analysis of a simple operator in $L^2(\partial\Omega)$. ****Lemma** 7**. *The non-zero eigenvalues of $V_\ell[\sigma]$ coincide (including multiplicities) with the non-zero eigenvalues of the operator $$W_\ell[\sigma]:=C[x_\ell]M[\sigma]C[x_\ell]\quad \text{ in $L^2(\partial\Omega)$.} \label{b2a}$$ In particular, for every test function $f\in C^\infty({\mathbb R})$ vanishing at the origin, $$\mathop{\mathrm{Tr}}f(V_\ell[\sigma])=\mathop{\mathrm{Tr}}f(W_\ell[\sigma]). \label{b2}$$* *Proof.* Denote by $e_m(\varphi)=\frac1{\sqrt{2\pi}}e^{im\varphi}$, $m\in{\mathbb Z}$, the standard orthonormal basis in $L^2(\partial\Omega)$. 
From [\[eq:restr equator Alm\]](#eq:restr equator Alm){reference-type="eqref" reference="eq:restr equator Alm"} it follows that $$C[x_\ell]e_m=Y_\ell^m(\pi/2,\cdot).$$ (For $m-\ell$ odd, $C[x_\ell]e_m=0$.) By the definition of $x_\ell$, the range of the operator $W_\ell[\sigma]$ belongs to the $(\ell+1)$-dimensional subspace of $L^2(\partial\Omega)$ spanned by the elements $e_m$, $m=-\ell,\dots,\ell$, with $m-\ell$ even. Let us compute the matrix of $W_\ell[\sigma]$ in this basis: $$\begin{aligned} \langle W_\ell[\sigma]e_m,e_{m'}\rangle_{L^2(\partial\Omega)} &= \langle M[\sigma]C[x_\ell]e_m,C[x_\ell]e_{m'}\rangle_{L^2(\partial\Omega)} \\ &= \int_{-\pi}^\pi \sigma(\varphi) Y_\ell^m(\pi/2,\varphi)\overline{Y_\ell^{m'}(\pi/2,\varphi)}d\varphi .\end{aligned}$$ Next, let us compute the matrix of $V_\ell[\sigma]$ in the orthonormal basis of $\mathop{\mathrm{Ran}}{\mathbf P}_\ell$ given by $Y_\ell^m$, $m=-\ell,\dots,\ell$, with $m-\ell$ even: $$\langle V_\ell[\sigma]Y_\ell^m,Y_\ell^{m'}\rangle_{L^2(\Omega)} = \int_{-\pi}^\pi \sigma(\varphi)Y_\ell^m(\pi/2,\varphi)\overline{Y_\ell^{m'}(\pi/2,\varphi)}d\varphi.$$ Comparing these two formulas, we see that the matrices of $V_\ell[\sigma]$ and $W_\ell[\sigma]$ coincide, hence the result. ◻ ## Asymptotics of $x_\ell$ Our next step is to replace $x_\ell$ by its asymptotics as $\ell\to\infty$. We recall the constants $A_{\ell,m}$ defined in [\[eq:restr equator Alm\]](#eq:restr equator Alm){reference-type="eqref" reference="eq:restr equator Alm"}. For $m-\ell$ even we have $$P_\ell^m(0)=(-1)^{(m+\ell)/2}\frac{2^m}{\sqrt{\pi}}\frac{\Gamma\left(\frac{\ell+m+1}{2}\right)}{\Gamma\left(\frac{\ell-m}{2}+1\right)},$$ and therefore $$A_{\ell,m}=(-1)^m(-1)^{(m+\ell)/2}2^m\sqrt{\frac{(2\ell+1)}{\pi}\frac{(\ell-m)!}{(\ell+m)!}}\frac{\Gamma\left(\frac{\ell+m+1}{2}\right)}{\Gamma\left(\frac{\ell-m}{2}+1\right)}.$$ (The sign $(-1)^m(-1)^{(m+\ell)/2}$ is a matter of normalisation of spherical harmonics and plays no role in what follows.) 
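These closed-form expressions are easy to sanity-check numerically. In the sketch below (our own check, not part of the proofs), the value of $P_\ell^m(0)$ is compared against SciPy's associated Legendre function, and the squares $A_{\ell,m}^2$ are evaluated in log-Gamma form; the final sum rule $\sum_{m-\ell\text{ even}}A_{\ell,m}^2=2\ell+1$ follows from the addition theorem for spherical harmonics and serves as an independent cross-check.

```python
import numpy as np
from scipy.special import lpmv, gammaln

def P0(l, m):
    """Closed form for P_l^m(0), valid for l+m even (value at the equator)."""
    return (-1)**((l + m)//2) * 2.0**m / np.sqrt(np.pi) \
        * np.exp(gammaln((l + m + 1)/2) - gammaln((l - m)/2 + 1))

def A2(l, m):
    """A_{l,m}^2 = (2l+1)*(l-m)!/(l+m)! * P_l^m(0)^2, computed in log form."""
    logr = gammaln(l - m + 1) - gammaln(l + m + 1)
    return (2*l + 1)*np.exp(logr)*P0(l, m)**2

# closed form for P_l^m(0) agrees with scipy's associated Legendre function
for l, m in [(4, 0), (5, 3), (10, 6)]:
    assert np.allclose(P0(l, m), lpmv(m, l, 0.0), rtol=1e-8)

# pointwise asymptotics: A_{l,0}^2 -> 4/pi as l -> infinity
assert abs(A2(2000, 0) - 4/np.pi) < 1e-2

# cross-check (addition theorem): the sum of A_{l,m}^2 over the parity
# class m-l even equals exactly 2l+1
l = 50
total = sum(A2(l, m) for m in range(-l, l + 1, 2))
assert abs(total - (2*l + 1)) < 1e-6
```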
Below we prove that $$A_{\ell,m}^2=\frac{4}{\pi}(1-(m/\ell)^2)^{-1/2}+\text{error term}$$ as $\ell\to\infty$ and $\ell^2-m^2\to\infty$, with suitable estimates for the error term. This is the key technical ingredient of the proof of Theorem [**Theorem** 4](#thm.b1){reference-type="ref" reference="thm.b1"}. ****Lemma** 8**. *As $\ell\to\infty$, we have $$\begin{aligned} \sum_{m=-\ell+1}^{\ell-1}& (1-(m/\ell)^2)^{-\frac12}=O(\ell), \label{b3} \\ \sum_{m=-\ell}^\ell& A_{\ell,m}^2=O(\ell), \label{b4} \\ \sum_{m=-\ell+1}^{\ell-1}& \left\lvert A_{\ell,m}^2-\frac{4}{\pi}(1-(m/\ell)^2)^{-\frac 12}\right\rvert=O(\ell^{2/3}), \label{b5} \\ \sup_{\lvert m\rvert\leq \ell}&A_{\ell,m}^2=O(\sqrt{\ell}). \label{b5a}\end{aligned}$$* *Proof.* Using the duplication formula for the Gamma function, $$\Gamma(z)\Gamma(z+\tfrac12)=\sqrt{\pi}2^{1-2z}\Gamma(2z),$$ we can rewrite the expression for $A_{\ell,m}$ as $$A_{\ell,m}^2=\frac{2\ell+1}{\pi}\gamma(\ell-m)\gamma(\ell+m), \quad\text{ where }\quad \gamma(x):=\frac{\Gamma(\frac{x}{2}+\frac12)}{\Gamma(\frac{x}{2}+1)}. \label{b1}$$ By [\[b1\]](#b1){reference-type="eqref" reference="b1"}, we have $A_{\ell,-m}^2=A_{\ell,m}^2$ and so instead of summing from $-\ell$ to $\ell$ we can estimate the sums from $m=0$ to $m=\ell$. The estimate [\[b3\]](#b3){reference-type="eqref" reference="b3"} is elementary: $$\sum_{m=0}^{\ell-1}(1-(m/\ell)^2)^{-1/2} \leq \int_0^\ell(1-(x/\ell)^2)^{-1/2}dx = \ell\int_0^1(1-x^2)^{-1/2}dx =\frac{\pi}{2}\ell. \label{b3a}$$ Next, we use the asymptotic formula for the Gamma function (see e.g. 
[@AS (6.1.47)]) $$\frac{\Gamma(z+a)}{\Gamma(z+b)}=z^{a-b}(1+O(1/z)), \quad z\to+\infty.$$ For the function $\gamma(x)$ from [\[b1\]](#b1){reference-type="eqref" reference="b1"} this yields $$\gamma(x)=\sqrt{2/x}(1+O(1/x)), \quad x\to\infty.$$ Let $0\leq m\leq \ell-\ell^{1/3}$; then $$\begin{aligned} A_{\ell,m}^2 &= \frac{2\ell+1}{\pi}\gamma(\ell-m)\gamma(\ell+m) \\ &=\frac{2\ell}{\pi}(1+O(\ell^{-1})) \sqrt{\frac{2}{\ell-m}}(1+O(\ell^{-1/3})) \sqrt{\frac{2}{\ell+m}}(1+O(\ell^{-1})) \\ &=\frac{4}{\pi}\frac{\ell}{\sqrt{(\ell-m)(\ell+m)}}(1+O(\ell^{-1/3})) \\ &=\frac{4}{\pi}(1-(m/\ell)^2)^{-\frac12}(1+O(\ell^{-1/3})),\end{aligned}$$ and so $$\sum_{0\leq m\leq \ell-\ell^{1/3}} \left\lvert A_{\ell,m}^2-\frac{4}{\pi}(1-(m/\ell)^2)^{-\frac 12}\right\rvert \leq C\ell^{-1/3}\sum_{0\leq m\leq \ell-\ell^{1/3}}(1-(m/\ell)^2)^{-1/2}.$$ Using [\[b3\]](#b3){reference-type="eqref" reference="b3"}, we conclude that $$\sum_{0\leq m\leq \ell-\ell^{1/3}} \left\lvert A_{\ell,m}^2-\frac{4}{\pi}(1-(m/\ell)^2)^{-\frac 12}\right\rvert = O(\ell^{2/3}).$$ Let us estimate the sum over $\ell-\ell^{1/3}\leq m< \ell$. We have $$\begin{aligned} \sum_{\ell-\ell^{1/3}\leq m< \ell}(1-(m/\ell)^2)^{-1/2} &\leq \int_{\ell-\ell^{1/3}}^\ell(1-(x/\ell)^2)^{-1/2}dx \\ &= \ell\int_{1-\ell^{-2/3}}^1(1-x^2)^{-1/2}dx = \ell \ O(\ell^{-1/3})=O(\ell^{2/3})\end{aligned}$$ and similarly, using that $\gamma(x)=O(x^{-1/2})$, $$\begin{aligned} \sum_{\ell-\ell^{1/3}\leq m< \ell}A_{\ell,m}^2 &\leq C\ell \sum_{\ell-\ell^{1/3}\leq m< \ell}\gamma(\ell-m)\gamma(\ell+m) \\ &\leq C\ell^{1/2}\sum_{0\leq m\leq \ell^{1/3}}\gamma(m) = C\ell^{1/2}\sum_{0\leq m\leq \ell^{1/3}}O(m^{-1/2}) \\ &\leq C\ell^{1/2}\ell^{1/6} = C\ell^{2/3}.\end{aligned}$$ This yields [\[b5\]](#b5){reference-type="eqref" reference="b5"}. The estimate [\[b4\]](#b4){reference-type="eqref" reference="b4"} follows from [\[b3\]](#b3){reference-type="eqref" reference="b3"} and [\[b5\]](#b5){reference-type="eqref" reference="b5"}. 
Finally, let us check [\[b5a\]](#b5a){reference-type="eqref" reference="b5a"}; we have $$\sup_{\lvert m\rvert\leq \ell}A_{\ell,m}^2= \frac{2\ell+1}{\pi}\sup_{\lvert m\rvert\leq \ell}\gamma(\ell-m)\gamma(\ell+m) =\frac{2\ell+1}{\pi}O(1/\sqrt{\ell})=O(\sqrt{\ell}),$$ as required. ◻ ## Proof of Lemma [**Lemma** 6](#lma.b1){reference-type="ref" reference="lma.b1"} {#proof-of-lemma-lma.b1} *Proof.* Since $\sigma$ is bounded, we have $$\begin{aligned} \lVert V_\ell[\sigma]\rVert &= \lVert W_\ell[\sigma]\rVert = \lVert C[x_\ell]M[\sigma]C[x_\ell]\rVert \leq \lVert M[\sigma]\rVert\lVert C[x_\ell]\rVert^2 \\ &= \lVert M[\sigma]\rVert\lVert C[x_\ell]^2\rVert.\end{aligned}$$ Using the notation $y_\ell$ (see [\[eq:y ell def\]](#eq:y ell def){reference-type="eqref" reference="eq:y ell def"}), we find $C[x_\ell]^2=C[y_\ell]$. Finally, from [\[b0a\]](#b0a){reference-type="eqref" reference="b0a"} $$\lVert C[y_\ell]\rVert=\sup_m A_{\ell,m}^2=O(\sqrt{\ell}),$$ where we have used Lemma [**Lemma** 8](#lma.b2){reference-type="ref" reference="lma.b2"} at the last step. The proof is complete. ◻ ## Asymptotics of traces of model operators Lemma [**Lemma** 8](#lma.b2){reference-type="ref" reference="lma.b2"} suggests that one may be able to replace $x_\ell$ in [\[b2a\]](#b2a){reference-type="eqref" reference="b2a"} by $z_\ell$, as defined in [\[b6\]](#b6){reference-type="eqref" reference="b6"}. (We dispose of the sign $(-1)^m(-1)^{(m+\ell)/2}$ in the expression for $A_{\ell,m}$ because only the squares $A_{\ell,m}^2$ appear in the final expression.) In preparation for this, we need a proposition. Observe that the Fourier coefficients of $z_\ell$ have the form $\omega(m/\ell)$ with $\omega(x)=2\pi^{-1/2}(1-x^2)_+^{-1/4}$. The proposition below discusses the case where $\omega$ is replaced by a smooth function. ****Proposition** 9**. 
*Let $\omega\in C_0^\infty(-1,1)$ be a non-negative function, and let $\omega_\ell$ be the function on $\partial\Omega$, $$\omega_\ell(\varphi) = \sum_{\genfrac{}{}{0pt}{1}{m=-\ell+2}{\text{$m-\ell$ \rm even}}}^{\ell-2} \omega(m/\ell)e^{im\varphi}.$$ Then for any test function $f\in C^\infty({\mathbb R})$ vanishing at the origin, $$\lim_{\ell\to\infty} \frac{1}{\ell+1}\mathop{\mathrm{Tr}}f(C[\omega_\ell]M[\sigma_\text{\rm even}]C[\omega_\ell]^*) = \frac1{4\pi}\int_{-\pi}^{\pi} \int_{-1}^1 f\bigl(\lvert\omega(\xi)\rvert^2\sigma_\text{\rm even}(\varphi)\bigr)d\xi\, d\varphi.$$* The operator $C[\omega_\ell]M[\sigma_\text{\rm even}]C[\omega_\ell]^*$ is essentially a pseudodifferential operator with the symbol $\lvert\omega(\xi)\rvert^2\sigma_\text{\rm even}(\varphi)$ and the semiclassical parameter $1/\ell$, and so the above proposition can be derived from the semiclassical spectral theory for general pseudodifferential operators, see e.g. [@DS]. In order to make the paper self-contained, we give a direct proof in the Appendix. Our next aim is to replace a smooth function $\omega$ in the previous proposition by the function $2\pi^{-1/2}(1-x^2)_+^{-1/4}$ which has singularities at $x=\pm1$. This is done through an approximation argument. We need to prepare a general operator-theoretic statement. ## Aside on eigenvalue estimates ****Proposition** 10**. *Let $A$ and $B$ be compact self-adjoint operators in a Hilbert space and let $f\in C^1({\mathbb R})$. Then $$\left\lvert\mathop{\mathrm{Tr}}(f(A)-f(B))\right\rvert\leq \lVert f'\rVert_{L^\infty}\lVert A-B\rVert_{{\mathbf S}_1}.$$* *Proof.* This estimate is well-known in the framework of the spectral shift function theory (see e.g. [@BYa]), but for the sake of completeness we give a direct proof in this simple case. In fact, we have already seen a similar argument in the proof of Theorem [**Theorem** 2](#thm.a2){reference-type="ref" reference="thm.a2"} in Section [2.9](#sec.b6){reference-type="ref" reference="sec.b6"}. 
First assume that our Hilbert space is finite dimensional, i.e. $A$ and $B$ are both $N\times N$ Hermitian matrices. Denote $X=B-A$ and write $X=X_+-X_-$ with both $X_+$ and $X_-$ positive semi-definite. Then $$-X_-\leq X\leq X_+$$ and therefore by the variational principle (with eigenvalues ordered non-increasingly) we have $$\begin{aligned} \lambda_n(A-X_-)&\leq \lambda_n(A)\leq \lambda_n(A+X_+), \\ \lambda_n(A-X_-)&\leq \lambda_n(B)\leq \lambda_n(A+X_+)\end{aligned}$$ for $n=1,\dots,N$. It follows that $$\begin{aligned} \lvert\mathop{\mathrm{Tr}}(f(A)-f(B))\rvert \leq \sum_{n=1}^N\lvert f(\lambda_n(B))-f(\lambda_n(A))\rvert &\leq \lVert f'\rVert_{L^\infty}\sum_{n=1}^N\bigl(\lambda_n(A+X_+)-\lambda_n(A-X_-)\bigr) \\ &= \lVert f'\rVert_{L^\infty}(\mathop{\mathrm{Tr}}(A+X_+)-\mathop{\mathrm{Tr}}(A-X_-)) \\ &= \lVert f'\rVert_{L^\infty}\mathop{\mathrm{Tr}}(X_++X_-).\end{aligned}$$ Finally, using the spectral decomposition of $X$, the operators $X_+$ and $X_-$ can be chosen such that $\lVert X\rVert_{{\mathbf S}_1}=\mathop{\mathrm{Tr}}(X_++X_-)$. If the Hilbert space is infinite dimensional, the proof proceeds along the same lines, but one needs to label the positive and negative eigenvalues of $A$ and $B$ separately, as there may be infinitely many eigenvalues on both sides of the origin. ◻ ## Swapping $\omega_\ell$ for $z_\ell$ ****Theorem** 11**. *Let $z_\ell$ be as in [\[b6\]](#b6){reference-type="eqref" reference="b6"}. 
Then for any test function $f\in C^\infty({\mathbb R})$ vanishing at the origin, $$\lim_{\ell\to\infty} \frac{1}{\ell+1}\mathop{\mathrm{Tr}}f(C[z_\ell]M[\sigma_\text{\rm even}]C[z_\ell]) = \frac1{4\pi}\int_{-\pi}^{\pi} \int_{-1}^1 f\left(\frac{4\sigma_\text{\rm even}(\varphi)}{\pi\sqrt{1-\xi^2}}\right)d\xi\, d\varphi.$$* *Proof.* *Step 1: trace class estimate.* For any $\varepsilon>0$, let $\omega^{(\varepsilon)}\in C_0^\infty(-1,1)$ be a non-negative function such that $$\begin{aligned} \omega^{(\varepsilon)}(\xi)& = 2\pi^{-1/2}(1-\xi^2)_+^{-1/4}, \quad \lvert\xi\rvert<1-\varepsilon, \\ \omega^{(\varepsilon)}(\xi)&\leq 2\pi^{-1/2}(1-\xi^2)_+^{-1/4}, \quad 1-\varepsilon\leq\lvert\xi\rvert<1.\end{aligned}$$ Using [\[b0b\]](#b0b){reference-type="eqref" reference="b0b"} and [\[b3a\]](#b3a){reference-type="eqref" reference="b3a"}, we obtain: $$\begin{aligned} \lVert C[z_\ell]\rVert_{{\mathbf S}_2}^2 &= \frac{4}{\pi} \sum_{\genfrac{}{}{0pt}{1}{m=-\ell+2}{\text{$m-\ell$ even}}}^{\ell-2} (1-(m/\ell)^2)^{-1/2}\leq C\ell, \\ \lVert C[\omega^{(\varepsilon)}_\ell]\rVert_{{\mathbf S}_2}^2 &= \sum_{\genfrac{}{}{0pt}{1}{m=-\ell+2}{\text{$m-\ell$ even}}}^{\ell-2} \lvert\omega^{(\varepsilon)}(m/\ell)\rvert^2 \leq C\ell, \\ \lVert C[z_\ell]-C[\omega^{(\varepsilon)}_\ell]\rVert_{{\mathbf S}_2}^2 &\leq \frac{4}{\pi} \sum_{\genfrac{}{}{0pt}{1}{1-\varepsilon<\lvert m/\ell\rvert<1}{\text{$m-\ell$ even}}} (1-(m/\ell)^2)^{-1/2} \leq C\ell\sqrt{\varepsilon}.\end{aligned}$$ Writing $$\begin{aligned} C[z_\ell]&M[\sigma_\text{\rm even}]C[z_\ell] - C[\omega^{(\varepsilon)}_\ell]M[\sigma_\text{\rm even}]C[\omega^{(\varepsilon)}_\ell] \\ &= C[z_\ell-\omega^{(\varepsilon)}_\ell]M[\sigma_\text{\rm even}]C[z_\ell] + C[\omega^{(\varepsilon)}_\ell]M[\sigma_\text{\rm even}]C[z_\ell-\omega_\ell^{(\varepsilon)}]\end{aligned}$$ and using the triangle inequality for the trace norm and the Cauchy-Schwarz inequality in Schatten classes, we find $$\begin{aligned} \lVert C[z_\ell]&M[\sigma_\text{\rm 
even}]C[z_\ell] - C[\omega^{(\varepsilon)}_\ell]M[\sigma_\text{\rm even}]C[\omega^{(\varepsilon)}_\ell]\rVert_{{\mathbf S}_1} \\ \leq& \lVert C[z_\ell-\omega^{(\varepsilon)}_\ell]\rVert_{{\mathbf S}_2} \lVert M[\sigma_\text{\rm even}]\rVert \lVert C[z_\ell]\rVert_{{\mathbf S}_2} \\ &+ \lVert C[\omega^{(\varepsilon)}_\ell]\rVert_{{\mathbf S}_2} \lVert M[\sigma_\text{\rm even}]\rVert \lVert C[z_\ell-\omega^{(\varepsilon)}]\rVert_{{\mathbf S}_2} \leq C\ell\varepsilon^{1/4}.\end{aligned}$$ From here using Proposition [**Proposition** 10](#prp.b5){reference-type="ref" reference="prp.b5"}, we find $$\left\lvert \mathop{\mathrm{Tr}}f(C[z_\ell]M[\sigma_\text{\rm even}]C[z_\ell]) - \mathop{\mathrm{Tr}}f(C[\omega^{(\varepsilon)}_\ell]M[\sigma_\text{\rm even}]C[\omega^{(\varepsilon)}_\ell]) \right\rvert \leq C\lVert f'\rVert_{L^\infty}\ell\varepsilon^{1/4}.$$ *Step 2: estimating $\limsup$ and $\liminf$.* From the previous step, using Proposition [**Proposition** 9](#prp.b4){reference-type="ref" reference="prp.b4"} with $\omega=\omega^{(\varepsilon)}$, we find $$\begin{aligned} \limsup_{\ell\to\infty}& \frac{1}{\ell+1}\mathop{\mathrm{Tr}}f(C[z_\ell]M[\sigma_\text{\rm even}]C[z_\ell]) \\ &\leq \limsup_{\ell\to\infty} \frac{1}{\ell+1}\mathop{\mathrm{Tr}}f(C[\omega^{(\varepsilon)}_\ell]M[\sigma_\text{\rm even}]C[\omega^{(\varepsilon)}_\ell]) + C\lVert f'\rVert_{L^\infty}\varepsilon^{1/4} \\ &= \frac1{4\pi}\int_{-\pi}^{\pi} \int_{-1}^1 f\left(\sigma_\text{\rm even}(\varphi) \lvert\omega^{(\varepsilon)}(\xi)\rvert^2 \right)d\xi\, d\varphi + C\lVert f'\rVert_{L^\infty}\varepsilon^{1/4}\end{aligned}$$ and similarly $$\begin{aligned} \liminf_{\ell\to\infty}& \frac{1}{\ell+1}\mathop{\mathrm{Tr}}f(C[z_\ell]M[\sigma_\text{\rm even}]C[z_\ell]) \\ &\geq \frac1{4\pi}\int_{-\pi}^{\pi} \int_{-1}^1 f\left(\sigma_\text{\rm even}(\varphi) \lvert\omega^{(\varepsilon)}(\xi)\rvert^2 \right)d\xi\, d\varphi - C\lVert f'\rVert_{L^\infty}\varepsilon^{1/4}.\end{aligned}$$ Sending $\varepsilon\to0$, we 
obtain the required statement. ◻ ## Proof of Theorem [**Theorem** 4](#thm.b1){reference-type="ref" reference="thm.b1"} {#proof-of-theorem-thm.b1} We start by observing that for $f(x)=\text{const}$ the conclusion of the theorem trivially holds. Thus, by subtracting $f(0)$ from $f$ if necessary, we may assume that $f$ vanishes at the origin. By [\[b2\]](#b2){reference-type="eqref" reference="b2"}, we need to prove the relation $$\lim_{\ell\to\infty} \frac{1}{\ell+1}\mathop{\mathrm{Tr}}f(C[x_\ell]M[\sigma]C[x_\ell]) = \frac1{4\pi}\int_{-\pi}^{\pi} \int_{-1}^1 f\left(\frac{4\sigma_\text{\rm even}(\varphi)}{\pi\sqrt{1-\xi^2}}\right)d\xi\, d\varphi.$$ First let us estimate the Hilbert-Schmidt norm of $C[x_\ell-z_\ell]$. Using Lemma [**Lemma** 8](#lma.b2){reference-type="ref" reference="lma.b2"}, we find $$\begin{aligned} \lVert C[x_\ell-z_\ell]\rVert_{{\mathbf S}_2}^2 &= \sum_{\genfrac{}{}{0pt}{1}{m=-\ell+2}{\text{$m-\ell$ even}}}^{\ell-2} \lvert A_{\ell,m}-\frac{2}{\sqrt{\pi}}(1-(m/\ell)^2)^{-1/4}\rvert^2 +2A_{\ell,\ell}^2 \\ &\leq \sum_{\genfrac{}{}{0pt}{1}{m=-\ell+2}{\text{$m-\ell$ even}}}^{\ell-2} \lvert A_{\ell,m}^2-\frac{4}{\pi}(1-(m/\ell)^2)^{-1/2}\rvert +2A_{\ell,\ell}^2 \\ &\leq \sum_{m=-\ell+1}^{\ell-1} \lvert A_{\ell,m}^2-\frac{4}{\pi}(1-(m/\ell)^2)^{-1/2}\rvert +2A_{\ell,\ell}^2 =O(\ell^{2/3}).\end{aligned}$$ From here, as in the previous proof, writing $$\begin{aligned} C[x_\ell]M[\sigma]C[x_\ell] &- C[z_\ell]M[\sigma]C[z_\ell] \\ &= C[x_\ell-z_\ell]M[\sigma]C[x_\ell] + C[z_\ell]M[\sigma]C[x_\ell-z_\ell]\end{aligned}$$ and using the Cauchy-Schwarz inequality for Schatten norms, we find $$\lVert C[x_\ell]M[\sigma]C[x_\ell] - C[z_\ell]M[\sigma]C[z_\ell]\rVert_{{\mathbf S}_1} =O(\ell^{5/6}).$$ From here and the previous theorem, using Proposition [**Proposition** 10](#prp.b5){reference-type="ref" reference="prp.b5"}, we obtain the required asymptotic relation.
◻ # The Birman-Schwinger principle and resolvent norm estimates {#sec.cc} ## Definitions As in Section [3](#sec.b){reference-type="ref" reference="sec.b"}, we denote by ${\mathbf P}_\ell$ the orthogonal projection in $L^2(\Omega)$ onto the $(\ell+1)$-dimensional eigenspace of the Neumann Laplacian $H[0]$ corresponding to the eigenvalue $\ell(\ell+1)$. We also denote by ${\mathbf P}_\ell^\perp$ the projection onto the orthogonal complement to this eigenspace in $L^2(\Omega)$, so that ${\mathbf P}_\ell+{\mathbf P}_\ell^\perp=I$, the identity operator. Below we will make use of the orthogonal decomposition $$L^2(\Omega)=\mathop{\mathrm{Ran}}{\mathbf P}_\ell\oplus \mathop{\mathrm{Ran}}{\mathbf P}_\ell^\perp.$$ Observe that the Neumann Laplacian $H[0]$ is diagonal with respect to this decomposition. For a fixed $\ell$, let $h_\ell^\perp[\sigma]$ be the restriction of the quadratic form $h[\sigma]$ onto the subspace $\mathop{\mathrm{Ran}}{\mathbf P}_\ell^\perp$, i.e. $$h_\ell^\perp[\sigma](u) = \int_{\Omega}\lvert\nabla u\rvert^2dx+\int_{-\pi}^\pi\sigma(\varphi)\lvert u(\pi/2,\varphi)\rvert^2d\varphi, \quad u\in \mathop{\mathrm{Ran}}{\mathbf P}_\ell^\perp\cap H^1(\Omega). \label{d1}$$ We denote by $H_\ell^\perp[\sigma]$ the self-adjoint operator in $\mathop{\mathrm{Ran}}{\mathbf P}_\ell^\perp$, corresponding to this quadratic form. Let $$R(\lambda)=(H[0]-\lambda I)^{-1}$$ be the resolvent of the Neumann Laplacian. This resolvent is a meromorphic function of $\lambda$ with poles at $\lambda=\ell(\ell+1)$, $\ell=0,1,2,\dots$, and can be represented as the sum $$R(\lambda)=\sum_{\ell=0}^\infty \frac{{\mathbf P}_\ell}{\ell(\ell+1)-\lambda}$$ convergent in the strong operator topology. For a fixed $\ell$, we set $$R_\ell^\perp(\lambda)=\sum_{k\not=\ell}\frac{{\mathbf P}_k}{k(k+1)-\lambda}=R(\lambda)-\frac{{\mathbf P}_\ell}{\ell(\ell+1)-\lambda},$$ i.e. we subtract from $R(\lambda)$ its singular part at $\lambda=\ell(\ell+1)$.
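This subtraction of the singular part can be illustrated on a finite truncation of the model. The following numpy sketch (illustrative only; the truncation size `K` is a hypothetical choice) builds a diagonal model of $H[0]$ with eigenvalue $k(k+1)$ of multiplicity $k+1$, and checks that, unlike $R(\lambda)$, the reduced resolvent $R_\ell^\perp(\lambda)$ stays bounded as $\lambda\to\ell(\ell+1)$.

```python
import numpy as np

# Finite model of the Neumann Laplacian: diagonal matrix with
# eigenvalue k(k+1) repeated with multiplicity k+1, k = 0..K.
K = 12
eigs = np.concatenate([np.full(k + 1, k * (k + 1.0)) for k in range(K + 1)])
H0 = np.diag(eigs)

ell = 5  # we study the pole at ell(ell+1) = 30
# Projection onto the ell-th eigenspace.
P_ell = np.diag((eigs == ell * (ell + 1.0)).astype(float))

def R(lam):
    """Full resolvent (H0 - lam I)^{-1}."""
    return np.linalg.inv(H0 - lam * np.eye(len(eigs)))

def R_perp(lam):
    """Resolvent with its singular part at lam = ell(ell+1) subtracted."""
    return R(lam) - P_ell / (ell * (ell + 1.0) - lam)

# Approaching the pole: ||R|| blows up, ||R_perp|| stays bounded.
for lam in (30.0 - 1e-3, 30.0 - 1e-6):
    print(np.linalg.norm(R(lam), 2), np.linalg.norm(R_perp(lam), 2))
```

Here the bound on $\lVert R_\ell^\perp(\lambda)\rVert$ near the pole is governed by the spectral gap to the neighbouring eigenvalues $k(k+1)$, $k=\ell\pm1$, exactly as in the sum representation above.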
Below we will use a tilde to denote the restriction of integral operators onto the equator. More precisely, let $K$ be an integral operator in $L^2(\Omega)$ with the integral kernel $k[\theta,\varphi;\theta',\varphi']$, i.e. $$K: u\mapsto \int_{-\pi}^\pi \int_0^{\pi/2}k[\theta,\varphi;\theta',\varphi']u(\theta',\varphi')\sin\theta' d\theta'\, d\varphi' .$$ Then we denote by $\widetilde K$ the integral operator in $L^2(\partial \Omega)$, $$\label{eq:restriction int operator} \widetilde K: u\mapsto \int_{-\pi}^\pi k[\pi/2,\varphi;\pi/2,\varphi']u(\varphi')d\varphi'.$$ In particular, we will use the operators $\widetilde R(\lambda)$, $\widetilde{\mathbf P}_\ell$, $\widetilde R_\ell^\perp(\lambda)$ in $L^2(\partial\Omega)$. ## The Birman-Schwinger principle Below $\sigma_0$ is a non-zero constant. ****Proposition** 12**. *Let $\lambda\in{\mathbb R}\setminus\{\ell(\ell+1): \ell\geq0\}$. Then $$\dim\mathop{\mathrm{Ker}}(H[\sigma_0]-\lambda I) = \dim\mathop{\mathrm{Ker}}(I+\sigma_0\widetilde R(\lambda)). \label{c2a}$$ Similarly, $$\dim\mathop{\mathrm{Ker}}(H_\ell^\perp[\sigma_0]-\lambda I) = \dim\mathop{\mathrm{Ker}}(I+\sigma_0\widetilde R_\ell^\perp(\lambda)). \label{c2}$$* *Proof.* We give only a sketch of the proof of the first identity; the second one is proved in the same way. For typographical reasons, let us write $H_0$ in place of $H[0]$. Using the functional calculus for self-adjoint operators, we define the operator $\lvert H_0-\lambda\rvert^{-1/2}$ and set $$(H_0-\lambda)^{-1/2}:=\lvert H_0-\lambda\rvert^{-1/2}\mathop{\mathrm{sign}}(H_0-\lambda);$$ then $$(H_0-\lambda)^{-1}=(H_0-\lambda)^{-1/2}\lvert H_0-\lambda\rvert^{-1/2}.$$ Observe that $\lvert H_0-\lambda\rvert^{-1/2}$ and $(H_0-\lambda)^{-1/2}$ map $L^2(\Omega)$ onto the Sobolev class $W^1_2(\Omega)$. Furthermore, let $T: W^1_2(\Omega)\to L^2(\partial\Omega)$ be the operator of restriction onto the boundary (also known as the trace operator). This operator is bounded and compact; see e.g.
[@Necas Theorem 2.6.2]. It follows that the operators $$T\lvert H_0-\lambda\rvert^{-1/2}, \quad T(H_0-\lambda)^{-1/2}: L^2(\Omega)\to L^2(\partial\Omega)$$ are bounded and compact. As both $\lvert H_0-\lambda\rvert^{-1/2}$ and $(H_0-\lambda)^{-1/2}$ are linear isomorphisms, we have $$\begin{aligned} \dim\mathop{\mathrm{Ker}}(H[\sigma_0]-\lambda I) &= \dim\mathop{\mathrm{Ker}}\biggl(\lvert H_0-\lambda\rvert^{-1/2}(H[\sigma_0]-\lambda I)(H_0-\lambda)^{-1/2}\biggr).\end{aligned}$$ We recall that the quadratic form of $H[\sigma_0]$ is $$\begin{aligned} h[\sigma_0](u)&= \int_{\Omega}\lvert\nabla u\rvert^2dx+\sigma_0\int_{-\pi}^\pi\lvert u(\pi/2,\varphi)\rvert^2d\varphi \\ &=h[0](u)+\sigma_0\int_{-\pi}^\pi\lvert u(\pi/2,\varphi)\rvert^2d\varphi, \quad u\in W^1_2(\Omega).\end{aligned}$$ From here we obtain $$\lvert H_0-\lambda\rvert^{-1/2}(H[\sigma_0]-\lambda I)(H_0-\lambda)^{-1/2} = I+\sigma_0(T\lvert H_0-\lambda\rvert^{-1/2})^*(T(H_0-\lambda)^{-1/2}).$$ Using the identity $$\dim\mathop{\mathrm{Ker}}(I+K_1K_2)=\dim\mathop{\mathrm{Ker}}(I+K_2K_1)$$ for any two compact operators $K_1$, $K_2$, we find $$\begin{aligned} \dim\mathop{\mathrm{Ker}}(H[\sigma_0]-\lambda I) &= \dim\mathop{\mathrm{Ker}}\biggl(I+\sigma_0 (T\lvert H_0-\lambda\rvert^{-1/2})^*T(H_0-\lambda)^{-1/2}\biggr) \\ &= \dim\mathop{\mathrm{Ker}}\biggl(I+\sigma_0 T(H_0-\lambda)^{-1/2}(T\lvert H_0-\lambda\rvert^{-1/2})^*\biggr) \\ &=\dim\mathop{\mathrm{Ker}}(I+\sigma_0\widetilde R(\lambda)),\end{aligned}$$ which proves [\[c2a\]](#c2a){reference-type="eqref" reference="c2a"}. ◻ ## Norm estimate for $\widetilde R_\ell^\perp(\lambda)$ ****Lemma** 13**.
*We have $$\sup_{\lambda\in[\ell^2,(\ell+1)^2]}\lVert\widetilde R_\ell^\perp(\lambda)\rVert=O(\ell^{-1/2}\log \ell),\quad \ell\to\infty.$$* *Proof.* We have $$\label{eq:R perp tilde sum} \widetilde R_\ell^\perp(\lambda)=\sum_{k\not=\ell}\frac{\widetilde{\mathbf P}_k}{k(k+1)-\lambda}.$$ Recall that the trigonometric polynomial $y_{\ell}$ is defined in [\[eq:y ell def\]](#eq:y ell def){reference-type="eqref" reference="eq:y ell def"}. Then, by the definition [\[b11a\]](#b11a){reference-type="eqref" reference="b11a"} of $\mathbf{P}_{\ell}$ as an integral operator, and $\widetilde{\mathbf{P}}_{\ell}$ its restriction [\[eq:restriction int operator\]](#eq:restriction int operator){reference-type="eqref" reference="eq:restriction int operator"}, it is plain to see that $\widetilde{\mathbf{P}}_{\ell}$ is the convolution operator $$\widetilde{\mathbf{P}}_{\ell} = C[y_{\ell}],$$ as in [\[eq:C\[a\] conv op def\]](#eq:C[a] conv op def){reference-type="eqref" reference="eq:C[a] conv op def"}. Hence, by the linearity, the operator $\widetilde{R}_{\ell}^{\perp}(\lambda)$ in [\[eq:R perp tilde sum\]](#eq:R perp tilde sum){reference-type="eqref" reference="eq:R perp tilde sum"} is the convolution operator $$\widetilde{R}_{\ell}^{\perp}(\lambda)=C[F],$$ on the unit circle, with the function $$F(\varphi)=F_{\ell;\lambda}(\varphi) = \sum\limits_{k\ne \ell}\frac{1}{k(k+1)-\lambda}y_{k}(\varphi).$$ The Fourier coefficients of $F$ are the numbers $$a_{m}=a_{\ell;m}:= \sum\limits_{k\ne \ell}\frac{1}{k(k+1)-\lambda}A_{k,m}^{2},$$ with $A_{\ell,m}$ as in [\[eq:restr equator Alm\]](#eq:restr equator Alm){reference-type="eqref" reference="eq:restr equator Alm"} (treated in Lemma [**Lemma** 8](#lma.b2){reference-type="ref" reference="lma.b2"}), with the convention that $A_{\ell,m}$ vanishes unless $|m|\le \ell$, $m-\ell$ even. 
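The identification of $\widetilde R_\ell^\perp(\lambda)$ with a convolution operator is what makes its norm computable from the coefficients $a_m$: a convolution operator is diagonalised by the Fourier basis, so its operator norm is the supremum of the moduli of the Fourier coefficients of its symbol. A discrete analogue of this fact can be checked directly — on $\mathbb{Z}_N$ a convolution is a circulant matrix, which is normal, and its spectral norm equals the largest modulus of the DFT of its kernel. A purely illustrative numpy sketch with a random kernel:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
f = rng.standard_normal(N)  # convolution kernel on Z_N

# Circulant matrix: (C u)[j] = sum_k f[(j - k) mod N] u[k]
C = np.array([[f[(j - k) % N] for k in range(N)] for j in range(N)])

# A circulant is diagonalised by the discrete Fourier basis, so its
# operator (spectral) norm equals max_m |fhat(m)|.
op_norm = np.linalg.norm(C, 2)
fourier_sup = np.abs(np.fft.fft(f)).max()
print(op_norm, fourier_sup)
```

The continuous statement used in the proof is the same computation with $\mathbb{Z}_N$ replaced by the unit circle and the DFT by the Fourier coefficients $a_m$.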
Since the operator norm of $C[F]$ is the supremum of the moduli of the Fourier coefficients of $F$, we have $$\lVert\widetilde R_\ell^\perp(\lambda)\rVert = \sup\limits_{m\in \mathbb{Z}}|a_{m}| = \sup\limits_{m\in\mathbb{Z}}\left| \sum\limits_{k\ne \ell}\frac{1}{k(k+1)-\lambda}A_{k,m}^{2} \right|,$$ which we must bound uniformly with respect to $\lambda\in[\ell^2,(\ell+1)^2]$. To bound the numbers $a_{m}$ we employ a strategy similar to the proof of Lemma [**Lemma** 6](#lma.b1){reference-type="ref" reference="lma.b1"}. First we use the uniform upper bound [\[b5a\]](#b5a){reference-type="eqref" reference="b5a"} of Lemma [**Lemma** 8](#lma.b2){reference-type="ref" reference="lma.b2"}, together with the triangle inequality, to obtain $$|a_{m}| \le C\sum\limits_{k\ne \ell}\frac{\sqrt{k}}{|k(k+1)-\lambda|}.$$ Now the result follows from the next lemma. ◻ ****Lemma** 14**. *We have $$\sup_{\lambda\in[\ell^2,(\ell+1)^2]}\sum\limits_{k\ne \ell}\frac{\sqrt{k}}{|k(k+1)-\lambda|} = O(\ell^{-1/2}\log \ell),\quad \ell\to\infty.$$* *Proof.* We consider separately the sums over $k\leq\ell-1$ and over $k\geq\ell+1$. For the first sum we have $$\sum\limits_{k=1}^{\ell-1}\frac{\sqrt{k}}{|k(k+1)-\lambda|} \leq \sum\limits_{k=1}^{\ell-1}\frac{\sqrt{k}}{\ell^2-k(k+1)}.$$ Clearly, every term in the series here is $O(\ell^{-1/2})$.
Thus, we can write $$\begin{aligned} \sum\limits_{k=1}^{\ell-1}\frac{\sqrt{k}}{\ell^2-k(k+1)} &= O(\ell^{-1/2})+\sum\limits_{k=1}^{\ell-3}\frac{\sqrt{k}}{\ell^2-k(k+1)} \\ &\leq O(\ell^{-1/2})+\int_1^{\ell-2}\frac{\sqrt{x}}{\ell^2-x(x+1)}dx.\end{aligned}$$ For the integral in the right hand side we find $$\begin{aligned} \int_1^{\ell-2}\frac{\sqrt{x}}{\ell^2-x(x+1)}dx &\leq \int_1^{\ell-2}\frac{\sqrt{x+1}}{\ell^2-(x+1)^2}dx \\ &= \int_2^{\ell-1}\frac{\sqrt{x}}{\ell^2-x^2}dx \leq \int_0^{\ell-1}\frac{\sqrt{x}}{\ell^2-x^2}dx.\end{aligned}$$ By the change of variable $x=\ell y$ we finally obtain $$\int_0^{\ell-1}\frac{\sqrt{x}}{\ell^2-x^2}dx = \ell^{-1/2}\int_0^{1-1/\ell}\frac{\sqrt{y}}{1-y^2}dy = O(\ell^{-1/2}\log \ell)$$ as $\ell\to\infty$. Similarly, for the second sum, using $\lambda\leq(\ell+1)^2$, we find $$\sum\limits_{k=\ell+1}^{\infty}\frac{\sqrt{k}}{|k(k+1)-\lambda|} \leq \sum\limits_{k=\ell+1}^{\infty}\frac{\sqrt{k}}{k(k+1)-(\ell+1)^2}.$$ For the denominator, we have $$k(k+1)-(\ell+1)^2=(k-\ell-1)(k+\ell+1)+k\geq(k-\ell)k,$$ and so $$\begin{aligned} \sum\limits_{k=\ell+1}^{\infty}\frac{\sqrt{k}}{k(k+1)-(\ell+1)^2} &\leq \sum\limits_{k=\ell+1}^{\infty}\frac{1}{\sqrt{k}(k-\ell)} = \frac1{\sqrt{\ell+1}} + \sum\limits_{k=\ell+2}^{\infty}\frac{1}{\sqrt{k}(k-\ell)} \\ &\leq\frac1{\sqrt{\ell+1}} + \int_{\ell+1}^\infty \frac{dx}{\sqrt{x}(x-\ell)} \\ &\leq\frac1{\sqrt{\ell+1}} +\ell^{-1/2} \int_{1+1/\ell}^\infty\frac{dy}{\sqrt{y}(y-1)} = O(\ell^{-1/2}\log\ell),\end{aligned}$$ as required. ◻ # Proof of Theorem [**Theorem** 5](#thm.d2){reference-type="ref" reference="thm.d2"} {#sec.d} ## Eigenvalues of $H_\ell^\perp[\sigma]$ We recall that the operator $H_\ell^\perp[\sigma]$ was defined at the beginning of the previous section. ****Lemma** 15**. *Let $\sigma_0$ be a positive number.
Then for all sufficiently large $\ell$ and all continuous functions $\sigma$ satisfying $-\sigma_0\leq \sigma \leq \sigma_0$, the operators $H_\ell^\perp[\sigma]$ have no eigenvalues in the interval $[\ell^2,(\ell+1)^2]$. Furthermore, the number of eigenvalues of each operator $H_\ell^\perp[\sigma]$ in the interval $(-\infty,\ell(\ell+1))$ is $L=\ell(\ell+1)/2$.* *Proof.* First consider the case of constant $\sigma$; let $\sigma=t\sigma_0$, $t\in[-1,1]$. We use the Birman-Schwinger principle [\[c2\]](#c2){reference-type="eqref" reference="c2"}. By Lemma [**Lemma** 13](#lma.cc1){reference-type="ref" reference="lma.cc1"}, for any $\lambda\in [\ell^2,(\ell+1)^2]$ we have $$\lVert\sigma_0 \widetilde R_\ell^\perp(\lambda)\rVert<1$$ for all sufficiently large $\ell$. It follows that the kernel of $I+t\sigma_0\widetilde R_\ell^\perp(\lambda)$ is trivial, and therefore $H_\ell^\perp[t\sigma_0]$ has no eigenvalues in $\Lambda_\ell$ for all sufficiently large $\ell$. Let us fix such a sufficiently large $\ell$. The eigenvalues of $H_\ell^\perp[t\sigma_0]$ depend continuously on $t$ and do not enter the interval $[\ell^2,(\ell+1)^2]$ as $t$ varies from $-1$ to $1$. This means that the number of these eigenvalues below $\ell^{2}$ is independent of $t$. For $t=0$ these are the Neumann eigenvalues and we can count them explicitly: each Neumann eigenvalue $k(k+1)$, $k<\ell$, contributes multiplicity $k+1$; altogether we have $\ell(\ell+1)/2$ eigenvalues below $\ell^{2}$. Now consider the case of a variable $\sigma$.
By the assumption $-\sigma_0\leq \sigma\leq \sigma_0$, we find (see definition [\[d1\]](#d1){reference-type="eqref" reference="d1"}) $$H_\ell^\perp[-\sigma_0]\leq H_\ell^\perp[\sigma]\leq H_\ell^\perp[\sigma_0].$$ Denoting the eigenvalues of $H_\ell^\perp[\sigma]$ by $\lambda_n^\perp(\sigma)$, by the variational principle (see Section [2.3](#sec.var){reference-type="ref" reference="sec.var"}) we find $$\lambda_n^\perp(-\sigma_0)\leq \lambda_n^\perp(\sigma)\leq \lambda_n^\perp(\sigma_0)$$ for all $n$. By the first part of the proof, we have, with $L=\ell(\ell+1)/2$, $$\lambda_L^\perp(\sigma_0)<\ell^{2} \quad\text{ and }\quad \lambda_{L+1}^\perp(-\sigma_0)>(\ell+1)^{2}.$$ From here we find $$\lambda_L^\perp(\sigma)<\ell^{2}, \quad\text{ and }\quad \lambda_{L+1}^\perp(\sigma)>(\ell+1)^{2},$$ as required. ◻ ## A variational lemma It is convenient to express the first step of the proof of Theorem [**Theorem** 5](#thm.d2){reference-type="ref" reference="thm.d2"} as a separate lemma. Recall that we denote $\sigma_\pm=\sigma\pm\varepsilon\lvert\sigma\rvert$. ****Lemma** 16**. 
*For any $\ell>0$ and any $\varepsilon>0$, we have the inequalities $$H_\ell^-[\sigma]\leq H[\sigma]\leq H_\ell^+[\sigma],$$ where the operators $H_\ell^-[\sigma]$ and $H_\ell^+[\sigma]$ are defined by the sums $$\begin{aligned} H_\ell^+[\sigma]&=\biggl(\ell(\ell+1)I+V_\ell[\sigma_+]\biggr)\oplus H_\ell^\perp[\sigma+\tfrac1\varepsilon\lvert\sigma\rvert], \\ H_\ell^-[\sigma]&=\biggl(\ell(\ell+1)I+V_\ell[\sigma_-]\biggr)\oplus H_\ell^\perp[\sigma-\tfrac1\varepsilon\lvert\sigma\rvert]\end{aligned}$$ in the orthogonal decomposition $$L^2(\Omega)=\mathop{\mathrm{Ran}}{\mathbf P}_\ell\oplus\mathop{\mathrm{Ran}}{\mathbf P}_\ell^\perp.$$* *Proof.* For a fixed $\ell$ and for $u\in H^1(\Omega)$, we write $$u=u_\ell+u_\ell^\perp, \quad u_\ell={\mathbf P}_\ell u, \quad u_\ell^\perp={\mathbf P}_\ell^\perp u.$$ Let us expand the quadratic form $$h[\sigma](u_\ell+u_\ell^\perp)=h[0](u_\ell+u_\ell^\perp)+\int_{-\pi}^\pi\sigma(\varphi)\lvert u_\ell(\pi/2,\varphi)+u_\ell^\perp(\pi/2,\varphi)\rvert^2d\varphi.$$ We observe that the Neumann Laplacian is diagonalised by the decomposition $u=u_\ell+u_\ell^\perp$, i.e. 
$$h[0](u_\ell+u_\ell^\perp)=h[0](u_\ell)+h[0](u_\ell^\perp).$$ Since $u_\ell$ is in the eigenspace $\mathop{\mathrm{Ran}}{\mathbf P}_\ell$ of $H[0]$, corresponding to the eigenvalue $\ell(\ell+1)$, we have $$h[0](u_\ell)=\ell(\ell+1)\lVert u_\ell\rVert^2_{L^2(\Omega)}.$$ Using our notation [\[d1\]](#d1){reference-type="eqref" reference="d1"}, we can write $$h[0](u_\ell^\perp) = h_\ell^\perp[0](u_\ell^\perp).$$ Thus, $$h[0](u_\ell+u_\ell^\perp) = \ell(\ell+1)\lVert u_\ell\rVert^2 + h_\ell^\perp[0](u_\ell^\perp).$$ Now consider the integral over the equator: $$\begin{aligned} \int_{-\pi}^\pi\sigma(\varphi)\lvert u_\ell(\pi/2,\varphi)+&u_\ell^\perp(\pi/2,\varphi)\rvert^2d\varphi \\ =& \int_{-\pi}^\pi\sigma(\varphi)\lvert u_\ell(\pi/2,\varphi)\rvert^2d\varphi + \int_{-\pi}^\pi\sigma(\varphi)\lvert u_\ell^\perp(\pi/2,\varphi)\rvert^2d\varphi \\ &+ 2\hbox{{\rm Re}}\,\int_{-\pi}^\pi\sigma(\varphi)u_\ell(\pi/2,\varphi)\overline{u_\ell^\perp(\pi/2,\varphi)} d\varphi.\end{aligned}$$ We use the estimate $$2\lvert ab\rvert\leq \varepsilon\lvert a\rvert^2+\frac1{\varepsilon}\lvert b\rvert^2$$ for the cross term in the last expression (with $a=u_\ell$ and $b=u_\ell^\perp$); this yields $$\begin{aligned} \int_{-\pi}^\pi&\sigma(\varphi)\lvert u(\pi/2,\varphi)\rvert^2d\varphi \\ &\leq \int_{-\pi}^\pi(\sigma(\varphi)+\varepsilon\lvert\sigma(\varphi)\rvert)\lvert u_\ell(\pi/2,\varphi)\rvert^2d\varphi + \int_{-\pi}^\pi(\sigma(\varphi)+\frac1\varepsilon\lvert\sigma(\varphi)\rvert)\lvert u_\ell^\perp(\pi/2,\varphi)\rvert^2d\varphi\end{aligned}$$ and similarly $$\begin{aligned} \int_{-\pi}^\pi&\sigma(\varphi)\lvert u(\pi/2,\varphi)\rvert^2d\varphi \\ &\geq \int_{-\pi}^\pi(\sigma(\varphi)-\varepsilon\lvert\sigma(\varphi)\rvert)\lvert u_\ell(\pi/2,\varphi)\rvert^2d\varphi + \int_{-\pi}^\pi(\sigma(\varphi)-\frac1\varepsilon\lvert\sigma(\varphi)\rvert)\lvert u_\ell^\perp(\pi/2,\varphi)\rvert^2d\varphi.\end{aligned}$$ Putting this all together and recalling our notation $v_\ell[\sigma]$ in 
[\[b8\]](#b8){reference-type="eqref" reference="b8"}, we find $$\begin{aligned} h[\sigma](u)&\leq \ell(\ell+1)\lVert u_\ell\rVert^2_{L^2(\Omega)} +v_\ell[\sigma_+](u_\ell)+h_\ell^\perp[\sigma+\tfrac1\varepsilon\lvert\sigma\rvert](u_\ell^\perp), \label{d2} \\ h[\sigma](u)&\geq \ell(\ell+1)\lVert u_\ell\rVert^2_{L^2(\Omega)} +v_\ell[\sigma_-](u_\ell)+h_\ell^\perp[\sigma-\tfrac1\varepsilon\lvert\sigma\rvert](u_\ell^\perp). \label{d3}\end{aligned}$$ Here [\[d2\]](#d2){reference-type="eqref" reference="d2"} can be rewritten as $H[\sigma]\leq H_\ell^+[\sigma]$, and [\[d3\]](#d3){reference-type="eqref" reference="d3"} as $H_\ell^-[\sigma]\leq H[\sigma]$. We note that in this case the domains of the quadratic forms corresponding to all three operators ($H_\ell^-[\sigma]$, $H[\sigma]$ and $H_\ell^+[\sigma]$) coincide. The proof of the lemma is complete. ◻ ## Proof of Theorem [**Theorem** 5](#thm.d2){reference-type="ref" reference="thm.d2"} {#proof-of-theorem-thm.d2} Combining Lemma [**Lemma** 16](#lma.var){reference-type="ref" reference="lma.var"} with the variational principle (see Section [2.3](#sec.var){reference-type="ref" reference="sec.var"}), we find $$\lambda_n(H_\ell^-[\sigma])\leq \lambda_n(H[\sigma])\leq \lambda_n(H_\ell^+[\sigma]) \label{d4}$$ for all indices $n$. Consider the upper bound here; recall that $$H_\ell^+[\sigma]=\biggl(\ell(\ell+1)I+V_\ell[\sigma_+]\biggr)\oplus H_\ell^\perp[\sigma+\tfrac1\varepsilon\lvert\sigma\rvert].$$ The sequence of the eigenvalues of $H_\ell^+[\sigma]$ is the re-ordered union of the sequences of eigenvalues of the two components in this orthogonal sum. By Lemma [**Lemma** 15](#lma.d3){reference-type="ref" reference="lma.d3"}, the second component $H_\ell^\perp[\sigma+\tfrac1\varepsilon\lvert\sigma\rvert]$ does not contribute any eigenvalues to the interval $[\ell^2,(\ell+1)^2]$ and has $L=\ell(\ell+1)/2$ eigenvalues below this interval. 
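The counting step just used — the spectrum of an orthogonal sum is the re-ordered union of the blocks' spectra, so a block whose spectrum sits in an interval lying above $L$ eigenvalues of the complementary block contributes the eigenvalues with indices $L+1,L+2,\dots$ — can be checked on finite matrices. A minimal numpy sketch with hypothetical block sizes:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_herm(n):
    """Random real symmetric n x n matrix."""
    A = rng.standard_normal((n, n))
    return (A + A.T) / 2

# "First component": small block with spectrum shifted near 100.
B1 = rand_herm(6) + 100.0 * np.eye(6)
# "Second component": block with all eigenvalues far below, near -50.
B2 = rand_herm(15) - 50.0 * np.eye(15)

H = np.block([[B1, np.zeros((6, 15))],
              [np.zeros((15, 6)), B2]])

ev = np.linalg.eigvalsh(H)                 # ascending order
union = np.sort(np.concatenate([np.linalg.eigvalsh(B1),
                                np.linalg.eigvalsh(B2)]))
L = 15  # number of eigenvalues of B2 below the interval occupied by B1

# Spectrum of the direct sum = sorted union of the blocks' spectra,
# and the eigenvalues with indices L+1..L+6 come from the first block.
print(np.allclose(ev, union))
print(np.allclose(ev[L:L + 6], np.sort(np.linalg.eigvalsh(B1))))
```

This is exactly the mechanism behind the index shift $\lambda_{L+k}(H_\ell^+[\sigma])$ in the next step, with $L=\ell(\ell+1)/2$.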
The first component has exactly $\ell+1$ eigenvalues given by $$\ell(\ell+1)+\lambda_k(V_\ell[\sigma_+]), \quad k=1,\dots, \ell+1.$$ By Lemma [**Lemma** 6](#lma.b1){reference-type="ref" reference="lma.b1"}, all these eigenvalues are in the interval $\Lambda_\ell$. It follows that for the eigenvalues of $H_\ell^+[\sigma]$ in the interval $[\ell^2,(\ell+1)^2]$ we have $$\lambda_{L+k}(H_\ell^+[\sigma]) = \ell(\ell+1)+\lambda_k(V_\ell[\sigma_+]), \quad k=1,\dots,\ell+1.$$ Combining this with the upper bound in [\[d4\]](#d4){reference-type="eqref" reference="d4"}, we obtain the upper bound in the statement of the theorem. The lower bound is obtained in the same way. ◻ ## Proof of Lemma [**Theorem** 1](#lma.a1){reference-type="ref" reference="lma.a1"} {#sec:proof Lemma a1} By Theorem [**Theorem** 5](#thm.d2){reference-type="ref" reference="thm.d2"} and Lemma [**Lemma** 6](#lma.b1){reference-type="ref" reference="lma.b1"}, the eigenvalues $\lambda_{L+k}(H[\sigma])$, $k=1,\dots,\ell+1$, belong to the interval $\Lambda_\ell$. This yields the required statement. ◻ # Odd trigonometric polynomials: proof of Theorem [**Theorem** 3](#thm.a3){reference-type="ref" reference="thm.a3"} {#sec:odd sigma} ## Notation and setup We denote by ${\mathcal H}_\ell\subset L^2(\Omega)$ the linear space of spherical harmonics of degree $\ell$, i.e. $${\mathcal H}_\ell=\mathop{\mathrm{Span}}\{Y_\ell^m: -\ell\leq m\leq \ell\}.$$ We decompose ${\mathcal H}_\ell$ according to the parity of $m-\ell$: $${\mathcal H}_\ell={\mathcal H}_\ell^N\oplus{\mathcal H}_\ell^D,$$ where $$\begin{aligned} {\mathcal H}_\ell^N &= \mathop{\mathrm{Span}}\{Y_\ell^m: -\ell\leq m\leq \ell,\quad m-\ell\text{ even}\}, \\ {\mathcal H}_\ell^D &= \mathop{\mathrm{Span}}\{Y_\ell^m: -\ell\leq m\leq \ell,\quad m-\ell\text{ odd}\}.\end{aligned}$$ Equivalently, ${\mathcal H}_\ell^N$ (resp. ${\mathcal H}_\ell^D$) is the subspace of functions satisfying the Neumann (resp. Dirichlet) boundary condition on the equator.
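The parity rule behind this decomposition is the classical fact that $P_\ell^m(0)=0$ exactly when $\ell-m$ is odd, so that $Y_\ell^m\propto P_\ell^m(\cos\theta)e^{im\varphi}$ vanishes on the equator $\theta=\pi/2$ precisely in that case. It can be verified in exact rational arithmetic from $P_m^m(0)=(-1)^m(2m-1)!!$ and the relation $(\ell-m)P_\ell^m(0)=-(\ell+m-1)P_{\ell-2}^m(0)$, obtained by setting $x=0$ in the standard three-term recurrence. A short sketch (the helper `P0` is ours, written for this check, with the Condon-Shortley sign convention):

```python
from fractions import Fraction

def P0(l, m):
    """Exact value of the associated Legendre function P_l^m at x = 0."""
    assert 0 <= m <= l
    val = Fraction((-1) ** m)
    for j in range(1, m + 1):          # P_m^m(0) = (-1)^m (2m-1)!!
        val *= 2 * j - 1
    if (l - m) % 2 == 1:               # P_{m+1}^m(0) = 0, and the two-step
        return Fraction(0)             # recurrence propagates the zero
    k = m
    while k < l:                        # (k-m) P_k^m(0) = -(k+m-1) P_{k-2}^m(0)
        k += 2
        val = -Fraction(k + m - 1, k - m) * val
    return val

# P_l^m(0) = 0 exactly when l - m is odd (Dirichlet parity on the equator).
checks = [(P0(l, m) == 0) == ((l - m) % 2 == 1)
          for l in range(0, 12) for m in range(0, l + 1)]
print(all(checks))
```

In particular $P_\ell^m(0)\neq0$ for $\ell-m$ even, which is the nonvanishing of the constants $A_{\ell,m}$ used below.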
Further, we consider the corresponding subspaces of $L^2(\partial\Omega)$: $$\begin{aligned} \widetilde{\mathcal H}_\ell^N &= \mathop{\mathrm{Span}}\{e^{im\varphi}: -\ell\leq m\leq \ell,\quad m-\ell\text{ even}\}, \\ \widetilde{\mathcal H}_\ell^D &= \mathop{\mathrm{Span}}\{e^{im\varphi}: -\ell\leq m\leq \ell,\quad m-\ell\text{ odd}\}.\end{aligned}$$ ****Lemma** 17**. 1. *For any $\ell$, the restriction map $F\mapsto F|_{\partial\Omega}$ is a linear isomorphism between ${\mathcal H}_\ell^N$ and $\widetilde{\mathcal H}_\ell^N$.* 2. *For any $\ell$, the map $F\mapsto \frac{\partial F}{\partial n}|_{\partial\Omega}$ is a linear isomorphism between ${\mathcal H}_\ell^D$ and $\widetilde{\mathcal H}_\ell^D$.* *Proof.* (i) We have $$\begin{aligned} \{Y_\ell^m: \quad -\ell\leq m\leq \ell,\quad m-\ell\text{ even}\}\quad \text{is a linear basis in ${\mathcal H}_\ell^N$}, \\ \{e^{im\varphi}:\quad -\ell\leq m\leq \ell,\quad m-\ell\text{ even}\}\quad \text{is a linear basis in $\widetilde{\mathcal H}_\ell^N$}.\end{aligned}$$ By [\[b11\]](#b11){reference-type="eqref" reference="b11"} and [\[eq:restr equator Alm\]](#eq:restr equator Alm){reference-type="eqref" reference="eq:restr equator Alm"}, we have $$Y_\ell^m|_{\partial\Omega}=Y_\ell^m(\pi/2,\varphi)=\frac{A_{\ell,m}}{\sqrt{2}}e^{im\varphi}, \quad A_{\ell,m}\not=0,$$ and so the restriction map relates these bases through multiplication by non-zero constants. This completes the proof of (i). 
\(ii\) Similarly, $$\begin{aligned} \{Y_\ell^m: \quad -\ell\leq m\leq \ell,\quad m-\ell\text{ odd}\}\quad \text{is a linear basis in ${\mathcal H}_\ell^D$}, \\ \{e^{im\varphi}:\quad -\ell\leq m\leq \ell,\quad m-\ell\text{ odd}\}\quad \text{is a linear basis in $\widetilde{\mathcal H}_\ell^D$}.\end{aligned}$$ We have $$\frac{\partial Y_{\ell}^{m}}{\partial n}\bigg|_{\partial\Omega} = \frac{\partial Y_{\ell}^{m}}{\partial \theta} (\pi/2,\varphi) = B_{\ell,m}\cdot e^{im \varphi},$$ with $$B_{\ell,m}:=(-1)^{m+1} \sqrt{\frac{(2\ell+1)}{2\pi}\frac{(\ell-m)!}{(\ell+m)!}}\frac{d}{dx}{P_{\ell}^{m}}(x)|_{x=0},$$ that, we claim, does not vanish. Indeed, upon using the assumption that $\ell-m$ is odd, we explicitly have  [@RW p. 128] that $$\frac{d}{dx}{P_{\ell}^{m}}(x)|_{x=0} = \frac{2^{m+1}}{\sqrt{\pi}}\sin\left(\frac{\pi(m+\ell)}{2}\right)\frac{\Gamma\left( \frac{\ell+m}{2}+1 \right)}{\Gamma\left( \frac{\ell-m+1}{2} \right)} = \pm\frac{2^{m+1}}{\sqrt{\pi}}\frac{\Gamma\left( \frac{\ell+m}{2}+1 \right)}{\Gamma\left( \frac{\ell-m+1}{2} \right)} \ne 0.$$ This completes the proof of (ii). ◻ ## Proof of Theorem [**Theorem** 3](#thm.a3){reference-type="ref" reference="thm.a3"} {#proof-of-theorem-thm.a3} Let $\sigma$ be as in the statement of the theorem. Recall that we denoted by $M[\sigma]$ the operator of multiplication by $\sigma$. Let us consider the action of $M[\sigma]$ on functions from the space $\widetilde{\mathcal H}_\ell^N$. Since $\sigma$ is odd, multiplication by $\sigma$ reverses the parity of functions on the boundary. It follows that $M[\sigma]$ maps a suitable subspace of $\widetilde{\mathcal H}_\ell^N$ to $\widetilde{\mathcal H}_\ell^D$. By "suitable subspace" here we mean restricting the degree of $f\in \widetilde{\mathcal H}_\ell^N$ so that the product $\sigma f$ has degree $\leq \ell$. More precisely, we see that (using that $d$ is odd) $$M[\sigma]: \widetilde{\mathcal H}_{\ell-d-1}^N\to \widetilde{\mathcal H}_\ell^D. 
\label{f2}$$ Motivated by this, let us consider the subspace ${\mathcal H}_\ell^{N,-}\subset{\mathcal H}_\ell^N$ defined by the condition $${\mathcal H}_\ell^{N,-}=\{F\in {\mathcal H}_\ell^N: F|_{\partial\Omega}\in\widetilde{\mathcal H}_{\ell-d-1}^N\}.$$ Take any $F_N\in {\mathcal H}_\ell^{N,-}$ and consider its restriction $f_N=F_N|_{\partial\Omega}$. Then by [\[f2\]](#f2){reference-type="eqref" reference="f2"} we have $\sigma f_N\in \widetilde{\mathcal H}_\ell^D$. By Lemma [**Lemma** 17](#lma.f1){reference-type="ref" reference="lma.f1"}(ii), there exists $F_D\in {\mathcal H}_\ell^D$ such that $$\frac{\partial F_D}{\partial n}\bigg|_{\partial\Omega}=-\sigma f_N.$$ Consider the function $F:=F_N+F_D\in{\mathcal H}_\ell$. Since $F_N$ (resp. $F_D$) satisfies the Neumann (resp. Dirichlet) boundary condition on the equator, we find $$\sigma \cdot F|_{\partial\Omega}+\frac{\partial F}{\partial n}\bigg|_{\partial\Omega} = \sigma \cdot F_{N}|_{\partial\Omega}+\frac{\partial F_D}{\partial n}\bigg|_{\partial\Omega} =\sigma f_N-\sigma f_N=0$$ by construction. Thus, $F$ is a Robin eigenfunction with the eigenvalue $\ell(\ell+1)$. It remains to estimate from below the dimension of the corresponding eigenspace. By inspection, the codimension of $\widetilde{\mathcal H}_{\ell-d-1}^N$ in $\widetilde{\mathcal H}_\ell^N$ is $d+1$. By Lemma [**Lemma** 17](#lma.f1){reference-type="ref" reference="lma.f1"}(i), the codimension of ${\mathcal H}_\ell^{N,-}$ in ${\mathcal H}_\ell^{N}$ is also $d+1$. Since $\dim {\mathcal H}_\ell^{N}=\ell+1$, we find that $\dim{\mathcal H}_\ell^{N,-}=\ell-d$. Thus, the multiplicity of the Robin eigenvalue $\lambda=\ell(\ell+1)$ is at least $\ell-d$. It follows that all but at most $d+1$ eigenvalues in the $\ell$'th cluster (for large $\ell$) coincide with $\ell(\ell+1)$. The proof of Theorem [**Theorem** 3](#thm.a3){reference-type="ref" reference="thm.a3"} is complete.
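The Fourier-support bookkeeping in this proof — an odd (anti-periodic) $\sigma$ of degree $d$ carries only odd frequencies $|j|\leq d$, so multiplying $f\in\widetilde{\mathcal H}_{\ell-d-1}^N$ by $\sigma$ produces frequencies $k=m+j$ with $|k|\leq\ell-1$ and $k-\ell$ odd — amounts to a convolution of coefficient supports, and can be checked mechanically. A minimal sketch (all coefficients set to $1$, which is immaterial since only supports matter; the values of $d$ and $\ell$ are arbitrary illustrative choices):

```python
from itertools import product

d = 3                      # degree of the odd perturbation sigma (d odd)
ell = 11

# sigma(phi) = sum over odd |j| <= d of c_j e^{i j phi}: anti-periodicity
# sigma(phi + pi) = -sigma(phi) forces odd frequencies only.
sigma = {j: 1.0 for j in range(-d, d + 1) if j % 2 != 0}

# f in the Neumann space of degree ell-d-1: |m| <= ell-d-1, m-(ell-d-1) even.
f = {m: 1.0 for m in range(-(ell - d - 1), ell - d)
     if (m - (ell - d - 1)) % 2 == 0}

# Frequencies of the product sigma * f (convolution of coefficient supports).
prod_supp = {m + j for m, j in product(f, sigma)}

# Every frequency of sigma*f lies in the Dirichlet space of degree ell:
# |k| <= ell and k - ell odd.
ok = all(abs(k) <= ell and (k - ell) % 2 == 1 for k in prod_supp)
print(ok)
```

The degree restriction to $\ell-d-1$ is exactly what guarantees $|m+j|\leq\ell-1\leq\ell$, i.e. that the product does not leave the $\ell$-th cluster.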
# Proof of Proposition [**Proposition** 9](#prp.b4){reference-type="ref" reference="prp.b4"} {#proof-of-proposition-prp.b4} Below $\omega$ and $\sigma$ are as in the hypothesis of Proposition [**Proposition** 9](#prp.b4){reference-type="ref" reference="prp.b4"}. In order to make the formulas below more readable, we assume without loss of generality that $\sigma$ is even, $\sigma(\varphi+\pi)=\sigma(\varphi)$, so that $\sigma_\text{\rm even}=\sigma$. ## A commutator estimate ****Lemma** 18**. *We have the Hilbert-Schmidt norm estimate $$\lVert M[\sigma]C[\omega_\ell]-C[\omega_\ell]M[\sigma]\rVert_{{\mathbf S}_2}=O(1), \quad \ell\to\infty. \label{A1}$$* *Proof.* The commutator in [\[A1\]](#A1){reference-type="eqref" reference="A1"} is the integral operator in $L^2(\partial\Omega)$ with the integral kernel $$\omega_\ell(\varphi-\varphi')(\sigma(\varphi)-\sigma(\varphi')).$$ Since $\sigma$ is smooth and *even*, we can write $$\lvert\sigma(\varphi)-\sigma(\varphi')\rvert \leq C\lvert e^{2i(\varphi-\varphi')}-1\rvert.$$ It follows that the Hilbert-Schmidt norm can be estimated as $$\begin{aligned} \lVert M[\sigma]C[\omega_\ell]&-C[\omega_\ell]M[\sigma]\rVert_{{\mathbf S}_2}^2 \\ &= \int_{-\pi}^\pi \int_{-\pi}^\pi \lvert\omega_\ell(\varphi-\varphi')\rvert^2\lvert\sigma(\varphi)-\sigma(\varphi')\rvert^2 d\varphi\ d\varphi' \\ &\leq C\int_{-\pi}^\pi \int_{-\pi}^\pi \lvert\omega_\ell(\varphi-\varphi')\rvert^2 \lvert e^{2i(\varphi-\varphi')}-1\rvert^2 d\varphi\ d\varphi' \\ &= C\int_{-\pi}^\pi \lvert\omega_\ell(\varphi)\rvert^2 \lvert e^{2i\varphi}-1\rvert^2d\varphi.\end{aligned}$$ By Plancherel's theorem, we have $$\begin{aligned} \int_{-\pi}^\pi \lvert\omega_\ell(\varphi)\rvert^2 \lvert e^{2i\varphi}-1\rvert^2d\varphi &= \int_{-\pi}^\pi \lvert\omega_\ell(\varphi)e^{2i\varphi}-\omega_\ell(\varphi)\rvert^2d\varphi \\ &= \sum_{\genfrac{}{}{0pt}{1}{m=-\infty}{\text{$m-\ell$ \rm even}}}^{\infty} \lvert\omega(\tfrac{m}{\ell})-\omega(\tfrac{m+2}{\ell})\rvert^2 =O(1)\end{aligned}$$ 
as $\ell\to\infty$, where we have used the smoothness of $\omega$ at the last step. ◻ ## The case $f(x)=x^k$ ****Lemma** 19**. *For any integer $k\geq1$, we have $$\lim_{\ell\to\infty} \frac{1}{\ell+1}\mathop{\mathrm{Tr}}\bigl((C[\omega_\ell]M[\sigma]C[\omega_\ell]^*)^k\bigr) = \frac1{4\pi}\int_{-\pi}^{\pi} \int_{-1}^1 \lvert\omega(\xi)\rvert^{2k}(\sigma(\varphi))^kd\xi\, d\varphi.$$* *Proof.* We observe that by the cyclicity of trace, $$\mathop{\mathrm{Tr}}\bigl((C[\omega_\ell]M[\sigma]C[\omega_\ell]^*)^k\bigr) = \mathop{\mathrm{Tr}}\bigl((M[\sigma]C[\omega_\ell]^*C[\omega_\ell])^k\bigr)$$ and $C[\omega_\ell]^*C[\omega_\ell]=C[\Omega_\ell]$, where $$\Omega_\ell(\varphi) = \sum_{\genfrac{}{}{0pt}{1}{m=-\ell+2}{\text{$m-\ell$ \rm even}}}^{\ell-2} \lvert\omega(m/\ell)\rvert^2e^{im\varphi}.$$ Further, for $k=1$ by a direct evaluation of trace we have $$\begin{aligned} \mathop{\mathrm{Tr}}(M[\sigma]C[\Omega_\ell]) &= \frac1{2\pi}\int_{-\pi}^\pi \sigma(\varphi)d\varphi \sum_{\genfrac{}{}{0pt}{1}{m=-\ell+2}{\text{$m-\ell$ \rm even}}}^{\ell-2} \lvert\omega(m/\ell)\rvert^2 \\ &=\frac{1}{2\pi}\int_{-\pi}^\pi \sigma(\varphi)d\varphi\cdot \frac{\ell}{2}\int_{-1}^1\lvert\omega(\xi)\rvert^2d\xi+O(1),\end{aligned}$$ for $\ell\to\infty$. 
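The last equality here is a Riemann-sum approximation: the points $m/\ell$ with $m-\ell$ even form a grid of spacing $2/\ell$ in $(-1,1)$, so $\sum_m\lvert\omega(m/\ell)\rvert^2=(\ell/2)\int_{-1}^1\lvert\omega(\xi)\rvert^2d\xi+O(1)$. For a concrete smooth compactly supported profile this is easy to check numerically (the bump `omega` below is a hypothetical choice of profile, and even values of $\ell$ are used so that "$m-\ell$ even" means "$m$ even"):

```python
import numpy as np

def omega(xi):
    """A smooth profile supported in (-1, 1)."""
    xi = np.asarray(xi, dtype=float)
    out = np.zeros_like(xi)
    inside = np.abs(xi) < 1
    out[inside] = np.exp(-1.0 / (1.0 - xi[inside] ** 2))
    return out

# Reference value of the integral of |omega|^2 over (-1, 1) on a fine grid.
grid = np.linspace(-1, 1, 200001)
h = grid[1] - grid[0]
integral = np.sum(omega(grid) ** 2) * h

def riemann_error(ell):
    m = np.arange(-ell + 2, ell - 1, 2)   # even m, i.e. m - ell even for even ell
    s = np.sum(omega(m / ell) ** 2)
    return abs(s - (ell / 2) * integral)

errs = [riemann_error(ell) for ell in (100, 200, 400, 800)]
print(errs)  # stays bounded as ell grows
```

For smooth $\omega$ the error is in fact much smaller than the $O(1)$ claimed, which is all the proof requires.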
If $k=2$, denoting the commutator $R=\bigl[C[\Omega_\ell],M[\sigma]\bigr]$, we find $$\begin{aligned} \mathop{\mathrm{Tr}}\bigl((M[\sigma]C[\Omega_\ell])^2\bigr) =& \mathop{\mathrm{Tr}}(M[\sigma]^2C[\Omega_\ell]^2) + \mathop{\mathrm{Tr}}(M[\sigma]R\ C[\Omega_\ell]).\end{aligned}$$ Here the first trace on the right-hand side can again be evaluated directly: $$\begin{aligned} \mathop{\mathrm{Tr}}(M[\sigma]^2C[\Omega_\ell]^2) &= \frac1{2\pi}\int_{-\pi}^\pi \sigma(\varphi)^2d\varphi \sum_{\genfrac{}{}{0pt}{1}{m=-\ell+2}{\text{$m-\ell$ \rm even}}}^{\ell-2} \lvert\omega(m/\ell)\rvert^4 \\ &=\frac{1}{2\pi}\int_{-\pi}^\pi \sigma(\varphi)^2 d\varphi\cdot \frac{\ell}{2}\int_{-1}^1\lvert\omega(\xi)\rvert^4d\xi+O(1),\end{aligned}$$ and the second trace can be estimated as follows: $$\begin{aligned} \lvert\mathop{\mathrm{Tr}}(M[\sigma]R\ C[\Omega_\ell])\rvert \leq \lVert M[\sigma] R\ C[\Omega_\ell]\rVert_{{\mathbf S}_1} &\leq \lVert M[\sigma]\rVert \lVert R\rVert_{{\mathbf S}_2} \lVert C[\Omega_\ell]\rVert_{{\mathbf S}_2} \\ &=O(1)O(1)O(\sqrt{\ell})=O(\sqrt{\ell}).\end{aligned}$$ Similarly for $k\geq3$ we find $$\mathop{\mathrm{Tr}}\bigl((M[\sigma]C[\Omega_\ell])^k\bigr) = \mathop{\mathrm{Tr}}(M[\sigma]^k C[\Omega_\ell]^k) + \text{error}$$ where the trace on the right-hand side can be evaluated directly and the error term is a combination of traces of commutator terms, which are all $O(\sqrt{\ell})$. ◻

## Concluding the proof: application of the Weierstrass approximation theorem

Since the norms of $C[\omega_\ell]$ are uniformly bounded, we can choose $R>0$ such that $$\lVert C[\omega_\ell]M[\sigma]C[\omega_\ell]^*\rVert\leq R$$ for all $\ell$. It suffices to prove the statement for $f$ real-valued. We have already checked the statement for $f(x)=x$, and so, subtracting a linear term from $f$, we may assume that $f'(0)=0$.
Under this assumption, the function $f(y)/y^2$ is in $C^\infty$, and so by the Weierstrass approximation theorem for any $\varepsilon>0$ we can find polynomials $p_+$ and $p_-$ vanishing at the origin such that $$p_-(y)\leq f(y)\leq p_+(y) \quad \text{ and }\quad p_+(y)-p_-(y)\leq \varepsilon y^2, \quad \text{ for all $\lvert y\rvert\leq R$.}$$ Then we have, using the previous lemma, $$\begin{aligned} \limsup_{\ell\to\infty} \frac{1}{\ell+1}\mathop{\mathrm{Tr}}f(C[\omega_\ell]M[\sigma]C[\omega_\ell]^*) &\leq \limsup_{\ell\to\infty} \frac{1}{\ell+1}\mathop{\mathrm{Tr}}p_+(C[\omega_\ell]M[\sigma]C[\omega_\ell]^*) \\ &= \frac1{4\pi}\int_{-\pi}^{\pi} \int_{-1}^1 p_+\bigl(\lvert\omega(\xi)\rvert^2\sigma(\varphi)\bigr)d\xi\, d\varphi\end{aligned}$$ and similarly $$\liminf_{\ell\to\infty} \frac{1}{\ell+1}\mathop{\mathrm{Tr}}f(C[\omega_\ell]M[\sigma]C[\omega_\ell]^*) \geq \frac1{4\pi}\int_{-\pi}^{\pi} \int_{-1}^1 p_-\bigl(\lvert\omega(\xi)\rvert^2\sigma(\varphi)\bigr)d\xi\, d\varphi.$$ Furthermore, subtracting the right-hand sides here, we find $$\begin{aligned} \int_{-\pi}^{\pi}\int_{-1}^1 p_+\bigl(\lvert\omega(\xi)\rvert^2\sigma(\varphi)\bigr)d\xi\, d\varphi &- \int_{-\pi}^{\pi}\int_{-1}^1 p_-\bigl(\lvert\omega(\xi)\rvert^2\sigma(\varphi)\bigr)d\xi\, d\varphi \\ &\leq \varepsilon \int_{-\pi}^{\pi}\int_{-1}^1 \bigl(\lvert\omega(\xi)\rvert^2\sigma(\varphi)\bigr)^2d\xi\, d\varphi.\end{aligned}$$ Sending $\varepsilon\to0$, we find that the $\limsup$ and $\liminf$ above coincide, which gives the required statement. ◻
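The Riemann-sum approximation used repeatedly above, $\sum_{m}\lvert\omega(m/\ell)\rvert^2\approx\tfrac{\ell}{2}\int_{-1}^1\lvert\omega(\xi)\rvert^2d\xi$ (summing over $m$ with $m-\ell$ even), can be sanity-checked numerically. A minimal sketch; the sample window $\omega(\xi)=1-\xi^2$ is our own arbitrary choice, not from the text:

```python
# Sample symbol on [-1, 1]; any smooth window works (this choice is ours).
def omega(xi):
    return 1.0 - xi * xi

ell = 2000
# Sum over m with m - ell even and -ell + 2 <= m <= ell - 2,
# i.e. a Riemann sum with spacing 2/ell in the variable xi = m/ell.
s = sum(omega(m / ell) ** 2 for m in range(-ell + 2, ell - 1, 2))

# Prediction: (ell/2) * integral_{-1}^{1} |omega(xi)|^2 dxi; for
# omega(xi) = 1 - xi^2 that integral equals 16/15.
prediction = (ell / 2) * (16.0 / 15.0)
assert abs(s - prediction) / prediction < 1e-2
```

For smooth $\omega$ the agreement improves as $\ell$ grows, which is exactly the $O(1)$ error absorbed in the trace asymptotics of Lemma 19.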
*Source: arXiv:2309.07044, "Eigenvalue clusters for the hemisphere Laplacian with variable Robin condition" by Alexander Pushnitski and Igor Wigman (math.SP).*
--- abstract: | We show that pseudo-composition algebras and train algebras of rank 3 generated by idempotents are characterized as axial algebras with fusion laws derived from the Peirce decompositions of idempotents in these classes of algebras. The corresponding axial algebras are called $\mathcal{PC}(\eta)$-axial algebras, where $\eta$ is an element of the ground field. As a first step towards their classification, we describe $2$- and $3$-generated subalgebras of such algebras. author: - "Ilya Gorshkov, Andrey Mamontov and Alexey Staroletov[^1]" title: Axial view on pseudo-composition algebras and train algebras of rank 3 ---

**Keywords:** pseudo-composition algebra, train algebra, axial algebra, Peirce decomposition, idempotent

**MSC classes:** 17A99, 17C27, 17D92

# Introduction

A commutative algebra is said to be of rank 3 if each of its elements generates a subalgebra of dimension not greater than two. This class of algebras includes many interesting examples; some general theory and motivation can be found in [@W99]. We are interested in two particular cases. Suppose that $\mathbb{F}$ is a field of characteristic not 2 or 3. A commutative $\mathbb{F}$-algebra $A$ endowed with a non-zero symmetric bilinear form $\varphi$ is called a *pseudo-composition algebra* if $x^3=\varphi(x,x)x$ for all $x\in A$. These algebras have been actively studied in the past; in particular, Meyberg and Osborn obtained their classification, under some restrictions, in [@MO93]. It was shown in [@EO00; @S59] that these algebras are closely related to Jordan algebras of generic degree $\leq 3$. Moreover, it is known that the bilinear form $\varphi$ is a Frobenius form, that is, $\varphi(xy,z)=\varphi(x,yz)$ for all algebra elements $x$, $y$, and $z$ [@EO00].

The second subclass consists of train algebras of rank 3. Let $A$ be a commutative algebra over a field $\mathbb{F}$ with $\operatorname{char}\mathbb{F}\neq2,3$. The principal powers of an element $x\in A$ are defined by $x^1=x$ and $x^i=x^{i-1}x$ for $i\geq2$.
If there exists a non-zero algebra homomorphism $\omega:A\rightarrow\mathbb{F}$, then $A$ is called a *baric algebra* and $\omega$ is a *weight function*. In this case the pair $(A,\omega)$ is said to be a (principal) train algebra of rank $r$, where $r$ is a positive integer, if there exist $\lambda_1,\ldots,\lambda_{r-1}\in\mathbb{F}$ such that every $x\in A$ satisfies the equality $x^r+\lambda_1\omega(x)x^{r-1}+\ldots+\lambda_{r-1}\omega(x)^{r-1}x=0$. These algebras were introduced by Etherington in 1939 as part of the algebraic formalism of genetics in his fundamental work [@Eth39].

Let us briefly discuss the concept of axial algebras. A *fusion law* over $\mathbb{F}$ is a pair $(\mathcal{F},\ast)$, where $\mathcal{F}$ is a subset of $\mathbb{F}$ and $\ast:\mathcal{F}\times\mathcal{F}\to 2^\mathcal{F}$ is a map to the set of all subsets of $\mathcal{F}$. If $A$ is a commutative $\mathbb{F}$-algebra and $a\in A$, then $ad_a : A\to A$ stands for the adjoint map sending $u$ to $au$. For $\lambda\in\mathbb{F}$, denote $A_{\lambda}(a)=\{u\in A~|~au=\lambda u\}$ and for $L\subseteq\mathbb{F}$, denote $A_L(a):=\oplus_{\lambda\in L}A_{\lambda}(a)$. An *($\mathcal{F}$-)axial algebra* $A$ is a commutative algebra over $\mathbb{F}$ generated by a set of idempotents $X$, called *axes*, such that for each $a \in X$ we have $A=A_\mathcal{F}(a)$, and $A_{\lambda}(a)A_\mu(a)\subseteq A_{\lambda\ast\mu}(a)$ for all $\lambda,\mu\in\mathcal{F}$. An axis $a$ is said to be *primitive* if $A_1(a)$ is $1$-dimensional; that is, $A_1(a)=\langle a\rangle$. An axial algebra $A$ is *primitive* if $A$ is generated by a set of primitive axes. These properties are reminiscent of the so-called Peirce decompositions in different classes of non-associative algebras. Nevertheless, the main inspirations for the concept of axial algebras were the Griess algebra [@Gr82] and Majorana theory [@Iv09]. Axial algebras were introduced by Hall, Rehren, and Shpectorov in [@HRS1; @HRS2].
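To see these definitions in action, here is a toy illustration of our own (an associative algebra, hence far from the interesting cases below, but it exhibits the eigenspace decomposition of an adjoint map and a fusion law): in $\mathbb{R}^2$ with the coordinatewise product, $a=(1,0)$ is an idempotent whose adjoint map has eigenvalues $1$ and $0$.

```python
import numpy as np

# Toy commutative algebra: R^2 with the coordinatewise product.
def mul(x, y):
    return x * y

a = np.array([1.0, 0.0])              # an idempotent: a*a = a
assert np.allclose(mul(a, a), a)

# The adjoint map ad_a(u) = a*u is diagonal with eigenvalues 1 and 0,
# so A = A_1(a) + A_0(a) with A_1(a) = span{e1} and A_0(a) = span{e2}.
e1, e2 = np.eye(2)
assert np.allclose(mul(a, e1), 1.0 * e1)
assert np.allclose(mul(a, e2), 0.0 * e2)

# Fusion checks: A_1*A_1 lies in A_1, A_0*A_0 lies in A_0, A_1*A_0 = 0.
assert np.allclose(mul(e1, e1), e1)
assert np.allclose(mul(e2, e2), e2)
assert np.allclose(mul(e1, e2), 0.0)
```

Here the axis is primitive ($A_1(a)$ is spanned by $a$), and the fusion law is the associative one $\{1,0\}$ with $1\ast0=\emptyset$.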
The current state of the art in this area can be found in a recent survey [@survey]. In this paper we consider pseudo-composition algebras and train algebras of rank 3 from the point of view of axial algebras. The two classes of rank-3 algebras discussed above are distinguished from others by the following property, which resembles axial behavior: if $a$ is an idempotent in $A$ and $ad_a$ is its adjoint operator, then the eigenvalues of $ad_a$ lie in a fixed set of size 3 and the eigenvectors obey a fusion law of size 3. Finite fusion laws may be written in the form of tables. There are three examples below in Table [3](#t:1){reference-type="ref" reference="t:1"}. Pseudo-composition algebras are described by $\mathcal{PC}(-1)$ [@Wal88]. The fusion law $\mathcal{J(\eta)}$ corresponds to axial algebras of Jordan type [@HRS2], which include Matsuo algebras related to $3$-transposition groups ($\eta \not = 0,1$) and Jordan algebras ($\eta=\frac{1}{2}$). The most general case $\mathcal{J(\alpha,\beta)}$ is taken from [@Whybrow2], where all possible 2-generated graded primitive axial algebras are listed.

| $\ast$        | $1$           | $\eta$        | $\frac{1}{2}$ |
|---------------|---------------|---------------|---------------|
| $1$           | $1$           | $\eta$        | $\frac{1}{2}$ |
| $\eta$        | $\eta$        | $1$           | $\frac{1}{2}$ |
| $\frac{1}{2}$ | $\frac{1}{2}$ | $\frac{1}{2}$ | $\eta,1$      |

| $\ast$ | $1$    | $0$    | $\eta$ |
|--------|--------|--------|--------|
| $1$    | $1$    |        | $\eta$ |
| $0$    |        | $0$    | $\eta$ |
| $\eta$ | $\eta$ | $\eta$ | $1,0$  |

| $\ast$   | $1$      | $\alpha$   | $\beta$    |
|----------|----------|------------|------------|
| $1$      | $1$      | $\alpha$   | $\beta$    |
| $\alpha$ | $\alpha$ | $1,\alpha$ | $\beta$    |
| $\beta$  | $\beta$  | $\beta$    | $1,\alpha$ |

: Fusion laws $\mathcal{PC}(\eta)$, $\mathcal{J(\eta)}$, and $\mathcal{J(\alpha,\beta)}$

It is noted in [@Whybrow2] that the case $\beta=\frac{1}{2}$ is special in terms of multiplication formulas.
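The specializations among these laws can be checked mechanically. A small sketch (the dictionary encoding and the label names `'e'` for $\eta$ and `'h'` for $\frac{1}{2}$ are ours): encode $\mathcal{PC}(\eta)$ and $\mathcal{J}(\alpha,\beta)$ with $(\alpha,\beta)=(\eta,\frac{1}{2})$, and verify that every product set of the former is contained in the corresponding set of the latter, the only strict restriction occurring at $\eta\ast\eta$.

```python
# Symbolic eigenvalue labels: '1', 'e' (= eta), 'h' (= 1/2).
def sym(law):
    # Close the law under symmetry of the two arguments.
    out = dict(law)
    for (x, y), v in law.items():
        out[(y, x)] = v
    return out

# PC(eta): eigenvalues {1, eta, 1/2}.
PC = sym({
    ('1', '1'): {'1'}, ('1', 'e'): {'e'}, ('1', 'h'): {'h'},
    ('e', 'e'): {'1'}, ('e', 'h'): {'h'}, ('h', 'h'): {'1', 'e'},
})

# J(alpha, beta) specialised to alpha = eta, beta = 1/2.
J = sym({
    ('1', '1'): {'1'}, ('1', 'e'): {'e'}, ('1', 'h'): {'h'},
    ('e', 'e'): {'1', 'e'}, ('e', 'h'): {'h'}, ('h', 'h'): {'1', 'e'},
})

# PC(eta) is a special case: every PC product set sits inside the J one,
# and only the eta-eigenspace product is strictly smaller.
assert all(PC[k] <= J[k] for k in PC)
assert [k for k in sorted(PC) if PC[k] < J[k]] == [('e', 'e')]
```

The same encoding applied with $(\alpha,\beta)=(0,\eta)$ recovers the relation between $\mathcal{J}(\eta)$ and $\mathcal{J}(0,\eta)$ described below.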
Observe that $\mathcal{J(\eta)}$ is a special case of $\mathcal{J}(0,\eta)$ where additionally $0*0=\{0\}$; and there is a special subcase $\beta=\eta=\frac{1}{2}$ including Jordan algebras. Meanwhile $\mathcal{PC}(\eta)$ is a special case of $\mathcal{J}(\eta,\frac{1}{2})$ where the product of $\eta$-eigenspaces is also further limited, but this time we exclude $\eta$, not $1$. We show why the value $\eta=-1$ is special for $\mathcal{PC}(\eta)$ and prove that the fusion law characterizes pseudo-composition algebras in this case.

**Theorem 1**. *Suppose that $\mathbb{F}$ is a field of characteristic not $2$ or $3$. Suppose that $A$ is a primitive $\mathcal{PC}(\eta)$-axial algebra over $\mathbb{F}$, where $\eta\in\mathbb{F}$ and $\eta\not\in\{1, \frac{1}{2}\}$. Then the following statements hold:*

1. *if $\eta=-1$, then $A$ is a pseudo-composition algebra;*

2. *if $\eta\neq-1$, then $A$ is a train algebra of rank $3$.*

The converse statements are true under the following restrictions. Suppose that $\mathbb{F}$ is an infinite field of characteristic not $2$ or $3$. Suppose that either $A$ is a pseudo-composition algebra and $\eta=-1$, or $A$ is a train algebra of rank $3$ and $\eta\neq-1$. If $e$ is an idempotent in $A$, then $A=A_1(e)\oplus A_{\eta}(e)\oplus A_{1/2}(e)$, where $A_1(e)$ is spanned by $e$, and products of eigenvectors from these subspaces obey the fusion law $\mathcal{PC}(\eta)$ [@W99 Propositions 1.3 and 1.4]. Therefore, if $A$ is generated by idempotents, then $A$ is a $\mathcal{PC}(\eta)$-axial algebra. Train algebras of rank 3 corresponding to $\eta=0$ in Theorem [Theorem 1](#th:1){reference-type="ref" reference="th:1"} were investigated in [@Wal88-2]; in particular, it was proved that these algebras are Jordan.
This implies that the class of primitive $\mathcal{PC}(0)$-axial algebras is exactly the intersection of axial algebra classes corresponding to two fusion laws in Table [3](#t:1){reference-type="ref" reference="t:1"}: $\mathcal{PC}(0)$ and $\mathcal{J}(\frac{1}{2})$. Note that until now, most of the papers on axial algebras have been devoted to two cases: axial algebras of Jordan type (fusion laws $\mathcal{J}(\eta)$) and Monster type  [@survey]. In the proof of Theorem [Theorem 1](#th:1){reference-type="ref" reference="th:1"}, we use methods developed earlier for axial algebras of Jordan type and show how they can be applied to other classes of algebras. This gives us hope for a further expansion of axial algebras within the world of non-associative algebras. In fact, to show that every $\mathcal{PC}(\eta)$-axial algebra is a pseudo-composition algebra or train algebra of rank $3$, we investigate subalgebras generated by three axes. As a separate independent result, we describe such 3-generated algebras. Consider a primitive axis $x$ of an axial algebra $A$ over a field $\mathbb{F}$. If $y\in A$, then denote by $\varphi_x(y)$ the element of $\mathbb{F}$ such that the projection of $y$ on $A_1(x)$ equals $\varphi_x(y)x$. **Theorem 2**. *Suppose that $\mathbb{F}$ is a field of characteristic not two and $\eta\in\mathbb{F}\setminus\{\frac{1}{2}, 1\}$. If $A$ is a $\mathcal{PC}(\eta)$-axial algebra over $\mathbb{F}$ generated by primitive axes $a$, $b$, and $c$, then $A$ is the span of $a$, $b$, $c$, $ab$, $bc$, $ac$, $a(bc)$, and $b(ac)$: in particular $\dim A\leq8$. Furthermore, if $\alpha = \varphi_a(b)$, $\beta =\varphi_b(c)$, $\gamma = \varphi_c(a)$, and $\psi=\varphi_a(bc)$, then* 1. *$A_{\eta}(a)=\langle \alpha a+b-2ab, \gamma a+c-2ac, \psi a+bc-2a(bc)\rangle$;* 2. 
*$A_{1/2}(a)=\langle \alpha(\eta-1)a-\eta b+ab, \gamma(\eta-1)a-\eta c+ac, \psi(\eta-1)a-\eta bc+a(bc)$,\ $(2\eta^2-1)(2\psi-\beta)a-\eta\gamma b+(\eta-2\eta^2)\alpha c-bc+2b(ac)\rangle$.*

Note that if $\eta=-1$ then $\operatorname{char}\mathbb{F}\neq3$ since $\eta\neq\frac{1}{2}$. If $\eta\neq-1$ then the characteristic can equal three, in contrast to the assumptions of Theorem [Theorem 1](#th:1){reference-type="ref" reference="th:1"}. We show in Tables [\[t:prod\]](#t:prod){reference-type="ref" reference="t:prod"} and [\[t:prod2\]](#t:prod2){reference-type="ref" reference="t:prod2"} how to multiply elements in the algebra from this theorem depending on $\eta$. Moreover, we prove that if a basis for an 8-dimensional algebra has the multiplication table given in Tables [\[t:prod\]](#t:prod){reference-type="ref" reference="t:prod"} or [\[t:prod2\]](#t:prod2){reference-type="ref" reference="t:prod2"}, then it is indeed a $\mathcal{PC}(\eta)$-axial algebra generated by three primitive axes. Based on Theorem [Theorem 2](#th:2){reference-type="ref" reference="th:2"} and similarly to [@hss2 Problem 1(ii)], we formulate the following problem.

**Question 1**. *Suppose that $A$ is a $\mathcal{PC}(\eta)$-axial $\mathbb{F}$-algebra generated by a finite set of primitive axes. Is $A$ finite-dimensional over $\mathbb{F}$?*

This paper is organized as follows. In Section 2, we describe $\mathcal{PC}(\eta)$-axial algebras generated by two primitive axes. This allows us to prove that any $\mathcal{PC}(\eta)$-axial algebra admits a Frobenius form and is spanned by a set of axes. In Section 3, we prove Theorem [Theorem 1](#th:1){reference-type="ref" reference="th:1"}. Section 4 is devoted to the case of $\mathcal{PC}(\eta)$-axial algebras generated by three primitive axes and a proof of Theorem [Theorem 2](#th:2){reference-type="ref" reference="th:2"}.
# $\mathcal{PC}(\eta)$-axial algebras generated by two primitive axes

In this section, we investigate 2-generated subalgebras of $\mathcal{PC}(\eta)$-axial algebras. This allows us to prove that every primitive $\mathcal{PC}(\eta)$-axial algebra admits a Frobenius form and is spanned by a set of axes. This proof follows the ideas outlined in [@hss Section 4]. Finally, we show that for $\eta\neq-1$ the fusion law is necessarily stricter than required by the definition. Before moving on to the main part of the section, we want to emphasize that a description of 2-generated subalgebras in the more general context of $J(\alpha,\beta)$-algebras is available in [@Whybrow2]. Provided that $A$ is the linear span of its axes, the proof of the existence of a Frobenius form in [@hss] for axial algebras of Jordan type also works for $J(\alpha,\beta)$-algebras.

Throughout, we suppose $\mathbb{F}$ is a field of characteristic not two and $\eta\in\mathbb{F}\setminus\{1,\frac{1}{2}\}$. Fix a $\mathcal{PC}(\eta)$-axial algebra $A$. Recall that if $a$ is a primitive axis, then $A=A_1(a)\oplus A_\frac{1}{2}(a)\oplus A_{\eta}(a)$, where $A_\lambda(a)=\{x\in A~|~ax=\lambda x\}$ and $A_1(a)$ is spanned by $a$. Moreover, for $\lambda,\mu\in\{1,\frac{1}{2},\eta\}$, we have $A_\lambda(a)A_\mu(a)\subseteq A_{\lambda\ast\mu}(a)$, where the fusion law $\lambda\ast\mu$ is described in Table [3](#t:1){reference-type="ref" reference="t:1"}. Note that the fusion law $\mathcal{PC}(\eta)$ is $\mathbb{Z}_2$-graded. Indeed, if $a$ is an axis and $A_+:=A_1(a)\oplus A_\eta(a)$ and $A_-:=A_\frac{1}{2}(a)$, then $A = A_+\oplus A_-$, where $A_+A_+,A_-A_-\subseteq A_+$ and $A_+A_-\subseteq A_-$. This decomposition allows us to define an automorphism of $A$, called *the Miyamoto involution*, $\tau_a:A\rightarrow A$ such that if $x=x_1+x_2$, where $x_1\in A_+$ and $x_2\in A_-$, then $x^{\tau_a}=x_1-x_2$.
Clearly, if $b$ is an axis in $A$, then so is $b^{\tau_a}$, and $A_1(b^{\tau_a})=A_1(b)^{\tau_a}$, $A_\frac{1}{2}(b^{\tau_a})=A_\frac{1}{2}(b)^{\tau_a}$, and $A_\eta(b^{\tau_a})=A_\eta(b)^{\tau_a}$. In what follows, we will use these properties without explanation.

First, we describe $\mathcal{PC}(\eta)$-axial algebras generated by two primitive axes. We will use double angular brackets $\langle\!\langle~ \rangle\!\rangle$ to indicate subalgebra generation, leaving single brackets for the linear span.

**Proposition 1**. *Let $A$ be a $\mathcal{PC}(\eta)$-axial algebra. Then its subalgebra $\langle \langle a,b \rangle \rangle$, generated by two primitive axes $a$ and $b$, is spanned by $a$, $b$, and $ab$. If $b = \alpha a + b_{\eta} + b_{\frac{1}{2}}$, where $b_\eta\in A_\eta(a)$, $b_{\frac{1}{2}}\in A_\frac{1}{2}(a)$, and $\alpha\in\mathbb{F}$, then the projection of $a$ on $A_1(b)$ equals $\alpha b$ and the following product rules hold: $$a(ab)=\frac{1}{2} \bigl((1-\eta)\alpha a - \eta b + (1+2\eta) ab\bigr),$$ $$(ab)^2=\frac{1}{4}\Bigl(\bigl((1-\eta)(1-2\eta)\alpha-\eta(1+2\eta)\bigr)(a+b)+\bigl(2\alpha(1-\eta)(1+2\eta)+6\eta+4\eta^2\bigr)ab\Bigr).$$*

*Proof.* From the definition of $b_\eta$ and $b_{\frac{1}{2}}$, we get that $ab = \alpha a + \eta b_{\eta} + \frac{1}{2}b_{\frac{1}{2}}$. Linear combinations of the expression for $ab$ with $b = \alpha a + b_{\eta} + b_{\frac{1}{2}}$ imply that $$b_{\eta} = \frac{1}{1-2\eta}(\alpha a + b - 2ab),$$ $$b_{\frac{1}{2}} = \frac{2}{1-2\eta}(\alpha (\eta-1) a -\eta b +ab).$$ Using these expressions, we find that $$a(ab) = \alpha a + \eta^2 b_{\eta} + \frac{1}{4} b_{\frac{1}{2}}= \frac{1}{2}\bigl((1-\eta)\alpha a - \eta b + (1+2\eta) ab\bigr).$$ Write $a = \beta b + a_{\eta} + a_{\frac{1}{2}}$, where $a_\eta\in A_\eta(b)$, $a_{\frac{1}{2}}\in A_\frac{1}{2}(b)$, and $\beta\in\mathbb{F}$.
Due to the symmetry of $a$ and $b$, we have $$b(ab)= \frac{1}{2} (- \eta a + (1-\eta)\beta b + (1+2\eta) ab).$$ Substituting the expressions for $b$ and $ab$ with respect to $a$ into the right side, we find that $$\begin{gathered} b(ab)= \frac{1}{2}\bigl(-\eta a + (1-\eta)\beta (\alpha a + b_{\eta} + b_{\frac{1}{2}}) + (1+2\eta) (\alpha a + \eta b_{\eta} + \frac{1}{2}b_{\frac{1}{2}})\bigr)\\= \frac{1}{2}\Bigl(\bigl(-\eta +(1-\eta)\alpha\beta + (1+2\eta)\alpha\bigr)a+ \bigl((1-\eta)\beta + (1+2\eta)\eta\bigr)b_{\eta} +\bigl((1-\eta)\beta +\frac{1}{2}(1+2\eta)\bigr)b_{\frac{1}{2}}\Bigr).\end{gathered}$$ Calculating this expression in a different way by multiplying $ab$ and $b$ written with respect to $a$, we get that $$\begin{gathered} b(ab) = (\alpha a + \eta b_{\eta} + \frac{1}{2}b_{\frac{1}{2}})(\alpha a + b_{\eta} + b_{\frac{1}{2}})\\=\alpha^2 a + \alpha \eta b_{\eta} +\frac{\alpha}{2}b_{\frac{1}{2}} +\alpha \eta^2 b_{\eta} +\eta b_{\eta}^2+\eta b_{\eta}b_{\frac{1}{2}}+ \frac{\alpha}{4}b_{\frac{1}{2}}+\frac{1}{2}b_{\frac{1}{2}}b_{\eta}+\frac{1}{2}b_{\frac{1}{2}}^2.\end{gathered}$$ Subtracting one of the representations of $b(ab)$ from the other and equating the summands from $A_\frac{1}{2}(a)$ to zero, we find that $$\bigl(\frac{1}{2}(1-\eta)\beta +\frac{1}{4}(1+2\eta)-\frac{3\alpha}{4}\bigr)b_{\frac{1}{2}}-(\eta+\frac{1}{2})b_{\eta}b_{\frac{1}{2}}=0.$$ This implies that $$b_{\eta} b_{\frac{1}{2}} = \frac{1}{2(2\eta+1)}\bigl(2(1-\eta)\beta+(1+2\eta)-3\alpha\bigr)b_{\frac{1}{2}} .$$ Similarly, considering the odd part in the equality $b\cdot b -b=0$, we find that $$(\alpha-1)b_{\frac{1}{2}}+2b_{\eta}b_{\frac{1}{2}}=0.$$ Therefore, $$b_{\eta}b_{\frac{1}{2}} =\frac{1-\alpha}{2}b_{\frac{1}{2}}.$$ Comparing the expressions for $b_{\eta}b_{\frac{1}{2}}$, we infer that $$2(1-\eta)\beta+(2\eta+1)-3\alpha=(1-\alpha)(2\eta+1).$$ This implies that $2(\eta-1)(\alpha-\beta)=0$ and hence $\alpha=\beta$ since $\eta\neq 1$. 
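As an independent consistency check of the rule for $a(ab)$ derived above (a sketch of ours, using exact rational arithmetic): on the span of $a$, $b$, $ab$ the adjoint map $ad_a$ has characteristic polynomial $(x-1)(x-\eta)(x-\frac{1}{2})$, for every value of $\alpha$.

```python
from fractions import Fraction as F

def char_poly_coeffs(eta, alpha):
    # ad_a on the ordered basis (a, b, ab): columns are a*a = a, a*b = ab,
    # and a(ab) = ((1-eta)*alpha*a - eta*b + (1+2*eta)*ab)/2.
    m = [[F(1), F(0), (1 - eta) * alpha / 2],
         [F(0), F(0), -eta / 2],
         [F(0), F(1), (1 + 2 * eta) / 2]]
    # For a 3x3 matrix, det(xI - M) = x^3 - tr*x^2 + c2*x - det.
    tr = m[0][0] + m[1][1] + m[2][2]
    c2 = (m[0][0] * m[1][1] - m[0][1] * m[1][0]
          + m[0][0] * m[2][2] - m[0][2] * m[2][0]
          + m[1][1] * m[2][2] - m[1][2] * m[2][1])
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
           - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
           + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return tr, c2, det

for eta in (F(-1), F(3), F(2, 7)):
    for alpha in (F(0), F(1), F(5, 3)):
        tr, c2, det = char_poly_coeffs(eta, alpha)
        # Compare with the elementary symmetric functions of 1, eta, 1/2.
        roots = (F(1), eta, F(1, 2))
        assert tr == sum(roots)
        assert c2 == roots[0] * roots[1] + roots[0] * roots[2] + roots[1] * roots[2]
        assert det == roots[0] * roots[1] * roots[2]
```

In particular, the spectrum of $ad_a$ on $\langle a,b,ab\rangle$ does not depend on $\alpha$, in agreement with the eigenvector formulas for $b_\eta$ and $b_{\frac{1}{2}}$.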
It remains to express $(ab)^2$ in terms of $a$, $b$, and $ab$. Consider the action of the Miyamoto involution $\tau_a$: $$b^{\tau_a}=\alpha a + b_{\eta} - b_{\frac{1}{2}}=b-2b_{\frac{1}{2}}= \frac{1}{(1-2\eta)}(4\alpha (1-\eta) a + (1+2\eta) b -4ab).$$ Using the fact that $b^{\tau_a}$ is an idempotent, we get that $$\bigl(\frac{1}{(1-2\eta)}(4\alpha (1-\eta) a + (1+2\eta) b -4ab)\bigr)^2-\frac{1}{(1-2\eta)}(4\alpha (1-\eta) a + (1+2\eta) b -4ab)=0.$$ Opening the square and substituting expressions for $(ab)a$ and $(ab)b$ with $\beta=\alpha$, we find that $$\begin{gathered} \frac{1}{(1-2\eta)^2}\Bigl(16\alpha^2 (1-\eta)^2 a + (1+2\eta)^2 b +16(ab)^2 +8\alpha (1-\eta) (1+2\eta) ab \\-32\alpha (1-\eta) \frac{1}{2} \bigl((1-\eta)\alpha a - \eta b + (1+2\eta) ab)\bigr)-8(1+2\eta)\frac{1}{2}\bigl(- \eta a + (1-\eta)\alpha b + (1+2\eta) ab\bigr)\Bigr) \\-\frac{1}{(1-2\eta)}(4\alpha (1-\eta) a + (1+2\eta) b -4ab)=0.\end{gathered}$$ This allows us to find the desired expression: $$(ab)^2=\frac{1}{4}\Bigl(\bigl((1-\eta)(1-2\eta)\alpha-\eta(1+2\eta)\bigr)(a+b)+\bigl(2\alpha(1-\eta)(1+2\eta)+6\eta+4\eta^2\bigr)ab\Bigr).$$ Therefore, we see that the subspace $\langle a,b,ab\rangle$ is closed under the algebra multiplication, so $a$, $b$, and $ab$ span the subalgebra $\langle\langle a,b\rangle\rangle$. ◻ **Remark 1**. *The corresponding expression for $(ab)^2$ in [@Whybrow2 Table 1] contains a misprint.* Note that if $\eta=-1$, then the product rules in this proposition indeed define a $\mathcal{PC}(-1)$-axial algebra generated by two primitive axes $a$ and $b$. This will follow from Corollary [Corollary 5](#cor:minus-one){reference-type="ref" reference="cor:minus-one"} below. If $\eta\neq-1$, then additional restrictions must be added (see Proposition [Proposition 4](#p:train){reference-type="ref" reference="p:train"} and Corollary [Corollary 3](#cor:fusion-restricion){reference-type="ref" reference="cor:fusion-restricion"}). **Corollary 1**. 
*If $a\neq b$ and $\eta=-1$, then $\dim\langle \langle a,b \rangle \rangle=2$ exactly in the following two cases:*

*1) $b_{-1}=0$. Then $ab=\frac{\alpha a+b}{2}$. Writing with respect to $b$, we see that $\alpha b-a_{-1}+\frac{1}{2}a_{\frac{1}{2}}=\frac{1}{2}((\alpha^2+1)b+\alpha a_{-1}+\alpha a_{\frac{1}{2}})$. Therefore, $\alpha=1$ and $a_{-1}=0$. In this case $ab=\frac{a+b}{2}$ is also an idempotent. Calculating further $a(ab)=\frac{3a+b}{4}$, we see that there should be many axes.*

*2) $b_{\frac{1}{2}}=0$. Then $ab=-b+2\alpha a$ and hence $\alpha b-a_{-1}+\frac{1}{2}a_{\frac{1}{2}}=(2\alpha^2-1)b+2\alpha a_{-1}+2\alpha a_{\frac{1}{2}}$. Therefore, $\alpha=-\frac{1}{2}$ and $a_{\frac{1}{2}}=0$. In this case $ab=-a-b$.*

**Proposition 2**. *Let $A$ be a $\mathcal{PC}(\eta)$-axial algebra. Then $A$ is the linear span of its set of primitive axes. In particular, there exists a basis consisting of primitive axes.*

*Proof.* Suppose that $A$ is generated by two primitive axes $a$ and $b$. By Proposition [Proposition 1](#p:2gen){reference-type="ref" reference="p:2gen"}, $A$ is spanned by $a$, $b$, and $ab$. In the proof of Proposition [Proposition 1](#p:2gen){reference-type="ref" reference="p:2gen"}, it was shown that $$b^{\tau_a}=\frac{1}{(1-2\eta)}(4\alpha (1-\eta) a + (1+2\eta) b -4ab).$$ Therefore, $A$ is spanned by three axes $a$, $b$, and $b^{\tau_a}$. We now move to the general case of $A$. Denote by $B$ the linear span of the set of primitive axes from $A$. We prove by induction on $n$ that every word $w(a_1,\ldots,a_n)$ that is a product of $n$ primitive axes $a_1,\ldots,a_n\in A$ lies in $B$. Clearly, the statement is true when $n\leq 2$. Assume $n>2$. Write $w=w_1 \cdot w_2$, where $w_1$ and $w_2$ have smaller lengths. By induction, we may write $w_1=\alpha_1 b_1 + \ldots+\alpha_k b_k$ and $w_2=\beta_1 c_1 + \ldots +\beta_m c_m$ for some primitive axes $\{b_i\}_{1\leq i\leq k}$ and $\{c_j\}_{1\leq j\leq m}$ with coefficients $\alpha_i,\beta_j\in\mathbb{F}$.
Therefore, the product $w=w_1 \cdot w_2$ is a linear combination of products $b_ic_j$ of two axes, each of which has already been shown to lie in $B$. Thus, $A$ coincides with $B$. ◻

Proposition [Proposition 2](#p:basis){reference-type="ref" reference="p:basis"} allows us to prove, as in [@hss], the existence of a *Frobenius form* for every $\mathcal{PC}(\eta)$-axial algebra $A$. Recall that this means a non-zero symmetric bilinear form that is associative with respect to the algebra product, that is, $(ab,c)=(a,bc)$ for all $a,b,c\in A$.

**Proposition 3**. *Let $A$ be a $\mathcal{PC}(\eta)$-axial algebra. Then $A$ admits a unique Frobenius form such that $(a,a)=1$ for all primitive axes $a\in A$.*

*Proof.* If $a$ is a primitive axis in $A$ and $x\in A$, then let $\varphi_a(x)$ be equal to the projection coefficient of $x$ onto $A_1(a)$. Now we define the Frobenius form. By Proposition [Proposition 2](#p:basis){reference-type="ref" reference="p:basis"}, there exists a basis $\mathcal{B}$ of $A$ consisting of primitive axes. For $a,b\in\mathcal{B}$, define $(a,b)=\varphi_a(b)$. Extending this by linearity, we obtain a bilinear form on $A$. In particular, if $u\in A$ and $a\in\mathcal{B}$, then $(a,u)=\varphi_a(u)$. Since $(a,a)=1$ for every $a\in\mathcal{B}$, the form is non-zero. It follows from Proposition [Proposition 1](#p:2gen){reference-type="ref" reference="p:2gen"} that $(a,b)=(b,a)$, so by linearity the form is symmetric. Now we show that $(a,u)=\varphi_a(u)$ for every primitive axis $a\in A$ and $u\in A$. Write $u=\sum\limits_{b\in\mathcal{B}}\alpha_bb$. Then, using Proposition [Proposition 1](#p:2gen){reference-type="ref" reference="p:2gen"}, we see that $$(a,u)=\sum\limits_{b\in\mathcal{B}}\alpha_b (a,b)=\sum\limits_{b\in\mathcal{B}}\alpha_b (b,a)=\sum\limits_{b\in\mathcal{B}}\alpha_b\varphi_b(a)=\sum\limits_{b\in\mathcal{B}}\alpha_b\varphi_a(b)=\varphi_a(u).$$ Note that the form is invariant under the action of the Miyamoto involution $\tau_a$ for every primitive axis $a\in A$.
Indeed, if $b,c\in\mathcal{B}$ and $b=\alpha c+b_\eta+b_\frac{1}{2}$, where $b_\eta\in A_\eta(c)$ and $b_\frac{1}{2}\in A_\frac{1}{2}(c)$, then $b^{\tau_a}=\alpha c^{\tau_a}+b_\eta^{\tau_a}+b_\frac{1}{2}^{\tau_a}$. This implies that $(b,c)=\alpha=(b^{\tau_a},c^{\tau_a})$.

Now we verify the identity $(xy,z)=(x,yz)$, which is linear in $x$, $y$, and $z$. We may assume that $y$ is a primitive axis, and that $x$ and $z$ are eigenvectors of $ad_y$ with eigenvalues $\mu$ and $\lambda$, respectively. If $\mu = \lambda$, then $(xy,z)=(\mu x,z)=\mu(x,z)=\lambda(x,z)=(x,\lambda z)=(x,yz)$. If $\mu \not = \lambda$, then it is sufficient to show that $(x,z)=0$. So we are left to prove that the decomposition $A=A_1(y)\oplus A_\eta(y)\oplus A_{\frac{1}{2}}(y)$ is orthogonal with respect to the form. If $x\in A_{\eta}(y) \oplus A_{\frac{1}{2}}(y)$, then $(y,x)=\varphi_y(x)=0$. By the symmetry, we may assume that $x \in A_{\eta}(y)$ and $z\in A_{\frac{1}{2}}(y)$. Now $(x,z)=(x^{\tau_y},z^{\tau_y})=(x,-z)=-(x,z)$, therefore $(x,z)=0$. Thus, the form is Frobenius.

Conversely, suppose that a Frobenius form $(\cdot,\cdot)$ on $A$ satisfies the condition $(a,a)=1$ for all primitive axes $a\in A$. Since the form is Frobenius, we see that the summands in the decomposition $A=A_1(a)\oplus A_\eta(a)\oplus A_{\frac{1}{2}}(a)$ are orthogonal with respect to the form. Take $a\in\mathcal{B}$ and $u\in A$. Then $u=\varphi_a(u)a+u_\eta+u_\frac{1}{2}$, where $u_\eta\in A_\eta(a)$ and $u_\frac{1}{2}\in A_\frac{1}{2}(a)$. Now $(a,u)=(a,\varphi_a(u)a)+(a,u_\eta)+(a,u_\frac{1}{2})=\varphi_a(u)$. Therefore, this form coincides with the Frobenius form constructed above. The uniqueness is proved. ◻

**Proposition 4**. *Suppose that $\eta\neq-1$ and $A$ is a $\mathcal{PC}(\eta)$-axial algebra. Denote the Frobenius form on $A$ from Proposition [Proposition 3](#p:form){reference-type="ref" reference="p:form"} by $(\cdot,\cdot)$. Then the following statements hold.* 1.
*If $a$ and $b$ are primitive axes in $A$, then $(a,b)=1$.* 2. *If $a$ is an axis in $A$, then the map $w_a:A\rightarrow\mathbb{F}$ defined by $w_a(x)=(a,x)$, is an $\mathbb{F}$-algebra homomorphism. Moreover, for every primitive axis $b\in A$ we have $w_a(x)=w_b(x)$ and for all $x,y\in A$ we have $(x,y)=w_a(xy)$.* *Proof.* Suppose that $a$ and $b$ are primitive axes in $A$. Prove that $(a,b)=1$. This is true if $a=b$, so we can assume that $a\neq b$. Write $b=\alpha a+b_\eta+b_\frac{1}{2}$, where $\alpha\in\mathbb{F}$, $b_\eta\in A_\eta(a)$, and $b_\frac{1}{2}\in A_\frac{1}{2}(a)$. In the proof of Proposition [Proposition 3](#p:form){reference-type="ref" reference="p:form"} we see that $\alpha=(a,b)=w_a(b)$. It is shown in Proposition [Proposition 1](#p:2gen){reference-type="ref" reference="p:2gen"} that $b_{\eta} = \frac{1}{1-2\eta}(\alpha a + b - 2ab)$. Using Proposition [Proposition 1](#p:2gen){reference-type="ref" reference="p:2gen"}, we find $((1-2\eta)b_{\eta})^2:$ $$\begin{gathered} (\alpha a + b - 2ab)^2=\alpha^2a+b+4(ab)^2+2\alpha ab-4\alpha a(ab)-4b(ab)=\alpha^2 a+b+4(ab)^2+2\alpha ab\\-2\alpha\bigl((1-\eta)\alpha a - \eta b + (1+2\eta) ab\bigr)-2\bigl((1-\eta)\alpha b - \eta a + (1+2\eta) ab\bigr)\\= (\alpha^2(2\eta-1)+2\eta)a+(1+2\alpha\eta-2\alpha(1-\eta) )b+(-4\alpha\eta-2-4\eta)ab+4(ab)^2.\end{gathered}$$ Substituting the value of $(ab)^2$ from Proposition [Proposition 1](#p:2gen){reference-type="ref" reference="p:2gen"}, we find that $$\begin{gathered} (\alpha a + b - 2ab)^2= \bigl(\alpha^2(2\eta-1)+2\eta+(1-\eta)(1-2\eta)\alpha-\eta(1+2\eta)\bigr)a\\+ \bigl(1+2\alpha(2\eta-1)+(1-\eta)(1-2\eta)\alpha-\eta(1+2\eta)\bigr)b\\+ \bigl(2\alpha(1-\eta)(1+2\eta)+6\eta+4\eta^2-4\alpha\eta-2-4\eta\bigr)ab \\= (2\eta-1)(\alpha^2+\alpha(\eta-1)-\eta)a+ (2\eta-1)(2\alpha+\alpha(\eta-1)-1-\eta)b\\+ \bigl(2\alpha(-2\eta^2+\eta+1-2\eta)+(2\eta-1)(2\eta+2)\bigr)ab= (2\eta-1)(\alpha-1)\bigl( (\alpha+\eta)a+(1+\eta)b-2(\eta+1)ab\bigr).\end{gathered}$$ By the fusion 
rules, we know that $((1-2\eta)b_{\eta})^2\in A_1(a)$, so $$(2\eta-1)(\alpha-1)\bigl( (\alpha+\eta)a+(1+\eta)b-2(\eta+1)ab\bigr)\in A_1(a).$$ Suppose that $\alpha\neq1$. Then, since $1+\eta\neq0$, we infer that $b-2ab\in A_1(a)$. On the other hand, $b-2ab=-\alpha a+(1-2\eta)b_\eta$. Therefore, we have $b_\eta=0$. So $2ab=b+\alpha a$. By the symmetry of $a$ and $b$, we have $b+\alpha a=a+\alpha b$, which is equivalent to $(1-\alpha)(a-b)=0$. Since $a\neq b$, we infer that $\alpha=1$; a contradiction. Thus, if $a$ and $b$ are primitive axes, then $(a,b)=1$. We know that there exists a basis $\mathcal{B}$ of $A$ consisting of primitive axes. Suppose that $x,y\in A$ and $x=\sum\lambda_i a_i$, $y=\sum\mu_i a_i$, where $\lambda_i,\mu_i\in\mathbb{F}$, $a_i\in\mathcal{B}$, and $i$ runs over a finite set of indices. We first show that $w_a(x)=w_b(x)$: $$w_a(x)-w_b(x)=(a,x)-(b,x)=(a,\sum\limits_i\lambda_i a_i)- (b,\sum\limits_i\lambda_i a_i)=\sum\limits_i\lambda_i-\sum\limits_i\lambda_i=0.$$ Clearly, $w_a$ is an $\mathbb{F}$-linear map. It remains to prove that $w_a(xy)=w_a(x)w_a(y)$ and $(x,y)=w_a(xy)$. Using that $w_a(a_ia_j)=w_{a_i}(a_ia_j)$ for all $i$ and $j$, we find that $$\begin{gathered} w_a(xy)=(a,xy)=(a,\sum\limits_{i,j}\lambda_i\mu_j a_i a_j)= \sum\limits_{i,j}\lambda_i\mu_j(a, a_i a_j)=\sum\limits_{i,j}\lambda_i\mu_j(a_i, a_i a_j)\\ =\sum\limits_{i,j}\lambda_i\mu_j(a_ia_i, a_j)=\sum\limits_{i,j}\lambda_i\mu_j(a_i,a_j)=\sum\limits_{i,j}\lambda_i\mu_j.\end{gathered}$$ On the other hand, $$w_a(x)w_a(y)=(a,x)(a,y)=(a,\sum\limits_i\lambda_ia_i)(a,\sum\limits_j\mu_ja_j)=(\sum\limits_i\lambda_i)(\sum\limits_j\mu_j)=\sum\limits_{i,j}\lambda_i\mu_j.$$ Therefore, $w_a(xy)=w_a(x)w_a(y)$. Similarly, we see that $(x,y)=\sum\limits_{i,j}\lambda_i\mu_j=w_a(xy)$. ◻

**Corollary 2**.
*If $\eta\neq-1$, then every $\mathcal{PC}(\eta)$-axial algebra is a baric algebra.*

It also follows from Proposition [Proposition 4](#p:train){reference-type="ref" reference="p:train"} that in the case $\eta\neq-1$ the fusion rules are in fact stricter than the definition requires.

**Corollary 3**. *If $\eta\neq-1$ and $a$ is a primitive axis in a $\mathcal{PC}(\eta)$-axial algebra $A$, then $A_\eta(a)^2=\{0\}$ and $A_\frac{1}{2}(a)^2\subseteq A_\eta(a)$.*

*Proof.* As in Proposition [Proposition 4](#p:train){reference-type="ref" reference="p:train"}, define the map $w:A\rightarrow\mathbb{F}$ by $w(x)=(a,x)$, where $x\in A$. If $x,y\in A_\eta(a)$, then $w(x)=w(y)=0$ since $A_\eta(a)$ and $A_1(a)$ are orthogonal with respect to the form. Therefore, we find that $w(xy)=w(x)w(y)=0$. On the other hand, by the fusion rules, we have $xy\in A_1(a)$ and hence $xy=(xy,a)a=w(xy)a$. So $xy=0$. Similarly, if $x,y\in A_\frac{1}{2}(a)$, then $w(x)=w(y)=0$ and hence $w(xy)=w(x)w(y)=0$. On the other hand, by the fusion rules, we have $xy\in A_1(a)\oplus A_\eta(a)$ and hence $xy=(xy,a)a+u$, where $u\in A_\eta(a)$. Since $w(xy)=0$, we have $xy=u\in A_\eta(a)$. ◻

In the case of axial algebras of Jordan type, subgroups of the automorphism group generated by Miyamoto involutions play an important role (see, e.g., [@hss2]). To conclude this section, we prove the following assertion, a consequence of Proposition 1, which is of independent interest and may be useful in further research.

**Lemma 1**. *Suppose that $a$ and $b$ are primitive axes in a $\mathcal{PC}(\eta)$-axial algebra.
If $\dim\langle\langle a,b\rangle\rangle=3$, then $\tau_a$ and $\tau_b$ have the following matrices on $\langle\langle a,b\rangle\rangle$ with respect to the basis $a$, $b$, and $ab$[^2], where $\alpha=(a,b)$: $$[\tau_a]= \frac{1}{1-2\eta} \begin{pmatrix} 1-2\eta & 0 & 0 \\ 4\alpha (1-\eta) & 1+2\eta & -4 \\ 2\alpha (1-\eta) & 2\eta & -1-2\eta \end{pmatrix}, \quad [\tau_b]= \frac{1}{1-2\eta} \begin{pmatrix} 1+2\eta & 4\alpha (1-\eta) & -4 \\ 0 & 1-2\eta & 0 \\ 2\eta & 2\alpha (1-\eta) & -1-2\eta \end{pmatrix}.$$*

*Proof.* Clearly, $a^{\tau_a}=a$. We saw in the proof of Proposition [Proposition 1](#p:2gen){reference-type="ref" reference="p:2gen"} that $$b^{\tau_a}=\frac{1}{(1-2\eta)}(4\alpha (1-\eta) a + (1+2\eta) b -4ab).$$ Now we find that $$\begin{gathered} (ab)^{\tau_a}=ab^{\tau_a}=\frac{1}{(1-2\eta)}(4\alpha (1-\eta) a + (1+2\eta) b -4ab)a \\=\frac{1}{(1-2\eta)}(4\alpha (1-\eta) a + (1+2\eta) ab -4a(ab)).\end{gathered}$$ Substituting the expression for $a(ab)$ from Proposition [Proposition 1](#p:2gen){reference-type="ref" reference="p:2gen"}, we get that $$(ab)^{\tau_a}=\frac{1}{1-2\eta}(2\alpha (1-\eta) a +2\eta b -(1+2\eta) ab).$$ Thus we have found all entries of the matrix of $\tau_a$. By the symmetry of $a$ and $b$, we get the matrix for $\tau_b$. ◻

# Proof of Theorem [Theorem 1](#th:1){reference-type="ref" reference="th:1"} {#proof-of-theorem-th1}

Throughout this section we suppose that $\mathbb{F}$ is a field of characteristic not two and $A$ is a primitive $\mathcal{PC}(\eta)$-axial algebra over $\mathbb{F}$, where $\eta\in\mathbb{F}\setminus\{1,\frac{1}{2}\}$. If $\eta=-1$, then we also assume that $\operatorname{char}(\mathbb{F})\neq3$ since otherwise $\eta=-1=\frac{1}{2}$. We denote by $(\cdot,\cdot)$ the Frobenius form on $A$ from Proposition [Proposition 3](#p:form){reference-type="ref" reference="p:form"}. To prove that $A$ is a pseudo-composition algebra if $\eta=-1$, we show that a linearization of the identity $x^3=(x,x)x$ holds on $A$.
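The matrices in the lemma above admit a quick numerical sanity check: $\tau_a$ is an involution, so $[\tau_a]^2$ must be the identity matrix for every admissible $\eta\neq\frac{1}{2}$ and every value of $\alpha=(a,b)$. The following minimal sketch performs this check in exact rational arithmetic; the helper names and the sampled parameter values are illustrative choices, not part of the paper.

```python
from fractions import Fraction as F

def tau_a_matrix(eta, alpha):
    # Matrix of tau_a on <<a, b>> in the basis a, b, ab (Lemma 1).
    k = F(1) / (1 - 2 * eta)
    rows = [
        [1 - 2 * eta, 0, 0],
        [4 * alpha * (1 - eta), 1 + 2 * eta, -4],
        [2 * alpha * (1 - eta), 2 * eta, -(1 + 2 * eta)],
    ]
    return [[k * entry for entry in row] for row in rows]

def mat_mul(m, n):
    # Product of two 3x3 matrices over the rationals.
    return [[sum(m[i][k] * n[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

identity = [[F(1) if i == j else F(0) for j in range(3)] for i in range(3)]

# tau_a is an involution, so its matrix squares to the identity
# for any eta != 1/2; spot-check several rational parameter values.
for eta in (F(-1), F(2), F(1, 3)):
    for alpha in (F(0), F(1), F(5, 7)):
        m = tau_a_matrix(eta, alpha)
        assert mat_mul(m, m) == identity
```

The same check applies verbatim to $[\tau_b]$ after swapping the roles of $a$ and $b$.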
If $\operatorname{char}(\mathbb{F})\neq3$, then the linearization implies the main identity. The same strategy is used for the case $\eta\neq-1$. The proof is split into several assertions.

**Lemma 2**. *Suppose that $a$ is an axis in $A$ and $x\in A$. Then $a(ax)=(a,x)\frac{1-\eta}{2}a-\frac{\eta}{2}x+\frac{1+2\eta}{2}ax$.*

*Proof.* The proof is exactly the same as for the product $a(ab)$ in Proposition [Proposition 1](#p:2gen){reference-type="ref" reference="p:2gen"}. ◻

**Lemma 3**. *Suppose that $a$ is a primitive axis in $A$ and $x,y\in A$. Denote $\alpha=(a,x)$, $\gamma=(a,y)$, $\beta=(x,y)$, and $\psi=(a,xy)$. Write $x=\alpha a+x_\eta+x_\frac{1}{2}$ and $y=\gamma a+y_\eta+y_\frac{1}{2}$, where $x_\eta, y_\eta\in A_\eta(a)$ and $x_\frac{1}{2},y_\frac{1}{2}\in A_\frac{1}{2}(a)$. Then*

1. *$(x_\eta,y_\eta)=\frac{1}{1-2\eta}(\alpha\gamma+\beta-2\psi)$;*

2. *$(x_\frac{1}{2}, y_\frac{1}{2})=\frac{2}{1-2\eta}((\eta-1)\alpha\gamma-\eta\beta+\psi)$.*

*Proof.* Clearly, $ax=\alpha a+\eta x_\eta+\frac{1}{2}x_\frac{1}{2}$ and hence $x_\eta=\frac{1}{1-2\eta}(\alpha a+x-2ax)$. Similarly, we find that $y_\eta=\frac{1}{1-2\eta}(\gamma a+y-2ay)$. Therefore, $$\begin{gathered} (1-2\eta)^2(x_\eta,y_\eta)=(\alpha a+x-2ax, \gamma a+y-2ay)\\= \alpha\gamma(a,a)+\alpha(a,y)-2\alpha(a,ay)+\gamma(x,a)+(x,y)-2(x,ay)-2\gamma(ax,a)-2(ax,y)+4(ax,ay) \\=\alpha\gamma+\alpha\gamma-2\alpha\gamma+\alpha\gamma+\beta-2\psi-2\alpha\gamma-2\psi+4(ax,ay)=-\alpha\gamma+\beta-4\psi+4(ax,ay).\end{gathered}$$ Using Lemma [Lemma 2](#l:a(ax)){reference-type="ref" reference="l:a(ax)"}, we see that $$\begin{gathered} 2(ax,ay)=(2a(ax),y)=(\alpha(1-\eta)a-\eta x+(1+2\eta)ax,y)\\=\alpha(1-\eta)(a,y)-\eta(x,y)+(1+2\eta)(ax,y)=\alpha\gamma(1-\eta)-\eta\beta+(1+2\eta)\psi.\end{gathered}$$ This implies that $$(1-2\eta)^2(x_\eta,y_\eta)=(1-2\eta)\alpha\gamma+(1-2\eta)\beta+(4\eta-2)\psi,$$ and hence $(x_\eta,y_\eta)=\frac{1}{(1-2\eta)}(\alpha\gamma+\beta-2\psi)$, as required.
Since $\beta=(x,y)=(\alpha a+x_\eta+x_\frac{1}{2},\gamma a+y_\eta+y_\frac{1}{2})=\alpha\gamma+(x_\eta,y_\eta)+(x_\frac{1}{2},y_\frac{1}{2})$, we find that $$(x_\frac{1}{2},y_\frac{1}{2})=\beta-\alpha\gamma-\frac{1}{1-2\eta}(\alpha\gamma+\beta-2\psi)=\frac{1}{1-2\eta}(2(\eta-1)\alpha\gamma-2\eta\beta+2\psi),$$ as claimed. ◻

**Lemma 4**. *Suppose that $a$ is a primitive axis and $x,y\in A$. Denote $\alpha=(a,x)$, $\beta=(x,y)$, $\gamma=(a,y)$, and $\psi=(a,xy)$. Then $$a(xy)+x(ay)+y(ax)=\bigl((1+\eta)(\psi-\alpha\gamma)-\eta\beta\bigr)a-\gamma\eta x-\alpha\eta y+(1+\eta)(xy+\gamma ax+\alpha ay).$$*

*Proof.* Write $x=\alpha a+x_\eta+x_\frac{1}{2}$ and $y=\gamma a+y_\eta+y_\frac{1}{2}$, where $x_\eta,y_\eta\in A_\eta(a)$ and $x_\frac{1}{2}, y_\frac{1}{2}\in A_\frac{1}{2}(a)$. Then $$\begin{gathered} a(xy)=a\bigl((\alpha a+x_\eta+x_\frac{1}{2})(\gamma a+y_\eta+y_\frac{1}{2})\bigr)\\=a\bigl(\alpha\gamma a+\alpha\eta y_\eta+\frac{\alpha}{2} y_\frac{1}{2}+\gamma\eta x_\eta+x_\eta y_\eta+x_\eta y_\frac{1}{2}+\frac{\gamma}{2} x_\frac{1}{2}+x_\frac{1}{2}y_\eta+x_\frac{1}{2}y_\frac{1}{2}\bigr)\\=\alpha\gamma a+\alpha\eta^2 y_\eta+\frac{\alpha}{4} y_\frac{1}{2}+\gamma\eta^2 x_\eta+x_\eta y_\eta+\frac{1}{2}x_\eta y_\frac{1}{2}+\frac{\gamma}{4} x_\frac{1}{2}+\frac{1}{2}x_\frac{1}{2}y_\eta+a(x_\frac{1}{2}y_\frac{1}{2}).\end{gathered}$$ Similarly, we see that $$\begin{gathered} x(ay)=(\alpha a+x_\eta+x_\frac{1}{2})(\gamma a+\eta y_\eta+\frac{1}{2}y_\frac{1}{2})\\= \alpha\gamma a+\alpha\eta^2 y_\eta+\frac{\alpha}{4} y_\frac{1}{2}+\gamma\eta x_\eta+\eta x_\eta y_\eta+\frac{1}{2}x_\eta y_\frac{1}{2}+\frac{\gamma}{2}x_\frac{1}{2}+\eta x_\frac{1}{2}y_\eta+\frac{1}{2}x_\frac{1}{2}y_\frac{1}{2}.\end{gathered}$$ By the symmetry of $x$ and $y$, $$y(ax)=\alpha\gamma a+\gamma\eta^2 x_\eta+\frac{\gamma}{4} x_\frac{1}{2}+\alpha\eta y_\eta+\eta x_\eta y_\eta+\frac{1}{2}y_\eta x_\frac{1}{2}+\frac{\alpha}{2} y_\frac{1}{2}+\eta y_\frac{1}{2}x_\eta+\frac{1}{2}x_\frac{1}{2}y_\frac{1}{2}.$$ These
equalities imply that $$\begin{gathered} a(xy)+x(ay)+y(ax)-(1+\eta)xy\\=\alpha\gamma(2-\eta) a+\alpha\eta^2 y_\eta+\gamma\eta^2 x_\eta+\alpha\frac{1-\eta}{2}y_\frac{1}{2}+\gamma\frac{1-\eta}{2} x_\frac{1}{2}+\eta x_{\eta}y_{\eta}+a(x_\frac{1}{2}y_\frac{1}{2})-\eta x_\frac{1}{2}y_\frac{1}{2}.\end{gathered}$$ Since $x_\eta y_\eta\in A_1(a)$, we have $x_\eta y_\eta=(a,x_\eta y_\eta) a=(ax_\eta,y_\eta) a=\eta(x_\eta,y_\eta) a$. Applying Lemma [Lemma 3](#l:b_eta-c_eta){reference-type="ref" reference="l:b_eta-c_eta"}, we find that $\eta x_\eta y_\eta=\frac{\eta^2}{1-2\eta}(\alpha\gamma+\beta-2\psi) a$. Since $x_\frac{1}{2}y_\frac{1}{2}\in A_1(a)\oplus A_\eta(a)$, we have $$a(x_\frac{1}{2}y_\frac{1}{2})-\eta x_\frac{1}{2}y_\frac{1}{2}=(1-\eta)(a,x_\frac{1}{2}y_\frac{1}{2}) a= (1-\eta)(ax_\frac{1}{2},y_\frac{1}{2})a=\frac{1-\eta}{2}(x_\frac{1}{2},y_\frac{1}{2})a.$$ Applying Lemma [Lemma 3](#l:b_eta-c_eta){reference-type="ref" reference="l:b_eta-c_eta"}, we see that $a(x_\frac{1}{2}y_\frac{1}{2})-\eta x_\frac{1}{2}y_\frac{1}{2}=\frac{1-\eta}{1-2\eta}((\eta-1)\alpha\gamma-\eta\beta+\psi) a$. Note that $$\begin{gathered} (1+\eta) ax-\eta x=(1+\eta)\alpha a+(1+\eta)\eta x_\eta+\frac{1+\eta}{2}x_\frac{1}{2}-\alpha\eta a-\eta x_\eta-\eta x_\frac{1}{2}\\=\alpha a+\eta^2 x_{\eta}+\frac{1-\eta}{2} x_\frac{1}{2}.\end{gathered}$$ Similarly, $(1+\eta) ay-\eta y=\gamma a+\eta^2 y_{\eta}+\frac{1-\eta}{2} y_\frac{1}{2}$. 
Using these equalities, we find that $$\begin{gathered} a(xy)+x(ay)+y(ax)-(1+\eta) xy-\gamma((1+\eta) ax-\eta x)-\alpha((1+\eta) ay-\eta y)\\=\alpha\gamma(2-\eta) a-2\alpha\gamma a+\eta x_{\eta}y_{\eta}+a(x_\frac{1}{2}y_\frac{1}{2})-\eta x_\frac{1}{2}y_\frac{1}{2}\\= \bigl(-\alpha\gamma\eta+\frac{\eta^2}{1-2\eta}(\alpha\gamma+\beta-2\psi)+ \frac{1-\eta}{1-2\eta}((\eta-1)\alpha\gamma-\eta\beta+\psi)\bigr) a\\=\frac{1}{1-2\eta}\bigl(-\alpha\gamma\eta(1-2\eta)+\eta^2(\alpha\gamma+\beta-2\psi)+(1-\eta)((\eta-1)\alpha\gamma-\eta\beta+\psi)\bigr)a\\= \frac{1}{1-2\eta}\bigl( (2\eta^2+\eta-1)\alpha\gamma+(2\eta^2-\eta)\beta+(1-\eta-2\eta^2)\psi\bigr)a=\bigl((1+\eta)(\psi-\alpha\gamma)-\eta\beta\bigr)a.\end{gathered}$$ Therefore, $$a(xy)+x(ay)+y(ax)=\bigl((1+\eta)(\psi-\alpha\gamma)-\eta\beta\bigr) a-\gamma\eta x-\alpha\eta y+(1+\eta)(xy+\gamma ax+\alpha ay).$$ ◻

**Corollary 4**. *Suppose that $a$ is a primitive axis in $A$ and $x,y\in A$. The following statements hold.*

1. *If $\eta=-1$, then $a(xy)+x(ay)+y(ax)=(x,y) a+(a,y) x+(a,x) y.$*

2. *If $\eta\neq-1$, then $$a(xy)+x(ay)+y(ax)= (1+\eta)\bigl(w(x)ay+w(y)ax+xy\bigr)-\eta\bigl(w(xy)a+w(y)x+w(x)y\bigr),$$ where $w(x)=(a,x)$, $w(y)=(a,y)$, and $w(xy)=(a,xy)$.*

*Proof.* If $\eta=-1$, the assertion follows from Lemma [Lemma 4](#l:3prod){reference-type="ref" reference="l:3prod"}. If $\eta\neq-1$, then Proposition [Proposition 4](#p:train){reference-type="ref" reference="p:train"} implies that $w(xy)=w(x)w(y)=(x,y)$. Applying these equalities and Lemma [Lemma 4](#l:3prod){reference-type="ref" reference="l:3prod"}, we get the identity for this case. ◻

**Lemma 5**. *Suppose that $\eta=-1$. If $x,y,z\in A$, then $x(yz)+y(zx)+z(xy)=(y,z)x+(x,z)y+(x,y)z$. In particular, $x^3=(x,x)x$ and hence $A$ is a pseudo-composition algebra.*

*Proof.* By Proposition [Proposition 2](#p:basis){reference-type="ref" reference="p:basis"}, $A$ is the span of some set of axes $\{a_i\}_{i\in I}$.
Write $x=\sum\limits_{i=1}^n\lambda_i\cdot a_i$, where $\lambda_i\in\mathbb{F}$ for $1\leq i\leq n$. By Corollary [Corollary 4](#c:3prod){reference-type="ref" reference="c:3prod"}, we have $$a_i(yz)+y(a_iz)+z(a_iy)=(y,z)a_i+(a_i,z)y+(a_i,y)z.$$ Multiplying both sides by $\lambda_i$, we get $$(\lambda_ia_i)(yz)+y((\lambda_ia_i)z)+z((\lambda_ia_i)y)=(y,z)(\lambda_ia_i)+(\lambda_ia_i,z)y+(\lambda_ia_i,y)z.$$ Summing up the equalities for all $i$, we obtain the required equality $x(yz)+y(zx)+z(xy)=(y,z)x+(x,z)y+(x,y)z$. Finally, if $x=y=z$, then we have $3x^3=3(x,x)x$ and hence $x^3=(x,x)x$ since $\operatorname{char}\mathbb{F}\neq3$. ◻

**Lemma 6**. *Suppose that $\eta\neq-1$. Denote by $w$ the $\mathbb{F}$-algebra homomorphism from Proposition [Proposition 4](#p:train){reference-type="ref" reference="p:train"}. Then for all $x,y,z\in A$ we have $$x(yz)+y(xz)+z(yx)=(\eta+1)\bigl(w(z)xy+w(x)yz+w(y)xz\bigr)- \eta\bigl(w(yz)x+w(zx)y+w(xy)z\bigr).$$ In particular, if $\operatorname{char}\mathbb{F}\neq3$, then $x^3=(\eta+1)w(x)x^2-\eta w(x)^2x$ and hence $A$ is a train algebra of rank $3$.*

*Proof.* By Proposition [Proposition 2](#p:basis){reference-type="ref" reference="p:basis"}, $A$ is the span of some set of axes $\{a_i\}_{i\in I}$. Write $x=\sum\limits_{i=1}^n\lambda_i\cdot a_i$, where $\lambda_i\in\mathbb{F}$ for $1\leq i\leq n$.
By Corollary [Corollary 4](#c:3prod){reference-type="ref" reference="c:3prod"}, we have $$\begin{gathered} a_i(yz)+y(a_iz)+z(ya_i)\\= (1+\eta)\bigl(w(a_i)yz+w(y)a_iz+w(z)a_iy\bigr)-\eta\bigl(w(yz)a_i+w(a_iz)y+w(a_iy)z\bigr).\end{gathered}$$ Multiplying both sides by $\lambda_i$, we find that $$\begin{gathered} (\lambda_ia_i)(yz)+y((\lambda_ia_i)z)+z(y(\lambda_ia_i))= (1+\eta)\bigl(w(\lambda_ia_i)yz+w(y)(\lambda_ia_i)z+w(z)(\lambda_ia_i)y\bigr)\\-\eta\bigl(w(yz)(\lambda_ia_i)+w((\lambda_ia_i)z)y+w((\lambda_ia_i)y)z\bigr).\end{gathered}$$ Summing up the equalities for all $i$, we get the required equality $$x(yz)+y(xz)+z(yx)=(\eta+1)\bigl(w(x)yz+w(y)xz+w(z)xy\bigr)-\eta\bigl(w(yz)x+w(xz)y+w(xy)z\bigr).$$ If $x=y=z$ and $\operatorname{char}\mathbb{F}\neq3$, then we have $3x^3=3(\eta+1)w(x)x^2-3\eta w(x)^2x$ and hence $x^3=(\eta+1)w(x)x^2-\eta w(x)^2x$, as claimed. ◻

# $\mathcal{PC}(\eta)$-axial algebras generated by three primitive axes

In this section, we prove Theorem [Theorem 2](#th:2){reference-type="ref" reference="th:2"}. Throughout, we suppose that $\mathbb{F}$ is a field of characteristic not two and $A$ is a primitive $\mathcal{PC}(\eta)$-axial algebra over $\mathbb{F}$, where $\eta\in\mathbb{F}\setminus\{1,\frac{1}{2}\}$. We denote by $(\cdot,\cdot)$ the Frobenius form on $A$ from Proposition [Proposition 3](#p:form){reference-type="ref" reference="p:form"}. It follows from Proposition [Proposition 4](#p:train){reference-type="ref" reference="p:train"} that if $\eta\neq-1$, then there exists an $\mathbb{F}$-algebra homomorphism $w:A\rightarrow\mathbb{F}$ such that $w(xy)=(x,y)$ for all $x,y\in A$.

**Lemma 7**. *Suppose that $a$ is a primitive axis in $A$ and $x,y\in A$.
If $\eta=-1$, then $$\begin{gathered} (ax)(ay)=\frac{1}{4}\bigl((6(a,xy)-3(x,y))a+(a,y)x+(a,x)y\\-2(a,y)ax-2(a,x)ay-xy+2x(ay)+2y(ax)\bigr).\end{gathered}$$* *Proof.* Write $x=\alpha a+x_{-1}+x_\frac{1}{2}$ and $y=\beta a+y_{-1}+y_\frac{1}{2}$, where $\alpha=(a,x),\beta=(a,y)$, $x_{-1},y_{-1}\in A_{-1}(a)$, and $x_\frac{1}{2},y_\frac{1}{2}\in A_\frac{1}{2}(a)$. By the fusion laws, we have $x_{-1}y_{-1}=-(x_{-1}, y_{-1})a$. Since $x_{-1}=\frac{1}{3}(x-2ax+\alpha a)$ and $y_{-1}=\frac{1}{3}(y-2ay+\beta a)$, we may express $(ax)(ay)$ from the product $x_{-1}y_{-1}$: $$(ax)(ay)=\frac{1}{4}\bigl(-9(x_{-1}, y_{-1})a-(x+\alpha a)(y-2ay+\beta a)+2(ax)(y+\beta a)\bigr).$$ By Lemma [Lemma 3](#l:b_eta-c_eta){reference-type="ref" reference="l:b_eta-c_eta"}, we know that $9(x_{-1}, y_{-1})=3(x,y)-6(a,xy)+3\alpha\beta.$ Using Lemma [Lemma 2](#l:a(ax)){reference-type="ref" reference="l:a(ax)"}, we see that $$\begin{gathered} (x+\alpha a)(y-2ay+\beta a)=xy-2x(ay)+\beta ax+\alpha ay-2\alpha a(ay)+\alpha\beta a\\=\alpha\beta a+\beta ax+\alpha ay + xy-2x(ay)-2\alpha(\beta a+\frac{1}{2}y-\frac{1}{2}ay)\\=-\alpha\beta a+\beta ax-\alpha y+2\alpha ay+xy-2x(ay).\end{gathered}$$ Finally, we find that $$2(ax)(y+\beta a)=2y(ax)+2\beta(\alpha a+\frac{1}{2}x-\frac{1}{2}ax)=2\alpha\beta a+\beta x-\beta ax+2y(ax).$$ Combining calculations, we infer that $$(ax)(ay)=\frac{1}{4}\bigl((6(a,xy)-3(x,y))a+\beta x+\alpha y-xy-2\beta ax-2\alpha ay+2x(ay)+2y(ax)\bigr),$$ as claimed. ◻ **Lemma 8**. *Suppose that $a$ is a primitive axis in $A$ and $x,y\in A$. If $\eta\neq-1$, then $$\begin{gathered} (ax)(ay)=\frac{1}{4}\bigl((1-2\eta)w(xy)a-\eta w(y)x-\eta w(x)y\\+2\eta w(y)ax+2\eta w(x)ay-xy+2x(ay)+2y(ax)\bigr). \end{gathered}$$* *Proof.* The proof is similar to that of Lemma [Lemma 7](#l:axay1){reference-type="ref" reference="l:axay1"}. 
◻ $\begin{tabu}[h!]{|c|} \hline a\cdot a = a, b\cdot b=b, c\cdot c=c, a\cdot b = ab, a\cdot c=ac, b\cdot c=bc, \\ \hline a\cdot bc = a(bc), b\cdot ac = b(ac), c\cdot ab =\beta a+\gamma b+\alpha c-a(bc)-b(ac), \\ \hline \begin{tabu}{@{}c@{}} a\cdot ab = \alpha a+\frac{1}{2}b-\frac{1}{2}{ab}, a\cdot ac = \gamma a+\frac{1}{2}c-\frac{1}{2}{ac}, b\cdot ab = \alpha b+\frac{1}{2}a-\frac{1}{2}{ab}, \\ b\cdot bc = \beta b+\frac{1}{2}c-\frac{1}{2}{bc}, c\cdot ac =\gamma c+\frac{1}{2}a-\frac{1}{2}{ac}, c\cdot bc = \beta c+\frac{1}{2}b-\frac{1}{2}{bc}, \end{tabu}\\ \hline a\cdot(a(bc))=\psi a+\frac{1}{2}bc-\frac{1}{2}a(bc), b\cdot(b(ac))=\psi b+\frac{1}{2}ac-\frac{1}{2}b(ac), \\ \hline \begin{tabu}{@{}c@{}} b\cdot a(bc)=\frac{1}{4}( \beta a+(\gamma-2\psi)b-3\alpha c-2\beta ab+ 6\alpha bc-ac+2a(bc)+2b(ac)), \\ c\cdot a(bc)=\frac{1}{4}(3\beta a-\gamma b+(3\alpha-2\psi)c-ab+6\gamma bc-2\beta ac-2b(ac)), \\ a\cdot b(ac)=\frac{1}{4}( (\beta-2\psi)a+\gamma b-3\alpha c-2\gamma ab -bc+6\alpha ac+2a(bc)+2b(ac) ), \\ c\cdot b(ac)=\frac{1}{4}(-\beta a+3\gamma b+(3\alpha-2\psi)c-ab-2\gamma bc+6\beta ac-2a(bc)), \end{tabu} \\ \hline \begin{tabu}{@{}c@{}} (ab)^2=\frac{1}{4}((6\alpha-1)(a+b)-(4\alpha+2)ab), (bc)^2=\frac{1}{4}((6\beta-1)(b+c)-(4\beta+2)bc), \\ (ac)^2=\frac{1}{4}((6\gamma-1)(a+c)-(4\gamma+2)ac), \end{tabu}\\ \hline \begin{tabu}{@{}c@{}} ab\cdot bc=\frac{1}{4}(3\beta a+(6\psi-\gamma)b+3\alpha c-2\beta ab-2\alpha bc-ac-2b(ac)), \\ bc\cdot ac=\frac{1}{4}(\beta a+\gamma b+(6\psi-3\alpha)c-ab-2\gamma bc-2\beta ac + 2a(bc)+2b(ac)), \\ ab\cdot ac=\frac{1}{4}( (6\psi-\beta)a+3\gamma b+3\alpha c-2\gamma ab -bc-2\alpha ac-2a(bc) ), \end{tabu}\\ \hline \begin{tabu}{@{}c@{}} ab\cdot a(bc)=\frac{1}{8}\bigl((12\alpha\beta-2\beta+6\gamma-6\psi)a+(6\psi-2\beta)b-c-4(\beta+\psi)ab\\+(6\alpha+1)bc-2ac+(2-4\alpha)a(bc)\bigr), \\ ab\cdot b(ac)=\frac{1}{8}\bigl((6\psi-2\gamma)a+(12\alpha\gamma+6\beta-2\gamma-6\psi)b-c-4(\gamma+\psi)ab-2bc\\+(6\alpha+1)ac+(2-4\alpha)b(ac)\bigr), \\ bc\cdot 
a(bc)=\frac{1}{8}\bigl( (4\beta^2-2\beta+2)a+(1-6\beta)ab+8\psi bc\\+(1-6\beta)ac+(4\beta+2)a(bc) \bigr), \\ bc\cdot b(ac)=\frac{1}{8}\bigl(-a+(12\beta\gamma+6\alpha-2\gamma-6\psi)b+ (6\psi-2\gamma)c-2ab-4(\gamma+\psi)bc\\+(6\beta+1)ac+(2-4\beta)b(ac)\bigr), \\ ac\cdot a(bc)=\frac{1}{8}\bigl( (12\beta\gamma+6\alpha-2\beta-6\psi)a-b+(6\psi-2\beta)c-2ab\\+(6\gamma+1)bc-4(\beta+\psi)ac+(2-4\gamma)a(bc)\bigr), \\ ac\cdot b(ac)=\frac{1}{8}\bigl( (4\gamma^2-2\gamma+2)b+(1-6\gamma)ab+ (1-6\gamma)bc\\+8\psi ac +(4\gamma+2)b(ac) \bigr), \end{tabu}\\ \hline \begin{tabu}{@{}c@{}} (a(bc))^2=\frac{1}{16}\bigl( (36\alpha\beta-4\beta^2+36\beta\gamma-24\beta\psi-6\alpha+2\beta-6\gamma-12\psi-2)a+(1-6\beta)(b+c)\\+2(1-6\beta)(ab+ac)+(4\beta+24\psi+2)bc+(8\beta-16\psi+4)a(bc) \bigr), \\ (b(ac))^2=\frac{1}{16}\bigl( (1-6\gamma)(a+c)+(36\alpha\gamma+36\beta\gamma-4\gamma^2-24\gamma\psi-6\alpha-6\beta+2\gamma-12\psi-2)b \\ +(2-12\gamma)(ab+bc)+(4\gamma+24\psi+2)ac+(8\gamma-16\psi+4)b(ac)\bigr), \\ a(bc)\cdot b(ac)=\frac{1}{16}\bigl( (12\beta\psi-6\alpha\beta-2\beta^2-14\beta\gamma+\beta+6\gamma+6\psi+1)a\\+(12\gamma\psi-6\alpha\gamma-14\beta\gamma-2\gamma^2+6\beta+\gamma+6\psi+1)b\\+(18\alpha^2+6\alpha\beta+6\alpha\gamma-36\alpha\psi+3\alpha-6\psi-2)c\\+(6\gamma-6\alpha+6\beta-32\beta\gamma+12\psi-1)ab+ (24\alpha\gamma-6\alpha+2\beta+6\gamma-12\psi-1)bc\\+(24\alpha\beta-6\alpha+6\beta+2\gamma-12\psi-1)ac\\+(8\gamma-12\alpha-4\beta+16\psi-4)a(bc) +(8\beta-12\alpha-4\gamma+16\psi-4)b(ac)\bigr) \end{tabu} \\ \hline \end{tabu}$ $\begin{tabu}[h!]{|c||c|c|c|c|c|c|c|c|} \hline (,) & a & b & c & ab & bc & ac & a(bc) & b(ac) \\ \hline\hline a & 1 & \alpha & \gamma & \alpha & \psi & \gamma & \psi & \alpha\gamma+\frac{\beta-\psi}{2} \\ \hline b & & 1 & \beta & \alpha & \beta & \psi & \alpha\beta+\frac{\gamma-\psi}{2} & \psi \\ \hline c & & & 1 & \psi & \beta & \gamma & \beta\gamma+\frac{\alpha-\psi}{2} & \beta\gamma+\frac{\alpha-\psi}{2} \\ \hline ab & & & & \frac{2\alpha^2-\alpha+1}{2} & 
\frac{2\alpha\beta+\gamma-\psi}{2} & \frac{2\alpha\gamma+\beta-\psi}{2} & \frac{4\alpha\psi+2\beta+\psi-2\alpha\beta-\gamma}{4} & \frac{4\alpha\psi+2\gamma+\psi-2\alpha\gamma-\beta}{4} \\ \hline bc & & & & & \frac{2\beta^2-\beta+1}{2} & \frac{2\beta\gamma+\alpha-\psi}{2} & \frac{(6\beta-1)(\alpha+\gamma)-(4\beta+2)\psi}{4} & \frac{4\beta\psi+2\gamma+\psi-2\beta\gamma-\alpha}{4} \\ \hline ac & & & & & & \frac{2\gamma^2-\gamma+1}{2} & \frac{4\gamma\psi+2\beta+\psi-2\beta\gamma-\alpha}{4} & \frac{(6\gamma-1)(\alpha+\beta)-(4\gamma+2)\psi}{4} \\ \hline a(bc) & & & & & & & \begin{tabu}{@{}c@{}}\frac{4\beta^2-6\beta\gamma-6\alpha\beta+4\beta\psi}{8}\\\frac{+8\psi^2+\alpha-2\beta+\gamma+2\psi+2}{8} \end{tabu} & \begin{tabu}{@{}c@{}} \frac{8\alpha\beta\gamma+6\alpha^2-6\alpha\psi-2\beta^2}{8} \\ \frac{+6\beta\gamma+2\beta\psi-2\gamma^2+2\gamma\psi}{8} \\ \frac{-4\psi^2-2\alpha+\beta+\gamma-\psi-1}{8} \end{tabu} \\ \hline b(ac) & & & & & & & & \begin{tabu}{@{}c@{}} \frac{4\gamma^2-6\alpha\gamma-6\beta\gamma+4\gamma\psi}{8} \\ \frac{+8\psi^2+\alpha+\beta-2\gamma+2\psi+2}{8} \end{tabu}\\ \hline \end{tabu}$ $\begin{tabu}[h!]{|c|} \hline a\cdot a = a, b\cdot b=b, c\cdot c=c, a\cdot b = ab, a\cdot c=ac, b\cdot c=bc, \\ \hline a\cdot bc = a(bc), b\cdot ac = b(ac), c\cdot ab=(\eta+1)(ab+bc+ac)-\eta(a+b+c)-a(bc)-b(ac), \\ \hline \begin{tabu}{@{}c@{}} a\cdot ab = \frac{1}{2}((1-\eta)a-\eta b + (1+2\eta)ab), a\cdot ac = \frac{1}{2}((1-\eta)a-\eta c + (1+2\eta)ac), \\ b\cdot ab = \frac{1}{2}((1-\eta)b-\eta a + (1+2\eta)ab), b\cdot bc = \frac{1}{2}((1-\eta)b-\eta c + (1+2\eta)bc), \\ c\cdot ac = \frac{1}{2}((1-\eta)c-\eta a + (1+2\eta)ac), c\cdot bc = \frac{1}{2}((1-\eta)c-\eta b + (1+2\eta)bc), \end{tabu}\\ \hline a\cdot a(bc)=\frac{1}{2}((1-\eta)a-\eta bc + (1+2\eta)a(bc)), b\cdot b(ac)=\frac{1}{2}((1-\eta)b-\eta ac + (1+2\eta)b(ac)), \\ \hline \begin{tabu}{@{}c@{}} b\cdot a(bc)=\frac{1}{4}(-\eta a+ (1-2\eta^2)b+ (\eta-2\eta^2)c+2\eta ab+ (4\eta^2-2\eta)bc-ac+2a(bc)+2b(ac)), \\ 
c\cdot a(bc)=\frac{1}{4}(-3\eta a-(2\eta^2+2\eta-1)c+ (2\eta+1)(ab+2ac-\eta b)+(4\eta^2+2)bc-2b(ac) ), \\ a\cdot b(ac)=\frac{1}{4}( (1-2\eta^2)a-\eta b+(\eta-2\eta^2)c+2\eta ab -bc +(4\eta^2-2\eta)ac+2a(bc)+2b(ac)), \\ c\cdot b(ac)=\frac{1}{4}(-3\eta b-(2\eta^2+2\eta-1)c+ (2\eta+1)(ab+2bc-\eta a)+(4\eta^2+2)ac-2a(bc) ), \end{tabu} \\ \hline \begin{tabu}{@{}c@{}} (ab)^2=\frac{1}{4}((1-4\eta)(a+b)+(8\eta+2)ab), (bc)^2=\frac{1}{4}((1-4\eta)(b+c)+(8\eta+2)bc), \\ (ac)^2=\frac{1}{4}((1-4\eta)(a+c)+(8\eta+2)ac), \\ ab\cdot bc=\frac{1}{4}( -3\eta(a+c)+(1-4\eta)b+(2\eta+1)(2ab+2bc+ac)-2b(ac)), \\ ab\cdot ac=\frac{1}{4}((1-4\eta)a-3\eta(b+c)+(2\eta+1)(2ab+bc+2ac)-2a(bc)), \\ bc\cdot ac=\frac{1}{4}(-\eta(a+b)+(1-2\eta)c-ab+2\eta(bc+ac)+ 2a(bc)+2b(ac)), \end{tabu} \\ \hline \begin{tabu}{@{}c@{}} ab\cdot a(bc)=\frac{1}{8}( (2-8\eta)a-(2\eta^2+5\eta-1)b-(2\eta^2+\eta)c+ (10\eta+2)ab\\+(4\eta^2-2\eta+1)bc+2\eta ac+(4\eta+2)a(bc)), \\ bc\cdot a(bc)=\frac{1}{8}(-4\eta a-(4\eta^2+3\eta-1)(b+c)+(4\eta-1)(ab+ac)\\+(8\eta^2+2\eta+2)bc+6a(bc)), \\ ac\cdot a(bc)=\frac{1}{8}( (2-8\eta)a-(2\eta^2+\eta)b-(2\eta^2+5\eta-1)c + 2\eta ab\\+(4\eta^2-2\eta+1)bc+(10\eta+2)ac+(4\eta+2)a(bc)), \\ ab\cdot b(ac)=\frac{1}{8}((1-5\eta-2\eta^2)a+(2-8\eta)b-(2\eta^2+\eta)c+ (10\eta+2)ab\\+2\eta bc+(4\eta^2-2\eta+1)ac+(4\eta+2)b(ac)), \\ bc\cdot b(ac)=\frac{1}{8}(-(2\eta^2+\eta)a+(2-8\eta)b+(1-5\eta-2\eta^2)c+2\eta ab\\+(10\eta+2)bc+(4\eta^2-2\eta+1)ac+(4\eta+2)b(ac)), \\ ac\cdot b(ac)=\frac{1}{8}((1-3\eta-4\eta^2)(a+c)-4\eta b+ (4\eta-1)(ab+bc)\\+(8\eta^2+2\eta+2)ac+6b(ac)), \end{tabu} \\ \hline \begin{tabu}{@{}c@{}} (a(bc))^2=\frac{1}{16}( (4-16\eta)a+(1-2\eta-8\eta^2)(b+c)+(8\eta-2)(ab+ac+(2\eta-1)bc)\\+(16\eta+12)a(bc) ), \\ (b(ac))^2=\frac{1}{16}( (1-2\eta-8\eta^2)(a+c)+(4-16\eta)b+ (8\eta-2)(ab+bc+(2\eta-1)ac)\\+(16\eta+12)b(ac) ), \\ a(bc)\cdot b(ac)=\frac{1}{16}( (2-8\eta-6\eta^2)(a+b)+(1-12\eta^2)c + (12\eta-3)ab\\+(12\eta^2-2\eta-1)(bc+ac)+(4\eta+8)(a(bc)+b(ac)) ) \end{tabu} \\ \hline 
\end{tabu}$ In what follows, we assume that $A$ is generated by primitive axes $a$, $b$, and $c$. Denote by $\mathcal{B}$ the set $\{a, b, c, ab, ac, bc, a(bc), b(ac)\}$. We also use the following notation for values of the form: $\alpha=(a, b)$, $\beta=(b, c)$, $\gamma=(a, c)$, and $\psi= (a,bc)=(b,ac)=(c,ab)$. **Proposition 5**. *The algebra $A$ is the span of $\mathcal{B}$. The pairwise products of elements of $\mathcal{B}$ can be found according to Table [\[t:prod\]](#t:prod){reference-type="ref" reference="t:prod"} and Table [\[t:prod2\]](#t:prod2){reference-type="ref" reference="t:prod2"} depending on $\eta$. Moreover, if $\eta=-1$, then the Gram matrix for elements of $\mathcal{B}$ is as in Table [\[t:gram\]](#t:gram){reference-type="ref" reference="t:gram"} and its determinant is equal to $$\begin{gathered} \frac{3}{2^9}(\alpha+\beta+\gamma-2\psi-1)^4\\\times(12\alpha\beta\gamma-2\alpha^2-2\beta^2-2\gamma^2-2\psi^2+2\alpha\beta+2\beta\gamma+2\alpha\gamma-4\alpha\psi-4\beta\psi-4\gamma\psi+\alpha+\beta+\gamma-2\psi+1)^3.\end{gathered}$$* *Proof.* We have several obvious products: $aa,bb,cc,ab,bc,ac, a(bc), b(ac)\in\mathcal{B}$. By Corollary [Corollary 4](#c:3prod){reference-type="ref" reference="c:3prod"}, $c(ab)$ lies in the span of $\mathcal{B}$. Products $(ab)(ac)$, $(ab)^2$, $(ab)(a(bc))$, $(ab)(b(ac))$, and $(a(bc))^2$ can be found using smaller products with the aid of Lemmas [Lemma 7](#l:axay1){reference-type="ref" reference="l:axay1"} and [Lemma 8](#l:axay2){reference-type="ref" reference="l:axay2"}. We skip calculations and list final results in Tables [\[t:prod\]](#t:prod){reference-type="ref" reference="t:prod"} and [\[t:prod2\]](#t:prod2){reference-type="ref" reference="t:prod2"}. Now we show how to find products $b(a(bc))$, $c(a(bc))$, $a(b(ac))$, and $c(b(ac))$. First consider $a(b(ac))$. We use that $a(b(ac))=\frac{1}{2}(a(b(ac)+c(ab)))+\frac{1}{2}(a(b(ac)-c(ab)))$. 
If $\eta=-1$, then the first summand can be found as follows: $$a(b(ac)+c(ab))=a(\beta a+\gamma b+\alpha c-a(bc))\in\langle\mathcal{B}\rangle.$$ Similarly, if $\eta\neq-1$, then we use the corresponding expression for $c(ab)$ from Corollary [Corollary 4](#c:3prod){reference-type="ref" reference="c:3prod"}. It remains to find $a(b(ac)-c(ab))$. Write $b=\alpha a+b_\eta+b_\frac{1}{2}$ and $c=\gamma a+c_\eta+c_\frac{1}{2}$. As in Lemma [Lemma 4](#l:3prod){reference-type="ref" reference="l:3prod"}, we see that $b(ac)-c(ab)=x_\eta+x_\frac{1}{2}$, where $x_\eta=(\alpha\eta^2-\alpha\eta)c_\eta-(\gamma\eta^2-\gamma\eta)b_\eta$ and $x_\frac{1}{2}=\frac{\gamma}{4}b_\frac{1}{2}-\frac{\alpha}{4}c_\frac{1}{2}+(\frac{1}{2}-\eta)(b_\eta c_\frac{1}{2}-c_\eta b_\frac{1}{2})$. Clearly, $x_\eta\in A_\eta(a)$ and $x_\frac{1}{2}\in A_\frac{1}{2}(a)$. Then $a(x_\eta+x_\frac{1}{2})=\eta x_\eta+\frac{1}{2}x_\frac{1}{2}=\frac{1}{2}(x_\eta+x_\frac{1}{2})+(\eta-\frac{1}{2})x_\eta$. Now $x_\eta+x_\frac{1}{2}=b(ac)-c(ab)$ and $x_\eta$ is a linear combination of $b_\eta$ and $c_\eta$. As in Proposition [Proposition 1](#p:2gen){reference-type="ref" reference="p:2gen"}, we see that $$b_{\eta} = \frac{1}{1-2\eta}(\alpha a + b - 2ab), \quad c_{\eta} = \frac{1}{1-2\eta}(\gamma a + c - 2ac).$$ So $a(b(ac)-c(ab))$ belongs to the span of $\mathcal{B}$ and the same is true for $a(b(ac))$. Similar arguments can be applied to find $b(a(bc))$, $c(a(bc))$, and $c(b(ac))$. Finally, we need to find the product of $a(bc)$ and $b(ac)$. If $\eta=-1$, then by Corollary [Corollary 4](#c:3prod){reference-type="ref" reference="c:3prod"} we have $a(bc)+b(ac)=\beta a+\gamma b+\alpha c-c(ab)$, so $$2(a(bc))(b(ac))=(\beta a+\gamma b+\alpha c)^2-2(\beta a+\gamma b+\alpha c)(c(ab))+(c(ab))^2-(a(bc))^2-(b(ac))^2.$$ The right-hand side of this equality lies in the span of $\mathcal{B}$ and can be found explicitly using the previous products. We record the final result in Table [3](#t:1){reference-type="ref" reference="t:1"}.
If $\eta\neq-1$, then we find the product in the same way using the expression for $c(ab)$ from Corollary [Corollary 4](#c:3prod){reference-type="ref" reference="c:3prod"}. All calculations can be reproduced in the computer algebra system GAP [@GAP]: we have uploaded all commands for the products and the Gram matrix to [@file]. ◻

Finally, we describe the subspaces $A_\eta(a)$ and $A_\frac{1}{2}(a)$ to complete the proof of Theorem [Theorem 2](#th:2){reference-type="ref" reference="th:2"}.

**Lemma 9**. *The following statements hold.*

(i) *$A_{\eta}(a)=\langle \alpha a+b-2ab, \gamma a+c-2ac, \psi a+bc-2a(bc)\rangle$;*

(ii) *$A_\frac{1}{2}(a)=\langle \alpha(\eta-1)a-\eta b+ab, \gamma(\eta-1)a-\eta c+ac, \psi(\eta-1)a-\eta bc+a(bc)$,\
$(2\eta^2-1)(2\psi-\beta)a-\eta\gamma b+(\eta-2\eta^2)\alpha c-bc+2b(ac)\rangle.$*

*Proof.* Write $b=\alpha a+b_\eta+b_\frac{1}{2}$, where $b_\eta\in A_\eta(a)$ and $b_\frac{1}{2}\in A_\frac{1}{2}(a)$. As in Proposition [Proposition 1](#p:2gen){reference-type="ref" reference="p:2gen"}, we see that $\alpha a+b-2ab=(1-2\eta)b_\eta\in A_\eta(a)$ and $\alpha(\eta-1)a-\eta b+ab=\frac{1-2\eta}{2}b_\frac{1}{2}\in A_\frac{1}{2}(a)$. Similarly, we see that $\gamma a+c-2ac, \psi a+bc-2a(bc)\in A_\eta(a)$ and $\gamma(\eta-1)a-\eta c+ac, \psi(\eta-1)a-\eta bc+a(bc)\in A_\frac{1}{2}(a)$. A straightforward calculation with the aid of Tables [\[t:prod\]](#t:prod){reference-type="ref" reference="t:prod"} and [\[t:prod2\]](#t:prod2){reference-type="ref" reference="t:prod2"} shows that $(2\eta^2-1)(2\psi-\beta)a-\eta\gamma b+(\eta-2\eta^2)\alpha c-bc+2b(ac)\in A_\frac{1}{2}(a)$. Now we write the coefficients of these seven vectors, together with $a$, which spans $A_1(a)$, with respect to $a,b,c,ab,bc,ac,a(bc),b(ac)$ into an $8\times 8$ matrix. Using GAP, we find that the determinant of this matrix equals $16(\eta-\frac{1}{2})^3$.
Then Proposition [Proposition 5](#p:3gen){reference-type="ref" reference="p:3gen"} implies that these eight eigenvectors of $ad_a$ span $A$ and hence the corresponding vectors span $A_\eta(a)$ and $A_\frac{1}{2}(a)$. ◻ We conclude this section with the following statement which is a consequence of Proposition [Proposition 5](#p:3gen){reference-type="ref" reference="p:3gen"} and Lemma [Lemma 9](#l:eigenspaces){reference-type="ref" reference="l:eigenspaces"}. **Corollary 5**. *Fix a field $\mathbb{F}$ of characteristic not $2$ or $3$. Take any $\eta\in\mathbb{F}\setminus\{1,\frac{1}{2}\}$. If $\eta=-1$, then choose arbitrary values of the parameters $\alpha,\beta,\gamma,\psi\in\mathbb{F}$. If $\eta=1$, then set $\alpha=\beta=\gamma=\psi=1$. Assume that $e_1, e_2,\ldots,e_8$ is a basis of an $8$-dimensional vector space $V$ over $\mathbb{F}$. Define products on these elements as in Tables [\[t:prod\]](#t:prod){reference-type="ref" reference="t:prod"} and [\[t:prod2\]](#t:prod2){reference-type="ref" reference="t:prod2"}, where we identify elements of the basis with $a$, $b$, $c$, $ab$, $bc$, $ac$, $a(cb)$, $b(ac)$, $c(ab)$ in these tables. The space $V$ with the thus-defined product forms an $\mathbb{F}$-algebra which we denote by $A(\alpha,\beta,\gamma,\psi)$. Then $A(\alpha,\beta,\gamma,\psi)$ is an axial algebra generated by the three primitive idempotents $e_1$, $e_2$, and $e_3$.* *Moreover, suppose that $A$ is a $\mathcal{PC}(\eta)$-axial algebra generated by three primitive axes $x$, $y$, and $z$ with the Frobenius form as Proposition [Proposition 3](#p:form){reference-type="ref" reference="p:form"}. If $(x, y)=\alpha$, $(y, z)=\beta$, $(x,z)=\gamma$, and $\psi=(x, yz)$, then $A$ is a homomorphic image of $A(\alpha,\beta,\gamma,\psi)$.* 1 S. Walcher, On algebras of rank three, *Comm. Algebra*, **27**:7 (1999), 3401--3438. K. Meyberg, J.M. Osborn, Pseudo-composition algebras, *Math Z.*, **214**:1 (1993), 67--77. A. Elduque, S. 
Okubo, On algebras satisfying $x^2x^2=N(x)x$, *Math. Z.*, **235**:2 (2000), 275--314. T. A. Springer, On a class of Jordan algebras, *Indag. Math.*, **21** (1959), 259--264. I.M.H. Etherington, Genetic algebras, *Proc. Roy. Soc. Edinburgh*, **59** (1939), 242--258. J.I. Hall, F. Rehren and S. Shpectorov, Universal axial algebras and a theorem of Sakuma, *J. Algebra*, **421** (2015), 394--424. J.I. Hall, F. Rehren, S. Shpectorov, Primitive axial algebras of Jordan type, *J. Algebra*, **437** (2015), 79--115. R.L. Griess Jr., The friendly giant, *Invent. Math.*, **69** (1982), 1--102. A.A. Ivanov, The Monster Group and Majorana Involutions, Cambridge Tracts in Mathematics 176, Cambridge University Press, (2009), 266 pp. J. McInroy, S. Shpectorov, *Axial algebras of Jordan and Monster type*, [arXiv:2209.08043](https://arxiv.org/abs/2209.08043). H. Röhrl, S. Walcher, Algebras of complexity one, *Algebras Groups Geom.*, **5**:1 (1988), 61--107. M. Whybrow, Graded 2-generated axial algebras, [arXiv:2005.03577](https://arxiv.org/abs/2005.03577). S. Walcher, Bernstein algebras which are Jordan algebras, *Arch. Math. (Basel)*, **50**:3 (1988), 218--222. J.I. Hall, Y. Segev, S. Shpectorov, On primitive axial algebras of Jordan type, *Bull. Inst. Math. Acad. Sin. (N.S.)*, **13**:4 (2018), 397--409. J.I. Hall, Y. Segev and S. Shpectorov, Miyamoto involutions in axial algebras of Jordan type half, *Israel J. Math.*, **223**:1 (2018), 261--308. The GAP Group, GAP -- Groups, Algorithms, and Programming, Version 4.12.2, 2022. 
(<https://www.gap-system.org>) <https://github.com/AlexeyStaroletov/AxialAlgebras/tree/master/PC-eta> Ilya Gorshkov, [Sobolev Institute of Mathematics, Novosibirsk, Russia;]{.smallcaps}\ [Saint Petersburg State University, Saint Petersburg, Russia;]{.smallcaps}\ *E-mail address:* `ilygor8@gmail.com` Andrey Mamontov, [Sobolev Institute of Mathematics, Novosibirsk, Russia;]{.smallcaps}\ [Saint Petersburg State University, Saint Petersburg, Russia;]{.smallcaps}\ *E-mail address:* `andreysmamontov@gmail.com` Alexey Staroletov, [Sobolev Institute of Mathematics, Novosibirsk, Russia;]{.smallcaps}\ *E-mail address:* `staroletov@math.nsc.ru` [^1]: The first and second authors are supported by the Russian Science Foundation (project 22-11-00081, https://rscf.ru/en/project/22-11-00081/) and the third author is supported by the RAS Fundamental Research Program (project FWNF-2022-0002). [^2]: *We suppose that linear transformations act on the right.*
--- abstract: | This paper presents a new formula for the $q$-shift operator, building on techniques of Liu and Sears. This formula provides a fresh proof of the Carlitz formula and extends it naturally. As applications, we derive an equivalent form of the generalized Carlitz formula to prove two $q$-congruences modulo cyclotomic polynomials, which expand upon the results of Guo et al. address: Department of Mathematics, East China Normal University, 500 Dongchuan Road, Shanghai 200241, P. R. China author: - Yang Dunkun title: Representing Carlitz Formula with q-Shift Operator --- # Introduction In 1974, Carlitz [@carlitz1974] obtained the following formula in the three variables $a$, $b$, and $q$. **Theorem 1**. $$\label{Carlitz1974} \sum_{k=0}^n \frac{(a, b ;q)_k}{(q;q)_k}(-a b)^{n-k} q^{(n\!-\!k)(n\!+\!k\!-\!1) / 2} \!=\!(a;q)_{n\!+\!1} \sum_{k=0}^n \frac{(\!-\!b)^k q^{\binom{k}{2}}}{(q;q)_k(q;q)_{n-k}\left(1\!-\!a q^{n-k}\right)},$$ where the $q$-shifted factorial is defined by $(a;q)_0=1$ and $(a;q)_k=(1-a)(1-aq)\dots(1-aq^{k-1})$ for $k>0$. In 2008 and 2013, Wang [@wmj2008; @wmj2013] extended the preceding equation using the Andrews-Askey integral and the $q$-Chu-Vandermonde formula. In 2019, Guo [@gjw2019] proved Tauraso's congruence conjecture [@rt2013] using equation ([\[Carlitz1974\]](#Carlitz1974){reference-type="ref" reference="Carlitz1974"}). Since then, this formula has frequently been used to establish various $q$-congruences, as evidenced by recent works such as [@gcy-gjw-2021; @gjw-2023; @wc-nhx-2022; @wxx-2022]. In Gasper's book [@gasper-book Ex 1.17], the Carlitz formula is presented as an exercise, and its distinctive structure caught the author's attention. 
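Identity ([\[Carlitz1974\]](#Carlitz1974){reference-type="ref" reference="Carlitz1974"}) can be checked symbolically for small $n$; the following is a minimal sketch in Python's `sympy` (the helper `qpoch` and the function names are ours, not part of the paper):

```python
# Symbolic check of Carlitz's identity (Theorem 1) for small n.
# `qpoch` is our own helper for the q-shifted factorial (x; q)_k.
from sympy import symbols, binomial, cancel, Mul

a, b, q = symbols('a b q')

def qpoch(x, k):
    """(x; q)_k = (1 - x)(1 - x q) ... (1 - x q^(k-1))."""
    return Mul(*[1 - x * q**i for i in range(k)])

def carlitz_lhs(n):
    return sum(qpoch(a, k) * qpoch(b, k) / qpoch(q, k)
               * (-a * b)**(n - k) * q**((n - k) * (n + k - 1) // 2)
               for k in range(n + 1))

def carlitz_rhs(n):
    return qpoch(a, n + 1) * sum(
        (-b)**k * q**binomial(k, 2)
        / (qpoch(q, k) * qpoch(q, n - k) * (1 - a * q**(n - k)))
        for k in range(n + 1))

for n in range(1, 5):
    # cancel() reduces the difference to one rational function in a, b, q,
    # which must vanish identically
    assert cancel(carlitz_lhs(n) - carlitz_rhs(n)) == 0
print("Carlitz identity verified for n = 1, ..., 4")
```

Since both sides are rational functions of $a$, $b$, $q$, `cancel` reducing their difference to $0$ verifies the identity exactly for each tested $n$.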
Upon encountering this formula, one naturally asks how it can be proved. Multiplying both sides of equation ([\[Carlitz1974\]](#Carlitz1974){reference-type="ref" reference="Carlitz1974"}) by $(q;q)_n/(a;q)_{n+1}$ yields the equivalent identity $$\begin{aligned} \label{carlitz1974-equ1} \frac{(q;q)_n}{(a;q)_{n+1}}\sum_{k=0}^n \frac{(a, b ;q)_k}{(q;q)_k}(-a b)^{n-k} q^{(n\!-\!k)(n\!+\!k\!-\!1) / 2} \!= \sum_{k=0}^n \begin{bmatrix} n\\k \end{bmatrix}_q\frac{(\!-\!b)^k q^{\binom{k}{2}}}{1\!-\!a q^{n-k}}, \end{aligned}$$ where $$\begin{bmatrix} n\\k \end{bmatrix}_q=\frac{(q;q)_n}{(q;q)_k(q;q)_{n-k}}$$ denotes the $q$-Gaussian coefficient. Starting from the left-hand side of equation ([\[carlitz1974-equ1\]](#carlitz1974-equ1){reference-type="ref" reference="carlitz1974-equ1"}), we deduce the right-hand side of ([\[carlitz1974-equ1\]](#carlitz1974-equ1){reference-type="ref" reference="carlitz1974-equ1"}) by iteration and the $q$-binomial theorem; this establishes Theorem [Theorem 1](#carlitz1974-them){reference-type="ref" reference="carlitz1974-them"}. Motivated by the $q$-Gaussian coefficient on the right-hand side of equation ([\[carlitz1974-equ1\]](#carlitz1974-equ1){reference-type="ref" reference="carlitz1974-equ1"}), Liu [@liuzhiguo-2023] and the present author [@ydk-2023] have developed operator representations for $q$-polynomials containing such coefficients. Building on the work of Sears [@sears-1951], we derive the following theorem. **Theorem 2**. 
$$\begin{aligned} \label{carltz-cor-1} \sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_q(-b)^kq^{\binom{k}{2}}f(aq^{n-k})=\sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_qb^k(b;q)_{n-k}\eta_a^{n-k}\Delta_a^k f(a), \end{aligned}$$ where $\Delta_a^k = (\eta_a-1)(\eta_a-q)\dots(\eta_a-q^{k-1})$ and $\eta_a$ denotes the $q$-shift operator acting on the variable $a$, defined by $\eta_af(a)=f(aq)$. The operator $\Delta_a^k$ captures the product of consecutive differences arising from the application of the $q$-shift operator, and it is instrumental in elucidating certain $q$-analogue expressions. Setting $f(a)=\frac{1}{1-a}$ in equation ([\[carltz-cor-1\]](#carltz-cor-1){reference-type="ref" reference="carltz-cor-1"}) gives another proof of Theorem [Theorem 1](#carlitz1974-them){reference-type="ref" reference="carlitz1974-them"}. More generally, if $f(a)=\frac{(ay;q)_{\infty}}{(ax;q)_{\infty}}$, we obtain the following theorem. **Theorem 3**. $$\label{carlitz-cor-2} (ay;q)_n \sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_q\frac{(ax;q)_{n\!-\!k}}{(ay;q)_{n\!-\!k}}(\!-\!b)^kq^{\binom{k}{2}}\!=\! \sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_q(ax, b;q)_kP_{n\!-\!k}(x, y)(\!-\!ab)^{n\!-\!k}q^{\binom{n}{2}\!-\!\binom{k}{2}},$$ where $$P_k(x, y)=(x-y)(x-yq)\dots(x-yq^{k-1}).$$ Setting $x=1$ and $y=q^m$ with $m \geq 1$ in equation ([\[carlitz-cor-2\]](#carlitz-cor-2){reference-type="ref" reference="carlitz-cor-2"}), the generalized Carlitz formula can be expressed as follows: $$\begin{aligned} \label{carlitz-m} \sum_{k=0}^n \frac{(a, b ;q)_k(q^m;q)_{n-k}}{(q;q)_k(q;q)_{n-k}}(\!-\!a b)^{n-k} q^{\binom{n}{2}\!-\!\binom{k}{2}} \!=\!(a;q)_{n\!+\!m} \sum_{k=0}^n \frac{(\!-\!b)^k q^{\binom{k}{2}}}{(q;q)_k(q;q)_{n-k}(a q^{n-k};q)_{m}}. \end{aligned}$$ When $m=1$, it reduces to ([\[Carlitz1974\]](#Carlitz1974){reference-type="ref" reference="Carlitz1974"}). 
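The generalized Carlitz formula ([\[carlitz-m\]](#carlitz-m){reference-type="ref" reference="carlitz-m"}) can likewise be verified symbolically for small $n$ and $m$; a short `sympy` sketch (helper names are ours):

```python
# Symbolic check of the generalized Carlitz formula for small n and m >= 1.
from sympy import symbols, binomial, cancel, Mul

a, b, q = symbols('a b q')

def qpoch(x, k):
    """(x; q)_k = (1 - x)(1 - x q) ... (1 - x q^(k-1))."""
    return Mul(*[1 - x * q**i for i in range(k)])

def gen_lhs(n, m):
    return sum(qpoch(a, k) * qpoch(b, k) * qpoch(q**m, n - k)
               / (qpoch(q, k) * qpoch(q, n - k))
               * (-a * b)**(n - k) * q**(binomial(n, 2) - binomial(k, 2))
               for k in range(n + 1))

def gen_rhs(n, m):
    return qpoch(a, n + m) * sum(
        (-b)**k * q**binomial(k, 2)
        / (qpoch(q, k) * qpoch(q, n - k) * qpoch(a * q**(n - k), m))
        for k in range(n + 1))

for n in range(1, 4):
    for m in range(1, 4):
        assert cancel(gen_lhs(n, m) - gen_rhs(n, m)) == 0
print("generalized Carlitz formula verified for n, m = 1, ..., 3")
```

The case $m=1$ recovers exactly the check for ([\[Carlitz1974\]](#Carlitz1974){reference-type="ref" reference="Carlitz1974"}), since $(q;q)_{n-k}$ then cancels in the left-hand summand.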
Recall that the $n$-th cyclotomic polynomial is defined by $$\Phi_{n}(q):=\prod_{\substack{1\leq k\leq n,\\ \gcd(n, k)=1}}(q-\zeta^k),$$ where $\zeta$ is a primitive $n$-th root of unity. In recent years, Guo [@gjw2019] has used Theorem [Theorem 1](#carlitz1974-them){reference-type="ref" reference="carlitz1974-them"} to prove the following $q$-congruence for positive odd integers $n$: $$\begin{aligned} \sum_{k=0}^{n-1}\frac{q^k}{(-q;q)_k}\begin{bmatrix} 2k\\k \end{bmatrix}_q \equiv (\!-\!1)^{\frac{n\!-\!1}{2}}q^{\frac{n^2-1}{4}} \mod \Phi_n(q)^2. \end{aligned}$$ Drawing inspiration from Guo's proof in [@gjw2019], we have derived the following equivalent formulation of the generalized Carlitz formula ([\[carlitz-m\]](#carlitz-m){reference-type="ref" reference="carlitz-m"}). **Theorem 4**. $$\begin{aligned} \label{app-lemma-equ-1} \sum_{k=0}^{n-1}&\frac{(a, b;q^2)_k}{(q^2;q^2)_k}(q^{2n\!-\!2k};q^2)_{m\!-\!1}(\!-\!ab)^{n\!-\!k\!-\!1}q^{-k^2\!+\!k}\notag\\ &=\begin{bmatrix} n\!-\!1\\\frac{n\!-\!1}{2} \end{bmatrix}_{q^2}\begin{bmatrix} 2n\!-\!1\\n\!-\!1 \end{bmatrix}_q\frac{(\!-\!1)^{\frac{n-1}{2}}q^{\!-\!\frac{(n\!-\!1)(3n\!-\!5)}{4}}}{(-q;q)_{n\!-\!1}^2} \frac{(q^2;q^2)_{m\!-\!1}(a;q^2)_{n\!+\!m\!-\!1}}{(q;q^2)_n}\notag\\ &\ \ \ \ \ \ \ \ \ \ \times (1\!-\!q^n)\!\sum_{k=\!-\!\frac{n\!-\!1}{2}}^{\frac{n\!-\!1}{2}} \frac{(q^{1\!-\!n};q^2)_k}{(q^{1\!+\!n};q^2)_k} \frac{b^{k\!+\!\frac{n\!-\!1}{2}}q^{2nk\!-\!2k}}{(aq^{n\!-\!2k-1};q^2)_m}. \end{aligned}$$ Compared to equation ([\[carlitz-m\]](#carlitz-m){reference-type="ref" reference="carlitz-m"}), equation ([\[app-lemma-equ-1\]](#app-lemma-equ-1){reference-type="ref" reference="app-lemma-equ-1"}) may appear more verbose and complex. However, when combined with the works of Guo [@GJW-2018] and Pan [@Ljx-Ph-Zy-2015], equation ([\[app-lemma-equ-1\]](#app-lemma-equ-1){reference-type="ref" reference="app-lemma-equ-1"}) provides a viable pathway for deriving $q$-congruences modulo $\Phi_n(q)^2$. 
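Guo's congruence above can also be checked computationally for small odd $n$: since $(-q;q)_k$ is invertible modulo $\Phi_n(q)$ for $k<n$, it suffices to test that the numerator of the difference of the two sides is divisible by $\Phi_n(q)^2$. A `sympy` sketch (helper names are ours):

```python
# Check Guo's q-congruence modulo Phi_n(q)^2 for small odd n.
from sympy import (symbols, binomial, cancel, fraction, rem,
                   cyclotomic_poly, Mul)

q = symbols('q')

def qpoch(x, k):
    return Mul(*[1 - x * q**i for i in range(k)])

def qbinom(n, k):
    # Gaussian binomial coefficient [n, k]_q, a polynomial in q
    return cancel(qpoch(q, n) / (qpoch(q, k) * qpoch(q, n - k)))

def guo_holds(n):
    lhs = sum(q**k / qpoch(-q, k) * qbinom(2 * k, k) for k in range(n))
    rhs = (-1)**((n - 1) // 2) * q**((n * n - 1) // 4)
    num, den = fraction(cancel(lhs - rhs))
    phi = cyclotomic_poly(n, q)
    # numerator divisible by Phi_n(q)^2, denominator coprime to Phi_n(q)
    return rem(num, phi**2, q) == 0 and rem(den, phi, q) != 0

assert guo_holds(3) and guo_holds(5)
print("Guo's congruence verified mod Phi_n(q)^2 for n = 3, 5")
```

For $n=3$, for instance, clearing the denominator $(1+q)$ from the difference yields exactly $\Phi_3(q)^2=(1+q+q^2)^2$, so the congruence holds.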
The $q$-congruences in this article are therefore derived from equation ([\[app-lemma-equ-1\]](#app-lemma-equ-1){reference-type="ref" reference="app-lemma-equ-1"}). **Theorem 5**. Let $n$ be a positive odd integer. If $1\leq m\leq\frac{n+1}{2}$, then $$\begin{aligned} \label{app-th1-equ} \sum_{k=0}^{n-m}\frac{(q^{4m\!-\!2};q^4)_k}{(q^2;q^2)_k}q^{3k\!-\!k^2\!-\!4km}\equiv (-1)^{\frac{n\!-\!1}{2}\!+\!m\!-\!1}q^{\!-\!\frac{n^2\!-\!1}{2}\!+\!2m^2\!-\!2m}\mod\Phi_n(q). \end{aligned}$$ Letting $q^2\rightarrow q^{-1}$ and applying methods similar to those in [@wxx-2022 Lemma 2.1] and [@wc-nhx-2022 Lemma 2.1, Lemma 3.1], we can obtain the following results. **Corollary 1**. $$\begin{aligned} \sum_{k=0}^{n-m}\frac{(q^{2m\!-\!1};q^2)_k}{(q;q)_k}q^{k}\equiv (-1)^{\frac{n\!-\!1}{2}\!+\!m\!-\!1}q^{\frac{n^2\!-\!1}{4}\!-\!m^2\!+\!m}\mod\Phi_n(q), \end{aligned}$$ $$\begin{aligned} \sum_{k=0}^{n-m}\frac{(q^{2m\!-\!1};q^2)_k}{(q;q)_k}q^{2k}\equiv (-1)^{\frac{n\!-\!1}{2}\!+\!m\!-\!1}q^{\frac{n^2\!-\!1}{4}\!-\!m^2\!-\!m\!+\!1}\mod\Phi_n(q).\end{aligned}$$ Letting $q\rightarrow 1$ in the above two congruences, we obtain the following corollary. **Corollary 2**. Let $p$ be an odd prime. Then $$\begin{aligned} \sum_{k=0}^{p-m}\frac{2^k(m-\frac{1}{2})_k}{k!}\equiv (-1)^{\frac{p\!-\!1}{2}\!+\!m\!-\!1}\mod p.\end{aligned}$$ In particular, when $m=1$, the congruence above corresponds to the following congruence, proven by Sun and Tauraso [@s-t-2010]: $$\begin{aligned} \sum_{k=0}^{p^r-1}\frac{1}{2^k}\binom{2k}{k}\equiv (-1)^{\frac{p^r\!-\!1}{2}}\mod p.\end{aligned}$$ In 2022, Wang and Yu [@wxx-2022] proved the following $q$-congruence: $$\sum_{k=0}^{n-1}\frac{(q^{4d\!+\!2};q^4)_k}{(q^2;q^2)_k}q^{\!-\!4dk\!-\!k^2\!-\!k} \equiv (\!-\!1)^{\frac{n-1}{2}+d}q^{-\frac{n^2-1}{2}+2d^2+2d} \mod\Phi_n(q).$$ We derive the following result by considering the case in which the above $q$-congruence holds $\mod\Phi_n(q)^2$. **Theorem 6**. 
Let $n$ be a positive odd integer. If $d \in\{-\frac{n-3}{2}, -\frac{n-5}{2}, \dots, \frac{n-5}{2}, \frac{n-3}{2}\}$, then $$\begin{aligned} \label{app-th2-equ} \sum_{k=0}^{n-1}&\frac{(q^{4d\!+\!2};q^4)_k}{(q^2;q^2)_k}q^{\!-\!4dk\!-\!k^2\!-\!k} \equiv (\!-\!1)^{\frac{n-1}{2}\!+\!d}q^{-\frac{n^2-1}{2}+2d^2+2d}\!+\!\mathbf{Sgn}(d) (1\!-\!q^n)q^{2d^2\!+\!3d}\times\notag\\ &\sum_{j=1}^{|d|}\frac{q^{j\!-\!2dj}(1\!+\!q^{(4j\!-\!2)d})}{1\!-\!q^{4j\!-\!2}} \{1\!+\!(\!-\!1)^{j\!+\!d}\!+\!q^{2j\!-\!1}[(\!-\!1)^{j\!+\!d}\!-\!1]\}\mod\Phi_n(q)^2, \end{aligned}$$ where $$\mathbf{Sgn}(d)=\begin{cases} 1, &d>0;\\ 0, &d=0;\\ -1, &d<0. \end{cases}$$ Letting $q^2\rightarrow q^{-1}$, we obtain the following corollary: **Corollary 3**. $$\begin{aligned} \label{app-th2-equ-q-k} \sum_{k=0}^{n-1}&\frac{(q^{2d\!+\!1};q^2)_k}{(q;q)_k}q^{k} \equiv (\!-\!1)^{\frac{n-1}{2}\!+\!d}q^{\frac{n^2-1}{4}-d^2-d}\!+\!\mathbf{Sgn}(d) (1\!-\!q^n)\times\notag\\ &\sum_{j=1}^{|d|}\frac{q^{jd\!+\!2j\!-\!d^2\!-\!1-\!\frac{j\!+\!3d}{2}}(1\!+\!q^{-(2j\!-\!1)d})}{1\!-\!q^{2j\!-\!1}} \frac{\{1\!+\!(\!-\!1)^{j\!+\!d}\!+\!q^{\frac{1}{2}\!-\!j}[(\!-\!1)^{j\!+\!d}\!-\!1]\}}{2} \mod\Phi_n(q)^2.\end{aligned}$$ Similarly, using [@wxx-2022 (2.11)] and Lemma [Lemma 8](#app-th2-lamma2){reference-type="ref" reference="app-th2-lamma2"}, we arrive at the congruence: $$\begin{aligned} \sum_{k=0}^{n-1}\frac{(q^{2d\!+\!1};q^2)_k}{(q;q)_k}q^{2k}\equiv q^{-2d-1}\sum_{k=0}^{n-1}\frac{(q^{2d\!+\!1};q^2)_k}{(q;q)_k}q^{k}-q^{-2d-1}(1-q^n) \mod\Phi_n(q)^2. \notag\end{aligned}$$ Substituting ([\[app-th2-equ-q-k\]](#app-th2-equ-q-k){reference-type="ref" reference="app-th2-equ-q-k"}) into the above congruence, we find the following corollary: **Corollary 4**. 
$$\begin{aligned} \label{app-th2-equ-q-2k} \sum_{k=0}^{n-1}&\frac{(q^{2d\!+\!1};q^2)_k}{(q;q)_k}q^{2k} \equiv (\!-\!1)^{\frac{n-1}{2}\!+\!d}q^{\frac{n^2\!-\!1}{4}\!-\!d^2\!-\!3d\!-\!1} \!+\!q^{\!-\!2d\!-\!1}\mathbf{Sgn}(d) (1\!-\!q^n)\times\notag\\ &\sum_{j=1}^{|d|}\frac{q^{jd\!+\!2j\!-\!d^2\!-\!2-\!\frac{j\!+\!7d}{2}}(1\!+\!q^{-(2j\!-\!1)d})}{1\!-\!q^{2j\!-\!1}} \frac{\{1\!+\!(\!-\!1)^{j\!+\!d}\!+\!q^{\frac{1}{2}\!-\!j}[(\!-\!1)^{j\!+\!d}\!-\!1]\}}{2} \mod\Phi_n(q)^2.\end{aligned}$$ Particularly, by setting $d=-1$ in both ([\[app-th2-equ-q-k\]](#app-th2-equ-q-k){reference-type="ref" reference="app-th2-equ-q-k"}) and ([\[app-th2-equ-q-2k\]](#app-th2-equ-q-2k){reference-type="ref" reference="app-th2-equ-q-2k"}), we reach the Wang-Ni $q$-congruence as established in [@wc-nhx-2022 Theorem 1.1]. $$\begin{aligned} & \sum_{k=0}^{n-1}\frac{(q^{\!-\!1};q^2)_k}{(q;q)_k}q^{k} \equiv (\!-\!1)^{\frac{n+1}{2}}q^{\frac{n^2\!-\!1}{4}}-(1+q)[n] \mod\Phi_n(q)^2, \\ &\sum_{k=0}^{n-1}\frac{(q^{\!-\!1};q^2)_k}{(q;q)_k}q^{2k} \equiv (\!-\!1)^{\frac{n+1}{2}}q^{\frac{n^2\!+\!3}{4}}-2q[n] \mod\Phi_n(q)^2, \end{aligned}$$ Similarly, setting $d=0$ in ([\[app-th2-equ-q-2k\]](#app-th2-equ-q-2k){reference-type="ref" reference="app-th2-equ-q-2k"}) and setting $d=1$ in ([\[app-th2-equ-q-k\]](#app-th2-equ-q-k){reference-type="ref" reference="app-th2-equ-q-k"}) we arrive at the Wang-Ni $q$-congruence [@wc-nhx-2022 Theorem 1.4] $$\begin{aligned} & \sum_{k=0}^{n-1}\frac{(q;q^2)_k}{(q;q)_k}q^{2k} \equiv (\!-\!1)^{\frac{n-1}{2}}q^{\frac{n^2\!-\!5}{4}}+\frac{q-1}{q}[n] \mod\Phi_n(q)^2, \\ &\sum_{k=0}^{n-1}\frac{(q^{3};q^2)_k}{(q;q)_k}q^{k} \equiv (\!-\!1)^{\frac{n+1}{2}}q^{\frac{n^2\!-\!9}{4}}+\frac{1+q}{q^2}[n] \mod\Phi_n(q)^2, \label{gjw-con} \end{aligned}$$ Here, ([\[gjw-con\]](#gjw-con){reference-type="ref" reference="gjw-con"}) refers to a conjecture put forth by Gu and Guo [@gcy-gjw-2021]. 
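The pair of Wang-Ni congruences above (the case $d=-1$) can be checked computationally for small odd $n$, writing $[n]=(1-q^n)/(1-q)$ for the $q$-integer; a `sympy` sketch (helper names are ours):

```python
# Check the two Wang-Ni q-congruences (the d = -1 case) mod Phi_n(q)^2.
from sympy import symbols, cancel, fraction, rem, cyclotomic_poly, Mul

q = symbols('q')

def qpoch(x, p, k):
    """(x; p)_k = (1 - x)(1 - x p) ... (1 - x p^(k-1))."""
    return Mul(*[1 - x * p**i for i in range(k)])

def wang_ni_holds(n):
    phi = cyclotomic_poly(n, q)
    qint = (1 - q**n) / (1 - q)          # the q-integer [n]
    s1 = sum(qpoch(q**-1, q**2, k) / qpoch(q, q, k) * q**k
             for k in range(n))
    r1 = (-1)**((n + 1) // 2) * q**((n * n - 1) // 4) - (1 + q) * qint
    s2 = sum(qpoch(q**-1, q**2, k) / qpoch(q, q, k) * q**(2 * k)
             for k in range(n))
    r2 = (-1)**((n + 1) // 2) * q**((n * n + 3) // 4) - 2 * q * qint
    for diff in (s1 - r1, s2 - r2):
        num, den = fraction(cancel(diff))
        # numerator divisible by Phi_n(q)^2, denominator coprime to Phi_n(q)
        if rem(num, phi**2, q) != 0 or rem(den, phi, q) == 0:
            return False
    return True

assert wang_ni_holds(3) and wang_ni_holds(5)
print("Wang-Ni congruences verified mod Phi_n(q)^2 for n = 3, 5")
```

As before, the negative powers of $q$ are harmless because $q$ is invertible modulo $\Phi_n(q)$, and `cancel` produces a ratio of polynomials whose denominator is coprime to $\Phi_n(q)$.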
By letting $q\rightarrow1$ in either ([\[app-th2-equ-q-k\]](#app-th2-equ-q-k){reference-type="ref" reference="app-th2-equ-q-k"}) or ([\[app-th2-equ-q-2k\]](#app-th2-equ-q-2k){reference-type="ref" reference="app-th2-equ-q-2k"}), we derive the following congruence. **Corollary 5**. Let $d$ be an integer, $r$ a positive integer, and $p$ an odd prime with $p^r>2|d|+1$. Then $$\begin{aligned} \sum_{k=0}^{p^r-1}\frac{2^k(d+\frac{1}{2})_k}{k!} \equiv (-1)^{\frac{p^r\!-\!1}{2}\!+\!d}\!+\! \mathbf{Sgn}(d)\sum_{j=1}^{|d|}\frac{2p^r(\!-\!1)^{j\!+\!d}}{2j\!-\!1}\mod p^2. \end{aligned}$$ In particular, when $d=0$, the congruence above corresponds to the following congruence, proven by Sun [@szw2010]: $$\begin{aligned} \sum_{k=0}^{p^r-1}\frac{1}{2^k}\binom{2k}{k}\equiv (-1)^{\frac{p^r\!-\!1}{2}}\mod p^2. \end{aligned}$$ # The proofs of Theorem [\[Carlitz1974\]](#Carlitz1974){reference-type="ref" reference="Carlitz1974"}-Theorem [Theorem 3](#carlitz-general-2){reference-type="ref" reference="carlitz-general-2"} {#the-proofs-of-theoremcarlitz1974-theoremcarlitz-general-2} ## The new proof of Theorem [Theorem 1](#carlitz1974-them){reference-type="ref" reference="carlitz1974-them"} {#the-new-proof-of-theoremcarlitz1974-them} *Proof.* Let $$\label{carlitz-l-phi} \phi_n(a, b)=\frac{(q;q)_n}{(a;q)_{n+1}} \sum_{k=0}^n\frac{(a, b;q)_k}{(q;q)_k}(-ab)^{n-k}q^{\binom{n}{2}-\binom{k}{2}}.$$ By the identity $$\begin{aligned} \frac{(q;q)_n}{(q;q)_k}= \frac{(q;q)_{n-1}}{(q;q)_{k-1}}+\frac{(q;q)_{n-1}}{(q;q)_{k}}q^k(1-q^{n\!-\!k}), \notag\end{aligned}$$ we calculate that $$\begin{aligned} \phi_n(a, b)&\!=\!\frac{1}{(a;q)_{n\!+\!1}} \sum_{k=0}^n\left(\frac{(q;q)_{n\!-\!1}}{(q;q)_{k\!-\!1}}\!+\!\frac{(q;q)_{n\!-\!1}}{(q;q)_{k}}q^k(1\!-\!q^{n\!-\!k})\right) (a, b;q)_k(\!-\!ab)^{n-k}q^{\binom{n}{2}\!-\!\binom{k}{2}}\notag\\ &=\!\frac{1}{(a;q)_{n\!+\!1}}\sum_{k=0}^{n-1}\frac{(q;q)_{n\!-\!1}}{(q;q)_{k\!-\!1}} (a, b;q)_{k+1}(\!-\!ab)^{n-k-1}q^{\binom{n-1}{2}\!-\!\binom{k}{2}\!+\!n\!-\!1\!-\!k}\notag\\ &\ \ \ \ \ \ \ 
-\!\frac{1}{(a;q)_{n}}\sum_{k=0}^{n-1} \frac{(q;q)_{n\!-\!1}}{(q;q)_{k}}(a, b;q)_k\frac{abq^k-abq^n}{1-aq^n}(\!-\!ab)^{n\!-\!k\!-\!1}q^{\binom{n}{2}\!-\!\binom{k}{2}}\notag\\ &=\!\frac{(q;q)_{n-1}}{(aq;q)_{n}}\sum_{k=0}^{n-1}\frac{(aq;q)_{k}(b;q)_{k+1}}{(q;q)_{k}} (\!-\!abq)^{n-k-1}q^{\binom{n-1}{2}\!-\!\binom{k}{2}}\notag\\ &\ \ \ \ \ \ \ -\!\frac{(q;q)_{n-1}}{(a;q)_{n}}\sum_{k=0}^{n-1} \frac{(a, b;q)_{k}}{(q;q)_{k}}\frac{(1\!-\!aq^n)b\!-\!b(1\!-\!aq^k)}{1-aq^n}(\!-\!ab)^{n\!-\!k\!-\!1}q^{\binom{n-1}{2}\!-\!\binom{k}{2}\!+\!n\!-\!1}\notag\\ &=\!\frac{(q;q)_{n-1}}{(aq;q)_{n}}\sum_{k=0}^{n-1}\frac{(aq;q)_{k}(b;q)_{k+1}}{(q;q)_{k}} (\!-\!abq)^{n-k-1}q^{\binom{n-1}{2}\!-\!\binom{k}{2}}\notag\\ &\ \ \ \ \ \ \ -\!bq^{n-1}\frac{(q;q)_{n-1}}{(a;q)_{n}}\sum_{k=0}^{n-1}\frac{(a, b;q)_{k}}{(q;q)_{k}} (\!-\!ab)^{n-k-1}q^{\binom{n-1}{2}\!-\!\binom{k}{2}}\notag\\ &\ \ \ \ \ \ \ +\!\frac{(q;q)_{n-1}}{(aq;q)_{n}}\sum_{k=0}^{n-1}\frac{(aq, b;q)_{k}}{(q;q)_{k}} (\!-\!abq)^{n-k-1}q^{\binom{n-1}{2}\!-\!\binom{k}{2}}bq^k\notag\\ &=\!\frac{(q;q)_{n-1}}{(aq;q)_{n}}\sum_{k=0}^{n-1}\frac{(aq, b;q)_{k}}{(q;q)_{k}} (\!-\!abq)^{n-k-1}q^{\binom{n-1}{2}\!-\!\binom{k}{2}}(1-bq^k+bq^k)\notag\\ &\ \ \ \ \ \ \ -\!bq^{n-1}\frac{(q;q)_{n-1}}{(a;q)_{n}}\sum_{k=0}^{n-1}\frac{(a, b;q)_{k}}{(q;q)_{k}} (\!-\!ab)^{n-k-1}q^{\binom{n-1}{2}\!-\!\binom{k}{2}}\notag\\ &=\phi_{n-1}(aq, b)-bq^{n-1}\phi_{n-1}(a, b)\notag\\ &=(\eta_a-bq^{n-1})\phi_{n-1}(a, b), \notag\end{aligned}$$ so $$\phi_n(a, b)=(\eta_a-bq^{n-1})\phi_{n-1}(a, b).$$ By repeated iteration and the $q$-binomial theorem [@gasper-book Ex 1.2 (vi)], we obtain $$\begin{aligned} \phi_n(a, b)&=(\eta_a-bq^{n-1})\phi_{n-1}(a, b)\notag\\ &=(\eta_a-bq^{n-1})(\eta_a-bq^{n-2})\phi_{n-2}(a, b)\notag\\ &=\dots\notag\\ &=(\eta_a-bq^{n-1})(\eta_a-bq^{n-2})\dots(\eta_a-b)\phi_{0}(a, b)\notag\\ &=\sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_q(-b)^kq^{\binom{k}{2}}\eta_a^{n-k}\phi_{0}(a, b)\label{proof-1-phi-expan}, \end{aligned}$$ substituting $\phi_{0}(a, 
b)=\frac{1}{1-a}$ into the above equation yields $$\label{Carlitz1974_rhs} \phi_n(a, b)=\sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_q\frac{(-b)^kq^{\binom{k}{2}}}{1-aq^{n-k}}.$$ Therefore, equation ([\[Carlitz1974\]](#Carlitz1974){reference-type="ref" reference="Carlitz1974"}) holds. ◻ ## The proof of Theorem [Theorem 2](#carlitz-general-1){reference-type="ref" reference="carlitz-general-1"} {#the-proof-of-theorem-carlitz-general-1} In 2022, Liu [@liuzhiguo-2023] presented the following expression of the Rogers-Szegö polynomial using the $q$-shift operator: $$\sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_qx^k=(x+\eta_x)^n\cdot1,$$ and the author employed a similar approach to represent a class of equations using the $q$-shift operator in [@ydk-2023]. Since the polynomial in equation ([\[Carlitz1974_rhs\]](#Carlitz1974_rhs){reference-type="ref" reference="Carlitz1974_rhs"}) incorporates the $q$-Gaussian coefficient $\begin{bmatrix} n\\k \end{bmatrix}_q$, it is natural to seek an operator representation of $\phi_n(a, b)$; the $q$-Pascal rule [@KVPC-book] provides a way to establish such a representation. **Lemma 7**. 
$$\label{carlitz-r-phi} \phi_n(a, b)=(\eta_a\eta_b-b\eta_b)^n\cdot\frac{1}{1-a}.$$ *Proof.* By $q$-Pascal rule [@KVPC-book], $$\begin{bmatrix} n\!+\!1\\k \end{bmatrix}_q\!=\!\begin{bmatrix} n\\k \end{bmatrix}_qq^k\!+\!\begin{bmatrix} n\\k\!-\!1 \end{bmatrix}_q,$$ we have $$\begin{aligned} \phi_n(a, b)&=\sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_q\frac{(-b)^kq^{\binom{k}{2}}}{1-aq^{n-k}}\notag\\ &=\sum_{k=0}^{n}\left(\!\begin{bmatrix} n-1\\k \end{bmatrix}_qq^k\!+\!\begin{bmatrix} n-1\\k\!-\!1 \end{bmatrix}_q\right)\frac{(-b)^kq^{\binom{k}{2}}}{1-aq^{n-k}}\notag\\ &=\sum_{k=0}^{n-1}\begin{bmatrix} n-1\\k \end{bmatrix}_q\frac{(-bq)^kq^{\binom{k}{2}}}{1-aq^{n-k}}+\sum_{k=1}^{n}\begin{bmatrix} n-1\\k-1 \end{bmatrix}_q\frac{(-b)^kq^{\binom{k}{2}}}{1-aq^{n-k}}\notag\\ &=\sum_{k=0}^{n-1}\begin{bmatrix} n-1\\k \end{bmatrix}_q\frac{(-bq)^kq^{\binom{k}{2}}}{1-aq\cdot q^{n-k-1}}- b\sum_{k=0}^{n-1}\begin{bmatrix} n-1\\k \end{bmatrix}_q\frac{(-bq)^kq^{\binom{k}{2}}}{1-aq^{n-k-1}}\notag\\ &=(\eta_a\eta_b-b\eta_b)\phi_{n-1}(a, b), \notag \end{aligned}$$ through repeated iteration, we can derive $$\begin{aligned} \phi_n(a, b)=(\eta_a\eta_b-b\eta_b)^n\phi_{0}(a, b)=(\eta_a\eta_b-b\eta_b)^n\cdot\frac{1}{1-a}.\notag \end{aligned}$$ ◻ Obviously, since $\eta_a\eta_b\cdot b\eta_b=qb\eta_b\cdot \eta_a\eta_b$ , by [@gasper-book Ex 1.35] $$\begin{aligned} (\eta_a\eta_b-b\eta_b)^n\cdot\frac{1}{1-a}&=\sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_q(-b\eta_b)^k\eta_a^{n-k}\eta_b^{n-k}\cdot\frac{1}{1-a}\notag\\ &=\sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_q(-b)^kq^{\binom{k}{2}}\eta_a^{n-k}\eta_b^{n}\cdot\frac{1}{1-a}\notag\\ &=\sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_q\frac{(-b)^kq^{\binom{k}{2}}}{1-aq^{n-k}}.\notag \end{aligned}$$ However, this approach does not yield the equation form ([\[carlitz-l-phi\]](#carlitz-l-phi){reference-type="ref" reference="carlitz-l-phi"}). 
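The operator identity in Lemma 7 can be checked mechanically for small $n$ by implementing $\eta_a$ and $\eta_b$ as the substitutions $a\mapsto aq$, $b\mapsto bq$; a `sympy` sketch (function names are ours):

```python
# Check the operator representation of phi_n(a, b) in Lemma 7:
# apply (eta_a eta_b - b eta_b) n times to 1/(1-a) and compare with
# the explicit sum; eta_x acts as the substitution x -> x q.
from sympy import symbols, binomial, cancel, Mul

a, b, q = symbols('a b q')

def qpoch(x, k):
    return Mul(*[1 - x * q**i for i in range(k)])

def qbinom(n, k):
    return cancel(qpoch(q, n) / (qpoch(q, k) * qpoch(q, n - k)))

def apply_op(f):
    """One application of eta_a*eta_b - b*eta_b to the expression f."""
    return (f.subs({a: a * q, b: b * q}, simultaneous=True)
            - b * f.subs(b, b * q))

def phi_sum(n):
    # phi_n(a, b) as given by the explicit sum (Carlitz1974_rhs)
    return sum(qbinom(n, k) * (-b)**k * q**binomial(k, 2)
               / (1 - a * q**(n - k)) for k in range(n + 1))

f = 1 / (1 - a)
for n in range(1, 5):
    f = apply_op(f)        # now f = (eta_a eta_b - b eta_b)^n . 1/(1-a)
    assert cancel(f - phi_sum(n)) == 0
print("operator representation verified for n = 1, ..., 4")
```

For $n=1$ the check is transparent: $(\eta_a\eta_b-b\eta_b)\cdot\frac{1}{1-a}=\frac{1}{1-aq}-\frac{b}{1-a}$, which is exactly $\phi_1(a,b)$.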
Our interest lies in understanding how equation ([\[carlitz-l-phi\]](#carlitz-l-phi){reference-type="ref" reference="carlitz-l-phi"}) could be derived without prior knowledge of its expression; that is, the question at hand is how to deduce equation ([\[carlitz-l-phi\]](#carlitz-l-phi){reference-type="ref" reference="carlitz-l-phi"}) from equation ([\[Carlitz1974_rhs\]](#Carlitz1974_rhs){reference-type="ref" reference="Carlitz1974_rhs"}). An operator can represent equation ([\[carlitz-l-phi\]](#carlitz-l-phi){reference-type="ref" reference="carlitz-l-phi"}) directly. By employing Sears' method as outlined in [@sears-1951], we establish Theorem [Theorem 2](#carlitz-general-1){reference-type="ref" reference="carlitz-general-1"} as follows. *Proof.* On the one hand, since $\eta_a\eta_b\cdot b\eta_b=qb\eta_b\cdot \eta_a\eta_b$, by [@gasper-book Ex 1.35], we have $$\begin{aligned} (\eta_a\eta_b-b\eta_b)^n\cdot f(a)&=\sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_q(-b\eta_b)^k\eta_a^{n-k}\eta_b^{n-k}\cdot f(a)\notag\\ &=\sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_q(-b)^kq^{\binom{k}{2}}\eta_a^{n-k}\eta_b^{n}\cdot f(a)\notag\\ &=\sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_q(-b)^kq^{\binom{k}{2}}f(aq^{n-k}), \notag \end{aligned}$$ on the other hand, by [@sears-1951 (3.5)] we know $$\begin{aligned} (\eta_a\eta_b-b\eta_b)^n\cdot f(a)&=(\eta_a-b)(\eta_a-bq)\dots(\eta_a-bq^{n-1})\eta_b^{n}\cdot f(a)\notag\\ &=\sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_qb^k(b;q)_{n-k}\eta_a^{n-k}\Delta_a^k f(a), \notag \end{aligned}$$ where $\Delta_a^k=(\eta_a-1)(\eta_a-q)\dots(\eta_a-q^{k-1})$; thus, equation ([\[carltz-cor-1\]](#carltz-cor-1){reference-type="ref" reference="carltz-cor-1"}) holds. 
◻ ## The proof of Theorem [Theorem 3](#carlitz-general-2){reference-type="ref" reference="carlitz-general-2"} {#the-proof-of-theorem-carlitz-general-2} *Proof.* Let $f(a)=\frac{(ay;q)_{\infty}}{(ax;q)_{\infty}}$. Substituting it into the left side of ([\[carltz-cor-1\]](#carltz-cor-1){reference-type="ref" reference="carltz-cor-1"}), we have $$\begin{aligned} \label{proof-carlitz-cor-2-l} \sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_q(-b)^kq^{\binom{k}{2}}f(aq^{n-k})&=\sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_q(-b)^kq^{\binom{k}{2}}\frac{(ayq^{n-k};q)_{\infty}}{(axq^{n-k};q)_{\infty}}\notag\\ &=\frac{(ay;q)_{\infty}}{(ax;q)_{\infty}}\sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_q\frac{(ax;q)_{n-k}}{(ay;q)_{n-k}}(-b)^kq^{\binom{k}{2}}, \end{aligned}$$ by induction, we know $$\Delta_a^nf(a)=\frac{(ayq^n;q)_{\infty}P_n(x, y)(-a)^n}{(ax;q)_{\infty}}q^{\binom{n}{2}},$$ substituting into the right side of ([\[carltz-cor-1\]](#carltz-cor-1){reference-type="ref" reference="carltz-cor-1"}), we deduce that $$\begin{aligned} &\sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_qb^k(b;q)_{n\!-\!k}\eta_a^{n-k}\Delta_a^kf(a)\notag\\ &=\sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_qb^k(b;q)_{n\!-\!k}\eta_{a}^{n\!-\!k}\frac{(ayq^k;q)_{\infty}P_k(x, y)(-a)^k}{(ax;q)_{\infty}}q^{\binom{k}{2}}\notag\\ &=\sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_qb^k(b;q)_{n\!-\!k}\frac{(ayq^n;q)_{\infty}P_k(x, y)(-aq^{n-k})^k}{(axq^{n\!-\!k};q)_{\infty}}q^{\binom{k}{2}}\notag\\ &=\frac{(ay;q)_{\infty}}{(ax;q)_{\infty}(ay;q)_n}\sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_q(ax, b;q)_{n-k}P_k(x, y)(-ab)^kq^{\binom{k}{2}+k(n-k)}\notag\\ &=\frac{(ay;q)_{\infty}}{(ax;q)_{\infty}(ay;q)_n}\sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_q(ax, b;q)_{k}P_{n-k}(x, y)(-ab)^{n-k}q^{\binom{n}{2}-\binom{k}{2}}, \notag \end{aligned}$$ hence $$\begin{aligned} \frac{(ay;q)_{\infty}}{(ax;q)_{\infty}}&\sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_q\frac{(ax;q)_{n-k}}{(ay;q)_{n-k}}(-b)^kq^{\binom{k}{2}}\notag\\ 
&=\frac{(ay;q)_{\infty}}{(ax;q)_{\infty}(ay;q)_n}\sum_{k=0}^n\begin{bmatrix} n\\k \end{bmatrix}_q(ax, b;q)_{k}P_{n-k}(x, y)(-ab)^{n-k}q^{\binom{n}{2}-\binom{k}{2}},\notag\end{aligned}$$ multiplying both sides by $\frac{(ax;q)_{\infty}(ay;q)_n}{(ay;q)_{\infty}}$ yields ([\[carlitz-cor-2\]](#carlitz-cor-2){reference-type="ref" reference="carlitz-cor-2"}). ◻ ## Another proof of Theorem [Theorem 1](#carlitz1974-them){reference-type="ref" reference="carlitz1974-them"} {#the-other-proof-of-proof-theoremcarlitz1974-them} *Proof.* Setting $x=1$ and $y=q$ in ([\[carlitz-cor-2\]](#carlitz-cor-2){reference-type="ref" reference="carlitz-cor-2"}), we obtain ([\[Carlitz1974\]](#Carlitz1974){reference-type="ref" reference="Carlitz1974"}). ◻ # The proofs of Theorem [Theorem 4](#app-lemma-1){reference-type="ref" reference="app-lemma-1"}-Theorem [Theorem 6](#app-th2){reference-type="ref" reference="app-th2"} {#the-proofs-of-theorem-app-lemma-1-theorem-app-th2} ## The proof of Theorem [Theorem 4](#app-lemma-1){reference-type="ref" reference="app-lemma-1"} {#the-proof-of-theorem-app-lemma-1} *Proof.* Letting $q\rightarrow q^2$ and $n\rightarrow n-1$ in ([\[carlitz-m\]](#carlitz-m){reference-type="ref" reference="carlitz-m"}), the left side of ([\[carlitz-m\]](#carlitz-m){reference-type="ref" reference="carlitz-m"}) becomes $$\begin{aligned} \label{app-equ-1-proof-1} LHS&=\sum_{k=0}^{n-1}\frac{(a, b;q^2)_k}{(q^2;q^2)_k} \frac{(q^{2m};q^2)_{n\!-\!1\!-\!k}}{(q^{2};q^2)_{n\!-\!1\!-\!k}}(-ab)^{n\!-\!k\!-\!1} q^{(n\!-\!1)^2\!-\!(n\!-\!1)\!-\!k^2\!+\!k}\notag\\ &=\sum_{k=0}^{n-1}\frac{(a, b;q^2)_k}{(q^2;q^2)_k} \frac{(q^{2n\!-\!2k};q^2)_{m\!-\!1\!}}{(q^{2};q^2)_{m\!-\!1}}(-ab)^{n\!-\!k\!-\!1} q^{(n\!-\!1)(n\!-\!2)\!-\!k^2\!+\!k},\end{aligned}$$ and the right side of ([\[carlitz-m\]](#carlitz-m){reference-type="ref" reference="carlitz-m"}) becomes $$\begin{aligned} \label{app-equ-1-proof-2} RHS&=\sum_{k=0}^{n-1}\frac{(-b)^kq^{k^2-k}}{(q^2;q^2)_k(q^2;q^2)_{n-k-1}} 
\frac{(a;q^2)_{m+n-1}}{(aq^{2n-2k-2};q^2)_m}\notag\\ &=\sum_{k=\!-\!\frac{n\!-\!1}{2}}^{\frac{n\!-\!1}{2}} \frac{(-b)^{\frac{n-1}{2}\!+\!k}q^{(\frac{n-1}{2}\!+\!k)^2-(\frac{n-1}{2}\!+\!k)}} {(q^2;q^2)_{\frac{n-1}{2}\!+\!k}(q^2;q^2)_{\frac{n\!-\!1}{2}\!-\!k}} \frac{(a;q^2)_{m\!+\!n\!-\!1}}{(aq^{n\!-\!2k-1};q^2)_{m}}\notag\\ &=\sum_{k=\!-\!\frac{n\!-\!1}{2}}^{\frac{n\!-\!1}{2}} \frac{(q^{n\!-\!2k\!+\!1};q^2)_k(a;q^2)_{n\!+\!m\!-\!1}} {(q^2;q^2)_{\frac{n-1}{2}}^2(q^{n\!+\!1};q^2)_k(aq^{n\!-\!2k\!-\!1};q^2)_m} (-b)^{\frac{n-1}{2}\!+\!k}q^{(\frac{n-1}{2}\!+\!k)(\frac{n-3}{2}\!+\!k)}\notag\\ &=\frac{(q;q^2)_n}{(q^2;q^2)_{\frac{n\!-\!1}{2}}^2(1\!-\!q^n)} \frac{(a;q^2)_{n\!+\!m\!-\!1}}{(q;q^2)_n}(1-q^n)\notag\\ &\ \ \ \ \ \ \ \ \times\sum_{k=\!-\!\frac{n\!-\!1}{2}}^{\frac{n\!-\!1}{2}} \frac{(q^{n\!-\!2k\!+\!1};q^2)_k} {(q^{n\!+\!1};q^2)_k}\frac{(-b)^{\frac{n-1}{2}\!+\!k}q^{\frac{(n\!-\!1)(n\!-\!3)}{4}\!+\!k^2\!+\!(n\!-\!2)k}} {(aq^{n\!-\!2k\!-\!1};q^2)_m}, \end{aligned}$$ where $$(q^{n\!-\!2k\!+\!1};q^2)_k=(-1)^kq^{k(n-k)}(q^{1-n};q^2)_k,$$ and $$\frac{(q;q^2)_n}{(q^2;q^2)_{\frac{n-1}{2}}^2(1-q^n)}=\begin{bmatrix} n\!-\!1\\\frac{n-1}{2} \end{bmatrix}_{q^2}\begin{bmatrix} 2n\!-\!1\\n\!-\!1 \end{bmatrix}_q\frac{1}{(-q;q)_{n-1}^2},$$ substituting the above two equations into ([\[app-equ-1-proof-2\]](#app-equ-1-proof-2){reference-type="ref" reference="app-equ-1-proof-2"}) to obtain $$\begin{aligned} RHS&=\begin{bmatrix} n\!-\!1\\\frac{n\!-\!1}{2} \end{bmatrix}_{q^2}\begin{bmatrix} 2n\!-\!1\\n\!-\!1 \end{bmatrix}_q\frac{(\!-\!1)^{\frac{n\!-\!1}{2}}q^{\frac{(n\!-\!1)(n\!-\!3)}{4}}}{(-q;q)_{n-1}^2} \frac{(a;q^2)_{n\!+\!m\!-\!1}}{(q;q^2)_n}\notag\\ &\ \ \ \ \ \ \ \times(1-q^n)\sum_{k=\!-\!\frac{n\!-\!1}{2}}^{\frac{n\!-\!1}{2}} \frac{(q^{1-n};q^2)_k}{(q^{1+n};q^2)_k}\frac{b^{\frac{n-1}{2}\!+\!k}q^{2nk-2k}} {(aq^{n\!-\!2k\!-\!1};q^2)_m}, \notag\end{aligned}$$ combining ([\[app-equ-1-proof-1\]](#app-equ-1-proof-1){reference-type="ref" reference="app-equ-1-proof-1"}) yields 
([\[app-lemma-equ-1\]](#app-lemma-equ-1){reference-type="ref" reference="app-lemma-equ-1"}). ◻ ## The proof of Theorem [Theorem 5](#app-th1){reference-type="ref" reference="app-th1"} {#the-proof-of-theorem-app-th1} *Proof.* Let $a\rightarrow q$ , $b\rightarrow -q$ in ([\[app-lemma-equ-1\]](#app-lemma-equ-1){reference-type="ref" reference="app-lemma-equ-1"}), we have $$\begin{aligned} \label{app-th1-proof-0} \sum_{k=0}^{n-1}&\frac{(q^2;q^4)_k}{(q^2;q^2)_k}(q^{2n\!-\!2k};q^2)_{m\!-\!1} q^{-k^2\!-\!k}\notag\\ &=\begin{bmatrix} n\!-\!1\\\frac{n\!-\!1}{2} \end{bmatrix}_{q^2}\begin{bmatrix} 2n\!-\!1\\n\!-\!1 \end{bmatrix}_q\frac{(q^2;q^2)_{m\!-\!1}}{(-q;q)_{n\!-\!1}^2}q^{\!-\!\frac{(n\!-\!1)(3n\!+\!1)}{4}} \notag\\ &\ \ \ \ \ \ \ \ \ \ \times (1\!-\!q^n)\!\sum_{k=\!-\!\frac{n\!-\!1}{2}}^{\frac{n\!-\!1}{2}} \frac{(q^{1\!-\!n};q^2)_k}{(q^{1\!+\!n};q^2)_k} \frac{(q^{2n\!+\!1};q^2)_{m\!-\!1}}{(q^{n\!-\!2k};q^2)_m}(\!-\!1)^kq^{2nk\!-\!k}. \end{aligned}$$ On the one hand, on the left side of ([\[app-th1-proof-0\]](#app-th1-proof-0){reference-type="ref" reference="app-th1-proof-0"}), when $0\leq k<m-1$, $(q^{2n-2k};q^2)_{m-1}$ contains a factor $(1-q^n)$, due to $1-q^n \equiv0 \mod \Phi_n (q)$, hence $$\begin{aligned} \label{app-th1-proof-1} &\sum_{k=0}^{n-1} \frac{(q^2;q^4)_k}{(q^2;q^2)_k}(q^{2n-2k};q^2)_{m\!-\!1}q^{\!-\!k^2\!-\!k}\notag\\ &\equiv\sum_{k=m\!-\!1}^{n\!-\!1}\frac{(q^2;q^4)_k} {(q^2;q^2)_k}(q^{\!-\!2k};q^2)_{m\!-\!1} q^{\!-\!k^2\!-\!k}\mod \Phi_n(q)\notag\\ &\equiv \sum_{k=0}^{n-m}\frac{(q^2;q^4)_{k\!+\!m\!-\!1}} {(q^2;q^2)_{k\!+\!m\!-\!1}} (q^{\!-\!2k\!-\!2m\!+\!2};q^2)_{m\!-\!1} q^{k\!+\!m\!-\!(k\!+\!m)^2}\mod \Phi_n(q), \end{aligned}$$ where $$\begin{aligned} (q^{\!-\!2k\!-\!2m\!+\!2};q^2)_{m\!-\!1}=(-1)^{m-1}q^{-(m-1)(2k+m)}\frac{(q^2;q^2)_{m+k-1}}{(q^2;q^2)_{k}}, \notag \end{aligned}$$ substituting the above equation into ([\[app-th1-proof-1\]](#app-th1-proof-1){reference-type="ref" reference="app-th1-proof-1"}) to obtain $$\begin{aligned} 
\label{app-th1-proof-2} &\sum_{k=0}^{n-1} \frac{(q^2;q^4)_k}{(q^2;q^2)_k}(q^{2n-2k};q^2)_{m\!-\!1}q^{\!-\!k^2\!-\!k}\notag\\ &\equiv(\!-\!1)^{m\!-\!1}q^{\!-\!2m^2\!+\!2m}(q^2;q^4)_{m\!-\!1} \sum_{k=0}^{n-m}\frac{(q^{4m\!-\!2};q^4)_k}{(q^2;q^2)_k}q^{3k\!-\!k^2\!-\!4km}\mod \Phi_n(q). \end{aligned}$$ On the other hand, using the following formula given by Pan [@Ljx-Ph-Zy-2015 (1.5)] $$\begin{aligned} \label{equ-pan} \begin{bmatrix} n\!-\!1\\\frac{n-1}{2} \end{bmatrix}_{q^2}\equiv(-1)^{\frac{n-1}{2}}q^{\frac{1-n^2}{4}}(-q;q)_{n-1}^2\mod \Phi_n(q)^2, \end{aligned}$$ and the following formula given by Guo [@GJW-2018 (3.1)] $$\begin{aligned} \label{equ-guo} \begin{bmatrix} 2n\!-\!1\\n\!-\!1 \end{bmatrix}_q\equiv(-1)^{n-1}q^{\binom{n}{2}}\mod \Phi_n(q)^2, \end{aligned}$$ we deduce that the right side of ([\[app-th1-proof-0\]](#app-th1-proof-0){reference-type="ref" reference="app-th1-proof-0"}) satisfies $$\begin{aligned} \label{app-th1-proof-3} &\begin{bmatrix} n\!-\!1\\\frac{n\!-\!1}{2} \end{bmatrix}_{q^2}\begin{bmatrix} 2n\!-\!1\\n\!-\!1 \end{bmatrix}_q\frac{(q^2;q^2)_{m\!-\!1}}{(-q;q)_{n\!-\!1}^2}q^{\!-\!\frac{(n\!-\!1)(3n\!+\!1)}{4}} \notag\\ &\ \ \ \ \ \ \ \ \ \ \times (1\!-\!q^n)\!\sum_{k=\!-\!\frac{n\!-\!1}{2}}^{\frac{n\!-\!1}{2}} \frac{(q^{1\!-\!n};q^2)_k}{(q^{1\!+\!n};q^2)_k} \frac{(q^{2n\!+\!1};q^2)_{m\!-\!1}}{(q^{n\!-\!2k};q^2)_m}(\!-\!1)^kq^{2nk\!-\!k}\notag\\ &\equiv\!
(\!-\!1)^{\frac{n\!-\!1}{2}}q^{\!-\!\frac{n^2\!-\!1}{2}}(q^2;q^2)_{m\!-\!1} (1\!-\!q^n)\!\sum_{k=\!-\!\frac{n\!-\!1}{2}}^{\frac{n\!-\!1}{2}} \frac{(q;q^2)_{m-1}}{(q^{n\!-\!2k};q^2)_m}(\!-\!1)^kq^{\!-\!k}\mod \Phi_n(q).\end{aligned}$$ In the sum on the right side of the above congruence, $(q^{n\!-\!2k};q^2)_m$ contains the factor $(1-q^n)$ only when $0\leq k\leq m\!-\!1$; combining this with $q^n\equiv1\mod\Phi_n(q)$, we obtain $$\begin{aligned} \label{app-th1-proof-4} &(1\!-\!q^n)\!\sum_{k=\!-\!\frac{n\!-\!1}{2}}^{\frac{n\!-\!1}{2}} \frac{(q;q^2)_{m-1}}{(q^{n\!-\!2k};q^2)_m}(\!-\!1)^kq^{\!-\!k}\notag\\ &\equiv(1\!-\!q^n)\!\sum_{k=0}^{m-1} \frac{(q;q^2)_{m-1}}{(q^{n-2k};q^2)_{k}(1\!-\!q^n)(q^{n\!+\!2};q^2)_{m\!-\!1\!-\!k}}(\!-\!1)^kq^{\!-\!k}\mod\Phi_n(q)\notag\\ &\equiv \sum_{k=0}^{m-1} \frac{(q;q^2)_{m-1}}{(q^{-2k};q^2)_{k}(q^{2};q^2)_{m\!-\!1\!-\!k}}(\!-\!1)^kq^{-\!k}\mod\Phi_n(q)\notag\\ &\equiv \sum_{k=0}^{m-1} \frac{(q;q^2)_{m-1}}{(q^{2};q^2)_{k}(q^{2};q^2)_{m\!-\!1\!-\!k}}q^{k^2}\mod\Phi_n(q)\notag\\ &\equiv\frac{(q;q^2)_{m-1}}{(q^2;q^2)_{m-1}}\sum_{k=0}^{m-1}\begin{bmatrix} m\!-\!1\\k \end{bmatrix}_{q^2}q^{k^2}\mod\Phi_n(q). \end{aligned}$$ By [@gasper-book Ex 1.2(vi)], we know that $$(z;q)_{m-1}=\sum_{k=0}^{m-1}\begin{bmatrix} m\!-\!1\\k \end{bmatrix}_q (-z)^kq^{\frac{k^2-k}{2}}.$$ Letting $q\rightarrow q^2$ and then $z\rightarrow -q$ in the above equation, we obtain $$(-q;q^2)_{m-1}=\sum_{k=0}^{m-1}\begin{bmatrix} m\!-\!1\\k \end{bmatrix}_{q^2}q^{k^2}.$$ Substituting the above equation into ([\[app-th1-proof-4\]](#app-th1-proof-4){reference-type="ref" reference="app-th1-proof-4"}), we have $$\begin{aligned} (1\!-\!q^n)&\!\sum_{k=\!-\!\frac{n\!-\!1}{2}}^{\frac{n\!-\!1}{2}} \frac{(q^{1\!-\!n};q^2)_k}{(q^{1\!+\!n};q^2)_k} \frac{(q^{2n\!+\!1};q^2)_{m-1}}{(q^{n\!-\!2k};q^2)_m}(\!-\!1)^kq^{2kn\!-\!k} \equiv\frac{(q^2;q^4)_{m-1}}{(q^2;q^2)_{m-1}}\mod\Phi_n(q), \notag\end{aligned}$$ combining ([\[app-th1-proof-2\]](#app-th1-proof-2){reference-type="ref"
reference="app-th1-proof-2"}), ([\[app-th1-proof-3\]](#app-th1-proof-3){reference-type="ref" reference="app-th1-proof-3"}) and ([\[app-th1-proof-4\]](#app-th1-proof-4){reference-type="ref" reference="app-th1-proof-4"}), we obtain ([\[app-th1-equ\]](#app-th1-equ){reference-type="ref" reference="app-th1-equ"}). ◻ ## The proof of Theorem [Theorem 6](#app-th2){reference-type="ref" reference="app-th2"} {#the-proof-of-theorem-app-th2}     Before proving Theorem [Theorem 6](#app-th2){reference-type="ref" reference="app-th2"}, we first prove the following lemma. **Lemma 8**. $$\begin{aligned} \frac{(q^{2d+1};q^2)_{n}}{(q;q)_{n-1}}\equiv (1-q^n) \mod\Phi_n(q)^2. \end{aligned}$$ *Proof.* $$\begin{aligned} \frac{(q^{2d+1};q^2)_{n}}{(q;q)_{n-1}}&= \frac{(q^{2d+1};q^2)_{\frac{n\!-\!1}{2}\!-\!d}(1\!-\!q^n)(q^{n\!+\!2};q^2)_{\frac{n-1}{2}\!+\!d}}{(q;q)_{n-1}}\notag\\ &=(1\!-\!q^n)\frac{(q^{n-2};q^{-2})_{\frac{n\!-\!1}{2}\!-\!d}(q^{n\!+\!2};q^2)_{\frac{n-1}{2}\!+\!d}}{(q;q)_{n-1}}\notag\\ &\equiv (1\!-\!q^n)\frac{(q^{-2};q^{-2})_{\frac{n\!-\!1}{2}\!-\!d}(q^{2};q^2)_{\frac{n-1}{2}\!+\!d}}{(q;q)_{n-1}} \mod\Phi_n(q)^2\notag\\ &\equiv (1\!-\!q^n)\frac{(q^2;q^2)_{\frac{n-1}{2}}^2(\!-\!1)^{\frac{n\!-\!1}{2}\!-\!d}q^{-\frac{n^2-1}{4}-d^2}}{(q;q)_{n-1}} \frac{(q;q^2)_d}{(q^{\!-\!2d\!+\!1};q^2)_d}\mod\Phi_n(q)^2, \end{aligned}$$ where $$(q^{\!-\!2d\!+\!1};q^2)_d=(-1)^dq^{-d^2}(q;q^2)_d,$$ and by ([\[equ-pan\]](#equ-pan){reference-type="ref" reference="equ-pan"}) and [@wc-nhx-2022 (2.7)], we deduce that $$\begin{aligned} \frac{(q^{2d+1};q^2)_{n}}{(q;q)_{n-1}}&\equiv (1\!-\!q^n)\frac{(q^2;q^2)_{\frac{n-1}{2}}^2 (\!-\!1)^{\frac{n\!-\!1}{2}}q^{-\frac{n^2-1}{4}}}{(q;q)_{n-1}}\mod\Phi_n(q)^2\notag\\ &\equiv (1\!-\!q^n)(\!-\!1)^{\frac{n\!-\!1}{2}}q^{-\frac{n^2-1}{4}}(-q;q)_{n-1} \begin{bmatrix} n\!-\!1\\\frac{n-1}{2} \end{bmatrix}_{q^2}^{-1}\mod\Phi_n(q)^2\notag\\ &\equiv(1-q^n)\mod\Phi_n(q)^2. 
\end{aligned}$$ This completes the proof of Lemma [Lemma 8](#app-th2-lamma2){reference-type="ref" reference="app-th2-lamma2"}. ◻ **Lemma 9**. Let $n$ be a positive odd integer, let $d \in\{-\frac{n-3}{2}, -\frac{n-5}{2}, \dots, \frac{n-5}{2}, \frac{n-3}{2}\}$, and set $$\Psi_{n, d}(q)=\sum_{k=\!-\!\frac{n\!-\!1}{2}, k\neq d}^{\frac{n\!-\!1}{2}} \frac{(\!-\!1)^kq^{2dk\!-\!k}}{1\!-\!q^{2d-2k}}.$$ Then $$\begin{aligned} \label{congru-lemma-eq-0} \Psi_{n, d}(q)&\equiv(-1)^{d+1}dq^{2d^2-d}+\mathbf{Sgn}(d)(\!-\!1)^{\frac{n\!-\!1}{2}}q^{2d^2\!+\!\frac{n\!-\!1}{2}} \sum_{j=1}^{|d|}\frac{q^{j\!-\!2dj}(1\!+\!q^{(4j\!-\!2)d})}{1\!-\!q^{4j\!-\!2}}\notag\\ &\ \ \ \ \ \ \ \times\{1\!+\!(\!-\!1)^{j\!+\!d}\!+\!q^{2j\!-\!1}[(\!-\!1)^{j\!+\!d}\!-\!1]\}\mod\Phi_n(q). \end{aligned}$$ *Proof.* When $d=0$, arguing as in Guo's proof in [@gjw2019], we have $$\begin{aligned} \label{proof-congru-lemma-d equiv 0} \Psi_{n, 0}(q)&=\sum_{k=\!-\!\frac{n\!-\!1}{2}, k\neq 0}^{\frac{n\!-\!1}{2}} \frac{(-1)^kq^{-k}}{1-q^{-2k}} =\sum_{k=1}^{\frac{n-1}{2}}\frac{(-1)^kq^k}{1-q^{2k}}+ \sum_{k=1}^{\frac{n-1}{2}}\frac{(-1)^kq^{-k}}{1-q^{-2k}}\notag\\ &=\sum_{k=1}^{\frac{n-1}{2}}\frac{(-1)^k(q^k-q^k)}{1-q^{2k}}=0. \end{aligned}$$ Therefore, ([\[congru-lemma-eq-0\]](#congru-lemma-eq-0){reference-type="ref" reference="congru-lemma-eq-0"}) holds when $d=0$.
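The vanishing of $\Psi_{n,0}(q)$ is in fact an exact identity of rational functions (the terms for $k$ and $-k$ cancel pairwise), not merely a congruence, and it can be checked directly. A small sanity check in pure Python, where the helper `psi` and the sample values of $n$ and $q$ are our own choices, not notation from the paper:

```python
from fractions import Fraction

def psi(n, d, q):
    # Psi_{n,d}(q) = sum over k in {-(n-1)/2, ..., (n-1)/2}, k != d,
    # of (-1)^k * q^(2dk - k) / (1 - q^(2d - 2k)), computed exactly.
    half = (n - 1) // 2
    total = Fraction(0)
    for k in range(-half, half + 1):
        if k == d:
            continue
        sign = -1 if k % 2 else 1
        total += sign * q ** (2 * d * k - k) / (1 - q ** (2 * d - 2 * k))
    return total

# For d = 0 the k and -k terms cancel, so the sum is exactly 0 for any q.
for n in (3, 5, 7, 9):
    assert psi(n, 0, Fraction(1, 3)) == 0
```

Exact rational arithmetic (rather than floats) is used so that the assertion tests the identity itself, not a numerical approximation.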
Generally, if $-\frac{n-1}{2}< d<\frac{n-1}{2}$ and $d\neq 0$, we have $$\begin{aligned} \label{proof-congru-lemma-1} \Psi_{n, d}(q)&=\sum_{k=\!-\!\frac{n\!-\!1}{2}, k\neq d}^{\frac{n\!-\!1}{2}} \frac{(-1)^kq^{2dk-k}}{1-q^{2d-2k}} =\sum_{k=-\frac{n-1}{2}}^{d-1}\frac{(\!-\!1)^kq^{2dk\!-\!k}}{1\!-\!q^{2d\!-\!2k}} +\sum_{k=d+1}^{\frac{n-1}{2}}\frac{(\!-\!1)^kq^{2dk\!-\!k}}{1\!-\!q^{2d\!-\!2k}}\notag\\ &=\sum_{k=1}^{\frac{n-1}{2}\!+\!d}\frac{(\!-\!1)^{k\!+\!d}q^{2d^2\!-\!d\!+\!k\!-\!2dk}}{1-q^{2k}} +\sum_{k=1}^{\frac{n-1}{2}\!-\!d}\frac{(\!-\!1)^{k\!+\!d}q^{2d^2\!-\!d\!-\!k\!+\!2dk}}{1-q^{-2k}}\notag\\ &=(\!-\!1)^dq^{2d^2\!-\!d}\left(\sum_{k=1}^{\frac{n-1}{2}\!+\!d}\frac{(\!-\!1)^kq^{k\!-\!2dk}}{1-q^{2k}} +\sum_{k=1}^{\frac{n-1}{2}\!-\!d}\frac{(\!-\!1)^kq^{2dk\!-\!k}}{1-q^{-2k}}\right). \end{aligned}$$ If $0<d<\frac{n-1}{2}$, then $$\begin{aligned} \label{proof-congru-lemma-2} &\sum_{k=1}^{\frac{n-1}{2}\!+\!d}\frac{(\!-\!1)^kq^{k\!-\!2dk}}{1-q^{2k}} +\sum_{k=1}^{\frac{n-1}{2}\!-\!d}\frac{(\!-\!1)^kq^{2dk\!-\!k}}{1-q^{-2k}}\notag\\ &=\sum_{k=1}^{\frac{n-1}{2}\!-\!d}(\!-\!1)^k\left(\frac{q^{k\!-\!2dk}}{1-q^{2k}} +\frac{q^{2dk\!-\!k}}{1-q^{-2k}}\right)\!+\! 
\sum_{k=\frac{n-1}{2}\!-\!d\!+\!1}^{\frac{n\!-\!1}{2}\!+\!d}\frac{(-1)^kq^{k\!-\!2dk}}{1-q^{2k}}, \end{aligned}$$ where $$\begin{aligned} \label{proof-congru-lemma-3} &\sum_{k=1}^{\frac{n-1}{2}\!-\!d}(\!-\!1)^k\left(\frac{q^{k\!-\!2dk}}{1-q^{2k}} +\frac{q^{2dk\!-\!k}}{1-q^{-2k}}\right) =\sum_{k=1}^{\frac{n-1}{2}\!-\!d}(-1)^k\frac{q^{k-2dk}-q^{2dk+k}}{1-q^{2k}}\notag\\ &=\sum_{k=1}^{\frac{n-1}{2}\!-\!d}(-1)^k\frac{q^{-2dk}-q^{2dk}}{q^{-k}-q^k} =\sum_{k=1}^{\frac{n-1}{2}\!-\!d}(-1)^k\sum_{j=1}^{d}(q^{(2j-1)k}+q^{-(2j-1)k})\notag\\ &=\sum_{j=1}^{d}\sum_{k=1}^{\frac{n-1}{2}\!-\!d}[(-1)^kq^{(2j-1)k}+(-1)^kq^{-(2j-1)k}]\notag\\ &=\sum_{j=1}^{d}\left(\frac{\!-\!q^{2j\!-\!1}\!+\!(\!-\!1)^{\frac{n\!-\!1}{2}\!-\!d}q^{(2j\!-\!1)(\frac{n-1}{2}-d+1)}}{1+q^{2j-1}} \!+\!\frac{\!-\!q^{\!-\!2j\!+\!1}\!+\!(\!-\!1)^{\frac{n\!-\!1}{2}\!-\!d}q^{\!-\!(2j\!-\!1)(\frac{n\!-\!1}{2}\!-\!d\!+\!1)}}{1+q^{-2j+1}}\right)\notag\\ &\equiv -d+\sum_{j=1}^{d}\frac{(\!-\!1)^{\frac{n\!-\!1}{2}\!+\!d}q^{\frac{n\!-\!1}{2}\!+\!j} (q^{d\!-\!2dj}\!+\!q^{-d\!+\!2dj})}{1+q^{2j-1}}\mod \Phi_n(q), \end{aligned}$$ and $$\begin{aligned} \label{proof-congru-lemma-4} & \sum_{k=\frac{n-1}{2}\!-\!d\!+\!1}^{\frac{n\!-\!1}{2}\!+\!d}\frac{(-1)^kq^{k\!-\!2dk}}{1-q^{2k}} =\sum_{k=\frac{n\!-\!1}{2}\!-\!d\!+\!1}^{\frac{n-1}{2}}\frac{(-1)^kq^{k\!-\!2dk}}{1-q^{2k}} \!+\!\sum_{k=\frac{n\!-\!1}{2}\!+\!1}^{\frac{n\!-\!1}{2}\!+\!d}\frac{(-1)^kq^{k\!-\!2dk}}{1-q^{2k}}\notag\\ &\equiv \sum_{k=1}^{d}\frac{(\!-\!1)^{k+\frac{n\!-\!1}{2}\!-\!d}q^{(k\!+\!\frac{n\!-\!1}{2}\!-\!d)(1\!-\!2d)}}{1-q^{2k-2d-1}} \!+\!\sum_{k=1}^{d}\frac{(\!-\!1)^{k\!+\!\frac{n\!-\!1}{2}}q^{(k\!+\!\frac{n\!-\!1}{2})(1\!-\!2d)}}{1-q^{2k-1}}\mod \Phi_n(q)\notag\\ &\equiv \sum_{k=1}^{d}\frac{(\!-\!1)^{k\!+\!\frac{n\!-\!1}{2}\!+\!1}q^{(\frac{n\!-\!1}{2}\!-\!k\!+\!1)(1-2d)}}{1-q^{-(2k-1)}} \!+\!\sum_{k=1}^{d}\frac{(\!-\!1)^{k\!+\!\frac{n\!-\!1}{2}}q^{\frac{n\!-\!1}{2}\!+\!k-\!2dk\!+\!d}}{1-q^{2k-1}}\mod \Phi_n(q)\notag\\ 
&\equiv\sum_{k=1}^{d}\frac{(\!-\!q)^{\frac{n-1}{2}\!+\!k}(q^{2dk\!-\!d}\!+\!q^{\!-\!2dk\!+\!d})}{1\!-\!q^{2k-1}}\mod \Phi_n(q), \end{aligned}$$ combining ([\[proof-congru-lemma-1\]](#proof-congru-lemma-1){reference-type="ref" reference="proof-congru-lemma-1"}, [\[proof-congru-lemma-2\]](#proof-congru-lemma-2){reference-type="ref" reference="proof-congru-lemma-2"}, [\[proof-congru-lemma-3\]](#proof-congru-lemma-3){reference-type="ref" reference="proof-congru-lemma-3"}, [\[proof-congru-lemma-4\]](#proof-congru-lemma-4){reference-type="ref" reference="proof-congru-lemma-4"}), we know that when $0<d<\frac{n-1}{2}$, $$\begin{aligned} \label{proof-congru-lemma-d more than 0} \Psi_{n, d}(q) &\equiv(-1)^{d+1}dq^{2d^2-d}+(\!-\!1)^{\frac{n\!-\!1}{2}}q^{2d^2+\frac{n-1}{2}} \sum_{j=1}^d\frac{q^{j\!-\!2dj}(1\!+\!q^{(4j\!-\!2)d})}{1\!-\!q^{4j\!-\!2}}\notag\\ &\ \ \ \ \ \ \ \times\{1\!+\!(\!-\!1)^{j\!+\!d}\!+\!q^{2j\!-\!1}[(\!-\!1)^{j\!+\!d}\!-\!1]\}\mod\Phi_n(q), \end{aligned}$$ therefore, ([\[congru-lemma-eq-0\]](#congru-lemma-eq-0){reference-type="ref" reference="congru-lemma-eq-0"}) holds when $0<d<\frac{n-1}{2}$. When $-\frac{n-1}{2}<d<0$, make $l=-d$, then $$\begin{aligned} &\sum_{k=1}^{\frac{n-1}{2}\!+\!d}\frac{(\!-\!1)^kq^{k\!-\!2dk}}{1-q^{2k}} +\sum_{k=1}^{\frac{n-1}{2}\!-\!d}\frac{(\!-\!1)^kq^{2dk\!-\!k}}{1-q^{-2k}}\notag\\ &=\sum_{k=1}^{\frac{n-1}{2}\!-\!l}(\!-\!1)^k\left(\frac{q^{k\!+\!2lk}}{1-q^{2k}} +\frac{q^{-2lk\!-\!k}}{1-q^{-2k}}\right)\!+\! \sum_{k=\frac{n-1}{2}\!-\!l\!+\!1}^{\frac{n\!-\!1}{2}\!+\!l}\frac{(-1)^kq^{-k\!-\!2lk}}{1-q^{-2k}}, \notag\end{aligned}$$ similar to ([\[proof-congru-lemma-3\]](#proof-congru-lemma-3){reference-type="ref" reference="proof-congru-lemma-3"}, [\[proof-congru-lemma-4\]](#proof-congru-lemma-4){reference-type="ref" reference="proof-congru-lemma-4"}), we have $$\begin{aligned} \sum_{k=1}^{\frac{n-1}{2}\!-\!l}(\!-\!1)^k&\left(\frac{q^{k\!+\!2lk}}{1\!-\!q^{2k}} \!+\!\frac{q^{\!-\!2lk\!-\!k}}{1\!-\!q^{-2k}}\right) \equiv\! 
l\!-\!\sum_{j=1}^{l}\frac{(\!-\!1)^{\frac{n\!-\!1}{2}\!+\!l}q^{\frac{n\!-\!1}{2}\!+\!j} (q^{l\!-\!2lj}\!+\!q^{-l\!+\!2lj})}{1+q^{2j-1}}\mod \Phi_n(q), \notag\end{aligned}$$ and $$\begin{aligned} \sum_{k=\frac{n-1}{2}\!-\!l\!+\!1}^{\frac{n\!-\!1}{2}\!+\!l} \frac{(-1)^kq^{-k\!-\!2lk}}{1-q^{-2k}}\equiv -\sum_{j=1}^{l} \frac{(-q)^{\frac{n-1}{2}\!+\!j}(q^{l\!-\!2lj}\!+\!q^{-l\!+\!2lj})}{1-q^{2j-1}}\mod \Phi_n(q), \notag\end{aligned}$$ hence, for $-\frac{n-1}{2}<d<0$, $$\begin{aligned} \label{proof-congru-lemma-d less than 0} \Psi_{n, d}(q) &\equiv(-1)^{d+1}dq^{2d^2-d}\!-\!(\!-\!1)^{\frac{n\!-\!1}{2}}q^{2d^2+\frac{n-1}{2}} \sum_{j=1}^{-d}\frac{q^{j\!-\!2dj}(1\!+\!q^{(4j\!-\!2)d})}{1\!-\!q^{4j\!-\!2}}\notag\\ &\ \ \ \ \ \ \ \times\{1\!+\!(\!-\!1)^{j\!+\!d}\!+\!q^{2j\!-\!1}[(\!-\!1)^{j\!+\!d}\!-\!1]\}\mod\Phi_n(q).\end{aligned}$$ Thus, ([\[congru-lemma-eq-0\]](#congru-lemma-eq-0){reference-type="ref" reference="congru-lemma-eq-0"}) also holds when $-\frac{n-1}{2}<d<0$. Combining ([\[proof-congru-lemma-d equiv 0\]](#proof-congru-lemma-d equiv 0){reference-type="ref" reference="proof-congru-lemma-d equiv 0"}), ([\[proof-congru-lemma-d more than 0\]](#proof-congru-lemma-d more than 0){reference-type="ref" reference="proof-congru-lemma-d more than 0"}) and ([\[proof-congru-lemma-d less than 0\]](#proof-congru-lemma-d less than 0){reference-type="ref" reference="proof-congru-lemma-d less than 0"}), we obtain ([\[congru-lemma-eq-0\]](#congru-lemma-eq-0){reference-type="ref" reference="congru-lemma-eq-0"}). ◻ We now prove Theorem [Theorem 6](#app-th2){reference-type="ref" reference="app-th2"}.
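Before the proof, we record a quick numerical check of the Pochhammer quotient identity $(q^{2d+1};q^2)_n/(q;q^2)_n=(q^{2n+1};q^2)_d/(q;q^2)_d$ used in it: for integers $n\ge 1$ and $d\ge 0$ both sides equal $(q;q^2)_{n+d}/\big((q;q^2)_n(q;q^2)_d\big)$, so this is an exact identity. The helper `qpoch` and the test values below are our own (a sanity-check sketch, not part of the original argument):

```python
from fractions import Fraction

def qpoch(a, q, n):
    # finite q-Pochhammer symbol (a; q)_n = prod_{j=0}^{n-1} (1 - a*q^j)
    prod = Fraction(1)
    for j in range(n):
        prod *= 1 - a * q ** j
    return prod

q = Fraction(2, 5)  # arbitrary rational test point
for n in range(1, 6):
    for d in range(0, 4):
        lhs = qpoch(q ** (2 * d + 1), q ** 2, n) / qpoch(q, q ** 2, n)
        rhs = qpoch(q ** (2 * n + 1), q ** 2, d) / qpoch(q, q ** 2, d)
        assert lhs == rhs
```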
*Proof.* Setting $m=1$, $a=q^{2d+1}$, and $b=-q^{2d+1}$ in ([\[app-lemma-equ-1\]](#app-lemma-equ-1){reference-type="ref" reference="app-lemma-equ-1"}), we deduce that $$\begin{aligned} \sum_{k=0}^{n-1}\frac{(q^{4d\!+\!2};q^4)_k}{(q^2;q^2)_k}&q^{\!-\!4dk\!-\!k^2\!-\!k} =\begin{bmatrix} n\!-\!1\\\frac{n\!-\!1}{2} \end{bmatrix}_{q^2}\begin{bmatrix} 2n\!-\!1\\n\!-\!1 \end{bmatrix}_q\frac{(q^{2d+1};q^2)_nq^{-\frac{(n\!-\!1)(12d\!+\!3n\!+\!1)}{4}}} {(-q;q)_{n-1}^2(q;q^2)_n}\notag\\ &\times(1-q^n)\sum_{k=\!-\!\frac{n\!-\!1}{2}}^{\frac{n\!-\!1}{2}} \frac{(q^{1-n};q^2)_k}{(q^{1+n};q^2)_k} \frac{(-1)^kq^{(2n+2d)k-k}}{1-q^{n+2d-2k}}. \notag \end{aligned}$$ Since $$\frac{(q^{2d+1};q^2)_n}{(q;q^2)_n}=\frac{(q;q^2)_{n+d}}{(q;q^2)_d(q;q^2)_n} =\frac{(q^{2n+1};q^2)_d}{(q;q^2)_d},$$ using ([\[equ-pan\]](#equ-pan){reference-type="ref" reference="equ-pan"}, [\[equ-guo\]](#equ-guo){reference-type="ref" reference="equ-guo"}) we know that $$\begin{aligned} \label{proof-congru-theorem-1} \sum_{k=0}^{n-1}\frac{(q^{4d\!+\!2};q^4)_k}{(q^2;q^2)_k}&q^{\!-\!4dk\!-\!k^2\!-\!k} \equiv (-1)^{\frac{n-1}{2}}q^{\!-\!\frac{n^2\!-\!1}{2}\!+\!3d\!-\!3dn} \frac{(q^{2n+1};q^2)_d}{(q;q^2)_d}\notag\\ &\times(1-q^n)\sum_{k=\!-\!\frac{n\!-\!1}{2}}^{\frac{n\!-\!1}{2}} \frac{(q^{1-n};q^2)_k}{(q^{1+n};q^2)_k} \frac{(-1)^kq^{(2n+2d)k-k}}{1-q^{n+2d-2k}}\mod\Phi_n(q)^2\notag\\ &\equiv(-1)^{\frac{n-1}{2}}q^{\!-\!\frac{n^2\!-\!1}{2}\!+\!3d} \{(1\!-\!q^n)\Psi_{n, d}(q)\!+\!\notag\\ &\ \ \frac{(q^{2n\!+\!1};q^2)_d}{(q;q^2)_d} \frac{(q^{1\!-\!n};q^2)_d}{(q^{1\!+\!n};q^2)_d}(\!-\!1)^dq^{\!-\!dn\!+\!2d^2\!-\!d}\}\mod\Phi_n(q)^2.\end{aligned}$$ If $0\leq d<\frac{n-1}{2}$, then $$\begin{aligned} \frac{(q^{2n+1};q^2)_d}{(q;q^2)_d}\frac{(q^{1-n};q^2)_d}{(q^{1+n};q^2)_d}& =\prod_{j=0}^{d-1}\frac{(1-q^{2n+1+2j})(1-q^{1-n+2j})}{(1-q^{1+2j})(1-q^{n+1+2j})}\notag\\ &=\prod_{j=0}^{d-1}\frac{-q^{1-n+2j}(1+q^{3n})+1+q^{n+2+4j}}{(1-q^{1+2j})(1-q^{n+1+2j})}\notag\\
&=\prod_{j=0}^{d-1}\frac{\!-\!q^{1\!-\!n\!+\!2j}(1\!+\!q^{n})(1\!-\!q^n\!+\!q^{2n})\!+\!1\!+\!q^{n\!+\!2\!+\!4j}}{(1-q^{1+2j})(1-q^{n+1+2j})}. \notag\end{aligned}$$ From the proof of Theorem 1.2 in [@Liujicai-petrov-2020], we have $$\begin{aligned} \label{equ-liujicai} 1-q^{2n}=(1+q^n)(1-q^n)\equiv2(1-q^n)\mod \Phi_n(q)^2,\end{aligned}$$ so that $$1-q^n+q^{2n}\equiv q^n\mod \Phi_n(q)^2,$$ hence $$\begin{aligned} \frac{(q^{2n+1};q^2)_d}{(q;q^2)_d}\frac{(q^{1-n};q^2)_d}{(q^{1+n};q^2)_d} & = \prod_{j=0}^{d-1}\frac{\!-\!q^{1\!-\!n\!+\!2j}(1\!+\!q^{n})(1\!-\!q^n\!+\!q^{2n})\!+\!1\!+\!q^{n\!+\!2\!+\!4j}}{(1-q^{1+2j})(1-q^{n+1+2j})}\notag\\ &\equiv \prod_{j=0}^{d-1}\frac{\!-\!q^{1\!+\!2j}(1\!+\!q^{n})\!+\!1\!+\!q^{n\!+\!2\!+\!4j}}{(1-q^{1+2j})(1-q^{n+1+2j})}\mod \Phi_n(q)^2\notag\\ &\equiv \prod_{j=0}^{d-1}\frac{(1-q^{1+2j})(1-q^{n+1+2j})}{(1-q^{1+2j})(1-q^{n+1+2j})}\mod \Phi_n(q)^2\notag\\ &\equiv 1\mod \Phi_n(q)^2. \notag\end{aligned}$$ Similarly, if $-\frac{n-1}{2}<d<0$, then $$\begin{aligned} \frac{(q^{2n\!+\!1};q^2)_{d}}{(q;q^2)_{d}}\frac{(q^{1\!-\!n};q^2)_{d}}{(q^{1\!+\!n};q^2)_{d}} =\frac{(q;q^2)_{-d}(q^{1\!-\!n};q^2)_{-d}}{(q^{1\!-\!2n};q^2)_{-d}(q^{1\!+\!n};q^2)_{-d}}\equiv 1\mod\Phi_n(q)^2. \notag \end{aligned}$$ Combining these congruences with ([\[congru-lemma-eq-0\]](#congru-lemma-eq-0){reference-type="ref" reference="congru-lemma-eq-0"}), the expression in braces in the final congruence of ([\[proof-congru-theorem-1\]](#proof-congru-theorem-1){reference-type="ref" reference="proof-congru-theorem-1"}) becomes $$\begin{aligned} \label{proof-congru-theorem-3} &(1\!-\!q^n)\Psi_{n, d}(q)+\frac{(q^{2n+1};q^2)_d}{(q;q^2)_d} \frac{(q^{1-n};q^2)_d}{(q^{1+n};q^2)_d}(-1)^dq^{\!-\!dn\!+\!2d^2\!-\!d}\notag\\ &\equiv (1\!-\!q^n)\Psi_{n, d}(q)\!+\!(\!-\!1)^dq^{-dn\!+\!2d^2\!-\!d}\mod \Phi_n(q)^2\notag\\ &\equiv (1\!-\!q^n)(-1)^{d\!+\!1}dq^{2d^2\!-\!d}\!+\!(\!-\!1)^dq^{-dn\!+\!2d^2\!-\!d}\!+\!
\mathbf{Sgn}(d)(1\!-\!q^n)(\!-\!1)^{\frac{n\!-\!1}{2}}q^{2d^2\!+\!\frac{n\!-\!1}{2}}\times\notag\\ &\ \ \ \sum_{j=1}^{|d|}\frac{q^{j\!-\!2dj}(1\!+\!q^{(4j\!-\!2)d})}{1\!-\!q^{4j\!-\!2}} \{1\!+\!(\!-\!1)^{j\!+\!d}\!+\!q^{2j\!-\!1}[(\!-\!1)^{j\!+\!d}\!-\!1]\}\mod\Phi_n(q)^2\notag\\ &\equiv(\!-\!1)^dq^{2d^2\!-\!d}(q^{\!-\!dn}\!-\!d\!+\!dq^{n}) \!+\! \mathbf{Sgn}(d)(1\!-\!q^n)(\!-\!1)^{\frac{n\!-\!1}{2}}q^{2d^2+\frac{n-1}{2}}\times\notag\\ &\ \ \ \sum_{j=1}^d\frac{q^{j\!-\!2dj}(1\!+\!q^{(4j\!-\!2)d})}{1\!-\!q^{4j\!-\!2}} \{1\!+\!(\!-\!1)^{j\!+\!d}\!+\!q^{2j\!-\!1}[(\!-\!1)^{j\!+\!d}\!-\!1]\}\mod\Phi_n(q)^2, \end{aligned}$$ by ([\[equ-liujicai\]](#equ-liujicai){reference-type="ref" reference="equ-liujicai"}), we know that $$\begin{aligned} (1\!-\!q^{-dn})\!=\!(1\!-\!q^{\!-\!n})(1\!+\!q^{\!-\!n}\!+\!\dots\!+\!q^{(\!-\!d\!+\!1)n}) \equiv d(1\!-\!q^{\!-\!n})\equiv\!-\!d(1\!-\!q^n)\mod\Phi_n(q)^2, \notag\end{aligned}$$ so that $$\begin{aligned} q^{-dn}-d(1-q^n)\equiv1\mod\Phi_n(q)^2,\end{aligned}$$ thus, the first term in the final form of ([\[proof-congru-theorem-3\]](#proof-congru-theorem-3){reference-type="ref" reference="proof-congru-theorem-3"}) is $$\begin{aligned} \label{proof-congru-theorem-4} &(\!-\!1)^dq^{2d^2\!-\!d}(q^{\!-\!dn}\!-\!d\!+\!dq^{n})\equiv(\!-\!1)^dq^{2d^2\!-\!d}\mod\Phi_n(q)^2, \end{aligned}$$ combining ([\[proof-congru-theorem-1\]](#proof-congru-theorem-1){reference-type="ref" reference="proof-congru-theorem-1"}), ([\[proof-congru-theorem-3\]](#proof-congru-theorem-3){reference-type="ref" reference="proof-congru-theorem-3"}) and ([\[proof-congru-theorem-4\]](#proof-congru-theorem-4){reference-type="ref" reference="proof-congru-theorem-4"}) we have $$\begin{aligned} \label{proof-congru-theorem-5} \sum_{k=0}^{n-1}&\frac{(q^{4d\!+\!2};q^4)_k}{(q^2;q^2)_k}q^{\!-\!4dk\!-\!k^2\!-\!k} \!\equiv\! (\!-\!1)^{\frac{n\!-\!1}{2}\!+\!d}q^{\!-\!\frac{n^2\!-\!1}{2}\!+\!2d^2\!+\!2d}\!+\! 
\mathbf{Sgn}(d)(1\!-\!q^n)q^{\!-\!\frac{n(n\!-\!1)}{2}\!+\!3d\!+\!2d^2}\times\notag\\ &\sum_{j=1}^{|d|}\frac{q^{j\!-\!2dj}(1\!+\!q^{(4j\!-\!2)d})}{1\!-\!q^{4j\!-\!2}} \{1\!+\!(\!-\!1)^{j\!+\!d}\!+\!q^{2j\!-\!1}[(\!-\!1)^{j\!+\!d}\!-\!1]\}\mod\Phi_n(q)^2\notag\\ &\equiv (\!-\!1)^{\frac{n\!-\!1}{2}\!+\!d}q^{\!-\!\frac{n^2\!-\!1}{2}\!+\!2d^2\!+\!2d}\!+\! \mathbf{Sgn}(d)(1\!-\!q^n)q^{2d^2\!+\!3d}\times\notag\\ &\ \ \ \sum_{j=1}^{|d|}\frac{q^{j\!-\!2dj}(1\!+\!q^{(4j\!-\!2)d})}{1\!-\!q^{4j\!-\!2}} \{1\!+\!(\!-\!1)^{j\!+\!d}\!+\!q^{2j\!-\!1}[(\!-\!1)^{j\!+\!d}\!-\!1]\}\mod\Phi_n(q)^2. \end{aligned}$$ Therefore, Theorem [Theorem 6](#app-th2){reference-type="ref" reference="app-th2"} holds. ◻ # Acknowledgment The author would like to thank Professor Zhiguo Liu for his invaluable technical and material support while writing this manuscript. Special thanks go to Jing Gu for carefully reviewing the initial draft and providing valuable insights. The author also acknowledges Feng Liu and Deliang Wei for their assistance in the writing process. # References Carlitz, L., A q-identity, Fibonacci Quart. 12 (1974), 369--372. Gasper, George; Rahman, Mizan, Basic hypergeometric series, With a foreword by Richard Askey. Second edition. Encyclopedia of Mathematics and its Applications, 96. Cambridge University Press, Cambridge, 2004. xxvi+428 pp. Guo, Victor J. W., A q-analogue of a Ramanujan-type supercongruence involving central binomial coefficients, J. Math. Anal. Appl. 458 (2018), no. 1, 590--600. Guo, Victor J. W., Proof of a q-congruence conjectured by Tauraso, Int. J. Number Theory 15 (2019), no. 1, 37--41. Gu, Cheng-Yang; Guo, Victor J. W., Two q-congruences from Carlitz's formula, Period. Math. Hungar. 82 (2021), no. 1, 82--86. Guo, Victor J. W., New q-analogues of a congruence of Sun and Tauraso, Publ. Math. Debrecen 102 (2023), no. 1-2, 103--109. Kac, Victor; Cheung, Pokman, Quantum calculus, Universitext, Springer-Verlag, New York, 2002. x+112 pp.
Liu, Jianxin; Pan, Hao; Zhang, Yong, A generalization of Morley's congruence, Adv. Difference Equ. 2015, 2015:254, 7 pp. Liu, Ji-Cai; Petrov, Fedor, Congruences on sums of q-binomial coefficients, Adv. in Appl. Math. 116 (2020), 102003, 11 pp. Liu, Zhiguo, A q-operational equation and the Rogers-Szegő polynomials, Sci. China Math. 66 (2023), no. 6, 1199--1216. Sears, D. B., On the transformation theory of basic hypergeometric functions, Proc. London Math. Soc. (2) 53 (1951), 158--180. Sun, Zhi-Wei, Binomial coefficients, Catalan numbers and Lucas quotients, Sci. China Math. 53 (2010), 2473--2488. Sun, Zhi-Wei; Tauraso, Roberto, New congruences for central binomial coefficients, Adv. in Appl. Math. 45 (2010), no. 1, 125--148. Tauraso, Roberto, Some q-analogs of congruences for central binomial sums, Colloq. Math. 133 (2013), 133--143. Wang, Chen; Ni, He-Xia, Some q-congruences arising from certain identities, Period. Math. Hungar. 85 (2022), no. 1, 45--51. Wang, Xiaoxia; Yu, Menglin, Some generalizations of a congruence by Sun and Tauraso, Period. Math. Hungar. 85 (2022), no. 2, 240--245. Wang, Mingjin, A remark on Andrews-Askey integral, J. Math. Anal. Appl. 341 (2008), no. 2, 1487--1494. Wang, Mingjin, A transformation for the Al-Salam-Carlitz polynomials, Ars Combin. 112 (2013), 411--418. Yang, Dunkun, An expansion of (q, $\lambda$)-derivative operator, Ramanujan J. 60 (2023), no. 4, 1127--1149.
{ "id": "2309.00659", "title": "Representing Carlitz formula with q-shift operator", "authors": "Dunkun Yang", "categories": "math.NT math.CO", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We prove the Ramanujan and Sato--Tate conjectures for Bianchi modular forms of weight at least $2$. More generally, we prove these conjectures for all regular algebraic cuspidal automorphic representations of $\mathop{\mathrm{GL}}_2(\mathbf{A}_F)$ of parallel weight, where $F$ is any CM field. We deduce these theorems from a new potential automorphy theorem for the symmetric powers of $2$-dimensional compatible systems of Galois representations of parallel weight. address: - Department of Mathematics, Imperial College London, London SW7 2AZ, UK - The University of Chicago, 5734 S University Ave, Chicago, IL 60637, USA - Department of Mathematics, Imperial College London, London SW7 2AZ, UK - Mathematical Institute, University of Oxford, Woodstock Road, Oxford OX2 6GG, UK - Department of Pure Mathematics and Mathematical Statistics, Wilberforce Road, Cambridge CB3 0WB, UK author: - George Boxer - Frank Calegari - Toby Gee - James Newton - Jack A. Thorne title: The Ramanujan and Sato--Tate Conjectures for Bianchi modular forms --- # Introduction {#sec: intro} Let $f = \sum_{n=1}^{\infty} a_n q^n$ be a cuspidal modular form of weight $k \ge 2$ and level $\Gamma_1(N) \subset \mathop{\mathrm{SL}}_2({\mathbf Z})$ which is an eigenform for all the Hecke operators $T_p$ for $(p,N)=1$ and normalized so that $a_1 = 1$. The Ramanujan conjecture for $f$ --- proved by Deligne [@Deligne] as a consequence of the Weil conjectures --- is the claim that $$|a_p| \le 2 \cdot p^{(k-1)/2}.$$ Suppose that the coefficients of $f$ are real. 
The Sato--Tate conjecture (proved in a sequence of papers [@cht; @tay; @HSBT; @blght]) is the theorem that the normalized values $a_p/2p^{(k-1)/2} \in [-1,1]$ are equidistributed with respect to the Sato--Tate measure $2/\pi \cdot \sqrt{1-x^2} dx$ unless $f$ is a so-called CM form, in which case the corresponding measure is the average of the atomic measure with support zero and the measure $1/\pi \cdot 1/\sqrt{1-x^2} dx$ (the proof in this CM case is much easier and follows from [@Hecke]). (If the coefficients $a_p$ are not real, some minor modifications are required to formulate the conjecture properly.) These conjectures were originally made for the particular (non-CM) form $f = \Delta = q \prod_{n=1}^{\infty} (1 - q^n)^{24} = \sum_{n=1}^{\infty} \tau(n) q^n$ of level $\mathop{\mathrm{SL}}_2({\mathbf Z})$ and weight $k = 12$ studied by Ramanujan; this particular case turns out to be no easier than the general case. Both of these conjectures have an equivalent reformulation in the language of automorphic representations. Associated to a cuspidal modular eigenform $f$ (as above) is an automorphic representation $\pi$ for $\mathop{\mathrm{GL}}(2)/{\mathbf Q}$. The data of $\pi$ includes irreducible admissible infinite dimensional complex representations $\pi_p$ of $\mathop{\mathrm{GL}}_2({\mathbf Q}_p)$ for all $p$. For $(p,N)=1$, the representations $\pi_p$ satisfy the additional property of being so-called *spherical*, and are in particular classified by a pair of complex numbers $\{\alpha_p,\beta_p\}$ known as Satake parameters, which are related to the original coefficients $a_p$ via the equation $$x^2 - a_p x + p^{k-1} \chi(p) = (x - \alpha_p)(x - \beta_p),$$ where $\chi: ({\mathbf Z}/N {\mathbf Z})^{\times} \rightarrow {\mathbf C}^{\times}$ is the Nebentypus character of $f$. The Ramanujan conjecture is equivalent to the equality $|\alpha_p|=|\beta_p| = p^{(k-1)/2}$, which can be reformulated as saying that the representation $\pi_p$ is tempered. 
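For Ramanujan's $\Delta$ itself ($k=12$), the temperedness bound $a_p^2 \le 4p^{k-1}$ can be checked numerically from the product expansion. A short pure-Python sketch, where the truncation bound `N` and the primes tested are our own choices:

```python
# Coefficients of prod_{m>=1} (1 - x^m)^24 up to degree N - 1;
# tau(n) is then the coefficient of x^(n-1), since Delta = x * prod(...).
N = 12
coeffs = [0] * N
coeffs[0] = 1
for m in range(1, N):
    for _ in range(24):
        # multiply the truncated series by (1 - x^m), in place
        for i in range(N - 1, m - 1, -1):
            coeffs[i] -= coeffs[i - m]
tau = {n: coeffs[n - 1] for n in range(1, N + 1)}

assert tau[2] == -24 and tau[3] == 252 and tau[5] == 4830
assert tau[6] == tau[2] * tau[3]  # multiplicativity, proved by Mordell
for p in (2, 3, 5, 7, 11):
    # Deligne's bound: |tau(p)| <= 2 * p^(11/2), i.e. tau(p)^2 <= 4 * p^11
    assert tau[p] ** 2 <= 4 * p ** 11
```

This only verifies the bound at finitely many primes, of course; the point of the theorems discussed here is that it holds at every prime.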
The Sato--Tate conjecture is equivalent (for non-CM forms) to the claim that the conjugacy classes of the matrices $$\frac{1}{p^{(k-1)/2}} \cdot \left( \begin{matrix} \alpha_p & 0 \\ 0 & \beta_p \end{matrix} \right)$$ are equidistributed in $\mathop{\mathrm{SU}}(2)/\text{conjugacy}$ with respect to the probability Haar measure. One advantage of these reformulations is that they can be generalized; the original Ramanujan conjecture becomes the statement that if $\pi$ is a regular algebraic cuspidal automorphic representation for $\mathop{\mathrm{GL}}(2)/{\mathbf Q}$, then $\pi_p$ is tempered for all $p$. The general Ramanujan conjecture is the statement that if $\pi$ is a cuspidal automorphic representation for $\mathop{\mathrm{GL}}(n)/F$ for any number field $F$, then $\pi_v$ is tempered for all primes $v$ of $F$. (One can generalize further to groups beyond $\mathop{\mathrm{GL}}(n)$ but then the formulation becomes more subtle.) This conjecture is still open in the case of $\mathop{\mathrm{GL}}(2)/{\mathbf Q}$; after one drops the adjectives "regular algebraic" (or even just "regular"), one then allows Maass forms, which seem beyond the reach of all current techniques. On the other hand, one can consider regular algebraic automorphic representations $\pi$ for $\mathop{\mathrm{GL}}(2)/F$ for number fields $F$. If $F$ is a totally real field, then these correspond to Hilbert modular forms of weight $(k_i)_{i=1}^{d}$ (with $d=[F:{\mathbf Q}]$) with all weights $k_i$ at least $2$; the theory here is close to the original setting of classical modular forms. One point of similarity is that Hilbert modular forms can also be written as $q$-series (now in more variables). 
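Returning to the $\mathop{\mathrm{SU}}(2)$ formulation above: under the probability Haar measure, the trace $\mathrm{tr}(g)=2\cos\theta$ is distributed as $\tfrac{2}{\pi}\sin^2\theta\,d\theta$ on $[0,\pi]$, and its even moments are the Catalan numbers, $\int_{\mathop{\mathrm{SU}}(2)} \mathrm{tr}(g)^{2k}\,dg = C_k$ (a standard fact, equivalent to the moment formula for the Sato--Tate measure). It can be verified numerically; the step count and tolerance below are our own choices:

```python
import math

def catalan(k):
    # C_k = binom(2k, k) / (k + 1)
    return math.comb(2 * k, k) // (k + 1)

def trace_moment(k, steps=20000):
    # midpoint rule for (2/pi) * int_0^pi (2*cos t)^(2k) * sin(t)^2 dt
    h = math.pi / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += (2 * math.cos(t)) ** (2 * k) * math.sin(t) ** 2
    return (2 / math.pi) * total * h

for k in range(5):
    assert abs(trace_moment(k) - catalan(k)) < 1e-6
```

Since the integrand is a trigonometric polynomial over a full period, the equally spaced midpoint rule is exact here up to floating-point rounding, which is why such a tight tolerance is safe.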
Moreover, just as for classical modular forms, there is a direct link between Hilbert modular forms and the étale cohomology of certain algebraic (Shimura) varieties, which allows one to deduce the Ramanujan conjecture in these cases as a consequence of the Weil conjectures ([@Brylinski-Labesse; @MR2327298]). The Sato--Tate conjecture can also be proved in these cases by arguments generalizing those used for modular forms [@blgg]. In this paper, we consider the Ramanujan and Sato--Tate conjectures for regular algebraic cuspidal automorphic representations for $\mathop{\mathrm{GL}}(2)/F$ where $F$ is now an imaginary quadratic field (or more generally an imaginary CM field). In this case, the classical interpretation of these objects (sometimes called Bianchi modular forms when $F$ is imaginary quadratic) looks quite different from the familiar $q$-expansions associated to classical or Hilbert modular forms; for example, if $F$ is an imaginary quadratic field, they can be thought of as vector valued differential one-forms on arithmetic hyperbolic three manifolds. The Eichler--Shimura map allows one to relate classical modular forms of weight $k \ge 2$ to the cohomology of local systems for congruence subgroups of $\mathop{\mathrm{SL}}_2({\mathbf Z})$; the analogous theorem also allows one to relate Bianchi modular forms to the cohomology of local systems for subgroups of $\mathop{\mathrm{SL}}_2(\mathcal{O}_F)$. However, in this setting there is no longer any direct link to the cohomology of algebraic varieties. Despite this, in this paper, we prove the Ramanujan conjecture for regular algebraic cuspidal automorphic representations in full for all imaginary quadratic fields and with an assumption on the weight for arbitrary imaginary CM fields. For a precise clarification of what parallel weight $k$ means, see Definition [Definition 1](#parallelweight){reference-type="ref" reference="parallelweight"}.
When $F$ is imaginary quadratic, all regular algebraic cuspidal automorphic representations for $\mathop{\mathrm{GL}}(2)/F$ have parallel weight. **Theorem 1** (Ramanujan Conjecture, Theorem [\[thm: Ramanujan thm main paper\]](#thm: Ramanujan thm main paper){reference-type="ref" reference="thm: Ramanujan thm main paper"}). *Let $F/{\mathbf Q}$ be an imaginary CM field. Let $\pi$ be a cuspidal algebraic automorphic representation for $\mathop{\mathrm{GL}}(2)/F$ of parallel weight $k\ge 2$. Then $\pi_v$ is tempered for all finite places $v$; in particular, for places $v$ prime to the level of $\pi$, the Satake parameters $\{\alpha_v,\beta_v\}$ of $\pi_v$ satisfy $|\alpha_v| = |\beta_v| = N(v)^{(k-1)/2}$.* **Theorem 2** (Sato--Tate Conjecture, Theorem [Theorem 1](#thm_Sato_Tate_general_case){reference-type="ref" reference="thm_Sato_Tate_general_case"}). *Let $F/{\mathbf Q}$ be an imaginary CM field. Let $\pi$ be a cuspidal algebraic automorphic representation for $\mathop{\mathrm{GL}}(2)/F$ of parallel weight $k\ge 2$. Assume that $\pi$ does not have CM, equivalently, $\pi$ is not the automorphic induction of an algebraic Hecke character from a quadratic CM extension $F'/F$. For each finite place $v$ prime to the level of $\pi$, let $a_v= (\alpha_v + \beta_v)/(2 N(v)^{(k-1)/2})$ denote the normalized parameter, and suppose that the $a_v$ are real. Then the $a_v$ are uniformly distributed with respect to the Sato--Tate measure $2/\pi \cdot \sqrt{1-x^2} dx$.* As in the case $F={\mathbf Q}$, a minor modification of the statement is needed when the $a_v$ are not real; we relegate the details of this to Section [7.2](#subsec:satotate){reference-type="ref" reference="subsec:satotate"}. We also discuss some alternate formulations of Theorem [Theorem 1](#Ramanujan){reference-type="ref" reference="Ramanujan"} when $F/{\mathbf Q}$ is an imaginary quadratic field in Section [1.3](#Bianchi){reference-type="ref" reference="Bianchi"}. 
To prove Theorems [Theorem 1](#Ramanujan){reference-type="ref" reference="Ramanujan"} and [Theorem 2](#SatoTate){reference-type="ref" reference="SatoTate"}, we prove the potential automorphy of the symmetric powers of the compatible systems of Galois representations associated to a cuspidal, regular algebraic automorphic representation $\pi$ of $\mathop{\mathrm{GL}}_2(\mathbf{A}_F)$. Here again is a simplified version of our main result in this direction. (To orient the reader, the integer $k \ge 2$ parametrizing the weight in this discussion above is related to the integer $m \ge 1$ below via the relation $m = k - 1$. This mirrors the fact that the Hodge--Tate weights of $p$-adic Galois representations associated to modular forms of weight $k$ are equal to $\{0,k-1\}$.) **Theorem 3** (Potential automorphy of symmetric powers, Theorem [Theorem 1](#PO){reference-type="ref" reference="PO"}). *Let $F$ be a CM field, let $M$ be a number field, and let $m \ge 1$ be an integer. Suppose we have a system of Galois representations $$\rho_{\lambda}: G_F \rightarrow \mathop{\mathrm{GL}}_2(\overline{M}_{\lambda})$$ indexed by primes $\lambda$ of $M$ with the following compatibilities:* 1. *$\rho_{\lambda}$ is unramified outside a finite set of primes $\{v \in S\} \cup \{v | N(\lambda)\}$ where $S$ is independent of $\lambda$. For any $v$ not in this set, the characteristic polynomial $P_v(X) = X^2 + a_v X + b_v$ of $\rho_{\lambda}(\mathrm{Frob}_v)$ lies in $M[X]$ and is independent of $\lambda$.* 2. *For all but finitely many $\lambda$, the representations $\rho_{\lambda} |_{G_v}$ for primes $v|N(\lambda)$ and $v \notin S$ are crystalline with Hodge--Tate weights $H = \{0,m\}$ for every embedding of $F$ into $\overline{{\mathbf Q}}_p$, and the characteristic polynomial of crystalline Frobenius is $P_v(X)$.* *Assume that at least one $\rho_{\lambda}$ is irreducible. Then:* 1.
***Purity:** for any embedding $M \hookrightarrow \mathbf{C}$, the roots $\alpha_v$ and $\beta_v$ of $X^2 + a_v X + b_v$ have absolute value $q^{m/2}$ where $q = N(v)$.* 2. ***Potential automorphy:** There is a number field $F'/F$ such that the restrictions $\rho_{\lambda} |_{G_{F'}}$ are all automorphic and associated to a fixed cuspidal algebraic $\pi$ for $\mathop{\mathrm{GL}}(2)/F'$.* 3. ***Potential automorphy of symmetric powers:** Fix $n-1 \ge 2$. Either:* 1. *The $\rho_{\lambda}$ are all induced from a compatible system associated to an algebraic Hecke character $\chi$ of some quadratic extension $F'/F$. Then $\mathop{\mathrm{Sym}}^{n-1} \rho_{\lambda}$ is reducible and decomposes into representations of dimensions two and one which are all automorphic over $F$.* 2. *There is a number field $F'/F$ such that the representations $\mathop{\mathrm{Sym}}^{n-1} \rho_{\lambda} |_{G_{F'}}$ are all irreducible and automorphic, associated to a fixed cuspidal algebraic $\Pi$ for $\mathop{\mathrm{GL}}(n)/F'$.* The Galois representations associated to cuspidal, regular algebraic automorphic representations of $\mathop{\mathrm{GL}}_2(\mathbf{A}_F)$ are not yet known to satisfy the conditions of Theorem [Theorem 3](#ithm_potential_automorphy_of_symmetric_powers){reference-type="ref" reference="ithm_potential_automorphy_of_symmetric_powers"}, but rather a weaker condition (they form a 'very weakly compatible system'). We establish potential automorphy of symmetric powers also under this weaker condition.
Once again, we refer to the statement of Theorem [Theorem 1](#PO){reference-type="ref" reference="PO"} in the main body of the paper for the precise statement that is used to deduce Theorems [Theorem 1](#thm: Ramanujan thm main paper){reference-type="ref" reference="thm: Ramanujan thm main paper"} and [Theorem 1](#thm_Sato_Tate_general_case){reference-type="ref" reference="thm_Sato_Tate_general_case"} (and therefore Theorems [Theorem 1](#Ramanujan){reference-type="ref" reference="Ramanujan"} and [Theorem 2](#SatoTate){reference-type="ref" reference="SatoTate"} above). ## The new ideas in this paper When $m = 1$, Theorems [Theorem 1](#Ramanujan){reference-type="ref" reference="Ramanujan"}, [Theorem 2](#SatoTate){reference-type="ref" reference="SatoTate"} and [Theorem 3](#ithm_potential_automorphy_of_symmetric_powers){reference-type="ref" reference="ithm_potential_automorphy_of_symmetric_powers"} were proved in [@10author] (see [@10author Thm 1.0.1, Thm 1.0.2, Thm 7.1.14]). The deduction of Theorems [Theorem 1](#Ramanujan){reference-type="ref" reference="Ramanujan"} and [Theorem 2](#SatoTate){reference-type="ref" reference="SatoTate"} from Theorem [Theorem 3](#ithm_potential_automorphy_of_symmetric_powers){reference-type="ref" reference="ithm_potential_automorphy_of_symmetric_powers"} exactly parallels the arguments in [@10author], so we now focus on explaining the proof of Theorem [Theorem 3](#ithm_potential_automorphy_of_symmetric_powers){reference-type="ref" reference="ithm_potential_automorphy_of_symmetric_powers"}. Unsurprisingly, our arguments build on those of [@10author]: in particular, we prove the potential automorphy of the symmetric powers $\mathop{\mathrm{Sym}}^n \mathcal{R}$ by checking the residual automorphy over some extension $F' / F$ and then applying an automorphy lifting theorem. We would like to highlight three new ingredients which appear here: 1.
A result on generic reducedness of special fibres of weight $0$ (local) crystalline deformation rings (see §[\[subsec: Ihara avoidance discussion\]](#subsec: Ihara avoidance discussion){reference-type="ref" reference="subsec: Ihara avoidance discussion"} for a further introductory discussion). Using the local-global compatibility result of [@caraiani-newton], this leads to a new automorphy lifting theorem in the setting of arbitrary ramification (Theorem [Theorem 1](#thm:main_automorphy_lifting_theorem){reference-type="ref" reference="thm:main_automorphy_lifting_theorem"}). 2. An application of a theorem of Drinfeld and Kedlaya [@DrinfeldKedlaya], showing generic ordinarity of families of Dwork motives (Proposition [Proposition 1](#ordinarypoints){reference-type="ref" reference="ordinarypoints"}). This makes it possible to verify the potential residual automorphy of certain residual representations by an automorphic motive which is crystalline ordinary at some set of $p$-adic places. 3. A "$p$-$q$-$r$" switch including a version of the "Harris tensor product trick" which incorporates an additional congruence between two tensor products of compatible families. One is a tensor product with an induction of a character, as usual. The other is a tensor product with a different auxiliary compatible family, which gives us more flexibility to realise different local properties at places related by complex conjugation. We discuss in the remainder of the introduction the need for this argument, and give a more detailed sketch in §[6.2](#subsec_proof_of_pot_aut){reference-type="ref" reference="subsec_proof_of_pot_aut"} below. 
To explain in more detail the need for these innovations, suppose given a compatible system $\mathcal{R}= (r_\lambda)$ as in the statement of Theorem [Theorem 3](#ithm_potential_automorphy_of_symmetric_powers){reference-type="ref" reference="ithm_potential_automorphy_of_symmetric_powers"}, which therefore has Hodge--Tate weights $\{0, m \}$ for some $m \geq 1$ (and with $m \geq 2$ if we hope to go beyond the cases treated in [@10author]). The general strategy for proving potential automorphy (the so-called "$p$-$q$ switch") is as follows: 1. After making some CM base extension $H/F$ (depending on $n$), find an auxiliary $n$-dimensional compatible system $\mathcal{S}=\{ s_{\lambda}\}$ such that: 1. For one prime $\lambda$, the residual representations $\mathop{\mathrm{Sym}}^{n-1} \overline{r}_{\lambda} |_{G_H}$ and $\overline{s}_{\lambda}$ coincide, and moreover satisfy a number of standard "Taylor--Wiles" conditions. 2. For a second prime $\lambda'$, the residual representation $\overline{s}_{\lambda'}$ is induced from a character and is thus residually automorphic. 3. The Hodge--Tate weights of the compatible system $\mathcal{S}$ coincide with those of $\mathop{\mathrm{Sym}}^{n-1}\mathcal{R}|_{G_H}$. 2. Apply an automorphy lifting theorem at $\lambda'$ to deduce that the compatible system $\mathcal{S}$ is automorphic. Then deduce that the residual representation $\overline{s}_{\lambda}$ is automorphic, and use automorphy lifting theorems again to deduce that $\mathop{\mathrm{Sym}}^{n-1}\mathcal{R}|_{G_H}$ is automorphic. In our setting, both of these steps cause problems, but the second problem is more serious. The issue in the first step is that the most natural compatible systems $\mathcal{S}$ are those arising from motives, and a geometrically varying family of motives cannot have Hodge--Tate weights $0,m,\ldots,m(n-1)$ with $m \ge 2$ by Griffiths transversality.
(This difficulty is already present if $F={\mathbf Q}$ and one wants to prove the Sato--Tate conjecture for a classical modular form of weight greater than $2$, such as $\Delta$.) The now-standard resolution to this problem is to employ the "Harris tensor product trick" [@HarrisDouble], and replace $\mathop{\mathrm{Sym}}^{n-1} \mathcal{R}$ by $\mathop{\mathrm{Sym}}^{n-1} \mathcal{R}\otimes \mathop{\mathrm{Ind}}^{G_{F}}_{G_{L}} \mathcal{X}$ for some cyclic CM extension $L/F$, where $\mathcal{X}$ is a compatible system of algebraic Hecke characters chosen sufficiently carefully so that this new compatible system has consecutive Hodge--Tate weights. One proves this new compatible system is potentially automorphic (using for $\mathcal{S}$ a compatible system coming from the cohomology of the Dwork family), and then deduces the same for $\mathop{\mathrm{Sym}}^{n-1}\mathcal{R}$ using cyclic base change [@MR1007299]. There is a second difficulty which also arises even in the case of $\Delta$. Automorphy lifting theorems typically require that the compatible systems $\mathcal{S}$ and $\mathop{\mathrm{Sym}}^{n-1}\mathcal{R}$ (or more generally $\mathop{\mathrm{Sym}}^{n-1} \mathcal{R}\otimes \mathop{\mathrm{Ind}}^{G_{F}}_{G_{L}} \mathcal{X}$) have "the same" behaviour at places $v|p$; for example, one might demand that they are both ordinary, and indeed it is straightforward (at least after a ramified base change) to find ordinary representations in the Dwork family, and presumably difficult to understand the non-ordinary representations in any generality. This means that one would like to show that many of the representations in the compatible system $\mathcal{R}$ are ordinary. For a weight $2$ modular form, it is relatively easy to prove that there are infinitely many primes $p$ for which the $p$-adic Galois representation is ordinary at $p$. 
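To make the Hodge--Tate weight bookkeeping behind the Harris tensor product trick explicit, here is the standard computation, under the simplifying assumption that $L/F$ is cyclic of degree $m$ and that the characters in $\mathcal{X}$ are chosen so that the induction has the $m$ consecutive weights $\{0,1,\dots,m-1\}$:

```latex
% Sym^{n-1} of a rank-2 compatible system with Hodge--Tate weights {0, m}
% has the n weights 0, m, 2m, ..., (n-1)m, with gaps of size m.
% Tensoring with an induction whose weights are {0, 1, ..., m-1}
% fills in every gap, since the weights of the product are the sums:
\mathrm{HT}\bigl(\mathop{\mathrm{Sym}}^{n-1}\mathcal{R}
   \otimes \mathop{\mathrm{Ind}}^{G_F}_{G_L}\mathcal{X}\bigr)
 = \{\, i m + j \;:\; 0 \le i \le n-1,\ 0 \le j \le m-1 \,\}
 = \{0, 1, \dots, nm-1\},
% i.e. nm consecutive weights, each with multiplicity one; this is
% exactly the shape that a geometrically varying family (such as the
% Dwork family) can have, compatibly with Griffiths transversality.
```

This is why the trick converts the "gapped" weights of a symmetric power into weights that a motivic family can realize.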
However, the existence of infinitely many ordinary primes for $\Delta$ (or for any non-CM form of weight $k \ge 4$) remains an open question, so one also has to consider the possibility that the residual representation $\overline{r}_{\lambda}|_{G_{F_v}}$ is of the form $\omega^m_2 \oplus \omega^{mp}_2$ on inertia at $p$. This problem was resolved for classical modular forms in [@blght], via a further study of the Dwork family; in particular, showing that certain residual representations of the shape $\mathop{\mathrm{Sym}}^{n-1} (\omega_2 \oplus \omega^p_2)$ arise (locally on inertia) as residual representations in that family. We now consider the case of an imaginary CM field $F$. Given the automorphy lifting theorems for CM fields proved in [@10author], the most serious difficulty in adapting the strategy of [@blght] is that there is no way to avoid the possibility that a representation $r_{\lambda}$ can be simultaneously ordinary at one prime $v|p$ and non-ordinary at the complex conjugate place $v^c$. (One might hope to avoid this by considering places with $v=v^c$, but it then seems hopeless to find the required representations in the Dwork family in the non-ordinary case.) This is a problem because the automorphy lifting theorems in [@10author] for non-ordinary representations require $F$ to be unramified at our non-ordinary prime $v^c$; while at the ordinary prime $v$, we need to be able to make a highly ramified base change of (imaginary) CM fields $F'/F$ to find an appropriate representation in the Dwork family. It is however impossible to arrange that such an extension of CM fields is unramified at $v^c$ and ramified at $v$. One of the key innovations in this paper is to prove an automorphy lifting theorem that allows us to make a ramified base change at $v^c$.
This was done in the two-dimensional case in [@caraiani-newton]; we discuss the difficulties in extending this result to higher dimensions and how we overcome them in Section [\[subsec: Ihara avoidance discussion\]](#subsec: Ihara avoidance discussion){reference-type="ref" reference="subsec: Ihara avoidance discussion"} below. There turns out to be one final wrinkle: our automorphy lifting theorem applies to certain $p$-adic representations with consecutive Hodge--Tate weights which are crystalline at $p$ and are either ordinary or are (on the same component of a local crystalline deformation ring as) a symmetric power of the representation $\mathop{\mathrm{Ind}}^{G_{{\mathbf Q}_p}}_{G_{{\mathbf Q}_{p^2}}} \omega_2$. We cannot, however, achieve these conditions by considering tensor products $\mathop{\mathrm{Sym}}^{n-1} \mathcal{R}\otimes \mathop{\mathrm{Ind}}^{G_F}_{G_L} \mathcal{X}$ where $\mathcal{X}$ is a compatible system of algebraic Hecke characters. The problem is that algebraic Hecke characters have a very restricted form, and the fact that $F$ is an imaginary CM field implies that such an $\mathcal{X}$ can exist only if $r_{\lambda}$ is either ordinary at both places, or non-ordinary at both places, of each pair $\{v,v^c\}$ permuted by complex conjugation in $\mathop{\mathrm{Gal}}(F/F^{+})$. Our solution is to instead consider tensor products of the form $(\mathop{\mathrm{Sym}}^{n-1} {\mathcal R})\otimes {\mathcal R}_{\mathrm{aux}}$, where ${\mathcal R}_{\mathrm{aux}}$ is a compatible system coming from (part of) the cohomology of the Dwork hypersurface [@QianPotential]. It is now no longer possible to directly deduce the potential automorphy of $\mathop{\mathrm{Sym}}^{n-1} \mathcal{R}$ from this product.
This is not necessary to prove the Ramanujan conjecture --- already the automorphy of this tensor product combined with the Jacquet--Shalika bounds (and the fact that $\mathcal{R}_{\mathrm{aux}}$ is pure) is enough to deduce purity --- but it is necessary to prove the Sato--Tate conjecture. However, once the potential automorphy of ${\mathcal S}_{\mathrm{aux}} = (\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}) \otimes {\mathcal R}_{\mathrm{aux}}$ is established, we can (having chosen ${\mathcal R}_{\mathrm{aux}}$ carefully to begin with) find a third compatible system ${\mathcal R}_{\mathrm{CM}}$ such that ${\mathcal S}_{\mathrm{aux}} = (\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}) \otimes {\mathcal R}_{\mathrm{aux}}$ and ${\mathcal S}_{\mathrm{CM}} = (\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}) \otimes {\mathcal R}_{\mathrm{CM}}$ are residually the same at a *third* prime $r$, and ${\mathcal R}_{\mathrm{CM}}$ is induced from a character. Even though we do not have any control over the $r$-adic representation associated to $\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}$ locally at $v|r$, the fact that it occurs as the same tensor factor in the $r$-adic representations of both ${\mathcal S}_{\mathrm{aux}}$ and ${\mathcal S}_{\mathrm{CM}}$ means we can still put ourselves in a situation where both $r$-adic Galois representations lie on the same component of a local deformation ring at $v|r$. From this $p$-$q$-$r$ switch, we can show that ${\mathcal S}_{\mathrm{CM}} = (\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}) \otimes {\mathcal R}_{\mathrm{CM}}$ is potentially automorphic, from which we deduce that $\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}$ is potentially automorphic. One might also ask whether for general CM fields $F$ one can drop the hypothesis that ${\mathcal R}$ has parallel weight.
The difficulty in doing so is as follows: In order to pass from $\mathop{\mathrm{Sym}}^{n-1} \mathcal{R}$ to a compatible system with consecutive Hodge--Tate weights, one needs to tensor this compatible system with a second compatible system with certain prescribed local properties. If ${\mathcal R}$ does not have parallel weight, this auxiliary compatible system cannot have consecutive Hodge--Tate weights and for reasons explained above also cannot be induced from a compatible system of characters. It is very hard to construct such compatible systems because of the constraints on families of geometric local systems imposed by Griffiths transversality. The existence of even a *single* regular algebraic cuspidal automorphic representation for $\mathop{\mathrm{GL}}_2(\mathbf{A}_F)$ for some CM field $F$ which is neither of parallel weight $2$, nor of CM type, nor arising from base change from the totally real subfield $F^{+}$ was only established (by a computation) in [@calmazur Lemma 8.11(2)] (see also [@MR3091734]). ## Ihara avoidance and the Emerton--Gee stack {#subsec: Ihara avoidance discussion} There are two main difficulties in proving automorphy lifting theorems for $p$-adic representations with $p$ ramified in $F$. One is having local--global compatibility theorems at the places dividing $p$; this was resolved in the recent work of Caraiani--Newton [@caraiani-newton]. The other difficulty was alluded to above: the usual Taylor--Wiles method for automorphy lifting only allows us to deduce the automorphy of a $p$-adic representation $r$ from the automorphy of a congruent representation $r'$ if we know that for all finite places $v$, the representations $r|_{G_{F_v}}$ and $r'|_{G_{F_v}}$ are "connected", in the sense that they lie on the same component of the appropriate local deformation ring.
As we have sketched above, in the particular cases that we consider in this paper, we have arranged this property at the places $v|p$ by considering the ordinary and non-ordinary cases separately. (It was this construction that required us to pass to a situation where $p$ is highly ramified in $F$.) We are not, however, able to arrange that our representations are connected at all the places $v\nmid p$. Fortunately, Taylor [@tay] found a way to prove automorphy lifting theorems when the representations fail to be connected at some places $v\nmid p$, using his so-called "Ihara avoidance" argument. This argument makes an ingenious use of two different local deformation problems at places $v\nmid p$, which are congruent modulo $p$, and relates two corresponding patched modules of automorphic forms. The key point which makes this argument possible is to work with local deformation rings having the following "unique generalization" property: any generic point of their special fibre has a unique generalization to the generic fibre. More geometrically, we need to avoid having two distinct irreducible components in characteristic zero which map to a common irreducible component in the special fibre. In order to apply this argument one also needs the unique generalization property for the deformation rings at the places  $v|p$. This was previously only known in the Fontaine--Laffaille and ordinary contexts, in which case the crystalline deformation rings can be understood completely explicitly (and in the former case, there is even a unique irreducible component). (This problem was sidestepped to some extent in [@blgg; @BLGGT], but the approach there combines the Ihara avoidance argument with the Khare--Wintenberger lifting argument to produce characteristic zero lifts of residual representations of the prescribed weight and level. In our $\ell_0 > 0$ situation (in the language of [@CG]) such lifts do not always exist.) 
One way to establish the unique generalization property (when it holds) would be to explicitly compute the irreducible components of the generic fibres of the deformation rings, but this appears to be hopeless for crystalline deformation rings in any generality. However, as was already observed in [@tay §3] in the case $v\nmid p$, an alternative approach is to consider an appropriate moduli stack of Galois representations, and show that its special fibre is generically reduced (or even generically smooth), for example by showing that the deformation rings for generic choices of the residual Galois representation are formally smooth. It then follows that for an arbitrary residual representation, the special fibres of the (${\mathbf Z}_p$-flat quotients of) the deformation rings are generically reduced, which implies the unique generalization property. (While [@tay §3] does not explicitly work with moduli stacks of Galois representations, [@tay Lem. 3.2] is easily reformulated in these terms; and while  [@tay Prop. 3.1(3)] does not explicitly state that the ${\mathbf Z}_p$-flat quotient of the deformation ring has generically reduced special fibre, this follows from the argument, as in [@MR4190048 Prop. 3.1].) The unique generalization property has subsequently been used by Thorne in a context with $v\nmid p$ in the proof of [@jack Thm. 8.6] (in order to avoid any additional hypotheses when introducing an auxiliary prime to make the level structure sufficiently small), and in the case that $v|p$ by Caraiani--Newton [@caraiani-newton], who used the results of [@caraiani2022geometric], which establish the generic reducedness of the special fibres of the crystalline deformation rings in the 2-dimensional (tamely potentially) Barsotti--Tate case, by an analysis of the corresponding Emerton--Gee stacks. 
Unfortunately an (unconditional) argument with the Breuil--Mézard conjecture shows that generic reducedness is extremely rare when $v|p$ (see Remark [Remark 1](#rem: comparison to CEGS){reference-type="ref" reference="rem: comparison to CEGS"}). We are however able to prove the following theorem. **Theorem 4** (Theorem [Theorem 1](#thm: special fibre weight 0 crystalline def ring generically reduced){reference-type="ref" reference="thm: special fibre weight 0 crystalline def ring generically reduced"}). *Suppose that $p>n$, that $K/{\mathbf Q}_p$ and ${\mathbf F}/{\mathbf F}_p$ are finite extensions, and that $\overline{\rho}: G_K \to \mathop{\mathrm{GL}}_n({\mathbf F})$ is a continuous representation. Let $R^{\mathrm{crys},0}$ be the universal lifting ring for crystalline lifts of $\overline{\rho}$ of parallel Hodge--Tate weights $0,1,\dots,n-1$. Then the special fibre of $\mathop{\mathrm{Spec}}R^{\mathrm{crys},0}$ is generically reduced.* We refer the reader to the introduction to Section [2](#sec:genred){reference-type="ref" reference="sec:genred"} for a detailed overview of the proof of Theorem [Theorem 4](#intro thm: special fibre weight 0 crystalline def ring generically reduced){reference-type="ref" reference="intro thm: special fibre weight 0 crystalline def ring generically reduced"}, which as above relies on proving the corresponding property of the relevant Emerton--Gee stacks [@emertongeepicture] (whose versal rings are the crystalline lifting rings). The irreducible components of the special fibres of these stacks were described in [@emertongeepicture], and we prove our result by combining this description with a computation of extensions of rank $1$ Breuil modules. An amusing feature of this argument is that we prove a result about the deformation rings of arbitrary $n$-dimensional mod $p$ representations by reducing to a calculation for reducible $2$-dimensional representations.
## Bianchi Modular Forms {#Bianchi} Let us specialize to the case when $F/{\mathbf Q}$ is an imaginary quadratic field. Let $\pi$ be a regular algebraic cuspidal automorphic representation of $\mathop{\mathrm{GL}}_2(\mathbf{A}_F)$. Let $\chi$ be the central character of $\pi$. By definition, the representation $\pi$ occurs in $L^2_{\mathrm{cusp}}(\mathop{\mathrm{GL}}_2(F) \backslash \mathop{\mathrm{GL}}_2(\mathbf{A}_F))$. Let $\mathfrak{g}$ be the Lie algebra of $\mathop{\mathrm{GL}}_2({\mathbf C})$ as a real group. The assumption that $\pi$ is regular algebraic is equivalent to the condition that the infinitesimal character of $\pi_{\infty}$ is the same as that of $V^{\vee}$ for an algebraic representation $V$ of $\mathrm{Res}_{F/{\mathbf Q}} \mathop{\mathrm{GL}}_2$. The assumption that $\pi$ is cuspidal places a restriction on $V$ corresponding to the fact (noted earlier) that such $\pi$ has parallel weight; the corresponding representations are parametrized (up to twist) by an integer $k \ge 2$, where $k=2$ corresponds to the case when $V$ is trivial.
This choice of $k$ determines the action of $Z(\mathfrak{g})$ on $\pi_{\infty}$, and by taking functions which are suitable eigenvectors under $Z(\mathfrak{g})$, we may arrive at certain vector-valued Hecke eigenfunctions $f$ on $\mathop{\mathrm{GL}}_2(\mathbf{A}_F)$ with Fourier expansions ([@Williams §1.2], [@HidaCritical §6]) $$f \left[ \left( \begin{matrix} t & z \\ 0 & 1 \end{matrix} \right) \right] = |t|_F \sum_{\alpha \in F^{\times}} c(\alpha t \delta_F,f) W(\alpha t_{\infty}) e_F(\alpha z),$$ where $\delta = \delta_F$ is the different, $\alpha t \delta$ can be interpreted as a fractional ideal of $\mathcal{O}_F$, $c(I,f)$ is a Fourier coefficient which vanishes unless $I \subset \mathcal{O}_F$ and which we may assume is normalized so that $c(\mathcal{O}_F,f) = 1$, $W$ is an explicit Whittaker function which is vector-valued in some explicit representation of $\mathop{\mathrm{SU}}(2)$ depending on $k$, and $e_F$ is an explicit additive character of $F \backslash \mathbf{A}_F$. This has a direct translation into more classical language, and can be interpreted as a collection of $h_F$ functions on a finite union of hyperbolic spaces $\mathbf{H}^3$. The explicit functions $f$ (either adelically or classically) are known as Bianchi modular forms. For a normalized Bianchi eigenform $f$ of weight $k$ and level prime to $\mathfrak{p}$, Theorem [Theorem 1](#thm: Ramanujan thm main paper){reference-type="ref" reference="thm: Ramanujan thm main paper"} implies the following bound: **Theorem 5**. *Let $f$ be a cuspidal Bianchi modular eigenform of level $\mathfrak{n}$ and weight $k$. Let $\mathfrak{p}$ be a prime ideal of $\mathcal{O}_F$ not dividing $\mathfrak{n}$, and let $c(\mathfrak{p},f)$ be the eigenvalue of the Hecke operator $T_\mathfrak{p}$ acting on $f$.
Then $$\label{ramanujanbianchi} |c(\mathfrak{p},f)| \le 2 N(\mathfrak{p})^{(k-1)/2}.$$* This connects our theorem with the more classical version of the Ramanujan conjecture for modular forms [@Deligne] as discussed earlier in the introduction. The eigenvalues $c(\mathfrak{p},f)$ associated to $f$ have a second interpretation in terms of the cohomology of arithmetic groups and arithmetic hyperbolic $3$-manifolds. The algebraic representations $V$ of $\operatorname{Res}_{F/{\mathbf Q}}\mathop{\mathrm{GL}}_2$ are all, up to twist, given on real points $\operatorname{Res}_{F/{\mathbf Q}}\mathop{\mathrm{GL}}_2({\mathbf R}) = \mathop{\mathrm{GL}}_2({\mathbf C})$ by the representations $\mathop{\mathrm{Sym}}^{k-2} {\mathbf C}^2 \otimes \overline{\mathop{\mathrm{Sym}}^{l-2} {\mathbf C}^2}$ for a pair of integers $k, l \geq 2$. Let $\mathfrak{n} \leq \mathcal{O}_F$ be a non-zero ideal. Having fixed $k$ and $l$, we can form the group cohomology $$H = H^1(\Gamma_1(\mathfrak{n}), \mathop{\mathrm{Sym}}^{k-2} {\mathbf C}^2 \otimes \overline{\mathop{\mathrm{Sym}}^{l-2} {\mathbf C}^2})$$ of the standard congruence subgroup $\Gamma_1(\mathfrak{n}) \leq \mathop{\mathrm{GL}}_2(\mathcal{O}_F)$. Then $H$ is a finite-dimensional ${\mathbf C}$-vector space. Let $H_{\mathrm{par}} \subset H$ denote the subspace consisting of classes which vanish under restriction to $H^1(P, \mathop{\mathrm{Sym}}^{k-2} {\mathbf C}^2 \otimes \overline{\mathop{\mathrm{Sym}}^{l-2} {\mathbf C}^2})$ for every parabolic subgroup $P \subset \Gamma_1(\mathfrak{n})$. More geometrically, one can interpret $H$ as the cohomology of a local system on the Bianchi manifold $Y_1(\mathfrak{n}) = \mathbf{H}^3/\Gamma_1(\mathfrak{n})$.
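For comparison with the classical case $F = {\mathbf Q}$ mentioned above, the bound of Theorem [Theorem 5](#ithm_simplified_Ramanujan){reference-type="ref" reference="ithm_simplified_Ramanujan"} specializes (for level one and $k = 12$) to Deligne's bound $|\tau(p)| \le 2p^{11/2}$ for the Ramanujan $\tau$-function. The following short script (an illustration only, independent of the arguments of this paper) computes $\tau$ from the product expansion $\Delta = q\prod_{n\ge 1}(1-q^n)^{24}$ and checks the bound for small primes:

```python
def delta_coefficients(prec):
    """Coefficients tau(1), ..., tau(prec) of Delta = q * prod (1 - q^n)^24."""
    c = [0] * (prec + 1)
    c[1] = 1  # the leading factor q
    for n in range(1, prec + 1):
        for _ in range(24):
            # multiply the truncated series by (1 - q^n), in place;
            # iterating k downwards keeps c[k - n] at its old value
            for k in range(prec, n - 1, -1):
                c[k] -= c[k - n]
    return c

tau = delta_coefficients(20)
for p in [2, 3, 5, 7, 11, 13, 17, 19]:
    assert abs(tau[p]) <= 2 * p ** 5.5  # Deligne's bound with k = 12

print(tau[2], tau[3], tau[5], tau[7])  # -24 252 4830 -16744
```

The Hecke eigenvalues $c(\mathfrak{p},f)$ of a Bianchi eigenform play exactly the role of $\tau(p)$ here, with $2p^{11/2}$ replaced by $2N(\mathfrak{p})^{(k-1)/2}$.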
If $X_1(\mathfrak{n})$ is the Borel--Serre compactification of $Y_1(\mathfrak{n})$, then parabolic cohomology consists of classes which are trivial on the boundary $X_1(\mathfrak{n}) \smallsetminus Y_1(\mathfrak{n})$; this boundary may be identified (when $\Gamma_1(\mathfrak{n})$ is torsion free) with a finite disjoint union of complex tori. The spaces $H$ and $H_{\mathrm{par}}$ are equipped with a commuting family of linear operators, the unramified Hecke operators $T_{\mathfrak{p}}$, indexed by the principal prime ideals $\mathfrak{p} \leq \mathcal{O}_F$ not dividing $\mathfrak{n}$. More precisely, if one writes $\mathfrak{p} = (\pi)$, then the group $A \Gamma_1(\mathfrak{n}) A^{-1} \cap \Gamma_1(\mathfrak{n}) = \Gamma_1(\mathfrak{n},\mathfrak{p})$ has finite index in $\Gamma_1(\mathfrak{n})$, where $$A = \left(\begin{matrix} \pi & 0 \\ 0 & 1 \end{matrix} \right);$$ the map $T_{\mathfrak{p}}$ is induced by composing (in a suitable order) a restriction map, a conjugation-by-$A$ map, and a trace map. Note that the existence of (a large family of) such operators comes from the fact that $\Gamma = \mathop{\mathrm{GL}}_2(\mathcal{O}_F)$ has infinite index inside its commensurator in $\mathop{\mathrm{GL}}_2(F)$; as shown by Margulis [@Margulis Thm. IX.1.13], this characterizes the arithmeticity of $\Gamma$. In order to obtain an action of $T_{\mathfrak{p}}$ for more general prime ideals $\mathfrak{p}$, one needs to replace $Y_1(\mathfrak{n})$ by a disconnected union of $h_F$ commensurable arithmetic hyperbolic manifolds $Y_1(\mathfrak{n};\mathfrak{a}) = \mathbf{H}^3/\Gamma_1(\mathfrak{n};\mathfrak{a})$ indexed by ideals $\mathfrak{a}$, prime to $\mathfrak{n}$, representing the class group $\mathrm{Cl}(\mathcal{O}_F)$ of $F$, and where $Y_1(\mathfrak{n};\mathcal{O}_F) = Y_1(\mathfrak{n})$.
The group $\Gamma(\mathcal{O}_F;\mathfrak{a})$ is the automorphism group of the $\mathcal{O}_F$-module $\mathcal{O}_F \oplus \mathfrak{a}$, which consists explicitly of matrices of the form $$\left( \begin{matrix} \mathcal{O}_F & \mathfrak{a}^{-1} \\ \mathfrak{a} & \mathcal{O}_F \end{matrix} \right) \cap \mathop{\mathrm{GL}}_2(F)$$ with determinant in $\mathcal{O}^{\times}_F$. The space $H_{\mathrm{par}}$ vanishes unless $k = l$ ([@Harder 3.6.1]), which we now assume. If $h_F = 1$, the Eichler--Shimura isomorphism ([@Harder §3.6]) gives a correspondence between Bianchi cuspidal modular eigenforms $f$ of weight $k$ as described above and cohomology classes $\eta_f \in H_{\mathrm{par}}$ which are simultaneous eigenforms for all the Hecke operators. Moreover, the eigenvalues of $T_{\mathfrak{p}}$ on $\eta_f$ are given exactly by $c(\mathfrak{p},f)$. If $h_F > 1$, one must replace $H$ and $H_{\mathrm{par}}$ by the direct sum of the corresponding cohomology groups over $Y_1(\mathfrak{n};\mathfrak{a})$ for $\mathfrak{a} \in \mathrm{Cl}(\mathcal{O}_F)$. Theorem [Theorem 5](#ithm_simplified_Ramanujan){reference-type="ref" reference="ithm_simplified_Ramanujan"} now implies: **Theorem 6**. *Let $\mathfrak{p}$ be a principal prime ideal of $\mathcal{O}_F$ not dividing $\mathfrak{n}$, and let $a_{\mathfrak{p}}$ be an eigenvalue of $T_\mathfrak{p}$ on $H_{\mathrm{par}}$. Then $| a_{\mathfrak{p}} | \leq 2 N(\mathfrak{p})^{(k-1)/2}$.* These explicit formulations of our theorems can be generalized in a number of ways. Remaining in the setting of arithmetic hyperbolic $3$-manifolds (or orbifolds), we can replace $\mathop{\mathrm{GL}}_2(\mathcal{O}_F)$ by a congruence subgroup $\Gamma$ of the norm one units in a maximal order $\mathcal{O}$ of a division algebra $D/F$ where $F \hookrightarrow {\mathbf C}$ is a number field with one complex place and $D$ is definite at all real places of $F$.
When $[F:{\mathbf Q}] = 2$, we obtain the Bianchi manifolds as above (when $D/F$ is split) but also certain compact hyperbolic arithmetic three manifolds; our theorem applies equally well in the latter case (note that $H=H_{\mathrm{par}}$ in this setting). On the other hand, suppose that $F$ has at least one real place; for example, take $F = \mathbf{Q}[\theta]/(\theta^3 - \theta + 1)$, let $k = 2$, let $D/F$ be ramified at the real place and the unique prime of norm $5$. Now $H = H_{\mathrm{par}}$ is the first cohomology group of a congruence cover of the Weeks manifold. The generalized Ramanujan conjecture still predicts a bound of the shape $|a_{\mathfrak{p}}| \le 2 N(\mathfrak{p})^{1/2}$ for the eigenvalues of the Hecke operators $T_{\mathfrak{p}}$. However, our methods do not apply in this situation, and the best current bounds remain those of the form $|a_{\mathfrak{p}}| \le 2 N(\mathfrak{p})^{1/2 + 7/64}$ proved using analytic methods (see [@Sarnak]). We finish with an application of a different sort. Let $\Gamma = \mathop{\mathrm{SL}}_2(\mathcal{O}_F)$. The quotients $\Gamma \backslash \mathbf{H}^3$ were first investigated by Bianchi [@Bianchi], and for that reason they are known as Bianchi orbifolds. For a Bianchi modular form $f$ of level one, one may [@marshall §3] associate to $f$ a normalized measure $\mu_f$ on $\Gamma \backslash \mathbf{H}^3$. One then has the following [@marshall Cor 3]: **Theorem 7**. *Assume that $F$ has class number one[^1]. For any sequence of Bianchi modular eigenforms $f$ of weight tending to $\infty$, the measures $\mu_f$ converge weakly to the hyperbolic volume on $Y = \mathop{\mathrm{SL}}_2(\mathcal{O}_F) \backslash \mathbf{H}^3$.* *Proof.* As noted in [@marshall-erratum], the proof given in [@marshall] assumes the Ramanujan conjecture for Bianchi modular forms --- this is now a consequence of Theorem [Theorem 5](#ithm_simplified_Ramanujan){reference-type="ref" reference="ithm_simplified_Ramanujan"}. 
◻ ## Acknowledgements {#subsec: acknowledge} Some of the ideas in Section [2](#sec:genred){reference-type="ref" reference="sec:genred"} were found in joint discussions with Matthew Emerton, and we are grateful to him for allowing us to include them here, as well as for his assistance in proving Theorem [\[thm:Xdred is algebraic\]](#thm:Xdred is algebraic){reference-type="ref" reference="thm:Xdred is algebraic"} [\[item: we can get generic Ext classes in rhobar\]](#item: we can get generic Ext classes in rhobar){reference-type="eqref" reference="item: we can get generic Ext classes in rhobar"}. We would also like to thank Patrick Allen and Matthew Emerton for their comments on an earlier version of the paper. ## Notation {#subsec: notation} Let $K/{\mathbf Q}_p$ be a finite extension. If $\sigma:K \hookrightarrow\overline{{\mathbf Q}}_p$ is a continuous embedding of fields and $\rho$ is a de Rham representation of $G_K$ on a finite-dimensional $\overline{{\mathbf Q}}_p$-vector space $W$, then we will write $\mathrm{HT}_\sigma(\rho)$ for the multiset of Hodge--Tate numbers of $\rho$ with respect to $\sigma$, which by definition contains $i$ with multiplicity $\dim_{\overline{{\mathbf Q}}_p} (W \otimes_{\sigma,K} \widehat{\overline{{K}}}(i))^{G_K}$. We write $\varepsilon$ for the $p$-adic cyclotomic character, which is a crystalline representation with $\mathrm{HT}_\sigma(\varepsilon)=\{ -1\}$ for each $\sigma$. We say that $\rho$ has weight $0$ if for each $\sigma:K\hookrightarrow\overline{{\mathbf Q}}_p$ we have $\mathrm{HT}_\sigma(\rho)=\{0,1,\dots,d-1\}$, where $d = \dim_{\overline{{\mathbf Q}}_p} W$. We often somewhat abusively write that a representation $\rho:G_K\to\mathop{\mathrm{GL}}_d(\overline{{\mathbf Z}}_p)$ is crystalline of weight $0$ if the corresponding representation $\rho:G_K\to\mathop{\mathrm{GL}}_d(\overline{{\mathbf Q}}_p)$ is crystalline of weight $0$. Let ${\mathcal O}$ be the ring of integers in some finite extension $E/{\mathbf Q}_p$, and suppose that $E$ is large enough that it contains the images of all embeddings $\sigma:K\hookrightarrow\overline{{\mathbf Q}}_p$.
Write $\varpi$ for a uniformizer of ${\mathcal O}$, and ${\mathcal O}/\varpi={\mathbf F}$ for its residue field. We write $\operatorname{Art}_K: K^\times\to W_K^{\operatorname{ab}}$ for the isomorphism of local class field theory, normalized so that uniformizers correspond to geometric Frobenius elements. Let $\overline{\rho}:G_K\to\mathop{\mathrm{GL}}_d(\overline{{\mathbf F}}_p)$ be a continuous representation. Then after enlarging $E$ and thus ${\mathbf F}$ if necessary, we may assume that the image of $\overline{\rho}$ is contained in $\mathop{\mathrm{GL}}_d({\mathbf F})$. We write $R^{\square,{\mathcal O}}_{\overline{\rho}}$ for the universal lifting ${\mathcal O}$-algebra of $\overline{\rho}$; by definition, this (pro-)represents the functor ${\mathcal D}^{\square,{\mathcal O}}_{\overline{\rho}}$ given by lifts of $\overline{\rho}$ to representations $\rho: G_K \to \mathop{\mathrm{GL}}_d(A)$, for $A$ an Artin local ${\mathcal O}$-algebra with residue field ${\mathbf F}$. The precise choice of $E$ is unimportant, in the sense that if ${\mathcal O}'$ is the ring of integers in a finite extension $E'/E$, then by [@BLGGT Lem. 1.2.1] we have $R^{\square,{\mathcal O}'}_{\overline{\rho}}=R^{\square,{\mathcal O}}_{\overline{\rho}}\otimes_{{\mathcal O}}{\mathcal O}'$. We write  $R^{\mathrm{crys},\underline{0},{\mathcal O}}_{\overline{\rho}}$ for the unique ${\mathcal O}$-flat quotient of $R_{\overline{\rho}}^{\square,{\mathcal O}}$ with the property that if $B$ is a finite flat $E$-algebra, then an ${\mathcal O}$-algebra homomorphism $R_{\overline{\rho}}^{\square,{\mathcal O}}\to B$ factors through  $R^{\mathrm{crys},0,{\mathcal O}}_{\overline{\rho}}$ if and only if the corresponding representation of $G_K$ is crystalline of weight $0$. 
We will let $\operatorname{rec}_K$ be the local Langlands correspondence of [@ht], so that if $\pi$ is an irreducible complex admissible representation of $\mathop{\mathrm{GL}}_n(K)$, then $\operatorname{rec}_K(\pi)$ is a Frobenius semi-simple Weil--Deligne representation of the Weil group $W_K$. We write $\operatorname{rec}^T_K$ for the arithmetic normalization of the local Langlands correspondence, as defined in e.g.  [@Clo14 §2.1]; it is defined on irreducible admissible representations of $\mathop{\mathrm{GL}}_n(K)$ defined over any field which is abstractly isomorphic to ${\mathbf C}$ (e.g. $\overline{{\mathbf Q}}_l$). Let $F$ be a number field. If $v$ is a finite place of $F$ then we write $k(v)$ for the residue field of $F_v$. We identify dominant weights $\lambda$ of $\operatorname{Res}_{F/{\mathbf Q}}\mathop{\mathrm{GL}}_n$ with sets of tuples of integers $(\lambda_{\tau,1}\ge \lambda_{\tau,2} \ge \cdots \ge \lambda_{\tau,n})_{{\tau: F\hookrightarrow {\mathbf C}}}$ indexed by complex embeddings of $F$ (cf. [@10author §2.2.1]). If $\pi$ is an irreducible admissible representation of $\mathop{\mathrm{GL}}_n(\mathbf{A}_F)$ and $\lambda$ is a dominant weight, we say that $\pi$ is regular algebraic of weight $\lambda$ if the infinitesimal character of $\pi_\infty$ is the same as that of $V_\lambda^\vee$, where $V_\lambda$ is the algebraic representation of $\operatorname{Res}_{F/{\mathbf Q}}\mathop{\mathrm{GL}}_n$ of highest weight $\lambda$. We say that $\pi$ is regular algebraic if it is regular algebraic of some weight. **Definition 1** (Parallel Weight). Suppose $\pi$ is regular algebraic of weight $\lambda$. We say that $\pi$ is of parallel weight if $\lambda_{\tau, 1} - \lambda_{\tau, 2}$ is independent of $\tau$; equivalently, if $\pi$ admits a regular algebraic twist of weight $\mu = (m-1, 0)_\tau$ for some $m \ge 1$ in ${\mathbf Z}$. We say that $\pi$ has parallel weight $k$ for some integer $k \ge 2$ if ${\mu} = (k-2, 0)_\tau$. 
Let $F$ be an imaginary CM field, and let $\pi$ be a cuspidal, regular algebraic weight $\lambda$ automorphic representation of $\mathop{\mathrm{GL}}_2(\mathbf{A}_F)$. The weight ${\lambda} = (\lambda_{\tau, 1}, \lambda_{\tau, 2})_\tau \in ({\mathbf Z}^2)^{\mathop{\mathrm{Hom}}(F, {\mathbf C})}$ satisfies: - There is an integer $w \in {\mathbf Z}$ such that for all $\tau$, we have $\lambda_{\tau, 1} + \lambda_{\tau c, 2} = w$. In particular, for all $\tau$ we have $\lambda_{\tau, 1} - \lambda_{\tau, 2} = \lambda_{\tau c, 1} - \lambda_{\tau c, 2}$. This is a consequence of Clozel's purity lemma [@MR1044819 Lemma 4.9]. In particular, if $F$ is imaginary quadratic, $\pi$ is necessarily of parallel weight. # The special fibres of weight $0$ crystalline lifting rings are generically reduced {#sec:genred} The goal of this section is to prove Theorem [\[thm: special fibre weight 0 crystalline def ring generically reduced\]](#thm: special fibre weight 0 crystalline def ring generically reduced){reference-type="ref" reference="thm: special fibre weight 0 crystalline def ring generically reduced"}, which shows that if $p>n$, then for any finite extension  $K/{\mathbf Q}_p$ and any $\overline{\rho}:G_K\to\mathop{\mathrm{GL}}_n(\overline{{\mathbf F}}_p)$, the special fibre of the corresponding weight $0$ crystalline lifting ring is generically reduced. We deduce this from the corresponding statement for the special fibre of the weight $0$ crystalline Emerton--Gee stack. This stack was introduced in [@emertongeepicture]. 
We recall the results from [@emertongeepicture] that we need in Section [\[subsec: special fibre EG stacks\]](#subsec: special fibre EG stacks){reference-type="ref" reference="subsec: special fibre EG stacks"} below, but for this introduction the key points are as follows: the full Emerton--Gee stack ${\mathcal X}$ is a stack of $(\varphi,\Gamma)$-modules which sees all $\overline{\rho}$ at once, and whose versal ring at any $\overline{\rho}$ is the corresponding unrestricted lifting ring; and the weight $0$ crystalline Emerton--Gee stack ${\mathcal X}^{\underline{0}}$ is a closed substack whose versal ring at any $\overline{\rho}$ is the corresponding weight $0$ crystalline lifting ring. Generic reducedness for the (special fibre of the) stack ${\mathcal X}^{\underline{0}}$ is equivalent to the generic reducedness for the special fibres of the crystalline lifting rings, as we show by a direct argument below (in the proofs of Theorem [Theorem 1](#thm: BM cycle weight 0){reference-type="ref" reference="thm: BM cycle weight 0"} and Theorem [\[thm: special fibre weight 0 crystalline def ring generically reduced\]](#thm: special fibre weight 0 crystalline def ring generically reduced){reference-type="ref" reference="thm: special fibre weight 0 crystalline def ring generically reduced"}). Working on the stack allows us to argue more geometrically, and in particular one of the main theorems of [@emertongeepicture] classifies the irreducible components of the underlying reduced substack of ${\mathcal X}$, and shows that the underlying reduced substack of the special fibre $\overline{{\mathcal X}}^{\underline{0}}$ of ${\mathcal X}^{\underline{0}}$ is a union of these irreducible components. 
In order to show that the  $\overline{{\mathcal X}}^{\underline{0}}$ is generically reduced, it therefore suffices to determine which irreducible components are contained in this special fibre, and to show that $\overline{{\mathcal X}}^{\underline{0}}$ is reduced at a generic point of each such component. The classification in  [@emertongeepicture] of the irreducible components is via a description of the generic  $\overline{\rho}$ which occur on that component. These are all of the form $$\overline{\rho}\cong \begin{pmatrix} \overline{\chi}_1 &*&\dots &*\\ 0& \overline{\chi}_2&\dots &*\\ \vdots&& \ddots &\vdots\\ 0&\dots&0& \overline{\chi}_n\end{pmatrix}$$ where the $\overline{\chi}_i:G_K\to \overline{{\mathbf F}}_p^{\times}$ are characters and the extension classes $*$ are in generic position (in particular nonsplit). The characters $\overline{\chi}_i|_{I_K}$ are fixed on each irreducible component, and the components are usually determined by the data of the  $\overline{\chi}_i|_{I_K}$ (see Theorem [\[thm:Xdred is algebraic\]](#thm:Xdred is algebraic){reference-type="ref" reference="thm:Xdred is algebraic"} (3) for a precise statement). In order to prove our results, we show that the condition that a generic such $\overline{\rho}$ has a crystalline lift of weight $0$ seriously constrains the possible $\overline{\chi}_i$. We use a theorem of Tong Liu [@MR2388556], which in particular shows (under the assumption that $p>n$) that $\overline{\rho}$ is obtained from a (crystalline) Breuil module. In the case $n=2$, we can then argue as follows: we can compute the possible extensions of rank 1 Breuil modules, and we find that a sufficiently generic extension of $\overline{\chi}_2$ by $\overline{\chi}_1$ can only have a crystalline lift of weight $0$ if it is ordinary, in the sense that  $\overline{\chi}_1$ is unramified and $\overline{\chi}_2$ is an unramified twist of $\overline{\varepsilon}^{-1}$, where $\overline{\varepsilon}$ is the mod $p$ cyclotomic character. 
Furthermore, a generic such extension arises from a unique Breuil module, which is ordinary. (Since two-dimensional weight $0$ crystalline representations are given by the generic fibres of $p$-divisible groups, we can alternatively phrase this result as showing that $\overline{\rho}$ comes from a unique finite flat group scheme over ${\mathcal O}_K$, which is ordinary in the sense that it is an extension of a multiplicative by an étale group scheme.) By an argument of Kisin [@kis04 Prop. 2.4.14], the deformations of this Breuil module are also ordinary, so that the weight $0$ crystalline lifting rings for these generic $\overline{\rho}$'s are also ordinary. It is then easy to show that the crystalline ordinary lifting ring for a generic $\overline{\rho}$ is formally smooth, and thus has reduced special fibre, which completes the argument. Perhaps surprisingly, we are able to make a similar argument for general $n$, without making any additional calculations. For each $i$, we apply our computation of extension classes of Breuil modules to the extension of  $\overline{\chi}_{i+1}$ by $\overline{\chi}_i$ arising as a subquotient of $\overline{\rho}$. If all of these extensions are sufficiently generic, we show that $\overline{\rho}$ can only admit crystalline lifts of weight $0$ if $\overline{\chi}_i|_{I_K}=\overline{\varepsilon}^{1-i}$ for all $i$. Furthermore, we also see that a generic such $\overline{\rho}$ can only arise from an ordinary Breuil module, and again deduce that all weight $0$ crystalline lifts of $\overline{\rho}$ are ordinary. From this we deduce the formal smoothness of the corresponding lifting rings for generic $\overline{\rho}$, and conclude as above. The organization of the proof is as follows. 
In Section [2.1](#subsec:Breuil modules and strongly divisible modules){reference-type="ref" reference="subsec:Breuil modules and strongly divisible modules"} we recall Liu's results [@MR2388556] on strongly divisible modules and lattices in semistable representations, and deduce the results that we need on crystalline Breuil modules. In Section [2.2](#subsec: extensions rank one Breuil modules){reference-type="ref" reference="subsec: extensions rank one Breuil modules"} we compute extensions of rank one Breuil modules. This is essentially elementary, using only semilinear algebra and some combinatorics. We deduce from this in Section [2.3](#subsec: generic weight 0){reference-type="ref" reference="subsec: generic weight 0"} that sufficiently generic crystalline representations of weight $0$ are ordinary. In Section [2.4](#subsec: special fibre EG stacks){reference-type="ref" reference="subsec: special fibre EG stacks"} we give a brief introduction to Emerton--Gee stacks, and prove some slight generalizations of some results of [@emertongeepicture], before deducing our generic reducedness results in Section [2.5](#subsec: generic reducedness){reference-type="ref" reference="subsec: generic reducedness"}. ## Breuil modules and strongly divisible modules {#subsec:Breuil modules and strongly divisible modules} We begin by recalling some standard results about Breuil modules and Breuil--Kisin modules. The results we use are largely due to Breuil, Kisin and Liu, but for convenience we mostly cite the papers [@MR3079258; @MR3705291] which deduce versions of these results with coefficients and prove some exactness properties of the functors to Galois representations which we will make use of in our main arguments. (Note that [@MR3705291 App. 
A] makes a running assumption on the ramification of the field $K/{\mathbf Q}_p$, but this is only made in order to discuss tame descent data and compare to Fontaine--Laffaille theory, and it is easy to check that all of the results we cite from there are valid for general $K/{\mathbf Q}_p$ with trivial descent data, with identical proofs (or often with simpler proofs, as there is no need to consider the descent data).) Let $K/{\mathbf Q}_p$ be a finite extension for some $p>2$, with ring of integers ${\mathcal O}_K$ and residue field $k$. Write $e$ for the absolute ramification degree of $K$, and $f$ for its inertial degree $[k:{\mathbf F}_p]$. Fix a uniformizer $\pi\in K$ with Eisenstein polynomial $E(u)$, which we choose so that  $E(0)=p$. Fix also a compatible choice $(\pi^{1/p^n})_{n\ge 1}$ of $p$-power roots of $\pi$ in $\overline{{\mathbf Q}}_p$, and set  $K_n:=K(\pi^{1/p^n})$ and $K_\infty:=\cup_{n\ge 1}K_n$. Let $E/{\mathbf Q}_p$ be a finite extension containing the normal closure of $K$, with ring of integers ${\mathcal O}$ and residue field ${\mathbf F}$. We will consider various semilinear algebra objects with coefficients in a finite ${\mathcal O}$-algebra $A$, and it is trivial to verify that all of our definitions are compatible with extension of scalars of $A$ in an obvious way. In particular, we often take $A={\mathbf F}$, but we are free to replace ${\mathbf F}$ by an arbitrary finite extension, or (after passing to a limit in an obvious fashion) by $\overline{{\mathbf F}}_p$. For any finite ${\mathcal O}$-algebra $A$ we let $\mathfrak{S}_A:=(W(k)\otimes_{{\mathbf Z}_p}A)\llbracket u\rrbracket$, equipped with the usual $A$-linear, $W(k)$-semilinear Frobenius endomorphism  $\varphi$, with $\varphi(u)=u^p$. 
For any integer $h\ge 0$, a *Breuil--Kisin module with $A$-coefficients* of height at most $h$ is a finite free $\mathfrak{S}_{A}$-module $\mathfrak{M}$ equipped with a $\varphi$-semilinear map $\varphi:\mathfrak{M}\to\mathfrak{M}$ such that the cokernel of the linearized Frobenius $\varphi^*\mathfrak{M}\xrightarrow{1\otimes\varphi}\mathfrak{M}$ is killed by $E(u)^h$, where as usual $\varphi^*\mathfrak{M}$ denotes the Frobenius pullback $\mathfrak{S}_A\otimes_{\varphi,\mathfrak{S}_A}\mathfrak{M}$. (Here we indulge in a standard abuse of notation in writing $\varphi$ for both the endomorphism of $\mathfrak{S}_A$ and of $\mathfrak{M}$, which should not cause any confusion.) Suppose that $A={\mathbf F}$, and let $\overline{S}_{{\mathbf F}}:=\mathfrak{S}_{{\mathbf F}}/u^{ep}$. If $h\le p-2$, then a *quasi-Breuil module with ${\mathbf F}$-coefficients* ${\mathcal M}$ of height $h$ is a finite free $\overline{S}_{\mathbf F}$ module ${\mathcal M}$ equipped with a $\overline{S}_{\mathbf F}$-submodule $$u^{eh}{\mathcal M}\subseteq{\mathcal M}^h\subseteq{\mathcal M}$$ and a $\varphi$-semilinear map $\varphi:{\mathcal M}^h\to{\mathcal M}$ such that $$\overline{S}_{{\mathbf F}}\cdot\varphi({\mathcal M}^h)={\mathcal M}.$$ (The morphism $\varphi$ is usually denoted $\varphi_h$ in the literature, but we will shortly fix the choice $h=p-2$ for the rest of the paper, so we have omitted the subscript for the sake of cleaner notation.) For each $0\le h\le p-2$, there is by [@MR1695849 Thm. 4.1.1] an equivalence of categories between the category of Breuil--Kisin modules with ${\mathbf F}$-coefficients of height at most $h$ and the category of quasi-Breuil modules with ${\mathbf F}$-coefficients of height at most $h$. Explicitly, a Breuil--Kisin module $\mathfrak{M}$ of height $h\le p-2$ determines a quasi-Breuil module as follows. Write $\mathfrak{M}^h:=(1\otimes\varphi)^{-1}(u^{eh}\mathfrak{M})\subseteq\varphi^*\mathfrak{M}$. 
Set ${\mathcal M}:=\varphi^*\mathfrak{M}/u^{pe}$, and $\mathcal{M}^h=\mathfrak{M}^h/u^{pe}\varphi^*\mathfrak{M}$. Then $\varphi:{\mathcal M}^h\to{\mathcal M}$ is defined by the composite $${\mathcal M}^h\xrightarrow{1\otimes \varphi}u^{eh}\overline{S}_{{\mathbf F}}\otimes_{\mathfrak{S}_{{\mathbf F}}}\mathfrak{M}\xrightarrow{\varphi_h\otimes 1} \overline{S}_{{\mathbf F}}\otimes_{\mathfrak{S}_{{\mathbf F}}}\mathfrak{M}={\mathcal M},$$ where $\varphi_h:u^{eh}\overline{S}_{{\mathbf F}}\to \overline{S}_{{\mathbf F}}$ is the $\varphi$-semilinear morphism $\varphi_h(u^{eh}x):=\varphi(x)$. (Note that this is well-defined because if $x$ is divisible by $u^{e(p-h)}$, then $\varphi(x)$ is divisible by $u^{ep(p-h)}$ and in particular by $u^{ep}=0$.) We will often say that the Breuil--Kisin module $\mathfrak{M}$ *underlies* the quasi-Breuil module ${\mathcal M}$. Write $N:\overline{S}_{{\mathbf F}}\to \overline{S}_{{\mathbf F}}$ for the $(k\otimes_{{\mathbf F}_p}{\mathbf F})$-linear derivation $-u\frac{\partial}{\partial u}$. A *Breuil module with ${\mathbf F}$-coefficients* ${\mathcal M}$ of height $h$ is a quasi-Breuil module equipped with the additional data of a map $N:{\mathcal M}\to{\mathcal M}$ which satisfies: - $N(sx)=sN(x)+N(s)x$ for all $s\in \overline{S}_{{\mathbf F}}, x\in{\mathcal M}$, - $u^{e}N({\mathcal M}^h)\subseteq{\mathcal M}^h$, - and $\varphi(u^{e}N(x))=N(\varphi(x))$ for all $x\in{\mathcal M}^h$. We say that a Breuil module ${\mathcal M}$ is *crystalline* if $N({\mathcal M})\subseteq u{\mathcal M}$. *Remark 1*. While we will not explicitly need this below, it can be checked that if ${\mathcal M}$ is crystalline, then $u^eN({\mathcal M}^h) \subseteq u N({\mathcal M}^h)$. 
To see this, note that since $\overline{S}_{{\mathbf F}}\cdot\varphi({\mathcal M}^h)={\mathcal M}$, there is an induced ${\mathbf F}_p$-linear surjection ${\mathcal M}^h/u{\mathcal M}^h\to{\mathcal M}/u{\mathcal M}$, which is in fact an isomorphism (comparing dimensions as in [@Breuil-ENS Lem. 2.2.1.1]). If ${\mathcal M}$ is crystalline then $N$ acts by $0$ on ${\mathcal M}/u{\mathcal M}$, and the commutation relation between $N$ and $\varphi$ then shows that $u^eN$ acts by $0$ on ${\mathcal M}^h/u{\mathcal M}^h$, as required. We now define the Galois representations associated to Breuil modules and to Breuil--Kisin modules, beginning with the latter. An *étale $\varphi$-module with ${\mathbf F}$-coefficients* is by definition a finite free $(k\otimes_{{\mathbf F}_p}{\mathbf F})((u))$-module $M$ with a semilinear endomorphism $\varphi:M\to M$ such that the linearized Frobenius $\varphi^*M\xrightarrow{1\otimes\varphi}M$ is an isomorphism. Note that by definition, if $\mathfrak{M}$ is a Breuil--Kisin module with ${\mathbf F}$-coefficients, then $\mathfrak{M}[1/u]$ is an étale $\varphi$-module with ${\mathbf F}$-coefficients. Let $k((u))^{\operatorname{sep}}$ denote a separable closure of $k((u))$. By the results of [@MR1106901] (see e.g. [@kis04 1.1.12]), the functor $$T_\infty:M\mapsto(M\otimes_{k((u))}k((u))^{\operatorname{sep}})^{\varphi=1}$$ is an equivalence of categories between the category of étale $\varphi$-modules with ${\mathbf F}$-coefficients and the category of continuous representations of $G_{K_\infty}$ on ${\mathbf F}$-vector spaces, and we have $\dim_{{\mathbf F}}T_\infty(M)=\operatorname{rank}_{(k\otimes_{{\mathbf F}_p}{\mathbf F})((u))}M$. We also write $T_\infty$ for the induced functor from Breuil--Kisin modules to $G_{K_\infty}$-representations given by $\mathfrak{M}\mapsto T_\infty(\mathfrak{M}[1/u])$. 
Similarly, if $\mathfrak{M}$ is the Breuil--Kisin module underlying a quasi-Breuil module ${\mathcal M}$, we write $T_\infty({\mathcal M})$ for $T_\infty(\mathfrak{M})$. Similarly, there is a functor $T$ from the category of Breuil modules of height at most $h$ with ${\mathbf F}$-coefficients to the category of continuous representations of $G_{K}$ on ${\mathbf F}$-vector spaces defined by $$T({\mathcal M}):=\mathop{\mathrm{Hom}}_{k[u]/u^{ep},\varphi,N}({\mathcal M},\widehat{A})^\vee,$$where $\widehat{A}:=\widehat{A}_{\operatorname{st}}\otimes_Sk[u]/u^{ep}$ is defined for example in [@MR3705291 (A.3.1)]. Again we have $\dim_{{\mathbf F}}T({\mathcal M})=\operatorname{rank}_{\overline{S}_{{\mathbf F}}}{\mathcal M}$. Furthermore, by [@MR3705291 Prop. A.3.2] the forgetful functor from Breuil modules to quasi-Breuil modules induces an isomorphism $$T({\mathcal M})|_{G_{K_\infty}}\stackrel{\sim}{\longrightarrow}T_\infty({\mathcal M}).$$ From now on all of our Breuil modules will be crystalline and have height $(p-2)$. We write ${\mathbf F}\!\operatorname{BrMod}^{\operatorname{cr}}$ for the category of crystalline Breuil modules of height $(p-2)$ with ${\mathbf F}$-coefficients, and ${\mathbf F}\!\operatorname{qBrMod}$ for the category of quasi-Breuil modules of height $(p-2)$ with ${\mathbf F}$-coefficients, which we identify with the category of Breuil--Kisin modules of height at most $(p-2)$ with ${\mathbf F}$-coefficients. 
We say that a complex $$\label{eqn: ses of BrMod}0\to{\mathcal M}_1\to{\mathcal M}\to{\mathcal M}_2\to 0$$ in ${\mathbf F}\!\operatorname{BrMod}^{\operatorname{cr}}$ or ${\mathbf F}\!\operatorname{qBrMod}$ is exact if it induces exact sequences of $\overline{S}_{{\mathbf F}}$-modules $0\to{\mathcal M}_1\to{\mathcal M}\to{\mathcal M}_2\to 0$ and $$0\to{\mathcal M}_1^{p-2}\to{\mathcal M}^{p-2}\to{\mathcal M}_2^{p-2}\to 0.$$ It is easily checked that a complex of quasi-Breuil modules is exact if and only if the corresponding complex of Breuil--Kisin modules is exact (as a complex of $\mathfrak{S}_{{\mathbf F}}$-modules). If ${\mathcal M}$ is an object of ${\mathbf F}\!\operatorname{BrMod}^{\operatorname{cr}}$, then an $\overline{S}_{{\mathbf F}}$-submodule $\mathcal{N}\subseteq{\mathcal M}$ is a *Breuil submodule of ${\mathcal M}$* if it is a direct summand of ${\mathcal M}$ as a $k[u]/u^{ep}$-module, and we furthermore have $N(\mathcal{N})\subseteq\mathcal{N}$ and $\varphi(\mathcal{N}\cap{\mathcal M}^r)\subseteq\mathcal{N}$. Then $\mathcal{N}$ inherits the structure of a crystalline Breuil module from ${\mathcal M}$, as does the quotient ${\mathcal M}/\mathcal{N}$, and by [@MR3705291 Lem. 2.3.2], the complex of crystalline Breuil modules $$0\to\mathcal{N}\to{\mathcal M}\to{\mathcal M}/\mathcal{N}\to 0$$ is exact; and conversely if [\[eqn: ses of BrMod\]](#eqn: ses of BrMod){reference-type="eqref" reference="eqn: ses of BrMod"} is exact, then ${\mathcal M}_1$ is a Breuil submodule of ${\mathcal M}$. **Theorem 1**. 1. *The categories ${\mathbf F}\!\operatorname{BrMod}^{\operatorname{cr}}$ and ${\mathbf F}\!\operatorname{qBrMod}$ are exact categories in the sense of [@MR2655184], and the functors $T$ and $T_\infty$ are exact.* 2. 
*For any object ${\mathcal M}$ of ${\mathbf F}\!\operatorname{BrMod}^{\operatorname{cr}}$, there is an order preserving bijection $\Theta$ between the Breuil submodules of ${\mathcal M}$ and the $G_K$-subrepresentations of $T({\mathcal M})$, taking $\mathcal{N}$ to the image of $T(\mathcal{N})\hookrightarrow T({\mathcal M})$. Furthermore if ${\mathcal M}_1\subseteq{\mathcal M}_2$ are Breuil submodules of ${\mathcal M}$, then $\Theta({\mathcal M}_2)/\Theta({\mathcal M}_1)\cong T({\mathcal M}_2/{\mathcal M}_1)$.* *Proof.* The statement for quasi-Breuil modules is [@MR2951749 Thm. 2.1.2]. The rest of the theorem for not-necessarily-crystalline Breuil modules is [@MR3705291 Prop. 2.3.4, 2.3.5]. The case of crystalline Breuil modules follows formally, because (as noted above) a Breuil submodule of a crystalline Breuil module is automatically crystalline (as is the corresponding quotient submodule). Alternatively, it is straightforward to check that the proofs of  [@MR3705291 Prop. 2.3.4, 2.3.5] go through unchanged, once one notes that the duality on Breuil modules [@MR3079258 Defn. 3.2.8] by definition preserves the subcategory of crystalline Breuil modules. ◻ We now show that any Galois representation obtained as the reduction mod $p$ of a lattice in a crystalline representation with Hodge--Tate weights in the range $[0,p-2]$ comes from a crystalline Breuil module. This is essentially an immediate consequence of the main theorem of Liu's paper [@MR2388556], which proves an equivalence of categories between $G_K$-stable lattices inside semistable representations with Hodge--Tate weights in the range $[0,p-2]$ and strongly divisible modules. 
From this one can easily deduce an equivalence of categories between $G_K$-stable lattices inside crystalline representations with Hodge--Tate weights in the range $[0,p-2]$ and an appropriate category of "crystalline strongly divisible lattices", but since we do not need this, we avoid recalling the definitions of strongly divisible lattices and leave it to the interested reader. Recall that by the results of [@MR2263197], if $\rho:G_K\to\mathop{\mathrm{GL}}_n({\mathcal O})$ is a lattice in a crystalline representation with non-negative Hodge--Tate weights, there is a Breuil--Kisin module with ${\mathcal O}$-coefficients  $\mathfrak{M}_{{\mathcal O}}$ associated to $\rho|_{G_{K_\infty}}$ (see e.g. [@MR3164985 Thm. 3.2(3), Prop. 3.4(3)] for a precise reference allowing ${\mathcal O}$-coefficients). modules .thm} **Theorem 1**. *Let $\rho:G_K\to\mathop{\mathrm{GL}}_n({\mathcal O})$ be a lattice in a crystalline representation with Hodge--Tate weights in $[0,h]$ for some integer $0\le h\le p-2$, and write $\overline{\rho}:G_K\to\mathop{\mathrm{GL}}_n({\mathbf F})$ for its reduction modulo $\mathfrak{m}_{{\mathcal O}}$. Then there is a crystalline Breuil module ${\mathcal M}$ with ${\mathbf F}$-coefficients such that $\overline{\rho}\cong T({\mathcal M})$. Furthermore, the underlying Breuil--Kisin module of ${\mathcal M}$ has height at most $h$, and is the reduction modulo $\mathfrak{m}_{\mathcal O}$ of the Breuil--Kisin module $\mathfrak{M}_{{\mathcal O}}$ with ${\mathcal O}$-coefficients associated to $\rho|_{G_{K_\infty}}$.* *Proof.* Since crystalline representations are in particular semistable, it is immediate from [@MR3079258 Prop. 3.1.4, Lem. 3.2.2] that there is a not-necessarily crystalline Breuil module ${\mathcal M}$ with ${\mathbf F}$-coefficients such that $\overline{\rho}\cong T({\mathcal M})$, whose underlying Breuil--Kisin module has height at most $h$ (note that our $h$ is the integer $r$ in the statement of [@MR3079258 Prop. 3.1.4]). 
We claim that the Breuil module provided by these results is necessarily crystalline. To see this, note first that (in the case at hand, with no descent data) [@MR3079258 Prop. 3.1.4] is a trivial consequence of the main result [@MR2388556 Thm. 2.3.5] of Liu's paper [@MR2388556], and gives an equivalence of categories between $G_K$-stable ${\mathcal O}$-lattices inside semistable $E$-representations of $G_K$ with Hodge--Tate weights in the range $[0,p-2]$ and strongly divisible modules with ${\mathcal O}$-coefficients. In particular, there is a strongly divisible module with ${\mathcal O}$-coefficients $\widehat{{\mathcal M}}$ corresponding to $\rho$. We do not recall the notion of a strongly divisible module here, but we note that they are by definition modules over a coefficient ring $S_{{\mathcal O}}$, equipped with a Frobenius, a filtration and a monodromy operator $N$, and by [@MR3079258 Lem. 3.2.2], the Breuil module ${\mathcal M}$ is obtained from the strongly divisible module $\widehat{{\mathcal M}}$ by tensoring over $S_{{\mathcal O}}$ with $\overline{S}_{{\mathbf F}}$. By the commutative diagram at the end of [@MR2388556 §3.4], the strongly divisible module $\widehat{{\mathcal M}}$ has underlying Breuil--Kisin module $\mathfrak{M}_{{\mathcal O}}$ (via the fully faithful functor of [@MR2388556 Cor. 3.3.2]). It follows immediately that the underlying Breuil--Kisin module of ${\mathcal M}$ is the reduction modulo $\mathfrak{m}_{{\mathcal O}}$ of $\mathfrak{M}_{{\mathcal O}}$, as claimed. It remains to show that if $\rho$ is crystalline, the monodromy operator $N$ on ${\mathcal M}$ vanishes mod $u$. This follows immediately from the compatibility between  $\widehat{{\mathcal M}}$ and the weakly admissible module $D$ associated to $\rho$, for which see  [@MR2388556 §3.2]. 
◻ ## Extensions of rank one Breuil modules {#subsec: extensions rank one Breuil modules} In this section we make a computation of the possible extensions of rank one Breuil modules, and prove Lemma [Lemma 1](#lem:criteria for missing extensions){reference-type="ref" reference="lem:criteria for missing extensions"}, which gives a constraint on the Breuil modules which can witness sufficiently generic extensions of characters. These calculations are elementary, but are complicated in the case of a general field $K/{\mathbf Q}_p$, and the reader may find it helpful to firstly work through the case that $K/{\mathbf Q}_p$ is totally ramified, where the calculations simplify dramatically. Let $\overline{\sigma}_0:k\hookrightarrow{\mathbf F}$ be a fixed embedding. Inductively define $\overline{\sigma}_1,\dots,\overline{\sigma}_{f-1}$ by $\overline{\sigma}_{i+1}=\overline{\sigma}_i\circ\varphi^{-1}$, where $\varphi$ is the arithmetic Frobenius on $k$; we will often consider the numbering to be cyclic, so that $\overline{\sigma}_f=\overline{\sigma}_0$. There are idempotents $e_i\in k\otimes_{{\mathbf F}_p}{\mathbf F}$ such that if $M$ is any $k\otimes_{{\mathbf F}_p}{\mathbf F}$-module, then $M=\bigoplus_ie_iM$, and $e_iM$ is the subset of $M$ consisting of elements $m$ for which $(x\otimes 1)m=(1\otimes\overline{\sigma}_i(x))m$ for all $x\in k$. Note that $(\varphi\otimes 1)(e_i) = e_{i+1}$ for all $i$. As explained above, we are free to work with coefficients in $\overline{{\mathbf F}}_p$ rather than ${\mathbf F}$, and for convenience we do so throughout this section. (To be precise, this means that we apply the definitions above with $\overline{S}_{{\mathbf F}}$ replaced by $(k\otimes_{{\mathbf F}_p}\overline{{\mathbf F}}_p)[u]/u^{ep}$.) It will be clear to the reader that the coefficients do not intervene in any way in the calculations, and we could equally well work with coefficients in any finite extension of ${\mathbf F}$. **Definition 1**. 
Let $s_0,\ldots,s_{f-1}$ be non-negative integers, and let $a \in \overline{{\mathbf F}}_p^{\times}$. Let $\mathfrak{M}(\underline{s};a)$ be the rank one Breuil--Kisin module with $\overline{{\mathbf F}}_p$ coefficients such that $\mathfrak{M}(\underline{s}; a)_i$ is generated by $e_i$ with $$\varphi(e_{i-1}) = (a)_{i} u^{s_{i}} e_{i}.$$ Here and below, $(a)_0 = a$ and $(a)_i = 1$ if  $i\ne 0$. By [@MR3164985 Lem. 6.2], any rank one Breuil--Kisin module is isomorphic to (exactly) one of the form $\mathfrak{M}(\underline{s}; a)$. **Definition 1**. Set $\alpha_i(\mathfrak{M}(\underline{s};a)) := \frac{1}{p^f-1} \sum_{j=1}^{f} p^{f-j} s_{j+i}$. By [@MR3324938 Lem. 5.1.2], there exists a nonzero map $\mathfrak{M}(\underline{s};a) \to \mathfrak{M}(\underline{t};b)$ if and only if $\alpha_i(\mathfrak{M}(\underline{s};a)) - \alpha_i(\mathfrak{M}(\underline{t};b)) \in {\mathbf Z}_{\ge 0}$ for all $i$, and $a=b$. We now show that each rank one Breuil--Kisin module of height at most $(p-2)$ underlies a unique rank one (crystalline) Breuil module. (In particular, all rank one Breuil modules are crystalline.) **Lemma 1**. *If each $s_i\in[0,e(p-2)]$ then the rank one Breuil--Kisin module $\mathfrak{M}(\underline{s}; a)$ underlies a unique height $(p-2)$ Breuil module ${\mathcal M}={\mathcal M}(\underline{s}; a)$ with $${\mathcal M}_j^{p-2}=\langle u^{e(p-2)-s_j}(1\otimes e_{j-1})\rangle,$$ $$\varphi( u^{e(p-2)-s_j}(1\otimes e_{j-1})) = (a)_{j} (1\otimes e_{j}),$$ $$N((1\otimes e_{j-1}))=0.$$* *Proof.* We begin by noting that since $(\varphi^*\mathfrak{M}(\underline{s}; a))_i$ is generated by $(1\otimes e_{i-1})$, the quasi-Breuil module ${\mathcal M}=\varphi^*\mathfrak{M}(\underline{s}; a)/u^{ep}$ corresponding to $\mathfrak{M}(\underline{s}; a)$ has the given form. It is easy to see that taking  $N((1\otimes e_{j-1}))=0$ gives ${\mathcal M}$ the structure of a Breuil module. To see that this is the only possibility, write $N((1\otimes e_{j-1}))=\nu_j(1\otimes e_{j-1})$. 
Then we have $$N(u^{e(p-2)-s_j}(1\otimes e_{j-1}))=u^{e(p-2)-s_j}\left(s_j-e(p-2)+\nu_j\right)(1\otimes e_{j-1})\in {\mathcal M}_j^{p-2},$$ so that $$\varphi(u^eN(u^{e(p-2)-s_j}(1\otimes e_{j-1})))\in u^{ep}\varphi({\mathcal M}_j)=0,$$ and the equation $\varphi(u^eN(u^{e(p-2)-s_j}(1\otimes e_{j-1})))=N\varphi(u^{e(p-2)-s_j}(1\otimes e_{j-1}))$ gives $\nu_{j+1}=0$ for each $j$, as required. ◻ The extensions of rank one Breuil--Kisin modules are computed as follows. **Proposition 1**. *Let $\mathfrak{M}$ be an extension of $\mathfrak{M}(\underline{s};a)$ by $\mathfrak{M}(\underline{t};b)$. Then we can choose bases $e_i,f_i$ of the $\mathfrak{M}_i$ so that $\varphi$ has the form* *$$\begin{aligned} \label{eqn: basis for extension of BK modules} \varphi(e_{i-1}) & = &(b)_i u^{t_i} e_i \\ \varphi(f_{i-1}) & = &(a)_i u^{s_i} f_i + y_i e_i \nonumber \end{aligned}$$ with $y_i \in \overline{{\mathbf F}}_p\llbracket u \rrbracket$ a polynomial with $\deg(y_i) < s_i$, except that when there is a nonzero map $\mathfrak{M}(\underline{s};a)\to \mathfrak{M}(\underline{t};b)$ we must also allow $y_j$ to have a term of degree $s_j + \alpha_j(\mathfrak{M}(\underline{s};a))-\alpha_j(\mathfrak{M}(\underline{t};b))$ for any one choice of $j$.* *Proof.* This is [@MR3324938 Prop. 5.1.3]. ◻ *Remark 1*. While this is not claimed in [@MR3324938 Prop. 5.1.3], we expect that it is possible to show that distinct choices of the $y_i$ in [Proposition 1](#prop:kisin-module-extensions){reference-type="ref" reference="prop:kisin-module-extensions"} give distinct extensions of Breuil--Kisin modules. We now compute a constraint on extension classes of rank 1 Breuil modules. **Lemma 1**.
*Let ${\mathcal M}$ be a crystalline Breuil module which is an extension of ${\mathcal M}(\underline{s};a)$ by ${\mathcal M}(\underline{t};b)$, with underlying Breuil--Kisin module $\mathfrak{M}$ as in Proposition [Proposition 1](#prop:kisin-module-extensions){reference-type="ref" reference="prop:kisin-module-extensions"}.* *For each $i$ we set* *$$\label{eqn: definition of ni} n_i:=\frac{1}{p^f-1}\sum_{j=1}^fp^{f-j}(s_{j+i-1}-t_{j+i-1}-e)\in{\mathbf Q}.$$ Then the $y_j$ in Proposition [Proposition 1](#prop:kisin-module-extensions){reference-type="ref" reference="prop:kisin-module-extensions"} cannot have any terms of degree $l< s_j-e +\max(n_{j+1},1)$ with $l\not\equiv t_j\pmod{p}$.* *Proof.* The quasi-Breuil module corresponding to $\mathfrak{M}$ has ${\mathcal M}_j$ generated by $(1\otimes e_{j-1}),(1\otimes f_{j-1})$ with $${\mathcal M}_j^{p-2}=\langle u^{e(p-2)-t_j}(1\otimes e_{j-1}), u^{e(p-2)-s_j}(1\otimes f_{j-1})-u^{e(p-2)-s_j-t_j}y_j(1\otimes e_{j-1})\rangle,$$ where $\varphi$ is given by $$\varphi( u^{e(p-2)-t_j}(1\otimes e_{j-1})) = (b)_{j} (1\otimes e_{j}),$$$$\varphi( u^{e(p-2)-s_j}(1\otimes f_{j-1})-u^{e(p-2)-s_j-t_j}y_j(1\otimes e_{j-1}) ) = (a)_{j} (1\otimes f_{j}).$$ (Note that $\mathfrak{M}$ must have height at most $(p-2)$, since it underlies a Breuil module, so $y_j$ is indeed divisible by $u^{s_j+t_j-e(p-2)}$.) We have $N(1\otimes e_{j-1})=0$, and we write $N(1\otimes f_{j-1})=\mu_j(1\otimes e_{j-1})$ with $\mu_j\in u\overline{{\mathbf F}}_p[u]$, where for each $j$ we must have $$u^eN(u^{e(p-2)-s_j}(1\otimes f_{j-1})-u^{e(p-2)-s_j-t_j}(b^{-1})_jy_j(1\otimes e_{j-1}))\in {\mathcal M}^{p-2}_j,$$ and given this, we have the commutation relation $$\begin{aligned} \varphi (u^eN(u^{e(p-2)-s_j}&(1\otimes f_{j-1})-u^{e(p-2)-s_j-t_j}(b^{-1})_jy_j(1\otimes e_{j-1})))\\&= N\varphi(u^{e(p-2)-s_j}(1\otimes f_{j-1})-u^{e(p-2)-s_j-t_j}(b^{-1})_jy_j(1\otimes e_{j-1})) \\&=N((a)_{j} (1\otimes f_{j}))\\&=(a)_j\mu_{j+1}(1\otimes e_{j}). 
\end{aligned}$$ We have $$\begin{aligned} N(u^{e(p-2)-s_j}(1\otimes f_{j-1}) & -u^{e(p-2)-s_j-t_j}(b^{-1})_jy_j(1\otimes e_{j-1})) \\ & = (s_j-e(p-2))u^{e(p-2)-s_j}(1\otimes f_{j-1})+u^{e(p-2)-s_j}\mu_j(1\otimes e_{j-1})\\&\quad +N(-u^{e(p-2)-s_j-t_j}(b^{-1})_jy_j)(1\otimes e_{j-1})\\ & = (s_j-e(p-2))(u^{e(p-2)-s_j}(1\otimes f_{j-1})-u^{e(p-2)-s_j-t_j}(b^{-1})_jy_j(1\otimes e_{j-1})) \\ & \quad + (u^{e(p-2)-s_j}\mu_j+N(-u^{e(p-2)-s_j-t_j}(b^{-1})_jy_j))(1\otimes e_{j-1})\\ & \quad +((s_j-e(p-2))u^{e(p-2)-s_j-t_j}(b^{-1})_jy_j)(1\otimes e_{j-1}),\end{aligned}$$ so we need the quantity $$u^e(u^{e(p-2)-s_j}\mu_j+N(-u^{e(p-2)-s_j-t_j}(b^{-1})_jy_j)+(s_j-e(p-2))u^{e(p-2)-s_j-t_j}(b^{-1})_jy_j)$$ to be divisible by $u^{e(p-2)-t_j}$; and assuming this holds, the commutation relation with $\varphi$ reads $$\begin{gathered} (a)_{j}\mu_{j+1}= (b)_{j}u^{p(e-e(p-2)+t_j)}\varphi\biggl(u^{e(p-2)-s_j}\mu_j\\+N(-u^{e(p-2)-s_j-t_j}(b^{-1})_jy_j)+(s_j-e(p-2))u^{e(p-2)-s_j-t_j}(b^{-1})_jy_j\biggr). \end{gathered}$$ In particular we see that $\mu_{j+1}\in\mathop{\mathrm{im}}\varphi$, and writing $\mu_{j+1}=\varphi(\mu'_{j+1})$ and rearranging, we obtain $$\label{eqn: phi commutation rank 2} u^{e-s_j+t_j}\left((b)_{j}\varphi(\mu'_j)+u^{-t_j}(t_jy_j+N(y_j))\right)=(a)_{j}\mu'_{j+1}.$$ (Strictly speaking, since $\varphi(u^e)=u^{ep}=0$, this is only an equation modulo $u^e$; but it is easily checked that all terms have degree less than $e$, so it holds literally.) Examining the left hand side of [\[eqn: phi commutation rank 2\]](#eqn: phi commutation rank 2){reference-type="eqref" reference="eqn: phi commutation rank 2"}, we note that there can be no cancellation between the terms in $\varphi(\mu'_j)$ and $u^{-t_j}(t_jy_j+N(y_j))$, as the exponents of $u$ in $\varphi(\mu'_j)$ are all divisible by $p$, while none of the exponents of $u$ in $u^{-t_j}(t_jy_j+N(y_j))$ are divisible by $p$ (the terms in $t_jy_j$ with exponent $\equiv t_j \mod{p}$ cancel with terms in $N(y_j)$).
Let $d_j\ge 1$ be the $u$-adic valuation of $\mu'_j$ (setting $d_j = e$ if $\mu_j'$ is divisible by $u^e$). Then  [\[eqn: phi commutation rank 2\]](#eqn: phi commutation rank 2){reference-type="eqref" reference="eqn: phi commutation rank 2"} gives us the inequality $$\label{eqn: valuation inequality to get ni}pd_j-(s_j-t_j-e)\ge d_{j+1}.$$ (To see this, note that if the left hand side is at least $e$, there is nothing to prove; and if $d_j=e$, then since $s_j-t_j-e\le e(p-3)$, the left hand side is at least $3e>e$. Otherwise the term $u^{e-s_j+t_j}(b)_{j}\varphi(\mu'_j)$ means that the left hand side of [\[eqn: phi commutation rank 2\]](#eqn: phi commutation rank 2){reference-type="eqref" reference="eqn: phi commutation rank 2"} has a term of degree $pd_j-(s_j-t_j-e)$, because of the lack of cancellation.) Multiplying the inequalities [\[eqn: valuation inequality to get ni\]](#eqn: valuation inequality to get ni){reference-type="eqref" reference="eqn: valuation inequality to get ni"} by suitable powers of $p$ and summing, we have $$\sum_{j=1}^fp^{f-j}(pd_{j+i-1}-d_{j+i})\ge \sum_{j=1}^fp^{f-j}(s_{j+i-1}-t_{j+i-1}-e),$$which simplifies to $d_i\ge n_i$, where $n_i$ is as in [\[eqn: definition of ni\]](#eqn: definition of ni){reference-type="eqref" reference="eqn: definition of ni"}. Since ${\mathcal M}$ is crystalline by assumption, we also have $d_i\ge 1$, so that $d_i\ge\max(1,n_i)$ for all $i$. Returning to [\[eqn: phi commutation rank 2\]](#eqn: phi commutation rank 2){reference-type="eqref" reference="eqn: phi commutation rank 2"}, since the right hand side has valuation $d_{j+1}$, the lack of cancellation implies that the term $u^{e-s_j}(t_jy_j+N(y_j))$ on the left hand side is divisible by $u^{\max(1,n_{j+1})}$; equivalently, the terms in $y_j$ of degree less than $s_j-e+\max(1,n_{j+1})$ and not congruent to $t_j$ modulo $p$ vanish, as claimed. ◻ *Remark 1*. 
It follows easily from the definitions that if we have two extensions of ${\mathcal M}(\underline{s};a)$ by ${\mathcal M}(\underline{t};b)$ as in Lemma [\[lem: Breuil module extensions\]](#lem: Breuil module extensions){reference-type="ref" reference="lem: Breuil module extensions"}, then their Baer sum corresponds to the extension obtained by summing the $y_j$ (and has $N$ given by summing the $\mu_j$). In the following arguments it will be useful to note that by the definition of the $n_j$ in [\[eqn: definition of ni\]](#eqn: definition of ni){reference-type="eqref" reference="eqn: definition of ni"}, we have $$\label{eqn: relation of nj} n_j+(s_{j-1}-t_{j-1}-e)=pn_{j-1}$$ for all $j$. **Lemma 1**. *Write* *$$\label{eqn: ri}r_i:=s_i-t_i-e+\lfloor n_{i+1}\rfloor-p\lfloor n_i\rfloor +1.$$Then $r_i\in [1,p]$ for each $i$.* *Proof.* Using [\[eqn: relation of nj\]](#eqn: relation of nj){reference-type="eqref" reference="eqn: relation of nj"}, we have $r_i-1=p(n_i-\lfloor n_i\rfloor)-(n_{i+1}-\lfloor n_{i+1}\rfloor)$ and since for any real number $x$ we have $(x-\lfloor x\rfloor) \in [0,1)$ we have $r_i-1\in (-1,p)$. Since $r_i$ is an integer, the result follows. ◻ Write $$\mathop{\mathrm{Ext}}^1_{\operatorname{BrMod}^{\operatorname{cr}}}({\mathcal M}(\underline{s};a),{\mathcal M}(\underline{t};b))$$ for the $\mathop{\mathrm{Ext}}^1$ group computed in the exact category $\overline{{\mathbf F}}_p\!\operatorname{BrMod}^{\operatorname{cr}}$. 
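The quantities $\alpha_i$, $n_i$ and $r_i$ appearing above are explicit cyclic weighted sums, so the relation [\[eqn: relation of nj\]](#eqn: relation of nj){reference-type="eqref" reference="eqn: relation of nj"} and the bound $r_i\in[1,p]$ can be spot-checked numerically. The following Python sketch is our own illustration, not part of the argument (all function names are ours); it uses exact rational arithmetic so that the floors of the $n_i$ are computed correctly.

```python
import math
from fractions import Fraction

def weighted_sum(c, p, i):
    # (1/(p^f - 1)) * sum_{j=1}^{f} p^{f-j} c_{j+i}, indices read cyclically mod f
    f = len(c)
    return Fraction(sum(p ** (f - j) * c[(j + i) % f] for j in range(1, f + 1)),
                    p ** f - 1)

def alpha(s, p, i):
    # alpha_i(M(s; a)) as in the definition above
    return weighted_sum(s, p, i)

def n_vals(s, t, e, p):
    # n_i as in (eqn: definition of ni); note the extra shift i - 1 in the index
    f = len(s)
    c = [s[m] - t[m] - e for m in range(f)]
    return [weighted_sum(c, p, i - 1) for i in range(f)]

def r_vals(s, t, e, p):
    # r_i = s_i - t_i - e + floor(n_{i+1}) - p * floor(n_i) + 1
    f = len(s)
    n = n_vals(s, t, e, p)
    return [s[i] - t[i] - e + math.floor(n[(i + 1) % f]) - p * math.floor(n[i]) + 1
            for i in range(f)]
```

For instance, with $p=5$, $e=3$, $\underline{s}=(6,2,9,0)$ and $\underline{t}=(1,7,0,4)$ one finds $\underline{r}=(1,3,5,4)$, each term lying in $[1,p]$ as the lemma predicts.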
Write $$\overline{\chi}_1:=T({\mathcal M}(\underline{t};b)),$$ $$\overline{\chi}_2:=T({\mathcal M}(\underline{s};a)).$$ Then the restriction maps $$\mathop{\mathrm{Ext}}^1_{\operatorname{BrMod}^{\operatorname{cr}}}({\mathcal M}(\underline{s};a),{\mathcal M}(\underline{t};b))\xrightarrow{\,\operatorname{res}_K\,} \mathop{\mathrm{Ext}}^1_{G_K}(\overline{\chi}_2,\overline{\chi}_1)\xrightarrow{\,\operatorname{res}_{K_{\infty}}\,}\mathop{\mathrm{Ext}}^1_{G_{K_\infty}}(\overline{\chi}_2,\overline{\chi}_1)$$ are homomorphisms of $\overline{{\mathbf F}}_p$-vector spaces. Regarding elements of $\mathop{\mathrm{Ext}}^1_{G_{K_\infty}}(\overline{\chi}_2,\overline{\chi}_1)$ as étale $\varphi$-modules, we have the following description of the image of the restriction map $\operatorname{res}_{K_{\infty}}$. (In our key Lemma [Lemma 1](#lem:criteria for missing extensions){reference-type="ref" reference="lem:criteria for missing extensions"}, we will show that the composition $\operatorname{res}_{K_{\infty}}\circ\operatorname{res}_K$ has smaller image.) **Lemma 1**. 1. *[\[resKinfty injective\]]{#resKinfty injective label="resKinfty injective"} The restriction map $\operatorname{res}_{K_{\infty}}$ is injective unless $\overline{\chi}_1\overline{\chi}_2^{-1}=\overline{\varepsilon}$, in which case its kernel is $1$-dimensional, and is generated by the très ramifiée line given by the Kummer extension corresponding to the chosen uniformizer $\pi$ of $K$.* 2. *[\[item: dim image resKinfty\]]{#item: dim image resKinfty label="item: dim image resKinfty"}The image of $\operatorname{res}_{K_{\infty}}$ has dimension $[K:{\mathbf Q}_p]$, unless $\overline{\chi}_1=\overline{\chi}_2$, in which case it has dimension $[K:{\mathbf Q}_p]+1$.* 3. 
*The étale $\varphi$-modules  $M$ in the image of $\operatorname{res}_{K_{\infty}}$ are precisely those for which we can choose a basis $e_i,f_i$ of  $M_i$ so that $\varphi$ has the form* *$$\begin{aligned} \label{eqn: the Mi in image of GK} \varphi(e_{i-1}) & = &(b)_i u^{t_i} e_i \\ \varphi(f_{i-1}) & = &(a)_i u^{s_i} f_i + y_i e_i \nonumber \end{aligned}$$ where $y_i \in \overline{{\mathbf F}}_p[u,u^{-1}]$ has nonzero terms only in degrees $[s_i+\lfloor n_{i+1}\rfloor-e+1,\dots,s_i+\lfloor n_{i+1}\rfloor]$; except that when $\overline{\chi}_1=\overline{\chi}_2$ we also allow $y_j$ to have a term of degree $$s_j + \frac{1}{p^f-1}\sum_{k=1}^fp^{f-k}(s_{k+j}-t_{k+j})$$ for any one choice of $j$.* *Proof.* The first part is [@MR3324938 Lem. 5.4.2]. The second part then follows from the usual computation of the dimension of $\mathop{\mathrm{Ext}}^1_{G_K}(\overline{\chi}_2,\overline{\chi}_1)$ via Tate's Euler characteristic formula and local duality. The final part can presumably be proved in an elementary way, but for convenience we explain how to deduce it from the results of [@MR3324938] on the Breuil--Kisin modules associated to certain crystalline representations with small Hodge--Tate weights. This was first explained in [@MR3705280 Thm. 3.3.2] in the case $K/{\mathbf Q}_p$ unramified (which employed the earlier results [@MR3164985]), and in general  [@MR4504652 Thm. 4.2], under the assumption that $\overline{\chi}_1\overline{\chi}_2^{-1}\ne\overline{\varepsilon}$. However the comparison to the notation used in [@MR4504652 Thm. 4.2] is not immediate, and we need to treat the case $\overline{\chi}_1\overline{\chi}_2^{-1}=\overline{\varepsilon}$, so for the convenience of the reader we explain in our notation how to extract the claim from [@MR3324938].
It is easy to check that the étale $\varphi$-modules in [\[eqn: the Mi in image of GK\]](#eqn: the Mi in image of GK){reference-type="eqref" reference="eqn: the Mi in image of GK"} span a space of the dimension computed in part (2), so it suffices to show that all of the possibilities in  [\[eqn: the Mi in image of GK\]](#eqn: the Mi in image of GK){reference-type="eqref" reference="eqn: the Mi in image of GK"} do indeed arise from $G_K$-representations. We can and do twist so that $t_i=0$ for all $i$. (This has the effect of replacing $s_i$ by $s_i-t_i$, and leaving $n_i$ unchanged, so the general statement follows immediately by twisting back.) Then our strategy is to show that our étale $\varphi$-modules arise from the reductions of certain crystalline representations. In fact, we will see that they arise from the reductions of crystalline extensions of $p$-adic characters. We make the change of variables $f_i'=u^{-\lfloor n_{i+1}\rfloor}f_i$, and write $y_i':=u^{-p\lfloor n_{i}\rfloor}y_i$. Then we have $$\varphi(e_{i-1}) = (b)_i e_i,$$ $$\varphi(f_{i-1}')= (a)_i u^{r_i+e-1} f_i'+y'_ie_i.$$ By [\[eqn: ri\]](#eqn: ri){reference-type="eqref" reference="eqn: ri"} and our assumption that $t_i=0$, we have $$r_i+p\lfloor n_i\rfloor=r_i+t_i+p\lfloor n_i\rfloor=s_i-e+\lfloor n_{i+1}\rfloor +1,$$ so we need to show that every choice of $y'_i$ having nonzero terms in degrees $[r_i,r_i+e-1]$ occurs (together with the additional term in the statement in the case that $\overline{\chi}_1=\overline{\chi}_2$). If we make a further change of variables to replace $f'_i$ with $f'_i+z_ie_i$ for all $i$, with $z_i\in\overline{{\mathbf F}}_p$, then we may exchange the terms in $y_i'$ of degree $r_i+e-1$ with terms in $y_{i-1}$ of degree $0$ (cf.
[\[eqn: change of variables from BK to etale\]](#eqn: change of variables from BK to etale){reference-type="eqref" reference="eqn: change of variables from BK to etale"}), so it suffices in turn to show that every choice of $y_i'$ having nonzero terms in degrees $0$ and $[r_i,r_i+e-2]$ occurs in the image of $\operatorname{res}_{K_{\infty}}$ (again, together with the additional term in the statement in the case that $\overline{\chi}_1=\overline{\chi}_2$). Recall [@MR3324938 Defn. 2.3.1] that a pseudo-Barsotti--Tate representation of weight $\{r_i\}$ is a 2-dimensional crystalline representation whose labelled Hodge--Tate weights are $\{0,1\}$, except at a chosen set of $f$ embeddings lifting the embeddings $\sigma_i:k\hookrightarrow\overline{{\mathbf F}}_p$, where they are $\{0,r_i\}$. By [@MR3324938 Defn. 4.1.3], these are the representations which have $\sigma_{r-1,0}:=\otimes_{i}\mathop{\mathrm{Sym}}^{r_i-1}k^2\otimes_{k,\sigma_i}\overline{{\mathbf F}}_p$ as a Serre weight. Now consider [@MR3324938 Thm. 5.1.5], taking the $t_i$ there to be zero, the $x_i$ to be $e-1$, and the $s_i$ there to be our $r_i+e-1$ (which are not necessarily equal to our $s_i$ -- we apologize for this temporary notation). Note that with this choice, the Breuil--Kisin modules spanned by our basis $e_i,f'_i$ are precisely the extensions of Breuil--Kisin modules in [@MR3324938 Thm. 5.1.5], for the rank one Breuil--Kisin modules which are the minimal and maximal models of $\overline{\chi}_1,\overline{\chi}_2$ as in the statement of [@MR3324938 Prop. 5.3.4]. So by  [@MR3324938 Prop. 5.3.4, Thm. 5.1.5], if $\psi\in\mathop{\mathrm{Ext}}^1_{G_K}(\overline{\chi}_2,\overline{\chi}_1)$ comes from the reduction of a pseudo-Barsotti--Tate representation of weight $\{r_i\}$, then $\operatorname{res}_{K_{\infty}}(\psi)$ is given by an étale $\varphi$-module as in [\[eqn: the Mi in image of GK\]](#eqn: the Mi in image of GK){reference-type="eqref" reference="eqn: the Mi in image of GK"}.
It therefore suffices to show that these classes  $\operatorname{res}_{K_{\infty}}(\psi)$ span the image of $\operatorname{res}_{K_{\infty}}$. To see this, we consider the reductions of reducible crystalline representations. As in the proof of [@MR3324938 Thm. 5.4.1], we choose crystalline characters $\chi_{1,\min}, \chi_{2,\max}$ which lift $\overline{\chi}_1, \overline{\chi}_2$ respectively. More precisely, these characters are determined (up to unramified twist, which we do not specify) by their Hodge--Tate weights, which (recalling that $t_i=0$ for all $i$) we can and do choose so that $\chi_{2,\max}$ is unramified, and so that any crystalline extension of $\chi_{2,\max}$ by $\chi_{1,\min}$ is pseudo-Barsotti--Tate of weight $\{r_i\}$. The space of crystalline extensions of  $\chi_{2,\max}$ by $\chi_{1,\min}$ is identified with the Galois cohomology group $H^1_{f}(G_K,\chi_{1,\min}\chi_{2,\max}^{-1})$, and as in the proof of [@MR3324938 Thm. 5.4.1], one immediately computes that the dimension of the image of the reduction map $$\label{eqn: redn of H1f}H^1_{f}(G_K,\chi_{1,\min}\chi_{2,\max}^{-1})\to H^1(G_K,\overline{\chi}_1\overline{\chi}_2^{-1})$$ is $[K:{\mathbf Q}_p]$, unless $\overline{\chi}_1=\overline{\chi}_2$ in which case it is $[K:{\mathbf Q}_p]+1$; so by part [\[item: dim image resKinfty\]](#item: dim image resKinfty){reference-type="eqref" reference="item: dim image resKinfty"}, this image has the same dimension as the image of $\operatorname{res}_{K_{\infty}}$. In particular we see that we are done if the restriction of $\operatorname{res}_{K_{\infty}}$ to the image of [\[eqn: redn of H1f\]](#eqn: redn of H1f){reference-type="eqref" reference="eqn: redn of H1f"} is injective. 
If $\overline{\chi}_1\overline{\chi}_2^{-1}\ne\overline{\varepsilon}$ then this is automatic by part [\[resKinfty injective\]](#resKinfty injective){reference-type="eqref" reference="resKinfty injective"}, so we may suppose that $\overline{\chi}_1\overline{\chi}_2^{-1}=\overline{\varepsilon}$. If some $r_i\ne p$, then by [@caraiani2022local Lem. A.4] the image of [\[eqn: redn of H1f\]](#eqn: redn of H1f){reference-type="eqref" reference="eqn: redn of H1f"} is contained in the peu ramifiée subspace, so we again conclude by part [\[resKinfty injective\]](#resKinfty injective){reference-type="eqref" reference="resKinfty injective"}. Finally if $r_i=p$ for all $i$, then as in the proof of [@MR3324938 Thm. 6.1.18], the union of the images of [\[eqn: redn of H1f\]](#eqn: redn of H1f){reference-type="eqref" reference="eqn: redn of H1f"} as $\chi_{1,\min},\chi_{2,\max}$ range over their twists by unramified characters with trivial reduction is all of $H^1(G_K,\overline{\chi}_1\overline{\chi}_2^{-1})$, so we are done. ◻ **Lemma 1**. *Suppose that $\sum_{j=1}^f(s_j-t_j-e)<0$. Then either there exists an $i$ with $\lfloor n_{i+1}\rfloor=-1$ and $r_i\ne p$, or there exists an $i$ with $\lfloor n_{i+1}\rfloor\le -2$.* *Proof.* Summing [\[eqn: ri\]](#eqn: ri){reference-type="eqref" reference="eqn: ri"} over all $j$, we have $$\sum_{j=1}^f(s_j-t_j-e)=\sum_{j=1}^f\left((p-1)\lfloor n_{j+1}\rfloor+(r_j-1)\right).$$If this is negative, there exists an $i$ with $$(p-1)\lfloor n_{i+1}\rfloor+(r_i-1)< 0.$$ Since $r_i\ge 1$, we must have $\lfloor n_{i+1}\rfloor <0$. If $\lfloor n_{i+1}\rfloor=-1$ then we have $1-p+r_i-1<0$, so $r_i<p$, as required. ◻ **Lemma 1**. *Suppose that $\sum_{j=1}^f(s_j-t_j-e)<0$. 
Then the restriction map $$\mathop{\mathrm{Ext}}^1_{\operatorname{BrMod}^{\operatorname{cr}}}({\mathcal M}(\underline{s};a),{\mathcal M}(\underline{t};b))\xrightarrow{\,\operatorname{res}_K\,} \mathop{\mathrm{Ext}}^1_{G_K}(\overline{\chi}_2,\overline{\chi}_1)$$ is not surjective.* *Proof.* It suffices to show that $\mathop{\mathrm{im}}(\operatorname{res}_{K_{\infty}}\circ\operatorname{res}_K)$ is a proper subspace of $\mathop{\mathrm{im}}(\operatorname{res}_{K_{\infty}})$. Viewing classes in $\mathop{\mathrm{Ext}}^1_{G_{K_\infty}}(\overline{\chi}_2,\overline{\chi}_1)$ as étale $\varphi$-modules, it therefore suffices to exhibit an étale $\varphi$-module as in the statement of Lemma [Lemma 1](#lem: classes that show up for G_K representations){reference-type="ref" reference="lem: classes that show up for G_K representations"} (3) which is not in the image of $\operatorname{res}_{K_{\infty}}\circ\operatorname{res}_K$. By Lemma [Lemma 1](#lem: we also win here){reference-type="ref" reference="lem: we also win here"}, we may assume that for some $i$ we either have $\lfloor n_{i+1}\rfloor=-1$ and $r_i\ne p$, or we have $\lfloor n_{i+1}\rfloor\le -2$. If $r_i\ne p$ then we set $x_i=s_i+\lfloor n_{i+1}\rfloor-e+1$, while if $r_i=p$ (so that $\lfloor n_{i+1}\rfloor\le -2$) we set $x_i=s_i+\lfloor n_{i+1}\rfloor-e+2$. It follows from [\[eqn: ri\]](#eqn: ri){reference-type="eqref" reference="eqn: ri"} that $s_i+\lfloor n_{i+1}\rfloor-e+1\equiv t_i+r_i\pmod{p}$, so we have $$\label{eqn: xi not ti mod p}x_i\not\equiv t_i\pmod{p}.$$ We claim that we also have $$\label{eqn: xi inequality for image restriction} s_i+\lfloor n_{i+1}\rfloor-e+1\le x_i\le s_i+\lfloor n_{i+1}\rfloor.$$Indeed by definition we have $x_i=s_i+\lfloor n_{i+1}\rfloor-e+1$ or $x_i=s_i+\lfloor n_{i+1}\rfloor-e+2$, so the lower bound is immediate, and the upper bound is also automatic unless $e=1$. If $e=1$, we need to rule out the possibility that we are in the case $x_i=s_i+\lfloor n_{i+1}\rfloor-e+2$. 
In this case we assumed that $\lfloor n_{i+1}\rfloor\le -2$, so in particular $n_{i+1}<-1$; but since we have $s_j-t_j-e\ge -e(p-1)$ for all $j$, it follows from [\[eqn: definition of ni\]](#eqn: definition of ni){reference-type="eqref" reference="eqn: definition of ni"} that we have $n_j\ge -e$ for all $j$, and in particular if  $e=1$ we have $n_{i+1}\ge -1$, as required. We also have $x_i\le s_i-e$ (because if $\lfloor n_{i+1}\rfloor=-1$ then $x_i=s_i+\lfloor n_{i+1}\rfloor-e+1=s_i-e$, and otherwise $\lfloor n_{i+1}\rfloor\le -2$ and we have $x_i=s_i+\lfloor n_{i+1}\rfloor-e+2\le s_i-e$), i.e.  $$\label{eqn: xi inequality} x_i < s_i-e+1 = s_i-e+\max(n_{i+1},1).$$ (We have written the inequality in this form so that we can apply Lemma [Lemma 1](#lem: Breuil module extensions){reference-type="ref" reference="lem: Breuil module extensions"}.) Set $y'_i=u^{x_i}$ and $y'_j=0$ for all $j\ne i$. By [\[eqn: xi inequality for image restriction\]](#eqn: xi inequality for image restriction){reference-type="eqref" reference="eqn: xi inequality for image restriction"} and Lemma [\[lem: classes that show up for G_K representations\]](#lem: classes that show up for G_K representations){reference-type="ref" reference="lem: classes that show up for G_K representations"}, it suffices to show that the étale $\varphi$-module $M$ arising from taking the $y_j$ in [\[eqn: the Mi in image of GK\]](#eqn: the Mi in image of GK){reference-type="eqref" reference="eqn: the Mi in image of GK"} to be our $y'_j$ is not of the form $\mathfrak{M}[1/u]$ for any Breuil--Kisin module $\mathfrak{M}$ satisfying the constraints of Lemma [Lemma 1](#lem: Breuil module extensions){reference-type="ref" reference="lem: Breuil module extensions"}. Suppose on the contrary that $\mathfrak{M}$ as in [\[eqn: basis for extension of BK modules\]](#eqn: basis for extension of BK modules){reference-type="eqref" reference="eqn: basis for extension of BK modules"} has $\mathfrak{M}[1/u]\cong M$. 
This means that there is a change of variables $e'_j=e_j$, $f_j'=f_j+\lambda_j e_j$ with $\lambda_j\in \overline{{\mathbf F}}_p((u))$ having the property that for all $j$, we have $$\varphi(f'_{j-1})= (a)_j u^{s_j} f'_j + y_j e_j.$$ Equivalently, for each $j$ we must have $$\label{eqn: change of variables from BK to etale}y_j=y'_j+ (b)_j u^{t_j}\varphi(\lambda_{j-1}) -(a)_j u^{s_j}\lambda_j.$$ Recall that we chose $y'_i=u^{x_i}$, where $x_i$ satisfies [\[eqn: xi not ti mod p\]](#eqn: xi not ti mod p){reference-type="eqref" reference="eqn: xi not ti mod p"} and [\[eqn: xi inequality\]](#eqn: xi inequality){reference-type="eqref" reference="eqn: xi inequality"}, so the coefficient of  $u^{x_i}$ in $y_i$ is zero by Lemma [\[lem: Breuil module extensions\]](#lem: Breuil module extensions){reference-type="ref" reference="lem: Breuil module extensions"}. The coefficient of $u^{x_i}$ in $u^{t_i}\varphi(\lambda_{i-1})$ is also zero (again by [\[eqn: xi not ti mod p\]](#eqn: xi not ti mod p){reference-type="eqref" reference="eqn: xi not ti mod p"}), so it follows from [\[eqn: change of variables from BK to etale\]](#eqn: change of variables from BK to etale){reference-type="eqref" reference="eqn: change of variables from BK to etale"} with $j=i$ that the coefficient of $u^{x_i}$ in $u^{s_i}\lambda_i$ is nonzero. Thus $\lambda_i$ has a term of degree $x_i-s_i$. By [\[eqn: xi inequality\]](#eqn: xi inequality){reference-type="eqref" reference="eqn: xi inequality"} we have $x_i-s_i\le -e$. We claim that this implies that every $\lambda_j$ has a term of degree at most $-e$. 
To see this we rewrite  [\[eqn: change of variables from BK to etale\]](#eqn: change of variables from BK to etale){reference-type="eqref" reference="eqn: change of variables from BK to etale"} for $j$ replaced by $j+1$ in the form $$\label{eqn: rewritten for lambda valuation}(a)_{j+1} u^{s_{j+1}}\lambda_{j+1}=(y'_{j+1}-y_{j+1})+ (b)_{j+1} u^{t_{j+1}}\varphi(\lambda_{j}).$$ If $j+1\ne i$ then $y'_{j+1}-y_{j+1}\in\overline{{\mathbf F}}_p[[u]]$, so if $\lambda_{j}$ has a term of degree at most $-e$, then $u^{t_{j+1}}\varphi(\lambda_{j})$ has a term of degree at most $t_{j+1}-ep$, which must cancel with a term in $u^{s_{j+1}}\lambda_{j+1}$. Thus $\lambda_{j+1}$ has a term of degree at most $t_{j+1}-ep-s_{j+1}\le e(p-2)-ep=-2e<-e$, so the claim follows from induction (beginning with the case $j=i$). Now let $v_j\le -e$ denote the $u$-adic valuation of $\lambda_j$. Then $u^{t_{j+1}}\varphi(\lambda_{j})$ has a nonzero term of degree $t_{j+1}+pv_j$, which again must cancel with a term in $u^{s_{j+1}}\lambda_{j+1}$. (Indeed, we have $t_{j+1}+pv_j\le e(p-2)-ep<0$, and the only possible term in any $y'_{j+1}-y_{j+1}$ of negative degree is the term $u^{x_i}$ in $y'_i$, which cannot cancel with a term of degree $t_{i}+pv_{i-1}$ by [\[eqn: xi not ti mod p\]](#eqn: xi not ti mod p){reference-type="eqref" reference="eqn: xi not ti mod p"}). We therefore have $$v_{j+1}\le t_{j+1}-s_{j+1}+pv_j\le t_{j+1}+pv_j\le e(p-2)+pv_j,$$ i.e.  $$\label{eqn: lower bounds on lambda from phi relation}pv_j-v_{j+1}\ge -e(p-2).$$ Summing these inequalities multiplied by appropriate powers of $p$, we have $$\sum_{j=1}^fp^{f-j}(pv_{j+i-1}-v_{j+i})\ge -e(p-2)(p^f-1)/(p-1),$$ so that $v_j\ge -e(p-2)/(p-1)>-e$ for each $j$. Since we already saw that $v_j\le -e$ for all $j$, we have a contradiction, and we are done. ◻ **Definition 1**. Let $\overline{\chi}_1,\overline{\chi}_2:G_K\to\overline{{\mathbf F}}_p^{\times}$ be two characters.
We say that an element of $\mathop{\mathrm{Ext}}^1_{G_K}(\overline{\chi}_2,\overline{\chi}_1)$ is generic if it is not in the image of the restriction map $$\mathop{\mathrm{Ext}}^1_{\operatorname{BrMod}^{\operatorname{cr}}}({\mathcal M}(\underline{s};a),{\mathcal M}(\underline{t};b))\xrightarrow{\,\operatorname{res}_K\,} \mathop{\mathrm{Ext}}^1_{G_K}(\overline{\chi}_2,\overline{\chi}_1)$$ for any rank 1 Breuil modules ${\mathcal M}(\underline{s};a)$ and ${\mathcal M}(\underline{t};b)$ with $T({\mathcal M}(\underline{t};b))=\overline{\chi}_1$, $T({\mathcal M}(\underline{s};a))=\overline{\chi}_2$ and $\sum_{j=1}^f(s_j-t_j-e)<0$. *Remark 1*. Note that by Lemma [Lemma 1](#lem:criteria for missing extensions){reference-type="ref" reference="lem:criteria for missing extensions"}, the generic extensions in $\mathop{\mathrm{Ext}}^1_{G_K}(\overline{\chi}_2,\overline{\chi}_1)$ are the complement of the union of finitely many proper subspaces. *Remark 1*. Definition [\[defn: generic extension class\]](#defn: generic extension class){reference-type="ref" reference="defn: generic extension class"} may seem a little ad hoc, but it is closely related to the condition of being a generic $\overline{{\mathbf F}}_p$-point on an irreducible component of the 2-dimensional Emerton--Gee stack (which we recall in Section [2.4](#subsec: special fibre EG stacks){reference-type="ref" reference="subsec: special fibre EG stacks"}). To make this precise, we would need to work simultaneously with arbitrary unramified twists of the characters $\overline{\chi}_1,\overline{\chi}_2$. While it is clear that the arguments above are uniform across such unramified twists, and we could presumably formulate and prove our results in the context of stacks of Breuil modules (and Breuil--Kisin modules), there does not seem to be any benefit in doing so.
Indeed, while working with $\overline{{\mathbf F}}_p$-points occasionally leads to slightly clumsy formulations, we view it as a feature of the structural results proved in [@emertongeepicture] (see e.g. Theorem [\[thm:Xdred is algebraic\]](#thm:Xdred is algebraic){reference-type="ref" reference="thm:Xdred is algebraic"}) that we can prove statements about families of Galois representations (e.g. lifting rings) by only thinking about representations valued in $\overline{{\mathbf F}}_p$. *Remark 1*. While it may be possible to use other integral $p$-adic Hodge theories (e.g. $(\varphi,\widehat{G})$-modules) to prove a version of Lemma [Lemma 1](#lem:criteria for missing extensions){reference-type="ref" reference="lem:criteria for missing extensions"} which could apply to the reductions of crystalline representations in a greater range of Hodge--Tate weights than $[0,p-2]$, it is unlikely that it can be significantly improved. Indeed already for $K={\mathbf Q}_p$, there are irreducible 2-dimensional crystalline representations of $G_{{\mathbf Q}_p}$ with Hodge--Tate weights $0,p+2$ whose corresponding mod $p$ Breuil--Kisin modules are of the form $\begin{pmatrix}bu^p&x\\0&au^2\end{pmatrix}$ where $a,b\in\overline{{\mathbf F}}_p^{\times}$ and $x\in\overline{{\mathbf F}}_p$ are arbitrary, and consequently give all extensions of the corresponding characters of $G_{{\mathbf Q}_p}$ when $a\ne b$. (In addition, it is not clear to us whether the analogue of Lemma [Lemma 1](#lem: Breuil module extensions){reference-type="ref" reference="lem: Breuil module extensions"} holds for $(\varphi,\widehat{G})$-modules, even in height $[0,p-2]$, although we have not seriously pursued this question.) ## Generic weight 0 crystalline representations {#subsec: generic weight 0} In this subsection and the next, in order to be compatible with the notation of [@emertongeepicture], we work with $d$-dimensional rather than $n$-dimensional representations. **Definition 1**. 
We say that a representation $\overline{\rho}:G_K\to\mathop{\mathrm{GL}}_d({\mathbf F})$ is generic if it has the form $$\overline{\rho}\cong \begin{pmatrix} \overline{\chi}_d &*&\dots &*\\ 0& \overline{\chi}_{d-1}&\dots &*\\ \vdots&& \ddots &\vdots\\ 0&\dots&0& \overline{\chi}_1\\ \end{pmatrix}$$ and for $i=1,\ldots,d-1$, the off diagonal extension class in $\mathop{\mathrm{Ext}}^1_{G_K}(\overline{\chi}_i,\overline{\chi}_{i+1})$ is generic in the sense of Definition [Definition 1](#defn: generic extension class){reference-type="ref" reference="defn: generic extension class"}. **Theorem 1**. *Suppose $p>d$ and let $\rho:G_K\to\mathop{\mathrm{GL}}_d({\mathcal O})$ be a weight 0 crystalline representation such that $\overline{\rho}$ is generic in the sense of Definition [Definition 1](#defn: generic rhobar){reference-type="ref" reference="defn: generic rhobar"}. Then $$\rho\simeq\begin{pmatrix} \mathrm{ur}_{\lambda_d} &*&\dots &*\\ 0& \mathrm{ur}_{\lambda_{d-1}}\varepsilon^{-1}&\dots &*\\ \vdots&& \ddots &\vdots\\ 0&\dots&0& \mathrm{ur}_{\lambda_1}\varepsilon^{1-d}\\ \end{pmatrix}$$ for some $\lambda_1,\ldots,\lambda_d\in{\mathcal O}^\times$.* *Proof.* The proof will be by induction on $d$. The base case $d=1$ is trivial. For the inductive step, we claim that $\rho$ fits in an exact sequence $$0\to\rho'\to\rho\to\mathrm{ur}_{\lambda_1}\varepsilon^{1-d}\to 0.$$ Admitting this for the moment, $\rho'$ is a $d-1$ dimensional crystalline representation of weight 0, and $\overline{\rho}'$ is generic (since $\overline{\rho}$ has a unique $d-1$ dimensional subrepresentation, which is generic). We conclude by induction on $d$. We now prove the key claim above. 
As the Hodge--Tate weights of $\rho$ are contained in the interval $[0,d-1]\subseteq[0,p-2]$, by Theorem [\[thm: reductions of crystalline reps are crystalline Breuil modules\]](#thm: reductions of crystalline reps are crystalline Breuil modules){reference-type="ref" reference="thm: reductions of crystalline reps are crystalline Breuil modules"} there is a crystalline Breuil module ${\mathcal M}$ of rank $d$ with $\overline{\rho}\cong T({\mathcal M})$, whose underlying Breuil--Kisin module has height at most $(d-1)$. By Theorem [\[thm: properties of crystalline Breuil modules\]](#thm: properties of crystalline Breuil modules){reference-type="ref" reference="thm: properties of crystalline Breuil modules"} (2), the unique maximal filtration on $\overline{\rho}$ determines a filtration $0={\mathcal M}^0\subset{\mathcal M}^1\subset\dots\subset{\mathcal M}^d={\mathcal M}$ by crystalline Breuil submodules. Write ${\mathcal M}^i/{\mathcal M}^{i-1}\simeq {\mathcal M}(\underline{s(i)};a_i)$ in the notation of Lemma [\[lem: classification of rank one Breuil modules\]](#lem: classification of rank one Breuil modules){reference-type="ref" reference="lem: classification of rank one Breuil modules"}. It follows from Lemma [Lemma 1](#lem:criteria for missing extensions){reference-type="ref" reference="lem:criteria for missing extensions"} and the definition of genericity that for each $1\leq i\leq d-1$ we have $$\sum_{j=1}^f(s(i+1)_j-s(i)_j-e)\ge 0.$$ Summing these inequalities over $i$, we obtain $$\sum_{j=1}^f(s(d)_j-s(1)_j)\ge ef(d-1).$$ Since the underlying Breuil--Kisin module of ${\mathcal M}$ has height at most $(d-1)$, we have $s(i)_j\leq e(d-1)$ for all $i,j$, and hence $s(d)_j-s(1)_j\leq e(d-1)$ for all $j$. Summing this bound over the $f$ values of $j$ gives the reverse of the previous inequality, so $s(d)_j-s(1)_j=e(d-1)$ for all $j$, and hence $s(1)_j=0$ and $s(d)_j=e(d-1)$ for all $j$.
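For clarity, the summation step in the proof above is a telescoping argument; written out (this restates the displayed inequalities, with no new input):

```latex
% Summing the genericity inequalities over i = 1,...,d-1: the middle
% terms telescope, leaving only the boundary terms s(d)_j and s(1)_j.
\sum_{i=1}^{d-1}\sum_{j=1}^{f}\bigl(s(i+1)_j-s(i)_j-e\bigr)
  \;=\;\sum_{j=1}^{f}\bigl(s(d)_j-s(1)_j\bigr)\;-\;ef(d-1)\;\ge\;0.
```

Since each difference $s(d)_j-s(1)_j$ is at most $e(d-1)$, the sum over the $f$ indices $j$ can only reach $ef(d-1)$ if every term attains this bound, which is what forces $s(1)_j=0$ and $s(d)_j=e(d-1)$ for all $j$.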
Now let $\mathfrak{M}/\mathfrak{S}_{{\mathcal O}}$ be the Breuil--Kisin module associated to $\rho$, and $\overline{\mathfrak{M}}=\mathfrak{M}\otimes_{{\mathcal O}}{\mathbf F}$. This is the Breuil--Kisin module underlying $\mathcal{M}$ by Theorem [\[thm: reductions of crystalline reps are crystalline Breuil modules\]](#thm: reductions of crystalline reps are crystalline Breuil modules){reference-type="ref" reference="thm: reductions of crystalline reps are crystalline Breuil modules"}. Since $s(d)_j = e(d-1)$ for all $j$, we have shown that $\overline{\mathfrak{M}}$ has a rank $1$ quotient $\overline{\mathfrak{M}}\to\mathfrak{S}_{{\mathbf F}}\cdot \overline{v}$ where $\varphi(\overline{v})=\overline{\lambda} u^{e(d-1)}\overline{v}$ for some $\overline{\lambda}\in (k\otimes{\mathbf F})^\times$. It follows from [@kis04 Prop. 1.2.11] (or rather its obvious generalization from height $1$ to height $(d-1)$ Breuil--Kisin modules) that this lifts to a quotient $\mathfrak{M}\to\mathfrak{S}_{{\mathcal O}}\cdot v$ where $\varphi(v)=\lambda E(u)^{d-1}v$ for some $\lambda\in (W(k)\otimes{\mathcal O})^\times$. Indeed, using height $(d-1)$ duality [@Liu07 §3.1], we need to lift a rank one 'multiplicative' submodule of $\overline{\mathfrak{M}}^*$ to ${\mathfrak{M}}^*$, where multiplicative means that the linearization of $\varphi$ is an isomorphism. As in [@kis04 Prop. 1.2.11], we have a maximal multiplicative submodule ${\mathfrak{M}}^{*,m}$ of ${\mathfrak{M}}^*$ which lifts the maximal multiplicative submodule of $\overline{\mathfrak{M}}^*$ and therefore has rank at least one. Since $\rho$ is weight $0$ crystalline, its maximal unramified subrepresentation has dimension at most one. It follows that ${\mathfrak{M}}^{*,m}$ has rank one and is the desired lift. Finally, it follows from the full faithfulness of the functor from lattices in crystalline representations to Breuil--Kisin modules (see [@MR2263197 Prop. 1.3.15], or for the precise statement we are using here [@kisin-abelian Thm.
1.2.1]) that there is a nonzero map $\rho\to\mathrm{ur}_{\lambda_1}\varepsilon^{1-d}$. ◻ **Corollary 1**. *Let $\overline{\rho}:G_K\to\mathop{\mathrm{GL}}_d({\mathbf F})$ be a generic representation. Suppose that $\overline{\rho}$ has a crystalline lift of weight 0. Then $\overline{\rho}$ has the form $$\overline{\rho}\cong\begin{pmatrix} \mathrm{ur}_{\overline{\lambda}_d} &*&\dots &*\\ 0& \mathrm{ur}_{\overline{\lambda}_{d-1}}\overline{\varepsilon}^{-1}&\dots &*\\ \vdots&& \ddots &\vdots\\ 0&\dots&0& \mathrm{ur}_{\overline{\lambda}_1}\overline{\varepsilon}^{1-d}\\ \end{pmatrix}$$ and moreover the off-diagonal extensions are peu ramifiée.* *Proof.* The first statement is immediate from Theorem [Theorem 1](#thm: rho generic ordinary){reference-type="ref" reference="thm: rho generic ordinary"}, while the claim about the extensions follows from the fact that the reduction of a crystalline representation $$\begin{pmatrix}\mathrm{ur}_{\lambda_2}&*\\0&\mathrm{ur}_{\lambda_1}\varepsilon^{-1}\end{pmatrix}$$ is peu ramifiée. ◻ ## Recollections on Emerton--Gee stacks {#subsec: special fibre EG stacks} We now recall some of the main results of [@emertongeepicture], and prove a slight extension of them. We use the notation of [@emertongeepicture], and in particular we continue to work with $d$-dimensional rather than $n$-dimensional representations. As above, we let $E/{\mathbf Q}_p$ be a finite extension containing the Galois closure of $K$, with ring of integers ${\mathcal O}$, uniformizer $\varpi$, and residue field ${\mathcal O}/\varpi={\mathbf F}$. The stack ${\mathcal X}_d$ over $\mathop{\mathrm{Spf}}{\mathcal O}$ is defined in [@emertongeepicture Defn. 3.2.1]. 
It is a stack of $(\varphi,\Gamma)$-modules, but if ${\mathbf F}'/{\mathbf F}$ is a finite extension (or if ${\mathbf F}'=\overline{{\mathbf F}}_p$), then the groupoid of points $x\in{\mathcal X}_d({\mathbf F}')$ is canonically equivalent to the groupoid of Galois representations $\overline{\rho}: G_K \to \mathop{\mathrm{GL}}_d({\mathbf F}')$ [@emertongeepicture §3.6.1], and we use this identification without comment below. The stack ${\mathcal X}_d$ is a Noetherian formal algebraic stack [@emertongeepicture Cor. 5.5.18], and it admits closed substacks cut out by (potentially) crystalline or semistable conditions. In particular there is a closed substack ${\mathcal X}_d^{\mathrm{crys},0}$ of ${\mathcal X}_d$ corresponding to crystalline representations of weight $0$, which has the following properties. **Proposition 1**. *[\[prop: crystalline weight 0 substack\]]{#prop: crystalline weight 0 substack label="prop: crystalline weight 0 substack"}* 1. *${\mathcal X}_d^{\mathrm{crys},0}$ is a $p$-adic formal algebraic stack, which is flat over $\mathop{\mathrm{Spf}}{\mathcal O}$ and of finite type. In particular, the special fibre $\overline{{\mathcal X}}_{d}^{\mathrm{crys},0}:={\mathcal X}_{d}^{\mathrm{crys},0}\times_{\mathop{\mathrm{Spf}}{\mathcal O}}\mathop{\mathrm{Spec}} {\mathbf F}$ is an algebraic stack.* 2. *If $A^\circ$ is a finite flat ${\mathcal O}$-algebra, then ${\mathcal X}_{d}^{\mathrm{crys},0}(A^\circ)$  is the subgroupoid of ${\mathcal X}_{d}(A^\circ)$ consisting of $G_K$-representations which after inverting $p$ are crystalline of weight $0$.* 3. *The special fibre $\overline{{\mathcal X}}_{d}^{\mathrm{crys},0}:={\mathcal X}_{d}^{\mathrm{crys},0}\times_{\mathop{\mathrm{Spf}}{\mathcal O}}\mathop{\mathrm{Spec}}{\mathbf F}$ is equidimensional of dimension $[K:{\mathbf Q}_p] d(d-1)/2$.* 4. 
*For any finite extension ${\mathbf F}'$ of ${\mathbf F}$ and any point $x:\mathop{\mathrm{Spec}}{\mathbf F}'\to\overline{{\mathcal X}}_{d}^{\mathrm{crys},0}$, there is a versal morphism $\mathop{\mathrm{Spf}} R^{\mathrm{crys},0,{\mathcal O}'}_{\overline{\rho}}\to{\mathcal X}_{d}^{\mathrm{crys},0}$ at $x$, where $\overline{\rho}: G_K \to \mathop{\mathrm{GL}}_d({\mathbf F}')$ is the representation corresponding to $x$, ${\mathcal O}':=W({\mathbf F}')\otimes_{W({\mathbf F})}{\mathcal O}$, and $R^{\mathrm{crys},0,{\mathcal O}'}_{\overline{\rho}}$ is the weight $0$ crystalline lifting ring.*

*Proof.* We define ${\mathcal X}_d^{\mathrm{crys},0}$ to be the stack ${\mathcal X}_{K,d}^{\mathrm{crys},\underline{\lambda},\tau}$ of [@emertongeepicture Defn. 4.8.8], taking $\underline{\lambda}$ to be given by $\lambda_{\sigma,i}=d-i$ for all $\sigma,i$, and $\tau$ to be trivial. Then the first two claims are [@emertongeepicture Thm. 4.8.12], the third is [@emertongeepicture Thm. 4.8.14], and the final claim is [@emertongeepicture Prop. 4.8.10]. ◻

We now recall some definitions from [@emertongeepicture §5.5]. By a *Serre weight* $\underline{k}$ we mean a tuple of integers $\{k_{\overline{\sigma},i}\}_{\overline{\sigma}:k\hookrightarrow\overline{{\mathbf F}}_p,1\le i\le d}$ with the properties that

- $p-1\ge k_{\overline{\sigma},i}-k_{\overline{\sigma},i+1}\ge 0$ for each $1\le i\le d-1$, and
- $p-1\ge k_{\overline{\sigma},d}\ge 0$, and not every $k_{\overline{\sigma},d}$ is equal to $p-1$.
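For instance (a direct check against the two conditions just stated, not an extra hypothesis), the zero tuple is a Serre weight for every $K$ and $d$:

```latex
% The tuple with k_{sigma,i} = 0 for all sigma, i satisfies both
% defining conditions of a Serre weight:
\underline{0}:=\{k_{\overline{\sigma},i}=0\}:\qquad
k_{\overline{\sigma},i}-k_{\overline{\sigma},i+1}=0\in[0,p-1]
\quad\text{and}\quad
k_{\overline{\sigma},d}=0\ne p-1.
```

This is the weight denoted $\underline{0}$ in the generic reducedness results below.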
For each $\overline{\sigma}:k\hookrightarrow{\mathbf F}$, we define the fundamental character $\omega_{\overline{\sigma}}$ to be the composite $$\omega_{\overline{\sigma}}:I_K \xrightarrow{} W_K^{\operatorname{ab}} \xrightarrow{\,\operatorname{Art}_K^{-1}\,} {\mathcal O}_K^{\times}\xrightarrow{} k^{\times} \xrightarrow{\,\overline{\sigma}\,} {\mathbf F}^{\times}.$$ As in [@emertongeepicture §5.5], for each Serre weight $\underline{k}$ we choose characters $\omega_{\underline{k},i}:G_K\to{\mathbf F}^\times$ ($i=1,\dots,d$) with $$\omega_{\underline{k},i}|_{I_K}=\prod_{\overline{\sigma}:k\hookrightarrow{\mathbf F}}\omega_{\overline{\sigma}}^{-k_{\overline{\sigma},i}},$$ in such a way that if $k_{\overline{\sigma},i}-k_{\overline{\sigma},i+1}=p-1$ for all $\overline{\sigma}$, then $\omega_{\underline{k},i}=\omega_{\underline{k},i+1}$. (In [@emertongeepicture §5.5] it was erroneously claimed that we could impose further constraints on the $\omega_{\underline{k},i}$, but as explained in [@EGaddenda], these properties are all that we require.) For any $\nu\in\overline{{\mathbf F}}_p$ we write $\mathrm{ur}_{\nu}:G_K\to\overline{{\mathbf F}}_p^{\times}$ for the unramified character taking a geometric Frobenius to $\nu$. We say that a representation $\overline{\rho}:G_K\to\mathop{\mathrm{GL}}_d(\overline{{\mathbf F}}_p)$ is *maximally nonsplit of niveau $1$* if it has a unique filtration by $G_K$-stable $\overline{{\mathbf F}}_p$-subspaces such that all of the graded pieces are one-dimensional representations of $G_K$.
We assign a unique Serre weight $\underline{k}$ to each such $\overline{\rho}$ in the following way: we say that $\overline{\rho}$ is of weight $\underline{k}$ if and only if we can write $$\label{eqn:weight k version of maximally nonsplit structure}\overline{\rho}\cong \begin{pmatrix} \mathrm{ur}_{\nu_d}\omega_{\underline{k},d} &*&\dots &*\\ 0& \mathrm{ur}_{\nu_{d-1}}\overline{\varepsilon}^{-1}\omega_{\underline{k},d-1}&\dots &*\\ \vdots&& \ddots &\vdots\\ 0&\dots&0& \mathrm{ur}_{\nu_1}\overline{\varepsilon}^{1-d}\omega_{\underline{k},1}\\ \end{pmatrix};$$ this uniquely determines $\underline{k}$, except that if $\omega_{\underline{k},i}=\omega_{\underline{k},i+1}$ then we need to say whether $k_{\overline{\sigma},i}-k_{\overline{\sigma},i+1}=p-1$ for all $\overline{\sigma}$ or $k_{\overline{\sigma},i}-k_{\overline{\sigma},i+1}=0$ for all $\overline{\sigma}$. We distinguish these possibilities as follows: if $\omega_{\underline{k},i}=\omega_{\underline{k},i+1}$, then we set $k_{\overline{\sigma},d-i}-k_{\overline{\sigma},d+1-i}=p-1$ for all $\overline{\sigma}$ if and only if $\nu_i=\nu_{i+1}$ and the element of $$\mathop{\mathrm{Ext}}^1_{G_K}(\mathrm{ur}_{\nu_i}\overline{\varepsilon}^{i-d}\omega_{\underline{k},i}, \mathrm{ur}_{\nu_{i+1}}\overline{\varepsilon}^{i+1-d}\omega_{\underline{k},i+1})=H^1(G_K,\overline{\varepsilon})$$ determined by $\overline{\rho}$ is très ramifiée. Let $({\mathbf G}_m)^d_{\underline{k}}$ denote the closed subgroup scheme of $({\mathbf G}_m)^d$ parameterizing tuples $(x_1,\dots,x_d)$ for which $x_i=x_{i+1}$ whenever $k_{\overline{\sigma},i}-k_{\overline{\sigma},i+1}=p-1$ for all $\overline{\sigma}$.
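Unwinding this definition in the two extreme cases (both immediate from the displayed condition on the $x_i$): the zero weight imposes no equations, while a weight all of whose consecutive differences equal $p-1$ cuts out the diagonal torus:

```latex
({\mathbf G}_m)^d_{\underline{0}}=({\mathbf G}_m)^d,
\qquad
k_{\overline{\sigma},i}-k_{\overline{\sigma},i+1}=p-1
\ \text{for all}\ \overline{\sigma},i
\;\Longrightarrow\;
({\mathbf G}_m)^d_{\underline{k}}=\{x_1=\dots=x_d\}\cong{\mathbf G}_m.
```

The first equality is the one exploited in the proof of generic reducedness below.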
By the definition that we just made, if $\overline{\rho}$ is maximally nonsplit of niveau $1$ and weight $\underline{k}$, then the tuple $(\nu_1,\dots,\nu_d)$ is an $\overline{{\mathbf F}}_p$-point of $({\mathbf G}_m)^d_{\underline{k}}$ (where the $\nu_i$ are as in [\[eqn:weight k version of maximally nonsplit structure\]](#eqn:weight k version of maximally nonsplit structure){reference-type="eqref" reference="eqn:weight k version of maximally nonsplit structure"}). We have the following slight variant on [@emertongeepicture Thm. 5.5.12]. **Theorem 1**. *Suppose that $p>2$. [\[thm:Xdred is algebraic\]]{#thm:Xdred is algebraic label="thm:Xdred is algebraic"}* 1. *The Ind-algebraic stack ${\mathcal X}_{d,\operatorname{red}}$ is an algebraic stack, of finite presentation over ${\mathbf F}$.* 2. *${\mathcal X}_{d,\operatorname{red}}$ is equidimensional of dimension $[K:{\mathbf Q}_p] d(d-1)/2$.* 3. *The irreducible components of ${\mathcal X}_{d,\operatorname{red}}$ are indexed by the Serre weights $\underline{k}$. More precisely, for each $\underline{k}$ there is an irreducible component ${\mathcal X}_{d,\operatorname{red}}^{\underline{k}}$ containing a dense open substack ${\mathcal U}^{\underline{k}}$, all of whose $\overline{{\mathbf F}}_p$-points are maximally nonsplit of niveau one and weight $\underline{k}$; and the ${\mathcal X}_{d,\operatorname{red}}^{\underline{k}}$ exhaust the irreducible components of ${\mathcal X}_{d,\operatorname{red}}$.* 4.
*[\[item: we can get generic Ext classes in rhobar\]]{#item: we can get generic Ext classes in rhobar label="item: we can get generic Ext classes in rhobar"} There is an open subscheme $T$ of $({\mathbf G}_m)^d_{\underline{k}}$ such that for all $(t_1,\dots,t_d)\in T(\overline{{\mathbf F}}_p)$, there is an $\overline{{\mathbf F}}_p$-point of ${\mathcal U}^{\underline{k}}$ corresponding to a representation [\[eqn:weight k version of maximally nonsplit structure\]](#eqn:weight k version of maximally nonsplit structure){reference-type="eqref" reference="eqn:weight k version of maximally nonsplit structure"} with $\nu_i=t_i$ for all $i$, and which is generic in the sense of Definition [Definition 1](#defn: generic rhobar){reference-type="ref" reference="defn: generic rhobar"}.* *Proof.* Everything except for part [\[item: we can get generic Ext classes in rhobar\]](#item: we can get generic Ext classes in rhobar){reference-type="eqref" reference="item: we can get generic Ext classes in rhobar"} is part of [@emertongeepicture Thm. 5.5.12, Thm. 6.5.1]. (This makes no use of the hypothesis that $p>2$.) Part [\[item: we can get generic Ext classes in rhobar\]](#item: we can get generic Ext classes in rhobar){reference-type="eqref" reference="item: we can get generic Ext classes in rhobar"} follows from the version of [@emertongeepicture Thm. 5.5.12] proved in [@EGaddenda], as we explain below. We begin by taking $T$ to be an open subscheme contained in the image of the eigenvalue morphism ${\mathcal U}^{\underline{k}} \to ({\mathbf G}_m)^d_{\underline{k}}$, and then further shrink it so that for any $m<n$ and $(t_1,\dots,t_d)\in T(\overline{{\mathbf F}}_p)$ either:

- we have $k_{\overline{\sigma},i}-k_{\overline{\sigma},i+1}=p-1$ for all $\overline{\sigma}$ and all $m\le i<n$, or
- we have $(\mathrm{ur}_{t_n}\overline{\varepsilon}^{n-d}\omega_{\underline{k},n})/(\mathrm{ur}_{t_m}\overline{\varepsilon}^{m-d}\omega_{\underline{k},m})\not=\overline{\varepsilon}$.
We then fix $(t_1,\dots,t_d)\in T(\overline{{\mathbf F}}_p)$, and regard each $\mathop{\mathrm{Ext}}^1_{G_K}(\mathrm{ur}_{t_i}\overline{\varepsilon}^{i-d}\omega_{\underline{k},i}, \mathrm{ur}_{t_{i+1}}\overline{\varepsilon}^{i+1-d}\omega_{\underline{k},i+1})$ as an affine space over $\overline{{\mathbf F}}_p$, and as in [@EGaddenda] we define $$\mathop{\mathrm{Ext}}^1_{(t_1,\dots,t_d),\underline{k}}\subseteq\prod_{i=1}^{d-1}\mathop{\mathrm{Ext}}^1_{G_K}(\mathrm{ur}_{t_i}\overline{\varepsilon}^{i-d}\omega_{\underline{k},i}, \mathrm{ur}_{t_{i+1}}\overline{\varepsilon}^{i+1-d}\omega_{\underline{k},i+1})$$ to be the closed subvariety of tuples of extension classes $(\psi_1,\dots,\psi_{d-1})$ determined by the condition that for each $i=1,\dots,d-2$, the cup product $\psi_i\cup\psi_{i+1}$ vanishes. The version of [@emertongeepicture Thm. 5.5.12] proved in [@EGaddenda] states in particular that for a dense Zariski open subset $U$ of $\mathop{\mathrm{Ext}}^1_{(t_1,\dots,t_d),\underline{k}}$, the corresponding extension classes are realized by some $\overline{\rho}\in{\mathcal U}^{\underline{k}}(\overline{{\mathbf F}}_p)$; so it suffices to show that $U$ contains a point $(\psi_1,\dots,\psi_{d-1})$ with each $\psi_i$ generic. As the locus of generic classes in $\mathop{\mathrm{Ext}}^1_{(t_1,\dots,t_d),\underline{k}}$ is open, and $U$ is dense, it suffices in turn to exhibit a single generic class in $\mathop{\mathrm{Ext}}^1_{(t_1,\dots,t_d),\underline{k}}$. 
To do this, first note that $\psi_i\cup\psi_{i+1}$ is an element of the $\mathop{\mathrm{Ext}}^2$ group $$\mathop{\mathrm{Ext}}^2_{G_K}(\mathrm{ur}_{t_{i}}\overline{\varepsilon}^{i-d}\omega_{\underline{k},i}, \mathrm{ur}_{t_{i+2}}\overline{\varepsilon}^{i+2-d}\omega_{\underline{k},i+2}).$$ This group vanishes unless $(\mathrm{ur}_{t_{i+2}}\overline{\varepsilon}^{i+2-d}\omega_{\underline{k},i+2})/(\mathrm{ur}_{t_i}\overline{\varepsilon}^{i-d}\omega_{\underline{k},i})=\overline{\varepsilon}$, which by our choice of $T$ can only occur when $k_{\overline{\sigma},i}-k_{\overline{\sigma},i+1}=k_{\overline{\sigma},i+1}-k_{\overline{\sigma},i+2}=p-1$ for all $\overline{\sigma}$ and $\overline{\varepsilon}=1$. Thus if $\overline{\varepsilon}\not=1$, we can just choose each $\psi_i$ to be any generic extension class, and the cup product condition is automatically satisfied. We assume from now on that $\overline{\varepsilon}=1$ and fix a maximal interval $m<n$ such that $k_{\overline{\sigma},i}-k_{\overline{\sigma},i+1}=p-1$ for all $\overline{\sigma}$ and all $m\le i< n$. The characters $\mathrm{ur}_{t_i}\overline{\varepsilon}^{i-d}\omega_{\underline{k},i}$ for $m\le i\leq n$ are all equal, and we write $\overline{\chi}$ for their common value. The cup product pairing is a perfect alternating pairing $$\mathop{\mathrm{Ext}}^1_{G_K}(\overline{\chi},\overline{\chi})\times \mathop{\mathrm{Ext}}^1_{G_K}(\overline{\chi},\overline{\chi})\to \mathop{\mathrm{Ext}}^2_{G_K}(\overline{\chi},\overline{\chi})=H^2(G_K,\overline{\varepsilon})\simeq {\mathbf F}.$$ (This is the only point in the argument where we use our assumption that $p>2$.) Accordingly we can pick a single generic class $\psi\in\mathop{\mathrm{Ext}}^1_{G_K}(\overline{\chi},\overline{\chi})$ and take $\psi_i=\psi$ for $m\leq i<n$; since the pairing is alternating, $\psi\cup\psi=0$, so the cup product condition holds for these classes.
◻ ## Generic reducedness {#subsec: generic reducedness} We now compute the underlying cycle of the weight $0$ crystalline stack, and deduce our main result on generic reducedness (Theorem [\[thm: special fibre weight 0 crystalline def ring generically reduced\]](#thm: special fibre weight 0 crystalline def ring generically reduced){reference-type="ref" reference="thm: special fibre weight 0 crystalline def ring generically reduced"}). This underlying cycle is defined as follows. By Theorem [\[thm:Xdred is algebraic\]](#thm:Xdred is algebraic){reference-type="ref" reference="thm:Xdred is algebraic"}, the special fibre $\overline{{\mathcal X}}_{d}^{\mathrm{crys},0}$ is a closed substack of the special fibre $\overline{{\mathcal X}}_d$, and its irreducible components (with the induced reduced substack structure) are therefore closed substacks of the algebraic stack $\overline{{\mathcal X}}_{d,\operatorname{red}}$ (see [@stacks-project [Tag 0DR4](https://stacks.math.columbia.edu/tag/0DR4)] for the theory of irreducible components of algebraic stacks and their multiplicities). Furthermore, $\overline{{\mathcal X}}_{d}^{\mathrm{crys},0}$ and $\overline{{\mathcal X}}_{d,\operatorname{red}}$ are both algebraic stacks over ${\mathbf F}$ which are equidimensional of dimension $[K:{\mathbf Q}_p]d(d-1)/2$. It follows that the irreducible components of  $\overline{{\mathcal X}}_{d}^{\mathrm{crys},0}$ are irreducible components of $\overline{{\mathcal X}}_{d,\operatorname{red}}$, and are therefore of the form $\overline{{\mathcal X}}_{d,\operatorname{red}}^{\underline{k}}$ for some Serre weight $\underline{k}$. For each $\underline{k}$, we write $\mu_{\underline{k}}(\overline{{\mathcal X}}^{\mathrm{crys},0}_d)$ for the multiplicity of $\overline{{\mathcal X}}_{d,\operatorname{red}}^{\underline{k}}$ as a component of $\overline{{\mathcal X}}^{\mathrm{crys},0}_d$. We write $Z_{\mathrm{crys},0}=Z(\overline{{\mathcal X}}^{\mathrm{crys},0}_d)$ for the corresponding cycle, i.e. 
for the formal sum $$\label{eqn: cris HS multiplicity stack}Z_{\mathrm{crys},0}=\sum_{\underline{k}}\mu_{\underline{k}}(\overline{{\mathcal X}}^{\mathrm{crys},0}_d)\cdot\overline{{\mathcal X}}_{d,\operatorname{red}}^{\underline{k}},$$ which we regard as an element of the finitely generated free abelian group ${\mathbf Z}[{\mathcal X}_{d,\operatorname{red}}]$ whose generators are the irreducible components $\overline{{\mathcal X}}_{d,\operatorname{red}}^{\underline{k}}$. **Theorem 1**. *Suppose that $p>d$. Then we have an equality of cycles $$Z_{\mathrm{crys},0}=\overline{{\mathcal X}}_{d,\operatorname{red}}^{\underline{0}},$$ where $\underline{0}$ is the Serre weight $\underline{k}$ with $k_{\sigma,i}=0$ for all $\sigma,i$. In particular, the special fibre $\overline{{\mathcal X}}_{d}^{\mathrm{crys},0}$ is generically reduced.* *Proof.* Suppose (in the notation of Theorem [\[thm:Xdred is algebraic\]](#thm:Xdred is algebraic){reference-type="ref" reference="thm:Xdred is algebraic"} (3)) that $\overline{{\mathcal X}}_{d,\operatorname{red}}^{\underline{k}}$ is an irreducible component of ${\mathcal X}_{d,\operatorname{red}}$ contained in $\overline{{\mathcal X}}_{d}^{\mathrm{crys},0}$. We begin by showing that $\underline{k}=\underline{0}$.
By Theorem [\[thm:Xdred is algebraic\]](#thm:Xdred is algebraic){reference-type="ref" reference="thm:Xdred is algebraic"} ([\[item: we can get generic Ext classes in rhobar\]](#item: we can get generic Ext classes in rhobar){reference-type="ref" reference="item: we can get generic Ext classes in rhobar"}), after possibly enlarging ${\mathbf F}$, we can pick a point $x:\mathop{\mathrm{Spec}}{\mathbf F}\to{\mathcal U}^{\underline{k}}$ so that the corresponding representation $\overline{\rho}:G_K\to\mathop{\mathrm{GL}}_d({\mathbf F})$ is generic in the sense of Definition [Definition 1](#defn: generic rhobar){reference-type="ref" reference="defn: generic rhobar"} (it is also maximally nonsplit of niveau one and weight $\underline{k}$, since it comes from a point of ${\mathcal U}^{\underline{k}}$). Since $x$ is in $\overline{{\mathcal X}}_{d}^{\mathrm{crys},0}$, $\overline{\rho}$ has a crystalline lift of weight 0. We can now apply Corollary [Corollary 1](#cor: rhobar generic ordinary){reference-type="ref" reference="cor: rhobar generic ordinary"} to conclude that $\underline{k}=\underline{0}$. We have now shown that the support of $Z_{\mathrm{crys},0}$ is indeed $\overline{{\mathcal X}}_{d,\operatorname{red}}^{\underline{0}}$, i.e. that the underlying reduced substack of $\overline{{\mathcal X}}_{d}^{\mathrm{crys},0}$ is equal to $\overline{{\mathcal X}}_{d,\operatorname{red}}^{\underline{0}}$, and it remains to determine the generic multiplicity. To do this, we modify our choice of point $x$ as follows: by definition, we have $({\mathbf G}_m)^d_{\underline{0}}=({\mathbf G}_m)^d$, so we can and do choose our point $x$ such that if $i\not=j$, then $$\label{eqn: generic eigenvalues for ordinary smooth}(\mathrm{ur}_{\nu_i}\overline{\varepsilon}^{i-d})/(\mathrm{ur}_{\nu_j}\overline{\varepsilon}^{j-d})\not=1,\overline{\varepsilon}.$$ We will show that $\overline{{\mathcal X}}_{d}^{\mathrm{crys},0}$ is reduced in some open neighbourhood of $x$.
Since the reduced locus is open, and $\overline{{\mathcal X}}_{d}^{\mathrm{crys},0}$ is irreducible, this implies that $\overline{{\mathcal X}}_{d}^{\mathrm{crys},0}$ is generically reduced. We claim that the crystalline lifting ring $R^{\mathrm{crys},0,{\mathcal O}}_{\overline{\rho}}$ is formally smooth, where $\overline{\rho}$ corresponds to our chosen point $x$. Indeed, by Theorem [Theorem 1](#thm: rho generic ordinary){reference-type="ref" reference="thm: rho generic ordinary"}, crystalline lifts of $\overline{\rho}$ of weight $0$ are ordinary, and so $R^{\mathrm{crys},0,{\mathcal O}}_{\overline{\rho}}$ is the weight 0 ordinary lifting ring of $\overline{\rho}$. Since $\overline{\rho}$ is maximally nonsplit (i.e. has a unique filtration with rank $1$ graded pieces) and satisfies [\[eqn: generic eigenvalues for ordinary smooth\]](#eqn: generic eigenvalues for ordinary smooth){reference-type="eqref" reference="eqn: generic eigenvalues for ordinary smooth"}, the deformation problem represented by $R^{\mathrm{crys},0,{\mathcal O}}_{\overline{\rho}}$ coincides with the one considered in [@cht 2.4.2] (taking $F_{\tilde{v}}$ there to be our $K$, $n$ to be our $d$, and $\chi_{v,i}$ to be $\varepsilon^{-i}$), and the formal smoothness is [@cht Lem. 2.4.7]. By Proposition [\[prop: crystalline weight 0 substack\]](#prop: crystalline weight 0 substack){reference-type="ref" reference="prop: crystalline weight 0 substack"} we have a versal morphism $\mathop{\mathrm{Spf}}R^{\mathrm{crys},0,{\mathcal O}}_{\overline{\rho}}/\varpi\to \overline{{\mathcal X}}_{d}^{\mathrm{crys},0}$ at $x$, where $R^{\mathrm{crys},0,{\mathcal O}}_{\overline{\rho}}/\varpi$ is formally smooth and in particular reduced.
By [@stacks-project [Tag 0DR0](https://stacks.math.columbia.edu/tag/0DR0)] we may find a smooth morphism $V\to \overline{{\mathcal X}}_{d}^{\mathrm{crys},0}$ with source a finite type ${\mathcal O}/\varpi$-scheme, and a point $v\in V$ with residue field ${\mathbf F}$, such that there is an isomorphism $\widehat{{\mathcal O}}_{V,v}\cong R^{\mathrm{crys},0,{\mathcal O}}_{\overline{\rho}}/\varpi$, compatible with the given morphism to $\overline{{\mathcal X}}_{d}^{\mathrm{crys},0}$. By [@stacks-project [Tag 00MC](https://stacks.math.columbia.edu/tag/00MC)] and [@stacks-project [Tag 033F](https://stacks.math.columbia.edu/tag/033F)], the local ring ${{\mathcal O}}_{V,v}$ is reduced. Since being reduced is an open condition, we see that $V$ is reduced in an open neighbourhood of $v$; and since it is also a smooth local condition (see [@stacks-project [Tag 04YH](https://stacks.math.columbia.edu/tag/04YH)]) it follows that $\overline{{\mathcal X}}_{d}^{\mathrm{crys},0}$ is reduced in an open neighbourhood of $x$, and we are done. ◻ *Remark 1*. Since the algebraic representation of $\mathop{\mathrm{GL}}_d$ of highest weight $0$ is the trivial representation, Theorem [\[thm: BM cycle weight 0\]](#thm: BM cycle weight 0){reference-type="ref" reference="thm: BM cycle weight 0"} shows that if $p>d$, the cycle $Z_{\underline{0}}$ in the geometric Breuil--Mézard conjecture [@emertongeepicture Conj. 8.2.2] is necessarily equal to $\overline{{\mathcal X}}_{d,\operatorname{red}}^{\underline{0}}$. As far as we are aware, this is the only instance in which such a cycle has been computed for $d>2$ and $K/{\mathbf Q}_p$ arbitrary. **Theorem 1**.
*Suppose that $p>d$, that $K/{\mathbf Q}_p$ is a finite extension, and that $E/{\mathbf Q}_p$ is a finite extension containing the Galois closure of $K$, with ring of integers ${\mathcal O}$ and residue field ${\mathbf F}$.* *Then for any $\overline{\rho}: G_K \to \mathop{\mathrm{GL}}_d({\mathbf F})$, the special fibre $R^{\mathrm{crys},0,{\mathcal O}}_{\overline{\rho}}/\varpi$ of the weight  $0$ crystalline lifting ring is generically reduced.* *Proof.* We follow the proof of [@caraiani2022geometric Thm. 4.6]. By Proposition [\[prop: crystalline weight 0 substack\]](#prop: crystalline weight 0 substack){reference-type="ref" reference="prop: crystalline weight 0 substack"}, we have a versal morphism $\mathop{\mathrm{Spf}}R^{\mathrm{crys},0,{\mathcal O}}_{\overline{\rho}}/\varpi \to\overline{{\mathcal X}}_d^{\mathrm{crys},0}$ at the ${\mathbf F}$-point of ${\mathcal X}_{d,\operatorname{red}}$ corresponding to  $\overline{\rho}$. By [@stacks-project [Tag 0DR0](https://stacks.math.columbia.edu/tag/0DR0)] we may find a smooth morphism $V\to \overline{{\mathcal X}}_d^{\mathrm{crys},0}$ with source a finite type ${\mathbf F}$-scheme, and a point $v\in V$ with residue field ${\mathbf F}$, such that there is an isomorphism $\widehat{{\mathcal O}}_{V,v}\cong R^{\mathrm{crys},0,{\mathcal O}}_{\overline{\rho}}/\varpi$, compatible with the given morphism to $\overline{{\mathcal X}}_d^{\mathrm{crys},0}$. By Theorem [Theorem 1](#thm: BM cycle weight 0){reference-type="ref" reference="thm: BM cycle weight 0"}, there is a dense open substack ${\mathcal U}$ of $\overline{{\mathcal X}}_d^{\mathrm{crys},0}$ such that ${\mathcal U}$ is reduced. 
Since being reduced is a smooth local property, the pullback of ${\mathcal U}$ to $V$ is a reduced open subscheme of $V$; and this pullback is furthermore dense in $V$, because the formation of the scheme-theoretic image of ${\mathcal U}$ in   $\overline{{\mathcal X}}_d^{\mathrm{crys},0}$ commutes with flat base change [@stacks-project [Tag 0CMK](https://stacks.math.columbia.edu/tag/0CMK)]. Thus $V$ is generically reduced, and the complete local rings of $V$ at finite type points are generically reduced by  [@caraiani2022geometric Lem. 4.5]. In particular $R^{\mathrm{crys},0,{\mathcal O}}_{\overline{\rho}}/\varpi\cong \widehat{{\mathcal O}}_{V,v}$ is generically reduced, as required. ◻ *Remark 1*. The case $d=2$ of Theorem [\[thm: special fibre weight 0 crystalline def ring generically reduced\]](#thm: special fibre weight 0 crystalline def ring generically reduced){reference-type="ref" reference="thm: special fibre weight 0 crystalline def ring generically reduced"} is a special case of [@caraiani2022geometric Thm. 4.6]. In both cases the statement is deduced from the corresponding statement for the stack $\overline{{\mathcal X}}_d^{\mathrm{crys},0}$, and indeed in the case $d=2$, Theorem [Theorem 1](#thm: BM cycle weight 0){reference-type="ref" reference="thm: BM cycle weight 0"} is a special case of [@caraiani2022geometric Thm. 7.1, 7.6] (although the generic reducedness statement is proved earlier in [@caraiani2022geometric Prop. 4.1]). The argument that we use to prove Theorem [\[thm: BM cycle weight 0\]](#thm: BM cycle weight 0){reference-type="ref" reference="thm: BM cycle weight 0"} is necessarily rather different from the proof of [@caraiani2022geometric Thm. 4.6], which was written before [@emertongeepicture], and in particular could not use the structure of generic points on the irreducible components of ${\mathcal X}_{2,\operatorname{red}}$. 
Instead, the proof in [@caraiani2022geometric] uses the Kisin resolution of  ${\mathcal X}_2^{\mathrm{crys},0}$ (originally defined for lifting rings in [@kis04]). By results on local models for Shimura varieties, this Kisin resolution has reduced special fibre, and the arguments in [@caraiani2022geometric] show that the map from the Kisin resolution is an isomorphism on dense open substacks of the source and target. In dimension greater than $2$ we do not know of a candidate Kisin resolution for which we could expect to argue in this way. The result [@caraiani2022geometric Thm. 4.6] is more general than Theorem [\[thm: special fibre weight 0 crystalline def ring generically reduced\]](#thm: special fibre weight 0 crystalline def ring generically reduced){reference-type="ref" reference="thm: special fibre weight 0 crystalline def ring generically reduced"} (as always, in the special case $d=2$), because it also proves the analogous statement for the potentially crystalline lifting ring of weight $0$ and any tame type. The Breuil--Mézard conjecture implies that the analogous statement necessarily fails for $d\ge 4$ (even if $K={\mathbf Q}_p$), because the reductions modulo $p$ of the corresponding inertial types contain Serre weights with multiplicities greater than $1$, even for generic choices of type (see [@MR4549091 Rem. 8.1.4]). Similarly, Theorem [Theorem 1](#thm: BM cycle weight 0){reference-type="ref" reference="thm: BM cycle weight 0"} is best possible in the sense that for any parallel Serre weight $\underline{k}$ greater than $0$ (i.e. $k_{\sigma,i}$ is independent of $\sigma$, and the $k_{\sigma,i}$ are not all equal), the stack ${\mathcal X}_d^{\mathrm{crys},\underline{k}}$ cannot have generically reduced special fibre once $K$ is sufficiently ramified. 
(While the Breuil--Mézard conjecture is not known, standard arguments with Taylor--Wiles patching give the expected lower bounds for Breuil--Mézard multiplicities, so it is presumably possible to prove unconditionally that the special fibres of the corresponding stacks are not generically reduced, although we do not prove this here.) *Remark 1*. Despite Remark [Remark 1](#rem: extension class is close to sharp){reference-type="ref" reference="rem: extension class is close to sharp"}, it seems plausible to us that Theorem [Theorem 1](#thm: BM cycle weight 0){reference-type="ref" reference="thm: BM cycle weight 0"} should also hold if $p\le d$, but any proof will necessarily be more complicated, and presumably cannot rely only on an analysis of the successive extension classes of characters of the kind that we have made here. # An automorphy lifting theorem in weight $0$ ## Preliminaries Our goal in this section is to state and prove Theorem [Theorem 1](#thm:main_automorphy_lifting_theorem){reference-type="ref" reference="thm:main_automorphy_lifting_theorem"}, which is an automorphy lifting theorem for $n$-dimensional crystalline weight $0$ $p$-adic representations of $G_F$, where $F$ is an imaginary CM field in which $p$ is arbitrarily ramified. The key innovations that allow us to prove this theorem are the local-global compatibility result of [@caraiani-newton] and the generic reducedness result that we proved in Theorem [Theorem 1](#thm: special fibre weight 0 crystalline def ring generically reduced){reference-type="ref" reference="thm: special fibre weight 0 crystalline def ring generically reduced"}. Given these ingredients, the proof is very close to those of [@10author Theorem 6.1.1] and [@miagkov-thorne Theorem 1.2], and we refer to those papers for some of the details of the arguments, and for any unfamiliar terminology. We begin by introducing some terminology and notation we will need for the statement and proof.
### Galois preliminaries Fix a continuous irreducible representation $\overline{\rho}: G_F \to \mathop{\mathrm{GL}}_n(\overline{{\mathbf F}}_p)$ for a number field $F$. We fix a coefficient field $E/{\mathbf Q}_p$ such that $\overline{\rho}(G_F) \subset \mathop{\mathrm{GL}}_n({\mathbf F})$. We will use the notion of a *decomposed generic* representation $\overline{\rho}$, defined in [@10author Definition 4.3.1]. We will also use the notion of an *adequate subgroup* of $\mathop{\mathrm{GL}}_n({\mathbf F})$, see for example [@miagkov-thorne Definition 1.1.1]. Let $v$ be a finite place of $F$. As in [@10author §6.2.1], a *local deformation problem* is a $\widehat{\mathrm{PGL}}_n$-stable subfunctor of the lifting functor ${\mathcal D}_v^\square:= {\mathcal D}_{\overline{\rho}|_{G_{F_v}}}^{\square,{\mathcal O}}$, (pro-)representable by a quotient $R_v$ of the lifting ring $R_v^\square$. The following local deformation problems will be relevant: - the lifting functor itself, ${\mathcal D}_v^\square$, - for $v|p$, weight $0$ crystalline lifts ${\mathcal D}_v^{\mathrm{crys},\underline{0}}$, represented by $R^{\mathrm{crys},\underline{0},{\mathcal O}}_{\overline{\rho}|_{G_{F_v}}}$, - the local deformation problem ${\mathcal D}_v^\chi$ defined in [@10author §6.2.15]. In this case, we assume that $q_v\equiv 1 \mod p$, that $\overline{\rho}|_{G_{F_v}}$ is trivial, that $p > n$, and we have a tuple $(\chi_{i})_{i = 1, \dots, n}$ of characters $\chi_{i} : {\mathcal O}_{F_v}^\times \to {\mathcal O}^\times$ which are trivial modulo $\varpi$. Then ${\mathcal D}_v^\chi$ classifies lifts $\rho \colon G_{F_v} \rightarrow \mathop{\mathrm{GL}}_n(A)$ such that $$\mathrm{char}_{\rho(\sigma)}(X) = \prod_{i=1}^n (X - \chi_i(\mathrm{Art}_{F_v}^{-1}(\sigma)))$$ for all $\sigma \in I_{F_v}$. Let $S$ be a finite set of finite places of $F$ containing $S_p$ and all places at which $\overline{\rho}$ is ramified. 
Then we use the notion of a *global deformation problem* from [@10author Definition 6.2.2]. We will be able to restrict to the case where $\Lambda_v = {\mathcal O}$ for all $v \in S$, so our global deformation problems will be tuples ${\mathcal S}= (\overline{\rho},S,\{{\mathcal O}\}_{v \in S},\{{\mathcal D}_v\}_{v \in S})$. Each ${\mathcal D}_v$ is a local deformation problem, representable by a quotient $R_v$ of $R_v^\square$. There is an associated functor ${\mathcal D}_{\mathcal S}$ of deformations of $\overline{\rho}$ satisfying the local condition ${\mathcal D}_v$ for each $v \in S$. It is representable by $R_{{\mathcal S}}$. More generally, if $T \subset S$, we have a functor ${\mathcal D}_{{\mathcal S}}^{T}$ of $T$-framed deformations, which is representable by $R_{{\mathcal S}}^T$. The $T$-framed global deformation ring $R_{{\mathcal S}}^T$ receives a natural ${\mathcal O}$-algebra map from $R_{{\mathcal S}}^{T,\operatorname{loc}} := \widehat{\otimes}_{v\in T,{\mathcal O}} R_v$. ### Automorphic preliminaries Now we assume that $F$ is an imaginary CM number field. On the automorphic side, we will be interested in cuspidal automorphic representations of $\mathop{\mathrm{GL}}_n(\mathbf{A}_F)$ which are *regular algebraic of weight $0$*. This means that the infinitesimal character of $\pi_\infty$ matches the infinitesimal character of the trivial representation of $\mathop{\mathrm{GL}}_n(F_\infty)$. These automorphic representations contribute to the cohomology with trivial coefficients of locally symmetric spaces. Let $X_\infty = \mathop{\mathrm{GL}}_n(F_\infty)/{\mathbf R}_{> 0}K_\infty$ be the symmetric space, with $K_\infty$ a maximal compact subgroup of $\mathop{\mathrm{GL}}_n(F_\infty)$ (since $F$ is totally imaginary, $K_\infty$ is connected). Suppose we have a *good subgroup* $K \subset \mathop{\mathrm{GL}}_n(\mathbf{A}_F^\infty)$.
In other words, $K$ is neat, compact, open, and factorizes as $K = \prod_v K_v$ for compact open subgroups $K_v \subset \mathop{\mathrm{GL}}_n(F_v)$. Then we can define a smooth manifold $$X_K = \mathop{\mathrm{GL}}_n(F) \backslash \left(X_\infty \times \mathop{\mathrm{GL}}_n(\mathbf{A}_F^\infty)/K \right).$$ Fix a finite set of finite places $S$ of $F$ containing $S_p$, with $K_v = \mathop{\mathrm{GL}}_n({\mathcal O}_v)$ for $v \notin S$. We factorize $K=K_S K^S$. We have an abstract Hecke algebra $\mathcal{H}(\mathop{\mathrm{GL}}_n(\mathbf{A}_F^{\infty,S}),K^S)$ with coefficients in ${\mathbf Z}$, a tensor product of spherical Hecke algebras over finite places $v \notin S$. Suppose that $V$ is a finite ${\mathcal O}$-module with an action of $G(F)\times K_S$. Then, as explained in [@10author §2.1.2], $V$ descends to a local system of ${\mathcal O}$-modules ${\mathcal V}$ on $X_K$, and we have a natural Hecke action $$\mathcal{H}(\mathop{\mathrm{GL}}_n(\mathbf{A}_F^{\infty,S}),K^S)\otimes_{{\mathbf Z}}{\mathcal O}\to \mathop{\mathrm{End}}_{\mathbf{D}({\mathcal O})}(R\Gamma(X_K,{\mathcal V})).$$ The image of this ${\mathcal O}$-algebra map is a finite ${\mathcal O}$-algebra denoted by ${\mathbf T}^S(K,{\mathcal V})$. If $\mathfrak{m}$ is a maximal ideal of ${\mathbf T}^S(K,{\mathcal V})$, it has an associated semisimple Galois representation $$\overline{\rho}_\mathfrak{m}: G_{F,S'} \to \mathop{\mathrm{GL}}_n(k(\mathfrak{m}))$$ for a suitable set of places $S'$ containing $S$ [@10author Theorem 2.3.5]. For $v \notin S'$, the characteristic polynomial of $\overline{\rho}_\mathfrak{m}(\mathrm{Frob}_v)$ equals the image of $$\begin{split} P_v(X) = X^n&-T_{v, 1}X^{n-1} + \dots + (-1)^iq_v^{i(i-1)/2}T_{v, i}X^{n-i}+\dots \\ & + (-1)^nq_v^{n(n-1)/2}T_{v,n} \in \mathcal{H}(\mathop{\mathrm{GL}}_n(F_v),\mathop{\mathrm{GL}}_n({\mathcal O}_{F_v}))[X] \end{split}$$ in the residue field $k(\mathfrak{m})$.
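The Satake-normalised polynomial $P_v(X)$ is easy to transcribe symbolically. The following sketch is our own illustration (not taken from the references), treating the Hecke operators $T_{v,i}$, defined below, as formal symbols:

```python
import sympy

def hecke_polynomial(n, q):
    """Build P_v(X) = sum_{i=0}^n (-1)^i * q^(i(i-1)/2) * T_i * X^(n-i),
    with T_0 = 1 and the T_i kept as formal Hecke-operator symbols."""
    X = sympy.Symbol('X')
    T = sympy.symbols(f'T1:{n + 1}')  # formal symbols T1, ..., Tn
    P = X**n
    for i in range(1, n + 1):
        P += (-1)**i * q**(i * (i - 1) // 2) * T[i - 1] * X**(n - i)
    return sympy.expand(P), T, X

# degree-3 example at a place of residue cardinality q_v = 5
P, T, X = hecke_polynomial(3, 5)
```

For $n = 3$ and $q_v = 5$ this gives $X^3 - T_1 X^2 + 5\,T_2 X - 125\,T_3$, matching the displayed general term $(-1)^i q_v^{i(i-1)/2} T_{v,i} X^{n-i}$ term by term.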
We write $T_{v, i} \in \mathcal{H}(\mathop{\mathrm{GL}}_n(F_v), \mathop{\mathrm{GL}}_n({\mathcal O}_{F_v}))$ for the double coset operator $$T_{v, i} = [ \mathop{\mathrm{GL}}_n({\mathcal O}_{F_v}) \mathrm{diag}(\varpi_v, \dots, \varpi_v, 1, \dots, 1) \mathop{\mathrm{GL}}_n({\mathcal O}_{F_v})],$$ where $\varpi_v$ appears $i$ times on the diagonal. When $\overline{\rho}_\mathfrak{m}$ is absolutely irreducible, the cohomology groups $H^i(X_K,{\mathcal O})_\mathfrak{m}\otimes_{{\mathcal O}} E$ can be described in terms of cuspidal automorphic representations which are regular algebraic of weight 0 [@10author Theorem 2.4.10]. ## An automorphy lifting theorem The rest of this section is devoted to the proof of the following theorem, which is a version of [@10author Theorem 6.1.1] and [@miagkov-thorne Theorem 1.2] allowing arbitrary ramification at primes dividing $p$, at the price of restricting to weight $0$ automorphic representations. **Theorem 1**. *Let $F$ be an imaginary CM or totally real field and let $p>n$ be a prime. Suppose given a continuous representation $\rho : G_F \to \mathop{\mathrm{GL}}_n(\overline{{\mathbf Q}}_p)$ satisfying the following conditions:* 1. *$\rho$ is unramified almost everywhere.* 2. *For each place $v | p$ of $F$, the representation $\rho|_{G_{F_v}}$ is crystalline of weight $0$, i.e. with Hodge--Tate weights $HT_{\tau}(\rho)=\{0,1,2,\ldots,n-1\}$ for each $\tau:F_v\hookrightarrow\overline{{\mathbf Q}}_p$.* 3. *[\[part: ALT cond dgi\]]{#part: ALT cond dgi label="part: ALT cond dgi"} $\overline{\rho}$ is absolutely irreducible and decomposed generic. The image of $\overline{\rho}|_{G_{F(\zeta_p)}}$ is adequate *(*as a subgroup of $\mathop{\mathrm{GL}}_n({\mathbf F})$, for sufficiently large ${\mathbf F}$*)*.* 4. *[\[pa4fl\]]{#pa4fl label="pa4fl"} [\[part:scalartoremark\]]{#part:scalartoremark label="part:scalartoremark"} There exists $\sigma \in G_F - G_{F(\zeta_p)}$ such that $\overline{\rho}(\sigma)$ is a scalar.* 5. 
*There exists a cuspidal automorphic representation $\pi$ of $\mathop{\mathrm{GL}}_n(\mathbf{A}_F)$ satisfying the following conditions:* 1. *$\pi$ is regular algebraic of weight $0$.* 2. *There exists an isomorphism $\iota : \overline{{\mathbf Q}}_p\to {\mathbf C}$ such that $\overline{\rho} \cong \overline{r_\iota(\pi)}$.* 3. *[\[assm:connects at p\]]{#assm:connects at p label="assm:connects at p"} If $v | p$ is a place of $F$, then $\pi_v$ is unramified and $r_\iota(\pi)|_{G_{F_v}} \sim \rho|_{G_{F_v}}$ *(*"connects to", in the sense of [@BLGGT §1.4]*)*.* *Then $\rho$ is automorphic: there exists a cuspidal automorphic representation $\Pi$ of $\mathop{\mathrm{GL}}_n(\mathbf{A}_F)$, regular algebraic of weight $0$, such that $\rho \cong r_\iota(\Pi)$. Moreover, if $v$ is a finite place of $F$ and either $v | p$ or both $\rho$ and $\pi$ are unramified at $v$, then $\Pi_v$ is unramified.* *Remark 1*. In assumption ([\[assm:connects at p\]](#assm:connects at p){reference-type="ref" reference="assm:connects at p"}), we are using [@caraiani-newton Theorem 4.3.1] which shows that $r_\iota(\pi)|_{G_{F_v}}$ is crystalline with the same labelled Hodge--Tate weights as $\rho|_{G_{F_v}}$. Choose a $p$-adic coefficient field $E$ which contains the Galois closure of $F$ and such that $\overline{\rho}(G_F)\subset \mathop{\mathrm{GL}}_n({\mathbf F})$. Then assumption [\[assm:connects at p\]](#assm:connects at p){reference-type="eqref" reference="assm:connects at p"} is that $r_\iota(\pi)|_{G_{F_v}}$ and $\rho|_{G_{F_v}}$ define points on the same irreducible component of the weight $0$ crystalline lifting ring $R^{\mathrm{crys},\underline{0},{\mathcal O}}_{\overline{\rho}|_{G_{F_v}}}\otimes_{{\mathcal O}}\overline{{\mathbf Q}}_p$. We begin by imposing some additional assumptions, under which we can use the Calegari--Geraghty version of the Taylor--Wiles--Kisin patching method to prove an automorphy lifting theorem.
We then deduce Theorem [Theorem 1](#thm:main_automorphy_lifting_theorem){reference-type="ref" reference="thm:main_automorphy_lifting_theorem"} by a standard base change argument. We refer the reader to [@10author] for any unfamiliar notation. We let $F$ be an imaginary CM field with maximal totally real subfield $F^+$ and complex conjugation $c \in \mathop{\mathrm{Gal}}(F/F^+)$. We fix an integer $n \ge 1$, an odd prime $p > n$ and an isomorphism $\iota : \overline{{\mathbf Q}}_p\cong {\mathbf C}$. We let $\pi$ be a cuspidal automorphic representation of $\mathop{\mathrm{GL}}_n(\mathbf{A}_F)$, which is regular algebraic of weight $0$. We suppose we have a finite set $S$ of finite places of $F$, containing the set $S_p$ of places of $F$ above $p$, and a (possibly empty) subset $R \subset (S\smallsetminus S_p)$. Then we assume that the following conditions are satisfied: 1. If $l$ is a prime lying below an element of $S$, or which is ramified in $F$, then $F$ contains an imaginary quadratic field in which $l$ splits. In particular, each place of $S$ is split over $F^+$ and the extension $F / F^+$ is everywhere unramified. 2. For each $v \in S_p$, let $\overline{v}$ denote the place of $F^+$ lying below $v$. Then there exists a place $\overline{v}' \neq \overline{v}$ of $F^+$ such that $\overline{v}' | p$ and $$\sum_{\overline{v}'' \neq \overline{v}, \overline{v}'} [ F^+_{\overline{v}''} : {\mathbf Q}_p] > \frac{1}{2} [ F^+ : {\mathbf Q}_p].$$ 3. The residual representation $\overline{r_\iota(\pi)}$ is absolutely irreducible and decomposed generic, and $\overline{r_\iota(\pi)}|_{G_{F(\zeta_p)}}$ has adequate image. 4. If $v$ is a place of $F$ lying above $p$, then $\pi_v$ is unramified. 5. If $v \in R$, then $\pi_v^{\mathrm{Iw}_v} \neq 0$, $q_v \equiv 1 \text{ mod }p$ and $\overline{r_\iota(\pi)}|_{G_{F_v}}$ is trivial. 6. If $v \in S - (R \cup S_p)$, then $\pi_v$ is unramified, $v\notin R^c$, and $H^2(F_v, \operatorname{ad}\overline{r_\iota(\pi)}) = 0$. 7. 
$S-(R \cup S_p)$ contains at least two places with distinct residue characteristics. 8. If $v \not\in S$ is a finite place of $F$, then $\pi_v$ is unramified. We define an open compact subgroup $K = \prod_v K_v$ of $\mathop{\mathrm{GL}}_n(\widehat{{\mathcal O}}_F)$ as follows: - If $v \not\in S$, or $v \in S_p$, then $K_v = \mathop{\mathrm{GL}}_n({\mathcal O}_{F_v})$. - If $v \in R$, then $K_v = \mathrm{Iw}_v$. - If $v \in S - (R \cup S_p)$, then $K_v = \mathrm{Iw}_{v,1}$ is the pro-$v$ Iwahori subgroup of $\mathop{\mathrm{GL}}_n({\mathcal O}_{F_v})$. By [@10author Theorem 2.4.10], we can find a coefficient field $E \subset \overline{{\mathbf Q}}_p$ and a maximal ideal ${\mathfrak m}\subset {\mathbf T}^S(K, {\mathcal O})$ such that $\overline{\rho}_{\mathfrak m}\cong \overline{r_\iota(\pi)}$. After possibly enlarging $E$, we can and do assume that the residue field of $\mathfrak{m}$ is equal to ${\mathbf F}$, the residue field of $E$. For each tuple $(\chi_{v, i})_{v \in R, i = 1, \dots, n}$ of characters $\chi_{v, i} : k(v)^\times \to {\mathcal O}^\times$ which are trivial modulo $\varpi$, we define a global deformation problem $${\mathcal S}_\chi = (\overline{\rho}_{\mathfrak m}, S, \{ {\mathcal O}\}_{v \in S}, \{ {\mathcal D}_v^{\mathrm{crys},\underline{0}} \}_{v \in S_p} \cup \{ {\mathcal D}_v^\chi \}_{v \in R} \cup \{ {\mathcal D}_v^\square \}_{v \in S - (R \cup S_p)}).$$ We will assume that either $\chi_{v,i} = 1$ for all $v \in R$ and all $1\le i\le n$, or that for each $v \in R$ the $\chi_{v,i}$ are pairwise distinct. Extending ${\mathcal O}$ if necessary, we may assume that all irreducible components of our local lifting rings and their special fibres are geometrically irreducible. We fix representatives $\rho_{{\mathcal S}_\chi}$ of the universal deformations which are identified modulo $\varpi$ (via the identifications $R_{{\mathcal S}_\chi} / \varpi \cong R_{{\mathcal S}_1} / \varpi$).
We define an ${\mathcal O}[K_S]$-module ${\mathcal O}(\chi^{-1})$, where $K_S$ acts by the projection $K_S \to K_R = \prod_{v \in R} \mathrm{Iw}_v \to \prod_{v \in R} (k(v)^\times)^n$. **Proposition 1**. *There exists an integer $\delta \geq 1$, depending only on $n$ and $[F : {\mathbf Q}]$, an ideal $J \subset {\mathbf T}^S( R \Gamma(X_K, {\mathcal O}(\chi^{-1})))_{\mathfrak m}$ such that $J^\delta = 0$, and a continuous surjective homomorphism $$f_{{\mathcal S}_\chi} : R_{{\mathcal S}_\chi} \to {\mathbf T}^S( R \Gamma(X_K, {\mathcal O}(\chi^{-1})))_{\mathfrak m}/ J$$ such that for each finite place $v \not \in S$ of $F$, the characteristic polynomial of $f_{{\mathcal S}_\chi} \circ \rho_{{\mathcal S}_\chi}(\mathrm{Frob}_v)$ equals the image of $P_v(X)$ in ${\mathbf T}^S( R \Gamma(X_K, {\mathcal O}(\chi^{-1})))_{\mathfrak m}/ J$.* *Proof.* This is a version of [@10author Proposition 6.5.3], using [@caraiani-newton Theorem 4.2.15] to verify that we satisfy the crystalline weight $0$ condition at $v \in S_p$. ◻ This proposition means that it makes sense to talk about the support of $H^\ast(X_K, {\mathcal O})_{\mathfrak m}$ over $R_{{\mathcal S}_{1}}$, since $f_{{\mathcal S}_1}$ realizes $\mathop{\mathrm{Spec}}({\mathbf T}^S(K,{\mathcal O})_\mathfrak{m})$ as a closed subset of $\mathop{\mathrm{Spec}}(R_{{\mathcal S}_1})$. Here are the essential properties of the (completed tensor products of) local deformation rings in our situation: **Lemma 1**. *Fix a tuple $\chi = (\chi_{v, i})_{v \in R, i = 1, \dots, n}$ of characters $\chi_{v, i} : k(v)^\times \to {\mathcal O}^\times$ which are trivial modulo $\varpi$. We assume that either $\chi_{v,i} = 1$ for all $v \in R$ and all $1\le i\le n$, or that for each $v \in R$ the $\chi_{v,i}$ are pairwise distinct.* 1. *$R_{{\mathcal S}_{\chi}}^{S,\operatorname{loc}}$ is equidimensional of dimension $1+n^2|S| + \frac{n(n-1)}{2}[F:{\mathbf Q}]$ and every generic point has characteristic $0$.* 2.
*[\[uniquegen\]]{#uniquegen label="uniquegen"} Each generic point of $\mathop{\mathrm{Spec}}R_{{\mathcal S}_{\chi}}^{S,\operatorname{loc}}/\varpi$ is the specialization of a unique generic point of $\mathop{\mathrm{Spec}}R_{{\mathcal S}_{\chi}}^{S,\operatorname{loc}}$.* 3. *[\[irredcompsatp\]]{#irredcompsatp label="irredcompsatp"} Assume that $\chi_{v,1},\ldots,\chi_{v,n}$ are pairwise distinct for each $v\in R$. Then the natural map $\mathop{\mathrm{Spec}}R_{{\mathcal S}_{\chi}}^{S,\operatorname{loc}}\to\mathop{\mathrm{Spec}}R_{{\mathcal S}_{\chi}}^{S_p,\operatorname{loc}} = \mathop{\mathrm{Spec}}R_{{\mathcal S}_{1}}^{S_p,\operatorname{loc}}$ induces a bijection on irreducible components.* 4. *[\[regular generic fibre\]]{#regular generic fibre label="regular generic fibre"} Each characteristic zero point of $\mathop{\mathrm{Spec}}R_{{\mathcal S}_{1}}^{S_p,\operatorname{loc}}$ lies on a unique irreducible component.* 5. *Assume that $\chi_{v,1},\ldots,\chi_{v,n}$ are pairwise distinct for each $v\in R$, and let $C$ be an irreducible component of $\mathop{\mathrm{Spec}} R_{{\mathcal S}_{1}}^{S,\operatorname{loc}}$. Write $C_p$ for the image of $C$ in $\mathop{\mathrm{Spec}}R_{{\mathcal S}_{1}}^{S_p,\operatorname{loc}}$ *(*so that $C_p$ is an irreducible component of $\mathop{\mathrm{Spec}} R_{{\mathcal S}_{1}}^{S_p,\operatorname{loc}}$*)*. Then the generic points of $C\cap \mathop{\mathrm{Spec}}R_{{\mathcal S}_{1}}^{S,\operatorname{loc}}/\varpi$ generalize *(*via the equality $\mathop{\mathrm{Spec}} R_{{\mathcal S}_{1}}^{S,\operatorname{loc}}/\varpi= \mathop{\mathrm{Spec}} R_{{\mathcal S}_{\chi}}^{S,\operatorname{loc}}/\varpi$*)* to the generic point of $\mathop{\mathrm{Spec}}R_{{\mathcal S}_{\chi}}^{S,\operatorname{loc}}$ corresponding to $C_p$ via the bijection of part [\[irredcompsatp\]](#irredcompsatp){reference-type="eqref" reference="irredcompsatp"}. 
*(*By part ([\[uniquegen\]](#uniquegen){reference-type="ref" reference="uniquegen"}), each of these points has a unique generalization.*)** *Proof.* We begin by noting that [@blght Lemma 3.3] allows us to describe the set of irreducible components of $\mathop{\mathrm{Spec}}(R_{{\mathcal S}_{\chi}}^{S,\operatorname{loc}})$ (respectively, its special fibre) as the product over $v \in S$ of the sets of irreducible components of the local deformation rings (respectively, their special fibres). (Here we use that the generic points of the local deformation rings that we consider all have characteristic zero.) The first part follows from [@10author Lemma 6.2.25] (we have a different deformation condition at $p$, but $R^{\mathrm{crys},\underline{0},{\mathcal O}}_{\overline{\rho}|_{G_{F_v}}}$ is ${\mathcal O}$-flat by definition and equidimensional of dimension $1+n^2 + \frac{n(n-1)}{2}[F_v : {\mathbf Q}_p]$ by [@kisindefrings Theorem 3.3.4]). For the second part, for each $v \in S$ and local deformation ring $R_v$ we need to check that the generic points of $\mathop{\mathrm{Spec}}R_v/\varpi$ have unique generalizations to $\mathop{\mathrm{Spec}}R_v$. For $v|p$, this follows from Theorem [\[thm: special fibre weight 0 crystalline def ring generically reduced\]](#thm: special fibre weight 0 crystalline def ring generically reduced){reference-type="ref" reference="thm: special fibre weight 0 crystalline def ring generically reduced"} --- see [@caraiani-newton Lemma 5.3.3] for the argument that generically reduced special fibre implies unique generalizations of its generic points, and note that we are assuming $p > n$. For $v \in R$, the property we need follows from [@10author Props. 6.2.16, 6.2.17]. For $v \in S - (R\cup S_p)$, $R_v = R_v^{\square}$ is formally smooth over ${\mathcal O}$. The third part follows from the irreducibility of the local deformation rings for $v \in S-S_p$ [@10author Prop.
6.2.17], and the fourth part from the regularity of $R_{{\mathcal S}_{1}}^{S_p,\operatorname{loc}}\left[1/p\right]$ [@kisindefrings Theorem 3.3.8]. For the final part, by the third part it is enough to note that, as we saw above, it follows from Theorem [\[thm: special fibre weight 0 crystalline def ring generically reduced\]](#thm: special fibre weight 0 crystalline def ring generically reduced){reference-type="ref" reference="thm: special fibre weight 0 crystalline def ring generically reduced"} that the generic points of the special fibre of $\mathop{\mathrm{Spec}}R_{{\mathcal S}_{1}}^{S_p,\operatorname{loc}}/\varpi = \mathop{\mathrm{Spec}}R_{{\mathcal S}_{\chi}}^{S_p,\operatorname{loc}}/\varpi$ uniquely generalize to generic points of $\mathop{\mathrm{Spec}}R_{{\mathcal S}_{1}}^{S_p,\operatorname{loc}} = \mathop{\mathrm{Spec}}R_{{\mathcal S}_{\chi}}^{S_p,\operatorname{loc}}$. ◻ **Theorem 1**. *Suppose we are given two homomorphisms $f_1, f_2: R_{{\mathcal S}_1}\to {\mathcal O}$ with associated liftings $\rho_1, \rho_2: G_{F,S} \to \mathop{\mathrm{GL}}_n({\mathcal O})$. Suppose $\ker(f_1) \in \mathop{\mathrm{Supp}}_{R_{{\mathcal S}_1}}(H^\ast(X_K, {\mathcal O})_{\mathfrak m})$ and $\rho_1|_{G_{F_v}} \sim \rho_2|_{G_{F_v}}$ for each $v \in S_p$. Then $\ker(f_2) \in \mathop{\mathrm{Supp}}_{R_{{\mathcal S}_1}}(H^\ast(X_K, {\mathcal O})_{\mathfrak m})$.* *Proof.* We patch, as in [@10author §6.5] and [@miagkov-thorne §8], replacing the Fontaine--Laffaille local condition at $v \in S_p$ with the crystalline weight $0$ condition. Once again, we use [@caraiani-newton Theorem 4.2.15] to ensure that we have the necessary maps from deformation rings with these local conditions to our Hecke algebras. We record the output of this patching process, complete details of which can be found in [@10author §6.4--6.5].
Fix a tuple $\chi = (\chi_{v, i})_{v \in R, i = 1, \dots, n}$ of characters $\chi_{v, i} : k(v)^\times \to {\mathcal O}^\times$ which are trivial modulo $\varpi$, and with $\chi_{v,1},\ldots,\chi_{v,n}$ pairwise distinct for each $v\in R$. Patching will provide us with the following: 1. A power series ring $S_\infty={\mathcal O}\llbracket X_1,\cdots,X_r\rrbracket$ with augmentation ideal $\mathfrak{a}_\infty = (X_1, \dots, X_r)$. 2. [\[ihara_complex_iso\]]{#ihara_complex_iso label="ihara_complex_iso"} Perfect complexes $C_\infty, C'_\infty$ of $S_\infty$-modules, an isomorphism $$C_\infty \otimes^\mathbf{L}_{S_\infty} S_{\infty} / \varpi \cong C'_\infty \otimes^\mathbf{L}_{S_\infty} S_{\infty} / \varpi$$ in $\mathbf{D}(S_\infty / \varpi)$ and an isomorphism $$C_\infty \otimes^\mathbf{L}_{S_\infty} S_{\infty} / \mathfrak{a}_{\infty} \cong R \mathop{\mathrm{Hom}}_{\mathcal O}( R \Gamma(X_K, {\mathcal O})_\mathfrak{m}, {\mathcal O})[-d]$$ in $\mathbf{D}({\mathcal O})$. 3. Two $S_\infty$-subalgebras $$T_\infty\subset \mathop{\mathrm{End}}_{\mathbf{D}(S_\infty)}(C_\infty)$$ and $$T'_\infty \subset \mathop{\mathrm{End}}_{\mathbf{D}(S_\infty)}(C'_\infty),$$ which have the same image in $$\mathop{\mathrm{End}}_{\mathbf{D}(S_\infty/\varpi)}(C_\infty \otimes^\mathbf{L}_{S_\infty} S_{\infty} / \varpi ) = \mathop{\mathrm{End}}_{\mathbf{D}(S_\infty/\varpi)}(C'_\infty \otimes^\mathbf{L}_{S_\infty} S_{\infty} / \varpi ),$$ where these endomorphism algebras are identified using the fixed isomorphism in ([\[ihara_complex_iso\]](#ihara_complex_iso){reference-type="ref" reference="ihara_complex_iso"}). Call this common image $\overline{T}_\infty$. Note that $T_\infty$ and $T'_\infty$ are finite $S_\infty$-algebras. 
The map $$T_\infty \to \mathop{\mathrm{End}}_{\mathbf{D}({\mathcal O})}(C_\infty \otimes^\mathbf{L}_{S_\infty} S_{\infty} / \mathfrak{a}_{\infty}) = \mathop{\mathrm{End}}_{\mathbf{D}({\mathcal O})}(R \Gamma(X_K, {\mathcal O})_\mathfrak{m})^{op}$$ factors through a map $T_\infty \to {\mathbf T}(K,{\mathcal O})_\mathfrak{m}$. 4. Two Noetherian complete local $S_\infty$-algebras $R_\infty$ and $R'_\infty$, which are power series algebras over $R_{{\mathcal S}_{1}}^{S,\operatorname{loc}}$ and $R_{{\mathcal S}_{\chi}}^{S,\operatorname{loc}}$ respectively. We have a surjective $R_{{\mathcal S}_{1}}^{S,\operatorname{loc}}$-algebra map $R_\infty \to R_{{\mathcal S}_{1}}$, which factors through an ${\mathcal O}$-algebra map $R_\infty/\mathfrak{a}_{\infty} \to R_{{\mathcal S}_{1}}$. We also have surjections $R_\infty\twoheadrightarrow T_\infty/I_\infty$, $R'_\infty\twoheadrightarrow T'_\infty/I'_\infty$, where $I_\infty$ and $I'_\infty$ are nilpotent ideals. We write $\overline{I}_\infty$ and $\overline{I}'_\infty$ for the image of these ideals in $\overline{T}_\infty$. These maps fit into a commutative diagram $$\xymatrix{ R_\infty \ar[d] \ar[r] & R_{{\mathcal S}_1}\ar[d] \\ T_\infty/I_\infty \ar[r] & {\mathbf T}(K,{\mathcal O})_\mathfrak{m}^{\operatorname{red}}.}$$ 5. An isomorphism $R_\infty/\varpi\cong R'_\infty/\varpi$ compatible with the $S_\infty$-algebra structure and the actions (induced from $T_\infty$ and $T'_\infty$) on $$H^*( C_\infty \otimes^\mathbf{L}_{S_\infty} S_{\infty} / \varpi)/(\overline{I}_\infty+\overline{I}'_\infty) = H^*( C'_\infty \otimes^\mathbf{L}_{S_\infty} S_{\infty} / \varpi)/(\overline{I}_\infty+\overline{I}'_\infty),$$ where these cohomology groups are identified using the fixed isomorphism. 6. Integers $q_0 \in {\mathbf Z}$ and $l_0 \in {\mathbf Z}_{\geq 0}$ such that $$H^\ast(X_K,E)_\mathfrak{m}\neq 0,$$ and these groups are non-zero only for degrees in the interval $[q_0,q_0+l_0]$. 
Moreover, $\dim R_\infty=\dim R'_\infty=\dim S_\infty -l_0$. With that out of the way, we let $x \in \mathop{\mathrm{Spec}}R_\infty$ be the automorphic point coming from $\ker(f_1)$. By the first part of [@caraiani-newton Proposition 5.4.2], there is an irreducible component $C_a$ of $\mathop{\mathrm{Spec}}R_\infty$, containing $x$, with $C_a \subset \mathop{\mathrm{Spec}}T_\infty$. Let $C$ be any irreducible component of $\mathop{\mathrm{Spec}}R_\infty$ containing $\ker(f_2)$. Since $\rho_1|_{G_{F_v}} \sim \rho_2|_{G_{F_v}}$ for each $v \in S_p$, $C$ and $C_a$ map to the same irreducible component of $\mathop{\mathrm{Spec}}R_{{\mathcal S}_1}^{S_p,\operatorname{loc}}$ (we are using part ([\[regular generic fibre\]](#regular generic fibre){reference-type="ref" reference="regular generic fibre"}) of Lemma [Lemma 1](#lem:local def ring props){reference-type="ref" reference="lem:local def ring props"} here, which says that each characteristic $0$ point lies in a unique irreducible component of $R_{{\mathcal S}_1}^{S_p,\operatorname{loc}}$). By Lemma [Lemma 1](#lem:local def ring props){reference-type="ref" reference="lem:local def ring props"}, the generic points of $C \cap \mathop{\mathrm{Spec}}R_\infty/\varpi$ and $C_a \cap \mathop{\mathrm{Spec}}R_\infty/\varpi$ all generalize to the same irreducible component of $\mathop{\mathrm{Spec}}R_\infty'$. We can apply the second part of [@caraiani-newton Proposition 5.4.2] to deduce that $C \subset \mathop{\mathrm{Spec}}T_\infty$, and therefore $\ker(f_2)$ is in the support of $H^*(C_\infty)$. It follows as in [@10author Corollary 6.3.9] (see also [@caraiani-newton Corollary 5.4.3]) that $\ker(f_2)$ is in the support of $H^\ast(X_K,{\mathcal O})_\mathfrak{m}$, as desired.
◻ *Proof of Theorem [Theorem 1](#thm:main_automorphy_lifting_theorem){reference-type="ref" reference="thm:main_automorphy_lifting_theorem"}.* This is immediate from Theorem [Theorem 1](#thm:alt_after_bc){reference-type="ref" reference="thm:alt_after_bc"} via a standard base change argument identical to the one found in [@10author §6.5.12]. ◻ # The Dwork family {#ssec:dwork} ## Definitions We begin by introducing the Dwork motives we need to consider. For our purposes, we need to consider the non-self dual motives (with coefficients) studied in [@Qian; @QianPotential] rather than the self-dual (generalized) symplectic motives previously considered in [@HSBT; @blght]. Let $n > 2$ and $N > 100 n + 100$ be integers, with $N$ odd and $(N, n) = 1$. Let $\zeta_N \in \overline{{\mathbf Q}}$ be a primitive $N^\text{th}$ root of unity. Let $R_0 = {\mathbf Z}[\zeta_N, N^{-1}]$, $T_0 = \mathop{\mathrm{Spec}}R_0[ t, (1-t^N)^{-1}]$, and let $Z \subset \mathbf{P}^{N-1}_{T_0}$ be the family of smooth hypersurfaces of degree $N$ and dimension $N - 2$ defined by the equation $$X_1^N + \dots + X_N^N = N t X_1 \dots X_N.$$ We write $\pi : Z \to T_0$ for the natural projection. Let $\mu_N$ denote the group of $N^\text{th}$ roots of unity in ${\mathbf Z}[\zeta_N]^\times$. Then the group $H = \mu_N^N / \Delta(\mu_N)$ acts on $\mathbf{P}^{N-1}$ by multiplication of coordinates, and the subgroup $$H_0 = \ker( \prod : H \to \mu_N )$$ preserves $Z$. The action of $H_0$ extends to an action of $H$ on the central fibre $Z_0$ (which is a Fermat hypersurface). Let $M = {\mathbf Q}(e^{2 \pi i / N}) \subset {\mathbf C}$, and set $$X = \mathop{\mathrm{Hom}}(H, M^\times),$$ $$X_0 = \mathop{\mathrm{Hom}}(H_0, M^\times).$$ A choice of embedding $\tau : {\mathbf Q}(\zeta_N) \to {\mathbf C}$ determines an isomorphism $$f_\tau : X \cong \ker\left( \sum : ({\mathbf Z}/ N {\mathbf Z})^N \to {\mathbf Z}/ N {\mathbf Z}\right),$$ but we do not fix a preferred choice.
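The smoothness of the fibres of $Z \to T_0$ away from $t^N = 1$ (the reason for inverting $1 - t^N$ in the definition of $T_0$) can be checked by brute force in a toy case. The sketch below is our own illustration with $N = 3$ over ${\mathbf F}_7$, ignoring the largeness hypotheses on $N$; the Jacobian criterion reduces to finding nonzero points where every partial derivative of $F = \sum_i X_i^N - N t \prod_i X_i$ vanishes (by Euler's relation such points lie on the fibre automatically):

```python
from itertools import product

def singular_locus(t, p=7, N=3):
    """Nonzero affine representatives (x1, x2, x3) over F_p at which every
    partial derivative of F = x1^N + x2^N + x3^N - N*t*x1*x2*x3 vanishes."""
    sing = []
    for x in product(range(p), repeat=3):
        if x == (0, 0, 0):
            continue
        # dF/dx_i = N*x_i^(N-1) - N*t*(product of the other two coordinates)
        partials = [
            (N * x[i] ** (N - 1) - N * t * x[(i + 1) % 3] * x[(i + 2) % 3]) % p
            for i in range(3)
        ]
        if all(d == 0 for d in partials):
            sing.append(x)
    return sing

# cubes of units in F_7 are {1, 2, 4}, so t^3 = 1 exactly for t in {1, 2, 4}
smooth_t = [t for t in range(7) if not singular_locus(t)]
```

One finds that the fibre is singular precisely when $t^3 = 1$ (for instance $(1,1,1)$ is a singular point of the fibre at $t = 1$), consistent with the general fact that the Dwork family is smooth wherever $1 - t^N$ is invertible.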
We do choose a character $\underline{\chi} = (\chi_1, \dots, \chi_N) \in X$ with the following properties: - The trivial character of $\mu_N$ occurs $n+1$ times among $\chi_1, \dots, \chi_N$, and each other character appears at most once. - Let $\rho_1, \dots, \rho_n$ be the non-trivial characters $\mu_N \to M^\times$ which do not appear in $\chi_1, \dots, \chi_N$. Then the stabilizer of the set $\{ \rho_1, \dots, \rho_n \}$ in $\mathop{\mathrm{Gal}}(M / {\mathbf Q})$ is trivial. The existence of such $\underline{\chi}$ is established in [@QianPotential Lem. 3.1], as a consequence of the assumption $N > 100 n + 100$. The precise choice is not important. For any place $\lambda$ of $M$ of characteristic $l$, we define ${\mathcal V}_\lambda = (\pi[1/l]_\ast {\mathcal O}_{M_\lambda})^{H_0 = \underline{\chi}|_{H_0}}$. It is a lisse sheaf of finite free ${\mathcal O}_{M_\lambda}$-modules on $T_0[1/l]$. If $k$ is a perfect field which is an $R_0[1/l]$-algebra, and $t \in T_0(k)$, then we write $V_{t, \lambda} = {\mathcal V}_{\lambda, \overline{t}}$ for the stalk at a geometric point lying above $t$; it is an ${\mathcal O}_{M_\lambda}[G_k]$-module, finite free as ${\mathcal O}_{M_\lambda}$-module. Katz [@KatzExponential; @KatzDwork] defines hypergeometric sheaves on $T_1 = \mathop{\mathrm{Spec}}R_0[t, t^{-1}, (1-t)^{-1}]$. We give the definition just in the case of interest. Let $j : T_1 \to \mathbf{G}_{m, R_0}$ be the natural open immersion, and let $f : T_1 \to \mathbf{G}_{m, R_0}$ be the map induced by $t \mapsto 1-t$. Fixing again a place $\lambda$ of $M$ of characteristic $l$, let ${\mathcal L}_i$ denote the rank $1$ lisse $M_\lambda$-sheaf on $\mathbf{G}_{m, R_0}[1/l]$ associated to $\rho_i$, and let $\mathcal{F}_i = j[1/l]_! f[1/l]^\ast {\mathcal L}_i$. We set $${\mathcal E}_\lambda = j[1/l]^\ast (\mathcal{F}_1 \ast_! \mathcal{F}_2 \ast_! \dots \ast_! \mathcal{F}_n)[n-1],$$ where $\ast_!$ denotes multiplicative convolution with compact support.
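The trace of Frobenius on fibres of ${\mathcal E}_\lambda$ is given by an explicit character sum (part (2) of the first theorem in the next subsection). As a numerical illustration only, with artificially small parameters $n = 2$, $N = 3$, $q = 7$ that violate the hypothesis $N > 100n + 100$, the following sketch evaluates that sum and checks it against the Weil bound $|\operatorname{tr}| \le n\, q^{(n-1)/2}$ forced by purity of weight $n - 1$:

```python
import cmath

def frobenius_trace(x, q=7, N=3):
    """(-1)^(n-1) times the sum over x1*x2 = x in F_q of
    rho1((1-x1)^((q-1)/N)) * rho2((1-x2)^((q-1)/N)), for n = 2, where
    rho1, rho2 are the two nontrivial characters of mu_N = F_q^*[N],
    extended by rho(0) = 0."""
    e = (q - 1) // N
    g = pow(3, e, q)          # 3 generates F_7^*, so g = 3^e generates mu_N
    dlog = {pow(g, a, q): a for a in range(N)}
    omega = cmath.exp(2j * cmath.pi / N)
    def rho(i, u):            # character g^a -> omega^(i*a), with rho(i, 0) = 0
        return 0 if u == 0 else omega ** (i * dlog[u])
    total = 0
    for x1 in range(1, q):
        x2 = x * pow(x1, q - 2, q) % q        # x2 = x / x1 in F_q^*
        total += rho(1, pow(1 - x1, e, q)) * rho(2, pow(1 - x2, e, q))
    return -total             # (-1)^(n-1) = -1 for n = 2

traces = {x: frobenius_trace(x) for x in range(2, 7)}  # points of T_1(F_7)
```

For instance the trace at $x = 2$ evaluates to $-1$, comfortably within the bound $2\sqrt{7} \approx 5.29$; the labelling of $\rho_1, \rho_2$ does not affect the result, since swapping them permutes the terms of the sum.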
## Basic properties and good ordinary points Associated to specializations of ${\mathcal E}_\lambda$ are compatible systems of Galois representations. In this section, we establish some of their basic properties. Most importantly, we prove (in Proposition [Proposition 1](#ordinarypoints){reference-type="ref" reference="ordinarypoints"}) the existence of many specializations which have crystalline ordinary reduction. (In [@Qian], Qian proves that specializations sufficiently close to $t = \infty$ are semistable ordinary, but that is not sufficient for our purposes, where we need to work with crystalline representations.) **Theorem 1**. 1. *${\mathcal E}_\lambda$ is a lisse $M_\lambda$-sheaf on $T_1[1/l]$ of rank $n$. The sheaf ${\mathcal E}_\lambda \otimes_{M_\lambda} \overline{M}_\lambda$ is geometrically irreducible. Moreover, ${\mathcal E}_\lambda$ is pure of weight $n-1$ and there is an isomorphism $\det {\mathcal E}_\lambda \cong M_\lambda(n(1-n)/2)$.* 2. *Let $k$ be an $R_0[1/l]$-algebra which is a finite field of cardinality $q$, and let $x \in T_1(k)$. Then we have* *$$\label{eqn: trace on E} \mathrm{tr}( \mathrm{Frob}_k \mid {\mathcal E}_{\lambda, \overline{x}} ) = (-1)^{n-1} \sum_{\substack{x_1, \dots, x_n \in k \\ \prod_{i=1}^n x_i = x}} \prod_{i=1}^n \rho_i( ( 1-x_i )^{(q-1)/N} )$$ where we identify $\mu_N = k^\times[N]$ and extend $\rho_i$ by $\rho_i(0) = 0$.* 3. *There exists a (unique) continuous character $$\Psi_\lambda : \pi_1(\mathop{\mathrm{Spec}}R_0[1/l]) \to {\mathcal O}_{M_\lambda}^\times$$ with the following property: let $T_0' = T_0[1/t]$, let $j' : T_0' \to T_0$ be the natural open immersion, and let $g : T_0' \to T_1$ be the map induced by $t \mapsto t^{-N}$.
Then there is an isomorphism of $M_\lambda$-sheaves on $T_0'[1/l]$:* *$$\label{eqn: V to E iso} (j'[1/l])^\ast {\mathcal V}_\lambda \otimes_{{\mathcal O}_{M_\lambda}} M_\lambda \cong g^\ast {\mathcal E}_\lambda \otimes_{M_\lambda} M_\lambda(\Psi_\lambda).$$* *Proof.* The construction and properties of ${\mathcal E}_\lambda$ are summarized in [@KatzDwork] (where it is the sheaf denoted $\mathcal{H}^{can}( \mathbf{1} (n \text{ times}), \{ \rho_i \})$) and given in detail in [@KatzExponential Ch. 8]. See also [@DrinfeldKedlaya §A.1.6]. The computation of the determinant follows from [@KatzExponential Theorem 8.12.2(1a)], noting that $\prod_{i=1}^n \rho_i = \mathbf{1}$. Property (2) follows from the definition. The existence of $\Psi_\lambda$ as in (3) follows from [@KatzDwork Theorem 5.3] (we have also used the relation $[-1]^\ast \mathcal{H}^{can}( \mathbf{1} (n \text{ times}), \{ \rho_i \}) \cong \mathcal{H}^{can}( \{ \rho^{-1}_i \}, \mathbf{1} (n \text{ times}))$). The uniqueness follows from geometric irreducibility and Schur's lemma. ◻ **Lemma 1**. *There exists a Hecke character $\Psi : {\mathbf Q}(\zeta_N)^\times \backslash \mathbf{A}_{{\mathbf Q}(\zeta_N)}^\times \to {\mathbf C}^\times$ of type $A_0$, unramified away from $N$, and with field of definition contained inside $M$, such that $\Psi_\lambda$ is associated to $\Psi$. In other words, $\Psi_\lambda$ is 'independent of $\lambda$'. Moreover, we have $\Psi c(\Psi) = | \cdot |^{N-n}$.* *Proof.* We may argue as in [@KatzDwork Question 5.5] to see that almost all Frobenius traces of $\Psi_\lambda$ lie in $M$, and are independent of $\lambda$. The existence of the character $\Psi$, of type $A_0$, follows from the main result of [@henniart]. There are $c$-linear isomorphisms ${\mathcal V}_{c(\lambda)} \cong {\mathcal V}_\lambda^\vee(1-N)$ and ${\mathcal E}_{c(\lambda)} \cong {\mathcal E}_\lambda^\vee(1-n)$, so the final part is again a consequence of Schur's lemma.
◻ For any place $\lambda$ of $M$ of characteristic $l$, we define ${\mathcal W}_\lambda = {\mathcal V}_\lambda \otimes_{{\mathcal O}_{M_\lambda}} {\mathcal O}_{M_\lambda}(\Psi_\lambda^{-1})$, a lisse sheaf of finite free ${\mathcal O}_{M_\lambda}$-modules on $T_0[1/l]$. Thus ${\mathcal W}_\lambda \otimes_{{\mathcal O}_{M_\lambda}} M_\lambda$ is a lisse $M_\lambda$-sheaf of rank $n$ which is pure of weight $n-1$, geometrically irreducible, and of determinant $M_\lambda(n(1-n)/2)$. If $k$ is a perfect field which is an $R_0[1/l]$-algebra, and $t \in T_0(k)$, then we write $W_{t, \lambda} = {\mathcal W}_{\lambda, \overline{t}}$ for the stalk at a geometric point lying above $t$; it is an ${\mathcal O}_{M_\lambda}[G_k]$-module, finite free as an ${\mathcal O}_{M_\lambda}$-module. The local systems ${\mathcal W}_\lambda$ are the ones we will use to build the moduli spaces appearing in the Moret-Bailly argument in §[6.2](#subsec_proof_of_pot_aut){reference-type="ref" reference="subsec_proof_of_pot_aut"}. In particular, let us write $\overline{{\mathcal W}}_\lambda = {\mathcal W}_\lambda \otimes_{{\mathcal O}_{M_\lambda}} k(\lambda)$ and define $\overline{W}_{t, \lambda}$ similarly. **Proposition 1**. *Let $F / {\mathbf Q}(\zeta_N)$ be a number field.* 1. *Let $v$ be a finite place of $F$ of characteristic $l$, and let $\lambda$ be a place of $M$ of characteristic not equal to $l$. If $l \nmid N$ and $t \in T_0({\mathcal O}_{F_v})$, then $W_{t, \lambda}$ is unramified, and the polynomial $Q_v(X) = \det(X - \mathrm{Frob}_v \mid W_{t, \lambda})$ lies in ${\mathcal O}_M[X]$ and is independent of $\lambda$.* 2. *Let $v$ be a finite place of $F$ of characteristic $l$, and let $\lambda$ be a place of $M$ of the same characteristic. Let $t \in T_0(F_v)$. Then $W_{t, \lambda}$ is de Rham and for any embedding $\tau : F_v \to \overline{M}_\lambda$, we have $\mathrm{HT}_\tau(W_{t, \lambda}) = \{ 0, 1, \dots, n-1 \}$.
If $l \nmid N$ and $t \in T_0({\mathcal O}_{F_v})$, then $W_{t, \lambda}$ is crystalline and the characteristic polynomial of $\mathrm{Frob}_v$ on $\mathrm{WD}(W_{t, \lambda})$ equals $Q_v(X)$. In particular, $W_{t, \lambda}$ is ordinary if and only if the roots of $Q_v(X)$ in $\overline{M}_\lambda$ have $l$-adic valuations $0, [k(v) : {\mathbf F}_l], \dots, (n-1) [k(v) : {\mathbf F}_l]$.* 3. *Let $t \in T_0(F)$, and let $S$ be the set of finite places $v$ of $F$ such that either $v | N$, or $v \nmid N$ and $t \not\in T_0({\mathcal O}_{F_v}) \subset T_0(F_v)$. Then $$( M, S, \{ Q_v(X) \}_{v \not \in S}, \{ W_{t, \lambda} \}_\lambda, \{ \{0, 1, \dots, n-1\}\}_\tau )$$ is a weakly compatible system of $l$-adic representations of $G_F$ over $M$ of rank $n$, pure of weight $n-1$, in the sense of [@BLGGT §5.1].* *Proof.* For the first part, we note that $Y_t$ is smooth and proper over ${\mathcal O}_{F_v}$, so by smooth proper base change $H^\ast_{\text{\'et}}(Y_{t, \overline{F}_v}, {\mathcal O}_{M_\lambda})$ is unramified and there is an isomorphism $$H^{N-2}_{\text{\'et}}(Y_{t, \overline{F}_v}, {\mathcal O}_{M_\lambda}) \cong H^{N-2}_{\text{\'et}}(Y_{\overline{t}, \overline{k(v)}}, {\mathcal O}_{M_\lambda}),$$ where $\overline{t}$ denotes the image of $t$ in $T_0(k(v))$. [@MR332791 Theorem 2(2)] shows that for any $h \in H$ the characteristic polynomial of $h \cdot \mathrm{Frob}_v$ on this group has coefficients in ${\mathcal O}_M$ and is independent of the choice of $\lambda \nmid l$, and this implies that $Q_v(X)$ also has coefficients in ${\mathcal O}_M$ and is independent of $\lambda$. For the second part, note that $W_{t, \lambda}$ is de Rham because it is a subquotient of $H_{\text{\'et}}^{N-2}(Y_t, M_\lambda) \otimes M_\lambda(\Psi_\lambda^{-1})$, which is de Rham.
To compute the Hodge--Tate weights, we use [@QianPotential Lemma 3.10], which implies that there is an integer $M_\tau$ such that $\mathrm{HT}_\tau(V_{t, \lambda}) = \{ M_\tau, M_\tau + 1, \dots, M_\tau + (n-1) \}$. Since $W_{t, \lambda}$ is a twist of $V_{t, \lambda}$, there is an integer $M'_\tau$ such that $\mathrm{HT}_\tau(W_{t, \lambda}) = \{ M'_\tau, M'_\tau + 1, \dots, M'_\tau + (n-1) \}$. Looking at determinants shows that $n M'_\tau + n(n-1)/2 = n(n-1)/2$, hence $M'_\tau = 0$. If further $l \nmid N$ and $t \in T_0({\mathcal O}_{F_v})$ then again $Y_t$ is smooth and proper, so $H_{\text{\'et}}^{N-2}(Y_t, M_\lambda)$ is crystalline, and also $M_\lambda(\Psi_\lambda^{-1})$ is crystalline, hence $W_{t, \lambda}$ is crystalline. The crystalline comparison theorem implies that there is an isomorphism $$D_{cris}( H_{\text{\'et}}^{N-2}(Y_{t, \overline{F}_v}, {\mathbf Q}_l) ) \cong H^{N-2}_{cris}(Y_{\overline{t}} / F_{v, 0}),$$ respecting the action of Frobenius $\phi_v$ and $H_0$ on each side. Choosing an embedding $\sigma_0 : F_{v, 0} \to \overline{M}_\lambda$, there is an isomorphism $$D_{cris}( H_{\text{\'et}}^{N-2}(Y_{t, \overline{F}_v}, {\mathbf Q}_l) ) \otimes_{F_{v, 0}, \sigma_0} \overline{M}_\lambda \cong H^{N-2}_{cris}(Y_{\overline{t}} / F_{v, 0}) \otimes_{F_{v, 0}, \sigma_0} \overline{M}_\lambda,$$ equivariant for the $\overline{M}_\lambda$-linear action of $\phi_v^{[k(v) : {\mathbf F}_l]}$. By definition, $\mathrm{WD}(W_{t, \lambda})$ is the unramified representation of $W_{F_v}$ over $\overline{M}_\lambda$ afforded by the $\Psi_\lambda^{-1}$-twist of the $\underline{\chi}|_{H_0}$-isotypic subspace of the left-hand side. We therefore need to check that the characteristic polynomial of $\phi_v^{[k(v) : {\mathbf F}_l]}$ on the $\Psi_\lambda^{-1}$-twist of the $\underline{\chi}|_{H_0}$-isotypic subspace of the right-hand side equals $Q_v(X)$. This follows again from [@MR332791 Theorem 2(2)] (applicable here by the main result of [@MR904940]). 
The characterization of ordinary representations follows from [@ger Lemma 2.32]. The third part follows from the first two parts and the definition of a weakly compatible system. ◻ We now apply the results of Drinfeld--Kedlaya [@DrinfeldKedlaya] to deduce that the $W_{t,\lambda}$ are ordinary for generic choices of $t$. **Proposition 1**. *Let $v$ be a place of ${\mathbf Q}(\zeta_N)$ of characteristic $l \nmid N$, and let $\lambda$ be a place of $M$ of the same characteristic. Then there exists a non-empty Zariski open subset $U(v; \lambda) \subset T_{0, k(v)}$ with the following property: for any finite extension $F_w / {\mathbf Q}(\zeta_N)_v$ and any $t \in T_0({\mathcal O}_{F_w})$ such that $\overline{t} = t \text{ mod }(\varpi_w) \in U(v; \lambda)(k(w))$, $W_{t, \lambda}$ is a crystalline ordinary representation of $G_{F_w}$.* *Proof.* Fix an auxiliary place $\mu$ of $M$ of characteristic different from $l$. If $k / k(v)$ is a finite extension of cardinality $q$ and $x \in T_0(k)$, we write $Q_x(X) \in {\mathcal O}_M[X]$ for the characteristic polynomial of $\mathrm{Frob}_x$ on $W_{x, \mu}$. Let $s_1(x) \geq s_2(x) \geq \dots \geq s_n(x)$ denote $[k : {\mathbf F}_l]^{-1}$ times the $l$-adic valuations of the roots of $Q_x(X)$ in $\overline{M}_\lambda$. Observe that these normalized slopes $s_i(x)$ do not change if $k$ is replaced by a larger extension (leaving the point $x$ unchanged). By Proposition [Proposition 1](#prop_independence_of_l){reference-type="ref" reference="prop_independence_of_l"}, it suffices to show the existence of a non-empty Zariski open subset $U \subset T_{0, k(v)}$ such that if $k / k(v)$ is a finite extension and $x \in U(k)$, then $s_i(x) = n-i$ for each $i = 1, \dots, n$. By [@DrinfeldKedlaya Theorem 1.3.3], we can find a non-empty Zariski open subset $V \subset T_{0, k(v)}$ such that the numbers $s_i(x)$ are constant for $x \in V(k)$, and moreover such that $s_i(x) \leq s_{i+1}(x) + 1$ for each $i = 1, \dots, n-1$.
To complete the proof, it suffices to show that there is a non-empty Zariski open subset $U \subset V$ such that if $x \in U(k)$, then $s_n(x) = 0$. Indeed, consideration of determinants shows that $s_1(x) + s_2(x) + \dots + s_n(x) = n(n-1)/2$ for all $x \in T_0(k)$. If $x \in V(k)$ and $s_n(x) = 0$ then $s_i(x) \leq n-i$ for each $i = 1, \dots, n$, hence $s_1(x) + \dots + s_n(x) \leq n(n-1)/2$, with equality if and only if $s_i(x) = n-i$ for each $i = 1, \dots, n$. Finally, by [\[eqn: V to E iso\]](#eqn: V to E iso){reference-type="eqref" reference="eqn: V to E iso"}, it is enough to show the analogous statement for the pullback of ${\mathcal E}_\mu$ to $T_{1, k(v)}$. We will prove this by showing that there is a non-empty Zariski open subset $U_1 \subset T_{1, k(v)}$ such that if $x \in U_1(k)$, then $\mathrm{tr}( \mathrm{Frob}_x \mid {\mathcal E}_{\mu, \overline{x}}) \not\equiv 0 \text{ mod }\lambda$, or in other words (using [\[eqn: trace on E\]](#eqn: trace on E){reference-type="eqref" reference="eqn: trace on E"}) that $$\sum_{\substack{x_1, \dots, x_n \in k \\ \prod_{i=1}^n x_i = x}} \prod_{i=1}^n \rho_i( ( 1-x_i )^{(q-1)/N} ) \not\equiv 0 \text{ mod }\lambda.$$ We will produce this set $U_1$ by a computation following [@DrinfeldKedlaya §A.3]. Let $\tau : {\mathbf Q}(\zeta_N) \to M$ be an isomorphism identifying the place $v$ with the place $\lambda$. The character $k(v)^\times \to k(v)^\times$, $z \mapsto \tau^{-1} \rho_i( z^{(q_v-1)/N} )$, is given by the formula $z \mapsto z^{c_i}$ for some integer $c_i$ with $1 \leq c_i \leq q_v-2$. If $q = q_v^d$ then we find that the pre-image under $\tau$ of the left-hand side of the displayed equation is given by $$\sum_{\substack{x_1, \dots, x_n \in k \\ \prod_{i=1}^n x_i = x}} \prod_{i=1}^n \mathbf{N}_{k / k(v)} (1 - x_i)^{c_i} = \sum_{\substack{x_1, \dots, x_n \in k \\ \prod_{i=1}^n x_i = x}} \prod_{i=1}^n (1 - x_i)^{\widetilde{c}_i},$$ where $\widetilde{c}_i = c_i \cdot (q-1) / (q_v-1) < q - 1$.
This we can in turn compute as $$\begin{gathered} \sum_{\substack{x_1, \dots, x_n \in k \\ \prod_{i=1}^n x_i = x}} \sum_{\substack{ 0 \leq r_i \leq \widetilde{c}_i \\ i = 1, \dots, n}} \prod_{i=1}^n \binom{\widetilde{c}_i}{r_i} (-x_i)^{r_i} \\ = \sum_{\substack{ 0 \leq r_i \leq \widetilde{c}_i \\ i = 1, \dots, n}} (-1)^{r_1 + \dots + r_n} \prod_{i=1}^n \binom{\widetilde{c}_i}{r_i} \sum_{x_1, \dots, x_{n-1} \in k^\times} \left( \prod_{i=1}^{n-1} x_i^{r_i - r_n} \right) x^{r_n}. \end{gathered}$$ We now use that if $r \in {\mathbf Z}$ and $r \not\equiv 0 \text{ mod }(q-1)$, then $\sum_{z \in k^\times} z^r = 0$. This implies that the inner sum vanishes except if $r_i = r_n$ for each $i = 1, \dots, n-1$. We obtain (noting that there are only finitely many non-zero terms in the sum on the right-hand side): $$\sum_{0 \leq r \leq \widetilde{c}_n} (-1)^{nr} \prod_{i=1}^n \binom{\widetilde{c}_i}{r} (-1)^{n-1} x^r = (-1)^{n-1} \sum_{r \geq 0} (-1)^{nr} \prod_{i=1}^n \binom{\widetilde{c}_i}{r} x^r.$$ Define a polynomial $u(T) \in k(v)[T]$: $$u(T) = \sum_{r \geq 0} (-1)^{nr} \prod_{i=1}^n \binom{c_i}{r} T^r.$$ We claim that there is an equality $$\sum_{r \geq 0} (-1)^{nr} \prod_{i=1}^n \binom{\widetilde{c}_i}{r} x^r = \mathbf{N}_{k / k(v)}( u(x) ).$$ This will complete the proof: indeed, it will imply that we can take $U_1 = T_{1, k(v)}[1/u]$ (noting that $u$ is non-zero, since its constant term is 1, and so $U_1$ is indeed non-empty). 
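The claimed identity rests on a Lucas-type congruence for binomial coefficients across base-$q_v$ digits, which is recorded in the proof that follows. A quick numerical sanity check in the simplest case $q_v = l$ (our own illustration, not part of the argument):

```python
from math import comb

# Lucas-type check: take q_v = l prime (so d = 2, q = l^2).  For
# c in [1, q_v - 2] set c~ = c * (q-1)/(q_v-1) = c * (l+1); then for the
# base-q_v digits (r_0, r_1) of r one has
#     binom(c~, r) = binom(c, r_0) * binom(c, r_1)   (mod l).
l = 5
for c in range(1, l - 1):
    c_tilde = c * (l + 1)            # = c * (q-1)/(q_v-1)
    for r0 in range(l):
        for r1 in range(l):
            r = r0 + r1 * l
            assert comb(c_tilde, r) % l == (comb(c, r0) * comb(c, r1)) % l
print("congruence verified for l =", l)
```

Here `math.comb` conveniently returns $0$ when the lower index exceeds the upper one, matching the vanishing of the corresponding terms in the expansion.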
To show this, we expand $$\begin{gathered} \mathbf{N}_{k / k(v)}( u(x) ) = \left( \sum_{r \geq 0} (-1)^{nr} \prod_{i=1}^n \binom{c_i}{r} x^r \right)^{1 + q_v + \dots + q_v^{d-1}} \\ = \sum_{r_0, \dots, r_{d-1} \geq 0} (-1)^{n(r_0 + \dots + r_{d-1} q_v^{d-1})} \prod_{i=1}^n \prod_{j=0}^{d-1} \binom{c_i}{r_j} x^{r_0 + r_1 q_v + \dots + r_{d-1} q_v^{d-1}}.\end{gathered}$$ We now observe that a given tuple $(r_0, \dots, r_{d-1})$ can contribute a non-zero summand only if $r_j < q_v-1$ for each $j$, so each value of $r = r_0 + r_1 q_v + \dots + r_{d-1} q_v^{d-1}$ is represented at most once. Furthermore, since $(1+X)^{\widetilde{c}_i} = \prod_{j=0}^{d-1} (1+X^{q_v^j})^{c_i}$ in ${\mathbf F}_l[X]$, we have in this case a congruence $$\binom{\widetilde{c}_i}{r} \equiv \prod_{j=0}^{d-1} \binom{c_i}{r_j} \text{ mod }l,$$ showing that $\mathbf{N}_{k / k(v)}( u(x) )$ indeed equals $$\sum_{r \geq 0} (-1)^{nr} \prod_{i=1}^n \binom{\widetilde{c}_i}{r} x^r,$$ as desired. ◻ ## Basics on unitary groups over finite fields In order to discuss the possible (residual) images of the Galois representations associated to our Dwork family, we recall here some basic facts about unitary groups over finite fields which will be used in the sequel. (Nothing here is original but we include it for convenience of exposition.) Let $l/k$ be a quadratic extension of finite fields, and let $M \in M_n(l)$. Let $M^t$ denote the transpose of $M$ and $M^{c}$ the conjugate of $M$ by the generator of $\mathop{\mathrm{Gal}}(l/k)$. We define the adjoint $M^{\dagger}$ of $M$ to be $M^{\dagger}:= (M^{c})^t = (M^t)^{c}$. Note that $(AB)^{\dagger} = B^{\dagger} A^{\dagger}$. We recall: **Definition 1**. The unitary group $\mathrm{GU}_n(l)$ is the subgroup of matrices $M \in \mathop{\mathrm{GL}}_n(l)$ satisfying $M^{\dagger} M = \lambda \cdot 1_n$ for some $\lambda \in k^{\times}$.
Let $\nu$ be the multiplier character $\nu: \mathrm{GU}_n(l) \rightarrow k^{\times}$ sending $M$ to the scalar $\lambda$ such that $M^{\dagger} M = \lambda \cdot 1_n$, and let $\mathop{\mathrm{SU}}_n(l)$ denote the kernel of $\nu$. If $V$ is a representation of a finite group $G$ over $l$, let $V^{c}:=V\otimes_{l,c}l$ denote the representation obtained by conjugating the coefficients by the generator of $\mathop{\mathrm{Gal}}(l/k)$. If $x \in V$, we set $x^c := x\otimes 1 \in V^c$. If $x \in l$, we write either $cx$ or $x^c$ for the conjugate of $x$ by $c \in \mathop{\mathrm{Gal}}(l/k)$. **Definition 1**. If $l/k$ is a quadratic extension of finite fields and $\mathop{\mathrm{Gal}}(l/k) \simeq \langle c \rangle$, then a Hermitian form on a vector space $V$ over $l$ is an $l$-valued pairing on $V$ which is $k$-bilinear, satisfies $\langle ax,y \rangle = a \langle x,y\rangle$ and $\langle x,ay \rangle = ca \langle x,y\rangle$ for $a \in l$, and moreover satisfies $\langle y,x \rangle = \langle x,y \rangle^c = c \langle x,y \rangle$. *Remark 1*. If we scale the pairing by an element $\eta \in l$ such that $c \eta = - \eta$, then all of the conditions remain true except that now $\langle y,x \rangle = - c \langle x,y \rangle$. The basic fact concerning unitary groups over finite fields is that there is essentially only one non-degenerate Hermitian form. In practice, it will be useful to formulate this in the following lemma. **Lemma 1**. *Let $V$ be a vector space over $l$ with an absolutely irreducible representation of a group $G$. Suppose that there is an isomorphism $$\label{twistselfdual} V^{\vee} \simeq V^{c} \otimes \chi^{-1}$$ for some multiplier character $\chi: G \rightarrow k^{\times}$.
Then, after a suitable choice of basis for $V$, the corresponding map $G \rightarrow \mathop{\mathrm{GL}}_n(l)$ has image in $\mathrm{GU}_n(l)$.* *Proof.* An isomorphism ([\[twistselfdual\]](#twistselfdual){reference-type="ref" reference="twistselfdual"}) is equivalent to the existence of a $G$-equivariant non-degenerate bilinear pairing: $$\psi: V \times V^{c} \rightarrow \chi,$$ where by abuse of notation we consider $\chi$ as a $1$-dimensional vector space over $l$. By Schur's Lemma, the isomorphism in ([\[twistselfdual\]](#twistselfdual){reference-type="ref" reference="twistselfdual"}) is unique up to scaling and thus $\psi$ is also unique up to scaling. If we define $\psi'$ to be the map: $$\psi'(x,y^{c}) = \psi(y,x^{c})^{c},$$ then $\psi'$ is also a $G$-equivariant bilinear map from $V \times V^{c}$ to $\chi$, and hence $\psi' = \lambda \psi$ for some $\lambda \in l^\times$, that is, $$\psi(y,x^{c})^{c} = \psi(x,y^{c}) \cdot \lambda.$$ Applying this twice, we get $\psi(x,y^{c}) = \lambda \cdot \lambda^{c}\cdot \psi(x,y^{c})$, and thus $N_{l/k}(\lambda) = 1$. By Hilbert's Theorem $90$, it follows that $\lambda = c \eta/\eta$ for some $\eta \in l^{\times}$. Replacing $\psi(x,y)$ by $\psi(x,y)/\eta$, we deduce that $\psi(x,y^{c}) = \psi(y,x^{c})^c$. It follows that $$\langle x,y \rangle := \psi(x,y^{c})$$ defines a non-degenerate Hermitian form on $V$ in the sense of Definition [Definition 1](#hermetian){reference-type="ref" reference="hermetian"}. Let $A$ denote the matrix associated to this Hermitian form, so that $A^{\dagger} = A$. Then $G \subset \mathrm{GU}(V,A)$, that is, matrices $M$ such that $M^{\dagger} A M = \lambda \cdot A$ for some $\lambda \in k^{\times}$. But now we use the fact that there is a unique equivalence class of non-degenerate Hermitian forms associated to $l/k$, namely, they are all equivalent to $A = I$, and so $G \subset \mathrm{GU}_n(l)$. (See, for example, [@MR656449 §4].)
◻ ## Moduli spaces and monodromy {#subsec_def_of_dwork_moduli_space_with_level_structure} We shall now discuss a number of moduli spaces related to finding Dwork motives with fixed residual representations, and compute the corresponding monodromy groups. Since it will be important to find such motives whose $p$-adic representations are related to symmetric powers of "niveau two" representations, for our applications we will have to take $p \equiv -1 \bmod N$, and thus be in cases excluded by [@QianPotential]. **Definition 1**. Let $l_1, l_2 \nmid 2N$ be distinct primes and let $\lambda_1, \lambda_2$ be places of $M$ of these characteristics. Suppose we are given the following data: 1. [\[arepresentation\]]{#arepresentation label="arepresentation"} A field $F / {\mathbf Q}(\zeta_N)$ and for each $i = 1, 2$ an étale sheaf $\overline{U}_{\lambda_i}$ on $\mathop{\mathrm{Spec}}F$ of $k(\lambda_i)$-modules of rank $n$. 2. [\[determinant\]]{#determinant label="determinant"} For each $i = 1, 2$ an isomorphism $\eta_i : \wedge^n \overline{{\mathcal W}}_{\lambda_i} \to \wedge^n \overline{U}_{\lambda_i, T_{0, F}}$ of sheaves of $k(\lambda_i)$-modules. 3. [\[another\]]{#another label="another"} For each $i = 1, 2$, if $-1 \text{ mod } N \in \langle l_i \rangle \leq ({\mathbf Z}/ N {\mathbf Z})^\times$ (equivalently: if $c \in \mathop{\mathrm{Gal}}(k(\lambda_i) / {\mathbf F}_{l_i})$), then we fix in addition a perfect ${\mathbf F}_{l_i}$-bilinear morphism $\langle \cdot, \cdot \rangle_{\overline{U}_{\lambda_i}} : \overline{U}_{\lambda_i} \times \overline{U}_{\lambda_i} \to k(\lambda_i)(1-n)$ satisfying the following conditions: 1. For all $x, y \in \overline{U}_{\lambda_i}$, $a \in k(\lambda_i)$, we have $\langle a x, y \rangle = a \langle x, y \rangle$, $\langle x, a y \rangle = c(a) \langle x, y \rangle$. 2. For all $x, y \in \overline{U}_{\lambda_i}$, we have $\langle y, x \rangle = - c \langle x, y \rangle$.
That is, the pairing is Hermitian in the sense of Definition [Definition 1](#hermetian){reference-type="ref" reference="hermetian"} up to a scalar $\eta$ with $c \eta = - \eta$; see Remark [Remark 1](#minushermetian){reference-type="ref" reference="minushermetian"}. Note that if $-1 \text{ mod } N \in \langle l_i \rangle$ then there is also a perfect ${\mathbf F}_{l_i}$-bilinear morphism $\langle \cdot, \cdot \rangle_{\overline{{\mathcal W}}_{\lambda_i}} : \overline{{\mathcal W}}_{\lambda_i} \times \overline{{\mathcal W}}_{\lambda_i} \to k(\lambda_i)(1-n)$ satisfying the same two conditions, induced by Poincaré duality on the hypersurface $Y \to T_0$. If $k / F$ is a field extension and $t \in T_0(k)$, then we write $\langle \cdot, \cdot \rangle_{\overline{W}_{t, \lambda_i}}$ for the induced perfect pairing on $\overline{W}_{t, \lambda_i}$. Given such data, let us write $\mathcal{F}(\{ \overline{U}_{\lambda_i} \})$ for the functor which sends a scheme $S \to T_{0, F}$ to the set of pairs of isomorphisms $\phi_i : \overline{{\mathcal W}}_{\lambda_i, S} \to \overline{U}_{\lambda_i, S}$ $( i = 1, 2)$ satisfying the following conditions: - For each $i = 1, 2$, $\wedge^n \phi_i = \eta_i$. - For each $i = 1, 2$, if $-1 \text{ mod } N \in \langle l_i \rangle$, then $\phi_i$ intertwines $\langle \cdot, \cdot \rangle_{\overline{{\mathcal W}}_{\lambda_i}, S}$ and $\langle \cdot, \cdot \rangle_{\overline{U}_{\lambda_i}, S}$. Then $\mathcal{F}(\{ \overline{U}_{\lambda_i} \})$ is represented by a finite étale $T_{0, F}$-scheme $T(\{ \overline{U}_{\lambda_i} \})$. We need the following variant of [@QianPotential Proposition 3.8]. **Proposition 1**. *With notation as above, $T(\{ \overline{U}_{\lambda_i} \})$ is a geometrically irreducible smooth $F$-scheme.* *Proof.* We need to show that the geometric monodromy group $\pi_1(T_{0, \overline{{\mathbf Q}}})$ acts transitively on the fibres of $T(\{ \overline{U}_{\lambda_i} \})$.
The existence of the pairing $\langle \cdot, \cdot \rangle_{\overline{{\mathcal W}}_{\lambda_i}}$ shows that if $-1 \text{ mod } N \in \langle l_i \rangle$, then the image of the geometric monodromy group acting on the geometric generic fibre of $\overline{{\mathcal W}}_{\lambda_i}$ may be identified with a subgroup of $\mathrm{SU}_n(k(\lambda_i))$, and otherwise it may be identified with a subgroup of $\mathrm{SL}_n(k(\lambda_i))$. We claim that it is enough to show that equality holds in each of these cases. Indeed, let $H_i$ denote the image at each prime $l_i$ (which would then be either $\mathrm{SU}_n(k(\lambda_i))$ or $\mathop{\mathrm{SL}}_n(k(\lambda_i))$). Since $l_1$ and $l_2$ are odd and $n > 2$, it follows that the $H_i$ are perfect and their associated projective groups (i.e. the $H_i$ modulo their subgroups of scalar matrices) are simple (Lemma [Lemma 1](#steinberg){reference-type="ref" reference="steinberg"}), and moreover $H_1 \not\cong H_2$ (also by Lemma [Lemma 1](#steinberg){reference-type="ref" reference="steinberg"}). Goursat's lemma implies that the image of geometric monodromy acting on $\overline{{\mathcal W}}_{\lambda_1} \times \overline{{\mathcal W}}_{\lambda_2}$ must be $H_1 \times H_2$, completing the proof. If $-1 \text{ mod } N \not\in \langle l_i \rangle$, then the required statement follows from [@QianPotential Lemma 3.7]. Now suppose that $-1 \text{ mod } N \in \langle l_i \rangle$. In this case, we can follow the proof of [@QianPotential Lemma 3.7] (now allowing the case $-1 \text{ mod } N \in \langle l_i \rangle$, which is excluded there in order to rule out the possibility that the image is a special unitary group) to conclude that $H_i$ is isomorphic to a subgroup of $\mathrm{SU}_n(k(\lambda_i))$ which maps to $\mathrm{SU}_n(k(\lambda_i))$ or $\mathop{\mathrm{SL}}_n(k(\lambda_i))$ with image a normal subgroup of index dividing $N$.
The only possibility is that this map is in fact an isomorphism and that $H_i \cong \mathrm{SU}_n(k(\lambda_i))$, as required. ◻ *Remark 1*. If $l \equiv 1 \bmod N$, so that $l$ splits completely in ${\mathbf Q}(\zeta_N)$, and $\lambda | l$, then the data of an étale sheaf $\overline{U}_{\lambda}$ on $\mathop{\mathrm{Spec}}F$ satisfying conditions ([\[arepresentation\]](#arepresentation){reference-type="ref" reference="arepresentation"}), ([\[determinant\]](#determinant){reference-type="ref" reference="determinant"}), and ([\[another\]](#another){reference-type="ref" reference="another"}) of Definition [Definition 1](#pqmoduli){reference-type="ref" reference="pqmoduli"} is nothing more than a representation $$\overline{r}_l: G_F \rightarrow \mathop{\mathrm{GL}}_n({\mathbf F}_l)$$ with determinant $\overline{\varepsilon}^{-n(n-1)/2}$. If $l \equiv -1 \bmod N$, however, then (in light of Lemma [Lemma 1](#selfdual){reference-type="ref" reference="selfdual"}) these conditions correspond to a representation $$\overline{r}_l: G_F \rightarrow \mathrm{GU}_n({\mathbf F}_{l^2})$$ with multiplier character $\overline{\varepsilon}^{1-n}$ (and determinant $\overline{\varepsilon}^{-n(n-1)/2}$). In practice, we shall only consider the moduli spaces $T$ with primes $l_1,l_2$ that are congruent to $\pm 1 \bmod N$, in which case we sometimes write $T(\{ \overline{U}_{\lambda_i} \})$ as $T(\overline{r}_{\lambda_1},\overline{r}_{\lambda_2})$, replacing the étale sheaves with the corresponding representations, which are always assumed to satisfy conditions ([\[arepresentation\]](#arepresentation){reference-type="ref" reference="arepresentation"}), ([\[determinant\]](#determinant){reference-type="ref" reference="determinant"}), and ([\[another\]](#another){reference-type="ref" reference="another"}) of Definition [Definition 1](#pqmoduli){reference-type="ref" reference="pqmoduli"}.
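The multiplier and determinant in the unitary case are compatible: taking determinants in $M^{\dagger} M = \lambda \cdot 1_n$ gives $\det(M)^c \cdot \det(M) = \lambda^n$ for any $M \in \mathrm{GU}_n(l)$, and indeed $c$ fixes $\overline{\varepsilon}$, so $\overline{\varepsilon}^{-n(n-1)/2} \cdot c(\overline{\varepsilon}^{-n(n-1)/2}) = \overline{\varepsilon}^{-n(n-1)} = (\overline{\varepsilon}^{1-n})^n$. The following sketch (our own illustration; all helper names are ad hoc) verifies the matrix identity for an explicit element of $\mathrm{GU}_3({\mathbf F}_9)$, built as a scalar diagonal of norm $-1$ times a permutation matrix:

```python
from itertools import permutations

# F_9 = F_3[i] with i^2 = -1; an element a + b*i is encoded as the pair (a, b).
ZERO, ONE = (0, 0), (1, 0)
def add(x, y): return ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)
def mul(x, y): return ((x[0]*y[0] - x[1]*y[1]) % 3, (x[0]*y[1] + x[1]*y[0]) % 3)
def conj(x):   return (x[0], (-x[1]) % 3)            # Frobenius x -> x^3

def mat_mul(A, B):
    n = len(A)
    C = [[ZERO] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] = add(C[i][j], mul(A[i][k], B[k][j]))
    return C

def adjoint(M):                                      # M^dagger = conjugate transpose
    n = len(M)
    return [[conj(M[j][i]) for j in range(n)] for i in range(n)]

def det(M):                                          # Leibniz expansion (n is small)
    n, total = len(M), ZERO
    for p in permutations(range(n)):
        inv = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        term = ONE if inv % 2 == 0 else (2, 0)       # sign of p, with -1 = 2 in F_3
        for i in range(n):
            term = mul(term, M[i][p[i]])
        total = add(total, term)
    return total

n, lam = 3, (2, 0)                                   # multiplier lambda = -1
a = (1, 1)                                           # a * conj(a) = -1
D = [[a if i == j else ZERO for j in range(n)] for i in range(n)]
P = [[ONE if j == (i + 1) % n else ZERO for j in range(n)] for i in range(n)]
M = mat_mul(D, P)                                    # an element of GU_3(F_9)

assert mat_mul(adjoint(M), M) == [[lam if i == j else ZERO for j in range(n)] for i in range(n)]
lam_n = ONE
for _ in range(n):
    lam_n = mul(lam_n, lam)
assert mul(conj(det(M)), det(M)) == lam_n            # det(M)^c * det(M) = nu(M)^n
print("checked in GU_3(F_9)")
```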
We will also use the simpler variant where there is a single prime $l \equiv 1 \bmod N$, a choice of place $\lambda | l$, and a representation $\overline{r}_{\lambda}: G_F \rightarrow \mathop{\mathrm{GL}}_n({\mathbf F}_l)$ of determinant $\overline{\varepsilon}^{-n(n-1)/2}$, giving rise to a geometrically irreducible $F$-scheme $T(\overline{r}_{\lambda})$. The following lemma will be used to prove the existence of local points on $T(\{ \overline{U}_{\lambda_i} \})$ in certain cases. **Lemma 1**. *Let $l > n$ be a prime such that $l \equiv -1 \text{ mod }N$, and let $v, \lambda$ be places of ${\mathbf Q}(\zeta_N)$, $M$, respectively, of residue characteristic $l$. Let $\tau, c \tau : k(v) \to k(\lambda)$ be the two distinct isomorphisms, and let $\omega_\tau : I_{{\mathbf Q}(\zeta_N)_v} \to k(\lambda)^\times$ be the character $\tau \circ \operatorname{Art}_{{\mathbf Q}(\zeta_N)_v}^{-1}$. Then there is an isomorphism $$\overline{W}_{0, \lambda} \cong \bigoplus_{i=1}^n \omega_\tau^{i-1} \omega_{c \tau}^{n-i}.$$* *Proof.* The action of $H_0$ on $Y_0$ extends to an action of $H$, leading to a decomposition $$\overline{W}_{0, \lambda} = \bigoplus_{j = 1}^n \overline{W}_{0, \lambda, j},$$ where $W_{0, \lambda, j}$ is the $\Psi_\lambda^{-1}$-twist of the $H$-eigenspace in $H^{N-2}_{\text{\'et}}(Y_{0, \overline{{\mathbf Q}}}, {\mathcal O}_{M_\lambda})$ for the character $(\chi_1 \rho_j^{-1}, \dots, \chi_N \rho_j^{-1})$, and $\overline{W}_{0, \lambda, j} := W_{0, \lambda, j}\otimes_{{\mathcal O}_{M_\lambda}}k(\lambda)$. Here we have used the computation of [@MR654325 I.7.4], which moreover shows that each summand here has rank 1 over $k(\lambda)$. Moreover, this decomposition is orthogonal with respect to $\langle \cdot, \cdot \rangle_{\overline{W}_{0, \lambda}}$, showing that $\overline{W}_{0, \lambda, j} \otimes_{k(\lambda), c} k(\lambda) \cong \overline{W}_{0, \lambda, j}^\vee \varepsilon^{1-n}$ as $k(\lambda)[G_{{\mathbf Q}(\zeta_N)}]$-modules.
After permuting $\rho_1, \dots, \rho_n$, we can assume that $\mathrm{HT}_\tau(W_{0, \lambda, j}) = \{ j-1 \}$. Then we have $\overline{W}_{0, \lambda, j} \cong k(\lambda)(\omega_\tau^{j-1} \omega_{c \tau}^{a_j})$ for some integers $a_j$ with $\{ a_1, \dots, a_n \} = \{ 0, \dots, n-1 \}$. The last sentence of the previous paragraph shows that we must in fact have $j-1 + a_j = n-1$, completing the proof. ◻ ## A result of Moret-Bailly {#subsec: MB variant} We will use the following variant of the extensions [@frankII Theorem 3.1], [@BLGGT Proposition 3.1.1] of the main result of [@mb]. **Proposition 1**. *Let $F$ be an imaginary CM field, Galois over ${\mathbf Q}$, and let $T / F$ be a smooth, geometrically irreducible variety. Suppose given the following data:* 1. *A finite extension $F^{\mathrm{avoid}} / F$ and a finite set $S_0$ of rational primes.* 2. *For each $l \in S_0$ and each place $v | l$ of $F$, a Galois extension $L_v / F_v$. These have the property that if $\sigma \in G_{{\mathbf Q}_l}$ then $\sigma(L_v) = L_{\sigma(v)}$.* 3. *For each $l \in S_0$ and each place $v | l$ of $F$, a non-empty open subset $\Omega_v \subset T(L_v)$, invariant under the action of $\mathop{\mathrm{Gal}}(L_v / F_v)$.* *Then we can find a finite CM extension $F' / F$ and a point $P \in T(F')$ with the following properties:* 1. *$F' / {\mathbf Q}$ is Galois and $F' / F$ is linearly disjoint from $F^{\mathrm{avoid}} / F$.* 2. *For each $l \in S_0$ and each place $v | l$ of $F$ and $w | v$ of $F'$, there is an isomorphism $F'_w \cong L_v$ of $F_v$-algebras such that $P \in \Omega_v \subset T(F'_w) \cong T(L_v)$.* *Suppose given further a finite group $G$ and a surjective homomorphism $f : \pi_1^{\text{\'et}}(T) \to G$.
Then we can further choose $P$ so that the image of $f \circ P_\ast : G_{F'} \to G$ is surjective.* *Proof.* Without the last sentence, this is a special case of [@BLGGT Proposition 3.1.1] (taking $K_0 = {\mathbf Q}$ in the notation there), noting that (as in [@frankII Theorem 3.1]) we can choose $F'$ to be of the form $F' = F E$ for a Galois, totally real extension $E / {\mathbf Q}$, and therefore in particular to be CM. To get the last sentence, it suffices to add further local conditions at places of sufficiently large norm, ensuring that the image of $f \circ P_\ast$ meets every conjugacy class of $G$ (in close analogy with the argument of [@frankII Proposition 3.2] -- the surjectivity is then a consequence of Jordan's theorem). To define the necessary local conditions, we can spread $T$ out to a geometrically irreducible scheme ${\mathcal T}$, smooth and of finite type over ${\mathcal O}_F$, such that $f$ factors through $\pi_1^{\text{\'et}}({\mathcal T})$. Then [@MR2920749 Proposition 9.15] shows that for any $X > 0$ and any conjugacy class $C \subset G$, we can find a finite place $v$ of $F$ of norm $q_v > X$ and a point $x \in {\mathcal T}(k(v))$ such that the image of (arithmetic) Frobenius under $f \circ x_\ast$ lies in $C$. For each conjugacy class $C$ of $G$, we choose one such place $v_C$ and point $x_C$, and take $\Omega_{v_C}$ to be the pre-image of $x_C$ in ${\mathcal T}({\mathcal O}_{F_{v_C}}) \subset T(F_{v_C})$. We may assume that if $C \neq C'$ then $v_C$ and $v_{C'}$ have distinct residue characteristics $l_C \neq l_{C'}$, and then replace $S_0$ by $S_0 \cup \{ l_C \mid C \subset G \}$. Finally, if $v | l_C$ and $v \neq v_C$, we take $\Omega_v = T(F_v)$. Provided the norm $q_{v_C}$ is sufficiently large, these sets $\Omega_v$ will also be non-empty, as required.
◻ # Preliminaries on deformation rings and Galois theory ## Lemmas on components of Galois deformation rings We begin by defining a certain local representation which shall appear repeatedly in the sequel. **Definition 1**. For $n, m \in {\mathbf Z}_{\ge 1}$, let $\varepsilon_2, \varepsilon_2' : G_{{\mathbf Q}_{p^2}} \to \overline{{\mathbf Z}}_p^\times$ be the two Lubin--Tate characters trivial on $\operatorname{Art}_{{\mathbf Q}_{p^2}}(p)$, and let $\rho_{n,m, 0}$ denote the representation $$\label{eqn: defn of rho0}\rho_{n,m,0} = \bigoplus_{i=1}^n \varepsilon^{m(n-i)}_2 (\varepsilon'_2)^{m(i-1)}: G_{{\mathbf Q}_{p^2}} \rightarrow \mathop{\mathrm{GL}}_{n}(\overline{{\mathbf Z}}_p).$$ We assume that $p > nm$, so the representation $\rho_{n,m,0}$ is Fontaine--Laffaille. If the value of $n$ is implicit, we often simply write $\rho_0$ for $\rho_{n,1,0}$. **Lemma 1**. *Let $K_0 / {\mathbf Q}_{p^2}$ be an unramified extension and let $\rho: G_{K_0} \rightarrow \mathop{\mathrm{GL}}_{n}(\overline{{\mathbf Z}}_p)$ be any crystalline representation of Hodge--Tate weights $\{ 0, m, 2m, \dots, (n-1)m \}$ *(*with respect to any embedding $K_0 \to \overline{{\mathbf Q}}_p$*)* such that $\overline{\rho}|_{I_{K_0}}=\overline{\rho}_{n, m, 0}|_{I_{K_0}}$. Then:* 1. *[\[item: finite unramified makes them the same\]]{#item: finite unramified makes them the same label="item: finite unramified makes them the same"} There is a finite unramified extension $K_1 / K_0$ such that $\overline{\rho}|_{G_{K_1}} = \overline{\rho}_{n, m, 0} |_{G_{K_1}}$.* 2. *For any finite extension $K/ K_0$ such that $\overline{\rho}|_{G_{K}} = \overline{\rho}_{n, m, 0} |_{G_{K}}$, we have $\rho |_{G_K} \sim \rho_{n, m, 0} |_{G_K}$ *(*"connects to", in the sense of [@BLGGT §1.4]*)*.* *Proof.* The first claim is clear. For the second, choose $K_1 / K_0$ minimal such that $\overline{\rho}|_{G_{K_1}} = \overline{\rho}_{n,m,0}|_{G_{K_1}}$. 
Since $p > nm$ and $K_1$ is unramified, the lifting ring $R^{\mathrm{crys},\{ 0, \dots, (n-1)m \},{\mathcal O}}_{\overline{\rho}|_{G_{K_1}}}$ is formally smooth by Fontaine--Laffaille theory. It follows that $\rho |_{G_{K_1}} \sim \rho_{n, m, 0}|_{G_{K_1}}$, and then these representations are still connected after passing to any further finite extension. ◻ **Lemma 1**. *Let $K$ be a finite extension of ${\mathbf Q}_p$. Let $\rho_1,\rho_2$ be ordinary, crystalline weight $0$ representations of $G_K$ with $\overline{\rho}_1=\overline{\rho}_2$ the trivial representation. Then $\rho_1\sim\rho_2$.* *Proof.* This follows immediately from [@ger Lemma 3.14] --- the ordinary weight $0$ crystalline lifting ring of the trivial representation is irreducible. ◻ **Lemma 1**. *Let $K$ be a finite extension of ${\mathbf Q}_p$, and let $\rho: G_{K} \rightarrow \mathop{\mathrm{GL}}_{n}(\overline{{\mathbf Z}}_p)$ be crystalline of weight $0$. Then there exists a constant $c = c(K,\rho,n)$ with the following property:* - *if $t: G_K \rightarrow \mathop{\mathrm{GL}}_{n}(\overline{{\mathbf Z}}_p)$ is crystalline of weight $0$, and $t \equiv \rho \bmod p^c$, then $t \sim \rho$.* *Proof.* Up to conjugation, the image of $\rho$ lands in $\mathop{\mathrm{GL}}_n(\mathcal{O}_E)$ for some finite extension $E/{\mathbf Q}_p$ with residue field $k$. Let $R = R^{\mathrm{crys},\underline{0}}_{\overline{\rho}|_{G_{K}}} \otimes_{W(k)} \mathcal{O}_E$ denote the weight $0$ crystalline lifting ring of $\overline{\rho}|_{G_K}: G_K \rightarrow \mathop{\mathrm{GL}}_n(k)$. By assumption, $R$ has specializations corresponding to $\rho$ and to $t$.
We may choose a finite set of elements $\{g_k: 1 \le k \le d\}$ of $G_K$ such that $$R = \mathcal{O}_E\llbracket X_{ijk} : 1 \le i,j \le n, 1\le k \le d \rrbracket/I$$ for an ideal $I$, and the universal lifting $\rho^{{\operatorname{univ}}}:G_K \to \mathop{\mathrm{GL}}_n(R)$ of $\overline{\rho}$ satisfies $$\rho^{{\operatorname{univ}}}(g_k) = \rho(g_k) + [X_{ijk}]_{i,j=1}^{n},$$ so that $\mathfrak{p}= (X_{ijk})$ is the height one prime associated to $\rho$. The condition that $t \equiv \rho|_{G_K} \bmod p^c$ is then equivalent to the condition that the corresponding homomorphism $t: R \to \overline{{\mathbf Z}}_p$ satisfies $v_p(t(X_{ijk})) \ge c$ for all $i,j,k$. The generic fibre of $R$ is formally smooth at $\mathfrak{p}$ by [@kisindefrings Theorem 3.3.8], and so in particular there is a unique minimal prime $\mathfrak{P}$ of $R[1/p]$ containing the prime $\mathfrak{p}$. Suppose that $\mathfrak{Q}$ is any minimal prime ideal of $R[1/p]$ which does not contain $\mathfrak{p}$. Then $\mathfrak{Q}$ contains an element $P(X_{ijk}) \in R[1/p]$ which doesn't vanish at $X_{ijk} = 0$ and hence has a non-zero constant term. After scaling if necessary, we may assume that $P \in R$. But now any specialization $t$ with $v_p(t(X_{ijk})) > v_p(P(0,0,\ldots,0))$ for every $(i,j,k)$ satisfies $t(P) \neq 0$, and hence, if $c > v_p(P(0,0,\ldots,0))$, then $t$ cannot lie on the irreducible component corresponding to $\mathfrak{Q}$. Since $R$ has only finitely many minimal prime ideals (it is Noetherian), there exists a choice of $c$ which guarantees that $t$ lies on the component corresponding to $\mathfrak{P}$.
◻ ## Lemmas on big image conditions In order to apply Theorem [Theorem 1](#thm:main_automorphy_lifting_theorem){reference-type="ref" reference="thm:main_automorphy_lifting_theorem"} to a $p$-adic representation of $G_F$, one needs first to establish that the image of the residual representation (and its restriction to $G_{F(\zeta_p)}$) satisfies certain technical hypotheses, in particular conditions ([\[part: ALT cond dgi\]](#part: ALT cond dgi){reference-type="ref" reference="part: ALT cond dgi"}) and ([\[part:scalartoremark\]](#part:scalartoremark){reference-type="ref" reference="part:scalartoremark"}). In this section, we prove some lemmas showing that a number of representations of a form we shall encounter later have these properties. We first combine these conditions into the following definition: **Definition 1**. Say that a representation $\overline{s}: G_F \rightarrow \mathop{\mathrm{GL}}_n(\overline{{\mathbf F}}_p)$ satisfies the Taylor--Wiles big image conditions if the following hold: 1. [\[generic\]]{#generic label="generic"} The representation $\overline{s}$ is decomposed generic. 2. [\[adequate\]]{#adequate label="adequate"} The representation $\overline{s}|_{G_{F(\zeta_p)}}$ has adequate image. 3. [\[whatever\]]{#whatever label="whatever"} There exists $\sigma \in G_F - G_{F(\zeta_p)}$ such that $\overline{s}(\sigma)$ is scalar. We have: **Lemma 1**. *Suppose that $\overline{s}: G_F \rightarrow \mathop{\mathrm{GL}}_n(\overline{{\mathbf F}}_p)$ satisfies the Taylor--Wiles big image conditions. Suppose that $F$ is Galois. Let $H/F$ be a finite extension whose Galois closure over ${\mathbf Q}$ is linearly disjoint over $F$ from both $F(\zeta_p)$ and the Galois closure over ${\mathbf Q}$ of the fixed field of $\ker(\overline{s})$. Then $\overline{s}|_{G_H}$ satisfies the Taylor--Wiles big image conditions.* *Proof.* Let $\widetilde{H}$ be the Galois closure of $H$ over ${\mathbf Q}$. 
Since $\overline{s}|_{G_{H}}$ satisfies the Taylor--Wiles conditions if $\overline{s}|_{G_{\widetilde{H}}}$ does, we may assume that $H = \widetilde{H}$ is Galois over ${\mathbf Q}$. The conditions ensure that the images of $\overline{s}$ and $\overline{s}|_{G_H}$ coincide, and also the images of $\overline{s}|_{G_{F(\zeta_p)}}$ and $\overline{s}|_{G_{H(\zeta_p)}}$ coincide. Thus condition ([\[adequate\]](#adequate){reference-type="ref" reference="adequate"}) of Definition [Definition 1](#tw){reference-type="ref" reference="tw"} holds. Let $M$ be the Galois closure of the fixed field of $\ker(\overline{s})$. Then we have an isomorphism $$\mathop{\mathrm{Gal}}(H \cdot M(\zeta_p)/F) \simeq \mathop{\mathrm{Gal}}(M(\zeta_p)/F) \times \mathop{\mathrm{Gal}}(H/F),$$ and so $\mathop{\mathrm{Gal}}(M(\zeta_p)/F) \simeq \mathop{\mathrm{Gal}}(H \cdot M(\zeta_p)/H)$ via the map $\sigma \mapsto (\sigma,1)$. Moreover, $$\mathop{\mathrm{Gal}}(H \cdot M(\zeta_p)/{\mathbf Q}) \subset \mathop{\mathrm{Gal}}(M(\zeta_p)/{\mathbf Q}) \times \mathop{\mathrm{Gal}}(H/{\mathbf Q})$$ is the subgroup of elements whose two projections to $\mathop{\mathrm{Gal}}(F/{\mathbf Q})$ coincide. There exists a conjugacy class $[\sigma]$ in $\mathop{\mathrm{Gal}}(M(\zeta_p)/F)$ such that any rational prime unramified in $H \cdot M(\zeta_p)$ whose Frobenius element corresponds to $\sigma$ is decomposed generic for $\overline{s}$. Then $(\sigma,1)$ will be decomposed generic for $\overline{s}|_{G_H}$. Similarly, if $\sigma \in \mathop{\mathrm{Gal}}(M(\zeta_p)/F) - \mathop{\mathrm{Gal}}(M(\zeta_p)/F(\zeta_p))$ is an element such that $\overline{s}(\sigma)$ is scalar, then the same is true of $(\sigma,1) \in \mathop{\mathrm{Gal}}(H \cdot M(\zeta_p)/H)$. ◻ We shall also use the following group-theoretic fact. **Lemma 1**.
*Consider the collection of groups $G$ either of the form $\mathrm{PSL}_n({\mathbf F}_{p^k})$ or of the form $\mathrm{PSU}_n({\mathbf F}_{p^{2k}})$ for all primes $p$ and integers $k \ge 1$, $n \ge 2$. Then $G$ is simple unless $G$ is one of $\mathrm{PSL}_2({\mathbf F}_2)$, $\mathrm{PSL}_2({\mathbf F}_3)$, or $\mathrm{PSU}_3({\mathbf F}_4)$. These groups are pairwise non-isomorphic as $n$, $p$, and $k$ vary, except for the following isomorphisms:* 1. *$\mathrm{PSL}_2({\mathbf F}_{p^k}) \simeq \mathrm{PSU}_2({\mathbf F}_{p^{2k}})$,* 2. *$\mathrm{PSL}_2({\mathbf F}_5) \simeq \mathrm{PSL}_2({\mathbf F}_4)$,* 3. *$\mathrm{PSL}_2({\mathbf F}_7) \simeq \mathrm{PSL}_3({\mathbf F}_2)$.* *If we restrict $G$ to be of the form $G = \mathrm{PSL}_2({\mathbf F}_{p^k})$ or $\mathrm{PSU}_n({\mathbf F}_{p^{2k}})$, and $A \in G$ is the image of any matrix with eigenvalues in ${\mathbf F}_p$, then any automorphism of $G$ preserves these eigenvalues up to scalar.* *Proof.* If $n=2$, then there is an isomorphism $\mathrm{PSU}_2({\mathbf F}_{p^{2k}}) \simeq \mathrm{PSL}_2({\mathbf F}_{p^k})$. Otherwise, $\mathrm{PSL}_n({\mathbf F}_{p^k}) \simeq A_{n-1}(p^k)$ and $\mathrm{PSU}_{n}({\mathbf F}_{p^{2k}}) \simeq {}^2 A_{n-1}(p^{2k})$ is the Steinberg group. These groups are (twisted in the second case) Chevalley groups. The simplicity statement follows from [@MR0466335 Thm 37(b)] (see also [@MR0407163]). The list of exceptional isomorphisms between (possibly twisted) simple Chevalley groups was determined in [@MR0466335 Thm 37(a)]. By a theorem of Steinberg [@MR121427] (and [@MR0407163 Thm 12.5.1]), the outer automorphism group of either $\mathrm{PSL}_n({\mathbf F}_{p^k})$ or $\mathrm{PSU}_{n}({\mathbf F}_{p^{2k}})$ is generated by diagonal automorphisms (conjugation by diagonal elements), by field automorphisms (acting on ${\mathbf F}_{p^k}$ or ${\mathbf F}_{p^{2k}}$ respectively), and the graph automorphism coming from the automorphism of the Dynkin diagram $A_{n-1}$ if $n > 2$ (associated to the inverse transpose map).
Diagonal automorphisms certainly preserve eigenvalues up to scalar, and field automorphisms act on the eigenvalues by the corresponding automorphism of the field, so they preserve eigenvalues lying in ${\mathbf F}_p$. There are no graph automorphisms for $n=2$. For $n > 2$, the graph automorphism in the unitary group is $M \mapsto (M^t)^{-1} = \sigma((M^{\dagger})^{-1}) = \sigma(M)$, which coincides with the field automorphism, and so also preserves rational eigenvalues. ◻ **Lemma 1**. *Let $F/{\mathbf Q}$ be Galois. Consider representations: $$\begin{aligned} \overline{r}_A: G_F & \rightarrow \mathop{\mathrm{GL}}_2({\mathbf F}_p), \\ \overline{r}_B: G_F & \rightarrow \mathrm{GU}_m({\mathbf F}_{p^2}) \rightarrow \mathop{\mathrm{GL}}_{m}({\mathbf F}_{p^2}), \end{aligned}$$ such that the images $\overline{r}_A(G_{F(\zeta_p)})$ and $\overline{r}_B(G_{F(\zeta_p)})$ equal $\mathop{\mathrm{SL}}_2({\mathbf F}_p)$ and $\mathop{\mathrm{SU}}_m({\mathbf F}_{p^2})$ respectively. Consider the representation: $$\overline{s}= \mathop{\mathrm{Sym}}^{n-1} \overline{r}_A \otimes \overline{r}_B: G_F \rightarrow \mathop{\mathrm{GL}}_{mn}({\mathbf F}_{p^2}).$$ Assume that $p>2mn+1$, and if $m=2$ assume that the fixed fields of the kernels of the projective representations associated to $\overline{r}_A$ and $\overline{r}_B$ are linearly disjoint over $F(\zeta_p)$.* 1. *The representation $\overline{s}$ satisfies conditions ([\[generic\]](#generic){reference-type="ref" reference="generic"}) and ([\[adequate\]](#adequate){reference-type="ref" reference="adequate"}) of Definition [Definition 1](#tw){reference-type="ref" reference="tw"}.* 2. *[\[isselfdual\]]{#isselfdual label="isselfdual"} If $\det(\overline{r}_A) = \overline{\varepsilon}^{-m}$ and $\overline{r}_B$ has multiplier character $\overline{\varepsilon}^{1 - m}$, then $\overline{s}$ has image in $\mathrm{GU}_{mn}({\mathbf F}_{p^2})$ with multiplier character $\overline{\varepsilon}^{1 - mn}$.* 3.
*If, in addition to the assumptions in ([\[isselfdual\]](#isselfdual){reference-type="ref" reference="isselfdual"}), one assumes that $\overline{\varepsilon}(G_F)={\mathbf F}^{\times}_p$, then $\overline{s}$ also satisfies condition ([\[whatever\]](#whatever){reference-type="ref" reference="whatever"}) of Definition [Definition 1](#tw){reference-type="ref" reference="tw"} and thus satisfies the Taylor--Wiles big image conditions.* *Proof.* Let $H_A$ and $H_B$ denote the extensions of $F(\zeta_p)$ corresponding to the fixed fields of the kernels of the projective representations associated to $\overline{r}_A$ and $\overline{r}_B$. Our assumption on the images of $\overline{r}_A$ and $\overline{r}_B$ implies that $$\mathop{\mathrm{Gal}}(H_A/F(\zeta_p)) \simeq \mathrm{PSL}_2({\mathbf F}_p), \quad \mathop{\mathrm{Gal}}(H_B/F(\zeta_p)) \simeq \mathrm{PSU}_m({\mathbf F}_{p^2}).$$ Let $\widetilde{H}_A$ and $\widetilde{H}_B$ denote the Galois closures of $H_A$ and $H_B$ over ${\mathbf Q}$, and let $\widetilde{H}$ denote the compositum of $\widetilde{H}_A$ and $\widetilde{H}_B$. Since $F/{\mathbf Q}$ is Galois and $\mathrm{PSL}_2({\mathbf F}_p)$ and $\mathrm{PSU}_m({\mathbf F}_{p^2})$ are simple (as $p \ge 5$), we have isomorphisms $\mathop{\mathrm{Gal}}(\widetilde{H}_A/F(\zeta_p)) \simeq \mathrm{PSL}_2({\mathbf F}_p)^r$ and $\mathop{\mathrm{Gal}}(\widetilde{H}_B/F(\zeta_p)) \simeq \mathrm{PSU}_m({\mathbf F}_{p^2})^s$ respectively for some positive integers $r$ and $s$. Thus $$\mathop{\mathrm{Gal}}(\widetilde{H}/F(\zeta_p)) \simeq \mathop{\mathrm{Gal}}(\widetilde{H}_A/F(\zeta_p)) \times \mathop{\mathrm{Gal}}(\widetilde{H}_B/F(\zeta_p)),$$ since either $m > 2$ and the groups have no common quotients, or $m=2$ and the fields are linearly disjoint by assumption. If $G = \mathrm{PSL}_2({\mathbf F}_p)$ or $G = \mathrm{PSU}_m({\mathbf F}_{p^2})$, then $G$ is simple by Lemma [Lemma 1](#steinberg){reference-type="ref" reference="steinberg"}.
Moreover, by the same lemma, for any equivalence class of matrices $A$ with eigenvalues in ${\mathbf F}_p$, any automorphism of $G$ preserves the (unordered set of) eigenvalues up to scalars. We also have $\mathop{\mathrm{Aut}}(G^k) \simeq \mathop{\mathrm{Aut}}(G)^k \rtimes S_k$ for any $k \ge 1$. Let $\alpha = \beta^2 \in {\mathbf F}^{\times 2}_p$ be an element such that $1,\alpha,\ldots,\alpha^{mn-1}$ are all distinct; such an $\alpha$ exists because $p - 1 \ge 2mn$. Let $A$ be a matrix in $\mathrm{PSL}_2({\mathbf F}_p)$ with eigenvalues (up to scalars) $1$ and $\alpha$, and let $B \in \mathrm{PSU}_m({\mathbf F}_{p^2})$ have eigenvalues $1,\alpha^n,\ldots,\alpha^{n(m-1)}$. Explicitly, let $A = \mathrm{diag}(\beta,\beta^{-1}) \in \mathop{\mathrm{SL}}_2({\mathbf F}_p)$, and then construct $B$ as follows. Certainly $A^n \in \mathop{\mathrm{SL}}_2({\mathbf F}_p)$. The image of $A^n$ under the $(m-1)$th symmetric power map lands in $\mathrm{Sp}_{m}({\mathbf F}_p)$ or $\mathrm{SO}_m({\mathbf F}_p)$ depending on the parity of $m$ (here we mean the symplectic or orthogonal groups defined using the bilinear form induced by the standard symplectic form on ${\mathbf F}_p^2$). These groups are conjugate to subgroups of $\mathop{\mathrm{SU}}_m({\mathbf F}_{p^2})$ by Lemma [Lemma 1](#selfdual){reference-type="ref" reference="selfdual"}. By Chebotarev, we find a prime $q$ unramified in $\widetilde{H}$ and such that for a fixed choice of $\mathfrak{q}|q$ in $\widetilde{H}$, $\mathrm{Frob}_{\mathfrak{q}}\in \mathop{\mathrm{Gal}}(\widetilde{H}/F(\zeta_p)) \subset \mathop{\mathrm{Gal}}(\widetilde{H}/{\mathbf Q})$ has the form $$(A,A,\ldots,A) \times (B,B,\ldots,B) \in \mathrm{PSL}_2({\mathbf F}_p)^r \times \mathrm{PSU}_m({\mathbf F}_{p^2})^s.$$ The eigenvalues (up to scalar) of $A$ and $B$ are preserved by the action of $\mathop{\mathrm{Gal}}(F/{\mathbf Q})$; this follows from our description of the automorphism group of each factor.
The images of these elements in $\mathop{\mathrm{Gal}}(F(\zeta_p)/{\mathbf Q})$ are trivial, so such a prime $q$ will split completely in $F(\zeta_p)$ and so satisfy $q \equiv 1 \bmod p$. Moreover, the Frobenius elements at all other primes above $q$ will be conjugate inside $\mathop{\mathrm{Gal}}(\widetilde{H}/{\mathbf Q})$. Hence the image of (any conjugate of) $\mathrm{Frob}_{\mathfrak{q}}$ under $\overline{s}$ has eigenvalues (up to scalar) given by $1, \alpha, \ldots, \alpha^{mn-1}$. In particular, they are all distinct. Since $q \equiv 1 \bmod p$, this implies that $\overline{s}$ is decomposed generic, which is property ([\[generic\]](#generic){reference-type="ref" reference="generic"}) of Definition [Definition 1](#tw){reference-type="ref" reference="tw"}. To see that $\overline{s}|_{G_{F(\zeta_p)}}$ has adequate image, it suffices to show that the image is absolutely irreducible and thus is also adequate by [@jack Theorem A.9] (using the assumption $p > 2mn+1$). The irreducibility follows easily from the fact that $\mathop{\mathrm{Sym}}^{n-1}(\overline{r}_A)$ is irreducible as long as $p > n$. This proves property ([\[adequate\]](#adequate){reference-type="ref" reference="adequate"}) of Definition [Definition 1](#tw){reference-type="ref" reference="tw"}. Assume that $\det(\overline{r}_A) = \overline{\varepsilon}^{-m}$ and that $\overline{r}_B$ has multiplier character $\overline{\varepsilon}^{1-m}$. Then $\overline{s}= \mathop{\mathrm{Sym}}^{n-1} \overline{r}_A \otimes \overline{r}_B$ is absolutely irreducible and self-dual (i.e. there is an isomorphism of the form [\[twistselfdual\]](#twistselfdual){reference-type="eqref" reference="twistselfdual"}) with multiplier character $$\overline{\varepsilon}^{-m(n-1)} \cdot \overline{\varepsilon}^{1-m} = \overline{\varepsilon}^{1-mn},$$ and so the image lies in $\mathrm{GU}_{mn}({\mathbf F}_{p^2})$ with this multiplier character by Lemma [Lemma 1](#selfdual){reference-type="ref" reference="selfdual"}.
This establishes condition ([\[isselfdual\]](#isselfdual){reference-type="ref" reference="isselfdual"}). Assume that $\overline{\varepsilon}(G_F)={\mathbf F}^{\times}_p$. Let $M_A$ and $M_B$ denote the fixed fields of the kernels of $\overline{r}_A$ and $\overline{r}_B$, and let $M$ be the compositum of $M_A$ and $M_B$. By our assumption on linear disjointness of $H_A$ and $H_B$, $\mathop{\mathrm{Gal}}(M/F(\zeta_p))$ is the direct product $\mathop{\mathrm{SL}}_2({\mathbf F}_p) \times \mathop{\mathrm{SU}}_m({\mathbf F}_{p^2})$, and $\mathop{\mathrm{Gal}}(M/F)$ is the subgroup of matrices $(A,B)$ of $\mathop{\mathrm{GL}}_2({\mathbf F}_p) \times \mathrm{GU}_m({\mathbf F}_{p^2})$ with $\det(A) = \eta^{-m}$ and $\nu(B) = \eta^{1-m}$ for some $\eta \in {\mathbf F}^{\times}_p$. Hence the image certainly contains $(\beta^{m} I_2,\beta^{m-1} I_m)$, where $I_n$ denotes the identity matrix and $\beta \in {\mathbf F}^{\times}_p$ is a primitive root. Then, by Chebotarev, there exists $\sigma \in G_F$ whose image in $\mathop{\mathrm{Gal}}(M/F)$ is this element. Since $p > 2mn+1 \ge 2m+1$, we have $\beta^{2m} \ne 1$. Since $\overline{\varepsilon}(\sigma)^{-m} = \det \overline{r}_A(\sigma) = \beta^{2m}$, it follows that $\overline{\varepsilon}(\sigma) \neq 1$, and so the element $\sigma$ is not contained in $G_{F(\zeta_p)}$. On the other hand, we see that $\overline{s}(\sigma)$ is also scalar, and we are done. ◻ We shall need the following well-known property of induced representations (specialized to the context in which we shall apply it in the proof of the following lemma). **Lemma 1**. *Let $E/{\mathbf Q}$ be a cyclic Galois extension of degree $m$ linearly disjoint from $F$, and let $L = E \cdot F$. Let $\overline{\psi}: G_L \rightarrow {\mathbf F}^{\times}_p$ be a character and let $\overline{r}_B = \mathop{\mathrm{Ind}}^{G_F}_{G_L} \overline{\psi}: G_F \rightarrow \mathop{\mathrm{GL}}_m(\overline{{\mathbf F}}_p)$.
Let $q$ be a prime of ${\mathbf Q}$ such that $\overline{r}_B$ is unramified at all $v|q$, $q$ splits completely in $F$, and $\mathrm{Frob}_q$ generates $\mathop{\mathrm{Gal}}(E/{\mathbf Q})$. Then, for $v|q$ in $F$, the eigenvalues of $\overline{r}_B(\mathrm{Frob}_v)$ are of the form $\lambda, \zeta \lambda, \ldots, \zeta^{m-1} \lambda$ for some $\lambda$ where $\zeta$ is a primitive $m$th root of unity.* *Proof.* The assumption that $E$ is linearly disjoint from $F$ ensures that $\mathop{\mathrm{Gal}}(L/F) \simeq \mathop{\mathrm{Gal}}(E/{\mathbf Q})$ is cyclic of order $m$. There is an isomorphism $\overline{r}_B \simeq \overline{r}_B \otimes \chi$ where $\chi$ is a character (factoring through $\mathop{\mathrm{Gal}}(L/F)$) of order $m$. Thus $\overline{r}_{B}(\mathrm{Frob}_v)$ is conjugate to $\chi(\mathrm{Frob}_v) \overline{r}_{B}(\mathrm{Frob}_v) = \zeta \cdot \overline{r}_{B}(\mathrm{Frob}_v)$, where $\zeta$ is a primitive $m$th root of unity and the result follows. ◻ We shall also need the following variant of Lemma [Lemma 1](#yeomanslemmanew){reference-type="ref" reference="yeomanslemmanew"}: **Lemma 1**. *Let $F/{\mathbf Q}$ be Galois. Consider representations: $$\begin{aligned} \overline{r}_A: G_F & \rightarrow \mathop{\mathrm{GL}}_2({\mathbf F}_p), \\ \overline{r}_B: G_F & \rightarrow \mathop{\mathrm{GL}}_{m}({\mathbf F}_p). \end{aligned}$$ Assume that:* 1. *The image of $\overline{r}_A(G_{F(\zeta_p)})$ equals $\mathop{\mathrm{SL}}_2({\mathbf F}_p)$.* 2. 
*There is a cyclic Galois extension $E / {\mathbf Q}$ of degree $m$ and linearly disjoint from $F$, such that, setting $L = E \cdot F$, there is a character $\overline{\psi} : G_L \to {\mathbf F}_p^\times$ with $\overline{r}_B \cong \mathop{\mathrm{Ind}}_{G_L}^{G_F} \overline{\psi}$ and $\overline{r}_B|_{G_{F(\zeta_p)}}$ irreducible.* *Consider the representation: $$\overline{s}= \mathop{\mathrm{Sym}}^{n-1} \overline{r}_A \otimes \overline{r}_B: G_F \rightarrow \mathop{\mathrm{GL}}_{mn}({\mathbf F}_p).$$ Assume that $p>2mn+1$. Then:* 1. *The representation $\overline{s}$ satisfies conditions ([\[generic\]](#generic){reference-type="ref" reference="generic"}) and ([\[adequate\]](#adequate){reference-type="ref" reference="adequate"}) of Definition [Definition 1](#tw){reference-type="ref" reference="tw"}.* 2. *If $\det(\overline{r}_A) = \overline{\varepsilon}^{-m}$ and $\det(\overline{r}_B) = \overline{\varepsilon}^{-m(m-1)/2}$ and $\overline{\varepsilon}(G_L)={\mathbf F}^{\times}_p$, then $\overline{s}$ also satisfies condition ([\[whatever\]](#whatever){reference-type="ref" reference="whatever"}) of Definition [Definition 1](#tw){reference-type="ref" reference="tw"} and thus satisfies the Taylor--Wiles big image conditions.* *Proof.* The representation $\overline{r}_B$ has solvable image. The assumption that $\overline{r}_B|_{G_{F(\zeta_p)}}$ is irreducible implies that $F(\zeta_p)$ and $E$ are linearly disjoint. As in the proof of Lemma [Lemma 1](#yeomanslemmanew){reference-type="ref" reference="yeomanslemmanew"}, let $H_A$ and $H_B$ denote the extensions of $F(\zeta_p)$ corresponding to the fixed fields of the kernels of the projective representations associated to $\overline{r}_A$ and $\overline{r}_B$, and $\widetilde{H}_A$, $\widetilde{H}_B$ their Galois closures over ${\mathbf Q}$. Let $\widetilde{H}$ be the compositum of $\widetilde{H}_A$ and $\widetilde{H}_B$.
We deduce once more that $$\mathop{\mathrm{Gal}}(\widetilde{H}_A/F(\zeta_p)) \simeq \mathrm{PSL}_2({\mathbf F}_p)^r$$ and, since $\mathrm{PSL}_2({\mathbf F}_p)$ has no solvable quotients, $$\mathop{\mathrm{Gal}}(\widetilde{H}/F(\zeta_p)) \simeq \mathop{\mathrm{Gal}}(\widetilde{H}_A/F(\zeta_p)) \times \mathop{\mathrm{Gal}}(\widetilde{H}_B/F(\zeta_p)).$$ We have $E \subset \widetilde{H}_B$ and, since $\overline{r}_B|_{G_{F(\zeta_p)}}$ is irreducible, $\mathop{\mathrm{Gal}}(\widetilde{H}_B/F(\zeta_p)) \to \mathop{\mathrm{Gal}}(E / {\mathbf Q})$ is surjective. Let $\alpha \in {\mathbf F}^{\times}_p$ be an element such that $1,\alpha,\ldots,\alpha^{mn-1}$ are all distinct; such an $\alpha$ exists because $p - 1 \ge mn$. By Chebotarev, we find a prime $q$ unramified in $\widetilde{H}$, split in $F(\zeta_p)$, and such that for a fixed choice of $\mathfrak{q}|q$ in $\widetilde{H}$, $\mathrm{Frob}_{\mathfrak{q}}\in \mathop{\mathrm{Gal}}(\widetilde{H}/F(\zeta_p)) \subset \mathop{\mathrm{Gal}}(\widetilde{H}/{\mathbf Q})$ has the form $$(A,A,\ldots,A) \times \sigma \in \mathrm{PSL}_2({\mathbf F}_p)^r \times \mathop{\mathrm{Gal}}(\widetilde{H}_B/F(\zeta_p)),$$ where $A$ has eigenvalues with ratio $\alpha$ and $\sigma$ projects to a generator of $\mathop{\mathrm{Gal}}(E / {\mathbf Q})$. Hence, by Lemma [Lemma 1](#useme){reference-type="ref" reference="useme"}, the image of (any conjugate of) $\mathrm{Frob}_{\mathfrak{q}}$ under $\overline{s}$ has eigenvalues (up to scalar) given by $$\alpha^i \zeta^j, \quad i = 0,\ldots,n-1, \ j = 0,\ldots,m-1,$$ and in particular all eigenvalues are distinct, since otherwise $\alpha^{km} = 1$ for some $0 < k < n$, contradicting the distinctness of $1, \alpha, \ldots, \alpha^{mn-1}$. Since $q \equiv 1 \bmod p$, this implies that $\overline{s}$ is decomposed generic, which is property ([\[generic\]](#generic){reference-type="ref" reference="generic"}).
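The bookkeeping behind the eigenvalues $\alpha^i \zeta^j$ in the argument above can be checked numerically. The following is a minimal sketch with illustrative parameters of our own choosing ($p = 31$, $n = 2$, $m = 3$, $\alpha = 3$), not values taken from the text:

```python
# Sanity check of the decomposed-generic eigenvalue argument: pick alpha in
# F_p^x with 1, alpha, ..., alpha^{mn-1} distinct, and zeta a primitive m-th
# root of unity mod p; then the mn products alpha^i * zeta^j must be distinct.
p, n, m = 31, 2, 3                    # hypothetical parameters: p > 2*m*n + 1 and m | p - 1
alpha = 3                             # a primitive root mod 31, so it has order p - 1 >= m*n
zeta = pow(alpha, (p - 1) // m, p)    # a primitive m-th root of unity mod p

# 1, alpha, ..., alpha^{mn-1} are pairwise distinct
powers = {pow(alpha, k, p) for k in range(m * n)}
assert len(powers) == m * n

# the mn eigenvalues alpha^i * zeta^j (up to scalar) are pairwise distinct
eigenvalues = {pow(alpha, i, p) * pow(zeta, j, p) % p
               for i in range(n) for j in range(m)}
assert len(eigenvalues) == m * n

print(sorted(eigenvalues))  # → [1, 3, 5, 13, 15, 25]
```

If some $\alpha^i \zeta^j$ coincided, raising the coincidence to the $m$th power would force $\alpha^{km} = 1$ with $0 < km < mn$, which the first assertion rules out; this is exactly the contradiction invoked in the proof.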
Property ([\[adequate\]](#adequate){reference-type="ref" reference="adequate"}) of Definition [Definition 1](#tw){reference-type="ref" reference="tw"} follows exactly as in the proof of Lemma [Lemma 1](#yeomanslemmanew){reference-type="ref" reference="yeomanslemmanew"}. Now suppose that $\det(\overline{r}_A) = \overline{\varepsilon}^{-m}$ and $\det(\overline{r}_B) = \overline{\varepsilon}^{-m(m-1)/2}$ and that $\overline{\varepsilon}(G_L)={\mathbf F}^{\times}_p$. We now show that property ([\[whatever\]](#whatever){reference-type="ref" reference="whatever"}) of Definition [Definition 1](#tw){reference-type="ref" reference="tw"} holds. Let $M_A$ and $M_B$ denote the fixed fields of the kernels of $\overline{r}_A|_{G_{F(\zeta_p)}}$ and $\overline{r}_B|_{G_{F(\zeta_p)}}$ respectively. Since $\mathop{\mathrm{SL}}_2({\mathbf F}_p)$ has no solvable quotients, the map $G_{M_B} \to \mathop{\mathrm{Gal}}(M_A / F(\zeta_p))$ is surjective. Let $\beta \in {\mathbf F}_p^\times$ be a primitive root. We claim that we can find $\sigma \in G_F$ such that $\overline{r}_A(\sigma) = \beta^{-m^2} \cdot I_2$ and $\overline{r}_B(\sigma) = \beta^{m(1-m)} \cdot I_m$. To see this, first choose $g \in G_L$ such that $\overline{\varepsilon}(g) = \beta^2$, and let $h = \prod_{\tau \in \mathop{\mathrm{Gal}}(L / F)} {}^\tau g$. Then $\overline{r}_B(h) = \beta^{m(1-m)} I_m$, as $\prod_{\tau \in \mathop{\mathrm{Gal}}(L / F)} {}^\tau \overline{\psi} = \det \overline{r}_B|_{G_L} = \overline{\varepsilon}^{-m(m-1)/2}$. We now choose $\sigma$ of the form $h \gamma$ where $\gamma \in G_{M_B}$; since $\overline{r}_B(\gamma)$ is trivial, this means that $\overline{r}_{B}(\sigma) = \overline{r}_{B}(h)$ is of the correct form. On the other hand, we have $\det \overline{r}_A(h) = \overline{\varepsilon}^{-m}(g)^{m} = \beta^{-2m^2}$. 
Since $G_{M_B} \to \mathop{\mathrm{Gal}}(M_A / F(\zeta_p)) \simeq \mathop{\mathrm{SL}}_2({\mathbf F}_p)$ is surjective, we may choose $\gamma \in G_{M_B}$ with $\overline{r}_A(\gamma) = \beta^{-m^2} \overline{r}_A(h)^{-1}$ (which lies in $\mathop{\mathrm{SL}}_2({\mathbf F}_p)$, since $\det \overline{r}_A(h) = \beta^{-2m^2}$), and then $\overline{r}_A(\sigma) = \beta^{-m^2} \cdot I_2$. By construction, $\overline{s}(\sigma)$ is scalar. On the other hand, $\overline{\varepsilon}(\sigma) = \beta^{2m} \neq 1$, as $p-1 > 2m$ because $p > 2nm + 1$, so $\sigma \in G_F - G_{F(\zeta_p)}$, as required. ◻ ## Character building lemmas In this section, we construct some induced extensions with certain desirable local properties. We begin with the following well-known lemma: **Lemma 1** (Globalizing local characters). *Let $F$ be a number field, and let $S$ be a finite set of places of $F$. Let $\psi_v: G_{F_v} \rightarrow {\mathbf Z}/n{\mathbf Z}$ be a collection of characters for all $v \in S$. Assume that $S$ does not contain any places $v|2$. Then there exists a global character $\chi: G_F \rightarrow {\mathbf Z}/n{\mathbf Z}$ such that $\chi|_{G_{F_v}} = \psi_v$ for all $v \in S$.* *Proof.* This is a consequence of [@ArtinTate §X Thm. 5] (see also [@conradlifting Appendix A]). More precisely, the claim holds (without the hypothesis on $S$) if $n$ is odd. If $n$ is even, there exists an explicitly defined element $$a_{F,n} \in (F^{\times})^{n/2}$$ which is a perfect $n$th power in $F_v^{\times}$ for all places $v$ outside a finite (possibly empty) set $S_{F,n}$ of places dividing $2$. Then the $\psi_v$ come from a global character $\chi$ of order $n$ if and only if either $S_{F,n} \not\subset S$, or $S_{F,n} \subset S$ and $$\prod_{v \in S_{F,n}} \psi_v(a_{F,n}) = 1.$$ Since we have assumed that $S$ contains no places above $2$, either $S_{F,n}$ is empty or $S_{F,n} \not\subset S$, so the result follows. ◻ *Remark 1*. One cannot drop the hypothesis on $S$ in general because of the Grunwald--Wang phenomenon (e.g. $F = {\mathbf Q}$, $v = 2$, $n = 8$, and $\psi_2$ unramified with order $8$).
If one considers general $F$, one cannot even globalize a local character $\psi_v$ up to a character $\phi_v$ which is unramified at $v$. Let $F = {\mathbf Q}(\sqrt{-5})$, and consider a character $$\chi: F^{\times} \backslash \mathbf{A}^{\times}_F \rightarrow {\mathbf Z}/8{\mathbf Z}.$$ Since $16 \in F^{\times}$ is a perfect $8$th power in $F_v^{\times}$ for all finite places $v$ of $F$ of odd residue characteristic (this is true even for $F={\mathbf Q}$) and also in $F^{\times}_{\infty} \simeq {\mathbf C}^{\times}$, it follows that the restriction: $$\chi_2: F^{\times}_2 \rightarrow {\mathbf Z}/8{\mathbf Z}$$ satisfies $\chi_2(16) = 1$. Since $F_2/{\mathbf Q}_2$ is ramified of degree $2$, we find that $16$ has valuation $8$ and hence $\phi_2(16) = 1$ for any unramified character $F^{\times}_2\rightarrow {\mathbf Z}/8{\mathbf Z}$. Since neither $2$ nor $-1$ is a square in $F_2 \simeq {\mathbf Q}_2(\sqrt{-5})$, it follows that $16$ is not a perfect $8$th power in $F^{\times}_2$, and thus there exists a local character $\psi_2: F^{\times}_2 \rightarrow {\mathbf Z}/8{\mathbf Z}$ such that $\psi_2(16) = -1$. But from the above, we see that there is no global character $\chi$ such that $\chi_2 = \psi_2 \phi_2$ for an unramified character $\phi_2$. We now show that any character over a CM field can be written as an $m$th power of another character over some finite CM extension satisfying certain properties. The argument is essentially the same as in the proof of [@10author Theorem 7.1.11]. **Lemma 1**. *Let $\eta$ be a finite order character of $G_F$ for a CM field $F$, let $m$ be an integer, and let $F^{\mathrm{avoid}}/F$ be a finite extension. Then there exists a totally real Galois extension $M/{\mathbf Q}$ linearly disjoint from $F^{\mathrm{avoid}}$ and a character $\psi$ of $G_{M \cdot F}$ such that $\eta|_{G_{MF}}=\psi^m$.
Moreover, if $\eta$ is unramified at all $v$ dividing some finite set of primes $T$ of ${\mathbf Q}$ not including $2$, then we may choose $M$ so that every prime in $T$ splits completely in $M$, and $\psi$ to be unramified at all primes dividing those in $T$.* *Proof.* By induction, it suffices to consider the case when $m$ is prime. Assume that $\eta$ has order $n$. There is an exact sequence: $$0 \rightarrow {\mathbf Z}/m {\mathbf Z}\rightarrow {\mathbf Z}/mn{\mathbf Z}\rightarrow {\mathbf Z}/n{\mathbf Z}\rightarrow 0.$$ The character $\eta$ gives a class in $H^1(F,{\mathbf Z}/n{\mathbf Z})$; writing $\eta$ as an $m$th power amounts to lifting this class to $H^1(F,{\mathbf Z}/mn{\mathbf Z})$. The obstruction to this is an element $\partial\eta$ lying in $$H^2(F,{\mathbf Z}/m{\mathbf Z}) \hookrightarrow H^2(F(\zeta_m),\mu_m) \simeq \mathrm{Br}(F(\zeta_m))[m],$$ where the injectivity of the first map follows from Hochschild--Serre and the fact that $[F(\zeta_m):F]$ is prime to $m$ (since $m$ is prime). From the Albert--Brauer--Hasse--Noether theorem, there is an injection $$\mathrm{Br}(F(\zeta_m))[m] \hookrightarrow \bigoplus_{v} \mathrm{Br}(F(\zeta_m)_v)[m].$$ The image of the class $\partial\eta$ is zero for all $v$ not lying above a finite set $S$ of places of ${\mathbf Q}$ (the places where $\eta$ is ramified), and is zero for $v$ dividing places in $T$ (since $\eta$ is unramified there and there is no obstruction to lifting an unramified local character). Since $F$ is totally imaginary and $\mathrm{Br}({\mathbf C}) = 0$, we may assume $S$ consists only of finite primes and we may also assume that $\infty \in T$. If $K$ is a local field and $L/K$ has degree $m$ then the map $\mathrm{Br}(K)[m] \rightarrow \mathrm{Br}(L)$ is trivial [@MR911121 §VI, Thm. 3]. Hence any class in $\mathrm{Br}(F(\zeta_m))[m]$ is trivial in $F(\zeta_m) \cdot M$ whenever $[M_v:{\mathbf Q}_v]$ is divisible by $m [F(\zeta_m):{\mathbf Q}]$ for every prime $v$ in $S$.
Hence it suffices to find such a Galois extension $M/{\mathbf Q}$ linearly disjoint from $F^{\mathrm{avoid}}$, in which every prime in $T$ splits completely (since $\infty\in T$ this implies that $M$ is totally real). This is essentially done in [@ArtinTate §X Thm. 6] and we can appeal to [@cht Lemma 4.1.2] for the precise statement we need. Passing to $M \cdot F$ kills the obstruction class, so $\eta|_{G_{MF}}$ lifts to a character $\psi$ of $G_{M \cdot F}$ with $\psi^m = \eta|_{G_{MF}}$. Since $\eta$ is unramified at primes in $T$, for each $v|T$ the image of $\psi|_{I_{(M \cdot F)_v}}$ has order dividing $m$. Thus by Lemma [Lemma 1](#realize){reference-type="ref" reference="realize"} we may twist $\psi$ by another character of order $m$ (which doesn't change $\psi^m$) so that it is unramified at $v|T$. ◻ # Automorphy of compatible systems {#sec_automorphy_of_compatible_systems} ## Compatible systems and purity We recall the following definition from [@10author §7]. **Definition 1**. Let $F$ be a number field. A very weakly compatible system ${\mathcal R}$ (of rank $n$ representations of $G_F$, with coefficients in $M$) is by definition a tuple $$(M, S, \{ Q_v(X) \}, \{ r_\lambda \}, \{H_\tau\}),$$ where: 1. $M$ is a number field; 2. $S$ is a finite set of finite places of $F$; 3. for each finite place $v \not\in S$ of $F$, $Q_v(X) \in M[X]$ is a monic degree $n$ polynomial; 4. for each $\tau : F \hookrightarrow \overline{M}$, $H_\tau$ is a multiset of integers; 5. for each finite place $\lambda$ of $M$ (say of residue characteristic $l$), $$r_\lambda : G_F \to \mathop{\mathrm{GL}}_n(\overline{M}_\lambda)$$ is a continuous, semi-simple representation satisfying the following conditions: 1. If $v \not \in S$ and $v \nmid l$, then $r_\lambda|_{G_{F_v}}$ is unramified and the characteristic polynomial of $r_\lambda(\mathrm{Frob}_v)$ equals $Q_v(X)$. 2. For $l$ outside a set of primes of Dirichlet density 0, $r_\lambda$ is crystalline and $\mathrm{HT}_\tau(r_\lambda) = H_\tau$. 3. For every $l$, we have $\mathrm{HT}_\tau(\det r_\lambda) = \sum_{h \in H_\tau} h$.
If $F' / F$ is a finite extension then we may define the restricted very weakly compatible system $${\mathcal R}|_{G_{F'}} = (M, S_{F'}, \{ Q_w(X) \}, \{ r_\lambda|_{G_{F'}} \}, \{H_\tau' \}),$$ where $S_{F'}$ is the set of places of $F'$ lying above $S$, $Q_w(X) = \det r_\lambda(X - \mathrm{Frob}_w)$ (thus independent of $\lambda$), and $H'_{\tau} = H_{\tau|_F}$. If $${\mathcal R}_1 = (M, S_1, \{ Q_{1, v}(X) \}, \{ r_{1, \lambda} \}, \{H_{1, \tau}\}), \, {\mathcal R}_2 = (M, S_2, \{ Q_{2, v}(X) \}, \{ r_{2, \lambda} \}, \{H_{2, \tau} \})$$ are very weakly compatible systems with a common coefficient field $M$, then we can define the tensor product $${\mathcal R}_1 \otimes {\mathcal R}_2 = (M, S_1 \cup S_2, \{ Q_{ v}(X) \}, \{ r_{1, \lambda} \otimes r_{2, \lambda} \}, \{H_{ \tau}\}),$$ where we take $Q_v(X) = \det (r_{1, \lambda} \otimes r_{2, \lambda})(X - \mathrm{Frob}_v)$ (thus independent of $\lambda$) and $H_\tau = \{ k + l \, | \, k \in H_{1 ,\tau}, l \in H_{2, \tau} \}$ (sums taken with multiplicity). The following definition summarizes some possible properties of very weakly compatible systems. These were all defined in [@10author], with the exception of ([\[def_weak_aut\]](#def_weak_aut){reference-type="ref" reference="def_weak_aut"}) ('weakly automorphic'). This condition arises for us because we consider tensor products of compatible systems, one of which has poorly controlled ramification. Lemma [Lemma 1](#lem_weakly_automorphic_and_pure){reference-type="ref" reference="lem_weakly_automorphic_and_pure"} gives conditions under which 'weakly automorphic' can be upgraded to 'automorphic'. **Definition 1**. Let $${\mathcal R}= (M, S, \{ Q_v(X) \}, \{ r_\lambda \}, \{H_\tau\})$$ be a very weakly compatible system. We say that ${\mathcal R}$ is: 1. pure of weight $m \in {\mathbf Z}$, if it satisfies the following conditions: 1. 
for each $v \not\in S$, each root $\alpha$ of $Q_v(X)$ in $\overline{M}$, and each $\iota:\overline{M} \hookrightarrow{\mathbf C}$ we have $$| \iota \alpha |^2 = q_v^m;$$ 2. for each $\tau:F \hookrightarrow\overline{M}$ and each complex conjugation $c$ in $\mathop{\mathrm{Gal}}(\overline{M}/{\mathbf Q})$ we have $$H_{c \tau} = \{ m-h: \,\,\, h \in H_\tau\}.$$ 2. automorphic, if there is a regular algebraic, cuspidal automorphic representation $\pi$ of $\mathop{\mathrm{GL}}_n(\mathbf{A}_F)$ and an embedding $\iota : M \hookrightarrow {\mathbf C}$ such that for every finite place $v \not\in S$ of $F$, $\pi_v$ is unramified and $\operatorname{rec}^T_{F_v}(\pi_v)(\mathrm{Frob}_v)$ has characteristic polynomial $\iota(Q_v(X))$. 3. [\[def_weak_aut\]]{#def_weak_aut label="def_weak_aut"} weakly automorphic of level prime to $T$, if $T$ is a finite set of finite places of $F$, disjoint from $S$, and there is a regular algebraic, cuspidal automorphic representation $\pi$ of $\mathop{\mathrm{GL}}_n(\mathbf{A}_F)$ and an embedding $\iota : M \hookrightarrow {\mathbf C}$ such that for all but finitely many finite places $v \not\in S$ of $F$, and for every $v \in T$, $\pi_v$ is unramified and $\operatorname{rec}^T_{F_v}(\pi_v)(\mathrm{Frob}_v)$ has characteristic polynomial $\iota(Q_v(X))$. 4. irreducible, if for $l$ outside a set of primes of Dirichlet density 0, and for all places $\lambda | l$ of $M$, $r_\lambda$ is irreducible. 5. strongly irreducible, if for every finite extension $F' / F$, the compatible system ${\mathcal R}|_{G_{F'}}$ is irreducible. For a CM number field $F$ and a regular algebraic, cuspidal automorphic representation $\pi$ of $\mathop{\mathrm{GL}}_2(\mathbf{A}_F)$ of weight $\lambda$, there is an associated automorphic very weakly compatible system $$\mathcal{R}=(M,S,\{ Q_v(X) \}, \{ r_{\pi, \lambda} \}, \{ H_{\tau} \}),$$ where $H_\tau = \{ \lambda_{\tau, 1} + 1, \lambda_{\tau, 2} \}$ (see [@10author Lemma 7.1.10]).
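The purity condition can be made concrete in a toy example not taken from the paper: for an elliptic curve $E/{\mathbf Q}$ with good reduction at $p$, the associated rank-$2$ compatible system has $Q_p(X) = X^2 - a_p X + p$ with $a_p = p + 1 - \#E({\mathbf F}_p)$, and purity of weight $1$ (both roots of $Q_p$ of absolute value $p^{1/2}$) is exactly Hasse's bound $a_p^2 \le 4p$. A minimal numerical sketch, for the arbitrarily chosen curve $y^2 = x^3 + x + 1$:

```python
def count_points(p):
    """Number of points of y^2 = x^3 + x + 1 over F_p (affine points plus infinity)."""
    sq = {}  # value mod p -> number of square roots
    for y in range(p):
        sq[y * y % p] = sq.get(y * y % p, 0) + 1
    affine = sum(sq.get((x * x * x + x + 1) % p, 0) for x in range(p))
    return affine + 1

for p in [5, 7, 11, 13, 101]:  # primes of good reduction (p does not divide -496)
    a_p = p + 1 - count_points(p)  # trace of Frobenius: Q_p(X) = X^2 - a_p*X + p
    # purity of weight 1: both roots of Q_p have |.|^2 = p, equivalently a_p^2 <= 4p
    assert a_p * a_p <= 4 * p
    print(p, a_p)
```

For instance $a_5 = -3$, and the assertion checks $9 \le 20$; the analogous bound for general weight $m$ is the purity statement in (1)(a) above.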
We now recall that the potential automorphy of symmetric powers of ${\mathcal R}$ is enough to imply purity. **Lemma 1**. *Let ${\mathcal R}= (M, S, \{ Q_v(X) \}, \{ r_\lambda \}, \{H_\tau\})$ be a compatible system of rank 2 representations of $G_F$ such that $H_{\tau} = \{0,m\}$ for a fixed $m \in {\mathbf N}$ for all $\tau$. Fix a finite place $v_0$ of $F$ which is not in $S$, and let $X_0 = \{v_0\}$. Suppose that for infinitely many $n\ge 1$, we can find a finite Galois extension $F_n / F$ such that the compatible system $\mathop{\mathrm{Sym}}^{n-1} \mathcal{R}|_{G_{F_n}}$ is weakly automorphic of level prime to $X_{0, F_n} = \{ v | v_0 \}$. Then the roots $\alpha_1, \alpha_2$ of $Q_{v_0}(X)$ in $\overline{M}$ satisfy $$| \iota \alpha_i |^2 = q_{v_0}^m \quad (i = 1, 2)$$ for each $\iota:\overline{M} \hookrightarrow{\mathbf C}$.* *Proof.* Choose a place $v_n|v_0$ in $F_n$ and fix $\iota:\overline{M}\hookrightarrow{\mathbf C}$. We are assuming that $\mathop{\mathrm{Sym}}^{n-1} \mathcal{R}|_{G_{F_n}}$ is associated to a cuspidal automorphic representation $\Pi$ of $\mathop{\mathrm{GL}}_{n}(\mathbf{A}_{F_n})$ and $\Pi_{v_n}$ is unramified. Up to a finite order character, the determinant of our rank $n$ automorphic compatible system is given by the $-mn(n-1)/2$th power of the cyclotomic character, so the central character of $\Pi$ is (again, up to a finite order Hecke character) $| \cdot |^{n(m-1)(1-n)/2}$, and in particular $\Pi | \cdot |^{(m-1)(n-1)/2}$ is unitary. Since we know that $|\iota(\alpha_1\alpha_2)| = q_{v_0}^m$, it suffices to prove that $|\iota\alpha_i|\le q_{v_0}^{m/2}$ for $i = 1, 2$. Let $q_{v_n} = q_{v_0}^f$. As in the proof of [@10author Cor. 7.1.13], we can apply the Jacquet--Shalika bound [@jsajm103 Cor. 2.5] to deduce that $|\iota\left( \alpha_i^{f(n-1)}\right)|\le q_{v_n}^{((m-1)(n-1) + n)/2}$, so, since $(m-1)(n-1) + n = m(n-1) + 1$, we get $|\iota\alpha_i|\le q_{v_0}^{m/2+1/(2(n-1))}$. Letting $n$ tend to $\infty$ gives the desired bound on $|\iota\alpha_i|$. ◻ **Lemma 1**.
*Let $F$ be a CM number field, and let $${\mathcal R}= (M, S, \{ Q_v(X) \}, \{ r_\lambda \}, \{H_\tau\})$$ be a very weakly compatible system of rank $n$ representations of $G_F$ which is weakly automorphic, corresponding to a regular algebraic, cuspidal automorphic representation $\pi$ of $\mathop{\mathrm{GL}}_n(\mathbf{A}_F)$, and pure of weight $m \in {\mathbf Z}$. Then ${\mathcal R}$ is automorphic.* *Proof.* Choose some embedding of $M$ in ${\mathbf C}$. By assumption, there is a finite set $S'\supseteq S$ of finite places of $F$ such that for each $v \not \in S'$, $\pi_v$ is unramified and $\operatorname{rec}^T_{F_v}(\pi_v)(\mathrm{Frob}_v)$ has characteristic polynomial $Q_v(X)$. We must show that this holds for all $v \not\in S$. Choose $v \in S' - S$, a rational prime $p$ not lying under $v$, and an isomorphism $\iota : \overline{{\mathbf Q}}_p \to {\mathbf C}$. Let $\lambda$ denote the place of $M$ induced by $\iota^{-1}$. Then the Chebotarev density theorem implies that there is an isomorphism $r_\lambda \cong r_\iota(\pi)$. By assumption, $r_\lambda|_{G_{F_v}}$ is unramified and pure of weight $m$. By [@ilavarma Theorem 1], there is an isomorphism $r_\iota(\pi)|_{W_{F_v}}^{\mathrm{ss}} \cong \iota^{-1} \operatorname{rec}^T_{F_v}(\pi_v)^{\mathrm{ss}}$. We deduce that $\pi_v$ is a subquotient of an unramified principal series, namely the one with Satake parameter determined by $Q_v(X)$. Since $r_\lambda|_{G_{F_v}}$ is pure, this principal series representation is irreducible and $\pi_v$ is unramified, as desired. ◻ **Lemma 1**. *Let $F$ be a number field and let $${\mathcal R}= (M, S, \{ Q_v(X) \}, \{ r_\lambda \}, \{H_\tau\})$$ be a very weakly compatible system of rank $2$ representations of $G_F$ which is strongly irreducible. Let ${\mathcal L}({\mathcal R})$ denote the set of primes $l$ satisfying the following conditions:* 1. *$l \not\in S$ and for each place $\lambda | l$ of $M$, $r_\lambda$ is crystalline of Hodge--Tate weights $H_\tau$.* 2. 
*For each place $\lambda | l$ of $M$, $\overline{r}_\lambda(G_{\widetilde{F}})$ contains a conjugate of $\mathop{\mathrm{SL}}_2({\mathbf F}_l)$, where $\widetilde{F}$ is the Galois closure of $F / {\mathbf Q}$.* *Then ${\mathcal L}({\mathcal R})$ has Dirichlet density 1.* *Proof.* The set of primes $l$ having property (1) has Dirichlet density 1, by definition of a very weakly compatible system. The lemma therefore follows from [@10author Lemma 7.1.3]. ◻ ## Potential automorphy theorems {#subsec_proof_of_pot_aut} Our goal in this section is to prove Theorem [Theorem 1](#thm_pot_aut_of_powers){reference-type="ref" reference="thm_pot_aut_of_powers"}. The proof will occupy the whole section, but to keep the presentation organized and somewhat motivated, we deduce it from Theorem [Theorem 1](#thm_pot_aut_of_powers_moderately_many_assumptions){reference-type="ref" reference="thm_pot_aut_of_powers_moderately_many_assumptions"} below, which we will in turn deduce from Proposition [Proposition 1](#thm_pot_aut_of_powers_all_the_assumptions){reference-type="ref" reference="thm_pot_aut_of_powers_all_the_assumptions"}. **Theorem 1**. *Let $F$ be an imaginary CM number field, and let $${\mathcal R}= (M, S, \{ Q_v(X) \}, \{ r_\lambda \}, \{H_\tau\})$$ be a very weakly compatible system of rank $2$ representations of $G_F$. Let $m \geq 1$ be an integer, and suppose that the following conditions are satisfied:* 1. *For each $\tau$, $H_\tau = \{ 0, m \}$.* 2. *$\det r_\lambda = \varepsilon^{-m}$.* 3. *[\[last_assumption\]]{#last_assumption label="last_assumption"} ${\mathcal R}$ is strongly irreducible.* *Then ${\mathcal R}$ is pure of weight $m$, and for each $n \geq 1$, there exists a finite CM extension $F_n / F$ such that $F_n / {\mathbf Q}$ is Galois and $\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}|_{G_{F_n}}$ is automorphic.* *Proof.* We may assume that $m \ge 2$, since otherwise the result follows from [@10author Cor 7.1.12]. Let $v_0 \not\in S$ be a place of $F$. 
Theorem [Theorem 1](#thm_pot_aut_of_powers_moderately_many_assumptions){reference-type="ref" reference="thm_pot_aut_of_powers_moderately_many_assumptions"} states that we can find, for each $n \geq 1$, a CM extension $F_n / F$, Galois over ${\mathbf Q}$, such that $\mathop{\mathrm{Sym}}^{n-1}{\mathcal R}|_{G_{F_n}}$ is weakly automorphic of level prime to $\{ v | v_0 \}$. By Lemma [Lemma 1](#lem:pot aut implies purity){reference-type="ref" reference="lem:pot aut implies purity"}, the roots of $Q_{v_0}(X)$ are $q_{v_0}$-Weil numbers of weight $m$. Since $v_0 \not\in S$ is arbitrary, this shows that the compatible system ${\mathcal R}$ is pure of weight $m$. We can then apply Lemma [Lemma 1](#lem_weakly_automorphic_and_pure){reference-type="ref" reference="lem_weakly_automorphic_and_pure"} to conclude that $\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}|_{G_{F_n}}$ is automorphic, as required. ◻ *Remark 1*. We assume in our arguments below that $m>1$. Our argument certainly applies in principle to the case $m=1$, but certain statements we make throughout the proof assume that $m \ge 2$, and so this assumption avoids having to make the necessary extra remarks to cover the case $m=1$. Moreover, our argument in the case $m=1$ would involve tensoring ${\mathcal R}$ with auxiliary $1$-dimensional representations, and, unsurprisingly, can be simplified to the point where it becomes very similar to the proof of [@10author Cor 7.1.12]. Before giving our first technical result towards the proof of Theorem [Theorem 1](#thm_pot_aut_of_powers_moderately_many_assumptions){reference-type="ref" reference="thm_pot_aut_of_powers_moderately_many_assumptions"} (and hence Theorem [Theorem 1](#thm_pot_aut_of_powers){reference-type="ref" reference="thm_pot_aut_of_powers"} above), we sketch the idea of the proof.
We begin with the strongly irreducible, very weakly compatible system ${\mathcal R}$ of rank 2 and parallel Hodge--Tate weights $\{0, m \}$, and wish to show that $\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}$ is potentially automorphic. This presents difficulties since the compatible system $\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}$ has parallel Hodge--Tate weights $\{ 0, m, 2m, \dots, (n-1) m\}$, while the auxiliary motives that we can construct to show potential automorphy have consecutive (and parallel) Hodge--Tate weights (and moreover, our automorphy lifting theorem Theorem [Theorem 1](#thm:main_automorphy_lifting_theorem){reference-type="ref" reference="thm:main_automorphy_lifting_theorem"} applies only to Galois representations with consecutive Hodge--Tate weights). To get around this, we construct auxiliary compatible systems as follows: - An auxiliary compatible system ${\mathcal R}_{\mathrm{aux}}$ of rank $m$ and with consecutive (and parallel) Hodge--Tate weights $\{ 0, 1, \dots, m-1 \}$. Then $(\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}) \otimes {\mathcal R}_{\mathrm{aux}}$ has rank $nm$ and consecutive (and parallel) Hodge--Tate weights $\{ 0, 1, \dots, nm-1 \}$. - A second auxiliary compatible system ${\mathcal R}_{\mathrm{CM}}$ of rank $m$ and with consecutive (and parallel) Hodge--Tate weights $\{ 0, 1, \dots, m-1 \}$ which is moreover induced from a character. - A third auxiliary compatible system ${\mathcal S}_{\mathrm{UA}}$ of rank $nm$ with consecutive (and parallel) Hodge--Tate weights $\{ 0, 1, \dots, mn-1 \}$, and which is moreover automorphic. We will construct ${\mathcal S}_{\mathrm{UA}}$ (and ${\mathcal R}_{\mathrm{aux}}$) as a member of the families of motives considered in §[4](#ssec:dwork){reference-type="ref" reference="ssec:dwork"}. (The subscript 'UA' stands for 'universally automorphic'.) 
These are chosen to behave well with respect to distinct primes $p, r$ as follows: - There is a congruence modulo $p$ linking ${\mathcal S}_{\mathrm{aux}} := ( \mathop{\mathrm{Sym}}^{n-1} {\mathcal R}) \otimes {\mathcal R}_{\mathrm{aux}}$ and ${\mathcal S}_{\mathrm{UA}}$. We will apply Theorem [Theorem 1](#thm:main_automorphy_lifting_theorem){reference-type="ref" reference="thm:main_automorphy_lifting_theorem"} to conclude that ${\mathcal S}_{\mathrm{aux}}$ is automorphic. - There is a congruence modulo $r$ linking ${\mathcal R}_{\mathrm{aux}}$ and ${\mathcal R}_{\mathrm{CM}}$, and therefore also linking  ${\mathcal S}_{\mathrm{aux}}$ and ${\mathcal S}_{\mathrm{CM}} := ( \mathop{\mathrm{Sym}}^{n-1} {\mathcal R}) \otimes {\mathcal R}_{\mathrm{CM}}$. We will apply Theorem [Theorem 1](#thm:main_automorphy_lifting_theorem){reference-type="ref" reference="thm:main_automorphy_lifting_theorem"} a second time to conclude that ${\mathcal S}_{\mathrm{CM}}$ is automorphic. - Since ${\mathcal R}_{\mathrm{CM}}$ is induced from a Hecke character, ${\mathcal S}_{\mathrm{CM}}$ is also induced (from an $n$-dimensional compatible system). We will then be able to apply the description of the image of automorphic induction given in [@MR1007299] to conclude that $\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}$ is itself automorphic. The most significant conditions that must be satisfied to apply Theorem [Theorem 1](#thm:main_automorphy_lifting_theorem){reference-type="ref" reference="thm:main_automorphy_lifting_theorem"} in each case are the non-degeneracy of the residual images and the 'connects' relation locally at the $p$-adic (resp. $r$-adic) places of $F$. The non-degeneracy of the residual images will be easy to arrange by careful choice of data. It is the 'connects' relation that is more serious and imposes the circuitous route followed here to prove the theorem. 
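Schematically (in the notation of the sketch above, and purely as a summary of the steps just described), the two congruences and the resulting chain of automorphy transfers are:

```latex
\begin{gathered}
\mathcal{S}_{\mathrm{UA}} \;\equiv\; \mathcal{S}_{\mathrm{aux}}
  = (\operatorname{Sym}^{n-1}\mathcal{R}) \otimes \mathcal{R}_{\mathrm{aux}} \pmod{\mathfrak{p}},
\qquad
\mathcal{R}_{\mathrm{aux}} \;\equiv\; \mathcal{R}_{\mathrm{CM}} \pmod{\mathfrak{r}}, \\
\mathcal{S}_{\mathrm{UA}}
  \;\xrightarrow{\;\bmod\ \mathfrak{p}\;}\;
\mathcal{S}_{\mathrm{aux}}
  \;\xrightarrow{\;\bmod\ \mathfrak{r}\;}\;
\mathcal{S}_{\mathrm{CM}} = (\operatorname{Sym}^{n-1}\mathcal{R}) \otimes \mathcal{R}_{\mathrm{CM}}
  \;\longrightarrow\;
\operatorname{Sym}^{n-1}\mathcal{R},
\end{gathered}
```

where $\equiv$ denotes an isomorphism of residual representations at the indicated place of $M$, the first two arrows are applications of the automorphy lifting theorem, and the final arrow is cyclic automorphic induction/descent.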
The statement of Proposition [Proposition 1](#thm_pot_aut_of_powers_all_the_assumptions){reference-type="ref" reference="thm_pot_aut_of_powers_all_the_assumptions"} below is long but is merely a precise formulation of the properties required of the various auxiliary compatible systems needed to carry out the above sketch. The main point in the proof of Theorem [Theorem 1](#thm_pot_aut_of_powers_moderately_many_assumptions){reference-type="ref" reference="thm_pot_aut_of_powers_moderately_many_assumptions"} will be to show how to construct auxiliary compatible systems with these properties. **Proposition 1**. *Let $F$ be an imaginary CM number field, let $m \ge 2$ and $n\ge 1$ be integers, and let $X_0$ be a finite set of finite places of $F$. Let $${\mathcal R}= (M, S, \{ Q_v(X) \}, \{ r_\lambda \}, \{H_\tau\})$$ be a very weakly compatible system of rank $2$ representations of $G_F$ satisfying the following conditions:* 1. *[\[ass_HT\]]{#ass_HT label="ass_HT"} For each $\tau$, $H_\tau = \{ 0, m \}$.* 2. *$\det r_\lambda = \varepsilon^{-m}$.* 3. *[\[last_assumption_second\]]{#last_assumption_second label="last_assumption_second"} ${\mathcal R}$ is strongly irreducible.* 4. *$X_0 \cap S = \emptyset$.* 5. *[\[ass_galois\]]{#ass_galois label="ass_galois"} $F / {\mathbf Q}$ is Galois and contains an imaginary quadratic field $F_0$. Moreover, $M \leq {\mathbf C}$.* *Suppose we can find the following additional data:* 6. *A cyclic totally real extension $E / {\mathbf Q}$ of degree $m$, linearly disjoint from $F$, and a character $\Psi : \mathbf{A}_L^\times \to M^\times$, where $L = E \cdot F$, satisfying the following conditions:* 1. 
*There is an embedding $\tau_0 : F_0 \to {\mathbf C}$, and a labelling $\tau_1, \dots, \tau_m : E \cdot F_0 \to {\mathbf C}$ of the embeddings $E \cdot F_0 \to {\mathbf C}$ which extend $\tau_0$ such that for each $\alpha \in L^\times$, we have $$\Psi(\alpha) = \prod_{i=1}^m \tau_i(\mathbf{N}_{L / E \cdot F_0}(\alpha))^{m-i} c \tau_i(\mathbf{N}_{L / E \cdot F_0}(\alpha))^{i-1}.$$* *We let $\{ \Psi_\lambda \}$ denote the weakly compatible system associated to $\Psi$, and let $${\mathcal R}_{\mathrm{CM}} = \{ \mathop{\mathrm{Ind}}_{G_L}^{G_F} \Psi_\lambda \} = (M, S_{\mathrm{CM}}, \{ Q_{\mathrm{CM}, v }(X) \}, \{ r_{\mathrm{CM}, \lambda} \} , \{ H_{\mathrm{CM}, \tau} \})$$ denote the induced weakly compatible system. (Then $H_{\mathrm{CM}, \tau} = \{ 0, 1, \dots, m-1 \}$ for all $\tau$, and we take $S_{\mathrm{CM}}$ to be the set of places of $F$ ramified in $L$ or above which $\Psi$ is ramified.)* 1. *For all $\lambda$, $\det r_{\mathrm{CM}, \lambda} = \varepsilon^{-m(m-1)/2}$.* 7. *Distinct primes $p, r$, not dividing any place of $S$, and places $\mathfrak{p}, \mathfrak{r}$ of $M$ lying above them.* 8. *A weakly compatible system of rank $m$ representations of $G_F$ $${\mathcal R}_{\mathrm{aux}} = (M, S_{\mathrm{aux}}, \{ Q_{\mathrm{aux}, v}(X) \}, \{ r_{\mathrm{aux}, \lambda} \}, \{ H_{\mathrm{aux}, \tau} \}),$$ satisfying the following conditions:* 1. *${\mathcal R}_{\mathrm{aux}}$ is pure of weight $m-1$ and $S_{\mathrm{aux}}$ does not intersect $X_0 \cup \{ v | pr \}$.* 2. *For all $\lambda$, $\det r_{\mathrm{aux}, \lambda} = \varepsilon^{-m(m-1)/2}$. For all $\tau$, $H_{\mathrm{aux}, \tau} = \{ 0, 1, \dots, m-1 \}$.* 9. *A weakly compatible system $${\mathcal S}_{\mathrm{UA}} = ( M, S_{\mathrm{UA}}, \{ Q_{\mathrm{UA}, v}(X) \}, \{ s_{\mathrm{UA}, \lambda} \}, \{ H_{\mathrm{UA}, \tau} \})$$ of rank $nm$ representations of $G_F$, satisfying the following conditions:* 1.
*${\mathcal S}_{\mathrm{UA}}$ is pure of weight $nm-1$ and $S_{\mathrm{UA}}$ does not intersect $X_0 \cup \{ v | pr \}$.* 2. *For all $\lambda$, $\det s_{\mathrm{UA}, \lambda} = \varepsilon^{-nm(nm-1)/2}$. For all $\tau$, $H_{\mathrm{UA}, \tau} = \{ 0, 1, \dots, n m-1 \}$.* 3. *${\mathcal S}_{\mathrm{UA}}$ is weakly automorphic of level prime to $X_0 \cup \{ v | pr \}$.* *Let ${\mathcal S}_{\mathrm{aux}} = \{ s_{\mathrm{aux}, \lambda} \} = (\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}) \otimes {\mathcal R}_{\mathrm{aux}}$ and ${\mathcal S}_{\mathrm{CM}} = \{ s_{\mathrm{CM}, \lambda} \} = (\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}) \otimes {\mathcal R}_{\mathrm{CM}}$. These compatible systems of rank $nm$ have coefficients in the number field $M$. Suppose that these data satisfy the following additional conditions:* 10. *$L/F$ is unramified at $X_0 \cup \{ v | pr \}$, and $\Psi$ is unramified at the places of $L$ lying above $X_0\cup \{ v | pr \}$. (Then $S_\mathrm{CM}\cap (X_0 \cup \{ v | p r \}) = \emptyset$.)* 11. *$p > 2 n m + 1$, and $[F(\zeta_p) : F] = p-1$.* 12. *$r > 2 n m + 1$, $r$ splits completely in $E \cdot F_0$, and $[L(\zeta_r) : L] = r-1$.* 13. *Up to conjugation, there are sandwiches $$\mathop{\mathrm{SL}}_2({\mathbf F}_p) \leq \overline{r}_\mathfrak{p}(G_F) \leq \mathop{\mathrm{GL}}_2({\mathbf F}_p)$$ and $$\mathop{\mathrm{SL}}_2({\mathbf F}_r) \leq \overline{r}_\mathfrak{r}(G_F) \leq \mathop{\mathrm{GL}}_2({\mathbf F}_r).$$ If $m > 2$ then the image $\overline{r}_{\mathrm{aux}, \mathfrak{p}}(G_F)$ is a conjugate of $\operatorname{GU}_m({\mathbf F}_{p^2})$ and $\overline{r}_{\mathrm{aux}, \mathfrak{p}}$ has multiplier character $\overline{\varepsilon}^{1-m}$. If $m = 2$ then the image $\overline{r}_{\mathrm{aux}, \mathfrak{p}}(G_F)$ is a conjugate of $\mathop{\mathrm{GL}}_2({\mathbf F}_p)$. The representation $\overline{r}_{\mathrm{CM}, \mathfrak{r}}|_{G_{F(\zeta_r)}}$ is irreducible.
If $m = 2$, then the extensions of $F(\zeta_p)$ cut out by the projective representations associated to $\overline{r}_\mathfrak{p}|_{G_{F(\zeta_p)}}$ and $\overline{r}_{\mathrm{aux}, \mathfrak{p}}|_{G_{F(\zeta_p)}}$ are linearly disjoint.* 14. *There are isomorphisms $\overline{s}_{\mathrm{UA}, \mathfrak{p}} \cong \overline{s}_{\mathrm{aux}, \mathfrak{p}}$ and $\overline{r}_{\mathrm{aux}, \mathfrak{r}} \cong \overline{r}_{\mathrm{CM}, \mathfrak{r}}$.* 15. *There is a decomposition $S_p = \Sigma^{\textrm{ord}} \sqcup \Sigma^{\textrm{ss}}$ of the set $S_p$ of $p$-adic places of $F$ such that for each place $v | p$ of $F$, $F_v$ contains ${\mathbf Q}_{p^2}$, $\overline{r}_\mathfrak{p}|_{G_{F_v}}$ and $\overline{\rho}_{2, m, 0}|_{G_{F_v}}$ are trivial, and:* 1. *if $v \in \Sigma^{\textrm{ord}}$, then $r_{\mathfrak{p}}|_{G_{F_v}}$ is crystalline ordinary;* 2. *if $v \in \Sigma^{\textrm{ss}}$, then $r_{\mathfrak{p}}|_{G_{F_v}} \sim \rho_{2, m, 0}|_{G_{F_v}}$.* 16. *If $v \in \Sigma^{\textrm{ord}}$, then $\overline{r}_{\mathrm{aux}, \mathfrak{p}}|_{G_{F_v}}$ is trivial and $r_{\mathrm{aux}, \mathfrak{p}}|_{G_{F_v}}$ and $s_{\mathrm{UA}, \mathfrak{p}}|_{G_{F_v}}$ are both crystalline ordinary. If $v \in \Sigma^{\textrm{ss}}$, then $\overline{r}_{\mathrm{aux}, \mathfrak{p}}|_{G_{F_v}}$ is trivial, $r_{\mathrm{aux}, \mathfrak{p}}|_{G_{F_v}}$ and $s_{\mathrm{UA}, \mathfrak{p}}|_{G_{F_v}}$ are both crystalline, and $r_{\mathrm{aux}, \mathfrak{p}}|_{G_{F_v}} \sim \rho_{m, 1, 0}|_{G_{F_v}}$ and $s_{\mathrm{UA}, \mathfrak{p}}|_{G_{F_v}} \sim \rho_{nm, 1, 0}|_{G_{F_v}}$.* 17. 
*[\[ass_CM_triv\]]{#ass_CM_triv label="ass_CM_triv"} For each place $v | r$ of $F$, $\overline{r}_{\mathrm{aux},\mathfrak{r}}|_{G_{F_v}} \cong \overline{r}_{\mathrm{CM}, \mathfrak{r}}|_{G_{F_v}}$ is trivial and $r_{\mathrm{aux},\mathfrak{r}}|_{G_{F_v}}$ is crystalline ordinary.* *Then $\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}$ is weakly automorphic of level prime to $X_0$.* *Proof.* We first show that ${\mathcal S}_{\mathrm{aux}}$ is weakly automorphic by applying Theorem [Theorem 1](#thm:main_automorphy_lifting_theorem){reference-type="ref" reference="thm:main_automorphy_lifting_theorem"} to $s_{\mathrm{aux}, \mathfrak{p}}$. To justify this, we need to check that $\overline{s}_{\mathrm{aux}, \mathfrak{p}} \cong \overline{s}_{\mathrm{UA}, \mathfrak{p}}$ satisfies the Taylor--Wiles conditions (as formulated in Definition [Definition 1](#tw){reference-type="ref" reference="tw"}) and that for each place $v | p$ of $F$, we have $s_{\mathrm{aux}, \mathfrak{p}}|_{G_{F_v}} \sim s_{\mathrm{UA}, \mathfrak{p}}|_{G_{F_v}}$. The Taylor--Wiles conditions hold by Lemma [Lemma 1](#yeomanslemmanew){reference-type="ref" reference="yeomanslemmanew"}. If $v \in \Sigma^{\mathrm{ord}}$, then $\overline{s}_{\mathrm{aux}, \mathfrak{p}}|_{G_{F_v}}$ is trivial, and both $s_{\mathrm{aux}, \mathfrak{p}} \cong (\mathop{\mathrm{Sym}}^{n-1} r_\mathfrak{p}) \otimes r_{\mathrm{aux}, \mathfrak{p}}|_{G_{F_v}}$ and $s_{\mathrm{UA}, \mathfrak{p}}|_{G_{F_v}}$ are crystalline ordinary, so Lemma [Lemma 1](#prep2){reference-type="ref" reference="prep2"} implies that $s_{\mathrm{aux}, \mathfrak{p}}|_{G_{F_v}} \sim s_{\mathrm{UA}, \mathfrak{p}}|_{G_{F_v}}$. 
If $v \in \Sigma^{{\mathrm{ss}}}$, then $\overline{s}_{\mathrm{aux}, \mathfrak{p}}|_{G_{F_v}}$ is trivial and our assumptions imply that $\mathop{\mathrm{Sym}}^{n-1} r_\mathfrak{p}|_{G_{F_v}} \sim \rho_{n,m,0}|_{G_{F_v}}$ and $r_{\mathrm{aux}, \mathfrak{p}}|_{G_{F_v}} \sim \rho_{m, 1, 0}|_{G_{F_v}}$ and $s_{\mathrm{UA}, \mathfrak{p}}|_{G_{F_v}} \sim \rho_{nm, 1, 0}|_{G_{F_v}}$, hence $$s_{\mathrm{aux}, \mathfrak{p}}|_{G_{F_v}} \sim \rho_{n,m,0}|_{G_{F_v}} \otimes \rho_{m, 1, 0}|_{G_{F_v}} \cong \rho_{nm, 1, 0}|_{G_{F_v}} \sim s_{\mathrm{UA}, \mathfrak{p}}|_{G_{F_v}}.$$ Therefore ${\mathcal S}_{\mathrm{aux}}$ is weakly automorphic of level prime to $X_0 \cup \{ v | pr \}$. We next show that ${\mathcal S}_{\mathrm{CM}}$ is weakly automorphic of level prime to $X_0$ by applying Theorem [Theorem 1](#thm:main_automorphy_lifting_theorem){reference-type="ref" reference="thm:main_automorphy_lifting_theorem"} to $s_{\mathrm{CM}, \mathfrak{r}}$. The Taylor--Wiles conditions for $\overline{s}_{\mathrm{CM}, \mathfrak{r}} \cong \overline{s}_{\mathrm{aux}, \mathfrak{r}}$ hold by Lemma [Lemma 1](#yeomanslemmanewvariant){reference-type="ref" reference="yeomanslemmanewvariant"}. To check the connectedness conditions, let $v | r$ be a place of $F$. Then $\overline{r}_{\mathrm{CM}, \mathfrak{r}}|_{G_{F_v}} \cong \overline{r}_{\mathrm{aux}, \mathfrak{r}}|_{G_{F_v}}$ is trivial and $r_{\mathrm{aux}, \mathfrak{r}}|_{G_{F_v}}$ is crystalline ordinary, by assumption. Since $r$ splits completely in $E$, $v$ splits completely in $L$, and we can label the places $w_i | v$ so that $w_i|_E$ is the place induced by the embedding $\tau_i$. 
There is an isomorphism $$r_{\mathrm{CM}, \mathfrak{r}}|_{G_{F_v}} \cong \oplus_{i=1}^{m} \alpha_{i},$$ where for each $i = 1, \dots, m$, $\alpha_{i} : G_{F_{v}} \to \overline{M}_\mathfrak{r}^\times$ is a continuous character with the property that for any $u \in {\mathcal O}_{F_v}^\times$, we have $$\alpha_{i}( \operatorname{Art}_{F_v}(u) ) = \prod_{\substack{ \tau \in \mathop{\mathrm{Hom}}(L_{w_i}, \overline{M}_\mathfrak{r}) \\ \tau|_{F_0} = \tau_0}} \tau(u)^{-(m-i)}$$ if $v$ lies above the place of $F_0$ induced by $\tau_0$, and $$\alpha_{i}( \operatorname{Art}_{F_v}(u) ) = \prod_{\substack{ \tau \in \mathop{\mathrm{Hom}}(L_{w_i}, \overline{M}_\mathfrak{r}) \\ \tau|_{F_0} = c \tau_0}} \tau(u)^{-(i-1)}$$ otherwise. It follows that $r_{\mathrm{CM}, \mathfrak{r}}|_{G_{F_v}}$ is also crystalline ordinary. By Lemma [Lemma 1](#prep2){reference-type="ref" reference="prep2"}, we have $r_{\mathrm{CM}, \mathfrak{r}}|_{G_{F_v}} \sim r_{\mathrm{aux}, \mathfrak{r}}|_{G_{F_v}}$, and using [@BLGGT p. 530, (5)] it follows that $$s_{\mathrm{CM}, \mathfrak{r}}|_{G_{F_v}} = \mathop{\mathrm{Sym}}^{n-1} r_{\mathfrak{r}}|_{G_{F_v}} \otimes r_{\mathrm{CM}, \mathfrak{r}}|_{G_{F_v}} \sim \mathop{\mathrm{Sym}}^{n-1} r_{\mathfrak{r}}|_{G_{F_v}} \otimes r_{\mathrm{aux}, \mathfrak{r}}|_{G_{F_v}} = s_{\mathrm{aux}, \mathfrak{r}}|_{G_{F_v}}.$$ We can now show that $\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}$ is weakly automorphic of level prime to $X_0$. Let $\pi$ be the regular algebraic, cuspidal automorphic representation of $\mathop{\mathrm{GL}}_{nm}(\mathbf{A}_F)$ which is associated to the compatible system ${\mathcal S}_{\mathrm{CM}}$. By construction, $\pi$ is unramified at $X_0$. Let $\eta : F^\times \backslash \mathbf{A}_F^\times \to {\mathbf C}^\times$ be the character of order $m$ associated to the inducing field $L / F$ of ${\mathcal R}_{\mathrm{CM}}$. Then $\pi \cong \pi \otimes (\eta \circ \det)$, so by cyclic base change [@MR1007299 Ch.
3, Thm 4.2], we deduce that $\pi$ is the induction of a cuspidal automorphic representation $\Pi$ of $\mathop{\mathrm{GL}}_{n}(\mathbf{A}_{L})$, which by consideration of the infinity type of $\pi$ must also be regular algebraic. More precisely, for any place $w$ of $L$ lying above a place $v$ of $F$, we have $$\operatorname{rec}_{F_v}(\pi_v)|_{W_{L_w}} = \oplus_{i=0}^{m-1} \operatorname{rec}_{L_w}(\Pi^{\sigma^i}),$$ where $\sigma$ is a generator for $\mathop{\mathrm{Gal}}(L / F)$. Since $L$ is CM and $\Pi$ is regular algebraic, $\Pi$ has an associated compatible system of $l$-adic Galois representations. If $l$ is a prime and $\iota : \overline{{\mathbf Q}}_l \to {\mathbf C}$ is an isomorphism, with $\iota^{-1}$ inducing the place $\lambda$ of $M$, then we find $$r_\iota(\pi)|_{G_L} = \oplus_{i=0}^{m-1} \mathop{\mathrm{Sym}}^{n-1} r_\lambda|_{G_L} \otimes \Psi_\lambda^{\sigma^{i}} \cong \oplus_{i=0}^{m-1} r_\iota(\Pi^{\sigma^i}).$$ Choosing $\lambda$ so that $\mathop{\mathrm{Sym}}^{n-1} r_\lambda$ is irreducible (e.g. $\lambda = \mathfrak{p}$), we find that $\mathop{\mathrm{Sym}}^{n-1} r_\lambda|_{G_L}$ is a character twist of $r_\iota(\Pi)$. Undoing the twist and performing cyclic descent (using the irreducibility of $\mathop{\mathrm{Sym}}^{n-1} r_\lambda|_{G_L}$, as in [@10author Proposition 6.5.13]) shows that $\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}$ is weakly automorphic over $F$ of level prime to $X_0$, as desired. ◻ The next theorem is proved by constructing the data required by Proposition [Proposition 1](#thm_pot_aut_of_powers_all_the_assumptions){reference-type="ref" reference="thm_pot_aut_of_powers_all_the_assumptions"} (after possibly extending the base field $F$). **Theorem 1**. *Let $F$ be an imaginary CM number field, and let $${\mathcal R}= (M, S, \{ Q_v(X) \}, \{ r_\lambda \}, \{H_\tau\})$$ be a very weakly compatible system of rank $2$ representations of $G_F$. Let $m \ge 2$ be an integer, and suppose that the following conditions are satisfied:* 1.
*For each $\tau$, $H_\tau = \{ 0, m \}$.* 2. *$\det r_\lambda = \varepsilon^{-m}$.* 3. *[\[last_assumption_third\]]{#last_assumption_third label="last_assumption_third"} ${\mathcal R}$ is strongly irreducible.* *Let $v_0 \not\in S$ be a place of $F$. Then for each $n \geq 1$, there is a CM extension $F_n/ F$, Galois over ${\mathbf Q}$, such that $\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}|_{G_{F_n}}$ is weakly automorphic of level prime to $\{v|v_0\}$.* *Proof.* We can fix $n \geq 1$. Let $p_0$ denote the residue characteristic of $v_0$, let $F_0$ be an imaginary quadratic field, and let $F_1$ denote the Galois closure of $F \cdot F_0$ over ${\mathbf Q}$. Embed $M$ in ${\mathbf C}$ arbitrarily, and let $X_1$ denote the set of places of $F_1$ lying above $v_0$. It suffices to prove the following statement: - There exists a CM extension $F' / F_1$, Galois over ${\mathbf Q}$, such that (after possibly enlarging $M$) ${\mathcal R}|_{G_{F'}}$ satisfies the hypotheses of Proposition [Proposition 1](#thm_pot_aut_of_powers_all_the_assumptions){reference-type="ref" reference="thm_pot_aut_of_powers_all_the_assumptions"} with $X_0$ taken to be the set of places of $F'$ lying above $X_1$. Indeed, Proposition [Proposition 1](#thm_pot_aut_of_powers_all_the_assumptions){reference-type="ref" reference="thm_pot_aut_of_powers_all_the_assumptions"} will then imply that $\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}|_{G_{F'}}$ is weakly automorphic of level prime to $X_0 = \{ v | v_0 \}$, which is what we need to prove. To prove this statement, we will consider a series of CM extensions $F_{j+1} / F_j$ ($j = 1, 2, \dots$), each Galois over ${\mathbf Q}$. 
For any such extension $F_j / F_1$, ${\mathcal R}|_{G_{F_j}}$ satisfies Assumptions ([\[ass_HT\]](#ass_HT){reference-type="ref" reference="ass_HT"})--([\[ass_galois\]](#ass_galois){reference-type="ref" reference="ass_galois"}) of Proposition [Proposition 1](#thm_pot_aut_of_powers_all_the_assumptions){reference-type="ref" reference="thm_pot_aut_of_powers_all_the_assumptions"} with respect to $X_j$, the set of places of $F_j$ lying above $v_0$. The extensions $F_{j+1} / F_j$ will be chosen in order to satisfy the remaining assumptions. Let $E/{\mathbf Q}$ be any totally real cyclic extension of degree $m$ linearly disjoint from $F_1$, in which $p_0$ is unramified. (We can find such $E$ by taking the degree $m$ subfield of ${\mathbf Q}(\zeta_{p'})$, where $p'$ is any sufficiently large prime $\equiv 1 \bmod 2m$.) Let $L_1 = E \cdot F_1$. For any extension $F_j / F_1$, we will set $L_j = E \cdot F_j$. Choose an odd prime $q_1 \nmid X_0$ which splits completely in $L_1$ and a place $v_1 | q_1$ of $F_1$ which splits completely as $v_1 = w_1 \dots w_m$ in $L_1$. Fix an embedding $\tau_0 : F_0 \to {\mathbf C}$, and a labelling $\tau_1, \dots, \tau_m : E \cdot F_0 \to {\mathbf C}$ of the embeddings $E \cdot F_0 \to {\mathbf C}$ which extend $\tau_0$. After enlarging $M$, we can find a character $\Psi_0 : \mathbf{A}_{L_1}^\times \to M^\times$, unramified at the places above $X_1$, such that for each $\alpha \in L_1^\times$, we have $$\Psi_0(\alpha) = \prod_{i=1}^m \tau_i(\mathbf{N}_{L_1 / E \cdot F_0}(\alpha))^{m-i} c \tau_i(\mathbf{N}_{L_1 / E \cdot F_0}(\alpha))^{i-1},$$ and moreover such that the characters $\Psi_0|_{{{\mathcal O}^\times_{L_{w_i}}}}$ ($i = 1, \dots, m$) are wildly ramified, pairwise distinct, and satisfy $\prod_{i=1}^m \Psi_0|_{{\mathcal O}^\times_{L_{w_i}}} = 1$ (where we identify $F_{v_1} = L_{w_i}$ for each $i$). 
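The parenthetical recipe for $E$ can be sanity-checked concretely. As an illustration (values hypothetical, not from the paper), take $m = 3$ and $p' = 13$: the degree-$3$ subfield of ${\mathbf Q}(\zeta_{13})$ is the fixed field of the index-$3$ subgroup of cubes in $({\mathbf Z}/13)^\times$, and it is totally real precisely because that subgroup contains $-1$ (complex conjugation). The requirement that $p'$ be sufficiently large is only needed to guarantee linear disjointness from $F_1$ and that $p_0$ is unramified.

```python
# Illustration (m = 3, p' = 13): the degree-m subfield E of Q(zeta_{p'}) is the
# fixed field of the index-m subgroup of m-th powers in (Z/p')^x.  E is totally
# real iff that subgroup contains -1 (complex conjugation); only p' ramifies in
# Q(zeta_{p'}), so any p_0 != p' is automatically unramified in E.
m, pp = 3, 13
assert pp % (2 * m) == 1                 # p' = 1 mod 2m
mth_powers = {pow(a, m, pp) for a in range(1, pp)}
assert len(mth_powers) == (pp - 1) // m  # index-m subgroup, so [E : Q] = m
assert pp - 1 in mth_powers              # -1 is an m-th power: E is totally real
```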
If $\lambda$ is a place of $M$, then $\varepsilon^{m(m-1)/2} \det \mathop{\mathrm{Ind}}_{G_{L_1}}^{G_{F_1}} \Psi_{0, \lambda}$ is a character of finite order which is unramified at $v_0$ and $v_1$. Using Lemma [Lemma 1](#fixdeterminant){reference-type="ref" reference="fixdeterminant"}, and possibly enlarging $M$ further, we can find a CM extension $F_2 / F_1$, Galois over ${\mathbf Q}$, and a twist $\Psi_2 : \mathbf{A}_{L_2}^\times \to M^\times$ of $\Psi_0 \circ \mathbf{N}_{L_2 / L_1}$ by a character of $L_2^\times \backslash \mathbf{A}_{L_2}^\times$ of finite order, unramified above $v_0$ and $v_1$, such that for any place $\lambda$ of $M$, $\det \mathop{\mathrm{Ind}}_{G_{L_2}}^{G_{F_2}} \Psi_{2, \lambda} = \varepsilon^{-m(m-1)/2}$. (If $v_0 | 2$, then the twist whose existence is guaranteed from Lemma [Lemma 1](#fixdeterminant){reference-type="ref" reference="fixdeterminant"} may be ramified above $X_2$; if so, it is certainly ramified of finite order, and we enlarge $F_2$ further so that $\mathop{\mathrm{Ind}}_{G_{L_2}}^{G_{F_2}} \Psi_{2, \lambda}$ is unramified at all places above $X_2$ and $L_2/F_2$ is unramified at all places above $X_2$.) If $F_j / F_2$ is a finite extension, then we set $\Psi_j = \Psi_2 \circ \mathbf{N}_{F_j / F_2}$. Let ${\mathcal R}_{\mathrm{CM}} = \{ \mathop{\mathrm{Ind}}_{G_{L_2}}^{G_{F_2}} \Psi_{2, \lambda} \} = \{ r_{\mathrm{CM}, \lambda} \}$. We now choose any primes $N$, $p \in {\mathcal L}({\mathcal R}|_{G_{F_2}})$, $r \in {\mathcal L}({\mathcal R}|_{G_{F_2}})$ not dividing $v_0 v_1$ and satisfying the following conditions: - $N > 100nm + 100$ and $N$ is unramified in $L_2$. - $p \equiv -1 \text{ mod }N$ and $p > 2nm+1$. - $r \equiv 1 \text{ mod }N$ and $r > 2nm+1$. - $p$ splits completely in $L_2$ and $M$ and $r$ splits completely in $L_2(\zeta_p)$ and $M$. - The character $\Psi_2$ is unramified at the places of $L$ above $p$ and $r$. 
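By Dirichlet's theorem on primes in arithmetic progressions, primes satisfying the two congruence conditions on $p$ and $r$ always exist; the splitting and unramifiedness conditions only shrink the admissible set further (by Chebotarev). A toy search, with hypothetical small parameters $n = 1$, $m = 2$ and ignoring the splitting conditions in $L_2$, $M$:

```python
# Hypothetical illustration of the existence of the auxiliary primes N, p, r:
# N > 100nm + 100 prime, p = -1 mod N, r = +1 mod N, with p, r > 2nm + 1.
def is_prime(k):
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

n, m = 1, 2  # hypothetical small parameters, for illustration only
N = next(k for k in range(100 * n * m + 101, 10**6) if is_prime(k))
p = next(k for k in range(N - 1, 10**7, N) if is_prime(k) and k > 2 * n * m + 1)
r = next(k for k in range(N + 1, 10**7, N) if is_prime(k) and k > 2 * n * m + 1)
assert p % N == N - 1 and r % N == 1  # p = -1 mod N, r = +1 mod N
```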
Choosing $\mathfrak{p}|p$ and $\mathfrak{r}| r$ arbitrarily, there will be sandwiches up to conjugation $$\mathop{\mathrm{SL}}_2({\mathbf F}_p) \leq \overline{r}_\mathfrak{p}(G_{F_2}) \leq \mathop{\mathrm{GL}}_2({\mathbf F}_p)$$ and $$\mathop{\mathrm{SL}}_2({\mathbf F}_r) \leq \overline{r}_\mathfrak{r}(G_{F_2}) \leq \mathop{\mathrm{GL}}_2({\mathbf F}_r),$$ and for each $p$-adic (resp. $r$-adic) place $v$ of $F$, $r_\mathfrak{p}|_{G_{F_v}}$ (resp. $r_\mathfrak{r}|_{G_{F_v}}$) is crystalline. (Here we are using the definition of ${\mathcal L}({\mathcal R}|_{G_{F_2}})$ and the fact that $p$, $r$ split in $M$.) The representation $\overline{r}_{\mathrm{CM}, \mathfrak{r}}$ can be chosen to take values in $\mathop{\mathrm{GL}}_m({\mathbf F}_r)$. Since the prime $N$ is unramified in $L_2(\zeta_r)$, $E / {\mathbf Q}$ is linearly disjoint from $F_2(\zeta_N, \zeta_r) / {\mathbf Q}$. The ramification at $v_1$ implies that $\overline{r}_{\mathrm{CM}, \mathfrak{r}}|_{G_{F_2(\zeta_N, \zeta_r)}}$ is absolutely irreducible. Let $v$ be a $p$-adic place of $F$. Then $F_{2, v} = M_\mathfrak{p}= {\mathbf Q}_p$. By [@ber05 Théorème 3.2.1], either $r_\mathfrak{p}|_{G_{F_{2, v}}}$ is (crystalline) ordinary, or there is an isomorphism $\overline{r}_\mathfrak{p}|_{G_{{\mathbf Q}_{p^2}}} \cong \overline{\rho}_{2, m, 0}$ (notation as in Definition [Definition 1](#df: defn of rho0){reference-type="ref" reference="df: defn of rho0"}). In the latter case, Lemma [Lemma 1](#prep){reference-type="ref" reference="prep"} shows that for any finite extension $K / {\mathbf Q}_{p^2}$, we have $r_\mathfrak{p}|_{G_{K}} \sim \rho_{2, m, 0}|_{G_K}$. We write $\Sigma_2^{\text{ord}}$ (resp. $\Sigma_2^{\text{ss}}$) for the set of $p$-adic places of $F_2$ such that $r_\mathfrak{p}|_{G_{F_{2, v}}}$ is (resp. is not) ordinary. 
If $F_j / F_2$ is a finite extension, then we write $\Sigma_j^{\text{ord}}$ for the set of places of $F_j$ lying above a place of $\Sigma_2^{\text{ord}}$ (and define $\Sigma_j^{\text{ss}}$ similarly). Let $B / F_2(\zeta_N, \zeta_p, \zeta_r)$ be the extension cut out by $\overline{r}_\mathfrak{p}\times \overline{r}_\mathfrak{r}\times \overline{r}_{\mathrm{CM}, \mathfrak{r}}$. We now choose a solvable CM extension $F_3 / F_2(\zeta_N)$, Galois over ${\mathbf Q}$ and linearly disjoint from $B \cdot F_2 / F_2(\zeta_N)$, such that for each place $v | p$ of $F_3$, $F_v$ contains ${\mathbf Q}_{p^2}$. We moreover adjoin $e^{2 \pi i / N}$ to $M$ and extend $\mathfrak{p}$, $\mathfrak{r}$ arbitrarily to places of this enlarged $M$. At this point we choose (for later use) a semistable elliptic curve $A / {\mathbf Q}$ with good reduction at $p$, $r$, and $p_0$. We choose a prime $q$ with the following properties: - $q > 2 n m + 1$ and $q$ splits in $M$. In particular, $q \equiv 1 \text{ mod }N$. We choose a place $\mathfrak{q}| q$ of $M$. - $\overline{\rho}_{A, q}(G_{F_3}) = \mathop{\mathrm{GL}}_2({\mathbf F}_q)$ and $A$ has good ordinary reduction at $q$. Let $B'$ denote the composite of $B$ with the extension of $F_3$ cut out by $\overline{\rho}_{A, q}$. Having chosen an integer $N$ and extension $F_3 / {\mathbf Q}(\zeta_N)$, we have access to the families of motives over $T_0 = \mathbf{P}^1_{F_3} - \{ \mu_N, \infty \}$ constructed in §[4](#ssec:dwork){reference-type="ref" reference="ssec:dwork"}. We will use the families of motives both of rank $m$ and of rank $nm$. We write ${}_m W_{t, \lambda}$, ${}_{nm} W_{t, \lambda}$ for the ${\mathcal O}_{M_\lambda}[G_K]$-modules of ranks $m$, $nm$ constructed in § [4](#ssec:dwork){reference-type="ref" reference="ssec:dwork"} associated to an extension $K / F_3$ and point $t \in T_0(K)$. 
We claim that we can find a CM extension $F_4 / F_3$, Galois over ${\mathbf Q}$ and linearly disjoint from $B' \cdot F_3 / F_3$ such that for any place $v | p r p_0 q$ of $F_4$, the representations $\overline{r}_\mathfrak{p}|_{G_{F_{4, v}}}$, $\overline{r}_\mathfrak{r}|_{G_{F_{4, v}}}$, $\overline{r}_{\mathrm{CM}, \mathfrak{r}}|_{G_{F_{4, v}}}$ and $\overline{\rho}_{A, q}|_{G_{F_{4, v}}}$ are all trivial, and the following additional data exists for $k \in \{ m, nm \}$: 1. [\[item:MB ord\]]{#item:MB ord label="item:MB ord"} If $v \in \Sigma_4^{\textrm{ord}}$, then there is a non-empty open subset ${}_k \Omega_v \subset T_0({\mathcal O}_{F_{4,v}})$ such that if $t \in {}_k \Omega_v$, then ${}_k \overline{W}_{t, \mathfrak{p}}$ is trivial and ${}_k W_{t, \mathfrak{p}}$ is crystalline ordinary. Moreover, ${}_k \overline{W}_{t, \mathfrak{r}}$ and ${}_k \overline{W}_{t, \mathfrak{q}}$ are both trivial. 2. [\[item:MB ss\]]{#item:MB ss label="item:MB ss"} If $v \in \Sigma_4^{\textrm{ss}}$, then there is a non-empty open subset ${}_k \Omega_v \subset T_0({\mathcal O}_{F_{4, v}})$ such that if $t \in {}_k \Omega_v$, then ${}_k \overline{W}_{t, \mathfrak{p}}$ and $\overline{\rho}_{k, 1, 0}|_{G_{F_{4, v}}}$ are trivial, ${}_k W_{t, \mathfrak{p}}$ is crystalline, and ${}_k W_{t, \mathfrak{p}} \sim \rho_{k, 1, 0}|_{G_{F_{4, v}}}$. Moreover, ${}_k \overline{W}_{t, \mathfrak{r}}$ and ${}_k \overline{W}_{t, \mathfrak{q}}$ are both trivial. 3. [\[item:MB r\]]{#item:MB r label="item:MB r"} If $v | r$ is a place of $F_4$, then there is a non-empty open subset ${}_k \Omega_v \subset T_0({\mathcal O}_{F_{4, v}})$ such that if $t \in \Omega_v$, then ${}_k \overline{W}_{t, \mathfrak{r}}$ is trivial and $W_{t, \mathfrak{r}}$ is crystalline ordinary. Moreover, ${}_k \overline{W}_{t, \mathfrak{q}}$ and ${}_k \overline{W}_{t, \mathfrak{p}}$ are both trivial. 4. 
[\[item:MB p_0\]]{#item:MB p_0 label="item:MB p_0"} If $v | p_0$ is a place of $F_4$, then there is a non-empty open subset ${}_k \Omega_v \subset T_0({\mathcal O}_{F_{4, v}})$ such that if $t \in {}_k \Omega_v$, then ${}_k \overline{W}_{t, \mathfrak{r}}$, ${}_k \overline{W}_{t, \mathfrak{q}}$ and ${}_k \overline{W}_{t, \mathfrak{p}}$ are all trivial. 5. [\[item:MB q\]]{#item:MB q label="item:MB q"} If $v | q$ is a place of $F_4$, then there is a non-empty open subset ${}_k \Omega_v \subset T_0({\mathcal O}_{F_{4, v}})$ such that if $t \in {}_k \Omega_v$, then ${}_k \overline{W}_{t, \mathfrak{q}}$ is trivial and ${}_k W_{t, \mathfrak{q}}$ is crystalline ordinary. Moreover, ${}_k \overline{W}_{t, \mathfrak{p}}$ and ${}_k \overline{W}_{t, \mathfrak{r}}$ are both trivial. Indeed, we can take $F_4 = K^+ \cdot F_3$, where $K^+ / {\mathbf Q}$ is a Galois, totally real extension with $K^+_v$ large enough for each place $v | prp_0 q$, as we now explain, dropping the subscript $k$ which is fixed for the next two paragraphs. For [\[item:MB ord\]](#item:MB ord){reference-type="ref" reference="item:MB ord"}, we claim that it is enough to show that once $F_{4, v}$ is large enough, we can find a single point of $t \in T_0(F_{4, v})$ such that $\overline{W}_{t, \mathfrak{p}}$, $\overline{W}_{t, \mathfrak{r}}$, and $\overline{W}_{t, \mathfrak{q}}$ are all trivial and $W_{t, \mathfrak{p}}$ is crystalline ordinary. Indeed, by a version of Krasner's Lemma due to Kisin [@kisin-krasner Theorem 5.1], for any $c > 0$ there exists an open ball $U_t$ around $t$ in $T_0({\mathcal O}_{F_{4, v}})$, such that for any $t' \in U_t$, the representations $W_{\mathfrak{p}, t} / (p^c)$, $W_{\mathfrak{p}, t'} / (p^c)$ and $W_{\mathfrak{r}, t} / (r^c)$, $W_{\mathfrak{r}, t'} / (r^c)$ and $W_{\mathfrak{q}, t} / (q^c)$, $W_{\mathfrak{q}, t'} / (q^c)$ are isomorphic. 
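The Krasner-type continuity invoked here from [@kisin-krasner Theorem 5.1] is a strong relative of the elementary fact that maps defined by $p$-integral polynomials preserve $p$-adic congruences: points $t, t'$ that are $p$-adically close have values agreeing modulo a high power of $p$. A minimal illustration of that underlying congruence principle (the polynomial below is an arbitrary example):

```python
# If t = t' mod p^c, then f(t) = f(t') mod p^c for any polynomial f with
# integer coefficients -- the elementary shadow of the continuity statement
# used above for the specializations W_{p,t} as t varies in a small ball.
p, c = 5, 3
f = lambda x: x**4 + 2 * x**2 + 7 * x + 1  # arbitrary example polynomial
t = 12
t_prime = t + p**c                          # t' = t mod p^c
assert (f(t) - f(t_prime)) % p**c == 0
```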
By Lemma [Lemma 1](#smooth){reference-type="ref" reference="smooth"}, we can choose $c > 1$ so that this forces $W_{\mathfrak{p}, t} \sim W_{\mathfrak{p}, t'}$, hence (by Lemma [Lemma 1](#prep2){reference-type="ref" reference="prep2"}) that $W_{\mathfrak{p}, t'}$ is crystalline ordinary. The existence of a crystalline ordinary point $t$ follows from Proposition [Proposition 1](#ordinarypoints){reference-type="ref" reference="ordinarypoints"} and Proposition [Proposition 1](#prop_independence_of_l){reference-type="ref" reference="prop_independence_of_l"}(2), after which we enlarge $F_{4, v}$ further if necessary to force the residual representations to be trivial. Then we take $\Omega_v = U_t$. For [\[item:MB r\]](#item:MB r){reference-type="ref" reference="item:MB r"} and [\[item:MB q\]](#item:MB q){reference-type="ref" reference="item:MB q"}, the argument is essentially the same as case [\[item:MB ord\]](#item:MB ord){reference-type="ref" reference="item:MB ord"}, while for [\[item:MB p_0\]](#item:MB p_0){reference-type="ref" reference="item:MB p_0"}, it is even simpler. For [\[item:MB ss\]](#item:MB ss){reference-type="ref" reference="item:MB ss"}, we enlarge $F_{4, v}$ so that $\overline{\rho}_{k, 1, 0}|_{G_{F_{4, v}}}$ and $\overline{W}_{\mathfrak{p}, 0}|_{G_{F_{4, v}}}$ are trivial. By Lemma [Lemma 1](#prep){reference-type="ref" reference="prep"} and Lemma [Lemma 1](#specializations){reference-type="ref" reference="specializations"}, we have $W_{\mathfrak{p}, 0}|_{G_{F_{4, v}}} \sim \rho_{k, 1, 0}|_{G_{F_{4, v}}}$. Employing the same argument as in the previous paragraph, using [@kisin-krasner Theorem 5.1] and Lemma [Lemma 1](#smooth){reference-type="ref" reference="smooth"}, we can find a non-empty open neighbourhood $\Omega_v \subset T_0({\mathcal O}_{F_{4, v}})$ of $0 \in T_0({\mathcal O}_{F_{4, v}})$ such that if $t \in \Omega_v$, then $W_{\mathfrak{p}, t}$ is crystalline and $W_{\mathfrak{p}, t} \sim W_{\mathfrak{p}, 0}|_{G_{F_{4, v}}}$. 
Since $\sim$ is a transitive relation, this leads to a choice of $\Omega_v$ with the desired property. To construct the compatible system ${\mathcal R}_{\mathrm{aux}}$, we will apply Proposition [Proposition 1](#prop_variation_of_MB){reference-type="ref" reference="prop_variation_of_MB"}. If $m=2$ we can use a modular curve with level $r$-structure, and since the argument in this case is a straightforward (and considerably simpler) variant on the argument that we use if $m>2$, we leave this case to the reader. In the case $m > 2$ we use the moduli space $T = T( \overline{r}_{\mathrm{CM}, \mathfrak{r}}|_{G_{F_4}} )$ defined in Remark [Remark 1](#pqmoduliremark){reference-type="ref" reference="pqmoduliremark"}, which is defined since $r \equiv 1 \text{ mod }N$ and $\overline{r}_{\mathrm{CM}, \mathfrak{r}}$ takes values in $\mathop{\mathrm{GL}}_m({\mathbf F}_r)$, with determinant $\overline{\varepsilon}^{-m(m-1)/2}$. We take $F^{\mathrm{avoid}} = B' \cdot F_4$. We take the homomorphism $\pi_1^{\text{\'et}}(T_{F_4}) \to \operatorname{GU}_m({\mathbf F}_{p^2})$ to be the one associated to the local system $\overline{{\mathcal W}}_\mathfrak{p}$. We take $S_0 = \{ p, r, p_0, q \}$. If $v$ is a place lying above a prime in $S_0$, we take $L_v = F_{4, v}$ and $\Omega_v$ to be the pre-image in $T(F_{4, v})$ of the set ${}_m \Omega_v$. Note that $\Omega_v$ is certainly open, and it is non-empty because we have arranged that for each place $v | S_0$ of $F_4$, and for each $t \in {}_m \Omega_v$, $\overline{r}_{\mathrm{CM}, \mathfrak{r}}|_{G_{F_{4, v}}}$ and $\overline{W}_{t, \mathfrak{r}}$ are both trivial (hence isomorphic!). 
Proposition [Proposition 1](#prop_variation_of_MB){reference-type="ref" reference="prop_variation_of_MB"} now yields an imaginary CM extension $F_5 / F_4$, Galois over ${\mathbf Q}$ and in which the places above $S_0$ all split completely, and a weakly compatible system $\{ W_{t, \lambda} \}$ of representations of $G_{F_5}$ with coefficients in ${\mathbf Q}(e^{2 \pi i / N}) \subset M$. We take ${\mathcal R}_{\mathrm{aux}} = \{ r_{\mathrm{aux}, \lambda} \} = \{ W_{t, \lambda} \}$ and note that the statement of Proposition [Proposition 1](#prop_variation_of_MB){reference-type="ref" reference="prop_variation_of_MB"} and the definition of the sets $\Omega_v$ imply that ${\mathcal R}_{\mathrm{aux}}$ has the following properties: - $\overline{r}_{\mathrm{aux}, \mathfrak{p}}(G_{F_5}) = \operatorname{GU}_m({\mathbf F}_{p^2})$ (note we are assuming that $m >2$). - If $v \in \Sigma_5^{\mathrm{ord}}$, then $\overline{r}_{\mathrm{aux}, \mathfrak{p}}|_{G_{F_{5, v}}}$ is trivial and $r_{\mathrm{aux}, \mathfrak{p}}|_{G_{F_{5, v}}}$ is crystalline ordinary. - If $v \in \Sigma_5^{\mathrm{ss}}$, then $\overline{r}_{\mathrm{aux}, \mathfrak{p}}|_{G_{F_{5, v}}}$ is trivial and $r_{\mathrm{aux}, \mathfrak{p}}|_{G_{F_{5, v}}} \sim \rho_{m, 1, 0}|_{G_{F_{5, v}}}$. - $S_{\mathrm{aux}}$ is disjoint from $X_5 \cup \{ v | p r \}$. (Use Proposition [Proposition 1](#prop_independence_of_l){reference-type="ref" reference="prop_independence_of_l"}.) - There is an isomorphism $\overline{r}_{\mathrm{aux}, \mathfrak{r}} \cong \overline{r}_{\mathrm{CM}, \mathfrak{r}}|_{G_{F_5}}$. For each place $v | r$ of $F_5$, $\overline{r}_{\mathrm{aux}, \mathfrak{r}}|_{G_{F_{5, v}}}$ is trivial and $r_{\mathrm{aux}, \mathfrak{r}}|_{G_{F_{5, v}}}$ is crystalline ordinary. We set ${\mathcal S}_{\mathrm{aux}} = (\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}|_{G_{F_5}}) \otimes {\mathcal R}_{\mathrm{aux}}$, and now construct ${\mathcal S}_{\mathrm{UA}}$. 
The places $v | prp_0q$ split in $F_5 / F_4$, so if $v$ is a place of $F_5$ dividing $prp_0q$ we may define ${}_k \Omega_v = {}_k \Omega_{v|_{F_4}}$ to keep in hand the data [\[item:MB ord\]](#item:MB ord){reference-type="ref" reference="item:MB ord"}--[\[item:MB q\]](#item:MB q){reference-type="ref" reference="item:MB q"} defined above. We will apply Proposition [Proposition 1](#prop_variation_of_MB){reference-type="ref" reference="prop_variation_of_MB"} to the moduli space $$T = T(\overline{s}_{\mathrm{aux},\mathfrak{p}}, \mathop{\mathrm{Sym}}^{nm-1} \overline{\rho}_{A,q} |_{G_{F_5}}).$$ We take $F^{\mathrm{avoid}} = B' \cdot F_5$. We do not specify a homomorphism $f$. We take $S_0 = \{ p, r, p_0, q \}$. If $v$ is a place lying above a prime in $S_0$, we take $L_v = F_{5, v}$ and $\Omega_v$ to be the pre-image in $T(F_{5, v})$ of the set ${}_{nm} \Omega_v$. Once again, this pre-image is non-empty because we have trivialized all of the relevant residual representations. (Since $p \equiv -1 \text{ mod }N$, the definition of $T$ involves a choice of Hermitian structure. We are therefore invoking the fact here that over a finite field, any two Hermitian spaces of the same dimension are isomorphic.) Proposition [Proposition 1](#prop_variation_of_MB){reference-type="ref" reference="prop_variation_of_MB"} then yields a CM extension $F_6 / F_5$, Galois over ${\mathbf Q}$, and a point $t \in T(F_6)$ corresponding to a weakly compatible system ${\mathcal S}_{\mathrm{UA}} = \{ s_{\mathrm{UA}, \lambda} \} = \{ W_{t, \lambda} \}$ of rank $nm$ representations of $G_{F_6}$ with the following properties: - There are isomorphisms $\overline{s}_{\mathrm{UA}, \mathfrak{p}} \cong \overline{s}_{\mathrm{aux}, \mathfrak{p}}|_{G_{F_6}}$ and $\overline{s}_{\mathrm{UA}, \mathfrak{q}} \cong \mathop{\mathrm{Sym}}^{nm-1} \overline{\rho}_{A, q}|_{G_{F_6}}$. 
- If $v \in \Sigma_6^{\mathrm{ord}}$, then $\overline{s}_{\mathrm{UA}, \mathfrak{p}}|_{G_{F_{6, v}}}$ is trivial and $s_{\mathrm{UA}, \mathfrak{p}}|_{G_{F_{6, v}}}$ is crystalline ordinary. - If $v \in \Sigma_6^{\mathrm{ss}}$, then $\overline{s}_{\mathrm{UA}, \mathfrak{p}}|_{G_{F_{6, v}}}$ is trivial and $s_{\mathrm{UA}, \mathfrak{p}}|_{G_{F_{6, v}}} \sim \rho_{nm, 1, 0}|_{G_{F_{6, v}}}$. - For each place $v | q$ of $F_6$, $\overline{s}_{\mathrm{UA}, \mathfrak{p}}|_{G_{F_{6, v}}}$ is trivial and $s_{\mathrm{UA}, q}|_{G_{F_{6, v}}}$ is crystalline ordinary. - $S_{\mathrm{UA}}$ is disjoint from $X_6 \cup \{ v | p r \}$. We now claim that Assumptions ([\[ass_HT\]](#ass_HT){reference-type="ref" reference="ass_HT"})--([\[ass_CM_triv\]](#ass_CM_triv){reference-type="ref" reference="ass_CM_triv"}) of Proposition [Proposition 1](#thm_pot_aut_of_powers_all_the_assumptions){reference-type="ref" reference="thm_pot_aut_of_powers_all_the_assumptions"} are satisfied for the compatible system ${\mathcal R}|_{G_{F_6}}$, set $X_0 = X_6$ of places of $F_6$, and auxiliary compatible systems ${\mathcal R}_{\mathrm{CM}}|_{G_{F_6}}$, ${\mathcal R}_{\mathrm{aux}}|_{G_{F_6}}$, and ${\mathcal S}_{\mathrm{UA}}$ (defined over $F_6$ by construction). Let us verify these assumptions in turn. - As already observed, ([\[ass_HT\]](#ass_HT){reference-type="ref" reference="ass_HT"})--([\[ass_galois\]](#ass_galois){reference-type="ref" reference="ass_galois"}) are automatically satisfied. - We take $\Psi = \Psi_6$. The extension $E / {\mathbf Q}$ is linearly disjoint from $F_6$ because $E \leq B$, while $\Psi$ has the given infinity type, so (6) is satisfied. - The primes $p, r$ are prime to $S$ by construction, so (7) is satisfied. - ${\mathcal R}_{\mathrm{aux}}|_{G_{F_6}}$ has the claimed properties by construction, so (8) is satisfied. 
The same is true for ${\mathcal S}_{\mathrm{UA}}$, except we need to justify the fact that ${\mathcal S}_{\mathrm{UA}}$ is weakly automorphic of level prime to $X_6 \cup \{ v | p r \}$. Note that the $q$-adic representation $\mathop{\mathrm{Sym}}^{nm - 1} \rho_{A, q}|_{G_F}$ is automorphic by the combination of the main results of [@MR1839918; @MR3394612; @newton2022symmetric; @MR1007299] (or alternately by [@Clo23]), associated to a regular algebraic, cuspidal automorphic representation of $\mathop{\mathrm{GL}}_{nm}(\mathbf{A}_F)$ which is $\iota$-ordinary with respect to any isomorphism $\iota : \overline{{\mathbf Q}}_q \to {\mathbf C}$. (We could also verify the automorphy, at the cost of further extending the field $F_6$, by a further application of Proposition [Proposition 1](#prop_variation_of_MB){reference-type="ref" reference="prop_variation_of_MB"} as is done in [@10author].) We would now like to apply [@miagkov-thorne Theorem 1.3] to conclude that ${\mathcal S}_{\mathrm{UA}}$ is weakly automorphic (noting that the cited result includes the conclusion that the automorphic representation witnessing the weak automorphy of ${\mathcal S}_{\mathrm{UA}}$ is unramified at any place where both $\rho_{A, q}$ and $s_{\mathrm{UA}, \mathfrak{q}}$ are unramified). We must verify that $\mathop{\mathrm{Sym}}^{nm - 1} \overline{\rho}_{A, q}|_{G_F}$ satisfies the Taylor--Wiles conditions (as formulated in Definition [Definition 1](#tw){reference-type="ref" reference="tw"}). By Lemma [Lemma 1](#restricttw){reference-type="ref" reference="restricttw"}, it suffices to check that $\mathop{\mathrm{Sym}}^{nm - 1} \overline{\rho}_{A, q}$ satisfies these conditions (as a representation of $G_{\mathbf Q}$), and this follows from the definitions, together with an application of [@jackapp Theorem A.9] (using our assumption $q > 2nm + 1$). - $L / F$ and $\Psi$ are unramified above $X_0 \cup \{ v | p r \}$ by construction, so (10) is satisfied. 
- We have chosen the primes $p, r$ so that $p > 2nm + 1$ and $r > 2nm + 1$. At each step the extension $F_{j+1} / F_{j}$ has been chosen linearly disjoint from $L_j(\zeta_p, \zeta_r)$, so (11) and (12) are satisfied. - The images $\overline{r}_\mathfrak{p}(G_{F_2})$, $\overline{r}_\mathfrak{r}(G_{F_2})$ and $\overline{r}_{\mathrm{aux}, \mathfrak{p}}(G_{F_5})$ are large by construction, and at each step the extension $F_{j+1} / F_{j}$ has been chosen so that the image does not change on restriction to the smaller Galois group. Moreover, $\overline{r}_{\mathrm{CM}, \mathfrak{r}}|_{G_{F_2(\zeta_r)}}$ is irreducible, and again the analogous property holds over $F_6$ by construction. Therefore (13) is satisfied. - Assumptions (14)--(17) hold by construction. This completes the proof. ◻ # Applications ## The Ramanujan Conjecture We are now in a position to prove the (more general versions of the) main theorems of the introduction as a consequence of Theorem [Theorem 1](#thm_pot_aut_of_powers){reference-type="ref" reference="thm_pot_aut_of_powers"}. Let $F$ be an imaginary CM field, and let $\pi$ be a regular algebraic cuspidal automorphic representation of $\mathop{\mathrm{GL}}_2(\mathbf{A}_F)$. We write $(a_\tau \ge b_\tau)_{\tau: F\hookrightarrow {\mathbf C}}$ for the weight of $\pi$. Recall that we say that $\pi$ is *of parallel weight* if $a_{\tau}-b_{\tau}$ is independent of $\tau$. **Theorem 1**. *Let $F$ be an imaginary CM field, and let $\pi$ be a regular algebraic cuspidal automorphic representation of $\mathop{\mathrm{GL}}_2(\mathbf{A}_F)$ of parallel weight. Then, for all primes $v$ of $F$, the representation $\pi_v$ is *(*essentially*)* tempered.* *Proof.* Since $\pi$ is assumed to have parallel weight, there is an integer $m \geq 1$ such that $a_{\tau}-b_{\tau} = m-1$ for all $\tau:F\hookrightarrow{\mathbf C}$. By Clozel's purity lemma [@MR1044819 Lemma 4.9], there is an integer $w$ with $a_{\tau} + b_{c\tau} = w$ for all $\tau$.
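In symbols, the parallel-weight relation $a_\tau - b_\tau = m - 1$ and the purity relation $a_\tau + b_{c\tau} = w$ combine as

```latex
b_\tau + b_{c\tau}
  = \bigl(a_\tau - (m-1)\bigr) + b_{c\tau}
  = \bigl(a_\tau + b_{c\tau}\bigr) - (m-1)
  = w - m + 1,
```

which is the computation behind the next deduction.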
It follows that $b_{\tau}+b_{c\tau}=w-m+1$ is independent of $\tau$. In particular, there exists an algebraic Hecke character of $\mathbf{A}_F^\times$ with weight $(b_{\tau})_{\tau:F\hookrightarrow {\mathbf C}}$, so after twisting we may assume that $(a_\tau,b_\tau) = (m-1,0)$ for all  $\tau$. The central character of $\pi$ is then of the form $\psi|\cdot|^{1-m}$ for a finite order Hecke character $\psi$. Exactly as in the proof of [@10author Theorem 7.1.1], we can find a quadratic CM extension $F'/F$ for which the character $\psi\circ N_{F'/F}$ is a square. We can check temperedness after base change to $F'$. Twisting by a finite order Hecke character, we may then assume that $\pi$ has central character $|\cdot|^{1-m}$. Exactly as in the proof of [@10author Cor. 7.1.15], we can make a further solvable base change to reduce to checking temperedness of the unramified $\pi_v$. Hence it suffices to show that the associated very weakly compatible system ${\mathcal R}$ (cf. [@10author Lemma 7.1.10]) is pure. By [@10author Lemma 7.1.2], either ${\mathcal R}$ is strongly irreducible, Artin up to twist, or induced from a quadratic extension. If ${\mathcal R}$ is induced, then purity follows from the purity of rank one (very weakly) compatible systems. The compatible family ${\mathcal R}$ cannot be Artin up to twist because that is incompatible with having distinct Hodge--Tate weights. Thus ${\mathcal R}$ is strongly irreducible, and the result follows from Theorem [Theorem 1](#thm_pot_aut_of_powers){reference-type="ref" reference="thm_pot_aut_of_powers"}. ◻ ## The potential automorphy of compatible systems and the Sato--Tate conjecture {#subsec:satotate} **Theorem 1**. *Let $F$ be a CM field, and let $\mathcal{R}=(M,S,\{ Q_v(X) \}, r_\lambda ,H_{\tau} )$ be a very weakly compatible system of rank 2 representations of $G_F$ that is strongly irreducible. Suppose there exists an integer $m \geq 1$ such that $H_\tau = \{ 0, m \}$ for each embedding $\tau : F \to \overline{M}$. 
Then ${\mathcal R}$ is pure of weight $m$, and for each $n \geq 1$, there exists a finite CM extension $F' / F$, Galois over ${\mathbf Q}$, such that $\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}|_{G_{F'}}$ is automorphic.* *If one alternatively assumes that $\mathcal{R}$ is irreducible but not strongly irreducible, then ${\mathcal R}$ is pure of weight $m$, and for each $n \geq 1$, $\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}$ decomposes as a direct sum of compatible systems of dimension at most $2$ which are automorphic.* *Proof.* Assume that ${\mathcal R}$ is strongly irreducible. As in the proof of Theorem [Theorem 1](#thm: Ramanujan thm main paper){reference-type="ref" reference="thm: Ramanujan thm main paper"}, we can reduce to the case where ${\mathcal R}$ has determinant $\varepsilon^{-m}$. But now Theorem [Theorem 1](#PO){reference-type="ref" reference="PO"} follows directly from Theorem [Theorem 1](#thm_pot_aut_of_powers){reference-type="ref" reference="thm_pot_aut_of_powers"}. If ${\mathcal R}$ is not strongly irreducible, then from [@10author Lemma 7.1.2] it follows that ${\mathcal R}$ is induced from a compatible system of algebraic Hecke characters for some quadratic extension $F'/F$ (the condition on the Hodge--Tate weights ensures that ${\mathcal R}$ is not Artin up to twist). Then the symmetric powers $\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}$ decompose as a sum of two-dimensional induced compatible systems and (when $n$ is odd) a one-dimensional compatible system. In particular for any $n$, $\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}$ decomposes as a direct sum of automorphic compatible systems, and the purity statement follows from the purity of (the Galois representations associated to) algebraic Hecke characters. ◻ We next give a statement of the Sato--Tate conjecture, including Theorem [Theorem 2](#SatoTate){reference-type="ref" reference="SatoTate"} as a special case, before giving the proof when $\pi$ has parallel weight. 
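Returning briefly to the induced case in the proof just given, the decomposition used there can be written out explicitly. A standard computation, sketched here under the notation ${\mathcal R}\cong \mathop{\mathrm{Ind}}_{G_{F'}}^{G_F} \mathcal{X}$ for a compatible system of algebraic Hecke characters $\mathcal{X}$ of the quadratic extension $F'$, with $\sigma$ the nontrivial automorphism of $F'/F$, gives

```latex
\mathop{\mathrm{Sym}}^{n-1}\bigl(\mathop{\mathrm{Ind}}_{G_{F'}}^{G_F}\mathcal{X}\bigr)\big|_{G_{F'}}
  \cong \bigoplus_{i=0}^{n-1} \mathcal{X}^{\,n-1-i}\,(\mathcal{X}^{\sigma})^{i}.
```

The summands with $i \neq n-1-i$ pair off under $\sigma$ into two-dimensional induced compatible systems, and when $n$ is odd the $\sigma$-invariant middle summand $(\mathcal{X}\mathcal{X}^{\sigma})^{(n-1)/2}$ accounts for the one-dimensional compatible system in the statement.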
Let $F$ be an imaginary CM field, and let $\pi$ be a cuspidal automorphic representation of $\mathop{\mathrm{GL}}_2(\mathbf{A}_F)$ which is regular algebraic of weight $\lambda$ and not CM (i.e. not automorphically induced). Thus there is an integer $w$ such that $\lambda_{\tau, 1} + \lambda_{\tau c, 2} = w$ for all $\tau \in \mathop{\mathrm{Hom}}(F, {\mathbf C})$. The central character of $\pi$ has the form $\omega_\pi = | \cdot|^{-w} \psi$, where $\psi : F^\times \backslash \mathbf{A}_F^\times \to {\mathbf C}^\times$ is a unitary Hecke character of type $A_0$. We define the Sato--Tate group of $\pi$, $\mathrm{ST}(\pi)$, as follows: - If $\psi$ has finite order $a \geq 1$, then $\mathrm{ST}(\pi) = \mathrm{U}_2({\mathbf R})_a := \{ g \in \mathrm{U}_2({\mathbf R}) \mid \det(g)^a = 1 \}$. - If $\psi$ has infinite order, then $\mathrm{ST}(\pi) = \mathrm{U}_2({\mathbf R})$. **Lemma 1**. *$\mathrm{ST}(\pi)$ is a compact subgroup of $\mathop{\mathrm{GL}}_2({\mathbf C})$. If $v$ is a finite place of $F$ such that $\pi_v$ is unramified and essentially tempered, then the $\mathop{\mathrm{GL}}_2({\mathbf C})$-conjugacy class of $q_v^{-w/2} \operatorname{rec}_{F_v}(\pi_v)(\mathrm{Frob}_v)$ intersects $\mathrm{ST}(\pi)$ in a unique conjugacy class of $\mathrm{ST}(\pi)$.* *Proof.* The group $\mathrm{ST}(\pi)$ is compact since $\mathrm{U}_2({\mathbf R})$ is. It is well-known that two elements of $\mathrm{U}_2({\mathbf R})$ which become conjugate in $\mathop{\mathrm{GL}}_2({\mathbf C})$ are conjugate by an element of $\mathrm{SU}_2({\mathbf R})$. All we need to show then is that if $v$ is a finite place of $F$ such that $\pi_v$ is unramified, and $\operatorname{rec}_{F_v}(\pi_v)(\mathrm{Frob}_v) = \mathrm{diag}(\alpha_v, \beta_v)$, then $\alpha_v, \beta_v$ are complex numbers of absolute value $q_v^{w/2}$, and further if $\psi$ has finite order $a$ then $(q_v^{-w} \alpha_v \beta_v)^a = 1$. Since $\pi_v$ is essentially tempered, we have $| \alpha_v | = | \beta_v |$. 
On the other hand, we have $\alpha_v \beta_v = \psi(\varpi_v) q_v^w$, hence $| \alpha_v \beta_v | = q_v^w$ (as $\psi$ is unitary), and if $\psi$ has finite order $a$ then $(q_v^{-w} \alpha_v \beta_v)^a = 1$. ◻ If $v$ is a place such that $\pi_v$ is unramified and essentially tempered, then we write $[\pi_v] \in \mathrm{ST}(\pi)$ for a representative of the conjugacy class of $q_v^{-w/2} \operatorname{rec}_{F_v}(\pi_v)(\mathrm{Frob}_v) \in \mathop{\mathrm{GL}}_2({\mathbf C})$. **Theorem 1**. *Suppose that $\pi$ has parallel weight. Let $S_\pi$ denote the set of finite places of $F$ at which $\pi$ is unramified. With notation as above, the classes of elements $[\pi_v] \in \mathrm{ST}(\pi)$ [(]{.upright}$v \not\in S_\pi$[)]{.upright} are equidistributed with respect to the Haar probability measure $\mu_{\mathrm{ST}}$ of $\mathrm{ST}(\pi)$. More precisely, for any continuous, conjugation-invariant function $f : \mathrm{ST}(\pi) \to {\mathbf C}$, we have $$\lim_{X \to \infty} \frac{ \sum_{v \not\in S_\pi, q_v < X} f([\pi_v])}{ \#\{v \not\in S_\pi, q_v < X\}} = \int_{g \in \mathrm{ST}(\pi)} f(g) \, d \mu_{\mathrm{ST}(\pi)}.$$* *Proof.* If $\rho$ is a finite-dimensional irreducible representation of $\mathrm{ST}(\pi)$, let us define $$L^{S_\pi}(\pi, \rho, s) = \prod_{v \not\in S_\pi} \det(1 - q_v^{-s} \rho([\pi_v]))^{-1},$$ an Euler product which converges absolutely in the right half-plane $\operatorname{Re}(s) > 1$. According to the criterion of Serre [@serreabladic Ch. I, Appendix], the theorem will be proved if we can show that for each non-trivial such $\rho$, $L^{S_\pi}(\pi, \rho, s)$ admits a meromorphic continuation to ${\mathbf C}$ which is holomorphic and non-vanishing on the line $\operatorname{Re}(s) = 1$. This may be deduced from the potential automorphy of the symmetric powers $\mathop{\mathrm{Sym}}^{n-1} {\mathcal R}$ of the compatible system associated to $\pi$, exactly as in e.g. 
[@GeeSTwt3 §7] and [@blght §8], after noting that ${\mathcal R}$ is strongly irreducible (again invoking [@10author Lemma 7.1.2] and the assumption that $\pi$ is not CM). Note also that the list of non-trivial one-dimensional representations of $\mathrm{ST}(\pi)$ depends on the order of the character $\psi$. ◻ [^1]: *The paper [@marshall] has the following to say about this assumption: "For simplicity, we assume our fields to have narrow class number one throughout the paper, but this is not essential." One might therefore expect it to be possible to prove the more general adelic statement for all imaginary quadratic $F$ under the additional hypothesis (as explained in [@everymanwilldohisduty §1]) that, when $h_F$ is even, one avoids certain dihedral forms which vanish identically on half of the connected components of the adelic quotient. Similarly, concerning the assumption on the level, the paper [@marshall] says "The proof may easily be modified to allow a nontrivial level in any case."*
--- abstract: | Valued constraint satisfaction problems (VCSPs) are a large class of computational optimisation problems. If the variables of a VCSP take values from a finite domain, then recent results in constraint satisfaction imply that the problem is in P or NP-complete, depending on the set of admitted cost functions. Here we study the larger class of cost functions over countably infinite domains that have an oligomorphic automorphism group. We present a hardness condition based on a generalisation of pp-constructability as known for (classical) CSPs. We also provide a universal-algebraic polynomial-time tractability condition, based on the concept of fractional polymorphisms. We apply our general theory to study the computational complexity of resilience problems in database theory (under bag semantics). We show how to construct, for every fixed conjunctive query (and more generally for every union of conjunctive queries), a set of cost functions with an oligomorphic automorphism group such that the resulting VCSP is polynomial-time equivalent to the resilience problem; we only require that the query is *connected* and show that this assumption can be made without loss of generality. For the case where the query is *acyclic*, we obtain a complexity dichotomy of the resilience problem, based on the dichotomy for finite-domain VCSPs. To illustrate the utility of our methods, we settle the computational complexity of a (non-acyclic) conjunctive query that remained open in the literature by verifying that it satisfies our tractability condition. We conjecture that for resilience problems, our hardness and tractability conditions match, which would establish a complexity dichotomy for resilience problems for (unions of) conjunctive queries. 
address: - Institut für Algebra, TU Dresden - Institut für Algebra, TU Dresden - Institut für Informatik, Leipzig University author: - Manuel Bodirsky - Žaneta Semanišinová - Carsten Lutz bibliography: - global.bib title: The Complexity of Resilience Problems via Valued Constraint Satisfaction Problems --- [^1] # Introduction If ${\mathfrak A}$ is a database and $\mu$ is a conjunctive query (or a union of conjunctive queries), then the *resilience* of $\mu$ in ${\mathfrak A}$ is the minimum number of tuples that need to be removed from ${\mathfrak A}$ so that ${\mathfrak A}$ no longer satisfies $\mu$. The computational complexity of computing the resilience of a given database is in P for some queries $\mu$ and NP-hard for others, but a complete classification has so far been elusive [@Resilience; @NewResilience; @LatestResilience-arxiv]. A natural variation of this problem, introduced in [@LatestResilience-arxiv], is that the input is a *bag database*, that is, it contains each tuple with a multiplicity $k \in \mathbb N$. We present a surprising link between the resilience problem for (unions of) conjunctive queries under bag semantics and a large class of computational optimisation problems called *valued constraint satisfaction problems (VCSPs)*. In a VCSP, we are given a finite set of variables, a finite set of cost functions on those variables, and a threshold $u$, and the task is to find an assignment to the variables so that the sum of the costs is at most $u$. The computational complexity of such problems has been studied depending on the admitted cost functions, which we may view as a *valued structure*. A complete classification has been obtained for valued structures with a finite domain, showing that the corresponding VCSPs are in P or NP-hard [@KozikOchremiak15; @KolmogorovKR17; @BulatovFVConjecture; @ZhukFVConjecture; @Zhuk20]. 
There are also some results about VCSPs of valued structures with infinite domains [@BodirskyMaminoViola-Journal; @ViolaZivny]. We show that the resilience problem for every connected conjunctive query (and in fact for every *union* of connected conjunctive queries) can be formulated as a VCSP for a valued structure with an *oligomorphic automorphism group*, i.e., a structure with a countable domain that, for every fixed $k$, has only finitely many orbits of $k$-tuples under the action of the automorphism group. This property is important for classical CSPs (which can be seen as VCSPs where all cost functions take values in $\{0,\infty\}$), since it makes it possible to extend and use tools from finite-domain CSPs (see, e.g., [@Book]). The complexity classification for the general, not necessarily connected case can be reduced to the connected case. This result also extends to the more general setting where some relations or tuples are *exogenous*, meaning that they may *not* be removed from the database. If the conjunctive query is acyclic, then we even obtain a VCSP for a valued structure with a finite domain, and we derive a P versus NP-complete dichotomy from the known dichotomy for such VCSPs. This leads us to initiate the systematic study of VCSPs of countably infinite valued structures with an oligomorphic automorphism group. In particular, we develop a notion of *expressive power* which is based on *primitive positive definitions* and other complexity-preserving operators, inspired by the techniques known from VCSPs over finite domains. We use the expressive power to obtain polynomial-time reductions between such VCSPs, which we use to formulate a hardness condition for them, and we conjecture that for VCSPs that stem from resilience problems this hardness condition is necessary and sufficient, unless P = NP. 
We also present an algebraic condition for valued structures that implies that the induced VCSP is in P, based on the concept of *fractional polymorphisms*, which generalise classical polymorphisms, a common tool for proving tractability of CSPs. To prove membership in P, we use a reduction to finite-domain VCSPs which can be solved by a linear programming relaxation technique. We conjecture that the resulting algorithm solves all resilience problems that are in P. We demonstrate the utility of our algebraic tractability condition by applying it to a concrete conjunctive query for which the computational complexity of resilience has been stated as an open problem in the literature [@NewResilience] (Section [8.5](#sect:expl){reference-type="ref" reference="sect:expl"}). **Related Work.** The study of resilience problems was initiated in [@Resilience]. The authors obtain a P versus NP-complete dichotomy for the class of self-join-free conjunctive queries, i.e., queries in which each relation symbol occurs at most once. In a subsequent paper [@NewResilience], several results are obtained for conjunctive queries with self-joins of a specific form, while the authors also state a few open problems of similar nature that cannot be handled by their methods. In the latest article [@LatestResilience-arxiv], Gatterbauer and Makhija present a unified approach to resilience problems based on integer linear programming, which works both under bag semantics and under set semantics. However, the new complexity results in [@LatestResilience-arxiv] again concern only self-join-free queries. Our approach is independent of self-joins and hence allows us to study conjunctive queries that were difficult to treat before. We stress that VCSPs of countable valued structures with an oligomorphic automorphism group go far beyond resilience problems. 
For example, many computational problems in the recently very active area of *fixed parameter tractability for graph separation problems* can be formulated as VCSPs of appropriate countable valued structures with an oligomorphic automorphism group. In particular, this applies to the *directed feedback edge set problem*, the *directed symmetric multicut problem*, and many other problems with recent breakthrough results concerning FPT algorithms where the parameter is the number of edges that need to be removed from the graph [@KPW22; @KKPW21]. Several of these problems, such as the *multicut problem* and the *coupled min cut problem*, can even be formulated as VCSPs over a finite domain. Our notion of expressive power (and fractional polymorphisms) can also be used to study the parametrised complexity of all of these problems. **Outline.** The article is organised from the general to the specific, starting with VCSPs in full generality (Section [2](#sect:prelims){reference-type="ref" reference="sect:prelims"}), then focussing on valued structures with an oligomorphic automorphism group (Section [3](#sect:oligo){reference-type="ref" reference="sect:oligo"}), for which our notion of expressive power (Section [4](#sect:expr){reference-type="ref" reference="sect:expr"}) leads to polynomial-time reductions. Our general hardness condition, which also builds upon the notion of expressive power, is presented in Section [5](#sect:gen-hard){reference-type="ref" reference="sect:gen-hard"}. To study the expressive power and to formulate general polynomial-time tractability results, we introduce the concept of *fractional polymorphisms* in Section [6](#sect:fpol){reference-type="ref" reference="sect:fpol"} (they are probability distributions over operations on the valued structure). 
We take inspiration from the theory of VCSPs for finite-domain valued structures, but apply some non-trivial modifications that are specific to the infinite-domain setting (because the considered probability distributions are over uncountable sets). We then present a general polynomial-time tractability result (Theorem [Theorem 58](#thm:tract){reference-type="ref" reference="thm:tract"}) which is phrased in terms of fractional polymorphisms. Section [8](#sect:resilience){reference-type="ref" reference="sect:resilience"} applies the general theory to resilience problems. We illustrate the power of our approach by settling the computational complexity of a resilience problem for a concrete conjunctive query from the literature (Section [8.5](#sect:expl){reference-type="ref" reference="sect:expl"}). Section [9](#sect:concl){reference-type="ref" reference="sect:concl"} closes with open problems for future research. # Preliminaries {#sect:prelims} ## Valued Structures The set $\{0,1,2,\dots\}$ of natural numbers is denoted by ${\mathbb N}$, the set of rational numbers is denoted by $\mathbb Q$, the set of non-negative rational numbers by ${\mathbb Q}_{\geq 0}$ and the set of positive rational numbers by $\mathbb Q_{>0}$. We use analogous notation for the set of real numbers $\mathbb{R}$ and the set of integers $\mathbb Z$. We also need an additional value $\infty$; all we need to know about $\infty$ is that - $a < \infty$ for every $a \in {\mathbb Q}$, - $a + \infty = \infty + a = \infty$ for all $a \in {\mathbb Q} \cup \{\infty\}$, and - $0 \cdot \infty = \infty \cdot 0 = 0$ and $a \cdot \infty = \infty \cdot a = \infty$ for $a > 0$. Let $C$ be a set and let $k \in {\mathbb N}$. A *weighted relation of arity $k$ over $C$* is a function $R \colon C^k \to {\mathbb Q} \cup \{\infty\}$. 
We write ${\mathscr R}_C^{(k)}$ for the set of all weighted relations of arity $k$, and define $${\mathscr R}_C := \bigcup_{k \in {\mathbb N}} {\mathscr R}_C^{(k)}.$$ A weighted relation is called *finite-valued* if it takes values only in $\mathbb Q$. **Example 1**. *The *weighted equality relation* $R_=$ is the binary weighted relation defined over $C$ by $R_=(x,y) = 0$ if $x=y$ and $R_=(x,y) = \infty$ otherwise. The *empty relation* $R_{\emptyset}$ is the unary weighted relation defined over $C$ by $R_{\emptyset}(x) = \infty$ for all $x \in C$.* A weighted relation $R \in {\mathscr R}_C^{(k)}$ that only takes values from $\{0,\infty\}$ will be identified with the following relation in the usual sense $$\{a \in C^k \mid R(a) = 0\}.$$ For $R \in {\mathscr R}_C^{(k)}$ the *feasibility relation of $R$* is defined as $$\mathop{\mathrm{Feas}}(R) := \{a \in C^k \mid R(a) < \infty\}.$$ A *(relational) signature* $\tau$ is a set of *relation symbols*, each equipped with an arity from ${\mathbb N}$. A *valued $\tau$-structure* $\Gamma$ consists of a set $C$, which is also called the *domain* of $\Gamma$, and a weighted relation $R^{\Gamma} \in {\mathscr R}_C^{(k)}$ for each relation symbol $R \in \tau$ of arity $k$. A *$\tau$-structure* in the usual sense may then be identified with a valued $\tau$-structure where all weighted relations only take values from $\{0,\infty\}$. **Example 2**. *Let $\tau = \{<\}$ be a relational signature with a single binary relation symbol $<$. Let $\Gamma_{<}$ be the valued $\tau$-structure with domain $\{0,1\}$ and where ${<}(x,y) = 0$ if $x < y$, and ${<}(x,y) = 1$ otherwise.* **Example 3**. *Let $\tau = \{E,N\}$ be a relational signature with two binary relation symbols $E$ and $N$. 
Let $\Gamma_{\text{LCC}}$ be the valued $\tau$-structure with domain ${\mathbb N}$ and where $E(x,y) = 0$ if $x = y$ and $E(x,y)=1$ otherwise, and where $N(x,y) = 0$ if $x \neq y$ and $N(x,y)=1$ otherwise.* An *atomic $\tau$-expression* is an expression of the form $R(x_1,\dots,x_k)$ for $R \in \tau$ and (not necessarily distinct) variable symbols $x_1,\dots,x_k$. A *$\tau$-expression* is an expression $\phi$ of the form $\sum_{i \leq m} \phi_i$ where $m \in {\mathbb N}$ and $\phi_i$ for $i \in \{1,\dots,m\}$ is an atomic $\tau$-expression. Note that the same atomic $\tau$-expression might appear several times in the sum. We write $\phi(x_1,\dots,x_n)$ for a $\tau$-expression where all the variables are from the set $\{x_1,\dots,x_n\}$. If $\Gamma$ is a valued $\tau$-structure, then a $\tau$-expression $\phi(x_1,\dots,x_n)$ defines over $\Gamma$ a member of ${\mathscr R}_C^{(n)}$, which we denote by $\phi^{\Gamma}$. If $\phi$ is the empty sum then $\phi^{\Gamma}$ is constant $0$. ## Valued Constraint Satisfaction {#sect:vcsp} In this section we assume that $\Gamma$ is a fixed valued $\tau$-structure for a *finite* signature $\tau$. The weighted relations of $\Gamma$ are also called *cost functions*. The *valued constraint satisfaction problem for $\Gamma$*, denoted by *$\mathop{\mathrm{VCSP}}(\Gamma)$*, is the computational problem to decide for a given $\tau$-expression $\phi(x_1,\dots,x_n)$ and a given $u \in {\mathbb Q}$ whether there exists $a \in C^n$ such that $\phi^{\Gamma}(a) \leq u$. We refer to $\phi(x_1,\dots,x_n)$ as an *instance* of $\mathop{\mathrm{VCSP}}(\Gamma)$, and to $u$ as the *threshold*. Tuples $a \in C^n$ such that $\phi^{\Gamma}(a) \leq u$ are called a *solution for $(\phi,u)$*. 
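Over a finite domain, the decision problem just defined can be checked by exhaustive search. The following sketch (with illustrative names, not from the paper; exponential in the number of variables) decides an instance of $\mathop{\mathrm{VCSP}}(\Gamma_<)$ for the structure $\Gamma_<$ of Example 2:

```python
from itertools import product

def vcsp_decide(domain, relations, atoms, n, u):
    """Decide an instance of VCSP(Gamma) by exhaustive search:
    is there an assignment a in domain^n with phi^Gamma(a) <= u?
    `relations` maps each symbol to its cost function, and `atoms`
    lists the summands of phi as (symbol, tuple of variable indices)."""
    for a in product(domain, repeat=n):
        cost = sum(relations[R](*(a[i] for i in idx)) for R, idx in atoms)
        if cost <= u:
            return True
    return False

# Gamma_< from Example 2: domain {0,1}, cost 0 if x < y and 1 otherwise.
lt = lambda x, y: 0 if x < y else 1
# phi(x_1,x_2,x_3) = <(x_1,x_2) + <(x_2,x_3)
phi = [("lt", (0, 1)), ("lt", (1, 2))]
```

Over the domain $\{0,1\}$ at most one of the two summands of `phi` can have cost $0$, so this instance has value $1$: the threshold $u = 1$ is met, while $u = 0$ is not.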
The *value* of $\phi$ (with respect to $\Gamma$) is defined to be $$\inf_{a \in C^n} \phi^{\Gamma}(a).$$ In some contexts, it will be beneficial to consider only a given $\tau$-expression $\phi$ to be the input of $\mathop{\mathrm{VCSP}}(\Gamma)$ (rather than $\phi$ and the threshold $u$) and a tuple $a \in C^n$ will then be called a *solution for $\phi$* if the value of $\phi$ equals $\phi^{\Gamma}(a)$. Note that in general there might not be any solution. If there exists a tuple $a \in C^n$ such that $\phi^{\Gamma}(a) < \infty$ then $\phi$ is called *satisfiable*. Note that our setting also captures classical CSPs, which can be viewed as the VCSPs for valued structures $\Gamma$ that only contain cost functions that take value $0$ or $\infty$. In this case, we will sometimes write $\mathop{\mathrm{CSP}}(\Gamma)$ for $\mathop{\mathrm{VCSP}}(\Gamma)$. Below we give a few examples of known optimisation problems that can be formulated as valued constraint satisfaction problems. **Example 4**. *The problem $\mathop{\mathrm{VCSP}}(\Gamma_<)$ for the valued structure $\Gamma_<$ from Example [Example 2](#expl:vs-mc){reference-type="ref" reference="expl:vs-mc"} models the *directed max-cut* problem: given a finite directed graph $(V,E)$ (we do allow loops and multiple edges), partition the vertices $V$ into two classes $A$ and $B$ such that the number of edges from $A$ to $B$ is maximal. Maximising the number of edges from $A$ to $B$ amounts to minimising the number $e$ of edges within $A$, within $B$, and from $B$ to $A$. So when we associate $A$ to the preimage of $0$ and $B$ to the preimage of $1$, computing the number $e$ corresponds to finding the evaluation map $s \colon V \rightarrow \{0,1\}$ that minimises the value $\sum_{(x,y) \in E} {<}(s(x),s(y))$, which can be formulated as an instance of $\mathop{\mathrm{VCSP}}(\Gamma_<)$. Conversely, every instance of $\mathop{\mathrm{VCSP}}(\Gamma_<)$ corresponds to a directed max-cut instance. 
It is known that $\mathop{\mathrm{VCSP}}(\Gamma_{<})$ is NP-complete [@GareyJohnson] (even if we do not allow loops and multiple edges in the input). We mention that this problem can be viewed as a resilience problem in database theory as explained in Section [8](#sect:resilience){reference-type="ref" reference="sect:resilience"}.* **Example 5**. *Consider the valued structure $\Gamma_{\geq}$ with domain $\{0,1\}$ and the binary weighted relation $\geq$ defined by ${\geq}(x,y)=0$ if $x \geq y$ and ${\geq}(x,y)=1$ otherwise. Similarly to the previous example, $\mathop{\mathrm{VCSP}}(\Gamma_{\geq})$ models the directed min-cut problem, i.e., given a finite directed graph $(V,E)$, partition the vertices $V$ into two classes $A$ and $B$ such that the number of edges from $A$ to $B$ is minimal. The min-cut problem is solvable in polynomial time; see, e.g., [@max-flow].* **Example 6**. *The problem of *least correlation clustering with partial information* [@ViolaThesis Example 5] is equal to $\mathop{\mathrm{VCSP}}(\Gamma_{\text{LCC}})$ where $\Gamma_{\text{LCC}}$ is the valued structure from Example [Example 3](#expl:vs-mcc){reference-type="ref" reference="expl:vs-mcc"}. It is a variant of the min-correlation clustering problem [@CorrelationClustering], where we have precisely one constraint between any two variables. The problem is NP-complete in both settings [@GareyJohnson; @ViolaThesis].* # Oligomorphicity {#sect:oligo} Many facts about VCSPs for valued structures with a finite domain can be generalised to a large class of valued structures over an infinite domain, defined in terms of automorphisms. Automorphisms of valued structures are defined as follows. **Definition 7**. *Let $k \in {\mathbb N}$, let $R \in {\mathscr R}^{(k)}_C$, and let $\alpha$ be a permutation of $C$. Then $\alpha$ *preserves* $R$ if for all $a \in C^k$ we have $R(\alpha(a)) = R(a)$. 
If $\Gamma$ is a valued structure with domain $C$, then an *automorphism* of $\Gamma$ is a permutation of $C$ that preserves all weighted relations of $\Gamma$.* The set of all automorphisms of $\Gamma$ is denoted by $\mathop{\mathrm{Aut}}(\Gamma)$, and forms a group with respect to composition. Let $k \in {\mathbb N}$. An *orbit of $k$-tuples* of a permutation group $G$ on a set $C$ is a set of the form $\{ \alpha(a) \mid \alpha \in G \}$ for some $a \in C^k$. A permutation group $G$ on a countable set is called *oligomorphic* if for every $k \in {\mathbb N}$ there are only finitely many orbits of $k$-tuples of $G$. From now on, whenever we write that a structure has an oligomorphic automorphism group, we also imply that its domain is countable. Clearly, every valued structure with a finite domain has an oligomorphic automorphism group. A countable structure has an oligomorphic automorphism group if and only if it is *$\omega$-categorical*, i.e., if all countable models of its first-order theory are isomorphic [@Hodges]. **Example 8**. *The automorphism group of $\Gamma_{\text{LCC}}$ from Examples [Example 3](#expl:vs-mcc){reference-type="ref" reference="expl:vs-mcc"} and [Example 6](#expl:MCC){reference-type="ref" reference="expl:MCC"} is the full symmetric group and hence oligomorphic.* **Lemma 9**. *Let $\Gamma$ be a valued structure with a countable domain $C$ and an oligomorphic automorphism group. Then for every instance $\phi(x_1,\dots,x_n)$ of $\mathop{\mathrm{VCSP}}(\Gamma)$ there exists $a \in C^n$ such that the value of $\phi$ equals $\phi^{\Gamma}(a)$.* *Proof.* The statement follows from the assumption that there are only finitely many orbits of $n$-tuples of $\mathop{\mathrm{Aut}}(\Gamma)$, because it implies that there are only finitely many possible values from ${\mathbb Q} \cup \{\infty\}$ for $\phi^{\Gamma}(a)$. ◻ A first-order sentence is called *universal* if it is of the form $\forall x_1,\dots,x_l.\; \psi$ where $\psi$ is quantifier-free. 
Every quantifier-free formula is equivalent to a formula in conjunctive normal form, so we assume in the following that quantifier-free formulas are of this form. Recall that a $\tau$-structure ${\mathfrak A}$ *embeds* into a $\tau$-structure ${\mathfrak B}$ if there is an injective map from $A$ to $B$ that preserves all relations of ${\mathfrak A}$ and their complements; the corresponding map is called an *embedding*. The *age* of a $\tau$-structure is the class of all finite $\tau$-structures that embed into it. A structure ${\mathfrak B}$ with a finite relational signature $\tau$ is called - *finitely bounded* if there exists a universal $\tau$-sentence $\phi$ such that a finite structure ${\mathfrak A}$ is in the age of ${\mathfrak B}$ if and only if ${\mathfrak A}\models \phi$. - *homogeneous* if every isomorphism between finite substructures of ${\mathfrak B}$ can be extended to an automorphism of ${\mathfrak B}$. If $\tau' \subseteq \tau$, then a $\tau'$-structure ${\mathfrak B}'$ is called the *reduct* of ${\mathfrak B}$ if ${\mathfrak B}$ and ${\mathfrak B}'$ have the same domain and $R^{{\mathfrak B}'}=R^{{\mathfrak B}}$ for every $R\in \tau'$. Note that for every structure ${\mathfrak B}$ with a finite relational signature, for every $n$ there are only finitely many non-isomorphic substructures of ${\mathfrak B}$ of size $n$. Therefore, all countable homogeneous structures with a finite relational signature and all of their reducts have finitely many orbits of $k$-tuples for all $k \in \mathbb N$, and hence an oligomorphic automorphism group. **Theorem 10**. *Let $\Gamma$ be a countable valued structure with finite signature such that there exists a finitely bounded homogeneous structure ${\mathfrak B}$ with $\mathop{\mathrm{Aut}}({\mathfrak B}) \subseteq \mathop{\mathrm{Aut}}(\Gamma)$. Then $\mathop{\mathrm{VCSP}}(\Gamma)$ is in NP.* *Proof.* Let $(\phi,u)$ be an instance of $\mathop{\mathrm{VCSP}}(\Gamma)$ with $n$ variables. 
Since $\mathop{\mathrm{Aut}}({\mathfrak B}) \subseteq \mathop{\mathrm{Aut}}(\Gamma)$, every orbit of $n$-tuples of $\mathop{\mathrm{Aut}}(\Gamma)$ is determined by the substructure induced by ${\mathfrak B}$ on the elements of some tuple from the orbit. Note that two tuples $(a_1, \dots, a_n)$ and $(b_1, \dots, b_n)$ lie in the same orbit of $\mathop{\mathrm{Aut}}({\mathfrak B})$ if and only if the map that maps $a_i$ to $b_i$ for $i\in \{1,\dots,n\}$ is an isomorphism between the substructures induced by ${\mathfrak B}$ on $\{a_1, \dots, a_n\}$ and on $\{b_1, \dots, b_n\}$. Whether a given finite structure ${\mathfrak A}$ is in the age of a fixed finitely bounded structure ${\mathfrak B}$ can be decided in polynomial time: if $\phi$ is the universal $\tau$-sentence which describes the age of ${\mathfrak B}$, it suffices to exhaustively check all possible instantiations of the variables of $\phi$ with elements of $A$ and verify whether $\phi$ is true in ${\mathfrak A}$ under the instantiation. Hence, we may non-deterministically generate a structure ${\mathfrak A}$ with domain $\{1,\dots,n\}$ from the age of ${\mathfrak B}$ and then verify in polynomial time whether the value $\phi^{\Gamma}(b_1, \dots, b_n)$ is at most $u$ for any tuple $(b_1,\dots,b_n) \in B^n$ such that $i \mapsto b_i$ is an embedding of ${\mathfrak A}$ into ${\mathfrak B}$. ◻ # Expressive Power {#sect:expr} One of the fundamental concepts in the theory of constraint satisfaction is the concept of *primitive positive definitions*, which is the fragment of first-order logic where only equality, existential quantification, and conjunction are allowed (in other words, negation, universal quantification, and disjunction are forbidden). The motivation for this concept is that relations with such a definition can be added to the structure without changing the complexity of the respective CSP. The natural generalisation to *valued* constraint satisfaction is the following notion of expressibility. 
**Definition 11**. *Let $\Gamma$ be a valued $\tau$-structure. We say that $R \in {\mathscr R}^{(k)}_C$ can be *expressed* by $\Gamma$ if there exists a $\tau$-expression $\phi(x_1,\dots,x_k,y_1,\dots,y_n)$ such that for all $a \in C^k$ we have $$R(a) = \inf_{b \in C^n} \phi^{\Gamma}(a,b).$$* Note that $\inf_{b \in C^n} \phi^{\Gamma}(a,b)$ might be irrational or $-\infty$. If this is the case in Definition [Definition 11](#def:expr){reference-type="ref" reference="def:expr"}, then $\phi$ does not witness that $R$ can be expressed in $\Gamma$ since weighted relations must have weights from ${\mathbb Q} \cup \{\infty\}$. If $\Gamma$ has an oligomorphic automorphism group, however, then Lemma [Lemma 9](#lem:value){reference-type="ref" reference="lem:value"} guarantees that the infimum is attained, and hence lies in ${\mathbb Q} \cup \{\infty\}$. We will further see in Lemma [Lemma 17](#lem:expr-reduce){reference-type="ref" reference="lem:expr-reduce"} that if $\Gamma$ has an oligomorphic automorphism group, then the addition of weighted relations that are expressible by $\Gamma$ does not change the computational complexity of $\mathop{\mathrm{VCSP}}(\Gamma)$. Another way to derive new relations from existing ones that preserves the computational complexity of the original VCSP is introduced in the following definition. **Definition 12**. *Let $R,R' \in {\mathscr R}_C$. We say that $R'$ can be obtained from $R$ by* - **non-negative scaling* if there exists $r \in {\mathbb Q}_{\geq 0}$ such that $R' = r R$;* - **shifting* if there exists $s \in {\mathbb Q}$ such that $R' = R + s$.* In the literature about the complexity of finite-domain VCSPs we find another operator on sets of weighted relations that preserves the complexity of the VCSP: the operator Opt (see, e.g., [@FullaZivny; @KrokhinZivny17]). **Definition 13**. *Let $R \in {\mathscr R}^{(k)}_C$. 
The relation containing all minimal-value tuples of $R$ is defined as $$\mathop{\mathrm{Opt}}(R) := \{a \in \mathop{\mathrm{Feas}}(R) \mid R(a) \leq R(b) \text{ for every } b \in C^k\}.$$* **Definition 14** (weighted relational clone). *A *weighted relational clone (over $C$)* is a subset of ${\mathscr R}_C$ that contains $R_{=}$ and $R_{\emptyset}$ (from Example [Example 1](#expl:wrels){reference-type="ref" reference="expl:wrels"}), and is closed under expressibility, shifting, non-negative scaling, $\mathop{\mathrm{Feas}}$, and $\mathop{\mathrm{Opt}}$. For a valued structure $\Gamma$ with domain $C$, we write $\langle \Gamma \rangle$ for the smallest weighted relational clone that contains the weighted relations of $\Gamma$.* The following example shows that neither the operator $\mathop{\mathrm{Opt}}$ nor the operator $\mathop{\mathrm{Feas}}$ is redundant in the definition above. **Example 15**. *Consider the domain $C=\{0,1,2\}$ and the unary weighted relation $R$ on $C$ defined by $R(0)=0$, $R(1)=1$ and $R(2)=\infty$. Then the relation $\mathop{\mathrm{Feas}}(R)$ cannot be obtained from $R$ by expressing, shifting, non-negative scaling and use of $\mathop{\mathrm{Opt}}$. Similarly, the relation $\mathop{\mathrm{Opt}}(R)$ cannot be obtained from $R$ by expressing, shifting, non-negative scaling and use of $\mathop{\mathrm{Feas}}$.* **Remark 16**. *Note that for every valued structure $\Gamma$ and $R\in \langle \Gamma \rangle$, every automorphism of $\Gamma$ is an automorphism of $R$.* The motivation of Definition [Definition 14](#def:wrelclone){reference-type="ref" reference="def:wrelclone"} for valued CSPs stems from the following lemma, which shows that adding relations in $\langle\Gamma \rangle$ does not change the complexity of the $\mathop{\mathrm{VCSP}}$ up to polynomial-time reductions. 
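For finite domains, the operators $\mathop{\mathrm{Feas}}$ and $\mathop{\mathrm{Opt}}$ applied to the unary relation of Example 15 can be computed directly; a minimal sketch (names are illustrative, not from the paper):

```python
INF = float("inf")

def feas(R, domain):
    """Feas(R) for a unary weighted relation given as a dict:
    the elements of finite cost."""
    return {a for a in domain if R[a] < INF}

def opt(R, domain):
    """Opt(R): the feasible elements of minimal cost
    (empty if R is nowhere finite, matching the definition)."""
    finite = feas(R, domain)
    if not finite:
        return set()
    m = min(R[a] for a in finite)
    return {a for a in finite if R[a] == m}

# Example 15: C = {0, 1, 2} with R(0) = 0, R(1) = 1, R(2) = infinity.
C = {0, 1, 2}
R = {0: 0, 1: 1, 2: INF}
```

Here `feas(R, C)` yields $\{0,1\}$ and `opt(R, C)` yields $\{0\}$, the two relations that Example 15 shows cannot be simulated by the remaining operators.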
For valued structures over finite domains this is proved in [@VCSP-Galois], except for the operator $\mathop{\mathrm{Opt}}$, for which a proof can be found in [@FullaZivny Theorem 5.13]. Only parts of the proof can be generalised to valued structures over infinite domains in the general case, that is, when oligomorphic automorphism groups are not required; see, e.g., Schneider and Viola [@SchneiderViola] and Viola [@ViolaThesis Lemma 7.1.4]. Note, however, that in these works the definition of VCSPs was changed: instead of asking whether a solution of value at most $u$ can be found, they ask whether there exists a solution of value strictly less than $u$, to circumvent problems about infima that are not realised. Moreover, in [@SchneiderViola] the authors restrict themselves to finite-valued weighted relations and hence do not consider the operator $\mathop{\mathrm{Opt}}$. Example [Example 15](#expl:opt-feas){reference-type="ref" reference="expl:opt-feas"} shows that the operator $\mathop{\mathrm{Opt}}$ cannot be simulated by the other ones already on finite domains, which is why it was introduced in [@FullaZivny]. The same is true for the operator $\mathop{\mathrm{Feas}}$, which was included implicitly in [@FullaZivny] by allowing weighted relations to be scaled by $0$ and defining $0 \cdot \infty=\infty$. In our approach, we work under the assumption that the valued structure has an oligomorphic automorphism group, which implies that infima in expressions are realised and the values of VCSPs of such structures can be attained. Therefore, we obtain a polynomial-time reduction for each of the operators in Definition [Definition 14](#def:wrelclone){reference-type="ref" reference="def:wrelclone"} as in the finite-domain case. **Lemma 17**. *Let $\Gamma$ be a valued structure with an oligomorphic automorphism group and a finite signature. 
Suppose that $\Delta$ is a valued structure with a finite signature over the same domain $C$ such that every cost function of $\Delta$ is from $\langle \Gamma \rangle$. Then there is a polynomial-time reduction from $\mathop{\mathrm{VCSP}}(\Delta)$ to $\mathop{\mathrm{VCSP}}(\Gamma)$.* *Proof.* Let $\tau$ be the signature of $\Gamma$. It suffices to prove the statement for expansions of $\Gamma$ to a signature $\tau \cup \{R\}$ that extends $\tau$ by a single relation symbol $R$ with $R^{\Delta} \in \langle \Gamma \rangle$. If $R^{\Delta} = R_\emptyset$, then an instance $\phi$ of $\mathop{\mathrm{VCSP}}(\Delta)$ with threshold $u \in \mathbb Q$ is unsatisfiable if and only if $\phi$ contains the symbol $R$, or if it does not contain $R$ and is unsatisfiable viewed as an instance of $\mathop{\mathrm{VCSP}}(\Gamma)$. In the former case, choose a $k$-ary relation symbol $S \in \tau$ and note that $S^{\Gamma}$ attains only finitely many values, by the oligomorphicity of $\mathop{\mathrm{Aut}}(\Gamma)$. Let $u' \in \mathbb Q$ be smaller than all of them. Then $S(x_1, \dots, x_k)$ is an instance of $\mathop{\mathrm{VCSP}}(\Gamma)$ that never meets the threshold $u'$, so this provides a correct reduction. In the latter case, for every $a \in C^n$ we have that $\phi^{\Delta}(a) = \phi^{\Gamma}(a)$; this provides a polynomial-time reduction. Now suppose that $R^{\Delta} = R_=$. Let $\psi(x_{i_1},\dots,x_{i_k})$ be obtained from an instance $\phi(x_1,\dots,x_n)$ of $\mathop{\mathrm{VCSP}}(\Delta)$ by identifying all variables $x_i$ and $x_j$ such that $\phi$ contains the summand $R(x_i,x_j)$. Then $\phi$ is satisfiable if and only if the instance $\psi$ is satisfiable, and $\inf_{a \in C^n} \phi^{\Delta}(a) = \inf_{b \in C^k} \psi^{\Gamma}(b)$. Again, this provides a polynomial-time reduction. 
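The identification of variables in the $R_=$ case is a union-find contraction of the equality summands; a sketch under illustrative names (not the authors' code):

```python
def identify_variables(n, eq_pairs, atoms):
    """Collapse variables x_i, x_j linked by a summand R_=(x_i, x_j),
    then rewrite the remaining atoms over class representatives."""
    parent = list(range(n))

    def find(i):  # find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in eq_pairs:          # union the classes of each R_= summand
        parent[find(i)] = find(j)
    reps = sorted({find(i) for i in range(n)})
    new_index = {r: k for k, r in enumerate(reps)}
    return [(R, tuple(new_index[find(i)] for i in idx)) for R, idx in atoms]
```

The resulting atom list is the expression $\psi$ over the (at most $n$) surviving variables, and it is computed in near-linear time.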
Next, consider the case that for some $\tau$-expression $\delta(y_1,\dots,y_l,z_1,\dots,z_k)$ we have $$R^{\Delta}(y_1, \dots, y_l)= \inf_{a \in C^k} \delta^{\Gamma}(y_1, \dots, y_l, a_1, \dots, a_k).$$ Let $\phi(x_1,\dots,x_n)$ be an instance of $\mathop{\mathrm{VCSP}}(\Delta)$. We replace each summand $R(y_1,\dots,y_{l})$ in $\phi$ by $\delta(y_1,\dots,y_l,z_1,\dots,z_k)$ where $z_1,\dots,z_k$ are new variables (different for each summand). Let $\theta(x_1,\dots,x_n,w_1,\dots,w_t)$ be the resulting $\tau$-expression after doing this for all summands that involve $R$. For any $a \in C^{n}$ we have that $$\phi(a_1,\dots,a_n) = \inf_{b \in C^t} \theta(a_1,\dots,a_{n},b)$$ and hence $\inf_{a \in C^n} \phi = \inf_{c \in C^{n+t}} \theta$; here we used the assumption that $\mathop{\mathrm{Aut}}(\Gamma)$ is oligomorphic. Since we replace each summand by the fixed expression $\delta$, whose size does not depend on the instance, the expression $\theta$ can be computed in polynomial time, which completes this case. Now suppose that $R^{\Delta}=rS^{\Gamma}+s$ for some $S \in \tau$, $r \in \mathbb Q_{\geq 0}$, and $s \in \mathbb Q$. Let $p\in \mathbb Z_{\geq 0}$ and $q \in \mathbb Z_{>0}$ be coprime integers such that $p/q = r$. Let $(\phi, u)$ be an instance of $\mathop{\mathrm{VCSP}}(\Delta)$ where $\phi(x_1, \dots, x_n)=\sum_{i=1}^{\ell} \phi_i + \sum_{j=1}^k \psi_j$, the summands $\phi_i$ contain only symbols from $\tau$, and each $\psi_j$ involves the symbol $R$. Let $\psi_j'$ be the expression obtained from $\psi_j$ by replacing $R$ with $S$. For $i \in \{1, \dots, \ell\}$ replace $\phi_i$ with $q$ copies of itself and for $j \in \{1, \dots, k\}$, replace $\psi_j$ with $p$ copies of $\psi_j'$; let $\phi'(x_1, \dots, x_n)$ be the resulting $\tau$-expression. Define $u':= q(u-ks)$.
Then for every $a \in C^n$ the following are equivalent: $$\begin{aligned} \phi(a_1, \dots, a_n) & = \sum_{i=1}^{\ell} \phi_i+ \sum_{j=1}^k \left(\frac{p}{q} \psi'_j + s \right) \leq u \\ \phi'(a_1, \dots, a_n) & = q\sum_{i=1}^{\ell} \phi_i+ p \sum_{j=1}^k \psi'_j \leq qu-qks = u'\end{aligned}$$ Since $(\phi', u')$ can be computed from $(\phi, u)$ in polynomial time, this provides the desired reduction. Now suppose that $R^{\Delta}=\mathop{\mathrm{Feas}}(S^{\Gamma})$ for some $S\in \tau$. Let $(\phi,u)$ be an instance of $\mathop{\mathrm{VCSP}}(\Delta)$, i.e., $\phi(x_1, \dots, x_n)=\sum_{i=1}^{\ell} \phi_i + \sum_{j=1}^k \psi_j$ where $\psi_j$, $j \in \{1, \dots, k\}$ are all the atomic expressions in $\phi$ that involve $R$. If $R^\Delta=R_{\emptyset}$, then the statement follows from the reduction for $R_\emptyset$. Therefore, suppose that this is not the case and let $w$ be the maximum finite weight assigned by $S^{\Gamma}$. Note that there are only finitely many values that the $\ell$ atoms $\phi_i$ may take and therefore only finitely many values that $\sum_{i=1}^{\ell} \phi_i$ may take. Let $v$ be the smallest of these values such that $v>u$ and let $d = v-u$; if $v$ does not exist, let $d=1$. To simplify the notation, set $t = \lceil (kw)/d \rceil + 1$. Let $\psi_j'$ be the $\tau$-expression resulting from $\psi_j$ by replacing the symbol $R$ by the symbol $S$. Let $\phi'$ be the $\tau$-expression obtained from $\phi$ by replacing each atom $\phi_i$ with $t$ copies of it and replacing every atom $\psi_j$ by $\psi_j'$. Let $(\phi', tu+kw)$ be the resulting instance of $\mathop{\mathrm{VCSP}}(\Gamma)$; note that it can be computed in polynomial time.
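The choice of the replication factor $t$ can be sanity-checked numerically: with $d = v - u$ and $t = \lceil kw/d \rceil + 1$ one always has $u + kw/t < v$, which is the separation the construction relies on. A small sketch with hypothetical parameter values:

```python
import math

# Checks the threshold-separation inequality behind the Feas case of
# Lemma 17: with d = v - u and t = ceil(k*w/d) + 1, replicating the phi_i
# summands t times guarantees u + k*w/t < v.
def separation_holds(u, v, k, w):
    d = v - u
    t = math.ceil(k * w / d) + 1
    return u + k * w / t < v

# Hypothetical parameters: threshold u, next attainable value v of the
# phi_i-sum, k Feas-summands, maximum finite weight w of S.
assert separation_holds(u=10, v=11, k=3, w=7)
assert all(separation_holds(u=0, v=v, k=k, w=w)
           for v in (1, 2) for k in (1, 5) for w in (1, 100))
```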
We claim that for every $a\in C^n$, the following are equivalent: $$\begin{aligned} \phi(a_1, \dots, a_n) & = \sum_{i=1}^{\ell} \phi_i+ \sum_{j=1}^k \psi_j \leq u \label{eq:feas1}\\ \phi'(a_1, \dots, a_n) & = t \cdot \sum_{i=1}^{\ell} \phi_i+ \sum_{j=1}^k \psi'_j \leq tu+kw \label{eq:feas2}\end{aligned}$$ If [\[eq:feas1\]](#eq:feas1){reference-type="eqref" reference="eq:feas1"} holds, then by the definition of $\mathop{\mathrm{Feas}}$ we must have $\psi_j=0$ for every $j \in \{1, \dots, k\}$. Thus $\sum_{i=1}^{\ell} \phi_i \leq u$ and $\sum_{j=1}^k \psi'_j \leq kw$, which implies [\[eq:feas2\]](#eq:feas2){reference-type="eqref" reference="eq:feas2"}. Conversely, if [\[eq:feas2\]](#eq:feas2){reference-type="eqref" reference="eq:feas2"} holds, then $\psi_j'$ is finite for every $j \in\{1, \dots, k\}$ and hence $\psi_j=0$. Moreover, [\[eq:feas2\]](#eq:feas2){reference-type="eqref" reference="eq:feas2"} implies $$\sum_{i=1}^{\ell} \phi_i \leq u + \frac{kw}{t}.$$ Note that if $v$ exists, then $u + (kw)/t < v$. Therefore (regardless of the existence of $v$), this implies $\sum_{i=1}^{\ell} \phi_i \leq u$, which together with what we have observed previously shows [\[eq:feas1\]](#eq:feas1){reference-type="eqref" reference="eq:feas1"}. Finally, we consider the case that $R^{\Delta} = \mathop{\mathrm{Opt}}(S^{\Gamma})$ for some relation symbol $S \in \tau$. Since $\tau$ is finite and $\mathop{\mathrm{Aut}}(\Gamma)$ is oligomorphic, we may assume without loss of generality that the minimum weight of all weighted relations in $\Delta$ equals $0$; otherwise, we subtract the smallest weight assigned to a tuple by some weighted relation in $\Delta$. This transformation does not affect the computational complexity of the VCSP (up to polynomial-time reductions). We may also assume that $S^{\Gamma}$ attains some positive finite value, because otherwise $\mathop{\mathrm{Opt}}(S^{\Gamma}) = S^{\Gamma}$ and the statement is trivial.
Let $m$ be the smallest positive weight assigned by $S^{\Gamma}$ and let $M$ be the largest finite weight assigned by any weighted relation of $\Gamma$ (again we use that $\tau$ is finite and that $\mathop{\mathrm{Aut}}(\Gamma)$ is oligomorphic). Let $(\phi, u)$, where $\phi(x_1,\dots,x_n)= \sum_{i=1}^k\phi_i$, be an instance of $\mathop{\mathrm{VCSP}}(\Delta)$. For $i \in \{1,\dots,k\}$, if $\phi_i$ involves the symbol $R$, then replace it by $k \cdot \lceil M/m \rceil + 1$ copies and replace $R$ by $S$. Let $\phi'$ be the resulting $\tau$-expression. We claim that $a \in C^n$ is a solution to the instance $(\phi',\min(kM,u))$ of $\mathop{\mathrm{VCSP}}(\Gamma)$ if and only if it is a solution to $(\phi,u)$. If $a \in C^n$ is such that $\phi(a) \leq u$ then for every $i \in \{1,\dots,k\}$ such that $\phi_i$ involves $R$ we have $\phi_i(a) = 0$. In particular, the minimal value attained by $S^{\Gamma}$ equals $0$ by our assumption, so that $\phi'(a) = \phi(a) \leq u$ and hence $\phi'(a) \leq \min(kM,u)$. Now suppose that $\phi(a) > u$. Then $\phi'(a)>u \geq \min(kM,u)$ or there exists an $i \in \{1,\dots,k\}$ such that $\phi_i(a) = \infty$. If $\phi_i$ does not involve the symbol $R$, then $\phi'(a) = \infty$ as well. If $\phi_i$ involves the symbol $R$, then $\phi'(a) \geq (k \cdot \lceil M/m \rceil + 1) m > kM$. In any case, $\phi'(a)>\min(kM, u)$. Since $\phi'$ can be computed from $\phi$ in polynomial time, this concludes the proof. ◻ The next example illustrates the use of Lemma [Lemma 17](#lem:expr-reduce){reference-type="ref" reference="lem:expr-reduce"} for obtaining hardness results. **Example 18**. *We revisit the countably infinite valued structure $\Gamma_{\text{LCC}}$ from Example [Example 3](#expl:vs-mcc){reference-type="ref" reference="expl:vs-mcc"}. Recall that $\mathop{\mathrm{VCSP}}(\Gamma_{\text{LCC}})$ is the least correlation clustering problem with partial information and that $\mathop{\mathrm{Aut}}(\Gamma_{\text{LCC}})$ is oligomorphic.
Let $\Gamma_{\text{EC}}$ be the relational structure with the same domain as $\Gamma_{\text{LCC}}$ and the relation $R := \{(x,y,z) \mid (x=y \wedge y \neq z) \vee (x \neq y \wedge y = z)\}$ (attaining values $0$ and $\infty$). Note that $$R(x,y,z) = \mathop{\mathrm{Opt}}(N(x,z) + N(x,z) + E(x,y) + E(y,z)).$$ This provides an alternative proof of NP-hardness of the least correlation clustering with partial information via Lemma [Lemma 17](#lem:expr-reduce){reference-type="ref" reference="lem:expr-reduce"}, because $\mathop{\mathrm{CSP}}(\Gamma_{\text{EC}})$ is known to be NP-hard [@ecsps].*

# Hardness from pp-Constructions {#sect:gen-hard}

A universal-algebraic theory of VCSPs for finite valued structures has been developed in [@KozikOchremiak15], following the classical approach to CSPs which is based on the concepts of cores, addition of constants, and primitive positive interpretations. Subsequently, an important conceptual insight was made for classical CSPs: if structures are considered up to homomorphic equivalence, then every structure that can be interpreted in the expansion of the core of a given structure by constants can also be obtained from that structure by taking a pp-power [@wonderland]. We are not aware of any published reference that adapts this perspective to the algebraic theory of VCSPs, so we develop (parts of) this approach here. As in [@wonderland], we immediately step from valued structures with a finite domain to the more general case of valued structures with an oligomorphic automorphism group. **Definition 19** (pp-power). *Let $\Gamma$ be a valued structure with domain $C$ and let $d \in {\mathbb N}$.
Then a ($d$-th) *pp-power* of $\Gamma$ is a valued structure $\Delta$ with domain $C^d$ such that for every weighted relation $R$ of $\Delta$ of arity $k$ there exists a weighted relation $S$ of arity $kd$ in $\langle \Gamma \rangle$ such that $$R((a^1_1,\dots,a^1_d),\dots,(a^k_1,\dots,a^k_d)) = S(a^1_1,\dots,a^1_d,\dots,a^k_1,\dots,a^k_d).$$* The name 'pp-power' comes from 'primitive positive power', since for relational structures expressibility is captured by primitive positive formulas. The following proposition shows that the VCSP of a pp-power reduces to the VCSP of the original structure. **Proposition 20**. *Let $\Gamma$ and $\Delta$ be valued structures such that $\mathop{\mathrm{Aut}}(\Gamma)$ is oligomorphic and $\Delta$ is a pp-power of $\Gamma$. Then $\mathop{\mathrm{Aut}}(\Delta)$ is oligomorphic and there is a polynomial-time reduction from $\mathop{\mathrm{VCSP}}(\Delta)$ to $\mathop{\mathrm{VCSP}}(\Gamma)$.* *Proof.* Let $d$ be the dimension of the pp-power and let $\tau$ be the signature of $\Gamma$. By Remark [Remark 16](#rem:aut){reference-type="ref" reference="rem:aut"}, $\mathop{\mathrm{Aut}}(\Gamma) \subseteq \mathop{\mathrm{Aut}}(\Delta)$ and thus $\mathop{\mathrm{Aut}}(\Delta)$ is oligomorphic. By Lemma [Lemma 17](#lem:expr-reduce){reference-type="ref" reference="lem:expr-reduce"}, we may suppose that for every weighted relation $R$ of arity $k$ of $\Delta$ the weighted relation $S \in \langle \Gamma \rangle$ of arity $dk$ from the definition of pp-powers equals $S^{\Gamma}$ for some $S \in \tau$. Let $(\phi,u)$ be an instance of $\mathop{\mathrm{VCSP}}(\Delta)$. For each variable $x$ of $\phi$ we introduce $d$ new variables $x_1,\dots,x_d$. For each summand $R(y^1,\dots,y^k)$ we introduce a summand $S(y^1_1,\dots,y^1_d,\dots,y^k_1,\dots,y^k_d)$; let $\psi$ be the resulting $\tau$-expression. 
It is now straightforward to verify that $(\phi,u)$ has a solution with respect to $\Delta$ if and only if $(\psi,u)$ has a solution with respect to $\Gamma$. ◻ Note that, in particular, if $\mathop{\mathrm{VCSP}}(\Gamma)$, parametrized by the threshold $u$, is fixed-parameter tractable, then so is $\mathop{\mathrm{VCSP}}(\Delta)$. If $C$ and $D$ are sets, then we equip the space $C^D$ of functions from $D$ to $C$ with the topology of pointwise convergence, where $C$ is taken to be discrete. In this topology, a basis of open sets is given by $${\mathscr S}_{a,b} := \{f \in C^D \mid f(a)=b\}$$ for $a \in D^k$ and $b \in C^k$ for some $k \in {\mathbb N}$. For any topological space $T$, we denote by $B(T)$ the Borel $\sigma$-algebra on $T$, i.e., the smallest subset of the powerset ${\mathcal P}(T)$ which contains all open sets and is closed under countable intersection and complement. We write $[0,1]$ for the set $\{x \in {\mathbb R} \mid 0 \leq x \leq 1\}$. **Definition 21** (fractional map). *Let $C$ and $D$ be sets. A *fractional map* from $D$ to $C$ is a probability distribution $$(C^D, B(C^D),\omega \colon B(C^D) \to [0,1]),$$ that is,* - *$\omega$ is countably additive: if $A_1,A_2,\dots \in B(C^D)$ are disjoint, then $$\omega(\bigcup_{i \in {\mathbb N}} A_i) = \sum_{i \in {\mathbb N}} \omega(A_i).$$* - *$\omega(C^D) = 1$.* If $f \in C^D$, we often write $\omega(f)$ instead of $\omega(\{f\})$. Note that $\{f\} \in B(C^D)$ for every $f$. The set $[0,1]$ carries the topology inherited from the standard topology on ${\mathbb R}$. We also view ${\mathbb R} \cup \{\infty\}$ as a topological space with a basis of open sets given by all open intervals $(a,b)$ for $a,b \in {\mathbb R}$, $a<b$ and additionally all sets of the form $\{x \in {\mathbb R} \mid x > a\} \cup \{\infty\}$. 
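A finite-support instance of Definition 21 can be made concrete. The following sketch (with hypothetical weights; the names `functions`, `omega`, and `measure_basic` are ours, not the paper's) represents a fractional map for $C = D = \{0,1\}$ by a weighted list of functions and computes the measure of basic open sets $\mathscr{S}_{a,b}$:

```python
from itertools import product

# Finite-support sketch of a fractional map (Definition 21) for C = D = {0, 1}:
# omega puts weight on finitely many functions f: D -> C, and the measure of
# a basic open set S_{a,b} = {f | f(a) = b} is the total weight of its members.
C = [0, 1]
D = [0, 1]
functions = [dict(zip(D, vals)) for vals in product(C, repeat=len(D))]

# Hypothetical weights on three of the four functions; omega(C^D) = 1.
omega = {0: 0.5, 1: 0.25, 3: 0.25}
assert abs(sum(omega.values()) - 1.0) < 1e-12

def measure_basic(a, b):
    """omega(S_{a,b}) for tuples a in D^k and b in C^k."""
    return sum(w for i, w in omega.items()
               if all(functions[i][x] == y for x, y in zip(a, b)))

# Additivity on a disjoint cover of C^D by basic open sets:
assert abs(measure_basic((0,), (0,)) + measure_basic((0,), (1,)) - 1.0) < 1e-12
```

In the infinite-domain setting this finite-support picture is only a special case: as noted below, when $C$ and $D$ are infinite there are distributions assigning measure zero to every single function.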
A *(real-valued) random variable* is a *measurable function* $X \colon T \to {\mathbb R} \cup \{\infty\}$, i.e., pre-images of elements of $B({\mathbb R} \cup \{\infty\})$ under $X$ are in $B(T)$. If $X$ is a real-valued random variable, then the *expected value of $X$ (with respect to a probability distribution $\omega$)* is denoted by $E_\omega[X]$ and is defined via the Lebesgue integral $$E_\omega[X] := \int_T X d \omega.$$ Recall that the Lebesgue integral $\int_T X d \omega$ need not exist, in which case $E_\omega[X]$ is undefined; otherwise, the integral equals a real number, $\infty$, or $-\infty$. For the convenience of the reader we recall the definition and some properties of the Lebesgue integral, specialised to our setting, in Appendix [10](#sect:lebesgue){reference-type="ref" reference="sect:lebesgue"}. Also recall that the expected value is - *linear*, i.e., for every $a,b \in \mathbb{R}$ and random variables $X$, $Y$ such that $E_\omega[X]$ and $E_\omega[Y]$ exist and $aE_\omega[X]+bE_\omega[Y]$ is defined we have $$E_\omega[aX+bY]=aE_\omega[X]+bE_\omega[Y];$$ - *monotone*, i.e., if $X,Y$ are random variables such that $E_\omega[X]$ and $E_\omega[Y]$ exist and $X(f) \leq Y(f)$ for all $f \in T$, then $E_\omega[X] \leq E_\omega[Y]$. Let $C$ and $D$ be sets. In the rest of the paper, we will work exclusively on a topological space $C^D$ of maps $f \colon D \rightarrow C$ and the special case ${\mathscr O}_C^{(\ell)}$ for some $\ell \in \mathbb N$ (i.e., $D = C^{\ell}$). Note that if $C$ and $D$ are infinite, then these spaces are uncountable and hence there are probability distributions $\omega$ such that $\omega(A)=0$ for every $1$-element set $A$. Therefore, in these cases, $E_\omega[X]$ for a random variable $X$ might not be expressible as a sum. **Definition 22** (fractional homomorphism). *Let $\Gamma$ and $\Delta$ be valued $\tau$-structures with domains $C$ and $D$, respectively. 
A *fractional homomorphism* from $\Delta$ to $\Gamma$ is a fractional map from $D$ to $C$ such that for every $R \in \tau$ of arity $k$ and every tuple $a \in D^k$ it holds for the random variable $X \colon C^D \rightarrow \mathbb{R}\cup\{\infty\}$ given by $$f \mapsto R^{\Gamma}(f(a))$$ that $E_\omega[X]$ exists and that $$E_\omega[X] \leq R^{\Delta}(a).$$* The following lemma shows that if $\mathop{\mathrm{Aut}}(\Gamma)$ is oligomorphic, then the expected value from Definition [Definition 22](#def:frac-hom){reference-type="ref" reference="def:frac-hom"} always exists. **Lemma 23**. *Let $C$ and $D$ be sets, $a \in D^k$, $R \in {\mathscr R}_C^{(k)}$. Let $X \colon C^D \rightarrow \mathbb{R}\cup\{\infty\}$ be the random variable given by $$f \mapsto R(f(a)).$$ If $\mathop{\mathrm{Aut}}(C; R)$ is oligomorphic, then $E_\omega[X]$ exists and $E_\omega[X] > -\infty$.* *Proof.* It is enough to show that $\int_{C^D} X^- d \omega \neq \infty$. Since $\mathop{\mathrm{Aut}}(C;R)$ is oligomorphic, there are only finitely many orbits of $k$-tuples in $\mathop{\mathrm{Aut}}(C;R)$. Let $O_1, \dots, O_m$ be all orbits of $k$-tuples of $\mathop{\mathrm{Aut}}(C; R)$ on which $R$ is negative. For every $i \in \{1, \dots, m\}$, let $b_i \in O_i$. Then we obtain (see [\[eq:exp-sum\]](#eq:exp-sum){reference-type="eqref" reference="eq:exp-sum"} in Appendix [10](#sect:lebesgue){reference-type="ref" reference="sect:lebesgue"} for a detailed derivation of the first equality) $$\begin{aligned} \int_{C^D} X^- d \omega & = \sum_{b \in C^k,R(b) < 0} -R(b) \omega({\mathscr S}_{a,b}) \\ & = - \sum_{i=1}^m R(b_i) \sum_{b \in O_i} \omega({\mathscr S}_{a,b}) \\ &= - \sum_{i=1}^m R(b_i) \; \omega \left( \bigcup_{b \in O_i}{\mathscr S}_{a,b} \right) \\ & \leq -\sum_{i=1}^m R(b_i) < \infty.\qedhere \end{aligned}$$ ◻ **Lemma 24**. 
*Let $\Gamma_1$, $\Gamma_2$, $\Gamma_3$ be countable valued $\tau$-structures such that there exists a fractional homomorphism $\omega_1$ from $\Gamma_1$ to $\Gamma_2$ and a fractional homomorphism $\omega_2$ from $\Gamma_2$ to $\Gamma_3$. Then there exists a fractional homomorphism $\omega_3 := \omega_2 \circ \omega_1$ from $\Gamma_1$ to $\Gamma_3$.* *Proof.* Let $C_1$, $C_2$, $C_3$ be the domains of $\Gamma_1$, $\Gamma_2$, and $\Gamma_3$, respectively. If $a \in C_1^k$ and $c \in C_3^k$, for some $k \in {\mathbb N}$, then define $$\omega_3({\mathscr S}_{a,c}) := \sum_{b \in C_2^k} \omega_1({\mathscr S}_{a,b}) \omega_2({\mathscr S}_{b,c}).$$ Note that on sets of this form, i.e., on basic open sets in $C_3^{C_1}$, $\omega_3$ is countably additive. Since our basis of open sets is closed under intersection, this definition extends uniquely to all of $B(C_3^{C_1})$ by Dynkin's $\pi$-$\lambda$ theorem. ◻ The following was shown for valued structures over finite domains in [@Butti Proposition 8.4]. **Proposition 25**. *Let $\Gamma$ and $\Delta$ be valued $\tau$-structures with domains $C$ and $D$ and with a fractional homomorphism $\omega$ from $\Delta$ to $\Gamma$. Then the value of every $\mathop{\mathrm{VCSP}}$ instance $\phi$ with respect to $\Gamma$ is at most the value of $\phi$ with respect to $\Delta$.* *Proof.* Let $\phi(x_1,\dots,x_n) = \sum_{i=1}^m R_i(x_{j_1^i}, \dots, x_{j_{k_i}^i})$ be a $\tau$-expression, where $j_1^i, \dots, j_{k_i}^i \in \{1, \dots, n\}$ for every $i \in \{1, \dots m\}$. To simplify the notation in the proof, if $v=(v_1, \dots, v_t)$ is a $t$-tuple of elements of some domain and $i_1,\dots, i_s \in \{1, \dots, t\}$, we will write $v_{i_1, \dots, i_s}$ for the tuple $(v_{i_1}, \dots, v_{i_s})$. Let $\varepsilon > 0$. 
From the definition of infimum, there exists $a^*\in D^n$ such that $$\label{eq:inf-tup} \phi^{\Delta}(a^*) \leq \inf_{a \in D^n} \phi^{\Delta}(a)+ \varepsilon/2$$ and $f^* \in C^D$ such that $$\label{eq:inf-op} \phi^{\Gamma}(f^*(a^*)) \leq \inf_{f\in C^D} \phi^{\Gamma} (f(a^*)) + \varepsilon/2.$$ Note that $E_{\omega}[f \mapsto R_i^{\Gamma}(f(a^*)_{j_1^i, \dots, j_{k_i}^i})]$ exists for every $i \in \{1, \dots, m\}$ by the definition of a fractional homomorphism. Suppose first that $\sum_{i=1}^m E_{\omega}[f \mapsto R_i^{\Gamma}(f(a^*)_{j_1^i, \dots, j_{k_i}^i})]$ is defined. Then $$\begin{aligned} \inf_{b \in C^n} \phi^{\Gamma}(b) &\leq \phi^{\Gamma} (f^*(a^*)) && \text{(definition of infimum)}\\ & \leq \inf_{f \in C^D} \phi^{\Gamma} (f(a^*)) + \varepsilon/2 &&\text{(by \eqref{eq:inf-op}) }\\ & \leq E_{\omega}[f \mapsto \phi^{\Gamma}(f(a^*))] + \varepsilon/2 &&\text{(by the monotonicity of $E_\omega$)} \\ &= \sum_{i=1}^m E_{\omega}[f \mapsto R_i^{\Gamma}(f(a^*)_{j_1^i, \dots, j_{k_i}^i})] + \varepsilon/2 &&\text{(by the linearity of $E_\omega$)}\\ & \leq \sum_{i=1}^m R_i^{\Delta}(a^*_{j_1^i, \dots, j_{k_i}^i}) + \varepsilon/2 &&\text{(since $\omega$ is a fractional homomorphism)}\\ &= \phi^{\Delta}(a^*) + \varepsilon/2 \\ & \leq \inf_{a \in D^n} \phi^{\Delta}(a) + \varepsilon &&\text{(by \eqref{eq:inf-tup}).}\end{aligned}$$ Since $\varepsilon>0$ was chosen arbitrarily, it follows that the value of $\phi$ with respect to $\Gamma$ is at most the value of $\phi$ with respect to $\Delta$. Suppose now that $\sum_{i=1}^m E_{\omega}[f \mapsto R_i^{\Gamma}(f(a^*)_{j_1^i, \dots, j_{k_i}^i})]$ is not defined. Then there exists $i \in \{1,\dots, m\}$ such that $E_\omega[f \mapsto R_i^\Gamma(f(a^*)_{j_1^i, \dots, j_{k_i}^i})] = \infty$. By the definition of a fractional homomorphism, this implies that $R_i^\Delta(a^*_{j_1^i, \dots, j_{k_i}^i})= \infty$ and hence $\sum_{i=1}^m R_i^{\Delta}(a^*_{j_1^i, \dots, j_{k_i}^i})=\infty$. 
Therefore, we obtain as above that $$\inf_{b \in C^n} \phi^{\Gamma}(b) \leq\inf_{a \in D^n} \phi^{\Delta}(a),$$ which is what we wanted to prove. ◻ **Remark 26**. *For finite domains, the converse of Proposition [Proposition 25](#prop:frac-hom){reference-type="ref" reference="prop:frac-hom"} is true as well [@Butti Proposition 8.4].* We say that two valued $\tau$-structures $\Gamma$ and $\Delta$ are *fractionally homomorphically equivalent* if there exist fractional homomorphisms from $\Gamma$ to $\Delta$ and from $\Delta$ to $\Gamma$. Clearly, fractional homomorphic equivalence is an equivalence relation on valued structures of the same signature. **Corollary 27**. *Let $\Gamma$ and $\Delta$ be valued $\tau$-structures with oligomorphic automorphism groups that are fractionally homomorphically equivalent. Then $\mathop{\mathrm{VCSP}}(\Gamma)$ and $\mathop{\mathrm{VCSP}}(\Delta)$ are polynomial-time equivalent.* *Proof.* In fact, the two problems $\mathop{\mathrm{VCSP}}(\Gamma)$ and $\mathop{\mathrm{VCSP}}(\Delta)$ coincide. By Proposition [Proposition 25](#prop:frac-hom){reference-type="ref" reference="prop:frac-hom"}, for every instance $\phi$, the values of $\phi$ with respect to $\Gamma$ and $\Delta$ are equal. By Lemma [Lemma 9](#lem:value){reference-type="ref" reference="lem:value"}, the value is attained in both structures and hence every instance $\phi$ with a threshold $u$ has a solution with respect to $\Gamma$ if and only if it has a solution with respect to $\Delta$. ◻ **Remark 28**.
*Note that if $\Gamma$ and $\Delta$ are classical relational $\tau$-structures that are homomorphically equivalent in the classical sense, then they are fractionally homomorphically equivalent when we view them as valued structures: if $h_1$ is the homomorphism from $\Gamma$ to $\Delta$ and $h_2$ is the homomorphism from $\Delta$ to $\Gamma$, then this is witnessed by the fractional homomorphisms $\omega_1$ and $\omega_2$ such that $\omega_1(h_1) = \omega_2(h_2) = 1$.* **Definition 29** (pp-construction). *Let $\Gamma$ and $\Delta$ be valued structures. We say that $\Delta$ has a *pp-construction* in $\Gamma$ if $\Delta$ is fractionally homomorphically equivalent to a structure $\Delta'$ which is a pp-power of $\Gamma$.* **Corollary 30**. *Let $\Gamma$ and $\Delta$ be valued structures with finite signatures and oligomorphic automorphism groups such that $\Delta$ has a pp-construction in $\Gamma$. Then there is a polynomial-time reduction from $\mathop{\mathrm{VCSP}}(\Delta)$ to $\mathop{\mathrm{VCSP}}(\Gamma)$.* *Proof.* Combine Proposition [Proposition 20](#prop:pp-hard){reference-type="ref" reference="prop:pp-hard"} and Corollary [Corollary 27](#cor:hom-hard){reference-type="ref" reference="cor:hom-hard"}. ◻ Let $\mathop{\mathrm{OIT}}$ be the following relation $$\mathop{\mathrm{OIT}}= \{(0,0,1),(0,1,0),(1,0,0) \}.$$ It is well-known (see, e.g., [@Book]) that $\mathop{\mathrm{CSP}}(\{0,1\};\mathop{\mathrm{OIT}})$ is NP-complete. **Corollary 31**. *Let $\Gamma$ be a valued structure with a finite signature and oligomorphic automorphism group such that $(\{0,1\};\mathop{\mathrm{OIT}})$ has a pp-construction in $\Gamma$. Then $\mathop{\mathrm{VCSP}}(\Gamma)$ is NP-hard.* *Proof.* Follows from the NP-hardness of $\mathop{\mathrm{CSP}}(\{0,1\};\mathop{\mathrm{OIT}})$ via Corollary [Corollary 30](#cor:pp-constr-red){reference-type="ref" reference="cor:pp-constr-red"}. ◻ **Lemma 32**. 
*The relation of pp-constructibility on the class of countable valued structures is transitive.* *Proof.* Clearly, a pp-power of a pp-power is again a pp-power, and fractional homomorphic equivalence is transitive by Lemma [Lemma 24](#lem:compose){reference-type="ref" reference="lem:compose"}. We are therefore left to prove that if $\Gamma$ and $\Delta$ are valued structures such that $\Delta$ is a $d$-dimensional pp-power of $\Gamma$, and if $\Gamma'$ is fractionally homomorphically equivalent to $\Gamma$ via fractional homomorphisms $\omega_1 \colon \Gamma \to \Gamma'$ and $\omega_2 \colon \Gamma' \to \Gamma$, then $\Delta$ also has a pp-construction in $\Gamma'$. Let $C$ and $C'$ be the domains of $\Gamma$ and $\Gamma'$, respectively. Take the $\tau$-expressions that define the weighted relations of $\Delta$ over $\Gamma$, and interpret them over $\Gamma'$ instead of $\Gamma$; let $\Delta'$ be the resulting valued structure. Note that $\Delta'$ is a $d$-dimensional pp-power of $\Gamma'$. For a map $f \colon C \to C'$, let $\tilde f \colon C^d \to (C')^d$ be given by $(x_1,\dots,x_{d}) \mapsto (f(x_1),\dots,f(x_{d}))$. Then for all $S \in B((C')^{C})$ we define $$\tilde \omega_1 (\{\tilde f \mid f \in S\}) := \omega_1(S)$$ and $$\tilde \omega_1(\tilde S):=\tilde \omega_1 (\tilde{S} \cap \{\tilde f \mid f \in (C')^C\})$$ for all $\tilde S \in B \big(((C')^d)^{C^d} \big)$. Note that $\tilde \omega_1$ is a fractional homomorphism from $\Delta$ to $\Delta'$. Analogously we obtain from $\omega_2$ a fractional homomorphism $\tilde \omega_2$ from $\Delta'$ to $\Delta$. Therefore, $\Delta$ is fractionally homomorphically equivalent to $\Delta'$, which is a pp-power of $\Gamma'$. In other words, $\Delta$ has a pp-construction in $\Gamma'$.
◻

# Fractional Polymorphisms {#sect:fpol}

In this section we introduce *fractional polymorphisms* of valued structures; they are an important tool for formulating tractability results and complexity classifications of VCSPs. For valued structures with a finite domain, our definition specialises to the established notion of a fractional polymorphism which has been used to study the complexity of VCSPs for valued structures over finite domains (see, e.g. [@ThapperZivny13]). Our approach is different from the one of Viola and Schneider [@ViolaThesis; @SchneiderViola] in that we work with arbitrary probability spaces instead of distributions with finite support. As we will see in Section [7](#sect:tract){reference-type="ref" reference="sect:tract"}, fractional polymorphisms can be used to give sufficient conditions for tractability of $\mathop{\mathrm{VCSP}}$s of certain valued structures with oligomorphic automorphism groups. This justifies the more general notion of a fractional polymorphism, as it might provide a tractability proof for more problems. We do not know if there are examples in our setting where it is necessary to use the more general notion; see Question [Question 81](#question:int-vs-fin-supp){reference-type="ref" reference="question:int-vs-fin-supp"}. Let ${\mathscr O}_C^{(\ell)}$ be the set of all operations $f \colon C^{\ell} \to C$ of arity $\ell$ on a set $C$. We equip ${\mathscr O}_C^{(\ell)}$ with the topology of pointwise convergence, where $C$ is taken to be discrete. That is, the basic open sets are of the form $$\begin{aligned} {\mathscr S}_{a^1,\dots,a^{\ell}, b} := \{ f \in {\mathscr O}^{(\ell)}_C \mid f(a^1,\dots,a^{\ell}) = b \} \label{eq:S}\end{aligned}$$ where $a^1,\dots,a^{\ell}, b\in C^m$, for some $m \in {\mathbb N}$, and $f$ is applied componentwise. Let $${\mathscr O}_C := \bigcup_{\ell \in {\mathbb N}} {\mathscr O}_C^{(\ell)}.$$ **Definition 33** (fractional operation). *Let $\ell \in {\mathbb N}$.
A *fractional operation on $C$ of arity $\ell$* is a probability distribution $$\big({\mathscr O}_C^{(\ell)},B({\mathscr O}_C^{(\ell)}), \omega \colon B({\mathscr O}_C^{(\ell)}) \to [0,1] \big).$$ The set of all fractional operations on $C$ of arity $\ell$ is denoted by ${\mathscr F}^{(\ell)}_C$, and ${\mathscr F}_C := \bigcup_{\ell \in {\mathbb N}} {\mathscr F}^{(\ell)}_C$.* If the reference to $C$ is clear, we occasionally omit the subscript $C$. We often use $\omega$ for both the entire fractional operation and for the map $\omega \colon B({\mathscr O}_C^{(\ell)}) \to [0,1]$. **Definition 34**. *A fractional operation $\omega \in {\mathscr F}_C^{(\ell)}$ *improves* a $k$-ary weighted relation $R \in {\mathscr R}^{(k)}_C$ if for all $a^1,\dots,a^{\ell} \in C^k$ $$E := E_\omega[f \mapsto R(f(a^1,\dots,a^{\ell}))]$$ exists and $$\begin{aligned} E \leq \frac{1}{\ell} \sum_{j = 1}^{\ell} R(a^j). \label{eq:fpol}\end{aligned}$$* Note that [\[eq:fpol\]](#eq:fpol){reference-type="eqref" reference="eq:fpol"} has the interpretation that the expected value of $R(f(a^1,\dots,a^\ell))$ is at most the average of the values $R(a^1),\dots,R(a^\ell)$. Also note that if $R$ is a classical relation improved by a fractional operation $\omega$ and $\omega(f) > 0$ for some $f \in {\mathscr O}^{(\ell)}_C$, then $f$ must preserve $R$ in the usual sense. It follows from Lemma [Lemma 23](#lem:E-exists){reference-type="ref" reference="lem:E-exists"} that if $\mathop{\mathrm{Aut}}(C; R)$ is oligomorphic, then $E_\omega[f \mapsto R(f(a^1, \dots, a^{\ell}))]$ always exists and is greater than $-\infty$. **Definition 35** (fractional polymorphism). *If $\omega$ improves every weighted relation in $\Gamma$, then $\omega$ is called a *fractional polymorphism of $\Gamma$*; the set of all fractional polymorphisms of $\Gamma$ is denoted by $\mathop{\mathrm{fPol}}(\Gamma)$.* **Remark 36**.
*A fractional polymorphism of arity $\ell$ of a valued structure $\Gamma$ might also be viewed as a fractional homomorphism from a specific $\ell$-th pp-power of $\Gamma$ to $\Gamma$, which we denote by $\Gamma^{\ell}$: if $C$ is the domain and $\tau$ the signature of $\Gamma$, then the domain of $\Gamma^{\ell}$ is $C^{\ell}$, and for every $R \in \tau$ of arity $k$ we have $$R^{\Gamma^{\ell}}((a^1_1,\dots,a^1_{\ell}),\dots,(a^k_1,\dots,a^k_{\ell})) := \frac{1}{\ell} \sum_{i=1}^\ell R^{\Gamma}(a^1_i,\dots,a^{k}_i).$$* **Example 37**. *Let $\pi^\ell_i \in {\mathscr O}^{(\ell)}_C$ be the $i$-th projection of arity $\ell$, which is given by $\pi^\ell_i(x_1,\dots,x_\ell) = x_i$. The fractional operation $\mathop{\mathrm{Id}}_\ell$ of arity $\ell$ such that $\mathop{\mathrm{Id}}_\ell(\pi^{\ell}_{i}) = \frac{1}{\ell}$ for every $i \in \{1,\dots,\ell\}$ is a fractional polymorphism of every valued structure with domain $C$.* **Example 38**. *Let $\Gamma$ be a valued structure and $\alpha\in \mathop{\mathrm{Aut}}(\Gamma)$. The fractional operation $\omega \in \mathscr{F}_C^{(1)}$ defined by $\omega(\alpha)=1$ is a fractional polymorphism of $\Gamma$.* Let ${\mathscr C} \subseteq {\mathscr F}_C$. We write ${\mathscr C}^{(\ell)}$ for ${\mathscr C} \cap {\mathscr F}^{(\ell)}_C$ and $\mathop{\mathrm{Imp}}({\mathscr C})$ for the set of weighted relations that are improved by every fractional operation in ${\mathscr C}$. **Lemma 39**. *Let $R \in {\mathscr R}^{(k)}_C$ and let $\Gamma$ be a valued structure with domain $C$ and an automorphism $\alpha \in \mathop{\mathrm{Aut}}(\Gamma)$ which does not preserve $R$. Then $R \notin \mathop{\mathrm{Imp}}(\mathop{\mathrm{fPol}}(\Gamma)^{(1)})$.* *Proof.* Since $\alpha$ does not preserve $R$, there exists $a \in C^k$ such that $R(a) \neq R(\alpha(a))$. If $R(\alpha(a)) > R(a)$, then let $\omega \in {\mathscr F}_C^{(1)}$ be the fractional operation defined by $\omega(\alpha) = 1$. 
Then $\omega$ improves every weighted relation of $\Gamma$ and does not improve $R$. If $R(\alpha(a)) < R(a)$, then the fractional polymorphism $\omega$ of $\Gamma$ given by $\omega(\alpha^{-1}) = 1$ does not improve $R$. ◻ Parts of the arguments in the proof of the following lemma can be found in the proof of [@ViolaThesis Lemma 7.2.1]; however, note that the author works with a more restrictive notion of fractional operation, so we cannot cite this result. **Lemma 40**. *For every valued $\tau$-structure $\Gamma$ over a countable domain $C$ we have $$\langle \Gamma \rangle \subseteq \mathop{\mathrm{Imp}}(\mathop{\mathrm{fPol}}(\Gamma)).$$* *Proof.* Let $\omega \in \mathop{\mathrm{fPol}}(\Gamma)^{(\ell)}$. By definition, $\omega$ improves every weighted relation $R$ of $\Gamma$. It is clear that $\omega$ also improves $\phi_{\emptyset}$. To see that $\omega$ improves $\phi_=$, let $a^1,\dots,a^{\ell} \in C^2$. Note that either $a^i_1 = a^i_2$ for every $i \in \{1,\dots,\ell\}$, in which case $f(a^1_1,\dots,a^{\ell}_1)=f(a^1_2,\dots,a^{\ell}_2)$ for every $f \in {\mathscr O}_C^{(\ell)}$, and hence $$E_\omega[f \mapsto \phi_=(f(a^1,\dots,a^{\ell}))] = 0 = \frac{1}{\ell} \sum_{j = 1}^{\ell} \phi_=(a^j),$$ or $a^i_1 \neq a^i_2$ for some $i \in \{1,\dots,\ell\}$, in which case $\frac{1}{\ell} \sum_{j = 1}^{\ell} \phi_=(a^j) = \infty$ and the inequality in ([\[eq:fpol\]](#eq:fpol){reference-type="ref" reference="eq:fpol"}) is again satisfied. The statement is also clear for weighted relations obtained from weighted relations in $\Gamma$ by non-negative scaling and addition of constants, since these operations preserve the inequality in ([\[eq:fpol\]](#eq:fpol){reference-type="ref" reference="eq:fpol"}) by the linearity of expectation. Let $\phi(x_1,\dots,x_k, y_1, \dots, y_n)$ be a $\tau$-expression.
We need to show that the fractional operation $\omega$ improves the $k$-ary weighted relation $R$ defined for every $a\in C^k$ by $R(a)= \inf_{b\in C^n} \phi^{\Gamma}(a,b)$. Since $\phi$ is a $\tau$-expression, there are $R_i\in \tau$ such that $$\phi(x_1,\dots,x_k, y_1, \dots, y_n)=\sum_{i=1}^m R_i(x_{p_1^i}, \dots, x_{p_{k_i}^i},y_{q_1^i}, \dots, y_{q_{n_i}^i})$$ for some $k_i, n_i \in \mathbb N$, $p_1^i, \dots, p_{k_i}^i\in \{1, \dots, k\}$ and $q_1^i, \dots, q_{n_i}^i\in \{1, \dots, n\}$. In this paragraph, if $v=(v_1, \dots, v_t) \in C^t$ and $i_1,\dots, i_s \in \{1, \dots, t\}$, we will write $v_{i_1, \dots, i_s}$ for the tuple $(v_{i_1}, \dots, v_{i_s})$ for short. Let $a^1, \dots, a^{\ell} \in C^k$. Let $\varepsilon > 0$ be a rational number. From the definition of an infimum, for every $j \in \{1, \dots, \ell\}$, there is $b^j\in C^n$ such that $$R(a^j) \leq \phi(a^j, b^j) < R(a^j)+ \varepsilon.$$ Moreover, for every $f \in {\mathscr O}_C^{(\ell)}$, $$R(f(a^1, \dots, a^{\ell})) \leq \phi(f(a^1, \dots, a^{\ell}), f(b^1, \dots, b^{\ell})).$$ By linearity and monotonicity of expectation, we obtain $$\begin{aligned} E_\omega[f \mapsto R(f(a^1,\dots,a^{\ell}))] & \leq E_\omega[f \mapsto \phi(f(a^1,\dots,a^{\ell}), f(b^1, \dots, b^{\ell}))] \\ &=E_\omega[f \mapsto \sum_{i=1}^{m} R_i((f(a^1,\dots,a^{\ell}))_{p_1^i, \dots, p_{k_i}^i}, (f(b^1, \dots, b^{\ell}))_{q_1^i, \dots, q_{n_i}^i})]\\ &= \sum_{i=1}^{m} E_\omega[f \mapsto R_i((f(a^1,\dots,a^{\ell}))_{p_1^i, \dots, p_{k_i}^i}, (f(b^1, \dots, b^{\ell}))_{q_1^i, \dots, q_{n_i}^i})]. 
\end{aligned}$$ Since $\omega$ improves $R_i$ for every $i\in \{1, \dots, m\}$, the last row of the inequality above is at most $$\begin{aligned} \sum_{i=1}^{m} \frac{1}{\ell} \sum_{j=1}^{\ell} R_i(a^j_{p_1^i, \dots, p_{k_i}^i}, b^j_{q_1^i, \dots, q_{n_i}^i}) &= \frac{1}{\ell} \sum_{j=1}^{\ell} \sum_{i=1}^{m} R_i(a^j_{p_1^i, \dots, p_{k_i}^i}, b^j_{q_1^i, \dots, q_{n_i}^i}) \\ &=\frac{1}{\ell} \sum_{j=1}^{\ell} \phi(a^j, b^j) < \frac{1}{\ell} \sum_{j=1}^{\ell} R(a^j) + \varepsilon. \end{aligned}$$ Since $\varepsilon$ was arbitrary, it follows that $\omega$ improves $R$. Finally, we prove that $\mathop{\mathrm{Imp}}(\mathop{\mathrm{fPol}}(\Gamma))$ is closed under $\mathop{\mathrm{Feas}}$ and $\mathop{\mathrm{Opt}}$. Let $R \in \tau$ be of arity $k$ and define $S= \mathop{\mathrm{Feas}}(R)$ and $T = \mathop{\mathrm{Opt}}(R)$. We aim to show that $S, T \in \mathop{\mathrm{Imp}}(\mathop{\mathrm{fPol}}(\Gamma))$. Let $s^1,\dots,s^{\ell} \in C^k$. If $S(s^i) = \infty$ for some $i \in \{1,\dots,\ell\}$, then $\frac{1}{\ell} \sum_{j = 1}^{\ell} S(s^j) = \infty$ and hence $\omega$ satisfies [\[eq:fpol\]](#eq:fpol){reference-type="eqref" reference="eq:fpol"} (with $R$ replaced by $S$) for the tuples $s^1,\dots,s^{\ell}$. So suppose that $S(s^i) = 0$ for all $i \in \{1,\dots,\ell\}$, i.e., $R(s^i)$ is finite for all $i$. Since $\omega$ improves $R$ it holds that $$\begin{aligned} \label{eq:imp-R} E_\omega[f \mapsto R(f(s^1,\dots,s^{\ell}))] \leq \frac{1}{\ell} \sum_{j = 1}^{\ell} R(s^j) \end{aligned}$$ and hence the expected value on the left-hand side is finite as well. 
By [\[eq:exp-sum\]](#eq:exp-sum){reference-type="eqref" reference="eq:exp-sum"} in Appendix [10](#sect:lebesgue){reference-type="ref" reference="sect:lebesgue"}, $$\begin{aligned} \label{eq:E-expr} E_\omega[f \mapsto R(f(s^1,\dots,s^{\ell}))] = \sum_{t \in C^k} R(t) \omega({\mathscr S}_{s^1,\dots,s^{\ell}, t}),\end{aligned}$$ which implies that $R(t)$ is finite and $S(t)=0$ unless $\omega({\mathscr S}_{s^1,\dots,s^{\ell}, t})=0$. Consequently (again by [\[eq:exp-sum\]](#eq:exp-sum){reference-type="eqref" reference="eq:exp-sum"}), $$E_\omega[f \mapsto S(f(s^1,\dots,s^{\ell}))] = \sum_{t \in C^k} S(t) \omega({\mathscr S}_{s^1,\dots,s^{\ell}, t}) = 0 = \frac{1}{\ell} \sum_{j = 1}^{\ell} S(s^j).$$ It follows that $\omega$ improves $S$. Moving to the weighted relation $T$, we may again assume without loss of generality that $T(s^i)=0$ for every $i \in \{1, \dots, \ell\}$ as we did for $S$. This means that $c := R(s^1) = \cdots = R(s^\ell) \leq R(b)$ for every $b \in C^k$. Therefore, the right-hand side in [\[eq:imp-R\]](#eq:imp-R){reference-type="eqref" reference="eq:imp-R"} is equal to $c$ and by combining it with [\[eq:E-expr\]](#eq:E-expr){reference-type="eqref" reference="eq:E-expr"} we get $$\begin{aligned} \sum_{t \in C^k} R(t) \omega({\mathscr S}_{s^1,\dots,s^{\ell}, t}) \leq c.\end{aligned}$$ Together with the assumption that $R(t) \geq c$ for all $t \in C^k$ and $\omega$ being a probability distribution we obtain that $R(t) = c$ and $T(t) = 0$ unless $\omega({\mathscr S}_{s^1,\dots,s^{\ell}, t}) = 0$, and hence $$E_\omega[f \mapsto T(f(s^1,\dots,s^{\ell}))] = \sum_{t \in C^k} T(t) \omega({\mathscr S}_{s^1,\dots,s^{\ell}, t}) = 0 = \frac{1}{\ell} \sum_{j = 1}^{\ell} T(s^j).$$ This concludes the proof that $\omega$ improves $T$. ◻ **Example 41**. *Let $<$ be the binary relation on $\{0,1\}$ and $\Gamma_<$ the valued structure from Example [Example 2](#expl:vs-mc){reference-type="ref" reference="expl:vs-mc"}. 
By definition, $\mathop{\mathrm{Opt}}(<) \in \langle \Gamma_< \rangle$. Denote the minimum operation on $\{0,1\}$ by $\min$ and let $\omega$ be a binary fractional operation defined by $\omega(\min)=1$. Note that $\omega \in \mathop{\mathrm{fPol}}(\{0,1\}; \mathop{\mathrm{Opt}}(<))$. However, $$< \left(\min \left( \begin{pmatrix}0\\1\end{pmatrix} , \begin{pmatrix}0\\0\end{pmatrix} \right) \right)= \, {<}(0, 0) = 1,$$ while $(1/2) \cdot {<}(0,1) + (1/2) \cdot {<}(0,0)= 1/2$. This shows that $\omega$ does not improve $<$ and hence $< \; \not \in \langle (\{0,1\}; \mathop{\mathrm{Opt}}(<)) \rangle$ by Lemma [Lemma 40](#lem:easy){reference-type="ref" reference="lem:easy"}.* # Polynomial-time Tractability via Canonical Fractional Polymorphisms {#sect:tract} In this section we make use of a tractability result for finite-domain VCSPs of Kolmogorov, Krokhin, and Rolínek [@KolmogorovKR17], which itself builds on earlier work of Kolmogorov, Thapper, and Živný [@KolmogorovThapperZivny15; @ThapperZivny13]. **Definition 42**. *An operation $f \colon C^{\ell} \to C$ for $\ell \geq 2$ is called *cyclic* if $$f(x_1,\dots,x_{\ell}) = f(x_2,\dots,x_{\ell},x_1)$$ for all $x_1,\dots,x_{\ell} \in C$. Let $\mathop{\mathrm{Cyc}}_C^{(\ell)} \subseteq {\mathscr O}_C^{(\ell)}$ be the set of all operations on $C$ of arity $\ell$ that are cyclic.* If $G$ is a permutation group on a set $C$, then $\overline G$ denotes the closure of $G$ in the space of functions from $C \to C$ with respect to the topology of pointwise convergence. Note that $\overline G$ might contain some operations that are not surjective, but if $G = \mathop{\mathrm{Aut}}({\mathfrak B})$ for some structure ${\mathfrak B}$, then all operations in $\overline G$ are still embeddings of ${\mathfrak B}$ into ${\mathfrak B}$ that preserve all first-order formulas. **Definition 43**. *Let $G$ be a permutation group on the set $C$.
*An operation $f \colon C^{\ell} \to C$ is called *pseudo cyclic with respect to $G$* if there are $e_1,e_2 \in \overline G$ such that for all $x_1,\dots,x_{\ell} \in C$ $$e_1(f(x_1,\dots,x_{\ell})) = e_2(f(x_2,\dots,x_{\ell},x_1)).$$ Let $\mathop{\mathrm{PC}}_G^{(\ell)} \subseteq {\mathscr O}_C^{(\ell)}$ be the set of all operations on $C$ of arity $\ell$ that are pseudo cyclic with respect to $G$.* Note that $\mathop{\mathrm{PC}}_G^{(\ell)} \in B({\mathscr O}^{(\ell)}_C)$. Indeed, the complement can be written as a countable union of sets of the form ${\mathscr S}_{a^1,\dots,a^{\ell},b} \cap {\mathscr S}_{a^2,\dots,a^{\ell},a^1,c}$ where the tuples $b$ and $c$ lie in different orbits with respect to $G$. **Definition 44**. *Let $G$ be a permutation group with domain $C$. An operation $f \colon C^{\ell} \to C$ for $\ell \geq 2$ is called *canonical with respect to $G$* if for all $k \in {\mathbb N}$ and $a^1,\dots,a^{\ell} \in C^k$ the orbit of the $k$-tuple $f(a^1,\dots,a^{\ell})$ only depends on the orbits of $a^1,\dots,a^{\ell}$ with respect to $G$. Let $\mathop{\mathrm{Can}}_G^{(\ell)} \subseteq {\mathscr O}_C^{(\ell)}$ be the set of all operations on $C$ of arity $\ell$ that are canonical with respect to $G$.* Note that $\mathop{\mathrm{Can}}_G^{(\ell)} \in B({\mathscr O}^{(\ell)}_C)$, since the complement is a countable union of sets of the form ${\mathscr S}_{a^1,\dots,a^{\ell},b} \cap {\mathscr S}_{c^1,\dots,c^{\ell},d}$ where for all $i \in \{1,\dots,\ell\}$ the tuples $a^i$ and $c^i$ lie in the same orbit with respect to $G$, but $b$ and $d$ do not. **Remark 45**. *Note that if $h$ is an operation over $C$ of arity $\ell$ which is canonical with respect to $G$, then $h$ induces for every $k \in {\mathbb N}$ an operation $h^*$ of arity $\ell$ on the orbits of $k$-tuples of $G$. Note that if $h$ is pseudo cyclic with respect to $G$, then $h^*$ is cyclic.* **Definition 46**.
*A fractional operation $\omega$ is called *pseudo cyclic with respect to $G$* if for every $A \in B({\mathscr O}_C^{(\ell)})$ we have $\omega(A) = \omega(A \cap \mathop{\mathrm{PC}}^{(\ell)}_{G})$. Canonicity with respect to $G$ and cyclicity for fractional operations are defined analogously.* We refer to Section [8.3](#sect:inf-dual){reference-type="ref" reference="sect:inf-dual"} for examples of concrete fractional polymorphisms of valued structures $\Gamma$ that are cyclic and canonical with respect to $\mathop{\mathrm{Aut}}(\Gamma)$. If the reference to a specific permutation group $G$ is clear, then for cyclicity and canonicity we omit the specification 'with respect to $G$'. We will prove below that canonical pseudo cyclic fractional polymorphisms imply polynomial-time tractability of the corresponding VCSP. We prove this result by reducing to tractable VCSPs over finite domains. Motivated by Theorem [Theorem 10](#thm:fb-NP){reference-type="ref" reference="thm:fb-NP"} and the infinite-domain tractability conjecture from [@BPP-projective-homomorphisms], we state these results for valued structures related to finitely bounded homogeneous structures. **Definition 47** ($\Gamma^*_m$). *Let $\Gamma$ be a valued structure with signature $\tau$ such that $\mathop{\mathrm{Aut}}(\Gamma)$ contains the automorphism group of a homogeneous structure ${\mathfrak B}$ with a finite relational signature. Let $m$ be at least as large as the maximal arity of the relations of ${\mathfrak B}$.
Let $\Gamma^*_m$ be the following valued structure.* - *The domain of $\Gamma^*_m$ is the set of orbits of $m$-tuples of $\mathop{\mathrm{Aut}}(\Gamma)$.* - *For every $R \in \tau$ of arity $k \leq m$ the signature of $\Gamma_m^*$ contains a unary relation symbol $R^*$, which denotes in $\Gamma_m^*$ the unary weighted relation that returns on the orbit of an $m$-tuple $t=(t_1, \dots, t_m)$ the value of $R^{\Gamma}(t_1,\dots,t_k)$ (this is well-defined, because the value is the same for all representatives $t$ of the orbit).* - *For every $p \in \{1,\dots,m\}$ and $i,j \colon \{1,\dots,p\} \to \{1,\dots,m\}$ there exists a binary relation $C_{i,j}$ which returns $0$ for two orbits of $m$-tuples $O_1$ and $O_2$ if for every $s \in O_1$ and $t \in O_2$ we have that $(s_{i(1)},\dots,s_{i(p)})$ and $(t_{j(1)},\dots,t_{j(p)})$ lie in the same orbit of $p$-tuples of $\mathop{\mathrm{Aut}}(\Gamma)$, and returns $\infty$ otherwise.* Note that $\mathop{\mathrm{Aut}}({\mathfrak B})$ and hence $\mathop{\mathrm{Aut}}(\Gamma)$ has finitely many orbits of $k$-tuples for every $k \in \mathbb N$ and therefore $\Gamma^*_m$ has a finite domain. The following generalises a known reduction for CSPs from [@BodMot-Unary]. **Theorem 48**. *Let $\Gamma$ be a valued structure such that $\mathop{\mathrm{Aut}}(\Gamma)$ equals the automorphism group of a finitely bounded homogeneous structure ${\mathfrak B}$. Let $r$ be the maximal arity of the relations of ${\mathfrak B}$ and the weighted relations in $\Gamma$, let $v$ be the maximal number of variables that appear in a single conjunct of the universal sentence $\psi$ that describes the age of ${\mathfrak B}$, and let $m \geq \max(r+1,v,3)$. Then there is a polynomial-time reduction from $\mathop{\mathrm{VCSP}}(\Gamma)$ to $\mathop{\mathrm{VCSP}}(\Gamma^*_m)$.* *Proof.* Let $\tau$ be the signature of $\Gamma$ and $\tau^*$ be the signature of $\Gamma^*_m$. 
Let $\phi$ be an instance of $\mathop{\mathrm{VCSP}}(\Gamma)$ with threshold $u$ and let $V$ be the variables of $\phi$. Create a variable $y({\bar x})$ for every $\bar x =(x_1, \dots, x_m) \in V^m$. For every summand $R(x_1,\dots,x_k)$ of $\phi$ we create a summand $R^*(y(x_1, \dots, x_k, \dots, x_k))$; this makes sense since $m\geq r$. For every $\bar{x}, \bar{x}' \in V^m$, $p \in \{1,\dots,m\}$, and $i,j \colon \{1,\dots,p\} \to \{1,\dots,m\}$, add the summand $C_{i,j}(y(\bar x),y(\bar x'))$ if $(x_{i(1)},\dots, x_{i(p)}) = (x'_{j(1)},\dots, x'_{j(p)})$; we will refer to these as *compatibility constraints*. Let $\phi^*$ be the resulting $\tau^*$-expression. Clearly, $\phi^*$ can be computed from $\phi$ in polynomial time. Suppose first that $(\phi,u)$ has a solution; it will be notationally convenient to view the solution as a function $f$ from the variables of $\phi$ to the elements of $\Gamma$ (rather than a tuple). We claim that the map $f^*$ which maps $y(\bar x)$ to the orbit of $f(\bar x)$ in $\mathop{\mathrm{Aut}}(\Gamma)$ is a solution for $(\phi^*,u)$. And indeed, each of the summands involving a symbol $C_{i,j}$ evaluates to $0$, and $(\phi^*)^{\Gamma^*_m}$ equals $\phi^{\Gamma}$. Now suppose that $(\phi^*,u)$ has a solution $f^*$. To construct a solution $f$ to $(\phi,u)$, we first define an equivalence relation $\sim$ on $V$. For $x_1,x_2 \in V$, define $x_1 \sim x_2$ if a (equivalently: every) tuple $t$ in $f^*(y(x_1,x_2,\dots,x_2))$ satisfies $t_1=t_2$. Clearly, $\sim$ is reflexive and symmetric. To verify that $\sim$ is transitive, suppose that $x_1 \sim x_2$ and $x_2 \sim x_3$. In the following we use that $m \geq 3$. Let $i$ be the identity map on $\{1,2\}$, let $j \colon \{1,2\} \to \{2,3\}$ be given by $x \mapsto x+1$, and let $j' \colon \{1,2\} \to \{1,3\}$ be given by $j'(1) = 1$ and $j'(2) = 3$.
Then $\phi^*$ contains the conjuncts $$\begin{aligned} & C_{i, i}(y(x_1, x_2, x_2, \dots, x_2), y(x_1,x_2,x_3,\dots,x_3)),\\ & C_{i, j}(y(x_2,x_3,x_3,\dots,x_3),y(x_1,x_2,x_3,\dots,x_3)), \\ & C_{i, j'}(y(x_1,x_3,x_3,\dots,x_3),y(x_1,x_2,x_3,\dots,x_3)). \end{aligned}$$ Let $t$ be a tuple from $f^*(y(x_1,x_2,x_3,\dots,x_3))$. Then it follows from the conjuncts with the relation symbols $C_{i,i}$ and $C_{i,j}$ that $t_1=t_2$ and $t_2=t_3$, and therefore $t_1=t_3$. Thus we obtain from the conjunct with $C_{i, j'}$ that $x_1 \sim x_3$. **Claim 0.** For all equivalence classes $[x_1]_\sim,\dots,[x_m]_\sim$, $t \in f^*(y(x_1,\dots,x_m))$, $S \in \sigma$ of arity $k$, and $j \colon \{1, \dots, k\} \rightarrow \{1, \dots, m\}$, whether ${\mathfrak B}\models S(t_{j(1)},\dots,t_{j(k)})$ does not depend on the choice of the representatives $x_{1},\dots,x_{m}$. It suffices to show this statement if we choose another representative $x'_i$ for $[x_i]_\sim$ for some $i \in \{1, \dots, m\}$, because the general case then follows by induction. Suppose that for every $t \in f^*(y(x_1,\dots,x_m))$ we have ${\mathfrak B}\models S(t_{j(1)},\dots,t_{j(k)})$; we have to show that for every $t' \in f^*(y(x_1,\dots,x_{i-1},x_i',x_{i+1},\dots,x_m))$ we have ${\mathfrak B}\models S(t'_{j(1)},\dots,t'_{j(k)})$. If $i \notin \{j(1),\dots,j(k)\}$, then $\phi^*$ contains $$C_{j,j}(y(x_1,\dots,x_m),y(x_1,\dots,x_{i-1},x_i',x_{i+1},\dots,x_m))$$ and hence ${\mathfrak B}\models S(t'_{j(1)},\dots,t'_{j(k)})$. Now suppose that $i \in \{j(1),\dots,j(k)\}$; for the sake of notation we suppose that $i=j(1)$. By the definition of $\sim$, and since $x_{j(1)} \sim x'_{j(1)}$, every tuple $t'' \in f^*(y(x_{j(1)},x_{j(1)}',\dots,x_{j(1)}'))$ satisfies $t''_1 = t''_2$. Let $\tilde t$ be a tuple from $$f^*(y(x_{j(1)},\dots,x_{j(k)},x'_{j(1)},\dots,x'_{j(1)})).$$ (Here we use that $m \geq r+1$.) 
- ${\mathfrak B}\models S(\tilde t_{1},\dots,\tilde t_{k})$ because of a compatibility constraint between $y(x_1,\dots,x_m)$ and $y(x_{j(1)}$, $\dots,x_{j(k)},x'_{j(1)},\dots,x'_{j(1)})$ in $\phi^*$; - $\tilde t_1 = \tilde t_{k+1}$ because of a compatibility constraint between $y(x_{j(1)},\dots,x_{j(k)},x'_{j(1)},\dots,x'_{j(1)})$ and $y(x_{j(1)},x_{j(1)}',\dots,x_{j(1)}')$ and $x_{j(1)}\sim x_{j(1)}'$ in $\phi^*$; - hence, ${\mathfrak B}\models S(\tilde t_{k+1},\tilde t_2,\dots,\tilde t_k)$; - ${\mathfrak B}\models S(t'_{j(1)},t'_{j(2)},\dots,t'_{j(k)})$ because of a compatibility constraint between the variables $y(x_{j(1)},\dots,x_{j(k)}$, $x'_{j(1)},\dots,x'_{j(1)})$ and $y(x_1,\dots,x_{i-1},x'_i,x_{i+1},\dots,x_m)$ in $\phi^*$: namely, if $j' \colon \{1, \dots, k\} \to \{1, \dots, m \}$ is the map that coincides with the identity map except that $j'(1) := k+1$, then $\phi^*$ contains $$C_{j,j'} \big (y(x_1,\dots,x_{i-1},x'_{i},x_{i+1},\dots,x_m),y(x_{j(1)},\dots,x_{j(k)},x'_{j(1)},\dots,x'_{j(1)}) \big ).$$ This concludes the proof of Claim 0. Now we can define a structure ${\mathfrak C}$ in the signature $\sigma$ of ${\mathfrak B}$ on the equivalence classes of $\sim$. If $S \in \sigma$ has arity $k$, $j_1,\dots,j_k \in \{1,\dots,m\}$, and $[x_1]_\sim,\dots,[x_{m}]_\sim$ are equivalence classes of $\sim$ such that the tuples $t$ in $f^*(y(x_1,\dots,x_m))$ satisfy $S^{{\mathfrak B}}(t_{j_1},\dots,t_{j_k})$ for some representatives $x_1, \dots x_m$ (equivalently, for all representatives, by Claim 0), then add $([x_{j_1}]_\sim,\dots,[x_{j_k}]_\sim)$ to $S^{{\mathfrak C}}$. No other tuples are contained in the relations of ${\mathfrak C}$. **Claim 1.** If $[x_1]_\sim,\dots,[x_m]_\sim$ are equivalence classes of $\sim$, and $t \in f^*(y(x_1,\dots,x_m))$, then $[x_i]_{\sim} \mapsto t_i$, for $i \in \{1,\dots,m\}$, is an isomorphism between a substructure of ${\mathfrak C}$ and a substructure of ${\mathfrak B}$ for any choice of representatives $x_1, \dots, x_m$. 
First note that $[x_i]_{\sim} = [x_j]_{\sim}$ if and only if $t_i = t_j$, so the map is well-defined and bijective. Let $S \in \sigma$ be of arity $k$ and $j \colon \{1, \dots, k\} \to \{1, \dots, m \}$. If ${\mathfrak B}\models S(t_{j(1)},\dots,t_{j(k)})$, then ${\mathfrak C}\models S([x_{j(1)}]_\sim,\dots,[x_{j(k)}]_\sim)$ by the definition of ${\mathfrak C}$. Conversely, suppose that ${\mathfrak C}\models S([x_{j(1)}]_\sim,\dots,[x_{j(k)}]_\sim)$. By Claim 0 and the definition of ${\mathfrak C}$, there is $t' \in f^*(y(x_1,\dots,x_m))$ such that ${\mathfrak B}\models S(t'_{j(1)},\dots,t'_{j(k)})$. Since $f^*(y(x_1,\dots,x_m))$ is an orbit of $\mathop{\mathrm{Aut}}({\mathfrak B})$, we have ${\mathfrak B}\models S(t_{j(1)},\dots,t_{j(k)})$ as well. **Claim 2.** ${\mathfrak C}$ embeds into ${\mathfrak B}$. It suffices to verify that ${\mathfrak C}$ satisfies each conjunct of the universal sentence $\psi$. Let $\psi'(x_1,\dots,x_q)$ be such a conjunct, and let $[c_1]_\sim,\dots,[c_q]_\sim$ be elements of ${\mathfrak C}$. Let $t$ be a tuple from the orbit $f^*(y(c_1,\dots,c_q,\dots,c_q))$ of $\mathop{\mathrm{Aut}}(\Gamma)$; this makes sense since $m \geq v$. Since $t_1, \dots, t_q$ are elements of ${\mathfrak B}$, the tuple $(t_1,\dots,t_q)$ satisfies $\psi'$. Claim 1 then implies that $([c_1]_\sim,\dots,[c_q]_\sim)$ satisfies $\psi'$. Let $e$ be an embedding of ${\mathfrak C}$ into ${\mathfrak B}$. For every $x \in V$, define $f(x)=e([x]_\sim)$. Note that for every summand $R(x_1, \dots, x_k)$ in $\phi$ and $t \in f^*(y(x_1, \dots, x_k, \dots, x_k))$, we have $$R^*(f^*(y(x_1, \dots, x_k, \dots, x_k))) = R(t_1, \dots, t_k) = R(e([x_1]_\sim), \dots, e([x_k]_\sim)) = R(f(x_1), \dots, f(x_k)),$$ where the middle equality follows from $t_i \mapsto e([x_i]_\sim)$ being a partial isomorphism of ${\mathfrak B}$ by Claims 1 and 2, which by the homogeneity of ${\mathfrak B}$ extends to an automorphism of ${\mathfrak B}$ and therefore also to an automorphism of $\Gamma$.
Since $f^*$ is a solution to $(\phi^*,u)$, it follows from the construction of $\phi^*$ that $f$ is a solution to $(\phi,u)$. ◻ Let $G$ be a permutation group that contains the automorphism group of a homogeneous structure with a finite relational signature of maximal arity $k$. A fractional operation $\omega$ over the domain $C$ of $\Gamma$ of arity $\ell$ which is canonical with respect to $G$ induces a fractional operation $\omega^*$ on the orbits of $k$-tuples of $G$, given by $$\omega^*(A) := \omega \big ( \{f \in \mathop{\mathrm{Can}}^{(\ell)}_G \mid f^* \in A\} \big ),$$ for every subset $A$ of the set of operations of arity $\ell$ on the finite domain of $\Gamma^*_k$ (all such subsets are Borel). Note that $\{f \in \mathop{\mathrm{Can}}^{(\ell)}_G \mid f^* \in A\}$ is a measurable subset of $\mathscr O_C^{(\ell)}$. Also note that if $\omega$ is pseudo cyclic, then $\omega^*$ is cyclic. Statements about the fractional polymorphisms of $\Gamma^*_m$ lift back to statements about the fractional polymorphisms of $\Gamma$ via the following useful lemma. **Lemma 49**. *Let $\Gamma$ be a valued structure such that $\mathop{\mathrm{Aut}}(\Gamma)$ equals the automorphism group $G$ of a finitely bounded homogeneous structure and let $m$ be as in Theorem [Theorem 48](#thm:red-fin){reference-type="ref" reference="thm:red-fin"}. Let $\nu \in \mathop{\mathrm{fPol}}(\Gamma^*_m)$ be cyclic. Then there exists $\omega \in \mathop{\mathrm{fPol}}(\Gamma)$ which is canonical with respect to $G$ such that $\omega^* = \nu$.* *Proof.* Let $C$ be the domain of $\Gamma$, let $D$ be the domain of $\Gamma_m^*$, and let $\ell$ be the arity of $\nu$. Suppose that $\nu(f) > 0$ for some operation $f$. Then there exists a function $g \colon C^{\ell}\to C$ which is canonical with respect to $G$ such that $g^* = f$ by Lemma 4.9 in [@BodMot-Unary] (also see Lemma 10.5.12 in [@Book]). 
For every such $f$, choose $g$ such that $g^*=f$ and define $\omega(g) := \nu(f)$ and $\omega(h) := 0$ for all other $h \in {\mathscr O}_C^{(\ell)}$. Since the domain of $\Gamma^*_m$ is finite, this correctly defines a fractional operation $\omega$ of the same arity $\ell$ as $\nu$. It also improves every weighted relation $R$ of $\Gamma$: if $R$ has arity $k$, and $a^1,\dots,a^{\ell} \in C^k$, then $$\begin{aligned} E_\omega[g \mapsto R(g(a^1,\dots,a^{\ell}))] & = \sum_{g \in {\mathscr O}_C^{(\ell)}} \omega(g) R(g(a^1,\dots,a^{\ell})) \\ & = \sum_{f \in {\mathscr O}^{(\ell)}_D} \nu(f) R^*(f(a^1,\dots,a^{\ell})_1,\dots,f(a^1,\dots,a^{\ell})_k,\dots,f(a^1,\dots,a^{\ell})_k) \\ & \leq \frac{1}{\ell} \sum_{j = 1}^{\ell} R^*(a^j_1,\dots,a^j_k,\dots,a^j_k) \\ & = \frac{1}{\ell} \sum_{j = 1}^{\ell} R(a^j_1,\dots,a^j_k). \qedhere\end{aligned}$$ ◻ **Lemma 50**. *Let $G$ be the automorphism group of a homogeneous structure ${\mathfrak B}$ with a relational signature of maximal arity $m$. If $\omega \in {\mathscr F}_C^{(\ell)}$ is canonical with respect to $G$ such that $\omega^*$ (defined on the orbits of $m$-tuples of $G$) is cyclic, then $\omega$ is pseudo cyclic with respect to $G$.* *Proof.* We use the fact that if $f$ is canonical with respect to $G$ such that $f^*$ (defined on the orbits of $m$-tuples) is cyclic, then $f$ is pseudo cyclic (see the proof of Proposition 6.6 in [@BPP-projective-homomorphisms]; also see Lemma 10.1.5 in [@Book]). Let $C$ be the domain of $\Gamma$ and let $a^1,\dots,a^\ell,b \in C^m$. 
It suffices to show that $\omega(S_{a^1,\dots,a^{\ell},b} \cap \mathop{\mathrm{PC}}^{(\ell)}_G) = \omega(S_{a^1,\dots,a^{\ell},b}).$ Indeed, $$\begin{aligned} \omega(S_{a^1,\dots,a^{\ell},b}) & = \omega(S_{a^1,\dots,a^{\ell},b} \cap \mathop{\mathrm{Can}}_G^{(\ell)}) && \text{(canonicity of $\omega$)}\\ & = \omega^* \big (\{f^* \mid f \in S_{a^1,\dots,a^{\ell},b} \cap \mathop{\mathrm{Can}}_G^{(\ell)} \} \big ) && \text{(definition of $\omega^*$)}\\ & = \omega^* \big (\{f^* \mid f \in S_{a^1,\dots,a^{\ell},b} \cap \mathop{\mathrm{Can}}_G^{(\ell)}\} \cap \mathop{\mathrm{Cyc}}^{(\ell)}_C \big) && \text{(by assumption)} \\ & = \omega^* \big (\{f^* \mid f \in S_{a^1,\dots,a^{\ell},b} \cap \mathop{\mathrm{Can}}_G^{(\ell)} \cap \mathop{\mathrm{PC}}^{(\ell)}_G\} \big ) && \text{(fact mentioned above and Remark~\ref{rem:can-act})} \\ & = \omega(S_{a^1,\dots,a^{\ell},b} \cap \mathop{\mathrm{Can}}_G^{(\ell)} \cap \mathop{\mathrm{PC}}^{(\ell)}_G) \\ & = \omega(S_{a^1,\dots,a^{\ell},b} \cap \mathop{\mathrm{PC}}^{(\ell)}_G). \qedhere \end{aligned}$$ ◻ ## Fractional Polymorphisms on Finite Domains For studying canonical operations, we can use known results about operations on finite domains. **Definition 51**. *Let $\omega$ be a fractional operation of arity $\ell$ on a finite domain $C$. Then the *support of $\omega$* is the set $$\mathop{\mathrm{Supp}}(\omega) = \{f \in {\mathscr O}^{(\ell)}_C \mid \omega(f) > 0\}.$$ If ${\mathscr F}$ is a set of fractional operations, then $$\mathop{\mathrm{Supp}}(\mathscr F) := \bigcup_{\omega \in {\mathscr F}} \mathop{\mathrm{Supp}}(\omega).$$* Note that a fractional operation $\omega$ on a finite domain is determined by the values $\omega(f)$, $f \in \mathop{\mathrm{Supp}}(\omega)$, in contrast to fractional operations on infinite domains.
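On a finite domain, these notions become finite objects that can be checked by brute force. The following Python sketch (the helper names are ours, not from the text) represents binary operations on $C = \{0,1\}$ as lookup tables and re-verifies the computation of Example 41: the fractional operation with $\omega(\min) = 1$ improves $\mathop{\mathrm{Opt}}(<)$ but not $<$, and its support consists of the single cyclic operation $\min$.

```python
from itertools import product

C = (0, 1)  # a two-element domain; everything here is a toy illustration
INF = float("inf")

def table(f, arity=2):
    """Represent an operation f: C^arity -> C as an explicit lookup table."""
    return {xs: f(*xs) for xs in product(C, repeat=arity)}

MIN = table(lambda x, y: min(x, y))
PROJ1 = table(lambda x, y: x)  # the first binary projection

def is_cyclic(f):
    """A binary operation is cyclic iff f(x, y) = f(y, x) for all x, y."""
    return all(f[(x, y)] == f[(y, x)] for x, y in product(C, repeat=2))

# A fractional operation of arity 2: a probability distribution on operations,
# stored as (weight, table) pairs with positive weights only.
omega = [(1.0, MIN)]  # omega(min) = 1, as in Example 41

def support(om):
    return [f for w, f in om if w > 0]

# The weighted relations "<" and Opt(<) on {0, 1} from Example 41.
LESS = {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 1.0, (1, 1): 1.0}
OPT_LESS = {t: (0.0 if LESS[t] == 0.0 else INF) for t in LESS}

def improves(om, R, k=2):
    """Brute-force check of the improvement inequality for arity 2:
    E_omega[f -> R(f(a1, a2))] <= (R(a1) + R(a2)) / 2 for all a1, a2."""
    for a1, a2 in product(product(C, repeat=k), repeat=2):
        def image(f):  # apply f componentwise to the pair of k-tuples
            return tuple(f[(x, y)] for x, y in zip(a1, a2))
        lhs = sum(w * R[image(f)] for w, f in om)
        if lhs > (R[a1] + R[a2]) / 2:
            return False
    return True
```

Since a fractional operation on a finite domain is determined by its support (Definition 51), the list of `(weight, table)` pairs above carries exactly the same information as $\omega$; here `improves(omega, OPT_LESS)` holds while `improves(omega, LESS)` fails, witnessed by the tuples $(0,1)$ and $(0,0)$.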
Moreover, a fractional polymorphism $\omega$ of a valued structure with a finite domain is cyclic if and only if all operations in its support are cyclic, in accordance with the definitions from [@KozikOchremiak15]. An operation $f \colon C^4 \to C$ is called *Siggers* if $f(a,r,e,a) = f(r,a,r,e)$ for all $a,r,e \in C$. **Lemma 52**. *Let $\Gamma$ and $\Delta$ be valued structures with finite domains that are fractionally homomorphically equivalent.* - *If $\Gamma$ has a cyclic fractional polymorphism, then $\Delta$ has a cyclic fractional polymorphism of the same arity.* - *If $\mathop{\mathrm{Supp}}(\mathop{\mathrm{fPol}}(\Gamma))$ contains a cyclic operation, then $\mathop{\mathrm{Supp}}(\mathop{\mathrm{fPol}}(\Delta))$ contains a cyclic operation of the same arity.* *Proof.* Let $C$ be the domain of $\Gamma$ and let $D$ be the domain of $\Delta$. Let $\nu_1$ be a fractional homomorphism from $\Gamma$ to $\Delta$, and let $\nu_2$ be a fractional homomorphism from $\Delta$ to $\Gamma$. Define $\nu_2'$ as the fractional homomorphism from $\Delta^{\ell}$ to $\Gamma^{\ell}$ as follows. If $f \colon D \to C$, then $f'$ denotes the map from $D^{\ell}$ to $C^{\ell}$ given by $(c_1,\dots,c_{\ell}) \mapsto (f(c_1),\dots,f(c_{\ell}))$. Define $\nu_2'(f') := \nu_2(f)$ and $\nu_2'(h)=0$ for all other $h \colon D^{\ell}\rightarrow C^{\ell}$; since $C$ and $D$ are finite, this defines a fractional operation. Suppose that $\omega$ is a fractional polymorphism of $\Gamma$ of arity $\ell$. Then $\omega' := \nu_1 \circ \omega \circ \nu_2'$ is a fractional homomorphism from $\Delta^{\ell}$ to $\Delta$ (see Lemma [Lemma 24](#lem:compose){reference-type="ref" reference="lem:compose"}), and hence a fractional polymorphism of $\Delta$ (see Remark [Remark 36](#rem:frac-pol-frac-hom){reference-type="ref" reference="rem:frac-pol-frac-hom"}). Note that if $\omega$ is cyclic, then $\omega'$ is cyclic; this shows the first statement of the lemma.
Next, suppose that there exists $\omega \in \mathop{\mathrm{fPol}}^{(\ell)}(\Gamma)$ such that $\mathop{\mathrm{Supp}}(\omega)$ contains a cyclic operation $g$ of arity $\ell$. Since the domain $C$ of $\Gamma$ is finite, there exists a function $f_1 \colon C \to D$ such that $\nu_1(f_1) > 0$ and a function $f_2 \colon D \to C$ such that $\nu_2(f_2)>0$. Note that $f_1 \circ g \circ f_2' \colon D^{\ell}\to D$ is cyclic since $g$ is cyclic, and that $\omega'(f_1 \circ g \circ f_2') > 0$. ◻ The following definition is taken from [@KozikOchremiak15]. **Definition 53** (core). *A valued structure $\Gamma$ over a finite domain is called a *core* if all operations in $\mathop{\mathrm{Supp}}(\mathop{\mathrm{fPol}}(\Gamma))^{(1)}$ are injective.* We have been unable to find an explicit reference for the following proposition, but it should be considered to be known; we also present a proof as a guide to the literature. **Proposition 54**. *Let $\Gamma$ be a valued structure with a finite domain. Then there exists a core valued structure $\Delta$ over a finite domain which is fractionally homomorphically equivalent to $\Gamma$.* *Proof.* Let $C$ be the domain of $\Gamma$. If $\Gamma$ itself is a core then there is nothing to be shown, so we may assume that there exists a non-injective $f \in \mathop{\mathrm{Supp}}(\mathop{\mathrm{fPol}}^{(1)}(\Gamma))$. Since $C$ is finite, we have that $D := f(C) \neq C$; let $\Delta$ be the valued structure with domain $D$ and the same signature as $\Gamma$ whose weighted relations are obtained from the corresponding weighted relations of $\Gamma$ by restriction to $D$. It then follows from Lemma 15 in [@KozikOchremiak15] in combination with Remark [Remark 26](#rem:frac-hom-equiv-fin){reference-type="ref" reference="rem:frac-hom-equiv-fin"} that $\Gamma$ and $\Delta$ are fractionally homomorphically equivalent. 
After applying this process finitely many times, we obtain a core valued structure that is fractionally homomorphically equivalent to $\Gamma$. ◻ The following lemma is a variation of Proposition 39 from [@KozikOchremiak15], which is phrased there only for valued structures $\Gamma$ that are cores and for idempotent cyclic operations. **Lemma 55**. *Let $\Gamma$ be a valued structure over a finite domain. Then $\Gamma$ has a cyclic fractional polymorphism if and only if $\mathop{\mathrm{Supp}}(\mathop{\mathrm{fPol}}(\Gamma))$ contains a cyclic operation.* *Proof.* The forward implication is trivial. We prove the reverse implication. Let $\Delta$ be a core valued structure over a finite domain that is fractionally homomorphically equivalent to $\Gamma$, which exists by Proposition [Proposition 54](#prop:core){reference-type="ref" reference="prop:core"}. By Lemma [Lemma 52](#lem:h1-transfer){reference-type="ref" reference="lem:h1-transfer"}, $\mathop{\mathrm{Supp}}(\mathop{\mathrm{fPol}}(\Delta))$ contains a cyclic operation. Then $\mathop{\mathrm{Supp}}(\mathop{\mathrm{fPol}}(\Delta))$ even contains an idempotent cyclic operation: if $c \in \mathop{\mathrm{Supp}}(\mathop{\mathrm{fPol}}(\Delta))$ is cyclic, then the operation $c_0 \colon x \mapsto c(x, \dots, x)$ is in $\mathop{\mathrm{Supp}}(\mathop{\mathrm{fPol}}(\Delta))$ as well. Since $\Delta$ is a finite core, $c_0$ is bijective and therefore $c_0^{-1}$ (which is just a finite power of $c_0$) and the idempotent cyclic operation $c_0^{-1} \circ c$ lie in $\mathop{\mathrm{Supp}}(\mathop{\mathrm{fPol}}(\Delta))$. By Proposition 39 in [@KozikOchremiak15], $\Delta$ has a cyclic fractional polymorphism and by Lemma [Lemma 52](#lem:h1-transfer){reference-type="ref" reference="lem:h1-transfer"}, $\Gamma$ also has one.
◻ The following outstanding result classifies the computational complexity of VCSPs for valued structures over finite domains; it does not appear in this form in the literature, but we explain how to derive it from results in [@KozikOchremiak15; @KolmogorovKR17; @BulatovFVConjecture; @ZhukFVConjecture; @Zhuk20]. In the proof, if ${\mathfrak C}$ is a finite relational structure (understood also as a valued structure), we denote by $\mathop{\mathrm{Pol}}({\mathfrak C})$ the set $\mathop{\mathrm{Supp}}(\mathop{\mathrm{fPol}}({\mathfrak C}))$; this notation is consistent with the literature since the set $\mathop{\mathrm{Supp}}(\mathop{\mathrm{fPol}}({\mathfrak C}))$ coincides with the set of polymorphisms of a relational structure. **Theorem 56**. *Let $\Gamma$ be a valued structure with a finite signature and a finite domain. If $(\{0,1\};\mathop{\mathrm{OIT}})$ does not have a pp-construction in $\Gamma$, then $\Gamma$ has a cyclic fractional polymorphism and $\mathop{\mathrm{VCSP}}(\Gamma)$ is in P; otherwise, $\mathop{\mathrm{VCSP}}(\Gamma)$ is NP-hard.* *Proof.* If $(\{0,1\};\mathop{\mathrm{OIT}})$ has a pp-construction in $\Gamma$, then the NP-hardness of $\mathop{\mathrm{VCSP}}(\Gamma)$ follows from Corollary [Corollary 30](#cor:pp-constr-red){reference-type="ref" reference="cor:pp-constr-red"}. So assume that $(\{0,1\};\mathop{\mathrm{OIT}})$ does not have a pp-construction in $\Gamma$. Let ${\mathfrak C}$ be a classical relational structure on the same domain as $\Gamma$ such that $\mathop{\mathrm{Pol}}({\mathfrak C})= \mathop{\mathrm{Supp}}(\mathop{\mathrm{fPol}}(\Gamma))$; it exists since $\mathop{\mathrm{Supp}}(\mathop{\mathrm{fPol}}(\Gamma))$ contains projections by Example [Example 37](#expl:id){reference-type="ref" reference="expl:id"} and is closed under composition by Lemma [Lemma 24](#lem:compose){reference-type="ref" reference="lem:compose"} and Remark [Remark 36](#rem:frac-pol-frac-hom){reference-type="ref" reference="rem:frac-pol-frac-hom"}.
Note that therefore $\mathop{\mathrm{fPol}}(\Gamma) \subseteq \mathop{\mathrm{fPol}}({\mathfrak C})$ and since $\Gamma$ has a finite domain, [@FullaZivny Theorem 3.3] implies that every relation of ${\mathfrak C}$ lies in $\langle \Gamma \rangle$. Since $\Gamma$ does not pp-construct $(\{0,1\};\mathop{\mathrm{OIT}})$, neither does ${\mathfrak C}$, and in particular, ${\mathfrak C}$ does not pp-construct $(\{0,1\};\mathop{\mathrm{OIT}})$ in the classical relational setting (see [@wonderland Definition 3.4, Corollary 3.10]). Combining Theorems 1.4 and 1.8 from [@wonderland], $\mathop{\mathrm{Pol}}({\mathfrak C})$ contains a cyclic operation. Since $\mathop{\mathrm{Supp}}(\mathop{\mathrm{fPol}}(\Gamma))$ contains a cyclic operation, by Lemma [Lemma 55](#lem:cyclic-supp){reference-type="ref" reference="lem:cyclic-supp"}, $\Gamma$ has a cyclic fractional polymorphism. Kolmogorov, Rolínek, and Krokhin [@KolmogorovKR17] prove that in this case $\mathop{\mathrm{VCSP}}(\Gamma)$ can be reduced to a finite-domain CSP with a cyclic polymorphism; such CSPs were shown to be in P by Bulatov [@BulatovFVConjecture] and, independently, by Zhuk [@ZhukFVConjecture]. ◻ The problem of deciding for a given valued structure $\Gamma$ with finite domain and finite signature whether $\Gamma$ satisfies the condition given in the previous theorem can be solved in exponential time [@Kolmogorov-Meta]. We now state consequences of this result for certain valued structures with an infinite domain. **Proposition 57**. *Let ${\mathfrak B}$ be a finitely bounded homogeneous structure and let $\Gamma$ be a valued structure with finite relational signature such that $\mathop{\mathrm{Aut}}(\Gamma) = \mathop{\mathrm{Aut}}({\mathfrak B})$. Let $m$ be as in Theorem [Theorem 48](#thm:red-fin){reference-type="ref" reference="thm:red-fin"}. Then the following are equivalent.* 1.
*$\mathop{\mathrm{fPol}}(\Gamma)$ contains a fractional operation which is canonical and pseudo cyclic with respect to $\mathop{\mathrm{Aut}}({\mathfrak B})$;* 2. *$\mathop{\mathrm{fPol}}(\Gamma^*_m)$ contains a cyclic fractional operation;* 3. *$\mathop{\mathrm{Supp}}(\mathop{\mathrm{fPol}}(\Gamma^*_m))$ contains a cyclic operation;* 4. *$\mathop{\mathrm{Supp}}(\mathop{\mathrm{fPol}}(\Gamma_m^*))$ contains a Siggers operation.* *Proof.* Write $G$ for $\mathop{\mathrm{Aut}}({\mathfrak B})$. We first prove the implication from (1) to (2). If $\omega$ is a fractional polymorphism of $\Gamma$ as in (1), then $\omega^*$ is a fractional polymorphism of $\Gamma_{m}^*$: the fractional operation $\omega^*$ improves $R^*$ because $\omega$ improves $R$, and $\omega^*$ improves $C_{i,j}$ for all $i,j$ because $\omega$ is canonical with respect to $G$. Finally, if $\omega$ is pseudo cyclic with respect to $G$, then $\omega^*$ is cyclic. The implication from (2) to (1) is a consequence of Lemma [Lemma 49](#lem:lift){reference-type="ref" reference="lem:lift"} and Lemma [Lemma 50](#lem:pc){reference-type="ref" reference="lem:pc"}. The equivalence of (2) and (3) follows from Lemma [Lemma 55](#lem:cyclic-supp){reference-type="ref" reference="lem:cyclic-supp"}. The equivalence of (3) and (4) is proved in [@Book Theorem 6.9.2]; the proof is based on [@Cyclic Theorem 4.1]. ◻ Note that item (4) in the previous proposition can be decided algorithmically for a given valued structure $\Gamma^*_m$ (which has a finite domain and finite signature). **Theorem 58**. *If the conditions from Proposition [Proposition 57](#prop:black-box){reference-type="ref" reference="prop:black-box"} hold, then $\mathop{\mathrm{VCSP}}(\Gamma)$ is in P.* *Proof.* If $\Gamma^*_m$ has a cyclic fractional polymorphism of arity $\ell \geq 2$, then the polynomial-time tractability of $\mathop{\mathrm{VCSP}}(\Gamma^*_m)$ follows from Theorem [Theorem 56](#thm:fin-vcsp-tract){reference-type="ref" reference="thm:fin-vcsp-tract"}.
For $m$ large enough, we may apply Theorem [Theorem 48](#thm:red-fin){reference-type="ref" reference="thm:red-fin"} and obtain a polynomial-time reduction from $\mathop{\mathrm{VCSP}}(\Gamma)$ to $\mathop{\mathrm{VCSP}}(\Gamma^*_m)$, which concludes the proof. ◻ # Application: Resilience {#sect:resilience} A research topic that has been studied in database theory is the computational complexity of the so-called *resilience problem* [@Resilience; @NewResilience; @LatestResilience-arxiv]. We formulate it here for the case of conjunctive queries and, more generally, for unions of conjunctive queries. We generally work with Boolean queries, i.e., queries without free variables. Our results, however, can be extended also to the non-Boolean case. A *conjunctive query* is a primitive positive $\tau$-sentence and a *union of conjunctive queries* is a (finite) disjunction of conjunctive queries. Note that every existential positive sentence can be written as a union of conjunctive queries. Let $\tau$ be a finite relational signature and $\mu$ a conjunctive query over $\tau$. The input to the *resilience problem for* $\mu$ consists of a finite $\tau$-structure ${\mathfrak A}$, called a *database*[^2], and the task is to compute the number of tuples that have to be removed from relations of ${\mathfrak A}$ so that ${\mathfrak A}$ does *not* satisfy $\mu$. We call this number the *resilience* of ${\mathfrak A}$ (with respect to $\mu$). As usual, this can be turned into a decision problem where the input also contains a natural number $u \in {\mathbb N}$ and the question is whether the resilience is at most $u$. Clearly, ${\mathfrak A}$ does not satisfy $\mu$ if and only if its resilience equals $0$. The computational complexity of this problem depends on $\mu$; various cases that can be solved in polynomial time, as well as NP-hard cases, have been described in [@Resilience; @NewResilience; @LatestResilience-arxiv]. A general classification, however, is open.
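To make the definition concrete, the following brute-force sketch computes the resilience of a small database (set semantics, all tuples endogenous) for the example query $\exists x,y,z\, (R(x,y) \wedge S(y,z))$ that reappears in Example 59 below. The function names and data representation are our own illustration, not part of the cited algorithms, and the search is exponential in the number of facts.

```python
from itertools import combinations

# Brute-force resilience (set semantics, all tuples endogenous) for the
# example query  mu = exists x,y,z: R(x,y) and S(y,z).
# Relations are given as sets of tuples.

def satisfies_mu(R, S):
    # mu holds iff some R-edge ends where some S-edge starts
    return any(r[1] == s[0] for r in R for s in S)

def resilience(R, S):
    facts = [('R', t) for t in R] + [('S', t) for t in S]
    for k in range(len(facts) + 1):          # try smallest removal sets first
        for removed in combinations(facts, k):
            R2 = {t for t in R if ('R', t) not in removed}
            S2 = {t for t in S if ('S', t) not in removed}
            if not satisfies_mu(R2, S2):
                return k                     # minimal number of removed tuples
```

For instance, with $R = \{(0,1),(1,2)\}$ and $S = \{(1,3),(2,3)\}$ no single removal falsifies the query, but two removals do, so the resilience is $2$.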
A natural variation of the problem is that the input database is a *bag database*, meaning that it may contain tuples with *multiplicities*, i.e., the same tuple may have multiple occurrences in the same relation. Formally, a bag database is a valued structure with all weights (which represent multiplicities) taken from $\mathbb N$. Resilience on bag databases was introduced by Makhija and Gatterbauer [@LatestResilience-arxiv], who also present a conjunctive query for which the resilience problem with multiplicities is NP-hard whereas the resilience problem without multiplicities is in P. Note that bag databases are of importance because they represent SQL databases more faithfully than set databases [@bag-semantics]. Bag databases often require different methods than set databases [@bag-semantics; @bag-semantics-new]. In this paper, we exclusively consider bag databases. Note that if the resilience problem of a query $\mu$ can be solved in polynomial time on bag databases, then also the resilience problem on set databases can be solved in polynomial time. A natural generalization of the basic resilience problem defined above is obtained by admitting the decoration of databases with a subsignature $\sigma \subseteq \tau$, in this way declaring all tuples in $R^{{\mathfrak A}}$, $R \in \sigma$, to be *exogenous*. This means that we are not allowed to remove such tuples from ${\mathfrak A}$ to make $\mu$ false; the tuples in the other relations are then called *endogenous*. For brevity, we also refer to the relations in $\tau$ as being exogenous/endogenous. Different variants of exogenous tuples were studied in [@LatestResilience-arxiv]. However, in bag semantics all of them are polynomial-time equivalent to a problem of this form, see Remark [Remark 74](#rem:exo){reference-type="ref" reference="rem:exo"}. In this paper, we generally admit exogenous relations.
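Under bag semantics with exogenous relations, the brute-force computation changes in two ways: removing a tuple removes all of its occurrences at a cost equal to its multiplicity, and exogenous tuples may not be removed at all. A minimal sketch for the same example query as before (our own illustration; here $R$ is endogenous and given as a dict from tuples to multiplicities, while $S$ is exogenous):

```python
from itertools import combinations
import math

# Resilience under bag semantics for  mu = exists x,y,z: R(x,y) and S(y,z),
# where R is endogenous (dict: tuple -> multiplicity) and S is exogenous
# (a set of tuples that may never be removed).

def satisfies_mu(R, S):
    return any(r[1] == s[0] for r in R for s in S)

def bag_resilience(R_endo, S_exo):
    best = math.inf                 # stays inf if mu cannot be falsified
    facts = list(R_endo)
    for k in range(len(facts) + 1):
        for removed in combinations(facts, k):
            if not satisfies_mu(set(facts) - set(removed), S_exo):
                # removing a tuple deletes all its occurrences at once,
                # at a cost equal to its multiplicity
                best = min(best, sum(R_endo[t] for t in removed))
    return best
```

For example, with $R$-tuples $(0,1)$ of multiplicity $3$ and $(2,1)$ of multiplicity $1$, and the exogenous $S$-tuple $(1,5)$, both $R$-tuples must be removed, for a total cost of $4$.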
The resilience problem that we study is given in Figure [\[fig:res-problem\]](#fig:res-problem){reference-type="ref" reference="fig:res-problem"}.

Fixed: a relational signature $\tau$, a subset $\sigma \subseteq \tau$, and a union $\mu$ of conjunctive queries over $\tau$.\
Input: a finite bag database ${\mathfrak A}$ with signature $\tau$.\
Output: $m :=$ minimal number of tuples to be removed from the relations in $\{R^{{\mathfrak A}} \mid R \in \tau \setminus \sigma \}$ so that ${\mathfrak A}\not \models \mu$.

We next explain how to represent resilience problems as VCSPs using appropriately chosen valued structures with oligomorphic automorphism groups. **Example 59**. *The following query is taken from Meliou, Gatterbauer, Moore, and Suciu [@Meliou2010]; they show how to solve its resilience problem without multiplicities in polynomial time by a reduction to a max-flow problem. Let $\mu$ be the query $$\exists x,y,z \big ( R(x,y) \wedge S(y,z) \big ).$$ Observe that a finite $\tau$-structure satisfies $\mu$ if and only if it does not have a homomorphism to the $\tau$-structure ${\mathfrak B}$ with domain $B = \{0,1\}$ and the relations $R^{{\mathfrak B}} = \{(0,1),(1,1)\}$ and $S^{{\mathfrak B}} = \{(0,0),(0,1)\}$ (see Figure [\[fig:fin-dual\]](#fig:fin-dual){reference-type="ref" reference="fig:fin-dual"}). We turn ${\mathfrak B}$ into the valued structure $\Gamma$ with domain $\{0,1\}$ where $R^{\Gamma}(0,1) = R^{\Gamma}(1,1) = 0 = S^{\Gamma}(0,0) = S^{\Gamma}(0,1)$ and $R^{\Gamma}$ and $S^{\Gamma}$ take value 1 otherwise. Then $\mathop{\mathrm{VCSP}}(\Gamma)$ is precisely the resilience problem for $\mu$ (with multiplicities).
Our results reprove the result from [@LatestResilience-arxiv] that even with multiplicities, the problem can be solved in polynomial time (see Theorem [Theorem 58](#thm:tract){reference-type="ref" reference="thm:tract"}, Proposition [Proposition 73](#prop:connection){reference-type="ref" reference="prop:connection"} and Example [Example 70](#expl:fin-dual-rev){reference-type="ref" reference="expl:fin-dual-rev"}).* **Example 60**. *Let $\mu$ be the conjunctive query $$\exists x,y,z (R(x,y) \wedge S(x,y,z)).$$ This query is *linear* in the sense of Freire, Gatterbauer, Immerman, and Meliou, and thus its resilience problem without multiplicities can be solved in polynomial time (Theorem 4.5 in [@Meliou2010]; also see Fact 3.18 in [@Resilience-Arxive]). Our results reprove the result from [@LatestResilience-arxiv] that this problem remains polynomial-time solvable with multiplicities (see Theorem [Theorem 58](#thm:tract){reference-type="ref" reference="thm:tract"}, Proposition [Proposition 73](#prop:connection){reference-type="ref" reference="prop:connection"} and Example [Example 75](#expl:simple-inf-rev){reference-type="ref" reference="expl:simple-inf-rev"}).* **Remark 61**. *Note that if the resilience problem (with or without multiplicities) for a union $\mu$ of conjunctive queries is in P, then also the computational problem of *finding* tuples to be removed from the input database ${\mathfrak A}$ so that ${\mathfrak A}\not \models \mu$ is in P. To see this, let $u \in \mathbb N$ be the threshold. If $u=0$, then no tuple needs to be found and we are done. Otherwise, for every tuple $t$ in a relation $R^{{\mathfrak A}}$, we remove $t$ from $R^{{\mathfrak A}}$ and test the resulting database with the threshold $u-m$, where $m$ is the multiplicity of $t$.
If the modified instance is accepted, then $t$ is a correct tuple to be removed and we may proceed to find a solution of this modified instance.* ## Connectivity {#sect:conn} We show that when classifying the resilience problem for conjunctive queries, it suffices to consider queries that are connected. The *canonical database* of a conjunctive query $\mu$ with relational signature $\tau$ is the $\tau$-structure ${\mathfrak A}$ whose domain is the set of variables of $\mu$ and where $a \in R^{{\mathfrak A}}$ for $R \in \tau$ if and only if $\mu$ contains the conjunct $R(a)$. A $\tau$-structure is *connected* if it cannot be written as the disjoint union of two $\tau$-structures with non-empty domains. Conversely, the *canonical query* of a relational $\tau$-structure ${\mathfrak A}$ is the conjunctive query whose variable set is the domain $A$ of ${\mathfrak A}$, and which contains for every $R \in \tau$ and $\bar a \in R^{{\mathfrak A}}$ the conjunct $R(\bar a)$. **Remark 62**. *All terminology introduced for $\tau$-structures also applies to conjunctive queries with signature $\tau$: by definition, the query has the property if the canonical database has the property.* In particular, it is clear what it means for a conjunctive query to be connected. **Lemma 63**. *Let $\mu_1,\dots,\mu_k$ be conjunctive queries such that $\mu_i$ does not imply $\mu_j$ if $i \neq j$. Then the resilience problem for $\mu := \mu_1 \wedge \cdots \wedge \mu_k$ is NP-hard if the resilience problem for one of the $\mu_i$ is NP-hard. Conversely, if the resilience problem is in P (in NP) for each $\mu_i$, then the resilience problem for $\mu$ is in P as well (in NP, respectively). The same is true in the setting without multiplicities and/or exogenous relations.* *Proof.* We first present a polynomial-time reduction from the resilience problem of $\mu_i$, for some $i \in \{1,\dots,k\}$, to the resilience problem of $\mu$.
Given an instance ${\mathfrak A}$ of the resilience problem for $\mu_i$, let $m$ be the number of tuples in relations of ${\mathfrak A}$. Let ${\mathfrak A}'$ be the disjoint union of ${\mathfrak A}$ with $m$ copies of the canonical database of $\mu_j$ for every $j \in \{1,\dots,k\} \setminus \{i\}$. Observe that ${\mathfrak A}'$ can be computed in polynomial time in the size of ${\mathfrak A}$ and that the resilience of ${\mathfrak A}$ with respect to $\mu_i$ equals the resilience of ${\mathfrak A}'$ with respect to $\mu$. Conversely, if the resilience problem is in P for each $\mu_i$, then also the resilience problem for $\mu$ is in P: given an instance ${\mathfrak A}$ of the resilience problem for $\mu$, we compute the resilience of ${\mathfrak A}$ with respect to $\mu_i$ for every $i \in \{1,\dots,k\}$ and return the minimum of the resulting values. The proof for the membership in NP is the same. The same proof works in the setting without multiplicities. ◻ When classifying the complexity of the resilience problem for conjunctive queries, by Lemma [Lemma 63](#lem:con){reference-type="ref" reference="lem:con"} we may restrict our attention to conjunctive queries that are connected. We also formulate an immediate corollary of Lemma [Lemma 63](#lem:con){reference-type="ref" reference="lem:con"} that, after finitely many applications, establishes the same for unions of conjunctive queries. **Corollary 64**. *Let $\mu = \mu_1 \wedge \dots \wedge \mu_k$ be as in Lemma [Lemma 63](#lem:con){reference-type="ref" reference="lem:con"} and suppose that $\mu$ occurs in a union $\mu'$ of conjunctive queries. For $i \in \{1, \dots, k\}$, let $\mu_i'$ be the union of queries obtained by replacing $\mu$ by $\mu_i$ in $\mu'$. Then the resilience problem for $\mu'$ is NP-hard if the resilience problem for one of the $\mu_i'$ is NP-hard.
Conversely, if the resilience problem is in P (in NP) for each $\mu_i'$, then the resilience problem for $\mu'$ is in P as well (in NP, respectively). The same is true in the setting without multiplicities and/or exogenous relations.* ## Finite Duals If $\mu$ is a union of conjunctive queries with signature $\tau$, then a *dual* of $\mu$ is a $\tau$-structure ${\mathfrak A}$ with the property that a finite structure ${\mathfrak B}$ has a homomorphism to ${\mathfrak A}$ if and only if ${\mathfrak B}$ does not satisfy $\mu$. The conjunctive query in Example [Example 59](#expl:fin-dual){reference-type="ref" reference="expl:fin-dual"}, for instance, even has a *finite* dual. There is an elegant characterisation of the (unions of) conjunctive queries that have a finite dual. To state it, we need some basic terminology from database theory. **Definition 65**. *The *incidence graph* of a relational $\tau$-structure ${\mathfrak A}$ is the bipartite undirected multigraph whose first colour class is $A$, and whose second colour class consists of expressions of the form $R(b)$ where $R \in \tau$ has arity $k$, $b \in A^k$, and ${\mathfrak A}\models R(b)$. An edge $e_{a,i,R(b)}$ joins $a \in A$ with $R(b)$ if $b_i = a$. A structure is called *acyclic* if its incidence graph is acyclic, i.e., it contains no cycles (if two vertices are linked by two different edges, then they form a cycle). A structure is called a *tree* if it is acyclic and connected in the sense defined in Section [8.1](#sect:conn){reference-type="ref" reference="sect:conn"}.* The following was proved by Nešetřil and Tardif [@NesetrilTardif]; also see [@Foniok-Thesis; @LLT]. **Theorem 66**. *A conjunctive query $\mu$ has a finite dual if and only if the canonical database of $\mu$ is homomorphically equivalent to a tree.
A union of conjunctive queries has a finite dual if and only if the canonical database for each of the conjunctive queries is homomorphically equivalent to a tree.* The theorem shows that, in particular, the query in Example [Example 60](#expl:simple-inf){reference-type="ref" reference="expl:simple-inf"} does not have a finite dual, since it is not acyclic and hence cannot be homomorphically equivalent to a tree. To construct valued structures from duals, we introduce the following notation. **Definition 67**. *Let ${\mathfrak B}$ be a $\tau$-structure and $\sigma \subseteq \tau$. Define $\Gamma({\mathfrak B}, \sigma)$ to be the valued $\tau$-structure on the same domain as ${\mathfrak B}$ such that* - *for each $R \in \tau \setminus \sigma$, $R^{\Gamma({\mathfrak B}, \sigma)}(a) := 0$ if $a \in R^{{\mathfrak B}}$ and $R^{\Gamma({\mathfrak B}, \sigma)}(a) := 1$ otherwise, and* - *for each $R \in \sigma$, $R^{\Gamma({\mathfrak B}, \sigma)}(a) := 0$ if $a \in R^{{\mathfrak B}}$ and $R^{\Gamma({\mathfrak B}, \sigma)}(a) := \infty$ otherwise.* Note that $\mathop{\mathrm{Aut}}({\mathfrak B}) = \mathop{\mathrm{Aut}}(\Gamma({\mathfrak B}, \sigma))$ for any $\tau$-structure ${\mathfrak B}$ and any $\sigma$. In the following result we use a correspondence between resilience problems for acyclic conjunctive queries and valued CSPs. The result then follows from the P versus NP-complete dichotomy theorem for valued CSPs over finite domains stated in Theorem [Theorem 56](#thm:fin-vcsp-tract){reference-type="ref" reference="thm:fin-vcsp-tract"}. **Theorem 68**. *Let $\mu$ be a union of acyclic conjunctive queries with relational signature $\tau$ and let $\sigma \subseteq \tau$. Then the resilience problem for $\mu$ with exogenous relations from $\sigma$ is in P or NP-complete. Moreover, it is decidable whether the resilience problem for a given union of acyclic conjunctive queries is in P.
If $\mu$ is a union of queries each of which is homomorphically equivalent to a tree and ${\mathfrak B}$ is the finite dual of $\mu$ (which exists by Theorem [Theorem 66](#thm:NT){reference-type="ref" reference="thm:NT"}), then $\mathop{\mathrm{VCSP}}(\Gamma({\mathfrak B},\sigma))$ is polynomial-time equivalent to the resilience problem for $\mu$ with exogenous relations from $\sigma$.* *Proof.* By virtue of Corollary [Corollary 64](#cor:con){reference-type="ref" reference="cor:con"}, we may assume for the P versus NP-complete dichotomy that each of the conjunctive queries in $\mu$ is connected and thus a tree. The same is true also for the polynomial-time equivalence to a VCSP since replacing a conjunctive query in a union with a homomorphically equivalent one does not affect the complexity of resilience. Define $\Gamma := \Gamma({\mathfrak B}, \sigma)$. We show that $\mathop{\mathrm{VCSP}}(\Gamma)$ is polynomial-time equivalent to the resilience problem for $\mu$ with exogenous relations from $\sigma$. Given a finite bag database ${\mathfrak A}$ with signature $\tau$ and exogenous tuples from relations in $\sigma$, let $\phi$ be the $\tau$-expression which contains for every $R\in \tau$ and for every tuple $a \in R^{{\mathfrak A}}$ the summand $R(a)$ with the same number of occurrences as the multiplicity of $a$ in $R^{{\mathfrak A}}$. Conversely, for every $\tau$-expression $\phi$ we can create a bag database ${\mathfrak A}$ with signature $\tau$ and exogenous relations from $\sigma$. The domain of ${\mathfrak A}$ is the set of variables of $\phi$, and for every $R \in \tau$, a tuple $a$ lies in $R^{{\mathfrak A}}$ with multiplicity equal to the number of occurrences of the summand $R(a)$ in $\phi$. In both situations, the resilience of ${\mathfrak A}$ with respect to $\mu$ equals the value of $\phi$ with respect to $\Gamma$. This shows the final statement of the theorem.
The first statement now follows from Theorem [Theorem 56](#thm:fin-vcsp-tract){reference-type="ref" reference="thm:fin-vcsp-tract"}. Concerning the decidability of the tractability condition, first note that the finite dual of $\mu$, and hence also $\Gamma$, can be effectively computed from $\mu$ (e.g., the construction of the dual in [@NesetrilTardif] is effective). The existence of a cyclic fractional polymorphism for a given valued structure $\Gamma$ with finite domain and finite signature can be decided (in exponential time in the size of $\Gamma$; see [@Kolmogorov-Meta]). ◻ **Remark 69**. *We mention that Theorem [Theorem 68](#thm:fin-dual){reference-type="ref" reference="thm:fin-dual"} also applies to regular path queries, which can be shown to always have a finite dual; see the related work [@CalvaneseDegiocomoLenzeriniVardi].* Theorem [Theorem 68](#thm:fin-dual){reference-type="ref" reference="thm:fin-dual"} can be combined with the tractability results for VCSPs from Section [7](#sect:tract){reference-type="ref" reference="sect:tract"} that use fractional polymorphisms. To illustrate fractional polymorphisms and how to find them, we revisit a known tractable resilience problem from [@Meliou2010; @Resilience-Arxive; @Resilience; @NewResilience] and show that it has a canonical pseudo cyclic fractional polymorphism. **Example 70**. *We revisit Example [Example 59](#expl:fin-dual){reference-type="ref" reference="expl:fin-dual"}. Consider again the conjunctive query $$\exists x,y,z (R(x,y) \wedge S(y,z)).$$ There is a finite dual ${\mathfrak B}$ of $\mu$ with domain $\{0,1\}$ which is finitely bounded homogeneous, as described in Example [Example 59](#expl:fin-dual){reference-type="ref" reference="expl:fin-dual"}. That example also describes a valued structure $\Gamma$, which is in fact $\Gamma({\mathfrak B},\emptyset)$. Let $\omega$ be the cyclic fractional operation given by $\omega(\min) = \omega(\max) = \frac{1}{2}$.
Since $\mathop{\mathrm{Aut}}(\Gamma)$ is trivial, $\omega$ is canonical. The fractional operation $\omega$ improves both weighted relations $R$ and $S$ (they are *submodular*; see, e.g., [@KrokhinZivny17]) and hence is a canonical cyclic fractional polymorphism of $\Gamma$.* Combining Theorem [Theorem 58](#thm:tract){reference-type="ref" reference="thm:tract"} and [Theorem 68](#thm:fin-dual){reference-type="ref" reference="thm:fin-dual"}, Example [Example 70](#expl:fin-dual-rev){reference-type="ref" reference="expl:fin-dual-rev"} reproves the results from [@Resilience] (without multiplicities) and [@LatestResilience-arxiv] (with multiplicities) that the resilience problem for this query is in P. ## Infinite Duals {#sect:inf-dual} Conjunctive queries might not have a finite dual (see Example [Example 60](#expl:simple-inf){reference-type="ref" reference="expl:simple-inf"}), but unions of connected conjunctive queries always have a countably infinite dual. Cherlin, Shelah and Shi [@CherlinShelahShi] showed that in this case we may even find a dual with an oligomorphic automorphism group (see Theorem [Theorem 71](#thm:css){reference-type="ref" reference="thm:css"} below). This is the key insight to phrase resilience problems as VCSPs for valued structures with oligomorphic automorphism groups. The not necessarily connected case again reduces to the connected case by Corollary [Corollary 64](#cor:con){reference-type="ref" reference="cor:con"}. In Theorem [Theorem 71](#thm:css){reference-type="ref" reference="thm:css"} below we state a variant of a theorem of Cherlin, Shelah, and Shi [@CherlinShelahShi] (also see [@Hubicka-Nesetril-ForbiddenHomomorphisms; @Hubicka-Nesetril; @Book]). 
If ${\mathfrak B}$ is a structure, we write ${\mathfrak B}_{\text{pp}(m)}$ for the expansion of ${\mathfrak B}$ by all relations that can be defined with a connected primitive positive formula (see Remark [Remark 62](#rem:db){reference-type="ref" reference="rem:db"}) with at most $m$ variables, at least one free variable, and without equality. For a union of conjunctive queries $\mu$ over the signature $\tau$, we write $|\mu|$ for the maximum of the number of variables of each conjunctive query in $\mu$, the maximal arity of $\tau$, and 2. **Theorem 71**. *For every union $\mu$ of connected conjunctive queries over a finite relational signature $\tau$ there exists a $\tau$-structure ${\mathfrak B}_{\mu}$ such that the following statements hold:* 1. *$({\mathfrak B}_\mu)_{\text{pp}(|\mu|)}$ is homogeneous.* 2. *$\mathop{\mathrm{Age}}(({\mathfrak B}_{\mu})_{\text{pp}(|\mu|)})$ is the class of all substructures of structures of the form ${\mathfrak A}_{\text{pp}(|\mu|)}$ for a finite structure ${\mathfrak A}$ that satisfies $\neg \mu$.* 3. *A countable $\tau$-structure ${\mathfrak A}$ satisfies $\neg \mu$ if and only if it embeds into ${\mathfrak B}_{\mu}$.* 4. *${\mathfrak B}_\mu$ is finitely bounded.* 5. *$\mathop{\mathrm{Aut}}({\mathfrak B}_{\mu})$ is oligomorphic.* 6. *$({\mathfrak B}_\mu)_{\text{pp}(|\mu|)}$ is finitely bounded.* *Proof.* The construction of a structure ${\mathfrak B}_{\mu}$ with the given properties follows from a proof of Hubička and Nešetřil [@Hubicka-Nesetril; @Hubicka-Nesetril-ForbiddenHomomorphisms] of the theorem of Cherlin, Shelah, and Shi [@CherlinShelahShi], and can be found in [@Book Theorem 4.3.8]. Properties (1), (2) and property (3) restricted to finite structures ${\mathfrak A}$ are explicitly stated in [@Book Theorem 4.3.8]. Property (3) restricted to finite structures clearly implies property (4). Property (5) holds because homogeneous structures with a finite relational signature have an oligomorphic automorphism group.
Property (3) for countable structures now follows from [@Book Lemma 4.1.7]. Since we are not aware of a reference for (6) in the literature, we present a proof here. Let $\sigma$ be the signature of $({\mathfrak B}_\mu)_{\text{pp}(|\mu|)}$. We claim that the following universal $\sigma$-sentence $\psi$ describes the structures in the age of $({\mathfrak B}_\mu)_{\text{pp}(|\mu|)}$. If $\phi$ is a $\sigma$-sentence, then $\phi'$ denotes the $\tau$-sentence obtained from $\phi$ by replacing every occurrence of $R(\bar x)$, for $R \in \sigma \setminus \tau$, by the primitive positive $\tau$-formula $\eta(\bar x)$ for which $R$ was introduced in $({\mathfrak B}_\mu)_{\text{pp}(|\mu|)}$. Then $\psi$ is a conjunction of all $\sigma$-sentences $\neg \phi$ such that $\phi$ is primitive positive, $\phi'$ has at most $|\mu|$ variables, and $\phi'$ implies $\mu$. Clearly, there are finitely many conjuncts of this form. Suppose that ${\mathfrak A}\in \mathop{\mathrm{Age}}(({\mathfrak B}_\mu)_{\text{pp}(|\mu|)})$. Then ${\mathfrak A}$ satisfies each conjunct $\neg \phi$ of $\psi$, because otherwise ${\mathfrak B}_{\mu}$ satisfies $\phi'$, and thus satisfies $\mu$, contrary to our assumptions. The interesting direction is that if a finite $\sigma$-structure ${\mathfrak A}$ satisfies $\psi$, then ${\mathfrak A}$ embeds into $({\mathfrak B}_\mu)_{\text{pp}(|\mu|)}$. Let $\phi$ be the canonical query of ${\mathfrak A}$. Let ${\mathfrak A}'$ be the canonical database of the $\tau$-formula $\phi'$. Suppose for contradiction that ${\mathfrak A}' \models \mu$. Let $\chi$ be a minimal subformula of $\phi$ such that the canonical database of $\chi$ models $\mu$. Then $\chi$ has at most $|\mu|$ variables and implies $\mu$, and hence $\neg \chi$ is a conjunct of $\psi$ which is not satisfied by ${\mathfrak A}$, a contradiction to our assumptions.
Therefore, ${\mathfrak A}' \models \neg \mu$ and by Property (2), we have that ${\mathfrak A}'_{\text{pp}(|\mu|)}$ has an embedding $f$ into $({\mathfrak B}_\mu)_{\text{pp}(|\mu|)}$. We claim that the restriction of $f$ to the elements of ${\mathfrak A}$ is an embedding of ${\mathfrak A}$ into $({\mathfrak B}_\mu)_{\text{pp}(|\mu|)}$. Clearly, if ${\mathfrak A}\models R(\bar x)$ for some relation $R$ that has been introduced for a primitive positive formula $\eta$, then ${\mathfrak A}'$ satisfies $\eta(\bar x)$, and hence ${\mathfrak B}_\mu \models \eta(f(\bar x))$, which in turn implies that $({\mathfrak B}_\mu)_{\text{pp}(|\mu|)} \models R(f(\bar x))$ as desired. Conversely, if $({\mathfrak B}_\mu)_{\text{pp}(|\mu|)} \models R(f(\bar x))$, then ${\mathfrak A}'_{\text{pp}(|\mu|)} \models R(\bar x)$, and hence ${\mathfrak A}' \models \eta(\bar x)$. This in turn implies that ${\mathfrak A}\models R(\bar x)$. Since the restriction of $f$ and its inverse preserve the relations from $\tau$ trivially, we conclude that ${\mathfrak A}$ embeds into $({\mathfrak B}_\mu)_{\text{pp}(|\mu|)}$. ◻ By Properties (1) and (6) of Theorem [Theorem 71](#thm:css){reference-type="ref" reference="thm:css"}, ${\mathfrak B}_\mu$ is always a reduct of a finitely bounded homogeneous structure. For short, we write $\Gamma_\mu$ for $\Gamma({\mathfrak B}_\mu, \emptyset)$ and $\Gamma_{\mu,\sigma}$ for $\Gamma({\mathfrak B}_\mu, \sigma)$, see Definition [Definition 67](#def:valued-dual){reference-type="ref" reference="def:valued-dual"}. For some queries $\mu$, the structure ${\mathfrak B}_\mu$ can be replaced by a simpler structure ${\mathfrak C}_\mu$. This will be convenient for some examples that we consider later, because the structure ${\mathfrak C}_\mu$ is finitely bounded and homogeneous itself and hence admits the application of Theorem [Theorem 58](#thm:tract){reference-type="ref" reference="thm:tract"}. To define the respective class of queries, we need the following definition. 
The *Gaifman graph* of a relational structure ${\mathfrak A}$ is the undirected graph with vertex set $A$ where $a,b \in A$ are adjacent if and only if $a \neq b$ and there exists a tuple in a relation of ${\mathfrak A}$ that contains both $a$ and $b$. The Gaifman graph of a conjunctive query is the Gaifman graph of the canonical database of that query. **Theorem 72**. *For every union $\mu$ of connected conjunctive queries over a finite relational signature $\tau$ such that the Gaifman graph of each of the conjunctive queries in $\mu$ is complete, there exists a countable $\tau$-structure ${\mathfrak C}_\mu$ such that the following statements hold:* 1. *${\mathfrak C}_\mu$ is homogeneous.* 2. *$\mathop{\mathrm{Age}}({\mathfrak C}_\mu)$ is the class of all finite structures ${\mathfrak A}$ that satisfy $\neg \mu$.* *Moreover, ${\mathfrak C}_\mu$ is finitely bounded, $\mathop{\mathrm{Aut}}({\mathfrak C}_\mu)$ is oligomorphic, and a countable $\tau$-structure satisfies $\neg \mu$ if and only if it embeds into ${\mathfrak C}_{\mu}$.* *Proof.* Let ${\mathfrak A}_1$ and ${\mathfrak A}_2$ be finite $\tau$-structures that satisfy $\neg \mu$. Since the Gaifman graph of each of the conjunctive queries in $\mu$ is complete, the union of the structures ${\mathfrak A}_1$ and ${\mathfrak A}_2$ satisfies $\neg \mu$ as well. By Fraïssé's Theorem (see, e.g., [@Hodges]) there is a countable homogeneous $\tau$-structure ${\mathfrak C}_\mu$ such that $\mathop{\mathrm{Age}}({\mathfrak C}_\mu)$ is the class of all finite structures that satisfy $\neg \mu$; this shows that ${\mathfrak C}_\mu$ is finitely bounded. Homogeneous structures with finite relational signature clearly have an oligomorphic automorphism group. For the final statement, see [@Book Lemma 4.1.7]. ◻ Note that ${\mathfrak C}_\mu$ is homomorphically equivalent to ${\mathfrak B}_\mu$ by [@Book Lemma 4.1.7]. 
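The completeness condition on the Gaifman graph in Theorem 72 is easy to test mechanically. A minimal sketch (our own illustration; a query's canonical database is given as a dict mapping relation names to sets of tuples of variables):

```python
from itertools import combinations

# Gaifman graph of a (canonical database of a) conjunctive query, given
# as a dict mapping relation names to sets of tuples of variables.

def gaifman_edges(relations):
    edges = set()
    for tuples in relations.values():
        for t in tuples:
            # distinct elements co-occurring in a tuple are adjacent
            for a, b in combinations(set(t), 2):
                edges.add(frozenset((a, b)))
    return edges

def gaifman_complete(relations):
    domain = {a for tuples in relations.values() for t in tuples for a in t}
    edges = gaifman_edges(relations)
    return all(frozenset((a, b)) in edges
               for a, b in combinations(domain, 2))
```

For the query of Example 60, $R(x,y) \wedge S(x,y,z)$, the Gaifman graph on $\{x,y,z\}$ is complete, whereas for the query of Example 59, $R(x,y) \wedge S(y,z)$, the edge between $x$ and $z$ is missing.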
Therefore, $\Gamma({\mathfrak C}_\mu, \sigma)$ is homomorphically equivalent to $\Gamma_{\mu,\sigma}$ for any $\sigma \subseteq \tau$. The following proposition follows straightforwardly from the definitions and provides a valued constraint satisfaction problem that is polynomial-time equivalent to the resilience problem for $\mu$, similar to Theorem [Theorem 68](#thm:fin-dual){reference-type="ref" reference="thm:fin-dual"}. **Proposition 73**. *The resilience problem for a union of connected conjunctive queries $\mu$ where the relations from $\sigma \subseteq \tau$ are exogenous is polynomial-time equivalent to $\mathop{\mathrm{VCSP}}(\Gamma({\mathfrak B},\sigma))$ for any dual ${\mathfrak B}$ of $\mu$; in particular, to $\mathop{\mathrm{VCSP}}(\Gamma_{\mu,\sigma})$.* *Proof.* Let ${\mathfrak B}$ be a dual of $\mu$. For every bag database ${\mathfrak A}$ over signature $\tau$ and with exogenous relations from $\sigma$, let $\phi$ be the $\tau$-expression obtained by adding atomic $\tau$-expressions $S(x_1,\dots,x_n)$ according to the multiplicity of the tuples $(x_1,\dots,x_n)$ in $S^{{\mathfrak A}}$ for all $S \in \tau$. Note that $\phi$ can be computed in polynomial time. Then the resilience of ${\mathfrak A}$ with respect to $\mu$ is at most $u$ if and only if $(\phi,u)$ has a solution over $\Gamma({\mathfrak B}, \sigma)$. To prove a polynomial-time reduction in the other direction, let $\phi$ be a $\tau$-expression. We construct a bag database ${\mathfrak A}$ with signature $\tau$. The domain of ${\mathfrak A}$ is the set of variables that appear in $\phi$, and for every $S\in \tau$ and tuple $(x_1, \dots, x_n) \in S^{{\mathfrak A}}$, its multiplicity in ${\mathfrak A}$ is the number of times that $S(x_1, \dots, x_n)$ occurs as a summand of $\phi$. The relations $S^{{\mathfrak A}}$ with $S \in \sigma$ are exogenous in ${\mathfrak A}$, the remaining ones are endogenous.
Again, ${\mathfrak A}$ can be computed in polynomial time and the resilience of ${\mathfrak A}$ with respect to $\mu$ is at most $u$ if and only if $(\phi,u)$ has a solution over $\Gamma({\mathfrak B}, \sigma)$. ◻ In [@LatestResilience-arxiv] one may find a seemingly more general notion of exogenous tuples, where in a single relation there might be both endogenous and exogenous tuples. Using Proposition [Proposition 73](#prop:connection){reference-type="ref" reference="prop:connection"} and Lemma [Lemma 17](#lem:expr-reduce){reference-type="ref" reference="lem:expr-reduce"}, however, we can show that classifying the complexity of resilience problems according to our original definition also entails a classification of this variant. **Remark 74**. *Let $\mu$ be a union of conjunctive queries with the signature $\tau$, let $\sigma \subseteq \tau$, and let $\rho \subseteq \tau \setminus \sigma$. Suppose we would like to model the resilience problem for $\mu$ where the relations in $\sigma$ are exogenous and the relations in $\rho$ might contain both endogenous and exogenous tuples. Let ${\mathfrak B}$ be a dual of $\mu$ and $\Gamma$ be the expansion of $\Gamma({\mathfrak B}, \sigma)$ where for every relational symbol $R \in \rho$, there is also a relation $(R^{x})^{\Gamma}=R^{{\mathfrak B}}$, i.e., a classical relation that takes values $0$ and $\infty$. The resilience problem for $\mu$ with exogenous tuples specified as above is polynomial-time equivalent to $\mathop{\mathrm{VCSP}}(\Gamma)$ by analogous reductions as in Proposition [Proposition 73](#prop:connection){reference-type="ref" reference="prop:connection"}. 
Note that $(R^{x})^{\Gamma}=\mathop{\mathrm{Opt}}\big (R^{\Gamma({\mathfrak B}, \sigma)} \big)$ for every $R \in \rho$, and therefore by Lemma [Lemma 17](#lem:expr-reduce){reference-type="ref" reference="lem:expr-reduce"}, $\mathop{\mathrm{VCSP}}(\Gamma)$ is polynomial-time equivalent to $\mathop{\mathrm{VCSP}}(\Gamma({\mathfrak B},\sigma))$ and thus to the resilience problem for $\mu$ where the relations in $\sigma$ are exogenous and the relations in $\tau \setminus \sigma$ are purely endogenous. This justifies the restriction to our setting for exogenous tuples. Moreover, the same argument shows that if resilience of $\mu$ with all tuples endogenous is in P, then all variants of resilience of $\mu$ with exogenous tuples are in P as well.* As in Example [Example 70](#expl:fin-dual-rev){reference-type="ref" reference="expl:fin-dual-rev"}, Proposition [Proposition 73](#prop:connection){reference-type="ref" reference="prop:connection"} can be combined with the tractability results for VCSPs from Section [7](#sect:tract){reference-type="ref" reference="sect:tract"} that use fractional polymorphisms to prove tractability of resilience problems. **Example 75**. *We revisit Example [Example 60](#expl:simple-inf){reference-type="ref" reference="expl:simple-inf"}. Consider the conjunctive query $\exists x,y,z \, (R(x,y) \wedge S(x,y,z))$ over the signature $\tau = \{R,S\}$. Note that the Gaifman graph of $\mu$ is complete; let ${\mathfrak C}_{\mu}$ be the structure from Theorem [Theorem 72](#thm:freeAP){reference-type="ref" reference="thm:freeAP"}. We construct a binary pseudo cyclic canonical fractional polymorphism of $\Gamma({\mathfrak C}_{\mu}, \emptyset)$.
Let ${\mathfrak M}$ be the $\tau$-structure with domain $(C_\mu)^2$ and where* - *$((b^1_1,b^2_1),(b^1_2,b^2_2)) \in R^{{\mathfrak M}}$ if and only if $(b^1_1,b^1_2) \in R^{{\mathfrak C}_{\mu}}$ and $(b^2_1,b^2_2) \in R^{{\mathfrak C}_{\mu}}$, and* - *$((b^1_1,b^2_1),(b^1_2,b^2_2),(b^1_3,b^2_3)) \in S^{{\mathfrak M}}$ if and only if $(b^1_1,b^1_2,b^1_3) \in S^{{\mathfrak C}_{\mu}}$ or $(b^2_1,b^2_2,b^2_3) \in S^{{\mathfrak C}_{\mu}}$.* *Similarly, let ${\mathfrak N}$ be the $\tau$-structure with domain $(C_\mu)^2$ and where* - *$((b^1_1,b^2_1),(b^1_2,b^2_2)) \in R^{{\mathfrak N}}$ if and only if $(b^1_1,b^1_2) \in R^{{\mathfrak C}_{\mu}}$ or $(b^2_1,b^2_2) \in R^{{\mathfrak C}_{\mu}}$, and* - *$((b^1_1,b^2_1),(b^1_2,b^2_2),(b^1_3,b^2_3)) \in S^{{\mathfrak N}}$ if and only if $(b^1_1,b^1_2,b^1_3) \in S^{{\mathfrak C}_{\mu}}$ and $(b^2_1,b^2_2,b^2_3) \in S^{{\mathfrak C}_{\mu}}$.* *Note that ${\mathfrak M}\not \models \mu$ and hence there exists an embedding $f \colon {\mathfrak M}\to {\mathfrak C}_{\mu}$. Similarly, there exists an embedding $g \colon {\mathfrak N}\to {\mathfrak C}_{\mu}$. Clearly, both $f$ and $g$ regarded as operations on the set $C_{\mu}$ are pseudo cyclic (but in general not cyclic) and canonical with respect to $\mathop{\mathrm{Aut}}({\mathfrak C}_{\mu})$ (see Claim 6 in Proposition [Proposition 79](#prop:mu1){reference-type="ref" reference="prop:mu1"} for a detailed argument of this type). Let $\omega$ be the fractional operation given by $\omega(f) = \frac{1}{2}$ and $\omega(g) = \frac{1}{2}$. Then $\omega$ is a binary fractional polymorphism of $\Gamma := \Gamma({\mathfrak C}_{\mu},\emptyset)$: for $b^1,b^2 \in (C_\mu)^2$ we have $$\begin{aligned} \sum_{h \in {\mathscr O}^{(2)}} \omega(h) R^{\Gamma}(h(b^1,b^{2})) & = \frac{1}{2} R^{\Gamma}(f(b^1,b^2)) + \frac{1}{2} R^{\Gamma}(g(b^1,b^2)) \nonumber \\ & = \frac{1}{2} \sum_{j = 1}^2 R^{\Gamma}(b^j). 
\label{eq:submod}\end{aligned}$$ Thus $\omega$ improves $R$, and similarly we see that $\omega$ improves $S$.* We proved that the corresponding valued structure has a binary canonical pseudo cyclic fractional polymorphism. By Theorem [Theorem 58](#thm:tract){reference-type="ref" reference="thm:tract"} and [Proposition 73](#prop:connection){reference-type="ref" reference="prop:connection"}, this reproves the results from [@Resilience] (without multiplicities) and [@LatestResilience-arxiv] (with multiplicities) that the resilience problem for this query is in P. ## The Resilience Tractability Conjecture In this section we present a conjecture which implies, together with Corollary [Corollary 31](#cor:OIT){reference-type="ref" reference="cor:OIT"} and Lemma [Lemma 63](#lem:con){reference-type="ref" reference="lem:con"}, a P versus NP-complete dichotomy for resilience problems for finite unions of conjunctive queries. **Conjecture 76**. *Let $\mu$ be a union of connected conjunctive queries over the signature $\tau$, and let $\sigma \subseteq \tau$. If the structure $(\{0,1\};\mathop{\mathrm{OIT}})$ has no pp-construction in $\Gamma := \Gamma_{\mu,\sigma}$, then $\Gamma$ has a fractional polymorphism of arity $\ell \geq 2$ which is canonical and pseudo cyclic with respect to $\mathop{\mathrm{Aut}}(\Gamma)$ (and in this case, $\mathop{\mathrm{VCSP}}(\Gamma)$ is in P by Theorem [Theorem 58](#thm:tract){reference-type="ref" reference="thm:tract"}).* The conjecture is intentionally only formulated for VCSPs that stem from resilience problems, because it is known to be false for the more general situation of VCSPs for valued structures $\Gamma$ that have the same automorphisms as a reduct of a finitely bounded homogeneous structure [@Book] (Section 12.9.1; the counterexample is even a CSP). However, see Conjecture [Conjecture 82](#conj:VCSP){reference-type="ref" reference="conj:VCSP"} for a conjecture that could hold for VCSPs in this more general setting.
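Returning to Example 75, the equality in [\[eq:submod\]](#eq:submod){reference-type="eqref" reference="eq:submod"} rests on a simple boolean identity for $0$-$1$ costs: an 'and'-combination and an 'or'-combination together split the cost of the two arguments exactly. The following sketch verifies this identity by brute force (our own encoding, not part of the construction; `cost`, `lhs`, and `rhs` are illustrative names):

```python
from itertools import product

# With 0-1 costs, cost(t) = 0 if the tuple t lies in the relation and 1
# otherwise. Combining two arguments coordinatewise by "and" (as for R in
# the structure M of Example 75) and by "or" (as for R in the structure N)
# splits the total cost exactly, which is the equality behind [eq:submod].
def cost(in_relation):
    return 0 if in_relation else 1

def lhs(p, q):
    # cost under the "and"-combination plus cost under the "or"-combination
    return cost(p and q) + cost(p or q)

def rhs(p, q):
    # total cost of the two arguments
    return cost(p) + cost(q)

assert all(lhs(p, q) == rhs(p, q) for p, q in product([False, True], repeat=2))
```

The same identity, with the roles of 'and' and 'or' exchanged, is what makes $\omega$ improve $S$.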
For the following conjunctive query $\mu$, the NP-hardness of the resilience problem without multiplicities was shown in [@Resilience]; to illustrate our condition, we verify that $(\{0,1\}; \mathop{\mathrm{OIT}})$ has a pp-construction in $\Gamma_\mu$ and thus prove in a different way that the resilience problem (with multiplicities) for $\mu$ is NP-hard. **Example 77** (Triangle query). *Let $\tau$ be the signature that consists of three binary relation symbols $R$, $S$, and $T$, and let $\mu$ be the conjunctive query $$\exists x,y,z \big (R(x,y) \wedge S(y,z) \wedge T(z,x) \big ).$$ The resilience problem without multiplicities for $\mu$ is NP-complete [@Resilience], and hence $\mathop{\mathrm{VCSP}}(\Gamma_{\mu})$ is NP-hard (Proposition [Proposition 73](#prop:connection){reference-type="ref" reference="prop:connection"}). Since the Gaifman graph of $\mu$ is complete, the structure ${\mathfrak C}_\mu$ from Theorem [Theorem 72](#thm:freeAP){reference-type="ref" reference="thm:freeAP"} exists. Let $\Gamma:=\Gamma({\mathfrak C}_\mu, \emptyset)$. We provide a pp-construction of $(\{0,1\};\mathop{\mathrm{OIT}})$ in $\Gamma$, which also proves NP-hardness of $\mathop{\mathrm{VCSP}}(\Gamma)$ and hence the resilience problem of $\mu$ with multiplicities by Corollary [Corollary 31](#cor:OIT){reference-type="ref" reference="cor:OIT"}. Since $\Gamma$ is homomorphically equivalent to $\Gamma_\mu$, this also provides a pp-construction of $(\{0,1\};\mathop{\mathrm{OIT}})$ in $\Gamma_\mu$ (see Lemma [Lemma 32](#lem:pp-trans){reference-type="ref" reference="lem:pp-trans"}).* *Let $C$ be the domain of $\Gamma$. Let $\phi(a,b,c,d,e,f,g,h,i)$ be the $\tau$-expression $$\begin{aligned} & R(a,b) + S(b,c) + T(c,d) + R(d,e) + S(e,f) + T(f,g) + R(g,h) + S(h,i) \label{eq:firstgroup} \\ + \; & T(i,g) + S(h,f) + R(g,e) + T(f,d) + S(e,c) + R(d,b) + T(c,a) .
\label{eq:secondgroup} \end{aligned}$$ For an illustration of $\mu$ and $\phi$, see Figure [\[fig:triad\]](#fig:triad){reference-type="ref" reference="fig:triad"}. Note that $\phi$ can be viewed as seven overlapping copies of $\mu$.* *In what follows, we say that an atomic $\tau$-expression holds if it evaluates to $0$. Note that every atom in [\[eq:firstgroup\]](#eq:firstgroup){reference-type="eqref" reference="eq:firstgroup"} except the first and the last ones appears in exactly two copies of $\mu$ in $\phi$, whereas all other atoms of $\phi$ occur in only one copy of $\mu$ in $\phi$. Hence, since there are seven copies of $\mu$ in $\phi$, in the optimal solution of the instance $\phi$ of $\mathop{\mathrm{VCSP}}(\Gamma)$ all atoms in [\[eq:secondgroup\]](#eq:secondgroup){reference-type="eqref" reference="eq:secondgroup"} hold, and either every atom at even position or every atom at odd position in [\[eq:firstgroup\]](#eq:firstgroup){reference-type="eqref" reference="eq:firstgroup"} holds. Let $RT \in \langle \Gamma \rangle$ be given by $$RT(a,b,f,g) := \mathop{\mathrm{Opt}}\inf_{c,d,e,h,i \in C} \phi.$$ Note that $RT(a,b,f,g)$ holds if and only if* - *$R(a,b)$ holds and $T(f,g)$ does not hold, or* - *$T(f,g)$ holds and $R(a,b)$ does not hold,* *where the reverse implication uses that ${\mathfrak C}_\mu$ is homogeneous. 
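The counting argument for optimal solutions of $\phi$ can be double-checked mechanically. The sketch below (our own encoding; the atom list and the seven copies of $\mu$ are transcribed from $\phi$) confirms that any assignment realisable over ${\mathfrak C}_\mu$ must make at least four atoms fail, and that both alternating patterns in [\[eq:firstgroup\]](#eq:firstgroup){reference-type="eqref" reference="eq:firstgroup"} attain this bound:

```python
from itertools import combinations

# Atoms of phi in order of appearance: indices 0-7 are [eq:firstgroup],
# indices 8-14 are [eq:secondgroup].
atoms = ["R(a,b)", "S(b,c)", "T(c,d)", "R(d,e)", "S(e,f)", "T(f,g)",
         "R(g,h)", "S(h,i)",
         "T(i,g)", "S(h,f)", "R(g,e)", "T(f,d)", "S(e,c)", "R(d,b)", "T(c,a)"]

# The seven copies of mu inside phi, as index sets into `atoms`.
copies = [{0, 1, 14},   # (a,b,c)
          {13, 1, 2},   # (d,b,c)
          {3, 12, 2},   # (d,e,c)
          {3, 4, 11},   # (d,e,f)
          {10, 4, 5},   # (g,e,f)
          {6, 9, 5},    # (g,h,f)
          {6, 7, 8}]    # (g,h,i)

def feasible(failing):
    # Since C_mu satisfies (not mu), an assignment is realisable only if
    # every copy of mu contains at least one failing atom.
    return all(c & failing for c in copies)

# smallest number of failing atoms over all realisable patterns
best = min(k for k in range(len(atoms) + 1)
           if any(feasible(set(s)) for s in combinations(range(len(atoms)), k)))
assert best == 4
# both alternating patterns in the first group achieve the optimum
assert feasible({0, 2, 4, 6}) and feasible({1, 3, 5, 7})
```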
Similarly, define $RS\in \langle \Gamma \rangle$ by $$RS(a,b,h,i) := \mathop{\mathrm{Opt}}\inf_{c,d,e,f,g \in C} \phi.$$ Note that $RS(a,b,h,i)$ holds if and only if* - *$R(a,b)$ holds and $S(h,i)$ does not hold, or* - *$S(h,i)$ holds and $R(a,b)$ does not hold.* *Next, we define the auxiliary relation $\widetilde{RS}(a,b,e,f)$ to be $$\mathop{\mathrm{Opt}}\inf_{c,d,g,h,i \in C} \phi.$$ Note that $\widetilde{RS}(a,b,e,f)$ holds if and only if* - *both $R(a,b)$ and $S(e,f)$ hold, or* - *neither $R(a,b)$ nor $S(e,f)$ holds.* *This allows us to define the relation $$RR(u,v,x,y):= \inf_{w,z \in C} RS(u,v,w,z) + \widetilde{RS}(x,y,w,z)$$ which holds if and only if* - *$R(u,v)$ holds and $R(x,y)$ does not hold, or* - *$R(x,y)$ holds and $R(u,v)$ does not hold.* *Define $M \in \langle \Gamma \rangle$ as $$\begin{aligned} M := \mathop{\mathrm{Opt}}\inf_{x,y,z \in C} \big (&RR(u,v,x,y) + RS(u',v',y,z) + RT(u'',v'',z,x) \\ + \; & R(x,y) + S(y,z) + T(z,x) \big ).\end{aligned}$$ Note that $R(x,y)$, $S(y,z)$ and $T(z,x)$ cannot hold at the same time and therefore $(u,v,u',v',u'',v'') \in M$ if and only if exactly one of $R(u,v)$, $R(u',v')$, and $R(u'',v'')$ holds. Let $\Delta$ be the pp-power of $(C;M)$ of dimension two with signature $\{\mathop{\mathrm{OIT}}\}$ such that $$\mathop{\mathrm{OIT}}^{\Delta}((u,v),(u',v'),(u'',v'')) := M(u,v,u',v',u'',v'').$$ Then $\Delta$ is homomorphically equivalent to $(\{0,1\};\mathop{\mathrm{OIT}})$, witnessed by the homomorphism from $\Delta$ to $(\{0,1\};\mathop{\mathrm{OIT}})$ that maps $(u,v)$ to $1$ if $R(u,v)$ holds and to $0$ otherwise, and the homomorphism $(\{0,1\};\mathop{\mathrm{OIT}}) \to \Delta$ that maps $1$ to any pair of vertices $(u,v) \in R$ and $0$ to any pair of vertices $(u,v) \notin R$. Therefore, $\Gamma$ pp-constructs $(\{0,1\}; \mathop{\mathrm{OIT}})$.* We mention that another conjecture concerning a P vs. NP-complete complexity dichotomy for resilience problems appears in [@LatestResilience-arxiv Conjecture 7.7].
The conjecture has a similar form to Conjecture [Conjecture 76](#conj:tract){reference-type="ref" reference="conj:tract"} in the sense that it states that a sufficient hardness condition for resilience is also necessary. The relationship between our hardness condition from Corollary [Corollary 31](#cor:OIT){reference-type="ref" reference="cor:OIT"} and the condition from [@LatestResilience-arxiv] remains to be studied. ## An example of formerly open complexity {#sect:expl} We use our approach to settle the complexity of the resilience problem for a conjunctive query that was mentioned as an open problem in [@NewResilience] (Section 8.5): $$\begin{aligned} \mu & := \exists x,y (S(x) \wedge R(x,y) \wedge R(y,x) \wedge R(y,y)) \label{eq:mu1}\end{aligned}$$ Let $\tau = \{R,S\}$ be the signature of $\mu$. To study the complexity of resilience of $\mu$, it will be convenient to work with a dual which has different model-theoretic properties than the duals ${\mathfrak B}_\mu$ from Theorem [Theorem 71](#thm:css){reference-type="ref" reference="thm:css"} and ${\mathfrak C}_\mu$ from Theorem [Theorem 72](#thm:freeAP){reference-type="ref" reference="thm:freeAP"}, namely a dual that is a model-complete core. **Definition 78**. *A structure ${\mathfrak B}$ with an oligomorphic automorphism group is *model-complete* if every embedding of ${\mathfrak B}$ into ${\mathfrak B}$ preserves all first-order formulas. It is a *core* if every endomorphism is an embedding.* Note that the definition of cores of valued structures with finite domain (Definition [Definition 53](#def:core){reference-type="ref" reference="def:core"}) and the definition above specialise to the same concept for relational structures over finite domains. A structure with an oligomorphic automorphism group is a model-complete core if and only if for every $n \in {\mathbb N}$ every orbit of $n$-tuples can be defined with an existential positive formula [@Book].
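Over a finite domain, the core condition of Definition 78 can be tested by brute force. The following toy illustration (our own example, not from the text) checks that the directed $3$-cycle is a core, i.e., that each of its endomorphisms is an embedding:

```python
from itertools import product

# The directed 3-cycle as a finite {R}-structure: domain V, edge set E.
V = [0, 1, 2]
E = {(0, 1), (1, 2), (2, 0)}

def is_endomorphism(h):
    # h preserves R: every edge is mapped to an edge
    return all((h[x], h[y]) in E for (x, y) in E)

def is_embedding(h):
    # injective and preserves both R and its complement
    injective = len(set(h.values())) == len(V)
    preserves = all(((h[x], h[y]) in E) == ((x, y) in E) for x in V for y in V)
    return injective and preserves

endos = [h for h in (dict(zip(V, img)) for img in product(V, repeat=3))
         if is_endomorphism(h)]
assert all(is_embedding(h) for h in endos)  # the 3-cycle is a core
```

The same brute-force test rejects any finite structure that admits a non-injective endomorphism.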
Every countable structure ${\mathfrak B}$ is homomorphically equivalent to a model-complete core, which is unique up to isomorphism [@Cores-journal; @Book]; we refer to this structure as the model-complete core of ${\mathfrak B}$. The advantage of working with model-complete cores is that the structure is in a sense 'minimal' and therefore easier to work with in concrete examples.[^3] **Proposition 79**. *There is a finitely bounded homogeneous dual ${\mathfrak B}$ of $\mu$ such that the valued $\tau$-structure $\Gamma := \Gamma({\mathfrak B}, \emptyset)$ has a binary fractional polymorphism which is canonical and pseudo cyclic with respect to $\mathop{\mathrm{Aut}}(\Gamma)$. Hence, $\mathop{\mathrm{VCSP}}(\Gamma)$ and the resilience problem for $\mu$ are in P. As a consequence, the polynomial-time tractability result even holds for resilience of $\mu$ with exogenous relations from any $\sigma \subseteq \tau$.* *Proof.* Since the Gaifman graph of $\mu$ is a complete graph, there exists the structure ${\mathfrak C}_\mu$ as in Theorem [Theorem 72](#thm:freeAP){reference-type="ref" reference="thm:freeAP"}. Let ${\mathfrak B}$ be the model-complete core of ${\mathfrak C}_\mu$. Note that ${\mathfrak B}$ has the property that a countable structure ${\mathfrak A}$ maps homomorphically to ${\mathfrak B}$ if and only if ${\mathfrak A}\models \neg \mu$; in particular, ${\mathfrak B}$ is a dual of $\mu$ and ${\mathfrak B}\models \neg \mu$. The structure ${\mathfrak C}_\mu$ is homogeneous, and it is known that the model-complete core of a homogeneous structure is again homogeneous (see Proposition 4.7.7 in [@Book]), so ${\mathfrak B}$ is homogeneous. Let $\Gamma := \Gamma({\mathfrak B}, \emptyset)$. Note that $$\begin{aligned} {\mathfrak B}& \models \forall x \big ( \neg S(x) \vee \neg R(x,x) \big ) \label{eq:parti} \\ \text{ and } {\mathfrak B}& \models \forall x,y \big (x = y \vee R(x,y) \vee R(y,x) \big ).
\label{eq:total}\end{aligned}$$ To see [\[eq:total\]](#eq:total){reference-type="eqref" reference="eq:total"}, suppose for contradiction that ${\mathfrak B}$ contains distinct elements $x,y$ such that neither $(x,y)$ nor $(y,x)$ is in $R^{{\mathfrak B}}$. Let ${\mathfrak B}'$ be the structure obtained from ${\mathfrak B}$ by adding $(x,y)$ to $R^{{\mathfrak B}}$. Then ${\mathfrak B}' \models \neg \mu$ as well, and hence there is a homomorphism from ${\mathfrak B}'$ to ${\mathfrak B}$ by the properties of ${\mathfrak B}$. This homomorphism is also an endomorphism of ${\mathfrak B}$ which is not an embedding, a contradiction to the assumption that ${\mathfrak B}$ is a model-complete core. Also observe that $$\begin{aligned} {\mathfrak B}& \models \forall x,y \big (x= y \vee (R(x,y) \wedge R(y,x)) \vee (S(x) \wedge R(y,y)) \vee (R(x,x) \wedge S(y)) \big ). \label{eq:double}\end{aligned}$$ Suppose for contradiction that [\[eq:double\]](#eq:double){reference-type="eqref" reference="eq:double"} does not hold for some distinct $x$ and $y$. Then $\neg S(x) \vee \neg R(y,y)$ and $\neg R(x,x) \vee \neg S(y)$, i.e., $\neg S(x) \wedge \neg R(x,x)$, or $\neg S(x) \wedge \neg S(y)$, or $\neg R(y,y) \wedge \neg R(x,x)$, or $\neg R(y,y) \wedge \neg S(y)$. In each of these cases we may add both $R$-edges between the distinct elements $x$ and $y$ to ${\mathfrak B}$ and obtain a structure not satisfying $\mu$, which leads to a contradiction as above. For an illustration of a finite substructure of ${\mathfrak B}$ which contains a representative for every orbit of pairs in $\mathop{\mathrm{Aut}}({\mathfrak B})$, see Figure [\[fig:dual-mu1\]](#fig:dual-mu1){reference-type="ref" reference="fig:dual-mu1"}. 
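The case distinction used here for [\[eq:double\]](#eq:double){reference-type="eqref" reference="eq:double"} is just the distribution of conjunction over disjunction; a mechanical check (our own encoding, with $a = S(x)$, $b = R(y,y)$, $c = R(x,x)$ and $d = S(y)$):

```python
from itertools import product

# (not a or not b) and (not c or not d) is equivalent to the disjunction of
# the four conjunctions listed in the argument for [eq:double].
def negated_double(a, b, c, d):
    return (not a or not b) and (not c or not d)

def four_cases(a, b, c, d):
    return ((not a and not c) or (not a and not d)
            or (not b and not c) or (not b and not d))

assert all(negated_double(*v) == four_cases(*v)
           for v in product([False, True], repeat=4))
```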
**Claim 1.** For every finite $\tau$-structure ${\mathfrak A}$ that satisfies $\neg \mu$ and the sentences in [\[eq:total\]](#eq:total){reference-type="eqref" reference="eq:total"} and [\[eq:double\]](#eq:double){reference-type="eqref" reference="eq:double"}, there exists a *strong* homomorphism to ${\mathfrak B}$, i.e., a homomorphism that also preserves the complements of $R$ and $S$. First observe that ${\mathfrak B}$ embeds the countably infinite complete graph, where $R$ is the edge relation and precisely one element lies in the relation $S$; this is because this structure maps homomorphically to ${\mathfrak B}$, and the image of any non-injective homomorphism would contain a copy of $\mu$, contradicting ${\mathfrak B}\not \models \mu$. In particular, there are infinitely many $x \in B$ such that ${\mathfrak B}\models \neg S(x) \wedge \neg R(x,x)$ and by [\[eq:double\]](#eq:double){reference-type="eqref" reference="eq:double"}, for every $y \in B$, $x \neq y$, we have ${\mathfrak B}\models R(x,y) \wedge R(y,x)$. To prove the claim, let ${\mathfrak A}$ be a finite structure that satisfies $\neg \mu$ and the sentences in [\[eq:total\]](#eq:total){reference-type="eqref" reference="eq:total"} and [\[eq:double\]](#eq:double){reference-type="eqref" reference="eq:double"}. For a homomorphism $h$ from ${\mathfrak A}$ to ${\mathfrak B}$, let $$s(h) := |\{x \in A \mid {\mathfrak A}\models \neg S(x) \wedge {\mathfrak B}\models S(h(x)) \} |$$ and $$r(h) := |\{(x,y) \in A^2 \mid {\mathfrak A}\models \neg R(x,y) \wedge {\mathfrak B}\models R(h(x), h(y)) \}|.$$ Let $h$ be a homomorphism from ${\mathfrak A}$ to ${\mathfrak B}$, which exists since ${\mathfrak A}\models \neg \mu$. If $s(h)+r(h)=0$, then $h$ is a strong homomorphism and there is nothing to prove. Suppose therefore $s(h)+r(h)>0$. We construct a homomorphism $h'$ such that $r(h')+s(h')<r(h)+s(h)$. Since $r(h)+s(h)$ is finite, by applying this construction finitely many times, we obtain a strong homomorphism from ${\mathfrak A}$ to ${\mathfrak B}$.
If $s(h)>0$, then there exists $a \in A \setminus S^{{\mathfrak A}}$ such that $h(a) \in S^{{\mathfrak B}}$. By [\[eq:parti\]](#eq:parti){reference-type="eqref" reference="eq:parti"}, ${\mathfrak B}\not \models R(h(a),h(a))$ and hence ${\mathfrak A}\not \models R(a,a)$. Pick $b \in B \setminus h(A)$ such that ${\mathfrak B}\models \neg S(b) \wedge \neg R(b,b)$ and define $$h'(x):= \begin{cases} b \text{ if }x=a,\\ h(x) \text{ otherwise}. \end{cases}$$ Observe that $h'$ is a homomorphism, $s(h')<s(h)$ and $r(h')=r(h)$. If $r(h)>0$, then there exists $(x,y) \in A^2 \setminus R^{{\mathfrak A}}$ such that $(h(x),h(y)) \in R^{{\mathfrak B}}$. If $x=y$, the argument is similar to the case $s(h)>0$. Finally, if $x \neq y$, then ${\mathfrak A}\models (S(x) \wedge R(y,y)) \vee (R(x,x) \wedge S(y))$, because ${\mathfrak A}$ satisfies the sentence in [\[eq:double\]](#eq:double){reference-type="eqref" reference="eq:double"}. Since ${\mathfrak A}$ satisfies the sentence in [\[eq:total\]](#eq:total){reference-type="eqref" reference="eq:total"}, ${\mathfrak A}\models R(y,x)$. Since $h$ is a homomorphism, we have $${\mathfrak B}\models R(h(x), h(y)) \wedge R(h(y), h(x)) \wedge ((S(h(x)) \wedge R(h(y),h(y))) \vee (R(h(x),h(x)) \wedge S(h(y)))),$$ which contradicts ${\mathfrak B}\not \models \mu$. **Claim 2.** Every finite $\tau$-structure ${\mathfrak A}$ that satisfies $\neg \mu$ and the sentences in [\[eq:total\]](#eq:total){reference-type="eqref" reference="eq:total"} and [\[eq:double\]](#eq:double){reference-type="eqref" reference="eq:double"} embeds into ${\mathfrak B}$. In particular, ${\mathfrak B}$ is finitely bounded. Let ${\mathfrak A}$ be such a structure. By Theorem [Theorem 71](#thm:css){reference-type="ref" reference="thm:css"}, there is an embedding $e$ of ${\mathfrak A}$ into ${\mathfrak B}_\mu$.
Since ${\mathfrak B}_\mu$ is homogeneous and embeds every finite $\tau$-structure that satisfies $\neg \mu$, there exists a finite substructure ${\mathfrak A}'$ of ${\mathfrak B}_\mu$ satisfying the sentences in [\[eq:total\]](#eq:total){reference-type="eqref" reference="eq:total"} and [\[eq:double\]](#eq:double){reference-type="eqref" reference="eq:double"} such that $e({\mathfrak A})$ is a substructure of ${\mathfrak A}'$ and for all distinct $a, b \in A$ there exists $s \in S^{{\mathfrak A}'}$ such that ${\mathfrak B}_\mu \models R(e(a),s) \wedge R(s,e(b))$. By Claim 1, there is a strong homomorphism $h$ from ${\mathfrak A}'$ to ${\mathfrak B}$. We claim that $h \circ e$ is injective and therefore an embedding of ${\mathfrak A}$ into ${\mathfrak B}$. Suppose there exist distinct $a,b \in A$ such that $h(e(a))=h(e(b))$. Since $e({\mathfrak A})$ satisfies the sentence in [\[eq:total\]](#eq:total){reference-type="eqref" reference="eq:total"} and $h$ is a strong homomorphism, we obtain that ${\mathfrak B}_\mu \models R(e(a),e(a)) \wedge R(e(b), e(b))$. Let $s \in S^{{\mathfrak A}'}$ be such that ${\mathfrak B}_\mu \models R(e(a),s) \wedge R(s,e(b))$. Hence, $${\mathfrak B}\models S(h(s)) \wedge R(h(e(a)), h(s)) \wedge R(h(s), h(e(a))) \wedge R(h(e(a)), h(e(a))),$$ a contradiction to ${\mathfrak B}\not \models \mu$. It follows that $h \circ e$ is an embedding of ${\mathfrak A}$ into ${\mathfrak B}$. We define two $\{R,S\}$-structures ${\mathfrak M},{\mathfrak N}$ with domain $B^2$ as follows. 
For all $x_1,x_2,y_1,y_2,x,y \in B$ define $$\begin{aligned} {\mathfrak M},{\mathfrak N}& \models R \big ((x_1,y_1),(x_2,y_2) \big) \label{eq:pres-R} & \text{ if } {\mathfrak B}& \models R(x_1,x_2) \wedge R(y_1,y_2), \\ {\mathfrak M},{\mathfrak N}& \models S \big ((x,y) \big) & \text{ if } {\mathfrak B}& \models S(x) \wedge S(y) \label{eq:pres-S} \\ {\mathfrak M}& \models S \big ((x,y) \big ) & \text{ if } {\mathfrak B}& \models S(x) \vee S(y) \label{eq:M-max} \\ {\mathfrak N}& \models R \big ((x,y),(x,y) \big ) & \text{ if } {\mathfrak B}& \models R(x,x) \vee R(y,y) . \label{eq:N-max}\end{aligned}$$ Add pairs of distinct elements to $R^{{\mathfrak M}}$ and $R^{{\mathfrak N}}$ such that both ${\mathfrak M}$ and ${\mathfrak N}$ satisfy the sentence in [\[eq:double\]](#eq:double){reference-type="eqref" reference="eq:double"} (note that no addition of elements to $S^{{\mathfrak M}}$ and $S^{{\mathfrak N}}$ is needed). Finally, add $((x_1,y_1),(x_2,y_2))$ to $R^{{\mathfrak M}}$ and $((x_2,y_2),(x_1,y_1))$ to $R^{{\mathfrak N}}$ [\[eq:M-key\]]{#eq:M-key label="eq:M-key"} if at least one of the following cases holds: 1. ${\mathfrak B}\models S(x_1) \wedge R(x_1,x_2) \wedge R(x_2,x_2) \wedge R(y_2, y_2) \wedge R(y_2,y_1) \wedge S(y_1)$, 2. ${\mathfrak B}\models R(x_1,x_1) \wedge R(x_1,x_2) \wedge S(x_2) \wedge y_1 = y_2 \wedge R(y_1,y_2)$, 3. ${\mathfrak B}\models S(y_1) \wedge R(y_1,y_2) \wedge R(y_2,y_2) \wedge R(x_2,x_2) \wedge R(x_2,x_1) \wedge S(x_1)$, 4. ${\mathfrak B}\models R(y_1,y_1) \wedge R(y_1,y_2) \wedge S(y_2) \wedge x_1 = x_2 \wedge R(x_1,x_2)$. Conditions (A) and (B) are illustrated in Figure [\[fig:mu1-ABCD\]](#fig:mu1-ABCD){reference-type="ref" reference="fig:mu1-ABCD"}; conditions (C) and (D) are obtained from (A) and (B) by replacing $x$ by $y$. Note that for $(x_1, y_1)=(x_2,y_2)$, none of the conditions (A)-(D) is ever satisfied. No other atomic formulas hold on ${\mathfrak M}$ and ${\mathfrak N}$. 
Note that both ${\mathfrak M}$ and ${\mathfrak N}$ satisfy the property stated for ${\mathfrak B}$ in [\[eq:parti\]](#eq:parti){reference-type="eqref" reference="eq:parti"}. **Claim 3.** ${\mathfrak M}$ and ${\mathfrak N}$ satisfy the sentence in [\[eq:total\]](#eq:total){reference-type="eqref" reference="eq:total"}. We prove the statement for ${\mathfrak M}$; the proof for ${\mathfrak N}$ is similar. Let $(x_1,y_1), (x_2, y_2) \in B^2$ be such that $(x_1, y_1) \neq (x_2, y_2)$ and ${\mathfrak M}\models \neg R((x_2, y_2), (x_1, y_1))$. Since ${\mathfrak M}$ satisfies the sentence in [\[eq:double\]](#eq:double){reference-type="eqref" reference="eq:double"}, we must have either ${\mathfrak M}\models S(x_1, y_1) \wedge R((x_2, y_2), (x_2, y_2))$ or ${\mathfrak M}\models S(x_2, y_2) \wedge R((x_1, y_1), (x_1, y_1))$. Suppose the former is true; the other case is treated analogously. Then ${\mathfrak B}\models R(x_2, x_2) \wedge R(y_2, y_2)$ and ${\mathfrak B}\models S(x_1) \vee S(y_1)$. If ${\mathfrak B}\models S(x_1)$, then $x_1 \neq x_2$ and by [\[eq:total\]](#eq:total){reference-type="eqref" reference="eq:total"} we have ${\mathfrak B}\models R(x_1,x_2) \vee R(x_2, x_1)$. By [\[eq:total\]](#eq:total){reference-type="eqref" reference="eq:total"} and [\[eq:double\]](#eq:double){reference-type="eqref" reference="eq:double"} for $(y_1,y_2)$, we obtain that ${\mathfrak M}\models R((x_1, y_1), (x_2, y_2))$ by [\[eq:pres-R\]](#eq:pres-R){reference-type="eqref" reference="eq:pres-R"} or one of the conditions (A)-(D). The argument if ${\mathfrak B}\models S(y_1)$ is similar with $x$ and $y$ switched. **Claim 4.** ${\mathfrak M}$ and ${\mathfrak N}$ satisfy $\neg \mu$. Let $x_1,x_2,y_1,y_2 \in B$.
Suppose for contradiction that $${\mathfrak M}\models S(x_1,y_1) \wedge R((x_1,y_1),(x_2,y_2)) \wedge R((x_2,y_2),(x_1,y_1)) \wedge R((x_2,y_2),(x_2,y_2)).$$ By the definition of ${\mathfrak M}$, we have ${\mathfrak B}\models R(x_2,x_2) \wedge R(y_2,y_2)$ and ${\mathfrak B}\models S(x_1) \vee S(y_1)$. Assume that ${\mathfrak B}\models S(x_1)$; the case ${\mathfrak B}\models S(y_1)$ is analogous. By the assumption, ${\mathfrak M}\models R((x_1, y_1),(x_2,y_2))$. Then, by the definition of ${\mathfrak M}$, one of the conditions [\[eq:pres-R\]](#eq:pres-R){reference-type="eqref" reference="eq:pres-R"}, (A)-(D) holds, or $${\mathfrak M}\models \neg \big (S(x_1,y_1) \wedge R((x_2,y_2),(x_2,y_2)) \big)$$ (recall that $((x_1,y_1),(x_2,y_2))$ might have been added to $R^{{\mathfrak M}}$ so that ${\mathfrak M}$ satisfies the sentence in [\[eq:double\]](#eq:double){reference-type="eqref" reference="eq:double"}). The last option is false by the assumption and by [\[eq:parti\]](#eq:parti){reference-type="eqref" reference="eq:parti"}, ${\mathfrak B}\models \neg S(x_2) \wedge \neg S(y_2)$, and hence neither (B) nor (D) holds. Therefore, one of the conditions [\[eq:pres-R\]](#eq:pres-R){reference-type="eqref" reference="eq:pres-R"}, (A), or (C) holds for $((x_1,y_1), (x_2,y_2))$. Similarly, we obtain that one of the conditions [\[eq:pres-R\]](#eq:pres-R){reference-type="eqref" reference="eq:pres-R"} or (B) holds for $((x_2,y_2),(x_1,y_1))$, since ${\mathfrak M}\models R((x_2, y_2),(x_1,y_1))$ (to exclude (D) we use the assumption that ${\mathfrak B}\models S(x_1)$ and hence $x_1 \neq x_2$). This yields six cases and in each of them we must have that ${\mathfrak B}\models R(x_1,x_2) \wedge R(x_2,x_1)$ or ${\mathfrak B}\models S(y_1) \wedge R(y_1, y_2) \wedge R(y_2,y_1)$. Since ${\mathfrak B}\models S(x_1) \wedge R(x_2,x_2) \wedge R(y_2,y_2)$, this contradicts ${\mathfrak B}\models \neg \mu$. 
Since $(x_1,y_1),(x_2,y_2) \in B^2$ were chosen arbitrarily, this shows that ${\mathfrak M}\models \neg \mu$. The argument for ${\mathfrak N}$ is similar. **Claim 5.** There is an embedding $f$ of ${\mathfrak M}$ into ${\mathfrak B}$ and an embedding $g$ of ${\mathfrak N}$ into ${\mathfrak B}$. We show the claim for ${\mathfrak M}$; the proof for ${\mathfrak N}$ is analogous. By [@Book Lemma 4.1.7], it is enough to show that every finite substructure of ${\mathfrak M}$ embeds into ${\mathfrak B}$. By the definition of ${\mathfrak M}$ and Claims 3 and 4, every finite substructure of ${\mathfrak M}$ satisfies [\[eq:total\]](#eq:total){reference-type="eqref" reference="eq:total"}, [\[eq:double\]](#eq:double){reference-type="eqref" reference="eq:double"} and $\neg \mu$ and hence, by Claim 2, it embeds into ${\mathfrak B}$. Let $\omega$ be the fractional operation over $B$ defined by $\omega(f) = \frac{1}{2}$ and $\omega(g) = \frac{1}{2}$. **Claim 6.** $\omega$ is pseudo cyclic and canonical with respect to $\mathop{\mathrm{Aut}}({\mathfrak B})=\mathop{\mathrm{Aut}}(\Gamma)$. Note that since ${\mathfrak B}$ is homogeneous in a finite relational signature, two $k$-tuples of elements of ${\mathfrak B}$ lie in the same orbit if and only if they satisfy the same atomic formulas. Therefore, the canonicity of $f$ and $g$ with respect to $\mathop{\mathrm{Aut}}({\mathfrak B})$ follows from the definition of ${\mathfrak M}$ and ${\mathfrak N}$: for $(a,b) \in B^2$, whether ${\mathfrak B}\models S(f(a,b))$ only depends on whether ${\mathfrak M}\models S(a,b)$ by Claim 5, which depends only on the atomic formulas that hold on $a$ and on $b$ in ${\mathfrak B}$. An analogous statement is true for atomic formulas of the form $R(x,y)$ and $x=y$. Therefore, $f$ is canonical. The argument for the canonicity of $g$ is analogous.
To see that $f$ and $g$ are pseudo cyclic, we show that $f^*$ and $g^*$ defined on $2$-orbits (using the terminology of Remark [Remark 45](#rem:can-act){reference-type="ref" reference="rem:can-act"}) are cyclic. By the definition of $f^*$, we need to show that for any $a_1, a_2, b_1, b_2 \in B$, the two pairs $(f(a_1, b_1), f(a_2,b_2))$ and $(f(b_1, a_1), f(b_2,a_2))$ satisfy the same atomic formulas. For the formulas of the form $S(x)$ and $R(x,y)$, this can be seen from Claim 5 and the definition of ${\mathfrak M}$ and ${\mathfrak N}$, since each of the conditions [\[eq:pres-R\]](#eq:pres-R){reference-type="eqref" reference="eq:pres-R"},[\[eq:pres-S\]](#eq:pres-S){reference-type="eqref" reference="eq:pres-S"},[\[eq:M-max\]](#eq:M-max){reference-type="eqref" reference="eq:M-max"},[\[eq:N-max\]](#eq:N-max){reference-type="eqref" reference="eq:N-max"},[\[eq:double\]](#eq:double){reference-type="eqref" reference="eq:double"} and the union of (A), (B), (C), (D) is symmetric with respect to exchanging $x$ and $y$. For the atomic formulas of the form $x=y$, this follows from the injectivity of $f$. This shows that $f^*$ is cyclic; the argument for $g^*$ is the same. Hence, the pseudo-cyclicity of $f$ and $g$ is a consequence of Lemma [Lemma 50](#lem:pc){reference-type="ref" reference="lem:pc"} for $m = 2$. **Claim 7.** $\omega$ improves $S$. By the definition of ${\mathfrak M}$ and ${\mathfrak N}$ and Claim 5, we have for all $x,y \in B$ $$\omega(f)S^{\Gamma}(f(x,y)) + \omega(g)S^{\Gamma}(g(x,y)) = \frac{1}{2} (S^{\Gamma}(x) + S^{\Gamma}(y)).$$ **Claim 8.** $\omega$ improves $R$. Let $x_1,y_1,x_2,y_2 \in B$. We have to verify that $$\begin{aligned} \omega(f) R^{\Gamma}(f(x_1,y_1),f(x_2,y_2)) + \omega(g) R^{\Gamma}(g(x_1,y_1),g(x_2,y_2)) \leq \frac{1}{2} (R^{\Gamma}(x_1,x_2) + R^{\Gamma}(y_1,y_2)). \label{eq:pres-expl}\end{aligned}$$ We distinguish four cases. - ${\mathfrak M},{\mathfrak N}\models R((x_1,y_1),(x_2,y_2))$. 
Then Inequality [\[eq:pres-expl\]](#eq:pres-expl){reference-type="eqref" reference="eq:pres-expl"} holds since the left-hand side is zero, and the right-hand side is non-negative (each weighted relation in $\Gamma$ is non-negative). - ${\mathfrak M},{\mathfrak N}\models \neg R((x_1,y_1),(x_2,y_2))$. Since ${\mathfrak M}$ and ${\mathfrak N}$ satisfy the sentences in [\[eq:total\]](#eq:total){reference-type="eqref" reference="eq:total"} and [\[eq:double\]](#eq:double){reference-type="eqref" reference="eq:double"} and ${\mathfrak B}$ satisfies [\[eq:total\]](#eq:total){reference-type="eqref" reference="eq:total"} we must have ${\mathfrak B}\models \neg R(x_1,x_2) \wedge \neg R(y_1,y_2)$, and both sides of the inequality evaluate to $1$. - ${\mathfrak M}\models \neg R((x_1,y_1),(x_2,y_2))$ and ${\mathfrak N}\models R((x_1,y_1),(x_2,y_2))$. By Claim 5, the left-hand side evaluates to $\frac{1}{2}$. By [\[eq:pres-R\]](#eq:pres-R){reference-type="eqref" reference="eq:pres-R"}, we have ${\mathfrak B}\models \neg R(x_1, x_2)$ or ${\mathfrak B}\models \neg R(y_1, y_2)$. Therefore, the right-hand side of [\[eq:pres-expl\]](#eq:pres-expl){reference-type="eqref" reference="eq:pres-expl"} is at least $\frac{1}{2}$ and the inequality holds. - ${\mathfrak M}\models R((x_1,y_1),(x_2,y_2))$ and ${\mathfrak N}\models \neg R((x_1,y_1),(x_2,y_2))$. Similar to the previous case. This exhausts all cases and concludes the proof of Claim 8. It follows that $\omega$ is a binary fractional polymorphism of $\Gamma$ which is canonical and pseudo cyclic with respect to $\mathop{\mathrm{Aut}}(\Gamma)$. Polynomial-time tractability of $\mathop{\mathrm{VCSP}}(\Gamma)$ follows by Theorem [Theorem 58](#thm:tract){reference-type="ref" reference="thm:tract"} and [Proposition 73](#prop:connection){reference-type="ref" reference="prop:connection"}. The final statement follows from Remark [Remark 74](#rem:exo){reference-type="ref" reference="rem:exo"}. 
◻ # Conclusion and Future Work {#sect:concl} We formulated a general hardness condition for VCSPs of valued structures with an oligomorphic automorphism group and a new polynomial-time tractability result. We use the latter to resolve a resilience problem whose complexity was left open in the literature and conjecture that our conditions exactly capture the hard and easy resilience problems for conjunctive queries (with multiplicities), respectively. In fact, a full classification of resilience problems for conjunctive queries based on our approach seems feasible, but requires further research, as discussed in the following. We have proved that if $\Gamma$ is a valued structure with an oligomorphic automorphism group and $R$ is a weighted relation in the smallest weighted relational clone that contains the weighted relations of $\Gamma$, then $R$ is preserved by all fractional polymorphisms of $\Gamma$ (Lemma [Lemma 40](#lem:easy){reference-type="ref" reference="lem:easy"}). We do not know whether the converse is true. Note that it is known to hold for the special cases of finite-domain valued structures [@CohenCooperJeavonsVCSP; @FullaZivny] and for classical relational structures with $0$-$\infty$ valued relations (CSP setting) having an oligomorphic automorphism group [@BodirskyNesetril]. **Question 80**. *Let $\Gamma$ be a valued structure with an oligomorphic automorphism group. Is it true that $R \in \langle \Gamma \rangle$ if and only if $R \in \mathop{\mathrm{Imp}}(\mathop{\mathrm{fPol}}(\Gamma))$?* Note that a positive answer to this question would imply that the computational complexity of VCSPs for valued structures $\Gamma$ with an oligomorphic automorphism group, and in particular the complexity of resilience problems, is fully determined by the fractional polymorphisms of $\Gamma$. Fractional polymorphisms are probability distributions on operations. 
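As a toy illustration (not an example from this paper), a finitely supported fractional operation can be stored as a map from operations to weights summing to $1$. The classical fractional polymorphism assigning weight $\frac{1}{2}$ each to componentwise $\min$ and $\max$ improves every submodular weighted relation; the sketch below verifies the improvement inequality for the submodular relation $R(x,y) = |x-y|$ on a small grid.

```python
from itertools import product

# A finitely supported fractional operation: a probability distribution
# on binary operations f : C^2 -> C (here C = {0, 1, 2, 3}).
omega = {min: 0.5, max: 0.5}
assert abs(sum(omega.values()) - 1.0) < 1e-12

def R(x, y):
    # Weighted relation R(x, y) = |x - y|; submodular, hence improved
    # by the (min, max) fractional operation -- a classical VCSP example.
    return abs(x - y)

C = range(4)
for (x1, y1), (x2, y2) in product(product(C, C), repeat=2):
    # Improvement inequality: E_omega[R(f(x1, x2), f(y1, y2))]
    # is at most the average of R(x1, y1) and R(x2, y2).
    lhs = sum(w * R(f(x1, x2), f(y1, y2)) for f, w in omega.items())
    assert lhs <= (R(x1, y1) + R(x2, y2)) / 2 + 1e-12
print("omega = (min + max)/2 improves R on the sample grid")
```

The brute-force loop plays the role of checking the improvement inequality on every pair of tuples; for infinite domains this is exactly what the expectation formulation replaces.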
In all the examples arising from resilience problems that we have considered so far, it was sufficient to work with fractional polymorphisms $\omega$ that are *finitary*, i.e., such that there are finitely many operations $f_1,\dots,f_k \in {\mathscr O}_C$ with $\sum_{i \in \{1,\dots,k\}} \omega(f_i) = 1$. This motivates the following question. **Question 81**. *Does our notion of pp-constructability change if we restrict to finitary fractional homomorphisms $\omega$? Is there a valued structure $\Gamma$ with an oligomorphic automorphism group and a weighted relation $R$ such that $R$ is not improved by all fractional polymorphisms of $\Gamma$, but *is* improved by all finitary fractional polymorphisms $\omega$? In particular, are these statements true if we restrict to valued $\tau$-structures $\Gamma$ that arise from resilience problems as described in Proposition [Proposition 73](#prop:connection){reference-type="ref" reference="prop:connection"}?* In the following, we formulate a common generalisation of the complexity-theoretic implications of Conjecture [Conjecture 76](#conj:tract){reference-type="ref" reference="conj:tract"} and the infinite-domain tractability conjecture from [@BPP-projective-homomorphisms] that concerns a full complexity classification of VCSPs for valued structures from reducts of finitely bounded homogeneous structures. **Conjecture 82**. *Let $\Gamma$ be a valued structure with finite signature such that $\mathop{\mathrm{Aut}}(\Gamma) = \mathop{\mathrm{Aut}}({\mathfrak B})$ for some reduct ${\mathfrak B}$ of a countable finitely bounded homogeneous structure. 
If $(\{0,1\};\mathop{\mathrm{OIT}})$ has no pp-construction in $\Gamma$, then $\mathop{\mathrm{VCSP}}(\Gamma)$ is in P (otherwise, we already know that $\mathop{\mathrm{VCSP}}(\Gamma)$ is NP-complete by Theorem [Theorem 10](#thm:fb-NP){reference-type="ref" reference="thm:fb-NP"} and Corollary [Corollary 31](#cor:OIT){reference-type="ref" reference="cor:OIT"}).* One might hope to prove this conjecture under the assumption of the infinite-domain tractability conjecture. Recall that also the finite-domain VCSP classification was first proven conditionally on the finite-domain tractability conjecture [@KolmogorovKR17; @KozikOchremiak15], which was only confirmed later [@ZhukFVConjecture; @BulatovFVConjecture]. We also believe that the 'meta-problem' of deciding whether for a given conjunctive query the resilience problem with multiplicities is in P is decidable. This would follow from a positive answer to Conjecture [Conjecture 76](#conj:tract){reference-type="ref" reference="conj:tract"} because $\Gamma^*_m$ can be computed and Item 4 of Proposition [Proposition 57](#prop:black-box){reference-type="ref" reference="prop:black-box"} for the finite-domain valued structure $\Gamma^*_m$ can be decided algorithmically using linear programming [@Kolmogorov-Meta]. # The Lebesgue Integral {#sect:lebesgue} It will be convenient to use an additional value $-\infty$ that has the usual properties: - $-\infty < a$ for every $a \in \mathbb{R} \cup \{\infty\}$, - $a +(-\infty) = (-\infty) + a= - \infty$ for every $a \in \mathbb{R}$, - $a \cdot \infty = \infty \cdot a = -\infty$ for $a<0$, - $0\cdot (-\infty) = (-\infty)\cdot 0=0$. - $a\cdot (-\infty)=(-\infty) \cdot a = -\infty$ for $a > 0$ and $a \cdot (-\infty)= (-\infty)\cdot a = \infty$ for $a < 0$. The sum of $\infty$ and $-\infty$ is undefined. Let $C$ and $D$ be sets. We define the Lebesgue integration over the space $C^D$ of all functions from $D$ to $C$. 
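The arithmetic conventions for $-\infty$ listed above can be encoded directly. The following sketch (our illustration, not part of the paper's formalism) uses Python's float infinities, overrides the IEEE convention $0 \cdot (\pm\infty) = \mathrm{nan}$, and rejects the undefined sum $\infty + (-\infty)$.

```python
import math

INF, NINF = math.inf, -math.inf

def ext_add(a, b):
    # The sum of inf and -inf is left undefined.
    if {a, b} == {INF, NINF}:
        raise ValueError("inf + (-inf) is undefined")
    return a + b

def ext_mul(a, b):
    # 0 * (+-inf) = (+-inf) * 0 = 0, overriding IEEE (which gives nan).
    if 0 in (a, b):
        return 0
    return a * b  # signs of infinities follow the usual sign rule

assert ext_mul(0, NINF) == 0
assert ext_mul(-2, INF) == NINF   # a * inf = -inf for a < 0
assert ext_mul(-2, NINF) == INF   # a * (-inf) = inf for a < 0
assert ext_add(NINF, 5) == NINF   # a + (-inf) = -inf for every real a
```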
We usually (but not always) work with the special case $D = C^{\ell}$, i.e. the space is $\mathscr O_C^{(\ell)}$ for some set $C$ and $\ell\in \mathbb N$. To define the Lebesgue integral, we need the definition of a *simple function*: this is a function $Y \colon C^D \to {\mathbb R}$ given by $$\sum_{k=1}^n a_k 1_{S_k}$$ where $n \in \mathbb N$, $S_1,\dots,S_n$ are disjoint elements of $B(C^D)$, $a_k \in {\mathbb R}$, and $1_S \colon C^D \to \{0,1\}$ denotes the indicator function for $S \subseteq C^D$. If $Y$ is such a simple function, then the Lebesgue integral is defined as follows: $$\int_{C^D} Y d \omega := \sum_{k =1}^n a_k \omega(S_k).$$ If $X$ and $Y$ are two random variables, then we write $X \leq Y$ if $X(f) \leq Y(f)$ for every $f \in C^D$. We say that $X$ is *non-negative* if $0 \leq X$. If $X$ is a non-negative measurable function, then the Lebesgue integral is defined as $$\int_{C^D} X d \omega := \sup \left\{ \int_{C^D} Y d \omega \mid 0 \leq Y \leq X, Y \text{ simple} \right\}.$$ For an arbitrary measurable function $X$, we write $X = X^+ - X^-$, where $$X^+(x) := \begin{cases} X(x) & \text{if } X(x) > 0 \\ 0 & \text{otherwise} \end{cases}$$ and $$X^-(x) := \begin{cases} -X(x) & \text{if } X(x) < 0 \\ 0 & \text{otherwise.} \end{cases}$$ Then both $X^+$ and $X^-$ are measurable, and both $\int_{C^D} X^- d \omega$ and $\int_{C^D} X^+ d \omega$ take values in ${\mathbb R}_{\geq 0} \cup \{\infty\}$. If both take value $\infty$, then the integral is undefined (see Remark [Remark 83](#rem:undef){reference-type="ref" reference="rem:undef"}). Otherwise, define $$\int_{C^D} X d \omega := \int_{C^D} X^+ d \omega - \int_{C^D} X^- d \omega.$$ In particular, note that for $X \geq 0$ the integral is always defined. Let $\omega$ be a fractional map from $D$ to $C$, let $R \in {\mathscr R}^{(k)}_{C}$ be a weighted relation, and let $s \in D^k$. 
Then $X \colon C^D \rightarrow {\mathbb R} \cup \{\infty\}$ given by $$f \mapsto R(f(s))$$ is a random variable: if $(a,b)$ is a basic open subset of ${\mathbb R} \cup \{\infty\}$, then $$\begin{aligned} X^{-1}((a,b)) & = \{f \in C^D \mid R(f(s)) \in (a,b) \} \\ & = \bigcup_{t \in C^k,R(t) \in (a,b)}{\mathscr S}_{s,t}\end{aligned}$$ is a union of basic open sets in $C^D$, hence open. The argument for the other basic open sets in ${\mathbb R} \cup \{\infty\}$ is similar. If the set $C$ is countable and $X$ is as above, we may express $E_{\omega}[X]$ as a sum, which is useful in proofs in Sections [5](#sect:gen-hard){reference-type="ref" reference="sect:gen-hard"} and [6](#sect:fpol){reference-type="ref" reference="sect:fpol"}. If $E_{\omega}[X]$ exists, then $$\begin{aligned} E_{\omega}[X] & = \int_{C^D} X^+ d \omega - \int_{C^D} X^- d \omega \nonumber \\ & = \sup \left \{ \int_{C^D} Y d \omega \mid 0 \leq Y \leq X^+, Y \text{ simple} \right \} - \sup \left \{ \int_{C^D} Y d \omega \mid 0 \leq Y \leq X^-, Y \text{ simple} \right \} \nonumber\\ & = \sum_{t \in C^k,R(t) \geq 0} R(t) \omega( {\mathscr S}_{s,t}) + \sum_{t \in C^k,R(t) < 0} R(t) \omega( {\mathscr S}_{s,t}) \nonumber \\ & = \sum_{t \in C^k} R(t) \omega( {\mathscr S}_{s,t}). \label{eq:exp-sum}\end{aligned}$$ **Remark 83**. *The Lebesgue integral $$\int_{{\mathscr O}^{(\ell)}_C} X d \omega = \int_{{\mathscr O}^{(\ell)}_C} X^+ d \omega - \int_{{\mathscr O}^{(\ell)}_C} X^- d \omega$$ need not exist: e.g., consider $C = {\mathbb N}$, $k = \ell = 1$, and $R(x) = -2^x$ if $x \in {\mathbb N}$ is even and $R(x) = 2^x$ otherwise. Let $s \in C$ and define $X \colon {\mathscr O}^{(1)}_C \to {\mathbb R} \cup \{\infty\}$ by $$f \mapsto R(f(s)).$$* *Let $\omega$ be a unary fractional operation such that for every $t \in C$ we have $\omega({\mathscr S}_{s,t}) = \frac{1}{2^{t+1}}$. 
Then $$\begin{aligned} \int_{{\mathscr O}^{(\ell)}_C} X^+ d \omega & = \sup \{ \int_{{\mathscr O}^{(\ell)}_C} Y d \omega \mid 0 \leq Y \leq X^+, Y \text{ simple} \} \\ & = \sum_{t \in C,R(t) \geq 0} R(t) \omega( {\mathscr S}_{s,t}) \\ & = \sum_{t \in C, R(t) \geq 0} \frac{1}{2} = \infty\end{aligned}$$ and, similarly, $\int_{{\mathscr O}^{(\ell)}_C} X^- d \omega = \infty$.* [^1]: The first two authors have been funded by the European Research Council (Project POCOCOP, ERC Synergy Grant 101071674) and by the DFG (Project FinHom, Grant 467967530). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. The third author was supported by the DFG project LU 1417/3-1 QTEC. [^2]: To be precise, a finite relational structure is not exactly the same as a database because the latter may not contain elements that do not occur in any relation. This difference, however, is inessential for the problems studied in this paper. [^3]: The model-complete core of ${\mathfrak B}_\mu$ would be a natural choice for the canonical dual of $\mu$ to work with instead of ${\mathfrak B}_\mu$. However, proving that the model-complete core has a finitely bounded homogeneous expansion (so that, for example, Theorem [Theorem 10](#thm:fb-NP){reference-type="ref" reference="thm:fb-NP"} applies) requires introducing further model-theoretical notions [@MottetPinskerCores] which we want to avoid in this article.
--- abstract: | We introduce the abstract notion of a *smoothable fine compactified Jacobian* of a nodal curve, and of a family of nodal curves whose general element is smooth. Then we introduce the notion of a combinatorial stability condition for line bundles and their degenerations. We prove that smoothable fine compactified Jacobians are in bijection with these stability conditions. We then turn our attention to *fine compactified universal Jacobians*, that is, fine compactified Jacobians for the moduli space $\overline{\mathcal{M}}_g$ of stable curves (without marked points). We prove that every fine compactified universal Jacobian is isomorphic to the one first constructed by Caporaso, Pandharipande and Simpson in the nineties. In particular, without marked points, there exists no fine compactified universal Jacobian unless $\gcd(d+1-g, 2g-2)=1$. address: - N. Pagani, Department of Mathematical Sciences, University of Liverpool, Liverpool, L69 7ZL, United Kingdom - O. Tommasi, Dipartimento di Matematica "Tullio Levi-Civita", University of Padova, via Trieste 63, IT-35127 Padova, Italy author: - Nicola Pagani - Orsola Tommasi bibliography: - biblio-curves.bib title: Stability conditions for line bundles on nodal curves --- # Introduction A classical construction from the XIX century associates with every smooth projective curve $X$ its Jacobian (the moduli space of degree $0$ line bundles on $X$), a principally polarized abelian variety of dimension $g$. The construction carries on to smooth projective families of curves. One challenging problem arises when $X$ ceases to be smooth. In this case the Jacobian can still be constructed, but in general it fails to be proper. A general problem from the mid XX century was to construct well-behaved compactifications of the Jacobian, whose boundary corresponds to degenerate line bundles of some kind. 
Many different constructions have been pursued according to the particular generality required and the initial inputs (see for example [@igusa], [@oda79], [@AK], [@caporaso], [@simpson], [@panda], [@esteves]); some of these work in the relative case of families as well. For simplicity here we restrict ourselves to the case where $X$ is a nodal curve. Also, we will fix the horizon of all possible degenerations of line bundles to *torsion-free coherent sheaves of rank $1$*. Since we will aim to construct *proper* moduli stacks of stable sheaves, without losing in generality we will additionally assume that all sheaves are *simple*. In this generality, the moduli space of sheaves was constructed as an algebraic space by Altman--Kleiman [@AK]. Esteves [@esteves] later proved it satisfies the existence part of the valuative criterion of properness. (This moduli space is not of finite type, hence it is not proper, whenever $X$ is reducible). Most modular constructions of fine compactified Jacobians use some set of instructions (for example coming from GIT) to single out an open subset of the moduli space of simple sheaves choosing certain *stable* elements, to end up with a proper moduli stack. The construction is often followed by the observation that stability of a sheaf only depends on its multidegree and on its locally free locus in $X$, and then that these discrete data obey a collection of axioms (for example, the number of stable multidegrees of line bundles on a nodal curve $X$ equals the complexity of the dual graph of $X$). 
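The complexity of a graph (its number of spanning trees), mentioned in the last parenthetical, can be computed by Kirchhoff's matrix-tree theorem. The following sketch is our own illustration, with a hypothetical edge-list encoding of a dual graph; it is not part of the constructions cited above.

```python
def det(M):
    # Laplace expansion; adequate for the small dual graphs used here.
    if not M:
        return 1
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def spanning_tree_count(n_vertices, edges):
    # Kirchhoff's matrix-tree theorem: the complexity of a multigraph is
    # any cofactor of its Laplacian; loops do not affect spanning trees.
    L = [[0] * n_vertices for _ in range(n_vertices)]
    for u, v in edges:
        if u == v:
            continue
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    # Delete the last row and column, then take the determinant.
    return det([row[:-1] for row in L[:-1]])

# Dual graph of two components meeting at t nodes: two vertices, t parallel edges.
t = 5
assert spanning_tree_count(2, [(0, 1)] * t) == t

# Three components pairwise meeting in one node: a triangle, 3 spanning trees.
assert spanning_tree_count(3, [(0, 1), (1, 2), (0, 2)]) == 3
```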
In this paper we introduce an abstract notion of a *fine compactified Jacobian* as an *open* subspace of the moduli space of rank $1$ torsion-free simple sheaves of some fixed degree (not necessarily zero) on $X$, which is furthermore *proper* (see Definition [Definition 2](#def:finecompjac){reference-type="ref" reference="def:finecompjac"}).[^1] It was observed in [@paganitommasi Section 3] that fine compactified Jacobians can be badly behaved in the sense that they can fail to fit into a family for an infinitesimal smoothing of the curve. (This phenomenon already occurs when $X$ has genus $1$). Thus, we add a smoothability axiom to the objects that we aim to study. Note that our definition of smoothable fine compactified Jacobian includes the modular fine compactified Jacobians constructed in the literature (e.g. those constructed by Esteves [@esteves] and by Oda--Seshadri [@oda79] and recently studied in [@mv], [@meravi]). We then prove our first classification result, stating that smoothable fine compactified Jacobians correspond to a combinatorial datum that we call a *stability condition* (for smoothable fine compactified Jacobians), which keeps track of the multidegree and of the regular locus of the elements of the moduli space: **Theorem 1**. *Let $X$ be a nodal curve. 
Taking the associated assignment (see Definition [Definition 28](#assocassign){reference-type="ref" reference="assocassign"}) induces a bijection $$\Set{\begin{array}{l} \textrm{Smoothable fine compactified } \\ \textrm{ \ \ \ \ \ \ \ Jacobians of } X \end{array} } \to \Set{\begin{array}{l} \textrm{Stability conditions for } X \textrm{ as}\\ \textrm{introduced in Definition~\ref{finejacstab}} \end{array} }$$ whose inverse is defined by taking the moduli space of sheaves that are stable with respect to a given stability condition (Definition [Definition 23](#defstable){reference-type="ref" reference="defstable"}).* This is Corollary [Corollary 38](#bijection){reference-type="ref" reference="bijection"} (in the case where $S$ is a point). The most difficult part is the proof of properness of the moduli space of stable sheaves (Lemma [\[properoverdelta\]](#properoverdelta){reference-type="ref" reference="properoverdelta"}). In fact, Corollary [Corollary 38](#bijection){reference-type="ref" reference="bijection"} is an extension of Theorem [Theorem 1](#mainthm){reference-type="ref" reference="mainthm"} to the case of smoothable fine compactified Jacobians of *families* of nodal curves whose generic element is smooth (Definition [Definition 8](#def:familyfinecompjac){reference-type="ref" reference="def:familyfinecompjac"}). The combinatorial notion of a stability condition for a *family* is introduced in Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"}, as the datum of a stability condition for each fiber, with an additional constraint of compatibility under the degenerations that occur in the family (which induce morphisms of the corresponding dual graphs). A natural question is whether our abstract definition of fine compactified Jacobians produces new examples. The most general procedure to construct smoothable fine compactified Jacobians that we are aware of is by means of numerical polarizations. 
This was introduced by Oda--Seshadri [@oda79] for the case of a single nodal curve, then further developed by Kass--Pagani [@kp2] for the case of the universal family over the moduli space of pointed stable curves (equivalent objects were constructed by Melo in [@melouniversal] following Esteves [@esteves]). These definitions and constructions are reviewed in Section [8](#Sec: OS){reference-type="ref" reference="Sec: OS"}. By Theorem [Theorem 1](#mainthm){reference-type="ref" reference="mainthm"}, it is a completely combinatorial (but hard) question whether every smoothable fine compactified Jacobian is given by a numerical polarization. (Note that "smoothable" here is essential, due to the aforementioned genus $1$ examples in [@paganitommasi Section 3]. Those examples are not smoothable, whereas all compactified Jacobians obtained from numerical polarizations are smoothable). The case when the genus of $X$ equals $1$ was settled in the affirmative in [@paganitommasi Proposition 3.15], and in Example [**Example** 45](#genus1){reference-type="ref" reference="genus1"} we discuss how to extend this to the case when the first Betti number of the dual graph of $X$ equals $1$. In Example [**Example** 46](#ibd){reference-type="ref" reference="ibd"} we settle in the affirmative the case of integral break divisors (the case when $X$ is stable was due to Christ--Payne--Shen in [@cps]). We resolve in the positive the similar question for the case of the universal curve over $\overline{\mathcal{M}}_g$. Without marked points, degree $d$ fine compactified universal Jacobians exist if and only if $d$ satisfies $\gcd(d-g+1, 2g-2)=1$, and for such $d$'s these Jacobians are all given by universal numerical polarizations. This is Corollary [Corollary 58](#maincoroll){reference-type="ref" reference="maincoroll"}. These compactified Jacobians were constructed in the nineties by Caporaso [@caporaso], Pandharipande [@panda] and Simpson [@simpson]. 
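The numerical criterion $\gcd(d+1-g, 2g-2)=1$ is easy to tabulate. A quick sketch (ours; the function name is made up) lists, for small genus, the residues of $d$ modulo $2g-2$ for which degree $d$ fine compactified universal Jacobians exist.

```python
from math import gcd

def fine_degrees(g):
    # Residues d mod (2g - 2) with gcd(d + 1 - g, 2g - 2) = 1: the degrees
    # in which a fine compactified universal Jacobian over Mbar_g exists.
    m = 2 * g - 2
    return [d for d in range(m) if gcd(abs(d + 1 - g), m) == 1]

for g in range(2, 6):
    print(g, fine_degrees(g))
# In genus 2 the condition singles out the even degrees (d = 0 mod 2),
# in genus 3 the odd ones (d = 1, 3 mod 4).
```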
A similar result does not hold in the presence of marked points: in [@paganitommasi] the authors produce examples of fine compactified universal Jacobians for $\overline{\mathcal{M}}_{1,n}$ for all $n \geq 6$ that are not obtained from a universal numerical polarization, hence that do not arise from the methods by Kass--Pagani [@kp2] or by Esteves and Melo [@esteves; @melouniversal] (more details in Remark [**Remark** 51](#n>0){reference-type="ref" reference="n>0"}). An explicit combinatorial characterization of the collection $\Sigma^d_{g,n}$ of degree $d$ fine compactified universal Jacobians for $\overline{\mathcal{M}}_{g,n}$ is available via Corollary [Corollary 38](#bijection){reference-type="ref" reference="bijection"} applied to the universal family over $\overline{\mathcal{M}}_{g,n}$ (see also Definition [Definition 16](#familyfinejacstab){reference-type="ref" reference="familyfinejacstab"} and Remark [**Remark** 17](#Rem: univ stab){reference-type="ref" reference="Rem: univ stab"}). It would be interesting to interpret each element of $\Sigma^d_{g,n}$ as a (top-dimensional) chamber in some stability space, as was done in [@kp2] for the case of compactified universal Jacobians arising from numerical polarizations. We explore these questions in Section [10](#sec: final){reference-type="ref" reference="sec: final"}. Compactified universal Jacobians have recently played a role in enumerative geometry in the theory of the ($k$-twisted) double ramification cycle, see [@bhpss]. As realized in [@HKP], whenever a fine compactified universal Jacobian contains the locus $Z$ of line bundles of multidegree zero, the double ramification cycle can be defined as the pullback of $[Z]$, via some Abel--Jacobi section. This perspective plays an important role in [@hmpps], where an extension to a *logarithmic double ramification cycle* is also defined. 
Because there are different fine compactified universal Jacobians containing $Z$, the double ramification cycle can be equivalently defined as the pullback of $[Z]$ from different spaces, potentially leading to different formulas, hence to relations in the cohomology of the moduli space of curves. Our classification leads to a complete description of *all* fine compactified universal Jacobians containing $Z$, whereas previously only those obtained via Kass--Pagani's method [@kp2] were considered. The problem of studying the stability space of complexes of sheaves on a projective variety $X$ has attracted a lot of attention in the last two decades, after Bridgeland's breakthrough [@bridgeland] (extended to the case of families in [@bayer]). Most of the literature has been devoted to the case when $X$ is nonsingular. It is natural to try to explicitly describe this stability space for $X$ singular, and one place to start is assuming that $X$ is a nodal curve. We expect that the combinatorics developed in Theorem [Theorem 1](#mainthm){reference-type="ref" reference="mainthm"} should be regarded as some kind of skeleton of that stability space. ## Acknowledgments Many of the initial ideas of this project are owed to Jesse Kass. It is with great pleasure that we acknowledge his intellectual contribution to this work. We are grateful to Jonathan Barmak for sharing with us [@barmak] a solution to the combinatorial problem that appears in the proof of Lemma [Lemma 52](#enoughontrees){reference-type="ref" reference="enoughontrees"}. NP is very grateful to Alex Abreu and Filippo Viviani for several important contributions that came after an early version of the preprint was shared with them. In particular, Filippo Viviani pointed out a gap in the proof of the earlier version of Proposition [Proposition 32](#cond2){reference-type="ref" reference="cond2"}, and Alex Abreu suggested the numerical polarization of Example [**Example** 46](#ibd){reference-type="ref" reference="ibd"}. 
# Notation Throughout we work with Noetherian schemes over a fixed algebraically closed ground field $k$. A **curve** over an extension $K$ of $k$ is a $\mathop{\mathrm{Spec}}(K)$-scheme $X$ that is proper over $\mathop{\mathrm{Spec}}(K)$, geometrically connected, and of pure dimension $1$. The curve $X$ is a **nodal curve** if it is geometrically reduced and when passing to an algebraic closure $\overline{K}$, its local ring at every singular point is isomorphic to $\overline{K}[[x,y]]/(xy)$. A coherent sheaf on a nodal curve $X$ has **rank $1$** if its localisation at each generic point of $X$ has length $1$. It is **torsion-free** if it has no embedded components. If $F$ is a rank $1$ torsion-free sheaf on a nodal curve $X$ we denote by ${\operatorname{N}}(F)$ the subset of $X$ where $F$ fails to be locally free. Note that ${\operatorname{N}}(F)$ is contained in the singular locus of $X$. If $F$ is a rank $1$ torsion-free sheaf on $X$ we say that $F$ is **simple** if its automorphism group is $\mathbb{G}_m$, or equivalently if $X\setminus {\operatorname{N}}(F)$ is connected. A **family of curves** over a $k$-scheme $S$ is a proper, flat morphism $\mathcal{X} \to S$ whose fibers are curves. A family of curves $\mathcal{X} \to S$ is a **family of nodal curves** if the fibers over all geometric points are nodal curves. If $X$ is a nodal curve over $K$, we denote by $\Gamma(X)$ its **dual graph**, i.e. the labelled graph where each vertex $v$ corresponds to an irreducible component $X_{\overline{K}}^v$ of the base change of $X$ to (the spectrum of) an algebraic (equivalently, a separable) closure $\overline{K}$, and each edge corresponds to a node of $X_{\overline{K}}$. Note that if $X^v_K$ is an irreducible component defined over $K$, then it is also defined over any extension $L$ of $K$, and the corresponding vertices of the dual graphs $\Gamma(X_K)$ and $\Gamma(X_L)$ are canonically identified. 
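A dual graph is just a labelled multigraph, so its basic invariants are immediate to compute. The sketch below is our own illustration, with a hypothetical encoding (vertices indexed by position in a list of geometric genera, edges recording the nodes); it uses the standard fact that the arithmetic genus of a connected nodal curve is the sum of the geometric genera of its components plus the first Betti number of the dual graph.

```python
def arithmetic_genus(genus_labels, edges):
    # p_a(X) = sum of the geometric genera of the components plus the
    # first Betti number b_1 = #edges - #vertices + 1 of the (connected)
    # dual graph; vertices are indexed by position in genus_labels.
    b1 = len(edges) - len(genus_labels) + 1
    return sum(genus_labels) + b1

# Two components of genus 1 and 2 meeting at 3 nodes:
assert arithmetic_genus([1, 2], [(0, 1)] * 3) == 5   # 1 + 2 + b_1, b_1 = 2
# An irreducible curve of geometric genus 4 with one node (a loop edge):
assert arithmetic_genus([4], [(0, 0)]) == 5
```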
The dual graph is labelled by the geometric genus $p_g(X^v_{\overline{K}})$. The definition of dual graphs extends to the case where $(X,p_1,\dots,p_n)$ is an $n$-pointed curve. In this case the dual graph $\Gamma(X)$ also has $n$ half-edges labelled from $1$ to $n$, corresponding to the marked points $p_1, \ldots, p_n$. We refer to [@acg2] and [@memoulvi] for a detailed definition and for the notion of graph morphisms. Recall from [@CCUW § 7.2] that if $\mathcal{X}/S$ is a family of nodal curves and $s,t$ are geometric points of $S$, then every étale specialization of $t$ to $s$ (written as $t \rightsquigarrow s$) induces a morphism of dual graphs $\Gamma(X_s)\rightarrow \Gamma(X_t)$. (For the definition of étale specialization in this context we refer to [@CCUW Appendix A]). For a graph $G$ and $H$ a subgraph of $G$, we denote by $G\setminus H$ and by $G/H$ the graph obtained from $G$ by removing the edges of $H$ and the graph obtained from $G$ by contracting the edges of $H$, respectively. We will denote by $\Delta$ the spectrum of a DVR with residue field $K$, and by $0$ (resp. by $\eta$) its closed (resp. its generic) point. A **smoothing** of a nodal curve $X/K$ over $\Delta$ is a flat family $\mathcal{X}/\Delta$ whose generic fiber $\mathcal{X}_{\eta}/\eta$ is smooth and with an isomorphism of $K$-schemes $\mathcal{X}_0 \cong X$. The smoothing is **regular** if so is its total space $\mathcal{X}$. A **family of rank $1$ torsion-free sheaves** over a family of curves $\mathcal{X} \to S$ is a coherent sheaf on $\mathcal{X}$, flat over $S$, whose fibers over the geometric points have rank $1$ and are torsion-free. 
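The deletion and contraction operations $G\setminus H$ and $G/H$ introduced above can be sketched for multigraphs in an edge-list encoding as follows (an illustration with our own conventions; a simple union-find identifies endpoints under contraction, and parallel copies of contracted edges survive as loops).

```python
def delete(n, edges, H):
    # G \ H: remove the edges of H (a sub-multiset of the edge list).
    out, left = [], list(H)
    for e in edges:
        if e in left:
            left.remove(e)
        else:
            out.append(e)
    return n, out

def contract(n, edges, H):
    # G / H: contract the edges of H, identifying their endpoints.
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for u, v in H:
        parent[find(u)] = find(v)
    roots = sorted({find(v) for v in range(n)})
    idx = {r: i for i, r in enumerate(roots)}
    left, new_edges = list(H), []
    for e in edges:
        if e in left:
            left.remove(e)       # contracted edges disappear
            continue
        u, v = e
        new_edges.append((idx[find(u)], idx[find(v)]))
    return len(roots), new_edges

# Contracting one edge of a triangle gives two vertices with two parallel edges.
n, E = contract(3, [(0, 1), (1, 2), (0, 2)], [(0, 1)])
assert n == 2 and sorted(E) == [(0, 1), (0, 1)]
```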
If $F$ is a rank $1$ torsion-free sheaf on a nodal curve $X$ with irreducible components $X_i$, we denote by $F_{\widetilde{X}_{i}}$ the maximal torsion-free quotient of the pullback of $F$ to the normalization $\widetilde{X}_i$ of $X_i$, and then define the **multidegree** of $F$ by $${\underline{\deg}}(F) := ({\deg}(F_{\widetilde{X}_{i}})) \in \mathbf{Z}^{\operatorname{Vert}(\Gamma(X))}.$$ We define the **(total) degree** of $F$ to be $\deg_X(F):=\chi(F)-1+p_a(X)$, where $p_a(X)= h^1(X, \mathcal{O}_X)$ is the arithmetic genus of $X$. The total degree and the multidegree of $F$ are related by the formula $\deg_X(F) = \sum \deg_{X_i} F + \delta(F)$, where $\delta(F)$ denotes the number of nodes of $X$ where $F$ fails to be locally free.[^2] If $X' \subseteq X$ is a subcurve (by which we will always mean a union of irreducible components), then $\deg_{X'}(F)$ is defined as $\deg (F_{X'})$, where $F_{X'}$ is the maximal torsion-free quotient of $F \otimes \mathcal{O}_{X'}$. The total degree on $X$ is related to the degree on a subcurve by the formula $$\label{deg:subcurve} \deg_X(F)= \deg_{X'}(F) + \deg_{\overline{X \setminus X'}}(F) + \# ( {\operatorname{N}}(F) \cap (X' \cap \overline{(X \setminus X')})),$$ where the overline denotes the (Zariski) closure. From now on we fix an integer $d$ once and for all. ## Spaces of multidegrees Here we define the space of multidegrees on a nodal curve as the collection of certain divisors on its dual graph and on its connected spanning subgraphs, suitably organised by degrees. Let $\Gamma$ be a graph and let $\Gamma_0 \subseteq \Gamma$ be a connected spanning subgraph of $\Gamma$. We will denote by $n_\Gamma(\Gamma_0)$ or simply by $n(\Gamma_0)$ the number of elements in $\operatorname{Edges}(\Gamma) \setminus \operatorname{Edges}(\Gamma_0)$. 
Define the space of multidegrees of total degree $d$ of $\Gamma$ at $\Gamma_0$ as the following collection of divisors on $\Gamma$: $$\label{sgamma} S^d_{\Gamma}(\Gamma_0) := \Set{\underline{\mathbf d} \in \mathbf{Z}^{\mathop{\mathrm{Vert}}(\Gamma)} : \sum_{v \in \mathop{\mathrm{Vert}}(\Gamma)} \underline{\mathbf d}(v) = d - n(\Gamma_0)} \subset \mathbf{Z}^{\mathop{\mathrm{Vert}}(\Gamma)}.$$ According to our convention for the multidegree, if $X$ is a nodal curve with dual graph $\Gamma$ and $F\in {\operatorname{Simp}}^d(X)$, then $\Gamma \setminus {\operatorname{N}}(F)$ is a connected spanning subgraph $\Gamma_0(F)$ of $\Gamma$, and we have $$\underline{\deg}(F) \in S^d_{\Gamma}(\Gamma_0(F)).$$ # Fine compactified Jacobians {#secfinejac} In this section we introduce the notion of a (smoothable) fine compactified Jacobian for a nodal curve (Definition [Definition 2](#def:finecompjac){reference-type="ref" reference="def:finecompjac"}), and for a flat family of nodal curves (Definition [Definition 8](#def:familyfinecompjac){reference-type="ref" reference="def:familyfinecompjac"}). Let $\mathcal{X}/S$ be a flat family of nodal curves over a $k$-scheme $S$. Then there is an algebraic space $\operatorname{Pic}^d(\mathcal{X}/S)$ parameterizing line bundles on $\mathcal{X}/S$ of relative degree $d$ (see [@blr Chapter 8.3]). By [@AK] and [@esteves] the space $\operatorname{Pic}^d(\mathcal{X}/S)$ embeds in an algebraic space ${\operatorname{Simp}}^{d}(\mathcal{X}/S)$ parameterizing flat families of degree $d$ rank $1$ torsion-free simple sheaves on $\mathcal{X}/S$. The latter is locally of finite type over $S$ and satisfies the existence part of the valuative criterion of properness. However, it can fail to be of finite type and separated. In the special case of $S= {\operatorname{Spec}}(K)$ and $X=\mathcal{X}/S$ we will simply write $\operatorname{Pic}^d(X)$ (resp. ${\operatorname{Simp}}^d(X)$) for $\operatorname{Pic}^d(\mathcal{X}/S)$ (resp. 
for ${\operatorname{Simp}}^d(\mathcal{X}/S)$). Let $X$ be a nodal curve over some field extension $K$ of $k$. The main point of this paper is to describe well-behaved subspaces of ${\operatorname{Simp}}^{d}(X)$, generalizing existing notions of compactified Jacobians in the literature. Specifically, we study the following subschemes. **Definition 2**. A **degree $d$ fine compactified Jacobian** is a geometrically connected open subscheme $\overline{J}\subseteq {\operatorname{Simp}}^{d}(X)$ that is proper over $\mathop{\mathrm{Spec}}(K)$. We say that the fine compactified Jacobian $\overline{J}$ is **smoothable** if there exists a regular smoothing $\mathcal X\to\Delta$ of $X$, where $\Delta$ is the spectrum of a DVR with residue field $K$, such that $\overline{J}$ is the fiber over $0\in\Delta$ of an open and $\Delta$-proper subscheme of ${\operatorname{Simp}}^{d}(\mathcal{X}/\Delta)$. Note that the fiber over the generic point $\eta$ of a nonempty, open and $\Delta$-proper subscheme of ${\operatorname{Simp}}^{d}(\mathcal{X}/\Delta)$ is necessarily the moduli space of degree $d$ line bundles $\operatorname{Pic}^d(\mathcal{X}_{\eta}/{\eta})$. As openness and properness are stable under base change, the fiber over $0 \in \Delta$ is open in ${\operatorname{Simp}}^d(X)$ and $K$-proper. The axiom "geometrically connected" is redundant in the smoothable case, because the moduli space of degree $d$ line bundles on the generic fiber is geometrically connected and dense in ${\operatorname{Simp}}^{d}(\mathcal{X}/\Delta)$. ****Remark** 3**. It follows from Lemma [\[properoverdelta\]](#properoverdelta){reference-type="ref" reference="properoverdelta"} (combined with Lemma [Lemma 29](#exists-forall){reference-type="ref" reference="exists-forall"}) that requiring the subscheme $\overline{J}\subseteq {\operatorname{Simp}}^d(X)$ to extend to *some regular* smoothing of the curve is equivalent to requiring that it extends to *all* smoothings.
In this paper, we will focus on smoothable fine compactified Jacobians, since they are better behaved and occur more often in applications. ****Remark** 4**. When $K=k$ is algebraically closed and $\operatorname{char}(k)=0$, Definition [Definition 2](#def:finecompjac){reference-type="ref" reference="def:finecompjac"} coincides with [@paganitommasi Definitions 2.1, 2.4] (by passing to the completion of the DVR). In [@paganitommasi Section 3] the authors give a complete classification of fine compactified Jacobians of curves of genus $1$, showing in particular the existence of nonsmoothable examples. ****Example** 5**. If $X$ is a geometrically irreducible curve over $K$, then ${\operatorname{Simp}}^d(X)$ is proper over $K$, so the only degree $d$ fine compactified Jacobian is ${\operatorname{Simp}}^d(X)$ itself. These Jacobians are always smoothable. (See Examples  [**Example** 13](#ex: stab-irred){reference-type="ref" reference="ex: stab-irred"} and [**Example** 43](#irred-is-OS){reference-type="ref" reference="irred-is-OS"} for the corresponding unique stability condition). ****Example** 6**. In the case when there are two geometrically irreducible components, fine compactified Jacobians are no longer irreducible. Assume for simplicity that $X$ is a **vine curve of type $t$**, that is, the union of two nonsingular curves intersecting transversely at $t$ nodes. We will later see in Example [**Example** 31](#vinecurves-nonsm){reference-type="ref" reference="vinecurves-nonsm"} that every fine compactified Jacobian of $X$ is smoothable, and that it consists of $t$ irreducible components whose generic points correspond to line bundles of consecutive bidegrees. ****Remark** 7**. 
The moduli space ${\operatorname{Simp}}^d(X)$ of a nodal curve $X$ admits a natural stratification (see for example [@mv]) into locally closed subsets $${\operatorname{Simp}}^d(X) = \bigsqcup_{(\Gamma_{0},\underline{\mathbf d})}\mathcal J_{(\Gamma_{0},\underline{\mathbf d})}$$ where the union runs over all connected spanning subgraphs $\Gamma_{0}\subseteq \Gamma(X)$ and all multidegrees $\underline{\mathbf d}\in S^d_{\Gamma(X)}(\Gamma_0)$. Each subspace $\mathcal J_{(\Gamma_{0},\underline{\mathbf d})}\subset {\operatorname{Simp}}^d(X)$ is defined as the locus whose points are sheaves $F$ that fail to be locally free on ${\operatorname{N}}(F)=\mathop{\mathrm{Edges}}(\Gamma) \setminus \mathop{\mathrm{Edges}}(\Gamma_{0})$, and whose multidegree $\underline{\deg}(F)$ equals $\underline{\bf d}$. We now extend the notion of a fine compactified Jacobian to the case of a family of nodal curves $\mathcal{X}/S$. Recall that the moduli space ${\operatorname{Simp}}^d(\mathcal{X}/S)$ is also defined in [@melouniversal] when $\mathcal{X}/S$ is the universal curve $\overline{\mathcal{C}}_{g,n}/\overline{\mathcal{M}}_{g,n}$ over the moduli stack of stable $n$-pointed curves of arithmetic genus $g$. In this case ${\operatorname{Simp}}^d(\overline{\mathcal{C}}_{g,n}/\overline{\mathcal{M}}_{g,n})$ is a Deligne--Mumford stack representable (by algebraic spaces) and flat over $\overline{\mathcal{M}}_{g,n}$. **Definition 8**. Assume that $S$ is irreducible with generic point $\theta$, and assume that the generic fiber $\mathcal{X}_\theta/\theta$ is smooth. A **family of degree $d$ fine compactified Jacobians** for the family $\mathcal{X}/S$ is an open algebraic subspace $\overline{\mathcal J}\subseteq {\operatorname{Simp}}^d(\mathcal{X}/S)$ that is proper over $S$. 
We say that a degree $d$ fine compactified Jacobian $\overline{\mathcal J}_{{{g,n}}}\subset {\operatorname{Simp}}^d(\overline{\mathcal{C}}_{g,n}/\overline{\mathcal{M}}_{g,n})$ is a **degree $d$ fine compactified universal Jacobian for the universal curve over $\overline{\mathcal{M}}_{g,n}$**. (We will often omit to specify "for the universal curve over $\overline{\mathcal{M}}_{g,n}$", when clear from the context). Note that the assumption that the generic fiber is smooth implies that all fibers over $S$ of a degree $d$ fine compactified Jacobian are smoothable. # Stability conditions for smoothable fine compactified Jacobians Here we define the combinatorial data identifying *smoothable* fine compactified Jacobians. We first do so for a single nodal curve, that is, for a fixed dual graph (Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"}), and then we generalize the definition to families (Definition [Definition 16](#familyfinejacstab){reference-type="ref" reference="familyfinejacstab"}). If $X$ is a nodal curve over an algebraically closed field, a sheaf $F\in {\operatorname{Simp}}^d(X)$ has two natural combinatorial invariants, given by its multidegree and by the subset ${\operatorname{N}}(F)\subseteq \operatorname{Sing}(X)=\mathop{\mathrm{Edges}}(\Gamma(X))$ of points of the curve where $F$ fails to be locally free. Hence it makes sense to study a fine compactified Jacobian $\overline{J}$ on $X$ by looking at all pairs $\left(\Gamma_0(F)= \Gamma(X) \setminus {\operatorname{N}}(F),\underline{\deg}(F)\right)$ with $F\in\overline{J}$. Recall that, with the notation introduced in Equation [\[sgamma\]](#sgamma){reference-type="ref" reference="sgamma"}, we can regard $\underline{\deg}(F)$ as an element of the space of multidegrees $S_{\Gamma(X)}^d(\Gamma_0(F))$. 
For a single curve $X$, we identify, in Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"}, the two properties characterizing the set of such pairs. One is related to properness, combined with the smoothability of the Jacobian, and it requires that the set of stable multidegrees should be a minimal complete set of representatives for the natural chip-firing action on the dual graph (see Definition [Definition 9](#def:chipfiring){reference-type="ref" reference="def:chipfiring"}). The other corresponds to openness. In combinatorial terms, this means that if we add an edge $e$ to $\Gamma_0$, the set of stable multidegrees on $\Gamma_0\cup\{e\}$ should contain all multidegrees obtained by "adding a chip" to either endpoint of $e$. For a family of curves, we further require compatibility with all contractions of the dual graphs involved. ## Stability conditions for a single curve We start by introducing the twister group of a graph, which will play a role in characterizing smoothable compactified Jacobians. **Definition 9**. Let $G$ be a graph. For each $v \in \mathop{\mathrm{Vert}}(G)$, define the *twister of* $G$ at $v$ to be the element of $\mathbf{Z}^{\mathop{\mathrm{Vert}}(G)}$ defined by $$\mathop{\mathrm{Tw}}_{G, v} (w) = \begin{cases} \text{ \# of edges of } G \text{ having } v \text{ and } w \text{ as endpoints }& \text{when } w \neq v, \\ - \text{ \# of nonloop edges of } G \text{ having } v \text{ as an endpoint} & \text{when } w =v.\end{cases}$$ The **twister group** (or chip-firing group) $\mathop{\mathrm{Tw}}(G)$ is the subgroup of $\mathbf{Z}^{\mathop{\mathrm{Vert}}(G)}$ generated by the set $\Set{\mathop{\mathrm{Tw}}_{G, v}}_{ v \in \mathop{\mathrm{Vert}}(G)}$. Recall from Equation [\[sgamma\]](#sgamma){reference-type="eqref" reference="sgamma"} the definition of the space of multidegrees $S^d_{G}(G)$ of total degree equal to $d$.
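Definition 9 is directly computable. The Python sketch below (our own illustration; the edge-list encoding of a multigraph is an assumption) builds the twister vectors and verifies that each has coordinate sum zero, which is why the chip-firing action preserves the total degree of a multidegree.

```python
from collections import Counter

def twister(vertices, edges, v):
    """Tw_{G,v} as in Definition 9: the w-entry counts the edges joining
    v and w, and the v-entry is minus the number of nonloop edges at v."""
    tw = Counter()
    for a, b in edges:
        if a == b:
            continue  # loops contribute nothing to the twister
        if v in (a, b):
            other = b if a == v else a
            tw[other] += 1
            tw[v] -= 1
    return [tw[w] for w in vertices]

# Theta graph: two vertices joined by three edges.
V = ["u", "w"]
E = [("u", "w")] * 3
assert twister(V, E, "u") == [-3, 3]
# Each twister has coordinate sum zero: firing a vertex moves chips
# along the incident edges but preserves the total degree.
assert all(sum(twister(V, E, v)) == 0 for v in V)
```

For the theta graph, firing $u$ moves one chip along each of the three edges towards $w$, matching the description of the twister as a column of the graph Laplacian up to sign.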
The twister group of $G$ is contained in the sum zero submodule $S^0_{G}(G)$ of $\mathbf{Z}^{\mathop{\mathrm{Vert}}(G)}$. Hence the group structure on $\mathbf{Z}^{\mathop{\mathrm{Vert}}(G)}$ restricts to an action of $\mathop{\mathrm{Tw}}(G)$ on $S^d_{G}(G)$. The quotient $$\label{jacobiangraph} J^d(G):=S^d_{G}(G)/\mathop{\mathrm{Tw}}(G)$$ is then a torsor over $J^0(G)$, which is a finite abelian group. The latter is also known as the **Jacobian** of the graph $G$. (Other names in the literature include the *degree class group*, the *sandpile group* and the *critical group* of the graph $G$). ****Remark** 10**. Let $\mathcal{X}$ be a regular smoothing of $X$ over some discrete valuation ring $\Delta$ with generic point $\eta$. Let $T$ be the image under the restriction map $\mathop{\mathrm{Pic}}(\mathcal X)\rightarrow \mathop{\mathrm{Pic}}(X)$ of the kernel of the surjection $\mathop{\mathrm{Pic}}(\mathcal X)\rightarrow \mathop{\mathrm{Pic}}(\mathcal X_\eta)$ (the restriction to the generic point). We claim that the restriction to $T$ of the multidegree homomorphism $\mathop{\mathrm{Pic}}(X) \to \mathbf{Z}^{\mathop{\mathrm{Vert}}(\Gamma)}$ defines an isomorphism $T \to \mathop{\mathrm{Tw}}(\Gamma)$. Since $\mathcal X$ is regular, the irreducible components $\{X_v\}_{v\in\mathop{\mathrm{Vert}}(\Gamma)}$ of $X$ are Cartier divisors on $\mathcal X$, so their linear combinations define elements of $\mathop{\mathrm{Pic}}(\mathcal{X})$, and it is easy to check that the elements of $T$ are of the form $$\mathcal O_{\mathcal X}\left(\sum_{v\in\mathop{\mathrm{Vert}}(\Gamma)}{d_vX_v}\right)\otimes \mathcal O_X$$ with $\underline{\mathbf d}\in\mathbf{Z}^{\mathop{\mathrm{Vert}}(\Gamma)}$.
One can explicitly compute that the restriction to $T$ of the multidegree map $\mathop{\mathrm{Pic}}(X)\rightarrow\mathbf{Z}^{\mathop{\mathrm{Vert}}(\Gamma)}$ is given by $$\mathcal O_{\mathcal X}\left(\sum_{v\in\mathop{\mathrm{Vert}}(\Gamma)}{d_vX_v}\right)\otimes \mathcal O_X\longmapsto\sum_{v\in\mathop{\mathrm{Vert}}(\Gamma)}d_v\mathop{\mathrm{Tw}}_{\Gamma,v}.$$ This homomorphism is injective by [@raynaud70 (5.2)], and it surjects onto $\mathop{\mathrm{Tw}}(\Gamma)$ by the latter's definition. This proves that $T \to \mathop{\mathrm{Tw}}(\Gamma)$ is an isomorphism. In particular, $T$ is independent of the choice of the regular smoothing. We are now ready to define our notion of a stability condition. **Definition 11**. A **degree $d$ smoothable fine compactified Jacobian stability condition**, or shortly a **degree $d$ stability condition** for the graph $\Gamma$ is a subset $$\sigma = \{(\Gamma_0,\underline{\mathbf d}):\;\underline{\mathbf d}\in S^d_\Gamma(\Gamma_0)\}\subset \{\text{connected spanning subgraphs of }\Gamma\}\times \mathbf{Z}^{\mathop{\mathrm{Vert}}(\Gamma)}$$ satisfying the following two conditions: 1. [\[firstfjs\]]{#firstfjs label="firstfjs"} For all edges $e$ in $\mathop{\mathrm{Edges}}(\Gamma)\setminus\mathop{\mathrm{Edges}}(\Gamma_0)$ with endpoints $v_1$ and $v_2$ we have $$(\Gamma_0,\underline{\mathbf d})\in\sigma \Rightarrow (\Gamma_0\cup\{e\},\underline{\mathbf d}+\underline{\mathbf e}_{v_1}),(\Gamma_0\cup\{e\},\underline{\mathbf d}+\underline{\mathbf e}_{v_2})\in\sigma,$$ where $\underline{\mathbf e}_{v_i}$ denotes the vector in the standard basis of $\mathbf{Z}^{\mathop{\mathrm{Vert}}(\Gamma)}$ corresponding to $v_i$. 2.
[\[secondfjs\]]{#secondfjs label="secondfjs"} For every connected spanning subgraph $\Gamma_0$, the subset $$\sigma(\Gamma_0) := \{\underline{\mathbf d}:\;(\Gamma_0,\underline{\mathbf d})\in\sigma\} \subset S^d_{\Gamma}(\Gamma_0)$$ is a minimal complete set of representatives for the action of the twister group $\mathop{\mathrm{Tw}}(\Gamma_0)$ on $S^d_{\Gamma}(\Gamma_0)$. If $X$ is a nodal curve, a degree $d$ stability condition on $X$ is a degree $d$ stability condition on its dual graph $\Gamma(X)$. Note that the genus of each of the components of $X$ does not play any role in the above definition. ****Remark** 12**. The number of elements of $\sigma(\Gamma_0)$ equals the number of elements of the Jacobian $J^0(\Gamma_0)$. By the Kirchhoff--Trent theorem, this number equals the complexity $c(\Gamma_0)$ of the graph $\Gamma_0$, defined as the number of spanning trees of $\Gamma_0$. (In particular, it is finite). It is in general hard to classify all stability conditions on a given stable graph. However, the task is within reach when the number of vertices is small. ****Example** 13**. If $\Gamma$ only has $1$ vertex (i.e. it is the dual graph of an irreducible curve), there is exactly $1$ stability condition on $\Gamma$. If there are $t$ edges, the unique degree $d$ stability condition is $$\sigma=\bigcup_{0 \leq i \leq t-1} \bigcup_{E \subseteq \mathop{\mathrm{Edges}}(\Gamma), |E|=i} \set{(\Gamma \setminus E,d-i)}.$$ ****Example** 14**. If instead $\Gamma$ consists of $2$ vertices $v_1, v_2$ connected by $t$ edges, and no other edges (i.e. $\Gamma$ is the dual graph of a vine curve of type $t$, see Example [**Example** 6](#ex: vinecurves){reference-type="ref" reference="ex: vinecurves"}), let $\Gamma_1, \ldots, \Gamma_t$ be the spanning trees. Then it follows from the definition that for every stability condition $\sigma$ there exists a unique integer $\lambda$ such that 1. $\sigma(\Gamma_i)=\set{(\lambda,d+1-\lambda-t)}$ for all $i=1, \ldots, t$; 2.
$\sigma(\Gamma)=\set{(\lambda, d-\lambda), (\lambda+1, d-\lambda-1), \ldots, (\lambda+t-1, d+1-\lambda-t)}$. ## Stability conditions for families {#Sec: families} So far we have discussed the notion of a stability condition for a curve in isolation. The flatness condition for families of sheaves imposes an additional compatibility constraint. **Definition 15**. Let $f\colon \Gamma \to \Gamma'$ be a morphism of graphs and let $\sigma$ and $\sigma'$ be degree $d$ stabilities on $\Gamma$ and $\Gamma'$, respectively. We say that $\sigma$ is $f$-compatible with $\sigma'$ if for every connected spanning subgraph $\Gamma_0$ of $\Gamma$, we have $$(\Gamma_0,\underline{\mathbf d})\in\sigma \implies (\Gamma_0',\underline{\mathbf d'}) \in \sigma',$$ where $\Gamma_0'$ is the image of $\Gamma_0$ under $f$ (and therefore it is a connected spanning subgraph of $\Gamma'$), and $\underline{\mathbf d'}$ is defined by $$\label{comp-contractions} \underline{\mathbf d'}(w) = \sum_{f(v)=w} \underline{\mathbf d}(v) + \# \left\{\textrm{edges of } \Gamma\setminus\Gamma_0 \textrm{ that are contracted to } w \textrm{ by } f\right\}.$$ Note that the notion of $f$-compatibility only depends upon the map that $f$ induces on the set of vertices of the two graphs. We are now ready for the definition of the compatibility constraint. **Definition 16**. Let $\mathcal{X}/S$ be a family of nodal curves. A **family of degree $d$ stability conditions (for fine compactified Jacobians)** on $\mathcal X/S$ is the assignment of a degree $d$ stability condition $\sigma_s$ on $\Gamma(X_s)$ (as in Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"}) for every geometric point $s$ in $S$, such that $\sigma_s$ is $f$-compatible with $\sigma_t$ for every morphism $f\colon \Gamma(X_s) \to \Gamma(X_t)$ arising from an étale specialization $t \rightsquigarrow s$ occurring on $S$.
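The pushforward formula [\[comp-contractions\]](#comp-contractions){reference-type="eqref" reference="comp-contractions"} is purely combinatorial. Here is a short Python sketch (our own illustration, with hypothetical names) computing $\underline{\mathbf d'}$ from $\underline{\mathbf d}$ under a vertex map.

```python
def pushforward(multideg, vertex_map, contracted_missing_edges):
    """Compute d' from d as in the compatibility formula: sum d over each
    fibre of f, then add one chip at w for every edge of Gamma \\ Gamma_0
    that f contracts to w.  `contracted_missing_edges` lists, for each
    such contracted edge, the vertex w it is contracted to."""
    d_prime = {}
    for v, dv in multideg.items():
        w = vertex_map[v]
        d_prime[w] = d_prime.get(w, 0) + dv
    for w in contracted_missing_edges:
        d_prime[w] = d_prime.get(w, 0) + 1
    return d_prime

# f contracts the edge joining v1 and v2 to the vertex w; if that edge is
# missing from Gamma_0, the formula adds one chip at w.
d = {"v1": 2, "v2": -1, "v3": 0}
f = {"v1": "w", "v2": "w", "v3": "v3"}
assert pushforward(d, f, ["w"]) == {"w": 2, "v3": 0}
```

Note that the total degree grows by the number of contracted missing edges, consistent with the convention that an element of $S^d_\Gamma(\Gamma_0)$ has coordinate sum $d-n(\Gamma_0)$.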
We will say that a family of degree $d$ stability conditions for the universal family $\overline{\mathcal{C}}_{g,n}$ over $\overline{\mathcal{M}}_{g,n}$ is a **degree $d$ universal stability condition of type $(g,n)$** (and often omit "of type $(g,n)$" when clear from the context). These stability conditions will be studied in Section [9](#sec: univ){reference-type="ref" reference="sec: univ"}. A family of universal stability conditions for the universal curve is the assignment of a stability condition for each element of $G_{g,n}$, which is furthermore compatible with automorphisms and contractions: ****Remark** 17**. In the case of universal stability conditions, the étale specializations of points of $\overline{\mathcal{M}}_{g,n}$ induce all morphisms in the category $G_{g,n}$ of stable graphs of genus $g$ with $n$ marked half-edges. In particular, if $\sigma$ is a degree $d$ universal stability condition of type $(g,n)$ and $\alpha \colon \Gamma(X_1) \to \Gamma(X_2)$ is an isomorphism of the dual graphs of two pointed curves $[X_1],[X_2]\in\overline{\mathcal{M}}_{g,n}$, then $\alpha$ identifies $\sigma_{[X_1]}$ with $\sigma_{[X_2]}$. We conclude that a degree $d$ universal stability condition of type $(g,n)$ is a collection $\Set{\sigma_\Gamma}_{\Gamma \in G_{g,n}}$ such that $\sigma_\Gamma$ is a degree $d$ stability condition on $\Gamma$ and $\sigma_{\Gamma}$ is $f$-compatible with $\sigma_{\Gamma'}$ for any morphism $f:\;\Gamma\rightarrow \Gamma'$ in $G_{g,n}$. # Combinatorial preparation: perturbations and lifts This section contains combinatorial results that will be used in later proofs. First we introduce the notion of a perturbation for the multidegree of a sheaf that fails to be locally free at some nodes. This corresponds to deforming a simple sheaf into a line bundle (Lemma [Lemma 19](#impliedbyopen){reference-type="ref" reference="impliedbyopen"}). 
Our main result here is Corollary [Corollary 22](#coroll: chipfiring){reference-type="ref" reference="coroll: chipfiring"}, where we relate the chip-firing action on a graph with the chip-firing action on the graph obtained by blowing up each of its edges an equal amount of times. **Definition 18**. Let $H \subseteq G$ be a connected spanning subgraph. An *$H$-perturbation* is an element in $S^{n(H)}_{G}(G)$ (for $n(H)=|\operatorname{Edges}(G) \setminus \operatorname{Edges}(H)|$) of the form $$\sum_{e \in \mathop{\mathrm{Edges}}(G) \setminus \mathop{\mathrm{Edges}}(H)} {\underline{\mathbf e}}_{t(e)}$$ where $t(e)$ denotes the target of the edge $e$ with respect to some choice of orientation of the edges in $\mathop{\mathrm{Edges}}(G) \setminus \mathop{\mathrm{Edges}}(H)$. By the description in Remark [**Remark** 7](#sec:strat){reference-type="ref" reference="sec:strat"} of the stratification of ${\operatorname{Simp}}^d(X)$, and by Lemma [Lemma 19](#impliedbyopen){reference-type="ref" reference="impliedbyopen"}, if $\Gamma_0$ is a connected spanning subgraph of $\Gamma$, a $\Gamma_0$-perturbation is the same as the multidegree of a line bundle on $X$ that specializes to a sheaf $F$ with $\Gamma_0(F)=\Gamma_0$ and $\underline{\deg}(F)=\underline{\mathbf 0}$. We are now ready for the following technical lemma. **Lemma 19**. *Let $\mathcal X$ be a family of nodal curves over the spectrum $\Delta$ of a DVR with algebraically closed residue field $\overline{K}$, and consider a section $\sigma:\;\Delta\rightarrow {\operatorname{Simp}}^d(\mathcal X/\Delta)$. Let us fix a geometric point $\overline\eta$ lying over the generic point of $\Delta$ and denote by $\Gamma$ and $\Gamma'$ the dual graphs of the geometric fibers $X_0$ and $X_{\overline\eta}$ of $\mathcal X$, respectively, and by $f:\;\Gamma\rightarrow \Gamma'$ the morphism of graphs induced by the specialization of $\overline\eta$ to $0$.
Write $F_0$ and $F_{\overline{\eta}}$ for $\sigma(0)$ and $\sigma(\overline\eta)$, respectively, and set $\Gamma_0=\Gamma_0(F_0)\subseteq \Gamma$ and $\Gamma_0'=\Gamma_0(F_{\overline{\eta}})\subseteq \Gamma'$ for the connected subgraphs where the corresponding sheaf is locally free. Then $\Gamma_0\subseteq f^{-1}(\Gamma_0')$, and there exists an $f^{-1}(\Gamma_0')$-perturbation $T$ such that the multidegrees $\underline{\mathbf d}=\underline{\deg}(F_0)$ and $\underline{\mathbf d}' = \underline{\deg}(F_{\overline\eta})$ satisfy the relation $$\underline{\mathbf d}'= f\left(\underline{\mathbf d}+T\right).$$* Note that if $\Gamma_0'\subseteq \Gamma'$ is connected and spanning, then so is $f^{-1}(\Gamma_0') \subseteq \Gamma$. *Proof.* The section $\sigma$ defines a rank $1$ sheaf $\mathcal F$ over $\mathcal X/\Delta$. By [@esteves-pacini Proposition 5.5], the projectivization $\mathcal Y=\mathbf P_{\mathcal X}(\mathcal F)$ is a family of curves over $\Delta$ and there exists a line bundle $\mathcal L$ over $\mathcal Y/\Delta$ such that $\mathcal F = \psi_*\mathcal L$ holds for the natural map $\psi:\;\mathcal Y\rightarrow \mathcal X$. Geometrically, the restriction of $\psi$ to $Y_{\overline\eta}\rightarrow X_{\overline\eta}$ is the blow-up that corresponds to adding a genus $0$ vertex on all edges of $\Gamma'$ that do not belong to $\Gamma_0'$. The description of $\psi_0:\;Y_0\rightarrow X_0$ in terms of $\Gamma$ and $\Gamma_0$ is completely analogous. Moreover, the multidegrees of $L_{\overline\eta}$ and $L_{0}$ can be obtained from the multidegrees of $F_{\overline\eta}$ and $F_{0}$, respectively, by taking $\underline{\mathbf d}'$ (resp. $\underline{\mathbf d}$) and assigning degree $1$ to the additional vertices. Then the claim follows from the fact that $L_0$ is a specialization of the line bundle $L_{\overline\eta}$. ◻ We will now prove some combinatorial ingredients that relate the chip-firing on a graph with the chip-firing on its blow-up.
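The blow-up combinatorics just mentioned can be probed numerically. The following Python sketch (our own illustration) builds the graph in which every edge is replaced by a chain of $m+1$ edges, counts spanning trees with the matrix-tree theorem, and checks on a small example that this count decomposes over the connected spanning subgraphs of the original graph, as made precise below.

```python
from fractions import Fraction
from itertools import combinations

def complexity(vertices, edges):
    """Spanning-tree count c(G) via the matrix-tree theorem: determinant
    of the reduced Laplacian. Parallel edges allowed; loops ignored;
    returns 0 when the graph is disconnected."""
    idx = {v: i for i, v in enumerate(vertices)}
    n = len(vertices)
    L = [[Fraction(0)] * n for _ in range(n)]
    for u, v in edges:
        if u == v:
            continue
        i, j = idx[u], idx[v]
        L[i][i] += 1; L[j][j] += 1
        L[i][j] -= 1; L[j][i] -= 1
    M = [row[:n - 1] for row in L[:n - 1]]  # delete last row and column
    det = Fraction(1)
    for c in range(n - 1):  # exact Gaussian elimination over the rationals
        piv = next((r for r in range(c, n - 1) if M[r][c] != 0), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]; det = -det
        det *= M[c][c]
        for r in range(c + 1, n - 1):
            f = M[r][c] / M[c][c]
            for k in range(c, n - 1):
                M[r][k] -= f * M[c][k]
    return int(det)

def subdivide(vertices, edges, m):
    """The graph in which each edge is replaced by a chain of m+1 edges
    through m new 'exceptional' vertices."""
    verts, new_edges = list(vertices), []
    for k, (u, v) in enumerate(edges):
        chain = [u] + [("exc", k, i) for i in range(m)] + [v]
        verts += chain[1:-1]
        new_edges += list(zip(chain, chain[1:]))
    return verts, new_edges

# Theta graph: 2 vertices joined by 3 edges, with c = 3.
V, E, m = [0, 1], [(0, 1)] * 3, 2
lhs = complexity(*subdivide(V, E, m))
# Sum over all edge subsets: disconnected ones contribute c = 0, so this
# is effectively a sum over connected spanning subgraphs.
rhs = sum(complexity(V, [E[i] for i in kept]) * m ** (len(E) - r)
          for r in range(len(E) + 1)
          for kept in combinations(range(len(E)), r))
assert lhs == rhs == 27
```

For the theta graph with $m=2$ both sides equal $27$; the disconnected edge subsets drop out automatically because their reduced Laplacian is singular.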
We fix a graph $\Gamma$ and let $\widetilde{\Gamma}={{\widetilde{\Gamma}}}_m$ be the graph obtained by subdividing each edge of $\Gamma$ into $m+1$ edges by adding $m$ vertices. We start by relating the complexities. **Lemma 20**. *For every $m\geq 1$, the following formula relates the number of spanning trees (i.e. the complexity) of ${{\widetilde{\Gamma}}}_m$ with the complexity of the spanning subgraphs of $\Gamma$: $$\label{eq: number} c({{\widetilde{\Gamma}}}_m)= \sum_{\substack{G \subseteq \Gamma \textrm{ connected}\\ \textrm{ spanning subgraph}}} c(G) \cdot m^{\left|\mathop{\mathrm{Edges}}(\Gamma) \setminus \mathop{\mathrm{Edges}}(G)\right|}.$$* *Proof.* By applying [@bms Theorem 3.4] for each new vertex added, we deduce the equality $$c({{\widetilde{\Gamma}}}_m)=(m+1)^{1+e-v}c(\Gamma)=\sum_{n=0}^{1+e-v}\binom{e-v+1}{n}c(\Gamma)\;m^{e-v+1-n},$$ where $e$ and $v$ denote the number of edges and vertices of $\Gamma$, respectively. (It is easy to check that the formula in op. cit. also holds when the graph contains loops). On the other hand, we have that each spanning tree $T$ of $\Gamma$ can be extended in $\binom{e-v+1}{n}$ ways to a subgraph $H \subseteq \Gamma$ with $v-1+n$ edges by choosing $n$ edges in the set $\mathop{\mathrm{Edges}}(\Gamma)\setminus \mathop{\mathrm{Edges}}(T)$. This implies $$\sum_{\substack{H\subseteq \Gamma \\ \text{spanning subgraph}\\ |\mathop{\mathrm{Edges}}(H)|=v-1+n}}c(H) = \binom{e-v+1}{n} c(\Gamma).$$ The claim then follows by combining the two equalities above. ◻ We now define the subset $\widetilde{\sigma} \subset S^d_{\widetilde{\Gamma}}(\widetilde{\Gamma})$ by lifting each element $(\Gamma_0,\underline{\mathbf d}) \in \sigma$ to multiple elements in $S^d_{\widetilde{\Gamma}}(\widetilde{\Gamma})$ as follows.
First of all, for every element $(\Gamma_0,\underline{\mathbf d}) \in \sigma$ we can naturally identify $\underline{\mathbf d}$ with the element of $\mathbf{Z}^{\mathop{\mathrm{Vert}}(\widetilde{\Gamma})}$ obtained by extending $\underline{\mathbf d}$ by zero on all exceptional vertices. Then a lift of $(\Gamma_0,\underline{\mathbf d})$ is any element $\widetilde{\underline{\mathbf d}}\in S^d_{\widetilde{\Gamma}}(\widetilde{\Gamma})$ of the form $$\label{lift}\widetilde{\underline{\mathbf d}} = \underline{\mathbf d}+\sum_{e\in \mathop{\mathrm{Edges}}(\Gamma)\setminus\mathop{\mathrm{Edges}}(\Gamma_0)}\underline{\mathbf e}_{v(e)},$$ for some choice of function $v$ from the set of edges $\mathop{\mathrm{Edges}}(\Gamma)\setminus\mathop{\mathrm{Edges}}(\Gamma_0)$ to $\mathop{\mathrm{Vert}}(\widetilde{\Gamma})$ such that each vertex $v(e)$ is one of the $m$ interior vertices of the rational chain in $\widetilde{\Gamma}$ that corresponds to the edge $e$. Recall from Definition [Definition 18](#def: pert){reference-type="ref" reference="def: pert"} the notion, for $H$ a subgraph of $G$, of an $H$-perturbation. This terminology gives a way to relate the action of $\mathop{\mathrm{Tw}}({{\widetilde{\Gamma}}}_m)$ with that of $\mathop{\mathrm{Tw}}(\Gamma)$. **Lemma 21**. *Let $G$ and $G'$ be connected spanning subgraphs of $\Gamma$ and let $\underline{\mathbf d} \in S^d_\Gamma(G)$ and $\underline{\mathbf d}' \in S^d_\Gamma(G')$.
Assume that there exist two lifts $\underline{\tilde{\mathbf d}}$ and ${\underline{\tilde{\mathbf d}}}'$ in $\widetilde{\Gamma}$, defined as in Equation [\[lift\]](#lift){reference-type="eqref" reference="lift"}, that are different and that belong to the same $\mathop{\mathrm{Tw}}(\widetilde{\Gamma})$-orbit.* *Then there exist a $G$-perturbation $T$ and a $G'$-perturbation $T'$ such that $\underline{\mathbf d}+T$ and $\underline{\mathbf d}'+T'$ are different and belong to the same $\mathop{\mathrm{Tw}}(\Gamma)$-orbit.* *Proof.* By the definition of the twister group we have $$\label{firststep} \underline{\tilde{\mathbf d}}-\underline{\tilde{\mathbf d}}'= \operatorname{div}(g):=\sum_{v\in\mathop{\mathrm{Vert}}({{\widetilde{\Gamma}}}_m)}g(v)\mathop{\mathrm{Tw}}_{{{\widetilde{\Gamma}}}_m,v}$$ for some $g:\;\mathop{\mathrm{Vert}}({{\widetilde{\Gamma}}}_m)\rightarrow \mathbf{Z}$ which we will now study in more detail. By construction, the graph ${{\widetilde{\Gamma}}}_m$ comes with an inclusion $\mathop{\mathrm{Vert}}(\Gamma)\hookrightarrow\mathop{\mathrm{Vert}}({{\widetilde{\Gamma}}}_m)$, which we use to define a function $f:\;\mathop{\mathrm{Vert}}(\Gamma)\rightarrow \mathbf{Z}$ by $$f(v) = \left\lfloor \frac{g(v)}{m+1}\right\rfloor.$$ For later use, we wish to make sure that the function $f$ is nonconstant. This can always be achieved by using that $\operatorname{div}(g)$ does not change if $g$ is modified by adding the same constant to all vertices of ${{\widetilde{\Gamma}}}_m$. Since $\tilde{\underline{\mathbf d}}$ and $\tilde{\underline{\mathbf d}}{}'$ are distinct, the function $g:\;\mathop{\mathrm{Vert}}({{\widetilde{\Gamma}}}_m)\rightarrow \mathbf{Z}$ is nonconstant, so that there exists an integer $0\leq\ell\leq m$ such that $f_\ell(v) = \left\lfloor \frac{g(v)+\ell}{m+1}\right\rfloor$ is also nonconstant, and we may then use $f_\ell$ in place of $f$ in what follows. 
We would like to compare $$\operatorname{div}(f):=\sum_{v\in\mathop{\mathrm{Vert}}({\Gamma})}f(v)\mathop{\mathrm{Tw}}_{{\Gamma},v}$$ with $\underline{\mathbf d}-\underline{\mathbf d}'$. To this end, we study the behaviour of $g$ at the endpoints of an arbitrary edge $e$ of $\Gamma$. By definition, the inverse image of $e$ under the contraction morphism $$\alpha:\;\mathop{\mathrm{Edges}}({{\widetilde{\Gamma}}}_m)\rightarrow\mathop{\mathrm{Edges}}(\Gamma)$$ consists of a chain of $m+1$ edges. Fix the orientation for ${{\widetilde{\Gamma}}}_m$ so that $g$ increases following the direction of the edges. For each $e \in \mathop{\mathrm{Edges}}(\Gamma)$, we then order the $m+2$ vertices of the chain $\alpha^{-1}(e)$ as $v_0,\dots,v_{m+1}$ following the orientation. In particular, the vertices $v_0$ and $v_{m+1}$ correspond to the two endpoints of $e$. Because of the way we defined the lifts in [\[lift\]](#lift){reference-type="eqref" reference="lift"}, for $e\in G\cap G'$ we have $\operatorname{div}(g)(v_i)=0$ for all $i=1,\dots,m$. For $e\in G'\setminus G$ (resp. $e\in G\setminus G'$) there exists a single vertex $v_{j}$ with $1\leq j\leq m$ such that $\operatorname{div}(g)(v_j)=1$ (resp. $\operatorname{div}(g)(v_j)=-1$) and $\operatorname{div}(g)(v_i)=0$ for all $i=1,\dots,m$, $i\neq j$. In the case $e\notin G\cup G'$, we either have $\operatorname{div}(g)(v_i)=0$ for all $i=1,\dots,m$, or there exist two distinct indices $j_+,j_-\in\{1,\dots,m\}$ such that $\operatorname{div}(g)(v_{j_+})=1$, $\operatorname{div}(g)(v_{j_-})=-1$ and $\operatorname{div}(g)(v_i)=0$ for $i\in\{1,\dots,m\}\setminus\{j_+,j_-\}$. As a consequence, the value of $g$ at each vertex $v_i$ in the chain $\alpha^{-1}(e)$ is uniquely determined by $j$ (or by $j_+,j_-$) and by the value of $g$ at the first two vertices of the chain, that is, by $a=g(v_0)$ and $b=g(v_1)$. 
For example, for the last two vertices we have $$\begin{array}{lll} g(v_{m+1})-g(v_{m})&g(v_{m+1})&\\ b-a&(m+1)(b-a)+a&\text{if }e\in G\cap G'\\ b-a-1&(m+1)(b-a)-(m+1-j)+a&\text{if }e\in G'\setminus G\\ b-a+1&(m+1)(b-a)+(m+1-j)+a&\text{if }e\in G\setminus G'\\ b-a&(m+1)(b-a)+j_+-j_-+a&\text{if }e\notin G\cup G'.\\ \end{array}$$ In particular, if we denote by $v$ and $w$ the endpoints of $e$ corresponding to $v_0$ and $v_{m+1}$ respectively, we have: - $f(v)-f(w) = a-b$ for $e\in G\cap G'$ or $e\notin G\cup G'$ with $j_+=j_-$; - $f(v)-f(w) \in\{a-b,a-b+1\}$ for $e\in G'\setminus G$ or $e\notin G\cup G'$ with $j_+<j_-$; - $f(v)-f(w) \in\{a-b,a-b-1\}$ for $e\in G\setminus G'$ or $e\notin G\cup G'$ with $j_+>j_-$. We use this to define three sets of directed edges of $\Gamma$. To this end, we use the same orientation of the edges of $\Gamma$ that we chose earlier. For each $e\in\mathop{\mathrm{Edges}}(\Gamma)$, we denote the first endpoint of $e$ by $v=v_0$ and the second by $w=v_{m+1}$. Now let $S$ be the following set of directed edges: - all edges $e\in G'\setminus G$, oriented from $v$ to $w$ if $f(v)-f(w)=g(v_0)-g(v_1)+1$ and from $w$ to $v$ if $f(v)-f(w)=g(v_0)-g(v_1)$; - all edges $e\notin G\cup G'$ with $f(v)-f(w)=g(v_0)-g(v_1)+1$, oriented from $v$ to $w$. Then let $S'$ be the following set of directed edges: - all edges $e\in G\setminus G'$, oriented from $v$ to $w$ if $f(v)-f(w)=g(v_0)-g(v_1)-1$ and from $w$ to $v$ if $f(v)-f(w)=g(v_0)-g(v_1)$; - all edges $e\notin G\cup G'$ with $f(v)-f(w)=g(v_0)-g(v_1)-1$, oriented from $v$ to $w$. We define $N\subseteq\mathop{\mathrm{Edges}}(\Gamma)$ to be the set of edges that are neither in $S$ nor in $S'$. [^3] If one forgets the orientation, then $S$ consists of the union of $\mathop{\mathrm{Edges}}(G'\setminus G)$ and of a subset $P\subseteq\mathop{\mathrm{Edges}}(\Gamma\setminus(G\cup G'))$.
Analogously, the support of $S'$ is the union of $\mathop{\mathrm{Edges}}(G\setminus G')$ and the same $P$ as before, but with the edges in $P$ taken with the opposite orientation than in the case of $S$. With this notation, we have $N=\mathop{\mathrm{Edges}}(G\cap G')\cup \left(\mathop{\mathrm{Edges}}(\Gamma\setminus(G\cup G'))\setminus P\right)$. Let us denote by $s,t:\;S\cup S'\rightarrow \mathop{\mathrm{Vert}}(\Gamma)$ the maps sending each oriented edge to its source and its target respectively. Then define $$T' = \sum_{e \in S} {\underline{\mathbf e}}_{s(e)}, \ \ \ T = \sum_{e \in S'} {\underline{\mathbf e}}_{s(e)}.$$ We claim that $\operatorname{div}(f) = \underline{\mathbf d}-\underline{\mathbf d}'+T-T'$. To prove the claim, we will check that both sides take the same value at each vertex $v$ of $\Gamma$. If we view $v$ as a vertex of ${{\widetilde{\Gamma}}}_m$, from $\underline{\tilde{\mathbf d}}-\underline{\tilde{\mathbf d}}'=\operatorname{div}(g)$ we deduce the equality $$\label{dminusd'} \sum_{\begin{subarray}{l}e \text{ edge of }\Gamma\\\text{with endpoint }v\end{subarray}}(g(v)-g(v_1(e))) = \underline{\mathbf d}(v)-\underline{\mathbf d}'(v),$$ where we denoted by $v_1(e)$ the first internal vertex of $\alpha^{-1}(e)$ adjacent to $v=v_0$. The definitions of $S$, $S'$ and $N$ then give, for each of these three sets, an equality relating the sum of $f(v)-f(w(e))$ over its edges with endpoint $v$ to the corresponding sum of $g(v)-g(v_1(e))$, with correction terms accounted for by $T'(v)$ and $T(v)$. Since all edges with endpoint $v$ appear exactly once in these equalities, by adding them together and using [\[dminusd\'\]](#dminusd'){reference-type="eqref" reference="dminusd'"} we obtain $$\label{finalstep} \operatorname{div}(f)(v) + T'(v) - T(v) = \underline{\mathbf d}(v)-\underline{\mathbf d}'(v).$$ This proves the claim and ensures that $\underline{\mathbf d}+T$ and ${\underline{\mathbf d}}'+T'$ are in the same $\mathop{\mathrm{Tw}}(\Gamma)$-orbit. Finally, the fact that $\underline{\mathbf d}+T$ and ${\underline{\mathbf d}}'+T'$ are different follows from the fact that the function $f$ is nonconstant.
◻ We are now ready to state and prove the main result of this section.

**Corollary 22**. *Let $\sigma$ be a subset of $$\{\text{connected spanning subgraphs of }\Gamma\}\times \mathbf{Z}^{\mathop{\mathrm{Vert}}(\Gamma)}$$ such that the collection $\{\sigma(G)\}$, whose elements are defined by $$\sigma(G) := \{\underline{\mathbf d}:\;(G,\underline{\mathbf d})\in\sigma\} \subset S^d_{\Gamma}(G),$$ satisfies Part (1) of Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"}.*

*Then let $\widetilde{\sigma}_m$ be the set whose elements are all lifts to the graph ${{\widetilde{\Gamma}}}_m$, via Equation [\[lift\]](#lift){reference-type="eqref" reference="lift"}, of all elements of $\sigma(G)$ for each $G \subseteq \Gamma$.*

*Then the following are equivalent:*

1. *the subset $\widetilde{\sigma}_m\subset S^d({{\widetilde{\Gamma}}}_m)$ is a minimal complete set of representatives for the action of $\mathop{\mathrm{Tw}}({{\widetilde{\Gamma}}}_m)$, and*
2. *the collection $\sigma$ satisfies Part (2) of Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"} (hence it is a degree $d$ stability condition).*

*Proof.* It is straightforward to check that the number of elements of $\widetilde{\sigma}_m$ equals the right hand side of [\[eq: number\]](#eq: number){reference-type="eqref" reference="eq: number"}, which equals the complexity of ${{\widetilde{\Gamma}}}_m$ by Lemma [\[complexityblowup\]](#complexityblowup){reference-type="ref" reference="complexityblowup"}. Therefore it suffices to prove that the following statements are equivalent:

1. [\[uno\]]{#uno label="uno"} any two elements of $\widetilde{\sigma}_m$ are chip-firing equivalent on ${{\widetilde{\Gamma}}}_m$ if and only if they are equal, and
2. [\[due\]]{#due label="due"} for every connected spanning subgraph $G \subseteq \Gamma$, any two elements of $\sigma(G)$ are chip-firing equivalent on $G$ if and only if they are equal.
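Chip-firing equivalence of the kind appearing in [\[uno\]](#uno){reference-type="eqref" reference="uno"} and [\[due\]](#due){reference-type="eqref" reference="due"} amounts to asking whether the difference of two multidegrees lies in the image of the integer Laplacian of the graph. The following brute-force sketch is our own illustration (not part of the proof, and all function names are ours): it searches for an integer-valued function $f$ on the vertices with $\operatorname{div}(f)$ equal to the given difference.

```python
from itertools import product

def laplacian(n, edges):
    """Integer Laplacian of a multigraph on vertices 0..n-1 (parallel edges allowed)."""
    L = [[0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    return L

def chip_firing_equivalent(d1, d2, edges, bound=3):
    """Brute-force test: is d1 - d2 = L f for some integer-valued f?
    f is normalised by f[0] = 0; the search is exhaustive only up to `bound`,
    so a False answer is conclusive only on small examples like these."""
    n = len(d1)
    L = laplacian(n, edges)
    diff = [a - b for a, b in zip(d1, d2)]
    for tail in product(range(-bound, bound + 1), repeat=n - 1):
        f = (0,) + tail
        if all(sum(L[i][j] * f[j] for j in range(n)) == diff[i] for i in range(n)):
            return True
    return False

# Triangle graph K3: its Jacobian group is Z/3Z, generated by the class of (1, -1, 0).
K3 = [(0, 1), (1, 2), (0, 2)]
print(chip_firing_equivalent([1, -1, 0], [0, 1, -1], K3))  # True
print(chip_firing_equivalent([1, -1, 0], [0, 0, 0], K3))   # False
```

On the triangle, $(1,-1,0)-(0,1,-1)=(1,-2,1)$ equals $Lf$ for $f=(0,-1,0)$, while $(1,-1,0)$ itself generates the order-$3$ Jacobian group and is not equivalent to zero.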
The fact that [\[due\]](#due){reference-type="eqref" reference="due"} implies [\[uno\]](#uno){reference-type="eqref" reference="uno"} follows immediately from Lemma [Lemma 21](#perturbation){reference-type="ref" reference="perturbation"} and the assumption that $\sigma$ satisfies Part (1) of Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"}. For the converse implication, let ${\bf \underline{d}}$ and ${\bf \underline{d}}'$ be different elements in $\sigma(G)$, satisfying $${\bf \underline{d}} - {\bf \underline{d}}' = \sum_{v \in \mathop{\mathrm{Vert}}(G)} f(v) \mathop{\mathrm{Tw}}_{G,v}$$ for some $f \colon \mathop{\mathrm{Vert}}(G) \to \mathbb{Z}$. First we show that we may assume $G=\Gamma$. Choose an orientation $s,t \colon \mathop{\mathrm{Edges}}(\Gamma) \setminus \mathop{\mathrm{Edges}}(G) \to \mathop{\mathrm{Vert}}(\Gamma)$ and define $${\bf \underline{e}}:= {\bf \underline{d}}+ \sum_{e \in \mathop{\mathrm{Edges}}(\Gamma) \setminus \mathop{\mathrm{Edges}}(G)} {\underline{\mathbf e}}_{s(e)}, \quad {\bf \underline{e}}':= {\bf \underline{d}}'+ \sum_{e \in \mathop{\mathrm{Edges}}(\Gamma) \setminus \mathop{\mathrm{Edges}}(G)} {\underline{\mathbf e}}_{t(e)}.$$ Then by Part (1) of Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"} we have that ${\bf \underline{e}}, {\bf \underline{e}}'$ are different elements of $\sigma(\Gamma)$, and $${\bf \underline{e}}- {\bf \underline{e}}' = \sum_{v \in \mathop{\mathrm{Vert}}(\Gamma)} f(v) \mathop{\mathrm{Tw}}_{\Gamma,v}.$$ Then ${\bf\underline{e}}$ (resp. ${\bf\underline{e}}'$) admits a unique lift ${\bf\tilde{\underline{e}}}$ (resp. ${\bf\tilde{\underline{e}}}'$) to $\widetilde{\sigma}_m$ via Equation [\[lift\]](#lift){reference-type="eqref" reference="lift"}. Now we define $g \colon \mathop{\mathrm{Vert}}({{\widetilde{\Gamma}}}_m) \to \mathbb{Z}$ to show that ${\bf\tilde{\underline{e}}}$ and ${\bf\tilde{\underline{e}}}'$ are in the same $\mathop{\mathrm{Tw}}({{\widetilde{\Gamma}}}_m)$-orbit.
As in the proof of Lemma [Lemma 21](#perturbation){reference-type="ref" reference="perturbation"}, let $\alpha \colon \mathop{\mathrm{Edges}}({{\widetilde{\Gamma}}}_m) \to \mathop{\mathrm{Edges}}(\Gamma)$ be the map induced by a contraction morphism. For a fixed $e \in \mathop{\mathrm{Edges}}(\Gamma)$, let $v$ and $w$ be the endpoints of $e$ such that $f(v)\leq f(w)$. Then label the $m+2$ vertices of the chain $\alpha^{-1}(e)$ consecutively as $v_0, v_1, \ldots, v_m, v_{m+1}$, so that $v_0$ corresponds to $v$ and $v_{m+1}$ corresponds to $w$ under the contraction. Then let $g(v_i)=f(v) +i \cdot (f(w)-f(v))$. This defines a function $g \colon \mathop{\mathrm{Vert}}({{\widetilde{\Gamma}}}_m) \to \mathbb{Z}$, and with this definition we have $${\bf \tilde{\underline{e}}}- {\bf \tilde{\underline{e}}}' = \sum_{v \in \mathop{\mathrm{Vert}}({{\widetilde{\Gamma}}}_m)} g(v) \mathop{\mathrm{Tw}}_{{{\widetilde{\Gamma}}}_m,v}.$$ This proves that ${\bf \tilde{\underline{e}}}$ and ${\bf \tilde{\underline{e}}}'$, which are different elements of $\widetilde{\sigma}_m$, are in the same $\mathop{\mathrm{Tw}}({{\widetilde{\Gamma}}}_m)$-orbit, and thus completes the proof that [\[uno\]](#uno){reference-type="eqref" reference="uno"} implies [\[due\]](#due){reference-type="eqref" reference="due"}. ◻

# From stability conditions to smoothable fine compactified Jacobians

In this section we show how to construct a fine compactified Jacobian from a given stability condition. We define a sheaf to be *stable* if its multidegree is as prescribed by the stability condition, both in the case of a single nodal curve and for families. Then we show that the moduli space of stable sheaves thus defined is a fine compactified Jacobian as introduced in Definitions [Definition 2](#def:finecompjac){reference-type="ref" reference="def:finecompjac"} and [Definition 8](#def:familyfinecompjac){reference-type="ref" reference="def:familyfinecompjac"}.
The case of a curve in isolation is Corollary [Corollary 26](#stab->fine){reference-type="ref" reference="stab->fine"}, and the more general case of families with smooth generic element is Theorem [Theorem 25](#cor: fromstabtojac){reference-type="ref" reference="cor: fromstabtojac"}. The most difficult part is the proof of properness (Lemma [\[properoverdelta\]](#properoverdelta){reference-type="ref" reference="properoverdelta"}). Let $X$ be a nodal curve over some field extension $K$ of $k$, and let $\Gamma$ be its dual graph. **Definition 23**. Let $\sigma$ be a degree $d$ stability condition for $\Gamma$ (see Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"}). We say that $[F] \in {\operatorname{Simp}}^d(X)$ is **stable with respect to $\sigma$** if $(\Gamma_0(F),\underline{\deg}(F))$ is in $\sigma$, where $\Gamma_0(F)=\Gamma \setminus {\operatorname{N}}(F)$ is the (necessarily connected and spanning) subgraph of $\Gamma$ that is dual to the normalization of $X$ at the points (necessarily nodes) where $F$ fails to be locally free. We define $\overline{J}_\sigma\subseteq {\operatorname{Simp}}^d(X)$ as the subscheme of sheaves that are stable with respect to $\sigma$. Let now $\mathcal{X}/S$ be a family of nodal curves over a $k$-scheme $S$. For every geometric point $s$ of $S$, denote by $\Gamma_s$ the dual graph of the fiber $\mathcal{X}_s$. **Definition 24**. Let $\mathfrak{S}=\set{\sigma_s}_{s \in S}$ be a family of degree $d$ stabilities for $\mathcal{X}/S$ as in Definition [Definition 16](#familyfinejacstab){reference-type="ref" reference="familyfinejacstab"}. If $F$ is a geometric point of ${\operatorname{Simp}}^d(\mathcal{X}/S)$ lying over $s\in S$, we say that $F$ is **stable with respect to $\mathfrak{S}$** if $F$ is stable with respect to $\sigma_s$ (see Definition [Definition 23](#defstable){reference-type="ref" reference="defstable"}). 
We define $\overline{\mathcal{J}}^d_{\mathfrak{S}}\subseteq {\operatorname{Simp}}^d(\mathcal{X}/S)$ as the algebraic subspace whose geometric points are sheaves that are stable with respect to $\mathfrak{S}$. The main result of this section is the following: **Theorem 25**. *Assume that $S$ is irreducible with generic point $\theta$, and that the generic fiber $\mathcal{X}_\theta/\theta$ of the family is smooth. Let $\mathfrak{S}$ be a family of degree $d$ stability conditions for $\mathcal{X}/S$. Then $\overline{\mathcal{J}}^d_{\mathfrak{S}}\subseteq {\operatorname{Simp}}^d(\mathcal{X}/S)$ is a degree $d$ fine compactified Jacobian for the family $\mathcal{X}/S$.* We immediately deduce: **Corollary 26**. *The moduli space $\overline{J}_\sigma \subseteq {\operatorname{Simp}}^d(X)$ of $\sigma$-stable sheaves is a smoothable degree $d$ fine compactified Jacobian for $X$.* *Proof.* Apply Theorem [Theorem 25](#cor: fromstabtojac){reference-type="ref" reference="cor: fromstabtojac"} to a regular smoothing $\mathcal{X}/\Delta$ of $X$. ◻ The most delicate part of the proof of Theorem [Theorem 25](#cor: fromstabtojac){reference-type="ref" reference="cor: fromstabtojac"} is the following result, which builds on the combinatorial preparation of the previous section. **Lemma 27**. *With the same assumptions as in Theorem [Theorem 25](#cor: fromstabtojac){reference-type="ref" reference="cor: fromstabtojac"}, the moduli space $\overline{\mathcal{J}}^d_{\mathfrak{S}}$ is proper over $S$. [\[properoverdelta\]]{#properoverdelta label="properoverdelta"}* *Proof.* By Definitions [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"} and [Definition 23](#defstable){reference-type="ref" reference="defstable"}, the scheme $\overline{\mathcal{J}}^d_{\mathfrak{S}}$ is the union of finitely many locally closed strata and hence of finite type and quasi-separated over $S$.
To prove properness, we apply the valuative criterion [@stacks [Lemma 70.22.3](https://stacks.math.columbia.edu/tag/0H1Z)]. Given the spectrum $\Delta$ of a DVR and a commutative diagram (of solid arrows) $$\xymatrix{ {\eta=\Delta\setminus\{0\}}\ar[r]^{ f}\ar@{^{(}->}[d]& {\operatorname{Pic}^d(\mathcal{X}_\theta)}\ar@{^{(}->}[r]&{\overline{\mathcal{J}}^d_{\mathfrak{S}}}\ar[d]\\ \Delta\ar[rr]\ar@{.>}[rru]&&{S}, }$$ we need to prove that there exists exactly one dotted arrow that keeps the diagram commutative. (Note that $\operatorname{Pic}^d(\mathcal{X}_{\theta})$ is open and dense in $\overline{\mathcal{J}}^d_{\mathfrak{S}}$). Let us recall that morphisms to moduli spaces of sheaves (line bundles, simple sheaves) correspond to sheaves on the corresponding family of curves, but this correspondence is a bijection only after identifying two such sheaves whenever they differ by the pullback of a line bundle from the base. In the remainder of this proof we will slightly abuse the notation and assume this identification. Let us denote by $\mathcal X_{\Delta}/\Delta$ the base change of $\mathcal X/S$ under $\Delta\rightarrow S$ and by $\overline{\mathcal{J}}^d_{\Delta}\subseteq {\operatorname{Simp}}^d(\mathcal X/\Delta)$ the base change of $\overline{\mathcal{J}}^d_{\mathfrak{S}}$. Let $s$ be the image of $0 \in \Delta$ under $\Delta \to S$. Denoting by $X= \mathcal{X}_s=\mathcal{X}_{\Delta,0}$ the fiber over $0 \in \Delta$ of the family $\mathcal{X}_{\Delta}/\Delta$, by the hypothesis that $\mathcal{X}_{\theta}/\theta$ is smooth we deduce that $\mathcal{X}_{\Delta}/\Delta$ is a smoothing of $X$. Moreover, we have that $\overline{\mathcal{J}}^d_{\Delta}$ is the disjoint union of $\operatorname{Pic}^d(\mathcal{X}_{\Delta, \eta})= {\operatorname{Simp}}^d(\mathcal{X}_{\Delta, \eta})$ and of $\overline{J}_\sigma \subseteq {\operatorname{Simp}}^d(X)$, where $\sigma$ is the stability condition for $\mathcal{X}_s$. 
The morphism $f: \Delta\setminus\{0\}\rightarrow \operatorname{Pic}^d(\mathcal{X}_\theta)$ corresponds to a line bundle $[\mathcal{L}_{\eta}] \in \operatorname{Pic}^d(\mathcal{X}_{\Delta, \eta}/\eta)$. We know from [@esteves] that ${\operatorname{Simp}}^d(\mathcal X_\Delta/\Delta)$ satisfies the existence part of the valuative criterion of properness, hence $\mathcal L_{\eta}$ extends (possibly nonuniquely) to an element $[\mathcal L'] \in {\operatorname{Simp}}^d(\mathcal{X}_\Delta/\Delta)$. Our proof is concluded if we can show that $\mathcal L_{\eta}$ extends to a unique $[\mathcal L] \in\overline{\mathcal{J}}^d_{ \Delta}$. When the total space $\mathcal{X}_{\Delta}$ of the smoothing $\mathcal{X}_\Delta/\Delta$ is regular, this follows immediately from Remark [Remark 10](#twister){reference-type="ref" reference="twister"}. If the total space $\mathcal{X}_\Delta$ of the smoothing $\mathcal{X}_{\Delta}/\Delta$ is not regular, then each of the nodes of the central fiber $X$ is a singularity of type $A_n$ of $\mathcal X_\Delta$, for some $n \geq 0$. Thus each $e\in\mathop{\mathrm{Edges}}(\Gamma)$ corresponds to a singular point of type $A_{n(e)}$ for some $n(e)\geq 0$. Let $m$ be the least common multiple of the $\{n(e)\}_{e \in \mathop{\mathrm{Edges}}(\Gamma)}$, and consider the resolution of singularities $g \colon \widetilde{\mathcal{X}} \to \mathcal{X}_\Delta$ obtained by blowing up $m$ times each node of $\mathcal X_\Delta$. The central fiber $\widetilde{X} \subset \widetilde{\mathcal{X}}$ is therefore obtained by replacing each node of the central fiber $X$ of $\mathcal{X}_\Delta$ with a rational bridge of length $m$. Equivalently, the dual graph $\widetilde{\Gamma}$ of $\widetilde{X}$ is obtained by subdividing $m$ times each edge of $\Gamma=\Gamma(X)$ by adding $m$ additional genus $0$ vertices. In other words, each edge of $\Gamma$ is replaced in $\widetilde{\Gamma}$ by a rational chain of $m+1$ edges.
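As a purely combinatorial sanity check (our own illustration, not part of the proof, with helper names that are ours), the effect of this subdivision on the complexity can be tested with the matrix-tree theorem: replacing every edge of a connected multigraph by a chain of $m+1$ edges multiplies the number of spanning trees by $(m+1)^{b_1}$, where $b_1=\#\mathop{\mathrm{Edges}}-\#\mathop{\mathrm{Vert}}+1$ is the first Betti number.

```python
from fractions import Fraction

def spanning_trees(n, edges):
    """Number of spanning trees via the matrix-tree theorem:
    the determinant of a reduced Laplacian, in exact rational arithmetic."""
    L = [[Fraction(0)] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1; L[v][v] += 1
        L[u][v] -= 1; L[v][u] -= 1
    # delete row and column 0, then take the determinant by Gaussian elimination
    M = [row[1:] for row in L[1:]]
    det, size = Fraction(1), len(M)
    for i in range(size):
        piv = next((r for r in range(i, size) if M[r][i] != 0), None)
        if piv is None:
            return 0
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            det = -det
        det *= M[i][i]
        for r in range(i + 1, size):
            factor = M[r][i] / M[i][i]
            for c in range(i, size):
                M[r][c] -= factor * M[i][c]
    return int(det)

def subdivide(n, edges, m):
    """Replace each edge by a chain of m+1 edges (m new vertices per edge)."""
    new_edges, next_v = [], n
    for u, v in edges:
        chain = [u] + list(range(next_v, next_v + m)) + [v]
        next_v += m
        new_edges += list(zip(chain, chain[1:]))
    return next_v, new_edges

# Theta graph: 2 vertices joined by 3 parallel edges, so b_1 = 2 and 3 spanning trees.
theta = (2, [(0, 1), (0, 1), (0, 1)])
t = spanning_trees(*theta)                           # 3
n1, e1 = subdivide(*theta, m=1)
assert spanning_trees(n1, e1) == (1 + 1) ** 2 * t    # (m+1)^{b_1} * t = 12
```

The same check on the triangle with $m=1$ produces the $6$-cycle, whose $6$ spanning trees again match $(m+1)^{b_1}\cdot 3$.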
The total space $\widetilde{\mathcal{X}}$ is now regular, and taking the direct image via $g$ induces a surjection $g_* \colon \mathop{\mathrm{Pic}}^d(\widetilde{X}) \to {\operatorname{Simp}}^d(X)$ on the central fiber. In order to prove the existence and uniqueness of an extension of $\mathcal L_{\eta}$ to $\mathcal{X}_\Delta/\Delta$ with a sheaf in $\overline{\mathcal{J}}^d_{\Delta}$, we prove that $g^*(\mathcal L_{\eta})$ extends uniquely on $\widetilde{\mathcal{X}}/\Delta$ to a line bundle whose restriction to $\widetilde{X}$ has multidegree in $\widetilde{\sigma}$, where the elements of $\widetilde{\sigma}$ are defined by lifting elements of $\sigma$ to $\widetilde{\Gamma}$ as in Equation [\[lift\]](#lift){reference-type="ref" reference="lift"}. Note that taking the direct image via $g$ maps line bundles on $\widetilde{X}$ with multidegree in $\widetilde{\sigma}(\widetilde{\Gamma})$ to rank $1$, torsion free, simple sheaves on $X$ with multidegree in $\sigma(\Gamma_0)$, for $\Gamma_0 \subseteq \Gamma$ the spanning subgraph whose edges $e$ correspond to the rational bridges on all of whose added vertices the multidegree in question is zero. By Corollary [Corollary 22](#coroll: chipfiring){reference-type="ref" reference="coroll: chipfiring"} combined with the assumption that $\sigma$ satisfies the second axiom of Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"}, we deduce that $\widetilde{\sigma}$ is a minimal complete set of representatives for the action of the twister group $\mathop{\mathrm{Tw}}(\widetilde{\Gamma})$ on $S^d(\widetilde{\Gamma})$. We conclude, as in the regular case, that $g^*(\mathcal L_{\eta})$ extends to a unique line bundle $\widetilde{\mathcal L}$ on $\widetilde{\mathcal{X}}/\Delta$ whose multidegree on the central fiber $\widetilde{X}$ is an element of $\widetilde{\sigma}$.
Therefore $g_*{\widetilde{\mathcal{L}}}$ is the unique extension of $\mathcal L_{\eta}$ whose multidegree on $X$ is an element of $\sigma$. This completes the proof. ◻ We are now ready to complete the proof of Theorem [Theorem 25](#cor: fromstabtojac){reference-type="ref" reference="cor: fromstabtojac"}. *Proof of Theorem [Theorem 25](#cor: fromstabtojac){reference-type="ref" reference="cor: fromstabtojac"}.* As $\overline{\mathcal{J}}^d_{\mathfrak{S}}$ is by definition constructible, openness is shown by proving that $\overline{\mathcal{J}}^d_{\mathfrak{S}} \subseteq {\operatorname{Simp}}^d(\mathcal{X}/S)$ is stable under generalization ([@stacks [Lemma 5.19.10](https://stacks.math.columbia.edu/tag/0060)]). That $\overline{\mathcal{J}}^d_{\mathfrak{S}}$ is stable under generalization follows from the first requirement of Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"}, and from the requirement that the family of stability conditions is compatible for graph morphisms arising from étale specializations (as specified in Definition [Definition 16](#familyfinejacstab){reference-type="ref" reference="familyfinejacstab"}). Properness is Lemma [\[properoverdelta\]](#properoverdelta){reference-type="ref" reference="properoverdelta"}. ◻ # From smoothable fine compactified Jacobians to stability conditions In this section we show that a smoothable fine compactified Jacobian defines a stability condition in a natural way. We do so first for the case of a curve in isolation (Corollary [Corollary 35](#associsstab){reference-type="ref" reference="associsstab"}), and then for the case of families (Proposition [Proposition 36](#compcontract){reference-type="ref" reference="compcontract"}). We conclude by proving our main result, Corollary [Corollary 38](#bijection){reference-type="ref" reference="bijection"}, which establishes that the operations described in this section and in the previous one are inverse to each other. 
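Before setting up notation, it may help to see the "minimal complete set of representatives" condition of Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"} in the simplest case, the dual graph of a vine curve: two vertices joined by $t$ edges, as in Example [Example 14](#vinestab){reference-type="ref" reference="vinestab"}. There the twister group is generated by $(t,-t)$, so among bidegrees of a fixed total degree $d$ the orbits are classified by the first entry modulo $t$, and any window of $t$ consecutive bidegrees is a minimal complete set of representatives. The following sketch (our own toy illustration; all names are ours) verifies this:

```python
def vine_orbit(bidegree, t):
    """Orbit invariant for chip-firing on the two-vertex graph with t edges:
    firing a vertex changes the bidegree by +-(t, -t), so among bidegrees of
    a fixed total degree the first entry mod t classifies the orbits."""
    return bidegree[0] % t

t, d = 3, 5
# a window of t consecutive bidegrees of total degree d
window = [(i, d - i) for i in range(4, 4 + t)]
invariants = [vine_orbit(b, t) for b in window]
# minimal: the window elements are pairwise inequivalent
assert len(set(invariants)) == t
# complete: every bidegree of total degree d is equivalent to a window element
assert all(vine_orbit((a, d - a), t) in set(invariants) for a in range(-20, 20))
print(sorted(invariants))  # [0, 1, 2]
```

The same window shifted by any integer works equally well, which matches the fact that stability conditions on a vine curve form a $1$-parameter family of translates.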
Throughout this section, we will consider a fixed nodal curve $X$ over a field extension $K$ of $k$, with dual graph $\Gamma$ and denote by $\overline{J}\subset \operatorname{Simp}^{d}(X)$ a degree $d$ fine compactified Jacobian of $X$. **Definition 28**. We define the **associated assignment** $\sigma_{\overline{J}}$ of ${\overline{J}}$ as $$\sigma_{\overline{J}}=\{(\Gamma_0(F),\underline{\deg}(F)):\;[F]\in\overline{J}\},$$ where $\Gamma_0(F)=\Gamma\setminus {\operatorname{N}}(F)$ is the connected spanning subgraph of $\Gamma$ obtained by removing the edges corresponding to the nodes where $F$ fails to be locally free. A key point is that, if a sheaf is an element of a given fine compactified Jacobian, then so are all other sheaves that fail to be locally free on the same set of nodes and that have the same multidegree: **Lemma 29**. *Let $\overline{J}\subset {\operatorname{Simp}}^d(X)$ be a degree $d$ fine compactified Jacobian, and assume that $(\Gamma_0,\underline{\mathbf d}) \in \sigma_{\overline{J}}$. Then $\overline{J}$ contains all sheaves $[F]\in {\operatorname{Simp}}^d(X)$ with ${\operatorname{N}}(F) =\Gamma\setminus\Gamma_0$ and $\underline{\deg}(F)=\underline{\mathbf d}$.* *Proof.* By possibly passing to the partial normalization of $X$ at the nodes in $\Gamma\setminus\Gamma_0$ we may assume that $\Gamma_0=\Gamma$. Thus we aim to prove that if $\mathcal{J}^{\underline{\mathbf d}}(X)\cap \overline{J}\neq \emptyset$, then $\mathcal{J}^{\underline{\mathbf d}}(X) \subset \overline{J}$. The moduli space of line bundles $\mathcal{J}^{\underline{\mathbf d}}(X)$ of multidegree $\underline{\mathbf d}$ is irreducible and $\overline{J}$ is open, so $\mathcal{J}^{\underline{\mathbf d}}(X) \cap \overline{J}\neq \emptyset$ implies that $\mathcal{J}^{\underline{\mathbf d}}(X)\cap \overline{J}$ is dense in $\mathcal{J}^{\underline{\mathbf d}}(X)$. 
For $[F]$ in $\mathcal{J}^{\underline{\mathbf d}}(X)$ we can then find a line bundle $L$ on $X \times \Delta$, for $\Delta$ the spectrum of some DVR, whose generic fiber $L_{\eta}$ is in $\mathcal{J}^{\underline{\mathbf d}}(X) \cap \overline{J}$ and whose special fiber $L_0$ equals $F$. By properness of $\overline{J}$ there exists a sheaf $L'$ on $X\times \Delta$ with the property that $L'_{X \times \eta}=L_{X \times \eta}$ (here $\eta$ is the generic point of $\Delta$) and whose special fiber $L'_0$ is in $\overline{J}$. Then $L' \otimes L^{-1}$ defines a morphism $\Delta \to {\operatorname{Simp}}^0(X)$ that maps the generic point $\eta$ to a closed point, hence it must be the constant morphism. We conclude, in particular, that $[F]=[L'_0]$ and so $[F] \in \overline{J}$. ◻ Our first goal is to show that if the fine compactified Jacobian is also smoothable, then its associated assignment does indeed satisfy the two conditions of a degree $d$ stability condition given in Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"}. That the associated assignment to a fine compactified Jacobian satisfies the first condition of Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"} follows immediately from Lemma [Lemma 19](#impliedbyopen){reference-type="ref" reference="impliedbyopen"}: **Corollary 30**.
*The associated assignment $\sigma_{\overline{J}}$ satisfies $$\label{Eqn: ChipAdding} (\Gamma_0,\underline{\mathbf d})\in\sigma_{\overline{J}} \Rightarrow (\Gamma_0\cup\{e\},\underline{\mathbf d}+\underline{\mathbf e}_{v_1}),(\Gamma_0\cup\{e\},\underline{\mathbf d}+\underline{\mathbf e}_{v_2})\in\sigma_{\overline{J}},$$ for all spanning subgraphs $\Gamma_0\subseteq\Gamma$ and all edges $e$ of $\mathop{\mathrm{Edges}}(\Gamma)\setminus\mathop{\mathrm{Edges}}(\Gamma_0)$ with endpoints $v_1$ and $v_2$.* Note that the above statement remains valid (and with the same proof) for an arbitrary *open* subscheme of ${\operatorname{Simp}}^d(X)$; properness and smoothability play no role here. Our aim is now to prove, in Proposition [Proposition 32](#cond2){reference-type="ref" reference="cond2"}, that smoothable fine compactified Jacobians also satisfy the second condition of Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"}. Before we do that, we show that Corollary [Corollary 30](#cond1){reference-type="ref" reference="cond1"} allows us to conclude that fine compactified Jacobians of vine curves are automatically smoothable. **Example 31**. Let $X$ be a vine curve of type $t$ (as defined in Example [Example 6](#ex: vinecurves){reference-type="ref" reference="ex: vinecurves"}) over some algebraically closed field $K=k$. We claim that any fine compactified Jacobian $\overline{J}(X)$ of $X$ is of the form $\overline{J}(X) = \overline{J}_\sigma(X)$ for some stability condition $\sigma$ as in Example [Example 14](#vinestab){reference-type="ref" reference="vinestab"}. In particular, by Corollary [Corollary 26](#stab->fine){reference-type="ref" reference="stab->fine"}, $\overline{J}(X)$ is smoothable. This is clear when $t=1$, as in that case each geometrically connected component of ${\operatorname{Simp}}^d(X)=\operatorname{Pic}^d(X)$ is proper.
For the case of arbitrary $t>1$, we work by induction on $t$, using only Part (1) of Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"} and the claim of Lemma [Lemma 29](#exists-forall){reference-type="ref" reference="exists-forall"} (neither of which relies on the hypothesis of smoothability). Fix $e \in \mathop{\mathrm{Edges}}(\Gamma(X))$, and let $X_e$ be the partial normalization of $X$ at the node $e$ only. Because $t >1$ we have that $X_e$ is connected and therefore ${\operatorname{Simp}}^{d-1}(X_e) \neq \emptyset$. Let $j\colon {\operatorname{Simp}}^{d-1}(X_e) \to {\operatorname{Simp}}^{d}(X)$ be the morphism obtained by taking the pushforward along the partial normalization $X_e \to X$. Let $\overline{J}_e \subseteq j^{-1}\left(\overline{J}(X)\right)$ be a connected component. Then $\overline{J}_e$ is a degree $d-1$ fine compactified Jacobian of $X_e$, and $X_e$ is a vine curve of type $t-1$. We apply the induction hypothesis to deduce that there exists some degree $d-1$ stability condition $\sigma_e$ as described in Example [Example 14](#vinestab){reference-type="ref" reference="vinestab"} such that $\overline{J}_e=\overline{J}_{\sigma_e}(X_e)$. Let $\sigma$ be the stability condition on $X$ of the type studied in Example [Example 14](#vinestab){reference-type="ref" reference="vinestab"} that is minimal among those that respect Condition (1) of Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"} and whose restriction to $X_e$ equals $\sigma_e$. By Corollary [Corollary 30](#cond1){reference-type="ref" reference="cond1"} we have that $\overline{J}_\sigma(X)$ is contained in $\overline{J}(X)$. By Corollary [Corollary 26](#stab->fine){reference-type="ref" reference="stab->fine"}, $\overline{J}_\sigma$ is open in ${\operatorname{Simp}}^d(X)$, proper, and geometrically connected.
As $\overline{J}(X)$ also enjoys these three properties, we conclude that $\overline{J}(X)=\overline{J}_\sigma(X)$. We now go back to our main line of reasoning, and show that the assignment associated to a *smoothable* fine compactified Jacobian also satisfies the second condition of Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"}. **Proposition 32**. *If $\overline{J}\subseteq {\operatorname{Simp}}^d(X)$ is also *smoothable*, then for all spanning subgraphs $\Gamma_0 \subseteq \Gamma$, the subset $\sigma_{\overline{J}}(\Gamma_0)\subset S^d_{\Gamma}(\Gamma_0)$ is a minimal complete set of representatives for the action of the twister group $\mathop{\mathrm{Tw}}(\Gamma_0)$ on $S^d_{\Gamma}(\Gamma_0)$.* *Proof.* We first fix some notation for our proof. Let $\mathcal{X}/\Delta$ be a regular smoothing of $X$ and let $\overline{\mathcal J}_{\mathcal{X}/\Delta}$ be an open and $\Delta$-proper subscheme of ${\operatorname{Simp}}^d(\mathcal{X}/\Delta)$ whose special fiber equals $\overline{J}$. Then consider the degree $2$ base change $\Delta \to \Delta$ of $\mathcal{X}/\Delta$, giving a nonregular smoothing $\mathcal{X}'/ \Delta$ with $A_2$ singularities at all nodes of the special fiber $X$. By functoriality of the Picard functor, and by stability under base change of openness and properness, we have that the base change $\overline{\mathcal J}'_{\mathcal{X}'/\Delta}$ is also an open and $\Delta$-proper subscheme of ${\operatorname{Simp}}^d(\mathcal{X}'/\Delta)$. Finally, let $f \colon \widetilde{\mathcal{X}} \to \mathcal{X}'$ be the blow up at all singularities. The family $\widetilde{\mathcal{X}}$ is then a regular smoothing of the special fiber $\widetilde{X}$, the curve obtained from $X$ by replacing each node with an irreducible rational bridge. We are now ready for the proof. 
First observe that, by the $m=1$ case of Corollary [Corollary 22](#coroll: chipfiring){reference-type="ref" reference="coroll: chipfiring"}, it is enough to prove that the collection $\widetilde{\sigma}_{\overline{J}}$ obtained by lifting every element of $\sigma_{\overline{J}}$ via Equation [\[lift\]](#lift){reference-type="ref" reference="lift"} is a minimal complete set of representatives for the action of $\mathop{\mathrm{Tw}}(\Gamma(\widetilde{X}))$. Let ${\bf \vec{d}}$ be the lift via [\[lift\]](#lift){reference-type="eqref" reference="lift"} of some ${\underline{\mathbf d}} \in S^d_\Gamma(\Gamma_0)$ for some $\Gamma_0 \subseteq \Gamma$. There exists $[L] \in \mathop{\mathrm{Pic}}^d(\widetilde{X})$ with $\underline{\deg}(L)={\bf \vec{d}}$, so that $\underline{\deg}(f_*(L)) \in S^d_{\Gamma}(\Gamma_0)$. By Hensel's lemma, $L$ extends to a family $[\mathcal{L}] \in \operatorname{Pic}^d(\widetilde{\mathcal{X}}/\Delta)$. Since $\overline{J}_{\mathcal{X}'/\Delta}$ is universally closed over $\Delta$, there exists $\mathcal{F}'\in {\operatorname{Simp}}^d(\mathcal{X}'/\Delta)$ such that $\mathcal{F}'|_{\mathcal{X}'_\eta}= f_* \mathcal{L}_{\mathcal{X}'_\eta}$ and $[\mathcal{F}'|_X] \in \overline{J}_{X'}$. We take $[\mathcal{L}'] \in \operatorname{Pic}^d(\widetilde{\mathcal{X}}/\Delta)$ such that $f_*(\mathcal{L}')= \mathcal{F}'$. Thus we have that $\left(\mathcal{L}'\otimes \mathcal{L}^{-1}\right)_{\widetilde{\mathcal{X}}_{\eta}}=\mathcal{O}_{\widetilde{\mathcal{X}}_{\eta}}$. We then let ${\bf \vec{t}}= \underline{\deg}(\mathcal{L}'\otimes \mathcal{L}^{-1}|_{\widetilde{X}})$, and so ${\bf\vec{d}}+{\bf\vec{t}}= \underline{\deg} (\mathcal{L}'|_{\widetilde{X}})$ is in $\widetilde{\sigma}_{\overline{J}}$. This proves that $\widetilde{\sigma}_{\overline{J}}$ is a complete set of representatives.
To prove minimality, assume that there exist lifts ${\bf \vec{d}_1}, {\bf \vec{d}_2} \in \widetilde{\sigma}_{\overline{J}}$ such that ${\bf \vec{d}_2}={\bf \vec{d}_1}+ {\bf \vec{t}}$ for some ${\bf \vec{t}} \in \mathop{\mathrm{Tw}}(\Gamma(\widetilde{X}))$. By Hensel's lemma and by Lemma [Lemma 29](#exists-forall){reference-type="ref" reference="exists-forall"}, there exist $[\mathcal{L}_1], [\mathcal{L}_2] \in \mathop{\mathrm{Pic}}^d(\widetilde{\mathcal{X}}/\Delta)$, coinciding on the generic fiber, and whose multidegrees on $\widetilde{X}$ equal ${\bf \vec{d}_1}, {\bf \vec{d}_2}$. The pushforwards $f_*(\mathcal{L}_1)$ and $f_*(\mathcal{L}_2)$ coincide on $\mathcal{X}'_{\eta}$, and because $\overline{J}_{\mathcal{X}'/\Delta}$ is separated over $\Delta$, their central fibers must coincide, which implies that ${\bf \vec{t}}=0$. This proves minimality. ◻ **Remark 33**. In [@caporasoneron] Caporaso introduced the notion of "being of Néron type" for a degree $d$ compactified Jacobian of a nodal curve $X$. In our notation, this can be restated as the property that $\sigma_{\overline{J}}(\Gamma(X))$ is a minimal complete set of representatives for the action of $\mathop{\mathrm{Tw}}(\Gamma(X))$ on $S^d_{\Gamma(X)}(\Gamma(X))$. Proposition [Proposition 32](#cond2){reference-type="ref" reference="cond2"} proves, in particular, that all smoothable fine compactified Jacobians are of Néron type. Similar results were obtained in [@kass] and [@meloviv] for fine compactified Jacobians obtained from some numerical polarizations (see Section [8](#Sec: OS){reference-type="ref" reference="Sec: OS"} for the notion of a numerical polarization). We refer the reader to [@kass] for the definition of a Néron model and its relations with compactified Jacobians. **Remark 34**. The smoothability assumption in the statement of Proposition [Proposition 32](#cond2){reference-type="ref" reference="cond2"} is crucial.
In [@paganitommasi] the authors give an example, for $X$ a nodal curve of genus $1$, of degree $0$ fine compactified Jacobians $\overline{J}\subset {\operatorname{Simp}}^0(X)$ whose collection of line bundle multidegrees contains an arbitrary number $r\geq 2$ of elements for each orbit of the action of $\mathop{\mathrm{Tw}}(\Gamma)$ on $S^0_{\Gamma}(\Gamma)$. Such compactified Jacobians can always be extended to a universally closed family over a regular smoothing, but such extensions are *separated* if and only if $r$ equals $1$. Putting the above together, we obtain: **Corollary 35**. *The associated assignment (Definition [Definition 28](#assocassign){reference-type="ref" reference="assocassign"}) to a smoothable degree $d$ fine compactified Jacobian of $X$ is a degree $d$ stability condition for the dual graph $\Gamma(X)$ (as in Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"}).* *Proof.* This is obtained as a combination of Corollary [Corollary 30](#cond1){reference-type="ref" reference="cond1"} and Proposition [Proposition 32](#cond2){reference-type="ref" reference="cond2"}. ◻ We observe that the same result holds for families. **Proposition 36**. *Let $\mathcal{X}/S$ be a family of nodal curves, and let $\overline{\mathcal J}=\overline{\mathcal J}_{\mathcal{X}/S}$ be a degree $d$ fine compactified Jacobian for the family. Assume that for each geometric point $s$ of $S$ the fine compactified Jacobian $\overline{J}_s$ of the fiber $X_s$ over $s \in S$ is *smoothable*.* *Then the collection $\set{\sigma_{\overline{J}_s}}$ of the associated assignments of $\overline{J}_s$ for all geometric points $s$ of $S$ is a family of degree $d$ stability conditions (as in Definition [Definition 16](#familyfinejacstab){reference-type="ref" reference="familyfinejacstab"}).* *Proof.* The case when $S$ is a single geometric point is Corollary [Corollary 35](#associsstab){reference-type="ref" reference="associsstab"}.
To complete our proof we will show that the assignment of $\sigma_{\overline{J}_s}$ is compatible with all morphisms $f \colon \Gamma(\mathcal{X}_{s}) \to \Gamma(\mathcal{X}_t)$ arising from étale specializations $t \rightsquigarrow s$. Assume that $F_{t}$ is a simple sheaf on the nodal curve $\mathcal{X}_{t}$ that specializes to $F_s$ on $\mathcal{X}_s$. Suppose that the subcurve $\mathcal{X}_{t, 0} \subseteq \mathcal{X}_{t}$ generalizes the subcurve $\mathcal{X}_{s, 0} \subseteq \mathcal{X}_s$. By flatness and by continuity of the Euler characteristic, and because we are passing to the torsion-free quotients, the degrees are related by $$\label{condition} \deg_{\mathcal{X}_{t, 0}}(F_{t})= \deg_{\mathcal{X}_{s, 0}}(F_{s}) + n(F_s,f)$$ where $n(F_s,f)$ is the number of nodes of $\mathcal{X}_{s,0}$ that are smoothed in $t$ and where $F_s$ fails to be locally free. (See Equation [\[deg:subcurve\]](#deg:subcurve){reference-type="eqref" reference="deg:subcurve"}). Formula [\[condition\]](#condition){reference-type="eqref" reference="condition"} is precisely the condition of Equation [\[comp-contractions\]](#comp-contractions){reference-type="eqref" reference="comp-contractions"}. ◻ ****Remark** 37**. Assume that the dual graph of the fibers of $\mathcal{X}/S$ is constant in $S$, and let $S$ be irreducible with $\eta \in S$ its generic point. Then the family $\mathcal X/S$ induces a group homomorphism from the group of étale specializations of $\eta$ to itself, which equals the Galois group $\operatorname{Gal}(k(\eta)^{\operatorname{sep}}/k(\eta))$, to the automorphism group $\operatorname{Aut}(\Gamma(X_{k(\eta)^{\operatorname{sep}}}))$ of the dual graph of the generic fiber. Let $G$ be the image of this group homomorphism.
In this case, Proposition [Proposition 36](#compcontract){reference-type="ref" reference="compcontract"} implies that the associated assignment $\sigma_{\overline{J}_{k(\eta)}}$, which is defined as a collection of discrete data on the dual graph $\Gamma(X_{k(\eta)^{\operatorname{sep}}})=\Gamma(X_{\overline{k(\eta)}})$, is invariant under the action of $G$. Note that, in the particular case where $S$ is a stratum of $\overline{\mathcal{M}}_{g,n}$ of curves whose dual graph is equal to a fixed graph $\Gamma$, we have $G=\operatorname{Aut}(\Gamma)$. By combining Theorem [Theorem 25](#cor: fromstabtojac){reference-type="ref" reference="cor: fromstabtojac"}/Corollary [Corollary 26](#stab->fine){reference-type="ref" reference="stab->fine"} with Proposition [Proposition 36](#compcontract){reference-type="ref" reference="compcontract"} and Lemma [Lemma 29](#exists-forall){reference-type="ref" reference="exists-forall"}, we immediately obtain that the two operations of (1) taking the associated assignment to a fine compactified Jacobian, and (2) constructing the moduli space of stable sheaves associated with a given stability condition, are inverses of each other. **Corollary 38**. 
*Let $\mathcal{X}/S$ be a family of nodal curves over an irreducible $S$, and assume that $S=\mathop{\mathrm{Spec}}(K)$, or that $\mathcal{X}_{\theta}/\theta$ is smooth for $\theta$ the generic point of $S$.* *If $\overline{\mathcal J}\subseteq {\operatorname{Simp}}^d(\mathcal{X}/S)$ is a degree $d$ fine compactified Jacobian and $\sigma_{\overline{\mathcal J}}$ is its associated assignment (Definition [Definition 28](#assocassign){reference-type="ref" reference="assocassign"}), then the moduli space $\overline{\mathcal{J}}_{\sigma_{\overline{\mathcal J}}}$ of $\sigma_{\overline{\mathcal J}}$-stable sheaves (Definitions [Definition 23](#defstable){reference-type="ref" reference="defstable"} and [Definition 24](#defstablefamily){reference-type="ref" reference="defstablefamily"}) equals $\overline{\mathcal J}$.* *Conversely, if $\tau$ is a family of degree $d$ stability conditions, and $\overline{\mathcal J}_\tau$ is the moduli space of $\tau$-stable sheaves, then the associated assignment $\sigma_{\overline{\mathcal J}_\tau}$ equals $\tau$.* # Numerical polarizations {#Sec: OS} An example of a (smoothable fine compactified Jacobian) stability condition on a nodal curve $X/K$ (as in Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"}) comes from numerical polarizations, introduced by Oda--Seshadri in [@oda79]. (In fact, the Oda--Seshadri formalism also permits the construction of compactified Jacobians that are not necessarily *fine* in the sense of Definition [Definition 2](#def:finecompjac){reference-type="ref" reference="def:finecompjac"}). In this section we review the notion of numerical stability for a single curve (Definition [Definition 39](#OS- stable){reference-type="ref" reference="OS- stable"}) and extend it to families (Definition [Definition 48](#familyphistab){reference-type="ref" reference="familyphistab"}) following [@kp3]. We let $\Gamma$ be the dual graph of $X$.
Recall that a subgraph $\Gamma' \subseteq \Gamma$ is **induced** when for all pairs of vertices $v,w$ it contains, it also contains all edges of $\Gamma$ having $v$ and $w$ as endpoints. **Definition 39**. Let $V^d(\Gamma) \subset \mathbb{R}^{\mathop{\mathrm{Vert}}(\Gamma)}$ be the sum-$d$ affine subspace. Let ${\phi} \in V^d(\Gamma)$, and let $\Gamma_0 \subseteq \Gamma$ be a (not necessarily connected) spanning subgraph with edge set $E_0=\mathop{\mathrm{Edges}}(\Gamma_0)$. We say that $\underline{\mathbf d} \in S^d_{\Gamma}(\Gamma_0)$ is **${\phi}$-semistable (resp. ${\phi}$-stable)** on $\Gamma_0$ when the inequality $$\label{stabineq}\left|\sum_{v \in \operatorname{Vert}(\Gamma')}( \underline{\bf{d}}(v) -\phi(v)) + \left|\mathop{\mathrm{Edges}}(\Gamma') \setminus E_0 \right|+\frac{e_{\mathop{\mathrm{Edges}}(\Gamma) \setminus E_0}(\Gamma')}{2}\right|\leq \frac{e_{E_0}(\Gamma') }{2}$$ (resp. $<$) is satisfied for all induced subgraphs $\emptyset \subsetneq \Gamma' \subsetneq \Gamma$. Here by $e_{E}(\Gamma')$ for some $E \subseteq \mathop{\mathrm{Edges}}(\Gamma)$ we denote the cardinality of the set of edges in $E$ that belong neither to $\Gamma'$ nor to the edge set of the induced subgraph on $\mathop{\mathrm{Vert}}(\Gamma) \setminus \mathop{\mathrm{Vert}}(\Gamma')$. For $\phi \in V^d(\Gamma)$ we define the (numerical, Oda--Seshadri) $\phi$-stability condition $$\label{phisemistable} \mathop{\mathrm{\sigma}}_{\Gamma, {\phi}}(\Gamma_0):= \Set{ \underline{\mathbf d} \in S^d_{\Gamma}(\Gamma_0): \underline{\mathbf d} \text{ is } {\phi} \text{-semistable on } \Gamma_0 } \subset S^d_{\Gamma}(\Gamma_0).$$ We define ${\phi} \in V^d(\Gamma)$ to be **nondegenerate** when for every spanning subgraph $\Gamma_0\subseteq \Gamma$, all elements of $\mathop{\mathrm{\sigma}}_{\Gamma, {\phi}}(\Gamma_0)$ are ${\phi}$-stable. Elements of $V^d(\Gamma)$ are called **numerical polarizations**. ****Remark** 40**.
If the spanning subgraph $\Gamma_0 \subseteq \Gamma$ is not connected, then $\mathop{\mathrm{Edges}}(\Gamma) \setminus \mathop{\mathrm{Edges}}(\Gamma_0)$ is a collection of edges that disconnects $\Gamma$. If we take for $\Gamma'$ the induced subgraph of $\Gamma$ on a connected component of $\Gamma_0$, the RHS of Inequality [\[stabineq\]](#stabineq){reference-type="eqref" reference="stabineq"} equals zero. Therefore if ${\phi}$ is nondegenerate and the spanning subgraph $\Gamma_0$ is not connected, we always have $\mathop{\mathrm{\sigma}}_{\Gamma, {\phi}} (\Gamma_0) = \emptyset$. We conclude that, for nondegenerate ${\phi}$'s, the disconnected spanning subgraphs $\Gamma_0$ of $\Gamma$ do not carry any additional information and they can be disregarded, as we have done in Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"}. Every nondegenerate numerical polarization gives a stability condition: **Proposition 41**. *Let $X$ be a nodal curve and let ${\phi} \in V^d(\Gamma(X))$ be nondegenerate. Then the ${\phi}$-stability condition $\mathop{\mathrm{\sigma}}_{\Gamma, {\phi}}$ defined in [Definition 39](#OS- stable){reference-type="ref" reference="OS- stable"} is a degree $d$ stability condition (as in [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"}) and the moduli space $\overline{J}_{\mathop{\mathrm{\sigma}}_{\Gamma, {\phi}}}(X)$ of sheaves that are stable with respect to $\mathop{\mathrm{\sigma}}_{\Gamma, {\phi}}$ (as defined in [Definition 23](#defstable){reference-type="ref" reference="defstable"}) is a degree $d$ smoothable fine compactified Jacobian.* *Proof.* The fact that the moduli space is a smoothable fine compactified Jacobian is [@paganitommasi Proposition 2.9]. Thus $\mathop{\mathrm{\sigma}}_{\Gamma, {\phi}}$ defines a stability condition by Corollary [Corollary 35](#associsstab){reference-type="ref" reference="associsstab"}. ◻ As far as we know, the following question is open. 
****Question** 42**. Let $\sigma_\Gamma$ be a degree $d$ stability condition. Does there exist a ${\phi} \in V^d(\Gamma)$ such that $\sigma_{\Gamma} = \mathop{\mathrm{\sigma}}_{\Gamma, {\phi}}$? (If one such ${\phi}$ exists, it is necessarily nondegenerate). Below we give three classes of examples where we can answer Question [**Question** 42](#question){reference-type="ref" reference="question"} in the positive. ****Example** 43**. (Irreducible curves). If $X$ is irreducible, then there is a unique stability condition $\sigma$ (see Example  [**Example** 13](#ex: stab-irred){reference-type="ref" reference="ex: stab-irred"}) and we have $\sigma=\sigma_{\phi}$ for $\phi$ the only element of $V^d(X)=\{d\}$. ****Example** 44**. (Vine curves of type $t$). Let $\Gamma$ consist of $2$ vertices $v_1, v_2$ connected by $t$ edges (and no loops). The complexity of $\Gamma$ equals $t$. With the notation as in Example [**Example** 14](#vinestab){reference-type="ref" reference="vinestab"}, it is straightforward to check that a stability condition $\sigma_{\Gamma}$ determined by some $\lambda \in \mathbf{Z}$ as described in loc.cit. equals $\mathop{\mathrm{\sigma}}_{\Gamma, \phi_\Gamma}$ for $$\phi_\Gamma(v_1, v_2):= \left(\lambda- \frac{t-1}{2}, d -\lambda + \frac{t-1}{2} \right),$$ and that $\phi_\Gamma \in V^d(\Gamma)$ is nondegenerate. ****Example** 45**. (Curves whose dual graph has genus $1$). Let $X$ be such that $b_1(\Gamma(X))=1$. A degree $d$ stability condition $\sigma$ on $\Gamma(X)$ is the same datum as a stability condition on $\Gamma'$, where $\Gamma'$ is obtained from $\Gamma(X)$ by setting the genus of each vertex to $0$. Let $X'$ be a curve whose dual graph is $\Gamma'$. Then by Corollary [Corollary 26](#stab->fine){reference-type="ref" reference="stab->fine"} the scheme $\overline{J}_{\sigma}(X')$ is a smoothable fine compactified Jacobian. 
Smoothable fine compactified Jacobians on $X'$ have been classified in [@paganitommasi]: they all arise as $\overline{J}_{\sigma_\phi}(X')$ for some $\phi \in V^d(\Gamma')=V^d(\Gamma(X))$. Thus, by Corollary [Corollary 35](#associsstab){reference-type="ref" reference="associsstab"} we have that $\sigma$ is also of that form. ****Example** 46**. (Integral Break Divisors). Let $\Gamma$ be a (not necessarily stable) graph and assume $d=b_1(\Gamma)$. Define the stability condition $\sigma_{\operatorname{IBD}}$ by $$\sigma_{\operatorname{IBD}}(\Gamma_0)=\{ \text{integral break divisors for } \Gamma_0 \},$$ for all connected spanning subgraphs $\Gamma_0 \subseteq \Gamma$. Recall that $\underline{\bf d} \in S^{b_1(\Gamma)}_{\Gamma}(\Gamma_0)$ is an integral break divisor for $\Gamma_0$ if and only if it is of the form $$\underline{\bf d} (v) =g(v) + \sum_{e \in \mathop{\mathrm{Edges}}(\Gamma_0) \setminus \mathop{\mathrm{Edges}}(T)} \underline{\bf e}_{t(e)}(v)$$ for some choice of a spanning tree $T \subseteq \Gamma_0$ and of an orientation $t \colon \mathop{\mathrm{Edges}}(\Gamma_0) \setminus \mathop{\mathrm{Edges}}(T) \to \mathop{\mathrm{Vert}}(\Gamma_0)$ (for $w \in \mathop{\mathrm{Vert}}(\Gamma_0)$ we denote by $\underline{\bf e}_w$ the function that is $1$ on the vertex $w$ and $0$ elsewhere). We claim that $\sigma_{\operatorname{IBD}}$ is indeed a degree $d=b_1(\Gamma)$ stability condition. Condition (1) of Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"} follows directly from the definition of an integral break divisor. The second axiom follows immediately from [@abkm Theorem 1.3] (earlier proved in [@mz]). 
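The construction of integral break divisors just described is easy to check by machine. The following Python sketch (our own illustrative encoding, not from the paper: genus-$0$ vertices, $\Gamma_0 = \Gamma$, and all function names are ours) enumerates the break divisors of the theta graph by running over all pairs of a spanning tree and an orientation of the remaining edges, and confirms that their number equals the number of spanning trees, as guaranteed by [@abkm Theorem 1.3]:

```python
from itertools import combinations, product

def spans(vertices, edge_list):
    """True if edge_list connects all the vertices."""
    seen, changed = {vertices[0]}, True
    while changed:
        changed = False
        for u, v in edge_list:
            if (u in seen) != (v in seen):
                seen.update((u, v))
                changed = True
    return len(seen) == len(vertices)

def break_divisors(vertices, edges, genus=None):
    """Integral break divisors of a connected multigraph (taking Gamma_0 = Gamma),
    enumerated over all pairs (spanning tree T, orientation of the edges
    outside T)."""
    genus = genus or {v: 0 for v in vertices}
    n = len(vertices)
    divisors = set()
    for T in combinations(range(len(edges)), n - 1):
        # n-1 edges connecting all n vertices necessarily form a tree
        if not spans(vertices, [edges[i] for i in T]):
            continue
        rest = [i for i in range(len(edges)) if i not in T]
        for choice in product((0, 1), repeat=len(rest)):
            d = dict(genus)
            for i, side in zip(rest, choice):
                d[edges[i][side]] += 1   # orient edge i toward one endpoint
            divisors.add(tuple(d[v] for v in vertices))
    return sorted(divisors)

# Theta graph: 2 vertices joined by 3 parallel edges, b_1 = 2, so degree d = 2.
V, E = ["v1", "v2"], [("v1", "v2")] * 3
print(break_divisors(V, E))   # → [(0, 2), (1, 1), (2, 0)]
assert len(break_divisors(V, E)) == 3   # = number of spanning trees
```

Different pairs (tree, orientation) can produce the same multidegree, which is why the enumeration collects them into a set before comparing with the spanning-tree count.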
When $\Gamma$ is stable, it follows from [@cps Lemma 5.1.5] that $\sigma_{\operatorname{IBD}}= \sigma_{\Gamma, \phi_{\operatorname{can}}^{\Gamma}}$, for $$\label{phican} \phi_{\operatorname{can}}^{\Gamma}(v):= \frac{g(\Gamma) }{2 g(\Gamma) -2} \cdot \left(2g(v)-2+\operatorname{val}_{\Gamma}(v)\right)$$ where $\operatorname{val}_{\Gamma}(v)$ is the valence of the vertex $v$ in $\Gamma$. (This follows from loc.cit. as $\phi^{\Gamma}_{\operatorname{can}}$ is induced from the canonical divisor, which is ample when $\Gamma$ is stable). When $\Gamma$ is not necessarily stable, we define $$\label{phiIBD} \phi_{\operatorname{IBD}}(v):= \frac{g(\Gamma) + |\mathop{\mathrm{Vert}}(\Gamma)|}{2 \left( g(\Gamma) + |\mathop{\mathrm{Vert}}(\Gamma)|\right) -2} \cdot (2g(v)+\operatorname{val}_{\Gamma}(v)) -1.$$ We claim that $\sigma_{\operatorname{IBD}}= \sigma_{\Gamma, \phi_{\operatorname{IBD}}}$. Let $\Gamma'$ be the graph obtained from $\Gamma$ by increasing the genus of each vertex by $1$. Then the integral break divisors of $\Gamma'$ are the integral break divisors on $\Gamma$ increased by $1$ on each vertex and $\Gamma'$ is stable. Therefore the integral break divisors on $\Gamma$ are stable for the stability condition $\phi_{\operatorname{can}}^{\Gamma'}-1$, which equals $\phi_{\operatorname{IBD}}$. This proves our claim. We are now ready to define families of numerical polarizations, similarly to what was done in Definition [Definition 16](#familyfinejacstab){reference-type="ref" reference="familyfinejacstab"} for families of stability conditions. We shall see that there are two ways of doing so, and that they are not equivalent. **Definition 47**. Let $f \colon \Gamma \to \Gamma'$ be a morphism of stable graphs. We say that $\phi \in V^d(\Gamma)$ is $f$-compatible with $\phi' \in V^d(\Gamma')$ if $$\phi'(w)= \sum_{f(v)=w} \phi(v).$$ **Definition 48**. 
Let $\mathcal{X}/S$ be a family of nodal curves, and let $\Phi=(\phi_s \in V^d(\Gamma(\mathcal{X}_s)) )_{s \in S}$ be a collection of numerical polarizations, one for each geometric point of $S$. Assume also that $\Phi$ is **nondegenerate**, by which we mean that so is each of its coordinates $\phi_s$ for $s \in S$ (as per Definition [Definition 39](#OS- stable){reference-type="ref" reference="OS- stable"}). The collection $\Phi$ is **weakly compatible** if the collection $\sigma_{\Phi}:=(\sigma_{\Gamma(X_s), \Phi(\Gamma(X_s))})_{s \in S}$ of degree $d$ stability conditions defined via Proposition [Proposition 41](#OSisfine){reference-type="ref" reference="OSisfine"} is a family of degree $d$ stability conditions as prescribed by Definition [Definition 16](#familyfinejacstab){reference-type="ref" reference="familyfinejacstab"}. The collection $\Phi$ is **strongly compatible** if it is $f$-compatible for all morphisms $f \colon \Gamma(\mathcal{X}_{s}) \to \Gamma(\mathcal{X}_t)$ that arise from some étale specialization $t \rightsquigarrow s$ occurring on $S$. For a strongly compatible collection, one can define the notion of a $\Phi$-stable sheaf for all fibers as in Definition [Definition 39](#OS- stable){reference-type="ref" reference="OS- stable"}. We shall not do that, as this would be a repetition of our Definition [Definition 24](#defstablefamily){reference-type="ref" reference="defstablefamily"}. Instead, we observe the following: **Corollary 49**. *Let $\mathcal{X}/S$ be a family of nodal curves over an irreducible scheme $S$ with generic point $\theta$, and assume that the generic element $\mathcal{X}_\theta/\theta$ is smooth. 
If $\Phi$ is a nondegenerate and weakly compatible family of numerical polarizations, the moduli space $\overline{\mathcal{J}}_{\sigma_{\Phi}}$ of sheaves that are stable with respect to $\sigma_{\Phi}$ is a family of degree $d$ fine compactified Jacobians.* *Proof.* By combining Proposition [Proposition 41](#OSisfine){reference-type="ref" reference="OSisfine"} with the fact that $\Phi$ is nondegenerate and weakly compatible, we deduce that $\sigma_{\Phi}$ is a family of stability conditions for $\mathcal{X}/S$. The result then follows from Theorem [Theorem 25](#cor: fromstabtojac){reference-type="ref" reference="cor: fromstabtojac"}. ◻ In [@kp3] the authors studied the theory of universal Oda--Seshadri stability conditions for compactified Jacobians. In loc.cit. they defined, for fixed $(g,n)$ such that $2g-2+n>0$, the space $V_{g,n}^d$ of strongly compatible elements $\Phi=(\phi_s)_{s \in \overline{\mathcal{M}}_{g,n}}$ for all morphisms $f \colon \Gamma_1 \to \Gamma_2$ between any two elements of ${G}_{g,n}$ (a skeleton of the category of stable $n$-pointed graphs of genus $g$). In loc.cit. the authors also constructed, for each such nondegenerate $\Phi \in V_{g,n}^d$, a compactified Jacobian $\overline{\mathcal{J}}_{g,n}(\Phi)$. In the language of this paper, this can be rephrased as follows. **Corollary 50**. *If $\Phi\in V_{g,n}^d$ is a nondegenerate (strongly compatible, numerical) stability condition, then $\overline{\mathcal{J}}_{g,n}(\Phi) \subset {\operatorname{Simp}}^d(\overline{\mathcal{C}}_{g,n}/\overline{\mathcal{M}}_{g,n})$ is a fine compactified universal Jacobian.* A natural question that we will address next is whether every fine compactified universal Jacobian arises as in Corollary [Corollary 50](#stronglycomp){reference-type="ref" reference="stronglycomp"}, or if one can construct fine compactified universal Jacobians from nondegenerate weakly compatible numerical stability conditions that are not strongly compatible. ****Remark** 51**.
It is clear that a strongly compatible nondegenerate family of numerical polarizations is also weakly compatible. The converse is not true. Even more: on some families $\mathcal{X}/S$ there exist nondegenerate families of numerical polarizations $\Phi=\left(\phi_s \in V(\Gamma_s)\right)$ that are weakly compatible, and such that there exists no nondegenerate $\Phi'=(\phi'_s)_{s \in S}$ that is strongly compatible and satisfies $\mathop{\mathrm{\sigma}}_{\Gamma_s, \phi'_s}=\mathop{\mathrm{\sigma}}_{\Gamma_s, \phi_s}$ for all $s \in S$. Such examples are shown to exist in [@paganitommasi Section 6], when $\mathcal{X}/S$ is the universal family $\overline{\mathcal{C}}_{1,n}/ \overline{\mathcal{M}}_{1,n}$ and $n \geq 6$. In other words, for all $n\geq 6$ there are fine compactified universal Jacobians $\overline{\mathcal{J}}_{1,n}$ that are not of the form $\overline{\mathcal{J}}_{1,n}(\Phi)$ for any $\Phi \in V^d_{1,n}$. Our main result in the next section, Theorem [Theorem 57](#OS=fine){reference-type="ref" reference="OS=fine"}, settles in the affirmative the analogous question for the case of the universal family $\overline{\mathcal{C}}_{g}/ \overline{\mathcal{M}}_{g}$ for all $g \geq 2$. More explicitly, when $n=0$, every fine compactified universal Jacobian $\overline{\mathcal{J}}_{g}$ arises as in Corollary [Corollary 50](#stronglycomp){reference-type="ref" reference="stronglycomp"}, i.e. it is of the form $\overline{\mathcal{J}}_{g}(\Phi)$ for some $\Phi \in V_g^d$. # Classification of universal stability conditions {#sec: univ} The main result in this section is a classification of *universal* stability conditions, that is, the case when the family $\mathcal{X}/S$ is the universal family over $S= \overline{\mathcal{M}}_g$, the moduli stack of stable curves of genus $g$ (and no marked points). As an intermediate step, we will also produce results that are valid for the case of the universal family over $S= \overline{\mathcal{M}}_{g,n}$ for arbitrary $n$. 
One way to generate universal stability conditions is by means of strongly compatible (universal) numerical polarizations, see Definition [Definition 48](#familyphistab){reference-type="ref" reference="familyphistab"} and Corollary [Corollary 50](#stronglycomp){reference-type="ref" reference="stronglycomp"}. Our main result here is Theorem [Theorem 57](#OS=fine){reference-type="ref" reference="OS=fine"}, where we classify all universal stability conditions with $n=0$ and for every genus, by proving that they all arise from strongly compatible numerical polarizations, i.e. they are all of the form $\mathop{\mathrm{\sigma}}_{\Phi}$ for some $\Phi \in V_g^d$ (see Corollary [Corollary 49](#cor: familyphi){reference-type="ref" reference="cor: familyphi"} and Corollary [Corollary 50](#stronglycomp){reference-type="ref" reference="stronglycomp"}). Combining with the fact, shown in Proposition [Proposition 36](#compcontract){reference-type="ref" reference="compcontract"}, that universal stability conditions classify fine compactified universal Jacobians, we deduce in Corollary [Corollary 58](#maincoroll){reference-type="ref" reference="maincoroll"} that, in the absence of marked points, all fine compactified universal Jacobians are the classical ones constructed in the nineties by Caporaso, Pandharipande and Simpson. As discussed in Remark [**Remark** 51](#n>0){reference-type="ref" reference="n>0"}, an analogous result does not hold, in general, when $n>0$. Recall from Example [**Example** 6](#ex: vinecurves){reference-type="ref" reference="ex: vinecurves"} that, for $t \in \mathbb{N}$, a **vine curve of type $t$** is a curve with $2$ nonsingular irreducible components joined by $t$ nodes. These curves and their dual graphs will play an important role in this section. We will break our argument into $3$ parts.
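Since vine curves will be used repeatedly below, it may help to see a stability condition on them computed explicitly. The following Python sketch (our own encoding and function names, not from the paper) checks the inequality of Definition 39 directly and confirms that, on the full vine graph of type $t$, the polarization $\phi_\Gamma$ of Example 44 selects exactly $t$ stable multidegrees (the complexity of the graph), while on each spanning tree it selects exactly one:

```python
from itertools import combinations, product
from math import floor, ceil

def phi_stable_multidegrees(vertices, edges, E0, d, phi):
    """Multidegrees in S^d_Gamma(Gamma_0) that are phi-stable (strict version of
    the inequality in Definition 39), for Gamma_0 the spanning subgraph with
    edge set E0 (a set of indices into `edges`; parallel edges allowed,
    loops omitted)."""
    total = d - (len(edges) - len(E0))        # total degree on Gamma_0
    # proper nonempty vertex subsets W <-> induced subgraphs Gamma'
    witnesses = [set(W) for r in range(1, len(vertices))
                 for W in combinations(vertices, r)]
    lo = min(phi.values()) - len(edges)       # crude search window around phi
    hi = max(phi.values()) + len(edges)
    stable = []
    for vals in product(range(floor(lo), ceil(hi) + 1), repeat=len(vertices)):
        dd = dict(zip(vertices, vals))
        if sum(vals) != total:
            continue
        def ok(W):
            inside = sum(1 for i, (u, v) in enumerate(edges)
                         if u in W and v in W and i not in E0)
            cross = [i for i, (u, v) in enumerate(edges) if (u in W) != (v in W)]
            c0 = sum(1 for i in cross if i in E0)
            lhs = abs(sum(dd[v] - phi[v] for v in W)
                      + inside + (len(cross) - c0) / 2)
            return lhs < c0 / 2               # strict inequality: stability
        if all(ok(W) for W in witnesses):
            stable.append(vals)
    return stable

# Vine curve of type t = 3 with the polarization of Example 44
# (lambda = 2, d = 5): phi = (lambda - (t-1)/2, d - lambda + (t-1)/2) = (1, 4).
V, E, phi = ["v1", "v2"], [("v1", "v2")] * 3, {"v1": 1, "v2": 4}
print(phi_stable_multidegrees(V, E, {0, 1, 2}, 5, phi))  # t multidegrees on the full graph
print(phi_stable_multidegrees(V, E, {0}, 5, phi))        # a single one on a spanning tree
```

The function uses the strict inequality throughout, so it computes $\phi$-stable (not merely semistable) multidegrees; for nondegenerate $\phi$ the two notions coincide.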
## First part: universal stability conditions are uniquely determined by their restrictions to vine curves The main result of this section, summarized in the title, is a combination of Lemma [Lemma 52](#enoughontrees){reference-type="ref" reference="enoughontrees"} and Lemma [Lemma 53](#determinedover2){reference-type="ref" reference="determinedover2"} below. The first one establishes that a stability condition for a given curve is uniquely determined by its restriction to the spanning trees of the dual graph of that curve. In the second one we prove that an assignment of integers on all spanning trees of all elements of $G_{g,n}$ that is compatible with graph morphisms is uniquely determined by its values over the dual graphs of all vine curves. Let us fix a degree $d$ universal stability condition $\sigma$ of type $(g,n)$. Recall from Definition [Definition 16](#familyfinejacstab){reference-type="ref" reference="familyfinejacstab"} (and Remark [**Remark** 17](#Rem: univ stab){reference-type="ref" reference="Rem: univ stab"}) that this is the datum of a collection $$\mathop{\mathrm{\sigma}}=\Set{\mathop{\mathrm{\sigma}}_{\Gamma}}_{\Gamma \in {G}_{g,n}}$$ that is compatible with all graph morphisms in $G_{g,n}$. By Condition (1) of Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"}, if $T \subseteq \Gamma$ is a spanning *tree*, then $\mathop{\mathrm{\sigma}}_{\Gamma}(T)= \set{\underline{\mathbf d}_{T}}$ contains a *unique* multidegree $\underline{\mathbf d}_{T} \in S^d_\Gamma(T)$. The following lemma shows that a stability condition is "overdetermined" by the collection of data obtained by extracting this unique value from all spanning trees. **Lemma 52**. *Let $\Gamma$ be a stable graph and let us fix, for all spanning trees $T\subseteq \Gamma$, a multidegree $\underline{\mathbf d}_{T} \in S^d_\Gamma(T)$.
Then there exists *at most one* degree $d$ stability condition $\mathop{\mathrm{\sigma}}_{\Gamma}$ whose value $\mathop{\mathrm{\sigma}}_{\Gamma}(T)$ at each spanning tree $T\subseteq \Gamma$ equals $\Set{\underline{\mathbf d}_{T}}$.* *Proof.* Let $\mathop{\mathrm{\sigma}}_{\Gamma}$ be a degree $d$ stability condition that satisfies the hypothesis. We inductively define another collection $M_{\Gamma}$, specifying $M_{\Gamma}(\Gamma_0) \subset S_{\Gamma}^d(\Gamma_0)$ for all spanning subgraphs $\Gamma_0$ of $\Gamma$. Starting from the assignments $M_{\Gamma}(T):=\Set{\underline{\mathbf d}_{T}}$ for all spanning trees $T \subseteq \Gamma$, let $M_{\Gamma}$ be the minimal assignment that satisfies Condition (2) of Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"}. Because $\mathop{\mathrm{\sigma}}_{\Gamma}$ satisfies Condition (2) of Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"}, for all spanning subgraphs $\Gamma_0 \subseteq \Gamma$ we have the inclusion $M_{\Gamma}(\Gamma_0) \subseteq \mathop{\mathrm{\sigma}}_{\Gamma}(\Gamma_0)$. The inequality $|M_{\Gamma}(\Gamma_0)|\geq c(\Gamma_0)$ was proved by Barmak [@barmak] (we refer to Yuen's PhD thesis, [@chiho Theorem 3.5.1], for a publicly available proof. Barmak's proof was originally shared with Yuen, who then filled in more details and published it with Barmak's permission.) By Condition (1) of Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"} we also have the equality $|\mathop{\mathrm{\sigma}}_{\Gamma}(\Gamma_0)| = c(\Gamma_0)$. From this we conclude that $M_{\Gamma}(\Gamma_0)$ and $\mathop{\mathrm{\sigma}}_{\Gamma}(\Gamma_0)$ must coincide. ◻ By applying the same idea as in [@kp3 Lemma 3.9], we now see how compatibility with graph morphisms propagates an assignment of multidegrees on all nonsymmetric strata of vine curves to an assignment on all vertices of all spanning trees of all graphs $\Gamma \in G_{g,n}$.
The reason for excluding the symmetric graphs is that compatibility under the automorphism exchanging the two vertices forces a unique possible value for each universal degree $d$ stability condition on those graphs. Let $T_{g,n} \subseteq G_{g,n}$ be the collection of loopless graphs with $2$ vertices, and let $T'_{g,n} \subseteq T_{g,n}$ be the subset of graphs such that each automorphism fixes the two vertices. For each element $G \in T_{g,n}$, fix an ordering of its two vertices, i.e. $\mathop{\mathrm{Vert}}(G)=\{v_1^G, v_2^G\}$. **Lemma 53**. *For each function $\alpha \colon T'_{g,n} \to \mathbb{Z}$, there exists a unique collection of assignments $\underline{\mathbf d}_{\Gamma, T} \in S^d_{\Gamma}(T)$ for each spanning tree $T \subseteq \Gamma$ of each element $\Gamma \in G_{g,n}$ such that we have $$\label{compcond} \sum_{v \in \mathop{\mathrm{Vert}}(\Gamma) : f(v) = v_1^G} \underline{\mathbf d}_{\Gamma, T}(v) = \alpha(G)$$ for all morphisms $f \colon \Gamma \to G$ where $G$ is in $T'_{g,n}$.* Note that Equation [\[compcond\]](#compcond){reference-type="eqref" reference="compcond"} is the same as compatibility for graph morphisms defined in Definition [Definition 15](#combo-family){reference-type="ref" reference="combo-family"}, Equation [\[comp-contractions\]](#comp-contractions){reference-type="eqref" reference="comp-contractions"}. *Proof.* This is an extension of the proof given in [@kp3 Lemma 3.8]. The main point of the proof is that for a fixed spanning tree $T \subseteq \Gamma$, contracting all but one edge of $T$ induces a bijection from $\mathop{\mathrm{Edges}}(T)$ to the set of morphisms from $\Gamma$ to some graph $G$ isomorphic to an element of $T_{g,n}$. The isomorphism is unique when $G \in T'_{g,n}$, and there are two isomorphisms when $G \in T_{g,n} \setminus T'_{g,n}$. Each $e \in \mathop{\mathrm{Edges}}(T)$ induces a morphism $\Gamma \to \Gamma_e$ obtained by contracting all edges $\mathop{\mathrm{Edges}}(T) \setminus \{ e\}$.
There are two cases, depending on how each automorphism of $\Gamma_e$ acts on $\mathop{\mathrm{Vert}}(\Gamma_e)$: (1) If each acts trivially, then there exists a unique isomorphism from $\Gamma_e$ to an element of $T'_{g,n}$. (2) If some act nontrivially, there are two isomorphisms from $\Gamma_e$ to an element of $T_{g,n} \setminus T'_{g,n}$. Now we view $\underline{\bf d}_{\Gamma, T}$ as a vector with $|\mathop{\mathrm{Vert}}(T)|=|\mathop{\mathrm{Vert}}(\Gamma)|$ unknown entries. In Case (1) we obtain an affine linear constraint among these unknowns by Equation [\[compcond\]](#compcond){reference-type="eqref" reference="compcond"}. In Case (2) a linear constraint is given by compatibility for the extra automorphism, which imposes that the sum of the values of $\underline{\bf d}_{\Gamma, T}$ on the vertices on one side of $e$ equals the sum of the values on the vertices on the other side of $e$. Altogether, this gives $\left|\mathop{\mathrm{Edges}}(T)\right|=\left|\mathop{\mathrm{Vert}}(\Gamma)\right|-1$ affine linear constraints on the $\left|\mathop{\mathrm{Vert}}(\Gamma)\right|$ different entries of $\underline{\mathbf d}_{\Gamma, T}$. One more affine linear constraint is given by the fact that $\underline{\bf d}_{\Gamma, T} \in S^d_{\Gamma}(T) \subset \mathbb{Z}^{\mathop{\mathrm{Vert}}(\Gamma)}$. All in all, we obtain an affine linear system of $|\operatorname{Vert}(T)|$ equations in $|\operatorname{Vert}(T)|$ unknowns, whose coefficient matrix one can check to have determinant $\pm 1$; hence the system has a unique solution for $\underline{\mathbf d}_{\Gamma, T}$ over $\mathbb{Z}$. ◻ Lemmas [Lemma 52](#enoughontrees){reference-type="ref" reference="enoughontrees"} and [Lemma 53](#determinedover2){reference-type="ref" reference="determinedover2"} allow us to regard every degree $d$ universal stability condition of type $(g,n)$ as an element of $\mathbb{Z}^{T'_{g,n}}$. However, not all elements of $\mathbb{Z}^{T'_{g,n}}$ give rise to a universal stability condition.
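The linear algebra in the proof of Lemma 53 can be illustrated on the smallest interesting case. In the following Python sketch (the choice of the triangle graph, the labels $\alpha_1, \alpha_2$, and the side conventions are our own illustrative choices, not from the paper), each of the two tree edges contributes one constraint of the type in Equation (compcond) via a contraction to a vine curve of type $2$, and membership in $S^d_{\Gamma}(T)$ fixes the total degree; the resulting coefficient matrix is unimodular, so the multidegree is uniquely determined:

```python
# Lemma 53 for the triangle graph (vertices a, b, c; one edge between each
# pair) with spanning tree T = {ab, bc}.  Contracting bc maps the triangle
# to a vine curve of type 2 and pins down d_a =: alpha1; contracting ab pins
# down d_c =: alpha2; membership in S^d_Gamma(T) fixes the total degree at
# d - 1 (one edge of the triangle lies outside T).

def det3(m):
    """Determinant of a 3x3 integer matrix."""
    a, b, c = m
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

# rows: the two contraction constraints and the total-degree constraint,
# in the unknowns (d_a, d_b, d_c)
M = [[1, 0, 0], [0, 0, 1], [1, 1, 1]]
assert abs(det3(M)) == 1   # unimodular, so the integer solution is unique

def solve(alpha1, alpha2, d):
    """The unique multidegree (d_a, d_b, d_c) satisfying the three constraints."""
    d_a, d_c = alpha1, alpha2
    d_b = (d - 1) - d_a - d_c
    return d_a, d_b, d_c

print(solve(alpha1=1, alpha2=0, d=3))   # → (1, 1, 0)
```

For larger trees the same pattern persists: each tree edge splits the vertices into two sides and contributes one constraint on the corresponding partial sum, and together with the total-degree constraint this yields a square unimodular system.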
For $\Gamma \in G_{g,n}$ it can happen that the collection on all spanning trees $\{\underline{\bf d}_{\Gamma,T}\}_{T}$ obtained from some $\alpha\in \mathbb{Z}^{T'_{g,n}}$ as in Lemma [Lemma 53](#determinedover2){reference-type="ref" reference="determinedover2"} is not the restriction of any stability condition on the graph $\Gamma$ (see Lemma [Lemma 52](#enoughontrees){reference-type="ref" reference="enoughontrees"}). ## Second part: when $n=0$, over each vine curve with at least $2$ nodes there is at most $1$ stability condition that extends to a universal stability condition In this section we fix $g \geq 2$ and $n=0$. The main result here is Corollary [Corollary 55](#twovertexgraph){reference-type="ref" reference="twovertexgraph"}. We first need some combinatorial preparation. Let $\operatorname{GSym}_g$ be the trivalent graph with $2g-2$ vertices $v_1, \ldots, v_{2g-2}$ of genus $0$, where each vertex $v_i$ is connected to $v_{i-1}$, $v_{i+1}$ and $v_{i+g-1}$ (indices should be considered modulo $2g-2$). For $i\in\mathbf{Z}/(2g-2)\mathbf{Z}$ and $j=1,\dots,g-1$, we shall denote by $e_i$ the edge joining $v_i$ and $v_{i+1}$ and by $e_j'$ the edge joining $v_j$ and $v_{j+g-1}$. (See Figure [1](#fig:gsigmag){reference-type="ref" reference="fig:gsigmag"}). ![$\operatorname{GSym}_g$ [\[fig:gsigmag\]]{#fig:gsigmag label="fig:gsigmag"}](Symg-pic.png){#fig:gsigmag} Let $\Gamma_g \subset \operatorname{GSym}_g$ be the maximal $1$-cycle consisting of all edges of the form $e_i$. **Lemma 54**. *For every $d \in \mathbf{Z}$ there is at most one assignment $$\mathop{\mathrm{\sigma}}_{\operatorname{GSym}_g}(\Gamma_g)\subset S^d_{\operatorname{GSym}_g}(\Gamma_g)$$ that satisfies the two conditions of Definition  [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"}, and that extends to a degree $d$ universal stability condition of type $(g,0)$.
When such an assignment exists, the integers $d-g+1$ and $2g-2$ are necessarily coprime.* *Proof.* Assume that $\mathop{\mathrm{\sigma}}_{\operatorname{GSym}_g}$ is the restriction to $\operatorname{GSym}_g$ of some universal stability $\mathop{\mathrm{\sigma}}$. Since $\Gamma_g$ is a graph of genus $1$, we can apply the results from [@paganitommasi Section 3] to describe the assignment $\mathop{\mathrm{\sigma}}_{\operatorname{GSym}_g}(\Gamma_g)$. Namely, in genus $1$ it is known that all such assignments are induced by a polarization $\phi$ whose value $\phi(v_i)$ at each vertex $v_i$ is given by the average of the $2g-2$ admissible multidegrees in $\mathop{\mathrm{\sigma}}_{\operatorname{GSym}_g}(\Gamma_g)$. In particular, the polarization $\phi$ should be invariant under the automorphisms of $\operatorname{GSym}_g$ preserving $\Gamma_g$, such as the cyclic automorphism mapping $e_i$ to $e_{i+1}$ for all $i$. From this we deduce that all $\phi(v_i)$ are equal, and since we have obtained $\Gamma_g$ from $\operatorname{GSym}_g$ by removing the $g-1$ edges $e_i'$, we have $\sum_{i\in\mathbf{Z}/(2g-2)\mathbf{Z}}\phi_i=d-g+1$. It follows that there exists at most $1$ assignment, and that this assignment should be the one induced by $\phi=(\frac{d-g+1}{2g-2},\dots,\frac{d-g+1}{2g-2})$. Then the claim follows from the fact that this polarization is nondegenerate if and only if $d-g+1$ and $2g-2$ are coprime. ◻ We can now prove the following. **Corollary 55**. *If $G\in {G}_{g}$ is a loopless graph with $2$ vertices and at least $2$ edges, there exists at most one degree $d$ stability condition for $G$ that extends to a degree $d$ universal stability condition.* In the proof we will denote by $\Gamma(t,i,j)$ the object of $G_{g}$ that consists of two vertices of genus $i$ and $j$ connected by $t$ edges. (Note that $g=i+j+t-1$). 
*Proof.* The claim is obtained by proving that the value of an assignment on vine curves of type $t \geq 2$ can be obtained, using compatibility with graph morphisms (Definition [Definition 15](#combo-family){reference-type="ref" reference="combo-family"}), from the assignment calculated on $\Gamma_g \subset \operatorname{GSym}_g$ (described in Lemma [Lemma 54](#verysymmetric){reference-type="ref" reference="verysymmetric"}). Recall that a degree $d$ stability condition over a graph with $2$ vertices has the same value on all spanning trees and is uniquely determined by this value (Example [**Example** 14](#vinestab){reference-type="ref" reference="vinestab"}). First we prove the claim when $G = \Gamma(t, i, 0)$ has one vertex of genus zero. Apply Lemma [Lemma 54](#verysymmetric){reference-type="ref" reference="verysymmetric"} to deduce the uniqueness of an assignment $\mathop{\mathrm{\sigma}}_{\operatorname{GSym}_g}(\Gamma_g)$ that satisfies the two conditions of Definition  [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"} and automorphism-invariance. Then apply to $\Gamma_g$ the first composition of contractions defined in [@kp3 Lemma 3.9]. The statement for an arbitrary graph $G= \Gamma(t, i, j)$ is obtained by applying the second set of contractions used in the proof of [@kp3 Lemma 3.9], to deduce the value of $\mathop{\mathrm{\sigma}}$ on $\Gamma(t, i, j)$ from the value of $\mathop{\mathrm{\sigma}}$ on $\Gamma(i-j+2, t+2j-2, 0)$. More precisely, consider the graph $G'$ with $4$ vertices $w_1,w_2,w_3,w_4$ defined in the proof of [@kp3 Lemma 3.9], and its spanning subgraph ${G}_0':=w_1-w_2-w_4-w_3-w_1$. Let $\alpha \in \mathbf{Z}$ be defined by the condition $$\mathop{\mathrm{\sigma}}_{\Gamma( i-j+2, t+2j-2, 0)}({G}_0)= \Set{(d-i+j -1-\alpha, \alpha) }$$ for ${G}_0$ a spanning tree of $\Gamma(i-j+2, t+2j-2, 0)$.
By combining the axioms of a degree $d$ stability with compatibility with graph morphisms (Definition [Definition 15](#combo-family){reference-type="ref" reference="combo-family"}), we find $$\mathop{\mathrm{\sigma}}_{G'}({G}_0')= \Set{(\beta+1, \beta, \alpha, \alpha), (\beta, \beta+1, \alpha, \alpha), (\beta, \beta, \alpha+1, \alpha), (\beta,\beta, \alpha, \alpha+1) }$$ for the unique $\beta \in \mathbf{Z}$ such that $2\alpha+ 2 \beta = d+1 - t - i + j$ (observe that $d+1-t-i+j = d-g+2j$ must be even because $d-g+1$ is odd: by Lemma [Lemma 54](#verysymmetric){reference-type="ref" reference="verysymmetric"} it is coprime with the even number $2g-2$; here we used $g=t + i + j -1$). From Condition (1) of Definition [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"} we deduce that the unique assignment on the spanning tree ${G}_0'' \subset {G}_0'$ defined by $w_2- w_1 -w_3 - w_4$ equals $$\mathop{\mathrm{\sigma}}_{G'}({G}_0'') = \Set{(\beta, \beta, \alpha, \alpha) },$$ and applying the second set of contractions used in the proof of [@kp3 Lemma 3.9] we deduce the value of $\mathop{\mathrm{\sigma}}$ on the graph $G= \Gamma(t, i, j)$. ◻ **Remark 56**. Corollary [Corollary 55](#twovertexgraph){reference-type="ref" reference="twovertexgraph"} asserts that there is at most $1$ degree $d$ universal stability on loopless graphs $G \in G_{g,0}$ with $2$ vertices and at least two edges. We claim that when $d \in \mathbf{Z}$ satisfies $\gcd(d-g+1, 2g-2)=1$, one such stability exists. Indeed, take $\mathop{\mathrm{\sigma}}_{\phi^d_{\operatorname{can}}}$ for $\phi_{\operatorname{can}}^d := \frac{d}{2g-2}\; \underline{\deg}(\omega_{\pi}) \in V^d_{g,n}$. It was observed in [@kp3 Remark 5.13] that this universal stability condition is fine precisely when $d-g+1$ and $2g-2$ are coprime.
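The parity observation in the proof above, and the coprimality condition appearing throughout, are easy to confirm by brute force; the following Python sketch (purely illustrative) checks over small ranges that coprimality of $d-g+1$ with the even number $2g-2$ forces $d-g+1$ to be odd, hence $d+1-t-i+j = d-g+2j$ to be even for every vine type $\Gamma(t,i,j)$ with $g = t+i+j-1$:

```python
from math import gcd

def vine_types(g):
    """All (t, i, j) with two vertices of genus i, j joined by t >= 2
    edges and total genus g = t + i + j - 1."""
    return [(t, i, j)
            for t in range(2, g + 2)
            for i in range(g + 1)
            for j in range(g + 1)
            if t + i + j - 1 == g]

for g in range(2, 12):
    for d in range(-20, 21):
        if gcd(d - g + 1, 2 * g - 2) == 1:
            assert (d - g + 1) % 2 == 1              # coprime to an even number => odd
            for t, i, j in vine_types(g):
                assert (d + 1 - t - i + j) % 2 == 0  # = d - g + 2j, hence even
```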
By uniqueness, the degree $d$ stability condition for a loopless graph $G$ with $2$ vertices and at least $2$ edges must then coincide with $\mathop{\mathrm{\sigma}}_{G, \phi_{\operatorname{can}}^d(G)}$. ## Third part: Conclusion We are now in a position to conclude. In this section, we fix $g \geq 2$. **Theorem 57**. *Let $\tau$ be a degree $d$ universal stability condition of type $(g,0)$ (Definition [Definition 16](#familyfinejacstab){reference-type="ref" reference="familyfinejacstab"} and Remark [**Remark** 17](#Rem: univ stab){reference-type="ref" reference="Rem: univ stab"}). Then $\gcd(d-g+1, 2g-2)=1$ and there exists a nondegenerate $\Phi \in V_{g}^d$ such that $\tau$ equals $\mathop{\mathrm{\sigma}}_{\Phi}$ (as in Remark [**Remark** 51](#n>0){reference-type="ref" reference="n>0"}).* *Proof.* By combining Lemma [Lemma 52](#enoughontrees){reference-type="ref" reference="enoughontrees"} and Lemma [Lemma 53](#determinedover2){reference-type="ref" reference="determinedover2"} we deduce that proving the equality $$\label{equality} \tau = \mathop{\mathrm{\sigma}}_{\Phi}$$ for some $\Phi \in V_g^d$ (necessarily nondegenerate) is equivalent to proving the equality of the restrictions $$\label{equality2} \tau_G = \mathop{\mathrm{\sigma}}_{G,\Phi(G)} \quad \text{for all loopless graphs } G \in {G}_{g} \text{ with } 2 \text{ vertices.}$$ By Lemma [Lemma 54](#verysymmetric){reference-type="ref" reference="verysymmetric"} if $\gcd(d-g+1, 2g-2) \neq 1$ there exists no universal stability condition. From now on we will assume $\gcd(d-g+1, 2g-2) = 1$. 
Under this assumption, by [@kp3 Corollary 3.6, Remark 5.13] there exists a nondegenerate $\Phi \in V_g^d$ (a modification over curves with separating edges of the canonical stability $\phi^d_{\operatorname{can}}$ discussed in Remark [**Remark** 56](#rem: canonical){reference-type="ref" reference="rem: canonical"}) such that $\mathop{\mathrm{\sigma}}_{G, \Phi(G)}$ coincides with the restriction $\tau_G$ of $\tau$ to all loopless graphs $G$ with $2$ vertices and $1$ edge. By Corollary [Corollary 50](#stronglycomp){reference-type="ref" reference="stronglycomp"}, the assignment $\mathop{\mathrm{\sigma}}_{\Phi}$ defines a degree $d$ universal stability condition, which coincides with $\tau_G$ for all loopless graphs $G$ with $2$ vertices (by construction when $G$ has $1$ edge, and by Corollary [Corollary 55](#twovertexgraph){reference-type="ref" reference="twovertexgraph"} when $G$ has at least $2$ edges). We conclude that [\[equality2\]](#equality2){reference-type="eqref" reference="equality2"} holds, hence that $\tau = \mathop{\mathrm{\sigma}}_{\Phi}$. ◻ By combining Theorem [Theorem 57](#OS=fine){reference-type="ref" reference="OS=fine"} and Proposition [Proposition 36](#compcontract){reference-type="ref" reference="compcontract"} (with $\mathcal{X}/S$ the universal family over $S=\overline{\mathcal{M}}_g$), we immediately deduce the following classification result. **Corollary 58**. *If $\overline{\mathcal{J}}_g$ is a degree $d$ fine compactified universal Jacobian for $\overline{\mathcal{M}}_g$, then $d$ satisfies $\gcd(d-g+1, 2g-2)=1$, and there exists $\Phi \in V_{g}^d$ such that $\overline{\mathcal{J}}_g = \overline{\mathcal{J}}_{g} (\Phi)$.* The relation between the Jacobians discussed in Corollary [Corollary 58](#maincoroll){reference-type="ref" reference="maincoroll"} and those of Caporaso [@caporaso], Pandharipande [@panda], and Simpson [@simpson] is discussed in [@kp2 Remark 5.9].
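As an illustration of the canonical stability $\phi^d_{\operatorname{can}} = \frac{d}{2g-2}\,\underline{\deg}(\omega_\pi)$, one can check numerically that it has total degree $d$ on every vine curve. The sketch below is a hypothetical helper (not code from the references); it assumes the standard formula $\deg(\omega)|_v = 2g_v - 2 + \#\{\text{edges at } v\}$ for the multidegree of the dualizing sheaf:

```python
def phi_can(d, g, vertices):
    """Canonical polarization: at a vertex of genus h meeting k edges,
    deg(omega) equals 2h - 2 + k, rescaled by d/(2g - 2)."""
    return [d * (2 * h - 2 + k) / (2 * g - 2) for h, k in vertices]

# Vine curves Gamma(t, i, j): two vertices of genus i and j joined by
# t edges, so that g = i + j + t - 1.
for g in range(2, 10):
    for d in range(-8, 9):
        for t in range(1, g + 2):
            for i in range(g + 1):
                j = g + 1 - t - i
                if j < 0:
                    continue
                phi = phi_can(d, g, [(i, t), (j, t)])
                assert abs(sum(phi) - d) < 1e-9   # total degree is d
```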
# Final remarks and open questions {#sec: final} In Definitions [Definition 11](#finejacstab){reference-type="ref" reference="finejacstab"} and [Definition 16](#familyfinejacstab){reference-type="ref" reference="familyfinejacstab"} we introduced the notion of families of degree $d$ stability conditions for a single curve $X$ and for the universal family $\overline{\mathcal{C}}_{g,n}\to \overline{\mathcal{M}}_{g,n}$. For $X$ a nodal curve with dual graph $\Gamma$, let $\Sigma^d(\Gamma)$ be the set of all degree $d$ stability conditions. In Section [8](#Sec: OS){reference-type="ref" reference="Sec: OS"} we defined, following Oda--Seshadri [@oda79], a stability space $V^d(\Gamma)$ with a subspace of degenerate elements (a union of hyperplanes). We define $\mathcal{P}^d(\Gamma)$ to be the set whose elements are the connected components (maximal dimensional polytopes) of the nondegenerate locus in $V^d(\Gamma)$. By Corollary [Corollary 38](#bijection){reference-type="ref" reference="bijection"} and Proposition [Proposition 41](#OSisfine){reference-type="ref" reference="OSisfine"}, there is an injection $\mathcal{P}^d(\Gamma) \to \Sigma^d(\Gamma)$, and it is natural to ask when this map is surjective. This is the same as Question [**Question** 42](#question){reference-type="ref" reference="question"}. The case when $g(X)=1$ (or even $b_1(\Gamma(X))=1$) was settled in the affirmative in [@paganitommasi], see Example [**Example** 45](#genus1){reference-type="ref" reference="genus1"}. Then let $\Sigma^d_{g,n}$ be the set of all degree $d$ stability conditions of type $(g,n)$. The latter can be described as the inverse limit $$\Sigma^d_{g,n}= \varprojlim_{G\in G_{g,n}} \Sigma^d(G)$$ where $G_{g,n}$ is the category of stable $n$-pointed dual graphs of genus $g$.
Similarly, for universal (and strongly compatible) numerical polarizations (see Section [8](#Sec: OS){reference-type="ref" reference="Sec: OS"}), in [@kp3] the authors defined the affine space $$V^d_{g,n}:=\varprojlim_{G\in G_{g,n}} V^d(G)$$ (see Corollary [Corollary 50](#stronglycomp){reference-type="ref" reference="stronglycomp"}). The latter is always nonempty, as it contains the canonical stability condition. Let $\mathcal{P}^d_{g,n}$ be the set whose elements are the connected components of the complement of the degenerate stability conditions in $V_{g,n}^d$ (see Definition [Definition 48](#familyphistab){reference-type="ref" reference="familyphistab"}). By Corollary [Corollary 38](#bijection){reference-type="ref" reference="bijection"} and Proposition [Proposition 41](#OSisfine){reference-type="ref" reference="OSisfine"}, we have a natural injection $\mathcal{P}^d_{g,n} \to \Sigma^d_{g,n}$, and one can ask under which assumptions this map is surjective. The genus $1$ case was settled in [@paganitommasi]: the map $\mathcal{P}^d_{1,n} \to \Sigma^d_{1,n}$ is a bijection if and only if $n \leq 5$ (see Remark [**Remark** 51](#n>0){reference-type="ref" reference="n>0"}). The case without marked points is solved by Corollary [Corollary 58](#maincoroll){reference-type="ref" reference="maincoroll"}: the map $\mathcal{P}^d_g \to \Sigma^d_g$ is a bijection for all $g$. Here are some natural future problems to address. 1. What is the analogue of Theorem [Theorem 1](#mainthm){reference-type="ref" reference="mainthm"} for the case of smoothable compactified Jacobians (not necessarily fine)? By this we mean a smoothable open substack of the moduli space of rank $1$ torsion-free sheaves (not just the simple ones) that has a proper good moduli space. 2. Is there a natural stability space with walls, containing $V^d_{g,n}$, and a natural bijection from the set of its maximal-dimensional chambers to $\Sigma^d_{g,n}$?
[^1]: The adjective "fine" classically refers to the existence of a Poincaré sheaf. [^2]: Note that in the Notation section of the papers [@kp2; @kp3; @paganitommasi] the last equation is incorrectly written with a minus sign: $- \delta(F)$ instead of the correct $+\delta(F)$. [^3]: Note that the definitions of $S$, $S'$ and $N$ are independent of the chosen orientation.
{ "id": "2309.08509", "title": "Stability conditions for line bundles on nodal curves", "authors": "Nicola Pagani and Orsola Tommasi", "categories": "math.AG math.CO", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | In this paper, we consider a scaling limit of some branching random walk in random environment whose offspring distribution has infinite variance. The Laplace functional of the limiting super-process is given by $\mathbb{E}\left[e^{-\langle\mu_t,\varphi_0\rangle}\right] = e^{-\langle\mu_0,U_t(\varphi_0)\rangle}$, where $U_t(\varphi)$ is the solution to the equation $$\partial_t\varphi = \mathcal{H}\varphi - \varphi^{1+\beta},\qquad \varphi(0) = \varphi_0.$$ The existence and uniqueness of solutions to this non-linear equation are also proved as an intermediate step. We also give a martingale characterization of the above super-process and show that it possesses the compact support property. author: - Ruhong Jin bibliography: - RSBM.bib title: A Generalized Rough Super Brownian Motion --- # Introduction The branching random walk in random environment (BRWRM) is a random walk with a stochastic branching mechanism and branching rate. This work explores the large-scale behaviour of some BRWRM whose branching mechanism and rate depend on the random environment at each spatial point. The scaling limit of branching random walks has been discussed thoroughly in [@dawson1993measure]. The aim of this paper is to generalize the results in [@dawson1993measure] to the BRWRM with a rough environment. Given a random potential $$V(x) = \xi(x), \qquad \{\xi(x)\}_{x\in\mathbb{Z}^2}\text{ i.i.d }\sim \Phi,$$ with $\mathbb{E}[\Phi] = 0,\mathbb{E}[\Phi^2] = 1$ on the lattice $\mathbb{Z}^2$, it is shown in [@perkowski2021rough] that certain scaled branching random walks in the random environment with birth (one particle) rate $\xi_+$ and death rate $\xi_-$ will converge to a measure-valued Markov process whose Laplace transform is associated with the stochastic PDE $$\partial_t\varphi = (\Delta + \xi)\varphi -\varphi^2,\qquad \varphi(0) =\varphi_0,$$ where $\xi$ is the white noise.
The BRWRM considered in [@perkowski2021rough] has a finite-variance offspring distribution, and a general question is what happens if the variance is infinite. It is discussed in [@dawson1993measure; @etheridge2000introduction] that the Laplace functionals of the limiting super-processes given by classical branching random walks should satisfy the PDE $$\partial_t\varphi = L\varphi - b(x)\varphi -c(t,x)\varphi^2 + \int_0^\infty(1-e^{-\theta \varphi(t,x)}-\theta\varphi(t,x))n(x,d\theta),$$ where $L$ is the generator of a Feller semi-group, $b$ is bounded, $c > 0$ and $n$ is some sort of transition kernel. In view of the result in [@perkowski2021rough], it is then natural to expect that the super-process should exist when $b$ is only a distribution, for example, white noise $\xi$. We do not try to answer this general question in the paper since we lack the tools to deal with the part involving the transition kernel $n$. However, we start with a non-trivial but simpler case, when $n(x,d\theta) \sim \theta^{-(2+\beta)}d\theta$ for $0 < \beta < 1$. In other words, we try to deduce the existence and uniqueness of the super-process whose Laplace functional is given by the SPDE $$\label{equ.beta_PAM} \partial_t\varphi = (\Delta + \xi)\varphi -\varphi^{1+\beta}.$$ To this end, we consider the BRWRM whose branching mechanism is given by $$\frac{2\xi_+}{(1+\beta)|\xi|}(1-s)^{1+\beta} - \frac{\xi}{|\xi|} + \frac{2\xi_+}{|\xi|}s,$$ and show that the scaling limit of this branching random walk gives exactly the desired super-process. It is worth mentioning that there is actually no general existence and uniqueness theory for equation [\[equ.beta_PAM\]](#equ.beta_PAM){reference-type="eqref" reference="equ.beta_PAM"} in the literature. The main difficulty of solving equation [\[equ.beta_PAM\]](#equ.beta_PAM){reference-type="eqref" reference="equ.beta_PAM"} is the Anderson Hamiltonian $\mathcal{H}= \Delta + \xi$.
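For later reference, let us record the classical computation behind this choice of kernel (a standard identity, not specific to the present paper): for $0<\beta<1$ and $\lambda > 0$, substituting $u = \theta\lambda$ and using $\Gamma(-1-\beta) = \frac{\Gamma(1-\beta)}{\beta(1+\beta)}$, one has $$\int_0^\infty \left(e^{-\theta\lambda} - 1 + \theta\lambda\right)\theta^{-2-\beta}\,d\theta = \Gamma(-1-\beta)\,\lambda^{1+\beta} = \frac{\Gamma(1-\beta)}{\beta(1+\beta)}\,\lambda^{1+\beta}.$$ Hence with the normalization $n(x,d\theta) = \frac{\beta(1+\beta)}{\Gamma(1-\beta)}\theta^{-2-\beta}d\theta$, the integral term $\int_0^\infty(1-e^{-\theta\varphi}-\theta\varphi)n(x,d\theta)$ becomes exactly $-\varphi^{1+\beta}$, consistently with the compensator density $\frac{\beta(1+\beta)}{\Gamma(1-\beta)x^{\beta+2}}$ appearing in the martingale characterization below.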
Due to the lack of regularity, it is not possible to define the product $\xi\varphi$ directly in dimension $2$ or $3$. The study of equation [\[equ.beta_PAM\]](#equ.beta_PAM){reference-type="eqref" reference="equ.beta_PAM"} then requires recently developed techniques such as regularity structures [@hairer2014theory] or para-controlled distributions [@gubinelli2015paracontrolled]. In dimension 2, all of the solution theories require a renormalization, which means we need to remove a diverging constant in the Anderson Hamiltonian $\mathcal{H}$ and actually consider the equation $\partial_t\varphi = (\mathcal{H}-\infty)\varphi - \varphi^{1+\beta}$. It is understood as the limit of the renormalized and discretized equations with $$\xi^n_e = \xi^n - c_n, \qquad c_n \simeq \log(n).$$ The second difficulty of solving equation [\[equ.beta_PAM\]](#equ.beta_PAM){reference-type="eqref" reference="equ.beta_PAM"} arises from the weights needed when solving the corresponding linear equation $\partial_t\varphi = \mathcal{H}\varphi$ as in [@martin2019paracontrolled; @hairer2015simple]. The solution theory of PAM relies heavily on the logarithmic growth of $\xi$ on $\mathbb{R}^2$. When we view [\[equ.beta_PAM\]](#equ.beta_PAM){reference-type="eqref" reference="equ.beta_PAM"} as $$\partial_t\varphi = (\mathcal{H}- \varphi^\beta)\varphi,$$ the growth of $\xi - \varphi^\beta$ will be exponential and thus we cannot apply directly the solution theory of PAM on $\mathbb{R}^2$. However, we will see in section [4](#sec.variant){reference-type="ref" reference="sec.variant"} that the minus sign helps us build the existence and uniqueness results. In their recent work [@perkowski2021rough], Perkowski and Rosati only obtain a partial existence and uniqueness result for equation [\[equ.beta_PAM\]](#equ.beta_PAM){reference-type="eqref" reference="equ.beta_PAM"}, but this is enough for their application due to the $L^2$-boundedness coming from the finite-variance offspring distribution.
However, their methods rely strongly on $L^2$ martingale theory and cannot be used directly in the infinite variance case. We therefore search for an alternative way, combining the idea of [@perkowski2021rough] and the classical approach via Laplace functionals, to give the convergence in the infinite variance case. Finally, although we work in dimension 2, all results (with some change of parameters) are valid in dimension 1 with a much easier argument. **Structure of the work.** We only consider a deterministic environment with assumption [Assumption 1](#ass.environment){reference-type="ref" reference="ass.environment"} since the construction of the random environment from the deterministic environment is given in [@perkowski2021rough]. We introduce our model and the definition of the $\beta$-rough super Brownian motion in section [3](#sec.model_formulation){reference-type="ref" reference="sec.model_formulation"} and state our main results, i.e. the convergence results for the scaled branching random walk, a martingale problem formulation and the compact support property of the $\beta$-rough super Brownian motion. We study a variant of PAM ($\mathcal{H}\rightarrow \mathcal{H}- \phi$ for some $\phi$) and the non-linear equation like [\[equ.beta_PAM\]](#equ.beta_PAM){reference-type="ref" reference="equ.beta_PAM"} in section [4](#sec.variant){reference-type="ref" reference="sec.variant"} and show the existence and uniqueness of solutions in the spaces with weight $e^{l|x|^\sigma}$ for any $l\in\mathbb{R},0\leq \sigma<1$. The result allows us to manipulate freely the $\varphi$ in the Laplace functional $\mathbb{E}[e^{-\langle\mu_t,\varphi\rangle}]$. We then give the detailed proof of convergence of the scaled branching random walk to the $\beta$-rough super Brownian motion by estimating the moments of $\sup_{0\leq s\leq t}\langle\mu_s,\varphi\rangle$.
This estimation requires controlling the branching random walks via a coupling method and passing the estimates to a simpler branching random walk. Then we apply Aldous' criterion to obtain tightness. Section [6](#sec.martingale_problem){reference-type="ref" reference="sec.martingale_problem"} is devoted to giving a martingale problem formulation of the $\beta$-rough super Brownian motion. We finally prove the compact support property, which was established in [@jin2023compact] in the finite-variance case. # Notation ## Notation on regularity We define the lattice $\mathbb{Z}_n^2 = \frac{1}{n}\mathbb{Z}^2$ and also denote $\mathbb{Z}_{\infty}^2 :=\mathbb{R}^2$ for convenience. Similarly, we define $\mathbb{T}_n^2:=(\mathbb{R}/n\mathbb{Z})^2$ and $\tilde{\mathbb{T}}_n^2:=n\left(-\frac{1}{2},\frac{1}{2}\right]^2$. We will consider the weighted Besov spaces on $\mathbb{Z}_n^2$ introduced in [@martin2019paracontrolled]. To motivate the definition, let us first introduce the definition of a weight. **Definition 1**. *We denote by $$\omega^{pol}(x) := \log(1+|x|),\qquad\omega_\sigma^{exp}(x) :=|x|^\sigma,$$ where $x \in \mathbb{R}^d,\sigma \in (0,1)$. For $\omega \in \pmb{\omega}:=\{\omega^{pol}\}\cup \{\omega_\sigma^{exp}|\sigma\in(0,1)\}$, we denote by $\pmb{\varrho}(\omega)$ the set of measurable, strictly positive $\rho:\mathbb{R}^d\rightarrow(0,\infty)$ such that $$\rho(x)\lesssim\rho(y)e^{\lambda\omega(x-y)},$$ for some $\lambda = \lambda(\rho) > 0$. We also introduce the notation $\pmb{\varrho}(\pmb{\omega}):=\bigcup_{\omega\in\pmb{\omega}}\pmb{\varrho}(\omega)$. The objects $\rho \in \pmb{\varrho}(\pmb{\omega})$ will be called *weights*.* The weights we consider in this paper are of the following form: $$p(a)(x) := (1+|x|)^a,\qquad e(l)(x):=e^{l|x|^\sigma},$$ for non-negative $a,l$ and $0 < \sigma < 1$. We drop the index $\sigma$ since it does not influence the proof.
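Both families are indeed weights in the sense of Definition 1; for instance $e(l)\in\pmb{\varrho}(\omega_\sigma^{exp})$ with $\lambda = |l|$, by the subadditivity $\big||x|^\sigma - |y|^\sigma\big| \le |x-y|^\sigma$ for $0<\sigma<1$. A small numerical sanity check of the defining inequality (in dimension $1$, with illustrative parameter values):

```python
import math, random

def e_weight(l, sigma):
    """The weight e(l)(x) = exp(l * |x|^sigma)."""
    return lambda x: math.exp(l * abs(x) ** sigma)

def p_weight(a):
    """The weight p(a)(x) = (1 + |x|)^a."""
    return lambda x: (1 + abs(x)) ** a

random.seed(0)
l, sigma, a = 1.3, 0.5, 2.0
for _ in range(10_000):
    x, y = random.uniform(-50, 50), random.uniform(-50, 50)
    # e(l)(x) <= e(l)(y) * exp(|l| * omega_sigma^exp(x - y))
    assert e_weight(l, sigma)(x) <= \
        e_weight(l, sigma)(y) * math.exp(abs(l) * abs(x - y) ** sigma) * (1 + 1e-12)
    # p(a)(x) <= p(a)(y) * exp(a * omega^pol(x - y)),  omega^pol = log(1 + |.|)
    assert p_weight(a)(x) <= \
        p_weight(a)(y) * math.exp(a * math.log(1 + abs(x - y))) * (1 + 1e-12)
```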
For a function $\varphi:\mathbb{Z}_n^2 \rightarrow \mathbb{R}$, we define its Fourier transform as a function on the torus $\mathbb{T}_n^2$ by $$\mathcal{F}_n\varphi(k) := \frac{1}{n^2}\sum_{x\in \mathbb{Z}_n^2}\varphi(x)e^{-2\pi\iota k\cdot x},\qquad k\in\mathbb{T}_n^2,$$ and the inverse Fourier transform for functions $\varphi: \mathbb{T}_n^2 \rightarrow\mathbb{R}$, $$\mathcal{F}_n^{-1}\varphi(x) := \int_{\mathbb{T}_n^2} \varphi(k)e^{2\pi\iota k\cdot x}dk,\qquad x \in \mathbb{Z}_n^2.$$ Let $\omega \in \pmb{\omega}$. In [@martin2019paracontrolled], the space $\mathcal{S}_\omega$ is defined, consisting of all functions such that all derivatives of the function itself and of its Fourier transform decay like $e^{-\lambda \omega}$ for any $\lambda > 0$. The elements of the dual space $\mathcal{S}'_\omega$ are called ultra-distributions, and this is the space on which the Besov spaces are built. Suppose $\rho_{-1},\rho_0 \in \mathcal{S}_\omega(\mathbb{R}^2)$ are two non-negative and radial functions such that the support of $\rho_{-1}$ is contained in a ball $B \subset \mathbb{R}^2$, the support of $\rho_0$ is contained in an annulus $\{x \in \mathbb{R}^2: 0 < a \leq |x|\leq b\}$ and such that with $$\rho_{j} = \rho_{0}(2^{-j}\cdot),$$ the following conditions are satisfied: 1. $\sum_{j=-1}^\infty \rho_j(x) = 1,\forall x \in \mathbb{R}^2$ 2. supp($\rho_i$) $\cap$ supp($\rho_j$) = $\emptyset$ if $|i-j| > 1$ For each $\omega \in \pmb{\omega}$, such a partition of unity always exists in $\mathcal{S}_\omega(\mathbb{R}^2)$. We furthermore define the index $j_n$ to be the minimal index such that the support of $\rho_j$ is not contained in $\tilde{\mathbb{T}}_n^2$.
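A dyadic partition of unity of the kind just described can be sketched concretely; the following is a generic construction from a smooth bump (not the specific $\rho_j$ of [@martin2019paracontrolled]), whose radial profile satisfies the telescoping identity $\sum_{j=-1}^{J}\rho_j = \psi(2^{-J-1}\,\cdot\,)$ together with the support condition:

```python
import math

def psi(r):
    """Smooth cutoff: 1 for r <= 1, 0 for r >= 2, smooth in between."""
    if r <= 1.0:
        return 1.0
    if r >= 2.0:
        return 0.0
    f = lambda t: math.exp(-1.0 / t) if t > 0 else 0.0   # standard smooth step
    return f(2.0 - r) / (f(2.0 - r) + f(r - 1.0))

def rho(j, r):
    """rho_{-1} = psi; rho_j(r) = psi(2^{-j-1} r) - psi(2^{-j} r) for j >= 0,
    supported in the annulus 2^j <= r <= 2^{j+2}."""
    if j == -1:
        return psi(r)
    return psi(2.0 ** (-j - 1) * r) - psi(2.0 ** (-j) * r)

J = 10
for k in range(1, 400):
    r = 0.01 * k ** 2                  # sample radii up to ~1600 < 2^{J+1}
    # partial sums telescope to psi(2^{-J-1} r) = 1 on this range
    assert abs(sum(rho(j, r) for j in range(-1, J + 1)) - 1.0) < 1e-12
    # supports: rho_i * rho_j = 0 whenever |i - j| > 1
    for i in range(-1, J):
        for j in range(i + 2, J + 1):
            assert rho(i, r) * rho(j, r) == 0.0
```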
We update the partition of unity on $\tilde{\mathbb{T}}_n^2$ by defining the periodic functions $$\rho^n_j = \rho_j, \qquad j < j_n, \qquad \rho^n_{j_n} = 1 - \sum_{j=-1}^{j_n-1}\rho_j.$$ For an ultra-distribution $f\in\mathcal{S}'_\omega(\mathbb{Z}_n^2)$, we define the Littlewood-Paley blocks of $f$ as $$\Delta^n_jf = \mathcal{F}_n^{-1}(\rho^n_j\mathcal{F}_n(f)),$$ for $j = -1,0,\cdots,j_n$. For $\alpha\in \mathbb{R}, p,q \in[1,\infty]$ and $\rho \in \pmb{\varrho}(\pmb{\omega})$, we define the weighted Besov space $B_{p,q}^\alpha(\mathbb{Z}_n^2,\rho)$ by $$B_{p,q}^\alpha(\mathbb{Z}_n^2,\rho):= \{f \in \mathcal{S}'_\omega: \|f\|_{B_{p,q}^\alpha(\mathbb{Z}_n^2,\rho)} := \|(2^{j\alpha}\|\rho^{-1}\Delta_jf\|_{L^p})\|_{\ell^q} < \infty\}.$$ In particular, the discrete Hölder space is defined by $C^\alpha(\mathbb{Z}_n^2,\rho):= B^{\alpha}_{\infty,\infty}(\mathbb{Z}_n^2,\rho)$ for $\alpha$ not an integer. The extension operator $\mathcal{E}^n$ defined in [@martin2019paracontrolled] will be helpful when we consider the convergence from the discrete equation to the continuum equation. We also consider spaces concerning the regularity with respect to time. Fix a time horizon $T>0$; we define the space $C_TC^\alpha(\mathbb{Z}_n^2,e(l))$ to be the set of all continuous-in-time functions $f:[0,T]\rightarrow C^\alpha(\mathbb{Z}_n^2,e(l+T))$ such that the norm $$\|f\|_{C_TC^\alpha(\mathbb{Z}_n^2,e(l))}:=\sup_{0\leq t\leq T}\|f(t)\|_{C^\alpha(\mathbb{Z}_n^2,e(l+t))},$$ is finite. For $0 < \beta < 1$, the space $C^\beta_TC^\alpha(\mathbb{Z}_n^2,e(l))$ is defined by the norm $$\|f\|_{C^\beta_T C^\alpha(\mathbb{Z}^2_n,e(l))} := \|f\|_{C_TC^\alpha(\mathbb{Z}_n^2,e(l))} + \sup_{0\leq s< t\leq T} \frac{\|f(t)-f(s)\|_{C^\alpha(\mathbb{Z}_n^2,e(l+t))}}{(t-s)^\beta}.$$ We also define the space $\mathcal{M}^\gamma C^\alpha(\mathbb{Z}_n^2,e(l))$ to be the set of all functions $f$ such that $t^\gamma f(t) \in C_TC^\alpha(\mathbb{Z}_n^2,e(l))$.
The space $\mathcal{L}^{\gamma,\alpha}(\mathbb{Z}_n^2,e(l))$ consists of all functions $f$ such that $$f \in \mathcal{M}^\gamma C^\alpha(\mathbb{Z}_n^2,e(l)),\qquad t^\gamma f\in C^{\frac{\alpha}{2}}_TL^\infty(\mathbb{Z}_n^2,e(l)).$$ We simply write $\mathcal{L}^\alpha$ when $\gamma = 0$. ## Notation on operators We give the definition of some discrete operators here. We define the discrete Laplacian on the lattice $\mathbb{Z}_n^2$ by $$\Delta^n f(x) := n^2\sum_{y\sim x}(f(y) - f(x)).$$ We will use $P_t^n$ to denote the discrete heat semigroup, i.e. for any function $\varphi$, we denote by $P_t^n\varphi$ the mild solution of the equation $$\label{equ.heat} \left\{ \begin{array}{ll} \partial_t w^n_t = \Delta^n w^n_t,\\ w^n_0 =\varphi. \end{array} \right.$$ We also use $P_t$ for the continuum equation. Also, for a deterministic environment $\xi^n$ (which will be introduced in assumption [Assumption 1](#ass.environment){reference-type="ref" reference="ass.environment"}), we define the Anderson Hamiltonian $\mathcal{H}^n = \Delta^n + \xi^n_e$, where $\xi^n_e = \xi^n - c_n$ with $c_n$ a renormalization constant to be introduced in the definition of the deterministic environment. We then use $T_t^n\varphi$ to denote the solution to the equation $$\label{equ.PAM_homo} \left\{ \begin{array}{ll} \partial_t w^n_t = \mathcal{H}^nw^n_t,\\ w^n_0 =\varphi. \end{array} \right.$$ provided the solution exists and is unique. All of the above definitions carry over to the continuous case with the symbols $P_t,\Delta$ and $T_t,\mathcal{H}$. There is another definition of $\mathcal{H}^n$ which links to the continuous equation via the para-product. **Definition 2**.
*For any two distributions $\varphi,\phi$ on $\mathbb{Z}_n^2$, the product can be decomposed as $\varphi\cdot\phi =\varphi \olessthan\phi + \varphi\odot\phi + \phi\olessthan\varphi$ such that $$\varphi\olessthan\phi = \sum_{j=-1}^{j_n} \Delta^n_{< j-1}\varphi\Delta^n_j\phi,\qquad \varphi\odot\phi = \sum_{\substack{|i-j|\leq 1\\-1 \leq i,j\leq j_n}}\Delta^n_i\varphi\Delta^n_j\phi,$$ where $\Delta^n_{< i} = \sum_{j=-1}^{i-1}\Delta^n_j$. When $n=\infty$, this is exactly the same definition as on $\mathbb{R}^2$; this builds the relation between the discrete and continuum cases.* The definition of the para-product allows us to define para-controlled distributions. **Definition 3**. *Suppose $\kappa > 0$, and that $\xi^n$ and $\mathcal{I}\xi^n$ satisfy assumption [Assumption 1](#ass.environment){reference-type="ref" reference="ass.environment"}. We say $\varphi^n$ is para-controlled if $\varphi^n \in C^{1-\kappa}(\mathbb{Z}_n^2,e(l))$ for some $l \in \mathbb{R}$ and $$\varphi^{n,\#}:= \varphi^n - \varphi^n \olessthan\mathcal{I}\xi^n \in C^{1+2\kappa}(\mathbb{Z}_n^2,e(l)).$$* We define the para-controlled space $\mathcal{D}(\mathbb{Z}_n^2, e(l))$ to be the set of all para-controlled functions $\varphi^n$ such that $$\|\varphi^n\|_{\mathcal{D}(\mathbb{Z}_n^2,e(l))} = \|\varphi^n\|_{C^{1-\kappa}(\mathbb{Z}_n^2,e(l))} + \|\varphi^{n,\#}\|_{C^{1+2\kappa}(\mathbb{Z}_n^2,e(l))} < \infty.$$ We also define the domain $\mathcal{D}_{\mathcal{H}}$ of the operator $\mathcal{H}$ on $\mathbb{R}^2$, which is given by $$\mathcal{D}_{\mathcal{H}}:=\left\{ \int_0^t T_su ds: t > 0, u \in C^\zeta(\mathbb{R}^2,e(l)), l\in\mathbb{R},\zeta > 0\right\}.$$ The properties of this set are discussed in [@perkowski2021rough]. ## General notations Throughout the paper, we will use $X^n$ to indicate the random walk on the lattice $\mathbb{Z}_n^2$ with the generator $\Delta^n$. We denote by $\mathcal{M}_F(\mathbb{R}^2)$ the space of finite measures on $\mathbb{R}^2$ equipped with the weak topology defined by the family $C_b(\mathbb{R}^2)$.
We also let $\mathcal{M}_F^{\pm}(\mathbb{R}^2)$ be the space of finite signed measures on $\mathbb{R}^2$. For any Polish space $E$, we consider the space of $E$-valued càdlàg paths $\mathbb{D}([0,\infty),E)$ endowed with the Skorohod topology. # Model Formulation {#sec.model_formulation} From now on, fix a parameter $0 < \beta < 1$. To simplify this paper, we only consider the deterministic environment $\xi^n$, which is simply a function from $\mathbb{Z}_n^2$ to $\mathbb{R}$. We refer readers to [@perkowski2021rough] for how to deal with the situation when the environment $\xi^n$ is random. Let $\chi$ be a smooth function taking value 1 outside of $\left(-\frac{1}{4},\frac{1}{4}\right)^2$ and $0$ in $\left(-\frac{1}{8},\frac{1}{8}\right)^2$. **Assumption 1**. *For a deterministic environment $\xi^n$, define $\mathcal{I}\xi^n$ to be the solution to the equation $-\Delta^n\mathcal{I}\xi^n = \mathcal{F}_n^{-1}(\chi\mathcal{F}_n\xi^n)$. Consider a regularity parameter $\kappa' \in \left(0,\frac{1}{4}\right)$. We assume* 1. *There exists $\xi\in\cap_{a>0}C^{-1-\kappa'}(\mathbb{R}^2,p(a))$ such that, for all $a > 0$, $$\sup_n\|\xi^n\|_{C^{-1-\kappa'}(\mathbb{Z}_n^2,p(a))} < \infty,\qquad \mathcal{E}^n\xi^n\rightarrow\xi \text{ in }C^{-1-\kappa'}(\mathbb{R}^2,p(a))$$* 2. *The quantity $\frac{|\xi^n|}{n}$ is uniformly bounded over $n$. Furthermore, there exists $\nu \geq 0$ such that the following convergence holds $$\mathcal{E}^n\frac{\xi_+^n}{n} \rightarrow \nu,\qquad \mathcal{E}^n\frac{|\xi^n|}{n} \rightarrow 2\nu$$ in $C^{-\delta}(\mathbb{R}^2,p(a))$ for any $\delta >0, a > 0$.* 3.
*There exists $\{c_n\}\subset\mathbb{R}$ with $\frac{c_n}{n}\rightarrow 0$ and there exists $\mathcal{I}\xi\in\cap_{a>0} C^{1+\kappa'}(\mathbb{R}^2,p(a))$ and $\mathcal{I}\xi\diamond\xi \in \cap_{a>0}C^{-2\kappa'}(\mathbb{R}^2,p(a))$ such that for all $a > 0$, $$\sup_{n}\|\mathcal{I}\xi^n\|_{C^{1+\kappa'}(\mathbb{Z}_n^2,p(a))} + \sup_n\|(\mathcal{I}\xi^n\odot\xi^n) - c_n\|_{C^{-2\kappa'}(\mathbb{Z}_n^2,p(a))} < \infty$$ and $\mathcal{E}^n\mathcal{I}\xi^n \rightarrow \mathcal{I}\xi$ in $C^{1+\kappa'}(\mathbb{R}^2,p(a))$ and $\mathcal{E}^n(\mathcal{I}\xi^n\odot\xi^n-c_n)\rightarrow \mathcal{I}\xi\diamond\xi$ in $C^{-2\kappa'}(\mathbb{R}^2,p(a))$.* We call a distribution $\xi$ on $\mathbb{R}^2$ a deterministic environment if there exists $\xi^n$ satisfying assumption [Assumption 1](#ass.environment){reference-type="ref" reference="ass.environment"} and $\xi$ is the limit. Given a deterministic environment with assumption [Assumption 1](#ass.environment){reference-type="ref" reference="ass.environment"}, we establish the particle systems approximating the desired super-process. For each $n\in\mathbb{N}$, we consider a branching random walk on the lattice $\mathbb{Z}_n^2$. In the language of branching random walks, the underlying motion is given by the random walk $X^n$ with generator $\Delta^n$. The branching rate is given by $$a(x,dt) = |\xi^n(x)|dt,$$ for all $x \in \mathbb{Z}_n^2$. The branching mechanism is given by the generating function $$g^n(x,s) = \frac{2\xi^n_+}{(1+\beta)|\xi^n|}(1-s)^{1+\beta} - \frac{\xi^n}{|\xi^n|} + \frac{2\xi_+^n}{|\xi^n|}s = \sum_{i=0}^\infty p_i\cdot q_i(x)s^i,$$ where $$\sum_{i=0}^\infty p_i s^i = \frac{1}{1+\beta}(1-s)^{1+\beta} + s := g(s).$$ One can verify that we have $$q_0(x) = \frac{(1-\beta)\xi^n_+ + (1+\beta)\xi^n_-}{|\xi^n|} \in \left(1-\beta,1+\beta\right),\qquad q_i(x) = \frac{2\xi^n_+}{|\xi^n|}\leq 2, i \geq 2.$$ Since $\frac{\partial}{\partial s} g^n(x,0) = 0 = p_1$, we can set $q_1(x)$ to be any number. For example, set $q_1(x) = 1$.
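The algebra behind $q_0$ and $q_i$ can be verified numerically. The sketch below (with the arbitrary choice $\beta = 1/2$ and illustrative values of $\xi^n_\pm$; not code from the paper) expands $g(s) = \frac{1}{1+\beta}(1-s)^{1+\beta} + s$ into its coefficients $p_k$ and checks that $p_1 = 0$, $p_k \ge 0$, and $g^n(x,s) = \sum_i p_i q_i(x) s^i$:

```python
def binom(a, k):
    """Generalized binomial coefficient C(a, k)."""
    out = 1.0
    for i in range(k):
        out *= (a - i) / (i + 1)
    return out

beta = 0.5
p = [(-1) ** k * binom(1 + beta, k) / (1 + beta) for k in range(200)]
p[1] += 1.0                       # the extra "+ s" in g(s)
assert all(x >= -1e-15 for x in p)
assert abs(p[1]) < 1e-15          # p_1 = 0, as claimed

def q(xi_plus, xi_minus):
    """Returns (q_0, q_common) with q_i = q_common for all i >= 2."""
    axi = xi_plus + xi_minus
    q0 = ((1 - beta) * xi_plus + (1 + beta) * xi_minus) / axi
    return q0, 2 * xi_plus / axi

def gn(xi_plus, xi_minus, s):
    xi, axi = xi_plus - xi_minus, xi_plus + xi_minus
    return (2 * xi_plus / ((1 + beta) * axi)) * (1 - s) ** (1 + beta) \
        - xi / axi + (2 * xi_plus / axi) * s

# Check g^n(x, s) = sum_i p_i q_i(x) s^i (the value of q_1 is irrelevant).
for xp, xm in [(1.0, 2.0), (3.0, 0.5)]:
    q0, qc = q(xp, xm)
    assert 1 - beta < q0 < 1 + beta and qc <= 2
    for s in (0.0, 0.3, 0.9):
        series = p[0] * q0 + sum(p[k] * qc * s ** k for k in range(2, 200))
        assert abs(series - gn(xp, xm, s)) < 1e-10
```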
A detailed description can be given by defining the infinitesimal generator of a Markov process on the configuration space $E^n = (\mathbb{N})^{\mathbb{Z}_n^2}$ endowed with the product topology. For any configuration $\eta \in E^n$, we define $$\eta^{x\rightarrow y}(z) = \mathbbm{1}_{\{\eta(x) > 0\}}(\eta(z) + \mathbbm{1}_{z=y} - \mathbbm{1}_{z=x}),\qquad \eta^{x+k}(z) = 0\vee(\eta(z)+(k-1)\mathbbm{1}_{z=x}).$$ Furthermore, for any $F \in C_b(E^n)$, we define $$\Delta_x^n F(\eta) = n^2\sum_{y\sim x}\left(F(\eta^{x\rightarrow y}) - F(\eta)\right), \qquad d_x^{k} F(\eta) = F(\eta^{x+k}) - F(\eta).$$ Since any configuration on $E^n$ can be viewed as a sigma-finite point measure on $\mathbb{R}^2$ and we eventually prove the tightness in the measure space, we will freely pass between $E^n$ and point measures on $\mathbb{R}^2$ in the following content. To construct the approximating particle systems, we furthermore need the Poisson random measure introduced in appendix [8](#app.random){reference-type="ref" reference="app.random"}. **Definition 4**. *Fix a compactly supported measure $\mu^n_0$ on $\mathbb{Z}_n^2$ and an average parameter $\varrho\leq \beta$. Let $\epsilon = n^{-\frac{1}{\varrho}}$. We define an $E^n$-valued stochastic process $Z^n(t)$ started at a Poisson random measure on $\mathbb{Z}_n^2$ with intensity $\frac{\mu^n_0}{\epsilon}$ with infinitesimal generator $$\label{equ.generator} \mathcal{L}^n F(\eta) = \sum_{x\in\mathbb{Z}_n^2} \eta_x\left[\Delta_x^n F(\eta) + |\xi^n(x)|\sum_{k=0}^\infty p_kq_k(x)d_x^k F(\eta)\right],$$ for any $F \in \mathcal{D}(\mathcal{L}^n)$, which consists of all $F\in C_b(E^n)$ such that the right-hand side of [\[equ.generator\]](#equ.generator){reference-type="eqref" reference="equ.generator"} is finite. Finally, set $\mu^n(t) = \epsilon Z^n(t)$ for all $t\in[0,\infty)$.* Our main results concern the limiting behaviour of the measures $\mu^n$ when $\varrho = \beta$; the limit will be called the $\beta$-rough super Brownian motion.
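The phrase "infinite variance" can be made concrete from the offspring law itself: the coefficients $p_k$ of $g$ decay like $k^{-(2+\beta)}$, so the offspring mean is finite (the mechanism is critical, $g'(1)=1$) while the second moment diverges. A numerical sketch with the arbitrary choice $\beta = 1/2$ (illustrative only):

```python
import math

beta = 0.5  # any 0 < beta < 1

def offspring_probs(n_max, beta):
    """p_k from g(s) = (1-s)^{1+beta}/(1+beta) + s, via the recursion
    (-1)^{k+1} C(1+beta, k+1) = (-1)^k C(1+beta, k) * (k-1-beta)/(k+1)."""
    p = [1.0 / (1 + beta), 0.0, beta / 2.0]
    b = (1 + beta) * beta / 2.0          # (-1)^2 C(1+beta, 2)
    for k in range(2, n_max):
        b *= (k - 1 - beta) / (k + 1)
        p.append(b / (1 + beta))
    return p

p = offspring_probs(100_000, beta)
assert abs(sum(p) - 1.0) < 1e-3          # probabilities sum to 1
# tail exponent: log2(p_k / p_{2k}) -> 2 + beta
slope = math.log2(p[512] / p[1024])
assert abs(slope - (2 + beta)) < 0.02
# finite mean (criticality), diverging second moment along partial sums
mean = sum(k * q for k, q in enumerate(p))
assert abs(mean - 1.0) < 1e-2
s2 = lambda N: sum(k * k * p[k] for k in range(N))
assert s2(80_000) > 1.5 * s2(10_000)
```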
We now define the $\beta$-rough super Brownian motion. We use $\prescript{}{\varkappa}{U_t(\varphi)}$ to denote the unique solution to the equation $$\label{equ.nonlinear_PAM_RSBM} \left\{ \begin{array}{ll} \partial_t w_t = \mathcal{H}w_t - \varkappa w_t^{1+\beta},\\ w_0 = \varphi. \end{array} \right.$$ In most situations, we drop the prescript $\varkappa$ and write $U_t(\varphi)$ when there is no risk of confusion. **Definition 5**. *Let $\xi$ be a deterministic environment satisfying Assumption [Assumption 1](#ass.environment){reference-type="ref" reference="ass.environment"}. Let $\varkappa > 0$ and let $\mu_0$ be a compactly supported measure. Let $\mu$ be a process with values in the space $\mathbb{D}([0,\infty),\mathcal{M}_F(\mathbb{R}^2))$ such that $\mu(0) = \mu_0$. Write $\mathcal{F}= \{\mathcal{F}_t\}_{t \in [0,\infty)}$ for the completed and right-continuous filtration generated by $\mu$. We call $\mu$ a $\beta$-rough super Brownian motion with parameter $\varkappa$ if it satisfies one of the three properties below:* 1. *For any $t\geq 0$ and non-negative $\varphi \in C_c^\infty(\mathbb{R}^2)$, the process $$N_t^{\varphi}(s) := e^{-\langle\mu_s,U_{t-s}(\varphi)\rangle}$$ is a bounded martingale.* 2. *For any $t\geq 0$, $\varphi_0 \in C_c^\infty(\mathbb{R}^2)$ and $f \in C_tC^\zeta(\mathbb{R}^2,e(l))$ with $\zeta > 0$ and $l < -t$, let $\varphi_t(s)$ solve $$\partial_s\varphi_t(s) + \mathcal{H}\varphi_t(s) = f(s), \qquad s\in[0,t],\qquad \varphi_t(t) = \varphi_0;$$ then $$\mathbb{M}^{\varphi_0,f}_t(s):=\langle\mu_s,\varphi_t(s)\rangle - \langle\mu_0,\varphi_0\rangle - \int_0^s\langle\mu_r,f(r)\rangle dr,$$ defined for $s\in[0,t]$, is an $L^{1+\theta}$-bounded purely discontinuous martingale for any $0 < \theta < \beta$. In addition, the random point measure associated with $\mathbb{M}^{\varphi_0,f}_t$ has compensator $$\langle\mu_s,\varkappa|\varphi_t(s)|^{1+\beta}\rangle\frac{\beta(1+\beta)}{\Gamma(1-\beta)x^{\beta+2}}\mathbbm{1}_{x\varphi_t>0}\,ds\,dx.$$* 3.
*For all $\phi \in \mathcal{D}_\mathcal{H}$, $$L^\phi_t := \langle{\mu_t,\phi}\rangle - \langle\mu_0,\phi\rangle - \int_0^t \langle\mu_s,\mathcal{H}\phi\rangle ds$$ is an $L^{1+\theta}$-bounded purely discontinuous martingale for any $0 < \theta < \beta$. In addition, the random point measure associated with $L_t^{\phi}$ has compensator $$\langle\mu_t,\varkappa|\phi|^{1+\beta}\rangle\frac{\beta(1+\beta)}{\Gamma(1-\beta)x^{\beta+2}}\mathbbm{1}_{x\phi>0}\,dt\,dx.$$* **Remark 6**. *It will be seen in the proof of [Theorem 7](#thm.equivalent_def){reference-type="ref" reference="thm.equivalent_def"} that the second definition in fact holds for $f$ with any weight $e(l)$: going from 2 to 1, one obtains the moment estimate on $\langle\mu_t,e(l)\rangle$ from definition 1 by an argument similar to the discrete case. This moment estimate then justifies the limiting procedure in the proof from 3 to 2 when the weight of $f$ is arbitrary.* Our main results are the following: **Theorem 7**. *The three definitions in [Definition 5](#def.beta_RSBM){reference-type="ref" reference="def.beta_RSBM"} are equivalent.* The proof is given in section [6](#sec.martingale_problem){reference-type="ref" reference="sec.martingale_problem"}. **Theorem 8**. *For any $\varkappa > 0$, the $\beta$-rough super Brownian motion exists and its law is unique. Furthermore, it can be realized as a limit of particle systems.* The proof is given in section [5](#sec.existence){reference-type="ref" reference="sec.existence"}. For $\varkappa < \frac{2}{1+\beta}$, it arises as the limit of the $\mu^n$; for the general case, we use the mixing method of [@perkowski2021rough]. As a direct consequence of the method in [@jin2023compact] and a coupling argument, we can show the compact support property and the super-exponential persistence of the $\beta$-rough super Brownian motion. **Definition 9**.
*We say a stochastic process with values in $\mathcal{M}_F(\mathbb{R}^2)$ is super-exponentially persistent if, for any non-zero non-negative function $\varphi\in C_c^\infty(\mathbb{R}^2)$ and all $\lambda > 0$, it holds that $$\mathbb{P}\left[\lim_{t\rightarrow\infty}e^{-t\lambda}\langle\mu(t),\varphi\rangle = \infty\right] > 0.$$* **Theorem 10**. *The $\beta$-rough super Brownian motion is super-exponentially persistent.* **Definition 11**. *We say a stochastic process with values in $\mathcal{M}_F(\mathbb{R}^2)$ possesses the compact support property if $$\mathbb{P}\left[\bigcup_{0\leq s \leq t}\operatorname{supp}(\mu_s)\text{ is compact}\right] = 1,$$ for any $t \geq 0$.* **Theorem 12**. *The $\beta$-rough super Brownian motion possesses the compact support property.* # On variants of PAM {#sec.variant} In this section, we collect the results on SPDEs required to construct the $\beta$-rough super Brownian motion. To start with, we give the solution theory of PAM on $\mathbb{Z}_n^2$ and on $\mathbb{R}^2$; details can be found in [@martin2019paracontrolled]. Let $\kappa' < \kappa\in \left(0,\frac{1}{4}\right)$ be regularity parameters and let $T > 0$ be the time horizon. **Theorem 13**. *Consider $\varphi^n\in C^{1-\kappa}(\mathbb{Z}_n^2,e(l))$ and $\varphi^{\#}\in C^{1+2\kappa}(\mathbb{Z}_n^2,e(l))$. Then the equation $$\label{equ.PAM} \left\{ \begin{array}{ll} \partial_t w^n_t = \mathcal{H}^nw^n_t + f^n,\\ w^n_0 =\varphi^n, \end{array} \right.$$ admits a unique solution with the estimate $$\label{equ.PAM_estimation} \|w^n\|_{\mathcal{L}_T^{1-\kappa}(\mathbb{Z}_n^2,e(l+T))} \lesssim \|f^n\|_{\mathcal{M}^\gamma C^{-1+2\kappa}(\mathbb{Z}_n^2,e(l))} + \|\varphi^n\|_{\mathcal{D}(\mathbb{Z}_n^2,e(l))}.$$ The constant in $'\lesssim'$ is uniform over $n$.
Furthermore, if $\mathcal{E}^n \varphi^n \rightarrow \varphi$ and $\mathcal{E}^n f^n \rightarrow f$, then the solutions $w^n$ converge to the solution of the continuum PAM, which satisfies the same estimate as [\[equ.PAM_estimation\]](#equ.PAM_estimation){reference-type="eqref" reference="equ.PAM_estimation"}.* **Remark 14**. *When the initial condition $\varphi$ is not paracontrolled but smooth, we may proceed as follows. Consider $\tilde{w}_t = w_t - \varphi^n$; then $\tilde{w}_t$ satisfies $$\label{equ.PAM_smooth_initial} \left\{ \begin{array}{ll} \partial_t \tilde{w}_t = \mathcal{H}^n\tilde{w}_t + f^n + \mathcal{H}^n\varphi^n,\\ \tilde{w}_0 = 0. \end{array} \right.$$ Now if $\varphi^n \in C^{1+2\kappa}(\mathbb{Z}_n^2,e(l))$ for some $l \in \mathbb{R}$, then $\mathcal{H}^n\varphi^n$ is well-defined and belongs to the space $C^{-1+2\kappa}(\mathbb{Z}_n^2,e(l)p(a))$ for any $a > 0$. Theorem [Theorem 13](#thm.PAM){reference-type="ref" reference="thm.PAM"} can then be applied to equation [\[equ.PAM_smooth_initial\]](#equ.PAM_smooth_initial){reference-type="eqref" reference="equ.PAM_smooth_initial"} to obtain estimates on the solution $w_t$.* ## Discrete case In this section, we fix a positive integer $n$ and consider equations on the discrete space-time $\mathbb{R}_+\times\mathbb{Z}_n^2$. We now turn to a non-linear discrete PAM, $$\label{equ.nonlinear_PAM} \left\{ \begin{array}{ll} \partial_t w^n_t = \mathcal{H}^n w^n_t - f(w^n_t,\xi^n)w^n_t, & \text{ in }(0,\infty]\times\mathbb{Z}_n^2,\\ w^n_0 =\varphi^n \geq 0, & \text{ on }\mathbb{Z}_n^2,\\ w^n_t \geq 0, & \text{ }\forall t \geq 0, \end{array} \right.$$ where $f$ is a non-negative function satisfying the following assumption. **Assumption 2**.
*There exists $\alpha > 0$ such that $$|f(x,y)| \lesssim 1 + |x|^\alpha,$$ and the three-variable function $$\frac{f(x_1,y)x_1 - f(x_2,y)x_2}{x_1-x_2}$$ is locally bounded on $\mathbb{R}^3$.* For [\[equ.nonlinear_PAM\]](#equ.nonlinear_PAM){reference-type="eqref" reference="equ.nonlinear_PAM"}, we have the following theorem. **Theorem 15**. *Suppose $f$ satisfies Assumption [Assumption 2](#ass.nonlinearity){reference-type="ref" reference="ass.nonlinearity"}. Let $\varphi^n \in C^{1+2\kappa}(\mathbb{Z}_n^2,e(l))$ for some $l \in \mathbb{R}$. Then equation [\[equ.nonlinear_PAM\]](#equ.nonlinear_PAM){reference-type="eqref" reference="equ.nonlinear_PAM"} admits a unique solution $w_t^n$ in the space $\cup_{\hat{l}\in\mathbb{R},\sigma\in(0,1)} \mathcal{L}^{1-\kappa}(\mathbb{Z}_n^2,e(\hat{l}))$ and we have the estimate $$\|w^n_t\|_{\mathcal{L}^{1-\kappa}(\mathbb{Z}_n^2,e(l'+(1+\alpha)t))} \lesssim \|\varphi^n\|_{C^{1+2\kappa}(\mathbb{Z}_n^2,e(l))}(1+\|\varphi^n\|_{C^{1+2\kappa}(\mathbb{Z}_n^2,e(l))}^\alpha),$$ for any $l' >(1+\alpha)l$.* To prove Theorem [Theorem 15](#thm.nonlinear_PAM){reference-type="ref" reference="thm.nonlinear_PAM"}, we first give a lemma representing solutions of a larger class of equations via Dynkin's formula. **Lemma 16**. *Suppose $R:\mathbb{R}^3\rightarrow\mathbb{R}$ is continuous and has at most polynomial growth. Let $\phi \in C_TL^\infty(\mathbb{Z}_n^2,e(l))$ and let $v_t\in C_TL^\infty(\mathbb{Z}_n^2, e(l))$ be the solution to the equation $$\left\{ \begin{array}{ll} \partial_t v_t = (\Delta^n + \xi^n)v_t - R(v_t,\xi^n,\phi_t), &\text{ in }(0,\infty]\times\mathbb{Z}_n^2,\\ v_0 = \varphi,&\text{ on }\mathbb{Z}_n^2, \end{array} \right.$$ for some $l\in\mathbb{R}$.
Then for $x \in \mathbb{Z}_n^2$ we have $$v_t(x) = \mathbb{E}\left[\varphi(X^n_t) + \int_0^t(v_{t-s}\cdot\xi^n)(X^n_s) - R(v_{t-s},\xi^n,\phi_{t-s})(X_s^n)ds\,\Big|\,X_0 = x\right].$$* *Proof.* As noted in [@martin2019paracontrolled], all the spaces $C^\alpha(\mathbb{Z}_n^2,e(l))$ are equivalent when $n$ and $l$ are fixed. The assumption on $\xi^n$ then tells us that the weighted supremum norm of $\xi^n$ is bounded. Together with the assumption on $R$, we see that $\partial_tv_t \in C^\alpha(\mathbb{Z}_n^2,e(\hat{l}))$ for some $\hat{l}$ and any $\alpha\in\mathbb{R}$; in particular, $\nabla^n\partial_tv_t \in L^\infty(\mathbb{Z}_n^2,e(\hat{l}))$. For each $N > 0$ and fixed $t > 0$, consider the truncated function $f^N(s,x) = v_{t-s}(x)\mathbbm{1}_{|x| \leq N}$, and define $f(s,x) = v_{t-s}(x)$. Since $X_s^n$ is a strong Markov process with generator $$\Delta^n\varphi(x) = n^2\sum_{y\sim x}(\varphi(y)-\varphi(x)),$$ Dynkin's formula gives $$\begin{aligned} \mathbb{E}\left[f(t,X_t^n) - f(0,x)\right] &=\sum_{i} \lim_{N\rightarrow\infty}\mathbb{E}\left[f^N(t_{i+1},X_{t_{i+1}}^n) - f^N(t_i,X_{t_i}^n)\right] \\ &=\sum_i \lim_{N\rightarrow\infty}\mathbb{E}\left[\int_{t_i}^{t_{i+1}}\Delta^nf^N(t_{i+1},X_s^n) + \partial_tf^N(s,X_{t_i}^n)ds\right]\\ &=\sum_i \mathbb{E}\left[\int_{t_i}^{t_{i+1}}\Delta^nf(t_{i+1},X_s^n) + \partial_tf(s,X_{t_i}^n)ds\right].
\end{aligned}$$ The first and third equalities hold because of the bounds on $v,\partial_tv$ and the fact that, for all $l_0\in\mathbb{R}$, $$\mathbb{E}\left[e^{l_0\sup_{0\leq s\leq t}|X_s^n|}\right] < \infty.$$ The error terms can be estimated as $$\begin{aligned} &\sum_i\mathbb{E}\left[\left|\int_{t_i}^{t_{i+1}}\Delta^n\left(f(t_{i+1},X_s^n) - f(s,X_s^n)\right)ds\right|\right]\\ &\lesssim \sum_i\mathbb{E}\left[\int_{t_i}^{t_{i+1}}\int_s^{t_{i+1}}e^{l|X_s^n|^\sigma}duds\right] \lesssim_n \sum_i(t_{i+1} - t_i)^2, \end{aligned}$$ and, setting $M_t^n:=\sup_{0\leq s\leq t}|X_s^n|$, by the Cauchy--Schwarz inequality, $$\begin{aligned} &\sum_i\mathbb{E}\left[\left|\int_{t_i}^{t_{i+1}}\partial_tf^N(s,X_{s}^n) - \partial_tf^N(s,X_{t_i}^n)ds\right|\right] \\ &\lesssim \sum_i \mathbb{E}\left[\int_{t_i}^{t_{i+1}}|X_{t_i}^n -X_s^n|e^{\hat{l}|M_t^n|^\sigma}ds\right] \\ &\lesssim \sum_i\int_{t_i}^{t_{i+1}}\mathbb{E}[|X_{t_i}^n - X_s^n|^2]^{1/2}\mathbb{E}[e^{2\hat{l}|M_t^n|^\sigma}]^{1/2}ds \lesssim \sum_i (t_{i+1} - t_i)^{3/2}. \end{aligned}$$ Letting the mesh of the partition tend to zero, we obtain $$\mathbb{E}\left[f(t,X_t^n) - f(0,x)\right] = \mathbb{E}\left[\int_{0}^{t}\Delta^nf(s,X_s^n) + \partial_tf(s,X_{s}^n)ds\right].$$ Inserting the equation for $v$, we obtain the result. ◻ Next, we need a lemma on a variant of PAM, namely $$\label{equ.variant_PAM} \left\{ \begin{array}{ll} \partial_t w^n_t = (\mathcal{H}^n - \phi^n_t)w^n_t, &\text{ in }(0,\infty]\times\mathbb{Z}_n^2,\\ w^n_0 =\varphi^n\geq 0, &\text{ on }\mathbb{Z}_n^2,\\ w^n_t \geq 0, & \text{ }\forall t \geq 0, \end{array} \right.$$ where $\phi^n_t$ is a positive function. **Lemma 17**. *Suppose the non-negative functions $\phi^n_t \in C_TL^\infty(\mathbb{Z}^2_n,e(l_0+\cdot))$ and $\varphi^n \in C^{1+2\kappa}(\mathbb{Z}^2_n,e(l))$ for some $l_0,l \in \mathbb{R}$.
Then equation [\[equ.variant_PAM\]](#equ.variant_PAM){reference-type="eqref" reference="equ.variant_PAM"} admits a minimal non-negative solution, given in mild formulation by $$w_t^n = T_t^n(\varphi^n) - \int_0^t T^n_{t-s}(\phi^n_{s}w_s^n)ds,$$ with the estimates $$\|w_t^n\|_{\mathcal{L}^{1-\kappa}(\mathbb{Z}_n^2,e(l+l_0+2t))}\lesssim \|\varphi^n\|_{C^{1+2\kappa}(\mathbb{Z}_n^2,e(l))}(1+\|\phi_t^n\|_{L^\infty(\mathbb{Z}_n^2,e(l_0+t))}),$$ and $$\|w_t^n\|_{L^\infty(\mathbb{Z}_n^2,e(l+t))} \lesssim \|\varphi^n\|_{C^{1+2\kappa}(\mathbb{Z}_n^2,e(l))},$$ where the constant in $'\lesssim'$ is independent of $n$. Furthermore, uniqueness of the solution holds in the space $\cup_{\hat{l}\in\mathbb{R}} C_TL^\infty(\mathbb{Z}_n^2,e(\hat{l}))$.* *Proof.* By an argument similar to [@gartner1990parabolic], the minimal solution of equation [\[equ.variant_PAM\]](#equ.variant_PAM){reference-type="eqref" reference="equ.variant_PAM"} is given by the Feynman--Kac formula $$w_t^n(x) = \mathbb{E}\left[e^{\int_0^t(\xi^n_e - \phi_{t-s}^n)(X^n_s)ds}\varphi^n(X^n_t)\Big|X^n_0 = x\right],$$ provided this is finite. By the positivity of $\phi^n$ and $\varphi^n$, the integrand satisfies $$0 \leq e^{\int_0^t(\xi^n_e - \phi_{t-s}^n)(X^n_s)ds}\varphi^n(X^n_t) \leq e^{\int_0^t\xi^n_e(X^n_s)ds}\varphi^n(X^n_t).$$ Thus $w_t^n$ is finite by the solvability of the discrete PAM, and furthermore we have the estimate $$\|w_t^n\|_{L^\infty(\mathbb{Z}_n^2,e(l+t))} \lesssim \|T_t^n\varphi^n\|_{L^\infty(\mathbb{Z}^2_n,e(l+t))} \lesssim \|\varphi^n\|_{C^{1+2\kappa}(\mathbb{Z}^2_n,e(l))}.$$ By the regularity estimate for PAM, we have $$\begin{aligned} \|w^n\|_{\mathcal{L}^{1-\kappa}(\mathbb{Z}_n^2,e(l+l_0+2\cdot))}&\lesssim \|\varphi^n\|_{C^{1+2\kappa}(\mathbb{Z}_n^2,e(l+l_0))} + \|\phi^nw^n\|_{C_TL^\infty(\mathbb{Z}_n^2,e(l+l_0+2\cdot))} \\ &\lesssim \|\varphi^n\|_{C^{1+2\kappa}(\mathbb{Z}_n^2,e(l))}(1+\|\phi_t^n\|_{L^\infty(\mathbb{Z}_n^2,e(l_0+t))}).
\end{aligned}$$ To pass from the Feynman--Kac formula to the mild formulation, integration by parts gives $$\begin{aligned} w_t^n(x) &= \mathbb{E}_{x}\left[\left( 1 - \int_0^t d_se^{\int_s^t(\xi_e^n-\phi_{t-r}^n)(X_r^n)dr}\right)\varphi^n(X_t^n)\right]\\ &=\mathbb{E}_{x}\left[\varphi^n(X^n_t) + \int_0^te^{\int_s^t(\xi_e^n-\phi^n_{t-u})(X^n_u)du}(\xi_e^n-\phi^n_{t-s})(X^n_s)ds \varphi^n(X^n_t)\right]\\ &= \mathbb{E}_{x}\left[\varphi^n(X^n_t) +\int_0^te^{\int_{t-s}^t(\xi_e^n-\phi^n_{t-u})(X^n_u)du}\varphi^n(X^n_t)(\xi_e^n-\phi^n_s)(X^n_{t-s})ds \right]\\ &= \mathbb{E}_{x}\left[\varphi^n(X^n_t) +\int_0^t w_s^n(X_{t-s}^n)(\xi_e^n-\phi^n_s)(X^n_{t-s})ds \right]\\ &= P^n_t(\varphi^n) + \int_0^t P^n_{t-s}(w^n_{s}(\xi^n_e - \phi^n_{s}))ds, \end{aligned}$$ where we applied the Markov property of $X^n$ in the fourth equality. A standard argument then converts this into the formulation with the operator $T^n$. For uniqueness, if a solution satisfies $w^n \in C_TL^\infty(\mathbb{Z}_n^2,e(\hat{l}))$ for some $\hat{l}\in\mathbb{R}$, then Lemma [Lemma 16](#lem.Dynkin_formula_PAM){reference-type="ref" reference="lem.Dynkin_formula_PAM"} shows that it must be given by $$w^n_t(x) = \mathbb{E}\left[\varphi(X^n_t) + \int_0^t(w_{t-s}\cdot\xi_e^n)(X^n_s) - w_{t-s}(X^n_s)\phi^n_{t-s}(X_s^n)ds\Big|X_0 = x\right],$$ which is exactly the mild formulation. Now we reverse the procedure. The integration by parts formula gives $$\begin{aligned} &\int_0^t e^{\int_{t-s}^t(\xi_e^n - \phi^n_u)(X_{t-u}^n)du}(\xi_e^n - \phi^n_{t-s})(X_s^n)\int_s^t w^n_{t-u}(\xi_e^n - \phi_{t-u}^n)(X_u^n)duds \\ =& \int_0^t \int_s^t w_{t-u}^n(\xi_e^n - \phi_{t-u}^n)(X_u^n)du d\left(e^{\int_{t-s}^t(\xi_e^n - \phi^n_u)(X_{t-u}^n)du}\right)\\ =& \int_0^t e^{\int_{t-s}^t(\xi_e^n - \phi^n_u)(X_{t-u}^n)du}w^n_{t-s}(\xi_e^n - \phi_{t-s}^n)(X_s^n)ds \\ &- \int_0^tw_{t-u}^n(\xi_e^n-\phi^n_{t-u})(X_u^n)du.
\end{aligned}$$ Thus, by the Markov property, $$\begin{aligned} w_t^n(x) =& \mathbb{E}_x\left[-\int_0^t e^{\int_{t-s}^t(\xi_e^n - \phi_u^n)(X_{t-u}^n)du}(\xi_e^n - \phi_{t-s}^n)(X_s^n)\left\{w_{t-s}^n(X_s^n) - \varphi^n(X_t^n) \right.\right. \\ &- \left.\int_s^t w_{t-u}^n(\xi_e^n - \phi_{t-u}^n)(X_u^n)du\}ds + w_t^n(x)\right]\\ =& \mathbb{E}_x\left[w_t^n(x) - \int_0^tw_{t-s}^n(\xi_e^n-\phi_{t-s}^n)(X_s^n)ds \right.\\ &+\left. \int_0^t\varphi^n(X_t^n)d\left(e^{\int_{t-s}^t(\xi_e^n - \phi_u^n)(X_{t-u}^n)du}\right)\right]\\ =& \mathbb{E}_x\left[\varphi^n(X_t^n)e^{\int_0^t(\xi_e^n - \phi^n_{t-s})(X_s)ds}\right]. \end{aligned}$$ Thus the solution $w_t^n$ is unique. ◻ **Remark 18**. *The above lemma also holds when $\varphi^n \in \mathcal{D}$, by Theorem [Theorem 13](#thm.PAM){reference-type="ref" reference="thm.PAM"}, and the estimate on $w_t^n$ becomes $$\|w_t^n\|_{\mathcal{L}^{1-\kappa}(\mathbb{Z}_n^2,e(l+l_0+2t))}\lesssim \|\varphi^n\|_{\mathcal{D}(\mathbb{Z}_n^2,e(l))}(1+\|\phi_t^n\|_{L^\infty(\mathbb{Z}_n^2,e(l_0+t))}).$$ Furthermore, the proof above shows that the only solution to equation [\[equ.variant_PAM\]](#equ.variant_PAM){reference-type="eqref" reference="equ.variant_PAM"} with initial condition $0$ is $0$, even **without the positivity conditions (for $\phi^n, w^n$)**.* We are now able to prove Theorem [Theorem 15](#thm.nonlinear_PAM){reference-type="ref" reference="thm.nonlinear_PAM"}. *Proof of theorem [Theorem 15](#thm.nonlinear_PAM){reference-type="ref" reference="thm.nonlinear_PAM"}.* We define the map $\mathcal{K}: \phi^n\rightarrow \mathcal{K}(\phi^n)$ sending $\phi^n$ to the minimal solution of equation [\[equ.variant_PAM\]](#equ.variant_PAM){reference-type="eqref" reference="equ.variant_PAM"}.
For a function $w \in C_T(\mathbb{Z}_n^2,e((l+\cdot)))$, we have $$\|\mathcal{K}(f(w,\xi))_t\|_{L^\infty(\mathbb{Z}_n^2,e(l+t))} \lesssim \|\varphi^n\|_{C^{1+2\kappa}(\mathbb{Z}_n^2,e(l))}.$$ This estimate shows that the map $\mathcal{K}(f(\cdot,\xi_e^n))$ maps the whole space $C_T(\mathbb{Z}_n^2,e((l+\cdot)))$ into a bounded subset of itself. We now equip the space with the weak$*$ topology. By Schauder's fixed point theorem, this map has a fixed point, and the fixed point is exactly the mild solution to equation [\[equ.nonlinear_PAM\]](#equ.nonlinear_PAM){reference-type="eqref" reference="equ.nonlinear_PAM"}. To show that $w_t^n$ is indeed a solution of equation [\[equ.nonlinear_PAM\]](#equ.nonlinear_PAM){reference-type="eqref" reference="equ.nonlinear_PAM"}, we write $w_t^n$ in the Feynman--Kac formulation $$w_t^n(x) = \mathbb{E}_x\left[e^{\int_0^t(\xi_e^n - f(w_{t-s}^n,\xi^n_e))(X_s)ds}\varphi^n(X_t)\right];$$ then by Lemma [Lemma 17](#lem.variant_PAM){reference-type="ref" reference="lem.variant_PAM"}, $w_t^n$ is the minimal solution to the equation $$\left\{ \begin{array}{ll} \partial_t u^n_t = (\mathcal{H}^n - f(w_t^n,\xi^n_e))u^n_t,\\ u^n_0 =\varphi^n,\\ u^n_t \geq 0. \end{array} \right.$$ Thus it is a solution to equation [\[equ.nonlinear_PAM\]](#equ.nonlinear_PAM){reference-type="eqref" reference="equ.nonlinear_PAM"}. We now turn to uniqueness. Suppose we have two solutions to equation [\[equ.nonlinear_PAM\]](#equ.nonlinear_PAM){reference-type="eqref" reference="equ.nonlinear_PAM"}, say $\tilde{w}^n$ and $w^n$. Then the difference $$\bar{w}^n := \tilde{w}^n - w^n$$ satisfies the equation $$\left\{ \begin{array}{ll} \partial_t \bar{w}^n_t = \mathcal{H}^n\bar{w}^n_t - \frac{f(\tilde{w}_t^n,\xi^n_e)\tilde{w}^n_t - f(w_t^n,\xi^n_e)w^n_t}{\bar{w}_t^n}\bar{w}^n_t,\\ \bar{w}^n_0 =0.
\end{array} \right.$$ The above estimate shows that the supremum norms of $\tilde{w}^n$ and $w^n$ are bounded; combining this with Assumption [Assumption 2](#ass.nonlinearity){reference-type="ref" reference="ass.nonlinearity"}, we see that $$\frac{f(\tilde{w}_t^n,\xi^n_e)\tilde{w}^n_t - f(w_t^n,\xi^n_e)w^n_t}{\bar{w}_t^n} \in C_TL^\infty(\mathbb{Z}_n^2,e(\tilde{l})).$$ Then from Remark [Remark 18](#rem.variant_PAM){reference-type="ref" reference="rem.variant_PAM"}, we conclude $\bar{w}^n = 0$, and hence uniqueness holds. ◻ ## Continuous case For the continuous case, all the existence problems for mild solutions can be solved by sending $n$ to infinity in the discrete setting, since all the estimates in the discrete case are uniform in $n$. We are then interested in the uniqueness of solutions. As we saw in the discrete case, the key to uniqueness is the equivalence between the Feynman--Kac formula and the mild formulation. It is not clear what $e^{\int_0^t\xi(X_s)ds}$ means in the Feynman--Kac formula in the continuous case. However, we can hide this quantity by working directly with the directed polymer measure. The construction and convergence of the polymer measure for spatial white noise, via mollified or discrete white noise on $\mathbb{T}^2$, are given in [@cannizzaro2018multidimensional; @chouk2017invariance]. The argument in [@chouk2017invariance] is closely related to the convergence of solutions of the corresponding parabolic Anderson models, and applying [@martin2019paracontrolled] on the whole of $\mathbb{R}^2$ then yields the construction and convergence of the polymer measure on $\mathbb{R}^2$. Intuitively, we only need the Markov property of $X^n$ when proving uniqueness for equation [\[equ.variant_PAM\]](#equ.variant_PAM){reference-type="eqref" reference="equ.variant_PAM"}. And in [@alberts2014continuum], the authors show that the polymer measure conditioned on the environment is in fact a Markov process, which suggests that uniqueness should also hold in the continuous case.
We start with the definition of the polymer measure $Q$ on the space $C[0,T]$. **Definition 19**. *Let $X_t$ be the canonical process on $C[0,T]$. Define the finite-dimensional distributions, for $0 = t_0 < t_1 < \cdots < t_k \leq T$ and $x_0 = x,x_i\in\mathbb{R}^2,i=1,\cdots,k$, by $$Q_x\left[X_{t_i} \in dx_i\right] = \frac{1}{\mathcal{Z}_T(x)}\prod_{i=0}^{k-1}\mathcal{Z}_{t_{i+1}-t_i}(x_i,x_{i+1})\mathcal{Z}_{T-t_k}(x_k)dx_1dx_2\cdots dx_k,$$ where $\mathcal{Z}_t(x,y)$ is the solution to the equation $$\label{equ.PAM_dirac} \left\{ \begin{array}{ll} \partial_t \mathcal{Z}_t(x,y) = \mathcal{H}_x\mathcal{Z}_t(x,y), & \text{ in }(0,\infty]\times\mathbb{R}^2,\\ \mathcal{Z}_0(x) =\delta_y(x), &\text{ on }\mathbb{R}^2, \end{array} \right.$$ and $\mathcal{Z}_t(x) := \int_{\mathbb{R}^2}\mathcal{Z}_t(x,y)dy$. The measure $Q_x$ generated by the above finite-dimensional distributions is called the directed polymer measure starting at the point $x$.* **Remark 20**. *The two-dimensional Dirac delta has regularity $B_{p,\infty}^{-2(1-1/p)}$, and applying the results of [@martin2019paracontrolled] with $p=1$, we see that equation [\[equ.PAM_dirac\]](#equ.PAM_dirac){reference-type="eqref" reference="equ.PAM_dirac"} admits a unique solution in the paracontrolled space. The solution $\mathcal{Z}_t(x,y)$ is in fact a Green's function, and we have the representation $$T_t\varphi(x) = \int_{\mathbb{R}^2}\mathcal{Z}_t(x,y)\varphi(y)dy.$$ Furthermore, the consistency of the finite-dimensional distributions is guaranteed by the Chapman--Kolmogorov relation $$\mathcal{Z}_{t+s}(x,y) = (T_{t+s}\delta_y)(x) = (T_t\mathcal{Z}_s(\cdot,y))(x) = \int_{\mathbb{R}^2}\mathcal{Z}_t(x,z)\mathcal{Z}_s(z,y)dz.$$ In addition, the Markov property of the polymer measure can be seen directly from its finite-dimensional distributions.* Apart from the Markov property, we need a result on the transition function of the polymer measure, which can in fact be viewed as an equivalent form of the definition of the polymer measure. **Lemma 21**.
*We have the following representation for $\varphi \in C^{1+2\kappa}(\mathbb{R}^2,e(l))$: $$T_{t-s}\varphi(X_s) = \mathbb{E}_x^{Q}\left[\mathcal{Z}_{T-s}(X_s)\frac{\varphi(X_t)}{\mathcal{Z}_{T-t}(X_t)}\Big|X_s\right],$$ where $X_s$ follows the probability measure $Q_x$.* *Proof.* For any measurable function $g$, we calculate $$\begin{aligned} &\mathbb{E}_x^Q\left[\mathcal{Z}_{T-s}(X_s)\frac{\varphi(X_t)}{\mathcal{Z}_{T-t}(X_t)}g(X_s)\right] \\ =& \frac{1}{\mathcal{Z}_T(x)}\int_{(\mathbb{R}^2)^2}\mathcal{Z}_s(x,x_1)\mathcal{Z}_{t-s}(x_1,x_2)\mathcal{Z}_{T-t}(x_2)\mathcal{Z}_{T-s}(x_1)g(x_1)\frac{\varphi(x_2)}{\mathcal{Z}_{T-t}(x_2)}dx_1dx_2\\ =& \frac{1}{\mathcal{Z}_T(x)}\int_{(\mathbb{R}^2)^2}\mathcal{Z}_s(x,x_1)\mathcal{Z}_{t-s}(x_1,x_2)\mathcal{Z}_{T-s}(x_1)g(x_1)\varphi(x_2)dx_1dx_2\\ =&\frac{1}{\mathcal{Z}_T(x)}\int_{\mathbb{R}^2}T_{t-s}\varphi(x_1)g(x_1)\mathcal{Z}_s(x,x_1)\mathcal{Z}_{T-s}(x_1)dx_1 = \mathbb{E}^Q_x[T_{t-s}\varphi(X_s)g(X_s)]. \end{aligned}$$ Thus the result follows. ◻ Now consider the equation $$\label{equ.variant_PAM_continuum} \left\{ \begin{array}{ll} \partial_t w_t = (\mathcal{H}- \phi_t)w_t, & \text{ in }(0,\infty]\times\mathbb{R}^2,\\ w_0 =\varphi\geq 0, &\text{ on }\mathbb{R}^2,\\ w_t \geq 0, &\text{ }\forall t \geq 0. \end{array} \right.$$ **Lemma 22**. *Suppose $\phi_t \in C_TL^\infty(\mathbb{R}^2,e(l_0+\cdot))$ and $\varphi \in C^{1+2\kappa}(\mathbb{R}^2,e(l))$ for some $l_0,l \in \mathbb{R}$.
Then equation [\[equ.variant_PAM_continuum\]](#equ.variant_PAM_continuum){reference-type="eqref" reference="equ.variant_PAM_continuum"} admits a solution given in mild formulation by $$w_t = T_t(\varphi) - \int_0^t T_{t-s}(\phi_{s}w_s)ds,$$ with the estimates $$\|w_t\|_{\mathcal{L}^{1-\kappa}(\mathbb{R}^2,e(l+l_0+2t))}\lesssim \|\varphi\|_{C^{1+2\kappa}(\mathbb{R}^2,e(l))}(1+\|\phi_t\|_{L^\infty(\mathbb{R}^2,e(l_0+t))}),$$ and $$\|w_t\|_{L^\infty(\mathbb{R}^2,e(l+t))} \lesssim \|\varphi\|_{C^{1+2\kappa}(\mathbb{R}^2,e(l))}.$$ The same result holds for $\varphi\in\mathcal{D}^{1-\epsilon}(\mathbb{R}^2,e(l))$. Furthermore, uniqueness of the solution holds in the space $\cup_{\hat{l}\in\mathbb{R},\sigma\in(0,1)} C_TC^0(\mathbb{R}^2,e(\hat{l}))$.* *Proof.* Existence is a direct consequence of the discrete case after passing to the limit. For uniqueness, the mild formulation is given by $$\begin{aligned} w_t(x) =& T_t\varphi - \int_0^t T_{t-s}(\phi_sw_s)ds \\ =& \mathbb{E}^Q_x\left[\mathcal{Z}_T(x)\frac{\varphi(X_t)}{\mathcal{Z}_{T-t}(X_t)} - \mathcal{Z}_T(x)\int_0^t \frac{\phi_{t-s}w_{t-s}(X_s)}{\mathcal{Z}_{T-s}(X_s)}ds\right]. \end{aligned}$$ Furthermore, for $0 < s < t$ we have $$\begin{aligned} w_{t-s}(X_s) = \mathbb{E}_x^Q\left[\mathcal{Z}_{T-s}(X_s)\frac{\varphi(X_t)}{\mathcal{Z}_{T-t}(X_t)} - \mathcal{Z}_{T-s}(X_s)\int_s^t\frac{\phi_{t-u}w_{t-u}(X_u)}{\mathcal{Z}_{T-u}(X_u)}du\Big|X_s\right].
\end{aligned}$$ Thus, the same calculation as in Lemma [Lemma 17](#lem.variant_PAM){reference-type="ref" reference="lem.variant_PAM"} gives $$\begin{aligned} w_t(x) =& \mathbb{E}_x^Q\left[w_t(x) + \mathcal{Z}_T(x)\int_0^t e^{\int_{t-s}^t-\phi_u(X_{t-u})du}\frac{-\phi_{t-s}(X_s)}{\mathcal{Z}_{T-s}(X_s)}\left\{w_{t-s}(X_s) - \right.\right.\\ & \left.\left.\mathcal{Z}_{T-s}(X_s)\frac{\varphi(X_t)}{\mathcal{Z}_{T-t}(X_t)} + \mathcal{Z}_{T-s}(X_s)\int_s^t\frac{\phi_{t-u}(X_u)w_{t-u}(X_u)}{\mathcal{Z}_{T-u}(X_u)}du\right\}ds\right]\\ =& \mathbb{E}_x^Q\left[\mathcal{Z}_T(x)\frac{\varphi(X_t)}{\mathcal{Z}_{T-t}(X_t)}\left(1 - \int_0^te^{-\int_{t-s}^t\phi_{u}(X_{t-u})du}\phi_{t-s}(X_s)ds\right)\right]\\ =&\mathbb{E}_x^Q\left[\mathcal{Z}_T(x)e^{-\int_0^t\phi_{t-u}(X_u)du}\frac{\varphi(X_t)}{\mathcal{Z}_{T-t}(X_t)}\right]. \end{aligned}$$ The continuity of $w_t$ and $X_t$ and the strict positivity of $\mathcal{Z}$ ensure that all the calculations inside the expectation are valid. Thus we obtain uniqueness of the mild solution to equation [\[equ.variant_PAM_continuum\]](#equ.variant_PAM_continuum){reference-type="eqref" reference="equ.variant_PAM_continuum"}. ◻ It is then a direct consequence that the solution of the equation $$\label{equ.nonlinear_PAM_continuum} \left\{ \begin{array}{ll} \partial_t w_t = \mathcal{H}w_t - f(w_t)w_t, &\text{ in }(0,\infty]\times\mathbb{R}^2,\\ w_0 =\varphi\geq 0, &\text{ on }\mathbb{R}^2,\\ w_t \geq 0,&\text{ }\forall t\geq 0, \end{array} \right.$$ is unique, where $f$ is a non-negative function. **Corollary 23**.
*Suppose $|f(x)|\leq |x|^\alpha$ for some $\alpha > 0$, $xf(x)$ is locally Lipschitz and $\varphi\in C^{1+2\kappa}(\mathbb{R}^2,e(l))$. Then there exists a unique mild solution to equation [\[equ.nonlinear_PAM_continuum\]](#equ.nonlinear_PAM_continuum){reference-type="eqref" reference="equ.nonlinear_PAM_continuum"}, and we have the estimate $$\|w_t\|_{\mathcal{L}^{1-\kappa}(\mathbb{R}^2,e(l'+(1+\alpha)t))} \lesssim \|\varphi\|_{C^{1+2\kappa}(\mathbb{R}^2,e(l))}(1+\|\varphi\|_{C^{1+2\kappa}(\mathbb{R}^2,e(l))}^\alpha),$$ for any $l' >(1+\alpha)l$.* *Proof.* Existence comes from approximation by the discrete equation. Uniqueness follows from an argument similar to the proof of Theorem [Theorem 15](#thm.nonlinear_PAM){reference-type="ref" reference="thm.nonlinear_PAM"}. ◻ **Remark 24**. *All the results apply to the inhomogeneous equations $$\label{equ.variant_PAM_continuum_inhomo} \left\{ \begin{array}{ll} \partial_t w_t = (\mathcal{H}- \phi_t)w_t + g_t, &\text{ in }(0,\infty]\times\mathbb{R}^2,\\ w_0 =\varphi\geq 0, &\text{ on }\mathbb{R}^2,\\ w_t \geq 0,&\text{ }\forall t\geq 0, \end{array} \right.$$ and $$\label{equ.nonlinear_PAM_continuum_inhomo} \left\{ \begin{array}{ll} \partial_t w_t = \mathcal{H}w_t - f(w_t)w_t + g_t, &\text{ in }(0,\infty]\times\mathbb{R}^2,\\ w_0 =\varphi\geq 0, &\text{ on }\mathbb{R}^2,\\ w_t \geq 0,&\text{ }\forall t\geq 0, \end{array} \right.$$ under the additional condition that $g \in C_TL^\infty(\mathbb{R}^2,e(l))$ is non-negative. Indeed, in these cases the mild solution of [\[equ.variant_PAM_continuum_inhomo\]](#equ.variant_PAM_continuum_inhomo){reference-type="eqref" reference="equ.variant_PAM_continuum_inhomo"} can be expressed as $$\begin{aligned} w_t(x) = \mathbb{E}_x^Q&\left[\mathcal{Z}_T(x)e^{-\int_0^t\phi_{t-u}(X_u)du}\frac{\varphi(X_t)}{\mathcal{Z}_{T-t}(X_t)}\right. \\ &\quad+ \left.\mathcal{Z}_T(x)\int_0^te^{-\int_0^s\phi_{t-u}(X_u)du}\frac{g_{t-s}(X_s)}{\mathcal{Z}_{T-s}(X_s)}ds\right].
\end{aligned}$$ Thus the comparison principle with respect to $\phi$ still holds; this result is needed for the compact support property.* **Remark 25**. *All of the above discussion remains valid with the operator $\mathcal{H}$ replaced by $\Delta$, with much simpler arguments and better regularity.* # Existence as limit of particle systems {#sec.existence} ## For the case $\varkappa < \frac{2}{1+\beta}$ In this section, we show that the family $\{\mu^n\}$ defined in Definition [Definition 4](#def.discrete_mu){reference-type="ref" reference="def.discrete_mu"} is tight in $\mathbb{D}([0,\infty),\mathcal{M}(\mathbb{R}^2))$ and that its limit is the $\beta$-rough super Brownian motion with the desired Laplace functional. We will need auxiliary branching random walks with the same branching rate $a^n$ but with branching mechanism $g(s)$, whose empirical measures we denote by $\{\tilde{\mu}^n\}$. Notice that this branching mechanism $g$ is independent of $n$ and of the position $x$, and is well studied in classical books [@dawson1992infinitely; @dawson1993measure] when the branching rate is also independent of the position. **Theorem 26**. *Suppose $\mu_0$ is supported on a compact set. The limit $\mu_t:= \lim_{n\rightarrow\infty}\mu_t^n$ exists as a stochastic process in the space $\mathbb{D}([0,\infty),\mathcal{M}(\mathbb{R}^2))$ and, when $\varrho = \beta$, for every non-negative function $\varphi \in C^{1+2\kappa}(\mathbb{R}^2,e(l)),l\in\mathbb{R}$, the Laplace functional is given by $$\mathbb{E}\left[e^{-\langle\mu_t,\varphi\rangle}\right] = e^{-\langle\mu_0,U_t(\varphi)\rangle},$$ where $U_t(\varphi)$ is the unique non-negative solution to equation [\[equ.nonlinear_PAM_RSBM\]](#equ.nonlinear_PAM_RSBM){reference-type="eqref" reference="equ.nonlinear_PAM_RSBM"} with $\varkappa = \frac{2\nu}{1+\beta}$.
In particular, this shows that $\mu_t$ is Markov and that the first definition in Definition [Definition 5](#def.beta_RSBM){reference-type="ref" reference="def.beta_RSBM"} is satisfied. Thus the $\beta$-rough super Brownian motion exists and is unique in law. In addition, when $\varrho < \beta$, the limit $\mu_t$ solves the PAM equation.* Let us first give a more detailed description of the spatially dependent branching mechanism and branching rate. Since all particles in the system move independently, it suffices to describe the motion of one particle. To describe the branching time of one particle, consider independent Poisson processes $N_t^x$ with intensity $a(x)$, attached to each site $x$ of $\mathbb{Z}_n^2$. Let $X_t^n$ be a simple random walk on $\mathbb{Z}_n^2$; then the branching time $\tau$ can be defined by $$\tau := \inf\{t \geq 0: X_t^n \text{ triggers the clock of }N_t^{X_t^n} \text{ at time }t\}.$$ By the strong Markov property of the Poisson process, each time $X^n$ changes site, say at time $t_0$, there is a fresh Poisson process $(N_{t}^x - N_{t_0}^x)_{t\geq t_0}$, independent of all past events, determining the branching time. Now rescale: set $\tilde{N}_t^x = N_{\frac{t}{a(x)}}^x$; then the $\tilde{N}_t^x$ are Poisson processes with intensity $1$. If we paste together all the pieces of the rescaled Poisson processes before $X_t^n$ branches, we obtain a random variable with the exponential distribution of parameter $1$. Hence, by this time change, we can give an equivalent definition of the branching time $\tau$: $$\tau:= \inf\left\{t \geq 0: \int_0^t a(X_s)ds \geq \tilde{\tau}\right\},$$ where $\tilde{\tau}$ is exponentially distributed with parameter $1$.
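The time-change construction can be simulated directly. The sketch below is our own illustration (with $n = 1$, hence a rate-$4$ simple random walk, and assuming $a$ is positive along the path): it runs the additive clock $\int_0^t a(X_s)\,ds$ against an independent $\operatorname{Exp}(1)$ threshold $\tilde{\tau}$:

```python
import random

def branching_time(a, rng, jump_rate=4.0, x0=(0, 0)):
    """Sample tau = inf{t : int_0^t a(X_s) ds >= tilde_tau}, tilde_tau ~ Exp(1),
    for a continuous-time simple random walk X on Z^2 with the given jump rate."""
    tilde_tau = rng.expovariate(1.0)
    t, x, clock = 0.0, x0, 0.0
    while True:
        hold = rng.expovariate(jump_rate)   # holding time at the current site
        inc = a(x) * hold                   # the clock runs at speed a(x) at x
        if inc > 0 and clock + inc >= tilde_tau:
            return t + (tilde_tau - clock) / a(x)   # clock rings mid-holding
        clock += inc
        t += hold
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = (x[0] + dx, x[1] + dy)

# With a constant rate a(x) = 1, the clock runs at unit speed, so tau ~ Exp(1).
rng = random.Random(0)
samples = [branching_time(lambda x: 1.0, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
```

With a constant rate the empirical mean of the samples is close to $1$, matching the $\operatorname{Exp}(1)$ law of $\tilde{\tau}$.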
This also gives the law of the branching time $\tau$: $$\label{equ.branching_time} \mathbb{P}[\tau \geq t] = e^{-\int_0^t a(X_s)ds}.$$ As a consequence of the Poisson cluster random measure formula, Lemma [Lemma 41](#lem.Poisson_cluster){reference-type="ref" reference="lem.Poisson_cluster"}, we are able to give the Laplace functionals of $\mu_t^n$ and $\tilde{\mu}_t^n$. **Lemma 27**. *Let $A = \mathcal{H}^n$ or $\Delta^n$, and correspondingly $B = \frac{2\xi_+^n\epsilon^{\beta}}{1+\beta}$ or $\frac{|\xi^n|\epsilon^{\beta}}{1+\beta}$. For each $\varphi$, there exists $U_t^n(\varphi)$ (respectively $\tilde{U}_t^n(\varphi)$), the unique solution to the equation $$\label{equ.approx_laplace_PDE} \left\{ \begin{array}{ll} \partial_t v^n_t = A v^n_t - B(v_t^n)^{1+\beta}, & \text{ in }(0,\infty)\times\mathbb{R}^2,\\ v^n_0 = \frac{1-e^{-\epsilon\varphi(x)}}{\epsilon}, & \text{ on }\mathbb{R}^2, \end{array} \right.$$ such that the Laplace functionals of $\mu_t^n$ and $\tilde{\mu}_t^n$ are given by $$\mathbb{E}[e^{-\langle \mu_t^n,\varphi\rangle}] = e^{-\langle \mu^n_0,U_t^n(\varphi)\rangle},\qquad \mathbb{E}[e^{-\langle \tilde{\mu}_t^n,\varphi\rangle}] = e^{-\langle \mu^n_0,\tilde{U}_t^n(\varphi)\rangle}.$$* *Proof.* We only give the proof for $\mu_t^n$; the proof for $\tilde{\mu}_t^n$ is similar.
Define $$w_{t}^n(x):= \mathbb{E}\left[e^{-\langle\mu_t^n,\varphi\rangle}|Z_0^n = \delta_x\right].$$ Then Lemma [Lemma 46](#lem.laplace_equi){reference-type="ref" reference="lem.laplace_equi"} and Lemma [Lemma 45](#lem.laplace_equation_formula){reference-type="ref" reference="lem.laplace_equation_formula"} tell us that $$w_{t}^n(x) = \mathbb{E}\left[e^{-\epsilon\varphi(x)} + \int_0^t|\xi^n|(X^n_r)g^n(X^n_r,w^n_{t-r}(X^n_r))dr - \int_0^t|\xi^n|(X^n_r)w_{t-r}^n(X^n_r)dr\right].$$ The Poisson cluster formula, Lemma [Lemma 41](#lem.Poisson_cluster){reference-type="ref" reference="lem.Poisson_cluster"}, then gives us that $$\mathbb{E}[e^{-\langle \mu_t^n,\varphi\rangle}] = e^{-\left\langle\mu^n_0,\frac{1-w^n_{t}}{\epsilon}\right\rangle}.$$ Now define $$v^n_{t}(x) = \frac{1 - w_{t}^n(x)}{\epsilon}.$$ A direct calculation shows that $$\label{equ.laplace_discrete} v_{t}^n(x) = \mathbb{E}\left[\frac{1-e^{-\epsilon\varphi(X^n_t)}}{\epsilon} + \int_0^t v^n_{t-r}(X^n_r)\xi^n(X^n_r) - \frac{1}{1+\beta}(v^n_{t-r})^{1+\beta}(X^n_r)(2\xi_+^n(X^n_r)\epsilon^{\beta})dr\right].$$ Thus, $v_{t}^n$ is the unique solution to the equation [\[equ.approx_laplace_PDE\]](#equ.approx_laplace_PDE){reference-type="eqref" reference="equ.approx_laplace_PDE"} by Lemma [Lemma 17](#lem.variant_PAM){reference-type="ref" reference="lem.variant_PAM"}. ◻ Our next goal is to prove the tightness of $\mu^n$ in the space $\mathbb{D}(\mathbb{R}^+,\mathcal{M}(\mathbb{R}^2))$. We need the following lemmas. **Lemma 28**. *For any $0 < \beta < 1$, there exist an integer $N(\beta)$ and a coupling of the process $\mu_t^n$ with the sum of $N(\beta)$ independent copies $\tilde{\mu}_t^{n,i}$ of $\tilde{\mu}_t^n$ such that $$\langle\mu_t^n,\varphi\rangle \leq \sum_{i=1}^{N(\beta)}\langle\tilde{\mu}_t^{n,i},\varphi\rangle,$$ for any non-negative $\varphi$. Furthermore, the processes $\langle\tilde{\mu}_t^{n,i},1\rangle$ are martingales.* *Proof.* Consider random variables $Z^n_0(x)$ and $Z^n_i$ whose generating functions are given by $g^n(x,s)$ and $g(s)$, respectively.
We show that there exists $N(\beta)$ such that for any $k \in \mathbb{N}$ the following inequality holds: $$\label{equ.controll} \mathbb{P}[Z_0^n(x)\geq k] \leq \mathbb{P}\left[\sum_{i=1}^{N(\beta)}Z^n_i \geq k\right].$$ By the definition of $g^n$, we expand the generating functions as the series $$g^n(x,s) = \sum_{k=0}^\infty p_kg_k(x)s^k,$$ and $$g(s) = \sum_{k=0}^\infty p_ks^k.$$ We know furthermore that $g_0 \geq \frac{1-\beta}{1+\beta}$ and $g_k\leq 2$ for $k\geq 1$. Now consider $Z_1^n + Z_2^n + Z_3^n$; for all $k \in \mathbb{N}$ we have $$\mathbb{P}[Z_1^n + Z_2^n + Z_3^n\geq k] \geq 3\mathbb{P}[Z_1^n\geq k]\mathbb{P}[Z_1^n < k]^2.$$ Since $\lim_{k\rightarrow\infty} \mathbb{P}[Z_1^n < k] = 1$, there exists $k_0$ such that $$\mathbb{P}[Z_1^n + Z_2^n + Z_3^n\geq k_0] \geq 2\mathbb{P}[Z_1^n\geq k_0] \geq \mathbb{P}[Z_0^n(x) \geq k_0].$$ For $k < k_0$, since $p_0g_0(x) > 0$, the central limit theorem yields the existence of $N(\beta)$ such that [\[equ.controll\]](#equ.controll){reference-type="eqref" reference="equ.controll"} holds. The result then follows, since by inequality [\[equ.controll\]](#equ.controll){reference-type="eqref" reference="equ.controll"} we can always produce more particles at each site. The martingale property of $\langle\tilde{\mu}_t^{n,i},1\rangle$ follows from the fact that $g'(1) = 1$: at each branching event the expected number of offspring is again $1$, so the total number of particles in the system forms a martingale. ◻ **Lemma 29**. *Let $l$ be any real number. For $\varphi\in C^{1+2\gamma}(\mathbb{Z}_n^2,e(l))$, the processes $$M_t^{n,\varphi}(s):=\mu_s^n(T_{t-s}^n\varphi) - \epsilon Z_0^n(T_t^n\varphi),\qquad \tilde{M}_t^{n,\varphi}(s):= \tilde{\mu}_s^n(P_{t-s}^n\varphi) - \epsilon Z_0^n(P_t^n\varphi),$$ are martingales.
Thus, by the formula for the Poisson random measure, $$\mathbb{E}[\langle \mu_t^n,\varphi\rangle] = \langle\mu_0^n,T^n_t\varphi\rangle,\qquad \mathbb{E}[\langle\tilde{\mu}_t^n,\varphi\rangle] = \langle\mu_0^n,P_t^n\varphi\rangle.$$* *Proof.* We consider only $\mu_t^n$ here; the argument for $\tilde{\mu}_t^n$ is similar. Fix a time-dependent function $\phi_\cdot\in C^1_tL^\infty(\mathbb{Z}_n^2,e(-t+\cdot))$ and consider the function $F_\phi^t: [0,t]\times (\mathbb{N}^{\mathbb{Z}_n^2})\rightarrow \mathbb{R}$ defined by $$F_\phi^t(s,\eta) = \sum_{x\in\mathbb{Z}_n^2} \phi_s(x)\eta_x.$$ Applying Dynkin's formula to $F_\phi^t$ and arguing as in the proof of Lemma [Lemma 16](#lem.Dynkin_formula_PAM){reference-type="ref" reference="lem.Dynkin_formula_PAM"}, we obtain that $$M_t(s):=\mu_s^n(\phi_s) - \epsilon Z_0^n(\phi_0) - \int_0^s \partial_r F_\phi^t(r,\mu_r^n) + \mathcal{L}^nF_\phi^t(r,\mu_r^n)dr$$ is a martingale. A direct calculation gives $$\partial_rF_\phi^t(r,\mu_r^n) = \mu_r^n(\partial_r\phi_r),\qquad \mathcal{L}^nF_\phi^t(r,\mu_r^n) = \mu_r^n(\mathcal{H}^n\phi_r).$$ Taking $\phi_s = T^n_{t-s}\varphi$, we have the equality $$\partial_r\phi_r + \mathcal{H}^n\phi_r = 0.$$ When $\varphi$ is compactly supported, the result follows since then $\phi_\cdot \in C_t^1L^\infty(\mathbb{Z}_n^2,e(-t+\cdot))$. For a more general $\varphi \in C^{1+2\kappa}(\mathbb{Z}_n^2,e(l))$, it suffices to consider non-negative $\varphi$, since a general $\varphi$ can be controlled by a non-negative function in the same space. We can then approximate $\varphi$ by an increasing sequence of compactly supported functions $\varphi_m$ such that $\varphi_m \rightarrow \varphi \in C^{1+2\kappa}(\mathbb{Z}_n^2,e(l+1))$. This is possible since all regularities are equivalent on the discrete spaces, and increasing the weight slightly forces $\varphi$ to vanish at infinity under the new weight.
Then the solution theory of the PAM or of the heat equation gives $$T_t\varphi_m \rightarrow T_t\varphi \in C^{1-\kappa}(\mathbb{Z}_n^2,e(l+1)).$$ Then, by the monotone convergence theorem, we have $$\mathbb{E}[\langle\mu_t^n,\varphi\rangle] = \lim_{m\rightarrow\infty}\mathbb{E}[\langle\mu_t^n,\varphi_m\rangle] = \lim_{m\rightarrow\infty}\langle\mu_0^n,T_t^n\varphi_m\rangle = \langle\mu_0^n,T_t^n\varphi\rangle < \infty,$$ since $\mu_0^n$ is compactly supported. Thus $M_t$ is a martingale and the result follows. ◻ Our next goal is to obtain moment estimates for the quantity $\sup_{0\leq s \leq t}\langle \mu_s^n,\varphi\rangle$ for suitable $\varphi$. Working directly with $\mu_t^n$ raises technical issues when applying $\mathcal{H}^n$ to functions in the domain of the operator $\mathcal{H}$: it is far from obvious that the functions $\mathcal{H}^n\varphi^n$, with $\varphi^n\rightarrow\varphi \in \mathcal{D}_\mathcal{H}$, are uniformly bounded over $n$, even in $L^\infty$. We therefore turn to estimating sums of the $\tilde{\mu}_t^n$, which dominate $\mu_t^n$ and are much easier to handle. We will work with $\mu_t^n$ as much as possible and pass to $\tilde{\mu}_t^n$ only when a technical issue with $\mu_t^n$ cannot be resolved directly. We will follow the argument of Lemma 5.5.4 in [@dawson1992infinitely]. **Lemma 30**. *Suppose $0 < \theta < \beta$ and $\mu_0$ is compactly supported. For any function $\varphi\in L^{\infty}(\mathbb{Z}_n^2,e(l))$ with some $l\in\mathbb{R}$, we have the moment estimate, uniformly over $n$, $$\mathbb{E}\left[\langle \mu_t^n,\varphi\rangle^{1+\theta}\right] \lesssim_{\mu_0,l,\theta} \|\varphi\|_{L^\infty(\mathbb{Z}_n^2,e(l))}^{1+\theta}.$$ Furthermore, for any function $\varphi\in L^{\infty}(\mathbb{Z}_n^2,e(l))$ such that $\Delta^n\varphi \in L^{\infty}(\mathbb{Z}_n^2,e(l))$, we have the following estimate on moments of the supremum, uniformly over $n$:
$$\mathbb{E}\left[\sup_{0\leq s\leq t}\langle \mu_s^n,\varphi\rangle^{1+\theta}\right] \lesssim_{\mu_0,l,\theta,\beta} \|\varphi\|_{L^\infty(\mathbb{Z}_n^2,e(l))} + \|\varphi\|_{L^\infty(\mathbb{Z}_n^2,e(l))}^2 + \|\Delta^n\varphi\|_{L^\infty(\mathbb{Z}_n^2,e(l))}^{1+\beta}.$$* *Proof.* **1.** Consider first a non-negative function $\varphi \in C^{1+2\kappa}(\mathbb{Z}_n^2,e(l))$ for some $l\in\mathbb{R}$. By Lemma [Lemma 47](#lem.auxiliary){reference-type="ref" reference="lem.auxiliary"}, we have $$\label{equ.moment_est} \begin{aligned} \mathbb{E}[\langle \mu_s^n,\varphi\rangle^{1+\theta}]\lesssim 1 + (1+\theta)\int_1^\infty r^{1+\theta}\int_0^{\frac{2}{r}}[\mathbb{E}[e^{-\langle \mu_s^n,u\varphi\rangle}] - 1 + \mathbb{E}[\langle \mu_s^n,u\varphi\rangle]]dudr. \end{aligned}$$ We calculate $$\begin{aligned} &\mathbb{E}[e^{-\langle \mu_s^n,u\varphi\rangle}] - 1 + \mathbb{E}[\langle \mu_s^n,u\varphi\rangle] \\ \lesssim& u^{1+\beta}\langle \mu_0^n,T_s^n(\varphi)\rangle^{1+\beta} + e^{-\langle \mu_0^n,U_s^n(u\varphi)\rangle} - e^{-\langle \mu_0^n,T_s^n(u\varphi)\rangle}\\ \lesssim& u^{1+\beta}\langle \mu_0^n,T_s^n(\varphi)\rangle^{1+\beta} +\langle \mu_0^n,|U_s^n(u\varphi) - T_s^n(u\varphi)|\rangle. \end{aligned}$$ From the representation $$U_t^n(u\varphi) = T_t^n\left(\frac{1-e^{-\epsilon u\varphi}}{\epsilon}\right) - \int_0^t T_{t-s}^n\left(\frac{2\xi_+^n\epsilon^\beta}{1+\beta}(U_s^n(u\varphi))^{1+\beta}\right)ds \leq uT_t^n\varphi,$$ we obtain $$\begin{aligned} &|U_s^n(u\varphi) - T_s^n(u\varphi)| \\ \lesssim & \left|T_s^n\left(\frac{1-e^{-\epsilon u\varphi}-\epsilon u\varphi}{\epsilon}\right)\right| + \left|\int_0^s T_{s-r}^n\left(\frac{2\xi_+^n\epsilon^\beta}{1+\beta}(U_r^n(u\varphi))^{1+\beta}\right)dr\right|\\ \lesssim& \epsilon^\beta u^{1+\beta}|T_s^n(\varphi^{1+\beta})| + u^{1+\beta}\left|\int_0^sT_{s-r}^n\left(\frac{2\xi_+^n\epsilon^\beta}{1+\beta}T_r^n(\varphi)^{1+\beta}\right)dr\right|.
\end{aligned}$$ Let $$\begin{aligned} I^n_t(\varphi):=& \sup_{0\leq s\leq t}\langle\mu_0^n,T_s^n(\varphi)\rangle^{1+\beta} + \langle\mu_0^n,\epsilon^\beta T_s^n(\varphi^{1+\beta})\rangle \\ &+ \left\langle\mu_0^n,\left|\int_0^sT_{s-r}^n\left(\frac{2\xi_+^n\epsilon^\beta}{1+\beta}T_r^n(\varphi)^{1+\beta}\right)dr\right| \right\rangle. \end{aligned}$$ Then we have the estimate $$\begin{aligned} \mathbb{E}[\langle\mu_s^n,\varphi\rangle^{1+\theta}] \lesssim 1 + (1+\theta)\int_1^\infty r^{1+\theta}\int_0^{\frac{2}{r}}u^{1+\beta}I_s^n(\varphi) dudr \lesssim 1 + \frac{1+\theta}{\beta-\theta}I_s^n(\varphi). \end{aligned}$$ The solution theory of the PAM gives, for any $\varphi \in C^{1+2\kappa}(\mathbb{Z}_n^2,e(l))$, the bound $$\|T_s^n(\varphi)\|_{C_TC^{1-\kappa}(\mathbb{Z}^2_n,e(l+\cdot))} \lesssim \|\varphi\|_{C^{1+2\kappa}(\mathbb{Z}_n^2,e(l))},$$ and, by Lemma 3.10 in [@martin2019paracontrolled], $$\begin{aligned} &\left\|\int_0^sT^n_{s-r}\left(\frac{2\xi_+^n\epsilon^\beta}{1+\beta}T_r^n(\varphi)^{1+\beta}\right)dr\right\|_{C_TC^{1-\kappa}(\mathbb{Z}_n^2,e(\hat{l}+\cdot))} \\ \lesssim& \left\|\frac{2\xi_+^n\epsilon^\beta}{1+\beta}\right\|_{C^{-\kappa}(\mathbb{Z}_n^2,p(b))}\left\|T_t^n(\varphi)^{1+\beta}\right\|_{C^{1-\kappa}(\mathbb{Z}_n^2,e(\hat{l}+\cdot))}, \end{aligned}$$ where we choose $\hat{l} = ((1+\beta)l)\vee l$. This yields the bound $$I_t^n(\varphi) \lesssim \langle\mu_0^n,e(\hat{l}+t)\|\varphi\|\rangle^{1+\beta} + \langle\mu_0^n, e(\hat{l}+t)\epsilon^\beta\|\varphi\|^{1+\beta}\rangle,$$ where $\|\varphi\|$ denotes $\|\varphi\|_{C^{1+2\kappa}(\mathbb{Z}_n^2,e(l))}$. Thus we have $$\mathbb{E}[\langle\mu_t^n,\varphi\rangle^{1+\theta}] \lesssim 1 +(\langle\mu_0^n,e(\hat{l}+t)\rangle^{1+\beta} + \langle\mu_0^n,e(\hat{l}+t)\rangle)\|\varphi\|^{1+\beta},$$ for all non-negative $\varphi \in C^{1+2\kappa}(\mathbb{Z}_n^2,e(l))$ or $\varphi \in \mathcal{D}^{1-\kappa}(\mathbb{Z}_n^2,e(l))$.
In particular, if we choose $\varphi = e^{l\rho(x)}$, where $\rho(x)$ is a slight modification of $|x|^\sigma$ so that $\varphi \in C^{1+2\kappa}(\mathbb{Z}_n^2,e(l))$, we see that the quantity $$\mathbb{E}\left[\langle \mu^n_t,e(l)\rangle^{1+\theta}\right]$$ is finite uniformly over $n$ for all $l\in \mathbb{R}$. It is then easy to see that if $\|\varphi\|_{L^\infty(\mathbb{Z}_n^2,e(l))}$ is finite, then we must have $$\mathbb{E}\left[|\langle \mu^n_t,\varphi\rangle|^{1+\theta}\right] \leq 2\|\varphi\|_{L^\infty(\mathbb{Z}_n^2,e(l))}^{1+\theta}\mathbb{E}[\langle\mu^n_t,e(l)\rangle^{1+\theta}] < \infty,$$ uniformly over $n$. **2.** For the moment estimate of the supremum, we have to use the branching random walks $\tilde{\mu}_t^{n,i}$. Note first that, by the Laplace functional of $\tilde{\mu}_t^n$, the results of the first step hold for $\tilde{\mu}_t^n$ as well. Furthermore, by the proof of Lemma [Lemma 29](#lem.martingale){reference-type="ref" reference="lem.martingale"}, we know that $$M_t := \langle\tilde{\mu}_t^n,\varphi\rangle - \langle\epsilon Z_0^n,\varphi\rangle - \int_0^t \langle\tilde{\mu}_s^n,\Delta^n\varphi\rangle ds$$ is a martingale if $\Delta^n\varphi \in L^\infty(\mathbb{Z}_n^2,e(l))$ for some $l\in\mathbb{R}$. Thus, by Doob's inequality, we have the bound $$\mathbb{E}\left[\sup_{0\leq s\leq t}\langle\tilde{\mu}_s^n,\varphi\rangle^{1+\theta}\right] \lesssim \mathbb{E}[|M_t|^{1+\theta}] + \mathbb{E}\left[\int_0^t|\langle\tilde{\mu}_s^n,\Delta^n\varphi\rangle|^{1+\theta}ds\right] + \langle\mu_0,\varphi\rangle + \langle\mu_0,\varphi\rangle^2.$$ Then, by the coupling Lemma [Lemma 28](#lem.coupling){reference-type="ref" reference="lem.coupling"}, we know $$\mathbb{E}\left[\sup_{0\leq s\leq t}\langle\mu_s^n,\varphi\rangle^{1+\theta}\right] \lesssim_{N(\beta)} \sum_{i=1}^{N(\beta)}\mathbb{E}\left[\sup_{0\leq s\leq t}\langle\tilde{\mu}_s^{n,i},\varphi\rangle^{1+\theta}\right],$$ which gives the desired result.
◻ We are now able to show the convergence of the particle system to the superprocess. *Proof of Theorem [Theorem 26](#thm.existence_laplace){reference-type="ref" reference="thm.existence_laplace"}.* Suppose $\varphi \in C^{2}(\mathbb{R}^2,e(l))$ for some $l\in\mathbb{R}$, and consider $Y_t^n = \langle\mu_t^n,\varphi\rangle$. To prove the tightness of $(\mu_t^n)_n$, we apply Jakubowski's criterion. By Lemma [Lemma 30](#lem.moment_est){reference-type="ref" reference="lem.moment_est"}, the moments of $$\langle\mu_t^n, 1 + |\cdot|^2\rangle$$ are uniformly bounded over $n$. Since for any $K$ the set $\{\mu:\langle\mu,1+|\cdot|^2\rangle < K\}$ is compact, the compact containment condition is satisfied by $\mu_t^n$. Secondly, we apply Aldous's criterion: we have to prove that $$Y^n_{\tau_n+\delta_n} - Y^n_{\tau_n}\rightarrow0$$ in distribution, where the $\tau_n$ are stopping times and the $\delta_n$ decrease to $0$. The moment bound shows that $\{Y^n_{\tau_n+\delta_n},Y^n_{\tau_n}\}$ are tight. It then suffices to consider the difference of Laplace transforms $$\mathbb{E}\left[e^{-sY^n_{\tau_n+\delta_n}-tY^n_{\tau_n}}\right] - \mathbb{E}\left[e^{-(s+t)Y^n_{\tau_n}}\right].$$ Taking the conditional expectation with respect to $\mu_{\tau_n}^n$, we need a formula for $\mathbb{E}[e^{-\langle\mu_{\tau_n+\delta_n}^n,\varphi\rangle}|\mu_{\tau_n}^n]$.
By Lemma [Lemma 45](#lem.laplace_equation_formula){reference-type="ref" reference="lem.laplace_equation_formula"} and Lemma [Lemma 46](#lem.laplace_equi){reference-type="ref" reference="lem.laplace_equi"}, we know $$\mathbb{E}[e^{-\langle\mu_{\tau_n+\delta_n}^n,\varphi\rangle}|Z_{\tau_n}^n = \delta_x] = 1 -\epsilon U_{\delta_n}^n(\varphi)(x).$$ We then obtain that $$\mathbb{E}[e^{-\langle\mu_{\tau_n+\delta_n}^n,\varphi\rangle}|\mu_{\tau_n}^n] = \mathbb{E}[e^{-\langle Z_{\tau_n+\delta_n}^n,\epsilon\varphi\rangle}|Z_{\tau_n}^n] = e^{-\langle Z_{\tau_n}^n,-\log(1-\epsilon U_{\delta_n}^n(\varphi))\rangle}.$$ Thus we have $$\begin{aligned} &\mathbb{E}\left[e^{-sY^n_{\tau_n+\delta_n}-tY^n_{\tau_n}}\right] - \mathbb{E}\left[e^{-(s+t)Y^n_{\tau_n}}\right] \\ =& \quad\mathbb{E}\left[e^{-\langle \mu_{\tau_n}^n,-\frac{1}{\epsilon}\log(1-\epsilon U^n_{\delta_n}(s\varphi)) + t\varphi\rangle}\right] - \mathbb{E}\left[e^{-\langle \mu_{\tau_n}^n,(t+s)\varphi\rangle}\right]\\ \lesssim & \quad \mathbb{E}\left[\left\langle\mu_{\tau_n}^n,\left|-\frac{1}{\epsilon}\log(1-\epsilon U^n_{\delta_n}(s\varphi)) - s\varphi\right|\right\rangle\right]\\ \lesssim &\quad\mathbb{E}\left[\sup_{0\leq s\leq t}\langle\mu_{s}^n,1\rangle\right]\left\|-\frac{1}{\epsilon}\log(1-\epsilon U^n_{\delta_n}(s\varphi)) - s\varphi\right\|_{L^\infty(\mathbb{Z}_n^2)}\\ \lesssim &\quad \left\|-\frac{1}{\epsilon}\log(1-\epsilon U^n_{\delta_n}(s\varphi)) - U_{\delta_n}^n(s\varphi)\right\|_{L^{\infty}(\mathbb{Z}_n^2)} + \|U_{\delta_n}^n(s\varphi) - s\varphi\|_{L^\infty(\mathbb{Z}_n^2)}, \end{aligned}$$ which goes to zero as $n\rightarrow\infty$ by the SPDE estimates and Lemma [Lemma 47](#lem.auxiliary){reference-type="ref" reference="lem.auxiliary"}. This tells us that we have the limit $$\left(Y_{\tau_n+\delta_n}^n , Y_{\tau_n}^n\right)\rightarrow (Y_\infty,Y_\infty)$$ in distribution, and thus any linear combination of these vectors also converges in distribution.
In particular, we have $$Y_{\tau_n+\delta_n}^n - Y_{\tau_n}^n \rightarrow 0$$ in distribution. Thus we obtain the tightness of the processes $Y_t^n$. Since the family $C_c^\infty(\mathbb{R}^2)$ separates points in $\mathcal{M}_F(\mathbb{R}^2)$, Jakubowski's criterion gives the tightness of $\mu^n$ in $\mathbb{D}([0,\infty),\mathcal{M}_F(\mathbb{R}^2))$. The discussion of the SPDEs in Section [4](#sec.variant){reference-type="ref" reference="sec.variant"} and the tightness of $Y_t$ tell us that the limit process $\mu_t$ has the Laplace functional stated in the theorem. For $\varrho < \beta$, the limit $\mu_t$ satisfies the Laplace functional $$\mathbb{E}[e^{-\langle\mu_t,\varphi\rangle}] = e^{-\langle\mu_0,T_t(\varphi)\rangle} = e^{-\langle T_t\mu_0,\varphi\rangle},$$ which implies that $\mu_t = T_t\mu_0$. ◻ ## For the general $\varkappa >\frac{2}{1+\beta}$ We use the mixing procedure of [@perkowski2021rough]. **Definition 31**. *Fix a compactly supported measure $\mu^n_0$ on $\mathbb{Z}_n^2$. Let $\epsilon = n^{-\frac{1}{\beta}}$ and $c > 0$. We define an $E^n$-valued stochastic process $Z^n(t)$, started from a Poisson random measure on $\mathbb{Z}_n^2$ with intensity $\frac{\mu^n_0}{\epsilon}$, with infinitesimal generator $$\mathcal{L}^n F(\eta) = \sum_{x\in\mathbb{Z}_n^2} \eta_x\left[\Delta_x^n F(\eta) + |\xi^n|\sum_{k=0}^\infty p_kq_k(x)d_x^k F(\eta) + c|\xi^n|\sum_{k=0}^\infty p_kd_x^k F(\eta)\right]$$ for any $F \in \mathcal{D}(\mathcal{L}^n)$, which consists of all $F\in C_b(E^n)$ such that the right-hand side of [\[equ.generator\]](#equ.generator){reference-type="eqref" reference="equ.generator"} is finite.
Finally, set $\mu^n(t) = \epsilon Z^n(t)$ for all $t\in[0,\infty)$.* It follows that the branching rate of the above branching random walk is $$(1+c)|\xi^n|$$ and the branching mechanism is given by $$\frac{g^n + cg}{1+c}.$$ It is then clear that we can still dominate the branching mechanism as before and perform exactly the same procedure as in the case $\varkappa < \frac{2}{1+\beta}$. The final parameter of the $\beta$-rough super Brownian motion given by this mixture will be $$\varkappa = \frac{2\nu}{1+\beta} + \frac{2\nu c}{1+\beta}.$$ This finishes the proof of Theorem [Theorem 8](#thm.existence_RSBM){reference-type="ref" reference="thm.existence_RSBM"}. **Remark 32**. *For $\alpha > \frac{1}{2}$, we can replace the Laplacian with the generator of the $\alpha$-stable process and obtain the convergence and existence of the $(\alpha,d,\beta)$-superprocess.* # Martingale problem {#sec.martingale_problem} This section is devoted to proving the equivalence of the three definitions of the $\beta$-rough super Brownian motion. We start with the first direction: from the Laplace functional to the martingale problem, following the method in the book [@dawson1993measure]. We begin with a lemma from [@ethier2009markov] (Corollary 3.3 in Chapter 2). **Lemma 33**. *Let $X,Y$ be real-valued, adapted right continuous processes. Suppose that for each $t$, $\inf_{s\leq t} X_s > 0$. Then $$X_t - \int_0^t Y_sds$$ is a local martingale if and only if $$X_t \exp\left(-\int_0^t \frac{Y_s}{X_s}ds\right)$$ is a local martingale.* **Lemma 34**.
*Suppose $\mu$ satisfies the first definition in Definition [Definition 5](#def.beta_RSBM){reference-type="ref" reference="def.beta_RSBM"}. Then, for all non-negative $\phi \in \mathcal{D}_\mathcal{H}$, the process $$N_t(\phi):=e^{-\langle\mu_t,\phi\rangle + \int_0^t\langle\mu_s,\mathcal{H}\phi - \varkappa\phi^{1+\beta}\rangle ds}$$ is a local $\mathcal{F}$-martingale.* *Proof.* Taking $B \in \mathcal{F}_s$, we check $$\mathbb{E}\left[(e^{-\langle\mu_t,\phi\rangle}-e^{-\langle\mu_s,\phi\rangle})\mathbbm{1}_B \right] = \mathbb{E}\left[ -\int_s^t\langle\mu_r,\mathcal{H}\phi - \varkappa\phi^{1+\beta}\rangle e^{-\langle\mu_r,\phi\rangle}dr\mathbbm{1}_B\right].$$ It is sufficient to prove that for $t\geq s$, $$\frac{d}{dt}\mathbb{E}\left[e^{-\langle\mu_t,\phi\rangle}\mathbbm{1}_B \right] = \mathbb{E}\left[-\langle\mu_t,\mathcal{H}\phi - \varkappa\phi^{1+\beta}\rangle e^{-\langle\mu_t,\phi\rangle}\mathbbm{1}_B\right].$$ By the definition, we have $$\begin{aligned} \frac{d}{dt}\mathbb{E}\left[e^{-\langle\mu_t,\phi\rangle}\mathbbm{1}_B \right] &= \lim_{\epsilon\rightarrow 0}\frac{1}{\epsilon}\mathbb{E}\left[\left(e^{-\langle\mu_{t+\epsilon},\phi\rangle} - e^{-\langle\mu_t,\phi\rangle}\right)\mathbbm{1}_B\right]\\ &= \lim_{\epsilon\rightarrow0}\frac{1}{\epsilon}\mathbb{E}\left[\mathbbm{1}_B e^{-\langle\mu_t,\phi\rangle}\left(e^{-\langle\mu_t,\epsilon\frac{U_\epsilon(\phi) -\phi}{\epsilon}\rangle}-1\right)\right]. \end{aligned}$$ Now, in order to apply the dominated convergence theorem, we have to verify that $$\left\|\frac{U_{\epsilon}(\phi) - \phi}{\epsilon}\right\|_{L^\infty(\mathbb{R}^2,e(l))}$$ is uniformly bounded for some $l\in\mathbb{R}$. By the definition of $\mathcal{D}_\mathcal{H}$, there exists $\varphi \in C^{1+2\kappa}(\mathbb{R}^2,e(l_0))$ such that $\phi = \int_0^t T_s\varphi ds$.
Then $$U_t(\phi) = T_t(\phi) - \varkappa\int_0^t T_{t-s}(U_s(\phi))^{1+\beta}ds,$$ which implies that $$\begin{aligned} \frac{1}{\epsilon}(U_\epsilon(\phi) - \phi) &= \frac{1}{\epsilon}\left(T_\epsilon(\phi)-\phi - \varkappa\int_0^\epsilon T_{\epsilon-s}(U_s(\phi))^{1+\beta}ds\right) \\ &= \frac{1}{\epsilon}\left(\int_t^{t+\epsilon} T_s\varphi ds - \int_0^\epsilon T_s\varphi ds -\varkappa\int_0^\epsilon T_{\epsilon-s}(U_s(\phi))^{1+\beta}ds \right). \end{aligned}$$ Thus we have $$\frac{1}{\epsilon}\|U_\epsilon(\phi) - \phi\|_{L^\infty(\mathbb{R}^2,e(l))} \lesssim 1 + \|\varphi\|^{1+\beta}_{C^{1+2\kappa}(\mathbb{R}^2,e(l_0))},$$ for $l > 1 + ((1+\beta)l_0)\vee l_0\vee (l_0 + t)$, uniformly over $\epsilon$. In addition, by the continuity of $T_{\cdot}\varphi$ and $U_{\cdot}\varphi$, we have $$\lim_{\epsilon\rightarrow 0} \frac{1}{\epsilon}(U_\epsilon(\phi) - \phi) = T_t\varphi - \varphi - \varkappa\phi^{1+\beta} = \mathcal{H}\phi - \varkappa\phi^{1+\beta}.$$ Thus the dominated convergence theorem applies, and we conclude that $$e^{-\langle\mu_t,\phi\rangle} + \int_0^t \langle\mu_r,\mathcal{H}\phi-\varkappa\phi^{1+\beta}\rangle e^{-\langle\mu_r,\phi\rangle}dr$$ is a local martingale. Lemma [Lemma 33](#lem.martingale_trans){reference-type="ref" reference="lem.martingale_trans"} then shows that $N_t$ is a local martingale. ◻ We are now ready to prove Theorem [Theorem 7](#thm.equivalent_def){reference-type="ref" reference="thm.equivalent_def"}. *Proof of Theorem [Theorem 7](#thm.equivalent_def){reference-type="ref" reference="thm.equivalent_def"}.* **1.$\Rightarrow$ 3.** First, since the Laplace functional uniquely determines the finite-dimensional law of the stochastic process, the $(1+\theta)$-moment estimate of $\sup_{0\leq s\leq t}\langle\mu_s,\phi\rangle$ for any $\phi\in L^\infty(\mathbb{R}^2,e(l))$, $l\in\mathbb{R}$, and $0 \leq \theta < \beta$ follows from the first definition.
We therefore have the $(1+\theta)$-moment boundedness of $L_t^\phi$ for any $\phi\in\mathcal{D}_\mathcal{H}$. By repeating the argument in [@dawson1993measure], we can obtain the martingale problem for $\mu$ from definition 1. Since the argument is quite long, we give only a sketch here and refer interested readers to Chapter 6.1 in [@dawson1993measure]. First we choose a non-negative function $\phi\in\mathcal{D}_\mathcal{H}$. Let $$Y_t(\phi) = e^{-\langle\mu_t,\phi\rangle},\qquad H_t(\phi) = e^{-\int_0^t\langle\mu_s,\mathcal{H}\phi-\varkappa\phi^{1+\beta}\rangle ds},$$ so that $Y_t(\phi) = H_t(\phi)N_t(\phi)$ is a strictly positive semi-martingale. Itô's formula gives the representation $$\label{equ.representation1} \begin{aligned} dY_t(\phi) &= H_t(\phi)dN_t(\phi) + N_{t-}(\phi)dH_t(\phi) \\ &= H_t(\phi)dN_t(\phi) - Y_{t-}(\phi)\langle\mu_t,\mathcal{H}\phi-\varkappa\phi^{1+\beta}\rangle dt \end{aligned}.$$ Again by Itô's formula, the process $\langle\mu_t,\phi\rangle = -\log Y_t(\phi)$ is again a semi-martingale. Now let $\mathcal{N}(ds,d\nu)$ be the adapted random point measure on $\mathbb{R}_+\times \mathcal{M}_F(\mathbb{R}^2)$ given by $\sum \delta_{(s,\mu_s - \mu_{s-})}$; let $\widehat{\mathcal{N}}$ denote its compensator and $\widetilde{\mathcal{N}}$ the corresponding martingale measure. See Appendix [8](#app.random){reference-type="ref" reference="app.random"} for more details on random point measures.
The unique canonical decomposition of semi-martingales (Theorem [Theorem 42](#thm.semi_decomposition){reference-type="ref" reference="thm.semi_decomposition"}) gives that $$\langle\mu_t,\phi\rangle = \langle\mu_0,\phi\rangle + B'_t(\phi) + M_t^C(\phi) + \widetilde{\mathcal{N}}_t(\phi) + \mathcal{N}_t(\phi),$$ where $B_t'(\phi)$ is predictable, continuous and locally of bounded variation, $M_t^C(\phi)$ is a continuous local martingale with predictable increasing process $C_t(\phi)$, and $$\widetilde{\mathcal{N}}_t(\phi) :=\int_0^t\int_{\mathcal{M}_F^{\pm}(\mathbb{R}^2)}\mathbbm{1}_{|\langle\nu,\phi\rangle|\leq \|\phi\|}\langle\nu,\phi\rangle\widetilde{\mathcal{N}}(ds,d\nu),$$ $$\mathcal{N}_t(\phi) :=\int_0^t\int_{\mathcal{M}_F^{\pm}(\mathbb{R}^2)}\mathbbm{1}_{|\langle\nu,\phi\rangle|> \|\phi\|}\langle\nu,\phi\rangle\mathcal{N}(ds,d\nu).$$ The uniqueness of the decomposition gives the relations $$B'_t(\theta\phi) = \theta B'_t(\phi), \quad M_t^C(\theta\phi) = \theta M_t^C(\phi),\quad C_t(\theta\phi) = \theta^2C_t(\phi),$$ for any $\theta > 0$.
Applying Itô's formula (Lemma [Lemma 43](#lem.Ito){reference-type="ref" reference="lem.Ito"}) to $Y_t$, we obtain the second representation $$\label{equ.representation2} \begin{aligned} dY_t(\phi) =& Y_{t-}(\phi)\left(-dB_t(\phi) + \frac{1}{2}dC_t(\phi) + \int_{\mathcal{M}_F^{\pm}(\mathbb{R}^2)}(e^{-\langle\nu,\phi\rangle} - 1+ \langle\nu,\phi\rangle)\widehat{\mathcal{N}}(dt,d\nu)\right)\\ &+ d(\text{local martingale}), \end{aligned}$$ where $$B_t(\phi) = B'_t(\phi) + \int_{\mathcal{M}_F^{\pm}(\mathbb{R}^2)}\langle\nu,\phi\rangle \mathbbm{1}_{|\langle\nu,\phi\rangle|>\|\phi\|}\widehat{\mathcal{N}}(dt,d\nu).$$ Now, since $Y_t$ is a special semi-martingale, the unique decomposition lemma (Lemma [Lemma 44](#lem.sp_decomposition){reference-type="ref" reference="lem.sp_decomposition"}) can be used to identify the predictable, locally bounded variation parts of the two representations [\[equ.representation1\]](#equ.representation1){reference-type="eqref" reference="equ.representation1"} and [\[equ.representation2\]](#equ.representation2){reference-type="eqref" reference="equ.representation2"}; we obtain $$\langle\mu_t,\mathcal{H}\phi - \varkappa\phi^{1+\beta}\rangle dt = dB_t(\phi) - \frac{1}{2}dC_t(\phi) - \int_{\mathcal{M}_F^{\pm}(\mathbb{R}^2)} (e^{-\langle\nu,\phi\rangle} - 1+ \langle\nu,\phi\rangle)\widehat{\mathcal{N}}(dt,d\nu).$$ Replacing $\phi$ with $\theta\phi$ and comparing the coefficients of the different powers of $\theta$, we obtain $\int_0^t\langle\mu_s,\mathcal{H}\phi\rangle ds= B_t(\phi)$, $C_t(\phi) = 0$ and $$\begin{aligned} &\int_{\mathcal{M}_F^{\pm}(\mathbb{R}^2)} (e^{-\langle\nu,\phi\rangle} - 1+ \langle\nu,\phi\rangle)\widehat{\mathcal{N}}(dt,d\nu) \\ =& \langle\mu_t,\varkappa\phi^{1+\beta}\rangle dt = \int_0^\infty\frac{\varkappa\beta(1+\beta)}{\Gamma(1-\beta)u^{\beta+2}}\langle\mu_t,e^{-u\phi} - 1 + u\phi\rangle du\, dt, \end{aligned}$$ where $\Gamma(1-\beta)$ is the Gamma function.
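The normalisation by $\Gamma(1-\beta)$ here rests on the identity $\int_0^\infty u^{-\beta-2}\left(e^{-\lambda u}-1+\lambda u\right)du = \frac{\Gamma(1-\beta)}{\beta(1+\beta)}\lambda^{1+\beta}$ for $0<\beta<1$, which can also be checked numerically. The following Python sketch is only a sanity check; the truncation bounds `lo`, `hi` and grid size `n` are ad-hoc accuracy choices, not part of the argument:

```python
import math

def levy_moment(lam, beta, n=300_000, lo=1e-10, hi=1e10):
    """Trapezoidal rule on a logarithmic grid for
        int_0^inf u^(-beta-2) * (e^(-lam*u) - 1 + lam*u) du,   0 < beta < 1.
    expm1 avoids the catastrophic cancellation of e^(-x) - 1 + x for small x;
    the truncation bounds lo, hi and grid size n are ad-hoc accuracy choices."""
    step = math.log(hi / lo) / n
    total = 0.0
    for i in range(n + 1):
        u = lo * math.exp(i * step)
        f = (math.expm1(-lam * u) + lam * u) * u ** (-beta - 2.0)
        w = 0.5 if i in (0, n) else 1.0        # trapezoid endpoint weights
        total += w * f * u                     # extra u from du = u d(log u)
    return total * step

# Compare against Gamma(1-beta) * lam^(1+beta) / (beta*(1+beta)):
# e.g. for lam = 1, beta = 1/2 both sides equal sqrt(pi)/0.75, roughly 2.363.
```

Multiplying the integral by $\frac{\beta(1+\beta)}{\Gamma(1-\beta)}$ recovers $\lambda^{1+\beta}$, which is exactly the normalisation used for $\widehat{\mathcal{N}}$.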
This shows that $$\widehat{\mathcal{N}}(dt,d\nu) = \frac{\varkappa\beta(1+\beta)}{\Gamma(1-\beta)u^{\beta+2}} dt\,du\,\mu_t(dx)\delta_{u\delta_x}(d\nu).$$ Now, for general $\phi\in\mathcal{D}_\mathcal{H}$, we can decompose $\phi = \phi_+ - \phi_-$ and see that $\langle\mu_t,\phi\rangle$ is again a semi-martingale. Then, by Itô's formula, $$\begin{aligned} \mathbb{M}_t^f :=& f(\langle{\mu_t,\phi}\rangle) - f(\langle\mu_0,\phi\rangle) - \int_0^t f'(\langle\mu_s,\phi\rangle)\langle\mu_s,\mathcal{H}\phi\rangle ds -\frac{\varkappa\beta(1+\beta)}{\Gamma(1-\beta)}\int_0^t\\ &\left\langle\mu_s,\int_0^\infty\frac{1}{u^{\beta+2}}(f(\langle\mu_s,\phi\rangle + u\phi(\cdot)) - f(\langle\mu_s,\phi\rangle) - f'(\langle\mu_s,\phi\rangle)u\phi(\cdot))du\right\rangle ds \end{aligned}$$ is a purely discontinuous local martingale for all $\phi \in \mathcal{D}_\mathcal{H}$. Comparing with the formula given by the compensator of $\langle\mu_t,\phi\rangle$ on $\mathbb{R}$, we obtain the result. **3.$\Rightarrow$ 2.** The proof is very similar to the one in [@perkowski2021rough]; however, we have to be careful with non-square-integrable martingales. To start, fix $l<0$. From Lemma [Lemma 48](#lem.lower_bounds){reference-type="ref" reference="lem.lower_bounds"}, we know there exists a non-negative function $\tilde{\phi} \in \mathcal{D}_\mathcal{H}$ such that $\frac{\tilde{\phi}}{e(l)}$ is bounded below by a small positive number. We choose stopping times $T_n$ associated with $\langle\mu_s,\tilde{\phi}\rangle$, describing the $n$-th jump time with jump size bigger than $1$.
Then the formula $$\mathbb{E}\left[\sum_{0\leq r\leq T_n}\mathbbm{1}_{\langle\mu_r-\mu_{r-},\tilde{\phi}\rangle > 1}\right] = \mathbb{E}\left[\int_0^{T_n} \int_0^\infty \mathbbm{1}_{u>1}\langle\mu_r,\varkappa\tilde{\phi}^{1+\beta}\rangle\frac{\beta(1+\beta)}{\Gamma(1-\beta)u^{\beta+2}}dudr\right]$$ gives that $$\mathbb{E}\left[\int_0^{T_n}\langle\mu_r,\varkappa\tilde{\phi}^{1+\beta}\rangle dr\right] < \infty,$$ which implies that $\mathbb{E}\left[\int_0^{T_n}\langle\mu_r,e(l)\rangle dr\right] < \infty$ by the assumption on $\tilde{\phi}$. Then, by the moment bounds on $L_t^{\tilde{\phi}}$ and Doob's maximal inequality, we actually have $$\label{equ.supre_first_moment} \mathbb{E}\left[\sup_{0\leq r\leq T_n}\langle\mu_{r},e(l)\rangle\right] < \infty.$$ We will assume that $\mathbb{E}\left[\sup_{0\leq r\leq s}\langle\mu_{r},e(l)\rangle\right] < \infty$ and first derive the local martingale property in definition 2. Now consider a piecewise function $f:[0,t]\rightarrow\mathcal{D}_\mathcal{H}$ with weight $e(l-t+\cdot)$ and $\varphi_0\in\mathcal{D}_\mathcal{H}$ with weight $e(l-t)$. Let $\varphi_t$ be the solution to the equation $$\partial_s\varphi_t(s) + \mathcal{H}\varphi_t(s) = f(s), \quad s\in[0,t],\quad \varphi_t(t) = \varphi_0,$$ which is given by $\varphi_t(s) = T_{t-s}\varphi_0 + \int_s^tT_{r-s}f(r)dr$. By assumption we have $\varphi_t(s) \in \mathcal{D}_\mathcal{H}$ for all $s\leq t$. For two time steps $t_1 < t_2$ and a partition $\pi$ of $[t_1,t_2]$, we have $$\label{equ.martingale_in} \begin{aligned} &\mathbb{E}\left[\langle\mu_{t_2},\varphi_t(t_2)\rangle - \langle\mu_{t_1},\varphi_{t}(t_1)\rangle|\mathcal{F}_{t_1}\right] \\ =& \mathbb{E}\left[\int_{t_1}^{t_2}\langle\mu_{s'},f(s)-\mathcal{H}\varphi_t(s)\rangle+\langle \mu_s,\mathcal{H}\varphi_t(s'')\rangle ds|\mathcal{F}_{t_1}\right], \end{aligned}$$ where $s'$ is the smallest partition point in $\pi$ bigger than $s$ and $s''$ is the largest partition point in $\pi$ smaller than $s$.
The right-continuity of $\mu_t$ in $\mathcal{M}_F(\mathbb{R}^2)$ shows that $\lim_{s'\rightarrow s ,s' >s}\langle\mu_{s'},(f(s) - \mathcal{H}\varphi_t(s))\rangle = \langle\mu_{s},(f(s) - \mathcal{H}\varphi_t(s))\rangle$; together with [\[equ.supre_first_moment\]](#equ.supre_first_moment){reference-type="eqref" reference="equ.supre_first_moment"}, we can send the mesh of the partition of $[t_1,t_2]$ to zero to get the limit of the first term of [\[equ.martingale_in\]](#equ.martingale_in){reference-type="eqref" reference="equ.martingale_in"}: $$\mathbb{E}\left[\int_{t_1}^{t_2}\langle\mu_s,f(s)-\mathcal{H}\varphi_t(s)\rangle ds|\mathcal{F}_{t_1}\right].$$ It remains to check that $$\lim_{s'' \rightarrow s, s'' < s}\mathbb{E}[\langle\mu_s,\mathcal{H}\varphi_t(s'') - \mathcal{H}\varphi_t(s)\rangle|\mathcal{F}_{t_1}] = 0.$$ By the definition of $\varphi_t(s)$, we have $$\begin{aligned} &|\mathcal{H}\varphi_t(s'') - \mathcal{H}\varphi_t(s)| \\ =& \left|T_{t-s''}\mathcal{H}\varphi_0 - T_{t-s}\mathcal{H}\varphi_0 + \int_s^tT_{r-s''}f(r)-T_{r-s}f(r)dr + \int_{s''}^sT_{r-s''}f(r)dr\right|\\ &\lesssim (s-s'')^{\frac{1-\kappa}{2}}e(l). \end{aligned}$$ This shows that $\mathbb{M}_t^{\varphi_0,f}$ is a local martingale, and the density of $\mathcal{D}_\mathcal{H}$ gives the result. We will see that it is actually a martingale when we show the direction 2.$\Rightarrow$1., since 1. implies the moment estimates of $\langle\mu_t,\phi\rangle$. From the compensator of $L_t^\phi$, we can obtain the compensator of the random point measure $\mathcal{N}$ defined in the first step. Now a purely discontinuous martingale is determined by its jump part, and the jumps of $\mathbb{M}_t^{\varphi_0,f}$ are given by $\sum_{0\leq s\leq t} \langle\mu_s-\mu_{s-},\varphi_t(s)\rangle$.
Then from the compensator of $\mathcal{N}$, we know that the compensator of $\mathbb{M}_t^{\varphi_0,f}$ should be given by $$\langle\mu_s,\varkappa\varphi_t(s)^{1+\beta}\rangle\frac{\beta(1+\beta)}{\Gamma(1-\beta)x^{\beta+2}}\mathbbm{1}_{x>0}dsdx.$$ **2.$\Rightarrow$ 1.** We apply Itô's formula to the semi-martingale $\langle\mu_s,\varphi_t(s)\rangle$ and obtain $$\begin{aligned} &f(\langle\mu_s,\varphi_t(s)\rangle) \\ =& f(\langle\mu_0,\varphi_t(0)\rangle) + \int_0^sf'(\langle\mu_r,\varphi_t(r)\rangle)\langle\mu_r,f(r)\rangle dr +\int_0^s\int_0^\infty \langle\mu_r,\varkappa\varphi_t(r)^{1+\beta}\rangle\\ & \frac{\beta(1+\beta)}{\Gamma(1-\beta)u^{\beta+2}}(f(\langle\mu_{r-},\varphi_t(r)\rangle+u) - f(\langle\mu_{r-},\varphi_t(r)\rangle) - f'(\langle\mu_{r-},\varphi_t(r)\rangle)u)dudr\\ &+ \text{local martingale}. \end{aligned}$$ Choose $f(x) = e^{-x}$ and $f(s) = -\varkappa (U_{t-s}\varphi_0)^{1+\beta}$; then $\varphi_t(s) = U_{t-s}\varphi_0$ and $$e^{-\langle\mu_s,U_{t-s}\varphi_0\rangle} - e^{-\langle\mu_0,U_t\varphi_0\rangle}$$ is a bounded local martingale. Thus it is a martingale, which gives the result. ◻ # Properties of RSBM In this section, we discuss the compact support property and the super-exponential persistence of the $\beta$-super Brownian motion. The compact support property for the rough super Brownian motion with $\beta = 1$ has been discussed in [@jin2023compact], and here we give a sketch of how the method in [@jin2023compact] can be applied for $0 < \beta < 1$. Apart from the assumption [Assumption 1](#ass.environment){reference-type="ref" reference="ass.environment"}, we also need the bound $$\sup_{x\in \mathbb{R}^2} p(a)(x)^{-1}\int_{\mathbb{R}^2}[(\mathcal{I}(\mathcal{E}^n\xi^n)(y) - \mathcal{I}(\mathcal{E}^n\xi^n)(x))(\mathcal{E}^n\xi^n)(x) - c_n]\Psi^\delta(y-x)dy \lesssim \delta^{1-\kappa'}$$ for any $a > 0,0<\delta<1$, with $\Psi^\delta$ defined in [@jin2023compact].
Let $U_t^\phi\varphi$ be the solution to the equation $$\label{equ.inhomo_nonlinear_PAM_RSBM} \left\{ \begin{array}{ll} \partial_t w_t = \mathcal{H}w_t - w_t^{1+\beta} + \phi, & \text{ in }(0,\infty)\times\mathbb{R}^2,\\ w_0 = \varphi, & \text{ on }\mathbb{R}^2. \end{array} \right.$$ Then we have **Lemma 35**. *For any function $\varphi\in C^{1+2\kappa}(\mathbb{R}^2,e(l))$ and $\phi \in C^{-1+2\kappa}(\mathbb{R}^2,e(l))$ for some $l\in\mathbb{R}$, the process $$N_t(s) = e^{-\langle\mu_s,U_{t-s}^\phi\varphi\rangle - \int_0^s\langle\mu_r,\phi\rangle dr}$$ is an $\mathcal{F}$-martingale. In particular, $$\mathbb{E}\left[e^{-\int_0^t \langle\mu_r,\phi\rangle dr}\right] = e^{-\langle\mu_0,U_t^\phi0\rangle}.$$* *Proof.* Take $f(x) = e^{-x}$ and $\varphi_t(s) = U_{t-s}^\phi\varphi$ in the Itô formula for $\langle\mu_s,\varphi_t(s)\rangle$ and then apply lemma [Lemma 33](#lem.martingale_trans){reference-type="ref" reference="lem.martingale_trans"} to get the result. ◻ As discussed in [@jin2023compact], let $\mathcal{P}_{n} = [-n,n]^2$ and $\mathcal{P}_{n}^m := \left(-n-\frac{1}{m},n+\frac{1}{m}\right)^2$. Define $\phi_n^m \in C^\infty_c$ such that $$\phi_n^m \begin{cases} = 0 & \text{ on $\mathcal{P}_{n}$}, \\ = m & \text{ on $(\mathcal{P}_{n}^m)^c$},\\ \in [0,m] & \text{ elsewhere.} \end{cases}$$ The limit $$\lim_{m\rightarrow\infty}\mathbb{E}\left[e^{-\int_0^t \langle\mu_r,\phi_n^m\rangle dr}\right]$$ describes the probability that $\{\mu_s\}_{0\leq s\leq t}$ stays in the box $\mathcal{P}_{n}$, and the limit over $n$ describes the probability that the compact support property holds. The compact support property holds if we show $$\lim_{n\rightarrow\infty}\lim_{m\rightarrow\infty}(U_t^{\phi_n^m}0)\eta_n = 0,$$ uniformly over the support of $\mu_0$, for some cut-off function $\eta_n = 1$ on $\mathcal{P}_{n-1}$ and $0$ on $\mathcal{P}_{n}^c$. We give here a uniform interior estimate for $U_t^{\phi_n^m} 0$ without proof.
A similar result is proved in [@jin2023compact]; see also [@moinat2020local; @moinat2020space] for similar results for the $\Phi_3^4$ model. **Lemma 36**. *Let $n,m\in\mathbb{N}$ and $a > 0$; then we have $$\|(U_t^{\phi_n^m}0)\eta_n\|_{\mathcal{M}_TC^{1-\kappa}(\mathbb{R}^2,p(a))}\lesssim 1,$$ where the implicit constant in $\lesssim$ depends on the time horizon $T$, $\|\xi\|_{C^{-1-\kappa}(\mathbb{R}^2,p(b))}$ and $\|\mathcal{I}\xi\diamond\xi\|_{C^{-2\kappa}(\mathbb{R}^2,p(b))}$ for $b = \frac{4a}{\beta}$. Here the weight for the norm does not increase along time but remains $p(a)$.* *Proof of theorem [Theorem 12](#thm.RSBM_CSP){reference-type="ref" reference="thm.RSBM_CSP"}.* The equation for $(U_t^{\phi_n^m}0)\eta_n$ is given by $$\partial_t (U_t^{\phi_n^m}0)\eta_n = \mathcal{H}(U_t^{\phi_n^m}0)\eta_n - (U_t^{\phi_n^m}0)^{1+\beta}\eta_n + 2\nabla((U_t^{\phi_n^m}0)\nabla\eta_n) - (U_t^{\phi_n^m}0)\Delta\eta_n.$$ Theorem [Theorem 13](#thm.PAM){reference-type="ref" reference="thm.PAM"} then gives a bound on $\|(U_t^{\phi_n^m}0)\eta_n\|_{\mathcal{L}_T^{1-\kappa}(\mathbb{R}^2,e(l))}$. Thus the compact embedding theorem tells us that, up to some subsequence, $$U_t:=\lim_{n\rightarrow\infty}\lim_{m\rightarrow\infty}(U_t^{\phi_n^m}0)\eta_n$$ is the solution to the equation $$\partial_t U_t = \mathcal{H}U_t - U_t^{1+\beta},\qquad U_0 = 0.$$ Thus the uniqueness theorem tells us that $U_t = 0$, and we get the result. ◻ We now turn to the super-exponential persistence of the generalized rough super Brownian motion. Super-exponential persistence has been discussed in [@perkowski2021rough] using the approximation of the superprocess in boxes of $\mathbb{R}^2$. Here we use the result in [@perkowski2021rough], together with a coupling method, to prove theorem [Theorem 10](#thm.RSBM_SEP){reference-type="ref" reference="thm.RSBM_SEP"}. *Proof of theorem [Theorem 10](#thm.RSBM_SEP){reference-type="ref" reference="thm.RSBM_SEP"}.* We need to consider mixed super processes for the proof.
Let $\bar{\mu}^n$ be the mixed particle system with branching rate $(1+c)|\xi^n|$ and branching mechanism $$\frac{h^n + ch}{1+c},$$ where $h^n$ and $h$ are defined as $g^n$ and $g$ with $\beta = 1$. For $0 < \beta < 1$, we define a family of independent particle systems $\{\mu^{n,i}\}_{i\leq N(\beta)}$ with branching rate $(1+c)|\xi^n|$ and branching mechanism $$\frac{g^n + cg}{1+c}.$$ For any $\varphi \in C_c^\infty(\mathbb{R}^2)$, we then need to compare the two quantities $\langle\bar{\mu}^n,\varphi\rangle$ and $\sum_{i=1}^{N(\beta)}\langle\mu^{n,i},\varphi\rangle$. Let $Z_0$ be the random variable with generating function $\frac{h^n + ch}{1+c}$ and $Z_i$ be the random variables with generating function $\frac{g^n + cg}{1+c}$ for $i=1,\cdots,N(\beta)$. We need to prove $$\mathbb{P}[Z_0 \geq 2] \leq \mathbb{P}\left[\sum_i^{N(\beta)} Z_i \geq 2\right],$$ which is equivalent to showing that $$\mathbb{P}[Z_0 = 0] \geq \mathbb{P}\left[\sum_i^{N(\beta)} Z_i = 0\right] = \mathbb{P}\left[Z_1 = 0\right]^{N(\beta)}.$$ Since $c > 0$, we know $\mathbb{P}[Z_0 = 0] >0$ and $\mathbb{P}\left[Z_1 = 0\right] < 1$, so the existence of such an $N(\beta)$ is guaranteed. Thus, there exists $N(\beta) > 0$ such that $$\langle\bar{\mu}^n,\varphi\rangle<\sum_{i=1}^{N(\beta)}\langle\mu^{n,i},\varphi\rangle.$$ Now consider the events $A_0 = \{\lim_{t\rightarrow\infty}e^{-t\lambda}\langle\bar{\mu}^n_t,\varphi\rangle = \infty\}$ and $A_i = \{\lim_{t\rightarrow\infty}e^{-t\lambda}\langle\mu^{n,i}_t,\varphi\rangle = \infty\}$; then $$A_0 \subset \bigcup_{i=1}^{N(\beta)}A_i.$$ We also have $\mathbb{P}[A_i] = \mathbb{P}[A_1]$ for any $i = 1,\cdots, N(\beta)$. By the result in [@perkowski2021rough], we know $$\mathbb{P}[A_0] > 0.$$ Hence we must have $\mathbb{P}[A_1] > 0$, which gives the result. ◻ # Random point measure {#app.random} We review some basics on random point measures and purely discontinuous martingales in this and the next appendix.
We refer the reader to [@liptser2012theory] for more details. Let $(\Omega, \mathcal{F}, \mathcal{F}_t, \mathbb{P})$ be a probability space and $(E,\mathcal{E})$ be a Polish space with its Borel $\sigma$-algebra. Denote $\mathbb{R}_+ = [0,\infty)$. We give the definition of a random measure on the space $\mathbb{R}_+\times E$ here. **Definition 37**. *The family $\mu = \{\mu(\omega,dt,dx), \omega \in \Omega\}$ of $\sigma$-finite non-negative measures $\mu(\omega,\cdot)$ on $(\mathbb{R}_+\times E,\mathcal{B}(\mathbb{R}_+)\otimes \mathcal{E})$, such that for each $A \in \mathcal{B}(\mathbb{R}_+)\otimes \mathcal{E}$ the variable $\mu(\cdot,A)$ is $\mathcal{F}$-measurable and $$\mu(\omega,\{0\}\times E) = 0, \qquad \forall \omega \in \Omega,$$ is called a random measure on $\mathbb{R}_+\times E$. We call $\mu$ predictable if for any predictable random variable $X(\omega,s,x)$, the process $$\int_0^t\int_E X(\omega,s,x)\mu(\omega,ds,dx)$$ is predictable.* We then give the definition of the compensator of a random measure $\mu$. **Definition 38**. *A predictable random measure $\nu$ is called the compensator of the random measure $\mu$ if for any non-negative predictable random process $X(\omega,s,x)$, we have $$\mathbb{E}\left[\int_0^t\int_EX(s,x)\mu(dsdx)\right] = \mathbb{E}\left[\int_0^t\int_EX(s,x)\nu(dsdx)\right].$$* **Example 39**. *We will consider the random point measure $\mu$ associated to the jumps of some process $X_t$ with values in $E$. The random point measure is defined via $$\mu = \sum_{s > 0} \mathbbm{1}_{\{X_s \neq X_{s-}\}}\delta_{(s,X_{s} - X_{s-})};$$ its compensator will play an important role in Itô's formula for a purely discontinuous martingale.* Here is another example of a random measure, on the space $E$ without time. **Example 40**.
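As a numerical illustration of Example 39 and Definition 38 (our own sketch, not taken from the reference): for a compound Poisson process with jump rate $\lambda$ and i.i.d. jump sizes, the jump measure counts the jumps with sizes in a given set, and its compensator is the deterministic measure $\nu((0,t]\times C) = \lambda t\,\mathbb{P}[\text{jump size}\in C]$. All numerical parameters below are arbitrary test choices.

```python
import math
import random

# Sketch (our illustration): for a compound Poisson process X_t with jump
# rate lam and jump sizes ~ Exp(1), the jump measure mu((0,t] x C) counts
# the jumps on (0,t] with size in C; its compensator is the deterministic
# measure nu((0,t] x C) = lam * t * P(jump size in C).
random.seed(0)

def mean_jump_count(t, lam, c_low, n_paths=20000):
    """Monte Carlo estimate of E[mu((0,t] x [c_low, infinity))]."""
    total = 0
    for _ in range(n_paths):
        s = 0.0
        while True:
            s += random.expovariate(lam)          # waiting time to next jump
            if s > t:
                break
            if random.expovariate(1.0) >= c_low:  # jump size ~ Exp(1)
                total += 1
    return total / n_paths

t, lam, c_low = 2.0, 3.0, 1.0
empirical = mean_jump_count(t, lam, c_low)
compensator = lam * t * math.exp(-c_low)          # lam * t * P(Exp(1) >= c_low)
assert abs(empirical - compensator) < 0.1
```

The assertion reflects Definition 38: the expected mass that $\mu$ puts on a set agrees with the mass of its (here deterministic) compensator $\nu$.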
*A Poisson random measure $\Lambda$ on $E$ with intensity $\mu$ is a random measure with Laplace functional given by $$\mathbb{E}_\Lambda[e^{-\langle\cdot,\varphi\rangle}] = e^{-\int_{E}(1-e^{-\varphi(x)})\mu(dx)}$$ for any non-negative measurable function $\varphi: E\rightarrow\mathbb{R}$.* The Poisson random measure is a point measure whose formal definition is given by Poisson random variables describing the distribution of the number of particles in each set; we only need its Laplace characterization here, which also determines the Poisson random measure uniquely. We also need a lemma on Poisson cluster random measures. **Lemma 41**. *Suppose we have a family of laws of independent random measures $(\mathbb{P}_x)_{x\in E}$ on the space $\tilde{E}$ and a Poisson random measure $\Lambda$ on $E$ with intensity $\mu$. Then the Laplace functional of the random measure with law $\mathbb{P}_{\Lambda} : =\int_{E} \mathbb{P}_x \Lambda(dx)$ is given by $$\mathbb{E}_{\Lambda}[e^{-\langle\cdot,\varphi\rangle}] = e^{-\int_{E}(1-\mathbb{E}_x[e^{-\langle\cdot,\varphi\rangle}])\mu(dx)}$$ for any non-negative measurable function $\varphi:\tilde{E}\rightarrow\mathbb{R}$.* *Proof.* For a fixed realization of the Poisson random measure $\Lambda$, the conditional Laplace functional of $\mathbb{P}_{\Lambda}$ is given by $$\prod_{x \in \Lambda} \mathbb{E}_x\left[e^{-\langle\cdot,\varphi\rangle}\right] = e^{\int_{E}\log \mathbb{E}_x[e^{-\langle\cdot,\varphi\rangle}]\Lambda(dx)} = e^{-\langle\Lambda,-\log \mathbb{E}_x[e^{-\langle\cdot,\varphi\rangle}]\rangle}.$$ Taking the expectation with respect to the Poisson random measure $\Lambda$ and using the Laplace functional of the Poisson random measure, we obtain the result. ◻ # Canonical decomposition of a semi-martingale and Itô's formula In this section, we say a stochastic process $A$ is of locally bounded variation if, almost surely, the total variation of $A$, denoted by Var($A$), is finite on every finite interval.
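The Laplace characterization of Example 40 can be checked by simulation in a toy case (our sketch; the choice $E=[0,1]$, intensity $c\cdot$Lebesgue and test function $\varphi(x)=2x$ are arbitrary):

```python
import math
import random

# Sketch: Monte Carlo check of E[exp(-<Lambda, phi>)] = exp(-int_E (1 - e^{-phi}) dmu)
# for a Poisson random measure Lambda on E = [0,1] with intensity c * Lebesgue.
random.seed(1)
c = 5.0
phi = lambda x: 2.0 * x

def poisson(lam):
    """Knuth's method for sampling a Poisson(lam) random variable."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def sample_laplace(n_samples=100000):
    acc = 0.0
    for _ in range(n_samples):
        # Lambda has Poisson(c) many atoms, placed i.i.d. uniformly on [0,1]
        s = sum(phi(random.random()) for _ in range(poisson(c)))
        acc += math.exp(-s)
    return acc / n_samples

# exact value: exp(-c * int_0^1 (1 - e^{-2x}) dx)
exact = math.exp(-c * (1.0 - (1.0 - math.exp(-2.0)) / 2.0))
assert abs(sample_laplace() - exact) < 0.005
```

The atoms of a Poisson random measure with finite intensity form exactly such a "Poisson number of i.i.d. points" configuration, which is what the sampler uses.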
We also say $A$ is locally integrable if there exists a localizing sequence of stopping times $T_n$ such that $\mathbb{E}[Var(A)_{T_n}] < \infty$ for all $n\in\mathbb{N}$. It is well-known that a càdlàg stochastic process $X_t$ on $\mathbb{R}$ is a semi-martingale if it admits the decomposition $$X_t = X_0 + M_t + A_t,$$ where $M_t$ is a local martingale and $A_t$ is a process of locally bounded variation. The canonical decomposition of $X_t$ is given by **Theorem 42**. *Suppose $X_t$ is a semi-martingale and $a > 0$. Then $X_t$ admits a unique decomposition $$X_t = X_0 + A_t^a + X_t^c + \int_0^t\int_{|x| \leq a}x d(\mu-\nu) + \int_0^t\int_{|x| > a} x d\mu,$$ where $X_t^c$ is the continuous martingale part of $X_t$, $A_t^a$ is a continuous predictable process of bounded variation, $\mu$ is the jump measure of $X_t$ given by $$\mu((0,t],C) = \sum_{0\leq s \leq t}\mathbbm{1}_{\{X_s-X_{s-} \in C\}}, \quad C\subset \mathbb{R}\setminus\{0\},$$ and $\nu$ is the compensator of $\mu$.* Now consider the much more general case when the jump measure $\mu$ is defined on some Polish space $E$ and $\nu$ is again its compensator. Consider a semi-martingale of the form $$\label{equ.semimartingale} X_t = X_0 + A_t + X_t^c + \int_0^t\int_E g_1 d(\mu-\nu) + \int_0^t\int_E g_2 d\mu,$$ where $A_t$ is a continuous process of locally bounded variation and $X_t^c$ is a continuous local martingale. For $g_1,g_2$, we also require $g_1g_2 =0$ and $\int_0^t\int_E |g_2|d\mu < \infty$ after some localization. The condition on $g_1$ is much more complicated and we refer the reader to Section 5, Chapter 3 in [@liptser2012theory] for the definition. **Lemma 43**.
*Let $f \in C^{1,2}(\mathbb{R}_+\times \mathbb{R})$. Then for a semi-martingale $X_t$ with decomposition [\[equ.semimartingale\]](#equ.semimartingale){reference-type="eqref" reference="equ.semimartingale"}, we have the Itô formula $$\begin{aligned} &f(t,X_t) - f(0,X_0) \\ =& \int_0^t \partial_tf(s,X_s) ds + \int_0^t \partial_xf(s,X_s)dA_s + \frac{1}{2}\int_0^t \partial^2_{xx}f(s,X_s)d\langle X^c\rangle_s\\ +&\int_{(0,t]}\int_E(f(s,X_{s-} + g_2(s,x)) - f(s,X_{s-}))\mu(ds,dx) \\ +&\int_{(0,t]}\int_E(f(s,X_{s-}+g_1(s,x)) -f(s,X_{s-}) - \partial_x f(s,X_{s-})g_1(s,x))\nu(ds,dx), \end{aligned}$$ provided the last term is locally integrable.* We also need the unique decomposition of a special semi-martingale, i.e. a semi-martingale whose supremum of jumps is locally integrable. **Lemma 44**. *Let $X_t$ be a special semi-martingale. Then $X_t$ admits a unique decomposition $$X_t = X_0 + A_t + M_t,$$ where $M_t$ is a local martingale and $A_t$ is of locally bounded variation.* # Lemmas for Branching random walk We give here some useful lemmas on branching random walks. **Lemma 45**. *For any branching random walk $Z_t$ on $\mathbb{Z}_n^2$ with branching mechanism $g(x,s) = \sum_{k=0}^\infty p_k(x)s^k$ and branching rate $a(x,ds)$, if $$\sum_{k=0}^\infty kp_k(x) < K$$ uniformly over $t,x$ with some finite constant $K$, then the branching random walk exists (no explosion; in particular $\mathbb{E}[\langle Z_t,1\rangle]$ is finite) and we have the following expression for the Laplace functional of $Z_t$ for any test function $\varphi$: $$\label{equ.laplace_equation_formula} \begin{aligned} w_t(x) &:= \mathbb{E}\left[e^{-\langle Z_t,\varphi\rangle}\middle|Z_0 = \delta_x\right] \\ &= \mathbb{E}\left[e^{-\varphi(X_t) - a(0,t)} + \int_0^te^{-a(0,r)}g(r,X_r,w_{t-r}(X_r))a(X_r,dr)\middle|X_0 = x\right], \end{aligned}$$ where $X_t$ is the simple random walk on $\mathbb{Z}_n^2$ and $$a(s,t) = \int_s^t a(X_r,dr).$$* *Proof.* The non-explosion follows from Corollary 1, page 111 in [@athreya1972branching].
Let $\tau$ be the first branching time of $Z$ and let $M(x)$ be independent random variables with generating function $g(x,s)$. We define the filtrations $\mathcal{F}_t = \sigma(Z_s,s\leq t)$ and $\mathcal{G}_t = \sigma(X_s,s\leq t)$. Then the random time $\tau$ is a stopping time for both filtrations $\mathcal{F}_t$ and $\mathcal{G}_t$. We have $$\begin{aligned} w_{t}(x) &= \mathbb{E}_{x}\left[e^{-\langle Z_t,\varphi\rangle}\mathbbm{1}_{\tau \leq t} + e^{-\langle Z_t,\varphi\rangle}\mathbbm{1}_{\tau > t}\right]\\ &= \mathbb{E}_{x}\left[\mathbb{E}[w_{t-\tau}(X_\tau)^{M(X_{\tau})}|\mathcal{F}_{\tau}]\mathbbm{1}_{\tau \leq t} + e^{-\varphi(X_t)}\mathbbm{1}_{\tau > t}\right]\\ &=\mathbb{E}_{x}\left[e^{-\varphi(X_t) - a(0,t)} + g(\tau,X_\tau,w_{t-\tau}(X_\tau))\mathbbm{1}_{\tau\leq t}\right] \end{aligned}$$ by [\[equ.branching_time\]](#equ.branching_time){reference-type="eqref" reference="equ.branching_time"} and the definition of the moment generating function. Then by the law of $\tau$, we obtain the desired result. ◻ Here is a general lemma concerning the Feynman-Kac formula and the mild solution to some equation. **Lemma 46**. *For any measurable function $g:[0,\infty)\times \mathbb{Z}_n^2\times [0,1] \rightarrow [0,1]$, following the notation in lemma [Lemma 45](#lem.laplace_equation_formula){reference-type="ref" reference="lem.laplace_equation_formula"}, the equation [\[equ.laplace_equation_formula\]](#equ.laplace_equation_formula){reference-type="eqref" reference="equ.laplace_equation_formula"} is equivalent to $$w_{t}(x) = \mathbb{E}\left[e^{-\varphi(X_t)} + \int_0^t g(r,X_r,w_{t-r}(X_r))a(X_r,dr) - \int_0^t w_{t-r}(X_r)a(X_r,dr)\right].$$* *Proof.* See Lemma 4.3.4 in [@dawson1993measure]; a simple version is given in the proof of lemma [Lemma 17](#lem.variant_PAM){reference-type="ref" reference="lem.variant_PAM"}, where we show the equivalence of the Feynman-Kac formulation and the mild formulation.
◻ # Auxiliary lemmas Here is an auxiliary lemma for our moment estimates. **Lemma 47**. *We have the following inequalities:* 1. *$1 -x + \frac{x^2}{2} - e^{-x}\geq 0$, $x\geq 0$.* 2. *$\frac{x}{4} \leq 1 -x + \frac{x^2}{2} - e^{-x}$ for $x \geq 2$.* 3. *$0\leq e^{-x} - 1+x\leq 2x^{1+\beta}$ for $x\geq 0, 0< \beta\leq 1$.* 4. *$0 \leq -\frac{1}{\epsilon}\log(1-\epsilon x) - x\leq 2\epsilon x^2$ for $\epsilon >0, x \geq 0, \epsilon x \leq \frac{1}{2}$.* 5. *For a non-negative random variable $X$, we have $\mathbb{E}[X^{1+\theta}] \leq (1+\theta)\delta^{1+\theta}+(1+\theta)\int_\delta^\infty r^\theta\mathbb{P}[X\geq r]dr$, where $\delta >0, \theta\geq -1$.* 6. *$\mathbb{P}[X\geq r] \leq 2r\int_0^{\frac{2}{r}}\mathbb{E}\left[e^{-uX} - 1 + uX\right]du$, if $X\geq 0, \mathbb{E}[X] < \infty$.* *Proof.* The first three inequalities are elementary and we omit the proofs. For the fourth inequality, we consider the expansion of $\log(1-x)$ for $0 \leq x < 1$: $$\begin{aligned} -\frac{1}{\epsilon}\log(1-\epsilon x) - x &= \sum_{i=2}^\infty \frac{\epsilon^{i-1}x^i}{i} = \epsilon x^2\left(\frac{1}{2} + \sum_{i=1}^\infty\frac{\epsilon^ix^i}{i+2}\right) \\ &\leq \epsilon x^2\left(\frac{1}{2} - \log(1-\epsilon x)\right) \leq 2\epsilon x^2. \end{aligned}$$ The fifth inequality follows from basic probability theory:
$$\begin{aligned} \mathbb{E}\left[X^{1+\theta}\right] &= \int_0^\infty \int_0^x (1+\theta)t^\theta dt \mathbb{P}[dx]\\ &= (1+\theta)\int_0^\infty \int_t^\infty t^\theta \mathbb{P}[dx]dt \\ &\leq (1+\theta)\left(\delta^{1+\theta} + \int_\delta^\infty t^\theta\mathbb{P}[X\geq t]dt\right). \end{aligned}$$ The final inequality is given by $$\begin{aligned} r\int_0^{\frac{2}{r}} \mathbb{E}[e^{-uX} - 1 + uX] du &= r\int_0^{\frac{2}{r}}\int_0^\infty (e^{-ux} - 1 + ux) \mathbb{P}[dx]du \\ &= \int_0^{\infty} \frac{r}{x}\left(1 - e^{-\frac{2x}{r}} - \frac{2x}{r} + \frac{1}{2}\frac{(2x)^2}{r^2}\right)\mathbb{P}[dx]\\ &\geq \int_r^\infty \frac{x}{2r}\mathbb{P}[dx] \geq \frac{1}{2}\mathbb{P}[X\geq r], \end{aligned}$$ where we used the second inequality in the penultimate step. ◻ We also need to find a special function $\varphi_0\in\mathcal{D}_\mathcal{H}$ for which we have some lower bounds. **Lemma 48**. *There exists a positive function $\varphi_0\in\mathcal{D}_{\mathcal{H}}$ such that $$\varphi_0(x) \geq Ce^{-l|x|^\sigma}$$ for some constant $C$ and any $l>0$.* *Proof.* Consider $u_0 = 1$ and $A_tu_0$ for some horizon $t$. Then we have the representation $$T^n_s u_0 = P^n_s u_0 + \int_0^s P^n_{s-r}((T^n_ru_0)\xi^n)dr,$$ where $P_t^n$ is the discrete heat semigroup. Since $u_0 = 1$, we know $P^n_s u_0 = 1$ for all times $s$. Letting $$w^n_s = \int_0^s P^n_{s-r}((T_r^nu_0)\xi^n)dr,$$ we see by the solution theory of PAM that $w^n_s \in C^{\frac{1-\epsilon}{2}}L^\infty(\mathbb{Z}_n^2,e(t))$ uniformly over $n$. Thus there exists $C>0$ such that $$|w_s^n(x)| \leq Cs^{\frac{1-\epsilon}{2}}e^{t|x|^\sigma}$$ for each $x$. Set $s(x) = \frac{e^{-\frac{2t}{1-\epsilon}|x|^\sigma}}{(2C)^{\frac{2}{1-\epsilon}}}$; then for any $0 \leq r \leq s(x)$, we have $$|w_r^n(x)| \leq \frac{1}{2}.$$ Choose $C$ large enough that $s(x)\leq t$ for all $x \in \mathbb{Z}^2_n$. Then $$A_t u_0(x) = \int_0^t T_su_0(x)ds \geq \frac{1}{2}\int_0^{s(x)}dr = \frac{s(x)}{2}.$$ This gives the result. ◻
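The elementary inequalities 1-4 of Lemma 47 can be spot-checked numerically on a grid; this is only a sanity check of the constants, with arbitrary test values of $\beta$ and $\epsilon$:

```python
import math

# Numeric spot-check of inequalities 1-4 of Lemma 47 on a grid of points.
for k in range(1, 2001):
    x = 0.01 * k                                    # x ranges over (0, 20]
    g = 1 - x + x * x / 2 - math.exp(-x)
    assert g >= -1e-12                              # inequality 1
    if x >= 2:
        assert x / 4 <= g + 1e-12                   # inequality 2
    h = math.exp(-x) - 1 + x
    for beta in (0.3, 0.7, 1.0):
        assert -1e-12 <= h <= 2 * x ** (1 + beta) + 1e-12   # inequality 3
    for eps in (0.01, 0.1):
        if eps * x <= 0.5:
            d = -math.log(1 - eps * x) / eps - x
            assert -1e-9 <= d <= 2 * eps * x * x + 1e-9     # inequality 4
print("inequalities 1-4 verified on the grid")
```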
{ "id": "2309.09551", "title": "A Generalized Rough Super Brownian Motion", "authors": "Ruhong Jin", "categories": "math.PR", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We consider a sewing machinery between finite difference and analytical solutions defined at different scales: far away from and near the source of the perturbation of the flow. One of the essences of the approach is that the coarse problem and the boundary value problem in the proximity of the source model two different flows. In his remarkable paper, Peaceman proposed a framework for dealing with solutions defined on different scales for the linear **time-independent** problem by introducing the famous Peaceman well block radius. In this article we consider the novel problem of how to resolve this issue for transient flow generated by compressibility of the fluid. We propose a method to glue the solutions via the total fluxes, which are predefined on the coarse grid, and the changes in pressure, due to compressibility, in the block containing the production (injection) well. It is important to mention that the coarse solution \"does not see\" the boundary. From an industrial point of view, our report provides a mathematical tool for the analytical interpretation of simulated data for compressible fluid flow around a well in a porous medium. It can be considered as a mathematical \"shirt\" on the famous Peaceman well-block radius formula for linear (Darcy) transient flow, but can be applied in much more general scenarios. In the article we use Einstein's approach to derive the Material Balance equation, a key instrument to define $R_0$. We will extend Einstein's approach to three regimes of Darcy and non-Darcy flows for compressible (time-dependent) fluid: $\textbf{I}. Stationary ; \textbf{II}. Pseudo \ Stationary(PSS) ; \textbf{III}. Boundary \ Dominated(BD).$ Note that in all literature known to the authors, the rate of production at the well is time-independent.
**Our MB equation is tuned to prove that the corresponding Peaceman well block radius for each regime of flow is time-independent, and converges to the Peaceman well block radius when the exterior reservoir radius vanishes.** For clarity we will first derive the Peaceman well block formula for each regime of flow in the 1-D Euclidean case, and then in the more difficult and more practical 2-D radial case. author: - A. Ibraguimov$^1$, E. Zakirov$^2$, I. Indrupskiy$^2$, D. Anikeev$^2$, A. Zhaglova$^2$. title: Peaceman Well Block Problem For Time-Dependent Flows of Compressible Fluid --- $^{1}$ Texas Tech University (USA) and Oil and Gas Research Institute of RAS, Moscow\ e-mail: `akif.ibraguimov@ttu.edu`\ $^2$ Oil and Gas Research Institute of RAS, Moscow\ e-mail: `ezakirov@ogri.ru`, `i-ind@ipng.ru` `anikeev@ogri.ru`, `azhaglova90@gmail.com` # Introduction The Peaceman well-block radius [@zia7] is routinely used by engineers, but has not been rigorously studied even for steady-state flows. A detailed review of the fundamentals of the Peaceman well block radius for linear and non-linear stationary flows in porous media was recently accepted for publication in Applied and Computational Mathematics, an international journal, and was posted on the arXiv [@izia]. Here we just want to mention that the concept of the equivalent well block radius was introduced in Russia (see [@zia1] and [@zia2]), but was not translated, and therefore is not cited in the modern literature, as often happens. At the basis of the idea behind the Peaceman well block radius lies the material balance equation, which enables one to sew the analytical solution with the simulated one, and to interpret the computed value of the pressure in the block containing the well. In this section we will describe the paradigm for material balance (MB) as an algebraic set of equations, and indicate our intended application.
To introduce the MB system of equations, let us first consider the finite set of dependent variables $$\label{p-q-def} \mathcal{P}=\left\{p_{\pm r_0, 0}(s);p_{0,\pm r_0}(s);p_{\pm 1, 0}(s);p_{0,\pm 1}(s);q_x^{\pm}(s);q_y^{\pm}(s)\right\}.$$ Let $$\label{K-Q-def} \mathcal{K}=\left\{K_x^{\pm};K_y^{\pm}\right\} \ \textnormal{and} \ \mathcal{Q}=\left\{Q_x^{\pm};Q_y^{\pm}\right\}$$ be inputs, which in this study are considered to be constants. To motivate the discussion we will highlight the intended application. Consider a diffusive process in the domain $\Omega \ni 0$ with source/sink at $0$, which ignites the process, and a grid $\Omega_N=\sum_{i=1}^N B_i$ approximating $\Omega$. Let $\Omega_N\supset B_0\ni 0$ be characterised by blocks $B_i$ of \"size\" $\Delta$, with $B_0$ containing $0$. The major assumption is that the process of transport and changes of the fluid is much \"faster\" than the geological processes, and therefore the time dependence of $\mathcal{K}$ and $\mathcal{Q}$ is ignored. Assume that the conductivities at the blocks of interest are fixed, the flow is generated by a source (a $well$) which is fixed and located in the box $B(\Delta)$ of size $\Delta$, and this property holds for each $\Delta$. Let the set $\mathcal{P}$ contain only parameters defined in the center block $B_0=B_{0,0}$ (the domains of the parameters $p_{\pm r_0, 0}(s);p_{0,\pm r_0}(s)$ are in $B_0$) and the four nearest surrounding blocks $B_{i,j}$ (the domains of the parameters $p_{\pm 1, 0}(s);p_{0,\pm 1}(s)$ are in $B_{\pm 1,0}, \ B_{0, \pm 1}$). Consider the filtration (flow), which is controlled by the Material Balance (MB) equation, an algebraic equation w.r.t. the unknown variables $p_{a,b}(s)$, depending on the parameter $s$, and the input variables $q_a^b(s)$, also depending on $s$. Here $s$ models time.
The system also features an input parameter $\tau,$ associated with the changes of the variables $p$ on the time interval $\left[s,s+\tau\right].$ This $\tau$, which in some sense connects our equation to Einstein's equation of material balance (see [@Einstein56], [@ibr-isank-sob]), is predefined and set to be very small. **Remark 1**. *Note that Einstein's equation of material balance is naturally stochastic, whereas ours is deterministic. In spite of that, we think that Einstein's method can be extended to stochastic processes defined on a stochastic grid. We leave this for further research.* The dependent variables from the set $\mathcal{P}$, w.r.t. the parameters $\mathcal{K}$, $\mathcal{Q}$, and $\tau$, are subject to the algebraic equations: $$\begin{aligned} \label{Mat-Bal-Alg} & \tau\cdot K_x^- \cdot \left(p_{-r_0,0}(s) - p_{-1,0}(s)\right)=\tau\cdot q_x^-(s)+Q_x^-\left(p_{-r_0,0}(s+\tau)-p_{-r_0,0}(s)\right)\\ & \tau\cdot K_x^+ \cdot \left(p_{r_0,0}(s) - p_{1,0}(s)\right)=\tau \cdot q_x^+(s)+Q_x^+\left(p_{r_0,0}(s+\tau)-p_{r_0,0}(s)\right)\\ & \tau \cdot K_y^- \cdot \left(p_{0,-r_0}(s) - p_{0,-1}(s)\right)=\tau \cdot q_y^-(s)+Q_y^-\left(p_{0,-r_0}(s+\tau)-p_{0,-r_0}(s)\right)\\ & \tau \cdot K_y^+ \cdot \left(p_{0,r_0}(s) - p_{0,1}(s)\right)=\tau \cdot q_y^+(s) +Q_y^+\left(p_{0,r_0}(s+\tau)-p_{0,r_0}(s)\right) \end{aligned}$$ Denote: $$\label{qx+qy} q_x(s)=q_x^-(s)+q_x^+(s) \ \ q_y(s)=q_y^-(s)+q_y^+(s) ,\ Q_x=Q_x^-+Q_x^+ \ \ Q_y=Q_y^-+Q_y^+$$ and $$\label{q+Q} q(s)=q_x(s)+q_y(s) \ ; \ Q=Q_x+Q_y.$$ Assume symmetry conditions w.r.t. $+$ and $-,$ which we state as follows: **Definition 1**. *Symmetry structural constraints w.r.t. $+, \ -$.* 1. *$K$ coefficient $$\label{qx+-Kx+-sym} K_x^-=K_x^+ = K_x \ ; \ K_y^-=K_y^+=K_y .$$* 2. *$q$ parameter $$\label{q-sym} q_x^-(s)=q_x^+(s)=\frac{q_x(s)}{2} \ ; \ q_y^-(s)=q_y^+(s)=\frac{q_y(s)}{2} .$$* 3. *$Q$ coefficient $$\label{Q-sym} Q_x^-=Q_x^+=\frac{Q_x}{2} \ ; \ Q_y^-=Q_y^+=\frac{Q_y}{2} .$$* 4.
*$p$ variable w.r.t. the first index $$\label{p-x-dir} p_{-r_0,0}(s)= p_{r_0,0}(s) =p_{r_0}^x(s)\ ; \ p_{-1,0}(s)= p_{1,0}(s)=p_{1}^x(s) .$$* 5. *$p$ variable w.r.t. the second index $$\label{p-y-dir} p_{0,-r_0}(s)= p_{0,r_0}(s)=p_{r_0}^y(s) \ ; \ p_{0,-1}(s)= p_{0,1}(s)=p_{1}^y(s) .$$* Then from [\[Mat-Bal-Alg\]](#Mat-Bal-Alg){reference-type="eqref" reference="Mat-Bal-Alg"} and some basic algebraic manipulations it follows that $$\begin{aligned} & \tau\cdot 2\cdot K_x \cdot \left(p_{r_0}^x(s) - p_{1}^x(s)\right)=\tau\cdot q_x(s)+Q_x\cdot\left(p_{r_0}^x(s+\tau) - p_{r_0}^x(s)\right),\label{MB-A-Sym-x}\\ & \tau\cdot 2\cdot K_y \cdot \left(p_{r_0}^y(s) - p_{1}^y(s)\right)=\tau\cdot q_y(s)+Q_y \cdot \left(p_{r_0}^y(s+\tau) - p_{r_0}^y(s)\right)\label{MB-A-Sym-y}. \end{aligned}$$ If one assumes that $p_{r_0}^y(s) - p_{1}^y(s)=0,$ $p_{r_0}^y(s+\tau) - p_{r_0}^y(s)=0, \ \text{and} \ q_y(s)=0,$ then we get a precursor of the 1-D MB equation, which in the [case of symmetry]{.ul} in the $x$-direction takes the form $$\label{MB-A-Sym-1-D} \boxed{\tau\cdot 2\cdot K_x \cdot \left(p_{r_0}^x(s) - p_{1}^x(s)\right)=\tau\cdot q_x(s)+Q_x \cdot \left(p_{r_0}^x(s+\tau) - p_{r_0}^x(s)\right).}$$ As a precursor of the 2-D MB equation, in the [case of symmetry and isotropy]{.ul} we assume that $p_{r_0}=p_{r_0}^x=p_{r_0}^y, \ \cdots$ and isotropy $K_x=K_y=K$; letting $q(s)=q_x(s)+q_y(s)$ and $Q=Q_x+Q_y$, it takes the form $$\begin{aligned} \label{Mat-Bal-Alg-Sym-in all- isotr} \boxed{\tau\cdot 4\cdot K \cdot \left(p_{r_0}(s) - p_{1}(s)\right)=\tau\cdot q(s)+Q\cdot \left(p_{r_0}(s+\tau) - p_{r_0}(s)\right).}\end{aligned}$$ In the general setting the algebraic variables (letters) of interest $p_i^{x,y,\cdots}$, $i=0,1,2,\cdots$ may depend on a parameter, which is a common algebraic-geometric structure.
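As a sanity check of the symmetric reduction above, one can verify numerically that the four face balances, with the splits $K_x^\pm=K$, $q^\pm=q/4$ and $Q^\pm=Q/4$ per face implied by the symmetry constraints, sum to the isotropic form (a sketch with made-up numbers):

```python
# Sketch (made-up numbers): under full symmetry and isotropy each of the four
# face balances reads  tau*K*(p_r0 - p_1) = tau*(q/4) + (Q/4)*(p_r0_next - p_r0),
# and summing them yields  tau*4*K*(p_r0 - p_1) = tau*q + Q*(p_r0_next - p_r0).
tau, K, Q = 0.1, 2.5, 3.2
p_r0, p_1, p_r0_next = 10.0, 8.0, 9.7

# choose q so that each face balance holds exactly
q = 4.0 * (K * (p_r0 - p_1) - (Q / 4.0) * (p_r0_next - p_r0) / tau)

# one face balance
face_lhs = tau * K * (p_r0 - p_1)
face_rhs = tau * (q / 4.0) + (Q / 4.0) * (p_r0_next - p_r0)
assert abs(face_lhs - face_rhs) < 1e-12

# aggregated (isotropic) material balance
lhs = tau * 4.0 * K * (p_r0 - p_1)
rhs = tau * q + Q * (p_r0_next - p_r0)
assert abs(lhs - rhs) < 1e-12
```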
Assume that $i=0,1$. Then, using the above arguments, we consider the Algebraic Parametric Structure as a sewing machinery between the numerical and \"analytical\" solutions: $$\label{1-D MB} \tau\cdot\left(J_{1,0}^p\cdot\left(p_0(s)-p_1(s) \right) -I_q\cdot q(s)\right)=L_q^{p_0}\cdot\left(p_0(s+\tau)-p_0(s)\right) .\\ $$ The values of the coefficients and their dependence on the input parameters can vary depending on the intended application, dimension, geometry and dynamics of the process, discretization, $\text{etc}.$ **Remark 2**. *In view of the algebraic structure of the equation of the intended application, in [\[1-D MB\]](#1-D MB){reference-type="eqref" reference="1-D MB"} the $p_i(s)$ are dependent variables, $q(s)$ is the main [function defining the process]{.ul}, and the three other coefficients $J_{1,0}^p,$ $I_q,$ and $L_q^{p_0}$ contain essential characteristics of the algebraic and geometrical structure of the medium of the flow and of its discretization by the domain $\Omega_N$. We will choose these coefficients in the next paragraph.* # Motivation for Material Balance Equations and Application to Numerical Scheme Consider a flow in the reservoir $\Omega$ and the corresponding mathematical model. A numerical simulator of the flow provides three basic pieces of information: 1. A geometric approximation of the domain of the multi-component and multi-phase flow 2. The numerical values of the functions of interest in each block 3. The values of the parameters characterising the domain w.r.t.
chemical and physical of the fluids and media To motivate the algebraic structure of MB [\[1-D MB\]](#1-D MB){reference-type="eqref" reference="1-D MB"} equation, consider orthogonal grid of dimension $M\times N$ and size $\Delta_x$ and $\Delta_y.$ Let $P_{(M,N)}$ be $M\times N$ matrix of the pressure value, with an elements $p_{i,j}(t),$ which associate block $B_{i,j}.$ Assume that block $B_{0,0}$ contains source at center $(0,0)$, which generate differences in the function $p_{i,j}(t).$ Here $D=\Omega\times (0.h)$ is 3-dimensional cylinder and there non flow in $z$ direction. Assume Green type function $p(x,y,t)$ be a solution of the the basic modeling problem $$\begin{aligned} &L\cdot\frac{\partial p(x,y,t)}{\partial t }-J\cdot\left(\frac{\partial^2}{\partial x^2 }+ \frac{\partial^2}{\partial y^2 } \right)p= I\cdot\delta(x,y) \ \ \text{in} \ \ \left(\Omega \setminus (0,0)\right)\times (-\infty,\infty) \ , \\ &B(p)=0 \ \text{on} \ \partial\Omega \times (-\infty,\infty). \end{aligned}$$ Here $B$ to be boundary operator, which in our case will be Dirichlet, or Newman operator. 
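One way to see how such a Green-type source problem behaves numerically is a single explicit finite-difference step; a minimal sketch with placeholder coefficients $L$, $J$, $I$ and grid values (none taken from the text), using a homogeneous Dirichlet choice for $B$:

```python
import numpy as np

# Placeholder coefficients and grid (illustrative values only; the explicit
# step is stable here since J*tau*(1/dx**2 + 1/dy**2) <= 1/2).
L, J, I = 1.0, 1.0, 1.0
dx = dy = h = 1.0
tau = 0.1
M = N = 21                      # M x N grid, source block at the centre

def step(p):
    """One explicit time step with a point source in the centre block and
    homogeneous Dirichlet boundary condition (the choice B(p) = 0)."""
    q = np.zeros_like(p)
    q[M // 2, N // 2] = I       # Kronecker-delta source term
    lap = np.zeros_like(p)
    lap[1:-1, 1:-1] = (
        (p[:-2, 1:-1] - 2 * p[1:-1, 1:-1] + p[2:, 1:-1]) / dx**2
        + (p[1:-1, :-2] - 2 * p[1:-1, 1:-1] + p[1:-1, 2:]) / dy**2
    )
    p_new = p + tau / L * (J * lap + q / (dx * dy * h))
    p_new[0, :] = p_new[-1, :] = p_new[:, 0] = p_new[:, -1] = 0.0
    return p_new

p = np.zeros((M, N))
for _ in range(50):
    p = step(p)
```

After a few steps the pressure difference generated by the source spreads symmetrically from the centre block, which is exactly the structure the block-wise material balance exploits.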
To approximate the function $p(x,y,t)$ consider a finite-difference solution of the problem in the rectangular domain $$\begin{aligned} &L\cdot\frac{ p_{i,j}(t+\tau)-p_{i,j}(t)}{ \tau }-\\ \nonumber &J\cdot\left(\frac{p_{i-1,j}(t)-2p_{i,j}(t)+p_{i+1,j}(t)}{\Delta_x ^2}+ \frac{p_{i,j-1}(t)-2p_{i,j}(t)+p_{i,j+1}(t)}{\Delta_y ^2}\right)= \\ \nonumber &I\cdot\frac{\delta_{i,j}}{\Delta_x\cdot \Delta_y\cdot h} \ \ \text{in} \ \ \Omega_N \setminus (0,0) \ , \\ &B(p)=0 \ \text{on} \ \partial\Omega_N \times (-\infty,\infty) \,\ \end{aligned}$$ or $$\begin{aligned} \label{MB-fdif-dx-dy} &L\cdot(\Delta_x\cdot \Delta_y\cdot h)\cdot \left( p_{i,j}(t+\tau)-p_{i,j}(t)\right)=\\ \nonumber &\tau \cdot\left[J h\left(\frac{\Delta_y}{\Delta_x}(p_{i-1,j}(t)-2p_{i,j}(t)+p_{i+1,j}(t))+ \frac{\Delta_x}{\Delta_y}( p_{i,j-1}(t)-2p_{i,j}(t)+p_{i,j+1}(t))\right)+I\delta_{i,j}\right] ,\\ \nonumber &B(p)=0 \ \text{on} \ \partial\Omega_N \times (-\infty,\infty). \end{aligned}$$ Here $\delta_{i,j}$ is the Kronecker symbol. The equation above is basic and can be applied in both the $1-D$ and $2-D$ cases, which share many similarities but also differ. Namely: 1. [ 1-D Material Balance in "last blocks" $B_0, B_1$]{.ul} Under the assumption of [1-D symmetry]{.ul}, let $\Delta_y=const$ and $h=const$ for all $\Delta=\Delta_x$; then the MB takes the form $$\begin{aligned} \label{MB-fin-dif} &L\cdot\Delta\cdot \Delta_y h \left( p_{0}(t+\tau)-p_{0}(t)\right)=\\ \nonumber &\tau\left(2\cdot J\Delta_y\cdot h\cdot\frac{\left(p_{1}(t)-p_{0}(t)\right)}{\Delta}+I\delta_{0,0}\right)=\tau\left(2\cdot (J\cdot(\Delta_y\cdot h))\cdot\frac{\left(p_{1}(t)-p_{0}(t)\right)}{\Delta}+q\delta_{0,0}\right) ,\\ \nonumber \end{aligned}$$ 2.
[ Radial Material Balance Equation in "last blocks" $B_0, B_{\pm 1, 0}, B_{0, \pm 1}$ ]{.ul} Under the assumption of $\underline{ 2-D \text{ symmetry}}$, let $$\label{delta-sym} \boxed{\Delta=\Delta_x=\Delta_y}.$$ We will also assume that the thickness of the reservoir is constant and $$\label{q-ss-pss} \boxed{I=q}.$$ Then equation [\[MB-fdif-dx-dy\]](#MB-fdif-dx-dy){reference-type="eqref" reference="MB-fdif-dx-dy"} can be simplified as $$\begin{aligned} \label{MB-fin-dif-2D} &L\cdot\Delta^2\cdot h \left( p_{0}(t+\tau)-p_{0}(t)\right)=\\ \nonumber &\tau\left(4\cdot J\cdot h\cdot\left(p_{1}(t)-p_{0}(t)\right)+I\delta_{i,j}\right)=\tau\left(4\cdot (J\cdot h)\cdot\left(p_{1}(t)-p_{0}(t)\right)+q\delta_{i,j}\right) .\\ \nonumber \end{aligned}$$ For convenience let us summarize comments on the properties of the media w.r.t. the flow as itemized remarks. **Remark 3**. 1. *In the above, $q$ is the total rate at the well (well production), which is time dependent $\left(q=q(s)\right)$ for the PSS and BD regimes.* 2. *In this article for 2-D flows we assume isotropic and symmetric flows: $$\label{sym-aniz} \boxed{ K=K_x^-=K_x^+=K_y^-=K_y^+ \ ; \ q_x^-=q_x^+=q_y^-=q_y^+ \ ; \ Q_x^-=Q_x^+=Q_y^-=Q_y^+.}$$* 3. *In this article for 1-D flows we assume isotropic and symmetric flows: $$\label{sym-aniz-1-D} \boxed{ K=K_x^-=K_x^+ \ ; \ q=q_x^-=q_x^+ \ ; \ q_y^-=q_y^+=0 \ ; \ Q= Q_x^-=Q_x^+ \ ; \ Q_y^-=Q_y^+=0.}$$* 4. *All the above assumptions were made to enable the use of an analytical solution, which can be constructed explicitly. In the case when an analytical solution is unavailable, one can use in the gluing machinery the numerical solution on a fine scale for the corresponding IBVP.* 5. *The MB equation "does not see" the boundary $\Omega_N$ and is used to glue the analytical solution by solving the Peaceman problem. But the analytical solution will take into account the impact of the boundary condition on the value of the Peaceman radius. We will see that in the linear case $R_0$ will depend on the size of the domain only in the time-dependent problem.
Once more, $R_0$, as was shown by Peaceman, is independent of the size of the domain, and this is quite a remarkable finding by Peaceman. This issue was discussed in detail in our article [@izia].* # 1-D Material Balance Now consider the grid given in Fig. [1](#Einstein-MB-1D){reference-type="ref" reference="Einstein-MB-1D"} with a no-flow condition in the $y$ direction. Assume $h\cdot\Delta_y=1$ and let $\Delta_x=\Delta$. Then the $1-D$ approximation of the grid is as in Figure 2'. The corresponding MB on the 1-D grid in the $y$ direction is considered to be "trivial". In the material balance equation [\[1-D MB\]](#1-D MB){reference-type="eqref" reference="1-D MB"} let $$\begin{aligned} \label{notation-MB} &I_q(s)=q\cdot\frac{1}{h\cdot\Delta_y\cdot \Delta_x}, \\ &J_{1,0}^p=2K\cdot\frac{1}{ \Delta_x^2}, \\ &L_q^{p_0}=\phi\cdot C_p. \end{aligned}$$ In the above we let $$\label{delta-yh=1} \Delta_y\cdot h=1, \ C^0=\phi\cdot C_p, \ q(s)=q_x(s),$$ $$\label{notation-delta} \Delta_x=\Delta.$$ Then the MB equation [\[1-D MB\]](#1-D MB){reference-type="eqref" reference="1-D MB"} takes the form $$\label{1-D MB-t} 2K\cdot\left(p_0(s)-p_1(s) \right) =-q\cdot\Delta+C^0\cdot\frac{p_0(s+\tau)-p_0(s)}{\tau}\cdot\Delta^2 .$$ Note that if we use the finite difference [\[MB-fin-dif\]](#MB-fin-dif){reference-type="eqref" reference="MB-fin-dif"} as an MB, we get $$\begin{aligned} \label{MB-fin-dif-1D} &L\cdot\Delta\cdot \Delta_y h \left( p_{0}(t+\tau)-p_{0}(t)\right)=\\ \nonumber & \tau\left(2\cdot (J\cdot(\Delta_y\cdot h))\cdot\frac{\left(p_{1}(t)-p_{0}(t)\right)}{\Delta}+q\delta_{i,j}\right) ,\\ \nonumber &B(p)=0 \ \text{on} \ \partial\Omega \times (-\infty,\infty), \end{aligned}$$ which is equivalent to [\[1-D MB-t\]](#1-D MB-t){reference-type="eqref" reference="1-D MB-t"} under [\[delta-yh=1\]](#delta-yh=1){reference-type="eqref" reference="delta-yh=1"}, [\[notation-delta\]](#notation-delta){reference-type="eqref" reference="notation-delta"} if $J=K \ \text{and} \ L=C^0 .$ # Linear Steady State Material Balance (Algebraic) Here $p$ does not depend on $s$. The constraint that the $p_i$ for all $i$ are $s$-independent can be replaced by a more physical one: one of the multipliers in equation [\[SS-contstrain\]](#SS-contstrain){reference-type="eqref" reference="SS-contstrain"} is equal to zero, namely $$\label{SS-contstrain} \phi C_p\frac{V_0}{V}\cdot\frac{p_0(s+\tau)-p_0(s)}{\tau} \equiv 0 .$$ Physically, [\[SS-contstrain\]](#SS-contstrain){reference-type="eqref" reference="SS-contstrain"} states that the flow of fluid, or the fluid itself, is such that one can assume that 1. $\tau$, the time interval of compression of the fixed volume $V_0$ with respect to the whole volume $V$ of flow filtration, is too big; 2. the porosity $\phi$ is negligible; 3. the compressibility $C_p$ is negligible; 4. the change of the pressure in the block $V_0$ w.r.t. $\tau$ is negligible. **Remark 4**. *Although we consider the fraction $\frac{V_0}{V}=1$,  we keep this factor in [\[SS-contstrain\]](#SS-contstrain){reference-type="eqref" reference="SS-contstrain"} for interpretation and mathematical generality.* **Definition 2**. *We will say that the MB is steady state if condition [\[SS-contstrain\]](#SS-contstrain){reference-type="eqref" reference="SS-contstrain"} holds for all $s$ and $\tau.$* Then the symmetric, isotropic and steady-state 1-D Balance Equation has the form $$\label{ss-MB-1D} 2K\cdot(p_0-p_1)=q\Delta .$$ **Remark 5**. *Note that in the steady state MB, $p_i$ is parameter- (time in our intended application) independent.* To sew the value of $p_0$ to the pressure trace on the boundary of a flow with given rate $q$, let us consider the 1-D flow of an incompressible fluid towards the gallery $x=0.$ The flow is subject to: (i) the linear Darcy equation, (ii) a fixed, $s$-independent pressure $p=p_e$ on the reservoir boundary $x=r_e,$ and (iii) production rate $q$ at the gallery $x=0$, the inner boundary of the flow.
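The gallery problem just described has an explicit linear solution, which can be checked numerically; a minimal sketch with illustrative parameter values:

```python
# Steady-state 1-D flow towards the gallery x = 0: the analytical solution
# is linear, p_an(x) = A*x + B with A = -q/K, B = p_e + (q/K)*r_e.
# All parameter values are illustrative placeholders.
K, q, p_e, r_e = 2.0, 3.0, 10.0, 100.0
A, B = -q / K, p_e + q / K * r_e
p_an = lambda x: A * x + B

# Boundary condition at x = r_e and production rate at the gallery x = 0.
assert abs(p_an(r_e) - p_e) < 1e-9
assert abs(-K * A - q) < 1e-12            # -K * dp/dx = q

# The material balance -2K*(p_an(Delta) - p_an(R0)) = q*Delta holds exactly
# for R0 = Delta/2, the steady-state Peaceman radius derived below.
Delta = 0.5
R0 = Delta / 2
assert abs(-2 * K * (p_an(Delta) - p_an(R0)) - q * Delta) < 1e-12
```

Since the solution is linear, the balance residual vanishes identically at $R_0=\Delta/2$ regardless of the chosen parameter values.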
The corresponding analytical model for the $1-D$ pressure has the form $$\begin{aligned} \frac{d}{dx}\left(K \frac{d}{dx}p_{an}(x)\right)=0;\label{ss-eq}\\ p_{an}(x)\Big|_{x=r_e}=p_e;\label{ss-bc-ext}\\ -K\cdot \frac{d p_{an}}{dx}\Big|_{x=0}=q .\label{ss-well-cond}\end{aligned}$$ **Definition 3**. *Let the $1-D$ domain $(0,r_e)$ be split by the grid $\left[0,\Delta,2\cdot\Delta,\cdots N\cdot\Delta\right]$ where $N\Delta =r_e.$ We will say that the Peaceman problem is well posed w.r.t. MB [\[ss-MB-1D\]](#ss-MB-1D){reference-type="eqref" reference="ss-MB-1D"} for $1-D$ flows if for any given $\Delta$ there exists $R_0$ depending on $\Delta$ s.t. the analytical solution of the 1-D SS problem [\[ss-eq\]](#ss-eq){reference-type="eqref" reference="ss-eq"}-[\[ss-well-cond\]](#ss-well-cond){reference-type="eqref" reference="ss-well-cond"} satisfies the equation.* $$\label{MB-an-1D} -2K\cdot\left(p_{an}(\Delta)-p_{an}(R_0)\right)=q\cdot \Delta.$$ **Theorem 1**. *In order for the Peaceman problem to be well posed w.r.t. MB [\[ss-MB-1D\]](#ss-MB-1D){reference-type="eqref" reference="ss-MB-1D"} for $1-D$ flows it is necessary and sufficient that $$\label{R_0-SS} R_0=\frac{\Delta}{2}.$$* *Proof.* First, the analytical solution $p_{an}(x)$ has the form $$\begin{aligned} &p_{an}(x)=A\cdot x+ B , \ A=-\frac{q}{K}, \ B=p_e + \frac{q}{K} r_e. \end{aligned}$$ Substituting $p_{an}(x)$ into [\[MB-an-1D\]](#MB-an-1D){reference-type="eqref" reference="MB-an-1D"} one gets $$\begin{aligned} \label{ss-R-0-calc} & -2KA\left(\Delta-R_0\right)=q\Delta \ \text{or} \ 2\left(\Delta-R_0\right)=\Delta \ \text{or} \ \Delta=2R_0. \end{aligned}$$ From the above, formula [\[R_0-SS\]](#R_0-SS){reference-type="eqref" reference="R_0-SS"} follows. ◻   **Remark 6**. *This theorem is elementary, but we state it here to highlight why Peaceman's formula for $R_0$ in the SS regime depends only on the size of the block $\Delta$ and not on the size of the domain, the conductivity, or the rate of production. It in fact follows from the Lagrange mean-value theorem, Darcy's law and the divergence theorem (conservation law) for an incompressible fluid.* # 1-D Linear PSS Material Balance (Algebraic) and Corresponding $R_0$ Let us define the PSS constraint for the solution of the algebraic material balance equation, assuming in addition that $p_0,$ $p_1$ and $q$ are conditioned as follows. **Definition 4**. *We will say that the MB is pseudo-steady state if $$\label{q-s-ind} q(s)=q \ \text{is $s$ independent,}$$ $$\label{p1-p0-s-ind} p_0(s+\tau)-p_0(s)=q\cdot C_0 \cdot \tau, \ \text{and } \ C_0\text{ is $s$ independent.}$$* From the above it obviously follows that the difference $$\label{p0(s+tau)-p0(s)-tau} p_1(s)-p_0(s)\ \text{is $s$ independent}.$$ Then the linear 1-D PSS Material Balance has the form $$2K\cdot\left( p_1- p_{0}\right) =q\cdot\Delta\left(1-\phi c_p\cdot 1 \cdot C_0\Delta \right)=q\Delta\left(1-C_1\Delta \right).$$ For simplicity we will let $C_1=1.$ # 1-D Linear MB Constraints for the BD Regime and Corresponding Well Box Radius $R_0$ The MB for the BD regime can be stated, for convenience, in the form of a definition in terms of the key input constraints on $q(s), \ p_i(s), \ \text{and} \ p_0(s+\tau).$ **Definition 5**. *The algebraic MB constraints for the boundary dominated regime are stated as follows. There exist constants $Q_0, \ \mathbf{P}_1, \ \mathbf{P}_0,$ s.t. the items below hold for the variables $p_i$ and $q$ in the MB equation:* 1. *$$\frac{q(s)}{p_0(s)}=Q_0(r_e)$$ in the above $Q_0(r_e)$ is a $\Delta$- and $s$-independent constant;* 2. *$$\frac{p_1(s)}{p_0(s)}=\mathbf{P}_1(\Delta,r_e)$$ in the above $\mathbf{P}_1(\Delta,r_e)$ is a constant depending on $\Delta$ and $r_e$, but not $s$;* 3.
*$$\frac{p_0(s+\tau)}{p_0(s)}=\mathbf{P}_0(\Delta,r_e) \frac{e^{-C(K,r_e)\cdot\tau}-1}{\tau}$$ in the above $\mathbf{P}_0(\Delta,r_e)$ is a constant depending on $\Delta$ and $r_e$, but not $s$.* # Boundary Dominated Analytical Problem Consider the analytical problem $$\begin{aligned} \label{BDD-anal-prob} & \frac{\partial}{\partial x }\left(K\frac{\partial }{\partial x}u_0(x,t)\right)=c_0\cdot\frac{\partial u_0(x,t)}{\partial t}\\ & u_0(x,t)\Big|_{x=0}=0\\ &K\frac{\partial u_0(x,t)}{\partial x}\Big|_{x=r_e}=0\\ &u_0(x,0)=\phi_0(x), \ \text{where} \ \phi_0(x) \ \text{is the first eigenfunction.} \end{aligned}$$ Assume for simplicity that $c_0=1$. It is not difficult to prove the following **Proposition 1**. *Let $u_0(x,t)$ be an analytical solution of IBVP [\[BDD-anal-prob\]](#BDD-anal-prob){reference-type="eqref" reference="BDD-anal-prob"}: $$\label{bd-sol-0} u_0(x,t)=e^{-K\lambda_0^2 t} \sin(\lambda_0 x).$$ Define the variables in the MB equation as $$\begin{aligned} p_0(s)= u_0(R_0,s) \label{p_0-BD} ; \\ p_1(s)=u_0(\Delta,s) \label{p_1-BD} ;\\ q(s)=K\frac{\partial u_0}{\partial x}\Big|_{x=0}.\label{q-BD}\end{aligned}$$ Then all items in Definition [Definition 5](#MB-BDD-constr){reference-type="ref" reference="MB-BDD-constr"} are well defined for specific constants $Q_0, \ \mathbf{P}_0, \ \mathbf{P}_1$ for any $R_0$ and $\Delta,$ and $$\label{lambda-0} \lambda_0=\frac{\pi}{2\cdot r_e}$$* **Remark 7**. *Note that the initial data for all $\mathbf{three \ analytical \ problems }$ are assigned in such a way that the corresponding productivity indices are time independent.* It is important to note that the existence of these constants is of main interest; this will be addressed below. We stated Proposition [Proposition 1](#MB-4-ansol){reference-type="ref" reference="MB-4-ansol"} in order to follow the frame of the construction and to motivate Peaceman well-posedness as **Definition 6**. *We will say that the Peaceman problem for the BD regime is well posed w.r.t. the time-dependent MB [\[BDD-anal-prob\]](#BDD-anal-prob){reference-type="eqref" reference="BDD-anal-prob"} for $1-D$ flows if for any given $\Delta$ and $r_e$ there exists $R_0^{BD}(\Delta,r_e)$ depending on $\Delta$ and $r_e$ s.t. the analytical solution of the 1-D BD problem [\[BDD-anal-prob\]](#BDD-anal-prob){reference-type="eqref" reference="BDD-anal-prob"} satisfies the equation, and in addition the constraints in Definition [Definition 5](#MB-BDD-constr){reference-type="ref" reference="MB-BDD-constr"} hold.* **Lemma 1**. *Assume that $R_0^{bd}<\Delta$; then for Peaceman well-posedness for the BD regime of filtration it is sufficient that $$\label{r-0-bd-geneic} \sin (\lambda_0 R_0^{bd})-\sin(\lambda_0\cdot \Delta)+\frac{\lambda_0\Delta}{2} =\sin(\lambda_0\cdot R_0^{bd})\cdot\frac{\Delta}{2K}\cdot\frac{e^{-\lambda_0^2\tau}-1}{\tau}.$$ Moreover, as $r_e \to \infty$ and $\tau\to 0$, $R_{0}^{bd}(\lambda_0,\tau)$ converges to the Peaceman steady-state $R_0.$* *Here $\lambda_0$ is the first eigenvalue, and $R_0^{bd}$, which solves the transcendental equation [\[r-0-bd-geneic\]](#r-0-bd-geneic){reference-type="eqref" reference="r-0-bd-geneic"}, is determined by the value of $\Delta$, but in addition depends on $r_e,$ $\tau, \ K,$ and $c_p$.* *Proof.* To prove the lemma it suffices to write down the solution of the problem [\[BDD-anal-prob\]](#BDD-anal-prob){reference-type="ref" reference="BDD-anal-prob"} and calculate $$\label{bd-sol-q} q(s)=K\frac{\partial u_0(x,t)}{\partial x}\Big|_{x=0}=K e^{-K\lambda_0^2 s}\lambda_0 \cos(\lambda_0 x)\Big|_{x=0} =K\cdot e^{-K\lambda_0^2 s}\lambda_0 .$$ Here $\lambda_0$ is the first eigenvalue.
For convenience let us rewrite the MB $$\label{1-D MB-t-BD} 2K\cdot\left(p_0(s)-p_1(s) \right) =-q(s)\cdot\Delta+ 1\cdot\frac{p_0(s+\tau)-p_0(s)}{\tau}\cdot\Delta$$ and let $R_0=R_0^{bd}<\Delta$ be unknown; then $$\begin{aligned} &p_0(s)=e^{-K\lambda_0^2 s} \sin(\lambda_0 R_0^{bd}),\label{p_0-bd-an}\\ &p_1(s)=e^{-K\lambda_0^2 s} \sin(\lambda_0\Delta), \label{p_1-bd-an}\\ &p_0(s+\tau)=e^{-K\lambda_0^2( s+\tau)} \sin(\lambda_0 R_0^{bd}),\label{p_0-tau-bd-an}\\ &q(s)=K\cdot e^{-K\lambda_0^2 s}\lambda_0.\label{q(s)-BD-an} \end{aligned}$$ Let $\lambda=\lambda_0$ be the first eigenvalue. Using [\[p_0-bd-an\]](#p_0-bd-an){reference-type="eqref" reference="p_0-bd-an"}-[\[q(s)-BD-an\]](#q(s)-BD-an){reference-type="eqref" reference="q(s)-BD-an"}, MB [\[1-D MB-t-BD\]](#1-D MB-t-BD){reference-type="eqref" reference="1-D MB-t-BD"} takes the form $$\label{1-D MB-t-bd-an} 2K\cdot e^{-\lambda^2 s}\left(\sin(\lambda R_0^{bd}) -\sin(\lambda \Delta) \right) =-K\lambda\cdot\Delta e^{-\lambda^2 s}+ 1\cdot\sin(\lambda R_0^{bd})\cdot \frac{e^{-\lambda^2(s+\tau)}-e^{-\lambda^2s}}{\tau}\cdot\Delta.$$ After factoring out $e^{-\lambda^2 s}$, equation [\[1-D MB-t-bd-an\]](#1-D MB-t-bd-an){reference-type="eqref" reference="1-D MB-t-bd-an"} takes the form $$\label{1-D MB-t-bd-an-1} 2K\cdot \left(\sin(\lambda R_0^{bd}) -\sin(\lambda \Delta) \right) =-K\lambda\cdot\Delta +1\cdot\sin(\lambda R_0^{bd})\cdot \frac{e^{-\lambda^2\cdot\tau}-1}{\tau}\cdot\Delta .$$ Dividing [\[1-D MB-t-bd-an-1\]](#1-D MB-t-bd-an-1){reference-type="eqref" reference="1-D MB-t-bd-an-1"} by $2K$ we get [\[r-0-bd-geneic\]](#r-0-bd-geneic){reference-type="eqref" reference="r-0-bd-geneic"} in Lemma [Lemma 1](#BD-R_0-generic){reference-type="ref" reference="BD-R_0-generic"}. In order for the analytical solution to satisfy the MB equation, it is sufficient to assume that $R_0^{bd}(\Delta,r_e,\tau)$ solves equation [\[r-0-bd-geneic\]](#r-0-bd-geneic){reference-type="eqref" reference="r-0-bd-geneic"}.
One can simplify the above equation [\[1-D MB-t-bd-an-1\]](#1-D MB-t-bd-an-1){reference-type="eqref" reference="1-D MB-t-bd-an-1"}. Namely $$\label{1-D MB-t-bd-simp} 2K\cdot 2 \left(\sin\frac{\lambda\cdot(R_0^{bd}-\Delta)}{2}\cos\frac{\lambda(R_0^{bd}+\Delta)}{2}\right) = -K\lambda\cdot\Delta + 1\cdot\sin(\lambda R_0^{bd})\cdot\lambda^2\cdot \frac{e^{-\lambda^2\cdot\tau}-1}{\lambda^2\tau}\cdot\Delta.$$ As $\tau \to 0$, from [\[1-D MB-t-bd-simp\]](#1-D MB-t-bd-simp){reference-type="eqref" reference="1-D MB-t-bd-simp"} one has $$\label{1-D MB--bd-simp1} 4K\cdot \left(\sin\frac{\lambda\cdot(R_0^{bd}-\Delta)}{2}\cos\frac{\lambda(R_0^{bd}+\Delta)}{2}\right) = -K\lambda\cdot\Delta + 1\cdot\sin(\lambda R_0^{bd})\cdot\lambda^2\cdot(-1)\cdot\Delta.$$ Under the assumption that $\lambda$ is such that $\cos\frac{\lambda(R_0^{bd}+\Delta)}{2} \approx 1$, the last equation provides a compact approximation for computing $R_0^{bd}$ when $\lambda\Delta$ is small enough: $$\label{1-D MB--bd-aprox-0} R_0^{bd}\approx\frac{\Delta}{2}-\frac{1}{2K}\lambda^2R_0^{bd}\Delta$$ or equivalently $$\label{1-D MB--bd-aprox} R_0^{bd}\approx\frac{\frac{\Delta}{2}}{1+\frac{1}{2K}\lambda^2\Delta}.$$ Finally, taking into account the explicit formula $\lambda=\frac{\pi}{2r_e}$ and assuming that $\frac{\Delta}{8K}\frac{\pi^2}{r_e^2}$ is small enough, we get the approximate formula for $R_0^{bd}$ $$\begin{aligned} \label{1-D MB--bd-aprox-final} &\boxed{R_0^{bd}\approx\frac{\frac{\Delta}{2}}{1+\frac{1}{8K}\frac{\pi^2}{r_e^2}\Delta}} \end{aligned}$$ ◻ **Remark 8**. *It is evident that for small enough $\tau$, small $\lambda \Delta$ and a small fraction $\frac{\phi c_p}{2K}$, the appropriate $R_0^{BD}$ solving equation [\[1-D MB-t-bd-an-1\]](#1-D MB-t-bd-an-1){reference-type="eqref" reference="1-D MB-t-bd-an-1"} exists.* Using a Lagrange mean-value argument similar to the above, one can conclude that if the reservoir is unbounded then the Peaceman well block radius obtained here is the same as in the steady-state (classical) case.
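The transcendental equation for $R_0^{bd}$ and the boxed closed-form approximation can be compared numerically; a minimal sketch (all parameter values are illustrative, and $\Delta=K=1$ is chosen to keep the scaling simple):

```python
import math

# Solve the transcendental equation for the boundary-dominated well block
# radius R0_bd by bisection and compare with the approximation
# R0 ~ (Delta/2) / (1 + pi**2 * Delta / (8 * K * r_e**2)).
K, r_e, Delta, tau = 1.0, 50.0, 1.0, 0.01
lam = math.pi / (2 * r_e)
decay = (math.exp(-lam**2 * tau) - 1) / tau

def residual(R0):
    lhs = math.sin(lam * R0) - math.sin(lam * Delta) + lam * Delta / 2
    rhs = math.sin(lam * R0) * Delta / (2 * K) * decay
    return lhs - rhs

a, b = 1e-9, Delta              # residual changes sign on (0, Delta)
for _ in range(80):
    m = 0.5 * (a + b)
    if residual(a) * residual(m) <= 0:
        b = m
    else:
        a = m
R0_bd = 0.5 * (a + b)

R0_approx = (Delta / 2) / (1 + math.pi**2 * Delta / (8 * K * r_e**2))
```

For a large exterior radius the computed root sits just below $\Delta/2$ and agrees with the closed-form approximation to well within the stated smallness assumptions.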
We state the formulation and proof of the following theorem for a future, more generic implementation in view of Landis's multidimensional mean-value theorem [@landis-book]. **Theorem 2**. *The well block radius $R_0^{bd}(r_e,\Delta)$ which delivers Peaceman well-posedness converges asymptotically to $R_0=\frac{\Delta}{2}$ as $r_e\to \infty$ for any fixed $\tau$.* *Proof.* First observe that from the Lagrange mean value theorem it follows that $$\label{r-0-bd-geneic-1} \cos\xi\cdot \lambda \left(R_0^{bd}- \Delta\right)+\frac{\lambda\Delta}{2} =\sin(\lambda\cdot R_0^{bd})\cdot\frac{\Delta}{2K}\cdot\frac{e^{-\lambda^2\tau}-1}{\tau},$$ where $$\label{xi} \lambda R_0^{bd} <\xi<\lambda \Delta, \text{and consequently } \cos{\xi}=1+O(\lambda).$$ After division by $\lambda$ in [\[r-0-bd-geneic-1\]](#r-0-bd-geneic-1){reference-type="eqref" reference="r-0-bd-geneic-1"} we get $$\label{r-0-bd-geneic-2} \cos\xi\cdot\left(R_0^{bd}- \Delta\right)+\frac{\Delta}{2} =\lambda\cdot\sin(\lambda\cdot R_0^{bd})\cdot\frac{\Delta}{2K}\cdot\frac{e^{-\lambda^2\tau}-1}{\lambda^2\tau},$$ or, assuming as before that $\lambda$ is such that $\cos\xi\approx 1$, $$\label{r-0-bd-geneic-3} R_0^{bd}-\frac{\Delta}{2}=\lambda\cdot\sin(\lambda\cdot R_0^{bd})\cdot\frac{\Delta}{2K}\cdot\frac{e^{-\lambda^2\tau}-1}{\lambda^2\tau}+O(\lambda).$$ Evidently, on the RHS of [\[r-0-bd-geneic-3\]](#r-0-bd-geneic-3){reference-type="eqref" reference="r-0-bd-geneic-3"} the term $\frac{e^{-\lambda^2\tau}-1}{\lambda^2\tau}=O(1)$; therefore the statement of the theorem follows from [\[lambda-0\]](#lambda-0){reference-type="eqref" reference="lambda-0"} and equation [\[r-0-bd-geneic-4\]](#r-0-bd-geneic-4){reference-type="eqref" reference="r-0-bd-geneic-4"} below.
$$\label{r-0-bd-geneic-4} R_0^{bd}-\frac{\Delta}{2}=O(\lambda).$$ ◻ ![Einstein Mat Balance for 1-D Flow](Einstein-1-D-sm.pdf){#Einstein-MB-1D} ![Einstein Mat balance equation on the 5 spots grid, 2-D case](Einstein-1.pdf){#Einstein-mat-bal-r} # Pseudo Steady-State Material Balance Consider flow toward the well $\Gamma_w$ in an isolated reservoir $V$ of height $h=1$, with $\phi\cdot c_p =1$. The material balance equation for transient flow of a slightly compressible fluid in a block with dimensions $\Delta \times \Delta \times 1$, volume $V_0=\Delta^2 \cdot 1$ and pressure $p_0$, containing a well (source/sink) with flow rate $q$ (positive for a source and negative for a sink), is $$\label{basic-MB-PSS-Linear case-1} -4K\cdot(p_0(s)-p_1(s)) + \frac{q}{h} = \Delta^2 \cdot 1 \cdot \frac {1}{\tau} \left(p_0(s+\tau)-p_0(s)\right).$$ Let $U$ be the reservoir domain with volume $V$, boundary $\partial U =\Gamma_e \cup \Gamma_w$ and thickness $h$. **Assumption 1**. *Assume the PSS constraint for a slightly compressible fluid of compressibility $c_p$:* 1. *$$\label{PSS-1-consrain} \left(p_0(s+\tau)-p_0(s)\right)= q \cdot \frac{\tau}{1 \cdot V },$$* 2. *$$\text{the difference} \ p_0(s)-p_1(s)=\text{constant} \ \left(s \ \text{independent}\right).$$* *Under the constraint [\[PSS-1-consrain\]](#PSS-1-consrain){reference-type="eqref" reference="PSS-1-consrain"}, the material balance [\[basic-MB-PSS-Linear case-1\]](#basic-MB-PSS-Linear case-1){reference-type="eqref" reference="basic-MB-PSS-Linear case-1"} takes the form $$\label{basic-MB-PSS-Linear case} 4K\cdot(p_0(s)-p_1(s)) = \frac{q}{1} \cdot \left(1-\frac{\Delta^2}{V}\right),$$ [where $q$ is a given rate, constant in time during the given (fixed) time $\tau$, and is considered to be the same]{.ul}* *[for any time step $s$.]{.ul}* **Remark 9**. *Note that $p_i(s)$ depends on the parameter (time in our application) $s$, but the difference in the PSS MB does not depend on $s$.
This is a remarkable difference compared to the Steady State regime.* Consider transient 2-D radial flow in the isolated annular domain $U$ for a slightly compressible fluid towards the well $\Gamma_w$ with a given production rate and a no-flow condition on the exterior radius $\Gamma_e$: $$\begin{aligned} &K\cdot \Delta p =1\cdot\frac{\partial p}{\partial t} \ \text{in} \ U=U(0,r_w,r_e) ; \label{ss-pde} \\ & K\cdot\frac{\partial p}{\partial \nu}=0\label{ext-BC} \ \text{on} \ \Gamma_e, \ \ r=r_e \ ;\\ &K\cdot\int_{\Gamma_w}\frac{\partial p}{\partial \nu}ds=-\tilde{q} \ \text{on} \ \Gamma_w \ , \ r=r_w .\label{well-BC} \end{aligned}$$ Here $$U(0,r_w,r_e)=\{x: r_w<|x|<r_e \} \ , \ \Gamma_w=\{x:|x|=r_w\} \ , \ \Gamma_e=\{x:|x|=r_e\} \ , \ x=(x_1,x_2) ,$$ and $$\tilde{q}=\dfrac{q}{1}, \ V=1\cdot|U| \ , \ K=\frac{k}{\mu} , \ \frac{\partial p}{\partial \nu} \ \textnormal{ the external derivative in the co-normal direction } .$$ In the generic case, in order to deal with the problem, it is natural to consider the mixed boundary value problem for the elliptic equation, which is well defined from the mathematical point of view; the following approach can be generalised to different scenarios. In order for $R_0$ to be time independent we will split the approach for the IBVP. Namely, consider the PSS solution of the above problem [\[ss-pde\]](#ss-pde){reference-type="eqref" reference="ss-pde"}-[\[well-BC\]](#well-BC){reference-type="eqref" reference="well-BC"} defined as follows: $$p_{pss}(x,t)=w(x)+At.$$ In the above $$\label{A-definition} A=\frac{\tilde{q}}{1 \cdot |U|},$$ and $w(x)$ is the solution of the steady state problem: $$\begin{aligned} \nabla \cdot \left(K \nabla w(x)\right)= \frac{\tilde{q}}{|U|} \ \textnormal {in} \ U , \label{w-equation}\\ w(x)=0 \ \text{on} \ \Gamma_w \ , \label{w-cond-well}\\ K \frac{\partial w}{\partial \vec{\nu}}=0 \ \text{on} \ \Gamma_e. \ \label{ext-bound-cond}\end{aligned}$$ In the radial, axially symmetric (radial flow) case the pseudo-steady state $p_{pss}(r,t)$ takes the form $$\label{pss-radial} p_{pss}(r,t)=w(r)+At .$$ Using the representation for $p_{pss}(r,t)$ it is not difficult to prove **Theorem 3**. *In order to obtain the Peaceman radius $\left[R_0^{PSS}(r_e,\Delta)\right]$ for the PSS problem it is sufficient to find $\left[R_0^{PSS}(r_e,\Delta)\right]$ s.t. $$4K\cdot\left(w(\Delta)-w(\left[R_0^{pss}(r_e,\Delta)\right])\right)=-\frac{ q}{1 \cdot V}\left( V-\Delta^2\right). \ $$* From the explicit form of the solution of the problem [\[w-equation\]](#w-equation){reference-type="eqref" reference="w-equation"}-[\[ext-bound-cond\]](#ext-bound-cond){reference-type="eqref" reference="ext-bound-cond"} one can find such an $\left[R_0^{pss}(r_e,\Delta)\right].$ We will use another approach, based on a formulation of the problem in terms of the velocity field. This velocity framework will help in future work on nonlinear flows and in some cases yields the associated Peaceman well block radius explicitly. First let us start with **Remark 10**. *In the generic setting let $U$ be a domain with split boundary $\partial U=\Gamma_e\cup \Gamma_w,$ where $$\Gamma_e\cap \Gamma_w=\emptyset,$$ and $\Gamma_e, \ \Gamma_w$ are compact.* We will say that the velocity field $v(x)$ has a PSS profile if the following assumption holds **Assumption 2**. *We will say that the velocity field is subject to the PSS regime of flow if the velocity is time-independent and solves the following BVP $$\begin{aligned} \nabla \cdot \vec{v}=C \ \textnormal{in} \ U \label{cont-eq} \\ \vec{v}\cdot \vec{\nu}=0 \ \text{on } \Gamma_e \label{no-flow-BC}. \end{aligned}$$ Note that due to the divergence theorem the constant $C$ from [\[cont-eq\]](#cont-eq){reference-type="eqref" reference="cont-eq"} satisfies $$\label{div-th} \int_{\Gamma_w} \vec{v}\cdot\vec{\nu}d s=\tilde{q}=C\cdot|U|$$* **Remark 11**. *Note that in our original research [@ibragim-Prod-Ind] PSS regimes were defined in terms of the pressure function.
Namely, we assumed that the flow is PSS if $\frac{\partial p }{\partial t}=\text{constant},$ with a no-flow condition on the exterior boundary. In our intended application both definitions are equivalent to each other.* **Theorem 4**. *There exists a solution of the Peaceman problem for the time-dependent PSS regime of production, and the corresponding $R_0^{PSS}(r_e,\Delta)$ is defined by the equation $$\label{R0-PSS-exact-3} -\pi+\frac{\left[R_0^{PSS}(r_e,\Delta)\right]^2}{r_e^2}+\pi\frac{ r_w^2}{r_e^2}=-2\cdot\left(\ln\frac{\Delta}{\left[R_0^{PSS}(r_e,\Delta)\right]}\right).$$ Moreover $$\lim_{r_e\to \infty} \left[R_0^{PSS}(r_e,\Delta)\right]=R_0^{SS}=R_{Peaceman}$$* *Proof.* In general the vector field solution of the BVP [\[cont-eq\]](#cont-eq){reference-type="eqref" reference="cont-eq"}-[\[no-flow-BC\]](#no-flow-BC){reference-type="eqref" reference="no-flow-BC"} is not unique, but in the radial case it is well defined: $$\begin{aligned} -\frac{1}{r}(r v(r))_r=C=\frac{\tilde{q}}{|U|} \ , \label{cont-eq-r}\\ \vec{v}\cdot \vec{r}\big|_{r=r_e}=0 \ \label{no-flow-BC-r} \end{aligned}$$ Then for $r_w \leq r\leq r_e$: $$\label{v-pss-an} v(r)=-\frac{C}{2}\cdot r +\frac{C_1}{r}.$$ In the above, due to the BC, the constants can be selected as: $$\label{const-in-v-pss} C=\frac{q}{\pi(r_e^2-r_w^2)}=\frac{q}{|U|}, \ \text{and} \ C_1=\frac{C}{2}r_e^2.$$ Then, due to the Darcy equation, the PSS solution is $w(r)=-\frac{\mu}{k}\int v(r) dr$ and consequently $$\label{PSS-an-4-MB} w(r)=-K^{-1}\left[-\frac{C}{4}\cdot r^2 +C_1\ln r+C_2\right].$$ In the above the constant $C_2$ is chosen s.t. $w(r)|_{r=r_w}=0$ and therefore: $$C_2=\left[ \frac{C}{4}\cdot r_w^2-C_1\ln r_w\right].$$ Consequently, we use this expression for the pressure function to choose $R_0^{PSS}(r_e,\Delta)$ via Theorem [Theorem 3](#PSS-MB){reference-type="ref" reference="PSS-MB"} in the case of the PSS regime.
$$\label{PSS-an_MB} -\tilde{q} \cdot \left(1-\frac{\Delta^2}{V}\right)=4K\cdot(p_1-p_0)=4K\cdot[w(\Delta)-w(R_0)]$$ Then, due to [\[PSS-an-4-MB\]](#PSS-an-4-MB){reference-type="eqref" reference="PSS-an-4-MB"} and [\[const-in-v-pss\]](#const-in-v-pss){reference-type="eqref" reference="const-in-v-pss"}, one has $$\begin{aligned} \label{PSS-anal-MB-anul-1} &-\tilde{q} \cdot \left(1-\frac{\Delta^2}{|U|}\right)=4\cdot K K^{-1}\cdot\\ \nonumber &\cdot\left[\left(C\cdot\frac{\Delta^2}{4}-C_1\ln (\Delta)-C_2\right)-\left(C\cdot\frac{\left[R_0^{PSS}(r_e,\Delta)\right]^2}{4}-C_1\ln (\left[R_0^{PSS}(r_e,\Delta)\right])-C_2\right)\right] \\ &=\left[C\cdot\left(\Delta^2-R_0^2\right)-4\cdot\left(C_1\ln \frac{\Delta}{\left[R_0^{PSS}(r_e,\Delta)\right]}\right)\right]=\nonumber\\ &C\cdot\left[\left(\Delta^2-\left[R_0^{PSS}(r_e,\Delta)\right]^2\right)-2\cdot\left(r_e^2\ln \frac{\Delta}{\left[R_0^{PSS}(r_e,\Delta)\right]}\right)\right].\nonumber \end{aligned}$$ After simplification one gets $$\label{R0-PSS-exact-1} \Delta^2-|U|=\left(\Delta^2-\left[R_0^{PSS}(r_e,\Delta)\right]^2\right)-2\cdot\left(r_e^2\ln \frac{\Delta}{\left[R_0^{PSS}(r_e,\Delta)\right]}\right),$$ or $$\label{R0-PSS-exact-2} \left[R_0^{PSS}(r_e,\Delta)\right]^2-\pi\left(r_e^2-r_w^2\right) =\left[R_0^{PSS}(r_e,\Delta)\right]^2-|U|=-2\cdot\left(r_e^2\ln \frac{\Delta}{\left[R_0^{PSS}(r_e,\Delta)\right]}\right).$$ From the latter, the main equation [\[R0-PSS-exact-3\]](#R0-PSS-exact-3){reference-type="eqref" reference="R0-PSS-exact-3"} follows. The second statement of the theorem follows from [\[R0-PSS-exact-3\]](#R0-PSS-exact-3){reference-type="eqref" reference="R0-PSS-exact-3"}. ◻ From formula [\[R0-PSS-exact-3\]](#R0-PSS-exact-3){reference-type="eqref" reference="R0-PSS-exact-3"} of Theorem [Theorem 4](#PSS-Radius){reference-type="ref" reference="PSS-Radius"} follows the monotonicity property of the Peaceman PSS radius $R_0^{PSS}(r_e,\Delta)$ w.r.t. the external radius $r_e$. **Theorem 5**. *Under the condition of applicability of our formulae for the PSS problem, $r_e\geq R_0^{PSS}(r_e,\Delta)$, the function $R_0^{PSS}(r_e,\Delta)$ is monotonically decreasing.* *Proof.* Taking the derivative of the left and right hand sides of equation [\[R0-PSS-exact-3\]](#R0-PSS-exact-3){reference-type="eqref" reference="R0-PSS-exact-3"} one gets $$\label{expl-der-pss-rad} 2\frac{\left[r_e^2-\left(R_0^{PSS}(r_e,\Delta)\right)^2\right]}{R_0^{PSS}(r_e,\Delta)\cdot r_e^2}\cdot\frac{d}{d r_e}R_0^{PSS}(r_e,\Delta)=-2\left[\frac{\left(R_0^{PSS}(r_e,\Delta)\right)^2}{r_e^3}+\pi\frac{r_w^2}{r_e^3}\right]<0$$ In view of the applicability of the framework, the bracket $\left[r_e^2-\left(R_0^{PSS}(r_e,\Delta)\right)^2\right]>0$; therefore the statement of the theorem follows from [\[expl-der-pss-rad\]](#expl-der-pss-rad){reference-type="eqref" reference="expl-der-pss-rad"}. ◻ **Remark 12**. *It is evident that for $r_e\gg R_0$ one gets a good approximation using the Peaceman well block radius* *$$\label{R0-PSS-approx} \frac{\pi}{2}\approx\left(\ln \frac{\Delta}{R_0}\right).$$* # Linear Boundary Dominated Material Balance The linear material balance for a slightly compressible fluid is the same as before, with flow toward the well $\Gamma_w$ in an isolated reservoir $V$ of height $h=1$ and $\phi\cdot c_p =1$. Assume the boundary dominated (BD) constraint for a slightly compressible fluid, the same as for PSS. Namely $$\label{basic-MB-BD-Linear case-1} -4K\cdot(p_0(s)-p_1(s)) + \frac{q(s)}{h} = 1\cdot \frac{V_0}{1} \cdot \frac {1}{\tau} \left(p_0(s+\tau)-p_0(s)\right).$$ Once again let $V$ be the volume of the reservoir domain $U$ with boundary $\partial U =\Gamma_e \cup \Gamma_w$. In order for the Peaceman radius for the boundary dominated regime to be time independent, assume the following **Assumption 3**.
*Assume that:* *[A-1]{.ul}.* *$$\label{q/p1} \frac{q(s)}{p_1(s)}=C_1$$* *The constant $C_1$ is independent of $s$.* *[A-2]{.ul}.* *$$\label{p0/p1} \frac{p_0(s)}{p_1(s)}=C_2$$* *The constant $C_2$ is independent of $s$.* *[A-3]{.ul}.* *$$\label{p0s+/p0s} \tau^{-1}\cdot\left( \frac{p_0(s+\tau)}{p_0(s)}-1\right)\approx C_3 \ ,\ \text{for} \ \ \tau\ll 1.$$* *The constant $C_3$ is independent of $s$ and $\tau$.* The BD IBVP is defined as $$\begin{aligned} &K\cdot \Delta p =c_p\phi\frac{\partial p}{\partial t} \ \text{in} \ U=U(0,r_e,r_w)=B(0,r_e)\setminus B(0,r_w) ; \label{ss-pde} \\ & K\cdot\frac{\partial p}{\partial \nu}=0\label{ext-BC} \ \text{on} \ \Gamma_e, \ \ r=r_e \ ;\\ &p(x)=p_w \ \text{on} \ \Gamma_w \ , \ r=r_w .\label{well-BC} \end{aligned}$$ For simplicity we assume that $p_w=0$. Assuming once more radial flow towards a well of radius $r_w$, let the base solution of the problem above have the form $$\label{basic-anal-bdd-sol} p(x,t)=u_0(x,t)=e^{-\lambda_0 K t}\varphi_0(x).$$ Here $\varphi_0(x)$ is the first eigenfunction and $\lambda_0$ the first eigenvalue of the problem in the domain $U$ with split boundary $\partial U=\Gamma_w\cup \Gamma_e$: $$\begin{aligned} & -\Delta \varphi_0(x)=\lambda_0 \varphi_0(x) \ \text{in } \ U \label{eigen-eq} ;\\ & \varphi_0(x)=0 \ \text{on} \ \Gamma_w \ \left( \text{in the radial case, when } r=r_w \right)\label{eigen-well-cond} ; \\ & \frac{\partial \varphi_0(x)}{\partial \nu}=0\ \text{on} \ \Gamma_e \ \left(\text{in the radial case, when } r=r_e \right).\label{eigen-ext-bound-cond}\end{aligned}$$ **Remark 13**. *The motivation to consider this type of solution comes from the paper [@ibragim-Prod-Ind], in which we proved that the corresponding productivity index is time independent.* First let us check the constraints in Assumption [Assumption 3](#BD-assump for MB eq){reference-type="ref" reference="BD-assump for MB eq"} with respect to the
analytical solution [\[basic-anal-bdd-sol\]](#basic-anal-bdd-sol){reference-type="eqref" reference="basic-anal-bdd-sol"}. Indeed, it is not difficult to see that all three conditions $A_1$, $A_2$, and $A_3$ in Assumption [Assumption 3](#BD-assump for MB eq){reference-type="ref" reference="BD-assump for MB eq"} are satisfied with $$\begin{aligned} & C_1(\Delta,R_0)=\frac{\varphi_0(\Delta)}{\varphi_0(R_0)} \ , \quad C_2=\Lambda\frac{\int \varphi_0 dx}{\varphi_0(R_0)} \ , \quad C_3=\phi\cdot V_0 \cdot c_p \cdot \Lambda . \end{aligned}$$ Finally, assuming that $c_p=1$, for given $\Delta$ one has the following equation for $R_0^{bd}$: $$\label{BD-eq-for-R0} \frac{\varphi_0(\Delta)}{\varphi_0(R_0)}+ \lambda_0\frac{\int \varphi_0 dx}{\varphi_0(R_0)}=\lambda_0\cdot V_0 ,\ \text{or} \ \frac{\varphi_0(\Delta)}{ \Delta^2}+ \frac{1}{c_p}\cdot\frac{\int_{\Gamma_w} \frac{\partial \varphi_0}{\partial \nu} ds}{\Delta^2}=\lambda_0\varphi_0(R_0).$$ The function $\varphi_0(r)$ satisfying conditions [\[eigen-well-cond\]](#eigen-well-cond){reference-type="eqref" reference="eigen-well-cond"}-[\[eigen-ext-bound-cond\]](#eigen-ext-bound-cond){reference-type="eqref" reference="eigen-ext-bound-cond"} is a solution of the Sturm--Liouville problem for the Helmholtz equation in an annular domain with Dirichlet and Neumann conditions: $$\begin{aligned} \frac{1}{r}\frac{\partial}{\partial r}\left( r \frac{\partial \varphi_0(r)}{\partial r } \right)+\lambda_0\varphi_0(r)=0, \ r_w<r<r_e\label{eq-SL} \\ \varphi_0(r_w)=0, \ \frac{\partial\varphi_0(r)}{\partial{r}}\Big|_{r=r_e}=0 .\label{eq-SL-cond}\end{aligned}$$ We are interested in the non-negative solution of the above boundary value problem, which has the form (see [@Tich]) $$\label{varphi-r-sol} \varphi_0(r)={J_0(\sqrt{\lambda_0}{r_w})}{N_0(\sqrt{\lambda_0}{r})}-{N_0(\sqrt{\lambda_0}{r_w})}{J_0(\sqrt{\lambda_0}{r})},$$ where $\lambda_0$ is the first eigenvalue, the solution of the transcendental equation: $$\label{determ_0}
{N_0(\sqrt{\lambda_0}{r_w})}{J'_0(\sqrt{\lambda_0}{r_e})}-{N'_0(\sqrt{\lambda_0}{r_e})}{J_0(\sqrt{\lambda_0}{r_w})}=0$$ Consequently, the solution of the problem [\[ss-pde\]](#ss-pde){reference-type="eqref" reference="ss-pde"}-[\[well-BC\]](#well-BC){reference-type="eqref" reference="well-BC"} has the form $$\label{problem-sol} u_0(r,t)=e^{-\lambda_0 K t}[{J_0(\sqrt{\lambda_0}{r_w})}{N_0(\sqrt{\lambda_0}{r})}-{N_0(\sqrt{\lambda_0}{r_w})}{J_0(\sqrt{\lambda_0}{r})}]$$ One can directly verify all the constraints in Assumption [Assumption 3](#BD-assump for MB eq){reference-type="ref" reference="BD-assump for MB eq"} by letting $$p_1(s)=u_0(\Delta ,s) \ , \quad p_0(s)=u_0(R_0,s) \ , \quad p_0(s+\tau)=u_0(R_0,s+\tau) \ , \quad q(s)=-2\pi r_w\cdot K \frac{\partial u_0(r,s)}{\partial r}\big|_{r=r_w}$$ In this article we will not prove the existence of the Peaceman well block radius for the boundary dominated regime in the radial case, nor investigate its properties depending on the parameters of the problem. Instead, we state the result in the form of a remark, leaving the details for an upcoming publication. **Remark 14**. *Substituting $p_0(s)$, $p_1(s)$, $p_0(s+\tau)$, and $q(s)$ into the material balance equation [\[basic-MB-PSS-Linear case-1\]](#basic-MB-PSS-Linear case-1){reference-type="eqref" reference="basic-MB-PSS-Linear case-1"}, one obtains a transcendental equation for $R_0^{BD}(r_e,\Delta)$ of the form $$\begin{aligned} \varphi_0(R_0^{BD}(r_e,\Delta))-\varphi_0(\Delta)=-\frac{2}{\pi}\ln{\frac{\Delta}{R_0^{BD}(r_e,\Delta)}}\end{aligned}$$* *Here $\varphi_0(r)$ is the first eigenfunction of the problem [\[eq-SL\]](#eq-SL){reference-type="eqref" reference="eq-SL"}-[\[eq-SL-cond\]](#eq-SL-cond){reference-type="eqref" reference="eq-SL-cond"}, defined by equation [\[varphi-r-sol\]](#varphi-r-sol){reference-type="eqref" reference="varphi-r-sol"}.* Landis, E.M.
*Second Order Equations of Elliptic and Parabolic Type*, Moscow, Nauka, 1971 (Russian); English transl.: Translations of Mathematical Monographs, **171**, AMS, Providence, RI, 1998. Einstein, A. *Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen*, Annalen der Physik, ser. 4, vol. 17 (1905); English transl.: *On the Movement of Small Particles Suspended in a Stationary Liquid Demanded by the Molecular-Kinetic Theory of Heat*, in *Investigations on the Theory of the Brownian Movement*, New York: Dover Publications, Inc., 1956. Ibragimov, A., Khalmanova, D., Valkó, P. P., and Walton, J. R., *On a mathematical model of the productivity index of a well from reservoir engineering*, SIAM J. Appl. Math. 65, 1952 (2005). Tolstov, Y.G. *Application of the method of electrical modeling of physical phenomena to solving some problems of underground hydraulics*, Tech. Magazine Physics (Russian), volume XII, issue 10, 1942. Vakhitov, G.G. *Solving problems of underground hydrodynamics by the finite difference method*, Moscow, Proceedings of VNIIneft, issue 10, Gostoptehizdat (Russian), pp. 53-88, 1957. Peaceman, D.W. *Interpretation of Well-Block Pressures in Numerical Reservoir Simulation*, SPEJ, 183-94, June 1978; Trans., AIME, 253. Paper SPE 6893. Peaceman, D.W. *Interpretation of Well-Block Pressures in Numerical Reservoir Simulation with Nonsquare Grid Blocks and Anisotropic Permeability*, SPEJ, June, pp. 531-543, 1983; Trans., AIME, 275. Paper SPE 10528 presented at the 1982 SPE Symposium on Reservoir Simulation, New Orleans, Jan 31-Feb 3. A. Ibragimov, Z. Sobol, I. Hevage, *Einstein's model of "the movement of small particles in a stationary liquid" revisited: finite propagation speed*, Turkish Journal of Mathematics, Vol. 47, No. SI-1, 2023. A. Ibragimov, E. Zakirov, I. Indrupskiy, D. Anikeev, *Fundamentals in Peaceman Model for Well-Block Radius for Non-Linear Flows Near Well*, arXiv preprint arXiv:2203.10140, 2022. B. M. Budak, A. A.
Samarskii and A. N. Tikhonov, *A Collection of Problems on Mathematical Physics*, International Series of Monographs on Pure and Applied Mathematics, Volume 52, Pergamon Press, Oxford/London/Edinburgh/New York/Paris/Frankfurt, 1964.
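A numerical aside on the PSS analysis above: the exact relation [\[R0-PSS-exact-2\]](#R0-PSS-exact-2){reference-type="eqref" reference="R0-PSS-exact-2"} can be solved for $R_0^{PSS}$ by bisection and compared with the Peaceman approximation $R_0\approx\Delta\, e^{-\pi/2}\approx 0.2079\,\Delta$ of Remark 12. The following stdlib-Python sketch is ours; the parameter values and tolerances are illustrative only:

```python
import math

def pss_radius(r_e: float, r_w: float, delta: float) -> float:
    """Solve [R0]^2 - pi*(r_e^2 - r_w^2) = -2*r_e^2*ln(delta/R0) for R0
    (relation (R0-PSS-exact-2) of the text) by bisection."""
    def g(r0: float) -> float:
        # g is strictly decreasing in r0 on (0, r_e)
        return r0 * r0 - math.pi * (r_e**2 - r_w**2) + 2.0 * r_e**2 * math.log(delta / r0)

    lo, hi = 1e-9 * delta, delta  # g(lo) > 0 > g(hi) whenever r_e >> delta
    for _ in range(200):          # plain bisection, converges well past float precision
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative parameters: for r_e >> R_0 the exact radius approaches
# Peaceman's value delta * exp(-pi/2) from Remark 12.
r0 = pss_radius(r_e=100.0, r_w=0.1, delta=1.0)
print(r0, math.exp(-math.pi / 2))
```

For $r_e=100$, $r_w=0.1$, $\Delta=1$ the exact and approximate radii agree to several digits, consistent with Remark 12.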
--- abstract: | We generalise the notion of the Tate--Šafarevič group $\Sha(S/\mathbb{P}^1)$ of an elliptic K3 surface $S\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$ with a section to $\Sha(S,h)$ of a K3 surface $S$ endowed with a linear system $|h|$. The construction, which uses Grothendieck's special Brauer group, provides an efficient way to deal with moduli spaces of twisted sheaves supported on curves in $|h|$. address: Mathematisches Institut & Hausdorff Center for Mathematics, Universität Bonn, Endenicher Allee 60, 53115 Bonn, Germany author: - Daniel Huybrechts and Dominique Mattei title: The special Brauer group and twisted Picard varieties --- # Introduction Let $S_0\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$ be an elliptic K3 surface with a section. Another elliptic K3 surface $S\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$ (without a section) is called a twist of $S_0$ if its Jacobian fibration is $S_0\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$, i.e. ${\rm Pic}^0(S/\mathbb{P}^1)\simeq S_0$ relative over $\mathbb{P}^1$. The twists of a fixed $S_0\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$ are parametrised by the Tate--Šafarevič group $\Sha(S_0/\mathbb{P}^1)$. According to a result originally due to Artin and Tate [@BrauerIII], this group is naturally isomorphic to the Brauer group, so $\Sha(S_0/\mathbb{P}^1)\simeq{\rm Br}(S_0)$. Changing perspective, every twist $S\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$ of $S_0\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$ can be viewed as a moduli space of rank one sheaves on the fibres of $S_0\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$ twisted by some class $\alpha\in {\rm Br}(S_0)$. We will rephrase this by writing $S\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$ as ${\rm Pic}_\alpha^0(S_0/\mathbb{P}^1)$ for some Brauer class $\alpha\in {\rm Br}(S_0)$. 
## In this article we generalise the classical picture and consider twisted Picard varieties for arbitrary generically smooth, complete linear systems ${\mathcal C}\xymatrix@1@=15pt{\ar[r]&}|h|$ of curves contained in a K3 surface $S$. More specifically, this will lead us to consider twisted relative Jacobians ${\rm Pic}_\alpha^0({\mathcal C}/|h|)$, generalising the classical relative Picard varieties ${\rm Pic}^d({\mathcal C}/|h|)$. However, in general the twists are not indexed by elements in the Brauer group ${\rm Br}(S)$ but by elements in a certain extension of it. Throughout we will only be interested in the birational type of ${\rm Pic}_\alpha^0({\mathcal C}/|h|)$ or, more precisely, in its generic fibre ${\rm Pic}_{\alpha|_{{\mathcal C}_\eta}}^0({\mathcal C}_\eta)$. Our main result is the following theorem, see Section [4.3](#sec:mainproof){reference-type="ref" reference="sec:mainproof"}. **Theorem 1**. *Consider a complete, generically smooth linear system ${\mathcal C}\xymatrix@1@=15pt{\ar[r]&}|h|$ on a K3 surface $S$. 
*Then there exists a natural extension of the Brauer group by a finite cyclic group $$\xymatrix{0\ar[r]&\mathbb{Z}/m\cdot\mathbb{Z}\ar[r]&\Sha(S,h)\ar[r]&{\rm Br}(S)\ar[r]&0.}$$ The group $\Sha(S,h)$ is called the Tate--Šafarevič group of the pair $(S,h)$ and its elements parametrise all twisted relative Jacobians ${\rm Pic}_\alpha^0({\mathcal C}/|h|)$.* *Furthermore, the product in $\Sha(S,h)$ corresponds to the products of twisted Picard varieties viewed as torsors for ${\rm Pic}^0({\mathcal C}/|h|)\xymatrix@1@=15pt{\ar[r]&}|h|$, see Remarks [Remark 20](#rem:twistint){reference-type="ref" reference="rem:twistint"} & [Remark 33](#rem:groupstructure){reference-type="ref" reference="rem:groupstructure"}.* The integer $m$ in the theorem is the divisibility of $h$ as an element in the lattice ${\rm NS}(S)$ and we will later see that also all ${\rm Pic}_\alpha^d({\mathcal C}/|h|)$ for non-zero $d$ are taken care of by $\Sha(S,h)$, see Proposition [Proposition 35](#prop:getall){reference-type="ref" reference="prop:getall"}. This new Tate--Šafarevič group $\Sha(S,h)$ extends the notion of the classical Tate--Šafarevič group of an elliptic K3 surface $S_0\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$ with a section to an arbitrary elliptic K3 surface $S\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$. It turns out that there exists a natural isomorphism $\Sha(S,f)\simeq\Sha(S_0/\mathbb{P}^1)$ between the new Tate--Šafarevič group $\Sha(S,f)$ and the classical one $\Sha(S_0/\mathbb{P}^1)$ of its Jacobian fibration $S_0=\overline{\rm Pic}^0(S/\mathbb{P}^1)$, see Section [5.1](#sec:classTS){reference-type="ref" reference="sec:classTS"}. ## Working with twisted sheaves poses a number of technical problems. Firstly, in order to talk about twisted sheaves it is not enough to just fix a Brauer class $\alpha\in {\rm Br}(S)$. A certain geometric realisation is needed. This could be an Azumaya algebra, a gerbe, a Brauer--Severi variety, or a Čech cocycle.
Secondly, to deal with moduli spaces, certain numerical invariants, e.g. Chern classes or Mukai vectors, need to be fixed. For example, one cannot define, without introducing a certain ambiguity, the degree of an $\alpha$-twisted sheaf on a fibre of $S\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$. As was shown by Lieblich [@LiebPhD Ch. 5], the ambiguity can be lifted by passing to $\mu_n$-gerbes, which is roughly the same as lifting a Brauer class to the special Brauer group. Working with the special Brauer group allows us to deal with all Brauer classes at the same time. To address these issues we make use of Grothendieck's special Brauer group ${\rm SBr}(S)$ which is a certain extension of ${\rm Br}(S)$ by ${\rm NS}(S)\otimes\mathbb{Q}/\mathbb{Z}$, cf. [@BrauerII]. However, it will turn out that it is more convenient to work with a smaller subgroup ${\rm SBr}^{\rm o}(S)\subset{\rm SBr}(S)$, the restricted special Brauer group, which is a natural extension of ${\rm Br}(S)$ by the discriminant group of ${\rm NS}(S)$: $$\xymatrix{0\ar[r]&{\rm NS}(S)^\ast/{\rm NS}(S)\ar[r]&{\rm SBr}^{\rm o}(S)\ar[r]&{\rm Br}(S)\ar[r]&0.}$$ The various Tate--Šafarevič groups $\Sha(S,h)$ of curve classes on $S$ are then constructed as certain quotients ${\rm SBr}^{\rm o}(S)\xymatrix@1@=18pt{\ar@{->>}[r]&}\Sha(S,h)$. ## To get an idea of the role of the special Brauer group, let us look at the case of a smooth projective integral curve $C$ over an arbitrary field. In this case, there exists a short exact sequence of the form $$\xymatrix{0\ar[r]&\mathbb{Q}/\mathbb{Z}\ar[r]&{\rm SBr}(C)\ar[r]&{\rm Br}(C)\ar[r]&0.}$$ In addition to the classical Picard varieties ${\rm Pic}^d(C)$, $d\in \mathbb{Z}$, which are torsors for ${\rm Pic}^0(C)$, one can define varieties ${\rm Pic}_\alpha^d(C)$ for any class $\alpha\in {\rm SBr}(C)$ and any rational number $d\in \mathbb{Q}$. If not empty, they are again torsors for ${\rm Pic}^0(C)$. 
Furthermore, a non-empty ${\rm Pic}_\alpha^d(C)$ is a trivial torsor if and only if $\alpha\in \mathbb{Q}/\mathbb{Z}$, see Section [3.1](#sec:modulicurve){reference-type="ref" reference="sec:modulicurve"} for further details. ## Our discussion also sheds new light on the work of Markman [@Mark]. Building upon his work, we will prove the following result in Section [5.3](#sec:Mark){reference-type="ref" reference="sec:Mark"}. **Theorem 2**. *Let $X\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^n$ be a Lagrangian fibration of a non-special projective hyperkähler manifold of ${\rm K3}^{[n]}$-type. Then there exists a K3 surface $S$, a complete linear system ${\mathcal C}\xymatrix@1@=15pt{\ar[r]&}|h|$ on $S$, and a class $\alpha\in\Sha(S,h)$ such that $X$ is birational to ${\rm Pic}^0_\alpha({\mathcal C}/|h|)$ relative over $|h|\simeq\mathbb{P}^n$.* The assumption that $X$ is non-special, which just says $(H^{2,0}\oplus H^{0,2})(X)\cap H^2(X,\mathbb{Q})=\{0\}$, should be superfluous. If $X$ is not assumed projective, the K3 surface $S$ may be non-projective and for the class $\alpha$ one may have to use an analytic version of the special Brauer group. **Conventions:** In most of this article, we deliberately restrict to complex projective K3 surfaces, but there are interesting questions to explore for more general types of varieties as well as in more arithmetic settings. In our discussion it will sometimes be convenient to deal with the scheme-theoretic generic curve in a linear system on a complex projective K3 surface, in which case we work with curves over function fields $\mathbb{C}(t_1,\ldots,t_n)$. For simplicity we will always assume that the complete linear system ${\mathcal C}\xymatrix@1@=15pt{\ar[r]&}|h|$ parametrises at least one smooth curve or, equivalently, that the generic fibre is smooth. The cases of interest to us are ample complete linear systems and elliptic pencils.
For an Azumaya algebra ${\mathcal A}$ we call $d({\mathcal A})\coloneqq\sqrt{{\rm rk}({\mathcal A})}$ the degree of ${\mathcal A}$. **Acknowledgements:** We wish to thank Nick Addington for inspiration and Reinder Meinsma for helpful discussions. We are also grateful to Asher Auel and Evgeny Shinder for constructive criticism of the first version of the paper and to Alexei Skorobogatov for his help with the literature. The first author gratefully acknowledges the hospitality of the ITS-ETH Zurich during his stay in the spring of 2023. # Grothendieck's special Brauer group The special Brauer group introduced by Grothendieck in [@BrauerII Rem. 3.9] is an extension of the classical Brauer group. In this section, we recall its definition and its cohomological description and explain how to use it to define Chern characters of twisted sheaves. In the next section, we will then explain in what sense it is better suited to study moduli spaces of twisted sheaves. ## Hodge theory Assume $X$ is a smooth complex projective variety. We are mainly interested in the case of K3 surfaces, but ultimately we will want to apply everything also to projective hyperkähler manifolds. By ${\rm NS}(X)$ we denote the Néron--Severi group of $X$ and by $T(X)$ its transcendental lattice, i.e. the smallest saturated sub-Hodge structure of $H^2(X,\mathbb{Z})$ with $H^{2,0}(X)\subset T(X)\otimes \mathbb{C}$. The inclusion ${\rm NS}(X)\oplus T(X)\subset H^2(X,\mathbb{Z})$ is rarely an equality but always of finite index. For simplicity we will ignore any torsion in $H^2(X,\mathbb{Z})$. Next, we define $T'(X)$ by the short exact sequence $$\label{eqn:NST'} \xymatrix{0\ar[r]&{\rm NS}(X)\ar[r]& H^2(X,\mathbb{Z})\ar[r]&T'(X)\ar[r]&0.}$$ Then the composition $T(X)\subset H^2(X,\mathbb{Z})\xymatrix@1@=18pt{\ar@{->>}[r]&}T'(X)$ realizes the transcendental lattice as a subgroup $T(X)\subset T'(X)$ of finite index. 
The quotient is a finite group, which we will denote $$A(X)\coloneqq T'(X)/T(X).$$ For a surface $S$, the unimodular intersection form provides a natural identification $T'(S)\simeq T(S)^\ast$ and $A(S)$ is nothing but the discriminant group of the transcendental lattice or, equivalently, of the Néron--Severi lattice: $$A(S) \simeq T(S)^\ast/T(S)\simeq{\rm NS}(S)^\ast/{\rm NS}(S).$$ For a hyperkähler manifold, the index of $T'(X)\subset T(X)^\ast$ is the discriminant of the Beauville--Bogomolov--Fujiki pairing on $H^2(X,\mathbb{Z})$ and so $A(X)$ is often strictly smaller than $T(X)^\ast/T(X)$. Tensoring the exact sequence $0\xymatrix@1@=15pt{\ar[r]&}T(X)\xymatrix@1@=15pt{\ar[r]&}T'(X)\xymatrix@1@=15pt{\ar[r]&}A(X)\xymatrix@1@=15pt{\ar[r]&}0$ with $\mathbb{Q}/\mathbb{Z}$ induces an exact sequence, cf. [@CTS Sec. 5.3]: $$\label{eqn:ATTses} \xymatrix{0\ar[r]&A(X)\ar[r]&T(X)\otimes\mathbb{Q}/\mathbb{Z}\ar[r]&T'(X)\otimes\mathbb{Q}/\mathbb{Z}\ar[r]&0,}$$ where we identified ${\rm Tor}_1(A(X),\mathbb{Q}/\mathbb{Z})$ with $A(X)$. The inclusion is explicitly described by first lifting an element in $A(X)$ to a class in $T'(X)$, which is unique up to elements in $T(X)$, and then viewing it as an element in $T(X)\otimes \mathbb{Q}=T'(X)\otimes\mathbb{Q}$. **Remark 3**. Under our assumptions, a result of Gabber and de Jong, cf. [@CTS Ch. 4], shows that the Brauer group equals the cohomological Brauer group, so ${\rm Br}(X)\simeq H^2_{\text{\'et}}(X,\mathbb{G}_m)$. The latter group can be identified with the torsion subgroup $H^2(X,{\mathcal O}_X^\ast)_{\rm tor}\subset H^2(X,{\mathcal O}_X^\ast)$, using the analytic topology. Furthermore, the exponential sequence provides an exact sequence $$\xymatrix@C=18pt{0\ar[r]&{\rm NS}(X)\ar[r]&H^2(X,\mathbb{Z})\ar[r]&H^2(X,{\mathcal O}_X)\ar[r]&H^2(X,{\mathcal O}^\ast_X)\ar[r]&H^3(X,\mathbb{Z})\ar[r]&}$$ and in particular both groups $T(X)\subset T'(X)$ can be viewed as subgroups of $H^2(X,{\mathcal O}_X)$. 
If $H^3(X,\mathbb{Z})=0$, then $$\label{eqn:BrT} {\rm Br}(X)\simeq H^2(X,{\mathcal O}_X^\ast)_{\rm tor}\simeq\left(H^2(X,{\mathcal O}_X)/T'(X)\right)_{\rm tor}\simeq T'(X)\otimes\mathbb{Q}/\mathbb{Z},$$ which for a surface $S$, using $T(S)^\ast\simeq T'(S)$, is often written as ${\rm Br}(S)\simeq{\rm Hom}(T(S),\mathbb{Q}/\mathbb{Z})$. Without any assumption on $H^3(X,\mathbb{Z})$, the isomorphism ([\[eqn:BrT\]](#eqn:BrT){reference-type="ref" reference="eqn:BrT"}) only describes the divisible part of ${\rm Br}(X)$. ## Special Brauer group In the very last remark of [@BrauerII], Grothendieck introduced the special Brauer group ${\rm SBr}(X)$ of a scheme $X$. This notion turns out to be very useful for our purposes. The definition is similar to the definition of the Brauer group ${\rm Br}(X)$ as the group of Morita equivalence classes of Azumaya algebras. **Definition 4**. The *special Brauer group* ${\rm SBr}(X)$ of a scheme $X$ is the group of equivalence classes of Azumaya algebras ${\mathcal A}$ with respect to the equivalence relation generated by ${\mathcal A}\sim {\mathcal A}\otimes\mathcal{E\mkern-3mu nd}(F)$ with the locally free sheaf $F$ required to have trivial determinant $\det(F)\simeq{\mathcal O}_X$. **Remark 5**. (i) In order to ensure that with this definition the tensor product ${\mathcal A}\otimes{\mathcal A}'$ defines a group structure on ${\rm SBr}(X)$ one needs to use the fact that any Azumaya algebra ${\mathcal A}$ has trivial determinant. \(ii\) Variants of the above definition exist. For example, instead of requiring $\det(F)$ to be trivial one can ask it to be only torsion or algebraically (or cohomologically) trivial. It turns out that all these conditions eventually lead to the same group. By construction, ${\rm SBr}(X)$ naturally surjects onto ${\rm Br}(X)$ and the kernel has been determined in [@BrauerII Rem. 
3.9]: There exists a short exact sequence $$\label{eqn:SBrBr} \xymatrix{0\ar[r]&{\rm Pic}(X)\otimes\mathbb{Q}/\mathbb{Z}\ar[r]&{\rm SBr}(X)\ar[r]&{\rm Br}(X)\ar[r]&0.}$$ As explained in [@BrauerII Rem. 3.9], the cohomological version of ([\[eqn:SBrBr\]](#eqn:SBrBr){reference-type="ref" reference="eqn:SBrBr"}) is obtained as the limit of the exact sequences $$\xymatrix{0\ar[r]&{\rm Pic}(X)/\ell^n\cdot{\rm Pic}(X)\ar[r]&H^2(X,\mu_{\ell^n})\ar@{->>}[r]&H^2_{\text{\'et}}(X,\mathbb{G}_m)[\ell^n]\ar[r]&0}$$ induced by the Kummer sequence. In particular, whenever ${\rm Br}(X)\xymatrix@1@=15pt{\ar[r]^-\sim&}H^2_\text{\'et}(X,\mathbb{G}_m)$, then $${\rm SBr}(X)[\ell^\infty]\simeq H^2(X,\mathbb{Q}_\ell/\mathbb{Z}_\ell(1))={\lim_{\xymatrix@1@=15pt{\ar[r]&}}} H^2(X,\mu_{\ell^n}).$$ Since we assume $X$ to be a smooth complex projective variety, we can use singular cohomology to describe the situation. One finds $${\rm Pic}(X)\otimes\mathbb{Q}/\mathbb{Z}\simeq{\rm NS}(X)\otimes\mathbb{Q}/\mathbb{Z}\simeq(\mathbb{Q}/\mathbb{Z})^{\oplus\rho(X)}$$ and $$\label{eqn:SBrcoh} {\rm SBr}(X)\simeq H^2(X,\mathbb{Q}/\mathbb{Z})\simeq(\mathbb{Q}/\mathbb{Z})^{\oplus b_2(X)}\oplus H^3(X,\mathbb{Z})_{\text{tors}}.$$ If $H^3(X,\mathbb{Z})_{\text{tors}}=0$, the sequence ([\[eqn:SBrBr\]](#eqn:SBrBr){reference-type="ref" reference="eqn:SBrBr"}) is obtained from ([\[eqn:NST\'\]](#eqn:NST'){reference-type="ref" reference="eqn:NST'"}) by tensoring with $\mathbb{Q}/\mathbb{Z}$: $$\label{eqn:sesSBrBr} \xymatrix@R=6pt@C=12pt{0\ar[r]&{\rm Pic}(X)\otimes\mathbb{Q}/\mathbb{Z}\ar[r]&{\rm SBr}(X)\ar[r]&{\rm Br}(X)\ar[r]&0\\&\simeq{\rm NS}(X)\otimes\mathbb{Q}/\mathbb{Z}&\simeq H^2(X,\mathbb{Z})\otimes\mathbb{Q}/\mathbb{Z}&~~~\simeq T'(X)\otimes \mathbb{Q}/\mathbb{Z}.&}$$ **Remark 6**. The Brauer group ${\rm Br}(X)\simeq H^2(X,\mathbb{G}_m)$ can also be viewed as the set of isomorphism classes of $\mathbb{G}_m$-gerbes on $X$. Similarly, $H^2(X,\mu_n)$ is the set of isomorphism classes of $\mu_n$-gerbes on $X$. 
From this perspective, ${\rm SBr}(X)$ is the set of all $\mu_n$-gerbes on $X$, where a $\mu_n$-gerbe is identified with the naturally associated $\mu_{nk}$-gerbe under $\mu_n\subset\mu_{nk}$. **Remark 7**. (i) The injection is geometrically realised by sending $(1/r)\cdot L$ with $L\in {\rm Pic}(X)$ to the Azumaya algebra given by $\mathcal{E\mkern-3mu nd}(F)$, where $F$ is any vector bundle of rank $r$ and determinant $L$. Use $\mathcal{E\mkern-3mu nd}(F_1)\otimes\mathcal{E\mkern-3mu nd}(F_1^\ast\otimes F_2)\simeq\mathcal{E\mkern-3mu nd}(F_2)\otimes\mathcal{E\mkern-3mu nd}(F_1\otimes F_1^\ast)$ to see that this is independent of the choice of $F$. \(ii\) This description fits with the interpretation of ${\rm SBr}(X)$ as parametrising $\mu_n$-gerbes. The $\mu_n$-gerbe associated with $\mathcal{E\mkern-3mu nd}(F)$, i.e. its image under $H^1(X,{\rm PGl}(r))\xymatrix@1@=15pt{\ar[r]&}H^2(X,\mu_n)$, is the image of $L\in {\rm Pic}(X)=H^1(X,\mathbb{G}_m)$ under the boundary map $\delta_r\colon H^1(X,\mathbb{G}_m)\xymatrix@1@=15pt{\ar[r]&}H^2(X,\mu_r)$ induced by the Kummer sequence. \(iii\) As for the classical Brauer group, one checks that $|\alpha|\mid d{({\mathcal A})}$ for every Azumaya algebra representing a class $\alpha\in {\rm SBr}(X)$ in the special Brauer group. For example, if ${\mathcal A}=\mathcal{E\mkern-3mu nd}(F)$ with ${\rm rk}(F)=r$, then $\alpha^r$ is realised by the endomorphism bundle of $F^{\otimes r}\otimes\det(F)^\ast$ which has trivial determinant. The cohomological description ([\[eqn:SBrcoh\]](#eqn:SBrcoh){reference-type="ref" reference="eqn:SBrcoh"}) of the special Brauer group reveals that ${\rm SBr}(X)$ is a purely topological invariant of $X$, unlike the standard version ${\rm Br}(X)$, which depends on the Picard number and thus may change under deformations.
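For a complex projective K3 surface $S$ this count of $\mathbb{Q}/\mathbb{Z}$-factors can be made fully explicit; the following worked example, recorded here for orientation, only uses ([\[eqn:SBrcoh\]](#eqn:SBrcoh){reference-type="ref" reference="eqn:SBrcoh"}) together with $b_2(S)=22$ and $H^3(S,\mathbb{Z})=0$:

```latex
% With rho = rk NS(S), the sequence (eqn:sesSBrBr) for a K3 surface reads
\[
  0 \longrightarrow (\mathbb{Q}/\mathbb{Z})^{\oplus \rho}
    \longrightarrow (\mathbb{Q}/\mathbb{Z})^{\oplus 22}
    \longrightarrow (\mathbb{Q}/\mathbb{Z})^{\oplus 22-\rho}
    \longrightarrow 0.
\]
```

In particular, ${\rm SBr}(S)\simeq(\mathbb{Q}/\mathbb{Z})^{\oplus 22}$ is constant in families, while ${\rm Br}(S)\simeq(\mathbb{Q}/\mathbb{Z})^{\oplus 22-\rho}$ shrinks exactly when the Picard number jumps.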
The exact sequence ([\[eqn:sesSBrBr\]](#eqn:sesSBrBr){reference-type="ref" reference="eqn:sesSBrBr"}) provides a geometric interpretation for the link between jumps of the Picard number and drops of the rank of the Brauer group. Note that both sequences, ([\[eqn:SBrBr\]](#eqn:SBrBr){reference-type="ref" reference="eqn:SBrBr"}) and ([\[eqn:sesSBrBr\]](#eqn:sesSBrBr){reference-type="ref" reference="eqn:sesSBrBr"}), actually split, but only non-canonically. **Remark 8**. For a K3 surface $S$, it is common to call a lift of a class $\alpha\in {\rm Br}(S)$ to an element in $H^2(S,\mathbb{Q})$ a *B-field* lift of $\alpha$. Such a B-field lift then induces a lift of $\alpha$ to a class in the special Brauer group ${\rm SBr}(S)$, see Section [4.6](#sec:Bfield2){reference-type="ref" reference="sec:Bfield2"} for more details. **Example 9**. For a smooth projective irreducible curve $C$ over a field $k$, the sequence ([\[eqn:SBrBr\]](#eqn:SBrBr){reference-type="ref" reference="eqn:SBrBr"}) becomes $$\xymatrix@R=6pt{0\ar[r]&\mathbb{Q}/\mathbb{Z}\ar[r]&{\rm SBr}(C)\ar[r]&{\rm Br}(C)\ar[r]&0.}$$ If $k$ is algebraically closed, then ${\rm Br}(C)$ is trivial and $\mathbb{Q}/\mathbb{Z}\simeq{\rm SBr}(C)$. Although we mostly consider complex curves, the more general situation sheds light on the case of the scheme-theoretic generic fibre of a linear system $|h|$ on a surface. ## Chern character Instead of looking at classes in the Brauer group ${\rm Br}(X)$ we now fix a class in the special Brauer group $$\alpha\in{\rm SBr}(X).$$ Representing $\alpha$ by an Azumaya algebra ${\mathcal A}$, we can consider the abelian category ${\rm Coh}(X,{\mathcal A})$ (and its derived category). The same class $\alpha$ is also represented by any Azumaya algebra of the form ${\mathcal A}'\coloneqq {\mathcal A}\otimes \mathcal{E\mkern-3mu nd}(F)$ with $F$ locally free and such that $\det(F)\simeq{\mathcal O}_X$.
The two categories are equivalent to each other via $$\label{eqn:AA'} {\rm Coh}(X,{\mathcal A})\xymatrix@1@=15pt{\ar[r]^-\sim&}{\rm Coh}(X,{\mathcal A}'),~E\xymatrix@1@=15pt{\ar@{|->}[r]&}E\otimes F.$$ Since $\mathcal{E\mkern-3mu nd}(F')\simeq\mathcal{E\mkern-3mu nd}(F)$ if and only if $F'\simeq F\otimes L$ for some line bundle $L$ which, moreover, is torsion if $\det(F')\simeq\det(F)$, the equivalence ([\[eqn:AA\'\]](#eqn:AA'){reference-type="ref" reference="eqn:AA'"}) is canonical up to equivalences given by tensoring with torsion line bundles. In particular, there is a distinguished equivalence ([\[eqn:AA\'\]](#eqn:AA'){reference-type="ref" reference="eqn:AA'"}) for K3 surfaces and hyperkähler manifolds. Since $X$ is assumed to be a smooth complex projective variety, we may use singular cohomology and Hodge theory to fix cohomological invariants, but the following definitions can be adapted to other situations. As in [@HS2; @HSeattle], we consider the Chern character$${\rm ch}_{\mathcal A}\colon {\rm Coh}(X,{\mathcal A})\xymatrix@1@=15pt{\ar[r]&}H^*(X,\mathbb{Q}), ~E\xymatrix@1@=15pt{\ar@{|->}[r]&}{\rm ch}_{\mathcal A}(E)\coloneqq \sqrt{{\rm ch}({\mathcal A})}^{-1} \cdot{\rm ch}(E).$$ Since we will mostly work with sheaves on curves (contained in K3 surfaces), multiplication with $\sqrt{{\rm ch}({\mathcal A})}^{-1}$ is just division by the integer $d{({\mathcal A})}$. For K3 surfaces, it is often more convenient to work with the Mukai vector $v_{\mathcal A}(E)\coloneqq {\rm ch}_{\mathcal A}(E)\cdot{\rm td}(S)^{1/2}$, which for sheaves concentrated on curves in the K3 surface is the same as ${\rm ch}_{\mathcal A}(E)$. Note that on curves and surfaces ${\rm ch}_{\mathcal A}$ commutes with the equivalence ([\[eqn:AA\'\]](#eqn:AA'){reference-type="ref" reference="eqn:AA'"}). 
More precisely, if $\det(F)\simeq{\mathcal O}$ and ${\mathcal A}'={\mathcal A}\otimes\mathcal{E\mkern-3mu nd}(F)$ then $${\rm ch}_{{\mathcal A}'}(E\otimes F)={\rm ch}_{\mathcal A}(E),$$ for then ${{\rm ch}(\mathcal{E\mkern-3mu nd}(F))}={{\rm ch}(F^\ast)\cdot{\rm ch}(F)}={\rm ch}(F)^2$. **Definition--Proposition 10**. *Let $\alpha\in {\rm SBr}(X)$ be a class in the special Brauer group on a variety of dimension at most two. Then the *twisted Chern character* $${\rm ch}_\alpha(E)\coloneqq{\rm ch}_{\mathcal A}(E)$$ for $E\in {\rm Coh}(X,\alpha)\coloneqq{\rm Coh}(X,{\mathcal A})$ is independent of the choice of the Azumaya algebra ${\mathcal A}$ representing the class $\alpha\in {\rm SBr}(X)$. ◻* **Remark 11**. If $\alpha\in{\rm SBr}(X)$ is represented by an Azumaya algebra ${\mathcal A}\in H^1(X,{\rm PGl}(n))$, then ${\rm Coh}(X,{\mathcal A})$ is equivalent to the category ${\rm Coh}({\mathcal M})_1$ of sheaves of weight one on the associated $\mu_n$-gerbe ${\mathcal M}_{{\mathcal A}}$, cf. [@LiebDuke] and [@HSeattle §3] for further references. For two Azumaya algebras ${\mathcal A},{\mathcal A}'\in H^1(X,{\rm PGl}(n))$ with isomorphic $\mu_n$-gerbes ${\mathcal M}_{\mathcal A}\simeq{\mathcal M}_{{\mathcal A}'}$, i.e. inducing the same class in $H^2(X,\mu_n)$, one could work with the Chern character ${\rm ch}_{\mathcal M}={\rm ch}_{{\mathcal M}'}$ on the DM stacks ${\mathcal M}_{\mathcal A}\simeq{\mathcal M}_{{\mathcal A}'}$ instead of ${\rm ch}_\alpha$. However, when passing from the $\mu_n$-gerbe ${\mathcal M}_{\mathcal A}$ associated with an Azumaya algebra ${\mathcal A}$ to the $\mu_{nr}$-gerbe ${\mathcal M}_{{\mathcal A}'}$ associated with ${\mathcal A}'\coloneqq {\mathcal A}\otimes\mathcal{E\mkern-3mu nd}(E)$ for some locally free sheaf $E$ of rank $r$ and trivial determinant, the situation becomes less clear.
The induced map between the DM stacks ${\mathcal M}_{\mathcal A}\xymatrix@1@=15pt{\ar[r]&}{\mathcal M}_{{\mathcal A}'}$ (over $X$), which is compatible with the natural inclusion $\mu_n\xymatrix@1@=15pt{\ar@{^(->}[r]&}\mu_{nr}$, will, by functoriality of the Chern character, pull back ${\rm ch}_{{\mathcal M}_{{\mathcal A}'}}(F\otimes E)$ to ${\rm ch}_{{\mathcal M}_{\mathcal A}}(F)$. Here, $F\otimes E\in {\rm Coh}(X,{\mathcal A}')\simeq{\rm Coh}({\mathcal M}_{{\mathcal A}'})_1$ is the image of $F\in {\rm Coh}(X,{\mathcal A})\simeq{\rm Coh}({\mathcal M}_{\mathcal A})_1$ under the equivalence ${\rm Coh}(X,{\mathcal A})\xymatrix@1@=15pt{\ar[r]^-\sim&}{\rm Coh}(X,{\mathcal A}')$, $F\xymatrix@1@=15pt{\ar@{|->}[r]&}F\otimes E$. Now, the fact that Proposition [Definition--Proposition 10](#defprop:Chern){reference-type="ref" reference="defprop:Chern"} only holds in dimension $\leq 2$ shows that in general ${\rm ch}_{{\mathcal M}_{\mathcal A}}(F)\ne{\rm ch}_{\mathcal A}(F)$. Thus, expressing ${\rm ch}_{\mathcal M}(F)$ for $F\in {\rm Coh}({\mathcal M})$ in terms of classical Chern classes of $F$ viewed as coherent sheaf on $X$ seems to involve first passing to the 'minimal' gerbe of ${\mathcal M}$. There one would expect ${\rm ch}_{\mathcal M}(F)={\rm ch}_{\mathcal A}(F)$. This has bearings on the comparison between moduli spaces of sheaves over Azumaya algebras and moduli spaces of sheaves on gerbes, see Remark [Remark 18](#rem:gerbesmoduli){reference-type="ref" reference="rem:gerbesmoduli"}. # Moduli spaces of twisted sheaves Moduli spaces of (semi-)stable twisted sheaves have been constructed by Lieblich [@LiebDuke] and Yoshioka [@Yosh]; for the construction from the point of view of modules over Azumaya algebras see [@HS; @Simpson]. Most of the classical arguments carry over. However, it is important to emphasise that the construction of the moduli space requires an additional choice: it is not enough to fix a Brauer class $\alpha\in {\rm Br}(S)$.
For example, one possibility is to choose a Čech cocycle $\{\alpha_{ijk}\}$ representing $\alpha$, which allows one to use twisted sheaves. Another possibility is to fix a Brauer--Severi variety $P_\alpha\xymatrix@1@=15pt{\ar[r]&}S$ representing $\alpha$, which is used in [@Yosh]. For us it will be more convenient to view these moduli spaces as moduli spaces of untwisted coherent sheaves $E$ which are modules over an Azumaya algebra ${\mathcal A}$ representing a given Brauer class $\alpha$. As we will only be interested in the birational type of moduli spaces, the difference between stability and semi-stability and the choice of a polarisation, already needed to define stability, are all irrelevant. However, the question whether a moduli space is a coarse or a fine moduli space is important. Over the stable locus, which is never empty in our setting, it is independent of the choice of a birational model. ## Moduli on curves {#sec:modulicurve} Let us again first consider the case of a smooth projective irreducible curve $C$ over an arbitrary field $k$. The discussion should be compared to the one by Lieblich e.g. [@LiebComp Sec. 3.2.2] using the language of sheaves on $\mu_n$-gerbes, see also the introduction of Section [3.2](#sec:ModSurf){reference-type="ref" reference="sec:ModSurf"}. **Definition 12**. For a class $\alpha\in {\rm SBr}(C)$ in the special Brauer group represented by an Azumaya algebra ${\mathcal A}$ and $r\in \mathbb{Z}$, $d\in \mathbb{Q}$ we denote by $M_\alpha(r,d)$ the moduli space of semi-stable vector bundles $E\in {\rm Coh}(C,{\mathcal A})$ with ${\rm ch}_{\mathcal A}(E)=(r,d)$. For the special case $r=1$ we use the notation $${\rm Pic}_\alpha^d(C)\coloneqq M_\alpha(1,d)$$ and call it the *twisted Picard group* of degree $d$.
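To make the invariants $(r,d)$ concrete, here is a sketch of the bookkeeping, under the assumption (not fixed in this section, but the unique normalisation compatible with the integrality properties and twist formulas recorded below) that on a curve the twisted Chern character is given by ${\rm ch}_{\mathcal A}(E)={\rm ch}(E)/d({\mathcal A})$, where $d({\mathcal A})$ denotes the degree of ${\mathcal A}$, so ${\rm rk}({\mathcal A})=d({\mathcal A})^2$:

```latex
% Assumed normalisation on a curve C (a sketch):
%   ch_A(E) = ch(E)/d(A),  where rk(A) = d(A)^2.
{\rm ch}_{\mathcal A}(E)
  = \frac{{\rm ch}(E)}{d({\mathcal A})}
  = \Bigl(\frac{{\rm rk}(E)}{d({\mathcal A})},\;
          \frac{\deg(E)}{d({\mathcal A})}\Bigr)
  = (r,d),
\qquad
{\rm ch}_{\mathcal A}(E\otimes L)
  = \bigl(r,\; d + r\cdot\deg(L)\bigr).
```

Under this assumption $r\in\mathbb{Z}$ whenever $d({\mathcal A})\mid{\rm rk}(E)$, while $d$ a priori only lies in $(1/d({\mathcal A}))\cdot\mathbb{Z}$, and for ${\mathcal A}=\mathcal{E\mkern-3mu nd}(F)$ one gets ${\rm ch}_{\mathcal A}(F)=1+\mu(F)$.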
There is no ambiguity in this definition, as for any other Azumaya algebra ${\mathcal A}'$ representing $\alpha\in {\rm SBr}(C)$ there exists an equivalence ${\rm Coh}(C,{\mathcal A})\simeq{\rm Coh}(C,{\mathcal A}')$ unique up to tensoring with torsion line bundles which identifies semi-stable vector bundles of Mukai vector ${\rm ch}_{\mathcal A}=(r,d)$ in ${\rm Coh}(C,{\mathcal A})$ with those with ${\rm ch}_{{\mathcal A}'}=(r,d)$ in ${\rm Coh}(C,{\mathcal A}')$. However, the isomorphism between two moduli spaces defined by means of different Azumaya algebras representing the class in ${\rm SBr}(C)$ is natural only up to tensoring with torsion line bundles. Note that any locally free sheaf $E\in {\rm Coh}(C,{\mathcal A})$ satisfies $d{({\mathcal A})}\mid{\rm rk}(E)$. Therefore, the constant coefficient of ${\rm ch}_{\mathcal A}(E)$ is indeed an integer, which explains why we fixed $r\in \mathbb{Z}$. However, this is not true for the degree $d$, which is usually only contained in $(1/d{({\mathcal A})})\cdot\mathbb{Z}$. **Remark 13**. If $E\in {\rm Coh}(C,{\mathcal A})$ is (semi-)stable with ${\rm ch}_{\mathcal A}(E)=(r,d)$, then also any line bundle twist $E\otimes L\in {\rm Coh}(C,{\mathcal A})$ is (semi-)stable and ${\rm ch}_{\mathcal A}(E\otimes L)=(r,d+r\cdot \deg(L))$. So, if there exists a line bundle $L$ of degree one on $C$, then for fixed $r$ every moduli space $M_\alpha(r,d)$ is isomorphic to one with $d\in(1/d{({\mathcal A})})\cdot\mathbb{Z}\cap[0,r)$. If we allow ourselves to twist only with line bundles of degree in $m\cdot\mathbb{Z}$, then $[0,r)$ has to be replaced by $[0,m\cdot r)$. **Remark 14**. (i) All non-empty ${\rm Pic}^d_\alpha(C)$, $d\in \mathbb{Q}$, $\alpha\in{\rm SBr}(C)$ are torsors for ${\rm Pic}^0(C)$ and this torsor structure does not depend on the choice of the Azumaya algebra either. If $k$ is algebraically closed, all non-empty ${\rm Pic}^d_\alpha(C)$ are trivial torsors and, therefore, non-naturally isomorphic to ${\rm Pic}^0(C)$. 
\(ii\) Assume that there exists an $E\in {\rm Coh}(C,{\mathcal A})$ defining a point in ${\rm Pic}_\alpha^d(C)$. Then ${\rm rk}(E)=d{({\mathcal A})}$ and, therefore, the natural injection ${\mathcal A}\,\xymatrix@1@=15pt{\ar@{^(->}[r]&}\mathcal{E\mkern-3mu nd}(E)$ is an isomorphism. For the latter, use that both sheaves are locally free of the same rank and with trivial determinant. In particular, $$\alpha\in {\rm NS}(C)\otimes\mathbb{Q}/\mathbb{Z}\subset{\rm SBr}(C).$$ If, moreover, there exists a line bundle $L$ on $C$ with $L^{{\rm rk}(E)}\simeq\det(E)$, then ${\mathcal A}\simeq\mathcal{E\mkern-3mu nd}(E\otimes L^\ast)$ with $\det(E\otimes L^\ast)\simeq{\mathcal O}_C$ and, hence, $\alpha\in {\rm SBr}(C)$ is trivial. In particular, for $k$ algebraically closed, the non-emptiness of ${\rm Pic}^d_\alpha(C)$ with $d$ an integer(!) implies that $\alpha\in {\rm SBr}(C)$ is trivial, because then ${\rm rk}(E)\mid \deg(E)$. This is no longer true for $d\not\in\mathbb{Z}$. Indeed, any vector bundle $F$ of rank $r$ can be viewed as a stable sheaf over ${\mathcal A}=\mathcal{E\mkern-3mu nd}(F)$ with ${\rm ch}_{\mathcal A}(F)=1+\mu(F)$, i.e. $F\in {\rm Pic}^{\mu(F)}_\alpha(C)$ for $\alpha=[{\mathcal A}]\in {\rm Pic}(C)\otimes\mathbb{Q}/\mathbb{Z}\subset{\rm SBr}(C)$. Let us now only fix an ordinary Brauer class $\alpha\in {\rm Br}(C)$. Then the moduli space $M_\alpha(r)$ of all stable twisted sheaves of rank $r$ is still well defined but only locally of finite type. For example, $${\rm Pic}_\alpha(C)=M_\alpha(1)$$ parametrises all locally free $E\in {\rm Coh}(C,{\mathcal A})$ of rank ${\rm rk}(E)=d{({\mathcal A})}$. The definition does not depend on the choice of the Azumaya algebra ${\mathcal A}$ representing $\alpha\in {\rm Br}(C)$. Note that ${\rm Pic}_\alpha(C)$ is a countable disjoint union of smooth projective varieties which are torsors for ${\rm Pic}^0(C)$.
Moreover, it is induced from ${\rm Pic}_{\tilde \alpha}^0(C)$, where $\tilde\alpha\in{\rm SBr}(C)$ is an arbitrary lift of $\alpha$. For later use we state this as the following. **Proposition 15**. *Let $\tilde\alpha\in {\rm SBr}(C)$ be a lift of a class $\alpha\in {\rm Br}(C)$. Then the Picard scheme ${\rm Pic}_\alpha(C)$ is naturally a torsor for the group scheme ${\rm Pic}(C)= \bigsqcup {\rm Pic}^d(C)$, where the action is given by tensor product. More precisely, $${\rm Pic}_{\alpha}(C)\simeq({\rm Pic}(C)\times {\rm Pic}_{\tilde\alpha}^0(C))/{\rm Pic}^0(C)$$ as torsors for ${\rm Pic}(C)$. ◻* We shall need the following general fact, cf. [@LiebPhD Sec. 5.1.3] or [@LiebComp Prop. 3.2.2.6] for the version for $\mu_n$-gerbes. For completeness' sake, we include a sketch of the proof. **Proposition 16**. *Let $C$ be a smooth projective curve over a field $k$. Then the map $$\label{eqn:HSBr} {\rm Br}(C)\simeq H^2_{\text{\rm \'et}}(C,\mathbb{G}_m)\xymatrix@1@=15pt{\ar[r]&}H^1(k,{\rm Pic}(\bar C))$$ induced by the Hochschild--Serre spectral sequence maps a Brauer class $\alpha\in {\rm Br}(C)$ to the class of the ${\rm Pic}(C)$-torsor ${\rm Pic}_\alpha(C)$.* There is a version of this result for arbitrary schemes $X$, but the map ([\[eqn:HSBr\]](#eqn:HSBr){reference-type="ref" reference="eqn:HSBr"}) is then only defined on the kernel of ${\rm Br}(X)\xymatrix@1@=15pt{\ar[r]&}{\rm Br}(\bar X)$. *Proof.* The map ([\[eqn:HSBr\]](#eqn:HSBr){reference-type="ref" reference="eqn:HSBr"}) is part of the commutative diagram $$\xymatrix{{\rm Br}(C)\simeq H_{\text{\rm \'et}}^2(C,\mathbb{G}_m)\ar[r]& H^1(k,H_{\text{\rm \'et}}^1(\bar C,\mathbb{G}_m))\\ H^1(C,{\rm PGL}(n))\ar[u]\ar[r]&H^0(k,H^1(\bar C,{\rm PGL}(n))),\ar[u]&}$$ where the right vertical map is the boundary map induced by the short exact sequence $$\xymatrix{0\ar[r]&{\rm Pic}(\bar C)\simeq H^1(\bar C,\mathbb{G}_m)\ar[r]&H^1(\bar C,{\rm GL}(n))\ar[r]&H^1(\bar C,{\rm PGL}(n))\ar[r]&0}$$ after taking group cohomology.
Computing this boundary map explicitly shows that ${\mathcal A}\in H^1(C,{\rm PGL}(n))$ is mapped to the torsor which parametrises all $E\in {\rm Coh}(C,{\mathcal A})$ with $\mathcal{E\mkern-3mu nd}(E)\simeq{\mathcal A}$, which is nothing but ${\rm Pic}_\alpha(C)$. ◻ **Remark 17**. Let us elaborate on Remark [Remark 13](#rem:range){reference-type="ref" reference="rem:range"}. \(i\) For simplicity we first assume that there exists a line bundle $L_0$ of degree one on $C$ and so, in particular, ${\rm Pic}^1(C)(k)$ is not empty. Consider a class $\alpha=(p/r)\cdot L_0\in \mathbb{Q}/\mathbb{Z}\subset{\rm SBr}(C)$ with $p,r\in \mathbb{Z}$ coprime, so that $|\alpha|=r$, and pick a locally free sheaf $F$ on $C$ with ${\rm rk}(F)=r$ and $\det(F)=L^p_0$. Then ${\mathcal A}_F\coloneqq\mathcal{E\mkern-3mu nd}(F)$ represents the class $\alpha\in {\rm SBr}(C)$ and locally free sheaves $E\in {\rm Coh}(C,{\mathcal A}_F)$ with ${\rm ch}_{{\mathcal A}_F}(E)=(1,d)$ are all of the form $E\simeq F\otimes L$ for some line bundle $L$. Since $(1/r)\deg(F\otimes L)=(p/r)+\deg(L)$, this shows that $d\equiv(p/r)$ modulo $\mathbb{Z}$, i.e. $\bar d=\alpha\in \mathbb{Q}/\mathbb{Z}$. In other words, up to isomorphisms induced by multiplication with line bundles there exists only one non-empty twisted Picard variety ${\rm Pic}_\alpha^\alpha(C)$ (admittedly, a somewhat confusing notation) which is furthermore non-naturally isomorphic to ${\rm Pic}^0(C)$. \(ii\) Let us now consider the case that the minimal positive degree of a line bundle $L_0$ on $C$ is $m$. Similar arguments as above show the following: For a given $\alpha =(p/r)\cdot L_0\in{\rm NS}(C)\otimes\mathbb{Q}/\mathbb{Z}\subset{\rm SBr}(C)$, there exist, up to tensor products with line bundles, at most $m$ twisted Picard varieties ${\rm Pic}_\alpha^{d +i}(C)$, $i=0,\ldots,m-1$, where $d=(m\cdot p/r)$, all torsors for ${\rm Pic}^0(C)$. \(iii\) The two situations considered above will later be mixed as follows. 
We will consider curves $C\subset S$ in a complex projective K3 surface. In this case, there clearly exists a line bundle of degree one on each individual $C$, so that we can consider $\alpha=(p/r)\cdot L_0\in {\rm SBr}(C)$ with $\deg(L_0)=1$. However, in order to let $C$ vary in its linear system, we only allow twists by line bundles $L$ of degree $m$, where $m$ is determined by $({\rm NS}(S).[C])=m\cdot\mathbb{Z}$. Thus, one considers ${\rm Pic}_\alpha^{d+i}(C)$ with $d=(p/r)$ and $i=0,\ldots,m-1$. ## Moduli on surfaces {#sec:ModSurf} Let us now turn to sheaves on K3 surfaces. Again, the discussion should be compared to the work of Lieblich e.g. in [@LiebPhD Sec. 5.1]. In particular, he already introduced twisted Picard schemes for fibred surfaces. Note that in our setting, the curves are not necessarily the fibres of a morphism but elements in a linear system. According to Definition--Proposition [Definition--Proposition 10](#defprop:Chern){reference-type="ref" reference="defprop:Chern"}, the twisted Chern character ${\rm ch}_\alpha(E)$ is well defined, i.e. ${\rm ch}_{\mathcal A}(E)={\rm ch}_{{\mathcal A}'}(E\otimes F)$ for ${\mathcal A}'={\mathcal A}\otimes\mathcal{E\mkern-3mu nd}(F)$ with $\det(F)\simeq{\mathcal O}_S$. Furthermore, the equivalence ${\rm Coh}(S,{\mathcal A})\simeq{\rm Coh}(S,{\mathcal A}')$ preserves stability with respect to a polarisation ${\mathcal O}(1)$ which we will suppress in the notation.[^1] In the case of K3 surfaces, it is more convenient to work with the (twisted) Mukai vector $v_\alpha(E)={\rm ch}_\alpha(E)\cdot{\rm td}(S)^{1/2}$, which is also well defined. Hence, by [@HS; @Simpson; @Yosh] the moduli space $M_\alpha(v)$ of semi-stable $\alpha$-twisted sheaves with twisted Mukai vector $v_\alpha(E)=v$ is well defined as long as $\alpha\in{\rm SBr}(S)$ is fixed as a class in the special Brauer group. **Remark 18**. Once again, there should be a way to phrase everything in terms of Lieblich's moduli spaces of sheaves on $\mu_n$-gerbes, cf.
[@LiebDuke; @LiebComp]. In particular, if a class $\alpha\in {\rm SBr}(X)$ is represented by a $\mu_n$-gerbe ${\mathcal M}$, one should be able to compare moduli spaces of sheaves on ${\mathcal M}$ with a certain moduli space of sheaves on the $\mu_{nk}$-gerbe naturally associated with ${\mathcal M}$ via the inclusion $\mu_n\subset \mu_{nk}$. However, comparing Chern characters, and so Hilbert polynomials, and stability is tricky in general, see Remark [Remark 11](#rem:compLieb){reference-type="ref" reference="rem:compLieb"} and [@LiebDuke Lem. 2.3.2.8]. We will be mainly interested in the case of sheaves $E$ on $S$ supported on curves $C\subset S$ in a (generically smooth) linear system $|h|$, in which case there is no difference between the Mukai vector $v(E)$ and the Chern character ${\rm ch}(E)$. More precisely, the Mukai vector is in this case of the form $v(E)=(0,r\cdot h,s)$, where $s=\chi(E)$. If $E$ is a line bundle on $C$, then $r=1$. In general, if $C$ is integral, $r$ is the rank of $E$ as a sheaf on $C$. We can now twist the situation with respect to a class $\alpha\in{\rm SBr}(S)$ in the special Brauer group. Let us spell this out in detail. Choosing an Azumaya algebra ${\mathcal A}$ representing $\alpha$, we consider all sheaves $E\in {\rm Coh}(S,\alpha)={\rm Coh}(S,{\mathcal A})$ with $v_\alpha(E)=v_{\mathcal A}(E)={\rm ch}_{\mathcal A}(E)=(0,r\cdot h,s)$, where $s=\chi(E)$. **Definition 19**. Consider a K3 surface $S$ with a generically smooth complete linear system ${\mathcal C}\xymatrix@1@=15pt{\ar[r]&}|h|$ and a special Brauer class $\alpha\in {\rm SBr}(S)$. Then we denote the moduli space $M_\alpha(0, h,s)$ of all semi-stable sheaves $E\in {\rm Coh}(S,\alpha)={\rm Coh}(S,{\mathcal A})$ with $v_\alpha(E)=v_{\mathcal A}(E)=(0,h,s)$ by $$\overline{{\rm Pic}}_\alpha^d({\mathcal C}/|h|)\coloneqq M_\alpha(0,h,s)$$ and call it the *compactified twisted relative Picard variety*.
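The identification $v(E)={\rm ch}(E)$ for sheaves supported on curves, used above, is a quick direct check. The following verification only uses the standard facts ${\rm td}(S)=(1,0,2)$ and ${\rm td}(S)^{1/2}=(1,0,1)$ for a K3 surface $S$, with the last entry written in multiples of the point class:

```latex
% For a sheaf E on a K3 surface S with rk(E) = 0:
v(E) = {\rm ch}(E)\cdot{\rm td}(S)^{1/2}
     = \bigl(0,\,c_1(E),\,{\rm ch}_2(E)\bigr)\cdot(1,0,1)
     = \bigl(0,\,c_1(E),\,{\rm ch}_2(E)\bigr)
     = {\rm ch}(E),
\qquad
\chi(E) = \int_S {\rm ch}(E)\cdot{\rm td}(S)
        = {\rm ch}_2(E) + 2\,{\rm rk}(E)
        = {\rm ch}_2(E).
```

Thus, for $v(E)=(0,r\cdot h,s)$ one indeed has $s={\rm ch}_2(E)=\chi(E)$.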
We will abuse the notation slightly and denote by $${\rm Pic}_\alpha^d({\mathcal C}/|h|)\subset \overline{{\rm Pic}}_\alpha^d({\mathcal C}/|h|)$$ the open subset $\pi^{-1}(|h|_{\text{sm}})$ of twisted sheaves concentrated on smooth curves $C\in |h|$. Here, $$\label{eqn:piproj} \pi\colon\overline{\rm Pic}_\alpha^d({\mathcal C}/|h|)\xymatrix@1@=15pt{\ar[r]&}|h|$$ is the natural projection. Note that stability depends on the choice of a polarisation and so $\overline{{\rm Pic}}_\alpha^d({\mathcal C}/|h|)$ depends on it as well. However, the open part ${\rm Pic}_\alpha^d({\mathcal C}/|h|)$ does not, and as we are only interested in these moduli spaces up to birational isomorphism (over the linear system $|h|$), we can safely ignore the polarisation. In fact, in our discussion, often only the scheme-theoretic generic fibre ${\rm Pic}_{\alpha|_{{\mathcal C}_\eta}}^d({\mathcal C}_\eta)$ of ([\[eqn:piproj\]](#eqn:piproj){reference-type="ref" reference="eqn:piproj"}) matters. **Remark 20**. The fibre $\pi^{-1}(C)$ of ([\[eqn:piproj\]](#eqn:piproj){reference-type="ref" reference="eqn:piproj"}) over a smooth curve $C\in |h|$ consists of all (stable) locally free ${\mathcal A}|_C$-sheaves of rank $d{({\mathcal A})}$ and degree $d\cdot d{({\mathcal A})}$. Thus, it is nothing but ${\rm Pic}^d_{\alpha|_C}(C)$ as introduced in the previous section. In particular, if not empty, the fibre is naturally a torsor for ${\rm Pic}^0(C)$. More globally, if not empty, ${\rm Pic}_\alpha^d({\mathcal C}/|h|)\xymatrix@1@=15pt{\ar[r]&}|h|_{\text{sm}}$ is a torsor for the abelian group scheme ${\rm Pic}^0({\mathcal C}/|h|_{\text{sm}})\xymatrix@1@=15pt{\ar[r]&}|h|_{\text{sm}}$. Note that, according to Remark [Remark 14](#rem:Brauertorsor){reference-type="ref" reference="rem:Brauertorsor"}, if the fibre is non-empty and $d\in \mathbb{Z}$ is an integer, then $\alpha|_C=1\in {\rm SBr}(C)$.
Tensor product with a line bundle $L$ on $S$ with $(L.h)=m$ defines an isomorphism $$\overline{\rm Pic}^d_\alpha({\mathcal C}/|h|)\xymatrix@1@=15pt{\ar[r]^-\sim&}\overline{\rm Pic}^{d+m}_\alpha({\mathcal C}/|h|) ~\text{ and }~ {\rm Pic}^d_\alpha({\mathcal C}/|h|)\xymatrix@1@=15pt{\ar[r]^-\sim&}{\rm Pic}^{d+m}_\alpha({\mathcal C}/|h|).$$ As in the case of curves, $s$ and $d$ need not be integers. But note that if for a fixed class $\alpha\in{\rm SBr}(S)$ and two $d,d'\in \mathbb{Q}$ the two relative twisted Picard varieties ${\rm Pic}^d_\alpha({\mathcal C}/|h|)$ and ${\rm Pic}_\alpha^{d'}({\mathcal C}/|h|)$ are both not empty, then $d'=d+i$ for some integer $i$, see Remarks [Remark 13](#rem:range){reference-type="ref" reference="rem:range"} & [Remark 17](#rem:expand){reference-type="ref" reference="rem:expand"}. **Remark 21**. Using Remark [Remark 17](#rem:expand){reference-type="ref" reference="rem:expand"}, we find that up to tensoring with line bundles on $S$ there are only $m$ twisted relative Picard schemes ${\rm Pic}_{\alpha}^{d+i}({\mathcal C}/|h|)$ with $i=0,\ldots,m-1$. Here, as before, $m$ satisfies $m\cdot \mathbb{Z}=({\rm NS}(S).h)$, i.e. it is the divisibility of $h$ as an element of the lattice ${\rm NS}(S)$, and the rational number $d=p/r$ is determined by writing $\alpha|_C=(p/r)\cdot L_0$ for some line bundle $L_0$ of degree one on $C$ and with coprime $p$ and $r$. In particular, unless $d=0$, there is no preferred choice for the degree $d+i$ that would work well with the group structure of ${\rm SBr}(S)$. This observation will lead to the introduction of the restricted special Brauer group in the next section. Let us conclude this section by explaining a relative version of Remark [Remark 14](#rem:Brauertorsor){reference-type="ref" reference="rem:Brauertorsor"}. **Remark 22**.
If instead of a class in ${\rm SBr}(S)$ we only fix a Brauer class $\alpha\in {\rm Br}(S)$, then we define $$\overline{\rm Pic}_\alpha({\mathcal C}/|h|)\xymatrix@1@=15pt{\ar[r]&}|h|$$ as the moduli space of all (semi-)stable sheaves $E\in {\rm Coh}(S,{\mathcal A})$ with Mukai vector ${\rm ch}_{\mathcal A}(E)=(0,h,\ast)$. This is a countable disjoint union of projective schemes over $|h|$. The scheme-theoretic generic fibre is ${\rm Pic}_{\alpha|_{{\mathcal C}_\eta}}({\mathcal C}_\eta)$, where ${\mathcal C}_\eta$ is the generic fibre of ${\mathcal C}\xymatrix@1@=15pt{\ar[r]&}|h|$. Restricting to smooth curves in $|h|$, the scheme ${\rm Pic}_\alpha({\mathcal C}/|h|_{\text{sm}})\xymatrix@1@=15pt{\ar[r]&}|h|_{\text{sm}}$ is a torsor for the countable union of projective group schemes ${\rm Pic}({\mathcal C}/|h|_{\text{sm}})=\bigsqcup{\rm Pic}^d({\mathcal C}/|h|_{\text{sm}}) \xymatrix@1@=15pt{\ar[r]&}|h|_{\text{sm}}$. # The restricted special Brauer group From the perspective of moduli spaces of twisted sheaves (or, rather, of modules over Azumaya algebras) on curves contained in K3 surfaces, not all classes $\alpha\in {\rm SBr}(S)$ are relevant. Only classes in a drastically smaller group naturally give rise to non-empty moduli spaces, cf. Remarks [Remark 14](#rem:Brauertorsor){reference-type="ref" reference="rem:Brauertorsor"} & [Remark 21](#rem:nodist){reference-type="ref" reference="rem:nodist"}. This leads to the notion of the restricted special Brauer group. ## Restricted Brauer group For a smooth integral curve $C\subset S$ in a K3 surface $S$, restriction defines an exact sequence $$\xymatrix{0\ar[r]&[C]^\perp\ar[r]&{\rm NS}(S)\ar[r]& {\rm NS}(C)\ar[r]& \mathbb{Z}/m\cdot \mathbb{Z}\ar[r]& 0,}$$ where $m$ is the divisibility of $[C]$ as a class in ${\rm NS}(S)$, i.e. $m\cdot \mathbb{Z}=({\rm NS}(S).[C])$.
Tensoring with $\mathbb{Q}/\mathbb{Z}$ gives $$\label{eqn:DefrC} \xymatrix@R=18pt{&[C]^\perp\otimes\mathbb{Q}/\mathbb{Z}\ar@{^(->}[d]&& &\\ 0\ar[r]&\ker(r_C)\ar@{->>}[d]\ar[r]&{\rm NS}(S)\otimes\mathbb{Q}/\mathbb{Z}\ar[r]^{r_C}& {\rm NS}(C)\otimes\mathbb{Q}/\mathbb{Z}\ar[r]&0.\\ &\mathbb{Z}/m\cdot\mathbb{Z}&&& }$$ Observe that $\ker(r_C)$ only depends on the cohomology class $[C]\in {\rm NS}(S)$. In fact, we can define $\ker(r_h)$ for any $h\in {\rm NS}(S)$. Also note that passing to multiples increases the kernel, i.e. $\ker(r_h)\subset\ker(r_{2h})\subset\cdots\subset{\rm NS}(S)\otimes\mathbb{Q}/\mathbb{Z}$. **Lemma 23**. *For a K3 surface $S$ one has $$\bigcap \ker(r_C)=\bigcap \ker(r_h)\subset{\rm NS}(S)\otimes\mathbb{Q}/\mathbb{Z},$$ where the first intersection is over all integral curves $C\subset S$ and the second over all cohomology classes $h\in {\rm NS}(S)$. Furthermore, this subgroup is ${\rm NS}(S)^\ast/{\rm NS}(S)\subset{\rm NS}(S)\otimes\mathbb{Q}/\mathbb{Z}$.* *Proof.* The first equality follows from the fact that ${\rm NS}(S)$ is generated by classes of integral (smooth) curves. The equality with ${\rm NS}(S)^\ast/{\rm NS}(S)$ is checked by a direct computation: a class $(1/n)\cdot\lambda$ with $\lambda\in{\rm NS}(S)$ lies in $\ker(r_h)$ for all $h\in{\rm NS}(S)$ if and only if $(\lambda.h)\in n\cdot\mathbb{Z}$ for all $h$, i.e. if and only if $(1/n)\cdot\lambda\in {\rm NS}(S)^\ast$. ◻ Combining ([\[eqn:DefrC\]](#eqn:DefrC){reference-type="ref" reference="eqn:DefrC"}) with the natural restriction maps for (special) Brauer groups, one obtains the commutative diagram of short exact rows and columns $$\xymatrix{{\rm NS}(C)\otimes\mathbb{Q}/\mathbb{Z}\ar@{=}[r]&{\rm SBr}(C)&\\ {\rm NS}(S)\otimes\mathbb{Q}/\mathbb{Z}\ar@{->>}[u]^-{r_C}\ar@{^(->}[r]&{\rm SBr}(S)\ar@{->>}[u]^-{R_C}\ar@{->>}[r]&{\rm Br}(S)&\\ \ker(r_C)\ar@{^(->}[u]\ar@{^(->}[r]&\ker(R_C)\ar@{^(->}[u]\ar@{->>}[r]&{\rm Br}(S).\ar@{=}[u] }$$ A priori, the definition of $\ker(R_C)$ needs the curve $C\subset S$.
But, since the restriction map $R_C\colon {\rm SBr}(S)=H^2(S,\mathbb{Z})\otimes\mathbb{Q}/\mathbb{Z}\xymatrix@1@=18pt{\ar@{->>}[r]&}{\rm SBr}(C)=H^2(C,\mathbb{Z})\otimes\mathbb{Q}/\mathbb{Z}$ is induced by the cohomological restriction map $H^2(S,\mathbb{Z})\xymatrix@1@=15pt{\ar[r]&}H^2(C,\mathbb{Z})$, its kernel remains unchanged for $C$ moving in a fixed linear system. Note that the images of ${\rm NS}(S)$ and $H^2(S,\mathbb{Z})$ in ${\rm NS}(C)=H^2(C,\mathbb{Z})$ may be different, as they are determined by the divisibility of $[C]$ as a class in ${\rm NS}(S)$ resp. in $H^2(S,\mathbb{Z})$. **Definition 24**. The *restricted special Brauer group* is the intersection $${\rm SBr}^{\rm o}(S)\coloneqq \bigcap \ker(R_C)$$ over all smooth, integral curves $C\subset S$. **Remark 25**. Viewing ${\rm SBr}(S)$ as the group that parametrises all $\mu_n$-gerbes, $n\in \mathbb{Z}$, on $S$, see Remark [Remark 6](#rem:gerbes){reference-type="ref" reference="rem:gerbes"}, ${\rm SBr}^{\rm o}(S)\subset {\rm SBr}(S)$ is the subgroup of all $\mu_n$-gerbes that become trivial on all integral curves contained in $S$. **Proposition 26**. *There exists a natural isomorphism $${\rm SBr}^{\rm o}(S)\simeq T(S)\otimes\mathbb{Q}/\mathbb{Z}$$ and a natural short exact sequence, cf. ([\[eqn:ATTses\]](#eqn:ATTses){reference-type="ref" reference="eqn:ATTses"}), $$\label{eqn:sesASBro} \xymatrix@R=2pt@C=12pt{0\ar[r]&A(S)\ar[r]&{\rm SBr}^{\rm o}(S)\ar[r]&{\rm Br}(S)\ar[r]&0\\ &\simeq T'(S)/T(S)&\simeq T(S)\otimes\mathbb{Q}/\mathbb{Z}&\simeq T'(S)\otimes\mathbb{Q}/\mathbb{Z}.&}$$* *Proof.* Clearly, for every curve $C\subset S$ the transcendental lattice $T(S)\subset H^2(S,\mathbb{Z})$ is contained in the kernel of the restriction map $H^2(S,\mathbb{Z})\xymatrix@1@=15pt{\ar[r]&}H^2(C,\mathbb{Z})$ and, in fact, $T(S)$ is the set of all classes in $H^2(S,\mathbb{Z})$ vanishing on all curves $C\subset S$. From this, both assertions follow easily. ◻ **Remark 27**.
As $A(S)\simeq{\rm NS}(S)^\ast/{\rm NS}(S)$ and $T(S)^\ast\simeq T'(S)$, the exact sequence ([\[eqn:sesASBro\]](#eqn:sesASBro){reference-type="ref" reference="eqn:sesASBro"}) can also be viewed as induced by the longer exact sequence, cf. [@CTS (5.26), p. 142], $$\xymatrix@C=15pt{0\ar[r]&{\rm NS}(S)\ar[r]&{\rm NS}(S)^\ast\ar[r]& T(S)\otimes\mathbb{Q}/\mathbb{Z}\ar[r]&T(S)^\ast\otimes\mathbb{Q}/\mathbb{Z}\ar[r]&0.}$$ ## Restricted Brauer group via moduli spaces The restricted special Brauer group can be alternatively characterised by the non-emptiness of moduli spaces. **Proposition 28**. *For any generically smooth complete linear system ${\mathcal C}\xymatrix@1@=15pt{\ar[r]&}|h|$, any integer $s$, and any class $\alpha\in {\rm SBr}^{\rm o}(S)\subset{\rm SBr}(S)$, the moduli space ${\rm Pic}_\alpha^0({\mathcal C}/|h|)$ is non-empty.* *More precisely, ${\rm SBr}^{\rm o}(S)\subset{\rm SBr}(S)$ is the subgroup of all special Brauer classes such that ${\rm Pic}^0_\alpha({\mathcal C}/|h|)$ is not empty for all generically smooth complete linear systems ${\mathcal C}\xymatrix@1@=15pt{\ar[r]&}|h|$.* *Proof.* Take any smooth curve $C\in |h|$. By definition of the restricted special Brauer group, the class $\alpha|_C\in {\rm SBr}(C)$ is trivial. Hence, $\alpha|_C$ can be represented by ${\mathcal O}_C$ and any degree zero line bundle, e.g. ${\mathcal O}_C$ itself, defines a point in ${\rm Pic}^0_{\alpha|_C}(C)$. In particular, ${\rm Pic}_\alpha^0({\mathcal C}/|h|)\ne\emptyset$. Conversely, if ${\rm Pic}^0_\alpha({\mathcal C}/|h|)\ne\emptyset$ for a fixed class $\alpha\in{\rm SBr}(S)$ and all ${\mathcal C}\xymatrix@1@=15pt{\ar[r]&}|h|$, then according to Remark [Remark 14](#rem:Brauertorsor){reference-type="ref" reference="rem:Brauertorsor"}, $\alpha|_C\in {\rm SBr}(C)$ is trivial for all smooth curves $C\subset S$ and hence $\alpha\in {\rm SBr}^{\rm o}(S)$.
◻ It is important to emphasise that although $\alpha|_C\in {\rm SBr}(C)$ is trivial for a class $\alpha\in {\rm SBr}^{\rm o}(S)$ and any smooth curve $C\subset S$, the restriction $\alpha|_{{\mathcal C}_\eta}\in {\rm SBr}({\mathcal C}_\eta)$ to the generic fibre of the complete linear system ${\mathcal C}\xymatrix@1@=15pt{\ar[r]&}|h|$ is trivial if and only if $\alpha$ is of the form $(1/r)\cdot L\in {\rm NS}(S)\otimes\mathbb{Q}/\mathbb{Z}$ with $\deg(L|_C)=0$. So, in particular, the associated class $\bar \alpha\in{\rm Br}(S)$ would be trivial in this case. ## Generalised Tate--Šafarevič group {#sec:mainproof} It turns out that for a given complete linear system ${\mathcal C}\xymatrix@1@=15pt{\ar[r]&}|h|$ the restricted special Brauer group ${\rm SBr}^{\rm o}(S)$ does not parametrise the moduli spaces ${\rm Pic}^0_\alpha({\mathcal C}/|h|)$ efficiently. We will show that the map associating with a class $\alpha\in{\rm SBr}^{\rm o}(S)$ the moduli space ${\rm Pic}^0_\alpha({\mathcal C}/|h|)$ factorises via a certain quotient ${\rm SBr}^{\rm o}(S)\xymatrix@1@=18pt{\ar@{->>}[r]&}\Sha(S,h)$. For this purpose, we consider for any $h\in {\rm NS}(S)$ the natural map $$\zeta_h\colon A(S)= T(S)^\ast/T(S){\simeq{\rm NS}(S)^\ast/{\rm NS}(S)}\xymatrix@1@=15pt{\ar[r]&}\mathbb{Z}/({\rm NS}(S).h),~\varphi\xymatrix@1@=15pt{\ar@{|->}[r]&}\varphi(h)$$ and consider the cokernel of the composition $\ker(\zeta_h)\,\xymatrix@1@=15pt{\ar@{^(->}[r]&}A(S)\,\xymatrix@1@=15pt{\ar@{^(->}[r]&}{\rm SBr}^{\rm o}(S)$. This is used in the following definition, the motivation for which will become clear in the course of the next section. **Definition 29**. Let ${\mathcal C}\xymatrix@1@=15pt{\ar[r]&}|h|$ be a complete linear system on a K3 surface $S$.
Then the *Tate--Šafarevič group* of $(S,h)$ is defined as $$\Sha(S,h)\coloneqq {\rm SBr}^{\rm o}(S)/\ker(\zeta_h)$$ and we denote the projection by $$\label{eqn:SBroSha} \xi_h\colon {\rm SBr}^{\rm o}(S)\xymatrix@1@=18pt{\ar@{->>}[r]&}\Sha(S,h).$$ Thus, there exists a short exact sequence $$\label{eqn:Shases} \xymatrix@C=15pt{0\ar[r]&\mathbb{Z}/({\rm NS}(S).h)\ar[r]&\Sha(S,h)\ar[r]&{\rm Br}(S)\ar[r]&0,}$$ which together with ([\[eqn:sesASBro\]](#eqn:sesASBro){reference-type="ref" reference="eqn:sesASBro"}) is part of the commutative diagram $$\label{eqn:CDShah} \xymatrix{ 0\ar[r]&{\rm NS}(S)\otimes\mathbb{Q}/\mathbb{Z}\ar[r]&{\rm SBr}(S)\ar[r]&{\rm Br}(S)\ar[r]&0\\ 0\ar[r]&A(S)\ar@{^(->}[u]\ar@{->>}[d]_{\zeta_h}\ar[r]&{\rm SBr}^{\rm o}(S)\ar@{^(->}[u]\ar[r]\ar@{->>}[d]_{\xi_h}&{\rm Br}(S)\ar@{=}[u]\ar@{=}[d]\ar[r]&0\\ 0\ar[r]&\mathbb{Z}/({\rm NS}(S).h)\ar[r]&\Sha(S,h)\ar[r]&{\rm Br}(S)\ar[r]&0.}$$ **Example 30**. The natural projection is an isomorphism $\Sha(S,h)\xymatrix@1@=15pt{\ar[r]^-\sim&}{\rm Br}(S)$ if and only if $h\in{\rm NS}(S)$ has divisibility one, i.e. $({\rm NS}(S).h)=\mathbb{Z}$. **Proposition 31**. *Consider a complete, generically smooth linear system ${\mathcal C}\xymatrix@1@=15pt{\ar[r]&}|h|$. Then, for any two classes $\alpha_1,\alpha_2\in {\rm SBr}^{\rm o}(S)$ with $\xi_h(\alpha_1)=\xi_h(\alpha_2)$, the two moduli spaces ${\rm Pic}_{\alpha_1}^0({\mathcal C}/|h|)$ and ${\rm Pic}_{\alpha_2}^0({\mathcal C}/|h|)$ are naturally isomorphic. More precisely, their generic fibres are isomorphic torsors for ${\rm Pic}^0({\mathcal C}_\eta)$.* *Proof.* Write $\alpha_2=\alpha_1\cdot\gamma$ with $\gamma=(1/r)\cdot L\in {\rm NS}(S)\otimes\mathbb{Q}/\mathbb{Z}$. If $\gamma\in A(S)\subset {\rm SBr}^{\rm o}(S)$, then $r\mid(L.h)$, and under the stronger assumption $\gamma\in \ker(\zeta_h)\subset A(S)$, we find in addition a line bundle $L'\in {\rm NS}(S)$ such that $(L'.h)+(1/r)(L.h)=0$. Thus, we can assume that $\gamma$ is of the form $(1/r)\cdot L$ with $(L.h)=0$.
Let now $F$ be a locally free sheaf on $S$ with $\det(F)\simeq L$ and ${\rm rk}(F)=r$. Then $${\rm Pic}_{\alpha_1}^0({\mathcal C}_\eta)\xymatrix@1@=15pt{\ar[r]^-\sim&}{\rm Pic}_{\alpha_2}^0({\mathcal C}_\eta), ~E\xymatrix@1@=15pt{\ar@{|->}[r]&}E\otimes F$$ is an isomorphism of torsors for ${\rm Pic}^0({\mathcal C}_\eta)$. ◻ **Remark 32**. Consider a complete linear system ${\mathcal C}\xymatrix@1@=15pt{\ar[r]&}|h|$. Then, without introducing any ambiguity, the proposition allows us to speak of the moduli spaces ${\rm Pic}^0_\alpha({\mathcal C}/|h|)$ for any class $\alpha\in \Sha(S,h)$ in the Tate--Šafarevič group of $(S,h)$. **Remark 33**. By construction $\Sha(S,h)$ has a group structure which is compatible with taking twisted Picard varieties. More precisely, for two classes $\alpha_1,\alpha_2$ the generic fibre of ${\rm Pic}_{\alpha_1\alpha_2}^0({\mathcal C}/|h|)$, as a torsor for ${\rm Pic}^0({\mathcal C}_\eta)$, is the quotient $${\rm Pic}^0_{(\alpha_1\alpha_2)|_{{\mathcal C}_\eta}}({\mathcal C}_\eta)\simeq({\rm Pic}^0_{\alpha_1|_{{\mathcal C}_\eta}}({\mathcal C}_\eta)\times {\rm Pic}^0_{\alpha_2|_{{\mathcal C}_\eta}}({\mathcal C}_\eta))/{\rm Pic}^0({\mathcal C}_\eta),$$ where the isomorphism is given by tensor product. This remark allows one to strengthen the result of Proposition [Proposition 31](#prop:comptwo){reference-type="ref" reference="prop:comptwo"} as follows. **Corollary 34**. *The generic fibres ${\rm Pic}^0_{\alpha_1|_{{\mathcal C}_\eta}}({\mathcal C}_\eta)$ and ${\rm Pic}^0_{\alpha_2|_{{\mathcal C}_\eta}}({\mathcal C}_\eta)$ for two classes $\alpha_1,\alpha_2\in {\rm SBr}^{\rm o}(S)$ are isomorphic torsors for ${\rm Pic}^0({\mathcal C}_\eta)$ if and only if $\xi_h(\alpha_1)=\xi_h(\alpha_2)$.* *Proof.* To prove this it suffices to show that if ${\rm Pic}_{\alpha|_{{\mathcal C}_\eta}}^0({\mathcal C}_\eta)$ is the trivial ${\rm Pic}^0({\mathcal C}_\eta)$-torsor, then $\alpha\in \ker(\xi_h)$. 
The triviality of the torsor implies ${\rm Pic}^0_{\alpha|_{{\mathcal C}_\eta}}({\mathcal C}_\eta)\ne\emptyset$ and ${\rm Pic}^0_{\alpha|_C}(C)\ne\emptyset$ for the generic fibre ${\mathcal C}_\eta$ resp. the general curve $C\in |h|$. According to Remark [Remark 14](#rem:Brauertorsor){reference-type="ref" reference="rem:Brauertorsor"}, the latter implies that $\alpha|_C$ is trivial, which we knew already as $\alpha\in {\rm SBr}^{\rm o}(S)$. But the same remark also shows that $\alpha|_{{\mathcal C}_\eta}$ is contained in ${\rm NS}({\mathcal C}_\eta)\otimes\mathbb{Q}/\mathbb{Z}$, i.e. that the induced Brauer class $\bar\alpha|_{{\mathcal C}_\eta}\in {\rm Br}({\mathcal C}_\eta)$ is trivial. Since the restriction map ${\rm Br}(S)\,\xymatrix@1@=15pt{\ar@{^(->}[r]&}{\rm Br}({\mathcal C}_\eta)$ is injective, cf. [@CTS Thm. 3.5.5], this shows that $\alpha\in A(S)\subset{\rm NS}(S)\otimes\mathbb{Q}/\mathbb{Z}\subset{\rm SBr}^{\rm o}(S)$, i.e. $\alpha=(1/r)\cdot L$ or, equivalently, $\alpha$ is represented by $\mathcal{E\mkern-3mu nd}(F)$ where $F$ is locally free of rank $r$ and determinant $L$ with $r\mid (L.h)$. The existence of a rational point of ${\rm Pic}_{\alpha|_{{\mathcal C}_\eta}}^0({\mathcal C}_\eta)$ implies the existence of a line bundle ${\mathcal M}$ on ${\mathcal C}_\eta$ such that $\deg(F|_{{\mathcal C}_\eta}\otimes {\mathcal M})=0$ and hence of a line bundle $M$ on $S$ such that $(L.h)+r\cdot (M.h)=0$. This eventually proves $\alpha\in\ker(\xi_h)$. ◻ Combining the short exact sequence ([\[eqn:Shases\]](#eqn:Shases){reference-type="ref" reference="eqn:Shases"}) and Corollary [Corollary 34](#cor:injec){reference-type="ref" reference="cor:injec"} concludes the proof of Theorem [Theorem 1](#thm:1){reference-type="ref" reference="thm:1"}. ◻ ## Analytic Tate--Šafarevič group {#sec:Markman} Before continuing our discussion we make a digression on the analytic version of our construction and compare it to a construction of Markman [@Mark], cf. [@AbRo].
This will not be used in the rest of the paper. In our context, it seems natural to introduce the *analytic special Brauer group* as $${\rm SBr}^{\rm o}(S)^{\rm an}\coloneqq H^2(S,{\mathcal O}_S)/T(S).$$ It comes with a surjection onto the analytic Brauer group $${\rm SBr}^{\rm o}(S)^{\rm an}= H^2(S,{\mathcal O}_S)/T(S)\xymatrix@1@=18pt{\ar@{->>}[r]&}{\rm Br}(S)^{\rm an}=H^2(S,{\mathcal O}_S^\ast)$$ and naturally contains the (algebraic) special Brauer group as the subgroup of all torsion elements ${\rm SBr}^{\rm o}(S)\subset{\rm SBr}^{\rm o}(S)^{\rm an}$. We have the natural commutative diagram $$\xymatrix{0\ar[r]&A(S)\ar[r]\ar@{=}[d]&{\rm SBr}^{\rm o}(S)\simeq T(S)\otimes\mathbb{Q}/\mathbb{Z}\ar[r] \ar@{^(->}[d]&{\rm Br}(S)\simeq T(S)^\ast\otimes\mathbb{Q}/\mathbb{Z}\ar[r]\ar@{^(->}[d]&0\\ 0\ar[r]&A(S)\ar[r]&{\rm SBr}^{\rm o}(S)^{\rm an}\simeq H^2(S,{\mathcal O}_S)/T(S)\ar[r]&{\rm Br}(S)^{\rm an}=H^2(S,{\mathcal O}_S^\ast)\ar[r]&0.}$$ Also, analogously to ([\[eqn:SBroSha\]](#eqn:SBroSha){reference-type="ref" reference="eqn:SBroSha"}), one can define the analytic version $\Sha(S,h)^{\rm an}$ of $\Sha(S,h)$ as a quotient ${\rm SBr}^{\rm o}(S)^{\rm an}\xymatrix@1@=18pt{\ar@{->>}[r]&}\Sha(S,h)^{\rm an}$. Then $\Sha(S,h)\subset\Sha(S,h)^{\rm an}$ is the torsion subgroup. The corresponding commutative diagram is $$\xymatrix{0\ar[r]&\mathbb{Z}/({\rm NS}(S).h)\ar[r]\ar@{=}[d]&\Sha(S,h)\ar[r] \ar@{^(->}[d]&{\rm Br}(S)\ar[r]\ar@{^(->}[d]&0\\ 0\ar[r]&\mathbb{Z}/({\rm NS}(S).h)\ar[r]&\Sha(S,h)^{\rm an}\ar[r]&{\rm Br}(S)^{\rm an}\ar[r]&0.}$$ It is possible to introduce ${\rm Pic}_\alpha^0({\mathcal C}/|h|)$ for any class $\alpha\in {\rm SBr}^{\rm o}(S)^{\rm an}$ in the analytic special Brauer group, or for any $\alpha\in \Sha(S,h)^{\rm an}$. However, if $\alpha$ is not torsion, then it will be non-algebraic and, in particular, it is more difficult to talk about its generic fibre.
In other words, only a bimeromorphic equivalence class of a complex manifold together with a holomorphic projection to $|h|$ will be defined. Markman introduces a group called $\Sha^0$, see [@Mark (7.7)]. He identifies it with a certain finite quotient of ${\rm SBr}^{\rm o}(S)^{\rm an}=H^2(S,{\mathcal O}_S)/T(S)$ by enlarging the transcendental lattice $T(S)$ by all classes in $T(S)\otimes\mathbb{Q}$ that can be completed by elements in ${\rm NS}(S)\otimes\mathbb{Q}$ to integral classes in $H^2(S,\mathbb{Z})$ that are orthogonal to all irreducible components of all curves $C\in |h|$. In particular, if $S$ has Picard number one, then ${\rm SBr}^{\rm o}(S)^{\rm an}=\Sha^0$, but in general $\Sha^0$ depends on the choice of $h$ and approximates our analytic Tate--Šafarevič group $\Sha(S,h)^{\rm an}$. More precisely, the analytic version of the quotient ([\[eqn:SBroSha\]](#eqn:SBroSha){reference-type="ref" reference="eqn:SBroSha"}) factors through $\Sha^0$: $${\rm SBr}^{\rm o}(S)^{\rm an}\xymatrix@1@=18pt{\ar@{->>}[r]&}\Sha^0\xymatrix@1@=18pt{\ar@{->>}[r]&}\Sha(S,h)^{\rm an}.$$

## All twisted Picard varieties

Although we only consider classes in the restricted special Brauer group ${\rm SBr}^{\rm o}(S)\subset{\rm SBr}(S)$ and restrict to $d=0$, we still get all twisted Picard varieties of arbitrary degree on curves in $S$. This is the content of the next result, which is morally a consequence of the surjectivity ${\rm SBr}^{\rm o}(S)\xymatrix@1@=18pt{\ar@{->>}[r]&}{\rm Br}(S)$. **Proposition 35**. *Consider a generically smooth complete linear system ${\mathcal C}\xymatrix@1@=15pt{\ar[r]&}|h|$. Fix $d\in \mathbb{Q}$ and $\alpha\in {\rm SBr}(S)$ such that ${\rm Pic}_\alpha^d({\mathcal C}/|h|)$ is non-empty.
Then there exists a class $\alpha_0\in {\rm SBr}^{\rm o}(S)$ (or $\alpha_0\in\Sha(S,h)$) and a natural isomorphism of torsors $${\rm Pic}^0_{\alpha_0}({\mathcal C}/|h|)\simeq{\rm Pic}_\alpha^d({\mathcal C}/|h|).$$* *Proof.* The assertion is about the generic fibres ${\rm Pic}^0_{\alpha_0|_{{\mathcal C}_\eta}}({\mathcal C}_\eta)$ and ${\rm Pic}_{\alpha|_{{\mathcal C}_\eta}}^d({\mathcal C}_\eta)$ and to ease the notation, we will simply write ${\rm Pic}_\alpha^d$ instead of ${\rm Pic}_{\alpha|_{{\mathcal C}_\eta}}^d({\mathcal C}_\eta)$, etc. As a warmup, let us first discuss the special case that $\alpha=(1/r)\cdot L\in {\rm NS}(S)\otimes\mathbb{Q}/\mathbb{Z}\subset{\rm SBr}(S)$ and $d=(L.h)/r$. Then we let $\alpha_0$ be the trivial class and define $${\rm Pic}_{\alpha_0}^0={\rm Pic}^0\xymatrix@1@=15pt{\ar[r]^-\sim&}{\rm Pic}^d_\alpha,~E\xymatrix@1@=15pt{\ar@{|->}[r]&}E\otimes F.$$ Here, $F$ is a locally free sheaf on $S$ of rank $r$ and determinant $L$, and hence $\alpha=[\mathcal{E\mkern-3mu nd}(F)]$, and $E$ is any degree zero line bundle on a fibre $C\in |h|$, so that the tensor product with $F$ is actually the tensor product with $F|_C$ on $C$, cf. Remarks [Remark 14](#rem:Brauertorsor){reference-type="ref" reference="rem:Brauertorsor"} & [Remark 17](#rem:expand){reference-type="ref" reference="rem:expand"}. In other words, the ${\rm Pic}^0$-torsor ${\rm Pic}_{\alpha}^d$ is trivial as it contains the rational point $F|_{{\mathcal C}_\eta}$. This proves the result for $\alpha\in {\rm NS}(S)\otimes\mathbb{Q}/\mathbb{Z}$, but only for one particular $d$. To obtain all $d+i$, one has to allow non-trivial classes in $A(S)\subset{\rm SBr}^{\rm o}(S)$. To be precise, fix a line bundle $L_0$ on $S$ with $(L_0.h)\cdot \mathbb{Z}=({\rm NS}(S).h)$ and let $\alpha_0=(i/(L_0.h))\cdot L_0$. Pick a locally free sheaf $F_0$ with ${\rm rk}(F_0)=(L_0.h)$ and $\det(F_0)=L_0^i$. 
Then for $\alpha_0\coloneqq[\mathcal{E\mkern-3mu nd}(F_0)]$ one has ${\rm Pic}_{\alpha_0}^0\xymatrix@1@=15pt{\ar[r]^-\sim&}{\rm Pic}_\alpha^{d+i}$ via $E\xymatrix@1@=15pt{\ar@{|->}[r]&}E\otimes F_0$. For the general case, write any $\alpha\in {\rm SBr}(S)$ as a product $\alpha=\alpha_0\cdot\alpha_1$ with $\alpha_0\in {\rm SBr}^{\rm o}(S)$ and $\alpha_1\in {\rm NS}(S)\otimes \mathbb{Q}/\mathbb{Z}$. Once the decomposition is picked, we first deal with ${\rm Pic}^d_\alpha({\mathcal C}/|h|)$ for one choice of $d$ and then show how to modify the decomposition $\alpha=\alpha_0\cdot\alpha_1$ to $\alpha=(\alpha_0\cdot\gamma^{-1})\cdot(\gamma\cdot \alpha_1)$ by some $\gamma\in A(S)$ to obtain all $d+i$, cf. Remark [Remark 20](#rem:twistint){reference-type="ref" reference="rem:twistint"}. For the fixed choice of $\alpha_0=[{\mathcal A}_0]$ and $\alpha_1=[\mathcal{E\mkern-3mu nd}(F)]$, one has $${\rm Pic}^0_{\alpha_0}\xymatrix@1@=15pt{\ar[r]^-\sim&}{\rm Pic}_\alpha^d, ~E\xymatrix@1@=15pt{\ar@{|->}[r]&}E\otimes F$$ with $d=\mu(F|_C)$. Now let $\gamma$ be the class $(i/(L_0.h))\cdot L_0$ and choose $F_0$ as above. Then ${\rm Pic}_{\alpha_0\cdot\gamma^{-1}}^0\xymatrix@1@=15pt{\ar[r]^-\sim&} {\rm Pic}_\alpha^{d+i}$ via $E\xymatrix@1@=15pt{\ar@{|->}[r]&}E\otimes (F\otimes F_0)$. ◻ **Example 36**. In particular, all the untwisted relative Picard varieties ${\rm Pic}^d({\mathcal C}/|h|)$ are still accounted for, namely $${\rm Pic}^d({\mathcal C}/|h|)\simeq{\rm Pic}_{\bar d}^0({\mathcal C}/|h|).$$ Indeed, the class $\bar d\in \mathbb{Z}/({\rm NS}(S).h)$ is the image of $\alpha=(d/(L_0.h))\cdot L_0\in A(S)\subset {\rm NS}(S)\otimes\mathbb{Q}/\mathbb{Z}$, where $L_0\in{\rm NS}(S)$ satisfies $(L_0.h)\cdot\mathbb{Z}=({\rm NS}(S).h)$. Hence, ${\rm Pic}^0_{\bar d}({\mathcal C}/|h|)$ is a moduli space in ${\rm Coh}(S,\mathcal{E\mkern-3mu nd}(F))$ with $F$ any locally free sheaf of rank $r=(L_0.h)$ and determinant $L_0^{-d}$. 
To conclude, use the natural injections ${\rm Pic}^d(C)\,\xymatrix@1@=15pt{\ar@{^(->}[r]&}{\rm Pic}^0_{\bar d}({\mathcal C}/|h|)$, $L\xymatrix@1@=15pt{\ar@{|->}[r]&}F|_C\otimes L$, observing $\deg(F|_C\otimes L)=(L_0^{-d}.C)+d \cdot {\rm rk}(F)=0$.

## Digression on B-fields {#sec:Bfield2}

We briefly come back to Remark [Remark 8](#rem:Bfield1){reference-type="ref" reference="rem:Bfield1"}. Assume $\alpha\in {\rm Br}(S)\simeq H^2(S,{\mathcal O}_S^\ast)_{\rm tor}\simeq T'(S)\otimes \mathbb{Q}/\mathbb{Z}$ and pick a B-field lift of $\alpha$, i.e. a class $B\in H^2(S,\mathbb{Q})$ such that its image under the exponential map $H^2(S,\mathbb{Q})\xymatrix@1@=15pt{\ar[r]&}H^2(S,{\mathcal O}_S)\xymatrix@1@=15pt{\ar[r]&}H^2(S,{\mathcal O}_S^\ast)$, or, equivalently, under the projection $H^2(S,\mathbb{Q})\xymatrix@1@=18pt{\ar@{->>}[r]&}T'(S)\otimes\mathbb{Q}\xymatrix@1@=18pt{\ar@{->>}[r]&}T'(S)\otimes\mathbb{Q}/\mathbb{Z}$, gives back $\alpha$. Using the decomposition $H^2(S,\mathbb{Q})=({\rm NS}(S)\otimes\mathbb{Q})\oplus (T(S)\otimes\mathbb{Q})$, the class $B$ can also be projected onto a class in $T(S)\otimes \mathbb{Q}$ and then further onto a class $\tilde\alpha\in{\rm SBr}^{\rm o}(S)\simeq T(S)\otimes\mathbb{Q}/\mathbb{Z}$, which maps to $\alpha$ under ${\rm SBr}^{\rm o}(S)\xymatrix@1@=18pt{\ar@{->>}[r]&}{\rm Br}(S)$. In general, picking a B-field lift for a class $\alpha\in {\rm Br}(S)$ is strictly more information than is actually needed. We will now explain why, for most practical purposes, a lift to a class in ${\rm SBr}^{\rm o}(S)$ suffices. \(i\) Via the exponential map, the class $\alpha\in{\rm SBr}(S)$, or rather the corresponding class in $H^2(S,\mathbb{Z})\otimes\mathbb{Q}/\mathbb{Z}$, maps to a class $\alpha^{0,2}\in H^{0,2}(S)=H^2(S,{\mathcal O}_S)$.
This allows one to associate with a generator $\sigma\in H^{2,0}(S)=H^0(S,\omega_S)$ the class $\sigma_\alpha\coloneqq\sigma+\sigma\wedge \alpha^{0,2}\in H^2(S,\mathbb{C})\oplus H^4(S,\mathbb{C})$, so that we can define a Hodge structure $$\widetilde H(S,\alpha,\mathbb{Z})$$ of K3 type associated with $\alpha\in{\rm SBr}^{\rm o}(S)$ as follows: The underlying lattice is nothing but the extended Mukai lattice $\widetilde H(S,\mathbb{Z})$ and its $(2,0)$-part is spanned by $\sigma_\alpha$. All other parts of the Hodge structure are then determined by the usual orthogonality requirements. \(ii\) Representing the class $\alpha\in{\rm SBr}(S)$ in the special Brauer group by an Azumaya algebra ${\mathcal A}$ allows us to realise the abelian category ${\rm Coh}(S,\alpha)$ as ${\rm Coh}(S,{\mathcal A})$. Then, for any $E\in {\rm Coh}(S,\alpha)$ we consider the Mukai vector $v_\alpha(E)={\rm ch}_\alpha(E)\cdot\sqrt{{\rm td}(S)}$, well defined and independent of the choice of ${\mathcal A}$, see Definition--Proposition [Definition--Proposition 10](#defprop:Chern){reference-type="ref" reference="defprop:Chern"}. But the Mukai vector is of type $(1,1)$ with respect to the untwisted Hodge structure and in general not with respect to the twisted Hodge structure. However, things are better in our situation, as indeed $$v_\alpha(E)\in\widetilde H^{1,1}(S,\alpha,\mathbb{Q})$$ for sheaves $E\in {\rm Coh}(S,\alpha)$ supported on curves in $S$.

# Elliptic and Lagrangian fibrations

We specialise to the case of elliptic K3 surfaces and link our theory to the classical Ogg--Tate--Šafarevič theory, see [@HuyK3 Ch. 11] for comments and references. We wish to point out that moduli spaces of twisted line bundles on fibred surfaces have been studied by Lieblich in his thesis [@LiebPhD Ch. 5].
For an elliptic K3 surface $S_0\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$ with a section, the Tate--Šafarevič group $\Sha(S_0/\mathbb{P}^1)$ parametrises elliptic K3 surfaces $S\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$ together with an isomorphism $S_0\simeq\overline{\rm Pic}^0(S/\mathbb{P}^1)$ over $\mathbb{P}^1$. The surface $S_0$ is thus a moduli space of stable sheaves on $S$. However, usually it is only a coarse moduli space, i.e. a universal sheaf on $S_0\times_{\mathbb{P}^1}S$ exists only as a twisted sheaf with respect to a Brauer class $\beta\in {\rm Br}(S_0)$. Turning this around shows that the twists $S$ parametrised by $\Sha(S_0/\mathbb{P}^1)$ can be viewed as moduli spaces of twisted sheaves of rank one on the fibres of $S_0\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$. One would like to write this last observation as ${\rm Pic}^0_\beta(S_0)\simeq S$, although $\beta$ seems to occur naturally as a class in ${\rm Br}(S_0)$ and not as a class in ${\rm SBr}^{\rm o}(S_0)$. The reason why this is possible is that the natural map $\Sha(S_0,f)\xymatrix@1@=18pt{\ar@{->>}[r]&}{\rm Br}(S_0)$ is in fact an isomorphism and $\Sha(S_0,f)\simeq\Sha(S_0/\mathbb{P}^1)$. This shall be explained first.

## Comparison with the classical Tate--Šafarevič group {#sec:classTS}

We begin with an elliptic K3 surface $S\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$ without a section. We denote by $f$ the class of a fibre and let $m$ be such that $m\cdot\mathbb{Z}=({\rm NS}(S).f)$. Then $S_0\coloneqq\overline{\rm Pic}^0(S/\mathbb{P}^1)=M(0,f,0)$ is the relative Jacobian of $S$, which is an elliptic K3 surface $S_0\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$ with a section. Denoting by $f$ also the class of a fibre of $S_0\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$, the existence of a section implies $({\rm NS}(S_0).f)=\mathbb{Z}$.
Then, by virtue of Example [Example 30](#exa:divone){reference-type="ref" reference="exa:divone"}, the natural projection in ([\[eqn:CDShah\]](#eqn:CDShah){reference-type="ref" reference="eqn:CDShah"}) is an isomorphism $$\label{eqn:ShaBr1} \Sha(S_0,f)\xymatrix@1@=15pt{\ar[r]^-\sim&}{\rm Br}(S_0).$$ On the other hand, viewing ${\rm Pic}_\beta^0(S_0/\mathbb{P}^1)$ as a torsor for ${\rm Pic}^0(S_0/\mathbb{P}^1)$, which compactifies to $S_0$, defines a group homomorphism $\Sha(S_0,f)\xymatrix@1@=15pt{\ar[r]&}\Sha(S_0/\mathbb{P}^1)$, which is in fact an isomorphism. **Lemma 37**. *Mapping $\beta\in \Sha(S_0,f)$ to ${\rm Pic}^0_\beta(S_0/\mathbb{P}^1)$ viewed as a torsor for $S_0$ defines an isomorphism of groups $$\label{eqn:ShaSha} {\rm Br}(S_0)\simeq\Sha(S_0,f)\xymatrix@1@=15pt{\ar[r]^-\sim&}\Sha(S_0/\mathbb{P}^1).$$* *Proof.* Indeed, as recalled above, we know that every twist $S$ is a moduli space of twisted sheaves of rank one on the fibres of $S_0\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$. Due to the existence of the section, ${\rm Pic}_\beta^d(S_0/\mathbb{P}^1)\simeq{\rm Pic}^0_\beta(S_0/\mathbb{P}^1)$ for all $d$, so the map is surjective. The injectivity is a consequence of Corollary [Corollary 34](#cor:injec){reference-type="ref" reference="cor:injec"}. ◻ Combining ([\[eqn:ShaBr1\]](#eqn:ShaBr1){reference-type="ref" reference="eqn:ShaBr1"}) and ([\[eqn:ShaSha\]](#eqn:ShaSha){reference-type="ref" reference="eqn:ShaSha"}), one recovers directly the well-known isomorphism $$\label{eqn:ShaBrnew} \Sha(S_0/\mathbb{P}^1)\simeq{\rm Br}(S_0).$$ However, at this point it is not evident that our isomorphism coincides with the classical one of Artin and Tate [@BrauerII §4], which uses the Lefschetz spectral sequence, cf. Remark [Remark 43](#rem:ShaBrnew){reference-type="ref" reference="rem:ShaBrnew"}. The classical Tate--Šafarevič group $\Sha(S_0/\mathbb{P}^1)$ is only defined for elliptic K3 surfaces with a section.
But the description of it as $\Sha(S_0,f)$ allows us to speak of the Tate--Šafarevič group without assuming the existence of a section. **Definition 38**. Assume $S\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$ is an elliptic K3 surface (with or without a section). If $f$ denotes the class of a fibre, then we call $\Sha(S,f)$ the *Tate--Šafarevič group* of $S\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$.

## Tate--Šafarevič group without a section

As a next step we will explain the link between this new Tate--Šafarevič group $\Sha(S,f)$ of an elliptic K3 surface $S\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$ and the classical Tate--Šafarevič group $\Sha(S_0,f)\simeq\Sha(S_0/\mathbb{P}^1)$ of its Jacobian fibration. We begin by recalling the short exact sequence $$\label{eqn:ShaBr}\xymatrix{0\ar[r]&\langle\beta\rangle\ar[r]& {\rm Br}(S_0)\ar[r]&{\rm Br}(S)\ar[r]&0,}$$ where $\beta\in {\rm Br}(S_0)\simeq\Sha(S_0/\mathbb{P}^1)\simeq\Sha(S_0,f)$ is the class corresponding to $S\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$. The result goes back to Artin and Tate, cf. [@BrauerII §4]. First consider the scheme-theoretic generic fibre $E$ of $S_0\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$, which is a smooth curve of genus one over $\mathbb{C}(t)$. Its Jacobian $E_0\coloneqq{\rm Pic}^0(E)$ is the identity component of ${\rm Pic}(E)$. More precisely, there exists a short exact sequence $\xymatrix{0\ar[r]&E_0={\rm Pic}^0(E)\ar[r]&{\rm Pic}(E)\ar[r]&\mathbb{Z}\ar[r]&0.}$ The relative version of this is a short exact sequence $$\xymatrix{0\ar[r]&{\mathcal S}_0\ar[r]&R^1\pi_\ast\mathbb{G}_m/_{\text{vert}}\ar[r]&\mathbb{Z}\ar[r]&0,}$$ cf. [@HuyK3 Rem. 11.5.9]. Here, $\pi\colon S\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$ is the projection and ${\mathcal S}_0$ is the sheaf of étale (or analytic) local sections of the Jacobian fibration $S_0={\rm Pic}^0(S/\mathbb{P}^1)\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$.
Taking cohomology leads to the exact sequence $$\label{eqn:ArtTate} \xymatrix{H^0(\mathbb{P}^1,R^1\pi_\ast\mathbb{G}_m)\ar[r]&\mathbb{Z}\ar[r]&H^1(\mathbb{P}^1,{\mathcal S}_0)\ar[r]& H^1(\mathbb{P}^1,R^1\pi_\ast\mathbb{G}_m)\ar[r]&0.}$$ The Lefschetz spectral sequence induces isomorphisms ${\rm Pic}(S)\simeq H^0(\mathbb{P}^1,R^1\pi_\ast\mathbb{G}_m)$ and ${\rm Br}(S)\simeq H^2(S,\mathbb{G}_m)\simeq H^1(\mathbb{P}^1,R^1\pi_\ast\mathbb{G}_m)$. Also, the first map is nothing but $L\xymatrix@1@=15pt{\ar@{|->}[r]&}(L.f)$ and, therefore, its cokernel is $\mathbb{Z}/({\rm NS}(S).f)$. Applied to the Jacobian fibration $S_0\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$ itself, ([\[eqn:ArtTate\]](#eqn:ArtTate){reference-type="ref" reference="eqn:ArtTate"}) induces an isomorphism $H^1(\mathbb{P}^1,{\mathcal S}_0)\simeq{\rm Br}(S_0)$. In general, one obtains a short exact sequence $$\xymatrix{0\ar[r]&\mathbb{Z}/({\rm NS}(S).f)\ar[r]&{\rm Br}(S_0)\ar[r]&{\rm Br}(S)\ar[r]&0}$$ and the kernel can indeed be shown to be just the subgroup $\langle\beta\rangle$. For example, the order $|\beta|$ of the subgroup $\langle\beta\rangle$ equals the minimal fibre degree of any line bundle on $S$, i.e. $|\beta|\cdot \mathbb{Z}=({\rm NS}(S).f)$. **Remark 39**. (i) Over the generic point of $\mathbb{P}^1$, the surjection ${\rm Br}(S_0)\xymatrix@1@=18pt{\ar@{->>}[r]&}{\rm Br}(S)$ viewed as the map $H^1(\mathbb{P}^1,{\mathcal S}_0)\xymatrix@1@=18pt{\ar@{->>}[r]&}H^1(\mathbb{P}^1,R^1\pi_\ast\mathbb{G}_m)$ corresponds to $$H^1(\mathbb{C}(t),\bar E_0)\xymatrix@1@=15pt{\ar[r]&}H^1(\mathbb{C}(t),{\rm Pic}(\bar E))$$ which maps a torsor $E'$ for $E_0$ to the torsor $({\rm Pic}(E)\times E')/E_0$ for ${\rm Pic}(E)$.
\(ii\) The map $${\rm Br}(S)\xymatrix@1@=15pt{\ar[r]^-\sim&}H^1(\mathbb{P}^1,R^1\pi_\ast\mathbb{G}_m)\,\xymatrix@1@=15pt{\ar@{^(->}[r]&}H^1(\mathbb{C}(t),{\rm Pic}(\bar E))$$ is geometrically realised by mapping a Brauer class $\alpha\in {\rm Br}(S)$ to the ${\rm Pic}(E)$-torsor ${\rm Pic}_{\alpha|_E}(E)$, see Remarks [Remark 14](#rem:Brauertorsor){reference-type="ref" reference="rem:Brauertorsor"} & [Remark 22](#rem:Brauertorsor2){reference-type="ref" reference="rem:Brauertorsor2"}, and Proposition [Proposition 16](#prop:BrTorsCurves){reference-type="ref" reference="prop:BrTorsCurves"}. Recall that the Leray spectral sequence for the projection $S\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^1$ restricts to the Hochschild--Serre spectral sequence on the generic fibre. **Remark 40**. An alternative way of constructing ([\[eqn:ShaBr\]](#eqn:ShaBr){reference-type="ref" reference="eqn:ShaBr"}) uses Hodge theory. Indeed, the universal $(\beta,1)$-twisted sheaf on $S_0\times_{\mathbb{P}^1}S$ induces a Hodge isometry $\widetilde H(S,\mathbb{Z})\xymatrix@1@=15pt{\ar[r]^-\sim&}\widetilde H(S_0,\beta,\mathbb{Z})$ which restricts to the inclusion $T(S)\xymatrix@1@=15pt{\ar[r]^-\sim&}T(S_0,\beta)=\ker(\beta\colon T(S_0)\xymatrix@1@=15pt{\ar[r]&}\mathbb{Q}/\mathbb{Z})$, cf. Section [4.6](#sec:Bfield2){reference-type="ref" reference="sec:Bfield2"}, [@HuyK3 Ch. 14.4.1] or [@HS1 §4]. Applying ${\rm Hom}(-,\mathbb{Q}/\mathbb{Z})$ and using ${\rm Br}(S)\simeq{\rm Hom}(T(S),\mathbb{Q}/\mathbb{Z})$ leads to an exact sequence $\xymatrix{0\ar[r]&\langle\beta\rangle\ar[r]&{\rm Br}(S_0)\ar[r]&{\rm Br}(S)\ar[r]&0.}$ This is indeed nothing but ([\[eqn:ShaBr\]](#eqn:ShaBr){reference-type="ref" reference="eqn:ShaBr"}), but as we will not use this fact here, we do not give a proof. Comparing ([\[eqn:ShaBr\]](#eqn:ShaBr){reference-type="ref" reference="eqn:ShaBr"}) with the bottom sequence in ([\[eqn:CDShah\]](#eqn:CDShah){reference-type="ref" reference="eqn:CDShah"}) suggests the next result.
**Proposition 41**. *Mapping $\alpha\in\Sha(S,f)$ to ${\rm Pic}_\alpha^0(S/\mathbb{P}^1)$, viewed as a torsor for ${\rm Pic}^0(S/\mathbb{P}^1)\subset S_0$, defines an isomorphism $\Sha(S,f)\xymatrix@1@=15pt{\ar[r]^-\sim&}\Sha(S_0/\mathbb{P}^1)$ which can be completed to a commutative diagram $$\xymatrix{0\ar[r]&\mathbb{Z}/m\cdot\mathbb{Z}\ar[r]\ar[d]_\simeq&\Sha(S,f)\ar[d]_\simeq\ar[r]&{\rm Br}(S)\ar@{=}[d]\ar[r]&0\\ 0\ar[r]&\langle\beta\rangle\ar[r]&\Sha(S_0/\mathbb{P}^1)\ar[r]&{\rm Br}(S)\ar[r]&0.}$$ In particular, the generator of the subgroup $\mathbb{Z}/m\cdot\mathbb{Z}\subset\Sha(S,f)$ is mapped to the class $\beta\in \Sha(S_0/\mathbb{P}^1)$ corresponding to $S$.* *Proof.* The injectivity is again a consequence of Corollary [Corollary 34](#cor:injec){reference-type="ref" reference="cor:injec"} and the surjectivity follows from both groups being divisible of the same rank. Next we show that $\bar 1\in \mathbb{Z}/m\cdot \mathbb{Z}$ is mapped to $\beta$. This is a consequence of the more general observation made in Example [Example 36](#ex:genexa){reference-type="ref" reference="ex:genexa"}. In this particular case it says that for the class $\alpha\in {\rm SBr}(S)$ represented by $\mathcal{E\mkern-3mu nd}(F)$, where $F$ is locally free of rank $m$ with its determinant satisfying $(\det(F).f)=m$, there exists an isomorphism ${\rm Pic}^0_\alpha(S/\mathbb{P}^1)\simeq{\rm Pic}^1(S/\mathbb{P}^1)\subset S$ . To prove the commutativity of the diagram on the right, it suffices to control the generic fibre. 
According to Remark [Remark 39](#rem:BrTorH1){reference-type="ref" reference="rem:BrTorH1"}, the map $$\Sha(S,f)\xymatrix@1@=15pt{\ar[r]&}\Sha(S_0/\mathbb{P}^1)\simeq{\rm Br}(S_0)\xymatrix@1@=15pt{\ar[r]&}{\rm Br}(S)\,\xymatrix@1@=15pt{\ar@{^(->}[r]&}H^1(\mathbb{C}(t),{\rm Pic}(\bar E))$$ is given by $\alpha\xymatrix@1@=15pt{\ar@{|->}[r]&}{\rm Pic}^0_{\alpha|_E}(E)\xymatrix@1@=15pt{\ar@{|->}[r]&}({\rm Pic}(E)\times {\rm Pic}^0_{\alpha|E}(E))/E_0$ and the map $${\rm SBr}^{\rm o}(S)\xymatrix@1@=18pt{\ar@{->>}[r]&}{\rm Br}(S)\,\xymatrix@1@=15pt{\ar@{^(->}[r]&}H^1(\mathbb{C}(t),{\rm Pic}(\bar E))$$ sends $\alpha\in {\rm SBr}^{\rm o}(S)$ first to $\bar\alpha\in {\rm Br}(S)$ and then to ${\rm Pic}_{\bar\alpha|_E}(E)$, which by Proposition [Proposition 15](#prop:BrSBrPictorsor){reference-type="ref" reference="prop:BrSBrPictorsor"} is the torsor $({\rm Pic}(E)\times {\rm Pic}^0_{\alpha|_E}(E))/E_0$. ◻ **Remark 42**. Observe that the surjectivity says that for every $\gamma\in {\rm SBr}^{\rm o}(S_0)\simeq{\rm Br}(S_0)$ there exists a class $\alpha\in {\rm SBr}^{\rm o}(S)$ such that $${\rm Pic}^0_\gamma(S_0/\mathbb{P}^1)\simeq{\rm Pic}^0_\alpha(S/\mathbb{P}^1).$$ It would be interesting to find a geometric proof for this. One idea could be to use the two twisted universal families on $S_0\times_{\mathbb{P}^1} {\rm Pic}^0_\gamma(S_0/\mathbb{P}^1)$ and on $S\times_{\mathbb{P}^1} S_0$ to produce a family on $S\times_{\mathbb{P}^1}{\rm Pic}^0_\gamma(S_0/\mathbb{P}^1)$ inducing the desired isomorphism by universality. However, as the first family is twisted by $(\gamma\times 1)$ and the second one by $(1\times\beta)$, they do not concatenate directly. One would first need to transform e.g. the first one to a family twisted by some $(\beta^{-1}\times\delta)$. **Remark 43**. 
(i) Observe that the above arguments in particular show that the isomorphism ([\[eqn:ShaBrnew\]](#eqn:ShaBrnew){reference-type="ref" reference="eqn:ShaBrnew"}) coincides with the classical one constructed via the Lefschetz spectral sequence. \(ii\) There is yet another way of linking ${\rm Br}(S_0)$ and $\Sha(S_0/\mathbb{P}^1)$, cf. [@HuyK3 Rem. 11.5.9]. If $S$ is the twist associated with $\beta\in \Sha(S_0/\mathbb{P}^1)$, then $S_0$ can be viewed as a moduli space of sheaves on $S$, namely ${\rm Pic}^0(S/\mathbb{P}^1)\simeq S_0$. However, $S_0$ is only a coarse moduli space and the obstruction to the existence of a universal family is a class $\gamma_\beta\in {\rm Br}(S_0)$. This defines a map $\Sha(S_0/\mathbb{P}^1)\xymatrix@1@=15pt{\ar[r]&}{\rm Br}(S_0)$, $\beta\xymatrix@1@=15pt{\ar@{|->}[r]&}\gamma_\beta$, which again is nothing but ([\[eqn:ShaBrnew\]](#eqn:ShaBrnew){reference-type="ref" reference="eqn:ShaBrnew"}). ## Twisting Lagrangian fibrations {#sec:Mark} We conclude by linking our discussion with a result by Markman [@Mark]. Among other things, he proves that every non-special hyperkähler manifold of ${\rm K3}^{[n]}$-type $X$ together with a Lagrangian fibration $X\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^n$ is a certain 'twist' $M_s$ of a Mukai system $M={\rm Pic}^d({\mathcal C}/|h|)\xymatrix@1@=15pt{\ar[r]&}|h|$ associated with some K3 surface $S$ and a complete linear system ${\mathcal C}\xymatrix@1@=15pt{\ar[r]&}|h|$ on it. His twists are parametrised by elements $s\in {\rm SBr}^{\rm o}(S)^{\rm an}$ of the analytic special Brauer group ${\rm SBr}^{\rm o}(S)^{\rm an}$ (or rather of $\Sha^0$ used in [@Mark], see Section [4.4](#sec:Markman){reference-type="ref" reference="sec:Markman"}), which contains the special Brauer group ${\rm SBr}^{\rm o}(S)\subset{\rm SBr}^{\rm o}(S)^{\rm an}$ as its torsion group. 
How do these twists compare to our twisted Picard varieties ${\rm Pic}_\alpha^0({\mathcal C}/|h|)$, where $\alpha$ is an element in the smaller group ${\rm SBr}^{\rm o}(S)\subset{\rm SBr}^{\rm o}(S)^{\rm an}$? A priori, our setting seems at the same time more special and more general for the following reasons. \(i\) Markman only considers relative Picard varieties ${\rm Pic}^d({\mathcal C}/|h|)\xymatrix@1@=15pt{\ar[r]&}|h|$ which can be compactified to hyperkähler manifolds. In other words, only those moduli spaces ${\rm Pic}^d({\mathcal C}/|h|)\simeq M(0,h,s)$, where $d=s+(1/2)(h.h)$, are considered for which the Mukai vector $(0,h,s)$ is primitive. As we are only concerned with the generic fibre of the projection, this restriction is irrelevant for us. \(ii\) On the other hand, Markman 'twists' an arbitrary smooth ${\rm Pic}^d({\mathcal C}/|h|)=M(0,h,s)\xymatrix@1@=15pt{\ar[r]&}|h|$, and not only those with a section as our ${\rm Pic}^0({\mathcal C}/|h|)$, to obtain the given Lagrangian fibration $X\xymatrix@1@=15pt{\ar[r]&}\mathbb{P}^n$. This can be remedied by applying Proposition [Proposition 35](#prop:getall){reference-type="ref" reference="prop:getall"} and Example [Example 36](#ex:genexa){reference-type="ref" reference="ex:genexa"}. Theorem [Theorem 2](#thm:main2){reference-type="ref" reference="thm:main2"} reinterprets [@Mark Thm. 1.5] in the projective setting. The first step consists of observing that for $M={\rm Pic}^d({\mathcal C}/|h|)$ and $s\in {\rm SBr}^{\rm o}(S)^{\rm an}$, Markman's twist $M_s$ is algebraic if and only if $s$ is contained in ${\rm SBr}^{\rm o}(S)\subset{\rm SBr}^{\rm o}(S)^{\rm an}$. For this, observe that $\alpha=s\in {\rm SBr}^{\rm o}(S)^{\rm an}$ is torsion, or equivalently contained in ${\rm SBr}^{\rm o}(S)$, if and only if its image $\bar \alpha\in {\rm Br}(S)^{\rm an}$ is torsion, i.e. contained in ${\rm Br}(S)\subset{\rm Br}(S)^{\rm an}$. The result was proved in broader generality by Abasheva and Rogov [@AbRo Thm. 5.19].
Next, according to Example [Example 36](#ex:genexa){reference-type="ref" reference="ex:genexa"}, we know that $M$ is birational to ${\rm Pic}^d({\mathcal C}/|h|)\simeq{\rm Pic}^0_{\bar d}({\mathcal C}/|h|)$ and by virtue of Remark [Remark 33](#rem:groupstructure){reference-type="ref" reference="rem:groupstructure"} we have $$\label{eqn:prod} ({\rm Pic}_{\bar d}^0({\mathcal C}_\eta)\times {\rm Pic}^0_{\alpha|_{{\mathcal C}_\eta}}({\mathcal C}_\eta))/{\rm Pic}^0({\mathcal C}_\eta)\simeq{\rm Pic}^0_{\bar d\cdot\alpha|_{{\mathcal C}_\eta}}({\mathcal C}_\eta).$$ To conclude, one shows that for $\alpha=s\in \Sha(S,h)$ the twist $M_s$ is the left hand side of ([\[eqn:prod\]](#eqn:prod){reference-type="ref" reference="eqn:prod"}), which is a direct consequence of the discussion in [@Mark §7.2]. This is proved analogously to the two-dimensional case, cf. Remark [Remark 43](#rem:ShaBrnew){reference-type="ref" reference="rem:ShaBrnew"}. Hence, $M_s$ is birational to ${\rm Pic}_{\bar d\cdot\alpha}^0({\mathcal C}/|h|)$. As an alternative for the last step, one could first prove a version of ([\[eqn:prod\]](#eqn:prod){reference-type="ref" reference="eqn:prod"}) for the total Picard varieties: $({\rm Pic}_{\bar d}^0({\mathcal C}_\eta)\times {\rm Pic}_{\alpha|_{{\mathcal C}_\eta}}({\mathcal C}_\eta))/{\rm Pic}^0({\mathcal C}_\eta)\simeq{\rm Pic}_{\bar d\cdot\alpha|_{{\mathcal C}_\eta}}({\mathcal C}_\eta)$ and compare the left hand side with $M_s$ via Remark [Remark 39](#rem:BrTorH1){reference-type="ref" reference="rem:BrTorH1"}, see also Proposition [Proposition 15](#prop:BrSBrPictorsor){reference-type="ref" reference="prop:BrSBrPictorsor"}. ◻

A. Abasheva, V. Rogov, *Šafarevič--Tate groups of holomorphic Lagrangian fibrations*, arXiv:2112.10921.
J.-L. Colliot-Thélène, A. Skorobogatov, *The Brauer--Grothendieck Group*, Ergebnisse der Mathematik und ihrer Grenzgebiete, 3. Folge, volume 71, 2021.
A. Grothendieck, *Le groupe de Brauer II. Théorie cohomologique*, in: Dix Exposés sur la cohomologie des schémas, volume 3 of Adv. Stud. in Pure Math., North-Holland, Amsterdam (1968), 66--87.
A. Grothendieck, *Le groupe de Brauer III. Exemples et compléments*, in: Dix Exposés sur la cohomologie des schémas, volume 3 of Adv. Stud. in Pure Math., North-Holland, Amsterdam (1968), 88--188.
N. Hoffmann, U. Stuhler, *Moduli schemes of generically simple Azumaya modules*, Doc. Math. 10 (2005), 369--389.
D. Huybrechts, P. Stellari, *Equivalences of twisted K3 surfaces*, Math. Ann. 332 (2005), 901--936.
D. Huybrechts, P. Stellari, *Proof of Căldăraru's conjecture*, in: Moduli spaces and arithmetic geometry, Adv. Stud. Pure Math. 45 (2006), 31--42.
D. Huybrechts, *The global Torelli theorem: classical, derived, twisted*, in: Algebraic geometry, Seattle 2005, Part 1, Proc. Sympos. Pure Math. 80, Part 1, AMS (2009), 235--258.
D. Huybrechts, *Lectures on K3 surfaces*, volume 158 of Cambridge Studies in Advanced Mathematics, Cambridge University Press, Cambridge, 2016.
M. Lieblich, *Moduli of twisted sheaves*, Duke Math. J. 138 (2007), 23--118.
M. Lieblich, *Twisted sheaves and the period--index problem*, Compos. Math. 144 (2008), 1--31.
M. Lieblich, *Moduli of twisted sheaves and generalized Azumaya algebras*, PhD thesis, MIT (2004).
E. Markman, *Lagrangian fibrations of holomorphic-symplectic varieties of ${\rm K3}^{[n]}$-type*, in: Algebraic and Complex Geometry, volume 71 of Springer Proc. Math. and Stat. (2014), 241--283.
C. Simpson, *Moduli of representations of the fundamental group of a smooth projective variety I*, Publ. Math. Inst. Hautes Études Sci. 79 (1994), 47--129.
K. Yoshioka, *Moduli spaces of twisted sheaves on a projective variety*, in: Moduli spaces and arithmetic geometry, Adv. Stud. Pure Math. 45 (2006), 1--30.

[^1]: Note that in general only $\mu$-stability is preserved under these equivalences. However, on surfaces and under the assumption that $F$ has trivial determinant, also Gieseker stability is preserved.
[arXiv:2310.04032, "The special Brauer group and twisted Picard varieties", Daniel Huybrechts and Dominique Mattei, math.AG, license: http://creativecommons.org/licenses/by-sa/4.0/]
--- abstract: | In this paper, the nonlinear (orbital) stability of static $\pi$-shifted Néel walls in ferromagnetic films, under the reduced wave-type dynamics for the in-plane magnetization proposed by Capella, Melcher and Otto [@CMO07], is established. It is proved that the spectrum of the linearized operator around the static Néel wall lies in the stable complex half plane with non-positive real part. This information is used to show that small perturbations of the static Néel wall converge to a translated orbit belonging to the manifold generated by the static wall. address: - | (A. Capella) Instituto de Matemáticas\ Universidad Nacional Autónoma de México\ Circuito Exterior s/n, Ciudad Universitaria\ C.P. 04510 Cd. de México (Mexico) - | (C. Melcher) Lehrstuhl I für Mathematik\ RWTH Aachen\ D-52056 Aachen (Germany) - | (L. Morales) Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas\ Universidad Nacional Autónoma de México\ Circuito Escolar s/n, Ciudad Universitaria\ C.P. 04510 Cd. de México (Mexico) - | (R. G. Plaza) Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas\ Universidad Nacional Autónoma de México\ Circuito Escolar s/n, Ciudad Universitaria\ C.P. 04510 Cd. de México (Mexico) author: - Antonio Capella - Christof Melcher - Lauro Morales - Ramón G. Plaza bibliography: - references.bib title: Nonlinear stability of static Néel walls in ferromagnetic thin films --- # Introduction {#secintro} In order to study the motion of magnetization vectors in ferromagnetic materials, in 1935 Landau and Lifshitz [@LanLif35] introduced a model system of equations, later reformulated and re-derived by Gilbert [@Gilb55; @Gilb04], which constitutes the fundamental and best accepted mathematical model that describes the magnetization in ferromagnets. 
Since ferromagnetic thin films have a wide range of applications in the design and manufacturing of magnetic storage devices, the Landau-Lifshitz-Gilbert (LLG) model has attracted a great deal of attention from physicists and mathematicians alike in the last decades. A great variety of patterns of magnetization vectors appear in ferromagnetic films. For instance, narrow transition regions between opposite magnetization domains are called *domain walls*. Some of the most common wall types in such materials are called Néel walls, separating two opposite magnetization regions by an in-plane rotation, oriented along an axis; Bloch walls, for which the magnetization rotates about the normal of the domain wall, pointing along the domain wall plane in a 3D system; or Walker walls, which are formed under the presence of an external magnetic field (see, e.g., Hubert and Schäfer [@HuSch98] for further information). One of the main objectives of recent mathematical studies is to understand the behavior of these dynamical coherent structures developed by the magnetization of a ferromagnet. The stability under small perturbations of these microstructures is important, not only to validate the mathematical model but also to enhance the numerical simulations performed by physicists and engineers to optimize and design new ferromagnetic materials (see, e.g., [@LabBer99]). To our knowledge, the literature on the dynamical stability theory for magnetic domain walls is scarce. The stability of one-dimensional Bloch walls has been addressed by Krukowski [@Kruk87], using a spectral (linearized) calculation of energies of ground states, and by Carbou and Labbé [@CarLab06], under the one-dimensional nanowire approximation of Sanchez [@Sanch09]. Takasao [@Tak11] improved this last result for Walker walls, also in one dimension and in the presence of an external magnetic field.
Carbou [@Carb10] proved the stability of a Walker wall in the three-dimensional model using the energy method and under a simplifying assumption that gets rid of the non-local part of the operator. Most of these works employ energy methods to conclude stability, that is, the analyses are based on performing *a priori* energy estimates on the equations of magnetization evolution and relying on their intrinsic structure. This paper is devoted to studying the dynamical stability of static Néel walls. Our departure point is the one-dimensional thin-film reduction of the micromagnetic energy proposed by Capella, Melcher, and Otto [@CMO07] (outlined previously in [@MuOs06] for numerical purposes), which establishes an effective system for the in-plane magnetization by taking the thin-film limit. The resulting system gives rise to a wave-type dynamics for the Néel wall's phase. The authors prove the existence and uniqueness of a static Néel wall's phase profile in the absence of external fields, as well as the emergence of traveling wave solutions near the static profile under the influence of a small constant external forcing. The authors also outline the stability of these structures under small one-dimensional perturbations. The present analysis constitutes a follow-up to this formulation and a full study of the nonlinear stability of the static Néel wall under small, one-dimensional perturbations of the phase itself. As far as we know, this problem has not been studied before in the literature. One of the main technical difficulties pertains to the non-locality of the dynamical equation, even at a linear level. In contrast to previous studies, we adopt a spectral approach to the problem.
Motivated by the ideas in [@CMO07], in which the linearized operator around the static phase is defined and previously studied, we profit from this information and perform a full spectral stability analysis of this operator, which includes a proof of its relative compactness with respect to an asymptotic operator. In contrast with standard techniques, which are usually applied to local differential operators with bounded coefficients and which are based on truncating such coefficients with their asymptotic limits (see, e.g., [@KaPro13], Section 3.1), in this work and by necessity (because we are studying a non-local operator) we develop a novel procedure that focuses on describing totally bounded sets in terms of $L^2$-equicontinuity and uniform decay in Fourier space (see Theorem [Theorem 27](#themcompct_L){reference-type="ref" reference="themcompct_L"} below). This relative compactness plays a crucial role in the location of the essential spectrum of a block operator matrix that encodes the linearization of the nonlinear wave equation for perturbations of the static wall. It is proved that both essential and point spectra are stable, that is, they belong to the stable half-plane of complex numbers with negative real part, except for the origin, which is associated with translations of the Néel wall (see Theorem [Theorem 40](#mainspthm){reference-type="ref" reference="mainspthm"}). An important feature is the presence of a *spectral gap*, that is, a positive distance between the eigenvalue zero and the rest of the spectrum. This allows us to establish the exponential decay of the solutions to the spectral problem when projected outside the one-dimensional vector space generated by translations of the static profile.
Upon application of the well-known Gearhart-Prüss theorem [@CrL03; @EN00] and after the establishment of uniform resolvent estimates, we conclude that the semigroup generated by the linear block matrix operator is exponentially decaying in the appropriate subspace. This information is then used to prove nonlinear stability. For that purpose, we apply an abstract result, due originally to Sattinger [@Sa76] and adapted to a Hilbert space setting by Lattanzio *et al.* [@LMPS16], that establishes nonlinear stability from spectral stability by controlling the growth of nonlinear terms and profiting from the fact that the manifold generated by the wave is one-dimensional (the group of translations). We regard our contributions as new not only in the context of ferromagnetic wall stability analysis, but also in their methodological nature: we advocate for spectral and nonlinear analysis as a feasible and effective method in the study of this type of problem. The unpublished note by Huber [@Hub10] deserves mention as the only work (as far as we know) that performs a rigorous spectral analysis of the linearized operator around a Néel wall for a layer of small (but positive) thickness, $\epsilon> 0$. Huber does not prove the spectral stability of this structure but employs the spectral information to obtain time-periodic solutions in its vicinity. (We note that in layers with positive thickness, the linearized operators are sectorial, in contrast with the present case of a thin-film limit.) ## Plan of the paper {#plan-of-the-paper .unnumbered} This paper is structured as follows. Section [2](#secprel){reference-type="ref" reference="secprel"} contains a brief description of the thin-film dynamical model in [@CMO07], recalls some of the main properties of the static Néel wall's phase, and states the main result of this paper.
Section [3](#seclinearopL){reference-type="ref" reference="seclinearopL"} is devoted to the full, rigorous study of the linearized (scalar) operator around the static Néel wall defined in [@CMO07]. In particular, it is shown that it is relatively compact with respect to an asymptotic operator, a feature that plays a key role in the stability analysis. Section [4](#secspectral){reference-type="ref" reference="secspectral"} establishes the spectral stability of the Néel wall's phase. The spectral problem is posed in terms of a block operator matrix and the stability of both its essential and point spectra is established. Section [5](#secsemigroup){reference-type="ref" reference="secsemigroup"} is devoted to generating the associated semigroup and to showing the exponential decay of solutions to the linearized equations outside a one-dimensional space related to translations of the profile. The final Section [6](#sec:nonlinear_stability){reference-type="ref" reference="sec:nonlinear_stability"} contains the proof of Theorem [Theorem 3](#maintheorem){reference-type="ref" reference="maintheorem"}. ## Notations {#notations .unnumbered} Throughout this manuscript, we denote the spaces $L^2(\mathbb{R}, \mathbb{C}), \ H^1(\mathbb{R}, \mathbb{C})$ and $H^2(\mathbb{R}, \mathbb{C})$ of complex-valued functions by $L^2, \ H^1$ and $H^2$. Meanwhile, their real-valued versions are denoted by $L^2(\mathbb{R}), \ H^1(\mathbb{R})$ and $H^2(\mathbb{R})$, respectively. The set of unit vectors in $\mathbb{R}^n$ is denoted by $\mathbb{S}^{n-1}$. The operators $\hat{\cdot}:L^2\to L^2$ and $\check{\cdot}:L^2\to L^2$ stand for the Fourier transform and its inverse, respectively. Also, $\xi$ represents the variable in the frequency domain.
In the same fashion, the half-Laplacian is defined by the relation $(-\Delta)^{1/2}u = (|\xi|\hat{u})\check{}$, and $\|u\|_{\dot{H}^{1/2}}$ denotes the fractional $H^{1/2}$-norm of the function $u\in L^2$ given by $\|u\|_{\dot{H}^{1/2}}:=\left \| |\xi|^{1/2}\hat{u} \right \|_{L^2}$. Finally, for two linear operators, say ${\mathcal{A}}$ and ${\mathcal{T}}$, the commutator $[{\mathcal{A}}, {\mathcal{T}}]$ is given by the difference ${\mathcal{A}}{\mathcal{T}}- {\mathcal{T}}{\mathcal{A}}$. # Preliminaries and main result {#secprel} ## The micromagnetic model The Landau and Lifshitz continuum theory of ferromagnetic materials [@LanLif35] is based on a magnetization field ${\bf m}: \tilde\Omega\to {\mathbb S}^2$, which represents the local average magnetic moment, and a variational principle in terms of the *micromagnetic energy*. In the absence of an external field, the micromagnetic energy is given by $$\mathbb{E}(\mathbf{m}) = \frac{1}{2}\Big( d^2 \int_{\widetilde{\Omega}} |\nabla \mathbf{m}|^2 \, dx + \int_{\mathbb{R}^3} |\nabla U|^2 \, dx + Q \int_{\widetilde{\Omega}} \Phi(\mathbf{m}) \, dx \Big),$$ where $d> 0$ is the exchange length and $\nabla U$ is the *stray field* defined uniquely via the distributional equation $\Delta U = \textrm{div}\,(\mathbf{m}\boldsymbol{1}_{\tilde \Omega})$ ($\boldsymbol{1}_A$ denotes the indicator function of the set $A$). The stray-field energy favors vanishing distributional divergence, namely, $\nabla\cdot {\bf m} = 0$ in $\tilde\Omega$ and ${\bf m}\cdot n = 0$ on $\partial\tilde\Omega$, where $n$ is the outward normal to the boundary. The last integral models crystalline anisotropies via a penalty energy, for which $\Phi$ acts as a penalty function, and it usually has the form of an even polynomial in $\mathbf{m}\in \mathbb{S}^2$. The parameter $Q>0$ measures the relative strength of anisotropy penalization against stray-field interaction.
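To make this flux-closure mechanism transparent, we record a standard formal computation (ours, not taken from [@CMO07]; Fourier conventions as in the Notations, Plancherel up to a normalization constant) expressing the stray-field energy in Fourier variables:

```latex
% From the distributional equation  \Delta U = \mathrm{div}\,(\mathbf{m}\boldsymbol{1}_{\tilde\Omega})
% we obtain, in Fourier variables,
-|\xi|^2\,\widehat{U}(\xi) = i\,\xi\cdot\widehat{\mathbf{m}\boldsymbol{1}_{\tilde\Omega}}(\xi),
\qquad\text{hence}\qquad
\int_{\mathbb{R}^3} |\nabla U|^2\,dx
\;\simeq\; \int_{\mathbb{R}^3}
\frac{\big|\xi\cdot\widehat{\mathbf{m}\boldsymbol{1}_{\tilde\Omega}}(\xi)\big|^2}{|\xi|^2}\,d\xi .
```

In particular, the stray-field energy vanishes exactly when $\mathbf{m}\boldsymbol{1}_{\tilde \Omega}$ is divergence-free in the distributional sense, which is precisely the condition favored by the stray-field energy described above.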
The combination of the stray-field energy (which is a non-local term) and the non-convex saturation constraint $|{\bf m}|=1$ gives rise to pattern formation among magnetic domains where the magnetization is almost constant. Thin transition layers separating the magnetic domains are known as domain walls and may form complex patterns [@HuSch98]. ## Néel wall in soft magnetic thin films A thin film is an infinitely extended magnetic material $\tilde\Omega = \mathbb{R}^2\times (0,\delta)$ where $\delta \ll d$. In this regime, it is safe to assume that the magnetization is independent of the $x_3$ variable. By assuming further that the magnetization is $\ell$-periodic in the ${\bf e}_2$ direction, namely, $${\bf m}(x_1,x_2+\ell) = \mathbf{m}(x_1,x_2) \quad\text{for any } x = (x_1,x_2)\in\mathbb{R}^2,$$ and that the material has a uniaxial anisotropy in the ${\bf e}_2$ direction, with $\Phi({\bf m}) = 1- m_2^2$, we consider transition layers connecting antipodal states on the easy axis $${\bf m}:\mathbb{R}^2\to \mathbb{S}^2 \quad\text{ with } \mathbf{m}(\pm \infty, x_2) = (0,\pm 1, 0) \quad \text{for any } x_2\in \mathbb{R}.$$ In this case, the stray-field energy is approximated at leading order by $$E_{stray}({\bf m})= \frac{1}{2} \hspace{4pt} \mbox{--}\hspace{-9pt}\int_0^\ell\int_\mathbb{R}\left(\frac{\delta}{2} \left| |\nabla|^{\frac{1}{2}} {\mathcal H}(m) \right|^2 + m_3^2 \right) dx,$$ where ${\bf m} = (m, m_3)$ with $m=(m_1,m_2)$ and formally ${\mathcal H}(m)= \nabla \Delta^{-1}\text{div } m$ (see [@CMO07] for further details). Thus, the micromagnetic energy becomes $$\mathbb{E}({\bf m})= \frac{1}{2} \hspace{4pt} \mbox{--}\hspace{-9pt}\int_0^\ell\int_\mathbb{R}\left( d^2|\nabla m |^2 + \frac{\delta}{2} \left| |\nabla|^{\frac{1}{2}} {\mathcal H}(m) \right|^2 + Q(1-m_2^2) + m_3^2 \right) dx.$$ Néel walls are one-dimensional transition layers observed in soft ferromagnetic thin films, that is, magnetic materials with relatively weak anisotropic energy.
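The operator ${\mathcal H}(m)= \nabla \Delta^{-1}\text{div } m$ introduced above has a simple Fourier description (a formal computation of ours, consistent with [@CMO07]): it acts as the pointwise projection of $\widehat{m}$ onto the direction of the frequency variable,

```latex
\widehat{{\mathcal H}(m)}(\xi)
  = \frac{\xi\,\big(\xi\cdot\widehat{m}(\xi)\big)}{|\xi|^2},
  \qquad \xi\in\mathbb{R}^2\setminus\{0\}.
% For a one-dimensional profile m = m(x_1), the transform \widehat{m} is supported
% on the line \{\xi_2 = 0\}, so \xi = (\xi_1, 0) and the projection reduces to the
% first component:  {\mathcal H}(m) = m_1 \mathbf{e}_1 .
```

This is the identity ${\mathcal H}(m) = m_1 {\bf e}_1$ exploited in the one-dimensional reduction below.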
Here, we consider a parameter regime of soft thin films in which the anisotropy and the relative thickness are balanced; more precisely, $$\label{reg-Neel} Q \ll 1, \quad \kappa = \delta/d \ll 1, \quad \text{while } {\mathcal Q} = 4Q/\kappa^{2} \lesssim 1.$$ It is therefore natural to introduce the small parameter $\varepsilon = \sqrt{Q}$. By rescaling the length $x$ by $w = \delta /(2Q)$, and the energy by $\delta/2$, the micromagnetic energy becomes $$\label{Energy-epsilon} E_\varepsilon ({\bf m}) = \frac{1}{2} \hspace{4pt} \mbox{--}\hspace{-9pt}\int_0^L\int_\mathbb{R}\left({\mathcal Q}|\nabla m |^2 + \left| |\nabla|^{\frac{1}{2}} {\mathcal H}(m) \right|^2 + (1-m_2^2) + \left(\frac{m_3}{\varepsilon}\right)^2 \right) dx,$$ where $L= \ell/w$ and we assumed $\varepsilon \ll {\mathcal Q} \lesssim 1$. If we assume further that $m=m(x_1)$, then ${\mathcal H}(m) = m_1 {\bf e}_1$ is independent of $x_2$ and the reduced variational principle for the one-dimensional wall transition is $$\label{varppio} \begin{aligned} E_\varepsilon ({\bf m}) = \frac{1}{2} \int_\mathbb{R}& \Big( {\mathcal Q} | {\bf m}'|^2 +| |\nabla|^{\frac{1}{2}} m_1|^2 + (1-m_2^2) + \left(\frac{m_3}{\varepsilon}\right)^2 \Big) dx \to \min,\\ &{\bf m} : \mathbb{R}\to {\mathbb S}^2 \qquad \text{with} \;\; {\bf m}(\pm\infty) =(0,\pm 1,0), \end{aligned}$$ where ${\bf m}'=\frac{d{\bf m}}{dx_1}$. In [@GC04] it is shown that for $\varepsilon_k\to 0$ there exists a sequence of minimizers ${\bf m}_{\varepsilon_k}$ of [\[varppio\]](#varppio){reference-type="eqref" reference="varppio"} with a subsequence that converges locally to ${\bf m} = (m,0)$, which satisfies $$\label{limit-varppio} \begin{aligned} E_0 (m) = \frac{1}{2} \int_\mathbb{R}& \Big( {\mathcal Q}| m' |^2 +| |\nabla|^{\frac{1}{2}} m_1|^2 + m_1^2 \Big) dx \to \min,\\ &m : \mathbb{R}\to {\mathbb S}^1 \qquad \text{with} \;\; m(\pm\infty) =(0,\pm 1).
\end{aligned}$$ Since $|m'|^2=(m_1')^2/(1-m_1^2)$ (a consequence of the constraint $|m| = 1$) is a strictly convex functional of $m_1$, the variational principle [\[limit-varppio\]](#limit-varppio){reference-type="eqref" reference="limit-varppio"} has a minimizer for any ${\mathcal Q}>0$. The minimizer is called a Néel wall. We refer to the energy in [\[limit-varppio\]](#limit-varppio){reference-type="eqref" reference="limit-varppio"} as the Néel wall energy, and for convenience, we write it as $$E_0 (m) = \frac{1}{2} \left({\mathcal Q} \Vert m'\Vert_{L^2}^2 + \Vert m_1 \Vert_{\dot{H}^{1/2}}^2 + \Vert m_1\Vert_{L^2}^2 \right).$$ Since translations act as isometries on $L^2$, the energy $E_0(m)$ is invariant under translations in the spatial coordinate $x$; consequently, minimizers of [\[limit-varppio\]](#limit-varppio){reference-type="eqref" reference="limit-varppio"} are unique only up to translation. For our analysis, we introduce the phase $\theta:\mathbb{R}\to \mathbb{R}$ so that $m=(\cos\theta, \sin\theta)$ and the Néel wall energy becomes $$\label{varprob} \begin{aligned} \mathcal{E}(\theta) &= \frac{1}{2} \big( {\mathcal Q}\|\theta'\|_{L^2}^2 + \|\cos \theta\|^2_{{\dot H}^{1/2}} + \|\cos \theta\|_{L^2}^2 \big) \; \rightarrow \; \min\\ \theta : \mathbb{R}&\to (-\pi/2,\pi/2), \qquad \text{with} \;\; \theta(\pm\infty) = \pm \pi/2. \end{aligned}$$ Since we are interested in the dynamical properties of Néel walls, we refer to minimizers of [\[varprob\]](#varprob){reference-type="eqref" reference="varprob"} as the *static* Néel wall phase. From now on, we assume ${\mathcal Q}=1$ and, to avoid ambiguity, we write $\partial_x\theta$ and $\partial_{x}^2\theta$ instead of $\theta'$ and $\theta''$. The following proposition summarizes the basic properties of the static Néel wall phase. **Proposition 1** (properties of the static Néel wall [@CMO07; @Melc03]).
*There exists a static Néel wall solution with phase $\overline{\theta}= \overline{\theta}(x)$, $\overline{\theta}: \mathbb{R}\to (-\pi/2,\pi/2)$, satisfying the following:* - *[\[propa\]]{#propa label="propa"} $\overline{\theta}$ is a strict minimizer of the variational problem [\[varprob\]](#varprob){reference-type="eqref" reference="varprob"}, with center at the origin, $\overline{\theta}(0) = 0$, and monotone increasing, $\partial_x \overline{\theta}> 0$ $\, \forall x \in \mathbb{R}$.* - *[\[propb\]]{#propb label="propb"} $\overline{\theta}$ is a smooth solution to $$\label{ELeq} \partial_x^2 \theta + \sin \theta (1+(-\Delta)^{1/2}) \cos \theta = 0,$$ which is the Euler-Lagrange equation for the variational problem [\[varprob\]](#varprob){reference-type="eqref" reference="varprob"}.* - *[\[propc\]]{#propc label="propc"} $\partial_x \overline{\theta}\in H^2$.* - *[\[propd\]]{#propd label="propd"} For every centered variation $u \in H^1$ with $u(0) = 0$, there holds $$\label{Hesspos} \mathop{\mathrm{\mathrm{Hess}\,}}\mathcal{E}(\overline{\theta}) \langle u,u \rangle_{L^2} \geq \|u \, \partial_x \overline{\theta}\|^2_{L^2} + \mathrm{Re}\,\, b [u\sin \overline{\theta}, u \sin \overline{\theta}],$$ where the bilinear form $b[\cdot,\cdot] : H^1 \times H^1 \to \mathbb{C}$, defined by $$\label{defbilinearB} b[f,g] = \int_\mathbb{R}(1+|\xi|) \hat f(\xi) \hat g(\xi)^* \, d\xi, \qquad f, g \in H^1,$$ is equivalent to the standard inner product in $H^{1/2}$.* *Proof.* Property [(a)](#propa) results from combining Lemma 1 in [@CMO07] with the main results of [@Melc03] (Propositions 1 and 2). The proof of the smoothness of the Néel wall can be found in [@Melc04] (Proposition 2). Since it is a minimizer, it satisfies equation [\[ELeq\]](#ELeq){reference-type="eqref" reference="ELeq"} (see Lemma 1 in [@CMO07]). This shows [(b)](#propb).
Moreover, it is proved in [@CMO07] (Theorem 1 and Lemma 1) that $\partial_x \overline{\theta}$, $\partial_x^2 \overline{\theta}\in L^2(\mathbb{R})$. As pointed out by the authors, from the Euler-Lagrange equation [\[ELeq\]](#ELeq){reference-type="eqref" reference="ELeq"} the regularity arguments of Lemma 1 can be bootstrapped to show that $\partial_x^3 \overline{\theta}\in L^2(\mathbb{R})$. This shows [(c)](#propc). Finally, property [(d)](#propd) is the content of Lemma 3 in [@CMO07]. ◻ **Corollary 2**. *There exists a uniform constant $C > 0$ such that $$\| \partial_x \overline{\theta}\|_\infty, \| \partial^2_x \overline{\theta}\|_\infty \leq C.$$* *Proof.* Follows immediately from the fact that $\partial_x \overline{\theta}\in H^2$ (see Proposition [Proposition 1](#propNeelw){reference-type="ref" reference="propNeelw"} [(c)](#propc)) and Sobolev's inequality, $\| u \|_\infty^2 \leq 2 \|u \|_{L^2} \| \partial_xu \|_{L^2}$ for all $u \in H^1$, which follows from writing $u(x)^2 = 2\int_{-\infty}^{x} u \, \partial_x u \, ds$ and applying the Cauchy-Schwarz inequality. ◻ ## LLG dynamics The time evolution of the magnetization distribution on a ferromagnetic body $\widetilde{\Omega} \subset \mathbb{R}^3$ is governed by the Landau-Lifshitz-Gilbert (LLG) equation [@LanLif35; @Gilb55; @Gilb04]: $$\label{eqLLG} \mathbf{m}_t+ \alpha \mathbf{m}\times \mathbf{m}_t-\gamma \mathbf{m}\times \mathbf{H}_{\mathrm{eff}} = 0,$$ where $\mathbf{m}: \widetilde{\Omega} \times (0,\infty) \to \mathbb{S}^2 \subset \mathbb{R}^3$ is the magnetization field, $\alpha > 0$ is a non-dimensional damping coefficient (Gilbert factor), and $\gamma > 0$ is the (constant) absolute value of the gyromagnetic ratio with dimensions of frequency (see, e.g., [@Gilb04]). The effective field, $\mathbf{H}_{\mathrm{eff}} = \mathbf{h}- \nabla \mathbb{E}(\mathbf{m})$, is the sum of the applied field $\mathbf{h}$ and the negative functional gradient of the micromagnetic energy $\mathbb{E}(\mathbf{m})$.
If we consider a single magnetic spin $\mathbf{m}=\mathbf{m}(t)$ under a constant magnetic field $\mathbf{h}$ and neglect damping, the magnetization $\mathbf{m}$ will precess about the applied field $\mathbf{h}$ with a frequency given by $\omega = \gamma |\mathbf{h}|$. When the damping is turned on, the vector $\mathbf{m}$ will spiral down around $\mathbf{h}$ until $\mathbf{m}$ and $\mathbf{h}$ become parallel. The typical relaxation time is $1/(\alpha\omega)$. In bulk materials, there exists a one-dimensional optimal path connecting antipodal magnetization states known as the Bloch wall. Bloch walls are such that $m_1=0$ and the transition is perpendicular to the transition axis. In this case, the magnetization $\mathbf{m}$ is divergence-free and the stray-field energy vanishes. Under this condition, there exist explicit dynamic solutions in the bulk, where under an applied field $\mathbf{h}= H {\bf e}_2$ the magnetization rotates to develop a $m_1$ component. This component implies a rotation of the other magnetization components, advancing the domain wall [@HuSch98; @Melc04]. ## LLG wave-type dynamic limit in thin films Thin films are incompatible with gyrotropic wall motion due to the constraint imposed on the in-plane magnetization by the stray field. In this configuration, the competition between energy and dynamic forces becomes singular in the thin-film limit. In [@CMO07], a suitable effective limit is derived in the appropriate regime, in which the oscillatory features of the LLG dynamics are preserved in the limit. It turns out that the effective dynamics depend on the asymptotic regime as $\alpha$ and the relative thickness $\delta/d$ tend to zero. For the precise scaling and regime in [@CMO07], let $\varepsilon = \sqrt{Q}$ and consider [\[reg-Neel\]](#reg-Neel){reference-type="eqref" reference="reg-Neel"} when $\varepsilon \ll {\mathcal Q}$, while ${\mathcal Q} = (2\varepsilon d/\delta)^2 \lesssim 1$ is small but bounded from below.
That is, $\varepsilon\sim \delta/d$ can be regarded as the relative thickness. Under these assumptions we rescale space and time by $$x\mapsto w x \quad \text{where } w = \delta/(2\varepsilon^{2}),\quad t \mapsto t/(\gamma\varepsilon).$$ In this scaling, the mean effective field $\mathbf{H}_{\mathrm{eff}}$ becomes $$\mathbf{H}_{\mathrm{eff}} = -\varepsilon^2\nabla E_\varepsilon(\mathbf{m})\quad\text{ where } E_\varepsilon(\mathbf{m}) = (2/\delta) \mathbb{E}(\mathbf{m}),$$ with $E_\varepsilon(\mathbf{m})$ given by [\[Energy-epsilon\]](#Energy-epsilon){reference-type="eqref" reference="Energy-epsilon"}. Therefore, the LLG equation [\[eqLLG\]](#eqLLG){reference-type="eqref" reference="eqLLG"} becomes $$\label{LLG-epsilon} \mathbf{m}_t + \alpha \mathbf{m}\times \mathbf{m}_t + \varepsilon \mathbf{m}\times \nabla E_\varepsilon(\mathbf{m}) = 0.$$ To derive the effective equation for the in-plane magnetization it is necessary to write down $E_\varepsilon(\mathbf{m})$ in terms of $m=(m_1,m_2)$ and $m_3$, that is, $$E_\varepsilon(\mathbf{m}) = E_0(m) + \frac{1}{2}\hspace{4pt} \mbox{--}\hspace{-9pt}\int_0^L\int_\mathbb{R}\Big({\mathcal Q} |\nabla m_3|^2 + \left(\frac{m_3}{\varepsilon}\right)^2\Big) dx,$$ where $$\label{E_02D} E_0(m) = \frac{1}{2}\hspace{4pt} \mbox{--}\hspace{-9pt}\int_0^L\int_\mathbb{R}\Big( {\mathcal Q}|\nabla m|^2 + ||\nabla|^\frac{1}{2}{\mathcal H}(m)|^2 + (1-m_2^2) \Big) dx.$$ Notice that for one-dimensional transition layers the energy $E_0$ coincides with the reduced Néel wall energy [\[limit-varppio\]](#limit-varppio){reference-type="eqref" reference="limit-varppio"}.
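Let us also record the standard observation (a formal check of ours, not specific to [@CMO07]) that the rescaled dynamics preserves the saturation constraint $|\mathbf{m}|=1$:

```latex
% Taking the scalar product of the rescaled LLG equation with m and using
% the identity  a \cdot (a \times b) = 0  for both cross-product terms:
\mathbf{m}\cdot\mathbf{m}_t
  + \alpha\,\mathbf{m}\cdot(\mathbf{m}\times\mathbf{m}_t)
  + \varepsilon\,\mathbf{m}\cdot\big(\mathbf{m}\times\nabla E_\varepsilon(\mathbf{m})\big)
  = \mathbf{m}\cdot\mathbf{m}_t
  = \tfrac{1}{2}\,\partial_t\,|\mathbf{m}|^2
  = 0,
% hence |m(x,t)| = 1 for all t > 0 whenever the constraint holds initially.
```

The same computation applies verbatim to the original equation [\[eqLLG\]](#eqLLG){reference-type="eqref" reference="eqLLG"}.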
In [@CMO07] it is shown that as $$\varepsilon\to 0 \quad\text{while}\quad \alpha(\varepsilon)/\varepsilon \to \nu$$ for some positive $\nu$, while keeping ${\mathcal Q}=1$ for every $\varepsilon >0$, there exists a sequence of solutions $\mathbf{m}_\varepsilon$ of [\[LLG-epsilon\]](#LLG-epsilon){reference-type="eqref" reference="LLG-epsilon"}, $L$-periodic in the $x_2$ direction, such that the in-plane magnetization $m_\varepsilon$ converges weakly (in the appropriate spaces) to $m\in {\mathbb S}^1$, a weak solution of $$\label{LLG-limit} [\partial_t^2 m + \nu \partial_t m + \nabla E_0(m)]\perp {T_m \mathbb S}^1.$$ Because $E_0(m)$ coincides with the Néel wall energy, it is clear that, under the appropriate boundary conditions at infinity (e.g. [\[limit-varppio\]](#limit-varppio){reference-type="eqref" reference="limit-varppio"}), the static Néel wall profile $\bar m=(\cos\bar\theta,\sin\bar\theta)$ is a static solution of [\[LLG-limit\]](#LLG-limit){reference-type="eqref" reference="LLG-limit"}. ## Main result The static Néel wall solution and the wave-type dynamic equation [\[LLG-limit\]](#LLG-limit){reference-type="eqref" reference="LLG-limit"} are the starting point of the present work. We state our main result in terms of the magnetic phase $\theta:\mathbb{R}\times (0,\infty) \to \mathbb{R}$. As a function of $\theta(x,t)$, equation [\[LLG-limit\]](#LLG-limit){reference-type="eqref" reference="LLG-limit"} with the boundary conditions given by [\[varprob\]](#varprob){reference-type="eqref" reference="varprob"} becomes $$\label{reddyneq} \left\{ \ \ \begin{aligned} &\partial_t^2 \theta + \nu \partial_t \theta + \nabla {\mathcal{E}}(\theta) = 0, \\ &\theta(-\infty,t) =-\pi/2,\quad \theta(\infty,t) =\pi/2,\\ &\theta(x,0) =\theta_0(x) ,\quad \partial_t\theta(x,0) =v_0(x), \end{aligned} \right.$$ where $(\theta_0,v_0)$ are some initial conditions and the energy ${\mathcal{E}}(\theta)$ is as in [\[varprob\]](#varprob){reference-type="eqref" reference="varprob"}.
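For later reference, we record the formal first variation of ${\mathcal{E}}$ (with ${\mathcal Q}=1$; our computation, consistent with the Euler-Lagrange equation [\[ELeq\]](#ELeq){reference-type="eqref" reference="ELeq"}), which identifies the gradient appearing in [\[reddyneq\]](#reddyneq){reference-type="eqref" reference="reddyneq"}: for a smooth test function $u$,

```latex
% First variation of E(theta), using the bilinear form b[.,.] from the
% Hessian bound and integrating by parts in the exchange term:
\frac{d}{ds}\Big|_{s=0}\mathcal{E}(\theta + s u)
  = \langle \partial_x\theta,\ \partial_x u\rangle_{L^2}
    - b\big[\cos\theta,\ u\sin\theta\big]
  = \Big\langle -\partial_x^2\theta
      - \sin\theta\,\big(1+(-\Delta)^{1/2}\big)\cos\theta,\ u\Big\rangle_{L^2},
% so that
\nabla\mathcal{E}(\theta)
  = -\partial_x^2\theta - \sin\theta\,\big(1+(-\Delta)^{1/2}\big)\cos\theta,
% and \nabla E(theta) = 0 recovers the Euler-Lagrange equation.
```

In particular, the stationary solutions of [\[reddyneq\]](#reddyneq){reference-type="eqref" reference="reddyneq"} are exactly the critical points of ${\mathcal{E}}$, among them the static Néel wall phase $\overline{\theta}$.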
After these definitions we are ready to state our main result. **Theorem 3** (orbital stability of the static Néel wall). *Let ${\mathcal{J}}\subset H^1(\mathbb{R})\times L^2(\mathbb{R})$ be the set of initial conditions such that the Cauchy problem [\[reddyneq\]](#reddyneq){reference-type="eqref" reference="reddyneq"} has a global solution. Then there exists $\varepsilon > 0$ sufficiently small such that if the pair $(\theta_0, v_0) \in {\mathcal{J}}$ satisfies $$\| \theta_0 - \overline{\theta}\|_{H^1} + \| v_0 \|_{L^2} < \varepsilon,$$ then the solution to [\[reddyneq\]](#reddyneq){reference-type="eqref" reference="reddyneq"} with initial condition $(\theta(x,0), \partial_t \theta(x,0)) = (\theta_0, v_0)$ satisfies for any $t > 0$, $$\| \theta (\cdot, t) - \overline{\theta}(\cdot + \delta) \|_{H^1} \leq C \exp(- \omega t),$$ for some shift $\delta \in \mathbb{R}$ and constants $C, \omega > 0$ that may depend on $(\theta_0, v_0)$ and $\varepsilon$.* **Remark 4**. It should be noticed that we are not proving global existence of solutions for a given small initial perturbation. Theorem [Theorem 3](#maintheorem){reference-type="ref" reference="maintheorem"} states that any global solution emanating from a small initial perturbation of the static Néel profile, if it exists, must decay to a translation of the profile. This type of behavior is also called *orbital* stability (or stability *in shape*), as initial perturbations decay to an element of the orbit or manifold generated by the static wave which, in this case, is the one-dimensional manifold of translations. The existence of global solutions can be studied using standard semigroup techniques and with the help of the decay estimates established in this work; we do not pursue such an analysis here. Instead, we focus on the stability problem alone. # The linearized operator around the static Néel wall's phase {#seclinearopL} In this section, we study the linearized operator around the static Néel wall's phase.
We examine its main properties and locate its resolvent set and spectrum. Notably, we prove that it is a relatively compact perturbation of an asymptotic operator, a property that will play a key role later on. We start by recalling some standard definitions from spectral theory which can be found in the classical literature on the subject [@Kat80; @ReedS2; @ReedS1]. **Definition 5**. Let ${\mathcal{T}}$ and ${\mathcal{S}}$ be two linear operators from a Banach space $X$ to a Banach space $Y$. It is said that ${\mathcal{S}}$ is relatively bounded with respect to ${\mathcal{T}}$ (or simply ${\mathcal{T}}$-bounded) if $D({\mathcal{T}})\subset D({\mathcal{S}})$ and $$\|{\mathcal{S}}u\| \leq a\|u\|+b\|{\mathcal{T}}u\|, \qquad \forall \, u\in D({\mathcal{T}}),$$ where $a, b$ are nonnegative constants. The greatest lower bound $b_0$ of all possible constants $b$ is called the relative bound of ${\mathcal{S}}$ with respect to ${\mathcal{T}}$ (or simply the ${\mathcal{T}}$-bound of ${\mathcal{S}}$). **Definition 6**. Let ${\mathcal{L}}\in \mathscr{C}(X,Y)$ be a closed linear operator between Banach spaces $X$ and $Y$. The resolvent set $\rho({\mathcal{L}})$, the point spectrum $\sigma_\mathrm{\tiny{pt}}({\mathcal{L}})$ and the essential spectrum $\sigma_\mathrm{\tiny{ess}}({\mathcal{L}})$ are defined as: $$\begin{aligned} \rho({\mathcal{L}}) &:= \{\lambda \in \mathbb{C}\, : \, {\mathcal{L}}- \lambda \,\text{ is injective and onto, and } ({\mathcal{L}}- \lambda)^{-1} \, \text{is bounded} \, \},\\ \sigma_\mathrm{\tiny{pt}}({\mathcal{L}}) &:= \{ \lambda \in \mathbb{C}\,: \; {\mathcal{L}}- \lambda \,\text{ is Fredholm with index zero and has a non-trivial kernel} \},\\ \sigma_\mathrm{\tiny{ess}}({\mathcal{L}}) &:= \{ \lambda \in \mathbb{C}\,: \; {\mathcal{L}}- \lambda \,\text{ is either not Fredholm or has index different from zero} \}.
\end{aligned}$$ The spectrum of ${\mathcal{L}}$ is the set $\sigma({\mathcal{L}}) := \sigma_\mathrm{\tiny{ess}}({\mathcal{L}}) \cup \sigma_\mathrm{\tiny{pt}}({\mathcal{L}})$. If $\lambda \in \sigma_\mathrm{\tiny{pt}}({\mathcal{L}})$ we refer to it as an *eigenvalue*. **Remark 7**. Several definitions of essential spectrum are in use. This definition is due to Weyl [@We10] (see also [@Kat80; @KaPro13]), and has the advantage that the remaining spectrum, namely $\sigma_\mathrm{\tiny{pt}}({\mathcal{L}})$, is a discrete set of isolated eigenvalues. It is also to be observed that $\rho({\mathcal{L}}) = \mathbb{C}\backslash \sigma({\mathcal{L}})$ because the operator ${\mathcal{L}}- \lambda$ is closed (cf. Kato [@Kat80], p. 167). **Definition 8**. Let $X$ be a Banach space and assume that ${\mathcal{T}}$ and ${\mathcal{T}}_0$ are two closed and linear operators on $X$. The operator ${\mathcal{T}}$ is a relatively compact perturbation of ${\mathcal{T}}_0$ if $({\mathcal{T}}_0-{\mathcal{T}})(\lambda \mathrm{I}- {\mathcal{T}}_0)^{-1}:X\to X$ is compact for some $\lambda \in \rho\left ( {\mathcal{T}}_0 \right )$. ## Basic properties It can be shown (cf. 
Capella *et al.* [@CMO07]) that the linearization of the mapping $\nabla \mathcal{E}(\theta)$ around the static Néel wall's phase $\overline{\theta}= \overline{\theta}(x)$ is given by $$\label{defL0} \left\{ \begin{aligned} &{\mathcal{L}}: L^2 \to L^2,\\ &D({\mathcal{L}}) = H^2,\\ & {\mathcal{L}}u := - \partial^2_xu + \mathcal{S}u - c_\theta u, \qquad u \in D({\mathcal{L}}), \end{aligned} \right.$$ where the nonlocal operator ${\mathcal{S}}$ is defined as $$\label{defS} \left\{ \begin{aligned} &{\mathcal{S}}: L^2 \to L^2,\\ &D({\mathcal{S}}) = H^1,\\ & {\mathcal{S}}u := \sin \overline{\theta}(1+(-\Delta)^{1/2}) (u\sin \overline{\theta}), \quad u \in D({\mathcal{S}}), \end{aligned} \right.$$ and $c_\theta = c_\theta(x)$, $x \in \mathbb{R}$, is the coefficient defined as $$\label{defctheta} c_\theta := \cos \overline{\theta}(1+(-\Delta)^{1/2}) \cos \overline{\theta}.$$ It can be easily shown that $c_\theta = c_\theta(x)$ is a real and uniformly bounded coefficient in $H^2$ (see [@CMO07]). Notice that the non-local operator, $1 + (-\Delta)^{1/2} : L^2 \to L^2$, is defined through $$(1 + (-\Delta)^{1/2}) u := ((1+|\xi|)\widehat{u}(\xi))^\vee,$$ for any $u$ in its natural domain, $D({\mathcal{S}}) = H^1$, and since $D(-\partial_x^2) = H^2 \subset H^1$, the natural domain of definition of ${\mathcal{L}}$ is $D({\mathcal{L}}) = H^2$. Therefore, we regard ${\mathcal{L}}$ as a densely defined operator in $L^2$ with domain $D({\mathcal{L}}) = H^2$. For notational convenience we denote $$s_{\theta}(x) := \sin \overline{\theta}(x), \qquad x \in \mathbb{R},$$ which is also real, smooth and bounded for all $x \in \mathbb{R}$. The next lemma shows a relation between the Hilbert transform and the half-Laplacian which will be used later on. We present it without proof, inasmuch as the latter can be found in many references (see, e.g., [@CCH21; @King-v1-09; @Ner75]). **Lemma 9**. 
*Let ${\mathcal{H}}: L^2\to L^2$ be the Hilbert transform given by $$u\mapsto \mathrm{P.V.}\,\frac{1}{\pi} \int_\mathbb{R}\frac{u(s)}{x-s} \, ds.$$ Then, ${\mathcal{H}}$ is an isometry on $L^2$. Moreover, if $u\in H^1$ then $$(-\Delta)^{1/2}u= {\mathcal{H}}(\partial_x u) = \partial_x {\mathcal{H}}u.$$* From the definition of the coefficients $s_\theta$ and $c_\theta$ we immediately have the following properties. **Corollary 10**. *There exists a uniform constant $C > 0$ such that $$\label{boundscs} \begin{aligned} \| c_{\theta}\|_\infty, \| \partial_xc_{\theta} \|_\infty &\leq C,\\ \| s_{\theta}\|_\infty, \| \partial_xs_{\theta} \|_\infty, \| \partial^2_xs_{\theta} \|_\infty &\leq C. \end{aligned}$$* *Proof.* Follows directly from Corollary [Corollary 2](#corthetabded){reference-type="ref" reference="corthetabded"} and the regularity of $c_{\theta}$ and $s_{\theta}$. ◻ **Corollary 11**. *Let $u,v \in D({\mathcal{S}})$. Then $\left \langle {\mathcal{S}}u \, , v \right \rangle_{L^2} = b[s_{\theta}u,s_{\theta}v]$.* *Proof.* This can be easily proved by applying Plancherel's theorem. Indeed, there holds $$\begin{aligned} \langle {\mathcal{S}}u, v \rangle_{L^2} &= \int_\mathbb{R}s_{\theta}(x) (1 + (-\Delta)^{1/2}) (s_{\theta}(x) u) v^* \, dx \\ &= \int_\mathbb{R}(1 + (-\Delta)^{1/2}) (s_{\theta}(x) u) (s_{\theta}(x) v)^* \, dx \\ &= \int_\mathbb{R}(1 + |\xi|) \widehat{(s_{\theta} u)}(\xi) \widehat{(s_{\theta} v)}(\xi)^* \, d\xi\\ &= b[s_{\theta}u,s_{\theta}v], \end{aligned}$$ as claimed. ◻ The following proposition summarizes the basic properties of the linearized operator ${\mathcal{L}}$ and the Néel wall's phase, which have already been proved in [@CMO07]. **Proposition 12**. 
*The operator ${\mathcal{L}}$ and the static Néel wall's phase $\overline{\theta}$ satisfy:* - *[\[propaa\]]{#propaa label="propaa"} $\partial_x \overline{\theta}\in D({\mathcal{L}})$ with ${\mathcal{L}}\partial_x \overline{\theta}= 0$.* - *[\[propbb\]]{#propbb label="propbb"} For all $f \in L^2$ such that $f \perp \partial_x \overline{\theta}$ in $L^2$ there exists a solution $u \in H^2$ to the equation ${\mathcal{L}}u = f$. The solution is unique up to a constant multiple of $\partial_x \overline{\theta}$.* - *[\[propcc\]]{#propcc label="propcc"} There exists a uniform constant $\Lambda_0 > 0$ such that if $u \in H^1$ and $\langle u, \partial_x \overline{\theta}\rangle_{L^2} = 0$, then $$\label{L0bound} \langle {\mathcal{L}}u, u\rangle_{L^2} \geq \Lambda_0 \|u\|_{L^2}^2.$$* - *[\[propdd\]]{#propdd label="propdd"} Let $f \in \{\partial_x \overline{\theta}\}^\perp \subset L^2$. Then the equation ${\mathcal{L}}u = f$ has a strong solution $u \in H^2$, unique up to a constant multiple of $\partial_x \overline{\theta}$. Moreover, if $u \in \{\partial_x \overline{\theta}\}^\perp$, then $$\label{H2bound} \|u\|_{H^2} \leq C \|f\|_{L^2},$$ for some $C > 0$.* *Proof.* The proof follows from Lemmata 4 and 5, together with Proposition 1 in [@CMO07]. ◻ Next, we are going to verify that the operator ${\mathcal{L}}$ is self-adjoint. For that purpose, we remind the reader that it is well-known that the Laplacian, $- \Delta = -\partial_x^2$, is essentially self-adjoint in $L^2$ when defined on $C_0^\infty(\mathbb{R})$, but it is actually self-adjoint on its maximal domain, $D(-\partial^2_x) = H^2 \subset L^2$. This property can be easily verified using the Fourier transform, which unitarily diagonalizes the Laplacian (see, e.g., Kato [@Kat80], section V-5.2, pp. 299--302). First, we have the following observation. **Lemma 13**. 
*The operator ${\mathcal{S}}: L^2 \to L^2$ is symmetric.* *Proof.* We recall that ${\mathcal{S}}$ is symmetric if and only if its domain is dense and $\langle v, \mathcal{S}u \rangle_{L^2} = \langle \mathcal{S}v, u \rangle_{L^2}$ for all $u, v \in D({\mathcal{S}})$. Since $D({\mathcal{S}}) = H^1$ is dense in $L^2$ we only need to verify the latter property. But Corollary [Corollary 11](#cor:operatorS){reference-type="ref" reference="cor:operatorS"} and the hermiticity of $b$ yield $$\left \langle {\mathcal{S}}u \, , v \right \rangle_{L^2} = b[s_{\theta}u, s_{\theta}v] =b[s_{\theta}v, s_{\theta}u]^* =\left \langle {\mathcal{S}}v \, , u \right \rangle_{L^2}^* =\left \langle u \, , {\mathcal{S}}v \right \rangle_{L^2},$$ for all $u, v \in H^1$, as claimed. ◻ We now verify that the operator ${\mathcal{L}}$ is self-adjoint through the application of Kato-Rellich's theorem twice. **Theorem 14**. *The operator ${\mathcal{L}}: L^2 \to L^2$ with domain $D({\mathcal{L}}) = H^2$ is self-adjoint.* *Proof.* First, note that ${\mathcal{L}}$ is clearly a symmetric operator, because its domain is dense in $L^2$ and there holds $$\begin{aligned} \langle {\mathcal{L}}u, v \rangle_{L^2} &= \langle - \partial^2_xu, v \rangle_{L^2} + \langle {\mathcal{S}}u, v \rangle_{L^2} + \langle c_{\theta} u, v \rangle_{L^2},\\ &=\langle u, - \partial_x^2 v \rangle_{L^2} + \langle u, {\mathcal{S}}v \rangle_{L^2} + \langle u, c_{\theta} v \rangle_{L^2},\\ &= \langle u, {\mathcal{L}}v \rangle_{L^2}, \end{aligned}$$ for all $u, v \in H^2$, after integration by parts, application of Lemma [Lemma 13](#lemSsym){reference-type="ref" reference="lemSsym"} and the fact that $c_{\theta}$ is real. Now, it is well-known that for every $u \in H^2$ there holds the estimate $$\label{Katoineq} \| \partial_xu \|_{L^2} \leq k \| \partial^2_xu \|_{L^2} + \frac{2}{k} \| u \|_{L^2},$$ for any arbitrary $k > 0$ (see Kato [@Kat80], p. 192). 
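As a quick sanity check (not part of the argument), inequality [\[Katoineq\]](#Katoineq){reference-type="eqref" reference="Katoineq"} can be seen at the level of Fourier symbols: $\partial_x$ and $\partial_x^2$ act as multiplication by $i\xi$ and $-\xi^2$, so the pointwise bound $|\xi| \leq k\xi^2 + 2/k$ for all $\xi$ implies [\[Katoineq\]](#Katoineq){reference-type="eqref" reference="Katoineq"} by Plancherel's identity. A minimal numerical sketch (the grid and the sample values of $k$ are arbitrary choices):

```python
import numpy as np

# Symbol-level sanity check of the interpolation inequality above:
# since d/dx and d^2/dx^2 act in Fourier variables as i*xi and -xi^2,
# the pointwise bound |xi| <= k*xi^2 + 2/k (for every xi and k > 0)
# implies ||u'|| <= k ||u''|| + (2/k) ||u|| by Plancherel's identity.
xi = np.linspace(-200.0, 200.0, 400001)   # arbitrary sampling grid
ks = [0.01, 0.1, 1.0, 10.0]               # arbitrary sample values of k
worst = max(float(np.max(np.abs(xi) - (k * xi**2 + 2.0 / k))) for k in ks)
```

By the arithmetic-geometric mean inequality one even has the sharper bound $|\xi| \leq \tfrac{k}{2}\xi^2 + \tfrac{1}{2k}$, so `worst` is strictly negative on any grid.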
Let us denote the operator, $$\left\{ \begin{aligned} &\widetilde{{\mathcal{S}}}: L^2 \to L^2,\\ &D(\widetilde{{\mathcal{S}}}) = H^1,\\ & \widetilde{{\mathcal{S}}}u := s_{\theta} (-\Delta)^{1/2} (s_{\theta} u), \quad u \in D(\widetilde{{\mathcal{S}}}), \end{aligned} \right.$$ so that ${\mathcal{S}}= s_{\theta}^2 \mathrm{I}+ \widetilde{{\mathcal{S}}}$. Following the arguments of Lemma [Lemma 13](#lemSsym){reference-type="ref" reference="lemSsym"}, it is easy to verify that $\widetilde{{\mathcal{S}}}$ is a symmetric operator. Moreover, from Corollary [Corollary 2](#corthetabded){reference-type="ref" reference="corthetabded"}, we observe that $s_{\theta} = \sin \overline{\theta}$ and $\partial_xs_{\theta} = (\partial_x \overline{\theta}) \cos \overline{\theta}$ are uniformly bounded functions for all $x \in \mathbb{R}$, and there exists a constant $C_0 > 0$ such that $\| s_{\theta} \|_\infty \leq 1$ and $\|\partial_xs_{\theta} \|_\infty \leq C_0$. Therefore, $$\begin{aligned} \| \widetilde{{\mathcal{S}}}u \|_{L^2} &\leq \left( \int_\mathbb{R}| (-\Delta)^{1/2} (s_{\theta}(x) u) |^2 \, dx \right)^{1/2} = \left( \int_\mathbb{R}|\xi|^2 | \widehat{(s_{\theta} u)}(\xi) |^2 \, d\xi \right)^{1/2}\\ &\leq \| \partial_x( s_{\theta} u ) \|_{L^2} \leq \| \partial_xs_{\theta} \|_\infty \| u \|_{L^2} + \| s_{\theta} \|_\infty \| \partial_xu \|_{L^2} \leq C_0 \| u \|_{L^2} + \| \partial_xu \|_{L^2}. \end{aligned}$$ Inequality [\[Katoineq\]](#Katoineq){reference-type="eqref" reference="Katoineq"} then yields $$\| \widetilde{{\mathcal{S}}}u \|_{L^2} \leq k \| -\partial^2_xu \|_{L^2} + \Big( C_0 + \frac{2}{k}\Big) \| u \|_{L^2},$$ for all $u \in H^2$ and any arbitrary $k > 0$. Since $D(-\partial_x^2) = H^2 \subset D(\widetilde{{\mathcal{S}}}) = H^1$ and since $k > 0$ is arbitrary, this shows that the symmetric operator $\widetilde{{\mathcal{S}}}$ is relatively bounded with respect to $-\partial_x^2$ with relative bound equal to zero. 
Consequently, we may apply Kato-Rellich's theorem (see Reed and Simon, vol. II [@ReedS2], Theorem X.12, p. 162) to conclude that the operator $\widetilde{{\mathcal{S}}}- \partial_x^2 : L^2 \to L^2$, with domain $D( \widetilde{{\mathcal{S}}}- \partial_x^2) = D(- \partial_x^2) = H^2$, is self-adjoint. Finally, let us write ${\mathcal{L}}= - \partial_x^2 + {\mathcal{S}}+ c_{\theta} \mathrm{I}= - \partial_x^2 + \widetilde{{\mathcal{S}}}+ \beta \mathrm{I}$, where $\beta := s_{\theta}^2 + c_{\theta}$ is a real, smooth and bounded coefficient. Clearly, $$\| \beta u \|_{L^2} \leq \| \beta \|_\infty \| u \|_{L^2} \leq \| \beta \|_\infty \| u \|_{L^2} + k \| (\widetilde{{\mathcal{S}}}- \partial_x^2) u \|_{L^2},$$ for all $u \in H^2$ and for any $k > 0$. Since $D(\widetilde{{\mathcal{S}}}- \partial_x^2) = H^2 \subset D(\beta \mathrm{I}) = L^2$, we conclude that the symmetric operator $\beta \mathrm{I}$ is $(\widetilde{{\mathcal{S}}}- \partial_x^2)-$bounded with relative bound equal to zero. Upon application, once again, of Kato-Rellich's theorem we conclude that the operator ${\mathcal{L}}= - \partial_x^2 + \widetilde{{\mathcal{S}}}+ \beta \mathrm{I}$, with domain $D({\mathcal{L}}) = H^2$, is self-adjoint. The theorem is proved. ◻ **Corollary 15**. *${\mathcal{L}}$ is a closed operator.* *Proof.* Since every self-adjoint operator is closed (it coincides with its adjoint, which is closed), the conclusion follows from Theorem [Theorem 14](#thmLselfadj){reference-type="ref" reference="thmLselfadj"}. ◻ ## The spectrum of ${\mathcal{L}}$ Thanks to Theorem [Theorem 14](#thmLselfadj){reference-type="ref" reference="thmLselfadj"}, we immediately obtain from basic properties of self-adjoint operators that the $L^2$-spectrum of ${\mathcal{L}}$ is real, $\sigma({\mathcal{L}}) \subset \mathbb{R}$. 
Moreover, from Proposition [Proposition 12](#propL0){reference-type="ref" reference="propL0"} [(a)](#propaa) and Proposition [Proposition 1](#propNeelw){reference-type="ref" reference="propNeelw"} [(c)](#propc), we already know that $\partial_x \overline{\theta}$ is an eigenfunction of ${\mathcal{L}}$ associated to the eigenvalue $\lambda = 0$, referred to as *the translation eigenvalue*. This means that any translation of the Néel wall's phase, $\overline{\theta}(\cdot + \delta)$, remains a Néel wall (that is, a minimizer of the variational problem [\[varprob\]](#varprob){reference-type="eqref" reference="varprob"}, albeit one that may no longer be centered at $x = 0$). Now, if we decompose $L^2 = \{\partial_x \overline{\theta}\}^\perp \oplus \mathrm{span} \{\partial_x \overline{\theta}\}$, and if we suppose that there exists $u \in H^2 \subset L^2$, $u \neq 0$, with $u = u_\perp + \alpha \partial_x \overline{\theta}$ for some $\alpha \in \mathbb{C}$ such that ${\mathcal{L}}u = 0$, then by Proposition [Proposition 12](#propL0){reference-type="ref" reference="propL0"} [(c)](#propcc) there holds $$0 = \langle {\mathcal{L}}(u_\perp + \alpha \partial_x \overline{\theta}), u_\perp + \alpha \partial_x \overline{\theta}\rangle_{L^2} = \langle {\mathcal{L}}u_\perp, u_\perp \rangle_{L^2} \geq \Lambda_0 \|u_\perp\|_{L^2}^2,$$ yielding $u_\perp = 0$. This implies that the geometric multiplicity of $\lambda = 0$ is equal to one. Recalling that for self-adjoint operators on Hilbert spaces the algebraic and geometric multiplicities of an eigenvalue coincide (see Kato [@Kat80], p. 273), we readily obtain the following result. **Corollary 16**. *$\lambda = 0$ is a simple eigenvalue of the operator ${\mathcal{L}}: L^2 \to L^2$, with eigenfunction $\partial_x \overline{\theta}\in D({\mathcal{L}}) = H^2$.* We use this information to prove the following spectral bound. **Lemma 17**.
*The $L^2$-spectrum of ${\mathcal{L}}$ satisfies $\sigma({\mathcal{L}}) \subset \{ 0 \} \cup [\Lambda_0, \infty)$, where $\Lambda_0 > 0$ is the constant from Proposition [Proposition 12](#propL0){reference-type="ref" reference="propL0"} [(c)](#propcc).* *Proof.* Corollary [Corollary 16](#corzeroL0){reference-type="ref" reference="corzeroL0"} implies that $\lambda = 0 \in \sigma_\mathrm{\tiny{pt}}({\mathcal{L}})$ and, therefore, it is an isolated simple eigenvalue. Moreover, we also know that $\sigma({\mathcal{L}})_{|L^2} \subset \mathbb{R}$. Hence, by the spectral decomposition theorem (see Theorem III-6.17, p. 178, in Kato [@Kat80]) we have a decomposition for ${\mathcal{L}}$ according to a decomposition of the Hilbert space, $L^2 = X' \oplus X''$, such that $\sigma({\mathcal{L}})_{|X'} = \{0\}$ and $\sigma({\mathcal{L}})_{|X''} = \sigma({\mathcal{L}})_{|L^2} \backslash \{0\}$, where $X' = {\mathcal{P}}_0 L^2$, $X'' = (\mathrm{I}- {\mathcal{P}}_0) L^2$ and ${\mathcal{P}}_0$ is the spectral projection associated to the eigenvalue $\lambda = 0$, determined by the Dunford integral, $${\mathcal{P}}_0 = - \frac{1}{2\pi i} \int_{\partial \Gamma} (\lambda\mathrm{I}-{\mathcal{L}})^{-1} \, d \lambda,$$ with $\partial \Gamma$ being a simple, rectifiable curve such that $\partial \Gamma \subset \rho({\mathcal{L}})$ and $\Gamma \cap \sigma({\mathcal{L}})_{|L^2} = \{0\}$. Actually, since the eigenvalue is simple (the rank of ${\mathcal{P}}_0$ is equal to one), we have $X' = \text{span} \{ \partial_x \overline{\theta}\} \subset L^2$ and $X'' = \{ \partial_x \overline{\theta}\}^{\perp}$ in $L^2$. Next, we verify that ${\mathcal{L}}_{|X''}$ is also self-adjoint. This restriction of ${\mathcal{L}}$ is defined as $$\left\{ \begin{aligned} &{\mathcal{L}}_{|X''} : X'' \to X'',\\ &D({\mathcal{L}}_{|X''}) = D({\mathcal{L}}) \cap X'' = H^2 \cap X'',\\ & {\mathcal{L}}_{|X''} u := {\mathcal{L}}u, \quad u \in D({\mathcal{L}}_{|X''}). 
\end{aligned} \right.$$ Clearly, ${\mathcal{L}}_{|X''}$ is symmetric because $D({\mathcal{L}}_{|X''})$ is dense in $X'' = \{ \partial_x \overline{\theta}\}^{\perp}$ and ${\mathcal{L}}$ is symmetric. Thus, we apply the basic criterion of self-adjointness: in order to show that ${\mathcal{L}}_{|X''}$ is self-adjoint it suffices to show that $({\mathcal{L}}_{|X''} \pm i)(D({\mathcal{L}}) \cap X'') = X''$ (see, e.g., Theorem VIII.3, p. 256, in Reed and Simon, vol. I [@ReedS1]). But we already know that ${\mathcal{L}}\pm i : D({\mathcal{L}}) \to L^2$ is surjective because ${\mathcal{L}}$ is self-adjoint. Therefore, for $v \in X'' \subset L^2$ there exist elements $u_\pm \in D({\mathcal{L}}) = H^2$ such that $({\mathcal{L}}\pm i ) u_\pm = v$. This implies that $$\begin{aligned} ({\mathcal{L}}\pm i)(\mathrm{I}- {\mathcal{P}}_0) u_\pm = ({\mathcal{L}}\pm i) u_\pm - ({\mathcal{L}}\pm i) {\mathcal{P}}_0 u_\pm &= v - ({\mathcal{L}}{\mathcal{P}}_0 u_\pm \pm i {\mathcal{P}}_0 u_\pm) \\&= v - {\mathcal{P}}_0 ({\mathcal{L}}\pm i ) u_\pm\\&= (\mathrm{I}- {\mathcal{P}}_0) v \, \in X'', \end{aligned}$$ with $(\mathrm{I}- {\mathcal{P}}_0) u_\pm \in X''$. That is, $({\mathcal{L}}_{|X''} \pm i ) : D({\mathcal{L}}) \cap X'' \to X''$ is surjective, and this proves that ${\mathcal{L}}_{|X''}$ is self-adjoint. Finally, from Rayleigh's formula for semi-bounded self-adjoint operators (cf. Kato [@Kat80], p. 278), we have the bound $$\langle {\mathcal{L}}_{|X''} u, u\rangle_{L^2} = \langle {\mathcal{L}}u, u\rangle_{L^2} \geq \Lambda_0 \|u\|_{L^2}^2,$$ for all $u \in D({\mathcal{L}}) \cap X'' = H^2 \cap \{ \partial_x \overline{\theta}\}^{\perp}_{L^2}$ (see Proposition [Proposition 12](#propL0){reference-type="ref" reference="propL0"} [(c)](#propcc)), which implies, in turn, that $\sigma({\mathcal{L}}_{|X''}) \subset [\Lambda_0, \infty)$. Kato's decomposition theorem then yields $\sigma({\mathcal{L}})_{|L^2} \subset \{ 0 \} \cup [\Lambda_0, \infty)$, as claimed.
◻ ## The asymptotic operator ${\mathcal{L}}_\infty$ We now examine the following operator, defined by $$\label{defLinf} \left\{ \begin{aligned} &{\mathcal{L}}_\infty : L^2 \to L^2,\\ &D({\mathcal{L}}_\infty) = H^2,\\ & {\mathcal{L}}_\infty u := - \partial^2_xu + (1+(-\Delta)^{1/2})u, \quad u \in D({\mathcal{L}}_\infty). \end{aligned} \right.$$ This operator results from (formally) taking the limit when $x \to \pm \infty$ in the expression of ${\mathcal{L}}$, recalling that $\overline{\theta}(\pm \infty) = \pm \pi/2$. Let us define the bilinear form $$\label{defbilfA} \begin{aligned} a_\infty[\cdot, \cdot] &: H^1 \times H^1 \to \mathbb{C},\\ a_\infty[u,v] &:= \langle \partial_xu, \partial_xv \rangle_{L^2} + b[u,v], \qquad u,v \in H^1, \end{aligned}$$ where $b[\cdot,\cdot]$ is the bilinear form defined in [\[defbilinearB\]](#defbilinearB){reference-type="eqref" reference="defbilinearB"}. It follows from standard facts that if $f \in L^2$, then the equation $$\label{equLinfty} {\mathcal{L}}_\infty u = f,$$ is endowed with a weak formulation in the space $H^1$ in terms of the bilinear form $a_\infty[\cdot, \cdot]$. Indeed, we say that $u \in H^1$ is a weak solution to [\[equLinfty\]](#equLinfty){reference-type="eqref" reference="equLinfty"} provided that $$\label{weakequLinfty} a_\infty[u,v] = \langle {\mathcal{L}}_\infty u, v \rangle_{L^2} = \langle f, v \rangle_{L^2}, \qquad \forall \, v \in H^1.$$ **Lemma 18**. *The bilinear form $a_\infty[\cdot, \cdot]$ defines an inner product in $H^1$ whose induced norm is equivalent to the standard $H^1$-norm.* *Proof.* First, it is to be noticed that $a_\infty[\cdot, \cdot]$ is complex symmetric, $a_\infty[u,v]^* = a_\infty[v,u]$ for all $u, v \in H^1$, and clearly bilinear. 
Use Plancherel's identity to observe that $$b[u,u] = \int_\mathbb{R}(1 + |\xi|)|\widehat{u}(\xi)|^2 \, d\xi \geq \| u \|^2_{L^2},$$ yielding $a_\infty[u,u] = \| \partial_xu \|_{L^2}^2 + b[u,u] \geq \| \partial_xu \|_{L^2}^2 + \|u \|_{L^2}^2 = \| u \|^2_{H^1}$ for all $u \in H^1$. On the other hand, by Young's inequality, $|\xi| \leq \tfrac{1}{2}(1 + \xi^2)$, it follows that $$b[u,u] = \int_\mathbb{R}(1 + |\xi|)|\widehat{u}(\xi)|^2 \, d\xi \leq \int_\mathbb{R}\big( \tfrac{3}{2} + \tfrac{1}{2}\xi^2\big) |\widehat{u}(\xi)|^2 \, d\xi,$$ yielding, $$a_\infty[u,u] = \| \partial_xu \|_{L^2}^2 + b[u,u] \leq \| \partial_xu \|_{L^2}^2 + \int_\mathbb{R}\big( \tfrac{3}{2} + \tfrac{1}{2}\xi^2\big) |\widehat{u}(\xi)|^2 \, d\xi = \tfrac{3}{2} \| u \|_{H^1}^2.$$ Finally, we notice that $a_\infty[u,u] = 0$ if and only if $u = 0$. This shows the result. ◻ We now apply the previous lemma to show that [\[equLinfty\]](#equLinfty){reference-type="eqref" reference="equLinfty"} has a unique strong solution in $H^2$. **Lemma 19**. *For every $f \in L^2$ there exists a unique solution $u \in H^2$ to [\[equLinfty\]](#equLinfty){reference-type="eqref" reference="equLinfty"}. Moreover, there exists a uniform constant $C > 0$ such that $$\label{estufH2} \| u \|_{H^2} \leq C \| f \|_{L^2}.$$* *Proof.* The bilinear form is continuous in $H^1$ because $$\begin{aligned} |a_\infty[u,v]| \leq | \langle \partial_xu, \partial_xv \rangle_{L^2} | + |b[u,v]| &\leq \| \partial_xu \|_{L^2}\| \partial_xv \|_{L^2} + \int_\mathbb{R}(1 + |\xi|) |\widehat{u}(\xi)| |\widehat{v}(\xi)| \, d\xi\\ &\leq \| \partial_xu \|_{L^2}\| \partial_xv \|_{L^2} + \| u \|_{L^2}\| v \|_{L^2} + \| \partial_xu \|_{L^2}\| v \|_{L^2}\\ &\leq C \| u \|_{H^1}\| v \|_{H^1}, \end{aligned}$$ for all $u, v \in H^1$. Moreover, in Lemma [Lemma 18](#lemAH1){reference-type="ref" reference="lemAH1"} we have already verified that $a_\infty[\cdot,\cdot]$ is $H^1$-elliptic.
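The two-sided bounds in the proof of Lemma [Lemma 18](#lemAH1){reference-type="ref" reference="lemAH1"} reduce, by Plancherel's identity, to the elementary symbol inequalities $1+\xi^2 \leq \xi^2+1+|\xi| \leq \tfrac{3}{2}(1+\xi^2)$. A short numerical sanity check, not part of the proof (the grid is an arbitrary choice and a small tolerance absorbs floating-point rounding near $|\xi| = 1$, where the upper bound is attained):

```python
import numpy as np

# By Plancherel, a_inf[u,u] = int (xi^2 + 1 + |xi|) |u^(xi)|^2 dxi while
# ||u||_{H^1}^2 = int (1 + xi^2) |u^(xi)|^2 dxi, so the norm equivalence
# of Lemma 18 reduces to pointwise bounds between the two symbols.
xi = np.linspace(-500.0, 500.0, 1000001)  # arbitrary sampling grid
sym = xi**2 + 1 + np.abs(xi)
lower_ok = bool(np.all(1 + xi**2 <= sym))
# tolerance for floating-point rounding; equality holds exactly at |xi| = 1
upper_ok = bool(np.all(sym <= 1.5 * (1 + xi**2) + 1e-9))
```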
Thus, by the Lax-Milgram theorem, for each $f \in L^2$ there exists a unique weak solution $u \in H^1$ to [\[weakequLinfty\]](#weakequLinfty){reference-type="eqref" reference="weakequLinfty"}. This solution solves $${\mathcal{L}}_\infty u = - \partial^2_xu + (1 + (-\Delta)^{1/2}) u = f,$$ in the sense of distributions. Therefore, for any test function $\varphi \in C_0^\infty(\mathbb{R})$ there holds $$\langle \partial_xu, \partial_x\varphi \rangle_{L^2} + \langle (1+(-\Delta)^{1/2})u, \varphi \rangle_{L^2} = \langle f, \varphi \rangle_{L^2}.$$ By Plancherel's identity this implies that $$\int_\mathbb{R}\big[ (1 + |\xi| + \xi^2) \widehat{u}(\xi) - \widehat{f}(\xi) \big] \widehat{\varphi}(\xi)^* \, d\xi = 0,$$ for all $\varphi \in C_0^\infty(\mathbb{R})$. Hence, $(1 + |\xi| + \xi^2) \widehat{u}(\xi) = \widehat{f}(\xi)$ a.e. in $\xi \in \mathbb{R}$. Therefore, $$\| u \|_{H^2}^2 = \int_\mathbb{R}(1 + \xi^2)^2 |\widehat{u}(\xi)|^2 \, d\xi = \int_\mathbb{R}\Big( \frac{1+\xi^2}{1+|\xi|+\xi^2}\Big)^2 |\widehat{f}(\xi)|^2 \, d\xi \leq \|f \|_{L^2}^2 < \infty.$$ This yields $u \in H^2$, shows that $u$ is a strong solution to [\[equLinfty\]](#equLinfty){reference-type="eqref" reference="equLinfty"}, and gives estimate [\[estufH2\]](#estufH2){reference-type="eqref" reference="estufH2"}. The lemma is proved. ◻ **Lemma 20**. *The asymptotic operator ${\mathcal{L}}_\infty : L^2 \to L^2$ with domain $D({\mathcal{L}}_\infty) = H^2$ is endowed with the following properties:* - *${\mathcal{L}}_\infty$ is self-adjoint. [\[propaaa\]]{#propaaa label="propaaa"}* - *$\ker {\mathcal{L}}_\infty = \{ 0 \}$. [\[propbbb\]]{#propbbb label="propbbb"}* - *$\mathrm{ran}\,({\mathcal{L}}_\infty) = L^2$. [\[propccc\]]{#propccc label="propccc"}* - *$\sigma({\mathcal{L}}_\infty)_{|L^2} \subset [1,\infty)$. [\[propddd\]]{#propddd label="propddd"}* - *$({\mathcal{L}}_\infty)^{-1}$ exists and is bounded. [\[propeee\]]{#propeee label="propeee"}* *Proof.* Notice that ${\mathcal{L}}_\infty$ is clearly symmetric with dense domain.
Therefore, the proof of property [(a)](#propaaa) follows a Kato-Rellich argument, very similar to that in the proof of Theorem [Theorem 14](#thmLselfadj){reference-type="ref" reference="thmLselfadj"}, and we omit it. Items [(b)](#propbbb) and [(c)](#propccc) follow directly from Lemma [Lemma 19](#lemsolLinf){reference-type="ref" reference="lemsolLinf"}, since ${\mathcal{L}}_\infty$ is self-adjoint and its spectrum is real, $\sigma({\mathcal{L}}_\infty)_{|L^2} \subset \mathbb{R}$. If $u \in D({\mathcal{L}}_\infty) = H^2$, then $\langle {\mathcal{L}}_\infty u, u \rangle_{L^2} = a_\infty[u,u] \geq \| u \|_{H^1}^2 \geq \| u \|_{L^2}^2$, showing that ${\mathcal{L}}_\infty$ is semi-bounded. By Rayleigh's spectral bound for semi-bounded self-adjoint operators in Hilbert spaces (cf. [@Kat80], p. 278) we obtain $$\inf \sigma({\mathcal{L}}_\infty)_{|L^2} = \inf_{0 \neq v \in D({\mathcal{L}}_\infty)} \frac{\langle {\mathcal{L}}_\infty v, v \rangle_{L^2}}{\| v \|_{L^2}^2} = \inf_{0 \neq v \in D({\mathcal{L}}_\infty)} \frac{a_\infty[v,v]}{\| v \|_{L^2}^2} \geq 1.$$ This shows [(d)](#propddd). Property [(e)](#propeee) follows directly from [(d)](#propddd), inasmuch as it implies that $\lambda = 0 \in \rho({\mathcal{L}}_\infty)$. This completes the proof of the lemma. ◻ ## Relative compactness In this section we prove that the linearized operator around the static Néel wall's phase, ${\mathcal{L}}$, is a relatively compact perturbation of ${\mathcal{L}}_\infty$. This fundamental property will be useful later on. First, we establish an elementary result. **Lemma 21**. *Let $\mu \in \mathbb{C}\backslash [1,\infty)$ be fixed. Then the function $$\left\{ \begin{aligned} g_\mu &: [0,\infty) \to \mathbb{R},\\ g_\mu(\eta) &:= \frac{\eta^2+1}{|\eta^2 + \eta +1 - \mu|}, \qquad \eta \geq 0, \end{aligned} \right.$$ is bounded above, that is, there exists a positive constant $\widetilde{C} = \widetilde{C}(\mu) > 0$ such that $|g_\mu(\eta)| \leq \widetilde{C}(\mu)$ for all $\eta \geq 0$.
Moreover, if $\mathrm{Re}\,\mu < 0$, then the constant $\widetilde{C}$ may be chosen independently of $\mu$.* *Proof.* Fix $\mu \in \mathbb{C}\backslash [1,\infty)$. Then the function $g_\mu$ is continuous in $\eta \in [0,\infty)$. Since $$g_\mu(0) = \frac{1}{|1-\mu|}, \qquad \lim_{\eta \to \infty} g_\mu(\eta) = 1,$$ continuity yields the existence of $\widetilde{C} = \widetilde{C}(\mu) > 0$ such that $|g_\mu(\eta)| \leq \widetilde{C}(\mu)$ for all $\eta \geq 0$. If $\mathrm{Re}\,\mu < 0$, then we have $1 - \mathrm{Re}\,\mu > 1$ and therefore $$|\eta^2 + \eta + 1 - \mu | \geq | \mathrm{Re}\,( \eta^2 + \eta + 1 - \mu)| = \eta^2 + \eta + 1 - \mathrm{Re}\,\mu > \eta^2 + 1,$$ yielding $|g_\mu(\eta)| \leq 1$ for all $\eta \geq 0$. ◻ **Lemma 22**. *Fix $\mu \in \mathbb{C}\backslash [1,\infty) \subset \rho({\mathcal{L}}_\infty)$. Then for every $f \in L^2$ there exists a unique solution $u \in H^2$ to the equation $({\mathcal{L}}_\infty - \mu) u = f$. Moreover, there exists a constant $C = C(\mu) > 0$ such that $$\label{H2est} \| u \|_{H^2} \leq C(\mu) \| f \|_{L^2}.$$* *Proof.* From Lemma [Lemma 20](#lempropsLinf){reference-type="ref" reference="lempropsLinf"} [(d)](#propddd), if we fix $\mu \in \mathbb{C}\backslash [1,\infty)$, then $\mu \in \rho({\mathcal{L}}_\infty)$ and $({\mathcal{L}}_\infty - \mu) : D({\mathcal{L}}_\infty) = H^2 \subset L^2 \to L^2$ is onto. Hence, for every $f \in L^2$ there exists $u \in H^2$ such that $({\mathcal{L}}_\infty - \mu) u = f$. This implies that $(\xi^2 + 1 + |\xi| - \mu) \widehat{u}(\xi) = \widehat{f}(\xi)$.
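Both the bound on $g_\mu$ from Lemma 21 and the multiplier identity $(\xi^2 + 1 + |\xi| - \mu)\widehat{u}(\xi) = \widehat{f}(\xi)$ lend themselves to a direct numerical illustration. The sketch below is not part of the proof: the sample values of $\mu$ are arbitrary, and a large periodic grid stands in for $\mathbb{R}$.

```python
import numpy as np

# (i) Lemma 21: g_mu is bounded on [0, infty), with bound 1 when Re(mu) < 0.
eta = np.linspace(0.0, 1000.0, 1000001)

def g(mu):
    return (eta**2 + 1) / np.abs(eta**2 + eta + 1 - mu)

g_bounded = all(np.isfinite(np.max(g(mu))) for mu in [0.9, 0.5j, -1 + 2j])
g_neg_real = all(np.max(g(mu)) <= 1.0 for mu in [-0.5, -1 + 3j, -10 - 1j])

# (ii) Lemma 22: solve (L_inf - mu)u = f through the Fourier multiplier
# xi^2 + 1 + |xi| - mu on a periodic grid (an approximation of the line).
dom, N = 80.0, 2048
x = np.linspace(-dom / 2, dom / 2, N, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(N, d=dom / N)
mu = -1 + 2j
symbol = xi**2 + 1 + np.abs(xi) - mu

f = np.exp(-x**2) * np.cos(x)                          # smooth decaying datum
u = np.fft.ifft(np.fft.fft(f) / symbol)                # complex-valued solution
residual = np.fft.ifft(symbol * np.fft.fft(u)) - f     # (L_inf - mu)u - f
res_err = float(np.max(np.abs(residual)))
```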
Noticing that for $\mu \in \mathbb{C}\backslash [1,\infty)$ we have $\xi^2 + 1 + |\xi| - \mu \neq 0$ for all $\xi \in \mathbb{R}$, we obtain the estimate $$\begin{aligned} \| u \|_{H^2}^2 &= \int_\mathbb{R}(1 + \xi^2)^2 |\widehat{u}(\xi)|^2 \, d\xi = \int_\mathbb{R}|g_\mu(|\xi|)|^2 |\widehat{f}(\xi)|^2 \, d\xi\\ &\leq \widetilde{C}(\mu)^2 \int_\mathbb{R}|\widehat{f}(\xi)|^2 \, d\xi = C(\mu) \| f \|_{L^2}^2, \end{aligned}$$ thanks to Lemma [Lemma 21](#lemboundg){reference-type="ref" reference="lemboundg"}. ◻ **Lemma 23**. *${\mathcal{L}}_\infty - {\mathcal{L}}$ continuously maps $H^2$ into $H^1$.* *Proof.* Take any $u \in H^2$. We then have $$\label{diff_L} ({\mathcal{L}}_\infty - {\mathcal{L}}) u = (1+(-\Delta)^{1/2})u - s_{\theta}(1+(-\Delta)^{1/2})(s_{\theta}u) + c_{\theta} u.$$ Apply bounds [\[boundscs\]](#boundscs){reference-type="eqref" reference="boundscs"} to obtain $$\begin{aligned} \| c_{\theta} u \|_{H^1}^2 &= \| c_{\theta} u \|_{L^2}^2 + \| \partial_x (c_{\theta} u ) \|_{L^2}^2 \\ &\leq \| c_{\theta} \|_\infty^2 \big( \|u \|_{L^2}^2 + \| \partial_xu \|_{L^2}^2\big) + \|\partial_xc_{\theta} \|_\infty^2 \|u \|_{L^2}^2 \leq C \| u\|_{H^2}^2, \end{aligned}$$ for some $C > 0$. Moreover, $$\begin{aligned} \| (1+(-\Delta)^{1/2})u \|_{H^1}^2 &= \int_\mathbb{R}(1+\xi^2) \big| \big((1+(-\Delta)^{1/2})u\big)^{\wedge}(\xi) \big|^2 \, d \xi \\ &\leq 2 \int_\mathbb{R}(1+\xi^2)^2 |\widehat{u}(\xi)|^2 \, d \xi = 2 \| u\|_{H^2}^2. 
\end{aligned}$$ Apply bounds [\[boundscs\]](#boundscs){reference-type="eqref" reference="boundscs"} once again to obtain $$\begin{aligned} \| s_{\theta}(1+(-\Delta)^{1/2})(s_{\theta}u) \|_{H^1}^2 &\leq \| s_{\theta}(1+(-\Delta)^{1/2})(s_{\theta}u) \|_{L^2}^2 + \\ & \quad + 2\| (\partial_xs_{\theta}) (1+(-\Delta)^{1/2})(s_{\theta}u) \|_{L^2}^2 + \\ & \quad + 2\| s_{\theta} \partial_x (1+(-\Delta)^{1/2})(s_{\theta}u) \|_{L^2}^2\\ &\leq C \| (1+(-\Delta)^{1/2})(s_{\theta}u) \|_{H^1}^2\\ &\leq 2C \int_\mathbb{R}(1+\xi^2)^2 |\widehat{(s_{\theta}u)}(\xi)|^2 \, d \xi = 2C \| s_{\theta} u \|_{H^2}^2 \leq \widetilde{C} \| u \|_{H^2}^2, \end{aligned}$$ for some $\widetilde{C} > 0$. Combine these estimates to conclude that there exists a constant $C > 0$ such that $$\label{estLLinf} \| ({\mathcal{L}}_\infty - {\mathcal{L}}) u \|_{H^1} \leq C \| u \|_{H^2},$$ for all $u \in D({\mathcal{L}}) = H^2$. This shows the result. ◻ At this point, let us recall two useful theorems, one due to Pego [@P85a] and the other due to Kolmogorov [@Kolm31] and Riesz [@Riesz33-o] (see, for example, [@HaHe10] and the references therein), describing totally bounded sets in $L^2$ and in $L^p$, respectively. **Theorem 24** (Pego [@P85a]). *Let ${\mathcal{F}}$ be a bounded set of $L^2(\mathbb{R}^n)$ and $\widehat{{\mathcal{F}}}:=\{\widehat{u}\,|\, u\in {\mathcal{F}}\}$. The functions in ${\mathcal{F}}$ are $L^2$-equicontinuous if and only if the functions in $\widehat{{\mathcal{F}}}$ decay uniformly in $L^2$, and vice versa.* *Proof.* See Theorem 1 in [@P85a]. ◻ **Theorem 25** (Kolmogorov-Riesz [@Kolm31; @Riesz33-o]).
*A bounded set ${\mathcal{F}}\subset L^p(\mathbb{R}^n)$ with $1\leq p<\infty$ is totally bounded if and only if* - *($L^p$-equicontinuity)* *$\lim_{h\to 0}\int_{\mathbb{R}^n}|u(x+h)-u(x)|^p\,dx = 0$ uniformly for $u\in {\mathcal{F}}$, and* - *($L^p$-uniform decay)* *$\lim_{R\to\infty}\int_{|x|>R}|u(x)|^p\,dx = 0$ uniformly for $u\in {\mathcal{F}}$.* *Proof.* See the proof of Theorem 5 in Hanche-Olsen and Holden [@HaHe10]. ◻ We now prove a result which will be helpful in the proof of Theorem [Theorem 27](#themcompct_L){reference-type="ref" reference="themcompct_L"} below. **Proposition 26**. *Let ${\mathcal{F}}$ be a bounded set in $H^1$ and let $\phi\in H^1$ be a fixed function such that $\|\partial_x \phi\|_{L^\infty}<\infty$. Then the set $\phi{\mathcal{F}}:=\{\phi u \,|\, u\in {\mathcal{F}}\}$ is totally bounded in $L^2$.* *Proof.* First, we prove that $\lim_{|x|\to \infty} \phi(x) = 0$. By density, there exists a sequence $\{u_n\}_{n \in \mathbb{N}}\subset C^{\infty}_0(\mathbb{R})$ that converges to $\phi$ in $H^1$. Thanks to the Sobolev inequality, $\|v\|_{L^\infty}^2\leq 2\left \| v \right \|_{L^2}\left \| \partial_xv \right \|_{L^2}$ for $v\in H^1$, the $H^1$-convergence of $\{u_n\}$ to $\phi$ is improved to $L^\infty$-convergence, and for every $\epsilon>0$ there exists $N\in \mathbb{N}$ such that $$\|\phi-u_n\|_{L^\infty} <\epsilon \quad \mbox{for } n>N.$$ Since each $u_n$ has compact support, there exists $R>0$ such that $u_n(x) = 0$ for $|x|>R$. Hence, for $|x|>R$ and $n>N$, $$|\phi(x)| = |\phi(x)-u_n(x)|\leq \|\phi-u_n\|_{L^\infty}< \epsilon.$$ Therefore, $\lim_{|x|\to \infty} \phi(x) = 0$. It is also easy to see that $\phi{\mathcal{F}}$ is bounded in $L^2$. Indeed, by hypothesis, there exists $\widetilde{M}>0$ such that $\sup_{u\in {\mathcal{F}}}\left \| u \right \|_{H^1}<\widetilde{M}$.
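The one-dimensional Sobolev (Agmon) inequality invoked in the proof, $\|v\|_{L^\infty}^2 \leq 2\|v\|_{L^2}\|\partial_x v\|_{L^2}$, admits a quick numerical illustration; the test functions below are arbitrary decaying profiles and the integrals are plain Riemann sums on a truncated grid:

```python
import numpy as np

# Illustration of the Agmon/Sobolev inequality on R:
#   ||v||_inf^2 <= 2 ||v||_{L^2} ||v'||_{L^2},
# checked for a few decaying profiles (arbitrary choices).
x = np.linspace(-20.0, 20.0, 400001)
dx = x[1] - x[0]

def check(v):
    dv = np.gradient(v, dx)                 # finite-difference derivative
    l2 = np.sqrt(np.sum(v**2) * dx)         # Riemann-sum L^2 norm
    dl2 = np.sqrt(np.sum(dv**2) * dx)
    return np.max(np.abs(v))**2 <= 2 * l2 * dl2

agmon_ok = all(check(v) for v in [np.exp(-x**2),
                                  1.0 / (1.0 + x**2),
                                  np.exp(-np.abs(x)) * np.cos(x)])
```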
Moreover, since $\|\phi\|_{L^\infty}< \infty$ by the Sobolev inequality, we obtain $$\sup_{v\in \phi{\mathcal{F}}}\left \| v \right \|_{L^2}\leq \sup_{u\in {\mathcal{F}}}\left \| \phi u \right \|_{L^2}\leq \widetilde{M} \|\phi\|_{L^\infty}.$$ Second, we prove that $\phi{\mathcal{F}}$ is $L^2$-equicontinuous. By Sobolev imbedding theorems, we can assume that $\phi\in C^0\cap L^\infty$. Also, by hypothesis $\|\partial_x \phi\|_{L^\infty}<\infty$. Hence, $$\left \| \phi u \right \|_{H^1}^2 \leq 2\int_\mathbb{R}(\phi^2 +(\partial_x\phi)^2)(u^2+(\partial_xu)^2)\,dx\leq 2(\|\phi\|_{L^\infty}^2+\|\partial_x \phi\|_{L^\infty}^2)\left \| u \right \|_{H^1}^2<M^2,$$ for every $u\in {\mathcal{F}}$, where $M := \sqrt{2(\|\phi\|_{L^\infty}^2+\|\partial_x \phi\|_{L^\infty}^2)}\, \widetilde{M}$. Thus $\phi{\mathcal{F}}$ is bounded in $H^1$. Then, for every $v\in \phi{\mathcal{F}}$ $$\begin{aligned} \int_{\{|\xi|>R\}}|\hat{v}(\xi)|^2 \, d\xi & \leq \frac{1}{1+R^2}\int_{\mathbb{R}}(1+\xi^2)|\hat{v}(\xi)|^2\, d\xi = \frac{\left \| v \right \|_{H^1}^2}{1+R^2} \leq \frac{M^2}{1+R^2}. \end{aligned}$$ Thus, the functions in $\widehat{\phi{\mathcal{F}}}$ are $L^2$-uniformly decaying. By Theorem [Theorem 24](#themPego){reference-type="ref" reference="themPego"}, the functions in $\phi{\mathcal{F}}$ are $L^2$-equicontinuous. Finally, we prove that the functions in $\phi{\mathcal{F}}$ are $L^2$-uniformly decaying. Indeed, if $v\in\phi{\mathcal{F}}$ then $v = \phi u$ for some $u\in {\mathcal{F}}\subset H^1$.
This yields $$\int_{|x|>R}|v(x)|^2\, dx = \left \| \boldsymbol{1}_{\{|x|>R\}}(\phi u) \right \|_{L^2}^2 \leq \|\boldsymbol{1}_{\{|x|>R\}}\phi\|_{L^\infty}^2\left \| u \right \|_{L^2}^2.$$ Again, since $\left \| u \right \|_{L^2}\leq \widetilde{M}$ for every $u\in {\mathcal{F}}$ and $\phi(x)\to 0$ as $|x|\to \infty$, we conclude that $$\lim_{R\to \infty}\int_{|x|>R}|\phi u|^2\, dx \leq \lim_{R\to \infty} \widetilde{M}^2\|\boldsymbol{1}_{\{|x|>R\}}\phi\|_{L^\infty}^2=0$$ uniformly for $u\in{\mathcal{F}}$. Thus, by Theorem [Theorem 25](#themKolmogorov){reference-type="ref" reference="themKolmogorov"}, the set $\phi{\mathcal{F}}$ is totally bounded in $L^2$. ◻ We now prove the main result of this section. **Theorem 27**. *The operator ${\mathcal{L}}_\infty - {\mathcal{L}}:H^2 \to H^1$ is compact in $L^2$.* *Proof.* Let ${\mathcal{F}}$ be a bounded subset of $H^2$, namely $\sup_{u\in {\mathcal{F}}}\left \| u \right \|_{H^2}<M$ for some $M>0$. Then, fix $\delta>0$ and let $g_\delta\in C^{\infty}$ be an increasing and odd function such that $g_{\delta}(x)=x/|x|$ for $|x|\geq \delta$.
With these tools at hand and assuming that ${\mathcal{T}}$ stands for the operator $(1+(-\Delta)^{1/2})$, the operator ${\mathcal{L}}_\infty - {\mathcal{L}}$ is easily recast, by adding and subtracting the terms $g_{\delta}(x){\mathcal{T}}(s_{\theta}u)+g_{\delta}(x){\mathcal{T}}(g_{\delta}(x)u)$, as $$\label{recast_L} ({\mathcal{L}}_\infty - {\mathcal{L}}) u = {\mathcal{Q}}_1 u + {\mathcal{Q}}_2 u + {\mathcal{Q}}_3 (s_{\theta}u)+{\mathcal{Q}}_4 u,$$ where $$\begin{aligned} \label{aux_A12} {\mathcal{Q}}_1 u &:={\mathcal{T}}u-g_{\delta}{\mathcal{T}}(g_{\delta}u), &{\mathcal{Q}}_2 u&:= g_{\delta}{\mathcal{T}}[(g_{\delta}-s_{\theta})u],\\ \label{aux_A34} {\mathcal{Q}}_3 u&:=[g_{\delta}-s_{\theta}]{\mathcal{T}}u, &{\mathcal{Q}}_4 u&:= c_{\theta} u.\end{aligned}$$ Since the set of compact operators between two Banach spaces is a linear manifold, we shall prove that each operator ${\mathcal{Q}}_i :H^2\to H^1$ is compact in $L^2$ by showing that the set ${\mathcal{Q}}_i {\mathcal{F}}:=\{{\mathcal{Q}}_i u\,|\, u\in {\mathcal{F}}\}\subset H^1$ is totally bounded in $L^2$, for each $1 \leq i \leq 4$. First, we analyze ${\mathcal{Q}}_4$. Notice that ${\mathcal{Q}}_4 {\mathcal{F}}$ is totally bounded by Proposition [Proposition 26](#proptotbon){reference-type="ref" reference="proptotbon"} since ${\mathcal{F}}\subset H^2$, $c_\theta$ is a smooth function which belongs to $H^2$ and $\lim_{x\to\pm\infty}c_{\theta}(x) = 0$ (see the beginning of the proof of Proposition [Proposition 26](#proptotbon){reference-type="ref" reference="proptotbon"}). Second, we examine ${\mathcal{Q}}_3$. Indeed, the set ${\mathcal{T}}{\mathcal{F}}:=\{{\mathcal{T}}u \,|\, u \in {\mathcal{F}}\}\subset H^1$ satisfies $\sup_{v\in {\mathcal{T}}{\mathcal{F}}} \left \| v \right \|_{H^1}\leq \sqrt{2}M$ because $\left \| {\mathcal{T}}u \right \|_{H^1}^2\leq 2\left \| u \right \|_{H^2}^2$. Then, ${\mathcal{T}}{\mathcal{F}}$ is bounded in $H^1$ and, consequently, also in $L^2$.
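The algebraic identity behind the splitting [\[recast_L\]](#recast_L){reference-type="eqref" reference="recast_L"} can be checked numerically: with ${\mathcal{T}}$ realized as a Fourier multiplier on a periodic grid, and with smooth placeholder profiles standing in for $s_\theta$, $c_\theta$ and $g_\delta$ (arbitrary tanh- and sech-shaped functions; the true Néel wall profile is not used here), the four pieces must reassemble ${\mathcal{T}}u - s_\theta{\mathcal{T}}(s_\theta u) + c_\theta u$ exactly:

```python
import numpy as np

# The decomposition (recast_L) is pure algebra in T, s, g, c: the terms
# g*T(g*u) and g*T(s*u) are added and subtracted, so the four pieces must
# telescope back to T u - s T(s u) + c u for ANY profiles s, g, c.
dom, N = 60.0, 1024
x = np.linspace(-dom / 2, dom / 2, N, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(N, d=dom / N)

def T(v):  # 1 + (-Delta)^{1/2} as a Fourier multiplier
    return np.fft.ifft((1 + np.abs(xi)) * np.fft.fft(v)).real

s = np.tanh(x)            # placeholder for s_theta = sin(theta_bar)
c = 1.0 / np.cosh(x)      # placeholder for c_theta
g = np.tanh(x / 0.1)      # smoothed sign function, like g_delta
u = np.exp(-x**2) * np.sin(x)

lhs = T(u) - s * T(s * u) + c * u                 # (L_inf - L) u
Q1 = T(u) - g * T(g * u)
Q2 = g * T((g - s) * u)
Q3su = (g - s) * T(s * u)
Q4 = c * u
gap = float(np.max(np.abs(lhs - (Q1 + Q2 + Q3su + Q4))))
```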
Now, observe that $${\mathcal{Q}}_3 {\mathcal{F}}= \{[g_{\delta}-s_{\theta}]{\mathcal{T}}u \,|\, u\in {\mathcal{F}}\} = \{[g_{\delta}-s_{\theta}]v \,|\, v\in {\mathcal{T}}{\mathcal{F}}\}=[g_{\delta}-s_{\theta}]{\mathcal{T}}{\mathcal{F}},$$ and that $\lim_{x\to\pm\infty} (g_\delta-s_{\theta})(x) =0$ since $\lim_{x\to\pm\infty} \overline{\theta}(x) =\pm\pi/2$. In order to apply Proposition [Proposition 26](#proptotbon){reference-type="ref" reference="proptotbon"} and to conclude that ${\mathcal{Q}}_3 {\mathcal{F}}$ is totally bounded we only need to show that $g_\delta-s_{\theta}\in H^1$ and $\partial_x(g_\delta-s_{\theta})\in L^\infty$. This follows by standard calculus. It is easily seen that $|\theta/|\theta|-\sin \theta|<\cos\theta$ for every $\theta\in (-\pi/2,0)\cup (0,\pi/2)$, and since $\overline{\theta}$ is a strictly increasing function with $\overline{\theta}(0)=0$, one concludes that $x/|x| = \mathrm{sgn}\,(\overline{\theta}(x))$ for every $x\neq 0$. These two facts readily imply that $(x/|x|-s_{\theta}(x))^2<\cos^2 \overline{\theta}(x)$ a.e. in $x\in \mathbb{R}$, and $x/|x|-s_{\theta}\in L^2$. Recalling that $g_\delta(x) = x/|x|$ for $|x|\geq \delta$ and $|g_\delta(x)-s_{\theta}(x)|<2$ for $|x|<\delta$, one concludes that $g_\delta-s_{\theta}\in L^2\cap L^\infty$. In the same fashion, and using Proposition [Proposition 1](#propNeelw){reference-type="ref" reference="propNeelw"} we can readily prove that $\partial_x(g_\delta-s_{\theta})\in L^2\cap L^\infty$. Therefore, we conclude that $[g_\delta-s_{\theta}]{\mathcal{T}}{\mathcal{F}}$ is totally bounded. The compactness of the linear operator ${\mathcal{Q}}_3$ in $L^2$ then follows. Moreover, observe that $s_{\theta}\in H^1\cap L^{\infty}$ and the compactness of ${\mathcal{Q}}_3$ imply the compactness of the linear operator $u\mapsto {\mathcal{Q}}_3 (s_{\theta}u)$ involved in [\[recast_L\]](#recast_L){reference-type="eqref" reference="recast_L"}. Let us now study the operator ${\mathcal{Q}}_2$. 
We claim that ${\mathcal{T}}^{-1}: H^2\to H^3$ is continuous. Indeed, since $(1+|\xi|)^2\geq 1+|\xi|^2$, we have $$\begin{aligned} \|{\mathcal{T}}^{-1} u\|^2_{H^3} =\int_\mathbb{R}\frac{1+\xi^6}{(1+|\xi|)^2} |\hat{u}(\xi)|^2\,d\xi &\leq \int_\mathbb{R}(1-\xi^2+|\xi|^4) |\hat{u}(\xi)|^2\,d\xi \leq \left \| u \right \|_{H^2}^2. \end{aligned}$$ Notice that ${\mathcal{Q}}_2 = g_{\delta}(x){\mathcal{T}}{\mathcal{Q}}_3 {\mathcal{T}}^{-1}$, and since $g_\delta {\mathcal{T}}:H^1\to L^2$ is bounded, the compactness of ${\mathcal{Q}}_2$ is proved by showing that ${\mathcal{Q}}_3 :H^3\to H^2$ is compact in $H^1$. Let $\{u_j\}_{j>0}\subset H^3$ be a bounded sequence; then by the second step (compactness of ${\mathcal{Q}}_3$ in $L^2$), there exists a subsequence $\{u_{j_k}\}_{k>0}$ and $u\in L^2$ such that $\left \| u-{\mathcal{Q}}_3 u_{j_k} \right \|_{L^2}\to 0$ as $k\to \infty$. Since $\{u_{j_k}\}_{k>0}\subset H^3$, then $\{{\mathcal{Q}}_3 u_{j_k}\}_{k>0}\subset H^2$ and $$\label{DerExp} \begin{aligned} \partial_x({\mathcal{Q}}_3 u_{j_k}) =\partial_x([g_\delta-s_{\theta}]{\mathcal{T}}u_{j_k}) = & \partial_x(g_\delta-s_{\theta}){\mathcal{T}}u_{j_k} + [g_\delta-s_{\theta}]{\mathcal{T}}\partial_xu_{j_k} \\ = & \partial_x(g_\delta-s_{\theta}){\mathcal{T}}u_{j_k} + {\mathcal{Q}}_3 \partial_xu_{j_k}, \end{aligned}$$ where we used the product rule together with the fact that $\partial_x$ and ${\mathcal{T}}$ commute, $$\widehat{\partial_x{\mathcal{T}}u} = i\xi(1+|\xi|)\hat{u}(\xi) = (1+|\xi|)i\xi\hat{u}(\xi) = (1+|\xi|)\widehat{\partial_x u} =\widehat{{\mathcal{T}}\partial_x u}.$$ It is not difficult to see that $\partial_x(g_\delta-s_{\theta})\in H^1$ with $\|\partial_x(g_\delta-s_{\theta})\|_{L^\infty}<\infty$. Hence, by Proposition [Proposition 26](#proptotbon){reference-type="ref" reference="proptotbon"}, the linear operator $\partial_x(g_\delta-s_{\theta}){\mathcal{T}}:H^3\to H^2$ is compact in $L^2$. 
Therefore, there exist two functions, $v$ and $w$, both in $L^2$, and a subsequence denoted as $\{u_\ell\}_{\ell>0}$, such that $$\lim_{\ell\to\infty} \left \| v-\partial_x(g_\delta-s_{\theta}){\mathcal{T}}u_{\ell} \right \|_{L^2} = 0, \quad \mbox{and}\quad \lim_{\ell\to\infty} \left \| w-{\mathcal{Q}}_3 \partial_xu_{\ell} \right \|_{L^2} =0.$$ We will prove that $u\in H^1$ and $\partial_xu =v+w$. The argument follows by density: let $\phi\in C_0^\infty$, so that $$\left \langle u \, , \partial_x\phi \right \rangle_{L^2} = \left \langle u-[g_\delta-s_{\theta}]{\mathcal{T}}u_{\ell} \, , \partial_x\phi \right \rangle_{L^2} +\left \langle [g_\delta-s_{\theta}]{\mathcal{T}}u_{\ell} \, , \partial_x\phi \right \rangle_{L^2}.$$ Now, take the limit as $\ell\to \infty$ and use the facts that $\left \| u-(g_\delta-s_{\theta}){\mathcal{T}}u_{\ell} \right \|_{L^2} \to 0$ and that strong convergence implies weak convergence, in order to obtain $$\begin{aligned} \left \langle u \, , \partial_x\phi \right \rangle_{L^2} = \lim_{\ell \to \infty }\left \langle [g_\delta-s_{\theta}]{\mathcal{T}}u_{\ell} \, , \partial_x\phi \right \rangle_{L^2} = &-\lim_{\ell \to \infty }\left \langle \partial_x([g_\delta-s_{\theta}]{\mathcal{T}}u_{\ell}) \, , \phi \right \rangle_{L^2} \\ = &-\lim_{\ell \to \infty }\left \langle \partial_x(g_\delta-s_{\theta}){\mathcal{T}}u_{\ell}+{\mathcal{Q}}_3 \partial_xu_{\ell} \, , \phi \right \rangle_{L^2} \\ =&- \left \langle v+w \, , \phi \right \rangle_{L^2}. \end{aligned}$$ Hence, for every bounded sequence $\{u_j\}_{j>0}\subset H^3$ there exists a convergent subsequence $\{{\mathcal{Q}}_3 u_\ell\}_{\ell>0}$ in $H^1$. In other words, ${\mathcal{Q}}_3 :H^3\to H^2$ is compact in $H^1$. Finally, we study the operator ${\mathcal{Q}}_1$. 
From the definition of ${\mathcal{Q}}_1$, we have $${\mathcal{Q}}_1 u={\mathcal{T}}u-g_{\delta}(x){\mathcal{T}}(g_{\delta}(x)u) = (1-g_\delta^2){\mathcal{T}}u + g_\delta(g_\delta {\mathcal{T}}-{\mathcal{T}}g_\delta)u = (1-g_\delta^2){\mathcal{T}}u + g_\delta[g_\delta, {\mathcal{T}}]u.$$ Notice that $1-g_\delta^2(x) = 0$ for $|x|\geq \delta$, while both $1-g_\delta^2$ and its derivative are bounded for $|x|<\delta$. Hence by Proposition [Proposition 26](#proptotbon){reference-type="ref" reference="proptotbon"}, the operator $(1-g_\delta^2){\mathcal{T}}:H^2\to H^1$ is compact in $L^2$. For the remaining term, it will be enough to prove that the commutator $[g_\delta, {\mathcal{T}}]:H^2\to H^1$ is compact in $L^2$, since multiplication by $g_\delta$ is continuous. Indeed, we notice that the term $[g_\delta, {\mathcal{T}}]$ can be written in terms of the Hilbert transform ${\mathcal{H}}$ (see Lemma [Lemma 9](#lemma:Hilbert){reference-type="ref" reference="lemma:Hilbert"}) as $$\begin{aligned}\relax [g_\delta, {\mathcal{T}}]u = [g_\delta, (-\Delta)^{1/2}]u = [g_\delta, {\mathcal{H}}\circ \partial_x]u &= g_\delta {\mathcal{H}}(\partial_x u)- {\mathcal{H}}(\partial_x(g_\delta u))\\ &= g_\delta {\mathcal{H}}(\partial_x u) - {\mathcal{H}}(g_\delta \partial_xu)- {\mathcal{H}}((\partial_xg_\delta)u)\\ &= [g_\delta, {\mathcal{H}}](\partial_x u) - {\mathcal{H}}((\partial_xg_\delta)u). \end{aligned}$$ Observe that $(\partial_xg_\delta)\mathrm{I}:H^2\to H^1$ is compact in $L^2$ since the hypothesis in Proposition [Proposition 26](#proptotbon){reference-type="ref" reference="proptotbon"} is satisfied by choosing $\phi =\partial_xg_\delta$. Also, since the Hilbert transform is continuous on $L^2$, we conclude that ${\mathcal{H}}\circ (\partial_xg_\delta)\mathrm{I}: H^2\to H^1$ is compact on $L^2$. Thus we must prove that the linear operator $[g_\delta, {\mathcal{H}}]\partial_x:H^2\to H^1$ is compact in $L^2$. 
Notice that $\{[g_\delta, {\mathcal{H}}]\partial_x u \,|\, u\in {\mathcal{F}}\}$ is $L^2$-equicontinuous since this set is bounded in $H^1$. This readily follows by applying the properties of the Hilbert transform to the terms in $\left \| [g_\delta, {\mathcal{H}}]\partial_xu \right \|_{H^1}^2$. Indeed, we have the estimates $$\left \| [g_\delta, {\mathcal{H}}]\partial_xu \right \|_{L^2}^2 \leq 2\left \| g_\delta {\mathcal{H}}\partial_xu \right \|_{L^2}^2 + 2\left \| {\mathcal{H}}(g_\delta\partial_xu) \right \|_{L^2}^2 \leq 4\|g_\delta\|_{L^\infty}^2\left \| \partial_xu \right \|_{L^2}^2,$$ and $$\begin{aligned} \left \| \partial_x([g_\delta, {\mathcal{H}}]\partial_xu) \right \|_{L^2}^2 \leq & \, 2\left \| \partial_x(g_\delta {\mathcal{H}}\partial_xu) \right \|_{L^2}^2 + 2\left \| \partial_x( {\mathcal{H}}(g_\delta\partial_xu)) \right \|_{L^2}^2\\ \leq & \, 4\left \| (\partial_xg_\delta) {\mathcal{H}}\partial_xu \right \|_{L^2}^2+4\left \| g_\delta {\mathcal{H}}\partial^2_xu \right \|_{L^2}^2 + 2\left \| {\mathcal{H}}\partial_x(g_\delta\partial_xu) \right \|_{L^2}^2\\ \leq & \, 4\|\partial_xg_\delta\|_{L^\infty}^2\left \| \partial_xu \right \|_{L^2}^2+4\|g_\delta\|_{L^\infty}^2\left \| \partial^2_xu \right \|_{L^2}^2 + 2\left \| \partial_x(g_\delta\partial_xu) \right \|_{L^2}^2\\ \leq & \, 8\|\partial_xg_\delta\|_{L^\infty}^2\left \| \partial_xu \right \|_{L^2}^2+8\|g_\delta\|_{L^\infty}^2\left \| \partial^2_xu \right \|_{L^2}^2. \end{aligned}$$ It remains to show that functions in the set $\{[g_\delta, {\mathcal{H}}]\partial_x u \,|\, u\in {\mathcal{F}}\}$ are $L^2$-uniformly decaying. For simplicity, let $v$ denote $u_x$. Hence, $v\in {\mathcal{F}}' := \{ u_x \, : \, u \in {\mathcal{F}}\}$, which is a bounded set in $H^1$. 
We recall that $$\begin{aligned} \pi[g_\delta, {\mathcal{H}}]\partial_x u =\pi[g_\delta, {\mathcal{H}}]v = & \, \mathrm{P.V.}\,\, \int_\mathbb{R}\frac{g_\delta(x)-g_\delta(y)}{x-y}v(y) \, dy \\ = & \lim_{\epsilon\to 0}\int_{|y-x|>\epsilon} \frac{g_\delta(x)-g_\delta(y)}{x-y}v(y) \, dy\\ = & \lim_{\epsilon\to 0}\int_{|h|>\epsilon} \frac{g_\delta(x+h)-g_\delta(x)}{h}v(x+h) \, dh\\ = & \lim_{\epsilon\to 0}\int_{|h|>\epsilon} \frac{1}{h}\int_x^{x+h}g_\delta'(t)\,dt \, v(x+h) \, dh. \end{aligned}$$ Since we are interested in the behavior of $\left \| \boldsymbol{1}_{|x|>R}[g_\delta, {\mathcal{H}}]\partial_x u \right \|_{L^2}^2$ for $R\to \infty$, we assume that $R>2\delta$ and $\epsilon<\delta$. For $x>R$ the integral is split as $$\begin{aligned} \pi[g_\delta, {\mathcal{H}}]v(x) = & \int_{-\infty}^{-x+\delta} \!\frac{1}{h}\int_{x}^{x+h} \!g_\delta'(t)\,dt \, v(x+h) \, dh + \\ & + \lim_{\epsilon\to 0} \left[\int_{-x+\delta}^{-\epsilon} \frac{1}{h}\int_{x}^{x+h} \!g_\delta'(t)\,dt \, v(x+h) \, dh + \int_{\epsilon}^{\infty}\frac{1}{h}\int_{x}^{x+h}\!g_\delta'(t)\,dt \, v(x+h) \, dh \right]. \end{aligned}$$ Notice that the last two integrals are equal to zero since $\mathop{\mathrm{supp}}g_\delta' \subset [-\delta,\delta]$ and $\delta<x+h$ for $h>\delta-x$. 
Moreover if $C := \int_{|x|\leq \delta}g_\delta'(x) \, dx$ then $$\begin{aligned} \pi[g_\delta, {\mathcal{H}}]v(x) = & \int_{-\infty}^{-x-\delta} \frac{1}{h}\int_{x}^{x+h}g_\delta'(t)\,dt \, v(x+h) \, dh + \int_{-x-\delta}^{-x+\delta} \frac{1}{h}\int_{x}^{x+h}g_\delta'(t)\,dt \, v(x+h) \, dh\\ = &\int_{-\infty}^{-x-\delta} \frac{1}{h}\int_{\delta}^{-\delta}g_\delta'(t)\,dt \, v(x+h) \, dh + \int_{-x-\delta}^{-x+\delta} \frac{1}{h}\int_{\delta}^{x+h}g_\delta'(t)\,dt \, v(x+h) \, dh\\ = &-C\int_{-\infty}^{-x-\delta} \frac{v(x+h)}{h} \, dh + \int_{-x-\delta}^{-x+\delta} \frac{1}{h}\int_{\delta}^{x+h}g_\delta'(t)\,dt \, v(x+h) \, dh.\\ \end{aligned}$$ Now, we use the variable change $y=x+h$, the fundamental theorem of calculus, and the fact that $g_\delta(\delta) = 1$, to obtain $$\begin{aligned} \pi[g_\delta, {\mathcal{H}}]v(x) = &-C\int_{-\infty}^{-\delta} \frac{v(y)}{y-x} \, dy + \int_{-\delta}^{\delta} \frac{1}{y-x}\int_{\delta}^{y}g_\delta'(t)\,dt \, v(y) \, dy\\ = &-C\int_{-\infty}^{-\delta} \frac{v(y)}{y-x} \, dy - \int_{-\delta}^{\delta} \frac{1-g_\delta(y)}{y-x} \, v(y) \, dy.\\ \end{aligned}$$ A similar analysis applies for $x<-R$. Thus, for $|x|>R>2\delta$ there holds $$\pi[g_\delta, {\mathcal{H}}]v(x) =\begin{cases} C{\displaystyle\int^{\infty}_{\delta} \frac{v(y)}{y-x} \, dy} + {\displaystyle\int_{-\delta}^{\delta} \frac{g_\delta(y)+1}{y-x} \, v(y) \, dy} & \mbox{for }x<-R,\\ \\ -C{\displaystyle\int_{-\infty}^{-\delta} \frac{v(y)}{y-x} \, dy} - {\displaystyle\int_{-\delta}^{\delta} \frac{1-g_\delta(y)}{y-x} \, v(y) \, dy}& \mbox{for }x>R.\\ \end{cases}$$ These expressions can be recast as, $$\label{eq:commutator} \pi[g_\delta, {\mathcal{H}}]v(x) = C\int^{\infty}_{\delta} \frac{v(-\mathrm{sgn}\,(x)y)}{y+|x|} \, dy + \int_{-\delta}^{\delta} \frac{g_\delta(y)-\mathrm{sgn}\,(x)}{y-x} \, v(y) \, dy.$$ Notice that both integrals are convergent. 
Indeed, since $v= u_x$, then an integration by parts yields $$\begin{aligned} \int^{\infty}_{\delta} \frac{v(-\mathrm{sgn}\,(x)y)}{y+|x|} \, dy= & \int^{\infty}_{\delta} \frac{u'(-\mathrm{sgn}\,(x)y)}{y+|x|} \, dy \\ = & \left.\frac{-\mathrm{sgn}\,(x)u(-\mathrm{sgn}\,(x)y)}{y+|x|}\right|_{\delta}^\infty +\int^{\infty}_{\delta} \frac{-\mathrm{sgn}\,(x)u(-\mathrm{sgn}\,(x)y)}{(y+|x|)^2} \, dy \\ = & -\mathrm{sgn}\,(x)\left[-\frac{u(-\mathrm{sgn}\,(x)\delta)}{\delta+|x|} +\int^{\infty}_{\delta} \frac{u(-\mathrm{sgn}\,(x)y)}{(y+|x|)^2} \, dy\right] \\ = & -\mathrm{sgn}\,(x)\int^{\infty}_{\delta} \frac{u(-\mathrm{sgn}\,(x)y)-u(-\mathrm{sgn}\,(x)\delta)}{(y+|x|)^2} \, dy. \end{aligned}$$ Since ${\mathcal{F}}$ is bounded in $H^2$, then $\|u\|_{L^\infty}\leq \left \| u \right \|_{H^1}\leq M$ for every $u\in {\mathcal{F}}$, which implies that $$\left|\int^{\infty}_{\delta} \frac{v(-\mathrm{sgn}\,(x)y)}{y+|x|} \, dy\right| \leq 2M\int^{\infty}_{\delta} \frac{1}{(y+|x|)^2} \, dy = \frac{2M}{\delta +|x|}.$$ This yields $$\label{eq:commutator_p1} \int_{|x|>R}\left(\int^{\infty}_{\delta} \frac{v(-\mathrm{sgn}\,(x)y)}{y+|x|} \, dy\right)^2 dx \leq 4M^2\int_{|x|>R}\frac{dx}{(\delta+|x|)^2} \leq \frac{8M^2}{\delta+R}.$$ Now, we analyze the second integral in [\[eq:commutator\]](#eq:commutator){reference-type="eqref" reference="eq:commutator"}. By Jensen's inequality and the fact that $\|g_\delta\|_{L^\infty}\leq 1$, one gets $$\begin{aligned} \int_{|x|>R}\left(\int_{-\delta}^{\delta} \frac{g_\delta(y)-\mathrm{sgn}\,(x)}{y-x} \, v(y) \, dy\right)^2 \! dx &\leq 2\delta \int_{|x|>R}\int_{-\delta}^{\delta} \frac{\left(g_\delta(y)-\mathrm{sgn}\,(x)\right)^2}{\left(y-x\right)^2} \, v(y)^2 \, dy dx\\ &\leq 8\delta \int_{|x|>R}\int_{-\delta}^{\delta} \frac{1}{\left(y-x\right)^2} \, v(y)^2 \, dy dx. 
\end{aligned}$$ Since $v\in L^2$ and $(y-x)^2 \geq (|x|-\delta)^2$ for every $y\in(-\delta,\delta)$, we obtain $$\label{eq:commutator_p2} \int_{|x|>R}\left(\int_{-\delta}^{\delta} \frac{g_\delta(y)-\mathrm{sgn}\,(x)}{y-x} \, v(y) \, dy\right)^2 dx\leq 8\delta\left \| v \right \|_{L^2}^2 \int_{|x|>R}\frac{dx}{\left(|x|-\delta\right)^2} \leq \frac{16\delta M^2}{R-\delta}.$$ We easily see that Young's inequality implies that $$\begin{aligned} \int_{|x|>R}([g_\delta, {\mathcal{H}}]v(x))^2 \,dx &\leq \frac{2C^2}{\pi^2}\int_{|x|>R}\left(\int^{\infty}_{\delta} \frac{v(-\mathrm{sgn}\,(x)y)}{y+|x|} \, dy\right)^2 dx\\ & \; + \frac{2}{\pi^2}\int_{|x|>R}\left(\int_{-\delta}^{\delta} \frac{g_\delta(y)-\mathrm{sgn}\,(x)}{y-x} \, v(y) \, dy\right)^2 dx\\ &\leq \frac{16M^2(C^2+2\delta)}{\pi^2(R-\delta)}. \end{aligned}$$ Therefore, it follows that the set $\{[g_\delta, {\mathcal{H}}]\partial_xu\,|\, u \in {\mathcal{F}}\}$ is totally bounded and the operator $[g_\delta, {\mathcal{H}}]\partial_x:H^2\to H^1$ is compact in $L^2$. Consequently, $[g_\delta,{\mathcal{T}}]:H^2 \to H^1$ and ${\mathcal{Q}}_1 :H^2 \to H^1$ are both compact in $L^2$. This completes the proof. ◻ **Theorem 28**. *The operator ${\mathcal{L}}$ is a relatively compact perturbation of ${\mathcal{L}}_\infty$.* *Proof.* Let $\mu\in \rho\left ( \mathop{\mathrm{\mathcal{L_\infty}}} \right )$, hence $(\mu - {\mathcal{L}}_\infty)^{-1} : L^2 \to H^2$ is a continuous linear operator and, by Theorem [Theorem 27](#themcompct_L){reference-type="ref" reference="themcompct_L"}, $\mathop{\mathrm{\mathcal{L_\infty}}}-\mathop{\mathrm{\mathcal{L}}}:H^2\to H^1$ is compact in $L^2$. This implies that the operator $(\mathop{\mathrm{\mathcal{L_\infty}}}-\mathop{\mathrm{\mathcal{L}}})(\mu - {\mathcal{L}}_\infty)^{-1}$ is compact on $L^2$ (see, e.g., [@Kat80] p. 158). ◻ **Remark 29**. 
An immediate consequence of this relative compactness result is that the essential spectrum of ${\mathcal{L}}$ and the spectrum of ${\mathcal{L}}_\infty$ coincide, by virtue of Weyl's essential spectrum theorem (see, e.g., [@KaPro13], p. 29). Although we do not apply the latter to the operator ${\mathcal{L}}$ *per se*, Theorem [Theorem 28](#themrelcompct){reference-type="ref" reference="themrelcompct"} will play a key role in the location of the essential spectrum of a block operator matrix, as we shall see below. # Perturbation equations and spectral stability {#secspectral} In order to establish the perturbation equations, consider a solution $\overline{\theta}(x) + u(x,t)$ to the reduced dynamic equation [\[reddyneq\]](#reddyneq){reference-type="eqref" reference="reddyneq"}. Here $u$ is the perturbation of the static Néel wall's phase which, by the boundary conditions on the real line, must satisfy $$\label{bcu} u(\pm \infty, t) = 0, \qquad t > 0.$$ Upon substitution into [\[reddyneq\]](#reddyneq){reference-type="eqref" reference="reddyneq"}, we obtain the following nonlinear equation for the perturbation, $$\label{nlpert} \partial_t^2 u + \nu \partial_t u + \nabla {\mathcal{E}}(\overline{\theta}+ u) = 0.$$ In view of [\[defL0\]](#defL0){reference-type="eqref" reference="defL0"}, equation [\[nlpert\]](#nlpert){reference-type="eqref" reference="nlpert"} can be recast as $$\partial_t^2 u + \nu \partial_t u + {\mathcal{L}}u + {\mathcal{N}}(u) = 0,$$ where ${\mathcal{L}}u$ is the linearization around $\overline{\theta}$ of $\nabla \mathcal{E}(\overline{\theta}+ u)$ acting on the perturbation $u$, and $${\mathcal{N}}(u) := \nabla {\mathcal{E}}(\overline{\theta}+ u) - {\mathcal{L}}u = O(u^2),$$ comprises the nonlinear terms. In view of the form of the operator [\[defL0\]](#defL0){reference-type="eqref" reference="defL0"} we regard the perturbation equation as a nonlinear wave equation. 
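For clarity, the decomposition above is the formal Taylor expansion of $\nabla {\mathcal{E}}$ about $\overline{\theta}$; the zeroth-order term vanishes because the static Néel wall's phase is a critical point of the energy functional (a formal expansion, the rigorous definition of ${\mathcal{L}}$ being the one in [\[defL0\]](#defL0){reference-type="eqref" reference="defL0"}): $$\nabla {\mathcal{E}}(\overline{\theta} + u) = \underbrace{\nabla {\mathcal{E}}(\overline{\theta})}_{=\,0} + \underbrace{D\big(\nabla {\mathcal{E}}\big)(\overline{\theta})\,u}_{=\,{\mathcal{L}}u} + \underbrace{{\mathcal{N}}(u)}_{=\,O(u^2)}.$$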
By making the (standard) change of variables $v = \partial_t u$, solving the perturbation equation [\[nlpert\]](#nlpert){reference-type="eqref" reference="nlpert"} is equivalent to solving the nonlinear hyperbolic system $$\label{NLsyst} \partial_t \begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix}0 & \mathrm{I}\\ - {\mathcal{L}}& - \nu \mathrm{I}\end{pmatrix} \begin{pmatrix}u \\ v\end{pmatrix} - \begin{pmatrix}0 \\ {\mathcal{N}}(u)\end{pmatrix},$$ in an appropriate space, which will be determined later. ## The spectral problem By linearizing equation [\[nlpert\]](#nlpert){reference-type="eqref" reference="nlpert"} around the Néel wall's phase, we obtain the following equation for the perturbation, $$\label{linequ} \partial_t^2 u + \nu \partial_t u + {\mathcal{L}}u = 0,$$ which is equivalent to the following linear system in the $(u,v)$ variables, $$\label{linsyst} \partial_t \begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix}0 & \mathrm{I}\\ - {\mathcal{L}}& - \nu \mathrm{I}\end{pmatrix} \begin{pmatrix}u \\ v\end{pmatrix}.$$ We specialize the linearized equation [\[linequ\]](#linequ){reference-type="eqref" reference="linequ"} to perturbations of the form $e^{\lambda t}u(x)$, with $\lambda \in \mathbb{C}$ and $u \in X$, where $X$ is a Banach space to be determined below. Substituting into [\[linequ\]](#linequ){reference-type="eqref" reference="linequ"}, we obtain the following spectral problem $$\label{spectralu} (\lambda^2 + \nu \lambda) u + {\mathcal{L}}u = 0.$$ **Remark 30**. Under the substitution $\lambda = i \zeta$, equation [\[spectralu\]](#spectralu){reference-type="eqref" reference="spectralu"} can be written in terms of a *quadratic operator pencil*, $\widetilde{{\mathcal{T}}} u = 0$ (cf. 
Markus [@Ma88]), with $\widetilde{{\mathcal{T}}} = \widetilde{{\mathcal{T}}}_0 + \zeta \widetilde{{\mathcal{T}}}_1 + \zeta^2 \widetilde{{\mathcal{T}}}_2$, and $\widetilde{{\mathcal{T}}}_0 = {\mathcal{L}}$, $\widetilde{{\mathcal{T}}}_1 = i \nu \mathrm{I}$, $\widetilde{{\mathcal{T}}}_2 = -\mathrm{I}$. The transformation $v = \lambda u$ (the spectral equivalent of the change of variables $v = \partial_t u$) defines an appropriate cartesian product of the base space which allows us to write equation [\[spectralu\]](#spectralu){reference-type="eqref" reference="spectralu"} as a genuine eigenvalue problem of the form $$\label{evproblem} \lambda \begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} 0 & \mathrm{I}\\ - {\mathcal{L}}& -\nu \mathrm{I}\end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix} =: {\mathcal{A}}\begin{pmatrix} u \\ v \end{pmatrix}.$$ The matrix operator ${\mathcal{A}}$ is often called the companion matrix to the pencil $\widetilde{{\mathcal{T}}}$ (see [@BrJoK14; @KHKT13] for further information). Clearly, equation [\[evproblem\]](#evproblem){reference-type="eqref" reference="evproblem"} is the spectral equation associated to the linear system [\[linsyst\]](#linsyst){reference-type="eqref" reference="linsyst"}. We shall refer to both [\[spectralu\]](#spectralu){reference-type="eqref" reference="spectralu"} and [\[evproblem\]](#evproblem){reference-type="eqref" reference="evproblem"} as the spectral problem, making no distinction. In the present stability analysis, we are interested in the spectral properties of the block operator, $${\mathcal{A}}: H^1 \times L^2 \to H^1 \times L^2,$$ regarded as a linear, densely defined operator in $H^1 \times L^2$ with domain $D({\mathcal{A}}) := H^2 \times H^1$. 
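In finite dimensions, the equivalence between the pencil and its companion matrix can be checked numerically. The sketch below (a toy illustration, with ${\mathcal{L}}$ replaced by an arbitrary symmetric matrix $L$ and an illustrative value of $\nu$) verifies that every eigenvalue $\lambda$ of the block matrix solves $\lambda^2 + \nu\lambda + \mu = 0$ for some eigenvalue $\mu$ of $L$, exactly as in [\[spectralu\]](#spectralu){reference-type="eqref" reference="spectralu"}:

```python
import numpy as np

# Companion matrix A = [[0, I], [-L, -nu I]] of the quadratic pencil
# lambda^2 + nu*lambda + L: each eigenvalue lambda of A satisfies
# lambda^2 + nu*lambda + mu = 0 for some eigenvalue mu of the symmetric L.
rng = np.random.default_rng(0)
n, nu = 5, 0.7                     # illustrative size and damping
B = rng.standard_normal((n, n))
L = B + B.T                        # symmetric stand-in for the operator
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-L, -nu * np.eye(n)]])

mus = np.linalg.eigvalsh(L)        # real eigenvalues of L
lams = np.linalg.eigvals(A)        # complex eigenvalues of A

# -(lambda^2 + nu*lambda) must land on the spectrum of L
residual = np.array([np.min(np.abs(mus + lam**2 + nu * lam)) for lam in lams])
assert np.max(residual) < 1e-8
```

The same algebra underlies the infinite-dimensional identification of $\sigma({\mathcal{A}})$ used throughout this section.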
In other words, we choose our energy base space as $$X := H^1 \times L^2.$$ This choice is not only consistent with the boundary conditions [\[bcu\]](#bcu){reference-type="eqref" reference="bcu"} for perturbations of the Néel wall's phase but, more importantly, it relates to the appropriate energy space encoding perturbations of the energy functional defined in [\[varprob\]](#varprob){reference-type="eqref" reference="varprob"}, which requires variations $u \in H^1$. In addition, the condition $v\in L^2$ implies that those perturbations have finite kinetic energy because $v$ is the spectral equivalent to $\partial_tu$. Thus, the stability analysis pertains to localized perturbations with finite energy in $X = H^1 \times L^2$. For brevity, let us introduce the notation $$U = (u,v) \in H^2 \times H^1, \qquad {\mathcal{A}}U = (v, -{\mathcal{L}}u - \nu v) \in H^1 \times L^2.$$ In addition, the standard scalar product in $H^1 \times L^2$ will be denoted as $$\langle U, F \rangle_{X} := \langle (u,v), (f,g) \rangle_{H^1 \times L^2} = \langle u,f \rangle_{H^1} + \langle v, g \rangle_{L^2},$$ for any $U = (u,v)$ and $F = (f,g)$ in $X$. **Remark 31**. It is to be observed that this choice of the energy space conveys a slight abuse of notation. Indeed, the operator ${\mathcal{L}}$ in the expression for ${\mathcal{A}}$ in [\[evproblem\]](#evproblem){reference-type="eqref" reference="evproblem"} actually refers to its restriction to $H^1$, namely, to the operator $$\begin{aligned} \widetilde{{\mathcal{L}}} &:= {\mathcal{L}}_{|H^1} &\widetilde{{\mathcal{L}}} &: H^1 \to L^2,\\ D(\widetilde{{\mathcal{L}}}) &= H^2 \subset H^1, &\widetilde{{\mathcal{L}}} u&:= {\mathcal{L}}u, \quad \forall \, u \in H^2, \end{aligned}$$ where, rigorously speaking, ${\mathcal{L}}$ is the operator from $L^2$ to $L^2$ defined in [\[defL0\]](#defL0){reference-type="eqref" reference="defL0"}. 
However, since the original properties remain (for example, its closedness and its spectral bounds, as the reader may easily verify), for simplicity we keep the notation ${\mathcal{L}}: H^1 \to L^2$ with the same dense domain $D({\mathcal{L}}) = H^2$ in the definition of the operator ${\mathcal{A}}$ under consideration. In the sequel, we shall remind the reader of this distinction at the steps of the proofs where it is explicitly required. The first property of the block operator ${\mathcal{A}}$ that we verify is its closedness, so that the definitions of resolvent and spectra, as well as their basic properties, apply. **Lemma 32**. *The matrix block operator ${\mathcal{A}}: H^1 \times L^2 \to H^1 \times L^2$ is closed.* *Proof.* Let $U_j = (u_j, v_j) \in D({\mathcal{A}}) = H^2 \times H^1$, $j \in \mathbb{N}$, be a Cauchy sequence in $X = H^1 \times L^2$ such that $\{ {\mathcal{A}}U_j \}_{j \in \mathbb{N}}$ is a Cauchy sequence in $X$ as well. Let us denote their limits as $U = (u,v) = \lim_{j \to \infty} U_j$ and $F = (f,g) = \lim_{j \to \infty} {\mathcal{A}}U_j$, both in $X$. This implies that $$\begin{aligned} v_j &\to f, \quad \text{in } \, H^1,\\ -{\mathcal{L}}u_j - \nu v_j &\to g, \quad \text{in } \, L^2, \end{aligned}$$ as $j \to \infty$. Since $v_j \to f$ in $H^1$ implies that $v_j \to f$ in $L^2$, we obtain $- {\mathcal{L}}u_j \to g + \nu f$ in $L^2$. Because $\{ u_j \}_{j \in \mathbb{N}}$ is a Cauchy sequence in $H^1$ we deduce that it is also a Cauchy sequence in $L^2$. By virtue of the closedness of the operator ${\mathcal{L}}$ when regarded as an operator from $L^2$ to $L^2$ (see Corollary [Corollary 15](#corLclosed){reference-type="ref" reference="corLclosed"}), we deduce from $u_j \to u$ in $L^2$ and $-{\mathcal{L}}u_j \to g + \nu f$ in $L^2$ that $u \in D({\mathcal{L}}) = H^2$ and $- {\mathcal{L}}u = g + \nu f$. Therefore, $U = (u,v) \in D({\mathcal{A}})$ and $${\mathcal{A}}U = (v, - {\mathcal{L}}u - \nu v) = (f,g) = F.$$ This proves that ${\mathcal{A}}$ is a closed operator. 
◻ Another important property is that the translation eigenvalue remains. **Lemma 33**. *$\lambda = 0$ is a simple eigenvalue of ${\mathcal{A}}$ with eigenfunction $$\label{defTheta} \Theta := (\partial_x \overline{\theta}, 0) \in D({\mathcal{A}}) = H^2 \times H^1.$$* *Proof.* Since $\partial_x \overline{\theta}\in H^2$ (see Proposition [Proposition 1](#propNeelw){reference-type="ref" reference="propNeelw"} [(c)](#propc)) we clearly notice that $\Theta \in D({\mathcal{A}})$. Moreover, ${\mathcal{A}}\Theta = (0, - {\mathcal{L}}\partial_x \overline{\theta}) = 0$. Hence $\lambda = 0 \in \sigma_\mathrm{\tiny{pt}}({\mathcal{A}})$ with eigenfunction $\Theta$. To verify that it spans the whole kernel, let $0 \neq U = (u,v) \in \ker {\mathcal{A}}$. Since $u \in H^2 \subset L^2$, writing $u = u_\perp \oplus \alpha \partial_x \overline{\theta}$ with $\langle u_\perp, \partial_x \overline{\theta}\rangle = 0$ and some $\alpha \in \mathbb{C}$ yields $$0 = {\mathcal{A}}U = {\mathcal{A}}(u_\perp, v) + \alpha {\mathcal{A}}(\partial_x \overline{\theta}, 0) = (v, - {\mathcal{L}}u_\perp - \nu v).$$ Therefore $v = 0$ and ${\mathcal{L}}u_\perp = 0$. By Corollary [Corollary 16](#corzeroL0){reference-type="ref" reference="corzeroL0"}, $u_\perp = 0$ and this shows that the geometric multiplicity is equal to one. Finally, the algebraic multiplicity is equal to one. Otherwise, there would exist a nontrivial Jordan chain ${\mathcal{A}}U = \alpha \Theta$, $\alpha\in \mathbb{C}\setminus \{0\}$ with $U\neq 0$. This implies that $${\mathcal{A}}U = (v, - {\mathcal{L}}u - \nu v) = (\alpha \partial_x \overline{\theta}, 0).$$ Therefore $v = \alpha \partial_x \overline{\theta}$ and $-{\mathcal{L}}u = \nu \alpha \partial_x \overline{\theta}$. Then ${\mathcal{L}}$ would have a nontrivial Jordan chain, contradicting Corollary [Corollary 16](#corzeroL0){reference-type="ref" reference="corzeroL0"}. 
◻ ## Point spectral stability After these preparations, we are ready to prove that the operator ${\mathcal{A}}$ is point spectrally stable. **Lemma 34**. *Let $\lambda \in \sigma_\mathrm{\tiny{pt}}({\mathcal{A}})$, $\lambda \neq 0$. Then $$\label{ptspectbd} \mathrm{Re}\,\lambda \leq - \tfrac{1}{2} \nu + \tfrac{1}{2} \boldsymbol{1}_{[2 \sqrt{\Lambda_0}, \infty)}(\nu) \sqrt{\nu^2 - 4 \Lambda_0} < 0,$$ where $\Lambda_0$ is given by Proposition [Proposition 12](#propL0){reference-type="ref" reference="propL0"} [(c)](#propcc) and $\boldsymbol{1}_\Omega(\cdot)$ denotes the characteristic function of any measurable set $\Omega \subset \mathbb{R}$.* *Proof.* Suppose that $\lambda \in \sigma_\mathrm{\tiny{pt}}({\mathcal{A}})$ with $\lambda \neq 0$. Hence, there exists $U = (u,v) \in D({\mathcal{A}}) = H^2 \times H^1$ such that ${\mathcal{A}}U = \lambda U$. This yields $\lambda u = v$ and $(\lambda + \nu)v + {\mathcal{L}}u = 0$. Upon substitution, we obtain $${\mathcal{L}}u + \lambda (\lambda + \nu) u = 0, \qquad u \in H^2 = D({\mathcal{L}}).$$ Therefore, $-\lambda(\lambda + \nu) \in \sigma_\mathrm{\tiny{pt}}({\mathcal{L}})$ with eigenfunction $u$. Since ${\mathcal{L}}$ is self-adjoint we obtain $\lambda(\lambda + \nu) \in \mathbb{R}$. Due to $u \in H^2 \subset L^2$ and $v \in H^1 \subset L^2$ we may decompose $u = u_\perp \oplus \alpha \partial_x \overline{\theta}$, $v = v_\perp \oplus \beta \partial_x \overline{\theta}$, for some $\alpha, \beta \in \mathbb{C}$, and $\langle u_\perp, \partial_x \overline{\theta}\rangle_{L^2} = \langle v_\perp, \partial_x \overline{\theta}\rangle_{L^2} = 0$. Substituting, one arrives at the relations $$\lambda u_\perp = v_\perp, \quad \beta = \lambda \alpha,$$ $${\mathcal{L}}u_\perp + \lambda(\lambda + \nu) (u_\perp + \alpha \partial_x \overline{\theta}) = 0.$$ Take the $L^2$-product of the last equation with $u_\perp$. 
The result is $$\begin{aligned} 0 &= \langle {\mathcal{L}}u_\perp, u_\perp \rangle_{L^2} + \lambda(\lambda + \nu) \| u_\perp \|_{L^2}^2 + \lambda(\lambda + \nu) \langle \alpha \partial_x \overline{\theta}, u_\perp \rangle_{L^2} \geq (\Lambda_0 + \lambda^2 + \lambda \nu ) \| u_\perp \|_{L^2}^2, \end{aligned}$$ because $\langle u_\perp, \partial_x \overline{\theta}\rangle_{L^2} = 0$ and $\lambda(\lambda + \nu) \in \mathbb{R}$. Therefore, provided $u_\perp \neq 0$, we obtain the bound $$\label{lambound} \lambda ( \lambda + \nu) \leq - \Lambda_0.$$ (If $u_\perp = 0$, then necessarily $\alpha \neq 0$ and $\lambda(\lambda + \nu) = 0$, so $\lambda = -\nu$, which clearly satisfies [\[ptspectbd\]](#ptspectbd){reference-type="eqref" reference="ptspectbd"}.) Hence, we arrive at the relations [\[las\]]{#las label="las"} $$\begin{aligned} \mathrm{Im}\,(\lambda(\lambda + \nu)) &= (\mathrm{Im}\,\lambda) (\nu + 2 \mathrm{Re}\,\lambda) = 0, \label{lasa}\\ - \Lambda_0 \geq \mathrm{Re}\,(\lambda(\lambda + \nu)) &= (\mathrm{Re}\,\lambda)^2 - (\mathrm{Im}\,\lambda)^2 + \nu \mathrm{Re}\,\lambda. \label{lasb}\end{aligned}$$ Since $\nu > 0$ is a given physical constant,[^1] we have two parameter regimes: (i) $\nu \in (0, 2 \sqrt{\Lambda_0})$, or (ii) $\nu \in [2 \sqrt{\Lambda_0},\infty)$. Let us examine the first case. From [\[lasa\]](#lasa){reference-type="eqref" reference="lasa"} we either have $\lambda \in \mathbb{R}$ or $\mathrm{Re}\,\lambda = - \tfrac{1}{2}\nu$. Assuming $\lambda \in \mathbb{R}$, we readily observe that [\[lasb\]](#lasb){reference-type="eqref" reference="lasb"} has no real $\lambda$-solutions if $\nu \in (0, 2 \sqrt{\Lambda_0})$. Indeed, with basic calculus tools one can easily verify that the real polynomial $q(\lambda) = \lambda^2 + \nu \lambda + \Lambda_0$ has a unique global minimum at $\lambda = - \tfrac{1}{2}\nu$ with $q(- \tfrac{1}{2}\nu) = \Lambda_0 - \tfrac{1}{4} \nu^2 > 0$. Thus, we are left with the case $\mathrm{Re}\,\lambda = - \tfrac{1}{2}\nu$ which clearly satisfies [\[ptspectbd\]](#ptspectbd){reference-type="eqref" reference="ptspectbd"}. 
In the second parameter regime with $\nu \in [2 \sqrt{\Lambda_0},\infty)$, again we either have $\lambda \in \mathbb{R}$ or $\mathrm{Re}\,\lambda = - \tfrac{1}{2}\nu$. If $\lambda$ is real then $\lambda^2 + \nu \lambda \leq - \Lambda_0$ holds only for $$\lambda \in \big[ - \tfrac{1}{2}\nu - \tfrac{1}{2}\sqrt{\nu^2 - 4 \Lambda_0}, - \tfrac{1}{2}\nu + \tfrac{1}{2}\sqrt{\nu^2 - 4 \Lambda_0} \big].$$ Clearly, in both cases the bound [\[ptspectbd\]](#ptspectbd){reference-type="eqref" reference="ptspectbd"} holds. This proves the lemma. ◻ **Corollary 35** (point spectral stability). *$$\sigma_\mathrm{\tiny{pt}}({\mathcal{A}}) \subset \{ 0 \} \cup \{ \lambda \in \mathbb{C}\, : \, \mathrm{Re}\,\lambda \leq - \tfrac{1}{2} \nu + \tfrac{1}{2} \sqrt{\nu^2 - 4 \Lambda_0} \, \boldsymbol{1}_{[2 \sqrt{\Lambda_0}, \infty)}(\nu) \}.$$* *Proof.* Follows immediately from Lemmata [Lemma 33](#lemzeroremains){reference-type="ref" reference="lemzeroremains"} and [Lemma 34](#lemptspecstab){reference-type="ref" reference="lemptspecstab"}. ◻ ## Stability of the essential spectrum In this section, we study the essential spectrum of the block operator ${\mathcal{A}}$. To that end, we define the following auxiliary asymptotic matrix block operator, $$\label{defAinf} {\mathcal{A}}_\infty : H^1 \times L^2 \to H^1 \times L^2, \qquad {\mathcal{A}}_\infty := \begin{pmatrix} 0 & \mathrm{I}\\ - {\mathcal{L}}_\infty & - \nu \mathrm{I}\end{pmatrix},$$ with dense domain $D({\mathcal{A}}_\infty) = H^2 \times H^1$. 
Once again, with a slight abuse of notation the operator ${\mathcal{L}}_\infty$ in [\[defAinf\]](#defAinf){reference-type="eqref" reference="defAinf"} refers to the restriction of the operator defined in [\[defLinf\]](#defLinf){reference-type="eqref" reference="defLinf"} to the space $H^1$, namely, to the operator $$\begin{aligned} \widetilde{{\mathcal{L}}}_\infty &:= {{\mathcal{L}}_\infty}_{|H^1} &\widetilde{{\mathcal{L}}}_\infty &: H^1 \to L^2,\\ D(\widetilde{{\mathcal{L}}}_\infty) &= H^2 \subset H^1, &\widetilde{{\mathcal{L}}}_\infty u&:= {\mathcal{L}}_\infty u, \quad \forall \, u \in H^2, \end{aligned}$$ so that the energy base space of the asymptotic operator ${\mathcal{A}}_\infty$ is $H^1 \times L^2$. In the sequel, we write ${\mathcal{L}}_\infty$ to denote this restriction. Therefore, for any $U = (u,v) \in D({\mathcal{A}}_\infty) = H^2 \times H^1$ we clearly have ${\mathcal{A}}_\infty U = (v, - {\mathcal{L}}_\infty u -\nu v) \in H^1 \times L^2$. **Lemma 36**. *The asymptotic block operator ${\mathcal{A}}_\infty : H^1 \times L^2 \to H^1 \times L^2$ is closed and onto.* *Proof.* The proof of the closedness of ${\mathcal{A}}_\infty$ is the same as that of Lemma [Lemma 32](#lemAclosed){reference-type="ref" reference="lemAclosed"} and we omit it. To show that ${\mathcal{A}}_\infty$ is onto, notice that for any $F = (f,g) \in H^1 \times L^2$ the equation ${\mathcal{A}}_\infty U = F$ with $U = (u,v) \in D({\mathcal{A}}_\infty) = H^2 \times H^1$ is equivalent to the system $$v = f, \qquad - {\mathcal{L}}_\infty u = g + \nu f.$$ By defining $v := f \in H^1$ and by virtue of Lemma [Lemma 19](#lemsolLinf){reference-type="ref" reference="lemsolLinf"}, given $g + \nu f \in L^2$ there exists a unique solution $u \in H^2$ to the equation $- {\mathcal{L}}_\infty u = g + \nu f$. Hence, $H^1 \times L^2 = {\mathcal{R}}({\mathcal{A}}_\infty)$, as claimed. ◻ In this fashion, ${\mathcal{A}}_\infty$ is a closed, densely defined operator with full range.
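The solvability step in the proof above can be made concrete in a constant-coefficient caricature. If, purely for illustration, ${\mathcal{L}}_\infty u = -\partial_{xx} u + u$ (an assumed model, not the operator of the paper), then $-{\mathcal{L}}_\infty u = h$ is solved in Fourier variables by $\widehat{u}(\xi) = -\widehat{h}(\xi)/(1+\xi^2)$, and $\| u \|_{H^2} \leq \| h \|_{L^2}$ because the multiplier exactly cancels the $H^2$ weight $(1+\xi^2)$. A periodic spectral discretization (with a hypothetical right-hand side) illustrates the computation:

```python
import numpy as np

# Illustrative only: solve -L_inf u = h with L_inf = -d^2/dx^2 + 1, i.e.
# u'' - u = h, on a large periodic interval via the multiplier -1/(1 + xi^2).
L, n = 40.0 * np.pi, 1024                        # domain length and grid size
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
xi = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)    # angular Fourier frequencies

h = np.exp(-x**2) * np.cos(x)                    # smooth, rapidly decaying datum
u = np.fft.ifft(-np.fft.fft(h) / (1.0 + xi**2)).real

# The residual of u'' - u = h should vanish up to spectral accuracy
u_xx = np.fft.ifft(-(xi**2) * np.fft.fft(u)).real
residual = np.max(np.abs(u_xx - u - h))
assert residual < 1e-10
print(f"max residual of u'' - u = h: {residual:.2e}")
```

The point of the sketch is the uniform bound $\sup_\xi (1+\xi^2)^{-1} \leq 1$, the discrete shadow of the unique solvability asserted in Lemma 19 of the paper.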
The following result determines the location of its spectrum. **Lemma 37**. *If $\lambda \in \sigma({\mathcal{A}}_\infty)$ then $$\label{ddstar} \mathrm{Re}\,\lambda \leq - \tfrac{1}{2} \nu + \tfrac{1}{2} \boldsymbol{1}_{[2, \infty)}(\nu) \sqrt{\nu^2 - 4} < 0.$$* *Proof.* Assume $\lambda \in \mathbb{C}$, $U = (u,v) \in D({\mathcal{A}}_\infty) = H^2 \times H^1$ and $F = (f,g) \in X = H^1 \times L^2$ are such that $(\lambda-{\mathcal{A}}_\infty) U = F$. This equation is equivalent to the system $$\lambda u - v = f, \qquad {\mathcal{L}}_\infty u + (\lambda + \nu) v = g.$$ Substituting the first equation into the second, we arrive at $$\big( {\mathcal{L}}_\infty + \lambda(\lambda + \nu) \big) u = g + (\lambda + \nu) f.$$ For any $\nu > 0$ and $\lambda \in \mathbb{C}$ fixed, we have $g + (\lambda + \nu) f \in L^2$. Thus, from Lemma [Lemma 20](#lempropsLinf){reference-type="ref" reference="lempropsLinf"} [(d)](#propddd) and the resolvent estimate from Lemma [Lemma 22](#lemressol){reference-type="ref" reference="lemressol"}, this equation has a unique solution $u \in H^2$ provided that $\lambda(\lambda + \nu) \in \mathbb{C}\backslash (-\infty, -1]$. 
Moreover, by Young's inequality $$\| U \|_{H^1 \times L^2}^2 = \| u \|_{H^1}^2 + \| v \|_{L^2}^2 = \| u \|_{H^1}^2 + \| \lambda u - f \|_{L^2}^2 \leq ( 1 + 2 |\lambda|^2) \| u \|_{H^1}^2 + 2 \|f \|_{L^2}^2.$$ From Lemma [Lemma 22](#lemressol){reference-type="ref" reference="lemressol"}, if $u \in H^2$ solves $({\mathcal{L}}_\infty + \lambda(\lambda + \nu)) u = g + (\lambda + \nu) f$ with $\mu = \lambda(\lambda+\nu) \in \mathbb{C}\backslash (-\infty, -1]$, then there exists a constant $C = C( \lambda, \nu) > 0$ such that $$\| u \|_{H^1} \leq \| u \|_{H^2} \leq C(\lambda,\nu) \| g + (\lambda + \nu) f \|_{L^2}.$$ Therefore, we obtain that $$\begin{aligned} \| U \|_{H^1 \times L^2}^2 \leq ( 1 + 2 |\lambda|^2) \| u \|_{H^1}^2 + 2 \|f \|_{L^2}^2 &\leq ( 1 + 2 |\lambda|^2) C(\lambda,\nu)^2 \| g + (\lambda + \nu) f \|_{L^2}^2 + 2 \| f \|_{L^2}^2\\ &\leq \overline{C}(\lambda, \nu) \big( \| f \|_{H^1}^2 + \| g \|_{L^2}^2 \big)= \overline{C}(\lambda, \nu) \| F \|_{H^1 \times L^2}^2, \end{aligned}$$ for some $\overline{C}(\lambda, \nu) > 0$. This shows that $\lambda \in \rho({\mathcal{A}}_\infty)$. To sum up, we have proved that $\lambda(\lambda + \nu) \in \mathbb{C}\backslash (-\infty, -1] \; \Rightarrow \; \lambda \in \rho({\mathcal{A}}_\infty)$, or, equivalently, that $$\label{locspAinf} \sigma({\mathcal{A}}_\infty) \subset \big\{ \lambda \in \mathbb{C}\, : \, \lambda (\lambda + \nu) \in (-\infty, -1] \big\}.$$ Now, notice that the relation that defines the set on the right hand side of [\[locspAinf\]](#locspAinf){reference-type="eqref" reference="locspAinf"} can be recast as $$\label{starry} \begin{aligned} \mathrm{Im}\,(\lambda(\lambda + \nu)) &= (\mathrm{Im}\,\lambda) (\nu + 2 \mathrm{Re}\,\lambda) = 0, \\ - 1 \geq \mathrm{Re}\,(\lambda(\lambda + \nu)) &= (\mathrm{Re}\,\lambda)^2 - (\mathrm{Im}\,\lambda)^2 + \nu \mathrm{Re}\,\lambda. \end{aligned}$$ First, let us assume that $\nu \in (0,2)$.
Then the first equation in [\[starry\]](#starry){reference-type="eqref" reference="starry"} implies that either $\mathrm{Im}\,\lambda = 0$ or $\mathrm{Re}\,\lambda = - \tfrac{1}{2} \nu$. In the latter case there is nothing to prove. If, instead, $\lambda \in \mathbb{R}$, then the second relation in [\[starry\]](#starry){reference-type="eqref" reference="starry"}, namely $\lambda^2 + \nu \lambda \leq -1$, has no real solutions, since the discriminant $\nu^2 - 4$ is negative. Thus, [\[ddstar\]](#ddstar){reference-type="eqref" reference="ddstar"} holds if $\nu \in (0,2)$. Second, suppose that $\nu \geq 2$. Once again, we have two cases, either $\lambda \in \mathbb{R}$ or $\mathrm{Re}\,\lambda = - \tfrac{1}{2} \nu$. In the latter case [\[ddstar\]](#ddstar){reference-type="eqref" reference="ddstar"} clearly holds. In the former case, the inequality $\lambda^2 + \lambda \nu \leq -1$ is satisfied only if $$\lambda \in \big[ - \tfrac{1}{2}\nu - \tfrac{1}{2}\sqrt{\nu^2 - 4}, - \tfrac{1}{2}\nu + \tfrac{1}{2}\sqrt{\nu^2 - 4} \big],$$ determining values of $\lambda$ for which [\[ddstar\]](#ddstar){reference-type="eqref" reference="ddstar"} also holds. The proof is complete. ◻ The following lemma is the key ingredient to locate the essential spectrum of the block operator ${\mathcal{A}}$. **Lemma 38**. *The block operator ${\mathcal{A}}$ is a relatively compact perturbation of ${\mathcal{A}}_\infty$.* *Proof.* Suppose $\lambda \in \rho({\mathcal{A}}_\infty)$ and let $\{ U_j \}_{j \in \mathbb{N}}$ be a bounded sequence in $H^1 \times L^2$. Therefore, $(\lambda - {\mathcal{A}}_\infty)^{-1} U_j \in D({\mathcal{A}}_\infty) = H^2 \times H^1$ is a bounded sequence in $H^2 \times H^1$ because $(\lambda - {\mathcal{A}}_\infty)^{-1}$ is a bounded operator.
Hence, if we denote $$\begin{pmatrix} f_j \\ g_j \end{pmatrix} := (\lambda - {\mathcal{A}}_\infty)^{-1} U_j,$$ we have $$( {\mathcal{A}}_\infty- {\mathcal{A}}) (\lambda - {\mathcal{A}}_\infty)^{-1} U_j = \begin{pmatrix} 0 & 0 \\ {\mathcal{L}}_\infty - {\mathcal{L}}& 0\end{pmatrix} \begin{pmatrix} f_j \\ g_j \end{pmatrix} = \begin{pmatrix} 0 \\ ({\mathcal{L}}_\infty- {\mathcal{L}}) f_j \end{pmatrix}.$$ Since $\{ f_j \}_{j \in \mathbb{N}}$ is bounded in $H^2$ and ${\mathcal{L}}_\infty - {\mathcal{L}}:H^2\to H^1$ is compact in $L^2$ (see Theorem [Theorem 27](#themcompct_L){reference-type="ref" reference="themcompct_L"} above), the bounded sequence $\{({\mathcal{L}}_\infty - {\mathcal{L}}) f_j\} \subset H^1$ has a convergent subsequence in $L^2$. This implies that $( {\mathcal{A}}_\infty- {\mathcal{A}}) (\lambda - {\mathcal{A}}_\infty)^{-1} U_j$ has a convergent subsequence in $H^1 \times L^2$. Thus, the operator $( {\mathcal{A}}_\infty- {\mathcal{A}}) (\lambda - {\mathcal{A}}_\infty)^{-1}$ is compact on $H^1 \times L^2$ for every $\lambda \in \rho({\mathcal{A}}_\infty)$, and the proof is complete. ◻ The most important consequence of the last result is the location of the essential spectrum of ${\mathcal{A}}$. **Corollary 39**. *$\sigma({\mathcal{A}}_\infty) = \sigma_\mathrm{\tiny{ess}}({\mathcal{A}})$. Moreover, any $\lambda \in \sigma_\mathrm{\tiny{ess}}({\mathcal{A}})$ satisfies estimate [\[ddstar\]](#ddstar){reference-type="eqref" reference="ddstar"}.* *Proof.* This is a direct consequence of Weyl's essential spectrum theorem (see [@KaPro13], p. 29) and Lemma [Lemma 37](#lemAinfsb){reference-type="ref" reference="lemAinfsb"}.
◻ ## Spectral stability with uniform spectral gap Let us summarize the content of Corollaries [Corollary 35](#corptsp){reference-type="ref" reference="corptsp"} and [Corollary 39](#corsameess){reference-type="ref" reference="corsameess"} in the following result, which conveys the spectral stability of the Néel wall's phase in the appropriate energy space with a uniform spectral gap, that is, a positive distance from the eigenvalue zero to the rest of the spectrum. **Theorem 40**. *For each fixed $\nu > 0$ there exists a uniform positive constant $$\zeta_0 (\nu) := \tfrac{1}{2}\nu - \max \Big\{ \tfrac{1}{2} \boldsymbol{1}_{[2, \infty)}(\nu) \sqrt{\nu^2 - 4} , \tfrac{1}{2} \boldsymbol{1}_{[2 \sqrt{\Lambda_0}, \infty)}(\nu) \sqrt{\nu^2 - 4 \Lambda_0}\Big\} > 0,$$ such that $$\sigma({\mathcal{A}}) \subset \{ 0 \} \cup \big\{ \lambda \in \mathbb{C}\, : \, \mathrm{Re}\,\lambda \leq - \zeta_0(\nu) < 0 \big\}.$$* **Remark 41**. The positive constant $\zeta_0(\nu)$ is uniform because the spectral bound $\Lambda_0$ does not depend on the parameter $\nu$. This spectral gap determines an exponential decay for the solutions to the evolutionary equation, as we shall see in the sequel. # Semigroup generation and decay {#secsemigroup} ## The adjoint operator It is known (see Kato [@Kat80], Remark 6.23, p. 184) that if $\lambda \in \mathbb{C}$ is an eigenvalue of a closed operator ${\mathcal{T}}: D({\mathcal{T}}) \subset H \to H$ on a Hilbert space $H$, then $\lambda^*$ is an eigenvalue of ${\mathcal{T}}^*$ (formal adjoint) with the same geometric and algebraic multiplicities. In the present context, since $H^1$ and $L^2$ are reflexive Hilbert spaces, the operator ${\mathcal{A}}: H^1\times L^2 \to H^1 \times L^2$ with $D({\mathcal{A}}) = H^2 \times H^1$ has a formal adjoint which is also densely defined and closed. Moreover, ${\mathcal{A}}^{**} = {\mathcal{A}}$ (cf. [@Kat80], Theorem 5.29, p. 168). With these observations in hand, we immediately have the following **Lemma 42**.
*$\lambda = 0$ is an isolated, simple eigenvalue of ${\mathcal{A}}^* : X^*\to X^*$.* The following result determines the form of the adjoint of the linearized block matrix operator around the Néel wall's phase. **Lemma 43**. *The formal adjoint ${\mathcal{A}}^*$, restricted to the domain $D({\mathcal{A}})$, is given by $$\label{eqA*} \left.{\mathcal{A}}^*\right|_{D({\mathcal{A}})} = \begin{pmatrix} 0 & {\mathcal{F}}\\ -\partial_{xx}+\mathrm{I}& -\nu \end{pmatrix}$$ where the operator ${\mathcal{F}}:H^1\to H^{-1}$ is formally defined as the map $$v\mapsto -({\mathcal{S}}v-c_{\theta}v,\partial_x v) =: {\mathcal{F}}v.$$ Moreover, ${\mathcal{F}}|_{H^2}=[1+(-\Delta)]^{-1}\mathop{\mathrm{\mathcal{L}}}$, where $[1+(-\Delta)]^{-1} \mathop{\mathrm{\mathcal{L}}}v$ denotes the convolution of the Bessel potential for $k=2$ with $\mathop{\mathrm{\mathcal{L}}}v$.* *Proof.* First, let $U=(u,v)$ and $V=(w,z)$ be both in $D({\mathcal{A}}) = H^2 \times H^1$. Then by definition of the inner product in $X$, we have $$\begin{aligned} \left \langle {\mathcal{A}}U \, , V \right \rangle_{X} = \left \langle v \, , w \right \rangle_{H^1} - \left \langle \mathop{\mathrm{\mathcal{L}}}u + \nu v \, , z \right \rangle_{L^2} = & \left \langle v \, , w-\nu z \right \rangle_{L^2} -\left \langle \mathop{\mathrm{\mathcal{L}}}u \, , z \right \rangle_{L^2}+\left \langle \partial_x v \, , \partial_x w \right \rangle_{L^2}. 
\end{aligned}$$ Since $w\in H^2$, integration by parts on the last term leads us to $$\label{eq:innerprodws} \left \langle {\mathcal{A}}U \, , V \right \rangle_{X} = \left \langle v \, , -\partial^2_x w+w-\nu z \right \rangle_{L^2} -\left \langle \mathop{\mathrm{\mathcal{L}}}u \, , z \right \rangle_{L^2}.$$ Also, by the symmetry of the linear operator ${\mathcal{S}}$ (see Lemma [Lemma 13](#lemSsym){reference-type="ref" reference="lemSsym"}), we recast the last equality as $$\left \langle {\mathcal{A}}U \, , V \right \rangle_{X}= \left \langle v \, , -\partial^2_x w + w-\nu z \right \rangle_{L^2} -\left \langle \partial_xu \, , \partial_x z \right \rangle_{L^2} -\left \langle u \, , {\mathcal{S}}z -c_{\theta}z \right \rangle_{L^2},$$ since $z\in H^1$. Therefore, $\left \langle {\mathcal{A}}U \, , V \right \rangle_{X} = \left \langle U \, , {\mathcal{A}}^*V \right \rangle_{X}$ for ${\mathcal{A}}^*$ as in [\[eqA\*\]](#eqA*){reference-type="eqref" reference="eqA*"} where ${\mathcal{F}}z\in H^{-1}$ is represented by the pair $-({\mathcal{S}}z-c_{\theta}z,\partial_x z)\in L^2\times L^2$. Finally, assume that $z\in H^2$ and let ${\mathcal{K}}$ be the Bessel potential with parameter $k=2$ on $L^2$ functions, defined by the Fourier symbol $(1+|\xi|^2)^{-1}$.
Apply Plancherel's identity twice to the last term of [\[eq:innerprodws\]](#eq:innerprodws){reference-type="eqref" reference="eq:innerprodws"} in order to get $$\left \langle \mathop{\mathrm{\mathcal{L}}}u \, , z \right \rangle_{L^2}=\left \langle u \, , \mathop{\mathrm{\mathcal{L}}}z \right \rangle_{L^2} = \left \langle \hat{u} \, , \widehat{\mathop{\mathrm{\mathcal{L}}}z}(\xi) \right \rangle_{L^2} = \left \langle \hat{u} \, , (1+|\xi|^2)\widehat{{\mathcal{K}}\mathop{\mathrm{\mathcal{L}}}z}(\xi) \right \rangle_{L^2}=\left \langle u \, , {\mathcal{K}}\mathop{\mathrm{\mathcal{L}}}z \right \rangle_{H^1}.$$ The last equality holds because ${\mathcal{K}}\mathop{\mathrm{\mathcal{L}}}z\in H^1$ with $\left \| {\mathcal{K}}\mathop{\mathrm{\mathcal{L}}}z \right \|_{H^1} ^2\leq\left \| \mathop{\mathrm{\mathcal{L}}}z \right \|_{L^2}^2$. This shows the result. ◻ **Corollary 44**. *Let ${\mathcal{A}}^*$ be the formal adjoint of ${\mathcal{A}}$. Also, let $\left.{\mathcal{A}}^*\right|_{D({\mathcal{A}})}$ and ${\mathcal{F}}$ be as in Lemma [Lemma 43](#Thetanotzero){reference-type="ref" reference="Thetanotzero"} and define $$\label{eq:phi} \Phi := ( \nu[1+(-\Delta)]^{-1}\ \partial_x \overline{\theta}, \partial_x \overline{\theta}).$$ Then $\Phi \in X^*$ is an eigenvector of the adjoint ${\mathcal{A}}^*:X^*\to X^*$, associated to the isolated, simple eigenvalue $\lambda=0$.* *Proof.* First, we claim that $[1+(-\Delta)]^{-1}\ \partial_x \overline{\theta}\in H^2$.
This is easily seen from Plancherel's identity, since $$\label{eq:firstentry} \left \| [1+(-\Delta)]^{-1}\ \partial_x \overline{\theta} \right \|_{H^2}^2 = \int_\mathbb{R}(1+|\xi|^2)^2(1+|\xi|^2)^{-2}\left|\widehat{\partial_x\overline{\theta}}\right|^2d\xi = \left \| \partial_x\overline{\theta} \right \|_{L^2}^2.$$ Thanks to property [(c)](#propc) in Proposition [Proposition 1](#propNeelw){reference-type="ref" reference="propNeelw"}, we know that $\partial_x\overline{\theta}\in H^2$; therefore $\Phi\in H^2\times H^2\subset D({\mathcal{A}})$. Since $H^2\subset H^1\subset L^2\subset H^{-1}$ and $(H^1\times L^2)^* = H^{-1}\times L^2$ holds with respect to the norm used on $X$, it follows that $\Phi\in X^*$. Also, Lemma [Lemma 43](#Thetanotzero){reference-type="ref" reference="Thetanotzero"} yields $${\mathcal{A}}^*\Phi = \left.{\mathcal{A}}^*\right|_{D({\mathcal{A}})}\Phi=( {\mathcal{F}}\partial_x\overline{\theta},0) =( {\mathcal{K}}\mathop{\mathrm{\mathcal{L}}}\partial_x\overline{\theta},0)= (0,0).$$ The last equality holds since the Bessel potential is an invertible linear operator on $L^2$ and $\mathop{\mathrm{\mathcal{L}}}\partial_x\overline{\theta}=0$ in $L^2$. ◻ If we define $\Phi_0:=(\nu \partial_x\overline{\theta}, \partial_x\overline{\theta})$ then it is clear that $\Phi_0\in L^2\times L^2$. The following result shows that $\left \langle \cdot \, , \Phi \right \rangle_{X}\in (H^1\times L^2)^*$ has a natural extension to the dual of $L^2\times L^2$. **Corollary 45**.
*Let $F \in H^1\times L^2$ and $\Phi_0=(\nu \partial_x\overline{\theta}, \partial_x\overline{\theta})^\top$, then $$\left \langle F \, , \Phi \right \rangle_{X} =\left \langle F \, , \Phi_0 \right \rangle_{L^2}.$$* *Proof.* The result follows by a straightforward calculation in the Fourier space; indeed, for any $F=(f,g) \in H^1\times L^2$ there holds $$\begin{split} \left \langle F \, , \Phi \right \rangle_{X} =& \left \langle f \, , \nu[1+(-\Delta)]^{-1}\partial_x \overline{\theta} \right \rangle_{H^1} + \left \langle g \, , \partial_x \overline{\theta} \right \rangle_{L^2}\\ = & \nu\int_\mathbb{R}(1+|\xi|^2)\widehat{f}(\xi)\big((1+|\xi|^2)^{-1}\widehat{\partial_x \overline{\theta}}(\xi)\big)^* d\xi + \left \langle g \, , \partial_x \overline{\theta} \right \rangle_{L^2}\\ = & \nu\int_\mathbb{R}\widehat{f}(\xi)(\widehat{\partial_x \overline{\theta}}(\xi))^* d\xi + \left \langle g \, , \partial_x \overline{\theta} \right \rangle_{L^2}\\ = & \left \langle f \, , \nu \partial_x\overline{\theta} \right \rangle_{L^2} +\left \langle g \, , \partial_x \overline{\theta} \right \rangle_{L^2} = \left \langle F \, , \Phi_0 \right \rangle_{L^2}. \end{split}$$ ◻ Now, set $$\label{definprod} \Xi : = \left \langle \Theta \, , \Phi \right \rangle_{X}=\nu \left \| \partial_x\overline{\theta} \right \|_{L^2}^2 > 0,$$ and define the Hilbert space $X_1 \subset H^1 \times L^2$ as the range of the spectral projection $$\label{eq:projector} {\mathcal{P}}U := U - \Xi^{-1}\left \langle U \, , \Phi \right \rangle_{X}\Theta, \qquad U \in H^1 \times L^2,$$ that is, $X_1 := {\mathcal{R}}({\mathcal{P}})$. In this fashion we project out the eigenspace spanned by the single eigenfunction, $\Theta = (\partial_x \overline{\theta}, 0)$. We shall verify that, outside this eigenspace, the associated semigroup decays exponentially. First, it is to be observed that Corollary [Corollary 45](#char_work_space){reference-type="ref" reference="char_work_space"} implies the following explicit characterization of the space $X_1$. **Lemma 46**.
*Let ${\mathcal{P}}$ be the spectral projector defined in [\[eq:projector\]](#eq:projector){reference-type="eqref" reference="eq:projector"} and let $X_1$ be its range. Then $$\label{eq:X1} X_1 =\left\{F\in H^1\times L^2 \ \middle|\ \left \langle F \, , \Phi_0 \right \rangle_{L^2} =0 \right\}.$$* *Proof.* Let $F\in X_1$. Hence, $F = {\mathcal{P}}F$ because ${\mathcal{P}}$ is a projector. By [\[eq:projector\]](#eq:projector){reference-type="eqref" reference="eq:projector"}, we have $F =F -\Xi^{-1}\left \langle F \, , \Phi \right \rangle_{X}\Theta$, which implies $0=\left \langle F \, , \Phi \right \rangle_{X} =\left \langle F \, , \Phi_0 \right \rangle_{L^2}$, due to Corollary [Corollary 45](#char_work_space){reference-type="ref" reference="char_work_space"}. The converse holds trivially. ◻ ## Generation of the semigroup and decay estimates In this section we prove that a restriction of the linearized block operator around the Néel wall's phase is the infinitesimal generator of an exponentially decaying semigroup. For that purpose we need to show some resolvent estimates. Let us recall the growth bound for a semigroup $e^{t {\mathcal{T}}}$ (where ${\mathcal{T}}$ denotes its infinitesimal generator), $$\omega_0 = \inf \{\omega \in \mathbb{R}\, : \, \lim_{t \to + \infty} e^{-\omega t}\|e^{t {\mathcal{T}}}\| = 0\}.$$ We say a semigroup is uniformly (exponentially) stable whenever $\omega_0 < 0$. The spectral bound of the generator is defined as $$s({\mathcal{T}}) := \sup \{ \mathrm{Re}\,\lambda \, : \, \lambda \in \sigma({\mathcal{T}})\}.$$ Since the spectral mapping theorem (that is, $\sigma(e^{t{\mathcal{T}}}) \backslash \{0\} = e^{t\sigma({\mathcal{T}})}$ for all $t \geq 0$) is not true in general for $C_0$-semigroups (see [@EN00]), for stability purposes we rely on the Gearhart-Prüss theorem (cf. [@Gea78; @Pr84]), which restricts our attention to semigroups on Hilbert spaces (see also [@CrL03; @EN00] and the references therein).
It states that any $C_0$-semigroup $\{e^{t{\mathcal{T}}}\}_{t\geq 0}$ on a Hilbert space $H$ is uniformly exponentially stable if and only if $s({\mathcal{T}}) <0$ and the resolvent satisfies $\sup_{\mathrm{Re}\,\lambda > 0} \|({\mathcal{T}}- \lambda)^{-1}\| < \infty$ (see Lemma [Lemma 62](#lemF6){reference-type="ref" reference="lemF6"} below). It is well known that the generalized Hille-Yosida theorem (see, e.g., [@EN00], p. 69) requires estimates on all powers of the resolvent in order to conclude the existence of a $C_0$-semigroup, unless the semigroup is quasi-contractive. Therefore, we apply the classical Lumer-Phillips theorem instead. For that purpose we need some preparations. Following Capella *et al.* [@CMO07], we define $L^2_\perp := \{\partial_x\overline{\theta}\}^\perp_{L^2}$. For $k=1$ and $2$, we define $H^k_\perp$ as $H^k\cap L^2_\perp$. The next lemma describes the structure of these subspaces. **Lemma 47**. *Let $L^2_\perp$ be the $L^2$-orthogonal complement of $\partial_x \overline{\theta}$. For $k=1,2$ define $H^k_\perp$ as the intersection between $H^k$ and $L^2_\perp$. Then, for every $\bar{u}\in H^k$, $$\label{eq:direct_sumH1} \bar{u} = u +\alpha\partial_x\overline{\theta}$$ for some $u\in H^k_\perp$ and $\alpha\in \mathbb{C}$.* Notice that this lemma needs to be proved since, in general, intersection does not distribute over a direct sum. *Proof.* Assume $k$ is fixed and $\bar{u}\in H^k$. The spectral decomposition theorem (see Theorem III-6.17, p. 178 in [@Kat80]) and Corollary [Corollary 16](#corzeroL0){reference-type="ref" reference="corzeroL0"} yield $L^2 = L^2_\perp\oplus \mathop{\mathrm{Span}}\{\partial_x\overline{\theta}\}$ and because $H^k\subset L^2$ there exist $u\in L^2_\perp$ and $\alpha\in \mathbb{C}$ such that $\bar{u}= u +\alpha \partial_x \overline{\theta}$.
Since $\partial_x\overline{\theta}\in H^k$, by Proposition [Proposition 1](#propNeelw){reference-type="ref" reference="propNeelw"} [(c)](#propc) there holds $u = \bar{u}-\alpha \partial_x \overline{\theta}\in H^k$. Thus $u\in H^k_\perp$. ◻ This splitting also extends to the working (product) space $H^1\times L^2$. The proof of the following corollary is omitted. **Corollary 48**. *For every $\bar{U}\in H^1\times L^2$ there exist $U\in H^1_\perp \times L^2$ and $\alpha\in \mathbb{C}$ such that $\bar{U} = U + \alpha \Theta$.* **Lemma 49**. *Define $a:H^1_\perp\times H^1_\perp\rightarrow \mathbb{C}$ as $$\label{eq:a} a\left[ u, v\right] := \left \langle \partial_x u \, , \partial_x v \right \rangle_{L^2} + b[s_{\theta}u,s_{\theta}v]-\left \langle c_{\theta}u \, , v \right \rangle_{L^2},$$ with $b$ as in [\[defbilinearB\]](#defbilinearB){reference-type="eqref" reference="defbilinearB"}. Then, $a[\cdot,\cdot]$ is a positive, Hermitian, sesquilinear form. Moreover, if $u\in H^2_\perp$, then $$\label{eq:innerprod&L} \left \langle \mathop{\mathrm{\mathcal{L}}}u \, , v \right \rangle_{L^2} = a[u,v] \quad \mbox{for every } v\in H^1_\perp.$$* *Proof.* The sesquilinearity and hermiticity of $a$ follow directly from its definition, while positive definiteness is due to item [(c)](#propcc) in Proposition [Proposition 12](#propL0){reference-type="ref" reference="propL0"}. Finally, relation [\[eq:innerprod&L\]](#eq:innerprod&L){reference-type="eqref" reference="eq:innerprod&L"} follows from an integration by parts and from Corollary [Corollary 11](#cor:operatorS){reference-type="ref" reference="cor:operatorS"}. ◻ With slight changes to the arguments presented in [@CMO07], we can prove that $a[\cdot,\cdot]$ induces an inner product in $H^1_\perp$ equivalent to the $H^1$-inner product. The norm induced by this sesquilinear form is denoted by $\|\cdot\|_a:H^1_\perp\to [0,\infty)$. In other words, $$\|u\|_a := \sqrt{a[u,u]},\qquad \text{for every } u\in H^1_\perp.$$ **Proposition 50**.
*Let us define $$\label{defZ} Z := H^1_\perp\times L^2.$$ Then $(Z,\left \langle \cdot \, , \cdot \right \rangle_{X})$ is a Hilbert space. In addition, if $\|\cdot\|_Z : Z \to [0,\infty)$ and $\|\cdot\|_2: Z \to [0,\infty)$ are defined by $$\label{eq:inner_1} \left\| U\right\|_Z := \sqrt{\|u\|_a^2 + \left \| v \right \|_{L^2}^2},$$ $$\label{eq:inner_2} \| U\|_2:= \|u\|_a + \left \| v \right \|_{L^2},$$ where $U=(u,v)\in Z$, then $\|\cdot\|_Z$ and $\|\cdot\|_2$ are norms in $Z$, both equivalent to $\left \| \cdot \right \|_{X}$.* *Proof.* It suffices to show that $Z$ is a closed linear subspace of $X=H^1\times L^2$. The linearity of $Z$ follows from the linearity of $L^2$ and the linearity of $H^1_\perp$. Now, assume $\{U_j = (u_j, v_j)\}_{j\in\mathbb{N}}$ is a Cauchy sequence in $Z$. Therefore, $\{u_j\}_{j\in\mathbb{N}}$ and $\{v_j\}_{j\in\mathbb{N}}$ are Cauchy sequences in $H^1$ and in $L^2$, respectively, and $u_j\to u$ and $v_j\to v$ for some $u\in H^1$ and $v\in L^2$. Note that $u\in H^1_\perp$ since $H^1$-convergence implies weak $L^2$-convergence and $0=\left \langle u_j \, , \partial_x\overline{\theta} \right \rangle_{L^2}$ for every $j\in \mathbb{N}$. Therefore $Z$ is a closed linear subspace of $X$. Next, we will show that $\|\cdot\|_Z$ and $\|\cdot\|_2$ are norms in $Z$. Clearly, both functions are positive definite and absolutely homogeneous since $\left \| \cdot \right \|_{L^2}$ and $\|\cdot\|_a$ are norms in $L^2$ and $H^1_\perp$, respectively. Also, subadditivity of $\|\cdot\|_2$ readily follows from the subadditivity of $\|\cdot\|_a$ and of $\left \| \cdot \right \|_{L^2}$.
To verify the subadditivity for $\|\cdot\|_Z$, let $U = (u_1,u_2)$ and $V=(v_1,v_2)$ belong to $Z$; then we obtain $$\begin{split} \|U+V\|_Z^2 &= \|u_1+v_1\|^2_a + \left \| u_2+v_2 \right \|_{L^2}^2\\ &\leq \left(\|u_1\|_a + \|v_1\|_a\right)^2 +\left(\left \| u_2 \right \|_{L^2} + \left \| v_2 \right \|_{L^2}\right)^2\\ &= (\|u_1\|_a^2 + \left \| u_2 \right \|_{L^2}^2) + (\|v_1\|_a^2 + \left \| v_2 \right \|_{L^2}^2) +2 (\|u_1\|_a \|v_1\|_a+\left \| u_2 \right \|_{L^2}\left \| v_2 \right \|_{L^2})\\ &\leq \left(\|U\|_Z + \|V\|_Z \right)^2, \end{split}$$ where the last step uses the Cauchy-Schwarz inequality in $\mathbb{R}^2$. Finally, we prove that both norms are equivalent to $\left \| \cdot \right \|_{X}$. Indeed, since $\|\cdot\|_a$ and $\left \| \cdot \right \|_{H^1}$ are equivalent in $H^1_\perp$, there exist two positive constants $k_0$ and $K_0$ such that $k_0\|u\|_a \leq \left \| u \right \|_{H^1}\leq K_0 \|u\|_a$ for each $u\in H^1_\perp$. Hence $$k_0^2\|u\|_a^2 +\left \| v \right \|_{L^2}^2 \leq \left \| (u,v)^\top \right \|_{X}^2\leq K_0^2\|u\|_a^2 + \left \| v \right \|_{L^2}^2.$$ By choosing $k_1 =\sqrt{\min\{1,k_0^2\}}$ and $K_1 =\sqrt{\max\{1,K_0^2\}}$, we get $$k_1\|U\|_Z \leq \left \| U \right \|_{X} \leq K_1\|U\|_Z, \qquad \text{for every } U = (u,v)^\top \in Z.$$ Thus, $\|\cdot\|_Z$ and $\left \| \cdot \right \|_{X}$ are equivalent in $Z$. Since, clearly, $$(\left \| u \right \|_{H^1}+\left \| v \right \|_{L^2})^2\leq 2(\left \| u \right \|_{H^1}^2+\left \| v \right \|_{L^2}^2 )\leq 2(\left \| u \right \|_{H^1}+\left \| v \right \|_{L^2})^2,$$ taking the square root and using the equivalence between $\|\cdot\|_a$ and $\left \| \cdot \right \|_{H^1}$ one obtains $$(k_0\|u\|_a+\left \| v \right \|_{L^2})\leq \sqrt{2}\left \| U \right \|_{X} \leq \sqrt{2}(K_0\|u\|_a+\left \| v \right \|_{L^2}).$$ Again, choosing $k_2 =\min\{1,k_0\}/\sqrt{2}$ and $K_2 =\max\{1,K_0\}$, we get $$k_2\|U\|_2 \leq \left \| U \right \|_{X} \leq K_2\|U\|_2, \qquad \mbox{for every } U = (u,v)^\top \in Z.$$ ◻ **Remark 51**.
Note that $\|\cdot\|_Z$ is induced by the inner product $\langle\cdot,\cdot\rangle_Z:Z \times Z\to \mathbb{C}$ given by $$\langle U,V\rangle_Z := a[u,w] + \left \langle v \, , z \right \rangle_{L^2},\quad \mbox{with } U = (u,v), \; V = (w,z).$$ Hence, $\langle\cdot,\cdot\rangle_Z$ is equivalent to $\left \langle \cdot \, , \cdot \right \rangle_{X}$ in $Z$. **Lemma 52**. *Let $\langle\cdot,\cdot\rangle_{\tilde{X}}: X\times X \to \mathbb{C}$ be defined as $$\langle \bar{U},\bar{V}\rangle_{\tilde{X}} :=\langle U,V\rangle_Z + \left \langle \bar{U} \, , \beta \Theta \right \rangle_{X} + \left \langle \alpha\Theta \, , \bar{V} \right \rangle_{X} - \alpha\beta^* \left \| \Theta \right \|_{X}^2,$$ where $\bar{U} = U + \alpha\Theta$ and $\bar{V} = V + \beta\Theta$ for some $U,V\in Z$ and $\alpha,\beta\in\mathbb{C}$ (see Corollary [Corollary 48](#H1L2Split){reference-type="ref" reference="H1L2Split"}). Then $\langle\cdot,\cdot\rangle_{\tilde{X}}$ is an inner product on $X$, equivalent to $\left \langle \cdot \, , \cdot \right \rangle_{X}$.* *Proof.* First, we prove that $\langle\cdot,\cdot\rangle_{\tilde{X}}: X\times X \to \mathbb{C}$ is an inner product. It is clearly a Hermitian sesquilinear form, being a sum of four terms, each linear in $\bar{U}$ and antilinear in $\bar{V}$.
In view of Corollary [Corollary 48](#H1L2Split){reference-type="ref" reference="H1L2Split"}, if $\bar{U}\in X$, then $\bar{U} = U + \alpha \Theta$ for some $U\in Z$ and $\alpha\in \mathbb{C}$ which yields $$\langle\bar{U},\bar{U}\rangle_{\tilde{X}} =\|U\|_Z^2 + 2 \mathrm{Re}\,\!\!\left \langle U \, , \alpha\Theta \right \rangle_{X}+\left \| \alpha\Theta \right \|_{X}^2.$$ Thus, by adding and subtracting $\left \| U \right \|_{X}^2$, one gets $$\label{eq:posdefX} \langle\bar{U},\bar{U}\rangle_{\tilde{X}} =\left \| \bar{U} \right \|_{X}^2 -\left \| U \right \|_{X}^2 + \|U\|_Z^2 \geq \|U\|_Z^2.$$ The last inequality holds since $\left \| \bar{U} \right \|_{X}^2 \geq \left \| U \right \|_{X}^2$ with equality if and only if $\alpha = 0$. Hence, $\langle\bar{U},\bar{U}\rangle_{\tilde{X}} = 0$ if and only if $\bar{U}=0$. Second, we prove that $\langle\cdot,\cdot\rangle_{\tilde{X}}$ and $\left \langle \cdot \, , \cdot \right \rangle_{X}$ are equivalent. Since $\langle\cdot,\cdot\rangle_Z$ and $\left \langle \cdot \, , \cdot \right \rangle_{X}$ are equivalent in $Z$, there exist two positive constants $k, K > 0$ such that $0<k\leq 1\leq K$ and $k\left \| U \right \|_{X}\leq \|U\|_Z\leq K \left \| U \right \|_{X}$ (see the proof of Proposition [Proposition 50](#lem:equi_norms){reference-type="ref" reference="lem:equi_norms"}). Applying this relation to the equality in [\[eq:posdefX\]](#eq:posdefX){reference-type="ref" reference="eq:posdefX"}, we obtain $$(k^2-1)\left \| U \right \|_{X}^2+\left \| \bar{U} \right \|_{X}^2\leq \langle\bar{U},\bar{U}\rangle_{\tilde{X}} \leq (K^2-1)\left \| U \right \|_{X}^2 +\left \| \bar{U} \right \|_{X}^2.$$ Since $\left \| U \right \|_{X}\leq \left \| \bar{U} \right \|_{X}$, we conclude that $$k^2\left \| \bar{U} \right \|_{X}^2\leq \langle\bar{U},\bar{U}\rangle_{\tilde{X}} \leq K^2\left \| \bar{U} \right \|_{X}^2,$$ and the proof is complete. ◻ The following resolvent estimate is the key ingredient to apply the Lumer-Phillips theorem.
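The role of the adapted inner product can be previewed in a finite-dimensional analogue. Replacing ${\mathcal{L}}$ by a symmetric positive definite matrix $L$ (hypothetical data, used only for illustration) and using the energy pairing $\langle (u,v), (w,z)\rangle := \langle Lu, w\rangle + \langle v, z\rangle$, the discrete counterpart of $a[u,w] + \left\langle v \, , z\right\rangle_{L^2}$ on $Z$, the block matrix $A = \bigl(\begin{smallmatrix} 0 & \mathrm{I} \\ -L & -\nu\,\mathrm{I}\end{smallmatrix}\bigr)$ satisfies $\mathrm{Re}\,\langle AU, U\rangle = -\nu\|v\|^2 \leq 0$: the off-diagonal contributions cancel exactly, mirroring the cancellation of the $a$-terms in the proof below. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)
n, nu = 6, 0.7

# Symmetric positive definite stand-in for the operator L (hypothetical data)
B = rng.standard_normal((n, n))
Lmat = B @ B.T + n * np.eye(n)

# A acts on U = (u, v) as A U = (v, -L u - nu v)
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
Au, Av = v, -Lmat @ u - nu * v

def energy(X, Y):
    # Energy pairing <(a, b), (c, d)> = <L a, c> + <b, d>,
    # linear in the first slot, conjugate-linear in the second
    (a, b), (c, d) = X, Y
    return np.vdot(c, Lmat @ a) + np.vdot(d, b)

val = energy((Au, Av), (u, v))

# The cross terms cancel and only the damping survives
assert abs(val.real + nu * np.linalg.norm(v) ** 2) < 1e-9
print("Re <AU, U>_energy =", val.real, "= -nu ||v||^2")
```

In infinite dimensions the same cancellation is only exact modulo the terms generated by the kernel direction $\Theta$, which is precisely what the computation in the proof has to track.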
We use an appropriate choice of metric in order to prove it. **Lemma 53**. *There exists $\eta_0 \in \mathbb{R}$ such that $$\mathrm{Re}\,\! \left \langle {\mathcal{A}}{\bar{U}} \, , {\bar{U}} \right \rangle_{X}\leq \eta_0 \|{\bar{U}}\|_X^2$$ for every ${\bar{U}}\in D({\mathcal{A}})$.* *Proof.* Note that if ${\bar{U}}\in D({\mathcal{A}})\subset X$, then ${\bar{U}}= U+\alpha \Theta$ for some $U\in Z$ and $\alpha\in \mathbb{C}$ due to Corollary [Corollary 48](#H1L2Split){reference-type="ref" reference="H1L2Split"}. Moreover, $U=(u,v)$ with $u\in H^2_\perp$ and $v\in H^1$; in addition, by Lemma [Lemma 47](#lem:H1split){reference-type="ref" reference="lem:H1split"}, $v=w+\beta\partial_x\overline{\theta}$ for some $w\in H^1_\perp$ and $\beta\in \mathbb{C}$. Since $\lambda=0$ is an eigenvalue of ${\mathcal{A}}$ with eigenfunction $\Theta$ (see Lemma [Lemma 33](#lemzeroremains){reference-type="ref" reference="lemzeroremains"}), we have that $${\mathcal{A}}{\bar{U}}= {\mathcal{A}}U = V+\beta \Theta, \quad \text{where} \quad V:=\begin{pmatrix} w\\ -\nu v-\mathop{\mathrm{\mathcal{L}}}u \end{pmatrix}\in Z.$$ Then, $$\begin{split} \left\langle {\mathcal{A}}{\bar{U}},{\bar{U}}\right\rangle_{\tilde{X}} =&\langle V,U\rangle_Z + \left \langle \beta \Theta \, , U \right \rangle_{X} + \left \langle V \, , \alpha\Theta \right \rangle_{X} + \alpha^*\beta \left \| \Theta \right \|_{X}^2.
\end{split}$$ In view of Remark [Remark 51](#rem:innerprod){reference-type="ref" reference="rem:innerprod"} and [\[eq:innerprod&L\]](#eq:innerprod&L){reference-type="eqref" reference="eq:innerprod&L"}, the term $\langle V,U\rangle_Z$ is recast as $$\langle V,U\rangle_Z = a[w,u]-\left \langle \mathop{\mathrm{\mathcal{L}}}u \, , v \right \rangle_{L^2}-\nu\left \| v \right \|_{L^2}^2 = 2i \, \mathrm{Im}\,a[w,u]-\nu\left \| v \right \|_{L^2}^2.$$ Upon substitution into the expression for $\left\langle {\mathcal{A}}{\bar{U}},{\bar{U}}\right\rangle_{\tilde{X}}$, one gets $$\left\langle {\mathcal{A}}{\bar{U}},{\bar{U}}\right\rangle_{\tilde{X}} = 2i\,\mathrm{Im}\,a[w,u]-\nu\left \| v \right \|_{L^2}^2+ \left \langle \beta \Theta \, , U \right \rangle_{X} + \left \langle \beta\Theta \, , \alpha\Theta \right \rangle_{X} + \left \langle V \, , \alpha\Theta \right \rangle_{X}.$$ Now, using the explicit form of $\Theta$ and the fact that $\left \langle w \, , \partial_x\overline{\theta} \right \rangle_{L^2} = 0$, we obtain $$\left \langle V \, , \alpha\Theta \right \rangle_{X} = \left \langle w \, , \alpha\partial_x \overline{\theta} \right \rangle_{H^1} = \left \langle \partial_x w \, , \alpha\partial^2_x \overline{\theta} \right \rangle_{L^2}=-\left \langle w \, , \alpha\partial^3_{x}\overline{\theta} \right \rangle_{L^2},$$ where the last equality follows upon integration by parts.
Hence, $$\label{eq:completeAZZ} \left\langle {\mathcal{A}}{\bar{U}},{\bar{U}}\right\rangle_{\tilde{X}} = 2 i\,\mathrm{Im}\,a[w,u]-\nu\left \| v \right \|_{L^2}^2+ \left \langle \beta \Theta \, , U \right \rangle_{X} + \left \langle \beta \Theta \, , \alpha\Theta \right \rangle_{X} -\left \langle w \, , \alpha\partial^3_{x}\overline{\theta} \right \rangle_{L^2} .$$ Taking the real part of [\[eq:completeAZZ\]](#eq:completeAZZ){reference-type="eqref" reference="eq:completeAZZ"} and applying the Cauchy-Schwarz and Young inequalities yields $$2\mathrm{Re}\,\!\left\langle {\mathcal{A}}{\bar{U}},{\bar{U}}\right\rangle_{\tilde{X}} \leq -2\nu\left \| v \right \|_{L^2}^2+ \left \| U \right \|_{X}^2 +2\left \| \beta \Theta \right \|_{X}^2 + \left \| \alpha\Theta \right \|_{X}^2 + \left \| \alpha\partial^3_{x}\overline{\theta} \right \|_{L^2}^2+\left \| w \right \|_{L^2}^2 .$$ Note that $\left \| \partial^3_{x}\overline{\theta} \right \|_{L^2}<\infty$ and $\left \| \partial_x\overline{\theta} \right \|_{L^2}\neq 0$ due to Proposition [Proposition 1](#propNeelw){reference-type="ref" reference="propNeelw"}.
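In more detail, the real part of the purely imaginary term $2i\,\mathrm{Im}\,a[w,u]$ vanishes, while each remaining cross term is controlled by the elementary inequality $2\,\mathrm{Re}\,\langle a,b\rangle \leq \|a\|^2+\|b\|^2$; a term-by-term expansion of this step reads:

```latex
\begin{aligned}
2\,\mathrm{Re}\left \langle \beta \Theta \, , U \right \rangle_{X}
  &\leq \left \| \beta \Theta \right \|_{X}^2 + \left \| U \right \|_{X}^2,\\
2\,\mathrm{Re}\left \langle \beta \Theta \, , \alpha\Theta \right \rangle_{X}
  &\leq \left \| \beta \Theta \right \|_{X}^2 + \left \| \alpha\Theta \right \|_{X}^2,\\
-2\,\mathrm{Re}\left \langle w \, , \alpha\partial^3_{x}\overline{\theta} \right \rangle_{L^2}
  &\leq \left \| w \right \|_{L^2}^2 + \left \| \alpha\partial^3_{x}\overline{\theta} \right \|_{L^2}^2.
\end{aligned}
```

Summing these three bounds together with the term $-2\nu\left \| v \right \|_{L^2}^2$ recovers the right-hand side of the estimate above.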
Thus, we may define the positive constants $C_1 :=\left \| \Theta \right \|_{X}^2/\left \| \partial_x \overline{\theta} \right \|_{L^2}^2$ and $C_2:=\left \| \partial^3_x \overline{\theta} \right \|_{L^2}^2/\left \| \partial_x \overline{\theta} \right \|_{L^2}^2$, depending only on $\overline{\theta}$, so that $$\begin{split} 2\mathrm{Re}\,\!\left\langle{\mathcal{A}}{\bar{U}},{\bar{U}}\right\rangle_{\tilde{X}} \leq & -2\nu\left \| v \right \|_{L^2}^2+ \left \| U \right \|_{X}^2 +2C_1\left \| \beta \partial_x\overline{\theta} \right \|_{L^2}^2 + \left \| \alpha\Theta \right \|_{X}^2 + C_2\left \| \alpha\partial_{x}\overline{\theta} \right \|_{L^2}^2+\left \| w \right \|_{L^2}^2 \\ \leq & -2\nu\left \| v \right \|_{L^2}^2+ (2+C_2)\left \| {\bar{U}} \right \|_{X}^2 +(1+2C_1)\left \| v \right \|_{L^2}^2 \\ \leq & (3+2C_1+C_2)\left \| {\bar{U}} \right \|_{X}^2. \end{split}$$ The last two inequalities hold because $\left \| {\bar{U}} \right \|_{X}\geq \left \| U \right \|_{X}\geq \left \| v \right \|_{L^2}\geq \max\{\left \| w \right \|_{L^2},\left \| \beta \partial_x\overline{\theta} \right \|_{L^2}\}$ and $\left \| {\bar{U}} \right \|_{X}\geq \left \| \alpha\Theta \right \|_{X}\geq \left \| \alpha \partial_x\overline{\theta} \right \|_{L^2}$. Finally, the equivalence between $\left \langle \cdot \, , \cdot \right \rangle_{X}$ and $\langle \cdot,\cdot \rangle_{\tilde{X}}$ implies the existence of $K>0$ such that $$\begin{split} \mathrm{Re}\,\left\langle {\mathcal{A}}{\bar{U}}, {\bar{U}}\right\rangle_{X} \leq \tfrac{1}{2}K(3+2C_1+C_2)\|{\bar{U}}\|_X^2, \end{split}$$ yielding the result with $\eta_0 = \tfrac{1}{2}K(3+2C_1+C_2) > 0$. ◻ **Lemma 54**. *There exists $\tau > \eta_0$ such that ${\mathcal{A}}- \tau$ is onto.* *Proof.* First we notice that, from the proof of Lemma [Lemma 53](#lemF4){reference-type="ref" reference="lemF4"}, $\eta_0>0$.
In addition, we know that every $\lambda>0$ belongs to $\rho\left ( {\mathcal{A}} \right )$ due to Theorem [Theorem 40](#mainspthm){reference-type="ref" reference="mainspthm"}. Therefore, the proof is complete by choosing any $\tau >\eta_0$. ◻ As an immediate consequence of Lemmata [Lemma 53](#lemF4){reference-type="ref" reference="lemF4"} and [Lemma 54](#lemF5){reference-type="ref" reference="lemF5"}, we are now able to apply the classical Lumer-Phillips theorem (see, e.g., Theorem 12.22, p. 407, in [@ReRo04]) and to claim the following result. **Lemma 55**. *The operator ${\mathcal{A}}: H^1 \times L^2 \to H^1 \times L^2$ with $D({\mathcal{A}}) = H^2\times H^1$ is the infinitesimal generator of a $C_0$-semigroup of quasicontractions $\{e^{t{\mathcal{A}}}\}_{t\geq 0}$.* **Corollary 56**. *For each $U \in D({\mathcal{A}}) = H^2 \times H^1$ there holds $$\frac{d}{dt} \big( e^{t{\mathcal{A}}} U \big) = e^{t{\mathcal{A}}} {\mathcal{A}}U = {\mathcal{A}}(e^{t{\mathcal{A}}}U).$$* *Proof.* Follows from Lemma [Lemma 55](#lemsiSG){reference-type="ref" reference="lemsiSG"} and basic properties of semigroups (cf. [@EN00; @Pa83]). ◻ We now observe that on a reflexive Banach space, weak and weak$^*$ topologies coincide, and therefore the family of dual operators $\{ (e^{t {\mathcal{A}}})^*\}_{t\geq 0}$, consisting of all the formal adjoints in $L^2$, is a $C_0$-semigroup as well (cf. [@EN00], p. 44). Moreover, the infinitesimal generator of this semigroup is simply ${\mathcal{A}}^*$ (see Corollary 10.6 in [@Pa83]), so we denote $(e^{t {\mathcal{A}}})^* = e^{t {\mathcal{A}}^*}$. By semigroup properties we readily have $$e^{t {\mathcal{A}}} \Theta = \Theta \qquad \text{and} \qquad e^{t {\mathcal{A}}^*} \Phi = \Phi.$$ As a result of these identities and of the definition of the projector, we have **Lemma 57**.
*For all $t \geq 0$ there holds $e^{t {\mathcal{A}}} {\mathcal{P}}= {\mathcal{P}}e^{t {\mathcal{A}}}$.* *Proof.* Let $U \in H^2 \times H^1$; then $$\begin{aligned} {\mathcal{P}}e^{t {\mathcal{A}}} U = e^{t {\mathcal{A}}} U - \Xi^{-1}\left \langle e^{t {\mathcal{A}}} U \, , \Phi \right \rangle_{X}\Theta &= e^{t {\mathcal{A}}} U - \Xi^{-1}\left \langle U \, , e^{t {\mathcal{A}}^*}\Phi \right \rangle_{X}\Theta \\ &= e^{t {\mathcal{A}}} U -\Xi^{-1}\left \langle U \, , \Phi \right \rangle_{X}e^{t {\mathcal{A}}}\Theta \\ &= e^{t {\mathcal{A}}} {\mathcal{P}}U,\end{aligned}$$ as claimed. ◻ The last result implies that $X_1$ is an $e^{t {\mathcal{A}}}$-invariant closed (Hilbert) subspace of $X = H^1 \times L^2$. Hence, we define the domain $$D_1 := \{U \in {\mathcal{D}}\cap X_1 \, : \, {\mathcal{A}}U \in X_1 \},$$ and the operator $${\mathcal{A}}_1 : D_1 \subset X_1 \to X_1,$$ $${\mathcal{A}}_1 U := {\mathcal{A}}U, \qquad U \in D_1,$$ as the restriction of ${\mathcal{A}}$ on $X_1$. Therefore, ${\mathcal{A}}_1$ is a closed, densely defined operator on the Hilbert space $X_1$. Moreover, **Lemma 58**. *$\lambda = 0$ is not in the spectrum of ${\mathcal{A}}_1$.* *Proof.* It suffices to verify that $\Theta \notin X_1$. We compute ${\mathcal{P}}\Theta = \Theta - \Xi^{-1}\left \langle \Theta \, , \Phi \right \rangle_{X} \Theta = 0$. Hence $0 \neq \Theta \in \ker {\mathcal{P}}$ and therefore the eigenfunction associated with the eigenvalue $\lambda = 0$ is not in ${\mathcal{R}}({\mathcal{P}}) = X_1$. ◻ In this fashion we project out $\lambda = 0$ from the spectrum. As a consequence of spectral stability (see Theorem [Theorem 40](#mainspthm){reference-type="ref" reference="mainspthm"} above), we obtain the following **Corollary 59**.
*$\sigma ({\mathcal{A}}_1)$ is contained in the stable half of the complex plane, $$\sigma ({\mathcal{A}}_1) \subset \{\lambda \in \mathbb{C}\, : \, \mathrm{Re}\,\lambda \leq - \zeta_0(\nu) < 0\},$$ and the spectral bound of ${\mathcal{A}}_1$ is strictly negative, $s({\mathcal{A}}_1) < 0$.* **Lemma 60**. *The family of operators $\{e^{t {\mathcal{A}}_1}\}_{t\geq 0}$, $e^{t {\mathcal{A}}_1} : X_1 \to X_1$, defined as $$e^{t {\mathcal{A}}_1}U := e^{t {\mathcal{A}}} U, \quad U \in X_1, \; t \geq 0,$$ is a $C_0$-semigroup of quasicontractions in the Hilbert space $X_1$ with infinitesimal generator ${\mathcal{A}}_1$.* *Proof.* The semigroup properties are inherited from those of $e^{t {\mathcal{A}}}$ in $X = H^1 \times L^2$. That ${\mathcal{A}}_1$ is the infinitesimal generator follows from Corollary in Section 2.2 of [@EN00], p. 61. ◻ Finally, in order to prove that the semigroup is exponentially decaying, we rely on the Gearhart-Prüss theorem, and we need to show that $$\sup_{\mathrm{Re}\,\lambda>0} \|(\lambda-{\mathcal{A}}_1)^{-1}\|_{X_1\to X_1}<\infty.$$ This condition is satisfied if any solution $U$ to the linear equation $(\lambda-{\mathcal{A}}_1)U =F$ for $F\in X_1$ satisfies a resolvent estimate, $\left \| U \right \|_{X}\leq C(\lambda)\left \| F \right \|_{X}$, in which the constant $C(\lambda)$ remains bounded in $\mathrm{Re}\,\lambda>0$. The next result goes in that direction. **Lemma 61**. *Let $\lambda\in \rho\left ( {\mathcal{A}} \right )$ and $f,g,u,v\in L^2_\perp$ be such that $F=(f,g)^\top \in X_1$, $U = (u,v)^\top \in D_1$ and $(\lambda-{\mathcal{A}}_1)U =F$.
Then, $$\label{eq:used_ineq} \left|\lambda^* a[u,u]+(\lambda+\nu)\left \| v \right \|_{L^2}^2\right| \leq \|U\|_2\|F\|_2.$$ Moreover, if $C_0$ and $C_1$ are two fixed positive numbers, then there exists a constant $K(C_0,C_1)>0$ such that $\left \| U \right \|_{X}\leq K(C_0,C_1)\left \| F \right \|_{X}$ for all $\lambda$ such that $\mathrm{Re}\,\lambda>C_0$ or $|\mathrm{Im}\,\lambda|>C_1$.* *Proof.* First, we write the vectorial equation as a system of linear equations, $$\label{eq:sys1} \lambda u - v = f,$$ $$\label{eq:sys2} \mathop{\mathrm{\mathcal{L}}}u +(\lambda+\nu)v = g.$$ Take the $L^2$-product on the left of [\[eq:sys1\]](#eq:sys1){reference-type="eqref" reference="eq:sys1"} with $\mathop{\mathrm{\mathcal{L}}}u$, and the $L^2$-product on the right of [\[eq:sys2\]](#eq:sys2){reference-type="eqref" reference="eq:sys2"} with $v$. The result is $$\lambda^*\left \langle \mathop{\mathrm{\mathcal{L}}}u \, , u \right \rangle_{L^2} -\left \langle \mathop{\mathrm{\mathcal{L}}}u \, , v \right \rangle_{L^2} = \left \langle \mathop{\mathrm{\mathcal{L}}}u \, , f \right \rangle_{L^2}, \quad \left \langle \mathop{\mathrm{\mathcal{L}}}u \, , v \right \rangle_{L^2} +(\lambda+\nu)\left \| v \right \|_{L^2}^2 = \left \langle g \, , v \right \rangle_{L^2} .$$ Notice that $u\in H^2_\perp$ and $v,f\in H^1_\perp$.
By Lemma [Lemma 49](#lem:sesqui_form){reference-type="ref" reference="lem:sesqui_form"}, these equations can be written in terms of the sesquilinear form $a[\cdot,\cdot]$ as $$\lambda^* a[u,u] - a[u,v] = a[u,f],$$ $$a[u,v] +(\lambda+\nu)\left \| v \right \|_{L^2}^2 = \left \langle g \, , v \right \rangle_{L^2}.$$ Then, the complex modulus of the sum of these equations satisfies $$\left | \lambda^* a[u,u]+(\lambda+\nu)\left \| v \right \|_{L^2}^2\right | \leq | a[u,f]|+ |\left \langle g \, , v \right \rangle_{L^2}|.$$ Since $a$ is a nonnegative Hermitian sesquilinear form, the Cauchy-Schwarz inequality remains valid for $a$ in $H^1_\perp$, just as for the standard inner product in $L^2$. Hence, $$\left | \lambda^* a[u,u]+(\lambda+\nu)\left \| v \right \|_{L^2}^2\right | \leq a^{1/2}[u,u]a^{1/2}[f,f]+ \left \| g \right \|_{L^2}\left \| v \right \|_{L^2}.$$ Also note that the right-hand side of the last inequality is bounded by $$\left[\left \| v \right \|_{L^2}+a^{1/2}[u,u]\right]\left[\left \| g \right \|_{L^2}+a^{1/2}[f,f]\right] = \|U\|_2\|F\|_2.$$ Thus, inequality [\[eq:used_ineq\]](#eq:used_ineq){reference-type="eqref" reference="eq:used_ineq"} follows. Second, use [\[eq:used_ineq\]](#eq:used_ineq){reference-type="eqref" reference="eq:used_ineq"} to get $$\left|\lambda^* a[u,u]+(\lambda+\nu)\left \| v \right \|_{L^2}^2\right| \leq \left[a^{1/2}[u,u]+\left \| v \right \|_{L^2}\right]\|F\|_2.$$ Notice that $\left|\lambda^* a[u,u]+(\lambda+\nu)\left \| v \right \|_{L^2}^2\right| = 0$ if and only if $(u,v) = (0, 0)$ because $\mathrm{Re}\,\lambda \geq 0$ and $\nu>0$.
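The case analysis that follows rests on the explicit real and imaginary parts of this quantity; writing $\lambda = \mathrm{Re}\,\lambda + i\,\mathrm{Im}\,\lambda$ and recalling that $a[u,u]\geq 0$ and $\left \| v \right \|_{L^2}^2\geq 0$, one finds:

```latex
\begin{aligned}
\mathrm{Re}\left(\lambda^* a[u,u]+(\lambda+\nu)\left \| v \right \|_{L^2}^2\right)
  &= \mathrm{Re}\,\lambda\left(a[u,u]+\left \| v \right \|_{L^2}^2\right)+\nu\left \| v \right \|_{L^2}^2,\\
\mathrm{Im}\left(\lambda^* a[u,u]+(\lambda+\nu)\left \| v \right \|_{L^2}^2\right)
  &= \mathrm{Im}\,\lambda\left(\left \| v \right \|_{L^2}^2-a[u,u]\right).
\end{aligned}
```

In particular, for $\mathrm{Re}\,\lambda\geq 0$ the modulus is bounded below both by $\mathrm{Re}\,\lambda\left(a[u,u]+\left \| v \right \|_{L^2}^2\right)+\nu\left \| v \right \|_{L^2}^2$ and by $\sqrt{(\mathrm{Im}\,\lambda)^2\left(a[u,u]-\left \| v \right \|_{L^2}^2\right)^2+\nu^2\left \| v \right \|_{L^2}^4}$, which are the two lower bounds used in the estimates below.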
Hence, if $(u,v)\neq 0$, we have $$a^{1/2}[u,u]+\left \| v \right \|_{L^2} \leq \frac{\left(a^{1/2}[u,u]+\left \| v \right \|_{L^2}\right)^2}{\left|\lambda^* a[u,u]+(\lambda+\nu)\left \| v \right \|_{L^2}^2\right|}\|F\|_2.$$ If $\mathrm{Re}\,\lambda> C_0>0$, then $$\frac{\left(\left \| v \right \|_{L^2}+a^{1/2}[u,u]\right)^2}{\left|\lambda^* a[u,u]+(\lambda+\nu)\left \| v \right \|_{L^2}^2\right|} \leq \frac{2}{\mathrm{Re}\,\lambda}\ \frac{a[u,u] +\left \| v \right \|_{L^2}^2}{ a[u,u] +\left \| v \right \|_{L^2}^2+ \frac{\nu}{\mathrm{Re}\,\lambda}\left \| v \right \|_{L^2}^2 } \leq \frac{2}{C_0}.$$ Now, if $|\mathrm{Im}\,\lambda|\geq C_1>0$ and $\mathrm{Re}\,\lambda \geq 0$, then $$\frac{\left(a^{1/2}[u,u]+\left \| v \right \|_{L^2}\right)^2}{\left|\lambda^* a[u,u]+(\lambda+\nu)\left \| v \right \|_{L^2}^2\right|}\leq 2 \frac{a[u,u] +\left \| v \right \|_{L^2}^2}{ \sqrt{C_1^2(a[u,u] -\left \| v \right \|_{L^2}^2)^2+ \nu^2\left \| v \right \|_{L^2}^4 }}.$$ Let us write $a[u,u] = r^2\cos^2 t$ and $\left \| v \right \|_{L^2}^2 = r^2 \sin^2 t$ for some $r>0$ and $t\in [0,\pi/2]$. This change of variables implies that $$\begin{split} \frac{\left(\left \| v \right \|_{L^2}+a^{1/2}[u,u]\right)^2}{\left|\lambda^* a[u,u]+(\lambda+\nu)\left \| v \right \|_{L^2}^2\right|}\leq & \frac{2}{ \sqrt{C_1^2\cos^22t+ \nu^2\cos^4 t }}\\ \leq & \frac{2}{\sqrt{C_1^2 \cos^2 2t + \tfrac{1}{4} {\nu^2}(1+\cos 2t)^2}}\\ \leq & \frac{2}{ \sqrt{\left(C_1^2+\tfrac{1}{4} \nu^2\right)\cos^22t+ \tfrac{1}{2}\nu^2\cos 2t +\tfrac{1}{4} \nu^2}}. \end{split}$$ Let us denote $$h(t) := \left(C_1^2+\tfrac{1}{4} \nu^2\right)\cos^22t+ \tfrac{1}{2}\nu^2\cos 2t +\tfrac{1}{4} \nu^2, \qquad t \in [0, \pi/2].$$ This is a non-vanishing $C^1$-function whose global minimum is attained at $t_c\in (\pi/4,\pi/2)$, determined by the relation $\cos 2t_c = -\nu^2/(4C_1^2 +\nu^2)$.
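For completeness, the value of $h$ at the critical point follows by substituting $\cos 2t_c = -\nu^2/(4C_1^2+\nu^2)$ into the expression for $h$:

```latex
\begin{aligned}
h(t_c) &= \left(C_1^2+\tfrac{1}{4}\nu^2\right)\frac{\nu^4}{(4C_1^2+\nu^2)^2}
          -\frac{\nu^4}{2(4C_1^2+\nu^2)}+\frac{\nu^2}{4}\\
       &= \frac{\nu^4}{4(4C_1^2+\nu^2)}-\frac{2\nu^4}{4(4C_1^2+\nu^2)}
          +\frac{\nu^2(4C_1^2+\nu^2)}{4(4C_1^2+\nu^2)}
        = \frac{\nu^2 C_1^2}{4C_1^2+\nu^2},
\end{aligned}
```

so that $2/\sqrt{h(t_c)} = 2\sqrt{\nu^2+4C_1^2}/(\nu C_1)$.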
Thus, a straightforward computation implies that $$\frac{\left(\left \| v \right \|_{L^2}+a^{1/2}[u,u]\right)^2}{\left|\lambda^* a[u,u]+(\lambda+\nu)\left \| v \right \|_{L^2}^2\right|} \leq \frac{2}{ \sqrt{h(t_c)}} = \frac{2\sqrt{\nu^2 +4C_1^2}}{\nu C_1}.$$ Therefore, if $K=2\max\{\sqrt{\nu^2 +4C_1^2}/(\nu C_1),1/C_0\}$, we obtain $$\|U\|_2 \leq K \|F\|_2.$$ Finally, we conclude the existence of a constant $K(C_0,C_1)>0$ such that $\left \| U \right \|_{X}\leq K(C_0,C_1)\left \| F \right \|_{X}$ due to the equivalence between the norms $\|\cdot\|_2$ and $\left \| \cdot \right \|_{X}$; see Proposition [Proposition 50](#lem:equi_norms){reference-type="ref" reference="lem:equi_norms"}. Thus, the second statement also holds. This completes the proof. ◻ We are left to prove the following estimate. **Lemma 62**. *$$\sup_{\mathrm{Re}\,\lambda > 0} \| (\lambda-{\mathcal{A}}_1)^{-1} \|_{X_1 \to X_1} < \infty.$$* *Proof.* Let $\mathrm{Re}\,\lambda\geq 0$, so that $\lambda\in \rho({\mathcal{A}}_1)$ by Corollary [Corollary 59](#elcoro){reference-type="ref" reference="elcoro"}, and fix two positive numbers $C_0$ and $C_1$. Then, we split the set $\{\lambda\in \mathbb{C}\ |\ \mathrm{Re}\,\lambda\geq 0\}$ into three disjoint sets, namely $$\begin{array}{l} S_0 = \{\lambda\in \mathbb{C}\ | \ 0\leq \mathrm{Re}\,\lambda \leq C_0, \ |\textrm{Im}\left(\lambda\right)|\leq C_1\},\\ S_1 = \{\lambda\in \mathbb{C}\ | \ 0\leq \mathrm{Re}\,\lambda \leq C_0, \ C_1<|\textrm{Im}\left(\lambda\right)|\},\\ S_2 = \{\lambda\in \mathbb{C}\ | \ C_0< \mathrm{Re}\,\lambda \}. \end{array}$$ In the rest of the proof, we will show that for every $\bar{F}\in X_1$ the solution $\bar{U}\in D_1\subset X_1$ to the equation $(\lambda-{\mathcal{A}}_1)\bar{U} = \bar{F}$ is uniformly bounded for $\lambda\in S_k$ with $k=0,1,2$. We analyze the behavior on $S_0$. We claim that $\lambda\to \| (\lambda-{\mathcal{A}}_1)^{-1}\|$ is a continuous mapping.
Indeed, this follows from the continuity of the mapping $\lambda\to (\lambda-{\mathcal{A}}_1)^{-1}$ and the reverse triangle inequality, since for every $\lambda,\ \mu\in \rho({\mathcal{A}}_1)$ there holds $$\left|\ \|(\lambda-{\mathcal{A}}_1)^{-1}\|-\|(\mu-{\mathcal{A}}_1)^{-1}\|\ \right|\leq \|(\lambda-{\mathcal{A}}_1)^{-1}-(\mu-{\mathcal{A}}_1)^{-1}\|.$$ Now, we observe that $S_0$ is a compact subset contained in $\rho({\mathcal{A}}_1)$, where the mapping $\lambda\to \|(\lambda-{\mathcal{A}}_1)^{-1}\|$ is continuous. Then, it follows that there exists $K_1>0$ such that $\|(\lambda-{\mathcal{A}}_1)^{-1}\|\leq K_1$ for every $\lambda\in S_0$. The analysis on $S_1$ and $S_2$ is as follows. Since $H^k\subset L^2 =H^0$ for $k>0$, we write the entries in $\bar{F}$ and $\bar{U}$ as the sum of two terms, one in $\mathop{\mathrm{Span}}\{\partial_x \overline{\theta}\}$ and the other in $H^k_\perp$. More precisely, by Lemma [Lemma 46](#lem:char_X1){reference-type="ref" reference="lem:char_X1"} we know that there exist $u\in H^2_\perp$, $v,f\in H^1_\perp$, $g\in L^2_\perp$ and $\alpha,\gamma\in \mathbb{C}$ such that $\bar{U} =(u, v) +\alpha (1, -\nu)\partial_x\overline{\theta}$ and $\bar{F} =( f, g) + \gamma (1, -\nu)\partial_x\overline{\theta}$. The vectorial equation $(\lambda-{\mathcal{A}}_1)\bar{U} = \bar{F}$ translates into three equations: $$\lambda u - v = f,\qquad \mathop{\mathrm{\mathcal{L}}}u +(\lambda+\nu)v = g,\qquad \mbox{and} \qquad \alpha(\lambda + \nu) = \gamma.$$ Now let $U=(u,v)$ and $F=(f,g)$.
Since $u$, $v$, $f$, and $g$ satisfy the hypotheses of Lemma [Lemma 61](#lem:main_ineq){reference-type="ref" reference="lem:main_ineq"}, for $\lambda\in S_1\cup S_2$ we have $$\left \| U \right \|_{X} \leq K(C_0,C_1)\left \| F \right \|_{X}.$$ Thus, $$\left \| \bar{U} \right \|_{X} \leq \left \| U \right \|_{X} + \frac{\left \| \gamma(1,-\nu)\partial_x\overline{\theta} \right \|_{X}}{|\lambda+\nu|} \leq \left(K(C_0,C_1) + \frac{1}{|\lambda+\nu|}\right)\left \| \bar{F} \right \|_{X}.$$ Since $|\lambda+\nu|\geq \nu$ whenever $\mathrm{Re}\,\lambda\geq 0$, the resolvent $(\lambda-{\mathcal{A}}_1)^{-1}$ is uniformly bounded on $S_1\cup S_2$, and the proof is complete. ◻ Now from Lemma [Lemma 62](#lemF6){reference-type="ref" reference="lemF6"} and Corollary [Corollary 59](#elcoro){reference-type="ref" reference="elcoro"}, we may apply the Gearhart-Prüss theorem directly to conclude the following: **Theorem 63**. *There exist uniform constants $M \geq 1$ and $\omega_1 > 0$ such that $$\label{lindecay} \|e^{t {\mathcal{A}}_1} U\|_{H^1\times L^2} \leq M e^{-\omega_1 t} \|U\|_{H^1\times L^2},$$ for all $t \geq 0$, $U \in X_1$.* # Nonlinear (orbital) stability {#sec:nonlinear_stability} In this section we study the stability of the solution $\theta(x,t)$, if it exists, to the Cauchy problem [\[reddyneq\]](#reddyneq){reference-type="eqref" reference="reddyneq"}, $$\label{eq:nlpert} \begin{split} \partial_t^{2}\theta +\nu\partial_t \theta + \nabla {\mathcal{E}}(\theta) = 0,& \qquad x\in \mathbb{R},\ t > 0, \\ \theta(x,0) = u_0(x),& \qquad x\in \mathbb{R},\\ \quad \partial_t \theta(x,0) = v_0(x),&\qquad x\in \mathbb{R}, \end{split}$$ when the initial conditions are close to the static Néel wall $\overline{\theta}$. This problem can be rewritten as a nonlinear vector system of equations by setting $\varphi=\partial_t\theta$. Hence, if $W = (\theta,\varphi)$, $W_0 = (u_0,v_0)$, and $F(W) = (\varphi,-\nu \varphi - \nabla {\mathcal{E}}(\theta))^\top$, we get $$\label{eq:FOnlpert} \begin{split} \partial_t W = F(W), & \qquad x\in \mathbb{R},\ t > 0, \\ W(x,0)=W_0(x), & \qquad x\in \mathbb{R}. \end{split}$$ **Remark 64**.
It is known that the nonlinear term in [\[eq:nlpert\]](#eq:nlpert){reference-type="eqref" reference="eq:nlpert"} is invariant under translations in the spatial variable (see Lemma 2.6 in [@Melc03]). Thus, if $\overline{\theta}$ denotes the phase of the static Néel wall, then $\nabla {\mathcal{E}}(\overline{\theta}(\cdot +\delta))=0$ for every $\delta\in \mathbb{R}$. This symmetry is inherited by equation [\[eq:FOnlpert\]](#eq:FOnlpert){reference-type="eqref" reference="eq:FOnlpert"}. Indeed, $$F(\phi(\delta)) = 0, \quad \mbox{for } \quad \phi(\delta) = (\overline{\theta}(\cdot +\delta),0)^\top .$$ Hence, taking the derivative with respect to $\delta$, we get $DF(\phi(\delta))\phi'(\delta) = 0$. Therefore, zero is an eigenvalue of $DF(\phi(\delta))$ with eigenfunction $\phi'(\delta)$, expressing, once again, translation invariance. The linearized system around $\phi(\delta)$ now reads $$\label{eq:FOlpert} \begin{split} \partial_t V = {\mathcal{A}}^\delta V, & \qquad x\in \mathbb{R},\ t > 0, \\ V(x,0)=V_0(x), & \qquad x\in \mathbb{R}, \end{split}$$ where $${\mathcal{A}}^\delta: = \begin{pmatrix} 0 & \mathrm{I}\\ -\mathop{\mathrm{\mathcal{L}}}^\delta & -\nu\mathrm{I}\end{pmatrix}, \qquad \mbox{and}\qquad \mathop{\mathrm{\mathcal{L}}}^\delta u= \left. \frac{d}{d\epsilon}\nabla{\mathcal{E}}\left(\overline{\theta}(\cdot+\delta)+\epsilon u\right)\right|_{\epsilon=0}.$$ These operators are defined on the same base spaces as before: $H^1 \times L^2$ and $L^2$, respectively. Notice that $\mathop{\mathrm{\mathcal{L}}}^\delta$ is similar to $\mathop{\mathrm{\mathcal{L}}}$; the only difference lies in the dependence on $\delta$ due to the translation in the argument of the Néel wall's phase.
Then, the following identification is well justified, $${\mathcal{A}}^0:= {\mathcal{A}}, \qquad \mbox{and}\qquad \mathop{\mathrm{\mathcal{L}}}^0:=\mathop{\mathrm{\mathcal{L}}}.$$ Due to previous results, the system [\[eq:FOlpert\]](#eq:FOlpert){reference-type="eqref" reference="eq:FOlpert"} for $\delta=0$ has a unique solution in $X_1$ given by the action of a $C_0$-semigroup, generated by ${\mathcal{A}}_1$, on the initial condition $V_0\in X_1$. It is not difficult to see that all the arguments before Section [6](#sec:nonlinear_stability){reference-type="ref" reference="sec:nonlinear_stability"} are easily adapted to the case $\delta\neq 0$, since the translation by $\delta$ in the argument of $\overline{\theta}$ can be interpreted as the action of the left translation operator $T_l(\delta)$, which is an $L^2$-isometry and a $C_0$-semigroup with generator $\partial_x$ (see [@EN00]). Therefore, since $\overline{\theta}\in H^1$, there holds $$\left \| \partial_x \overline{\theta}(x+\delta) \right \|_{L^2} = \left \| \partial_x T_l(\delta) \overline{\theta}(x) \right \|_{L^2} = \left \| T_l(\delta) \partial_x \overline{\theta}(x) \right \|_{L^2} = \left \| \partial_x \overline{\theta}(x) \right \|_{L^2},$$ which implies that the $H^1$-norm and the $L^2$-norm remain invariant. Thus, we must emphasize this $\delta$-dependence in all the terms that depend on the profile $\overline{\theta}$. For example, we replace $\overline{\theta}$ by $\overline{\theta}_\delta = \overline{\theta}(\cdot + \delta)$, as well as the vector $\Theta$, the projector ${\mathcal{P}}$ and the space $X_1$, which are replaced by $\Theta(\delta)$, ${\mathcal{P}}(\delta)$ and $X_1(\delta)$, respectively. We represent these functions, operators and spaces for the case $\delta\neq 0$ with the explicit dependence on this variable. It is important to point out that the spectral and growth bounds *do not change*, because they depend on the invariant $H^1$ and $L^2$ norms.
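This adaptation can be summarized, at least formally, by a conjugation identity: since $T_l(\delta)$ commutes with $\partial_x$ and with $(-\Delta)^{1/2}$, and the coefficients of $\mathop{\mathrm{\mathcal{L}}}$ depend on $x$ only through $\overline{\theta}$, one expects

```latex
\mathop{\mathrm{\mathcal{L}}}^\delta = T_l(\delta)\,\mathop{\mathrm{\mathcal{L}}}\,T_l(\delta)^{-1},
\qquad
{\mathcal{A}}^\delta =
\begin{pmatrix} T_l(\delta) & 0\\ 0 & T_l(\delta) \end{pmatrix}
{\mathcal{A}}
\begin{pmatrix} T_l(\delta)^{-1} & 0\\ 0 & T_l(\delta)^{-1} \end{pmatrix},
```

so that ${\mathcal{A}}^\delta$ and ${\mathcal{A}}$ share the same spectrum, consistent with the $\delta$-independence of the spectral and growth bounds noted above.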
As a result, from the previous analysis we know that system [\[eq:FOlpert\]](#eq:FOlpert){reference-type="eqref" reference="eq:FOlpert"} has a unique solution in $X = H^1 \times L^2$ given by the action of the $C_0$-semigroup $\{e^{t {\mathcal{A}}^\delta}\}_{t\geq 0}$ on the initial condition $V_0\in X$. Moreover, due to Theorem [Theorem 63](#lemmaeight){reference-type="ref" reference="lemmaeight"}, there exist uniform constants $M\geq 1$ and $\tilde{\omega}>0$ such that $$\|e^{t{\mathcal{A}}_1^{\delta}}V_0\|_{H^1\times L^2}\leq Me^{-\tilde{\omega} t}\|V_0\|_{H^1\times L^2}.$$ Notice that if ${\mathcal{P}}(\delta)$ is the projector defined in [\[eq:projector\]](#eq:projector){reference-type="eqref" reference="eq:projector"} for $\delta\neq 0$ and $V_0\in H^1\times L^2$, then ${\mathcal{P}}(\delta)V_0\in X_1(\delta)$ and the linear system [\[eq:FOlpert\]](#eq:FOlpert){reference-type="eqref" reference="eq:FOlpert"} has at least one solution given by $$\label{eq:solH1L2} V = e^{t{\mathcal{A}}^\delta}{\mathcal{P}}(\delta)V_0 + (\mathrm{I}-{\mathcal{P}}(\delta))V_0,$$ since $$\begin{split} \partial_t V =& \partial_t\left[e^{t{\mathcal{A}}^\delta}{\mathcal{P}}(\delta)V_0 + (\mathrm{I}-{\mathcal{P}}(\delta))V_0\right]\\ =& {\mathcal{A}}^\delta [e^{t{\mathcal{A}}^\delta}{\mathcal{P}}(\delta)V_0] \\ =& {\mathcal{A}}^\delta \left[ e^{t{\mathcal{A}}^\delta}{\mathcal{P}}(\delta)V_0 + (\mathrm{I}-{\mathcal{P}}(\delta))V_0\right]\\ =& {\mathcal{A}}^\delta V.
\end{split}$$ Here, the third equality holds because $(\mathrm{I}-{\mathcal{P}}(\delta))V_0\in \mathop{\mathrm{Span}}\{\Theta(\delta)\}$, which is contained in the kernel of ${\mathcal{A}}^\delta$. Moreover, due to standard properties of $C_0$-semigroups, it follows that $$\lim_{t\rightarrow 0}V = \lim_{t\rightarrow 0} \left[e^{t{\mathcal{A}}^\delta}{\mathcal{P}}(\delta)V_0 + (\mathrm{I}-{\mathcal{P}}(\delta))V_0\right] ={\mathcal{P}}(\delta)V_0 + (\mathrm{I}-{\mathcal{P}}(\delta))V_0= V_0.$$ In order to establish nonlinear stability we rely on an application of the implicit function theorem in Hilbert spaces given by Lattanzio *et al.* [@LMPS16], based on a similar result for Banach spaces presented by Sattinger [@Sa76]. We present this result here to ease the reading. **Theorem 65**. *Let $X$ be a Hilbert space and $I\subset \mathbb{R}$ be an open neighborhood of $\delta=0$. Assume that $F:{\mathcal{D}}\subset X \rightarrow X$ and $\phi:I\subset \mathbb{R}\rightarrow {\mathcal{D}}$ satisfy $F(\phi(\delta)) = 0$ for all $\delta\in I$. If ${\mathcal{P}}(\delta)$ is the projector onto $\{\phi'(\delta)\}^\perp_X$ and there exist positive constants $C_0,\delta_0,M,\omega,$ and $\gamma$ such that* 1. *[\[H1\]]{#H1 label="H1"} for every solution $V = V(t,V_0,\delta)$ to [\[eq:FOlpert\]](#eq:FOlpert){reference-type="eqref" reference="eq:FOlpert"}, $$\label{eq:H1} \|{\mathcal{P}}(\delta) V(t,V_0,\delta)\|_X\leq C_0e^{-\omega t}\|{\mathcal{P}}(\delta) V_0\|_X,$$* 2. *[\[H2\]]{#H2 label="H2"} $\phi$ is differentiable at $\delta = 0$ with $$\label{eq:H2} \|\phi(\delta) -\phi(0)-\phi'(0)\delta\|_{X}\leq C_0|\delta|^{1+\gamma},$$ for $|\delta|<\delta_0$, and* 3.
*[\[H3\]]{#H3 label="H3"} $F$ is differentiable at $\phi(\delta)$ for every $\delta\in(-\delta_0,\delta_0)$ with $$\label{eq:H3} \|F(\phi(\delta)+W) -F(\phi(\delta))-DF(\phi(\delta))W\|_{X}\leq C_0\|W\|_{X}^{1+\gamma},$$ for $|\delta|<\delta_0$ and $\|W\|_{X}\leq M$.* *Then there exists $\epsilon>0$ such that for any $W_0\in B_\epsilon(\phi(0)) \subset X$ there exist $\delta\in I$ and a positive constant $C$ for which the solution $W(t;W_0)$ to the nonlinear system [\[eq:FOnlpert\]](#eq:FOnlpert){reference-type="eqref" reference="eq:FOnlpert"} satisfies $$\label{eq:stability} \|W(t,W_0)-\phi(\delta)\|_{X}\leq C\ \|W_0-\phi(0)\|_{X}\ e^{-\omega t}.$$* We proceed with the nonlinear stability result for the Néel wall's phase. ## Proof of Theorem [Theorem 3](#maintheorem){reference-type="ref" reference="maintheorem"} {#proof-of-theorem-maintheorem} We begin the proof by setting $X = H^1\times L^2$ and $$\phi(\delta) = (T_l(\delta)\overline{\theta},0),\qquad F(W) = \begin{pmatrix} \varphi\\-\nu \varphi - \nabla {\mathcal{E}}(\theta)\end{pmatrix},\qquad {\mathcal{D}}:= H^2\times H^1.$$ Due to Remark [Remark 64](#rk:4){reference-type="ref" reference="rk:4"}, we know that $F(\phi(\delta)) = 0$ for every $\delta\in\mathbb{R}$. Now, let $V_0\in {\mathcal{D}}$ be an initial condition such that $V(t,V_0,\delta)$ is a solution to the linear system [\[eq:FOlpert\]](#eq:FOlpert){reference-type="eqref" reference="eq:FOlpert"}. By setting ${\mathcal{P}}(\delta)$ as the projector in Theorem [Theorem 65](#LPstability){reference-type="ref" reference="LPstability"}, it follows that [\[eq:H1\]](#eq:H1){reference-type="eqref" reference="eq:H1"} is satisfied (see Theorem [Theorem 63](#lemmaeight){reference-type="ref" reference="lemmaeight"}). We turn our attention to the second hypothesis in Theorem [Theorem 65](#LPstability){reference-type="ref" reference="LPstability"}. We know that $\overline{\theta}\in H^2$ is a smooth real-valued function.
Hence $\phi(\delta)\in H^1\times L^2$ and $$\|\phi(\delta)-\phi(0)-\phi'(0)\delta\|_{H^1\times L^2} = \|T_l(\delta)\overline{\theta}-\overline{\theta}-\partial_x\overline{\theta}\delta\|_{H^1}.$$ This term is easily estimated with the integral representation of the remainder for Taylor polynomials, yielding $$\begin{split} |T_l(\delta)\overline{\theta}-\overline{\theta}-\partial_x\overline{\theta}\delta|^2 &= \delta^4\left|\int_0^1 (1-t)\, \partial^2_x\overline{\theta}(x+t\delta)\, dt\right|^2\\ &\leq \delta^4\int_0^1 (1-t)^2\,\left( \partial^2_x\overline{\theta}(x+t\delta)\right)^2\, dt, \end{split}$$ where the last inequality follows from Jensen's inequality. Now, integrating in $x$ leads us to $$\left \| T_l(\delta)\overline{\theta}-\overline{\theta}-\partial_x\overline{\theta}\delta \right \|_{L^2}^2 \leq \delta^4 \int_\mathbb{R}\int_0^1 (1-t)^2\,\left( \partial^2_x\overline{\theta}(x+t\delta)\right)^2\, dt\,dx.$$ Since the integrand is nonnegative, we can interchange the order of integration. Also, by noticing that $\partial_x$ is the generator of the left translation semigroup, we have $$\partial^2_x\overline{\theta}(x+t\delta) = \partial^2_xT_l(t\delta)\overline{\theta}= T_l(t\delta)\partial^2_x\overline{\theta}.$$ Therefore, $$\begin{split} \left \| T_l(\delta)\overline{\theta}-\overline{\theta}-\partial_x\overline{\theta}\delta \right \|_{L^2}^2 &\leq \delta^4 \int_0^1(1-t)^2\int_\mathbb{R}\,\left( \partial^2_x\overline{\theta}(x+t\delta)\right)^2\,dx \,dt\\ &= \delta^4 \int_0^1(1-t)^2\int_\mathbb{R}\,\left(T_l(t\delta) \partial^2_x\overline{\theta}\right)^2\,dx \,dt\\ &= \delta^4\left \| \partial^2_x\overline{\theta} \right \|_{L^2}^2 \int_0^1(1-t)^2\,dt, \end{split}$$ where the last equality follows because $T_l(t\delta)$ is an isometry in $L^2$. A similar argument applies to $\partial_x\overline{\theta}$.
Indeed, $$\begin{split} \left \| T_l(\delta)\partial_x\overline{\theta}-\partial_x\overline{\theta}-\partial^2_x\overline{\theta}\delta \right \|_{L^2}^2 &\leq \delta^4 \int_0^1 (1-t)^2\,\int_\mathbb{R}\left( \partial^3_x\overline{\theta}(x+t\delta)\right)^2\, dx\, dt\\ &= \delta^4 \int_0^1(1-t)^2\int_\mathbb{R}\,\left(T_l(t\delta) \partial^3_x\overline{\theta}\right)^2\,dx \,dt\\ &= \delta^4\left \| \partial^3_x\overline{\theta} \right \|_{L^2}^2 \int_0^1(1-t)^2\,dt. \end{split}$$ With the last two results, we conclude that $$\left \| \phi(\delta)-\phi(0)-\phi'(0)\delta \right \|_{X} \leq \frac{\left \| \partial_x^2 \overline{\theta} \right \|_{H^1}}{\sqrt{3}} \, \delta^2.$$ Finally, we prove that [\[eq:H3\]](#eq:H3){reference-type="eqref" reference="eq:H3"} holds. If $W=(w_1,w_2)^\top \in H^2\times H^1$, the expressions for $F$ and $\phi(\delta)$ imply that $$F(\phi(\delta)) = \begin{pmatrix} 0 \\ -\nabla {\mathcal{E}}\left(T_l(\delta)\overline{\theta}\right) \end{pmatrix},\quad F(\phi(\delta)+W) = \begin{pmatrix} w_2 \\ -\nu w_2-\nabla {\mathcal{E}}\left(w_1+T_l(\delta)\overline{\theta}\right) \end{pmatrix},$$ and $$DF(\phi(\delta))W = {\mathcal{A}}^\delta W = \begin{pmatrix} w_2 \\ -\nu w_2- {\mathcal{L}}^\delta w_1 \end{pmatrix}.$$ In order to simplify the notation, we denote $T_l(\delta)\overline{\theta}$ by $\overline{\theta}_\delta$. 
Then, a substitution on the left-hand side of [\[eq:H3\]](#eq:H3){reference-type="eqref" reference="eq:H3"} implies $$\left \| F(\phi(\delta)+W) -F(\phi(\delta))-DF(\phi(\delta))W \right \|_{X} =\left \| \nabla {\mathcal{E}}(\overline{\theta}_\delta +w_1)-\nabla{\mathcal{E}}(\overline{\theta}_\delta)-\mathop{\mathrm{\mathcal{L}}}^\delta w_1 \right \|_{L^2}.$$ From Proposition [Proposition 1](#propNeelw){reference-type="ref" reference="propNeelw"} [(b)](#propb) we have that $$\begin{aligned} \nabla {\mathcal{E}}(\overline{\theta}_\delta+w_1) &= -\partial_x^2\overline{\theta}_\delta -\partial_x^2w_1-\sin (\overline{\theta}_\delta+w_1)\, \left(1+(-\Delta)^{1/2}\right)\cos (\overline{\theta}_\delta+w_1),\\ -\nabla {\mathcal{E}}(\overline{\theta}_\delta) &= \partial_x^2\overline{\theta}_\delta +\sin \overline{\theta}_\delta\, \left(1+(-\Delta)^{1/2}\right)\cos \overline{\theta}_\delta,\\ -\mathop{\mathrm{\mathcal{L}}}^\delta w_1 &= \partial_x^2 w_1 - \sin \overline{\theta}_\delta(1+(-\Delta)^{1/2}) \sin \overline{\theta}_\delta w_1 +w_1\cos \overline{\theta}_\delta (1+(-\Delta)^{1/2}) \cos \overline{\theta}_\delta. \end{aligned}$$ By letting ${\mathcal{K}}:=\nabla {\mathcal{E}}(\overline{\theta}_\delta +w_1)-\nabla{\mathcal{E}}(\overline{\theta}_\delta)-\mathop{\mathrm{\mathcal{L}}}^\delta w_1$, we have $$\begin{aligned} {\mathcal{K}}&= -\sin (\overline{\theta}_\delta+w_1)\, \left(1+(-\Delta)^{1/2}\right)\cos (\overline{\theta}_\delta+w_1) +\sin \overline{\theta}_\delta\, \left(1+(-\Delta)^{1/2}\right)\cos \overline{\theta}_\delta \\ & \quad -\sin \overline{\theta}_\delta(1+(-\Delta)^{1/2}) \sin \overline{\theta}_\delta w_1 +w_1\cos \overline{\theta}_\delta (1+(-\Delta)^{1/2}) \cos \overline{\theta}_\delta. \end{aligned}$$ Next, we rearrange the last expression by adding and subtracting the term $\sin (\overline{\theta}_\delta+w_1)\, \left[1+(-\Delta)^{1/2}\right](\cos (\overline{\theta}_\delta) - w_1\sin (\overline{\theta}_\delta))$.
Hence ${\mathcal{K}}= A_1 + A_2 + A_3$, where $$\begin{aligned} A_1 &:= -\sin (\overline{\theta}_\delta+w_1)\, \left[1+(-\Delta)^{1/2}\right]\left(\cos (\overline{\theta}_\delta+w_1) -\cos \overline{\theta}_\delta + w_1\sin \overline{\theta}_\delta\right),\\ A_2 &:= (\sin (\overline{\theta}_\delta+w_1)-\sin \overline{\theta}_\delta)[1+(-\Delta)^{1/2}](w_1 \sin \overline{\theta}_\delta),\\ A_3 &:= -(\sin (\overline{\theta}_\delta+w_1)-\sin \overline{\theta}_\delta-w_1\cos\overline{\theta}_\delta)[1+(-\Delta)^{1/2}]\cos \overline{\theta}_\delta. \end{aligned}$$ By the fundamental theorem of calculus, $$\begin{aligned} \sin (\overline{\theta}_\delta+w_1)-\sin \overline{\theta}_\delta &= w_1\int_0^1 \cos(\overline{\theta}_\delta +\xi w_1)\,d\xi,\\ \cos (\overline{\theta}_\delta+w_1)-\cos \overline{\theta}_\delta &= -w_1\int_0^1 \sin(\overline{\theta}_\delta +\xi w_1)\,d\xi. \end{aligned}$$ Applying the same procedure once more, we obtain $$\begin{split} \sin (\overline{\theta}_\delta+w_1)-\sin \overline{\theta}_\delta-w_1\cos\overline{\theta}_\delta =& \ w_1\int_0^1\left[\cos( \overline{\theta}_\delta+\xi w_1)-\cos\overline{\theta}_\delta\right] \, d\xi\\ =& -w_1^2\int_0^1\int_0^1 \xi \sin(\overline{\theta}_\delta + \xi\eta w_1)\, d\eta\, d\xi, \end{split}$$ and $$\begin{split} \cos (\overline{\theta}_\delta+w_1)-\cos \overline{\theta}_\delta+w_1\sin\overline{\theta}_\delta =& -w_1\int_0^1\left[\sin( \overline{\theta}_\delta+\xi w_1)-\sin\overline{\theta}_\delta\right] \, d\xi\\ =& -w_1^2\int_0^1\int_0^1 \xi \cos(\overline{\theta}_\delta + \xi\eta w_1)\, d\eta\, d\xi. \end{split}$$ Therefore, we have that $$\label{eq:trig_identities} \begin{aligned} |\sin (\overline{\theta}_\delta+w_1)-\sin \overline{\theta}_\delta| &\leq |w_1|, \\ |\sin (\overline{\theta}_\delta+w_1)-\sin \overline{\theta}_\delta-w_1\cos\overline{\theta}_\delta| &\leq \tfrac{1}{2}|w_1|^2, \\ |\cos (\overline{\theta}_\delta+w_1)-\cos \overline{\theta}_\delta+w_1\sin\overline{\theta}_\delta| &\leq \tfrac{1}{2}|w_1|^2. 
\end{aligned}$$ Notice that $w_1\in L^\infty$, since $w_1\in H^2(\mathbb{R})$ and by the Sobolev embedding theorem. This fact and Hölder's inequality imply that $w_1^2\in L^2$, because $\left \| w_1^2 \right \|_{L^2}^2\leq \left \| w_1 \right \|_{L^2}^2\left \| w_1 \right \|_{L^\infty}^2$. Moreover, $w_1^2\in H^1$ with $\left \| w_1^2 \right \|_{H^1}\leq 2\left \| w_1 \right \|_{H^1}^2$, since $$\begin{split} \left \| w_1^2 \right \|_{H^1}^2 = & \left \| w_1^2 \right \|_{L^2}^2 + \left \| 2w_1\partial_x w_1 \right \|_{L^2}^2\\ \leq & \left \| w_1 \right \|_{L^\infty}^2\left(\left \| w_1 \right \|_{L^2}^2 + 4\left \| \partial_x w_1 \right \|_{L^2}^2\right)\\ \leq & 4\left \| w_1 \right \|_{L^\infty}^2\left \| w_1 \right \|_{H^1}^2\\ \leq & 4\left \| w_1 \right \|_{H^1}^4. \end{split}$$ This property allows us to estimate the $L^2$-norm of $A_1$ easily, since $$\begin{array}{rl} \left \| A_1 \right \|_{L^2}\leq & C\left \| \cos (\overline{\theta}_\delta+w_1) -\cos \overline{\theta}_\delta + w_1\sin \overline{\theta}_\delta \right \|_{H^1}\\ \leq & \, C\left \| \frac{w_1^2}{2} \right \|_{H^1}\\ \leq & \, C\left \| w_1 \right \|_{H^1}^2, \end{array}$$ where the first inequality follows from the bound $\left \| [1+(-\Delta)^{1/2}]u \right \|_{L^2} \leq C \left \| u \right \|_{H^1}$, valid for every $u\in H^1$. 
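The pointwise bounds in [\[eq:trig_identities\]](#eq:trig_identities){reference-type="eqref" reference="eq:trig_identities"} hold for arbitrary real arguments; as a quick sanity check (not part of the proof), the following small Python sketch verifies them on a grid of values:

```python
import math

def check_pointwise_bounds(theta, w):
    """The three Taylor bounds of eq:trig_identities, tested at a single point."""
    tol = 1e-12  # guard against floating-point round-off
    assert abs(math.sin(theta + w) - math.sin(theta)) <= abs(w) + tol
    assert abs(math.sin(theta + w) - math.sin(theta) - w * math.cos(theta)) <= 0.5 * w * w + tol
    assert abs(math.cos(theta + w) - math.cos(theta) + w * math.sin(theta)) <= 0.5 * w * w + tol

for i in range(-40, 41):
    for j in range(-40, 41):
        check_pointwise_bounds(0.13 * i, 0.17 * j)
```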
The $L^2$-norms of the terms $A_2$ and $A_3$ can likewise be bounded using [\[eq:trig_identities\]](#eq:trig_identities){reference-type="eqref" reference="eq:trig_identities"}: $$\begin{array}{rl} \left \| A_2 \right \|_{L^2}^2 \leq & \left \| |w_1|[1+(-\Delta)^{1/2}](w_1 \sin \overline{\theta}_\delta) \right \|_{L^2}^2\\ \leq & \left \| w_1 \right \|_{L^\infty}^2\left \| [1+(-\Delta)^{1/2}](w_1 \sin \overline{\theta}_\delta) \right \|_{L^2}^2\\ \leq & C^2\left \| w_1 \right \|_{L^\infty}^2\left \| w_1 \sin \overline{\theta}_\delta \right \|_{H^1}^2, \end{array}$$ $$\begin{array}{rl} \left \| A_3 \right \|_{L^2}^2 \leq & \left \| \frac{|w_1|^2}{2}[1+(-\Delta)^{1/2}]\cos \overline{\theta}_\delta \right \|_{L^2}^2\\ \leq & \frac{1}{4}\left \| w_1 \right \|_{L^\infty}^4\left \| [1+(-\Delta)^{1/2}]\cos \overline{\theta}_\delta \right \|_{L^2}^2\\ \leq & \frac{C^2}{4}\left \| w_1 \right \|_{L^\infty}^4\left \| \cos \overline{\theta}_\delta \right \|_{H^1}^2. \end{array}$$ By the Sobolev inequality $\left \| w_1 \right \|_{L^\infty}^2\leq 2\left \| w_1 \right \|_{L^2}\left \| \partial_x w_1 \right \|_{L^2}$, we have that $\left \| w_1 \right \|_{L^\infty}\leq \left \| w_1 \right \|_{H^1}$. 
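This interpolation inequality is also easy to test numerically. The sketch below (an illustration only; it uses the Gaussian profile $w(x)=e^{-x^2}$ and midpoint Riemann sums on $[-8,8]$, where the tails are negligible) checks both $\left\|w\right\|_{L^\infty}^2\leq 2\left\|w\right\|_{L^2}\left\|\partial_x w\right\|_{L^2}$ and the resulting bound $\left\|w\right\|_{L^\infty}\leq\left\|w\right\|_{H^1}$:

```python
import math

def l2(f, a=-8.0, b=8.0, n=40000):
    """L^2(a, b) norm via a midpoint Riemann sum."""
    h = (b - a) / n
    return math.sqrt(h * sum(f(a + (i + 0.5) * h) ** 2 for i in range(n)))

w = lambda x: math.exp(-x * x)             # test profile, w in H^1(R)
dw = lambda x: -2.0 * x * math.exp(-x * x)

sup_w = max(w(-8.0 + 16.0 * i / 40000) for i in range(40001))  # ~ ||w||_{L^infty}
assert sup_w ** 2 <= 2.0 * l2(w) * l2(dw)                      # interpolation inequality
assert sup_w <= math.sqrt(l2(w) ** 2 + l2(dw) ** 2)            # hence ||w||_inf <= ||w||_H1
```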
Also, we notice that $$\left \| w_1\sin \overline{\theta}_\delta \right \|_{H^1}^2 \leq \left \| w_1 \right \|_{L^2}^2 + \left \| \partial_x w_1 \right \|_{L^2}^2 + \left \| w_1\partial_x\overline{\theta}_\delta \right \|_{L^2}^2 \leq \left( 1+\left \| \partial_x\overline{\theta}_\delta \right \|_{L^\infty}^2\right)\left \| w_1 \right \|_{H^1}^2.$$ Thus, we obtain $$\left \| A_2 \right \|_{L^2}\leq C\sqrt{1+\left \| \partial_x\overline{\theta}_\delta \right \|_{L^\infty}^2}\left \| w_1 \right \|_{H^1}^2, \quad \mbox{and}\quad \left \| A_3 \right \|_{L^2}\leq \frac{C\left \| \cos \overline{\theta}_\delta \right \|_{H^1}}{2}\left \| w_1 \right \|_{H^1}^2.$$ Combining these three estimates, we conclude that $$\left \| {\mathcal{K}} \right \|_{L^2}\leq \left \| A_1 \right \|_{L^2} + \left \| A_2 \right \|_{L^2} +\left \| A_3 \right \|_{L^2} \leq \tilde{C}\left \| w_1 \right \|_{H^1}^2 \leq \tilde{C}\left \| W \right \|_{X}^2$$ and (H3) in Theorem [Theorem 65](#LPstability){reference-type="ref" reference="LPstability"} is verified. The proof is complete. ◻ # Acknowledgements {#acknowledgements .unnumbered} A. Capella and R. G. Plaza thank Professors Yuri Latushkin and Jaime Angulo Pava for enlightening conversations and useful suggestions during a workshop at the Casa Matemática Oaxaca (BIRS-CMO). The work of A. Capella and R. G. Plaza was partially supported by CONAHCyT, México, grant CF-2023-G-122. The work of L. Morales was supported by CONAHCyT, México, through the Program "Estancias Postdoctorales por México 2022". [^1]: Notice that ${\mathcal{L}}$ and its spectral bound $\Lambda_0$ do not depend on $\nu$.
--- abstract: | Let $n$ be a positive integer and $G$ be a transitive permutation subgroup of $S_n$. Given a number field $K$ with $[K:\mathbb{Q}]=n$, we let $\widetilde{K}$ be its Galois closure over $\mathbb{Q}$ and refer to $\operatorname{Gal}(\widetilde{K}/\mathbb{Q})$ as its Galois group. We may identify this Galois group with a transitive subgroup of $S_n$. Given a real number $X>0$, we set $N_{n}(X;G)$ to be the number of such number fields $K$ for which the absolute discriminant is bounded above by $X$, and for which $\operatorname{Gal}(\widetilde{K}/\mathbb{Q})$ is isomorphic to $G$ as a permutation subgroup of $S_n$. We prove an asymptotic upper bound for $N_n(X;G)$ as $X\rightarrow\infty$. This result is conditional and based upon the non-vanishing of certain polynomial determinants in $n$ variables. We expect that these determinants are non-vanishing for many groups, and demonstrate through some examples how they may be computed. address: - Chennai Mathematical Institute, H1, SIPCOT IT Park, Kelambakkam, Siruseri, Tamil Nadu 603103, India - Chennai Mathematical Institute, H1, SIPCOT IT Park, Kelambakkam, Siruseri, Tamil Nadu 603103, India author: - Hrishabh Mishra - Anwesh Ray bibliography: - references.bib title: Upper bounds for the number of number fields with prescribed Galois group --- # Introduction ## Background and historical remarks Let $n$ be a positive integer and $X$ be a positive real number. For a number field $K$, we set $\Delta_K$ to denote the discriminant of $K$ over $\mathbb{Q}$. Set $N_n(X)$ to be the number of number fields $K$ with $[K:\mathbb{Q}]=n$ and with $|\Delta_K|\leq X$. The Hermite--Minkowski theorem implies that $N_n(X)$ is finite. It is expected that $N_n(X)\sim c_n X$, where $c_n$ is a constant that depends on $n$ (cf. [@ellenberg2006number p.723]). This conjecture has only been established for $n\leq 5$ (cf. [@davenport1971density; @bhargava2005density; @bhargava2008density; @bhargava2010density]). 
However, not much is known for $n\geq 6$. From a different perspective, there has been significant interest in obtaining asymptotic upper bounds for $N_n(X)$ (as $X\rightarrow \infty$). - Schmidt [@schmidt1995number] showed that $N_n(X)\ll X^{\frac{n+2}{4}}$. - Ellenberg and Venkatesh [@ellenberg2006number] obtained an exponent that is $O\left(\operatorname{exp}\left(c\sqrt{\log n }\right)\right)$, where $c>0$ is a constant. - Couveignes [@couveignes2020enumerating] shows that $N_n(X)\ll X^{c(\log n)^3}$, for an undetermined constant $c>0$. - Finally, Lemke-Oliver and Thorne [@lemke2022upper] show that $N_n(X)\ll X^{c(\log n)^2}$. In this case, the constant $c$ can be taken to be $1.564$. It is natural to consider asymptotics in the setting in which the Galois group of the Galois closure of the number field in question is prescribed. Given a number field $K$ with $[K:\mathbb{Q}]=n$, set $\widetilde{K}$ to denote the Galois closure of $K$. We shall by abuse of notation simply refer to $\operatorname{Gal}(\widetilde{K}/\mathbb{Q})$ as the *Galois group of $K$*. Let $\sigma_1, \dots, \sigma_n$ be an enumeration of the embeddings of $K$ into $\bar{\mathbb{Q}}$, with the convention that $\sigma_1=\operatorname{Id}$. Let $[n]$ be the set of integers in the range $[1,n]$ and let $S_n$ be the set of bijections $[n]\rightarrow [n]$. We shall identify $\sigma_i$ with $i\in [n]$ and the set of permutations of $\sigma_1, \dots, \sigma_n$ with $S_n$. Given $\sigma\in \operatorname{Gal}(\widetilde{K}/\mathbb{Q})$ and $i\in [1, n]$, the composite $\sigma \sigma_i$ is well defined and is also an embedding of $K$ into $\bar{\mathbb{Q}}$. In this way, the Galois group $\operatorname{Gal}(\widetilde{K}/\mathbb{Q})$ permutes the embeddings $\sigma_1, \dots, \sigma_n$. 
This gives rise to a permutation representation $$\Phi: \operatorname{Gal}(\widetilde{K}/\mathbb{Q})\hookrightarrow S_n,$$ with $$\Phi(\sigma)(\sigma_i):=\sigma \sigma_i.$$ It is via this representation that one may view $\operatorname{Gal}(\widetilde{K}/\mathbb{Q})$ as a transitive subgroup of $S_n$. We fix a transitive subgroup $G$ of $S_n$. Let $X$ be a positive real number, and set $$N_{n} (X;G):=\#\{K\mid [K:\mathbb{Q}]=n, |\Delta_K|\leq X, \operatorname{Gal}(\widetilde{K}/\mathbb{Q})\simeq G\},$$ where the isomorphism $\operatorname{Gal}(\widetilde{K}/\mathbb{Q})\simeq G$ is that of permutation subgroups of $S_n$. We note that the quantity $N_n(X;G)$ depends not only on the group $G$ but also on the embedding of $G$ into $S_n$. However, this is suppressed in our notation. For $g\in S_n$, let $\operatorname{ind}(g):=n-\#\{\text{orbits of } g \text{ on } [n]\}$ denote its index. Given a conjugacy class $C$ of $G$, let $\operatorname{ind}(C)$ denote $\operatorname{ind}(g)$, where $g\in C$. For any group $G\neq 1$, set $G^\#:=G\backslash\{1\}$, and set $$a(G):=\left(\operatorname{min}\{\operatorname{ind}(g)\mid g\in G^\#\}\right)^{-1}.$$ Malle made predictions about the asymptotic growth of $N_n(X;G)$ as a function of $X$. We state the *weak version* of his conjecture below, cf. [@malle2002distribution p.316] for further details. **Conjecture 1** (Malle's conjecture -- weak form). *Let $G\subseteq S_n$ be a transitive permutation group. Then, for all $\epsilon>0$, there exist constants $c_1(G), c_2(G; \epsilon)>0$ such that $$c_1(G)X^{a(G)}\leq N_{n}(X;G)<c_2(G;\epsilon)X^{a(G)+\epsilon},$$ for all large enough values of $X$.* We note that when $n=|G|$ and $G\subset S_n$ via the regular representation, $N_n(X;G)$ simply counts the number of Galois extensions $K/\mathbb{Q}$ with $|\Delta_K|\leq X$ and with $\operatorname{Gal}(K/\mathbb{Q})\simeq G$ (as a permutation subgroup of $S_n$). Consider the special case when $G$ is a finite group with $n:=|G|>4$, and $G\hookrightarrow S_{n}$ the regular representation. 
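The invariant $a(G)$ appearing in Malle's conjecture is straightforward to compute in examples. The following Python sketch (an illustration, assuming the standard formula $\operatorname{ind}(g)=n-\#\{\text{orbits of } g \text{ on } [n]\}$) computes it for the natural action of $S_3$ and for the regular representation of the cyclic group $C_3$:

```python
from fractions import Fraction
from itertools import permutations

def orbits(perm):
    """Number of orbits (cycles, including fixed points) of perm: i -> perm[i]."""
    seen, count = set(), 0
    for i in range(len(perm)):
        if i not in seen:
            count += 1
            while i not in seen:
                seen.add(i)
                i = perm[i]
    return count

def ind(perm):
    return len(perm) - orbits(perm)

def a_invariant(group):
    """a(G) = 1 / min{ind(g) : g in G, g != 1}, for G given as tuples acting on [n]."""
    return Fraction(1, min(ind(g) for g in group if ind(g) > 0))

s3 = list(permutations(range(3)))         # natural degree-3 action of S_3
c3 = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]    # regular representation of C_3

assert a_invariant(s3) == 1               # transpositions have index 1
assert a_invariant(c3) == Fraction(1, 2)  # nontrivial elements are 3-cycles, index 2
```

For $S_3$ acting on three letters the minimum index is attained at a transposition, giving $a(S_3)=1$, while in the regular representation of $C_3$ every nontrivial element is a $3$-cycle, giving $a(C_3)=1/2$.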
In this case, Ellenberg and Venkatesh [@ellenberg2006number Proposition 2.8] showed that for any $\epsilon>0$, one has the asymptotic upper bound $N_n(X;G)\ll X^{3/8+\epsilon}$. Here, the implied constant is allowed to depend on $n$ and $\epsilon>0$. However, when the permutation representation $G\hookrightarrow S_n$ is *not* the regular representation, not much is known in general. For the alternating subgroup $A_n\hookrightarrow S_n$, asymptotic upper bounds due to Larson and Rolen [@larson2013upper] for $N_n(X;A_n)$ improve upon Schmidt's upper bounds by a factor of $X^{1/4}$. ## Main results In order to state our main results precisely, we introduce some further notation. Let $G$ be a transitive subgroup of $S_n$. Our goal is to establish asymptotic upper bounds for $N_{n} (X;G)$, provided some additional conditions are satisfied. We assume without loss of generality that there exists a number field $K$ with $[K:\mathbb{Q}]=n$, such that $\operatorname{Gal}(\widetilde{K}/\mathbb{Q})\simeq G$. The subgroup of $S_n$ that fixes the embedding $\sigma_1$ is denoted by $\operatorname{Stab}(\sigma_1)$, and is identified with $S_{n-1}$. With this convention in place, we find that $$\operatorname{Gal}(\widetilde{K}/K)=\operatorname{Gal}(\widetilde{K}/\mathbb{Q})\cap S_{n-1}.$$ Let $H$ be the intersection $G\cap S_{n-1}$. Since $H$ is identified with $\operatorname{Gal}(\widetilde{K}/K)$, we find that $$\widetilde{K}^H=K,\text{ and }[G:H]=[K:\mathbb{Q}]=n.$$ Let $N$ denote the normalizer of $H$ in $G$ and let $K_0:=\widetilde{K}^N$ be the fixed field of $N$. In particular, $K/K_0$ is a Galois extension. Setting $r:=|N/H|=[K:K_0]$, it shall be assumed without loss of generality that $$\operatorname{Gal}(K/K_0)=\{\sigma_1, \dots, \sigma_r\}.$$ Thus for $j\in [1, r]$, the image of $\sigma_j$ is contained in $K$. **Definition 2**. 
*For $i\in [1, n]$ and $j\in [1,r]$, we consider the composite embedding $$K\xrightarrow{\sigma_j} K\xrightarrow{\sigma_i}\widetilde{K}.$$ Consequently, there is a permutation $\pi_j\in S_n$ such that $\sigma_i \sigma_j=\sigma_{\pi_j(i)}$.* Since $\sigma_1$ is the identity on $K$, it follows that $\pi_1$ is the identity in $S_n$. Also note that $\pi_j(i)\neq i$ unless $j=1$. One may describe the permutations $\pi_j$ intrinsically. Write $$G/H=\bigcup_{i=1}^n g_i H\text{ and } N/H=\bigcup_{j=1}^r g_j H.$$ Note that for $j\in [1, r]$, the element $g_j$ normalizes $H$, and hence $g_j H =Hg_j$. Thus we find that for $i\in [1,n]$ and $j\in [1,r]$, the following relationship holds $$g_i H g_j H=g_ig_j H.$$ There is a unique index $k\in [1, n]$ such that $$g_i g_j H=g_k H,$$ and we set $\pi_j(i):=k$. For a number field $K$ as above, the embeddings $\sigma_i$ correspond to the cosets $g_i H$. **Definition 3**. *Given a tuple $\mathbf{a}=(a_1, \dots, a_r)\in \mathbb{Z}_{\geq 0}^r$, we set $$\operatorname{Tr}_{\mathbf{a}}(x_1, \dots, x_n):=\sum_{i=1}^n \left(\prod_{j=1}^r x_{\pi_j(i)}^{a_j}\right)\in \mathbb{Z}[x_1, \dots, x_n].$$ We shall refer to such functions as trace functions.* Assume that $r\geq 3$. Fix integers $k$ and $t$ such that $k\in [2, r)$ and $t\geq 2$. Let $\mathcal{S}_k(r)$ be the set of all $k$-element subsets of $[r]$ that contain $1$. Given $\mathcal{B}\in \mathcal{S}_k(r)$, let $\mathbf{a}(\mathcal{B})=(a_1, \dots, a_r)$ be the vector which is defined so that $$a_i:=\begin{cases} t\text{ if }i=1;\\ 1\text{ if }i\neq 1\text{ and }i \in \mathcal{B};\\ 0\text{ otherwise. } \end{cases}$$ Set $l:=\# \mathcal{S}_k(r)=\binom{r-1}{k-1}$ and let $\mathcal{B}_1, \dots, \mathcal{B}_l$ be an enumeration of the subsets in $\mathcal{S}_k(r)$. For $i\in [1, l]$, we set $\mathbf{a}_i:=\mathbf{a}(\mathcal{B}_i)$. We state our main results below. **Theorem 1**. *Let $G$ be a transitive permutation group contained in $S_n$ and set $H:=G\cap S_{n-1}$. 
We let $N$ be the normalizer of $H$ and set $r:=|N/H|$. For $j\in [1, r]$, we let $\pi_j\in S_n$ be the associated permutation as prescribed by Definition [Definition 2](#def of pi){reference-type="ref" reference="def of pi"}. We make the following assumptions.* 1. *[\[condition 1\]]{#condition 1 label="condition 1"} There exists $k\in [2,r)$ such that $l:=\binom{r-1}{k-1}\geq n$.* 2. *[\[condition 2\]]{#condition 2 label="condition 2"} Let $\mathbf{a}_1, \dots, \mathbf{a}_l$ be the integral vectors as in Definition [Definition 6](#def of ai){reference-type="ref" reference="def of ai"}. Then, assume that for a subset $\{\mathbf{a}_{i_1}, \dots, \mathbf{a}_{i_n}\}$ of $\{\mathbf{a}_1, \dots, \mathbf{a}_l\}$, the Jacobian matrix $$\mathbb{D}=\mathbb{D}(f_1, \dots, f_n):=\left(\frac{\partial f_i}{\partial x_j}\right)_{1\leq i, j\leq n}$$ of the trace functions $f_j:=\operatorname{Tr}_{\mathbf{a}_{i_j}}$ has determinant that is not identically $0$.* *Then, we have that $N_{n}(X;G)\ll X^{k+t-1}$.* Let us further specialize Theorem [Theorem 1](#main thm){reference-type="ref" reference="main thm"} to a family of groups $G$ for which condition [\[condition 1\]](#condition 1){reference-type="eqref" reference="condition 1"} is satisfied. We let $m\in \mathbb{Z}_{\geq 1}$ be an integer and $A$ be a transitive permutation subgroup of $S_m$. The convention here is that if $m=1$, then $A$ is trivial. Let $B$ be any finite group and set $q$ to denote its cardinality. Via the regular representation, $B$ is a subgroup of $S_q$. We let $\iota_A:A\hookrightarrow S_m$ and $\iota_B: B\hookrightarrow S_q$ denote the embeddings of $A$ and $B$ into $S_m$ and $S_q$ respectively. Then, $\iota:=\iota_A\times \iota_B$ realizes $G:=A\times B$ as a subgroup of $S_{mq}$. Let $H_1\subseteq S_{m-1}$ be the intersection $A\cap S_{m-1}$, and set $H:=G\cap S_{mq-1}$. It is easy to see that $H=H_1\times 1$. 
Since $\iota_B$ is the regular representation, $N:=N_G(H)\supseteq N_1\times B$, where $N_1=N_A(H_1)$. Therefore, $r=|N/H|\geq |B|$, and thus the condition [\[condition 1\]](#condition 1){reference-type="eqref" reference="condition 1"} of Theorem [Theorem 1](#main thm){reference-type="ref" reference="main thm"} is satisfied if $$\binom{|B|-1}{k-1}\geq m|B|.$$ **Corollary 1**. *Let $G=A\times B$ as above, and $k$ be an integer in the range $[3, |B|]$. Assume that the following conditions are satisfied:* 1. *$\binom{|B|-1}{k-1}\geq m|B|$,* 2. *For a subset $\{\mathbf{a}_{i_1}, \dots, \mathbf{a}_{i_n}\}$ of $\{\mathbf{a}_1, \dots, \mathbf{a}_l\}$, $$\det\mathbb{D}(f_1, \dots, f_n)\neq 0,$$ where $f_j:=\operatorname{Tr}_{\mathbf{a}_{i_j}}$.* *Then, we have that $N_{n}(X;G)\ll X^{k+t-1}$.* The next result is a special case of Corollary [Corollary 1](#main thm 2){reference-type="ref" reference="main thm 2"}. **Corollary 2**. *Let $r\in \mathbb{Z}_{\geq 2}$, $n_1, \dots, n_r$ be integers with $n_i\geq 2$ for all $i$. Set $G:=S_{n_1}\times S_{n_2}\times \dots \times S_{n_r}$ and consider the natural embedding $$G\hookrightarrow S_{n_1}\times S_{n_2}\times \dots \times S_{n_{r-1}}\times S_{(n_r!)}\hookrightarrow S_{n_1n_2\dots n_{r-1}(n_r!)},$$ where the last factor $S_{n_r}$ operates via the regular representation. Assume that for $t=2$,* 1. *$l=\binom{n_r!-1}{2}\geq \left(\prod_{i=1}^{r-1} n_i\right) n_r!$,* 2. *$\det \mathbb{D}\neq 0$ for some subset of vectors of $\{\mathbf{a}_i\mid i\in [1, l]\}$.* *Then, we have that $N_{n}(X;G)\ll X^{4}$.* Fix $(n_1, \dots, n_{r-1})$; then, for large enough values of $n_r$, the inequality $\binom{n_r!-1}{2}\geq \left(\prod_{i=1}^{r-1} n_i\right) n_r!$ is satisfied. In that case, the bound $N_n(X;G)\ll X^4$ is significantly better than what one is able to derive from the aforementioned asymptotic upper bounds for $N_n(X)$. We acknowledge that the result is only conditional since it assumes the smoothness condition $\det \mathbb{D}\neq 0$. 
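The purely combinatorial hypothesis of Corollary 2 is elementary to test in practice. A short Python sketch (illustrative only; the function name is ours):

```python
from math import comb, factorial, prod

def corollary2_condition(ns):
    """binom(n_r! - 1, 2) >= (n_1 * ... * n_{r-1}) * n_r!  for ns = (n_1, ..., n_r)."""
    q = factorial(ns[-1])
    return comb(q - 1, 2) >= prod(ns[:-1]) * q

assert not corollary2_condition((2, 3))   # 10 < 12
assert corollary2_condition((2, 4))       # 253 >= 48
assert corollary2_condition((2, 3, 4))    # 253 >= 144
assert all(corollary2_condition((5, 5, m)) for m in range(5, 10))
```

For instance, with $(n_1, n_2)=(2,3)$ the condition fails at $n_r=3$ but holds already at $n_r=4$, consistent with the inequality being satisfied for all large enough $n_r$.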
We expect the smoothness condition $\det \mathbb{D}\neq 0$ to hold for most groups in this family. Furthermore, this condition is indeed very concrete, as our examples show. In section [3](#s 3){reference-type="ref" reference="s 3"}, we illustrate Corollary [Corollary 1](#main thm 2){reference-type="ref" reference="main thm 2"} through two examples. - First, we consider $G=S_3\hookrightarrow S_6$ via the regular representation. - Second, we take $G=S_3\times D_8$ and consider the embedding that is the following composite $$G\hookrightarrow S_3\times S_8\hookrightarrow S_{24},$$ with $m=3$ and $q=8$. The second factor of this embedding is via the regular representation of $D_8$. Then, we find that $B=D_8$ has $8$ elements. Thus, taking $k=4$, we find that $\binom{7}{3}=35> 24$. These examples only serve to illustrate our results, which are far more general. We do not claim that they cannot be derived from known results. It may be possible to work out more elaborate examples illustrating Corollaries [Corollary 1](#main thm 2){reference-type="ref" reference="main thm 2"} and [Corollary 2](#cor B){reference-type="ref" reference="cor B"}, though this proves to be cumbersome. The Jacobian matrix for the second example alone is a $24\times 24$ polynomial matrix in $24$ variables. ## Outlook Our methods are motivated by the strategy taken in the above-mentioned works of Ellenberg-Venkatesh [@ellenberg2006number], Couveignes [@couveignes2020enumerating], and Lemke-Oliver and Thorne [@lemke2022upper]. The methods introduced in this manuscript could potentially motivate future developments in the area of number field counting. ## Acknowledgment When the project was started, the second named author's research was supported by the CRM-Simons postdoctoral fellowship. # A conditional upper bound In this section, we establish a conditional asymptotic upper bound for $N_n(X;G)$. 
This result is based on a numerical criterion that involves the non-vanishing of the determinant of a Jacobian matrix. In the next section, this criterion is demonstrated through examples. ## A general criterion Let $G$ be a transitive subgroup of $S_n$. We assume without loss of generality that there exists a number field $K$ with $[K:\mathbb{Q}]=n$, such that $\operatorname{Gal}(\widetilde{K}/\mathbb{Q})\simeq G$. Let $\alpha\in \mathcal{O}_K$ be a primitive element, i.e., $K=\mathbb{Q}(\alpha)$. For $i\in [1, n]$, set $\alpha_i:=\sigma_i(\alpha)$. Recall that for $j\in [1, r]$, the image of $\sigma_j$ is contained in $K$. Observe therefore that $\alpha_i\in \mathcal{O}_K$ for all $i\in [1, r]$. For $\mathbf{a}=(a_1, \dots, a_r)\in \mathbb{Z}_{\geq 0}^r$, we recall from Definition [Definition 3](#def of tr){reference-type="ref" reference="def of tr"} that $$\operatorname{Tr}_{\mathbf{a}}(x_1, \dots, x_n):=\sum_{i=1}^n \left(\prod_{j=1}^r x_{\pi_j(i)}^{a_j}\right).$$ We note that $$\operatorname{Tr}_{\mathbf{a}}(\alpha_1, \dots, \alpha_n)=\sum_{i=1}^n \left(\prod_{j=1}^r \sigma_i \sigma_j(\alpha)^{a_j}\right)=\operatorname{Tr}_{K/\mathbb{Q}}\left(\prod_{j=1}^r \sigma_j(\alpha)^{a_j}\right)\in \mathbb{Z}.$$ Set $\lVert \alpha\rVert$ to denote the maximum of $|\sigma(\alpha)|$ as $\sigma$ ranges over all embeddings of $K$ into $\bar{\mathbb{Q}}$. Letting $x_\alpha:=(\alpha_1, \dots, \alpha_n)$, we find that $$|\operatorname{Tr}_{\mathbf{a}}(x_\alpha)|\ll \lVert \alpha\rVert^{H(\mathbf{a})},$$ where $H(\mathbf{a}):= \sum_{i \in [1,r]}a_i$. **Lemma 4**. *For $N\geq 1$, consider polynomial functions $$f_1, \dots, f_N:\mathbf{A}^N(\mathbb{C})\rightarrow \mathbf{A}^1(\mathbb{C}).$$ Assume that the determinant of $\left(\frac{\partial f_i}{\partial x_j}\right)_{1\leq i,j\leq N}$ is not identically zero. 
Then, there exists a non-zero polynomial $P(x_1, \dots, x_N)$ such that whenever $P(\mathbf{x}_0)\neq 0$, the variety $$V_{\mathbf{x}_0}:=\left\{\mathbf{x}\in \mathbf{A}^N(\mathbb{C})\mid f_i(\mathbf{x})=f_i(\mathbf{x}_0)\text{ for all } i\in [1, N]\right\}$$ consists of at most $\prod_i \deg f_i$ points.* *Proof.* The above result is [@lemke2022upper Lemma 2.1]. ◻ **Proposition 5**. *Let $G$ be a transitive permutation group contained in $S_n$ and set $H:=G\cap S_{n-1}$. We set $N$ to be the normalizer of $H$, $r:=|N/H|$ and for $j\in [1, r]$, let $\pi_j\in S_n$ be the associated permutation (cf. Definition [Definition 2](#def of pi){reference-type="ref" reference="def of pi"}). Let $\mathbf{a}_1, \dots, \mathbf{a}_n \in \mathbb{Z}_{\geq 0}^r$ be a set of vectors and set $f_i:=\operatorname{Tr}_{\mathbf{a}_i}$ for $i\in [1, n]$. Assume that the determinant of the Jacobian-matrix $$\mathbb{D}=\mathbb{D}(f_1, \dots, f_n):=\left(\frac{\partial f_i}{\partial x_j}\right)_{1\leq i, j\leq n}$$ is not identically $0$. Then we have the asymptotic bound $$N_{n}(X;G)\ll X^{\frac{1}{n}\left(\sum_{i=1}^n H(\mathbf{a}_i)\right)},$$ where the implied constant depends only on the vectors $\{\mathbf{a}_i\mid i\in [1, n]\}$.* *Proof.* For $i\in [1, n]$, set $f_i:=\operatorname{Tr}_{\mathbf{a}_i}$ and $z:=\prod_i \operatorname{deg}f_i$. Since the determinant of $\mathbb{D}$ is not identically zero, it follows from Lemma [Lemma 4](#lemma 2.2){reference-type="ref" reference="lemma 2.2"} that there exists a non-zero polynomial $P(x_1, \dots, x_n)$ such that whenever $P(\mathbf{x}_0)\neq 0$, the variety $$V_{\mathbf{x}_0}:=\left\{\mathbf{x}\in \mathbf{A}^n(\mathbb{C})\mid f_i(\mathbf{x})=f_i(\mathbf{x}_0)\text{ for all } i\in [1, n]\right\}$$ consists of at most $z$ points. Let $K$ be a number field with $[K:\mathbb{Q}]=n$ and $|\Delta_K|\leq X$ and assume that $\operatorname{Gal}(\widetilde{K}/\mathbb{Q})\simeq G$ as permutation subgroups of $S_n$. 
Recall that $r=|\operatorname{Gal}(K/K_0)|$ and $\sigma_1, \dots, \sigma_r$ are embeddings with image in $K$. Let $x_\alpha$ denote the point $(\sigma_1(\alpha), \sigma_2(\alpha), \dots, \sigma_n(\alpha))\in \mathcal{O}_{\widetilde{K}}^n$. Then it follows from a standard argument (cf. the proof of [@lemke2022upper Theorem 1.2]) that $\alpha\in \mathcal{O}_K$ can be chosen such that 1. $P(x_\alpha)\neq 0$, 2. $K=\mathbb{Q}(\alpha)$, 3. $\lVert \alpha\rVert \ll X^{\frac{1}{n}}$. Then there are at most $z$ values $x\in \mathbb{C}^n$ such that $\operatorname{Tr}_{\mathbf{a}_i}(x)=\operatorname{Tr}_{\mathbf{a}_i}(x_\alpha)$ for all $i=1, \dots, n$. In particular, the number field $K$ is determined up to $z$ choices by the vector $$\left(\operatorname{Tr}_{\mathbf{a}_1}(x_\alpha), \operatorname{Tr}_{\mathbf{a}_2}(x_\alpha), \dots, \operatorname{Tr}_{\mathbf{a}_i}(x_\alpha), \dots, \operatorname{Tr}_{\mathbf{a}_n}(x_\alpha)\right)\in \mathbb{Z}^n.$$ On the other hand, $|\operatorname{Tr}_{\mathbf{a}_i}(x_\alpha)|\ll X^{\frac{H(\mathbf{a}_i)}{n}}$ for all $i\in [1, n]$, and thus, the total number of such vectors is at most $\left(\prod_i (2X)^{\frac{H(\mathbf{a}_i)}{n}}\right)$. We deduce that $N_n(X;G)\ll \prod_i X^{\frac{H(\mathbf{a}_i)}{n}}$, where the implied constant depends only on the vectors $\mathbf{a}_i$. ◻ With respect to notation from Proposition [Proposition 5](#main prop){reference-type="ref" reference="main prop"}, assume that $r\geq 3$. Fix integers $k$ and $t$ such that $k\in [2, r)$ and $t\geq 2$. Let $\mathcal{S}_k(r)$ be the set of all $k$-element subsets of $[r]$ that contain $1$. Given $\mathcal{B}\in \mathcal{S}_k(r)$, let $\mathbf{a}(\mathcal{B})=(a_1, \dots, a_r)$ be the vector which is defined so that $$a_i:=\begin{cases} t\text{ if }i=1;\\ 1\text{ if }i\neq 1\text{ and }i \in \mathcal{B};\\ 0\text{ otherwise. } \end{cases}$$ **Definition 6**. *Set $l:=\# \mathcal{S}_k(r)=\binom{r-1}{k-1}$ and assume that $l\geq n$. 
Write $\mathcal{B}_1, \dots, \mathcal{B}_l$ to be an enumeration of the subsets in $\mathcal{S}_k(r)$ and set $\mathbf{a}_i:=\mathbf{a}(\mathcal{B}_i)$.* **Proposition 7**. *With respect to notation above, the trace functions $f_i:=\operatorname{Tr}_{\mathbf{a}_i}$, for $i\in [1, l]$, are linearly independent over $\mathbb{C}$.* *Proof.* We write $\mathbf{a}_1=(b_1, \dots, b_r)$, with $b_1=t$ and $b_j\in \{0,1\}$ for $j\in [2, r]$. Note that the monomial $g:=x_1^{b_1}x_2^{b_2}\dots x_r^{b_r}$ is in the support of $f_1$. It suffices to show that this monomial is not in the support of any of the polynomials $f_i$ for $i\in [2, l]$. Write $\mathbf{a}_i:=(c_1, \dots, c_r)$, once again with $c_1=t$ and $c_j\in \{0,1\}$ for $j\in [2, r]$. Then, any monomial in the support of $f_i$ is of the form $h:=x_{\pi_1(j)}^{c_1}x_{\pi_2(j)}^{c_2}\dots x_{\pi_r(j)}^{c_r}$ for some $j\in [1, n]$. Note that $\pi_1(j)=j$, and $c_1=t\geq 2$. Therefore, in order for $g=h$ to hold, it must be the case that $j=1$. This implies that $\mathbf{a}_i=\mathbf{a}_1$, which is a contradiction. Therefore, none of the monomials in the support of $f_i$ coincide with $g$. This implies that the functions $f_1, \dots, f_l$ are linearly independent over $\mathbb{C}$. ◻ We end the section with the proofs of Theorem [Theorem 1](#main thm){reference-type="ref" reference="main thm"} and Corollary [Corollary 1](#main thm 2){reference-type="ref" reference="main thm 2"}. *Proof of Theorem [Theorem 1](#main thm){reference-type="ref" reference="main thm"}.* We find that $H(\mathbf{a}_i)=k+t-1$ for all $i\in [1, l]$, and thus, the result is a direct consequence of Proposition [Proposition 5](#main prop){reference-type="ref" reference="main prop"}. ◻ *Proof of Corollary [Corollary 1](#main thm 2){reference-type="ref" reference="main thm 2"}.* The result follows directly from Theorem [Theorem 1](#main thm){reference-type="ref" reference="main thm"}. 
◻ # Computations verifying the Jacobian condition {#s 3} ## Example 1: $S_3\subset S_6$ We take $G=S_3$ sitting inside $S_6$ via the regular representation. The permutations of $3$ elements are $g_1=\mathbf{1}$, $g_2=(12)$, $g_3=(23)$, $g_4=(13)$, $g_5=(123)$ and $g_6=(132)$. In this case, $H=1$ and $N=G$, so $r=6$. Given an $S_3$-extension $K/\mathbb{Q}$, we identify $g_i$ with an embedding $\sigma_i:K\hookrightarrow \bar{\mathbb{Q}}$. From the multiplication table for $S_3$, one is able to determine that $$\begin{split} & \pi_1=\begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 1 & 2 & 3 & 4 & 5 & 6 \end{pmatrix}, \pi_2=\begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 2 & 1 & 6 & 5 & 4 & 3 \end{pmatrix}, \pi_3=\begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 3 & 5 & 1 & 6 & 2 & 4 \end{pmatrix}, \\ & \pi_4=\begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 4 & 6 & 5 & 1 & 3 & 2 \end{pmatrix}, \pi_5=\begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 5 & 3 & 4 & 2 & 6 & 1 \end{pmatrix}, \pi_6=\begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 6 & 4 & 2 & 3 & 1 & 5 \end{pmatrix}. \end{split}$$ We take $t=2$, $k=3$ and note that $l=\binom{r-1}{k-1}=\binom{5}{2}=10>6$. Let us list the set $\{\mathbf{a}_i\mid i\in [1, 10]\}$. We find that $$\begin{split} & \mathbf{a}_1=(2,1, 1, 0, 0, 0), \mathbf{a}_2=(2,1, 0, 1, 0, 0), \mathbf{a}_3=(2,1, 0, 0, 1, 0), \\ & \mathbf{a}_4=(2,1, 0, 0, 0, 1),\mathbf{a}_5=(2,0, 1, 1, 0, 0),\mathbf{a}_6=(2,0, 1, 0, 1, 0),\\ & \mathbf{a}_7=(2,0, 1, 0, 0, 1),\mathbf{a}_8=(2,0, 0, 1, 1, 0),\mathbf{a}_9=(2,0, 0, 1, 0, 1),\\ & \mathbf{a}_{10}=(2,0,0,0,1,1). 
\end{split}$$ For $\mathbf{a}=(a_1, \dots, a_6)$, we have that $$\begin{split}\operatorname{Tr}_{\mathbf{a}}=& x_1^{a_1}x_2^{a_2}x_3^{a_3}x_4^{a_4}x_5^{a_5}x_6^{a_6}+x_2^{a_1}x_1^{a_2}x_5^{a_3}x_6^{a_4}x_3^{a_5}x_4^{a_6}+x_3^{a_1}x_6^{a_2}x_1^{a_3}x_5^{a_4}x_4^{a_5}x_2^{a_6} \\ + & x_4^{a_1}x_5^{a_2}x_6^{a_3}x_1^{a_4}x_2^{a_5}x_3^{a_6}+x_5^{a_1}x_4^{a_2}x_2^{a_3}x_3^{a_4}x_6^{a_5}x_1^{a_6}+x_6^{a_1}x_3^{a_2}x_4^{a_3}x_2^{a_4}x_1^{a_5}x_5^{a_6}.\end{split}$$ Setting $f_i:=\operatorname{Tr}_{\mathbf{a}_i}$, we compute the Jacobian of $(f_1, \dots, f_6)$ in the variables $(x_1, \dots, x_6)$, and find that its determinant is not identically $0$. This computation was performed on `SageMathCloud`; the code is provided below.

```
var('x1,x2,x3,x4,x5,x6')
f1 = x1^2*x2*x3+x2^2*x1*x6+x3^2*x5*x1+x4^2*x6*x5+x5^2*x3*x4+x6^2*x4*x2
f2 = x1^2*x2*x4+x2^2*x1*x6+x3^2*x6*x5+x4^2*x5*x1+x5^2*x4*x3+x6^2*x3*x4
f3 = x1^2*x2*x5+x2^2*x1*x3+x3^2*x6*x4+x4^2*x5*x2+x5^2*x4*x6+x6^2*x3*x1
f4 = x1^2*x2*x6+x2^2*x1*x4+x3^2*x6*x2+x4^2*x5*x3+x5^2*x4*x1+x6^2*x3*x5
f5 = x1^2*x3*x4+x2^2*x5*x6+x3^2*x1*x5+x4^2*x6*x1+x5^2*x2*x3+x6^2*x4*x3
f6 = x1^2*x3*x5+x2^2*x5*x3+x3^2*x1*x4+x4^2*x6*x2+x5^2*x2*x6+x6^2*x4*x1
b = jacobian([f1, f2, f3, f4, f5, f6], [x1, x2, x3, x4, x5, x6])
a = det(b)
print(a)
```

The conditions of Theorem [Theorem 1](#main thm){reference-type="ref" reference="main thm"} are satisfied in this case. ## Example 2: $S_3\times D_8\subset S_{24}$ We set $G:=S_3\times D_8\subset S_{24}$, where $D_8\subset S_8$ via the regular representation. We note that $D_8$ is a subgroup of $S_4$. It consists of the rotations of the square $1, a, a^2, a^3$, and the reflections $b,ab,a^2b,a^3b$. We order these elements $g_1, \dots, g_4$ and $g_5, \dots, g_8$ respectively. We order the set $\{(i,j)\mid i\in [1, 3], j\in [1, 8]\}$ in lexicographic order, with $D_8$ acting on the second components. Thus, $x_1,x_2, \dots, x_{24}=x_{(1, 1)}, \dots, x_{(3, 8)}$. 
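The permutations $\pi_j$ for the $D_8$ factor can be generated programmatically from the multiplication table. The following Python sketch (an illustration, assuming the dihedral presentation $a^4=b^2=1$ and $ba=a^{-1}b$; indices are $0$-based) tabulates the right translations $g_i\mapsto g_ig_j$:

```python
def d8_mul(x, y):
    """Product (a^s b^e) * (a^t b^f) in D_8, using the relation b a = a^{-1} b."""
    (s, e), (t, f) = x, y
    return ((s + (t if e == 0 else -t)) % 4, (e + f) % 2)

# g_1..g_8 = 1, a, a^2, a^3, b, ab, a^2 b, a^3 b  (stored 0-based)
elements = [(s, e) for e in (0, 1) for s in range(4)]
index_of = {g: i for i, g in enumerate(elements)}

# pi[j][i] = k  whenever  g_{i+1} g_{j+1} = g_{k+1}
pi = [[index_of[d8_mul(gi, gj)] for gi in elements] for gj in elements]

assert pi[0] == list(range(8))                         # g_1 acts as the identity
assert all(sorted(row) == list(range(8)) for row in pi)
assert all(pi[j][i] != i for j in range(1, 8) for i in range(8))  # fixed-point free
assert [pi[j][4] + 1 for j in range(8)] == [5, 8, 7, 6, 1, 4, 3, 2]
```

The final assertion recovers, in $1$-based indexing, the subscripts $(5,8,7,6,1,4,3,2)$ attached to the $g_5=b$ row of the trace functions displayed below.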
For ease of notation, we set $$\begin{split} & u_1, \dots, u_8=x_1, \dots, x_8;\\ & v_1, \dots, v_8= x_9, \dots, x_{16};\\ & w_1, \dots, w_8=x_{17}, \dots, x_{24}. \end{split}$$ By inspecting the multiplication tables for $D_8$, we find that $$\begin{split}\operatorname{Tr}_{\mathbf{a}}=& u_1^{a_1}u_2^{a_2}u_3^{a_3}u_4^{a_4}u_5^{a_5}u_6^{a_6}u_7^{a_7}u_8^{a_8}+u_2^{a_1}u_3^{a_2}u_4^{a_3}u_1^{a_4}u_6^{a_5}u_7^{a_6}u_8^{a_7}u_5^{a_8} \\ +& u_3^{a_1}u_4^{a_2}u_1^{a_3}u_2^{a_4}u_7^{a_5}u_8^{a_6}u_5^{a_7}u_6^{a_8}+u_4^{a_1}u_1^{a_2}u_2^{a_3}u_3^{a_4}u_8^{a_5}u_5^{a_6}u_6^{a_7}u_7^{a_8}\\ +& u_5^{a_1}u_8^{a_2}u_7^{a_3}u_6^{a_4}u_1^{a_5}u_4^{a_6}u_3^{a_7}u_2^{a_8}+u_6^{a_1}u_5^{a_2}u_8^{a_3}u_7^{a_4}u_2^{a_5}u_1^{a_6}u_4^{a_7}u_3^{a_8}\\ + & u_7^{a_1}u_6^{a_2}u_5^{a_3}u_8^{a_4}u_3^{a_5}u_2^{a_6}u_1^{a_7}u_4^{a_8}+u_8^{a_1}u_7^{a_2}u_6^{a_3}u_5^{a_4}u_4^{a_5}u_3^{a_6}u_2^{a_7}u_1^{a_8} \\ + & v_1^{a_1}v_2^{a_2}v_3^{a_3}v_4^{a_4}v_5^{a_5}v_6^{a_6}v_7^{a_7}v_8^{a_8}+v_2^{a_1}v_3^{a_2}v_4^{a_3}v_1^{a_4}v_6^{a_5}v_7^{a_6}v_8^{a_7}v_5^{a_8} \\ +& v_3^{a_1}v_4^{a_2}v_1^{a_3}v_2^{a_4}v_7^{a_5}v_8^{a_6}v_5^{a_7}v_6^{a_8}+v_4^{a_1}v_1^{a_2}v_2^{a_3}v_3^{a_4}v_8^{a_5}v_5^{a_6}v_6^{a_7}v_7^{a_8}\\ +& v_5^{a_1}v_8^{a_2}v_7^{a_3}v_6^{a_4}v_1^{a_5}v_4^{a_6}v_3^{a_7}v_2^{a_8}+v_6^{a_1}v_5^{a_2}v_8^{a_3}v_7^{a_4}v_2^{a_5}v_1^{a_6}v_4^{a_7}v_3^{a_8}\\ + & v_7^{a_1}v_6^{a_2}v_5^{a_3}v_8^{a_4}v_3^{a_5}v_2^{a_6}v_1^{a_7}v_4^{a_8}+v_8^{a_1}v_7^{a_2}v_6^{a_3}v_5^{a_4}v_4^{a_5}v_3^{a_6}v_2^{a_7}v_1^{a_8} \\ +& w_1^{a_1}w_2^{a_2}w_3^{a_3}w_4^{a_4}w_5^{a_5}w_6^{a_6}w_7^{a_7}w_8^{a_8}+w_2^{a_1}w_3^{a_2}w_4^{a_3}w_1^{a_4}w_6^{a_5}w_7^{a_6}w_8^{a_7}w_5^{a_8} \\ +& w_3^{a_1}w_4^{a_2}w_1^{a_3}w_2^{a_4}w_7^{a_5}w_8^{a_6}w_5^{a_7}w_6^{a_8}+w_4^{a_1}w_1^{a_2}w_2^{a_3}w_3^{a_4}w_8^{a_5}w_5^{a_6}w_6^{a_7}w_7^{a_8}\\ +& w_5^{a_1}w_8^{a_2}w_7^{a_3}w_6^{a_4}w_1^{a_5}w_4^{a_6}w_3^{a_7}w_2^{a_8}+w_6^{a_1}w_5^{a_2}w_8^{a_3}w_7^{a_4}w_2^{a_5}w_1^{a_6}w_4^{a_7}w_3^{a_8}\\ + & 
w_7^{a_1}w_6^{a_2}w_5^{a_3}w_8^{a_4}w_3^{a_5}w_2^{a_6}w_1^{a_7}w_4^{a_8}+w_8^{a_1}w_7^{a_2}w_6^{a_3}w_5^{a_4}w_4^{a_5}w_3^{a_6}w_2^{a_7}w_1^{a_8}.\end{split}$$ We take $k=4$ and thus find that $l=\binom{7}{3}=35$. We choose a set of $24$ vectors $\mathbf{a}_1, \dots, \mathbf{a}_{24}$ and list them below $$\begin{split} & \mathbf{a}_1=(2,1,1,1,0,0,0,0), \mathbf{a}_2=(2,1,1,0,1,0,0,0), \mathbf{a}_3=(2,1,1,0,0,1,0,0),\\ & \mathbf{a}_4=(2,1,1,0,0,0,1,0), \mathbf{a}_5=(2,1,1,0,0,0,0,1), \mathbf{a}_6=(2,1,0,1,1,0,0,0),\\ & \mathbf{a}_7=(2,1,0,1,0,1,0,0), \mathbf{a}_8=(2,1,0,1,0,0,1,0), \mathbf{a}_9=(2,1,0,1,0,0,0,1),\\ & \mathbf{a}_{10}=(2,1,0,0,1,1,0,0), \mathbf{a}_{11}=(2,1,0,0,1,0,1,0), \mathbf{a}_{12}=(2,1,0,0,1,0,0,1),\\ & \mathbf{a}_{13}=(2,1,0,0,0,1,1,0), \mathbf{a}_{14}=(2,1,0,0,0,1,0,1), \mathbf{a}_{15}=(2,1,0,0,0,0,1,1),\\ & \mathbf{a}_{16}=(2,0,1,1,1,0,0,0), \mathbf{a}_{17}=(2,0,1,1,0,1,0,0), \mathbf{a}_{18}=(2,0,1,1,0,0,1,0),\\ & \mathbf{a}_{19}=(2,0,1,1,0,0,0,1), \mathbf{a}_{20}=(2,0,1,0,1,1,0,0), \mathbf{a}_{21}=(2,0,1,0,1,0,1,0),\\ & \mathbf{a}_{22}=(2,0,1,0,1,0,0,1), \mathbf{a}_{23}=(2,0,1,0,0,1,1,0), \mathbf{a}_{24}=(2,0,1,0,0,1,0,1). \end{split}$$ Setting $f_i:=\operatorname{Tr}_{\mathbf{a}_i}$, we consider the Jacobian matrix of $(f_1, \dots, f_{24})$ in the variables $$(u_1, \dots, u_8, v_1, \dots, v_8, w_1, \dots, w_8).$$ If the associated Jacobian matrix is shown to be nonsingular, then Corollary [Corollary 1](#main thm 2){reference-type="ref" reference="main thm 2"} implies that $N_{24}(X;S_3\times D_8)\ll X^5$. Compare this with Schmidt's upper bound, which implies that $N_{24}(X; S_3\times D_8)\ll X^{6.5}$. ## Data availability statement {#data-availability-statement .unnumbered} No data was generated or analyzed in establishing our results.
{ "id": "2310.00601", "title": "Upper bounds for the number of number fields with prescribed Galois\n group", "authors": "Hrishabh Mishra, Anwesh Ray", "categories": "math.NT", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | In this paper, we investigate a class of McKean-Vlasov stochastic differential equations under Lévy-type perturbations. We first establish the existence and uniqueness theorem for solutions of the McKean-Vlasov stochastic differential equations by utilizing the Euler-like approximation. Then under some suitable conditions, we show that the solutions of McKean-Vlasov stochastic differential equations can be approximated by the solutions of the associated averaged McKean-Vlasov stochastic differential equations in the sense of mean square convergence. In contrast to the existing work, a novel feature is the use of a much weaker condition---local Lipschitz continuity in the state variable, allowing for possibly super-linearly growing drift, but linearly growing diffusion and jump coefficients. Therefore, our results are suitable for a wider class of McKean-Vlasov stochastic differential equations. address: - School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China - School of Mathematics and Statistics $\&$ Center for Mathematical Sciences, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China - Department of Mathematics $\&$ Department of Physics, Great Bay University, Dongguan, Guangdong 523000, China - Beijing International Center for Mathematical Research, Peking University, Beijing 100871, China author: - Ying Chao - Jinqiao Duan - Ting Gao - Pingyuan Wei bibliography: - yingreferences.bib title: Well-posedness and averaging principle for Lévy-type McKean-Vlasov stochastic differential equations under local Lipschitz conditions --- McKean-Vlasov stochastic differential equations; well-posedness; averaging principle; one-sided local Lipschitz condition; Lévy-type perturbations 60H10, 60G51, 34C29, 35Q83 # Introduction McKean-Vlasov stochastic differential equations (SDEs) have received a great deal of attention in recent years, due to their wide applications across many fields such as stochastic
controls, stochastic games and statistical physics. Equations of this type were first introduced in [@Mckean1966], inspired by the kinetic theory of Kac [@Kac1956], and differ from usual SDEs in the sense that the coefficients additionally depend on the probability distribution of the solution process. In the literature, McKean-Vlasov SDEs are also referred to as mean-field SDEs, because these equations are obtained as the limits of weakly interacting particle systems as the number of particles tends to infinity (the so-called propagation of chaos). In the development of the aforementioned McKean-Vlasov SDEs, the noise processes considered are mainly Gaussian. However, systems of practical relevance in physics and biology sometimes have to be modeled by non-Gaussian noise, as evidenced by abrupt jumps both in individual particles and in the related whole population. To capture such natural phenomena, it is appropriate to consider (non-Gaussian) Lévy-type perturbations [@Applebaum2009; @Duan2015; @Liu2022]. In this paper, we focus on the following $d$-dimensional Lévy-type McKean-Vlasov SDE $$\begin{aligned} \label{Main} &dX_{\varepsilon}(t)=b\left(\frac{t}{\varepsilon},X_{\varepsilon}(t),\mathscr{L}_{X_{\varepsilon}(t)}\right) dt+\sigma\left(\frac{t}{\varepsilon},X_{\varepsilon}(t),\mathscr{L}_{X_{\varepsilon}(t)}\right)dW(t)+\int_{U}h\left(\frac{t}{\varepsilon},X_{\varepsilon}(t),\mathscr{L}_{X_{\varepsilon}(t)},z\right)\tilde{N}(dt,dz),\;\; X_{\varepsilon}(0)=x_0,\end{aligned}$$ on $t\in [0,T]$ with a small parameter $\varepsilon>0$.
Here $\mathscr{L}_{X_{\varepsilon}(t)}$ denotes the law of $X_{\varepsilon}(t)$ at time $t$, $W(t)$ is an $m$-dimensional standard Wiener process defined on the complete probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\geq0}, \mathbb{P})$ with $(\mathcal{F}_t)_{t\geq0}$ satisfying the usual conditions, $N(dt,dz)$ is the Poisson random measure on a $\sigma$-finite measure space $(U,\mathcal{U},\nu)$ with $U\subseteq {\mathbb{R}^d}\backslash \{ 0\}$, independent of $W(t)$, and $\tilde N(dt,dz) = N(dt,dz) - \nu (dz)dt$ is its compensated Poisson random measure. The precise assumptions on the coefficients $b:[0,T]\times\mathbb{R}^d\times \mathcal{M}_2(\mathbb{R}^d)\to\mathbb{R}^d$, $\sigma:[0,T]\times\mathbb{R}^d\times \mathcal{M}_2(\mathbb{R}^d)\to\mathbb{R}^{d\times m}$, and $h:[0,T]\times\mathbb{R}^d\times \mathcal{M}_2(\mathbb{R}^d)\times\mathbb{R}^d\to\mathbb{R}^d$ will be specified in later sections (see Section [2](#sec2){reference-type="ref" reference="sec2"} for the definition of $\mathcal{M}_2(\mathbb{R}^d)$). The first aim of this paper is to consider the well-posedness of the McKean-Vlasov SDEs in the form of ([\[Main\]](#Main){reference-type="ref" reference="Main"}). Let us briefly review some previous works about the well-posedness for McKean-Vlasov SDEs with Brownian noise. Under the global Lipschitz condition, the existence and uniqueness of the strong solutions for McKean-Vlasov SDEs were obtained by using the fixed point theorem, for example, in [@Carmona2018probabilistic; @Bahlali2020]. The results of the case with one-sided global Lipschitz drift term and global Lipschitz diffusion term can be found in [@Wang2018; @Dos2022simulation]. To deal with the situation of locally Lipschitz with respect to (w.r.t. for short) the measure and globally Lipschitz w.r.t. the state variable, Kloeden and Lorenz [@Kloeden2010] established a method of constructing interpolated Euler-like approximations. Recently, an extension to local Lipschitz conditions w.r.t.
the state variable but under a uniform linear growth assumption was treated by Li *et al.* [@Li2022strong]; see also [@Ding2021]. Moreover, Hong *et al.* [@Hong2022mckean] discussed the strong and weak well-posedness of a class of McKean-Vlasov SDEs with drift and diffusion coefficients fulfilling some locally monotone conditions, whereas they need to impose some extra structure on the coefficients to obtain a unique solution. Unlike the case of Brownian noise, the related study of McKean-Vlasov SDEs with Lévy noise is still in its infancy, but some interesting works are emerging. For example, Hao *et al.* [@Hao2016] considered a class of Lévy-type McKean-Vlasov SDEs satisfying global Lipschitz plus linear growth conditions, established the existence and uniqueness of solutions and studied the intrinsic link with nonlocal Fokker-Planck equations. The well-posedness results have been further developed in the case of super-linearly growing drift, diffusion and jump coefficients by using the fixed point theorem [@Mehri2020; @Neelima2020]. Motivated by the previous works on the Brownian case as well as the Lévy case, in this paper, we aim to treat ([\[Main\]](#Main){reference-type="ref" reference="Main"}) imposing only local Lipschitz conditions w.r.t. the state variable, allowing for possibly super-linearly growing drift. We highlight that some essential difficulties occur. On the one hand, compared with the classical SDEs, standard localization arguments cannot be applied directly due to the distribution dependent coefficients. On the other hand, the non-Gaussian Lévy noise gives rise to several difficulties in both analytic and probabilistic aspects. Therefore, the results of classical SDEs (even with Lévy noise) or those of McKean-Vlasov SDEs with Brownian noise cannot be extended to McKean-Vlasov SDEs with Lévy noise directly.
In this paper, we develop a Lévy-type technique of Euler-like approximations to overcome difficulties caused by the local condition and the distribution dependency. The crux of our method, which is different from the Brownian case [@Kloeden2010], lies in dealing with the drift terms under more general conditions, as well as the jump terms. Apart from the existence and uniqueness of solutions, we are further interested in establishing a stochastic averaging principle for ([\[Main\]](#Main){reference-type="ref" reference="Main"}) with drifts of polynomial growth under local Lipschitz conditions w.r.t. the state variable. In fact, the averaging principle is a powerful method to extract effective dynamics from complex systems arising from mechanics, mathematics, and other research areas. Since the pioneering work of Khasminskii [@Khasminskii1968], the averaging principle for usual SDEs has received a lot of attention, and has stimulated much of the study on controls, stability analysis, and optimization methods. Although the problems considered take different forms (usually classified in terms of the noise or the conditions satisfied by its nonlinear terms), the essence behind the averaging method is to give a simplification of dynamical systems and obtain approximating solutions to differential equations; see, e.g., [@Xu2011; @Ma2019; @Pei2020]. Based on the idea of stochastic averaging, the second goal of this paper is to show that the solution of ([\[Main\]](#Main){reference-type="ref" reference="Main"}), with $0<\varepsilon\ll1$, converges to the following averaged equation: $$\begin{aligned} \label{MASDE} d\bar{X}(t)=\bar{b}\left(\bar{X}(t),\mathscr{L}_{\bar{X}(t)}\right)dt+\bar{\sigma}\left(\bar{X}(t),\mathscr{L}_{\bar{X}(t)}\right)dW(t)+\int_{U}\bar{h}\left(\bar{X}(t),\mathscr{L}_{\bar{X}(t)},z\right)\tilde{N}(dt,dz),\;\;\bar{X}(0)=x_0,\end{aligned}$$ in a certain sense, under suitable averaging conditions.
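To illustrate what an averaging condition means in the simplest possible setting (a toy scalar example with the measure dependence suppressed, not the paper's general framework): for a drift that is periodic in its first argument, the averaged coefficient is its long-time average $\bar{b}(x)=\lim_{T\to\infty}\frac{1}{T}\int_0^T b(s,x)\,ds$. A quick numerical check with an illustrative drift:

```python
import math

# Toy averaging check: b(t, x) = -x + x*sin(t) (illustrative, with the
# measure argument suppressed).  Its time average over [0, T] equals
# -x + x*(1 - cos(T))/T, which converges to the averaged drift -x.

def b(t, x):
    return -x + x * math.sin(t)

def time_average(x, T, n=200_000):
    """Midpoint-rule approximation of (1/T) * integral of b(., x) over [0, T]."""
    dt = T / n
    return sum(b((k + 0.5) * dt, x) for k in range(n)) * dt / T

x = 2.0
for T in (10.0, 100.0, 1000.0):
    print(T, time_average(x, T))  # tends to -2.0 as T grows
```

The discrepancy decays like $1/T$, which is the kind of quantitative averaging behaviour the abstract conditions encode.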
For more details of [\[MASDE\]](#MASDE){reference-type="eqref" reference="MASDE"}, see Section [3](#sec3){reference-type="ref" reference="sec3"}. Again, we have to point out that, compared with the case of classical SDEs, there are far fewer results on the averaging principle for McKean-Vlasov SDEs due to their distribution-dependent feature. The existing references on averaging principles for McKean-Vlasov SDEs mainly focus on the Brownian case [@Xu2021; @Shen2022]. For some interesting results with other types of noise, e.g., fractional Brownian noise, we refer to [@Shen2022av]. Nevertheless, to the best of the authors' knowledge, the averaging principle for McKean-Vlasov SDEs with Lévy noise has not yet been considered to date. This inspires us to establish an averaging principle. The rest of this paper is arranged as follows. In Section [2](#sec2){reference-type="ref" reference="sec2"}, we investigate the existence and uniqueness of solutions to a class of McKean-Vlasov SDEs with Lévy-type perturbations. In Section [3](#sec3){reference-type="ref" reference="sec3"}, we prove an averaging principle for the solutions of the considered McKean-Vlasov SDEs. In Section [4](#sec4){reference-type="ref" reference="sec4"}, we present a specific example to illustrate the theoretical results of this article. We postpone the details of the proof of Lemma [Lemma 4](#lemma3-1){reference-type="ref" reference="lemma3-1"} and the propagation of chaos result to the Appendix. # Well-posedness of Lévy-type McKean-Vlasov stochastic differential equations {#sec2} We start with some notation used in the sequel. Let $|\cdot|$ and $\langle \cdot, \cdot\rangle$ be the Euclidean vector norm and the scalar product in $\mathbb{R}^d$, respectively. For a matrix $A$, we denote the Frobenius norm by $\|A\|=\sqrt{tr[AA^T]}$, where $A^T$ stands for the transpose of the matrix $A$.
Let $\mathcal{M}(\mathbb{R}^d)$ represent the space of all probability measures on $\mathbb{R}^d$ carrying the usual topology of weak convergence. Furthermore, for $p\geqslant 1$, let $\mathcal{M}_p(\mathbb{R}^d)$ denote the subspace of $\mathcal{M}(\mathbb{R}^d)$ as follows: $$\mathcal{M}_p(\mathbb{R}^d):=\left\{\mu\in\mathcal{M}(\mathbb{R}^d): \mu(|\cdot|^p):=\int_{\mathbb{R}^d}|x|^p\mu(dx)<\infty\right\}.$$ For $\mu_1, \mu_2\in\mathcal{M}_p(\mathbb{R}^d)$, the $L^p$-Wasserstein metric between $\mu_1$ and $\mu_2$ is defined by $$W_p(\mu_1,\mu_2):=\inf_{\pi\in\mathscr{C}(\mu_1,\mu_2)}\left(\int_{\mathbb{R}^d\times \mathbb{R}^d}|x-y|^p\pi(dx,dy)\right)^{\frac{1}{p}},$$ where $\mathscr{C}(\mu_1,\mu_2)$ denotes the collection of all probability measures on $\mathbb{R}^d\times\mathbb{R}^d$ whose marginal distributions are $\mu_1$ and $\mu_2$, respectively. Then $\mathcal{M}_p(\mathbb{R}^d)$ endowed with the above metric is a Polish space. Let $\delta_x$ be the Dirac measure centered at the point $x\in\mathbb{R}^d$. A direct calculation shows that $\delta_x$ belongs to $\mathcal{M}_p(\mathbb{R}^d)$ for any $x\in\mathbb{R}^d$. Another remark is that, if $\mu_1=\mathscr{L}_X$, $\mu_2=\mathscr{L}_Y$ are the corresponding distributions of random variables $X$ and $Y$ respectively, then $$(W_p(\mu_1,\mu_2))^p\leqslant \int_{\mathbb{R}^d\times \mathbb{R}^d}|x-y|^p\mathscr{L}_{(X,Y)}(dx,dy)=\mathbb{E}|X-Y|^p,$$ in which $\mathscr{L}_{(X,Y)}$ represents the joint distribution of the random vector $(X,Y)$. Given $T>0$, let $D([0,T];\mathbb{R}^d)$ be the collection of all càdlàg (i.e., right continuous with left limits) functions from $[0,T]$ to $\mathbb{R}^d$ equipped with the supremum norm. For $p\geqslant1$, we use $L^p(\Omega;\mathbb{R}^d)$ to denote the family of all $\mathbb{R}^d$-valued random variables $X$ with $\mathbb{E}|X|^p<\infty$.
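The coupling inequality $(W_p(\mu_1,\mu_2))^p\leqslant \mathbb{E}|X-Y|^p$ can be checked empirically in one dimension, where the $W_p$ distance between two equal-size empirical measures is attained by matching sorted samples, so any other pairing (such as the pairing by sample index induced by a coupling) can only cost more. A sketch with illustrative distributions, not taken from the paper:

```python
import random

# Empirical check of W_p(L_X, L_Y)^p <= E|X - Y|^p in dimension one.
# For equal-size empirical measures, the optimal transport plan matches
# sorted samples, so sorting computes W_p exactly on the samples.

def wasserstein_p(xs, ys, p=2):
    n = len(xs)
    cost = sum(abs(a - b) ** p for a, b in zip(sorted(xs), sorted(ys))) / n
    return cost ** (1.0 / p)

random.seed(0)
n = 10_000
X = [random.gauss(0.0, 1.0) for _ in range(n)]
Y = [x + random.gauss(0.5, 0.2) for x in X]   # one particular coupling of the two laws

w2 = wasserstein_p(X, Y, p=2)
coupling_cost = (sum((a - b) ** 2 for a, b in zip(X, Y)) / n) ** 0.5  # (E|X-Y|^2)^(1/2)
print(w2, coupling_cost)  # the first number never exceeds the second
```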
Analogously, we denote by $L^p(\Omega;D([0,T];\mathbb{R}^d))$ the subspace of all $D([0,T];\mathbb{R}^d)$-valued random variables $X$ satisfying $\mathbb{E}[\sup_{0\leqslant t\leqslant T}|X(t)|^p]<\infty.$ Equipped with the norm $$||X||_{L^p}:=\left(\mathbb{E}\left[\sup_{0\leqslant t\leqslant T}|X(t)|^p\right]\right)^{\frac{1}{p}},$$ the space $L^p(\Omega;D([0,T];\mathbb{R}^d))$ is a Banach space. ## Formulation of the well-posedness results Recall that this section is dedicated to establishing the existence and uniqueness theorem for the solutions of $d$-dimensional Lévy-type McKean-Vlasov SDEs, i.e., $$\begin{aligned} \label{Main-equation} &dX(t)=b\left(t,X(t),\mathscr{L}_{X(t)}\right) dt+\sigma\left(t,X(t),\mathscr{L}_{X(t)}\right)dW(t)+\int_{U}h\left(t,X(t),\mathscr{L}_{X(t)},z\right)\tilde{N}(dt,dz)\end{aligned}$$ on $t\in[0,T]$ with initial data $X(0)=x_0$, where $$b:[0,T]\times\mathbb{R}^d\times \mathcal{M}_2(\mathbb{R}^d)\to\mathbb{R}^d, \;\sigma:[0,T]\times\mathbb{R}^d\times \mathcal{M}_2(\mathbb{R}^d)\to\mathbb{R}^{d\times m},\; h:[0,T]\times\mathbb{R}^d\times \mathcal{M}_2(\mathbb{R}^d)\times U\to\mathbb{R}^d,$$ are Borel measurable functions. Let us first give the precise definition of a solution to ([\[Main-equation\]](#Main-equation){reference-type="ref" reference="Main-equation"}). **Definition 1**. *We say that ([\[Main-equation\]](#Main-equation){reference-type="ref" reference="Main-equation"}) admits a unique strong solution if there exists an $\{\mathcal{F}_t\}_{0\leqslant t\leqslant T}$-adapted $\mathbb{R}^d$-valued càdlàg stochastic process $\{X(t)\}_{0\leqslant t\leqslant T}$ such that* 1. *$X(t)=x_0+\int_0^t b\left(s,X(s),\mathscr{L}_{X(s)}\right) ds+\int_0^t\sigma\left(s,X(s),\mathscr{L}_{X(s)}\right)dW(s)+\int_0^t\int_{U}h\left(s,X(s),\mathscr{L}_{X(s)},z\right)\tilde{N}(ds,dz), \; t\in[0,T], \; \mathbb{P}-a.s.;$* 2.
*if $Y=\{Y(t)\}_{0\leqslant t\leqslant T}$ is another solution with $Y(0)=x_0$, then $$\mathbb{P}(X(t)=Y(t) \;\hbox{ for all} \;0\leqslant t\leqslant T)=1.$$* Assume that there exists $\kappa\geqslant2$ such that the following assumptions hold. **A1.** (One-sided local Lipschitz condition on the state variable) For every $R>0$, there exists a constant $L_R>0$ such that for any $t\in[0,T]$, $x,y\in\mathbb{R}^d$ with $|x|\vee|y|\leqslant R$, and $\mu\in\mathcal{M}_2(\mathbb{R}^d)$, $$\langle x-y, b(t,x,\mu)-b(t,y,\mu)\rangle\vee\|\sigma(t,x,\mu)-\sigma(t,y,\mu)\|^2\vee\int_U\left|h(t,x,\mu,z)-h(t,y,\mu,z)\right|^2\nu(dz)\leqslant L_R|x-y|^2.$$ **A2.** (Global Lipschitz condition on the measure) There exists $L>0$ such that, for any $t\in[0,T]$, $x\in\mathbb{R}^d$ and $\mu_1,\mu_2\in\mathcal{M}_2(\mathbb{R}^d)$, $$|b(t,x,\mu_1)-b(t,x,\mu_2)|^2+\|\sigma(t,x,\mu_1)-\sigma(t,x,\mu_2)\|^2+\int_U\left|h(t,x,\mu_1,z)-h(t,x,\mu_2,z)\right|^2\nu(dz)\leqslant LW_2^2(\mu_1,\mu_2).$$ **A3.** (Continuity) For any $t\in[0,T]$, $b(t,\cdot,\cdot), \sigma(t,\cdot,\cdot), \int_Uh(t,\cdot,\cdot,z)\nu(dz)$ are continuous on $\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d).$ **A4.** (One-sided linear & global linear growth condition) There exists $K>0$ such that, for any $t\in[0,T]$, $x\in\mathbb{R}^d$ and $\mu\in\mathcal{M}_2(\mathbb{R}^d)$, $$\langle x, b(t,x,\mu)\rangle\vee \|\sigma(t,x,\mu)\|^2\vee \int_U|h(t,x,\mu,z)|^2\nu(dz)\leqslant K(1+|x|^2+W_2^2(\mu,\delta_0)).$$ **A5.** ($\kappa$-order growth condition w.r.t. the drift coefficient $b$) There exists a positive constant $K_1$ such that, for any $t\in[0,T]$, $x\in\mathbb{R}^d$ and $\mu\in\mathcal{M}_{2}(\mathbb{R}^d)$, $$|b(t,x,\mu)|^2\leqslant K_1(1+|x|^{\kappa}+W_2^{\kappa}(\mu,\delta_0)).$$ **A6.** ($r$-order moment condition for the initial data) Consider $x_0\in L^r(\Omega;\mathbb{R}^d)$ for some $r\geqslant \max\{\kappa^2/2,4\}$, i.e., $\mathbb{E}|x_0|^r<\infty.$ **A7.** (Additional growth conditions w.r.t.
the jump coefficient $h$) There exists a positive constant $K_2$ such that, for any $t\in[0,T]$, $x\in\mathbb{R}^d$ and $\mu\in\mathcal{M}_{2}(\mathbb{R}^d)$, $$\int_U|h(t,x,\mu,z)|^r\nu(dz)\leqslant K_2(1+|x|^r+W_2^{r}(\mu,\delta_0)).$$ In addition, if $\kappa >2$, there exist constants $K_3, L^{'}>0$ such that, for any $t\in[0,T]$, $x,y\in\mathbb{R}^d$ and $\mu,\mu_1,\mu_2\in\mathcal{M}_{2}(\mathbb{R}^d)$, $$\int_U|h(t,x,\mu,z)|^\kappa\nu(dz)\leqslant K_3(1+|x|^\kappa+W_2^{\kappa}(\mu,\delta_0)),$$ $$\int_U\left|h(t,x,\mu_1,z)-h(t,x,\mu_2,z)\right|^{\kappa}\nu(dz)\leq L^{'}W_2^{\kappa}(\mu_1,\mu_2),$$ and for every $R>0$, there exists a constant $L_R^{'}>0$ such that for any $t\in[0,T]$, $x,y\in\mathbb{R}^d$ with $|x|\vee|y|\leqslant R$, and $\mu\in\mathcal{M}_2(\mathbb{R}^d)$, $$\int_U\left|h(t,x,\mu,z)-h(t,y,\mu,z)\right|^{\kappa}\nu(dz)\leqslant L_R^{'}|x-y|^{\kappa}.$$ The main result of this section is stated as follows. **Theorem 1**. ***(Well-posedness)** Let Assumptions **A1**-**A7** be satisfied. Then equation ([\[Main-equation\]](#Main-equation){reference-type="ref" reference="Main-equation"}) admits a unique strong solution $X(t)\in L^{\kappa}(\Omega;\mathbb{R}^d)$ for $t\in[0,T]$ with the initial value $X(0)=x_0$. Moreover, the following estimate holds: $$\mathbb{E}\left[\sup_{0\leqslant t\leqslant T}|X(t)|^r\right]\leqslant C,$$ where $C:=C(T,r,\mathbb{E}|x_0|^r)$ is a positive constant. Here $\kappa\geqslant2$ and $r\geqslant \max\{\kappa^2/2,4\}$.* **Remark 1**. *We point out that the conditions in Assumptions **A1**-**A7** are carefully chosen and the results in Theorem [Theorem 1](#mainresult1){reference-type="ref" reference="mainresult1"} are generally applicable:* - *The one-sided local Lipschitz condition in **A1** is weaker than the classical local Lipschitz condition. In fact, it is clear that the local Lipschitz condition implies the one-sided local Lipschitz one (by applying the Cauchy-Schwarz inequality directly). But the converse is not true.
For example, let $b(t,x,\mu)=x^3-x^{\frac{1}{3}}+t+\int_{\mathbb{R}} z\mu(dz)$ in $\mathbb{R}$. Note that, for $|x|\vee|y|\leqslant R$, $$\langle x-y, b(t,x,\mu)-b(t,y,\mu)\rangle=|x-y|^2(x^2+xy+y^2)-(x-y)(x^{\frac{1}{3}}-y^{\frac{1}{3}})\leqslant 3R^2|x-y|^2,$$ as $(x-y)(x^{\frac{1}{3}}-y^{\frac{1}{3}})\geqslant0$ for all $x,y$. Hence $b$ is one-sided locally Lipschitz but not locally Lipschitz.* - *Comparing with the one-sided (global) Lipschitz condition in the recent paper [@Neelima2020], i.e., there exists $C>0$ such that for any $x,y\in\mathbb{R}^d$, $\mu\in\mathcal{M}_2(\mathbb{R}^d),$ $$\langle x-y, b(t,x,\mu)-b(t,y,\mu)\rangle+\|\sigma(t,x,\mu)-\sigma(t,y,\mu)\|^2+\int_U\left|h(t,x,\mu,z)-h(t,y,\mu,z)\right|^2\nu(dz)\leqslant C|x-y|^2,$$ the one-sided local Lipschitz condition in **A1** is expressed via the operations "$\vee$\" instead of "$+$\". Even so, such a "local\" condition may include some weaker cases. For example, let $b$ be a one-sided locally Lipschitz function and $\sigma=h=x$ with $\nu(U)<\infty$. Then **A1** holds, but the one-sided (global) Lipschitz condition in [@Neelima2020] fails.* - *The result degenerates into a pure-Brownian one if we set $h\equiv 0$. Different from the previous Brownian-case paper [@Li2022strong], in which the drift coefficient needs to satisfy the linear growth condition, the drift coefficient $b$ here only satisfies the one-sided linear growth condition, and may have polynomial growth w.r.t. the state variable (see **A4-A5**).* ## Euler type approximation and auxiliary lemmas One key step of our method to prove Theorem [Theorem 1](#mainresult1){reference-type="ref" reference="mainresult1"} is to construct an Euler-like sequence for the McKean-Vlasov SDEs [\[Main-equation\]](#Main-equation){reference-type="eqref" reference="Main-equation"}.
Once we show that this sequence is Cauchy in an appropriate complete space (i.e., $L^{\kappa}(\Omega;D([0,T];\mathbb{R}^d))$, as we will see later), we are able to conclude that there exists a limiting process which is indeed the desired solution to [\[Main-equation\]](#Main-equation){reference-type="eqref" reference="Main-equation"}. To this end, given $T>0$, we consider the equidistant partitions of $[0,T]$. For any integer $n\geqslant1$, we set $\triangle_n=\frac{T}{n}$ and $t_k^n=k\triangle_n$, $k=0,1,\ldots, n$. In this way, for fixed $k$ ($0\leqslant k \leqslant n-1$) and $t\in(t_k^n, t_{k+1}^n]$, we consider $$\label{Approximation-k} dX^{(n)}(t)=b\left(t,X^{(n)}(t),\mu^{(n)}_{t_k^n}\right)dt+\sigma\left(t,X^{(n)}(t),\mu^{(n)}_{t_k^n}\right)dW(t)+\int_{U}h\left(t,X^{(n)}(t),\mu^{(n)}_{t_k^n},z\right)\tilde{N}(dt,dz),$$ where $\mu^{(n)}_{t_k^n}=\mathscr{L}_{X^{(n)}(t_k^n)}$. It is clear that, for each fixed $k$, if the initial value $X^{(n)}(t_k^n)$ and the distribution $\mu^{(n)}_{t_k^n}$ (at the left point $t_k^n$) are known, then [\[Approximation-k\]](#Approximation-k){reference-type="eqref" reference="Approximation-k"} is a standard SDE independent of the law of $X^{(n)}(t)$. We now prove inductively the existence and uniqueness of the solution to [\[Approximation-k\]](#Approximation-k){reference-type="eqref" reference="Approximation-k"}. In fact, for $k=0$ and $t\in[0,t_1^n]$, the distribution is $\mu^{(n)}_{0}=\mathscr{L}_{X^{(n)}(0)}=\mathscr{L}_{x_0}$. 
Applying Assumptions **A1**$\&$**A4**, we observe that the coefficients in [\[Approximation-k\]](#Approximation-k){reference-type="eqref" reference="Approximation-k"} (with $k=0$) satisfy $$\left\langle x-y, b\left(t,x,\mu^{(n)}_{0}\right)-b\left(t,y,\mu^{(n)}_{0}\right)\right\rangle+\left\|\sigma\left(t,x,\mu^{(n)}_{0}\right)-\sigma\left(t,y,\mu^{(n)}_{0}\right)\right\|^2+\int_U\left|h\left(t,x,\mu^{(n)}_{0},z\right)-h\left(t,y,\mu^{(n)}_{0},z\right)\right|^2\nu(dz)\leqslant 3L_R|x-y|^2$$ and $$\begin{split} \left\langle x, b\left(t,x,\mu^{(n)}_{0}\right)\right\rangle+\left\|\sigma\left(t,x,\mu^{(n)}_{0}\right)\right\|^2+\int_U\left|h\left(t,x,\mu^{(n)}_{0},z\right)\right|^2\nu(dz)&\leqslant 3K\left(1+|x|^2+W_2^2\left(\mu^{(n)}_{0},\delta_0\right)\right)\\ &\leqslant3K\left(1+|x|^2\right)\left(1+\mathbb{E}\left|X^{(n)}(0)\right|^2\right).\notag \end{split}$$ Referring to Theorem 1.1 in [@Majka2016note], this equation admits a unique solution on $[0,t_1^n]$. Furthermore, by Assumption **A5**, it follows that for $r\geqslant\max\{\frac{\kappa^2}{2},4\}$, there exists a positive constant $C$ such that $$\mathbb{E}\left[\sup_{0\leqslant t\leqslant t_1^n}\left|X^{(n)}(t)\right|^r\right]\leqslant C\left(1+\mathbb{E}\left|X^{(n)}(0)\right|^r\right), \notag$$ whose proof is quite similar to that of Lemma [Lemma 1](#UBP){reference-type="ref" reference="UBP"} below, and we omit the details here. Therefore, we can define $X^{(n)}(t_1^n)$ (which satisfies $\mathbb{E}|X^{(n)}(t_1^n)|^r<\infty$) and $\mu^{(n)}_{t_1^n}=\mathscr{L}_{X^{(n)}(t_1^n)}$. For $k=1$ and $t\in(t_1^n,t_2^n]$, we can use $(X^{(n)}(t_1^n),\mu^{(n)}_{t_1^n})$ in place of $(X^{(n)}(0),\mu^{(n)}_{0})$ and repeat the above procedure.
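A numerical caricature of this construction may help: on each subinterval the measure argument is frozen at the left grid point and, in practice, replaced by the empirical measure of an interacting particle system. The sketch below uses illustrative coefficients ($b(t,x,\mu)=-x+\int y\,\mu(dy)$, constant diffusion $0.3$, jumps omitted) and an Euler step in place of the exact subinterval solution; none of these choices come from the paper:

```python
import math
import random

# Particle sketch of the interpolated Euler-like scheme: on each
# subinterval the law mu_{t_k}^{(n)} is frozen and approximated by the
# empirical mean of M particles.  Coefficients are illustrative.

def simulate(M=500, n=50, T=1.0, seed=1):
    random.seed(seed)
    dt = T / n
    X = [1.0] * M                      # all particles start at x_0 = 1
    for _ in range(n):
        mean_frozen = sum(X) / M       # stand-in for the frozen law mu_{t_k}^{(n)}
        X = [x + (-x + mean_frozen) * dt + 0.3 * math.sqrt(dt) * random.gauss(0.0, 1.0)
             for x in X]
    return sum(X) / M

print(simulate())  # empirical mean of X^{(n)}(T)
```

With this drift the mean of the true solution is constant, so the printed empirical mean stays near $1$ up to Monte Carlo error of order $M^{-1/2}$.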
Inductively, for any $k=0,1,\ldots, n-1$ and $t\in(t_k^n, t_{k+1}^n]$, we obtain the existence and uniqueness of the solution to the SDE [\[Approximation-k\]](#Approximation-k){reference-type="eqref" reference="Approximation-k"} as well as the corresponding estimate $$\label{estimate-k} \mathbb{E}\left[\sup_{t_k^n\leqslant t\leqslant t_{k+1}^n}\left|X^{(n)}(t)\right|^r\right]\leqslant C\left(1+\mathbb{E}\left|X^{(n)}(t_k^n)\right|^r\right),$$ by similar arguments. At this point, we define $[t]_n:=t_k^n$ for all $t\in (t_k^n,t_{k+1}^n]$, $k=0,1,\ldots, n-1$. Then, for $t\in[0,T]$, we introduce the following approximating SDE $$\label{Approximation-Eq} dX^{(n)}(t)=b\left(t,X^{(n)}(t),\mu^{(n)}_{[t]_{n}}\right)dt+\sigma\left(t,X^{(n)}(t),\mu^{(n)}_{[t]_{n}}\right)dW(t)+\int_{U}h\left(t,X^{(n)}(t),\mu^{(n)}_{[t]_{n}},z\right)\tilde{N}(dt,dz),$$ with the initial value $X^{(n)}(0)=x_0$, where $\mu^{(n)}_{[t]_{n}}=\mathscr{L}_{X^{(n)}([t]_{n})}$. According to the above procedures and results for [\[Approximation-k\]](#Approximation-k){reference-type="eqref" reference="Approximation-k"}, we conclude that there exists a unique solution to [\[Approximation-Eq\]](#Approximation-Eq){reference-type="eqref" reference="Approximation-Eq"}. In fact, for each $n\geqslant 1$ and $t\in[0,T]$, we can always find some $k_\ast$ ($0\leqslant k_\ast \leqslant n-1$) such that $t\in (t_{k_\ast}^n,t_{{k_\ast}+1}^n]$.
Then, the solution to [\[Approximation-Eq\]](#Approximation-Eq){reference-type="eqref" reference="Approximation-Eq"} can be written as $$\begin{aligned} X^{(n)}(t)=&x_0+\sum_{k=0}^{k_\ast}\left[\int_{t_k^n}^{t_{k+1}^n \wedge t} b\left(s,X^{(n)}(s),\mu^{(n)}_{t_k^n}\right)ds+\int_{t_k^n}^{t_{k+1}^n \wedge t}\sigma\left(s,X^{(n)}(s),\mu^{(n)}_{t_k^n}\right)dW(s)+\int_{t_k^n}^{t_{k+1}^n \wedge t}\int_{U}h\left(s,X^{(n)}(s),\mu^{(n)}_{t_k^n},z\right)\tilde{N}(ds,dz)\right] \notag\\ =&X^{(n)}(t_1^n)+\sum_{k=1}^{k_\ast}\left[\int_{t_k^n}^{t_{k+1}^n \wedge t} b\left(s,X^{(n)}(s),\mu^{(n)}_{t_k^n}\right)ds+\int_{t_k^n}^{t_{k+1}^n \wedge t}\sigma\left(s,X^{(n)}(s),\mu^{(n)}_{t_k^n}\right)dW(s)+\int_{t_k^n}^{t_{k+1}^n \wedge t}\int_{U}h\left(s,X^{(n)}(s),\mu^{(n)}_{t_k^n},z\right)\tilde{N}(ds,dz)\right] \notag\\ &\cdots \;\;\;\; \cdots\notag\\ =&X^{(n)}(t_{k_\ast}^n)+\int_{t_{k_\ast}^n}^{t} b\left(s,X^{(n)}(s),\mu^{(n)}_{t_{k_\ast}^n}\right)ds+\int_{t_{k_\ast}^n}^{t}\sigma\left(s,X^{(n)}(s),\mu^{(n)}_{t_{k_\ast}^n}\right)dW(s)+\int_{t_{k_\ast}^n}^{t}\int_{U}h\left(s,X^{(n)}(s),\mu^{(n)}_{t_{k_\ast}^n},z\right)\tilde{N}(ds,dz),\notag\end{aligned}$$ and it is well defined based on the results for [\[Approximation-k\]](#Approximation-k){reference-type="eqref" reference="Approximation-k"} with $k=0,1,\ldots,k_\ast$. Moreover, we have the following estimate $$\mathbb{E}\left[\sup_{0\leqslant t\leqslant T}\left|X^{(n)}(t)\right|^r\right]=\mathbb{E}\left[\max_{0\leqslant k\leqslant n-1}\sup_{t_k^n\leqslant t\leqslant t_{k+1}^{n}}\left|X^{(n)}(t)\right|^r\right]\leqslant \sum_{k=0}^{n-1}\mathbb{E}\left[\sup_{t_k^n\leqslant t\leqslant t_{k+1}^{n}}\left|X^{(n)}(t)\right|^r\right]\leqslant C(n)<\infty.$$ Notice that, under Assumption **A6**, i.e., the initial data $x_0$ satisfies $\mathbb{E}|x_0|^r<\infty$ with $r\geqslant\max\{\frac{\kappa^2}{2},4\}\geqslant\kappa$, we conclude that $X^{(n)}\in L^{r}(\Omega;D([0,T];\mathbb{R}^d))\subset L^{\kappa}(\Omega;D([0,T];\mathbb{R}^d))$.
That is, the stochastic processes $\{X^{(n)}(t)\}_{n\geqslant1}$ given by [\[Approximation-Eq\]](#Approximation-Eq){reference-type="eqref" reference="Approximation-Eq"} form a sequence in $L^{\kappa}(\Omega;D([0,T];\mathbb{R}^d))$. Next, to show that this sequence is indeed Cauchy, we need the following two useful lemmas. **Lemma 1**. *(Uniform boundedness property) Under Assumptions **A4, A6** and **A7**, for any $T>0$, there is a positive constant $C_r$ (which is independent of $n$) such that $$\label{UBeq} \mathbb{E}\left[\sup_{0\leqslant t\leqslant T}\left|X^{(n)}(t)\right|^r\right]\leqslant C_r.$$* *Proof.* For $r\geqslant\max\{\frac{\kappa^2}{2},4\}$ and $t\in[0,T]$, applying Itô's formula to $|x|^r$ yields that, $$\begin{aligned} \label{ItoEq} &\left|X^{(n)}(t)\right|^r\notag\\ =&|x_0|^r+r\int_0^{t}\left|X^{(n)}(s)\right|^{r-2}\left\langle X^{(n)}(s), b\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n}\right)\right\rangle ds+\frac{r}{2}\int_0^{t}\left|X^{(n)}(s)\right|^{r-2} \left\|\sigma(s,X^{(n)}(s),\mu^{(n)}_{[s]_n})\right\|^2 ds\notag\\ &+\frac{r(r-2)}{2}\int_0^{t}\left|X^{(n)}(s)\right|^{r-4} \left\|(X^{(n)}(s))^T\sigma\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n}\right)\right\|^2 ds\notag\\ &+r\int_0^{t}\left|X^{(n)}(s)\right|^{r-2}\left\langle X^{(n)}(s), \sigma\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n}\right)dW(s)\right\rangle\notag\\ &+r\int_0^{t}\int_U\left|X^{(n)}(s)\right|^{r-2}\left\langle X^{(n)}(s), h\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n},z\right)\right\rangle \tilde{N}(ds,dz)\notag\\ &+\int_0^{t}\int_U\left[\left|X^{(n)}(s)+h\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n},z\right)\right|^{r}-\left|X^{(n)}(s)\right|^r-r\left|X^{(n)}(s)\right|^{r-2}\left\langle X^{(n)}(s), h\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n},z\right)\right\rangle\right] N(ds,dz).\end{aligned}$$ By virtue of Assumption **A4** and Young's inequality $$\label{YE} ab\leqslant\epsilon \frac{a^p}{p}+\epsilon^{-\frac{q}{p}}\frac{b^q}{q}\; \;\; \hbox{for all}\; \epsilon, a, b>0,\;\hbox{where}\; p>1,\; 
\frac{1}{p}+\frac{1}{q}=1,$$ one can estimate the second term of ([\[ItoEq\]](#ItoEq){reference-type="ref" reference="ItoEq"}) by $$\begin{aligned} \label{term2} &rK\int_0^{t}\left|X^{(n)}(s)\right|^{r-2}\left(1+\left|X^{(n)}(s)\right|^2+\mathbb{E}\left|X^{(n)}([s]_{n})\right|^2\right)ds\notag\\ \leqslant& (r-2)K\int_0^t\left|X^{(n)}(s)\right|^rds+2K\int_0^t\left(1+\left|X^{(n)}(s)\right|^2+\mathbb{E}\left|X^{(n)}([s]_{n})\right|^2\right)^{\frac{r}{2}}ds\notag\\ \leqslant& (r-2)K\int_0^t\left|X^{(n)}(s)\right|^rds+2\cdot3^{\frac{r}{2}-1}K\int_0^t\left(1+\left|X^{(n)}(s)\right|^r+\mathbb{E}\left|X^{(n)}([s]_{n})\right|^r\right)ds.\end{aligned}$$ Here, we take $\epsilon=1$, $p=\frac{r}{r-2}$, $q=\frac{r}{2}$; the last step uses the elementary inequality $|a+b+c|^{l}\leqslant 3^{l-1}(|a|^l+|b|^l+|c|^l)$ for all $a,b,c\in\mathbb{R}$ and $l\geqslant1$, together with the Hölder inequality. Analogously, the third and fourth terms of ([\[ItoEq\]](#ItoEq){reference-type="ref" reference="ItoEq"}) can be estimated by $$\label{term3} \frac{r-2}{2}K\int_0^t\left|X^{(n)}(s)\right|^rds+3^{\frac{r}{2}-1}K\int_0^t\left(1+\left|X^{(n)}(s)\right|^r+\mathbb{E}\left|X^{(n)}([s]_{n})\right|^r\right)ds,$$ and $$\label{term4} \frac{(r-2)^2}{2}K\int_0^t\left|X^{(n)}(s)\right|^rds+3^{\frac{r}{2}-1}(r-2)K\int_0^t\left(1+\left|X^{(n)}(s)\right|^r+\mathbb{E}\left|X^{(n)}([s]_{n})\right|^r\right)ds,$$ respectively.
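The weighted Young inequality above drives most of the term-by-term estimates in this section; a quick numerical sanity check of the inequality itself (illustrative only; the helper name is ours):

```python
import random

def young_holds(a, b, eps, p, tol=1e-9):
    """Weighted Young inequality: a*b <= eps*a^p/p + eps^(-q/p)*b^q/q,
    where q = p/(p-1) is the conjugate exponent, so 1/p + 1/q = 1."""
    q = p / (p - 1.0)
    return a * b <= eps * a ** p / p + eps ** (-q / p) * b ** q / q + tol

random.seed(0)
# random positive a, b, weights eps and exponents p > 1
all_hold = all(
    young_holds(random.uniform(1e-2, 10.0), random.uniform(1e-2, 10.0),
                random.uniform(1e-1, 5.0), random.uniform(1.1, 6.0))
    for _ in range(10_000)
)
```

The weighted form follows from the classical Young inequality applied to $\epsilon^{1/p}a$ and $\epsilon^{-1/p}b$, which is why it holds for every $\epsilon>0$.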
Further, since the map $y\mapsto|y|^r$ is of class $C^2$, the Taylor remainder formula gives, for any $y, b\in\mathbb{R}^d$, $$\label{RF} |y|^r-|b|^r-r|b|^{r-2}\langle b, y-b\rangle\leqslant C_1\int_0^1|y-b|^2|b+\theta(y-b)|^{r-2}d\theta\leqslant C_1(|b|^{r-2}|y-b|^2+|y-b|^r),$$ and thus the last term on the right-hand side of ([\[ItoEq\]](#ItoEq){reference-type="ref" reference="ItoEq"}) can be estimated by $$\label{termN} C_1\int_0^{t}\int_U\left(\left|X^{(n)}(s)\right|^{r-2}\left|h\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n},z\right)\right|^2+\left|h\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n},z\right)\right|^{r}\right)N(ds,dz).$$ Denote the estimate in ([\[termN\]](#termN){reference-type="ref" reference="termN"}) by $N_t$. Substituting ([\[term2\]](#term2){reference-type="ref" reference="term2"})-([\[termN\]](#termN){reference-type="ref" reference="termN"}) into ([\[ItoEq\]](#ItoEq){reference-type="ref" reference="ItoEq"}), taking suprema over $[0,u]$ for $u\in[0,T]$ and then taking expectations gives $$\begin{aligned} \label{Te} \mathbb{E}\sup_{0\leqslant t\leqslant u}|X^{(n)}(t)|^r\leqslant&\mathbb{E}|x_0|^r+\mathbb{E}\sup_{0\leqslant t\leqslant u}|M_t|+\mathbb{E}\sup_{0\leqslant t\leqslant u}|N_t|+\frac{(r+1)(r-2)}{2}K\int_0^u\mathbb{E}\left|X^{(n)}(s)\right|^rds\notag\\ &+3^{\frac{r}{2}-1}(r+1)K\int_0^u\left(1+\mathbb{E}\left|X^{(n)}(s)\right|^r+\mathbb{E}\left|X^{(n)}([s]_{n})\right|^r\right)ds,\end{aligned}$$ where $$M_t:=r\int_0^{t}\left|X^{(n)}(s)\right|^{r-2}\left\langle X^{(n)}(s), \sigma\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n}\right)dW(s)\right\rangle+r\int_0^{t}\int_U\left|X^{(n)}(s)\right|^{r-2}\left\langle X^{(n)}(s), h\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n},z\right)\right\rangle \tilde{N}(ds,dz)$$ is a local martingale.
On the one hand, by the Burkholder-Davis-Gundy inequality (see, e.g., Theorem 7.3 of Chapter 1 in [@Mao2007] or Lemma 2.1 of [@Dareiotis2016]), there exists a constant $C_2>0$ such that $$\begin{aligned} \mathbb{E}\sup_{0\leqslant t\leqslant u}|M_t|&\leqslant C_2r\mathbb{E}\left[\int_0^{u}\left|X^{(n)}(s)\right|^{2r-2}\left\|\sigma\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n}\right)\right\|^2ds+\int_0^{u}\int_U\left|X^{(n)}(s)\right|^{2r-2}\left|h\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n},z\right)\right|^2\nu(dz)ds\right]^{\frac{1}{2}}\notag\\ &\leqslant C_2r\mathbb{E}\left[\sup_{0\leqslant t\leqslant u}\left|X^{(n)}(t)\right|^{r-1}\left(\int_0^{u}\left\|\sigma\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n}\right)\right\|^2ds+\int_0^{u}\int_U\left|h\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n},z\right)\right|^2\nu(dz)ds\right)^{\frac{1}{2}}\right],\notag\end{aligned}$$ which on the application of Assumption **A4** gives $$\mathbb{E}\sup_{0\leqslant t\leqslant u}|M_t|\leqslant C_2r\mathbb{E}\left[\sup_{0\leqslant t\leqslant u}\left|X^{(n)}(t)\right|^{r-1}\left(2K\int_0^u\left(1+\left|X^{(n)}(s)\right|^2+\mathbb{E}\left|X^{(n)}([s]_n)\right|^2\right)ds\right)^{\frac{1}{2}}\right].$$ Then, due to Young's inequality given in ([\[YE\]](#YE){reference-type="ref" reference="YE"}) (letting $\epsilon=\frac{1}{2C_2(r-1)}$, $p=\frac{r}{r-1}$, $q=r$), the Hölder inequality, the elementary inequality and the Lyapunov inequality, one further has $$\begin{aligned} \label{Me} \mathbb{E}\sup_{0\leqslant t\leqslant u}|M_t| &\leqslant \frac{1}{2}\mathbb{E}\sup_{0\leqslant t\leqslant u}\left|X^{(n)}(t)\right|^r+C_2^r(2(r-1))^{r-1}(2K)^{\frac{r}{2}}\mathbb{E}\left[\int_0^u\left(1+\left|X^{(n)}(s)\right|^2+\mathbb{E}\left|X^{(n)}([s]_n)\right|^2\right)ds\right]^{\frac{r}{2}}\notag\\ &\leqslant \frac{1}{2}\mathbb{E}\sup_{0\leqslant t\leqslant
u}\left|X^{(n)}(t)\right|^r+C_2^r(2(r-1))^{r-1}(2K)^{\frac{r}{2}}u^{\frac{r}{2}-1}\mathbb{E}\left[\int_0^u\left(1+\left|X^{(n)}(s)\right|^2+\mathbb{E}\left|X^{(n)}([s]_n)\right|^2\right)^{\frac{r}{2}}ds\right]\notag\\ &\leqslant \frac{1}{2}\mathbb{E}\sup_{0\leqslant t\leqslant u}\left|X^{(n)}(t)\right|^r+C_2^r(2(r-1))^{r-1}(2K)^{\frac{r}{2}}(3u)^{\frac{r}{2}-1}\mathbb{E}\int_0^u\left(1+\left|X^{(n)}(s)\right|^r+\mathbb{E}\left|X^{(n)}([s]_n)\right|^r\right)ds.\end{aligned}$$ On the other hand, by Assumptions **A4**$\&$**A7**, Young's inequality ([\[YE\]](#YE){reference-type="ref" reference="YE"}) (with $\epsilon=1$, $p=\frac{r}{r-2}$, $q=\frac{r}{2}$), elementary inequality and Lyapunov inequality, $$\begin{aligned} \label{Ne} \mathbb{E}\sup_{0\leqslant t\leqslant u}|N_t| &\leqslant C_1\mathbb{E}\int_0^{u}\int_U\left(\left|X^{(n)}(s)\right|^{r-2}\left|h\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n},z\right)\right|^2+\left|h\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n},z\right)\right|^{r}\right)\nu(dz)ds\notag\\ &\leqslant C_1\mathbb{E}\int_0^{u}\left[K\left|X^{(n)}(s)\right|^{r-2}\left(1+\left|X^{(n)}(s)\right|^2+\mathbb{E}\left|X^{(n)}([s]_{n})\right|^2\right)+K_2\left(1+\left|X^{(n)}(s)\right|^{r}+\left(\mathbb{E}\left|X^{(n)}([s]_{n})\right|^2\right)^{\frac{r}{2}}\right)\right]ds\notag\\ &\leqslant C_1K\frac{r-2}{r}\int_0^u\mathbb{E}\left|X^{(n)}(s)\right|^rds+C_1\cdot\left(\frac{2K}{r}3^{\frac{r}{2}-1}+K_2\right)\int_0^u\left(1+\mathbb{E}\left|X^{(n)}(s)\right|^r+\mathbb{E}\left|X^{(n)}([s]_{n})\right|^r\right)ds.\end{aligned}$$ As a result, combining all the above estimates ([\[Me\]](#Me){reference-type="ref" reference="Me"})-([\[Ne\]](#Ne){reference-type="ref" reference="Ne"}) and applying the Grönwall inequality, we get from ([\[Te\]](#Te){reference-type="ref" reference="Te"}) that $$\mathbb{E}\sup_{0\leqslant t\leqslant 
T}\left|X^{(n)}(t)\right|^r\leqslant2(1+\mathbb{E}|x_0|^r)e^{\left[K(r-2)\frac{r^2+r+2C_1}{r}+4\cdot3^{\frac{r}{2}-1}(r+1)+C_2^{r}2^{r+1}(r-1)^{r-1}(2K)^{\frac{r}{2}}(3T)^{\frac{r}{2}-1}+4C_1\cdot\left(\frac{2K}{r}3^{\frac{r}{2}-1}+K_2\right)\right]T}\leqslant C_r.$$ The positive constant $C_r$ depends on $r$, $T$, $K$, $K_2$ and the initial datum $x_0$, but is independent of $n$. The proof is therefore complete. ◻ **Lemma 2**. *(Time Hölder continuity) Let Assumptions **A4-A7** hold. For any $x_0\in L^r(\Omega;\mathbb{R}^d)$ with $r\geqslant\kappa^2/2$, there exists a positive constant $C_{\kappa}$ such that, for any $0\leqslant s\leqslant t\leqslant T$ with $|t-s|\leqslant1$, $$\sup_{n\geqslant1}\mathbb{E}\left[\left|X^{(n)}(t)-X^{(n)}(s)\right|^{\kappa}\right]\leqslant C_{\kappa}|t-s|.$$* *Proof.* It follows from ([\[Approximation-Eq\]](#Approximation-Eq){reference-type="ref" reference="Approximation-Eq"}) that $$X^{(n)}(t)-X^{(n)}(s)=\int_s^tb\left(u,X^{(n)}(u),\mu^{(n)}_{[u]_{n}}\right)du+\int_s^t\sigma\left(u,X^{(n)}(u),\mu^{(n)}_{[u]_{n}}\right)dW(u)+\int_s^t\int_{U}h\left(u,X^{(n)}(u),\mu^{(n)}_{[u]_{n}},z\right)\tilde{N}(du,dz).$$ Raising both sides to the power $\kappa$, taking expectations and applying the elementary inequality $|a+b+c|^{\kappa}\leqslant3^{\kappa-1}(|a|^{\kappa}+|b|^{\kappa}+|c|^{\kappa})$, one gets $$\begin{aligned} \label{timeDifference} \mathbb{E}\left|X^{(n)}(t)-X^{(n)}(s)\right|^{\kappa}\leqslant&3^{\kappa-1}\mathbb{E}\left[\left|\int_s^tb\left(u,X^{(n)}(u),\mu^{(n)}_{[u]_{n}}\right)du\right|^{\kappa}\right]+3^{\kappa-1}\mathbb{E}\left[\left|\int_s^t\sigma\left(u,X^{(n)}(u),\mu^{(n)}_{[u]_{n}}\right)dW(u)\right|^{\kappa}\right]\notag\\ &+3^{\kappa-1}\mathbb{E}\left[\left|\int_s^t\int_{U}h\left(u,X^{(n)}(u),\mu^{(n)}_{[u]_{n}},z\right)\tilde{N}(du,dz)\right|^{\kappa}\right]:=B_1+B_2+B_3.\end{aligned}$$ We now estimate $B_1$, $B_2$ and $B_3$ separately; for readability, we only present the core steps of each estimate.
By Hölder inequality, Assumption **A5**, Lyapunov inequality and estimate ([\[UBeq\]](#UBeq){reference-type="ref" reference="UBeq"}), we have $$\begin{aligned} B_1\leqslant& 3^{\kappa-1}(t-s)^{\kappa-1}\int_s^t\mathbb{E}\left[\left|b\left(u,X^{(n)}(u),\mu^{(n)}_{[u]_{n}}\right)\right|^{\kappa}\right]du\leqslant3^{\frac{3\kappa}{2}-2}K_1^{\frac{\kappa}{2}}(t-s)^{\kappa-1}\int_s^t\left(1+\mathbb{E}\left|X^{(n)}(u)\right|^{\frac{\kappa^2}{2}}+\left(\mathbb{E}\left|X^{(n)}([u]_n)\right|^2\right)^{\frac{\kappa^2}{4}}\right)du\notag\\ \leqslant&2\cdot3^{\frac{3\kappa}{2}-2}K_1^{\frac{\kappa}{2}}(t-s)^{\kappa-1}\int_s^t\left(1+\mathbb{E}\sup_{0\leqslant u\leqslant T}\left|X^{(n)}(u)\right|^{\frac{\kappa^2}{2}}\right)du\leqslant C_{\kappa}(t-s)^{\kappa}.\notag\end{aligned}$$ By Burkholder-Davis-Gundy inequality, Hölder inequality, Assumption **A4** and estimate ([\[UBeq\]](#UBeq){reference-type="ref" reference="UBeq"}), we get $$\begin{aligned} B_2\leqslant& 3^{\kappa-1}M_{\kappa}\mathbb{E}\left[\left|\int_s^t\left\|\sigma\left(u,X^{(n)}(u),\mu^{(n)}_{[u]_{n}}\right)\right\|^2du\right|^{\frac{\kappa}{2}}\right]\leqslant3^{\kappa-1}M_{\kappa}(t-s)^{\frac{\kappa-2}{2}}\int_s^t\mathbb{E}\left[\left\|\sigma\left(u,X^{(n)}(u),\mu^{(n)}_{[u]_{n}}\right)\right\|^{\kappa}\right]du\notag\\ \leqslant&2\cdot3^{\frac{3\kappa}{2}-2}M_{\kappa}K^{\frac{\kappa}{2}}(t-s)^{\frac{\kappa-2}{2}}\int_s^t\left(1+\mathbb{E}\sup_{0\leqslant u\leqslant T}\left|X^{(n)}(u)\right|^{\kappa}\right)du\leqslant C_{\kappa}(t-s)^{\frac{\kappa}{2}},\notag\end{aligned}$$ where $M_{\kappa}=[\kappa^{\kappa+1}/2(\kappa-1)^{\kappa-1}]^{\frac{\kappa}{2}}$. 
Applying Kunita's first inequality (see, e.g., Theorem 4.4.23 of [@Applebaum2009] or Lemma 2.1 of [@Dareiotis2016]), the Hölder inequality, Assumptions **A4$\&$A7** and estimate ([\[UBeq\]](#UBeq){reference-type="ref" reference="UBeq"}), we obtain $$\begin{aligned} B_3\leqslant& 3^{\kappa-1}D\mathbb{E}\left[\left|\int_s^t\int_U\left|h\left(u,X^{(n)}(u),\mu^{(n)}_{[u]_{n}},z\right)\right|^2\nu(dz)du\right|^{\frac{\kappa}{2}}\right] + 3^{\kappa-1}D\int_s^t\mathbb{E}\left[\int_U\left|h\left(u,X^{(n)}(u),\mu^{(n)}_{[u]_{n}},z\right)\right|^{\kappa}\nu(dz)\right]du\notag\\ \leqslant& 3^{\kappa-1}D(t-s)^{\frac{\kappa-2}{2}}\int_s^t\mathbb{E}\left[\left(\int_U\left|h\left(u,X^{(n)}(u),\mu^{(n)}_{[u]_{n}},z\right)\right|^2\nu(dz)\right)^{\frac{\kappa}{2}}\right]du + 3^{\kappa-1}D\int_s^t\mathbb{E}\left[\int_U\left|h\left(u,X^{(n)}(u),\mu^{(n)}_{[u]_{n}},z\right)\right|^{\kappa}\nu(dz)\right]du\notag\\ \leqslant& 3^{\kappa-1}DK^{\frac{\kappa}{2}}(t-s)^{\frac{\kappa-2}{2}}\int_s^t\mathbb{E}\left[\left(1+\left|X^{(n)}(u)\right|^2+\mathbb{E}\left|X^{(n)}([u]_n)\right|^2\right)^{\frac{\kappa}{2}}\right]du + 3^{\kappa-1}DK_3\int_s^t\mathbb{E}\left[\left(1+\left|X^{(n)}(u)\right|^{\kappa}+\left(\mathbb{E}\left|X^{(n)}([u]_n)\right|^2\right)^{\frac{\kappa}{2}}\right)\right]du\notag\\ \leqslant&2\cdot3^{\frac{3\kappa}{2}-2}DK^{\frac{\kappa}{2}}(t-s)^{\frac{\kappa-2}{2}}\int_s^t\left(1+\mathbb{E}\sup_{0\leqslant u\leqslant T}\left|X^{(n)}(u)\right|^{\kappa}\right)du+2\cdot3^{\kappa-1}DK_3\int_s^t\left(1+\mathbb{E}\sup_{0\leqslant u\leqslant T}\left|X^{(n)}(u)\right|^{\kappa}\right)du\notag\\ \leqslant& C_{\kappa}[(t-s)+(t-s)^{\frac{\kappa}{2}}],\notag\end{aligned}$$ where $D$ is a positive constant depending on $\kappa$. Consequently, the desired assertion follows by substituting the above estimates of $B_1$, $B_2$ and $B_3$ into ([\[timeDifference\]](#timeDifference){reference-type="ref" reference="timeDifference"}) and taking the supremum over $n\geqslant1$.
◻ With Lemmas [Lemma 1](#UBP){reference-type="ref" reference="UBP"} and [Lemma 2](#HolderC){reference-type="ref" reference="HolderC"} at hand, we now show the following lemma. **Lemma 3**. *(Cauchy sequences) The sequence $\{X^{(n)}(t)\}_{n\geqslant1}$ given by ([\[Approximation-Eq\]](#Approximation-Eq){reference-type="ref" reference="Approximation-Eq"}) is a Cauchy sequence in $L^{\kappa}(\Omega;D([0,T];\mathbb{R}^d))$ in the following sense: for any $n,m\geqslant1$, $$\label{Cauchy-se} \left\|X^{(n)}-X^{(m)}\right\|_{L^{\kappa}}=\left(\mathbb{E}\left[\sup_{0\leqslant t\leqslant T}\left|X^{(n)}(t)-X^{(m)}(t)\right|^{\kappa}\right]\right)^{\frac{1}{\kappa}}\to0, \;\; \hbox{as}\; n, m\to\infty.$$* *Proof.* Note that for $t\in[0,T]$, $$\begin{aligned} X^{(n)}(t)-X^{(m)}(t)=&\int_0^t\left[b\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_{n}}\right)-b\left(s,X^{(m)}(s),\mu^{(m)}_{[s]_{m}}\right)\right]ds+\int_0^t\left[\sigma\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_{n}}\right)-\sigma\left(s,X^{(m)}(s),\mu^{(m)}_{[s]_{m}}\right)\right]dW(s)\notag\\ &+\int_0^t\int_{U}\left[h\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_{n}},z\right)-h\left(s,X^{(m)}(s),\mu^{(m)}_{[s]_{m}},z\right)\right]\tilde{N}(ds,dz).\notag\end{aligned}$$ For each $R>0$, we define the stopping time $$\tau_R:=\inf\left\{t\in[0,T]: \left|X^{(n)}(t)\right|\vee\left|X^{(m)}(t)\right|> R\right\}.$$ It should be pointed out that the stopping time technique can be applied here because ([\[Approximation-Eq\]](#Approximation-Eq){reference-type="ref" reference="Approximation-Eq"}) is a classical (not distribution-dependent) SDE.
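On a discrete time grid, the localization time $\tau_R$ is simply the first grid time at which the path leaves the ball of radius $R$; before $\tau_R$ both processes stay inside that ball, which is what lets the local constants $L_R$ enter the estimates below. A minimal sketch (the function name and the toy path are ours):

```python
import numpy as np

def first_exit_time(path, R, dt):
    """tau_R on a grid: the first grid time i*dt with |path[i]| > R,
    or the terminal grid time if the path never leaves [-R, R]."""
    idx = np.flatnonzero(np.abs(path) > R)
    return (idx[0] if idx.size else len(path) - 1) * dt

# toy path on a grid with dt = 0.5: |path| first exceeds R = 2 at index 2
tau = first_exit_time(np.array([0.0, 1.5, 3.0, 1.0]), R=2.0, dt=0.5)
```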
Then, decomposing the expectation over the events $\{\tau_R>T\}$ and $\{\tau_R\leqslant T\}$, we arrive at $$\label{JJ} \mathbb{E}\left[\sup_{0\leqslant t\leqslant T}\left|X^{(n)}(t)-X^{(m)}(t)\right|^{\kappa}\right]=\mathbb{E}\Big[\sup_{0\leqslant t\leqslant T}\left|X^{(n)}(t)-X^{(m)}(t)\right|^{\kappa}\mathbb{I}_{\{\tau_R>T\}}\Big]+\mathbb{E}\Big[\sup_{0\leqslant t\leqslant T}\left|X^{(n)}(t)-X^{(m)}(t)\right|^{\kappa}\mathbb{I}_{\{\tau_R\leqslant T\}}\Big]:=J_1+J_2,$$ where $\mathbb{I}_A$ is the indicator function of a set $A$. We next estimate each summand on the right-hand side of the equation above. \(1\) Estimate of the term $J_1$. Notice that $$\label{J_1} J_1 \leqslant\mathbb{E}\left[\sup_{0\leqslant t\leqslant T}\left|X^{(n)}(t\wedge\tau_R)-X^{(m)}(t\wedge\tau_R)\right|^{\kappa}\right].$$ By applying the Itô formula, we calculate that $$\left|X^{(n)}(t\wedge\tau_R)-X^{(m)}(t\wedge\tau_R)\right|^{\kappa}=J_{1,R}(t)+J_{2,R}(t)+J_{3,R}(t)+J_{4,R}(t)+J_{5,R}(t)+J_{6,R}(t),$$ where $$\begin{aligned} J_{1,R}(t)=&\kappa\int_0^{t\wedge\tau_R}\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa-2}\left\langle X^{(n)}(s)-X^{(m)}(s), b\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_{n}}\right)-b\left(s,X^{(m)}(s),\mu^{(m)}_{[s]_{m}}\right)\right\rangle ds,\notag\\ J_{2,R}(t)=&\frac{\kappa}{2}\int_0^{t\wedge\tau_R}\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa-2}\left\| \sigma\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_{n}}\right)-\sigma\left(s,X^{(m)}(s),\mu^{(m)}_{[s]_{m}}\right) \right\|^2 ds,\notag\\ J_{3,R}(t)=&\frac{\kappa(\kappa-2)}{2}\int_0^{t\wedge\tau_R}\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa-4}\left\| \left(X^{(n)}(s)-X^{(m)}(s)\right)^T\left(\sigma\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_{n}}\right)-\sigma\left(s,X^{(m)}(s),\mu^{(m)}_{[s]_{m}}\right)\right) \right\|^2 ds,\notag\\
J_{4,R}(t)=&\int_0^{t\wedge\tau_R}\int_U\bigg[\left|X^{(n)}(s)-X^{(m)}(s)+h\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n},z\right)-h\left(s,X^{(m)}(s),\mu^{(m)}_{[s]_{m}},z\right)\right|^{\kappa}-\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa}\notag\\ &-\kappa\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa-2}\left\langle X^{(n)}(s)-X^{(m)}(s), h\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n},z\right)-h\left(s,X^{(m)}(s),\mu^{(m)}_{[s]_{m}},z\right)\right\rangle \bigg] N(ds,dz),\notag\\ J_{5,R}(t)=&\kappa\int_0^{t\wedge\tau_R}\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa-2}\left\langle X^{(n)}(s)-X^{(m)}(s),\sigma\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_{n}}\right)-\sigma\left(s,X^{(m)}(s),\mu^{(m)}_{[s]_{m}}\right)dW(s)\right\rangle,\notag\\ J_{6,R}(t)=&\kappa\int_0^{t\wedge\tau_R}\int_U\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa-2}\left\langle X^{(n)}(s)-X^{(m)}(s),h\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_{n}},z\right)-h\left(s,X^{(m)}(s),\mu^{(m)}_{[s]_{m}},z\right)\right\rangle\tilde{N}(ds,dz).\notag\end{aligned}$$ In order to take the suprema over time and then expectations, we need to estimate $\mathbb{E}\big[\sup_{0\leqslant t\leqslant u}J_{i,R}(t)\big], i=1, \cdots, 6$, respectively. Since $J_{i,R}$, $i=1,2,3$, are standard Lebesgue integrals, these three terms can be estimated similarly.
In fact, for any $u\in[0,T]$, by virtue of Assumptions **A1-A2**, we obtain $$\begin{aligned} \mathbb{E}\left[\sup_{0\leqslant t\leqslant u}J_{1,R}(t)\right]\leqslant&\kappa\mathbb{E}\left[\sup_{0\leqslant t\leqslant u}\int_0^{t\wedge\tau_R}\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa-2}\left\langle X^{(n)}(s)-X^{(m)}(s), b\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_{n}}\right)-b\left(s,X^{(m)}(s),\mu^{(n)}_{[s]_{n}}\right)\right\rangle ds\right]\notag\\ &+\kappa\mathbb{E}\left[\int_0^{u\wedge\tau_R}\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa-1}\cdot\left|b\left(s,X^{(m)}(s),\mu^{(n)}_{[s]_{n}}\right)-b\left(s,X^{(m)}(s),\mu^{(m)}_{[s]_{m}}\right)\right|ds\right]\notag\\ \leqslant&\kappa L_R\mathbb{E}\left[\int_0^{u\wedge\tau_R}\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa}ds\right]+\kappa\sqrt{L}\mathbb{E}\left[\int_0^{u\wedge\tau_R}\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa-1}\cdot W_2\left(\mu^{(n)}_{[s]_{n}},\mu^{(m)}_{[s]_{m}}\right)ds\right],\notag\end{aligned}$$ which on the application of Young's inequality ([\[YE\]](#YE){reference-type="ref" reference="YE"}) (with $\epsilon=1$, $p=\frac{\kappa}{\kappa-1}$, $q=\kappa$) yields $$\begin{aligned} \label{J_{1,R}} \mathbb{E}\left[\sup_{0\leqslant t\leqslant u}J_{1,R}(t)\right]\leqslant&\left(\kappa L_R+(\kappa-1)\sqrt{L}\right)\mathbb{E}\left[\int_0^{u\wedge\tau_R}\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa}ds\right]\notag\\ &+3^{\kappa-1}\sqrt{L}\int_0^{u\wedge\tau_R}\left[W_2^{\kappa}(\mu^{(n)}_{[s]_{n}},\mu^{(n)}_{s})+W_2^{\kappa}\left(\mu^{(n)}_{s},\mu^{(m)}_{s}\right)+W_2^{\kappa}\left(\mu^{(m)}_{s},\mu^{(m)}_{[s]_{m}}\right)\right]ds\notag\\ \leqslant&\left[\kappa L_R+\sqrt{L}\left(\kappa-1+3^{\kappa-1}\right)\right]\int_0^{u\wedge\tau_R}\mathbb{E}\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa}ds\notag\\ &+3^{\kappa-1}\sqrt{L}\int_0^{u\wedge\tau_R}\left(\mathbb{E}\left|X^{(n)}(s)-X^{(n)}([s]_{n})\right|^{\kappa}+\mathbb{E}\left|X^{(m)}(s)-X^{(m)}([s]_{m})\right|^{\kappa}\right)ds.\end{aligned}$$ Analogously, by Assumptions 
**A1-A2** and Young's inequality ([\[YE\]](#YE){reference-type="ref" reference="YE"}) (with $\epsilon=1$, $p=\frac{\kappa}{\kappa-2}$, $q=\frac{\kappa}{2}$) we can also obtain that $$\begin{aligned} \label{J_{2,R}} \mathbb{E}\left[\sup_{0\leqslant t\leqslant u}J_{2,R}(t)\right] %\leqslant&\kappa\mathbb{E}\left[\int_0^{u\wedge\tau_R}\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa-2}\left\|\sigma\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_{n}}\right)-\sigma\left(s,X^{(m)}(s),\mu^{(n)}_{[s]_{n}}\right)\right\|^2 ds\right]\notag\\ %&+\kappa\mathbb{E}\left[\int_0^{u\wedge\tau_R}\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa-2}\left\|\sigma\left(s,X^{(m)}(s-),\mu^{(n)}_{[s]_{n}}\right)-\sigma\left(s,X^{(m)}(s),\mu^{(m)}_{[s]_{m}}\right)\right\|^2 ds\right]\notag\\ %\leqslant&\kappa L_R\mathbb{E}\left[\int_0^{u\wedge\tau_R}\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa}ds\right]+\kappa L\mathbb{E}\int_0^{u\wedge\tau_R}\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa-2}W_2^2\left(\mu^{(n)}_{[s]_{n}},\mu^{(m)}_{[s]_m}\right)ds\notag\\ \leqslant&\left[\kappa L_R+L\left(\kappa-2+2\cdot3^{\kappa-1}\right)\right]\int_0^{u\wedge\tau_R}\mathbb{E}\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa}ds\notag\\ &+2L\cdot3^{\kappa-1}\int_0^{u\wedge\tau_R}\left(\mathbb{E}\left|X^{(n)}(s)-X^{(n)}([s]_{n})\right|^{\kappa}+\mathbb{E}\left|X^{(m)}(s)-X^{(m)}([s]_{m})\right|^{\kappa}\right)ds,\end{aligned}$$ and $$\begin{aligned} \label{J_{3,R}} \mathbb{E}\left[\sup_{0\leqslant t\leqslant u}J_{3,R}(t)\right] %\leqslant&\kappa(\kappa-2)\mathbb{E}\left[\int_0^{u\wedge\tau_R}\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa-2}\left\|\sigma\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_{n}}\right)-\sigma\left(s,X^{(m)}(s),\mu^{(n)}_{[s]_{n}}\right)\right\|^2 ds\right]\notag\\ %&+\kappa(\kappa-2)\mathbb{E}\left[\int_0^{u\wedge\tau_R}\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa-2}\left\|\sigma\left(s,X^{(m)}(s),\mu^{(n)}_{[s]_{n}}\right)-\sigma\left(s,X^{(m)}(s),\mu^{(m)}_{[s]_{m}}\right)\right\|^2 ds\right]\notag\\ \leqslant&(\kappa-2)\left[\kappa 
L_R+\left(\kappa-2+2\cdot3^{\kappa-1}\right)L\right]\int_0^{u\wedge\tau_R}\mathbb{E}\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa}ds\notag\\ &+2L\cdot3^{\kappa-1}(\kappa-2)\int_0^{u\wedge\tau_R}\left(\mathbb{E}\left|X^{(n)}(s)-X^{(n)}([s]_{n})\right|^{\kappa}+\mathbb{E}\left|X^{(m)}(s)-X^{(m)}([s]_{m})\right|^{\kappa}\right)ds.\end{aligned}$$ As for the last three terms, we first use the remainder formula in ([\[RF\]](#RF){reference-type="ref" reference="RF"}) and Assumption **A7** to get $$\begin{aligned} \mathbb{E}\left[\sup_{0\leqslant t\leqslant u}J_{4,R}(t)\right] \leqslant& C_1\mathbb{E}\int_0^{u\wedge\tau_R}\int_U\bigg(\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa-2}\left|h\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n},z\right)-h\left(s,X^{(m)}(s),\mu^{(m)}_{[s]_{m}},z\right)\right|^2\notag\\ &+\left|h\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n},z\right)-h\left(s,X^{(m)}(s),\mu^{(m)}_{[s]_{m}},z\right)\right|^{\kappa}\bigg)\nu(dz)ds.\notag\\ \leqslant&C_1\left(L_R+\frac{(\kappa-2+2\cdot3^{\kappa-1})L}{\kappa}\right)\int_0^{u\wedge\tau_R}\mathbb{E}\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa}ds\notag\\ &+2C_1L\cdot\frac{3^{\kappa-1}}{\kappa}\int_0^{u\wedge\tau_R}\left(\mathbb{E}\left|X^{(n)}(s)-X^{(n)}([s]_{n})\right|^{\kappa}+\mathbb{E}\left|X^{(m)}(s)-X^{(m)}([s]_{m})\right|^{\kappa}\right)ds\notag\\ &+C_1\mathbb{E}\int_0^{u\wedge\tau_R}\int_U\left|h\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n},z\right)-h\left(s,X^{(m)}(s),\mu^{(n)}_{[s]_{n}},z\right)\right|^{\kappa}\nu(dz)ds \notag\\ &+C_1\mathbb{E}\int_0^{u\wedge\tau_R}\int_U\left|h\left(s,X^{(m)}(s),\mu^{(n)}_{[s]_n},z\right)-h\left(s,X^{(m)}(s),\mu^{(m)}_{[s]_{m}},z\right)\right|^{\kappa}\nu(dz)ds\notag\\ \leqslant &C_1\left(L_R+\frac{(\kappa-2+2\cdot3^{\kappa-1})L}{\kappa}+L_R^{'}+L^{'}3^{\kappa-1}\right)\int_0^{u\wedge\tau_R}\mathbb{E}\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa}ds\notag\\ 
&+3^{\kappa-1}C_1\left(\frac{2L}{\kappa}+L^{'}\right)\int_0^{u\wedge\tau_R}\left(\mathbb{E}\left|X^{(n)}(s)-X^{(n)}([s]_{n})\right|^{\kappa}+\mathbb{E}\left|X^{(m)}(s)-X^{(m)}([s]_{m})\right|^{\kappa}\right)ds.\end{aligned}$$ We exploit the Burkholder-Davis-Gundy inequality and Assumptions **A2-A3** to obtain $$\begin{aligned} \mathbb{E}\left[\sup_{0\leqslant t\leqslant u}J_{5,R}(t)\right]\leqslant&\kappa\cdot\sqrt{32}\mathbb{E}\left(\int_0^{u\wedge\tau_R}\left|X^{(n)}(s)-X^{(m)}(s)\right|^{2\kappa-2}\cdot\left\|\sigma\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_{n}}\right)-\sigma\left(s,X^{(m)}(s),\mu^{(m)}_{[s]_{m}}\right)\right\|^2ds\right)^{\frac{1}{2}}\notag\\ \leqslant&6\kappa\mathbb{E}\bigg[\sup_{0\leqslant t\leqslant u}\left|X^{(n)}(t\wedge\tau_R)-X^{(m)}(t\wedge\tau_R)\right|^{\kappa-1}\cdot\bigg(\int_0^{u\wedge\tau_R}2\left\|\sigma\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_{n}}\right)-\sigma\left(s,X^{(m)}(s),\mu^{(n)}_{[s]_{n}}\right)\right\|^2\notag\\ &+2\left\|\sigma\left(s,X^{(m)}(s),\mu^{(n)}_{[s]_{n}}\right)-\sigma\left(s,X^{(m)}(s),\mu^{(m)}_{[s]_{m}}\right)\right\|^2ds\bigg)^{\frac{1}{2}}\bigg]\notag\\ \leqslant&6\kappa\mathbb{E}\left[\sup_{0\leqslant t\leqslant u}\left|X^{(n)}(t\wedge\tau_R)-X^{(m)}(t\wedge\tau_R)\right|^{\kappa-1}\cdot\left(\int_0^{u\wedge\tau_R}2L_R\left|X^{(n)}(s)-X^{(m)}(s)\right|^2+2LW_2^2\left(\mu^{(n)}_{[s]_{n}},\mu^{(m)}_{[s]_{m}}\right)ds\right)^{\frac{1}{2}}\right].\notag\end{aligned}$$ Then, owing to Young's inequality ([\[YE\]](#YE){reference-type="ref" reference="YE"}) (with $\epsilon=\frac{1}{24(\kappa-1)}$, $p=\frac{\kappa}{\kappa-1}$, $q=\kappa$), the Hölder inequality, the elementary inequality and the Lyapunov inequality, we further have $$\begin{aligned} \label{J_{5,R}} &\mathbb{E}\left[\sup_{0\leqslant t\leqslant u}J_{5,R}(t)\right]\notag\\ \leqslant& \frac{1}{4}\mathbb{E}\left[\sup_{0\leqslant t\leqslant u}\left|X^{(n)}(t\wedge\tau_R)-X^{(m)}(t\wedge\tau_R)\right|^{\kappa}\right]
+6^{\kappa}\cdot2^{\frac{\kappa}{2}}(4(\kappa-1))^{\kappa-1}\mathbb{E}\left(\int_0^{u\wedge\tau_R}L_R\left|X^{(n)}(s)-X^{(m)}(s)\right|^2+LW_2^2\left(\mu^{(n)}_{[s]_{n}},\mu^{(m)}_{[s]_{m}}\right)ds\right)^{\frac{\kappa}{2}}\notag\\ \leqslant &\frac{1}{4}\mathbb{E}\left[\sup_{0\leqslant t\leqslant u}\left|X^{(n)}(t\wedge\tau_R)-X^{(m)}(t\wedge\tau_R)\right|^{\kappa}\right] +6^{\kappa}\cdot2^{\frac{\kappa}{2}}(4(\kappa-1))^{\kappa-1}u^{\frac{\kappa}{2}-1}\mathbb{E}\left[\int_0^{u\wedge\tau_R}\left(L_R\left|X^{(n)}(s)-X^{(m)}(s)\right|^2+LW_2^2\left(\mu^{(n)}_{[s]_{n}},\mu^{(m)}_{[s]_{m}}\right)\right)^{\frac{\kappa}{2}}ds\right]\notag\\ \leqslant & \frac{1}{4}\mathbb{E}\left[\sup_{0\leqslant t\leqslant u}\left|X^{(n)}(t\wedge\tau_R)-X^{(m)}(t\wedge\tau_R)\right|^{\kappa}\right] +6^{\kappa}\cdot2^{\frac{\kappa}{2}}(4(\kappa-1))^{\kappa-1}u^{\frac{\kappa}{2}-1}\left(L_R^{\frac{\kappa}{2}}+L^{\frac{\kappa}{2}}3^{\kappa-1}\right)\int_0^{u\wedge\tau_R}\mathbb{E}\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa}ds\notag\\ &+6^{\kappa}\cdot(2L)^{\frac{\kappa}{2}}(12(\kappa-1))^{\kappa-1}u^{\frac{\kappa}{2}-1}\int_0^{u\wedge\tau_R}\left(\mathbb{E}\left|X^{(n)}(s)-X^{(n)}([s]_{n})\right|^{\kappa}+\mathbb{E}\left|X^{(m)}(s)-X^{(m)}([s]_{m})\right|^{\kappa}\right)ds.\end{aligned}$$ Finally, we apply Kunita's first inequality (see, e.g., Lemma 2.1 of [@Dareiotis2016]) and Young's inequality ([\[YE\]](#YE){reference-type="ref" reference="YE"}) (with $\epsilon=\frac{1}{4D(\kappa-1)}$, $p=\frac{\kappa}{\kappa-1}$, $q=\kappa$) to obtain $$\begin{aligned} \label{J_{6,R}} &\mathbb{E}\left[\sup_{0\leqslant t\leqslant u}J_{6,R}(t)\right]\notag\\ \leqslant&\kappa D\mathbb{E}\left(\int_0^{u\wedge\tau_R}\int_U\left|X^{(n)}(s)-X^{(m)}(s)\right|^{2\kappa-2}\cdot\left|h\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_{n}},z\right)-h\left(s,X^{(m)}(s),\mu^{(m)}_{[s]_{m}},z\right)\right|^2\nu(dz)ds\right)^{\frac{1}{2}}\notag\\ %\leqslant&\frac{1}{4}\mathbb{E}\Big[\sup_{0\leqslant t\leqslant 
u}|X^{(n)}(t\wedge\tau_R)-X^{(m)}(t\wedge\tau_R)|^{\kappa-1}\Big] %+D^{\kappa}\cdot2^{\frac{\kappa}{2}}(4(\kappa-1))^{\kappa-1}\mathbb{E}\Big(\int_0^{u\wedge\tau_R}L_R|X^{(n)}(s-)-X^{(m)}(s-)|^2+LW_2^2(\mu^{(n)}_{[s]_{n}-}),\mu^{(m)}_{[s]_{m}-})ds\Big)^{\frac{\kappa}{2}}\notag\\ \leqslant & \frac{1}{4}\mathbb{E}\left[\sup_{0\leqslant t\leqslant u}\left|X^{(n)}(t\wedge\tau_R)-X^{(m)}(t\wedge\tau_R)\right|^{\kappa}\right] +D^{\kappa}\cdot2^{\frac{\kappa}{2}}(4(\kappa-1))^{\kappa-1}u^{\frac{\kappa}{2}-1}\left(L_R^{\frac{\kappa}{2}}+L^{\frac{\kappa}{2}}3^{\kappa-1}\right)\int_0^{u\wedge\tau_R}\mathbb{E}\left|X^{(n)}(s)-X^{(m)}(s)\right|^{\kappa}ds\notag\\ &+D^{\kappa}\cdot(2L)^{\frac{\kappa}{2}}(12(\kappa-1))^{\kappa-1}u^{\frac{\kappa}{2}-1}\int_0^{u\wedge\tau_R}\left(\mathbb{E}\left|X^{(n)}(s)-X^{(n)}([s]_{n})\right|^{\kappa}+\mathbb{E}\left|X^{(m)}(s)-X^{(m)}([s]_{m})\right|^{\kappa}\right)ds.\end{aligned}$$ Substituting ([\[J\_{1,R}\]](#J_{1,R}){reference-type="ref" reference="J_{1,R}"})-([\[J\_{6,R}\]](#J_{6,R}){reference-type="ref" reference="J_{6,R}"}) into ([\[J_1\]](#J_1){reference-type="ref" reference="J_1"}) yields that $$\begin{aligned} %\label{J1} &\mathbb{E}\left[\sup_{0\leqslant t\leqslant u}\left|X^{(n)}(t\wedge\tau_R)-X^{(m)}(t\wedge\tau_R)\right|^{\kappa}\right]\notag\\ \leqslant&2\Bigg[\kappa^2 L_R+(\kappa-1+3^{\kappa-1})\sqrt{L}+(\kappa-1)(\kappa-2+2\cdot3^{\kappa-1})L+C_1\left(L_R+\frac{(\kappa-2+2\cdot3^{\kappa-1})L}{\kappa}+L_R^{'}+L^{'}3^{\kappa-1}\right)\notag\\ &+(6^{\kappa}+D^{\kappa})2^{\frac{\kappa}{2}}(4(\kappa-1))^{\kappa-1}u^{\frac{\kappa}{2}-1}\left(L_R^{\frac{\kappa}{2}}+L^{\frac{\kappa}{2}}3^{\kappa-1}\right)\Bigg]\int_0^{u}\mathbb{E}\sup_{0\leqslant t\leqslant s}\left|X^{(n)}(t\wedge\tau_R)-X^{(m)}(t\wedge\tau_R)\right|^{\kappa}ds\notag\\ &+2\cdot3^{\kappa-1}\left[\sqrt{L}+2L(\kappa-1)+C_1\left(\frac{2L}{\kappa}+L^{'}\right)+(6^{\kappa}+D^{\kappa})(2L)^{\frac{\kappa}{2}}(4(\kappa-1))^{\kappa-1}u^{\frac{\kappa}{2}-1}\right]\notag\\
&\cdot\int_0^{u\wedge\tau_R}\left(\sup_{0\leqslant t\leqslant s}\mathbb{E}\left|X^{(n)}(t)-X^{(n)}([t]_{n})\right|^{\kappa}+\sup_{0\leqslant t\leqslant s}\mathbb{E}\left|X^{(m)}(t)-X^{(m)}([t]_{m})\right|^{\kappa}\right)ds.\notag\end{aligned}$$ In addition, for any $t\in[0,T]$, the result in Lemma [Lemma 2](#HolderC){reference-type="ref" reference="HolderC"} implies that $$\mathbb{E}\left|X^{(n)}(t)-X^{(n)}([t]_{n})\right|^{\kappa}\leqslant Ch_n \;\; \hbox{and} \;\; \mathbb{E}\left|X^{(m)}(t)-X^{(m)}([t]_{m})\right|^{\kappa}\leqslant Ch_m.$$ By the above estimates, together with the Grönwall inequality, we conclude that $$\begin{aligned} \label{JJ-1} J_1\leqslant&\mathbb{E}\left[\sup_{0\leqslant t\leqslant T}\left|X^{(n)}(t\wedge\tau_R)-X^{(m)}(t\wedge\tau_R)\right|^{\kappa}\right]\notag\\ \leqslant&2CT(h_n+h_m)\cdot3^{\kappa-1}\left[\sqrt{L}+2L(\kappa-1)+C_1\left(\frac{2L}{\kappa}+L^{'}\right)+(6^{\kappa}+D^{\kappa})(2L)^{\frac{\kappa}{2}}(4(\kappa-1))^{\kappa-1}T^{\frac{\kappa}{2}-1}\right]\notag\\ &\cdot e^{2T\left[\kappa^2 L_R+(\kappa-1+3^{\kappa-1})\sqrt{L}+(\kappa-1)(\kappa-2+2\cdot3^{\kappa-1})L+C_1\left(L_R+\frac{(\kappa-2+2\cdot3^{\kappa-1})L}{\kappa}+L_R^{'}+L^{'}3^{\kappa-1}\right)+(6^{\kappa}+D^{\kappa})2^{\frac{\kappa}{2}}(4(\kappa-1))^{\kappa-1}T^{\frac{\kappa}{2}-1}\left(L_R^{\frac{\kappa}{2}}+L^{\frac{\kappa}{2}}3^{\kappa-1}\right)\right]}\notag\\ :&=T(h_n+h_m)C(\kappa,T,L,L^{'})e^{T\cdot C(\kappa,T,L,L_R,L^{'},L_R^{'})}.\end{aligned}$$ \(2\) Estimate of the term $J_2$.
With the aid of the Cauchy-Schwarz inequality, one has $$\begin{aligned} \label{J-2} J_2=\mathbb{E}\left[\sup_{0\leqslant t\leqslant T}\left|X^{(n)}(t)-X^{(m)}(t)\right|^{\kappa}\mathbb{I}_{\{\tau_R\leqslant T\}}\right] \leqslant&\sqrt{\mathbb{E}\left(\sup_{0\leqslant t\leqslant T}\left|X^{(n)}(t)-X^{(m)}(t)\right|^{\kappa}\right)^2}\sqrt{\mathbb{E}\left(\mathbb{I}_{\{\tau_R\leqslant T\}}\right)^2}\notag\\ \leqslant&2^{\kappa-\frac{1}{2}}\sqrt{\mathbb{E}\left[\sup_{0\leqslant t\leqslant T}\left|X^{(n)}(t)\right|^{2\kappa}+\sup_{0\leqslant t\leqslant T}\left|X^{(m)}(t)\right|^{2\kappa}\right]}\sqrt{\mathbb{P}\Big(\tau_R\leqslant T\Big)}\notag\\ \leqslant&C_1\sqrt{\mathbb{P}\Big(\tau_R\leqslant T\Big)}.\end{aligned}$$ Here we have used the result of Lemma [Lemma 1](#UBP){reference-type="ref" reference="UBP"}. Notice that, since $|X^{(n)}(\tau_R)|\vee|X^{(m)}(\tau_R)|\geqslant R$ on the event $\{\tau_R\leqslant T\}$, using Lemma [Lemma 1](#UBP){reference-type="ref" reference="UBP"} again we can estimate $$\mathbb{P}\Big(\tau_R\leqslant T\Big) \leqslant\mathbb{E}\left(\mathbb{I}_{\{\tau_R\leqslant T\}}\frac{|X^{(n)}(\tau_R)|^{4}+|X^{(m)}(\tau_R)|^{4}}{R^4}\right)\leqslant\frac{1}{R^4}\left(\mathbb{E}\sup_{0\leqslant t\leqslant T}\left|X^{(n)}(t)\right|^4+\mathbb{E}\sup_{0\leqslant t\leqslant T}\left|X^{(m)}(t)\right|^4\right)\leqslant \frac{C}{R^4}.$$ By substituting this into ([\[J-2\]](#J-2){reference-type="ref" reference="J-2"}), we further obtain $$\label{JJ-2} J_2\leqslant \frac{C}{R^2}.$$ At this point, we can estimate [\[JJ\]](#JJ){reference-type="eqref" reference="JJ"} by combining [\[JJ-1\]](#JJ-1){reference-type="eqref" reference="JJ-1"} and [\[JJ-2\]](#JJ-2){reference-type="eqref" reference="JJ-2"}: $$\label{JJ-1+2} \mathbb{E}\sup_{0\leqslant t\leqslant T}\left|X^{(n)}(t)-X^{(m)}(t)\right|^\kappa\leqslant T(h_n+h_m)C(\kappa,T,L,L^{'})e^{T\cdot C(\kappa,T,L,L_R,L^{'},L_R^{'})} +\frac{C}{R^2}.$$ Note that $R$ is independent of $n$ and $m$, and $\frac{C}{R^2}$ converges to $0$ as $R\to\infty$.
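The bound $\mathbb{P}(\tau_R\leqslant T)\leqslant C/R^4$ above is a fourth-moment Markov-type estimate: pathwise, $\mathbb{I}_{\{\tau_R\leqslant T\}}\leqslant(\sup_{0\leqslant t\leqslant T}|X(t)|)^4/R^4$. A small Monte Carlo sketch for a Brownian path (illustrative only; the paper's processes are not simulated here):

```python
import numpy as np

rng = np.random.default_rng(11)
n_paths, n_steps, dt, R = 20_000, 200, 0.005, 2.0
# Brownian paths on [0, 1]; take the running supremum over the grid
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
sup_abs = np.abs(W).max(axis=1)
p_exit = np.mean(sup_abs > R)             # estimates P(tau_R <= T)
markov = np.mean(sup_abs ** 4) / R ** 4   # fourth-moment Markov bound
```

Since $\mathbb{I}_{\{x>R\}}\leqslant x^4/R^4$ holds sample by sample, `p_exit <= markov` is automatic; the point of the bound is its $R^{-4}$ decay as $R$ grows.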
For any given $\varepsilon>0$, there exists a sufficiently large $R(\varepsilon)>0$ such that $$\frac{C}{R_{*}^2}<\frac{\varepsilon}{2}$$ whenever $R_{*}\geqslant R(\varepsilon)$. Since both $h_n$ and $h_m$ converge to $0$ as $n,m\to\infty$, with the $\varepsilon>0$ chosen above, we have $$T(h_n+h_m)C(\kappa,T,L,L^{'})e^{T\cdot C(\kappa,T,L,L_{R_*},L^{'},L_{R_*}^{'})}<\frac{\varepsilon}{2},$$ for all sufficiently large $n,m$. Consequently, we conclude that [\[Cauchy-se\]](#Cauchy-se){reference-type="eqref" reference="Cauchy-se"} holds. ◻ ## Proof of Theorem [Theorem 1](#mainresult1){reference-type="ref" reference="mainresult1"} {#proof-of-theorem-mainresult1} In this subsection, we turn to proving our main theorem in this section. The proof consists of three steps. **Step one: (Existence)** Let $\{X^{(n)}(t)\}_{n\geqslant1}$ be the Cauchy sequence in $L^{\kappa}(\Omega;D([0,T];\mathbb{R}^d))$ given by [\[Approximation-Eq\]](#Approximation-Eq){reference-type="eqref" reference="Approximation-Eq"}. Keep in mind that $L^{\kappa}(\Omega;D([0,T];\mathbb{R}^d))$ is a complete space under the $L^{\kappa}$ norm. Thus there exists an $\{\mathcal{F}_t\}_{0\leqslant t\leqslant T}$-adapted $\mathbb{R}^d$-valued càdlàg stochastic process $\{X(t)\}_{0\leqslant t\leqslant T}$ with $X(0)=x_0$ and $\mu_t=\mathcal{L}_{X(t)}$ such that $$\label{L2C} \lim_{n\to \infty}\left[\mathbb{E}\left|X^{(n)}(t)-X(t)\right|^{\kappa}\right]^{\frac{1}{\kappa}}\leqslant\lim_{n\to \infty}\left[\mathbb{E}\sup_{0\leqslant t\leqslant T}\left|X^{(n)}(t)-X(t)\right|^{\kappa}\right]^{\frac{1}{\kappa}}=0.$$ We next prove that $\{X(t)\}_{0\leqslant t\leqslant T}$ is a solution to [\[Main-equation\]](#Main-equation){reference-type="eqref" reference="Main-equation"}.
Indeed, the main idea is to show that the right-hand side of ([\[Approximation-Eq\]](#Approximation-Eq){reference-type="ref" reference="Approximation-Eq"}) converges in probability to $$x_0+\int_0^tb(s,X(s),\mu_s)ds+\int_0^t\sigma(s,X(s),\mu_s)dW(s)+\int_0^t\int_Uh(s,X(s),\mu_s,z)\tilde{N}(dz,ds),$$ by taking the limit on both sides of ([\[Approximation-Eq\]](#Approximation-Eq){reference-type="ref" reference="Approximation-Eq"}). Here $\mu_s=\mathcal{L}(X(s))$ for any $s\in[0,T]$. First of all, it follows from ([\[L2C\]](#L2C){reference-type="ref" reference="L2C"}) that, there exists a subsequence (for notational simplicity, still indexed by $n$) such that, for any $s\in[0,T]$, $$X^{(n)}(s,\omega) \to X(s,\omega),\;\; \mathbb{P}\text{-a.s.}$$ By Lemma [Lemma 2](#HolderC){reference-type="ref" reference="HolderC"}, the Wasserstein metric of $\mu^{(n)}_{[s]_n}$ and $\mu_s$ satisfies $$\begin{aligned} \label{md} \lim_{n\to \infty}\sup_{0\leqslant s\leqslant t}W_{2}^{\kappa}\left(\mu^{(n)}_{[s]_n},\mu_s\right)&\leqslant2^{\kappa-1}\lim_{n\to \infty}\sup_{0\leqslant s\leqslant t}\mathbb{E}\left|X^{(n)}(s)-X^{(n)}([s]_n)\right|^{\kappa}+2^{\kappa-1}\lim_{n\to \infty}\mathbb{E}\left[\sup_{0\leqslant s\leqslant t}\left|X^{(n)}(s)-X(s)\right|^{\kappa}\right]\notag\\ &\leqslant2^{\kappa-1}C\lim_{n\to\infty}h_n=0.\end{aligned}$$ Taking Assumption **A6** into account, we immediately have that, for any $s\in[0,T]$ and almost all $\omega\in\Omega$, $$%\label{Conver-1} b\left(s, X^{(n)}(s), \mu^{(n)}_{[s]_n}\right)\to b\left(s, X(s),\mu_s\right),\; \sigma\left(s, X^{(n)}(s), \mu^{(n)}_{[s]_n}\right)\to \sigma\left(s, X(s),\mu_s\right),\notag$$ $$\label{Conver-1} \int_Uh\left(s, X^{(n)}(s), \mu^{(n)}_{[s]_n},z\right)\nu(dz)\to\int_Uh\left(s,X(s),\mu_s,z\right)\nu(dz),$$ as $n\to\infty$. Next, we claim that $\{b(s, X^{(n)}(s), \mu^{(n)}_{[s]_n})\}_{n\geqslant1}$ and $\{\sigma(s, X^{(n)}(s), \mu^{(n)}_{[s]_n})\}_{n\geqslant1}$ are uniformly integrable. 
In fact, from Assumptions **A4-A5** and Lemma [Lemma 1](#UBP){reference-type="ref" reference="UBP"}, we obtain the following uniform boundedness: $$\begin{aligned} &\sup_{n\geqslant1}\mathbb{E}|b(s, X^{(n)}(s), \mu^{(n)}_{[s]_n})| \leqslant\sqrt{3K_1}\sup_{n\geqslant1}\mathbb{E}[1+|X^{(n)}(s)|^{\frac{\kappa}{2}}+W_2^{\frac{\kappa}{2}}( \mu^{(n)}_{[s]_n},\delta_0)]\leqslant \sqrt{3K_1}(1+2C), \notag\\ &\sup_{n\geqslant1}\mathbb{E}\|\sigma(s, X^{(n)}(s), \mu^{(n)}_{[s]_n})\|^2 \leqslant K\sup_{n\geqslant1}\mathbb{E}[1+|X^{(n)}(s)|^{2}+W_2^{2}( \mu^{(n)}_{[s]_n},\delta_0)]\leqslant K(1+2C),\notag \end{aligned}$$ and the following uniform absolute continuity: $$\begin{aligned} &\sup_{n\geqslant1}\mathbb{E}\left[\left|b\left(s, X^{(n)}(s), \mu^{(n)}_{[s]_n}\right)\right|\cdot \mathbb{I}_A\right] \leqslant K_1\sup_{n\geqslant1}\left[\mathbb{E}\left(1+\left|X^{(n)}(s)\right|^{\kappa}+W_2^{\kappa}\left( \mu^{(n)}_{[s]_n},\delta_0\right)\right)\right]^{\frac{1}{2}}(\mathbb{P}(A))^{\frac{1}{2}}\leqslant K_1\sqrt{1+2C}(\mathbb{P}(A))^{\frac{1}{2}}\to 0,\notag\\ &\sup_{n\geqslant1}\mathbb{E}\left[\left\|\sigma\left(s, X^{(n)}(s), \mu^{(n)}_{[s]_n}\right)\right\|^2\cdot \mathbb{I}_A\right] \leqslant K\sup_{n\geqslant1}\left[\mathbb{E}\left(1+\left|X^{(n)}(s)\right|^{2}+W_2^{2}\left( \mu^{(n)}_{[s]_n},\delta_0\right)\right)^2\right]^{\frac{1}{2}}(\mathbb{P}(A))^{\frac{1}{2}}\leqslant K\sqrt{3(1+2C)}(\mathbb{P}(A))^{\frac{1}{2}}\to 0,\notag\end{aligned}$$ when $\mathbb{P}(A)\to 0$. Here, the elementary inequality $$\left(\sum_{i=1}^k|a_i|\right)^l\leqslant\left(k\max_{1\leqslant i\leqslant k}|a_i|\right)^l\leqslant k^l\sum_{i=1}^k|a_i|^l, \; \forall l>0, \; a_i\in\mathbb{R}, k\in\mathbb{N}, \notag$$ has been used. The uniform integrability for $\{b(s, X^{(n)}(s), \mu^{(n)}_{[s]_n})\}_{n\geqslant1}$ as well as $\{\sigma(s, X^{(n)}(s), \mu^{(n)}_{[s]_n})\}_{n\geqslant1}$ follows by Lemma 3, page 190 of [@Shiryave1989].
Hence, the dominated convergence theorem (see, e.g., Theorem 4, page 188 of [@Shiryave1989]), together with ([\[Conver-1\]](#Conver-1){reference-type="ref" reference="Conver-1"}) yields, for any $s\in[0,T]$, $$\label{Eb1} \lim_{n\to \infty}\mathbb{E}\left|b\left(s, X^{(n)}(s), \mu^{(n)}_{[s]_n}\right)-b\left(s, X(s),\mu_s\right)\right|=0,$$ $$\label{Esg1} \lim_{n\to \infty}\mathbb{E}\left\|\sigma\left(s, X^{(n)}(s), \mu^{(n)}_{[s]_n}\right)-\sigma\left(s, X(s),\mu_s\right)\right\|^2=0.$$ In addition, it follows from ([\[L2C\]](#L2C){reference-type="ref" reference="L2C"}) that $$\mathbb{E}\left[\sup_{0\leqslant t\leqslant T}\left|X(t)\right|^{\kappa}\right]\leqslant C.\notag$$ We further have the following estimates based on Assumptions **A4-A5** and Lemma [Lemma 1](#UBP){reference-type="ref" reference="UBP"}, $$\begin{aligned} \label{Eb2} \sup_{n\geqslant1}\sup_{s\in[0,t]}\mathbb{E}\left|b\left(s, X^{(n)}(s), \mu^{(n)}_{[s]_n}\right)-b(s, X(s),\mu_s)\right|&\leqslant \sqrt{3K_1}\sup_{n\geqslant1}\sup_{s\in[0,t]}\mathbb{E}\left[2+\left|X^{(n)}(s)\right|^{\frac{\kappa}{2}}+|X(s)|^{\frac{\kappa}{2}}+W_2^{\frac{\kappa}{2}}\left( \mu^{(n)}_{[s]_n},\delta_0\right)+W_2^{\frac{\kappa}{2}}(\mu_s,\delta_0)\right]\notag\\ &\leqslant 2\sqrt{3K_1}(1+2C),\end{aligned}$$ $$\begin{aligned} \label{Esg2} \sup_{n\geqslant1}\sup_{s\in[0,t]}\mathbb{E}\left\|\sigma\left(s, X^{(n)}(s), \mu^{(n)}_{[s]_n}\right)-\sigma(s, X(s),\mu_s)\right\|^2&\leqslant 2K\sup_{n\geqslant1}\sup_{s\in[0,t]}\mathbb{E}\left[2+\left|X^{(n)}(s)\right|^{2}+|X(s)|^{2}+W_2^{2}\left( \mu^{(n)}_{[s]_n},\delta_0\right)+W_2^{2}(\mu_s,\delta_0)\right]\notag\\ &\leqslant 4K(1+2C).\end{aligned}$$ For any $t\in[0,T]$, by the dominated convergence theorem combined with ([\[Eb1\]](#Eb1){reference-type="ref" reference="Eb1"}) and ([\[Eb2\]](#Eb2){reference-type="ref" reference="Eb2"}), we eventually obtain $$\begin{aligned} \label{Step-1-1} \lim_{n\to \infty}\mathbb{E}\left|\int_0^t\left(b\left(s, X^{(n)}(s), 
\mu^{(n)}_{[s]_n}\right)-b(s, X(s),\mu_s)\right)ds\right|\leqslant\lim_{n\to \infty}\int_0^t\mathbb{E}\left|b\left(s, X^{(n)}(s), \mu^{(n)}_{[s]_n}\right)-b(s, X(s),\mu_s)\right|ds=0.\end{aligned}$$ Similarly, in view of ([\[Esg1\]](#Esg1){reference-type="ref" reference="Esg1"}) and ([\[Esg2\]](#Esg2){reference-type="ref" reference="Esg2"}), we arrive at $$\begin{aligned} \label{Step-1-2} \lim_{n\to \infty}\mathbb{E}\left|\int_0^t\left(\sigma\left(s, X^{(n)}(s), \mu^{(n)}_{[s]_n}\right)-\sigma(s, X(s),\mu_s)\right)dW(s)\right|^2=\lim_{n\to \infty}\int_0^t\mathbb{E}\left\|\sigma\left(s, X^{(n)}(s), \mu^{(n)}_{[s]_n}\right)-\sigma(s, X(s),\mu_s)\right\|^2ds =0.\end{aligned}$$ Finally, we look into the estimates for the integral with respect to the Poisson random measure. For any $u\in[0,T]$, it follows from Kunita's first inequality and Assumptions **A1-A2** that $$\begin{aligned} &\mathbb{E}\sup_{0\leqslant u\leqslant t}\left|\int_0^u\int_U\left(h\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n},z\right)-h(s,X(s),\mu_s,z)\right)\tilde{N}(ds,dz)\right|\notag\\ \leqslant& D\mathbb{E}\left[\int_0^t\int_U\left|h\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n},z\right)-h(s,X(s),\mu_s,z)\right|^2\nu(dz)ds\right]^{\frac{1}{2}}\notag\\ \leqslant&\sqrt{2}D\mathbb{E}\left[\int_0^t\int_U\left(\left|h\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n},z\right)-h(s,X(s),\mu^{(n)}_{[s]_n},z)\right|^2+\left|h\left(s,X(s),\mu^{(n)}_{[s]_n},z\right)-h(s,X(s),\mu_s,z)\right|^2\right)\nu(dz)ds\right]^{\frac{1}{2}}\notag\\ \leqslant&\sqrt{2}D\mathbb{E}\left[\int_0^t\left|X^{(n)}(s)-X(s)\right|^2ds+\int_0^tW_2^{2}\left(\mu^{(n)}_{[s]_n},\mu_s\right)ds\right]^{\frac{1}{2}}\notag\\ \leqslant&2D\mathbb{E}\left[\int_0^t\left|X^{(n)}(s)-X(s)\right|^2ds\right]^{\frac{1}{2}}+2D\mathbb{E}\left[\int_0^tW_2^{2}\left(\mu^{(n)}_{[s]_n},\mu_s\right)ds\right]^{\frac{1}{2}}\notag\\ \leqslant&2D\sqrt{T}\mathbb{E}\left[\sup_{0\leqslant s\leqslant t}\left|X^{(n)}(s)-X(s)\right|\right]+2D\sqrt{T}\sup_{0\leqslant s\leqslant 
t}W_2\left(\mu^{(n)}_{[s]_n},\mu_s\right).\notag\end{aligned}$$ By ([\[L2C\]](#L2C){reference-type="ref" reference="L2C"}), ([\[md\]](#md){reference-type="ref" reference="md"}) and the Lyapunov inequality, we conclude that $$\begin{aligned} \label{Step-1-3} \lim_{n\to\infty}\mathbb{E}\sup_{0\leqslant u\leqslant t}\left|\int_0^u\int_U\left(h\left(s,X^{(n)}(s),\mu^{(n)}_{[s]_n},z\right)-h(s,X(s),\mu_s,z)\right)\tilde{N}(ds,dz)\right|=0.\end{aligned}$$ As a consequence, by [\[Step-1-1\]](#Step-1-1){reference-type="eqref" reference="Step-1-1"}, [\[Step-1-2\]](#Step-1-2){reference-type="eqref" reference="Step-1-2"} and [\[Step-1-3\]](#Step-1-3){reference-type="eqref" reference="Step-1-3"}, $\{X(t)\}_{0\leqslant t\leqslant T}$ is a strong solution to ([\[Main-equation\]](#Main-equation){reference-type="ref" reference="Main-equation"}). This completes the proof of existence. **Step two: (Boundedness)** For $t\in[0,T]$, let $X(t)\in L^{\kappa}(\Omega;D([0,T];\mathbb{R}^d))$ be any solution to equation ([\[Main-equation\]](#Main-equation){reference-type="ref" reference="Main-equation"}). In what follows, we calculate the $r$th moment of the solution $(X(t))_{0\leqslant t\leqslant T}$. For every $R>0$, we define the stopping time $$\pi_R:=\inf\big\{t\in[0,T]:|X(t)|> R\big\}\wedge T.$$ It is clear that $|X(t)|\leqslant R$ for $0\leqslant t<\pi_R$.
Using the same technique as in the proof of Lemma [Lemma 1](#UBP){reference-type="ref" reference="UBP"}, we calculate that for any $u\in[0,T]$, $$\begin{aligned} &\mathbb{E}\sup_{0\leqslant t\leqslant u\wedge\pi_R}|X(t)|^r\notag\\ \leqslant&\mathbb{E}|x_0|^r+\frac{1}{2}\mathbb{E}\sup_{0\leqslant t\leqslant u\wedge\pi_R}|X(t)|^r+K(r-2)\frac{r^2+r+2C_1}{2r}\int_0^u\left(1+\mathbb{E}\sup_{0\leqslant t\leqslant s\wedge\pi_R}|X(t)|^r\right)ds\notag\\ &+\left[2\cdot3^{\frac{r}{2}-1}(r+1)K+C_2^r2^{r}(r-1)^{r-1}(2K)^{\frac{r}{2}}(3T)^{\frac{r}{2}-1}+2C_1\cdot\left(\frac{2K}{r}3^{\frac{r}{2}-1}+K_2\right)\right]\int_0^u\left(1+\mathbb{E}\sup_{0\leqslant t\leqslant s\wedge\pi_R}|X(t)|^r\right)ds.\notag\end{aligned}$$ Notice that $\pi_R\to T$ as $R\to\infty$, $\mathbb{P}$-a.s. Therefore, we can finish the proof of the estimate ([\[UBeq\]](#UBeq){reference-type="ref" reference="UBeq"}) by the Grönwall inequality and Fatou's lemma. That is, $$\mathbb{E}\sup_{0\leqslant t\leqslant T}|X(t)|^r\leqslant \liminf_{R\to\infty}\mathbb{E}\sup_{0\leqslant t\leqslant T\wedge\pi_R}|X(t)|^r\leqslant C_r<\infty.$$ **Step three: (Uniqueness)** Let $X(t)$, $Y(t)$ be two solutions of ([\[Main-equation\]](#Main-equation){reference-type="ref" reference="Main-equation"}) on the same probability space with $X(0)=Y(0)$. By ([\[UBeq\]](#UBeq){reference-type="ref" reference="UBeq"}), for a fixed $r\geqslant\max\{\kappa^2/2, 4\}$, there exists a positive constant $C_r$ such that $$\mathbb{E}\left[\sup_{0\leqslant t\leqslant T}|X(t)|^r\right]\leqslant C_r,\; \mathbb{E}\left[\sup_{0\leqslant t\leqslant T}|Y(t)|^r\right]\leqslant C_r.$$ For a sufficiently large $R>0$, we define the stopping time $$\bar{\tau}_R:=\inf\big\{t\in[0,T]: |X(t)|\vee|Y(t)|\geqslant R\big\}.$$ Let us now replace $|X^{(n)}(t)-X^{(m)}(t)|$ and $\tau_R$ by $|X(t)-Y(t)|$ and $\bar{\tau}_R$, respectively.
Then, applying the same argument as in the proof of existence for equation ([\[Main-equation\]](#Main-equation){reference-type="ref" reference="Main-equation"}), we have $$\begin{aligned} \mathbb{E}\left[\sup_{0\leqslant t\leqslant T}|X(t)-Y(t)|^{2}\right]&=\mathbb{E}\left[\sup_{0\leqslant t\leqslant T}|X(t)-Y(t)|^{2}\mathbb{I}_{\{\bar{\tau}_R>T\}}\right]+\mathbb{E}\left[\sup_{0\leqslant t\leqslant T}|X(t)-Y(t)|^{2}\mathbb{I}_{\{\bar{\tau}_R\leqslant T\}}\right]\notag\\ &\leqslant\mathbb{E}\left[\sup_{0\leqslant t\leqslant T}\left|X(t\wedge\bar{\tau}_R)-Y(t\wedge\bar{\tau}_R)\right|^{2}\right]+C_1\sqrt{\mathbb{P}\Big(\bar{\tau}_R\leqslant T\Big)}\leqslant\frac{C}{R^2}.\notag\end{aligned}$$ Letting $R\to\infty$ yields the uniqueness of the solution to equation ([\[Main-equation\]](#Main-equation){reference-type="ref" reference="Main-equation"}). This completes the proof of Theorem [Theorem 1](#mainresult1){reference-type="ref" reference="mainresult1"}. # Stochastic averaging principle {#sec3} In this section, we establish a stochastic averaging principle for the following stochastic integral equation $$\begin{aligned} \label{AMain} X_{\varepsilon}(t)=&x_0+\int_0^t b\left(\frac{s}{\varepsilon },X_{\varepsilon}(s),\mathscr{L}_{X_{\varepsilon}(s)}\right)ds+\int_0^t\sigma\left(\frac{s}{\varepsilon },X_{\varepsilon}(s),\mathscr{L}_{X_{\varepsilon}(s)}\right)dW(s)\notag\\ &+\int_0^t\int_{U}h\left(\frac{s}{\varepsilon },X_{\varepsilon}(s),\mathscr{L}_{X_{\varepsilon}(s)},z\right)\tilde{N}(ds,dz),\;\;\;t\in[0,T],\end{aligned}$$ where $\varepsilon$ is a small positive parameter ($0<\varepsilon\ll1$). Assuming [\[AMain\]](#AMain){reference-type="eqref" reference="AMain"} fulfills the conditions in Assumptions **A1-A7**, the existence and uniqueness of its solution follow immediately from Theorem [Theorem 1](#mainresult1){reference-type="ref" reference="mainresult1"}.
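For intuition, note that the distribution dependence in the coefficients is what makes equations of this type nontrivial to simulate: the law $\mathscr{L}_{X_{\varepsilon}(s)}$ is not observable from a single path and is, in practice, commonly replaced by the empirical measure of an interacting particle system. The following minimal Euler-type sketch illustrates this idea only; it omits the jump term, uses hypothetical toy coefficients, and is not the approximating scheme [\[Approximation-Eq\]](#Approximation-Eq){reference-type="eqref" reference="Approximation-Eq"} analysed above.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_particles(b, sigma, x0, T, n_steps, n_particles):
    """Euler scheme for dX = b(t, X, mu) dt + sigma(t, X, mu) dW,
    with the law mu entering only through its mean, which is replaced
    by the empirical mean of the particle system (propagation of chaos)."""
    dt = T / n_steps
    X = np.full(n_particles, x0, dtype=float)
    for k in range(n_steps):
        t = k * dt
        m = X.mean()  # empirical approximation of E[X(t)]
        dW = rng.normal(0.0, np.sqrt(dt), n_particles)
        X = X + b(t, X, m) * dt + sigma(t, X, m) * dW
    return X

# Hypothetical mean-field Ornstein-Uhlenbeck coefficients, for illustration:
# each particle is attracted to the empirical mean, so the mean is conserved
# (up to Monte Carlo noise) while the spread stays of order sigma.
b_toy = lambda t, x, m: -(x - m)
sig_toy = lambda t, x, m: 0.1 * np.ones_like(x)
X_T = simulate_particles(b_toy, sig_toy, x0=1.0, T=1.0, n_steps=200, n_particles=2000)
```

With these toy coefficients the drift has zero empirical mean at every step, so the sample mean of `X_T` stays close to the initial value $1.0$, while the particles fluctuate around it with a small standard deviation.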
As mentioned in the Introduction, our main objective is to show that the solution $X_{\varepsilon}(t)$, $t\in[0,T]$, of [\[AMain\]](#AMain){reference-type="eqref" reference="AMain"} can be approximated, in an appropriate sense, by a simpler (averaged) process. To proceed, we associate ([\[AMain\]](#AMain){reference-type="ref" reference="AMain"}) with the following averaged McKean-Vlasov SDE: $$\begin{aligned} \label{ASDE} \bar{X}(t)=x_0+\int_0^t \bar{b}\left(\bar{X}(s),\mathscr{L}_{\bar{X}(s)}\right)ds+\int_0^t\bar{\sigma}\left(\bar{X}(s),\mathscr{L}_{\bar{X}(s)}\right)dW(s)+\int_0^t\int_{U}\bar{h}\left(\bar{X}(s),\mathscr{L}_{\bar{X}(s)},z\right)\tilde{N}(ds,dz),\;\;\;t\in[0,T],\end{aligned}$$ where $\bar{b}: \mathbb{R}^d\times M_2(\mathbb{R}^d)\to \mathbb{R}^d$, $\bar{\sigma}: \mathbb{R}^d\times M_2(\mathbb{R}^{d})\to\mathbb{R}^{d\times m}$ and $\bar{h}: \mathbb{R}^d\times M_2(\mathbb{R}^d)\times U\to \mathbb{R}^d$ are Borel measurable functions. To ensure that ([\[ASDE\]](#ASDE){reference-type="ref" reference="ASDE"}) also admits a unique solution and to apply the stochastic averaging arguments, we need to make use of the following averaging conditions. We point out that such conditions are slightly different from the classical conditions (see, e.g., [@Xu2011; @Shen2022]) due to the type of the nonlinear terms.
**A8** (Averaging conditions). There exist positive bounded functions (sometimes known as rate functions of convergence) $\varphi_i(t)$ ($t\geqslant0$) with $\lim_{t\to\infty}\varphi_i(t)=0$, $i=1, 2, 3$, such that $$\frac{1}{t}\int_{0}^{t}|b(s,x,\mu)-\bar{b}(x,\mu)|^2ds\leqslant\varphi_1(t)C_R^b(1+|x|^2),\;\;\;\; \frac{1}{t}\int_{0}^{t}\|\sigma(s,x,\mu)-\bar{\sigma}(x,\mu)\|^2ds\leqslant\varphi_2(t)C_R^{\sigma}(1+|x|^2),$$ $$\frac{1}{t}\int_{0}^{t}\int_{U}|h(s,x,\mu,z)-\bar{h}(x,\mu,z)|^2\nu(dz)ds\leqslant\varphi_3(t)C_R^{h}(1+|x|^2),$$ respectively, for any $t>0$, $x\in\mathbb{R}^d$ with $|x|\leqslant R$, and $\mu\in\mathcal{M}_2(\mathbb{R}^d)$. Furthermore, if $\kappa>2$, we need an additional condition.\ **A9** (Additional averaging condition w.r.t. the jump coefficient). There exists a positive bounded function $\varphi(t)$ ($t\geqslant0$) with $\lim_{t\to\infty}\varphi(t)=0$ such that $$\frac{1}{t}\int_{0}^{t}\int_{U}|h(s,x,\mu,z)-\bar{h}(x,\mu,z)|^r\nu(dz)ds\leqslant\varphi(t)C_R^{h}(1+|x|^r),$$ $$\frac{1}{t}\int_{0}^{t}\int_{U}|h(s,x,\mu,z)-\bar{h}(x,\mu,z)|^{\kappa}\nu(dz)ds\leqslant\varphi(t)C_R^{h}(1+|x|^{\kappa}),$$ respectively, for any $t>0$, $x\in\mathbb{R}^d$ with $|x|\leqslant R$, and $\mu\in\mathcal{M}_2(\mathbb{R}^d)$. The main theorem on an averaging principle for ([\[AMain\]](#AMain){reference-type="ref" reference="AMain"}) is thus formulated below. **Theorem 2**. ***(An Averaging Principle)** Suppose that Assumptions **A1-A9** hold. Then, we have the following averaging principle $$\lim_{\varepsilon\to0}\mathbb{E}\sup_{0\leqslant t\leqslant T}|X_{\varepsilon}(t)-\bar{X}(t)|^{2}=0.$$* Moreover, by Theorem [Theorem 2](#Th){reference-type="ref" reference="Th"} and the Chebyshev-Markov inequality, we have the following corollary. **Corollary 1**.
*The original solution $X_{\varepsilon}(t)$ converges in probability to the averaged solution $\bar{X}(t)$, that is, for any given number $\delta>0$, $$\mathbb{P}(\sup_{0\leqslant t\leqslant T}\left|X_{\varepsilon}(t)-\bar{X}(t)\right|>\delta)\to 0,$$ as $\varepsilon\to 0$.* Before proving Theorem [Theorem 2](#Th){reference-type="ref" reference="Th"}, we must consider the well-posedness of equation ([\[ASDE\]](#ASDE){reference-type="ref" reference="ASDE"}). The following lemma shows that it is guaranteed. **Lemma 4**. *There is a unique solution $\bar{X}(t)$ to the averaged equation ([\[ASDE\]](#ASDE){reference-type="ref" reference="ASDE"}) under Assumptions **A1-A9**.* *Proof.* By Theorem [Theorem 1](#mainresult1){reference-type="ref" reference="mainresult1"}, we only need to check that the coefficient functions $\bar{b}$, $\bar{\sigma}$, $\bar{h}$ fulfill the conditions for the existence and uniqueness of the solution. Note that both [\[AMain\]](#AMain){reference-type="eqref" reference="AMain"} and ([\[ASDE\]](#ASDE){reference-type="ref" reference="ASDE"}) have the same initial value $x_0$. The condition in Assumption **A6** holds directly. For the conditions in Assumptions **A1-A5**, we only discuss the case of the function $\bar{b}$, and the results for the functions $\bar{\sigma}, \bar{h}$ follow by similar arguments. Finally, we verify that $\bar{h}$ satisfies the condition in Assumption **A7**. The details of this proof are given in the Appendix. ◻ We next complete the proof of Theorem [Theorem 2](#Th){reference-type="ref" reference="Th"} in what follows.
*Proof of Theorem [Theorem 2](#Th){reference-type="ref" reference="Th"}.* For any $t\in[0,T]$, it follows from ([\[AMain\]](#AMain){reference-type="ref" reference="AMain"}) and ([\[ASDE\]](#ASDE){reference-type="ref" reference="ASDE"}) that $$\begin{aligned} X_{\varepsilon}(t)-\bar{X}(t)=&\int_0^t\left[b\left(\frac{s}{\varepsilon},X_{\varepsilon}(s),\mathscr{L}_{X_{\varepsilon}(s)}\right)-\bar{b}\left(\bar{X}(s),\mathscr{L}_{\bar{X}(s)}\right)\right]ds+\int_0^t\left[\sigma\left(\frac{s}{\varepsilon},X_{\varepsilon}(s),\mathscr{L}_{X_{\varepsilon}(s)}\right)-\bar{\sigma}\left(\bar{X}(s),\mathscr{L}_{\bar{X}(s)}\right)\right]dW(s)\notag\\ &+\int_0^t\int_{U}\left[h\left(\frac{s}{\varepsilon},X_{\varepsilon}(s),\mathscr{L}_{X_{\varepsilon}(s)},z\right)-\bar{h}\left(\bar{X}(s),\mathscr{L}_{\bar{X}(s)},z\right)\right]\tilde{N}(ds,dz).\notag\end{aligned}$$ To deal with the one-sided local Lipschitz case, for each $R>0$, a stopping time is defined as follows: $$\eta_R:=\inf\big\{t\in[0,T]: |X_{\varepsilon}(t)|\vee|\bar{X}(t)|> R\big\}.$$ Splitting on the events $\{\eta_R>T\}$ and $\{\eta_R\leqslant T\}$, we have $$\mathbb{E}\left[\sup_{0\leqslant t\leqslant T}|X_{\varepsilon}(t)-\bar{X}(t)|^{2}\right] =\mathbb{E}\left[\sup_{0\leqslant t\leqslant T}|X_{\varepsilon}(t)-\bar{X}(t)|^{2}\mathbb{I}_{\{\eta_R>T\}}\right]+\mathbb{E}\left[\sup_{0\leqslant t\leqslant T}|X_{\varepsilon}(t)-\bar{X}(t)|^{2}\mathbb{I}_{\{\eta_R\leqslant T\}}\right]:=I_1+I_2.$$ We now calculate each term on the right-hand side of the equation above. \(1\) Calculation of the term $I_1$.
We first obtain that $$\label{I_1} I_1=\mathbb{E}\left[\sup_{0\leqslant t\leqslant T}|X_{\varepsilon}(t)-\bar{X}(t)|^{2}\mathbb{I}_{\{\eta_R>T\}}\right]\leqslant\mathbb{E}\left[\sup_{0\leqslant t\leqslant T}|X_{\varepsilon}(t\wedge\eta_R)-\bar{X}(t\wedge\eta_R)|^{2}\right].$$ By using the Itô formula, we have $$|X_{\varepsilon}(t\wedge\eta_R)-\bar{X}(t\wedge\eta_R)|^{2}=\sum_{i=1}^5\Lambda_{i}(t),$$ where $$\begin{aligned} \Lambda_{1}(t)=&2\int_0^{t\wedge\eta_R}\left\langle X_{\varepsilon}(s)-\bar{X}(s), b\left(\frac{s}{\varepsilon},X_{\varepsilon}(s),\mathscr{L}_{X_{\varepsilon}(s)}\right)-\bar{b}\left(\bar{X}(s),\mathscr{L}_{\bar{X}(s)}\right)\right\rangle ds,\notag\\ \Lambda_{2}(t)=&\int_0^{t\wedge\eta_R}\left\| \sigma\left(\frac{s}{\varepsilon},X_{\varepsilon}(s),\mathscr{L}_{X_{\varepsilon}(s)}\right)-\bar{\sigma}\left(\bar{X}(s),\mathscr{L}_{\bar{X}(s)}\right)\right\|^2 ds,\notag\\ \Lambda_{3}(t)=&\int_0^{t\wedge\eta_R}\int_U\left|h\left(\frac{s}{\varepsilon},X_{\varepsilon}(s),\mathscr{L}_{X_{\varepsilon}(s)},z\right)-\bar{h}\left(\bar{X}(s),\mathscr{L}_{\bar{X}(s)},z\right)\right|^2 N(ds,dz),\notag\\ \Lambda_{4}(t)=&2\int_0^{t\wedge\eta_R}\left\langle X_{\varepsilon}(s)-\bar{X}(s), \sigma\left(\frac{s}{\varepsilon},X_{\varepsilon}(s),\mathscr{L}_{X_{\varepsilon}(s)}\right)-\bar{\sigma}\left(\bar{X}(s),\mathscr{L}_{\bar{X}(s)}\right)dW(s)\right\rangle,\notag\\ \Lambda_{5}(t)=&2\int_0^{t\wedge\eta_R}\int_U\left\langle X_{\varepsilon}(s)-\bar{X}(s),h\left(\frac{s}{\varepsilon},X_{\varepsilon}(s),\mathscr{L}_{X_{\varepsilon}(s)},z\right)-\bar{h}\left(\bar{X}(s),\mathscr{L}_{\bar{X}(s)},z\right)\right\rangle\tilde{N}(ds,dz).\notag\end{aligned}$$ By taking the supremum over $[0, u]$ for $u\in[0,T]$ and then taking expectations, we next estimate $\mathbb{E}\big[\sup_{0\leqslant t\leqslant u}\Lambda_{i}(t)\big], i=1, \ldots, 5$, respectively.
In view of Assumptions **A1, A2, A8**, we obtain $$\begin{aligned} \label{I_{1,R}} \mathbb{E}\left[\sup_{0\leqslant t\leqslant u}\Lambda_{1}(t)\right]\leqslant&2\mathbb{E}\left[\sup_{0\leqslant t\leqslant u}\int_0^{t\wedge\eta_R}\left\langle X_{\varepsilon}(s)-\bar{X}(s), b\left(\frac{s}{\varepsilon},X_{\varepsilon}(s),\mathscr{L}_{X_{\varepsilon}(s)}\right)-b\left(\frac{s}{\varepsilon},\bar{X}(s),\mathscr{L}_{X_{\varepsilon}(s)}\right)\right\rangle ds\right]\notag\\ &+2\mathbb{E}\left[\sup_{0\leqslant t\leqslant u}\int_0^{t\wedge\eta_R}|X_{\varepsilon}(s)-\bar{X}(s)|\cdot\left|b\left(\frac{s}{\varepsilon},\bar{X}(s),\mathscr{L}_{X_{\varepsilon}(s)}\right)-b\left(\frac{s}{\varepsilon},\bar{X}(s),\mathscr{L}_{\bar{X}(s)}\right)\right|ds\right]\notag\\ &+2\mathbb{E}\left[\sup_{0\leqslant t\leqslant u}\int_0^{t\wedge\eta_R}|X_{\varepsilon}(s)-\bar{X}(s)|\cdot\left|b\left(\frac{s}{\varepsilon},\bar{X}(s),\mathscr{L}_{\bar{X}(s)}\right)-\bar{b}\left(\bar{X}(s),\mathscr{L}_{\bar{X}(s)}\right)\right|ds\right]\notag\\ \leqslant&2L_R\mathbb{E}\left[\int_0^{u\wedge\eta_R}|X_{\varepsilon}(s)-\bar{X}(s)|^2ds\right]+2\sqrt{L}\mathbb{E}\left[\int_0^{u\wedge\eta_R}|X_{\varepsilon}(s)-\bar{X}(s)|\cdot W_2\left(\mathscr{L}_{X_{\varepsilon}(s)},\mathscr{L}_{\bar{X}(s)}\right)ds\right]\notag\\ &+2\mathbb{E}\left[\int_0^{u\wedge\eta_R}|X_{\varepsilon}(s)-\bar{X}(s)|\cdot\left|b\left(\frac{s}{\varepsilon},\bar{X}(s),\mathscr{L}_{\bar{X}(s)}\right)-\bar{b}\left(\bar{X}(s),\mathscr{L}_{\bar{X}(s)}\right)\right|ds\right]\notag\\ \leqslant&(2L_R+2\sqrt{L}+1)\mathbb{E}\left[\int_0^{u\wedge\eta_R}|X_{\varepsilon}(s)-\bar{X}(s)|^2ds\right]\notag\\ &+u\mathbb{E}\left[\frac{\varepsilon}{u\wedge\eta_R}\int_0^{\frac{u\wedge\eta_R}{\varepsilon}}\left|b\left(s,\bar{X}(s\varepsilon),\mathscr{L}_{\bar{X}(s\varepsilon)}\right)-\bar{b}\left(\bar{X}(s\varepsilon),\mathscr{L}_{\bar{X}(s\varepsilon)}\right)\right|^2ds\right]\notag\\ \leqslant&(2L_R+2\sqrt{L}+1)\int_0^u\mathbb{E}\sup_{0\leqslant 
t\leqslant s}|X_{\varepsilon}(t\wedge\eta_R)-\bar{X}(t\wedge\eta_R)|^2ds+uC_R^b\varphi_1\left(\frac{u\wedge\eta_R}{\varepsilon}\right)\left(1+\mathbb{E}\sup_{0\leqslant t\leqslant u}|\bar{X}(t)|^2\right)\notag\\ \leqslant&(2L_R+2\sqrt{L}+1)\int_0^u\mathbb{E}\sup_{0\leqslant t\leqslant s}|X_{\varepsilon}(t\wedge\eta_R)-\bar{X}(t\wedge\eta_R)|^2ds+uC_R^b\cdot C\varphi_1\left(\frac{u\wedge\eta_R}{\varepsilon}\right).\end{aligned}$$ Here we have used the fact that, for each $u\in[0,T]$, $\left(\mathbb{E}\sup_{0\leqslant t\leqslant u}|\bar{X}(t)|^2\right)^{\frac{1}{2}}\leqslant(\mathbb{E}\sup_{0\leqslant t\leqslant u}|\bar{X}(t)|^r)^{\frac{1}{r}}<\infty$ if $\mathbb{E}|X_{\varepsilon}(0)|^r<\infty$. By Assumptions **A1, A2, A8**, and using the same techniques as in the proof of ([\[I\_{1,R}\]](#I_{1,R}){reference-type="ref" reference="I_{1,R}"}), we get $$\begin{aligned} \label{I_{2,R}} \mathbb{E}\left[\sup_{0\leqslant t\leqslant u}\Lambda_{2}(t)\right]\leqslant&3\mathbb{E}\left[\sup_{0\leqslant t\leqslant u}\int_0^{t\wedge\eta_R}\left\|\sigma\left(\frac{s}{\varepsilon},X_{\varepsilon}(s),\mathscr{L}_{X_{\varepsilon}(s)}\right)-\sigma\left(\frac{s}{\varepsilon},\bar{X}(s),\mathscr{L}_{X_{\varepsilon}(s)}\right)\right\|^2 ds\right]\notag\\ &+3\mathbb{E}\left[\sup_{0\leqslant t\leqslant u}\int_0^{t\wedge\eta_R}\left\|\sigma\left(\frac{s}{\varepsilon},\bar{X}(s),\mathscr{L}_{X_{\varepsilon}(s)}\right)-\sigma\left(\frac{s}{\varepsilon},\bar{X}(s),\mathscr{L}_{\bar{X}(s)}\right)\right\|^2 ds\right]\notag\\ &+3\mathbb{E}\left[\sup_{0\leqslant t\leqslant u}\int_0^{t\wedge\eta_R}\left\|\sigma\left(\frac{s}{\varepsilon},\bar{X}(s),\mathscr{L}_{\bar{X}(s)}\right)-\bar{\sigma}\left(\bar{X}(s),\mathscr{L}_{\bar{X}(s)}\right)\right\|^2 ds\right]\notag\\ \leqslant&3L_R\mathbb{E}\left[\int_0^{u\wedge\eta_R}|X_{\varepsilon}(s)-\bar{X}(s)|^2ds\right]+3L\mathbb{E}\left[\int_0^{u\wedge\eta_R}W_2^2\left(\mathscr{L}_{X_{\varepsilon}(s)},\mathscr{L}_{\bar{X}(s)}\right)ds\right]\notag\\ 
&+3\mathbb{E}\left[\int_0^{u\wedge\eta_R}\left\|\sigma\left(\frac{s}{\varepsilon},\bar{X}(s),\mathscr{L}_{\bar{X}(s)}\right)-\bar{\sigma}\left(\bar{X}(s),\mathscr{L}_{\bar{X}(s)}\right)\right\|^2 ds\right]\notag\\ \leqslant&3(L_R+L)\int_0^u\mathbb{E}\sup_{0\leqslant t\leqslant s}|X_{\varepsilon}(t\wedge\eta_R)-\bar{X}(t\wedge\eta_R)|^2ds+3uC_R^{\sigma}\cdot C\varphi_2\left(\frac{u\wedge\eta_R}{\varepsilon}\right).\end{aligned}$$ By a similar argument to that for ([\[I\_{2,R}\]](#I_{2,R}){reference-type="ref" reference="I_{2,R}"}), using Assumptions **A1, A2, A8**, we get $$\begin{aligned} \label{I_{3,R}} \mathbb{E}\left[\sup_{0\leqslant t\leqslant u}\Lambda_{3}(t)\right]&\leqslant\mathbb{E}\left[\int_0^{u\wedge\eta_R}\int_U\left|h\left(\frac{s}{\varepsilon},X_{\varepsilon}(s),\mathscr{L}_{X_{\varepsilon}(s)},z\right)-\bar{h}\left(\bar{X}(s),\mathscr{L}_{\bar{X}(s)},z\right)\right|^2\nu(dz)ds\right] \notag\\ &\leqslant3(L_R+L)\int_0^u\mathbb{E}\sup_{0\leqslant t\leqslant s}|X_{\varepsilon}(t\wedge\eta_R)-\bar{X}(t\wedge\eta_R)|^2ds+3uC_R^{h}\cdot C\varphi_3\left(\frac{u\wedge\eta_R}{\varepsilon}\right).\end{aligned}$$ Due to the Burkholder-Davis-Gundy inequality, Young's inequality [\[YE\]](#YE){reference-type="eqref" reference="YE"} (with $\epsilon=\frac{1}{24},p=q=2$), and ([\[I\_{2,R}\]](#I_{2,R}){reference-type="ref" reference="I_{2,R}"}), we have $$\begin{aligned} \label{I_{4,R}} \mathbb{E}\left[\sup_{0\leqslant t\leqslant 
u}\Lambda_{4}(t)\right]\leqslant&2\cdot\sqrt{32}\mathbb{E}\left(\int_0^{u\wedge\eta_R}|X_{\varepsilon}(s)-\bar{X}(s)|^2\cdot\left\|\sigma\left(\frac{s}{\varepsilon},X_{\varepsilon}(s),\mathscr{L}_{X_{\varepsilon}(s)}\right)-\bar{\sigma}\left(\bar{X}(s),\mathscr{L}_{\bar{X}(s)}\right)\right\|^2ds\right)^{\frac{1}{2}}\notag\\ \leqslant&12\mathbb{E}\left[\sup_{0\leqslant t\leqslant u}|X_{\varepsilon}(t\wedge\eta_R)-\bar{X}(t\wedge\eta_R)|\cdot\left(\int_0^{u\wedge\eta_R}\left\|\sigma\left(\frac{s}{\varepsilon},X_{\varepsilon}(s),\mathscr{L}_{X_{\varepsilon}(s)}\right)-\bar{\sigma}\left(\bar{X}(s),\mathscr{L}_{\bar{X}(s)}\right)\right\|^2ds\right)^{\frac{1}{2}}\right]\notag\\ \leqslant&\frac{1}{4}\mathbb{E}\left[\sup_{0\leqslant t\leqslant u}|X_{\varepsilon}(t\wedge\eta_R)-\bar{X}(t\wedge\eta_R)|^2\right]+144\mathbb{E}\left[\int_0^{u\wedge\eta_R}\left\|\sigma\left(\frac{s}{\varepsilon},X_{\varepsilon}(s),\mathscr{L}_{X_{\varepsilon}(s)}\right)-\bar{\sigma}\left(\bar{X}(s),\mathscr{L}_{\bar{X}(s)}\right)\right\|^2ds\right]\notag\\ \leqslant&\frac{1}{4}\mathbb{E}\left[\sup_{0\leqslant t\leqslant u}|X_{\varepsilon}(t\wedge\eta_R)-\bar{X}(t\wedge\eta_R)|^2\right]+432(L_R+L)\int_0^u\mathbb{E}\sup_{0\leqslant t\leqslant s}|X_{\varepsilon}(t\wedge\eta_R)-\bar{X}(t\wedge\eta_R)|^2ds\notag\\ &+432uC_R^{\sigma}\cdot C\varphi_2\left(\frac{u\wedge\eta_R}{\varepsilon}\right).\end{aligned}$$ Analogously, in view of Kunita's first inequality, Young's inequality [\[YE\]](#YE){reference-type="eqref" reference="YE"} (with $\epsilon=\frac{1}{4D},p=q=2$) and ([\[I\_{3,R}\]](#I_{3,R}){reference-type="ref" reference="I_{3,R}"}), we arrive at $$\begin{aligned} \label{I_{5,R}} \mathbb{E}\left[\sup_{0\leqslant t\leqslant 
u}\Lambda_{5}(t)\right]\leqslant&2D\mathbb{E}\left(\int_0^{u\wedge\eta_R}\int_U|X_{\varepsilon}(s)-\bar{X}(s)|^2\cdot\left|h\left(\frac{s}{\varepsilon},X_{\varepsilon}(s),\mathscr{L}_{X_{\varepsilon}(s)},z\right)-\bar{h}\left(\bar{X}(s),\mathscr{L}_{\bar{X}(s)},z\right)\right|^2\nu(dz)ds\right)^{\frac{1}{2}}\notag\\ \leqslant&\frac{1}{4}\mathbb{E}\left[\sup_{0\leqslant t\leqslant u}|X_{\varepsilon}(t\wedge\eta_R)-\bar{X}(t\wedge\eta_R)|^2\right]\notag\\ &+4D^2\mathbb{E}\left[\int_0^{u\wedge\eta_R}\int_U\left|h\left(\frac{s}{\varepsilon},X_{\varepsilon}(s),\mathscr{L}_{X_{\varepsilon}(s)},z\right)-\bar{h}\left(\bar{X}(s),\mathscr{L}_{\bar{X}(s)},z\right)\right|^2\nu(dz)ds\right]\notag\\ \leqslant&\frac{1}{4}\mathbb{E}\left[\sup_{0\leqslant t\leqslant u}|X_{\varepsilon}(t\wedge\eta_R)-\bar{X}(t\wedge\eta_R)|^2\right]+12D^2(L_R+L)\int_0^u\mathbb{E}\sup_{0\leqslant t\leqslant s}|X_{\varepsilon}(t\wedge\eta_R)-\bar{X}(t\wedge\eta_R)|^2ds\notag\\ &+12D^2uC_R^{h}\cdot C\varphi_3\left(\frac{u\wedge\eta_R}{\varepsilon}\right).\end{aligned}$$ By substituting ([\[I\_{1,R}\]](#I_{1,R}){reference-type="ref" reference="I_{1,R}"})-([\[I\_{5,R}\]](#I_{5,R}){reference-type="ref" reference="I_{5,R}"}) into ([\[I_1\]](#I_1){reference-type="ref" reference="I_1"}), and further utilizing the Grönwall inequality, we obtain $$\begin{aligned} \label{II-1} I_1\leqslant&\mathbb{E}\left[\sup_{0\leqslant t\leqslant T}|X_{\varepsilon}(t\wedge\eta_R)-\bar{X}(t\wedge\eta_R)|^2\right]\notag\\ \leqslant&\left[2TC_R^b\cdot C\varphi_1\left(\frac{T\wedge\eta_R}{\varepsilon}\right)+870TC_R^{\sigma}\cdot C\varphi_2\left(\frac{T\wedge\eta_R}{\varepsilon}\right)+6(1+4D^2)TC_R^{h}C\varphi_3\left(\frac{T\wedge\eta_R}{\varepsilon}\right)\right]\notag\\ &\cdot e^{\left[4(L_R+\sqrt{L})+2+12\cdot(73+2D^2)(L_R+L)\right]T}.\end{aligned}$$ \(2\) Calculation of the term $I_2$.
Using the Cauchy-Schwarz inequality and Theorem [Theorem 1](#mainresult1){reference-type="ref" reference="mainresult1"}, we deduce that $$\begin{aligned} \label{II-2} I_2=&\mathbb{E}\left[\sup_{0\leqslant t\leqslant T}|X_{\varepsilon}(t)-\bar{X}(t)|^2\mathbb{I}_{\{\eta_R\leqslant T\}}\right] \leqslant\sqrt{\mathbb{E}\left(\sup_{0\leqslant t\leqslant T}|X_{\varepsilon}(t)-\bar{X}(t)|^2\right)^2}\sqrt{\mathbb{E}\left(\mathbb{I}_{\{\eta_R\leqslant T\}}\right)^2}\notag\\ \leqslant&2\sqrt{2}\sqrt{\mathbb{E}\left(\sup_{0\leqslant t\leqslant T}|X_{\varepsilon}(t)|^4+\sup_{0\leqslant t\leqslant T}|\bar{X}(t)|^4\right)}\sqrt{\mathbb{E}\left(\mathbb{I}_{\{\eta_R\leqslant T\}}\frac{|X_{\varepsilon}(\eta_R)|^4+|\bar{X}(\eta_R)|^4}{R^4}\right)}\notag\\ \leqslant&\frac{2\sqrt{2}}{R^2}\left(\mathbb{E}\sup_{0\leqslant t\leqslant T}|X_{\varepsilon}(t)|^4+\mathbb{E}\sup_{0\leqslant t\leqslant T}|\bar{X}(t)|^4\right)\leqslant \frac{C}{R^2}.\end{aligned}$$ By combining the estimates of $I_1$ and $I_2$, i.e., [\[II-1\]](#II-1){reference-type="eqref" reference="II-1"} and [\[II-2\]](#II-2){reference-type="eqref" reference="II-2"}, we conclude that $$\begin{aligned} \mathbb{E}\sup_{0\leqslant t\leqslant T}|X_{\varepsilon}(t)-\bar{X}(t)|^2\leqslant& \left[2TC_R^b\cdot C\varphi_1\left(\frac{T\wedge\eta_R}{\varepsilon}\right)+870TC_R^{\sigma}\cdot C\varphi_2\left(\frac{T\wedge\eta_R}{\varepsilon}\right)+6(1+4D^2)TC_R^{\sigma}C\varphi_3\left(\frac{T\wedge\eta_R}{\varepsilon}\right)\right]\notag\\ &\cdot e^{\left[4(L_R+\sqrt{L})+2+12\cdot(73+2D^2)(L_R+L^2)\right]T}+\frac{C}{R^2}.\notag\end{aligned}$$ Now, for any $\delta>0$, we can choose $R>0$ sufficiently large such that $\frac{C}{R^2}<\frac{\delta}{2}$.
Furthermore, by taking $\varepsilon$ small enough and using the averaging condition **A8**, we obtain $$\left[2TC_R^b\cdot C\varphi_1\left(\frac{T\wedge\eta_R}{\varepsilon}\right)+870TC_R^{\sigma}\cdot C\varphi_2\left(\frac{T\wedge\eta_R}{\varepsilon}\right)+6(1+4D^2)TC_R^{\sigma}C\varphi_3\left(\frac{T\wedge\eta_R}{\varepsilon}\right)\right]\cdot e^{\left[4(L_R+\sqrt{L})+2+12\cdot(73+2D^2)(L_R+L^2)\right]T}<\frac{\delta}{2}.$$ Therefore, since $\delta$ is arbitrary, $\mathbb{E}\sup_{0\leqslant t\leqslant T}|X_{\varepsilon}(t)-\bar{X}(t)|^2$ converges to 0 as $\varepsilon$ goes to 0. This completes the proof. ◻ # Example {#sec4} In this section, we present an example illustrating the theoretical results of this paper.\ **Example 4.1.** Consider the following one-dimensional McKean-Vlasov SDE $$\begin{aligned} \label{Example-equation} dX_{\varepsilon}(t)=&\left[\left(X_{\varepsilon}(t)-X_{\varepsilon}^3(t)\right)\frac{\frac{t}{\varepsilon}}{1+\frac{t}{\varepsilon}}+\mathbb{E}X_{\varepsilon}(t)\right]dt+\left[X_{\varepsilon}(t)\sin\left(\log^2\left(1+X_{\varepsilon}^2(t)\right)\right)\frac{\frac{t}{\varepsilon}}{2+\frac{t}{\varepsilon}}+\mathbb{E}X_{\varepsilon}(t)\right]dW(t) \notag\\ &+\int_{U}\left[X_{\varepsilon}(t)\sin\left(\log^{\frac{3}{2}}\left(1+X_{\varepsilon}^2(t)\right)\right)\left(1-e^{-\frac{t}{\varepsilon}}\right)+\mathbb{E}X_{\varepsilon}(t)\right]\tilde{N}(dt, dz), \;\;\; t\in[0,T],\end{aligned}$$ with $X_{\varepsilon}(0)=x_0$ being a constant. Here, $W(t)$ is a scalar Wiener process, $U=\mathbb{R}\backslash \{ 0\}$, and $\nu$ is a finite measure with $\nu(U)=1$. Define $$b(t,x,\mu)=(x-x^3)\frac{t}{1+t}+\int_{\mathbb{R}}y\mu(dy), \;\; \sigma(t,x,\mu)=\psi(x)\frac{t}{2+t}+\int_{\mathbb{R}}y\mu(dy),\;\; h(t,x,\mu,z)=\phi(x)(1-e^{-t})+\int_{\mathbb{R}}y\mu(dy),$$ where $\psi(x)=x\sin(\log^2(1+x^2))$ and $\phi(x)=x\sin(\log^{\frac{3}{2}}(1+x^2))$ are two continuously differentiable functions.
For any $x\in \mathbb{R}$, we can calculate that $$\label{Twoab} |\psi(x)|\leqslant|x|, \;\;|\phi(x)|\leqslant |x|,\;\;|(\partial_x\psi)(x)|\leqslant 1+4\log(1+x^2) \;\;\text{and}\;\;|(\partial_x\phi)(x)|\leqslant 1+3\sqrt{\log(1+x^2)}.$$ **(1) Well-posedness.** To show that ([\[Example-equation\]](#Example-equation){reference-type="ref" reference="Example-equation"}) has a unique solution $(X_{\varepsilon}(t))_{0\leqslant t\leqslant T}$, we need to examine that the conditions in Theorem [Theorem 1](#mainresult1){reference-type="ref" reference="mainresult1"} are satisfied. For any $R>0$, $x,y\in\mathbb{R}$ with $|x|\vee|y|\leqslant R$ and $\mu\in\mathcal{M}_2(\mathbb{R})$, one computes $$\begin{aligned} (x-y)(b(t,x,\mu)-b(t,y,\mu))&=(x-y)(x-x^3-y+y^3)\frac{t}{1+t}\leqslant |x-y|^2-|x-y|^2(x^2+xy+y^2)\notag\\ &\leqslant |x-y|^2(1-xy)\leqslant (1+R^2)|x-y|^2:=L_R^1|x-y|^2,\notag\end{aligned}$$ $$\begin{aligned} |\sigma(t,x,\mu)-\sigma(t,y,\mu)|^2&=\left|\psi(x)-\psi(y)\right|^2\left(\frac{t}{2+t}\right)^2\leqslant \left|\int_0^1(\partial_x\psi)(y+\theta(x-y))(x-y)d\theta\right|^2\notag\\ &\leqslant \left[\sup_{|z|\leqslant R}\left|(\partial_x\psi)(z)\right|\right]^2\cdot|x-y|^2\leqslant \left[\sup_{|z|\leqslant R}\left(1+4\log(1+z^2)\right)\right]^2\cdot|x-y|^2\notag\\ &\leqslant \left(1+4\log(1+R^2)\right)^2|x-y|^2 := L_R^2|x-y|^2,\notag\end{aligned}$$ $$\begin{aligned} \int_U|h(t,x,\mu,z)-h(t,y,\mu,z)|^2\nu(dz)&=\left|\phi(x)-\phi(y)\right|^2(1-e^{-t})^2\leqslant \left|\int_0^1(\partial_x\phi)(y+\theta(x-y))(x-y)d\theta\right|^2\notag\\ &\leqslant \left[\sup_{|z|\leqslant R}|(\partial_x\phi)(z)|\right]^2\cdot|x-y|^2\leqslant \left[\sup_{|z|\leqslant R}\left(1+3\sqrt{\log(1+z^2)}\right)\right]^2\cdot|x-y|^2\notag\\ &\leqslant \left(1+3\sqrt{\log(1+R^2)}\right)^2|x-y|^2 := L_R^3|x-y|^2.\notag\end{aligned}$$ Denote $L_R=\max\{L_R^1,L_R^2,L_R^3\}$. It implies that Assumption **A1** is satisfied. 
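The elementary bounds in ([\[Twoab\]](#Twoab){reference-type="ref" reference="Twoab"}) can be spot-checked numerically. The sketch below is purely illustrative and not part of the original argument: it evaluates $\psi$, $\phi$ and their derivatives (the derivative formulas are computed by hand from the definitions) on a grid and verifies the four stated inequalities.

```python
import numpy as np

def psi(x):
    return x * np.sin(np.log1p(x**2) ** 2)

def phi(x):
    return x * np.sin(np.log1p(x**2) ** 1.5)

def dpsi(x):
    # psi'(x) = sin(L^2) + (4x^2/(1+x^2)) * L * cos(L^2), with L = log(1+x^2)
    L = np.log1p(x**2)
    return np.sin(L**2) + (4 * x**2 / (1 + x**2)) * L * np.cos(L**2)

def dphi(x):
    # phi'(x) = sin(L^{3/2}) + (3x^2/(1+x^2)) * sqrt(L) * cos(L^{3/2})
    L = np.log1p(x**2)
    return np.sin(L**1.5) + (3 * x**2 / (1 + x**2)) * np.sqrt(L) * np.cos(L**1.5)

x = np.linspace(-50.0, 50.0, 20001)
checks = [
    np.all(np.abs(psi(x)) <= np.abs(x) + 1e-12),
    np.all(np.abs(phi(x)) <= np.abs(x) + 1e-12),
    np.all(np.abs(dpsi(x)) <= 1 + 4 * np.log1p(x**2) + 1e-12),
    np.all(np.abs(dphi(x)) <= 1 + 3 * np.sqrt(np.log1p(x**2)) + 1e-12),
]
```

Of course, such a grid check only illustrates the inequalities; the proof follows from the triangle inequality applied to the derivative formulas, since $\frac{x^2}{1+x^2}\leqslant 1$.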
According to the expressions of $b$, $\sigma$ and $h$, for any $x\in\mathbb{R}$ and $\mu_1,\mu_2\in\mathcal{M}_2(\mathbb{R})$, we calculate $$\begin{aligned} &|b(t,x,\mu_1)-b(t,x,\mu_2)|^2+\|\sigma(t,x,\mu_1)-\sigma(t,x,\mu_2)\|^2+\int_U\left|h(t,x,\mu_1,z)-h(t,x,\mu_2,z)\right|^2\nu(dz)\notag\\ =&3\left|\int_{\mathbb{R}}y\mu_1(dy)-\int_{\mathbb{R}}y\mu_2(dy)\right|^2\leqslant3W_2^2(\mu_1,\mu_2),\notag\end{aligned}$$ which means that $b$, $\sigma$ and $h$ meet Assumptions **A2-A3**. In addition, owing to ([\[Twoab\]](#Twoab){reference-type="ref" reference="Twoab"}) and the boundedness of $\frac{t}{1+t}$, $\frac{t}{2+t}$ and $1-e^{-t}$, we can deduce that, for any $x\in\mathbb{R}$, $\mu\in\mathcal{M}_2(\mathbb{R}),$ $$\begin{aligned} x\cdot b(t,x,\mu)&\leqslant x(x-x^3)\left(\frac{t}{1+t}\right)+x\int_{\mathbb{R}}y\mu(dy)\leqslant x^2+\frac{1}{2}x^2+\frac{1}{2}\left(\int_{\mathbb{R}}y\mu(dy)\right)^2\leqslant 2\left(1+x^2+W_2^{2}(\mu,\delta_0)\right), \notag\end{aligned}$$ $$\begin{aligned} |\sigma(t,x,\mu)|^2=\left|\psi(x)\frac{t}{2+t}+\int_{\mathbb{R}}y\mu(dy)\right|^2\leqslant2|\psi(x)|^2+2\left(\int_{\mathbb{R}}y\mu(dy)\right)^2 \leqslant 2\left(1+|x|^2+W_2^2(\mu,\delta_0)\right),\notag\end{aligned}$$ $$\begin{aligned} \int_U|h(t,x,\mu,z)|^2\nu(dz)=\left|\phi(x)(1-e^{-t})+\int_{\mathbb{R}}y\mu(dy)\right|^2\leqslant2|\phi(x)|^2+2\left(\int_{\mathbb{R}}y\mu(dy)\right)^2 \leqslant 2\left(1+|x|^2+W_2^2(\mu,\delta_0)\right).\notag\end{aligned}$$ Thus Assumption **A4** holds. Note that, for any $x\in\mathbb{R}$, $\mu\in\mathcal{M}_2(\mathbb{R}),$ $$\begin{aligned} |b(t,x,\mu)|^2\leqslant2(x-x^3)^2\left(\frac{t}{1+t}\right)^2+2\left(\int_{\mathbb{R}}y\mu(dy)\right)^2\leqslant 2x^2-4x^4+2x^6+2W_2^2(\mu,\delta_0)\leqslant 4\left(1+x^6+W_2^{6}(\mu,\delta_0)\right). \notag\end{aligned}$$ Hence Assumption **A5** holds with $\kappa=6$.
In addition, note that $X_{\varepsilon}(0)$ is a constant, thus Assumption **A6** (with $r\geqslant 18$) naturally holds. Due to the expression of $h$ and the finiteness of $\nu$, Assumption **A7** can be verified easily using the same technique as for Assumptions **A1-A2**. **(2) The averaging principle.** Define $$\bar{b}(x,\mu)=x-x^3+\int_{\mathbb{R}}y\mu(dy), \;\;\bar{\sigma}(x,\mu)=\psi(x)+\int_{\mathbb{R}}y\mu(dy), \;\; \bar{h}(x,\mu,z)=\phi(x)+\int_{\mathbb{R}}y\mu(dy).$$ Then we can verify that the averaging conditions in Assumptions **A8-A9** (with $\kappa=6$ and $r\geqslant 18$) are satisfied: $$\begin{aligned} \frac{1}{t}\int_{0}^{t}|b(s,x,\mu)-\bar{b}(x,\mu)|^2ds=\frac{1}{t}\int_{0}^{t}|x-x^3|^2\left[1-\frac{s}{1+s}\right]^2ds=x^2(1-x^2)^2\frac{1}{1+t}\leqslant \varphi_1(t)C_R^b(1+|x|^2)\notag\end{aligned}$$ $$\begin{aligned} \frac{1}{t}\int_{0}^{t}|\sigma(s,x,\mu)-\bar{\sigma}(x,\mu)|^2ds=\frac{1}{t}\int_{0}^{t}\psi^2(x)\left[1-\frac{s}{2+s}\right]^2ds=\psi^2(x)\frac{2}{2+t}\leqslant \varphi_2(t)C_R^{\sigma}(1+|x|^2)\notag\end{aligned}$$ $$\begin{aligned} \frac{1}{t}\int_{0}^{t}\int_U|h(s,x,\mu,z)-\bar{h}(x,\mu,z)|^2\nu(dz)ds=\frac{1}{t}\int_{0}^{t}\phi^2(x)[1-(1-e^{-s})]^2ds=\phi^2(x)\frac{1-e^{-2t}}{2t}\leqslant \varphi_3(t)C_R^{h}(1+|x|^2),\notag\end{aligned}$$ and $$\begin{aligned} \frac{1}{t}\int_{0}^{t}\int_U|h(s,x,\mu,z)-\bar{h}(x,\mu,z)|^l\nu(dz)ds=\frac{1}{t}\int_{0}^{t}\phi^l(x)[1-(1-e^{-s})]^lds %=\phi^l(x)\frac{1}{t}\int_{0}^{t}e^{-ls}ds =\phi^l(x)\frac{1-e^{-lt}}{lt} \leqslant \varphi(t)C_R^{h}(1+|x|^l),\;\;l=r \text{ or }\kappa,\notag\end{aligned}$$ for $x\in\mathbb{R}$ with $|x|\leqslant R$, where $\varphi_1(t)=\frac{1}{1+t}$, $\varphi_2(t)=\frac{1}{1+t}$, $\varphi_3(t)=\frac{1-e^{-2t}}{2t}$ and $\varphi(t)=\frac{1-e^{-lt}}{lt}$ are continuous and positive bounded functions with $\lim_{t\to\infty}\varphi_i(t)=\lim_{t\to\infty}\varphi(t)=0,\;i=1,2,3$.
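As a sanity check, the closed-form time averages above can be compared against numerical quadrature. The sketch below is our own illustration (with a hand-rolled trapezoid rule to stay dependency-free); it verifies the drift average $\frac{1}{t}\int_0^t(1+s)^{-2}\,ds=\frac{1}{1+t}$ and the jump average $\frac{1}{t}\int_0^t e^{-2s}\,ds=\frac{1-e^{-2t}}{2t}$.

```python
import numpy as np

def time_average(f, t, n=200_000):
    """(1/t) * integral_0^t f(s) ds via the trapezoid rule on a uniform grid."""
    s = np.linspace(0.0, t, n + 1)
    y = f(s)
    return float(np.sum((y[:-1] + y[1:]) * (s[1] - s[0]) / 2) / t)

t = 3.0
# drift factor: [1 - s/(1+s)]^2 = (1+s)^{-2}; its time average is 1/(1+t)
avg_b = time_average(lambda s: (1 - s / (1 + s)) ** 2, t)
# jump factor: [1 - (1 - e^{-s})]^2 = e^{-2s}; its time average is (1 - e^{-2t})/(2t)
avg_h = time_average(lambda s: np.exp(-2 * s), t)
```

Both averages decay to zero in $t$, which is exactly what conditions **A8-A9** require of $\varphi_1$ and $\varphi_3$.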
Based on the above discussion and the result of Theorem [Theorem 2](#Th){reference-type="ref" reference="Th"}, the solution of equation ([\[Example-equation\]](#Example-equation){reference-type="ref" reference="Example-equation"}) can be approximated by the following equation (with $\bar{X}(0)=x_0$) $$\begin{aligned} \label{AExample-equation} d\bar{X}(t)=&\left[\left(\bar{X}(t)-\bar{X}^3(t)\right)+\mathbb{E}\bar{X}(t)\right]dt+\left[\bar{X}(t)\sin\left(\log^2\left(1+\bar{X}^2(t)\right)\right)+\mathbb{E}\bar{X}(t)\right]dW(t) \notag\\ &+\int_{U}\left[\bar{X}(t)\sin\left(\log^{\frac{3}{2}}\left(1+\bar{X}^2(t)\right)\right)+\mathbb{E}\bar{X}(t)\right]\tilde{N}(dt,dz), \;\;\; t\in[0,T],\end{aligned}$$ in the sense of mean square. Now we carry out numerical simulations to compute the solutions of ([\[Example-equation\]](#Example-equation){reference-type="ref" reference="Example-equation"}) and ([\[AExample-equation\]](#AExample-equation){reference-type="ref" reference="AExample-equation"}) with $x_0=1$ and $T=10$, for $\varepsilon=0.01$ and $\varepsilon=0.001$, respectively. Fig. [\[1a\]](#1a){reference-type="ref" reference="1a"} and Fig. [\[1b\]](#1b){reference-type="ref" reference="1b"} compare the exact solution $X_{\varepsilon}(t)$ of ([\[Example-equation\]](#Example-equation){reference-type="ref" reference="Example-equation"}) with the averaged solution $\bar{X}(t)$ of ([\[AExample-equation\]](#AExample-equation){reference-type="ref" reference="AExample-equation"}). As we can see, the solutions of the original equation and the averaged equation agree well. One can further observe that the error $X_{\varepsilon}(t)-\bar{X}(t)$ decreases as $\varepsilon$ decreases from $0.01$ to $0.001$. This result agrees well with our averaging principle (Theorem [Theorem 2](#Th){reference-type="ref" reference="Th"}).
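The simulation just described can be reproduced in outline as follows. This is a minimal sketch of our own, not the authors' code: an Euler scheme for the $N$-particle approximation, in which the mean-field terms $\mathbb{E}X(t)$ are replaced by particle averages and the compensated Poisson integral by a rate-one compensated Poisson increment (legitimate here because $h$ does not depend on $z$ and $\nu(U)=1$). The horizon and step size are scaled down for speed.

```python
import numpy as np

def psi(x):
    return x * np.sin(np.log1p(x**2) ** 2)

def phi(x):
    return x * np.sin(np.log1p(x**2) ** 1.5)

def simulate(eps=0.01, averaged=False, T=1.0, dt=1e-3, N=500, x0=1.0, seed=0):
    """Euler scheme for the N-particle approximation of (4.1) (or (4.3) if averaged).

    A fixed seed makes the two runs share the same Brownian and Poisson
    increments, so X_eps and X_bar are coupled pathwise.
    """
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.full(N, x0)
    path = np.empty((n + 1, N))
    path[0] = x
    for k in range(n):
        dW = rng.standard_normal(N) * np.sqrt(dt)
        dN = rng.poisson(dt, N)  # rate-1 Poisson increments
        if averaged:
            cb = cs = ch = 1.0   # averaged coefficients, as in (4.3)
        else:
            s = k * dt / eps
            cb, cs, ch = s / (1 + s), s / (2 + s), 1 - np.exp(-s)
        m = x.mean()             # particle approximation of E[X(t)]
        x = (x + ((x - x**3) * cb + m) * dt
               + (psi(x) * cs + m) * dW
               + (phi(x) * ch + m) * (dN - dt))
        path[k + 1] = x
    return path
```

Running `simulate(eps=0.01)` against `simulate(averaged=True)` with the same seed gives coupled trajectories whose mean-square gap should shrink as $\varepsilon$ decreases, mirroring Fig. 1.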
![(Color online) Comparison of the interacting N-particle systems associated with McKean-Vlasov SDEs (4.1) and (4.3), for $N=100$ and $\varepsilon=0.01$. ](Figures/comp-001.png){width="66.5%"} We remark that in the numerical simulation, to approximate the McKean-Vlasov SDEs ([\[Example-equation\]](#Example-equation){reference-type="ref" reference="Example-equation"}) and ([\[AExample-equation\]](#AExample-equation){reference-type="ref" reference="AExample-equation"}), we have used $N$-particle interacting systems, which can be regarded as usual SDEs; this approximation is justified by the so-called propagation of chaos result (see Appendix). Moreover, we note that, to simulate the integrals w.r.t. the compensated Poisson random measure $\tilde N(dt,dz) = N(dt,dz) - \nu (dz)dt$, we have applied the technique of introducing a compound Poisson process $\int_U z\tilde N(t,dz)$; see Section 4.3.2 in [@Applebaum2009]. For our example, we simulate $N=100$ particles with time step $0.01$ and $T=10$. Fig. 2(a) and Fig. 2(b) show the realizations of the interacting particle systems associated with the McKean-Vlasov SDEs ([\[Example-equation\]](#Example-equation){reference-type="ref" reference="Example-equation"}) and ([\[AExample-equation\]](#AExample-equation){reference-type="ref" reference="AExample-equation"}), respectively, with the same initial conditions $X_{\varepsilon}(0)=\bar{X}(0)=1$ and $\varepsilon=0.01$. Numerically, the Wasserstein distance between the distributions of ([\[Example-equation\]](#Example-equation){reference-type="ref" reference="Example-equation"}) (i.e., $\mathscr{L}_{X_{\varepsilon}(t)}$) and ([\[AExample-equation\]](#AExample-equation){reference-type="ref" reference="AExample-equation"}) (i.e., $\mathscr{L}_{\bar{X}(t)}$) is thus approximated by that of the interacting particle systems; see Fig. 2(c).
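For the Wasserstein comparison in Fig. 2(c), the distance between two one-dimensional empirical measures has a closed form: sorting both samples realizes the optimal coupling. A minimal sketch (our illustration, not the authors' code):

```python
import numpy as np

def empirical_w2(xs, ys):
    """W_2 between two equal-size empirical measures on the real line.

    In dimension one the optimal transport plan matches order statistics,
    so W_2^2 = (1/N) * sum_i |x_(i) - y_(i)|^2.
    """
    xs = np.sort(np.asarray(xs, dtype=float))
    ys = np.sort(np.asarray(ys, dtype=float))
    if xs.shape != ys.shape:
        raise ValueError("expected samples of equal size")
    return float(np.sqrt(np.mean((xs - ys) ** 2)))
```

Applied at each time step to the particle clouds of (4.1) and (4.3), this produces a distance curve analogous to the one in Fig. 2(c).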
# Acknowledgements {#acknowledgements .unnumbered} This research was supported by the National Natural Science Foundation of China (12101484, 12141107), the Fundamental Research Funds for the Central Universities (xzy012022001, 5003011053), and the China Postdoctoral Science Foundation (2022TQ0009, 2022M720264). # Appendix {#appendix .unnumbered} ***A. The details of the proof for Lemma [Lemma 4](#lemma3-1){reference-type="ref" reference="lemma3-1"}**.* For all $t\in[0,T]$, $x,y\in\mathbb{R}^d$ and $\mu,\mu_1,\mu_2 \in\mathcal{M}_{2}(\mathbb{R}^d)$, we calculate successively that $$\begin{aligned} \left\langle x-y, \bar{b}(x,\mu)-\bar{b}(y,\mu)\right\rangle %=&\frac{1}{t}\int_0^t\left[\left\langle x-y,\bar{b}(x,\mu)-b(s,x,\mu)\right\rangle+\left\langle x-y,b(s,y,\mu)-\bar{b}(y,\mu)\right\rangle\right] ds+\frac{1}{t}\int_0^t\left\langle x-y, b(s,x,\mu)-b(s,y,\mu)\right\rangle ds\\ \leqslant&|x-y|^2+\frac{1}{2t}\int_0^t|b(s,x,\mu)-\bar{b}(x,\mu)|^2+|b(s,y,\mu)-\bar{b}(y,\mu)|^2ds+L_R|x-y|^2\notag\\ \leqslant&\varphi_1(t)C_R^b(1+|x|^2+|y|^2)+(\frac{1}{2}+L_R)|x-y|^2,\notag\\ \left\langle x, \bar{b}(x,\mu)\right\rangle %=&\left\langle x,\frac{1}{t}\int_0^t \bar{b}(x,\mu)-b(s,x,\mu)ds\right\rangle+\frac{1}{t}\int_0^t\langle x, b(s,x,\mu)\rangle ds\\ \leqslant&\frac{|x|^2}{2}+\frac{1}{2t}\int_0^t|b(s,x,\mu)-\bar{b}(x,\mu)|^2ds+K(1+|x|^2+W_2^2(\mu,\delta_0))\notag\\ \leqslant&\frac{1}{2}\varphi_1(t)C_R^b(1+|x|^2)+\frac{3}{2}K(1+|x|^2+W_2^2(\mu,\delta_0)),\notag\\ |\bar{b}(x,\mu_1)-\bar{b}(x,\mu_2)|^2 \leqslant&\frac{3}{t}\int_0^t|b(s,x,\mu_1)-\bar{b}(x,\mu_1)|^2+|b(s,x,\mu_2)-\bar{b}(x,\mu_2)|^2+|b(s,x,\mu_1)-b(s,x,\mu_2)|^2ds\notag\\ \leqslant&6C_R^b\varphi_1(t)(1+|x|^2)+3LW_2^2(\mu_1,\mu_2),\notag\\ |\bar{b}(x,\mu_1)-\bar{b}(y,\mu_2)|^2 \leqslant&\frac{3}{t}\int_0^t|b(s,x,\mu_1)-\bar{b}(x,\mu_1)|^2+|b(s,y,\mu_2)-\bar{b}(y,\mu_2)|^2+|b(s,x,\mu_1)-b(s,y,\mu_2)|^2ds\notag\\
\leqslant&6C_R^b\varphi_1(t)(1+|x|^2)+\frac{3}{t}\int_0^t|b(s,x,\mu_1)-b(s,y,\mu_2)|^2ds,\notag\\ |\bar{b}(x,\mu)|^2 \leqslant& 2\Big|\frac{1}{t}\int_0^t b(s,x,\mu)-\bar{b}(x,\mu)ds\Big|^2+2\Big|\frac{1}{t}\int_0^tb(s,x,\mu)ds\Big|^2\notag\\ \leqslant& 2C_R^b\varphi_1(t)(1+|x|^2)+2K(1+|x|^{\kappa}+W_2^{\kappa}(\mu,\delta_0)).\notag\end{aligned}$$ Similar results for $\bar{\sigma}$ and $\bar{h}$ are also available. Letting $t\to\infty$ in the above estimates, we conclude that the averaged equation ([\[ASDE\]](#ASDE){reference-type="ref" reference="ASDE"}) satisfies Assumptions **A1-A5**. We next check the extra conditions for $\bar{h}$ by calculating that (for $l=r$ or $\kappa$) $$\begin{aligned} \int_U|\bar{h}(x,\mu,z)|^l\nu(dz) &\leqslant\frac{2^{l-1}}{t}\int_0^t \int_U|h(s,x,\mu,z)-\bar{h}(x,\mu,z)|^l+|h(s,x,\mu,z)|^l\nu(dz)ds\notag\\ &\leqslant 2^{l-1}C_R^{h}\varphi(t)(1+|x|^l)+2^{l-1}K(1+|x|^{l}+W_2^{l}(\mu,\delta_0)),\notag\\ \int_U|\bar{h}(x,\mu,z)-\bar{h}(y,\mu,z)|^{\kappa}\nu(dz) \leqslant&\frac{3^{\kappa-1}}{t}\int_0^t\int_U|h(s,x,\mu,z)-\bar{h}(x,\mu,z)|^{\kappa}+|h(s,y,\mu,z)-\bar{h}(y,\mu,z)|^{\kappa}\notag\\ &+|h(s,x,\mu,z)-h(s,y,\mu,z)|^{\kappa}\nu(dz)ds\notag\\ \leqslant& 3^{\kappa-1}\varphi(t)C_R^{h}(2+|x|^{\kappa}+|y|^{\kappa})+3^{\kappa-1}L_R^{'}|x-y|^{\kappa}, \notag\\ \int_U|\bar{h}(x,\mu_1,z)-\bar{h}(x,\mu_2,z)|^{\kappa}\nu(dz) \leqslant& \frac{3^{\kappa-1}}{t}\int_0^t\int_U|h(s,x,\mu_1,z)-\bar{h}(x,\mu_1,z)|^{\kappa}+|h(s,x,\mu_2,z)-\bar{h}(x,\mu_2,z)|^{\kappa}\notag\\ &+\int_U|h(s,x,\mu_1,z)-h(s,x,\mu_2,z)|^{\kappa}\nu(dz)ds\notag\\ \leqslant&2\cdot 3^{\kappa-1}\varphi(t)C_R^{h}(1+|x|^{\kappa})+3^{\kappa-1}L^{'}W_2^{\kappa}(\mu_1,\mu_2),\notag\end{aligned}$$ and then taking $t\to\infty$. That is, Assumption **A7** holds. ◻ For $N\geqslant1$ and $i=1,2,\ldots, N$, let $(W^i,\tilde{N}^i,X^i(0))$ be independent copies of $(W,\tilde{N},X(0))$.
We introduce the non-interacting particle system associated with the McKean-Vlasov SDE ([\[Main-equation\]](#Main-equation){reference-type="ref" reference="Main-equation"}): $$\begin{aligned} \label{Non-Main-equation} &dX^i(t)=b\left(t,X^i(t),\mathscr{L}_{X^i(t)}\right) dt+\sigma\left(t,X^i(t),\mathscr{L}_{X^i(t)}\right)dW^i(t)+\int_{U}h\left(t,X^i(t),\mathscr{L}_{X^i(t)},z\right)\tilde{N}^i(dt,dz)\end{aligned}$$ on $t\in[0,T]$ with initial data $X^i(0)$. According to Theorem [Theorem 1](#mainresult1){reference-type="ref" reference="mainresult1"}, one has $\mathscr{L}_{X^i(t)}=\mathscr{L}_{X(t)}$, $i=1,2,\ldots, N$. We also consider the associated interacting particle system $$\begin{aligned} \label{Inter-Main-equation} &dX^{i,N}(t)=b\left(t,X^{i,N}(t),\mu_t^{X,N}\right) dt+\sigma\left(t,X^{i,N}(t),\mu_t^{X,N}\right)dW^{i}(t)+\int_{U}h\left(t,X^{i,N}(t),\mu_t^{X,N},z\right)\tilde{N}^{i}(dt,dz)\end{aligned}$$ with initial data $X^{i,N}(0)=X^i(0)$, where $\mu_t^{X,N}$ is an empirical measure of $N$ particles given by $\mu_t^{X,N}:=\frac{1}{N}\sum_{j=1}^N\delta_{X^{j,N}(t)}$. We next state and prove the propagation of chaos result. **Proposition 1**. *(Propagation of chaos) Suppose Assumptions **A1-A7** hold and $r\geqslant4$. 
Then, the interacting particle system ([\[Inter-Main-equation\]](#Inter-Main-equation){reference-type="ref" reference="Inter-Main-equation"}) is well-posed and converges to the non-interacting particle system ([\[Non-Main-equation\]](#Non-Main-equation){reference-type="ref" reference="Non-Main-equation"}), that is, $$\label{Poc} \lim_{N\to\infty}\sup_{1\leqslant i\leqslant N}\sup_{0\leqslant t\leqslant T}\mathbb{E}\left|X^i(t)-X^{i,N}(t)\right|^2=0.$$* *Proof.* First, note that the interacting particle system $\{X^{i,N}\}_{1\leqslant i\leqslant N}$ given in equation ([\[Inter-Main-equation\]](#Inter-Main-equation){reference-type="ref" reference="Inter-Main-equation"}) can be regarded as a system of ordinary SDEs driven by Lévy noise taking values in $\mathbb{R}^{d\times N}$. Thus, according to Theorem 1.1 of [@Majka2016note], it has a unique càdlàg solution under Assumptions **A1, A4, A5** such that $$\sup_{1\leqslant i\leqslant N}\mathbb{E}\sup_{0\leqslant t\leqslant T}\left|X^{i,N}(t)\right|^4\leqslant C,$$ for any $N\geqslant1$, where $C>0$ is independent of $N$. To treat the one-sided local Lipschitz case, for any $1\leqslant i\leqslant N$ and $R>0$, define the stopping time: $$\zeta_R:=\inf\left\{t\in[0,T]: |X^i(t)|\vee|X^{i,N}(t)|> R\right\}.$$ Then, splitting over the events $\{\zeta_R>T\}$ and $\{\zeta_R\leqslant T\}$, we arrive at $$\sup_{0\leqslant t\leqslant T}\mathbb{E}\left|X^i(t)-X^{i,N}(t)\right|^{2}\leqslant \sup_{0\leqslant t\leqslant T}\mathbb{E}\left[\left|X^i(t)-X^{i,N}(t)\right|^{2}\mathbb{I}_{\{\zeta_R>T\}}\right]+\sup_{0\leqslant t\leqslant T}\mathbb{E}\left[\left|X^i(t)-X^{i,N}(t)\right|^{2}\mathbb{I}_{\{\zeta_R\leqslant T\}}\right]:=Q_1+Q_2,$$ where $\mathbb{I}_A$ is the indicator function of a set $A$. Similar to the proof of Theorem [Theorem 2](#Th){reference-type="ref" reference="Th"}, we now estimate $Q_1$ and $Q_2$ respectively. \(1\) Estimation of the term $Q_1$.
Note that $$\label{Q_1} Q_1=\sup_{0\leqslant t\leqslant T}\mathbb{E}\left[\left|X^i(t)-X^{i,N}(t)\right|^{2}\mathbb{I}_{\{\zeta_R>T\}}\right]\leqslant \sup_{0\leqslant t\leqslant T}\mathbb{E}\left|X^i(t\wedge\zeta_R)-X^{i,N}(t\wedge\zeta_R)\right|^{2}.$$ By Itô formula, $$\begin{aligned} \mathbb{E}\left|X^i(t\wedge\zeta_R)-X^{i,N}(t\wedge\zeta_R)\right|^{2} =&2\mathbb{E}\int_0^{t\wedge\zeta_R}\left\langle X^i(s)-X^{i,N}(s), b\left(s,X^i(s),\mathscr{L}_{X^i(s)}\right)-b\left(s,X^{i,N}(s),\mu_s^{X,N}\right)\right\rangle ds\notag\\ &+\mathbb{E}\int_0^{t\wedge\zeta_R}\left\| \sigma\left(s,X^i(s),\mathscr{L}_{X^i(s)}\right)-\sigma\left(s,X^{i,N}(s),\mu_s^{X,N}\right) \right\|^2 ds\notag\\ &+\mathbb{E}\int_0^{t\wedge\zeta_R}\int_U\left|h\left(s,X^i(s),\mathscr{L}_{X^i(s)},z\right)-h\left(s,X^{i,N}(s),\mu_s^{X,N},z\right)\right|^2 \nu(dz)ds:=\sum_{i=1}^3Q_{i,R}(t).\notag \end{aligned}$$ We estimate $Q_{i,R}$, $i=1,2,3$ by Assumptions **A1-A2** and obtain that $$\begin{aligned} Q_{1,R}(t) %\leqslant&2\mathbb{E}\int_0^{t\wedge\zeta_R}\left\langle X^i(s)-X^{i,N}(s), b\left(s,X^i(s),\mathscr{L}_{X^i(s)}\right)-b\left(s,X^{i,N}(s),\mathscr{L}_{X^i(s)}\right)\right\rangle ds\notag\\ %&+2\mathbb{E}\int_0^{t\wedge\zeta_R}\left|X^i(s)-X^{i,N}(s)\right|\cdot \left|b\left(s,X^{i,N}(s),\mathscr{L}_{X^{i}(s)}\right)-b\left(s,X^{i,N}(s),\mu_s^{X,N}\right)\right|ds\notag\\ \leqslant&(2L_R+1)\int_0^{t\wedge\zeta_R}\mathbb{E}\left|X^i(s)-X^{i,N}(s)\right|^2ds+L\int_0^{t\wedge\zeta_R} \mathbb{E}W^2\left(\mathscr{L}_{X^{i}(s)},\mu_s^{X,N}\right)ds,\notag\\ Q_{2,R}(t) %\leqslant&2\mathbb{E}\int_0^{t\wedge\zeta_R}\left\|\sigma\left(s,X^i(s),\mathscr{L}_{X^i(s)}\right)-\sigma\left(s,X^{i,N}(s),\mathscr{L}_{X^i(s)}\right)\right\|^2ds+2\mathbb{E}\int_0^{t\wedge\zeta_R}\left\|\sigma\left(s,X^{i,N}(s),\mathscr{L}_{X^{i}(s)}\right)-\sigma\left(s,X^{i,N}(s),\mu_s^{X,N}\right)\right\|^2ds\notag\notag\\ 
\leqslant&2L_R\int_0^{t\wedge\zeta_R}\mathbb{E}\left|X^i(s)-X^{i,N}(s)\right|^2ds+L\int_0^{t\wedge\zeta_R}\mathbb{E}W_2^2\left(\mathscr{L}_{X^{i}(s)},\mu_s^{X,N}\right)ds,\notag\\ Q_{3,R}(t)\leqslant &2L_R\int_0^{t\wedge\zeta_R}\mathbb{E}\left|X^i(s)-X^{i,N}(s)\right|^2ds+L\int_0^{t\wedge\zeta_R}\mathbb{E}W_2^2\left(\mathscr{L}_{X^{i}(s)},\mu_s^{X,N}\right)ds,\notag\end{aligned}$$ where $$\int_0^{t\wedge\zeta_R}\mathbb{E}W_2^2\left(\mathscr{L}_{X^{i}(s)},\mu_s^{X,N}\right)ds \leqslant 2\int_0^{t\wedge\zeta_R}\mathbb{E}W_2^2\left(\mathscr{L}_{X^{i}(s)},\mu_s^{X}\right)ds+2\int_0^{t\wedge\zeta_R}\mathbb{E}W_2^2\left(\mu_s^{X},\mu_s^{X,N}\right)ds,$$ with $\mu_t^X:=\frac{1}{N}\sum_{i=1}^N\delta_{X^{i}(t)}$ the empirical measure. Then, by combining these estimates and applying the Grönwall inequality, we eventually have $$\begin{aligned} Q_1\leqslant\sup_{0\leqslant t\leqslant T}\mathbb{E}\left|X^i(t\wedge\zeta_R)-X^{i,N}(t\wedge\zeta_R)\right|^{2} \leqslant 6Le^{(6L_R+1+6L)T}\int_0^{T}\mathbb{E}W_2^2\left(\mathscr{L}_{X^{i}(s)},\mu_s^{X}\right)ds.\notag\end{aligned}$$ \(2\) Estimation of the term $Q_2$. 
Using the Cauchy-Schwarz inequality and Theorem [Theorem 1](#mainresult1){reference-type="ref" reference="mainresult1"}, we deduce that $$\begin{aligned} \label{Q-2} Q_2=\sup_{0\leqslant t\leqslant T}\mathbb{E}\left[\left|X^i(t)-X^{i,N}(t)\right|^2\mathbb{I}_{\{\zeta_R\leqslant T\}}\right] \leqslant&\sup_{0\leqslant t\leqslant T}\sqrt{\mathbb{E}\left(\left|X^i(t)-X^{i,N}(t)\right|^2\right)^2}\sqrt{\mathbb{E}\left(\mathbb{I}_{\{\zeta_R\leqslant T\}}\right)^2}\notag\\ \leqslant&\frac{2\sqrt{2}}{R^2}\left(\mathbb{E}\sup_{0\leqslant t\leqslant T}|X^i(t)|^4+\mathbb{E}\sup_{0\leqslant t\leqslant T}|{X}^{i,N}(t)|^4\right)\leqslant \frac{C}{R^2}.\end{aligned}$$ With the estimates of $Q_1$ and $Q_2$ at hand, we conclude that $$\begin{aligned} \label{Q-1+2} \sup_{1\leqslant i\leqslant N}\sup_{0\leqslant t\leqslant T}\mathbb{E}\left|X^i(t)-X^{i,N}(t)\right|^2\leqslant6Le^{(6L_R+1+6L)T}\int_0^{T}\sup_{1\leqslant i\leqslant N}\mathbb{E}W_2^2\left(\mathscr{L}_{X^{i}(s)},\mu_s^{X}\right)ds+\frac{C}{R^2}.\end{aligned}$$ Notice that, by Theorem 5.8 of [@Carmona2018probabilistic], $$%\label{} \begin{split} \mathbb{E}W_2^2\left(\mathscr{L}_{X^{i}(s)},\mu_s^{X}\right)\leqslant C \left \{ \begin{array}{lll} N^{-1/2},& \text{if } d<4,\\ N^{-1/2}\ln(N),&\text{if } d=4,\\ N^{-2/d},&\text{if } d>4. \end{array} \right. \end{split}$$ Hence, for any $\delta>0$, we may first choose $R$ large enough that $\frac{C}{R^2}<\frac{\delta}{2}$, and then let $N\to\infty$ so that the first term on the right-hand side of [\[Q-1+2\]](#Q-1+2){reference-type="eqref" reference="Q-1+2"} becomes smaller than $\frac{\delta}{2}$. The result thus follows and the proof is completed. ◻
*Ying Chao, Jinqiao Duan, Ting Gao, Pingyuan Wei, "Well-posedness and averaging principle for Lévy-type McKean-Vlasov stochastic differential equations under local Lipschitz conditions", arXiv:2309.02906 [math.PR].*
--- abstract: | In view of the recent proofs of the $P=W$ conjecture, the present paper reviews and relates the latest results in the non abelian Hodge theory of curves, with a view on how $P=W$ phenomena appear in multiple areas of algebraic geometry. Finally, we retrace the history of results on the conjecture, up to the three new proofs of the statement in full generality. author: - Camilla Felisetti bibliography: - surveyPW.bib title: P=W phenomena in algebraic and enumerative geometry --- **Keywords:** Higgs bundles, character varieties, $P=W$ conjecture, non abelian Hodge theory, hyperkähler manifolds\ **MSC 2020 classification:** 14D20, 14D06, 32S60 # Introduction This paper reviews and relates the recent results on the non abelian Hodge theory of curves, hyperkähler geometry and the theory of enumerative invariants motivated and inspired by the $P=W$ conjecture, formulated in 2010 by de Cataldo, Hausel and Migliorini. This conjecture, which states the equality of two filtrations of very different origin on the cohomology groups of incarnations of the moduli space of representations of the fundamental group of a smooth projective complex curve, has recently been proved in full generality. The survey is organized as follows: we begin with a recap of non abelian Hodge theory, introducing the two sides of the correspondence, namely the Dolbeault and Betti moduli spaces. We then introduce the P=W conjecture for smooth moduli spaces and present its generalizations. Next, we move away from non abelian Hodge theory and show how P=W phenomena affect and are affected by several areas of algebraic geometry: here we present hyperkähler geometry and curve counting, as they play an important role in some of the proofs of the P=W conjecture. Finally, we present the history of proofs, starting from the original proof in rank 2 by de Cataldo, Hausel and Migliorini and arriving at those which appeared in 2022 and 2023, finally proving the conjecture in full generality.
Also, since we make systematic use of many concepts coming from the theory of perverse sheaves, we have collected them in a dedicated section. # Short recap of non abelian Hodge theory {#sec:recap} The moduli space of holomorphic vector bundles on a projective curve has played a pivotal role in modern algebraic geometry, and by now there is an extensive literature concerning its properties. As is well known, the construction of this moduli space involves geometric invariant theory and comes naturally with the notions of stability and semistability. The celebrated theorem of Narasimhan and Seshadri [@NarasimhanSeshadri] asserts that semistable vector bundles of rank $n$ and degree zero correspond to unitary representations of the fundamental group of the curve, up to conjugation. Non abelian Hodge theory finds its roots in the desire to find a "complexification" of this correspondence, establishing a real analytic isomorphism between the so called *Betti* and *Dolbeault* moduli spaces. We refer to [@MiglioUMI] for a beautiful survey on the topic, partially covering some motivations for this work. ## Betti moduli space Let $C$ be a smooth projective complex curve of genus $g$. Its fundamental group $\pi_1(C;p_0)$ has $2g$ generators $\alpha_1,\ldots, \alpha_g,\beta_1,\ldots,\beta_g$ satisfying the single relation $$\alpha_1\beta_1\alpha_1^{-1}\beta_1^{-1}\cdots \alpha_g\beta_g\alpha_g^{-1}\beta_g^{-1}=\mathrm{id}.$$ Hence a representation of $\pi_1(C;p_0)$ into the group $\mathop{\mathrm{GL(\textit{n},\mathbb{C})}}$ is the datum of a collection of $2g$ matrices $A_1,\ldots,A_g,B_1,\ldots, B_g$ such that $\prod_{i=1}^g[A_i,B_i]=I.$ The subset of $\mathop{\mathrm{GL(\textit{n},\mathbb{C})}}^{2g}$ of $2g$-tuples of matrices satisfying the relation above is an affine variety, called $Rep(n,\mathop{\mathrm{GL(\textit{n},\mathbb{C})}})$.
The group $\mathop{\mathrm{GL(\textit{n},\mathbb{C})}}$ acts on $Rep(n,\mathop{\mathrm{GL(\textit{n},\mathbb{C})}})$ by simultaneous conjugation and the GIT quotient $$\label{eq:defbetti} \mathcal{M}_{B}(n,0):=\left\lbrace A_1,\ldots,A_g,B_1\ldots B_g\in\mathop{\mathrm{GL(\textit{n},\mathbb{C})}}^{2g}\mid \prod_{i=1}^g[A_i,B_i]=I\right\rbrace\sslash \mathop{\mathrm{GL(\textit{n},\mathbb{C})}}$$ is called the *Betti moduli space* or character variety of $\mathop{\mathrm{GL(\textit{n},\mathbb{C})}}$. The Betti moduli space is an affine algebraic variety, generally singular, parametrizing isomorphism classes of completely reducible representations of the fundamental group of $C$ in $\mathop{\mathrm{GL(\textit{n},\mathbb{C})}}$, with an open nonsingular Zariski dense subset $\mathcal{M}_{B}(n,0)^s$ corresponding to irreducible ones. To avoid singularities one can consider a twisted version of the Betti moduli space, where the identity is replaced by its multiple by a primitive root of unity in [\[eq:defbetti\]](#eq:defbetti){reference-type="eqref" reference="eq:defbetti"} to define $$\mathcal{M}_{B}(n,d):= \left\lbrace A_1,\ldots,A_g,B_1,\ldots ,B_g\in\mathop{\mathrm{GL(\textit{n},\mathbb{C})}}^{2g}\mid \prod_{i=1}^g[A_i,B_i]=e^{\frac{2\pi i d}{n}}I\right\rbrace\sslash \mathop{\mathrm{GL(\textit{n},\mathbb{C})}}$$ for some $d\in\mathbb{Z}$. This corresponds geometrically to considering representations of the fundamental group of a punctured curve with prescribed unipotent monodromy around the puncture. It can be shown that for $d$ coprime to $n$ every semisimple representation is in fact irreducible and $\mathcal{M}_{B}(n,d)$ is a nonsingular affine variety. ## The Dolbeault moduli space As Betti moduli space is the complexified counterpart of the unitary character variety, likewise we can consider a moduli space of vector bundles with enriched structure. Let us briefly recall some properties of the moduli space of vector bundles first. 
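For a concrete instance of the twisted relation, take $n=2$, $g=1$ and $d=1$, so the requirement on a pair of matrices is $[A,B]=e^{i\pi}I=-I$. The pair below is one explicit choice (our illustration; any conjugate pair works equally well), and a quick numerical check confirms the relation:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # rotation by pi/2
B = np.array([[1.0, 0.0], [0.0, -1.0]])   # reflection, so B = B^{-1}

# group commutator [A, B] = A B A^{-1} B^{-1}
comm = A @ B @ np.linalg.inv(A) @ np.linalg.inv(B)
target = np.exp(2j * np.pi * 1 / 2) * np.eye(2)  # e^{2 pi i d / n} I with n=2, d=1
```

Since $A$ swaps the eigenlines of $B$, the pair has no common eigenvector, so the corresponding representation is irreducible, consistent with the fact that $\gcd(2,1)=1$.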
A vector bundle $V$ is *semistable* if $$\text{ for all proper subbundles }W\subset V \text{ we have }\dfrac{\deg W}{\mathop{\mathrm{rank}}W}\leq \dfrac{\deg V}{\mathop{\mathrm{rank}}V}$$ and *stable* if the inequality is strict for all proper subbundles. The moduli space $\mathcal{N}(n,d)$ of semistable vector bundles was constructed by Mumford in the early 1960s via geometric invariant theory; it parametrizes equivalence classes of semistable vector bundles of rank $n$ and degree $d$. The equivalence relation is defined as follows: since any semistable bundle admits a filtration by subbundles such that the associated graded object is a direct sum of stable bundles with the same slope, we say that two semistable bundles are equivalent if they have isomorphic graded objects. For stable bundles this equivalence relation coincides simply with isomorphism. The moduli space $\mathcal{N}(n,d)$ enjoys many remarkable properties. (i) $\mathcal{N}(n,d)$ is a projective variety of dimension $n^2(g-1)+1$, which contains a nonsingular Zariski dense open subset $\mathcal{N}(n,d)^s$ that parametrizes stable bundles. (ii) If $n$ and $d$ are coprime then every semistable bundle is stable and $\mathcal{N}(n,d)$ is nonsingular. (iii) If $n$ and $d$ are coprime there exists a universal bundle $\mathcal{V}$ on $\mathcal{N}(n,d)\times C$. **Definition 1**. A Higgs bundle is a pair $(V,\phi)$ where $V$ is an algebraic vector bundle on $C$ and $\phi\in H^0(C,\mathrm{End}(V)\otimes K_C)$ is an endomorphism with values in the canonical bundle of $C$, which we call the *Higgs field*. **Definition 2**. A Higgs bundle $(V,\phi)$ is semistable if $$\dfrac{\deg W}{\mathop{\mathrm{rank}}W}\leq \dfrac{\deg V}{\mathop{\mathrm{rank}}V}$$ for all proper subbundles $W$ that are $\phi$-invariant. Note that, since stability considers not all subbundles but just the $\phi$-invariant ones, a Higgs bundle can be stable and have unstable underlying vector bundle.
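The slope conditions above are straightforward to encode. In the toy check below (our illustration; the (degree, rank) pairs are made up), a list of sub-objects stands for the proper subbundles — for Higgs bundles one would restrict the list to the $\phi$-invariant ones:

```python
from fractions import Fraction

def slope(deg, rank):
    """mu(V) = deg(V) / rank(V), kept exact with rational arithmetic."""
    return Fraction(deg, rank)

def is_semistable(deg, rank, subbundles):
    """True if mu(W) <= mu(V) for every proper subbundle W = (deg W, rank W)."""
    mu = slope(deg, rank)
    return all(slope(dw, rw) <= mu for dw, rw in subbundles)

def is_stable(deg, rank, subbundles):
    """True if the slope inequality is strict for every proper subbundle."""
    mu = slope(deg, rank)
    return all(slope(dw, rw) < mu for dw, rw in subbundles)
```

For instance, a degree-0 rank-2 bundle with a degree-0 line subbundle is semistable but not stable, while a degree-1 rank-2 bundle with only degree-$\le 0$ line subbundles is stable.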
Again thanks to geometric invariant theory, Nitsure [@Nitsure91] constructed a moduli space of equivalence classes of semistable Higgs bundles of rank $n$ and degree $d$, called the *Dolbeault moduli space*: $$\mathcal{M}_{Dol}(n,d):= \left\lbrace (V,\phi) \text{ semistable Higgs bundle of rank }n \text{ and degree }d\right\rbrace/\sim_S.$$ The space $\mathcal{M}_{Dol}(n,d)$ has a rich geometry and carries features similar to those mentioned for the moduli space of vector bundles. (i) $\mathcal{M}_{Dol}(n,d)$ is a quasi-projective variety of dimension $2n^2(g-1)+2$, which contains a nonsingular Zariski open dense subset $\mathcal{M}_{Dol}(n,d)^s$ that parametrizes stable Higgs bundles of rank $n$ and degree $d$. (ii) If $n$ and $d$ are coprime then every semistable Higgs bundle is stable and $\mathcal{M}_{Dol}(n,d)$ is nonsingular. (iii) If $n$ and $d$ are coprime there exists a universal Higgs bundle $\mathcal{V}$ on $\mathcal{M}_{Dol}(n,d)\times C$.\ **Remark 1**. (**Symplectic structure**) Clearly, any semistable vector bundle can be viewed as a Higgs bundle with zero Higgs field, and the map $V\mapsto (V,0)$ defines an embedding $\mathcal{N}(n,d)^s\hookrightarrow \mathcal{M}_{Dol}(n,d)$. Indeed something more is true: if $V$ is a stable vector bundle, then for every $\phi\in H^0(End(V)\otimes K_C)$ the pair $(V,\phi)$ is a stable Higgs bundle. Thanks to the deformation theory of vector bundles and Serre duality, one can interpret the vector space $H^0(End(V)\otimes K_C)$ as the fiber at $V$ of the cotangent bundle of $\mathcal{N}(n,d)^s$. As a result we have an embedding $$T^*\mathcal{N}(n,d)^s\hookrightarrow \mathcal{M}_{Dol}(n,d).$$ The natural symplectic structure on $T^*\mathcal{N}(n,d)^s$ extends to a symplectic structure on the whole of $\mathcal{M}_{Dol}(n,d)^s$, i.e. there exists a holomorphic symplectic form $\omega\in H^2(\mathcal{M}_{Dol}(n,d),\mathbb{C})$ which is nondegenerate at every point. 
### The Hitchin fibration The moduli space $\mathcal{M}_{Dol}(n,d)$ admits a projective map $h: \mathcal{M}_{Dol}(n,d)\rightarrow \mathcal{A}$, called the *Hitchin fibration*, where the target $\mathcal{A}:=\bigoplus_{i=1}^n H^0(C,K_C^{i})$ is an affine space and the fiber of $h$ over a general point $a\in \mathcal{A}$ is isomorphic to the Jacobian of a branched covering of $C$. Any Higgs field $\phi$ is a twisted endomorphism of a vector bundle $V$, so it has a well defined characteristic polynomial, whose coefficients are $\mathrm{tr}(\phi)\in H^0(C,K_C)$, $\mathrm{tr}(\Lambda^2\phi)\in H^0(C,K_C^2), \ldots ,\det(\phi)\in H^0(C,K_C^n)$. This defines a map $$h: \mathcal{M}_{Dol}(n,d) \to \mathcal{A}, \quad (V,\phi) \mapsto \mathrm{charpol}(\phi).$$ The map $h$ is proper and flat, and we can describe the fibers via an "abelianization" process. Consider the total space $\mathrm{Tot}(K_C)$ of the canonical bundle of $C$: calling $y$ the coordinate on the fiber in a local trivialization of $K_C$ and choosing a point $a=(a_1,\ldots, a_n)\in \mathcal{A}$, the equation $$p_a(y):=y^n+a_1y^{n-1}+\ldots +a_{n-1}y+a_n=0$$ makes global sense and defines a curve $C_a$ inside $\mathrm{Tot}(K_C)$, called the *spectral curve*. The natural projection $f_a:C_a\rightarrow C$ is a branched $n:1$ cover of $C$; moreover the total space of the family $f:\mathcal{C}\rightarrow \mathcal{A}$, with $f^{-1}(a)=C_a$, is nonsingular, and so is the generic spectral curve. Now, for every rank 1 torsion free sheaf $\mathcal{L}$ on $C_a$, the pushforward $V=f_{a,*}\mathcal{L}$ is a torsion free (and thus locally free) sheaf on $C$, i.e. a vector bundle of rank $n$. Moreover the multiplication by $y$ on $\mathcal{O}_{C_a}=\mathcal{O}_C[y]/(p_a(y))$ endows $V$ with an endomorphism $$\phi: V\rightarrow V\otimes K_C$$ such that $$\phi^n+a_1\phi^{n-1}+\ldots +a_{n-1}\phi+a_n=0.$$ The Cayley-Hamilton theorem implies that $p_a$ is the characteristic polynomial of $\phi$. 
To sum up, we can associate to any rank 1 torsion free sheaf on $C_a$ a Higgs bundle $(V,\phi)$ such that $\phi$ has $p_a$ as characteristic polynomial, i.e. $(V,\phi)\in h^{-1}(a)$, provided the resulting Higgs bundle is semistable (this is automatic, for example, when $C_a$ is irreducible and reduced). This is the content of the BNR correspondence [@BNR89]. When $C_a$ is irreducible and reduced, for any $\delta \in\mathbb{Z}$ the locus parametrizing rank 1 torsion free sheaves $\mathcal{L}$ of degree $\delta$ on $C_a$ is called the *compactified Jacobian* $\overline{Pic}^{\delta}(C_a)$ of $C_a$. The degree $\delta$ is chosen so that $\chi(C_a,\mathcal{L})=\chi(C,f_{a,*}\mathcal{L})$. When $C_a$ is nonsingular any torsion free sheaf is locally free, hence $\overline{Pic}^{\delta}(C_a)$ is nothing but the usual Picard scheme. When $C_a$ is reducible or non-reduced, the definition of the compactified Jacobian is more complicated and involves a stability condition on the torsion free sheaves. We refer to [@MRV19 Appendix] and [@Schaub] for a more detailed description. **Theorem 1** (BNR correspondence). *Let $a\in \mathcal{A}$. There is an isomorphism $$h^{-1}(a)\cong \overline{Pic}^{\delta}(C_a).$$* Note that generically $\overline{Pic}^{\delta}(C_a)$ has dimension equal to the genus of $C_a$. This can be computed via the Riemann-Hurwitz formula, and it equals half the dimension of $\mathcal{M}_{Dol}(n,d)$. Since the fibers of $h$ have half the dimension of the symplectic manifold $\mathcal{M}_{Dol}(n,d)$, it is reasonable to wonder whether $h$ is a Lagrangian fibration. **Remark 2** (The generic fibers are Lagrangian tori). If one considers the holomorphic symplectic form $\omega$ introduced in Remark 1, then the fibers of $h$ are Lagrangian, i.e. $\omega$ vanishes identically when restricted to the smooth points of the fibers. Hence the Hitchin map $h:\mathcal{M}_{Dol}(n,d)\rightarrow \mathcal{A}$ is a Lagrangian fibration. 
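The numerology here can be checked directly: Riemann-Roch gives $h^0(K_C^i)=(2i-1)(g-1)$ for $i\geq 2$ and $h^0(K_C)=g$, so $\dim\mathcal{A}=n^2(g-1)+1$, while Riemann-Hurwitz for the degree-$n$ spectral cover (whose branch divisor has degree $n(n-1)(2g-2)$) yields the same number for the genus of a smooth $C_a$. A short Python sketch verifying the half-dimensionality for a few values of $n$ and $g$ (the function names are ours):

```python
def dim_base(n, g):
    # dim A = h^0(K) + sum_{i=2}^n h^0(K^i) = g + sum_{i=2}^n (2i-1)(g-1)
    return g + sum((2 * i - 1) * (g - 1) for i in range(2, n + 1))

def genus_spectral(n, g):
    # Riemann-Hurwitz: 2g_a - 2 = n(2g-2) + n(n-1)(2g-2) = n^2(2g-2)
    return (n * (2 * g - 2) + n * (n - 1) * (2 * g - 2)) // 2 + 1

def dim_mdol(n, g):
    # quoted dimension of the Dolbeault moduli space
    return 2 * n * n * (g - 1) + 2

for n in range(1, 6):
    for g in range(2, 6):
        assert dim_base(n, g) == genus_spectral(n, g) == n * n * (g - 1) + 1
        assert 2 * dim_base(n, g) == dim_mdol(n, g)
print("the generic Hitchin fiber is half-dimensional in all tested cases")
```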
# The original conjecture {#sec:theoriginalconjecture} We are now in a position to state the main result of non abelian Hodge theory. The following theorem is the crowning result of many works in the field, and it is understood as the sum of results of Donaldson, Corlette, Hitchin and Simpson, see [@Donaldson83], [@Corlette88], [@Hitchin87], [@Simpson90]. **Theorem 2** (Non abelian Hodge theorem). *For all $n$ and $d$ there exists a real analytic isomorphism $$\mathcal{M}_{Dol}(n,d)\cong \mathcal{M}_{B}(n,d).$$* Moreover, the *isosingularity principle* by Simpson [@Simpson1994 Theorem 10.6] ensures that this isomorphism preserves the local structure of the singularities. In view of the non abelian Hodge theorem one may ask how the rich geometry of the Dolbeault moduli space is reflected on the Betti side. In particular, one might look at what happens already at the level of cohomology groups: this is the content of the P=W conjecture. The formulation of the conjecture originated in a surprising phenomenon discovered in 2008 by Hausel and Rodriguez-Villegas [@HauselRodriguez-Villegas2008] while studying the mixed Hodge structure of the character variety $\mathcal{M}_{B}(2,1)$. ## Weight filtration and curious Hard Lefschetz As is well known, the rational cohomology groups of a complex algebraic variety $X$ are endowed with a *mixed Hodge structure*. Loosely speaking, a mixed Hodge structure is the datum of two filtrations: a decreasing filtration $F^{\bullet}$ on $H^*(X,\mathbb{C})$, called the *Hodge filtration*, and an increasing filtration $W^{\bullet}$ on $H^*(X,\mathbb{Q})$, called the *weight filtration*, such that the graded pieces $$\mathrm{Gr}_{\ell}^WH^k(X,\mathbb{Q})=W_{\ell}H^k(X,\mathbb{Q})/W_{\ell-1}H^k(X,\mathbb{Q})$$ with respect to $W^{\bullet}$ behave as the $\ell$-th cohomology of a smooth projective variety. 
Namely $\mathrm{Gr}_{\ell}^WH^k(X,\mathbb{Q})\otimes\mathbb{C}$ admits a \"Hodge decomposition\" $$\mathrm{Gr}_{\ell}^WH^k(X,\mathbb{Q})\otimes\mathbb{C}\cong \bigoplus_{p+q=\ell} \left(\mathrm{Gr}_{\ell}^WH^k(X,\mathbb{C})\right)^{p,q}.$$ In [@HauselRodriguez-Villegas2008], Hausel and Rodriguez-Villegas computed the weight filtration on $\mathcal{M}_{B}(2,1)$, discovering a surprising symmetry which they named \"Curious Hard Lefschetz\", due to its similarity with the Hard Lefschetz theorem for compact Kähler manifolds. In fact they proved that there exists a class $\alpha\in H^2(\mathcal{M}_{B}(2,1),\mathbb{Q})$ such that cupping with powers of $\alpha$ induces isomorphisms between the graded pieces of the weight filtration: $$\cup \alpha^{\ell}: \mathrm{Gr}_{\dim \mathcal{M}_{B}-2\ell}^WH^k(\mathcal{M}_{B}(2,1),\mathbb{Q})\xrightarrow{\sim} \mathrm{Gr}_{\dim \mathcal{M}_{B}+2\ell}^WH^{k+2\ell}(\mathcal{M}_{B}(2,1),\mathbb{Q}).$$ Moreover they conjectured that the same is true for all $\mathcal{M}_{B}(n,d)$ with $n$ and $d$ coprime. **Theorem 3** (Curious Hard Lefschetz). *Let $n$ and $d$ be coprime. There exists a class $\alpha \in H^2(\mathcal{M}_{B}(n,d))$ such that $$\cup \alpha^{\ell}: \mathrm{Gr}_{\dim \mathcal{M}_{B}-2\ell}^WH^k(\mathcal{M}_{B}(n,d),\mathbb{Q})\xrightarrow{\sim} \mathrm{Gr}_{\dim \mathcal{M}_{B}+2\ell}^WH^{k+2\ell}(\mathcal{M}_{B}(n,d),\mathbb{Q})$$ is an isomorphism.* A decade later, the weight filtration for $\mathcal{M}_{B}(n,d)$ was computed by Shende in [@Shende17] and the curious Hard Lefschetz theorem was proved by Mellit in [@Mellit2019]. The word "curious" is mainly due to two facts: first, unlike the classical Hard Lefschetz theorem, cupping with $\alpha$ increases the weight by $4$ instead of $2$; second, the geometric context in which this phenomenon occurs is the cohomology of an affine variety. 
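A standard toy example, independent of the moduli spaces above, shows how an affine variety can carry cohomology of weight larger than its degree; the computation below is classical and is included only as an illustration.

```latex
% For X = \mathbb{C}^*, the class of dz/z spans H^1(X,\mathbb{Q}), and this
% class has Hodge type (1,1), hence weight 2 rather than 1:
\mathrm{Gr}^W_1 H^1(\mathbb{C}^*,\mathbb{Q}) = 0, \qquad
\mathrm{Gr}^W_2 H^1(\mathbb{C}^*,\mathbb{Q}) \cong \mathbb{Q}(-1).
% More generally, on (\mathbb{C}^*)^m the degree-k cohomology is pure of
% weight 2k: this kind of weight jump pervades the Betti moduli spaces.
```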
Nevertheless, the symmetry of curious Hard Lefschetz reproduces that of a cornerstone result in the theory of *perverse sheaves* (see ), the *relative Hard Lefschetz theorem*, which applies to all projective maps of algebraic varieties. In the case of the Hitchin fibration $h$, the theorem asserts that the cup product with a relatively ample class $\alpha\in H^2(\mathcal{M}_{Dol}(n,d),\mathbb{Q})$ induces isomorphisms $$\cup \alpha^{\ell} : \mathrm{Gr}^P_{\frac{\dim \mathcal{M}_{Dol}}{2}-\ell}H^*(\mathcal{M}_{Dol}(n,d),\mathbb{Q})\xrightarrow{\sim}\mathrm{Gr}^P_{\frac{\dim \mathcal{M}_{Dol}}{2}+\ell} H^*(\mathcal{M}_{Dol}(n,d),\mathbb{Q}),$$ where $P^{\bullet}$ is a filtration on $H^*(\mathcal{M}_{Dol}(n,d),\mathbb{Q})$ which is constructed from $h$ and is called the *perverse Leray filtration associated with $h$*. This phenomenon suggested that the weight filtration on $\mathcal{M}_{B}(n,d)$ might encode topological information about the Hitchin map. **Remark 3**. Note that the Hodge structure on the cohomology of $\mathcal{M}_{Dol}(n,d)$ for $n$ and $d$ coprime is pure, i.e. the weight filtration is trivial. In fact the weight filtration is a highly transcendental invariant, and the isomorphism of the non abelian Hodge theorem is far from being algebraic: for instance, $\mathcal{M}_{B}(n,d)$ is affine, while $\mathcal{M}_{Dol}(n,d)$ is fibered in compact algebraic subvarieties. ## Perverse filtration associated with the Hitchin map While the definition of the perverse filtration for a general map $f:X\rightarrow Y$ is rather technical (see ), it assumes a particularly simple and explicit form when $f$ is a flat projective map, $X$ is nonsingular and $Y$ is affine. Since this is the case for the Hitchin map $h: \mathcal{M}_{Dol}(n,d)\rightarrow \mathcal{A}$ for $n$ and $d$ coprime, we state it in this case. **Definition 3**. Let $n$ and $d$ be coprime and $h:\mathcal{M}_{Dol}(n,d)\rightarrow \mathcal{A}$ be the Hitchin map. 
For all nonnegative integers $s$, let $\Lambda^s$ be a general $s$-dimensional affine subspace of $\mathcal{A}$. The perverse filtration $P^{\bullet}$ on $H^*(\mathcal{M}_{Dol}(n,d))$ is defined as the kernel of the natural restriction map from the cohomology of $\mathcal{M}_{Dol}(n,d)$ to that of the preimage of an affine subspace of given dimension, i.e. $$P_pH^k(\mathcal{M}_{Dol}(n,d),\mathbb{Q}):=\mathrm{Ker}\left\lbrace H^k(\mathcal{M}_{Dol}(n,d),\mathbb{Q})\rightarrow H^k(h^{-1}(\Lambda^{k-p-1}),\mathbb{Q})\right\rbrace.$$ ## Cohomological P=W conjecture To explain the correspondence between curious Hard Lefschetz and the classical relative Hard Lefschetz theorem for $h$, De Cataldo, Hausel and Migliorini [@deCataldoHauselMigliorini2012] conjectured that, for $n$ and $d$ coprime, the non abelian Hodge theorem should exchange the weight filtration on $H^*(\mathcal{M}_{B}(n,d),\mathbb{Q})$ with the perverse filtration on $H^*(\mathcal{M}_{Dol}(n,d),\mathbb{Q})$, up to a trivial renumbering. **Conjecture 4** (P=W conjecture). *Let $n$ and $d$ be coprime and $$\psi: \mathcal{M}_{B}(n,d)\rightarrow \mathcal{M}_{Dol}(n,d)$$ be the real analytic isomorphism of the non abelian Hodge theorem. Then the associated isomorphism in cohomology $$\psi^*:H^*(\mathcal{M}_{Dol}(n,d),\mathbb{Q})\rightarrow H^*(\mathcal{M}_{B}(n,d),\mathbb{Q})$$ is such that, for all $\ell\in \mathbb{Z}$, $$P_{\ell}H^*(\mathcal{M}_{Dol}(n,d),\mathbb{Q})\xrightarrow{\sim} W_{2\ell}H^*(\mathcal{M}_{B}(n,d),\mathbb{Q}).$$* The P=W conjecture was proved in 2010 for rank 2 in the original paper by de Cataldo, Hausel and Migliorini [@deCataldoHauselMigliorini2012]. The conjecture then remained open for almost ten years, until de Cataldo, Maulik and Shen proved it for genus 2 and arbitrary rank [@deCataldoMaulikShen2019]. Moreover, an enumerative approach was proposed by Chiarello, Hausel and Szenes in [@CHS2020]. 
Later, two different proofs of the conjecture in full generality appeared in September 2022: the first is due to Maulik and Shen [@MaulikShen2022], while the second is due to Hausel, Mellit, Minets and Schiffmann [@HMMS2022]. A third proof appeared a year later and is due to Maulik, Shen and Yin [@MSY]. In , we review the main steps and ideas of these proofs.\
*Example 1*. (P=W for generators in rank 2) Consider the moduli spaces $\mathcal{M}_{Dol}(2,1)$ and $\mathcal{M}_{B}(2,1)$ on a curve of genus $g$. By [@HauselThaddeus04], the cohomology is generated in low degree by classes $\varepsilon_i\in H^1$ for $i=1,\ldots, 2g$, $\alpha\in H^2$, $\psi_i\in H^3$ for $i=1,\ldots, 2g$ and $\beta\in H^4$. The tables below show respectively the perversity and the weight of the generators, see [@deCataldoHauselMigliorini2012].

           $H^0(\mathcal{M}_{Dol})$   $H^1(\mathcal{M}_{Dol})$            $H^2(\mathcal{M}_{Dol})$   $H^3(\mathcal{M}_{Dol})$    $H^4(\mathcal{M}_{Dol})$
---------- -------------------------- ----------------------------------- -------------------------- --------------------------- --------------------------
$Gr^P_0$   1                                                                                                                     
$Gr^P_1$                              $\varepsilon_i, \ i=1,\ldots, 2g$                                                          
$Gr^P_2$                                                                  $\alpha$                   $\psi_i,\ i=1,\ldots, 2g$   $\beta$

: Perversity of generators of $H^*(\mathcal{M}_{Dol}(2,1),\mathbb{Q})$.

           $H^0(\mathcal{M}_{B})$   $H^1(\mathcal{M}_{B})$              $H^2(\mathcal{M}_{B})$   $H^3(\mathcal{M}_{B})$      $H^4(\mathcal{M}_{B})$
---------- ------------------------ ----------------------------------- ------------------------ --------------------------- ------------------------
$Gr^W_0$   1                                                                                                                
$Gr^W_2$                            $\varepsilon_i, \ i=1,\ldots, 2g$                                                       
$Gr^W_4$                                                                $\alpha$                 $\psi_i,\ i=1,\ldots, 2g$   $\beta$

: Weights of generators of $H^*(\mathcal{M}_{B}(2,1),\mathbb{Q})$.

As one can see, weights and perversities of corresponding generators in cohomology match the prediction of P=W. While the weight filtration is naturally compatible with the cup product, the perverse filtration is not. 
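On generators, the matching in the two tables amounts to the mechanical statement that the weight is exactly twice the perversity. A throwaway Python sketch encoding the tables (our own bookkeeping, not tied to any library):

```python
# (cohomological degree, perversity, weight) of each generator of the
# cohomology of the rank-2, degree-1 moduli spaces, read off the tables.
generators = {
    "1":     (0, 0, 0),
    "eps_i": (1, 1, 2),
    "alpha": (2, 2, 4),
    "psi_i": (3, 2, 4),
    "beta":  (4, 2, 4),
}

# P=W on generators: W_{2p} corresponds to P_p, i.e. weight = 2 * perversity.
assert all(w == 2 * p for _, p, w in generators.values())
print("P=W holds on the generators of H^*(M(2,1))")
```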
As we will see, proving the multiplicativity of $P$ is one of the hardest tasks in all proofs of the conjecture. **Remark 4**. It is well known that the non abelian Hodge isomorphism exchanges the $h$-relatively ample class inducing the relative Hard Lefschetz theorem for $h$ with the class $\alpha$ in the statement of the curious Hard Lefschetz theorem [Theorem 3](#chl){reference-type="ref" reference="chl"}. As a result, the P=W conjecture implies that the curious Hard Lefschetz theorem corresponds to the classical relative Hard Lefschetz theorem for $h$. **Remark 5**. For expository purposes, we considered just the case of the group $\mathop{\mathrm{GL(\textit{n},\mathbb{C})}}$. Nonetheless the P=W conjecture is formulated, and in some cases proved, for all groups of type $\mathrm{A}_{n-1}$ [@deCataldoMaulikShen2020], see [@MaulikShen2021 §5] for a systematic discussion. Moreover, most of the ideas presented in this section make sense also when we replace $C$ by any compact algebraic variety. For example, the P=W conjecture has been recently proved by Bolognese, Kuronya and Ulirsch [@BKU2023] for abelian varieties. ### Numerical P=W conjecture In [@Shende17] Shende proved that, for $n$ and $d$ coprime, all cohomology classes of $H^*(\mathcal{M}_B(n,d))$ are of *Hodge-Tate type*, i.e. 
we have a decomposition $$H^*(\mathcal{M}_{B}(n,d))=\bigoplus_{\ell,i} \ ^{\ell} \mathrm{Hdg}^i$$ with $$\ ^{\ell} \mathrm{Hdg}^i:= W_{2\ell}H^i(\mathcal{M}_{B}(n,d), \mathbb{Q})\cap F_{\ell}H^i(\mathcal{M}_{B}(n,d),\mathbb{C})\cap \overline{F}_{\ell}H^i(\mathcal{M}_{B}(n,d),\mathbb{C}).$$ Hence, the P=W conjecture implies that the *Hodge numbers* of the Betti moduli space $$\label{hn} h^{i,j}(\mathcal{M}_{B}(n,d)):=\dim\left( \ ^i \mathrm{Hdg}^{i+j}(\mathcal{M}_{B}(n,d))\right)$$ correspond to the *perverse Hodge numbers* of the Dolbeault moduli space $$\label{phn} ^{\mathfrak{p}}h^{i,j}(\mathcal{M}_{Dol}(n,d)):=\dim \mathrm{Gr}^P_{i}H^{i+j}(\mathcal{M}_{Dol}(n,d),\mathbb{Q}).$$ **Conjecture 5** (Numerical P=W). *Let $n$ and $d$ be coprime. In the above notation, $$\label{numericalpw} \ ^{\mathfrak{p}}h^{i,j}(\mathcal{M}_{Dol}(n,d))= h^{i,j}(\mathcal{M}_{B}(n,d)).$$* # Generalizations of the conjecture to the non coprime case {#sec:piwi} One may wonder whether the picture described in the previous section holds for the honest character variety of $\mathop{\mathrm{GL(\textit{n},\mathbb{C})}}$, which corresponds to Higgs bundles of rank $n$ and degree $0$. In general, when $n$ and $d$ are not coprime the moduli spaces are no longer smooth and the cohomology groups of $\mathcal{M}_{Dol}(n,d)$ and $\mathcal{M}_{B}(n,d)$ no longer satisfy the relative and curious Hard Lefschetz theorems. Nonetheless, it is known that the relative Hard Lefschetz theorem for $h$ holds for *intersection cohomology* $IH^*(\mathcal{M}_{Dol}(n,d),\mathbb{Q})$, see .\ Intersection cohomology is a generalization of cohomology designed to restore the Hodge theoretic properties which cease to hold for the cohomology of singular varieties, such as Poincaré duality. Moreover, the intersection cohomology groups of an algebraic variety carry a mixed Hodge structure and, if the variety is projective, they satisfy the Hard Lefschetz theorem and admit a pure Hodge structure. 
This suggests that a natural formulation of the P=W conjecture in the singular case should involve this invariant. Further evidence for this was provided by de Cataldo and Maulik in [@deCataldoMaulik2018], where they proved that the perverse filtration on intersection cohomology is independent of the complex structure of the curve $C$, exactly as happens for the weight filtration. As a result, they conjectured [@deCataldoMaulik2018 Question 4.1.7] the following statement. **Conjecture 6** (PI=WI conjecture). *Let $\psi: \mathcal{M}_{B}(n,d)\rightarrow \mathcal{M}_{Dol}(n,d)$ be the real analytic isomorphism of the non abelian Hodge theorem. Then the associated isomorphism in intersection cohomology $$\psi^*:IH^*(\mathcal{M}_{Dol}(n,d),\mathbb{Q})\rightarrow IH^*(\mathcal{M}_{B}(n,d),\mathbb{Q})$$ is such that, for all $\ell\in \mathbb{Z}$, $$P_{\ell} IH^*(\mathcal{M}_{Dol}(n,d),\mathbb{Q})\xrightarrow{\simeq} W_{2\ell}IH^*(\mathcal{M}_{B}(n,d), \mathbb{Q}).$$* At present, the PI=WI conjecture has been proved for $d=0$ by the author and Mauri in [@FelisettiMauri] for moduli spaces which admit a symplectic resolution of singularities, namely for $g=1$ and arbitrary $n$, and for $g=2$, $n=2$. Remarkably, these cases can be viewed as degenerations of the few known examples of *hyperkähler manifolds*[^1] up to deformation, as we will see in more detail in . **Remark 6**. Recently Davison showed that, under some conjectural hypotheses, the P=W conjecture is equivalent to the PI=WI conjecture by a phenomenon called *$\chi$-independence*, which will be treated in . *Example 2*. Consider the moduli spaces $\mathcal{M}_{Dol}(1,0)$ and $\mathcal{M}_{B}(1,0)$ on a curve of genus 1. Then $$\mathcal{M}_{Dol}(1,0)\cong Pic^0(C)\times \mathbb{C},\qquad \mathcal{M}_{B}(1,0)\cong \mathbb{C}^*\times \mathbb{C}^*,$$ and the Hitchin fibration is nothing but the projection $h: Pic^0(C)\times \mathbb{C}\rightarrow \mathbb{C}$ on the second factor. 
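Before comparing the filtrations, one can already check that the two sides have the same Betti numbers; a small sympy sketch (our own encoding of the isomorphisms above, nothing library-specific):

```python
from sympy import symbols, expand

t = symbols('t')

# Pic^0(C) x C deformation retracts onto a real 2-torus, and so does
# (C^*)^2, so both sides have Poincare polynomial (1 + t)^2.
p_dol = expand((1 + t) ** 2)
p_betti = expand((1 + t) ** 2)
assert p_dol == p_betti == expand(1 + 2 * t + t ** 2)

# On the Betti side, H^k((C^*)^2) is spanned by k-fold products of
# dz/z-type classes, hence is pure of weight 2k; PI=WI then predicts
# perversity exactly k in degree k on the Dolbeault side.
print(p_dol)
```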
Note that since the moduli spaces are nonsingular, intersection cohomology and cohomology coincide. Using Definition 3, we can compute the perverse filtration: $$\begin{aligned} P_0H^0(\mathcal{M}_{Dol}(1,0),\mathbb{Q}) &= H^0(\mathcal{M}_{Dol}(1,0),\mathbb{Q}),\\ P_1H^1(\mathcal{M}_{Dol}(1,0),\mathbb{Q})&= H^1(\mathcal{M}_{Dol}(1,0),\mathbb{Q}),\\ P_2H^2(\mathcal{M}_{Dol}(1,0),\mathbb{Q})&= H^2(\mathcal{M}_{Dol}(1,0),\mathbb{Q}).\end{aligned}$$ Likewise the weight filtration on the Betti side is given by $$\begin{aligned} W_0H^0(\mathcal{M}_{B}(1,0),\mathbb{Q}) &= H^0(\mathcal{M}_{B}(1,0),\mathbb{Q}),\\ W_2H^1(\mathcal{M}_{B}(1,0),\mathbb{Q})&=H^1(\mathcal{M}_{B}(1,0),\mathbb{Q}),\\ W_4H^2(\mathcal{M}_{B}(1,0),\mathbb{Q})&= H^2(\mathcal{M}_{B}(1,0),\mathbb{Q}).\end{aligned}$$ As predicted by the PI=WI conjecture, the filtrations correspond to each other. # P=W phenomena in compact setting {#sec:pwcompact} One of the key ingredients of the proof of P=W [@deCataldoMaulikShen2019] by de Cataldo, Maulik and Shen is the fact that one can view the Dolbeault moduli space as the special fiber of a \"degeneration\" whose generic fiber is a compact hyperkähler manifold. Here by degeneration we mean a flat (not necessarily proper) morphism of normal algebraic varieties, typically over a curve. ## Degeneration of Hitchin systems to Mukai systems The proposal of using the extensively studied geometry of hyperkähler manifolds to understand that of the moduli spaces of non abelian Hodge theory finds its roots in a work by Donagi, Ein and Lazarsfeld in 1997 [@DEL97], where they exhibited the first instance of the aforementioned degenerations. For a more detailed and general description see for instance [@deCataldoMaulikShen2019], [@deCataldoMaulikShen2020], [@FelisettiMauri]. The compact hyperkähler manifolds appearing in the degenerations are Mukai moduli spaces of sheaves on a K3 (or abelian) surface. 
Given any coherent sheaf $\mathcal{F}$ on a K3 surface $S$, one can associate to it a *Mukai vector* $$v = (rk(\mathcal{F}), c_1(\mathcal{F}), \chi(\mathcal{F})-rk(\mathcal{F})) \in H^{*}_{\mathrm{alg}}(S, \mathbb{Z}).$$ We denote by $M_v(S)$ the moduli space of Gieseker semistable sheaves on $S$ with Mukai vector $v$ for a sufficiently general polarization $H$ (which is typically omitted in the notation); see [@Simpson1994I §1]. The idea is to embed the curve $C$ into $S$ $$%\label{eq:embeddingjcs} j: C \hookrightarrow S$$ and to consider the family $$\mathcal{S}=\left(\mathrm{Bl}_{C\times 0}S\times \mathbb{A}^1\right)\setminus \left( S\times 0\right)\rightarrow \mathbb{A}^1.$$ The central fiber $\mathcal{S}_0$ is isomorphic to the cotangent bundle $T^*C$, while the restriction to $\mathbb{A}^1 \setminus \{0\}$ is a trivial fibration $S\times (\mathbb{A}^1 \setminus \{0\})\rightarrow \mathbb{A}^1 \setminus \{0\}$. Take a relative compactification $\mathcal{S} \subset \overline{\mathcal{S}}$ over $\mathbb{A}^1$. For all $t\in \mathbb{A}^1$, set $\beta_t=n[C]\in H_2(\mathcal{S}_t,\mathbb{Z})$ with $n>0$ and consider $$\mathcal{M} \to \mathbb{A}^1,$$ the coarse relative moduli space of one-dimensional Gieseker semistable sheaves $\mathcal{F}$ whose support is proper and contained in $\mathcal{S}_t \subseteq \overline{\mathcal{S}}_t$ with $\chi(\mathcal{F})=\chi$ and $[\mathrm{Supp}\mathcal{F}]=\beta_t$; see [@Simpson1994I Theorem 1.21]. Choosing $\chi$ appropriately, the central fiber recovers the Dolbeault moduli space $$\mathcal{M}_0 \simeq \mathcal{M}_{Dol}(n,d).$$ In fact the moduli space of Higgs bundles on $C$ of rank $n$ and degree $d$ can be realized, via the BNR correspondence [@BNR89], as the moduli space of one-dimensional Gieseker-semistable sheaves $\mathcal{F}$ on $T^*C$ with $\chi(\mathcal{F})=\chi$ and $[\mathrm{Supp}\mathcal{F}]=\beta_0$. The general fiber is isomorphic to $$\mathcal{M}_t \simeq M_v(S)$$ with Mukai vector $v=(0,nC, \chi)$. 
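As a consistency check on this identification, Mukai's dimension formula $\dim M_v(S)=\langle v,v\rangle+2$ reproduces $\dim\mathcal{M}_{Dol}(n,d)=2n^2(g-1)+2$: for $v=(0,n[C],\chi)$ the self-pairing reduces to $n^2\, C.C$ since the rank is zero, and adjunction on a K3 surface gives $C.C=2g-2$. A short Python sketch of this bookkeeping (the function names are ours):

```python
def dim_mukai(n, C_self_intersection):
    # dim M_v(S) = <v, v> + 2 with v = (0, n[C], chi); the rank is 0, so
    # the Mukai self-pairing is n^2 * C.C, independently of chi.
    return n * n * C_self_intersection + 2

def dim_mdol(n, g):
    # quoted dimension of the Dolbeault moduli space
    return 2 * n * n * (g - 1) + 2

for n in range(1, 6):
    for g in range(2, 7):
        C2 = 2 * g - 2          # adjunction on a K3: K_S = 0, so C.C = 2g - 2
        assert dim_mukai(n, C2) == dim_mdol(n, g)
print("Mukai and Dolbeault dimensions agree in all tested cases")
```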
In this degeneration, if $\dim M_v(S)=2N$, the natural forgetful Lagrangian fibration $$\pi: M_v(S)\rightarrow \mathbb{P}^{N}, \quad \mathcal{F}\mapsto [\mathrm{Supp}\mathcal{F}]$$ is sent to the Hitchin fibration $h:\mathcal{M}_{Dol}(n,d)\rightarrow \mathcal{A}$.\ It is then natural to ask the following.\ **Question:** Can we see a manifestation of the P=W correspondence in the compact setting? More generally, given a compact hyperkähler manifold with a Lagrangian fibration, do we have a P=W phenomenon?\ ## P=W phenomena for hyperkähler manifolds The answer to this question was first provided by Shen and Yin in [@ShenYin2018]. Given a nonsingular hyperkähler manifold $M$ with a Lagrangian fibration $\pi:M\rightarrow B$, the cohomology groups of $M$ admit a pure Hodge structure. Moreover one can consider the perverse filtration on $H^*(M,\mathbb{Q})$ associated with $\pi$. As a result, we have well defined Hodge numbers and perverse Hodge numbers as in [\[hn\]](#hn){reference-type="eqref" reference="hn"} and [\[phn\]](#phn){reference-type="eqref" reference="phn"}. Using a deformation argument on the *period domain* $D$ parametrizing all possible hyperkähler structures on $M$, Shen and Yin proved that, for a hyperkähler manifold with a Lagrangian fibration, the perverse Hodge numbers associated with the fibration match the Hodge numbers of the total space, establishing a result analogous to Conjecture [\[numericalpw\]](#numericalpw){reference-type="ref" reference="numericalpw"}. **Theorem 7**. *Let $\pi:M\rightarrow B$ be a Lagrangian fibration from a nonsingular hyperkähler manifold to an algebraic variety $B$ and let $P$ be the perverse filtration associated with $\pi$. 
Then $$\ ^\mathfrak{p} h^{i,j}(M)=h^{i,j}(M).$$* The above result has applications in several areas of algebraic geometry: for example, it can be used to show that the cohomology of the base of the Lagrangian fibration $\pi$ is that of a projective space, providing an answer to a question of Matsushita [@Matsushita99]; also, the equality between perverse and Hodge numbers allows one to compute enumerative invariants associated with specific Calabi-Yau threefolds, see . Theorem 7 has been generalized by Shen, Yin and the author to singular hyperkähler varieties which admit a symplectic resolution, again replacing cohomology by intersection cohomology and providing a compact counterpart of the PI=WI conjecture [@FelisettiShenYin2021]. Later, Harder, Li, Shen and Yin refined this result by identifying the perverse filtration of a Lagrangian fibration on $M$ with the monodromy weight filtration of a maximally unipotent degeneration of compact hyperkähler manifolds [@HLSY2019]. In particular this implies the multiplicativity of the perverse filtration with respect to the cup product on $H^*(M,\mathbb{Q})$: this will be a key tool in the proof by de Cataldo, Maulik and Shen. **Theorem 8** (Multiplicativity of the perverse filtration in the compact setting). *Let $\pi:M\rightarrow B$ be a Lagrangian fibration from a nonsingular hyperkähler manifold to an algebraic variety $B$. Then the perverse filtration $P$ associated with $\pi$ is multiplicative under cup product, i.e. $$\cup: \ P_pH^i(M)\times P_qH^j(M)\rightarrow P_{p+q}H^{i+j}(M).$$* # P=W phenomena in enumerative geometry {#sec:pwenumerative} ## From P=W phenomena to enumerative geometry counting invariants P=W phenomena have found tremendous application in enumerative geometry: for instance, perverse Hodge numbers play an important role in recent constructions of curve counting invariants. 
In the late 1990s, Gopakumar and Vafa defined numerical invariants $n_{g,\beta}\in\mathbb{Z}$ for a Calabi--Yau 3-fold $Y$ and a curve class $\beta\in H_2(Y, \mathbb{Z})$, called *GV invariants* or *BPS invariants*. These invariants count genus $g$ curves on $Y$ lying in the curve class $\beta$, see [@GV]. Moreover they are expected to give a counting equivalent to that of the Gromov-Witten invariants (see [@PandariphandeThomas2014] for a detailed account of this fact). In 2018, Maulik and Toda [@MaulikToda2018] proposed a definition of BPS invariants via the Hilbert-Chow morphism $$hc:\mathcal{M}_{\beta}\rightarrow \mathrm{Chow}_{\beta}(Y)$$ from the moduli space of 1-dimensional stable sheaves on $Y$ with support $\beta$ to the corresponding Chow variety. When $\mathcal{M}_{\beta}$ is nonsingular, Maulik and Toda prove an equality of power series $$\label{GV} \sum_{i\in \mathbb{Z}}\chi( \ ^{\mathfrak{p}}\mathcal{H}^i(Rhc_*\mathbb{Q}_{\mathcal{M}_{\beta}}[\dim \mathcal{M}_{\beta}]))y^i=\sum_{g\geq 0}n_{g,\beta}\left(y^{\frac{1}{2}}+y^{-\frac{1}{2}}\right)^{2g}.$$ The quantity on the left-hand side of [\[GV\]](#GV){reference-type="eqref" reference="GV"} is the Euler characteristic of the *perverse cohomology sheaves* (see ) arising in the decomposition theorem for the map $hc$, and it can be computed in terms of the perverse Hodge numbers $^{\mathfrak{p}}h^{i,j}(\mathcal{M_{\beta}})$.\ If one considers $Y=S\times \mathbb{C}$ for a K3 surface $S$, then the topology of the map $hc$ is uniquely determined by that of $\pi:M_v(S)\rightarrow \mathbb{P}^N$, where the Mukai vector on $S$ is chosen appropriately (in particular so that $\beta$ is the support of the sheaves and $M_v(S)$ is nonsingular). In that case $M_v(S)$ is deformation equivalent to the $N$-th Hilbert scheme of $S$, for $N=\frac{1}{2}\beta^2+1$, so its Hodge numbers (and thus the perverse ones!) are known. 
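The identity [\[GV\]](#GV){reference-type="eqref" reference="GV"} can be inverted degree by degree: the top Laurent coefficient determines the top BPS invariant, which is then subtracted off. A small Python sketch of this triangular inversion; the input "Euler characteristics" below are purely illustrative numbers, not computed from any actual moduli space.

```python
def pascal_row_power(g):
    # Coefficients of (y^{1/2} + y^{-1/2})^{2g} = (y + 2 + 1/y)^g,
    # returned as {exponent: coefficient}.
    coeffs = {0: 1}
    for _ in range(g):
        new = {}
        for e, c in coeffs.items():
            for de, dc in ((1, 1), (0, 2), (-1, 1)):
                new[e + de] = new.get(e + de, 0) + c * dc
        coeffs = new
    return coeffs

def gv_invariants(chi):
    # Invert sum_i chi_i y^i = sum_g n_g (y^{1/2}+y^{-1/2})^{2g} for a
    # symmetric Laurent polynomial chi = {i: chi_i}: the system is
    # triangular, so we solve from the top degree down.
    rest = dict(chi)
    n = {}
    for g in range(max(rest), -1, -1):
        n[g] = rest.get(g, 0)
        for e, c in pascal_row_power(g).items():
            rest[e] = rest.get(e, 0) - n[g] * c
    assert all(c == 0 for c in rest.values()), "input not in the span"
    return {g: c for g, c in n.items() if c != 0}

# Illustrative input: y^{-1} + 4 + y decomposes as 1*(y+2+1/y) + 2.
print(gv_invariants({-1: 1, 0: 4, 1: 1}))  # {1: 1, 0: 2}
```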
In other words, [\[GV\]](#GV){reference-type="eqref" reference="GV"} provides a direct calculation of the BPS invariants of a K3 surface in terms of the Hodge numbers of the Hilbert schemes of points on a K3 surface, see [@ShenYin2018] and [@deCataldoMaulikShen2019] for a detailed discussion of this fact. ## From enumerative geometry to P=W phenomena In the other direction, the large amount of evidence that enumerative geometry has offered for the P=W conjecture, its variants and, more generally, P=W phenomena should not be underestimated. Let us start with an easy observation: two Betti moduli spaces $\mathcal{M}_{B}(n,d)$ and $\mathcal{M}_{B}(n,d')$ are Galois conjugate whenever $\gcd(n,d)=\gcd(n,d')=1$, thus the algebraic isomorphism between them induces an isomorphism of mixed Hodge structures $$H^*(\mathcal{M}_{B}(n,d))\cong H^*(\mathcal{M}_{B}(n,d'))$$ between their cohomology groups. Hence, the P=W conjecture suggests that the perverse filtration on $H^*(\mathcal{M}_{Dol}(n,d))$ should be independent of $d$ as long as it is coprime with $n$. Using methods coming from cohomological Donaldson-Thomas theory [@Szendroi2016], in 2021 Kinjo and Koseki [@KinjoKoseki2021] (see also [@COW]) proved that this is indeed true, providing new evidence for the P=W conjecture. This phenomenon is usually referred to as *$\chi$-independence*, since an invariant of the moduli space of Higgs bundles, such as the perverse filtration, depends only on the rank $n$ and is not affected by the choice of the Euler characteristic $\chi$. **Theorem 9** ($\chi$-independence for $\mathcal{M}_{Dol}(n,d)$). *Let $n,d,d'$ be integers such that $n>0$ and $\gcd(n,d)=\gcd(n,d')=1$. Then there exists an isomorphism $$H^*(\mathcal{M}_{Dol}(n,d))\cong H^*(\mathcal{M}_{Dol}(n,d'))$$ preserving Hodge structures and the perverse filtration.* When $n$ and $d$ are not coprime, as explained in , the natural invariant suggested by the theory of perverse sheaves is intersection cohomology. 
However, motivated by physics, Chuang, Diaconescu and Pan [@CDP14] proposed to use *BPS cohomology*: in fact, despite being defined only physically, BPS cohomology is expected to enjoy a $\chi$-independence phenomenon without assuming coprimality between $n$ and $d$. Further, the BPS cohomology groups on both the Dolbeault and Betti sides carry a Lie algebra structure (see [@Davison2020] and [@DHSM2022]) compatible with the non abelian Hodge isomorphism [@SS20 Conjecture 1.5], opening the way to a possible representation theoretic approach to P=W problems. To overcome the difficulty of a rigorous mathematical definition, Kinjo and Koseki [@KinjoKoseki2021] propose a definition of BPS cohomology on $\mathcal{M}_{Dol}(n,d)$ using the cohomological Donaldson-Thomas theory of the 3-fold $\mathrm{Tot}(\mathcal{O}_C\oplus K_C)$. Just as the intersection cohomology of $\mathcal{M}_{Dol}(n,d)$ is defined as the hypercohomology of the intersection complex $\mathcal{IC}^{Dol}_{n,d}$, the BPS cohomology is the hypercohomology of another complex $\mathcal{BPS}^{Dol}_{n,d}$: $$H^*_{BPS}(\mathcal{M}_{Dol}(n,d)):=\mathbb{H}^{*}(\mathcal{BPS}^{Dol}_{n,d}).$$ Indeed there is a natural morphism $$\mathcal{IC}^{Dol}_{n,d}\hookrightarrow \mathcal{BPS}^{Dol}_{n,d}$$ which is an isomorphism in the coprime case and an injection of perverse sheaves for general $n$ and $d$. *Example 3*. To see that in the non-coprime case there is no isomorphism between the complexes, it is sufficient to consider the moduli space $\mathcal{M}_{Dol}(2,0)$ on a genus 2 curve. Setting $$\begin{aligned} P^{IC}_t(X)&=\sum_i \dim(IH^i(X))t^i,\\ P^{BPS}_t(X)&=\sum_i \dim(H_{BPS}^i(X))t^i,\end{aligned}$$ one has that, by [@Felisetti2018] and [@Rayan2018], $$\begin{aligned} P^{IC}_t(\mathcal{M}_{Dol}(2,0))&=(1+t)^4(1+t^2+2t^4+2t^6),\\ P^{BPS}_t(\mathcal{M}_{Dol}(2,0))&=(1+t)^4(1+t^2+4t^3+2t^4+4t^5+2t^6).\end{aligned}$$ The following theorem can be viewed as a non-coprime analogue of Theorem 9. **Theorem 10**. 
*Let $n,d,d'$ be integers such that $n>0$. Then there exists an isomorphism $$H_{BPS}^*(\mathcal{M}_{Dol}(n,d))\cong H_{BPS}^*(\mathcal{M}_{Dol}(n,d'))$$ preserving Hodge structures and the perverse filtration.* It is now natural to wonder whether there is a formulation of the P=W conjecture for BPS cohomology and what its relation with P=W and PI=WI would be. ### All P=W conjectures at once The formulation of the P=W conjecture for BPS cohomology is understood in terms of three steps: 1. Consider the moduli stacks $\mathfrak{M}_{Dol}(n,d)$ and $\mathfrak{M}_B(n,d)$ of, respectively, semistable Higgs bundles on $C$ and semisimple representations of its fundamental group into $\mathop{\mathrm{GL(\textit{n},\mathbb{C})}}$. By [@KinjoKoseki2021 Theorem 5.16], one can decompose the Borel--Moore homology of $\mathfrak{M}_{Dol}(n,0)$ into tensor products of the BPS cohomology. Combining this with a $\chi$-independence result, one generalizes the result to $\mathfrak{M}_{Dol}(n,d)$. 2. Davison establishes the analogous decomposition of the Borel--Moore homology of $\mathfrak{M}_{B}(n,0)$ into tensor products of the BPS cohomology on the Betti side, see [@Davisonnahtstacks Theorem 2.2]. Moreover, he conjectures a $\chi$-independence result analogous to Theorem [Theorem 10](#chiindepnoncoprime){reference-type="ref" reference="chiindepnoncoprime"}, see [@Davisonnahtstacks Conjecture 4.10]. 3. Later, Davison conjectures the existence of an isomorphism $\Upsilon$ between the Borel--Moore homology groups of $\mathfrak{M}_{Dol}(n,0)$ and $\mathfrak{M}_B(n,0)$ $$H^*_{BM}(\mathfrak{M}_{Dol}(n,0))\cong H^*_{BM}(\mathfrak{M}_{B}(n,0))$$ and formulates a stacky version of the P=W conjecture, involving the natural weight filtration $W^{\bullet}$ on $H^*_{BM}(\mathfrak{M}_B(n,0))$ and a suitably defined perverse filtration $\mathfrak{F}^{\bullet}$ on $H^*_{BM}(\mathfrak{M}_{Dol}(n,0))$, see [@Davisonnahtstacks §5].
We state the stacky P=W conjecture (we omit shifts for ease of the reader and refer to [@Davisonnahtstacks Conjecture B] for a precise statement). **Conjecture 11** (PS=WS conjecture). *There exists an isomorphism $$\Upsilon: H^*_{BM}(\mathfrak{M}_B(n,0))\xrightarrow{\simeq}H^*_{BM}(\mathfrak{M}_{Dol}(n,0))$$ such that $$\Upsilon(W_{2i}H^*_{BM}(\mathfrak{M}_{B}(n,0)))=\mathfrak{F}_i(H^*_{BM}(\mathfrak{M}_{Dol}(n,0))).$$* **Remark 7**. The existence of the isomorphism has recently been shown in [@Hennecart2023], while the $\chi$-independence result [@Davisonnahtstacks Conjecture 4.10] for $\mathcal{M}_{B}(n,0)$ has been partially proved by Davison, Hennecart and Schlegel Mejia in 2022, see [@DHSM2022]. Let us make two final remarks relating the three versions of the P=W conjecture stated in this paper. **Remark 8** ($PS=WS\Leftrightarrow PI=WI$). Davison, Hennecart and Schlegel Mejia [@DHSM2022] established a theorem describing the relation between BPS cohomology and intersection cohomology for Dolbeault and Betti moduli spaces (see also [@MauriMigliorini]). As a result, the P=W conjecture via BPS cohomology is equivalent to PI=WI and, as long as $\gcd(n,d)=\gcd(n, d')$, holds for intersection cohomology too. **Remark 9** ($\chi$-independence $\Rightarrow (PS=WS\Leftrightarrow P=W)$). Davison shows that, if the conjecture [@Davisonnahtstacks Conjecture 4.10] on $\chi$-independence for the Betti moduli space holds, then all versions of the P=W conjecture are equivalent. Such a $\chi$-independence result has been proven in genus 0 and 1 in [@Davisonnahtstacks], and partially proved in greater generality in [@DHSM2022]. # The proofs {#sec:proofs} In this final section we briefly review the proofs of the P=W conjecture in the coprime case throughout the years. For simplicity we will always consider the case $d=1$, as we have seen that the $\chi$-independence results imply that this is not a restrictive hypothesis.
## The original proof of de Cataldo-Hausel-Migliorini in rank 2 As we mentioned in the introduction, the P=W conjecture was first stated by de Cataldo, Hausel and Migliorini in 2010 in [@deCataldoHauselMigliorini2012], where they also proved it in the rank 2 case. In rank 2, the cohomology of $\mathcal{M}_{Dol}(2,1)$ and $\mathcal{M}_{B}(2,1)$ is known in terms of generators and relations [@HauselThaddeus04]. Consider the universal Higgs bundle $\mathcal{V}$ on $\mathcal{M}_{Dol}(2,1)\times C$ and denote by $e_1,\ldots,e_{2g}$ a symplectic basis of $H^1(C,\mathbb{Q})$ and by $\Omega\in H^2(C,\mathbb{Q})$ the Poincaré dual of the class of a point. The second Chern class of the bundle $End(\mathcal{V})$ has a Künneth decomposition $$c_2(End(\mathcal{V}))=\alpha\otimes \Omega +\sum_{i=1}^{2g}\psi_i\otimes e_i+\beta\otimes 1,$$ for some classes $\alpha\in H^2(\mathcal{M}_{Dol}(2,1),\mathbb{Q})$, $\psi_1,\ldots,\psi_{2g}\in H^3(\mathcal{M}_{Dol}(2,1),\mathbb{Q})$ and $\beta\in H^4(\mathcal{M}_{Dol}(2,1),\mathbb{Q})$. Moreover, the generators of $H^*(Pic^1(C),\mathbb{Q})$ pull back to the classes $\epsilon_1,\ldots,\epsilon_{2g}$ via the morphism $\mathcal{M}_{Dol}(2,1)\rightarrow Pic^1(C)$ given by $(V,\phi)\mapsto \det (V)$. **Theorem 12**. *The classes $\alpha,\beta, \psi_i, \epsilon_i$ for $i=1,\ldots, 2g$ form a set of multiplicative generators of the cohomology of $\mathcal{M}_{Dol}(2,1)$.* Relations between these generators are described in the companion paper [@HauselThaddeusrelations], allowing one to construct a proof of P=W at the level of generators. **Theorem 13** ([@deCataldoHauselMigliorini2012], Theorem 3.1.1). *The place of the multiplicative generators in the perverse Leray filtration of $H^*(\mathcal{M}_{Dol}(2,1),\mathbb{Q})$ is half their place in the weight filtration of $H^*(\mathcal{M}_{B}(2,1),\mathbb{Q})$.* Since the weight filtration is compatible with the cup product by definition, the hard task is now to prove the compatibility of the cup product with the perverse filtration.
As we already mentioned, the multiplicativity of the perverse Leray filtration is one of the main difficulties in all proofs of the P=W conjecture. Unfortunately, compatibility for the perverse filtration holds only in a weak sense, i.e. $$P_pH^*(\mathcal{M}_{Dol}(2,1),\mathbb{Q})\cup P_qH^*(\mathcal{M}_{Dol}(2,1),\mathbb{Q})\rightarrow P_{p+q+N}H^*(\mathcal{M}_{Dol}(2,1),\mathbb{Q}),$$ where $N$ is the relative dimension of the map $h$. In contrast, strong compatibility is satisfied by the ordinary Leray filtration, which is contained in the perverse one. As a result, the strategy of the proof goes as follows. 1. Show that on a sufficiently large open dense subset $U$ of $\mathcal{A}$ the Leray and the perverse Leray filtrations coincide, so that on $U$ the perverse Leray filtration is compatible with the cup product. Since the perversity of a class is tested by restricting it to the inverse image of a generic affine subspace of $\mathcal{A}$, if $\mathcal{A}\setminus U$ has large codimension then, in a certain range of dimensions, the subspace can be chosen inside $U$, where the perverse filtration is multiplicative. This allows one to compute the perversity of many monomial generators of $H^*(\mathcal{M}_{Dol}(2,1))$. 2. Treat the remaining generators via an ad hoc argument (see [@deCataldoHauselMigliorini2012 §4.3]). Let us explain briefly the content of the first step. Let $\mathcal{A}_{reg}=\left\lbrace a\in \mathcal{A}\mid C_a \text{ is nonsingular }\right\rbrace$ and $\mathcal{A}_{ell}=\left\lbrace a\in \mathcal{A}\mid C_a \text{ is integral }\right\rbrace$. The open set $\mathcal{A}_{ell}$ is called the *elliptic locus* and is the candidate open set $U$ appearing in item 1. of the strategy. The restriction of the spectral family $f:\mathcal{C}\rightarrow \mathcal{A}_{reg}$ to the regular locus is a smooth family, and so is the relative (compactified) Jacobian $f_J:Pic(\mathcal{C})\rightarrow \mathcal{A}_{reg}$.
On $\mathcal{A}_{reg}$ the decomposition theorem for $h$ is equivalent to that for $f_J$ and reads as $$Rh_*\mathbb{Q}_{\mid \mathcal{A}_{reg}}=Rf_{J,*}\mathbb{Q}=\bigoplus_i R^if_{J,*}(\mathbb{Q}_{Pic(\mathcal{C})})[-i]=\bigoplus_i \Lambda^iR^1f_{*}\mathbb{Q}_{\mathcal{C}},$$ where the second equality follows from the fact that $Pic(C_a)$ is an abelian variety for all $a\in \mathcal{A}_{reg}$. If one extends the family to $\mathcal{A}_{ell}$, then the local systems $\Lambda^iR^1f_{*}\mathbb{Q}_{\mathcal{C}}=:\Lambda^iR^1$ are replaced by their intersection cohomology complexes $\mathcal{IC}(\Lambda^iR^1)$, and other supports might appear. However, this does not happen. **Theorem 14** ([@deCataldoHauselMigliorini2012], Theorem 1.1.2). *Keep the notation as above. Then* (i) *$$Rh_*\mathbb{Q}_{\mid \mathcal{A}_{ell}}=\bigoplus_i \mathcal{IC}\left(\Lambda^i R^1\right);$$* (ii) *If $j:\mathcal{A}_{reg}\hookrightarrow \mathcal{A}_{ell}$ is the open inclusion of the regular locus inside the elliptic one, then $$\mathcal{IC}\left(\Lambda^i R^1\right)=j_*\left(\Lambda^i R^1\right),$$ i.e. the perverse sheaves appearing in the decomposition theorem are actual sheaves up to a dimensional shift.* These two statements are a special case of Ngô's support theorem [@ngo] (see also [@MS], [@MSV], [@Felisettihilb] for the case of compactified Jacobians and Hilbert schemes). In particular, they imply that the classical and the perverse Leray filtrations coincide on $\mathcal{A}_{ell}$ and, since the former is multiplicative, so is the latter. **Remark 10**. This technique does not extend to higher rank: in fact, already for $n=3$ there are fibers over the elliptic locus which have non-palindromic Betti numbers, contradicting the first part of . ## de Cataldo-Maulik-Shen's and Maulik-Shen's proofs In 2019, de Cataldo, Maulik and Shen prove the P=W conjecture in arbitrary rank for curves of genus 2 and reduce the proof of P=W to the multiplicativity of the perverse filtration.
Later, in 2022, Maulik and Shen, building on these results, prove the conjecture in full generality. One of the key tools in the approach of these two works is the reduction to the compact case via a degeneration similar to that of . To make the notation lighter, from now on we will denote $\mathcal{M}_{Dol}(n,1)$ and $\mathcal{M}_{B}(n,1)$ simply by $\mathcal{M}_{Dol}$ and $\mathcal{M}_{B}$ respectively. As in rank 2, the generators of the cohomology of $\mathcal{M}_{Dol}$ can be expressed in terms of the Chern character of the (normalized) universal bundle. In particular, given the diagram, Markman defines for each $\gamma\in H^i(C,\mathbb{Q})$ a *tautological class* $$c(\gamma,k)=q_{\mathcal{M},*}(ch_k(\mathcal{V})\cup q_c^{*}\gamma)\in H^{i+2k-2}(\mathcal{M}_{Dol},\mathbb{Q})$$ and proves that $$\left\lbrace c(\gamma,k) \mid \gamma \in H^*(C,\mathbb{Q}), \ k=1,\ldots, \dim \mathcal{M}_{Dol}\right\rbrace$$ is a set of generators for $H^{*}(\mathcal{M}_{Dol})$ and, consequently, for $H^{*}(\mathcal{M}_{B})$ through the non abelian Hodge isomorphism. We say that $$c(\gamma,k) \text{ is } \begin{cases} \text{ \textit{even} } \mbox{ if }\gamma\in H^0(C,\mathbb{Q})\oplus H^2(C,\mathbb{Q}),\\ \text{ \textit{odd} } \mbox{ if }\gamma\in H^1(C,\mathbb{Q}).\\ \end{cases}$$ On the Betti side, the weight filtration on $H^*(\mathcal{M}_{B})$ has been computed by Shende in [@Shende17]. In particular, he proves the following. **Theorem 15** (Weight of tautological classes).
*The mixed Hodge structure on $H^*(\mathcal{M}_{B},\mathbb{Q})$ is of Hodge--Tate type and for all $\gamma \in H^i(C,\mathbb{Q})$ one has $$c(\gamma,k)\in \ ^k\mathrm{Hdg}^{i+2k-2}(\mathcal{M}_{B}).$$* As a result there is a canonical decomposition of graded vector spaces $$H^*(\mathcal{M}_{Dol},\mathbb{Q})\cong H^*(\mathcal{M}_{B},\mathbb{Q})=\bigoplus_{k,i} \ ^{k}\mathrm{Hdg}^i(\mathcal{M}_{B})$$ and the P=W conjecture can be rephrased as $$P_k H^*(\mathcal{M}_{Dol},\mathbb{Q})=\bigoplus_{k'\leq k} \ ^{k'}\mathrm{Hdg}^*(\mathcal{M}_{B}).$$ Hence, one can split the resolution of the P=W conjecture into two separate problems (see [@deCataldoMaulikShen2019 Conjecture 0.3]). **Conjecture 16** (Equivalent version of P=W). *There exists a splitting $G_*H^*(\mathcal{M}_{Dol},\mathbb{Q})$ of the perverse filtration such that* (i) *(Tautological classes) $c(\gamma,k)\in G_kH^*(\mathcal{M}_{Dol},\mathbb{Q})$ (i.e. $c(\gamma,k)$ has perversity $k$) for all $k\geq 0$ and all $\gamma \in H^*(C,\mathbb{Q})$;* (ii) *(Multiplicativity) The perverse filtration is multiplicative, i.e. $$G_pH^*(\mathcal{M}_{Dol},\mathbb{Q})\cup G_qH^*(\mathcal{M}_{Dol},\mathbb{Q})\rightarrow G_{p+q}H^*(\mathcal{M}_{Dol},\mathbb{Q}).$$* ### De Cataldo-Maulik-Shen's proofs for even tautological classes De Cataldo, Maulik and Shen prove the first item for both odd and even tautological classes, i.e. P=W is reduced to the multiplicativity of $P$. For even tautological classes they can say more: calling $R^*(\mathcal{M}_{Dol})$ (resp. $R^*(\mathcal{M}_{B})$) the subalgebra generated by even tautological classes, they prove the following. **Theorem 17**.
*The P=W conjecture holds for $R^*(\mathcal{M}_{Dol})$ and $R^*(\mathcal{M}_{B})$: for all genus $g\geq 2$ and all $n\geq 1$ one has $$P_kH^*(\mathcal{M}_{Dol})\cap R^*(\mathcal{M}_{Dol}) =W_{2k}H^*(\mathcal{M}_{B})\cap R^*(\mathcal{M}_{B}).$$* The idea exploited for proving these results is to first study perverse filtrations for certain compact hyperkähler manifolds, motivated in part by the work of Shen and Yin presented in . Again, one constructs a degeneration similar to that of , where the K3 surface $S$ is replaced by an abelian surface $A$, and proves analogues of the result for the moduli space $\mathcal{M}_{\beta,A}$ of semistable 1-dimensional sheaves on $A$ with support on a curve class $\beta$. The advantage of this approach is that, in the hyperkähler setting, there are results by Markman [@Markman2002] on monodromy operators for $\mathcal{M}_{\beta,A}$ which do not appear in the Hitchin setting and which impose strong constraints on the tautological classes of universal families. The upshot is that one can exploit these results to compute the perverse filtration and show that it is multiplicative in the compact setting. The remaining task is to go back to the Hitchin setting of Higgs bundles. Since the perverse filtration does not depend on the complex structure of the curve, it suffices to examine the case of a curve $$j_C:C\hookrightarrow A$$ embedded in an abelian surface. This allows them to construct a *specialization morphism* $$\mathrm{sp}^{!}: H^{*}(\mathcal{M}_{\beta,A},\mathbb{Q})\rightarrow H^*(\mathcal{M}_{Dol},\mathbb{Q})$$ which preserves the perverse filtration. In particular, they prove [@deCataldoMaulikShen2019 Propositions 4.4 and 4.9] that all classes $c(\gamma,k)$ with $$\label{eq:imj} \gamma\in \mathrm{Im}(j_C^*)$$ have the right perversity and that the restriction of the splitting of the perverse filtration to the subalgebra generated by those classes is multiplicative. Since all even classes $c(\gamma,k)$ satisfy , one gets .
Moreover, when $g=2$ the abelian surface $A$ can be chosen to be $Jac(C)$, so that the morphism $j_C^*$ is surjective and the P=W conjecture is verified. ### Maulik-Shen's proof of the full conjecture The first proof of the P=W conjecture in full generality, due to Maulik and Shen [@MaulikShen2022], appeared in 2022. To prove the result they consider the generators $c(\gamma,k)$ and prove . We list here the main steps of their proof. Since the technicalities are beyond the scope of this survey, we confine ourselves to highlighting the main ingredients and ideas of each step. 1. They first introduce an auxiliary notion of *strong perversity*, which has the advantage of being multiplicative with respect to the cup product. Then, they formulate a sheaf-theoretic enhancement of , which involves strong perversities of Chern classes on a twisted Dolbeault moduli space $\mathcal{M}_{Dol}^{\mathcal{L}}$ [@MaulikShen2022 Theorem 2.6]. In particular, they show that to prove the P=W conjecture it suffices to show that the $k$-th Chern character of the universal bundle $\mathcal{V}^{\mathcal{L}}$ on $\mathcal{M}_{Dol}^{\mathcal{L}}\times C$ has strong perversity $k$, which implies that each tautological class $c(\gamma,k)$ has perversity $k$. The technical passage to the twisted case is due to the fact that the decomposition theorem for twisted Hitchin systems is in general more manageable. 2. The second key idea in the proof of Maulik-Shen is to make use of *Yun's global Springer theory* [@Yun2011]. Global Springer theory constructs a nonsingular Deligne--Mumford stack $\mathfrak{M}^{par}$ of parabolic twisted Higgs bundles together with a map $$\pi: \mathfrak{M}^{par}\rightarrow \mathcal{M}_{Dol}^{\mathcal{L}}\times C$$ forgetting the parabolic datum.
Composing with the twisted Hitchin fibration $h^{\mathcal{L}}:\mathcal{M}_{Dol}^{\mathcal{L}}\rightarrow \mathcal{A}_{\mathcal{L}}$ tensored with the identity on $C$, one gets a proper parabolic Hitchin map $$h^{par}:\mathfrak{M}^{par}\rightarrow \mathcal{A}_{\mathcal{L}}\times C.$$ In particular, the parabolic datum yields a decomposition of the $k$-th Chern character of $\pi^*\mathcal{V}^{\mathcal{L}}$ as a product of $k$ Chern classes $c_1(L(\zeta))$ of line bundles associated with the characters $\zeta$ of the torus $\mathbb{C}^*\subset \mathrm{PGL}_n(\mathbb{C})$. 3. Yun shows that the strong perversity of $c_1(L(\zeta))$ on the elliptic locus $\mathcal{A}_{\mathcal{L}}^{ell}\times C$ is exactly 1, thus implying that on $\mathcal{A}_{\mathcal{L}}^{ell}\times C$ the strong perversity of $ch_k(\mathcal{V}^{\mathcal{L}})$ is $k$ [@MaulikShen2022 §3]. 4. Since the result on the strong perversity of $c_1(L(\zeta))$ can be rephrased in terms of maps between the perverse cohomology sheaves $\ ^{\mathfrak{p}}\mathcal{H}^i(Rh^{par}_*\mathbb{Q}_{\mathfrak{M}^{par}})$, Maulik and Shen prove a generalized version of Chaudouard-Laumon's support theorem [@CL2017] for the parabolic Hitchin map [@MaulikShen2022 §4], showing that the $^{\mathfrak{p}}\mathcal{H}^i$ are supported on the whole Hitchin base $\mathcal{A}_{\mathcal{L}}\times C$. This allows one to extend the result from the elliptic locus to the whole Hitchin base. The result now follows from a lemma stating the equality of the strong perversities of $ch_k(\pi^*\mathcal{V}^{\mathcal{L}})$ and $ch_k(\mathcal{V}^{\mathcal{L}})$.
## Hausel-Mellit-Minets-Schiffmann's proof The proof by Hausel, Mellit, Minets and Schiffmann [@HMMS2022] also appeared in 2022 and follows quite a different approach from the previous ones: this proof of P=W is based on an action, on the cohomology of the moduli space $\mathcal{M}_{Dol}$, of the Lie algebra $\mathcal{H}_2$ of polynomial Hamiltonian vector fields on the plane with respect to the standard symplectic form. Moreover, this approach works in the broader context of moduli spaces of stable parabolic Higgs bundles. 1. The action of $\mathcal{H}_2$ is defined via cohomological Hall algebra techniques (see [@MMSV]). 2. Once the action of the Lie algebra is constructed, one can identify certain operators that produce $\mathfrak{sl}_2$-triples, namely copies of the Lie algebra $\mathfrak{sl}_2\subset\mathcal{H}_2$. Every $\mathfrak{sl}_2$-triple acting on the cohomology of a variety comes equipped with a corresponding increasing filtration: in the case of $\mathcal{M}_{Dol}$ (or its variants) the authors find a specific triple such that the associated filtration coincides with the perverse filtration. Using the formalism of the $\mathcal{H}_2$-action, the authors manage to conclude the P=W conjecture on the pure cohomology over the elliptic locus. 3. Once the conjecture is established on the elliptic locus, the authors extend the result globally, using the fact that, in certain situations, the global cohomology of the moduli space injects isomorphically into the pure part of the cohomology of the elliptic locus. ## Maulik-Shen-Yin's proof The last proof in chronological order combines the approaches of the previous ones, suggesting a new direction for finding P=W phenomena. In fact, this last proof by Maulik, Shen and Yin is obtained as a consequence of a more general result on a class of morphisms, called by the authors *dualizable abelian fibrations*, which are modelled on the Hitchin map and on compactified Jacobian fibrations.
Roughly speaking, a dualizable abelian fibration is an abelian fibration $\pi:M\rightarrow B$ with a dual fibration $\pi^{\vee}:M^{\vee}\rightarrow B^{\vee}$ satisfying two main properties: (i) (Fourier-Mukai) $M$ and $M^{\vee}$ are related via a Fourier-Mukai (FM) transform with nice properties mimicking those of dual abelian schemes; (ii) (Full support) All the simple perverse summands in the decomposition theorem of $R\pi_*\mathbb{Q}_M$ are supported on the whole base $B$. The Chern character of the FM kernel induces a Fourier transform in cohomology $$\mathfrak{F}=\sum \mathfrak{F}_k:H^*(M^{\vee},\mathbb{Q})\rightarrow H^*(M,\mathbb{Q}), \quad \text{such that } \mathfrak{F}_k(H^i(M^{\vee},\mathbb{Q})) \subseteq H^{i}(M,\mathbb{Q}),$$ and a filtration $$C_kH^*(M,\mathbb{Q})=\mathrm{Span}\{\mathrm{Im}\,\mathfrak{F}_j\mid j\leq k\},$$ called the *Chern filtration*. *Example 4*. Consider the Hitchin map $h:\mathcal{M}_{Dol}\rightarrow \mathcal{A}$ and restrict it to the elliptic locus $\mathcal{A}_{ell}\subset\mathcal{A}$ parametrizing integral spectral curves. As we have seen in the rank 2 case, on $\mathcal{A}_{ell}$ the Hitchin fibration is nothing but the relative compactified Jacobian fibration of the spectral family $\mathcal{C}_{ell}\rightarrow \mathcal{A}_{ell}$: $$f_J:Pic(\mathcal{C}_{ell})\rightarrow \mathcal{A}_{ell}.$$ Maulik, Shen and Yin prove that $f_J$ satisfies conditions (i) and (ii) of dualizable abelian fibrations (see [@MSY Theorem 0.2]): indeed, the theory of Fourier-Mukai transforms for compactified Jacobians was developed by Arinkin [@Arinkin2011; @Arinkin2013], while the full support in the decomposition theorem of $Rf_{J,*}\mathbb{Q}_{Pic(\mathcal{C}_{ell})}$ is ensured by a combination of Ngô's support theorem [@ngo] and Severi identities [@MaulikShen2023 Lemma 4.1]. In this case, the associated Chern filtration corresponds simply to the filtration defined via the tautological classes of the moduli space $\mathcal{M}_{Dol}$.
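For orientation, the classical model for such a cohomological Fourier transform is Mukai's transform for a single abelian variety; the formula below is standard in that setting and is included here only as an illustration, not as the general construction of [@MSY]. If $A$ is an abelian variety with dual $\hat{A}=\mathrm{Pic}^0(A)$ and $\mathcal{P}$ denotes the Poincaré bundle on $A\times \hat{A}$, then $$\mathfrak{F}(\alpha)=p_{A,*}\big(\mathrm{ch}(\mathcal{P})\cup p_{\hat{A}}^{*}\alpha\big)\in H^{*}(A,\mathbb{Q}),\qquad \alpha\in H^{*}(\hat{A},\mathbb{Q}),$$ where $p_A$ and $p_{\hat{A}}$ are the two projections; since the Todd class of an abelian variety is trivial, the transform is governed entirely by the Chern character of the kernel, exactly as in the definition of the Chern filtration above.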
The main result in Maulik-Shen-Yin's paper is the following theorem, see [@MSY Theorem 0.1], establishing the multiplicativity of the perverse filtration and the inclusion of the Chern filtration in the perverse one for all dualizable abelian fibrations satisfying a technical condition on the Fourier-Mukai transform called *Fourier Vanishing* (FV). **Theorem 18**. *Let $\pi:M\rightarrow B$ be a dualizable abelian fibration satisfying (FV). Then* 1. *(Multiplicativity of $P$) The perverse filtration $P_*$ on $H^*(M,\mathbb{Q})$ associated with $\pi$ is multiplicative;* 2. *(Perverse $\supset$ Chern) For any class $\alpha\in H^*(M^{\vee},\mathbb{Q})$ $$\mathfrak{F}_k(\alpha)\in P_kH^*(M,\mathbb{Q}).$$* As an application of this theorem, the authors provide a new proof of P=W. In fact, the P=W conjecture can be decomposed into three identities: $$P_k(H^*(\mathcal{M}_{Dol},\mathbb{Q}))=C_k(H^*(\mathcal{M}_{Dol},\mathbb{Q}))=C_k(H^*(\mathcal{M}_{B},\mathbb{Q}))=W_{2k}(H^*(\mathcal{M}_{B},\mathbb{Q})),$$ where $C_k$ denotes the filtration defined via the tautological classes of the moduli space as in Example . Since the second and third identities had been established earlier by work of Markman [@Markman2002], Hausel--Thaddeus [@HauselThaddeus04] and Shende [@Shende17] respectively, one is left with proving the first identity $"P=C"$ on $\mathcal{M}_{Dol}$. Actually, the Curious Hard Lefschetz theorem allows one to reduce the proof of the equality to showing the inclusion $$C_kH^*(\mathcal{M}_{Dol},\mathbb{Q})\subseteq P_kH^*(\mathcal{M}_{Dol},\mathbb{Q}).$$ Since the compactified Jacobian fibration in Example satisfies the Fourier Vanishing condition (see [@MSY §3.5]), the hypotheses of are satisfied and thus one can establish $C\subseteq P$ on the elliptic locus of the Hitchin map. The result can now be extended to the whole Hitchin base via the same argument as in [@HMMS2022].
# Intersection cohomology and perverse sheaves {#sec:appendix} In this appendix we recall the main notions of the theory of perverse sheaves and intersection cohomology which appear in the paper. For proofs, examples and several enlightening explanations, we refer to [@deCataldoMigliorini2009]. All cohomology groups are considered with rational coefficients, so we omit them from the notation.\ Let $Y$ be an algebraic variety. We denote by $D^b_c(Y)$ the bounded derived category of $\mathbb{Q}$-constructible complexes on $Y$. Let $\mathbb{D}\colon D^b_c(Y)\rightarrow D^b_c(Y)$ be the Verdier duality functor. The two full subcategories $$\begin{aligned} {^{\mathfrak{p}}}D^b_{\leq 0}(Y) &:= \left \lbrace K^*\in D^b_c(Y) \mid \dim \mathrm{Supp}(\mathcal{H}^j(K^*))\leq -j \right \rbrace\\ {^{\mathfrak{p}}}D^b_{\geq 0}(Y) &:=\left \lbrace K^* \in D^b_c(Y) \mid \dim \mathrm{Supp}(\mathcal{H}^j(\mathbb{D}K^*))\leq -j \right \rbrace \end{aligned}$$ define a $t$-structure on $D^b_c(Y)$, called the *perverse* $t$-structure. The heart $$\mathcal{P}(Y) := {^{\mathfrak{p}}}D^b_{\leq 0}(Y)\cap {^{\mathfrak{p}}}D^b_{\geq 0}(Y)$$ of the $t$-structure is the abelian category of *perverse sheaves*. The truncation functors are denoted ${^{\mathfrak{p}}}\tau_{\leq k}\colon D^b_c(Y)\to {^{\mathfrak{p}}}D^b_{\leq k}(Y)$, ${^{\mathfrak{p}}}\tau_{\geq k}\colon D^b_c(Y)\to {^{\mathfrak{p}}}D^b_{\geq k}(Y)$, and the perverse cohomology functors are $${^{\mathfrak{p}}} \mathcal{H}^{k} :={^{\mathfrak{p}}}\tau_{\leq k} {^{\mathfrak{p}}}\tau_{\geq k}\colon D^b_c(Y) \to \mathcal{P}(Y).$$ The category $\mathcal{P}(Y)$ is Artinian; thus every $K\in \mathcal{P}(Y)$ admits a finite increasing filtration whose successive quotients are simple objects. Simple perverse sheaves are all of the form $$\mathcal{IC}_{\overline{Z}}(\mathcal{L})$$ where $\mathcal{IC}_{\overline{Z}}(\mathcal{L})$ denotes the intersection complex associated with some locally closed smooth subvariety $Z$ and a simple local system $\mathcal{L}$ on it.
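As a basic illustration of these support conditions (a standard fact, recalled here only as a sanity check), on a nonsingular variety $Y$ of dimension $n$ the shifted constant sheaf $\mathbb{Q}_Y[n]$ is perverse: $$\mathcal{H}^{j}(\mathbb{Q}_Y[n])=\begin{cases}\mathbb{Q}_Y & \text{if } j=-n,\\ 0 & \text{otherwise},\end{cases}\qquad \mathbb{D}(\mathbb{Q}_Y[n])\cong \mathbb{Q}_Y[n],$$ so the only nonvanishing cohomology sheaf sits in degree $j=-n$, where its support has dimension $n\leq -j$; by the Poincaré--Verdier self-duality above, the same bound holds for the Verdier dual, and hence $\mathbb{Q}_Y[n]\in {^{\mathfrak{p}}}D^b_{\leq 0}(Y)\cap {^{\mathfrak{p}}}D^b_{\geq 0}(Y)=\mathcal{P}(Y)$.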
**Definition 4**. The intersection complex $\mathcal{IC}_{\overline{Z}}(\mathcal{L})$ associated with a local system $\mathcal{L}$ is a complex of sheaves on $\overline{Z}$ which extends $\mathcal{L}[\dim Z]$ and is determined up to unique isomorphism in the derived category by the conditions: - $\mathcal{H}^j(\mathcal{IC}_{\overline{Z}}(\mathcal{L}))=0 \quad \text{ for all } j< -\dim Z$, - $\mathcal{H}^{-\dim Z}(\mathcal{IC}_{\overline{Z}}(\mathcal{L})_{\mid Z})\cong \mathcal{L}$, - $\dim \mathrm{Supp}\mathcal{H}^j(\mathcal{IC}_{\overline{Z}}(\mathcal{L}))<-j, \text{ for all }j>-\dim Z$, - $\dim \mathrm{Supp}\mathcal{H}^j(\mathbb{D}\mathcal{IC}_{\overline{Z}}(\mathcal{L}))<-j, \text{ for all }j>-\dim Z$. When $\mathcal{L}=\mathbb{Q}_{U}$ for an open nonsingular Zariski dense subset $U$ of $Y$, we simply denote $\mathcal{IC}_Y(\mathcal{L})$ by $\mathcal{IC}_Y$. Note that, if $Y$ is smooth, then $\mathcal{IC}_Y\cong \mathbb{Q}_Y[\dim Y]$. **Definition 5** (Intersection cohomology). Let $Y$ be an algebraic variety. We set $$IH^i(Y)\doteq \mathbb{H}^{i-\dim Y}(\mathcal{IC}_Y)$$ and we call it the $i$-th intersection cohomology group of $Y$. In particular, if $Y$ is smooth then $IH^i(Y)=\mathbb{H}^{i-\dim Y}(\mathbb{Q}_Y[\dim Y])=H^i(Y)$. In general, given any local system $\mathcal{L}$ supported on a locally closed subset $Z$ of $Y$, the intersection cohomology groups with coefficients in $\mathcal{L}$ are shifted hypercohomology groups of the intersection complex associated to $\mathcal{L}$: $$IH^*(\overline{Z},\mathcal{L})=\mathbb{H}^{*-\dim Z}(\overline{Z},\mathcal{IC}_{\overline{Z}}(\mathcal{L})).$$ **Proposition 19** (Properties of intersection cohomology). *Let $Y$ be an algebraic variety of dimension $n$. Then its intersection cohomology groups enjoy the following properties:* (i) *$IH^i(Y)$ is a finite-dimensional vector space and $IH^i(Y)=0$ for $i\not\in\{0,\ldots, 2n\}$.* (ii) *There is a natural morphism $H^i(Y)\rightarrow IH^i(Y)$ which is an isomorphism when $Y$ has at worst finite quotient singularities.
This morphism endows $IH^*(Y)$ with a module structure over $H^*(Y)$, but in general $IH^*(Y)$ has no ring structure or cup product.* (iii) *(**Poincaré duality**) There is a canonical isomorphism $IH^i(Y)\cong IH^{2n-i}(Y)^{\vee}$ for all $i\in \mathbb{N}$.* (iv) *$IH^i(Y)$ carries a natural mixed Hodge structure. If $Y$ is projective, the mixed Hodge structure is pure of weight $i$ and $IH^i(Y)$ admits a Hodge decomposition $$IH^i(Y)\otimes\mathbb{C}\cong \bigoplus_{p+q=i} IH^{p,q}(Y),\quad \overline{IH^{p,q}(Y)}\cong IH^{q,p}(Y).$$* ## The decomposition theorem package The crowning result of the theory of perverse sheaves is the decomposition theorem by Beilinson, Bernstein and Deligne [@bbd]. For the rest of the section we state all results without assuming the varieties to be nonsingular; when a variety $X$ is nonsingular, $IH^*(X)$ can be replaced simply by $H^*(X)$. **Theorem 20** (Decomposition theorem). *Let $h:X\rightarrow Y$ be a proper algebraic map of complex algebraic varieties. There is an isomorphism in $D^b_c(Y)$ $$\label{dt1} Rh_*\mathcal{IC}_X\cong \bigoplus_{i\in \mathbb{Z}} \ ^{\mathfrak{p}}\mathcal{H}^i(Rh_*\mathcal{IC}_X)[-i].$$ Furthermore, the perverse sheaves $^{\mathfrak{p}}\mathcal{H}^i(Rh_*\mathcal{IC}_X)$ are semisimple, i.e. there exists a stratification $Y=\bigsqcup_{\alpha} Y_{\alpha}$ such that $^{\mathfrak{p}}\mathcal{H}^i(Rh_*\mathcal{IC}_X)$ decomposes as a direct sum $$^{\mathfrak{p}}\mathcal{H}^i(Rh_*\mathcal{IC}_X)\cong \bigoplus_\alpha \mathcal{IC}_{\overline{Y_{\alpha}}}(\mathcal{L}_{\alpha}),$$ where the $\mathcal{L}_{\alpha}$ are semisimple local systems on $Y_\alpha$.* The direct sum [\[dt1\]](#dt1){reference-type="eqref" reference="dt1"} is finite and $i$ ranges from $-r(h)$ to $r(h)$, where $r(h)$ is defined as $$r(h)=\dim (X\times_Y X)-\dim X.$$ The decomposition theorem is understood in combination with the *relative Hard Lefschetz theorem*.
As the name suggests, the relative Hard Lefschetz theorem stated below is a relative version of the Hard Lefschetz theorem, and it is closely intertwined with the decomposition theorem, as it expresses a symmetry between the summands in [\[dt1\]](#dt1){reference-type="eqref" reference="dt1"}. **Theorem 21** (Relative Hard Lefschetz). *Let $h:X\rightarrow Y$ be a proper map of algebraic varieties with $X$ quasi-projective and let $\alpha$ be the first Chern class of a hyperplane line bundle on $X$. Then we have isomorphisms $$\alpha^i \cup: \ ^{\mathfrak{p}}\mathcal{H}^{-i}(Rh_*\mathcal{IC}_X)\xrightarrow{\simeq} \ ^{\mathfrak{p}}\mathcal{H}^{i}(Rh_*\mathcal{IC}_X).$$* ## Perverse Leray filtration Let $h:X\rightarrow Y$ be a projective map of algebraic varieties. **Definition 6**. The *perverse Leray filtration associated to $h$* is defined as $$P_k IH^*(X)=\mathrm{Im}\left\lbrace H^*(Y,\ ^\mathfrak{p}\tau_{\leq -k}\mathrm{R}h_*\mathcal{IC}_X) \rightarrow H^*(Y,\mathrm{R}h_*\mathcal{IC}_X)\right\rbrace.$$ **Remark 11**. In some papers, such as the original one on the P=W conjecture [@deCataldoHauselMigliorini2012], the perverse filtration is defined with a shift so that it ranges from 0 to the cohomological degree: $$P'_k IH^*(X)=P_k H^{*-(\dim X -r(h))}(Y,\mathrm{R}h_*\mathcal{IC}_X[\dim X -r(h)]).$$ In the case of the Hitchin fibration this amounts to a shift by the dimension of the Hitchin base. **Definition 7**. A class $\eta\in H^*(X)$ has perversity $k$ if $$\eta\in P_kH^*(X)\text{ and }\eta\not\in P_{k-1}H^*(X).$$ **Definition 8**. A *splitting* of the perverse filtration is a vector space decomposition $IH^*(X)=\bigoplus_{\ell} G_{\ell}IH^*(X)$ such that $$P_kIH^*(X)=\bigoplus_{\ell\leq k}G_{\ell}IH^*(X).$$ When $Y$ is affine, de Cataldo and Migliorini provided an equivalent, simpler geometric description of the perverse filtration. Since in this article we will always take $h$ to be the Hitchin map, we state the result assuming that $\dim X = 2\dim Y$.
Let $\Lambda^k\subset Y$ be a general $k$-dimensional linear section of $Y\subset \mathbb{A}^N$. **Theorem 22** (Flag filtration, [@deCataldoMigliorini2010 Theorem 4.1.1]). [\[thm:kercharacterizationp\]]{#thm:kercharacterizationp label="thm:kercharacterizationp"} *$$P'_p IH^d(X)=\mathrm{Ker}\left\lbrace IH^d(X)\rightarrow IH^d(h^{-1}(\Lambda^{d-p-1})) \right \rbrace.$$* For ease of the reader, we restate the relative Hard Lefschetz theorem in terms of the perverse Leray filtration. **Theorem 23** (Relative Hard Lefschetz for the perverse filtration). *Let $h\colon X\rightarrow Y$ be a proper map of algebraic varieties and let $\alpha\in H^2(X)$ be the first Chern class of a relatively ample line bundle. Then there exists an isomorphism $$\cup\alpha^k\colon Gr^P_{-k}IH^*(X)\rightarrow Gr^P_{k}IH^{*+2k}(X).$$* # Declarations {#declarations .unnumbered} **Funding:** The author is supported by INdAM GNSAGA.\ **Conflict of interests:** The author declares no conflict of interests. Department of Mathematics, Physics and Informatics\ University of Modena and Reggio Emilia\ `camilla.felisetti@unimore.it` [^1]: Roughly speaking, a manifold is hyperkähler if it has three complex structures interacting quaternionically, see [@Beauville84] for a precise definition.
arXiv:2309.15061, "$P=W$ phenomena in algebraic and enumerative geometry", Camilla Felisetti, math.AG.
--- abstract: | This work considers a diffusion problem, referred to as the *Ventcel problem*, involving a second order term on the domain boundary (the Laplace-Beltrami operator). A variational formulation of the Ventcel problem is studied, leading to a finite element discretization. The focus is on the construction of high order curved meshes for the discretization of the physical domain and on the definition of the lift operator, which is meant to transform a function defined on the mesh domain into a function defined on the physical one. This *lift* is defined in such a way as to satisfy adapted properties on the boundary, relative to the trace operator. Error estimations are computed and expressed both in terms of the finite element approximation error and of the geometrical error, respectively associated with the finite element degree $k\ge 1$ and with the mesh order $r\ge 1$. The numerical experiments that we carried out validate the proved *a priori* error estimates, depending on the two parameters $k$ and $r$. author: - "Fabien Caubet[^1], Joyce Ghantous[^2], Charles Pierre[^3]" bibliography: - biblio.bib title: A priori error estimates of a diffusion equation with Ventcel boundary conditions on curved meshes --- # Introduction #### Motivations The origins of the finite element method (see [@eforigin]) can be traced back to the 1950s, when engineers started to numerically solve structural mechanics problems in aeronautics. A key point in the analysis of this method is to obtain an estimation of the error produced while approximating the solution $u$ of a problem, typically a PDE, by its finite element approximation $u_h$. Let us mention that there are two types of error estimation: either an *a priori* or an *a posteriori* estimation. The goal of an *a priori* error estimation is to assess the error $\|u - u_h\|$ in terms of the mesh size $h$, the problem data, and the exact solution $u$.
Conversely, an *a posteriori* estimation depends on $h$ and the computed solution $u_h$, but not on $u$. Altogether, the *a priori* error analysis is mainly oriented towards theoretical qualification, while the *a posteriori* error analysis serves practical purposes. Together these approaches provide a broad view on the reliability of the approximation method considered. In this work, we focus on *a priori* error estimations. In various situations, we have to numerically solve a problem, typically a PDE, on a complex geometry. This work is aimed at certain industrial applications where the object or material under consideration is surrounded by a thin layer with different properties (typically a surface treatment or a corrosion layer). The presence of this layer causes some difficulties while discretizing the domain and numerically solving the problem. To overcome this problem, a classical approach consists in approximating the domain by a similar one without a thin layer but equipped with artificial boundary conditions, like the so-called *Ventcel boundary conditions* [@Gvial]. The physical properties of the thin layer are then contained in these new boundary conditions. These last two topics are going to be the main focus of this paper: we are going to consider the numerical resolution of a (scalar) PDE equipped with higher order boundary conditions, which are the Ventcel boundary conditions, and then to assess the *a priori* error produced by a finite element approximation, on higher order meshes. #### The Ventcel problem and its approximation Let $\Omega$ be a nonempty bounded connected domain in $\mathbb{R}^{d}$, $d=2$, 3, with a smooth boundary $\Gamma := \partial\Omega$.
Considering a source term $f$ and a boundary condition $g$, as well as some given constants $\kappa \ge 0$, $\alpha,\,\beta>0$, the Ventcel problem that we will focus on is the following: $$\label{1} \left\{ \begin{array}{rcll} -\Delta u + \kappa u &=& f & \text{ in } \Omega,\\ -\beta \Delta_{\Gamma} u + \partial_{\mathrm{n}} u + \alpha u &=& g & \text{ on } \Gamma,\\ \end{array} \right.$$ where $\boldsymbol{\mathrm{n}}$ denotes the external unit normal to $\Gamma$, $\partial_{\mathrm{n}} u$ the normal derivative of $u$ along $\Gamma$ and $\Delta_\Gamma$ the Laplace-Beltrami operator. Notice that the domain $\Omega$ is required to be smooth due to the presence of second order boundary conditions. Thus, the physical domain $\Omega$ cannot be fitted by a polygonal mesh domain. We then resort to high order meshes of geometrical order $r \ge 2$ defined in Section [3](#sec:mesh){reference-type="ref" reference="sec:mesh"}, following the work of many authors (see, e.g., [@elliott; @ed; @PHcia; @ciaravtransf]). Notice that the domain of the mesh of order $r$, denoted $\Omega_h^{(r)}$, does not fit the domain $\Omega$, but the numerical results are more accurate, as will be shown in Section [7](#sec:numerical-ex){reference-type="ref" reference="sec:numerical-ex"}. A $\mathbb{P}^k$-Lagrangian finite element method is used with a degree $k \ge 1$ to approximate the exact solution $u$ of System [\[1\]](#1){reference-type="eqref" reference="1"} by a finite element function $u_h$ defined on the mesh domain $\Omega_h^{(r)}$.
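As an aside (an illustration of ours, not taken from the paper), a manufactured solution makes the coupling between $u$, $f$ and $g$ in System [\[1\]](#1){reference-type="eqref" reference="1"} concrete: on the unit disc, the choice $u = x^2 + y^2$ forces $f = \kappa u - 4$ and, since $u \equiv 1$ on the unit circle (so the $\Delta_\Gamma$ term vanishes), $g = 2 + \alpha$. A symbolic check, assuming `sympy` is available:

```python
import sympy as sp

x, y, th = sp.symbols('x y theta', real=True)
kappa, alpha, beta = sp.symbols('kappa alpha beta', positive=True)

u = x**2 + y**2                                      # candidate solution on the unit disc
f = -sp.diff(u, x, 2) - sp.diff(u, y, 2) + kappa*u   # volume equation: f = -Δu + κu

# On the unit circle Γ, parametrized by arclength theta:
u_G = u.subs({x: sp.cos(th), y: sp.sin(th)})         # trace of u on Γ (here: constant 1)
lap_G = sp.diff(u_G, th, 2)                          # Laplace-Beltrami term Δ_Γ u on Γ
# normal derivative ∂_n u = ∇u · (x, y) restricted to |x| = 1
dn_u = (sp.diff(u, x)*x + sp.diff(u, y)*y).subs({x: sp.cos(th), y: sp.sin(th)})
g = sp.simplify(-beta*lap_G + dn_u + alpha*u_G)      # boundary equation data

print(sp.simplify(f - (kappa*u - 4)))  # 0: the volume data is f = κ(x² + y²) - 4
print(g)                               # reduces to α + 2 (the β-term vanishes on Γ)
```

Such manufactured solutions are also the standard way to measure convergence rates in the numerical experiments.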
Note that by distinguishing between the parameters $r$ and $k$, we want to highlight the influence of the geometrical order $r$ of the mesh and of the finite element approximation degree $k$ on the computational error: this allows the degree of the finite element method $k$ to be chosen according to the choice of the geometrical order $r$. Notice that an *isoparametric approach* to this problem, that is taking $k=r$, is treated in [@elliott; @ed]. Since $\Omega_h^{(r)}\neq \Omega$, in order to compare the numerical solution $u_h$ defined on $\Omega_h^{(r)}$ to the exact solution $u$ defined on $\Omega$ and to obtain *a priori* error estimations, the notion of *lifting* a function from a domain onto another domain needs to be introduced. The *lift functional* was first introduced in the 1970s by many authors (see, e.g., [@dubois; @scott; @Lenoir1986; @nedelec]). Among them, let us emphasize the lift based on the orthogonal projection onto the boundary $\Gamma$, introduced by Dubois in [@dubois] and further improved in terms of regularity by Elliott *et al.* in [@elliott]. However, the lift defined in [@elliott] does not coincide with the orthogonal projection on the computational domain's boundary. As will be seen in Section [4.1](#sec:lift-def-surf-vol){reference-type="ref" reference="sec:lift-def-surf-vol"}, this condition is essential to guarantee the theoretical analysis of this problem. In order to address this issue, an alternative definition is introduced in this paper, which will be used to perform a numerical study of the computational error of System [\[1\]](#1){reference-type="eqref" reference="1"}. This modification in the lift definition has a significant impact on the approximation error, as observed in the numerical examples in Section [7](#sec:numerical-ex){reference-type="ref" reference="sec:numerical-ex"}.
The main result is the following *a priori* error estimates, which will be explained in detail and proved in Section [6](#ERROR-section){reference-type="ref" reference="ERROR-section"}: $$\| u-u_h^\ell \|_{\mathrm{L}^2 (\Omega,\Gamma)} = O ( h^{k+1} + h^{r+1}) \quad {\rm and } \quad \| u- u_h^\ell \|_{\mathrm{H}^1 (\Omega,\Gamma)} = O ( h^{k} + h^{r+1/2}),$$ where $h$ is the mesh size and $u_h^\ell$ denotes the *lift* of $u_h$ (given in Definition [Definition 5](#def:liftvolume){reference-type="ref" reference="def:liftvolume"}), and $\mathrm{L}^2 (\Omega,\Gamma)$ and $\mathrm{H}^1 (\Omega,\Gamma)$ are Hilbert spaces defined below. #### Paper organization Section [2](#sec:notations_def){reference-type="ref" reference="sec:notations_def"} contains all the mathematical tools and useful definitions to derive the weak formulation of System [\[1\]](#1){reference-type="eqref" reference="1"}. Section [3](#sec:mesh){reference-type="ref" reference="sec:mesh"} is devoted to the definition of the high order meshes. In Section [4](#section-lift){reference-type="ref" reference="section-lift"}, the volume and surface lifts, which are the keystones of this work, are defined. A Lagrangian finite element space and a discrete formulation of System [\[1\]](#1){reference-type="eqref" reference="1"} are presented in Section [5](#FE-section){reference-type="ref" reference="FE-section"}, alongside their *lifted forms* onto $\Omega$. The *a priori* error analysis is detailed in Section [6](#ERROR-section){reference-type="ref" reference="ERROR-section"}. The paper wraps up in Section [7](#sec:numerical-ex){reference-type="ref" reference="sec:numerical-ex"} with some numerical experiments studying the dependency of the method's convergence rate on the geometrical order $r$ and on the finite element degree $k$. # Notations and needed mathematical tools {#sec:notations_def} Firstly, let us introduce the notations that we adopt in this paper.
Throughout this paper, $\Omega$ is a nonempty bounded connected open subset of $\mathbb{R}^{d}$ $(d=2,3)$ with a smooth (at least $\mathcal{C}^2$) boundary $\Gamma:=\partial{\Omega}$. The unit normal to $\Gamma$ pointing outwards is denoted by $\boldsymbol{\mathrm{n}}$ and $\partial_{\mathrm{n}} u$ denotes the normal derivative of a function $u$. We denote respectively by $\mathrm{L}^2(\Omega)$ and $\mathrm{L}^2(\Gamma)$ the usual Lebesgue spaces endowed with their standard norms on $\Omega$ and $\Gamma$. Moreover, for $k \geq 1$, $\mathrm{H}^{k+1}(\Omega)$ denotes the usual Sobolev space endowed with its standard norm. We also consider the Sobolev spaces $\mathrm{H}^{k+1}(\Gamma)$ on the boundary as defined e.g. in [@ventcel1 §2.3]. It is recalled that the norm on $\mathrm{H}^{1}(\Omega)$ is: $\|u\|^2_{\mathrm{H}^{1}(\Omega)} : = \|u\|^2_{\mathrm{L}^2(\Omega)} + \|\nabla u\|^2_{\mathrm{L}^2(\Omega)},$ and that $\|u\|^2_{\mathrm{H}^{k+1}(\Gamma)} := \|u\|^2_{\mathrm{H}^{k}(\Gamma)} + \|\nabla_\Gamma u\|^2_{\mathrm{H}^{k}(\Gamma)}$, where $\nabla_\Gamma$ is the tangential gradient defined below. Throughout this work, we rely on the following Hilbert space (see [@ventcel1]) $$\mathrm{H}^{1}(\Omega,\Gamma):= \{ u \in \mathrm{H}^{1}(\Omega), \ u_{|_\Gamma} \in \mathrm{H}^{1}(\Gamma) \},$$ equipped with the norm $\|u\|^2_{\mathrm{H}^{1}(\Omega,\Gamma)} : = \|u\|^2_{\mathrm{H}^{1}(\Omega)} + \| u\|^2_{\mathrm{H}^{1}(\Gamma)}.$ In a similar way is defined the following space $\mathrm{L}^2 (\Omega,\Gamma):= \{ u \in \mathrm{L}^2(\Omega), \ u_{|_\Gamma} \in \mathrm{L}^2(\Gamma) \},$ equipped with the norm $\|u\|^2_{\mathrm{L}^2(\Omega,\Gamma)} := \|u\|^2_{\mathrm{L}^2(\Omega)} + \| u\|^2_{\mathrm{L}^2(\Gamma)}$. More generally, we define $\mathrm{H}^{k+1} (\Omega,\Gamma):=\{ u \in \mathrm{H}^{k+1}(\Omega), \ u_{|_\Gamma} \in \mathrm{H}^{k+1}(\Gamma) \}$. Secondly, we recall the definition of the tangential operators (see, e.g., [@livreopt]). **Definition 1**.
*Let $w \in \mathrm{H}^1(\Gamma)$, $W \in \mathrm{H}^1(\Gamma,\mathbb{R}^d)$ and $u \in \mathrm{H}^2 (\Gamma)$. Then the following operators are defined on $\Gamma$:* - *the tangential gradient of $w$ given by $\nabla_\Gamma w :=\nabla\tilde{w} - (\nabla\tilde{w} \cdot \boldsymbol{\mathrm{n}})\boldsymbol{\mathrm{n}}$, where $\tilde{w} \in \mathrm{H}^1(\mathbb{R}^d)$ is any extension of $w$;* - *the tangential divergence of $W$ given by $\mathrm{div}_{\Gamma} W : = \mathrm{div}\tilde{W}- (\mathrm{D}\tilde{W} \boldsymbol{\mathrm{n}}) \cdot \boldsymbol{\mathrm{n}}$, where $\tilde{W} \in \mathrm{H}^1(\mathbb{R}^d, \mathbb{R}^d)$ is any extension of $W$ and $\mathrm{D}\tilde{W} = (\nabla\tilde{W}_i)_{i=1}^d$ is the differential matrix of the extension $\tilde{W}$;* - *the Laplace-Beltrami operator of $u$ given by $\Delta_\Gamma u := \mathrm{div}_\Gamma (\nabla_\Gamma u)$.* Additionally, the constructions of the mesh used in Section [3](#sec:mesh){reference-type="ref" reference="sec:mesh"} and of the lift procedure presented in Section [4](#section-lift){reference-type="ref" reference="section-lift"} are based on the following fundamental result that may be found in [@tubneig] and [@GT98 §14.6]. For more details on the geometrical properties of the tubular neighborhood and the orthogonal projection defined below, we refer to [@D1; @D2; @actanum]. **Proposition 1**. *Let $\Omega$ be a nonempty bounded connected open subset of $\mathbb{R}^{d}$ with a $\mathcal{C}^2$ boundary $\Gamma= \partial \Omega$. Let $\mathrm{d}: \mathbb{R}^d \to \mathbb{R}$ be the signed distance function with respect to $\Gamma$ defined by, $$\mathrm{d}(x) := \left \{ \begin{array}{ll} -\mathrm{dist}(x, \Gamma)& {\rm if } \, x \in \Omega , \\ 0& {\rm if } \, x \in \Gamma , \\ \mathrm{dist}(x, \Gamma)& {\rm otherwise}, \end{array} \right. 
\qquad {\rm with} \quad \mathrm{dist}(x, \Gamma) := \inf \{|x-y|,~ \ y \in \Gamma \}.$$ Then there exists a tubular neighborhood $\mathcal{U}_{\Gamma}:= \{ x \in \mathbb{R}^d ; |\mathrm{d}(x)| < \delta_\Gamma \}$ of $\Gamma$, of sufficiently small width $\delta_\Gamma$, where $\mathrm{d}$ is a $\mathcal{C}^2$ function. Its gradient $\nabla\mathrm{d}$ is an extension of the external unit normal $\boldsymbol{\mathrm{n}}$ to $\Gamma$. Additionally, in this neighborhood $\mathcal{U}_{\Gamma}$, the orthogonal projection $b$ onto $\Gamma$ is uniquely defined and given by, $$b\, :~ x \in \mathcal{U}_{\Gamma} \longmapsto b(x):=x-\mathrm{d}(x)\nabla\mathrm{d}(x) \in \Gamma.$$* Finally, the variational formulation of Problem [\[1\]](#1){reference-type="eqref" reference="1"} is obtained, using the integration by parts formula on the surface $\Gamma$ (see, e.g., [@livreopt]), and is given by, $$\label{fv_faible} \mbox{find } u \in \mathrm{H}^1 (\Omega,\Gamma)\mbox{ such that } a(u,v) = l(v), \, \forall \ v \in \mathrm{H}^1(\Omega,\Gamma),$$ where the bilinear form $a$, defined on $\mathrm{H}^1(\Omega,\Gamma)^2$, is given by, $$a(u,v) := \int_{\Omega} \nabla u \cdot \nabla v \, \mathrm{d}x +\kappa \int_{\Omega} u v \, \mathrm{d}x + \beta \int_{\Gamma} \nabla_{\Gamma} u \cdot \nabla_{\Gamma} v \, \mathrm{d}\sigma + \alpha \int_{\Gamma} u v \, \mathrm{d}\sigma,$$ and the linear form $l$, defined on $\mathrm{H}^1(\Omega,\Gamma)$, is given by, $$l(v) := \int_{\Omega} f v \, \mathrm{d}x +\int_{\Gamma} g v \, \mathrm{d}\sigma.$$ The following theorem states the well-posedness of the problem [\[fv_faible\]](#fv_faible){reference-type="eqref" reference="fv_faible"} proven in [@Jaca th. 2] and [@ventcel1 th. 3.3] and establishes the solution regularity proven in [@ventcel1 th. 3.4]. **Theorem 1**. *Let $\Omega$ and $\Gamma= \partial \Omega$ be as stated previously. Let $\alpha$, $\beta >0$, $\kappa \ge 0$, and $f \in \mathrm{L}^2(\Omega)$, $g \in \mathrm{L}^2(\Gamma)$.
Then there exists a unique solution $u \in \mathrm{H}^{1}(\Omega,\Gamma)$ to problem ([\[fv_faible\]](#fv_faible){reference-type="ref" reference="fv_faible"}).* *Moreover, if $\Gamma$ is of class $\mathcal{C}^{k+1}$, and $f \in \mathrm{H}^{k-1}(\Omega)$, $g \in \mathrm{H}^{k-1}(\Gamma)$, then the solution $u$ of [\[fv_faible\]](#fv_faible){reference-type="eqref" reference="fv_faible"} is in $\mathrm{H}^{k+1} (\Omega,\Gamma)$ and is the strong solution of the Ventcel problem [\[1\]](#1){reference-type="eqref" reference="1"}. Additionally, there exists $c>0$ such that the following inequality holds, $$\|u\|_{\mathrm{H}^{k+1}(\Omega,\Gamma)} \le c ( \|f\|_{\mathrm{H}^{k-1}(\Omega)} + \|g\|_{\mathrm{H}^{k-1}(\Gamma)}).$$* # Curved mesh definition {#sec:mesh} In this section we briefly recall the construction of curved meshes of geometrical order $r\ge 1$ of the domain $\Omega$ and introduce some notations. We refer to [@Jaca Section 2] for details and examples (see also [@elliott; @scott; @dubois; @Bernardi1989]). Recall that, for $r\ge 1$, the set of polynomials in $\mathbb{R}^d$ of order $r$ or less is denoted by $\mathbb{P}^r$. From now on, the domain $\Omega$ is assumed to be at least $\mathcal{C}^{r+2}$ regular, and $\hat{T}$ denotes the reference simplex of dimension $d$. In a nutshell, the way to proceed is the following. 1. Construct an affine mesh $\mathcal{T}_h^{(1)}$ of $\Omega$ composed of simplices $T$ and define the affine transformation $F_T:~\hat{T}\rightarrow T:=F_T(\hat{T})$ associated with each simplex $T$. 2. For each simplex $T \in \mathcal{T}_h^{(1)}$, a mapping $F_T^{(e)}:~\hat{T}\rightarrow {T}^{(e)}:= F_T^{(e)}(\hat{T})$ is designed and the resulting *exact elements* ${T}^{(e)}$ will form a curved exact mesh $\mathcal{T}_h^{(e)}$ of $\Omega$. 3. For each $T \in \mathcal{T}_h^{(1)}$, the mapping $F_T^{(r)}$ is the $\mathbb{P}^r$ interpolant of $F_T^{(e)}$.
The curved mesh $\mathcal{T}_h^{(r)}$ of order $r$ is composed of the elements ${T}^{(r)}:= F_T^{(r)}(\hat{T})$. ## Affine mesh $\mathcal{T}_h^{(1)}$ Let $\mathcal{T}_h^{(1)}$ be a polyhedral mesh of $\Omega$ made of simplices of dimension $d$ (triangles or tetrahedra), it is chosen as quasi-uniform and hence shape-regular (see [@quasi-unif definition 4.4.13]). Define the mesh size $h:= \max\{\mathrm{diam}(T); \; T \in \mathcal{T}_h^{(1)}\}$, where $\mathrm{diam}(T)$ is the diameter of $T$. The mesh domain is denoted by $\Omega_h^{(1)}:= \cup_{T\in \mathcal{T}_h^{(1)}}T$. Its boundary denoted by $\Gamma_h^{(1)} :=\partial \Omega_h^{(1)}$ is composed of $(d-1)$-dimensional simplices that form a mesh of $\Gamma = \partial \Omega$. The vertices of $\Gamma_h^{(1)}$ are assumed to lie on $\Gamma$. For $T \in \mathcal{T}_h^{(1)}$, we define the affine function that maps the reference element onto $T$, $$F_T:~ \hat{x}\in \hat{T}\longmapsto F_T(\hat{x}) = B_T \hat{x}+ b_T \in T:=F_T(\hat{T}).$$ **Remark 1**. *For a sufficiently small mesh size $h$, the mesh boundary satisfies $\Gamma_h^{(1)} \subset \mathcal{U}_\Gamma$, where $\mathcal{U}_{\Gamma}$ is the tubular neighborhood given in Proposition [Proposition 1](#tub_neigh_orth_proj_prop){reference-type="ref" reference="tub_neigh_orth_proj_prop"}. This guarantees that the orthogonal projection $b: \Gamma_h^{(1)}\rightarrow \Gamma$ is one-to-one, which is required for the construction of the exact mesh.* ## Exact mesh $\mathcal{T}_h^{(e)}$ In the 1970's, Scott gave an explicit construction of an exact triangulation in two dimensions in [@scott], which was afterwards generalised by Lenoir in [@Lenoir1986] (see also [@elliott §4] and [@ed §3.2]). The present definition of an exact transformation $F_T^{(e)}$ combines the definitions found in [@Lenoir1986; @scott] with the projection $b$ as used in [@dubois].
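As a side illustration (a sketch of ours, not part of the paper), the affine transformation $F_T(\hat{x}) = B_T \hat{x} + b_T$ above and the element diameters entering the mesh size $h$ can be written down in a few lines:

```python
import numpy as np

def affine_map(v0, v1, v2):
    """Affine map F_T(x_hat) = B_T x_hat + b_T sending the reference triangle
    conv{(0,0), (1,0), (0,1)} onto the triangle with vertices v0, v1, v2."""
    B_T = np.column_stack([v1 - v0, v2 - v0])
    b_T = v0
    return lambda x_hat: B_T @ x_hat + b_T

def diam(vertices):
    # diameter of a simplex: largest pairwise distance between its vertices
    return max(np.linalg.norm(p - q) for p in vertices for q in vertices)

v0, v1, v2 = np.array([0., 0.]), np.array([1., 0.]), np.array([0.5, 1.])
F_T = affine_map(v0, v1, v2)
assert np.allclose(F_T(np.array([0., 1.])), v2)  # reference vertices map to v0, v1, v2
h_T = diam([v0, v1, v2])       # contribution of T to the mesh size h = max_T diam(T)
```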
Let us first point out that for a sufficiently small mesh size $h$, a mesh element $T$ cannot have $d+1$ vertices on the boundary $\Gamma$, due to the quasi-uniform assumption imposed on the mesh $\mathcal{T}_h^{(1)}$. A mesh element is said to be an internal element if it has at most one vertex on the boundary $\Gamma$. **Definition 2**. *Let $T\in\mathcal{T}_h^{(1)}$ be a non-internal element (having at least 2 vertices on the boundary). Denote by $v_i = F_T(\hat{v}_i)$ its vertices, where $\hat{v}_i$ are the vertices of $\hat{T}$. We define $\varepsilon_i=1$ if $v_i\in \Gamma$ and $\varepsilon_i=0$ otherwise. To $\hat{x}\in \hat{T}$ are associated its barycentric coordinates $\lambda_i$ relative to the vertices $\hat{v}_i$ of $\hat{T}$ and $\lambda^*(\hat{x}):= \sum_{i=1}^{d+1} \varepsilon_i \lambda_i$ (shortly denoted by $\lambda^*$). Finally, we define $\hat{\sigma} : = \{ \hat{x}\in \hat{T}; \lambda^*(\hat{x}) = 0 \}$ and the function $\hat{y}:= \dfrac{1}{\lambda^*}\sum_{i=1}^{d+1} \varepsilon_i \lambda_i\hat{v}_i\in\hat{T}$, which is well defined on $\hat{T}\backslash\hat{\sigma}$.* Consider a non-internal mesh element $T \in \mathcal{T}_h^{(1)}$ and the affine transformation $F_T$. In the two dimensional case, $F_T(\hat{\sigma})$ will consist of the only vertex of $T$ that is not on the boundary $\Gamma$. In the three dimensional case, the tetrahedron $T$ has either 2 or 3 vertices on the boundary. In the first case, $F_T(\hat{\sigma})$ is the edge of $T$ joining its two internal vertices. In the second case, $F_T(\hat{\sigma})$ is the only internal vertex of $T$. **Definition 3**.
*We denote by $\mathcal{T}_h^{(e)}$ the mesh consisting of all exact elements ${T}^{(e)}=F_T^{(e)}(\hat{T})$, where $F_T^{(e)}= F_T$ for all internal elements, while for non-internal elements $F_T^{(e)}$ is given by, $$\label{eq:def-fte} \begin{array}[t]{lrcl}F_T^{(e)} :&\hat{T} &\longrightarrow &{T}^{(e)}:=F_T^{(e)}( \hat{T}) \\& \hat{x}& \longmapsto &\displaystyle F_T^{(e)}( \hat{x}) := \left\lbrace \begin{array}{ll} x & {\rm if } \, \hat{x}\in \hat{\sigma}, \\ x+(\lambda^*)^{r+2} ( b(y) - y) & {\rm if } \, \hat{x}\in \hat{T}\backslash\hat{\sigma}, \end{array} \right. \end{array}$$ with $x = F_T( \hat{x})$ and $y = F_T( \hat{y})$. It has been proven in [@elliott] that $F_T^{(e)}$ is a $\mathcal{C}^1$-diffeomorphism and $\mathcal{C}^{r+1}$ regular on $\hat{T}$.* **Remark 2**. *For $x\in T \cap \Gamma_h$, we have that $\lambda^*= 1$ and so $y = x$, inducing that $F_T^{(e)}( \hat{x}) = b(x)$. Then $F_T^{(e)}\circ F_T^{-1} = b$ on $T \cap \Gamma_h$.* ## Curved mesh $\mathcal{T}_h^{(r)}$ of order $r$ {#sub-sec:ftr} The exact mapping $F_T^{(e)}$, defined in [\[eq:def-fte\]](#eq:def-fte){reference-type="eqref" reference="eq:def-fte"}, is interpolated as a polynomial of order $r \ge 1$ in the classical $\mathbb{P}^r$-Lagrange basis on $\hat{T}$. The interpolant is denoted by $F_T^{(r)}$, which is a $\mathcal{C}^1$-diffeomorphism and is in $\mathcal{C}^{r+1}(\hat{T})$ (see [@PHcia chap. 4.3]). For more exhaustive details and properties of this transformation, we refer to [@elliott; @ciaravtransf; @PHcia]. Note that, by definition, $F_T^{(r)}$ and $F_T^{(e)}$ coincide on all $\mathbb{P}^r$-Lagrange nodes. The curved mesh of order $r$ is $\mathcal{T}_h^{(r)}:= \{ {T}^{(r)}; T \in \mathcal{T}_h^{(1)}\}$, $\Omega_h^{(r)}:= \cup_{{T}^{(r)}\in \mathcal{T}_h^{(r)}}{T}^{(r)}$ is the mesh domain and $\Gamma_h^{(r)}:= \partial \Omega_h^{(r)}$ is its boundary. # Functional lift {#section-lift} We recall that $r \ge 1$ is the geometrical order of the curved mesh.
With the help of the aforementioned transformations, we define *lifts* to transform a function on a domain $\Omega_h^{(r)}$ or $\Gamma_h^{(r)}$ into a function defined on $\Omega$ or $\Gamma$ respectively, in order to compare the numerical solutions to the exact one. We recall that the idea of lifting a function from the discrete domain onto the continuous one was already treated and discussed in many articles dating back to the 1970's, like [@nedelec; @scott; @Lenoir1986; @Bernardi1989] and others. Surface lifts were first introduced in 1988 by Dziuk in [@Dz88], to the extent of our knowledge, and discussed in more detail, with applications, by Demlow in many of his articles (see [@D1; @D2; @D3; @D4]). ## Surface and volume lift definitions {#sec:lift-def-surf-vol} **Definition 4** (Surface lift). *Let $u_h\in {\rm L}^2(\Gamma_h^{(r)})$. The surface lift $u_h^L\in {\rm L}^2(\Gamma)$ associated to $u_h$ is defined by, $$u_h^L\circ b := u_h,$$ where $b: \Gamma_h^{(r)}\rightarrow \Gamma$ is the orthogonal projection, defined in Proposition [Proposition 1](#tub_neigh_orth_proj_prop){reference-type="ref" reference="tub_neigh_orth_proj_prop"}. Likewise, to $u \in \mathrm{L}^2(\Gamma)$ is associated its inverse lift $u^{-L}$ given by, $u^{-L} := u \circ b \in \mathrm{L}^2(\Gamma_h^{(r)}).$* The use of the orthogonal projection $b$ to define the surface lift is natural since $b$ is well defined on the tubular neighborhood $\mathcal{U}_{\Gamma}$ of $\Gamma$ (see Proposition [Proposition 1](#tub_neigh_orth_proj_prop){reference-type="ref" reference="tub_neigh_orth_proj_prop"}) and hence on $\Gamma_h^{(r)}\subset \mathcal{U}_{\Gamma}$ for a sufficiently small mesh size $h$.
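To fix ideas, here is a minimal numerical sketch (ours, not from the paper) of the projection $b$ and of the inverse lift $u^{-L} = u \circ b$ in the model case of the unit disc, where $\mathrm{d}(x) = |x| - 1$ and hence $b(x) = x - \mathrm{d}(x)\nabla\mathrm{d}(x) = x/|x|$:

```python
import numpy as np

def b(x):
    # orthogonal projection onto the unit circle: b(x) = x - d(x) grad d(x) = x/|x|
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# a coarse polygonal boundary Gamma_h: vertices on Gamma, edge midpoints off Gamma
t = np.linspace(0.0, 2*np.pi, 9)[:-1]
verts = np.column_stack([np.cos(t), np.sin(t)])
mids = 0.5 * (verts + np.roll(verts, -1, axis=0))   # points of Gamma_h not on Gamma

u = lambda p: p[:, 0]**2 - p[:, 1]   # a function defined on Gamma
u_inv_lift = u(b(mids))              # inverse lift u^{-L} = u ∘ b, defined on Gamma_h

# b(mids) lies on Gamma, so the inverse lift only evaluates u on the exact boundary
assert np.allclose(np.linalg.norm(b(mids), axis=1), 1.0)
```

The same mechanism, with $b$ replaced by $G_h^{(r)}$, underlies the volume lift defined next.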
To define a volume lift, using the notations in Definition [Definition 2](#def:sigma-lambdaetoile-haty){reference-type="ref" reference="def:sigma-lambdaetoile-haty"}, we introduce the transformation $G_h^{(r)}:~\Omega_h^{(r)}\rightarrow \Omega$ (see figure [\[fig:Gh2\]](#fig:Gh2){reference-type="ref" reference="fig:Gh2"}) given piecewise for all ${T}^{(r)}\in \mathcal{T}_h^{(r)}$ by, $$\label{eq:def-fter} {G_h^{(r)}}_{|_{{T}^{(r)}}} \!\! := F_{T^{(r)}}^{(e)}\circ ({F_T^{(r)}})^{-1}, \; F_{T^{(r)}}^{(e)}( \hat{x}) \!:=\! \left\lbrace \begin{array}{ll} \!\! x & {\rm if } \, \hat{x}\in \hat{\sigma}\\ \!\! x+(\lambda^*)^{r+2} ( b(y) - y) & {\rm if } \, \hat{x}\in \hat{T}\backslash\hat{\sigma} \end{array} \right.,$$ with $x := F_T^{(r)}( \hat{x})$ and $y := F_T^{(r)}( \hat{y})$ (see figure [\[fig:haty\]](#fig:haty){reference-type="ref" reference="fig:haty"} for the affine case). Notice that this implies that ${G_h^{(r)}}_{|_{{T}^{(r)}}} = id_{|_{{T}^{(r)}}}$, for any internal mesh element ${T}^{(r)}~\in~\mathcal{T}_h^{(r)}$. Note that, by construction, $G_h^{(r)}$ is globally continuous and piecewise differentiable on each mesh element. For the remainder of this article, the following notations are crucial: $\mathrm{D}G_h^{(r)}$ denotes the differential of $G_h^{(r)}$, $(\mathrm{D}{G_h^{(r)}})^t$ is its transpose and $J_h$ is its Jacobian. **Definition 5** (Volume lift). *Let $u_h\in {\rm L}^2(\Omega_h^{(r)})$. We define the volume lift associated to $u_h$, denoted $u_h^\ell \in {\rm L}^2(\Omega)$, by, $$u_h^\ell\circ G_h^{(r)}:= u_h.$$ In a similar way, to $u\in {\rm L}^2(\Omega)$ is associated its inverse lift $u^{-\ell}\in \mathrm{L}^2(\Omega_h^{(r)})$ given by $u^{-\ell} :=u \circ G_h^{(r)}.$* **Proposition 2**. *The volume and surface lifts coincide on $\Gamma_h^{(r)}$, $$\forall ~ u_h \in {\rm H}^1(\Omega_h^{(r)}), \quad \left ( {\rm Tr} ~u_h\right )^L = {\rm Tr} (u_h^\ell).$$ Consequently, the surface lift $v_h^L$ (resp.
the inverse lift $v^{-L}$) will now be simply denoted by $v_h^\ell$ (resp. $v^{-\ell}$).* *Proof.* Taking $x\in {T}^{(r)}\cap\Gamma_h^{(r)}$, $\hat{x}= (F_T^{(r)})^{-1}(x)$ satisfies $\lambda^*=1$ and so $\hat{y}=\hat{x}$ and $y=x$. Thus $F_{T^{(r)}}^{(e)}( \hat{x}) = b(x)$, in other words, $$G_h^{(r)}(x)=F_{T^{(r)}}^{(e)}\circ( F_T^{(r)})^{-1}(x) = b (x), \ \ \ \ \ \forall \ x \in {T}^{(r)}\cap \Gamma_h^{(r)}.$$ ◻ **Proposition 3**. *Let ${T}^{(r)}\in \mathcal{T}_h^{(r)}$. Then the mapping ${G_h^{(r)}}_{|_{{T}^{(r)}}}$ is $\mathcal{C}^{r+1}({T}^{(r)})$ regular and a $\mathcal{C}^1$-diffeomorphism from ${T}^{(r)}$ onto ${T}^{(e)}$. Additionally, for a sufficiently small mesh size $h$, there exists a constant $c>0$, independent of $h$, such that, $$\label{ineq:Gh-Id_Jh-1} \forall \ x \in {T}^{(r)}, \ \ \ \ \| \mathrm{D}{G_h^{(r)}}(x) - \mathrm{Id}\| \le c h^r \qquad \mbox{ and } \qquad | J_h(x)- 1 | \le c h^r,$$ where $G_h^{(r)}$ is defined in [\[eq:def-fter\]](#eq:def-fter){reference-type="eqref" reference="eq:def-fter"} and $J_h$ is its Jacobian.* The full proof of this proposition is partially adapted from [@elliott] and has been detailed in Appendix [8](#appendix:proof-Ghr){reference-type="ref" reference="appendix:proof-Ghr"}. **Remark 3** (Lift regularity). *The lift transformation $G_h^{(r)}:~\Omega_h^{(r)}\rightarrow \Omega$ in ([\[eq:def-fter\]](#eq:def-fter){reference-type="ref" reference="eq:def-fter"}) involves the function, $$\label{rem-def-rho} \rho_{{T}^{(r)}}:~ \hat{x}\in \hat{T}\mapsto (\lambda^*)^{s} (b(y) - y),$$ with an exponent $s=r+2$ inherited from [@elliott]: this exponent value guarantees the $\mathcal{C}^{r+1}$ (piecewise) regularity of the function $G_h^{(r)}$.
However, decreasing that value to $s=2$ still ensures that $G_h^{(r)}$ is a (piecewise) $\mathcal{C}^{1}$-diffeomorphism and also that Inequalities ([\[ineq:Gh-Id_Jh-1\]](#ineq:Gh-Id_Jh-1){reference-type="ref" reference="ineq:Gh-Id_Jh-1"}) hold: this can be seen when examining the proof of Proposition [Proposition 3](#prop-Gh){reference-type="ref" reference="prop-Gh"} in Appendix [8](#appendix:proof-Ghr){reference-type="ref" reference="appendix:proof-Ghr"}. Consequently, the convergence theorem [Theorem 2](#th-error-bound){reference-type="ref" reference="th-error-bound"} still holds when setting $s=2$ in the definition of $\rho_{{T}^{(r)}}$.* **Remark 4** (Former lift definition). *The volume lift defined in Definition [Definition 5](#def:liftvolume){reference-type="ref" reference="def:liftvolume"} is an adaptation of the lift definition in [@elliott], which however does not fulfill the property of Proposition [Proposition 2](#prop:trace){reference-type="ref" reference="prop:trace"}. Precisely, in [@elliott], to $u_h \in \mathrm{H}^1 (\Omega_h^{(r)})$ is associated the lifted function $u_h^\ell\in \mathrm{H}^1 (\Omega)$, given by $u_h^\ell \circ G_h := u_h$, where $G_h:~ \Omega_h^{(r)}\rightarrow \Omega$ is defined piecewise, for each mesh element ${T}^{(r)}\in \mathcal{T}_h^{(r)}$, by ${G_h}_{|_{{T}^{(r)}}} := F_T^{(e)}\circ ({F_T^{(r)}})^{-1}$, where $T$ is the affine element relative to ${T}^{(r)}$, $F_T^{(e)}$ is defined in [\[eq:def-fte\]](#eq:def-fte){reference-type="eqref" reference="eq:def-fte"} and $F_T^{(r)}$ is its $\mathbb{P}^r$-Lagrange interpolant given in Section [3.3](#sub-sec:ftr){reference-type="ref" reference="sub-sec:ftr"}. However, this transformation does not coincide with the orthogonal projection $b$ on the mesh boundary $\Gamma_h^{(r)}$.
Indeed, since $F_T^{(e)}\circ F_T^{-1} = b$ on $T \cap \Gamma_h$ (see Remark [Remark 2](#rem:Fe_T){reference-type="ref" reference="rem:Fe_T"}), we have, $$G_h(x) = b \circ F_T\circ ({F_T^{(r)}})^{-1}(x) \ne b(x), \quad \forall \ x \in \Gamma_h^{(r)}\cap{T}^{(r)}.$$ Consequently in this case, $( {\rm Tr} ~u_h )^L \ne {\rm Tr} (u_h^\ell)$.* ## Lift of the variational formulation {#sub-sec:impact-lift} With the lift operator, one may express an integral over $\Gamma_h^{(r)}$ (resp. $\Omega_h^{(r)}$) in terms of one over $\Gamma$ (resp. $\Omega$), as will be discussed in this section. #### Surface integrals In this subsection, all results stated may be found alongside their proofs in [@D1; @demlow2019], but we recall some necessary information for the sake of completeness. For extensive details, we also refer to [@D2; @actanum; @Dz88]. Throughout the rest of the paper, $\, \mathrm{d}\sigma$ and $\, \mathrm{d}\sigma_h$ denote respectively the surface measures on $\Gamma$ and on $\Gamma_h^{(r)}$. Let $J_b$ be the Jacobian of the orthogonal projection $b$, defined in Proposition [Proposition 1](#tub_neigh_orth_proj_prop){reference-type="ref" reference="tub_neigh_orth_proj_prop"}, such that $\, \mathrm{d}\sigma(b(x))=J_b(x) \, \mathrm{d}\sigma_h(x)$, for all $x \in \Gamma_h^{(r)}$. Notice that $J_b$ is bounded independently of $h$ and its detailed expression may be found in [@D1; @D2]. Consider also the lift of $J_b$ given by $J_b^\ell\circ b = J_b$ (see Definition [Definition 4](#def:liftsurface){reference-type="ref" reference="def:liftsurface"}). Let $u_h, v_h \in \mathrm{H}^1 (\Gamma_h)$ with $u_h^\ell, v_h^\ell \in \mathrm{H}^1 (\Gamma)$ as their respective lifts. Then, one has, $$\label{pass_fct_scalaire_surface} \int_{\Gamma_h^{(r)}} u_h v_h \, \mathrm{d}\sigma_h = \int_{\Gamma} u^{\ell }_h v^{\ell }_h \frac{ \, \mathrm{d}\sigma}{J_b^\ell}.$$ A similar equation may be written with tangential gradients. We start by giving the following notations.
The outer unit normal vectors over $\Gamma$ and $\Gamma_h^{(r)}$ are respectively denoted by $\boldsymbol{\mathrm{n}}$ and $\boldsymbol{\mathrm{n}}_{hr}$. Denote by $P:= \mathrm{Id}-\boldsymbol{\mathrm{n}}\otimes \boldsymbol{\mathrm{n}}$ and $P_h := \mathrm{Id}-\boldsymbol{\mathrm{n}}_{hr}\otimes \boldsymbol{\mathrm{n}}_{hr}$ respectively the orthogonal projections over the tangential spaces of $\Gamma$ and $\Gamma_h^{(r)}$. Additionally, the Weingarten map $\mathcal{H}: \mathbb{R}^{d } \to \mathbb{R}^{d\times d}$ is given by $\mathcal{H}:=\mathrm{D}^2 \mathrm{d}$, where $\mathrm{d}$ is the signed distance function (see Proposition [Proposition 1](#tub_neigh_orth_proj_prop){reference-type="ref" reference="tub_neigh_orth_proj_prop"}). With the previous notations, we have, $$\nabla_{\Gamma_h}{v_h(x)}=P_h(I-\mathrm{d}\mathcal{H})P\nabla_{\Gamma}{v^{\ell }_h(b(x))}, \ \ \ \ \ \forall \ x \in \Gamma_h^{(r)}.$$ Using this equality, we may derive the following expression, $$\label{pass_grad_surface} \int_{\Gamma_h^{(r)}} \nabla_{\Gamma_h^{(r)}} u_h \cdot \nabla_{\Gamma_h^{(r)}} v_h \, \mathrm{d}\sigma_h = \int_{\Gamma} A_h^\ell\nabla_{\Gamma} u^{\ell }_h \cdot \nabla_{\Gamma} v^{\ell }_h \, \mathrm{d}\sigma,$$ where $A_h^\ell$ is the lift of the matrix $A_h$ given by, $$\label{eq:expression_Ah} A_h(x) := \frac{1}{J_b(x)}P(I-\mathrm{d}\mathcal{H})P_h(I-\mathrm{d}\mathcal{H})P(x), \ \ \ \ \ \forall \ x \in \Gamma_h^{(r)}.$$ #### Volume integrals Similarly, consider $u_h, v_h \in \mathrm{H}^1 (\Omega_h)$ and let $u_h^\ell, v_h^\ell \in \mathrm{H}^1 (\Omega)$ be their respective lifts (see Definition [Definition 5](#def:liftvolume){reference-type="ref" reference="def:liftvolume"}), we have, $$\label{pass_fct_scalaire_volume} \int_{\Omega_h}u_h v_h \, \mathrm{d}x = \int_\Omega u_h^\ell v_h^\ell \frac{\, \mathrm{d}y}{J_h^\ell},$$ where $J_h$ denotes the Jacobian of $G_h^{(r)}$ and $J_h^\ell$ is its lift given by $J_h^\ell\circ G_h^{(r)}=
J_h$. Additionally, the gradient can be written as follows: for any $x \in \Omega_h^{(r)}$, $$\nabla v_h(x) =\nabla( v_h^\ell \circ {G_h^{(r)}})(x)= {^\mathsf{T}}\mathrm{D}{G_h^{(r)}}(x) ( \nabla v_h^\ell)\circ{(G_h^{(r)}(x))}.$$ Using the change of variables $z=G_h^{(r)}(x) \in \Omega$, one has, $(\nabla v_h)^\ell(z) = {^\mathsf{T}} \mathrm{D}{G_h^{(r)}}(x) \nabla v_h^\ell{(z)}, \ \mbox{ where } x = (G_h^{(r)})^{-1}(z).$ Finally, introducing the notation, $$\label{eq:matrice-Ghr} \mathcal{G}_h^{(r)}(z) := {^\mathsf{T}}\mathrm{D}{G_h^{(r)}}(x),$$ one has, $$\label{pass_grad_volume} \int_{\Omega^{(r)}_h} \nabla u_h \cdot \nabla v_h \, \mathrm{d}x = \int_{\Omega} \mathcal{G}_h^{(r)}(\nabla u_h^\ell) \cdot \mathcal{G}_h^{(r)}(\nabla v_h^\ell) \frac{\, \mathrm{d}x}{J_h^\ell}.$$ ## Useful estimates #### Surface estimates We recall two important estimates proved in [@D1]. There exists a constant $c>0$ independent of $h$ such that, $$\label{ineq:AhJh} \|A_h^\ell-P\|_{\mathrm{L}^\infty(\Gamma)} \le c h^{r+1} \qquad \mbox{ and } \qquad \left\| 1-\frac{1}{J_b^\ell} \right\|_{\mathrm{L}^\infty(\Gamma)} \le ch^{r+1},$$ where $A_h^\ell$ is the lift of $A_h$ defined in [\[eq:expression_Ah\]](#eq:expression_Ah){reference-type="eqref" reference="eq:expression_Ah"} and $J_b$ is the Jacobian of the orthogonal projection $b$. #### Volume estimates A direct consequence of [Proposition 3](#prop-Gh){reference-type="ref" reference="prop-Gh"} is that both $\mathrm{D}G_h^{(r)}$ and $J_h$ are bounded on every ${T}^{(r)}\in \mathcal{T}_h^{(r)}$. Consequently, by Definition [Definition 5](#def:liftvolume){reference-type="ref" reference="def:liftvolume"} of the lift, both $\mathcal{G}_h^{(r)}$ and $J_h^\ell$ are also bounded on ${T}^{(e)}$.
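The tangential projection $P = \mathrm{Id}-\boldsymbol{\mathrm{n}}\otimes \boldsymbol{\mathrm{n}}$ appearing in the surface estimates satisfies the algebraic identities $P^2 = P$, $P\boldsymbol{\mathrm{n}} = 0$ and $P = {}^\mathsf{T}P$, as well as $P\nabla_\Gamma = \nabla_\Gamma$, which is used below to bound the surface terms. These identities can be checked with a minimal standalone numpy sketch (an illustration only, not part of the discretisation):

```python
import numpy as np

# Unit outward normal at a sample point of a smooth surface
# (here: a point on the unit sphere, so n is the point itself).
n = np.array([1.0, 2.0, 2.0]) / 3.0   # |n| = 1

# Tangential projection P = Id - n (x) n, as in the text.
P = np.eye(3) - np.outer(n, n)

assert np.allclose(P @ P, P)          # idempotent: P^2 = P
assert np.allclose(P @ n, 0.0)        # annihilates the normal direction
assert np.allclose(P, P.T)            # symmetric (orthogonal projection)

# A tangential vector (anything already projected by P) is left
# unchanged by P -- the identity P grad_Gamma = grad_Gamma.
g = np.array([0.3, -1.2, 0.7])        # arbitrary ambient vector
tangential = P @ g
assert np.allclose(P @ tangential, tangential)
print("projection identities verified")
```

The same identities hold verbatim for the discrete projection $P_h$ built from $\boldsymbol{\mathrm{n}}_{hr}$.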
Additionally, the inequalities [\[ineq:Gh-Id_Jh-1\]](#ineq:Gh-Id_Jh-1){reference-type="eqref" reference="ineq:Gh-Id_Jh-1"} will not be used directly in the error estimates of Section [6](#ERROR-section){reference-type="ref" reference="ERROR-section"}; the following inequalities will be used instead, $$\label{ineq:Ghr-Id_1/Jh-1} \forall \ x \in {T}^{(e)}, \ \ \ \ \| \mathcal{G}_h^{(r)}(x) - \mathrm{Id}\| \le c h^r \qquad \mbox{ and } \qquad \left| \frac{1}{J_h^\ell(x)}- 1 \right| \le c h^r,$$ where $\mathcal{G}_h^{(r)}$ is given in [\[eq:matrice-Ghr\]](#eq:matrice-Ghr){reference-type="eqref" reference="eq:matrice-Ghr"}. These inequalities follow by applying the lift to the inequalities [\[ineq:Gh-Id_Jh-1\]](#ineq:Gh-Id_Jh-1){reference-type="eqref" reference="ineq:Gh-Id_Jh-1"}. **Remark 5**. *Let us emphasize that the $\mathrm{H}^{m}$-norms over $\Omega_h$ (resp. $\Gamma_h$) and the $\mathrm{H}^{m}$-norms over $\Omega$ (resp. $\Gamma$) are equivalent for $m=0,1$. Let $v_h \in \mathrm{H}^1 (\Omega_h,\Gamma_h)$ and let $v_h^\ell \in \mathrm{H}^1 (\Omega,\Gamma)$ be its lift. Then, for $m=0, 1$, there exist strictly positive constants independent of $h$ such that, $$\begin{array}{rcccl} c_1 \|v_h^\ell\|_{\mathrm{H}^{m}(\Omega)} &\le& \|v_h\|_{\mathrm{H}^{m}(\Omega_h)} &\le& c_2 \|v_h^\ell\|_{\mathrm{H}^{m}(\Omega)}, \\ c_3 \| v_h^\ell\|_{\mathrm{H}^{m}(\Gamma)} &\le& \| v_h\|_{\mathrm{H}^{m}(\Gamma_h)} &\le& c_4 \| v_h^\ell\|_{\mathrm{H}^{m}(\Gamma)}. \end{array}$$ The surface estimates (second line) are proved in [@D1]. As for the volume estimates (first line), one may prove them using the equations [\[pass_fct_scalaire_volume\]](#pass_fct_scalaire_volume){reference-type="eqref" reference="pass_fct_scalaire_volume"} and [\[pass_grad_volume\]](#pass_grad_volume){reference-type="eqref" reference="pass_grad_volume"}.
They hold since $J_h$ and $\mathrm{D}G_h^{(r)}$ (respectively $\frac{1}{J_h^\ell}$ and $\mathcal{G}_h^{(r)}$) are bounded on ${T}^{(r)}$ (resp. ${T}^{(e)}$), as a consequence of [Proposition 3](#prop-Gh){reference-type="ref" reference="prop-Gh"} and the inequalities [\[ineq:Ghr-Id_1/Jh-1\]](#ineq:Ghr-Id_1/Jh-1){reference-type="eqref" reference="ineq:Ghr-Id_1/Jh-1"}.* # Finite element approximation {#FE-section} In this section, we present the finite element approximation of problem [\[1\]](#1){reference-type="eqref" reference="1"} using $\mathbb{P}^k$-Lagrange finite elements. We refer to [@EG; @PHcia] for more details on finite element methods. ## Finite element spaces and interpolant definition {#subsec:fe-space-interpolant} Let $k \geq 1$. Given a curved mesh $\mathcal{T}_h^{(r)}$, the $\mathbb{P}^k$-Lagrange finite element space is given by, $$\mathbb{V}_h:= \{ \chi \in C^0(\Omega_h^{(r)}); \ \chi_{|_T}= \hat{\chi} \circ (F_T^{(r)})^{-1} , \ \hat{\chi} \in \mathbb{P}^k(\hat{T}), \ \forall \ T \in \mathcal{T}_h^{(r)}\}.$$ Let the $\mathbb{P}^r$-Lagrange interpolation operator be denoted by $\mathcal{I}^{(r)}: v \in \mathcal{C}^0 (\Omega_h^{(r)}) \mapsto \mathcal{I}^{(r)}(v) \in \mathbb{V}_h$. The lifted finite element space (see Section [4.1](#sec:lift-def-surf-vol){reference-type="ref" reference="sec:lift-def-surf-vol"} for the definition of the lift) is defined by, $$\mathbb{V}_h^\ell:= \{ v_h^\ell; \ v_h \in \mathbb{V}_h\},$$ and the associated lifted interpolation operator $\mathcal{I}^\ell$ is given by, $$\label{def:interpolation-op-lifte} \begin{array}[t]{lrcl}\mathcal{I}^\ell :&\mathcal{C}^0 ({\Omega}) &\longrightarrow &\mathbb{V}_h^\ell\\&v& \longmapsto &\mathcal{I}^\ell(v) := \big( \mathcal{I}^{(r)}(v^{-\ell}) \big)^\ell.
\end{array} %= \big( \Ihr (v \circ \Ghr ) \big) \circ {\Ghr}^{-1} }$$ Notice that, since $\Omega$ is an open subset of $\mathbb{R}^2$ or $\mathbb{R}^3$, we have the Sobolev embedding $\mathrm{H}^{k+1}(\Omega) \hookrightarrow \mathcal{C}^0 (\Omega)$. Thus, any function $w \in \mathrm{H}^{k+1}(\Omega)$ may be associated with an interpolant $\mathcal{I}^\ell(w) \in \mathbb{V}_h^\ell$. The lifted interpolation operator plays a key role in the error analysis: the following interpolation inequality accounts for the finite element contribution to the error estimates. **Proposition 4**. *Let $v \in \mathrm{H}^{k+1} (\Omega,\Gamma)$ and $2 \le m \le k+1$. There exists a constant $c>0$ independent of $h$ such that the interpolation operator $\mathcal{I}^\ell$ satisfies the following inequality, $$\|v-\mathcal{I}^\ell v\|_{\mathrm{L}^2(\Omega,\Gamma)} + h \|v-\mathcal{I}^\ell v\|_{\mathrm{H}^1(\Omega,\Gamma)} \le c h^{m} \|v\|_{\mathrm{H}^{m} (\Omega,\Gamma)}. %\ \ \ \ \forall \ v \in \Hk1 \omgam.$$* *Proof.* Using the norm equivalence in Remark [Remark 5](#rem:norm-equiv){reference-type="ref" reference="rem:norm-equiv"}, this inequality follows from standard interpolation theory (see [@Bernardi1989 Corollary 4.1] for norms over $\Omega$ and  [@D1; @D2] for norms over $\Gamma$). Indeed, for $v \in \mathrm{H}^{k+1} (\Omega,\Gamma)$ and $s=0, 1$, we have, $$\begin{aligned} \| v - \mathcal{I}^\ell v \|_{\mathrm{H}^{s}(\Omega,\Gamma)} & = \left\| (v^{-\ell})^\ell - \big( \mathcal{I}^{(r)}(v^{-\ell}) \big)^\ell\right\|_{\mathrm{H}^{s}(\Omega,\Gamma)} \le c \left\| v^{-\ell} - \mathcal{I}^{(r)}(v^{-\ell}) \right\|_{\mathrm{H}^{s} (\Omega_h^{(r)},\Gamma_h^{(r)})} \\ & \le c h^{m-s} \| v^{-\ell} \|_{\mathrm{H}^{m} (\Omega_h^{(r)},\Gamma_h^{(r)})} \le c h^{m-s} \| v \|_{\mathrm{H}^{m} (\Omega,\Gamma)},\end{aligned}$$ for a constant $c>0$ independent of $h$.
◻ ## Finite element formulation From now on, to simplify notation, we write $\Omega_h$ and $\Gamma_h$ for $\Omega_h^{(r)}$ and $\Gamma_h^{(r)}$, for any geometrical order $r\ge 1$. #### Discrete formulation Given the right-hand side data $f \in \mathrm{L}^2(\Omega)$ and $g \in \mathrm{L}^2(\Gamma)$ of Problem [\[1\]](#1){reference-type="eqref" reference="1"}, we define (following [@elliott; @D1]) the linear form $l_h$ on $\mathbb{V}_h$ by, $$l_h(v_h) := \int_{\Omega _h} v_h f^{-\ell}J_h\, \mathrm{d}x + \int_{\Gamma_h} v_h g^{-\ell}J_b\, \mathrm{d}\sigma_h,$$ where $J_h$ (resp. $J_b$) is the Jacobian of $G_h^{(r)}$ (resp. of the orthogonal projection $b$). With this definition, $l_h(v_h)=l(v_h^\ell)$, for any $v_h \in \mathbb{V}_h$, where $l$ is the right-hand side in the formulation [\[fv_faible\]](#fv_faible){reference-type="eqref" reference="fv_faible"}. The approximation problem is to find $u_h \in \mathbb{V}_h$ such that, $$\label{fvh} a_h(u_h,v_h) = l_h(v_h), \ \ \ \ \ \forall \ v_h \in \mathbb{V}_h,$$ where $a_h$ is the bilinear form defined on $\mathbb{V}_h\times \mathbb{V}_h$ by, $$\begin{aligned} a_h(u_h,v_h) &:= \int_{\Omega _h} \nabla u_h \cdot \nabla v_h \, \mathrm{d}x + \kappa \int_{\Omega _h} u_h v_h \, \mathrm{d}x \\ & +\beta \int_{\Gamma_h} \nabla_{\Gamma_h} u_h \cdot \nabla_{\Gamma_h} v_h \, \mathrm{d}\sigma_h + \alpha \int_{\Gamma_h} u_h v_h \, \mathrm{d}\sigma_h.\end{aligned}$$ **Remark 6**.
*Since $a_h$ is a symmetric positive-definite bilinear form on a finite-dimensional space, there exists a unique solution $u_h \in \mathbb{V}_h$ to the discrete problem [\[fvh\]](#fvh){reference-type="eqref" reference="fvh"}.* #### Lifted discrete formulation We define the lifted bilinear form $a_h^\ell$ on $\mathbb{V}_h^\ell\times \mathbb{V}_h^\ell$ by, $$a^\ell_h(u^\ell_h,v^\ell_h)=a_h(u_h,v_h) \quad \mbox{ for } u_h, v_h \in \mathbb{V}_h.$$ Applying [\[pass_grad_volume\]](#pass_grad_volume){reference-type="eqref" reference="pass_grad_volume"}, [\[pass_fct_scalaire_volume\]](#pass_fct_scalaire_volume){reference-type="eqref" reference="pass_fct_scalaire_volume"}, [\[pass_grad_surface\]](#pass_grad_surface){reference-type="eqref" reference="pass_grad_surface"} and [\[pass_fct_scalaire_surface\]](#pass_fct_scalaire_surface){reference-type="eqref" reference="pass_fct_scalaire_surface"}, its expression is given by, $$\begin{gathered} a_h^\ell(u^\ell_h,v^\ell_h) = \int_{\Omega} \mathcal{G}_h^{(r)}(\nabla u_h^\ell) \cdot \mathcal{G}_h^{(r)}(\nabla v_h^\ell) \frac{\, \mathrm{d}x}{J_h^\ell}+\beta \int_{\Gamma} A^{\ell }_h \nabla_{\Gamma} u^{\ell }_h \cdot \nabla_{\Gamma} v^{\ell }_h \, \mathrm{d}\sigma \\ + \kappa \int_{\Omega} u_h^\ell v_h^\ell \frac{\, \mathrm{d}x}{J_h^\ell} + \alpha \int_{\Gamma} u_h^\ell v_h^\ell \frac{\, \mathrm{d}\sigma}{J_b^\ell}.\end{gathered}$$ Keeping in mind that $u$ is the solution of [\[fv_faible\]](#fv_faible){reference-type="eqref" reference="fv_faible"} and $u_h^\ell$ is the lift of the solution of [\[fvh\]](#fvh){reference-type="eqref" reference="fvh"}, for any $v_h^\ell \in \mathbb{V}_h^\ell\subset \mathrm{H}^1 (\Omega,\Gamma)$, we notice that, $$\label{rem:a=ahell-For-vhell} a(u,v_h^\ell) = l(v_h^\ell)= l_h(v_h)= a_h(u_h,v_h) = a^\ell_h(u^\ell_h,v^\ell_h).$$ Using the previous points, we can also define the lifted formulation of the discrete problem [\[fvh\]](#fvh){reference-type="eqref" reference="fvh"} by:
find $u_h^\ell \in \mathbb{V}_h^\ell$ such that, $$%\label{fvhlifte} a_h^\ell(u^\ell_h,v^\ell_h)= l(v_h^\ell), \ \ \ \ \ \forall \ v_h^\ell \ \in \mathbb{V}_h^\ell.$$ # Error analysis {#ERROR-section} Throughout this section, $c$ refers to a positive constant independent of the mesh size $h$. From now on, the domain $\Omega$ is assumed to be at least $\mathcal{C}^{k+1}$ regular, and the source terms in problem [\[1\]](#1){reference-type="eqref" reference="1"} are assumed to be more regular: $f \in \mathrm{H}^{k-1}(\Omega)$ and $g \in \mathrm{H}^{k-1}(\Gamma)$. Then, according to [@ventcel1 Theorem 3.4], the exact solution $u$ of Problem [\[1\]](#1){reference-type="eqref" reference="1"} is in $\mathrm{H}^{k+1}(\Omega,\Gamma)$. Our goal in this section is to prove the following theorem. **Theorem 2**. *Let $u \in \mathrm{H}^{k+1}(\Omega,\Gamma)$ be the solution of the variational problem [\[fv_faible\]](#fv_faible){reference-type="eqref" reference="fv_faible"} and $u_h \in \mathbb{V}_h$ be the solution of the finite element formulation [\[fvh\]](#fvh){reference-type="eqref" reference="fvh"}. There exists a constant $c > 0$, independent of $h$, such that, $$\label{errh1_errl2} \|u-u_h^\ell \|_{\mathrm{H}^1(\Omega, \Gamma) } \le c ( h^k + h^{r+1/2}) \quad \mbox{ and } \quad \|u-u_h^\ell \|_{\mathrm{L}^2(\Omega, \Gamma) } \le c ( h^{k+1} + h^{r+1}),$$ where $u_h^\ell \in \mathbb{V}_h^\ell$ denotes the lift of $u_h$ onto $\Omega$, given in Definition [Definition 5](#def:liftvolume){reference-type="ref" reference="def:liftvolume"}.* The overall error in this theorem is the sum of two contributions: the geometric error and the finite element error. To prove these error bounds, we proceed as follows: 1. estimate the geometric error: we bound the difference between the exact bilinear form $a$ and the lifted bilinear form $a_h^\ell$; 2.
bound the $\mathrm{H}^1$ error using the geometric and interpolation error estimates, proving the first inequality of [\[errh1_errl2\]](#errh1_errl2){reference-type="eqref" reference="errh1_errl2"}; 3. use an Aubin-Nitsche duality argument to prove the second inequality of [\[errh1_errl2\]](#errh1_errl2){reference-type="eqref" reference="errh1_errl2"}. ## Geometric error First of all, we introduce $B_h^\ell \subset \Omega$ as the union of all the non-internal elements of the exact mesh $\mathcal{T}_h^{(e)}$, $$B_h^\ell= \bigcup \ \{ \ {T}^{(e)}\in \mathcal{T}_h^{(e)}; \ {T}^{(e)}\, \mbox{has at least two vertices on } \Gamma \}. %\ftre \circ (\ftr)^{-1}(\tr) = \te \}.$$ Note that, by definition of $B_h^\ell$, we have, $$\label{JDG} \frac{1}{J_h^\ell}-1 =0 \ \ \ \mbox{and} \ \ \ \mathcal{G}_h^{(r)}- \mathrm{Id}= 0 \ \ \ \mbox{in} \ \Omega \backslash B_h^\ell.$$ The following corollary involving $B_h^\ell$ is a direct consequence of [@elliott Lemma 4.10] or [@Grisvard2011 Theorem 1.5.1.10]. **Corollary 1**. *Let $v \in \mathrm{H}^1(\Omega)$ and $w \in \mathrm{H}^2(\Omega)$. Then, for a sufficiently small $h$, there exists $c>0$ such that the following inequalities hold, $$%\label{h1/2}\label{blh} \label{h1/2_blh} \|v\|_{\mathrm{L}^2(B_h^\ell)} \le c h^{1/2} \|v\|_{\mathrm{H}^1(\Omega)} \qquad \mbox{and} \qquad \|w\|_{\mathrm{H}^1(B_h^\ell)} \le c h^{1/2} \|w\|_{\mathrm{H}^2(\Omega)}.$$* The difference between $a$ and $a_h^\ell$, referred to as the geometric error, is evaluated in the following proposition. **Proposition 5**. *Consider $v, w \in \mathbb{V}_h^\ell$. There exists $c>0$ such that the following geometric error estimate holds, $$\label{ineq:a-al2} |a(v,w)-a_h^\ell(v,w)| \le c h^r \|\nabla v\|_{\mathrm{L}^2(B_h^\ell)}\|\nabla w\|_{\mathrm{L}^2(B_h^\ell)} + ch^{r+1}\|v\|_{\mathrm{H}^1 (\Omega,\Gamma)}\|w\|_{\mathrm{H}^1 (\Omega,\Gamma)}. %\qquad \forall \, v, w \in \Vhlifte.$$* The following proof is inspired by [@elliott Lemma 6.2].
The main difference is the use of the modified lift given in Definition [Definition 5](#def:liftvolume){reference-type="ref" reference="def:liftvolume"} and the corresponding transformation $G_h^{(r)}$, alongside its associated matrix $\mathcal{G}_h^{(r)}$ defined in [\[eq:matrice-Ghr\]](#eq:matrice-Ghr){reference-type="eqref" reference="eq:matrice-Ghr"}, which leads to several changes in the proof. *Proof.* Let $v, w \in \mathbb{V}_h^\ell$. By the definitions of the bilinear forms $a$ and $a_h^\ell$, we have, $$|a(v,w)-a_h^\ell(v,w)| \le a_1(v,w) +\kappa a_2(v,w) +\beta a_3(v,w)+ \alpha a_4(v,w),$$ where the terms $a_i$, defined on ${\mathbb{V}_h^\ell}\times\mathbb{V}_h^\ell$, are respectively given by, $$\begin{array}{rclrcl} \!\!a_1(v,w) &\!\!\!\!\!:=\!\!\!\!\!& \displaystyle \left|\int_{\Omega} \nabla w \cdot \nabla v - \mathcal{G}_h^{(r)}\nabla w \cdot \mathcal{G}_h^{(r)}\nabla v \frac{1}{J_h^\ell}\, \mathrm{d}x\right|, & \!\!a_2(v,w) &\!\!\!\!\!:=\!\!\!\!\!& \displaystyle \left|\int_{\Omega} w v \ (1- \frac{1}{J_h^\ell}) \, \mathrm{d}x\right|, \!\!\\ \!\!a_3(v,w) &\!\!\!\!\!:=\!\!\!\!\!& \displaystyle \left|\int_{\Gamma} (A_h^\ell- \mathrm{Id}) \ \nabla_{\Gamma} w \cdot \nabla_{\Gamma} v \, \mathrm{d}\sigma\right|, & \!\!a_4(v,w) &\!\!\!\!\!:=\!\!\!\!\!& \displaystyle \left|\int_{\Gamma} w v \ (1- \frac{1}{J_b^\ell})\, \mathrm{d}\sigma\right|.\!\! \end{array}$$ The next step is to bound each $a_i$, for $i=1, 2, 3, 4$, using [\[ineq:Ghr-Id_1/Jh-1\]](#ineq:Ghr-Id_1/Jh-1){reference-type="eqref" reference="ineq:Ghr-Id_1/Jh-1"} and [\[ineq:AhJh\]](#ineq:AhJh){reference-type="eqref" reference="ineq:AhJh"}.
First of all, notice that $a_1(v,w) \le Q_1 +Q_2+Q_3$, where, $$\begin{aligned} Q_1 &:= \left|\int_{\Omega} ( \mathcal{G}_h^{(r)}-\mathrm{Id}) \ \nabla w \cdot \mathcal{G}_h^{(r)}\nabla v \frac{1}{J_h^\ell}\, \mathrm{d}x\right|,\\ Q_2 &:= \left|\int_{\Omega} \nabla w \cdot ( \mathcal{G}_h^{(r)}-\mathrm{Id})\nabla v \frac{1}{J_h^\ell}\, \mathrm{d}x\right|,\\ Q_3 &:= \left|\int_{\Omega} \nabla w \cdot \nabla v \Big(\frac{1}{J_h^\ell}-1\Big) \, \mathrm{d}x\right|.\end{aligned}$$ We use [\[JDG\]](#JDG){reference-type="eqref" reference="JDG"} and [\[ineq:Ghr-Id_1/Jh-1\]](#ineq:Ghr-Id_1/Jh-1){reference-type="eqref" reference="ineq:Ghr-Id_1/Jh-1"} to estimate each $Q_j$ as follows, $$\begin{aligned} & Q_1 = \left|\int_{B_h^\ell} ( \mathcal{G}_h^{(r)}-\mathrm{Id}) \ \nabla w \cdot \mathcal{G}_h^{(r)}\nabla v \frac{1}{J_h^\ell}\, \mathrm{d}x\right|\le ch^r \| \nabla w\|_{\mathrm{L}^2(B_h^\ell)} \| \nabla v\|_{\mathrm{L}^2(B_h^\ell)},\\ & Q_2 = \left|\int_{B_h^\ell} \nabla w \cdot ( \mathcal{G}_h^{(r)}-\mathrm{Id})\nabla v \frac{1}{J_h^\ell}\, \mathrm{d}x\right| \le ch^r \| \nabla w\|_{\mathrm{L}^2(B_h^\ell)} \| \nabla v\|_{\mathrm{L}^2(B_h^\ell)},\\ & Q_3 = \left|\int_{B_h^\ell} \nabla w \cdot \nabla v \Big(\frac{1}{J_h^\ell}-1\Big) \, \mathrm{d}x \right| \le ch^r \| \nabla w\|_{\mathrm{L}^2(B_h^\ell)} \| \nabla v\|_{\mathrm{L}^2(B_h^\ell)}.\end{aligned}$$ Summing these three bounds, we get, $a_1(v,w) \le ch^r \| \nabla w\|_{\mathrm{L}^2(B_h^\ell)} \| \nabla v\|_{\mathrm{L}^2(B_h^\ell)}.$ Similarly, to bound $a_2$, we use [\[JDG\]](#JDG){reference-type="eqref" reference="JDG"} and [\[ineq:Ghr-Id_1/Jh-1\]](#ineq:Ghr-Id_1/Jh-1){reference-type="eqref" reference="ineq:Ghr-Id_1/Jh-1"} as follows, $$\begin{aligned} a_2(v,w)& = \left|\int_{B_h^\ell} w v \ \Big(1- \frac{1}{J_h^\ell}\Big) \, \mathrm{d}x\right| \le c h^r \| w\|_{\mathrm{L}^2(B_h^\ell)}\| v\|_{\mathrm{L}^2(B_h^\ell)}.\end{aligned}$$ Since $v,w \in~\mathbb{V}_h^\ell \subset
\mathrm{H}^1(\Omega,\Gamma)$, we use [\[h1/2_blh\]](#h1/2_blh){reference-type="eqref" reference="h1/2_blh"} to get, $$a_2(v,w) \le c h^{r+1} \|w\|_{\mathrm{H}^1(\Omega)}\| v\|_{\mathrm{H}^1(\Omega)}.$$ Before estimating $a_3$, note that, by definition of the tangential gradient on $\Gamma$, $P \nabla_\Gamma= \nabla_\Gamma$, where $P=\mathrm{Id}-\boldsymbol{\mathrm{n}}\otimes \boldsymbol{\mathrm{n}}$ is the orthogonal projection onto the tangent space of $\Gamma$. Together with the estimate [\[ineq:AhJh\]](#ineq:AhJh){reference-type="eqref" reference="ineq:AhJh"}, we get, $$\begin{aligned} a_3(v,w) & = \left| \int_{\Gamma} (A_h^\ell- P) \ \nabla_{\Gamma} w \cdot \nabla_{\Gamma} v \, \mathrm{d}\sigma \right| \\ & \le \|A_h^\ell-P\|_{\mathrm{L}^\infty(\Gamma)} \| w\|_{\mathrm{H}^1(\Gamma)} \| v\|_{\mathrm{H}^1(\Gamma)} \le ch^{r+1} \| w\|_{\mathrm{H}^1(\Gamma)} \| v\|_{\mathrm{H}^1(\Gamma)} .\end{aligned}$$ Finally, using [\[ineq:AhJh\]](#ineq:AhJh){reference-type="eqref" reference="ineq:AhJh"}, we estimate $a_4$ as follows, $$\begin{aligned} a_4(v,w) = \left| \int_{\Gamma} w v \ \Big(1- \frac{1}{J_b^\ell}\Big)\, \mathrm{d}\sigma\right| \le ch^{r+1} \| w\|_{\mathrm{L}^2(\Gamma)}\| v\|_{\mathrm{L}^2(\Gamma)}.\end{aligned}$$ The inequality [\[ineq:a-al2\]](#ineq:a-al2){reference-type="eqref" reference="ineq:a-al2"} follows by summing the bounds on $a_i$, for $i=1, 2, 3, 4$. ◻ **Remark 7**. *Let us point out that, with $u$ (resp. $u_h$) the solution of the problem [\[fv_faible\]](#fv_faible){reference-type="eqref" reference="fv_faible"} (resp. [\[fvh\]](#fvh){reference-type="eqref" reference="fvh"}), we have, $$\label{rem:bound-uhell-indep-h} \|u_h^\ell \|_{\mathrm{H}^1 (\Omega,\Gamma)} \le c\|u \|_{\mathrm{H}^1 (\Omega,\Gamma)},$$ where $c>0$ is independent of $h$.
A simple way to prove this is to employ the geometric error estimate [\[ineq:a-al2\]](#ineq:a-al2){reference-type="eqref" reference="ineq:a-al2"}, as follows, $$\begin{aligned} c_c \|u_h^\ell \|_{\mathrm{H}^1 (\Omega,\Gamma)}^2 \le a(u_h^\ell,u_h^\ell) = a(u_h^\ell,u_h^\ell)-a(u,u_h^\ell) + a(u,u_h^\ell),\end{aligned}$$ where $c_c$ is the coercivity constant. Using [\[rem:a=ahell-For-vhell\]](#rem:a=ahell-For-vhell){reference-type="eqref" reference="rem:a=ahell-For-vhell"}, we have, $$c_c \|u_h^\ell \|_{\mathrm{H}^1 (\Omega,\Gamma)}^2 \le a(u_h^\ell,u_h^\ell)-a_h^\ell(u_h^\ell,u_h^\ell) + a(u,u_h^\ell) = (a-a_h^\ell)(u_h^\ell,u_h^\ell) + a(u,u_h^\ell).$$ Thus, applying the estimate [\[ineq:a-al2\]](#ineq:a-al2){reference-type="eqref" reference="ineq:a-al2"} along with the continuity of $a$, and absorbing $c_c$ into the generic constant $c$, we get, $$\begin{aligned} \|u_h^\ell \|_{\mathrm{H}^1 (\Omega,\Gamma)}^2 &\le c h^r \|\nabla u_h^\ell\|^2_{\mathrm{L}^2(B_h^\ell)}+ ch^{r+1}\|u_h^\ell\|_{\mathrm{H}^1 (\Omega,\Gamma)}^2+ c\|u \|_{\mathrm{H}^1 (\Omega,\Gamma)} \| u_h^\ell\|_{\mathrm{H}^1 (\Omega,\Gamma)} \\ & \le c h^r \|u_h^\ell\|^2_{\mathrm{H}^1 (\Omega,\Gamma)}+ c\|u \|_{\mathrm{H}^1 (\Omega,\Gamma)} \| u_h^\ell\|_{\mathrm{H}^1 (\Omega,\Gamma)}.\end{aligned}$$ Thus, we have, $$(1-ch^r) \|u_h^\ell \|_{\mathrm{H}^1 (\Omega,\Gamma)}^2 \le c\|u \|_{\mathrm{H}^1 (\Omega,\Gamma)} \| u_h^\ell\|_{\mathrm{H}^1 (\Omega,\Gamma)}.$$ For a sufficiently small $h$, we have $1-ch^r>0$, which concludes the proof.* ## Proof of the $\mathrm{H}^1$ error bound in Theorem [Theorem 2](#th-error-bound){reference-type="ref" reference="th-error-bound"} {#proof-of-the-mathrmh1-error-bound-in-theorem-th-error-bound} Let $u \in \mathrm{H}^{k+1} (\Omega,\Gamma)$ and $u_h \in \mathbb{V}_h$ be the respective solutions of [\[fv_faible\]](#fv_faible){reference-type="eqref" reference="fv_faible"} and [\[fvh\]](#fvh){reference-type="eqref" reference="fvh"}.
To begin with, we use the coercivity of the bilinear form $a$ to obtain, denoting by $c_c$ the coercivity constant, $$\begin{gathered} c_c \|\mathcal{I}^\ell u-u_h^\ell\|_{\mathrm{H}^1(\Omega,\Gamma)}^2 \le a(\mathcal{I}^\ell u-u_h^\ell,\mathcal{I}^\ell u-u_h^\ell) = a(\mathcal{I}^\ell u,\mathcal{I}^\ell u-u_h^\ell)-a(u_h^\ell,\mathcal{I}^\ell u-u_h^\ell) \\ = a_h^\ell(u_h^\ell,\mathcal{I}^\ell u-u_h^\ell)-a(u_h^\ell,\mathcal{I}^\ell u-u_h^\ell) +a(\mathcal{I}^\ell u,\mathcal{I}^\ell u-u_h^\ell) - a_h^\ell(u_h^\ell,\mathcal{I}^\ell u-u_h^\ell),\end{gathered}$$ where, in the latter equation, we added and subtracted $a_h^\ell(u_h^\ell,\mathcal{I}^\ell u-u_h^\ell)$. Thus, $$c_c \|\mathcal{I}^\ell u-u_h^\ell\|_{\mathrm{H}^1(\Omega,\Gamma)}^2 \le \big( a_h^\ell-a\big)(u_h^\ell,\mathcal{I}^\ell u-u_h^\ell) +a(\mathcal{I}^\ell u,\mathcal{I}^\ell u-u_h^\ell) - a_h^\ell(u_h^\ell,\mathcal{I}^\ell u-u_h^\ell).$$ Applying [\[rem:a=ahell-For-vhell\]](#rem:a=ahell-For-vhell){reference-type="eqref" reference="rem:a=ahell-For-vhell"} with $v=\mathcal{I}^\ell u -u_h^\ell \in \mathbb{V}_h^\ell$, we have, $$c_c \|\mathcal{I}^\ell u-u_h^\ell\|_{\mathrm{H}^1(\Omega,\Gamma)}^2 \le | (a_h^\ell-a)(u_h^\ell,\mathcal{I}^\ell u-u_h^\ell)| + |a(\mathcal{I}^\ell u -u ,\mathcal{I}^\ell u-u_h^\ell)|.$$ Taking advantage of the continuity of $a$ and the estimate [\[ineq:a-al2\]](#ineq:a-al2){reference-type="eqref" reference="ineq:a-al2"}, we obtain, $$\begin{array}{l} c_c \|\mathcal{I}^\ell u-u_h^\ell\|_{\mathrm{H}^1(\Omega,\Gamma)}^2 \\[0.1cm] \begin{array}{rcl} & \le & \!\!\!\displaystyle c \big( h^r \|\nabla u_h^\ell\|_{\mathrm{L}^2(B_h^\ell)}\|\nabla(\mathcal{I}^\ell u-u_h^\ell)\|_{\mathrm{L}^2(B_h^\ell)} + h^{r+1}\|u_h^\ell\|_{\mathrm{H}^1(\Omega,\Gamma)}\|\mathcal{I}^\ell u-u_h^\ell\|_{\mathrm{H}^1(\Omega,\Gamma)}\big) \!\!\!
\\[0.1cm] & & \displaystyle \qquad{ } + c_{cont} \|\mathcal{I}^\ell u -u\|_{\mathrm{H}^1(\Omega,\Gamma)} \|\mathcal{I}^\ell u-u_h^\ell\|_{\mathrm{H}^1(\Omega,\Gamma)} \\[0.1cm] & \le &\!\!\! \displaystyle c \big( h^r \|\nabla u_h^\ell\|_{\mathrm{L}^2(B_h^\ell)} + h^{r+1}\|u_h^\ell\|_{\mathrm{H}^1(\Omega,\Gamma)} \\[0.1cm] & & \displaystyle \qquad{ } + c_{cont} \|\mathcal{I}^\ell u -u\|_{\mathrm{H}^1(\Omega,\Gamma)} \big) \|\mathcal{I}^\ell u-u_h^\ell\|_{\mathrm{H}^1(\Omega,\Gamma)}. \end{array} \end{array}$$ Then, dividing by $\|\mathcal{I}^\ell u-u_h^\ell\|_{\mathrm{H}^1(\Omega,\Gamma)}$ and absorbing $c_c$ and $c_{cont}$ into the generic constant $c$, we have, $$\begin{aligned} \|\mathcal{I}^\ell u-u_h^\ell\|_{\mathrm{H}^1(\Omega,\Gamma)} & \le c\left( h^r \|\nabla u_h^\ell\|_{\mathrm{L}^2(B_h^\ell)} + h^{r+1}\|u_h^\ell\|_{\mathrm{H}^1(\Omega,\Gamma)} + \|\mathcal{I}^\ell u -u\|_{\mathrm{H}^1(\Omega,\Gamma)} \right).\end{aligned}$$ To conclude, we combine the triangle inequality with the latter bound, $$\begin{array}{l} \| u-u_h^\ell\|_{\mathrm{H}^1(\Omega,\Gamma)} \le \| u-\mathcal{I}^\ell u\|_{\mathrm{H}^1(\Omega,\Gamma)} + \|\mathcal{I}^\ell u-u_h^\ell\|_{\mathrm{H}^1(\Omega,\Gamma)} \\[0.1cm] \le \displaystyle c\left( h^r \|\nabla u_h^\ell\|_{\mathrm{L}^2(B_h^\ell)} + h^{r+1}\|u_h^\ell\|_{\mathrm{H}^1(\Omega,\Gamma)} + \|\mathcal{I}^\ell u -u\|_{\mathrm{H}^1(\Omega,\Gamma)} \right). \end{array}$$ Using Proposition [Proposition 4](#prop:interpolation-ineq){reference-type="ref" reference="prop:interpolation-ineq"} and the inequalities [\[h1/2_blh\]](#h1/2_blh){reference-type="eqref" reference="h1/2_blh"}, we have, $$\begin{array}{l} \| u-u_h^\ell\|_{\mathrm{H}^1(\Omega,\Gamma)} \\[0.1cm] \le \displaystyle c h^r (\|\nabla(u_h^\ell-u)\|_{\mathrm{L}^2(B_h^\ell)}+\|\nabla u\|_{\mathrm{L}^2(B_h^\ell)}) + c h^{r+1}\|u_h^\ell\|_{\mathrm{H}^1(\Omega,\Gamma)} + c h^k \|u\|_{\mathrm{H}^{k+1}(\Omega,\Gamma)}\\[0.1cm] \le
\displaystyle c h^r (\|u_h^\ell-u\|_{\mathrm{H}^1 (\Omega,\Gamma)}+ h^{1/2}\| u\|_{\mathrm{H}^2(\Omega)}) + ch^{r+1}\|u_h^\ell\|_{\mathrm{H}^1(\Omega,\Gamma)} + c h^k \|u\|_{\mathrm{H}^{k+1}(\Omega,\Gamma)}. \end{array}$$ Thus we have, $$\begin{aligned} (1-c h^r) \| u-u_h^\ell\|_{\mathrm{H}^1(\Omega,\Gamma)} & \le c \left( h^{r+1/2}\| u\|_{\mathrm{H}^2(\Omega)} + h^k \|u\|_{\mathrm{H}^{k+1}(\Omega,\Gamma)} + h^{r+1}\|u_h^\ell\|_{\mathrm{H}^1(\Omega,\Gamma)}\right). \end{aligned}$$ For a sufficiently small $h$, we arrive at, $$\begin{aligned} \| u-u_h^\ell\|_{\mathrm{H}^1(\Omega,\Gamma)} & \le c \left( h^{r+1/2}\| u\|_{\mathrm{H}^2(\Omega)} + h^k \|u\|_{\mathrm{H}^{k+1}(\Omega,\Gamma)} + h^{r+1}\|u_h^\ell\|_{\mathrm{H}^1(\Omega,\Gamma)} \right).\end{aligned}$$ The desired result then follows from [\[rem:bound-uhell-indep-h\]](#rem:bound-uhell-indep-h){reference-type="eqref" reference="rem:bound-uhell-indep-h"}. ## Proof of the $\mathrm{L}^2$ error bound in Theorem [Theorem 2](#th-error-bound){reference-type="ref" reference="th-error-bound"} {#proof-of-the-mathrml2-error-bound-in-theorem-th-error-bound} Recall that $u \in \mathrm{H}^{k+1}(\Omega,\Gamma)$ is the solution of the variational problem [\[fv_faible\]](#fv_faible){reference-type="eqref" reference="fv_faible"} and $u_h \in \mathbb{V}_h$ is the solution of the discrete problem [\[fvh\]](#fvh){reference-type="eqref" reference="fvh"}. To estimate the $\mathrm{L}^2$ norm of the error, we define the functional $F_h$ by, $$\begin{array}[t]{lrcl}F_h :&\mathrm{H}^1 (\Omega,\Gamma) &\longrightarrow &\mathbb{R}\\&v& \longmapsto &\displaystyle F_h(v)=a(u-u_h^\ell,v). \end{array}$$ We bound $|F_h(v)|$ for any $v \in \mathrm{H}^2 (\Omega,\Gamma)$ in Lemma [Lemma 1](#lem:Fh){reference-type="ref" reference="lem:Fh"}. Afterwards, an Aubin-Nitsche duality argument is applied to bound the $\mathrm{L}^2$ norm of the error. **Lemma 1**.
*For all $v \in \mathrm{H}^2(\Omega,\Gamma)$, there exists $c>0$ such that the following inequality holds, $$\label{ineq:F-h} |F_h(v)| \le c ( h^{k+1} + h^{r+1} ) \|v\|_{\mathrm{H}^2(\Omega,\Gamma)}.$$* **Remark 8**. *To prove Lemma [Lemma 1](#lem:Fh){reference-type="ref" reference="lem:Fh"}, we collect some key estimates for a function $v \in \mathrm{H}^2 (\Omega,\Gamma)$. Firstly, inequality [\[h1/2_blh\]](#h1/2_blh){reference-type="eqref" reference="h1/2_blh"} implies that, $$\label{ineq:blh-for-v} \forall \, v \in \mathrm{H}^2 (\Omega,\Gamma), \quad \|\nabla v\|_{\mathrm{L}^2(B_h^\ell)} \le c h^{1/2} \|v\|_{\mathrm{H}^2(\Omega)}.$$ Secondly, the interpolation inequality in Proposition [Proposition 4](#prop:interpolation-ineq){reference-type="ref" reference="prop:interpolation-ineq"} (with $m=2$) gives, $$\label{ineq:interpolation-v} \forall \, v \in \mathrm{H}^2 (\Omega,\Gamma), \quad \| \mathcal{I}^\ell v - v \|_{\mathrm{H}^1 (\Omega,\Gamma)} \le c h \|v\|_{\mathrm{H}^2 (\Omega,\Gamma)}.$$ Thirdly, applying [\[rem:a=ahell-For-vhell\]](#rem:a=ahell-For-vhell){reference-type="eqref" reference="rem:a=ahell-For-vhell"} to $\mathcal{I}^\ell v \in \mathbb{V}_h^\ell$, we have, $$\label{eq:a=ahell-For-Iv} \forall \, v \in \mathrm{H}^2 (\Omega,\Gamma), \quad a(u,\mathcal{I}^\ell v) = l(\mathcal{I}^\ell v) = a_h^\ell(u_h^\ell, \mathcal{I}^\ell v).$$* *Proof of Lemma [Lemma 1](#lem:Fh){reference-type="ref" reference="lem:Fh"}.* Consider $v \in \mathrm{H}^2(\Omega, \Gamma)$.
We decompose $|F_h(v)|$ into two terms as follows, $$|F_h(v)| = |a(u-u_h^\ell, v)| \le |a(u-u_h^\ell, v- \mathcal{I}^\ell v)| + | a(u-u_h^\ell, \mathcal{I}^\ell v)| =: F_1 + F_2.$$ Firstly, to bound $F_1$, we take advantage of the continuity of the bilinear form $a$ and apply the $\mathrm{H}^1$ error estimate [\[errh1_errl2\]](#errh1_errl2){reference-type="eqref" reference="errh1_errl2"}, alongside the inequality [\[ineq:interpolation-v\]](#ineq:interpolation-v){reference-type="eqref" reference="ineq:interpolation-v"}, as follows, $$\begin{aligned} F_1 &\le c_{cont} \, \| u-u_h^\ell\|_{\mathrm{H}^1 (\Omega,\Gamma)} \| v-\mathcal{I}^\ell v \|_{\mathrm{H}^1 (\Omega,\Gamma)} \le c ( h^k + h^{r+1/2} ) \, h \|v\|_{\mathrm{H}^2 (\Omega,\Gamma)} \\ & \le c ( h^{k+1} + h^{r+3/2} ) \, \|v\|_{\mathrm{H}^2 (\Omega,\Gamma)}.\end{aligned}$$ Secondly, to estimate $F_2$, we resort to equations [\[eq:a=ahell-For-Iv\]](#eq:a=ahell-For-Iv){reference-type="eqref" reference="eq:a=ahell-For-Iv"} and [\[ineq:a-al2\]](#ineq:a-al2){reference-type="eqref" reference="ineq:a-al2"} as follows, $$\begin{gathered} F_2 = |a(u, \mathcal{I}^\ell v)-a(u_h^\ell, \mathcal{I}^\ell v)| = |a_h^\ell(u_h^\ell, \mathcal{I}^\ell v)-a(u_h^\ell, \mathcal{I}^\ell v)| = |(a_h^\ell-a)(u_h^\ell, \mathcal{I}^\ell v)| \\ \le c h^r \|\nabla u_h^\ell\|_{\mathrm{L}^2(B_h^\ell)}\|\nabla(\mathcal{I}^\ell v)\|_{\mathrm{L}^2(B_h^\ell)} +c h^{r+1}\|u_h^\ell \|_{\mathrm{H}^1(\Omega,\Gamma)} \| \mathcal{I}^\ell v\|_{\mathrm{H}^1(\Omega,\Gamma)}.\end{gathered}$$ We now treat the first term in the latter inequality separately.
We have, $$\begin{aligned} F_3 & := h^r \|\nabla u_h^\ell\|_{\mathrm{L}^2(B_h^\ell)}\|\nabla(\mathcal{I}^\ell v)\|_{\mathrm{L}^2(B_h^\ell)} \\ & \le h^r \Big( \|\nabla(u_h^\ell-u) \|_{\mathrm{L}^2(B_h^\ell)} + \|\nabla u\|_{\mathrm{L}^2(B_h^\ell)} \Big) \Big( \|\nabla(\mathcal{I}^\ell v-v)\|_{\mathrm{L}^2(B_h^\ell)}+\|\nabla v\|_{\mathrm{L}^2(B_h^\ell)} \Big)\\ & \le h^r \Big( \| u_h^\ell-u \|_{\mathrm{H}^1 (\Omega,\Gamma)} + \|\nabla u\|_{\mathrm{L}^2(B_h^\ell)} \Big) \Big( \|\mathcal{I}^\ell v-v\|_{\mathrm{H}^1 (\Omega,\Gamma)}+\|\nabla v\|_{\mathrm{L}^2(B_h^\ell)} \Big).\end{aligned}$$ We now apply the $\mathrm{H}^1$ error estimate [\[errh1_errl2\]](#errh1_errl2){reference-type="eqref" reference="errh1_errl2"}, the inequality [\[ineq:blh-for-v\]](#ineq:blh-for-v){reference-type="eqref" reference="ineq:blh-for-v"} and the interpolation inequality [\[ineq:interpolation-v\]](#ineq:interpolation-v){reference-type="eqref" reference="ineq:interpolation-v"}, as follows, $$\begin{aligned} F_3 & \le c \, h^r \Big( h^k + h^{r+1/2} + h^{1/2} \| u\|_{\mathrm{H}^2(\Omega,\Gamma)} \Big) \Big( h\|v\|_{\mathrm{H}^2 (\Omega,\Gamma)}+h^{1/2}\| v\|_{\mathrm{H}^2 (\Omega,\Gamma)} \Big)\\ & \le c \, h^r \, h^{1/2} \Big( h^{k-1/2} + h^{r} + \| u\|_{\mathrm{H}^2(\Omega,\Gamma)} \Big) \Big( h^{1/2}+1\Big) h^{1/2} \| v\|_{\mathrm{H}^2 (\Omega,\Gamma)} \\ & \le c \, h^{r+1} \Big( h^{k-1/2} + h^{r} + \| u\|_{\mathrm{H}^2(\Omega,\Gamma)} \Big) \Big( h^{1/2}+1\Big) \| v\|_{\mathrm{H}^2 (\Omega,\Gamma)}.\end{aligned}$$ Noticing that $k-1/2>0$ (since $k\ge 1$) and that $\Big( h^{k-1/2} + h^{r} + \| u\|_{\mathrm{H}^2(\Omega,\Gamma)} \Big) \Big( h^{1/2}~+~1~\Big)$ is bounded by a constant independent of $h$, we obtain $F_3 \le c \, h^{r+1} \| v\|_{\mathrm{H}^2 (\Omega,\Gamma)}.$ Inserting this into the previous bound on $F_2$, $$F_2 \le c h^{r+1} \| v\|_{\mathrm{H}^2 (\Omega,\Gamma)} +c h^{r+1}\|u_h^\ell \|_{\mathrm{H}^1(\Omega,\Gamma)} \| \mathcal{I}^\ell
v\|_{\mathrm{H}^1(\Omega,\Gamma)}.$$ Moreover, noticing that $\| \mathcal{I}^\ell v\|_{\mathrm{H}^1(\Omega,\Gamma)} \le c \| v\|_{\mathrm{H}^2(\Omega,\Gamma)}$, $$F_2 \le c h^{r+1} \| v\|_{\mathrm{H}^2 (\Omega,\Gamma)} +c h^{r+1}\|u_h^\ell \|_{\mathrm{H}^1(\Omega,\Gamma)}\| v\|_{\mathrm{H}^2 (\Omega,\Gamma)} \le c h^{r+1} \| v\|_{\mathrm{H}^2(\Omega,\Gamma)},$$ using [\[rem:bound-uhell-indep-h\]](#rem:bound-uhell-indep-h){reference-type="eqref" reference="rem:bound-uhell-indep-h"}. We conclude the proof by summing the estimates of $F_1$ and $F_2$. ◻ *Proof of the $\mathrm{L}^2$ estimate [\[errh1_errl2\]](#errh1_errl2){reference-type="eqref" reference="errh1_errl2"}.* Defining $e:=u-u^\ell_h$, the aim is to estimate the following $\mathrm{L}^2$ error norm: $\|e\|_{\mathrm{L}^2(\Omega,\Gamma)}^2 = \|u-u^\ell_h\|_{\mathrm{L}^2(\Omega)}^2 + \|u-u^\ell_h\|_{\mathrm{L}^2(\Gamma)}^2.$ Let $v\in \mathrm{L}^2(\Omega,\Gamma)$. We define the following problem: find $z_{v} \in~\mathrm{H}^1(\Omega,\Gamma)$ such that, $$\label{dual} a(w,z_{v}) =\langle w,v \rangle_{\mathrm{L}^2(\Omega,\Gamma)}, \ \ \ \forall \ w \in \mathrm{H}^1(\Omega,\Gamma), %\langle w,v \rangle_{\L2(\Omega)}+\langle w, v \rangle_{\L2(\Gamma)}$$ Applying Theorem [Theorem 1](#th_existance_unicite_u){reference-type="ref" reference="th_existance_unicite_u"} for $f=v$ and $g= v_{|_{\Gamma}}$, there exists a unique solution $z_v \in \mathrm{H}^1 (\Omega,\Gamma)$ to [\[dual\]](#dual){reference-type="eqref" reference="dual"}, which satisfies the following inequality, $$\|z_{v}\|_{\mathrm{H}^2(\Omega,\Gamma)} \le c \|v\|_{\mathrm{L}^2(\Omega,\Gamma)}.$$ Taking $v = e \in \mathrm{L}^2(\Omega,\Gamma)$ and $w = e \in ~\mathrm{H}^1 (\Omega,\Gamma)$ in [\[dual\]](#dual){reference-type="eqref" reference="dual"}, we obtain $F_h(z_e)=a(e,z_e)= \|e\|_{\mathrm{L}^2(\Omega,\Gamma)}^2$. 
In this case, Theorem [Theorem 1](#th_existance_unicite_u){reference-type="ref" reference="th_existance_unicite_u"} implies, $$\label{reg} \|z_e\|_{\mathrm{H}^2(\Omega,\Gamma)} \le c \|e\|_{\mathrm{L}^2(\Omega,\Gamma)}.$$ Applying Inequality [\[ineq:F-h\]](#ineq:F-h){reference-type="eqref" reference="ineq:F-h"} for $z_e \in \mathrm{H}^2(\Omega,\Gamma)$ and then Inequality [\[reg\]](#reg){reference-type="eqref" reference="reg"}, we have, $$\|e\|_{\mathrm{L}^2(\Omega,\Gamma)}^2=|F_h(z_e)| \le c ( h^{k+1} + h^{r+1} ) \|z_e\|_{\mathrm{H}^2(\Omega,\Gamma)} \le c ( h^{k+1} + h^{r+1} ) \|e\|_{\mathrm{L}^2(\Omega,\Gamma)},$$ which concludes the proof. ◻ # Numerical experiments {#sec:numerical-ex} In this section we present numerical results illustrating the theoretical convergence results of Theorem [Theorem 2](#th-error-bound){reference-type="ref" reference="th-error-bound"}. Supplementary numerical results are provided to highlight the properties of the volume lift introduced in Definition [Definition 5](#def:liftvolume){reference-type="ref" reference="def:liftvolume"} relative to the lift transformation $G_h^{(r)}:~\Omega_h^{(r)}\rightarrow \Omega$ given in ([\[eq:def-fter\]](#eq:def-fter){reference-type="ref" reference="eq:def-fter"}). The Ventcel problem ([\[1\]](#1){reference-type="ref" reference="1"}) is considered with $\alpha = \beta = \kappa = 1$ on the unit disk $\Omega$, $$\arraycolsep=2pt \left\{ \begin{array}{rcll} -\Delta u + u &=& f & \text{ in } \Omega, \\ {- \Delta_{\Gamma} u} + \partial_n u + u &=& g & \text{ on } \Gamma, \end{array} \right.$$ with the source terms $f(x,y)= - y \mathrm{e}^{x}$ and $g(x,y)= y \mathrm{e}^{x} ( 3 + 4x-y^2 )$ corresponding to the exact solution $u = -f$. The discrete problem ([\[fvh\]](#fvh){reference-type="ref" reference="fvh"}) is implemented and solved using the finite element library Cumin [@cumin]. 
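As a sanity check independent of the finite element code, the boundary datum $g$ can be verified symbolically against the exact solution $u = y\,\mathrm{e}^{x}$ by parametrizing the unit circle with $(\cos t, \sin t)$, where the Laplace-Beltrami operator reduces to $\partial_t^2$. A minimal sketch with sympy (not part of the original experiments):

```python
import sympy as sp

t = sp.symbols('t', real=True)
x, y = sp.cos(t), sp.sin(t)      # parametrization of the unit circle Gamma

u = y * sp.exp(x)                # trace of the exact solution u = -f on Gamma

# Laplace-Beltrami operator on the unit circle: second derivative in arc length
lap_gamma = sp.diff(u, t, 2)

# Outward normal derivative on the unit circle: grad u . (x, y)
X, Y = sp.symbols('X Y', real=True)
U = Y * sp.exp(X)
dn_u = (sp.diff(U, X) * X + sp.diff(U, Y) * Y).subs({X: x, Y: y})

lhs = -lap_gamma + dn_u + u                # -Delta_Gamma u + dn u + u
g = y * sp.exp(x) * (3 + 4 * x - y**2)     # boundary source term of the problem

assert sp.simplify(lhs - g) == 0           # g indeed matches the exact solution
```

The check only exercises the boundary condition; the interior data are treated by the solver itself.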
Curved meshes of $\Omega$ of geometrical order $1\le r \le 3$ have been generated using the software Gmsh[^4]. All numerical results presented in this section can be fully reproduced using the dedicated source codes available on the Cumin GitLab[^5]. ![Numerical solution of the Ventcel problem on affine and quadratic meshes.](ventcel_2d_expl.png "fig:"){#f1 width="25%"}     ![Numerical solution of the Ventcel problem on affine and quadratic meshes.](ventcel_2d_expl_1.png "fig:"){#f1 width="25%"} The numerical solutions $u_h$ are computed for $\mathbb{P}^k$ finite elements, with $k=1,\dots, 4$, on a series of successively refined meshes of order $r=1,\dots, 3$, as depicted in Figure [2](#f1){reference-type="ref" reference="f1"} for coarse meshes (affine and quadratic). For each mesh order $r$ and each finite element degree $k$, the following numerical errors are computed: $$\| u-u_h^\ell \|_{\mathrm{L}^2(\Omega)}, \quad \| \nabla u- \nabla u_h^\ell \|_{\mathrm{L}^2(\Omega) },\quad \| u-u_h^\ell \|_{\mathrm{L}^2(\Gamma)} \quad {\rm and}\quad \| \nabla_\Gamma u- \nabla_\Gamma u_h^\ell \|_{\mathrm{L}^2(\Gamma) },$$ with $u_h^\ell$ given in Definition [Definition 5](#def:liftvolume){reference-type="ref" reference="def:liftvolume"}. The convergence orders of these errors, with respect to the mesh size, are reported in Tables [1](#tab:conv-1-Omega){reference-type="ref" reference="tab:conv-1-Omega"} and [2](#tab:conv-1-Gamma){reference-type="ref" reference="tab:conv-1-Gamma"}. 
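The convergence orders reported in the tables are, as is standard, estimated from the errors on two consecutive meshes via $p = \log(e_1/e_2)/\log(h_1/h_2)$. A minimal sketch of this computation (the error values below are synthetic, for illustration only):

```python
import math

def convergence_orders(hs, errs):
    # p_i = log(e_i / e_{i+1}) / log(h_i / h_{i+1}) between consecutive meshes
    return [math.log(errs[i] / errs[i + 1]) / math.log(hs[i] / hs[i + 1])
            for i in range(len(hs) - 1)]

# Synthetic errors behaving like e = C * h^2 (hypothetical values)
hs = [0.1, 0.05, 0.025, 0.0125]
errs = [3.0 * h**2 for h in hs]
orders = convergence_orders(hs, errs)
print(orders)  # each entry equals 2.0 up to rounding
```

In practice the reported order is the one observed on the finest pair of meshes, once the asymptotic regime is reached.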
|                      | $\mathbb{P}^1$ | $\mathbb{P}^2$ | $\mathbb{P}^3$ | $\mathbb{P}^4$ | $\mathbb{P}^1$ | $\mathbb{P}^2$ | $\mathbb{P}^3$ | $\mathbb{P}^4$ |
|----------------------|------|------|------|------|------|------|------|------|
| Affine mesh (r=1)    | 1.98 | 1.99 | 1.97 | 1.97 | 1.00 | 1.50 | 1.49 | 1.49 |
| Quadratic mesh (r=2) | 2.01 | 3.14 | 3.94 | 3.97 | 1.00 | 2.12 | 3.03 | 3.48 |
| Cubic mesh (r=3)     | 2.04 | 2.45 | 3.44 | 4.04 | 1.02 | 1.47 | 2.42 | 3.46 |

: Convergence orders, interior norms: $\| u-u_h^\ell \|_{\mathrm{L}^2(\Omega) }$ (first four columns) and $\| \nabla u- \nabla u_h^\ell \|_{\mathrm{L}^2(\Omega)}$ (last four columns). [\[tab:conv-1-Omega\]]{#tab:conv-1-Omega label="tab:conv-1-Omega"}

The convergence orders presented in Table [1](#tab:conv-1-Omega){reference-type="ref" reference="tab:conv-1-Omega"}, with respect to $\mathrm{L}^2$ norms on $\Omega$, deserve some comments. In the affine case $r=1$, the figures are in perfect agreement with estimates ([\[errh1_errl2\]](#errh1_errl2){reference-type="ref" reference="errh1_errl2"}): the $\mathrm{L}^2$ error norm is in $O(h^{k+1} + h^2)$ and the $\mathrm{L}^2$ norm of the gradient of the error is in $O(h^{k} + h^{1.5})$. 
For quadratic meshes, a superconvergence is observed: the case $r=2$ behaves as if $r=3$, i.e. the $\mathrm{L}^2$ error norm is in $O(h^{k+1} + h^4)$ and the $\mathrm{L}^2$ norm of the gradient of the error is in $O(h^{k} + h^{3.5})$. This superconvergence, though not understood, has been documented and further investigated in [@Jaca]. Finally, for the cubic case, a loss of order $1/2$ is observed in the convergence orders (except for the $\mathbb{P}^1$ case): the $\mathrm{L}^2$ error norm is in $O(h^{k+1/2} + h^4)$ and the $\mathrm{L}^2$ norm of the gradient of the error is in $O(h^{k-1/2} + h^{3.5})$. This loss does not seem to be related to the finite element approximation, since it is not observed in the quadratic case; nor does it seem to be related to the cubic meshes themselves, since it vanishes when considering $\mathrm{L}^2$ boundary errors, see Table [2](#tab:conv-1-Gamma){reference-type="ref" reference="tab:conv-1-Gamma"}. Further experiments showed that this loss is not caused by the specific Ventcel boundary condition: it similarly occurs when considering a Poisson problem with a Neumann boundary condition on the disk. So far we have no explanation for it. 
|                      | $\mathbb{P}^1$ | $\mathbb{P}^2$ | $\mathbb{P}^3$ | $\mathbb{P}^4$ | $\mathbb{P}^1$ | $\mathbb{P}^2$ | $\mathbb{P}^3$ | $\mathbb{P}^4$ |
|----------------------|------|------|------|------|------|------|------|------|
| Affine mesh (r=1)    | 2.00 | 2.03 | 2.01 | 2.01 | 1.00 | 2.00 | 1.98 | 1.98 |
| Quadratic mesh (r=2) | 2.00 | 3.00 | 4.00 | 4.02 | 1.00 | 2.00 | 3.00 | 4.02 |
| Cubic mesh (r=3)     | 2.00 | 3.00 | 4.00 | 4.21 | 1.00 | 2.00 | 3.00 | 3.98 |

: Convergence orders, boundary norms: $\| u-u_h^\ell \|_{\mathrm{L}^2 (\Gamma)}$ (first four columns) and $\| \nabla_\Gamma u- \nabla_\Gamma u_h^\ell \|_{\mathrm{L}^2 (\Gamma)}$ (last four columns). [\[tab:conv-1-Gamma\]]{#tab:conv-1-Gamma label="tab:conv-1-Gamma"}

Let us now discuss Table [2](#tab:conv-1-Gamma){reference-type="ref" reference="tab:conv-1-Gamma"}, which reports convergence orders with respect to $\mathrm{L}^2$ boundary norms. The first interesting point is that the $\mathrm{L}^2$ convergence towards the gradient of $u$ is faster than predicted by ([\[errh1_errl2\]](#errh1_errl2){reference-type="ref" reference="errh1_errl2"}): $O(h^{k} + h^{r+1})$ instead of $O(h^{k} + h^{r+1/2})$. 
Meanwhile, the $\mathrm{L}^2$ convergence towards $u$ behaves as expected. The superconvergence in the quadratic case is similarly observed: quadratic meshes behave as if $r=3$. #### Lift transformation regularity As discussed in Remark [Remark 3](#rem:lift-regularity){reference-type="ref" reference="rem:lift-regularity"}, the lift transformation $G_h^{(r)}:~\Omega_h^{(r)}\rightarrow \Omega$ in ([\[eq:def-fter\]](#eq:def-fter){reference-type="ref" reference="eq:def-fter"}) has a regularity controlled by the exponent $s$ in the term $(\lambda^\star)^s$; this exponent is set to $r+2$ to ensure a piecewise $C^{r+1}$ regularity. The convergence properties should however remain the same when setting $s=2$. This has been observed numerically: Tables [1](#tab:conv-1-Omega){reference-type="ref" reference="tab:conv-1-Omega"} and [2](#tab:conv-1-Gamma){reference-type="ref" reference="tab:conv-1-Gamma"} remain identical when setting $s=2$. More surprisingly, when setting $s=1$, Tables [1](#tab:conv-1-Omega){reference-type="ref" reference="tab:conv-1-Omega"} and [2](#tab:conv-1-Gamma){reference-type="ref" reference="tab:conv-1-Gamma"} also remain unchanged, though in this case estimate ([\[ineq:Gh-Id_Jh-1\]](#ineq:Gh-Id_Jh-1){reference-type="ref" reference="ineq:Gh-Id_Jh-1"}) no longer holds. Precisely, in this case it can easily be seen that $\mathrm{D}{G_h^{(r)}}$ has singularities in the non-internal elements. It seems that these singularities "*are not seen*", likely because the quadrature nodes (used to approximate the integrals) lie away from these singularities. 
|                      | $\mathbb{P}^1$ | $\mathbb{P}^2$ | $\mathbb{P}^3$ | $\mathbb{P}^4$ | $\mathbb{P}^1$ | $\mathbb{P}^2$ | $\mathbb{P}^3$ | $\mathbb{P}^4$ |
|----------------------|------|------|------|------|------|------|------|------|
| Quadratic mesh (r=2) | 2.01 | 2.51 | 2.49 | 2.49 | 1.00 | 1.52 | 1.49 | 1.49 |
| Cubic mesh (r=3)     | 2.04 | 2.50 | 2.48 | 2.49 | 1.03 | 1.51 | 1.49 | 1.49 |

: Convergence orders for the lift in [@elliott], interior norms: $\| u-u_h^\ell \|_{\mathrm{L}^2(\Omega) }$ (first four columns) and $\| \nabla u- \nabla u_h^\ell \|_{\mathrm{L}^2(\Omega)}$ (last four columns). [\[tab:conv-2\]]{#tab:conv-2 label="tab:conv-2"}

|                      | $\mathbb{P}^1$ | $\mathbb{P}^2$ | $\mathbb{P}^3$ | $\mathbb{P}^4$ | $\mathbb{P}^1$ | $\mathbb{P}^2$ | $\mathbb{P}^3$ | $\mathbb{P}^4$ |
|----------------------|------|------|------|------|------|------|------|------|
| Quadratic mesh (r=2) | 2.00 | 3.00 | 2.99 | 2.99 | 1.00 | 2.00 | 3.00 | 2.98 |
| Cubic mesh (r=3)     | 2.00 | 3.00 | 2.99 | 2.98 | 1.00 | 2.00 | 3.00 | 2.98 |

: Convergence orders for the lift in [@elliott], boundary norms: $\| u-u_h^\ell \|_{\mathrm{L}^2 (\Gamma)}$ (first four columns) and $\| \nabla_\Gamma u- \nabla_\Gamma u_h^\ell \|_{\mathrm{L}^2 (\Gamma)}$ (last four columns).

#### Former lift definition As developed in Remark [Remark 4](#rem:lift-elliott-trace-non-conserve){reference-type="ref" reference="rem:lift-elliott-trace-non-conserve"}, another lift transformation $G_h^{(r)}:~\Omega_h^{(r)}\rightarrow \Omega$ had formerly been introduced in [@elliott], with different properties on the boundary. 
We report the convergence orders observed with this lift in Table [5](#tab:conv-2){reference-type="ref" reference="tab:conv-2"}.\ The first observation is that $\| u- u_h^\ell \|_{\mathrm{L}^2 (\Omega)}$ is at most in $O(h^{2.5})$ whereas $\| \nabla u- \nabla u_h^\ell \|_{\mathrm{L}^2 (\Omega)}$ is at most in $O(h^{1.5})$, resulting in a clear decrease of the convergence rates as compared to Tables [1](#tab:conv-1-Omega){reference-type="ref" reference="tab:conv-1-Omega"} and [2](#tab:conv-1-Gamma){reference-type="ref" reference="tab:conv-1-Gamma"}. Similarly, $\| u- u_h^\ell \|_{\mathrm{L}^2 (\Gamma)}$ and $\| \nabla u- \nabla u_h^\ell \|_{\mathrm{L}^2 (\Gamma)}$ are at most in $O(h^3)$ whereas they could reach $O(h^4)$ in Tables [1](#tab:conv-1-Omega){reference-type="ref" reference="tab:conv-1-Omega"} and [2](#tab:conv-1-Gamma){reference-type="ref" reference="tab:conv-1-Gamma"}. Notice that the lift transformation intervenes at two different stages: in the right-hand-side definition in ([\[fvh\]](#fvh){reference-type="ref" reference="fvh"}) and in the error computation itself. We made the following experiment: we set the lift for the right-hand-side computation to the one in [@elliott], whereas the lift for the error computation is the one in Definition [Definition 5](#def:liftvolume){reference-type="ref" reference="def:liftvolume"} (so that the numerical solution $u_h$ is the same as in Table [5](#tab:conv-2){reference-type="ref" reference="tab:conv-2"}; only its post-treatment in terms of errors differs). We then observed that the results are partially improved: for the $\mathbb{P}^4$ case on cubic meshes, $\| u- u_h^\ell \|_{\mathrm{L}^2 (\Omega)} = O(h^{3.0})$ and $\| \nabla u- \nabla u_h^\ell \|_{\mathrm{L}^2 (\Omega)} = O(h^{2.5})$, which remains lower than the convergence orders in Table [1](#tab:conv-1-Omega){reference-type="ref" reference="tab:conv-1-Omega"}. 
Still considering the lift definition in [@elliott], we also observed that the exponent $s$ in the term $(\lambda^\star)^s$ in the lift definition (see Remark [Remark 3](#rem:lift-regularity){reference-type="ref" reference="rem:lift-regularity"}) has an influence on the convergence rates. Surprisingly, the best convergence rates are obtained when setting $s=1$: this case corresponds to the minimal regularity of the lift transformation $G_h^{(r)}$, whose differential (as previously discussed) has singularities on the non-internal mesh elements. In that case, the convergence rates go up to $O(h^{3.5})$ and $O(h^{2.5})$ on quadratic and cubic meshes for $\| u-u_h^\ell \|_{\mathrm{L}^2(\Omega) }$ and $\| \nabla u- \nabla u_h^\ell \|_{\mathrm{L}^2(\Omega)}$ respectively. Meanwhile, it has been noticed that setting $s=1$ somehow damages the quality of the numerical solution on the domain boundary: these last results are surprising and have no clear explanation. Finally, when setting $s\ge 2$, the convergence rates are lower and identical to those in Table [5](#tab:conv-2){reference-type="ref" reference="tab:conv-2"}. # Proof of Proposition [Proposition 3](#prop-Gh){reference-type="ref" reference="prop-Gh"} {#appendix:proof-Ghr} Following the notations given in Definition [Definition 2](#def:sigma-lambdaetoile-haty){reference-type="ref" reference="def:sigma-lambdaetoile-haty"}, we present the proof of Proposition [Proposition 3](#prop-Gh){reference-type="ref" reference="prop-Gh"}, which requires a series of preliminary results given in Propositions [Proposition 6](#prop-y){reference-type="ref" reference="prop-y"}, [Proposition 7](#Prop:by-y){reference-type="ref" reference="Prop:by-y"} and [Proposition 8](#prop:rho-tr){reference-type="ref" reference="prop:rho-tr"}. The proofs of these propositions are inspired by the proofs of [@Bernardi1989 lemma 6.2], [@elliott lemma 4.3] and [@elliott proposition 4.4] respectively. **Proposition 6**. 
*The map $y: \hat{x}\in \hat{T}\backslash\hat{\sigma}\mapsto y:=F_T^{(r)}{( \hat{y})} \in \Gamma_h^{(r)}$ is a smooth function and for all $m \ge 1$, there exists a constant $c>0$ independent of $h$ such that, $$\label{ineq:y} \|\mathrm{D}^m y \|_{\mathrm{L}^\infty(\hat{T}\backslash\hat{\sigma})} \le \frac{ch}{(\lambda^*)^m}.$$* **Remark 9**. *The proofs of this proposition and of the next one rely on the formula of Faà di Bruno (see [@Bernardi1989 equation 2.9]). This formula states that, for two functions $f$ and $g$ of class $\mathcal{C}^m$ such that $f \circ g$ is well defined, $$\label{Faa_di_bruno_eqt} \mathrm{D}^m(f\circ g) = \sum_{p=1}^m \Big( \mathrm{D}^p(f) \sum_{i\in E(m,p)} c_i \prod_{q=1}^m \mathrm{D}^q g ^{i_q} \Big),$$ where $E(m,p) := \{ i \in \mathbb{N}^{m}; \sum_{q=1}^m i_q=p$ and $\sum_{q=1}^m q i_q = m \}$ and the $c_i$ are positive constants, for all $i \in E(m,p)$. For instance, for $m=2$ the index sets reduce to $E(2,1)=\{(0,1)\}$ and $E(2,2)=\{(2,0)\}$, so that [\[Faa_di_bruno_eqt\]](#Faa_di_bruno_eqt){reference-type="eqref" reference="Faa_di_bruno_eqt"} recovers the familiar second-order chain rule $\mathrm{D}^2(f\circ g) = \mathrm{D}f \, \mathrm{D}^2 g + \mathrm{D}^2 f \, (\mathrm{D}g)^2$.* *Proof of Proposition [Proposition 6](#prop-y){reference-type="ref" reference="prop-y"}.* We detail the proof in the two-dimensional case; the 3D case can be proved in a similar way. Consider the reference triangle $\hat{T}$ with the usual orientation. Its vertices are denoted $(\hat{v}_i)_{i=1}^3$ and the associated barycentric coordinates respectively are: $\lambda_1 = 1-x_1-x_2$, $\lambda_2 = x_2$ and $\lambda_3 = x_1$. Consider a non-internal mesh element ${T}^{(r)}$ such that, without loss of generality, $v_1 \notin \Gamma$. In such a case, depicted in figure [\[E-Er\]](#E-Er){reference-type="ref" reference="E-Er"}, $\varepsilon_1=0$ and $\varepsilon_2=\varepsilon_3=1$, since $v_2, v_3~\in~\Gamma~\cap~{T}^{(r)}$. 
This implies that $\lambda^*=\lambda_2+\lambda_3=x_2+x_1$ and, $$\label{eq:haty} \hat{y}= \frac{1}{\lambda^*} (\lambda_2 \hat{v}_2+\lambda_3 \hat{v}_3) = \frac{1}{x_2+x_1} (x_2 \hat{v}_2+x_1 \hat{v}_3).$$ In this case, $\hat{\sigma}= \{ \hat{v}_1\}$ and $\hat{y}$ is defined on $\hat{T}\setminus \{ \hat{v}_1\}$. By differentiating the expression [\[eq:haty\]](#eq:haty){reference-type="eqref" reference="eq:haty"} of $\hat{y}$ and using an induction argument, it can be proven that there exists a constant $c>0$, independent of $h$, such that, $$\label{ineq:haty} \|\mathrm{D}^m \hat{y}\|_{\mathrm{L}^\infty(\hat{T}\backslash\hat{\sigma})} \le \frac{c}{(\lambda^*)^m}, \ \ \ \ \ \mbox{ for all } m\ge 1.$$ Since $F_T^{(r)}$ is the $\mathbb{P}^r$-Lagrangian interpolant of $F_T^{(e)}$ on $\hat{T}$, $y=F_T^{(r)}\circ \hat{y}$ is a smooth function on $\hat{T}\backslash\hat{\sigma}$. We now apply formula [\[Faa_di_bruno_eqt\]](#Faa_di_bruno_eqt){reference-type="eqref" reference="Faa_di_bruno_eqt"} to $y = F_T^{(r)}\circ \hat{y}$ to estimate the norms of its derivatives as follows, for all $m\ge 1$, $$\label{bernardi_with_y} \| \mathrm{D}^m(y)\|_{\mathrm{L}^\infty(\hat{T}\backslash\hat{\sigma})} \le \sum_{p=1}^m \Big( \| \mathrm{D}^p(F_T^{(r)})\|_{\mathrm{L}^\infty(\hat{e})} \sum_{i\in E(m,p)} c_i \prod_{q=1}^m \| \mathrm{D}^q \hat{y}\|_{\mathrm{L}^\infty(\hat{T}\backslash\hat{\sigma})}^{i_q} \Big),$$ where $\hat{e}:= (F_T^{(r)})^{(-1)}(e^{(r)})$ and $e^{(r)}:=\partial {T}^{(r)}\cap \Gamma_h^{(r)}$ are displayed in Figure [\[E-Er\]](#E-Er){reference-type="ref" reference="E-Er"}. 
Afterwards, we split the sum into two parts, one for $p=1$ and one for $p \ge 2$, and apply inequality [\[ineq:haty\]](#ineq:haty){reference-type="eqref" reference="ineq:haty"}, $$\begin{gathered} \| \mathrm{D}^m(y)\|_{\mathrm{L}^\infty(\hat{T}\backslash\hat{\sigma})} \\ \begin{array}{l} \le \displaystyle \| \mathrm{D}(F_T^{(r)})\|_{\mathrm{L}^\infty(\hat{e})} \!\!\!\!\!\sum_{i\in E(m,1)} \prod_{q=1}^m (\frac{c}{(\lambda^*)^q})^{i_q} \!+\! \sum_{p=2}^m \Big( \| \mathrm{D}^p(F_T^{(r)})\|_{\mathrm{L}^\infty(\hat{e})} \!\!\!\!\!\sum_{i\in E(m,p)} \prod_{q=1}^m (\frac{c}{(\lambda^*)^q})^{i_q}\Big) \!\!\!\!\!\!\!\\ \le \displaystyle c h {\lambda^*}^{(-\sum_{q=1}^m qi_q)}+ c \sum_{p=2}^m h^r {\lambda^*}^{(-\sum_{q=1}^m qi_q)} \le c h (\lambda^*)^{-m}, \end{array}\end{gathered}$$ using that $\| \mathrm{D}(F_T^{(r)})\|_{\mathrm{L}^\infty(\hat{e})} \le ch$ and $\| \mathrm{D}^p(F_T^{(r)})\|_{\mathrm{L}^\infty(\hat{e})} \le ch^r$, for $2 \le p \le r+1$ (see [@ciaravtransf page 239]), where the constant $c>0$ is independent of $h$. This concludes the proof. ◻ **Proposition 7**. *Assume that $\Gamma$ is $\mathcal{C}^{r+2}$ regular. Then the mapping $b\circ y: \hat{x}\in \hat{T}\backslash\hat{\sigma}\mapsto b(y(\hat{x})) \in \Gamma$ is of class $\mathcal{C}^{r+1}$. Additionally, for any $1\le m\le r+1$, there exists a constant $c>0$ independent of $h$ such that, $$\label{ineq:by-y} \|\mathrm{D}^m(b(y)- y) \|_{\mathrm{L}^\infty(\hat{T}\backslash\hat{\sigma})} \le \frac{ch^{r+1}}{(\lambda^*)^m}.$$* *Proof.* Since $\Gamma$ is $\mathcal{C}^{r+2}$ regular, the orthogonal projection $b$ is a $\mathcal{C}^{r+1}$ function on a tubular neighborhood of $\Gamma$ (see [@actanum Lemma 4.1] or [@demlow2019]). Consequently, following Proposition [Proposition 6](#prop-y){reference-type="ref" reference="prop-y"}, $b(y)-y$ is of class $\mathcal{C}^{r+1}$ on $\hat{T}\backslash\hat{\sigma}$. Secondly, consider $1\le m \le r+1$. 
Applying the Faà di Bruno formula [\[Faa_di_bruno_eqt\]](#Faa_di_bruno_eqt){reference-type="eqref" reference="Faa_di_bruno_eqt"} to the function $b(y)-y=(b-id) \circ y$, we have, $$\label{eq:Faa_di_bruno_by-y} \| \mathrm{D}^m(b(y)-y)\|_{\mathrm{L}^\infty(\hat{T}\backslash\hat{\sigma})} \le \sum_{p=1}^m \Big( \| \mathrm{D}^p(b-id)\|_{\mathrm{L}^\infty(e^{(r)})} \sum_{i\in E(m,p)} c_i \prod_{q=1}^m \| \mathrm{D}^q y\|_{\mathrm{L}^\infty(\hat{T}\backslash\hat{\sigma})}^{i_q} \Big),$$ where $e^{(r)}= \partial {T}^{(r)}\cap \Gamma_h^{(r)}$ is displayed in Figure [\[E-Er\]](#E-Er){reference-type="ref" reference="E-Er"}. Notice that $b(v)= v$ for any $\mathbb{P}^r$-Lagrangian interpolation node $v\in \Gamma \cap e^{(r)}$. Then $id_{|_{e^{(r)}}}$ is the $\mathbb{P}^r$-Lagrangian interpolant of $b_{|_{e^{(r)}}}$. Consequently, the interpolation inequality can be applied as follows (see [@EG; @Bernardi1989]), $$\forall z \in e^{(r)}, \quad \|\mathrm{D}^p (b(z)-z) \| \le ch^{r+1-p} , \ \ \ \ \ \mbox{ for any } 0 \le p \le r+1.$$ This interpolation result, combined with [\[ineq:y\]](#ineq:y){reference-type="eqref" reference="ineq:y"}, is substituted into [\[eq:Faa_di_bruno_by-y\]](#eq:Faa_di_bruno_by-y){reference-type="eqref" reference="eq:Faa_di_bruno_by-y"} to obtain, $$\begin{gathered} \| \mathrm{D}_{ \hat{x}}^m(b(y)-y)\|_{\mathrm{L}^\infty(\hat{T}\backslash\hat{\sigma})} \le c \sum_{p=1}^m \Big( h^{r+1-p} \sum_{i\in E(m,p)} \prod_{q=1}^m (\frac{h}{(\lambda^*)^q})^{i_q} \Big)\\ \le c \sum_{p=1}^m \Big( h^{r+1-p} \frac{h^{\sum_{q=1}^m i_q}}{(\lambda^*)^{\sum_{q=1}^m qi_q}} \Big) \le c \sum_{p=1}^m \Big( h^{r+1-p} \frac{h^p}{(\lambda^*)^m} \Big) \le c \frac{h^{r+1}}{(\lambda^*)^m},\end{gathered}$$ where the constant $c>0$ is independent of $h$. This concludes the proof. ◻ Now, we introduce the mapping $\rho_{{T}^{(r)}}$ such that $F_{T^{(r)}}^{(e)}=F_T^{(r)}+\rho_{{T}^{(r)}}$ maps $\hat{T}$ onto the exact triangle ${T}^{(e)}$. **Proposition 8**. 
*Let $\rho_{{T}^{(r)}}: \hat{x}\in \hat{T}\mapsto \rho_{{T}^{(r)}}(\hat{x}) \in \mathbb{R}^d$, be given by, $$\rho_{{T}^{(r)}}(\hat{x}):= \left\{ \begin{array}{ll} 0 & {\rm if } \, \hat{x}\in \hat{\sigma}, \\ (\lambda^*)^{r+2} (b(y) - y) & {\rm if } \, \hat{x}\in \hat{T}\backslash\hat{\sigma}. \end{array}\right.$$ The mapping $\rho_{{T}^{(r)}}$ is of class $\mathcal{C}^{r+1}$ on $\hat{T}$ and there exists a constant $c>0$ independent of $h$ such that, $$\label{ineq:rho_tr} \| \mathrm{D}^m \rho_{{T}^{(r)}}\|_{\mathrm{L}^\infty(\hat{T})} \le ch^{r+1}, \ \ \ \mbox{ for } \ \ 0 \le m \le r+1.$$* *Proof.* The mapping $\rho_{{T}^{(r)}}$ is of class $\mathcal{C}^{r+1}(\hat{T}\backslash\hat{\sigma})$, being the product of equally regular functions. Consider $0 \le m \le r+1$. Applying the Leibniz formula, we have, $$\begin{aligned} \mathrm{D}^m{\rho_{{T}^{(r)}}}_{|_{\hat{T}\backslash\hat{\sigma}}} & = \mathrm{D}^m ((\lambda^*)^{r+2} (b(y) - y) ) \\ & = \sum_{i=0}^m \binom{m}{i} (r+2)\cdots(r+3-i) (\lambda^*)^{r+2-i}\mathrm{D}^{m-i}(b(y)-y).\end{aligned}$$ Then applying [\[ineq:by-y\]](#ineq:by-y){reference-type="eqref" reference="ineq:by-y"}, we get, for $\hat{x}\in \hat{T}\backslash\hat{\sigma},$ $$\| \mathrm{D}^m \rho_{{T}^{(r)}}(\hat{x}) \| \le c \sum_{i=0}^m (\lambda^*)^{r+2-i} \frac{ch^{r+1}}{(\lambda^*)^{m-i}} \le c h^{r+1} (\lambda^*)^{r+2-m} .$$ Since $r+2-m >0$, $(\lambda^*)^{r+2-m}~\underset{\hat{x}\to \hat{\sigma}}\longrightarrow~0$. Consequently, $\mathrm{D}^m \rho_{{T}^{(r)}}$ can be continuously extended by $0$ on $\hat{\sigma}$ when $0\le m \le r+1$. Thus $\rho_{{T}^{(r)}}\in \mathcal{C}^{r+1}(\hat{T})$ and the latter inequality ensures [\[ineq:rho_tr\]](#ineq:rho_tr){reference-type="eqref" reference="ineq:rho_tr"}. ◻ We can now prove Proposition [Proposition 3](#prop-Gh){reference-type="ref" reference="prop-Gh"}; as mentioned before, its proof relies on the previous propositions. 
*Proof of Proposition [Proposition 3](#prop-Gh){reference-type="ref" reference="prop-Gh"}.* Let ${T}^{(r)}\in \mathcal{T}_h^{(r)}$ be a non-internal curved element. Let $x=F_T^{(r)}(\hat{x}) \in {T}^{(r)}$ where $\hat{x}\in \hat{T}$. Following equation [\[eq:def-fter\]](#eq:def-fter){reference-type="eqref" reference="eq:def-fter"}, we recall that $F_{T^{(r)}}^{(e)}( \hat{x}) = x + \rho_{{T}^{(r)}}(\hat{x})$. Then $G_h^{(r)}$ can be written as follows, $${G_h^{(r)}}_{|_{{T}^{(r)}}} = F_{T^{(r)}}^{(e)}\circ ({F_T^{(r)}})^{-1} = (F_T^{(r)}+ \rho_{{T}^{(r)}}) \circ ({F_T^{(r)}})^{-1} = id_{|_{{T}^{(r)}}} + \rho_{{T}^{(r)}}\circ ({F_T^{(r)}})^{-1}.$$ Firstly, by Proposition [Proposition 8](#prop:rho-tr){reference-type="ref" reference="prop:rho-tr"}, $\rho_{{T}^{(r)}}$ is of class $\mathcal{C}^{r+1}(\hat{T})$ and $F_T^{(r)}$ is a polynomial, hence $G_h^{(r)}$ is also $\mathcal{C}^{r+1}({T}^{(r)}).$ Secondly, $F_T^{(r)}$ is a $\mathcal{C}^1$-diffeomorphism and there exists a constant $c>0$ independent of $h$ such that (see [@ciaravtransf page 239]), $$\label{ineq:ftr-1} \| \mathrm{D}(F_T^{(r)})^{-1}\| \le \frac{c}{h}.$$ Additionally, by applying [\[ineq:rho_tr\]](#ineq:rho_tr){reference-type="eqref" reference="ineq:rho_tr"} and [\[ineq:ftr-1\]](#ineq:ftr-1){reference-type="eqref" reference="ineq:ftr-1"}, the following inequality holds for $h$ small enough, $$\label{ineq:cdt-ciar} \|\mathrm{D}( \rho_{{T}^{(r)}})\|_{\mathrm{L}^\infty(\hat{T})} \| \mathrm{D}(({F_T^{(r)}})^{-1})\|_{\mathrm{L}^\infty({T}^{(r)})} \le ch^{r+1} \frac{c}{h} = c h^r < 1.$$ Then, by applying [@ciaravtransf theorem 3], $F_T^{(r)}+\rho_{{T}^{(r)}}$ is a $\mathcal{C}^1$-diffeomorphism, being the sum of a $\mathcal{C}^1$-diffeomorphism and a $\mathcal{C}^1$ mapping satisfying [\[ineq:cdt-ciar\]](#ineq:cdt-ciar){reference-type="eqref" reference="ineq:cdt-ciar"}. Therefore, $G_h^{(r)}=(F_T^{(r)}+\rho_{{T}^{(r)}}) \circ (F_T^{(r)})^{-1}$ is a $\mathcal{C}^1$-diffeomorphism. 
To obtain the first inequality of [\[ineq:Gh-Id_Jh-1\]](#ineq:Gh-Id_Jh-1){reference-type="eqref" reference="ineq:Gh-Id_Jh-1"}, we differentiate the latter expression, $$\mathrm{D}{G_h^{(r)}}_{|_{{T}^{(r)}}} - \mathrm{Id}_{|_{{T}^{(r)}}} = \mathrm{D}(\rho_{{T}^{(r)}}\circ ({F_T^{(r)}})^{-1}) = \mathrm{D}(\rho_{{T}^{(r)}}) \circ {(({F_T^{(r)}})^{-1})} \mathrm{D}({F_T^{(r)}})^{-1}.$$ Using [\[ineq:rho_tr\]](#ineq:rho_tr){reference-type="eqref" reference="ineq:rho_tr"} and [\[ineq:ftr-1\]](#ineq:ftr-1){reference-type="eqref" reference="ineq:ftr-1"}, we obtain, $$\| \mathrm{D}{G_h^{(r)}}_{|_{{T}^{(r)}}} - \mathrm{Id}_{|_{{T}^{(r)}}}\|_{\mathrm{L}^\infty({T}^{(r)})} \le \|\mathrm{D}( \rho_{{T}^{(r)}})\|_{\mathrm{L}^\infty(\hat{T})} \| \mathrm{D}(({F_T^{(r)}})^{-1})\|_{\mathrm{L}^\infty({T}^{(r)})} \le ch^r,$$ where the constant $c>0$ is independent of $h$. Lastly, the second inequality of [\[ineq:Gh-Id_Jh-1\]](#ineq:Gh-Id_Jh-1){reference-type="eqref" reference="ineq:Gh-Id_Jh-1"} follows from the first one, by definition of the Jacobian. ◻ [^1]: Université de Pau et des Pays de l'Adour, E2S UPPA, CNRS, LMAP, UMR 5142, 64000 Pau, France. `fabien.caubet@univ-pau.fr` [^2]: Université de Pau et des Pays de l'Adour, E2S UPPA, CNRS, LMAP, UMR 5142, 64000 Pau, France. `joyce.ghantous@univ-pau.fr` [^3]: Université de Pau et des Pays de l'Adour, E2S UPPA, CNRS, LMAP, UMR 5142, 64000 Pau, France. `charles.pierre@univ-pau.fr` [^4]: Gmsh: a three-dimensional finite element mesh generator, <https://gmsh.info/> [^5]: Cumin GitLab deposit, <https://gmsh.info/>
--- abstract: | Unprojection theory is a philosophy due to Miles Reid, which has become a useful tool in algebraic geometry for the construction and the study of new interesting geometric objects such as algebraic surfaces and 3-folds. In the present work we introduce a new format of unprojection, which we call the 4-intersection format. It is specified by a codimension $2$ complete intersection ideal $I$ which is contained in four codimension $3$ complete intersection ideals $J_1, J_2, J_3, J_4$, and it leads to the construction of codimension $6$ Gorenstein rings. As an application, we construct three families of codimension $6$ Fano $3$-folds embedded in weighted projective space which correspond to the entries with identifier numbers $29376$, $9176$ and $24198$ respectively in the Graded Ring Database. address: | VASILIKI PETROTOU\ EINSTEIN INSTITUTE OF MATHEMATICS, HEBREW UNIVERSITY OF JERUSALEM, 91904 JERUSALEM, ISRAEL author: - Vasiliki Petrotou title: The 4-intersection unprojection format --- # Introduction {#sec!introduction} The theory of unprojection focuses on constructing and analyzing commutative rings in terms of simpler ones. It also provides, in an intrinsic way, the relation between the commutative rings associated to certain constructions which appear often in algebraic geometry, such as the Castelnuovo blow-down of a $(-1)$-curve on a surface [@P1; @P2; @PR; @R1], and also in algebraic combinatorics [@BP2]. Kustin and Miller, motivated by the question of the structure of codimension $4$ Gorenstein rings [@KM1; @KM2; @KM3; @KM4; @KM5], introduced [@KM] a method that constructs Gorenstein rings with more complicated structure than the initial data, which is now known as Kustin-Miller (or Type I) unprojection. In this unprojection type the codimension increases by $1$. Some years later, around $1995$, Reid reinterpreted and generalised the construction of Kustin and Miller and formulated the main principles of unprojection theory [@R1]. 
His motivation was to provide an algebraic language useful for the study of birational geometry. Since then, many papers on foundational questions of unprojection theory [@AL; @CL1; @CL2; @LP; @NP1; @NP2; @P2; @P3; @P4; @PR; @PV; @TR] have appeared. We now summarise Reid's formulation of unprojection. Assume that $J\subset R$ is a codimension $1$ ideal with $R$, $R/J$ being Gorenstein. Denote by $i\colon J\rightarrow R$ the inclusion map. Then there exists an $R$-module homomorphism $\phi \colon J\rightarrow R$ such that the $R$-module $\operatorname{Hom}_R(J, R)$ is generated by the set $\{i, \phi\}$. Using $\phi$, Reid defined the new unprojection ring, see [@PV Definition 3.1]. Papadakis and Reid proved that the ring of unprojection is Gorenstein ([@PR Theorem 1.5]). We refer the reader to [@PR Example 2.3] for the simplest example of Kustin-Miller unprojection. Unprojection theory has found many applications: in birational geometry, in the study of Fano 3-folds, algebraic surfaces of general type and Mori flips [@BGR; @BKR; @CM; @CPR; @NP1], and moreover in the construction of some interesting geometric objects such as K3 surfaces, Fano $3$-folds and Calabi-Yau $3$-folds of high codimension [@BG; @BKR; @NP2; @PV]. There are also interesting applications in algebraic combinatorics [@BP1; @BP2; @BP3; @BP4]. Sometimes, especially for the construction of geometric objects of high codimension, it is necessary to perform not only one but a series of unprojections. Neves and Papadakis [@NP2] developed a theory based on the repeated use of Kustin-Miller unprojection, which produces Gorenstein rings of high codimension, since each unprojection step adds a new unprojection variable. This theory is called parallel Kustin-Miller unprojection. A brief summary of the main aspects of the theory is given in [@PV Subsection 3.2]. In this paper we develop a new format of unprojection, which we call the $4$-intersection format. 
It is specified by a codimension $2$ complete intersection ideal $I$ with the property that it is contained in four codimension $3$ complete intersection ideals $J_1,\dots,J_4$. Using this data we construct, by parallel Kustin-Miller unprojection, a codimension $6$ Gorenstein ring. As an application we construct three families of Fano $3$-folds of codimension $6$ embedded in weighted projective space which correspond to the entries with ID: 29376, ID: 9176, and ID: 24198 in the Graded Ring Database [@ABR; @BR; @BS1; @BS2; @BS3]. In Section $2$ we give some preliminary notions and results that we need in this paper. In Section $3$ we introduce the $4$-intersection unprojection format. In Subsection [3.1](#subs!mainres2to6){reference-type="ref" reference="subs!mainres2to6"}, we give a specific example of the $4$-intersection format, and we construct, using parallel unprojection, a codimension $6$ Gorenstein ring. In Section $4$ we give some applications. Subsections [4.1](#constr!29376){reference-type="ref" reference="constr!29376"}, [4.2](#constr2!9176){reference-type="ref" reference="constr2!9176"} and [4.3](#constr3!24198){reference-type="ref" reference="constr3!24198"} contain three specific $4$-dimensional quotients of the ring studied in Section $3$. We check, partially using the computer algebra systems Macaulay2 [@GS] and Singular [@GPS01], that the geometric objects defined by them correspond to the above mentioned families of Fano 3-folds. # Preliminaries We start by recalling some notions and results that are required for the rest of the paper. Denote by $k=\mathbb{C}$ the field of complex numbers. ****Definition** 1**. Let $R$ be a Noetherian ring and $I\subset R$ an ideal. We define the *codimension* of $I$ in $R$, denoted by $\operatorname{codim} I$, as follows: $$\operatorname{codim} I = \dim R - \dim R/I.$$ ****Theorem** 2**. 
*(Krull's Principal Ideal Theorem) Assume that $R$ is a local Noetherian ring and $I$ is an ideal of $R$ which is generated by $n$ elements. Then, $\operatorname{codim} I \leq n$.* For a proof of Theorem [**Theorem** 2](#thm!krullprincipal){reference-type="ref" reference="thm!krullprincipal"}, see for example [@BH p. 414]. In the present work, we call an ideal $I$ of a polynomial ring $k[x_1,\dots,x_n]$ over a field $k$ a *complete intersection ideal* if $I$ can be generated by $\operatorname{codim} I$ elements. We refer to [@BH Section 2.3] for more details about this notion. ****Definition** 3**. A Noetherian local ring $R$ is called *Gorenstein* if it has finite injective dimension as an $R$-module. More generally, a Noetherian ring $R$ is called Gorenstein if for every maximal ideal $\mathfrak m$ of $R$ the localization $R_{\mathfrak m}$ is Gorenstein. ****Definition** 4**. An ideal $I$ of a Gorenstein ring $R$ is called *Gorenstein* if the quotient ring $R/I$ is Gorenstein. ****Theorem** 5**. *(Serre) [\[thm!serbusc\]]{#thm!serbusc label="thm!serbusc"} Let $R=k[x_1,\dots, x_n] / I$ be the polynomial ring in $n$ variables of positive degree divided by a homogeneous ideal $I$. If $\operatorname{codim} I = 1$ or $2$, then* *$R$ is Gorenstein $\Leftrightarrow$ $I$ is a complete intersection.* For a proof of Theorem [\[thm!serbusc\]](#thm!serbusc){reference-type="ref" reference="thm!serbusc"}, see for example [@E Corollary 21.20]. ****Definition** 6**. Let $R$ be a Gorenstein local ring and $J\subset R$ a codimension $1$ ideal such that the quotient ring $R/J$ is Gorenstein. Under these assumptions, the $R$-module $\operatorname{Hom}_R(J,R)$ is generated by the inclusion map $i\colon J\rightarrow R$ and an extra homomorphism $\phi \colon J\rightarrow R$ ([@PR Lemma 1.1]). Denote by $T$ a new unprojection variable. 
We call the *Kustin-Miller unprojection ring* $\operatorname{Unpr}(J,R)$ of the pair $J\subset R$ the quotient $$\operatorname{Unpr}(J,R) =\frac{R[T]}{(Tr-\phi(r): r \in J)}.$$ ****Theorem** 7**. *([@PR Theorem 1.5]) The Kustin-Miller unprojection ring $\operatorname{Unpr}(J,R)$ is Gorenstein.* More discussion about the motivation of study and basic examples of Kustin-Miller unprojection are contained in [@PR; @PV; @R1]. For the main principles of parallel Kustin-Miller unprojection we refer the reader to [@NP2; @PV]. ## Unprojection of a codimension 2 complete intersection inside a codimension 3 complete intersection {#fund!calc11} In this subsection we specify a codimension $2$ complete intersection ideal $I$ and a codimension $3$ complete intersection ideal $J$ such that $I\subset J$. Following [@P2 Section 4], we give the explicit description of the unprojection ring $\operatorname{Unpr}(J/I,R/I)$ of the pair $J/I\subset R/I$. Let $R=k[a_i, b_i, x_j]$, where $1\leq i\leq 3$ and $j\in \{1, 3, 5\}$, be the standard graded polynomial ring in $9$ variables over a field $k$. We set $$f_1 = a_1x_1+a_2x_3+a_3x_5, \quad \quad f_2 = b_1x_1+b_2x_3+b_3x_5,$$ and consider the ideals $$I=(f_1, f_2), \quad \quad J=(x_1, x_3, x_5)$$ of $R$. We denote by $A$ the $2\times 3$ matrix $$A = \begin{pmatrix} a_1 & a_2 & a_3\\ b_1 & b_2 & b_3 \end{pmatrix}$$ and, for $1 \leq i \leq 3$, by $A_i$ the $2\times 2$ submatrix of $A$ obtained by removing the $i$-th column of $A$. ****Proposition** 8**. *The ideal $I$ is a homogeneous codimension $2$ Gorenstein ideal of $R$ and the ideal $J$ is a homogeneous codimension $3$ Gorenstein ideal. Moreover, $I$ is a subset of $J$.* *Proof.* We first prove that $\operatorname{codim} I = 2$. The ideal $I$ is generated by two homogeneous polynomials of $R$ of degree $2$. Hence, by Theorem [**Theorem** 2](#thm!krullprincipal){reference-type="ref" reference="thm!krullprincipal"}, $\operatorname{codim} I\leq 2$. 
To prove the claim it is enough to show that $\operatorname{codim} I\geq 2$. We set $f_3=-b_1f_1+a_1f_2$. Let $>$ be the lexicographic order on $R$ with $$a_1> a_2> a_3> b_1> b_2>b_3>x_1> x_3> x_5.$$ We denote by $Q$ the initial ideal of $I$ with respect to $>$. It is well-known that $\operatorname{codim} I = \operatorname{codim} Q$. We set $$L = ( a_1x_1, b_1x_1, a_1b_2x_3).$$ Since the initial term of $f_1$ is $a_1x_1$, the initial term of $f_2$ is $b_1x_1$ and the initial term of $f_3$ is $a_1b_2x_3$, we have $L \subset Q$, hence $\operatorname{codim} L \leq \operatorname{codim} Q$. We consider the affine variety $X=V(L) \subset \mathbb{A}^{9}$. It holds that $$X= V(x_1,x_3)\cup V(b_2, x_1)\cup V(a_1, b_1)\cup V(a_1, x_1),$$ hence $\dim X = 9 - 2 = 7$. Using that $$\dim \ R/L=\dim \ X,$$ it follows that $\operatorname{codim} L = 2$. Hence $\operatorname{codim} I\geq 2$. Therefore, by Theorem [\[thm!serbusc\]](#thm!serbusc){reference-type="ref" reference="thm!serbusc"} the ideal $I$ is Gorenstein. We now prove that $\operatorname{codim} J = 3$. According to the Third Isomorphism Theorem of rings, $$R/J \cong k[a_1 ,a_2, a_3, b_1, b_2, b_3].$$ So, $\dim \ R/J= 6$. Hence, $$\operatorname{codim} \ J = \dim \ R - \dim \ R/J = 3.$$ By the last isomorphism, the ideal $J$ is Gorenstein. By the equality of matrices $$\begin{pmatrix} f_{1} \\ f_{2} \end{pmatrix} = A \begin{pmatrix} x_{1} \\ x_{3} \\ x_{5} \end{pmatrix}$$ it follows that $I\subset J$. ◻ We set, for $1 \leq i \leq 3$, $h_i$ to be the determinant of the matrix $A_i$. Denote by $$\phi\colon J/I\rightarrow R/I$$ the map such that $\phi(x_1 + I)= h_1+ I, \, \, \, \, \phi(x_3 + I)= -h_2+ I, \, \, \, \, \phi(x_5+ I)= h_3+ I$. By [@P2 Theorem 4.3], $\operatorname{Hom}_{R/I}(J/I,R/I)$ is generated as an $R/I$-module by the inclusion map $i$ and $\phi$. 
As a corollary, $$\operatorname{Unpr}(J/I,R/I) =\frac{R[T]}{I+(Tx_1-h_1, Tx_3+h_2, Tx_5-h_3)}.$$ # The $4$-intersection unprojection format {#sec!4intformat} In this section we introduce the notion of the $4$-intersection unprojection format. ****Definition** 9**. Assume that $J_1,\dots,J_4$ are four codimension $3$ complete intersection ideals and $I$ is a codimension $2$ complete intersection ideal. We say that $I$ is a $4$-intersection ideal in $J_1, \dots , J_4$ if $I \subset J_t$ for all $1 \leq t \leq 4$. An important question is how to explicitly construct $I$ and $J_t$ such that $I$ is a $4$-intersection ideal in $J_1, \dots , J_4$. In Subsection [3.1](#subs!mainres2to6){reference-type="ref" reference="subs!mainres2to6"} we present such a construction. ## A specific $4$-intersection unprojection format {#subs!mainres2to6} In the present subsection we specify the following: a codimension $2$ complete intersection ideal $I$ and four codimension $3$ complete intersection ideals $J_1,\dots ,J_4$ such that $I$ is a $4$-intersection ideal in $J_1, \dots , J_4$. Using this configuration as initial data, we construct, by parallel Kustin-Miller unprojection [@NP2], a codimension $6$ Gorenstein ring. Assume that $k$ is a field. We consider the standard graded polynomial ring $R=k[c_i, x_i]$, where $1\leq i \leq 6$. We set $$f=c_1x_1x_2+c_2x_3x_4+c_3x_5x_6, \quad \quad g=c_4x_1x_2+c_5x_3x_4+c_6x_5x_6,$$ $I=(f,g)$ and $$J_1= (x_1, x_3, x_5), \; J_2= (x_1, x_4, x_6), \; J_3= (x_2, x_3, x_6), \; J_4= (x_2, x_4, x_5).$$ It is clear that $f,g$ are homogeneous elements of degree $3$ and $I$ is a $4$-intersection ideal in the ideals $J_1, \dots , J_4$. In the applications we need to specialize the variables $c_i$ to elements of $k$. We now give a precise way to do that. Consider the Zariski open subset $$\mathcal{U}=\{(u_1,\dots, u_6)\in \mathbb{A}^{6}: u_i\neq 0 \, \, \, \text{for all} \, \, \,1 \leq i \leq 6 \}.$$ We assume that $(d_1, \dots , d_6)\in \mathcal{U}$. 
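The containments $I \subset J_t$ can also be verified mechanically. The following minimal sketch in Python with SymPy is only an illustration (the paper's own computations use Macaulay2 and Singular): every monomial of $f$ and $g$ involves a generator of each $J_t$, so both polynomials vanish once the generators of $J_t$ are set to zero.

```python
# Check that I = (f, g) is a 4-intersection ideal in J_1, ..., J_4:
# f and g must lie in each J_t, which we test by setting the three
# generators of J_t to zero and verifying that f and g vanish.
import sympy as sp

c1, c2, c3, c4, c5, c6 = sp.symbols('c1:7')
x1, x2, x3, x4, x5, x6 = sp.symbols('x1:7')

f = c1*x1*x2 + c2*x3*x4 + c3*x5*x6
g = c4*x1*x2 + c5*x3*x4 + c6*x5*x6

# Generators of J_1, ..., J_4 as in the text.
J = [(x1, x3, x5), (x1, x4, x6), (x2, x3, x6), (x2, x4, x5)]

for gens in J:
    zero = {v: 0 for v in gens}
    assert f.subs(zero) == 0 and g.subs(zero) == 0
print("I = (f, g) is contained in J_1, J_2, J_3, J_4")
```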
We denote by $\hat{R}=k[x_1,\dots, x_6]$ the polynomial ring in the variables $x_i$. Let $$\hat{\phi}\colon R\rightarrow \hat{R}$$ be the unique $k$-algebra homomorphism such that $\hat{\phi}(x_i)= x_i$ and $\hat{\phi}(c_i)= d_i$ for all $1 \leq i \leq 6$. We denote by $\hat{I}$ the ideal of the ring $\hat{R}$ generated by the subset $\hat{\phi}(I)$. ****Proposition** 10**. *The ideals $I$ and $\hat{I}$ are homogeneous codimension $2$ Gorenstein ideals.* *Proof.* Since $I$ is generated by two elements, we have, by Theorem [**Theorem** 2](#thm!krullprincipal){reference-type="ref" reference="thm!krullprincipal"}, that $\operatorname{codim} I\leq 2$. Now we show that $\operatorname{codim} I\geq 2$. We set $$r_1= -c_4f+c_1g, \quad \quad r_2= g, \quad \quad r_3=f.$$ Let $>$ be the lexicographic order on $R$ with $c_1>\dots> c_6> x_1>\dots>x_6$. Consider the ideal $$L =(\operatorname{in}_{>} (r_1), \operatorname{in}_{>} (r_2), \operatorname{in}_{>} (r_3)),$$ where $\operatorname{in}_{>}(r_1)= c_1c_5x_3x_4$, $\operatorname{in}_{>}(r_2)= c_4x_1x_2$ and $\operatorname{in}_{>}(r_3)= c_1x_1x_2$. We now prove that $\operatorname{codim} L = 2$. It is enough to show that $\dim \ R/L= 10$. Consider the affine variety $X=V(L)\subset \mathbb{A}^{12}$. It holds that $$X= V(c_4,c_1)\cup V(c_5, x_1)\cup V(x_4, x_1)\cup V(x_3, x_1)\cup V(c_1, x_1)\cup V(c_5, x_2)\cup$$$$\cup V(x_4, x_2)\cup V(x_3, x_2)\cup V(c_1, x_2).$$ Using that $$\dim \ R/L=\dim \ X,$$ the claim is proven. Hence, $\operatorname{codim} I\geq 2$. In what follows we show that the ideal $\hat{I}$ is also a codimension $2$ Gorenstein ideal. We set $$\tilde{r_1}= \hat{\phi}(r_1), \quad \quad \tilde{r_2}= \hat{\phi}(r_2).$$ Let $>$ be the lexicographic order on $\hat{R}$ with $x_1>\dots> x_6$. 
Consider the ideal $$Q=(\operatorname{in}_{>} (\tilde{r_1}), \operatorname{in}_{>} (\tilde{r_2})),$$ where $\operatorname{in}_{>}(\tilde{r_1})= d_1d_5x_3x_4$ and $\operatorname{in}_{>}(\tilde{r_2})= d_4x_1x_2$. It is immediate that $Q=(x_3x_4, x_1x_2)$. It is enough to show that $\dim \ \hat{R}/Q= 4$. Consider the affine variety $Y=V(Q)\subset \mathbb{A}^{6}$. It holds that $$Y= V(x_2,x_4)\cup V(x_2, x_3)\cup V(x_1, x_3)\cup V(x_1, x_4).$$ Using that $$\dim \ \hat{R}/Q=\dim \ Y,$$ the claim is proven. Hence, $\operatorname{codim} \hat{I}\geq 2$. By Theorem [\[thm!serbusc\]](#thm!serbusc){reference-type="ref" reference="thm!serbusc"}, the ideals $I$ and $\hat{I}$ are Gorenstein. ◻ ****Proposition** 11**. *(i) For all $t$ with $1\leq t\leq 4$, the ideal $J_t / I$ is a codimension $1$ homogeneous ideal of the quotient ring $R/I$ such that the ring $R/ J_t$ is Gorenstein. (ii) For all $t, s$ with $1\leq t < s\leq 4$, it holds that $\operatorname{codim}_{R/I}(J_t/I+J_s/I)= 3$.* *Proof.* We first prove (i). According to the Third Isomorphism Theorem of rings, $$\label{iso!pols11} R/J_1 \cong k[c_1,\dots, c_6, x_2, x_4, x_6], \, \, \, R/J_2 \cong k[c_1,\dots, c_6, x_2, x_3, x_5],$$ $R/J_3 \cong k[c_1,\dots, c_6, x_1, x_4, x_5], \, \, \, R/J_4 \cong k[c_1,\dots, c_6, x_1, x_3, x_6].$ So, we conclude that for all $t$ with $1\leq t\leq 4$, $\dim \ R/J_t= 9.$ By Proposition [**Proposition** 10](#prop!codim2){reference-type="ref" reference="prop!codim2"}, it follows that $\dim \ R/I = \dim \ R- \operatorname{codim} \ I=10.$ Hence, using the last two equalities we have that for all $t$ with $1\leq t\leq 4$, $$\operatorname{codim} J_{t}/I=1.$$ Due to the isomorphisms ([\[iso!pols11\]](#iso!pols11){reference-type="ref" reference="iso!pols11"}), for all $t$ with $1\leq t\leq 4$, the ring $R/J_t$ is Gorenstein. 
Concerning claim (ii), the Third Isomorphism Theorem of rings implies that $$R/(J_1+J_2) \cong k[c_1,\dots, c_{6}, x_2], \, \, \, \, R/(J_1+J_3) \cong k[c_1,\dots, c_{6}, x_4],$$ $$R/(J_1+J_4) \cong k[c_1,\dots, c_{6}, x_6], \, \, \, \, R/(J_2+J_3) \cong k[c_1,\dots, c_{6}, x_5],$$ $$R/(J_2+J_4) \cong k[c_1,\dots, c_{6}, x_3], \, \, \, \, R/(J_3+J_4) \cong k[c_1,\dots, c_{6}, x_1].$$ From the latter isomorphisms it holds that for $t, s$ with $1\leq t < s\leq 4$, $$\dim \ R/(J_t+J_s)= 7.$$ Recall that $\dim \ R/I= 10$. Taking into account the definition of codimension, we conclude that for all $t, s$ with $1\leq t < s\leq 4$, $$\operatorname{codim} \ (J_t/I+J_s/I)= 3.$$ ◻ For all $t$ with $1\leq t\leq 4$, we denote by $i_t\colon J_t/I \rightarrow R/I$ the inclusion map. In what follows, we define $\phi_t\colon J_t/ I \rightarrow R/I$ for all $t$ with $1\leq t\leq 4$, and prove that these maps satisfy the assumptions of [@NP2 Theorem 2.3]. Recall the polynomials $h_1, h_2, h_3$ which were defined in Subsection [2.1](#fund!calc11){reference-type="ref" reference="fund!calc11"}. We denote by $\widetilde{h_1},\widetilde{h_2},\widetilde{h_3}$ the polynomials obtained from $h_1, h_2, h_3$ by substituting $$a_1=c_1x_2, \, a_2=c_2x_4, \, a_3=c_3x_6, \, b_1=c_4x_2, \, b_2=c_5x_4, \, b_3=c_6x_6.$$ ****Proposition** 12**. *There exists a unique graded homomorphism of $R/I$-modules $\phi_1\colon J_1/ I \rightarrow R/I$ such that* *$\phi_1(x_1 + I)= \widetilde{h_1}+ I, \, \, \, \, \phi_1(x_3 + I)=\widetilde{h_2}+ I, \, \, \, \, \phi_1(x_5 + I)= \widetilde{h_3}+ I.$* *Proof.* It follows from [@P2 Theorem 4.3]. ◻ For the definition of $\phi_2$ we replace $x_3$ by $x_4$ and $x_5$ by $x_6$. In this case, $\widetilde{h_1},\widetilde{h_2},\widetilde{h_3}$ are the polynomials obtained from $h_1, h_2, h_3$ by substituting $$a_1=c_1x_2, \, a_2=c_2x_3, \, a_3=c_3x_5, \, b_1=c_4x_2, \, b_2=c_5x_3, \, b_3=c_6x_5.$$ For the definitions of $\phi_3$ and $\phi_4$ we work similarly. 
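For $t=1$ the map above can be made fully explicit. The following SymPy sketch (again only an illustration, not the paper's Macaulay2 computation) forms $\widetilde{h_1},\widetilde{h_2},\widetilde{h_3}$ as the $2\times 2$ minors of the substituted matrix $A$ of Subsection 2.1, up to the sign conventions of Proposition 12, and confirms the instance $\phi_1(J_1/I)\subset J_s/I$ of Proposition 14 for $s=2,3,4$:

```python
# phi_1 sends x_1, x_3, x_5 (mod I) to the 2x2 minors h~_1, h~_2, h~_3 of
# the matrix A of Subsection 2.1 after the substitution a_i, b_i -> c_j x_k.
import sympy as sp

c1, c2, c3, c4, c5, c6 = sp.symbols('c1:7')
x1, x2, x3, x4, x5, x6 = sp.symbols('x1:7')

A = sp.Matrix([[c1*x2, c2*x4, c3*x6],
               [c4*x2, c5*x4, c6*x6]])

# h~_i = determinant of A with the i-th column removed.
h = [A[:, [j for j in range(3) if j != i]].det() for i in range(3)]

# Each h~_i vanishes after killing the generators of J_2, J_3 or J_4,
# so phi_1(J_1/I) lies in J_s/I for s = 2, 3, 4 (Proposition 14 for t = 1).
for gens in [(x1, x4, x6), (x2, x3, x6), (x2, x4, x5)]:
    zero = {v: 0 for v in gens}
    assert all(sp.expand(hi.subs(zero)) == 0 for hi in h)
print([sp.factor(hi) for hi in h])
```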
For all $t$ with $1\leq t\leq 4$, the degree of $\phi_t$ is equal to $3$. By the discussion after [@NP2 Proposition 2.1], the new unprojection variable has degree equal to the degree of the corresponding $\phi_t$. ****Proposition** 13**. *For all $t$ with $1\leq t\leq 4$, the $R/I$-module $\operatorname{Hom}_{R/I}(J_t/I,R/I)$ is generated by the two elements $i_t$ and $\phi_t$.* *Proof.* It follows from [@P2 Theorem 4.3]. ◻ For all $t, s$ with $1\leq t,s\leq 4$ and $t\neq s$, we define $r_{ts}=0$. ****Proposition** 14**. *For all $t, s$ with $1\leq t,s\leq 4$ and $t\neq s$, it holds that $$\phi_t(J_t/I) \subset J_s/I.$$* *Proof.* It is a direct computation using the definition of the maps $\phi_t$. ◻ ****Proposition** 15**. *For all $t, s$ with $1\leq t,s\leq 4$ and $t\neq s$, there exists a homogeneous element $A_{st}$ such that $$\phi_s (\phi_t (p)) = A_{st}p$$ for all $p\in J_t/I$.* *Proof.* It follows from [@NP2 Proposition 2.1]. ◻ ****Remark** 16**. We note that the elements $A_{st}$ are polynomial expressions in the variables $c_i$ and $x_j$. We computed them using the computer algebra program Macaulay2 [@GS]. We now write down $A_{12}$: $$A_{12}=x_2^2(c_3c_4-c_1c_6)(c_1c_5-c_2c_4).$$ Applying symmetry, one can get formulas for all $A_{st}$. Following [@NP2 Section 2], we write down explicitly the final ring as a quotient of a polynomial ring by a codimension $6$ ideal. ****Definition** 17**. Let $T_1, T_2, T_3, T_4$ be four new variables of degree $3$. 
We define *$I_{un}$* to be the ideal $$I+ (T_1x_1 - \phi_1(x_1),\, T_1x_3 - \phi_1(x_3), \, T_1x_5 - \phi_1(x_5), \, T_2x_1 - \phi_2(x_1), \, T_2x_4 - \phi_2(x_4), \, T_2x_6 - \phi_2(x_6),$$ $$T_3x_2 - \phi_3(x_2), \, T_3x_3 - \phi_3(x_3), \, T_3x_6 - \phi_3(x_6), \, T_4x_2 - \phi_4(x_2), \, T_4x_4 - \phi_4(x_4), \, T_4x_5 - \phi_4(x_5),$$ $$T_2T_1 - A_{21}, \, T_3T_1 - A_{31}, \, T_4T_1 - A_{41}, \, T_3T_2 - A_{32}, \, T_4T_2 - A_{42}, \, T_4T_3 - A_{43} )$$ of the polynomial ring $R[T_1,T_2,T_3,T_4]$. We set $R_{un}= R[T_1,T_2,T_3,T_4]/ I_{un}$. ****Remark** 18**. The reason we put, for all $1 \leq i \leq 4$, $\operatorname{deg} T_i = 3$ is that each homomorphism $\phi_i$ is graded of degree $3$. We also note that according to [@NP2 Proposition 2.1] the degree of each $A_{st}$ is equal to $6$. ****Theorem** 19**. *The ring $R_{un}$ is Gorenstein.* *Proof.* By Propositions [**Proposition** 11](#thm!assum1){reference-type="ref" reference="thm!assum1"}, [**Proposition** 12](#prop!defphi1){reference-type="ref" reference="prop!defphi1"} and [**Proposition** 14](#prop!existrst1){reference-type="ref" reference="prop!existrst1"}, the assumptions of [@NP2 Theorem 2.3] are satisfied. Hence, the ring $R_{un}$ is Gorenstein. ◻ ****Proposition** 20**. *The homogeneous ideal $I_{un}$ is a codimension $6$ ideal with a minimal generating set of $20$ elements.* *Proof.* According to the grading of the variables and the discussion before Proposition [**Proposition** 13](#prop!hom1){reference-type="ref" reference="prop!hom1"}, it is not difficult to see that $I_{un}$ is a homogeneous ideal. Recall that in Kustin-Miller unprojection the codimension increases by $1$. Hence, the homogeneous ideal $I_{un}$, as a result of a series of four unprojections of Kustin-Miller type starting from the codimension $2$ ideal $I$, is a codimension $6$ ideal. 
In order to prove that $I_{un}$ is minimally generated by $20$ elements we use the idea of specialization. More precisely, we set $c_1=c_3=c_{5}=c_{6}=0$ and $c_2=c_4=1$ in the ideal $I_{un}$. We call $\widetilde{I_{un}}$ the ideal which occurs after these substitutions. The ideal $\widetilde{I_{un}}$ is a homogeneous ideal with $16$ monomials and $4$ binomials as generators. It is not difficult to see that $\widetilde{I_{un}}$ is minimally generated by these elements. Hence, we conclude that $I_{un}$ is generated by at least $20$ elements. By Definition [**Definition** 17](#def!mrin1){reference-type="ref" reference="def!mrin1"}, $I_{un}$ is generated by $20$ homogeneous elements. The result follows. ◻ # Applications {#Sec!app1} In this section we prove, using Theorem [**Theorem** 19](#main!thm1){reference-type="ref" reference="main!thm1"}, the existence of $3$ families of Fano $3$-folds of codimension $6$ in weighted projective space. For some basic definitions and facts related to singularities and Fano $3$-folds which appear through this section we refer to [@PV Section 3]. We note that in what follows we make essential use of the computer algebra systems Macaulay2 [@GS] and Singular [@GPS01]. The first construction is summarised in the following theorem. It corresponds to the entry $29376$ of the Graded Ring Database [@ABR; @BR; @BS1; @BS2; @BS3]. More details for the construction are given in Subsection [4.1](#constr!29376){reference-type="ref" reference="constr!29376"}. ****Theorem** 21**. *There exists a family of quasismooth, projectively normal and projectively Gorenstein Fano $3$-folds $X\subset \mathbb{P}(1^8, 2,3)$, nonsingular away from one quotient singularity $\frac{1}{3}(1,1,2)$, with Hilbert series* *$P_{X}(t)=\frac{1-6t^2+15t^4-20t^6+15t^{8}-6t^{10}+t^{12}}{(1-t)^8(1-t^2)(1-t^3)}.$* [\[constr!11\]]{#constr!11 label="constr!11"} The second construction is summarised in the following theorem. It corresponds to the entry $9176$ of the Graded Ring Database. 
More details for the construction are given in Subsection [4.2](#constr2!9176){reference-type="ref" reference="constr2!9176"}. ****Theorem** 22**. *There exists a family of quasismooth, projectively normal and projectively Gorenstein Fano $3$-folds $X\subset \mathbb{P}(1^2, 2^5,3^3)$, nonsingular away from eight quotient singularities $\frac{1}{2}(1,1,1)$, with Hilbert series* *$P_{X}(t)=\frac{1-6t^4-8t^5+2t^6+24t^7+21t^8-16t^9-36t^{10}-16t^{11}+21t^{12}+24t^{13}+2t^{14}-8t^{15}-6t^{16}+t^{20}}{(1-t)^2(1-t^2)^5(1-t^3)^3}.$* The third construction is summarised in the following theorem. It corresponds to the entry $24198$ of the Graded Ring Database. More details for the construction are given in Subsection [4.3](#constr3!24198){reference-type="ref" reference="constr3!24198"}. ****Theorem** 23**. *There exists a family of quasismooth, projectively normal and projectively Gorenstein Fano $3$-folds $X\subset \mathbb{P}(1^6, 2^3,3)$, nonsingular away from two quotient singularities $\frac{1}{2}(1,1,1)$ and one quotient singularity $\frac{1}{3}(1,1,2)$, with Hilbert series* *$P_{X}(t)=\frac{1-t^2-10t^3+5t^4+24t^5-5t^6-28t^7-5t^8+24t^9+5t^{10}-10t^{11}-t^{12}+t^{14}}{(1-t)^6(1-t^2)^3(1-t^3)}.$* ## Construction of Graded Ring Database entry with ID: 29376 {#constr!29376} In this subsection, we give the details of the construction for the family described in Theorem [**Theorem** 21](#constr!11){reference-type="ref" reference="constr!11"}. Denote by $k=\mathbb{C}$ the field of complex numbers. Consider the polynomial ring $R= k[x_i, c_i]$, where $1\leq i \leq 6$. Let $R_{un}$ be the ring in Definition [**Definition** 17](#def!mrin1){reference-type="ref" reference="def!mrin1"} and $\hat{R}=k[x_1,\dots, x_6]$ be the polynomial ring in the variables $x_i$. We substitute the variables $(c_1, \dots, c_{6})$ which appear in the definitions of the rings $R$ and $R_{un}$ with a general element of $k^{6}$ (in the sense of being outside a proper Zariski closed subset of $k^{6}$). 
Let $\hat{I}$ be the ideal of $\hat{R}$ obtained from the ideal $I$, and $\hat{I}_{un}$ the ideal of $\hat{R}[T_1,T_2,T_3,T_4]$ obtained from the ideal $I_{un}$, after this substitution. We set $\hat{R}_{un}= \hat{R}[T_1,T_2,T_3,T_4]/\hat{I}_{un}$. In what follows, $x_1, x_3, x_5$ are variables of degree $1$ and $x_2, x_4, x_6$ are variables of degree $2$. Hence, from the discussion before Proposition [**Proposition** 13](#prop!hom1){reference-type="ref" reference="prop!hom1"} it follows that the degrees of $T_2, T_3, T_4$ are equal to $1$ and the degree of $T_1$ is equal to $3$. According to this grading, the ideals $\hat{I}$ and $\hat{I}_{un}$ are homogeneous. Due to Theorem [**Theorem** 19](#main!thm1){reference-type="ref" reference="main!thm1"}, $\text{Proj} \ \hat{R}_{un}\subset \mathbb{P} (1^{6}, 2^3, 3)$ is a projectively Gorenstein $3$-fold. Let $A= k[w_{1}, w_{2}, T_2, T_3, T_4, x_1, x_3, x_5, x_6, T_1]$ be the polynomial ring over $k$ with $w_{1}, w_{2}$ variables of degree $1$ and the other variables of the degrees noted above. 
Consider the unique $k$-algebra homomorphism $\psi\colon \hat{R}[T_1, T_2, T_3, T_4]\rightarrow A$ such that $\psi(x_1)= x_1$, $\psi(x_2)= f_1$, $\psi(x_3)= x_3$, $\psi(x_4)= f_2$, $\psi(x_5)= x_5$, $\psi(x_6)= x_6$, $\psi(T_1)= T_1$, $\psi(T_2)= T_2$, $\psi(T_3)= T_3$, $\psi(T_4)= T_4$, where $f_1= l_1x_1^2+ l_2x_1x_3+ l_3x_3^2+ l_4x_1x_5+ l_5x_3x_5+ l_6x_5^2+ l_7x_1T_2+ l_8x_{3}T_2+ l_9x_{5}T_{2}+ l_{10}T_{2}^2+ l_{11}x_{1}T_3+ l_{12}x_{3}T_3+ l_{13}x_{5}T_3+ l_{14}T_{2}T_3+ l_{15}T_3^2+l_{16}x_{1}T_4+ l_{17}x_{3}T_{4}+ l_{18}x_{5}T_4+l_{19}T_2T_4+l_{20}T_3T_4+l_{21}T_4^2+l_{22}x_1w_1+l_{23}x_3w_1+l_{24}x_5w_1+l_{25}T_2w_1+l_{26}T_3w_1+l_{27}T_4w_1+ l_{28}w_1^2+l_{29}x_1w_2+l_{30}x_3w_2+l_{31}x_5w_2+l_{32}T_2w_2+l_{33}T_3w_2+l_{34}T_4w_2+ l_{35}w_1w_2+l_{36}w_2^2+l_{37}x_6$, $f_2= l_{38}x_1^2+ l_{39}x_1x_3+ l_{40}x_3^2+ l_{41}x_1x_5+ l_{42}x_3x_5+ l_{43}x_5^2+ l_{44}x_1T_2+ l_{45}x_{3}T_2+ l_{46}x_{5}T_{2}+ l_{47}T_{2}^2+ l_{48}x_{1}T_3+ l_{49}x_{3}T_3+ l_{50}x_{5}T_3+ l_{51}T_{2}T_3+ l_{52}T_3^2+l_{53}x_{1}T_4+l_{54}x_{3}T_{4}+l_{55}x_{5}T_4+l_{56}T_2T_4+l_{57}T_3T_4+l_{58}T_4^2+l_{59}x_1w_1+l_{60}x_3w_1+l_{61}x_5w_1+l_{62}T_2w_1+l_{63}T_3w_1+l_{64}T_4w_1+ l_{65}w_1^2+l_{66}x_1w_2+l_{67}x_3w_2+l_{68}x_5w_2+l_{69}T_2w_2+l_{70}T_3w_2+l_{71}T_4w_2+ l_{72}w_1w_2+l_{73}w_2^2+l_{74}x_6$, and $(l_1,\dots, l_{74})\in k^{74}$ are general. In other words, $f_1, f_2$ are two general degree $2$ homogeneous elements of $A$. Denote by $Q$ the ideal of the ring $A$ generated by the subset $\psi(\hat{I}_{un})$. Let $X= V(Q)\subset \mathbb{P} (1^{8}, 2, 3)$. It is immediate that $X\subset \mathbb{P}(1^8, 2, 3)$ is a codimension $6$ projectively Gorenstein $3$-fold. ****Proposition** 24**. *The ring $A/Q$ is an integral domain.* *Proof.* It is enough to show that the ideal $Q$ is prime. 
For a specific choice of rational values for the parameters $c_i, l_j$, for $1\leq i\leq 6$ and $1\leq j\leq 74$, we checked, using the computer algebra program Macaulay2, that the ideal which was obtained by specialization from $Q$ is a homogeneous, codimension $6$, prime ideal with the right Betti table. ◻ In what follows, we show that the only singularity of $X\subset \mathbb{P}(1^8, 2, 3)$ is a quotient singularity of type $\frac{1}{3}(1,1,2)$. According to the discussion after [@PV Definition 2.7], $X$ belongs to the Mori category. The proof of the following proposition is based on a computation with the computer algebra system Singular [@GPS01] using the strategy described in [@PV Proposition 6.4] and is omitted. ****Proposition** 25**. *Consider $X= V(Q)\subset \mathbb{P} (1^{8},2, 3)$. Denote by $X_{cone}\subset \mathbb{A}^{10}$ the affine cone over $X$. The scheme $X_{cone}$ is smooth outside the vertex of the cone.* ****Remark** 26**. For the computation of the singular locus of the weighted projective space in Proposition [**Proposition** 27](#sing!specsing1){reference-type="ref" reference="sing!specsing1"}, we follow [@IF Section 5]. ****Proposition** 27**. *Consider the singular locus $$\text{Sing}( \mathbb{P}(1^{8}, 2, 3))= \{[0:0:0:0:0:0:0:0:1:0] \}\cup \{ [0:0:0:0:0:0:0:0:0:1]\}$$ of the weighted projective space $\mathbb{P}(1^8, 2,3)$. The intersection of $X$ with $\text{Sing}( \mathbb{P}(1^{8}, 2, 3))$ consists of a unique reduced point which is a quotient singularity of type $\frac{1}{3}(1,1,2)$ for $X$.* *Proof.* We checked with the computer algebra program Macaulay2 that the intersection of $X$ with $\text{Sing}( \mathbb{P}(1^{8}, 2, 3))$ consists of one reduced point. We denote this point by $P$. The point $P$ corresponds to the ideal $(x_i, T_j , w_k)$ for $i\in \{1, 3, 5, 6\}$, $2\leq j\leq 4$, $1\leq k\leq 2$. By Proposition [**Proposition** 25](#Prop!quasismooth1){reference-type="ref" reference="Prop!quasismooth1"}, $X$ is smooth outside $P$. 
Around $P$ we have that $T_1 = 1$. Looking at the equations of $Q$ we can eliminate the variables $x_1, x_3, x_5, T_2, T_3, T_4$, since these variables appear in the set of equations multiplied by $T_1$. This means that $P$ is a quotient singularity of type $\frac{1}{3}(1,1,2)$. ◻ ****Lemma** 28**. *Let $\omega_{\hat{R}/\hat{I}}$ be the canonical module of $\hat{R}/\hat{I}$. It holds that the canonical module $\omega_{\hat{R}/\hat{I}}$ is isomorphic to $\hat{R}/\hat{I}(-3)$.* *Proof.* From the minimal graded free resolution of $\hat{R}/\hat{I}$ as an $\hat{R}$-module, $$0\rightarrow \hat{R}(-6)\rightarrow \hat{R}(-3)^{2}\rightarrow \hat{R},$$ and the fact that the sum of the degrees of the variables is equal to $9$, we conclude that $$\omega_{\hat{R}/\hat{I}}= \hat{R}/\hat{I}(6-9)= \hat{R}/\hat{I}(-3).$$ ◻ ****Proposition** 29**. *The minimal graded resolution of $A/Q$ as an $A$-module is equal to $$\label{eq!resR1}0 \rightarrow C_6 \rightarrow C_5 \rightarrow C_4 \rightarrow C_3 \rightarrow C_2 \rightarrow C_1 \rightarrow C_0 \rightarrow 0$$ where $$\begin{aligned} & C_6 = A(-12), \quad \quad C_5 = A(-8)^{6}\oplus A(-9)^{8} \oplus A(-10)^{6}, \\ & C_4 = A(-6)^{8}\oplus A(-7)^{24}\oplus A(-8)^{24}\oplus A(-9)^{8}, \\ & C_3 = A(-4)^{3}\oplus A(-5)^{24}\oplus A(-6)^{36}\oplus A(-7)^{24}\oplus A(-8)^{3}, \\ & C_2 = A(-3)^{8}\oplus A(-4)^{24}\oplus A(-5)^{24}\oplus A(-6)^{8}, \\ & C_1 = A(-2)^{6}\oplus A(-3)^{8}\oplus A(-4)^{6}, \quad \quad C_0 = A. 
\end{aligned}$$ Moreover, the canonical module of $A/Q$ is isomorphic to $(A/Q)(-1)$ and the Hilbert series of $A/Q$ as a graded $A$-module is equal to $$\frac{1-6t^2+15t^4-20t^6+15t^{8}-6t^{10}+t^{12}}{(1-t)^8(1-t^2)(1-t^3)}.$$* *Proof.* The computation of the minimal graded free resolution of $A/Q$ is based on the method which is described in the proof of [@NP1 Proposition 3.4]. Using the minimal graded free resolution ([\[eq!resR1\]](#eq!resR1){reference-type="ref" reference="eq!resR1"}) of $A/Q$ and the fact that the sum of the degrees of the variables is equal to $13$, we conclude that $$\omega_{A/Q}= A/Q(12-13)= A/Q(-1).$$ The last conclusion of Proposition [**Proposition** 29](#prop!gradres1){reference-type="ref" reference="prop!gradres1"} follows easily from the resolution ([\[eq!resR1\]](#eq!resR1){reference-type="ref" reference="eq!resR1"}). ◻ By Propositions [**Proposition** 25](#Prop!quasismooth1){reference-type="ref" reference="Prop!quasismooth1"}, [**Proposition** 27](#sing!specsing1){reference-type="ref" reference="sing!specsing1"} and [**Proposition** 29](#prop!gradres1){reference-type="ref" reference="prop!gradres1"}, it follows that $X$ is a Fano $3$-fold. ## Construction of Graded Ring Database entry with ID: 9176 {#constr2!9176} In this subsection we sketch the construction of the family of Fano $3$-folds described in Theorem [**Theorem** 22](#constr!9176){reference-type="ref" reference="constr!9176"}. Denote by $k=\mathbb{C}$ the field of complex numbers. Consider the polynomial ring $R= k[x_i, c_i]$, where $1\leq i \leq 6$. Let $R_{un}$ be the ring in Definition [**Definition** 17](#def!mrin1){reference-type="ref" reference="def!mrin1"} and $\hat{R}=k[x_1,\dots, x_6]$ be the polynomial ring in the variables $x_i$. We substitute the variables $(c_1, \dots, c_{6})$ which appear in the definitions of the rings $R$ and $R_{un}$ with a general element of $k^{6}$ (in the sense of being outside a proper Zariski closed subset of $k^{6}$).
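As an aside, the Betti numbers displayed in the resolution ([\[eq!resR1\]](#eq!resR1){reference-type="ref" reference="eq!resR1"}) above can be cross-checked against the stated Hilbert series: the alternating sum of the graded shifts of $C_0,\dots,C_6$ must reproduce the numerator. A minimal sketch in Python (illustrative only, not the Macaulay2 session used in the paper):

```python
# Cross-check of Proposition 29: the alternating sum of the graded shifts
# appearing in the resolution (eq!resR1) must equal the Hilbert numerator.

# Shifts {degree: multiplicity} read off from C_0, ..., C_6.
modules = [
    {0: 1},                             # C_0 = A
    {2: 6, 3: 8, 4: 6},                 # C_1
    {3: 8, 4: 24, 5: 24, 6: 8},         # C_2
    {4: 3, 5: 24, 6: 36, 7: 24, 8: 3},  # C_3
    {6: 8, 7: 24, 8: 24, 9: 8},         # C_4
    {8: 6, 9: 8, 10: 6},                # C_5
    {12: 1},                            # C_6
]

numerator = [0] * 13
for i, shifts in enumerate(modules):
    for s, m in shifts.items():
        numerator[s] += (-1) ** i * m

# Numerator stated above: 1 - 6t^2 + 15t^4 - 20t^6 + 15t^8 - 6t^10 + t^12.
expected = [1, 0, -6, 0, 15, 0, -20, 0, 15, 0, -6, 0, 1]
assert numerator == expected
```

The two lists indeed agree; note also that this numerator factors as $(1-t^2)^6$.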
Let $\hat{I}$ be the ideal of $\hat{R}$ which is obtained from the ideal $I$ and $\hat{ I}_{un}$ the ideal of $\hat{R}[T_1,T_2,T_3,T_4]$ which is obtained from the ideal $I_{un}$ after this substitution. We set $\hat{R}_{un}= \hat{R}[T_1,T_2,T_3,T_4]/\hat{ I}_{un}$. In what follows, $x_1, x_3, x_5$ are variables of degree $2$ and $x_2, x_4, x_6$ are variables of degree $3$. Hence, from the discussion before Proposition [**Proposition** 13](#prop!hom1){reference-type="ref" reference="prop!hom1"} it follows that the degrees of $T_2, T_3, T_4$ are equal to $2$ and the degree of $T_1$ is equal to $4$. According to this grading, the ideals $\hat{I}$ and $\hat{ I}_{un}$ are homogeneous. Due to Theorem [**Theorem** 19](#main!thm1){reference-type="ref" reference="main!thm1"}, $\text{Proj} \ \hat{R}_{un}\subset \mathbb{P} (2^{6}, 3^{3}, 4)$ is a projectively Gorenstein $3$-fold. Let $A= k[w_{1}, w_{2},x_1,x_5, T_2, T_3, T_4,x_2,x_4,x_6]$ be the polynomial ring over $k$ with $w_{1}, w_{2}$ variables of degree $1$ and the other variables with degree noted as above.
Consider the unique $k$-algebra homomorphism $\psi\colon \hat{R}[T_1, T_2, T_3, T_4]\rightarrow A$ such that $\psi(x_1)= x_1$, $\psi(x_2)= x_2$, $\psi(x_3)= f_1$, $\psi(x_4)= x_4$, $\psi(x_5)= x_5$, $\psi(x_6)= x_6$, $\psi(T_1)= f_2$, $\psi(T_2)= T_2$, $\psi(T_3)= T_3$, $\psi(T_4)= T_4$, where $f_1= l_1w_1^2+l_2w_1w_2+l_3w_2^2+l_4x_1+l_5x_5+l_6T_2+l_7T_3+l_8T_4$, $f_2=l_9w_1^4+l_{10}w_1^3w_2+l_{11}w_1^2w_2^2+l_{12}w_1w_2^3+l_{13}w_2^4+l_{14}w_1^2x_1+l_{15}w_1w_2x_1+l_{16}w_2^2x_1+l_{17}x_1^2+l_{18}w_1^2x_5+l_{19}w_1w_2x_5+l_{20}w_2^2x_5+l_{21}x_1x_5+l_{22}x_5^2+l_{23}w_1^2T_2+l_{24}w_1w_2T_2+l_{25}w_2^2T_2+l_{26}x_1T_2+l_{27}x_5T_2+l_{28}T_2^2+l_{29}w_1^2T_3+l_{30}w_1w_2T_3+l_{31}w_2^2T_3+l_{32}x_1T_3+l_{33}x_5T_3+l_{34}T_2T_3+l_{35}T_3^2+l_{36}w_1^2T_4+l_{37}w_1w_2T_4+l_{38}w_2^2T_4+l_{39}x_1T_4+l_{40}x_5T_4+l_{41}T_2T_4+l_{42}T_3T_4+l_{43}T_4^2+l_{44}w_1x_2+l_{45}w_2x_2+l_{46}w_1x_4+l_{47}w_2x_4+l_{48}w_1x_6+l_{49}w_2x_6$, and $(l_1,\dots, l_{49})\in k^{49}$ are general. In other words, $f_1$ is a general degree $2$ homogeneous element of $A$ and $f_2$ is a general degree $4$ homogeneous element of $A$. Denote by $Q$ the ideal of the ring $A$ generated by the subset $\psi(\hat{I}_{un})$. Let $X= V(Q)\subset \mathbb{P} (1^{2}, 2^5, 3^3)$. It is immediate that $X\subset \mathbb{P}(1^{2}, 2^5, 3^3)$ is a codimension $6$ projectively Gorenstein $3$-fold. ****Proposition** 30**. *The ring $A/Q$ is an integral domain.* *Proof.* It is enough to show that the ideal $Q$ is prime. For a specific choice of rational values for the parameters $c_i, l_j$, for $1\leq i\leq 6$ and $1\leq j\leq 49$, we checked, using the computer algebra program Macaulay2, that the ideal which was obtained by specialization from $Q$ is a homogeneous, codimension $6$, prime ideal with the right Betti table. ◻ In what follows, we show that the only singularities of $X\subset \mathbb{P}(1^2, 2^5, 3^3)$ are eight quotient singularities of type $\frac{1}{2}(1,1,1)$.
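The singular loci used here and in the previous subsection can be enumerated mechanically: following [@IF Section 5], the weighted projective space $\mathbb{P}(w_0,\dots,w_n)$ is singular along the coordinate stratum where only coordinates whose weights share a common prime factor are nonzero. A hedged sketch (the helper `singular_strata` is an illustrative name, not code from the paper):

```python
def prime_factors(n):
    """Return the set of primes dividing n (empty for n = 1)."""
    out, p = set(), 2
    while p * p <= n:
        if n % p == 0:
            out.add(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        out.add(n)
    return out

def singular_strata(weights):
    """For each prime p dividing some weight of P(w_0,...,w_n), return the
    coordinate indices with p | w_i and the projective dimension of the
    corresponding singular stratum."""
    primes = set().union(*(prime_factors(w) for w in weights))
    return {p: (idx, len(idx) - 1)
            for p in sorted(primes)
            for idx in [[i for i, w in enumerate(weights) if w % p == 0]]}

# P(1^8, 2, 3): two isolated singular points, as in Proposition 27.
assert singular_strata([1]*8 + [2, 3]) == {2: ([8], 0), 3: ([9], 0)}
# P(1^2, 2^5, 3^3): a P^4 of index-2 points and a P^2 of index-3 points,
# matching the strata F_1 and F_2 described below.
assert singular_strata([1, 1] + [2]*5 + [3]*3) == {
    2: ([2, 3, 4, 5, 6], 4), 3: ([7, 8, 9], 2)}
```

The actual singular points of $X$ are then found by intersecting $X$ with these strata, which is the Macaulay2 computation reported in the propositions.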
According to the discussion after [@PV Definition 2.7], $X$ belongs to the Mori category. The proof of the following proposition is based on a computation with the computer algebra system Singular [@GPS01] using the strategy described in [@PV Proposition 6.4] and is omitted. ****Proposition** 31**. *Consider $X= V(Q)\subset \mathbb{P} (1^2, 2^5, 3^3)$. Denote by $X_{cone}\subset \mathbb{A}^{10}$ the affine cone over $X$. The scheme $X_{cone}$ is smooth outside the vertex of the cone.* ****Remark** 32**. For the computation of the singular locus of the weighted projective space in Proposition [**Proposition** 33](#sing!specsing2){reference-type="ref" reference="sing!specsing2"}, we follow [@IF Section 5]. ****Proposition** 33**. *Consider the singular locus $$\text{Sing}( \mathbb{P}(1^{2}, 2^5, 3^3))= F_1\cup F_2$$ of the weighted projective space $\mathbb{P}(1^{2}, 2^5, 3^3)$, where $$F_1= \{[0:0:a:b:c:d:e:0:0:0]: [a:b:c:d:e]\in \mathbb{P}^4 \}$$ and $$F_2=\{ [0:0:0:0:0:0:0:a:b:c]: [a:b:c]\in \mathbb{P}^2\}.$$ The intersection of $X$ with $\text{Sing}( \mathbb{P}(1^{2}, 2^5, 3^3))$ consists of eight reduced points which are quotient singularities of type $\frac{1}{2}(1,1,1)$ for $X$.* *Proof.* We proved with the computer algebra program Macaulay2 that the intersection of $X$ with $\text{Sing}( \mathbb{P}(1^{2}, 2^5, 3^3))$ consists of eight reduced points. Following the strategy of the proof of Proposition [**Proposition** 27](#sing!specsing1){reference-type="ref" reference="sing!specsing1"}, we checked that each of these points is a quotient singularity of type $\frac{1}{2}(1,1,1)$. ◻ ****Lemma** 34**. *Let $\omega_{\hat{R}/\hat{I}}$ be the canonical module of $\hat{R}/\hat{I}$.
It holds that the canonical module $\omega_{\hat{R}/\hat{I}}$ is isomorphic to $\hat{R}/\hat{I}(-5)$.* *Proof.* From the minimal graded free resolution of $\hat{R}/\hat{I}$ as an $\hat{R}$-module $$0\rightarrow \hat{R}(-10)\rightarrow \hat{R}(-5)^{2}\rightarrow \hat{R}$$ and the fact that the sum of the degrees of the variables is equal to $15$, we conclude that $$\omega_{\hat{R}/\hat{I}}= \hat{R}/\hat{I}(10-15)= \hat{R}/\hat{I}(-5).$$ ◻ ****Proposition** 35**. *The minimal graded resolution of $A/Q$ as an $A$-module is equal to $$\label{eq!resR2}0 \rightarrow C_6 \rightarrow C_5 \rightarrow C_4 \rightarrow C_3 \rightarrow C_2 \rightarrow C_1 \rightarrow C_0 \rightarrow 0$$ where $$\begin{aligned} & C_6 = A(-20), \quad \quad C_5 = A(-14)^{6}\oplus A(-15)^{8} \oplus A(-16)^{6}, \\ & C_4 = A(-11)^{8}\oplus A(-12)^{24}\oplus A(-13)^{24}\oplus A(-14)^{8}, \\ & C_3 = A(-8)^{3}\oplus A(-9)^{24}\oplus A(-10)^{36}\oplus A(-11)^{24}\oplus A(-12)^{3}, \\ & C_2 = A(-6)^{8}\oplus A(-7)^{24}\oplus A(-8)^{24}\oplus A(-9)^{8}, \\ & C_1 = A(-4)^{6}\oplus A(-5)^{8}\oplus A(-6)^{6}, \quad \quad C_0 = A.
\end{aligned}$$ Moreover, the canonical module of $A/Q$ is isomorphic to $(A/Q)(-1)$ and the Hilbert series of $A/Q$ as a graded $A$-module is equal to $$\frac{1-6t^4-8t^5+2t^6+24t^7+21t^8-16t^9-36t^{10}-16t^{11}+21t^{12}+24t^{13}+2t^{14}-8t^{15}-6t^{16}+t^{20}}{(1-t)^2(1-t^2)^5(1-t^3)^3}.$$* *Proof.* The computation of the minimal graded free resolution of $A/Q$ is based on the method which is described in the proof of [@NP1 Proposition 3.4]. Using the minimal graded free resolution ([\[eq!resR2\]](#eq!resR2){reference-type="ref" reference="eq!resR2"}) of $A/Q$ and the fact that the sum of the degrees of the variables is equal to $21$, we conclude that $$\omega_{A/Q}= A/Q(20-21)= A/Q(-1).$$ The last conclusion of Proposition [**Proposition** 35](#prop!gradres2){reference-type="ref" reference="prop!gradres2"} follows easily from the resolution ([\[eq!resR2\]](#eq!resR2){reference-type="ref" reference="eq!resR2"}). ◻ By Propositions [**Proposition** 31](#Prop!quasismooth2){reference-type="ref" reference="Prop!quasismooth2"}, [**Proposition** 33](#sing!specsing2){reference-type="ref" reference="sing!specsing2"} and [**Proposition** 35](#prop!gradres2){reference-type="ref" reference="prop!gradres2"}, it follows that $X$ is a Fano $3$-fold. ## Construction of Graded Ring Database entry with ID: 24198 {#constr3!24198} In this final subsection, we sketch the construction of the family of Fano $3$-folds which is described in Theorem [**Theorem** 23](#constr!24198){reference-type="ref" reference="constr!24198"}. Denote by $k=\mathbb{C}$ the field of complex numbers. Consider the polynomial ring $R= k[x_i, c_i]$, where $1\leq i \leq 6$.
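The same consistency check applies to Proposition 35: the alternating sum of the shifts in ([\[eq!resR2\]](#eq!resR2){reference-type="ref" reference="eq!resR2"}) must give the Hilbert numerator, which moreover is palindromic, reflecting the Gorenstein duality $C_i \leftrightarrow C_{6-i}$ with top shift $20$. An illustrative Python sketch (not the paper's Macaulay2 code):

```python
# Cross-check of Proposition 35: shifts {degree: multiplicity} from (eq!resR2).
modules = [
    {0: 1},                                # C_0 = A
    {4: 6, 5: 8, 6: 6},                    # C_1
    {6: 8, 7: 24, 8: 24, 9: 8},            # C_2
    {8: 3, 9: 24, 10: 36, 11: 24, 12: 3},  # C_3
    {11: 8, 12: 24, 13: 24, 14: 8},        # C_4
    {14: 6, 15: 8, 16: 6},                 # C_5
    {20: 1},                               # C_6
]

numerator = [0] * 21
for i, shifts in enumerate(modules):
    for s, m in shifts.items():
        numerator[s] += (-1) ** i * m

# Numerator stated in Proposition 35, as a coefficient list in t^0, ..., t^20.
expected = [1, 0, 0, 0, -6, -8, 2, 24, 21, -16, -36,
            -16, 21, 24, 2, -8, -6, 0, 0, 0, 1]
assert numerator == expected
# Gorenstein symmetry: the numerator is palindromic.
assert numerator == numerator[::-1]
```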
Let $R_{un}$ be the ring in Definition [**Definition** 17](#def!mrin1){reference-type="ref" reference="def!mrin1"} and $\hat{R}=k[x_1,\dots, x_6,c_3,c_6]$ be the polynomial ring in the variables $x_i$ and $c_3,c_6$. We substitute the variables $(c_1,c_2,c_4,c_5)$ which appear in the definitions of the rings $R$ and $R_{un}$ with a general element of $k^{4}$ (in the sense of being outside a proper Zariski closed subset of $k^{4}$). Let $\hat{I}$ be the ideal of $\hat{R}$ which is obtained from the ideal $I$ and $\hat{ I}_{un}$ the ideal of $\hat{R}[T_1,T_2,T_3,T_4]$ which is obtained from the ideal $I_{un}$ after this substitution. We set $\hat{R}_{un}= \hat{R}[T_1,T_2,T_3,T_4]/\hat{ I}_{un}$. In what follows, $x_1, x_3, x_5, x_6, c_3, c_6$ are variables of degree $1$ and $x_2, x_4$ are variables of degree $2$. Hence, from the discussion before Proposition [**Proposition** 13](#prop!hom1){reference-type="ref" reference="prop!hom1"} it follows that the degree of $T_1$ is equal to $3$, the degrees of $T_2, T_3$ are equal to $2$ and the degree of $T_4$ is equal to $1$. According to this grading, the ideals $\hat{I}$ and $\hat{ I}_{un}$ are homogeneous. Due to Theorem [**Theorem** 19](#main!thm1){reference-type="ref" reference="main!thm1"}, $\text{Proj} \ \hat{R}_{un}\subset \mathbb{P} (1^{7}, 2^{4}, 3)$ is a projectively Gorenstein $5$-fold. Let $A= k[x_{1}, x_{3},x_5,x_6, c_3, c_6, x_2,x_4,T_3,T_1]$ be the polynomial ring with variables of degree noted as above.
Consider the unique $k$-algebra homomorphism $\psi\colon \hat{R}[T_1, T_2, T_3, T_4]\rightarrow A$ such that $\psi(x_1)= x_1$, $\psi(x_2)= x_2$, $\psi(x_3)= x_3$, $\psi(x_4)= x_4$, $\psi(x_5)= x_5$, $\psi(x_6)= x_6$, $\psi(c_3)= c_3$, $\psi(c_6)= c_6$, $\psi(T_1)= T_1$, $\psi(T_2)= f_1$, $\psi(T_3)= T_3$, $\psi(T_4)= f_2$, where $f_1=l_1x_1^2+l_2x_1x_3+l_3x_3^2+l_4x_1x_5+l_5x_3x_5+l_6x_5^2+l_7x_1x_6+l_8x_3x_6+l_9x_5x_6+l_{10}x_6^2+l_{11}x_1c_3+l_{12}x_3c_3+l_{13}x_5c_3+l_{14}x_6c_3+l_{15}c_3^2+l_{16}x_1c_6+l_{17}x_3c_6+l_{18}x_5c_6+l_{19}x_6c_6+l_{20}c_3c_6+l_{21}c_6^2+l_{22}x_2+l_{23}x_4+l_{24}T_3$, $f_2=l_{25}x_1+l_{26}x_3+l_{27}x_5+l_{28}x_6+l_{29}c_3+l_{30}c_6$, and $(l_1,\dots, l_{30})\in k^{30}$ are general. In other words, $f_1$ is a general degree $2$ homogeneous element of $A$ and $f_2$ is a general degree $1$ homogeneous element of $A$. Denote by $Q$ the ideal of the ring $A$ generated by the subset $\psi(\hat{I}_{un})$. Let $X= V(Q)\subset \mathbb{P} (1^{6}, 2^3, 3)$. It is immediate that $X\subset \mathbb{P}(1^{6}, 2^3, 3)$ is a codimension $6$ projectively Gorenstein $3$-fold. ****Proposition** 36**. *The ring $A/Q$ is an integral domain.* *Proof.* It is enough to show that the ideal $Q$ is prime. For a specific choice of rational values for the parameters $c_i, l_j$, for $i\in \{1,2,4,5\}$ and $1\leq j\leq 30$, we checked, using the computer algebra program Macaulay2, that the ideal which was obtained by specialization from $Q$ is a homogeneous, codimension $6$, prime ideal with the right Betti table. ◻ In what follows, we show that the only singularities of $X\subset \mathbb{P}(1^6, 2^3, 3)$ are two quotient singularities of type $\frac{1}{2}(1,1,1)$ and one quotient singularity of type $\frac{1}{3}(1,1,2)$. According to the discussion after [@PV Definition 2.7], $X$ belongs to the Mori category.
The proof of the following proposition is based on a computation with the computer algebra system Singular [@GPS01] using the strategy described in [@PV Proposition 6.4] and is omitted. ****Proposition** 37**. *Consider $X= V(Q)\subset \mathbb{P} (1^6, 2^3, 3)$. Denote by $X_{cone}\subset \mathbb{A}^{10}$ the affine cone over $X$. The scheme $X_{cone}$ is smooth outside the vertex of the cone.* ****Remark** 38**. For the computation of the singular locus of the weighted projective space in Proposition [**Proposition** 39](#sing!specsing3){reference-type="ref" reference="sing!specsing3"}, we follow [@IF Section 5]. ****Proposition** 39**. *Consider the singular locus $$\text{Sing}( \mathbb{P}(1^{6}, 2^3, 3))= F_1\cup \{ [0:0:0:0:0:0:0:0:0:1]\}$$ of the weighted projective space $\mathbb{P}(1^{6}, 2^3, 3)$, where $$F_1=\{ [0:0:0:0:0:0:a:b:c:0]: [a:b:c]\in \mathbb{P}^2\}.$$ The intersection of $X$ with $\text{Sing}( \mathbb{P}(1^{6}, 2^3, 3))$ consists of two reduced points which are quotient singularities of type $\frac{1}{2}(1,1,1)$ and one reduced point which is a quotient singularity of type $\frac{1}{3}(1,1,2)$ for $X$.* *Proof.* We proved with the computer algebra program Macaulay2 that the intersection of $X$ with $\text{Sing}( \mathbb{P}(1^{6}, 2^3, 3))$ consists of three reduced points. Following the strategy of the proof of Proposition [**Proposition** 27](#sing!specsing1){reference-type="ref" reference="sing!specsing1"}, we checked that two of these points are quotient singularities of type $\frac{1}{2}(1,1,1)$ and the third point is a quotient singularity of type $\frac{1}{3}(1,1,2)$ for $X$. ◻ ****Lemma** 40**. *Let $\omega_{\hat{R}/\hat{I}}$ be the canonical module of $\hat{R}/\hat{I}$.
It holds that the canonical module $\omega_{\hat{R}/\hat{I}}$ is isomorphic to $\hat{R}/\hat{I}(-4)$.* *Proof.* From the minimal graded free resolution of $\hat{R}/\hat{I}$ as an $\hat{R}$-module $$0\rightarrow \hat{R}(-6)\rightarrow \hat{R}(-3)^{2}\rightarrow \hat{R}$$ and the fact that the sum of the degrees of the variables is equal to $10$, we conclude that $$\omega_{\hat{R}/\hat{I}}= \hat{R}/\hat{I}(6-10)= \hat{R}/\hat{I}(-4).$$ ◻ ****Proposition** 41**. *The minimal graded resolution of $A/Q$ as an $A$-module is equal to $$\label{eq!resR3}0 \rightarrow C_6 \rightarrow C_5 \rightarrow C_4 \rightarrow C_3 \rightarrow C_2 \rightarrow C_1 \rightarrow C_0 \rightarrow 0$$ where $$\begin{aligned} & C_6 = A(-14), \quad \quad C_5 = A(-9)^{2}\oplus A(-10)^{7} \oplus A(-11)^{10} \oplus A(-12)^{1}, \\ & C_4 = A(-7)^{4}\oplus A(-8)^{20}\oplus A(-9)^{28}\oplus A(-10)^{12}, \\ & C_3 = A(-5)^{2}\oplus A(-6)^{25}\oplus A(-7)^{36}\oplus A(-8)^{25}\oplus A(-9)^{2}, \\ & C_2 = A(-4)^{12}\oplus A(-5)^{28}\oplus A(-6)^{20}\oplus A(-7)^{4}, \\ & C_1 = A(-2)^{1}\oplus A(-3)^{10}\oplus A(-4)^{7}\oplus A(-5)^{2}, \quad \quad C_0 = A.
\end{aligned}$$ Moreover, the canonical module of $A/Q$ is isomorphic to $(A/Q)(-1)$ and the Hilbert series of $A/Q$ as a graded $A$-module is equal to $$\frac{1-t^2-10t^3+5t^4+24t^5-5t^6-28t^7-5t^8+24t^9+5t^{10}-10t^{11}-t^{12}+t^{14}}{(1-t)^6(1-t^2)^3(1-t^3)}.$$* *Proof.* The computation of the minimal graded free resolution of $A/Q$ is based on the method which is described in the proof of [@NP1 Proposition 3.4]. Using the minimal graded free resolution ([\[eq!resR3\]](#eq!resR3){reference-type="ref" reference="eq!resR3"}) of $A/Q$ and the fact that the sum of the degrees of the variables is equal to $15$, we conclude that $$\omega_{A/Q}= A/Q(14-15)= A/Q(-1).$$ The last conclusion of Proposition [**Proposition** 41](#prop!gradres3){reference-type="ref" reference="prop!gradres3"} follows easily from the resolution ([\[eq!resR3\]](#eq!resR3){reference-type="ref" reference="eq!resR3"}). ◻ By Propositions [**Proposition** 37](#Prop!quasismooth3){reference-type="ref" reference="Prop!quasismooth3"}, [**Proposition** 39](#sing!specsing3){reference-type="ref" reference="sing!specsing3"} and [**Proposition** 41](#prop!gradres3){reference-type="ref" reference="prop!gradres3"}, it follows that $X$ is a Fano $3$-fold. # Acknowledgements {#sec!acknowledgements .unnumbered} I would like to thank Stavros Papadakis for important discussions and suggestions which have improved the present paper. I benefited from experiments with the computer algebra programs Macaulay2 [@GS] and Singular [@GPS01]. Part of this work is contained in my PhD thesis [@PV1], carried out at the University of Ioannina, Greece. This work was financially supported by Horizon Europe ERC Grant number: 101045750 / Project acronym: HodgeGeoComb, with principal investigator Karim Adiprasito, whom I warmly thank.
Altınok, S. (1998). Graded rings corresponding to polarised K3 surfaces and Q-Fano 3-folds. PhD dissertation. University of Warwick, Coventry, UK.
Altınok, S., Brown, G., Reid, M. (2002).
Fano 3-folds, K3 surfaces and graded rings. Topology and geometry: commemorating SISTAG. Contemp. Math. Amer. Math. Soc. 314: 25--53. Providence, RI.
Böhm, J., Papadakis, S.A. (2012). On the structure of Stanley-Reisner rings associated to cyclic polytopes. Osaka J. Math. 49: 81--100.
Böhm, J., Papadakis, S.A. (2013). Stellar subdivisions and Stanley-Reisner rings of Gorenstein complexes. Australas. J. Combin. 55: 235--247.
Böhm, J., Papadakis, S.A. (2015). Bounds for the Betti numbers of successive stellar subdivisions of a simplex. Hokkaido Math. J. 44: 341--364.
Böhm, J., Papadakis, S.A. (2020). Weak Lefschetz Property and Stellar Subdivisions of Gorenstein Complexes. Australas. J. Combin. 76: 266--287.
Brown, G., Reid, M. (2017). Diptych varieties. II: Apolar varieties. Higher dimensional algebraic geometry---in honour of Professor Yujiro Kawamata's sixtieth birthday. Adv. Stud. Pure Math. 74: 41--72.
Bruns, W., Herzog, J. (1993). Cohen-Macaulay rings. Cambridge Studies in Advanced Mathematics, 39. Cambridge: Cambridge University Press.
Brown, G., Kasprzyk, A. M. (2002). Graded ring database. Online searchable database. Available at: http://grdb.co.uk/.
Brown, G., Georgiadis, K. (2017). Polarized Calabi-Yau 3-folds in codimension 4. Math. Nachr. 290: 710--725.
Brown, G., Kerber, M., Reid, M. (2012). Fano 3-folds in codimension 4, Tom and Jerry, Part I. Compos. Math. 148: 1171--1194.
Brown, G., Suzuki, K. (2007). Fano 3-folds with divisible anticanonical class. Manuscripta Math. 123: 37--51.
Brown, G., Suzuki, K. (2007). Computing certain Fano 3-folds. Japan J. Indust. Appl. Math. 24: 241--250.
Brown, G., Kasprzyk, A. M. (2022). Kawamata boundedness for Fano threefolds and the graded ring database. arXiv preprint, available at <https://arxiv.org/abs/2201.07178>.
Campo, L. (2020). Sarkisov links for index 1 Fano 3-folds in codimension 4. arXiv preprint, available at <https://arxiv.org/abs/2011.12209>.
Campo, L. (2021). Fano 3-folds and double covers by half elephants.
arXiv preprint, available at <https://arxiv.org/abs/2103.17219>.
Corti, A., Mella, M. (2004). Birational geometry of terminal quartic 3-folds I. Amer. J. Math. 126: 739--761.
Corti, A., Pukhlikov, A., Reid, M. (2000). Fano 3-fold hypersurfaces. Explicit birational geometry of 3-folds. London Math. Soc. Lecture Note Ser., 281. Cambridge: Cambridge Univ. Press, 175--258.
Decker, W., Greuel, G.-M., Pfister, G., Schönemann, H. (2019). Singular 4-1-2 --- A computer algebra system for polynomial computations. Available at: http://www.singular.uni-kl.de.
Eisenbud, D. (1995). Commutative algebra with a view toward algebraic geometry. Graduate Texts in Mathematics, 150. Springer-Verlag.
Grayson, D., Stillman, M. Macaulay2, a software system for research in algebraic geometry. Available at: https://macaulay2.com/.
Iano-Fletcher, A. R. (2000). Working with weighted complete intersections. Explicit birational geometry of 3-folds. London Math. Soc. Lecture Note Ser., 281. Cambridge: Cambridge University Press, 101--173.
Kustin, A., Miller, M. (1983). Constructing big Gorenstein ideals from small ones. J. Algebra 85: 303--322.
Kustin, A., Miller, M. (1980). Algebra structures on minimal resolutions of Gorenstein rings of embedding codimension four. Math. Z. 173: 171--184.
Kustin, A., Miller, M. (1981). A general resolution for grade four Gorenstein ideals. Manuscripta Math. 35: 221--269.
Kustin, A., Miller, M. (1982). Structure theory for a class of grade four Gorenstein ideals. Trans. Amer. Math. Soc. 270: 287--307.
Kustin, A., Miller, M. (1984). Deformation and linkage of Gorenstein algebras. Trans. Amer. Math. Soc. 284: 501--534.
Kustin, A., Miller, M. (1985). Classification of the Tor-algebras of codimension four Gorenstein local rings. Math. Z. 190: 341--355.
Liedtke, C., Papadakis, S.A. (2010). Birational modifications of surfaces via unprojections. J. Algebra 323: 2510--2519.
Neves, J., Papadakis, S.A. (2009). A construction of numerical Campedelli surfaces with torsion $\mathbb{Z}/6$. Trans. Amer. Math. Soc.
361: 4999--5021.
Neves, J., Papadakis, S.A. (2013). Parallel Kustin-Miller unprojection with an application to Calabi--Yau geometry. Proc. Lond. Math. Soc. (3) 106: 203--223.
Papadakis, S. A. (2001). Gorenstein rings and Kustin-Miller unprojection. PhD dissertation. University of Warwick, Coventry, UK. Available at: https://www.math.tecnico.ulisboa.pt/$\sim$papadak/.
Papadakis, S.A. (2004). Kustin-Miller unprojection with complexes. J. Algebraic Geom. 13: 249--268.
Papadakis, S.A. (2006). Type II unprojection. J. Algebraic Geom. 15: 399--414.
Papadakis, S.A. (2007). Towards a general theory of unprojection. J. Math. Kyoto Univ. 47: 579--598.
Papadakis, S.A., Reid, M. (2004). Kustin-Miller unprojection without complexes. J. Algebraic Geom. 13: 563--577.
Petrotou, V. (2022). Tom & Jerry triples with an application to Fano 3-folds. Commun. Algebra 50: 3960--3977.
Petrotou, V. (2022). Unprojection Theory, Applications to Algebraic Geometry and Anisotropy of Simplicial Spheres. PhD dissertation. University of Ioannina, Greece. Available at: https://sites.google.com/view/vpetrotou.
Reid, M. (2000). Graded Rings and Birational Geometry. In: Ohno, K., ed. Proc. of algebraic geometry symposium (Kinosaki, Oct 2000): 1--72. Available at: [https://www.maths.warwick.ac.uk/$\sim$miles/3folds](https://www.maths.warwick.ac.uk/~miles/3folds).
Taylor, R. (2020). Type II unprojections, Fano threefolds and codimension four constructions. PhD thesis, University of Warwick.
{ "id": "2309.03334", "title": "The 4-Intersection Unprojection Format", "authors": "Vasiliki Petrotou", "categories": "math.AG math.AC", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | A stabilizer based on the forwarding technique is proposed for semilinear infinite-dimensional systems in cascade form. Sufficient conditions for local exponential stability and global asymptotic stability of the closed loop are derived. Results for the problem of local set-point output regulation are also obtained. Finally, an application to a system consisting of a flexible beam attached to a rotating joint is proposed. author: - "Nicolas Vanspranghe[^1]" - "Lucas Brivadis[^2]" - Lassi Paunonen bibliography: - forwarding-cdc-hal.bib title: "On forwarding techniques for stabilization and set-point output regulation of semilinear infinite-dimensional systems[^3]" --- # Introduction The stabilization of infinite-dimensional systems presenting a cascade form is a modern challenge in control theory and has been investigated by many researchers in recent years. In particular, a strong focus has been placed on cascades of partial differential equations (PDEs) with ordinary differential equations (ODEs) [@MarAst22conf; @BalMar23; @DEUTSCHER2021109338], or of ODEs with PDEs [@marx2021forwarding; @astolfi:hal-04007322], or of PDEs with other PDEs [@auriol2; @auriol3]. Approaches known as "backstepping" or "forwarding" (each relying on a different paradigm, and adapted to different cascade structures) have been developed to tackle the issue. In this paper, we focus on the cascade of two infinite-dimensional systems where the first one is semilinear and control-affine, and the second one is linear, neutrally stable (Lyapunov stable but not asymptotically stable) and driven by a semilinear output of the first one. This type of system arises in two contexts in practice: in the stabilization of cascade systems where the second system is linear and driven by an output of the first one, and in the output regulation by means of integral action of semilinear infinite-dimensional systems with potentially semilinear output.
Concerning the output regulation problem, many recent works have proposed to extend the linear finite-dimensional theory of [@1101104] to linear infinite-dimensional systems by means of an infinite-dimensional internal model principle [@Pohjolainen; @bymes2000output; @rebarber2003internal; @paunonen2010internal; @paunonen2015controller; @doi:10.1137/17M1136407]. The extension of these works to the infinite-dimensional abstract nonlinear context is still an open problem, although some recent progress has been made in specific contexts [@logemann2000time; @natarajan2016approximate]. We propose to tackle the cascade stabilization problem by means of the forwarding approach that was originally developed for cascades of nonlinear ODEs in [@MazPra96; @PraOrt01]. This approach is well-suited for the class of cascades under consideration in the present paper, making use of the internal stability of the first subsystem. It could also allow us to extend our results to systems with saturated input as in [@marx2021forwarding], and it does not rely on any specific hyperbolic structure of the PDE (systems are considered in an abstract framework). The extension of this strategy to infinite-dimensional systems is an active research area, especially for linear systems (or for linear systems with saturated or cone-bounded nonlinearities applied to the input) [@terrand2019adding; @MarAst22conf; @marx2021forwarding; @Nat21]. In order to deal with nonlinear infinite-dimensional systems, the theory developed in [@mattia-forwarding] for finite-dimensional systems by means of incremental stability is instrumental, and has recently been adapted in the context of infinite-dimensional output regulation in [@VanBri23].
The present paper is a continuation of the work proposed in [@VanBri23]: while the class of systems under consideration is less generic (here, we restrict ourselves to semilinear systems), the results obtained here are stronger than the ones presented in [@VanBri23] regarding the following points: the input-to-state stability assumption on the first system is weaker; the input acts via a linear operator that may depend on the state; the output of the first system may be nonlinear; the dynamics of the first system is not assumed to be globally contracting; the dynamics of the second system is more general than a pure integrator. These extensions require modifying the controller designed by forwarding, and in particular improving the result of existence of an invariant graph. Finally, we apply our results to a system which consists of a flexible beam attached to a rotating joint. #### Organization of the paper {#organization-of-the-paper .unnumbered} In Section [2](#sec:stab){reference-type="ref" reference="sec:stab"}, we tackle the cascade stabilization problem. The controller we propose ensures local exponential stability and global asymptotic stability under the assumption of existence of some invariant graph. Sufficient conditions for this are derived in Section [3](#sec:syl){reference-type="ref" reference="sec:syl"}. We apply our results in the context of output regulation in Section [4](#sec:reg){reference-type="ref" reference="sec:reg"}, and to an example of a flexible structure in Section [5](#sec:flex){reference-type="ref" reference="sec:flex"}. #### Notation {#notation .unnumbered} The norm of a given normed vector space $E$ is denoted by $\|\cdot \|_E$. If $x \in E$ and $r > 0$, $\mathcal{B}_E(x, r)$ denotes the open ball of $E$ centered at $x$ and of radius $r$. Let $E_1$ and $E_2$ be normed vector spaces. Then, $\mathcal{L}(E_1, E_2)$ denotes the space of bounded (i.e., continuous) linear operators from $E_1$ to $E_2$ equipped with the operator norm.
We say that a map $f : E_1 \to E_2$ is Fréchet differentiable at $x \in E_1$ if there exists a (necessarily unique) linear map $\mathrm{d}f(x) \in \mathcal{L}(E_1, E_2)$ such that $f(x + h) = f(x) + \mathrm{d}f(x) h + o(\|h\|_{E_1})$ as $\|h\|_{E_1} \to 0$. Also, $\mathcal{C}^1(E_1, E_2)$ denotes the space of continuously Fréchet differentiable maps from $E_1$ to $E_2$, i.e., those $f$ for which the Fréchet differential $\mathrm{d}f$ is continuous from $E_1$ to $\mathcal{L}(E_1, E_2)$. By a *locally* Lipschitz continuous map $f : E_1 \to E_2$, we mean a map that is Lipschitz continuous on every bounded subset of $E_1$. The scalar product of a given Hilbert space $E$ is written $\langle \cdot, \cdot \rangle_E$. If $E_1$ and $E_2$ are Hilbert spaces, each operator $L \in \mathcal{L}(E_1, E_2)$ possesses an adjoint $L^* \in \mathcal{L}(E_2, E_1)$ that is uniquely defined by $\langle Lx, y \rangle_{E_2} = \langle x, L^*y \rangle_{E_1}$ for all $x \in E_1$ and $y \in E_2$. Integrals of functions taking values in Banach spaces are understood in the sense of Bochner. # Stabilization of cascade systems {#sec:stab} Let $A : \mathcal{D}(A) \to X$ be the infinitesimal generator of a strongly continuous semigroup $\{e^{tA} \}_{t \geqslant 0}$ on a real Hilbert space $X$. Its domain $\mathcal{D}(A)$ is equipped with the graph norm. Given input and output spaces $U$ and $Y$, both of which are assumed to be real Hilbert spaces as well, consider the control system [\[eq:cascade-x\]]{#eq:cascade-x label="eq:cascade-x"} $$\begin{aligned} \label{eq:open-loop-x} &\dot{x} = Ax + f(x) + g(x)u, \\ \label{eq:z-sub} &\dot{z} = Sz + Cx + h(x),\end{aligned}$$ where: - $f:X\to X$ is locally Lipschitz continuous, $f(0) = 0$; - $g : X \to \mathcal{L}(U, X)$ is locally Lipschitz continuous; - $h : X \to Y$ is locally Lipschitz continuous, $h(0) = 0$; - $C \in \mathcal{L}(\mathcal{D}(A), Y)$, i.e., $C$ is $A$-bounded; - $S \in \mathcal{L}(Y)$ is skew-adjoint, i.e., $S^* = - S$. 
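As an elementary finite-dimensional analogue of the cascade just introduced (all numerical values below are illustrative choices, not from the paper), take scalar dynamics $\dot x = Ax + Bu$ and $\dot z = Sz + Cx$ with $A$ Hurwitz and $S = 0$, so that the $z$-subsystem is a neutrally stable integrator. In this linear special case the invariant graph is $z = Mx$ with $M$ solving the Sylvester-type equation $MA = SM + C$, and the forwarding feedback $u = B^\top M^\top(z - Mx)$ stabilizes the pair. A minimal simulation sketch:

```python
# Scalar toy cascade (illustrative values, not from the paper):
#   x' = A*x + B*u,   z' = S*z + C*x,  with A Hurwitz and S = 0 (neutrally stable).
A, B, C, S = -1.0, 1.0, 1.0, 0.0

# Invariant graph z = M*x: M solves M*A = S*M + C, so here M = C/A = -1.
M = C / A
assert abs(M * A - (S * M + C)) < 1e-12

# Forwarding feedback u = B*M*(z - M*x), simulated by explicit Euler.
dt, steps = 1e-3, 20_000
x, z = 1.0, 1.0
for _ in range(steps):
    u = B * M * (z - M * x)
    x, z = x + dt * (A * x + B * u), z + dt * (S * z + C * x)

# The closed-loop matrix [[-2, -1], [1, 0]] has a double eigenvalue at -1,
# so both states should have decayed essentially to zero by t = 20.
assert abs(x) + abs(z) < 1e-3
```

The abstract feedback studied in this section reduces to exactly this expression when $X$, $Y$, and $U$ are one-dimensional and all maps are linear.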
The controlled $x$-subsystem is governed by a semilinear equation and has a (possibly nonlinear) output that is fed to the linear $z$-subsystem, which we wish to stabilize. We further assume that the $x$-component is stable or can be pre-stabilized in a way that ensures the following properties. *Assumption 1* (Semiglobal Input-to-State Stability (ISS) of the $x$-subsystem). There exists a Lyapunov functional $V \in \mathcal{C}^1(X, \mathbb{R})$ that is quadratic-like, i.e., there exist $m_1, m_2 > 0$ such that $$\label{eq:strict-lyap} m_1 \|x\|^2_X \leqslant V(x) \leqslant m_2 \|x\|^2_X, \quad \forall x \in X,$$ and there exists $\beta > 0$ such that for all $x \in \mathcal{D}(A)$ and all $u \in U$, $$\label{eq:ISS0} \mathrm{d}V(x) [Ax + f(x) + g(x)u] \leqslant\beta \|u\|^2_U.$$ Moreover, for any bounded open set $\mathcal{B}\subset X$, there exist a quadratic-like Lyapunov functional $V_\mathcal{B}\in \mathcal{C}^1(X, \mathbb{R})$ and $\alpha_\mathcal{B}, \beta_\mathcal{B}> 0$ such that for all $x \in \mathcal{D}(A)\cap \mathcal{B}$ and all $u \in U$, $$\label{eq:ISS} \mathrm{d}V_\mathcal{B}(x) [Ax + f(x) + g(x)u] \leqslant- \alpha_\mathcal{B}V_\mathcal{B}+ \beta_\mathcal{B}\|u\|^2_U.$$ Assumption 1 implies the following formal statements: along all solutions to [\[eq:open-loop-x\]](#eq:open-loop-x){reference-type="ref" reference="eq:open-loop-x"}, $$\dot{V} \leqslant\beta \| u \|^2_U,$$ and, given an open bounded set $\mathcal{B}$, if $x$ remains in $\mathcal{B}$ then $$\dot{V}_\mathcal{B}\leqslant- \alpha_\mathcal{B}V_\mathcal{B}+ \beta_\mathcal{B}\|u\|^2_U,$$ for some Lyapunov function $V_\mathcal{B}$. While this formulation may seem unusual, it will naturally arise in our PDE application. The next assumption connects the $x$- and $z$-subsystems and is instrumental in building a forwarding-based controller. *Assumption 2* (Invariant graph).
There exists a map $\mathcal{M}\in \mathcal{C}^1(X, Y)$ with $\mathrm{d}\mathcal{M}$ locally Lipschitz continuous such that $\mathcal{M}(0) = 0$ and for all $x \in \mathcal{D}(A)$, $$\label{eq:forwarding} \mathrm{d}\mathcal{M}(x) (A + f)(x) = S \mathcal{M}(x) + (C + h)(x).$$ Its geometric interpretation is that, when $u = 0$, the graph of $\mathcal{M}$ is an invariant manifold for the cascade [\[eq:cascade-x\]](#eq:cascade-x){reference-type="ref" reference="eq:cascade-x"}. Under [Assumption 2](#as:forwarding){reference-type="ref" reference="as:forwarding"}, we consider the nonlinear state feedback $$\label{eq:forwarding-feedback} u = g(x)^*\mathrm{d}\mathcal{M}(x)^* [z - \mathcal{M}(x)],$$ which can be modelled as a locally Lipschitz map on the extended state space $X \times Y$. **Theorem 1** (Well-posedness). *For any initial data $[x_0, z_0]\in X \times Y$, there exists a unique (global) mild solution $[x, z] \in \mathcal{C}(\mathbb{R}^+, X \times Y)$ to the closed-loop equations [\[eq:cascade-x\]](#eq:cascade-x){reference-type="ref" reference="eq:cascade-x"}-[\[eq:forwarding-feedback\]](#eq:forwarding-feedback){reference-type="ref" reference="eq:forwarding-feedback"}. If $[x_0, z_0] \in \mathcal{D}(A) \times Y$, then $[x, z]$ is a classical solution and enjoys the regularity $[x, z] \in \mathcal{C}(\mathbb{R}^+, \mathcal{D}(A) \times Y) \cap \mathcal{C}^1(\mathbb{R}^+, X \times Y)$. Furthermore, for any $\tau > 0$, the map $[x_0, z_0] \mapsto [x, z]$ is continuous from $X \times Y$ to $\mathcal{C}([0, \tau], X \times Y)$ equipped with the uniform norm.* *Proof.* Observe that [\[eq:cascade-x\]](#eq:cascade-x){reference-type="ref" reference="eq:cascade-x"} in closed loop with [\[eq:forwarding-feedback\]](#eq:forwarding-feedback){reference-type="ref" reference="eq:forwarding-feedback"} constitutes a locally Lipschitz perturbation of the linear equations $\dot{x} = Ax, \dot{z} = Cx$. Those generate a strongly continuous semigroup on $X \times Y$.
Indeed, taking advantage of the $A$-boundedness of $C$, one can show that the operator matrix $\begin{bsmallmatrix} A & 0 \\ C & 0 \end{bsmallmatrix}$ with domain $\mathcal{D}(A) \times Y$ is closed and has nonempty resolvent set. In that case, semigroup generation is equivalent to existence of unique classical solutions for all initial data in the domain [@AreBat01book Theorem 3.1.12], which is immediate. Then, existence and uniqueness of local mild and classical solutions for the nonlinear problem together with uniform continuous dependence on the initial data follow from [@CurZwa20book Theorem 11.1.5]. The proof that all closed-loop solutions are global is postponed: our stability analysis will show that they cannot blow up in finite time -- see [\[eq:dec-W\]](#eq:dec-W){reference-type="ref" reference="eq:dec-W"} below. ◻ *Remark 1*. All formal computations performed in the sequel can be justified by considering classical solutions and passing to the limit in suitable expressions for general initial data. **Theorem 2** (Local Exponential Stability). *Suppose that $$\label{eq:range-cond} \mathop{\mathrm{Range}}\mathrm{d}\mathcal{M}(0) g(0) = Y.$$ Then the zero equilibrium is Locally Exponentially Stable for [\[eq:cascade-x\]](#eq:cascade-x){reference-type="ref" reference="eq:cascade-x"} in closed loop with [\[eq:forwarding-feedback\]](#eq:forwarding-feedback){reference-type="ref" reference="eq:forwarding-feedback"}.* *Proof.* Consider the Lyapunov candidate $$W(x, z) \triangleq V(x) + \frac{3\beta}{4} \|z - \mathcal{M}(x) \|^2_Y$$ with $\beta$ as in [\[eq:ISS0\]](#eq:ISS0){reference-type="ref" reference="eq:ISS0"}.
Along solutions to [\[eq:cascade-x\]](#eq:cascade-x){reference-type="ref" reference="eq:cascade-x"}-[\[eq:forwarding-feedback\]](#eq:forwarding-feedback){reference-type="ref" reference="eq:forwarding-feedback"}, we have $$\label{eq:derivative-W} \dot{W} = \dot{V} + \frac{3\beta}{2} \langle z - \mathcal{M}(x), \dot{z} - \mathrm{d}\mathcal{M}(x) \dot{x} \rangle_Y.$$ Since $S$ is skew-adjoint, $$\langle z - \mathcal{M}(x), Sz \rangle_Y = \langle z - \mathcal{M}(x), S \mathcal{M}(x) \rangle_Y.$$ Therefore, plugging [\[eq:open-loop-x,eq:forwarding-feedback,eq:forwarding\]](#eq:open-loop-x,eq:forwarding-feedback,eq:forwarding){reference-type="ref" reference="eq:open-loop-x,eq:forwarding-feedback,eq:forwarding"} into [\[eq:derivative-W\]](#eq:derivative-W){reference-type="ref" reference="eq:derivative-W"} leads to $$\begin{aligned} \langle z - \mathcal{M}(x), \dot{z} - \mathrm{d}\mathcal{M}(x) \dot{x} \rangle_Y &= \langle z - \mathcal{M}(x), - \mathrm{d}\mathcal{M}(x)g(x) u \rangle_Y \\ &= - \|g(x)^* \mathrm{d}\mathcal{M}(x)^*[z - \mathcal{M}(x)]\|^2_U. \end{aligned}$$ Using the (global) ISS property [\[eq:ISS0\]](#eq:ISS0){reference-type="ref" reference="eq:ISS0"}, we obtain $$\label{eq:dec-W} \dot{W} \leqslant- \frac{\beta}{2} \| u \|^2_U \leqslant 0.$$ Estimate [\[eq:dec-W\]](#eq:dec-W){reference-type="ref" reference="eq:dec-W"} shows that the $W$-sublevel sets $$\mathcal{N}_c \triangleq \{ [x_0, z_0] \in X \times Y : W(x_0, z_0) \leqslant c \}, \quad c > 0,$$ are positively invariant under the flow of [\[eq:cascade-x\]](#eq:cascade-x){reference-type="ref" reference="eq:cascade-x"}-[\[eq:forwarding-feedback\]](#eq:forwarding-feedback){reference-type="ref" reference="eq:forwarding-feedback"}. Furthermore, since $\mathcal{M}$ is continuous and vanishes at zero, using also [\[eq:strict-lyap\]](#eq:strict-lyap){reference-type="ref" reference="eq:strict-lyap"}, for any $\varepsilon> 0$ we can find $c >0$ such that $\mathcal{N}_c \subset \mathcal{B}_{X \times Y}(0, \varepsilon)$ and, conversely, for any $c > 0$, there exists $\delta > 0$ such that $\mathcal{B}_{X\times Y}(0, \delta) \subset \mathcal{N}_c$.
This combined with positive invariance of each $\mathcal{N}_c$ proves that the origin is Lyapunov stable. On the other hand, because of the range condition [\[eq:range-cond\]](#eq:range-cond){reference-type="ref" reference="eq:range-cond"}, a transposition argument -- see, e.g., [@Bre11book Theorem 2.20] -- provides $\lambda > 0$ such that $$\|g(0)^* \mathrm{d}\mathcal{M}(0)^* y \|^2_U \geqslant\lambda \|y\|^2_Y, \quad \forall y \in Y.$$ Because $x \mapsto g(x)^* \mathrm{d}\mathcal{M}(x)^*$ is continuous, it is possible to find some ball $\mathcal{O}\triangleq \mathcal{B}_X(0, r)$, $r > 0$, such that $$\label{eq:local-coerc} \|g(x)^* \mathrm{d}\mathcal{M}(x)^* y \|^2_U \geqslant\frac{\lambda}{2} \|y\|^2_Y, \quad \forall x \in \mathcal{O}, \forall y \in Y.$$ We are now ready to prove local exponential stability of the origin. Fix $c >0$ such that $\mathcal{N}_c \subset \mathcal{O}\times Y$ and pick $\delta > 0$ such that $\mathcal{B}_{X \times Y}(0, \delta) \subset \mathcal{N}_c$. The $x$-coordinate of any closed-loop solution originating from $\mathcal{N}_c$ remains in the $V$-sublevel set $\mathcal{B}\triangleq \{ x \in X : V(x) < 2c \}$, which is bounded. Thus, by [Assumption 1](#as:ISS){reference-type="ref" reference="as:ISS"} there exist a Lyapunov functional $V_\mathcal{B}$ and positive constants $\alpha_\mathcal{B}$, $\beta_\mathcal{B}$ such that $\dot{V}_\mathcal{B}\leqslant-\alpha_\mathcal{B}V_\mathcal{B}+ \beta_\mathcal{B}\|u\|^2_U$ along all closed-loop solutions originating from $\mathcal{N}_c$.
We let $$\label{eq:Wb} W_\mathcal{B}(x, z) \triangleq V_\mathcal{B}(x) + \frac{3\beta_\mathcal{B}}{4} \|z - \mathcal{M}(x) \|^2_Y$$ and this time, arguing similarly as in the derivation of [\[eq:dec-W\]](#eq:dec-W){reference-type="ref" reference="eq:dec-W"} but taking advantage of [\[eq:local-coerc\]](#eq:local-coerc){reference-type="ref" reference="eq:local-coerc"}, we obtain $$\label{eq:Wb-strict} \begin{aligned} \dot{W}_\mathcal{B}& \leqslant- \alpha_\mathcal{B}{V}_\mathcal{B}- \frac{\beta_\mathcal{B}}{2} \|u\|^2_U \leqslant- \alpha_\mathcal{B}{V}_\mathcal{B}- \frac{\beta_\mathcal{B}\lambda}{4} \|z - \mathcal{M}(x) \|^2_Y \\ & \leqslant- \min \{ \alpha_\mathcal{B}, \lambda/3 \} W_\mathcal{B} \end{aligned}$$ along all solutions to [\[eq:cascade-x\]](#eq:cascade-x){reference-type="ref" reference="eq:cascade-x"}-[\[eq:forwarding-feedback\]](#eq:forwarding-feedback){reference-type="ref" reference="eq:forwarding-feedback"} originating from $\mathcal{N}_c$. It then follows from [\[eq:Wb-strict\]](#eq:Wb-strict){reference-type="ref" reference="eq:Wb-strict"} and Grönwall's lemma that $$W_\mathcal{B}(x(t), z(t)) \leqslant e^{- \min \{ \alpha_\mathcal{B}, \lambda/3 \} t} W_\mathcal{B}(x_0, z_0)$$ for all $t \geqslant 0$ and any closed-loop solution $[x, z]$ with initial data $[x_0, z_0]$ taken in $\mathcal{N}_c$. To conclude, we recall that $\mathcal{M}$ is Lipschitz continuous on the bounded set $\mathcal{B}$, $\mathcal{M}(0) = 0$ and $0 \in \mathcal{B}$. On the other hand, $V_\mathcal{B}$ has quadratic upper and lower bounds. Thus, using a couple of triangle inequalities, we deduce that there exist positive constants $K_1, K_2$ such that $K_1 W_\mathcal{B}(x, z) \leqslant\|x\|^2_X + \|z\|^2_Y \leqslant K_2 W_\mathcal{B}(x, z)$ for all $x \in \mathcal{B}$ and $z \in Y$, which completes the proof. ◻ **Theorem 3** (Global Asymptotic Stability). *In addition to [\[eq:range-cond\]](#eq:range-cond){reference-type="ref" reference="eq:range-cond"}, suppose that* 1. *[\[it:L2-adm\]]{#it:L2-adm label="it:L2-adm"} The linear semigroup $\{e^{tA}\}_{t \geqslant 0}$ is exponentially stable;* 2.
*$Y$ is finite-dimensional.* *Then the zero equilibrium is Globally Asymptotically Stable for [\[eq:cascade-x\]](#eq:cascade-x){reference-type="ref" reference="eq:cascade-x"} in closed loop with [\[eq:forwarding-feedback\]](#eq:forwarding-feedback){reference-type="ref" reference="eq:forwarding-feedback"}.* *Proof.* Let the initial data $[x_0, z_0] \in X \times Y$ be fixed. If $W(x_0, z_0) = 0$, then $x_0 = 0$ and $z_0 = 0$, and the corresponding solution is identically zero. Otherwise, it follows again from [\[eq:dec-W\]](#eq:dec-W){reference-type="ref" reference="eq:dec-W"} that the $x$-coordinate of the closed-loop solution originating from $[x_0, z_0]$ remains in an open bounded set $\mathcal{B}$ of the form $\mathcal{B}= \{ x \in X : V(x) < 2W(x_0, z_0) \}$. Thus, by [Assumption 1](#as:ISS){reference-type="ref" reference="as:ISS"}, there exist a Lyapunov functional $V_\mathcal{B}$ and positive constants $\alpha_\mathcal{B}, \beta_\mathcal{B}$ such that $W_\mathcal{B}$ constructed just as in [\[eq:Wb\]](#eq:Wb){reference-type="ref" reference="eq:Wb"} satisfies $$\label{eq:Wb-bis} \dot{W}_\mathcal{B}\leqslant- \alpha_\mathcal{B}V_\mathcal{B}- \frac{\beta_\mathcal{B}}{2} \|u\|^2_U$$ along the closed-loop solution originating from $[x_0, z_0]$. Integrating [\[eq:Wb-bis\]](#eq:Wb-bis){reference-type="ref" reference="eq:Wb-bis"} over $(0, + \infty)$ yields $$\label{eq:Wb-integrated} \int_0^{+\infty} \alpha_\mathcal{B}V_\mathcal{B}(x(t)) + \frac{\beta_\mathcal{B}}{2} \|u(t)\|^2_U \, \mathrm{d}t \leqslant 2 W_\mathcal{B}(x_0, z_0).$$ In particular, $x \in L^2(0, + \infty; X)$ and $u \in L^2(0, + \infty; U)$. Because $x$ is bounded in $X$ and $f$ is locally Lipschitz continuous, this also implies that $f(x) \in L^2(0, + \infty; X)$. On the other hand, because $g$ is locally Lipschitz continuous as well and $u \in L^2(0, + \infty; U)$, we must have $g(x)u \in L^2(0, +\infty; X)$. Let $\Xi \triangleq f(x) + g(x) u$. Then $\Xi \in L^2(0, + \infty; X)$ and $x$ solves the Cauchy problem $\dot{x} = Ax + \Xi$, $x(0) = x_0$.
Since $\{e^{tA}\}_{t \geqslant 0}$ is exponentially stable, it follows from [@CurZwa20book Lemma 5.2.2] that $x(t) \to 0$ in $X$ as $t \to + \infty$. Another consequence of [\[eq:dec-W\]](#eq:dec-W){reference-type="ref" reference="eq:dec-W"} is that $z$ remains bounded in $Y$, which is assumed to be finite-dimensional; hence relative compactness of $\{z(t), t \geqslant 0 \}$ in $Y$. We are now in a position to carry out a standard LaSalle invariance argument [@Las60; @Daf78]. Indeed, we now know that the $\omega$-limit set $\omega(x_0, z_0)$ of the singleton $[x_0, z_0]$ under the flow of [\[eq:cascade-x\]](#eq:cascade-x){reference-type="ref" reference="eq:cascade-x"}-[\[eq:forwarding-feedback\]](#eq:forwarding-feedback){reference-type="ref" reference="eq:forwarding-feedback"} is nonempty; it is in fact included in $\{0\} \times Y$ and thus compact, implying that, as $t \to + \infty$, $$\label{eq:omega-att} \mathop{\mathrm{dist}}( [x(t), z(t)], \omega(x_0, z_0)) \to 0.$$ Given $[\tilde{x}_0, \tilde{z}_0] \in \omega(x_0, z_0)$, let $[\tilde{x}, \tilde{z}]$ be the closed-loop solution originating from $[\tilde{x}_0, \tilde{z}_0]$. Since the $W$-sublevel sets are all closed and invariant under [\[eq:cascade-x\]](#eq:cascade-x){reference-type="ref" reference="eq:cascade-x"}-[\[eq:forwarding-feedback\]](#eq:forwarding-feedback){reference-type="ref" reference="eq:forwarding-feedback"}, $V(\tilde{x}(t)) \leqslant W(x_0, z_0) < 2 W(x_0, z_0)$ for all $t\geqslant 0$; thus, [\[eq:Wb-bis\]](#eq:Wb-bis){reference-type="ref" reference="eq:Wb-bis"} is valid for $[\tilde{x}, \tilde{z}]$ as well. By (strict) invariance of $\omega(x_0, z_0)$, the continuous, monotone decreasing and lower-bounded function $t \mapsto W_\mathcal{B}(\tilde{x}(t), \tilde{z}(t))$ must in fact be constant.
We then infer from the $[\tilde{x}, \tilde{z}]$-version of [\[eq:Wb-bis\]](#eq:Wb-bis){reference-type="ref" reference="eq:Wb-bis"} that $$\label{eq:lassalle-zero} \tilde{x}(t) = 0, \quad \tilde{u}(t) = 0, \quad \forall t \geqslant 0,$$ where $\tilde{u}$ is the control for $[\tilde{x}, \tilde{z}]$. Combining [\[eq:lassalle-zero\]](#eq:lassalle-zero){reference-type="ref" reference="eq:lassalle-zero"} with the feedback law [\[eq:forwarding-feedback\]](#eq:forwarding-feedback){reference-type="ref" reference="eq:forwarding-feedback"} leads to $$g(0)^* \mathrm{d}\mathcal{M}(0)^* \tilde{z}(t) = 0, \quad t \geqslant 0,$$ where we recall that $\mathcal{M}(0) = 0$. Because $\mathrm{d}\mathcal{M}(0)g(0)$ is assumed to be surjective, $g(0)^* \mathrm{d}\mathcal{M}(0)^*$ is injective and we finally obtain $\omega(x_0, z_0) = \{0\}$. By [\[eq:omega-att\]](#eq:omega-att){reference-type="ref" reference="eq:omega-att"}, this shows that $0$ is globally attractive for the closed-loop [\[eq:cascade-x\]](#eq:cascade-x){reference-type="ref" reference="eq:cascade-x"}-[\[eq:forwarding-feedback\]](#eq:forwarding-feedback){reference-type="ref" reference="eq:forwarding-feedback"}. ◻

# Solving the nonlinear Sylvester equation {#sec:syl}

In this section, we establish sufficient conditions under which a suitable solution $\mathcal{M}$ to [\[eq:forwarding\]](#eq:forwarding){reference-type="ref" reference="eq:forwarding"} exists. Consider the uncontrolled $x$-equation: $$\label{eq:unc-x} \dot{x} = Ax + f(x).$$ It is clear from the arguments in the proof of [Theorem 1](#claim:wp){reference-type="ref" reference="claim:wp"} that [\[eq:unc-x\]](#eq:unc-x){reference-type="ref" reference="eq:unc-x"} gives rise to a dynamical system in $X$, with similar regularity and uniform approximation properties. We denote by $\{\mathcal{T}_t\}_{t \geqslant 0}$ the associated evolution semigroup: $t \mapsto \mathcal{T}_t x_0$ is the unique solution $x$ to [\[eq:unc-x\]](#eq:unc-x){reference-type="ref" reference="eq:unc-x"} with initial condition $x(0) = x_0$.
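The global stability mechanism of the previous section can be exercised numerically on a scalar toy cascade (entirely our own construction; the closed-form invariant-graph map below is specific to this example): $\dot{x} = -x - x^3 + u$, $\dot{z} = x$, so that $S = 0$, $C = 1$, $f(x) = -x^3$, $h = 0$, $g \equiv 1$. The invariance equation $\mathrm{d}\mathcal{M}(x)(-x - x^3) = x$ integrates to $\mathcal{M}(x) = -\arctan(x)$, and the forwarding feedback becomes $u = -(z + \arctan x)/(1 + x^2)$.

```python
import numpy as np

# Toy check of global asymptotic stability (our own scalar cascade):
# x' = -x - x^3 + u, z' = x (S = 0, C = 1, f(x) = -x^3, h = 0, g = 1).
# The invariance equation dM(x)(-x - x^3) = x integrates to M(x) = -arctan(x),
# so the forwarding feedback is u = dM(x)(z - M(x)) = -(z + arctan(x))/(1 + x^2).
dt, T = 5e-3, 200.0
x, z = 3.0, -2.0                 # initial data far from the origin
for _ in range(int(T / dt)):
    u = -(z + np.arctan(x)) / (1.0 + x**2)
    x, z = x + dt * (-x - x**3 + u), z + dt * x

gas_norm = abs(x) + abs(z)       # expected to be essentially zero
```

The range condition holds here since $\mathrm{d}\mathcal{M}(0)g(0) = -1 \neq 0$, and $A = -1$ generates an exponentially stable semigroup, so the hypotheses of the global stability theorem above are all met.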
Let us now discuss the consequences of [Assumption 1](#as:ISS){reference-type="ref" reference="as:ISS"} (with $u = 0$) on the stability of [\[eq:unc-x\]](#eq:unc-x){reference-type="ref" reference="eq:unc-x"}. The positive invariance of each $V$-sublevel set under the flow of [\[eq:unc-x\]](#eq:unc-x){reference-type="ref" reference="eq:unc-x"} combined with the existence of a strict Lyapunov functional $V_\mathcal{B}$ on each bounded set $\mathcal{B}$ of $X$ implies *semiglobal* exponential stability: for each bounded set $\mathcal{B}$ there exist $M_\mathcal{B}\geqslant 1$ and $\mu_\mathcal{B}> 0$ such that $$\label{eq:stab-T} \|\mathcal{T}_t x_0 \|_X \leqslant M_{\mathcal{B}} e^{- \mu_\mathcal{B}t} \|x_0\|_X, \quad \forall x_0 \in \mathcal{B}, \forall t \geqslant 0.$$ In particular, the zero equilibrium *uniformly* attracts the bounded sets of $X$, i.e., for any $\varepsilon$-ball, $\varepsilon> 0$, around the origin and any bounded set $\mathcal{B}$, there exists a time $T$ after which all solutions originating from $\mathcal{B}$ remain in that $\varepsilon$-ball. The following additional assumptions are now in force. *Assumption 3*. The maps $f$ and $h$ are Fréchet differentiable with locally Lipschitz continuous differentials. Also, without loss of generality, $\mathrm{d}f(0) = 0$ and $\mathrm{d}h(0) = 0$. *Assumption 4*. There exist a coercive self-adjoint operator $P \in \mathcal{L}(X)$ and a positive constant $\mu$ such that $$\label{eq:ineq-first-var} \langle Ax, P x \rangle_X \leqslant- \mu \|x\|^2_X, \quad \forall x \in \mathcal{D}(A).$$ Inequality [\[eq:ineq-first-var\]](#eq:ineq-first-var){reference-type="ref" reference="eq:ineq-first-var"} implies that $\{e^{tA}\}_{t \geqslant 0}$ is exponentially stable with a coercive quadratic Lyapunov functional. **Theorem 4** (Existence of $\mathcal{M}$).
*Let $\mathcal{M}_0 \in \mathcal{L}(X, Y)$ be the (unique) solution to the linear Sylvester equation $$\label{eq:sylvester-lin} \mathcal{M}_0 A = S \mathcal{M}_0 + C.$$ Then, the unique solution $\mathcal{M}$ to [\[eq:forwarding\]](#eq:forwarding){reference-type="ref" reference="eq:forwarding"}, $\mathcal{M}(0) = 0$, is given by $$\label{eq:M-nonlin} \mathcal{M}(x) = \mathcal{M}_0x + \int_0^{+ \infty} e^{-t S} [ \mathcal{M}_0 f(\mathcal{T}_t x) - h(\mathcal{T}_t x)] \, \mathrm{d}t.$$ Furthermore, $\mathcal{M}\in \mathcal{C}^1(X, Y)$ and $\mathrm{d}\mathcal{M}$ is locally Lipschitz continuous.* *Proof.* First, since $\{e^{tA}\}_{t \geqslant 0}$ is exponentially stable, $0$ lies in the resolvent set of $A$. On the other hand, $S$ is skew-adjoint and bounded, and thus generates a (uniformly continuous) group $\{e^{tS}\}_{t \in \mathbb{R}}$ of isometries on $Y$. With that in mind, it can be checked by following the proof of [@NatGil14 Lemma III.4] or using [@Ngu01 Theorem 2.1] that the unique solution $\mathcal{M}_0 \in \mathcal{L}(X, Y)$ to [\[eq:sylvester-lin\]](#eq:sylvester-lin){reference-type="ref" reference="eq:sylvester-lin"} is given by $$\mathcal{M}_0x = CA^{-1}x - \int_0^{+ \infty} S e^{-tS} C A^{-1}e^{tA}x \, \mathrm{d}t$$ for all $x \in X$. Now we look for a Fréchet differentiable solution $\mathcal{M}$ to [\[eq:forwarding\]](#eq:forwarding){reference-type="ref" reference="eq:forwarding"} of the form $\mathcal{M}(x) = \mathcal{M}_0 x + \mathcal{F}(x)$. 
Since $\mathcal{M}_0$ solves [\[eq:sylvester-lin\]](#eq:sylvester-lin){reference-type="ref" reference="eq:sylvester-lin"}, such a map $\mathcal{M}$ satisfies [\[eq:forwarding\]](#eq:forwarding){reference-type="ref" reference="eq:forwarding"} if and only if $$\mathcal{M}_0 f(x) + \mathrm{d}\mathcal{F}(x) (A + f)(x) = S \mathcal{F}(x) + h(x)$$ for all $x \in \mathcal{D}(A)$, or equivalently, $$\label{eq:calcul-dF-T} \mathrm{d}\mathcal{F}(\mathcal{T}_t x) \frac{\mathrm{d}}{\mathrm{d}t} \mathcal{T}_t x - S \mathcal{F}(\mathcal{T}_t x) = - \mathcal{M}_0 f(\mathcal{T}_t x) + h(\mathcal{T}_tx), \quad \forall x \in \mathcal{D}(A), \forall t \geqslant 0.$$ By applying (the invertible operator) $e^{-tS}$ to [\[eq:calcul-dF-T\]](#eq:calcul-dF-T){reference-type="ref" reference="eq:calcul-dF-T"}, we see that $\mathcal{M}$ solves [\[eq:forwarding\]](#eq:forwarding){reference-type="ref" reference="eq:forwarding"} if and only if for all $x \in \mathcal{D}(A)$ and $t \geqslant 0$, $$\label{eq:calcul-dF-final} \frac{\mathrm{d}}{\mathrm{d}t} e^{-t S} \mathcal{F}(\mathcal{T}_t x) = - e^{-t S} [ \mathcal{M}_0 f(\mathcal{T}_t x) - h(\mathcal{T}_t x)].$$ Since $\mathcal{M}$ (and thus $\mathcal{F}$) is assumed to be continuous and vanish at $0$, we can integrate [\[eq:calcul-dF-final\]](#eq:calcul-dF-final){reference-type="ref" reference="eq:calcul-dF-final"} and obtain that if $\mathcal{M}$ is indeed a solution to [\[eq:forwarding\]](#eq:forwarding){reference-type="ref" reference="eq:forwarding"}, it must be given by [\[eq:M-nonlin\]](#eq:M-nonlin){reference-type="ref" reference="eq:M-nonlin"}. Indeed, recall here that $\mathcal{M}_0$ is unique and the integral in [\[eq:M-nonlin\]](#eq:M-nonlin){reference-type="ref" reference="eq:M-nonlin"} is absolutely convergent because of [\[eq:stab-T\]](#eq:stab-T){reference-type="ref" reference="eq:stab-T"} together with the property that $f$ and $h$ are linearly bounded on bounded sets. 
Conversely, we have to prove that the map $\mathcal{M}$ defined by [\[eq:M-nonlin\]](#eq:M-nonlin){reference-type="ref" reference="eq:M-nonlin"} for any $x \in X$ is Fréchet differentiable and solves [\[eq:forwarding\]](#eq:forwarding){reference-type="ref" reference="eq:forwarding"} or, equivalently, satisfies [\[eq:calcul-dF-final\]](#eq:calcul-dF-final){reference-type="ref" reference="eq:calcul-dF-final"} for all $x \in \mathcal{D}(A)$ and $t \geqslant 0$, where we let $\mathcal{F}\triangleq \mathcal{M}- \mathcal{M}_0$. Assume for the moment that $\mathcal{M}$ is Fréchet differentiable and let $x \in \mathcal{D}(A)$. For all $t \geqslant 0$, $$\label{eq:diff-F} e^{-t S} \mathcal{F}(\mathcal{T}_t x) - \mathcal{F}(x) = -\int_0^{t} e^{-s S} [ \mathcal{M}_0 f(\mathcal{T}_s x) - h(\mathcal{T}_s x)] \, \mathrm{d}s.$$ Dividing [\[eq:diff-F\]](#eq:diff-F){reference-type="ref" reference="eq:diff-F"} by $t > 0$ and letting $t \to 0$ yields $$\frac{\mathrm{d}}{\mathrm{d}t} e^{-tS} \mathcal{F}(\mathcal{T}_t x) \bigg|_{t = 0} = h(x) - \mathcal{M}_0 f( x),$$ which implies [\[eq:calcul-dF-T\]](#eq:calcul-dF-T){reference-type="ref" reference="eq:calcul-dF-T"}. The proof of differentiability of $\mathcal{M}$ (along with the Lipschitz continuity of the differential) is an extension of the (lengthy) proof of [@VanBri23 Theorem 3.4]. It is omitted here.[^4] In the sequel, we shall use that $\mathrm{d}\mathcal{M}(0) = \mathcal{M}_0$, which follows from [\[eq:M-nonlin\]](#eq:M-nonlin){reference-type="ref" reference="eq:M-nonlin"}. ◻ We now know that [\[as:smooth,as:first-var\]](#as:smooth,as:first-var){reference-type="ref" reference="as:smooth,as:first-var"} imply [Assumption 2](#as:forwarding){reference-type="ref" reference="as:forwarding"} and the solution $\mathcal{M}$ to [\[eq:forwarding\]](#eq:forwarding){reference-type="ref" reference="eq:forwarding"} is unique.
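The integral representation of $\mathcal{M}$ can be checked on a scalar example (our own choice of data, not from the text): $A = -1$, $f(x) = -x^3$, $h(x) = x^2$, $S = 0$, $C = 1$, so that $\mathcal{M}_0 = CA^{-1} = -1$. Here the invariance equation $\mathrm{d}\mathcal{M}(x)(-x - x^3) = x + x^2$ integrates in closed form to $\mathcal{M}(x) = -\arctan(x) - \frac{1}{2}\log(1 + x^2)$, which the formula must reproduce:

```python
import numpy as np

# Scalar check of the integral formula for M (our own example): A = -1,
# f(x) = -x^3, h(x) = x^2, S = 0, C = 1, hence M0 = C A^{-1} = -1. The
# invariance equation dM(x)(-x - x^3) = x + x^2 has the closed-form solution
# M(x) = -arctan(x) - (1/2) log(1 + x^2), which the representation
# M(x) = M0 x + int_0^inf [M0 f(T_t x) - h(T_t x)] dt must reproduce.
def rhs(p):                      # uncontrolled dynamics generating T_t
    return -p - p**3

x0, dt, n_steps = 0.5, 1e-3, 20_000
phi, integral = x0, 0.0
for _ in range(n_steps):
    g_now = phi**3 - phi**2      # integrand M0*f(phi) - h(phi), since S = 0
    k1 = rhs(phi)
    k2 = rhs(phi + 0.5 * dt * k1)
    k3 = rhs(phi + 0.5 * dt * k2)
    k4 = rhs(phi + dt * k3)
    phi_new = phi + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)    # RK4 step
    integral += 0.5 * dt * (g_now + (phi_new**3 - phi_new**2))  # trapezoid rule
    phi = phi_new

M_numeric = -x0 + integral       # M0*x0 + truncated integral over [0, 20]
M_exact = -np.arctan(x0) - 0.5 * np.log(1.0 + x0**2)
err_M = abs(M_numeric - M_exact)
```

The truncation of the integral at $t = 20$ is harmless here because the flow decays like $e^{-t}$, so the integrand is $O(e^{-2t})$, in line with the absolute convergence guaranteed by the semiglobal stability estimate.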
As a result, we can reformulate the hypothesis [\[eq:range-cond\]](#eq:range-cond){reference-type="ref" reference="eq:range-cond"} of [Theorem 2](#claim:LES){reference-type="ref" reference="claim:LES"} in terms of the original control system only and, in the case $S = 0$, recover the classical non-resonance condition [@Isi03book]. **Corollary 1** (Non-resonance condition). *The range condition [\[eq:range-cond\]](#eq:range-cond){reference-type="ref" reference="eq:range-cond"} reads as $\mathop{\mathrm{Range}}\mathcal{M}_0g(0) = Y$, where $\mathcal{M}_0$ is the unique solution to the Sylvester equation [\[eq:sylvester-lin\]](#eq:sylvester-lin){reference-type="ref" reference="eq:sylvester-lin"}. In particular, if $S = 0$, [\[eq:range-cond\]](#eq:range-cond){reference-type="ref" reference="eq:range-cond"} reads as follows: $$\label{eq:non-resonance} \mathop{\mathrm{Range}}CA^{-1}g(0) = Y.$$* *Proof.* This is a consequence of the uniqueness of $\mathcal{M}$ and the property that $\mathrm{d}\mathcal{M}(0) = \mathcal{M}_0$. ◻

# Local set-point output regulation {#sec:reg}

In this section, we present an application of the forwarding approach for stabilization of cascade systems in the context of set-point output regulation. Let $y_\mathrm{ref}\in Y$ be a (small) deviation from the output $y = Cx + h(x)$ at the equilibrium (here, the origin). We wish to find a control $u$ steering $y$ to $y_\mathrm{ref}$ while maintaining $x$ bounded. In the spirit of [@mattia-forwarding; @VanBri23], consider in place of [\[eq:z-sub\]](#eq:z-sub){reference-type="ref" reference="eq:z-sub"} $$\label{eq:z-sub-int} \dot{z} = Cx + h(x) - y_\mathrm{ref}.$$ The crucial property of integral action is that at *any* equilibrium, the output must be at the desired value $y_\mathrm{ref}$.
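A toy regulation loop illustrating this mechanism (again our own example, not taken from the text): $\dot{x} = -x + u$ with output $y = (C + h)(x) = x + x^2$, so $S = 0$, $\mathcal{M}_0 = CA^{-1} = -1$ and the non-resonance condition $\mathop{\mathrm{Range}} CA^{-1}g(0) = \mathbb{R}$ holds. The invariance equation $\mathrm{d}\mathcal{M}(x)(-x) = x + x^2$ gives $\mathcal{M}(x) = -x - x^2/2$ here.

```python
import numpy as np

# Toy set-point regulation loop (our own example): x' = -x + u with output
# y = x + x^2 and integral action z' = x + x^2 - y_ref. With S = 0 the
# invariance equation dM(x)(-x) = x + x^2 gives M(x) = -x - x^2/2, so the
# forwarding feedback is u = dM(x)(z - M(x)) = -(1 + x)(z + x + x^2/2).
y_ref = 0.1
dt, T = 1e-2, 80.0
x, z = 0.0, 0.0                  # start at the origin, inside the basin
for _ in range(int(T / dt)):
    u = -(1.0 + x) * (z + x + 0.5 * x**2)
    x, z = x + dt * (-x + u), z + dt * (x + x**2 - y_ref)

output_error = abs(x + x**2 - y_ref)   # regulated output minus reference
```

Note that any fixed point of the discretized loop necessarily satisfies $x + x^2 = y_\mathrm{ref}$, mirroring the "crucial property" of integral action stated above.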
We assume that [\[as:ISS,as:smooth,as:first-var\]](#as:ISS,as:smooth,as:first-var){reference-type="ref" reference="as:ISS,as:smooth,as:first-var"} are satisfied, which in turn means that [Assumption 2](#as:forwarding){reference-type="ref" reference="as:forwarding"} is satisfied as well by [Theorem 4](#claim:sylvester){reference-type="ref" reference="claim:sylvester"}. In particular, the unique solution $\mathcal{M}$ to [\[eq:forwarding\]](#eq:forwarding){reference-type="ref" reference="eq:forwarding"}, $\mathcal{M}(0) = 0$, is given by [\[eq:M-nonlin\]](#eq:M-nonlin){reference-type="ref" reference="eq:M-nonlin"} -- here, $S = 0$, and thus $\mathcal{M}_0 = CA^{-1}$. We thus consider the nonlinear feedback [\[eq:forwarding-feedback\]](#eq:forwarding-feedback){reference-type="ref" reference="eq:forwarding-feedback"} of the state $[x, z]$ governed by [\[eq:cascade-x\]](#eq:cascade-x){reference-type="ref" reference="eq:cascade-x"}-[\[eq:z-sub-int\]](#eq:z-sub-int){reference-type="ref" reference="eq:z-sub-int"}. **Theorem 5** (Set-point output regulation). *Suppose that $$\mathop{\mathrm{Range}}CA^{-1}g(0) = Y.$$ Then, there exists $r > 0$ such that for any $y_\mathrm{ref}\in \mathcal{B}_Y(0, r)$, the following property holds: the $x$-subsystem [\[eq:cascade-x\]](#eq:cascade-x){reference-type="ref" reference="eq:cascade-x"} supplemented with the output integrator [\[eq:z-sub-int\]](#eq:z-sub-int){reference-type="ref" reference="eq:z-sub-int"} and in closed loop with [\[eq:forwarding-feedback\]](#eq:forwarding-feedback){reference-type="ref" reference="eq:forwarding-feedback"} possesses an equilibrium $[x^\star, z^\star] \in \mathcal{D}(A) \times Y$ that is Locally Exponentially Stable, satisfies $(C + h)(x^\star) = y_\mathrm{ref}$ and whose basin of attraction contains the origin.* *Remark 2*.
That the closed-loop system in the presence of the additional term $y_\mathrm{ref}$ is well-posed and forward complete can be verified by following the proofs of [\[claim:wp,claim:LES\]](#claim:wp,claim:LES){reference-type="ref" reference="claim:wp,claim:LES"}. The theorem itself can be proved by following the strategy described in [@VanBri23 Section 4.2.1].[^5] *Remark 3*. A similar result can be obtained in the presence of a small constant disturbance $d$ in the $x$-equation [\[eq:cascade-x\]](#eq:cascade-x){reference-type="ref" reference="eq:cascade-x"}, as considered in [@VanBri23]. However, this requires global Lipschitz continuity of $\mathcal{M}$ and $\mathrm{d}\mathcal{M}$, which is much harder to obtain, and is not guaranteed a priori by [Theorem 4](#claim:sylvester){reference-type="ref" reference="claim:sylvester"}.

# Applications to flexible structures {#sec:flex}

We consider the planar motion of a flexible homogeneous Euler-Bernoulli beam of length $L$ attached to a rotating joint at one end and free at the other end. The deflection $w(\xi, t)$ in the beam frame at the position $\xi\in[0, L]$ and time $t\in\mathbb{R}^+$ and the rotation angle $\theta(t)$ are governed by the following set of equations: [\[eq:w-beam\]]{#eq:w-beam label="eq:w-beam"} $$\begin{aligned} &\rho \frac{\partial^2w}{\partial t^2} + \lambda \frac{\partial w}{\partial t} + EI \frac{\partial^4w}{\partial\xi^4} + \rho \xi \ddot{\theta} - \rho \dot{\theta}^2 w = 0, \\ &I_R \ddot{\theta}(t) = EI \frac{\partial^2w}{\partial\xi^2}(0, t) + \tau(t), \\ & w(0, t) = \frac{\partial w}{\partial\xi}(0, t) = 0, \\ &\frac{\partial^3w}{\partial\xi^3}(L, t) = \frac{\partial^2w}{\partial\xi^2} (L, t) = 0, \end{aligned}$$ where $\lambda$ is a viscous damping coefficient, $E$ is the Young modulus, $I$ and $\rho$ are the moment of inertia and the density of the cross section, $I_R$ is the moment of inertia of the rotating joint, and $\tau$ is the torque applied to the joint, which is our control input.
We refer the reader to [@Mor91] for more details on that model. For the sake of simplicity, we set all physical constants to $1$, with the exception of $\lambda$.[^6] A possible control objective is reference tracking of the angular position $\theta$. Let $\theta_\mathrm{ref} \in \mathbb{R}$ be the desired angle. We start with some important observations. If we set the torque input as $$\label{eq:torque-stat} \tau = - \theta + \theta_\mathrm{ref} + \tilde{\tau},$$ then, in the new coordinate system $[w, \theta] \mapsto [v, \phi]$ where $$\phi(t) \triangleq \theta(t) - \theta_{\mathrm{ref}}, \quad v(\xi, t) \triangleq w(\xi,t) + \xi \phi(t),$$ the equations of motion [\[eq:w-beam\]](#eq:w-beam){reference-type="ref" reference="eq:w-beam"} supplied with [\[eq:torque-stat\]](#eq:torque-stat){reference-type="ref" reference="eq:torque-stat"} become: [\[eq:beam-v\]]{#eq:beam-v label="eq:beam-v"} $$\begin{aligned} &\frac{\partial^2v}{\partial t^2} + \lambda \frac{\partial v}{\partial t} + \frac{\partial^4v}{\partial\xi^4} - \lambda \xi \dot{\phi} - \dot{\phi}^2(v - \xi \phi) = 0, \\ &\ddot{\phi}(t) = \frac{\partial^2v}{\partial\xi^2}(0, t) - \phi(t) + \tilde{\tau}(t), \\ &\frac{\partial^3v}{\partial\xi^3}(L, t) = \frac{\partial^2v}{\partial\xi^2} (L, t) = v(0, t) = 0, \\ & \frac{\partial v}{\partial\xi}(0, t) = \phi(t).\end{aligned}$$ Note in particular that the new equations [\[eq:beam-v\]](#eq:beam-v){reference-type="ref" reference="eq:beam-v"} do *not* depend on the choice of $\theta_\mathrm{ref}$. 
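As a quick consistency check of the change of coordinates (using only the definitions above), the clamped-end conditions at $\xi = 0$ transform as

```latex
% Under v(\xi, t) = w(\xi, t) + \xi \phi(t):
v(0, t) = w(0, t) = 0,
\qquad
\frac{\partial v}{\partial \xi}(0, t)
    = \frac{\partial w}{\partial \xi}(0, t) + \phi(t) = \phi(t),
% while the free-end conditions at \xi = L are unchanged because
% \partial_\xi^2 (\xi \phi) = \partial_\xi^3 (\xi \phi) = 0.
```

which is precisely the boundary set appearing in the transformed equations.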
Now, in order to pre-stabilize the plant, an energy approach suggests the following nonlinear feedback in torque: $$\label{eq:torque} \tilde{\tau} = - \dot{\phi} \int_\Omega v \frac{\partial v}{\partial t} \, \mathrm{d}\xi + (\phi \dot{\phi} - \lambda) \int_\Omega \xi \frac{\partial v}{\partial t} \, \mathrm{d}\xi - \dot{\phi} + u.$$ Indeed, this yields $$\label{eq:energy-balance} \frac{1}{2} \frac{\mathrm{d}}{\mathrm{d}t} \left ( \int_\Omega \left | \frac{\partial v}{\partial t} \right |^2 + \left | \frac{\partial^2v}{\partial\xi^2} \right |^2 \mathrm{d}\xi + |\dot{\phi}|^2 + |\phi|^2 \right ) = - \lambda \int_\Omega \left | \frac{\partial v}{\partial t} \right |^2 \mathrm{d}\xi - |\dot{\phi}|^2 + u \dot{\phi}$$ along trajectories of [\[eq:beam-v\]](#eq:beam-v){reference-type="ref" reference="eq:beam-v"} in closed loop with [\[eq:torque\]](#eq:torque){reference-type="ref" reference="eq:torque"}. *Remark 4*. The pre-stabilizer given in [\[eq:torque\]](#eq:torque){reference-type="ref" reference="eq:torque"} is different from the control laws introduced in [@Mor91]. Let us introduce an operator model for the control system [\[eq:beam-v\]](#eq:beam-v){reference-type="ref" reference="eq:beam-v"}-[\[eq:torque\]](#eq:torque){reference-type="ref" reference="eq:torque"}. Having set $\Omega \triangleq (0, L)$, we define the spaces $$\begin{aligned} &H_0 \triangleq L^2(\Omega) \times \mathbb{R}, \\ & H_1 \triangleq \left \{ \begin{bmatrix} v \\ \phi \end{bmatrix}\in H^2(\Omega) \times \mathbb{R} \left | \begin{aligned} &\frac{\partial v}{\partial\xi}(0) = \phi \\ &v(0) = 0 \end{aligned} \right. 
\right \},\end{aligned}$$ which we equip with the following scalar products: $$\begin{aligned} &\left \langle \begin{bmatrix} p_1 \\ \omega_1 \end{bmatrix}, \begin{bmatrix} p_2 \\ \omega_2 \end{bmatrix} \right \rangle_{H_0} \triangleq \omega_1 \omega_2 + \int_\Omega p_1 p_2 \, \mathrm{d}\xi, \\ \label{eq:scalar-H1} &\left \langle \begin{bmatrix} v_1 \\ \phi_1 \end{bmatrix}, \begin{bmatrix} v_2 \\ \phi_2 \end{bmatrix} \right \rangle_{H_1} \triangleq \phi_1 \phi_2 + \int_\Omega \frac{\partial^2v_1}{\partial\xi^2} \frac{\partial^2v_2}{\partial\xi^2} \, \mathrm{d}\xi.\end{aligned}$$ Then, $H_0$ and $H_1$ are Hilbert spaces.[^7] Now, let $X \triangleq H_1 \times H_0$ equipped with its product Hilbertian structure. We define an unbounded operator $A : \mathcal{D}(A) \to X$ by $$\begin{aligned} &\mathcal{D}(A) \triangleq \left \{ \begin{bmatrix} v \\ \phi \\ p \\ \omega \end{bmatrix} \in H_1 \times H_1 \; \left |\; \begin{aligned} & v \in H^4(\Omega), \\ & \frac{\partial^3v}{\partial\xi^3}(L) = 0, \\& \frac{\partial^2v}{\partial\xi^2} (L) = 0 \end{aligned} \right. \right \}, \\ & A \begin{bmatrix} v \\ \phi \\ p \\ \omega \end{bmatrix} \triangleq \begin{bmatrix} p \\ \omega \\ - \frac{\partial^4v}{\partial\xi^4} - \lambda p \\ \frac{\partial^2v}{\partial\xi^2}(0) - \omega - \phi \end{bmatrix}, \quad \forall \begin{bmatrix} v \\ \phi \\ p \\ \omega \end{bmatrix} \in \mathcal{D}(A).\end{aligned}$$ It follows from the Lumer-Phillips theorem and standard arguments that $A$ is the generator of a contraction semigroup on $X$. Here, as an input space we let $U \triangleq \mathbb{R}$, and the control map $g$ is linear and given by $g(x)u = Bu = [0, 0, 0, u]$ for all $x \in X$ and $u \in U$.
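To make the Lumer-Phillips step concrete, the dissipativity computation runs as follows (a formal sketch: we use $p(0) = 0$ and $\frac{\partial p}{\partial\xi}(0) = \omega$, valid since $[p, \omega] \in H_1$, together with the free-end conditions in $\mathcal{D}(A)$):

```latex
\langle A x, x \rangle_X
  = \phi\omega
    + \int_\Omega \frac{\partial^2 v}{\partial \xi^2}
        \frac{\partial^2 p}{\partial \xi^2}\,\mathrm{d}\xi
    - \int_\Omega \Big( \frac{\partial^4 v}{\partial \xi^4} + \lambda p \Big) p
        \,\mathrm{d}\xi
    + \Big( \frac{\partial^2 v}{\partial \xi^2}(0) - \omega - \phi \Big) \omega,
\quad x = [v, \phi, p, \omega] \in \mathcal{D}(A).
% Two integrations by parts, using p(0) = 0, p'(0) = \omega and
% v''(L) = v'''(L) = 0, give
\int_\Omega \frac{\partial^4 v}{\partial \xi^4}\, p \,\mathrm{d}\xi
  = \frac{\partial^2 v}{\partial \xi^2}(0)\,\omega
    + \int_\Omega \frac{\partial^2 v}{\partial \xi^2}
        \frac{\partial^2 p}{\partial \xi^2}\,\mathrm{d}\xi,
% so all cross terms cancel and
\langle A x, x \rangle_X
  = - \lambda \int_\Omega |p|^2 \,\mathrm{d}\xi - |\omega|^2 \leqslant 0.
```

This is consistent with the energy balance obtained earlier when $u = 0$ and the nonlinear terms are discarded.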
We then put all the nonlinear terms from [\[eq:beam-v\]](#eq:beam-v){reference-type="ref" reference="eq:beam-v"}-[\[eq:torque\]](#eq:torque){reference-type="ref" reference="eq:torque"} into a locally Lipschitz map $f : X \to X$ that meets the requirements of [2](#sec:stab){reference-type="ref" reference="sec:stab"} and also [Assumption 3](#as:smooth){reference-type="ref" reference="as:smooth"}. Letting $V \triangleq (1/2) \| \cdot \|^2_X$, the energy balance [\[eq:energy-balance\]](#eq:energy-balance){reference-type="ref" reference="eq:energy-balance"} reads as $$\label{eq:Vdot-beam} \dot{V} = - \lambda \int_\Omega \left | \frac{\partial v}{\partial t} \right |^2 \mathrm{d}\xi - |\dot{\phi}|^2 + u \dot{\phi} \leqslant\frac{1}{2} |u|^2$$ along solutions to [\[eq:beam-v\]](#eq:beam-v){reference-type="ref" reference="eq:beam-v"}-[\[eq:torque\]](#eq:torque){reference-type="ref" reference="eq:torque"}. Now, for [Assumption 1](#as:ISS){reference-type="ref" reference="as:ISS"} to be satisfied, we also need strict control Lyapunov functionals on each (open) bounded subset of $X$. To that end, we can "strictify" the total energy $V$. Let $\varepsilon> 0$ and define $$\label{eq:V-eps-def} V_\varepsilon(v, \phi, p, \omega) \triangleq V(v, \phi, p, \omega) + \varepsilon\int_\Omega v p \, \mathrm{d}\xi + \varepsilon\phi \omega + \frac{\varepsilon\lambda}{2} \int_\Omega |v|^2 \, \mathrm{d}\xi + \frac{ \varepsilon}{2} |\phi|^2$$ for all $[v, \phi, p, \omega] \in X$. 
Then, along solutions to [\[eq:beam-v\]](#eq:beam-v){reference-type="ref" reference="eq:beam-v"}-[\[eq:torque\]](#eq:torque){reference-type="ref" reference="eq:torque"}, $$\label{eq:V-eps} \dot{V}_\varepsilon= (\varepsilon- \lambda) \int_\Omega \left | \frac{\partial v}{\partial t} \right |^2 \mathrm{d}\xi + (\varepsilon- 1) |\dot{\phi}|^2 + u \dot{\phi} - \varepsilon\int_\Omega \left | \frac{\partial^2v}{\partial\xi^2} \right |^2 \, \mathrm{d}\xi - \varepsilon|\phi|^2 + \varepsilon u \phi + \varepsilon R(v, \phi, \dot{v}, \dot{\phi}),$$ where the term $R$ is given by: $$\label{eq:def-R} R(v, \phi, \dot{v}, \dot{\phi}) \triangleq - \lambda \dot{\phi} \int_\Omega \xi v \, \mathrm{d}\xi - \dot{\phi}^2\int_\Omega (v - \xi \phi) v \, \mathrm{d}\xi - \phi \dot{\phi} \int_\Omega v \frac{\partial v}{\partial t} \, \mathrm{d}\xi + (\phi \dot{\phi} - \lambda) \phi \int_\Omega \xi \frac{\partial v}{\partial t} \, \mathrm{d}\xi.$$ Let us deal with the first term in [\[eq:def-R\]](#eq:def-R){reference-type="ref" reference="eq:def-R"}: we can estimate $$\left | \lambda \dot{\phi} \int_\Omega \xi v \, \mathrm{d}\xi \right | \leqslant\frac{1}{4 \varepsilon} |\dot{\phi}|^2 + \varepsilon\lambda^2 L^3 \int_\Omega |v|^2 \, \mathrm{d}\xi.$$ Then, we get $$\label{eq:est-R-1} \varepsilon\left | \lambda \dot{\phi} \int_\Omega \xi v \, \mathrm{d}\xi \right | \leqslant\frac{1}{4} |\dot{\phi}|^2 + \varepsilon^2 \lambda^2 L^3 K |\phi |^2 + \varepsilon^2 \lambda^2 L^3 K \int_\Omega \left | \frac{\partial^2v}{\partial\xi^2} \right |^2 \mathrm{d}\xi,$$ where $K > 0$ is some constant coming from the equivalence of the norms $\|\cdot \|_{H_1}$ and $\|\cdot \|_{H^2(\Omega) \times \mathbb{R}}$ on $H_1$. 
Thus, going back to [\[eq:V-eps\]](#eq:V-eps){reference-type="ref" reference="eq:V-eps"}, we see that all terms of [\[eq:est-R-1\]](#eq:est-R-1){reference-type="ref" reference="eq:est-R-1"} can be absorbed into the negative part of the right-hand side of [\[eq:V-eps\]](#eq:V-eps){reference-type="ref" reference="eq:V-eps"} -- here, the $\varepsilon^2$-prefactor is important. Next, let $\mathcal{B}$ be a fixed open bounded subset of $X$. We then examine the remainder of [\[eq:def-R\]](#eq:def-R){reference-type="ref" reference="eq:def-R"}, which we denote by $R'$, and observe that it can be estimated as follows: there exists a positive constant $K_{\mathcal{B}}$ such that $$|R'(v, \phi, p, \omega)| \leqslant K_\mathcal{B}\int_\Omega |p|^2 \, \mathrm{d}\xi + K_\mathcal{B}|\omega|^2$$ for all $[v, \phi, p, \omega] \in \mathcal{B}$. Therefore, for any solution to [\[eq:beam-v\]](#eq:beam-v){reference-type="ref" reference="eq:beam-v"}-[\[eq:torque\]](#eq:torque){reference-type="ref" reference="eq:torque"} that remains in $\mathcal{B}$, the term $R'$, once multiplied by $\varepsilon$, can be absorbed into the "good" terms in [\[eq:V-eps\]](#eq:V-eps){reference-type="ref" reference="eq:V-eps"}, provided that $\varepsilon$ is chosen sufficiently small. Finally, we are left with the term $u \dot{\phi} + \varepsilon u \phi$, which is readily dealt with using Young's inequality. At this point we have proved that for each (open) bounded subset $\mathcal{B}$ of $X$, there exists $\varepsilon> 0$ such that the Lyapunov candidate $V_\varepsilon$ as defined in [\[eq:V-eps-def\]](#eq:V-eps-def){reference-type="ref" reference="eq:V-eps-def"} satisfies the requirements of [Assumption 1](#as:ISS){reference-type="ref" reference="as:ISS"}. 
Note that with any $V_\varepsilon$ with $\varepsilon$ sufficiently small, we can also prove that the semigroup generated by $A$ (i.e., the linear part of [\[eq:beam-v\]](#eq:beam-v){reference-type="ref" reference="eq:beam-v"}-[\[eq:torque\]](#eq:torque){reference-type="ref" reference="eq:torque"}) is exponentially stable and possesses a coercive Lyapunov functional. Indeed, because the pre-stabilizing feedback [\[eq:torque\]](#eq:torque){reference-type="ref" reference="eq:torque"} is meant to cancel out the nonlinear terms in the energy balance, [\[eq:Vdot-beam\]](#eq:Vdot-beam){reference-type="ref" reference="eq:Vdot-beam"} holds for the linear problem as well, and so does [\[eq:V-eps\]](#eq:V-eps){reference-type="ref" reference="eq:V-eps"} with $R$ replaced by $0$. Now that we have verified that the control system [\[eq:beam-v\]](#eq:beam-v){reference-type="ref" reference="eq:beam-v"}-[\[eq:torque\]](#eq:torque){reference-type="ref" reference="eq:torque"} satisfies [\[as:ISS,as:smooth\]](#as:ISS,as:smooth){reference-type="ref" reference="as:ISS,as:smooth"} (and thus [Assumption 2](#as:forwarding){reference-type="ref" reference="as:forwarding"} as well by [Theorem 4](#claim:sylvester){reference-type="ref" reference="claim:sylvester"}), we can put it in cascade with the output integrator $$\label{eq:int-theta-ref} \dot{z} = \phi = \theta - \theta_{\mathrm{ref}}$$ and finally use the forwarding feedback law [\[eq:forwarding-feedback\]](#eq:forwarding-feedback){reference-type="ref" reference="eq:forwarding-feedback"} to control the extended $[v, \phi, z]$-system. This is made possible by [Theorem 4](#claim:sylvester){reference-type="ref" reference="claim:sylvester"}, which provides the (unique) solution $\mathcal{M}$ to [\[eq:forwarding\]](#eq:forwarding){reference-type="ref" reference="eq:forwarding"} and guarantees its required Lipschitz properties. Note again that $\mathcal{M}$ do not depend on the particular choice of $\theta_\mathrm{ref}$. 
In order to apply [\[claim:GAS,claim:LES\]](#claim:GAS,claim:LES){reference-type="ref" reference="claim:GAS,claim:LES"}, it remains to check the non-resonance condition [\[eq:non-resonance\]](#eq:non-resonance){reference-type="ref" reference="eq:non-resonance"}. Here, this is done by computing the input-to-steady-state map: constant input $u$ yields a stationary solution $[v, \phi]$ given by $v(\xi) = u \xi$, $\phi = u$. Therefore, by [\[claim:LES,claim:GAS\]](#claim:LES,claim:GAS){reference-type="ref" reference="claim:LES,claim:GAS"}, the controller [\[eq:forwarding-feedback\]](#eq:forwarding-feedback){reference-type="ref" reference="eq:forwarding-feedback"} achieves global asymptotic stabilization and local exponential stabilization in $X \times Y = H_1 \times H_0 \times \mathbb{R}$ of the nonlinear system [\[eq:beam-v\]](#eq:beam-v){reference-type="ref" reference="eq:beam-v"}-[\[eq:torque\]](#eq:torque){reference-type="ref" reference="eq:torque"} in cascade with [\[eq:int-theta-ref\]](#eq:int-theta-ref){reference-type="ref" reference="eq:int-theta-ref"}. Coming back to the original $[w, \theta]$-coordinates, we then see that the (unique) equilibrium at which $\theta = \theta_\mathrm{ref}$ is globally asymptotically stable and locally exponentially stable, and this holds for any choice of $\theta_\mathrm{ref}$. In summary, we have designed a dynamic feedback law for the original nonlinear plant [\[eq:w-beam\]](#eq:w-beam){reference-type="ref" reference="eq:w-beam"} that enables global set-point output tracking of the angular position $\theta$ at any reference $\theta_\mathrm{ref}$. *Remark 5*. In this analysis, we were able to derive global results in both initial data and reference by applying [\[claim:LES,claim:GAS\]](#claim:LES,claim:GAS){reference-type="ref" reference="claim:LES,claim:GAS"} after a suitable change of variables and control input instead of relying on the local [Theorem 5](#th:set-point){reference-type="ref" reference="th:set-point"}. 
The underlying key property which we used here is that the original equations [\[eq:w-beam\]](#eq:w-beam){reference-type="ref" reference="eq:w-beam"}, although nonlinear, are left invariant under constant inputs -- of course, up to a change of variable related to the resulting steady state. is applicable to systems lacking this feature but provides weaker results in comparison. # Concluding remarks We have presented new results for stabilization of nonlinear systems consisting of a semilinear component whose output is integrated by a neutrally stable linear subsystem. For this purpose, we have introduced a new class of nonlinear Sylvester equations, which we demonstrated to be solvable. Our approach also provides a solution for the local set-point output tracking problem. As a case study, we have investigated the nonlinear dynamics of a flexible beam attached to a rigid body. Now, while the formula [\[eq:M-nonlin\]](#eq:M-nonlin){reference-type="ref" reference="eq:M-nonlin"} completely determines the nonlinear map $\mathcal{M}$ and its Fréchet differential, and thus the state feedback [\[eq:forwarding-feedback\]](#eq:forwarding-feedback){reference-type="ref" reference="eq:forwarding-feedback"}, the exact implementation in practice seems out of reach in most cases. This is an issue even in the finite-dimensional context [@PraOrt01]. Nevertheless, in view of [\[eq:M-nonlin\]](#eq:M-nonlin){reference-type="ref" reference="eq:M-nonlin"}, our control based on the nonlinear map $\mathcal{M}$ can be seen as a perturbation of the linear solution $\mathcal{M}_0$ of [\[eq:sylvester-lin\]](#eq:sylvester-lin){reference-type="ref" reference="eq:sylvester-lin"}. Hence, the additional integral containing the nonlinear perturbation terms in [\[eq:M-nonlin\]](#eq:M-nonlin){reference-type="ref" reference="eq:M-nonlin"} can be interpreted as compensation terms that improve the control brought by $\mathcal{M}_0$. 
It is expected that taking into account even approximations of these compensation terms should lead better closed-loop performances, as it should be investigated in future works. # Additional material for the proofs of [\[claim:sylvester,th:set-point\]](#claim:sylvester,th:set-point){reference-type="ref" reference="claim:sylvester,th:set-point"} {#sec:add-proof} This section contains additional technical details and is intended for reading along [@VanBri23]. *Proof of [Theorem 4](#claim:sylvester){reference-type="ref" reference="claim:sylvester"} (continued, sketch).* It remains to prove that $\mathcal{M}$ is differentiable and $\mathrm{d}\mathcal{M}$ is locally Lipschitz continuous. First, the nonlinear semigroup $\{\mathcal{T}_t\}_{t \geqslant 0}$ is Fréchet differentiable [@VanBri23 Lemma 4.5]. Furthermore, because $\mathrm{d}f$ is locally Lipschitz continuous with $\mathrm{d}f(0) = 0$, it follows from [\[eq:ineq-first-var\]](#eq:ineq-first-var){reference-type="ref" reference="eq:ineq-first-var"} that there exists an open neighborhood $\mathcal{V}$ of $0$ in $X$ such that for all $x \in \mathcal{V}$ and $\delta \in \mathcal{D}(A)$, $$\label{eq:new-first-var} \langle A\delta + \mathrm{d}f(x)\delta, P \delta \rangle_X \leqslant- \frac{\mu}{2} \|\delta\|^2_X.$$ generalizes [@VanBri23 Hypothesis 3.3, item (ii)], where the same property had to hold globally in $x$ and with $P = \mathop{\mathrm{id}}$. It implies that, in the region $\mathcal{V}$, the nonlinear semigroup $\{\mathcal{T}_t\}_{t \geqslant 0}$ is strictly contractive with respect to the (equivalent) norm $\| P^{1/2} \cdot \|_X$. 
This is enough for our purpose: by the property of semiglobal exponential stability, for any bounded set $\mathcal{B}$, there exists $T_\mathcal{B}> 0$ such that $\mathcal{T}_t \mathcal{B}\subset \mathcal{V}$ for all $t \geqslant T_\mathcal{B}$, and we then can obtain estimates of the form $$\label{eq:exp-cont} \| \mathcal{T}_tx_1 - \mathcal{T}_tx_2\|_X \leqslant K e^{- \tilde{\mu} t} \|x_1 - x_2\|_X$$ holding for all $t \geqslant T_\mathcal{B}$ and $x_1, x_2 \in \mathcal{B}$, and also $$\label{eq:exp-cont-dT} \| [\mathrm{d}\mathcal{T}_t(x_1) - \mathrm{d}\mathcal{T}_t(x_2)] \delta \|_X \leqslant K e^{- \tilde{\mu} t} \|\delta\|_X \|x_1 - x_2 \|_X$$ for all $t \geqslant T_\mathcal{B}$, $x_1, x_2 \in \mathcal{B}$ and $\delta \in X$, where the positive constants $K$ and $\tilde{\mu}$ can be chosen independent of $\mathcal{B}$. On the other hand, by the local Lipschitz properties of the nonlinear terms and the fact that there is a bounded set that contains all $\mathcal{T}_t \mathcal{B}$, $t \geqslant 0$, we can also obtain counterparts to [\[eq:exp-cont,eq:exp-cont-dT\]](#eq:exp-cont,eq:exp-cont-dT){reference-type="ref" reference="eq:exp-cont,eq:exp-cont-dT"} with exponential growth instead of decay and constants depending on $\mathcal{B}$, which is enough do deal with the finite time interval $[0, T_\mathcal{B}]$. 
Therefore, one can adapt the arguments in the proof of [@VanBri23 Theorem 3.4] by splitting in two the integral in [\[eq:M-nonlin\]](#eq:M-nonlin){reference-type="ref" reference="eq:M-nonlin"} with the time $T_\mathcal{B}$, where $\mathcal{B}$ is an arbitrary but fixed bounded set, "differentiate under the integral sign" in [\[eq:M-nonlin\]](#eq:M-nonlin){reference-type="ref" reference="eq:M-nonlin"} with Lebesgue's dominated convergence theorem to obtain differentiability, and prove Lipschitz continuity of the differential by taking advantage of [\[eq:exp-cont,eq:exp-cont-dT\]](#eq:exp-cont,eq:exp-cont-dT){reference-type="ref" reference="eq:exp-cont,eq:exp-cont-dT"}. Note that [@VanBri23 Theorem 3.4] deals with the case $S = 0$, but "new" terms stemming from the presence of $S$ in [\[eq:ineq-first-var\]](#eq:ineq-first-var){reference-type="ref" reference="eq:ineq-first-var"} are linear and do not change much to the proof. ◻ *Proof of [Theorem 5](#th:set-point){reference-type="ref" reference="th:set-point"} (sketch).* At the beginning of [@VanBri23 Section 4.2.1], the proof of [@VanBri23 Theorem 3.2] is outlined in five items. Let us describe the differences with [Theorem 5](#th:set-point){reference-type="ref" reference="th:set-point"} itemise. We work in the new coordinate system $[x, \eta] \triangleq [x, z - \mathcal{M}(x)]$. 1. This corresponds to [\[eq:local-coerc\]](#eq:local-coerc){reference-type="ref" reference="eq:local-coerc"}. 2. A similar property can be obtained by taking advantage of [\[eq:local-coerc,eq:new-first-var\]](#eq:local-coerc,eq:new-first-var){reference-type="ref" reference="eq:local-coerc,eq:new-first-var"}. 3. A counterpart to [@VanBri23 Lemma 4.2] can be obtained by adapting our Lyapunov analysis from in presence of $y_\mathrm{ref}$. 4. The same arguments are valid. 5. This is not required here.  ◻ [^1]: Mathematics and Statistics, Faculty of Information Technology and Communication Sciences, Tampere University, P.O. 
Box 692, 33101 Tampere, Finland. Emails: `name.surname@tuni.fi`. [^2]: Université Paris-Saclay, CNRS, CentraleSupélec, Laboratoire des Signaux et Systèmes, 91190, Gif-sur-Yvette, France. Email: `lucas.brivadis@centralesupelec.fr`. [^3]: This work has been partially supported by the Academy of Finland Grant number 349002. [^4]: See [7](#sec:add-proof){reference-type="ref" reference="sec:add-proof"} for a sketch of the proof. [^5]: It is however quite lenghty, so we only sketch the differences with respect to [@VanBri23 Theorem 3.2] in [7](#sec:add-proof){reference-type="ref" reference="sec:add-proof"}. [^6]: This is to highlight the fact that, although we take advantage of viscous damping to considerably simplify computations, $\lambda > 0$ can be taken small. [^7]: One can proceed by showing that $H_1$ is a closed subspace of $H^2(\Omega) \times \mathbb{R}$ and therefore a Hilbert space if equipped with the inherited scalar product. That [\[eq:scalar-H1\]](#eq:scalar-H1){reference-type="ref" reference="eq:scalar-H1"} defines an equivalent norm is obtained with standard arguments.
arxiv_math
{ "id": "2310.05486", "title": "On forwarding techniques for stabilization and set-point output\n regulation of semilinear infinite-dimensional systems", "authors": "Nicolas Vanspranghe, Lucas Brivadis (L2S), Lassi Paunonen", "categories": "math.OC", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We prove that if $E\subseteq \mathbb{R}^2$ is analytic and $1<d < \dim_H(E)$, there are "many" points $x\in E$ such that the Hausdorff dimension of the pinned distance set $\Delta_x E$ is at least $d\left(1 - \frac{\left(D-1\right)\left(D-d\right)}{2D^2+\left(2-4d\right)D+d^2+d-2}\right)$, where $D = \dim_P(E)$. In particular, we prove that $\dim_H(\Delta_x E) \geq \frac{d(d-4)}{d-5}$ for these $x$, which gives the best known lower bound for this problem when $d \in (1, 5-\sqrt{15})$. We also prove that there exists some $x\in E$ such that the packing dimension of $\Delta_x E$ is at least $\frac{12 -\sqrt{2}}{8\sqrt{2}}$. Moreover, whenever the packing dimension of $E$ is sufficiently close to the Hausdorff dimension of $E$, we show the pinned distance set $\Delta_x E$ has full Hausdorff dimension for many points $x\in E$; in particular the condition is that $D<\frac{(3+\sqrt{5})d-1-\sqrt{5}}{2}$. We also consider the pinned distance problem between two sets $X, Y\subseteq \mathbb{R}^2$, both of Hausdorff dimension greater than 1. We show that if either $X$ or $Y$ has equal Hausdorff and packing dimensions, the pinned distance $\Delta_x Y$ has full Hausdorff dimension for many points $x\in X$. address: - Department of Mathematics, University of Wisconsin, Madison, Wisconsin 53715 - Department of Mathematics, University of Chicago, Chicago, IL 60637 author: - Jacob B. Fiedler - D. M. Stull bibliography: - ibpds.bib title: Dimension of Pinned Distance Sets for Semi-Regular Sets --- [^1] # Introduction Given some $E\subseteq \mathbb{R}^n$ of a particular size, a natural question is whether one can bound the size of its distance set, defined as $$\Delta E =\{\vert x - y\vert: x, y\in E\}$$ The continuous version of this question is the *Falconer distance problem*, concerning the fractal dimension of $E$. Falconer conjectured that for Borel sets $E$, whenever $\dim_H(E)>\frac{n}{2}$, then $\Delta E$ has positive measure. 
In the plane, the best known bound is that $\dim_H(E)>\frac{5}{4}$ implies $\Delta E$ has positive measure and is due to Guth, Iosevich, Ou, and Wang [@GutIosOuWang20]. In higher dimensions, very recent work of Du, Ou, Ren, and Zhang established the threshold that $\dim_H(E)>\frac{n}{2}+\frac{1}{4}-\frac{1}{8n+4}$ implies $\Delta E$ has positive measure [@DuOuRenZhang23a][@DuOuRenZhang23b][@Ren23]. In fact, these works each established a stronger result concerning *pinned* distance sets. For $x\in \mathbb{R}^n$, we define $$\Delta_x E =\{\vert x - y\vert: y\in E\}.$$ The aforementioned papers prove that under the respective assumptions, there exists some $x\in E$ such that $\Delta_x E$ has positive measure. Note that such a result for pinned distance sets immediately implies the corresponding result for distance sets. A closely related problem is to prove lower bounds on the Hausdorff or packing dimension of $\Delta_x E$ for "many" $x\in E$, given $d=\dim_H(E)$. Restricting consideration to $\mathbb{R}^2$ for the remainder of the paper, we now discuss a few previous bounds of this type. For $d\in(1, \frac{5}{4})$, Liu proved that $\dim_H(\Delta_x E)>\frac{4d}{3}-\frac{2}{3}$ in [@Liu20]. Shmerkin proved a better bound for $d$ not much greater than 1, namely that $\dim_H(\Delta_x E)\geq \frac{2}{3}+\frac{1}{42}$ [@Shmerkin20]. The second author improved the best known bound for $d$ not much larger than 1, proving that $\dim_H(\Delta_x E)\geq\frac{d}{4}+\frac{1}{2}$ [@Stull22c]. Complementing the bounds cited in the previous paragraph, Du, Ou, Ren and Zhang proved a bound for $d$ *less* than the dimension threshold of Falconer's conjecture [@DuOuRenZhang23a]. In $\mathbb{R}^2$, they showed $\sup_{x\in E}\dim_H(\Delta_x E)\geq\frac{5d}{3}-1$. 
As for packing dimension, in [@KelShm19], Shmerkin and Keleti proved that $$\dim_P(\Delta_x E)\geq \frac{1}{4}\left(1 + d + \sqrt{3d(2-d)}\right).$$ Finally, we note that Shmerkin proved that for sets $E$ which are *regular* in the sense that $\dim_H(E)=\dim_P(E)$, so long as $\dim_H(E)>1$, then for most $x$, the pinned distance set has full dimensions, i.e., $\dim_H(\Delta_x E)=1$ [@Shmerkin19].[^2] Our work makes a number of improvements to the pinned distance problem in the plane. First, we are able to prove a dimensional lower bound which takes into account the packing dimension of $E$. **Theorem 1**. *Let $E\subseteq \mathbb{R}^2$ be analytic such that $1<d <\dim_H(E)$. Then there is a subset $F \subseteq E$ of full dimension such that $$\dim_H(\Delta_x E) \geq d\left(1 - \frac{\left(D-1\right)\left(D-d\right)}{2D^2+\left(2-4d\right)D+d^2+d-2}\right),$$ for all $x\in F$, where $D = \dim_P(E)$. In particular, $\dim_H(E\setminus F)\leq d<\dim_H(E)$. Furthermore, if $$D<\frac{(3+\sqrt{5})d-1-\sqrt{5}}{2}$$ Then $\dim_H(\Delta_x E)=1$.* Note that the second part of the theorem significantly generalizes Shmerkin's result for regular sets $E\subseteq\mathbb{R}^2$, in the sense that we show that $E$ need not be fully regular to have most of its pinned distance sets be full dimension. Instead, it only needs to have sufficiently close Hausdorff and packing dimension, a form of "semi-regularity". As a corollary of this theorem, we obtain an improvement over the second author's previous bound, namely. **Corollary 2**. *Let $E\subseteq \mathbb{R}^2$ be analytic such that $1 < d<\dim_H(E)$. Then there is a subset $F \subseteq E$ of full dimension such that $$\dim_H(\Delta_x E) \geq \frac{d(d-4)}{d-5}.$$ for all $x\in F$.* Additionally, we can bound the dimension of the pinned distance sets in terms of only the packing dimension, as below. **Corollary 3**. *Let $E\subseteq \mathbb{R}^2$ be analytic such that $\dim_H(E) > 1$. 
Then, for all $x\in E$ outside a set of (Hausdorff) dimension one, $$\dim_H(\Delta_x E) \geq \frac{D+1}{2D},$$ where $D = \dim_P(E)$.* This corollary turns out to be useful in establishing the following improvement on Shmerkin and Keleti's packing dimension bound.[^3] **Theorem 4**. *Let $E\subseteq \mathbb{R}^2$ be analytic such that $\dim_H(E) > 1$. Then there exists some $x\in E$ such that, $$\dim_P(\Delta_x E) \geq \frac{12 -\sqrt{2}}{8\sqrt(2)}\approx 0.9356.$$* However, our results are more general than the above. We are able to prove essentially the same bounds in the case that our pinned points $x$ lie in some set $X$ and we consider the set of distances from $x$ to some analytic set $Y$. Theorem [Theorem 1](#thm:maintheorem){reference-type="ref" reference="thm:maintheorem"} is itself an immediate corollary of the following more general theorem. **Theorem 5**. *Let $Y\subseteq \mathbb{R}^2$ be analytic such that $1<d_y =\dim_H(Y)$ and $D_y=\mathop{\mathrm{Dim}}_p(Y)$. Let $X\subseteq \mathbb{R}^2$ be such that $1<d_x <\dim_H(X)$ and $D_x=\mathop{\mathrm{Dim}}_p(X)$. Then there is some $F \subseteq X$ of full dimension such that $$\dim_H(\Delta_x E) \geq d\left(1 - \frac{\left(D-1\right)\left(D-d\right)}{2D^2+\left(2-4d\right)D+d^2+d-2}\right),$$ for all $x\in F$, where $d=\min\{d_x, d_y\}$ and $D=\max\{D_x, D_y\}$. In particular, $\dim_H(X\setminus F)\leq d_x<\dim_H(X)$ Furthermore, if $$D<\frac{(3+\sqrt{5})d-1-\sqrt{5}}{2}$$ Then $\dim_H(\Delta_x E)=1$.* The second portion of this theorem amounts to a semi-regularity condition on both $X$ and $Y$. Our work also shows that we can achieve full dimension pinned distance sets when all that we require of $X$ is $\dim_H(X)>1$, at the cost of a somewhat more strict semi-regularity condition on $Y$. **Theorem 6**. *Let $Y\subseteq\mathbb{R}^2$ be analytic with $\dim_H(Y) > 1$ and $\dim_P(Y) < 2 \dim_H(Y)-1$. Let $X \subseteq \mathbb{R}^2$ be any set such that $\dim_H(X) > 1$. 
Then for all $x\in X$ outside a set of (Hausdorff) dimension one, $$\dim_H(\Delta_x Y) = 1.$$* Finally, we are able to show that regularity in just the pin set $X$ is good enough to imply that typical pinned distance sets have dimension 1. **Theorem 7**. *Let $Y\subseteq\mathbb{R}^2$ be analytic with $\dim_H(Y) > 1$. Let $X \subseteq \mathbb{R}^2$ be any set such that $\dim_H(X) = \dim_P(X) > 1$. Then there is a subset $F \subseteq X$ such that, $$\dim_H(\Delta_x Y) = 1,$$ for all $x \in F$. Moreover, $\dim_H(X\setminus F) < \dim_H(X)$.* Now, we outline the structure of the paper. This paper employs recently developed "effective" methods in the study of dimension. These methods connect Hausdorff and packing dimension to Kolmogorov complexity through *point-to-set principles*, and thus open classical problems up to attack by tools from computability theory. Section 2 covers the necessary preliminaries from computability theory, as well as a few less introductory but crucial lemmas. Section 3 deals with the proof of an effective projection theorem for points $x$ which is primarily used to bound the growth of the complexity of $\vert x - y \vert$ in the next section. In order to obtain this projection theorem, we need to perform a certain partitioning argument on the interval $[1, r]$ considering the complexity function $K_r(x)$. Partitioning $[1, r]$ into smaller intervals so that the complexity function has useful properties on each interval, or even just certain intervals, is a recurring idea throughout this paper. Section 4 is the main thrust of the argument on the effective side. The idea is to now partition $[1, r]$ so that $K_r(\vert x - y\vert )$ either grows at an optimal rate of 1 or grows at an average rate at least equal to the average growth rate of $K_r(y)$ on each interval. 
First, in section 4.2, we construct a partition which only uses the first kind of interval; this does not require the application of the projection theorem and will thus be essential in proving Theorem [Theorem 6](#thm:regularYFullDim){reference-type="ref" reference="thm:regularYFullDim"}.[^4] Section 4.3 details the construction of a more general partition that achieves better bounds using the projection theorem, and section 4.4 sums over this partition to obtain the effective analog of Theorem [Theorem 5](#thm:moregeneralmaintheorem){reference-type="ref" reference="thm:moregeneralmaintheorem"}. This effective analog is a bound on the complexity of the distance at *every* precision, and in section 4.5 we use it as a basis to obtain improved bounds at certain well-chosen precisions; this yields the effective analog of Theorem [Theorem 4](#thm:packingThm){reference-type="ref" reference="thm:packingThm"}. Section 5 is where we perform the reductions to the classical results. Essentially, we have to show that sets of the given dimensions always have points $x$ and $y$ with the desired algorithmic properties, which then imply the bounds of our effective theorems hold for certain distances. Performing these reductions yields Theorem [Theorem 4](#thm:packingThm){reference-type="ref" reference="thm:packingThm"}, Theorem [Theorem 5](#thm:moregeneralmaintheorem){reference-type="ref" reference="thm:moregeneralmaintheorem"}, and Theorem [Theorem 6](#thm:regularYFullDim){reference-type="ref" reference="thm:regularYFullDim"}. Section 6, wherein the goal is to prove Theorem [Theorem 7](#thm:regularXFullDim){reference-type="ref" reference="thm:regularXFullDim"}, is more self-contained. The idea is that if the point $x$ is regular in the sense of having equal effective Hausdorff and effective packing dimensions, we get an essentially optimal effective projection theorem. 
This allows us to take intervals as long as we want when partitioning $\vert x-y\vert$, which makes it straightforward to establish the bound of 1. A complication when performing the reduction to the classical result is that regular sets do not necessarily contain sufficiently many regular points, so we need a variant of the projection theorem that holds for $x$'s that are *almost* regular. As a consequence, we cannot take arbitrarily long intervals when partitioning $\vert x-y\vert$, but as $\dim_H(Y)>d_y>1$, we do not need arbitrarily long intervals to get the bound of 1. Thus, the reduction goes through. *Remark 1*. For readers who want a relatively straightforward demonstration of the main ideas of this paper, we suggest considering starting with section 6. The case of almost regular $x$, while it does not follow from the previous sections, has a similar structure to sections 3-5 without as many of the complications. # Preliminaries {#sec:prelim} ## Kolmogorov complexity and effective dimension The *conditional Kolmogorov complexity* of binary string $\sigma\in\{0,1\}^*$ given a binary string $\tau\in\{0,1\}^*$ is the length of the shortest program $\pi$ that will output $\sigma$ given $\tau$ as input. Formally, the conditional Kolmogorov complexity of $\sigma$ given $\tau$ is $$K(\sigma\mid\tau)=\min_{\pi\in\{0,1\}^*}\left\{\ell(\pi):U(\pi,\tau)=\sigma\right\}\,,$$ where $U$ is a fixed universal prefix-free Turing machine and $\ell(\pi)$ is the length of $\pi$. Any $\pi$ that achieves this minimum is said to *testify* to, or be a *witness* to, the value $K(\sigma\mid\tau)$. The *Kolmogorov complexity* of a binary string $\sigma$ is $K(\sigma)=K(\sigma\mid\lambda)$, where $\lambda$ is the empty string. We can easily extend these definitions to other finite data objects, e.g., vectors in $\mathbb{Q}^n$, via standard binary encodings. See [@LiVit08] for details. 
The *Kolmogorov complexity* of a point $x\in\mathbb{R}^m$ at *precision* $r\in\mathbb{N}$ is the length of the shortest program $\pi$ that outputs a *precision-$r$* rational estimate for $x$. Formally, this is $$K_r(x)=\min\left\{K(p)\,:\,p\in B_{2^{-r}}(x)\cap\mathbb{Q}^m\right\}\,,$$ where $B_{\varepsilon}(x)$ denotes the open ball of radius $\varepsilon$ centered on $x$. Note that this implies that the Kolmogorov complexity of a point is non-decreasing in precision. The *conditional Kolmogorov complexity* of $x$ at precision $r$ given $y\in\mathbb{R}^n$ at precision $s\in\mathbb{R}^n$ is $$K_{r,s}(x\mid y)=\max\big\{\min\{K_r(p\mid q)\,:\,p\in B_{2^{-r}}(x)\cap\mathbb{Q}^m\}\,:\,q\in B_{2^{-s}}(y)\cap\mathbb{Q}^n\big\}\,.$$ When the precisions $r$ and $s$ are equal, we abbreviate $K_{r,r}(x\mid y)$ by $K_r(x\mid y)$. As a matter of notational convenience, if we are given a non-integral positive real as a precision parameter, we will always round up to the next integer. Thus $K_{r}(x)$ denotes $K_{\lceil r\rceil}(x)$ whenever $r\in(0,\infty)$. A basic property, proven by Case and J. Lutz [@CasLut15] shows that the growth rate of the Kolmogorov complexity of a point is essentially bounded by the dimension of the ambient space. Since this paper concerns $\mathbb{R}^2$, we will frequently use this in the form that for any $\varepsilon>0$, for sufficiently large $s$ we have that $$K_{r+s}(x)\leq K_r(x)+2 s + \varepsilon s$$ We may *relativize* the definitions in this section to an arbitrary oracle set $A \subseteq \mathbb{N}$. We will frequently consider the complexity of a point $x \in \mathbb{R}^n$ *relative to a point* $y \in \mathbb{R}^m$, i.e., relative to an oracle set $A_y$ that encodes the binary expansion of $y$ is a standard way. We then write $K^y_r(x)$ for $K^{A_y}_r(x)$. Oracle access to the *entire* binary expansion of a point is no less useful than conditional access to that binary expansion only up to a certain precision. 
Thus, we note that, for every $x\in\mathbb{R}^n$ and $y\in\mathbb{R}^m$, $$\label{eq:OraclesDontIncrease} K_{s,r}(x\mid y)\geq K^y_s(x) - O(\log r) - O(\log s),$$ for every $s, r\in\mathbb{N}$. One of the most useful properties of Kolmogorov complexity is that it obeys the *symmetry of information*. That is, for every $\sigma, \tau \in\{0,1\}^*$, $$K(\sigma, \tau) = K(\sigma) + K(\tau \mid \sigma, K(\sigma)) + O(1)\,.$$ We also have the more technical lemmas detailing versions of symmetry of information that hold for Kolmogorov complexity in $\mathbb{R}^n$. Lemma [Lemma 8](#lem:unichain){reference-type="ref" reference="lem:unichain"} was proved in the second author's work [@LutStu20]. **Lemma 8** ([@LutStu20]). *For every $m,n\in\mathbb{N}$, $x\in\mathbb{R}^m$, $y\in\mathbb{R}^n$, and $r,s\in\mathbb{N}$ with $r\geq s$,* 1. *$\displaystyle \big|K_r(x\mid y)+K_r(y)-K_r(x,y)\big|\leq O(\log r)+O(\log\log \vert y\vert)\,.$* 2. *$\displaystyle |K_{r,s}(x\mid x)+K_s(x)-K_r(x)|\leq O(\log r)+O(\log\log\vert x\vert)\,.$* A consequence of Lemma [Lemma 8](#lem:unichain){reference-type="ref" reference="lem:unichain"} is the following. **Lemma 9** ([@LutStu20]). *Let $m,n\in\mathbb{N}$, $x\in\mathbb{R}^m$, $z\in\mathbb{R}^n$, $\varepsilon> 0$ and $r\in\mathbb{N}$. If $K^x_r(z) \geq K_r(z) - O(\log r)$, then the following hold for all $s \leq r$.* 1. *$\displaystyle K^x_s(z) \geq K_s(z) - O(\log r)\,.$* 2. *$\displaystyle K_{s, r}(x \mid z) \geq K_s(x)- O(\log r)\,.$* J. Lutz [@Lutz03a] initiated the study of effective dimensions (also known as *algorithmic dimensions*) by effectivizing Hausdorff dimension using betting strategies called *gales*, which generalize martingales. Mayordomo showed that effective Hausdorff dimension can be characterized using Kolmogorov complexity [@Mayordomo02]. In this paper, we use this characterization as a definition.
The *effective Hausdorff dimension* of a point $x\in\mathbb{R}^n$ is $$\dim(x)=\liminf_{r\to\infty}\frac{K_r(x)}{r}\,.$$ The *effective packing dimension* of a point $x\in\mathbb{R}^n$ is $$\mathop{\mathrm{Dim}}(x)=\limsup_{r\to\infty}\frac{K_r(x)}{r}\,.$$ We can relativize both definitions, so that the effective Hausdorff and packing dimension *with respect to an oracle* $A\subseteq \mathbb{N}$ are $\dim^A(x)=\liminf_{r\to\infty}\frac{K^A_r(x)}{r}$ and $\mathop{\mathrm{Dim}}^A(x)=\limsup_{r\to\infty}\frac{K^A_r(x)}{r}$. ## The Point-to-Set Principle {#subsec:ptsp} The *point-to-set principle* shows that the Hausdorff and packing dimension of a set can be characterized by the effective Hausdorff and effective packing dimension of its individual points. Specifically, J. Lutz and N. Lutz [@LutLut18] showed the following for arbitrary subsets of $\mathbb{R}^n$. **Theorem 10** (Point-to-set principle [@LutLut18]). *Let $n \in \mathbb{N}$ and $E \subseteq \mathbb{R}^n$. Then $$\mathop{\mathrm{dim_H}}(E) = \adjustlimits\min_{A \subseteq \mathbb{N}} \sup_{x \in E} \dim^A(x).$$ $$\mathop{\mathrm{dim_P}}(E) = \adjustlimits\min_{A \subseteq \mathbb{N}} \sup_{x \in E} \mathop{\mathrm{Dim}}^A(x).$$* Stated as above, it is clear that Hausdorff and packing dimension are in a certain respect dual to each other. The only difference is a limit inferior versus a limit superior for the individual points. This immediately implies that the packing dimension of a set is no less than its Hausdorff dimension. The general point-to-set principle is extremely useful, but for some applications, we would like to either remove the oracle, or at least be able to say something about which oracles achieve the minimum. The first point-to-set principle for Hausdorff dimension, which holds for a restricted class of sets, was implicitly proven by Hitchcock [@Hitchcock05] and J. Lutz [@Lutz03a].
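To see how the limit inferior and limit superior in the definitions of $\dim$ and $\mathop{\mathrm{Dim}}$ can genuinely differ, consider the following illustrative example (a standard construction, not part of the development above): a point $x\in\mathbb{R}^2$ whose complexity function alternates between no growth and the maximal growth rate $2$ on consecutive dyadic blocks.

```latex
% Illustrative: suppose K_r(x) is flat on [2^{2j}, 2^{2j+1}] and grows
% at the maximal rate 2 on [2^{2j+1}, 2^{2j+2}].  Summing the
% increments gives, up to lower-order terms,
K_{2^{2j+1}}(x) = \tfrac{4}{3}\cdot 2^{2j},
\qquad
K_{2^{2j+2}}(x) = \tfrac{4}{3}\cdot 2^{2j+2},
% so the liminf is attained at the ends of the flat blocks and the
% limsup at the ends of the steep blocks:
\dim(x) = \liminf_{r\to\infty}\frac{K_r(x)}{r} = \tfrac{2}{3},
\qquad
\mathop{\mathrm{Dim}}(x) = \limsup_{r\to\infty}\frac{K_r(x)}{r} = \tfrac{4}{3}.
```

Points with any prescribed oscillation of this kind exist by standard encoding arguments. In the colour terminology introduced in the next section, the flat blocks are blue intervals and the steep blocks are red intervals.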
A set $E \subseteq \mathbb{R}^n$ is *effectively compact relative to* $A$ if the set of finite open covers of $E$ by rational balls is computably enumerable relative to $A$.[^5] We will use the fact that every compact set is effectively compact relative to some oracle. **Theorem 11** ([@Hitchcock05; @Lutz03a]). *Let $E \subseteq \mathbb{R}^n$ and $A \subseteq \mathbb{N}$ be such that $E$ is effectively compact relative to $A$. Then $$\mathop{\mathrm{dim_H}}(E) = \sup\limits_{x \in E} \dim^A(x)\,.$$* *Remark 2*. We only state this restricted point-to-set principle for Hausdorff dimension because it is known that it fails for packing dimension, see [@Conidis08]. Informally, this can be seen to occur because effective compactness and Hausdorff dimension both deal with covers, whereas it is hard to convert the covers we have from effective compactness into usable information about packings. In order to apply Theorem [Theorem 11](#thm:strongPointToSetDim){reference-type="ref" reference="thm:strongPointToSetDim"} to the pinned distance sets, we need the following fact of computable analysis. **Observation 12**. *Let $E \subseteq \mathbb{R}^2$ be a compact set and let $A\subseteq\mathbb{N}$ be an oracle relative to which $E$ is effectively compact. Then, for every $x\in\mathbb{R}^2$, $\Delta_x E$ is effectively compact relative to $(x, A)$.* ## Helpful lemmas {#subsec:primarylemmas} In this section, we recall several lemmas which were introduced by Lutz and Stull [@LutStu20; @LutStu18] and which will be used throughout the paper. Note that these lemmas each relativize with the addition of an oracle $A$. The first lemma shows that the precision to which we can compute $e$ given $z, w$ such that $p_e z = p_e w$ depends linearly on the distance between $z$ and $w$. **Lemma 13** ([@LutStu20]). *Let $z \in \mathbb{R}^2$, $e \in S^{1}$, and $r \in \mathbb{N}$.
Let $w \in \mathbb{R}^2$ such that $p_e z = p_e w$ up to precision $r$.[^6] Then $$K_r(w) \geq K_t(z) + K_{r-t,r}(e\mid z) - O(\log r)\,,$$ where $t := -\log \vert z-w\vert$.* We will commonly need to lower the complexity of points at specified precisions. The following lemma shows that conditional complexity gives a convenient way to do this. **Lemma 14** ([@LutStu20]). *Let $z\in\mathbb{R}^2$, $\eta \in\mathbb{Q}_+$, and $r\in\mathbb{N}$. Then there is an oracle $D=D(r,z,\eta)$ with the following properties.* - *For every $t\leq r$, $$K^D_t(z)=\min\{\eta r,K_t(z)\}+O(\log r)\,.$$* - *For every $m,t\in\mathbb{N}$ and $y\in\mathbb{R}^m$, $$K^{D}_{t,r}(y\mid z)=K_{t,r}(y\mid z)+ O(\log r)\,,$$ and $$K_t^{z,D}(y)=K_t^z(y)+ O(\log r)\,.$$* - *If $B\subseteq\mathbb{N}$ satisfies $K^B_r(z) \geq K_r(z) - O(\log r)$, then $$K_r^{B,D}(z)\geq K_r^D(z) - O(\log r)\,.$$* - *For every $t\in\mathbb{N}$, $u\in\mathbb{R}^n$, and $w\in\mathbb{R}^m$, $$K_{r,t}(u\mid w) \leq K^D_{r,t}(u\mid w) + K_r(z) - \eta r + O(\log r)\,.$$* *In particular, this oracle $D$ encodes $\sigma$, the lexicographically first time-minimizing witness to $K(z{\upharpoonright}r\mid z{\upharpoonright}s)$, where $s = \max\{t \leq r \, : \, K_{t-1}(z) \leq \eta r\}$.* The final lemma in this section is a crucial tool at several points of the argument. Under certain conditions, it lets us lower bound the complexity growth of $\vert x-y\vert$ by the complexity growth of $y$ on particular intervals. **Lemma 15**.
*Suppose that $x, y\in\mathbb{R}^2$, $t<r\in\mathbb{N}$, and $\eta, \varepsilon\in\mathbb{Q}_+$ satisfy the following conditions.* - *$K_r(y)\leq \left(\eta +\frac{\varepsilon}{2}\right)r$.* - *For every $w \in B_{2^{-t}}(y)$ such that $\vert x-y\vert = \vert x-w\vert$, $$K_{r}(w)\geq \eta r + \min\{\varepsilon r, r-s -\varepsilon r\}\,,$$ where $s=-\log\vert y-w\vert\leq r$.* *Then for every oracle set $A\subseteq\mathbb{N}$, $$K_{r,t}^{A, x}(y \mid y) \leq K^{A,x}_{r,t}( \vert x-y\vert\mid y) + 3\varepsilon r + K(\varepsilon,\eta)+O(\log r)\,.$$* # Projection theorem {#sec:projections} The main goal of this section is to prove the following projection theorem: **Theorem 16**. *Let $x \in \mathbb{R}^2$, $e \in \mathcal{S}^1$, $\varepsilon\in \mathbb{Q}^+$, $C\in\mathbb{N}$, $A\subseteq\mathbb{N}$, and $t, r \in \mathbb{N}$. Suppose that $r$ is sufficiently large, and that the following hold.* 1. *$1 < d \leq \dim^A(x) \leq \mathop{\mathrm{Dim}}^A(x) \leq D$.* 2. *$t \geq \frac{d(2-D)}{2}r$.* 3. *$K^{x, A}_s(e) \geq s - C\log s$, for all $s \leq t$.* *Then $$K^A_r(x \,|\, p_e x, e) \leq \max\{\frac{D-1}{D}(dr - t) + K^A_r(x) - dr, K^A_r(x) - r\} + \varepsilon r.$$* This projection theorem has somewhat more restrictive hypotheses than the projection theorem of [@Stull22c]. Namely, depending on $D$, we may have a rather large lower bound on $t$. However, in the next section, where we apply this projection theorem, it is easy to deduce the desired result directly whenever $t$ is smaller than the above bound. We will begin by introducing some definitions and tools from [@Stull22c] which will be of use in proving the modified projection theorem. ## Projection preliminaries We need to consider $K^A_s(x)$ as a function of $s$.
For convenience, we define $f:\mathbb{R}_+ \rightarrow \mathbb{R}_+$ to be the piecewise linear function which agrees with $K^A_s(x)$ on the integers and satisfies $f(a) = f(\lfloor a\rfloor) + (a -\lfloor a\rfloor)(f(\lceil a \rceil) - f(\lfloor a\rfloor))$, for any non-integer $a$. Note that $f$ is non-decreasing since $K^A_r(x)$ is, and, for every $a < b\in\mathbb{N}$, $f(b) - f(a) \leq 2(b-a) + O(\log \lceil b \rceil)$. That is, the maximal growth rate of $f$ on large intervals is about 2. There are several special kinds of intervals on which we can bound the complexity of projections. - An interval $[a,b]$ is called ***teal*** if $f(b) - f(c) \leq b-c$ for every $a\leq c\leq b$. - An interval $[a, b]$ is called ***yellow*** if $f(c) - f(a) \geq c - a$ for every $a\leq c \leq b$. More concretely, these intervals are useful due to the following proposition, which is Corollary 16 in [@Stull22c]: **Proposition 17**. *Let $A\subseteq \mathbb{N}$, $x\in\mathbb{R}^2, e\in\mathcal{S}^1, \varepsilon\in\mathbb{Q}_+$, $C\in\mathbb{N}$ and $t<r\in\mathbb{R}_+$. Suppose that $r$ is sufficiently large and $K^{A,x}_s(e) \geq s - C\log r$, for all $s\leq r-t$. Then the following hold.* 1. *If $[t,r]$ is yellow, $$K^A_{r,r,r,t}(x\mid p_e x, e,x) \leq K^A_{r,t}(x\mid x) - (r-t) + \varepsilon r .$$* 2. *If $[t,r]$ is teal, $$K^A_{r,r,r,t}(x\mid p_e x, e,x) \leq \varepsilon r.$$* We denote the set of teal intervals by $T$ and the set of yellow intervals by $Y$. Supposing that a partition of $[1, r]$ consists of only yellow and teal intervals, and has essentially a constant number of terms, we could repeatedly apply symmetry of information together with [Proposition 17](#prop:projectionYellowTeal){reference-type="ref" reference="prop:projectionYellowTeal"} and deduce a useful bound for $K^A_r(x \,|\, p_e x, e)$.
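The yellow and teal conditions are simple pointwise properties of $f$ and can be checked mechanically. The following sketch classifies intervals of an integer-sampled complexity function; the profile `f` is made up for illustration and is not data from the paper. Since $f$ is piecewise linear with integer breakpoints, checking the conditions at integer points suffices.

```python
def is_teal(f, a, b):
    """[a, b] is teal if f(b) - f(c) <= b - c for every c in [a, b]."""
    return all(f[b] - f[c] <= b - c for c in range(a, b + 1))


def is_yellow(f, a, b):
    """[a, b] is yellow if f(c) - f(a) >= c - a for every c in [a, b]."""
    return all(f[c] - f[a] >= c - a for c in range(a, b + 1))


# Hypothetical complexity profile on [0, 8]: growth rate 2 on [0, 4]
# (yellow but not teal), then a flat stretch on [4, 8] (teal but not
# yellow).  An interval that is both would be a candidate green interval.
f = [0, 2, 4, 6, 8, 8, 8, 8, 8]
```

Running the checks on this profile confirms that steep stretches are yellow and flat stretches are teal, matching the informal reading of the two definitions.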
We make the notion of such an "admissible" partition more precise: a partition $\mathcal{P}=\{[a_i, a_{i+1}]\}_{i=0}^k$ of closed intervals with disjoint interiors is ***$(M,r,t)$-admissible*** if $[1, r] = \cup_i [a_i, a_{i+1}]$, and it satisfies the following conditions. - $k \leq M$, - $[a_i, a_{i+1}]$ is either yellow or teal, - $a_{i+1} \leq a_i + t$. We can repeatedly apply the symmetry of information to write the complexity $K^A_{r}(x \mid p_e x, e)$ as a sum of complexities over a partition of $[1, r]$, allowing us to apply [Proposition 17](#prop:projectionYellowTeal){reference-type="ref" reference="prop:projectionYellowTeal"}. This idea is encapsulated by the following result in [@Stull22c]. **Lemma 18**. *Suppose that $x \in \mathbb{R}^2$, $e \in \mathcal{S}^1$, $\varepsilon\in \mathbb{Q}^+$, $C\in\mathbb{N}$, $t, r \in \mathbb{N}$ satisfy (P1)-(P3). If $\mathcal{P} = \{[a_i, a_{i+1}]\}_{i=0}^k$ is a $(3C,r,t)$-admissible partition, and $r$ is sufficiently large, then $$\begin{aligned} K^A_{r}(x \mid p_e x, e) &\leq \varepsilon r + \sum\limits_{i\in \textbf{Bad}} K^A_{a_{i+1}, a_{i}}(x \mid x) - (a_{i+1} - a_i),\end{aligned}$$ where* ***Bad** $=\{i\leq k\mid [a_i, a_{i+1}] \notin T\}$.* Note that admissible partitions of specified intervals always exist, as per the following lemma: **Lemma 19**. *Let $x\in\mathbb{R}^2$, $r, C\in\mathbb{N}$ and $\frac{r}{C}\leq t < r$. For any $0\leq a < b \leq r$, there is a $(3C,r,t)$-admissible partition of $[a,b]$.* However, a partition merely being admissible is not enough to establish the desired bounds. We can do better by considering the special intervals which are both yellow and teal. - An interval $[a, b]$ is ***green*** if it is yellow and teal and $b-a\leq t$. Green intervals are often advantageous because they combine the best of yellow intervals (the complexity of $x$ grows at least at unit rate) and the best of teal intervals (we can compute $x$ with few bits given its projection).
We denote the set of green intervals by $G$. We now introduce two more types of intervals to formulate a few results pertaining to green intervals. - An interval $[a,b]$ is called ***red*** if $f$ is strictly increasing on $[a,b]$. - An interval $[a,b]$ is called ***blue*** if $f$ is constant on $[a,b]$. In [@Stull22c], a partition $\hat{\mathcal{P}}=\hat{\mathcal{P}}(x, r, t)$ of $[1, r]$ with the following properties is constructed: - The interiors of the elements of $\hat{\mathcal{P}}$ are disjoint. - Each interval is red, blue, or green. - If $[a, b]$ is red and $[b, c]$ is blue (not necessarily in $\hat{\mathcal{P}}$), then $b$ is contained in the interior of a green interval in $\hat{\mathcal{P}}$. Moreover, any $b$ that is contained in *any* green interval is contained in a green interval in $\hat{\mathcal{P}}$. - Suppose $I_0,\ldots, I_{n+1}$ is a red-green-blue sequence in $\hat{\mathcal{P}}$, i.e., a sequence $I_0,\ldots, I_{n+1}$ of consecutive intervals in $\hat{\mathcal{P}}$ such that $I_0$ is red, $I_1,\ldots, I_n$ are green, and $I_{n+1}$ is blue. Then the total length of $I_1,\ldots, I_n$ is at least $t$. Call a maximal collection of consecutive green intervals a ***green block***. The last property, that green blocks preceded by a red and succeeded by a blue interval have length at least $t$, will be particularly important. Informally, this property holds because if a green block had total length less than $t$ with red on the left and blue on the right, it would always be possible to lengthen the green by consuming some of the blue and red. The final fact from [@Stull22c] we need in this section is the following: if there is no red-green-blue sequence in $\hat{\mathcal{P}}= \hat{\mathcal{P}}(x, r, t)$, then there is an essentially all yellow admissible partition of $[1, r]$.
More specifically, in this case there is an admissible partition $\mathcal{P}$ such that for some $c$ not depending on $r$, if $c\leq a_i<a_{i+1}$ and $[a_i, a_{i+1}]\in \mathcal{P}$, then $[a_i, a_{i+1}]$ is yellow. Intuitively, this is because if a blue interval appears after a red interval, there has to be a red-green-blue sequence somewhere in between. Since $\dim(x)>1$, after a certain point there has to be a red interval. Consequently, after a certain point, there can only be red or green intervals, which we can convert into an all-yellow partition of $[c, r]$. *Remark 3*. In fact, it is easy to convert a partition of any subinterval of $[1, r]$ consisting of only red and green intervals into an all-yellow $3C$-admissible partition of the subinterval. Just observe that green intervals are yellow, any subinterval of a red interval is yellow, and the union of adjacent yellow intervals is yellow. Greedily combining the red and green intervals from the left to the right and beginning a new yellow interval when the length of the previous yellow is about to exceed $t$ accomplishes this. With these definitions and tools, we can now prove the modified projection theorem. ## Proof of the projection theorem **Theorem [Theorem 16](#thm:modifiedProjectionTheorem){reference-type="ref" reference="thm:modifiedProjectionTheorem"} 1**. *Let $x \in \mathbb{R}^2$, $e \in \mathcal{S}^1$, $\varepsilon\in \mathbb{Q}^+$, $C\in\mathbb{N}$, $A\subseteq\mathbb{N}$, and $t, r \in \mathbb{N}$. Suppose that $r$ is sufficiently large, and that the following hold.* 1. *$1 < d \leq \dim^A(x) \leq \mathop{\mathrm{Dim}}^A(x) \leq D$.* 2. *$t \geq \frac{d(2-D)}{2}r$.* 3. *$K^{A, x}_s(e) \geq s - C\log s$, for all $s \leq t$.* *Then $$K^A_r(x \,|\, p_e x, e) \leq \max\{\frac{D-1}{D}(dr - t) + K^A_r(x) - dr, K^A_r(x) - r\} + \varepsilon r.$$* *Proof.* Let $x$ be as in the statement of the theorem, $r$ sufficiently large, and $t \geq \frac{d(2-D)}{2}r$.
Let $\hat{\mathcal{P}}= \hat{\mathcal{P}}(x, r, t)$ be a partition of $[1,r]$ satisfying the properties described in the last section. Let $S$ be the number of red-green-blue sequences in $\hat{\mathcal{P}}$. Note that $S<\frac{2}{d(2-D)}$, since the green block in each red-green-blue sequence is at least length $t$, and these blocks have to be separated from each other by some amount. **The case $S$ = 0:** In this case, there are no red-green-blue sequences, so let $\mathcal{P}$ be the all yellow $3C$-admissible partition guaranteed by the last fact of the previous section. Then $$\sum\limits_{I_i \in \mathcal{P}-Y} a_{i+1} - a_i \leq c,$$ for some constant $c$, and so for sufficiently large $r$, $$B := \sum\limits_{I_i \in \mathcal{P}\cap Y} a_{i+1} - a_i \geq r - \frac{\varepsilon r}{2}.$$ By symmetry of information, we can write $$\begin{aligned} K^A_r(x) &\geq \sum\limits_{I_i \in \mathcal{P}} K^A_{a_{i+1}, a_i}(x\mid x) - O(\log r)\tag*{}\\ &\geq \sum\limits_{I_i \in \mathcal{P}\cap Y} K^A_{a_{i+1}, a_i}(x\mid x) - O(\log r)\\ &\geq -\frac{\varepsilon r}{4} + \sum\limits_{I_i \in \mathcal{P}\cap Y} K^A_{a_{i+1}, a_i}(x\mid x)\tag*{}\end{aligned}$$ To apply Lemma [Lemma 18](#lem:boundGoodPartitionProjection){reference-type="ref" reference="lem:boundGoodPartitionProjection"} we note that, on a green interval $[a,b]$, $K^A_{b, a}(x\mid x) = b - a$. Therefore, we see that $$\begin{aligned} K^A_r(x) &\geq K^A_r(x\mid p_e x, e) + B -\frac{\varepsilon r}{2}\tag*{[Lemma \ref{lem:boundGoodPartitionProjection}]}\\ &\geq K^A_r(x\mid p_e x, e) + r - \varepsilon r.\end{aligned}$$ Thus, $$\label{sZero1} K^A_r(x\mid p_e x, e) \leq K^A_r(x) - r + \varepsilon r,$$ and the conclusion follows. **The case $S$ = 1:** Now, suppose there is exactly one red-green-blue sequence in $\hat{\mathcal{P}}$. Then there is a precision $1 < r_1 < r - t$ and an $s\geq t$ such that $[r_1, r_1+s]$ is green in $\hat{\mathcal{P}}$.
Let $\mathcal{P}_1$ be a $3C$-admissible partition of $[1,r_1]$, and $\mathcal{P}_2$ be a $3C$-admissible partition of $[r_1 +s, r]$, guaranteed by Lemma [Lemma 19](#lem:goodPartitionProjection){reference-type="ref" reference="lem:goodPartitionProjection"}. Since there is exactly one red-green-blue sequence in $\hat{\mathcal{P}}$, $\mathcal{P}_1$ contains no red-green-blue sequences. Therefore, using the same argument as in the previous case, $\mathcal{P}_1$ is essentially covered by yellow intervals and we conclude that $$\begin{aligned} K^A_{r_1}(x \mid p_e x, e) &\leq K^A_{r_1}(x) - r_1 + \frac{\varepsilon r_1}{4}\\ &\leq (D-1)r_1 + \frac{\varepsilon r_1}{2}.\end{aligned}$$ Let $r_2 \geq r_1+s$ be the minimal precision such that $[r_2, r]$ is a union of yellow intervals.[^7] Set $B := r_1 + (r - r_2)$. Then we have $$\begin{aligned} K^A_{r}(x\mid p_e x, e) &\leq K^A_{r_1}(x) - r_1 + K^A_{r, r_2}(x\mid x) - (r-r_2) + \varepsilon r\\ &\leq \left(D - 1\right)r_1 + K^A_{r, r_2}(x\mid x) - (r-r_2) + \varepsilon r\\ &\leq \left(D - 1\right)r_1 + \left(d - 1\right)\left(r-r_2\right) + K^A_r(x) - d r + \varepsilon r\\ &\leq \left(D - 1\right)B + K^A_r(x) - d r + \varepsilon r.\end{aligned}$$ If $B \leq \frac{d r - t}{D}$, the conclusion follows. So, we assume that $B > \frac{d r - t}{D}$. Since $[r_1, r_1+s]$ is green, $K^A_{r_2}(x) \geq K^A_{r_1}(x) + s$, and hence $$\begin{aligned} K^A_{r}(x\mid p_e x, e) &\leq K^A_r(x) - s - B+ \varepsilon r\\ &\leq K^A_r(x) - t - B+ \varepsilon r\\ &< K^A_r(x) - t - \frac{d r - t}{D}+ \varepsilon r\\ &= K^A_r(x) - d r + \frac{D-1}{D}\left(d r - t\right)+ \varepsilon r,\end{aligned}$$ and the conclusion follows. **The case $S\geq 2$:** We now consider the case that there are at least two red-green-blue sequences in $\hat{\mathcal{P}}$. Let $$\label{eq:lengthOfTealIntervals} L = \sum\limits_{I_i \in \mathcal{P}\cap G} a_{i+1} - a_i$$ be the total length of the green intervals in $\mathcal{P}$. In this case we have $L \geq 2t$.
Let $$B = \sum\limits_{i\in \textbf{Bad}} a_{i+1}-a_i$$ be the total length of the bad (non-teal) intervals in $\mathcal{P}$. We first prove that $$\label{eq:projectionMainThm1} K^A_r(x\mid p_e x, e) \leq \min\{K^A_r(x) - B - 2t, B\} +\varepsilon r.$$ Since $x$ is an element of $\mathbb{R}^2$, $$K^A_{a_{i+1}, a_i}(x\mid x) \leq 2(a_{i+1} - a_i) + O(\log r).$$ Therefore, by Lemma [Lemma 18](#lem:boundGoodPartitionProjection){reference-type="ref" reference="lem:boundGoodPartitionProjection"}, with respect to $\varepsilon/ 4$, $$\label{eq:projectionMainThm2} K^A_r(x\mid p_e x, e) \leq \frac{\varepsilon r}{2} + B.$$ By repeated applications of the symmetry of information, $$\begin{aligned} K^A_r(x) &\geq -\frac{\varepsilon r}{2} + \sum\limits_{I_i \in \mathcal{P}\cap T} K^A_{a_{i+1}, a_i}(x\mid x) + \sum\limits_{i\in \textbf{Bad}} K^A_{a_{i+1}, a_i}(x\mid x)\tag*{}\\ &\geq -\frac{\varepsilon r}{2} + \sum\limits_{I_i \in \mathcal{P}\cap G} K^A_{a_{i+1}, a_i}(x\mid x) + \sum\limits_{i\in \textbf{Bad}} K^A_{a_{i+1}, a_i}(x\mid x)\tag*{}\\ &= -\frac{\varepsilon r}{2} +L + \sum\limits_{i\in \textbf{Bad}} K^A_{a_{i+1}, a_i}(x\mid x)\tag*{}\\ &\geq 2t + K^A_r(x\mid p_e x, e) + B -\varepsilon r\label{eq:projectionMainThm3}\end{aligned}$$ Combining ([\[eq:projectionMainThm2\]](#eq:projectionMainThm2){reference-type="ref" reference="eq:projectionMainThm2"}) and ([\[eq:projectionMainThm3\]](#eq:projectionMainThm3){reference-type="ref" reference="eq:projectionMainThm3"}) proves inequality ([\[eq:projectionMainThm1\]](#eq:projectionMainThm1){reference-type="ref" reference="eq:projectionMainThm1"}). By inequality ([\[eq:projectionMainThm1\]](#eq:projectionMainThm1){reference-type="ref" reference="eq:projectionMainThm1"}), if $$B \leq K^A_r(x) - d r+ \frac{D-1}{D}(dr - t),$$ we are done, so we assume otherwise. 
Applying ([\[eq:projectionMainThm1\]](#eq:projectionMainThm1){reference-type="ref" reference="eq:projectionMainThm1"}) again and using our assumption on $t$ implies that $$\begin{aligned} K^A_r(x\mid p_e x,e) &\leq K^A_r(x) - 2t - B+\varepsilon r\\ &< \frac{d}{D}r - \frac{D+1}{D}t+\varepsilon r\\ &\leq \frac{D-1}{D}(dr-t) +\varepsilon r\\ &\leq K^A_r(x) - d r + \frac{D-1}{D}\left(d r - t\right) + \varepsilon r,\end{aligned}$$ and the proof is complete. ◻ # Effective dimension of distances {#sec:effdim} In the previous section, we considered partitions of the interval $[1, r]$ depending on the complexity function of our pinned point $x$. In particular, we were able to use these partitions to get a bound on the complexity of $x$ given $e$ and the projection of $x$ in the direction of $e$. Now, we need to consider the complexity function of $y$ and relate this to the complexity of $\vert x-y\vert$. Similar to before, we let $f:\mathbb{R}_+ \rightarrow \mathbb{R}_+$ be the piecewise linear function which agrees with $K^A_s(y)$ on the integers and satisfies $f(a) = f(\lfloor a\rfloor) + (a -\lfloor a\rfloor)(f(\lceil a \rceil) - f(\lfloor a\rfloor))$, for any non-integer $a$. Since $K^A_s(y) \leq K^A_{s+1}(y)$ for every $s\in\mathbb{N}$, $f$ is non-decreasing, and since $y\in \mathbb{R}^2$, for every $a < b\in\mathbb{R}$, $f(b) - f(a) \leq 2(b-a) + O(\log \lceil b \rceil)$. As before, we make the following definitions: an interval $[a,b]$ is called ***teal*** if $f(b) - f(c) \leq b-c$ for every $c\in[a,b]$. It is called ***yellow*** if $f(c) - f(a) \geq c - a$, for every $c\in[a,b]$. We denote the set of teal intervals by $T$ and the set of yellow intervals by $Y$. For reference, here we list some conditions that our points $x$ and $y$ will be assumed to satisfy throughout the remainder of this section. Let $x,y\in\mathbb{R}^2$, $e = \frac{x-y}{\vert x - y\vert}$ and $A, B \subseteq \mathbb{N}$.
For this section, we let $d_x = \dim^A(x)$, $D_x = \mathop{\mathrm{Dim}}^A(x)$, $d_y = \dim^A(y)$ and $D_y = \mathop{\mathrm{Dim}}^A(y)$. We will assume that $x$ and $y$ satisfy the following conditions. - $d_x,d_y > 1$. - $K^{x,A}_r(e) = r - O(\log r)$ for all $r$. - $K^{x,A, B}_r(y) \geq K^{A}_r(y) - O(\log r)$ for all sufficiently large $r$. - $K^{A}_r(e\mid y) = r - o(r)$ for all $r$. ## Complexity of distances on yellow and teal intervals {#subsec:distYellowTeal} As compared to the corresponding part of [@Stull22c], we will need to partition $[1, r]$ more carefully. However, similar to the proof of the projection theorem in the previous section, there are a few tools from [@Stull22c] that we can reuse. **Lemma 20**. *Suppose that $A\subseteq\mathbb{N}$, $x, y \in \mathbb{R}^2$ and $e = \frac{x - y}{\vert x - y\vert}$ satisfy (C1)-(C4) for every $r\in\mathbb{N}$. Then for every $\varepsilon\in\mathbb{Q}_+$ and all sufficiently large $r\in\mathbb{N}$, the following hold.* 1. *If $[t,r]$ is yellow and $t\leq r\leq 2t$, then $$K^{A,x}_{r,r,t}(y\mid \vert x-y\vert, y) \leq K^{A}_{r,t}(y\mid y) - (r-t) + \varepsilon r .$$* 2. *If $[t,r]$ is teal and $t\leq r\leq 2t$, then $$K^{A,x}_{r,r,t}(y\mid \vert x-y\vert, y) \leq \varepsilon r .$$* We say that a partition $\mathcal{P} = \{[a_i, a_{i+1}]\}_{i=0}^k$ of intervals with disjoint interiors is ***good*** if $[1, r] = \cup_i [a_i, a_{i+1}]$ and it satisfies the following conditions. - $[a_i, a_{i+1}]$ is either yellow or teal, - $a_{i+1} \leq 2a_i$ for every $i$, and - $a_{i+2} > 2 a_{i}$ for every $i < k$. Note that (G3) ensures that the errors do not pile up. Furthermore, observe that (G2) is somewhat different from the admissibility condition in the previous section. There, an interval could not be longer than $t$, which is essentially a fixed quantity.
Now, the requirement is that the intervals cannot be more than "doubling", so the best we can hope for in a partition is a logarithmic number of terms. Indeed, just as we had admissible partitions for every $[a, b]$, the following lemma guarantees the existence of good partitions. **Lemma 21**. *For every $y\in \mathbb{R}^2$ and every $r\in\mathbb{N}$, there is a good partition of $[1, r]$.* The following lemma uses repeated applications of the symmetry of information to write $K^x_r(y\mid \vert x-y\vert)$ as a sum of its complexity on the intervals of a partition. The conclusion then follows via Lemma [Lemma 20](#lem:distancesYellowTeal){reference-type="ref" reference="lem:distancesYellowTeal"}. **Lemma 22**. *Let $A\subseteq\mathbb{N}$ and $\varepsilon\in\mathbb{Q}_+$. Let $\mathcal{P} = \{[a_i, a_{i+1}]\}_{i=0}^k$ be a good partition. Then, for all sufficiently large $r$, $$\begin{aligned} K^{A,x}_{r}(y \mid\vert x-y\vert) &\leq \varepsilon r + \sum\limits_{i\in \textbf{Bad}} K^{A}_{a_{i+1}, a_{i}}(y \mid y) - (a_{i+1} - a_i),\end{aligned}$$ where* ***Bad** $=\{i\leq k\mid [a_i, a_{i+1}] \notin T\}$.* So applying the previous lemma with $\frac{\varepsilon}{2}$, absorbing the log term for sufficiently large $r$, and recalling condition (C3), we have that $$\label{eq:distancepartitionbound} K^{x, A}_r(\vert x - y\vert) \geq K^A_r(y) - \sum\limits_{i\in \textbf{Bad}} K^A_{a_{i+1}, a_{i}}(y \mid y) - (a_{i+1} - a_i) - \varepsilon r\,.$$ Constructing a particular good partition to optimize this bound, either at every precision or at well-chosen precisions, will be crucial to proving a bound on the effective Hausdorff and effective packing dimension of such points (respectively) and will thus be a key goal of the remainder of this section. ## Sufficient conditions for an all-yellow partition We will describe a more general partition strategy in the next subsection, but here we will introduce a related partition for a simpler scenario.
In particular, we will discuss situations where we can guarantee the existence of a good partition consisting (essentially) of only yellow intervals which are not more than doubling. To see why this is significant, observe that if we had such a partition $\mathcal{P}$, equation ([\[eq:distancepartitionbound\]](#eq:distancepartitionbound){reference-type="ref" reference="eq:distancepartitionbound"}), together with the observation that the complexity of $y$ grows at an average rate of exactly 1 on yellow intervals that are *also* green, implies that for sufficiently large $r$, $$\begin{aligned} K^{x, A}_r(\vert x - y\vert) &\geq K^A_r(y) - \sum\limits_{I_i\in \mathcal{P}} K^A_{a_{i+1}, a_{i}}(y \mid y) - (a_{i+1} - a_i) - \frac{\varepsilon}{2} r\\ &\geq K^A_r(y) - K^A_r(y) + r - \varepsilon r\\ &= (1-\varepsilon) r\,.\end{aligned}$$ Since we can take $\varepsilon$ to be as small as desired, this implies the dimension of the distance is 1. We now state the necessary conditions and make the above argument more rigorous. **Proposition 23**. *Suppose that, in addition to conditions $(C1)-(C4)$, we also have that $D_y<2d_y-1$. Then $\dim^{x, A}(\vert x-y\vert)=1$.* Intuitively, this extra requirement expresses that, if $y$ is semi-regular, then we have the same conclusion as if $y$ were regular (in the sense of having equal effective Hausdorff and effective packing dimension). As long as these dimensions do not differ by too much, we can find an all-yellow partition. To see why we have this bound specifically, consider the adversary complexity function in Figure [\[fig:allyellow\]](#fig:allyellow){reference-type="ref" reference="fig:allyellow"}. We can only work with yellow intervals that are at most doubling, so we want to choose $D_y$ and $d_y$ to guarantee that the average growth rate of the complexity from $\frac{r}{2}$ to $r$ is at least 1. Maximizing $K^A_{\frac{r}{2}}(y)$ and minimizing $K^A_{r}(y)$ allows us to conclude the above when $D_y<2d_y-1$. Now, we prove the proposition.
*Proof.* Pick an $\varepsilon>0$ small enough that $d_x-\frac{\varepsilon}{4}>1$ and $D_y+\frac{\varepsilon}{4}<2(d_y-\frac{\varepsilon}{4})-1$. For all $r$ sufficiently large, we have that $(d_y - \frac{\varepsilon}{4}) s < K^A_s(y)<(D_y + \frac{\varepsilon}{4})s$ whenever $s\geq\frac{\varepsilon}{4} r$. We now construct a partition of the interval $[\frac{\varepsilon}{2} r, r]$. Set $r_0 = r$. Given $r_i$, first check if $r_i\leq \frac{\varepsilon}{2}r$. If it is, set $n=i$ and the construction is complete. If not, we let $r_{i+1}$ be the largest value of $s$ such that $K^A_s(y) = K^A_{\frac{r_i}{2}}(y) + (s-\frac{r_i}{2})$. Note that $\frac{r_i}{2}>\frac{\varepsilon}{4}r$, so the inequalities at the start of the proof hold in the interval we consider on this step. First, we show that $r_{i+1}<r_i$. To see this, observe that $$K^A_{\frac{r_i}{2}}(y) + (s-\frac{r_i}{2})< s+\left(\frac{D_y}{2}+\frac{\varepsilon}{8}-\frac{1}{2}\right)r_i\,.$$ On the other hand, $$K^A_s(y)>(d_y -\frac{\varepsilon}{4})s,$$ so for an $s$ satisfying the above equation, we have $$s<\dfrac{\frac{D_y}{2} - \frac{1}{2} + \frac{\varepsilon}{8}}{d_y-1-\frac{\varepsilon}{4}}r_i<r_i.$$ Here, the second inequality follows from our choice of $\varepsilon$. Now, we define the partition $\mathcal{P}$ to be $[1, r_n], [r_n, r_{n-1}], \ldots, [r_1, r_0]$. We now claim that each $[r_{i+1}, r_i]$ is a yellow interval which is at most doubling. If it were not yellow, we would have some $s^\prime\in[r_{i+1}, r_i]$ such that $K^A_{s^\prime}(y)-K^A_{r_{i+1}}(y)<s^\prime - r_{i + 1}$. This implies that $K^A_{s^\prime}(y) < K^A_{\frac{r_i}{2}}(y) + (s^\prime-\frac{r_i}{2})$. However, $K^A_{r_i}(y) > K^A_{\frac{r_i}{2}}(y) + (r_i-\frac{r_i}{2})$, so by the intermediate value theorem, there is some $s^{\prime\prime}\in(s^\prime, r_i)$ such that $K^A_{s^{\prime\prime}}(y) = K^A_{\frac{r_i}{2}}(y) + (s^{\prime\prime}-\frac{r_i}{2})$, contradicting that $r_{i+1}$ was the maximal such precision.
Finally, $[r_{i + 1}, r_i]$ is at most doubling because $r_{i+1}$ cannot be any less than $\frac{r_i}{2}$, as $K^A_{\frac{r_i}{2}}(y) = K^A_{\frac{r_i}{2}}(y) + (\frac{r_i}{2}-\frac{r_i}{2})$. Thus we have a partition of $[1, r]$ where all but the first interval are yellow. We want to apply Lemma [Lemma 22](#lem:boundGoodPartitionDistance){reference-type="ref" reference="lem:boundGoodPartitionDistance"}, so we make $\mathcal{P}$ into a good partition by taking a good partition $\mathcal{P}_{[1, r_n]}$ of the first interval $[1, r_n]$ and replacing it in $\mathcal{P}$ with $\mathcal{P}_{[1, r_n]}$. As for the remaining intervals, the union of yellow intervals is still yellow, so simply greedily combine them from the left to the right. Start a new yellow interval each time adding the next $[r_{i+1}, r_i]$ would make the current yellow interval more than doubling. For ease of notation, continue to denote this modification of $\mathcal{P}$ by $\mathcal{P}$. The conditions being satisfied, we apply Lemma [Lemma 22](#lem:boundGoodPartitionDistance){reference-type="ref" reference="lem:boundGoodPartitionDistance"} relative to $A$ with $\frac{\varepsilon}{2}$ via ([\[eq:distancepartitionbound\]](#eq:distancepartitionbound){reference-type="ref" reference="eq:distancepartitionbound"}), obtaining: $$\begin{aligned} K^{x, A}_r(\vert x - y\vert) &\geq K^A_r(y) - \frac{\varepsilon}{2} r - \sum\limits_{I_i\in \mathcal{P}\setminus\mathcal{P}_{[1, r_n]}} K^A_{a_{i+1}, a_{i}}(y \mid y) - (a_{i+1} - a_i)\\ &\geq K^A_r(y) - (K^A_r(y)-r + \frac{\varepsilon}{2}r) - \frac{\varepsilon}{2}r\\ &=(1 - \varepsilon)r\,.\end{aligned}$$ Since we only needed $r$ to be sufficiently large, $$\dim^{x, A}(\vert x-y\vert)\geq 1-\varepsilon\,,$$ and taking a sequence of $\varepsilon$ going to 0 gives the desired conclusion. ◻
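The greedy combination step in the proof above can be sketched as follows. The endpoint list here is hypothetical, and, as in the proof, we assume that each input interval $[e_i, e_{i+1}]$ is itself at most doubling, so that a union of adjacent yellow intervals stays yellow and the greedy merge never produces a degenerate block.

```python
def greedy_doubling_merge(endpoints):
    """Merge consecutive (yellow) intervals [e_i, e_{i+1}] left to right,
    starting a new interval whenever the current merged interval would
    become more than doubling (b > 2a).

    `endpoints` is an increasing list e_0 < e_1 < ... < e_n with
    e_{i+1} <= 2 * e_i for every i (each input interval at most doubling).
    """
    merged = []
    start = endpoints[0]   # left end of the interval being built
    prev = endpoints[0]    # last endpoint absorbed so far
    for e in endpoints[1:]:
        if e > 2 * start:  # absorbing e would exceed doubling: close up
            merged.append((start, prev))
            start = prev
        prev = e
    merged.append((start, prev))
    return merged


# Hypothetical yellow-interval endpoints, each step at most doubling.
parts = greedy_doubling_merge([1, 2, 3, 4, 6, 9, 14, 20])
```

Every merged interval $[a,b]$ satisfies $b\leq 2a$, and consecutive merged intervals share endpoints, so the result is again a partition by (at most doubling) yellow intervals.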
## Constructing a general partition

In this subsection, we describe a partitioning strategy that works outside of the special case considered in Proposition [Proposition 23](#prop:yellowfulldimension){reference-type="ref" reference="prop:yellowfulldimension"}. The key limitation of the previous subsection was that we could only use intervals that were at most doubling, a restriction we would now like to relax, at least for certain intervals, using the projection theorem of Section 3. The new partition will involve a combination of yellow intervals and certain teal intervals, with the teal intervals chosen so that we can apply Theorem [Theorem 16](#thm:modifiedProjectionTheorem){reference-type="ref" reference="thm:modifiedProjectionTheorem"} to understand the complexity growth of $\vert x-y\vert$ on them. To begin, fix a precision $r\in\mathbb{N}$. To keep the expressions involved reasonably brief, we set $d = \min\{d_x, d_y\}$ and $D = \max\{D_x, D_y\}$.[^8] Our goal is to give a lower bound on the complexity $K^{A,x}_r(\vert x-y\vert)$ at precision $r$. We will first define a sequence $r = r_0 > r_1 >\ldots > r_k = 1$ of precisions. We then prove a lower bound on the complexity $K^A_{r_{i+1}, r_i}(\vert x - y\vert \mid \vert x - y\vert)$ over each interval of the resulting partition of $[1, r]$.

**Constructing the partition $\mathcal{P}$**: We define the sequence $\{r_i\}$ inductively. To begin, we set $r_0 = r$. Having defined the sequence up to $r_i$, we choose $r_{i+1}$ as follows. Let $a \leq r_i$ be the minimal real such that $[a, r_i]$ can be written as a union of yellow intervals, each of which is at most doubling. If $a < r_i$, then we set $r_{i+1} = a$. In this case, we will refer to $[r_{i+1},r_i]$ as a **yellow** interval of $\mathcal{P}$.
Otherwise, let $t_i < r_i$ be the maximum of all reals such that $$\label{eq:choiceOfRi+1} f(t) = f(r_i) + \frac{D - 1}{D}\left(dr_i - (d +1)t\right) - d(r_i-t).$$ Let $t_i^\prime < r_i$ be the largest real such that $f(r_i) = f(t^\prime_i) + r_i - t_i^\prime$. Note that such a $t_i^\prime$ must exist. We then set $$r_{i+1} = \max\{t_i, t_i^\prime\}.$$ Note that, in this case, $[r_{i+1}, r_i]$ is teal. We therefore refer to intervals of this case as **teal** intervals of $\mathcal{P}$. We begin by observing that our partition is well defined.

**Observation 24**. *Suppose that $r_i\leq r$ and $r_i > C$, for some fixed constant $C$ depending only on $y$. Then there is at least one $t$ such that $$f(t) = f(r_i) + \frac{D - 1}{D}\left(dr_i - (d +1)t\right) - d(r_i-t).$$*

*Proof.* We first note that $f(0) = O(1)$. Evaluating the right-hand side at $t = 0$, we see that $$\begin{aligned} f(r_i) + \frac{D - 1}{D}\,dr_i - dr_i &= f(r_i) - \frac{d}{D}r_i\\ &> f(0),\end{aligned}$$ provided $r_i$ is larger than some constant depending only on $y$. We also see that, at $t = r_i$, $$f(r_i) + \frac{D - 1}{D}\left(dr_i - (d +1)t\right) - d(r_i-t) < f(r_i),$$ and so by the intermediate value theorem, the conclusion follows. ◻

We now show that the partition $\mathcal{P}$ does not contain too many intervals. This will allow us to control the accumulation of error terms when applying the symmetry of information.

**Lemma 25**. *If $[r_{i+1}, r_i] \in \mathcal{P}$ is teal, then $r_{i+1} \leq \frac{r_i}{2}$.*

*Proof.* Suppose that $[r_{i+1}, r_i] \in \mathcal{P}$ is teal. Then, by the construction of $\mathcal{P}$, $[t, r_i]$ is not yellow, for any $\frac{r_i}{2}\leq t < r_i$. This immediately implies that $t^\prime_i < \frac{r_i}{2}$. Moreover, for any $t > \frac{r_{i}}{2}$, we see that $$\begin{aligned} f(t) - f(r_i) + \frac{d}{D}r_i - \frac{d+1-D}{D}t &\geq \frac{d+1}{D}t - \frac{D-d}{D}r_i\\ &> 0,\end{aligned}$$ implying that $t_i \leq \frac{r_i}{2}$, and the conclusion follows. ◻

**Lemma 26**. *Let $\varepsilon>0$.
Suppose that $r_{i+1}$ is sufficiently large and $[r_{i+1}, r_i]$ is a teal interval of the above partition. Then $$r_{i+1} \geq \frac{d(D-1)}{D^2 + D - d - 1}r_i-\varepsilon r_i.$$ In particular, $$r_{i+1} \geq \frac{d(2-D)}{2+d(2-D)}r_i+1.$$*

*Proof.* Assume that $r_{i+1}$ is large enough that, for all $s>r_{i+1}$, $$d s-\varepsilon^\prime s \leq K^A_s(y) \leq D s+\varepsilon^\prime s$$ for some $\varepsilon^\prime$ to be determined. By our choice of $r_{i+1}$, $$K^A_{r_{i+1}}(y) \geq K^A_{r_i}(y) - \frac{d}{D}r_i + \frac{1+d-D}{D}r_{i+1} - \varepsilon^\prime r_i.$$ We then have $$\begin{aligned} D r_{i+1} &\geq D_y r_{i+1}\\ &\geq K^A_{r_{i+1}}(y)-\varepsilon^\prime r_{i+1}\\ &\geq K^A_{r_i}(y) - \frac{d}{D}r_i + \frac{1+d-D-D\varepsilon^\prime}{D}r_{i+1}\\ &\geq dr_i-\varepsilon^\prime r_i - \frac{d}{D}r_i + \frac{1+d-D-D\varepsilon^\prime}{D}r_{i+1}\\ &= \frac{d(D-1)-D\varepsilon^\prime}{D}r_i + \frac{1+d-D-D\varepsilon^\prime}{D}r_{i+1}. \end{aligned}$$ Rearranging and simplifying yields $$r_{i+1} \geq \frac{d(D-1)-D\varepsilon^\prime}{D^2 + D - d - 1-D\varepsilon^\prime}r_i.$$ Given $\varepsilon>0$, choosing $\varepsilon^\prime$ sufficiently small compared to these quantities gives the desired conclusion: $$r_{i+1} \geq \frac{d(D-1)}{D^2 + D - d - 1}r_i-\varepsilon r_i.$$ It is straightforward to verify that this implies that $$r_{i+1} \geq \frac{d(2-D)}{2+d(2-D)}r_i$$ for sufficiently small $\varepsilon$. Indeed, using the above inequality, it suffices to show that $$\frac{d(D-1)}{D^2 + D - d - 1} > \frac{d(2-D)}{2+d(2-D)},$$ or, equivalently, $d(2-D) > D -D^2+1$. By our assumption $d > 1$, we have $d(2-D) \geq 2-D$, and $(2-D)-(D-D^2+1) = (D-1)^2 \geq 0$; since these two inequalities cannot both be equalities, $d(2-D) > D - D^2 + 1$ follows. Thus, for $r$ sufficiently large, we have $$r_{i+1} \geq \frac{d(2-D)}{2+d(2-D)}r_i+1.$$ ◻

**Lemma 27**. *Let $\varepsilon> 0$. Suppose that $[r_{i+1}, r_i]\in \mathcal{P}$ is a yellow interval.
Then, $$K^{A,x}_{r_i, r_{i+1}}(\vert x -y\vert \mid \vert x-y\vert) \geq r_i - r_{i+1} - \varepsilon r_i.$$*

*Proof.* By assumption, $[r_{i+1}, r_i]$ is the union of yellow intervals $[a_{j+1}, a_j]$ such that $a_j \leq 2a_{j+1}$. By a simple greedy strategy we can construct a partition $P_1 = \{[b_{k+1}, b_k]\}$ of $[r_{i+1}, r_i]$ such that, for every $k$, $[b_{k+1}, b_k]$ is yellow, $b_k \leq 2 b_{k+1}$ and $b_k > 2b_{k+2}$. That is, $P_1$ is a good partition of $[r_{i+1}, r_i]$. The conclusion then follows from Lemma [Lemma 22](#lem:boundGoodPartitionDistance){reference-type="ref" reference="lem:boundGoodPartitionDistance"}. ◻

**Lemma 28**. *Let $\varepsilon>0$ be given and suppose that $[r_{i+1}, r_i]$ is teal and $r_i$ is sufficiently large. Then $$\frac{K^A_{r_i, r_{i+1}}(y\mid y)}{r_i - r_{i+1}} \geq \min\left\{1, \frac{d(2D - d -1)}{D^2+D-Dd-1}-\varepsilon\right\}.$$*

*Proof.* Recall that we chose $r_{i+1} = \max\{t_i, t_i^\prime\}$, where $t_i^\prime$ is the largest real such that $[t_i^\prime, r_i]$ is green, and $t_i$ is the largest real such that $$f(t) = f(r_i) + \frac{D - 1}{D}\left(dr_i - (d +1)t\right) - d(r_i-t).$$ If $r_{i+1} = t_i^\prime$, then $$K^A_{r_{i+1}}(y) = K^A_{r_i}(y) - (r_i - r_{i+1}),$$ and the conclusion holds trivially. We now assume that $r_{i+1}=t_i$. Then, by the previous lemma, $$r_{i+1} \geq \frac{d(D-1)}{D^2 + D - d - 1}r_i-\varepsilon r_i.$$ We proceed via a tedious, but straightforward, calculation. Noting that our interval is not green, we have from the definition of $r_{i+1}$ that $$K^A_{r_i}(y)-K^A_{r_{i+1}}(y) =- \frac{D - 1}{D}\left(dr_i - (d +1)r_{i+1}\right) + d(r_i-r_{i+1}).$$ Using this condition and the above bound on $r_{i+1}$ allows one to bound the growth rate on such an interval. We omit the details since the algebra becomes quite unpleasant. ◻

**Observation 29**.
*When $D\leq\frac{(3+\sqrt{5})d-1-\sqrt{5}}{2}$, we have $\frac{d(2D-d-1)}{D^2+D-Dd-1}\geq 1$, so the minimum in the previous lemma is $1$ whenever $[r_{i+1},r_i]$ is teal.*

The last goal of this subsection is to prove that these teal intervals are useful, namely, that we can lower bound the growth of the complexity of the distance by the growth of the complexity of $y$ on them. We start by stating the following lemma from [@Stull22c]:

**Lemma 30**. *Let $x, y\in \mathbb{R}^2$ and $r\in \mathbb{N}$. Let $z\in \mathbb{R}^2$ such that $\vert x-y\vert = \vert x-z\vert$. Then for every $A\subseteq \mathbb{N}$, $$K^A_r(z) \geq K^A_t(y) + K^A_{r-t, r}(x\mid y) - K_{r-t}(x\mid p_{e^\prime} x, e^\prime) - O(\log r),$$ where $e^\prime = \frac{y-z}{\vert y-z\vert}$ and $t = -\log \vert y-z\vert$.*

Though it may appear somewhat cumbersome, the above lemma is a relatively straightforward consequence of attempting to compute $x$ given access to $y$ up to a certain precision through the use of $z$ and of $w$, the midpoint between $y$ and $z$, which has the property that $p_{e^\prime}x=p_{e^\prime}w$; this is a key connection between projections and distances. In particular, note the term $K_{r-t}(x\mid p_{e^\prime} x, e^\prime)$ above; bounding this is where the projection theorem will be useful. Now, we state the final lemma of this subsection.

**Lemma 31**. *Suppose that $[r_{i+1}, r_i] \in \mathcal{P}$ is a teal interval. For any $\varepsilon>0$, provided that $r_{i+1}$ is sufficiently large, we have $$K^{A,x}_{r_i, r_i, r_{i+1}}(y \mid \vert x - y \vert, y) \leq \varepsilon r_i.$$ Therefore, $K^{A,x}_{r_i, r_{i+1}}(\vert x - y\vert \mid \vert x - y\vert ) \geq K^{A,x}_{r_i, r_{i+1}}(y\mid y) - \varepsilon r_{i}$.*

Notice that the conclusion of Lemma [Lemma 15](#lem:pointDistance){reference-type="ref" reference="lem:pointDistance"} is almost exactly the conclusion of this lemma. Thus, we need to verify that its conditions are satisfied, which is the content of the proof below.
Essentially, this entails proving a lower bound on the complexity of points $z$ which are the same distance from $x$ as $y$ is.

*Proof.* Let some small rational $\varepsilon>0$ be given, and assume $r_{i+1}$ is sufficiently large. Let $\eta$ be the rational such that $\eta r_i=K^A_{r_i}(y)-4\varepsilon r_i$. Let $G=D(r, y, \eta)$ be the oracle of Lemma [Lemma 14](#lem:oracles){reference-type="ref" reference="lem:oracles"} relative to $A$. Our goal is to apply Lemma [Lemma 15](#lem:pointDistance){reference-type="ref" reference="lem:pointDistance"}. It is routine to verify that condition (i) of Lemma [Lemma 15](#lem:pointDistance){reference-type="ref" reference="lem:pointDistance"} holds. We must therefore verify condition (ii). That is, we need to show that, for any $z\in B_{2^{-r_{i+1}}}(y)$ whose distance from $x$ is $\vert x - y\vert$, either $K^{A,G}_{r_i}(z)$ is greater than $\eta r_i$ or $z$ is very close to $y$. Formally, we must show that, for any such $z$, $$\label{eq:goodtealgrowth4} K_{r_i}^{A, G}(z)\geq \eta r_i + \min\{\varepsilon r_i, r_i-s-\varepsilon r_i\},$$ where $s=-\log \vert y-z\vert$. To that end, let $z\in B_{2^{-r_{i+1}}}(y)$ such that $\vert x-y\vert=\vert x-z\vert$, and let $s=-\log\vert y-z\vert$. We consider two cases. For the first, assume that $s\geq \frac{r_i}{2}-\log r_i$. Then, as observed in [@Stull22c], the projections of $y$ and $z$ in the direction $e$ are almost exactly the same. Specifically, $\vert p_ey - p_e z\vert< r_i^2 2^{-r_i}$. Then, letting $r_i^\prime=r_i-2 \log r_i$, these projections are indistinguishable at precision $r_i^\prime$.
This enables us to apply Lemma [Lemma 13](#lem:lowerBoundOtherPoints){reference-type="ref" reference="lem:lowerBoundOtherPoints"} which, in conjunction with property (C4) and the properties of our oracle $G$, implies that $$K_{r_i}^{A, G}(z)\geq K_s^{A, G}(y) + r_i - s - \frac{\varepsilon}{2}r_i-O(\log r_i).$$ Then, using the fact that $K_s^{A, G}(y)=\min\{\eta r_i, K_s^{A}(y)\} + O(\log r_i)$ and considering each of these cases establishes ([\[eq:goodtealgrowth4\]](#eq:goodtealgrowth4){reference-type="ref" reference="eq:goodtealgrowth4"}) in the case that $s \geq \frac{r_i}{2}-\log r_i$. This leaves the case that $s<\frac{r_i}{2}-\log r_i$. Note that this immediately implies that $K^{A,G}_s(y) = K^A_s(y) - O(\log r_i)$. Lemma [Lemma 30](#lem:lowerBoundOtherPointDistance){reference-type="ref" reference="lem:lowerBoundOtherPointDistance"}, relative to $(A, G)$, implies that $$K^{A, G}_{r_i}(z) \geq K^{A, G}_s(y) + K^{A, G}_{r_i-s, r_i}(x\mid y) - K^{A, G}_{r_i-s}(x\mid p_{e^\prime} x, e^\prime) - O(\log r).$$ To bound the projection term, we need to apply Theorem [Theorem 16](#thm:modifiedProjectionTheorem){reference-type="ref" reference="thm:modifiedProjectionTheorem"} with respect to $x$, $e^\prime$, $\varepsilon$, a constant $C$ (depending only on $x$ and $y$), $t=s$, and $r=r_i-s$. We now check that the conditions are satisfied. First, observe that $r_{i+1}-1<s<\frac{r_i}{2} - \log r_i$, since $z$ is assumed to be within $2^{-r_{i+1}}$ of $y$. The second inequality implies that we can take $r_i-s$ to be sufficiently large, since $r_i$ is taken to be sufficiently large. From the first inequality and Lemma [Lemma 26](#lem:leftEndpointNotTooSmall){reference-type="ref" reference="lem:leftEndpointNotTooSmall"}, we obtain that $$s\geq\left(\frac{d(2-D)}{2+d(2-D)}r_i+1\right)-1.$$ Hence, $$s\geq \frac{d(2-D)}{2}(r_i - s).$$ Thus conditions (P1) and (P2) are satisfied.
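The passage from this lower bound on $s$ to condition (P2) deserves a line of algebra; writing $q = d(2-D)$ (the case $q = 0$, i.e. $D = 2$, being trivial), we have

$$s \;\geq\; \frac{q}{2+q}\,r_i \quad\Longrightarrow\quad r_i \;\leq\; \frac{2+q}{q}\,s \quad\Longrightarrow\quad r_i - s \;\leq\; \frac{2}{q}\,s,$$

which rearranges to $s \geq \frac{q}{2}(r_i - s) = \frac{d(2-D)}{2}(r_i-s)$, as claimed.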
As for condition (P3), from an observation in [@Stull22c], $e^\prime$ and $e$ are close enough to each other that, using the fact that $e$ and its orthogonal complement are computable from each other, we have for $s^\prime\leq s$ $$K^{A, x}_{s^\prime}(e^\prime)=K^{A, x}_{s^\prime}(e)+O(\log s^\prime).$$ So, using condition (C2), we have $$K^{A, x}_{s^\prime}(e^\prime)\geq s^\prime - C \log s^\prime,$$ and thus we may apply Theorem [Theorem 16](#thm:modifiedProjectionTheorem){reference-type="ref" reference="thm:modifiedProjectionTheorem"}. Using Theorem [Theorem 16](#thm:modifiedProjectionTheorem){reference-type="ref" reference="thm:modifiedProjectionTheorem"}, the properties of $G$, and our choice of $r_{i+1}$ yields $$\begin{aligned} K^{A, G}_{r_i}(z) &\geq K^{A, G}_s(y) + K^{A, G}_{r_i-s, r_i}(x\mid y) - K^{A, G}_{r_i-s}(x\mid p_{e^\prime} x, e^\prime) - O(\log r)\\ &\geq K^{A}_s(y) + K^{A}_{r_i-s, r_i}(x\mid y) - K^{A, G}_{r_i-s}(x\mid p_{e^\prime} x, e^\prime) - O(\log r)\\ &\geq K^{A}_s(y) + K^{A}_{r_i-s}(x) - K^{A, G}_{r_i-s}(x\mid p_{e^\prime} x, e^\prime) - O(\log r)\\ &\geq K^{A}_s(y) + K^{A}_{r_i-s}(x) - K^A_{r_i-s}(x\mid p_{e^\prime} x, e^\prime) - O(\log r)\\ &\geq K^{A}_s(y) + K^{A}_{r_i-s}(x) - \varepsilon r_i-O(\log r)\\ &\quad- \max\{K^A_{r_i-s}(x) - \frac{d}{D}r_i+\frac{d+1-D}{D}s, K^A_{r_i-s}(x) - (r_i-s)\}. \end{aligned}$$ Hence, $$\label{eq:goodTealGrowth1} K^{A, G}_{r_i}(z) \geq K^{A}_s(y) + \min\{\frac{d}{D}r_i-\frac{d+1-D}{D}s,(r_i-s)\}- \varepsilon r_i-O(\log r).$$ By our choice of $r_{i+1}$, ([\[eq:choiceOfRi+1\]](#eq:choiceOfRi+1){reference-type="ref" reference="eq:choiceOfRi+1"}), we see that $$\label{eq:goodTealGrowth2} K^A_s(y) \geq K^A_{r_i}(y) -\frac{d}{D}r_i + \frac{d+1 - D}{D}s.$$ Combining ([\[eq:goodTealGrowth1\]](#eq:goodTealGrowth1){reference-type="ref" reference="eq:goodTealGrowth1"}) and ([\[eq:goodTealGrowth2\]](#eq:goodTealGrowth2){reference-type="ref" reference="eq:goodTealGrowth2"}) shows that $$\begin{aligned} K^{A, G}_{r_i}(z)
&\geq K^{A}_s(y) + \min\{\frac{d}{D}r_i-\frac{d+1-D}{D}s,(r_i-s)\}- \varepsilon r_i-O(\log r) \\ &\geq K^A_{r_i}(y) -\frac{d}{D}r_i + \frac{d+1 - D}{D}s \\ &\quad+ \min\{\frac{d}{D}r_i-\frac{d+1-D}{D}s,(r_i-s)\}- \varepsilon r_i-O(\log r).\end{aligned}$$ If $\frac{d}{D}r_i-\frac{d+1-D}{D}s \leq r_i - s$, then we have $$\begin{aligned} K^{A, G}_{r_i}(z) &\geq K^A_{r_i}(y) - 2\varepsilon r_i\\ &\geq \eta r_i + \varepsilon r_i,\end{aligned}$$ and ([\[eq:goodtealgrowth4\]](#eq:goodtealgrowth4){reference-type="ref" reference="eq:goodtealgrowth4"}) holds. Otherwise, since $[r_{i+1},r_i]$ is teal, $$\begin{aligned} K^{A, G}_{r_i}(z) &\geq K^A_s(y) + r_i-s- \varepsilon r_i-O(\log r) \\ &\geq K^A_{r_i}(y) - \varepsilon r_i-O(\log r)\\ &\geq \eta r_i + \varepsilon r_i,\end{aligned}$$ and we can again establish ([\[eq:goodtealgrowth4\]](#eq:goodtealgrowth4){reference-type="ref" reference="eq:goodtealgrowth4"}). Therefore, we are able to apply Lemma [Lemma 15](#lem:pointDistance){reference-type="ref" reference="lem:pointDistance"}, which shows that $$\begin{aligned} K^{A, x}_{r_i, r_{i+1}}(\vert x - y\vert \mid \vert x - y\vert) &\geq K^{A, x}_{r_i, r_{i+1}}(\vert x - y\vert \mid y)\\ &\geq K^{A, G, x}_{r_i, r_{i+1}}(\vert x - y\vert \mid y)\\ &\geq K^{A, G, x}_{r_i, r_{i+1}}(y\mid y) - 3\varepsilon r_i - K(\varepsilon, \eta) - O(\log r_i)\\ &\geq K^{A, x}_{r_i, r_{i+1}}(y\mid y) - 4\varepsilon r_i,\end{aligned}$$ and the proof is complete. ◻

## Main theorem for effective Hausdorff dimension

In this section, we prove the point-wise analog of the main theorem of this paper. That is, we prove the following.

**Theorem 32**.
*Suppose that $x, y\in\mathbb{R}^2$, $e = \frac{y-x}{\vert y-x\vert}$, and $A, B\subseteq\mathbb{N}$ satisfy the following.*

- *$d_x, d_y > 1$.*

- *$K^{x,A}_r(e) = r - O(\log r)$ for all $r$.*

- *$K^{x,A, B}_r(y) \geq K^{A}_r(y) - O(\log r)$ for all sufficiently large $r$.*

- *$K^{A}_r(e\mid y) = r - o(r)$ for all $r$.*

*Then $$\dim^{x,A}(\vert x-y\vert) \geq d\left(1 - \frac{(D-1)(D-d)}{2D^2+(2-4d)D+d^2+d-2}\right),$$ where $d = \min\{d_x, d_y\}$ and $D = \max\{D_x, D_y\}$. Furthermore, if $$D\leq\frac{(3+\sqrt{5})d-1-\sqrt{5}}{2},$$ then $\dim^{x,A}(\vert x-y\vert)=1$.*

From the last subsection, we have a good bound on the complexity growth of the distance on any teal interval, and the growth rate on any yellow interval is essentially $1$. It would seem, then, that the worst case scenario is that our partition is (almost) all teal. But this case is advantageous too, because if there is very little yellow, then almost all the complexity growth for $y$ has to take place on the teals, and Lemma [Lemma 31](#lem:goodtealgrowth){reference-type="ref" reference="lem:goodtealgrowth"} indicates we can transfer *all* the growth of $K_s^A(y)$ on teals to $K_s^A(\vert x - y\vert)$. So the worst case scenario is actually when there is an intermediate amount of yellow. Now, we formalize this and prove the theorem.

*Proof.* Let $\varepsilon>0$ be given and let $r$ be sufficiently large. Let $\mathcal{P} = \{[r_{i+1}, r_i]\}$ be the partition of $[1, r]$ defined in the previous section. Let $L$ be the total length of the yellow intervals.
Recall that if $[r_{i+1}, r_i]$ is yellow, then we have that $$K^{A,x}_{r_i, r_{i+1}}(\vert x - y\vert \mid \vert x - y\vert ) \geq r_i - r_{i+1} - \varepsilon r_{i}.$$ By the previous lemma, and repeated applications of the symmetry of information, we have for sufficiently large $r$ that $$\begin{aligned} K^{A,x}_r(\vert x - y\vert ) &= \sum\limits_{i\in Y} K^{A,x}_{r_i, r_{i+1}}(\vert x - y\vert \mid \vert x - y\vert ) + \sum\limits_{i\in Y^C} K^{A,x}_{r_i, r_{i+1}}(\vert x - y\vert \mid \vert x - y\vert )\\ &\geq L -\frac{\varepsilon}{3} r+ \sum\limits_{i\in Y^C} K^{A,x}_{r_i, r_{i+1}}(\vert x - y\vert \mid \vert x - y\vert )\\ &\geq L -\frac{2\varepsilon}{3}r + \sum\limits_{i\in Y^C} K^{A,x}_{r_i, r_{i+1}}(y\mid y)\\ &\geq L + \min\{1, \frac{d(2D-d-1)}{D^2 + D - Dd - 1}\}(r - L)-\varepsilon r.\label{eq:notmuchteal}\end{aligned}$$ From Observation [Observation 29](#obs:generalpartitionfulldimension){reference-type="ref" reference="obs:generalpartitionfulldimension"}, we have $1$ as the minimum when $D\leq\frac{(3+\sqrt{5})d-1-\sqrt{5}}{2}$, so in this case $K^{A,x}_r(\vert x - y\vert )\geq L+(r-L)-\varepsilon r=r -\varepsilon r$. Taking $\varepsilon$ as small as desired and letting $r$ go to infinity proves the theorem in this case.
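The case distinction above hinges on whether the minimum in the bound is $1$. Purely as an algebraic sanity check (our own sketch, not part of the proof, and ignoring the geometric constraint $D\leq 2$), one can verify numerically that the ratio $\frac{d(2D-d-1)}{D^2+D-Dd-1}$ equals $1$ exactly at the threshold $D = \frac{(3+\sqrt{5})d-1-\sqrt{5}}{2}$ and exceeds $1$ just below it:

```python
import math

def ratio(d, D):
    # Second argument of the minimum in Lemma 28 (without the epsilon).
    return d * (2 * D - d - 1) / (D ** 2 + D - D * d - 1)

def threshold(d):
    # Upper bound on D from Observation 29.
    return ((3 + math.sqrt(5)) * d - 1 - math.sqrt(5)) / 2

for d in (1.1, 1.5, 1.9):
    T = threshold(d)
    assert abs(ratio(d, T) - 1) < 1e-9   # ratio is exactly 1 at the threshold
    assert ratio(d, T - 0.05) > 1        # and exceeds 1 just below it
```

The threshold is the larger root of $D^2 - (3d-1)D + (d^2+d-1) = 0$, which is where $d(2D-d-1) = D^2+D-Dd-1$.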
Now assume we are not in the above case and that $r$ is sufficiently large given $\varepsilon$, and note the following bound, which is advantageous when there is not much yellow: $$\begin{aligned} d_y r &\leq K^{A,x}_r(y)+\frac{\varepsilon}{3}r\\ &= \sum\limits_{i\in Y} K^{A,x}_{r_i, r_{i+1}}(y\mid y) + \sum\limits_{i\in Y^C} K^{A,x}_{r_i, r_{i+1}}(y\mid y )\\ &\leq 2L + \frac{2\varepsilon}{3}r +\sum\limits_{i\in Y^C} K^{A,x}_{r_i, r_{i+1}}(\vert x - y\vert \mid \vert x - y\vert)\\ &\leq L + K^{A,x}_r(\vert x - y\vert ) +\varepsilon r.\end{aligned}$$ Hence, $$\label{eq:distanceBoundTermsofL} K^{A,x}_r(\vert x - y\vert) \geq \max\{L + \frac{d(2D-d-1)}{D^2 + D - Dd - 1}(r - L), d r - L\}-\varepsilon r.$$ The first term is increasing in $L$ (since we are considering the case where $\frac{d(2D-d-1)}{D^2 + D - Dd - 1}<1$), and the second term is decreasing in $L$, so we can set them equal to find the minimum over all $L$, which yields $$K^{x,A}_r(\vert x-y\vert) \geq d\left(1 - \frac{\left(D-1\right)\left(D-d\right)}{2D^2+\left(2-4d\right)D+d^2+d-2}\right)r -\varepsilon r.$$ Since we can take $\varepsilon$ as small as desired and then let $r$ go to infinity, this completes the proof. ◻

Now, we can consider a few special cases. First, note that, when combined with Proposition [Proposition 23](#prop:yellowfulldimension){reference-type="ref" reference="prop:yellowfulldimension"}, for any choice of $d$ our lower bound on the dimension is monotone decreasing in $D$. Thus, setting $D=2$ gives the following corollary:

**Corollary 33**. *When conditions (C1)-(C4) are satisfied, $$\dim^{x, A} (\vert x-y\vert) \geq \frac{d(d-4)}{d-5}.$$*

Similarly, our lower bound is monotone increasing in $d$, so setting $d=1$ gives

**Corollary 34**.
*When conditions (C1)-(C4) are satisfied,* *$$\dim^{x, A} (\vert x-y\vert) \geq \frac{D+1}{2D}.$$*

Note that these are the effective analogs of Corollary [Corollary 2](#cor:firstMainCor){reference-type="ref" reference="cor:firstMainCor"} and Corollary [Corollary 3](#cor:secondMainCor){reference-type="ref" reference="cor:secondMainCor"} in the introduction. The first of these corollaries is a helpful comparison point to previous work on the pinned distance problem, and the second will be useful in the next subsection.

## Main theorem for effective packing dimension

We can use our work in the previous section to prove a new bound on the packing dimension of pinned distance sets. The basic idea is that for effective Hausdorff dimension, we had to prove a lower bound for $K_r^{x, A}(\vert x-y\vert)$ at *every* sufficiently large precision (since $\dim$ is defined with a limit inferior), whereas for effective packing dimension, we are free to *choose* a sequence of advantageous precisions (since $\mathop{\mathrm{Dim}}$ is defined with a limit superior). The idea is to consider "maximal" precisions $r_i$ where $K^A_{r_i}(y)\approx D_y r_i$. These maximal precisions have to be contained in large yellow intervals, and prior to the large yellow intervals, we can use the bound from Corollary [Corollary 34](#cor:hausdorffbound2){reference-type="ref" reference="cor:hausdorffbound2"}, which holds at *every* precision. We call an interval $[a, b]$ an **all yellow** interval if there is a good partition of $[a,b]$ consisting entirely of yellow intervals, each of which is at most doubling.

**Lemma 35**. *Suppose that $\varepsilon> 0$, $A,B\subseteq\mathbb{N}$ and $x, y\in \mathbb{R}^2$ satisfy (C1)-(C4). In addition, assume that $\mathop{\mathrm{Dim}}^A(x) \leq \mathop{\mathrm{Dim}}^A(y)$. Let $r$ be a sufficiently large precision which is maximal in the sense that $K^{A, x}_r(y)\geq D_y r -\varepsilon r$.
Then $$\label{eq:boundDistanceMaximalPrec} K^{A,B,x}_r(\vert x - y\vert) \geq \frac{D^2 - D + 2}{2D}r - \varepsilon r.$$ Moreover, if $[r_1, r_2]$ is an all yellow interval containing $r$, then $$\label{eq:lowerBoundr2} K^{A,B,x}_{r_2}(\vert x - y \vert )\geq r_2 - \frac{3D - D^2 - 2}{2D}r - \varepsilon r.$$*

*Proof.* Let $r$ be sufficiently large such that $$K^{A}_r(y) \geq D r - \frac{\varepsilon}{2} r.$$ Let $\mathcal{P} = \{[r_{i+1}, r_i]\}$ be the partition of $[1,r]$ defined in the previous section. Let $L$ be the total length of the yellow intervals in $\mathcal{P}$. Using ([\[eq:distanceBoundTermsofL\]](#eq:distanceBoundTermsofL){reference-type="ref" reference="eq:distanceBoundTermsofL"}), we have that $$K^{A,x}_r(\vert x - y\vert) \geq \max\{L + \frac{2}{D+ 1}(r - L), D r - L\} - \varepsilon r.$$ We can therefore conclude that ([\[eq:boundDistanceMaximalPrec\]](#eq:boundDistanceMaximalPrec){reference-type="ref" reference="eq:boundDistanceMaximalPrec"}) holds. Let $[r_1, r_2]$ be an all yellow interval containing $r$. Then, by Lemma [Lemma 20](#lem:distancesYellowTeal){reference-type="ref" reference="lem:distancesYellowTeal"}, $$\begin{aligned} r_2 - r_1 - \frac{\varepsilon r}{2} &\leq K^{A,x}_{r_2,r_1}(\vert x - y \vert \mid \vert x - y\vert)\\ &\leq K^{A,x}_{r_2}(\vert x - y \vert ) - K^{A,x}_{r_1}(\vert x - y \vert) + O(\log r)\\ &\leq K^{A,x}_{r_2}(\vert x - y \vert ) - \left(K^{A,x}_{r}(\vert x - y \vert) - (r-r_1)\right) + O(\log r).\end{aligned}$$ Rearranging, and using ([\[eq:boundDistanceMaximalPrec\]](#eq:boundDistanceMaximalPrec){reference-type="ref" reference="eq:boundDistanceMaximalPrec"}), we see that for sufficiently large $r$, $$\begin{aligned} K^{A,x}_{r_2}(\vert x - y \vert ) &\geq K^{A,x}_{r}(\vert x - y \vert) - (r-r_1) + r_2 - r_1 -\frac{\varepsilon}{2}r\\ &\geq \frac{D^2 - D + 2}{2D}r - \varepsilon r + r_2 - r\\ &= r_2 - \frac{3D - D^2 - 2}{2D}r - \varepsilon r,\end{aligned}$$ and the conclusion follows.
◻

**Theorem 36**. *Suppose that $x, y\in\mathbb{R}^2$, $e = \frac{y-x}{\vert y-x\vert}$, and $A,B\subseteq\mathbb{N}$ satisfy $\mathop{\mathrm{Dim}}^A(x)\leq \mathop{\mathrm{Dim}}^A(y)$ and the following.*

- *$d_x, d_y > 1$.*

- *$K^{x,A}_r(e) = r - O(\log r)$ for all $r$.*

- *$K^{x,A,B}_r(y) \geq K^{A}_r(y) - O(\log r)$ for all sufficiently large $r$.*

- *$K^{A}_r(e\mid y) = r - o(r)$ for all $r$.*

*Then $\mathop{\mathrm{Dim}}^{x, A, B}(\vert x - y\vert) \geq \frac{3D_y^2 - D_y + 6}{8D_y}$.*

*Remark 4*. The extra requirement that $\mathop{\mathrm{Dim}}^A(x)\leq \mathop{\mathrm{Dim}}^A(y)$ seems necessary to obtain the largest lower bound with our methods. Previously in this section, we assumed $D$ was the larger of $D_x, D_y$, a safe assumption since a larger $D_x$ gives a worse projection theorem and a larger $D_y$ gives a worse adversary function when partitioning. For the packing bound, however, a higher $D_y$ could actually improve the bound. Thus, if $D_y$ were actually much smaller than $D$, this could make for a worse bound, which necessitates the extra assumption.

*Proof.* Let $\varepsilon> 0$. Let $r$ be a sufficiently large precision which is maximal in the sense that $K^{A}_r(y)\geq D_y r -\varepsilon r$. We also assume that the function $f$ associated to $K^A_s(y)$ is increasing on $[r-1, r]$. Note that there are infinitely many such $r$. We first assume that there exists an all yellow interval $[r_1, r_2]$ containing $r$ such that $r_2 \geq \frac{4}{3}r$. Then, $$K^{A,x}_{r_2}(\vert x - y \vert ) \geq \frac{3D^2-D+6}{8D}r_2 - \varepsilon r_2,$$ and the proof is complete. We now assume that no such all yellow interval exists. Let $[r_1, r_2]$ be an all yellow interval containing $r$ which is of maximal length. This implies that there is an interval $[a, r_2]$ which is the union of green intervals, each of which is at most doubling, such that $a \leq \frac{r_2}{2}$. Hence $r \geq \frac{3}{2} a$. For convenience, we set $r^\prime = \frac{r_2}{2}$.
Let $\mathcal{P} = \{[r_{i+1}, r_i]\}$ be the partition of $[1,r^\prime]$. Let $L$ be the total length of the yellow intervals in $\mathcal{P}$. We first see that $$K^{A,x}_{r^\prime}(\vert x - y\vert) \geq \max\{L + \frac{2}{D+ 1}(r^\prime - L), K^A_{r^\prime}(y) - L\}.$$ Since $[a , r_2]$ is green, and $r \geq \frac{3}{2}r^\prime$, $$\begin{aligned} K^{A}_{r^\prime}(y) &\geq K^{A}_{2r^\prime}(y) - r^\prime\\ &\geq K^{A}_r(y) -r^\prime\\ &\geq Dr - \varepsilon r -r^\prime\\ &> \frac{3D -2}{2}r^\prime - \varepsilon r.\end{aligned}$$ We now show that $$\label{eq:lowerBoundAtRprime} K^{A,x}_{r^\prime}(\vert x - y\vert) \geq \frac{3D^2 - 5D+6}{4D}r^\prime - \varepsilon r.$$ If $$K^{A}_{r^\prime}(y) - L \geq \frac{3D^2 - 5D+6}{4D}r^\prime - \varepsilon r,$$ then ([\[eq:lowerBoundAtRprime\]](#eq:lowerBoundAtRprime){reference-type="ref" reference="eq:lowerBoundAtRprime"}) holds immediately. Otherwise, using our lower bound on $K^{A}_{r^\prime}(y)$, we see that $$L \geq \frac{3D^2 + D - 6}{4D}r^\prime.$$ Therefore, $$\begin{aligned} K^{A,x}_{r^\prime}(\vert x - y\vert) &\geq L + \frac{2}{D+1}(r^\prime - L)-\varepsilon r\\ &\geq \frac{3D^2 - 5D+6}{4D}r^\prime - \varepsilon r,\end{aligned}$$ and so ([\[eq:lowerBoundAtRprime\]](#eq:lowerBoundAtRprime){reference-type="ref" reference="eq:lowerBoundAtRprime"}) holds. Since $[r_1, r_2]$ is all yellow, by Lemma [Lemma 20](#lem:distancesYellowTeal){reference-type="ref" reference="lem:distancesYellowTeal"} we see that $$\begin{aligned} K^{x,A, B}_{2r^\prime}(\vert x - y\vert) &\geq \frac{3D^2 - 5D+6}{4D}r^\prime + r^\prime-\varepsilon r\\ &= \frac{3D^2 - D+6}{4D}r^\prime -\varepsilon r\\ &\geq \frac{3D^2 - D+6}{8D}2r^\prime - 2 \varepsilon r^\prime.\end{aligned}$$ Noting that we can take $\varepsilon$ as small as desired and then let $r$ go to infinity completes the proof. ◻

# Dimensions of pinned distance sets

In this section, we reduce the effective Hausdorff and packing theorems to their classical analogues.
We prove Theorem [Theorem 5](#thm:moregeneralmaintheorem){reference-type="ref" reference="thm:moregeneralmaintheorem"} and then Theorem [Theorem 6](#thm:regularYFullDim){reference-type="ref" reference="thm:regularYFullDim"} in the first subsection, then consider packing dimension. Throughout, we have $x,y\in\mathbb{R}^2$, $e = \frac{x-y}{\vert x - y\vert}$ and $A,B \subseteq \mathbb{N}$. For ease of reference, we list conditions $(C1)-(C4)$ here:

- $\dim^A(x)>d_x > 1$ and $d_y > 1$.

- $K^{x,A}_r(e) = r - O(\log r)$ for all $r$.

- $K^{x,A,B}_r(y) \geq K^{A}_r(y) - O(\log r)$ for all sufficiently large $r$.

- $K^{A}_r(e\mid y) = r - o(r)$ for all $r$.

Note the modification of condition (C1): here we drop the convention that $d_x=\dim^A(x)$. For the reduction, we will want $d_x$ to be strictly less than $\dim_H(X)$, because we will remove an exceptional set of dimension $d_x$. Thus, we want $\dim^A(x)>d_x$ so we can apply our effective theorem uniformly.

## Hausdorff dimension of pinned distance sets

A main tool we need to reduce the classical statements to their effective counterparts is the following radial projection theorem of Orponen [@Orponen19DimSmooth]:

**Theorem 37**. *Let $Y\subseteq\mathbb{R}^2$ be a Borel set with $s=\dim_H(Y)>1$ such that there is a measure $\mu\in \mathcal{M}(Y)$ satisfying $I_d(\mu)<\infty$ for all $1<d<s$. Then there is a Borel $G\subseteq \mathbb{R}^2$ with $\dim_H(G)\leq 2-\dim_H(Y)$ such that, for every $x\in\mathbb{R}^2\setminus(\text{spt}(\mu)\cup G)$, $\mathcal{H}^1(\pi_x (Y))>0$. Moreover, the pushforward of $\mu$ under $\pi_x$ is absolutely continuous with respect to $\mathcal{H}^1|_{S^1}$ for $x\notin G$.*

With this theorem, we are able to prove the following.

**Lemma 38**. *Let $Y\subseteq\mathbb{R}^2$ be compact with $0<\mathcal{H}^s(Y)<\infty$ for some $s>1$, and let $d_x>1$ be given. Let $\mu = \mathcal{H}^s|_Y$.
Let $A\subseteq\mathbb{N}$ be an oracle such that $A$ is a packing oracle for $Y$, $Y$ is computably compact relative to $A$, and $\mu$ is computable relative to $A$. Then, there is a set $G$ of Hausdorff dimension at most $d_x$ such that for all $x\in \mathbb{R}^2 - (\text{spt}(\mu)\cup G)$, and all $B\subseteq\mathbb{N}$, there exists some $y\in Y$ such that the pair $x, y$ satisfies conditions (C1)-(C4).*

*Proof.* Let $G$ be the set guaranteed by Orponen's radial projection theorem with respect to $Y$ and $\mu=\mathcal{H}^s|_Y$. Let $x\in \mathbb{R}^2 - (\text{spt}(\mu)\cup G)$ have effective Hausdorff dimension relative to $A$ greater than $d_x$, and define $N = \{e\in S^1: (\exists^\infty r)\, K_r^{x, A}(e)<r-4\log(r)\}$. As observed in the second author's previous paper, $\mathcal{H}^1|_{S^1}(N) = 0$. Orponen's theorem guarantees the absolute continuity of $\pi_{x\#}(\mu)$ with respect to $\mathcal{H}^1|_{S^1}$ for $x$ outside the exceptional set $G$, where $\pi_x(y) = \frac{y-x}{\vert y-x\vert}\in S^1$. Thus, $\mathcal{H}^s|_Y(\pi^{-1}_x(N)) = 0$. Now, let $$M = \{y \in Y: \dim^A(y)\geq s \text{ and } K_r^{x, A,B}(y)>K_r^A(y) - 8 \log r \text{ for large enough }r\}.$$ Again from [@Stull22c], we have that $\mathcal{H}^s(M) = \mathcal{H}^s(Y)>0$ by assumption.[^9] Thus, $M - \pi_x^{-1}(N)$ has dimension $s$ and is in particular nonempty. Picking a $y$ in $M - \pi_x^{-1}(N)$, we may check the conditions: $x$ satisfies (C1) by assumption; since $y\notin\pi_x^{-1}(N)$, the direction $e = \pi_x(y)$ satisfies (C2); and $y$ satisfies its part of (C1) and (C3) by the definition of $M$. As for (C4), Lemma 31 in [@Stull22c] shows that (C1)-(C3) imply (C4), so the proof of this lemma is complete. ◻

Now, we would like to drop $\text{spt}(\mu)$ from the excluded set, which we will do by proving the following lemma:

**Lemma 39**.
*Let $Y\subseteq \mathbb{R}^2$ be a compact set such that $0<\mathcal{H}^s(Y)<\infty$ for some $s>1$, let $A$ be any oracle relative to which $Y$ is effectively compact, and let $d_x>1$ be given. Then there is a set $G\subseteq\mathbb{R}^2$ of Hausdorff dimension at most $d_x$ such that for every $x\in \mathbb{R}^2 - G$, and every $B\subseteq\mathbb{N}$, there is a $y\in Y$ such that the pair $x, y$ satisfies (C1)-(C4).*

*Proof.* Let $Y$ be as in the statement of the lemma, and write $Y = Y_1 \cup Y_2$ for disjoint compact $Y_1$, $Y_2$ such that $0<\mathcal{H}^s(Y_1)<\infty$ and $0<\mathcal{H}^s(Y_2)<\infty$. Let $\mu_1, \mu_2$ be $\mathcal{H}^s|_{Y_1}, \mathcal{H}^s|_{Y_2}$ respectively. Suppose $A_1$ and $A_2$ are effective compactness oracles for $Y_1$ and $Y_2$ respectively, $A_3$ is an effective compactness oracle for $Y$, and let $\hat{\mu}_i$ encode $\mu_i(Q)$ for each ball $Q$ with rational center and radius. Let $A$ be the join of $A_1, A_2, A_3, \hat{\mu}_1$, and $\hat{\mu}_2$. Then $A$ and $Y_i$ satisfy the hypotheses of the previous lemma for each $i$. Let $G_1, G_2$ be the exceptional sets guaranteed by the lemma. Let $G = G_1 \cup G_2 \cup \{x\in\mathbb{R}^2: \dim^A(x)\leq d_x\}$. If $x\in\mathbb{R}^2\setminus G$, then by the previous lemma, there is some $y$ in either $Y_1$ or $Y_2$ satisfying conditions (C1)-(C4) (since $x$ lies in the support of at most *one* of $\mu_1, \mu_2$). ◻

Now we are in a position to prove our main theorem.

**Theorem [Theorem 5](#thm:moregeneralmaintheorem){reference-type="ref" reference="thm:moregeneralmaintheorem"} 1**. *Let $Y\subseteq \mathbb{R}^2$ be analytic such that $1<d_y =\dim_H(Y)$ and $D_y=\dim_P(Y)$. Let $X\subseteq \mathbb{R}^2$ be such that $1<d_x <\dim_H(X)$ and $D_x=\dim_P(X)$.
Then there is some $F \subseteq X$ of full dimension such that $$\dim_H(\Delta_x Y) \geq d\left(1 - \frac{\left(D-1\right)\left(D-d\right)}{2D^2+\left(2-4d\right)D+d^2+d-2}\right),$$ for all $x\in F$, where $d=\min\{d_x, d_y\}$ and $D=\max\{D_x, D_y\}$. In particular, $\dim_H(X\setminus F)\leq d_x<\dim_H(X)$. Furthermore, if $$D<\frac{(3+\sqrt{5})d-1-\sqrt{5}}{2}$$ then $\dim_H(\Delta_x Y)=1$.*

*Proof.* Let $Y$ and $X$ be as above, and let $B$ be the trivial oracle.[^10] $Y$ is analytic, thus for any $s<d_y=\dim_H(Y)$, there is a compact $Y_s$ such that $0<\mathcal{H}^s(Y_s)<\infty$. For each $i\in\mathbb{N}$, let $s_i = d_y - \frac{1}{i}$, and let $Y_{s_i}$ be a sequence of such compact sets. The $Y_{s_i}$ are compact, so let $A_1, A_2, ...$ be oracles relative to which they are effectively compact. Now, we use the general point-to-set principle. Let $A_Y$ and $A_X$ be packing oracles for $Y$ and $X$ respectively, that is, oracles such that $\dim_P(Y) = \sup_{y\in Y} \mathop{\mathrm{Dim}}^{A_Y}(y)$ and likewise for $X$. Now, take the join of $A_Y, A_X, A_1, A_2, ...$, and call this new oracle $A$. Note that this oracle retains all the desired properties of its constituents. In particular, every $Y_{s_i}$ is effectively compact relative to $A$, for all $y\in Y$ we have that $\mathop{\mathrm{Dim}}^{A}(y)\leq \dim_P(Y) =D_y$, and for all $x\in X$ we have that $\mathop{\mathrm{Dim}}^{A}(x)\leq \dim_P(X) =D_x$. Using $A$, we may apply the previous lemma to each $Y_{s_i}$, giving a corresponding exceptional set $G_i$. Observe that $\dim_H(X)>d_x\geq \dim_H(G_i)$, so the set of $x\in{X}$ for which there is some $y\in Y_{s_i}$ satisfying conditions (C1)-(C4) is nonempty for each sufficiently large $i$ (large enough that $s_i>1$). In fact, by the countable stability of Hausdorff dimension, $\dim_H(X)>d_x\geq \dim_H(\bigcup_{i\in\mathbb{N}} G_i)$. If we denote this union by $G$, then for all $x\in X-G$, there is a $y\in Y$ such that (C1)-(C4) hold with $A$ as the oracle.
We choose $A$ such that the desired packing dimension conditions hold relative to $A$ for *any* $x\in X$, $y\in Y_{s_i}\subseteq Y$, so along with conditions (C1)-(C4), all the hypotheses of our effective theorem are satisfied. Applying this theorem in the case $D\geq\frac{(3+\sqrt{5})d-1-\sqrt{5}}{2}$, we obtain that $$\dim^{A, x}(\vert x-y\vert) \geq (d-\frac{1}{i})\left(1 - \frac{\left(D-1\right)\left(D-(d-\frac{1}{i})\right)}{2D^2+\left(2-4(d-\frac{1}{i})\right)D+(d-\frac{1}{i})^2+(d-\frac{1}{i})-2}\right).$$ By Observation [Observation 12](#obs:effcompact){reference-type="ref" reference="obs:effcompact"}, since $Y_{s_i}$ is effectively compact relative to $A$, $\Delta_x Y_{s_i}$ is effectively compact relative to $(A, x)$. Finishing the proof, by Theorem [Theorem 11](#thm:strongPointToSetDim){reference-type="ref" reference="thm:strongPointToSetDim"}, we have that $$\begin{aligned} \dim_H(\Delta_x Y) &\geq \dim_H(\Delta_x Y_{s_i})\\ &= \sup_{y\in Y_{s_i}} \dim^{A, x}(\vert x-y\vert)\\ &\geq (d-\frac{1}{i})\left(1 - \frac{\left(D-1\right)\left(D-(d-\frac{1}{i})\right)}{2D^2+\left(2-4(d-\frac{1}{i})\right)D+(d-\frac{1}{i})^2+(d-\frac{1}{i})-2}\right).\end{aligned}$$ Letting $i$ go to infinity completes the proof in this case. If we are in the case that $D<\frac{(3+\sqrt{5})d-1-\sqrt{5}}{2}$, then for some $i$ large enough, we have $$D<\frac{(3+\sqrt{5})(d-\frac{1}{i})-1-\sqrt{5}}{2}.$$ Then, repeating the above argument with $i$ at least this large gives the bound of 1 immediately and completes the proof. ◻

Taking $E=X = Y$, we immediately have as a corollary that

**Theorem [Theorem 1](#thm:maintheorem){reference-type="ref" reference="thm:maintheorem"} 1**. *Let $E\subseteq \mathbb{R}^2$ be analytic such that $1<d <\dim_H(E)$. Then there is a subset $F \subseteq E$ of full dimension such that $$\dim_H(\Delta_x E) \geq d\left(1 - \frac{\left(D-1\right)\left(D-d\right)}{2D^2+\left(2-4d\right)D+d^2+d-2}\right),$$ for all $x\in F$, where $D = \dim_P(E)$.
In particular, $\dim_H(E\setminus F)\leq d<\dim_H(E)$. Furthermore, if $$D<\frac{(3+\sqrt{5})d-1-\sqrt{5}}{2}$$ then $\dim_H(\Delta_x E)=1$.*

In a similar manner, we prove the following theorem.

**Theorem [Theorem 6](#thm:regularYFullDim){reference-type="ref" reference="thm:regularYFullDim"} 1**. *Let $Y\subseteq\mathbb{R}^2$ be analytic with $\dim_H(Y) > 1$ and $\dim_P(Y) < 2 \dim_H(Y)-1$. Let $X \subseteq \mathbb{R}^2$ be any set such that $\dim_H(X) > 1$. Then for all $x\in X$ outside a set of (Hausdorff) dimension one,*

*$\dim_H(\Delta_x Y) = 1$.*

We can follow essentially the same argument as above, except now we only need the dimension of $x$ to be greater than 1, at the cost of also requiring that $\dim^A(y)$ and $\mathop{\mathrm{Dim}}^A(y)$ are close enough, as in the hypothesis of Proposition [Proposition 23](#prop:yellowfulldimension){reference-type="ref" reference="prop:yellowfulldimension"}.

*Proof.* Let $Y$ and $X$ be as above, and again let $B$ be the trivial oracle. $Y$ is analytic, thus for any $s<\dim_H(Y)$, there is a compact $Y_s$ such that $0<\mathcal{H}^s(Y_s)<\infty$. Let $s$ be such that $\dim_P(Y)<2s-1<2\dim_H(Y)-1$, and let $A_s$ be an oracle relative to which $Y_s$ is effectively compact. Let $A_Y$ be a packing oracle for $Y$. Now, take the join of $A_Y, A_s$, and call this new oracle $A$. Now, apply Lemma [Lemma 39](#lem:lastReductionLemma){reference-type="ref" reference="lem:lastReductionLemma"} to $Y_{s}$ relative to $A$ with $d_x=1$. As above, conditions (C1)-(C4) are satisfied with an exceptional set of dimension at most 1. Since $Y_s\subseteq Y$, for any $y\in Y_s$ we have $\mathop{\mathrm{Dim}}^A(y)\leq \dim_P(Y)$.
Hence, $\mathop{\mathrm{Dim}}^A(y)<2\dim^A(y)-1$, so all the conditions for Proposition [Proposition 23](#prop:yellowfulldimension){reference-type="ref" reference="prop:yellowfulldimension"} are satisfied, and we have as above that for $x$ not in our exceptional set, $$\begin{aligned} \dim_H(\Delta_x Y) &\geq \dim_H(\Delta_x Y_s)\\ &= \sup_{y\in Y_s} \dim^{A, x}(||x-y||)\\ &=1\end{aligned}$$ ◻

## Packing dimension of pinned distance sets

**Theorem [Theorem 4](#thm:packingThm){reference-type="ref" reference="thm:packingThm"} 1**. *Let $E\subseteq \mathbb{R}^2$ be analytic such that $\dim_H(E) > 1$. Then, for some $x\in E$, $$\dim_P(\Delta_x E) \geq \frac{12 -\sqrt{2}}{8\sqrt{2}}\approx 0.9356.$$*

*Proof.* Let $E\subseteq \mathbb{R}^2$ be analytic such that $d = \dim_H(E) > 1$. Let $D = \dim_P(E)$. Since $E$ is analytic, there is a compact subset $F\subseteq E$ such that $0 < \mathcal{H}^s(F) < \infty$, for $s = \frac{d+1}{2}$. Let $\mu = \mathcal{H}^s_{\vert F}$. Let $A_1$ be an oracle relative to which $F$ is effectively compact. Let $A_2$ be a packing oracle for $F$, that is, $\dim_P(F) = \sup_{x\in F} \mathop{\mathrm{Dim}}^{A_2}(x)$. Let $A$ be the join of $A_1$ and $A_2$. Using Orponen's radial projection theorem, we see that there is an exceptional set $G_1$ such that the pushforward of $\mu$ under $\pi_x$ is absolutely continuous with respect to $\mathcal{H}^1_{\vert S^1}$ for all $x \notin G_1$. Let $G_2 = \{x\in F \mid \dim^A(x) < \frac{s+1}{2}\}$ and $G = G_1 \cup G_2$. Since $\dim_H(G_1) \leq 1$ and $\dim_H(G_2) \leq \frac{s+1}{2}< s$, we see that $\dim_H(G) < s$. We now choose $x \in F - G$ such that the set $F^\prime = \{y \in F - G\mid \mathop{\mathrm{Dim}}^A(x)\leq \mathop{\mathrm{Dim}}^A(y)\}$ has positive $\mu$ measure, which is possible since we can choose some $x$ that has minimal or nearly minimal effective packing dimension. Let $B$ be a packing oracle for $\Delta_x E$.
The proof of Lemma [Lemma 38](#lem:reductionUsingOrponen){reference-type="ref" reference="lem:reductionUsingOrponen"} shows that there is a $y\in F^\prime$ such that the pair $x,y$ satisfies (C1)-(C4). Moreover, $\mathop{\mathrm{Dim}}^A(x)\leq \mathop{\mathrm{Dim}}^A(y)$. We may therefore apply Theorem [Theorem 36](#thm:effectivePackingThm){reference-type="ref" reference="thm:effectivePackingThm"}, which shows that $$\mathop{\mathrm{Dim}}^{A, x, B}(\vert x-y\vert)\geq \frac{3D^2 - D + 6}{8D},$$ where $D = \mathop{\mathrm{Dim}}^A(y)$. Since this is minimized when $D = \sqrt{2}$, we conclude that $$\mathop{\mathrm{Dim}}^{A, x, B}(\vert x-y\vert)\geq \frac{12 -\sqrt{2}}{8\sqrt{2}}.$$ Our choice of $B$ and the point-to-set principle concludes the proof, since $$\begin{aligned} \dim_P(\Delta_x E) &\geq \sup_{y\in E} \mathop{\mathrm{Dim}}^{B}(\vert x - y\vert)\\ &\geq \sup_{y\in E} \mathop{\mathrm{Dim}}^{x, A, B}(\vert x-y\vert)\\ &\geq \frac{12 -\sqrt{2}}{8\sqrt{2}}.\end{aligned}$$ ◻

# Regular pin sets give full dimension pinned distance sets

Now, we consider a case that is not covered by our previous theorems. Our work thus far implies that if $Y$ is sufficiently regular and $\dim_H(X)>1$, then $\dim_H(\Delta_x Y)=1$ for all $x$ in a full dimension subset of $X$. More surprisingly, however, the above holds even if just the *pin* set $X$ is regular. In this case, we are able to deduce an essentially optimal effective projection theorem for $x$ which allows us to utilize arbitrarily long green intervals when partitioning $K^A_r(y)$. If we have access to these green intervals, we will never be forced to use a strictly teal interval[^11], implying that our partition is all-yellow and that $\dim^{x, A}(\vert x-y \vert) =1$. We would like to perform the reduction to the classical result by finding some point $x$ that is regular, but there is a problem. Suppose $X$ is regular of dimension greater than 1 and $\dim_H(Y)>1$.
In general, we cannot assume $X$ contains regular points relative to a given oracle. For instance, consider the set $X=\{x\in\mathbb{R}^2: \mathop{\mathrm{Dim}}(x)=1.2\text{ and }\dim(x)<1.2\}$. The packing and Hausdorff dimensions of $X$ will be 1.2, but, relative to any oracle $A$, any point will have effective Hausdorff dimension strictly less than 1.2, though it may have packing dimension *exactly* 1.2. However, we can overcome this obstacle, due to the fact that $\dim_H(Y) > 1$. This fact implies that there is a bound on the length of any green interval. We start by stating and proving an effective projection theorem that holds when $x$ is sufficiently regular.

**Proposition 40**. *Let $A\subseteq\mathbb{N}$, $x \in \mathbb{R}^2$, $e \in \mathcal{S}^1$, $\varepsilon\in \mathbb{Q}^+$, $C, C^\prime\in\mathbb{N}$, and $t, r \in \mathbb{N}$. Suppose that $r$ is sufficiently large, and that the following hold.*

1. *$1 < d_x < \dim^A(x)$.*
2. *$t \geq \frac{r}{C}$.*
3. *$K^{x, A}_s(e) \geq s - C^\prime\log s$, for all $s \leq t$.*

*Then there exists some $\varepsilon_x$ depending only on $d_x$ and $C$ such that $\mathop{\mathrm{Dim}}^A(x)-d_x<\varepsilon_x$ implies that $$K^A_r(x \,|\, p_e x, e) \leq K^A_r(x) - r + \varepsilon r.$$*

The key point of this result is that the $\varepsilon_x$ we need in the almost-regularity condition does not depend on $\varepsilon$. In the way we will apply this theorem, $C$ will depend on $d_y$. So, given a lower bound for the effective Hausdorff dimension of points $x$ and $y$, $\varepsilon_x>0$ expresses how close to regular $x$ needs to be in order to apply this theorem, and that required closeness is *fixed* given sets $X, Y$ when we perform the reduction. The idea of the proof is quite simple. When we partition in $x$, we are allowed to use intervals of length up to $t$.
By picking $r$ large enough that the complexity function lies very close to the line $d_x s$,[^12] we can show that it is impossible for there to be any red-green-blue sequences past some precision $\varepsilon r$, since this would imply the existence of a green block of length at least $t$. Thus, $[1, r]$ is almost entirely covered by red and green intervals, and we obtain an essentially all-yellow partition of it. Formally,

*Proof.* Let $x$ satisfy the conditions of the proposition for some $\varepsilon_x$. Let $C$ and $\varepsilon$ be given and choose $\varepsilon^\prime$ depending on $\varepsilon_x$, $d_x$, and $C$ in a manner which we will detail shortly. Now let $r$ be large enough that $d_x s-\varepsilon^\prime s\leq K^A_s(x)\leq d_x s+\varepsilon^\prime s$ for all $s\geq \sqrt{r}$. Note that we must have $\varepsilon^\prime>\varepsilon_x$. Let $t\geq \frac{r}{C}$ be given, and let $\hat{\mathcal{P}}=\hat{\mathcal{P}}(x, r, t)$ be the partition of $[1, r]$ by red, blue and green intervals considered in section 3.1. We will show that if $\varepsilon_x$ is sufficiently small depending only on $C$ and $d_x$, then the desired conclusion holds. On one hand, any red-green-blue sequence has a green block of length at least $t$; this was one of the key properties of this partition in section 3. On the other hand, since the slope is $1$ on green intervals, if $K^A_{r-t}(x)=(d_x +\varepsilon^\prime)(r-t)$ (its maximum possible value) and $[r-t, r]$ is green, then $$\begin{aligned} K^A_r(x)&\leq(d_x +\varepsilon^\prime)(r-t) + t\\ &\leq (d_x+\varepsilon^\prime) r - (d_x-1-\varepsilon^\prime)\frac{r}{C}\end{aligned}$$ So, if $\varepsilon^\prime$ is small enough that $$\label{eq:newboundOnOtherEpsilon} 2\varepsilon^\prime-\frac{d_x}{C}+\frac{\varepsilon^\prime}{C}+\frac{1}{C}<0,$$ we have a contradiction, since by assumption $K^A_r(x)\geq d_x r-\varepsilon^\prime r$.
If we attempted to place the left endpoint of a green block of a red-green-blue sequence at any $\sqrt{r}\leq s\leq r$, we would clearly have the same contradiction; in fact, the green interval would be forced to be even shorter. Hence, there are no red-green-blue sequences such that the green block is contained in $[\sqrt{r}, r]$. Provided that $\varepsilon_x$ satisfies equation ([\[eq:newboundOnOtherEpsilon\]](#eq:newboundOnOtherEpsilon){reference-type="ref" reference="eq:newboundOnOtherEpsilon"}), we can choose such an $\varepsilon^\prime>\varepsilon_x$ also satisfying it. Now, we want to determine when, given $\varepsilon$, the interval $[\sqrt{r}, \frac{\varepsilon}{2} r]$ is forced to intersect at least one red interval. For convenience, let $\varepsilon^{\prime\prime} r = \sqrt{r}$. Then, we will choose $r$ large enough to satisfy this condition and large enough that $\varepsilon^\prime$ satisfies ([\[eq:newboundOnOtherEpsilon\]](#eq:newboundOnOtherEpsilon){reference-type="ref" reference="eq:newboundOnOtherEpsilon"}). Notice that $[\varepsilon^{\prime\prime} r, \frac{\varepsilon}{2} r]$ intersecting some red interval will imply there are no blue intervals in $[\frac{\varepsilon}{2}r, r]$, or else we would have the green block of a red-green-blue sequence contained in $[\varepsilon^{\prime\prime} r, r]$. $[\varepsilon^{\prime\prime} r, \frac{\varepsilon}{2} r]$ must intersect a red interval when the average growth rate of $K^A_s(x)$ on this interval is strictly greater than 1.
We can easily bound that growth rate as follows: $$\dfrac{K^A_{\frac{\varepsilon}{2}r}(x)-K^A_{\varepsilon^{\prime\prime} r}(x)}{(\frac{\varepsilon}{2} -\varepsilon^{\prime\prime})r}\geq\dfrac{(d_x-\varepsilon^{\prime})\frac{\varepsilon}{2}r-(d_x+\varepsilon^{\prime})\varepsilon^{\prime\prime} r}{(\frac{\varepsilon}{2} -\varepsilon^{\prime\prime})r}.$$ Since $d_x-\varepsilon^\prime>1$ for $\varepsilon^\prime$ small enough, for any $\varepsilon>0$, there exists some $\varepsilon^{\prime\prime}$ small enough that the right hand side is greater than 1. Hence, for $r$ sufficiently large, $[\frac{\varepsilon}{2}r, r]$ is covered entirely by red and green intervals. Just as in the $S=0$ case of section 3.2, we cite the result that an interval covered only by red and green intervals in $\hat{\mathcal{P}}$ can be covered by an all-yellow $3C$-admissible partition. Summing over this partition using Lemma [Lemma 18](#lem:boundGoodPartitionProjection){reference-type="ref" reference="lem:boundGoodPartitionProjection"} relative to $A$ with respect to $\frac{\varepsilon}{2}$ then yields $$K^A_r(x \mid p_e x, e) \leq K^A_r (x) - r + \varepsilon r,$$ which completes the proof. ◻

Equipped with this proposition, we define a new partition of $[1, r]$ for $K^A_s(y)$. The key is that this partition will, after a certain small precision, only use yellow intervals. Some of these yellow intervals will be very long green intervals, and we will use the above proposition to prove an analogue of Lemma [Lemma 31](#lem:goodtealgrowth){reference-type="ref" reference="lem:goodtealgrowth"} on them. Once we are able to sum over the intervals in our partition, we are essentially done. We now define this partition inductively. Set $r_0=r$ and assume we have defined the sequence up to $r_i$. Take a good partition of $[1, r_i]$. Let $a$ denote the minimal real such that $[a, r_i]$ is the union of yellow intervals whose lengths are at most doubling. If $a < r_i$, add all these yellow intervals to the partition, and let $r_{i+1} = a$.
Otherwise, let $r_{i+1}$ be the smallest precision such that $[r_{i+1}, r_i]$ is green, i.e., such that $f(r_i) = f(r_{i+1}) + r_i - r_{i+1}$. Note that such an $r_{i+1}$ exists in this case, since $r_i$ is the right endpoint of a teal interval. Greedily combine intervals that have a less than doubling union, re-index the intervals, and denote this partition by $\mathcal{P}$. It is easy to see that $2 r_{i+2} <r_i$. As a result, once we have established that we can use these long green intervals in our sum, the conclusion of Lemma [Lemma 22](#lem:boundGoodPartitionDistance){reference-type="ref" reference="lem:boundGoodPartitionDistance"} follows immediately. Now, we prove a lower bound (depending on $d_y$) for $r_{i+1}$ on these green intervals when $r_i$ is sufficiently large.

**Lemma 41**. *Let $0<\varepsilon<\frac{d_y-1}{2}$ be given, and suppose $r_i$ is large enough that for all $s>\frac{d_y-1}{8} r_i$, we have $d_y s - \varepsilon s\leq K_s^A(y)\leq 2 s + \varepsilon s$. Then $$r_{i+1}\geq \frac{d_y-1-\varepsilon}{3+\varepsilon}r_i\geq \frac{d_y-1}{8} r_i.$$*

*Proof.* The desired result is an immediate consequence of the fact that the line $K^A_{r_i}(y)-(r_i-s)$ cannot intersect $K^A_s(y)$ for any $s<\frac{d_y-1-\varepsilon}{3+\varepsilon}r_i$; hence, any green interval with right endpoint $r_i$ has to have left endpoint larger than $\frac{d_y-1-\varepsilon}{3+\varepsilon}r_i$. Now, assume $s$ is such that $K^A_{r_i}(y)-(r_i-s)=K^A_s(y)$. The aforementioned fact is the consequence of a simple calculation: $$K^A_{r_i}(y)-(r_i-s)\geq d_y r_i-\varepsilon r_i - (r_i-s)$$ whereas $K_s^A(y)\leq 2s +\varepsilon s$. Setting the bounds equal yields $r_{i+1}\geq s\geq\frac{d_y-1-\varepsilon}{3+\varepsilon}r_i$ and thus completes the proof. ◻

Now, we are able to state and prove the analogue of Lemma [Lemma 31](#lem:goodtealgrowth){reference-type="ref" reference="lem:goodtealgrowth"}.

**Lemma 42**.
*Suppose that $[r_{i+1}, r_i]$ is a green interval in our constructed partition. For any $\varepsilon>0$, provided that $r_{i+1}$ is sufficiently large, we have $$K^{A,x}_{r_i, r_i, r_{i+1}}(y \mid \vert x - y \vert, y) \leq \varepsilon r_i.$$ Therefore, $K^{A,x}_{r_i, r_{i+1}}(\vert x - y\vert \mid \vert x - y\vert ) \geq K^{A,x}_{r_i, r_{i+1}}(y\mid y) - \varepsilon r_{i}$.*

As before, the strategy is to use our projection theorem to verify that the hypotheses of Lemma [Lemma 15](#lem:pointDistance){reference-type="ref" reference="lem:pointDistance"} hold, which immediately implies the desired bound.

*Proof.* Let some small rational $\varepsilon>0$ be given, and assume $r_{i+1}$ is sufficiently large. Let $\eta$ be the rational such that $\eta r_i= K^A_{r_i}(y)-4\varepsilon r_i$. Let $G=D(r_i, y, \eta)$ be the oracle of Lemma [Lemma 14](#lem:oracles){reference-type="ref" reference="lem:oracles"} relative to $A$. Let $z\in B_{2^{-r_{i+1}}}(y)$ satisfying $\vert x-y\vert=\vert x-z\vert$ be given. Letting $s=-\log\vert y-z\vert$, we again have two cases. The $s\geq \frac{r_i}{2}-\log r_i$ case is identical to the previous proof, so we assume $s<\frac{r_i}{2}-\log r_i$. Lemma 29 applied relative to $(A, G)$ implies that $$K^{A, G}_{r_i}(z) \geq K^{A, G}_s(y) + K^{A, G}_{r_i-s, r_i}(x\mid y) - K^{A, G}_{r_i-s}(x\mid p_{e^\prime} x, e^\prime) - O(\log r_i).$$ To bound the projection term, we need to apply Proposition [Proposition 40](#prop:alternateOptimalProjectionTheorem){reference-type="ref" reference="prop:alternateOptimalProjectionTheorem"} with respect to $x$, $e^\prime$, $\frac{\varepsilon}{2}$, the constant $C$ such that $\frac{1}{C}= \frac{d_y-1}{16}$, $t=s$, and $r=r_i-s$. Again, we have that $r_{i+1}-1<s<\frac{r_i}{2} - \log r_i$, since $z$ is assumed to be within $2^{-r_{i+1}}$ of $y$. Clearly, then, we can take $r_i-s$ to be sufficiently large.
By assumption, $1<d_x=D_x$, and by [Lemma 41](#lem:regularSLowerBound){reference-type="ref" reference="lem:regularSLowerBound"}, we have that $$\begin{aligned} s&\geq\frac{d_y-1}{8}r_i-1\\ &\geq \frac{d_y-1}{8}(r_i-s)-1\\ &\geq \frac{d_y-1}{16}(r_i-s)\\ &= \frac{r_i-s}{C}\end{aligned}$$ Thus, we can examine sufficiently large precisions, and conditions (P1) and (P2) are satisfied. As before, (P3) is satisfied using (C2). After using some of the properties of the oracle $G$ and taking $r_i-s$ to be sufficiently large, we can apply the projection theorem relative to $A$, as below: $$\begin{aligned} K^{A, G}_{r_i}(z) &\geq K^{A, G}_s(y) + K^{A, G}_{r_i-s, r_i}(x\mid y) - K^{A, G}_{r_i-s}(x\mid p_{e^\prime} x, e^\prime) - O(\log r_i)\\ &\geq K^{A, G}_s(y) + K^{A}_{r_i-s, r_i}(x\mid y) - K^{A, G}_{r_i-s}(x\mid p_{e^\prime} x, e^\prime) - O(\log r_i)\\ &\geq K^{A, G}_s(y) + K^{A}_{r_i-s}(x) - K^{A, G}_{r_i-s}(x\mid p_{e^\prime} x, e^\prime) - O(\log r_i)\\ &\geq K^{A, G}_s(y) + d_x (r_i-s) -\frac{\varepsilon}{2} r_i - K^A_{r_i-s}(x\mid p_{e^\prime} x, e^\prime)\\ &\geq K^{A, G}_s(y) + d_x (r_i-s) - d_x(r_i-s) + (r_i-s) - \varepsilon r_i\\ &= K^{A, G}_s(y) + (r_i-s) - \varepsilon r_i\end{aligned}$$ Because of the choice of $\eta$, for small enough $\varepsilon$ we can guarantee that $K^{A, G}_s(y)=K^A_s(y) - O(\log r_i)$, since $s<\frac{r_i}{2}$. Hence, $$K^{A, G}_{r_i}(z)\geq K^{A}_s(y) + (r_i-s) - 2 \varepsilon r_i.$$ Finally, using the fact that these intervals are green, hence teal, we have $$\begin{aligned} K^{A, G}_{r_i}(z)&\geq K^{A}_s(y) + (r_i-s) - 2 \varepsilon r_i\\ &\geq K^{A}_{r_i}(y)-(r_i-s) + (r_i-s) - 2 \varepsilon r_i\\ &\geq (\eta +\varepsilon) r_i\end{aligned}$$ Thus, the conditions for Lemma [Lemma 15](#lem:pointDistance){reference-type="ref" reference="lem:pointDistance"} hold and we apply it, completing the proof. ◻

Now, we are able to prove the main effective theorem of this section.
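As an aside, the two-sided bound in Lemma [Lemma 41](#lem:regularSLowerBound){reference-type="ref" reference="lem:regularSLowerBound"} admits a quick numeric sanity check (ours, not part of the argument). The sketch below sweeps admissible pairs $(d_y, \varepsilon)$ with $1<d_y\le 2$ (effective dimensions in the plane are at most 2) and $0<\varepsilon<\frac{d_y-1}{2}$, and confirms that the coefficient $\frac{d_y-1-\varepsilon}{3+\varepsilon}$ always dominates the coarse coefficient $\frac{d_y-1}{8}$:

```python
# Numeric sanity check (not part of the paper's argument) of the inequality
# chain in Lemma 41: for 1 < d_y <= 2 and 0 < eps < (d_y - 1)/2, the
# lower-bound coefficient (d_y - 1 - eps)/(3 + eps) dominates (d_y - 1)/8.

def lemma41_coefficients(d_y: float, eps: float):
    """Return the (tight, coarse) lower-bound coefficients for r_{i+1}/r_i."""
    tight = (d_y - 1 - eps) / (3 + eps)
    coarse = (d_y - 1) / 8
    return tight, coarse

# Sweep a grid of admissible (d_y, eps) pairs.
ok = True
for k in range(100):
    d_y = 1.01 + 0.01 * k                # d_y ranges over (1, 2]
    for j in range(1, 10):
        eps = (0.1 * j) * (d_y - 1) / 2  # eps ranges over (0, (d_y - 1)/2)
        tight, coarse = lemma41_coefficients(d_y, eps)
        ok = ok and (tight >= coarse)

print(ok)  # True: the coarse bound (d_y-1)/8 never exceeds the tighter one
```

The worst case on this grid is $d_y=2$, $\varepsilon=0.45$, where the tight coefficient is about $0.159$ against the coarse $0.125$, so the chain holds with room to spare.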
As indicated, since we can now sum over our green intervals, we can begin with the conclusion of Lemma [Lemma 22](#lem:boundGoodPartitionDistance){reference-type="ref" reference="lem:boundGoodPartitionDistance"}.

**Theorem 43**. *There is some $\varepsilon_x>0$ sufficiently small, depending only on $d_x$ and $d_y$, such that the following holds. Suppose $x, y\in\mathbb{R}^2$, $e = \frac{y-x}{\vert y-x\vert}$, and $A, B\subseteq\mathbb{N}$ satisfy the following.*

- *$1<d_y<\dim^A(y), d_x<\dim^A(x)\leq \mathop{\mathrm{Dim}}^A(x) <D_x$.[^13]*
- *$K^{x,A}_r(e) = r - O(\log r)$ for all $r$.*
- *$K^{x,A, B}_r(y) \geq K^{A}_r(y) - O(\log r)$ for all sufficiently large $r$.*
- *$K^{A}_r(e\mid y) = r - o(r)$ for all $r$.*

*Then, $\mathop{\mathrm{Dim}}^A(x)-\dim^A(x)<\varepsilon_x$ implies*

*$\dim^{x,A}(\vert x-y\vert) =1.$*

*Proof.* Let $\varepsilon>0$ be given. Let $r$ be large enough that, applying Lemma [Lemma 22](#lem:boundGoodPartitionDistance){reference-type="ref" reference="lem:boundGoodPartitionDistance"} in the form of equation ([\[eq:distancepartitionbound\]](#eq:distancepartitionbound){reference-type="ref" reference="eq:distancepartitionbound"}), we have $$K^{A, x}_r(\vert x - y\vert) \geq K^A_r(y) - \sum\limits_{i\in \textbf{Bad}} \left(K^A_{a_{i+1}, a_{i}}(y \mid y) - (a_{i+1} - a_i)\right) - \frac{\varepsilon}{2} r.$$ Since the complexity grows at an average rate of exactly 1 on green intervals, this implies $$\begin{aligned} K^{A, x}_r(\vert x - y\vert) &\geq K^A_r(y) - \sum\limits_{i\in \mathcal{P}} \left(K^A_{a_{i+1}, a_{i}}(y \mid y) - (a_{i+1} - a_i)\right) - \frac{\varepsilon}{2} r\\ &\geq K^A_r(y)-K^A_r(y) + r - \varepsilon r\\ &= (1-\varepsilon) r\end{aligned}$$ Taking $\varepsilon$ as small as desired and then letting $r$ go to infinity completes the proof. ◻

Finally, we state and prove the classical result.

**Theorem 44**. *Let $X\subseteq\mathbb{R}^2$ be such that $1<d_x<\dim_H(X)=\dim_P(X)$ and let $Y\subseteq \mathbb{R}^2$ be analytic and satisfy $1<d_y<\dim_H(Y)$.
Then there is some $F\subseteq X$ such that*

*$\dim_H(\Delta_x Y)=1$*

*for all $x\in F$. Moreover, $\dim_H(X\setminus F)<\dim_H(X)$.*

*Proof.* Let $X$ and $Y$ be as above, and let $B$ be the trivial oracle. $Y$ is analytic, thus for any $d_y<s<\dim_H(Y)$, there is a compact $Y_s$ such that $0<\mathcal{H}^s(Y_s)<\infty$. Fix some such $s$ and a corresponding $Y_s$, and let $A_s$ be its effective compactness oracle. Now, we use the general point-to-set principle. Let $A_Y$ and $A_X$ be packing oracles for $Y$ and $X$ respectively. Now, take the join of $A_X, A_Y$, and $A_s$ and call this new oracle $A$. Relative to $A$, we may apply Lemma [Lemma 39](#lem:lastReductionLemma){reference-type="ref" reference="lem:lastReductionLemma"} to $Y_{s}$, giving a corresponding exceptional set $G$. Note that we choose $A$ such that the desired packing dimension conditions hold relative to $A$ for *any* $x\in X$. In light of this fact and observing that $\dim_H(X)>d_x\geq \dim_H(G)$, the set of $x\in{X}$ for which there is some $y\in Y_{s}$ satisfying conditions (C1)-(C4) relative to $A$ is nonempty. Then for all $x\in X-G$, there is a $y\in Y_{s}$ such that (C1)-(C4) hold with $A$ as the oracle. Now, further refine $X-G$ by removing all the points such that $\dim_P(X) - \dim^A(x)\geq \varepsilon_x$ for the $\varepsilon_x$ in Theorem [Theorem 43](#thm:secondEffectiveTheoremPinnedSet){reference-type="ref" reference="thm:secondEffectiveTheoremPinnedSet"}. Since $A$ is a packing oracle for $X$, this implies that the remaining points satisfy $\mathop{\mathrm{Dim}}^A(x)- \dim^A(x)<\varepsilon_x$. Letting $F$ denote $X-\left(G\cup \{x: \dim_P(X) - \dim^A(x)\geq\varepsilon_x\}\right)$ immediately implies that all $x\in F$ satisfy the requirements of Theorem [Theorem 43](#thm:secondEffectiveTheoremPinnedSet){reference-type="ref" reference="thm:secondEffectiveTheoremPinnedSet"}, and furthermore that $\dim_H(X\setminus F)<\dim_H(X)$.
Finally, applying the effective theorem yields that $\dim^{A, x}(\vert x-y\vert)=1$ for such a pair $x, y$. By Observation [Observation 12](#obs:effcompact){reference-type="ref" reference="obs:effcompact"}, since $Y_{s}$ is effectively compact relative to $A$, $\Delta_x Y_{s}$ is effectively compact relative to $(A, x)$. Finishing the proof, by Theorem [Theorem 11](#thm:strongPointToSetDim){reference-type="ref" reference="thm:strongPointToSetDim"}, we have that $$\begin{aligned} \dim_H(\Delta_x Y) &\geq \dim_H(\Delta_x Y_{s})\\ &= \sup_{y\in Y_{s}} \dim^{A, x}(\vert x-y\vert)\\ &= 1\end{aligned}$$ ◻

[^1]: The first author was supported in part by NSF DMS-2037851 and NSF DMS-2246906. Both authors are grateful to the American Institute of Mathematics for hosting the workshop *Effective methods in measure and dimension*, which was the genesis of this collaboration.

[^2]: Observe that this regularity is weaker than Ahlfors-David regularity, which Orponen considered in [@Orponen17]. Throughout the remainder of the paper, by regular, we mean the more general notion.

[^3]: More precisely, we use its effective analog.

[^4]: Recall that Theorem [Theorem 6](#thm:regularYFullDim){reference-type="ref" reference="thm:regularYFullDim"} was more or less independent of the dimension of $X$, so long as it is greater than 1. This is why we do not use the projection theorem here, as it concerns the pin $x$ and its effective dimension.

[^5]: The balls are rational in the sense that the coordinates of the centers and the radii are all rational numbers, which allows us to identify each ball by a finite string.

[^6]: *This lemma was originally proven without the "up to precision $r$" qualifier, but we rephrase it like this to match the form we will use the lemma in.
The generalization is essentially immediate, because in this case there will be some sufficiently close point to $w$ with *exactly* the same projection as $z$, indistinguishable from $w$ at precision $r$.*

[^7]: We allow $r_2$ to be equal to $r$, in the case that $[r_1+s,r]$ is covered by teal intervals.

[^8]: It would be possible to obtain a somewhat better bound in Theorem [Theorem 5](#thm:moregeneralmaintheorem){reference-type="ref" reference="thm:moregeneralmaintheorem"} that more freely involves $d_x, d_y, D_x,$ and $D_y$ using the same approach of this section, at the cost of significantly worse calculations.

[^9]: This result actually deals with $M = \{y \in Y: \dim^A(y)\geq s \text{ and } K_r^{x, A}(y)>K_r^A(y) - 8 \log r \text{ for large enough }r\}$, but since the statement only involves $x$ as an oracle, the same proof goes through with the oracle $x, B$.

[^10]: We need the oracle $B$ to make the *packing* reduction go through; in the Hausdorff case, it can be removed.

[^11]: A non-green teal interval, so one with a growth rate of strictly less than 1.

[^12]: How close it needs to be to this line for the argument to go through depends on $C$; how close we are allowed to assume it is depends on $\varepsilon_x$, since the point is not exactly regular. Thus, we choose $\varepsilon_x$ based in part on $C$.

[^13]: *Our condition (C1) is slightly different from before; this is just to make the reduction easier. Specifically, it will be helpful if we now think of $d_y$ as a strict lower bound on the dimension of points $y$. Note that this change will not complicate the application of Lemma [Lemma 39](#lem:lastReductionLemma){reference-type="ref" reference="lem:lastReductionLemma"}, since we will use it on a compact set with dimension greater than $d_y$.*
{ "id": "2309.11701", "title": "Dimension of Pinned Distance Sets for Semi-Regular Sets", "authors": "Jacob B. Fiedler and D. M. Stull", "categories": "math.CA math.LO", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | In this work, we provide deterministic error bounds for the actual state evolution of nonlinear systems embedded in the linear parameter-varying (LPV) formulation and steered by model predictive control (MPC). The main novelty concerns the explicit derivation of these deterministic bounds as polytopic tubes using linear differential inclusions (LDIs), which provide exact error formulations, in contrast to linearization schemes that introduce additional error and increase conservatism. The analysis and method are certified by solving the regulator problem of an unbalanced disk, a classical control benchmark. author: - "Dimitrios S. Karachalios$^{1}$, Maryam Nezami$^{1}$, Georg Schildbach$^{1}$ and Hossameldin S. Abbas$^{1}$ [^1] [^2]" bibliography: - main.bib title: " **Error Bounds in Nonlinear Model Predictive Control with Linear Differential Inclusions of Parametric-Varying Embeddings** " --- # INTRODUCTION Interesting control applications involve dynamics that do not satisfy the superposition and scaling principles and are thus nonlinear. On one hand, modeling with partial differential equations and discretizing with finite methods results in large-scale ordinary differential equations for which only efficient model reduction can make control feasible [@AntoulasBook2005]. On the other hand, good observables result in low-dimensional but highly nonlinear models that are easier to handle. Both approaches introduce errors due to discretization, reduction schemes, or model mismatches caused by assumptions about the original plant. Wisely handling the error that stems from the modeling mismatches between the predictive model and the actual plant's response has driven research to provide robust-tube control methods for linear and nonlinear systems in a series of papers [@Mayne2011tubes; @MAYNE2005219; @Cannon2011; @LANGSON2004125; @Schildbach2013]. These methods perform well under disturbances or measurement noise.
Among many nonlinear control methods, increasing attention has been drawn to nonlinear model predictive control (NMPC), which casts the control task as an optimization problem that can additionally handle input-state constraints. Representing control as an optimization problem amounts to finding the global minimum over a nonconvex manifold, which is quite challenging in real-time. In addition, the nonlinear constraints on both inputs and states increase the complexity of the admissible search space for detecting that minimum. To overcome the aforementioned problems in NMPC, an excellent alternative is to embed the nonlinear system in a linear parameter-varying (LPV) formulation, which results in a quadratic manifold with a unique optimal solution that can be computed efficiently online. Moreover, the nonlinear constraints can be realized with tangential schemes that allow an adaptive linear constraint formulation. Combining the quadratic manifold with the adaptive linear constraints, the linear parameter-varying model predictive control (LPVMPC) problem reduces to a classical quadratic program (QP), for which many efficient algorithms enable real-time operation, e.g., in autonomous driving tasks [@Nezami2023]. The challenge in representing nonlinear systems with the LPV embedding is the appropriate prediction of the so-called scheduling parameter $p$ and the quantification of the uncertainty it introduces. The vector $p$ absorbs all the nonlinear dependencies in an affine representation, and the LPV operator realizes the nonlinear manifold with an adaptive tangential hyperplane for fixed values of $p$ (see Fig. [1](#fig:LPV){reference-type="ref" reference="fig:LPV"}). Therefore, to solve the LPVMPC problem with a QP, the prediction of the scheduling parameter within the receding horizon needs to be initialized a priori, which consequently produces an error between the actual and model responses.
![The LPV operator $f_{\text{LPV}}(x_t,u_t):=A(p_t^*)x_t+Bu_t=x_t^+$ realizes the nonlinear manifold with a sliding tangent hyperplane over the time $t$ under a fixed scheduling parameter $p_t^*$ that introduces the error state $e_t^+$.](fig0.pdf){#fig:LPV width="55%"} The problem of the uncertainty related to embedding LPV models into an MPC framework has recently received considerable attention. For instance, one approach is the adoption of tube-based MPC for the LPV setting, as proposed in [@HanemaTubes]. In that paper, under the assumption of the boundedness of future $p$, an anticipative tube MPC algorithm for LPV systems is proposed. Also, in [@nezami2022robust], an approach inspired by the anticipative tube MPC introduced in [@HanemaTubes] for autonomous lane keeping is presented. That work suggests a cascaded MPC architecture incorporating both LPV lateral and linear longitudinal models. Furthermore, in [@Abbas2016], the stability of the LPVMPC design is guaranteed by using a bilinear matrix inequality. However, this method can be conservative and computationally demanding. In [@bujarbaruah2022robust], an offline approach is proposed for finding bounds on model uncertainty as a function of the control inputs. In other words, the tightenings of the constraints in the MPC are functions of decision variables, i.e., control inputs. In addition to MPC-related studies, there are also some works that address the problem of providing bounds on LPV modeling mismatches. In [@BAINIER202267], norm bounds on the state trajectory of a continuous-time LPV system are derived. However, the nonlinear system needs to satisfy a Lipschitz condition. In [@Chaillou2007], using the framework of strongly monotone operators, an upper bound and a lower bound for a specific class of nonlinear systems are suggested. In this paper, we first explain the derivation of the error dynamics for a general LPV system.
In the next step, we provide exact error bounds between the LPV model and the actual nonlinear system by utilizing the linear differential inclusion (LDI), [@BoydLDI], of the nonlinear operator over the parametric uncertainty that is produced in the LPVMPC framework. The proposed error bound can serve as a foundation for establishing further theoretical guarantees, e.g., stability and recursive feasibility, which will justify safety features in control of the system. In Section [2](#sec:Prel){reference-type="ref" reference="sec:Prel"}, we start with preliminaries and a formal representation of the problem under consideration. In Section [3](#sec:Analdyna){reference-type="ref" reference="sec:Analdyna"}, we formulate explicitly the error dynamics that evolve within the receding horizon along with the polytopic error bounds. In Section [4](#sec:results){reference-type="ref" reference="sec:results"}, we apply our method to a classical control benchmark with ease of reproducibility. Finally, in Section [5](#sec:conclusion){reference-type="ref" reference="sec:conclusion"}, we summarize our findings and provide the open challenges and future research directions. # Preliminaries & Problem formulation {#sec:Prel} ## Definitions & assumptions We start with the discrete nonlinear dynamical system $$\label{eq:sysnlc} \Sigma:x_{k+1}=f(x_k,u_k),$$ of state dimension $n_{\mathrm{x}}$ and input dimension $n_u$. Considering the sampling time $t_s$, it holds that $t_k=t_sk,~\forall k\in{\mathbb Z}_+$, with $x_k=x(t_sk)$ and $x_0=x(0)$ the initial-condition state vector. $f:{\mathbb R}^{n_{\mathrm{x}}}\times{\mathbb R}^{n_u}\rightarrow{\mathbb R}^{n_{\mathrm{x}}}$ is a nonlinear operator. The nonlinear dynamical system in [\[eq:sysnlc\]](#eq:sysnlc){reference-type="eqref" reference="eq:sysnlc"} can be represented equivalently with a linear parameter-varying (LPV) formulation, which gives rise to methods that apply linear techniques in an adaptive way.
To accomplish that, an appropriate scheduling parameter vector $p$ of dimension $n_{p}$ is introduced. As a further simplification, a filter that increases the state dimension can recast the scheduling dependence onto the state matrix $A(p)$ alone, allowing a static input matrix $B$. As a result, the original nonlinear system [\[eq:sysnlc\]](#eq:sysnlc){reference-type="eqref" reference="eq:sysnlc"} can be embedded in the following linear parameter-varying (LPV) formulation [\[sys:LPV\]](#sys:LPV){reference-type="eqref" reference="sys:LPV"} as $$\label{sys:LPV} \Sigma:\left\{\begin{aligned} x_{k+1}&=A(p_k)x_k+Bu_k,\\ p_k&=\rho(x_k,u_k),~x_0=x(0),\end{aligned}\right.$$ where the mapping $\rho:{\mathbb R}^{n_{\mathrm{x}}}\times{\mathbb R}^{n_u}\rightarrow{\mathbb R}^{n_{p}}$ is also given explicitly. In particular, $\rho(\cdot)$ is a known nonlinear function of the state/input pair $(x,u)$, which allows the embedding of [\[eq:sysnlc\]](#eq:sysnlc){reference-type="eqref" reference="eq:sysnlc"} in [\[sys:LPV\]](#sys:LPV){reference-type="eqref" reference="sys:LPV"}. Furthermore, the parameterized matrix $A(p_k):{\mathbb R}^{n_p}\rightarrow{\mathbb R}^{n_{\mathrm{x}}\times n_{\mathrm{x}}}$ is also known and affine in the scheduling parameter $p$. In particular, the affine structure of the discrete operator $A(p_k)$ can be expressed as $$\label{eq:affine} A(p_k):=A_0+\sum_{l=1}^{n_p}p_k^{[l]}A_{l},$$ where $p_k^{[l]}$ denotes the $l^{\text{th}}$ element of the vector $p_k$ and the $A_l$ are constant matrices. Together with the input matrix $B\in{\mathbb R}^{n_{\mathrm{x}}\times n_u}$, the discrete-time LPV system is well-defined. The following remark [Remark 1](#rem:stanassu){reference-type="ref" reference="rem:stanassu"} summarizes the general assumptions needed to proceed with solving the control task using LPV predictive models. **Remark 1** (Standing assumptions).
*To reflect the generality of the method, here are the minimal assumptions:* - *Appropriate smoothness of the nonlinear operator $f$ is assumed (i.e., continuity and higher-order differentiability).* - *The scheduling parameter can be measured at each sampling time $k$ but remains unknown within the receding horizon of length $N$.* - *The input matrix $B$ does not depend on the scheduling parameter. This can easily be relaxed through a $p$-filter.* - *No other disturbances or measurement noise are assumed to affect the system $\Sigma$.* **Problem 1** (Error propagation within the receding horizon). *We are interested in bounding the error produced between the true response of the actual nonlinear system and that predicted via LPV within the receding horizon control strategy (i.e., MPC) under a fixed scheduling prediction signal (see Fig. [1](#fig:LPV){reference-type="ref" reference="fig:LPV"}) that is inferred from prior knowledge.* ## Stabilization & model predictive control Unstable operation modes characterize the underlying dynamics in many interesting control applications. Therefore, control strategies that stabilize and drive the system to desired states under constraints are crucial. In the linear case, linear quadratic regulation (LQR) as state feedback (i.e., $u=Kx$) efficiently provides stable closed-loop systems. In addition, LQR has been extended to handle LPV representations and provides robust model-based state feedback [@robustLPVLQR] that stabilizes the system for all possible parametrizations of the scheduling $p\in {\cal P}$---a drawback, though, is that LQR cannot handle input-state constraints. Thus, we propose splitting the control input into two parts: the first part will stabilize the nonlinear plant through an LPV-LQR controller, and the second part will handle the input-state constraints.
Specifically, the input design has the following structure: $$\label{eq:LQR} u_k=u_k^{\text{LQR}}+u_k^{\text{MPC}}=K x_k+u_k^{\text{MPC}}.$$ Substituting [\[eq:LQR\]](#eq:LQR){reference-type="eqref" reference="eq:LQR"} into [\[sys:LPV\]](#sys:LPV){reference-type="eqref" reference="sys:LPV"} yields $$\footnotesize \begin{aligned} x_{k+1}&=A(p_k)x_k+Bu_k,\\ &=A(p_k)x_k+B(K x_k+u_k^{\text{MPC}}),\\ &=\underbrace{\left(A(p_k)+BK\right)}_{A_c(p_k)}x_k+Bu_k^{\text{MPC}}.\\ \end{aligned}$$ Thus, applying the LQR controller renders $A_c(p_k)$ stable, and the closed-loop dynamics can be written as $$\label{sys:LPVLQR} \Sigma_{\text{c}}:\left\{\begin{aligned} x_{k+1}&=A_c(p_k)x_k+Bu_k,\\ p_k&=\rho(x_k,u_k), \end{aligned}\right.$$ with $A_c(p_k):=A(p_k)+BK$, where the subscript "c\" denotes the closed-loop operator, and the remaining controller $u_k ^{\text{MPC}}$ can be denoted again as $u_k$ without any risk of confusion. ## Model predictive control (MPC) with LPV embeddings After stabilizing the dynamics with the LQR feedback control, we want to drive the system to a given reference under some input and state constraints. Thus, we use MPC for control tasks with input or state constraints. The whole control problem can be cast as a constrained optimization problem within a given receding horizon of length $N$. The cost is penalized with the quadratic weighting matrices[^3] $Q,~R$ and the quadratic terminal weighting matrix $P$. At $t_k=k\cdot t_s$ and for $i=0,\ldots,N-1$, this leads to a classical quadratic program (QP), for which efficient algorithms allow real-time performance. The energy function to be minimized is $$\footnotesize \begin{aligned} J_k(u_{i|k})&:=\sum_{i=0}^{N-1}\left(\lVert \hat{x}_{i|k}\rVert_Q^2+\lVert u_{i|k}\rVert_R^2\right)+\lVert \hat{x}_{N|k}\rVert_P^2. \end{aligned}$$ In addition, we can introduce adaptive linear constraints.
These are input and state constraints and can be introduced with the following sets: $$\label{eq:constraints} \begin{aligned} &\hat{x}_{i|k}\in\mathcal{X}_{i|k}=\{\hat{x}_k\in{\mathbb R}^{n_{\mathrm{x}}}|G_k^x \hat{x}_k\leq h_k^x\},\\ &u_{i|k}\in\mathcal{U}_{i|k}=\{u_k\in{\mathbb R}^{n_u}|G_k^u u_k\leq h_k^u\}.\\ \end{aligned}$$ **Problem 2**. *QP optimization as $\texttt{QP}(\hat{p}_{i|k},\hat{x}_k,x_{i|k}^{\text{ref}})$[\[prob:Prob_LPVMPC\]]{#prob:Prob_LPVMPC label="prob:Prob_LPVMPC"}* *[\[eq:LPV_MPC\]]{#eq:LPV_MPC label="eq:LPV_MPC"} $$\begin{aligned} \underset{u_{i|k}^*}{\text{min}} \ & \! \sum_{i=0}^{N-1}\left(\lVert \hat{x}_{i|k} - x^{\text{ref}}_{i|k}\rVert ^2_Q + \lVert u_{i|k}\rVert^2_R\right) + \lVert \hat{x}_{N|k} - x^{\text{ref}}_{N|k}\rVert ^2_P \\ \text{s.t.} \;\; & \hat{x}_{i+1|k}\!=\!\!A_c(\hat{p}_{i|k})\hat{x}_{i|k}\!\!+\!\!Bu_{i|k},~i\!=\!0,\!\ldots\!,\!N\!\!-\!\!1 \\ & \hat{x}_{0|k} = x_{0|k}=x_k, \\ & \hat{x}_{i|k} \in \mathcal{X}_{i|k}, \quad \forall i = 0,1,\ldots,N, \label{eq:NMPC_state_cons}\\ & u_{i|k} \in \mathcal{U}_{i|k} \label{eq:NMPC_input_cons}, \quad \forall i = 0,1,\ldots,N-1. \end{aligned}$$* The decision variables in the quadratic program (QP) in [\[prob:Prob_LPVMPC\]](#prob:Prob_LPVMPC){reference-type="ref" reference="prob:Prob_LPVMPC"} are explicitly the control inputs and implicitly the states. The state and input constraints in [\[eq:NMPC_state_cons\]](#eq:NMPC_state_cons){reference-type="ref" reference="eq:NMPC_state_cons"} and [\[eq:NMPC_input_cons\]](#eq:NMPC_input_cons){reference-type="ref" reference="eq:NMPC_input_cons"} are defined in [\[eq:constraints\]](#eq:constraints){reference-type="ref" reference="eq:constraints"}. For the optimization [\[prob:Prob_LPVMPC\]](#prob:Prob_LPVMPC){reference-type="ref" reference="prob:Prob_LPVMPC"} to be solved optimally, the estimated scheduling signal $\hat{p}_{i|k},~i=0,\ldots,N-1$, must be provided numerically a priori as a prediction, which inevitably introduces error.
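To make the preceding formulation concrete, the following minimal numpy sketch (all matrices here are hypothetical placeholders, not the paper's benchmark data) assembles the affine operator $A(p_k)$ of [\[eq:affine\]](#eq:affine){reference-type="eqref" reference="eq:affine"} and verifies numerically that the input split $u_k=Kx_k+u_k^{\text{MPC}}$ of [\[eq:LQR\]](#eq:LQR){reference-type="eqref" reference="eq:LQR"} reproduces one step of the closed-loop dynamics with $A_c(p_k)=A(p_k)+BK$:

```python
import numpy as np

def A_of_p(A0, A_list, p):
    """Affine scheduling dependence: A(p) = A0 + sum_l p[l] * A_l."""
    return A0 + sum(pl * Al for pl, Al in zip(p, A_list))

# Hypothetical 2-state, 1-parameter LPV system (illustration only).
A0 = np.array([[1.0, 0.01], [0.0, 0.975]])
A1 = np.array([[0.0, 0.0], [0.127, 0.0]])
B = np.array([[0.0], [0.275]])
K = np.array([[-2.0, -0.5]])          # hypothetical stabilizing LQR gain

p = np.array([0.6])                   # scheduling value, p_k = rho(x_k, u_k)
x = np.array([[0.3], [-0.1]])
u_mpc = np.array([[0.2]])

# Open-loop step with the split input u = K x + u_mpc ...
x_next_split = A_of_p(A0, [A1], p) @ x + B @ (K @ x + u_mpc)
# ... equals the closed-loop step with A_c(p) = A(p) + B K.
Ac = A_of_p(A0, [A1], p) + B @ K
x_next_closed = Ac @ x + B @ u_mpc
assert np.allclose(x_next_split, x_next_closed)
```

The same `A_of_p` helper extends to any $n_p$ by passing more matrices in `A_list`.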
Next, we explicitly introduce and define the error by providing the appropriate analysis for deriving deterministic bounds. [\[algo:lpvmpc\]]{#algo:lpvmpc label="algo:lpvmpc"} **Input**: Initial conditions $x_0$, the reference $(x^{\texttt{ref}},y^{\texttt{ref}})$ with $k\in\mathbb{Z}_+$ and the hyper-parameters $\texttt{MaxIter}\in\mathbb{Z}_+,~\varepsilon\in\mathbb{R}_+$.\ **Output**: The control input $u_k,~k=1,\ldots$, that drives the nonlinear system to the reference under linear constraints. [\[alg:LPVMPC\]]{#alg:LPVMPC label="alg:LPVMPC"} Initialize for $k=0$ the scheduling vector $\hat{p}_{i|0}$ as $$\hat{p}_{i|0}:=\rho\left(x_0,u_0=0\right),~i=0,\ldots,N-1$$ Update the state $x_{i|k}^{\text{ref}}$ Set $j=0$ $j\leftarrow j+1$ Solve the QP in [\[eq:LPV_MPC\]](#eq:LPV_MPC){reference-type="eqref" reference="eq:LPV_MPC"} $$\begin{aligned} \left[\hat{x}_{i+1|k},u_{i|k}\right]&\leftarrow\texttt{QP}^{(j)}(\hat{p}_{i|k},x_k,x_{i|k}^{\text{ref}}),~i=0,\ldots,N-1\\ \text{Update}~\hat{p}_{i|k}&:=\rho(x_{i|k},u_{i|k}),~i=0,\ldots,N\\ \gamma_j&:=\|\hat{p}_{i|k}^{(j)}-\hat{p}_{i|k}^{(j-1)}\|_2,~i=0,\ldots,N-1 \end{aligned}$$ Apply $u_k=u_{0|k}$ to the system [\[sys:LPVLQR\]](#sys:LPVLQR){reference-type="eqref" reference="sys:LPVLQR"} Measure $x_{k+1}$ Update $\hat{p}_{i|k+1}=\hat{p}_{i+1|k},~i=0,\ldots,N-1$ $k\leftarrow k+1$ Quantifying the (parametric) uncertainty in the prediction of the scheduling signal is at the heart of this study. The error stems from the fact that the scheduling initialization (estimation) $\hat{p}_{i|k}$ is not necessarily the correct (true) one that the nonlinear plant will follow. In particular, by applying the optimized control input $u_{i|k}^\ast$ to the real plant, the true response $x_{i|k}$ will deviate from the predicted one $\hat{x}_{i|k}$.
Thus, we aim to provide bounds for this mismatch by introducing the error state at time $k$ and prediction step $i$ (i.e., at the $(i+k)^{\text{th}}$ real simulation time) between the true and predicted response, defined as $$\label{eq:errorstate} e_{i|k}:=\underbrace{x_{i|k}}_{\text{true}}-\underbrace{\hat{x}_{i|k}}_{\text{predicted}},$$ where $e_{0|k}=x_{0|k}-\hat{x}_{0|k}=x_0-x_0=0,~\forall k\in{\mathbb Z}_+$. Before deriving the theoretical analysis of the error [\[eq:errorstate\]](#eq:errorstate){reference-type="eqref" reference="eq:errorstate"}, we state the mean value theorem (MVT) in the generalized multivariable case, which will provide the LDI of the relevant quantities. **Theorem 1** (The mean value theorem (MVT)). *Let $g$ be a multivariable function defined over a nondegenerate segment $[x_1,x_2]\subset{\mathbb R}^n$ (i.e., $x_1\neq x_2$). If* - *$g$ is continuous in $[x_1,x_2]$, and* - *$g$ is differentiable in $(x_1,x_2)$, then* *$$\exists\xi\in(x_1,x_2):g(x_2)=g(x_1)+\nabla g(\xi)(x_2-x_1).$$* **Remark 2** (Jacobian). *The Jacobian is computed as $$\footnotesize J(x):=\nabla g(x)=\left[\begin{array}{ccc} \frac{\partial g_1}{\partial x_1} & \cdots & \frac{\partial g_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial g_n}{\partial x_1} & \cdots & \frac{\partial g_n}{\partial x_n} \\ \end{array}\right],~x=\left[\begin{array}{c} x_1 \\ \vdots\\ x_n \end{array}\right].$$* # Analysis of the error dynamics & bounds {#sec:Analdyna} At time $k$, the prediction of the scheduling signal is $\hat{p}_{i|k},~i=0,\ldots,N-1$.
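As a quick numerical illustration of Theorem [1](#the:MVT){reference-type="ref" reference="the:MVT"} in the scalar case (a sketch with an arbitrary test function, not tied to the paper's benchmark), one can locate a mean-value point $\xi$ by a simple grid search:

```python
import numpy as np

# Scalar MVT: for g(t) = sin(t) on [a, b] there exists xi in (a, b)
# with g'(xi) = (g(b) - g(a)) / (b - a).  Here g'(t) = cos(t).
a, b = 0.2, 1.5
slope = (np.sin(b) - np.sin(a)) / (b - a)

ts = np.linspace(a, b, 200001)[1:-1]      # interior grid points only
residual = np.abs(np.cos(ts) - slope)     # |g'(t) - secant slope|
xi = ts[np.argmin(residual)]

assert residual.min() < 1e-4              # a mean-value point exists
assert a < xi < b                         # and lies strictly inside (a, b)
```

In the multivariable case used below, the same idea holds componentwise along the line segment $[x_1,x_2]$.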
Schematically, we have the following high-level information: $$\footnotesize \hat{p}_{i|k}\rightarrow\boxed{\hat{\Sigma}(\hat{p}_{i|k})}\xrightarrow[u^{\ast}_{i|k}]{\text{MPC}}\boxed{\Sigma(x_{i|k},u_{i|k}^\ast)}\xrightarrow[\text{to}~\Sigma]{\text{apply}} p_{i|k}=\rho(x_{i|k},u_{i|k}).$$ ## Error propagation within the receding horizon The scope of the following analysis is to describe the error propagation for the deviation between $\hat{x}_{i|k}$ (predicted state) and $x_{i|k}$ (true state) when a scheduling signal $\hat{p}_{i|k}$ has been used from a previous prediction within the receding horizon. Once a scheduling parameter has been assumed, the solution of the QP offers the optimally designed input $u^\ast$ along with the predicted states $\hat{x}$. Applying the input $u^\ast$ to the real system results in the true states $x$. These dynamical systems can be represented formally before and after the solution of the QP problem for $i=0,\ldots,N-1$ as $$\begin{aligned} \footnotesize \hat{x}_{i+1|k}&=A_c(\hat{p}_{i|k})\hat{x}_{i|k}+Bu_{i|k}^{\ast},~\text{(before QP solution)}\label{eq:LPVhat}\\ x_{i+1|k}&=A_c(p_{i|k})x_{i|k}+Bu_{i|k}^{\ast},~\text{(after QP solution)}\label{eq:LPVtrue} \end{aligned}$$ Using the error state from ([\[eq:errorstate\]](#eq:errorstate){reference-type="ref" reference="eq:errorstate"}) and subtracting ([\[eq:LPVhat\]](#eq:LPVhat){reference-type="ref" reference="eq:LPVhat"}) from ([\[eq:LPVtrue\]](#eq:LPVtrue){reference-type="ref" reference="eq:LPVtrue"}), we obtain a dynamical system that describes the error dynamics as $$\label{eq:error} e_{i+1|k}=A_c(p_{i|k})x_{i|k}-A_c(\hat{p}_{i|k} )\hat{x}_{i|k}.$$ **Remark 3** (True and predicted scheduling). *The main difference between ([\[eq:LPVtrue\]](#eq:LPVtrue){reference-type="ref" reference="eq:LPVtrue"}) and ([\[eq:LPVhat\]](#eq:LPVhat){reference-type="ref" reference="eq:LPVhat"}) is how the states relate to the corresponding scheduling signals.
In particular, for the true state $x_{i|k}$, it holds that $p_{i|k}=\rho(x_{i|k},u_{i|k}^\ast)$, but for the predicted state, we cannot claim the same, and the mismatch $\hat{p}_{i|k}\overset{?}{\sim}\rho(\hat{x}_{i|k},u_{i|k}^\ast)$ produces nonzero error dynamics.* We continue with ([\[eq:error\]](#eq:error){reference-type="ref" reference="eq:error"}) after substituting the affine operator ([\[eq:affine\]](#eq:affine){reference-type="ref" reference="eq:affine"}) and obtain $$\footnotesize \begin{aligned} e_{i+1|k}&=\left(A_{c0}+\sum_{l=1}^{n_p}p_{i|k}^{[l]}A_{cl}\right)x_{i|k}-\left(A_{c0}+\sum_{l=1}^{n_p}\hat{p}_{i|k}^{[l]}A_{cl}\right)\hat{x}_{i|k}\\ &=A_{c0}e_{i|k}+\underbrace{\left(\sum_{l=1}^{n_p}p_{i|k}^{[l]}A_{cl}\right)}_{\sigma(x_{i|k})}x_{i|k}-\underbrace{\left(\sum_{l=1}^{n_p}\hat{p}_{i|k}^{[l]}A_{cl}\right)}_{\hat{\sigma}_{i|k}}\hat{x}_{i|k}\\ &=A_{c0}e_{i|k}+\sigma(x_{i|k})x_{i|k}-\hat{\sigma}_{i|k}\hat{x}_{i|k}, \end{aligned}$$ where the smooth operator $g:{\mathbb R}^n\rightarrow{\mathbb R}^n$ is introduced to model the nonlinear functionality as $$\label{eq:operator} g(x):=\sigma(x)x,~x\in{\mathbb R}^n,$$ with $\sigma(x):=\sum_{l=1}^{n_p}p_{i|k}^{[l]}A_{cl}$ (the dependence on $x$ enters through $p_{i|k}=\rho(x_{i|k},u_{i|k})$). By denoting further the known quantity $\hat{\sigma}_{i|k}=\sum_{l=1}^{n_p}\hat{p}_{i|k}^{[l]}A_{cl}$, we arrive at the following error dynamics $\forall k\in{\mathbb Z}_+$ and $i=0,\ldots,N-1$ in a concise way as $$\label{eq:errordynamics} e_{i+1|k}=A_{c0}e_{i|k}+g(x_{i|k})-\hat{\sigma}_{i|k}\hat{x}_{i|k},~e_{0|k}:=0.$$ Before proceeding with handling the unknown quantities in [\[eq:errordynamics\]](#eq:errordynamics){reference-type="eqref" reference="eq:errordynamics"} (i.e., $\xi_{i|k},~x_{i|k},~\hat{x}_{i|k}$), we explain the initialization of the scheduling signal at time step $k$.
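The algebraic step from [\[eq:error\]](#eq:error){reference-type="eqref" reference="eq:error"} to [\[eq:errordynamics\]](#eq:errordynamics){reference-type="eqref" reference="eq:errordynamics"} can be double-checked numerically; the sketch below uses randomly generated matrices (purely illustrative) and confirms that $A_c(p)x-A_c(\hat p)\hat x$ coincides with $A_{c0}e+\sigma(x)x-\hat{\sigma}\hat x$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_p = 3, 2
Ac0 = rng.standard_normal((n, n))
Acl = [rng.standard_normal((n, n)) for _ in range(n_p)]

def Ac(p):          # A_c(p) = A_c0 + sum_l p[l] * A_cl
    return Ac0 + sum(pl * Al for pl, Al in zip(p, Acl))

def sigma(p):       # sigma = sum_l p[l] * A_cl (the p-dependent part)
    return sum(pl * Al for pl, Al in zip(p, Acl))

x = rng.standard_normal((n, 1))      # true state
x_hat = rng.standard_normal((n, 1))  # predicted state
p, p_hat = rng.standard_normal(n_p), rng.standard_normal(n_p)
e = x - x_hat

lhs = Ac(p) @ x - Ac(p_hat) @ x_hat                      # error recursion
rhs = Ac0 @ e + sigma(p) @ x - sigma(p_hat) @ x_hat      # rearranged form
assert np.allclose(lhs, rhs)
```

The identity holds for arbitrary matrices because the constant part $A_{c0}$ cancels into $A_{c0}(x-\hat x)$.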
The idea is to initialize at time $k$ with the previous true (simulated) response of the actual plant (equivalent LPV) at time $k-1$ by satisfying $\hat{p}_{i|k}=p_{i+1|k-1}=\rho(x_{i+1|k-1})$, so that $\hat{\sigma}_{i|k}=\sigma(x_{i+1|k-1})$. This borrows information from the previous true response of the system and provides a good prediction due to the assumed smoothness of the nonlinear operators. ## Linear differential inclusion (LDI) & error polytopes The idea is to write the LDI (exact linearization) with the help of Theorem [1](#the:MVT){reference-type="ref" reference="the:MVT"} over the line segments $[x_{i+1|k-1},x_{i|k}],~i=0,\ldots,N-1,~\forall k\in{\mathbb Z}_+$, which connect the two sequential true state points as they propagate over the time $k$. Applying Theorem [1](#the:MVT){reference-type="ref" reference="the:MVT"}, there exists $\xi_{i|k}\in(x_{i+1|k-1},x_{i|k})$ such that $g(x_{i|k})=g(x_{i+1|k-1})+\nabla g(\xi_{i|k})(x_{i|k}-x_{i+1|k-1})$. Substituting into [\[eq:errordynamics\]](#eq:errordynamics){reference-type="eqref" reference="eq:errordynamics"}, and using $\hat{\sigma}_{i|k}=\sigma(x_{i+1|k-1})$, we obtain $$\label{eq:errorpropagation} \begin{aligned} e_{i+1|k}&=A_{c0}e_{i|k}+g(x_{i+1|k-1})-\hat{\sigma}_{i|k}\hat{x}_{i|k}\\ &+\nabla g(\xi_{i|k})(x_{i|k}-x_{i+1|k-1}),\\ &=A_{c0}e_{i|k}+\underbrace{\sigma(x_{i+1|k-1})}_{\hat{\sigma}_{i|k}}x_{i+1|k-1}-\hat{\sigma}_{i|k}\hat{x}_{i|k}\\ &+\nabla g(\xi_{i|k})(x_{i|k}-x_{i+1|k-1})\Leftrightarrow\\ e_{i+1|k}&=A_{c0}e_{i|k}+\hat{\sigma}_{i|k}(x_{i+1|k-1}-\hat{x}_{i|k})\\ &+\nabla g(\xi_{i|k})(x_{i|k}-x_{i+1|k-1}),~e_{0|k}=0.
\end{aligned}$$ Due to the uncertainty introduced by the unknowns (i.e., $\xi_{i|k},~x_{i|k},~\hat{x}_{i|k}$) in [\[eq:errorpropagation\]](#eq:errorpropagation){reference-type="eqref" reference="eq:errorpropagation"}, the way to proceed is to represent the error dynamics as a sequence of polytopic convex sets that deterministically enclose the extreme deviation between the predicted and true state responses, since no other disturbances are present. From the physical limitations of the original plant (e.g., constraints on acceleration, speed, and position), we know how to bound the variation of $\lVert x_{i|k}-x_{i+1|k-1}\rVert$. We can further enforce these constraints with the help of the decision variables $\hat{x}$ in the LPVMPC so as to impose the same variation in $\lVert x_{i+1|k-1}-\hat{x}_{i|k}\rVert$. Exploiting the affine structure of the gradient w.r.t. the scheduling parameters evaluated at $\xi_{i|k}\in(x_{i+1|k-1},x_{i|k})$, we can represent $\nabla g(\xi_{i|k})(x_{i|k}-x_{i+1|k-1})$ as the convex polytope $\mathbb{W}_{i|k}=Co\{\nu_{1},\nu_{2},...,\nu_{m}\},~m\in{\mathbb Z}_+,~m\leq 2^{n_{\mathrm{x}}(n_{\mathrm{x}}+1)}$, where $\nu_{(\cdot)}$ represents the vertices. Similarly, for the product $\hat{\sigma}_{i|k}(x_{i+1|k-1}-\hat{x}_{i|k})$, we can define the convex polytope $\mathbb{V}_{i|k}=Co\{\bar{\nu}_{1},\bar{\nu}_{2},...,\bar{\nu}_{\bar{m}}\},~\bar{m}\in{\mathbb Z}_+,~\bar{m}\leq 2^{n_{\mathrm{x}}}$. By denoting the vertex $E_{0|k}:=e_{0|k}=0_{n_\textrm{x}}$, the polytopic error tubes can be computed recursively with the Minkowski summation "$\oplus$\"[^4], $\forall k\in{\mathbb Z}_+\cup\{0\}$ and $i=0,\ldots,N-1$, as $$\label{eq:errorpolytubes} \begin{aligned} \mathbb{E}_{i+1|k}&=\left(A_{c0}\mathbb{E}_{i|k}\right)\oplus\mathbb{V}_{i|k}\oplus\mathbb{W}_{i|k},~\mathbb{E}_{0|k}=0,\\ e_{i|k}&\in\mathbb{E}_{i|k} \Leftrightarrow x_{i|k}\in(\hat{x}_{i|k}\oplus\mathbb{E}_{i|k}).
\end{aligned}$$ # Results {#sec:results} **Example 1** (The unbalanced disk regulator problem). *We start by providing in Table [1](#tab:MPCPar){reference-type="ref" reference="tab:MPCPar"} all the input-state constraints along with the tuning parameters for solving the LPVMPC problem.*

  ----------------------------- ----------------------------- ----------------------------- -----------------------
  ***Parameter***               ***Value***                   ***Parameter***               ***Value***
  *Lower bound on $\theta_k$*   *$-2\pi$ \[rad\]*             *Upper bound on $\theta_k$*   *$2\pi$ \[rad\]*
  *Lower bound on $\omega_k$*   *$-10\pi$ \[rad/s\]*          *Upper bound on $\omega_k$*   *$10\pi$ \[rad/s\]*
  *Lower bound on $u_k$*        *$-10$ \[V\]*                 *Upper bound on $u_k$*        *$10$ \[V\]*
  *Sampling time $t_s$*         *$0.01$ \[s\]*                *Horizon length $N$*          *$10$*
  *Lower bound on $\Delta_1$*   *$-t_s 10\pi$ \[rad\]*        *Upper bound on $\Delta_1$*   *$t_s 10\pi$ \[rad\]*
  *Lower bound on $\Delta_2$*   *any \[rad/s$^2$\]*           *Upper bound on $\Delta_2$*   *any \[rad/s$^2$\]*
  *Quadratic state costs*       *$Q=\texttt{diag}(8,~0.1)$*   *Quadratic input cost*        *$R=0.5$*
  ----------------------------- ----------------------------- ----------------------------- -----------------------

  : *MPC Parameters*

*The terminal cost $P$ is computed from the solution of the Lyapunov matrix equation in the robust model-based LQR, together with the feedback gain $K$. We consider the unbalanced disk regulator problem, starting from the initial condition $x_0=\left[\begin{array}{cc} -6 & 0 \\ \end{array}\right]^\top$. We want to drive the dynamics to the state origin, which is our reference signal $x^{\text{ref}}=\left[\begin{array}{cc} 0 & 0 \\ \end{array}\right]^\top$.
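The box bounds of Table [1](#tab:MPCPar){reference-type="ref" reference="tab:MPCPar"} can be encoded in the half-space form $Gx\le h$ of [\[eq:constraints\]](#eq:constraints){reference-type="eqref" reference="eq:constraints"}; a minimal sketch (the stacking convention is ours, not prescribed by the paper):

```python
import numpy as np

def box_to_Gh(lo, hi):
    """Encode lo <= z <= hi as G z <= h by stacking +/- identity rows."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    n = lo.size
    G = np.vstack([np.eye(n), -np.eye(n)])
    h = np.concatenate([hi, -lo])
    return G, h

# State box from Table 1: theta in [-2*pi, 2*pi], omega in [-10*pi, 10*pi]
Gx, hx = box_to_Gh([-2 * np.pi, -10 * np.pi], [2 * np.pi, 10 * np.pi])
# Input box from Table 1: u in [-10, 10] V
Gu, hu = box_to_Gh([-10.0], [10.0])

def member(G, h, z):
    """Check membership z in {z : G z <= h} (with a small tolerance)."""
    return bool(np.all(G @ z <= h + 1e-12))

assert member(Gx, hx, np.array([-6.0, 0.0]))      # the initial condition is feasible
assert not member(Gx, hx, np.array([10.0, 0.0]))  # outside the theta bounds
```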
The dynamical system that describes the phenomenon, with angular displacement $\theta(t)$ and angular speed $\omega(t)$ (i.e., $\omega(t)=\dot{\theta}(t)$) and state vector $x(t)=[\begin{array}{cc} \theta(t) & \omega(t) \end{array}]^\top$, has the continuous state-space representation as in [\[sys:LPV\]](#sys:LPV){reference-type="eqref" reference="sys:LPV"} after introducing the scheduling variable $p(t):=\sin(\theta(t))/\theta(t)=\mathop{\mathrm{sinc}}(\theta(t))$. The matrices that define the continuous-time system with $p(t)=\mathop{\mathrm{sinc}}(\theta(t))$ are $$\footnotesize A_{cont}(p(t))=\left[\begin{array}{cc} 0 & 1 \\ \frac{mgl}{I_n}p(t) & -\frac{1}{\tau} \end{array}\right],~B_{cont}=\left[\begin{array}{c} 0 \\ \frac{K_m}{\tau} \end{array}\right],$$ and the chosen parameters are provided in Table [2](#tab:disk_parameters){reference-type="ref" reference="tab:disk_parameters"}.*

  ---------- -------------------------------------
  *$I_n$*    *$2.4\cdot 10^{-4}$ \[kg$\cdot m^2$\]*
  *$m$*      *$0.076~$ \[kg\]*
  *$g$*      *$9.81$ \[m/s$^2$\]*
  *$l$*      *$0.041$ \[m\]*
  *$\tau$*   *$0.4$ \[1/s\]*
  *$K_m$*    *$11$ \[rad/Vs$^2$\]*
  ---------- -------------------------------------

  : *Parameters for the unbalanced disk example* *[\[tab:disk_parameters\]]{#tab:disk_parameters label="tab:disk_parameters"}*

*We discretize with forward Euler[^5], and with $x_k=\left[\begin{matrix} \theta_k & \omega_k \end{matrix}\right]^\top$, the resulting discrete LPV system [\[sys:LPV\]](#sys:LPV){reference-type="eqref" reference="sys:LPV"} with scheduling parameter $p_k=\mathop{\mathrm{sinc}}(\theta_k)$ has the following matrices $$\footnotesize \begin{aligned}\label{eq:discdisc} A(p_k)&=\left[\begin{array}{cc} 1 & t_s \\ t_s\frac{mgl}{I_n}p_k & 1-\frac{t_s}{\tau} \end{array}\right],~B=\left[\begin{array}{c} 0 \\ t_s\frac{K_m}{\tau} \end{array}\right].
\end{aligned}$$ In the Appendix, we carry out the heavy computations of the important quantities to be used next. Proceeding from [\[eq:appenlast\]](#eq:appenlast){reference-type="eqref" reference="eq:appenlast"}, we define the vector $\xi=\left[\begin{array}{cc} \xi^{(1)} & \xi^{(2)}\\ \end{array}\right]^\top$ and assume $\xi^{(1)}\in[\xi_{min}^{(1)}=\pi,~\xi_{max}^{(1)}=2\pi]$, which gives the maximum variation $-1\leq\cos(\xi^{(1)})\leq 1$. Then, we define $$\footnotesize \begin{aligned} G_1(\xi)&=\left[\begin{array}{cc} 0 & 0 \\ \gamma\cos(\xi^{(1)}) & 0\end{array}\right],~h_1=\left[\begin{array}{c} \pm\Delta_1 \\ \pm\Delta_2\end{array}\right],\\ \nu_{1}^{max}&=G_1(\xi_{max}^{(1)})h_1=\left[\begin{array}{c} 0 \\ \gamma\cos(\xi_{max}^{(1)})\Delta_1 \end{array}\right],\\ \nu_{1}^{min}&=G_1(\xi_{min}^{(1)})h_1=\left[\begin{array}{c} 0 \\ \gamma\cos(\xi_{min}^{(1)})\Delta_1 \end{array}\right]. \end{aligned}$$ Due to the switching sign in $\cos(\xi^{(1)})=\pm 1$, we can use $+\Delta_1=t_s\omega_{max}$ only; otherwise, we would obtain the same (redundant) vertices. Moreover, it is algebraically evident that $\Delta_2$ can take any arbitrary value, as it will not affect the topology of the polytope. The convex hull is defined from the two vertices only as $$G_{1}(\xi_{i|k})h_1\in\texttt{Co}(\nu_1^{min},\nu_1^{max}):=\mathbb{W}.$$ Instead of considering the most conservative case in bounding the maximum variation of the gradient, we could obtain convex sets $\mathbb{W}_{i|k}$ that change online and take into consideration the curvature, which could reduce conservatism but would increase the complexity.
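The vertex construction above can be prototyped as follows. Here $\gamma=t_s\,mgl/I_n$ is our assumption for the constant appearing in $G_1(\xi)$ (the exact definition comes from the Appendix via [\[eq:Ac0Ac1\]](#eq:Ac0Ac1){reference-type="eqref" reference="eq:Ac0Ac1"}, which is not reproduced in this excerpt), the Minkowski sums are taken over pairwise vertex sums, and, purely for illustration, the set $\mathbb{V}$ is taken equal to $\mathbb{W}$ and the $p$-dependent and $BK$ parts of $A_{c0}$ are omitted:

```python
import numpy as np

# Unbalanced-disk data from Table 2 (gamma as assumed in the lead-in).
I_n, m, g, l, tau, t_s = 2.4e-4, 0.076, 9.81, 0.041, 0.4, 0.01
gamma = t_s * m * g * l / I_n
Delta1 = t_s * 10 * np.pi                 # +Delta_1 = t_s * omega_max

# Vertices of W from cos(xi) = +/-1 at xi = 2*pi and xi = pi.
nu1_max = np.array([0.0, gamma * np.cos(2 * np.pi) * Delta1])
nu1_min = np.array([0.0, gamma * np.cos(np.pi) * Delta1])
W = np.array([nu1_min, nu1_max])

def minkowski(P, Q):
    """Vertex candidates of the Minkowski sum P (+) Q (pairwise sums)."""
    return np.array([p + q for p in P for q in Q])

# One step of the tube recursion E1 = (Ac0 E0) (+) V (+) W with E0 = {0}.
Ac0 = np.array([[1.0, t_s], [0.0, 1.0 - t_s / tau]])  # sketch only
E0 = np.zeros((1, 2))
E1 = minkowski(minkowski(E0 @ Ac0.T, W), W)
```

Taking the convex hull of the pairwise vertex sums is exact for the Minkowski sum of convex polytopes; redundant interior candidates can be pruned afterwards.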
Accordingly, we define $$\footnotesize \begin{aligned} G_2(\hat{p}_{i|k})&=\left[\begin{array}{cc} 0 & 0 \\ \gamma\hat{p}_{i|k} & 0\end{array}\right],~h_2(\pm)=\left[\begin{array}{c} \pm\Delta_1 \\ \pm\Delta_2\end{array}\right],\\ \nu_{2}^{max}&=G_2(\hat{p}_{i|k})h_2(+)=\left[\begin{array}{c} 0 \\ \gamma\hat{p}_{i|k}\Delta_1 \end{array}\right],\\ \nu_{2}^{min}&=G_2(\hat{p}_{i|k})h_2(-)=\left[\begin{array}{c} 0 \\ -\gamma\hat{p}_{i|k}\Delta_1 \end{array}\right]. \end{aligned}$$ Thus, the convex hull is defined from the two vertices as $$G_{2}(\hat{p}_{i|k})h_2\in\texttt{Co}(\nu_2^{min},\nu_2^{max}):=\mathbb{V}_{i|k}.$$ We denote further the vertex $\mathbb{E}_{0|k}:=e_0=\left[\begin{array}{cc} 0 & 0 \\ \end{array}\right]^\top$, and finally, the translated convex error sets around the predictions $\hat{x}_{i|k}$, with the notation in [\[eq:Ac0Ac1\]](#eq:Ac0Ac1){reference-type="eqref" reference="eq:Ac0Ac1"}, are computed as $$\begin{aligned} \mathbb{E}_{i+1|k}&=(A_{c0}\mathbb{E}_{i|k})\oplus\mathbb{V}_{i|k}\oplus\mathbb{W},~i=0,\ldots,N-1\\ x_{i|k}&\in\left(\hat{x}_{i|k}\oplus\mathbb{E}_{i|k}\right),~\forall k\in{\mathbb Z}_+. \end{aligned}$$* *The upper panel of [\[fig:fig1\]](#fig:fig1){reference-type="ref" reference="fig:fig1"} depicts the solution of the LPVMPC problem for $k=1$, together with the sequence of error polytopes $\mathbb{E}_{i|k},~i=0,\ldots,N-1$, which gives an $N$-step-ahead bounded prediction of the actual response $x_{i|k}$ relative to the predicted one. In simple terms, since the actual system has been embedded equivalently in the LPV formulation and there are no other disturbances, we know a priori the maximum variation of the true response of the system. For $k=1$, to obtain an improved prediction of the scheduling signal, we solve the MPC problem in [\[alg:LPVMPC\]](#alg:LPVMPC){reference-type="ref" reference="alg:LPVMPC"} with $\texttt{MaxIter}=10$ and $\varepsilon=1e-7$.
This reduces the efficiency in the beginning, as we solve a sequence of MPC problems for the fixed $k=1$, but it also allows a "warm start" that makes the initialization of the scheduling signal more meaningful for the system. For the rest of the simulations $k>1$, we set $\texttt{MaxIter}=1$ and benefit from the full efficiency of solving QPs: the average solve time is $\sim 0.002$ s $<t_s=0.01$ s, which certifies real-time capability.* *In [\[fig:fig1\]](#fig:fig1){reference-type="ref" reference="fig:fig1"}-(lower), similarly, the aforementioned analysis is illustrated at time $k=15$.* *![image](fig1.pdf){width="75%"} ![image](fig2.pdf){width="75%"}* *In [\[fig:fig3\]](#fig:fig3){reference-type="ref" reference="fig:fig3"}, the complete solution of the LPVMPC framework for the regulator problem of the unbalanced disk is illustrated. The monotone angular displacement $\theta(t)$, which reaches the origin target without overshooting, indicates performance of the kind seen in NMPC frameworks. In addition, the computational burden of NMPC is avoided by exploiting the efficiency of QPs.* ![image](fig3.pdf){width="75%"} # CONCLUSIONS {#sec:conclusion} *In this study, we were concerned with nonlinear control problems and aimed to provide a method that offers efficient solutions by analyzing some of the inherent challenges. To make MPC feasible for the nonlinear case, we represented the nonlinear system equivalently with a linear parameter-varying (LPV) embedding, which benefits from well-established linear control theory and retains advantages such as quadratic program (QP) efficiency and robust stabilization with the linear quadratic regulator (LQR).* *The main challenge of using the LPV formulation for such control tasks was the uncertainty introduced by the predicted scheduling parameter, which acts as the source of error between the actual and predicted responses of the system within the receding horizon.
To tackle this challenge, we explicitly derived the error dynamics and, by applying linear differential inclusions (LDIs), obtained polytopic error bounds. Such a result can be computed before solving the MPC problem, as long as the actual system's maximum capabilities (i.e., constraints) are known, which supports further analysis.* *The derived polytopic error tubes reasonably estimate the actual response in the unbalanced disk example without being overly conservative. Building on this preliminary result, future work will reduce conservatism further by tightening the derived bounds online through appropriate handling of the polytopic sets $\mathbb{W}_{i|k},~\mathbb{V}_{i|k}$. In addition, we plan to combine the derived error formulations with machine learning (ML) techniques, such as Gaussian processes (GPs), which can improve the predictions (e.g., of the scheduling parameters) by offering variance measures that certify robustness against system disturbances and measurement noise.* *We also aim to investigate systems that are more challenging with respect to the state or scheduling-parameter dimension. Reduction techniques will be mandatory for handling the complexity of the derived analysis. The computation of invariant sets from the derived error analysis will set the ground for our research endeavors in the immediate future. Finally, our long-term goal is to provide theoretical guarantees, such as stability and recursive feasibility, that assert safety in autonomous systems ranging from mechanical to medical engineering disciplines.* # APPENDIX {#appendix .unnumbered} *Stabilizing the unstable plant with robust model-based LQR feedback, the gain $K$ is computed optimally.
To simplify the exposition, suppose that $K=\left[\begin{array}{cc} \alpha & \beta\end{array}\right]$; then the stabilized linear matrix $A_c(p_k)=A(p_k)+BK$ has the following form: $$\footnotesize \begin{aligned} A_c(p_k)&=\left[\begin{array}{cc} 1 & t_s \\ t_s\frac{mgl}{I_n}p_k+\alpha t_s\frac{K_m}{\tau} & 1-\frac{t_s}{\tau}+\beta t_s\frac{K_m}{\tau} \end{array}\right]. \end{aligned}$$ We define the parameters $\gamma:=t_smgl/I_n$, $\delta:=\alpha t_sK_m/\tau$, $\eta:=1-t_s/\tau+\beta t_sK_m/\tau$, and exploit the affine structure $$\label{eq:Ac0Ac1} \footnotesize A_c(\rho(\theta_k))=\underbrace{\left[\begin{array}{cc} 1 & t_s \\ \delta & \eta \end{array}\right]}_{A_{c0}}+\underbrace{\left[\begin{array}{cc} 0 & 0 \\ \gamma & 0 \end{array}\right]}_{A_{c1}}\mathop{\mathrm{sinc}}(\theta_k).$$ The operator $g$, with $x=\left[\begin{array}{cc} \theta & \omega \\ \end{array}\right]^T$, reads $$\footnotesize \begin{aligned} g(x)&=\sigma\left(\left[\begin{array}{c} \theta \\ \omega \end{array}\right]\right)\left[\begin{array}{c} \theta \\ \omega \end{array}\right]=\mathop{\mathrm{sinc}}(\theta)\left[\begin{array}{c} \theta \\ \omega \end{array}\right]=\left[\begin{array}{c} \sin(\theta) \\ \omega\mathop{\mathrm{sinc}}(\theta) \end{array}\right], \end{aligned}$$ where the Jacobian is computed as $$\footnotesize J(x)=\nabla g(x)=\left[\begin{array}{cc} \cos(\theta) & 0\\ \frac{\omega}{\theta}\left(\cos(\theta)-\mathop{\mathrm{sinc}}(\theta)\right) & \mathop{\mathrm{sinc}}(\theta) \end{array}\right].$$* Substituting in [\[eq:errorpropagation\]](#eq:errorpropagation){reference-type="eqref" reference="eq:errorpropagation"}, we can derive explicitly $$\label{eq:appenlast} \footnotesize \begin{aligned}
e_{i+1|k}&=\left[\begin{array}{cc} 1 & t_s \\ \delta & \eta \end{array}\right]e_{i|k}+\left[\begin{array}{cc} 0 & 0 \\ \gamma\hat{p}_{i|k} & 0 \end{array}\right]\left[\begin{array}{c} x^{(1)}_{i+1|k-1}-\hat{x}^{(1)}_{i|k} \\ x^{(2)}_{i+1|k-1}-\hat{x}^{(2)}_{i|k} \end{array}\right]\\ &+\left[\begin{array}{cc} 0 & 0 \\ \gamma\cos(\xi_{i|k}^{(1)}) & 0 \end{array}\right]\left[\begin{array}{c} x^{(1)}_{i|k}-\hat{x}^{(1)}_{i+1|k-1} \\ x^{(2)}_{i|k}-\hat{x}^{(2)}_{i+1|k-1}\end{array}\right]. \end{aligned}$$ [^1]: \*First author was supported by DFG (German research foundation) [^2]: $^{1}$The authors are with the Faculty of Electrical Engineering in Medicine, University of Luebeck, Germany. `email: dimitrios.karachalios@uni-luebeck.de` [^3]: The quadratic weighted cost is defined as $\lVert x\rVert_Q^2=x^{\top}Qx$. Similarly, for $R$ and $P$. [^4]: Minkowski sum: $A\oplus B:=\{a+b~|~a\in A,~b\in B\}$. [^5]: *Forward Euler: $\dot{x}(t_k)\approx\frac{x(t_k+t_s)-x(t_k)}{t_s},~t_k=t_s\cdot k,~k\in{\mathbb Z}_+$.*
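As a sanity check on the Appendix computations, the Jacobian $J(x)=\nabla g(x)$ derived above can be verified against central finite differences; this is a verification sketch with an arbitrary test point away from $\theta=0$.

```python
import numpy as np

def sinc(t):
    # Unnormalized sinc, sin(t)/t, as used in the text
    # (note that np.sinc computes sin(pi*t)/(pi*t) instead).
    return np.sin(t) / t

def g(x):
    theta, omega = x
    return np.array([np.sin(theta), omega * sinc(theta)])

def J(x):
    # Analytic Jacobian from the Appendix
    theta, omega = x
    return np.array([
        [np.cos(theta), 0.0],
        [(omega / theta) * (np.cos(theta) - sinc(theta)), sinc(theta)],
    ])

# Central finite-difference check at an arbitrary point (theta != 0)
x0 = np.array([0.7, -1.3])
eps = 1e-6
J_fd = np.column_stack([(g(x0 + eps * e) - g(x0 - eps * e)) / (2 * eps)
                        for e in np.eye(2)])
```

The column-by-column finite differences should agree with the closed-form entries to roughly the square of the step size.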
--- abstract: | This paper is inspired by an imaging problem encountered in the framework of Electrical Resistance Tomography involving two different materials, one or both of which are nonlinear. Tomography with nonlinear materials is in the early stages of development, although breakthroughs are expected in the not-too-distant future. We consider nonlinear constitutive relationships which, at a given point in space, present a behaviour for large arguments that is described by monomials of order $p$ and $q$. The original contribution this work makes is that the nonlinear problem can be approximated by a weighted $p-$Laplace problem. From the perspective of tomography, this is a significant result because it highlights the central role played by the $p-$Laplacian in inverse problems with nonlinear materials. Moreover, when $p=2$, this provides a powerful bridge to bring all the imaging methods and algorithms developed for linear materials into the arena of problems with nonlinear materials. The main result of this work is that for ''large'' Dirichlet data in the presence of two materials of different order (i) one material can be replaced by either a perfect electric conductor or a perfect electric insulator and (ii) the other material can be replaced by a material giving rise to a weighted $p-$Laplace problem. [**MSC 2020**]{.smallcaps}: 35J62, 78A46, 35R30. [**Key words and phrases**]{.smallcaps}. Elliptic PDE, Quasilinear PDE, Nonlinear problems, Linear approximation, Asymptotic behaviour, Imaging, Electrical Resistance Tomography, Inverse problem. author: - Antonio Corbo Esposito$^1$, Luisa Faella$^1$, Gianpaolo Piscitelli$^2$, Ravi Prakash$^3$, Antonello Tamburrino$^{1,4}$ bibliography: - biblioCFPPT.bib title: The $p-$Laplace ''Signature'' for Quasilinear Inverse Problems with Large Boundary Data --- [^1] # Introduction This paper treats a nonlinear imaging problem in Electrical Resistance Tomography (ERT).
The aim is to retrieve the nonlinear electrical conductivity $\sigma$ from boundary data in stationary conditions (steady currents): $$\label{gproblem1} \begin{cases} \mathop{\mathrm{div}}\Big(\sigma (x, |\nabla u(x)|) \nabla u (x)\Big) =0\ \text{in }\Omega\vspace{0.2cm}\\ u(x) =f(x)\qquad\qquad\qquad\quad\ \text{on }\partial\Omega, \end{cases}$$ where $f$ is the applied boundary potential, $u$ is the electric scalar potential and $\Omega\subset\mathbb{R}^n$, $n \geq 2,$ is an open bounded domain with Lipschitz boundary which represents the region occupied by the conducting material. Retrieving the nonlinear electrical conductivity $\sigma$ from boundary data, i.e. from the operator $\Lambda_\sigma: f \mapsto \sigma \partial_n u$, the so-called Dirichlet-to-Neumann (DtN) operator, is the nonlinear variant of the Calderón problem [@calderon1980inverse; @calderon2006inverse]. In the literature, there are very few contributions on the subject of imaging in the presence of nonlinear materials. As quoted in [@lam2020consistency], *'' \... the mathematical analysis for inverse problems governed by nonlinear Maxwell's equations is still in the early stages of development.''* In this framework, we mention the work on $p-$Laplace type nonlinearities [@Salo2012_IP; @brander2015enclosure; @brander2016calderon; @brander2018superconductive; @guo2016inverse; @brander2018monotonicity; @hauer2015p], the work by Sun [@sun2004inverse; @Sun_2005] for weak nonlinearities, the work by Cârstea and Kar [@carstea2020recovery] which treated a nonlinear problem (linear plus a nonlinear term) and the work by Corbo Esposito et al. [@corboesposito2021monotonicity]. The latter treats a general nonlinearity within the framework of the Monotonicity Principle Method. It can be expected that as new methods and algorithms become available, the demand for nondestructive evaluation and imaging of nonlinear materials will eventually and significantly rise.
From the physical standpoint, in steady current operations, the electric field is given by the electrical scalar potential as ${\bf E}(x)=-\nabla u(x)$, and the electrical current density ${\bf J}(x)$ depends on the electric field as follows: $$\label{J} {\bf J}(x)=\sigma(x,\vert {\bf E}(x)\vert) {\bf E}(x)\quad\forall x\in\Omega.$$ Equation [\[J\]](#J){reference-type="eqref" reference="J"} represents a constitutive relationship which is nonlinear, local, isotropic and memoryless. [\[3-Applications\]]{#3-Applications label="3-Applications"} From a general perspective, nonlinear electrical conductivities can be found in many materials like semiconducting and ceramic materials [@bueno2008sno2; @boucher2018interest; @lupo1996field], superconductors [@seidel2015applied; @krabbes2006high] and in biological tissues [@foster1989dielectric; @corovic2013modeling]. Other than steady currents, problem ([\[gproblem1\]](#gproblem1){reference-type="ref" reference="gproblem1"}) is common to other physical settings. Remaining in the framework of electromagnetism, both nonlinear electrostatic phenomena (see [@miga2011non] and references therein, see [@yarali20203d]) and nonlinear magnetostatic[^2] phenomena (see [@1993ferr.book.....B]) can be modelled as in ([\[gproblem1\]](#gproblem1){reference-type="ref" reference="gproblem1"}). Specifically, we address ERT with a nonlinear electrical conductivity when the boundary data is ''large'',  in the sense that the boundary data is $\lambda f(x)$ where the constant $\lambda$ approaches infinity. This contribution is a companion of [@corboesposito2023thep0laplacesignature], where ERT with nonlinear conductivities was treated in the ''small'' boundary data limit ($\lambda \to 0$), and it completes the ''philosophy'' of limiting problems. Indeed, when the electrical conductivity is unknown, no other limiting cases can be conceived, other than the small and large boundary data cases.
During the analysis of this class of problems, we recognize the limiting behaviour of the normalized solution defined as $v^\lambda = u^\lambda / \lambda$, where $u^\lambda$ is the solution of [\[gproblem1\]](#gproblem1){reference-type="eqref" reference="gproblem1"} when the boundary data is $\lambda f$. To be more specific, we assume that the electrical conductivity $\sigma$ has a different behaviour for $E \to +\infty$ in regions $A \subset \subset \Omega$ and $B=\Omega \backslash A$, i.e. $$\begin{aligned} \label{eq:asy1} \sigma_B(x,E) \sim \beta (x)E^{p-2} \quad \text{for a.e.}\ x \in B,\\ \label{eq:asy2} \sigma_A(x,E) \sim \alpha (x)E^{q-2} \quad \text{for a.e.}\ x \in A,\end{aligned}$$ where $\sigma_B$ and $\sigma_A$ are the restrictions of $\sigma$ to $B$ and $A$, respectively, $E = \left| \mathbf{E} \right| = \left| \nabla u \right|$, $\alpha(x)$ and $\beta(x)$ are proper functions and $p$ and $q$ are the asymptotic growth exponents (see Figure [1](#fig_1_omega){reference-type="ref" reference="fig_1_omega"}). ![The electrical conductivity has different asymptotic growth exponents $q$ and $p$ in regions $A$ and $B$, respectively. Region $B$ surrounds $A$ and $\partial \Omega \subset \partial B$.](LimL_1_omega.png){#fig_1_omega width="90%"}
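A toy illustration of the asymptotics in [\[eq:asy1\]](#eq:asy1){reference-type="eqref" reference="eq:asy1"}: the hypothetical conductivity $\sigma(E)=\beta(1+E^2)^{(p-2)/2}$ (chosen for illustration only, not taken from the paper) satisfies $\sigma(E)E^{2-p}\to\beta$ as $E\to+\infty$.

```python
# Hypothetical conductivity with asymptotic p-growth (illustration only):
# sigma(E) = beta * (1 + E**2)**((p - 2) / 2), so sigma(E) ~ beta * E**(p - 2).
p, beta = 3.0, 2.5

def sigma(E):
    return beta * (1.0 + E * E) ** ((p - 2.0) / 2.0)

# The normalized ratio tends (monotonically here) to the weight beta
# as the field strength E grows.
ratios = [sigma(E) / E ** (p - 2.0) for E in (1e1, 1e3, 1e5)]
```

For this model the ratio decreases towards $\beta$, which is exactly the pointwise convergence that assumptions (A4) and (A4') formalize.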
Finally, the case $p=q$ (that is the case when $A=\emptyset$) has been treated in [@corboesposito2021monotonicity; @MPMETHODS]. It is worth noting that understanding the behaviour of the solution in region $B$ is of paramount importance because the imaging system collects data, i.e. it measures a discrete approximation of $\Lambda_\sigma$, on $\partial \Omega$, which is the outer boundary of $B$. In other terms, the behaviour in $B$ determines the measured data. These results make it possible to approximate a nonlinear problem with a weighted $p-$Laplace one in region $B$, together with a proper boundary condition on $\partial A$, corresponding to either a perfect electric conductor or a perfect electric insulator in $A$ (see Section [3.3](#subsec_hyp){reference-type="ref" reference="subsec_hyp"} for the underlying assumptions). They are quite interesting from both a mathematical and an engineering point of view because they reduce the original quasilinear problem in $\Omega$ to the canonical problem of the weighted $p-$Laplace equation only in region $B$. From the perspective of inverse problems, these results allow one to apply to [\[gproblem1\]](#gproblem1){reference-type="eqref" reference="gproblem1"} the results developed for the nonlinear $p-$Laplace variant of the Calderón problem, initially posed by Salo and Zhong [@Salo2012_IP], and studied in [@brander2015enclosure; @brander2016calderon; @brander2018superconductive; @guo2016inverse; @brander2018monotonicity; @hauer2015p]. Moreover, the case $p=2$ is of paramount importance because the problem in region $B$ reduces to a linear one and, therefore, we have a powerful bridge to apply all the imaging methods and algorithms developed for linear materials to nonlinear ones. ![Description of two possible applications. Left: inverse obstacle problem where the interface ($\partial A$) between two phases is unknown. $A$ and $B$ are the regions occupied by the inner material and the outer material, respectively.
Right: nondestructive testing where regions A and B are known, while the position and shape of region $C$ (a crack) is unknown. Materials in regions $A$ and $B$ are known, as well.](LimL_2_intro.png){#fig_2_intro width="\\textwidth"} [\[5a-IT\]]{#5a-IT label="5a-IT"} For instance, a classical class of methods for solving inverse problems in the presence of linear materials is that of iterative methods, which is based on the (iterative) minimization of a proper cost function [@tikhonov1977solutions; @tikhonov1998nonlinear]. An overview of iterative methods can be found in several specialized textbooks [@bertero1998introduction; @tarantola2005inverse; @vogel2002computational; @engl1996regularization]. Other than the popular Gauss-Newton method and its variants (see [@qi2000iteratively] for a review), we mention some relevant iterative approaches applied to inverse problems such as the Quadratic Born approximation [@pierri1997local], Bayesian approaches [@premel2002eddy], the Total Variation regularization [@RudinOsher1992nonlinear; @pirani2008multi], the Levenberg-Marquardt method for nonlinear inverse problems [@Hanke_1997], the Level Set method [@dorn2000shape; @harabetian1998regularization], the Topological Derivative method [@Jackowska-Strumillo2002231; @ammari2012stability; @fernandez2019noniterative] and the Communication Theory approach [@tamburrino2000communications]. Inverse problems in the presence of hollow domains, as is the case for $B$, have been deeply investigated. Recent results on this subject include approaches based on a parameterization of the obstacle [@colton1998inverse], on the level sets method [@ameur2004level], on the concept of shape derivative combined with that of level sets [@allaire2004structural], on the topological derivative method [@bonnet2008inverse], on the quasi-reversibility method [@bourgeois2010aquasi], on a shape optimization method [@caubet2013shape], and on an integral equation method [@kress2005nonlinear; @cakoni2012integral].
Noniterative methods are an excellent alternative to iterative ones, because they call for the computation of a proper function of the space, the indicator function, giving the shape of the interface between two different materials. The computation of the indicator function is much less expensive than the computation required by an iterative method, thus making noniterative methods suitable for real-time operations. Only a handful of noniterative methods are currently available. These include the Linear Sampling Method (LSM) by Colton and Kirsch [@Colton_1996], which evolved into the Factorization Method (FM) proposed by Kirsch [@Kirsch_1998]. Ikehata proposed the Enclosure Method (EM) [@ikehata1999draw; @Ikehata_2000] and Devaney applied MUSIC (MUltiple SIgnal Classification), a well-known algorithm in signal processing, as an imaging method [@Devaney2000]. Finally, Tamburrino and Rubinacci proposed the Monotonicity Principle Method (MPM) [@Tamburrino_2002]. Assumptions made in this work are presented in Section [3.3](#subsec_hyp){reference-type="ref" reference="subsec_hyp"}. They refer to a quite standard framework, apart from (A4) and (A4') which correspond to [\[eq:asy1\]](#eq:asy1){reference-type="eqref" reference="eq:asy1"} and [\[eq:asy2\]](#eq:asy2){reference-type="eqref" reference="eq:asy2"}. These two latter specific assumptions prescribe the pointwise convergence of the constitutive relationship for large electric fields $E$. Moreover, they are sharp, as shown by means of the counterexamples of Section [6](#counter_sec){reference-type="ref" reference="counter_sec"}. 
[\[8-arch\]]{#8-arch label="8-arch"} The paper is organized as follows: in Section [2](#underlying){reference-type="ref" reference="underlying"} we present the ideas underpinning the work; in Section [3](#fram_sec){reference-type="ref" reference="fram_sec"} we set the notations and the problem, together with the required assumptions; in Section [4](#mean_sec){reference-type="ref" reference="mean_sec"} we study the fundamental inequality for large Dirichlet data; in Section [5](#large_sec){reference-type="ref" reference="large_sec"} we discuss the limiting case for large Dirichlet data; in Section [6](#counter_sec){reference-type="ref" reference="counter_sec"} we provide the counterexamples proving that the specific assumptions are sharp and, finally, in Section [7](#Con_sec){reference-type="ref" reference="Con_sec"} we provide some conclusions. # Underlying ideas and expected results {#underlying} In this section we present the main ideas underpinning this work. The key is the ''educated guess'' that when the boundary data is ''large'', we expect the electric field $\mathbf{E}=-\nabla{u}$ to be large a.e. in $\Omega$ and, therefore, its behaviour is expected to be governed by the asymptotic behaviour of the constitutive relationship $\sigma=\sigma \left(x,E\right)$ in [\[gproblem1\]](#gproblem1){reference-type="eqref" reference="gproblem1"}. Let $A\subset\subset\Omega$ and $B:=\Omega\setminus\overline A$; we assume that there exist two constants $q$ and $p$, and two functions $\beta$ and $\alpha$, which capture the behaviour of $\sigma$, as $E \to +\infty$, in $B$ and $A$, according to [\[eq:asy1\]](#eq:asy1){reference-type="eqref" reference="eq:asy1"} and [\[eq:asy2\]](#eq:asy2){reference-type="eqref" reference="eq:asy2"}, respectively. Analysis of nonlinear problems is fascinating because of the wide variety of different cases. Some representative cases are shown in Figure [7](#fig_3_sigma){reference-type="ref" reference="fig_3_sigma"}.
![Relationship between the electrical conductivities for regions $A$ and $B$, when considering their asymptotic approximations: $\sigma_B(\cdot,E) \sim \beta(\cdot)E^{p-2}$ (solid line) and $\sigma_A(\cdot,E) \sim \alpha(\cdot) E^{q-2}$ (dashed line). Similar configurations are obtained when the order relation between $p$ and $q$ is reversed. Only the behaviour for large $E$ is shown.](LimL_3_a.png "fig:"){#fig_3_sigma width="32.5%"} ![](LimL_3_b.png "fig:"){width="32.5%"} ![](LimL_3_c.png "fig:"){width="32.5%"} ![](LimL_3_d.png "fig:"){width="32.5%"} ![](LimL_3_e.png "fig:"){width="32.5%"} When, for instance, $q<p$, it can reasonably be expected that either (i) region $A$ behaves as a perfect electric insulator (PEI) or (ii) region $B$ behaves as a perfect electric conductor (PEC), because $\sigma_B$ would be dominant compared to $\sigma_A$. When $\partial \Omega$ is contained in $\partial B$, that is $A\subset\subset\Omega$, the ambiguity between (i) and (ii) can be resolved by looking at the boundary data. Specifically, if the boundary data is nonconstant, then region $B$ cannot be assimilated to a PEC, because the potential $u$ would then be constant and, therefore, incompatible with the boundary data. Region $A$ can therefore reliably be assimilated to a PEI. Moreover, the limiting problem where the conductor in region $A$ is replaced by a PEI can reliably be modelled by a weighted $p-$Laplace problem in region $B$, with a boundary condition on $\partial A$ given by a vanishing normal component of $\mathbf{J}$. In other words, $u \sim u_p$ in $B$, where $u_p$ is the solution of the weighted $p-$Laplace problem arising from the electrical conductivity $\beta(x) E^{p-2}$ in $B$. The latter observation is also inspiring because it properly defines the concept of ''large'' boundary data and the limiting problem. Specifically, it is well known that the operator mapping the boundary data $f$ into the solution of a weighted $p-$Laplace problem is a homogeneous operator of degree 1, i.e.
the solution corresponding to $\lambda f(x)$ is equal to $\lambda u_p(x)$, where $u_p$ is the solution corresponding to the boundary data $f$. Thus, the term ''problem for large boundary data'' means [\[gproblem1\]](#gproblem1){reference-type="eqref" reference="gproblem1"} where the boundary data is $\lambda f$ and $\lambda \to +\infty$. Moreover, this suggests the need to study the convergence properties of the normalized solution $v^\lambda$, defined as the ratio $u^\lambda / \lambda$, where $u^\lambda$ is the solution of [\[gproblem1\]](#gproblem1){reference-type="eqref" reference="gproblem1"} corresponding to the Dirichlet data $\lambda f(x)$. Indeed, if $u^\lambda$ can be approximated by the solution of the weighted $p-$Laplace problem, then the normalized solution $v^\lambda$ converges in $B$, i.e. it is expected to become independent of $\lambda$ as $\lambda$ approaches $+\infty$. We denote this limit by $v^\infty$ and expect it to be equal to $u_p$, i.e. the solution of the weighted $p-$Laplace problem with boundary data $f$. From the formal point of view, when $q<p$, we will indeed prove that the normalized solution $v^\lambda$ weakly converges to $v_\Omega$, which is termed $v_B$ in $B$ and $v_A$ in $A$. The function $v_B\in W^{1,p}(B)$ is the solution of a weighted $p-$Laplace problem in region $B$: $$\label{pproblem_B} \begin{cases} \mathop{\mathrm{div}}\Big(\beta (x) |\nabla {v_B}(x)|^{p-2}\nabla v_B (x)\Big) =0 & \text{in }B\vspace{0.2cm}\\ \beta (x) |\nabla {v_B}(x)|^{p-2}\partial_\nu {v_B}(x) =0 & \text{on }\partial A\vspace{0.2cm}\\ {v_B}(x) =f(x) & \text{on }\partial \Omega;\vspace{0.2cm}\\ \end{cases}$$ the function $v_A\in W^{1,q}(A)$ is the solution of a weighted $q-$Laplace problem in region $A$: $$\label{qproblem_A} \begin{cases} \mathop{\mathrm{div}}\Big(\alpha (x) |\nabla {v_A}(x)|^{q-2}\nabla {v_A} (x)\Big) =0 &\text{in }A\vspace{0.2cm}\\ {v_A}(x) = {v_B}(x) & \text{on }\partial A.
\end{cases}$$ From the physical standpoint, problem [\[pproblem_B\]](#pproblem_B){reference-type="eqref" reference="pproblem_B"} corresponds to stationary currents where the electrical conductivity in $B$ is $\sigma(x,E)=\beta(x)E^{p-2}$, and region $A$ is replaced by a perfectly insulating material (PEI). Problem [\[qproblem_A\]](#qproblem_A){reference-type="eqref" reference="qproblem_A"} corresponds to stationary currents in region $A$, with boundary data given by the solution of [\[pproblem_B\]](#pproblem_B){reference-type="eqref" reference="pproblem_B"}, and electrical conductivity equal to $\sigma(x,E)=\alpha(x)E^{q-2}$. It is worth noting that the solutions of problems [\[pproblem_B\]](#pproblem_B){reference-type="eqref" reference="pproblem_B"} and [\[qproblem_A\]](#qproblem_A){reference-type="eqref" reference="qproblem_A"} satisfy the minimum problems [\[Hinfty\]](#Hinfty){reference-type="eqref" reference="Hinfty"} and [\[Linfty\]](#Linfty){reference-type="eqref" reference="Linfty"}, respectively, described in Section [5](#large_sec){reference-type="ref" reference="large_sec"}. Similarly, when $p<q$, it turns out that $v^\lambda$ converges to $w\in W^{1,p}(\Omega)$ for $\lambda\to+\infty$, where $w$ is constant in each connected component of $A$ and is the solution of: $$\label{pproblem_Bgrad} \begin{cases} \mathop{\mathrm{div}}\Big(\beta (x) |\nabla w(x)|^{p-2}\nabla w (x)\Big) =0\ &\text{in }B\vspace{0.2cm}\\ |\nabla w|=0 &\text{a.e. in } A\vspace{0.2cm}\\ \int_{\partial A}\sigma(x,|\nabla w(x)|)\partial_\nu w(x)\,dS=0\\ w(x) =f(x) &\text{on }\partial \Omega \end{cases}$$ In this case, from the physical standpoint, region $A$ can be replaced by a perfect electric conductor (PEC).
It is worth noting that the solution of problem [\[pproblem_Bgrad\]](#pproblem_Bgrad){reference-type="eqref" reference="pproblem_Bgrad"} satisfies the minimum problem [\[N\]](#N){reference-type="eqref" reference="N"}, described in Section [5](#large_sec){reference-type="ref" reference="large_sec"}. Finally, we highlight that in both cases ($q<p$ or $q>p$), the limiting problems in regions $B$ and $A$ can be solved in cascade (see Figure [8](#fig_4_decoupling){reference-type="ref" reference="fig_4_decoupling"}). Specifically, the problem for region $B$ can be solved first, then the problem for region $A$ can be solved from the knowledge of the trace on $\partial A$ of the solution evaluated in $B$. ![The problem in $B$ can be solved independently of the problem in $A$. The problem in $A$ can be solved through the knowledge of the trace on $\partial A$ of the solution in $B$.](LimL_4_decoupling.png){#fig_4_decoupling width="45%"} # Framework of the Problem {#fram_sec} ## Notations Throughout this paper, $\Omega$ denotes the region occupied by the conducting materials. We assume that $\Omega\subset\mathbb{R}^n$, $n\geq 2$, is a bounded domain (i.e. an open and connected set) with Lipschitz boundary and $A\subset\subset\Omega$ is an open bounded set with Lipschitz boundary and a finite number of connected components, such that $B:=\Omega\setminus\overline A$ is still a domain. Hereafter we consider $1<p,q<+\infty$, $p\neq q$. Region $B$ is occupied by a conducting material with an asymptotic $p-$growth[^3] (for large electric fields) whereas region $A$ is occupied by the material with an asymptotic $q-$growth (for large electric fields), see Figure [1](#fig_1_omega){reference-type="ref" reference="fig_1_omega"}. We denote by $dS$ the $(n-1)-$dimensional Hausdorff measure. Moreover, we set $$L^\infty_+(\Omega):=\{\theta\in L^\infty(\Omega)\ |\ \theta\geq c_0\ \text{a.e.
in}\ \Omega, \ \text{for a positive constant}\ c_0\}.$$ Furthermore, the Sobolev space $W^{1,p}_0(\Omega)$ is the closure of $C_0^1(\Omega)$ with respect to the $W^{1,p}-$norm. The applied boundary voltage $f$ belongs to the abstract trace space $B_{p}^{1-\frac 1p,p}(\partial\Omega)$, which is a Besov space (refer to [@JERISON1995161; @leoni17]), equipped with the following norm: $$||u||_{B^{1-\frac 1p,p}(\partial\Omega)}=||u||_{L^p(\partial\Omega)}+|u|_{B^{1-\frac 1p,p}(\partial\Omega)}<+\infty,$$ where for the precise definition of $|\cdot|_{B^{1-\frac 1p,p}(\partial\Omega)}$ we refer to [@leoni17 Def.18.32]. For the sake of brevity, we denote this space by $X^p(\partial \Omega)$ and we point out that its elements can be identified as the functions in $W^{1,p}(\Omega)$, modulo the equivalence relation $f\in [g]_{X^p(\partial \Omega)}$ if and only if $f-g\in W^{1,p}_0(\Omega)$. Finally, we denote by $X^p_\diamond (\partial \Omega)$ the set of elements in $X^p(\partial \Omega)$ with zero average on $\partial\Omega$ with respect to the measure $dS$. ## The Scalar Potential and Dirichlet Energy In terms of the electric scalar potential, that is ${\bf E}(x)=-\nabla u(x)$, the nonlinear Ohm's law ${\bf J}(x)=\sigma(x,E(x)){\bf E}(x)$ is $$% \label{gOhm} {\bf J} (x)=- \sigma (x, |\nabla u(x)|)\nabla u(x),$$ where $\sigma$ is the electrical conductivity, ${\bf E}$ is the electric field, and ${\bf J}$ is the electric current density. The electric scalar potential $u$ solves the steady current problem: $$\label{gproblem} \begin{cases} \mathop{\mathrm{div}}\Big(\sigma (x, |\nabla u(x)|) \nabla u (x)\Big) =0\ \text{in }\Omega\vspace{0.2cm}\\ u(x) =f(x)\qquad\qquad\qquad\quad\ \text{on }\partial\Omega, \end{cases}$$ where $f\in X_\diamond^p(\partial \Omega)$.
Problem [\[gproblem\]](#gproblem){reference-type="eqref" reference="gproblem"} is meant in the weak sense, that is $$\int_{\Omega }\sigma \left( x,| \nabla u(x) |\right) \nabla u (x) \cdot\nabla \varphi (x)\ \text{d}x=0\quad\forall\varphi\in C_c^\infty(\Omega).$$ We observe that the solution $u$ restricted to $B$ belongs to $W^{1,p}(B)$, whereas $u$ restricted to $A$ belongs to $W^{1,q}(A)$. Therefore, the solution $u$ as a whole is an element of the larger functional space $W^{1,p}(\Omega)\cup W^{1,q}(\Omega)$. Furthermore, (i) if $p\leq q$ then $W^{1,p}(\Omega)\cup W^{1,q}(\Omega)=W^{1,p}(\Omega)$, and (ii) if $p\geq q$ then $W^{1,p}(\Omega)\cup W^{1,q}(\Omega)=W^{1,q}(\Omega)$. The solution $u$ satisfies the boundary condition in the sense that $u-f\in W_0^{1,p}(\Omega)\cup W_0^{1,q}(\Omega)$ and we write $u|_{\partial\Omega}=f$. Moreover, the solution $u$ is variationally characterized as $$\label{gminimum} \arg\min\left\{ \mathbb{F}_\sigma\left( u\right)\ :\ u\in W^{1,p}(\Omega)\cup W^{1,q}(\Omega), \ u|_{\partial\Omega}=f\right\}.$$ In ([\[gminimum\]](#gminimum){reference-type="ref" reference="gminimum"}), the functional $\mathbb{F}_\sigma\left( u\right)$ is the Dirichlet Energy $$\mathbb{F}_\sigma \left( u \right) = \int_{B} Q_B (x,|\nabla u(x)|)\ \text{d}x+ \int_A Q_A (x,|\nabla u(x)|)\ \text{d}x,$$ where $Q_B$ and $Q_A$ are the Dirichlet Energy densities in $B$ and in $A$, respectively: $$\begin{aligned} & Q_{B} \left( x,E\right) :=\int_{0}^{E} \sigma_B\left( x,\xi \right)\xi\ \text{d}\xi\quad \text{for a.e.}\ x\in B\ \text{and}\ \forall E\geq0,\\ & Q_{A}\left( x,E\right) :=\int_{0}^{E} \sigma_A\left( x,\xi \right)\xi\ \text{d}\xi\quad \text{for a.e.}\ x\in A\ \text{and}\ \forall E\geq 0,\end{aligned}$$ and $\sigma_B$ and $\sigma_A$ are the restrictions of the electrical conductivity $\sigma$ to $B$ and $A$, respectively.
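As a sanity check of the definition above, one can recover the density $Q$ from an assumed conductivity by numerical quadrature. For the power law $\sigma(\xi)=\sigma_0\xi^{p-2}$ (an illustrative assumption, not taken from this paper, with sample values of $\sigma_0$ and $p$) the density is explicitly $Q(E)=\sigma_0E^p/p$:

```python
from scipy.integrate import quad

sigma0, p = 2.0, 3.0  # illustrative values

def sigma(xi):
    # assumed power-law conductivity
    return sigma0 * xi ** (p - 2)

def Q(E):
    # Dirichlet Energy density: Q(E) = int_0^E sigma(xi) * xi dxi
    val, _ = quad(lambda xi: sigma(xi) * xi, 0.0, E)
    return val

# closed form for the power law: Q(E) = sigma0 * E**p / p
E = 1.5
assert abs(Q(E) - sigma0 * E ** p / p) < 1e-10
```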
## Requirements on the Dirichlet Energy densities {#subsec_hyp} In this Section, we provide the assumptions on the Dirichlet Energy densities $Q_B$ and $Q_A$ required to guarantee the well-posedness of the problem and to prove the main convergence results of this paper. For each individual result, we will make use of a suitable subset of assumptions, among those listed in the following. Firstly, we recall the definition of a Carathéodory function. **Definition 1**. *$Q:\Omega\times[0,+\infty)\to\mathbb{R}$ is a Carathéodory function iff:* - *$x\in\overline\Omega\mapsto Q(x,E)$ is measurable for every $E\in[0,+\infty)$,* - *$E\in [0,+\infty)\mapsto Q(x, E)$ is continuous for almost every $x\in\Omega$.* The existence and the uniqueness of the solution of [\[gminimum\]](#gminimum){reference-type="eqref" reference="gminimum"} are guaranteed by the following assumptions on $Q_B$ and $Q_A$. - $Q_B$ and $Q_A$ are Carathéodory functions; - $E\mapsto Q_B(x,E)$ and $E\mapsto Q_A(x,E)$ are positive, $C^1$, strictly convex functions such that $Q_B(x,0)=0$ for a.e. $x\in B$, and $Q_A(x,0)=0$ for a.e. $x\in A$. Since $Q_B$ and $Q_A$ are positive, convex and vanish at $0$, they are both increasing in $E$. - There exist positive constants $\underline Q\leq \overline Q$ and $E_0$, such that: $$\begin{split} (i)\ \underline Q \left[\left(\frac{E}{E_0}\right)^p- 1\right]\leq Q_B(x, E)\leq \overline Q \left[\left(\frac{E}{E_0}\right)^p+ 1\right]\ \ \text{for a.e.} \ x\in B \ \text{and}\ \forall\ E\ge 0,\\ (ii)\ \underline Q \left[\left(\frac {E}{E_0}\right)^q- 1\right]\leq Q_A(x, E)\leq \overline Q \left[\left(\frac{E}{E_0}\right)^q+ 1\right]\ \ \text{for a.e.} \ x\in A \ \text{and}\ \forall\ E\ge 0. \end{split}$$ Assumptions (A1), (A2) and (A3) ensure the existence of the solution of problem [\[gminimum\]](#gminimum){reference-type="eqref" reference="gminimum"}, see Theorem [Theorem 2](#existence_thm){reference-type="ref" reference="existence_thm"} below.
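The growth sandwich in (A3) is easy to verify numerically for a concrete density. The sample density $Q(E)=E^p+E$ and the constants below are assumptions chosen for illustration only (with $E_0=1$, $\underline Q=1$, $\overline Q=2$):

```python
import numpy as np

p, E0 = 3.0, 1.0
Q_lo, Q_hi = 1.0, 2.0  # candidate constants for the sandwich in (A3)

def Q(E):
    # an inhomogeneous sample density with asymptotic p-growth
    # (assumed for illustration; not the paper's Q_B)
    return E ** p + E

E = np.linspace(0.0, 50.0, 5001)
lower = Q_lo * ((E / E0) ** p - 1.0)
upper = Q_hi * ((E / E0) ** p + 1.0)
# the sandwich of (A3) holds on the whole grid
assert np.all(lower <= Q(E)) and np.all(Q(E) <= upper)
```

Note that the lower bound is allowed to be negative for small $E$, which is what makes (A3) compatible with $Q(x,0)=0$.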
Assumption (A3) is well-known in the literature (see e.g. assumptions (H4) in [@corboesposito2021monotonicity] and (A2) in [@lam2020consistency]). In order to obtain the convergence results for large Dirichlet boundary data, we introduce the concept of asymptotic $p-$growth and asymptotic $q-$growth, as in the following assumption. - There exists a function $\beta$ such that: $$\begin{split} \lim_{E\to +\infty} \frac{Q_B (x,E)}{E^p}=\beta(x)\quad \text{for a.e.}\ x\in B. \end{split}$$ We observe that $\beta\in L^\infty_+(B)$, by (A3). This is a key assumption for this contribution: it guarantees the convergence of the solution in region $B$. In Section [6](#counter_sec){reference-type="ref" reference="counter_sec"}, we provide a counterexample to show that assumption (A4) is sharp. - There exists a function $\alpha$ such that: $$\begin{split} \lim_{E\to +\infty} \frac{Q_A (x,E)}{E^q}=\alpha(x)\quad \text{for a.e.}\ x\in A. \end{split}$$ We observe that $\alpha\in L^\infty_+(A)$ by (A3). This assumption is needed to guarantee the convergence of the solution in region $A$ for $p>q$. Assumption (A4') is sharp as well. ## Connection among $\sigma$, ${\bf J}$ and $Q$ This paper is focused on the properties of the Dirichlet Energy density $Q$, while, in physics and engineering, the electrical conductivity $\sigma$ is of greater interest. From this perspective, the assumptions above accommodate a wide class of electrical conductivities (see Figure [14](#fig_5_assumptions){reference-type="ref" reference="fig_5_assumptions"}). In other words, these assumptions are not restrictive in practical applications. ![Example of a constitutive relationship when $p>2$, $p=2$ and $p<2$: (a) in terms of electrical conductivity and (b) in terms of current density. Dashed lines correspond to the upper and lower constraints to either $\sigma$ or ${J}_\sigma$.
$\underline \sigma$ $(\overline \sigma)$ is related to $\underline Q$ $(\overline Q)$, $p$ and $E_0$.](LimL_5_1_sigmaEpge2.png "fig:"){#fig_5_assumptions width="42%"} ![Example of a constitutive relationship when $p>2$, $p=2$ and $p<2$: (a) in terms of electrical conductivity and (b) in terms of current density. Dashed lines correspond to the upper and lower constraints to either $\sigma$ or ${J}_\sigma$. $\underline \sigma$ $(\overline \sigma)$ is related to $\underline Q$ $(\overline Q)$, $p$ and $E_0$.](LimL_5_2_JEpge2.png "fig:"){#fig_5_assumptions width="42%"} ![Example of a constitutive relationship when $p>2$, $p=2$ and $p<2$: (a) in terms of electrical conductivity and (b) in terms of current density. Dashed lines correspond to the upper and lower constraints to either $\sigma$ or ${J}_\sigma$. $\underline \sigma$ $(\overline \sigma)$ is related to $\underline Q$ $(\overline Q)$, $p$ and $E_0$.](LimL_5_3_sigmaEp=2.png "fig:"){#fig_5_assumptions width="42%"} ![Example of a constitutive relationship when $p>2$, $p=2$ and $p<2$: (a) in terms of electrical conductivity and (b) in terms of current density. Dashed lines correspond to the upper and lower constraints to either $\sigma$ or ${J}_\sigma$. $\underline \sigma$ $(\overline \sigma)$ is related to $\underline Q$ $(\overline Q)$, $p$ and $E_0$.](LimL_5_4_JEp=2.png "fig:"){#fig_5_assumptions width="42%"} ![Example of a constitutive relationship when $p>2$, $p=2$ and $p<2$: (a) in terms of electrical conductivity and (b) in terms of current density. Dashed lines correspond to the upper and lower constraints to either $\sigma$ or ${J}_\sigma$. $\underline \sigma$ $(\overline \sigma)$ is related to $\underline Q$ $(\overline Q)$, $p$ and $E_0$.](LimL_5_5_sigmaEple2.png "fig:"){#fig_5_assumptions width="42%"} ![Example of a constitutive relationship when $p>2$, $p=2$ and $p<2$: (a) in terms of electrical conductivity and (b) in terms of current density. 
Dashed lines correspond to the upper and lower constraints to either $\sigma$ or ${J}_\sigma$. $\underline \sigma$ $(\overline \sigma)$ is related to $\underline Q$ $(\overline Q)$, $p$ and $E_0$.](LimL_5_6_JEple2.png "fig:"){#fig_5_assumptions width="42%"} There is a close connection between $\sigma$, $J_\sigma$ and $Q_\sigma$. Indeed, we observe that $$\begin{split} Q_B \left( x,E \right) =\int_{0}^{E}{J_B} ({ x, \xi}) \ \text{d}{ \xi}\quad\text{for a.e.} \ x\in B\ \text{and}\ \forall E> 0,\\ Q_A \left( x,E \right) =\int_{0}^{E}{J_A} ({ x, \xi}) \ \text{d}{ \xi}\quad\text{for a.e.} \ x\in A\ \text{and}\ \forall E> 0, \end{split}$$ where $J_B$ and $J_A$ are the magnitudes of the current density in regions $B$ and $A$, respectively: $$\label{connsJQ} \begin{split} J_B (x, E)&=\partial_E Q_B(x,E)=\sigma_B(x, E)E\quad \text{for a.e.} \ x \in B\ \text{and}\ \forall E>0,\\ J_A (x, E)&=\partial_E Q_A(x,E)=\sigma_A(x, E)E\quad \text{for a.e.} \ x \in A\ \text{and}\ \forall E>0. \end{split}$$ The electrical conductivity $\sigma(x,E)$ is the secant line to the graph of the function $E\mapsto J_\sigma(x,E)$, and $Q_\sigma (x, E)$ is the area of the sub-graph of $J_\sigma(x, \cdot)$ up to $E$. For a geometric interpretation of the connections between $\sigma$, $J_\sigma$ and $Q_\sigma$, see Figure [16](#fig_6_JE){reference-type="ref" reference="fig_6_JE"}.
![For any given spatial point in the region $\Omega$, (a) the electrical conductivity $\sigma(\cdot,E)$ is the secant line to the graph of the function $J_\sigma(\cdot,E)$; (b) $Q_\sigma (\cdot, E)$ is the area of the sub-graph of $J_\sigma(\cdot, E)$.](LimL_6_JE.png "fig:"){#fig_6_JE width="45%"} ![For any given spatial point in the region $\Omega$, (a) the electrical conductivity $\sigma(\cdot,E)$ is the secant line to the graph of the function $J_\sigma(\cdot,E)$; (b) $Q_\sigma (\cdot, E)$ is the area of the sub-graph of $J_\sigma(\cdot, E)$.](LimL_6_JEarea.png "fig:"){#fig_6_JE width="45%"} ## Existence and uniqueness of the solutions The proof of the existence and uniqueness of the solution for problem [\[gminimum\]](#gminimum){reference-type="eqref" reference="gminimum"} relies on standard methods of the Calculus of Variations when the Dirichlet Energy density presents the same growth bounds (described in (A3)) at every point of the domain $\Omega$. The case treated in this work is nonstandard because the Dirichlet Energy density presents different growth bounds in $B$ and in $A$; hence, we provide a proof in the following. **Theorem 2**. *Let $1<p, q<+\infty$, $p\neq q$, $f\in X_\diamond^p(\partial \Omega)$. If (A1), (A2), (A3) hold, then there exists a unique solution of problem [\[gminimum\]](#gminimum){reference-type="eqref" reference="gminimum"}.* *Proof.* Let us consider the case $p>q$; if $E\ge E_0$ then $E^p\ge E_0^{p-q} E^q$. By assumption (A3), for any $E\ge E_0$, we have $$\begin{split} & Q_B(x, E)\ge \underline{Q}\left[ \left(\frac{E}{E_0}\right)^p {-1}\right]\ge \underline{Q}\left[\left(\frac{E}{E_0}\right)^q {-1}\right]\quad\text{a.e. in } B,\\ & Q_A(x, E)\ge \underline{Q}\left[ \left(\frac{E}{E_0}\right)^q {-1}\right]\qquad\qquad\qquad\qquad\quad\text{a.e. in } A. \end{split}$$ Therefore, setting $Q_\sigma=Q_B$ in $B$ and $Q_\sigma=Q_A$ in $A$, we have $$Q_\sigma(x, E)\ge \underline{Q}\left[\left(\frac{E}{E_0}\right)^q {-1}\right]\quad \text{a.e. in } \Omega,$$ for any $E\ge 0$ (the inequality was derived for $E\ge E_0$ and is trivial for $E\le E_0$, where the right-hand side is nonpositive). Similarly, when $p<q$, we have $$Q_\sigma(x,E)\ge\underline{Q}\left[ \left(\frac{E}{E_0}\right)^p {-1}\right]\quad \text{a.e. in } \Omega,$$ for any $E\ge 0$. Hence, in both cases, the function $Q_\sigma(x,\cdot)$ is coercive for a.e. $x\in\Omega$. Therefore, since $Q_\sigma$ is also strictly convex by assumption, standard methods of the Calculus of Variations [@dacorogna2007direct Th. 3.30] ensure that the solution exists and is unique. ◻ Condition $f\in X_\diamond^p(\partial \Omega)$ is a necessary assumption because it guarantees the existence of a function $u\in f+W_{0}^{1,p}(\Omega)$ such that $\mathbb F_\sigma (u)<+\infty$ (see [@dacorogna2007direct Th. 3.30] for details). Moreover, optimization problems on domains with holes have received a great deal of interest in recent years, see e.g. [@della2020optimal; @gavitone2021isoperimetric; @paoli2020stability; @paoli2020sharp] and references therein. ## Normalized solution Throughout this paper we study the behaviour of the solution of problem [\[gminimum\]](#gminimum){reference-type="eqref" reference="gminimum"} for large Dirichlet boundary data, i.e.
the behaviour of $u^\lambda$ defined as the solution of: $$\label{Fspezzata} \min_{\substack{u\in W^{1,p}(\Omega)\cup W^{1,q}(\Omega)\\ u=\lambda f\ \text{on}\ \partial \Omega}}\mathbb F_\sigma(u),$$ for $\lambda\to +\infty$. To this end, as discussed in Section [2](#underlying){reference-type="ref" reference="underlying"}, it is convenient to introduce the normalized solution $v^\lambda$ defined as: $$v^\lambda=\frac{u^\lambda}{\lambda}.$$ For any prescribed $f\in X^p_\diamond(\partial \Omega)$ and $\lambda>0$, $v^\lambda$ is the solution of the following variational problem: $$\label{G_norm} \min_{\substack{v\in W^{1,p}(\Omega)\cup W^{1,q}(\Omega)\\ v=f\ \text{on}\ \partial \Omega}}\mathbb G^\lambda(v),\quad \mathbb G^\lambda(v)=\frac 1 {\lambda^p}\left(\int_{B} Q_B(x,\lambda|\nabla v(x)|)dx+\int_{A} Q_A(x,\lambda|\nabla v(x)|)dx\right).$$ The multiplicative factor $1/\lambda^p$ is introduced in order to guarantee that the minimum values of the functional $\mathbb G^\lambda$ remain bounded for large $\lambda$. The normalized solution makes it possible to "transfer" the parameter $\lambda$ in [\[Fspezzata\]](#Fspezzata){reference-type="eqref" reference="Fspezzata"} from the boundary data to the functional $\mathbb G^\lambda$. Specifically, we will prove that $v^\lambda$ converges, under very mild hypotheses, as $\lambda\to+\infty$, to the solution of a problem where the material in region $A$ is replaced by either a PEC or a PEI. In the PEC case ($p<q$) the solution $w$ is constant in region $A$. In the PEI case ($p>q$) the limit of $v^\lambda$ in $B$ is termed $v_B$, whereas the limit of $v^\lambda$ in $A$ is termed $v_A$. Moreover, (i) $v_B$ and $v_A$ arise from a weighted $p-$Laplace and a weighted $q-$Laplace problem, respectively, and (ii) $v_B$ provides the boundary data on $\partial A$ for evaluating $v_A$.
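The role of the factor $1/\lambda^p$ can be seen in a discrete toy computation: for a purely factorized density $Q_B(x,E)=\theta(x)E^p$, the normalized functional $\mathbb G^\lambda$ does not depend on $\lambda$ at all. The sample weights and gradient values below are random numbers chosen for illustration only:

```python
import numpy as np

p = 3.0
rng = np.random.default_rng(0)
theta = rng.uniform(1.0, 2.0, size=100)   # sample weight theta(x) on a grid
grad_v = rng.uniform(0.0, 1.0, size=100)  # sample |grad v(x)| on the same grid

def G(lam):
    # discrete analogue of (1/lam^p) * int theta(x) (lam |grad v(x)|)^p dx
    return float(np.mean(theta * (lam * grad_v) ** p) / lam ** p)

# for a factorized p-growth density the normalization removes lam entirely
vals = [G(lam) for lam in (1.0, 10.0, 1.0e3)]
assert np.allclose(vals, vals[0])
```

For a general density satisfying only (A4), this cancellation holds just asymptotically, which is what the fundamental inequality of the next Section quantifies.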
# The fundamental inequality for large Dirichlet data {#mean_sec} In this Section we provide the main tool to achieve the convergence results in the limiting cases for "large" Dirichlet boundary data. Specifically, we will show that the asymptotic behaviour of the Dirichlet Energy corresponds to a $p-$Laplace modelled equation [@Salo2012_IP; @guo2016inverse] in the domain $B$. In the following, we study the asymptotic behaviour of the Dirichlet Energy in the outer region $B$. We first analyse the special case when the Dirichlet Energy density $Q_B$ is modelled on the weighted $p-$Laplace case, that is $Q_B(x,E)=\theta(x)E^p$ on $B\times[0,+\infty)$; we then give the result in the more general case. **Lemma 3**. *Let $1<p<+\infty$, $F\subset\mathbb{R}^n$ be a bounded domain with Lipschitz boundary, $\theta\in L^\infty_+(F)$ and $\{ h_n\}_{n\in\mathbb{N}}$ be a sequence weakly convergent to $h$ in $W^{1,p}(F)$. Then we have $$\label{only_in_factor_E3} \int_{F} \theta(x)|\nabla h(x)|^p dx\le\liminf_{n\to+\infty}\int_{F} \theta(x)|\nabla h_n(x)|^p dx.$$* *Proof.* Let us set $L:=\liminf_{n\to+\infty}\int_{F} \theta(x)|\nabla h_n(x)|^p dx$. If $L=+\infty$, the inequality [\[only_in_factor_E3\]](#only_in_factor_E3){reference-type="eqref" reference="only_in_factor_E3"} is trivial; otherwise we consider a subsequence $\{n_j\}_{j\in\mathbb{N}}$ such that $$\lim_{j\to+\infty}\int_{F} \theta(x)|\nabla h_{n_j}(x)|^p dx=L.$$ This means that for any $\varepsilon>0$, there exists $\nu\in\mathbb{N}$ such that $$\label{defin_liminf} L-\varepsilon<\int_{F} \theta(x)|\nabla h_{n_j}(x)|^p dx<L+\varepsilon$$ for any $j\ge\nu$. Then, by Mazur's Lemma (refer for example to [@dunford1963linear; @renardy2006introduction]), for any $n\in\mathbb{N}$ there exist a function $N:\mathbb{N}\to\mathbb{N}$ and a sequence $\{\alpha_{n,l}\}_{l=n}^{N(n)}$ such that 1. $\alpha_{n,l}\geq 0$ for any $l\in[n,N(n)]$, 2.
$\sum_{l=n}^{N(n)}\alpha_{n,l}=1$, 3. $z_{n}:=\sum_{l=n}^{N(n)}\alpha_{n,l} h_{n_l}$ $\to h$ in $W^{1,p}(F)$. Then there exists a subsequence $\{z_{n_{k}}\}_{k\in\mathbb{N}}$ such that $$\label{eguagl_liminf} \liminf_{n\to+\infty} \int_{F} \theta(x)|\nabla z_n(x)|^p dx= \lim_{k\to+\infty}\int_{F} \theta(x)|\nabla z_{n_{k}}(x)|^p dx$$ and another subsequence, still denoted by $\{z_{n_{k}}\}_{k\in\mathbb{N}}$, such that $\nabla z_{n_{k}}\to \nabla h$ a.e. in $F$ [@leoni17 Chap. 18]. Therefore, we have $$\begin{split} \int_{F} \theta(x)|\nabla h(x)|^p dx&= \int_{F} \theta(x)\lim_{k\to+\infty}|\nabla z_{n_k}(x)|^p dx\leq \liminf_{k\to+\infty}\int_{F} \theta(x)|\nabla z_{n_{k}}(x)|^p dx\\ &=\lim_{k\to+\infty}\int_{F} \theta(x)|\nabla z_{n_{k}}(x)|^p dx =\liminf_{n\to+\infty}\int_{F} \theta(x)|\nabla z_n(x)|^p dx\\ &\le\liminf_{n\to+\infty}\sum_{l=n}^{N(n)}\alpha_{n,l}\int_{F} \theta(x)|\nabla h_{n_l}(x)|^p dx\\ &<\liminf_{n\to+\infty}\sum_{l=n}^{N(n)}\alpha_{n,l}\left(L+\varepsilon\right)=L+\varepsilon\\ &=\liminf_{n\to+\infty}\int_{F} \theta(x)|\nabla h_n(x)|^p dx+\varepsilon, \end{split}$$ where in the first line the equality follows from the a.e. convergence of $\nabla z_{n_{k}}$ to $\nabla h$ and the inequality follows from Fatou's Lemma, in the second line we applied [\[eguagl_liminf\]](#eguagl_liminf){reference-type="eqref" reference="eguagl_liminf"}, in the third line we applied the convexity of $|\cdot|^p$, and in the fourth line we applied [\[defin_liminf\]](#defin_liminf){reference-type="eqref" reference="defin_liminf"}. Conclusion [\[only_in_factor_E3\]](#only_in_factor_E3){reference-type="eqref" reference="only_in_factor_E3"} follows from the arbitrariness of $\varepsilon>0$. ◻ When $Q_B$ is factorized as $Q_B(x,E)=\theta(x)E^p$, assumption (A4) is automatically satisfied with $\beta=\theta$.
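The key functional-analytic step above is Mazur's Lemma. Its effect can be seen in a simple discrete illustration (in $L^2(0,2\pi)$ rather than $W^{1,p}$, for simplicity): the functions $h_n(x)=\sin(nx)$ converge weakly but not strongly to $0$, yet their plain averages, which are one admissible choice of convex combinations, converge strongly, since $\|z_N\|_{L^2}^2=\pi/N$ by orthogonality. The grid size below is an assumption chosen so that the quadrature is exact for the trigonometric polynomials involved:

```python
import numpy as np

n_samples = 20000
x = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
dx = 2.0 * np.pi / n_samples

def l2_norm_sq(f):
    # left-endpoint quadrature; exact for the trigonometric polynomials below
    return float(np.sum(f ** 2) * dx)

# h_n(x) = sin(n x) converges weakly, not strongly, to 0 in L^2(0, 2*pi):
# every h_n keeps the same norm pi
assert abs(l2_norm_sq(np.sin(1000.0 * x)) - np.pi) < 1e-8

# the averages z_N = (1/N) sum_{k<=N} h_k converge strongly,
# since ||z_N||^2 = pi / N by orthogonality of the sines
for N in (10, 100, 1000):
    z = sum(np.sin(k * x) for k in range(1, N + 1)) / N
    assert abs(l2_norm_sq(z) - np.pi / N) < 1e-8
```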
The next step consists in extending [\[only_in_factor_E3\]](#only_in_factor_E3){reference-type="eqref" reference="only_in_factor_E3"} from a weighted $p-$Laplace case to the quasilinear case. In doing this, we restrict the validity of the result to sequences of the solutions of problem [\[G_norm\]](#G_norm){reference-type="eqref" reference="G_norm"}. The main difficulty in proving this result lies in evaluating an upper bound of the Lebesgue measure of that part of $B$ where the solutions $v^\lambda$ admit large values of the gradient (see Figure [17](#fig_7_insiemi){reference-type="ref" reference="fig_7_insiemi"}). ![The objective of the proof is to show that the set of the points in $B$ such that the solution $v^{\lambda_{n_j}}$ does not satisfy the fundamental inequality ($C^c_{n_j}$) and admits large values of the gradient ($D^c_{n_j}$) can be made small enough (shaded region).](LimL_7_insiemi.png){#fig_7_insiemi width="\\textwidth"} **Lemma 4**. *Let $1<p<+\infty$, $f\in X^p_\diamond(\partial \Omega)$, (A1), (A2), (A3), (A4) hold and let the solutions $v^\lambda$ of [\[G_norm\]](#G_norm){reference-type="eqref" reference="G_norm"} be weakly convergent to $v$ in $W^{1,p}(B)$, as $\lambda\to+\infty$. 
Let $\{\lambda_n\}_{n\in\mathbb{N}}$ be an increasing sequence such that $\lambda_n\to+\infty$, as $n\to+\infty$, and $$\label{seq_to_liminf_lem_infty} \lim_{n\to+\infty}\frac{1}{\lambda_n^p}\int_{B}Q_B(x,\lambda_n|\nabla v^{\lambda_n}(x)|)dx=\liminf_{\lambda\to+\infty}\frac{1}{\lambda^p}\int_{B}Q_B(x,\lambda|\nabla v^{\lambda}(x)|)dx.$$ Then, for any $0<\delta\le\inf_{B}\beta$ and $\theta>0$, there exists $F_{\delta,\theta}\subseteq B$ with $|B\setminus F_{\delta,\theta}|<\theta$ such that $$\label{step_to_conclude_infty} \liminf_{n\to+ \infty}\int_{F_{\delta,\theta}} (\beta(x)-\delta)|\nabla v^{\lambda_n}(x)|^p dx\le \liminf_{\lambda\to+\infty}\frac{1}{\lambda^p}\int_{B}Q_B(x,\lambda|\nabla v^{\lambda}(x)|)dx,$$ where $\beta$ is given in (A4).* *Proof.* Let us fix $0<\delta\le\inf_{B}\beta$ and $\theta>0$. For any $n\in\mathbb{N}$, we set $$C_{\delta,n}=C_{n}:=\left\{x\in B\ : \ (\beta(x)-\delta)(\lambda_n|\nabla v^{\lambda_n}(x)|)^p\le Q_B(x,\lambda_n|\nabla v^{\lambda_n}(x)|)\right\}.$$ Now, for any constant $\ell>0$, we set $$\begin{split} D_{\ell,n}=D_{n}:=\{ x\in B \ : \ \lambda_n|\nabla v^{\lambda_n}(x)|\le \lambda_n\ell\},\\ D_{\ell,n}^c=D^c_{n}:=\{ x\in B \ : \ \lambda_n|\nabla v^{\lambda_n}(x)|>\lambda_n \ell\}. \end{split}$$ Now, let us define $$E_{\delta,\ell,n}=E_{n}:=\left\{x\in B\ : \ (\beta(x)-\delta)E^p\leq Q_B(x,E)\ \ \forall E> \lambda_n \ell\right\}.$$ Therefore, we have $$\label{chain_inclusion3} \begin{split} & C_{n}\supseteq C_{n}\cap D^c_{n}\supseteq E_{n}\cap D^c_{n},\\ & C_{n}\cup D_{n}\supseteq E_n\cup D_n =(D^c_{n}\setminus E_{n})^c, \end{split}$$ for any $n\in\mathbb{N}$. Let us observe that $E_{n}$ is increasing with respect to $n$ and that $\left| \bigcup_{n=1}^{+\infty}E_{n}\right|=|B|$.
Therefore there exists a natural number $n_1=n_1(\theta)$ such that $$\label{stimaF3} \left|\bigcup_{n=1}^{n_1} E_{n}\right|=\left|E_{n_1}\right|\ge |B|-\frac \theta 2.$$ Passing to the complementary sets in [\[chain_inclusion3\]](#chain_inclusion3){reference-type="eqref" reference="chain_inclusion3"}, we obtain $$C^c_{n_1}\cap D^c_{n_1}\subseteq D^c_{n_1}\setminus E_{n_1}\subseteq B \setminus E_{n_1}\quad\text{with}\quad |C_{n_1}^c\cap D^c_{n_1}|\leq \frac\theta 2,$$ where we have used [\[stimaF3\]](#stimaF3){reference-type="eqref" reference="stimaF3"}. Analogously, we construct a subsequence $\{\lambda_{n_j}\}_{j\in\mathbb{N}}$ such that $$|C_{n_j}^c \cap D^c_{n_j}|\leq \frac{\theta}{2^j}.$$ Then, by defining $$F_{\delta,\theta}: =\bigcap_{j=1}^\infty (C_{n_j}\cup D_{n_j}),$$ we have $$F^c_{\delta,\theta}= \bigcup_{j=1}^\infty (C_{n_j} \cup D_{n_j})^c \quad\text{with}\quad |F^c_{\delta,\theta}|\leq \sum_{j=1}^{+\infty} \frac{\theta}{2^j}=\theta,$$ which implies $$|B |-\theta\le |F_{\delta,\theta}|\le |B|.$$ Therefore, we have $$\begin{split} &\liminf_{n\to+\infty}\int_{F_{\delta,\theta}}(\beta(x)-\delta)|\nabla v^{\lambda_{n}}(x)|^p dx\leq\liminf_{j\to+\infty}\int_{F_{\delta,\theta}}(\beta(x)-\delta)|\nabla v^{\lambda_{n_j}}(x)|^p dx\\ &\le\liminf_{j\to+\infty}\int_{C_{n_j}\cup D_{n_j}}(\beta(x)-\delta)|\nabla v^{\lambda_{n_j}}(x)|^p dx\\ &\le\liminf_{j\to+\infty}\int_{C_{n_j} }(\beta(x)-\delta)|\nabla v^{\lambda_{n_j}}(x)|^p dx+\limsup_{j\to+\infty}\int_{D_{n_j}}(\beta(x)-\delta)|\nabla v^{\lambda_{n_j}}(x)|^p dx\\ &\le\liminf_{j\to+\infty}\frac{1}{\lambda_{n_j}^p}\int_{C_{n_j}} Q_B(x,\lambda_{n_j}|\nabla v^{\lambda_{n_j}}(x)|)dx+\frac{ \overline{Q}}{E_0^p} \ell^p |B|\\ &\le\lim_{j\to+\infty}\frac{1}{\lambda_{n_j}^p}\int_B Q_B(x,\lambda_{n_j}|\nabla v^{\lambda_{n_j}}(x)|)dx+\frac{ \overline{Q}}{E_0^p} \ell^p |B|\\ &=\liminf_{\lambda\to+\infty}\frac{1}{\lambda^p}\int_{B}Q_B(x,\lambda|\nabla v^{\lambda}(x)|)dx+\frac{
\overline{Q}}{E_0^p} \ell^p |B|, \end{split}$$ where in the second line we exploited $\beta(x)-\delta \geq 0$ a.e. in $B$, in the fourth line we exploited $\beta\leq \overline Q / E_0^p$, as follows from (A3) and (A4), and in the last line we exploited [\[seq_to_liminf_lem_infty\]](#seq_to_liminf_lem_infty){reference-type="eqref" reference="seq_to_liminf_lem_infty"}. Therefore, the conclusion follows from the arbitrariness of $\ell>0$. ◻ We are now in a position to extend the previous result to the general case, expressing the asymptotic behaviour of the Dirichlet Energy for the outer region in terms of a factorized $p-$Laplacian form. Specifically, the following Proposition provides a fundamental inequality relating the two cases. **Proposition 5**. *Let $1<p<+\infty$, $f\in X^p_\diamond(\partial \Omega)$, (A1), (A2), (A3), (A4) hold and the solutions $v^\lambda$ of [\[G_norm\]](#G_norm){reference-type="eqref" reference="G_norm"} weakly converge to $v$ in $W^{1,p}(B)$, as $\lambda\to+\infty$. Then $$\label{fundamental_inequality3} \int_{B}\beta(x)|\nabla v(x)|^pdx\le \liminf_{\lambda\to+\infty}\frac 1 {\lambda^p}\int_{B} Q_B(x,\lambda|\nabla v^\lambda(x)|)dx,$$ where $\beta$ is given in (A4).* *Proof.* We assume that $v$ is nonconstant, otherwise the conclusion is trivial. Therefore, the integral on the l.h.s. of [\[fundamental_inequality3\]](#fundamental_inequality3){reference-type="eqref" reference="fundamental_inequality3"} is positive because $\beta\in L^\infty_+(B)$. We first observe that the measure $$\kappa: F\in {\mathcal B(\Omega)}\mapsto \int_F\beta(x)|\nabla v(x)|^pdx,$$ where $\mathcal B(\Omega)$ is the class of the Borel sets contained in $\Omega$, is absolutely continuous with respect to the Lebesgue measure; then, for any $\varepsilon>0$, there exists $\theta>0$ such that $|F|<\theta$ implies $\kappa(F)<\frac\varepsilon 2$.
Then, for any $0<\delta\le\inf_{B}\beta$, the set $F_{\delta, \theta}$, given in Lemma [Lemma 4](#lem_dis_fund_infty){reference-type="ref" reference="lem_dis_fund_infty"}, satisfies $|B\setminus F_{\delta,\theta}|<\theta$, which implies that $$\kappa(B\setminus F_{\delta,\theta})=\int_{B\setminus F_{\delta,\theta}}\beta(x)|\nabla v(x)|^pdx <\frac \varepsilon 2.$$ Let $\{\lambda_n\}_{n\in\mathbb{N}}$ be an increasing sequence of positive numbers such that $\lambda_n\to+\infty$ as $n\to+\infty$, satisfying [\[seq_to_liminf_lem_infty\]](#seq_to_liminf_lem_infty){reference-type="eqref" reference="seq_to_liminf_lem_infty"} of Lemma [Lemma 4](#lem_dis_fund_infty){reference-type="ref" reference="lem_dis_fund_infty"}. We have $$\begin{split} &\int_{B} \beta(x)|\nabla v(x)|^p dx-\varepsilon=\int_{B\setminus F_{\delta,\theta}} \beta(x)|\nabla v(x)|^p dx+\int_{F_{\delta,\theta}} \beta(x)|\nabla v(x)|^p dx- \varepsilon\\ &<\int_{F_{\delta,\theta}} \beta(x)|\nabla v(x)|^p dx- \frac \varepsilon 2 \le\int_{F_{\delta,\theta}} \beta(x)|\nabla v(x)|^p dx-\delta \int_{B} |\nabla v(x)|^p dx\\ &\le \int_{F_{\delta,\theta}} (\beta(x)-\delta)|\nabla v(x)|^p dx\le \liminf_{n\to+\infty}\int_{F_{\delta,\theta}} (\beta(x)-\delta)|\nabla v^{\lambda_n}(x)|^p dx\\ &\le\liminf_{\lambda\to+\infty}\frac{1}{\lambda^p}\int_{B}Q_B(x,\lambda|\nabla v^{\lambda}(x)|)dx, \end{split}$$ where we applied the absolute continuity of $\kappa$ in the second line and chose $\delta\leq\min\{\varepsilon/\left(2\int_{B}|\nabla v(x)|^p dx\right), \inf_B \beta(x)\}$, we applied the lower semicontinuity of the integral functional in the third line, and we exploited [\[step_to_conclude_infty\]](#step_to_conclude_infty){reference-type="eqref" reference="step_to_conclude_infty"} of Lemma [Lemma 4](#lem_dis_fund_infty){reference-type="ref" reference="lem_dis_fund_infty"} in the fourth line. The conclusion follows from letting $\varepsilon\to 0^+$.
◻ # Limit for large Dirichlet data {#large_sec} In this Section, we treat the limiting case of problem [\[G_norm\]](#G_norm){reference-type="eqref" reference="G_norm"} for large boundary data, i.e. when $\lambda$ approaches infinity. We distinguish two cases according to the values of $p$ and $q$, which correspond to the asymptotic growth exponents of the electrical conductivity in $B$ and $A$, respectively: 1. $1<q<p<+\infty$ (see Section [5.1](#Large_bigger){reference-type="ref" reference="Large_bigger"}); 2. $1<p<q<+\infty$ (see Section [5.2](#Large_smaller){reference-type="ref" reference="Large_smaller"}). Specifically, in the first case, we prove that: - $v^\lambda\rightharpoonup v_B$ in $W^{1,p}(B)$, as $\lambda\to +\infty$, where $v_B$ in $B$ is the unique solution of problem [\[pproblem_B\]](#pproblem_B){reference-type="eqref" reference="pproblem_B"}; - $v^\lambda\rightharpoonup v_A$ in $W^{1,q}(A)$, as $\lambda\to +\infty$, where $v_A$ in $A$ is the unique solution of problem [\[qproblem_A\]](#qproblem_A){reference-type="eqref" reference="qproblem_A"} (under an additional hypothesis). For $1<q<p$, the limiting solution is characterized by the following problems: $$\label{Hinfty} \min_{\substack{v\in W^{1,p}(B)\\ v=f\ \text{on}\ \partial \Omega}}\mathbb B(v),\quad \mathbb B(v)=\int_{B} \beta(x)|\nabla v(x)|^pdx,$$ $$\label{Linfty} \min_{\substack{v\in W^{1,q}(A)\\ v=v_B\ \text{on}\ \partial A}}\mathbb A (v),\quad \mathbb A(v)=\int_{ A} \alpha(x)|\nabla v(x)|^q dx.$$ Hereafter, we denote by $v_B\in W^{1,p}(B)$ and $v_A\in W^{1,q}(A)$ the unique solutions of [\[Hinfty\]](#Hinfty){reference-type="eqref" reference="Hinfty"} and [\[Linfty\]](#Linfty){reference-type="eqref" reference="Linfty"}, respectively.
Problems [\[Hinfty\]](#Hinfty){reference-type="eqref" reference="Hinfty"} and [\[Linfty\]](#Linfty){reference-type="eqref" reference="Linfty"} are the variational forms of problems [\[pproblem_B\]](#pproblem_B){reference-type="eqref" reference="pproblem_B"} and [\[qproblem_A\]](#qproblem_A){reference-type="eqref" reference="qproblem_A"}, respectively. In the second case, instead, we prove that: - $v^\lambda\rightharpoonup w$ in $W^{1,p}(\Omega)$, as $\lambda\to +\infty$, where $w$ restricted to $B$ is the unique solution of problem [\[pproblem_Bgrad\]](#pproblem_Bgrad){reference-type="eqref" reference="pproblem_Bgrad"} and $w$ is constant in each connected component of $A$. - $v^\lambda\to w$ in $W^{1,q}(A)$, as $\lambda\to +\infty$, where $w$ is constant in each connected component of $A$. In the latter case, the constant value attained by $w$ in each connected component of $A$ guarantees the continuity of $w$ across the boundary with $B$. For $1<p<q$, the limiting solution is characterized by the following problem: $$\label{N} \min_{\substack{v\in W^{1,p}(\Omega)\\ |\nabla v|=0\ \text{a.e. in}\ A \\ v=f\ \text{on}\ \partial \Omega}}\mathbb B(v),\quad \mathbb B(v)=\int_{B} \beta(x)|\nabla v(x)|^{p} dx.$$ The unique solution of [\[N\]](#N){reference-type="eqref" reference="N"} is denoted by ${w\in W^{1,p}(\Omega)}$. Problem [\[N\]](#N){reference-type="eqref" reference="N"} is the variational form of problem [\[pproblem_Bgrad\]](#pproblem_Bgrad){reference-type="eqref" reference="pproblem_Bgrad"}. **Remark 6**. From the physical standpoint, the function ${w}\in W^{1,p}(\Omega)$, solution of problem [\[N\]](#N){reference-type="eqref" reference="N"}, corresponds to the solution of a weighted $p-$Laplace problem in $B$ with region $A$ filled by a PEC.
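A one-dimensional toy version of problem [\[N\]](#N){reference-type="eqref" reference="N"} makes the PEC limit concrete. With the assumed data $\Omega=(0,1)$, $A=(0.4,0.6)$, $\beta\equiv 1$, $p=3$ and boundary values $v(0)=0$, $v(1)=1$ (all chosen for illustration only), the minimizer is flat on $A$ and has equal slope $1/|B|$ on the two components of $B$; the sketch checks its energy $|B|^{1-p}$ and that any other feasible slope split costs more, by convexity:

```python
import numpy as np

# 1D toy of the PEC limit: Omega = (0,1), A = (0.4, 0.6), beta = 1, p = 3,
# boundary data v(0) = 0, v(1) = 1 (all values assumed for illustration)
p, a, b = 3.0, 0.4, 0.6
B_len = a + (1.0 - b)  # |B| = 0.8

def energy(slopes, lengths):
    # int_B |v'|^p dx for a piecewise-linear profile that is flat on A
    return float(np.sum(np.abs(slopes) ** p * lengths))

lengths = np.array([a, 1.0 - b])

# candidate limit w: slope 1/|B| on both components of B, constant on A
w_energy = energy(np.array([1.0 / B_len, 1.0 / B_len]), lengths)
assert abs(w_energy - B_len ** (1.0 - p)) < 1e-12

# any other feasible split of the total rise 1 between the two components
# of B (still flat on A) has strictly larger energy, by convexity
for s1 in np.linspace(0.1, 2.4, 24):
    s2 = (1.0 - s1 * a) / (1.0 - b)  # keeps v(1) = 1 with v flat on A
    if abs(s1 - 1.0 / B_len) > 1e-9:
        assert energy(np.array([s1, s2]), lengths) > w_energy
```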
For the sequel, we need an upper bound for both the Dirichlet Energy and the $L^p-$norm of the gradient of the solution $v^\lambda$ of problem [\[G_norm\]](#G_norm){reference-type="eqref" reference="G_norm"} in the outer region, as provided by the following Lemma. **Lemma 7**. *Let $1<p<+\infty$, $f\in X^p_\diamond(\partial \Omega)$, (A1), (A2) and (A3) hold and ${w}\in W^{1,p}(\Omega)$ be the function achieving the minimum of the variational problem [\[N\]](#N){reference-type="eqref" reference="N"}. Then, the solution $v^\lambda$ of [\[G_norm\]](#G_norm){reference-type="eqref" reference="G_norm"} satisfies $$\label{etildegrad} \int_{B} |\nabla v^\lambda(x)|^p dx\leq \frac{\overline Q}{\underline Q} \left(\int_{B} |\nabla {w} (x)|^p dx+ \frac{E^p_0}{\lambda^p} |B|\right)+\frac{E_0^p}{\lambda^p} |B|$$ and $$\label{etildeQ} \mathbb G^\lambda (v^\lambda) \leq \frac{\overline Q}{E_0^p}\int_{B} |\nabla {w} (x)|^p dx+\frac{\overline Q}{\lambda^p} |B|.$$* *Proof.* Let us consider the two auxiliary functions $$\begin{split} Q_1( E)=\underline Q\left[\left(\frac{E}{E_0}\right)^p- {1}\right]\quad\forall E>0,\\ Q_2(E)=\overline Q\left[\left(\frac{E}{E_0}\right)^p+ {1}\right]\quad\forall E>0.
\end{split}$$ In terms of $Q_1$ and $Q_2$, the assumption (A3) reads as $$\label{chainQ} Q_1(E)\le Q_B(x,E)\le Q_2(E) \quad \text{for a.e.} \ x\in B \ \text{and}\ \forall E>0.$$ Therefore, we have $$\label{proof_prelim_lemma} \begin{split} \int_{B} |\nabla v^\lambda(x)|^pdx-\frac{E_0^p}{\lambda^p} |B|&=\frac{E_0^p}{\underline Q}\frac{1}{\lambda^p} \int_{B}Q_1(\lambda|\nabla v^\lambda (x)|)dx\\ &\leq\frac{E_0^p}{\underline Q}\frac{1}{\lambda^p}\int_{B} Q_B(x,\lambda|\nabla v^\lambda (x)|)dx\le\frac{E_0^p}{\underline Q}\mathbb G^\lambda( v^\lambda)\\ &\le \frac{E_0^p}{\underline Q}\mathbb G^\lambda( {w})= \frac{E_0^p}{\underline Q}\frac{1}{\lambda^p}\int_{B} Q_B(x, \lambda|\nabla {w} (x)|) dx\\ & \le \frac{E_0^p}{\underline Q}\frac{1}{\lambda^p}\int_{B} Q_2( \lambda |\nabla {w} (x)|) dx\\ & = \frac{\overline Q}{\underline Q}\int_{B} |\nabla {w} (x)|^p dx+\frac{\overline Q}{\underline Q}\frac{E_0^p}{\lambda^p} |B|, \end{split}$$ where in the first line we applied the definition of $Q_1$ with $E=\lambda|\nabla v^\lambda|$, in the second line we exploited the leftmost inequality in [\[chainQ\]](#chainQ){reference-type="eqref" reference="chainQ"}, in the third line we used $w$ as a test function for the functional [\[G_norm\]](#G_norm){reference-type="eqref" reference="G_norm"} and exploited that $|\nabla w|=0$ a.e. in $A$, in the fourth line we exploited the rightmost inequality in [\[chainQ\]](#chainQ){reference-type="eqref" reference="chainQ"}, and in the fifth line we applied the definition of $Q_2$ with $E=\lambda|\nabla {w}|$. This concludes the proof of [\[etildegrad\]](#etildegrad){reference-type="eqref" reference="etildegrad"}. Inequality [\[etildeQ\]](#etildeQ){reference-type="eqref" reference="etildeQ"} is a byproduct of [\[proof_prelim_lemma\]](#proof_prelim_lemma){reference-type="eqref" reference="proof_prelim_lemma"}, obtained by comparing the last term of the second line with the final term.
◻

## First case: $\mathbf {q <p}$ {#Large_bigger}

For any fixed $f\in X_\diamond^p(\partial \Omega)$, we study the problem as $\lambda$ approaches infinity. Since $q<p$, we have $W^{1,p}(\Omega)\cup W^{1,q}(\Omega)= W^{1,q}(\Omega)$ and, hence, the variational problem [\[G_norm\]](#G_norm){reference-type="eqref" reference="G_norm"} particularizes as $$\label{G^tlarge} \min_{\substack{v\in W^{1,q}(\Omega)\\ v=f\ \text{on}\ \partial \Omega}}\mathbb G^\lambda(v).$$ Here, we first show that $v^\lambda$ is weakly convergent in $W^{1,p}(B)$ to the solution of problem [\[Hinfty\]](#Hinfty){reference-type="eqref" reference="Hinfty"}. Then, we prove that $v^\lambda$ is weakly convergent in $W^{1,q}(A)$ to the solution of problem [\[Linfty\]](#Linfty){reference-type="eqref" reference="Linfty"}, under an additional assumption.

**Theorem 8**. *Let $1<q<p<+\infty$, $f\in X^p_\diamond(\partial \Omega)$ and $v^\lambda$ be the solution of [\[G\^tlarge\]](#G^tlarge){reference-type="eqref" reference="G^tlarge"}. If (A1), (A2), (A3) and (A4) hold, then*

- *$v^\lambda\rightharpoonup v_B$ in $W^{1,p}(B)$, as $\lambda\to +\infty$,*

*where $v_B\in W^{1,p}(B)$ is the unique solution of [\[Hinfty\]](#Hinfty){reference-type="eqref" reference="Hinfty"}.*

*Proof.* Let $w\in W^{1,p}(\Omega)$ be the minimizer of problem [\[N\]](#N){reference-type="eqref" reference="N"}. From [\[etildegrad\]](#etildegrad){reference-type="eqref" reference="etildegrad"} of Lemma [Lemma 7](#primadiEtilde){reference-type="ref" reference="primadiEtilde"}, we have $$\label{limitatezza_v_lambda} \int_{B} |\nabla v^\lambda(x)|^p dx\leq\frac{\overline Q}{\underline Q}\left(\int_{B} |\nabla {w} (x)|^p dx+\frac{E_0^p}{\lambda^p} |B|\right)+\frac{E_0^p}{\lambda^p} |B|<\infty.$$ Since the last two integral terms tend to $0$ as $\lambda\to+\infty$, then $||\nabla v^\lambda||_{L^p(B)}^p$ is upper bounded for $\lambda$ sufficiently large.
Moreover, up to subsequences, since $v^\lambda=f$ on $\partial\Omega$, then [@leoni17 Chap. 11] there exists $v^\infty\in W^{1,p}(B)$, with $v^\infty=f$ on $\partial\Omega$, such that $$\label{conv_vt_v_infty} v^\lambda \rightharpoonup v^\infty\quad\text{in}\ W^{1,p}(B)\quad\text{as}\ \lambda\to+\infty.$$ Let $\tilde v_B\in W^{1,p}(\Omega)$ be an extension of $v_B\in W^{1,p}(B)$. By using $\tilde v_B$ as a test function in [\[G\^tlarge\]](#G^tlarge){reference-type="eqref" reference="G^tlarge"}, and from the rightmost inequality of assumption (A3.ii), we have $$\label{duepezzi} \mathbb{G}^\lambda(v^\lambda)\leq \mathbb{G}^\lambda(\tilde v_B)\leq \frac 1 {\lambda^p}\int_{B} Q_B(x,\lambda|\nabla v_B(x)|)dx+\frac{ \overline{Q}}{E_0^q}\frac 1 {\lambda^{p-q}}\int_{A} |\nabla \tilde v_B(x)|^qdx+\frac{ \overline{Q}} {\lambda^{p}}|A|.$$ By using the Dominated Convergence Theorem together with (A3) and (A4) to treat the first term on the r.h.s. of [\[duepezzi\]](#duepezzi){reference-type="eqref" reference="duepezzi"}, and by taking into account that the second and third terms vanish, we have $$\label{caso3convupii} \limsup_{\lambda\to+\infty}\mathbb{G}^\lambda(v^\lambda)\le\int_{B}\beta(x)|\nabla v_B(x)|^p dx=\mathbb B(v_B).$$ Since $v^\lambda$ is weakly convergent (see [\[conv_vt_v\_infty\]](#conv_vt_v_infty){reference-type="eqref" reference="conv_vt_v_infty"}), we can apply Proposition [Proposition 5](#fund_ine_prop){reference-type="ref" reference="fund_ine_prop"}, which gives: $$\label{caso3convdownii} \mathbb B(v^\infty)\leq \liminf_{\lambda\to +\infty}\frac{1}{\lambda^p}\int_{B}Q_B(x,\lambda|\nabla v^\lambda(x)|)dx.$$ Therefore, we have the following inequalities $$\label{chain_large_i} \begin{split} \mathbb B(v_B)\leq \mathbb B(v^\infty)&\leq \liminf_{\lambda\to +\infty}\frac{1}{\lambda^p}\int_{B}Q_B(x,\lambda|\nabla v^\lambda(x)|)dx\\ &\leq\limsup_{\lambda\to+\infty}\mathbb{G}^\lambda(v^\lambda)\leq\lim_{\lambda\to+\infty}\mathbb{G}^\lambda(\tilde v_B)= \mathbb B(v_B), \end{split}$$ where in
the first inequality we used $v^\infty$ as a test function in $\mathbb B$, the second inequality is [\[caso3convdownii\]](#caso3convdownii){reference-type="eqref" reference="caso3convdownii"}, the third inequality is obtained by adding the integral of the Dirichlet Energy density in $A$, the fourth inequality is obtained by using $\tilde v_B$ as a test function in $\mathbb G^\lambda$, and the final equality follows as in [\[caso3convupii\]](#caso3convupii){reference-type="eqref" reference="caso3convupii"}. This implies that $\mathbb B(v_B)= \mathbb B(v^\infty)$ and $v_B=v^\infty$, because of the uniqueness of the solution of problem [\[Hinfty\]](#Hinfty){reference-type="eqref" reference="Hinfty"}. This result, together with [\[conv_vt_v\_infty\]](#conv_vt_v_infty){reference-type="eqref" reference="conv_vt_v_infty"}, gives the conclusion. ◻

**Remark 9**. From [\[chain_large_i\]](#chain_large_i){reference-type="eqref" reference="chain_large_i"}, we find that the fundamental inequality [\[caso3convdownii\]](#caso3convdownii){reference-type="eqref" reference="caso3convdownii"}, also stated in Proposition [Proposition 5](#fund_ine_prop){reference-type="ref" reference="fund_ine_prop"}, holds as an equality: $$\mathbb B(v^\infty)=\lim_{\lambda\to +\infty}\frac{1}{\lambda^p}\int_{B}Q_B(x,\lambda|\nabla v^\lambda(x)|)dx.$$ The boundedness of $v^\lambda$ in $W^{1,p}(B)$ implies the boundedness of $Tr(v^\lambda)$ in $X^p(\partial A)$ and therefore in $X^q(\partial A)$. Then, by the Inverse Trace inequality in Besov spaces [@leoni17 Th. 18.40], we have the boundedness of $v^\lambda$ in $W^{1,q}(A)$. Then, up to a subsequence, there exists $v_A\in W^{1,q}(A)$ such that $v^\lambda\rightharpoonup v_A$ in $W^{1,q}(A)$. Indeed, we recall from Theorem [Theorem 8](#Thm_conv_limlarge){reference-type="ref" reference="Thm_conv_limlarge"} that $v^\lambda\rightharpoonup v_B$ in $W^{1,p}(B)$ as $\lambda\to +\infty$; this implies that $v^\lambda\rightharpoonup v_B$ in $W^{1,q}(B)$.
Therefore, by [@leoni17 Cor. 18.4], we have $Tr(v^\lambda)\to Tr(v_B)=Tr(v_A)$ in $L^q(\partial A)$ as $\lambda\to +\infty$. However, to prove the desired result, we also need the stronger convergence $$||Tr(v^\lambda)-Tr(v_A)||_{X^q(\partial A)}\to 0^+,\quad \text{as }\ \lambda\to+\infty.$$ This convergence holds when $\nabla v^\lambda\to\nabla v_B$ in $L^p(B)$, by using [@leoni17 Th. 18.40]. This problem will be addressed in a forthcoming paper. Nevertheless, it seems interesting to state the following conditional results.

**Lemma 10**. *Let $1<q<p<+\infty$, if (A1), (A2), (A3) and (A4) hold, and $||Tr(v^\lambda)-Tr(v_A)||_{X^q(\partial A)}\to 0^+$, as $\lambda\to+\infty$, then there exist two positive constants $C$ and $\lambda_0$ such that $||\nabla v^\lambda||_{L^q(A)}\leq C$ for any $\lambda>\lambda_0$.*

*Proof.* For any $v\in v^\lambda+W^{1,q}_0(A)$, we have $$\label{chain_p_en_i} \begin{split} \frac{\underline{Q}}{E_0^{q}}\int_A |\nabla v^\lambda(x)|^q \text{d}x - \frac{\underline{Q} |A|}{\lambda^q}&\leq\frac{1}{\lambda^q}\int_A Q_A(x,\lambda|\nabla v^\lambda(x)|)\text{d}x\\ &\leq\frac{\overline{Q}}{E_0^{q}} \int_A |\nabla v(x)|^q\text{d}x + \frac{\overline{Q} |A|}{\lambda^q} \end{split}$$ where in the first inequality we used assumption (A3), and in the second inequality we used (A3) and the minimality of $v^\lambda$. The Inverse Trace inequality in Besov spaces [@leoni17 Th. 18.40] assures the existence of a function $g\in v^\lambda+ W^{1,q}_0(A)$ such that $||\nabla g||_{q}\leq K(q,\Omega) ||Tr(v^\lambda)||_{X^q(\partial A)}$.
Therefore, from [\[chain_p\_en_i\]](#chain_p_en_i){reference-type="eqref" reference="chain_p_en_i"} with $v=g$ and since $Tr(g)=Tr(v^\lambda)$ on $\partial A$, we have $$\label{upp_vlambda} ||\nabla v^\lambda||^q_{q} \leq \frac{\overline Q}{\underline Q}||\nabla g||^q_{q} +\frac{(\overline Q+\underline{Q})E_0^q |A|}{\underline Q \lambda^q} \le \frac{\overline Q}{\underline Q }K^q||Tr(v^\lambda)||^q_{X^q(\partial A)} +\frac{(\overline Q+\underline{Q})E_0^q |A|}{\underline Q \lambda^q}.$$ Hence, $||\nabla v^\lambda||^q_{L^q(A)}$ is upper bounded for $\lambda$ sufficiently large, because of the convergence of $Tr(v^\lambda)$. ◻

In order to identify $v_A$, before proving the inner convergence result, we introduce the following functional $$\label{Ilambda_corpo} \mathbb I^\lambda(v)=\frac{1}{\lambda^q}\int_A Q_A(x,\lambda|\nabla v(x)|)dx,$$ and we observe that, for any $\lambda>0$, $v^\lambda$ is also the minimizer of $$\min_{\substack{v\in W^{1,q}(A)\\ v=v^\lambda \ \text{on } \partial A}}\mathbb I^\lambda(v).$$

**Theorem 11**. *Let $1<q<p<+\infty$, $f\in X^p_\diamond(\partial \Omega)$ and $v^\lambda$ be the solution of [\[G\^tlarge\]](#G^tlarge){reference-type="eqref" reference="G^tlarge"}. If (A1), (A2), (A3), (A4) and (A4') hold, and $||Tr(v^\lambda)-Tr(v_A)||_{X^q(\partial A)}\to 0^+$, as $\lambda\to+\infty$, then*

- *$v^\lambda {\rightharpoonup} v_A$ in $W^{1,q}(A)$, as $\lambda\to +\infty$,*

*where $v_A\in W^{1,q}(A)$ is the unique solution of [\[Linfty\]](#Linfty){reference-type="eqref" reference="Linfty"}.*

*Proof.* The Inverse Trace inequality [@leoni17 Th. 18.34] assures the existence of a function $w^\lambda\in Tr(v^\lambda-v_A)+ W^{1,q}_0(A)$ such that $||\nabla w^\lambda ||_{q}\leq K(q,\Omega) ||Tr(v^\lambda)-Tr(v_A)||_{X^q(\partial A)}$. Since, by assumption, the last quantity is infinitesimal, then $||\nabla w^\lambda ||_{q}\to 0$.
For any $0<t<1$, let us consider the convex combination $$v_A+w^\lambda=t \frac{v_A}{t}+(1-t)\frac{w^\lambda}{1-t};$$ then, by the convexity of $\mathbb I^\lambda$, we have $$\label{convex_comb_I} \begin{split} \mathbb I^\lambda(v_A+w^\lambda)&\le t\mathbb I^\lambda\left( \frac{v_A}{t}\right)+(1-t)\mathbb I^\lambda\left(\frac{w^\lambda}{1-t}\right)\\ &\leq t\mathbb I^\lambda\left( \frac{v_A}{t}\right)+(1-t)\overline Q\left(\int_A\frac{|\nabla w^\lambda(x)|^q}{(1-t)^qE_0^q}dx+\frac{|A|}{\lambda^q}\right), \end{split}$$ where the second inequality follows from assumption (A3.ii). By passing to the $\liminf$ in [\[convex_comb_I\]](#convex_comb_I){reference-type="eqref" reference="convex_comb_I"} and taking into account that $||\nabla w^\lambda ||_{q}\to 0$ as $\lambda\to+\infty$, we have $$\liminf_{\lambda\to +\infty} \mathbb I^\lambda(v_A+w^\lambda)\le t\liminf_{\lambda\to + \infty} \mathbb I^\lambda\left( \frac{v_A}{t}\right)=t^{1-q}\liminf_{\lambda\to + \infty} \mathbb I^\lambda\left(v_A\right).$$ Therefore, the limit as $t\to 1^-$ provides $$\label{IiwIi} \liminf_{\lambda\to +\infty} \mathbb I^\lambda(v_A+w^\lambda)\le \liminf_{\lambda\to + \infty} \mathbb I^\lambda\left(v_A\right).$$ From Lemma [Lemma 10](#lpconv2){reference-type="ref" reference="lpconv2"}, we know that $||\nabla v^\lambda||_{L^q(A)}$ is upper bounded and hence there exists $v^\infty \in W^{1,q}(A)$ such that $v^\lambda\rightharpoonup v^\infty$ in $W^{1,q}(A)$. Hence, we have the following inequalities $$\label{conv_vt_v_infty_A} \begin{split} \mathbb A(v_A)\leq \mathbb A(v^\infty)&\leq \liminf_{\lambda\to+\infty}\mathbb{I}^\lambda(v^\lambda)\leq\liminf_{\lambda\to +\infty}\mathbb{I}^\lambda(v_A+w^\lambda)\\ &\leq\liminf_{\lambda\to +\infty}\mathbb{I}^\lambda(v_A)=\lim_{\lambda\to+\infty}\mathbb{I}^\lambda(v_A)= \mathbb A(v_A),
\end{split}$$ where the first inequality follows from using $v^\infty$ as a test function in $\mathbb A$, the second inequality follows from the fundamental result of Proposition [Proposition 5](#fund_ine_prop){reference-type="ref" reference="fund_ine_prop"}, the third inequality accounts for the minimality of $v^\lambda$ for $\mathbb I^\lambda$, the fourth inequality is [\[IiwIi\]](#IiwIi){reference-type="eqref" reference="IiwIi"}, and the last equalities follow from the Dominated Convergence Theorem together with (A3) and (A4'). Proposition [Proposition 5](#fund_ine_prop){reference-type="ref" reference="fund_ine_prop"} was applied in the second inequality with $B$, $p$, $\beta$ and (A4) replaced by $A$, $q$, $\alpha$ and (A4'), respectively. Equation [\[conv_vt_v\_infty_A\]](#conv_vt_v_infty_A){reference-type="eqref" reference="conv_vt_v_infty_A"} implies that $\mathbb A(v_A)= \mathbb A(v^\infty)$ and $v_A=v^\infty$, by the uniqueness of the solution of problem [\[Linfty\]](#Linfty){reference-type="eqref" reference="Linfty"}. ◻

As a corollary, we have a global convergence result.

**Corollary 12**. *Let $1<q<p<+\infty$, $f\in X^p_\diamond(\partial \Omega)$ and $v^\lambda$ be the solution of [\[G\^tlarge\]](#G^tlarge){reference-type="eqref" reference="G^tlarge"}.
If (A1), (A2), (A3), (A4) and (A4') hold, and $||Tr(v^\lambda)-Tr(v_A)||_{X^q(\partial A)}\to 0^+$, as $\lambda\to+\infty$, then*

- *$v^\lambda\rightharpoonup v_\Omega$ in $W^{1,q}(\Omega)$, as $\lambda\to +\infty$,*

*where $v_\Omega\in W^{1,q}(\Omega)$ is defined by $$v_\Omega= \begin{cases} v_B\quad\text{in }B\\ v_A\quad\text{in }A, \end{cases}$$ $v_B\in W^{1,p}(B)$ is the unique solution of [\[Hinfty\]](#Hinfty){reference-type="eqref" reference="Hinfty"} and $v_A\in W^{1,q}(A)$ is the unique solution of [\[Linfty\]](#Linfty){reference-type="eqref" reference="Linfty"}.*

## Second case: $\mathbf{p<q}$ {#Large_smaller}

For any fixed $f\in X_\diamond^p(\partial \Omega)$, since $p<q$, we have $W^{1,p}(\Omega)\cup W^{1,q}(\Omega)= W^{1,p}(\Omega)$ and, hence, the variational problem [\[G_norm\]](#G_norm){reference-type="eqref" reference="G_norm"} particularizes as $$\label{G^tlargeii} \min_{\substack{v\in W^{1,p}(\Omega)\\ v=f\ \text{on}\ \partial \Omega}}\mathbb G^\lambda(v).$$ We denote by ${w}$ the solution of the limiting problem defined in [\[N\]](#N){reference-type="eqref" reference="N"}. Problem [\[N\]](#N){reference-type="eqref" reference="N"} is relevant because $v^\lambda$ weakly converges to $w$ as $\lambda \to +\infty$. We observe that the condition $|\nabla v|=0$ a.e. in $A$ appearing in [\[N\]](#N){reference-type="eqref" reference="N"} is equivalent to saying that $v$ is constant on each connected component of $A$. In other words, region $A$ behaves as a perfect electric conductor, while region $B$ corresponds to a material modelled by a $p-$Laplacian.

**Theorem 13**. *Let $1<p<q<+\infty$, $f\in X^p_\diamond(\partial \Omega)$ and $v^\lambda$ be the solution of [\[G\^tlargeii\]](#G^tlargeii){reference-type="eqref" reference="G^tlargeii"}.
If (A1), (A2), (A3) and (A4) hold, then*

- *$v^\lambda\rightharpoonup {w}$ in $W^{1,p}(\Omega)$, as $\lambda\to +\infty$,*

*where ${w}\in W^{1,p}(\Omega)$ is the unique solution of [\[N\]](#N){reference-type="eqref" reference="N"}. Moreover, we have*

- *$v^\lambda\to {w}$ in $W^{1,q}(A)$, as $\lambda\to +\infty$.*

*Proof.* For the sake of simplicity, we will only treat the case when $A$ has one connected component. Let ${w}\in W^{1,p}(\Omega)$ be the solution of [\[N\]](#N){reference-type="eqref" reference="N"}. By [\[etildegrad\]](#etildegrad){reference-type="eqref" reference="etildegrad"} of Lemma [Lemma 7](#primadiEtilde){reference-type="ref" reference="primadiEtilde"}, we have $$\label{chainGtvtwOmega4} \int_{B} |\nabla v^\lambda(x)|^p dx\leq\frac{\overline Q}{\underline Q}\left(\int_{B} |\nabla {w} (x)|^p dx+\frac{E_0^p}{\lambda^p} |B|\right)+\frac{E_0^p}{\lambda^p} |B|.$$ Hence the left-hand side in [\[chainGtvtwOmega4\]](#chainGtvtwOmega4){reference-type="eqref" reference="chainGtvtwOmega4"} is upper bounded. Moreover, we have $$\label{chainGtvtwA} \begin{split} \lambda^{q-p}\int_{A}|\nabla v^\lambda(x)|^qdx &\leq\frac{E_0^q}{\lambda^p} |A|+\frac{E_0^q}{\underline Q \lambda^p}\int_{A} Q_A(x,\lambda|\nabla v^\lambda(x)|)dx\\ &\leq\frac{E_0^q}{\lambda^p} |A|+\frac{E_0^q}{\underline Q} \mathbb G^\lambda(v^\lambda)\\ &\leq\frac{E_0^q}{\lambda^p} |A|+\frac{\overline{Q}}{\underline Q}E_0^{q-p}\int_{B} |\nabla {w}(x)|^pdx+\frac{ \overline{Q}}{\underline Q }\frac{E_0^q}{\lambda^{p}} |B|, \end{split}$$ where in the first inequality we used the left-hand side of assumption (A3.ii), in the second inequality we used the definition of $\mathbb G^\lambda$ and in the third inequality we exploited [\[etildeQ\]](#etildeQ){reference-type="eqref" reference="etildeQ"} of Lemma [Lemma 7](#primadiEtilde){reference-type="ref" reference="primadiEtilde"}.
From [\[chainGtvtwOmega4\]](#chainGtvtwOmega4){reference-type="eqref" reference="chainGtvtwOmega4"} and [\[chainGtvtwA\]](#chainGtvtwA){reference-type="eqref" reference="chainGtvtwA"}, it is clear that $v^\lambda\in W^{1,p}(\Omega)\cap W^{1,q}(A)$ and that $||v^\lambda||_{W^{1,p}(\Omega)}$ is upper bounded. Therefore, taking into account that $v^\lambda=f$ on $\partial\Omega$ for any $\lambda>0$, there exists a function $v^\infty\in W^{1,p}(\Omega)$ such that $v^\lambda\rightharpoonup v^\infty$ in $W^{1,p}(\Omega)$, up to a subsequence, with $v^\infty=f$ on $\partial\Omega$. Finally, from [\[chainGtvtwA\]](#chainGtvtwA){reference-type="eqref" reference="chainGtvtwA"}, we find that $\int_{A}|\nabla v^\lambda(x)|^qdx=O(\lambda^{p-q})$. Therefore, $v^\lambda\to v^\infty$ in $W^{1,q}(A)$ and $v^\infty$ is constant in $A$ because $\nabla v^\lambda \to 0$ in $L^{q}(A)$. Since assumption (A4) holds, then we are in position to apply Proposition [Proposition 5](#fund_ine_prop){reference-type="ref" reference="fund_ine_prop"} which gives $$\label{fundamental_inequality_inftyii} \mathbb B(v^\infty)\le \liminf_{\lambda\to +\infty}\frac 1 {\lambda^p}\int_{B} Q_B(x,\lambda|\nabla v^\lambda(x)|)dx.$$ Consequently, we have the following inequalities $$\label{chain_large_ii} \begin{split} \mathbb B({w})\le\mathbb B(v^\infty)&\le\liminf_{\lambda\to +\infty}\frac 1 {\lambda^p}\int_{B} Q_B(x,\lambda|\nabla v^\lambda(x)|)dx\\ &\le \liminf_{\lambda\to +\infty}\mathbb G^\lambda(v^\lambda) \le \lim_{\lambda\to +\infty}\mathbb{G}^\lambda( {w})=\ \mathbb B( {w}), \end{split}$$ where in the first inequality we used $v^\infty$ as a test function in problem [\[N\]](#N){reference-type="eqref" reference="N"}, in the second inequality we used [\[fundamental_inequality_inftyii\]](#fundamental_inequality_inftyii){reference-type="eqref" reference="fundamental_inequality_inftyii"}, in the third inequality we used the definition of $\mathbb G^\lambda$, in the fourth inequality we used $w$ as the test function in $\mathbb G^\lambda$, and
in the equality we used assumption (A4) and the Dominated Convergence Theorem. This implies that $\mathbb B( {w})=\mathbb B(v^\infty)$ and hence $v^\infty= {w}$ by the uniqueness of the solution of problem [\[N\]](#N){reference-type="eqref" reference="N"}. ◻

**Remark 14**. From [\[chain_large_ii\]](#chain_large_ii){reference-type="eqref" reference="chain_large_ii"}, we find that the fundamental inequality [\[fundamental_inequality_inftyii\]](#fundamental_inequality_inftyii){reference-type="eqref" reference="fundamental_inequality_inftyii"}, also stated in Proposition [Proposition 5](#fund_ine_prop){reference-type="ref" reference="fund_ine_prop"}, holds as an equality: $$\mathbb B(v^\infty)= \lim_{\lambda\to +\infty}\frac 1 {\lambda^p}\int_{B} Q_B(x,\lambda|\nabla v^\lambda(x)|)dx.$$

# The pointwise convergence assumptions in the limiting case {#counter_sec}

The main aim of this Section is to prove that assumption (A4) for large Dirichlet data is sharp. Specifically, we provide a counterexample where (A4) does not hold, while (A1), (A2) and (A3) still hold. For this case, we prove that, for the Dirichlet energy [\[G_norm\]](#G_norm){reference-type="eqref" reference="G_norm"}, the convergence results of Theorems [Theorem 8](#Thm_conv_limlarge){reference-type="ref" reference="Thm_conv_limlarge"} and [Theorem 13](#Thm_conv_limlargeii){reference-type="ref" reference="Thm_conv_limlargeii"} (and hence of Theorem [Theorem 11](#thm_Large_inner){reference-type="ref" reference="thm_Large_inner"}) do not hold. With a similar approach, not reported here for the sake of brevity, it is possible to prove that even assumption (A4') is sharp. In order to prove this result, we need to build a Dirichlet energy density $Q_B$ such that the ratio $Q_B(x,E)/E^p$ does not admit a limit as $E \to +\infty$. This is the case when the ratio $Q_B(x,E)/E^p$ oscillates between two different values.
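Ignoring for a moment the convexity requirement (which is precisely what the construction below takes care of), a density whose ratio to $E^2$ oscillates is easy to sketch numerically. In this toy example (the bands $[4^n,2\cdot 4^n]$ are arbitrary choices, not those of the construction below) the ratio equals $3$ along one sequence of values of $E$ and $2$ along another, so it admits no limit:

```python
import math

# Toy oscillating density: Psi(E)/E^2 equals 3 on the bands [4**n, 2*4**n]
# and 2 elsewhere (convexity is ignored in this sketch).
def psi(E):
    n = math.floor(math.log(E) / math.log(4))
    return 3 * E**2 if 4**n <= E <= 2 * 4**n else 2 * E**2

# Along E = 1.5*4**n the ratio stays at 3; along E = 3*4**n it stays at 2,
# so Psi(E)/E^2 has no limit as E grows.
on_band  = [psi(1.5 * 4**n) / (1.5 * 4**n) ** 2 for n in range(1, 10)]
off_band = [psi(3.0 * 4**n) / (3.0 * 4**n) ** 2 for n in range(1, 10)]
print(on_band[0], off_band[0])
```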
In the following Lemma we prove that a Dirichlet Energy density of this type exists (see Figure [18](#fig_8_counterlarge){reference-type="ref" reference="fig_8_counterlarge"} for the geometrical interpretation).

![The continuous line represents the function $\Phi$ for the counterexample. It gives the Dirichlet Energy density as $\Psi(E)=\Phi(E)+E^2$.](LimL_8_counterlargePhi.png){#fig_8_counterlarge width="60%"}

**Lemma 15**. *Let $L>1$, then there exist two sequences $$\lambda_n'\uparrow +\infty \quad\text{and}\quad \lambda_n''\uparrow +\infty$$ such that $$L \lambda''_n<\lambda'_{n},\ \ L\lambda'_{n}<\lambda''_{n+1}\quad\forall n\in\mathbb{N},$$ and a strictly convex function $\Psi:[0,+\infty[\to[0,+\infty[$ such that $$\Psi|_{[\lambda_n',L \lambda_n']}(E)=2E^2,\quad \Psi|_{[\lambda_n'',L \lambda_n'']}(E)=3E^2.$$*

*Proof.* Let us fix $\lambda_1''>0$. For each $n \in \mathbb{N}$ we set the auxiliary function $\Phi$ equal to $2 E^2$ in $(\lambda_n'', L \lambda_n'')$ and equal to $E^2$ in $(\lambda_n', L \lambda_n')$. In the interval $(L \lambda_n'',\lambda_n')$ the function $\Phi$ is equal to the tangent line to the function $2E^2$ at $E=L \lambda_n''$. The point $\lambda_n'$ is found at the intersection of this tangent line with the function $E^2$. In the interval $(L \lambda_n',\lambda_{n+1}'')$ the function $\Phi$ is a straight line, continuous at $L \lambda_n'$ and tangent to $2E^2$. The point $\lambda_{n+1}''$ is found as the abscissa of the tangent point between this straight line and the function $2E^2$. This procedure is applied iteratively from $n=1$. The function $\Phi$ turns out to be convex and the sequences $\{\lambda_n'\}_{n \in \mathbb{N}}$ and $\{\lambda_n''\}_{n \in \mathbb{N}}$ are monotonically increasing to infinity. Therefore, the measure of the intervals where $\Phi$ is equal to $E^2$ or equal to $2E^2$ is nonvanishing.
It is possible to prove that the provided sequences $\{\lambda'_n\}_{n\in\mathbb{N}}$ and $\{\lambda_n''\}_{n\in\mathbb{N}}$ are two geometric progressions. Indeed $$\lambda_n'=c_1L\lambda_n'',\ \lambda_{n+1}''=c_2L \lambda_n',\ \lambda_{n+1}'=C^2L^2\lambda_n',\ \lambda_{n+1}''=C^2L^2\lambda_n'',$$ where $c_1=2+\sqrt 2$, $c_2=1+\frac{\sqrt 2}{2}$ and $C$ is the geometric mean of $c_1$ and $c_2$, that is, $C^2=c_1c_2=3+2\sqrt 2$. Finally, we set $\Psi(E)=\Phi(E)+E^2$. $\Psi$ is a strictly convex function. ◻

The Dirichlet energy density defined as $Q_B(x,E)=\Psi(E)$ satisfies all the assumptions except (A4). This energy density is the basis to build a counterexample proving that (A4) is sharp. Specifically, we consider a 2D case ($n=2$) and $p=2$ in the outer region. The asymptotic growth exponent $q$ satisfies condition $1<q<\infty$. Let $r$ be greater than or equal to 10, and let the outer region $B$ be the annulus centered at the origin with radii $1$ and $r$. This annulus is $D_r\setminus\overline D_1$, where $D_r$ and $D_1$ are the disks of radii $r$ and $1$, respectively, centered at the origin. The inner region is, therefore, $D_1$. We focus on problem [\[G_norm\]](#G_norm){reference-type="eqref" reference="G_norm"}, where the Dirichlet energy density is defined as $$\begin{split} Q_B(x,E)&=\Psi(E)\quad\text{in}\ D_r\setminus\overline D_1\times [0,+\infty[,\qquad\\ Q_A(x,E)&=E^q\qquad\text{in}\ D_1\times [0,+\infty[. \end{split}$$ Let $\gamma$ be defined as $\gamma=7+\frac {12}{r^2}$. We denote $x=(x_1,x_2) \in \mathbb{R}^2$ and we consider the problem $$\label{G^tc1large} \min_{\substack{v\in W^{1,q}(D_r)\\ v=\gamma x_1\ \text{on}\ \partial D_r}}\mathbb G^\lambda(v),\quad \mathbb G^\lambda(v)=\frac 1 {\lambda^2}\left(\int_{ D_r\setminus D_1} \Psi(\lambda|\nabla v(x)|)dx+\int_{D_1} \lambda^q|\nabla v(x)|^q dx\right).$$ Here we prove that $\lim_{\lambda\to +\infty}\mathbb G^\lambda(v^\lambda)$ does not exist.
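The constants $c_1$ and $c_2$ of Lemma [Lemma 15](#succ_Llarge){reference-type="ref" reference="succ_Llarge"} can be checked by running the tangent-line construction of its proof numerically (a sketch; the starting value $\lambda_1''=1$ and $L=11$ are arbitrary test values):

```python
import math

L = 11.0                                  # any L > 1 works for the construction

def next_lambda_prime(lam2):
    # lambda'_n: the tangent to 2E^2 at E = a = L*lambda''_n is y = 4aE - 2a^2;
    # intersecting it with E^2 gives E^2 - 4aE + 2a^2 = 0 (larger root)
    a = L * lam2
    return (4 * a + math.sqrt(16 * a * a - 8 * a * a)) / 2

def next_lambda_second(lam1):
    # lambda''_{n+1}: the line through (b, b^2), b = L*lambda'_n, tangent to
    # 2E^2 touches it at a solving 2a^2 - 4ab + b^2 = 0 (larger root)
    b = L * lam1
    return (4 * b + math.sqrt(16 * b * b - 8 * b * b)) / 4

lam2 = 1.0                                # lambda''_1
lam1 = next_lambda_prime(lam2)            # lambda'_1
c1 = lam1 / (L * lam2)                    # should equal 2 + sqrt(2)
c2 = next_lambda_second(lam1) / (L * lam1)  # should equal 1 + sqrt(2)/2

# one full step of the iteration: lambda'_{n+1} = C^2 L^2 lambda'_n
lam2_next = next_lambda_second(lam1)
lam1_next = next_lambda_prime(lam2_next)
print(c1, c2, lam1_next / lam1 / L**2)    # last value is C^2 = 3 + 2*sqrt(2)
```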
Specifically, the two sequences $\{\lambda_n'\}_{n\in\mathbb{N}}\uparrow +\infty$ and $\{\lambda_n''\}_{n\in\mathbb{N}}\uparrow+\infty$ of Lemma [Lemma 15](#succ_Llarge){reference-type="ref" reference="succ_Llarge"} give $$\label{counter_resultlarge} \limsup_{n\to +\infty}\mathbb{G}^{\lambda_n'}(v^{\lambda_n'})\le\ell_1<\ell_2\le\liminf_{n\to +\infty}\mathbb{G}^{\lambda_n''}(v^{\lambda_n''}).$$ As usual, $v^\lambda$ is the solution of [\[G\^tc1large\]](#G^tc1large){reference-type="eqref" reference="G^tc1large"}. Let us consider the following problem $$\begin{aligned} \label{problem_down} &\min_{\substack{v\in H^{1}(D_r\setminus\overline D_1)\\ v=\gamma x_1\ \text{on}\ \partial D_r\\v=const.\ \text{on}\ \partial D_1}}\mathbb C (v), \quad \mathbb C(v)=\int_{ D_r\setminus D_1} |\nabla v(x)|^2dx.\end{aligned}$$ The symmetry of the domain and the zero average of the boundary data imply that the constant appearing in [\[problem_down\]](#problem_down){reference-type="eqref" reference="problem_down"} on $\partial D_1$ is zero. An easy computation reveals that $$v_{D_r}(x)=\frac{7r^2+12}{r^2-1}\left(1-\frac{1}{x_1^2+x_2^2}\right)x_1\quad\text{in}\ D_r\setminus\overline D_1$$ is the solution of [\[problem_down\]](#problem_down){reference-type="eqref" reference="problem_down"}, that $\Delta v_{D_r}=0$ in $D_r\setminus\overline D_1$, and that we have $$\begin{split} \frac{7r^2-12}{r^2-1}\left(1-\frac{1}{\rho^2}\right)\le|\nabla v_{D_r}(x)|\le\frac{7r^2+12}{r^2-1} \left(1+\frac 1{\rho^2}\right)\quad\text{on}\ \partial D_{\rho},\ 1<\rho\le r.
\end{split}$$ Consequently, when $\rho\ge2$, we have $$\label{stima_nablaw} 1\le\frac 34 \frac{7r^2-12}{r^2-1}\le|\nabla v_{D_r}(x)|\le\frac 54 \frac{7r^2+12}{r^2-1}\leq 10\quad\text{in}\ D_r\setminus D_2.$$ Let $L$ be greater than 10, $\lambda_n'\uparrow +\infty$ and let $\lambda_n''\uparrow +\infty$ be the two sequences of Lemma [Lemma 15](#succ_Llarge){reference-type="ref" reference="succ_Llarge"}. It turns out that $$\label{stime_up_down} \lambda_n' \le \lambda_n'|\nabla v_{D_r}(x)| \le L \lambda'_n\quad\text{in}\ D_r\setminus D_2.$$ We have $$\label{inf_counter} \begin{split} &\limsup_{n\to +\infty}\mathbb G^{\lambda'_n}(v^{\lambda_n'})\leq\limsup_{n\to +\infty} \mathbb G^{\lambda'_n}(v_{D_r})\\ &\le\limsup_{n\to +\infty}\frac 1{(\lambda'_n)^2} \int_{D_r\setminus D_2}\Psi(\lambda'_n|\nabla v_{D_r}(x)|)dx+\limsup_{n\to +\infty}\frac 1{(\lambda'_n)^2} \int_{D_2\setminus D_1}\Psi(\lambda'_n|\nabla v_{D_r}(x)|)dx\\ &\leq 2 \int_{D_r\setminus D_2}|\nabla v_{D_r}(x)|^2dx+3 \int_{D_2\setminus D_1}|\nabla v_{D_r}(x)|^2dx, \end{split}$$ where in the first line we used the minimality of $v^{\lambda'_n}$ for $\mathbb G^{\lambda'_n}$ and that $v_{D_r}$ (extended by zero in $D_1$) is an admissible function for problem [\[G\^tc1large\]](#G^tc1large){reference-type="eqref" reference="G^tc1large"}, in the second line we exploited the property that the gradient of $v_{D_r}$ in $D_1$ is vanishing, and in the third line we used [\[stime_up_down\]](#stime_up_down){reference-type="eqref" reference="stime_up_down"}, together with the bound $\Psi(E)\le 3E^2$ for every $E>0$. By setting $\ell_1(r)$ equal to the right-hand side of [\[inf_counter\]](#inf_counter){reference-type="eqref" reference="inf_counter"}: $$\ell_1(r):=2 \int_{D_r\setminus D_2}|\nabla v_{D_r}(x)|^2dx+3 \int_{D_2\setminus D_1}|\nabla v_{D_r}(x)|^2dx,$$ we have the leftmost inequality in [\[counter_resultlarge\]](#counter_resultlarge){reference-type="eqref" reference="counter_resultlarge"}.
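The stated properties of $v_{D_r}$ can be verified numerically (a sanity check, not part of the proof; $r=10$, the finite-difference step and the sampling points are arbitrary test choices): the boundary data, the harmonicity and the gradient bound [\[stima_nablaw\]](#stima_nablaw){reference-type="eqref" reference="stima_nablaw"} all check out on sampled points.

```python
import math, random

r = 10.0
gamma = 7 + 12 / r**2
c = (7 * r**2 + 12) / (r**2 - 1)

def v(x1, x2):                      # candidate solution v_{D_r}
    return c * (1 - 1 / (x1**2 + x2**2)) * x1

# boundary data: v = gamma*x1 on |x| = r and v = 0 on |x| = 1
for k in range(200):
    t = 2 * math.pi * k / 200
    assert abs(v(r * math.cos(t), r * math.sin(t)) - gamma * r * math.cos(t)) < 1e-9
    assert abs(v(math.cos(t), math.sin(t))) < 1e-9

h = 1e-4
def lap(x1, x2):                    # 5-point finite-difference Laplacian
    return (v(x1 + h, x2) + v(x1 - h, x2) + v(x1, x2 + h) + v(x1, x2 - h)
            - 4 * v(x1, x2)) / h**2

def grad_norm(x1, x2):              # centred finite-difference gradient norm
    return math.hypot((v(x1 + h, x2) - v(x1 - h, x2)) / (2 * h),
                      (v(x1, x2 + h) - v(x1, x2 - h)) / (2 * h))

random.seed(0)
for _ in range(200):                # harmonicity at random interior points
    rho, t = random.uniform(1.5, r - 0.5), random.uniform(0, 2 * math.pi)
    assert abs(lap(rho * math.cos(t), rho * math.sin(t))) < 1e-3
for _ in range(200):                # gradient bound 1 <= |grad v| <= 10
    rho, t = random.uniform(2.0, r), random.uniform(0, 2 * math.pi)
    g = grad_norm(rho * math.cos(t), rho * math.sin(t))
    assert 1 <= g <= 10 + 1e-6
print("v_{D_r}: boundary data, harmonicity and gradient bound verified")
```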
To obtain the rightmost inequality in [\[counter_resultlarge\]](#counter_resultlarge){reference-type="eqref" reference="counter_resultlarge"}, we consider the following problems $$\label{AuxF} \min_{\substack{v\in H^{1}(D_r\setminus\overline D_1)\\ v=\gamma x_1\ \text{on}\ \partial D_r}} \mathbb H^\lambda (v),\quad\mathbb H^\lambda(v)= \frac{1}{\lambda^2}\int_{D_r\setminus D_2}\Psi(\lambda|\nabla v(x)|)dx+2 \int_{D_2\setminus D_1}|\nabla v(x)|^2dx,$$ $$\label{problem_up} \min_{\substack{v\in H^{1}(D_r\setminus\overline D_1)\\ v=\gamma x_1\ \text{on}\ \partial D_r}}\mathbb D(v),\quad \mathbb D (v) =3\int_{D_r\setminus D_2}|\nabla v(x)|^2dx+2\int_{D_2\setminus D_1}|\nabla v(x)|^2dx.$$ The unique solution of [\[problem_up\]](#problem_up){reference-type="eqref" reference="problem_up"} is $$w_{D_r}(x)= \begin{cases} \left(7+\frac{12}{x_1^2+x_2^2}\right)x_1&\quad\text{in}\ D_r\setminus\overline D_2\\ 8\left(1+\frac{1}{x_1^2+x_2^2}\right)x_1 &\quad\text{in}\ D_2\setminus\overline D_1. \end{cases}$$ Analogously to [\[stima_nablaw\]](#stima_nablaw){reference-type="eqref" reference="stima_nablaw"}, it can easily be proved that $$1\le 4\le \left(7-\frac{12}{\rho^2}\right)\le|\nabla w_{D_r}(x)|\le \left(7+\frac {12}{\rho^2}\right)\le 10< L\quad\text{on}\ \partial D_{\rho},\ 2\le\rho\le r,$$ and hence, since $L>10$, we have $$\label{stime_up} \lambda_n'' \le \lambda_n''|\nabla w_{D_r}(x)| \le L \lambda''_n\quad\text{in}\ D_r\setminus D_2.$$ Therefore, we have $$\label{sup_counter} \mathbb{G}^{\lambda_n''}(v^{\lambda_n''})\geq\mathbb H^{\lambda''_n}(w_{D_r})=\mathbb D(w_{D_r}),$$ where the inequality comes from the definition of $\Psi$ and the minimality of $w_{D_r}$ for [\[AuxF\]](#AuxF){reference-type="eqref" reference="AuxF"}, proved below. The equality follows from the fact that $\mathbb H^{\lambda''_n}$ coincides with $\mathbb D$ at $w_{D_r}$ by [\[stime_up\]](#stime_up){reference-type="eqref" reference="stime_up"} and the definition of $\Psi$.
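Both the structural properties of $w_{D_r}$ (continuity at $\rho=2$, the boundary datum, the natural condition at $\rho=1$, and the transmission condition $3\,f_{\mathrm{out}}'(2)=2\,f_{\mathrm{in}}'(2)$ of the Euler-Lagrange equation of $\mathbb D$) and the comparison $\ell_1(r)<\ell_2(r)$ can be checked numerically (a sketch with the test value $r=10$; the quadrature grid is an arbitrary choice). Writing $u=f(\rho)\cos\theta$, the Dirichlet energy reduces to $\pi\int (f'^2+f^2/\rho^2)\,\rho\,d\rho$:

```python
import math

r = 10.0
gamma = 7 + 12 / r**2

# radial profiles: v_{D_r} and the two pieces of w_{D_r}, all times cos(theta)
c = (7 * r**2 + 12) / (r**2 - 1)
vf,  vfp  = lambda rho: c * (rho - 1 / rho), lambda rho: c * (1 + 1 / rho**2)
wof, wofp = lambda rho: 7 * rho + 12 / rho,  lambda rho: 7 - 12 / rho**2
wif, wifp = lambda rho: 8 * (rho + 1 / rho), lambda rho: 8 * (1 - 1 / rho**2)

assert abs(wof(2) - wif(2)) < 1e-12            # continuity at rho = 2 (both 20)
assert abs(wof(r) - gamma * r) < 1e-12         # Dirichlet datum at rho = r
assert abs(wifp(1)) < 1e-12                    # natural condition at rho = 1
assert abs(3 * wofp(2) - 2 * wifp(2)) < 1e-12  # transmission: 3*4 = 2*6

# Dirichlet energy of f(rho)cos(theta): pi * int_a^b (f'^2 + (f/rho)^2) rho drho
def energy(f, fp, a, b, n=20000):
    h = (b - a) / n
    s = 0.0
    for k in range(n + 1):
        rho = a + k * h
        wgt = 0.5 if k in (0, n) else 1.0      # trapezoidal rule
        s += wgt * (fp(rho)**2 + (f(rho) / rho)**2) * rho
    return math.pi * s * h

ell1 = 2 * energy(vf, vfp, 2, r) + 3 * energy(vf, vfp, 1, 2)
ell2 = 3 * energy(wof, wofp, 2, r) + 2 * energy(wif, wifp, 1, 2)
assert ell1 < ell2       # the comparison already holds at r = 10 in this check
print(round(ell1), round(ell2))
```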
We highlight that $w_{D_r}$ is a local minimizer in $W^{1,\infty}(D_r)\cap W^{1,2}(D_r)$, since $\Psi$ does not depend on $x$ (see [@cianchi2010global] for details). Finally, $w_{D_r}$ is a global minimizer thanks to the uniqueness of the solution of [\[AuxF\]](#AuxF){reference-type="eqref" reference="AuxF"}. By setting $$\ell_2(r):= \mathbb D(w_{D_r}),$$ we have the rightmost inequality in [\[counter_resultlarge\]](#counter_resultlarge){reference-type="eqref" reference="counter_resultlarge"} by passing to the limit in [\[sup_counter\]](#sup_counter){reference-type="eqref" reference="sup_counter"}. At this stage, it only remains to be proved that $\ell_1(r)<\ell_2(r)$. To this end, we notice that: $$\begin{split} \ell_1(r)&= 2 \int_{D_r\setminus D_2}|\nabla v_{D_r}(x)|^2dx+3 \int_{D_2\setminus D_1}|\nabla v_{D_r}(x)|^2dx\\ \ell_2(r)&= 3\int_{D_r\setminus D_2}|\nabla w_{D_r}(x)|^2dx+2\int_{D_2\setminus D_1}|\nabla w_{D_r}(x)|^2dx. \end{split}$$ Condition $\ell_1(r)<\ell_2(r)$ holds for large $r$, by observing that (i) $v_{D_r}$ and $w_{D_r}$ solve the same associated Euler-Lagrange equation on $D_2\setminus\overline D_1$, (ii) $\nabla v_{D_r}$ and $\nabla w_{D_r}$ are bounded on the bounded domain $D_2\setminus D_1$, by their explicit expressions, and (iii) it turns out that $$\begin{split} &\lim_{r\to+\infty}\frac{ \int_{ D_r\setminus D_2} |\nabla v_{D_r}(x)|^2dx}{\int_{ D_r\setminus D_2} |\nabla w_{D_r}(x)|^2dx}= 1,\\ &\lim_{r\to+\infty}\int_{D_r\setminus D_2}|\nabla v_{D_r}(x)|^2dx=\lim_{r\to+\infty}\int_{D_r\setminus D_2}|\nabla w_{D_r}(x)|^2dx=+\infty.
\end{split}$$ # Conclusions {#Con_sec} This study is motivated by Inverse Problems in the presence of nonlinear materials, where the treatment of nonlinear constitutive relationships is still at an early stage of development, as clearly stated in [@lam2020consistency]. We focus on Electrical Resistance Tomography, where the aim is to retrieve the electrical conductivity/resistivity of a material by means of stationary (DC) currents. Our main results prove that the original nonlinear problem can be replaced by a proper $p$-Laplace problem when the prescribed Dirichlet data are ''large''. Specifically, we prove that, in the presence of two materials with different asymptotic growth, the scalar potential in the outer region in contact with the boundary where the Dirichlet data is prescribed can be computed by (i) replacing the interior region with either a Perfect Electric Conductor or a Perfect Electric Insulator and (ii) replacing the original problem (material) in the outer region with a weighted $p$-Laplace problem. In a certain sense, the ''fingerprint'' of a weighted $p$-Laplace problem can be recognized in an arbitrary nonlinear problem. From the perspective of tomography, this is a significant result because it highlights the central role played by the weighted $p$-Laplacian in inverse problems with nonlinear materials. For $p=2$, i.e. when the material in the outer region is linear, these results constitute a powerful bridge making it possible to bring all theoretical results, imaging methods and algorithms developed for linear materials into the arena of problems with nonlinear materials. The fundamental tools to prove the convergence results are the inequalities appearing in Proposition [Proposition 5](#fund_ine_prop){reference-type="ref" reference="fund_ine_prop"}. They express the asymptotic behaviour of the Dirichlet Energy for the outer region in terms of a factorized $p$-Laplacian form.
Moreover, we prove that our assumptions are sharp by means of suitable counterexamples. Finally, it would be interesting to provide a numerical example, referring to a superconducting cable, as an application of the theoretical results proved in this paper. # Acknowledgements {#acknowledgements .unnumbered} This work has been partially supported by the MiUR-Progetto Dipartimenti di eccellenza 2018-2022 grant ''Sistemi distribuiti intelligenti'' of Dipartimento di Ingegneria Elettrica e dell'Informazione ''M. Scarano'', by the MiSE-FSC 2014-2020 grant ''SUMMa: Smart Urban Mobility Management'' and by GNAMPA of INdAM. [^1]: \ $^1$Dipartimento di Ingegneria Elettrica e dell'Informazione ''M. Scarano'', Università degli Studi di Cassino e del Lazio Meridionale, Via G. Di Biasio n. 43, 03043 Cassino (FR), Italy.\ $^2$Dipartimento di Matematica e Applicazioni ''R. Caccioppoli'', Università degli Studi di Napoli Federico II, Via Cinthia n. 26, Complesso Universitario Monte Sant'Angelo, 81026 Napoli, Italy.\ $^3$Departamento de Matemática, Facultad de Ciencias Físicas y Matemáticas, Universidad de Concepción, Avenida Esteban Iturra s/n, Bairro Universitario, Casilla 160 C, Concepción, Chile.\ $^4$Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI-48824, USA.\ Email: corbo\@unicas.it, l.faella\@unicas.it, gianpaolo.piscitelli\@unina.it *(corresponding author)*, rprakash\@udec.cl, antonello.tamburrino\@unicas.it. [^2]: $^1$In magnetostatics, it is possible to introduce a magnetic scalar potential for treating simply connected and source free regions. [^3]: $^*$ The concept of asymptotic $p$-growth and asymptotic $q$-growth is formalized in assumption (A4) and (A4'), respectively.
{ "id": "2309.15206", "title": "The p-Laplace \"Signature\" for Quasilinear Inverse Problems with Large\n Boundary Data", "authors": "A. Corbo Esposito, L. Faella, G. Piscitelli, R. Prakash, A. Tamburrino", "categories": "math.AP", "license": "http://creativecommons.org/publicdomain/zero/1.0/" }
--- abstract: | We derive in this paper Gaussian estimates for a general parabolic equation $u_{t}-\big(a(x)u_{x}\big)_x= r(x)u$ over $\mathbb{R}$. Here $a$ and $r$ are only assumed to be bounded, measurable and $\mathrm{essinf}_\mathbb{R}a>0$. We first consider a canonical equation $\nu (x) \partial_{t}p - \partial_{x }\big( \nu (x)a(x)\partial_{x}p\big)+W\partial_{x}p=0$, with $W\in \mathbb{R}$, $\nu$ bounded and $\mathrm{essinf}_\mathbb{R}\nu>0$, for which we derive Gaussian estimates for the fundamental solution: $$\forall t>0, x,y\in \mathbb{R}, \quad \displaystyle\frac{1}{Ct^{1/2}}e^{-C|T(x)-T(y)-Wt|^{2}/t} \leq P(t,x,y)\leq \frac{C}{t^{1/2}}e^{-|T(x)-T(y)-Wt|^{2}/Ct}$$ where $T$ is a corrector satisfying appropriate properties. We then show that any solution $u$ of the original equation can be divided by some generalized principal eigenfunction $\phi_\gamma$ so that $p:=u/\phi_\gamma$ satisfies a canonical equation. As a byproduct of our proof, we derive Nash type estimates, that is, Hölder continuity in $x$, for the solutions of the canonical equation. author: - "Grégoire Nadin[^1]" title: "**Gaussian estimates for general parabolic operators in dimension $1$**" --- Gaussian estimates, parabolic equation, corrector, generalized eigenfunctions, Nash type estimates. 35B40; 35K10; 35K15 # Introduction and main result ## State of the art on Gaussian estimates In this paper, we consider the equation $$\label{eq:parab}\left\{\begin{array}{l} u_{t}-\big(a(x)u_{x}\big)_x= r(x)u \quad \hbox{ for all } t>0, \ x\in \mathbb{R},\\ u(0,x)=u_{0}(x) \quad \hbox{ for all } x\in \mathbb{R},\\ \end{array}\right.$$ where $u_{0}\in L^\infty (\mathbb{R})$ is a compactly supported, nonnegative and non-null initial datum. We make the following hypotheses on the measurable functions $r$ and $a$ throughout this article: $$\label{hyp} \exists \mu>0, \quad |r(x)|\leq \mu, \quad \frac{1}{\mu}\leq a(x)\leq \mu \quad \hbox{ for a.e.
} x\in \mathbb{R}.$$ Under these hypotheses, it is well-known that one can define a fundamental solution $U(t,x,y)$ of ([\[eq:parab\]](#eq:parab){reference-type="ref" reference="eq:parab"}), associated with the initial datum $\delta_y$. Of course a drift term $b(x)u_x$ could also be addressed, with $b$ bounded over $\mathbb{R}$. In that case, one uses the change of variables $v(t,x):=u(t,x)e^{\int_0^x \frac{b}{2a}}$ in order to reduce to an equation with no drift like ([\[eq:parab\]](#eq:parab){reference-type="ref" reference="eq:parab"}). When $r\equiv 0$ and $a\equiv 1$, it is well-known that $U(t,x,y)\equiv \frac{1}{\sqrt{4\pi t}}e^{-|x-y|^{2}/4t}$. When $r\equiv 0$ and $a$ depends on $x$, it was proved by Aronson [@Aronson] using the Harnack inequality and by Fabes and Stroock [@FabesStroock] that there exists a constant $C>0$, which only depends on $\mu$, such that $$\frac{1}{Ct^{1/2}} e^{-C|x-y|^{2}/t} \leq U(t,x,y)\leq \frac{C}{t^{1/2}} e^{-|x-y|^{2}/Ct}.$$ Fabes and Stroock provided a direct proof of this estimate relying on methods developed by Nash [@Nash] to investigate the Hölder continuity of the solutions. Several generalizations of this result have been provided for bounded domains with Dirichlet [@ZhangDir], Neumann [@Wang] or general [@Arendt; @Daners] boundary conditions. We also refer to [@Kumagai] for Gaussian estimates on graphs, and to [@Grigor'yan97; @Grigor'yan06] for Gaussian estimates on general manifolds. When $r\not\equiv 0$, but $r$ decays like $r(x)\simeq \frac{a}{1+|x|^b}$ at infinity, Zhang [@Zhang] derived Gaussian estimates (in the more general framework of Riemannian manifolds). If we do not impose a decay at infinity on $r$, then an exponential growth rate is expected in the estimate.
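In the model case $a\equiv 1$, $r\equiv 0$, two structural identities behind such Gaussian bounds, conservation of mass $\int_{\mathbb{R}}U(t,x,y)\,dx=1$ and the semigroup (Chapman-Kolmogorov) property $U(t+s,x,y)=\int_{\mathbb{R}}U(t,x,z)U(s,z,y)\,dz$, can be checked directly on the explicit kernel. A minimal numerical sketch (our own illustration; the grid and tolerances are ad hoc choices):

```python
import numpy as np

def U(t, x, y):
    """Heat kernel for a == 1, r == 0: exp(-|x-y|^2/(4t)) / sqrt(4 pi t)."""
    return np.exp(-(x - y)**2 / (4.0*t)) / np.sqrt(4.0*np.pi*t)

z = np.linspace(-40.0, 40.0, 200001)      # quadrature grid on a truncated line
dz = z[1] - z[0]
t, s, x, y = 0.7, 1.3, 0.5, -1.0

# Conservation of mass: the kernel integrates to 1 in x.
mass = np.sum(U(t, z, y)) * dz
assert abs(mass - 1.0) < 1e-8

# Chapman-Kolmogorov: U(t+s, x, y) = int U(t, x, z) U(s, z, y) dz.
lhs = U(t + s, x, y)
rhs = np.sum(U(t, x, z) * U(s, z, y)) * dz
assert abs(lhs - rhs) < 1e-8
print("mass:", round(mass, 6), "semigroup gap:", abs(lhs - rhs))
```

The truncation at $|z|=40$ is harmless here since the Gaussian tails are far below machine precision at that distance for the chosen times.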
Of course we could always easily derive from [@Aronson; @FabesStroock] the estimate $$\frac{1}{Ct^{1/2}} e^{-C|x-y|^{2}/t- \|r\|_\infty t} \leq U(t,x,y)\leq \frac{C}{t^{1/2}} e^{-|x-y|^{2}/Ct+ \|r\|_\infty t}.$$ The aim of the present paper is to obtain more accurate estimates that encapsulate more precisely the exponential growth rate created by $r$. In order to be more precise, it is convenient to investigate the canonical form of equation ([\[eq:parab\]](#eq:parab){reference-type="ref" reference="eq:parab"}), namely $$\label{eq:canonical}\left\{ \begin{array}{ll} \nu (x) \partial_{t}p - \partial_{x }\big( \nu (x)a(x)\partial_{x}p\big)+W\partial_{x}p=0 &\hbox{ for all } t\in (0,\infty), x\in \mathbb{R},\\ p(0,x) = p_0(x) &\hbox{ for all } x\in \mathbb{R},\\ \end{array}\right.$$ where $W$ is a constant and $\nu \in L^\infty (\mathbb{R})$, $\mathrm{essinf}_\mathbb{R}\nu>0$. Such a canonical form arises when one divides the solution of ([\[eq:parab\]](#eq:parab){reference-type="ref" reference="eq:parab"}) by some appropriate eigenfunction (see Section [6.1](#sec:UP){reference-type="ref" reference="sec:UP"} below), that is: $u(t,x)= \phi_\gamma (x) e^{\gamma t} p(t,x)$. If $\nu -1$ admits a bounded primitive, then Norris [@Norris] proved that there exists a positive constant $C>0$ such that the fundamental solution $P(t,x,y)$ associated with equation ([\[eq:canonical\]](#eq:canonical){reference-type="ref" reference="eq:canonical"}) with initial datum $P(0,\cdot,y)=\delta_y$ satisfies $$\label{eq:Norris} \frac{1}{Ct^{1/2}} e^{-C|x-Wt-y|^{2}/t} \leq P(t,x,y)\leq \frac{C}{t^{1/2}} e^{-|x-Wt-y|^{2}/Ct}$$ and thus $$\frac{\phi_\gamma (x)}{Ct^{1/2}\phi_\gamma (y)} e^{-C|x-Wt-y|^{2}/t+\gamma t} \leq U(t,x,y)\leq \frac{C\phi_\gamma (x)}{t^{1/2}\phi_\gamma (y)} e^{-|x-Wt-y|^{2}/Ct+\gamma t}.$$ This is the type of estimates we want to derive when $r\not \equiv 0$, for a general dependence of $a$ and $r$ with respect to $x$. 
The boundedness hypothesis on the primitive of $\nu-1$ is satisfied in particular when $a$ and $r$ are periodic, and in that case Norris even obtains more accurate estimates [@Norris]. However, one can construct counterexamples for which it is no longer satisfied, for instance when $r$ is almost periodic. Lastly, let us mention a paper of Norris and Stroock [@NorrisStroock], which addresses general $a$ and $r$. They obtain a lower bound and an upper bound that are not of the same type. It is not clear to us how to compare their result with the one derived in the present paper. ## Statement of the result for the canonical equation {#sec:statement} We address in this section the canonical equation ([\[eq:canonical\]](#eq:canonical){reference-type="ref" reference="eq:canonical"}) and we make the following hypotheses on the measurable functions $\nu$ and $a$: $$\label{hyp:original} \exists\mu>0 \hbox{ s.t. } \quad \frac{1}{\mu}\leq \nu(x)\leq \mu, \quad \frac{1}{\mu}\leq a(x)\leq \mu \quad \hbox{ for a.e. } x\in \mathbb{R}.$$ ****Theorem** 1**. *Assume ([\[hyp:original\]](#hyp:original){reference-type="ref" reference="hyp:original"}). Let $P(t,x,y)$ be the fundamental solution of ([\[eq:canonical\]](#eq:canonical){reference-type="ref" reference="eq:canonical"}), associated with the initial datum $\delta_y$. Then there exists a constant $C>0$ (only depending on $\mu$, not on $W$) such that $$\label{eq:lbub} \forall t>0, x,y\in \mathbb{R}, \quad \displaystyle\frac{1}{Ct^{1/2}}e^{-C|T(x)-T(y)-Wt|^{2}/t} \leq P(t,x,y)\leq \frac{C}{t^{1/2}}e^{-|T(x)-T(y)-Wt|^{2}/Ct}.$$* The function $T$ is defined by the following proposition. ****Proposition** 2**. *There exists a unique solution $T$ of $$\label{eq:T} -\big( a(x)\nu(x)T'\big)'+WT'=W\nu(x) \hbox{ in } \mathbb{R}, \quad T(0)=0$$ such that $T(x)/|x|$ is bounded over $\mathbb{R}$.
Moreover, one has $m\leq T'(x)\leq M$ for all $x\in \mathbb{R}$, for $m=1/\mu^5$ and $M=\mu^5$.* Assume that $\nu$ and $a$ are $1$-periodic in $x$, and that $\int_0^1 \nu = 1$ by rescaling. Then there exists a unique periodic solution $\chi=\chi(x)$ of $$-\big( a(x)\nu(x)\chi'\big)'+W\chi'=W\big(\nu(x)-1\big) +\big( a(x)\nu(x)\big)'\hbox{ in } \mathbb{R}, \quad \chi(0)=0$$ since the right-hand side has average $0$ (such a quantity is introduced in [@HNRR2] for example). One has $T(x)=\chi(x)+x$, with $\chi$ bounded. Hence, even if it means increasing $C$, one can recover Norris' estimate ([\[eq:Norris\]](#eq:Norris){reference-type="ref" reference="eq:Norris"}). For more general dependences in $x$, say $\nu$ and $a$ almost periodic for example, $\nu$ having average $1$, one can still define $\chi(x):= T(x)-x$, but this quantity is not bounded in $x$ in general. We could identify $\chi$ as a corrector (see [@Jhikov]) and $a_{hom} := \lim _{x\rightarrow+\infty}\frac{1}{x}\int_0^x \nu (y)a(y)T'(y)^2dy$ is an effective diffusivity in the almost periodic or random stationary ergodic frameworks. We were not able to push this observation further, and hope to be able to derive more accurate estimates in these frameworks in a future work. This illustrates why we need to introduce the function $T$: it quantifies, somehow, the fluctuations created in the estimate by the heterogeneity. The proof of Theorem [**Theorem** 1](#thm:heatkernel){reference-type="ref" reference="thm:heatkernel"} follows the same steps as in [@FabesStroock] (for $W=0$) and [@Norris] (for periodic coefficients). However, we need to adapt these proofs in order to take into account the heterogeneity through the corrector $T$, and to check that this corrector satisfies appropriate properties. These are the main difficulties in this paper. It would be interesting to extend these results to multi-dimensional frameworks. The main difficulty would be the introduction of an appropriate corrector $T$.
The methods we use in our proof to derive the properties of $T$ are one-dimensional ones. We thus leave this for future works. As a byproduct of independent interest, we derive, as in [@FabesStroock], a Nash estimate for the solutions of the canonical equation. ****Theorem** 3**. *Assume ([\[hyp:original\]](#hyp:original){reference-type="ref" reference="hyp:original"}) and let $p$ be a solution of ([\[eq:canonical\]](#eq:canonical){reference-type="ref" reference="eq:canonical"}), with $p(0,\cdot)\in L^1 (\mathbb{R})$. For each $\delta\in (0,1)$, there exist $C=C(\mu,\delta)$ and $\beta=\beta(\mu,\delta)\in (0,1)$ such that for all $R>0$ and $(s,\xi)\in (R^2,\infty) \times \mathbb{R}$: $$|p(t,x)-p(t',x')|\leq C \|p\|_{L^\infty ((s-R^2,s)\times B(\xi,R))}\Big( \frac{\max \{\sqrt{|t-t'|}, |T(x)-T(x')-W(t-t')|\}}{R}\Big)^\beta$$ for all $(t,x), (t',x')\in (s,\infty)\times \mathbb{R}$, such that $|T(x)-T(\xi)-Wt|<R$, $|T(x')-T(\xi)-Wt'|<R$.* *In particular, one has (for another constant depending on $\mu$ and $\delta$ that we still denote $C$), for all $t>0$, $x,x'\in \mathbb{R}$: $$\label{eq:osc}|p(t,x)-p(t,x')|\leq \frac{C}{t^{\frac{1+\beta}{2}}}\|p(0,\cdot)\|_{L^1 (\mathbb{R})} |x-x'|^\beta.$$* We also derive the following $L^1-L^\infty$ continuity estimate (for which we do not provide a proof since it is immediate from Theorem [**Theorem** 1](#thm:heatkernel){reference-type="ref" reference="thm:heatkernel"}). ****Proposition** 4**. *Assume ([\[hyp:original\]](#hyp:original){reference-type="ref" reference="hyp:original"}) and let $p, q$ be two solutions of ([\[eq:canonical\]](#eq:canonical){reference-type="ref" reference="eq:canonical"}), with $p(0,\cdot)-q(0,\cdot)\in L^1 (\mathbb{R})$.
Then there exists a constant $C=C(\mu)$ (independent of $W$) such that for all $t>0$: $$\|p(t,\cdot)-q(t,\cdot)\|_{L^\infty (\mathbb{R})}\leq \frac{C}{\sqrt{t}}\|p(0,\cdot)-q(0,\cdot)\|_{L^1 (\mathbb{R})}.$$* ## Statement of the result for the original equation {#sec:statementoriginal} Before stating our result for the original equation, we need to introduce the eigenelements that will enable us to reduce to a canonical equation. Let $$\underline{\gamma} := \sup_{\varphi \in H^{1}(\mathbb{R})}\displaystyle \frac{\int_{\mathbb{R}}\big( r(x)\varphi^{2}-a(x)(\varphi')^{2}\big)dx}{\int_{\mathbb{R}}\varphi^{2}dx}.$$ The bounds ([\[hyp\]](#hyp){reference-type="ref" reference="hyp"}) on $a$ and $r$ ensure that $\underline{\gamma}$ is well-defined and finite. For $\gamma > \underline{\gamma}$, we know from [@Freidlin2; @Noleninv] that $$\label{eq:phigamma} \big(a(x)\phi_{x}\big)_x + (r(x) - \gamma) \phi = 0, \quad x \in \mathbb{R}, \quad \phi>0, \quad \phi (0)=1, \quad \lim_{x\rightarrow+\infty}\phi (x)=0$$ admits a unique solution $\phi=\phi_{\gamma}$. We define similarly a unique solution $\widetilde{\phi}_{\gamma}$ with $\lim_{x\rightarrow-\infty}\widetilde{\phi}_{\gamma} (x)=0$ instead of $\lim_{x\rightarrow+\infty}\phi_{\gamma}(x)=0$. The main result of this paper is the following. ****Theorem** 5**. *Assume ([\[hyp\]](#hyp){reference-type="ref" reference="hyp"}) and let $\gamma>\underline{\gamma}$. Let $U(t,x,y)$ be the fundamental solution of ([\[eq:parab\]](#eq:parab){reference-type="ref" reference="eq:parab"}), associated with the initial datum $\delta_y$.
Then there exist two constants $C>0$ and $W\in \mathbb{R}$ (depending on $\gamma$) and a function $T_\gamma=T_\gamma (x)=-W\dot{\phi}_\gamma (x)/\phi_\gamma (x)$, where $\dot{\phi}_\gamma$ is defined in Lemma [**Lemma** 21](#lem:wgamma){reference-type="ref" reference="lem:wgamma"}, such that for all $t>0, x,y\in \mathbb{R}$: $$\label{eq:lbub} \displaystyle\frac{1}{Ct^{1/2}}\frac{\phi_\gamma (x)}{\phi_\gamma (y)} e^{-C|T_\gamma(x)-T_\gamma(y)-Wt|^{2}/t+\gamma t} \leq U(t,x,y)\leq \frac{C}{t^{1/2}}\frac{\phi_\gamma (x)}{\phi_\gamma (y)} e^{-|T_\gamma(x)-T_\gamma(y)-Wt|^{2}/Ct+\gamma t}.$$* Here, $\dot{\phi}_\gamma$ denotes the derivative of $\phi_\gamma$ with respect to $\gamma$. When $a\equiv 1$ and $r\equiv 0$, one has $\phi_\gamma (x)=e^{-\sqrt{\gamma}x}$, $W=2\sqrt{\gamma}$, and $T_\gamma (x) = x$. Hence, $U(t,x,y)\equiv \frac{1}{\sqrt{4\pi t}}e^{-|x-y|^{2}/4t}$ could also be written $$U(t,x,y)\equiv \frac{1}{\sqrt{4\pi t}} e^{\sqrt{\gamma}(y-x)-\frac{1}{4t}|x-y-2\sqrt{\gamma}t|^{2}+\gamma t}= \frac{1}{\sqrt{4\pi t}}\frac{\phi_\gamma (x)}{\phi_\gamma (y)} e^{-|T_\gamma(x)-T_\gamma(y)-W t|^{2}/4t+\gamma t}$$ which is consistent with Theorem [**Theorem** 5](#thm:princ){reference-type="ref" reference="thm:princ"} (even if our result is less accurate in this case since we need to introduce a constant $C>0$). In this case the estimate does not depend on $\gamma$ after simplification. When $a$ and $r$ are periodic with respect to $x$, then one can prove that $x-T_\gamma (x)$ stays bounded with respect to $x$. Hence, we recover Norris' result [@Norris] in that case. It is not possible in general to replace $T_\gamma (x)$ by $x$. If $r=r(x,\omega)$ is random stationary ergodic with respect to $(x,\omega)$, then we expect the fluctuations of $T_\gamma (x)$ around $x$ to be of order $\sqrt{x\ln\ln x}$. We leave this particular case for a future work.
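The algebra behind this model-case factorization can be verified directly: with $\phi_\gamma(x)=e^{-\sqrt{\gamma}x}$, $W=2\sqrt{\gamma}$ and $T_\gamma(x)=x$, the exponent $\log\big(\phi_\gamma(x)/\phi_\gamma(y)\big)-|x-y-2\sqrt{\gamma}t|^2/(4t)+\gamma t$ collapses to $-|x-y|^2/(4t)$ for every choice of the parameters. A quick numerical confirmation of this bookkeeping (our own sanity check):

```python
import math, itertools

def factored_exponent(g, t, x, y):
    """log(phi(x)/phi(y)) - |x - y - 2 sqrt(g) t|^2/(4t) + g*t, phi(x) = exp(-sqrt(g) x)."""
    s = math.sqrt(g)
    return s*(y - x) - (x - y - 2.0*s*t)**2/(4.0*t) + g*t

for g, t, x, y in itertools.product([0.3, 1.0, 2.5],
                                    [0.1, 1.0, 7.0],
                                    [-2.0, 0.0, 1.5],
                                    [-1.0, 0.5, 3.0]):
    # the factored exponent must equal the heat-kernel exponent -|x-y|^2/(4t)
    assert abs(factored_exponent(g, t, x, y) + (x - y)**2/(4.0*t)) < 1e-9
print("factorization identity verified on all sample points")
```

In particular the dependence on $\gamma$ cancels exactly, which is the "estimate does not depend on $\gamma$ after simplification" remark above.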
When $r\equiv 0$, one can prove that $\underline{\gamma}=0$, that $\phi_\gamma (x)\rightarrow 1$ as $\gamma\rightarrow\underline{\gamma}$ locally in $x$ and that $T_\gamma (x)$ converges to $x$ as $\gamma\rightarrow\underline{\gamma}$. Hence, we could recover Aronson's [@Aronson] original result as well. One needs to be careful however since the constant $C$ also depends on $\gamma$. We have one degree of freedom in this estimate, which is $\gamma>\underline{\gamma}$. It would thus be tempting to try to optimize this inequality with respect to $\gamma$. But the reader should keep in mind that $C$, $W$ and $T_\gamma$ depend on $\gamma$, and that it may happen that $C\rightarrow+\infty$ as $\gamma\rightarrow\underline{\gamma}$ or $+\infty$. We were thus unable to carry out such an optimization. One should thus choose $\gamma$ depending on the type of behavior of $U$ one wants to quantify. ## Estimates for Green functions of the canonical elliptic equations {#sec:green} We now consider the Green function $G_\lambda=G_\lambda(x,y)$, solution of the canonical elliptic equation $$\label{eq:ellW} -\big( a(x)\nu (x) G_x\big)_x +W G_x+\lambda W^2 \nu (x) G = \nu (x)\delta_y \hbox{ in } \mathbb{R},$$ for some $\lambda>0$. ****Proposition** 6**. *Assume ([\[hyp:original\]](#hyp:original){reference-type="ref" reference="hyp:original"}).
Then there exists a constant $C=C(\mu)>0$ independent of $W$, such that for all $x,y\in \mathbb{R}$, $$\frac{1}{W C\sqrt{\lambda+C}} e^{-2W\sqrt{C}\big( \sqrt{\lambda +C}-\sqrt{C}\big)\big( T(x)-T(y)\big)}\leq G_\lambda(x,y)\leq \frac{C}{W \sqrt{\lambda C+1}} e^{-\frac{2W}{C}\big( \sqrt{\lambda C+1}-1\big)\big( T(x)-T(y)\big)}.$$* This immediately follows from the classical identity $G_\lambda (x,y)=\int_0^\infty e^{-\lambda W^2 t}P(t,x,y)dt$ and the following computation, available for all $a,b>0$ and $X\geq 0$: $$\int_0^\infty e^{-a t}\frac{e^{-b|X-t|^2/t}}{\sqrt{t}}dt=\sqrt{\frac{\pi}{a+b}}e^{-2\sqrt{b}(\sqrt{a+b}-\sqrt{b})X}.$$ See for example [@Bages] for a proof of this identity. # Properties of the function $T$ In this section we first prove the existence and the properties of $T$ and of an adjoint function $\widetilde{T}$. Then we introduce a flow $X$ associated with $T$ and the particular solution $f(t,x)=T(x)-Wt$ of the canonical equation, which will be a crucial tool in the proof of Theorem [**Theorem** 1](#thm:heatkernel){reference-type="ref" reference="thm:heatkernel"}. In the rest of the paper, we will assume that $W\geq 0$. The case $W\leq 0$ can be addressed with the change of variable $x\mapsto -x$. ## Proof of Proposition [**Proposition** 2](#prop:T){reference-type="ref" reference="prop:T"}. {#proof-of-proposition-propt.} *Proof of Proposition [**Proposition** 2](#prop:T){reference-type="ref" reference="prop:T"}..* We define $T_R$ as the unique solution of $$-\big( a(x)\nu(x)T_R'\big)'+WT_R'=\nu(x)W \hbox{ in } (0,R), \quad T_R(0)=0, \ T_R(R)=0.$$ If $W=0$, then $T_R\equiv 0$. If $W>0$, then one can easily prove that $T_R>0$ over $(0,R)$ and it follows that $R\mapsto T_R(x)$ is increasing for all $x>0$ (and $R>x$). Moreover, $T_R$ is uniformly bounded in $W^{2,\infty}_{loc}$ by elliptic regularity estimates. We can thus define $T(x):=\lim_{R\rightarrow+\infty} T_R(x)$.
Let $\overline{x}\in [0,R]$ such that $(a\nu T_R')(\overline{x})=\max_{[0,R]}a\nu T_R'$. We want to prove that $(a\nu T_R')(\overline{x})\leq \mu^3$. If $(a\nu T_R')(\overline{x})\leq 0$ we are done. Assume that $(a\nu T_R')(\overline{x})>0$. We recall that $\mu\geq a,\nu \geq 1/\mu$. As $T_R>0$ over $(0,R)$ and $T_R(R)=0$, one has $T_R'(R)\leq 0$ and thus $\overline{x}<R$. For $\varepsilon>0$ small enough, one thus gets $(a\nu T_R')(\overline{x}+\varepsilon)\leq (a\nu T_R')(\overline{x})$ and thus $(a\nu T_R')'(\overline{x})\leq 0$. This leads to $$WT_R'(\overline{x})\leq -\big( a \nu T_R'\big)'(\overline{x})+WT_R'(\overline{x})=\nu(\overline{x})W.$$ Hence, $T_R'(\overline{x})\leq \nu(\overline{x})\leq \mu$. We conclude that $\max_{[0,R]}a\nu T_R'\leq \mu^3$ and thus $\max_{[0,R]} T_R'\leq \mu^5$, from which $T'\leq \mu^5$ follows by letting $R\rightarrow+\infty$. The inequality $T'\geq 1/\mu^5$ is proved similarly. Assume now that $S$ is another solution of ([\[eq:T\]](#eq:T){reference-type="ref" reference="eq:T"}) such that $S(x)/|x|$ is bounded. Let $R=S-T$. One has $$-\big( a(x)\nu(x)R'\big)'+WR'=0 \hbox{ in } \mathbb{R}, \quad R(0)=0.$$ Assume by contradiction that there exists $x_0$ such that $R'(x_0)>0$. Then one gets $$(a\nu R')(x)=(a\nu R')(x_0)e^{\int_{x_0}^x W/(a\nu)}\geq (a\nu R')(x_0)e^{W/\mu^2 (x-x_0)}.$$ This is a contradiction since $R'$ would then grow exponentially, contradicting the boundedness of $R(x)/|x|$. Hence $R'\leq 0$. One gets $R'\geq 0$ similarly, and thus $R$ is constant, equal to $0$ since $R(0)=0$. This shows uniqueness: $S\equiv T$. ◻ We could similarly construct an adjoint solution. ****Proposition** 7**. *There exists a unique solution $\widetilde{T}$ of $$\label{eq:Ttilde} \big( a(x)\nu(x)\widetilde{T}'\big)'+W\widetilde{T}'=\nu(x)W \hbox{ in } \mathbb{R}, \quad \widetilde{T}(0)=0$$ such that $\widetilde{T}(x)/|x|$ is bounded over $\mathbb{R}$.
Moreover, one has $m\leq \widetilde{T}'(x)\leq M$ for all $x\in \mathbb{R}$, for $m=1/\mu^5$ and $M=\mu^5$.* *Proof.* We just apply the change of variables $S(x):=-\widetilde{T}(-x)$ and use Proposition [**Proposition** 2](#prop:T){reference-type="ref" reference="prop:T"}. ◻ ## Definition and properties of the flow $X(t;y)$ It will sometimes be more convenient, in particular when proving the lower bound in Theorem [**Theorem** 5](#thm:princ){reference-type="ref" reference="thm:princ"}, to use the flow $X(t;y)$ instead of the function $T$ itself: for fixed $y$ (and $W>0$), $t\mapsto X(t;y)$ is the inverse of $x\mapsto \big(T(x)-T(y)\big)/W$. Namely, let $X(t;y)$ be the unique (since $m\leq T'\leq M$) solution of $$\label{eq:defX}T\big(X(t;y)\big)-Wt=T(y)\quad \hbox{ for all } t,y\in \mathbb{R}.$$ ****Lemma** 8**. *The function $X$ satisfies the semi-group property, in the sense that for all $s>0, t>0$ and $y\in \mathbb{R}$, one has $$X\big(s;X(t;y)\big) = X(t+s;y).$$* *Proof.* One has $$T\Big(X\big(s;X(t;y)\big)\Big) -Wt-Ws=T\big(X(t;y)\big) -Wt=T(y).$$ The conclusion follows. ◻ ****Lemma** 9**. *The function $$f(t,x):= T (x)-Wt$$ is a time-global solution of $$\label{eq:f} \nu (x) \partial_{t}f - \partial_{x }\big( \nu (x)a(x)\partial_{x}f\big)+W\partial_{x}f=0 \hbox{ for all } t\in\mathbb{R}, x\in \mathbb{R}.$$ Moreover, $f(t,\cdot)-f(0,y)$ admits $X(t;y)$ as its unique root for all $t,y\in \mathbb{R}$, and one has: $$\label{eq:estg} \forall t>0, x,y\in \mathbb{R}, \quad m |x|\leq |f\big(t,x+X(t;y)\big)-f\big(0,y\big)| \leq M|x|,$$ and $W/M\leq X'\leq W/m$ over $\mathbb{R}$.* *Proof.* One easily verifies that $f$ satisfies ([\[eq:f\]](#eq:f){reference-type="ref" reference="eq:f"}) and that $X(t;y)$ is the unique root of $f(t,\cdot)-f(0,y)$. Estimate ([\[eq:estg\]](#eq:estg){reference-type="ref" reference="eq:estg"}) follows from $m\leq T'\leq M$. By translating the origin one can assume that $y=0$.
As $f\big( t,X(t;0)\big)=0$, one has $f_{t}+X'(t;0)f_{x}=0$ at $(t,X(t;0))$ and thus, as $f_{t}\equiv -W$ and $m\leq f_{x}(t,X(t;0))\leq M$, one has $W/M\leq X'(t;0)\leq W/m$. ◻ Similarly, let $Y(t;y)$ be the unique solution of $\widetilde{T}(Y(t;y))-Wt=\widetilde{T}(y)$ for all $t,y\in \mathbb{R}$. One can prove that $$g(t,x):= \widetilde{T}(x)-Wt$$ is a time-global solution of $$\label{eq:g}\left\{ \begin{array}{l} -\nu (x) \partial_{t}g - \partial_{x }\big( \nu (x)a(x)\partial_{x}g\big)-W\partial_{x}g=0 \hbox{ for all } t\in\mathbb{R}, x\in \mathbb{R},\\ g(t,Y(t;0))=0 \quad \hbox{ for all } t\in \mathbb{R}.\\ \end{array} \right.$$ Moreover, $$\forall t>0, x,y\in \mathbb{R}, \quad m |y|\leq |g\big(t,y+Y(t;x)\big)-g\big(0,x\big)| \leq M|y|.$$ ****Lemma** 10**. *There exists a constant $\tau=4\mu^7 /W$ such that $$\label{eq:compfg} |T-\widetilde{T}|\leq \tau.$$ Hence, $$\label{eq:compXY} |X-Y|\leq 2\tau/m.$$* *Proof.* Let $R:= T-\widetilde{T}$. One has $$\big( a\nu(T+\widetilde{T})'\big)'=W R'\hbox{ in } \mathbb{R}, \quad R(0)=0.$$ Integrating, one gets $$W R(x)=\big(a\nu(T+\widetilde{T})'\big)(x)-\big(a\nu(T+\widetilde{T})'\big)(0) \leq 2\mu^{2}M\leq 4\mu^{7}.$$ On the other hand, $W R\geq -2\mu^{2}M\geq -4\mu^{7}$. Next, as $T\big(X(t;y)\big)-\widetilde{T}\big(Y(t;y)\big)=T(y)-\widetilde{T}(y)$, one has $$m| X(t;y)-Y(t;y)|\leq |T\big(X(t;y)\big)-T\big(Y(t;y)\big)|\leq |(T-\widetilde{T})(y)|+|(T-\widetilde{T})\big(Y(t;y)\big)|\leq 2\tau.$$ ◻ Lastly, we have the following technical inequality. ****Lemma** 11**.
*There exists a constant $C=C(\mu)>0$ (independent of $W$) such that for all $t>0$, $x,y\in \mathbb{R}$: $$|f(t,x)-f(0,y)|\leq C|g(t,x)-g(0,y)|+C\sqrt{t}.$$* *Proof.* One computes $$\begin{array}{rcl} |f\big(t,x\big)-f(0,y)|&\leq&M|x-X(t;y)|\\ &\leq & M|x-Y(t;y)|+M|Y(t;y)-X(t;y)|\\ &\leq & \frac{M}{m}|g(t,x)-g(0,y)| +M|Y(t;y)-X(t;y)|.\\ \end{array}$$ Now, ([\[eq:compXY\]](#eq:compXY){reference-type="ref" reference="eq:compXY"}) yields $|X-Y|\leq 2\tau/m$ and Lemma [**Lemma** 9](#lem:f){reference-type="ref" reference="lem:f"} gives $|X(t;y)-Y(t;y)|\leq 2Wt/m$ for all $t>0$ and $y\in \mathbb{R}$. Hence, $|X(t;y)-Y(t;y)|\leq \frac{2}{m}\sqrt{W t\tau}$ for all $t>0$ and $y\in \mathbb{R}$. The conclusion follows since $W\tau=4\mu^7$ does not depend on $W$. ◻ # The upper bound We define for all $\alpha\in \mathbb{R}$: $$G_{\alpha}(t,x):=e^{\alpha g(t,x)}.$$ Easy computations yield $$-\nu (x) \partial_{t}G_{\alpha} - \partial_{x }\big( \nu (x)a(x)\partial_{x}G_{\alpha}\big)-W\partial_{x}G_{\alpha}= -\alpha^{2}\nu (x) a(x)(\partial_{x}g)^{2}G_{\alpha} \hbox{ for all } t\in\mathbb{R}, x\in \mathbb{R}.$$ *Proof of the upper bound in Theorem [**Theorem** 1](#thm:heatkernel){reference-type="ref" reference="thm:heatkernel"}.* Take $\alpha \in \mathbb{R}$ and let $V_{k}(t):= \int_{\mathbb{R}}\nu (x)G_{\alpha k}(t,x) p^{k}(t,x)dx$.
Easy computations yield $$\begin{array}{rl}V_{2k}'(t) =& 4\alpha^{2} k^2\int_{\mathbb{R}}\nu (x)a(x)g_{x}^{2}(t,x)G_{2\alpha k}(t,x) p^{2k}(t,x)dx\\ & - 2k(2k-1) \int_{\mathbb{R}}\nu (x)a(x)G_{2\alpha k}(t,x) p_{x}^{2}(t,x)p^{2k-2}(t,x)dx.\end{array}$$ Let $\Psi:=G_{\alpha k}(t,x) p^{k}(t,x)$, so that $\Psi_x=\alpha k g_x G_{\alpha k} p^{k}+ k G_{\alpha k}p_x p^{k-1}$ and thus $$G_{2\alpha k}(p_x)^2 p^{2k-2}=\Big(G_{\alpha k}p_x p^{k-1}\Big)^2=\frac{1}{k^2}\Big( \Psi_x-\alpha k g_x G_{\alpha k} p^{k} \Big)^2.$$ We get $$\begin{array}{rcl}V_{2k}'(t) &=& 4\alpha^{2} k^2\int_{\mathbb{R}}\nu a g_{x}^{2}G_{2\alpha k}p^{2k} - \frac{2(2k-1)}{k} \int_{\mathbb{R}}\nu a\Big( \Psi_x^2-2\alpha k g_x G_{\alpha k} p^{k} \Psi_x+\alpha^2 k^2 g_x^2 G_{2\alpha k} p^{2k}\Big)\\ &\leq & 16\alpha^{2} k^2\int_{\mathbb{R}}\nu a g_{x}^{2}G_{2\alpha k}p^{2k} - \frac{(2k-1)}{k} \int_{\mathbb{R}}\nu a \Psi_x^2. \\ \end{array}$$ Next, the Nash inequality applied to $\Psi$ yields that there exists a constant $C>0$ such that: $$\label{eq:V2k} V_{2k}'(t)\leq 16\alpha^{2}k^2 \|a\|_\infty M^{2}V_{2k}(t) - 2CV_{2k}^{3}(t)/V_{k}^{4}(t).$$ We now use the same arguments as in Section 1 of [@FabesStroock]. Namely, let $p_{j}:=2^{j}$, $U_{j}(t):= \| G_{\alpha}(t,\cdot)p(t,\cdot)\|_{L^{p_{j}}(\nu (x) dx)}$ and $W_{j}(t):= \max \{ s^{(1/4-1/2p_{j})}U_{j}(s), \ 0\leq s\leq t\}$.
Then $$U_{j}'(t)\leq 8 \times 2^{j} \alpha^{2}M^{2} \|a\|_\infty U_{j}(t)-\frac{C}{2^{j}}\Big(\frac{t^{1/4-1/2^{j}}}{W_{j-1}(t)}\Big)^{2^{j+1}}U_{j}(t)^{1+2^{j+1}}$$ and thus one derives from Lemma 1.4 of [@FabesStroock] that there exists a constant, that we still denote $C$, such that $$W_{j}(t)\leq W_{j-1}(t) (4^{j}C)^{1/2^{j+1}}e^{C\alpha^{2}t/2^{j}}.$$ We thus conclude that, even if it means increasing $C$, $$\sup_{j}W_{j}(t)\leq C e^{C\alpha^{2}t} W_{1}(t).$$ Hence, $$U_{j}(t)=\| G_{\alpha}(t,\cdot)p(t,\cdot)\|_{L^{p_{j}}(\nu (x) dx)} \leq \frac{C}{t^{1/4-1/2^{j+1}}} e^{C\alpha^{2}t} W_1(t).$$ Letting $j\rightarrow+\infty$, it follows that $$\| G_{\alpha}(t,\cdot)p(t,\cdot)\|_{\infty} \leq \frac{C}{t^{1/4}} e^{C\alpha^{2}t} W_1(t).$$ We also know that $$U_{1}'(t)\leq 16 \alpha^{2}M^{2} \|a\|_\infty U_{1}(t)$$ and thus $U_1(t)\leq U_1(0)e^{16 \alpha^{2}M^{2} \|a\|_\infty t}$. As $W_1(t)=\max \{ U_1(s), 0\leq s\leq t\}$ by definition of $W_1$, we have proved that $W_1(t)\leq U_1(0)e^{C\alpha^2 t}$ for some constant $C$ and thus, even if it means increasing $C$: $$\label{eq:Uj}\| G_{\alpha}(t,\cdot)p(t,\cdot)\|_{\infty} \leq \frac{C}{t^{1/4}} e^{C\alpha^{2}t}\| G_{\alpha}(0,\cdot)p(0,\cdot)\|_{L^{2}(\nu (x) dx)} .$$ On the other hand, one easily checks that $$0\leq V_{1}'(t) \leq 4\alpha^{2} \int_{\mathbb{R}}\nu (x)a(x)g_{x}^{2}(t,x)G_{\alpha}(t,x) p(t,x)dx\leq 4\alpha^{2}M^2 \|a\|_\infty V_1(t).$$ It follows that $V_1$ is nondecreasing and $V_{1}(t)\leq e^{4\alpha^{2}M^2\|a\|_\infty t} V_{1}(0)$.
Now, using again ([\[eq:V2k\]](#eq:V2k){reference-type="ref" reference="eq:V2k"}) with $k=1$, we get $$U_{1}'(t)\leq 16\alpha^{2}M^{2} \|a\|_\infty U_{1}(t)-\frac{C}{2}\Big(\frac{1}{U_0(t)}\Big)^{4}U_{1}(t)^{5}.$$ As $U_0\equiv V_1$ and $V_1$ is nondecreasing, Lemma 1.4 of [@FabesStroock] yields: $$\label{eq:Uj2}U_{1}(t)=\| G_{\alpha}(t,\cdot)p(t,\cdot)\|_{L^{2}(\nu (x) dx)} \leq \frac{C}{t^{1/4}} e^{C\alpha^{2}t} U_0(t)=\frac{C}{t^{1/4}} e^{C\alpha^{2}t} \| G_{\alpha}(0,\cdot)p(0,\cdot)\|_{L^{1}(\nu (x) dx)} .$$ Combining ([\[eq:Uj\]](#eq:Uj){reference-type="ref" reference="eq:Uj"}) and ([\[eq:Uj2\]](#eq:Uj2){reference-type="ref" reference="eq:Uj2"}) thanks to the semi-group property, we eventually obtain $$\| G_{\alpha}(t,\cdot)p(t,\cdot)\|_{\infty} \leq \frac{C}{t^{1/2} }e^{C\alpha^{2}t} \| G_{\alpha}(0,\cdot)p(0,\cdot)\|_{L^{1}(\nu (x) dx)},$$ that is, for all $t>0, x\in \mathbb{R}$: $$e^{\alpha g(t,x)}p(t,x)\leq \frac{C}{t^{1/2}} e^{C\alpha^{2}t} \int_{\mathbb{R}}\nu (y)e^{\alpha g(0,y)} p_{0}(y)dy.$$ In terms of the fundamental solution $P(t,x,y)$ associated with the initial datum $\delta_{y}$, this reads $$P(t,x,y)\leq \frac{C}{t^{1/2}} e^{C\alpha^{2}t-\alpha g(t,x)+\alpha g(0,y)}.$$ For any $t>0$, $x,y\in \mathbb{R}$, we now take $\alpha = \frac{g(t,x)-g(0,y)}{2Ct}$, which yields $$P(t,x,y)\leq \frac{C}{t^{1/2}} e^{ -\frac{|g(t,x)-g(0,y)|^{2}}{4Ct}}.$$ It now follows from Lemma [**Lemma** 11](#lem:compfg2){reference-type="ref" reference="lem:compfg2"} that, for a generic constant $C=C(\mu)>0$: $$P(t,x,y)\leq \frac{C}{t^{1/2}} e^{ -\frac{|f(t,x)-f(0,y)|^{2}}{4Ct}}.$$ This proves the upper bound in Theorem [**Theorem** 1](#thm:heatkernel){reference-type="ref" reference="thm:heatkernel"}. ◻ # The lower bound Define for all $t\in (0,1)$: $$\rho (t,x'):= e^{-|\widetilde{T}(x')-(t-1)W|^{2}/\sigma(2-t)},$$ with $\sigma= 4M^2$. ****Lemma** 12**.
*One has: $$-\nu (x') \partial_{t}\rho - \partial_{x' }\big( \nu (x')a(x')\partial_{x'}\rho\big)-W\partial_{x'}\rho\geq 0.$$* *Proof.* One computes: $$\begin{array}{l} -\nu (x') \partial_{t}\rho - \partial_{x' }\big( \nu (x')a(x')\partial_{x'}\rho\big)-W\partial_{x'}\rho\\ \displaystyle = \frac{\nu (x')a(x')\big(\widetilde{T}(x')-(t-1)W\big)^2}{\sigma(2-t)^2}\rho-\frac{4 a(x')\nu (x')\big(\widetilde{T}(x')-(t-1)W\big)^2 g_{x'}^2(t,x')}{\sigma^2(2-t)^2}\rho\\ \displaystyle +\frac{2\nu (x')a(x')g_{x'}^2(t,x')}{\sigma(2-t)}\rho\\ \displaystyle \geq \frac{2\nu (x')a(x')g_{x'}^2(t,x')}{\sigma(2-t)}\rho\geq 0 \quad \hbox{(since $\sigma= 4M^2$)}.\\ \end{array}$$ ◻ Let $$G_{x}(t):= \int_{\mathbb{R}}\rho(t,x')\nu (x') \ln \Big( P\big(t,x',Y(-1;x)\big)\Big)dx'.$$ Define the auxiliary functions $$Q(t):= \int_{\mathbb{R}}\rho(t,x')\nu (x')dx'$$ and $$H_{x}(t):= G_{x}(t) - Q(t)\ln \big(\overline{C}/t^{1/2}\big)= \int_{\mathbb{R}}\rho(t,x')\nu (x') \ln \big( \frac{ t^{1/2}P\big(t,x',Y(-1;x)\big)}{\overline{C}}\big)dx',$$ where $\overline{C}$ is given by the upper bound in Theorem [**Theorem** 1](#thm:heatkernel){reference-type="ref" reference="thm:heatkernel"}, which yields that $H_{x}(t)\leq 0$ for all $t>0$. ****Proposition** 13**. *For all $R>0$, there exists a positive constant $B_{R}$ such that for all $x\in (-R,R)$: $$G_{x}(1)\geq -B_{R}+Q(1) \ln \overline{C}.$$* *Proof of Proposition [**Proposition** 13](#prop:entropy){reference-type="ref" reference="prop:entropy"}..* Equivalently, we need to prove that $H_{x}(1)\geq -C$ for some positive constant $C$ depending on $R$. 
We compute using Lemma [**Lemma** 12](#lem:rho){reference-type="ref" reference="lem:rho"} (here $P$ is always considered at $\big(t,x',Y(-1;x)\big)$, $\rho$ at $(t,x')$, and $\nu$ at $x'$): $$\begin{array}{rl} H_{x}'(t)\geq &\displaystyle\int_{\mathbb{R}}\big(-(a\nu\rho_{x'})_{x'}-W\rho_{x'}\big) \ln \big( \frac{ t^{1/2}P}{\overline{C}}\big)dx'+\int_{\mathbb{R}}\frac{\rho}{2t}\nu dx'\\ &\displaystyle+\int_{\mathbb{R}}\frac{\rho}{P}\big((\nu a P_{x'})_{x'}-W P_{x'}\big) dx'.\\ \end{array}$$ We could integrate by parts and get $$\begin{array}{rl} H_{x}'(t)\geq &\displaystyle\int_{\mathbb{R}}\big(-(a\nu\rho_{x'})_{x'}-W\rho_{x'}\big) \ln \big( \frac{ t^{1/2}P}{\overline{C}}\big)dx'+\int_{\mathbb{R}}\frac{\rho}{2t}\nu dx'\\ &\displaystyle-W\int_{\mathbb{R}}\rho \frac{ P_{x'}}{P} dx'\displaystyle-\int_{\mathbb{R}}\rho_{x'}\nu a \frac{P_{x'}}{P} dx'+\int_{\mathbb{R}} \rho\nu a\frac{P_{x'}^{2}}{P^{2}}dx'\\ &\\ = &\int_{\mathbb{R}}\frac{\rho}{2t}\nu dx'+\int_{\mathbb{R}} \rho\nu a \frac{P^{2}_{x'}}{ P^{2}}dx'\displaystyle\\ &\\ \geq &\int_{\mathbb{R}} \rho\nu a \frac{P^{2}_{x'}}{ P^{2}}dx'.\\ \end{array}$$ We are left with the term involving $P_{x'}^{2}$. 
As $\nu$ and $a$ have positive infimum, we can use the spectral gap inequality after a change of variables $X=\widetilde{T}(x')-W(t-1)$, and $u(t,X):= \ln P \big(t,x',Y(-1;x)\big)$: $$\begin{array}{l} \displaystyle\int_{\mathbb{R}} \rho\nu a \frac{P^{2}_{x'}}{P^{2}}dx' \geq \frac{1}{C} \int_\mathbb{R}e^{-|\widetilde{T}(x')-W(t-1)|^{2}/C(2-t)}\frac{P^{2}_{x'}}{P^{2}}(t,x',x)dx'\\ \\ =\displaystyle \frac{1}{C}\int_\mathbb{R}e^{-|X|^{2}/C(2-t)}\frac{P^{2}_{x'}}{P^{2}}\big(t,\widetilde{T}^{-1}(X+W(t-1)),x\big)\frac{dX}{\widetilde{T}'\big( \widetilde{T}^{-1}(X+W(t-1))\big)}\\ \\ =\displaystyle \frac{1}{C}\int_\mathbb{R}e^{-|X|^{2}/C(2-t)} u_{x'}^2\big(t,X\big)\widetilde{T}'\big( \widetilde{T}^{-1}(X+W(t-1))\big)dX\\ \\ \geq \displaystyle \frac{m}{C}\int_\mathbb{R}e^{-|X|^{2}/C(2-t)} u_{x'}^2\big(t,X\big)dX \quad \hbox{ since } \widetilde{T}'\geq m\\ \\ \geq \displaystyle \frac{2m}{C(2-t)}\int_\mathbb{R}e^{-|X|^{2}/C(2-t)}\Big(u(t,X)- \widetilde{G}_{x}(t)\Big)^{2}dX \quad \hbox{ by the spectral gap inequality}\\ \\ = \displaystyle \frac{2m}{C(2-t)}\int_\mathbb{R}\rho (t,x')\Big( \ln P\big(t,x',Y(-1;x)\big)- \widetilde{G}_{x}(t)\Big)^{2} \widetilde{T}'(x')dx'\\ \\ \geq \displaystyle \frac{2m^2}{C(2-t)}\int_\mathbb{R}\rho (t,x')\Big( \ln P\big(t,x',Y(-1;x)\big)- \widetilde{G}_{x}(t)\Big)^{2} dx' \hbox{ since } \widetilde{T}'\geq m\\ \end{array}$$ where $$\begin{array}{rcl} \widetilde{G}_{x}(t)&:=& \displaystyle\frac{1}{\sqrt{C\pi}}\int_\mathbb{R}\frac{e^{-|X|^{2}/C(2-t)}}{(2-t)^{1/2}} u(t,X)dX\\ &&\\ &= & \displaystyle\frac{1}{\sqrt{C\pi (2-t)}} \int_\mathbb{R}\rho (t,x')\ln P\big(t,x',Y(-1;x)\big)\widetilde{T}'(x')dx'.\\ \end{array}$$ As $P(t,x',x) \leq \overline{C} /t^{1/2}$ by the upper bound in Theorem [**Theorem** 1](#thm:heatkernel){reference-type="ref" reference="thm:heatkernel"}, one gets, as $q\mapsto \big(\ln q - \widetilde{G}_{x}(t)\big)^{2}/q$ is decreasing on $(e^{2+\widetilde{G}_{x}(t)},\infty)$: $$\begin{array}{l} \displaystyle\int_{\mathbb{R}} \rho\nu a
\frac{P^{2}_{x}}{P^{2}}dx \geq \displaystyle \frac{2m^2}{C(2-t)}\frac{\big(\widetilde{G}_{x}(t)-\ln (\overline{C} /t^{1/2})\big)^{2}}{\overline{C} /t^{1/2}}\int_{P\big(t,x',Y(-1;x)\big)\geq e^{2+\widetilde{G}_{x}(t)}}\rho (t,x') P(t,x',x)dx'\\ \\ \geq \frac{1}{C} \big( \widetilde{G}_{x}(t)-\ln (\overline{C} /t^{1/2})\big)^{2}\int_{P\big(t,x',Y(-1;x)\big)\geq e^{2+\widetilde{G}_{x}(t)}}\nu(x') \rho(t,x')P(t,x',x)dx'\\ \end{array}$$ for all $t\in [1/2,1]$, for some new constant depending on $\mu$, which we still denote by $C$. We now notice that $$\begin{array}{rcl} \widetilde{G}_{x}(t)-\ln (\overline{C} /t^{1/2})&=& \displaystyle\frac{1}{\sqrt{C\pi (2-t)}} \int_{\mathbb{R}}\rho(t,x')\ln ( \displaystyle\frac{ t^{1/2}P\big(t,x',Y(-1;x)\big)}{\overline{C}}) \widetilde{T}'(x')dx'\\ &&\\ &\leq & C \displaystyle \int_{\mathbb{R}}\nu (x')\rho(t,x')\ln ( \frac{ t^{1/2}P\big(t,x',Y(-1;x)\big)}{\overline{C}})dx'\\ &&\\ &=& CH_{x}(t)\leq 0,\\ \end{array}$$ for some generic constant $C>0$. Hence, $\big( \widetilde{G}_{x}(t)-\ln (\overline{C} /t^{1/2})\big)^{2}\geq \big(CH_{x}(t)\big)^{2}$. On the other hand, we know from the upper bound of Theorem [**Theorem** 1](#thm:heatkernel){reference-type="ref" reference="thm:heatkernel"} that there exists $A>0$ such that $$\int_{|\widetilde{T}(x')-W(t-1)|> A}\nu (x') P\big(t,x',Y(-1;x)\big)dx' \leq 1/2 \quad \hbox{ for all } (t,x)\in [1/2,1]\times (-R,R).$$ Hence, as $\int_{\mathbb{R}} \nu (x')P\big(t,x',Y(-1;x)\big)dx' =\int_{\mathbb{R}} \nu (x')P\big(0,x',Y(-1;x)\big)dx'=1$ since $P(0,\cdot,Y(-1;x))=\delta_{Y(-1;x)}$, one gets $\int_{|\widetilde{T}(x')-W(t-1)|\leq A} \nu (x')P\big(t,x',Y(-1;x)\big)dx' \geq 1/2$ for all $(t,x)\in [1/2,1]\times (-R,R)$.
This yields for all $t\in [1/2,1]$: $$\begin{array}{l} \int_{P(t,x',Y(-1;x))\geq e^{2+\widetilde{G}_{x}(t)}}\nu(x') \rho(t,x') P\big(t,x',Y(-1;x)\big)dx'\\ \\ \geq \int_{\mathbb{R}}\nu \rho P- e^{2+\widetilde{G}_{x}(t)}\int_{\mathbb{R}}\nu \rho \\ \\ \geq e^{-A^{2}/\sigma}\int_{|\widetilde{T}(x')-W(t-1)|\leq A}\nu P- Ce^{2+\widetilde{G}_{x}(t)}\\ \\ \geq\displaystyle \frac{1}{2}e^{-A^{2}/\sigma}- Ce^{2+CH_{x}(t)+\ln (\overline{C} /t^{1/2})}\\ \\ =\displaystyle\frac{1}{2} e^{-A^{2}/\sigma}- C e^{2+CH_{x}(t)}\\ \end{array}$$ for some generic constant $C>0$ depending on $\mu$, where we have used that $t\mapsto \int_{\mathbb{R}}\nu (x') \rho (t,x') dx'$ is bounded with respect to $t$. We conclude that there exists a constant $C>0$ such that $$\displaystyle\int_{\mathbb{R}} \rho\nu \frac{P^{2}_{x}}{P^{2}}dx\geq \Big(\frac{1}{2} e^{-A^{2}/\sigma}- C e^{2+CH_{x}(t)}\Big) \big( H_{x}(t)\big)^{2}.$$ It follows that for all $t\in [1/2,1]$: $$H_{x}'(t)\geq \Big( \frac{1}{2}e^{-A^{2}/\sigma}- C e^{2+CH_{x}(t)}\Big) \big( H_{x}(t)\big)^{2}.$$ Assume that $C H_x(1)< -\frac{A^2}{\sigma}- \ln (4C)-2$. Then if there exists $t_0\in [1/2,1]$ such that $C H_x(t_0)= -\frac{A^2}{\sigma}- \ln (4C)-2$, one would get for all $t\in [t_0,1]$: $$H_{x}'(t)\geq \frac{1}{4}e^{-A^{2}/\sigma} \big( H_{x}(t)\big)^{2}>0,$$ and it would follow that $H_x$ is increasing on $(t_0,1)$, contradicting $C H_x(1)< -\frac{A^2}{\sigma}- \ln (4C)-2$. We have thus proved that $C H_x(t)> -\frac{A^2}{\sigma}- \ln (4C)-2$ for all $t\in [1/2,1]$, from which it follows that $$H_{x}'(t)\geq \frac{1}{4}e^{-A^{2}/\sigma} \big( H_{x}(t)\big)^{2}>0 \quad \hbox{ on } [1/2,1].$$ Integrating over $(1/2,1)$ and using $H_x\leq 0$, this gives $H_x(1)\geq -8 e^{A^{2}/\sigma}$. We have thus proved that $$H_x(1)\geq \min\big\{ -\frac{A^2}{C\sigma}- \frac{\ln (4C)}{C}-\frac{2}{C}, -8 e^{A^{2}/\sigma}\big\}.$$ ◻ ****Proposition** 14**.
*For all $r>0$ large enough, there exists a constant $C$, which only depends on $\mu$, such that for all $t>0$ and $x,y\in \mathbb{R}$ such that $$|T(x)-T(y)-Wt|\leq r \sqrt{t},$$ one has $$P\big(t,x,y\big)\geq \frac{1}{C\sqrt{t}}.$$* *Proof of Proposition [**Proposition** 14](#prop:lowbound){reference-type="ref" reference="prop:lowbound"}.* First of all, adapting [@FabesStroock], a translation and scaling argument yields that it is enough to show that there exists a constant $C>0$, which only depends on $\mu$, not on $W$, such that for all $x,y\in (-R,R)$: $$\label{eq:scaling} P\big(2,X(1;x),Y(-1;y)\big)\geq \frac{1 }{C}.$$ Let us first show that ([\[eq:scaling\]](#eq:scaling){reference-type="ref" reference="eq:scaling"}) implies the proposition. Assume then that ([\[eq:scaling\]](#eq:scaling){reference-type="ref" reference="eq:scaling"}) holds with a constant $C=C(\mu)>0$. Take $\sigma>0$, $z\in \mathbb{R}$, and let $P_\sigma^z (t,x',y'):= \sigma P\big(\sigma^2 t,\sigma (x'+z), \sigma (y'+z)\big)$. Then $P^z_\sigma$ satisfies $$\nu (\sigma (x'+z)) \partial_{t}P_\sigma^z - \partial_{x' }\big( \nu (\sigma (x'+z)) a(\sigma (x'+z)) \partial_{x'}P^z_\sigma \big)+\sigma W\partial_{x'}P^z_\sigma =0 \hbox{ for all } t\in (0,\infty), x'\in \mathbb{R}.$$ It follows from ([\[eq:scaling\]](#eq:scaling){reference-type="ref" reference="eq:scaling"}) that, as the constant $C>0$ does not depend on $W$, one has for all $x',y'\in (-R,R)$: $$\label{eq:Psigma}P^z_\sigma \big(2,X^z_\sigma (1,x'),Y^z_\sigma (-1,y')\big)\geq \frac{1}{C},$$ where $X^z_\sigma$ is the unique solution of $$T^z_\sigma \big( X^z_\sigma (t,x')\big) -\sigma W t = T^z_\sigma (x')$$ and $T^z_\sigma$ is the unique solution $T$ of $$-\big( \nu (\sigma (x'+z))a(\sigma (x'+z))T'\big)'+ \sigma W T'=\sigma W \nu (\sigma (x'+z))\hbox{ in } \mathbb{R}, \quad T(-z)=-z$$ such that $x'\mapsto T(x')/x'$ is bounded over $\mathbb{R}$ (see Corollary [**Corollary** 22](#lem:Tgamma){reference-type="ref" reference="lem:Tgamma"}). Hence, by uniqueness one has $T^z_\sigma (x') = T (\sigma (x'+z))/ \sigma-z$.
It follows that $X^z_\sigma (t,x')=X (\sigma^2 t , \sigma (x'+z))/\sigma-z$. Similarly, $Y^z_\sigma (t,x')=Y (\sigma^2 t , \sigma (x'+z))/\sigma-z$. Hence, we get from ([\[eq:Psigma\]](#eq:Psigma){reference-type="ref" reference="eq:Psigma"}): $$\label{eq:Psigma2}P \big(2\sigma^2,X (\sigma^2 , \sigma (x'+z)),Y(-\sigma^2 ,\sigma (y'+z))\big)\geq \frac{1}{C\sigma}$$ for all $x',y'\in (-R,R)$, $\sigma>0$ and $z\in \mathbb{R}$. Let $t>0$ and $x,y\in \mathbb{R}$ such that $$|T(x)-T(y)-Wt|\leq r \sqrt{t}.$$ Let $u, v\in \mathbb{R}$ be such that $x=X(t/2,\sqrt{t/2}u)$ and $y=Y(-t/2,\sqrt{t/2}v)$. By definition of $X$ and $Y$, one has $$\begin{array}{rcl} m\sqrt{t/2}|u-v|&\leq & |T(\sqrt{t/2}u)-T(\sqrt{t/2}v)| \\ &&\\ &\leq & |T(\sqrt{t/2}u)-\widetilde{T}(\sqrt{t/2}v)|+|T(\sqrt{t/2}v)-\widetilde{T}(\sqrt{t/2}v)| \\ &&\\ &=&|T(x)-\widetilde{T}(y)-Wt|+|T(\sqrt{t/2}v)-\widetilde{T}(\sqrt{t/2}v)| \quad \hbox{ by def. of } X \hbox{ and } Y\\ &&\\ &\leq& (r+\sqrt{2}M)\sqrt{t} \quad \hbox{ since } |T'|\leq M \hbox{ and } |\widetilde{T}'|\leq M.\\ \end{array}$$ Hence, $|u-v|\leq (r+\sqrt{2}M)\sqrt{2}/m$. We now take $\sigma =\sqrt{t/2}$, $z=\frac{1}{2}(u+v)$, $x'=\frac{1}{2}(u-v)$, $y'=\frac{1}{2}(v-u)$, and $R=(r+\sqrt{2}M)/\sqrt{2}m$. As $|x'|\leq R$ and $|y'|\leq R$, one gets from ([\[eq:Psigma2\]](#eq:Psigma2){reference-type="ref" reference="eq:Psigma2"}), using the definitions of $u$ and $v$, the result of Proposition [**Proposition** 14](#prop:lowbound){reference-type="ref" reference="prop:lowbound"}. Let us now turn back to the proof of ([\[eq:scaling\]](#eq:scaling){reference-type="ref" reference="eq:scaling"}). Consider the adjoint fundamental solution $\hat{P}(t,x,y)$. One has the semi-group property: $$P(2, x,y)= \int_{\mathbb{R}}\nu (z) P(1,x,z)P(1, z,y)dz.$$ Also, one easily checks that $P(t,x,z)=\hat{P}(-t,z,x)$.
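The semi-group property invoked here is the Chapman-Kolmogorov identity for the fundamental solution. In the constant-coefficient model case $\nu\equiv a\equiv 1$, $W=0$ (an illustrative stand-in for the general $P$, not the setting of the proof), the kernel is the classical Gaussian heat kernel and the identity can be checked numerically by quadrature:

```python
import math

def K(t, x, y):
    # classical heat kernel on R: fundamental solution of p_t = p_xx
    return math.exp(-((x - y) ** 2) / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def chapman_kolmogorov(x, y, h=0.01, L=20.0):
    # midpoint-rule approximation of  int_R K(1, x, z) K(1, z, y) dz,
    # which should reproduce K(2, x, y)
    n = int(2 * L / h)
    total = 0.0
    for i in range(n):
        z = -L + (i + 0.5) * h
        total += K(1.0, x, z) * K(1.0, z, y)
    return total * h

assert abs(chapman_kolmogorov(0.3, -0.5) - K(2.0, 0.3, -0.5)) < 1e-8
```

The quadrature error is dominated by the tail truncation at $|z|=20$, which is of order $e^{-100}$ here; the same identity, taken with the weight $\nu(z)\,dz$, is what the chaining argument in the proof of the lower bound composes $k$ times.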
Combining the last two identities, $$P(2,x,y)= \int_{\mathbb{R}}\nu (z) \hat{P}(-1,z,x)P(1, z,y)dz.$$ It follows from Jensen's inequality that $$\begin{array}{l}\ln P\big(2,X(1;x),Y(-1;y)\big) \\ \geq \ln \Big(\int_{\mathbb{R}}\nu (z) \hat{P}(-1,z,X(1;x))P(1, z,Y(-1;y)) e^{-|z|^{2}/C} dz\Big)\\ \geq \ln Q(1)+ \frac{1}{Q(1)} \int_{\mathbb{R}}\nu (z)e^{-|z|^{2}/C} \ln\big(\hat{P}(-1,z,X(1;x)) P(1, z,Y(-1;y)) \big) dz\\ = \ln Q(1)+ \frac{1}{Q(1)} \big( G_{y}(1)+\hat{G}_{x}(-1)\big)\\ \end{array}$$ where $$\hat{G}_{x}(t):= \int_{\mathbb{R}}\nu (z)e^{-|T(x)-W(1+t)|^{2}/\sigma (2+t)} \ln \big( \hat{P}(t,z,X(1;x))\big)dz.$$ It follows from Proposition [**Proposition** 13](#prop:entropy){reference-type="ref" reference="prop:entropy"} applied to $G_{y}$ and $\hat{G}_{x}$, that: $$\ln P\big(2,X(1;x),Y(-1;y)\big) \geq \ln Q(1)+ \frac{1}{Q(1)}\Big( -2B_{R}+2Q(1) \ln \overline{C}\Big).$$ One easily checks from the definition of $Q$ that $Q(1)\geq 1/C$ for some constant $C>0$. The conclusion follows. ◻ *Proof of the lower bound in Theorem [**Theorem** 1](#thm:heatkernel){reference-type="ref" reference="thm:heatkernel"}.* We know from Proposition [**Proposition** 14](#prop:lowbound){reference-type="ref" reference="prop:lowbound"} that there exists $C>0$ such that, for all $x,y$ with $|T(x)-T(y)-Wt|\leq 3\sqrt{t}$, one has $P(t,x,y)\geq \frac{1}{C\sqrt{t}}$. Consider $x,y\in \mathbb{R}$, $t>0$ and let $k\in \mathbb{N}$ be such that $$|T(x)-T(y)-Wt|\leq \sqrt{kt}.$$ We choose points $x_0=y, x_1,...,x_{k-1}, x_k=x$ such that for all $i=0,...,k-1$: $$|T(x_{i+1})-T(x_i)-Wt/k|\leq \sqrt{t/k}.$$ The semi-group property yields for all $t>0$: $$P\big(t,x,y\big)\geq \int ...\int_{|x'_i-x_i|\leq \frac{1}{M}\sqrt{t/k}}\Pi_{i=0}^{k-1}P\big(t/k, x_{i+1}', x_{i}'\big) dx_{1}'...
dx_{k-1}'.$$ If $|x'_i-x_i|\leq \frac{1}{M}\sqrt{t/k}$, one has $$|T(x_{i+1}')-T(x_i')-Wt/k|\leq |T(x_{i+1})-T(x_i)-Wt/k|+ |T(x_{i+1}')-T(x_{i+1})|+ |T(x_{i}')-T(x_{i})|\leq 3\sqrt{t/k}$$ since $|T'|\leq M$, and thus $P(t/k,x_{i+1}',x_i')\geq \frac{1}{C\sqrt{t/k}}$. Hence, $$P\big(t,x,y\big)\geq \Big(\frac{\sqrt{k}}{C\sqrt{t}}\Big)^{k}\Big( \frac{1}{M}\sqrt{t/k}\Big)^{k-1}= M(CM)^{-k}\sqrt{k/t}\geq e^{-Ck}/C\sqrt{t}$$ for some alternative constant that we still denote $C$. As $|T(x)-T(y)-Wt|\leq \sqrt{kt}$, we thus conclude that $$P\big(t,x,y\big)\geq \frac{e^{-C|T(x)-T(y)-Wt|^2/t}}{C\sqrt{t}}.$$ ◻ # The Nash type estimate We now let $$T(\xi,R,s):=\{ (t,x)\in \mathbb{R}\times\mathbb{R}, \ |T(x)-T(\xi)-Wt|<R, \ t\geq s\}$$ and we consider the fundamental solution $P^{(\xi, R)}=P^{(\xi, R)}(t,s,x,y)$ associated with equation $$\label{eq:pxi} \left\{\begin{array}{ll} \nu(x)p_t- \big( a(x)\nu (x)p_x\big)_x- Wp_x=0 & \hbox{ in } T(\xi,R,s),\\ p(t,x)=0 &\hbox{ for all } (t,x)\in \partial T(\xi,R,s)\cap (s,\infty),\\ \end{array}\right.$$ with initial datum $P(s,s,\cdot,y)=\delta_y/ \nu (y)$. ****Lemma** 15**. *For each $\delta \in (0,1)$, there is an $\alpha=\alpha (\mu,\delta)>0$ such that $$P^{(\xi,R)}(t,s,x,y)\geq \frac{\alpha}{C\sqrt{t-s}}e^{-C\frac{|T(x)-T(y)-W(t-s)|^2}{t-s}}$$ for all $(t,x)\in T(\xi,\delta R,s)$, $(s,y)\in T(\xi,\delta R,s)$, and $t\in (s,s+R^2)$, where $C=C(\mu)$ is the same as in Theorem [**Theorem** 1](#thm:heatkernel){reference-type="ref" reference="thm:heatkernel"}. In particular, if one also has $t-s\geq \gamma R^2$, then $P^{(\xi,R)}(t,s,x,y)\geq \frac{\alpha}{CR}e^{-4C\delta^2/\gamma}$.* *Proof.* By translation, we can assume that $\xi=0$ and $s=0$, and we denote $T(R):=T(0,R,0)$. 
There exist two nonnegative functions $mes_y^+$ and $mes_y^-$ on $\mathbb{R}^+$, with total mass less than or equal to $1$, such that $$\begin{array}{ll}P^{(0,R)}(t,0,x,y)=&P(t,0,x,y)-\int_0^t \nu \big(X(R+Wr,0)\big)P\big(t,r,x,X(R+Wr,0)\big)mes^+_y(r)dr\\ &-\int_0^t \nu \big(X(-R+Wr,0)\big)P\big(t,r,x,X(-R+Wr,0)\big)mes^-_y(r)dr,\\ \end{array}$$ where we remind the reader that $z:=X(\pm R+Wr,0)$ is the unique solution of $T(z)=\pm R+Wr$. We refer to [@HNRR2] for a proof of this claim in the periodic framework; the proof indeed carries over to the general framework, with $mes_y^+ (r) = |P_x^{(0,R)}\big(r,0,X(R+Wr,0),y\big)|$ and $mes_y^- (r) = |P_x^{(0,R)}\big(r,0,X(-R+Wr,0),y\big)|$. Hence, by Theorem [**Theorem** 1](#thm:heatkernel){reference-type="ref" reference="thm:heatkernel"}, one has $$P^{(0,R)}(t,0,x,y)\geq \frac{1}{C\sqrt{t}} e^{-C|T(x)-T(y)-Wt|^2/t} -C\sup_{0\leq \tau\leq t} \frac{1}{\sqrt{\tau}} e^{-R^2(1-\delta)^2/C\tau}$$ for $(t,x)\in T(\delta R)$. In particular, there is an $\varepsilon\in (0,1-\delta)$, depending only on $C$ and $\delta$, such that $$P^{(0,R)}(t,0,x,y)\geq \frac{1}{2C\sqrt{t}} e^{-C|T(x)-T(y)-Wt|^2/t}$$ for all $(t,x)\in T( \delta R)$ and $t\in (0,\varepsilon^2 R^2]$, and $y$ with $|T(x)-T(y)-Wt|<\varepsilon R$. We can then conclude as in the derivation of the lower bound in the proof of Theorem [**Theorem** 1](#thm:heatkernel){reference-type="ref" reference="thm:heatkernel"} that $$P^{(0,R)}(t,0,x,y)\geq \frac{\alpha}{C\sqrt{t}} e^{-C|T(x)-T(y)-Wt|^2/t}$$ for some $\alpha>0$, for all $(t,x)\in T(\xi,\delta R,s)$, $(s,y)\in T(\xi,\delta R,s)$, and $t \in (0,R^2)$. This concludes the proof. ◻ ****Lemma** 16**. *For each $\delta>0$, let $\rho:= 1- \varepsilon$ where $\varepsilon$ is given by Lemma [**Lemma** 15](#lem:PxiR){reference-type="ref" reference="lem:PxiR"}.
Then for all $s,\xi\in \mathbb{R}$ and $R>0$: $$Osc (p;s,\xi,\delta R)\leq \rho Osc (p;s,\xi, R)$$ for any solution $p$ of ([\[eq:canonical\]](#eq:canonical){reference-type="ref" reference="eq:canonical"}), where $$Osc (p;s,\xi, R):= \sup \{ |p(t,x)-p(t',x')|, (t,x), (t',x')\in T(\xi, R,s)\}.$$* *Proof.* We adapt the proof of Lemma 5.2 in [@FabesStroock]. Define $$S:=\Big\{ y\in \mathbb{R}, \ |T(y)-T(\xi)-W(s-R^2)|<\delta R \hbox{ and } p(s-R^2,y)>\frac{M(R)+m(R)}{2}\Big\}.$$ We can assume that $$|S|\geq \frac{1}{2} | \{ y\in \mathbb{R}, \ |T(y)-T(\xi)-W(s-R^2)|<\delta R\}|,$$ otherwise we consider $1-p$ instead of $p$. For all $(t,x)\in T(\xi, \delta R, s-\delta^2 R^2)$, one has $$\begin{array}{rcl} p(t,x)-m(R)& \geq & \int_\mathbb{R}\nu (y) \big( p(s-R^2,y)-m(R)\big) P^{(\xi,R)}(t,s-R^2,x,y)dy\\ &&\\ & \geq & \frac{M(R)-m(R)}{2}\int_S \nu (y)P^{(\xi,R)}(t,s-R^2,x,y)dy\\ &&\\ & \geq & \frac{M(R)-m(R)}{2}\int_S \nu (y) \frac{\alpha}{CR}e^{-4C\delta^2/(1-\delta^2)}dy \quad \hbox{using Lemma \ref{lem:PxiR}}\\ &&\\ & \geq & \frac{M(R)-m(R)}{4} \big(\inf_\mathbb{R}\nu\big) \frac{\alpha}{C}e^{-4C\delta^2/(1-\delta^2)}=: \varepsilon\big(M(R)-m(R)\big)\\ \end{array}$$ where $\varepsilon$ is a small positive constant depending only on $\mu$ and $\delta$. The conclusion follows with $\rho:=1-\varepsilon$. ◻ *Proof of Theorem [**Theorem** 3](#thm:Nash){reference-type="ref" reference="thm:Nash"}.* The first part of the Theorem can be derived as in Theorem 5.3 of [@FabesStroock], using Lemma [**Lemma** 16](#lem:osc){reference-type="ref" reference="lem:osc"}. In order to derive ([\[eq:osc\]](#eq:osc){reference-type="ref" reference="eq:osc"}), we first notice that it follows from Theorem [**Theorem** 1](#thm:heatkernel){reference-type="ref" reference="thm:heatkernel"} that $$p(t,x)\leq \frac{C}{\sqrt{t}}\int_\mathbb{R}e^{-|T(x)-T(y)-Wt|^2/Ct}p(0,y)dy\leq \frac{C}{\sqrt{t}} \|p(0,\cdot)\|_{L^1 (\mathbb{R})}.$$ Take $t=t'=s$, $\xi=X(-t;x)$ and $R=\sqrt{t/2}$.
By definition of $X$, one has $T(x)-T(\xi)-Wt=0$ and $|T(x')-T(\xi)-Wt|=|T(x)-T(x')|$. Hence, if $|T(x)-T(x')|\leq \sqrt{t/2}=R$, one gets from the first part of the Theorem: $$|p(t,x)-p(t,x')|\leq C \|p\|_{L^\infty ((t/2,t)\times B(\xi,R))}\Big( \frac{|T(x)-T(x')|}{\sqrt{t/2}}\Big)^\beta.$$ If $|T(x)-T(x')|\geq \sqrt{t/2}$, this inequality still holds with $C=2$. Hence, for all $t>0$ and $x,x'\in \mathbb{R}$, one has $$|p(t,x)-p(t,x')|\leq C \|p\|_{L^\infty ((t/2,t)\times B(\xi,R))}\Big( \frac{|T(x)-T(x')|}{\sqrt{t/2}}\Big)^\beta\leq \frac{C}{t^{\frac{1+\beta}{2}}}\|p(0,\cdot)\|_{L^1 (\mathbb{R})} |x-x'|^\beta$$ for some generic constant $C$ depending on $\mu$ and $\beta$. ◻ # Proof of the estimates for the original equation ## Definitions and properties of the Wronskian and the invariant measure {#sec:UP} We first define the Wronskian, which is known to be constant. ****Lemma** 17**. *Let $W_{\gamma}:=a \widetilde{\phi}_{\gamma}' \phi_{\gamma}- a\phi_{\gamma}' \widetilde{\phi}_{\gamma}$. Then $W_{\gamma}$ is a positive constant over $\mathbb{R}$.* Then, if $U$ is the fundamental solution associated with ([\[eq:parab\]](#eq:parab){reference-type="ref" reference="eq:parab"}), easy computations yield that $P(t,x,y)=U(t,x,y)\phi_\gamma (y)/ \phi_\gamma (x) e^{\gamma t}$ is the fundamental solution associated with ([\[eq:canonical\]](#eq:canonical){reference-type="ref" reference="eq:canonical"}) with $W=W_\gamma$ and $\nu=\nu_\gamma:= \phi_{\gamma} \widetilde{\phi}_{\gamma}$. In order to derive Theorem [**Theorem** 5](#thm:princ){reference-type="ref" reference="thm:princ"} from Theorem [**Theorem** 1](#thm:heatkernel){reference-type="ref" reference="thm:heatkernel"}, we need to check that $\nu_\gamma$ and $W_\gamma$ satisfy hypotheses ([\[hyp:original\]](#hyp:original){reference-type="ref" reference="hyp:original"}). As $\nu_\gamma$ plays the role of an invariant measure, we want this function to satisfy good ellipticity properties. ****Proposition** 18**. 
*For $\gamma > \underline \gamma$, define $\nu_{\gamma}:= \phi_{\gamma} \widetilde{\phi}_{\gamma}$, which we will simply denote by $\nu$ when there is no ambiguity. Then*

- *$\inf_{\mathbb{R}}\nu_{\gamma}>0$,*

- *$\nu_{\gamma}$ is bounded.*

The proof of this Proposition will rely on the following Lemma. ****Lemma** 19**. *For all $\gamma>\underline{\gamma}$, there exists $\varepsilon>0$ such that $$\forall x\in \mathbb{R}, \quad \frac{\widetilde{\phi}_{\gamma}'(x)}{\widetilde{\phi}_{\gamma}(x)} \geq \frac{\phi_{\gamma}'(x)}{\phi_{\gamma}(x)} + \varepsilon.$$* *Proof.* Assume first that $x=0$. We know from Lemma 2.7 of [@Noleninv] that if $\gamma>\gamma'>\underline{\gamma}$, for all $0<\varepsilon\leq\big( \sqrt{\gamma -\inf_{\mathbb{R}}r}-\sqrt{ \gamma' - \inf_{\mathbb{R}}r}\big)/\sqrt{\inf_\mathbb{R}a}$: $$\forall x\geq 0, \quad \phi_{\gamma}(x)\leq \phi_{\gamma'}(x)e^{-\varepsilon x}.$$ The same result applies to $\widetilde{\phi}_\gamma$ with the change of variable $x\mapsto -x$, yielding: $$\forall x\leq 0, \quad \widetilde{\phi}_{\gamma}(x)\leq \widetilde{\phi}_{\gamma'}(x)e^{\varepsilon x}.$$ Hence, $\widetilde{\phi}_{\gamma}'(0)\geq \widetilde{\phi}_{\gamma'}'(0)+\varepsilon.$ Let us now prove the following claim: $$\forall x\geq 0, \quad \widetilde{\phi}_{\gamma}(x)\geq \widetilde{\phi}_{\gamma'}(x).$$ Define the Wronskian $Z(x)= \widetilde{\phi}_{\gamma'}'(x)\widetilde{\phi}_\gamma(x)- \widetilde{\phi}_{\gamma'}(x)\widetilde{\phi}_\gamma'(x).$ Then, $(aZ)'(x)=(\gamma'-\gamma) \widetilde{\phi}_{\gamma'}(x)\widetilde{\phi}_\gamma(x)<0$. As $Z(0)\leq -\varepsilon$, one has $Z(x)\leq -\varepsilon$ for all $x\geq 0$, and thus $\widetilde{\phi}_{\gamma'}'(x)/\widetilde{\phi}_{\gamma'}(x)\leq \widetilde{\phi}_\gamma'(x)/\widetilde{\phi}_\gamma(x)$ for all $x\geq 0$, and the claim follows by integration from $0$ to $x\geq 0$.
Combining these two inequalities, one gets $$\forall x\geq 0, \quad \frac{\widetilde{\phi}_{\gamma}(x)}{\phi_{\gamma}(x)}\geq \frac{\widetilde{\phi}_{\gamma'}(x)}{\phi_{\gamma'}(x)}e^{\varepsilon x}.$$ Moreover, we know that $\widetilde{\phi}_{\gamma'}(x) \geq \phi_{\gamma'}(x)$ for all $x\geq 0$. Hence, $$\forall x\geq 0, \quad \ln\widetilde{\phi}_{\gamma}(x)\geq \ln\phi_{\gamma}(x)+\varepsilon x.$$ Taking the first-order Taylor expansion near $x=0^{+}$, one gets $$\frac{\widetilde{\phi}_{\gamma}'(0)}{\widetilde{\phi}_{\gamma}(0)} \geq \frac{\phi_{\gamma}'(0)}{\phi_{\gamma}(0)} + \varepsilon.$$ In order to handle the case $x\neq 0$, we just translate the origin and take $\phi_{\gamma}(\cdot)/\phi_{\gamma}(x)$. ◻ *Proof of Proposition [**Proposition** 18](#prop:measure){reference-type="ref" reference="prop:measure"}.* We first notice that $$\frac{W_{\gamma}}{a\nu_{\gamma}} = \frac{\widetilde{\phi}_{\gamma}'}{\widetilde{\phi}_{\gamma}}-\frac{\phi_{\gamma}'}{\phi_{\gamma}}.$$ The Harnack inequality yields that $\frac{\phi_{\gamma}'}{\phi_{\gamma}}$ and $\frac{\widetilde{\phi}_{\gamma}'}{\widetilde{\phi}_{\gamma}}$ are bounded, so $\nu_{\gamma}$ admits a positive infimum. Lastly, Lemma [**Lemma** 19](#lem:ineqphi){reference-type="ref" reference="lem:ineqphi"} yields that $$\frac{W_{\gamma}}{a\nu_{\gamma}} \geq \varepsilon$$ and thus $\nu_{\gamma}$ is bounded. ◻ ## Convexity of $\phi_\gamma$ with respect to $\gamma$ ****Lemma** 20**. *For all $\gamma, \gamma'>\underline{\gamma}$, one has for all $x\in \mathbb{R}$, $\sigma \in (0,1)$: $$\frac{\phi_{(1-\sigma)\gamma+\sigma \gamma'}'(x)}{\phi_{(1-\sigma)\gamma+\sigma \gamma'}(x)}\leq (1-\sigma) \frac{\phi_{\gamma}'(x)}{\phi_{\gamma}(x)}+\sigma\frac{\phi_{\gamma'}'(x)}{\phi_{\gamma'}(x)}.$$* *Proof.* We can always assume that $x=0$ by translation.
Next, classical arguments from Lemma 2.5 of [@Noleninv] give $$\phi_{(1-\sigma)\gamma+\sigma \gamma'}(x)\leq \phi_{\gamma}(x)^{1-\sigma}\phi_{\gamma'}(x)^{\sigma} \quad \hbox{ for all } x\in \mathbb{R}.$$ Expanding near $x=0^{+}$, one gets $$\phi_{(1-\sigma)\gamma+\sigma \gamma'}'(0)\leq (1-\sigma)\phi_{\gamma}'(0)+\sigma\phi_{\gamma'}'(0),$$ which ends the proof when $x=0$. ◻ ## The derivative $\dot{\phi}_\gamma$ and its properties ****Lemma** 21**. *The function $\gamma\mapsto \phi_\gamma$ admits a derivative $\dot{\phi}_{\gamma}$ for all $\gamma>\underline{\gamma}$, which is the unique solution of $$\label{eq:phipoint}\big( a(x)\dot{\phi}_{\gamma}'\big)'+\big(r(x)-\gamma\big)\dot{\phi}_{\gamma}=\phi_{\gamma} \hbox{ in } \mathbb{R}, \quad \dot{\phi}_{\gamma}(0)=0$$ such that $x\mapsto \displaystyle\frac{\dot{\phi}_{\gamma}}{x\phi_{\gamma}}$ is bounded over $\mathbb{R}$.* *Proof.* First, the convexity of $\gamma\mapsto \ln \phi_{\gamma}(x)$ mentioned in the proof of Lemma [**Lemma** 20](#lem:phiconvexe){reference-type="ref" reference="lem:phiconvexe"} yields that one can always define a left derivative $$\dot{\phi}_{\gamma}(x):=\phi_{\gamma}(x)\times\lim_{\gamma'\rightarrow\gamma^{-}}\displaystyle\frac{\ln\phi_{\gamma}(x)-\ln\phi_{\gamma'}(x)}{\gamma-\gamma'}.$$ On the other hand, we know from Lemma [**Lemma** 20](#lem:phiconvexe){reference-type="ref" reference="lem:phiconvexe"} that $\gamma \mapsto \phi_{\gamma}'(x)/\phi_{\gamma}(x)$ is convex for all $x\in \mathbb{R}$. One easily checks that $$|\phi_{\gamma}'(x)/\phi_{\gamma}(x)|\leq \sqrt{\displaystyle\frac{\gamma-\inf_{\mathbb{R}}r}{\inf_\mathbb{R}a}}.$$ Hence, by a well-known property of convex functions, $\phi_{\gamma}'(x)/\phi_{\gamma}(x)$ is $\frac{\sqrt{\gamma+\delta-\inf_{\mathbb{R}}r}}{\delta\sqrt{\inf_\mathbb{R}a}}$-Lipschitz-continuous with respect to $\gamma$ on any ball of radius $\delta$.
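In the constant-coefficient model $a\equiv 1$, $r\equiv 0$ (an illustrative special case, not the general setting), the positive decaying solution of $\big( a\phi_\gamma'\big)'+\big(r-\gamma\big)\phi_\gamma=0$ is $\phi_\gamma(x)=e^{-\sqrt{\gamma}\,x}$, whose logarithmic derivative is the constant $-\sqrt{\gamma}$. The convexity of $\gamma\mapsto\phi_\gamma'(x)/\phi_\gamma(x)$ asserted in Lemma [**Lemma** 20](#lem:phiconvexe){reference-type="ref" reference="lem:phiconvexe"} then reduces to the concavity of $\gamma\mapsto\sqrt{\gamma}$, which the following sketch (function names are illustrative) checks numerically:

```python
import math

# Model case a = 1, r = 0: phi_gamma(x) = exp(-sqrt(gamma) * x), so the
# logarithmic derivative phi'_gamma / phi_gamma equals -sqrt(gamma) at every x.

def log_derivative(gamma):
    return -math.sqrt(gamma)

def convexity_gap(gamma1, gamma2, sigma):
    # right-hand side minus left-hand side of the inequality of Lemma 20;
    # the lemma holds in this model iff the gap is nonnegative
    mid = (1.0 - sigma) * gamma1 + sigma * gamma2
    rhs = (1.0 - sigma) * log_derivative(gamma1) + sigma * log_derivative(gamma2)
    return rhs - log_derivative(mid)

for g1 in (0.5, 1.0, 4.0):
    for g2 in (0.25, 2.0, 9.0):
        for s in (0.1, 0.5, 0.9):
            assert convexity_gap(g1, g2, s) >= -1e-12
```

In this model the Lipschitz constant of $\phi_\gamma'/\phi_\gamma$ in $\gamma$ is $\frac{1}{2\sqrt{\gamma}}$, consistent with the explicit bound stated above.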
By exchanging the derivatives with respect to $\gamma$ and $x$, we get that $x\mapsto \displaystyle\frac{\dot{\phi}_{\gamma}}{x\phi_{\gamma}}$ is bounded over $\mathbb{R}$. Lastly, if $\psi$ is another solution of ([\[eq:phipoint\]](#eq:phipoint){reference-type="ref" reference="eq:phipoint"}) such that $x\mapsto \displaystyle\frac{\psi}{x\phi_{\gamma}}$ is bounded over $\mathbb{R}$, then $z:=\dot{\phi}_{\gamma}-\psi$ would satisfy $$\big(a(x)z'\big)'+\big( r(x)-\gamma\big)z=0 \hbox{ over } \mathbb{R}, \quad z(0)=0.$$ We can thus write $z=A\phi_{\gamma}+B\widetilde{\phi}_{\gamma}$ since these functions are two independent solutions of the equation. Moreover, $z(0)=A+B=0$ and thus $B=-A$. Dividing by $x\phi_{\gamma}(x)$, as $\widetilde{\phi}_{\gamma}/\phi_{\gamma}$ grows at least exponentially as $x\rightarrow+\infty$ by Lemma [**Lemma** 19](#lem:ineqphi){reference-type="ref" reference="lem:ineqphi"}, one gets a contradiction unless $A=0$, which means that $z\equiv 0$. Hence $\dot{\phi}_{\gamma}$ is uniquely defined. ◻ ****Corollary** 22**. *The function $T_\gamma:=-W_\gamma \dot{\phi}_{\gamma}/\phi_\gamma$ is the unique solution of $$-\big( \nu (x)a(x)T_{\gamma}'\big)'+ W_\gamma T_{\gamma}'=W_\gamma \nu (x) \hbox{ in } \mathbb{R}, \quad T_{\gamma}(0)=0$$ such that $x\mapsto \displaystyle\frac{T_{\gamma}}{x}$ is bounded over $\mathbb{R}$.* *Proof.* The existence follows from Lemma [**Lemma** 21](#lem:wgamma){reference-type="ref" reference="lem:wgamma"} and easy computations. The uniqueness follows from Proposition [**Proposition** 2](#prop:T){reference-type="ref" reference="prop:T"}.
◻ ## Proof of Theorem [**Theorem** 5](#thm:princ){reference-type="ref" reference="thm:princ"} {#proof-of-theorem-thmprinc} *Proof of Theorem [**Theorem** 5](#thm:princ){reference-type="ref" reference="thm:princ"}.* Let $U$ be the fundamental solution associated with ([\[eq:parab\]](#eq:parab){reference-type="ref" reference="eq:parab"}); easy computations then yield that $P(t,x,y)=U(t,x,y)\phi_\gamma (y)/ \phi_\gamma (x) e^{\gamma t}$ is the fundamental solution associated with ([\[eq:canonical\]](#eq:canonical){reference-type="ref" reference="eq:canonical"}) with $W=W_\gamma$ and $\nu=\nu_\gamma:= \phi_{\gamma} \widetilde{\phi}_{\gamma}$. Proposition [**Proposition** 18](#prop:measure){reference-type="ref" reference="prop:measure"} yields that $\nu_\gamma$ satisfies hypotheses [\[hyp:original\]](#hyp:original){reference-type="ref" reference="hyp:original"}. Corollary [**Corollary** 22](#lem:Tgamma){reference-type="ref" reference="lem:Tgamma"} provides the function $T_\gamma:=-W_\gamma \dot{\phi}_{\gamma}/\phi_\gamma$ with the required properties. The conclusion follows. ◻ W. Arendt. . , 38(1):87--130, 1997. D.G. Aronson. . , 22:607--694, 1968. M. Bages, P. Martinez, J. Roquejoffre. How travelling waves attract the solutions of KPP-type equations. , 364(10):5415--5468, 2012. J. Cerny, A. Drewitz, L. Schmitz. (Un-)bounded transition fronts for the parabolic Anderson model and the randomized F-KPP equation. , 2021. A. Drewitz, L. Schmitz. Invariance principles and Log-distance of F-KPP fronts in a random medium. , 2021. E. B. Fabes and D. W. Stroock. A new proof of Moser's parabolic Harnack inequality using the old ideas of Nash. , 96(4):327--338, 1986. M. Freidlin. On wave front propagation in periodic media. , 7:147--166, 1984. M. Freidlin, and J. Gärtner. On the propagation of concentration waves in periodic and random media. , 20:1282--1286, 1979. A. Grigor'yan. . , 45(1):33--52, 1997. A. Grigor'yan. . , 398:93--191, Amer. Math. Soc., Providence, RI, 2006. F. Hamel, J. Nolen, J.M. Roquejoffre, L. Ryzhik.
The logarithmic delay of KPP fronts in a periodic medium. , 18(3):465--505, 2016. V. V. Jikov, S. M. Kozlov, O. A. Oleinik. , 1994. T. Kumagai. . , 40(3):793--818, 2004. J. Nash. . , 80:931--954, 1958. J. Nolen. A central limit theorem for pulled fronts in a random medium. , 6(2):167--194, 2011. J. Nolen. An invariance principle for random traveling waves in one dimension. , 43(1):153--188, 2011. J. Nolen, and L. Ryzhik. Traveling waves in a one-dimensional random medium. , 2009. J. Nolen, and J. Xin. . , 26(3):815--839, 2008. J. Nolen, and J. Xin. . , 11(2), 2009. J. Nolen, and J. Xin. . , 269:493--532, 2007. J. Norris. . , 140:161--195, 1997. J. Norris, D. W. Stroock. . , 62(2):373--402, 1991. J. Wang. . , 178(2):377--398, 1997. D. Daners. . , 217:13--41, 2000. Qi S. Zhang. . , 182(2):344--370, 2001. Qi S. Zhang. . , 182(2):416--430, 2002. [^1]: Institut Denis Poisson, Université d'Orléans, Université de Tours, CNRS, Orléans, France (`gregoire.nadin@cnrs.fr`).
--- author: - John M. O'Brien bibliography: - references.bib title: Homology of loops on a splitting-rank symmetric space via the Geometric Satake for real groups --- # Introduction {#introduction .unnumbered} Let $G$ be a complex reductive group with compact form $G_c$. The Geometric Langlands Correspondence predicts a relationship between sheaves on the loop space $\Omega G_c$ and data attached to the dual reductive group $G^\vee$. Central to the equivalence, then, is an understanding of the homology and equivariant homology of $\Omega G_c$ in terms compatible with the Langlands program. Ginzburg gave the first such description [@ginzburg2000perverse], followed later by Bezrukavnikov, Finkelberg, and Mirković [@bezrukavnikov2014equivariant] and Yun and Zhu [@yun_integral_2011]. Here we discuss Yun and Zhu's formulation of $H_*(\Omega(G_c))$ in terms of the Langlands dual group: **Theorem 1** ([@yun_integral_2011]). *For $G$ a connected reductive group over ${\mathbb{C}}$ with almost simple derived group, there is an isomorphism of Hopf algebras* *$$H_*(\Omega G_c, {\mathbb{Z}}[1/\ell_G]) \cong \mathcal{O}(G^\vee_e)[1/\ell_G]$$* *where $\ell_G$ denotes the square of the ratio of root lengths of $G_c$, $e$ a regular nilpotent of $G^\vee$, and $G^\vee_e$ the centralizer of $e$ in $G^\vee$.* They also prove a description of the equivariant cohomology in terms of the regular centralizer group scheme associated to $G^\vee$. The proof uses the Geometric Satake Correspondence and the known homotopy equivalence $\Omega G_c \rightarrow {\textnormal{Gr}}= G({\mathbb{C}}((t)))/G({\mathbb{C}}[[t]])$ to canonically construct such a morphism of Hopf algebras. The Relative Langlands program, a vast generalization of the Geometric Langlands program, studies sheaves on a wider range of spaces. One interesting class of such spaces is the loop spaces of symmetric spaces. Recall that a symmetric space $X_c = G_c/K_c$ is the quotient of $G_c$ by the fixed points of an involution.
Examples include spheres, projective spaces, and many other familiar spaces. It is natural to ask whether a description of $H_*(\Omega(G_c/K_c))$ analogous to [@yun_integral_2011] holds. There is a dual group--the Gaitsgory-Nadler dual group $G^\vee_X$--attached to such spaces. It is a subgroup of $G^\vee$ with root data a modification of the relative roots of $G$. However, describing $H_*(\Omega(G_c/K_c))$ in general is difficult even without involving dual groups--proofs involve passing to $\mathbb{F}_2$-coefficients to deal with non-orientability and seeing what information about ${\mathbb{Z}}$-coefficients we can glean. In principle, one could prove analogues of Theorem 1 by studying the algebraic loop space ${\mathcal L}X = X({\mathbb{C}}((t)))$. However, sheaves on ${\mathcal L}X$ are unwieldy, due to the tendency of $G({\mathbb{C}}[[t]])$-orbits to be infinite dimensional and infinite codimensional. Instead, we use Quillen's result that $\Omega(G_c/K_c)$ is homotopy equivalent to ${\textnormal{Gr}}_{\mathbb{R}}$, where ${\textnormal{Gr}}_{\mathbb{R}}$ is the affine Grassmannian $G_{\mathbb{R}}({\mathbb{R}}((t)))/G_{\mathbb{R}}({\mathbb{R}}[[t]])$ of the real form $G_{\mathbb{R}}$ of $G$ associated to $G_c/K_c$. Here, the relevant strata--the $G_{\mathbb{R}}({\mathbb{R}}[[t]])$-orbits--are miraculously finite-dimensional and concrete. In this paper we use this real analytic model to compute the homology and equivariant homology of a particularly nice family of loop spaces of symmetric spaces--the family of splitting-rank symmetric spaces. This family is known to have no $2$-torsion in homology. Furthermore, aside from the compact Lie groups, splitting-rank symmetric spaces have dual group $G^\vee_X$ of type $A_{m-1}$. If we choose centerless forms of the associated real groups, we get $G^\vee_X \cong {\textnormal{PGL}}(m, {\mathbb{C}})$. The results of this paper will feature in a couple of upcoming works.
With Tsao-Hsien Chen, Mark Macerato, and David Nadler we will prove a version of the Derived Satake Equivalence for $\textnormal{GL}_n({\mathbb{H}})$ [@chen2022quaternionic]. Later, Macerato and I plan to use this description to generalize the Derived Satake Equivalence to real splitting-rank groups. **Theorem 2**. *For $X= G_c/K_c$ a compact, splitting-rank symmetric space of adjoint type, there is an isomorphism of Hopf algebras* *$$H_*(\Omega(X), {\mathbb{Z}}[1/\ell_X]) \cong \mathcal{O}(G^\vee_{X,e})[1/\ell_X],$$* *where $e$ is a regular nilpotent in $\mathfrak{g}^\vee_{X}$, $G^\vee_{X,e}$ is the centralizer of $e$, and $\ell_X$ is the product of a few small primes.* The equivariant calculation also holds, relating the equivariant homology to the regular centralizer of $G^\vee_X$. # Factorization and homology {#factorization-and-homology .unnumbered} Here we endow $H_*(\Omega(X))$ with the structure of a Hopf algebra in an *a priori* different way than loop concatenation, using factorization for the real affine Grassmannian. Recall that, for $\Gamma$ a smooth complex curve with real form $\Gamma_{\mathbb{R}}$, we have the Beilinson-Drinfeld Grassmannian: $${\textnormal{Gr}}_{{\mathbb{R}}, \Gamma_{\mathbb{R}}^2} = \{(x,y,\varepsilon, \tau): x,y\in \Gamma_{\mathbb{R}};\; \varepsilon: E\rightarrow \Gamma_{\mathbb{R}};\; \tau \textnormal{ a trivialization of } E \textnormal{ away from } (x,y)\}$$ Here $E$ is a principal $G_{\mathbb{R}}$-bundle over $\Gamma_{\mathbb{R}}$. This is a real analytic space over $\Gamma_{\mathbb{R}}^2$. Its fibers over the diagonal embedding of $\Gamma_{\mathbb{R}}$ are given by ${\textnormal{Gr}}_{{\mathbb{R}}}$, while fibers elsewhere are isomorphic to ${\textnormal{Gr}}_{{\mathbb{R}}} \times {\textnormal{Gr}}_{{\mathbb{R}}}$. Specialization of the constant sheaf in this family gives us a product $\wedge_{sp}: H_*({\textnormal{Gr}}_{{\mathbb{R}}} \times {\textnormal{Gr}}_{{\mathbb{R}}}) \rightarrow H_*({\textnormal{Gr}}_{{\mathbb{R}}})$.
In the case of complex groups, this product is identical to the ordinary Pontryagin product by [@yun_integral_2011]. Dually, we denote the coproduct $\Delta_{sp}: H^*({\textnormal{Gr}}_{\mathbb{R}}) \rightarrow H^*({\textnormal{Gr}}_{\mathbb{R}}\times {\textnormal{Gr}}_{\mathbb{R}})$. Factorization interacts with the Geometric Satake Equivalence for real groups in the following way. Consider the subcategory ${\mathbf Q}({\textnormal{Gr}}_{\mathbb{R}})$ of $P_{G_{\mathbb{R}}({\mathbb{R}}[[t]])}({\textnormal{Gr}}_{\mathbb{R}})$ defined by Nadler for real groups [@nadler_perverse_2005]. For splitting-rank cases, this is the entire category $P_{G_{\mathbb{R}}({\mathbb{R}}[[t]])}({\textnormal{Gr}}_{\mathbb{R}})$. It is a Tannakian category with fiber functor $H^*: {\mathbf Q}({\textnormal{Gr}}_{\mathbb{R}}) \rightarrow {\mathop{\rm Vect}}_{\mathbb{Z}}$ and monoidal structure given by convolution, or equivalently (up to sign), by fusion. We use the latter: for ${\mathcal F}, {\mathcal G}\in {\mathbf Q}({\textnormal{Gr}}_{\mathbb{R}})$, let ${\mathcal F}_{\Gamma_{\mathbb{R}}}$ be the "globalization" on ${\textnormal{Gr}}_{{\mathbb{R}}, \Gamma_{{\mathbb{R}}}}$. Let $\Delta: \Gamma_{\mathbb{R}}\rightarrow \Gamma_{\mathbb{R}}^2$ denote the inclusion of the diagonal and $i: \Gamma_{\mathbb{R}}^2 - \Delta \rightarrow \Gamma_{\mathbb{R}}^2$ the inclusion of the complement of the diagonal. Then $${\mathcal F}* {\mathcal G}:= \Delta^*i_{!*}\big(({\mathcal F}_{\Gamma_{\mathbb{R}}} \boxtimes {\mathcal G}_{\Gamma_{\mathbb{R}}})|_{\Gamma^2_{\mathbb{R}}-\Delta}\big)$$ Using factorization, we have the following lemma concerning cohomology classes: **Lemma 1**.
*We have isomorphisms, for ${\mathcal F},{\mathcal G}\in Q({\textnormal{Gr}}_{\mathbb{R}})$ and $h\in H^*({\textnormal{Gr}}_{\mathbb{R}}, k)$,* *$$\begin{tikzcd} H^*({\mathcal F})\otimes H^*({\mathcal G})\arrow[d, "\cup(\Delta_{sp} h)"]\arrow[r, "\sim" ] & H^*({\mathcal F}*{\mathcal G}) \arrow[d, "\cup h"] \\ H^*({\mathcal F})\otimes H^*({\mathcal G}) \arrow[r, "\sim"] & H^*({\mathcal F}*{\mathcal G}) \end{tikzcd}$$* *An identical statement holds in $C$-equivariant cohomology, for $C$ the maximal compact torus of $G_{\mathbb{R}}$.* We will use this lemma in several places to construct morphisms into the dual group and points in the Lie algebra. # MV Filtration {#mv-filtration .unnumbered} As in [@yun_integral_2011] we collect some results on an equivariant version of the coweight filtration of Mirković-Vilonen. Let $C$ denote the maximal torus of the compact symplectic group $K$, realized as the quaternionic unitary group. Let $S_{\nu, {\mathbb{R}}}$ be the $U_{\mathbb{R}}((t))$ orbit through $t^\nu$ for $\nu \in \Lambda_{T_K}$. By Nadler's Theorem 8.5.1 [@nadler_perverse_2005], for ${\mathcal F}\in {\mathbf Q}({\textnormal{Gr}}_{\mathbb{R}})_{^pH}$ we have that $H^*_c(S_{\nu, {\mathbb{R}}}, {\mathcal F})$ is concentrated in degree $k = \langle \check{\rho},\nu\rangle$. Since every $\nu = \theta(\mu) + \mu$ for some complex cocharacter $\mu$ and $\theta$ the involution induced by complex conjugation, we have that $k$ is of even parity. The MV filtration filters $H^*({\mathcal F})$ by coweights, with ${\textnormal{Fil}}_{\geq \nu} H^*({\mathcal F}) = \ker(H^*({\mathcal F}) \rightarrow H^*(S_{<\nu, {\mathbb{R}}}, {\mathcal F}))$, where $S_{<\nu, {\mathbb{R}}} = \overline{S_{\nu, {\mathbb{R}}}} - S_{\nu,{\mathbb{R}}}$. Likewise, we may filter $H^*_C({\mathcal F})$ by ${\textnormal{Fil}}_{\geq \nu}^{C} H_C^*({\mathcal F}) = \ker(H_C^*({\mathcal F}) \rightarrow H_{C}^*(S_{<\nu, {\mathbb{R}}}, {\mathcal F}))$.
For $R_C = H_C^*(*, {\mathbb{Z}})$, we have the following theorem: **Theorem 3**. *There is a natural isomorphism* *$$H_C^*({\textnormal{Gr}}_{\mathbb{R}}, -) \cong H^*({\textnormal{Gr}}_{\mathbb{R}}, -) \otimes R_C: {\mathbf Q}({\textnormal{Gr}}_{\mathbb{R}})_{^pH} \rightarrow {\textnormal{Mod}}^*(R_C)$$* *Proof.* The proof follows exactly as in [@yun_integral_2011]. Since $H^*_c(S_{\nu, {\mathbb{R}}}, {\mathcal F})$ is concentrated in a single degree, applying a spectral sequence gives us that $H^*_{C,c}(S_{\nu, {\mathbb{R}}},{\mathcal F}) \cong H^*_c(S_{\nu, {\mathbb{R}}}, {\mathcal F}) \otimes R_C$. Then $H^*_C({\mathcal F})$ has a spectral sequence with $E_1$ terms $H^*_{C,c}(S_{\nu, {\mathbb{R}}},{\mathcal F})$. By parity vanishing, this spectral sequence degenerates. By degree considerations, since ${\textnormal{Fil}}^C_{>\nu} H_C^*({\mathcal F})$ is concentrated in degrees above $\langle \check{\rho}, \nu \rangle$, the following short exact sequence splits canonically: $$0\rightarrow {\textnormal{Fil}}^C_{>\nu} H_C^*({\mathcal F}) \rightarrow {\textnormal{Fil}}^C_{\geq \nu} H_C^*({\mathcal F}) \rightarrow H^*_{C,c}(S_{\nu, {\mathbb{R}}}, {\mathcal F}) \rightarrow 0$$ Hence we have the following isomorphisms: $$H_C^*({\mathcal F}) \cong \bigoplus_{\nu \in X_*(T_{\mathbb{R}})} H^*_{C,c}(S_{\nu, {\mathbb{R}}}, {\mathcal F}) \cong \bigoplus_{\nu \in X_*(T_{\mathbb{R}})} H^*_{c}(S_{\nu, {\mathbb{R}}},{\mathcal F}) \otimes R_C \cong H^*({\mathcal F}) \otimes R_C$$ ◻ We observe that this result can be strengthened. Let $M$ denote the Levi factor of the minimal parabolic $P_{\mathbb{R}}\subset G_{\mathbb{R}}$. We have that $M$ commutes with every $t^\nu$, hence we get a filtration on the $M$-equivariant cohomology. Taking invariants of the Weyl group of the Levi factor $W_M$, we get graded pieces $H^*_{M,c}(S_{\nu, {\mathbb{R}}},{\mathcal F}) \cong (H^*_c(S_{\nu, {\mathbb{R}}}, {\mathcal F}) \otimes R_C)^{W_M}$. By degree considerations, as above, the filtration splits.
Hence, $$H_M^*({\mathcal F}) \simeq \bigoplus_{\nu \in X_*(T_{\mathbb{R}})} H^*_{M,c}(S_{\nu, {\mathbb{R}}}, {\mathcal F}) \cong \bigoplus_{\nu \in X_*(T_{\mathbb{R}})} (H^*_c(S_{\nu, {\mathbb{R}}}, {\mathcal F}) \otimes R_C)^{W_M}$$ Thus, we have an analogous natural isomorphism: **Theorem 4**. *There is a natural isomorphism* *$$H_M^*({\textnormal{Gr}}_{\mathbb{R}}, -) \cong (H^*({\textnormal{Gr}}_{\mathbb{R}}, -) \otimes R_C)^{W_M}: {\mathbf Q}({\textnormal{Gr}}_{\mathbb{R}})_{^pH} \rightarrow {\textnormal{Mod}}^*(R_M)$$* # The canonical automorphism {#the-canonical-automorphism .unnumbered} In this section, we use Lemma 1 to construct a canonical automorphism of $$H_C^*(-) \otimes_{R_C} H^C_*({\textnormal{Gr}}_{\mathbb{R}}): {\mathbf Q}({\textnormal{Gr}}_{\mathbb{R}})_{^pH} \rightarrow {\textnormal{Mod}}^*(H^C_*({\textnormal{Gr}}_{\mathbb{R}}))$$ and likewise for their $M$-equivariant counterparts. We use an adaptation of the approach in [@yun_integral_2011]. Recall that, by the natural cell structure on ${\textnormal{Gr}}_{\mathbb{R}}$, $H_C^*({\textnormal{Gr}}_{\mathbb{R}})$ and $H_*^C({\textnormal{Gr}}_{\mathbb{R}})$ are free $R_C$-modules concentrated in even degree. For dual bases $h^i$ and $h_i$ of $H_C^*({\textnormal{Gr}}_{\mathbb{R}})$ and $H_*^C({\textnormal{Gr}}_{\mathbb{R}})$, we set $$\sigma_C(v\otimes h) = \sum_i (h^i\cup v)\otimes (h_i \wedge_{sp} h)$$ for $\wedge_{sp}$ the product on homology induced by specialization. The construction is independent of the choice of basis. Likewise, $H_M^*({\textnormal{Gr}}_{\mathbb{R}})$ and $H^M_*({\textnormal{Gr}}_{\mathbb{R}})$ are free $R_M$-modules concentrated in even degree, as $W_M$-fixed points of the action on $H_C^*({\textnormal{Gr}}_{\mathbb{R}})$ and $H_*^C({\textnormal{Gr}}_{\mathbb{R}})$. Hence we may define $\sigma_M$ analogously.
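The basis-independence here is, at bottom, the familiar linear-algebra fact that the canonical element $\sum_i h_i \otimes h^i$ built from any pair of dual bases is the identity tensor. A minimal numerical sketch of that fact (coordinates in $\mathbb{R}^n$ standing in for the free $R_C$-modules, purely for illustration):

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)

def canonical_element(basis):
    """Columns of `basis` form a basis of R^n; the dual basis consists of
    the rows of the inverse matrix.  Return sum_i h_i (x) h^i as an
    n x n matrix."""
    dual = np.linalg.inv(basis)  # row i is the dual functional h^i
    return sum(np.outer(basis[:, i], dual[i, :]) for i in range(n))

B1 = np.eye(n)                        # the standard basis
B2 = rng.normal(size=(n, n))          # a random second basis
assert abs(np.linalg.det(B2)) > 1e-8  # invertible, hence a genuine basis

# The canonical element is the identity tensor regardless of the basis.
assert np.allclose(canonical_element(B1), np.eye(n))
assert np.allclose(canonical_element(B2), np.eye(n))
print("sum_i h_i (x) h^i is independent of the choice of dual bases")
```

Under a change of basis by $P$, the dual basis changes by $(P^{-1})^T$, and the two changes cancel in the sum.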
We claim that $\sigma_C$ induces a morphism of Tannakian group schemes $$\sigma_C: {\textnormal{Spec }}H_*^C({\textnormal{Gr}}_{\mathbb{R}}) \rightarrow {\mathop{\rm Aut\,}}^\otimes H_C^* \cong G^\vee_X \times {\textnormal{Spec }}R_C$$ where $G^\vee_X$ is the Gaitsgory-Nadler dual group of $X$. Two properties need to be proven for this to hold. First, we need that $\sigma_C$ is a tensor automorphism so that it represents an $H_*^C({\textnormal{Gr}}_{\mathbb{R}})$-point of ${\mathop{\rm Aut\,}}^\otimes H_C^*$. Then, we need that the induced map respects multiplication in each algebraic group. Both properties essentially follow from adjointness of multiplication and comultiplication in $H_*^C({\textnormal{Gr}}_{\mathbb{R}})$ and $H^*_C({\textnormal{Gr}}_{\mathbb{R}})$. We will give the details below. **Lemma 2**. *The automorphism $\sigma_C$ of $H_C^*(-) \otimes_{R_C} H^C_*({\textnormal{Gr}}_{\mathbb{R}})$ is a tensor automorphism.* *Proof.* This follows from adjointness of $\Delta_{sp}$ and $\wedge_{sp}$. Let $v_1$ and $v_2$ be elements of $H_C^*({\mathcal F}_1)$ and $H_C^*({\mathcal F}_2)$. Under the isomorphism $H_C^*({\mathcal F}_1 * {\mathcal F}_2) \cong H_C^*({\mathcal F}_1) \otimes_{R_C} H_C^*({\mathcal F}_2)$, $v_1\otimes v_2$ identifies with an element of $H_C^*({\mathcal F}_1 * {\mathcal F}_2)$. Then we have the following computation: $$\sigma_C(v_1\otimes 1) \otimes \sigma_C(v_2\otimes 1) = \sum_{i,j} (h^i\cup v_1)\otimes (h^j \cup v_2) \otimes (h_i \wedge_{sp} h_j)$$ In other words, for $\mathcal M$ the matrix of $\wedge_{sp}: H_*^C({\textnormal{Gr}}_{\mathbb{R}}) \otimes_{R_C} H_*^C({\textnormal{Gr}}_{\mathbb{R}}) \rightarrow H_*^C({\textnormal{Gr}}_{\mathbb{R}})$ with respect to the bases $\{h_i \otimes h_j\}$ and $\{h_k\}$, we have that $$\sigma_C(v_1\otimes 1) \otimes \sigma_C(v_2\otimes 1) = \sum_{i,j, k} \mathcal M_{k, ij} (h^i\otimes h^j) \cdot (v_1 \otimes v_2) \otimes h_k.$$ where $\cdot$ represents the factor-wise cup product action.
Now, we use Lemma 1 on cupping with cohomology classes to calculate $\sigma_C((v_1\otimes v_2) \otimes 1)$. We have that $$\sigma_C((v_1\otimes v_2) \otimes 1) = \sum_k \Delta_{sp}(h^k)\cdot(v_1\otimes v_2)\otimes h_k.$$ For ${\mathcal D}$ the matrix of $\Delta_{sp}: H^*_C({\textnormal{Gr}}_{\mathbb{R}}) \rightarrow H^*_C({\textnormal{Gr}}_{\mathbb{R}}) \otimes H^*_C({\textnormal{Gr}}_{\mathbb{R}})$, we have that $$\sigma_C((v_1\otimes v_2) \otimes 1) = \sum_{i,j, k} {\mathcal D}_{ij, k} (h^i\otimes h^j) \cdot (v_1 \otimes v_2) \otimes h_k.$$ Since $\Delta_{sp}$ is the adjoint of $\wedge_{sp}$, we have that ${\mathcal D}= \mathcal M^T$. Hence, we have our tensor structure: $$\sigma_C((v_1\otimes v_2) \otimes 1) = \sigma_C(v_1\otimes 1) \otimes \sigma_C(v_2\otimes 1)$$ ◻ **Lemma 3**. *The map $\sigma_C: {\textnormal{Spec }}H^C_*({\textnormal{Gr}}_{\mathbb{R}}) \rightarrow G^\vee_X \times {\textnormal{Spec }}R_C$ is a morphism of algebraic groups.* *Proof.* This follows from adjointness of $\cup$ and $\Delta_*$, where $\Delta_*$ is the coproduct in homology. We need to check that the algebraic group multiplication ${\textnormal{Spec }}(\Delta_*)$ of $H^C_*({\textnormal{Gr}}_{\mathbb{R}})$ is mapped to the multiplication $\mu$ of $G^\vee_X$. This corresponds to the commutativity of the following diagram of categories: $$\begin{tikzcd} {\mathbf Q}({\textnormal{Gr}}_{\mathbb{R}})_{^pH} \arrow[d, "\mu^*"] \arrow[r,"\sigma_C"] &{\textnormal{Mod}}^*(H_*^C({\textnormal{Gr}}_{\mathbb{R}})) \arrow[d, "\Delta_*"]\\ {\mathbf Q}({\textnormal{Gr}}_{\mathbb{R}}\times {\textnormal{Gr}}_{\mathbb{R}})_{^pH} \arrow[r, "\sigma_C \otimes \sigma_C"] &{\textnormal{Mod}}^*(H_*^C({\textnormal{Gr}}_{\mathbb{R}})\otimes H_*^C({\textnormal{Gr}}_{\mathbb{R}})) \end{tikzcd}$$ Here, $\mu^*$ is the pullback of the multiplication map $\mu: G_{\mathbb{R}}({\mathbb{R}}((t))) \times_{G({\mathbb{R}}[[t]])} {\textnormal{Gr}}_{{\mathbb{R}}} \rightarrow {\textnormal{Gr}}_{\mathbb{R}}$.
Now, $(\sigma_C\otimes\sigma_C) \circ \mu^*$ maps a sheaf ${\mathcal F}$ to $H^*_C({\mathcal F})\otimes H_*^C({\textnormal{Gr}}_{\mathbb{R}}) \otimes H_*^C({\textnormal{Gr}}_{\mathbb{R}})$ together with the automorphism $$v\otimes 1 \otimes 1 \longmapsto \sum_{i,j} (h^i\cup h^j) \cup v \otimes (h_i\otimes h_j).$$ Similarly, $\Delta_*\circ \sigma_C$ maps ${\mathcal F}$ to $H^*_C({\mathcal F})\otimes H_*^C({\textnormal{Gr}}_{\mathbb{R}}) \otimes H_*^C({\textnormal{Gr}}_{\mathbb{R}})$ together with the automorphism $$v\otimes 1 \otimes 1 \longmapsto \sum_k h^k\cup v \otimes \Delta_*(h_k).$$ By an adjointness argument identical to that for the previous lemma, the automorphisms are equal. ◻ Since the morphism respects the MV filtration, its image lies in a Borel subgroup of $G^\vee_X \times {\textnormal{Spec }}R_C$. Hence, we write $$\sigma_C: {\textnormal{Spec }}H_*^C({\textnormal{Gr}}_{\mathbb{R}}) \rightarrow B^\vee_X \times {\textnormal{Spec }}R_C$$ Likewise, in the $M$-equivariant setting, we have a morphism of Tannakian groups $$\sigma_M: {\textnormal{Spec }}H_*^M({\textnormal{Gr}}_{\mathbb{R}}) \rightarrow B^\vee_X \times {\textnormal{Spec }}R_M.$$ De-equivariantizing, we have a base morphism in ordinary homology $$\sigma: {\textnormal{Spec }}H_*({\textnormal{Gr}}_{\mathbb{R}}) \rightarrow B^\vee_X$$ We make an observation about the nature of this morphism, to be used in a later paper. Analogously to Remark 3.4 in [@yun_integral_2011], we have the following result regarding the composition $\tau_C= \pi \circ \sigma_C: {\textnormal{Spec }}H_*^C({\textnormal{Gr}}_{\mathbb{R}}) \rightarrow T^\vee_X \times {\textnormal{Spec }}R_C$ for $\pi$ the natural projection $B^\vee_X \times {\textnormal{Spec }}R_C \rightarrow T^\vee_X \times {\textnormal{Spec }}R_C$. **Lemma 4**.
*We have the following equality of morphisms:* *$$\tau_C = {\textnormal{Spec }}(Loc_*): {\textnormal{Spec }}H_*^C({\textnormal{Gr}}_{\mathbb{R}}) \rightarrow T^\vee_X \times {\textnormal{Spec }}R_C$$* *Proof.* We follow the proof of the corresponding result in [@yun_integral_2011] closely. For each weight vector $v_\lambda \in H_{C,c}(S_{\lambda,{\mathbb{R}}}, {\mathcal F})$, we have that, for $i_\lambda$ the inclusion of $\{t^\lambda\}$ into ${\textnormal{Gr}}_{\mathbb{R}}$, $$\sigma_C(v_\lambda) = \sum_j (i_\lambda^*(h^j) \cup v_\lambda) \otimes h_j = v_\lambda \otimes i_{\lambda,*}(1)$$ Since equivariant localization $Loc^*$ is the product of all $i_\lambda^*$, its dual $Loc_*$ is the sum of all $i_{\lambda,*}$, which has the same effect as $\sigma_C$. Hence $\tau_C = {\textnormal{Spec }}(Loc_*)$. ◻ # Primitive classes {#primitive-classes .unnumbered} We seek to show that the image of $\sigma$ centralizes a principal nilpotent of $\mathfrak{g}^\vee_X$. For now, we consider the adjoint forms of $G_{\mathbb{R}}$: ${\textnormal{PSL}}(n,{\mathbb{H}}), {\textnormal{PSO}}(1,2n-1),$ and $PE_6(F_4)$, along with their corresponding symmetric spaces. In every case, our relative dual group is $G^\vee_X = \textnormal{SL}(n,{\mathbb{C}})$. Once and for all, identify $\Phi^\vee_X = \{e_i-e_j\}$ with simple roots $\alpha_i = e_i-e_{i+1}$. Recall that $e^1 = e_1^*$ is the fundamental coweight corresponding to $\alpha_{1}$. Consider the minuscule variety ${\textnormal{Gr}}_{e^1, {\mathbb{R}}}\cong G_{\mathbb{R}}/P_{e^1,{\mathbb{R}}}$, where $P_{e^1,{\mathbb{R}}}$ is the parabolic of $G_{\mathbb{R}}$ generated by the root subgroups $U_\alpha$ with $\langle e^1, \alpha\rangle\leq 0$. When $G_{\mathbb{R}}= {\textnormal{PSL}}(n,{\mathbb{H}}), {\textnormal{PSO}}(1,2n-1),$ or $PE_6(F_4)$, we have that ${\textnormal{Gr}}_{e^1,{\mathbb{R}}} = {\mathbb{H}}P^{n-1}, S^{2n-2},$ or ${\mathbb{O}}P^2$ respectively. In all cases, $H^*({\textnormal{Gr}}_{e^1,{\mathbb{R}}}) \cong {\mathbb{Z}}[a]/(a^n)$, where $a$ is of degree $d=4, 2n-2,$ or $8$ respectively.
In the projective space cases, we may realize $a$ as the Euler class of the dual of the tautological bundle.[^1] For the even spheres, it is $-\frac{1}{2}e_1(TS^{2n-2})$. Since $H_*({\textnormal{Gr}}_{e^1, {\mathbb{R}}})$ generates the homology of ${\textnormal{Gr}}_{\mathbb{R}}$ by an argument of Bott [@Bott_1958], we may extend the class $a\in H^*({\textnormal{Gr}}_{e^1,{\mathbb{R}}})$ to a class $a_{\det}\in H^*({\textnormal{Gr}}_{\mathbb{R}})$. We call this class the determinant class by analogy with the determinant line bundle over the complex affine Grassmannian. Indeed, for $G_{\mathbb{R}}$ defined over ${\mathbb{H}}$ we may repeat the construction in [@yun_integral_2011] to get an honest quaternionic determinant line bundle over ${\textnormal{Gr}}_{\mathbb{R}}$. However, for $G_{\mathbb{R}}= {\textnormal{PSO}}(1, 2n-1)$ with $n\neq 1, 2, 4, 8$ we may not generalize the construction--there is no bundle even on ${\textnormal{Gr}}_{e^1,{\mathbb{R}}} = S^{2n-2}$ with the correct Euler class. Using [Lemma 1](#tensorend){reference-type="ref" reference="tensorend"} and the fact that $a_{\det}$ is primitive, we get that taking the cup product with $a_{\det}$ produces an endomorphism of $H^*: Q({\textnormal{Gr}}_{\mathbb{R}})_{^pH} \rightarrow {\mathop{\rm Vect}}_k$ with tensor formula $a_{\det}\otimes 1 + 1\otimes a_{\det}$. In diagrams, $$\begin{tikzcd} H^*({\mathcal F})\otimes H^*({\mathcal G})\arrow[d, "\cup(a_{\det}\otimes 1 + 1 \otimes a_{\det})"]\arrow[r, "\sim" ] & H^*({\mathcal F}*{\mathcal G}) \arrow[d, "\cup a_{\det}"] \\ H^*({\mathcal F})\otimes H^*({\mathcal G}) \arrow[r, "\sim"] & H^*({\mathcal F}*{\mathcal G}) \end{tikzcd}$$ According to the Tannakian dictionary, such an endomorphism corresponds to an element $e_X\in \mathfrak{g}_X^\vee$. With respect to the basis $\{1,a, a^2,\dots, a^{n-1}\}$ of $H^*({\textnormal{Gr}}_{e^1,{\mathbb{R}}})$, cupping with $a$ corresponds to the standard principal nilpotent in $\mathfrak{g}_X^\vee$.
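To make "standard principal nilpotent" concrete: on the basis $\{1,a,\dots,a^{n-1}\}$, cupping with $a$ shifts $a^k \mapsto a^{k+1}$ and kills $a^{n-1}$, so its matrix is a single nilpotent Jordan block. A quick numerical check of this regularity claim (the choice $n = 4$ is an arbitrary illustration, not tied to a particular $G_{\mathbb{R}}$):

```python
import numpy as np

n = 4
# Matrix of "cup with a" on the basis {1, a, ..., a^(n-1)} of Z[a]/(a^n):
# a * a^k = a^(k+1) for k < n-1, and a * a^(n-1) = 0.
N = np.diag(np.ones(n - 1), k=-1)

# Nilpotent of order exactly n ...
assert np.allclose(np.linalg.matrix_power(N, n), 0)
assert not np.allclose(np.linalg.matrix_power(N, n - 1), 0)

# ... and regular: a single Jordan block, equivalently a 1-dimensional kernel.
assert np.linalg.matrix_rank(N) == n - 1
print("cup with a acts as a regular nilpotent on Z[a]/(a^n)")
```

A nilpotent matrix is regular (principal) exactly when it has a single Jordan block, which the rank computation verifies.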
Now, we give the class an $M$-equivariant structure, giving rise to an element $a_{\det}^M$ restricting to $a_{\det}$. As before, $a_{\det}^M$ will be primitive in $H^*_M({\textnormal{Gr}}_{\mathbb{R}},{\mathbb{Q}})$. Hence cupping with $a_{\det}^M$ will correspond to an element of $\mathfrak{g}_X^\vee\otimes R_M$ with constant part $a_{\det}$. By degree considerations, the nonconstant part $a_{\det}^M-a_{\det}$ lies in ${\mathfrak t}_X^\vee \otimes H^d_M({\textnormal{Gr}}_{\mathbb{R}})$. Hence $a_{\det}^M-a_{\det}$ is of the form $\sum_i g_i\otimes h_i$--the action on $H_M^*(S_{\nu, {\mathbb{R}}})$ is given by $t^\nu \mapsto \sum_i \langle g_i, \nu \rangle h_i$. Using this formula, we calculate $a_{\det}^M-a_{\det}$ case-by-case: For $G_{\mathbb{R}}= {\textnormal{PGL}}_n({\mathbb{H}})$, recall that $M \cong ({\mathbb{H}}^*)^n/{\mathbb{R}}^*$. Up to homotopy equivalence, we may take $M \simeq K\cap M = (\textnormal{SU}(2))^n/\{\pm 1\}$. Hence we may take $\mathfrak{k} \cap \mathfrak{m} \cong \mathfrak{su}(2)^n$. The tautological line bundle on ${\mathbb{H}}P^{n-1}$ does not have an $M$-equivariant structure since the center acts nontrivially on fibers. However, we may give it the usual $({\mathbb{H}}^*)^n$-equivariant structure and take the equivariant Euler class to get $a_{\det}^M$. To calculate the nonconstant part $a_{\det}^M-a_{\det}$, restrict to each of the fixed points $t^{e^i}$. On the fiber above $\{t^{e^i}\}$, $M$ acts by the pullback of the standard vector representation of $\textnormal{SU}(2)$ along the $i$th projection $M \rightarrow \textnormal{SU}(2)$. Thus, at $t^{e^i}$, $a_{\det}^M-a_{\det}$ restricts to $z_i^2$, where $z_i$ is the coordinate of the Cartan of the $i$th copy of $\mathfrak{su}(2)$.
Observing that we quotient by the subgroup $\{\pm 1\}$, we get the formula for this case, $$a_{\det}^M-a_{\det}= \frac{1}{2}\sum_i e_i \otimes (z_i^2)$$ For $G_{\mathbb{R}}= PE_6(F_4)$ or ${\textnormal{PSO}}(1,9)$, we may use the equivariant structure of the tautological line bundle on the octonionic projective spaces to give our class equivariant structure. We treat the case of ${\textnormal{PSO}}(1,9)$ first: here, $M = {\textnormal{SO}}(8)$ and ${\textnormal{Gr}}_{\mu,{\mathbb{R}}} = {\mathbb{O}}P^1$. As in the quaternionic case, the center of the double cover $\textnormal{Spin}(9)$ of the maximal compact subgroup ${\textnormal{SO}}(9)$ acts nontrivially on fibers. Hence we must pick up a coefficient of $\frac{1}{2}$ in the expansion. Restricting to the north pole $\{t^{e_1}\}$, the bundle becomes the vector representation of $\mathfrak{m}$. Hence, for $z_i$, $i=2,\dots,5$, the coordinates of the Cartan of ${\mathfrak{so}}(8)$, we get that $$a_{\det}^M-a_{\det}= \frac{1}{2}e_1 \otimes z_2\dots z_5$$ For $G_{\mathbb{R}}= PE_6(F_4)$, we observe that $M = \textnormal{Spin}(8)$. Since the compact group $F_4$ is both simply-connected and centerless, we do not pick up any fractional coefficients. Restricting to $\{t^{x_1}\}$ and $\{t^{x_2}\}$ respectively, we get the vector representation of $\mathfrak{m}$. Hence $$a_{\det}^M-a_{\det}= (e_1+e_2)\otimes z_2\dots z_5.$$ For $G_{\mathbb{R}}= {\textnormal{PSO}}(1,2n-1)$, we use $a_{\det}= e^M_1(TS^{2n-2})$ with $M = {\textnormal{SO}}(2n-2)$. We may identify the restriction to $\{t^{e^1}\}$ with the vector space ${\mathfrak{so}}(2n-1)/{\mathfrak{so}}(2n-2)$. The left action of ${\textnormal{SO}}(2n-2)$ may be identified with the vector representation. Hence the nonconstant portion of the Euler class is given by $$a_{\det}^M-a_{\det}= e_1\otimes t_2t_3\dots t_n$$ where $t_i$ are the Cartan coordinates of the copy of ${\textnormal{SO}}(2n-2)$ in the lower right corner of ${\textnormal{SO}}^+(1,2n-1)$.
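All four formulas follow the same localization pattern, which may be worth recording explicitly: at an isolated fixed point, an equivariant Euler class restricts to the product of the weights of the isotropy representation on the fibre. Schematically (a standard fact, stated here as a sketch, not a claim from the case analysis above):

```latex
e^{M}(E)\big|_{p} \;=\; \prod_{i} w_i(E_p),
\qquad w_i(E_p) \ \text{the weights of } M \text{ acting on the fibre } E_p .
```

For instance, for $\textnormal{SO}(2)$ rotating $S^2$, with $z$ the coordinate on its Cartan, the tangent space at the north pole is the weight-$z$ rotation representation, so $e^{\textnormal{SO}(2)}(TS^2)$ restricts to $z$ there and to $-z$ at the south pole.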
# Proof of isomorphism {#proof-of-isomorphism .unnumbered} We now calculate the image of $\sigma_M$ and $\sigma$ and show that they are injective. Let $B^\vee_{X, e^M}$ denote the centralizer of $e^M$ in $B^\vee_X \times {\textnormal{Spec }}R_M$. Since $\sigma_M$ commutes with $p_1^M$, the image of $\sigma_M$, considered as a morphism, lands in $B^\vee_{X, e^M}$. Thus, our morphism becomes $$\sigma_M: {\textnormal{Spec }}H_*^M({\textnormal{Gr}}_{\mathbb{R}}) \rightarrow B^\vee_{X, e^M}$$ For the torus $C$, let $B^\vee_{X, e^C}$ be the centralizer of $e^C = e^M$ in $B^\vee_{X} \times {\textnormal{Spec }}R_C$. In this setting, we have $$\sigma_C: {\textnormal{Spec }}H_*^C({\textnormal{Gr}}_{\mathbb{R}}) \rightarrow B^\vee_{X, e^C}.$$ Passing to the nonequivariant setting, we get $$\sigma: {\textnormal{Spec }}H_*({\textnormal{Gr}}_{\mathbb{R}}) \rightarrow B^\vee_{X, e}$$ where $B^\vee_{X, e}$ is the centralizer of $e$ in $B^\vee_{X}$. We show that these are, in fact, isomorphisms after inverting a few small primes. For the non-group cases, it suffices to invert at most the prime $2$. **Theorem 7**. *The maps $\sigma$, $\sigma_C$, and $\sigma_M$ are closed embeddings.* *Proof.* We prove the non-equivariant case. The $C$- and $M$-equivariant cases follow by lifting basis elements. We wish to show that the image of every $H_*({\textnormal{Gr}}_{\nu, {\mathbb{R}}})$ inside $H_*({\textnormal{Gr}}_{\mathbb{R}})$ lies in the image of $\mathcal{O}(B^\vee_{X, e})$. If such a statement holds, then the map $\mathcal{O}(B^\vee_{X, e})\rightarrow H_*({\textnormal{Gr}}_{\mathbb{R}})$ is surjective, since $H_*({\textnormal{Gr}}_{\mathbb{R}})$ is generated by the fundamental class of one cell ${\textnormal{Gr}}_{\nu, {\mathbb{R}}}$. To show that such a cell exists, we appeal to Theorems 1-3 of Bott's paper "The Space of Loops on a Lie Group" [@Bott_1958]. Bott works with the case where the symmetric space is a centerless compact Lie group.
However, the results of the earlier paper of Bott and Samelson [@bottsamelson] give most of the setup for Theorem 1 in our case. The remainder of the proofs of Theorems 1 and 2 follow from the observation that, for centerless, splitting-rank real groups, the root datum of the relative dual group is that of $\textnormal{SL}_n({\mathbb{C}})$. Theorem 3 is an application of Theorems 1 and 2, together with homological commutativity of $H^*(\Omega(G/K),{\mathbb{Q}})$. As such, Bott's results immediately extend, and we have our generating cell ${\textnormal{Gr}}_{\nu, {\mathbb{R}}}$. Now, consider the following sequence of sheaves on ${\textnormal{Gr}}_{\leq \nu, {\mathbb{R}}}$: $${\mathbb{Z}}\rightarrow {\mathcal I}^\nu_*[-\langle \rho^\vee, \nu \rangle] \rightarrow j^\nu_*{\mathbb{Z}}$$ Taking cohomology, we get the maps $$H^*({\textnormal{Gr}}_{\leq \nu, {\mathbb{R}}}) \rightarrow H^{*-\langle \rho^\vee, \nu \rangle} ({\mathcal I}^\nu_*) \rightarrow H^*({\textnormal{Gr}}_{\nu, {\mathbb{R}}}).$$ Here the first map is given explicitly by cupping with $v_{low} = [{\textnormal{Gr}}_{\leq \nu, {\mathbb{R}}}]$. The composition and the second map are given by the natural restrictions. Dualizing, we get maps in homology: $$H_*({\textnormal{Gr}}_{\nu, {\mathbb{R}}}) \rightarrow H_{*-\langle \rho^\vee, \nu \rangle} ({\mathcal I}^\nu_*) \rightarrow H_*({\textnormal{Gr}}_{\leq \nu, {\mathbb{R}}}).$$ Here our map $H_{*-\langle \rho^\vee, \nu \rangle} ({\mathcal I}^\nu_*) \rightarrow H_*({\textnormal{Gr}}_{\leq \nu, {\mathbb{R}}})$ is the adjoint of $H^*({\textnormal{Gr}}_{\leq \nu, {\mathbb{R}}}) \rightarrow H^{*-\langle \rho^\vee, \nu \rangle} ({\mathcal I}^\nu_*)$.
Hence its composition with $H_*({\textnormal{Gr}}_{\leq \nu, {\mathbb{R}}}) \rightarrow H_*({\textnormal{Gr}}_{\mathbb{R}})$ is given by $$u \longmapsto \sum_i \langle u, h^i\cdot[{\textnormal{Gr}}_{\leq \nu, {\mathbb{R}}}]\rangle\,h_i.$$ Now, there is another natural map $H_{*-\langle \rho^\vee, \nu \rangle} ({\mathcal I}^\nu_*) \rightarrow H_*({\textnormal{Gr}}_{{\mathbb{R}}})$. Recall that $H^{*-\langle \rho^\vee, \nu \rangle} ({\mathcal I}^\nu_*)$ is the $G^{\vee}_{X}$-Schur module of lowest weight $[{\textnormal{Gr}}_{\leq \nu, {\mathbb{R}}}]$. We map $H_{*-\langle \rho^\vee, \nu \rangle} ({\mathcal I}^\nu_*) \rightarrow \mathcal{O}(B^\vee_{X,e})$ by the matrix coefficient of $H^{*-\langle \rho^\vee, \nu \rangle} ({\mathcal I}^\nu_*)$ with respect to $[{\textnormal{Gr}}_{\leq \nu, {\mathbb{R}}}]$. Explicitly, the map is given by $$u \mapsto \langle u, b\cdot [{\textnormal{Gr}}_{\leq \nu, {\mathbb{R}}}] \rangle$$ for all $b\in B^\vee_{X,e}$. We now take the composition $$H_{*-\langle \rho^\vee, \nu \rangle} ({\mathcal I}^\nu_*) \rightarrow \mathcal{O}(B^\vee_{X,e}) \xrightarrow{\sigma^*} H_*({\textnormal{Gr}}_{\mathbb{R}}).$$ This composition is given by the following formula: $$u \longmapsto \langle u, b\cdot \sigma([{\textnormal{Gr}}_{\leq \nu, {\mathbb{R}}}]) \rangle = \sum_i \langle u, h^i\cdot[{\textnormal{Gr}}_{\leq \nu, {\mathbb{R}}}]\rangle\,h_i$$ This is the same as our earlier map.
In other words, the following square commutes: $$\begin{tikzcd} H_{*-\langle \rho^\vee, \nu \rangle} ({\mathcal I}^\nu_*)\arrow[d] \arrow[r] & \mathcal{O}(B^\vee_{X,e})\arrow[d] \\ H_*({\textnormal{Gr}}_{\leq \nu, {\mathbb{R}}}) \arrow[r] & H_*({\textnormal{Gr}}_{\mathbb{R}}) \end{tikzcd}$$ Precomposing our first map $H_*({\textnormal{Gr}}_{\nu, {\mathbb{R}}}) \rightarrow H_{*-\langle \rho^\vee, \nu \rangle} ({\mathcal I}^\nu_*)$, we get that the image of $H_*({\textnormal{Gr}}_{\nu, {\mathbb{R}}})$ in $H_*({\textnormal{Gr}}_{\mathbb{R}})$ lies inside the image of $\sigma^*$. Allowing $\nu$ to range over all cocharacters, we get that $\sigma^*$ is a surjection. Hence $\sigma: {\textnormal{Spec }}H_*({\textnormal{Gr}}_{\mathbb{R}}) \rightarrow B^\vee_{X, e}$ is a closed embedding. ◻ For injectivity, we need an intermediate lemma on flatness. Let $\ell_{G}$ be the squared ratio of the lengths of the real coroots of $G_{\mathbb{R}}$. **Lemma 5**. *The scheme $B^\vee_{X,e}$ is flat over ${\mathbb{Z}}[1/\ell_G]$.* *Likewise, $B^\vee_{X,e^C}$ (resp. $B^\vee_{X, e^M}$) is flat over $R_C[1/n_G\ell_G]$ (respectively $R_M[1/n_G\ell_G]$).* *Proof.* The proof is exactly as in [@yun_integral_2011]. First, we prove the non-equivariant case. Then we will reduce both equivariant cases to the non-equivariant case. Define the following map: $$\varphi: B^{\vee}_{X} \rightarrow \mathfrak{u}^\vee_X \times {\textnormal{Spec }}{\mathbb{Z}}[1/\ell_G]$$ given by $\varphi(b) = \textnormal{Ad}(b)e-e$. By construction, $B^\vee_{X,e}$ is precisely the fiber over zero of this map. Each fiber of $B^\vee_{X,e} \rightarrow {\textnormal{Spec }}{\mathbb{Z}}[1/\ell_G]$, then, has dimension at least $r = \dim B^\vee_X - \dim \mathfrak{u}^\vee_X$. If each fiber has dimension exactly $r$, then $B^\vee_{X,e}$ is locally a complete intersection over ${\textnormal{Spec }}{\mathbb{Z}}[1/\ell_G]$. Since ${\textnormal{Spec }}{\mathbb{Z}}[1/\ell_G]$ is regular, $B^\vee_{X,e}$ will then be flat.
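Before checking fibers, a toy version of $\varphi$ may help fix ideas. The following is a sketch for the upper-triangular Borel of $\textnormal{SL}_2$ over $\mathbb{Q}$ (an illustrative stand-in, not the integral $B^\vee_X$ of the lemma):

```python
import sympy as sp

t, b = sp.symbols('t b')

# Borel of SL_2: upper-triangular matrices of determinant 1.
g = sp.Matrix([[t, b], [0, 1 / t]])
e = sp.Matrix([[0, 1], [0, 0]])          # standard principal nilpotent

phi = sp.simplify(g * e * g.inv() - e)   # phi(g) = Ad(g)e - e

# Ad(g)e = t^2 e, so phi(g) = (t^2 - 1) e:
assert sp.simplify(phi - (t**2 - 1) * e) == sp.zeros(2, 2)

# Zero fiber: t = +/-1 with b free -- a 1-dimensional subgroup,
# matching r = dim B - dim u = 2 - 1 = 1.
assert set(sp.solve(t**2 - 1, t)) == {-1, 1}
print("B_e = { t = +/-1, b free }: dimension 1")
```

Here the zero fiber is cut out by the single equation $t^2 = 1$, so it has dimension $1 = \dim B - \dim \mathfrak{u}$, the pattern the dimension count in the proof exploits.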
We check the dimension of the fiber over each prime $p$ of ${\textnormal{Spec }}{\mathbb{Z}}[1/\ell_G]$. Base change $B^\vee_X$ to $k = \overline{\mathbb{F}}_p$. In this setting, we may scale $e = \sum_i n_ix_i$ by an element of $T(k)$ to the standard principal nilpotent $e_0 = \sum_i x_i$. Then $B^\vee_{X,e} = Z^\vee \times U^\vee_{X,e_0}$. Since $\dim U^\vee_{X,e_0} = r-\dim Z^\vee$, we get that our fiber dimension is exactly $r$. Hence $B^\vee_{X,e}$ is flat over ${\mathbb{Z}}[1/\ell_G]$. For the equivariant cases, we reduce to the nonequivariant case in the following way. We treat the case of $R_C[1/\ell_Gn_G]$--an identical argument applies to $R_M[1/\ell_Gn_G]$. We mimic the construction of $\varphi$ in this setting. Consider the morphism $$\textnormal{Ad}(-)e^M-e^M: B^\vee_X \times {\textnormal{Spec }}R_C[1/\ell_Gn_G] \rightarrow \mathfrak{u}^\vee_X \times {\textnormal{Spec }}R_C[1/\ell_Gn_G].$$ As in the non-equivariant case, $B^\vee_{X, e^C}$ is the fiber above zero of this map. As before, $B^\vee_{X, e^C}$ will be locally a complete intersection, and hence flat, if every fiber dimension is exactly $r$. Recall that ${\textnormal{Spec }}R_C$ comes equipped with a natural $\mathbb{G}_m$-action given by the grading on $R_C$, along with the inclusion ${\textnormal{Spec }}{\mathbb{Z}}\rightarrow {\textnormal{Spec }}R_C$ given by sending every positive-degree monomial to zero. The $\mathbb{G}_m$-action is compatible with all base changes ${\textnormal{Spec }}(R_C\otimes k)$ for $k$ the residue field of a prime of ${\mathbb{Z}}$. Hence, for a prime $P \in {\textnormal{Spec }}R_C$ lying above $p \in {\textnormal{Spec }}{\mathbb{Z}}$, we have that the $\mathbb{G}_m$-orbit of $P$ remains above $p$. The closure of the orbit, then, contains $p$. Thus we have $\dim B^\vee_{X, e^C, P} \leq \dim B^\vee_{X, e^C, p}$, where $p\in {\mathbb{Z}}$. Since $B^\vee_{X, e^C, p}$ is simply the fiber of $B^\vee_{X, e}$ over $p$, its dimension is exactly $r$.
Thus, every fiber in the equivariant case also has dimension $r$, proving the assertion. ◻ Now we prove injectivity and hence obtain the desired isomorphisms. **Theorem 8**. *We have the following isomorphisms: $$\begin{aligned} {\textnormal{Spec }}H_*({\textnormal{Gr}}_{{\mathbb{R}}}, {\mathbb{Z}}[1/\ell_G]) &\cong B^\vee_{X,e}[1/\ell_G] \\ {\textnormal{Spec }}H_*^C({\textnormal{Gr}}_{{\mathbb{R}}}, {\mathbb{Z}}[1/n_G\ell_G]) &\cong B^\vee_{X,e^C}[1/\ell_G]\\ {\textnormal{Spec }}H_*^M({\textnormal{Gr}}_{{\mathbb{R}}}, {\mathbb{Z}}[1/n_G\ell_G]) &\cong B^\vee_{X,e^M} [1/\ell_G]\end{aligned}$$* *Proof.* We begin with the $C$-equivariant case, where we can use the equivariant localization theorem. Up to contracting our semi-infinite orbits $S_{\nu,{\mathbb{R}}}$, our fixed points are given by $t^\nu$ with $\nu \in X_*(A)$ for $A$ the maximal split torus. Using the localization theorem, the natural map $$Loc^*: H_C^*({\textnormal{Gr}}_{\mathbb{R}}) \rightarrow \prod_{\nu\in X_*(A_{\mathbb{R}})} H_C^*(\{t^\nu\})$$ becomes an isomorphism when tensored with $Q = \textnormal{Frac } R_C$. We take the adjoint: $$Loc_*: \bigoplus_{\nu\in X_*(A_{\mathbb{R}})} H_C^*(\{t^\nu\}) \rightarrow H^C_*({\textnormal{Gr}}_{\mathbb{R}})$$ Identify the sum on the left with $R_C[X_*(A_{\mathbb{R}})]$ and take the spectrum. We get the following map: $${\textnormal{Spec }}(Loc_*): {\textnormal{Spec }}H^C_*({\textnormal{Gr}}_{\mathbb{R}}) \rightarrow A^\vee \times {\textnormal{Spec }}R_C$$ This becomes an isomorphism after base change to ${\textnormal{Spec }}Q$. We claim that this factors as follows: $$\begin{tikzcd} {\textnormal{Spec }}(H^C_*({\textnormal{Gr}}_{\mathbb{R}})) \arrow[r, "\sigma_C"] \arrow[rd] & \arrow[d] B^\vee_{X, e^C}[1/\ell_G]\\ & A^\vee \times {\textnormal{Spec }}R_C \end{tikzcd}$$ Consider the composition of $\sigma_C$ with projection to $A^\vee \times {\textnormal{Spec }}R_C$. 
Observe that the action of $H_C^*({\textnormal{Gr}}_{\mathbb{R}})$ on each character functor $H_{C,c}(S_{\nu,{\mathbb{R}}}, {\mathcal F})$ factors through restriction to $H_C(S_{\nu,{\mathbb{R}}}) = H_C(\{t^\nu\})$. By adjunction, for $i_\nu : \{t^\nu\} \rightarrow {\textnormal{Gr}}_{\mathbb{R}}$ and $v_\nu \in H_{C,c}(S_{\nu,{\mathbb{R}}}, {\mathcal F})$, we have that $$\sigma_C(v_\nu) = \sum_{\nu} (i_\nu^*(h^i)\cup v_\nu) \otimes h_i = v_\nu\otimes i_{\nu,*}(1).$$ Summing over all $i_{\nu,*}$, we get the map $Loc_*$. Hence the diagram commutes. Now, after localization, we have that ${\textnormal{Spec }}(Loc_*)$ is an isomorphism. By principality of $e$, the projection onto $A^\vee \times {\textnormal{Spec }}R_C$ is also an isomorphism after localization. Thus $\sigma_C$ is an isomorphism after localization. Using flatness of both $B^\vee_{X,e^C}[1/\ell_G]$ and ${\textnormal{Spec }}(H^C_*({\textnormal{Gr}}_{\mathbb{R}}, {\mathbb{Z}}[1/n_G\ell_G]))$, we get that $\sigma_C$ is an isomorphism. This proves the $C$-equivariant case. For the $M$-equivariant case, take the quotient by $W_M$ of both sides. For the de-equivariantized case, base changing the $C$-equivariant case to ${\textnormal{Spec }}{\mathbb{Q}}$ gives an isomorphism ${\textnormal{Spec }}H_*({\textnormal{Gr}}_{{\mathbb{R}}}, {\mathbb{Q}}) \cong B^\vee_{X,e}\times {\textnormal{Spec }}{\mathbb{Q}}$. Using flatness, we get the result over ${\mathbb{Z}}[1/\ell_G]$. ◻ # $G_{\mathbb{R}}$-equivariant homology {#g_mathbbr-equivariant-homology .unnumbered} We use the description of $C$ and $M$-equivariant homology to calculate the $G_{\mathbb{R}}$-equivariant homology of ${\textnormal{Gr}}_{\mathbb{R}}$. Recall that $G_{\mathbb{R}}({\mathbb{R}}[[t]])$ retracts onto $G_{\mathbb{R}}$, which in turn retracts onto its maximal compact subgroup $K$. After inverting 2, we have that $R_K = (R_C)^{W_K}$, where $W_K$ is the Weyl group of $K$. 
Thus we may identify $$H_*^{G_{\mathbb{R}}}({\textnormal{Gr}}_{\mathbb{R}}) \cong (H_*^C({\textnormal{Gr}}_{\mathbb{R}}))^{W_K} \cong \mathcal{O}(B^\vee_{X, e^C})^{W_K}.$$ To calculate the quotient $B^\vee_{X, e^C}/W_K$, we take the quotient in two steps. We know that $B^\vee_{X, e^M} = B^\vee_{X, e^C}/W_M$. Since $W_M$ is normal in $W_K$ with $W_K/W_M \cong W(G^\vee_X)$, we note that $B^\vee_{X, e^C}/W_K \cong B^\vee_{X, e^M}/W(G^\vee_X)$. By [@yun_integral_2011], after inverting the set $S_2$ of all bad primes of $G^\vee_{X}$ and those dividing $n_G$, we may identify $B^\vee_{X, e^M}/W(G^\vee_X)$ with the regular centralizer group scheme $J^\vee_X$ of $G^\vee_X$. Hence, we have the following theorem. **Theorem 9**. *For $G_{\mathbb{R}}$ centerless, there is an isomorphism of group schemes $${\textnormal{Spec }}H_*^{G_{\mathbb{R}}}({\textnormal{Gr}}_{\mathbb{R}}, {\mathbb{Z}}_{S_2}) \cong J^\vee_X \times {\textnormal{Spec }}{\mathbb{Z}}_{S_2}.$$* We conclude with a quick observation about the other cases. We have only used $G_{\mathbb{R}}$ being centerless to guarantee a minuscule cocharacter $\nu$--hence a smooth variety ${\textnormal{Gr}}_{\leq \nu,{\mathbb{R}}}$ generating $H_*({\textnormal{Gr}}_{\mathbb{R}},{\mathbb{Q}})$. Other forms may have such a coweight: in particular, by a paper of Crabb and Mitchell, the form $G_{\mathbb{R}}= \textnormal{GL}_n({\mathbb{H}})$ corresponding to $G/K = \textnormal{U}(2n)/{\textnormal{Sp}}(n)$ has such a coweight [@crabb_mitchell_1988]. For these, our results hold by an identical proof. Hence, we have the following corollary: **Corollary 1**. 
*For $G_{\mathbb{R}}$ with a minuscule generating cocharacter, we have the following isomorphisms:* *$$\begin{aligned} {\textnormal{Spec }}H_*({\textnormal{Gr}}_{{\mathbb{R}}}, {\mathbb{Z}}[1/\ell_G]) &\cong B^\vee_{X,e}[1/\ell_G] \\ {\textnormal{Spec }}H_*^M({\textnormal{Gr}}_{{\mathbb{R}}}, {\mathbb{Z}}[1/n_G\ell_G]) &\cong B^\vee_{X,e^M}[1/\ell_G]\\ {\textnormal{Spec }}H_*^{G_{\mathbb{R}}}({\textnormal{Gr}}_{\mathbb{R}}, {\mathbb{Z}}_{S_2}) &\cong J^\vee_X \times {\textnormal{Spec }}{\mathbb{Z}}_{S_2}\end{aligned}$$* # Acknowledgements {#acknowledgements .unnumbered} I would like to acknowledge my advisor Tsao-Hsien Chen for his help with this work, as well as David Nadler and Mark Macerato for insightful conversations. This research is supported by NSF grant DMS 2143722. [^1]: For the octonionic case, realize ${\mathbb{O}}P^{n-1}$ as the set of rank-one projections in the corresponding Jordan algebra $\mathfrak{h}_n({\mathbb{O}})$ of self-adjoint octonionic matrices, $n=2,3$. Taking the image of each projection on ${\mathbb{O}}^n = {\mathbb{R}}^{8n}$ gives the tautological bundle on ${\mathbb{O}}P^{n-1}$.
{ "id": "2309.09935", "title": "Homology of loops on a splitting-rank symmetric space via the Geometric\n Satake for real groups", "authors": "John O'Brien", "categories": "math.RT math.AG math.NT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: |
  To a Riemannian manifold $(M,g)$ endowed with a magnetic form $\sigma$ and its Lorentz operator $\Omega$ we associate an operator $M^{\Omega}$, called the *magnetic curvature operator*. Such an operator encloses the classical Riemannian curvature of the metric $g$ together with terms of perturbation due to the magnetic interaction of $\sigma$. From $M^{\Omega}$ we derive the *magnetic sectional curvature* $\mathrm{Sec}^{\Omega}$ and the *magnetic Ricci curvature* $\mathrm{Ric}^{\Omega}$. On closed manifolds, with a Bonnet-Myers argument, we show that if $\mathrm{Ric}^{\Omega}$ is positive on an energy level below the Mañé critical value then, on that energy level, the magnetic dynamics carries a contractible periodic orbit. In particular, when $\sigma$ is nowhere vanishing, this implies the existence of contractible periodic orbits on every energy level close to zero. Finally, on closed oriented even dimensional manifolds, we discuss the topological restrictions which appear when one requires $\mathrm{Sec}^{\Omega}$ to be positive.
author:
- |
  Valerio Assenza\
  Heidelberg University\
  vassenza\@mathi.uni-heidelberg.de
bibliography:
- MagneticCurvature-ClosedMagneticGeodesics.bib
title: MAGNETIC CURVATURE AND EXISTENCE OF CLOSED MAGNETIC GEODESICS ON LOW ENERGY LEVELS
---

# Introduction and results

## Magnetic systems and closed magnetic geodesics

A magnetic system is a triple $(M,g,\sigma)$, where $(M,g)$ is a closed Riemannian manifold and $\sigma$ is a closed 2-form on $M$, which in this context is referred to as the *magnetic form*. The condition that $\sigma$ is closed generalizes the fact that, in Euclidean space, magnetic fields are divergence-free according to Maxwell's equations. 
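To make this correspondence concrete (a standard computation, included here as an illustration): on $\mathbb{R}^3$, a 2-form $\sigma$ corresponds to a vector field $B=(B_1,B_2,B_3)$ by contraction with the volume form, and closedness of $\sigma$ is exactly the divergence-free condition:

```latex
\sigma \;=\; B_1\,\mathrm{d}y\wedge\mathrm{d}z + B_2\,\mathrm{d}z\wedge\mathrm{d}x + B_3\,\mathrm{d}x\wedge\mathrm{d}y ,
\qquad
\mathrm{d}\sigma \;=\; \Big( \frac{\partial B_1}{\partial x}+\frac{\partial B_2}{\partial y}+\frac{\partial B_3}{\partial z} \Big)\,
\mathrm{d}x\wedge\mathrm{d}y\wedge\mathrm{d}z \;=\; (\operatorname{div} B)\,\mathrm{d}x\wedge\mathrm{d}y\wedge\mathrm{d}z ,
```

so that $\mathrm{d}\sigma=0$ if and only if $\operatorname{div} B=0$.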
A bundle operator $\Omega : TM \to TM$, called Lorentz operator, is associated with $\sigma$ and the metric $g=\langle \ , \ \rangle$ in the following sense: $$\label{compa} \langle v , \Omega(w) \rangle = \sigma(v,w), \ \ \ \forall v,w\in T_pM ,\ \forall p \in M .$$ Observe that $\Omega$ is antisymmetric with respect to $g$. In this setting, denoting by $\frac{D}{dt}$ the Levi-Civita covariant derivative, the dynamics of a charged particle, moving on $M$ under the influence of the magnetic field $\sigma$, is described by the second-order differential equation: $$\label{mg} \frac{D}{dt}\dot{\gamma}=\Omega(\dot{\gamma}) \ .$$ A solution $\gamma: \mathbb{R}\to M$ to [\[mg\]](#mg){reference-type="eqref" reference="mg"} is called a *magnetic geodesic*. Observe that when there is no magnetic interaction, i.e. $\Omega=0$, the above equation reduces to the equation of standard geodesics. The energy $E(p,v)=\frac{1}{2}\langle v , v \rangle$ is constant along magnetic geodesics so that the *magnetic flow* $\varphi_{\sigma}:TM \times \mathbb{R}\to TM$, obtained by lifting equation [\[mg\]](#mg){reference-type="eqref" reference="mg"} to $TM$, leaves invariant the sets $\Sigma_k= \{ (p,v), \ |v|=\sqrt{2k} \}$, with $k\in (0,+\infty)$. A *closed magnetic geodesic* is a periodic magnetic geodesic, and one of the most relevant interests in the theory is the study of the existence, multiplicity and free homotopy classes of closed magnetic geodesics in a given $\Sigma_k$ (see for instance [@intro1],[@intro2] or also [@intro3]). In a variational setting, closed magnetic geodesics with energy $k$ are precisely the zeros of a suitable closed 1-form $\eta_k$, called the *magnetic action form*, which is defined on $\mathcal{M}$, the space of absolutely continuous loops with $L^2$-integrable derivative and free period (for more details about the magnetic action form we refer the reader to [@Merry],[@AB] and to Section [2](#pre){reference-type="ref" reference="pre"} of the present paper). 
Such a form always admits a local primitive (as defined in equation [\[primitive\]](#primitive){reference-type="eqref" reference="primitive"}) and generalizes the differential of the classical action functional $S_k$, which is globally defined for example if $\sigma$ is exact on the base manifold $M$ (see [@Abb2]). When the magnetic form is weakly exact, namely $\tilde{\sigma}=p^* \sigma$ is exact on the universal cover $p: \Tilde{M} \to M$, then $\eta_k$ still admits a primitive $S_k^{\sigma}$ which is defined on the connected component of contractible elements $\mathcal{M}_0$. In particular, it is given by $$\label{primcon} S_k^{\sigma}:\mathcal{M}_0 \to \mathbb{R}, \ \ \ S_k^{\sigma}(\gamma)=\int_0^T \Big\{ \frac{|\dot{\gamma}|^2}{2}+k \Big\} \ \mathrm{d}t + \int_{B^2} D_\gamma^* \sigma ,$$ where $D_{\gamma} : B^2 \to M$ is an arbitrary capping disk for $\gamma$. Since $\sigma$ is weakly exact if and only if $[\sigma] \in H^2(M,\mathbb{R})$ vanishes over $\pi_2(M)$, definition [\[primcon\]](#primcon){reference-type="eqref" reference="primcon"} is independent of the choice of $D_{\gamma}$. Among the values of the energy, the Mañé critical value $c=c(g,\sigma) \in [0,+\infty]$ plays a crucial role in the theory. If $\sigma$ is weakly exact, define $c$ by $$c= \inf \{ k\geq0 \ | \ S_k^{\sigma}(\gamma)\geq 0, \ \forall \gamma \in \mathcal{M}_0 \}.$$ If $\sigma$ is not weakly exact we set $c(g,\sigma)=+\infty$. We also remark that $c=0$ if and only if $\sigma =0$. Moreover, $c$ is finite if and only if $\tilde{\sigma}$ admits a bounded primitive, i.e. there exists $\tilde{\theta} \in \Omega^1(\tilde{M})$ such that $\mathrm{d}\tilde{\theta} = \tilde{\sigma}$ and $$\sup_{x \in \tilde{M}} |\tilde{\theta}_x| < +\infty \ .$$ In general, the magnetic dynamics can differ drastically on $\Sigma_k$ depending on whether $k$ is above, below or equal to $c$. This becomes evident when we look at the existence of closed magnetic geodesics. 
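To sketch one direction of this equivalence (a standard estimate, written here with the sign convention of [\[primcon\]](#primcon){reference-type="eqref" reference="primcon"} assumed): if $\mathrm{d}\tilde{\theta}=\tilde{\sigma}$ with $S:=\sup_{x\in\tilde{M}}|\tilde{\theta}_x|<+\infty$, then lifting a contractible loop $\gamma$ and its capping disk to $\tilde{M}$ and applying Stokes' theorem gives

```latex
S_k^{\sigma}(\gamma)
= \int_0^T \Big\{ \frac{|\dot{\gamma}|^2}{2} + k
      + \tilde{\theta}_{\tilde{\gamma}}(\dot{\tilde{\gamma}}) \Big\} \, \mathrm{d}t
\;\geq\; \int_0^T \Big\{ \frac{1}{2}\big(|\dot{\gamma}| - S\big)^2
      + k - \frac{S^2}{2} \Big\} \, \mathrm{d}t \;\geq\; 0
\qquad \text{whenever } k \geq \frac{S^2}{2},
```

so that $c \leq \frac{1}{2}\sup_{x\in\tilde{M}}|\tilde{\theta}_x|^2 < +\infty$; the converse implication is not reproduced here.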
**Theorem 1 ([@Con1; @osuna],[@Merry; @AB])**. *Let $(M,g,\sigma)$ be a magnetic system and $c \in [0,+\infty]$ its Mañé critical value.* - *If $\pi_1(M)$ is nontrivial and $k>c$, then $\eta_k$ carries a zero in each nontrivial homotopy class.* - *If $M$ is simply connected and $k>c$, then $\eta_k$ carries a contractible zero.* - *For almost all $k \in (0,c)$ there exists a contractible zero of $\eta_k$.* In fact, when $c$ is finite the magnetic form admits a primitive on each connected component of $\mathcal{M}$. Such a primitive is bounded from below for $k>c$ and satisfies the Palais-Smale condition; thus the existence of a zero for $\eta_k$, which is a minimizer of any primitive, is obtained via classical variational methods through minimization. Below $c$ such compactness conditions are no longer satisfied and the variational approach becomes extremely delicate. Despite that, for almost all energies $k$, with a Struwe monotonicity argument [@struwe], one can construct converging Palais-Smale sequences using a minimax geometry of $\eta_k$ on the set of loops with short length (more details in Section [5](#proofA){reference-type="ref" reference="proofA"}). In general, it is still an open question whether we can extend the existence of closed magnetic geodesics (contractible or not) to the whole interval $(0,c)$. Under stronger assumptions, some progress has been made. For exact magnetic systems on non simply connected compact oriented surfaces, Taimanov in [@taim2], Contreras, Macarini and Paternain in [@CMP] and later Asselle and Mazzucchelli in [@AsMaz], showed the existence of a zero of $\eta_k$, which is a minimizer of a global primitive $S_k$, for all $k\in (0,+\infty)$. Another relevant result is in the work of Ginzburg-Gürel [@GB], refined by Usher in [@ush], where the existence of contractible closed magnetic geodesics is shown for all energies close to zero when the magnetic form is symplectic. 
In this work we extend the result of Ginzburg and Usher from symplectic magnetic forms to nowhere vanishing magnetic forms. **Theorem A1**. *Let $(M,g,\sigma)$ be a magnetic system. If $\sigma$ is nowhere vanishing, then there exists a positive real number $\rho$ such that for every $k \in (0, \rho)$, $\eta_k$ has a contractible zero (i.e. there exists a contractible closed magnetic geodesic with energy $k$).* Observe that the assumption that a closed 2-form is nowhere vanishing presents weak topological obstructions, and the set of such systems is very rich. Indeed, in dimension two, being nowhere vanishing is the same as being symplectic, and it is well known that every oriented compact surface admits a symplectic form. The situation is drastically different in dimension greater than 2. There are topological obstructions for a manifold to carry a symplectic form (for instance, it needs to be even-dimensional and all cohomology groups in even degree must be non-zero). On the other hand, as treated in [@eliashberg], every smooth manifold admits a nowhere vanishing closed 2-form in each cohomology class. Let us point out that Theorem A1 is an immediate corollary of the main result of this paper, which shows the existence of contractible closed magnetic geodesics on energy levels which are below $c$ and positively curved in a sense which we describe in the next subsection. ## Magnetic Curvature Let $SM$ be the unit sphere bundle and consider $E$ and $E^1$, the bundles of complementary and unitary complementary directions over $SM$. In particular, a fiber of $E$ and $E^1$ at the point $(p,v)\in SM$ is the set $E_{(p,v)}=\{ w \in T_pM \ | \ \langle v,w \rangle =0 \}$ and $E^1_{(p,v)}=\{ w \in S_pM \ | \ \langle v,w \rangle =0 \}$. Let $k\in (0,+\infty)$, denote by $R$ the Riemannian curvature operator and set $\Omega^2=\Omega \circ \Omega$. 
Consider the bundle operators $R^{\Omega}_k,A^{\Omega}:E \to E$ given by $$\label{Rmag} R^{\Omega}_k(v,w)= 2kR(w,v)v-\sqrt{2k} (\nabla_w \Omega )(v) \ ,$$ $$\label{operatorA} A^{\Omega}(v,w)=\frac{3}{4}\langle w, \Omega(v) \rangle \Omega(v) -\frac{1}{4}\Omega^2(w)- \frac{1}{4} \langle \Omega(w),\Omega(v) \rangle v \ .$$ Define the *magnetic curvature operator* $M^{\Omega}_k:E \to E$ at the energy level $k$ as: $$\label{mco} M^{\Omega}_k(v,w)=R^{\Omega}_k(v,w)+A^{\Omega}(v,w) \ .$$ Definition [\[mco\]](#mco){reference-type="eqref" reference="mco"} arises naturally from the linearization of equation [\[mg\]](#mg){reference-type="eqref" reference="mg"} as well as from the second variation of a primitive of $\eta_k$ around one of its zeros (see for instance Lemma [Lemma 8](#lemmasec){reference-type="ref" reference="lemmasec"}). From $M^{\Omega}_k$ we derive the magnetic curvature functions. Precisely, the *magnetic sectional curvature* $\mathrm{Sec}^{\Omega}_k:E^1 \to \mathbb{R}$ and the *magnetic Ricci curvature* $\mathrm{Ric}^{\Omega}_k:SM \to \mathbb{R}$ are the functions given by $$\label{dmsec} \mathrm{Sec}^{\Omega}_k(v,w)=\langle M^{\Omega}_k(v,w),w \rangle$$ $$\label{dmricc} \mathrm{Ric}^{\Omega}_k(v)=\mathrm{trace}(M^{\Omega}_k(v,\cdot))$$ Observe that if Sec and Ric denote the classical sectional curvature and Ricci curvature, by looking at [\[mco\]](#mco){reference-type="eqref" reference="mco"}, we obtain for definition [\[dmsec\]](#dmsec){reference-type="eqref" reference="dmsec"} and definition [\[dmricc\]](#dmricc){reference-type="eqref" reference="dmricc"} the following expressions $$\label{msec} \mathrm{Sec}^{\Omega}_k(v,w)= 2k\mathrm{Sec}(v,w)-\sqrt{2k} \langle(\nabla_w \Omega )(v),w \rangle+\langle A^{\Omega}(v,w),w \rangle \ ,$$ $$\label{mricc} \mathrm{Ric}^{\Omega}_k(v)=\mathrm{trace}(M^{\Omega}_k(v,\cdot))=2k\mathrm{Ric}(v)- \sqrt{2k} \ \mathrm{trace}\Big((\nabla \Omega)(v)\Big)+\mathrm{trace}\Big(A^{\Omega}(v,\cdot)\Big) \ .$$ On 
compact oriented surfaces, the magnetic sectional curvature coincides with the standard magnetic curvature that can be found in the work of Gabriel and Miguel Paternain [@gmpaternain]. In higher dimension, $\mathrm{Sec}^{\Omega}$ appeared in the paper of Wojtkowski [@woj], where it is deduced by studying magnetic Jacobi fields. In particular, it is shown that if $\mathrm{Sec}^{\Omega}_k$ is negative, then the magnetic flow restricted to $\Sigma_k$ is of Anosov type (see also [@Gou] and [@Gro1] for previous results in this direction). In the context of the second variation of the action, a magnetic Ricci curvature defined as the trace of the operator $R^{\Omega}_k$ was considered by Bahri and Taimanov in [@TB]. To the author's best knowledge, this is the first time the magnetic curvature has been defined in full generality.\ For the purpose of this paper, we relate $\mathrm{Sec}^{\Omega}_k$ to the second variation of the action (see Lemma [Lemma 8](#lemmasec){reference-type="ref" reference="lemmasec"}). In particular, we prove a Bonnet-Myers theorem (see Lemma [Lemma 13](#Bonnet-Myers){reference-type="ref" reference="Bonnet-Myers"}) stating that if $\gamma$ is a zero of $\eta_k$ and $\mathrm{Ric}^{\Omega}_k>0$, then a bound on the Morse index of $\gamma$ implies a bound on its period. The magnetic Bonnet-Myers theorem, together with index estimates for zeroes of $\eta_k$ obtained by the Struwe monotonicity argument (Lemma [Lemma 19](#index1){reference-type="ref" reference="index1"} and Lemma [Lemma 24](#index2){reference-type="ref" reference="index2"}), allows us to recover a converging Palais-Smale sequence and prove the existence of a contractible closed magnetic geodesic. 
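A model example to keep in mind throughout (classical, and only a sketch under the stated assumptions: flat metric, constant field strength $b$, parameter values illustrative) is $\mathbb{R}^2$ with $\sigma = b\,\mathrm{d}x\wedge \mathrm{d}y$. There $\mathrm{Ric}=0$ and $\nabla\Omega=0$, so only the $A^{\Omega}$-term survives in [\[mricc\]](#mricc){reference-type="eqref" reference="mricc"}, and every magnetic geodesic is a circle. The snippet below integrates equation [\[mg\]](#mg){reference-type="eqref" reference="mg"} in this flat model with a standard RK4 scheme (the helper names `rhs` and `rk4_step` are ours) and checks that the energy is conserved and that the orbit is the expected circle of radius $\sqrt{2k}/|b|$:

```python
import numpy as np

# Flat R^2 with constant field strength b: up to sign conventions the
# Lorentz operator is b*J with J the rotation by pi/2, and the magnetic
# geodesic equation reads  gamma'' = b * J(gamma').
b = 1.5
J = np.array([[0.0, -1.0], [1.0, 0.0]])

def rhs(state):
    # state = (position, velocity) flattened into a 4-vector
    vel = state[2:]
    return np.concatenate([vel, b * (J @ vel)])

def rk4_step(state, dt):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

k = 2.0                                    # fixed energy level
speed = np.sqrt(2 * k)                     # |gamma'| on Sigma_k
state = np.array([0.0, 0.0, speed, 0.0])   # start at origin, horizontal velocity
dt, steps = 1e-3, 10_000
traj = np.empty((steps, 2))
for i in range(steps):
    traj[i] = state[:2]
    state = rk4_step(state, dt)

energy = 0.5 * np.dot(state[2:], state[2:])
# The exact solution circles around gamma(0) + J(v0)/b with radius |v0|/|b|.
center = (J @ np.array([speed, 0.0])) / b
radii = np.linalg.norm(traj - center, axis=1)
print(energy, radii.min(), radii.max())
```

The same circles descend to the flat torus $\mathbb{R}^2/\mathbb{Z}^2$, where they project to contractible closed magnetic geodesics on every energy level, even though the flat metric itself has no contractible closed geodesics.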
Before stating the main theorem, we remark that techniques to recover Palais-Smale sequences through index estimates were previously considered for instance by Bahri and Coron in [@bagia], where they were approaching the Kazdan-Warner problem; by Coti Zelati and Ekeland in [@coti] in a general variational setting and by Benci and Giannoni in [@begi] in the context of billiards. **Theorem A**. *Let $(M,g,\sigma)$ be a magnetic system. If $k\in (0,c)$ is such that $\mathrm{Ric}^{\Omega}_k>0$, then $\eta_k$ carries a contractible zero (i.e. there exists a contractible closed magnetic geodesic with energy $k$).* With a different approach, a weaker result was proved by Bahri and Taimanov in [@TB] where, by assuming the trace of $R^{\Omega}_k$ to be positive, they showed the same statement for exact magnetic systems. Crucially, they did not include $A^{\Omega}$ in their definition of magnetic curvature. In fact, the terms in [\[mricc\]](#mricc){reference-type="eqref" reference="mricc"} coming from this operator always turn out to be nonnegative, and they are zero at $p\in M$ if and only if $\sigma_p=0$ (see Lemma [Lemma 5](#aomega){reference-type="ref" reference="aomega"}). This last consideration enlarges the number of situations where we can apply Theorem A, and it has a strong impact when we look at magnetic systems on low energy levels. Indeed, in Lemma [Lemma 6](#cor1){reference-type="ref" reference="cor1"}, we show that if the magnetic form is symplectic, then $\mathrm{Sec}^{\Omega}_k$ is positive for $k$ close to zero; analogously, if $\sigma$ is nowhere vanishing, then $\mathrm{Ric}^{\Omega}_k$ is positive for values of $k$ close to zero. Intuitively we can look at symplectic magnetic systems and nowhere vanishing magnetic systems with small energies as the magnetic analogue of positively sectionally curved and positively Ricci curved manifolds in Riemannian geometry. 
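The sign of the $A^{\Omega}$-contribution can be checked directly from [\[operatorA\]](#operatorA){reference-type="eqref" reference="operatorA"} (a short computation included here for the reader's convenience): for $(p,v)\in SM$ and $w\in E^1_{(p,v)}$ we have $\langle v,w \rangle = 0$, so the last term of $\langle A^{\Omega}(v,w),w \rangle$ drops, while the antisymmetry of $\Omega$ gives $-\langle \Omega^2(w),w \rangle = |\Omega(w)|^2$. Hence

```latex
\langle A^{\Omega}(v,w),w \rangle
\;=\; \frac{3}{4}\,\langle w,\Omega(v)\rangle^2 \;+\; \frac{1}{4}\,|\Omega(w)|^2
\;\geq\; 0 ,
```

and equality for all $w\in E^1_{(p,v)}$ forces $\Omega(w)=0$ and $\langle w,\Omega(v)\rangle=0$ for every such $w$, hence $\Omega_p=0$, i.e. $\sigma_p=0$.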
Observe that point (ii) of Lemma [Lemma 6](#cor1){reference-type="ref" reference="cor1"} together with Theorem A immediately implies Theorem A1 stated in the previous section.\ In this paper, we also highlight some topological restrictions that appear when we require $\mathrm{Sec}^{\Omega}_k$ to be positive. For instance, in dimension 2, a symplectic magnetic system can be fully characterized in terms of positive magnetic sectional curvature for low energies. **Theorem B**. *Let $(M,g,\sigma)$ be a magnetic system on a closed oriented surface. If there exists a positive $k_0$ such that $\mathrm{Sec}^{\Omega}_k>0$ for all $k \in (0,k_0)$, then either $\sigma$ is symplectic, or $\sigma=0$ and $g$ has positive Gaussian curvature (and $M=S^2$).* The proof of Theorem B is purely topological and the reader can consult it independently from the rest of the paper in Section [3.2](#surfaces){reference-type="ref" reference="surfaces"}. In higher even dimensions, we show that $\mathrm{Sec}^{\Omega}_k$ cannot be positive when $k$ is below $c$ and $M$ is oriented and simply connected. In particular, in Lemma [Lemma 11](#Klinglemma){reference-type="ref" reference="Klinglemma"} we will give a magnetic version of the classical Synge's theorem [@kli3]. Recall that this theorem says that on an even-dimensional oriented Riemannian manifold with positive sectional curvature, no closed geodesic is a length minimizer. In a magnetic setting one can deduce the following statement. **Theorem C**. *Let $(M,g,\sigma)$ be a magnetic system on an even-dimensional oriented manifold. If $\mathrm{Sec}^{\Omega}_k>0$ for some $k>c$ then $M$ is simply connected. Moreover, if $M$ is a compact oriented surface and $\sigma$ is exact with $\mathrm{Sec}^{\Omega}_k>0$, then $M=S^2$ and $k>c$.* ### Plan of the paper {#plan-of-the-paper .unnumbered} We conclude the introduction by giving an outline of this paper. 
In Section [2](#pre){reference-type="ref" reference="pre"} we introduce the magnetic action form $\eta_k$, the Hessian of a local primitive and the notion of vanishing sequence and Morse index of a vanishing point. In particular, in Lemma [Lemma 3](#chart){reference-type="ref" reference="chart"} we build up a local chart centered on a zero of $\eta_k$ which isolates the negative directions of the Hessian; such a construction will be useful for the index estimates of Lemma [Lemma 19](#index1){reference-type="ref" reference="index1"} and Lemma [Lemma 24](#index2){reference-type="ref" reference="index2"}. In the first part of Section [3](#lowenergy){reference-type="ref" reference="lowenergy"} we discuss the positivity of $\mathrm{Sec}^{\Omega}_k$ and $\mathrm{Ric}^{\Omega}_k$ when $k$ is close to zero, while the second part is independently devoted to the proof of Theorem B ([3.2](#surfaces){reference-type="ref" reference="surfaces"}). In Section [4](#hessian){reference-type="ref" reference="hessian"} we investigate the relation between the Hessian of the magnetic form and the sectional magnetic curvature (Lemma [Lemma 8](#lemmasec){reference-type="ref" reference="lemmasec"}). Here, we generalize Synge's theorem and the Bonnet-Myers theorem to the magnetic case (Lemma [Lemma 11](#Klinglemma){reference-type="ref" reference="Klinglemma"} and Lemma [Lemma 13](#Bonnet-Myers){reference-type="ref" reference="Bonnet-Myers"}). In particular, from Lemma [Lemma 11](#Klinglemma){reference-type="ref" reference="Klinglemma"} we deduce the proof of Theorem C. 
Finally, with the background of the previous sections, Section [5](#proofA){reference-type="ref" reference="proofA"} is fully dedicated to the proof of Theorem A (and Theorem A1); we approach it by considering separately the weakly exact case in Section [5.1](#weaklyexact){reference-type="ref" reference="weaklyexact"} and the non weakly exact case in Section [5.2](#nonweaklyexact){reference-type="ref" reference="nonweaklyexact"}. In both cases, the scheme of the proof is based on the following steps: first we present the minimax geometry of $\eta_k$. Then we retrace the Struwe monotonicity argument (Lemma [Lemma 18](#struwe1){reference-type="ref" reference="struwe1"} and Lemma [Lemma 23](#struwe2){reference-type="ref" reference="struwe2"}), which prepares the ground for showing, in Lemma [Lemma 19](#index1){reference-type="ref" reference="index1"} and Lemma [Lemma 24](#index2){reference-type="ref" reference="index2"}, that vanishing points coming from this variational setting have index bounded by 1. Lastly, with this index estimate and with Lemma [Lemma 13](#Bonnet-Myers){reference-type="ref" reference="Bonnet-Myers"}, we find a converging vanishing sequence of $\eta_k$ which completes the proof. ### Acknowledgements {#acknowledgements .unnumbered} The author warmly thanks his mentors Peter Albers and Gabriele Benedetti for their support during the drafting of this article. He is also grateful to Alberto Abbondandolo, Luca Asselle, Johanna Bimmermann, Samanyu Sanjay and Iskander Taimanov for valuable discussions, and to James Marshal Reber and Ivo Terek for their helpful corrections and suggestions on this work. A special thanks goes to Marco Mazzucchelli who suggested how to adapt the index estimate to the magnetic form in Lemma [Lemma 19](#index1){reference-type="ref" reference="index1"}. 
Finally, this work was completed at the University of Toronto and the author is deeply grateful to Jacopo De Simoi for his hospitality.\ The author is partially supported by the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy EXC2181/1 - 390900948 (the Heidelberg STRUCTURES Excellence Cluster), the Collaborative Research Center SFB/TRR 191 - 281071066 (Symplectic Structures in Geometry, Algebra and Dynamics), the Research Training Group RTG 2229 - 281869850 (Asymptotic Invariants and Limits of Groups and Spaces) and by the NSERC Discovery Grant RGPIN-2022-04188. # Preliminaries {#pre} ## Hilbert space of loops with free period and the magnetic action form Let $\Lambda=H^1(S^1,M)$ be the set of absolutely continuous loops on $M$ parametrized over the unit circle with $L^2$-integrable tangent vector. The space $\Lambda$ admits a structure of *Hilbert manifold* modeled on $H^1(S^1,\mathbb{R}^n)$. For more details and properties we refer the reader to [@Kli1]. For simplicity, we assume that $M$ is orientable; up to taking a double cover of $M$, this is no loss of generality for the following constructions. The tangent space at $x\in \Lambda$ is naturally identified with the space of absolutely continuous vector fields along $x$ with $L^2$-integrable covariant derivative, and it is naturally isomorphic to $H^1(S^1,\mathbb{R}^n)$. We endow $\Lambda$ with a metric $g_{\Lambda}$ which in each tangent space, by choosing a trivialization $\mathrm{\Psi}:S^1 \times \mathbb{R}^n \to x^*TM$ of $TM$ along $x$, is defined as $$g_{\Lambda}( \zeta_1,\zeta_2 ) = \int_0^1 \Big[ \langle \zeta_1,\zeta_2 \rangle_{x} + \Big\langle \frac{D}{dt}\zeta_1, \frac{D}{dt}\zeta_2 \Big\rangle_{x} \Big] \ \mathrm{d}s \ .$$ The distance $d_{\Lambda}$ induced by $g_{\Lambda}$ gives $\Lambda$ the structure of a *complete Riemannian manifold*. 
We denote by $e$ and $\ell$ respectively the $L^2$-energy and the length of $x$, which are defined by $$e(x)=\int_0^1 |\dot{x}|^2 \ \mathrm{d}s , \ \ell(x)=\int_0^1 |\dot{x}| \ \mathrm{d}s .$$ The space $C^{\infty}(S^1,M)$ is dense in $\Lambda$, and a local chart centered on a smooth loop $x$ is given by $$F_{\Lambda}:H^1(S^1,B_{r}(0)) \to \Lambda , \ F_{\Lambda}(\zeta)(t)= \mathrm{exp}_{x(t)}(\mathrm{\Psi}(t,\zeta(t))) ;$$ where $B_r(0) \subset \mathbb{R}^n$ is an open ball and exp is the exponential map of $(M,g)$. As pointed out in [@Abb2 Remark 2.2], $F_{\Lambda}$ is bi-Lipschitz (i.e. $F_{\Lambda}$ and $F_{\Lambda}^{-1}$ are Lipschitz) with respect to the standard distance $d_0$ of $H^1(S^1,\mathbb{R}^n)$ and $d_{\Lambda}$. This is a consequence of the fact that the norm $| \ |_0$ of $H^1(S^1,\mathbb{R}^n)$ induced by the Euclidean scalar product of $\mathbb{R}^n$ and the norm $| \ |_{\Lambda}$ induced by $(F_{\Lambda})^{*} g_{\Lambda}$ are equivalent on $H^1(S^1,B_r(0))$.\ Consider now $\mathcal{M}= \Lambda \times (0,+\infty)$. This set can be interpreted as the set of absolutely continuous loops on $M$ with $L^2$-integrable tangent vector and free period. Indeed, a point $(x,T) \in \mathcal{M}$ is identified with $\gamma :[0,T] \to M$ through $\gamma(t)=x(\frac{t}{T})$. Vice versa, an absolutely continuous loop with $L^2$-integrable tangent vector corresponds to the pair $(x,T)$ with $x(s)=\gamma(sT)$. We provide $\mathcal{M}$ with the Riemannian structure obtained from the product of $\Lambda$ endowed with $g_{\Lambda}$ and the Euclidean structure of $(0,+\infty)$. 
In particular the tangent space splits into $$T\mathcal{M}= T\Lambda \oplus \mathbb{R}\frac{d}{dT} ,$$ and in this splitting the metric is given by $$g_{\mathcal{M}} = g_{\Lambda} + dT^2 .$$ If $x\in C^{\infty}(S^1,M)$, a local chart centered at $(x,T)$ is obtained through the product $$\label{chartM} F_{\mathcal{M}}=F_{\Lambda} \times \mathrm{Id}_{(0,+\infty)}: H^1(S^1,B_r(0))\times (0,+\infty)\to \mathcal{M}\ \ , \ \ F_{\mathcal{M}}(\zeta, T)=(F_{\Lambda}(\zeta),T) \ .$$ With a slight abuse of notation, denote by $| \ |_{0}$ the standard product norm of $H^1(S^1,\mathbb{R}^n)\times (0,+\infty)$ and observe that $| \ |_{0}$ and the norm induced by $(F_{\mathcal{M}})^*g_{\mathcal{M}}$ are equivalent on $H^1(S^1,B_r(0))\times (0,+\infty)$. In analogy with $F_{\Lambda}$, this implies that the local chart defined in [\[chartM\]](#chartM){reference-type="eqref" reference="chartM"} is bi-Lipschitz (since it is the product of two bi-Lipschitz maps). We point out that the distance induced by $g_{\mathcal{M}}$ is not complete because the Euclidean factor $(0,+\infty)$ is not complete. Nevertheless, completeness is obtained by restricting $g_{\mathcal{M}}$ to sets of the form $\Lambda \times [T_*, +\infty)$ for an arbitrary $T_*>0$. Finally, since $\mathcal{M}$ is homotopy equivalent to $\Lambda$, its connected components are in correspondence with the elements of $[S^1,M]$, namely the set of conjugacy classes of $\pi_1(M)$. If $[ \mu ]$ is such a class, we denote by $\mathcal{M}_{[\mu]}$ the respective connected component. In particular, with $\mathcal{M}_0$ we indicate the connected component of contractible loops with free period.\ Hereafter, we often identify $\gamma$ with the respective $(x,T)$. For simplicity, we also use the \"dot\" to indicate the Levi-Civita covariant derivative \"$\frac{D}{\mathrm{d}{t}}$\". A vector field over $x$ and its respective parametrization over $\gamma$ are denoted with the same symbol. 
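As a quick numerical sanity check of the quantities $e$, $\ell$ and the free-period identification $\gamma(t)=x(\frac{t}{T})$ introduced above (a sketch only: we assume a flat target $\mathbb{R}^2$ so that covariant quantities reduce to ordinary derivatives, and the sample loop and grid size are arbitrary choices):

```python
import numpy as np

# Discretize a loop x in Lambda (a round circle of radius r, traversed
# once at constant speed) and compute its energy e(x) and length ell(x).
n, r, T = 10_000, 2.0, 3.0
s = np.linspace(0.0, 1.0, n, endpoint=False)
x = np.stack([r * np.cos(2 * np.pi * s), r * np.sin(2 * np.pi * s)], axis=1)

dx = (np.roll(x, -1, axis=0) - x) * n        # forward-difference x'(s)
speed = np.linalg.norm(dx, axis=1)

e = np.mean(speed ** 2)                      # e(x) = \int_0^1 |x'|^2 ds
ell = np.mean(speed)                         # ell(x) = \int_0^1 |x'| ds

# gamma(t) = x(t/T) has |gamma'(t)| = |x'(t/T)|/T, so its kinetic-energy
# integral over [0,T] collapses to e(x)/T after the substitution t = sT.
energy_gamma = np.mean((speed / T) ** 2) * T
print(ell, e, energy_gamma)
```

For the round circle one expects $\ell = 2\pi r$ and, since the parametrization has constant speed, equality in the Cauchy-Schwarz bound $\ell(x)^2 \leq e(x)$; the length, unlike the energy, is insensitive to the choice of period $T$.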
Observe that, with respect to the two parametrizations, tangent vectors rescale as $\dot{\gamma}(t) = \frac{1}{T} \dot{x}(\frac{t}{T})$. Analogously, if $V$ is a vector field along $x$, then $\dot{V}(t) = \frac{1}{T} \dot{V}(\frac{t}{T})$, where the covariant derivative on the left-hand side is taken along $\gamma$ and the one on the right-hand side along $x$. ## Magnetic action form Fix $k\in (0,+\infty)$ and consider the $k$-kinetic action $A_k: \mathcal{M}\to \mathbb{R}$ defined as $$\begin{aligned} A_k(x,T)&= T\int_0^1 \Big\{ \frac{|\dot{x}|^2}{2T^2} +k \Big\} \mathrm{d}s\\ &= \int_0^T \Big\{ \frac{|\dot{\gamma}|^2}{2} +k \Big\} \mathrm{d}t .\end{aligned}$$ Let $\Theta \in \Omega^1(\Lambda)$ be the one-form given by $$\Theta_x(V)=\int_0^1 \langle \Omega(V),\dot{x} \rangle \mathrm{d}s .$$ Denoting by $\pi_{\Lambda}: \mathcal{M}\to \Lambda$ the projection onto the first factor of $\mathcal{M}$, the *$k$-magnetic action form* $\eta_k\in \Omega^1(\mathcal{M})$ is defined by $$\eta_k(\gamma) = d_{\gamma}A_k + (\pi_\Lambda ^* \Theta)_{\gamma} .$$ If $x$ is of class $C^2$ and $(V,\tau)\in T_x\Lambda \oplus \mathbb{R}\frac{\partial}{\partial T}$, then $\eta_k$ acts as follows: $$\eta_k(\gamma)(V,0)=-\int_0^T \langle \nabla_{\dot{\gamma}}\dot{\gamma}- \Omega(\dot{\gamma}),V \rangle \mathrm{d}t , \ \eta_k(\gamma)(0,\tau)= \frac{\tau}{T}\int_0^T \Big\{ k-\frac{|\dot{\gamma}|^2}{2} \Big\} \mathrm{d}t .$$ In agreement with [@AB Lemma 2.2] and [@Abb2 Lemma 3.1], $\eta_k$ is a smooth section of $T^*\mathcal{M}$, and $\eta_k(\gamma)=0$ if and only if $\gamma$ is a closed magnetic geodesic contained in $\Sigma_k$. We will denote by $\mathcal{Z}(\eta_k)$ the zero set of $\eta_k$. An important property of the magnetic action form concerns its behavior when integrated over a loop of $\mathcal{M}$. Indeed, if $u:[0,1] \to \mathcal{M}$ is a closed path, the integral $\int_0^1 u^*\eta_k$ depends only on the homotopy class of $u$. In this sense $\eta_k$ is said to be *closed* [@AB Corollary 2.4].\ Let $\mathcal{U}\subseteq \mathcal{M}$ be an open set diffeomorphic to a ball and $(x_0,T_0) \in \mathcal{U}$.
A local primitive $S_k^{\sigma}: \mathcal{U}\to \mathbb{R}$ of $\eta_k$, centered at $(x_0,T_0)$, is defined by $$\label{primitive} S_k^{\sigma}(x,T) = A_k(x,T)+ \int_{S^1 \times [0,1]} c_{x_0,x}^* \ \sigma \ .$$ Here, $c_{x_0,x}: S^1 \times [0,1] \to M$ is a cylinder which connects $x_0$ and $x$ such that $c_{x_0,x}( \cdot , s) \in \pi_{\Lambda}(\mathcal{U})$ for every $s$. Indeed, the closedness of $\eta_k$ makes the above definition independent of the choice of $c_{x_0,x}$, and $dS_k^{\sigma}= \eta_k$ on $\mathcal{U}$. In particular, if $\gamma \in \mathcal{Z}(\eta_k)$, then $\gamma$ is a critical point of every primitive of $\eta_k$. ## Hessian of $\eta_k$ and the index of a vanishing point Let $\gamma$ be a zero of $\eta_k$ and let $S_k^{\sigma}$ be a local primitive of $\eta_k$ centered at $\gamma$. If $\zeta \in T_{\gamma} \mathcal{M}$, we naturally identify the tangent space $T_{\zeta}(T_{\gamma} \mathcal{M})$ with $T_{\gamma}\mathcal{M}$ itself. Under this identification we can look at the second (Fréchet) derivative $\mathrm{d}^2_{\gamma} S_k^{\sigma}$ of $S_k^{\sigma}$ at the point $\gamma$ as a bilinear form on $T_{\gamma}\mathcal{M}$. Because the connection on $M$ is the Levi-Civita connection, such a bilinear form is symmetric. The Hessian $\mathrm{Hess}_{\gamma}(S_k^{\sigma})$ of $S_k^{\sigma}$ at $\gamma$ is the quadratic form associated with $\mathrm{d}^2_{\gamma} S_k^{\sigma}$. We refer the reader to [@chang] for more details. The fact that two local primitives of $\eta_k$ differ by a constant makes the following definition natural. **Definition 1**. *Let $\gamma \in \mathcal{Z}(\eta_k)$. The Hessian $\mathrm{Q}_{\gamma}(\eta_k)$ of $\eta_k$ at $\gamma$ is the quadratic form given by $$\mathrm{Q}_{\gamma}(\eta_k)=\mathrm{Hess}_{\gamma}(S_k^{\sigma}) ,$$ where $S_k^{\sigma}$ is any primitive of $\eta_k$ defined on a neighbourhood of $\gamma$.* **Lemma 1**. *Let $\gamma \in \mathcal{Z}(\eta_k)$ and $\zeta=(V,\tau)\in T_{\gamma}\mathcal{M}$.
Then* *$$\mathrm{Q}_{\gamma}(\eta_k)(V,\tau) = \int_0^T \left\{ \langle \dot{V}-\Omega(V),\dot{V}\rangle-\langle R(V,\dot{\gamma})\dot{\gamma}-\nabla_V \Omega (\dot{\gamma}),V\rangle-\frac{\langle \dot{V},\dot{\gamma}\rangle^2}{|\dot{\gamma}|^2} \right\} \mathrm{d}t \ +$$ $$+\int_0^T \left( \frac{\langle \dot{V},\dot{\gamma}\rangle}{|\dot{\gamma}|}-\frac{\tau}{T}|\dot{\gamma}| \right)^2 \mathrm{d}t .$$* **Proof.* The computation follows by adapting [@Gou Section 2] to a primitive of $\eta_k$. ◻* If $\gamma$ is a vanishing point, the signature of $\mathrm{Q}_{\gamma}(\eta_k)$ is well-defined because it is independent of the local coordinate chart. Therefore, there exist two vector subspaces $H_{\gamma},E_{\gamma} \subset T_{\gamma} \mathcal{M}$ such that the tangent space splits into $$T_\gamma \mathcal{M}= H_{\gamma} \oplus E_{\gamma} ,$$ and $E_{\gamma}$ is the maximal subspace, in terms of dimension, for which the restriction $\mathrm{Q}_{\gamma}(\eta_k)|_{_{E_{\gamma}}}$ is negative definite. **Definition 2**. *Let $\gamma \in \mathcal{Z}(\eta_k)$. The *Morse index* of $\gamma$, denoted by $\mathrm{index}(\gamma)$, is the nonnegative integer given by $$\mathrm{index}(\gamma)=\mathrm{dim}E_{\gamma} .$$* As argued in [@Abb1 Proposition 3.1], the self-adjoint operator associated with $\mathrm{d}^2_{\gamma} S_k^{\sigma}$ is a compact deformation of a Fredholm operator, which implies that the index of a vanishing point of $\eta_k$ is always finite.\ We end the section by constructing a system of local coordinates around a zero of $\eta_k$ which isolates the directions of $T_{\gamma}\mathcal{M}$ where $\mathrm{Q}_{\gamma}(\eta_k)$ is negative definite.
Such a construction is used in the proofs of Lemma [Lemma 19](#index1){reference-type="ref" reference="index1"} and Lemma [Lemma 24](#index2){reference-type="ref" reference="index2"} in Section [5](#proofA){reference-type="ref" reference="proofA"}.\ Let $\gamma$ be such that $\mathrm{index}(\gamma)=d$ and consider local coordinates $(\mathcal{U}_{\gamma},F_{\mathcal{M}})$ centered at $\gamma$ as given in Equation [\[chartM\]](#chartM){reference-type="eqref" reference="chartM"}. By the assumption on the index and by the equivalence of the norms $| \ |_0$ and $| \ |_{\mathcal{M}}$, there exist two vector spaces $H_{\gamma},E_{\gamma} \subset H^1(S^1,\mathbb{R}^n)\times (0,+\infty)$ and a positive constant $D_{\gamma}$ such that $$H^1(S^1,\mathbb{R}^n)\times (0,+\infty)= H_{\gamma} \oplus E_{\gamma} ,$$ the space $E_{\gamma}$ has dimension $d$, and $$\label{inequality} \mathrm{Q}_{0}(F_{\mathcal{M}}^{*} \eta_k)(\zeta,\tau)\leq -4D_{\gamma}|(\zeta,\tau)|^2_{0} \ , \ \ \forall (\zeta,\tau) \in E_{\gamma} .$$ Denote by $B_r^H$ and $B^E_r$, respectively, the open balls of radius $r$ centered at the origin of $H_{\gamma}$ and $E_{\gamma}$. **Lemma 3**. *There exists a local chart $(\mathrm{\Phi}_{\gamma}^{-1}(\mathcal{V}_{\gamma}),\mathrm{\Phi}_{\gamma})$ centered at $\gamma$ such that $\mathcal{V}_{\gamma}=B_r^H \times B_r^E$.
Moreover, this system of local coordinates enjoys the following properties:* - *$\mathrm{\Phi}_{\gamma}$ is bi-Lipschitz (with respect to $d_0$ and $d_{\mathcal{M}}$) and $\mathrm{\Phi}_{\gamma}(\mathcal{Z}(\eta_k))\subseteq B_r^H \times \{ 0 \}$.* - *If $S_k^{\sigma}$ is an arbitrary primitive of $\eta_k$ on $\mathrm{\Phi}_{\gamma}^{-1}(\mathcal{V}_{\gamma})$, then for every $(y_h,y_e)\in B_r^H \times B_r^E$ and for every $0<\lambda< r - |y_e|_{_0}$ it holds $$(S_k^{\sigma}\circ \mathrm{\Phi_{\gamma}^{-1}})\Big(y_h,y_e+\lambda \frac{y_e}{ \ |y_e|_{_{0}}}\Big) \leq (S_k^{\sigma}\circ \mathrm{\Phi_{\gamma}^{-1}})\Big(y_h,y_e \Big)-D_{\gamma}\lambda^2 .$$* **Proof.* Let $(\mathcal{U}_{\gamma},F_{\mathcal{M}})$, $H_{\gamma}$ and $E_{\gamma}$ be as above, and let $S_k^{\sigma}$ be a local primitive defined on $\mathcal{U}_{\gamma}$. By [\[inequality\]](#inequality){reference-type="eqref" reference="inequality"} and by the smoothness of $S_k^{\sigma}$, there exists an open neighborhood $\mathcal{V}_{\gamma}\subset \mathcal{U}_{\gamma}$ of $\gamma$ such that* *$$\label{inequality2} \mathrm{Q}_{\theta}(F_{\mathcal{M}}^* \eta_k)(\zeta,\tau)\leq - 2D_{\gamma}|(\zeta,\tau)|_{0}^2 \ , \ \forall \theta \in F_{\mathcal{M}}^{-1}(\mathcal{V}_{\gamma}) \ , \ \forall (\zeta,\tau) \in E_{\gamma} .$$* *Without loss of generality, for a positive real $R$, we can assume $\mathcal{V}_{\gamma}$ diffeomorphic through $F_{\mathcal{M}}$ to $B_R^H \times B^E_R$, with coordinates $(y_h,y_e)$ centered at $\gamma$. Consider now the function $G_{\gamma}:B_R^H \times B^E_R \to E_{\gamma}$ defined as $$G_{\gamma}(y_h,y_e)=\partial_{y_e}(S_k^{\sigma}\circ F_{\mathcal{M}})(y_h,y_e) .$$ As a consequence of the Implicit Function Theorem, there exists a function $g_{\gamma}:B_R^H \to B_R^E$ such that, over $B_R^H \times B^E_R$, the equality $G_{\gamma}(y_h,y_e)=0$ holds if and only if $y_e=g_{\gamma}(y_h)$. In particular, all the critical points of $S_k^{\sigma}$ in this local coordinate chart belong to the graph of $g_{\gamma}$, which we denote by $\mathcal{G}(g_{\gamma})$.
Consider the map $T(y_h,y_e)=(y_h,y_e-g_{\gamma}(y_h))$, which pointwise translates $\mathcal{G}(g_{\gamma})$ to $B_R^H \times \{ 0 \}$ in the direction of $y_e$. Let $r\in(0,R)$ be such that $B_r^H \times B_r^E$ is the image through $T$ of an open neighbourhood of $\mathcal{G}(g_{\gamma})$, and set $\mathrm{\Phi}_{\gamma}=T \circ F_{\mathcal{M}}^{-1}$, defined on $\mathcal{V}_{\gamma}=\mathrm{\Phi}_{\gamma}^{-1}(B_r^H \times B_r^E)$. Observe that $\mathrm{\Phi}_{\gamma}$ is bi-Lipschitz because it is a composition of bi-Lipschitz maps. Moreover, if $\theta \in \mathcal{Z}(\eta_k)$, then $\mathrm{\Phi}_{\gamma}(\theta)\in B_r^H \times \{ 0 \}$, since $F_{\mathcal{M}}^{-1}$ sends zeroes of $\eta_k$ to $\mathcal{G}(g_{\gamma})$ and $T$ sends $\mathcal{G}(g_{\gamma})$ to $B_r^H \times \{ 0 \}$.* *Finally, let $0<\lambda<r-|y_e|_{0}$ and for simplicity write $q(t)=\Big(y_h,y_e+t\lambda \frac{y_e}{|y_e|_{0}} \Big)$. Up to shrinking $r$ we have that* *$$\begin{aligned} (S_k^{\sigma}\circ \mathrm{\Phi_{\gamma}^{-1}})(q(1))-(S_k^{\sigma}\circ \mathrm{\Phi_{\gamma}^{-1}})(q(0))&=\int_0^1 \frac{\mathrm{d}}{\mathrm{d}t}(S_k^{\sigma}\circ \mathrm{\Phi_{\gamma}^{-1}})(q(t)) \ \mathrm{d}t\\ &=\int_0^1 \mathrm{d}_{q(t)}(S_k^{\sigma}\circ \mathrm{\Phi_{\gamma}^{-1}})(\dot{q}(t)) \ \mathrm{d}t\\ &=\int_0^1 \mathrm{d}_{(y_h,y_e)}(S_k^{\sigma}\circ \mathrm{\Phi_{\gamma}^{-1}})(\dot{q}(0)) \ \mathrm{d}t + \int_0^1 \!\! \int_0^t \mathrm{d}^2_{q(s)}(S_k^{\sigma}\circ \mathrm{\Phi_{\gamma}^{-1}})(\dot{q}(0),\dot{q}(0)) \ \mathrm{d}s \ \mathrm{d}t\\ &\leq \int_0^1 \!\! \int_0^t \mathrm{d}^2_{q(s)}(S_k^{\sigma}\circ \mathrm{\Phi_{\gamma}^{-1}})(\dot{q}(0),\dot{q}(0)) \ \mathrm{d}s \ \mathrm{d}t\\ &\leq -2D_{\gamma}\lambda^2 \int_0^1 \!\! \int_0^t \mathrm{d}s \ \mathrm{d}t = -D_{\gamma}\lambda^2 ,\end{aligned}$$* *where in the above inequalities we first use the fact that the first derivative of $(S_k^{\sigma}\circ \mathrm{\Phi_{\gamma}^{-1}})$ with respect to $y_e$ is non positive for points close to $B_r^H \times \{ 0 \}$, and then the inequality pointed out in [\[inequality2\]](#inequality2){reference-type="eqref" reference="inequality2"} applied to $\dot{q}(0)=\big(0,\lambda \frac{y_e}{|y_e|_0}\big)\in E_{\gamma}$, which has norm $|\dot{q}(0)|_0=\lambda$. ◻* ## Vanishing sequences Our goal is to find zeroes of $\eta_k$ via minimax methods. The next definition generalizes the notion of Palais-Smale sequence to the magnetic action form. **Definition 4**. *A *vanishing sequence* for $\eta_k$ is a sequence $(\gamma_n)=(x_n,T_n)$ in $\mathcal{M}$ such that $$|\eta_k(\gamma_{n})|_{_{\mathcal{M}}} \to 0 .$$* Because $\eta_k$ is continuous, $\mathcal{Z}(\eta_k)$ coincides with the set of limit points of vanishing sequences.
Thus, understanding whether or not a vanishing sequence admits a converging subsequence becomes of crucial importance. In general, if the sequence of periods $T_n$ diverges to infinity or tends to zero, there is no hope of finding a limit point for $\gamma_n$. The next theorem shows that this is exactly the situation one needs to avoid in order to have compactness. **Theorem ([@AB Theorem 2.6]) 1**. *Let $\gamma_n$ be a vanishing sequence on a given connected component of $\mathcal{M}$. If $T_n\in [T_*,T^*]$ for positive constants $T_*<T^*$, then $\gamma_n$ admits a limit point.* # Magnetic curvature on low energy levels {#lowenergy} ## Symplectic and nowhere vanishing magnetic forms. {#lowenergysub} We highlight the expression of the magnetic Ricci curvature in an orthonormal basis. Let $v$ be a unit tangent vector and complete $v$ to an orthonormal basis $\{ v, e_2, ..., e_n \}$. Then $$\begin{aligned} \mathrm{Ric}^{\Omega}_k(v)&=2k\,\mathrm{Ric}(v)-\sqrt{2k} \ \mathrm{trace}(\nabla \Omega(v))+\mathrm{trace}\big(A^{\Omega}(v,\cdot)\big) \nonumber && \\ &= \sum _{i=2}^n \Big \{2k \mathrm{Sec}(v,e_i)- \sqrt{2k} \langle (\nabla_{e_i}\Omega)(v),e_i \rangle + \langle A^{\Omega}(v,e_i),e_i \rangle \Big\} \nonumber && \\ &=\sum _{i=2}^n \mathrm{Sec}^{\Omega}_k(v,e_i) \ . \end{aligned}$$ Thus, as in the Riemannian case, $\mathrm{Ric}^{\Omega}_k$ is the sum of the magnetic sectional curvatures computed with respect to any orthonormal basis of the orthogonal complement of $v$. A question which naturally emerges is whether a magnetic system admits an energy level $k$ which is positively curved in terms of $\mathrm{Sec}^{\Omega}_k$ or $\mathrm{Ric}^{\Omega}_k$. For our purposes, we are interested in magnetic systems which are positively curved in a range of energies close to zero. In this section we point out that this requirement is satisfied by two standard classes of magnetic systems: symplectic magnetic systems with respect to the magnetic sectional curvature, and nowhere vanishing magnetic systems with respect to the magnetic Ricci curvature.
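As a quick illustration of the $k\to 0$ mechanism exploited below (our example, not taken from the references): for a flat metric with a parallel Lorentz operator, the two $k$-dependent terms drop and only the $\Omega$-quadratic term survives.

```latex
% Illustration (our example): flat metric (Sec = 0) and parallel Lorentz
% operator (\nabla\Omega = 0). All k-dependent terms of Sec^Omega_k vanish:
\[
  \mathrm{Sec}^{\Omega}_k(v,e_i)
    = 2k \cdot 0 \;-\; \sqrt{2k}\cdot 0 \;+\; \langle A^{\Omega}(v,e_i), e_i \rangle
    = \langle A^{\Omega}(v,e_i), e_i \rangle ,
\]
% which is independent of k. In general, the Riemannian curvature and the
% \nabla\Omega term carry the factors 2k and \sqrt{2k} respectively, while
% A^{\Omega} does not; hence the nonnegative Omega-quadratic term dominates
% as k -> 0, which is the mechanism behind the two positivity results of
% this section.
```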
We need the following relevant lemma. **Lemma 5**. *Let $A^{\Omega}$ be the operator defined in [\[operatorA\]](#operatorA){reference-type="eqref" reference="operatorA"}. Then the following statements hold* - *$\langle A^{\Omega}(v,w) , w \rangle \geq 0$ for every $(v,w)\in E^1$, and it is equal to zero if and only if $\Omega(w)=0$,* - *$\mathrm{trace}\Big(A^{\Omega}(v,\cdot)\Big) \geq 0$ for every $v\in SM$, and it is equal to zero if and only if $\Omega_p=0$.* **Proof.* By definition of $A^{\Omega}$, if $(v,w) \in E^1$, then $$\label{A2} \langle A^{\Omega}(v ,w), w \rangle = \frac{3}{4} \langle w, \Omega(v) \rangle^2 + \frac{1}{4}|\Omega(w)|^2 ,$$ and point (i) holds. Moreover, from the above equation, if we extend $v$ to an orthonormal basis $\{ v, e_2, ..., e_n \}$ then $$\begin{aligned} \mathrm{trace}\Big(A^{\Omega}(v, \cdot)\Big)&= \sum_{i=2}^n \langle A^{\Omega}(v ,e_i), e_i \rangle \nonumber \\ & =\sum_{i=2}^n \Big\{ \frac{3}{4} \langle \langle e_i ,\Omega(v) \rangle \Omega(v), e_i \rangle + \frac{1}{4} \langle \Omega(e_i),\Omega(e_i) \rangle \Big\} \nonumber &&\\ &=\sum_{i=2}^n \frac{3}{4} \langle e_i ,\Omega(v) \rangle^2 + \frac{1}{4}\Big(\sum_{i,j=2}^n \langle \Omega(e_i),e_j \rangle^2 + \sum_{i=2}^n \langle \Omega(e_i),v \rangle^2 \Big) \nonumber &&\\ &=\sum_{i=2}^n \langle e_i ,\Omega(v) \rangle^2 + \frac{1}{4} \sum_{i,j=2}^n \langle \Omega(e_i),e_j \rangle^2 , \label{terms}\end{aligned}$$ which is always nonnegative and equal to zero if and only if $\Omega_p = 0$. Thus, point (ii) also holds. ◻* **Lemma 6**. *Let $(M,g,\sigma)$ be a magnetic system.
It holds that* - *if $\sigma$ is symplectic, then there exists a $k_0>0$ such that $\mathrm{Sec}^{\Omega}_k>0$ for every $k\in (0,k_0)$;* - *if $\sigma$ is nowhere vanishing, then there exists a $k_0>0$ such that $\mathrm{Ric}^{\Omega}_k>0$ for every $k\in (0,k_0)$.* **Proof.* First observe that, if $\sigma$ is symplectic, then by the compatibility condition [\[compa\]](#compa){reference-type="eqref" reference="compa"} it follows that $\Omega(w)\neq 0$ for every $(p,w)\in TM$ with $w\neq 0$. In particular, by Equation [\[A2\]](#A2){reference-type="eqref" reference="A2"}, this implies that $\langle A^{\Omega}(v ,w), w \rangle$ is strictly positive for every $(v,w) \in E^1$. Analogously, if $\sigma$ is nowhere vanishing, then at least one term in [\[terms\]](#terms){reference-type="eqref" reference="terms"} is different from zero, which implies that $\mathrm{trace}\Big(A^{\Omega}(v, \cdot)\Big)$ is strictly positive for every $v \in SM$. Since $E^1$ and $SM$ are compact and since the operator $A^{\Omega}$ is independent of $k$, we deduce that, if $\sigma$ is symplectic, then for small values of $k$ $$\Big\langle A^{\Omega}(v,w)+R^{\Omega}_{k}(v,w),w\Big\rangle>0 \ , \ \forall (v,w) \in E^1 .$$ If $\sigma$ is nowhere vanishing, then for small values of $k$ $$\mathrm{trace}\Big( A^{\Omega}(v,\cdot)+R^{\Omega}_k(v,\cdot) \Big)>0 \ , \ \forall v\in SM .$$ The statement follows. ◻* ## Symplectic magnetic forms in dimension 2. {#surfaces} In this subsection $(M,g)$ is a closed oriented Riemannian surface and we denote by $\mathcal{K}:M \to \mathbb{R}$ its Gaussian curvature. If $\sigma$ is a closed two-form on $M$, then there exists a smooth function $b:M \to \mathbb{R}$ such that $$\label{volume} \sigma = b \cdot \mathrm{vol}(g) \ ,$$ where $\mathrm{vol}(g)$ is the volume form on $M$ induced by the metric $g$.
For simplicity, we refer to a magnetic form $\sigma$, and when convenient to its Lorentz operator $\Omega$, through the function $b$ given in [\[volume\]](#volume){reference-type="eqref" reference="volume"}. Observe that, for dimensional reasons, $\mathrm{Ric}_k^b$ coincides with $\mathrm{Sec}_k^{b}$, and its expression in terms of $b$ is given in the next lemma. **Lemma 7**. *Let $(M,g,b)$ be a magnetic system on a closed oriented surface. Then $$\mathrm{Sec}^b_k(v) = 2k\cdot \mathcal{K}- \sqrt{2k}\cdot \mathrm{d}b(i \cdot v) + b^2 ,$$ where $i$ denotes the anticlockwise rotation by an angle $\frac{\pi}{2}$.* **Proof.* In any orthonormal basis we have that $$\label{ope} \Omega = b \cdot i \ \mathrm{and} \ \nabla_v \Omega = \mathrm{d}b(v)\cdot i .$$ Therefore, by definition [\[msec\]](#msec){reference-type="eqref" reference="msec"}, using [\[ope\]](#ope){reference-type="eqref" reference="ope"} and the orthonormal basis $\{ v , e_2= i \cdot v \}$, we obtain $$\begin{aligned} \mathrm{Sec}^b_k(v)& = 2k \mathrm{Sec}(v,e_2) - \sqrt{2k}\langle (\nabla_{e_2} \Omega )(v) , e_2 \rangle + \langle A^{\Omega}(v,e_2),e_2 \rangle \\ & = 2k \cdot \mathcal{K}- \sqrt{2k}\langle \mathrm{d}b(e_2) i \cdot v , e_2 \rangle + \frac{3}{4}b^2 + \frac{1}{4}b^2 \\ & = 2k \cdot \mathcal{K}- \sqrt{2k} \cdot \mathrm{d}b(i \cdot v) + b^2 . \end{aligned}$$ ◻* In this setting, Theorem B is reformulated as follows. **Theorem B 2**. *Let $(M,g,b)$ be a magnetic system on a closed oriented surface. If there exists a positive real constant $k_0$ such that $\mathrm{Sec}^b_k > 0$ for every $k \in (0,k_0)$, then either $b$ is nowhere zero, or $b$ is constantly zero and $\mathcal{K}>0$ (and $M=S^2$).* **Proof.* Suppose that $\mathrm{Sec}^b_k>0$ for small values of $k$. Denote by $S$ the subset of $M$ defined as $$S = \{ p \in M \ | \ b(p)=0 \} .$$ This subset is closed by definition. If we show that $S$ is also open, then, since $M$ is connected, either $S=\emptyset$ or $S=M$, and the statement follows.
Let $p \in S$ and, for an arbitrarily small radius $r$, let $B_r(p)$ be an open ball centered at $p$. Let $q \in B_r(p)$ be such that $$|b(q)| = \max _{z \in B_r(p)} |b(z)| \ .$$ Assume that $|b(q)|\neq 0$ and denote by $|\mathrm{d}b|_{\infty}$ the uniform norm of $\mathrm{d}b$. In particular, by assumption, for small values of $k$ it yields $$2k \mathcal{K}+ b^2 > \sqrt{2k}|\mathrm{d}b|_{\infty} ,$$ which implies that $$|b(q) - b(p) | \leq \int_0^{d(p,q)} |\mathrm{d}b|_{\infty} \mathrm{d}t \leq \frac{r}{\sqrt{2k}} ( 2k \max_{z \in B_r(p)} \mathcal{K}(z) + |b(q)|^2) = r \sqrt{2k} \max_{z \in B_r(p)} \mathcal{K}(z) + \frac{r}{\sqrt{2k}} |b(q)|^2.$$ Since $b(p)=0$, up to shrinking $r$, we may choose $k$ such that $\sqrt{2k}=|b(q)|$; then from the above inequality we deduce that $$|b(q)| \leq (r \max_{z \in B_r(p)} \mathcal{K}(z) + r ) |b(q)| .$$ Up to shrinking $r$ and $k$ we also have that $$r \max_{z \in B_r(p)} \mathcal{K}(z) + r<1 ,$$ which implies that $|b(q)|=0$, a contradiction. Therefore, for every $p\in S$ we can find an open ball $B_r(p)$ centered at $p$ such that $b|_{B_r(p)}=0$, which concludes the proof. ◻* # Magnetic curvature and Hessian of $\eta_k$ {#hessian} Let $\gamma\in \mathcal{Z}(\eta_k)$ and consider the splitting $\dot{\gamma}\oplus \{ \dot{\gamma}\}^{\perp}$ of $\gamma^* TM$, where $\{ \dot{\gamma}\}^{\perp}$ is the orthogonal complement of $\dot{\gamma}$. We decompose a vector field along $\gamma$ into $V=V_1+V_2$, where $V_1$ denotes its component along $\dot{\gamma}$ and $V_2$ its orthogonal component. To avoid any kind of confusion we adopt the following notation: $\frac{D}{\mathrm{d}t} V_1=\dot{V_1}$ and $(\dot{V})_1=\langle \dot{V},\dot{\gamma}\rangle \frac{\dot{\gamma}}{|\dot{\gamma}|^2}$. Analogously, $\frac{D}{\mathrm{d}t} V_2=\dot{V_2}$, while $(\dot{V})_2$ indicates the orthogonal projection of $\dot{V}$ onto $\{ \dot{\gamma}\}^{\perp}$.
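As a sample computation with this notation (ours; it only uses that a zero $\gamma$ of $\eta_k$ satisfies $\nabla_{\dot{\gamma}}\dot{\gamma}=\Omega(\dot{\gamma})$, so that $|\dot{\gamma}|$ is constant), one can compare $\dot{V_1}$ and $(\dot{V})_1$ directly:

```latex
% Sample computation (ours). Write V_1 = (<V,\dot\gamma>/|\dot\gamma|^2)\dot\gamma
% and differentiate covariantly, using \nabla_{\dot\gamma}\dot\gamma = \Omega(\dot\gamma)
% and d/dt <V,\dot\gamma> = <\dot V,\dot\gamma> + <V,\Omega(\dot\gamma)>
%                         = <\dot V,\dot\gamma> - <\Omega(V),\dot\gamma>:
\[
  \dot{V_1}
  = \frac{\mathrm{d}}{\mathrm{d}t}
    \Big( \frac{\langle V,\dot{\gamma}\rangle}{|\dot{\gamma}|^2} \Big)\,\dot{\gamma}
    + \frac{\langle V,\dot{\gamma}\rangle}{|\dot{\gamma}|^2}\,\Omega(\dot{\gamma})
  = (\dot{V})_1 - \Omega(V)_1 + \Omega(V_1) .
\]
% The two derivatives thus differ exactly by Omega-terms.
```

This computation recovers identity [\[ide2\]](#ide2){reference-type="eqref" reference="ide2"} below.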
The next crucial lemma shows how such a splitting lets us write the Hessian of $\eta_k$ in terms of the magnetic sectional curvature. **Lemma 8**. *Let $V=V_1+V_2$ be a variation along $\gamma$ and $\tau \in \mathbb{R}$. Then $$\label{crucial} \mathrm{Q}_{\gamma}(\eta_k)(V,\tau)=\int_0^T \Big|(\dot{V})_2-\frac{1}{2}(\Omega(V_1)+\Omega(V))_2\Big|^2 + \Big(\frac{\langle \dot{V},\dot{\gamma}\rangle}{|\dot{\gamma}|} - \frac{\tau}{T}|\dot{\gamma}|\Big)^2 \mathrm{d}t -\int_0^T |V_2|^2\mathrm{Sec}^{\Omega}_k\Big(\frac{\dot{\gamma}}{|\dot{\gamma}|},\frac{V_2}{|V_2|}\Big) \mathrm{d}t \ .$$* **Proof.* Preliminarily, we need the following identities: $$\label{ide1} (\nabla_{V_1} \Omega)(\dot{\gamma})=(\nabla_{\dot{\gamma}}\Omega)(V_1) ,$$ $$\label{ide2} \dot{V}_1=(\dot{V})_1-\Omega(V)_1+\Omega(V_1) ,$$ $$\label{ide3} \frac{d}{\mathrm{d}t} \langle \Omega(V_1),V \rangle=\langle (\nabla_{\dot{\gamma}}\Omega)(V_1),V \rangle + \langle\Omega(\dot{V}_1),V \rangle + \langle \Omega(V_1),\dot{V} \rangle .$$* *In the expression of $\mathrm{Q}_{\gamma}(\eta_k)$ in Lemma [Lemma 1](#secondvariation){reference-type="ref" reference="secondvariation"}, we decompose $V$ into its components $V_1$ and $V_2$ and use the identity [\[ide1\]](#ide1){reference-type="eqref" reference="ide1"} to obtain:* *$$\label{conto1} \mathrm{Q}_{\gamma}(\eta_k)(V,\tau)=\int_0^T P(V) \mathrm{d}t -\int_0^T \Big\{ \langle R(V_2,\dot{\gamma})\dot{\gamma}- (\nabla_{V_2}\Omega)(\dot{\gamma}),V_2 \rangle \Big\} \mathrm{d}t + \int_0^T\Big(\frac{\langle \dot{V},\dot{\gamma}\rangle}{|\dot{\gamma}|} - \frac{\tau}{T}|\dot{\gamma}|\Big)^2\mathrm{d}t ,$$ where $P(V)= |(\dot{V})_2|^2+ \langle (\nabla_{\dot{\gamma}}\Omega)(V_1),V \rangle - \langle \dot{V}, \Omega(V) \rangle$.
With the help of identities [\[ide2\]](#ide2){reference-type="eqref" reference="ide2"} and [\[ide3\]](#ide3){reference-type="eqref" reference="ide3"} and a Stokes argument we obtain:* *$$\begin{aligned} \int_0^T P(V)\,\mathrm{d}t&=\int_0^T \Big\{ |(\dot{V})_2|^2 + \frac{\mathrm{d}}{\mathrm{d}t}\langle \Omega(V_1),V \rangle - \langle \Omega(\dot{V}_1),V \rangle - \langle \Omega(V_1),\dot{V} \rangle - \langle \dot{V},\Omega(V) \rangle \Big\} \mathrm{d}t&&\\ &=\int_0^T \Big\{ |(\dot{V})_2|^2 + \langle \dot{V}_1,\Omega(V) \rangle - \langle \Omega(V_1),(\dot{V})_2 \rangle - \langle (\dot{V})_2,\Omega(V) \rangle - \langle (\dot{V})_1,\Omega(V) \rangle \Big\} \mathrm{d}t&&\\ &=\int_0^T \Big\{ |(\dot{V})_2|^2 - \langle \Omega(V_1)+\Omega(V)_2,(\dot{V})_2 \rangle + \langle \dot{V}_1-(\dot{V})_1,\Omega(V) \rangle \Big\} \mathrm{d}t&&\\ &=\int_0^T \Big\{ \Big|(\dot{V})_2-\frac{1}{2}(\Omega(V_1)+\Omega(V))_2\Big|^2-\frac{1}{4}|\Omega(V_1)+\Omega(V)_2|^2- \langle \Omega(V)_1,\Omega(V) \rangle + \langle \Omega(V_1),\Omega(V)_2 \rangle \Big\} \mathrm{d}t \ . &&\\ \end{aligned}$$* *Write $H(V)=\frac{1}{4}|\Omega(V_1)+\Omega(V)_2|^2+ \langle \Omega(V)_1,\Omega(V) \rangle - \langle \Omega(V_1),\Omega(V)_2 \rangle$ and observe that* *$$\begin{aligned} H(V)&= \frac{1}{4}|2\Omega(V_1)+\Omega(V_2)_2|^2+|\Omega(V_2)_1|^2-\langle \Omega(V_1),\Omega(V_1) \rangle- \langle \Omega(V_1),\Omega(V_2)_2 \rangle &&\\ &= \frac{1}{4}|\Omega(V_2)_2|^2+|\Omega(V_2)_1|^2 &&\\ &= \frac{1}{4}|\Omega(V_2)|^2+\frac{3}{4}|\Omega(V_2)_1|^2 &&\\ &= |V_2|^2 \Big\langle A^{\Omega}\Big(\frac{\dot{\gamma}}{|\dot{\gamma}|},\frac{V_2}{|V_2|}\Big),\frac{V_2}{|V_2|}\Big\rangle , &&\\ \end{aligned}$$* *where in the last equality we use the definition [\[operatorA\]](#operatorA){reference-type="eqref" reference="operatorA"} of $A^{\Omega}$. Therefore, we finally obtain that $$\int_0^T P(V)\mathrm{d}t = \int_0^T \Big\{ \Big|(\dot{V})_2- \frac{1}{2}(\Omega(V_1)+\Omega(V))_2\Big|^2 - |V_2|^2 \Big\langle A^{\Omega}\Big(\frac{\dot{\gamma}}{|\dot{\gamma}|},\frac{V_2}{|V_2|}\Big),\frac{V_2}{|V_2|}\Big\rangle \Big\} \mathrm{d}t .$$ By substituting $P(V)$ in [\[conto1\]](#conto1){reference-type="eqref" reference="conto1"}, the statement follows. ◻* The next lemma shows that we can always construct variations along $\gamma$ whose evaluation in the second variation contains no terms depending on the tangential component. **Lemma 9**. *Let $V$ be a variation along $\gamma$ such that $\langle V,\dot{\gamma}\rangle=0$.
Then there exists a periodic function $g:[0,T] \to \mathbb{R}$ and a real constant $\tau$, depending linearly on $V$, such that, if we write $W=V+g\dot{\gamma}$, then $$\mathrm{Q}_{\gamma}(\eta_k)(W,\tau)= \int_0^T \Big|(\dot{V})_2-\frac{1}{2} \Omega(V)_2\Big|^2 \mathrm{d}t -\int_0^T |V|^2\mathrm{Sec}^{\Omega}_k\Big(\frac{\dot{\gamma}}{|\dot{\gamma}|},\frac{V}{|V|}\Big) \mathrm{d}t .$$* **Proof.* Consider $g$ and $\tau$ defined as follows: $$g(t)=-\int_0^t \Big\{ \frac{\langle \dot{V},\dot{\gamma}\rangle}{|\dot{\gamma}|^2} - \frac{\tau}{T} \Big\} \mathrm{d}s \ \ , \ \ \tau= \int_0^T \frac{\langle \dot{V},\dot{\gamma}\rangle}{|\dot{\gamma}|^2} \mathrm{d}t .$$ In particular, the couple $(g,\tau)$ satisfies the differential problem along $\gamma$ given by $$\label{differentialproblem} \begin{cases} \dot{g}+\frac{\langle \dot{V},\dot{\gamma}\rangle}{|\dot{\gamma}|^2} - \frac{\tau}{T}=0 \\ g(0)=g(T)=0 \end{cases}$$ Thus, if $W=V+g\dot{\gamma}$, then $$\label{usa1} \Big(\frac{\langle \dot{W},\dot{\gamma}\rangle}{|\dot{\gamma}|} - \frac{\tau}{T}|\dot{\gamma}|\Big)^2 = |\dot{\gamma}|^2\Big(\frac{\langle \dot{V},\dot{\gamma}\rangle}{|\dot{\gamma}|^2}+\dot{g} - \frac{\tau}{T}\Big)^2 =0 ,$$ and $$\label{usa2} (\dot{W})_2-\frac{1}{2}(\Omega(W_1)+\Omega(W))_2 =(\dot{V})_2+g\ddot{\gamma}-g\Omega(\dot{\gamma})-\frac{1}{2}\Omega(V)_2=(\dot{V})_2-\frac{1}{2}\Omega(V)_2 .$$ By using [\[usa1\]](#usa1){reference-type="eqref" reference="usa1"} and [\[usa2\]](#usa2){reference-type="eqref" reference="usa2"} in the expression [\[crucial\]](#crucial){reference-type="eqref" reference="crucial"} of Lemma [Lemma 8](#lemmasec){reference-type="ref" reference="lemmasec"}, the statement follows. ◻* Along $\gamma$, with the same splitting as above, we define the operator $\Tilde{\Omega}: \gamma^*TM \to \gamma^*TM$ as $$\label{omegatilde} \tilde{\Omega}(V)=\Omega(V_1)+\Omega(V)_1+\frac{1}{2}\Omega(V_2)_2 .$$ Consider now the differential problem given by $$\label{parallel} \dot{V}=\tilde{\Omega}(V)
.$$ This is a linear system of first-order ordinary differential equations, which allows us to define a linear isomorphism $P_{\gamma}:T_{\gamma(0)}M \to T_{\gamma(0)}M$ as follows $$P_{\gamma}(v)=V(T) ,$$ where $V(t)$ is the unique solution of [\[parallel\]](#parallel){reference-type="eqref" reference="parallel"} with initial vector $v$. **Lemma 10**. *The operator $\Tilde{\Omega}$ is antisymmetric with respect to the metric $g$. In particular, this implies that the linear isomorphism $P_{\gamma}$ is orthogonal.* **Proof.* Let $V$ and $W$ be variations along $\gamma$. Then, by definition [\[omegatilde\]](#omegatilde){reference-type="eqref" reference="omegatilde"} of $\tilde{\Omega}$, $$\begin{aligned} \langle \tilde{\Omega}(V),W \rangle &= \langle \Omega(V_1)+\Omega(V)_1+\frac{1}{2}\Omega(V_2)_2 ,W \rangle && \\ &=-\langle V_1,\Omega(W)\rangle- \langle V, \Omega(W_1) \rangle - \frac{1}{2} \langle V_2, \Omega(W_2) \rangle && \\ &=-\langle V, \Omega(W)_1 \rangle - \langle V, \Omega(W_1) \rangle - \frac{1}{2} \langle V,\Omega(W_2)_2 \rangle && \\ &=-\langle V, \tilde{\Omega}(W)\rangle .\end{aligned}$$ This implies that if $V$ and $W$ are solutions of [\[parallel\]](#parallel){reference-type="eqref" reference="parallel"}, then $$\frac{d}{\mathrm{d}t} \langle V ,W \rangle = \langle \tilde{\Omega}(V),W \rangle + \langle V,\tilde{\Omega}(W) \rangle=0 .$$ By definition of $P_{\gamma}$, the statement follows. ◻* Thus $P_{\gamma}$ is an orthogonal operator, and in this sense the above construction generalizes the notion of Riemannian parallel transport to the magnetic case. We can now proceed to the proof of Theorem C, which is an immediate consequence of the next statement. **Lemma 11**. *Let $(M,g,\sigma)$ be a magnetic system over an oriented even-dimensional manifold and $\gamma$ a zero of $\eta_k$. If $\mathrm{Sec}^{\Omega}_k>0$, then $\mathrm{index}(\gamma) \geq 1$.* **Proof.* Assume $\mathrm{Sec}^{\Omega}_k>0$.
Consider the subbundle $G \subset \gamma^*TM$ orthogonal to $\dot{\gamma}$, together with its projection map $\mathrm{p}_G: \gamma^*TM \to G$. Because $\tilde{\Omega}(\dot{\gamma})=\Omega(\dot{\gamma})$, $\dot{\gamma}$ is a periodic solution of [\[parallel\]](#parallel){reference-type="eqref" reference="parallel"} and the map $P_{\gamma}$ leaves $G(0)$ invariant. Since $P_{\gamma}$ is orthogonal and $M$ is oriented, we deduce that $\tilde{P}_{\gamma}=P_{\gamma} \circ \mathrm{p}_G$ is a special orthogonal isomorphism of $G(0)$. By assumption $G(0)$ is odd-dimensional, so that $1$ is an eigenvalue of $\tilde{P}_{\gamma}$, i.e. there exists $v \in G(0)$ such that $\tilde{P}_{\gamma}(v)=v$. By definition of $\tilde{P}_{\gamma}$, this amounts to saying that there exists a periodic solution $V$ of [\[parallel\]](#parallel){reference-type="eqref" reference="parallel"} orthogonal to $\dot{\gamma}$. Let $V$ be such a solution and observe that $(\dot{V})_2=\tilde{\Omega}(V)_2=\frac{1}{2}\Omega(V)_2$. Consider $W=V+g\dot{\gamma}$ and $\tau$ as in Lemma [Lemma 9](#tecnica1){reference-type="ref" reference="tecnica1"}. It follows that $$\begin{aligned} \mathrm{Q}_{\gamma}(\eta_k)(W,\tau)&= \int_0^T \Big|(\dot{V})_2-\frac{1}{2} \Omega(V)_2\Big|^2 \mathrm{d}t -\int_0^T |V|^2\mathrm{Sec}^{\Omega}_k\Big(\frac{\dot{\gamma}}{|\dot{\gamma}|},\frac{V}{|V|}\Big) \mathrm{d}t && \\ &=-\int_0^T |V|^2\mathrm{Sec}^{\Omega}_k\Big(\frac{\dot{\gamma}}{|\dot{\gamma}|},\frac{V}{|V|} \Big) \mathrm{d}t<0 \ .\end{aligned}$$ Therefore the index of $\gamma$ is at least one. ◻* *Proof of Theorem C.* As argued in [@Abb2], for $k>c$, $\eta_k$ carries a minimizer in each nontrivial free homotopy class of loops of $M$. If $M$ is oriented and even-dimensional and $\mathrm{Sec}^{\Omega}_k>0$, then by Lemma [Lemma 11](#Klinglemma){reference-type="ref" reference="Klinglemma"}, $\pi_1(M)$ is necessarily trivial.
In the case when $M$ is a closed oriented surface, as shown in [@CMP], if the magnetic form is exact then $\eta_k$ carries a minimizer for every $k\in (0,c]$. Thus, with the same argument, one can deduce that if $\mathrm{Sec}^{\Omega}_k$ is positive on a level $k$, then $M=S^2$ and $k>c$. ◻ The next lemma gives an explicit formula for the second variation evaluated on vector fields obtained by rescaling unit-norm solutions of Equation [\[parallel\]](#parallel){reference-type="eqref" reference="parallel"}. **Lemma 12**. *Let $V$ be a solution of [\[parallel\]](#parallel){reference-type="eqref" reference="parallel"} of unit norm and orthogonal to $\dot{\gamma}$. Consider $V^f=fV$, where $f:[0,T] \to \mathbb{R}$ with $f(0)=f(T)=0$. Then there exist a variation $W^f$ and a real number $\tau$ such that $$\mathrm{Q}_{\gamma}(\eta_k)(W^f,\tau)= \int_0^T \Big\{ \dot{f}^2 - f^2 \mathrm{Sec}^{\Omega}_k\Big( \frac{\dot{\gamma}}{|\dot{\gamma}|} , V \Big) \Big\} \mathrm{d}t \ .$$* **Proof.* Let $V^f$ be as in the statement. Then, by Lemma [Lemma 9](#tecnica1){reference-type="ref" reference="tecnica1"}, there exist $g$ and $\tau$ such that, writing $W^f=V^f+g\dot{\gamma}$, it follows that $$\begin{aligned} \mathrm{Q}_{\gamma}(\eta_k)(W^f,\tau)&=\int_0^T \Big| (\dot{V}^f)_2 - \frac{1}{2} \Omega(V^f)_2 \Big|^2- |V^f|^2 \ \mathrm{Sec}^{\Omega}_k\Big( \frac{\dot{\gamma}}{|\dot{\gamma}|}, V\Big) \mathrm{d}t && \\ &=\int_0^T \Big| \dot{f}V+f(\dot{V})_2 - \frac{f}{2} \Omega(V)_2 \Big|^2- |fV|^2 \ \mathrm{Sec}^{\Omega}_k\Big( \frac{\dot{\gamma}}{|\dot{\gamma}|}, V \Big) \mathrm{d}t && \\ &=\int_0^T \Big\{ \dot{f}^2-f^2\mathrm{Sec}^{\Omega}_k\Big( \frac{\dot{\gamma}}{|\dot{\gamma}|}, V \Big) \Big\} \mathrm{d}t .\end{aligned}$$ ◻* With the previous lemma at hand, we conclude the section by stating a magnetic version of the Riemannian Bonnet-Myers theorem. **Lemma 13**. *Let $\gamma=(x,T)$ be a zero of $\eta_k$ with $\mathrm{index}(\gamma)=m$.
If $\mathrm{Ric}^{\Omega}_k\geq \frac{1}{r^2}>0$ for a positive constant $r$, then $$T\leq r \pi(m+1) .$$* **Proof.* Let $\{ V_1,...,V_{n-1} \}$ be an orthonormal family of solutions of [\[parallel\]](#parallel){reference-type="eqref" reference="parallel"} orthogonal to $\dot{\gamma}$. For $j=0,1,...,m$ and $i=1,...,n-1$ consider $V_i^{f_{j}}=f_{j}V_i$, where $$f_j(t)= \left\{\begin{split} \sin\Big( \frac{(m+1)\pi t}{T}\Big) \ \ \ , \ t\in \Big[\frac{jT}{m+1},\frac{(j+1)T}{m+1}\Big] \\ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \mathrm{otherwise}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ \end{split}\right.$$* *Assume $\mathrm{Ric}^{\Omega}_k\geq \frac{1}{r^2}>0$ and suppose by contradiction that $T>\pi r (m+1)$. Let $W_{i}^{f_j}=V_{i}^{f_j} + g_{ij}\dot{\gamma}$ and $\tau_{ij}$ be given by Lemma [Lemma 12](#tecnica2){reference-type="ref" reference="tecnica2"}, fix $j$, and observe that the sum* *$$\begin{aligned} \sum_{i=1}^{n-1} \mathrm{Q}_{\gamma}(\eta_k)(W_{i}^{f_j},\tau_{ij}) &= \sum_{i=1}^{n-1} \int_{\frac{jT}{m+1}}^{\frac{(j+1)T}{m+1}} \Big\{ \frac{(m+1)^2\pi^2}{T^2} \cos^2 \Big( \frac{(m+1)\pi t}{T} \Big) - \sin^2 \Big( \frac{(m+1)\pi t}{T}\Big) \mathrm{Sec}^{\Omega}_k\Big(\frac{\dot{\gamma}}{|\dot{\gamma}|},V_i \Big) \Big\} \mathrm{d}t && \\ & =(n-1)\Big[ \frac{(m+1)^2\pi^2}{2T(m+1)}-\int_{\frac{jT}{m+1}}^{\frac{(j+1)T}{m+1}} \Big\{ \sin^2 \Big( \frac{(m+1)\pi t}{T} \Big) \mathrm{Ric}^{\Omega}_k\Big( \frac{\dot{\gamma}}{|\dot{\gamma}|} \Big) \Big\} \mathrm{d}t \Big] && \\ & \leq(n-1)\Big[ \frac{(m+1)^2\pi^2}{2T(m+1)}-\frac{T}{2 r^2 (m+1)}\Big] && \\ & =(n-1)\Big[ \frac{(m+1)^2\pi^2 r^2-T^2}{2T(m+1) r^2}\Big] <0 \ .\end{aligned}$$* *Therefore, for every $j$ there exists $i_{j}\in \{ 1,...,n-1 \}$ such that $\mathrm{Q}_{\gamma}(\eta_k)(W_{i_j}^{f_j},\tau_{i_j j})<0$. Moreover, by construction, for every $s\neq l$ the supports of $f_{s}$ and $f_{l}$ are disjoint. In particular, we deduce that the $W_{i_j}^{f_j}$ are linearly independent.
For real coefficients $\lambda_0,...,\lambda_{m}$ define $$V=\sum_{j=0}^{m} \lambda_j V^{f_j}_{i_j} , \ g= \sum_{j=0}^{m} \lambda_j g_{i_j j} \ \mathrm{and} \ \tau= \sum_{j=0}^{m} \lambda_j \tau_{i_j j}.$$ Observe that $g$ and $\tau$, by linearity of Equation [\[differentialproblem\]](#differentialproblem){reference-type="eqref" reference="differentialproblem"} and Equation [\[usa2\]](#usa2){reference-type="eqref" reference="usa2"}, are exactly those associated to $V$ by Lemma [Lemma 9](#tecnica1){reference-type="ref" reference="tecnica1"}, so that, writing $W=V+g\dot{\gamma}$, we obtain $$\begin{aligned} \mathrm{Q}_{\gamma}(\eta_k)(W,\tau)&= \int_0^T \Big|(\dot{V})_2-\frac{1}{2} \Omega(V)_2\Big|^2 \mathrm{d}t -\int_0^T |V|^2\mathrm{Sec}^{\Omega}_k\Big(\frac{\dot{\gamma}}{|\dot{\gamma}|},\frac{V}{|V|}\Big) \mathrm{d}t \\ &=\sum_{j=0}^{m} \lambda_j^2 \int_{\frac{jT}{m+1}}^{\frac{(j+1)T}{m+1}} \Big\{ \dot{f}_{ j}^2 - f_j^2 \ \mathrm{Sec}^{\Omega}_k\Big(\frac{\dot{\gamma}}{|\dot{\gamma}|}, V_{i_j} \Big) \Big\} \mathrm{d}t \\ &= \sum_{j=0}^{m} \lambda_j^2 \, \mathrm{Q}_{\gamma}(\eta_k)(W_{i_j}^{f_j},\tau_{i_j j}) \leq 0 ,\end{aligned}$$ where in the second line we used again that the supports of the $f_j$ are disjoint. Observe that equality holds in the above if and only if $\lambda_j=0$ for every $j$. Summing up, the family $\{ (W_{i_j}^{f_j}, \tau_{i_j j}) \}$ consists of $m+1$ linearly independent vectors of $T_{\gamma}\mathcal{M}$ which generate a vector subspace $Z$, of dimension $m+1$, such that $\mathrm{Q}_{\gamma}(\eta_k)|_{_Z}$ is negative definite. We conclude that the index of $\gamma$ is greater than or equal to $m+1$, in contradiction with the assumption. ◻* # Index estimates below $c(g,\sigma)$ and proof of Theorem A (and Theorem A1) {#proofA} ## Weakly exact case {#weaklyexact} ### Minimax geometry below $c(g,\sigma)$. {#minimaxdebole} In this paragraph, we assume $\sigma$ to be weakly exact.
As in [\[primcon\]](#primcon){reference-type="eqref" reference="primcon"}, let $S_k^{\sigma}$ be the primitive of $\eta_k$ defined on $\mathcal{M}_0$. Below the critical value, $S_k^{\sigma}$ enjoys a minimax geometry on the set of loops with short length, which we discuss now. Denote by $M^+=M\times (0,+\infty)$ the set of constant loops with free period and consider $$\Gamma^k_0=\{ \phi:[0,1]\to \mathcal{M}_0, \ \phi(0) \in M^+ \ \mathrm{and} \ S_k^{\sigma}(\phi(1))<0 \} .$$ When $k$ is below $c$, $\Gamma^k_0$ is non-empty and we define the minimax value function as follows $$\mathbf{s}:(0,c) \to \mathbb{R}\ , \ \mathbf{s}(k)=\inf_{\phi \in \Gamma^k_0}\max_{t \in [0,1]} S_k^{\sigma}(\phi(t)) .$$ The minimax geometry of $S_k^{\sigma}$ on $\Gamma_0^k$ is highlighted by the next lemma. **Lemma 14**. *Let $I$ be an open interval with compact closure fully contained in $(0,c)$. Then there exists a positive $\varepsilon=\varepsilon(I)$ such that $$\mathbf{s}(k)\geq \varepsilon \ , \ \forall k\in I .$$* **Proof.* For the proof we refer the reader to [@AB Section 5]. ◻* Consider the negative gradient vector field $-\nabla S_k^{\sigma}$ on $\mathcal{M}_0$, induced by the Riemannian metric $g_{\mathcal{M}}$. Since $-\nabla S_k^{\sigma}$ is smooth, it admits a local flow for positive times which in general is not complete. Indeed, the sources of non-completeness mainly originate from the non-completeness of $(0,+\infty)$ and the fact that $|\nabla S_k^{\sigma}|_{\mathcal{M}}$ is not bounded on $\mathcal{M}_0$. One can avoid these situations by considering the pseudo-gradient given by $$\label{pseudow} \mathrm{X}_k= -(h \circ S_k^{\sigma}) \frac{\nabla S_k^{\sigma}}{\sqrt{1+|\nabla S_k^{\sigma}|_{\mathcal{M}}^2}} .$$ Here $h:\mathbb{R}\to [0,1]$ is a cut-off function such that $h^{-1}\{ 0 \}=(-\infty , \frac{\mathbf{s}(k)}{4}]$ and $h^{-1}\{ 1 \}=[\frac{\mathbf{s}(k)}{2}, +\infty)$.
As argued in [@Abb2 Section 8], because $\mathrm{X}_k$ vanishes on the set $\{ S_k^{\sigma}< \frac{\mathbf{s}(k)}{4} \}$, it admits a positively complete flow $\Psi_{k}: [0,+\infty)\times \mathcal{M}_0 \to \mathcal{M}_0$. The next lemma highlights two important properties of $\Psi_{k}$. **Lemma 15**. *Let $u=(x,T): (0,+\infty)\to \mathcal{M}_0$ be an integral curve of $\Psi_{k}$. Then for every $s \in (0,+\infty)$ the following statements hold:* - *$S_k^{\sigma}(u(s)) \leq S_k^{\sigma}(u(0))$.* - *$|T(s)-T(0)|^2 \leq s\Big( S_k^{\sigma}(u(0)) - S_k^{\sigma}(u(s))\Big)$.* **Proof.* For the proof we refer the reader to [@Merry Section 5]. ◻* **Lemma 16**. *The set $\Gamma_0^k$ is $\Psi_{k}$-invariant.* *Proof.* Since for every $(p,T) \in M^+$ we have $\nabla S_k^{\sigma}(p,T) = k\frac{\partial}{\partial T}$, it follows that $\Psi_{k}( t , M^+) \subseteq M^+$ for every positive time $t$. This fact, together with point (i) of Lemma [Lemma 15](#gradientlemma){reference-type="ref" reference="gradientlemma"}, implies that $\Gamma_0^k$ is $\Psi_{k}$-invariant. ◻ ### Existence almost everywhere and index estimates. Hereafter we fix an interval $I$ with compact closure fully contained in $(0,c)$. Because $S_k^{\sigma}$ is monotone increasing with respect to the parameter $k$, one easily deduces that $\mathbf{s}$ is monotone increasing as well. Thus there exists $J\subset I$ of full Lebesgue measure such that $\mathbf{s}$ is differentiable on $J$. Let us point out that the almost-everywhere differentiability of $\mathbf{s}$ is a crucial ingredient in the Struwe monotonicity argument. Indeed, when $k\in J$, we can recover a compactness condition for vanishing sequences related to the minimax geometry pointed out in Section [5.1.1](#minimaxdebole){reference-type="ref" reference="minimaxdebole"}. For more details about this construction, we refer the reader to [@Abb2 Section 8] and [@Merry Section 5].
Here we will need a more precise version of this statement (see [@AMP] and [@AAMP]), in order to prepare the ground for Lemma [Lemma 19](#index1){reference-type="ref" reference="index1"}, which states that vanishing points of $\eta_k$ coming from this minimax geometry have Morse index bounded by 1. We proceed by showing that, at the values of the energy where $\mathbf{s}$ is differentiable, we can bound the period of points in the image of elements of $\Gamma_0^k$ which almost realize the minimax value. **Lemma 17**. *Let $k\in J$. Then there exists a constant $A$ such that for every small $\varepsilon>0$, writing $k_{\varepsilon}=k+\varepsilon$, if $\phi \in \Gamma_0^{k}$ and $\max S^{\sigma}_{k_\varepsilon}(\phi) \leq \mathbf{s}(k_{\varepsilon}) + \varepsilon$, then for every $t\in[0,1]$ satisfying also $S_k^{\sigma}(\phi(t))> \mathbf{s}(k)- \varepsilon$, the inequality $T_{\phi(t)} \leq A+2$ holds.* **Proof.* Since $\mathbf{s}$ is differentiable at $k$, there exists a positive constant $A=A(k)$ such that for every positive $\varepsilon$ $$\label{costanteA} |\mathbf{s}(k_{\varepsilon})-\mathbf{s}(k)|\leq A \cdot |k_{\varepsilon}-k| .$$ Thus, if $\phi \in \Gamma_0^k$ and $t \in [0,1]$ are as in the statement, then $$T_{\phi(t)}=\frac{S^{\sigma}_{k_\varepsilon} (\phi(t))-S_k^{\sigma}(\phi(t))}{k_{\varepsilon}-k}\leq \frac{\mathbf{s}(k_\varepsilon)+\varepsilon-\mathbf{s}(k)+\varepsilon}{\varepsilon} \leq A+2 \ .$$ ◻* Let $T^*> \sqrt{2\mathbf{s}(k)}+3(A+1)$, whose role will be clarified later. Consider the set $$\mathcal{B}_k= \{ S_k^{\sigma}\geq \frac{\mathbf{s}(k)}{2} \} \cap \{ T < T^* \} .$$ By Lemma [Lemma 14](#minimaxlemma){reference-type="ref" reference="minimaxlemma"}, $\mathbf{s}(k)$ is positive and, as argued in [@Merry Proposition 5.8], $S_k^{\sigma}$ restricted to $\mathcal{B}_k$ satisfies the Palais-Smale condition. Therefore the set $\mathcal{C}_k = \mathrm{Crit}(S_k^{\sigma}) \cap \mathcal{B}_k = \mathcal{Z}(\eta_k) \cap \mathcal{B}_k$ is compact.
The next statement shows that $\mathcal{C}_k$ is non-empty and that we can always find an element of $\Gamma_0^k$ which passes arbitrarily close to $\mathcal{C}_k$. **Lemma 18**. *Let $k\in J$. For every open neighborhood $V$ of $\mathcal{C}_k$ and for every small $\varepsilon>0$ there exists an element $\varphi_{\varepsilon} \in \Gamma_0^k$ such that $$\varphi_{\varepsilon} ([0,1])\subset \{ S_k^{\sigma}< \mathbf{s}(k) \} \cup \Big(\{ \mathbf{s}(k)\leq S_k^{\sigma}< \mathbf{s}(k)+\varepsilon \} \cap V \Big) \ .$$ In particular, $\mathcal{C}_k$ is non-empty.* **Proof.* Let $k$ be a point of differentiability for $\mathbf{s}$ and let $V$ be an open neighborhood of $\mathcal{C}_k$. Since $\mathcal{C}_k$ consists of fixed points of $\Psi_{k}$ and $S_k^{\sigma}$ satisfies the Palais-Smale condition on $\mathcal{B}_k$, there exist an open set $V^{'}$ and a positive $\delta=\delta(V^{'})$ such that $V^{'}$ still contains $\mathcal{C}_k$, $\Psi_{k}([0,1] \times V^{'}) \subset V \cap \mathcal{B}_k$ and $$\label{boundgradient} |\nabla S_k^{\sigma}|_{\mathcal{M}} \geq \delta >0 \ \ \mathrm{for\ all} \ \ (x,T) \in \mathcal{B}_k \setminus V^{'} \ .$$ Fix a positive $\varepsilon$ such that $\varepsilon < \delta^2$. By definition of $\mathbf{s}$, for every $\tilde{\varepsilon} \in (0, \varepsilon)$ there exists $\phi_{\tilde{\varepsilon}} \in \Gamma_0^{k_{\tilde{\varepsilon}}}$ such that $$\max_{t \in [0,1]} S^{\sigma}_{k_{\tilde{\varepsilon}}}( \phi_{\tilde{\varepsilon}}(t)) \leq \mathbf{s}(k_{\tilde{\varepsilon}}) + \tilde{\varepsilon} \ .$$ Moreover, up to shrinking $\tilde{\varepsilon}$, we can assume that $\tilde{\varepsilon}(A+1)<\varepsilon$ and that $\phi_{\tilde{\varepsilon}} \in \Gamma_0^k$.
By using [\[costanteA\]](#costanteA){reference-type="eqref" reference="costanteA"} and Lemma [Lemma 17](#periodolimitato){reference-type="ref" reference="periodolimitato"}, we deduce that $$S_k^{\sigma}(\phi_{\tilde{\varepsilon}}(t)) < S^{\sigma}_{k_{\tilde{\varepsilon}}}( \phi_{\tilde{\varepsilon}}(t)) \leq \mathbf{s}(k_{\tilde{\varepsilon}}) + \tilde{\varepsilon} \leq \mathbf{s}(k)+(A+1)\tilde{\varepsilon}<\mathbf{s}(k) + \varepsilon \ .$$ We claim that the element $\varphi_{\varepsilon} \in \Gamma_0^k$ defined by $$\label{element} \varphi_{\varepsilon}(t) = \Psi_{k}(1,\phi_{\tilde{\varepsilon}}(t)) \ ,$$ is the desired one. Indeed, first observe that, because $S_k^{\sigma}$ decreases along the flow lines of $\Psi_{k}$, if $t \in [0,1]$ is such that $S_k^{\sigma}(\phi_{\tilde{\varepsilon}}(t))< \mathbf{s}(k)$ then $S_k^{\sigma}(\varphi_{\varepsilon} (t))< \mathbf{s}(k)$. By the choice of $T^*$, by Lemma [Lemma 17](#periodolimitato){reference-type="ref" reference="periodolimitato"} and by Lemma [Lemma 15](#gradientlemma){reference-type="ref" reference="gradientlemma"}, if $S_k^{\sigma}(\phi_{\tilde{\varepsilon}} (t)) \in (\mathbf{s}(k), \mathbf{s}(k) + \varepsilon)$, then either $S_k^{\sigma}(\varphi_{\varepsilon} (t))< \mathbf{s}(k)$, or $\Psi_{k}(s, \phi_{\tilde{\varepsilon}}(t) ) \in\mathcal{B}_k$ with $S_k^{\sigma}(\Psi_{k}(s, \phi_{\tilde{\varepsilon}}(t))) \in (\mathbf{s}(k), \mathbf{s}(k) + \varepsilon)$ for every $s \in [0,1]$. Let us focus on the second case. If there exists a time $s_0\in [0,1]$ such that $\Psi_{k}(s_0, \phi_{\tilde{\varepsilon}}(t)) \in V^{'}$ then, by the assumptions on $V^{'}$, we deduce that $\Psi_{k}( 1 , \phi_{\tilde{\varepsilon}}(t)) \in V$. Suppose by contradiction that, for every $t$ falling in the second case, the flow line $s\mapsto \Psi_{k}(s, \phi_{\tilde{\varepsilon}}(t))$ does not enter $V^{'}$.
Then, by [\[boundgradient\]](#boundgradient){reference-type="eqref" reference="boundgradient"}, we deduce that $$\label{eq1} \begin{split} S_k^{\sigma}(\varphi_{\varepsilon} (t)) & = S_k^{\sigma}(\phi_{\tilde{\varepsilon}} (t)) + \int_0^1 \frac{d}{ds}S_k^{\sigma}(\Psi_{k}(s,\phi_{\tilde{\varepsilon}} (t))) \mathrm{d}s \\ & \leq \mathbf{s}(k) + \varepsilon - \int_0^1 |\nabla S_k^{\sigma}|_{_{\mathcal{M}}}^2 \mathrm{d}s \\ & \leq \mathbf{s}(k) +\varepsilon - \delta^2 \\ & < \mathbf{s}(k) \ . \end{split}$$ In this way we find an element of $\Gamma_0^k$ such that $\varphi_{\varepsilon}([0,1]) \subset \{ S_k^{\sigma}< \mathbf{s}(k) \}$, which contradicts the definition of $\mathbf{s}(k)$. We can conclude that $\varphi_{\varepsilon}$, as defined in [\[element\]](#element){reference-type="eqref" reference="element"}, is such that either $S_k^{\sigma}(\varphi_{\varepsilon}(t))<\mathbf{s}(k)$, or $S_k^{\sigma}(\varphi_{\varepsilon}(t))\in (\mathbf{s}(k),\mathbf{s}(k) + \varepsilon)$ and $\varphi_{\varepsilon}(t) \in V$. The claim is proved. ◻* **Lemma 19**. *Let $k\in J$. There exists $\gamma \in \mathcal{Z}(\eta_k)$ such that $\mathrm{index}(\gamma)\leq 1$.* **Proof.* By Lemma [Lemma 18](#struwe1){reference-type="ref" reference="struwe1"} the set $\mathcal{C}_k$ is not empty. Let $\gamma \in \mathcal{C}_k$ and consider $(\mathcal{V}_{\gamma},\mathrm{\Phi}_{\gamma})$ and $D_{\gamma}$ as in Lemma [Lemma 3](#chart){reference-type="ref" reference="chart"}. Let $\rho^t_{\gamma}: B_{r_{\gamma}}^H \times (B_{r_{\gamma}}^E \setminus \{ 0 \} ) \to B_{r_{\gamma}}^H \times B^E_{r_{\gamma}}$ be the deformation given by $$\rho^t_{\gamma}(y_h,y_e )=\Big(y_h,y_e + \min \{ t, r_{\gamma}-|y_e|_0 \} \frac{y_e}{|y_e|_0} \Big) .$$ Observe that $\rho^t_{\gamma}$ pushes points of $B_{r_{\gamma}}^H \times (B_{r_{\gamma}}^E \setminus \{ 0 \} )$ towards the boundary, in the direction of $E$.
In particular, if $t<r_{\gamma}-|y_e|_0$, then by point (ii) of Lemma [Lemma 3](#chart){reference-type="ref" reference="chart"} it holds that $$\label{chart2} (S_k^{\sigma}\circ \mathrm{\Phi_{\gamma}^{-1}})(\rho^t_{\gamma}(y_h,y_e)) \leq (S_k^{\sigma}\circ \mathrm{\Phi_{\gamma}^{-1}})(y_h,y_e) - D_{\gamma} t^2 .$$ Denote by $\mathcal{W}_{\gamma}=\mathrm{\Phi}_{\gamma}^{-1}(B_{\sfrac{r_{\gamma}}{4}}^H \times B^E_{\sfrac{r_{\gamma}}{4}})$ and by $\mathcal{U}_{\gamma}=\mathrm{\Phi}_{\gamma}^{-1}(B_{\sfrac{r_{\gamma}}{2}}^H \times B^E_{\sfrac{r_{\gamma}}{2}})$. Let $\chi_{\gamma}$ be a bump function on $\mathcal{M}_0$ supported in $\mathcal{V}_{\gamma}$ such that $\chi_{\gamma} \equiv 1$ on $\mathcal{U}_{\gamma}$. Suppose by contradiction that every $\gamma \in \mathcal{C}_k$ has index greater than 1. Since $\mathcal{C}_k$ is compact, there exist $\gamma_1,..., \gamma_n\in \mathcal{C}_k$ and local charts $(\mathcal{V}_{\gamma_i},\mathrm{\Phi}_{\gamma_i})$ such that $\mathcal{C}_k \subset \bigcup_{i=1}^n \mathcal{W}_{\gamma_i}$. Without loss of generality, we can assume that $M^+ \cap \mathcal{V}_{\gamma_i} = \emptyset$ for every $i$. Let us point out that each change of coordinates is bi-Lipschitz, being a composition of bi-Lipschitz maps.
We will denote by $C_{ij}$ the Lipschitz constant of the change of coordinates $\mathrm{\Phi}_{\gamma_j}\circ \mathrm{\Phi}_{\gamma_i}^{-1}$ and by $| \cdot |_{0,i}$ the norm on $\mathrm{\Phi}_{\gamma_i}(\mathcal{V}_{\gamma_i})$ induced by the chart $\mathrm{\Phi}_{\gamma_i}$.\ Let $\delta=\min_i D_{\gamma_i}$, $R=\min_i r_{\gamma_i}$, $C=\max_{i\neq j} C_{ij}$ and fix $$\varepsilon \in \Big(0,\min \Big\{ \frac{R}{4nC}, \frac{R}{4n} \Big\} \Big) .$$ For $j=1,...,n$ consider the set $$\mathcal{W}_{\gamma_i, j} = \mathrm{\Phi}_{\gamma_i}^{-1} \Big(\bigcup_{z \in \mathrm{\Phi}_{\gamma_i}(\mathcal{W}_{\gamma_i})}B(z,j\varepsilon C)\Big) ,$$ where $B(z,j\varepsilon C)$ is the ball in $\mathrm{\Phi}_{\gamma_i}(\mathcal{V}_{\gamma_i})$ (with respect to $| \cdot |_{0,i}$) centered at $z$ with radius $j\varepsilon C$. The following inclusions hold $$\mathcal{W}_{\gamma_i} \subset \mathcal{W}_{\gamma_i, 1} \subset \mathcal{W}_{\gamma_i, 2} \subset ... \subset \mathcal{W}_{\gamma_i, j}\subset \mathcal{W}_{\gamma_i, j+1} \subset ... \subset \mathcal{W}_{\gamma_i, n} \subset \mathcal{U}_{\gamma_i} \subset \mathcal{V}_{\gamma_i} .$$ By Lemma [Lemma 18](#struwe1){reference-type="ref" reference="struwe1"}, there exists an element $\phi \in \Gamma_0^k$ such that $$\label{inclusion} \phi([0,1]) \subset \{ S_k^{\sigma}< \mathbf{s}(k) \} \cup \Big( \{ \mathbf{s}(k) \leq S_k^{\sigma}<\mathbf{s}(k) + \varepsilon^2 \delta \} \cap \bigcup_{i=1}^n \mathcal{W}_{\gamma_i} \Big) .$$ Up to refining the cover, we can assume that the endpoints satisfy $\phi(0),\phi(1) \notin \bigcup_{i=1}^n \mathcal{V}_{\gamma_i}$. By point (i) of Lemma [Lemma 3](#chart){reference-type="ref" reference="chart"}, $\mathrm{\Phi}_{\gamma_i} (\mathcal{C}_k \cap \mathcal{V}_{\gamma_i}) \subseteq B^H_{r_{\gamma_i}} \times \{ 0 \}$ for every $i$, and by contradiction we are assuming that $B^H_{r_{\gamma_i}} \times \{ 0 \}$ has codimension bigger than one.
Since the domain of $\phi$ has dimension one, with the help of the transversality theorem [@trans Section 5] we can find an element $\phi_0$, arbitrarily $C^0$-close to $\phi$, such that its image $\phi_0([0,1])$ does not intersect $B^H_{r_{\gamma_i}} \times \{ 0 \}$ for any $i$ and is still contained in the right-hand side of the inclusion [\[inclusion\]](#inclusion){reference-type="eqref" reference="inclusion"}.\ Now define $$\label{deformazione} \phi_1 (t)= \left\{\begin{split} \mathrm{\Phi}_{\gamma_1}^{-1} \Big( \rho_{_{\gamma_1}}^{\varepsilon \chi_{_{\gamma_1}} (\phi_0(t))} (\mathrm{\Phi}_{\gamma_1}(\phi_0(t)))\Big) \ \ \mathrm{if} \ \phi_0(t)\in \mathcal{V}_{\gamma_1} \ , \\ \phi_{0}(t) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \mathrm{otherwise}\ . \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ \end{split}\right.$$ Observe that $\phi_1 \in \Gamma_0^k$ because it is obtained by continuously deforming the segment of $\phi_0$ contained in $\mathcal{V}_{\gamma_1}$ while keeping the endpoints fixed. Moreover, if $\phi_0(t) \in \{ \mathbf{s}(k) \leq S_k^{\sigma}< \mathbf{s}(k) + \varepsilon^2 \delta \} \cap \mathcal{W}_{\gamma_1}$, then the choice of $\varepsilon$ and [\[chart2\]](#chart2){reference-type="eqref" reference="chart2"} imply that $$\begin{aligned} S_k^{\sigma}( \phi_1(t))& \leq S_k^{\sigma}( \phi_0(t)) - D_{\gamma_1} \varepsilon^2 &&\\ & < \mathbf{s}(k) + \delta \varepsilon^2 - D_{\gamma_1}\varepsilon^2 &&\\ &\leq \mathbf{s}(k) .
\end{aligned}$$ Finally, because $\rho_{_{\gamma_1}}^{\varepsilon \chi_{_{\gamma_1}}}$ pushes points of $\mathrm{\Phi}_{\gamma_1}(\mathcal{W}_{\gamma_1})$ a distance of at most $\varepsilon$, if $\phi_0(t) \in \mathcal{W}_{\gamma_1} \cap \mathcal{W}_{\gamma_i}$ then $$\label{lipschitz} \Big| \mathrm{\Phi}_{\gamma_i} (\phi_1(t)) - \mathrm{\Phi}_{\gamma_i} (\phi_0(t)) \Big|_{0,i} \leq C_{1i} \Big| \mathrm{\Phi}_{\gamma_1} (\phi_1(t)) - \mathrm{\Phi}_{\gamma_1} (\phi_0(t)) \Big|_{0,1} \leq C\varepsilon ,$$ so that $\phi_1(t) \in \mathcal{W}_{\gamma_i,1}$.\ Summing up, the element $\phi_1$ satisfies $$\phi_1([0,1]) \subset \{ S_k^{\sigma}< \mathbf{s}(k) \} \cup \Big( \{ \mathbf{s}(k) \leq S_k^{\sigma}< \mathbf{s}(k)+\varepsilon^2 \delta \} \cap \bigcup_{i=2}^n \mathcal{W}_{\gamma_i,1} \Big) .$$ We repeat the construction as follows. Define $\phi_2 \in \Gamma_0^k$ as $$\phi_2 (t)= \left\{\begin{split} \mathrm{\Phi}_{\gamma_2}^{-1} \Big( \rho_{_{\gamma_2}}^{\varepsilon \chi_{_{\gamma_2}} (\phi_1(t))} (\mathrm{\Phi}_{\gamma_2}(\phi_1(t)))\Big) \ \ \mathrm{if} \ \phi_1(t)\in \mathcal{V}_{\gamma_2} \ , \\ \phi_{1}(t) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \mathrm{otherwise}\ .
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ \end{split}\right.$$ By using again the inequality [\[chart2\]](#chart2){reference-type="eqref" reference="chart2"}, if $\phi_1(t) \in \mathcal{W}_{\gamma_2,1}$ then $$\begin{aligned} S_k^{\sigma}( \phi_2(t))& \leq S_k^{\sigma}( \phi_1(t)) - D_{\gamma_2} \varepsilon^2 &&\\ & < \mathbf{s}(k) + \delta \varepsilon^2 - D_{\gamma_2}\varepsilon^2 &&\\ &\leq \mathbf{s}(k) . \end{aligned}$$ Moreover, if $\phi_1(t) \in \mathcal{W}_{\gamma_2,1} \cap \mathcal{W}_{\gamma_i,1}$ for some $i\in \{ 2,...,n \}$, then $$\begin{aligned} \Big| \mathrm{\Phi}_{\gamma_i} (\phi_2(t)) - \mathrm{\Phi}_{\gamma_i} (\phi_1(t)) \Big|_{0,i} & \leq C_{2i} \Big| \mathrm{\Phi}_{\gamma_2} (\phi_2(t)) - \mathrm{\Phi}_{\gamma_2}(\phi_1(t)) \Big|_{0,2} \leq C\varepsilon , \end{aligned}$$ which implies that $\phi_2(t) \in \mathcal{W}_{\gamma_i,2}$. Thus the element $\phi_2$ satisfies $$\phi_2([0,1]) \subset \{ S_k^{\sigma}< \mathbf{s}(k) \} \cup \Big( \{ \mathbf{s}(k) \leq S_k^{\sigma}< \mathbf{s}(k)+\varepsilon^2 \delta \} \cap \bigcup_{i=3}^n \mathcal{W}_{\gamma_i,2} \Big) .$$ Iterating the process, at step $m$ we find $\phi_m$ such that $$\phi_m([0,1]) \subset \{ S_k^{\sigma}< \mathbf{s}(k) \} \cup \Big( \{ \mathbf{s}(k) \leq S_k^{\sigma}<\mathbf{s}(k) + \varepsilon^2 \delta \} \cap \bigcup_{i=m+1}^n \mathcal{W}_{\gamma_i,m} \Big) ,$$ and at step $n$ we find an element $\phi_n$ such that $$\phi_n([0,1]) \subset \{ S_k^{\sigma}< \mathbf{s}(k) \} ,$$ in contradiction with the definition of $\mathbf{s}(k)$. ◻* ## Non-weakly exact case {#nonweaklyexact} ### Variation of $\eta_k$ along paths If $\sigma$ is not weakly exact, then $c(g,\sigma)= + \infty$ and $\eta_k$ does not admit a primitive globally defined on $\mathcal{M}_0$.
Nevertheless, if $u:[0,1] \to \mathcal{M}_0$ is a continuous path, the action variation $\Delta\eta_k(u):[0,1] \to \mathbb{R}$ of $\eta_k$ along $u$ is always well defined and is given by $$\Delta\eta_k(u)(t) = \int_0^t u^* \eta_k \ .$$ Moreover, as argued in [@AB Section 2.3], if $\mathcal{U}\subseteq \mathcal{M}_0$ is an open subset and $u([0,1]) \subset \mathcal{U}$, then for every local primitive $S_k^{\sigma}: \mathcal{U}\to \mathbb{R}$ we have $$\label{primug} \Delta\eta_k(u)(t)=S_k^{\sigma}(u(t)) - S_k^{\sigma}(u(0)) \ \ , \ \forall t \in [0,1] \ .$$ Let $u:[0,1]\times[0,R] \to \mathcal{M}_0$ be a homotopy with fixed starting point, and write $u_s=u(\cdot,s)$ and $u^t=u(t, \cdot)$. The fact that $\eta_k$ is closed implies the following formula $$\label{omotopia} \Delta\eta_k(u_s)(t)=\Delta\eta_k(u_0)(t) + \Delta\eta_k(u^t)(s) \ .$$ ### Minimax geometry Henceforth we consider $M^+$ and $\mathcal{V}_{\delta}$ as in Section [5.1.1](#minimaxdebole){reference-type="ref" reference="minimaxdebole"}. Observe that, if $\delta$ is small enough, then we can restrict definition [\[primcon\]](#primcon){reference-type="eqref" reference="primcon"} of $S_k^{\sigma}$ to the set of short loops $\mathcal{V}_{\delta}\subset \mathcal{M}_0$. By definition of $S_k^{\sigma}$, we deduce that $$\inf_{\mathcal{V}_{\delta}} S_k^{\sigma}= 0 \ .$$ By identifying the north and south poles of the unit sphere $S^2$ with the endpoints of the interval $[0,1]$, we obtain a one-to-one correspondence $$\Big\{ f:S^2 \to M \Big\} \xlongrightarrow {\text{F}} \Big\{ \phi: \Big( [0,1],\{ 0, 1 \} \Big) \to \Big(\mathcal{M}_0, M^+ \Big) \Big\} \ .$$ It is a well-known fact that $F$ descends to the homotopy quotient (see for instance [@Kli1]).
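For completeness, we sketch how the homotopy formula [\[omotopia\]](#omotopia){reference-type="eqref" reference="omotopia"} follows from Stokes' theorem: since $\eta_k$ is closed, for every $t\in[0,1]$ and every time $s$ in the homotopy parameter interval we have $$0 = \int_{[0,t]\times[0,s]} u^{*}\mathrm{d}\eta_k = \int_{\partial([0,t]\times[0,s])} u^{*}\eta_k = \Delta\eta_k(u_0)(t) + \Delta\eta_k(u^{t})(s) - \Delta\eta_k(u_s)(t) - \Delta\eta_k(u^{0})(s) \ ,$$ and the last term vanishes because the starting point of the homotopy is fixed, so that $u^{0}$ is a constant path.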
Because $\sigma$ is not weakly exact, $\pi_2(M) \neq \{ 0 \}$, so that, given a nontrivial element $\mathfrak{u} \in \pi_2(M)$, one can consider the following set $$\Gamma_{\mathfrak{u}} = \Big\{ \phi: \Big( [0,1],\{ 0, 1 \} \Big) \to \Big(\mathcal{M}_0, M^+ \Big), \ F^{-1}(\phi) \in \mathfrak{u} \Big\} \ .$$ For $\phi \in \Gamma_{\mathfrak{u}}$, we can define a primitive $S_k^{\sigma}(\phi) :[0,1] \to \mathbb{R}$ of $\eta_k$ along $\phi$ by $$\label{primele} S_k^{\sigma}(\phi)(t) = \Delta\eta_k(\phi)(t) + T_{\phi(0)}k \ .$$ In this setting, the minimax value function is given by $$\mathbf{s}^{\mathfrak{u}}: (0,+\infty)\to (0,+\infty)\ \ , \ \ \mathbf{s}^{\mathfrak{u}}(k) = \inf_{\phi \in \Gamma_{\mathfrak{u}}} \max_{t\in[0,1]} S_k^{\sigma}(\phi)(t) \ .$$ In analogy with the weakly exact case, the following lemma holds. **Lemma 20**. *Let $I$ be an open interval with compact closure fully contained in $(0,+\infty)$. There exists a positive $\varepsilon$ such that $$\mathbf{s}^{\mathfrak{u}}(k) \geq \varepsilon \ , \ \forall k \in I \ .$$ Moreover $\mathbf{s}^{\mathfrak{u}}$ is monotone increasing on $I$.* **Proof.* The proof is contained in [@AB Section 4]. ◻* Let $N_k = \{ (x,T) \in \mathcal{V}_{\delta} \ | \ S_k^{\sigma}(x,T) < \frac{\varepsilon}{4} \}$ and let $h$ be a cut-off function such that $h^{-1}(1)=[\frac{\varepsilon}{2}, +\infty )$ and $h^{-1}(0)=(-\infty , \frac{\varepsilon}{4}]$. Define $\tilde{h}: \mathcal{M}_0 \to \mathbb{R}$ as follows $$\tilde{h} (x,T)= \left\{\begin{split} \ \ 1 \ \ \mathrm{if} \ (x,T) \in \mathcal{M}_0\setminus N_k \\ h \circ S_k^{\sigma}\ \ \mathrm{if} \ (x,T) \in N_k \ \ \end{split}\right. \ .$$ Let $\mathrm{X}_k$ be the vector field on $\mathcal{M}_0$ given by $$\mathrm{X}_k = -\tilde{h} \cdot \frac{ \eta^{\#}_k}{\sqrt{1+ | \eta^{\#}_k|^2_{\mathcal{M}}}} \ ,$$ where $\eta^{\#}_k$ is the vector field dual to the 1-form $\eta_k$ with respect to the Riemannian metric $g_{\mathcal{M}}$.
In accordance with the weakly exact case, the pseudo-gradient $\mathrm{X}_k$ enjoys the following properties. **Lemma 21**. *The flow $\Psi_{k}$ of $\mathrm{X}_k$ is positively complete and $\Gamma_{\mathfrak{u}}$ is a $\Psi_{k}$-invariant set. Moreover, if $u=(x,T):(0,+\infty)\to \mathcal{M}_0$ is a flow line of $\Psi_{k}$, then for every $s \in (0,+\infty)$,* - *$\Delta\eta_k(u) (s) \leq 0$,* - *$|T(s)-T(0)|^2 \leq -s \Delta\eta_k(u)(s)$.* **Proof.* The details of the proof are in [@AB Proposition 2.8 and Lemma 2.9]. ◻* ### Existence almost everywhere and index estimates As pointed out in Lemma [Lemma 20](#monotone){reference-type="ref" reference="monotone"}, $\mathbf{s}^{\mathfrak{u}}$ is monotone increasing. Therefore there exists $J\subset I$ of full Lebesgue measure such that $\mathbf{s}^{\mathfrak{u}}$ is differentiable on $J$. In what follows we adapt Lemma [Lemma 17](#periodolimitato){reference-type="ref" reference="periodolimitato"}, Lemma [Lemma 18](#struwe1){reference-type="ref" reference="struwe1"} and Lemma [Lemma 19](#index1){reference-type="ref" reference="index1"} to the minimax geometry of $\eta_k$ on $\Gamma_{\mathfrak{u}}$. **Lemma 22**. *Let $k \in J$. Then there exists a constant $A>0$ such that, for every small $\varepsilon >0$, writing $k_{\varepsilon}= k + \varepsilon$, if $\phi \in \Gamma_{\mathfrak{u}}$ is such that $S^{\sigma}_{k_{\varepsilon}}(\phi)(t) \leq \mathbf{s}^{\mathfrak{u}}(k_{\varepsilon}) + \varepsilon$ for all $t \in [0,1]$, then whenever $\phi(t)$ satisfies also $S_k^{\sigma}(\phi)(t) > \mathbf{s}^{\mathfrak{u}}(k) - \varepsilon$, the inequality $T_{\phi(t)}\leq A+2$ holds.* **Proof.* The fact that $k$ is a point of differentiability for $\mathbf{s}^{\mathfrak{u}}$ implies the existence of a positive constant $A=A(k)$ such that $$|\mathbf{s}^{\mathfrak{u}}( k_{\varepsilon}) - \mathbf{s}^{\mathfrak{u}}(k) | \leq A \cdot |k_{\varepsilon}-k| \ , \ \forall k_{\varepsilon} \in I.$$ Fix $\varepsilon>0$ and $\phi \in \Gamma_{\mathfrak{u}}$.
If $\phi(t)$ is as in the statement, then $$T_{\phi(t)} = \frac{S^{\sigma}_{k_{\varepsilon}}(\phi)(t) - S_k^{\sigma}(\phi)(t)}{k_{\varepsilon} - k} \leq \frac{\mathbf{s}^{\mathfrak{u}}(k_{\varepsilon})+ \varepsilon - \mathbf{s}^{\mathfrak{u}}(k) + \varepsilon}{\varepsilon}\leq A+2 \ .$$ ◻* Let $T^* = \sqrt{2\mathbf{s}^{\mathfrak{u}}(k)}+3(A+1)$ and define the set $$\mathcal{B}_k =( \mathcal{M}_0 \setminus N_k) \cap \{ T\leq T^* \} \ .$$ Since every vanishing sequence of $\eta_k$ contained in $\mathcal{B}_k$ admits a converging subsequence, the set $\mathcal{C}_k = \mathcal{B}_k \cap \mathcal{Z}(\eta_k)$ is compact. **Lemma 23**. *Let $k \in J$. Then for every open neighborhood $V$ of $\mathcal{C}_k$ and for every small $\varepsilon>0$ there exists an element $\varphi_{\varepsilon} \in \Gamma_{\mathfrak{u}}$ such that, for every $t \in [0,1]$, either $S_k^{\sigma}(\varphi_{\varepsilon})(t) < \mathbf{s}^{\mathfrak{u}}(k)$, or $S_k^{\sigma}(\varphi_{\varepsilon})(t) \in [\mathbf{s}^{\mathfrak{u}}(k) , \mathbf{s}^{\mathfrak{u}}(k)+\varepsilon)$ and $\varphi_{\varepsilon}(t) \in V$. In particular, $\mathcal{C}_k$ is non-empty.* **Proof.* Because $\mathcal{C}_k$ consists of zeros of $\eta_k$ and vanishing sequences in $\mathcal{B}_k$ converge, there exist $V^{'} \subset V$ and a positive $\delta=\delta(V^{'})$ such that $V^{'}$ still contains $\mathcal{C}_k$, $\Psi_{k}([0,1] \times V^{'}) \subset V \cap \mathcal{B}_k$ and $$\label{d1} | \eta^{\#}_k |_{\mathcal{M}} \geq \delta \ , \ \ \forall (x,T) \in \mathcal{B}_k \setminus V^{'} \ .$$ Fix $\varepsilon>0$ such that $\varepsilon< \delta^2$.
By definition of $\mathbf{s}^{\mathfrak{u}}$, for every $\tilde{\varepsilon} \in (0, \varepsilon)$ there exists $\phi_{\tilde{\varepsilon}} \in \Gamma_{\mathfrak{u}}$ such that $$\max_{t \in [0,1]} S^{\sigma}_{k_{\tilde{\varepsilon}}}(\phi_{\tilde{\varepsilon}})(t) < \mathbf{s}^{\mathfrak{u}}(k_{\tilde{\varepsilon}}) + \tilde{\varepsilon} \ .$$ For $s\in[0,1]$ define $$\label{d2} \varphi^s_{\varepsilon}(t)= \Psi_{k}(s, \phi_{\tilde{\varepsilon}}(t)) \ ,$$ and observe that, by [\[omotopia\]](#omotopia){reference-type="eqref" reference="omotopia"}, $$\label{d3} S_k^{\sigma}(\varphi^s_{\varepsilon})(t)= S_k^{\sigma}(\phi_{\tilde{\varepsilon}})(t) + \Delta\eta_k\Big(\Psi_{k}( \cdot, \phi_{\tilde{\varepsilon}}(t))\Big)(s) \ .$$ As in the weakly exact case, we claim that the element $\varphi_{\varepsilon}=\varphi^1_{\varepsilon} \in \Gamma_{\mathfrak{u}}$ is the desired one. Indeed, if $t \in [0,1]$ is such that $S_k^{\sigma}(\phi_{\tilde{\varepsilon}})(t) < \mathbf{s}^{\mathfrak{u}}(k)$, because $\Delta\eta_k$ is non-positive along flow lines of $\Psi_{k}$ (point (i) of Lemma [Lemma 21](#flussograd){reference-type="ref" reference="flussograd"}), by [\[d3\]](#d3){reference-type="eqref" reference="d3"} we deduce that $S_k^{\sigma}( \varphi_{\varepsilon})(t)< \mathbf{s}^{\mathfrak{u}}(k)$. On the other hand, if $S_k^{\sigma}(\phi_{\tilde{\varepsilon}})(t) \in [\mathbf{s}^{\mathfrak{u}}(k), \mathbf{s}^{\mathfrak{u}}(k) + \varepsilon)$, by the choice of $T^*$, by Lemma [Lemma 22](#periodo){reference-type="ref" reference="periodo"} and Lemma [Lemma 21](#flussograd){reference-type="ref" reference="flussograd"}, it follows that either $S_k^{\sigma}( \varphi_{\varepsilon})(t)< \mathbf{s}^{\mathfrak{u}}(k)$, or $S_k^{\sigma}( \varphi^s_{\varepsilon})(t)\in[\mathbf{s}^{\mathfrak{u}}(k), \mathbf{s}^{\mathfrak{u}}(k) + \varepsilon)$ and $\varphi^{s}_{\varepsilon}(t) \in \mathcal{B}_k$ for every $s \in [0,1]$. Let us focus on the second case.
By the assumptions on $V^{'}$, if for some time $s_0 \in [0,1]$ we have that $\varphi^{s_0}_{\varepsilon}(t)\in V^{'}$, then $\varphi_{\varepsilon}(t)$ belongs to $V$. Suppose by contradiction that, for every $t$ falling in the second case, $\varphi^s_{\varepsilon}(t)$ does not enter $V^{'}$ for any $s\in [0,1]$. Write $\varphi_{\varepsilon}(s, t)= \varphi^s_{\varepsilon}(t)$ and observe that, by [\[d1\]](#d1){reference-type="eqref" reference="d1"}, $$\label{d4} \Delta\eta_k(\varphi_{\varepsilon}(\cdot, t))(1) = \int_0^1 \Psi_{k}( \cdot, \phi_{\tilde{\varepsilon}}(t))^* \eta_k \leq - \int_0^1 |\eta^{\#}_k|_{\mathcal{M}}^2 \, \mathrm{d}s \leq -\delta^2 \ .$$ From [\[d3\]](#d3){reference-type="eqref" reference="d3"} and [\[d4\]](#d4){reference-type="eqref" reference="d4"}, we conclude that $$S_k^{\sigma}(\varphi_{\varepsilon})(t) \leq S_k^{\sigma}(\phi_{\tilde{\varepsilon}})(t) - \delta^2 \leq \mathbf{s}^{\mathfrak{u}}(k) + \varepsilon - \delta^2 < \mathbf{s}^{\mathfrak{u}}(k) \ .$$ In this way the element $\varphi_{\varepsilon} \in \Gamma_{\mathfrak{u}}$ satisfies $S_k^{\sigma}(\varphi_{\varepsilon})(t) < \mathbf{s}^{\mathfrak{u}}(k)$ for every $t \in [0,1]$, in contradiction with the definition of $\mathbf{s}^{\mathfrak{u}}(k)$. Therefore, the element $\varphi_{\varepsilon}$ is such that either $S_k^{\sigma}(\varphi_{\varepsilon})(t) < \mathbf{s}^{\mathfrak{u}}(k)$, or $S_k^{\sigma}(\varphi_{\varepsilon})(t) \in [\mathbf{s}^{\mathfrak{u}}(k) , \mathbf{s}^{\mathfrak{u}}(k)+\varepsilon)$ and $\varphi_{\varepsilon}(t) \in V$. The claim holds. ◻* **Lemma 24**. *Let $k\in J$. There exists $\gamma \in \mathcal{Z}(\eta_k)$ such that $\mathrm{index}(\gamma)\leq 1$.* **Proof.* We proceed by contradiction, following the same scheme and adopting the same setting and notation as in the proof of Lemma [Lemma 19](#index1){reference-type="ref" reference="index1"}.
By Lemma [Lemma 23](#struwe2){reference-type="ref" reference="struwe2"} and by the transversality theorem [@trans Section 5], we can find an element $\phi_0 \in \Gamma_{\mathfrak{u}}$ such that its image does not intersect $B^H_{r_{\gamma_i}}\times \{ 0\}$ for every $i$ and, for every $t \in [0,1]$, either $S_k^{\sigma}(\phi_0)(t)< \mathbf{s}^{\mathfrak{u}}(k)$, or $S_k^{\sigma}(\phi_0)(t) \in [\mathbf{s}^{\mathfrak{u}}(k), \mathbf{s}^{\mathfrak{u}}(k) + \varepsilon^2 \delta)$ and $\phi_0(t) \in \bigcup_{i=1}^{n} \mathcal{W}_{\gamma_i}$. Define $\phi_1$ by [\[deformazione\]](#deformazione){reference-type="eqref" reference="deformazione"} and consider $t$ such that $\phi_0(t)\in \mathcal{W}_{\gamma_1}$. Let $S_k^{\sigma}$ be a primitive of $\eta_k$ defined on $\mathcal{V}_{\gamma_1}$ and let $\alpha(\phi_0(t)):[0,1] \to \mathcal{V}_{\gamma_1}$ be given by $$\alpha(\phi_0(t))(s) = \mathrm{\Phi}_{\gamma_1}^{-1} \Big( \rho_{_{\gamma_1}}^{s\varepsilon \chi_{_{\gamma_1}} (\phi_0(t))} (\mathrm{\Phi}_{\gamma_1}(\phi_0(t)))\Big) \ .$$ By [\[primug\]](#primug){reference-type="eqref" reference="primug"} and by point $(ii)$ of Lemma [Lemma 3](#chart){reference-type="ref" reference="chart"}, $$\begin{aligned} \Delta\eta_k(\alpha(\phi_0(t)))(1) &= S_k^{\sigma}(\alpha(\phi_0(t))(1)) - S_k^{\sigma}(\alpha(\phi_0(t))(0)) \\ & = \Big( S_k^{\sigma}\circ \mathrm{\Phi}_{\gamma_1}^{-1} \Big) \Big( \rho_{_{\gamma_1}}^{\varepsilon \chi_{_{\gamma_1}} (\phi_0(t))} (\mathrm{\Phi}_{\gamma_1}(\phi_0(t))) \Big) - \Big( S_k^{\sigma}\circ \mathrm{\Phi}_{\gamma_1}^{-1} \Big) \Big(\mathrm{\Phi}_{\gamma_1}( \phi_0(t)) \Big) \\ & \leq -\varepsilon^2D_{\gamma_1} \ ,\end{aligned}$$ so that $$\begin{aligned} S_k^{\sigma}(\phi_1)(t) &= S_k^{\sigma}(\phi_0)(t) + \Delta\eta_k(\alpha(\phi_0(t)))(1) \\ & < \mathbf{s}^{\mathfrak{u}}(k) + \varepsilon^2 \delta - \varepsilon^2 D_{\gamma_1} \\ &\leq \mathbf{s}^{\mathfrak{u}}(k) \ . \end{aligned}$$ Moreover, if $\phi_0(t) \in \mathcal{W}_{\gamma_1} \cap \mathcal{W}_{\gamma_i}$ for some $i\in
\{2,...,n \}$, due to [\[lipschitz\]](#lipschitz){reference-type="eqref" reference="lipschitz"}, we can deduce that $\phi_1(t) \in \mathcal{W}_{\gamma_i , 1 }$. Thus the element $\phi_1$ has the property that, for every $t \in [0,1]$, either $S_k^{\sigma}(\phi_1)(t) < \mathbf{s}^{\mathfrak{u}}(k)$, or $S_k^{\sigma}(\phi_1)(t) \in [\mathbf{s}^{\mathfrak{u}}(k), \mathbf{s}^{\mathfrak{u}}(k) + \varepsilon^2 \delta)$ and $\phi_1(t) \in \bigcup_{i=2}^{n} \mathcal{W}_{\gamma_i , 1}$. Iterating the procedure, at step $n$ we find an element $\phi_n \in \Gamma_{\mathfrak{u}}$ such that, for every $t \in [0,1]$, $S_k^{\sigma}(\phi_n)(t) < \mathbf{s}^{\mathfrak{u}}(k)$, which again contradicts the definition of $\mathbf{s}^{\mathfrak{u}}$. ◻* ### Proof of Theorem A (and Theorem A1) Let $k<c$ be such that $\mathrm{Ric}^{\Omega}_k>0$. By continuity of $\mathrm{Ric}^{\Omega}_k$, there exists an open interval $I_k\subset (0,c)$ centered at $k$ and a positive constant $r$ such that $\mathrm{Ric}^{\Omega}_s \geq \frac{1}{r^2}$ for all $s \in I_k$. With the help of Lemma [Lemma 19](#index1){reference-type="ref" reference="index1"} or Lemma [Lemma 24](#index2){reference-type="ref" reference="index2"}, let $\{ \gamma_n = (x_n , T_n) \}$ be a sequence such that $\gamma_n$ is a contractible zero of $\eta_{k_n}$ with index at most $1$ and $k_n \to k$. By Lemma [Lemma 13](#Bonnet-Myers){reference-type="ref" reference="Bonnet-Myers"} it follows that $T_n \leq 2\pi r$. Moreover, since $\eta_{k_n}(\gamma_n)=0$, $$|\eta_k (\gamma_n)|_{\mathcal{M}} = |(\eta_k - \eta_{k_n})(\gamma_n)|_{\mathcal{M}} + |\eta_{k_n}(\gamma_n)|_{\mathcal{M}} =|k_n-k| T_n \leq |k_n - k | 2\pi r \ .$$ Thus $\{ \gamma_n \}$ is a vanishing sequence for $\eta_k$ and $T_n$ is bounded from above. Since the magnetic flow at energy $k$ does not have rest points, by [@HZ Proposition 1, Section 4.1], up to a subsequence, $\gamma_n$ converges in $\mathcal{M}$ to a point $\gamma$ which is a contractible zero of $\eta_k$.
Finally, if $\sigma$ is nowhere vanishing, by point $(ii)$ of Lemma [Lemma 6](#cor1){reference-type="ref" reference="cor1"}, the magnetic Ricci curvature is positive for energy values close to zero, and Theorem A1 follows.
--- abstract: | Let $\mathcal S$ be a finite set of integer points in $\mathbb{R}^d$, which we assume has many symmetries, and let $P\in\mathbb{R}^d$ be a fixed point. We calculate the distances from $P$ to the points in $\mathcal S$ and compare the results. In some of the most common cases, we find that they lead to unexpected conclusions if the dimension is sufficiently large. For example, if $\mathcal S$ is the set of vertices of a hypercube in $\mathbb{R}^d$ and $P$ is any point inside, then almost all triangles $PAB$ with $A,B\in\mathcal S$ are almost equilateral. Or, if $P$ is close to the center of the cube, then almost all triangles $PAB$ with $A\in \mathcal S$ and $B$ anywhere in the hypercube are almost right triangles. address: - " JA: Department of Mathematics, University of Illinois at Urbana-Champaign, Altgeld Hall, 1409 W. Green Street, Urbana, IL, 61801, USA " - " CC: Simion Stoilow Institute of Mathematics of the Romanian Academy, P. O. Box 1-764, RO-014700 Bucharest, Romania" - " AZ: Department of Mathematics, University of Illinois at Urbana-Champaign, Altgeld Hall, 1409 W. Green Street, Urbana, IL, 61801, USA and Simion Stoilow Institute of Mathematics of the Romanian Academy, P. O. Box 1-764, RO-014700 Bucharest, Romania" author: - Jack Anderson, Cristian Cobeli, Alexandru Zaharescu title: Counterintuitive patterns on angles and distances between lattice points in high dimensional hypercubes --- # Introduction Recent developments in network communications [@LQ2016; @SH2010] or artificial intelligence [@BS2021] have shed new light on studies of graphs and special models based on sets explored in combinatorial geometry or related to lattice points in multidimensional spaces [@AHK2001; @ACZ2023; @LM2023; @Buc1986; @ES1996; @Hal1982; @OO2015]. 
Our object in this article is to present a few results related to the following fact: in high dimensional hypercubes, a random pick of lattice points, made in the hope of finding some that are at an 'exceptional distance' apart from each other, has a chance of success that tends to zero as the dimension goes to infinity. (Here, an *exceptional distance* is any one that is different from the average.) Let $\mathcal S\subset\mathbb{R}^d$, $d\ge 1$, be a finite set and let $\ifmmode\bm{a}\else\textbf{a}\fi=(a_1,\dots,a_d)\in\mathbb{R}^d$. If we look from a distant point $\ifmmode\bm{a}\else\textbf{a}\fi$ at the points in $\mathcal S$, we find that they are all at about the same distance, and this common distance is the closer to a certain value the farther away from $\mathcal S$ our point of view is. On the contrary, if our point of view is close to $\mathcal S$, or even in $\mathcal S$ or in its convex envelope, we see a variety of distances ranging from zero to the diameter of $\mathcal S$. But what we observe is largely influenced by the size of the space and its dimension. Our goal here is to highlight some counterintuitive phenomena, some of them related to those that have been studied for the set of lattice points visible from each other in a hypercube [@ACZ2023]. In order to illustrate two of these phenomena, let us note that if we randomly pick triangles with vertices at the lattice points of a cube that is large enough, the likelihood of encountering a significant number of some special triangles is low. ![Random triangles, in 2D and 3D, with vertices of integer coordinates in $[0,100]$. In each image, the triangles are chosen in such a way that they meet one of the following conditions: A. All triangles have a common vertex, initially chosen randomly but later fixed. B. All triangles have two vertices randomly chosen from the vertices of the cube, while the third vertex is free.
](4Triangles2d3d.pdf){#Fig2d3d width="\\textwidth"} For instance, in Figure [1](#Fig2d3d){reference-type="ref" reference="Fig2d3d"}, we can see two types of selections, each with a distinct feature in dimensions $2$ and $3$. The first type of choice requires the triangles to have a common vertex, while the second one requires that two of the triangle's vertices be chosen randomly from the cube's vertices, while the third one remains unrestricted. Then we may wonder: what are the odds of getting similar triangles in the first case, or non-degenerate isosceles triangles in the second case? Certainly, the questions may appear uninteresting, as the odds are so small in both situations. Furthermore, as the size of the cube and the dimension increase, the variety of these triangles increases immensely, and the attempt to randomly find the special ones seems completely in vain. Despite this, the situation is not like that at all, but rather the complete opposite. Thus, Theorem [Theorem 1](#ThAVVSimple){reference-type="ref" reference="ThAVVSimple"} shows that, if the dimension of the hypercube becomes large enough, then almost all triangles that have two vertices at the corners of the hypercube and the third at a lattice point inside are almost isosceles. And on the same note, if both the size of the hypercube and the dimension become sufficiently large, then Theorem [Theorem 3](#Theorem3){reference-type="ref" reference="Theorem3"} shows that almost all triangles with vertices anywhere on the lattice points of the hypercube which share a certain common vertex not only are nearly isosceles but also have a particular shape, almost all of them being almost similar to one another. To make things precise, let $N\ge 1$ be an integer and let $\mathcal W=\mathcal W(d,N)$ be the maximal hypercube of lattice points from $[0,N]^d$. Since we are interested both in the discrete case and in the limit process, a good coverage of the phenomenon is obtained if we choose $\mathcal S\subseteq \mathcal W$.
We measure the distance between points $\ifmmode\bm{v}\else\textbf{v}\fi',\ifmmode\bm{v}\else\textbf{v}\fi''\in\mathbb{R}^d$ with the Euclidean distance $$\mathop{\mathrm{\mathfrak{d}}}(\ifmmode\bm{v}\else\textbf{v}\fi',\ifmmode\bm{v}\else\textbf{v}\fi'')=\big((v_1''-v_1')^2+\cdots+(v_d''-v_d')^2\big)^{1/2}$$ and, to compare sizes across different dimensions, we use the *normalized distance*: $$\mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{v}\else\textbf{v}\fi',\ifmmode\bm{v}\else\textbf{v}\fi'')=\frac{1}{\sqrt{d}N}\big((v_1''-v_1')^2+\cdots+(v_d''-v_d')^2\big)^{1/2}.$$ Then the normalized distance between two opposite vertices, the farthest away points in $\mathcal W$, is $\mathop{\mathrm{\mathfrak{d}}}_d\big((0,\dots,0),(N,\dots,N)\big)=1$. In direct contrast, besides '*the thickest*' hyperplane $\mathcal W$, we also consider '*the thinnest*' one, that of dimension zero, the set of vertices of $[0,N]^d$, which we denote by $\mathcal V=\mathcal V(d,N)$. For orientation in or around $\mathcal W$, a useful reference for some point $\ifmmode\bm{a}\else\textbf{a}\fi$ turns out to be the distance from $\ifmmode\bm{a}\else\textbf{a}\fi$ to the center of the cube, $\ifmmode\bm{c}\else\textbf{c}\fi$. That is why we denote $r_{\ifmmode\bm{a}\else\textbf{a}\fi}:=\mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{c}\else\textbf{c}\fi)$. From an arbitrary point $\ifmmode\bm{a}\else\textbf{a}\fi$, in Sections [2](#SectionAMV){reference-type="ref" reference="SectionAMV"} and [5](#SectionAMW){reference-type="ref" reference="SectionAMW"}, we find exact formulas for the average distances to points in $\mathcal V$ or in $\mathcal W$, respectively. Also, we calculate the second moments about these averages in both cases. These are the main tools that allow us to derive striking properties that most pairs or triples of points in the hypercube share in high dimensions. We mention that similar procedures were used recently in other settings.
For example, in a continuum case, in order to provide a framework for studying multifractal geometry, the authors of [@AEHO2017] and [@OR2019] study the average distance and the asymptotic behavior of higher moments of self-similar measures on self-similar subsets of $\mathbb{R}$, and on graph-directed self-similar subsets of $\mathbb{R}$. Corresponding characteristic properties of lattice points that are visible from each other were observed in [@ACZ2023]. Averages of relative distances from points in geometric figures were also the object of study in the articles [@MMP1999; @LQ2016; @Dun1997; @BP2009; @Bas2021]. To exemplify our results regarding, for example, the vertices of the hypercube, one may ask what is the expected distance from them to a fixed arbitrary point $\ifmmode\bm{a}\else\textbf{a}\fi$ and what is the probability that such a distance is close to the average. In Section [3](#SectionAVAV){reference-type="ref" reference="SectionAVAV"}, we show that, for any fixed point $\ifmmode\bm{a}\else\textbf{a}\fi\in\mathcal W$, almost all vertices are at a normalized distance from $\ifmmode\bm{a}\else\textbf{a}\fi$ that is close to $\sqrt{1/4+r_{\ifmmode\bm{a}\else\textbf{a}\fi}^2}$, so long as the dimension $d$ is sufficiently large. As a consequence, it follows that almost all triangles formed from $\ifmmode\bm{a}\else\textbf{a}\fi$ and two vertices of the hypercube will be nearly isosceles, since the distances from $\ifmmode\bm{a}\else\textbf{a}\fi$ to each of the two vertices will both be close to the same value. **Theorem 1**.
*For all $\varepsilon>0$, there exists an integer $d_\varepsilon$ such that, for all integers $d\geq d_\varepsilon$, $N\geq 1$, and any point $\ifmmode\bm{a}\else\textbf{a}\fi\in\mathcal W$, the proportion of triangles $(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v_1}\else\textbf{v_1}\fi,\ifmmode\bm{v_2}\else\textbf{v_2}\fi)$ such that $$\left\vert \mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v_1}\else\textbf{v_1}\fi)-\mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v_2}\else\textbf{v_2}\fi) \right\vert\leq\varepsilon,$$ where $\ifmmode\bm{v_1}\else\textbf{v_1}\fi,\ifmmode\bm{v_2}\else\textbf{v_2}\fi\in\mathcal V$, is greater than or equal to $1-\varepsilon$.* Another consequence arises from noticing that, for any vertex $\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V$, the square of the normalized distance from the center of the cube to $\ifmmode\bm{v}\else\textbf{v}\fi$ is $1/4$. As a result, for almost all vertices $\ifmmode\bm{v}\else\textbf{v}\fi$, the square of the distance from $\ifmmode\bm{a}\else\textbf{a}\fi$ to $\ifmmode\bm{v}\else\textbf{v}\fi$ is almost the sum of the squares of the distances from $\ifmmode\bm{c}\else\textbf{c}\fi$ to $\ifmmode\bm{a}\else\textbf{a}\fi$ and from $\ifmmode\bm{c}\else\textbf{c}\fi$ to $\ifmmode\bm{v}\else\textbf{v}\fi$. Therefore, it is natural to ponder if $(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{c}\else\textbf{c}\fi,\ifmmode\bm{v}\else\textbf{v}\fi)$ may be close to a right triangle, and in fact this is the case so long as $\ifmmode\bm{a}\else\textbf{a}\fi$ is not too near to $\ifmmode\bm{c}\else\textbf{c}\fi$. **Theorem 2**. 
*For all $\varepsilon>0$, there exist an integer $d_\varepsilon$ and a function $f(d)\leq1/2$ such that for all integers $d\geq d_\varepsilon$, $N\geq1$, and any point $\ifmmode\bm{a}\else\textbf{a}\fi\in\mathcal W$ with $\mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{c}\else\textbf{c}\fi)\geq f(d)$ (where $\ifmmode\bm{c}\else\textbf{c}\fi$ is the center of the hypercube), the proportion of triangles $(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{c}\else\textbf{c}\fi,\ifmmode\bm{v}\else\textbf{v}\fi)$ with $\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V$ and whose angle $\theta_{\ifmmode\bm{c}\else\textbf{c}\fi}(\ifmmode\bm{v}\else\textbf{v}\fi)$ at $\ifmmode\bm{c}\else\textbf{c}\fi$ satisfies $$\left\vert \cos\theta_{\ifmmode\bm{c}\else\textbf{c}\fi}(\ifmmode\bm{v}\else\textbf{v}\fi) \right\vert \leq \varepsilon,$$ is greater than or equal to $1-\varepsilon$.* Precise estimates and the effective-bounds versions of Theorems [Theorem 1](#ThAVVSimple){reference-type="ref" reference="ThAVVSimple"} and [Theorem 2](#ThACVSimple){reference-type="ref" reference="ThACVSimple"} are proved in Section [4](#SectionVTriangles){reference-type="ref" reference="SectionVTriangles"}. In the second part of our manuscript, starting with Section [5](#SectionAMW){reference-type="ref" reference="SectionAMW"}, we turn our focus to the distances from a fixed point $\ifmmode\bm{a}\else\textbf{a}\fi$ to any integer point $\ifmmode\bm{w}\else\textbf{w}\fi$ in the cube. We similarly find that almost all points $\ifmmode\bm{w}\else\textbf{w}\fi\in\mathcal W$ are at a normalized distance from $\ifmmode\bm{a}\else\textbf{a}\fi$ which is close to $\sqrt{1/12+1/(6N)+\mathop{\mathrm{\mathfrak{d}}}_d^2(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{c}\else\textbf{c}\fi)}$, provided that the dimension $d$ is sufficiently large. Furthermore, we also show that almost all pairs of points in the cube are at a relative distance close to $\sqrt{1/6+1/(3N)}$.
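To make these concentration values concrete, here is a quick Monte Carlo sketch (ours, not part of the paper; the dimension, side length, sample size, and tolerance are arbitrary illustrative choices) that samples lattice points of $\mathcal W$ and compares the observed normalized distances with the two predicted values above.

```python
import math
import random

random.seed(0)
d, N = 3000, 10  # arbitrary illustrative parameters

def nd(v, w):
    """Normalized Euclidean distance between two points of [0, N]^d."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(v, w))) / (math.sqrt(d) * N)

a = [random.randint(0, N) for _ in range(d)]   # a fixed lattice point of W
c = [N / 2] * d                                # center of the hypercube
r_a = nd(a, c)

pred_point = math.sqrt(1 / 12 + 1 / (6 * N) + r_a ** 2)  # point-to-lattice-point
pred_pair = math.sqrt(1 / 6 + 1 / (3 * N))               # pair of lattice points

trials = 200
hits_point = hits_pair = 0
for _ in range(trials):
    w1 = [random.randint(0, N) for _ in range(d)]
    w2 = [random.randint(0, N) for _ in range(d)]
    hits_point += abs(nd(a, w1) - pred_point) < 0.02
    hits_pair += abs(nd(w1, w2) - pred_pair) < 0.02

prop_point = hits_point / trials
prop_pair = hits_pair / trials
```

With parameters of this size, both proportions should come out close to $1$, in line with the statements above.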
As a consequence, we find that almost all triangles with one vertex at $\ifmmode\bm{a}\else\textbf{a}\fi$ and the other two anywhere in $\mathcal W$ are nearly identical. We summarise this fact in the following theorem, which, in explicit and effective form, we prove in Section [6](#SectionSpacingsW){reference-type="ref" reference="SectionSpacingsW"}. **Theorem 3**. *For any $\varepsilon>0$, there exist positive integers $d_\varepsilon$, $N_\varepsilon$, such that, for all integers $d\geq d_\varepsilon$, $N\geq N_\varepsilon$, and any point $\ifmmode\bm{a}\else\textbf{a}\fi\in\mathcal W$, the proportion of triangles $(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{w}\else\textbf{w}\fi_1,\ifmmode\bm{w}\else\textbf{w}\fi_2)$, with $\ifmmode\bm{w}\else\textbf{w}\fi_1,\ifmmode\bm{w}\else\textbf{w}\fi_2\in \mathcal W$, in which $$%\label{eqTrangleaw1w2} \left\vert \mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{w}\else\textbf{w}\fi_1,\ifmmode\bm{w}\else\textbf{w}\fi_2)-\mbox{\small$\displaystyle\frac{1}{\sqrt{6}}$} \right\vert \le \varepsilon, \text{ and } \left\vert \mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{w}\else\textbf{w}\fi_j)-\sqrt{\mbox{\small$\displaystyle\frac{1}{12}$}+r_{\ifmmode\bm{a}\else\textbf{a}\fi}^2} \right\vert \le \varepsilon, \text{ for $j=1,2$ }$$ is greater than or equal to $1-\varepsilon$. (Here, $r_{\ifmmode\bm{a}\else\textbf{a}\fi}=\mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{c}\else\textbf{c}\fi)$ denotes the normalized distance from $\ifmmode\bm{a}\else\textbf{a}\fi$ to the center of $\mathcal W$.)* For a probabilistic description of some natural expectations in high dimensional hypercubes we refer the reader to [@ACZ2023 Section 8]. It offers a very quick approach to the subject, although, there, the discussion is done in a continuum and the positions of both the observer and the viewed point are variable, while in this paper the observer has, most of the time, a fixed position.
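The vertex statements, such as Theorem [Theorem 1](#ThAVVSimple){reference-type="ref" reference="ThAVVSimple"}, can be probed numerically in the same spirit. The following sketch (again ours, not part of the paper, with arbitrarily chosen parameters) samples random vertices of the hypercube and compares the normalized distances from a fixed lattice point with the value $\sqrt{1/4+r_{\ifmmode\bm{a}\else\textbf{a}\fi}^2}$ announced above.

```python
import math
import random

random.seed(1)
d, N = 2000, 10  # arbitrary illustrative parameters

def nd(v, w):
    """Normalized Euclidean distance between two points of [0, N]^d."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(v, w))) / (math.sqrt(d) * N)

a = [random.randint(0, N) for _ in range(d)]  # a fixed lattice point
r_a = nd(a, [N / 2] * d)                      # normalized distance to the center
predicted = math.sqrt(1 / 4 + r_a ** 2)

trials = 300
hits = 0
for _ in range(trials):
    v = [random.choice((0, N)) for _ in range(d)]  # a random vertex
    hits += abs(nd(a, v) - predicted) < 0.02
proportion = hits / trials
```

In dimension $2000$ the observed proportion should already be close to $1$, illustrating the concentration of vertex distances.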
# Distances between any fixed point and the vertices of the hypercube {#SectionAMV} For any $\ifmmode\bm{a}\else\textbf{a}\fi=(a_1,\dots,a_d)\in\mathcal W$, in the following we denote $\left|\hspace*{2.5pt} \!\!\left| \ifmmode\bm{a}\else\textbf{a}\fi\right|\hspace*{2.5pt} \!\!\right|^2:=a_1^2+\cdots+a_d^2$ and $\pmb{\left\vert\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right.}\ifmmode\bm{a}\else\textbf{a}\fi\pmb{\left.\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right\vert}:=a_1+\cdots+a_d$. Let $\mathcal V$ denote the set of all vertices of $[0,N]^d$. This cube has $2^d$ vertices, each of which has all components equal to $0$ or $N$. Notice that if $\mathcal V$ is seen as a subset of the set of lattice points $\mathcal W$, then no two vertices in $\mathcal V$ are visible from each other, since there are always other points of integer coordinates in $[0,N]^d$ that lie between them, provided that $N\ge 2$. The set of points in $\mathcal W$ that are visible from each other was the object of study in [@ACZ2023]. ## The average $A_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}(d,N)$ {#SubsectionAV} Let $\ifmmode\bm{a}\else\textbf{a}\fi=(a_1,\dots,a_d)\in\mathbb{R}^d$ be fixed and let $A_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}(d,N)$ denote the average of the squares of the distances from $\ifmmode\bm{a}\else\textbf{a}\fi$ to all vertices $\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V$.
We have $$\begin{split} A_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}(d,N) &= \frac{1}{\#\mathcal V}\sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V}\mathop{\mathrm{\mathfrak{d}}}^2(\ifmmode\bm{v}\else\textbf{v}\fi,\ifmmode\bm{a}\else\textbf{a}\fi)\\ &=\frac{1}{2^d}\sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V} \left((v_1-a_1)^2+\cdots+(v_d-a_d)^2\right)\\ &=\frac{1}{2^d}\sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V}\sum_{j=1}^{d}v_j^2 -\frac{1}{2^{d-1}}\sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V}\sum_{j=1}^{d}v_j a_j +\frac{1}{2^d}\sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V}\sum_{j=1}^{d}a_j^2. \end{split}$$ For any fixed $j$, there are $2^{d-1}$ vertices $\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V$ with the $j$-th component equal to $0$, while the remaining ones have the $j$-th component equal to $N$. Then $$\begin{split} A_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}(d,N) &=\frac{1}{2^d}\sum_{j=1}^{d}2^{d-1}N^2 -\frac{1}{2^{d-1}}\sum_{j=1}^{d}a_j2^{d-1}N +\frac{1}{2^d}\left|\hspace*{2.5pt} \!\!\left| \ifmmode\bm{a}\else\textbf{a}\fi\right|\hspace*{2.5pt} \!\!\right|^2 2^d\\ &=\frac{1}{2}dN^2-\pmb{\left\vert\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right.}\ifmmode\bm{a}\else\textbf{a}\fi\pmb{\left.\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right\vert}N+\left|\hspace*{2.5pt} \!\!\left| \ifmmode\bm{a}\else\textbf{a}\fi\right|\hspace*{2.5pt} \!\!\right|^2. \end{split}$$ We state the result in the next lemma. **Lemma 1**. *Let $\mathcal V$ be the set of vertices of the hypercube $[0,N]^d$, where $N\ge 1$ and $d\ge 1$ are integers. Let $\ifmmode\bm{a}\else\textbf{a}\fi=(a_1,\dots,a_d)\in\mathbb{R}^d$ be fixed. 
Then, the average of all the squares of distances from $\ifmmode\bm{a}\else\textbf{a}\fi$ to points in $\mathcal V$ is $$\label{eqApV} A_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}(d,N) = \frac{1}{2}dN^2-\pmb{\left\vert\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right.}\ifmmode\bm{a}\else\textbf{a}\fi\pmb{\left.\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right\vert}N+\left|\hspace*{2.5pt} \!\!\left| \ifmmode\bm{a}\else\textbf{a}\fi\right|\hspace*{2.5pt} \!\!\right|^2.$$* In particular, Lemma [Lemma 1](#LemmaAverageV){reference-type="ref" reference="LemmaAverageV"} says that the square root of the average of the squared distances from the origin to the vertices of $[0,N]^d$ equals $\sqrt{dN^2/2}$, which is the same as saying that the corresponding normalized value is $1/\sqrt{2}$. ## The second moment about the average distances to the vertices {#SubsectionM2V} Starting with the definition of the second moment, which we denote by $\mathfrak M_{2; \ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}(d,N)$, we rearrange the terms in its defining summation to aggregate the average and make use of Lemma [Lemma 1](#LemmaAverageV){reference-type="ref" reference="LemmaAverageV"}.
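As a quick sanity check (ours, not part of the paper), the closed formula of Lemma [Lemma 1](#LemmaAverageV){reference-type="ref" reference="LemmaAverageV"} can be verified by brute force over all $2^d$ vertices for small $d$:

```python
from itertools import product

def average_sq_dist_to_vertices(a, N):
    """Brute-force average of squared distances from a to the 2^d vertices of [0, N]^d."""
    d = len(a)
    total = sum(
        sum((vj - aj) ** 2 for vj, aj in zip(v, a))
        for v in product((0, N), repeat=d)
    )
    return total / 2 ** d

def lemma1_formula(a, N):
    """(1/2) d N^2 - |a| N + ||a||^2, in the notation of the paper."""
    d = len(a)
    return 0.5 * d * N * N - sum(a) * N + sum(x * x for x in a)

# agreement on a sample point with d = 5, N = 5
a, N = (1, 4, 2, 0, 3), 5
assert abs(average_sq_dist_to_vertices(a, N) - lemma1_formula(a, N)) < 1e-9
```

Taking $\ifmmode\bm{a}\else\textbf{a}\fi$ at the origin, both computations return $dN^2/2$, matching the remark following the lemma.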
Thus, writing shortly $A_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}$ instead of $A_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}(d,N)$, we have: $$\label{eqM2aV1} \begin{split} \mathfrak M_{2; \ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}(d,N) &:= \frac{1}{\#\mathcal V}\sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V} \left(\mathop{\mathrm{\mathfrak{d}}}^2(\ifmmode\bm{v}\else\textbf{v}\fi,\ifmmode\bm{a}\else\textbf{a}\fi)-A_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}\right)^2\\ &=\frac{1}{2^d}\sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V} \left(\mathop{\mathrm{\mathfrak{d}}}^4(\ifmmode\bm{v}\else\textbf{v}\fi,\ifmmode\bm{a}\else\textbf{a}\fi) -2\mathop{\mathrm{\mathfrak{d}}}^2(\ifmmode\bm{v}\else\textbf{v}\fi,\ifmmode\bm{a}\else\textbf{a}\fi)A_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}+A_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}^2\right)\\ &=\frac{1}{2^d}\left(\sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V}\mathop{\mathrm{\mathfrak{d}}}^4(\ifmmode\bm{v}\else\textbf{v}\fi,\ifmmode\bm{a}\else\textbf{a}\fi) -2A_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}\sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V}\mathop{\mathrm{\mathfrak{d}}}^2(\ifmmode\bm{v}\else\textbf{v}\fi,\ifmmode\bm{a}\else\textbf{a}\fi) +\sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V}A_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}^2\right)\\ &=\frac{1}{2^d}\cdot\Sigma_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}-A_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}^2. 
\end{split}$$ To find the sum denoted by $\Sigma_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}$ in [\[eqM2aV1\]](#eqM2aV1){reference-type="eqref" reference="eqM2aV1"}, we write it explicitly: $$\label{eqM2aV11} \begin{split} \Sigma_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V} &:=\sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V}\mathop{\mathrm{\mathfrak{d}}}^4(\ifmmode\bm{v}\else\textbf{v}\fi,\ifmmode\bm{a}\else\textbf{a}\fi) % = \sum_{\bv\in\cV} \sum_{m=1}^d \sum_{n=1}^d(v_m-a_m)^2(v_n-a_n)^2 = \sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V} \sum_{m=1}^d \sum_{n=1}^d h(v_m,v_n, a_m,a_n), \end{split}$$ where $h(v_m,v_n, a_m,a_n)=(v_m-a_m)^2(v_n-a_n)^2$ is the sum of the following nine monomials: $$\label{eqM2aV} \begin{split} h(v_m,v_n, a_m,a_n)=& v_m^2v_n^2-2v_m^2 v_n a_n + v_m^2a_n^2\\ & -2v_m a_m v_n^2 + 4 v_m a_m v_n a_n -2v_m a_m a_n^2\\ & +a_m^2 v_n^2 -2 a_m^2 v_n a_n + a_m^2a_n^2. \end{split}$$ Next we take into account the contribution of each monomial in [\[eqM2aV\]](#eqM2aV){reference-type="eqref" reference="eqM2aV"} to the corresponding sum in [\[eqM2aV1\]](#eqM2aV1){reference-type="eqref" reference="eqM2aV1"}. For this we separate the group of the $d$ diagonal terms (those with $m=n$) from the group of the $d^2-d$ off-diagonal terms, and then count the number of vertices with the non-zero components at the right place. 
We have: $$\label{eqM2aV99a} \begin{split} S_1(\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V)&=\sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V} \sum_{m=1}^d \sum_{n=1}^d v_m^2v_n^2 = N^4 \left(d 2^{d-1}+ (d^2-d)2^{d-2}\right);\\ S_2(\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V)&= \sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V} \sum_{m=1}^d \sum_{n=1}^dv_m^2 v_n a_n = N^3 \left(2^{d-1}\pmb{\left\vert\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right.}\ifmmode\bm{a}\else\textbf{a}\fi\pmb{\left.\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right\vert}+ (d-1)2^{d-2}\pmb{\left\vert\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right.}\ifmmode\bm{a}\else\textbf{a}\fi\pmb{\left.\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right\vert}\right);\\ S_3(\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V)&=\sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V} \sum_{m=1}^d \sum_{n=1}^d v_m^2a_n^2 = N^2 d2^{d-1}\left|\hspace*{2.5pt} \!\!\left| \ifmmode\bm{a}\else\textbf{a}\fi\right|\hspace*{2.5pt} \!\!\right|^2;\\ \end{split}$$ then $$\label{eqM2aV99b} \begin{split} S_4(\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V)&=S_2(\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V)= \sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V} \sum_{m=1}^d \sum_{n=1}^dv_m a_m v_n^2 \\ &= N^3 \left(2^{d-1}\pmb{\left\vert\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right.}\ifmmode\bm{a}\else\textbf{a}\fi\pmb{\left.\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right\vert}+ (d-1)2^{d-2}\pmb{\left\vert\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right.}\ifmmode\bm{a}\else\textbf{a}\fi\pmb{\left.\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right\vert}\right);\\ S_5(\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V)&= \sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V} \sum_{m=1}^d \sum_{n=1}^dv_m a_m v_n a_n = N^2 \left(2^{d-1}\left|\hspace*{2.5pt} \!\!\left| \ifmmode\bm{a}\else\textbf{a}\fi\right|\hspace*{2.5pt} \!\!\right|^2+ 
2^{d-2}(\pmb{\left\vert\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right.}\ifmmode\bm{a}\else\textbf{a}\fi\pmb{\left.\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right\vert}^2-\left|\hspace*{2.5pt} \!\!\left| \ifmmode\bm{a}\else\textbf{a}\fi\right|\hspace*{2.5pt} \!\!\right|^2)\right);\\ S_6(\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V)&= \sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V} \sum_{m=1}^d \sum_{n=1}^dv_m a_m a_n^2 = N 2^{d-1}\pmb{\left\vert\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right.}\ifmmode\bm{a}\else\textbf{a}\fi\pmb{\left.\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right\vert}\left|\hspace*{2.5pt} \!\!\left| \ifmmode\bm{a}\else\textbf{a}\fi\right|\hspace*{2.5pt} \!\!\right|^2;\\ \end{split}$$ and $$\label{eqM2aV99c} \begin{split} S_7(\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V)&= S_3(\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V)=\sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V} \sum_{m=1}^d \sum_{n=1}^da_m^2 v_n^2 = N^2 d 2^{d-1}\left|\hspace*{2.5pt} \!\!\left| \ifmmode\bm{a}\else\textbf{a}\fi\right|\hspace*{2.5pt} \!\!\right|^2;\\ S_8(\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V)&= S_6(\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V)= \sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V} \sum_{m=1}^d \sum_{n=1}^d a_m^2 v_n a_n = N2^{d-1}\pmb{\left\vert\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right.}\ifmmode\bm{a}\else\textbf{a}\fi\pmb{\left.\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right\vert}\left|\hspace*{2.5pt} \!\!\left| \ifmmode\bm{a}\else\textbf{a}\fi\right|\hspace*{2.5pt} \!\!\right|^2;\\ S_9(\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V)&= \sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V} \sum_{m=1}^d \sum_{n=1}^d a_m^2 a_n^2 = 2^d\left|\hspace*{2.5pt} \!\!\left| \ifmmode\bm{a}\else\textbf{a}\fi\right|\hspace*{2.5pt} \!\!\right|^4.\\ \end{split}$$ On adding the sums in [\[eqM2aV99a\]](#eqM2aV99a){reference-type="eqref" reference="eqM2aV99a"}, [\[eqM2aV99b\]](#eqM2aV99b){reference-type="eqref" reference="eqM2aV99b"} and 
[\[eqM2aV99c\]](#eqM2aV99c){reference-type="eqref" reference="eqM2aV99c"} as many times as indicated by the appearances of their defining monomials in [\[eqM2aV\]](#eqM2aV){reference-type="eqref" reference="eqM2aV"}, we find that the sum $\Sigma_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}$ from [\[eqM2aV11\]](#eqM2aV11){reference-type="eqref" reference="eqM2aV11"} is equal to $$\label{eqsigmaTot} \begin{split} \Sigma_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V} &= \big(S_1(\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V)-2S_2(\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V)+S_3(\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V)\big)\\ &\phantom{ = (S_1(\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V)-} -\big(2S_4(\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V)-4S_5(\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V)+2S_6(\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V)\big) \\ &\phantom{ (S_1(\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V)-2S_2(\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V)+ S_3( } +\big(S_7(\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V)-2S_8(\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V)+S_9(\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V)\big)\\ &= 2^{d-2}\left( (d^2 + d)N^4 - 4(d+1)\pmb{\left\vert\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right.}\ifmmode\bm{a}\else\textbf{a}\fi\pmb{\left.\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right\vert}N^3 - 8\pmb{\left\vert\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right.}\ifmmode\bm{a}\else\textbf{a}\fi\pmb{\left.\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right\vert}\left|\hspace*{2.5pt} \!\!\left| \ifmmode\bm{a}\else\textbf{a}\fi\right|\hspace*{2.5pt} \!\!\right|^2 N \right.\\ &\phantom{= 2^{d-2}( (d^2 + d)N^4 - } \left.+ 4\big(\pmb{\left\vert\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right.}\ifmmode\bm{a}\else\textbf{a}\fi\pmb{\left.\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right\vert}^2 + (d+1)\left|\hspace*{2.5pt} \!\!\left| \ifmmode\bm{a}\else\textbf{a}\fi\right|\hspace*{2.5pt} \!\!\right|^2\big)N^2 + 4\left|\hspace*{2.5pt} \!\!\left| 
\ifmmode\bm{a}\else\textbf{a}\fi\right|\hspace*{2.5pt} \!\!\right|^4\right). \end{split}$$ Then, inserting the results from [\[eqsigmaTot\]](#eqsigmaTot){reference-type="eqref" reference="eqsigmaTot"} and [\[eqApV\]](#eqApV){reference-type="eqref" reference="eqApV"} in formula [\[eqM2aV1\]](#eqM2aV1){reference-type="eqref" reference="eqM2aV1"}, we arrive at a closed form expression for $\mathfrak M_{2; \ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}(d,N)$, which we state in the next lemma. **Lemma 2**. *Let $d,N \ge 1$ be integers and let $\mathcal V$ be the set of vertices of the hypercube $[0,N]^d$. Then, the second moment about the mean $A_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}(d,N)$ equals $$%\label{eqM2VerticesFinal} \begin{split} \mathfrak M_{2; \ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}(d,N) = \frac{1}{\#\mathcal V}\sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V} \left(\mathop{\mathrm{\mathfrak{d}}}^2(\ifmmode\bm{v}\else\textbf{v}\fi,\ifmmode\bm{a}\else\textbf{a}\fi)-A_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}(d,N)\right)^2 &= \frac{1}{4}dN^4-\pmb{\left\vert\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right.}\ifmmode\bm{a}\else\textbf{a}\fi\pmb{\left.\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right\vert} N^3+\left|\hspace*{2.5pt} \!\!\left| \ifmmode\bm{a}\else\textbf{a}\fi\right|\hspace*{2.5pt} \!\!\right|^2N^2. % 1/4*d*n^4 - a1*n^3 + a2*n^2 \end{split}$$* # The average of the squares and the square root of the average {#SectionAVAV} Since the normalized second moment $\mathfrak M_{2;\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}(d,N)/d^2N^4 = o(1)$ as $d\to\infty$, it follows that for any fixed $\ifmmode\bm{a}\else\textbf{a}\fi\in\mathcal W$, almost all normalized distances from $\ifmmode\bm{a}\else\textbf{a}\fi$ to the vertices of $\mathcal W$ are close to $\sqrt{A_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}(d,N)/dN^2}$. This is the object of the following theorem. **Theorem 4**. 
*Let $B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}:=A_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}(d,N)/dN^2$ denote the average of the squares of the normalized distances from $\ifmmode\bm{a}\else\textbf{a}\fi$ to the vertices of $[0,N]^d$. Let $\eta\in (0,1/2)$ be fixed. Then, for any integers $d\ge 2$, $N\ge 1$, and any point $\ifmmode\bm{a}\else\textbf{a}\fi\in\mathcal W$, we have $$\label{eqPTV1} \mbox{\small$\displaystyle\frac{1}{\#\mathcal V}$} \#\left\{\ifmmode\bm{v}\else\textbf{v}\fi\in \mathcal V: \mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v}\else\textbf{v}\fi)\in \left[\sqrt{B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}}-\mbox{\small$\displaystyle\frac{1}{d^\eta}$},\ \sqrt{B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}}+\mbox{\small$\displaystyle\frac{1}{d^\eta}$} \right] \right\} \ge 1- \mbox{\small$\displaystyle\frac{1}{d^{1-2\eta}}$}\,.$$* *Proof.* Let $\eta, d, N, \ifmmode\bm{a}\else\textbf{a}\fi$ be as in the statement of the theorem. Since $$-\pmb{\left\vert\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right.}\ifmmode\bm{a}\else\textbf{a}\fi\pmb{\left.\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right\vert}N+\left|\hspace*{2.5pt} \!\!\left| \ifmmode\bm{a}\else\textbf{a}\fi\right|\hspace*{2.5pt} \!\!\right|^2 \le 0,$$ from Lemma [Lemma 2](#LemmaM2V){reference-type="ref" reference="LemmaM2V"} we find that $$\label{eqM2VerticesUpperBound} \begin{split} \mbox{\small$\displaystyle\frac{\mathfrak M_{2;\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}(d,N)}{d^2N^4}$} \le \mbox{\small$\displaystyle\frac{1}{4d}$}\,. 
\end{split}$$ On the other hand, for any parameters $b,T>0$, $$\label{eqM2VerticesLowerBound} \begin{split} \mbox{\small$\displaystyle\frac{\mathfrak M_{2;\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}(d,N)}{d^2N^4}$} &= \mbox{\small$\displaystyle\frac{1}{\#\mathcal V}$} \times\sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V} \left( \mathop{\mathrm{\mathfrak{d}}}_d^2(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v}\else\textbf{v}\fi) - B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V} \right)^2 \\ &\ge \mbox{\small$\displaystyle\frac{1}{\#\mathcal V}$}\times \!\! \sum_{\substack{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V\\ \left\vert \mathop{\mathrm{\mathfrak{d}}}_d^2(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v}\else\textbf{v}\fi)-B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V} \right\vert \ge \tfrac{1}{bT} }} \!\! \left(\mathop{\mathrm{\mathfrak{d}}}_d^2(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v}\else\textbf{v}\fi)-B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}\right)^2 \\ &\ge \mbox{\small$\displaystyle\frac{1}{\#\mathcal V}$} \times \!\! \sum_{\substack{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V\\ \left\vert \mathop{\mathrm{\mathfrak{d}}}_d^2(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v}\else\textbf{v}\fi)-B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V} \right\vert \ge \tfrac{1}{bT} }} \!\! \mbox{\small$\displaystyle\frac{1}{b^2T^2}$} \,. 
\end{split}$$ Then, on combining [\[eqM2VerticesUpperBound\]](#eqM2VerticesUpperBound){reference-type="eqref" reference="eqM2VerticesUpperBound"} and [\[eqM2VerticesLowerBound\]](#eqM2VerticesLowerBound){reference-type="eqref" reference="eqM2VerticesLowerBound"}, we see that $$\label{eqThmV1SquaresBound} \begin{split} \mbox{\small$\displaystyle\frac{1}{\#\mathcal V}$} \#\left\{ \ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V\colon \left\vert \mathop{\mathrm{\mathfrak{d}}}_d^2(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v}\else\textbf{v}\fi)-B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V} \right\vert \ge \mbox{\small$\displaystyle\frac{1}{bT}$} \right\} \le \mbox{\small$\displaystyle\frac{b^2T^2}{4d}$}. \end{split}$$ Now, by Lemma [Lemma 1](#LemmaAverageV){reference-type="ref" reference="LemmaAverageV"} and the definition of $B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}$ in the hypothesis, we find that $$\label{eqLowerBoundBV} \sqrt{B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}} = \sqrt{\mbox{\small$\displaystyle\frac{1}{2}$} + \mbox{\small$\displaystyle\frac{1}{dN^2}$}\left( \left|\hspace*{2.5pt} \!\!\left| \ifmmode\bm{a}\else\textbf{a}\fi\right|\hspace*{2.5pt} \!\!\right|^2 - N\pmb{\left\vert\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right.}\ifmmode\bm{a}\else\textbf{a}\fi\pmb{\left.\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right\vert} \right)} \geq \sqrt{\mbox{\small$\displaystyle\frac{1}{2}$} - \mbox{\small$\displaystyle\frac{1}{4}$}} = \mbox{\small$\displaystyle\frac{1}{2}$}\,.$$ (Here we have taken into account the fact that the minimum of $\left|\hspace*{2.5pt} \!\!\left| \ifmmode\bm{a}\else\textbf{a}\fi\right|\hspace*{2.5pt} \!\!\right|^2-N\pmb{\left\vert\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right.}\ifmmode\bm{a}\else\textbf{a}\fi\pmb{\left.\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right\vert}$ is attained in the middle of the hypercube, which is a consequence of the fact that, independently in each of the $d$ coordinates, the minimum of
$x\mapsto x^2-Nx$ is reached for $x=N/2$.) Then, using inequality [\[eqLowerBoundBV\]](#eqLowerBoundBV){reference-type="eqref" reference="eqLowerBoundBV"} and the fact that $\mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v}\else\textbf{v}\fi)\geq 0$, it follows that $$\begin{split} \left\vert \mathop{\mathrm{\mathfrak{d}}}_d^2(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v}\else\textbf{v}\fi) - B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V} \right\vert &= \left\vert \mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v}\else\textbf{v}\fi) - \sqrt{B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}} \right\vert\left( \mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v}\else\textbf{v}\fi) + \sqrt{B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}} \right)\\ &\ge \mbox{\small$\displaystyle\frac{1}{2}$}\left\vert \mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v}\else\textbf{v}\fi) - \sqrt{B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}} \right\vert. \end{split}$$ Therefore, we can tighten the restriction in the set on the left-hand side of inequality [\[eqThmV1SquaresBound\]](#eqThmV1SquaresBound){reference-type="eqref" reference="eqThmV1SquaresBound"} by taking $b=2$, and, as a consequence, we find that $$%\label{eqThmV1Proved} \begin{split} \mbox{\small$\displaystyle\frac{1}{\#\mathcal V}$} \#\left\{ \ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V\colon \left\vert \mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v}\else\textbf{v}\fi)-\sqrt{B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}} \right\vert \ge \mbox{\small$\displaystyle\frac{1}{T}$} \right\} \le \mbox{\small$\displaystyle\frac{T^2}{d}$}. \end{split}$$ Finally, taking $T=d^{\eta}$ completes the proof of Theorem [Theorem 4](#TheoremV1){reference-type="ref" reference="TheoremV1"}.
◻ # Triangles involving the Vertices of the Hypercube {#SectionVTriangles} In this section we analyze the set of triangles in which at least one vertex is a corner of the hypercube. We count them to see how many are close to, and how many are far from, the average. ## Triangles formed from any fixed point and two vertices of the cube From Theorem [Theorem 4](#TheoremV1){reference-type="ref" reference="TheoremV1"}, it will follow that, for almost all pairs of distinct vertices $(\ifmmode\bm{v_1}\else\textbf{v_1}\fi,\ifmmode\bm{v_2}\else\textbf{v_2}\fi)\in\mathcal V^2$, both vertices lie at a distance close to $\sqrt{B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}}$ from $\ifmmode\bm{a}\else\textbf{a}\fi$. As a result, one can say that almost all triangles formed by $\ifmmode\bm{a}\else\textbf{a}\fi$ and two vertices are 'almost isosceles'. If we denote by $\mathcal T_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V^2}\subset\mathcal V^2$ the set of all pairs of vertices $(\ifmmode\bm{v_1}\else\textbf{v_1}\fi,\ifmmode\bm{v_2}\else\textbf{v_2}\fi)$ that form, together with $\ifmmode\bm{a}\else\textbf{a}\fi$, a non-degenerate triangle (that is, a triangle with distinct vertices), then $$\label{eqv1v2} \begin{split} &\mbox{\small$\displaystyle\frac{1}{\#\mathcal T_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V^2}}$} \#\left\{ (\ifmmode\bm{v_1}\else\textbf{v_1}\fi,\ifmmode\bm{v_2}\else\textbf{v_2}\fi)\in\mathcal T_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V^2} \colon \mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v_1}\else\textbf{v_1}\fi), \mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v_2}\else\textbf{v_2}\fi) \in \left[ \sqrt{B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}}-\mbox{\small$\displaystyle\frac{1}{d^\eta}$}, \sqrt{B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}}+\mbox{\small$\displaystyle\frac{1}{d^\eta}$}\right]\right\} \\ &\phantom{(\ifmmode\bm{v_1}\else\textbf{v_1}\fi, } \ge \mbox{\small$\displaystyle\frac{1}{\#\mathcal
V}$}\left( \#\left\{ \ifmmode\bm{v_1}\else\textbf{v_1}\fi\in\mathcal V\colon \mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v_1}\else\textbf{v_1}\fi) \in \left[ \sqrt{B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}}-\mbox{\small$\displaystyle\frac{1}{d^\eta}$}, \sqrt{B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}}+\mbox{\small$\displaystyle\frac{1}{d^\eta}$}\right] \right\} - 1 \right)\\ &\phantom{(\ifmmode\bm{v_1}\else\textbf{v_1}\fi, \ge}\times\mbox{\small$\displaystyle\frac{1}{\#\mathcal V}$}\left( \#\left\{ \ifmmode\bm{v_2}\else\textbf{v_2}\fi\in\mathcal V\colon \mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v_2}\else\textbf{v_2}\fi) \in \left[ \sqrt{B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}}-\mbox{\small$\displaystyle\frac{1}{d^\eta}$}, \sqrt{B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}}+\mbox{\small$\displaystyle\frac{1}{d^\eta}$}\right] \right\} - 2 \right), \end{split}$$ where we subtract $1$ and $2$, respectively, from the two terms in the right-hand side of [\[eqv1v2\]](#eqv1v2){reference-type="eqref" reference="eqv1v2"} to account for the possibilities that $\ifmmode\bm{a}\else\textbf{a}\fi, \ifmmode\bm{v_1}\else\textbf{v_1}\fi, \ifmmode\bm{v_2}\else\textbf{v_2}\fi$ form a degenerate triangle. 
From Theorem [Theorem 4](#TheoremV1){reference-type="ref" reference="TheoremV1"}, we see that the right-hand side of [\[eqv1v2\]](#eqv1v2){reference-type="eqref" reference="eqv1v2"} is bounded below by $$\begin{split} \left(1-\mbox{\small$\displaystyle\frac{1}{d^{1-2\eta}}$}-\mbox{\small$\displaystyle\frac{1}{2^d}$}\right) & \left(1-\mbox{\small$\displaystyle\frac{1}{d^{1-2\eta}}$}-\mbox{\small$\displaystyle\frac{2}{2^d}$}\right)\\ &=1-\mbox{\small$\displaystyle\frac{2}{d^{1-2\eta}}$} + \mbox{\small$\displaystyle\frac{1}{d^{2-4\eta}}$}-\mbox{\small$\displaystyle\frac{3}{2^d}$}+\mbox{\small$\displaystyle\frac{3}{2^d d^{1-2\eta}}$}+\mbox{\small$\displaystyle\frac{2}{2^{2d}}$} \\ &\ge 1-\mbox{\small$\displaystyle\frac{2}{d^{1-2\eta}}$}, \end{split}$$ for $d\geq 8$, since in that range $1/d^{2-4\eta}-3/2^d \ge 0$. We now arrive at the following theorem on isosceles triangles. **Theorem 5**. *Let $\ifmmode\bm{a}\else\textbf{a}\fi\in\mathcal W$ be fixed and let $\mathcal T_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V^2}$ denote the set of triangles with distinct vertices $\ifmmode\bm{a}\else\textbf{a}\fi$ and $\ifmmode\bm{v}\else\textbf{v}\fi_1,\ifmmode\bm{v}\else\textbf{v}\fi_2\in\mathcal V$. Let $\eta\in(0,1/2)$ be fixed. 
Then, for any integers $d\ge8$ and $N\ge1$, we have $$\frac{1}{\#\mathcal T_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V^2}} \#\left\{ (\ifmmode\bm{v_1}\else\textbf{v_1}\fi,\ifmmode\bm{v_2}\else\textbf{v_2}\fi)\in\mathcal T_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V^2} \colon \left\vert \mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v_1}\else\textbf{v_1}\fi)-\mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v_2}\else\textbf{v_2}\fi) \right\vert \le \mbox{\small$\displaystyle\frac{2}{d^\eta}$} \right\} \ge 1-\mbox{\small$\displaystyle\frac{2}{d^{1-2\eta}}$}.$$* Seeing that almost every such triangle is almost isosceles, we may wonder if any of these triangles can be equilateral, or perhaps a right triangle. Let $\ifmmode\bm{c}\else\textbf{c}\fi=\big(\frac{N}{2},\dots,\frac{N}{2}\big)$ be the center of the hypercube. Notice that $\ifmmode\bm{c}\else\textbf{c}\fi$ may or may not belong to $\mathcal W$, but in any case, if $N$ is odd, then the distance from $\ifmmode\bm{c}\else\textbf{c}\fi$ to the nearest point of $\mathcal W$ is not greater than $\sqrt{d}/2$. This is the same as saying that the normalized distance from $\ifmmode\bm{c}\else\textbf{c}\fi$ to $\mathcal W$ is at most $1/(2N)$, so we may reason with a point with integer coordinates that is close to $\ifmmode\bm{c}\else\textbf{c}\fi$ instead of $\ifmmode\bm{c}\else\textbf{c}\fi$ itself. For simplicity we may assume here that $N$ is even, but this is not necessary since, in fact, in the proofs of Theorems [Theorem 4](#TheoremV1){reference-type="ref" reference="TheoremV1"} and [Theorem 5](#ThVIsosceles){reference-type="ref" reference="ThVIsosceles"}, we did not make use of the fact that the coordinates of $\ifmmode\bm{a}\else\textbf{a}\fi$ are integers.
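As a quick numerical illustration of Theorem [Theorem 4](#TheoremV1){reference-type="ref" reference="TheoremV1"} (a sanity check only, not part of any proof), one can sample random vertices and verify that almost all of them lie in the predicted band around $\sqrt{B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}}$; the parameters $d=400$, $N=3$, $\eta=1/4$, and the trial count below are arbitrary choices.

```python
import random

# Monte Carlo illustration (not a proof) of the concentration in Theorem 4:
# most vertices of [0, N]^d lie at normalized distance close to sqrt(B_{a,V})
# from a fixed point a.  The parameters d, N, eta are arbitrary choices.
random.seed(0)
d, N, eta = 400, 3, 0.25
a = [random.randint(0, N) for _ in range(d)]        # a point of the lattice W

# B_{a,V} = 1/2 + (||a||^2 - N|a|)/(d N^2), as in the proof of Theorem 4
B = 0.5 + (sum(x * x for x in a) - N * sum(a)) / (d * N ** 2)

def ndist(p, q):
    """Normalized distance d_d(p, q) = |p - q| / (sqrt(d) * N)."""
    return sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5 / ((d ** 0.5) * N)

trials = 2000
hits = sum(
    abs(ndist(a, [random.choice((0, N)) for _ in range(d)]) - B ** 0.5) <= d ** -eta
    for _ in range(trials)
)
frac = hits / trials          # empirically this proportion is essentially 1
assert frac >= 1 - 1 / d ** (1 - 2 * eta)
```

With these parameters the empirical proportion comfortably exceeds the guaranteed bound $1-1/d^{1-2\eta}=0.95$.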
Note that all vertices from $\mathcal V$ are equally far from the center and $$\label{eqDistanceCV} \mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{v}\else\textbf{v}\fi,\ifmmode\bm{c}\else\textbf{c}\fi)=\mbox{\small$\displaystyle\frac{1}{\sqrt{d}N}$} \Big(\sum_{1\le j\le d}\big(N/2\big)^2\Big)^{1/2} = \mbox{\small$\displaystyle\frac{1}{2}$}, \text{ for $\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V$,}$$ while for arbitrary $\ifmmode\bm{a}\else\textbf{a}\fi=(a_1,\dots,a_d)$, the normalized distance to $\ifmmode\bm{c}\else\textbf{c}\fi$ is $$\label{eqDistanceCba} \mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{c}\else\textbf{c}\fi)=\mbox{\small$\displaystyle\frac{1}{\sqrt{d}N}$} \Big(\sum_{1\le j\le d}\big(a_j-N/2\big)^2\Big)^{1/2} = \mbox{\small$\displaystyle\frac{1}{\sqrt{d}N}$}\left(\mbox{\small$\displaystyle\frac{dN^2}{4}$}-\pmb{\left\vert\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right.}\ifmmode\bm{a}\else\textbf{a}\fi\pmb{\left.\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right\vert}N+\left|\hspace*{2.5pt} \!\!\left| \ifmmode\bm{a}\else\textbf{a}\fi\right|\hspace*{2.5pt} \!\!\right|^2\right)^{1/2}.$$ Now let us point out the following two observations. *Remark 1*. `(1)` Taking $\ifmmode\bm{a}\else\textbf{a}\fi$ in $\mathcal V$, Theorem [Theorem 4](#TheoremV1){reference-type="ref" reference="TheoremV1"} tells us that the normalized distance between almost any two vertices in $\mathcal V$ is close to $1/\sqrt{2}$. 
`(2)` By Lemma [Lemma 1](#LemmaAverageV){reference-type="ref" reference="LemmaAverageV"}, the normalized average of the squares of distances from $\ifmmode\bm{a}\else\textbf{a}\fi$ to vertices in $\mathcal V$ is $B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}=A_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}/(dN^2) =1/2-\pmb{\left\vert\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right.}\ifmmode\bm{a}\else\textbf{a}\fi\pmb{\left.\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right\vert}/(dN)+\left|\hspace*{2.5pt} \!\!\left| \ifmmode\bm{a}\else\textbf{a}\fi\right|\hspace*{2.5pt} \!\!\right|^2/(dN^2)$. Then, by [\[eqDistanceCV\]](#eqDistanceCV){reference-type="eqref" reference="eqDistanceCV"} and [\[eqDistanceCba\]](#eqDistanceCba){reference-type="eqref" reference="eqDistanceCba"} this can further be expressed as $$\label{eqPythagoras} B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V} = \mbox{\small$\displaystyle\frac{1}{4}$} + \left(\mbox{\small$\displaystyle\frac{1}{4}$}-\mbox{\small$\displaystyle\frac{\pmb{\left\vert\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right.}\ifmmode\bm{a}\else\textbf{a}\fi\pmb{\left.\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right\vert}}{dN}$}+\mbox{\small$\displaystyle\frac{\left|\hspace*{2.5pt} \!\!\left| \ifmmode\bm{a}\else\textbf{a}\fi\right|\hspace*{2.5pt} \!\!\right|^2}{dN^2}$}\right) = \mathop{\mathrm{\mathfrak{d}}}_d^2(\ifmmode\bm{v}\else\textbf{v}\fi,\ifmmode\bm{c}\else\textbf{c}\fi)+\mathop{\mathrm{\mathfrak{d}}}_d^2(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{c}\else\textbf{c}\fi), \text{ for any $\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V$}.$$ In particular, [\[eqPythagoras\]](#eqPythagoras){reference-type="eqref" reference="eqPythagoras"} shows that the average $B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}$ depends only on the normalized distance $\mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{c}\else\textbf{c}\fi)$, which we shall further denote by $r_{\ifmmode\bm{a}\else\textbf{a}\fi}$. 
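Both the closed form of Lemma [Lemma 2](#LemmaM2V){reference-type="ref" reference="LemmaM2V"} and the identity [\[eqPythagoras\]](#eqPythagoras){reference-type="eqref" reference="eqPythagoras"} can be checked exhaustively over all $2^d$ vertices of a small instance; the sketch below does so for a random real point $\ifmmode\bm{a}\else\textbf{a}\fi$ (the algebra does not use integrality of the coordinates), with arbitrary sizes $d=6$, $N=4$.

```python
import itertools
import random

# Exhaustive check, on a small instance, of the closed form in Lemma 2 and of
# the identity B_{a,V} = 1/4 + r_a^2; the sizes d, N and the (non-integer)
# point a are arbitrary choices.
random.seed(1)
d, N = 6, 4
a = [random.uniform(0, N) for _ in range(d)]
c = [N / 2] * d                                  # center of the hypercube
V = list(itertools.product((0, N), repeat=d))    # all 2^d vertices

def sqdist(p, q):
    return sum((x - y) ** 2 for x, y in zip(p, q))

A = sum(sqdist(a, v) for v in V) / len(V)        # average A_{a,V}
M2 = sum((sqdist(a, v) - A) ** 2 for v in V) / len(V)
norm1, norm2sq = sum(a), sum(x * x for x in a)   # |a| and ||a||^2
# Lemma 2: M2 = d N^4 / 4 - |a| N^3 + ||a||^2 N^2
assert abs(M2 - (d * N ** 4 / 4 - norm1 * N ** 3 + norm2sq * N ** 2)) < 1e-6

B = A / (d * N ** 2)                             # normalized average B_{a,V}
r2 = sqdist(a, c) / (d * N ** 2)                 # r_a^2 = d_d^2(a, c)
err = abs(B - (0.25 + r2))                       # the identity (eqPythagoras)
assert err < 1e-12
```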
On combining Theorem [Theorem 5](#ThVIsosceles){reference-type="ref" reference="ThVIsosceles"}, [\[eqDistanceCV\]](#eqDistanceCV){reference-type="eqref" reference="eqDistanceCV"}, and the observations from Remark [Remark 1](#Remark1){reference-type="ref" reference="Remark1"}, we see that almost all triangles in $\mathcal T_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V^2}$ have one side length close to $1/\sqrt{2}$, while the other two are both close to $\sqrt{1/4+r_{\ifmmode\bm{a}\else\textbf{a}\fi}^2}$. Since, by normalization, we know that $0\le r_{\ifmmode\bm{a}\else\textbf{a}\fi} \le 1/2$, it follows that $$\label{eqBoundra} \mbox{\small$\displaystyle\frac{1}{2}$} \le \sqrt{\mbox{\small$\displaystyle\frac{1}{4}$}+r_{\ifmmode\bm{a}\else\textbf{a}\fi}^2} \le \mbox{\small$\displaystyle\frac{1}{\sqrt{2}}$}.$$ In particular, if $r_{\ifmmode\bm{a}\else\textbf{a}\fi}=1/2$, which occurs when $\ifmmode\bm{a}\else\textbf{a}\fi$ is a vertex, we see that almost all triangles have each of their side lengths close to $1/\sqrt{2}$. In other words, almost all triangles formed by three vertices of the hypercube are 'almost equilateral'. On the other hand, if $r_{\ifmmode\bm{a}\else\textbf{a}\fi}=0$, which occurs when $\ifmmode\bm{a}\else\textbf{a}\fi=\ifmmode\bm{c}\else\textbf{c}\fi$ is at the center of the hypercube, we see that almost all triangles have side lengths close to $1/\sqrt{2}$, $1/2,$ and $1/2$, respectively, that is, they are almost isosceles with an almost right angle at $\ifmmode\bm{c}\else\textbf{c}\fi$.
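The two limiting cases just described can be observed in a short simulation (an illustration only); the dimension $d=400$ and the tolerance $0.05$ below are demonstrative choices rather than quantities taken from the theorems.

```python
import math
import random

# Simulation of the two limiting cases above (illustration only): for a a
# vertex (r_a = 1/2), almost equilateral triangles; for a = c (r_a = 0),
# sides close to 1/2, 1/2, 1/sqrt(2).  The dimension d and the tolerance
# 0.05 are illustrative choices.
random.seed(2)
d, N = 400, 1
c = [N / 2] * d

def ndist(p, q):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(p, q))) / (math.sqrt(d) * N)

def rand_vertex():
    return [random.choice((0, N)) for _ in range(d)]

a, tol, trials = rand_vertex(), 0.05, 1000
eq_hits = rt_hits = 0
for _ in range(trials):
    v1, v2 = rand_vertex(), rand_vertex()
    # a is a vertex: all three sides should be near 1/sqrt(2)
    sides = (ndist(a, v1), ndist(a, v2), ndist(v1, v2))
    eq_hits += all(abs(s - 1 / math.sqrt(2)) <= tol for s in sides)
    # a = c: the two legs equal 1/2 exactly, the third side is near 1/sqrt(2)
    rt_hits += (abs(ndist(c, v1) - 0.5) < 1e-12 and abs(ndist(c, v2) - 0.5) < 1e-12
                and abs(ndist(v1, v2) - 1 / math.sqrt(2)) <= tol)
eq_frac, rt_frac = eq_hits / trials, rt_hits / trials
assert eq_frac >= 0.9 and rt_frac >= 0.9
```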
Making this more explicit, we argue similarly as in [\[eqv1v2\]](#eqv1v2){reference-type="eqref" reference="eqv1v2"} to find the proportion of non-degenerate triangles $(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v_1}\else\textbf{v_1}\fi,\ifmmode\bm{v_2}\else\textbf{v_2}\fi)$ such that $$\label{eqCommonTriangleConditions} \begin{split} \mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v_1}\else\textbf{v_1}\fi), \mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v_2}\else\textbf{v_2}\fi) &\in \left[ \sqrt{B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}}-\mbox{\small$\displaystyle\frac{1}{d^\eta}$}, \sqrt{B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}}+\mbox{\small$\displaystyle\frac{1}{d^\eta}$} \right], \text{ and } \\ \mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{v_1}\else\textbf{v_1}\fi,\ifmmode\bm{v_2}\else\textbf{v_2}\fi) &\in \left[ \mbox{\small$\displaystyle\frac{1}{\sqrt{2}}$}-\mbox{\small$\displaystyle\frac{1}{d^\eta}$}, \mbox{\small$\displaystyle\frac{1}{\sqrt{2}}$}+\mbox{\small$\displaystyle\frac{1}{d^\eta}$} \right]. 
\end{split}$$ Firstly, for any $0<\eta<1/2$, from Theorem [Theorem 4](#TheoremV1){reference-type="ref" reference="TheoremV1"}, we know that for any vertex $\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V$, the proportion of vertices $\ifmmode\bm{v_1}\else\textbf{v_1}\fi\in\mathcal V$ such that $$\begin{split} \mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v_1}\else\textbf{v_1}\fi) \not\in \left[ \sqrt{B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}}-\mbox{\small$\displaystyle\frac{1}{d^\eta}$}, \sqrt{B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}}+\mbox{\small$\displaystyle\frac{1}{d^\eta}$} \right] \text{ or } \mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{v_1}\else\textbf{v_1}\fi,\ifmmode\bm{v}\else\textbf{v}\fi) \not\in \left[ \mbox{\small$\displaystyle\frac{1}{\sqrt{2}}$}-\mbox{\small$\displaystyle\frac{1}{d^\eta}$}, \mbox{\small$\displaystyle\frac{1}{\sqrt{2}}$}+\mbox{\small$\displaystyle\frac{1}{d^\eta}$} \right], \end{split}$$ is bounded above by $$\mbox{\small$\displaystyle\frac{1}{d^{1-2\eta}}$}+\mbox{\small$\displaystyle\frac{1}{d^{1-2\eta}}$} = \mbox{\small$\displaystyle\frac{2}{d^{1-2\eta}}$}.$$ Therefore, since $\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V$ can be taken to be any vertex (in particular, $\ifmmode\bm{v}\else\textbf{v}\fi=\ifmmode\bm{v_1}\else\textbf{v_1}\fi$), the proportion of non-degenerate triangles formed by distinct vertices $(\ifmmode\bm{v_1}\else\textbf{v_1}\fi,\ifmmode\bm{v_2}\else\textbf{v_2}\fi,\ifmmode\bm{a}\else\textbf{a}\fi)$ that satisfy the conditions in [\[eqCommonTriangleConditions\]](#eqCommonTriangleConditions){reference-type="eqref" reference="eqCommonTriangleConditions"} is bounded below by $$\begin{split} &\mbox{\small$\displaystyle\frac{1}{\#\mathcal V}$}\bigg(\#\left\{ \ifmmode\bm{v_1}\else\textbf{v_1}\fi\in\mathcal V\colon \mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v_1}\else\textbf{v_1}\fi) \in \left[ \sqrt{B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}}-\mbox{\small$\displaystyle\frac{1}{d^\eta}$},
\sqrt{B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}}+\mbox{\small$\displaystyle\frac{1}{d^\eta}$} \right] \right\} - 1 \bigg) \\ \times&\mbox{\small$\displaystyle\frac{1}{\#\mathcal V}$}\bigg(\#\bigg\{ \ifmmode\bm{v_2}\else\textbf{v_2}\fi\in\mathcal V\colon \mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v_2}\else\textbf{v_2}\fi) \in \left[ \sqrt{B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}}-\mbox{\small$\displaystyle\frac{1}{d^\eta}$}, \sqrt{B_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V}}+\mbox{\small$\displaystyle\frac{1}{d^\eta}$} \right], \text{ and }\\ &\phantom{\qquad\qquad\qquad\qquad\qquad\quad} \mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{v}\else\textbf{v}\fi,\ifmmode\bm{v_2}\else\textbf{v_2}\fi) \in \left[\mbox{\small$\displaystyle\frac{1}{\sqrt{2}}$}-\mbox{\small$\displaystyle\frac{1}{d^\eta}$}, \mbox{\small$\displaystyle\frac{1}{\sqrt{2}}$}+\mbox{\small$\displaystyle\frac{1}{d^\eta}$} \right] \biggr\} - 2 \bigg) \\ &\ge \left( 1-\mbox{\small$\displaystyle\frac{1}{d^{1-2\eta}}$}-\mbox{\small$\displaystyle\frac{1}{2^d}$}\right) \left( 1-\mbox{\small$\displaystyle\frac{2}{d^{1-2\eta}}$}-\mbox{\small$\displaystyle\frac{2}{2^d}$}\right) \\ &\ge 1-\mbox{\small$\displaystyle\frac{3}{d^{1-2\eta}}$}, \end{split}$$ for $d\ge 6$. As a consequence, we now have the following theorem. **Theorem 6**. *Let $\mathcal T_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V^2}$ be the set of triangles with distinct vertices $\ifmmode\bm{a}\else\textbf{a}\fi$ and $\ifmmode\bm{v}\else\textbf{v}\fi_1,\ifmmode\bm{v}\else\textbf{v}\fi_2\in\mathcal V$. Let $\eta\in(0,1/2)$ be fixed. 
Then, for any integers $d\ge6$, $N\ge1$, and any point $\ifmmode\bm{a}\else\textbf{a}\fi\in\mathcal W$, we have $$\mbox{\small$\displaystyle\frac{1}{\#\mathcal T_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V^2}}$} \#\left\{ (\ifmmode\bm{v_1}\else\textbf{v_1}\fi,\ifmmode\bm{v_2}\else\textbf{v_2}\fi)\in\mathcal T_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal V^2} : \ifmmode\bm{a}\else\textbf{a}\fi, \ifmmode\bm{v_1}\else\textbf{v_1}\fi, \ifmmode\bm{v_2}\else\textbf{v_2}\fi\textup{ satisfy \eqref{eqCommonTriangleConditions}} \right\} \ge 1-\mbox{\small$\displaystyle\frac{3}{d^{1-2\eta}}$}.$$* ## Triangles formed by the center, any fixed point, and a vertex of the cube Counting triangles with one vertex in $\ifmmode\bm{c}\else\textbf{c}\fi$ and another in $\mathcal V$, and then sorting by the angle at the center, can be done by a somewhat long-winded combinatorial computation, but here we can take a shortcut using the effective result from Section [3](#SectionAVAV){reference-type="ref" reference="SectionAVAV"}. By Theorem [Theorem 4](#TheoremV1){reference-type="ref" reference="TheoremV1"} and [\[eqPythagoras\]](#eqPythagoras){reference-type="eqref" reference="eqPythagoras"}, we know that if $\ifmmode\bm{a}\else\textbf{a}\fi\in\mathcal W$ is fixed, then for any $0<\eta<1/2$, for almost all vertices $\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V$, we have $$\mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v}\else\textbf{v}\fi) = \sqrt{\mathop{\mathrm{\mathfrak{d}}}_d^2(\ifmmode\bm{c}\else\textbf{c}\fi,\ifmmode\bm{v}\else\textbf{v}\fi) + \mathop{\mathrm{\mathfrak{d}}}_d^2(\ifmmode\bm{c}\else\textbf{c}\fi,\ifmmode\bm{a}\else\textbf{a}\fi)} + E(d^{-\eta}),$$ where $\left\vert E(x) \right\vert\le x$. This suggests that the triangle with vertices $\ifmmode\bm{c}\else\textbf{c}\fi,\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v}\else\textbf{v}\fi$ may be close to a right triangle.
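A small experiment supports this suggestion: for fixed $\ifmmode\bm{a}\else\textbf{a}\fi\ne\ifmmode\bm{c}\else\textbf{c}\fi$ and a random vertex $\ifmmode\bm{v}\else\textbf{v}\fi$, the cosine of the angle at $\ifmmode\bm{c}\else\textbf{c}\fi$ is a normalized sum of $d$ independent signs, hence typically of size about $1/\sqrt{d}$. The threshold $3/\sqrt{d}$ below is a demonstrative choice, not a bound derived in the text.

```python
import math
import random

# Empirical illustration: for fixed a != c, the cosine of the angle at c
# between the segments from c to a and from c to a random vertex v is
# typically of size about 1/sqrt(d).  The threshold 3/sqrt(d) is a
# demonstrative choice.
random.seed(3)
d, N = 400, 2
c = [N / 2] * d
a = [random.randint(0, N) for _ in range(d)]      # a point of W (here a != c)

def cos_at_center(v):
    ac = [x - y for x, y in zip(a, c)]
    vc = [x - y for x, y in zip(v, c)]
    dot = sum(x * y for x, y in zip(ac, vc))
    return dot / (math.sqrt(sum(x * x for x in ac)) * math.sqrt(sum(x * x for x in vc)))

trials = 2000
hits = sum(
    abs(cos_at_center([random.choice((0, N)) for _ in range(d)])) <= 3 / math.sqrt(d)
    for _ in range(trials)
)
frac = hits / trials
assert frac >= 0.9
```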
Letting $\theta$ denote the angle at $\ifmmode\bm{c}\else\textbf{c}\fi$, using the bound [\[eqBoundra\]](#eqBoundra){reference-type="eqref" reference="eqBoundra"}, we see that $$\begin{split} \left\vert \cos(\theta) \right\vert &= \left\vert \mbox{\small$\displaystyle\frac{1}{4}$}+r_{\ifmmode\bm{a}\else\textbf{a}\fi}^2-\left( \sqrt{\mbox{\small$\displaystyle\frac{1}{4}$}+r_{\ifmmode\bm{a}\else\textbf{a}\fi}^2} + E(d^{-\eta}) \right)^2 \right\vert \Big(2\cdot\mbox{\small$\displaystyle\frac{1}{2}$}\cdot r_{\ifmmode\bm{a}\else\textbf{a}\fi}\Big)^{-1} \\ &\le \left(\mbox{\small$\displaystyle\frac{2}{\sqrt{2}}$}\left\vert E(d^{-\eta}) \right\vert +\left\vert E(d^{-\eta}) \right\vert^2\right)r_{\ifmmode\bm{a}\else\textbf{a}\fi}^{-1} \\ &\le (\sqrt{2}+1)d^{-\eta}r_{\ifmmode\bm{a}\else\textbf{a}\fi}^{-1}. \end{split}$$ Picking any $\ifmmode\bm{a}\else\textbf{a}\fi\in\mathcal W$ such that $r_{\ifmmode\bm{a}\else\textbf{a}\fi}\ge d^{\gamma-\eta}$ for some $\gamma>0$, we find that $$\left\vert \cos(\theta) \right\vert \le \big(\sqrt{2}+1\big)d^{-\gamma}.$$ Bearing in mind that we have an upper bound $r_{\ifmmode\bm{a}\else\textbf{a}\fi}\le 1/2$, one should avoid picking $\gamma$ too large or $d$ too small with respect to $\eta$, otherwise no such $\ifmmode\bm{a}\else\textbf{a}\fi$ would exist. As a consequence we have obtained the following result. **Theorem 7**. *Let $\eta\in(0,1/2)$ be fixed. Suppose $d>2^{1/\eta}$ is an integer and let $\gamma\in\big(0,\,\eta-\frac{\log 2}{\log d}\big]$. Further, for any integer $N\geq1$, with $\ifmmode\bm{c}\else\textbf{c}\fi=\big(\frac{N}{2},\dots,\frac{N}{2}\big)$ denoting the center of the hypercube, fix any point $\ifmmode\bm{a}\else\textbf{a}\fi\in\mathcal W$ such that $\mathop{\mathrm{\mathfrak{d}}}_d(\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{c}\else\textbf{c}\fi)\geq d^{\gamma-\eta}$. 
Then, $$\frac{1}{\#\mathcal V}\#\left\{ \ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V: \left\vert \cos(\theta_{\ifmmode\bm{v}\else\textbf{v}\fi}) \right\vert \leq \big(\sqrt{2}+1\big)d^{-\gamma} \right\} \ge 1-\mbox{\small$\displaystyle\frac{1}{d^{1-2\eta}}$},$$ where, for any vertex $\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal V$, $\theta_{\ifmmode\bm{v}\else\textbf{v}\fi}$ is the angle between the lines going from $\ifmmode\bm{c}\else\textbf{c}\fi$ to $\ifmmode\bm{a}\else\textbf{a}\fi$ and to $\ifmmode\bm{v}\else\textbf{v}\fi$, respectively.* In plain words, Theorem [Theorem 7](#ThTriangleacv){reference-type="ref" reference="ThTriangleacv"} says that as long as $\ifmmode\bm{a}\else\textbf{a}\fi$ is not too close to the center of the cube, almost all triangles formed by $\ifmmode\bm{a}\else\textbf{a}\fi$, $\ifmmode\bm{c}\else\textbf{c}\fi$, and a vertex of the cube are almost right triangles. # The spacings between a fixed point and the lattice points in the hypercube {#SectionAMW} In this section we first calculate the mean squared distance from a fixed point to all the lattice points in $\mathcal W$. Afterwards, we use the result to find the second moment of these squared distances about their mean. In terms of dimension, this is the opposite extreme of the problem treated before: here, the whole hypercube of lattice points plays the role previously played by the vertices. ## The average $A_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal W}(d,N)$ Let $\ifmmode\bm{a}\else\textbf{a}\fi=(a_1,\dots,a_d)\in\mathbb{R}^d$ be fixed and denote $$\begin{split} A_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal W}(d,N):=\frac{1}{\#\mathcal W}\sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal W} \mathop{\mathrm{\mathfrak{d}}}^2 (\ifmmode\bm{a}\else\textbf{a}\fi,\ifmmode\bm{v}\else\textbf{v}\fi)\,.
\end{split}$$ Using the definitions and rearranging the terms, we find that $$\label{eqAW1} \begin{split} A_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal W}(d,N)&=\frac{1}{\#\mathcal W}\sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal W} \sum_{j=1}^d (v_j-a_j)^2\\ &=\frac{1}{\#\mathcal W}\sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal W}\sum_{j=1}^dv_j^2 -\frac{2}{\#\mathcal W} \sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal W} \sum_{j=1}^d a_j v_j +\left|\hspace*{2.5pt} \!\!\left| \ifmmode\bm{a}\else\textbf{a}\fi\right|\hspace*{2.5pt} \!\!\right|^2. \end{split}$$ Here, changing the order of summation, the sum of the squares is $$\label{eqAW1a} \begin{split} \sum_{j=1}^d\sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal W}v_j^2 = d\sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal W}v_1^2 =d(N+1)^{d-1}\sum_{v=0}^{N}v^2 % =d(N+1)^{d-1}\frac{N(N+1)(2N+1)}{6}. =\frac{dN(N+1)^d(2N+1)}{6}. \end{split}$$ In the same way, the mixed sum in [\[eqAW1\]](#eqAW1){reference-type="eqref" reference="eqAW1"} can be written as $$\label{eqAW1b} \begin{split} \sum_{j=1}^d\sum_{\ifmmode\bm{v}\else\textbf{v}\fi\in\mathcal W}a_jv_j =\pmb{\left\vert\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right.}\ifmmode\bm{a}\else\textbf{a}\fi\pmb{\left.\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right\vert}(N+1)^{d-1}\sum_{v=0}^{N}v % =\normu{\ba}(N+1)^{d-1}\frac{N(N+1)}{2}. =\pmb{\left\vert\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right.}\ifmmode\bm{a}\else\textbf{a}\fi\pmb{\left.\vphantom{\ifmmode\bm{a}\else\textbf{a}\fi}\right\vert}\frac{N(N+1)^d}{2}. \end{split}$$ On inserting the results [\[eqAW1a\]](#eqAW1a){reference-type="eqref" reference="eqAW1a"} and [\[eqAW1b\]](#eqAW1b){reference-type="eqref" reference="eqAW1b"} in [\[eqAW1\]](#eqAW1){reference-type="eqref" reference="eqAW1"} we find a closed form expression for $A_{\ifmmode\bm{a}\else\textbf{a}\fi,\mathcal W}(d,N)$, which we state in the next lemma. **Lemma 3**. 
*Let $d,N \ge 1$ be integers, and let $\bm{a}\in\mathbb{R}^d$ be fixed. Let $\mathcal W$ be the set of lattice points in $[0,N]^d$. Then the average of the squares of the distances from $\bm{a}$ to the points of $\mathcal W$ is $$\label{eqAW2} A_{\bm{a},\mathcal W}(d,N) =\frac{1}{\#\mathcal W}\sum_{\bm{v}\in\mathcal W} \mathfrak{d}^2 (\bm{a},\bm{v}) =\frac{dN(2N+1)}{6}-\lvert\bm{a}\rvert N+\lVert\bm{a}\rVert^2.$$*

## The second moment about the mean

The second moment of the squared distances about their mean $A_{\bm{a},\mathcal W}=A_{\bm{a},\mathcal W}(d,N)$, taken over the whole hypercube, is defined by $$\mathfrak M_{2; \bm{a},\mathcal W}(d,N) := \frac{1}{\#\mathcal W}\sum_{\bm{v}\in\mathcal W} \left(\mathfrak{d}^2(\bm{v},\bm{a})-A_{\bm{a},\mathcal W}\right)^2.$$ Expanding the square and rearranging the terms of the summation, we may rewrite $\mathfrak M_{2; \bm{a},\mathcal W}$ as $$\label{eqM2aW1} \begin{split} \mathfrak M_{2; \bm{a},\mathcal W}(d,N) &=\frac{1}{(N+1)^d}\sum_{\bm{v}\in\mathcal W} \left(\mathfrak{d}^4(\bm{v},\bm{a}) -2\mathfrak{d}^2(\bm{v},\bm{a})A_{\bm{a},\mathcal W}+A_{\bm{a},\mathcal W}^2\right)\\ &=\frac{1}{(N+1)^d}\bigg( \sum_{\bm{v}\in\mathcal W}\mathfrak{d}^4(\bm{v},\bm{a}) -2A_{\bm{a},\mathcal W}\sum_{\bm{v}\in\mathcal W}\mathfrak{d}^2(\bm{v},\bm{a}) +\sum_{\bm{v}\in\mathcal W}A_{\bm{a},\mathcal W}^2\bigg)\\ &=\frac{1}{(N+1)^d}\cdot\Sigma_{\bm{a},\mathcal W}-A_{\bm{a},\mathcal W}^2. \end{split}$$ Here the terms collected in $\Sigma_{\bm{a},\mathcal W}$ are the analogues of those from relation [\[eqM2aV11\]](#eqM2aV11){reference-type="eqref" reference="eqM2aV11"}, so that their sum is $$\label{eqM2aW17} \Sigma_{\bm{a},\mathcal W} =\sum_{\bm{v}\in\mathcal W}\mathfrak{d}^4(\bm{v},\bm{a}) = \sum_{\bm{v}\in\mathcal W} \sum_{m=1}^d \sum_{n=1}^d h(v_m,v_n, a_m,a_n),$$ where $h(v_m,v_n, a_m,a_n)=(v_m-a_m)^2(v_n-a_n)^2$ expands into the same sum of nine monomials as in [\[eqM2aV\]](#eqM2aV){reference-type="eqref" reference="eqM2aV"}. Next we calculate the contribution of each of the nine types of terms to the total sum. In the process, we change the order of summation and take care of whether or not a term lies on the diagonal (that is, whether $m=n$). We denote by $T_k(N)$ the sum of the $k$-th powers of the first $N$ positive integers, that is, $T_k(N)=1^k+2^k+\cdots +N^k$.
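Before evaluating the nine sums, the closed form [\[eqAW2\]](#eqAW2){reference-type="eqref" reference="eqAW2"} for the average can be sanity-checked by brute force over small hypercubes; a minimal sketch (the helper names are ours, not from the paper), with $\lvert\bm{a}\rvert$ the coordinate sum and $\lVert\bm{a}\rVert$ the Euclidean norm:

```python
import itertools

def avg_sq_dist(d, N, a):
    """Brute-force average of squared distances from a to the lattice points of [0, N]^d."""
    pts = itertools.product(range(N + 1), repeat=d)
    total = sum(sum((v - x) ** 2 for v, x in zip(p, a)) for p in pts)
    return total / (N + 1) ** d

def avg_sq_dist_closed(d, N, a):
    """Closed form d*N*(2N+1)/6 - |a|*N + ||a||^2 from the lemma above."""
    return d * N * (2 * N + 1) / 6 - sum(a) * N + sum(x * x for x in a)

for d, N, a in [(1, 1, (0.5,)), (2, 3, (0.5, 1.25)), (3, 4, (1.0, -0.5, 2.0))]:
    assert abs(avg_sq_dist(d, N, a) - avg_sq_dist_closed(d, N, a)) < 1e-9
```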
Thus, we obtain: $$\label{eqM2aW99a} \begin{split} S_1(\bm{a},\mathcal W)&=\sum_{\bm{v}\in\mathcal W} \sum_{m=1}^d \sum_{n=1}^d v_m^2v_n^2 = d(N+1)^{d-1}T_4(N) +(d^2-d)(N+1)^{d-2}T_2^2(N);\\ S_2(\bm{a},\mathcal W)&= \sum_{\bm{v}\in\mathcal W} \sum_{m=1}^d \sum_{n=1}^dv_m^2 v_n a_n = \lvert\bm{a}\rvert(N+1)^{d-1}T_3(N) + \lvert\bm{a}\rvert(d-1)(N+1)^{d-2}T_1(N)T_2(N);\\ S_3(\bm{a},\mathcal W)&=\sum_{\bm{v}\in\mathcal W} \sum_{m=1}^d \sum_{n=1}^d v_m^2a_n^2 = \lVert\bm{a}\rVert^2 d (N+1)^{d-1}T_2(N); \end{split}$$ then $$\label{eqM2aW99b} \begin{split} S_4(\bm{a},\mathcal W)&=S_2(\bm{a},\mathcal W)= \sum_{\bm{v}\in\mathcal W} \sum_{m=1}^d \sum_{n=1}^dv_m a_m v_n^2 = \lvert\bm{a}\rvert(N+1)^{d-1}T_3(N) + \lvert\bm{a}\rvert(d-1)(N+1)^{d-2}T_1(N)T_2(N);\\ S_5(\bm{a},\mathcal W)&= \sum_{\bm{v}\in\mathcal W} \sum_{m=1}^d \sum_{n=1}^dv_m a_m v_n a_n = \lVert\bm{a}\rVert^2(N+1)^{d-1}T_2(N)+\left(\lvert\bm{a}\rvert^2-\lVert\bm{a}\rVert^2\right)(N+1)^{d-2}T_1^2(N);\\ S_6(\bm{a},\mathcal W)&= \sum_{\bm{v}\in\mathcal W} \sum_{m=1}^d \sum_{n=1}^dv_m a_m a_n^2 = \lvert\bm{a}\rvert\lVert\bm{a}\rVert^2 (N+1)^{d-1}T_1(N); \end{split}$$ and $$\label{eqM2aW99c} \begin{split} S_7(\bm{a},\mathcal W)&= S_3(\bm{a},\mathcal W)=\sum_{\bm{v}\in\mathcal W} \sum_{m=1}^d \sum_{n=1}^da_m^2 v_n^2 = \lVert\bm{a}\rVert^2d(N+1)^{d-1}T_2(N);\\ S_8(\bm{a},\mathcal W)&= S_6(\bm{a},\mathcal W)= \sum_{\bm{v}\in\mathcal W} \sum_{m=1}^d \sum_{n=1}^d a_m^2 v_n a_n = \lvert\bm{a}\rvert\lVert\bm{a}\rVert^2(N+1)^{d-1}T_1(N);\\ S_9(\bm{a},\mathcal W)&= \sum_{\bm{v}\in\mathcal W} \sum_{m=1}^d \sum_{n=1}^d a_m^2 a_n^2 = \lVert\bm{a}\rVert^4(N+1)^d. \end{split}$$ On combining [\[eqM2aW99a\]](#eqM2aW99a){reference-type="eqref" reference="eqM2aW99a"}, [\[eqM2aW99b\]](#eqM2aW99b){reference-type="eqref" reference="eqM2aW99b"}, [\[eqM2aW99c\]](#eqM2aW99c){reference-type="eqref" reference="eqM2aW99c"}, and [\[eqM2aV\]](#eqM2aV){reference-type="eqref" reference="eqM2aV"}, we find that $\Sigma_{\bm{a},\mathcal W}$ from [\[eqM2aW17\]](#eqM2aW17){reference-type="eqref" reference="eqM2aW17"} equals $$\label{eqsigmaaWTot} \begin{split} \Sigma_{\bm{a},\mathcal W} =& \big(S_1(\bm{a},\mathcal W)-2S_2(\bm{a},\mathcal W)+S_3(\bm{a},\mathcal W)\big) -\big( 2S_4(\bm{a},\mathcal W) -4S_5(\bm{a},\mathcal W)+2S_6(\bm{a},\mathcal W)\big) \\ &+ \big(S_7(\bm{a},\mathcal W)-2S_8(\bm{a},\mathcal W)+S_9(\bm{a},\mathcal W)\big)\\ =& (N+1)^d\biggl( \left(\tfrac{1}{9}d^2+\tfrac{4}{45}d \right)N^4 +\left( \tfrac{1}{9}d^2 + \tfrac{17}{90}d - \left( \tfrac{2}{3}d + \tfrac{1}{3} \right) \lvert\bm{a}\rvert \right)N^3 \\ &\quad + \left( \tfrac{1}{36}d^2 + \tfrac{1}{180}d - \left( \tfrac{1}{3}d + \tfrac{2}{3} - \lvert\bm{a}\rvert \right)\lvert\bm{a}\rvert + \left( \tfrac{2}{3}d + \tfrac{1}{3} \right)\lVert\bm{a}\rVert^2\right)N^2 \\ &\quad + \left( -\tfrac{1}{30}d + \left(\tfrac{1}{3}d + \tfrac{2}{3} - 2\lvert\bm{a}\rvert \right)\lVert\bm{a}\rVert^2\right)N + \lVert\bm{a}\rVert^4 \biggr). \end{split}$$ Finally, inserting the results from [\[eqsigmaaWTot\]](#eqsigmaaWTot){reference-type="eqref" reference="eqsigmaaWTot"} and [\[eqAW2\]](#eqAW2){reference-type="eqref" reference="eqAW2"} into [\[eqM2aW1\]](#eqM2aW1){reference-type="eqref" reference="eqM2aW1"}, we obtain the needed formula for $\mathfrak M_{2;\bm{a},\mathcal W}(d,N)$.

**Lemma 4**. *Let $d,N \ge 1$ be integers, and let $A_{\bm{a},\mathcal W}(d,N)$ be the average of the squares of the distances from a fixed point $\bm{a}$ to the points of the hypercube $\mathcal W$. Then the second moment about $A_{\bm{a},\mathcal W}(d,N)$ is $$\begin{split} \mathfrak M_{2;\bm{a},\mathcal W}(d,N) &=\frac{1}{(N+1)^d}\sum_{\bm{v}\in\mathcal W} \left(\mathfrak{d}^4(\bm{v},\bm{a}) -2\mathfrak{d}^2(\bm{v},\bm{a})A_{\bm{a},\mathcal W}+A_{\bm{a},\mathcal W}^2\right)\\ &= \tfrac{4}{45}dN^4 + \left(\tfrac{17}{90}d - \tfrac{1}{3}\lvert\bm{a}\rvert\right)N^3 + \left(\tfrac{1}{180}d - \tfrac{2}{3}\lvert\bm{a}\rvert + \tfrac{1}{3}\lVert\bm{a}\rVert^2\right)N^2 + \left(-\tfrac{1}{30}d + \tfrac{2}{3}\lVert\bm{a}\rVert^2\right)N. \end{split}$$*

# The chance to find points in $\mathcal W$ that are at some uncommon spacing from each other {#SectionSpacingsW}

The formulas for the average and the second moment obtained in Section [5](#SectionAMW){reference-type="ref" reference="SectionAMW"} allow us to estimate the chance of finding points in $\mathcal W$ that are situated at an uncommon (away from the average) spacing from each other.
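The closed form in Lemma 4 can likewise be verified against a direct enumeration; a minimal brute-force sketch (the helper names are ours):

```python
import itertools

def second_moment(d, N, a):
    """Brute-force second moment of the squared distances about their mean."""
    pts = list(itertools.product(range(N + 1), repeat=d))
    sq = [sum((v - x) ** 2 for v, x in zip(p, a)) for p in pts]
    A = sum(sq) / len(sq)
    return sum((s - A) ** 2 for s in sq) / len(sq)

def second_moment_closed(d, N, a):
    """Closed form from Lemma 4, with |a| = sum(a_i) and ||a||^2 = sum(a_i^2)."""
    u = sum(a)                      # |a|, the coordinate sum
    n2 = sum(x * x for x in a)      # ||a||^2
    return (4 / 45 * d * N ** 4 + (17 / 90 * d - u / 3) * N ** 3
            + (d / 180 - 2 * u / 3 + n2 / 3) * N ** 2 + (-d / 30 + 2 * n2 / 3) * N)

for d, N, a in [(1, 1, (0.5,)), (2, 3, (0.25, 1.5)), (3, 2, (1.0, 0.0, -0.5))]:
    assert abs(second_moment(d, N, a) - second_moment_closed(d, N, a)) < 1e-9
```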
It turns out that, as the dimension $d$ gets larger, the probability of selecting at random two points of $\mathcal W$ whose distance is appreciably smaller or larger than the average becomes smaller and smaller, tending to zero as $d\to\infty$. Following the same argument used in the proof of Theorem [Theorem 4](#TheoremV1){reference-type="ref" reference="TheoremV1"}, we obtain the following result, which shows that for any fixed $\bm{a}\in\mathbb{R}^d$, almost all normalized distances from $\bm{a}$ to the points of $\mathcal W$ are close to $\sqrt{A_{\bm{a},\mathcal W}/(dN^2)}$.

**Theorem 8**. *Let $\eta\in (0,1/2)$ be fixed. Let $B_{\bm{a},\mathcal W}=A_{\bm{a},\mathcal W}/(dN^2)$ denote the normalized average of the square distances from $\bm{a}$ to the points of $\mathcal W$. Then, for any integers $d\ge 2$, $N\ge 1$, and any point $\bm{a}\in\mathbb{R}^d$, we have $$\frac{1}{\#\mathcal W} \#\left\{\bm{v}\in \mathcal W: \mathfrak{d}_d(\bm{a},\bm{v})\in \left[\sqrt{B_{\bm{a},\mathcal W}}-\frac{1}{d^\eta},\ \sqrt{B_{\bm{a},\mathcal W}}+\frac{1}{d^{\eta}} \right] \right\} \ge 1- \frac{51}{15}\cdot\frac{1}{d^{1-2\eta}}.$$*

We can continue our quest by looking for triplets of points in $\mathcal W$.
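The concentration bound of Theorem 8 can be probed empirically in a modest dimension; the sketch below (the function name is ours) takes $d=12$, $N=1$, $\eta=1/4$, for which the right-hand side of the bound is already positive, so the assertion is non-trivial:

```python
import itertools, math

def theorem8_check(d, N, a, eta):
    """Fraction of points of W whose normalized distance to a lies in the
    Theorem 8 window, together with the theorem's lower bound."""
    W = list(itertools.product(range(N + 1), repeat=d))
    A = sum(sum((v - x) ** 2 for v, x in zip(p, a)) for p in W) / len(W)
    B = A / (d * N ** 2)
    lo, hi = math.sqrt(B) - d ** (-eta), math.sqrt(B) + d ** (-eta)
    inside = sum(
        lo <= math.sqrt(sum((v - x) ** 2 for v, x in zip(p, a))) / (math.sqrt(d) * N) <= hi
        for p in W
    )
    return inside / len(W), 1 - (51 / 15) / d ** (1 - 2 * eta)

frac, bound = theorem8_check(d=12, N=1, a=(0.0,) * 12, eta=0.25)
assert frac >= bound
```

Here the normalized distance is $\mathfrak{d}_d(\bm{a},\bm{v}) = \bigl(\sum_i (a_i-v_i)^2\bigr)^{1/2}/(\sqrt{d}\,N)$, consistent with $B_{\bm{a},\mathcal W}=A_{\bm{a},\mathcal W}/(dN^2)$.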
In the same way as we proceeded in Section [4](#SectionVTriangles){reference-type="ref" reference="SectionVTriangles"}, we see that almost all pairs of distinct points $(\bm{v}_1,\bm{v}_2)\in\mathcal W^2$ have components situated at a distance close to $\sqrt{B_{\bm{a},\mathcal W}}$ from $\bm{a}$. This means that almost all triangles formed by $\bm{a}$ and two other points in the cube are 'almost isosceles'. We can make the argument explicit, as we did in Theorem [Theorem 5](#ThVIsosceles){reference-type="ref" reference="ThVIsosceles"} for vertices, to find the following analogous result.

**Theorem 9**. *Let $\mathcal T_{\bm{a},\mathcal W^2}\subset\mathcal W^2$ be the set of all pairs of integer points $(\bm{v}_1,\bm{v}_2)$ which form a non-degenerate triangle together with $\bm{a}$. Let $\eta\in(0,1/2)$ be fixed. Then, for any integers $d\geq2$, $N\geq1$, and any point $\bm{a}\in\mathcal W$, we have $$\frac{1}{\#\mathcal T_{\bm{a},\mathcal W^2}} \#\left\{ (\bm{v}_1,\bm{v}_2)\in\mathcal T_{\bm{a},\mathcal W^2} \colon \left\vert \mathfrak{d}_d(\bm{a},\bm{v}_1)-\mathfrak{d}_d(\bm{a},\bm{v}_2) \right\vert\leq\frac{2}{d^{\eta}} \right\} \geq 1-\frac{102}{15}\cdot\frac{1}{d^{1-2\eta}}.$$*

The sides of these triangles can be found explicitly.
To see this, we first use Lemma [Lemma 3](#LemmaAW){reference-type="ref" reference="LemmaAW"} to express the normalized average solely in terms of the distance from the center of the cube to $\bm{a}$. Thus, using [\[eqDistanceCba\]](#eqDistanceCba){reference-type="eqref" reference="eqDistanceCba"}, we have $$\label{eqWAverageSquares} \begin{split} B_{\bm{a},\mathcal W} &= \frac{1}{3}+\frac{1}{6N} - \frac{\lvert\bm{a}\rvert}{dN}+\frac{\lVert\bm{a}\rVert^2}{dN^2} \\ &= \frac{1}{12}+\frac{1}{6N}+\left(\frac{1}{4} - \frac{\lvert\bm{a}\rvert}{dN} +\frac{\lVert\bm{a}\rVert^2}{dN^2}\right) \\ &= \frac{1}{12}+\frac{1}{6N} + \mathfrak{d}_d^2(\bm{a},\bm{c}). \end{split}$$ Employing Theorem [Theorem 8](#TheoremW1){reference-type="ref" reference="TheoremW1"}, the first thing we note is that for almost all points $\bm{v}\in\mathcal W$, the square distance $\mathfrak{d}_d^2(\bm{c},\bm{v})$ is close to $1/12+1/(6N)$. It also follows that, for almost all pairs of points $\bm{v}_1,\bm{v}_2\in\mathcal W$, their mutual distance $\mathfrak{d}_d(\bm{v}_1,\bm{v}_2)$ is close to $\sqrt{B_{\bm{v}_1,\mathcal W}}$, which is itself close to $\sqrt{1/6+1/(3N)}$. Therefore, with our earlier notation $r_{\bm{a}}:=\mathfrak{d}_d(\bm{a},\bm{c})$, we find that almost all triangles $(\bm{a},\bm{v}_1,\bm{v}_2)$ have side lengths close to $\sqrt{1/6+1/(3N)}$, $\sqrt{1/12+1/(6N)+r_{\bm{a}}^2}$, and $\sqrt{1/12+1/(6N)+r_{\bm{a}}^2}$. If $r_{\bm{a}}=0$, which occurs when $\bm{a}$ is the center of the cube, then almost all triangles $(\bm{a},\bm{v}_1,\bm{v}_2)$ are almost right triangles. On the other hand, if $r_{\bm{a}}$ is close to $\sqrt{1/12+1/(6N)}$, then almost all triangles $(\bm{a},\bm{v}_1,\bm{v}_2)$ are almost equilateral.
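The two degenerate regimes just described reduce to one-line algebraic identities for the limiting side lengths, which the following sketch (the function name is ours) checks numerically:

```python
import math

def triangle_sides(N, r):
    """Limiting side lengths of the triangle (a, v1, v2):
    the two equal sides at a, and the base v1-v2."""
    leg = math.sqrt(1 / 12 + 1 / (6 * N) + r ** 2)
    base = math.sqrt(1 / 6 + 1 / (3 * N))
    return leg, leg, base

for N in (1, 5, 100):
    # r = 0 (a at the center): base^2 = 2 * leg^2, an isosceles right triangle.
    leg, _, base = triangle_sides(N, 0.0)
    assert abs(base ** 2 - 2 * leg ** 2) < 1e-12
    # r = sqrt(1/12 + 1/(6N)): all three sides coincide, an equilateral triangle.
    leg, _, base = triangle_sides(N, math.sqrt(1 / 12 + 1 / (6 * N)))
    assert abs(leg - base) < 1e-12
```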
In order to make these remarks explicit, we first use the analogue of [\[eqThmV1SquaresBound\]](#eqThmV1SquaresBound){reference-type="eqref" reference="eqThmV1SquaresBound"} in the proof of Theorem [Theorem 8](#TheoremW1){reference-type="ref" reference="TheoremW1"} to see that $$\frac{1}{\#\mathcal W} \#\left\{ \bm{v}_1\in\mathcal W: \left\vert \mathfrak{d}_d^2(\bm{c},\bm{v}_1)-\left( \frac{1}{12}+\frac{1}{6N} \right) \right\vert \geq \frac{1}{2\sqrt{6}d^{\eta}} \right\} \leq \frac{102}{15}\cdot\frac{1}{d^{1-2\eta}}.$$ Furthermore, if $\bm{v}_1$ is a fixed point such that $$\mathfrak{d}_d^2(\bm{c},\bm{v}_1)\in\left[ \frac{1}{12}+\frac{1}{6N}-\frac{1}{2\sqrt{6}d^{\eta}},\ \frac{1}{12}+\frac{1}{6N}+\frac{1}{2\sqrt{6}d^{\eta}} \right],$$ then $$\begin{split} &\phantom{\leq}\ \frac{1}{\#\mathcal W} \#\left\{ \bm{v}_2\in\mathcal W: \left\vert \mathfrak{d}_d(\bm{v}_1,\bm{v}_2)-\sqrt{\frac{1}{6}+\frac{1}{3N}} \right\vert \geq \frac{1}{d^{\eta}} \right\} \\ &\leq \frac{1}{\#\mathcal W} \#\left\{ \bm{v}_2\in\mathcal W: \left\vert \mathfrak{d}_d^2(\bm{v}_1,\bm{v}_2)-\left( \frac{1}{6}+\frac{1}{3N} \right) \right\vert \geq \frac{1}{\sqrt{6}d^{\eta}} \right\} \\ &\leq \frac{1}{\#\mathcal W} \#\left\{ \bm{v}_2\in\mathcal W: \left\vert \mathfrak{d}_d^2(\bm{v}_1,\bm{v}_2)-B_{\bm{v}_1,\mathcal W} \right\vert \geq \frac{1}{2\sqrt{6}d^{\eta}} \right\} \\ &\leq \frac{102}{15}\cdot\frac{1}{d^{1-2\eta}}. \end{split}$$ Then, we can argue just as we did in the proof of Theorem [Theorem 6](#ThVSimilarTriangles){reference-type="ref" reference="ThVSimilarTriangles"} to find the proportion of non-degenerate triangles $(\bm{a},\bm{v}_1, \bm{v}_2)$ such that $$\label{eqCommonTriangleConditionsW} \begin{split} \mathfrak{d}_d(\bm{a},\bm{v}_1),\ \mathfrak{d}_d(\bm{a},\bm{v}_2) &\in \left[ \sqrt{B_{\bm{a},\mathcal W}}-\frac{1}{d^\eta},\ \sqrt{B_{\bm{a},\mathcal W}}+\frac{1}{d^\eta} \right], \text{ and }\\ \mathfrak{d}_d(\bm{v}_1,\bm{v}_2) &\in \left[ \sqrt{\frac{1}{6}+\frac{1}{3N}}-\frac{1}{d^\eta},\ \sqrt{\frac{1}{6}+\frac{1}{3N}}+\frac{1}{d^\eta} \right], \end{split}$$ and arrive at the following result.

**Theorem 10**. *Let $\mathcal T_{\bm{a},\mathcal W^2}\subset\mathcal W^2$ be the set of all pairs of integer points $(\bm{v}_1,\bm{v}_2)$ which, together with $\bm{a}$, form a non-degenerate triangle. Fix $\eta\in(0,1/2)$. Then, for any integers $d\geq2$, $N\geq1$, and any point $\bm{a}\in\mathcal W$, we have $$\frac{1}{\#\mathcal T_{\bm{a},\mathcal W^2}} \#\left\{ (\bm{v}_1,\bm{v}_2)\in\mathcal T_{\bm{a},\mathcal W^2} : \bm{a}, \bm{v}_1, \bm{v}_2\text{ satisfy } \eqref{eqCommonTriangleConditionsW} \right\} \geq 1- \frac{102}{5}\cdot\frac{1}{d^{1-2\eta}}.$$*
--- abstract: | We prove the finiteness of the kernel of the localization map in the Galois cohomology of a connected reductive group over a global field. author: - Dylon Chow bibliography: - biblio.bib title: A Finiteness Theorem in Galois Cohomology --- # Introduction The purpose of this note is to prove: **Theorem 1**. *(Borel-Serre, Borel-Prasad, Nisnevich). Let $G$ be a connected reductive group defined over a global field $F$. Let $V$ denote the set of places of $F$. The fibers of the canonical map $$\lambda \colon H^1(F,G) \to \prod_{v \in V}H^1(F_v,G)$$ are finite.* In more geometric language, this says that given a principal homogeneous space $X$ over $F$ for $G$, the principal homogeneous spaces over $F$ for $G$ which are isomorphic to $X$ over $F_v$ for all $v$ form a finite number of isomorphism classes over $F$. In the case when $F$ is a number field, this is  [@BS64 Theorem 7.1]. By a twisting argument, it suffices to prove that the kernel of $\lambda$ is finite. A proof that the kernel of $\lambda$ is finite is given in  [@Borel1963 Theorem 6.8]. As Platonov and Rapinchuk point out (see p.316 of  [@PR]), there is a fairly standard proof in the case when $G$ is a torus, but the proof of Borel and Serre in the general case is very different and appeals to the reduction theory of adele groups. On p.176 of  [@BP90], Borel and Prasad gave a proof in the function field case for any connected reductive group $G$. They first prove the theorem for semisimple groups in  [@BP89 Theorem B.1]. For arbitrary reductive groups, they use the finiteness of the class number of a semisimple group  [@BP89 Proposition 3.9], the proof of which uses lower bounds on covolumes of arithmetic subgroups. In this note, we give a totally different proof of the theorem. We take as given the case when $G$ is a torus. We prove the theorem in two steps. The first step is the case when the derived group of $G$ is simply connected. 
Then we use a $z$-extension to prove the theorem in general. # Acknowledgements {#acknowledgements .unnumbered} We would like to thank Mikhail Borovoi, Gopal Prasad and Ramin Takloo-Bighash for helpful comments. # Proof of the Theorem First we record the statement of Theorem [Theorem 1](#main){reference-type="ref" reference="main"} for tori: **Proposition 2**. *Theorem [Theorem 1](#main){reference-type="ref" reference="main"} holds when $G$ is a torus.* *Proof.* This follows from the global Nakayama-Tate duality theorem. See  [@PR], Theorem 6.3 and the discussion preceding it. This reference only states Nakayama-Tate duality for number fields, but the same proof works for function fields. See  [@Tate1966] for a proof of Nakayama-Tate duality. ◻ In the statement below, a local field is a finite extension of either ${\mathbb R}, {\mathbb Q}_p$, or $\mathbb{F}_p((t))$ for a prime $p$. **Proposition 3**. 1. *If $H$ is a connected reductive group over a local field $k$, then $H^1(k,H)$ is finite.* 2. *If $H$ is a semisimple simply connected group over a nonarchimedean local field $k$, then $H^1(k,H)=1$.* 3. *Let $H$ be a semisimple, simply connected algebraic group over a global field $k$. The canonical map $$H^1(k,H) \to \prod_v H^1(k_v,H)$$ is injective.* *Proof.* 1. In characteristic $0$, this is proven in  [@BS64]. For a textbook reference, see  [@PR Theorem 6.14]. In characteristic $p>0$, see  [@Serre79 III.4.3, Remark 2]. 2. This is due to Bruhat-Tits. See  [@BT1987 Thm. in section 4.7]. 3. In the number field case, this was proved in  [@harder1966] except for the case when $G$ has a factor of type $E_8$, which was proved in  [@chernousov1989]. The function field case is proved in  [@Harder1975] - in fact, in this case, $H^1(k,H)=0$.  ◻ Now we proceed to the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"}. The first step is to handle the case when the derived group of $G$ is simply connected. **Proposition 4**. 
*If the derived group of $G$ is simply connected, then Theorem 1 holds.* *Proof.* We consider the short exact sequence $$1 \to G_{\text{der}} \to G \to D \to 1,$$ where $D$ denotes the quotient of $G$ by its derived group $G_{\text{der}}$. This gives us a commutative diagram with exact rows $$\begin{array}{ccccc} H^1(F,G_{\text{der}}) & \stackrel{\rho}{\longrightarrow} & H^1(F,G) & \stackrel{\sigma}{\longrightarrow} & H^1(F,D) \\ \downarrow{\scriptstyle\alpha} & & \downarrow{\scriptstyle\beta} & & \downarrow{\scriptstyle\gamma} \\ \displaystyle\prod_{v}H^1(F_v,G_{\text{der}}) & \longrightarrow & \displaystyle\prod_{v}H^1(F_v,G) & \longrightarrow & \displaystyle\prod_{v}H^1(F_v,D) \end{array}$$ We want to show that $\text{ker}(\beta)$ is finite. We have $$\text{ker}(\beta) \subset \sigma^{-1}(\text{ker}(\gamma)),$$ and by Proposition [Proposition 2](#tori){reference-type="ref" reference="tori"}, $\text{ker}(\gamma)$ is finite. Proposition [Proposition 3](#wellknown){reference-type="ref" reference="wellknown"} tells us that $H^1(F_v,G_{\text{der}})$ is trivial for all finite places $v$, that $H^1(F_v,G_{\text{der}})$ is finite for all infinite places $v$, and that $\alpha$ is injective. Together, these statements imply that $H^1(F,G_{\text{der}})$ is finite. Thus $\text{ker}(\sigma) = \text{im}(\rho)$ is also finite. Now we use a standard twisting argument (see section I.5.4 of  [@Serre79]). Let $\zeta \in \text{ker}(\gamma)$, and choose a 1-cocycle $z$ that represents a class in $\sigma^{-1}(\zeta)$. By twisting the sequence above by $z$, we obtain a short exact sequence $$1 \to (G_z)_{\text{der}} \to G_z \to D \to 1.$$ This induces a map $H^1(F,G_z) \to H^1(F,D)$ whose kernel is $\sigma^{-1}(\zeta)$ (after identifying $H^1(F,G)$ with $H^1(F,G_z)$). Therefore, for every element $\zeta \in \text{ker}(\gamma)$, $\sigma^{-1}(\zeta)$ is finite, and so $\text{ker}(\beta)$ is finite. ◻ We have proved Theorem [Theorem 1](#main){reference-type="ref" reference="main"} when $G_{\text{der}}$ is simply connected. Now we use $z$-extensions to prove it in general. We choose a finite Galois extension $E/F$ splitting $G$ and an extension $$1 \to Z \to G' \to G \to 1$$ such that $Z$ is an induced torus (i.e., it is obtained by Weil restriction of scalars from a split $E$-torus) and the derived group of $G'$ is simply connected.
Such an extension exists; see, for instance, the discussion in the proof of Lemma A.1 of [@LM2015]. This gives a commutative diagram with exact rows $$\begin{tikzcd} 0 \arrow[r] & H^1(F,G') \arrow[r, "\alpha"] \arrow[d, "\gamma_1"] & H^1(F,G) \arrow[r, "\beta"] \arrow[d, "\gamma_2"] & H^2(F,Z) \arrow[d, "\gamma_3"]\\ 0 \arrow[r] & \prod_v H^1(F_v,G') \arrow[r, "\alpha'"] & \prod_v H^1(F_v,G) \arrow[r] & \prod_v H^2(F_v,Z) \end{tikzcd}$$ The $0$ on the leftmost term in the top row comes from the fact that the $F$-torus $Z$ is induced. Since $Z \times_{\text{Spec}(F)} \text{Spec}(F')$ is also induced over $F'$ for all fields $F' \supset F$, the leftmost term in the bottom row is also $0$. We will use the following lemma: **Lemma 5**. *The homomorphism $H^2(F,Z) \to \prod_v H^2(F_v,Z)$ is injective.* *Proof.* The torus $Z$ is the Weil restriction of some split torus over $E$, say $Z=R_{E/F}(S)$ where $S \cong \mathbb{G}_m^r$. By Shapiro's lemma we have $$H^2(F,Z) \cong H^2(E,(\mathbb{G}_{m,E})^r).$$ Let $v$ be a place of $F$. Then we have (see [@milne2017], p.58) $$\begin{aligned} (R_{E/F}(S))_{F_v} &\cong R_{E \otimes_F F_v}(S_{E \otimes_F F_v}) \\ &\cong \left(\prod_{w \lvert v} R_{E_w/F_v} (\mathbb{G}_m)\right)^r. \end{aligned}$$ Therefore, $$\begin{aligned} H^2(F_v,Z) &\cong \prod_{w \lvert v} H^2(F_v, R_{E_w/F_v} (\mathbb{G}_m)^r) \\ &=\prod_{w \lvert v} H^2(E_w,\mathbb{G}_m)^r. \end{aligned}$$ Thus the map $H^2(F,Z) \to \prod_v H^2(F_v,Z)$ becomes $$(\text{Br}(E))^r\to (\prod_{w \in V_E} \text{Br}(E_w))^r,$$ which is injective by global class field theory (the Albert-Brauer-Hasse-Noether theorem). ◻ Now we return to the proof of the theorem. We want to show that $\text{ker}(\gamma_2)$ is finite. Suppose that $x \in \text{ker}(\gamma_2)$. Then $\gamma_3(\beta(x))=0$. By Lemma [Lemma 5](#global cft){reference-type="ref" reference="global cft"}, $\beta(x)=0$. Thus $x \in \text{im}(\alpha)$, say $\alpha(w)=x$. But then $\alpha'(\gamma_1(w))=0$, and so $\gamma_1(w)=0$. Therefore, $\text{ker}(\gamma_2) \subset \alpha(\text{ker}(\gamma_1))$. Since $\text{ker}(\gamma_1)$ is finite by Proposition [Proposition 4](#derived group){reference-type="ref" reference="derived group"}, $\text{ker}(\gamma_2)$ is also finite.
This completes the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"}.
--- abstract: | In their 1988 paper "Gluing of perverse sheaves and discrete series representations," D. Kazhdan and G. Laumon constructed an abelian category $\mathcal{A}$ associated to a reductive group $G$ over a finite field with the aim of using it to construct discrete series representations of the finite Chevalley group $G(\mathbb{F}_q)$. The well-definedness of their construction depended on their conjecture that this category has finite cohomological dimension. This was disproven by R. Bezrukavnikov and A. Polishchuk in 2001, who found a counterexample in the case $G = SL_3$. In the same paper, Polishchuk then made an alternative conjecture: though this counterexample shows that the Grothendieck group $K_0(\mathcal{A})$ is not spanned by objects of finite projective dimension, he noted that a graded version of $K_0(\mathcal{A})$ can be thought of as a module over Laurent polynomials and conjectured that a certain localization of this module is generated by objects of finite projective dimension, and suggested that this conjecture could lead toward a proof that Kazhdan and Laumon's construction is well-defined. He proved this conjecture in Types $A_1, A_2, A_3$, and $B_2$. In the present paper, we prove Polishchuk's conjecture in full generality, and go on to prove that Kazhdan and Laumon's construction is indeed well-defined, giving a new geometric construction of discrete series representations of $G(\mathbb{F}_q)$. author: - Calder Morton-Ferguson bibliography: - bibl.bib title: | Polishchuk's conjecture\ and\ Kazhdan-Laumon representations --- # Introduction In their 1988 paper [@KL], Kazhdan and Laumon described a gluing construction for perverse sheaves on the basic affine space associated to a semisimple algebraic group $G$ split over a finite field $\mathbb{F}_q$, defining an abelian category $\mathcal{A}$ of "glued perverse sheaves" consisting of certain tuples of perverse sheaves on the basic affine space indexed by the Weyl group.
They aimed to use these categories to provide a new geometric construction of discrete series representations of $G(\mathbb{F}_q)$. Their proposal was to use $\mathcal{A}$ to construct representations as follows. First, they observed that the discrete series representations they sought to construct arise from characters of the non-split tori $T(w)$ of $G$, which are indexed by the Weyl group. For each $w \in W$, they defined a category $\mathcal{A}_{w, \mathbb{F}_q}$ in a way such that $K_0(\mathcal{A}_{w, \mathbb{F}_q})$ carries commuting actions of $G(\mathbb{F}_q)$ and $T(w)$. They expected that the wildly infinite-dimensional representation $K_0(\mathcal{A}_{w, \mathbb{F}_q}) \otimes \mathbb{C}$ of $G(\mathbb{F}_q)$ admits a finite-dimensional quotient whose $T(w)$-isotypic components are the discrete series representations they sought to construct. Following the philosophy of Grothendieck's sheaf-function dictionary, Kazhdan and Laumon knew that the appropriate subspace of $K_0(\mathcal{A}_{w, \mathbb{F}_q}) \otimes \mathbb{C}$ by which one should take the quotient is the kernel of a certain "Grothendieck-Lefschetz pairing" on $K_0(\mathcal{A}_{w, \mathbb{F}_q})$, which is defined in terms of the $\mathrm{Ext}$ groups in the category $\mathcal{A}$. They then made the following conjecture and proved that it implies the well-definedness of their representations. **Conjecture 1** (Kazhdan-Laumon, [@KL]). *The category $\mathcal{A}$ has finite cohomological dimension. In other words, for any two objects $A$ and $B$ of $\mathcal{A}$, there is an $n$ for which $\mathrm{Ext}^i(A, B) = 0$ whenever $i > n$.* More than a decade later, Bezrukavnikov and Polishchuk found a counterexample to this conjecture in the case $G = \mathrm{SL}_3$. **Proposition 2** (Bezrukavnikov-Polishchuk, Appendix to [@P]).
*Conjecture 1 is false.* In [@P], Polishchuk put forward the idea that although Conjecture 1 is false as stated in [@KL], it is not strictly necessary in order to prove the more important assertion that Kazhdan and Laumon's construction of representations is well-defined. He notes that $K_0(\mathcal{A}_{w, \mathbb{F}_q})$ carries the structure of a $\mathbb{Z}[v, v^{-1}]$-module using the formalism of mixed sheaves where $v$ acts by a Tate twist, and then frames Conjecture 1 as the claim that $K_0(\mathcal{A}_{w, \mathbb{F}_q})$ is spanned by objects of finite projective dimension. In this situation, the Grothendieck-Lefschetz pairing defined on $K_0(\mathcal{A}_{w, \mathbb{F}_q}) \otimes \mathbb{C}$ can be thought of as taking polynomial values in $\mathbb{Z}[v, v^{-1}]$ and then specializing at $v = q^{\frac{1}{2}}$, which is one way to see why Conjecture 1 would imply the well-definedness of this pairing and therefore the well-definedness of Kazhdan and Laumon's construction. Although Proposition 2 shows that this is false, he instead proposes that this pairing is still well-defined if one allows it to take values in a certain localization of the ring $\mathbb{Z}[v, v^{-1}]$. Letting $\mathcal{A}_{\mathbb{F}_q} = \mathcal{A}_{e, \mathbb{F}_q}$, i.e. the category of "Weil sheaves" in the Kazhdan-Laumon context, Polishchuk proposes the following more precise conjecture as a first step toward this goal. **Conjecture 3**. *There exists a finite set of polynomials, which are nonzero away from roots of unity, such that the localization of $K_0(\mathcal{A}_{\mathbb{F}_q})$ at the multiplicative set generated by these polynomials is generated by objects of finite projective dimension.* In [@P], Polishchuk develops a framework toward answering this conjecture, resolving it himself in Types $A_1, A_2, A_3,$ and $B_2$. In our first main theorem, we use this framework along with the algebraic understanding of symplectic Fourier transforms provided by [@CMFSymplectic] to prove this conjecture in general. **Theorem 4**. *Conjecture 3 is true.
In particular, the localization of the $\mathbb{Z}[v, v^{-1}]$-module $K_0(\mathcal{A}_{\mathbb{F}_q})$ at the polynomial $$\begin{aligned} p(v) & = \prod_{i=1}^{\ell(w_0)} \left(1 - v^{2i}\right) \end{aligned}$$ is generated by objects of finite projective dimension.* As Polishchuk expected, the resolution of Conjecture 3 brings us very close to showing that Kazhdan and Laumon's construction of discrete series representations is well-defined. The main result of the present paper is that by using the formalism of monodromic perverse sheaves, we can prove a similar theorem which indeed completes the necessary technicalities to carry out Kazhdan and Laumon's construction in general. **Theorem 5**. *For any character sheaf $\mathcal{L}$ of $T$ and element $w \in W$, the localization of the $\mathbb{Z}[v, v^{-1}]$-module $K_0(\mathcal{A}_{w, \mathbb{F}_q}^{\mathcal{L}})$ at $p(v)$ is spanned by classes of objects of finite projective dimension in $\mathcal{A}_{w, \mathbb{F}_q}^{\mathcal{L}}$.* This monodromic approach to Kazhdan and Laumon's construction was already successfully carried out in [@BP], in which Braverman and Polishchuk explain how to carry out a well-defined version of Kazhdan and Laumon's construction in the case where $\mathcal{L}$ corresponds to a *quasi-regular* character. So one can think of the following corollary to Theorem [Theorem 5](#thm:mainthm){reference-type="ref" reference="thm:mainthm"} as a generalization of Braverman and Polishchuk's result to the case of an arbitrary character. **Corollary 6**. *The Kazhdan-Laumon construction proposed in [@KL] is well-defined for monodromic sheaves corresponding to any character.* ## Layout of the paper In Section [2](#sec:prelim){reference-type="ref" reference="sec:prelim"}, we explain some background on Kazhdan-Laumon categories.
In Section [3](#sec:monodrom){reference-type="ref" reference="sec:monodrom"}, we then explain the monodromic setting required to state and discuss the categorical center of the monodromic Hecke category, which will be an important tool in the proof. Then in Section [4](#sec:dg){reference-type="ref" reference="sec:dg"} we use dg formalism to explain why the derived category of the Kazhdan-Laumon category admits an action of this categorical center. This is followed in Section [5](#sec:canonical){reference-type="ref" reference="sec:canonical"} by an explanation of a crucial tool in the study of Kazhdan-Laumon categories proposed by Polishchuk [@P] called the canonical complex. We then complete the proofs of our results: in Section [6](#sec:central){reference-type="ref" reference="sec:central"} we prove Theorem 5, and in Section [7](#sec:final){reference-type="ref" reference="sec:final"} we recall the results of [@CMFSymplectic] and explain how they, combined with the previous setup, allow us to prove Polishchuk's original conjecture and establish Theorem 4 independently of the monodromic setting. Finally, in the last section, we explain how to carry out the construction of Kazhdan-Laumon representations explicitly given our theorems. ## Acknowledgments I am grateful to my advisor, Roman Bezrukavnikov, for introducing me to Kazhdan-Laumon's construction and for continuous feedback and support throughout all stages of the project. I would also like to thank Alexander Polishchuk, Pavel Etingof, Zhiwei Yun, Ben Elias, Minh-Tam Trinh, Elijah Bodish, Alex Karapetyan and Matthew Nicoletti for helpful conversations. During this work, I was supported by an NSERC PGS-D award. # Preliminaries {#sec:prelim} ## Background and notation ### General setup Let $G$ be a split semisimple group over a finite field $\mathbb{F}_q$. Let $T$ be a Cartan subgroup split over $\mathbb{F}_q$, $B$ a Borel subgroup containing $T$, and $U$ its unipotent radical.
Let $X = G/U$ be the basic affine space associated to $G$ considered as a variety over $\mathbb{F}_q$. Let $W$ be the Weyl group. We let $S$ denote the set of simple reflections in $W$. Writing $q = p^m$ for some prime number $p$, we choose $\ell$ to be a prime with $\ell \neq p$. ### $\ell$-adic sheaves, Tate twists, and Grothendieck groups {#sec:ladic} We will work with the category $\mathrm{Perv}(G/U)$ of mixed $\ell$-adic perverse sheaves on the basic affine space $G/U$, and more generally with the constructible derived category $D^b(G/U)$ of mixed $\ell$-adic sheaves on $G/U$. We choose an isomorphism $\overline{\mathbb{Q}}_{\ell} \cong \mathbb{C}$ and work with $\mathbb{C}$ going forward. Pick a square root $q^{\frac{1}{2}}$ of $q$ in $\mathbb{C}$ once and for all, and define the half-integer Tate twist $(\tfrac{1}{2})$ on $D^b(G/U)$. We then view $K_0(G/U) = K_0(D^b(G/U))$ as a $\mathbb{Z}[v, v^{-1}]$-module where $v^{-1}$ acts by $(\tfrac{1}{2})$. When $\mathcal{C}$ is any category for which $K_0(\mathcal{C})$ is a $\mathbb{Z}[v, v^{-1}]$-module, we denote by $K_0(\mathcal{C})\otimes \mathbb{C}$ the specialization $K_0(\mathcal{C}) \otimes_{\mathbb{Z}[v, v^{-1}]} \mathbb{C}$ at $v = q^{\frac{1}{2}}$. We use $\mathbb{D}$ to denote the Verdier duality functor. We let ${}^pH^i$ be the perverse cohomology functors for any $i \in \mathbb{Z}$. We also choose once and for all a nontrivial additive character $\psi : \mathbb{F}_q \to \overline{\mathbb{Q}_{\ell}}$, and let $\mathcal{L}_{\psi}$ be the corresponding Artin-Schreier sheaf on $\mathbb{G}_a$. The variety $G/U$ comes with a natural Frobenius morphism $\mathrm{Fr} : G/U \to G/U$; we can then consider the category of Weil sheaves $\mathrm{Perv}_{\mathbb{F}_q}(G/U)$ on $G/U$, i.e. sheaves $K \in \mathrm{Perv}(G/U)$ equipped with a natural isomorphism $\mathrm{Fr}^*K \cong K$.
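To make the specialization $K_0(\mathcal{C}) \otimes_{\mathbb{Z}[v, v^{-1}]} \mathbb{C}$ concrete, here is a minimal sketch (sympy assumed; the class and the value of $q$ are hypothetical, chosen only for illustration):

```python
import sympy as sp

v = sp.symbols('v')
q = 5  # a hypothetical prime power

# a hypothetical class in K_0, recorded as a Laurent polynomial in v,
# where v^{-1} acts by the half-integer Tate twist (1/2)
cls = 3*v**2 - v + 2 + v**(-2)

# the specialization at v = q^{1/2} lands in C
value = sp.simplify(cls.subs(v, sp.sqrt(q)))
```

The point is only that the graded (Laurent-polynomial) refinement carries strictly more information than any single complex specialization.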
### Elements of $G$ indexed by $W$ For every simple root $\alpha_s$ of $G$ corresponding to the simple reflection $s$, we fix an isomorphism of the corresponding one-parameter subgroup $U_s \subset U$ with the additive group $\mathbb{G}_{a}$. This uniquely defines a homomorphism $\rho_s : SL_{2} \to G$ which induces the given isomorphism of $\mathbb{G}_{a}$ (embedded in $SL_{2}$ as upper-triangular matrices) with $U_s$; then let $$\begin{aligned} n_s & = \rho_s \begin{pmatrix} 0 & 1\\-1 & 0 \end{pmatrix}.\end{aligned}$$ For any $w \in W$, writing a reduced word $w = s_{i_1} \dots s_{i_k}$ we set $n_w = n_{s_{i_1}} \dots n_{s_{i_k}}$, and one can check that this does not depend on the reduced word. We also define for any $s \in S$ the subtorus $T_s \subset T$ obtained from the image of the coroot $\alpha_s^\vee$ and define $T_w$ for any $w \in W$ to be the product of all $T_s$ ($s \in S$) for which $s \leq w$ in the Bruhat order. ## Kazhdan-Laumon categories ### Fourier transforms on $\mathrm{Perv}(G/U)$ {#sec:ft} In [@KL] and [@P], to each $w \in W$ the authors assign an element of $D^b(G/U\times G/U)$ which, up to shift, is perverse and irreducible. Following [@P], let $X(w) \subset G/U \times G/U$ be the subvariety of pairs $(gU, g'U) \subset (G/U)^2$ such that $g^{-1}g' \in Un_wT_wU$. There is a canonical projection $\mathrm{pr}_w : X(w) \to T_w$ sending $(gU, g'U)$ to the unique $t_w \in T_w$ such that $g^{-1}g' \in Un_wt_wU$. 
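As a quick sanity check on the reduced-word independence of $n_w$ described above, one can verify the braid relation $n_{s_1} n_{s_2} n_{s_1} = n_{s_2} n_{s_1} n_{s_2}$ directly for $G = SL_3$ (a minimal numerical sketch; numpy is assumed, and the two matrices below are the images of the standard Weyl element of $SL_2$ under the two block embeddings $\rho_{s_1}, \rho_{s_2}$):

```python
import numpy as np

# rho_s applied to [[0, 1], [-1, 0]] for the two simple reflections of SL_3
n1 = np.array([[0, 1, 0],
               [-1, 0, 0],
               [0, 0, 1]])
n2 = np.array([[1, 0, 0],
               [0, 0, 1],
               [0, -1, 0]])

# the two reduced words s1 s2 s1 and s2 s1 s2 for the longest element w_0
lhs = n1 @ n2 @ n1
rhs = n2 @ n1 @ n2

assert np.array_equal(lhs, rhs)  # n_{w_0} is independent of the reduced word
```

The same computation in any rank verifies the braid relations for the $n_s$, which is what makes $n_w$ well-defined.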
In the case when $w = s \in S$, the morphism $\mathrm{pr}_s : X(s) \to T_s \cong \mathbb{G}_{m, k}$ extends to $\overline{\mathrm{pr}}_s : \overline{X(s)} \to \mathbb{G}_{a,k}$ and we have $$\begin{aligned} \overline{K(s)} = (-\overline{\mathrm{pr}}_s)^* \mathcal{L}_\psi\end{aligned}$$ and in the case of general $w \in W$ $$\begin{aligned} \label{eqn:wfroms} \overline{K(w)} & = \overline{K(s_{i_1})} * \dots * \overline{K(s_{i_k})}\end{aligned}$$ whenever $w = s_{i_1} \dots s_{i_k}$ is a reduced expression, where $*$ denotes the convolution of sheaves on $G/U \times G/U$ as defined in [@KL]. One can take this as the definition of Kazhdan-Laumon sheaves, which is well-defined by the proposition below, or refer to the explicit definition of $\overline{K(w)}$ which works for all $w \in W$ at once given in [@KL] or [@P]. **Proposition 7** ([@KL]). *The Kazhdan-Laumon sheaves $\overline{K(s)}$ for $s \in S$ under convolution satisfy the braid relations (up to isomorphism).* For any $s \in S$, they note that the endofunctor $K \to K * \overline{K(s)}$ of $D^b(G/U)$ can be identified with a certain "symplectic Fourier-Deligne transform" defined as follows. Let $P_s$ be the parabolic subgroup of $G$ associated to $s$, and let $Q_s = [P_s, P_s]$. The map $G/U \to G/Q_s$ has all fibers isomorphic to $\mathbb{A}^2 \setminus \{(0, 0)\}$, and it is shown in Section 2 of [@KL] that there exists a natural fiber bundle $\pi : V_s \to G/Q_s$ of rank $2$ equipped with a $G$-invariant symplectic pairing which contains $G/U$ as an open set, with inclusion $j : G/U \to V_s$ with $\pi\circ j$ being the original projection $G/U \to G/Q_s$. 
There is then a symplectic Fourier-Deligne transform $\tilde{\Phi}_s$ on $D^b(V_s)$ defined by $$\begin{aligned} \tilde{\Phi}_s(K) & = p_{2!}(\mathcal{L}\otimes p_1^*(K))[2](1),\end{aligned}$$ where the $p_i$ are the projections of the product $V_s \times_{G/Q_s} V_s$ on its factors, and $\mathcal{L} = \mathcal{L}_\psi(\langle, \rangle)$ is a smooth rank-1 $\overline{\mathbb{Q}_\ell}$-sheaf which is the pullback of the Artin-Schreier sheaf $\mathcal{L}_\psi$ under the morphism $\langle, \rangle$, c.f. Section 4 of [@P]. We then define the endofunctor $\Phi_s$ of $D^b(G/U)$ by $$\begin{aligned} \Phi_s(K) & = j^*\tilde{\Phi}_sj_{!} K.\end{aligned}$$ **Proposition 8** ([@KL], [@P]). *The functors $\Phi_s$ and $- * \overline{K(s)}$ are naturally isomorphic.* For any $w \in W$, we let $\Phi_w = \Phi_{s_{i_1}} \dots \Phi_{s_{i_k}}$ where $s_{i_1} \dots s_{i_k}$ is a reduced expression for $w$ as a product of simple reflections. The functors $\Phi_w$ are the gluing functors which Kazhdan and Laumon use to define the so-called glued categories $\mathcal{A}$. **Definition 9**. Using the six-functor formalism, one can check that each $\Phi_s$ has a right adjoint which we call $\Psi_s$, following the setup of Section 1.2 of [@P]. The functors $\Psi_s$ also form a braid action on $D^b(G/U)$. We then define $\Psi_w$ similarly. The functors $\Phi_w$ (resp. $\Psi_w$) are each right (resp. left) $t$-exact on $D^b(G/U)$ with respect to the perverse $t$-structure. For any $w \in W$, let $\Phi_w^\circ = {}^pH^0\Phi_w$ and $\Psi_w^\circ = {}^pH^0\Psi_w$, noting that $\Phi_w = L\Phi_w^\circ$, $\Psi_w = R\Psi_w^\circ$. **Proposition 10** ([@P], Section 4.1). *For any $s \in S$, there are natural morphisms $c_s : \Phi_s^2 \to \mathrm{Id}$ and $c_s' : \mathrm{Id} \to \Psi_s^2$ satisfying the associativity conditions $$\begin{aligned} \Phi_s \circ c_s & = c_s \circ \Phi_s : \Phi_s^3 \to \Phi_s & \Psi_s \circ c_s' & = c_s' \circ \Psi_s : \Psi_s \to \Psi_s^3. \end{aligned}$$* **Corollary 11**.
*For any $y', y \in W$, there is a natural transformation $\nu_{y', y} : \Phi_{y'}\Phi_y \to \Phi_{y'y}$.* *Proof.* We go by induction on $\ell(y) + \ell(y')$. If $\ell(y'y) = \ell(y') + \ell(y)$, then $\nu_{y', y}$ is the tautological map arising from the fact that the $\Phi_w$ form a braid action. If instead $\ell(y'y) < \ell(y') + \ell(y)$, then there exists some $s \in S$ such that $y' = \tilde{y}'s$ and $y = s\tilde{y}$ for some $\tilde{y}', \tilde{y} \in W$ with $\ell(\tilde{y}'s) = \ell(\tilde{y}') + 1$ and $\ell(s\tilde{y}) = \ell(\tilde{y}) + 1$, and so we have maps $$\begin{tikzcd} \Phi_{y'}\Phi_y = \Phi_{\tilde{y}'}\Phi_s^2\Phi_{\tilde{y}} \arrow[rr, "\Phi_{\tilde{y}'}\circ c_s \circ \Phi_{\tilde{y}}"] & & \Phi_{\tilde{y}'}\Phi_{\tilde{y}} \arrow[r, "\nu_{\tilde{y}',\tilde{y}}"] & \Phi_{y'y}, \end{tikzcd}$$ the former coming from $c_s$ and the latter coming from our induction hypothesis. ◻ ### Definition of the Kazhdan-Laumon category **Definition 12** ([@KL], [@P]). The Kazhdan-Laumon category $\mathcal{A}$ has objects which are tuples $(A_w)_{w \in W}$ with $A_w \in \mathrm{Perv}(G/U)$ and equipped with morphisms $$\begin{aligned} \theta_{y,w} : \Phi_y^\circ A_w \to A_{yw} \end{aligned}$$ for every $y, w \in W$ such that the diagram $$\begin{tikzcd} \Phi_{y'}^\circ\Phi_{y}^\circ A_w \arrow[r, "\Phi_{y'}\theta_{y,w}"] \arrow[d, "\nu_{y',y}"] & \Phi_{y'}^\circ A_{yw} \arrow[d, "\theta_{y',yw}"]\\ \Phi_{y'y}^\circ A_w \arrow[r, "\theta_{y'y,w}"] & A_{y'yw} \end{tikzcd}$$ commutes for any $y, y', w \in W$. A morphism $f$ between objects $(A_w)_{w \in W}$ and $(B_w)_{w \in W}$ is a collection of morphisms $f_w : A_w \to B_w$ such that $$\begin{tikzcd} \Phi_y^\circ A_w \arrow[r, "\Phi_y^\circ f_w"] \arrow[d, "\theta_{y,w}^A"] & \Phi_y^\circ B_w\arrow[d, "\theta_{y,w}^B"]\\ A_{yw} \arrow[r, "f_{yw}"] & B_{yw}.
\end{tikzcd}$$ commutes for all $y, w \in W$. It is shown in [@P] that this category is abelian, and that the functors $j_{w}^* : \mathcal{A} \to \mathrm{Perv}(G/U)$ defined by $j_{w}^*((A_w)_{w \in W}) = A_w$ are exact. **Remark 13**. We could have instead asked for morphisms $A_{yw} \to \Psi_y^\circ A_w$, making reference to the functors $\Psi_y^\circ$ rather than the $\Phi_y^\circ$. Later, we will discuss an alternate and more elegant definition of $\mathcal{A}$ as coalgebras over a certain comonad on $\oplus_{w \in W} \mathrm{Perv}(G/U)$ which is assembled from the functors $\Psi_y^\circ$. Here, though, we present the definition of the Kazhdan-Laumon category as it was originally formulated in [@KL] and later explained in more detail in [@P]. ### Definition of the $w$-twisted categories $\mathcal{A}_{w, \mathbb{F}_q}$ We note that the category $\mathcal{A}$ carries an action of the Weyl group $W$ as follows. For any $w \in W$, we let $\mathcal{F}_w : \mathcal{A} \to \mathcal{A}$ be the exact functor defined by right translation of the indices in the tuple, i.e. $\mathcal{F}_w((A_y)_{y\in W}) = (A_{yw})_{y \in W}$. **Definition 14**. For any $w \in W$, let $\mathcal{A}_{w,\mathbb{F}_q}$ be the category with objects $(A, \psi_A)$ where $A \in \mathcal{A}$ and $\psi_A : \mathcal{F}_w\mathrm{Fr}^*A \to A$ is an isomorphism. We call these $w$-twisted Weil sheaves in the Kazhdan-Laumon category. For any two such objects $(A, \psi_A)$ and $(B, \psi_B)$, we let $\mathrm{Hom}_{\mathcal{A}_{w, \mathbb{F}_q}}(A, B)$ be the set of morphisms $f \in \mathrm{Hom}_{\mathcal{A}}(A, B)$ such that $f \circ \psi_A = \psi_B \circ \mathcal{F}_w\mathrm{Fr}^*f$. **Remark 15**. When $w = e$ is trivial, we will write $\mathcal{A}_{\mathbb{F}_q} = \mathcal{A}_{w, \mathbb{F}_q}$.
In this case, it is shown in [@P] that $\mathcal{A}_{\mathbb{F}_q}$ is equivalent to the category obtained by applying the Kazhdan-Laumon gluing procedure described in Definition 12 to the category $\mathrm{Perv}_{\mathbb{F}_q}(G/U)$ of Weil perverse sheaves on $G/U$, i.e. perverse sheaves $K$ equipped with an isomorphism $\mathrm{Fr}^*K \to K$. ### Adjoint functors on $D^b(\mathcal{A})$ **Definition 16**. For any $w \in W$, let $j_w^* : D^b(\mathcal{A}) \to D^b(G/U)$ be the functor arising from the same-named exact functor $j_w^* : \mathcal{A} \to \mathrm{Perv}(G/U)$ given by $j_w^*((A_y)_{y \in W}) = A_w$. We then define a functor $j_{w!}^\circ : \mathrm{Perv}(G/U) \to \mathcal{A}$ by $$\begin{aligned} j_{w!}^\circ(K) & = (\Phi_{yw^{-1}}^\circ K)_{y \in W}. \end{aligned}$$ One can check that the morphisms $\nu_{y', y}$ for $y, y' \in W$ introduced in Corollary 11 endow the tuple $(\Phi_{yw^{-1}}^\circ K)_{y \in W}$ with the structure morphisms required to define an object of $\mathcal{A}$. We let $j_{w!}$ be the left-derived functor of $j_{w!}^\circ$. **Proposition 17** (Proposition 7.1.2, [@P]). *For any $w \in W$, there is an adjunction $(j_{w!}^\circ, j_w^*)$. Further, the functor $j_{w!} : D^b(G/U) \to D^b(\mathcal{A})$ has the property that $$\begin{aligned} {}^pH^i(j_{w!}(K)) & = ({}^pH^i\Phi_{yw^{-1}}(K))_{y \in W},\label{eqn:jwshriek} \end{aligned}$$ and there is also an adjunction $(j_{w!}, j_w^*)$ on derived categories.* Analogously, acting instead by the functors $\Psi_{yw^{-1}}$ in ([\[eqn:jwshriek\]](#eqn:jwshriek){reference-type="ref" reference="eqn:jwshriek"}) defines a right-adjoint $j_{w*}^\circ$ to $j_w^*$ and its right-derived functor $j_{w*}$ in the very same way, as is also explained in [@P]. ### The functor $\iota$ **Definition 18**. We define an endofunctor $\iota$ of $\mathcal{A}$ by $$\begin{aligned} \iota((A_w)_{w \in W}) & = (\Phi_{w_0}^\circ A_{w_0w})_{w \in W} \end{aligned}$$ with the structure morphisms described in 7.2 of [@P]. It is shown in loc. cit.
that we can also abuse notation and view $\iota$ as a functor on $D^b(\mathcal{A})$ (by replacing $\Phi_{w_0}^\circ$ with $\Phi_{w_0}$); for our purposes we will only need the functor $\iota^2$, which we will later describe more conceptually as an endofunctor of $D^b(\mathcal{A})$. ### Objects of the form $j_{w!}(A)$ **Proposition 19**. *For any $B \in D^b(G/U)$ and any $w \in W$, the object $j_{w!}B \in D^b(\mathcal{A})$ has finite projective dimension.* *Proof.* By Proposition 7.1.2 of [@P], the functor $j_{w!} : D^b(G/U) \to D^b(\mathcal{A})$ is left-adjoint to the restriction functor $j_w^*$. Hence for any $A \in \mathcal{A}$, $$\begin{aligned} \mathrm{Ext}_{\mathcal{A}}^\bullet(j_{w!}(B), A) & = \mathrm{Ext}_{\mathrm{Perv}(G/U)}^\bullet(B, j_w^*A), \end{aligned}$$ and the latter is a finite-dimensional vector space since the category $\mathrm{Perv}(G/U)$ has finite cohomological dimension. ◻ ### Polynomials and localization Fix the Weyl group $W$ and let $w_0$ be its longest element. **Definition 20**. Define $P(x, v), \tilde{P}(x, v) \in \mathbb{Z}[x, v, v^{-1}]$ by $$\begin{aligned} P(x, v) & = \prod_{i=0}^{\ell(w_0)} (x - v^{2i})\\ \tilde{P}(x, v) & = \prod_{i=1}^{\ell(w_0)} (x - v^{2i}) \end{aligned}$$ and let $p(v) = \tilde{P}(1, v)$. **Definition 21**. For any abelian category $\mathcal{C}$ such that $K_0(\mathcal{C})$ has the structure of a $\mathbb{Z}[v, v^{-1}]$-module, let $V^{\mathrm{fp}}\subset K_0(\mathcal{C})$ be the submodule spanned by all objects of finite projective dimension. Further, let $V^{\mathrm{fp}}_{p(v)}$ denote the localization of this module at $p(v)$, or equivalently at all of the factors $(1 - v^{2i})$ for $1 \leq i \leq \ell(w_0)$. # Monodromic sheaves, character sheaves, and the categorical center {#sec:monodrom} ### Rank one character sheaves on $T$ We let $\mathrm{Ch}(T)$ be the category of rank one character sheaves on $T$; we refer to Appendix A of [@YCh] for a detailed treatment.
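Before continuing, we remark that the localizing polynomial $p(v)$ of Definition 20 is completely explicit once the type is fixed. A minimal sketch (sympy assumed; Type $A_2$, so $\ell(w_0) = 3$, is chosen only for illustration):

```python
import sympy as sp

v = sp.symbols('v')
l_w0 = 3  # length of the longest element in Type A_2 (W = S_3)

# p(v) = P~(1, v) = prod_{i=1}^{l(w_0)} (1 - v^{2i}), as in Definition 20
p = sp.prod([1 - v**(2 * i) for i in range(1, l_w0 + 1)])

# its zeros are 2nd, 4th and 6th roots of unity, so p is nonzero away
# from roots of unity; in particular p(q^{1/2}) != 0 for any prime power q
assert sp.Poly(sp.expand(p), v).degree() == 2 + 4 + 6
assert p.subs(v, 1) == 0            # v = 1 is a root of unity
assert p.subs(v, sp.sqrt(2)) != 0   # a hypothetical q = 2 specialization
```

Since the roots of each factor $(1 - v^{2i})$ are roots of unity, specializing at $v = q^{\frac{1}{2}}$ never meets the localized denominators, which is what makes the localization compatible with the Grothendieck-Lefschetz pairing.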
We note that $\mathrm{Ch}(T)$ carries a natural action of the Weyl group $W$. For any $\mathcal{L}$ in $\mathrm{Ch}(T)$, we let $W_{\mathcal{L}}^\circ$ be the normal subgroup of the stabilizer of $\mathcal{L}$ in $W$ which is the Weyl group of the root subsystem of the root system of $W$ on which $\mathcal{L}$ is trivial; see 2.4 of [@lusztig_yun_2020] for details. ### Monodromic version of the Kazhdan-Laumon category In Sections 2.1.3 and 2.3.4 of [@BP], the authors explain how to define a category $\mathrm{Perv}_{\mathcal{L}}(G/U)$ of monodromic sheaves on $G/U$ with respect to the monodromy $\mathcal{L}$. We then let $D_{\mathcal{L}}^b(G/U) = D^b(\mathrm{Perv}_{\mathcal{L}}(G/U))$ be its derived category. **Remark 22**. We note that in this definition, monodromic perverse sheaves are defined such that the extension of two $\mathcal{L}$-equivariant sheaves may not be $\mathcal{L}$-equivariant, only $\mathcal{L}$-monodromic. In other words, for $\mathcal{L}$ trivial, this reduces to the category of perverse sheaves on $G/U$ with unipotent monodromy (c.f. [@BY]) on the right rather than simply to $\mathrm{Perv}(G/B)$. We use this to define the $\mathcal{L}$-monodromic Kazhdan-Laumon category. **Definition 23**. For any $\mathcal{L} \in \mathrm{Ch}(T)$, we define $\mathcal{A}^{\mathcal{L}}$ to be the category obtained by applying the gluing procedure in Definition 12 to the category $\mathrm{Perv}_{\mathcal{L}}(G/U)$. We further define $\mathcal{A}_{w, \mathbb{F}_q}^{\mathcal{L}}$ in analogy to Definition 14. ## The monodromic Hecke category and its center ### The monodromic Hecke category {#sec:monodromhecke} In [@Gouttard], the author defines for any $\mathcal{L}$ a category $\mathcal{P}_{\mathcal{L}}$ (called $D^b_{(B)}(G/U)_{t}$ in loc. cit., where $t$ is a parameter determined by $\mathcal{L}$) such that for $\mathcal{L}$ trivial, this reduces to the familiar derived category $D^b_{(B)}(G/U)$ of $B$-constructible sheaves on $G/U$.
Elements of $\mathcal{P}_{\mathcal{L}}$ are $\mathcal{L}$-monodromic with respect to the right action of $T$, while their left monodromies may correspond to any character sheaf in the $W$-orbit of $\mathcal{L}$. For any $w \in W$, this category contains monodromic versions ${}_{w\mathcal{L}}\Delta(w)_{\mathcal{L}}$ and ${}_{w\mathcal{L}}\nabla(w)_{\mathcal{L}}$ of standard and costandard sheaves; we emphasize that extensions of such objects in this category may not be $\mathcal{L}$-equivariant with respect to the right $T$-action even when they remain $\mathcal{L}$-monodromic, so this category also contains monodromic versions of tilting objects $\mathcal{T}(w)_{\mathcal{L}}$ for any $w \in W$. We let ${}_{\mathcal{L}}\mathcal{P}_{\mathcal{L}}$ be the triangulated subcategory of objects generated by the standard and costandard objects corresponding to $w \in W_{\mathcal{L}}^\circ$, in other words, the subcategory of objects in $\mathcal{P}_{\mathcal{L}}$ which are also left-monodromic with respect to $\mathcal{L}$. ### Free-monodromic Hecke categories {#subsubsec:free-monodromic} In [@BY], in the unipotent monodromy case where $\mathcal{L}$ is trivial, the authors define a category formed from a certain completion of $D^b_{\mathcal{L}}(G/U)$ called the category of *unipotently free-monodromic* sheaves. In [@Gouttard], the case of non-unipotent monodromy was treated carefully. In loc. cit., the author defines a category $\hat{\mathcal{P}}_{\mathcal{L}}$ (which is called $\hat{D}^b_{(B)}(G/U)_{t}$ in loc. cit. where $t$ is a parameter determined by $\mathcal{L}$), which is a certain completion of the category $\mathcal{P}_{\mathcal{L}}$ defined in the previous subsection, equipped with a monoidal structure which we also denote by $*$ in this context. This category contains elements $\varepsilon_{n,\mathcal{L}}$ and $\hat{\delta}_{\mathcal{L}}$ introduced in Corollary 5.3.3 of [@BT].
The object $\hat{\delta}_{\mathcal{L}}$ is the monoidal unit for the convolution product on $\hat{\mathcal{P}}_{\mathcal{L}}$, whereas convolution with the objects $\varepsilon_{n, \mathcal{L}}$ can be thought of as a sort of projection to the subcategory of objects whose corresponding "logarithmic monodromy operator" is nilpotent of order at most $n$; c.f. Appendix A of [@BY] for an explanation of this perspective. Finally, we recall that $\hat{\mathcal{P}}_{\mathcal{L}}$ contains for all $w \in W$ free-monodromic versions ${}_{w\mathcal{L}}\hat{\Delta}(w)_{\mathcal{L}}$, ${}_{w\mathcal{L}}\hat{\nabla}(w)_{\mathcal{L}}$ of standard and costandard sheaves, c.f. [@BY] for the unipotent case, [@lusztig_yun_2020] for a description of these objects in $\mathcal{P}_{\mathcal{L}}$ for arbitrary $\mathcal{L}$, and [@Gouttard] for their free-monodromic versions in $\hat{\mathcal{P}}_{\mathcal{L}}$. We define the subcategory ${}_{\mathcal{L}}\hat{\mathcal{P}}_{\mathcal{L}}$ analogously to the subcategory ${}_{\mathcal{L}}\mathcal{P}_{\mathcal{L}}$ of $\mathcal{P}_{\mathcal{L}}$. ### The center of the monodromic Hecke category To define the notion of categorical center which will be useful in this setting, we will closely follow the conventions of [@BITV] in this section. We begin by recalling some definitions from loc. cit. **Definition 24** ([@BITV]). Let $\mathcal{Y} = (G/U \times G/U)/T$ where $T$ acts by the right-diagonal multiplication. Let $\mathcal{Y}^{(2)} = (G/U)^4/T^2$ where the right $T^2$-action on $(G/U)^4$ is defined by $$\begin{aligned} (x_1U, x_2U, x_3U, x_4U) \cdot (t, z) & = (x_1tU, x_2zU, x_3zU, x_4tU), \end{aligned}$$ and is equipped with a $G^2$ action by $$\begin{aligned} (g, h)\cdot (x_1U, x_2U, x_3U, x_4U) & = (gx_1U, gx_2U, hx_3U, hx_4U). \end{aligned}$$ Let $\mathcal{H}^{(1)} = D^b(G\backslash \mathcal{Y})$ and $\mathcal{H}^{(2)} = D^b(G^2 \backslash \mathcal{Y}^{(2)})$.
It is shown in [@BITV] that $\mathcal{H}^{(2)}$ is equipped with a convolution product which makes it a monoidal category, and that $\mathcal{H}^{(1)}$ is a module category over $\mathcal{H}^{(2)}$ via the "two-sided convolution" described in 2.2 of loc. cit. Finally, we recall the definitions of $\mathcal{H}^{(1)}_{\mathrm{mon}}$ and $\mathcal{H}^{(2)}_{\mathrm{mon}}$ from loc. cit. as the full unipotently monodromic subcategories of $\mathcal{H}^{(1)}$ and $\mathcal{H}^{(2)}$ with respect to the projections $\mathcal{Y} \to (G/B)^2$ and $\mathcal{Y}^{(2)} \to (G/B)^4$ respectively. For an arbitrary $\mathcal{L} \in \mathrm{Ch}(T)$, we define the categories $\mathcal{H}^{(1)}_{\mathcal{L}}$ and $\mathcal{H}^{(2)}_{\mathcal{L}}$ similarly, but replacing unipotent monodromy with $\mathcal{L}$-monodromy. **Definition 25** ([@BITV], Definition 5.2.1). Let $$\mathcal{Z}\mathcal{H}^{(1)}_{\mathcal{L}} = \mathrm{Fun}^{\mathrm{fd}}_{\mathcal{H}^{(2)}_{\mathcal{L}}}(\mathcal{H}^{(1)}_{\mathcal{L}}, \mathcal{H}^{(1)}_{\mathcal{L}}),$$where $\mathrm{Fun}^{\mathrm{fd}}_{\mathcal{H}^{(2)}_{\mathcal{L}}}(\mathcal{H}^{(1)}_{\mathcal{L}}, \mathcal{H}^{(1)}_{\mathcal{L}})$ is the full subcategory of $\mathrm{Fun}_{\mathcal{H}^{(2)}_{\mathcal{L}}}(\mathcal{H}^{(1)}_{\mathcal{L}}, \mathcal{H}^{(1)}_{\mathcal{L}})$ consisting of functors $F$ such that the limit $F(\hat{\delta}_{\mathcal{L}})$ exists in $\mathcal{H}^{(1)}_{\mathcal{L}}$. Going forward, if $Z \in \mathcal{Z}\mathcal{H}_{\mathcal{L}}^{(1)}$, we will sometimes write $- * Z$ to denote the functor $Z$, since it is equivalent to convolution with the object $Z(\hat{\delta}_{\mathcal{L}})$.
### Harish-Chandra functor and equivalence from center to character sheaves Following 3.2 of [@BITV], consider the diagram $$\begin{tikzcd} & G \times G/B \arrow[dl, "\pi"'] \arrow[dr, "q"] & \\ G & & \mathcal{Y} \end{tikzcd}$$ where $\pi$ is the projection and $q$ is the quotient of the map $q' : G \times G/U \to G/U \times G/U$ given by $q'(g, xU) = (xU, gxU)$ by the free right $T$-action, with respect to which $q'$ is equivariant. **Definition 26**. The Harish-Chandra transform is the functor $$\begin{aligned} \mathfrak{hc} = q_! \circ \pi^*: D^b(G/_{\mathrm{Ad}} G) \to D^b(G\backslash \mathcal{Y}), \end{aligned}$$ which is monoidal with respect to the natural convolution product on each side by [@GinsburgAdmissible], c.f. 2.2 of [@BITV] for a detailed exposition of these convolution products. **Definition 27**. For any $\mathcal{L} \in \mathrm{Ch}(T)$, let $D^b_{\mathfrak{C},\mathcal{L}}(G) \subset D^b(G/_{\mathrm{Ad}}G)$ be the full triangulated subcategory with objects $\mathcal{F}$ satisfying $\mathfrak{hc}(\mathcal{F}) \in \mathcal{H}^{(1)}_{\mathcal{L}}$. The following proposition is proven for the case of unipotent monodromy in Theorem 5.2.2 of [@BITV]; the version for arbitrary monodromy $\mathcal{L}$ is similar and will appear in a future version of loc. cit. **Proposition 28** ([@BITV], Theorem 5.2.2). *There is an equivalence $\tilde{a}_{\mathcal{L}}$ of semigroupal categories $$\begin{aligned} \tilde{a}_{\mathcal{L}} : D^b_{\mathfrak{C}, \mathcal{L}}(G) \to \mathcal{Z}\mathcal{H}^{(1)}_{\mathcal{L}} \end{aligned}$$ such that $\varepsilon \circ \tilde{a}_{\mathcal{L}} = \mathfrak{hc}$, where $\varepsilon$ is evaluation of a functor at $\hat{\delta}_{\mathcal{L}}$.* ### Two-sided cells and character sheaves In this and subsequent sections, we use the notion of two-sided Kazhdan-Lusztig cells in the Weyl group; see e.g. [@W] for a clear exposition. **Definition 29**.
For any $\mathcal{L}$, let $\underline{C}_{\mathcal{L}}$ denote the set of two-sided cells in $W_{\mathcal{L}}^\circ$. For any $w \in W_{\mathcal{L}}^\circ$, we let $\underline{c}_w$ denote the corresponding cell. We let $\underline{c}_{e}$ be the top cell which corresponds to the identity element. There is a well-defined partial order $\leq$ on $\underline{C}_{\mathcal{L}}$ for which $\underline{c}_e$ is maximal. Two-sided Kazhdan-Lusztig cells give a filtration on the category $D^b_{\mathfrak{C}, \mathcal{L}}(G)$ (c.f. [@CSIV]), and so for each $\underline{c} \in \underline{C}_{\mathcal{L}}$, there are triangulated subcategories $D^b_{\mathfrak{C}, \mathcal{L}}(G)_{\leq \underline{c}}$ and $D^b_{\mathfrak{C}, \mathcal{L}}(G)_{< \underline{c}}$ of $D^b_{\mathfrak{C}, \mathcal{L}}(G)$. We then define $D^b_{\mathfrak{C}, \mathcal{L}}(G)_{\underline{c}}$ as the quotient category $D^b_{\mathfrak{C}, \mathcal{L}}(G)_{\leq \underline{c}}/D^b_{\mathfrak{C}, \mathcal{L}}(G)_{< \underline{c}}$, referring to the unipotent case treated in Section 5 of [@BFO] for details. Let $G_{\mathrm{ad}}$ be the adjoint quotient of $G$; the following is a consequence of the classification of irreducible character sheaves in terms of cells given in [@CSIV], c.f. Corollary 5.4 of [@BFO]. **Proposition 30**. *For any $\mathcal{L} \in \mathrm{Ch}(T)$, $$\begin{aligned} K_0(D^b_{\mathfrak{C}, \mathcal{L}}(G_{\mathrm{ad}})) \cong \bigoplus_{\underline{c} \in \underline{C}_{\mathcal{L}}} K_0(D^b_{\mathfrak{C}, \mathcal{L}}(G_{\mathrm{ad}})_{\underline{c}}) \end{aligned}$$ as vector spaces. 
Further, for any $\underline{c} \in \underline{C}_{\mathcal{L}}$, the preimage of the subspace $$\begin{aligned} \bigoplus_{\underline{c}' \leq \underline{c}} K_0(D^b_{\mathfrak{C}, \mathcal{L}}{(G_{\mathrm{ad}})}_{\underline{c}'}) \end{aligned}$$ in $K_0(D^b_{\mathfrak{C}, \mathcal{L}}(G_{\mathrm{ad}}))$ under this isomorphism is a monoidal ideal.* ### The big free-monodromic tilting object and $\mathbb{K}_{\mathcal{L}}$ In Section 9.4 of [@Gouttard], the author defines free-monodromic tilting sheaves with general monodromy, analogous to the ones appearing in [@BY] for the case of unipotent monodromy. **Definition 31**. Given $\mathcal{L} \in \mathrm{Ch}(T)$, let $\hat{\mathcal{T}}(w_{0,\mathcal{L}})_{\mathcal{L}}$ be the free-monodromic tilting object in ${}_{\mathcal{L}}\hat{\mathcal{P}}_{\mathcal{L}}$ corresponding to the longest element $w_{0,\mathcal{L}}$ of $W_{\mathcal{L}}^\circ$. In this paper, we will denote it simply by $\hat{\mathcal{T}}_{\mathcal{L}}$ for convenience. In [@BT], working in the case of unipotent monodromy, the authors construct from $\hat{\mathcal{T}}(w_0)$ an object they call $\mathbb{K}$, defined as $\mathbb{K} = p^*p_!\hat{\mathcal{T}}(w_0)$, where $p : U\backslash G/U \to (U\backslash G/U)/T$ is the quotient map for the action of $T$ on $U\backslash G / U$ by conjugation. They also define a character sheaf $\Xi$, obtained by averaging the derived pushforward of the constant sheaf on the regular locus of the unipotent variety of $G$ to obtain an element of $D^b(G/_{\mathrm{Ad}} G)$; its details are explained in 1.4 of [@BT]. **Proposition 32** (Theorem 1.4.1, [@BT]). *There exists an object $\Xi \in D^b(G/_{\mathrm{Ad}} G)$ such that, if $\Xi * \hat{\delta}$ is the projection of $\Xi$ to the derived category of unipotent character sheaves, then $\mathfrak{hc}(\Xi * \hat{\delta}) = \mathbb{K}$.* We now define an analogue for arbitrary monodromy generalizing the unipotent case. **Definition 33**.
Let $\mathbb{K}_{\mathcal{L}} = p^*p_!\hat{\mathcal{T}}_{\mathcal{L}}$. Then by the same argument as in the proof of Theorem 1.4.1 of [@BT], $\mathfrak{hc}(\Xi * \hat{\delta}_{\mathcal{L}}) = \mathbb{K}_{\mathcal{L}}$ where $\Xi * \hat{\delta}_{\mathcal{L}}$ is the analogous projection to the derived category of character sheaves with monodromy $\mathcal{L}$. As a result, $\mathbb{K}_{\mathcal{L}}$ can be considered as an element of $\mathcal{Z}\mathcal{H}_{\mathcal{L}}^{(1)}$. By abuse of notation, in the future when we use the convolution functor $- * \mathbb{K}_{\mathcal{L}}$, we will often identify $\mathbb{K}_{\mathcal{L}}$ with the character sheaf it comes from under the Harish-Chandra transform, e.g. as in the convolution action of character sheaves which we describe in . ### Fourier transform and convolution with costandard sheaves **Proposition 34**. *Let $\mathcal{F} \in D^b_{\mathcal{L}}(G/U)$ where $\mathcal{L} \in \mathrm{Ch}(T)$. Then for any $s \in S$, $$\begin{aligned} \Phi_s(\mathcal{F}) & = \begin{cases} \mathcal{F} * {}_\mathcal{L}\hat{\nabla}(s)_{\mathcal{L}}(\tfrac{1}{2}) & s \in W_{\mathcal{L}}^\circ\\ \mathcal{F} * {}_{\mathcal{L}}\hat{\nabla}(s)_{s\mathcal{L}} & s \not\in W_{\mathcal{L}}^\circ. \end{cases} \end{aligned}$$* *Proof.* In the case where $s \in W_{\mathcal{L}}^\circ$, this follows for equivariant sheaves by Proposition 4.3 of [@CMFKLCatO], c.f. 6.3 of [@P], while the second case follows from a similar computation. The proof then generalizes to arbitrary extensions of $\mathcal{L}$-equivariant sheaves, and therefore to all monodromic sheaves as in the claim. 
◻ # Coalgebras and dg enhancements {#sec:dg} ## Coalgebras over comonads and Barr-Beck for Kazhdan-Laumon categories ### The Kazhdan-Laumon category as coalgebras over a comonad We now recall a result from [@BBP] exhibiting the Kazhdan-Laumon category $\mathcal{A}$ as the category of coalgebras over a certain comonad on the underlying category $\mathcal{B} = \mathrm{Perv}(G/U)^{\oplus W}$. **Definition 35**. Let $\Psi^\circ : \mathcal{B} \to \mathcal{B}$ be the endofunctor defined for any $A = (A_w)_{w \in W} \in \mathcal{B}$ by $$\begin{aligned} \label{eqn:comonad} (\Psi^\circ A)_w & = \oplus_{y \in W} \Psi_{wy^{-1}}^\circ A_y. \end{aligned}$$ It has right-derived functor $\Psi = R\Psi^\circ$ given by $$\begin{aligned} (\Psi A)_w & = \oplus_{y \in W} \Psi_{wy^{-1}} A_y. \end{aligned}$$ **Theorem 36** ([@BBP]). *The Kazhdan-Laumon category $\mathcal{A}$ is equivalent to the category of coalgebras over the left-exact comonad $\Psi^\circ : \mathcal{B} \to \mathcal{B}$.* **Remark 37**. We can just as easily see that the Kazhdan-Laumon category $\mathcal{A}$ is equivalent to the category of algebras over the right-exact monad $\Phi^\circ$, where $\Phi^\circ$ is defined as in ([\[eqn:comonad\]](#eqn:comonad){reference-type="ref" reference="eqn:comonad"}) but using the $\Psi_w^\circ$ functors. ### dg enhancements of derived categories Given an abelian category $\mathcal{C}$, let $C_{\mathrm{dg}}(\mathcal{C})$ be the dg category with objects complexes of sheaves and morphisms the usual complexes of maps between complexes. We define the dg derived category $D_{\mathrm{dg}}(\mathcal{C})$ to be the dg quotient of $C_{\mathrm{dg}}(\mathcal{C})$ by the full subcategory of acyclic objects; its homotopy category is the usual derived category $D(\mathcal{C})$.
The bounded dg derived category $D^b_{\mathrm{dg}}(\mathcal{C})$ is defined to be the full dg subcategory of $D_{\mathrm{dg}}(\mathcal{C})$ consisting of objects which project to $D^b(\mathcal{C})$ when passing to the homotopy category. ### Barr-Beck-Lurie for dg categories applied to the Kazhdan-Laumon category Note that the comonad $\Psi^\circ : \mathcal{B} \to \mathcal{B}$ can be enhanced to a comonad $\Psi = R\Psi^\circ : D_{\mathrm{dg}}^b(\mathcal{B}) \to D_{\mathrm{dg}}^b(\mathcal{B})$; i.e. the right-derived functor of $\Psi^\circ$ has a dg enhancement (since it arises from the functors $\Psi_w$ which are themselves defined using the six-functor formalism). We can then consider the dg category $D_{\mathrm{dg}}^b(\mathcal{B})_{\Psi}$ of coalgebras over the comonad $\Psi$. The following is an application of the Barr-Beck-Lurie theorem for dg categories. **Proposition 38**. *The dg categories $D_{\mathrm{dg}}^b(\mathcal{B}_{\Psi^\circ})$ and $D_{\mathrm{dg}}^b(\mathcal{B})_{\Psi}$ are equivalent.* *Proof.* We first check the conditions of the Barr-Beck-Lurie theorem for the functor $F : D_{\mathrm{dg}}(\mathcal{B}_{\Psi^\circ}) \to D_{\mathrm{dg}}(\mathcal{B})$ obtained from the forgetful functor $\mathcal{B}_{\Psi^\circ} \to \mathcal{B}$. This functor has a right adjoint given by the dg enhancement of the right-derived functor of the cofree coalgebra functor $\mathcal{B} \to \mathcal{B}_{\Psi^\circ}$. Indeed, we check this adjunction explicitly: this cofree coalgebra functor sends an element $(B_w)_{w \in W} \in D^b_{\mathrm{dg}}(\mathcal{B})$ to the direct sum $\oplus_{w \in W} Rj_{w*}^\circ(B_w)$ where $Rj_{w*}^\circ = j_{w*}$ is the right-adjoint to $j_w^*$. It is then clear from the adjunction $(j_{w}^*, j_{w*})$ that the forgetful functor is its left adjoint.
In a similar way, the forgetful functor $F$ also has a left adjoint: it follows from the opposite adjunction $(j_{w!}, j_{w}^*)$ that the functor sending any $(B_w)_{w \in W} \in D^b_{\mathrm{dg}}(\mathcal{B})$ to $\oplus_{w \in W} j_{w!}(B_w)$ (where again $j_{w!} = Lj_{w!}^\circ$) is left adjoint to $F$. We are now in the setup of Example 2.6 of [@gunningham] (by Remark [Remark 37](#rem:monad){reference-type="ref" reference="rem:monad"} we can even work monadically rather than having to dualize the argument in loc. cit.), where $F$ and its right adjoint both preserve small limits since each is itself a right adjoint. Since $D_{\mathrm{dg}}(\mathcal{B}_{\Psi^\circ})$ and $D_{\mathrm{dg}}(\mathcal{B})$ are compactly generated, the result of loc. cit. gives that the hypotheses of the Barr-Beck-Lurie theorem will be satisfied as long as $F$ is conservative. It remains to check that $F$ is conservative, but this is clear from its exactness and the fact that the forgetful functor $\mathcal{B}_{\Psi^\circ} \to \mathcal{B}$ on the abelian level is faithful. To conclude, we then note that the equivalence $D_{\mathrm{dg}}(\mathcal{B}_{\Psi^\circ}) \to D_{\mathrm{dg}}(\mathcal{B})_{\Psi}$ provided by the Barr-Beck-Lurie theorem then restricts to an equivalence of bounded derived categories $D_{\mathrm{dg}}^b(\mathcal{B}_{\Psi^\circ}) \to D_{\mathrm{dg}}^b(\mathcal{B})_{\Psi}$ since cohomology in both categories is computed in the underlying category $D(\mathcal{B})$ after forgetting the coalgebra structure. ◻ ## The center of the monodromic Hecke category ### dg-enhancement for the monodromic Hecke category In 5.3 of [@BITV], the authors explain how to construct a dg enhancement of $\mathcal{H}_{\mathrm{mon}}^{(1)}$, and a similar construction works for the category $\mathcal{H}_{\mathcal{L}}^{(1)}$; this will be treated in a future version of loc. cit.
Going forward, when we consider the convolution action of $\mathcal{H}_{\mathrm{mon}}^{(1)}$ on $D^b(\mathrm{Perv}_{\mathcal{L}}(G/U))$, we will use the fact that this functor has a dg lift and therefore can be considered as a dg functor which preserves distinguished triangles in any of its arguments. ### Action of the center on the Kazhdan-Laumon dg-category For any $\mathcal{L} \in \mathrm{Ch}(T)$, let $\mathcal{B}_{\mathcal{L}} = \oplus_{w \in W}\mathrm{Perv}_{w\mathcal{L}}(G/U)$. **Definition 39**. Given $Z \in \mathcal{Z}\mathcal{H}^{(1)}_{\mathcal{L}}$, we consider $Z$ as a functor on the direct sum of categories $\oplus_{w\in W} \mathrm{Perv}_{w\mathcal{L}}(G/U)$ as follows. Given any object $A = (A_w)$ in $\oplus_{w \in W} \mathrm{Perv}_{w\mathcal{L}}(G/U)$, let $Z(A) = A * Z \in \oplus_{w \in W} \mathrm{Perv}_{w\mathcal{L}}(G/U)$ be defined such that $Z(A)_w = (A * Z)_w = A_w * {}_{w\mathcal{L}}\hat{\Delta}(w)_{\mathcal{L}} * Z * {}_{\mathcal{L}}\hat{\nabla}(w^{-1})_{w\mathcal{L}}$. By the definition of $\mathcal{Z}\mathcal{H}_{\mathcal{L}}^{(1)}$, for any $B \in {}_{\mathcal{L}}\hat{\mathcal{P}}_{\mathcal{L}}$, the functors $- * Z * B$ and $- * B * Z$ on $\mathrm{Perv}_{\mathcal{L}}(G/U)$ are naturally isomorphic. The following lemma is a consequence of this fact. **Lemma 40**. 
*For any $Z \in \mathcal{Z}\mathcal{H}_{\mathcal{L}}^{(1)}$, there is a natural isomorphism $Z \circ \Psi \cong \Psi \circ Z$ of endofunctors of $\mathcal{A}^{\mathcal{L}}$.* *Proof.* By , for any $w \in W$, there exists some Tate twist $(d)$ such that $((Z \circ \Psi)(A))_w(d)$ is isomorphic to $$\begin{aligned} & \quad \oplus_{y \in W} (\Psi_{wy^{-1}} A_y) * {}_{w\mathcal{L}}\hat{\Delta} (w)_{\mathcal{L}} * Z * {}_{\mathcal{L}}\hat{\nabla}(w^{-1})_{w\mathcal{L}}\\ & \cong \oplus_{y \in W}\, A_y * {}_{y\mathcal{L}}\hat{\nabla}(yw^{-1})_{w\mathcal{L}} * {}_{w\mathcal{L}}\hat{\Delta}(w)_{\mathcal{L}} * Z * {}_{\mathcal{L}}\hat{\nabla}(w^{-1})_{w\mathcal{L}}\\ & \cong \oplus_{y \in W}\, A_y * {}_{y\mathcal{L}}\hat{\Delta}(y)_{\mathcal{L}} * {}_{\mathcal{L}}\hat{\nabla}(y^{-1})_{y\mathcal{L}} * {}_{y\mathcal{L}}\hat{\nabla}(yw^{-1})_{w\mathcal{L}} * {}_{w\mathcal{L}}\hat{\Delta}(w)_{\mathcal{L}} * Z * {}_{\mathcal{L}}\hat{\nabla}(w^{-1})_{w\mathcal{L}} \\ & \cong \oplus_{y \in W}\, A_y * {}_{y\mathcal{L}}\hat{\Delta}(y)_{\mathcal{L}} * Z * {}_{\mathcal{L}}\hat{\nabla}(y^{-1})_{y\mathcal{L}} * {}_{y\mathcal{L}}\hat{\nabla}(yw^{-1})_{w\mathcal{L}} * {}_{w\mathcal{L}}\hat{\Delta}(w)_{\mathcal{L}} * {}_{\mathcal{L}}\hat{\nabla}(w^{-1})_{w\mathcal{L}}\\ & \cong \oplus_{y \in W}\, A_y * {}_{y\mathcal{L}}\hat{\Delta}(y)_{\mathcal{L}} * Z * {}_{\mathcal{L}}\hat{\nabla}(y^{-1})_{y\mathcal{L}} * {}_{y\mathcal{L}}\hat{\nabla}(yw^{-1})_{w\mathcal{L}} \\ & \cong \oplus_{y \in W} \Psi_{wy^{-1}}((Z(A))_y)(d)\\ & \cong ((\Psi \circ Z)(A))_w(d), \end{aligned}$$ and these isomorphisms across all $w \in W$ assemble together to give a natural isomorphism $Z \circ \Psi \cong \Psi \circ Z$. A conceptual explanation for the existence of this isomorphism is given in Lemma 11.12 of [@lusztig_yun_2020]. ◻ In the following, let $\pi_{\mathrm{ad}} : G \to G_{\mathrm{ad}}$ be the natural map from $G$ to its adjoint quotient. **Proposition 41**.
*There is an action of the monoidal category $D^b_{\mathfrak{C},\mathcal{L}}(G_{\mathrm{ad}})$ on the category $D^b(\mathcal{A}^{\mathcal{L}})$, which we denote by convolution on the right.* *Proof.* Given an element $Z \in D^b_{\mathfrak{C},\mathcal{L}}(G_{\mathrm{ad}})$, we first pull back along $\pi_{\mathrm{ad}}$ to obtain an element in $D^b_{\mathfrak{C},\mathcal{L}}(G)$, then pass to $\mathcal{Z}\mathcal{H}^{(1)}_{\mathcal{L}}$ via the equivalence $\tilde{a}_{\mathcal{L}}$, obtaining a new element $\tilde{Z}$. We can then act by the usual convolution $- * \tilde{Z}$. By abuse of notation, for any element $Z \in D^b_{\mathfrak{C},\mathcal{L}}(G_{\mathrm{ad}})$ we will denote the endofunctor of $\mathrm{Perv}_{\mathcal{L}}(G/U)$ which we just described by the same symbol $- * Z$. We note that $D^b_{\mathfrak{C},\mathcal{L}}(G_{\mathrm{ad}})$ has a dg enhancement, discussed for the unipotent case in [@BZN]; the general case is analogous. Further, the functors we use are built from compositions of those occurring in the six-functor formalism and therefore each has a dg enhancement as well. For any $B \in D^b_{\mathrm{dg}}(\mathcal{B}_\mathcal{L})_{\Psi}$ and $Z \in D^b_{\mathfrak{C},\mathcal{L}}(G_{\mathrm{ad}})$, we define the new object $B * Z$ by $F(B) * Z$ as a tuple of elements of $\mathrm{Perv}_{\mathcal{L}}(G/U)$ where $F$ is the forgetful functor. We then define the coalgebra structure map $B * Z \to \Psi (B * Z)$ by the composition $$\begin{aligned} B * Z & \to \Psi(B) * Z \cong \Psi(B * Z) \end{aligned}$$ where the first map comes from the coalgebra structure on $B$ and the subsequent isomorphism is the natural isomorphism described in Lemma [Lemma 40](#lem:naturaliso){reference-type="ref" reference="lem:naturaliso"}. One can then check that this defines a coalgebra structure on $B * Z$.
◻ We note that we get a similar action of $D^b_{\mathfrak{C},\mathcal{L}}(G_{\mathrm{ad}})$ on $D^b(\mathcal{A}_{w, \mathbb{F}_q}^{\mathcal{L}})$ since any object $Z$ here is a sheaf on the variety $G_{\mathrm{ad}}$ and therefore has a natural isomorphism $\mathrm{Fr}^* Z \to Z$; this means $- * Z$ commutes with $\mathrm{Fr}^*$, and therefore the action in [Proposition 41](#prop:dgacts){reference-type="ref" reference="prop:dgacts"} also gives an action in the setting of $w$-twisted Weil sheaves. # Polishchuk's canonical complex {#sec:canonical} ## Parabolic Kazhdan-Laumon categories ### Definition of the categories $\mathcal{A}_{w, \mathbb{F}_q}^I$ We will now define certain parabolic analogues of the Kazhdan-Laumon category which correspond to subsets $I \subset S$. We first recall the definition of the categories $\mathcal{A}_{W_I}$ from [@P], build off of this definition to define the categories $\mathcal{A}^I$ which we will need in the present work, and finally define $\mathcal{A}^I_{w, \mathbb{F}_q}$ for any $w \in W$. **Definition 42** ([@P], Section 7). For any $I \subset S$, the category $\mathcal{A}_{W_I}$ is the category of tuples of elements of $\mathrm{Perv}(G/U)$ indexed by $W_I$ with morphisms and compatibilities as in but only for $w \in W_I$, $s \in I$. For any $J \subset I \subset S$, and any right coset $W_Jx$ of $W_J$ in $W_I$, there is a restriction functor $j_{W_Jx}^{W_I*} : \mathcal{A}_{W_I} \to \mathcal{A}_{W_J}$ remembering only the tuple elements and morphisms for $w \in W_Jx$, $s \in J$. When $I = S$, we write only $j_{W_Jx}^*$, and in this case we omit superscripts similarly for all functors introduced in this section. **Proposition 43** ([@P], Proposition 7.1.1). *For any $J \subset I \subset S$, and any $W_Jx$, the functor $j_{W_Jx}^{W_I*}$ admits a left adjoint $j_{W_Jx!}^{W_I}$.* **Definition 44**. 
For $I \subset S$, let $\mathcal{A}^I$ be the category whose objects are tuples $(A_w)_{w \in W}$ equipped with morphisms $\Phi_y^\circ A_w \to A_{yw}$ for any $w \in W$, $y \in W_I$ satisfying the same conditions as in but only for $y \in W_I$. Morphisms in $\mathcal{A}^I$ are morphisms in $\mathrm{Perv}(G/U)^{\oplus W}$ satisfying the compatibilities in but only for $y \in W_I$. Alternatively, $\mathcal{A}^I = \oplus_{W_I \backslash W} \mathcal{A}_{W_I}$ with a reindexing by $W$ on the tuples in this category. Just like $\mathcal{A}$, the category $\mathcal{A}^I$ admits an action of $W$ by functors $\{\mathcal{F}_w\}_{w \in W}$ defined by $\mathcal{F}_w((A_y)_{y \in W}) = (A_{yw})_{y \in W}$. We define the category $\mathcal{A}_{w, \mathbb{F}_q}^I$ as in but with $\mathcal{A}$ replaced by $\mathcal{A}^I$. ### Adjunctions between these categories For any $J \subset I$, there is an obvious restriction functor $j_{I,J}^* : \mathcal{A}^I \to \mathcal{A}^J$ which is the identity on objects but which remembers only the morphisms $\Phi_y^\circ A_w \to A_{yw}$ for $y \in W_J$. As in , there is an analogous derived version of this functor and the following adjunction. **Proposition 45**. *For any $J \subset I$, the functor $j_{I,J}^*$ admits a left adjoint $j_{I,J!}$. For any $A \in \mathcal{A}^J$, $j_{I,J!}^\circ(A)$ is the direct sum $$\begin{aligned} \label{eqn:dirsumadj} \bigoplus_{x \in W_J\backslash W_I} j^{W_I\circ}_{W_Jx!}((A_w)_{w \in W_Jx}),\end{aligned}$$ where $(A_w)_{w \in W_Jx}$ is considered as an element of $\mathcal{A}_{W_J}$.
Further, the adjoint pair $(j_{I,J!}^\circ, j_{I,J}^*)$ gives also an adjunction between $\mathcal{A}^I_{w, \mathbb{F}_q}$ and $\mathcal{A}^J_{w, \mathbb{F}_q}$.* *Proof.* The direct sum in ([\[eqn:dirsumadj\]](#eqn:dirsumadj){reference-type="ref" reference="eqn:dirsumadj"}) has the natural structure of an object of $\mathcal{A}^I$, as each object $j_{W_Jx!}^{W_I \circ}(A_w)_{w \in W_J x}$ has such a structure by Proposition [Proposition 43](#prop:polparind){reference-type="ref" reference="prop:polparind"}. We then note that for any such $A \in \mathcal{A}^J$ and any $B \in \mathcal{A}^I$, $$\begin{aligned} & \quad \mathrm{Hom}_{\mathcal{A}^I}\left(\bigoplus_{x \in W_J \backslash W_I} j_{W_Jx!}^{W_I\circ}((A_w)_{w \in W_Jx}), B\right)\\ & \cong \bigoplus_{x \in W_J \backslash W_I} \mathrm{Hom}_{\mathcal{A}^I}(j_{W_J x!}^{W_I\circ}((A_w)_{w \in W_J x}), B)\\ & \cong \bigoplus_{x \in W_J \backslash W_I} \mathrm{Hom}_{\mathcal{A}_{W_J}}((A_w)_{w \in W_J x}, j_{W_J x}^{W_I*} B)\\ & \cong \mathrm{Hom}_{\mathcal{A}^J}((A_w)_{w \in W}, \oplus_{x \in W_J \backslash W_I}j_{W_J x}^{W_I*}B)\\ & = \mathrm{Hom}_{\mathcal{A}^J}((A_w)_{w \in W}, j_{I,J}^*B).\end{aligned}$$ To see that the adjoint pair $(j_{I,J!}^\circ, j_{I,J}^*)$ gives also an adjunction between $\mathcal{A}^I_{w, \mathbb{F}_q}$ and $\mathcal{A}^J_{w, \mathbb{F}_q}$, we note that the morphisms on both sides of the equation above which are compatible with the morphism $\psi_A : \mathcal{F}_w\mathrm{Fr}^* A \to A$ are preserved by these isomorphisms. ◻ ## Polishchuk's complex for $\mathcal{A}_{w, \mathbb{F}_q}$ ### Polishchuk's canonical complex in [@P][\[subsubsec:pcc\]]{#subsubsec:pcc label="subsubsec:pcc"} For a fixed $A \in \mathcal{A}$ and a choice of $J \subset S$, Polishchuk writes $A(J) = j_{S - J!}j_{S - J}^*A$.
In 7.1 of [@P], Polishchuk explains that for any $A \in \mathcal{A}$, adjunction of parabolic pushforward and pullback functors $(j_{W_Ix!}, j_{W_Ix}^*)$ gives a canonical morphism $A(J) \to A(J')$ whenever $J' \subset J$. He then defines the complex $C_\bullet(A)$ as $$\begin{tikzcd} C_{n-1} = A(S) \arrow[r] & \dots \arrow[r] & C_1 = \bigoplus_{|J| = 2} A(J) \arrow[r] & C_0 = \bigoplus_{|J| = 1} A(J), \end{tikzcd}$$ where $n = |S|$. Recalling the functor $\iota : \mathcal{A} \to \mathcal{A}$ sending $(A_w)_{w \in W}$ to $(\Phi_{w_0}^\circ A_{w_0w})_{w \in W}$, he describes natural morphisms $C_0 \to A$ and $\iota(A) \to C_{n-1}$, and shows that $$\begin{aligned} H_i(C_\bullet(A)) & = \begin{cases} A & i = 0,\\ 0 & i \neq 0, n - 1,\\ \iota(A) & i = n-1. \end{cases}\end{aligned}$$ He then describes a $2n$-term complex $\tilde{C}_\bullet$ formed from attaching $C_\bullet(\iota A)$ to $C_\bullet(A)$ via the maps $C_0(\iota A) \to \iota A \to C_{n-1}(A)$ with the property that $$\begin{aligned} H_i(\tilde{C}_{\bullet}(A)) & = \begin{cases} A & i = 0,\\ 0 & i \neq 0, 2n - 1,\\ \iota^2(A) & i = 2n - 1. \end{cases}\end{aligned}$$ ### Compatibility of the complex with $w$-twisted structure **Proposition 46**. *If $A \in \mathcal{A}_{w, \mathbb{F}_q}$, then the complex $C_\bullet(A)$ is compatible with the $w$-twisted Weil structures, i.e. is a complex of objects in $\mathcal{A}_{w, \mathbb{F}_q}$.* *Proof.* For any $k$, components of the map $C_k(A) \to C_{k-1}(A)$ are each maps of the form $$\begin{aligned} \label{eqn:polimap} j_{S-J!}j_{S-J}^*A \to j_{S-J'!}j_{S-J'}^*A \end{aligned}$$ where $J' \subset J \subset S$ are such that $|J| = k + 1$, $|J'| = k$, which we now describe. By adjunction the data of such a map is equivalent to a map $$\begin{aligned} \label{eqn:poliadj} j_{S-J}^*A \to j_{S-J}^*j_{S-J'!}j_{S-J'}^*A, \end{aligned}$$ and compatibility with the $w$-twisted structure is preserved under this adjunction.
By the definition of $j_{S-J'!}$, we have that $$\begin{aligned} j_{S-J}^*j_{S-J'!}j_{S-J'}^*A & \cong \bigoplus_{x \in W_{S-J}\backslash W} j_{S-J}^*j_{W_{S-J}x!}j_{S-J'}^*A, \end{aligned}$$ and one can check that the map in ([\[eqn:polimap\]](#eqn:polimap){reference-type="ref" reference="eqn:polimap"}) appearing in the definition of Polishchuk's complex in [@P] comes in ([\[eqn:poliadj\]](#eqn:poliadj){reference-type="ref" reference="eqn:poliadj"}) from a natural injection in $\mathcal{A}^{S-J}$ from $j_{S-J}^*A$ into this direct sum defined by sending the $y$th tuple entry to the $y$th tuple entry in the direct summand corresponding to the unique $x$ for which $y \in W_{S-J}x$. It is straightforward to check that this injection preserves $w$-twisted Weil structures on both sides coming from the $w$-twisted Weil structure on $A$, and therefore the map in ([\[eqn:polimap\]](#eqn:polimap){reference-type="ref" reference="eqn:polimap"}) is a map in $\mathcal{A}_{w, \mathbb{F}_q}$. ◻ ### Parabolic canonical complexes We remark that the content appearing in [\[subsubsec:pcc\]](#subsubsec:pcc){reference-type="ref" reference="subsubsec:pcc"} can be generalized to provide complexes in $\mathcal{A}^I_{w, \mathbb{F}_q}$ for any $I \subset S$ with $|I| = k$ and any $w \in W$. Namely, if we fix $I \subset S$, $w \in W$, and $A \in \mathcal{A}_{w, \mathbb{F}_q}^I$, we let $A^I(J) = j_{S-J!}^Ij_{S-J}^{I*}A$ whenever $J \supset S - I$, and we define the complex $C^I_\bullet(A)$ by $$\begin{tikzcd} A^I(S) \arrow[r] & \dots \arrow[r] & {\bigoplus_{|J|=n-k + 2}}A^I(J) \arrow[r] & \bigoplus_{|J| = n-k + 1}A^I(J), \end{tikzcd}$$ indexed such that $C_{k-1}$ is the first term and $C_0$ is the last term in the above. The results in then still hold, giving a version of the canonical complex associated to an object $A \in \mathcal{A}_{w, \mathbb{F}_q}^I$.
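To fix ideas, the following display is a sketch of the smallest nontrivial case $n = |S| = 2$, say $S = \{s, t\}$ (with $I = S$ and notation as above); it simply unwinds the definitions of the canonical complex and its doubled version in this rank.

```latex
% Sketch: the canonical complex for n = |S| = 2, S = {s, t}, I = S,
% with A(J) = j_{S-J!} j_{S-J}^* A as above.  The two-term complex is
\[
C_\bullet(A) : \qquad C_1 = A(S) \longrightarrow C_0 = A(\{s\}) \oplus A(\{t\}),
\]
% with homology H_0(C_\bullet(A)) \cong A and H_1(C_\bullet(A)) \cong \iota(A).
% Attaching C_\bullet(\iota A) via the maps C_0(\iota A) \to \iota A \to C_1(A)
% gives the doubled four-term complex
\[
\tilde{C}_\bullet(A) : \qquad
(\iota A)(S) \longrightarrow (\iota A)(\{s\}) \oplus (\iota A)(\{t\})
\longrightarrow A(S) \longrightarrow A(\{s\}) \oplus A(\{t\}),
\]
% whose homology is A in degree 0, \iota^2(A) in degree 2n - 1 = 3,
% and 0 otherwise.
```

Taking Euler characteristics of $\tilde{C}_\bullet(A)$ in $K_0$ then exhibits $[A] - [\iota^2(A)]$ as an alternating sum of classes $[A(J)]$ with $J \subsetneq S$, which is the mechanism exploited in the proof of Theorem 48.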
### Equations in the Grothendieck group The following is a consequence of the fact that the full twist is central in the braid group, along with the fact that the symplectic Fourier transforms $\Phi_w$ form a braid action. **Lemma 47**. *For any $J \subset I \subset S$ there is a natural isomorphism $$\begin{aligned} j_{J!}^I\circ \iota^2 \cong \iota^2\circ j_{J!}^I \end{aligned}$$ of functors from $D^b(\mathcal{A}^J)$ to $D^b(\mathcal{A}^I)$.* **Theorem 48**. *For any $w \in W$ and $A \in \mathcal{A}_{w, \mathbb{F}_q}$ the element $$\begin{aligned} (\iota^2 - 1)^n[A] \in K_0(\mathcal{A}_{w, \mathbb{F}_q}) \end{aligned}$$ lies in $V^{\mathrm{fp}}$, for $n = |S|$.* *Proof.* By the derived category version of the canonical complex construction in combination with the observation about its homology provided in [\[subsubsec:pcc\]](#subsubsec:pcc){reference-type="ref" reference="subsubsec:pcc"}, for any $I \subset S$ and any $A \in \mathcal{A}_{w, \mathbb{F}_q}^I$ we have the following equation in $K_0(\mathcal{A}_{w, \mathbb{F}_q}^I)$: $$\begin{aligned} [\iota(A)] + (-1)^{|I| - 1}[A] & = \sum_{\substack{J\\ J \subsetneq I}} (-1)^{|J|} [j_{J!}^Ij_{J}^{I*} A], \end{aligned}$$ c.f. the proof of Theorem 11.5.1 in [@P] where the analogous equation is used in the case where $w = e$, $I = S$. By the "doubled" canonical complex $\tilde{C}_{\bullet}(A)$ and the description of its homology in [\[subsubsec:pcc\]](#subsubsec:pcc){reference-type="ref" reference="subsubsec:pcc"}, this means that $[\iota^2 A] - [A]$ is a linear combination of elements lying in the image of the functors $j_{J!}^Ij_{J}^{I*}$ for $J \subsetneq I$. Now by induction on the $|I|$ appearing in the equation above, it follows from Lemma [Lemma 47](#lem:iotacommutes){reference-type="ref" reference="lem:iotacommutes"} that $(\iota^2 - 1)^n[A]$ is a linear combination of elements lying in the image of the functors $j_{\varnothing!}j_{\varnothing}^*$.
We have that for any $B \in \mathcal{A}^\varnothing_{w, \mathbb{F}_q}$, $$\begin{aligned} j_{\varnothing!}j_{\varnothing}^* B = \oplus_{y \in W} j_{y!}B_y,\end{aligned}$$ and each of these direct summands has finite cohomological dimension by , completing the proof of the theorem. ◻ # Central objects and the full twist {#sec:central} ## Cells, the big tilting object, and the full twist ### The full twist **Definition 49**. Let $\mathcal{L} \in \mathrm{Ch}(T)$. Then we define the element $$\begin{aligned} \mathrm{FT}_{\mathcal{L}} & = {}_{\mathcal{L}}\hat{\nabla}(w_0)_{w_0\mathcal{L}} * {}_{w_0\mathcal{L}}\hat{\nabla}(w_0)_{\mathcal{L}} \end{aligned}$$ of ${}_{\mathcal{L}}\hat{\mathcal{P}}_{\mathcal{L}}$. The object $\mathrm{FT}_{\mathcal{L}}$ admits a central structure and comes from a character sheaf in $D^b_{\mathfrak{C}, \mathcal{L}}(G_{\mathrm{ad}})$ via pullback composed with the Harish-Chandra functor; see [@BT] for an explicit description of this character sheaf in the unipotent case, whose construction can be adapted similarly for arbitrary monodromy. As with $\mathbb{K}_{\mathcal{L}}$, we will identify $\mathrm{FT}_{\mathcal{L}}$ as an element of $\mathcal{ZH}^{(1)}_{\mathcal{L}}$ with its underlying character sheaf in $D^b_{\mathfrak{C}, \mathcal{L}}(G_{\mathrm{ad}})$, and by $- * \mathrm{FT}_{\mathcal{L}}$ we will denote the action as described in . By the definition of $\mathrm{FT}_{\mathcal{L}}$ combined with , we obtain the following. **Lemma 50**. *The functors $- * \mathrm{FT}_{\mathcal{L}}$ and $\iota^2$ on $D^b(\mathcal{A}_{w, \mathbb{F}_q}^{\mathcal{L}})$ are naturally isomorphic.* ### The action of the full twist on cells **Proposition 51**.
*For any $\underline{c} \in \underline{C}_{\mathcal{L}}$, there exists an integer $d_{\mathcal{L}}(\underline{c})$ between $0$ and $2\ell(w_0)$ such that for any $a \in K_0(D^b_{\mathfrak{C}, \mathcal{L}}(G)_{\underline{c}})$, $$\begin{aligned} [\mathrm{FT}_{\mathcal{L}}] * a & = v^{d_{\mathcal{L}}(\underline{c})}a.\end{aligned}$$* *Proof.* We follow the same argument as in Proposition 4.1 and Remark 4.2 of [@BFO]; in other words, we begin with the observation that $[\mathrm{FT}_{\mathcal{L}}]$ acts trivially on the non-graded version of the Grothendieck group $K_0(D^b_{\mathfrak{C}, \mathcal{L}}(G)_{\underline{c}})$. Continuing to follow the argument of loc. cit., we then know that for any object $A$ in the heart of $D^b_{\mathfrak{C}, \mathcal{L}}(G)_{\underline{c}}$, the object $\mathrm{FT}_{\mathcal{L}} * A \in D^b_{\mathfrak{C}, \mathcal{L}}(G)_{\underline{c}}$ is perverse up to shift, and furthermore has the property that $[\mathrm{FT}_{\mathcal{L}} * A] = v^{d}[A]$ for some $d$. To compute the value of $d$, we can pass back along the Harish-Chandra transform and work in the category $\mathcal{H}_{\mathcal{L}}^{(1)}$. The Grothendieck ring $K_0(\mathcal{H}_{\mathcal{L}}^{(1)})$ is the monodromic Hecke algebra $\mathcal{H}_{\mathcal{L}}$. By [@lusztig_yun_2020], this is isomorphic to the usual Hecke algebra associated to the group $W_{\mathcal{L}}^\circ \subset W$, with $\mathrm{FT}_{\mathcal{L}}$ being identified with the usual full twist $\tilde{T}_{w_{0,\mathcal{L}}}^2$. By [@Lbook 5.12.2], the full twist in the usual Hecke algebra acts on the cell subquotient module of the Hecke algebra corresponding to a cell $\underline{c}$ by the scalar $v^{d(\underline{c})}$, where $d(\underline{c})$ is described in loc. cit. Passing this fact back along the monodromic-equivariant isomorphism from [@lusztig_yun_2020], the result follows. ◻ **Definition 52**.
For any two-sided cell $\underline{c}$, let $d_{\mathcal{L}}(\underline{c})$ be the integer between $0$ and $2\ell(w_0)$ for which the equation in holds. ### $\mathbb{K}_{\mathcal{L}}$ in the top cell subquotient **Definition 53**. Let $K_0(D^b(\mathcal{A}_{w, \mathbb{F}_q}^{\mathcal{L}}))_{< \underline{c}_e}$ be the submodule of $K_0(D^b(\mathcal{A}_{w, \mathbb{F}_q}^{\mathcal{L}}))$ spanned by the image of the ideal $K_0(D^b_{\mathfrak{C},\mathcal{L}}(G_{\mathrm{ad}})_{< \underline{c}_{e}})$ under the action described in . **Definition 54**. For any $\mathcal{L} \in \mathrm{Ch}(T)$, let $q_{\mathcal{L}}(v)$ be the Poincaré polynomial $$q_{\mathcal{L}}(v) = \sum_{w \in W_{\mathcal{L}}^\circ} (-v^2)^{\ell(w)}$$ of the group $W_{\mathcal{L}}^\circ$. Note by the Chevalley-Solomon formula that $q_{\mathcal{L}}(v)$ can be expressed as the product of some linear factors each of which is a factor of $(v^{2i} - 1)$ for some $1 \leq i \leq \ell(w_0)$.[^1] **Lemma 55**. *The multiplicity with grading of the irreducible object $\hat{\mathrm{IC}}(e)_{\mathcal{L}}$ in the Jordan-Hölder decomposition of $\hat{\mathcal{T}}_{\mathcal{L}}$ is $q_{\mathcal{L}}(v)$.* *Proof.* In [@Z], Yun computes the $\mathbb{Z}[v,v^{-1}]$-graded multiplicity of any standard object $\Delta(y)$ in a filtration of $T(w)$, where $w, y \in W$ and $\Delta(y)$ and $T(w)$ are standard and tilting objects respectively in the usual Hecke category. The main equivalence of categories in [@lusztig_yun_2020] allows us to extend these results to the monodromic Hecke category by replacing $W$ with $W_{\mathcal{L}}^\circ$, whose combinatorics in terms of tilting, standard, and irreducible objects matches exactly the combinatorics of the completed category ${}_{\mathcal{L}}\hat{\mathcal{P}}_\mathcal{L}$ (cf. 9.3.3 of [@Gouttard] for an explicit description of the standard filtration on a tilting object in the monodromic setting).
Combining Theorem 5.3.1 of [@Z] with the expression of standard objects in terms of irreducible objects via inverse Kazhdan-Lusztig polynomials, we compute that the multiplicity of $\hat{\mathrm{IC}}(e)_{\mathcal{L}}$ is exactly $$\begin{aligned} \sum_{w \in W_{\mathcal{L}}^\circ} (-v^2)^{\ell(w_{0,\mathcal{L}}) - \ell(w)} = \sum_{w \in W_{\mathcal{L}}^\circ} (-v^2)^{\ell(w)} = q_{\mathcal{L}}(v). \end{aligned}$$ ◻ For the next proposition, we recall the definition of the character sheaves $\varepsilon_{n,\mathcal{L}}$ from . **Proposition 56**. *For any $n$, the element $$\begin{aligned} \label{eqn:tiltid} [\varepsilon_{n,\mathcal{L}} * \mathbb{K}_{\mathcal{L}}] - (v^2 - 1)^{\mathrm{rank}(T)}q_{\mathcal{L}}(v)[\varepsilon_{n,\mathcal{L}}]\end{aligned}$$ of $K_0(D^b_{\mathfrak{C}}(G_{\mathrm{ad}}))$ lies in the subspace $K_0(D^b_{\mathfrak{C},< \underline{c}_{e}}(G_{\mathrm{ad}}))$.* *Proof.* First note that $K_0(D^b_{\mathfrak{C},\mathcal{L}}(G_{\mathrm{ad}})_{\underline{c}_{e}})$ is of rank $1$ as a $\mathbb{Z}[v, v^{-1}]$-module. This means that the classes of the images of $\varepsilon_{n,\mathcal{L}} * \mathbb{K}_{\mathcal{L}}$ and $\varepsilon_{n,\mathcal{L}}$ under the cell quotient map to $D^b_{\mathfrak{C},\mathcal{L}}(G_{\mathrm{ad}})_{\underline{c}_{e}}$ are scalar multiples, so $[\varepsilon_{n,\mathcal{L}} * \mathbb{K}_{\mathcal{L}}] - q'(v)[\varepsilon_{n,\mathcal{L}}]$ lies in a lower cell submodule for some $q'(v) \in \mathbb{Z}[v, v^{-1}]$. By [@BT], $[\mathbb{K}_{\mathcal{L}}] = (v^2 - 1)^{\mathrm{rank}(T)}[\hat{\mathcal{T}}_{\mathcal{L}}]$ in the full Grothendieck group $K_0(\hat{\mathcal{P}}_{\mathcal{L}})$. Note that in the corresponding top cell subquotient module for the monodromic Hecke algebra $K_0(\mathcal{H}_{\mathcal{L}}^{(1)})$, the equation $[\hat{\mathcal{T}}_{\mathcal{L}}] = q_{\mathcal{L}}(v)[\hat{\delta}_{\mathcal{L}}]$ holds by .
This means that $q'(v) = (v^2 - 1)^{\mathrm{rank}(T)}q_{\mathcal{L}}(v)$ is the only value for which $[\varepsilon_{n,\mathcal{L}} * {\mathbb{K}_{\mathcal{L}}}] - q'(v)[\varepsilon_{n,\mathcal{L}}]$ lies in a lower cell submodule of $K_0(D^b_{\mathfrak{C},\mathcal{L}}(G_{\mathrm{ad}}))$, and therefore $[\varepsilon_{n,\mathcal{L}} * \mathbb{K}_{\mathcal{L}}] - (v^2 - 1)^{\mathrm{rank}(T)}q_{\mathcal{L}}(v)[\varepsilon_{n,\mathcal{L}}]$ lies in $K_0(D^b_{\mathfrak{C},< \underline{c}_{e}}(G_{\mathrm{ad}}))$, as claimed. ◻ ### Convolution with $\mathbb{K}_{\mathcal{L}}$ for Kazhdan-Laumon objects **Lemma 57**. *For any $s \in S$, the map $$\begin{aligned} c_s * \mathbb{K}_{\mathcal{L}}: {}_{\mathcal{L}}\hat{\nabla}(s)_{s\mathcal{L}} * {}_{s\mathcal{L}}\hat{\nabla}(s)_{\mathcal{L}} * \mathbb{K}_{\mathcal{L}} \to \mathbb{K}_{\mathcal{L}} \end{aligned}$$ is an isomorphism.* *Proof.* It is enough to show that the corresponding map $\tilde{c}_s : {}_{s\mathcal{L}}\hat{\nabla}(s)_{\mathcal{L}} * \mathbb{K}_{\mathcal{L}} \to {}_{s\mathcal{L}}\hat{\Delta}(s)_{\mathcal{L}} * \mathbb{K}_{\mathcal{L}}$ (obtained by convolving $c_s$ with ${}_{s\mathcal{L}}\hat{\Delta}(s)_{\mathcal{L}}$) is an isomorphism. If $s\mathcal{L} \neq \mathcal{L}$, then by Lemma 3.6 in [@lusztig_yun_2020], ${}_{s\mathcal{L}}\hat{\nabla}(s)_{\mathcal{L}} \cong {}_{s\mathcal{L}}\hat{\Delta}(s)_{\mathcal{L}}$, and so this becomes immediate. We now consider the case when $s\mathcal{L} = \mathcal{L}$. Note that $\tilde{c}_s = (i' \circ p) * \mathbb{K}_{\mathcal{L}}$ where $i'$ and $p$ are as in the canonical exact sequences $$\begin{tikzcd} 0 \arrow[r] & \hat{\mathrm{IC}}(s)_{\mathcal{L}} \arrow[r, "i"] & {}_{s\mathcal{L}}\hat{\nabla}(s)_{\mathcal{L}} \arrow[r, "p"] & \hat{\mathrm{IC}}(e)_{\mathcal{L}} \arrow[r] & 0,\\ 0 \arrow[r] & \hat{\mathrm{IC}}(e)_{\mathcal{L}} \arrow[r, "i'"] & {}_{s\mathcal{L}}\hat{\Delta}(s)_{\mathcal{L}} \arrow[r, "p'"] & \hat{\mathrm{IC}}(s)_{\mathcal{L}} \arrow[r] & 0.
\end{tikzcd}$$ It is then enough to show that $p * \mathbb{K}_{\mathcal{L}}$ and $i' * \mathbb{K}_{\mathcal{L}}$ are each isomorphisms. This follows from the fact that $\hat{\mathrm{IC}}(s)_{\mathcal{L}} * \mathbb{K}_{\mathcal{L}} = 0$. Indeed, $\hat{\mathrm{IC}}(s)_{\mathcal{L}} * \mathbb{K}_{\mathcal{L}} = 0$ if and only if $\hat{\mathrm{IC}}(s)_{\mathcal{L}} * \hat{\mathcal{T}}_{\mathcal{L}} = 0$, which follows from the fact that its class in the Grothendieck group is zero combined with the fact that $\hat{\mathcal{T}}_{\mathcal{L}}$ is convolution-exact, since it is tilting. ◻ **Corollary 58**. *For any $A \in \mathcal{A}_{w,\mathbb{F}_q}$, the object $A * \mathbb{K}_{\mathcal{L}}$ of $\mathcal{A}_{w,\mathbb{F}_q}$ has the property that for any $y, z \in W$, the composition $$\begin{aligned} \theta_{z^{-1},zy} \circ (\Phi_{z^{-1}}^\circ\theta_{z,y}) : A_y * \mathbb{K}_{\mathcal{L}} \to A_y * \mathbb{K}_{\mathcal{L}} \end{aligned}$$ is an isomorphism.* *Proof.* By , the morphism in question can be written, up to Tate twist, as a morphism from $A_y * {}_{y\mathcal{L}}\hat{\Delta}(y)_{\mathcal{L}} * \mathbb{K}_{\mathcal{L}} * {}_{\mathcal{L}}\hat{\nabla}(y^{-1})_{y\mathcal{L}} * {}_{y\mathcal{L}}\hat{\nabla}(z^{-1})_{zy\mathcal{L}} * {}_{zy\mathcal{L}}\hat{\nabla}(z)_{y\mathcal{L}}$ to $A_y *{}_{y\mathcal{L}}\hat{\Delta}(y)_{\mathcal{L}} * \mathbb{K}_{\mathcal{L}} * {}_{\mathcal{L}}\hat{\nabla}(y^{-1})_{y\mathcal{L}}$.
In particular, it is a Tate twist of the morphism $$\begin{tikzcd} A_y * {}_{y\mathcal{L}}\hat{\Delta}(y)_{\mathcal{L}} * \mathbb{K}_{\mathcal{L}} * {}_{\mathcal{L}}\hat{\nabla}(y^{-1})_{y\mathcal{L}} * {}_{y\mathcal{L}}\hat{\nabla}(z^{-1})_{zy\mathcal{L}} * {}_{zy\mathcal{L}}\hat{\nabla}(z)_{y\mathcal{L}}\arrow[d]\\ A_y * {}_{y\mathcal{L}}\hat{\nabla}(z^{-1})_{zy\mathcal{L}} * {}_{zy\mathcal{L}}\hat{\Delta}(zy)_{\mathcal{L}} * \mathbb{K}_{\mathcal{L}} * {}_{zy\mathcal{L}}\hat{\nabla}(y^{-1}z^{-1})_{zy\mathcal{L}} * {}_{zy\mathcal{L}}\hat{\nabla}(z)_{y\mathcal{L}}\arrow[d, "\theta_{z,y} * {}_{zy\mathcal{L}}\hat{\Delta}(zy)_{\mathcal{L}} * \mathbb{K}_{\mathcal{L}} * {}_{\mathcal{L}}\hat{\nabla}(y^{-1}z^{-1})_{zy\mathcal{L}} * {}_{zy\mathcal{L}}\hat{\nabla}(z)_{y\mathcal{L}}"]\\ A_{zy} * {}_{zy\mathcal{L}}\hat{\Delta}(zy)_{\mathcal{L}} * \mathbb{K}_{\mathcal{L}} * {}_{\mathcal{L}}\hat{\nabla}(y^{-1}z^{-1})_{zy\mathcal{L}} * {}_{zy\mathcal{L}}\hat{\nabla}(z)_{y\mathcal{L}}\arrow[d]\\ A_{zy} * {}_{zy\mathcal{L}} \hat{\nabla}(z)_{y\mathcal{L}} * {}_{y\mathcal{L}}\hat{\Delta}(y)_{\mathcal{L}} * \mathbb{K}_{\mathcal{L}} * {}_{\mathcal{L}}\hat{\nabla}(y^{-1})_{y\mathcal{L}}\arrow[d, "\theta_{z^{-1},zy} * {}_{y\mathcal{L}}\hat{\Delta}(y)_{\mathcal{L}} * \mathbb{K}_{\mathcal{L}} * {}_{\mathcal{L}}\hat{\nabla}(y^{-1})_{y\mathcal{L}}"]\\ A_y * {}_{y\mathcal{L}}\hat{\Delta}(y)_{\mathcal{L}} * \mathbb{K}_{\mathcal{L}} * {}_{\mathcal{L}}\hat{\nabla}(y^{-1})_{y\mathcal{L}}, \end{tikzcd}$$ where the unlabeled arrows are the isomorphisms given by the central structure on $\mathbb{K}_{\mathcal{L}}$; these extend to similar central morphisms for conjugates of $\mathbb{K}_{\mathcal{L}}$ by standard/costandard sheaves by the same argument as in Lemma 11.12 of [@lusztig_yun_2020].
We note that since the second and third morphisms above clearly commute with these central morphisms, the above morphism agrees with the composition $$\begin{tikzcd} A_y * {}_{y\mathcal{L}}\hat{\Delta}(y)_{\mathcal{L}} * \mathbb{K}_{\mathcal{L}} * {}_{\mathcal{L}}\hat{\nabla}(y^{-1})_{y\mathcal{L}} * {}_{y\mathcal{L}} \hat{\nabla}(z^{-1})_{zy\mathcal{L}} * {}_{zy\mathcal{L}} \hat{\nabla}(z)_{y\mathcal{L}} \arrow[d]\\ A_y * {}_{y\mathcal{L}}\hat{\nabla}(z^{-1})_{zy\mathcal{L}} * {}_{zy\mathcal{L}}\hat{\nabla}(z)_{y\mathcal{L}} * {}_{y\mathcal{L}}\hat{\Delta}(y)_{\mathcal{L}} * \mathbb{K}_{\mathcal{L}} * {}_{\mathcal{L}}\hat{\nabla}(y^{-1})_{y\mathcal{L}} \arrow[d, "\theta_{z,y} * {}_{zy\mathcal{L}}\hat{\nabla}(z)_{y\mathcal{L}} * {}_{y\mathcal{L}}\hat{\Delta}(y)_{\mathcal{L}} * \mathbb{K}_{\mathcal{L}} * {}_{\mathcal{L}}\hat{\nabla}(y^{-1})_{y\mathcal{L}}"]\\ A_{zy} * {}_{zy\mathcal{L}}\hat{\nabla}(z)_{y\mathcal{L}} * {}_{y\mathcal{L}}\hat{\Delta}(y)_{\mathcal{L}} * \mathbb{K}_{\mathcal{L}} * {}_{\mathcal{L}}\hat{\nabla}(y^{-1})_{y\mathcal{L}} \arrow[d, "\theta_{z^{-1},zy} * {}_{y\mathcal{L}}\hat{\Delta}(y)_{\mathcal{L}} * \mathbb{K}_{\mathcal{L}} * {}_{\mathcal{L}}\hat{\nabla}(y^{-1})_{y\mathcal{L}}"]\\ A_y * {}_{y\mathcal{L}}\hat{\Delta}(y)_{\mathcal{L}} * \mathbb{K}_{\mathcal{L}} * {}_{\mathcal{L}}\hat{\nabla}(y^{-1})_{y\mathcal{L}}, \end{tikzcd}$$ where the first morphism is again the isomorphism coming from centrality of $\mathbb{K}_{\mathcal{L}}$. The composition of the last two morphisms in the sequence above must, by the definition of Kazhdan-Laumon categories, be equal to the morphism $$A_y * c_z * {}_{y\mathcal{L}}\hat{\Delta}(y)_{\mathcal{L}} * \mathbb{K}_{\mathcal{L}} * {}_{\mathcal{L}}\hat{\nabla}(y^{-1})_{y\mathcal{L}},$$ where $c_z$ is the morphism ${}_{y\mathcal{L}}\hat{\nabla}(z^{-1})_{zy\mathcal{L}} * {}_{zy\mathcal{L}}\hat{\nabla}(z)_{y\mathcal{L}} \to \mathrm{Id}$ which is obtained by applying the morphisms $c_s$ successively for every simple
reflection $s$ in a reduced expression for $z$. By the functoriality of the central morphisms discussed above, they also commute with this $c_z$, and so the entire composition above actually agrees with the morphism $$A_y * {}_{y\mathcal{L}}\hat{\Delta}(y)_{\mathcal{L}} * \mathbb{K}_{\mathcal{L}} * {}_{\mathcal{L}}\hat{\nabla}(y^{-1})_{y\mathcal{L}} * c_z.$$ By our inductive definition of $c_z$ along with the same argument as in Lemma [Lemma 57](#lem:csiso){reference-type="ref" reference="lem:csiso"}, $\mathbb{K}_{\mathcal{L}} * {}_{\mathcal{L}}\hat{\nabla}(y^{-1})_{y\mathcal{L}} * c_z$ is an isomorphism, and therefore $\theta_{z^{-1},zy} \circ (\Phi_{z^{-1}}^\circ \theta_{z,y})$ must be, too. ◻ **Proposition 59**. *For any $A \in \mathcal{A}_{w, \mathbb{F}_q}$, $A * \mathbb{K}_{\mathcal{L}}$ has finite projective dimension.* *Proof.* We can forget the $w$-twisted Weil structure on $A * \mathbb{K}_{\mathcal{L}}$ and consider it as an object in $\mathcal{A}$. In $\mathcal{A}$, the adjunction $(j_{e!}, j_{e}^*)$ gives a morphism $$\begin{aligned} a : j_{e!}j_e^*(A * \mathbb{K}_{\mathcal{L}}) \to A * \mathbb{K}_{\mathcal{L}}. \end{aligned}$$ We claim that this is an isomorphism. It is enough to show that the component morphisms $a_y : j_y^*j_{e!}j_e^*(A * \mathbb{K}_{\mathcal{L}}) \to j_y^*(A * \mathbb{K}_{\mathcal{L}})$ are each isomorphisms. By definition, these are the structure morphisms $\theta_{y,e} : \Phi_y^\circ(A * \mathbb{K}_{\mathcal{L}})_e \to (A * \mathbb{K}_{\mathcal{L}})_y$. By , the morphisms $$\begin{aligned} \theta_{y^{-1},y}\circ (\Phi_{y^{-1}}^\circ \theta_{y,e}) & : \Phi_{y^{-1}}^\circ \Phi_y^\circ (A * \mathbb{K}_{\mathcal{L}})_e \to (A * \mathbb{K}_{\mathcal{L}})_e\\ \theta_{y,e} \circ (\Phi_{y}^\circ\theta_{y^{-1},y}) & : \Phi_{y}^\circ\Phi_{y^{-1}}^\circ (A * \mathbb{K}_{\mathcal{L}})_{y} \to (A * \mathbb{K}_{\mathcal{L}})_y \end{aligned}$$ are both isomorphisms. This tells us that $\theta_{y,e}$ is both a monomorphism and an epimorphism.
This means $a : j_{e!}j_e^*(A * \mathbb{K}_{\mathcal{L}}) \to A * \mathbb{K}_{\mathcal{L}}$ is an isomorphism as we claimed. The proposition then follows since all objects in the image of $j_{e!}$ have finite cohomological dimension by . ◻ ## Completing the proof of Theorem [Theorem 5](#thm:mainthm){reference-type="ref" reference="thm:mainthm"} {#completing-the-proof-of-theorem-thmmainthm} ### Proof of Theorem [Theorem 5](#thm:mainthm){reference-type="ref" reference="thm:mainthm"} by the action of the full twist on cells {#proof-of-theorem-thmmainthm-by-the-action-of-the-full-twist-on-cells} **Lemma 60**. *For any $a \in K_0(\mathcal{A}_{w, \mathbb{F}_q}^{\mathcal{L}})$, there exists some $r \geq 1$ for which $$\begin{aligned} P^r(\mathrm{FT}_{\mathcal{L}}, v) \cdot a = 0 \end{aligned}$$ in $K_0(\mathcal{A}_{w, \mathbb{F}_q}^{\mathcal{L}})$, where $P^r$ is the polynomial for which $P^r(x, v) = P(x, v)^r$ for any $x$.* *Further, if $a \in K_0(\mathcal{A}_{w, \mathbb{F}_q}^{\mathcal{L}})_{< \underline{c}_{e}}$, then $$\begin{aligned} \tilde{P}^r(\mathrm{FT}_{\mathcal{L}}, v) \cdot a = 0. \end{aligned}$$* *Proof.* By and Propositions [Proposition 38](#prop:dgequiv){reference-type="ref" reference="prop:dgequiv"} and [Proposition 41](#prop:dgacts){reference-type="ref" reference="prop:dgacts"}, the category $D^b_{\mathfrak{C},\mathcal{L}}(G_{\mathrm{ad}})$ acts on $D^b_{\mathrm{dg}}(\mathcal{A}_{w,\mathbb{F}_q}^{\mathcal{L}})$ in a way which respects distinguished triangles, therefore giving an action of the $\mathbb{Z}[v, v^{-1}]$-algebra $K_0(D^b_{\mathfrak{C},\mathcal{L}}(G_{\mathrm{ad}}))$ on $K_0(\mathcal{A}_{w,\mathbb{F}_q}^{\mathcal{L}})$. It is then enough to show that there exists $r$ for which $P^r([\mathrm{FT}_{\mathcal{L}}], v) = 0$ in $K_0(D^b_{\mathfrak{C},\mathcal{L}}(G_{\mathrm{ad}}))$ and that $\tilde{P}^r([\mathrm{FT}_{\mathcal{L}}], v) \cdot b = 0$ for any $b \in K_0(D^b_{\mathfrak{C},\mathcal{L}}(G_{\mathrm{ad}})_{< \underline{c}_e})$.
Indeed, since by Proposition [Proposition 51](#prop:celleigvals){reference-type="ref" reference="prop:celleigvals"} and Proposition [Proposition 30](#prop:cellsum){reference-type="ref" reference="prop:cellsum"}, the eigenvalues of $[\mathrm{FT}_{\mathcal{L}}] * -$ on $K_0(D^b_{\mathfrak{C},\mathcal{L}}(G_{\mathrm{ad}}))$ are each of the form $v^{2i}$ for $0 \leq i \leq \ell(w_0)$ and of the form $v^{2i}$ for $1 \leq i \leq \ell(w_0)$ on $K_0(D^b_{\mathfrak{C},\mathcal{L}}(G_{\mathrm{ad}})_{< \underline{c}_{e}})$, we need only choose $r$ to be the maximum multiplicity occurring in the characteristic polynomial of $[\mathrm{FT}_{\mathcal{L}}] * -$ on $K_0(D^b_{\mathfrak{C},\mathcal{L}}(G_{\mathrm{ad}}))$, since each degree $1$ factor of this characteristic polynomial is a factor of $P(x, v)$ (resp. $\tilde{P}(x, v)$) by definition of the latter. Choosing $r$ in this way, we get that the polynomials in the lemma vanish, as desired. ◻ **Corollary 61**. *For any $a \in K_0(D^b(\mathcal{A}_{w, \mathbb{F}_q}^{\mathcal{L}}))_{< \underline{c}_e}$, $a \in V^{\mathrm{fp}}_{p(v)}$.* *Proof.* Let $a \in K_0(\mathcal{A}_{w,\mathbb{F}_q}^{\mathcal{L}})_{< \underline{c}_{e}}$. By , $(\iota^2 - 1)^{n} \cdot a \in V^{\mathrm{fp}}$, while by Lemmas [Lemma 50](#lem:i2ft){reference-type="ref" reference="lem:i2ft"} and [Lemma 60](#lem:ftpolyzero){reference-type="ref" reference="lem:ftpolyzero"}, $\tilde{P}(\iota^2, v)^r \cdot a = 0$ for some $r \geq 1$. We claim that this implies that $(\iota^2 - 1)^{n-k} \cdot a \in V^{\mathrm{fp}}_{p(v)}$ for all $0 \leq k \leq n$. Indeed, suppose for induction that $(\iota^2 - 1)^{n-k+1}\cdot a \in V^{\mathrm{fp}}_{p(v)}$, and let $a' = (\iota^2 - 1)^{n-k} \cdot a$. Then $(\iota^2 - 1)\cdot a' \in V^{\mathrm{fp}}_{p(v)}$ and $\tilde{P}(\iota^2, v)^r \cdot a' = 0$, so by the Euclidean algorithm, $p(v)^r\cdot a' = \tilde{P}(1, v)^r\cdot a'$ is in the $\mathbb{Z}[\iota^2, v, v^{-1}]$-span of $(\iota^2 - 1) \cdot a'$, and therefore lies in $V^{\mathrm{fp}}_{p(v)}$.
Dividing by $p(v)^r$ gives $a' \in V^{\mathrm{fp}}_{p(v)}$, and so we can proceed by induction until $k = n$, where we conclude that $a \in V^{\mathrm{fp}}_{p(v)}$. ◻ *Proof of .* Let $A \in \mathcal{A}_{w, \mathbb{F}_q}^{\mathcal{L}}$. By Proposition [Proposition 56](#prop:tiltingid){reference-type="ref" reference="prop:tiltingid"}, $$\begin{aligned} \label{eqn:diff} [A * \mathbb{K}_{\mathcal{L}}] - (v^2 - 1)^{\mathrm{rank}(T)}q_{\mathcal{L}}(v)[A] \end{aligned}$$ is an element of $K_0(D^b(\mathcal{A}_{w, \mathbb{F}_q}^{\mathcal{L}}))_{< \underline{c}_e}$. By Corollary [Corollary 61](#cor:lowcellpoly){reference-type="ref" reference="cor:lowcellpoly"}, this means $[A * \mathbb{K}_{\mathcal{L}}] - (v^2 - 1)^{\mathrm{rank}(T)}q_{\mathcal{L}}(v)[A] \in V^{\mathrm{fp}}_{p(v)}$. By Proposition [Proposition 59](#prop:akfin){reference-type="ref" reference="prop:akfin"}, $A * \mathbb{K}_{\mathcal{L}}$ itself has finite projective dimension, and so in combination with equation ([\[eqn:diff\]](#eqn:diff){reference-type="ref" reference="eqn:diff"}), this means $(v^2 - 1)^{\mathrm{rank}(T)}q_{\mathcal{L}}(v)[A] \in V^{\mathrm{fp}}_{p(v)}$. Note that by , each degree $1$ factor of $(v^2 - 1)^{\mathrm{rank}(T)}q_{\mathcal{L}}(v)$ is also a factor of $p(v)$, so we can divide by $(v^2 - 1)^{\mathrm{rank}(T)}q_{\mathcal{L}}(v)$ in the localization $V^{\mathrm{fp}}_{p(v)}$ to get $[A] \in V^{\mathrm{fp}}_{p(v)}$, completing the proof. ◻ # Polishchuk's rationality conjecture {#sec:final} ## A general study of $K_0(\mathcal{A}_{\mathbb{F}_q})$ ### Polishchuk's description of $K_0(\mathcal{A}_{\mathbb{F}_q})$ A crucial tool which we will use in the proof of is the following description of $K_0(\mathcal{A}_{\mathbb{F}_q})$ provided by Polishchuk. **Theorem 62** ([@P], Proposition 3.4.1). *The map $$\begin{aligned} K_0(\mathcal{A}_{\mathbb{F}_q}) & \to \bigoplus_{w \in W} K_0(\mathrm{Perv}_{\mathbb{F}_q}(G/U)) \end{aligned}$$ induced by the functor $\oplus_{w \in W} j_{w}^*$ is injective.
Its image is the subset $$\{(a_w)_{w \in W} \in \bigoplus_{w \in W} K_0(\mathrm{Perv}_{\mathbb{F}_q}(G/U)) ~|~ \textrm{for any }s \in S, w\in W, a_{sw} - \Phi_s a_w \in \mathrm{im}(\Phi_s^2 - 1)\}.$$* ### Recalling [@CMFSymplectic] In [@CMFSymplectic], we study the subalgebra $\mathrm{KL}(v)$ of endomorphisms of $K_0(G/U)$ generated by the symplectic Fourier transforms $\{\Phi_s\}_{s \in S}$. By , this is the same as the subalgebra of $K_0(G/U \times G/U)$ generated under convolution by classes of Kazhdan-Laumon sheaves; we denote the generator of $\mathrm{KL}(v)$ corresponding to $w \in W$ by $\mathsf{a}_w$. In this section, we use the monodromic Hecke algebras $\mathcal{H}_{\mathcal{L}}$ and $\mathcal{H}_{\mathfrak{o}}$ with the standard generators $\tilde{T}_w$ as defined in [@lusztig_yun_2020]; see [@CMFSymplectic] for a more precise outline of our conventions. In Section 4 of [@CMFSymplectic], we show that for any character sheaf $\mathcal{L}$ with $W$-orbit $\mathfrak{o}$, there is a surjection $\pi_{\mathcal{L}} : \mathrm{KL}(v) \to \mathcal{H}_{\mathfrak{o}}$. The following result follows from the main result of loc. cit., which explicitly identifies the algebra $\mathrm{KL}(v)$ as a subalgebra of a generic-parameter version of the Yokonuma-Hecke algebra. **Proposition 63** ([@CMFSymplectic]). *The following properties are satisfied by the morphisms $\{\pi_{\mathcal{L}}\}_{\mathcal{L}}$.* 1. *If $w_1, w_2 \in W$, then $\pi_{\mathcal{L}}(\mathsf{a}_{w_1}\mathsf{a}_{w_2}) = \pi_{w_2\mathcal{L}}(\mathsf{a}_{w_1})\pi_{\mathcal{L}}(\mathsf{a}_{w_2})$ in $\mathcal{H}_{\mathfrak{o}}$.* 2. *If $w \in W_{\mathcal{L}}^\circ$, then $\pi_{\mathcal{L}}(\mathsf{a}_w) = \tilde{T}_{w} \in \mathcal{H}_{\mathfrak{o}}$.* 3.
*If $s\in S$ is not in $W_{\mathcal{L}}^\circ$, then $\pi_{\mathcal{L}}(\mathsf{a}_s^2) = 1$.* *Finally, the morphism $\prod_{\mathcal{L}} \pi_{\mathcal{L}}$ is injective, so if $a \in \mathrm{KL}(v)$ is such that $\pi_{\mathcal{L}}(a) = 0$ for all character sheaves $\mathcal{L}$, then $a = 0$.* **Lemma 64**. *For any $\mathcal{L}$ with $W$-orbit $\mathfrak{o}$, the algebra morphism $\pi_{\mathcal{L}} : \mathrm{KL}(v) \to \mathcal{H}_{\mathfrak{o}}$ is such that $$\begin{aligned} \pi_{\mathcal{L}}(\mathsf{a}_{w_0}^2) & = \tilde{T}_{w_{0,\mathcal{L}}}^2, \end{aligned}$$ where $w_{0,\mathcal{L}}$ is the longest element of $W_{\mathcal{L}}^\circ$.* *Proof.* Recall that if $s \in S$ is such that $s \not\in W_{\mathcal{L}}^\circ$, then $\pi_{\mathcal{L}}(\mathsf{a}_{s}^2) = 1$ in $\mathcal{H}_{\mathfrak{o}}$. For induction, we claim that if $y \in W$ is such that $\ell(y) + \ell(w_{0,\mathcal{L}}) = \ell(yw_{0,\mathcal{L}})$, and if $s \in S$ is such that $\ell(sy) < \ell(y)$, then $s \not\in W_{y\mathcal{L}}^\circ$, i.e. $y^{-1}sy \not\in W_{\mathcal{L}}^\circ$. Indeed, if we had $y^{-1}sy \in W_{\mathcal{L}}^\circ$, then since $w_{0,\mathcal{L}}$ dominates all elements of $W_{\mathcal{L}}^\circ$ in the Bruhat order, we could pick $z \in W$ such that $y^{-1}syz = w_{0,\mathcal{L}}$ with $\ell(y^{-1}sy) + \ell(z) = \ell(w_{0,\mathcal{L}})$. But then we would have $syz = yw_{0,\mathcal{L}}$ with $\ell(sy) + \ell(z) = \ell(syz) = \ell(yw_{0,\mathcal{L}})$. But since $\ell(sy) < \ell(y)$ and $\ell(z) \leq \ell(w_{0,\mathcal{L}})$, this is impossible. Now choose some $y$ for which $yw_{0,\mathcal{L}} = w_0$ with $\ell(y) + \ell(w_{0,\mathcal{L}}) = \ell(w_0)$. We will show that $\pi_{\mathcal{L}}(\mathsf{a}_{w_0}^2) = \tilde{T}_{w_{0,\mathcal{L}}}^2$ by induction on the length of $y$; in the base case $y = e$ we have $w_0 = w_{0,\mathcal{L}} \in W_{\mathcal{L}}^\circ$, so the claim follows from properties (1) and (2) above.
Choosing $s \in S$ such that $\ell(sy) < \ell(y)$, the induction hypothesis, along with the fact proved in the previous paragraph, gives that $$\begin{aligned} \pi_{\mathcal{L}}(\mathsf{a}_{w_0}^2) & = \pi_{\mathcal{L}}(\mathsf{a}_{w_{0,\mathcal{L}}}\mathsf{a}_{y^{-1}}\mathsf{a}_y\mathsf{a}_{w_{0,\mathcal{L}}})\\ & = \pi_{\mathcal{L}}(\mathsf{a}_{w_{0,\mathcal{L}}}\mathsf{a}_{(sy)^{-1}}\mathsf{a}_s^2\mathsf{a}_{sy}\mathsf{a}_{w_{0,\mathcal{L}}})\\ & = \pi_{sy\mathcal{L}}(\mathsf{a}_{w_{0,\mathcal{L}}}\mathsf{a}_{(sy)^{-1}})\pi_{sy\mathcal{L}}(\mathsf{a}_s^2)\pi_{\mathcal{L}}(\mathsf{a}_{sy}\mathsf{a}_{w_{0,\mathcal{L}}})\\ & = \pi_{\mathcal{L}}(\mathsf{a}_{w_{0,\mathcal{L}}}\mathsf{a}_{(sy)^{-1}}\mathsf{a}_{sy}\mathsf{a}_{w_{0,\mathcal{L}}})\\ & = \pi_{\mathcal{L}}(\mathsf{a}_{syw_{0,\mathcal{L}}}^2)\\ & = \tilde{T}_{w_{0,\mathcal{L}}}^2.\end{aligned}$$ ◻ ### The action of the full twist on $\mathrm{Perv}_{\mathbb{F}_q}(G/U)$ **Proposition 65**. *The endomorphism $\iota^2$ on $K_0(\mathcal{A})$ satisfies $P(\iota^2, v) = 0$.* *Proof.* By Theorem [Theorem 62](#thm:k0injective){reference-type="ref" reference="thm:k0injective"}, it is enough to show that $P(\Phi_{w_0}^2, v) = 0$ as an endomorphism of $K_0(\mathrm{Perv}_{\mathbb{F}_q}(G/U))$. Now recall that the endomorphism $\Phi_{w_0} : K_0(G/U) \to K_0(G/U)$ agrees with right convolution with the Kazhdan-Laumon sheaf $\overline{K(w_0)}$. Since the convolution $D^b(G/U) \times D^b(G/U \times G/U) \to D^b(G/U)$ is a triangulated functor, it is enough to show that $P([\overline{K(w_0)} * \overline{K(w_0)}], v) = 0$ in $K_0(G/U \times G/U)$. Let $1_{\mathcal{L}}$ be the idempotent in $\mathcal{H}_{\mathfrak{o}}$ corresponding to the $\mathcal{L}$-monodromic subalgebra; in loc. cit. we show that $$\begin{aligned} \pi_{\mathcal{L}}(\mathsf{a}_s) & = \begin{cases} -v\tilde{T}_{s}^{-1}1_{\mathcal{L}} & s \in W_{\mathcal{L}}^\circ\\ -\tilde{T}_s1_{\mathcal{L}} & s \not\in W_{\mathcal{L}}^\circ.
\end{cases} \end{aligned}$$ It is a straightforward calculation from the above to show that $\pi_{\mathcal{L}}([\overline{K(w_0)} * \overline{K(w_0)}]) = v^{2\ell(y)}\tilde{T}_{y}^{-2}1_{\mathcal{L}} \in \mathcal{H}_{\mathcal{L}}$, where $y$ is the longest element of $W_{\mathcal{L}}^\circ$. By the main result of [@lusztig_yun_2020], the monodromic Hecke algebra $\mathcal{H}_{\mathcal{L}}$ is isomorphic to $\mathcal{H}_{W_{\mathcal{L}}^\circ}$, with full twists on each side being identified as in . By 5.12.2 of [@Lbook], which identifies the eigenvalues of the full twist in the regular representation, we have that $P(v^{2\ell(y)}\tilde{T}_{y}^{-2}, v) = 0$ in $\mathcal{H}_{\mathcal{L}}$. Finally, we note that by , if a polynomial is satisfied by $\pi_{\mathcal{L}}([\overline{K(w_0)} * \overline{K(w_0)}])$ in each $\mathcal{H}_{\mathfrak{o}}$, then it is also satisfied in $K_0(G/U \times G/U)$; this completes the proof of the proposition. ◻ ## Completing the proof of ### Proof of Let $a \in K_0(\mathcal{A}_{\mathbb{F}_q})$. We can write $p(v) = \tilde{P}(1, v) = \tilde{P}(x, v) + r(x, v)(x - 1)$ for some $r(x, v) \in \mathbb{Z}[x, v]$. So if $$\begin{aligned} a_0 & = \tilde{P}(\iota^2, v)a\\ a_1 & = r(\iota^2, v)(\iota^2 - 1)a,\end{aligned}$$ then $a_0 + a_1 = p(v)a$, so it suffices to show that $a_0, a_1 \in V^{\mathrm{fp}}_{p(v)}(\mathcal{A}_{\mathbb{F}_q})$. First we show this for $a_0$. We claim that for any $w \in W$ and $s \in S$, $\Phi_{s}^2((a_0)_w) = (a_0)_w$. Since $a_0 = \tilde{P}(\Phi_{w_0}^2, v)a$, this will follow from the fact that for any $s \in S$, $(\Phi_s^2 - 1)\tilde{P}(\Phi_{w_0}^2, v) = 0$ as an endomorphism of $K_0(G/U)$. By , this relation holds if and only if it holds after applying $\pi_{\mathcal{L}}$ for any character sheaf $\mathcal{L}$. By , this reduces to showing that if $\tilde{P}(\tilde{T}_{w_0}^2, v) \neq 0$, then $(\tilde{T}_s^2 - 1)\tilde{P}(\tilde{T}_{w_0}^2, v) = 0$ for any $s \in S$.
We can rephrase this as saying that if the full twist acts by the eigenvalue $1$, then so does $\tilde{T}_{s}^2$ for every $s \in S$. Indeed, this follows from the classification of irreducible representations of Hecke algebras, and it is shown directly in 11.5.3 of [@P]. Now note that by , we have $(a_0)_{sw} - \Phi_s(a_0)_w = (\Phi_s^2 - 1)b$ for some $b$. Using the relation $(\Phi_s + v^2)(\Phi_s^2 - 1) = 0$ from Proposition 6.2.1 of [@P] (which gives $(\Phi_s^2 - 1)^2 = (v^4 - 1)(\Phi_s^2 - 1)$, since $\Phi_s(\Phi_s^2 - 1) = -v^2(\Phi_s^2 - 1)$ and hence $\Phi_s^2(\Phi_s^2 - 1) = v^4(\Phi_s^2 - 1)$) and applying $\Phi_s^2 - 1$ to both sides, we then get $$\begin{aligned} (\Phi_s^2 - 1)((a_0)_{sw} - \Phi_s(a_0)_w) & = (\Phi_s^2 - 1)^2b\\ & =(v^4 - 1)(\Phi_{s}^2 - 1)b\\ & = (v^4 - 1)((a_0)_{sw} - \Phi_s(a_0)_w). \end{aligned}$$ Since $\Phi_s^2$ fixes both $(a_0)_{sw}$ and $(a_0)_w$, the left-hand side vanishes, so $(v^4 - 1)((a_0)_{sw} - \Phi_s(a_0)_w) = 0$, meaning $(a_0)_{sw} = \Phi_s(a_0)_w$. This means $a_0 = j_{w!}j_w^*(a_0)$ in $K_0(\mathcal{A}_{\mathbb{F}_q})$ for any $w \in W$, and so $a_0 \in V^{\mathrm{fp}}_{p(v)}$. Now it only remains to observe that $a_1 \in V^{\mathrm{fp}}_{p(v)}$. By the definition of $a_1$, $\tilde{P}(\Phi_{w_0}^2, v)a_1 = 0$. This means we can then apply the exact same argument as in the proof of to get that $a_1 \in V^{\mathrm{fp}}_{p(v)}$. Then since $a_0$ and $a_1$ both lie in $V^{\mathrm{fp}}_{p(v)}$ and $p(v)a = a_0 + a_1$, we have that $a \in V^{\mathrm{fp}}_{p(v)}$. # Construction of Kazhdan-Laumon representations {#sec:construction} ## The Grothendieck-Lefschetz pairing ### The original proposal in [@KL] In Section 3 of [@KL], the proposed construction of representations is as follows. The authors begin by making Conjecture [Conjecture 1](#conj:kl){reference-type="ref" reference="conj:kl"}, which we now know to be false by Bezrukavnikov and Polishchuk's appendix to [@P].
However, for objects of finite projective dimension, one can still define the Grothendieck-Lefschetz-type pairing in the manner they describe. First, they define a Verdier duality functor $\mathbb{D} : \mathcal{A}_\psi \to \mathcal{A}_{\psi^{-1}}$, where $\mathcal{A}_{\psi} = \mathcal{A}$ as we have been using it throughout this paper, while $\mathcal{A}_{\psi^{-1}}$ is the same category but using the additive character $\psi^{-1}$ instead of $\psi$ (where $\psi$ is the additive character we chose in Section [2.1.2](#sec:ladic){reference-type="ref" reference="sec:ladic"}). They note that for any $A \in\mathcal{A}_{w, \mathbb{F}_q}$ and $B \in (\mathcal{A}_{\psi^{-1}})_{w, \mathbb{F}_q}$ and any $i \in \mathbb{Z}$, the isomorphisms $\psi_A : \mathcal{F}_w\mathrm{Fr}^*A \to A$ and $\psi_B : \mathcal{F}_w\mathrm{Fr}^*B \to B$ give an endomorphism $\psi_{A,B}^i$ of the vector space $\mathrm{Ext}_{\mathcal{A}}^i(A, \mathbb{D}B)$ given by the composition $$\mathrm{Ext}_{\mathcal{A}}^i(A, \mathbb{D}B) \rightarrow \mathrm{Ext}_{\mathcal{A}}^i(\mathcal{F}_w\mathrm{Fr}^*A, \mathcal{F}_{w}\mathrm{Fr}^*\mathbb{D}B)\rightarrow \mathrm{Ext}_{\mathcal{A}}^i(\mathcal{F}_w A, \mathcal{F}_w \mathbb{D}B) \rightarrow \mathrm{Ext}_{\mathcal{A}}^i(A, \mathbb{D}B)$$ where the first map arises from the morphisms $\psi_A$ and $\psi_B$, the next from the canonical isomorphisms $\mathrm{Fr}^*A \to A$ and $\mathrm{Fr}^*B \to B$, and the last from the fact that $\mathcal{F}_w$ is invertible. This map is also described explicitly in 4.3.1 of [@BP]. We can then define, for $A$ having finite projective dimension and arbitrary $B$, the value $$\begin{aligned} \langle [A], [B]\rangle = \sum_{i\in \mathbb{Z}} (-1)^i \mathrm{tr}(\psi^i_{A,B}, \mathrm{Ext}^i_{\mathcal{A}}(A, \mathbb{D}B)).\end{aligned}$$ This is clearly well-defined at the level of Grothendieck groups.
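For intuition, here is a hedged sketch (our own gloss, not taken from [@KL] or [@BP]) of how this pairing interacts with Tate twists, assuming the standard convention that Frobenius acts on the Tate twist $\overline{\mathbb{Q}}_\ell(-\tfrac{1}{2})$ by multiplication by $q^{\frac{1}{2}}$:

```latex
% Sketch (assumption: Frobenius acts on the Tate twist by q^{1/2}):
% twisting A rescales its Weil-structure morphism by q^{1/2}, hence
% each endomorphism \psi^i_{A,B} rescales by the same factor, and the
% alternating sum of traces rescales accordingly.
\psi^i_{A(-\frac{1}{2}),\,B} = q^{\frac{1}{2}}\,\psi^i_{A,B}
\quad\Longrightarrow\quad
\langle [A(-\tfrac{1}{2})], [B]\rangle
= \sum_{i \in \mathbb{Z}} (-1)^i\, q^{\frac{1}{2}}\,
\operatorname{tr}\bigl(\psi^i_{A,B},\ \operatorname{Ext}^i_{\mathcal{A}}(A, \mathbb{D}B)\bigr)
= q^{\frac{1}{2}}\,\langle [A], [B]\rangle.
```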
We will now explain how to use the result in to extend Kazhdan and Laumon's pairing, which as of this point is only defined for objects of finite projective dimension, to the full Grothendieck group in the monodromic case. It is a straightforward computation that for any such $A$ and $B$, we have $$\begin{aligned} \langle [A(-\tfrac{1}{2})], [B]\rangle = q^{\frac{1}{2}}\langle [A], [B]\rangle,\end{aligned}$$ so the pairing is $\mathbb{Z}[v, v^{-1}]$-linear, where $\mathbb{Z}[v, v^{-1}]$ acts on the target field in such a way that $v$ acts by multiplication by $q^{\frac{1}{2}}$. ### A pairing on $K_0(\mathcal{A}_{w, \mathbb{F}_q}^{\mathcal{L}}) \otimes \mathbb{C}$ We can do the same construction on the monodromic Kazhdan-Laumon category $\mathcal{A}_{w, \mathbb{F}_q}^{\mathcal{L}}$ and its Grothendieck group. Then using $\mathbb{Z}[v, v^{-1}]$-linearity, the above definition gives us a pairing which is well-defined on elements of $V^{\mathrm{fp}}$. Now we note that the polynomial $p(v)$ evaluated at $v = q^{\frac{1}{2}}$ is nonzero, so we can extend this pairing linearly to the localization $V^{\mathrm{fp}}_{p(v)}$. This then gives that the pairing is well-defined on all of $V^{\mathrm{fp}}_{p(v)}\otimes \mathbb{C}$, where in the tensor product we send $v \mapsto q^{\frac{1}{2}}$. But by , $V^{\mathrm{fp}}_{p(v)}\otimes \mathbb{C}$ is all of $K_0(\mathcal{A}_{w, \mathbb{F}_q}^{\mathcal{L}}) \otimes \mathbb{C}$, so we can indeed define the pairing on this entire vector space; we will now explain how to use this to construct the Kazhdan-Laumon representations.
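To make the localization step concrete, here is a minimal sketch in our own notation (the representation $a = p(v)^{-N}b$ is an assumption of the sketch, not spelled out in the source): an element $a \in V^{\mathrm{fp}}_{p(v)}$ can be written with $b = p(v)^{N} a \in V^{\mathrm{fp}}$ for some $N \geq 0$, and since $p(q^{\frac{1}{2}}) \in \mathbb{C}^\times$, one may set

```latex
% Sketch: extend the pairing from V^{fp} to the localization by
% dividing by the nonzero scalar p(q^{1/2})^N.
\langle a, c \rangle \;:=\; p\bigl(q^{\frac{1}{2}}\bigr)^{-N}\,\langle b, c \rangle,
\qquad\text{where } b = p(v)^{N} a \in V^{\mathrm{fp}}.
```

Independence of the chosen $N$ follows from the $\mathbb{Z}[v, v^{-1}]$-linearity of the pairing on $V^{\mathrm{fp}}$, under which $v$ acts on $\mathbb{C}$ by $q^{\frac{1}{2}}$.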
## Construction of representations As Kazhdan and Laumon explain in [@KL], the category $\mathcal{A}_{w, \mathbb{F}_q}$ is defined so that $K_0(\mathcal{A}_{w, \mathbb{F}_q})$ carries commuting actions of $G(\mathbb{F}_q)$ and $T(w)$, where $T(w)$ is the (usually non-split) torus of $G$ corresponding to $w \in W$, defined by $$\begin{aligned} T(w) & = \{t \in T(\overline{\mathbb{F}_q})~|~\mathrm{Fr}^*(t) = w(t)\}.\end{aligned}$$ We note that $K_0(\mathcal{A}_{w, \mathbb{F}_q}^{\mathcal{L}})$ then also carries commuting actions of $G(\mathbb{F}_q)$ and $T(w)$, where $T(w)$ acts by its character $\theta$ which corresponds to the data of the character sheaf $\mathcal{L}$. As we explained, the pairing $\langle \cdot, \cdot \rangle$ is well-defined on $K_0(\mathcal{A}_{w, \mathbb{F}_q}^{\mathcal{L}})\otimes \mathbb{C}$, and so we can define $K_{w}^{\mathcal{L}}$ to be its kernel. Then the Kazhdan-Laumon representation corresponding to the pair $(T(w), \theta)$ which was originally sought in [@KL] is the vector space $V_{w,\mathcal{L}} = (K_0(\mathcal{A}_{w, \mathbb{F}_q}^{\mathcal{L}}) \otimes \mathbb{C})/K_w^{\mathcal{L}}$. In future work, we hope to decompose this vector space explicitly into irreducibles and compute the characters of $V_{w,\mathcal{L}}$, generalizing the work which was done in [@BP] for quasi-regular characters. [^1]: The variable $u$ used throughout [@P] is replaced in the present work by $v^2$.
--- abstract: | We study $\mathcal{D}$-modules and related invariants on the space of $2\times 2\times n$ hypermatrices for $n\geq 3$, which has finitely many orbits under the action of $G=\operatorname{GL}_2(\mathbb{C}) \times \operatorname{GL}_2(\mathbb{C}) \times \operatorname{GL}_n(\mathbb{C})$. We describe the category of coherent $G$-equivariant $\mathcal{D}$-modules as the category of representations of a quiver with relations. We classify the simple equivariant $\mathcal{D}$-modules, determine their characteristic cycles and find special representations that appear in their $G$-structures. We determine the explicit $\mathcal{D}$-module structure of the local cohomology groups with supports given by orbit closures. As a consequence, we calculate the Lyubeznik numbers and intersection cohomology groups of the orbit closures. All but one of the orbit closures have rational singularities: we use local cohomology to prove that the one exception is neither normal nor Cohen--Macaulay. While our results display special behavior in the cases $n=3$ and $n=4$, they are completely uniform for $n\geq 5$. author: - András C. Lőrincz - Michael Perlman title: Equivariant $\mathcal{D}$-modules on $2\times 2\times n$ hypermatrices --- # Introduction {#sec:orbits} The roots behind hypermatrices can be traced back to Cayley's hyperdeterminant, and this classical theory has continued to generate research since then -- see [@gkz] and [@landsberg2012tensors] and references therein. The theory has a wide range of applications, since tensors and (multi-)linearity are universal features in the sciences. In particular, they have recently gained more traction due to their use in complexity theory. Spaces of hypermatrices have large groups of symmetries arising from natural actions of products of general linear groups, which correspond to changes of basis.
While for matrices the rank completely parametrizes the orbits, this is not true for hypermatrices in general, and there can be an infinite number of them. In fact, the ones that have finitely many orbits are classified (see [@satokimura; @kac]), and correspond to the space of $m\times n$ matrices with $m\geq n \geq 1$, the $2\times 2\times n$ hypermatrices with $n\geq 2$, and the $2\times 3 \times n$ hypermatrices with $n\geq 3$. In this paper, we study the geometry of the space of $2\times 2\times n$ hypermatrices (for $n\geq 3$) from a $\mathcal{D}$-module-theoretic point of view. Similar work was undertaken in the case of $m\times n$ matrices [@raicuWeymanLocal; @lorcla], as well as in the case of $2\times 2 \times 2$ hypermatrices [@perlman2020equivariant]. Since the orbit decomposition yields a natural finite stratification of these spaces, there are a number of powerful techniques that can be used (cf. [@lHorincz2019categories]) to compute explicit objects and invariants that are, in general, notoriously difficult to determine. This work is part of an ongoing effort to investigate categories of equivariant coherent $\mathcal{D}$-modules on spaces with finitely many orbits under actions of linear algebraic groups, such as flag varieties, toric varieties, and nilpotent cones. It follows from Luna's Slice Theorem [@luna] that, in the case of a smooth affine variety under the action of a reductive group, this category is equivalent to the corresponding category of equivariant $\mathcal{D}$-modules on the normal space to a point in the closed orbit (see [@lHorincz2019categories Proposition 4.6]). Thus, the study of categories of equivariant coherent $\mathcal{D}$-modules on such affine varieties reduces to the case of representations of reductive groups with finitely many orbits.
The spaces of $2\times 2\times n$ hypermatrices are examples of Vinberg representations, which are the *irreducible* representations with finitely many orbits (see [@landsberg2012tensors Section 10.1], cf. also [@satokimura; @kac]). Such representations are classified and, besides a couple of exceptions, can be described by Dynkin diagrams with a choice of simple root -- the $2\times 2\times n$ case corresponds to the type $\mathbb{D}_{n+2}$ with the choice of the node with degree $3$. Quiver categories of equivariant $\mathcal{D}$-modules on Vinberg representations have been studied in several articles [@binary; @lHorincz2019categories; @perlman2020equivariant; @senary; @subexceptional]. We now introduce the basic setup. Let $A=\langle a_1, a_2\rangle$ and $B=\langle b_1, b_2\rangle$ be two-dimensional complex vector spaces, and let $C=\langle c_1,\cdots,c_n\rangle$ be an $n$-dimensional complex vector space with $n\geq 2$. We consider the space $V=A\otimes B\otimes C$ of $2\times 2\times n$ hypermatrices, endowed with the natural action of $\operatorname{GL}=\operatorname{GL}(A)\times \operatorname{GL}(B)\times \operatorname{GL}(C)$. The $\operatorname{GL}$-orbits of $V$ are described below (see [@landsberg2012tensors Table 10.3.1] or [@parfenov]). - The zero orbit $O_0=\{0\}$. - The orbit $O_1$ of dimension $n+2$, with representative $a_1\otimes b_1\otimes c_1$. - The orbit $O_2$ of dimension $n+3$, with representative $a_1\otimes b_1\otimes c_1+a_2\otimes b_2\otimes c_1$. - The orbit $O_3$ of dimension $2n+1$, with representative $a_1\otimes b_1\otimes c_1+a_1\otimes b_2\otimes c_2$. - The orbit $O_4$ of dimension $2n+1$, with representative $a_1\otimes b_1\otimes c_1+a_2\otimes b_1\otimes c_2$. - The orbit $O_5$ of dimension $2n+3$, with representative $a_1\otimes(b_1\otimes c_1+b_2\otimes c_2)+a_2\otimes b_1\otimes c_2$. - The orbit $O_6$ of dimension $2n+4$, with representative $a_1\otimes b_1\otimes c_1+a_2\otimes b_2\otimes c_2$. 
- The orbit $O_7$ of dimension $3n+2$, with representative $a_1\otimes(b_1\otimes c_1+b_2\otimes c_3)+a_2\otimes b_1\otimes c_2$. - The orbit $O_8$ of dimension $3n+3$, with representative $a_1\otimes(b_1\otimes c_1+b_2\otimes c_2)+a_2\otimes(b_1\otimes c_2+b_2\otimes c_3)$. - The orbit $O_9$ of dimension $4n$, with representative $a_1\otimes(b_1\otimes c_1+b_2\otimes c_3)+a_2\otimes(b_1\otimes c_2+b_2\otimes c_4)$. For $n=2$ the orbit $O_6$ is dense, $\overline{O}_5$ is a hypersurface, and the orbits $O_7$--$O_9$ do not appear. For $n=3$ the orbit $O_8$ is dense, $\overline{O}_7$ is a hypersurface, and the orbit $O_9$ does not appear. For $n\geq 4$ all orbits appear, and $\overline{O}_8$ is a hypersurface if and only if $n=4$. We recall that for positive integers $i\leq 2,j \leq 2,k \leq n$ the subspace variety $\textnormal{Sub}_{ijk} \subset V$ is the closed subset of tensors $v \in V$ such that there exist subspaces $A' \subset A, B'\subset B, C'\subset C$ of dimensions $i,j,k$ respectively, with $v \in A' \otimes B' \otimes C'$. In our case $\overline{O}_1 = \textnormal{Sub}_{111}, \overline{O}_2 = \textnormal{Sub}_{221},\overline{O}_3 = \textnormal{Sub}_{122},\overline{O}_4 = \textnormal{Sub}_{212}, \overline{O}_6 = \textnormal{Sub}_{222}, \overline{O}_8=\textnormal{Sub}_{223}, \overline{O}_9 = \textnormal{Sub}_{224}$. In particular, $\overline{O}_1$ is the affine cone over the Segre variety $\operatorname{Seg}(\mathbb{P}(A)\times \mathbb{P}(B)\times \mathbb{P}(C))$. The orbit closures $\overline{O}_5$ and $\overline{O}_7$ have descriptions as affine cones over the tangential and projectively dual varieties of this Segre variety.
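The orbit dimensions listed above can be checked numerically: the tangent space to $\operatorname{GL}\cdot v$ at a representative $v$ is the image of the Lie algebra $\mathfrak{gl}_2\oplus\mathfrak{gl}_2\oplus\mathfrak{gl}_n$ acting on $v$, so its dimension is the rank of an explicit linear map. A minimal sketch in Python (assuming NumPy; the representatives are those of the list above, with indices shifted to start at $0$):

```python
import numpy as np

n = 5  # any n >= 4, so that all nine nonzero orbits appear

def t(i, j, k):
    """Elementary tensor a_i (x) b_j (x) c_k as a (2, 2, n) array, 0-indexed."""
    v = np.zeros((2, 2, n))
    v[i, j, k] = 1.0
    return v

# Orbit representatives, transcribed from the list above (indices shifted by 1).
reps = {
    1: t(0, 0, 0),
    2: t(0, 0, 0) + t(1, 1, 0),
    3: t(0, 0, 0) + t(0, 1, 1),
    4: t(0, 0, 0) + t(1, 0, 1),
    5: t(0, 0, 0) + t(0, 1, 1) + t(1, 0, 1),
    6: t(0, 0, 0) + t(1, 1, 1),
    7: t(0, 0, 0) + t(0, 1, 2) + t(1, 0, 1),
    8: t(0, 0, 0) + t(0, 1, 1) + t(1, 0, 1) + t(1, 1, 2),
    9: t(0, 0, 0) + t(0, 1, 2) + t(1, 0, 1) + t(1, 1, 3),
}

def orbit_dim(v):
    """dim(GL.v) = rank of the Lie algebra action gl_2 + gl_2 + gl_n -> V at v."""
    tangent = []
    for d, axis in [(2, 0), (2, 1), (n, 2)]:
        for p in range(d):
            for q in range(d):
                E = np.zeros((d, d))
                E[p, q] = 1.0
                # the elementary matrix E acts on the axis-th tensor factor of v
                Ev = np.moveaxis(np.tensordot(E, v, axes=([1], [axis])), 0, axis)
                tangent.append(Ev.ravel())
    return np.linalg.matrix_rank(np.array(tangent))

expected = {1: n + 2, 2: n + 3, 3: 2 * n + 1, 4: 2 * n + 1, 5: 2 * n + 3,
            6: 2 * n + 4, 7: 3 * n + 2, 8: 3 * n + 3, 9: 4 * n}
assert all(orbit_dim(v) == expected[p] for p, v in reps.items())
```

For $n=3$ the same script, with the $O_9$ entry removed, confirms the remaining dimensions, and $O_8$ is then dense since $3n+3=4n$.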
Using this (or see [@parfenov]), the following is the Hasse diagram with respect to containment of orbit closures for $n\geq 4$ (for $n=3$ the diagram is obtained by removing $O_9$): $$\xymatrix@R-1.4pc{ & O_9 \ar@{-}[d] & \\ & O_8 \ar@{-}[d]& \\ & O_7 \ar@{-}[d]& \\ & O_6 \ar@{-}[d]& \\ & O_5 \ar@{-}[dl]\ar@{-}[d]\ar@{-}[dr]& \\ O_4 \ar@{-}[dr] & O_2 \ar@{-}[d] & O_3 \ar@{-}[dl] \\ & O_1 \ar@{-}[d]& \\ & O_0 & \\ }$$ Due to the exceptional isomorphism $\operatorname{Spin}_4(\mathbb{C}) \cong \operatorname{SL}_2(\mathbb{C}) \times \operatorname{SL}_2(\mathbb{C})$, the $\operatorname{GL}$-representation $V$ is equivalent to the action of $\operatorname{SO}_4(\mathbb{C})\times \operatorname{GL}_n(\mathbb{C})$ on the space of $4\times n$ matrices $X$, i.e. the action maps into $\operatorname{GL}(V)$ have the same image. As the latter action is sometimes more convenient to handle, we will use this identification several times throughout the article. The orbit closures can be defined set-theoretically by rank conditions imposed on $X$ and $X^t\cdot X$, with the exception of $\overline{O}_3$ and $\overline{O}_4$, since such conditions define only their union. ## Summary of results and organization {#summary-of-results-and-organization .unnumbered} The paper is organized as follows. In Section [2](#sec:prelim){reference-type="ref" reference="sec:prelim"}, we give the necessary background and introduce some techniques that are used subsequently in the article. In Section [3](#sec:steps){reference-type="ref" reference="sec:steps"}, we derive our preliminary results.
In particular, we classify the simple objects in the category of $\operatorname{GL}$-equivariant $\mathcal{D}$-modules $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$ in Proposition [Proposition 11](#componentGrp){reference-type="ref" reference="componentGrp"} and describe how the (twisted) Fourier transform permutes them (Proposition [Proposition 12](#prop:fourier){reference-type="ref" reference="prop:fourier"}). We further determine the characteristic cycles of the simples in $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$ (Proposition [Proposition 16](#prop:char566){reference-type="ref" reference="prop:char566"}) and determine the local cohomology modules supported in $\overline{O}_1$ (Theorem [Theorem 25](#BettiThm){reference-type="ref" reference="BettiThm"}). In Section [4](#sec:eqcat){reference-type="ref" reference="sec:eqcat"}, we describe the category of $\operatorname{GL}$-equivariant $\mathcal{D}$-modules $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$ as the category of finite-dimensional representations of a quiver (with relations). While the cases $n=3$ (Theorem [Theorem 29](#thm:quiv3){reference-type="ref" reference="thm:quiv3"}) and $n=4$ (Theorem [Theorem 30](#thm:quiv4){reference-type="ref" reference="thm:quiv4"}) are more involved as the orbits are packed more tightly together, the description simplifies and is uniform for $n\geq 5$ (Theorem [Theorem 31](#quiverN5){reference-type="ref" reference="quiverN5"}). In Section [5](#sec:o5){reference-type="ref" reference="sec:o5"} we compute the explicit $\mathcal{D}$-module structure of the local cohomology modules supported in $\overline{O}_5$ (Theorem [Theorem 33](#loc05){reference-type="ref" reference="loc05"}), while in Section [6](#sec:O7){reference-type="ref" reference="sec:O7"} we compute those supported in $\overline{O}_7$ (Theorem [Theorem 41](#loc07){reference-type="ref" reference="loc07"}).
Furthermore, we explain how the latter result can be used to show that $\overline{O}_7$, unlike the other orbit closures, is neither normal nor Cohen--Macaulay, see Corollary [Corollary 42](#cor:sing){reference-type="ref" reference="cor:sing"}. In Section [7](#sec:characters){reference-type="ref" reference="sec:characters"} we find special representations (so-called witness weights) for each simple $\operatorname{GL}$-equivariant $\mathcal{D}$-module, see Theorem [Theorem 62](#thm:witnessngeq4){reference-type="ref" reference="thm:witnessngeq4"}. This allows the determination of the composition factors of any equivariant $\mathcal{D}$-module through representation-theoretic means, by just finding multiplicities of these witness weights. Finally, in Section [8](#sec:lyub){reference-type="ref" reference="sec:lyub"} we use our results obtained on local cohomology modules to determine the Lyubeznik numbers and intersection cohomology groups of each orbit closure. # Preliminaries {#sec:prelim} ## Background on $\mathcal{D}$-modules and functors {#sec:Dmod} Our $\mathcal{D}$-modules are left $\mathcal{D}$-modules. Given a morphism $f:X\to Y$ of smooth varieties, we write $f_+$ for $\mathcal{D}$-module pushforward along $f$ [@htt Section 1.5]. Let $X=\mathbb{A}^N$. For a closed subvariety $Z\subset X$, we denote by $L_Z$ the intersection cohomology $\mathcal{D}_X$-module of Brylinski and Kashiwara [@htt Section 3.4], which corresponds to the intersection cohomology complex $IC_Z$ via the Riemann--Hilbert correspondence. Throughout we denote $E=L_{\{0\}}$ and $S=L_X$, where the latter stands for the coordinate ring of $X$. When $X$ is the space of $2\times 2\times n$ hypermatrices, throughout the article we use the notation $D_p = L_{\overline{O}_p}$, for all $p=0,\dots, 9$. Suppose an algebraic group $G$ acts on $X$, and let $\mathfrak{g}$ denote the Lie algebra of $G$.
A $\mathcal{D}_X$-module $M$ is equivariant if we have a $\mathcal{D}_{G\times X}$-isomorphism $\tau: p^*M \rightarrow m^*M$, where $p: G\times X\to X$ denotes the projection and $m: G\times X\to X$ the map defining the action, with $\tau$ satisfying the usual compatibility conditions (see [@htt Definition 11.5.2]). On the other hand, differentiating the $G$-action on $X$ we obtain a map from $\mathfrak{g}$ to the space of algebraic vector fields on $X$, which in turn yields a map $\mathfrak{g} \to \mathcal{D}_X$. This gives an equivalent way of looking at equivariance: $M$ admits an algebraic $G$-action whose differential coincides with the $\mathfrak{g}$-action induced by the natural map $\mathfrak{g} \to \mathcal{D}_X$. Since $G$ is connected, the category $\textnormal{mod}_{G}(\mathcal{D}_X)$ of equivariant $\mathcal{D}$-modules is a full subcategory of the category of all coherent $\mathcal{D}$-modules, which is closed under taking submodules and quotients. For a $G$-stable closed subset $Z$ of $X$, we denote by $\textnormal{mod}_{G}^Z(\mathcal{D}_X)$ the full subcategory of $\textnormal{mod}_{G}(\mathcal{D}_X)$ consisting of equivariant $\mathcal{D}$-modules supported inside $Z$. When $G$ acts on $X$ with finitely many orbits, each module in $\textnormal{mod}_{G}(\mathcal{D}_X)$ is regular and holonomic [@htt Theorem 11.6.1]. Therefore, by the Riemann--Hilbert correspondence the category $\textnormal{mod}_{G}(\mathcal{D}_X)$ is equivalent to the category of $G$-equivariant perverse sheaves on $X$. Moreover, this category is equivalent to the category of finitely generated modules over a finite dimensional $\mathbb{C}$-algebra (see [@vilonen Theorem 4.3] or [@lHorincz2019categories Theorem 3.4]), in other words, to the category of finite-dimensional representations of a quiver with relations. 
When $G$ is reductive acting linearly on $X$, another important functor on $\operatorname{mod}_G(\mathcal{D}_X)$ comes from considering the (twisted) Fourier transform [@lHorincz2019categories Section 4.3]. This functor gives an involution $$\mathcal{F} : \operatorname{mod}_G(\mathcal{D}_X) \xrightarrow{\sim} \operatorname{mod}_G(\mathcal{D}_X).$$ For a $G$-stable (locally) closed subset $Z$ of $X$, we can consider for each $i\geq 0$ and $M\in\operatorname{mod}_G(\mathcal{D}_X)$ the $i$-th local cohomology module $H^i_Z(M)$ of $M$ with support in $Z$, which is an object in $\textnormal{mod}_{G}^Z(\mathcal{D}_X)$. Similarly, for a smooth variety $X$ we write $\mathscr{H}^i_Z(\mathcal{M})$ for the local cohomology sheaves associated to a sheaf of $\mathcal{D}_X$-modules $\mathcal{M}$. When $Z$ is a divisor, there is a short exact sequence of $\mathcal{D}_X$-modules $$\label{comphypn2} 0\longrightarrow \mathcal{O}_X\longrightarrow \mathcal{O}_X(\ast Z)\longrightarrow \mathscr{H}^1_Z(\mathcal{O}_X)\longrightarrow 0,$$ where we denote by $\mathcal{O}_X(\ast Z)$ the localization of $\mathcal{O}_X$ at the divisor $Z$. ## Local cohomology preliminaries Let $Z'\subset Z$ be closed subvarieties of $X=\mathbb{A}^N$, and let $Y=Z\setminus Z'$. We set $c=\operatorname{codim}(Z,\mathbb{A}^N)$. There is a long exact sequence of $\mathcal{D}$-modules (see [@hartshornelocal Proposition 1.9]): $$\label{locallyclosedsequence} 0 \to H^c_Z(S) \to H^c_Y(S)\to H^{c+1}_{Z'}(S)\to H^{c+1}_Z(S) \to H^{c+1}_Y(S)\to H^{c+2}_{Z'}(S)\to \cdots$$ We store the following fact relating local cohomology supported at the origin to intersection cohomology. **Proposition 1**. *Let $Z \subset \mathbb{A}^N$ be the cone over an irreducible projective variety in $\mathbb{P}^{N-1}$. Put $d=\dim_{\mathbb{C}} Z$, and $c= N -d$. 
For all $i\in \mathbb{Z}$, we have $$\mbox{\large\(H_{\{0\}}^{d-i}(L_Z)\)}=\mbox{\large\(E\)}^{\mbox{\small\(\oplus \dim IH^i(Z)\)}}=\mbox{\large\(H_{\{0\}}^{i+c}(\mathcal{F}(L_Z))\)}.$$* *Proof.* The first equality follows from [@achar Theorem 2.10.3], using that $L_Z$ is self-dual. The second equality follows from [@subexceptional Proposition 5.1]. ◻ We mention a useful tool for calculating local cohomology modules that is based on invariant theory. **Lemma 2**. *Let $G$ be a reductive group acting on $X$. Take an ideal $I\subset S^G$. Then for each $i\geq 0$ we have a $\mathcal{D}_X^G$-isomorphism $$\left(H^i_{S\cdot I}(S)\right)^G \cong H^i_I(S^G).$$* *Proof.* Use the Čech-type sequence with respect to a set of generators of $I$ to calculate local cohomology, and the fact that taking $G$-invariants is exact. ◻ ## Representation theory of the general linear group {#sec:reps} Let $W$ be an $m$-dimensional complex vector space, and let $\operatorname{GL}(W)$ denote the general linear group, consisting of linear automorphisms of $W$. Irreducible representations of $\operatorname{GL}(W)$ are indexed by dominant weights $\lambda=(\lambda_1\geq \lambda_{2}\geq \cdots \geq \lambda_m)$ where $\lambda\in \mathbb{Z}^m$. We write $\mathbb{Z}^m_{\operatorname{dom}}$ for the set of dominant weights with $m$ entries, and we write $\mathbb{S}_{\lambda}W$ for the irreducible representation corresponding to $\lambda$, where $\mathbb{S}_{\lambda}(-)$ is the corresponding Schur functor. For example, for $d\geq 0$ we have $\mathbb{S}_{(d)}(W)=\operatorname{Sym}^d(W)$ and $\mathbb{S}_{(1^d)}(W)=\bigwedge^d(W)$. Here, as we often do throughout the article, we omit trailing zeros in dominant weights, e.g. $(d)=(d,0,\cdots,0)$, and we use exponents to denote the multiplicity of an entry, e.g. $(1^m)=(1,\cdots, 1)$. Given a representation $U$ of dimension $d$, we write $\operatorname{det}(U)=\bigwedge^d(U)$ for the highest nonzero exterior power of $U$. 
Given $\lambda\in \mathbb{Z}^m_{\operatorname{dom}}$ we write $|\lambda|=\lambda_1+\cdots +\lambda_m$, and we say that $\lambda$ is a partition if $\lambda_m\geq 0$. We have an isomorphism $\mathbb{S}_{\lambda}(W)^{\ast}\cong \mathbb{S}_{\lambda^{\ast}}(W)$, where $\lambda^{\ast}=(-\lambda_m,\cdots, -\lambda_1)$. Throughout the article, $A$ and $B$ are two-dimensional complex vector spaces, and $C$ is an $n$-dimensional vector space for some $n\geq 3$. We write $\operatorname{GL}=\operatorname{GL}(A)\times \operatorname{GL}(B)\times \operatorname{GL}(C)$. Irreducible representations of $\operatorname{GL}$ are of the form $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$, where $\alpha,\beta\in \mathbb{Z}^2_{\operatorname{dom}}$ and $\gamma\in \mathbb{Z}^n_{\operatorname{dom}}$. Given a representation $U$ of $\operatorname{GL}$, we write $[U:\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C]$ for the multiplicity of $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ as a summand of $U$. We note that the multiplicity of an irreducible $\operatorname{GL}$-representation in a module in $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$ is finite [@lHorincz2019categories Proposition 3.14]. We write $V=A\otimes B\otimes C$ and $S$ for the polynomial ring $S=\operatorname{Sym}(V)$. It is a difficult problem in general to find the $\operatorname{GL}$-irreducible summands of $S$ (see [@weyman pg. 60-61]). 
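One useful consistency check comes from a flattening: by the classical Cauchy formula, $\dim\operatorname{Sym}^d V=\sum_{\gamma}\dim\mathbb{S}_{\gamma}\mathbb{C}^4\cdot\dim\mathbb{S}_{\gamma}\mathbb{C}^n$, the sum running over partitions $\gamma$ of $d$ with at most $\min(4,n)$ parts. A minimal Python sketch of this check (the Weyl dimension formula used here is a standard fact, not stated in the text):

```python
from fractions import Fraction
from math import comb

def schur_dim(lam, m):
    """dim S_lambda(C^m) via the Weyl dimension formula,
    prod_{i<j} (lam_i - lam_j + j - i)/(j - i), padding lam with zeros."""
    lam = tuple(lam) + (0,) * (m - len(lam))
    d = Fraction(1)
    for i in range(m):
        for j in range(i + 1, m):
            d *= Fraction(lam[i] - lam[j] + j - i, j - i)
    return int(d)

def partitions(d, parts):
    """All partitions of d into exactly `parts` nonincreasing entries >= 0."""
    if parts == 0:
        return [()] if d == 0 else []
    return [(first,) + rest
            for first in range(d, -1, -1)
            for rest in partitions(d - first, parts - 1)
            if not rest or first >= rest[0]]

n, d = 5, 4
lhs = comb(4 * n + d - 1, d)                      # dim Sym^d(A (x) B (x) C)
rhs = sum(schur_dim(g, 4) * schur_dim(g, n)       # Cauchy over (A (x) B) (x) C
          for g in partitions(d, min(4, n)))
assert lhs == rhs  # both equal 8855 for n = 5, d = 4
```

This only tests dimension bookkeeping, of course; it gives no access to the actual $\operatorname{GL}$-irreducible summands, which is the difficult problem mentioned above.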
Nevertheless, one may obtain some information about the $\operatorname{GL}$-structure of $S$ via one of the three flattenings: $$A\otimes B\otimes C\cong (A\otimes B)\otimes C\cong A\otimes(B\otimes C)\cong B\otimes(A\otimes C).$$ For example, by the Cauchy Formula [@weyman Theorem 2.3.2] we have the following $\operatorname{GL}$-decomposition of $S$: $$\bigoplus_{\gamma=(\gamma_1\geq \gamma_2\geq \gamma_3\geq\gamma_4\geq 0)} \mathbb{S}_{\gamma}(A\otimes B)\otimes \mathbb{S}_{\gamma}C.$$ There is no general formula to decompose $\mathbb{S}_{\gamma}(A\otimes B)$, unless $\gamma_3=\gamma_4$ (or dually $\gamma_1=\gamma_2$). Upon twisting by a suitable power of the determinant representation, this is the case when $\gamma$ has at most two rows. If $\gamma$ has one row, we may apply the Cauchy Formula. If $\gamma$ has two rows, one may apply [@raicu2012secant Corollary 4.3b] to find the multiplicity of $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B$ in $\mathbb{S}_{\gamma}(A\otimes B)$, which we often do in this article. ## Equivariant $\mathcal{D}$-modules on spaces of matrices {#sec:dmodsMat} Here we discuss some information on equivariant $\mathcal{D}$-modules on spaces of matrices, and local cohomology modules with support in determinantal varieties, which we use repeatedly throughout the article. We employ the flattening of $V$ to the space of $4\times n$ matrices, and use the known $\mathcal{D}$-module structure of local cohomology with support in the determinantal varieties $Z_p$ of matrices with rank $\leq p$. Note that we have $$\label{eq:detorbits} O_0=Z_0, \; \overline{O}_2=Z_1, \; \overline{O}_6=Z_2, \, \mbox{ and }\, \overline{O}_8=Z_3 \, (\mbox{for } n\geq 3).$$ Let $n=4$ and $\operatorname{det}=\operatorname{det}(x_{i,j})$ denote the $4\times 4$ determinant on $(A\otimes B)\otimes C$.
The localization $S_{\operatorname{det}}$ of the polynomial ring at $\operatorname{det}$ is a holonomic $\mathcal{D}$-module, with composition series as follows [@raicu2016characters Theorem 1.1]: $$0\subsetneq S\subsetneq \mathcal{D}\cdot \operatorname{det}^{-1}\subsetneq \mathcal{D}\cdot \operatorname{det}^{-2}\subsetneq \mathcal{D}\cdot \operatorname{det}^{-3}\subsetneq S_{\operatorname{det}},$$ with $(\mathcal{D}\cdot \operatorname{det}^{-1})/S=D_8$, $(\mathcal{D}\cdot \operatorname{det}^{-2})/(\mathcal{D}\cdot \operatorname{det}^{-1})=D_6$, $(\mathcal{D}\cdot \operatorname{det}^{-3})/(\mathcal{D}\cdot \operatorname{det}^{-2})=D_2$, and $S_{\operatorname{det}}/(\mathcal{D}\cdot \operatorname{det}^{-3})=D_0$. Following [@lorcla] we define $$\label{eq:detQ} Q_3=S_{\operatorname{det}}/S, \quad Q_2=S_{\operatorname{det}}/(\mathcal{D}\cdot \operatorname{det}^{-1}),\quad Q_1=S_{\operatorname{det}}/(\mathcal{D}\cdot \operatorname{det}^{-2}).$$ With this notation, we recall the $\mathcal{D}$-module structure of local cohomology with support in $\overline{O}_2$, $\overline{O}_6$, $\overline{O}_8$ (see [@raicu2016ocal Main Theorem] and [@lorcla Theorem 1.6]): **Theorem 3**. *The following is true about local cohomology of $S$ with support in $\overline{O}_2$.* 1. *If $n=3$ the nonzero local cohomology modules are given by $$H^6_{\overline{O}_2}(S)=D_2,\quad H^7_{\overline{O}_2}(S)=D_0,\quad H^9_{\overline{O}_2}(S)=D_0.$$* 2. *If $n=4$ the nonzero local cohomology modules are given by $$H^9_{\overline{O}_2}(S)=Q_1,\quad H^{11}_{\overline{O}_2}(S)=D_0,\quad H^{13}_{\overline{O}_2}(S)=D_0.$$* 3. *If $n\geq 5$ the nonzero local cohomology modules are given by $$H^{3n-3}_{\overline{O}_2}(S)=D_2,\quad H^{4n-7}_{\overline{O}_2}(S)=D_0,\quad H^{4n-5}_{\overline{O}_2}(S)=D_0,\quad H^{4n-3}_{\overline{O}_2}(S)=D_0.$$* **Theorem 4**. *The following is true about local cohomology of $S$ with support in $\overline{O}_6$.* 1. 
*If $n=3$ the nonzero local cohomology modules are given by $$H^2_{\overline{O}_6}(S)=D_6,\quad H^3_{\overline{O}_6}(S)=D_2,\quad H^4_{\overline{O}_6}(S)=D_0.$$* 2. *If $n=4$ the nonzero local cohomology modules are given by $$H^4_{\overline{O}_6}(S)=Q_2,\quad H^6_{\overline{O}_6}(S)=Q_1,\quad H^8_{\overline{O}_6}(S)=D_0.$$* 3. *If $n\geq 5$ the local cohomology modules are semi-simple, and we have the following in the Grothendieck group of $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$: $$\sum_{j\geq 0}[H^j_{\overline{O}_6}(S)]\cdot q^j=[D_6]\cdot q^{2n-4}+[D_2]\cdot (q^{3n-8}+q^{3n-6})+[D_0]\cdot (q^{4n-12}+q^{4n-10}+q^{4n-8}).$$* **Theorem 5**. *The following is true about local cohomology of $S$ with support in $\overline{O}_8$.* 1. *If $n=4$ the only nonzero local cohomology module is $H^1_{\overline{O}_8}(S)=Q_3$.* 2. *If $n\geq 5$ the nonzero local cohomology modules are given by $$H^{n-3}_{\overline{O}_8}(S)=D_8,\quad H^{2n-7}_{\overline{O}_8}(S)=D_6,\quad H^{3n-11}_{\overline{O}_8}(S)=D_2,\quad H^{4n-15}_{\overline{O}_8}(S)=D_0.$$* Via one of the two identifications of $V$ with the space of $2\times 2n$ matrices, the orbit closures $\overline{O}_3$ and $\overline{O}_4$ are the determinantal varieties of matrices of rank $\leq 1$. By [@raicu2016ocal Main Theorem] we have the following. **Theorem 6**. *Let $i=3,4$. For all $n\geq 2$ the nonzero local cohomology modules of $S$ with support in $\overline{O}_i$ are given by $$H^{2n-1}_{\overline{O}_i}(S)=D_i,\quad H^{4n-3}_{\overline{O}_i}(S)=D_0.$$* We recall the characters of the simple modules on spaces of matrices [@raicu2016characters Section 3.2]. Let $W_1$ be an $n_1$-dimensional vector space, and let $W_2$ be an $n_2$-dimensional vector space, with $n_1\geq n_2$. We consider the space $W_1\otimes W_2$ of $n_1\times n_2$ matrices. For $0\leq p\leq n_2$, let $L_{Z_p}$ be the intersection cohomology $\mathcal{D}$-module associated to the set $Z_p$ of matrices of rank $\leq p$. **Theorem 7**.
*For $0\leq p\leq n_2$ consider the following set of dominant weights: $$W^p=\{\lambda\in \mathbb{Z}^{n_2}_{\operatorname{dom}}\mid \lambda_p\geq p-n_2,\; \lambda_{p+1}\leq p-n_1\},$$ and for $\lambda\in W^p$ we define $$\lambda(p)=(\lambda_1,\cdots,\lambda_p,(p-n_2)^{n_1-n_2},\lambda_{p+1}+(n_1-n_2),\cdots,\lambda_{n_2}+(n_1-n_2)).$$ Then the $\operatorname{GL}(W_1)\times \operatorname{GL}(W_2)$-character of $L_{Z_p}$ is given by $$\bigoplus_{\lambda\in W^p} \mathbb{S}_{\lambda(p)}W_1\otimes\mathbb{S}_{\lambda}W_2.$$* Using this formula, we can obtain a large amount of information about the $\operatorname{GL}$-characters of $D_0$, $D_2$, $D_3$, $D_4$, $D_6$, $D_8$, $D_9$, though finding the complete $\operatorname{GL}$-character of these is not feasible (see Section [2.3](#sec:reps){reference-type="ref" reference="sec:reps"}). In particular, when $n=3$ we have $W_1=A\otimes B$, $W_2=C$, and $L_{Z_0}=E=D_0$, $L_{Z_1}=D_2$, $L_{Z_2}=D_6$, $L_{Z_3}=S=D_8$. When $n\geq 4$ we have $W_1=C$, $W_2=A\otimes B$, and $L_{Z_0}=D_0$, $L_{Z_1}=D_2$, $L_{Z_2}=D_6$, $L_{Z_3}=D_8$, $L_{Z_4}=D_9=S$. When $W_1=A\otimes C$ and $W_2=B$ we have $L_{Z_1}=D_3$, and when $W_1=B\otimes C$ and $W_2=A$ we have $L_{Z_1}=D_4$. ## Grassmannians, tautological bundles, and Bott's Theorem Let $W$ be an $m$-dimensional complex vector space. Given $1\leq k\leq m$ we write $\mathbb{G}(k;W)$ for the Grassmannian of $k$-dimensional quotients of $W$, with tautological quotient sheaf $\mathcal{Q}$ of rank $k$. We store the following consequence of Bott's Theorem (see [@weyman Corollary 4.1.9]) for use in our calculations. **Lemma 8**. *Let $\lambda\in \mathbb{Z}^k_{\operatorname{dom}}$. We have that $\mathbb{S}_{\lambda}\mathcal{Q}$ has cohomology in at most one degree.* 1. 
*If $\lambda_p\geq p-k$ and $\lambda_{p+1}\leq p-m$ for some $0\leq p\leq k$, then $$H^{(m-k)(k-p)}(\mathbb{G}(k;W),\mathbb{S}_{\lambda}\mathcal{Q})\cong \mathbb{S}_{\lambda(p) }W,$$ where $$\lambda(p)=(\lambda_1,\cdots,\lambda_p, (p-k)^{m-k},\lambda_{p+1}+(m-k),\cdots, \lambda_k+(m-k)).$$* 2. *Conversely, if for some $0\leq p\leq k$ we have $$H^{(m-k)(k-p)}(\mathbb{G}(k;W),\mathbb{S}_{\lambda}\mathcal{Q})\neq 0,$$ then $\lambda_p\geq p-k$ and $\lambda_{p+1}\leq p-m$.* ## Spaces of relative hypermatrices {#sec:relmatrices} We continue to write $V=A\otimes B\otimes C$. Fix positive integers $i\leq 2,j \leq 2,k \leq n$, and consider the product of Grassmannians $$\mathbb{G}_{ijk}:=\mathbb{G}(i;A)\times \mathbb{G}(j;B)\times \mathbb{G}(k;C),$$ where $\mathbb{G}(k;W)$ denotes the Grassmannian of $k$-dimensional quotients of a vector space $W$. On $\mathbb{G}_{ijk}$ we consider the geometric vector bundle (see [@hartshorne Exercise II.5.18]): $$Y_{ijk}:=\underline{\textnormal{Spec}}_{\mathcal{O}_{\mathbb{G}_{ijk}}} \operatorname{Sym}(\mathcal{Q}_A\boxtimes \mathcal{Q}_B\boxtimes \mathcal{Q}_C),$$ where $\mathcal{Q}_W$ denotes the tautological quotient sheaf of rank $k$ on $\mathbb{G}(k;W)$. Fix $i,j,k$ and let $Y=Y_{i,j,k}$. We may think of $Y$ as the space of relative $i\times j\times k$ hypermatrices, fibered over $\mathbb{G}_{i,j,k}$. Indeed, over an open affine subset of $\mathbb{G}_{i,j,k}$ the space $Y$ is isomorphic to its product with the space of $i\times j\times k$ hypermatrices, and thus $Y$ has stratification by subsets $O_p^{Y}$ as in Section [1](#sec:orbits){reference-type="ref" reference="sec:orbits"}. Given such a locally closed subset, we write $D_p^Y$ for the corresponding intersection cohomology $\mathcal{D}_Y$-module. 
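Lemma 8 above is completely algorithmic: scan $0\leq p\leq k$ for the (at most one) value of $p$ satisfying both inequalities, then read off the degree $(m-k)(k-p)$ and the weight $\lambda(p)$. A minimal Python sketch, with sanity checks against line bundles on $\mathbb{G}(1;W)\cong\mathbb{P}^{m-1}$ (these checks are standard facts, not taken from the text):

```python
def bott(lam, m):
    """Cohomology of S_lambda(Q) on G(k; W) with dim W = m and k = len(lam),
    following Lemma 8: returns (cohomological degree, dominant weight of the
    resulting GL(W)-representation), or None if all cohomology vanishes."""
    k = len(lam)
    for p in range(k + 1):
        lam_p = lam[p - 1] if p >= 1 else None    # lambda_p (1-indexed)
        lam_p1 = lam[p] if p < k else None        # lambda_{p+1}
        if ((lam_p is None or lam_p >= p - k)
                and (lam_p1 is None or lam_p1 <= p - m)):
            weight = (lam[:p] + (p - k,) * (m - k)
                      + tuple(x + m - k for x in lam[p:]))
            return (m - k) * (k - p), weight
    return None

# Sanity checks on G(1; W), where S_{(d)}Q plays the role of O(d) on P^{m-1}:
m = 4
assert bott((2,), m) == (0, (2, 0, 0, 0))            # H^0(O(2)) = Sym^2 W
assert bott((-m,), m) == (m - 1, (-1, -1, -1, -1))   # top cohomology of O(-m)
assert bott((-2,), m) is None                        # O(-2) on P^3: no cohomology
```

At most one $p$ can satisfy both inequalities (since $m\geq k$, the condition $\lambda_{p+1}\leq p-m$ rules out $p+1$), so returning at the first match is safe.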
As $Y$ is a closed subvariety of the trivial bundle $V\times \mathbb{G}_{i,j,k}$, we often abuse notation and identify the modules $D_p^Y$ with sheaves of graded $S^{Y}=\operatorname{Sym}(\mathcal{Q}_A\boxtimes \mathcal{Q}_B\boxtimes \mathcal{Q}_C)$-modules on $\mathbb{G}_{i,j,k}$ (see [@hartshorne Exercise II.5.17]). If we write $V_{ijk}=\mathbb{C}^i \otimes\mathbb{C}^j \otimes\mathbb{C}^k$, then we have an identification $$Y_{ijk}\cong \operatorname{GL}\times_{P_{ijk}} V_{ijk},$$ where the Levi subgroup $\operatorname{GL}_i(\mathbb{C}) \times \operatorname{GL}_j (\mathbb{C}) \times \operatorname{GL}_k(\mathbb{C})$ of the parabolic $P_{ijk}$ acts on $V_{ijk}=\mathbb{C}^i \otimes\mathbb{C}^j \otimes\mathbb{C}^k$ naturally, while its unipotent radical acts trivially. We recall that for positive integers $i\leq 2,j \leq 2,k \leq n$ the subspace variety $\textnormal{Sub}_{ijk} \subset V$ is the closed subset of tensors $v \in V$ such that there exist subspaces $A' \subset A, B'\subset B, C'\subset C$ of dimensions $i,j,k$ respectively, with $v \in A' \otimes B' \otimes C'$. Such spaces admit desingularizations as follows: **Lemma 9**. *We have a $\operatorname{GL}$-equivariant resolution of singularities of $\textnormal{Sub}_{ijk}$ given by $$\pi_{ijk}: Y_{ijk} \to \textnormal{Sub}_{ijk}.$$* We store the following consequence for use in our local cohomology calculations. **Lemma 10**. *Using notation as above, let $(Y,\pi,O)$ be a triple such that $\pi:Y\to V$ is a desingularization of $\overline{O}$, and assume that $Z:=Y\setminus O$ is a divisor. Let $\mathbb{G}$ be the product of Grassmannians on which $Y$ is a vector bundle. If we set $c=\operatorname{codim}(O,V)$, then we have the following for all $q\geq 0$: $$\label{pushdownlocallyclosed} H^q_{O}(S)=\mathcal{H}^{q-c}(\pi_{+}\mathcal{O}_Y(\ast Z)).$$ In particular,* 1. *$H^q_{O}(S)\neq 0$ only if $c\leq q\leq c+\dim \mathbb{G}$,* 2.
*$\mathcal{H}^{q}(\pi_{+}\mathcal{O}_Y(\ast Z)) \neq 0$ only if $0\leq q\leq \dim \mathbb{G}$.* In this article, we apply Lemma [Lemma 10](#pushdownlocallyclosed1){reference-type="ref" reference="pushdownlocallyclosed1"} to the triples $(Y_{111},\pi_{111},O_1)$ and $(Y_{222},\pi_{222},O_6)$. *Proof.* Let $j:O\to Y$ and $i:O\to V$ be the natural inclusions, so that $\pi\circ j=i$. We observe that $$i_{+}(\mathcal{O}_{O})=\pi_{+}(j_{+}(\mathcal{O}_{O}))=\pi_{+}(\mathcal{O}_Y(\ast Z)).$$ On the other hand, let $U=V\setminus (\overline{O}\setminus O)$ with open immersion $k:U\to V$. Since $O$ is a smooth closed subvariety of $U$ of codimension $c$, we have (see [@htt Proposition 1.7.1]): $$\label{lcO6push} i_{+}(\mathcal{O}_{O})=k_{+}(\mathbb{R}\mathscr{H}^0_{O}(\mathcal{O}_U)[c])=\mathbb{R}\Gamma_{O}(S)[c],$$ where the last equality follows from [@hartshornelocal Proposition 1.3, Proposition 1.4]. In particular, ([\[pushdownlocallyclosed\]](#pushdownlocallyclosed){reference-type="ref" reference="pushdownlocallyclosed"}) holds. Thus, the lower bounds (resp. upper bounds) on $q$ in parts (1) and (2) are equivalent. The lower bound on $q$ in (1) follows from ([\[lcO6push\]](#lcO6push){reference-type="ref" reference="lcO6push"}), and the upper bound on $q$ in (2) follows from [@htt Proposition 1.5.28(ii)], using that $\pi$ factors through the projection from $V\times \mathbb{G}$ to $V$. 
◻ We have the following notation for the alternating sum of the cohomology of $\pi_+\mathcal{O}_Y(\ast Z)$ in the Grothendieck group of representations of $\operatorname{GL}$: $$\label{defofEuler} \big[\chi(\pi_+\mathcal{O}_Y(\ast Z))\big]=\sum_{i\in \mathbb{Z}} (-1)^i \cdot \big[\mathcal{H}^i(\pi_+\mathcal{O}_Y(\ast Z))\big].$$ In Section [7](#sec:characters){reference-type="ref" reference="sec:characters"}, we use ([\[defofEuler\]](#defofEuler){reference-type="ref" reference="defofEuler"}) in conjunction with the techniques of [@raicu2016characters Section 2] to get information about the characters of the simple equivariant $\mathcal{D}_V$-modules. We recall some notation for use in these calculations. Given $m\geq 1$ we write $[m]=\{1,2,\cdots, m\}$, and for $0\leq k\leq m$ we write $\binom{[m]}{k}$ for the set of size $k$ subsets of $[m]$. For example $\binom{[3]}{2}=\{\{1,2\},\{1,3\},\{2,3\}\}$. Given $r\in \mathbb{Z}$ and $I\in \binom{[m]}{k}$ we write $(r^I)\in \mathbb{Z}^m$ for the tuple with $r$ in the $i$-th place for all $i\in I$, and zeros elsewhere. For example, if $m=3$ then $(r^{\{1,3\}})=(r,0,r)$. Let $W$ be an $m$-dimensional vector space and let $\lambda\in \mathbb{Z}^m$ be a not necessarily dominant tuple of $m$ integers. We introduce the notation $[\mathbb{S}_{\lambda}W]$ in the Grothendieck group of $\operatorname{GL}(W)$ representations as follows. Let $\rho=(m-1,\cdots ,1,0)$. If $\lambda+\rho$ has a repeated entry, then we define $[\mathbb{S}_{\lambda}W]=0$. Otherwise, $\lambda+\rho$ has distinct entries, and we write $\operatorname{sort}(\lambda+\rho)$ for the tuple obtained by arranging the entries of $\lambda+\rho$ into strictly decreasing order, and we let $\sigma \in \mathfrak{S}_m$ be the corresponding permutation. If we set $\tilde{\lambda}=\operatorname{sort}(\lambda+\rho)-\rho$, then we make the following definition (cf. 
[@weyman Corollary 4.1.9]) $$\big[\mathbb{S}_{\lambda}W\big]=\operatorname{sgn}(\sigma)\cdot \big[\mathbb{S}_{\tilde{\lambda}}W\big].$$ We generalize this to the Grothendieck group of $\operatorname{GL}=\operatorname{GL}(A)\times \operatorname{GL}(B)\times \operatorname{GL}(C)$ representations as follows. Let $\rho^2=(1,0)$ and $\rho^n=(n-1,\cdots, 1,0)$, and let $\alpha,\beta\in \mathbb{Z}^2$ and $\gamma \in \mathbb{Z}^n$. If any of the tuples $\rho^2+\alpha$, $\rho^2+\beta$, $\rho^n+\gamma$ has a repeated entry, then we set $[\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C]=0$. Otherwise, we let $\sigma_{\alpha}$, $\sigma_{\beta}$, $\sigma_{\gamma}$ be the corresponding permutations, and we define $$\label{eq:bottGG} \big[\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C\big]=\operatorname{sgn}(\sigma_{\alpha})\cdot \operatorname{sgn}(\sigma_{\beta})\cdot \operatorname{sgn}(\sigma_{\gamma})\cdot \big[\mathbb{S}_{\tilde{\alpha}}A\otimes\mathbb{S}_{\tilde{\beta}}B\otimes\mathbb{S}_{\tilde{\gamma}}C\big].$$ Given an $m$-dimensional vector space $W$ and $r,k\geq 0$, we write (see [@raicu2016characters Section 2.1.3, Lemma 2.5]) $$\label{eq:pkr} \big[p_{k,r}(W)\big]=\sum_{I\in \binom{[m]}{k}} \big[\mathbb{S}_{(r^I)}W\big].$$ For $\lambda \in \mathbb{Z}^m$, $I\in \binom{[m]}{k}$ and $r\geq 0$ we define the tuple $(\lambda, r,I)\in\mathbb{Z}^m$ via (cf. [@raicu2016characters Lemma 2.3]) $$\label{eq:lambdaplusrI} (\lambda, r,I)=\lambda+(r^I).$$ For example, if $m=3$, $\lambda=(-1,-1,-2)$, $k=2$, $I=\{1,3\}$, and $r\geq 3$ then $[\mathbb{S}_{(\lambda, r,I)}W]=-[\mathbb{S}_{(r-1,r-3,0)}W]$. # First steps towards the quiver structure of $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$ {#sec:steps} ## Classification of simple objects First, we describe the component groups $H/H^0$ (here $H^0$ is the connected component of $H$ containing the identity) of all the stabilizers of orbits $O_i = \operatorname{GL}/H$.
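The normalization $[\mathbb{S}_{\lambda}W]$ and the tuples $(\lambda,r,I)$ from the previous section are purely combinatorial, so they are easy to make concrete. The sketch below uses hypothetical helper names (not from any cited software) and $0$-indexed subsets $I\subseteq\{0,\dots,m-1\}$ in place of the $1$-indexed convention of the text:

```python
def bott_normalize(lam):
    """Normalize [S_lam W] for a (possibly non-dominant) integer tuple lam.

    Returns (sign, lam_tilde) with [S_lam W] = sign * [S_lam_tilde W],
    or (0, None) when lam + rho has a repeated entry (so [S_lam W] = 0).
    """
    m = len(lam)
    rho = list(range(m - 1, -1, -1))             # rho = (m-1, ..., 1, 0)
    shifted = [l + r for l, r in zip(lam, rho)]  # lam + rho
    if len(set(shifted)) < m:                    # repeated entry
        return 0, None
    # Sign of the permutation sorting `shifted` into strictly decreasing
    # order, computed via the parity of the number of inversions.
    inversions = sum(1 for i in range(m) for j in range(i + 1, m)
                     if shifted[i] < shifted[j])
    sign = (-1) ** inversions
    tilde = [s - r for s, r in zip(sorted(shifted, reverse=True), rho)]
    return sign, tuple(tilde)

def tuple_rI(lam, r, I):
    """The tuple (lam, r, I) = lam + (r^I), with I a 0-indexed subset."""
    return tuple(l + (r if i in I else 0) for i, l in enumerate(lam))
```

For instance, with $m=3$, $\lambda=(-1,-1,-2)$, $I=\{1,3\}$ (i.e. `{0, 2}` in the code) and $r=5$, the call `bott_normalize(tuple_rI((-1, -1, -2), 5, {0, 2}))` returns `(-1, (4, 2, 0))`, matching the example $[\mathbb{S}_{(\lambda,r,I)}W]=-[\mathbb{S}_{(r-1,r-3,0)}W]$ above.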
**Proposition 11**. *The component group of the $\operatorname{GL}$-orbit $O_6$ is $\mathbb{Z}/2\mathbb{Z}$, while all other $\operatorname{GL}$-orbits $O_i$ ($i\neq 6$) have connected stabilizers.* *Proof.* We use notation as in Section [2.6](#sec:relmatrices){reference-type="ref" reference="sec:relmatrices"}. First, assume that for a $\operatorname{GL}$-orbit $O \subset V$ we have $\overline{O} = \textnormal{Sub}_{ijk}$ for some $i,j,k \geq 1$. By Lemma [Lemma 9](#lem:desing){reference-type="ref" reference="lem:desing"} $O \cong \pi^{-1}_{ijk}(O) = \operatorname{GL}\times_{P_{ijk}} O' \subset \operatorname{GL}\times_{P_{ijk}} V_{ijk}$, for some $P_{ijk}$-orbit $O' \subset V_{ijk}$. Since the unipotent radical of $P_{ijk}$ acts trivially, $O'$ is a $\operatorname{GL}_i(\mathbb{C}) \times \operatorname{GL}_j (\mathbb{C}) \times \operatorname{GL}_k(\mathbb{C})$-orbit in $\mathbb{C}^i \otimes\mathbb{C}^j \otimes\mathbb{C}^k$ (in fact, the dense orbit). The generic $\operatorname{GL}$-stabilizer of $O$ agrees with the generic $P_{ijk}$-stabilizer of $O'$. Since the unipotent radical of $P_{ijk}$ acts trivially on $O'$, this shows that the $\operatorname{GL}$-component group of $O$ agrees with the $\operatorname{GL}_i(\mathbb{C}) \times \operatorname{GL}_j (\mathbb{C}) \times \operatorname{GL}_k(\mathbb{C})$-component group of $O' \subset \mathbb{C}^i \otimes\mathbb{C}^j \otimes\mathbb{C}^k$. When one of $i,j,k$ is $\leq 1$, the above argument gives the result for $O_1, O_2, O_3, O_4$, as it is reduced to the case of matrices, where the corresponding claim is easy to see. As $\overline{O}_6 = \textnormal{Sub}_{222}$, for $O=O_6$ the result follows from [@perlman2020equivariant Lemma 3.3] or [@lHorincz2019categories Remark 4.12]. Note that the restriction of $\pi_{222}$ still yields an isomorphism $\pi_{222}^{-1}(O_5) \xrightarrow{\cong} O_5$, hence the argument above goes through and yields the result for $O_5$ using [@perlman2020equivariant Lemma 3.2].
Since $\overline{O}_8 = \textnormal{Sub}_{223}$, by the above argument with $O=O_8$ it is enough to prove the claim in the case $n=3$. Then the orbit $O_8 \subset V$ is dense. There is a castling transform from $V=\mathbb{C}^2\otimes\mathbb{C}^2\otimes\mathbb{C}^3$ to $\mathbb{C}^2\otimes\mathbb{C}^3\otimes\mathbb{C}^1$ (see [@satokimura Section 2]). By [@satokimura Proposition 7] the generic stabilizer of $V$ is isomorphic to the generic stabilizer of $\mathbb{C}^2\otimes\mathbb{C}^3\otimes\mathbb{C}^1$, endowed with the natural action of $\operatorname{GL}_2(\mathbb{C})\times \operatorname{GL}_3(\mathbb{C})\times \mathbb{C}^{\ast}$. This space may again be identified with the space of $2\times 3$ matrices, where the corresponding claim is easy to see. Next, we show that the stabilizer of $O_7$ is connected. Since $\pi_{223}$ yields an isomorphism $\pi_{223}^{-1}(O_7) \xrightarrow{\cong} O_7$, it is enough to consider the case $n=3$ (similarly to what we did for $O_5$). Then this is a consequence of Lemma [Lemma 14](#lem:opencat2){reference-type="ref" reference="lem:opencat2"} below, as there is only one $\operatorname{GL}$-equivariant simple local system on $O_7$. As $\overline{O}_9 = \textnormal{Sub}_{224}$, by the above argument with $O=O_9$ we can assume that $n=4$. Then the complement of $O_9$ in $V$ has codimension $\geq 2$, which implies that $O_9$ is simply-connected (see [@lHorincz2019categories Remark 4.10]), hence its component group must be trivial. ◻ According to the above result, we denote by $D_6'$ the simple $\mathcal{D}_V$-module corresponding to the non-trivial equivariant local system of rank one on $O_6$. We prove the following: **Proposition 12**. *Let $\mathcal{F}$ denote the (twisted) Fourier transform on $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$.* 1. *If $n=3$ then $\mathcal{F}$ swaps the modules in each of the pairs $$(D_0,D_8),\quad (D_1,D_7),\quad (D_2,D_6),$$ and all other simples are fixed.* 2.
*If $n\geq 4$ then $\mathcal{F}$ swaps the modules in each of the pairs $$(D_0,D_9),\quad (D_1,D_7),\quad (D_2,D_8),$$ and all other simples are fixed.* *Proof.* The fact that $\mathcal{F}$ swaps $D_0$ and $S$ follows immediately from the definition. We have that $O_1$ and $O_7$ are dual orbits [@landsberg2012tensors Table 10.3.1]. The characteristic cycle of $D_1$ is of the form $\textnormal{charC}(D_1)=[\overline{T^{\ast}_{O_1}V}]+m_0[\overline{T^{\ast}_{O_0}V}]$ for some $m_0\geq 0$. By [@lHorincz2019categories Equation 4.15], it follows that $\textnormal{charC}(\mathcal{F}(D_1))=[\overline{T^{\ast}_{O_7}V}]+m_0[\overline{T^{\ast}_{V}V}]$. Since there is only one simple with full support, namely $S$, it follows that $m_0=0$. As $D_7$ is the only simple with support $\overline{O}_7$, we have that $\mathcal{F}(D_1)=D_7$ and $\mathcal{F}(D_7)=D_1$. Using the two flattenings of $V$ to the space of $2\times 2n$ matrices, the orbit closures $\overline{O}_3$ and $\overline{O}_4$ are identified with the determinantal variety of matrices of rank $\leq 1$. Therefore $\mathcal{F}(D_3)=D_3$ and $\mathcal{F}(D_4)=D_4$. Assume that $n=3$. Using the flattening of $V$ to the space of $4\times n$ matrices, the orbit closures $\overline{O}_2$ and $\overline{O}_6$ are identified with the determinantal varieties of matrices of rank $\leq 1$ and $\leq 2$, respectively. Thus, $\mathcal{F}$ swaps the modules $D_2$ and $D_6$. Now let $n\geq 4$. Using the flattening of $V$ to the space of $4\times n$ matrices, the orbit closures $\overline{O}_2$, $\overline{O}_6$, $\overline{O}_8$ are identified with the determinantal varieties of matrices of rank $\leq 1$, $\leq 2$, $\leq 3$, respectively. Thus, the Fourier transform swaps $D_2$ and $D_8$, and it fixes $D_6$. The characteristic cycle of $D_6'$ is of the form $\textnormal{charC}(D_6')=[\overline{T^{\ast}_{O_6}V}]+\sum_{i=0}^5 m_i[\overline{T^{\ast}_{O_i}V}]$ for some $m_i\geq 0$.
Since $O_5$ is self-dual, by [@lHorincz2019categories Equation 4.15], it follows that $$\textnormal{charC}(\mathcal{F}(D_6'))=[\overline{T^{\ast}_{O_6}V}]+m_5[\overline{T^{\ast}_{O_5}V}]+m_4[\overline{T^{\ast}_{O_4}V}]+m_3[\overline{T^{\ast}_{O_3}V}]+m_2[\overline{T^{\ast}_{O_8}V}]+m_1[\overline{T^{\ast}_{O_7}V}]+m_0[\overline{T^{\ast}_{O_9}V}].$$ Since $O_7, O_8, O_9$ support unique simples, we have that $m_0=m_1=m_2=0$, which shows $\mathcal{F}(D_6')=D_6'$. By process of elimination, it follows that $\mathcal{F}$ fixes $D_5$. ◻ ## Restricting to open subsets {#sec:open} In this subsection we analyze $\mathcal{D}$-modules on some distinguished open sets in $V$. This helps us patch up the quiver of $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$ in Section [4](#sec:eqcat){reference-type="ref" reference="sec:eqcat"}. **Lemma 13**. *Let $U= V\setminus \overline{O}_2$. Then the quiver of $\textnormal{mod}_{\operatorname{GL}}^{\overline{O}_6 \setminus \overline{O}_2}(\mathcal{D}_U)$ is given by $$\xymatrix{ (5) \ar@<0.5ex>[r] & (6) \ar@<0.5ex>[l] & (3) \ar@<0.5ex>[r] & \ar@<0.2ex>[l] (6') \ar@<0.5ex>[r] & \ar@<0.2ex>[l] (4) }$$ where all the 2-cycles are zero. Here the vertices of the equivariant simple $\mathcal{D}_U$-modules are labeled as the corresponding simple $\mathcal{D}_V$-modules via restriction.* *Proof.* Let $\pi=\pi_{222}$ as in Section [2.6](#sec:relmatrices){reference-type="ref" reference="sec:relmatrices"}. Note that $\overline{O}_6 \setminus \overline{O}_2 \subset U$ is closed and smooth, as it can be identified with the space of $4\times n$ matrices of rank $2$. Furthermore, the map $\pi$ induces an isomorphism $\pi^{-1} (\overline{O}_6 \setminus \overline{O}_2) \xrightarrow{\,\cong\,} \overline{O}_6 \setminus \overline{O}_2$. 
For $2\leq i \leq 6$, let $O_i' \subset V_{222}$ denote the $\operatorname{GL}_2 \times \operatorname{GL}_2 \times \operatorname{GL}_2$-orbit having the same representative as $O_i$ in Section [1](#sec:orbits){reference-type="ref" reference="sec:orbits"}, let $U' = V_{222} \setminus \overline{O}_2'$ and $j': U' \to V_{222}$ be the open embedding, so that $\pi^{-1} (\overline{O}_6 \setminus \overline{O}_2) = \operatorname{GL}\times_{P_{222}} (U')$. By [@lHorincz2019categories Proposition 4.5] we have equivalences of categories $$\label{eq:catchain} \textnormal{mod}_{\operatorname{GL}}^{\overline{O}_6 \setminus \overline{O}_2}(\mathcal{D}_U) \cong \textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_{\overline{O}_6 \setminus \overline{O}_2}) = \textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_{\operatorname{GL}\!\times_{P_{222}} U'}) \cong \textnormal{mod}_{P_{222}}(\mathcal{D}_{U'}) = \textnormal{mod}_{\operatorname{GL}_2 \!\times \! \operatorname{GL}_2 \! \times \! \operatorname{GL}_2}(\mathcal{D}_{U'}),$$ where the last equality follows as the unipotent radical of $P_{222}$ is connected and acts trivially on $V_{222}$. By adjunction, $j'_*$ sends the injective envelopes of the equivariant simple $\mathcal{D}_{U'}$-modules with support $\overline{O}'_i \setminus \overline{O}'_2$ to the injective envelopes of the equivariant simple $\mathcal{D}_{V_{222}}$-modules with support $\overline{O}'_i$ (cf. [@binary Lemma 2.4]), and $j'^*$ sends them back. Therefore, we readily obtain the quiver of $\textnormal{mod}_{\operatorname{GL}}^{\overline{O}_6 \setminus \overline{O}_2}(\mathcal{D}_U)\cong \textnormal{mod}_{\operatorname{GL}_2 \!\times \! \operatorname{GL}_2 \! \times \! \operatorname{GL}_2}(\mathcal{D}_{U'})$ from [@perlmancorrected Theorem on the Quiver Structure] (see Remark [Remark 17](#rem:mikeerror){reference-type="ref" reference="rem:mikeerror"}). 
◻ Below we use the same convention as in Lemma [Lemma 13](#lem:opencat1){reference-type="ref" reference="lem:opencat1"} regarding the labeling of vertices. **Lemma 14**. *Let $U=V\setminus \overline{O}_6$. The quiver of $\textnormal{mod}_{\operatorname{GL}}^{\overline{O}_8 \setminus \overline{O}_6}(\mathcal{D}_{U})$ is given by $$\xymatrix{ (7) \ar@<0.5ex>[r] & (8) \ar@<0.5ex>[l]}$$ where all the 2-cycles are zero.* *Proof.* We note that $\overline{O}_8\setminus \overline{O}_6$ can be identified with the homogeneous space of $4\times n$ matrices of rank $3$. Let $P \subset \operatorname{GL}_4(\mathbb{C})$ denote the parabolic subgroup given by $g_{4i} = 0$, for $i=1,2,3$. Choosing the representative $\begin{bmatrix} I_3 & 0 \\ 0 & 0 \end{bmatrix}$, we get that $\overline{O}_8\setminus \overline{O}_6$ is $\operatorname{GL}_4(\mathbb{C})\times \operatorname{GL}_n(\mathbb{C})$-isomorphic to $\operatorname{GL}_4(\mathbb{C})\times \operatorname{GL}_n(\mathbb{C}) / H$, where $H$ is the subgroup of $P \times \operatorname{GL}_n(\mathbb{C})$ consisting of elements of the form $$\left\{ \,\left(\begin{bmatrix} X & a \\ 0 & b \end{bmatrix}, \begin{bmatrix} X & 0 \\ Y & Z \end{bmatrix}\right) \, : \, X\in \operatorname{GL}_3(\mathbb{C}), a\in \mathbb{C}^3, b\in \mathbb{C}^*, Y \in \mathbb{C}^{(n-3) \times 3}, Z \in \operatorname{GL}_{n-3}(\mathbb{C}) \right\}.$$ Using [@lHorincz2019categories Proposition 4.5] repeatedly, we have $$\textnormal{mod}_{\operatorname{GL}}^{\overline{O}_8 \setminus \overline{O}_6}(\mathcal{D}_{U})\cong\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_{\overline{O}_8\setminus \overline{O}_6}) \cong \textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_{\operatorname{GL}_4(\mathbb{C})\times \operatorname{GL}_n(\mathbb{C}) / H}) \cong \textnormal{mod}_{\operatorname{GL}_2\times \operatorname{GL}_2 \times \operatorname{GL}_n \times H}(\mathcal{D}_{\operatorname{GL}_4(\mathbb{C})\times \operatorname{GL}_n(\mathbb{C})}) \cong$$ $$\cong
\textnormal{mod}_{\operatorname{GL}_2\times \operatorname{GL}_2 \times P}(\mathcal{D}_{\operatorname{GL}_4(\mathbb{C})}) \cong \textnormal{mod}_{\operatorname{GL}_2\times \operatorname{GL}_2}(\mathcal{D}_{\operatorname{GL}_4(\mathbb{C})/P}) \cong \textnormal{mod}_{\operatorname{GL}_2\times \operatorname{GL}_2}(\mathcal{D}_{\mathbb{P}(\bigwedge^3 \mathbb{C}^4)}) \cong$$ $$\cong \textnormal{mod}_{\operatorname{GL}_2\times \operatorname{GL}_2}(\mathcal{D}_{\mathbb{P} (\mathbb{C}^2\otimes\mathbb{C}^2)}) \cong \textnormal{mod}_{\operatorname{GL}_2\times \operatorname{GL}_2 \times \mathbb{C}^*}(\mathcal{D}_{\mathbb{C}^2 \otimes \mathbb{C}^2 \setminus \{0\}}) \cong \textnormal{mod}_{\operatorname{GL}_2\times \operatorname{GL}_2}(\mathcal{D}_{\mathbb{C}^2 \otimes \mathbb{C}^2 \setminus \{0\}}),$$ where the last equivalence follows as the normal subgroup consisting of elements $(a I_2, a I_2, a^{-2}) \subset \operatorname{GL}_2(\mathbb{C})\times \operatorname{GL}_2(\mathbb{C}) \times \mathbb{C}^*$ (with $a\in \mathbb{C}^*$) acts trivially on $\mathbb{C}^2 \otimes\mathbb{C}^2$. Under the action of $\operatorname{GL}_2(\mathbb{C})\times \operatorname{GL}_2(\mathbb{C})$, we can identify $\mathbb{C}^2 \otimes \mathbb{C}^2$ with the space of $2\times 2$ matrices, in which case the category is described in [@lHorincz2019categories Theorem 5.4]. The statement now follows since the injective envelopes on the open set $(\mathbb{C}^2 \otimes\mathbb{C}^2) \setminus \{0\}$ are restrictions of the corresponding injective envelopes on $\mathbb{C}^2 \otimes\mathbb{C}^2$ (cf. proof of Lemma [Lemma 13](#lem:opencat1){reference-type="ref" reference="lem:opencat1"}). ◻ **Lemma 15**. *For any $i\neq 2n-3$, the modules $D_3$ and $D_4$ are not composition factors of $H_{\overline{O}_5}^i(S)$.* *Proof.* We use the notation as in Lemma [Lemma 13](#lem:opencat1){reference-type="ref" reference="lem:opencat1"} and its proof. For $k=5,6$, put $Z_k=\overline{O}_k \setminus \overline{O}_2$.
It is enough to prove the statement on the open set $U=V\setminus \overline{O}_2$, i.e. that the simple $\mathcal{D}_U$-modules $D_3$ and $D_4$ are not composition factors of $\mathscr{H}^i_{Z_5}(\mathcal{O}_U)$ for any $i\neq 2n-3$. Since the variety $O_6'$ is affine, the inclusion of $\operatorname{GL}/{P_{222}}$-bundles $\operatorname{GL}\times_{P_{222}} O_6' \to \operatorname{GL}\times_{P_{222}} U'$ is an affine morphism. Applying $\pi_{222}$, we see that the open embedding $O_6 \to Z_6$ is an affine morphism. As the inclusion $j: O_6 \to U$ is the composition of the latter and the closed embedding $Z_6 \to U$, this implies that $j_+(\mathcal{O}_{O_6})$ has cohomology only in degree zero. As $j_{+}(\mathcal{O}_{O_6})=\mathbb{R}\mathscr{H}^0_{O_6}(\mathcal{O}_U)[2n-4]$ (see ([\[lcO6push\]](#lcO6push){reference-type="ref" reference="lcO6push"})), we obtain that $\mathscr{H}^k_{O_6}(\mathcal{O}_U)=0$ for all $k\neq 2n-4$. Since $Z_6$ is smooth, we have $\mathscr{H}^k_{Z_6}(\mathcal{O}_U)=0$ for $k\neq 2n-4$. Hence, the long exact sequence $$\cdots \to \mathscr{H}^{i-1}_{O_6}(\mathcal{O}_U) \to\mathscr{H}^i_{Z_5}(\mathcal{O}_U) \to \mathscr{H}^i_{Z_6}(\mathcal{O}_U) \to \mathscr{H}^i_{O_6}(\mathcal{O}_U) \to \mathscr{H}^{i+1}_{Z_5}(\mathcal{O}_U) \to \cdots$$ yields $\mathscr{H}^i_{Z_5}(\mathcal{O}_U)=0$ for all $i\neq 2n-3$. (For $i=2n-4$ the sequence only gives an inclusion $\mathscr{H}^{2n-4}_{Z_5}(\mathcal{O}_U)\subseteq \mathscr{H}^{2n-4}_{Z_6}(\mathcal{O}_U)$; as $Z_6$ is smooth and connected, the latter is a simple $\mathcal{D}_U$-module with support $Z_6$, so it has no nonzero submodule supported in $Z_5$.) ◻ ## Conormal bundles We first determine the characteristic cycles of all the simples in $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$. As the characteristic cycles are known in the determinantal case (e.g. [@raicu2016characters Remark 1.5]), we have $\textnormal{charC}(D_i)=[\overline{T^{\ast}_{O_i}V}]$ for $i=0, 2, 3, 4, 6, 8, 9$. **Proposition 16**.
*We have $$\textnormal{charC}(D_1)=[\overline{T^{\ast}_{O_1}V}], \quad \textnormal{charC}(D_7)=[\overline{T^{\ast}_{O_7}V}],$$ $$\textnormal{charC}(D_5)=[\overline{T^{\ast}_{O_5}V}]+[\overline{T^{\ast}_{O_4}V}]+[\overline{T^{\ast}_{O_3}V}],$$ $$\qquad \mbox{for } n=3: \,\, \textnormal{charC}(D_6')=[\overline{T^{\ast}_{O_6}V}]+[\overline{T^{\ast}_{O_5}V}]+[\overline{T^{\ast}_{O_2}V}],$$ $$\mbox{for } n\geq 4: \,\, \textnormal{charC}(D_6')=[\overline{T^{\ast}_{O_6}V}]+[\overline{T^{\ast}_{O_5}V}]. \qquad$$* *Proof.* The claim about $D_1$ and $D_7$ follows from the proof of Proposition [Proposition 12](#prop:fourier){reference-type="ref" reference="prop:fourier"}. As both $D_5$ and $D_6'$ are fixed under $\mathcal{F}$, it follows for $n\geq 4$ as in the proof of Proposition [Proposition 12](#prop:fourier){reference-type="ref" reference="prop:fourier"} that $[\overline{T^{\ast}_{O_i}V}]$ does not appear in their characteristic cycles for $i=0,1,2$. When $n=3$, the same argument works for $D_5$, and for $D_6'$ we get that $[\overline{T^{\ast}_{O_i}V}]$ does not appear in its characteristic cycle for $i=0,1$, while $[\overline{T^{\ast}_{O_2}V}]$ does so with multiplicity one, since $\overline{O}_2$ is the projective dual of $\overline{O}_6$ and $\mathcal{F}(D_6')\cong D_6'$. Thus, to finish the proof, we can now restrict to the open set $U=V\setminus \overline{O}_2$. Since the chain of equivalences in ([\[eq:catchain\]](#eq:catchain){reference-type="ref" reference="eq:catchain"}) preserves the respective characteristic cycles, the statement follows from the corresponding result for $n=2$ in [@perlmancorrected Theorem 3.4] (see Remark [Remark 17](#rem:mikeerror){reference-type="ref" reference="rem:mikeerror"}). ◻ **Remark 17**. *There is an error in the statement of [@perlman2020equivariant Theorem on the Quiver Structure], which has been corrected in [@perlmancorrected].
The issue is with the asserted relations of the quiver, namely with the relations $\beta_{p,q,r}\alpha_{i,j,k}$ and $\delta_{p,q,r}\gamma_{i,j,k}$ for $(p,q,r)\neq (i,j,k)$. These are in fact not relations of the quiver, and should be replaced with the following new relations for each $(p,q,r)\neq (i,j,k)$: $\beta_{p,q,r}\alpha_{i,j,k}-\delta_{p,q,r}\gamma_{i,j,k}$. The newer article [@perlmancorrected] also has the characteristic cycle calculations cited in the proof of Proposition [Proposition 16](#prop:char566){reference-type="ref" reference="prop:char566"}.* For the next two results we identify the $\operatorname{GL}$-action on $V$ with the action of $H=\operatorname{SO}_4(\mathbb{C}) \times \operatorname{GL}_n(\mathbb{C})$ on the space of $4\times n$ matrices, and the cotangent bundle $V \times V^*$ of $V$ with the space of pairs of matrices $(X, Y)$, where $X$ is $4\times n$ and $Y$ is $n\times 4$ with the natural actions of $\operatorname{SO}_4(\mathbb{C}) \times \operatorname{GL}_n(\mathbb{C})$. Then the equations $YX = 0$ and $(XY)^t = XY$ define (set-theoretically) $\bigcup_{i=0}^9 \overline{T_{O_i}^* V}$. **Lemma 18**.
*The varieties $\overline{T_{O_6}^* V}$ and $\overline{T_{O_8}^* V}$ have a dense $\operatorname{GL}$-orbit.* *Proof.* By a straightforward calculation, we see that the $\operatorname{Lie}(H)$-stabilizers of the points $$\left(\begin{bmatrix} 1 & 0 & 0 & 0 & 0 &\cdots & 0 \\ 0 & 1 & 0 & 0 & 0 &\cdots & 0 \\ 0& 0 & 0 & 0 & 0 &\cdots & 0 \\ 0 & 0 & 0 & 0 & 0 &\cdots & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 & 0 & 0 & 0 &\cdots & 0 \\ 0 & 0 & 0 & 0 & 0 &\cdots & 0 \\ 0 & 0 & 1 & 0 & 0 &\cdots & 0 \\ 0 & 0 & 0 & 1 & 0 &\cdots & 0 \end{bmatrix}^t\right) \in \overline{T_{O_6}^* V}$$ and when $n\geq 4$ $$\left(\begin{bmatrix} 1 & 0 & 0 & 0 & 0 &\cdots & 0\\ 0 & 1 & 0 & 0 & 0 &\cdots & 0 \\ 0& 0 & 1 & 0 & 0 &\cdots & 0 \\ 0 & 0 & 0 & 0 & 0 &\cdots & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 & 0 & 0 & 0 &\cdots & 0 \\ 0 & 0 & 0 & 0 & 0 &\cdots & 0 \\ 0 & 0 & 0 & 0 & 0 &\cdots & 0 \\ 0 & 0 & 0 & 1 & 0 &\cdots & 0 \end{bmatrix}^t\right) \in \overline{T_{O_8}^* V},$$ have dimension $n^2-4n+6$, thus their $H$-orbits are dense in $\overline{T_{O_6}^* V}$ and $\overline{T_{O_8}^* V}$, respectively. ◻ **Lemma 19**. *For $n\geq 4$, we have $\dim \overline{T_{O_7}^* V} \cap \overline{T_{O_i}^* V} < \dim V - 1$ for $i=1,3,4,5,6$.* *Proof.* Since $\overline{O}_1$ is the projective dual of $\overline{O}_7$, the variety $\overline{T_{O_7}^* V}$ is defined (set-theoretically) by the equations $$YX = 0, \, (XY)^t = XY, \, \operatorname{rank} X \leq 3, \, \operatorname{rank} (X^t X)\leq 2, \, \operatorname{rank} Y \leq 1, \, Y Y^t = 0.$$ First, we prove the statement for $i=6$. As $\overline{O}_6 = Z_2$, the variety $\overline{T_{O_6}^* V}$ is defined by $YX = 0, XY=0, \operatorname{rank} X \leq 2, \operatorname{rank} Y \leq 2$ [@strickland]. 
Thus, the variety $\overline{T_{O_6}^* V} \cap \overline{T_{O_7}^* V}$ is a subset of the variety $T$ defined by $$YX = 0, \, XY=0, \, \operatorname{rank} X \leq 2, \, \operatorname{rank} Y \leq 1,$$ and this inclusion is strict since it is easy to see that $Y Y^t$ is not identically zero on $T$. Thus, it is enough to show that $T$ is irreducible and has dimension $\leq 4n-1$. Using the terminology from [@node], the variety $T$ can be realized as a rank variety on the quiver $$\xymatrix{ 4 \ar@<0.5ex>[r]^Y & n \ar@<0.5ex>[l]^X}$$ where the two vertices are nodes. Thus, $T$ is a representation variety of a radical square algebra. By [@node Theorem 3.19] and its proof, $T$ is irreducible and has a desingularization by the total space of a vector bundle over (a product of) Grassmannians. This bundle has dimension $3n+3 \leq 4n-1$, proving the claim. Next, we prove the cases $i=1,3,4,5$ simultaneously. Note that all $\overline{T_{O_7}^* V} \cap \overline{T_{O_i}^* V}$ (for $i=1,3,4,5$) are contained in the variety $T_1$ defined by $$YX = 0, \, (XY)^t = XY, \, \operatorname{rank} X \leq 2, \, \operatorname{rank} (X^t X)\leq 1, \, \operatorname{rank} Y \leq 1.$$ Thus, it is enough to show that $\dim T_1 \leq 4n - 2$. When $n=4$ and $n=5$, this is done by a computer calculation using Macaulay2 [@M2], where the dimensions are 14 and 17, respectively. Now assume $n\geq 6$. Clearly, $T_1$ is a subvariety of the variety $T_2$ defined by $$YX = 0, \, \operatorname{rank} X \leq 2, \, \operatorname{rank} Y \leq 1,$$ and this inclusion is strict. Thus, to finish the proof, it suffices to prove that $T_2$ is irreducible and $\dim T_2 \leq 4n-1$. Again, using the terminology from [@node], the variety $T_2$ can be realized as a rank variety on the quiver $$\xymatrix{ n \ar[r]^X & 4 \ar[r]^Y & n }$$ where the middle vertex is a node -- thus, $T_2$ is the representation variety of a radical square algebra.
By [@node Theorem 3.19] and its proof, $T_2$ is irreducible and has a desingularization that is the total space of a vector bundle over (a product of) Grassmannians, the dimension of which is $3n+5 \leq 4n-1$. ◻ ## Preliminaries for $n=3$ We let $n=3$ and let $f\in S$ be the semi-invariant, which is the defining equation of $\overline{O}_7$ and has weight $(3,3)\times (3,3)\times (2,2,2)$. By [@binary Lemma 2.4] the $\mathcal{D}$-module $S_f$ is the injective envelope of $S$ in $\textnormal{mod}_{\textnormal{GL}}(\mathcal{D}_V)$. We recall that the representation $V$ of $\operatorname{GL}$ is equivalent to the representation of $\textnormal{SO}_{4}(\mathbb{C}) \times \operatorname{GL}_3(\mathbb{C})$ acting on the space of $4\times 3$ matrices. If $X$ denotes a generic $4\times 3$ matrix of variables, then $f=\operatorname{det}(X^t \cdot X)$. Thus, the Bernstein--Sato polynomial $b_f(s)$ of $f$ is given by [@decomp Example 2.9] (cf. also [@microlocal Example 9.2]): $$\label{eq:bs3} b_f(s)= (s+1)^2 (s+3/2)^2 (s+2)^2.$$ We also need the following equation for a local $b$-function of $f$, which follows again from [@decomp Example 2.9]. Let $f_1$ be the $(1,1)$-entry of $X^t \cdot X$, and note that the $\operatorname{GL}$-translates of $f_1$ generate the defining ideal of the highest weight orbit $\overline{O}_1$. Below, the operator $Q$ can be chosen (up to a scalar) by replacing the variables with the corresponding partial derivatives in a $2\times 2$ minor of $X^t \cdot X$. **Lemma 20**. *There exists $Q \in \mathcal{D}_V$ such that $$Q \cdot f^{s+1} = (s+1)^2 (s+3/2)^2 \cdot f_{1} \cdot f^s.$$* **Lemma 21**. *The $\operatorname{GL}$-representation $\mathbb{S}_{(-2,-2)}A\otimes\mathbb{S}_{(-2,-2)}B\otimes\mathbb{S}_{(-1,-1,-2)}C$ has multiplicity one in $D_6$.* *Proof.* Using the flattening of $V$ to the space of $4\times 3$ matrices, our simple $D_6$ is identified with $L_{Z_2}$ supported on the determinantal variety $Z_2$.
By Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} the $\operatorname{GL}(A\otimes B)\times \operatorname{GL}(C)$ representation $\mathbb{S}_{(-1,-1,-1,-1)}(A\otimes B)\otimes\mathbb{S}_{(-1,-1,-2)}C$ belongs to $D_6$, and is the whole $\operatorname{GL}(C)$-isotypic component of $D_6$ corresponding to $\mathbb{S}_{(-1,-1,-2)}C$. The former representation is isomorphic to $\mathbb{S}_{(-2,-2)}A\otimes\mathbb{S}_{(-2,-2)}B\otimes\mathbb{S}_{(-1,-1,-2)}C$ as a representation of $\operatorname{GL}$. ◻ **Lemma 22**. *The following is true about subrepresentations of $S_f$:* 1. *Representations of the form $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\alpha}B\otimes\mathbb{S}_{(2t,2t,2t)}C$ ($t\in \mathbb{Z}$) appear in $S_f$ with multiplicity one.* 2. *Representations of the form $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{(2t,2t,2t)}C$ ($t\in \mathbb{Z}$) with $\alpha\neq \beta$ do not appear in $S_f$.* 3. *Let $\alpha,\beta \in \mathbb{Z}^2_{\operatorname{dom}}$ with $|\alpha|=|\beta|=-4$, $\alpha_1+\beta_1\leq -2$, and $|\alpha_1-\beta_1|\leq 1$. Set $\phi=\min\{\alpha_1,\beta_1\}$. The multiplicity of a representation of the form $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{(-1,-1,-2)}C$ in $S_f$ is $\phi+3$ if $\alpha_1+\beta_1$ is odd, and it is $\phi+2$ if $\alpha_1+\beta_1$ is even.* 4. *The following representations appear in $S_f$ with multiplicity one: $$\mathbb{S}_{(-1,-3)}A\otimes\mathbb{S}_{(-1,-3)}B\otimes\mathbb{S}_{(-1,-1,-2)}C,\quad \mathbb{S}_{(-1,-3)}A\otimes\mathbb{S}_{(-2,-2)}B\otimes\mathbb{S}_{(-1,-1,-2)}C.$$* 5. *The following representations do not appear in $S_f$: $$\mathbb{S}_{(-2,-2)}A\otimes\mathbb{S}_{(-2,-2)}B\otimes\mathbb{S}_{(-1,-1,-2)}C,\quad \mathbb{S}_{(-4,-4)}A\otimes\mathbb{S}_{(-4,-4)}B\otimes\mathbb{S}_{(-2,-3,-3)}C,$$ $$\mathbb{S}_{(0,-4)}A\otimes\mathbb{S}_{(-2,-2)}B\otimes\mathbb{S}_{(-1,-1,-2)}C.$$* *Proof.* We prove (1) and (2) simultaneously. 
Let $\alpha,\beta \in \mathbb{Z}^2_{\operatorname{dom}}$, and let $t\in \mathbb{Z}$. Since $f$ has weight $(3,3)\times (3,3)\times (2,2,2)$, the multiplicity of $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{(2t,2t,2t)}C$ in $S_f$ is equal to the multiplicity of $\mathbb{S}_{(3k+\alpha_1,3k+\alpha_2)}A\otimes\mathbb{S}_{(3k+\beta_1,3k+\beta_2)}B\otimes\mathbb{S}_{(2k+2t,2k+2t,2k+2t)}C$ in $S$ for $k\gg 0$. Using the Cauchy Formula (see Section [2.3](#sec:reps){reference-type="ref" reference="sec:reps"}) applied to $S=\textnormal{Sym}((A\otimes B)\otimes C)$, if $\mathbb{S}_{(3k+\alpha_1,3k+\alpha_2)}A\otimes\mathbb{S}_{(3k+\beta_1,3k+\beta_2)}B\otimes\mathbb{S}_{(2k+2t,2k+2t,2k+2t)}C$ belonged to $S$, it would have to be a subrepresentation of $\mathbb{S}_{(2k+2t,2k+2t,2k+2t)}(A\otimes B)\otimes\mathbb{S}_{(2k+2t,2k+2t,2k+2t)}C$. Twisting by $\operatorname{det}(A\otimes B)^{\otimes(-2k-2t)}$ and taking duals, we are interested in the multiplicity of $\mathbb{S}_{(k+4t-\alpha_2,k+4t-\alpha_1)}A^{\ast}\otimes\mathbb{S}_{(k+4t-\beta_2,k+4t-\beta_1)}B^{\ast}$ in $\mathbb{S}_{(2k+2t,0,0,0)}(A^{\ast}\otimes B^{\ast})$ for $k\gg 0$. By the Cauchy formula, this is one if and only if $\alpha=\beta$. \(3\) We want the multiplicity of $\mathbb{S}_{(3k+\alpha_1,3k+\alpha_2)}A\otimes\mathbb{S}_{(3k+\beta_1,3k+\beta_2)}B\otimes\mathbb{S}_{(2k-1,2k-1,2k-2)}C$ in $S$ for $k\gg 0$. Using the Cauchy formula applied to $S=\textnormal{Sym}((A\otimes B)\otimes C)$, we are interested in the multiplicity of $\mathbb{S}_{(3k+\alpha_1,3k+\alpha_2)}A\otimes\mathbb{S}_{(3k+\beta_1,3k+\beta_2)}B$ in $\mathbb{S}_{(2k-1,2k-1,2k-2)}(A\otimes B)$ for $k\gg 0$. Dualizing and twisting by $\operatorname{det}(A\otimes B)^{\otimes 2k-1}$, this is the same as the multiplicity of $\mathbb{S}_{(k-\alpha_2-2,k-\alpha_1-2)}A^{\ast}\otimes\mathbb{S}_{(k-\beta_2-2,k-\beta_1-2)}B^{\ast}$ in $\mathbb{S}_{(2k-1,1,0,0)}(A^{\ast}\otimes B^{\ast})$. 
Let $\phi$ be as in the statement of the lemma. We apply [@raicu2012secant Corollary 4.3b]. Using the notation there, for $k\gg 0$ we have $f=k-\phi-2$, $e=2k-\alpha_1-\beta_1-3$ and $r=2k$. Since $\alpha_1+\beta_1\leq -2$ we have $e\geq r-1$, and since $|\alpha_1-\beta_1|\leq 1$ we have $e\geq 2f$. Note that $e$ is even if and only if $\alpha_1+\beta_1$ is odd, in which case the multiplicity is $\phi+3$, while $e$ is odd if and only if $\alpha_1+\beta_1$ is even, in which case the multiplicity is $\phi+2$. \(4\) This follows from (3). \(5\) By (3) we have that $\mathbb{S}_{(-2,-2)}A\otimes\mathbb{S}_{(-2,-2)}B\otimes\mathbb{S}_{(-1,-1,-2)}C$ does not appear in $S_f$. Applying the Fourier transform, $\mathbb{S}_{(-4,-4)}A\otimes\mathbb{S}_{(-4,-4)}B\otimes\mathbb{S}_{(-2,-3,-3)}C$ also does not appear. To determine the multiplicity of $\mathbb{S}_{(0,-4)}A\otimes\mathbb{S}_{(-2,-2)}B\otimes\mathbb{S}_{(-1,-1,-2)}C$, by the reasoning of the second paragraph, we are interested in the multiplicity of $\mathbb{S}_{(k+2,k-2)}A^{\ast}\otimes\mathbb{S}_{(k,k)}B^{\ast}$ in $\mathbb{S}_{(2k-1,1,0,0)}(A^{\ast}\otimes B^{\ast})$. Here, $r=2k$, $f=k$ and $e=2k-1$. Since $e<2f$ it follows from [@raicu2012secant Corollary 4.3b] that the multiplicity is zero. ◻ As a consequence of Lemma [Lemma 21](#lem:witD6){reference-type="ref" reference="lem:witD6"} and Lemma [Lemma 22](#charSfn3){reference-type="ref" reference="charSfn3"} we have the following. **Lemma 23**. *The simple $D_6$ is not a composition factor of $S_f$.* Let $P'_2= \mathcal{D}_V \otimes_{U(\mathfrak{gl})} (\mathbb{S}_{(-4,-4)}A\otimes\mathbb{S}_{(-4,-4)}B\otimes\mathbb{S}_{(-2,-3,-3)}C)$ be the projective equivariant $\mathcal{D}_V$-module, as constructed in [@lHorincz2019categories Section 2.1]. **Lemma 24**.
*The support of $P'_2$ is contained in $\overline{O}_5$, and the multiplicity of $\overline{T_{O_2}^* V}$ in $\operatorname{charC}(P_2')$ is one.* *Proof.* The proof of the first part is similar to that of [@lHorincz2019categories Lemma 5.14]. We again identify $V$ with the space of $4\times 3$ matrices with the action of $\textnormal{SO}_{4}(\mathbb{C}) \times \operatorname{GL}_3(\mathbb{C})$. Since $P'_2$ has an explicit presentation, we implemented it in the computer algebra system Macaulay2 [@M2]. By computing a partial Gröbner basis, we obtained the following generator of the characteristic ideal of $P'_2$ in $\mathbb{C}[V \times V^*]$ (where $x_{ij}$ are the coordinates on $V$, viewed as the space of $4\times 3$ matrices): $$x_{12}^2x_{21}^2-2x_{11}x_{12}x_{21}x_{22}+x_{11}^2x_{22}^2+x_{12}^2x_{31}^2+x_{22}^2x_{31}^2-2x_{11}x_{12}x_{31}x_{32}-2x_{21}x_{22}x_{31}x_{32}+x_{11}^2x_{32}^2+x_{21}^2x_{32}^2+x_{12}^2x_{41}^2+$$ $$+x_{22}^2x_{41}^2+x_{32}^2x_{41}^2-2x_{11}x_{12}x_{41}x_{42}-2x_{21}x_{22}x_{41}x_{42}-2x_{31}x_{32}x_{41}x_{42}+x_{11}^2x_{42}^2+x_{21}^2x_{42}^2+x_{31}^2x_{42}^2.$$ Viewed as an element of $\mathbb{C}[V]$, it is non-zero when evaluated on $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \in O_6$. For the second part, we first claim that $\overline{T_{O_2}^* V}$ has a dense $P=\operatorname{SO}_4(\mathbb{C})\times P'$-orbit, where $P'\subset \operatorname{GL}_3(\mathbb{C})$ is the parabolic given by $g_{21}=g_{31}=0$. This follows readily by explicitly computing the $\mathfrak{p}=\textnormal{Lie}(P)$-stabilizer of the point $\left(\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}\right) \in \overline{T_{O_2}^* V}$, which is seen to be 1-dimensional.
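As an aside (our own sanity check, not part of the proof or of the Macaulay2 computation), the displayed generator coincides term by term with the sum of the squared $2\times 2$ minors of the first two columns of the matrix $(x_{ij})$, which makes its non-vanishing at the displayed point of $O_6$ immediate. A minimal Python sketch confirming both observations (all function names here are ours):

```python
import random

def minors_squared(x):
    # sum of the squared 2x2 minors of a 4x2 matrix
    # (the first two columns of the 4x3 matrix of variables)
    return sum((x[i][0] * x[j][1] - x[i][1] * x[j][0]) ** 2
               for i in range(4) for j in range(i + 1, 4))

def generator(x):
    # the displayed generator of the characteristic ideal, copied term by term
    (x11, x12), (x21, x22), (x31, x32), (x41, x42) = x
    return (x12**2*x21**2 - 2*x11*x12*x21*x22 + x11**2*x22**2
            + x12**2*x31**2 + x22**2*x31**2 - 2*x11*x12*x31*x32
            - 2*x21*x22*x31*x32 + x11**2*x32**2 + x21**2*x32**2
            + x12**2*x41**2 + x22**2*x41**2 + x32**2*x41**2
            - 2*x11*x12*x41*x42 - 2*x21*x22*x41*x42 - 2*x31*x32*x41*x42
            + x11**2*x42**2 + x21**2*x42**2 + x31**2*x42**2)

# the two expressions agree on random integer points
for _ in range(100):
    x = [[random.randint(-5, 5) for _ in range(2)] for _ in range(4)]
    assert generator(x) == minors_squared(x)

# non-vanishing at the first two columns of the displayed point of O_6
print(generator([[1, 0], [0, 1], [0, 0], [0, 0]]))  # -> 1, hence non-zero
```

Since only the first two columns of the point enter the generator, the evaluation reduces to the single non-zero minor $x_{11}x_{22}-x_{12}x_{21}=1$.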
Since the highest weight vector in the representation $\mathbb{S}_{(-4,-4)}A\otimes\mathbb{S}_{(-4,-4)}B\otimes\mathbb{S}_{(-2,-3,-3)}C$ is $P$-semi-invariant, the claim now follows analogously to [@lHorincz2019categories Lemma 3.12 and Proposition 3.14]. ◻ ## Extensions between $D_0$, $D_1$, $D_2$, $D_3$, $D_4$ We start by determining the local cohomology of the polynomial ring $S$ with support in $\overline{O}_1$. This result will help determine extensions between $D_1$ and $D_0$. For the following statement, note that the codimension of $\overline{O}_1$ is $3n-2$, which is equal to $4n-5$ when $n=3$. **Theorem 25**. *The following is true about local cohomology of the polynomial ring $S$ with support in $\overline{O}_1$ (all short exact sequences are non-split).* 1. *If $n=3$ then the only non-zero local cohomology modules are $$0 \to D_1 \to H^{3n-2}_{\overline{O}_1}(S)\to D_0\to 0,\quad H^{4n-3}_{\overline{O}_1}(S)=D_0^{\oplus 2}.$$* 2. *If $n\geq 4$ then the only non-zero local cohomology modules are $$H^{3n-2}_{\overline{O}_1}(S)=D_1,\quad H^{4n-5}_{\overline{O}_1}(S)=D_0,\quad H^{4n-3}_{\overline{O}_1}(S)=D_0^{\oplus 2}.$$* *Proof.* For ease of notation, set $d=\dim S=4n$, and $c=\operatorname{codim}(\overline{O}_1, V)=3n-2$. The orbit closure $\overline{O}_1$ is the affine cone over the Segre product $X=\mathbb{P}^1\times \mathbb{P}^1\times \mathbb{P}^{n-1}$, a smooth projective variety. It follows from [@hartshorne2018simple Theorem 4.8, Theorem 6.4] (see also [@switala Main Theorem 1.2], [@garsab Theorem], and [@LSW Theorem 3.1]) that the multiplicity of $D_0$ in $H^{\bullet}_{\overline{O}_1}(S)$ is determined by the de Rham Betti numbers $b_k=\dim H^k_{DR}(X)$ of $X$.
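As an aside (an illustration of the bookkeeping, not part of the argument), the Betti numbers in question can be tabulated directly: the Künneth computation below gives $P_X(q)=(1+q^2)^2\sum_{i=0}^{n-1}q^{2i}$, and the following Python sketch (function names ours) extracts its coefficients $b_k$ and the differences that control the multiplicities in the theorem, with $d=4n$ and $c=3n-2$:

```python
def betti(n):
    # b_k = coefficient of q^k in P_X(q) = (1 + 2q^2 + q^4)(1 + q^2 + ... + q^(2n-2)),
    # the Poincare polynomial of X = P^1 x P^1 x P^(n-1)
    b = [0] * (2 * n + 3)
    for e, coeff in {0: 1, 2: 2, 4: 1}.items():
        for i in range(n):
            b[e + 2 * i] += coeff
    return b

for n in (3, 4, 5):
    b, d, c = betti(n), 4 * n, 3 * n - 2
    # multiplicity of D_0 on top of H^c with support in the closure of O_1
    top = b[n + 1] - b[n - 1]
    # multiplicities of D_0 in H^j for c < j < d-1 (only odd j can contribute)
    middle = {j: b[d - j - 1] - b[d - j - 3]
              for j in range(c + 1, d - 1)
              if j % 2 == 1 and b[d - j - 1] - b[d - j - 3] != 0}
    print(n, top, middle)
```

For $n=3$ this reports multiplicity $1$ on top of $H^{3n-2}$ and $D_0^{\oplus 2}$ in degree $4n-3=9$; for $n=4,5$ it reports $D_0$ in degree $4n-5$ and $D_0^{\oplus 2}$ in degree $4n-3$, matching the two parts of the theorem.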
More precisely, for $c<j<d-1$ we have $H^j_{\overline{O}_1}(S)=D_0^{\oplus (b_{d-j-1}-b_{d-j-3})}$, and there is a non-split short exact sequence $$\label{eqn:EsinO1} 0\to D_1 \to H^{c}_{\overline{O}_1}(S)\to D_0^{\oplus (b_{n+1}-b_{n-1})}\to 0.$$ Furthermore, $H^j_{\overline{O}_1}(S)=0$ for $j\geq d-1$. By the Künneth formula, the Poincaré polynomial of $X$ is $$P_X(q)=\binom{2}{1}_{q^2}\cdot \binom{2}{1}_{q^2}\cdot \binom{n}{1}_{q^2}=(1+q^2)^2\cdot \sum_{i=0}^{n-1} q^{2i}=(1+2q^2+q^4)\cdot \sum_{i=0}^{n-1}q^{2i},$$ so $b_k$ is the coefficient of $q^k$ in $P_X(q)$. We first consider $H^c_{\overline{O}_1}(S)$. Notice that $P_X(q)$ is supported in even degrees, so that if $n$ is even, then $b_{n+1}= b_{n-1} = 0$, and the claim follows from ([\[eqn:EsinO1\]](#eqn:EsinO1){reference-type="ref" reference="eqn:EsinO1"}). For $n$ odd, the coefficient of $q^{n+1}$ in $P_X(q)$ is $4$, so $b_{n+1} = 4$. If $n = 3$, the coefficient of $q^{n-1}$ in $P_X(q)$ is $3$, whereas if $n\geq 5$, the coefficient of $q^{n-1}$ in $P_X(q)$ is $4$. Therefore, the difference $b_{n+1}- b_{n-1}$ is as claimed. Now we consider cohomological degrees $c<j<d-1$. The values of the desired differences of Betti numbers are described by the following table (if $j$ is even then $b_{d-j-1}=b_{d-j-3}=0$).

  $j$                     $2n+1\leq j \leq 4n-7$   $4n-5$   $4n-3$   $4n-1$
  ----------------------- ------------------------ -------- -------- --------
  $b_{d-j-1}-b_{d-j-3}$   $0$                      $1$      $2$      $1$

We are interested in when $c<j<4n-1$ and $b_{d-j-1}-b_{d-j-3}\neq 0$, namely the columns corresponding to $4n-5$ and $4n-3$. If $n=3$, then $c=3n-2=4n-5$, so we are only interested in $j=4n-3$, and $H^{4n-3}_{\overline{O}_1}(S)=D_0^{\oplus 2}$. If $n\geq 4$, then $4n-5>c$, so $H^{4n-5}_{\overline{O}_1}(S)=D_0$. For all $n\geq 3$ we have $4n-3>c$ so $H^{4n-3}_{\overline{O}_1}(S)=D_0^{\oplus 2}$. ◻ Throughout $\textnormal{Ext}$ denotes the group of extensions in the subcategory $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$. **Lemma 26**.
*Let $c=\textnormal{codim}(\overline{O}_1,V)=3n-2$. The module $H^c_{\overline{O}_1}(S)$ is the injective envelope in $\textnormal{mod}_{\textnormal{GL}}^{\overline{O}_1}(\mathcal{D}_V)$ of $D_1$. If $n\neq 3$ then $\textnormal{Ext}^1(D_1,D_0)=\textnormal{Ext}^1(D_0,D_1)=0$. If $n=3$, then $$\dim_{\mathbb{C}} \textnormal{Ext}^1(D_1,D_0)=\dim_{\mathbb{C}} \textnormal{Ext}^1(D_0,D_1)=1.$$* *Proof.* By [@lHorincz2019categories Lemma 3.11] there is an exact sequence of $\mathcal{D}$-modules $$0 \longrightarrow H^c_{\overline{O}_1}(S)\longrightarrow H^c_{O_1}(S)\longrightarrow H^{c+1}_{O_0}(S),$$ and $H^c_{O_1}(S)$ is the injective envelope in $\textnormal{mod}_{\textnormal{GL}}^{\overline{O}_1}(\mathcal{D}_V)$ of $D_1$. Since $H^{c+1}_{O_0}(S)=0$, there is an isomorphism $H^c_{\overline{O}_1}(S)\cong H^c_{O_1}(S)$, proving the first assertion. Since $H^c_{\overline{O}_1}(S)$ is the injective envelope in $\textnormal{mod}_{\textnormal{GL}}^{\overline{O}_1}(\mathcal{D}_V)$ of $D_1$, Theorem [Theorem 25](#BettiThm){reference-type="ref" reference="BettiThm"} implies that $\textnormal{Ext}^1(D_0,D_1)$ is as claimed. Since $D_1$ and $D_0$ are self-dual, this yields the assertion about $\textnormal{Ext}^1(D_1,D_0)$. ◻ Now we turn to studying extensions between $D_2$ and $D_1$, $D_0$. Using the above results and [@lHorincz2019categories Lemma 3.11] we prove the following: **Lemma 27**. *We have* 1. *If $n=3$ or $n\geq 5$, the injective envelope of $D_2$ in $\textnormal{mod}_{\textnormal{GL}}^{\overline{O}_2}(\mathcal{D}_V)$ is a non-trivial extension of $D_1$ by $D_2$. Thus, for these values of $n$, we have $\textnormal{Ext}^1(D_2,D_0)=\textnormal{Ext}^1(D_0,D_2)=0$ and the vector spaces $\textnormal{Ext}^1(D_2,D_1)$, $\textnormal{Ext}^1(D_1,D_2)$ are one-dimensional.* 2.
*If $n=4$ and $I$ is the injective envelope of $D_2$ in $\textnormal{mod}_{\textnormal{GL}}^{\overline{O}_2}(\mathcal{D}_V)$, there is a non-split short exact sequence $$0\longrightarrow D_2\longrightarrow I\longrightarrow D_0\oplus D_1\longrightarrow 0.$$ In particular, for $n=4$ the vector spaces $\textnormal{Ext}^1(D_2,D_1)$, $\textnormal{Ext}^1(D_1,D_2)$, $\textnormal{Ext}^1(D_2,D_0)$, $\textnormal{Ext}^1(D_0,D_2)$ are one-dimensional.* *Proof.* Similar to the proof of Lemma [Lemma 26](#ext01){reference-type="ref" reference="ext01"}, let $c=\textnormal{codim}(\overline{O}_2,V)=3n-3$. By [@lHorincz2019categories Lemma 3.11] there is an exact sequence of $\mathcal{D}$-modules $$\label{injLocalSequence} 0\longrightarrow H^c_{\overline{O}_2}(S)\longrightarrow H^c_{O_2}(S)\longrightarrow H^{c+1}_{\overline{O}_1}(S)\longrightarrow H^{c+1}_{\overline{O}_2}(S),$$ and $H^c_{O_2}(S)$ is the injective envelope of $D_2$ in $\textnormal{mod}_{\textnormal{GL}}^{\overline{O}_2}(\mathcal{D}_V)$. To prove (1), first assume $n\geq 6$. Using Theorem [Theorem 3](#localSegre){reference-type="ref" reference="localSegre"} we see that $H^c_{\overline{O}_2}(S)=D_2$ and $H^{c+1}_{\overline{O}_2}(S)=0$. By Lemma [Lemma 26](#ext01){reference-type="ref" reference="ext01"} we have $H^{c+1}_{\overline{O}_1}(S)=D_1$, and the result follows from ([\[injLocalSequence\]](#injLocalSequence){reference-type="ref" reference="injLocalSequence"}). If $n=5$, Theorem [Theorem 3](#localSegre){reference-type="ref" reference="localSegre"} implies that $H^c_{\overline{O}_2}(S)=D_2$ and $H^{c+1}_{\overline{O}_2}(S)=D_0$, and Lemma [Lemma 26](#ext01){reference-type="ref" reference="ext01"} implies that $H^{c+1}_{\overline{O}_1}(S)=D_1$. Thus, the final map in ([\[injLocalSequence\]](#injLocalSequence){reference-type="ref" reference="injLocalSequence"}) goes from $D_1$ to $D_0$, so it must be zero. Let $n=3$. 
By Theorem [Theorem 3](#localSegre){reference-type="ref" reference="localSegre"} the exact sequence ([\[injLocalSequence\]](#injLocalSequence){reference-type="ref" reference="injLocalSequence"}) becomes $$\label{inj2n3} 0\longrightarrow D_2\longrightarrow H^c_{O_2}(S)\longrightarrow H^{c+1}_{\overline{O}_1}(S)\longrightarrow D_0,$$ and $H^{c+1}_{\overline{O}_1}(S)$ is the unique extension of $D_0$ by $D_1$ by Lemma [Lemma 26](#ext01){reference-type="ref" reference="ext01"}. Thus, the claim is that the final map in ([\[inj2n3\]](#inj2n3){reference-type="ref" reference="inj2n3"}) is surjective, which we prove using the Fourier transform. Let $(\mathcal{Q},\mathcal{I})$ be the quiver with relations of $\textnormal{mod}_{\textnormal{GL}}(\mathcal{D}_V)$. The number of paths in $\mathcal{Q}$ (modulo relations) from $D_0$ to $D_2$ is equal to the number of paths from $\mathcal{F}(D_0)=D_8$ to $\mathcal{F}(D_2)=D_6$. By Lemma [Lemma 23](#D6S0fn3){reference-type="ref" reference="D6S0fn3"}, there are no such paths. Applying the holonomic duality functor, there are no paths in $\mathcal{Q}$ from $D_2$ to $D_0$. This proves the lemma for $n=3$. Finally, we prove (2). Let $n=4$. By Theorem [Theorem 3](#localSegre){reference-type="ref" reference="localSegre"} we have that $H^c_{\overline{O}_2}(S)$ is a non-trivial extension of $D_0$ by $D_2$, and $H^{c+1}_{\overline{O}_2}(S)=0$. By Lemma [Lemma 26](#ext01){reference-type="ref" reference="ext01"} we have $H^{c+1}_{\overline{O}_1}(S)=D_1$. Thus, the exact sequence ([\[injLocalSequence\]](#injLocalSequence){reference-type="ref" reference="injLocalSequence"}) becomes $$0\longrightarrow H^c_{\overline{O}_2}(S)\longrightarrow H^c_{O_2}(S)\longrightarrow D_1\longrightarrow 0.$$ Since there are no extensions between $D_1$ and $D_0$ for $n=4$, we obtain the desired result. ◻ **Lemma 28**. *Let $i=3,4$. If $n\geq 3$ the simple $D_i$ is injective in $\textnormal{mod}_{\textnormal{GL}}^{\overline{O}_i}(\mathcal{D}_V)$.
In particular, the vector spaces $\textnormal{Ext}^1(D_i,D_1)$, $\textnormal{Ext}^1(D_1,D_i)$, $\textnormal{Ext}^1(D_i,D_0)$, $\textnormal{Ext}^1(D_0,D_i)$ are zero.* *Proof.* Let $i$ be $3$ or $4$. Let $c=\textnormal{codim}(\overline{O}_i,V)=2n-1$. By Theorem [Theorem 6](#thm:localcoho34){reference-type="ref" reference="thm:localcoho34"} the only non-vanishing local cohomology modules are $H^c_{\overline{O}_i}(S)=D_i$ and $H^{c+2n-2}_{\overline{O}_i}(S)=D_0$. Again by [@lHorincz2019categories Lemma 3.11] we have an exact sequence $$0\longrightarrow D_i\longrightarrow H^c_{O_i}(S)\longrightarrow H^{c+1}_{\overline{O}_1}(S)\longrightarrow H^{c+1}_{\overline{O}_i}(S),$$ and $H^c_{O_i}(S)$ is the injective envelope of $D_i$ in $\textnormal{mod}_{\textnormal{GL}}^{\overline{O}_i}(\mathcal{D}_V)$. If $n\geq 3$ then $\textnormal{codim}(\overline{O}_1,V)-c>1$, so that $H^{c+1}_{\overline{O}_1}(S)=0$. Therefore $D_i\cong H^c_{O_i}(S)$ for $n\geq 3$. ◻ # Categories of equivariant $\mathcal{D}$-modules {#sec:eqcat} In this section, we determine the quiver of the category $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$ of $\operatorname{GL}$-equivariant $\mathcal{D}_V$-modules for the space $V$ of $2\times 2 \times n$ tensors, according to the cases $n=3, n=4,$ and $n\geq 5$. ## The case $n=3$ {#sec:n3} **Theorem 29**. *Let $n=3$.
The quiver of $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$ is given by $$\xymatrix@R+1pc@C+1.6pc{ & (3) \ar@<0.5ex>[rr]^{\alpha_3} & & \ar@<0.2ex>[ll]^{\beta_3} \ar@<0.2ex>[dll]^{\beta_{1'}} (6') \ar@<0.5ex>[rr]^{\alpha_4} \ar@<0.5ex>[drr]^{\alpha_{7'}}& & \ar@<0.2ex>[ll]^{\beta_4} (4) & \\ (0) \ar@<0.3ex>[r]^{\alpha_1} & \ar@<0.5ex>[l]^{\beta_1} (1) \ar@<0.3ex>[r]^{\alpha_2} \ar@<0.5ex>[urr]^{\alpha_{1'}}& \ar@<0.5ex>[l]^{\beta_2} (2) \ar@<0.3ex>[r]^{\alpha_5} & \ar@<0.5ex>[l]^{\beta_5} (5) \ar@<0.3ex>[r]^{\alpha_6} & \ar@<0.5ex>[l]^{\beta_6} (6) \ar@<0.3ex>[r]^{\alpha_7} & \ar@<0.5ex>[l]^{\beta_7} \ar@<0.2ex>[ull]^{\beta_{7'}} (7) \ar@<0.3ex>[r]^{\alpha_8} & \ar@<0.5ex>[l]^{\beta_8} (8) }$$ with the following relations: $$\alpha_i \beta_i, \beta_i \alpha_i , \alpha_1 \alpha_2, \beta_2 \beta_1, \alpha_2 \alpha_5, \beta_5\beta_2, \alpha_5\alpha_6, \beta_6 \beta_5, \alpha_6 \alpha_7, \beta_7 \beta_6, \alpha_7 \alpha_8, \beta_8 \beta_7, \beta_2\alpha_{1'}, \beta_{1'}\alpha_2, \alpha_7\beta_{7'}, \alpha_{7'}\beta_7,$$ $$\alpha_{1'}\beta_3, \alpha_{3}\beta_{1'}, \alpha_{1'}\alpha_4, \beta_4\beta_{1'}, \beta_{7'}\beta_3, \alpha_3\alpha_{7'}, \beta_{7'}\alpha_4, \beta_4\alpha_{7'}.$$ The Fourier transform acts on the quiver as the reflection along the line between $(5)$ and $(6')$.* *Proof.* By [@binary Lemma 2.4], $S_f$ is the injective envelope of $S=D_8$. Since $S_f/S=H_f^1(S)$, and $D_7$ is its unique simple submodule, we conclude that there is a single arrow $\alpha_8$ from $(7)$ to $(8)$, and there are no other arrows into $(8)$. Via holonomic duality $\mathbb{D}$, there is a single arrow $\beta_8$ from $(8)$ to $(7)$, and the vertex $(8)$ has no further arrows connected to it. Since $S$ has multiplicity one in $S_f$, we have $\beta_8 \alpha_8=0$. 
By Lemma [Lemma 14](#lem:opencat2){reference-type="ref" reference="lem:opencat2"} we also have $\alpha_8 \beta_8 =0$, as the restriction of the injective envelope of $D_7$ to $V\setminus \overline{O}_6$ (which is again an injective envelope there) has only one copy of the simple $\mathcal{D}_{V\setminus \overline{O}_6}$-module corresponding to the trivial local system on $O_7$. Furthermore, since $\alpha_8\beta_8 =0$, all relations in the quiver of the subcategory $\textnormal{mod}_{\operatorname{GL}}^{\overline{O}_7}(\mathcal{D}_V)$ stay relations in the quiver of $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$. In particular, by [@lHorincz2019categories Corollary 3.9] all $2$-cycles at $(7)$ must be zero in the quiver of $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$. By Fourier transform $\mathcal{F}$, there are two arrows $\alpha_1,\beta_1$ between $(0)$ and $(1)$ with $\alpha_1 \beta_1=0$, there are no further arrows connected to $(0)$, and all $2$-cycles at $(1)$ must be zero. By Lemma [Lemma 27](#lemmaExt12){reference-type="ref" reference="lemmaExt12"}, there are exactly two arrows $\alpha_2, \beta_2$ between $(1), (2)$, and by Lemma [Lemma 28](#lemmaExt34){reference-type="ref" reference="lemmaExt34"} there are no arrows between $(3)$ (resp. $(4)$) and $(1)$ or $(2)$. Applying $\mathcal{F}$, we have that there are exactly two arrows $\alpha_7,\beta_7$ between $(6),(7)$, and there are no arrows between $(3)$ (resp. $(4)$) and $(6)$ or $(7)$. By Lemma [Lemma 23](#D6S0fn3){reference-type="ref" reference="D6S0fn3"} we have $\alpha_7 \alpha_8=0$, and by self-duality of $D_6$, $D_7$, $D_8$, we have $\beta_8 \beta_7=0$. Let $U= V\setminus \overline{O}_2$ and $j: U \to V$ be the open embedding.
By Lemma [Lemma 13](#lem:opencat1){reference-type="ref" reference="lem:opencat1"} and using the argument with injective envelopes as we did in its proof, we get arrows $\alpha_3, \beta_3, \alpha_4, \beta_4, \alpha_6,\beta_6$ in the quiver of $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$ as required, and there are no further arrows between $(3)$, $(4)$, $(5)$, $(6)$ and $(6')$. Further, all compositions between the former arrows are zero in the subcategory $\textnormal{mod}_{\operatorname{GL}}^{\overline{O}_6}(\mathcal{D}_V)$. As $\mathcal{F}(D_3) = D_4$, we must have $\mathcal{F}(D_5) = D_5$ and $\mathcal{F}(D'_{6}) = D'_6$. In particular, we have unique arrows $\alpha_5, \beta_5$ between $(2)$ and $(5)$. As there are no more arrows involving the vertices $(3)$ and $(4)$, this implies that the compositions between $\alpha_3, \beta_3, \alpha_4, \beta_4$ are zero in $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$, and not just in the subcategory $\textnormal{mod}_{\operatorname{GL}}^{\overline{O}_6}(\mathcal{D}_V)$. We have the following exact sequence $$\label{eq:O7} 0 \to H^1_f(S) \to H^1_{O_7}(S) \to H^2_{\overline{O}_6}(S) \to 0.$$ By Lemma [Lemma 23](#D6S0fn3){reference-type="ref" reference="D6S0fn3"}, $D_6$ does not appear as a composition factor in $H^1_f(S)$. We have already shown that there is an arrow $\alpha_7$ from $(6)$ to $(7)$. Together with [@lHorincz2019categories Lemma 3.11], this shows that $D_6$ is a direct summand of $H^1_{O_7}(S)/D_7$, and therefore $\alpha_6 \alpha_7 = 0$. Via $\mathbb{D}$ and $\mathcal{F}$, we obtain also $\beta_7 \beta_6 = \alpha_2 \alpha_5 = \beta_5 \beta_2=0$.
The $b$-function ([\[eq:bs3\]](#eq:bs3){reference-type="ref" reference="eq:bs3"}) of $f$ yields, according to [@lHorincz2019categories Proposition 4.9], a filtration $$S \subsetneq \mathcal{D}_V f^{-1} \subsetneq \mathcal{D}_V f^{-2} = S_f.$$ By [@lHorincz2019categories Lemma 2.8] the modules $\mathcal{D}_V f^{-1}$ and $S_f$ have unique simple quotients, say $L^{-1}$ and $L^{-2}$, with $L^{-i}$ having a non-zero semi-invariant element of weight $\sigma^{-i}$, for $i=1,2$, where $\sigma$ stands for the weight of $f$. By [@microlocal Proposition 4.6], the characteristic variety of $S_f$ contains the conormal $T_{\{0\}}^* V$ at the origin. Since $S$ is the only simple equivariant $\mathcal{D}_V$-module with full support, via $\mathcal{F}$ we obtain that $D_0$ is the only simple equivariant $\mathcal{D}_V$-module whose characteristic variety contains $T_{\{0\}}^* V$. Therefore, $D_0$ must be a composition factor of $S_f$. From [@lHorincz2019categories Equation 4.14], we see that $D_0$ has a non-zero semi-invariant of weight $\sigma^{-2}$. Since the weight space of $\sigma^{-2}$ in $S_f$ is one-dimensional (as the action is prehomogeneous), this shows that $L^{-2} \cong D_0$. We have an exact sequence $$\label{eq:O6} 0 \to H^2_{\overline{O}_6}(S) \to H^2_{O_6}(S) \to H^3_{\overline{O}_5}(S) \to H^3_{\overline{O}_6}(S).$$ By Theorem [Theorem 4](#thm:localcoho6){reference-type="ref" reference="thm:localcoho6"} we have $H^2_{\overline{O}_6}(S)=D_6$ and $H^3_{\overline{O}_6}(S)=D_2$. By [@lHorincz2019categories Lemma 3.11] there are no arrows connected to $(6)$ other than $\alpha_6, \beta_6, \alpha_7, \beta_7$. Via $\mathcal{F}$, there are exactly $4$ arrows connected to $(2)$ as required. At this point only the vertices $(1), (5), (6'), (7)$ can potentially have additional arrows among themselves. We have seen that there are no arrows between $(5)$ and $(6')$, and now we show that there is no arrow from $(1)$ to $(5)$.
Assume by contradiction that there is such an arrow $\gamma$. Then from the sequence ([\[eq:O6\]](#eq:O6){reference-type="ref" reference="eq:O6"}) and [@lHorincz2019categories Lemma 3.11] we deduce that the composition $\gamma \alpha_6$ is a non-zero path from $(1)$ to $(6)$. Via $\mathcal{F}$ and $\mathbb{D}$, we obtain a non-zero path from $(2)$ to $(7)$ starting with $\alpha_5$. Since $\alpha_2 \alpha_5 =0$, and in the subcategory $\textnormal{mod}_{\operatorname{GL}}^{\overline{O}_6}(\mathcal{D}_V)$ we have $\beta_5 \alpha_5 =0$ (and there are no other arrows into $(2)$), we see by [@lHorincz2019categories Lemma 3.11] that $D_2$ is a quotient of $H^1_{O_7}(S)/D_7 \, \in \textnormal{mod}_{\operatorname{GL}}^{\overline{O}_6}(\mathcal{D}_V)$. From ([\[eq:O7\]](#eq:O7){reference-type="ref" reference="eq:O7"}), this in turn implies that $D_2$ is a quotient of $S_f$. This is a contradiction, since $D_0=L^{-2}$ is the unique quotient of $S_f$. Therefore, no such arrow $\gamma$ exists. Via $\mathcal{F}$ and $\mathbb{D}$, we see that the vertex $(5)$ cannot have other arrows besides $\alpha_5,\beta_5,\alpha_6,\beta_6$. This further implies that $\alpha_6\beta_6=0$ (and not just in the subcategory $\textnormal{mod}_{\operatorname{GL}}^{\overline{O}_6}(\mathcal{D}_V)$), and via $\mathcal{F}$ also $\beta_5\alpha_5=0$. As $S_f$ is the injective envelope of $S$, and $D_0$ is one of its composition factors, there is a non-trivial path from $(0)$ to $(8)$. Since $\alpha_1$ is the only arrow starting from $(0)$, this shows that $D_1$ is a composition factor of $S_f$. By [@lHorincz2019categories Equation 4.14] we see that $\mathcal{F}(L^{-1})$ also has a non-zero semi-invariant element of weight $\sigma^{-1}$. As both $D_7$ and $D_1 = \mathcal{F}(D_7)$ are composition factors of $S_f$, and its weight space corresponding to $\sigma^{-1}$ is 1-dimensional, we must have $D_7 \ncong L^{-1} \ncong D_1$.
From Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} we see readily that $\sigma^{-1}$ does not appear in the character formula for $D_2$ (it has no non-zero $\operatorname{SL}_3(\mathbb{C})$-invariants), thus $L^{-1} \ncong D_2$. By Lemma [Lemma 20](#lem:loc3){reference-type="ref" reference="lem:loc3"} (for $s=-2$), we have that $\textnormal{supp} \, S_f/\mathcal{D}_Vf^{-1} \subset \overline{O}_1$. We have an exact sequence of the form $$0 \to K \to S_f \to D_0 \to 0.$$ Clearly, $\mathcal{D}_V f^{-1} \subset K$, and by the above, $D_1$ is a quotient of $K$. Since $L^{-1}$ is the unique quotient of $\mathcal{D}_V f^{-1}$ and $L^{-1} \ncong D_1$, this shows that $D_1$ is a composition factor of $S_f / \mathcal{D}_V f^{-1}$. Since the latter has $D_0$ as a unique quotient, and has support contained in $\overline{O}_1$, it is given by the (unique) non-split sequence $$0 \to D_1 \to S_f / \mathcal{D}_V f^{-1} \to D_0 \to 0.$$ As the module $\mathcal{D}_V f^{-1}$ (resp. $S_f$) has the unique quotient $L^{-1}$ (resp. $D_0$), we must have an arrow from $(1)$ to the vertex corresponding to the simple $L^{-1}$. As $D_2 \ncong L^{-1} \ncong D_7$, the only possibility is that $D'_6 \cong L^{-1}$. This gives the arrow $\alpha_{1'}$, and via $\mathcal{F}$ and $\mathbb{D}$ also $\beta_{1'}, \alpha_{7'}, \beta_{7'}$. The module $D_7$ is the unique simple submodule of $H_f^1(S)$. Therefore, $D_7 \subset \mathcal{D}_V f^{-1}/S$, and we denote by $Q$ their quotient. Due to the arrow $\alpha_{7'}$, we deduce from ([\[eq:O7\]](#eq:O7){reference-type="ref" reference="eq:O7"}) and [@lHorincz2019categories Lemma 3.11] that $D'_6$ is a submodule of $Q$. Since $D'_6$ is also the unique quotient of $Q$, this shows that $Q \cong D'_6$.
Thus, we have determined the structure of $S_f$, and the corresponding representation of the quiver is non-zero along the path (with one-dimensional spaces at each vertex, and identity map at each arrow) $$\label{eq:Sf} (0) \xrightarrow{\alpha_1} (1) \xrightarrow{\alpha_{1'}} (6') \xrightarrow{\alpha_{7'}} (7) \xrightarrow{\alpha_8} (8).$$ With this and from ([\[eq:O7\]](#eq:O7){reference-type="ref" reference="eq:O7"}) we can now deduce that $D_3$ and $D_4$ are not composition factors of $H^1_{O_7}(S)$, therefore $\alpha_3 \alpha_{7'} = \beta_4 \alpha_{7'} =0$. Via $\mathbb{D}$ and $\mathcal{F}$, we obtain $\beta_{7'} \beta_3 = \beta_{7'} \alpha_4 = \alpha_{1'} \beta_3 = \alpha_{1'} \alpha_4 = \alpha_3 \beta_{1'} = \beta_4 \beta_{1'}=0$. Further, ([\[eq:Sf\]](#eq:Sf){reference-type="ref" reference="eq:Sf"}) and ([\[eq:O7\]](#eq:O7){reference-type="ref" reference="eq:O7"}) show that there are no arrows from $(1)$ to $(7)$, and $\alpha_{7'}$ is the only arrow from $(6')$ to $(7)$. Via $\mathbb{D}$ and $\mathcal{F}$, we can conclude at this point that we have obtained all the arrows of the quiver. Let $P_2$ be the projective cover of $D_2$ in $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$, and recall the module $P_2'$ from Lemma [Lemma 24](#lem:P2char){reference-type="ref" reference="lem:P2char"}. Using Lemma [Lemma 21](#lem:witD6){reference-type="ref" reference="lem:witD6"} together with [@lHorincz2019categories Lemma 2.1 and Equation 4.14], we see that $P_2$ is a direct summand of $P_2'$, therefore by Lemma [Lemma 24](#lem:P2char){reference-type="ref" reference="lem:P2char"} we must have $\beta_2 \alpha_{1'}= \beta_2\alpha_2 = \beta_5 \alpha_5=\alpha_5 \alpha_6=0$. Via $\mathcal{F}$ and $\mathbb{D}$, we get also $\beta_{1'} \alpha_2 = \alpha_7 \beta_{7'} = \alpha_{7'} \beta_7=\beta_6\beta_5=\beta_6\alpha_6=\alpha_7\beta_7=0$. We are left to show that all $2$-cycles at $(6')$ are zero.
The projective cover $P_{6}'$ of $D_6'$ in $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$ is a direct summand of the projective module $P(\sigma^{-1})$ as constructed in [@lHorincz2019categories Lemma 2.1]. By Lemma [Lemma 18](#lem:dense){reference-type="ref" reference="lem:dense"} the variety $\overline{T_{O_6}^* V}$ has a dense $\operatorname{GL}$-orbit, and therefore the multiplicity of $\overline{T_{O_6}^* V}$ in $P(\sigma^{-1})$ is one, which can be seen as in the proof of Lemma [Lemma 24](#lem:P2char){reference-type="ref" reference="lem:P2char"}. This implies that all $2$-cycles at $(6')$ are zero, finishing the proof. ◻ ## The case $n=4$ {#sec:n4} **Theorem 30**. *Let $n=4$. The quiver of $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$ is given by $$\xymatrix@R-0.5pc@C+1.6pc{ & (3) \ar@<0.5ex>[r] & \ar@<0.5ex>[l] (6') \ar@<0.5ex>[r] & \ar@<0.5ex>[l] (4) & \\ (0) \ar@<0.5ex>[r] & \ar@<0.5ex>[l] (2) \ar@<0.5ex>[r] \ar@<0.5ex>[d] & \ar@<0.5ex>[l] \ar@<0.5ex>[d] (6) \ar@<0.5ex>[r] & \ar@<0.5ex>[l] \ar@<0.5ex>[d] (8) \ar@<0.5ex>[r] & \ar@<0.5ex>[l] (9) \\ & (1) \ar@<0.5ex>[u] & (5) \ar@<0.5ex>[u] & (7) \ar@<0.5ex>[u] & }$$ with relations: all 2-cycles, and all paths of length two starting or ending at vertices $(1), (5), (7)$.* *Proof.* As in this case the semi-invariant $f$ is just the $4\times 4$ determinant, from the structure of $S_f$ together with duality $\mathbb{D}$ we obtain the following arrows of the quiver (cf. [@lHorincz2019categories Theorem 5.4]) $$\xymatrix{ (0) \ar@<0.5ex>[r] & \ar@<0.5ex>[l] (2) \ar@<0.5ex>[r] & \ar@<0.5ex>[l] (6) \ar@<0.5ex>[r] & \ar@<0.5ex>[l] (8) \ar@<0.5ex>[r] & \ar@<0.5ex>[l] (9) }$$ Since the codimension of $O_1, O_5, O_7$ in $O_2, O_6, O_8$ is one, respectively, we obtain as in ([\[eq:O6\]](#eq:O6){reference-type="ref" reference="eq:O6"}) the arrows between the vertices $(1)$ and $(2)$, $(5)$ and $(6)$, $(7)$ and $(8)$, as desired. 
As in the proof of Theorem [Theorem 29](#thm:quiv3){reference-type="ref" reference="thm:quiv3"}, we obtain by Lemma [Lemma 13](#lem:opencat1){reference-type="ref" reference="lem:opencat1"} the arrows between $(3)$ and $(6')$, $(4)$ and $(6')$. Further, the paths of length two from $(3)$ to $(4)$ and from $(4)$ to $(3)$ are non-zero. We are left to obtain the relations and to show that there are no more arrows. Since $S_f$ is the injective envelope of $D_9$ (see [@binary Lemma 2.4]), there are no more arrows connected to $(9)$, the 2-cycle at $(9)$ is zero, and the path of length two from $(7)$ to $(9)$ is zero. By $\mathcal{F}$ and $\mathbb{D}$, we see using Proposition [Proposition 12](#prop:fourier){reference-type="ref" reference="prop:fourier"} that there are no more arrows connected to $(0)$, the 2-cycle at $(0)$ is zero, the paths of length two from $(9)$ to $(7)$, $(0)$ to $(1)$, $(1)$ to $(0)$ are all zero. By [@lHorincz2019categories Lemma 3.11], the following short exact sequence shows that there are no more arrows connected to vertex $(8)$: $$\label{eq:O8} 0 \to H^1_{\overline{O}_8}(S) \to H^1_{O_8}(S) \to H^2_{\overline{O}_7}(S) \to 0.$$ Hence, via $\mathcal{F}$ there are no more arrows connected to $(2)$. As we did at the end of the proof of Theorem [Theorem 29](#thm:quiv3){reference-type="ref" reference="thm:quiv3"}, we see by Lemma [Lemma 18](#lem:dense){reference-type="ref" reference="lem:dense"} that all the 2-cycles at $(8)$ and $(6)$ are zero. Via $\mathcal{F}$, all the 2-cycles at $(2)$ are also zero. As there is a non-zero path from $(6)$ to either of the vertices $(0), (2), (6), (8), (9)$, we see by a similar argument that there is no non-zero path between either of these vertices and $(6')$. This also implies by ([\[eq:O8\]](#eq:O8){reference-type="ref" reference="eq:O8"}) that there is no arrow between $(7)$ and either $(6)$ or $(6')$, and so by $\mathcal{F}$ between $(1)$ and either $(6')$ or $(6)$. 
By Lemma [Lemma 13](#lem:opencat1){reference-type="ref" reference="lem:opencat1"}, we see that there are no more arrows among the vertices $(3), (4), (5), (6), (6')$. Together with the above, this implies that there are no more arrows connected to the vertices $(6)$ or $(6')$, and also that the 2-cycles at $(6')$ are zero. By Theorem [Theorem 6](#thm:localcoho34){reference-type="ref" reference="thm:localcoho34"} we have $H^7_{\overline{O}_3}(S)=D_3$ and $H^7_{\overline{O}_4}(S)=D_4$. In particular, by [@lHorincz2019categories Lemma 3.11] there is no arrow from $(1)$ to either $(3)$ or $(4)$, and via $\mathcal{F}$ no arrow from $(7)$ to either $(3)$ or $(4)$. Hence, there are no more arrows connected to either $(3)$ or $(4)$, and the 2-cycles at these vertices are zero. By Lemma [Lemma 19](#lem:codim1){reference-type="ref" reference="lem:codim1"} and [@macvil Theorem 6.7] (see also [@kk]), we see via $\mathcal{F}$ that there are no arrows among the vertices $(1), (5), (7)$. Hence, at this stage we have obtained all the arrows of the quiver. Using Lemma [Lemma 14](#lem:opencat2){reference-type="ref" reference="lem:opencat2"}, we see that the 2-cycle at $(7)$ is zero. Via $\mathcal{F}$, the 2-cycle at $(1)$ is also zero. By [Lemma 13](#lem:opencat1){reference-type="ref" reference="lem:opencat1"}, the 2-cycle at $(5)$ is also zero. From ([\[eq:O8\]](#eq:O8){reference-type="ref" reference="eq:O8"}), we see that the path $(5) \to (6) \to (8)$ is zero. Similarly, using Theorem [Theorem 4](#thm:localcoho6){reference-type="ref" reference="thm:localcoho6"} we see that the path $(1)\to (2) \to (6)$ is zero. Via $\mathcal{F}$ we see that the paths $(5) \to (6) \to (2)$ and $(7)\to (8) \to (6)$ are zero as well. By applying also $\mathbb{D}$, we have now obtained all the relations in the quiver. ◻ ## The case $n\geq 5$ {#sec:n5} **Theorem 31**. *Let $n\geq 5$.
The quiver of $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$ is given by $$\xymatrix@R-1pc@C+1.6pc{ & & & (3) \ar@<0.5ex>[r] & \ar@<0.5ex>[l] (6') \ar@<0.5ex>[r] & \ar@<0.5ex>[l] (4) & \\ (0) & (1) \ar@<0.5ex>[r] & (2) \ar@<0.5ex>[l] & (5) \ar@<0.5ex>[r] & (6) \ar@<0.5ex>[l] & \ar@<0.5ex>[r] (7) & \ar@<0.5ex>[l] (8) & (9) }$$ where all the 2-cycles are zero.* *Proof.* As the arguments here are very similar to the cases above, we only sketch the proof. The required arrows and relations between $(3), (4), (5), (6), (6')$ are obtained using Lemma [Lemma 13](#lem:opencat1){reference-type="ref" reference="lem:opencat1"}. The arrows and relations between $(7), (8)$ are obtained using Lemma [Lemma 14](#lem:opencat2){reference-type="ref" reference="lem:opencat2"}, which give also the arrows between $(1)$ and $(2)$ via $\mathcal{F}$. We are left to show that there are no more arrows. Since $\operatorname{codim}(\overline{O}_8, V) \geq 2$, it follows from [@binary Lemma 2.4] that $(9)$ is an isolated vertex, and via $\mathcal{F}$, so is $(0)$. Theorem [Theorem 5](#thm:localcoho8){reference-type="ref" reference="thm:localcoho8"} together with the exact sequence analogous to ([\[eq:O8\]](#eq:O8){reference-type="ref" reference="eq:O8"}) shows that there are no more arrows connected to $(8)$, and using $\mathcal{F}$, no more arrows connected to $(2)$. By Theorem [Theorem 6](#thm:localcoho34){reference-type="ref" reference="thm:localcoho34"} and [@lHorincz2019categories Lemma 3.11] there is no arrow from $(1)$ to either $(3)$ or $(4)$, and via $\mathcal{F}$ no arrow from $(7)$ to either $(3)$ or $(4)$. Finally, by Proposition [Proposition 16](#prop:char566){reference-type="ref" reference="prop:char566"}, Lemma [Lemma 19](#lem:codim1){reference-type="ref" reference="lem:codim1"} and [@macvil Theorem 6.7] (see also [@kk]), we see via $\mathcal{F}$ that there are no arrows among the vertices $(1), (5), (6), (6'), (7)$. ◻ **Remark 32**.
*By [@lHorincz2019categories Theorem 2.13], the quiver in Theorem [Theorem 31](#quiverN5){reference-type="ref" reference="quiverN5"} is of finite representation type, i.e. it has finitely many indecomposable representations (up to isomorphism). On the other hand, it is easy to see that the quivers in Theorems [Theorem 29](#thm:quiv3){reference-type="ref" reference="thm:quiv3"} and [Theorem 30](#thm:quiv4){reference-type="ref" reference="thm:quiv4"} are of wild representation type.* # Local cohomology with support in $\overline{O}_5$ {#sec:o5} The goal of this section is to prove the following. **Theorem 33**. *The following is true about local cohomology of the polynomial ring $S$ with support in $\overline{O}_5$ (all short exact sequences are non-split).* 1. *If $n=3$ then the only non-zero local cohomology module is $$0\to D_5 \to H^3_{\overline{O}_5}(S)\to D_2\to 0.$$* 2. *If $n=4$ then the non-zero local cohomology modules are $$H^5_{\overline{O}_5}(S)=D_5,\quad 0\to D_2\to H^6_{\overline{O}_5}(S)\to D_0 \to 0.$$* 3. *If $n\geq 5$ then the non-zero local cohomology modules are  $$H^{2n-3}_{\overline{O}_5}(S)=D_5,\quad H^{3n-6}_{\overline{O}_5}(S)=D_2,\quad H^{4n-10}_{\overline{O}_5}(S)=D_0.$$* The representation $V$ of $\operatorname{GL}$ is equivalent to the action of $\textnormal{SO}_4(\mathbb{C}) \times \operatorname{GL}_n(\mathbb{C})$ on the space of $4\times n$ matrices. Note that we could consider also the action of the bigger group $\textnormal{O}_4(\mathbb{C}) \times \operatorname{GL}_n(\mathbb{C})$ on $V$. ## The case $n=3$ {#the-case-n3} In this subsection, we prove part (1) of Theorem [Theorem 33](#loc05){reference-type="ref" reference="loc05"}. **Lemma 34**. *We have $D_0^{\textnormal{SO}_4(\mathbb{C})} \neq 0$ and $D_2^{\textnormal{SO}_4(\mathbb{C})} \neq 0$.* *Proof.* Since $S^{\textnormal{SO}_4(\mathbb{C})} \neq 0$ and $\mathcal{F}(S)=D_0$, we also have $D_0^{\textnormal{SO}_4(\mathbb{C})} \neq 0$ by [@lHorincz2019categories (4.14)]. 
By Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} the $\operatorname{GL}_4(\mathbb{C})\times \operatorname{GL}_3(\mathbb{C})$-representation $\mathbb{S}_{(-2,-2,-2,-2)}\mathbb{C}^4\otimes\mathbb{S}_{(-2,-3,-3)}\mathbb{C}^3$ belongs to $D_2$, which is $\textnormal{SO}_4(\mathbb{C})$-invariant. ◻ **Lemma 35**. *For all $i\neq 3$ we have that $H^i_{\overline{O}_5}(S)$ is a direct sum of copies of $D_1$.* *Proof.* We identify $V$ with the space of $4\times 3$ matrices under the action of $G\times \operatorname{GL}_3(\mathbb{C})$, where $G=\textnormal{SO}_4(\mathbb{C})$. Then $\overline{O}_5$ is set-theoretically defined by $\operatorname{rank} X^t \cdot X \leq 1$, where $X$ is the $4\times 3$ generic matrix of variables. By the First Fundamental Theorem of Invariant Theory for orthogonal groups, $R=S^G$ is a polynomial ring generated by the entries of $X^t \cdot X$. Let $I$ be the ideal in $R$ generated by the $2\times 2$ minors of $X^t \cdot X$. By Lemma [Lemma 2](#lem:locinv){reference-type="ref" reference="lem:locinv"}, we have for all $i$ $$(H^i_{\overline{O}_5}(S))^G \cong H^i_{I}(R).$$ We identify $I$ with the ideal of $2\times 2$ minors of the $3\times 3$ symmetric matrix of variables. By [@raicu2016ocal (1.5)] we see that for all $i\neq 3$, we have $H^i_{I}(R)=0$, hence also $(H^i_{\overline{O}_5}(S))^G=0$. By Lemma [Lemma 34](#lem:D2inv){reference-type="ref" reference="lem:D2inv"}, we have $D_0^G\neq 0$ and $D_2^G\neq 0$. Thus, $D_0$ and $D_2$ cannot be composition factors of $H^i_{\overline{O}_5}(S)$ for any $i\neq 3$. The conclusion now follows from Lemma [Lemma 15](#lem:noD34){reference-type="ref" reference="lem:noD34"}. ◻ The main result in this subsection is the following. **Proposition 36**.
*The nonzero local cohomology modules of the polynomial ring $S$ with support in the orbit $O_6$ are (the short exact sequence is non-split): $$0\to D_6 \to H^2_{O_6}(S)\to D_5\to 0,\quad H^4_{O_6}(S)=D_0.$$* Using this, we prove part (1) of Theorem [Theorem 33](#loc05){reference-type="ref" reference="loc05"}. *Proof of Theorem [Theorem 33](#loc05){reference-type="ref" reference="loc05"}(1).* The assertion about $H^3_{\overline{O}_5}(S)$ follows from [@lHorincz2019categories Lemma 3.11] and Theorem [Theorem 29](#thm:quiv3){reference-type="ref" reference="thm:quiv3"}. Indeed, there is an exact sequence $$0\to H^3_{\overline{O}_5}(S)\to H^3_{O_5}(S)\to H^4_{Z}(S)\to \cdots$$ where $Z=\overline{O}_2\cup \overline{O}_3\cup \overline{O}_4$. Since $\operatorname{codim}(Z,V) = 5$, we have $H^4_{Z}(S)=0$, so $H^3_{\overline{O}_5}(S)\cong H^3_{O_5}(S)$. By Theorem [Theorem 4](#thm:localcoho6){reference-type="ref" reference="thm:localcoho6"} we have $$\label{cohoO6} H^2_{\overline{O}_6}(S)=D_6,\quad H^3_{\overline{O}_6}(S)=D_2,\quad H^4_{\overline{O}_6}(S)=D_0,$$ and all other local cohomology modules vanish. By Proposition [Proposition 36](#locorbit06n3){reference-type="ref" reference="locorbit06n3"} there is an exact sequence $$0\to H^2_{\overline{O}_6}(S)\to H^2_{O_6}(S)\to H^3_{\overline{O}_5}(S)\to H^3_{\overline{O}_6}(S)\to H^3_{O_6}(S)\to H^4_{\overline{O}_5}(S)\to H^4_{\overline{O}_6}(S)\to H^4_{O_6}(S)\to H^5_{\overline{O}_5}(S)\to 0,$$ and $H^q_{\overline{O}_5}(S)=0$ for $q\geq 6$. By Lemma [Lemma 35](#eq:locO5){reference-type="ref" reference="eq:locO5"}, we have that $H^4_{\overline{O}_5}(S)$ and $H^5_{\overline{O}_5}(S)$ are direct sums of copies of $D_1$. Using ([\[cohoO6\]](#cohoO6){reference-type="ref" reference="cohoO6"}) and Proposition [Proposition 36](#locorbit06n3){reference-type="ref" reference="locorbit06n3"}, we conclude that these modules are zero. 
◻ We now turn to the proof of Proposition [Proposition 36](#locorbit06n3){reference-type="ref" reference="locorbit06n3"}. Using notation from Section [2.6](#sec:relmatrices){reference-type="ref" reference="sec:relmatrices"}, we let $Y=Y_{222}$ and $\pi=\pi_{222}$, which gives a desingularization of $\overline{O}_6$. We note that $\pi$ is an isomorphism on $\overline{O}_6\setminus \overline{O}_2$, and it is small (see [@htt Definition 8.2.29]) if $n=3,4$ (but not for $n\geq 5$). For $i=0,\cdots ,6,$ we write $D_i^Y$ for the simple $\mathcal{D}_Y$-module corresponding locally to $D_i$ in the case $n=2$ (see Section [2.6](#sec:relmatrices){reference-type="ref" reference="sec:relmatrices"}). Let $Y_5=\pi^{-1}(\overline{O}_5)$ be the hypersurface defined by the relative $2\times 2\times 2$ hyperdeterminant, and let $\mathcal{O}_Y(\ast Y_5)$ denote the localization of $\mathcal{O}_Y$ at $Y_5$. It fits into a non-split short exact sequence ([\[comphypn2\]](#comphypn2){reference-type="ref" reference="comphypn2"}) with $Z=Y_5$, and by [@perlman2020equivariant] we further have a non-split short exact sequence $$\label{eq:n2bundle} 0 \to D_5^Y \to \mathscr{H}^1_{Y_5}(\mathcal{O}_Y) \to D_0^Y \to 0.$$ By Lemma [Lemma 10](#pushdownlocallyclosed1){reference-type="ref" reference="pushdownlocallyclosed1"}, the following implies Proposition [Proposition 36](#locorbit06n3){reference-type="ref" reference="locorbit06n3"}. **Lemma 37**. *Let $n=3$, $Y=Y_{222}$, $\pi=\pi_{222}$, and $Y_5=\pi^{-1}(\overline{O}_5)$. We have the following.* 1. *The non-zero cohomology modules of $\pi_{+}D^Y_5$ are $$\mathcal{H}^{-1}(\pi_{+}D^Y_5)=D_0,\quad \mathcal{H}^{0}(\pi_{+}D^Y_5)=D_5,\quad \mathcal{H}^{1}(\pi_{+}D^Y_5)=D_0.$$* 2. 
*The non-zero cohomology modules of $\pi_{+}\mathcal{O}_Y(\ast Y_5)$ are $$0\to D_6 \to \mathcal{H}^0(\pi_{+}\mathcal{O}_Y(\ast Y_5))\to D_5\to 0,\quad \mathcal{H}^2(\pi_{+}\mathcal{O}_Y(\ast Y_5))=D_0,$$ where the short exact sequence is non-split.* *Proof.* By Lemma [Lemma 10](#pushdownlocallyclosed1){reference-type="ref" reference="pushdownlocallyclosed1"}(2) we have $\mathcal{H}^q(\pi_{+}\mathcal{O}_Y(\ast Y_5))=0$ for $q<0$. The morphism $\pi$ is small and a desingularization of $\overline{O}_6$, so $\pi_{+}\mathcal{O}_Y=D_6$. By ([\[comphypn2\]](#comphypn2){reference-type="ref" reference="comphypn2"}) we obtain an exact sequence $$\label{lesprooflemmapush3} 0\to \mathcal{H}^{-1}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))\to D_6\to \mathcal{H}^0(\pi_{+}\mathcal{O}_Y(\ast Y_5))\to \mathcal{H}^{0}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))\to 0,$$ and $\mathcal{H}^{q}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))\cong \mathcal{H}^q(\pi_{+}\mathcal{O}_Y(\ast Y_5))$ for $q\neq -1,0$. Since $\mathcal{H}^{-1}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))$ is supported inside $\overline{O}_5$, it follows from the sequence above that $\mathcal{H}^{-1}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))=0$. By ([\[eq:n2bundle\]](#eq:n2bundle){reference-type="ref" reference="eq:n2bundle"}), we conclude that $\mathcal{H}^{q}(\pi_{+}D^Y_5)\cong \mathcal{H}^{q-1}(\pi_{+}D^Y_0)$ for $q<0$. The space $Y$ is a geometric vector bundle on $\mathbb{P}^2$, and $D_0^Y$ is the intersection cohomology module associated to the trivial local system on the zero section. It follows from [@htt Proposition 1.5.28] that $\mathcal{H}^{i-2}(\pi_{+}D_0^Y)=D_0^{\oplus H^{i}_{DR}(\mathbb{P}^2)}$ for $i\in\mathbb{Z}$. 
Thus, we have $$\label{pushdo3} \mathcal{H}^{-2}(\pi_{+}D^Y_0)=D_0,\quad \mathcal{H}^{0}(\pi_{+}D^Y_0)=D_0,\quad \mathcal{H}^{2}(\pi_{+}D^Y_0)=D_0.$$ We conclude that, in cohomological degrees less than zero, the only non-zero cohomology module of $\pi_{+}D_5^Y$ is $\mathcal{H}^{-1}(\pi_{+}D^Y_5)=D_0$. To complete the proof of (1), it remains to verify the cohomology of $\pi_{+}D_5^Y$ in non-negative cohomological degrees. By ([\[pushdownlocallyclosed\]](#pushdownlocallyclosed){reference-type="ref" reference="pushdownlocallyclosed"}) and Theorem [Theorem 29](#thm:quiv3){reference-type="ref" reference="thm:quiv3"} we know that $\mathcal{H}^0(\pi_{+}\mathcal{O}_Y(\ast Y_5))$ is a non-trivial extension of $D_5$ by $D_6$, so that by ([\[lesprooflemmapush3\]](#lesprooflemmapush3){reference-type="ref" reference="lesprooflemmapush3"}) we conclude that $\mathcal{H}^{0}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))=D_5$. As $\mathscr{H}^1_{Y_5}(\mathcal{O}_Y)$ is an extension of $D_0^Y$ by $D_5^Y$, it follows from ([\[pushdo3\]](#pushdo3){reference-type="ref" reference="pushdo3"}) that $\mathcal{H}^{0}(\pi_{+}D^Y_5)=D_5$. Since $\pi$ is proper and $D_5^Y$ is self-dual, it follows that $\pi_{+}D_5^Y$ is self-dual [@htt Theorem 2.7.2], so the cohomology of $\pi_{+}D_5^Y$ must be as claimed. To complete the proof of (2), it remains to show that the cohomology $\pi_{+}\mathcal{O}_Y(\ast Y_5)$ is as claimed in positive cohomological degrees. 
As explained above, the cohomology of $\pi_{+}\mathcal{O}_Y(\ast Y_5)$ is the same as that of $\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y)$ in these degrees, and there is a long exact sequence $$0\to \mathcal{H}^{0}(\pi_{+}D^Y_0)\to \mathcal{H}^{1}(\pi_{+}D^Y_5)\to \mathcal{H}^{1}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))\to \mathcal{H}^{1}(\pi_{+}D^Y_0)\to$$ $$\to \mathcal{H}^{2}(\pi_{+}D^Y_5)\to \mathcal{H}^{2}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))\to \mathcal{H}^{2}(\pi_{+}D^Y_0)\to 0.$$ Since $\mathcal{H}^{1}(\pi_{+}D^Y_0)=\mathcal{H}^{2}(\pi_{+}D^Y_5)=0$ and the first map is an isomorphism, we conclude that $\mathcal{H}^{1}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))=0$ and that $\mathcal{H}^{2}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))=D_0$. ◻ ## The case $n=4$ {#sec:o5n4} In this subsection, we prove part (2) of Theorem [Theorem 33](#loc05){reference-type="ref" reference="loc05"}. Recall the $\mathcal{D}_V$-modules $Q_p$ introduced in ([\[eq:detQ\]](#eq:detQ){reference-type="ref" reference="eq:detQ"}). The main result here is the following. **Proposition 38**. *Let $n=4$. The following is true about local cohomology of the polynomial ring $S$ with support in the orbit $O_6$ and the orbit closure $\overline{O}_5$.* 1. *The non-zero local cohomology modules with support in $O_6$ are (the short exact sequence is non-split): $$0\to Q_2 \to H^4_{O_6}(S)\to D_5\to 0,\quad H^8_{O_6}(S)=D_0.$$* 2. *We have that $H^q_{\overline{O}_5}(S)=0$ for $q\geq 8$.* Using this, we prove part (2) of Theorem [Theorem 33](#loc05){reference-type="ref" reference="loc05"}. *Proof of Theorem [Theorem 33](#loc05){reference-type="ref" reference="loc05"}(2).* The assertion about $H^5_{\overline{O}_5}(S)$ follows from [@lHorincz2019categories Lemma 3.11] and Theorem [Theorem 30](#thm:quiv4){reference-type="ref" reference="thm:quiv4"}. 
By Theorem [Theorem 4](#thm:localcoho6){reference-type="ref" reference="thm:localcoho6"} we have $$\label{lcmatrix} H^4_{\overline{O}_6}(S)=Q_2,\quad H^6_{\overline{O}_6}(S)=Q_1,\quad H^8_{\overline{O}_6}(S)=D_0,$$ and all other local cohomology modules vanish. By Proposition [Proposition 38](#locorbit06n4){reference-type="ref" reference="locorbit06n4"} there is an exact sequence $$0\to H^5_{O_6}(S)\to H^6_{\overline{O}_5}(S)\to H^6_{\overline{O}_6}(S)\to H^6_{O_6}(S)\to H^7_{\overline{O}_5}(S)\to 0,$$ and $H^q_{\overline{O}_5}(S)=0$ for $q\geq 8$. By Proposition [Proposition 38](#locorbit06n4){reference-type="ref" reference="locorbit06n4"}(1) and ([\[lcmatrix\]](#lcmatrix){reference-type="ref" reference="lcmatrix"}) we conclude that $H^6_{\overline{O}_5}(S)\cong H^6_{\overline{O}_6}(S)=Q_1$ and $H^7_{\overline{O}_5}(S)=0$, completing the proof. ◻ Next, we prove Proposition [Proposition 38](#locorbit06n4){reference-type="ref" reference="locorbit06n4"}(2). *Proof of Proposition [Proposition 38](#locorbit06n4){reference-type="ref" reference="locorbit06n4"}(2).* Let $Y=Y_{223}$, $\pi=\pi_{223}$, and $Y_5=\pi^{-1}(\overline{O}_5)$. Since $\overline{O}_8$ is the determinant hypersurface, it has rational singularities (e.g. see [@weyman Corollary 6.1.5]). 
Thus, we have $\mathbb{R}\pi_{\ast}\mathcal{O}_Y\cong \mathbb{C}[\overline{O}_8]$, so by Theorem [Theorem 33](#loc05){reference-type="ref" reference="loc05"}(1) there is a degenerate spectral sequence $\mathbb{R}^i\pi_{\ast}\mathscr{H}^j_{Y_5}(\mathcal{O}_Y)\implies H^{i+j}_{\overline{O}_5}(\mathbb{C}[\overline{O}_8])$, which yields $$\label{pushDown2231} \mathbb{R}^i\pi_{\ast}\mathscr{H}^3_{Y_5}(\mathcal{O}_Y) = H^{i+3}_{\overline{O}_5}(\mathbb{C}[\overline{O}_8]).$$ The coordinate ring $\mathbb{C}[\overline{O}_8]$ has the following equivariant minimal free resolution $$\label{resDet1} 0\longrightarrow S\otimes \operatorname{det}\longrightarrow S\longrightarrow \mathbb{C}[\overline{O}_8]\longrightarrow 0.$$ As $Y$ is a vector bundle on the Grassmannian $\operatorname{Gr}(3,4)\cong \mathbb{P}^3$, by Lemma [Lemma 8](#BottLemma){reference-type="ref" reference="BottLemma"} we have that $\mathbb{R}^i\pi_{\ast}\mathscr{H}^3_{Y_5}(\mathcal{O}_Y)$ is zero for $i\geq 4$. Thus, by ([\[pushDown2231\]](#pushDown2231){reference-type="ref" reference="pushDown2231"}) and the long exact sequence of local cohomology of ([\[resDet1\]](#resDet1){reference-type="ref" reference="resDet1"}), we conclude that for $j>3+4=7$ multiplication by $\operatorname{det}$ is an isomorphism on $H^j_{\overline{O}_5}(S)$. Since $H^j_{\overline{O}_5}(S)$ has support inside $\overline{O}_5$, it follows that $H^j_{\overline{O}_5}(S)=0$ for $j\geq 8$. ◻ By Lemma [Lemma 10](#pushdownlocallyclosed1){reference-type="ref" reference="pushdownlocallyclosed1"}, the following implies Proposition [Proposition 38](#locorbit06n4){reference-type="ref" reference="locorbit06n4"}(1). **Lemma 39**. *Let $n=4$, $Y=Y_{222}$, $\pi=\pi_{222}$, and $Y_5=\pi^{-1}(\overline{O}_5)$. We have the following.* 1.
*The non-zero cohomology modules of $\pi_{+}D^Y_5$ are $$\mathcal{H}^{-3}(\pi_{+}D^Y_5)=D_0,\quad \mathcal{H}^{-1}(\pi_{+}D^Y_5)=D_0,\quad \mathcal{H}^{0}(\pi_{+}D^Y_5)=D_5\oplus D_2,\quad \mathcal{H}^{1}(\pi_{+}D^Y_5)=D_0,\quad \mathcal{H}^{3}(\pi_{+}D^Y_5)=D_0.$$* 2. *The non-zero cohomology modules of $\pi_{+}\mathcal{O}_Y(\ast Y_5)$ are $$0\to Q_2 \to \mathcal{H}^0(\pi_{+}\mathcal{O}_Y(\ast Y_5))\to D_5\to 0,\quad \mathcal{H}^4(\pi_{+}\mathcal{O}_Y(\ast Y_5))=D_0,$$ where the short exact sequence is non-split.* *Proof.* The argument uses Lemma [Lemma 10](#pushdownlocallyclosed1){reference-type="ref" reference="pushdownlocallyclosed1"}, and is similar to the proof of Lemma [Lemma 37](#push222to3){reference-type="ref" reference="push222to3"}. The space $Y$ is a geometric vector bundle on $\mathbb{G}(2,4)$, and $D_0^Y$ is the intersection cohomology module associated to the trivial local system on the zero section. It follows from [@htt Proposition 1.5.28] that the non-zero cohomology modules of $\pi_{+}D_0^Y$ are (see, for example, [@3264 Theorem 3.10] for the cohomology of $\mathbb{G}(2,4)$) $$\label{eq:decompD0} \mathcal{H}^{-4}(\pi_{+}D^Y_0)=D_0,\quad \mathcal{H}^{-2}(\pi_{+}D^Y_0)=D_0,\quad \mathcal{H}^{0}(\pi_{+}D^Y_0)=D_0^{\oplus 2},\quad \mathcal{H}^{2}(\pi_{+}D^Y_0)=D_0,\quad \mathcal{H}^{4}(\pi_{+}D^Y_0)=D_0.$$ We first prove (1). 
The short exact sequence $0\to D^Y_5\to \mathscr{H}^1_{Y_5}(\mathcal{O}_Y) \to D^Y_0\to 0$ induces the long exact sequence $$0\to \mathcal{H}^{-4}(\pi_{+}D^Y_5)\to \mathcal{H}^{-4}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))\to \mathcal{H}^{-4}(\pi_{+}D^Y_0)\to \mathcal{H}^{-3}(\pi_{+}D^Y_5)\to \mathcal{H}^{-3}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))\to$$ $$\to \mathcal{H}^{-3}(\pi_{+}D^Y_0)\to \mathcal{H}^{-2}(\pi_{+}D^Y_5)\to \mathcal{H}^{-2}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))\to \mathcal{H}^{-2}(\pi_{+}D^Y_0)\to \mathcal{H}^{-1}(\pi_{+}D^Y_5)\to$$ $$\to \mathcal{H}^{-1}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))\to \mathcal{H}^{-1}(\pi_{+}D^Y_0)\to \mathcal{H}^{0}(\pi_{+}D^Y_5)\to \mathcal{H}^{0}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))\to \mathcal{H}^{0}(\pi_{+}D^Y_0) \to \mathcal{H}^{1}(\pi_{+}D^Y_5).$$ Since $\pi$ is a small resolution of $\overline{O}_6$, we have that $\pi_+\mathcal{O}_Y=D_6$. Thus, the cohomology of $\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y)$ is the same as that of $\pi_{+}\mathcal{O}_Y(\ast Y_5)$ in nonzero cohomological degrees, and there is an exact sequence $$\label{eq:sesLocO5} 0\to D_6 \to \mathcal{H}^0(\pi_{+}\mathcal{O}_Y(\ast Y_5))\to \mathcal{H}^{0}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))\to 0.$$ By Lemma [Lemma 10](#pushdownlocallyclosed1){reference-type="ref" reference="pushdownlocallyclosed1"} we have that the cohomology of $\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y)$ is zero in negative degrees, so from ([\[eq:decompD0\]](#eq:decompD0){reference-type="ref" reference="eq:decompD0"}) and the long exact sequence above we conclude that the cohomology of $\pi_{+}D^Y_5$ is as claimed in negative degrees. Since $\pi$ is proper and $D_5^Y$ is self-dual, it follows that $\pi_{+}D_5^Y$ is self-dual [@htt Theorem 2.7.2]. Thus, we have proven (1) in nonzero cohomological degrees. 
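Spelled out, self-duality of $\pi_{+}D_5^Y$ means that holonomic duality exchanges cohomology in opposite degrees, $$\mathcal{H}^{q}(\pi_{+}D^Y_5)\cong \mathbb{D}\big(\mathcal{H}^{-q}(\pi_{+}D^Y_5)\big)\qquad \text{for all } q\in\mathbb{Z},$$ so (using that the simple module $D_0$ is self-dual) the negative-degree computation gives $\mathcal{H}^{1}(\pi_{+}D^Y_5)\cong \mathbb{D}(D_0)=D_0$ and $\mathcal{H}^{3}(\pi_{+}D^Y_5)\cong \mathbb{D}(D_0)=D_0$, with no other nonzero cohomology in positive degrees.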
By Lemma [Lemma 10](#pushdownlocallyclosed1){reference-type="ref" reference="pushdownlocallyclosed1"}, [@lHorincz2019categories Lemma 3.11], and Theorem [Theorem 30](#thm:quiv4){reference-type="ref" reference="thm:quiv4"} we know that $\mathcal{H}^0(\pi_{+}\mathcal{O}_Y(\ast Y_5))$ is a non-trivial extension of $D_5$ by $Q_2$. Thus, by ([\[eq:sesLocO5\]](#eq:sesLocO5){reference-type="ref" reference="eq:sesLocO5"}) we have that $\mathcal{H}^{0}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))$ is a direct sum of $D_5$ and $Q_1$. Since $\mathcal{H}^{-1}(\pi_+D_0^Y)=0$, we have that $\mathcal{H}^{0}(\pi_{+}D^Y_5)$ is a submodule of $\mathcal{H}^{0}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))=D_5\oplus Q_1$. As $\mathcal{H}^{0}(\pi_{+}D^Y_0)=D_0^{\oplus 2}$ and $\mathcal{H}^{1}(\pi_{+}D^Y_5)=D_0$, the last map in the long exact sequence above cannot be injective. Since $Q_1$ is a non-trivial extension of $D_0$ by $D_2$, it follows that $D_0$ cannot be a composition factor of $\mathcal{H}^{0}(\pi_{+}D^Y_5)$, so that $\mathcal{H}^{0}(\pi_{+}D^Y_5)=D_5\oplus D_2$, as required to prove (1). To complete the proof of (2), it remains to show that the cohomology $\pi_{+}\mathcal{O}_Y(\ast Y_5)$ is as claimed in positive cohomological degrees. 
As explained above, the cohomology of $\pi_{+}\mathcal{O}_Y(\ast Y_5)$ is the same as that of $\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y)$ in these degrees, and there is a long exact sequence $$\to \mathcal{H}^{0}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))\to \mathcal{H}^{0}(\pi_{+}D^Y_0)\to \mathcal{H}^{1}(\pi_{+}D^Y_5)\to \mathcal{H}^{1}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))\to \mathcal{H}^{1}(\pi_{+}D^Y_0)\to$$ $$\to \mathcal{H}^{2}(\pi_{+}D^Y_5)\to \mathcal{H}^{2}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))\to \mathcal{H}^{2}(\pi_{+}D^Y_0)\to \mathcal{H}^{3}(\pi_{+}D^Y_5)\to \mathcal{H}^{3}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))\to 0.$$ Furthermore, $\mathcal{H}^{4}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))=\mathcal{H}^{4}(\pi_{+}D^Y_0)=D_0$. It remains to show that $\mathcal{H}^{q}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))=0$ for $q=1,2,3$. Since $\mathcal{H}^{0}(\pi_{+}D^Y_0)$ has two copies of $D_0$ and $\mathcal{H}^{0}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))$ only has one copy of $D_0$, it follows that the map $\mathcal{H}^{0}(\pi_{+}D^Y_0)\to \mathcal{H}^{1}(\pi_{+}D^Y_5)$ is a surjection. As $\mathcal{H}^{1}(\pi_{+}D^Y_0)=0$, we conclude that $\mathcal{H}^{1}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))=0$. To prove the remaining two vanishings, we need to show that the map $\mathcal{H}^{2}(\pi_{+}D^Y_0)\to \mathcal{H}^{3}(\pi_{+}D^Y_5)$ is nonzero (and hence an isomorphism). Consider the exact sequence $$H^7_{\overline{O}_6}(S) \to H^7_{O_6}(S)\to H^8_{\overline{O}_5}(S)\to H^8_{\overline{O}_6}(S).$$ By ([\[lcmatrix\]](#lcmatrix){reference-type="ref" reference="lcmatrix"}) and Proposition [Proposition 38](#locorbit06n4){reference-type="ref" reference="locorbit06n4"}(2), we have $H^7_{\overline{O}_6}(S)=H^8_{\overline{O}_5}(S)=0$, hence $H^7_{O_6}(S)=0$. By Lemma [Lemma 10](#pushdownlocallyclosed1){reference-type="ref" reference="pushdownlocallyclosed1"} we have that $\mathcal{H}^{3}(\pi_{+}\mathscr{H}^1_{Y_5}(\mathcal{O}_Y))=H^7_{O_6}(S)=0$. 
Thus, the map $\mathcal{H}^{2}(\pi_{+}D^Y_0)\to \mathcal{H}^{3}(\pi_{+}D^Y_5)$ is an isomorphism. ◻ **Corollary 40**. *Let $n=4$. All subrepresentations $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ of $D_5$ satisfy $\gamma_2\geq -2$ and $\gamma_3\leq -2$.* *Proof.* We show that $\gamma_3\leq -2$ for all such subrepresentations. The other inequality then follows from the fact that $\mathcal{F}(D_5)=D_5$. By Theorem [Theorem 33](#loc05){reference-type="ref" reference="loc05"}(2) we have $H^5_{\overline{O}_5}(S)=D_5$. By the proof of Proposition [Proposition 38](#locorbit06n4){reference-type="ref" reference="locorbit06n4"}(2) we have that the kernel of the multiplication by $\operatorname{det}$ map $D_5\otimes\operatorname{det}\to D_5$ is $H^4_{\overline{O}_5}(\mathbb{C}[\overline{O}_8])\cong \mathbb{R}^1\pi_{\ast}\mathscr{H}^3_{Y_5}(\mathcal{O}_Y)$, where $Y=Y_{223}$, $\pi=\pi_{223}$, and $Y_5=\pi^{-1}(\overline{O}_5)$. By Lemma [Lemma 8](#BottLemma){reference-type="ref" reference="BottLemma"} all subrepresentations $\mathbb{S}_{\lambda}A\otimes\mathbb{S}_{\mu}B\otimes\mathbb{S}_{\nu}C$ of $\mathbb{R}^1\pi_{\ast}\mathscr{H}^3_{Y_5}(\mathcal{O}_Y)$ satisfy $\nu_3=-1$. Suppose for contradiction that $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ in $D_5$ satisfies $\gamma_{3}\geq -1$, and let $v$ be its highest weight vector. Then $v\otimes \operatorname{det}^p=(v\otimes\operatorname{det}^{p-1})\otimes\operatorname{det}$ is not in the kernel of the multiplication map $D_5\otimes \operatorname{det}\to D_5$ for all $p\geq 1$. Thus, $v$ is not supported inside $\overline{O}_8$, a contradiction. ◻ ## The case $n\geq 5$ {#sec:o5ngeq5} In this subsection, we let $\pi'=\pi_{224}$ and $Y'=Y_{224}$. We set $Y'_5=\pi'^{-1}(\overline{O}_5)$.
Then we have $\mathbb{R}\pi'_{\ast}\mathbb{R}\mathscr{H}^0_{Y_5'}(\mathcal{O}_{Y'})=\mathbb{R}\Gamma_{\overline{O}_5}(S)$, so there is a spectral sequence $$\label{localO5ngeq5} E_2^{i,j}=\mathbb{R}^i\pi'_{\ast}\mathscr{H}^j_{Y_5'}(\mathcal{O}_{Y'})\implies H^{i+j}_{\overline{O}_5}(S).$$ We use the case $n=4$, namely Theorem [Theorem 33](#loc05){reference-type="ref" reference="loc05"}(2), to obtain information about the case $n\geq 5$. To determine the $E_2$-page, we calculate the higher direct images along $\pi'$ of the modules $D^{Y'}_5$, $D^{Y'}_2$, and $D^{Y'}_0$. By Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} and Lemma [Lemma 8](#BottLemma){reference-type="ref" reference="BottLemma"} we have that $\mathbb{R}\pi_{\ast}'D_2^{Y'}$ only has cohomology in degree $3n-12$, and $\mathbb{R}\pi_{\ast}'D_0^{Y'}$ only has cohomology in degree $4n-16$. Similarly, by Corollary [Corollary 40](#cor:weightboundsD5n4){reference-type="ref" reference="cor:weightboundsD5n4"} we have that $\mathbb{R}\pi_{\ast}'D_5^{Y'}$ only has cohomology in degree $2n-8$. By Theorem [Theorem 33](#loc05){reference-type="ref" reference="loc05"}(2) the nonzero modules on the $E_2$ page are given by $$\quad E_2^{2n-8,5}=\mathbb{R}^{2n-8}\pi'_{\ast}D_5^{Y'},\quad E_2^{3n-12,6}=\mathbb{R}^{3n-12}\pi'_{\ast}D_2^{Y'},\quad E_2^{4n-16,6}=\mathbb{R}^{4n-16}\pi'_{\ast}D_0^{Y'}.$$ The arrows on the $E_2$-page go from column six to column five, and increase cohomological degree by two. From this, we conclude that there are no nonzero differentials in the spectral sequence, and it degenerates on the $E_2$ page. Since $D_2=L_{Z_1}$ and $D_0=L_{Z_0}$, it follows from [@raicu2016characters; @raicuWeymanLocal] (see [@socledegs Lemma 3.4] for the explicit statement) that $E_2^{3n-12,6}\cong D_2$, and $E_2^{4n-16,6}\cong D_0$. 
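To make the degeneration claim explicit: since the nonzero entries of the $E_2$-page lie only in the columns $j=5$ and $j=6$, the only differentials (on any page) that could be nonzero are $$d_2\colon E_2^{3n-12,6}\to E_2^{3n-10,5},\qquad d_2\colon E_2^{4n-16,6}\to E_2^{4n-14,5}.$$ The only nonzero entry in column five sits in position $(2n-8,5)$, and $3n-10=2n-8$ forces $n=2$, while $4n-14=2n-8$ forces $n=3$; both are excluded since $n\geq 5$, so all differentials vanish.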
By [@lHorincz2019categories Lemma 3.11] and Theorem [Theorem 31](#quiverN5){reference-type="ref" reference="quiverN5"}, we have $E_2^{2n-8,5}=D_5$, as required to complete the proof of Theorem [Theorem 33](#loc05){reference-type="ref" reference="loc05"}. # Local cohomology with support in $\overline{O}_7$ {#sec:O7} Recall that for $n=3$, $\overline{O}_7$ is the hypersurface defined by $f$, so its only non-zero local cohomology module is $H^1_{\overline{O}_7}(S)=S_f/S$. The main result of this section is the following. **Theorem 41**. 1. *Let $n=3$ and let $f$ be the semi-invariant. There is a filtration $$0\subsetneq S \subsetneq \mathcal{D}_Vf^{-1}\subsetneq S_f,$$ such that the quotients are described by the following non-split exact sequences: $$0\to D_7 \to \mathcal{D}_Vf^{-1}/S \to D_6'\to 0,\quad\quad 0\to D_1 \to S_f/\mathcal{D}_Vf^{-1}\to D_0 \to 0.$$* 2. *Let $n\geq 4$. The nonzero local cohomology modules of the polynomial ring $S$ with support in $\overline{O}_7$ are given by: $$H^{n-2}_{\overline{O}_7}(S)=D_7,\quad H^{2n-5}_{\overline{O}_7}(S)=D_6',\quad H^{3n-8}_{\overline{O}_7}(S)=D_1,\quad H^{4n-11}_{\overline{O}_7}(S)=D_0.$$* We note that, when $n=4$, the cohomological degrees above are: $2$, $3$, $4$, $5$. Theorem [Theorem 41](#loc07){reference-type="ref" reference="loc07"}(1) was proven during the proof of Theorem [Theorem 29](#thm:quiv3){reference-type="ref" reference="thm:quiv3"}. The appearance of $D_6'$ in its local cohomology detects the bad singularities of $\overline{O}_7$. **Corollary 42**. *For $n\geq 3$, the variety $\overline{O}_7$ is not normal, and for $n\geq 4$ it is not Cohen--Macaulay.* *Proof.* Let $I$ denote the defining ideal of $\overline{O}_7$. By [@varbaro Proposition 3.2] and Theorem [Theorem 41](#loc07){reference-type="ref" reference="loc07"}(2), $\operatorname{depth}(S/I)\leq 2n+5 < 3n+2 = \dim(S/I)$ when $n\geq 4$. Now we prove that $\overline{O}_7$ is not normal.
When $n=3$, this follows from Theorem [Theorem 41](#loc07){reference-type="ref" reference="loc07"}(1), since the support of $H^1_f(S)/D_7$ is $\overline{O}_6$, which is contained in the singular locus of $\overline{O}_7$ but has codimension 1. The result then extends readily to the case $n\geq 4$ using [@collapse Theorem 1.4](1). ◻ **Remark 43**. *In fact, the above shows that $\overline{O}_7$ is not set-theoretically Cohen--Macaulay, i.e. there is no Cohen--Macaulay ideal with zero-set $\overline{O}_7$. Furthermore, using [@lovett Theorem 3.1] we see that all the other orbit closures are Cohen--Macaulay. We claim that they also have rational singularities. For the determinantal varieties this is known [@weyman Corollary 6.1.5]. For $\overline{O}_1$ this follows from [@weyman p. 158, Exercise 8]. We are left with $\overline{O}_5$: for $n=2$ this follows from the description of the Bernstein--Sato polynomial of the hyperdeterminant (e.g. see [@perlman2020equivariant Section 3.2]) by [@saito Theorem 0.4], which extends to the case $n\geq 3$ by [@collapse Theorem 1.4](2).* ## The case $n=4$ {#sec:o7n4} We first assume $n=4$. **Lemma 44**. *We have the following.* 1. *$H^2_{\overline{O}_7}(S)=D_7$.* 2. *$H^3_{\overline{O}_7}(S)$ and $H^5_{\overline{O}_7}(S)$ are nonzero.* *Proof.* (1) We have $H^2_{\overline{O}_7}(S)=D_7$ by [@lHorincz2019categories Lemma 3.11] and Theorem [Theorem 30](#thm:quiv4){reference-type="ref" reference="thm:quiv4"}. \(2\) Let $G=\textnormal{O}_4(\mathbb{C})$. As in the proof of Lemma [Lemma 35](#eq:locO5){reference-type="ref" reference="eq:locO5"}, by Lemma [Lemma 2](#lem:locinv){reference-type="ref" reference="lem:locinv"} we have for all $i$ $$(H^i_{\overline{O}_7}(S))^G \cong H^i_{I}(R),$$ where $R=S^G$ is the polynomial ring on $4\times 4$ symmetric matrices, and $I$ is the ideal of $3\times 3$ minors. By [@raicu2016ocal (1.5)] we have that $H^i_{I}(R)\neq 0$ if and only if $i=3$ or $i=5$.
Thus, $H^3_{\overline{O}_7}(S)$ and $H^5_{\overline{O}_7}(S)$ are nonzero. ◻ **Remark 45**. *In Section [5](#sec:o5){reference-type="ref" reference="sec:o5"} we have used Lemma [Lemma 2](#lem:locinv){reference-type="ref" reference="lem:locinv"} in a stronger way, working with representations of the special orthogonal group explicitly. However, in the proof above this would be less straightforward, as $\textnormal{O}_4(\mathbb{C})$ is disconnected. This means that in the category of $\textnormal{O}_4(\mathbb{C})\times \operatorname{GL}_n(\mathbb{C})$-equivariant $\mathcal{D}_V$-modules some new objects may appear with non-trivial action of the component group of $\textnormal{O}_4(\mathbb{C})$, i.e. different objects can have the same $\mathcal{D}$-module (but different equivariant) structure.* **Lemma 46**. *For all $j\geq 4$, $D_6$ and $D_6'$ are not composition factors of $H^j_{\overline{O}_7}(S)$.* *Proof.* The variety $\overline{O}_7$ is defined by the rank conditions $\operatorname{rank} X \leq 3, \, \operatorname{rank} (X^t X)\leq 2$ (cf. proof of Lemma [Lemma 19](#lem:codim1){reference-type="ref" reference="lem:codim1"}). Using Macaulay2 [@M2], we obtain that the minimal free resolution of the ideal generated by the corresponding minors has length 3, hence the quotient ring has depth $16-3=13$ by the Auslander--Buchsbaum Formula (see, for example, [@weyman Theorem 1.2.7]). The result now follows from [@varbaro Proposition 3.2]. ◻ Let $Y=Y_{223}$, $\pi=\pi_{223}$, and $Y_7=\pi^{-1}(\overline{O}_7)$. Since $\overline{O}_8$ is the determinant hypersurface, it has rational singularities.
Thus, $\mathbb{R}\pi_{\ast}\mathcal{O}_Y\cong \mathbb{C}[\overline{O}_8]$, so the spectral sequence $\mathbb{R}^i\pi_{\ast}\mathscr{H}^j_{Y_7}(\mathcal{O}_Y)\implies H^{i+j}_{\overline{O}_7}(\mathbb{C}[\overline{O}_8])$ yields $$\label{pushDown223} \mathbb{R}^i\pi_{\ast}\mathscr{H}^1_{Y_7}(\mathcal{O}_Y) = H^{i+1}_{\overline{O}_7}(\mathbb{C}[\overline{O}_8]).$$ The coordinate ring $\mathbb{C}[\overline{O}_8]$ has the following equivariant minimal free resolution $$\label{resDet} 0\longrightarrow S\otimes \operatorname{det}\longrightarrow S\longrightarrow \mathbb{C}[\overline{O}_8]\longrightarrow 0.$$ Using the above information, we deduce the following. **Lemma 47**. *We have that $H^i_{\overline{O}_7}(S)=0$ for $i\geq 6$.* *Proof.* By Lemma [Lemma 8](#BottLemma){reference-type="ref" reference="BottLemma"} we have that $\mathbb{R}^i\pi_{\ast}\mathscr{H}^1_{Y_7}(\mathcal{O}_Y)$ is zero for $i\geq 4$. Thus, by ([\[pushDown223\]](#pushDown223){reference-type="ref" reference="pushDown223"}) and the long exact sequence of local cohomology of ([\[resDet\]](#resDet){reference-type="ref" reference="resDet"}), we conclude that for $i\geq 6$ multiplication by $\operatorname{det}$ is an isomorphism on $H^i_{\overline{O}_7}(S)$. Since $H^i_{\overline{O}_7}(S)$ has support inside $\overline{O}_7$, it follows that $H^i_{\overline{O}_7}(S)=0$ for $i\geq 6$. ◻ We obtain the following long exact sequence of local cohomology.
$$0\to H^1_{\overline{O}_7}(\mathbb{C}[\overline{O}_8])\to H^2_{\overline{O}_7}(S)\otimes \operatorname{det}\to H^2_{\overline{O}_7}(S)\to H^2_{\overline{O}_7}(\mathbb{C}[\overline{O}_8])\to H^3_{\overline{O}_7}(S)\otimes \operatorname{det}\to H^3_{\overline{O}_7}(S)\to H^3_{\overline{O}_7}(\mathbb{C}[\overline{O}_8])\to$$ $$H^4_{\overline{O}_7}(S)\otimes \operatorname{det}\to H^4_{\overline{O}_7}(S)\to H^4_{\overline{O}_7}(\mathbb{C}[\overline{O}_8])\to H^5_{\overline{O}_7}(S)\otimes \operatorname{det}\to H^5_{\overline{O}_7}(S)\to 0$$ For the remainder of the subsection we use the above long exact sequence to determine $H^i_{\overline{O}_7}(S)$ for $i=3,4,5$. Namely, we will use that $H^{i-1}_{\overline{O}_7}(\mathbb{C}[\overline{O}_8])$ contains the kernel of $H^i_{\overline{O}_7}(S)\otimes \operatorname{det}\to H^i_{\overline{O}_7}(S)$ as a subrepresentation, and it contains the cokernel of $H^{i-1}_{\overline{O}_7}(S)\otimes \operatorname{det}\to H^{i-1}_{\overline{O}_7}(S)$ as a subrepresentation. **Lemma 48**. *For $i=2,3,4,5$, we have that all subrepresentations $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ of $H^i_{\overline{O}_7}(S)$ satisfy $\gamma_{6-i}\leq -i+1$ .* *Proof.* Consider the exact sequence: $$H^{i-1}_{\overline{O}_7}(\mathbb{C}[\overline{O}_8])\to H^i_{\overline{O}_7}(S)\otimes \operatorname{det}\to H^i_{\overline{O}_7}(S).$$ By ([\[pushDown223\]](#pushDown223){reference-type="ref" reference="pushDown223"}) we have $\mathbb{R}^{i-2}\pi_{\ast}\mathscr{H}^1_{Y_7}(\mathcal{O}_Y)\cong H^{i-1}_{\overline{O}_7}(\mathbb{C}[\overline{O}_8])$. 
Since $\mathscr{H}^1_{Y_7}(\mathcal{O}_Y)$ is a direct sum of bundles of the form $\mathbb{S}_{\lambda}A\otimes\mathbb{S}_{\mu}B\otimes\mathbb{S}_{\nu}\mathcal{Q}$, it follows from Lemma [Lemma 8](#BottLemma){reference-type="ref" reference="BottLemma"} that subrepresentations $\mathbb{S}_{\lambda}A\otimes\mathbb{S}_{\mu}B\otimes\mathbb{S}_{\nu}C$ of $H^{i-1}_{\overline{O}_7}(\mathbb{C}[\overline{O}_8])$ satisfy $\nu_{6-i}=-i+2$. Suppose for contradiction that $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ in $H^i_{\overline{O}_7}(S)$ satisfies $\gamma_{6-i}\geq-i+2$, and let $v$ be its highest weight vector. Then $v\otimes \operatorname{det}^p=(v\otimes\operatorname{det}^{p-1})\otimes\operatorname{det}$ is not in the kernel of the multiplication map $H^i_{\overline{O}_7}(S)\otimes \operatorname{det}\to H^i_{\overline{O}_7}(S)$ for all $p\geq 1$. Thus, $v$ is not supported inside $\overline{O}_8$, a contradiction. ◻ **Lemma 49**. *Let $M$ be a simple module in $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$, and let $K$ denote the kernel of the multiplication by determinant map: $M\otimes \operatorname{det}\to M$. If $M\neq S$ then $K\neq 0$. Furthermore, the following is true:* 1. *If $M$ is equal to $D_3$, $D_4$, or $D_5$, then all subrepresentations $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ of $K$ satisfy $\gamma_3=-1$.* 2. *If $M$ is equal to $D_1$ or $D_2$, then all subrepresentations $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ of $K$ satisfy $\gamma_2=-2$.* 3. *If $M$ is equal to $D_0$, then all subrepresentations $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ of $K$ satisfy $\gamma_1=-3$.* *Proof.* Let $i=0,1,3,4,5$, and let $c_i=\operatorname{codim}(\overline{O}_i,V)$. 
We have $H^{c_i}_{\overline{O}_i}(S)=D_i$ in these cases, so that $$K=H^{c_i-1}_{\overline{O}_i}(\mathbb{C}[\overline{O}_8]).$$ If $i=2$, then $H^{c_i}_{\overline{O}_i}(S)$ is a non-trivial extension of $D_0$ by $D_2$, so that $K$ is a submodule of $H^{c_i-1}_{\overline{O}_i}(\mathbb{C}[\overline{O}_8])$. Let $Y=Y_{223}$ and $\pi=\pi_{223}$ as above, and set $Y_i=\pi^{-1}(\overline{O}_i)$. Again, since $\overline{O}_8$ has rational singularities, we have $\mathbb{R}\pi_{\ast}\mathcal{O}_Y\cong \mathbb{C}[\overline{O}_8]$, so there is a spectral sequence $\mathbb{R}^p\pi_{\ast}\mathscr{H}^q_{Y_i}(\mathcal{O}_Y)\implies H^{p+q}_{\overline{O}_i}(\mathbb{C}[\overline{O}_8])$. By Lemma [Lemma 8](#BottLemma){reference-type="ref" reference="BottLemma"}, we have that $\mathbb{R}^p\pi_{\ast}\mathscr{H}^q_{Y_i}(\mathcal{O}_Y)$ satisfies: 1. if $p=1$ then all subrepresentations $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ of $\mathbb{R}^p\pi_{\ast}\mathscr{H}^q_{Y_i}(\mathcal{O}_Y)$ satisfy $\gamma_3=-1$, 2. if $p=2$ then all subrepresentations $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ of $\mathbb{R}^p\pi_{\ast}\mathscr{H}^q_{Y_i}(\mathcal{O}_Y)$ satisfy $\gamma_2=-2$, 3. if $p=3$ then all subrepresentations $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ of $\mathbb{R}^p\pi_{\ast}\mathscr{H}^q_{Y_i}(\mathcal{O}_Y)$ satisfy $\gamma_1=-3$. Let $c_i^Y=\operatorname{codim}(Y_i,Y)$. If $i=0$, then $c_i^Y=12$ and $c_i=16$, so $p=16-12-1=3$. If $i=1$, then $c_i^Y=7$ and $c_i=10$, so $p=10-7-1=2$. If $i=2$, then $c_i^Y=6$ and $c_i=9$, so $p=9-6-1=2$. If $i=3,4$, then $c_i^Y=5$ and $c_i=7$, so $p=7-5-1=1$. If $i=5$, then $c_i^Y=3$ and $c_i=5$, so $p=5-3-1=1$. ◻ **Lemma 50**. *We have that $D_2$ is not a submodule of $H^4_{\overline{O}_7}(S)$, and $D_6$ is not a submodule of $H^3_{\overline{O}_7}(S)$.* *Proof.* We continue to write $Y=Y_{223}$ and $\pi=\pi_{223}$. 
By Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} we have that $\mathbb{S}_{(-6,-6)}A\otimes\mathbb{S}_{(-6,-6)}B\otimes\mathbb{S}_{(-3,-3,-3,-3)}C$ is a subrepresentation of $D_2$, so that $\mathbb{S}_{(-4,-4)}A\otimes\mathbb{S}_{(-4,-4)}B\otimes\mathbb{S}_{(-2,-2,-2,-2)}C$ is a subrepresentation of $D_2\otimes \operatorname{det}$. By Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"}, $D_2$ has no subrepresentations $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ satisfying $\gamma_2=-2$, so it follows that $\mathbb{S}_{(-4,-4)}A\otimes\mathbb{S}_{(-4,-4)}B\otimes\mathbb{S}_{(-2,-2,-2,-2)}C$ belongs to the kernel of the multiplication map $D_2\otimes \operatorname{det}\to D_2$. Similarly, $\mathbb{S}_{(-2,-2)}A\otimes\mathbb{S}_{(-2,-2)}B\otimes\mathbb{S}_{(-1,-1,-1,-1)}C$ belongs to the kernel of $D_6\otimes\operatorname{det}\to D_6$. It suffices to show that $\mathbb{S}_{(-4,-4)}A\otimes\mathbb{S}_{(-4,-4)}B\otimes\mathbb{S}_{(-2,-2,-2,-2)}C$ is not in $H^3_{\overline{O}_7}(\mathbb{C}[\overline{O}_8])\cong \mathbb{R}^2\pi_{\ast}\mathscr{H}^1_{Y_7}(\mathcal{O}_Y)$ and that $\mathbb{S}_{(-2,-2)}A\otimes\mathbb{S}_{(-2,-2)}B\otimes\mathbb{S}_{(-1,-1,-1,-1)}C$ does not belong to $H^2_{\overline{O}_7}(\mathbb{C}[\overline{O}_8])\cong \mathbb{R}^1\pi_{\ast}\mathscr{H}^1_{Y_7}(\mathcal{O}_Y)$. By Bott's Theorem (Lemma [Lemma 8](#BottLemma){reference-type="ref" reference="BottLemma"}) it suffices to show that neither $\mathbb{S}_{(-4,-4)}A\otimes\mathbb{S}_{(-4,-4)}B\otimes\mathbb{S}_{(-2,-3,-3)}\mathcal{Q}$ nor $\mathbb{S}_{(-2,-2)}A\otimes\mathbb{S}_{(-2,-2)}B\otimes\mathbb{S}_{(-1,-1,-2)}\mathcal{Q}$ are subbundles of $\mathcal{O}_Y(\ast Y_7)$. This is proven in Lemma [Lemma 22](#charSfn3){reference-type="ref" reference="charSfn3"}. ◻ **Lemma 51**. *The following is true about $H^i_{\overline{O}_7}(S)$ for $i=4,5$.* 1. 
*$H^4_{\overline{O}_7}(S)$ is a direct sum of copies of $D_1$.* 2. *We have that $H^5_{\overline{O}_7}(S)=D_0$.* *Proof.* We continue to write $Y=Y_{223}$ and $\pi=\pi_{223}$. \(2\) By Lemma [Lemma 44](#lem:firstFactsO7){reference-type="ref" reference="lem:firstFactsO7"} we have that $H^5_{\overline{O}_7}(S)$ is nonzero, and by Lemma [Lemma 48](#lem:upperBoundsOnWeights){reference-type="ref" reference="lem:upperBoundsOnWeights"} we have that all subrepresentations $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ of $H^5_{\overline{O}_7}(S)$ satisfy $\gamma_{1}\leq -4$. Thus, by Pieri's Rule [@weyman Corollary 2.3.5] we see that $H^5_{\overline{O}_7}(S)$ must be a nonzero module supported on the origin, so it must be a direct sum of copies of $D_0$. By Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"}, $D_0$ has no subrepresentations $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ satisfying $\gamma_1=-3$, so the representation $\mathbb{S}_{(-6,-6)}A\otimes\mathbb{S}_{(-6,-6)}B \otimes\mathbb{S}_{(-3,-3,-3,-3)}C$ appears in the kernel of the multiplication map $D_0\otimes\operatorname{det}\to D_0$. Since $\mathbb{S}_{(-6,-6)}A\otimes\mathbb{S}_{(-6,-6)}B \otimes\mathbb{S}_{(-4,-4,-4)}\mathcal{Q}$ appears in $\mathcal{O}_Y(\ast Y_7)$ with multiplicity one (as a power of the semi-invariant), we have by Lemma [Lemma 8](#BottLemma){reference-type="ref" reference="BottLemma"} that $\mathbb{S}_{(-6,-6)}A\otimes\mathbb{S}_{(-6,-6)}B \otimes\mathbb{S}_{(-3,-3,-3,-3)}C$ appears with multiplicity one in $H^4_{\overline{O}_7}(\mathbb{C}[\overline{O}_8])$. We conclude that $H^5_{\overline{O}_7}(S)=D_0$. \(1\) By Lemma [Lemma 48](#lem:upperBoundsOnWeights){reference-type="ref" reference="lem:upperBoundsOnWeights"} and Lemma [Lemma 49](#lem:kerCharlemma){reference-type="ref" reference="lem:kerCharlemma"}, neither $D_3$, $D_4$ nor $D_5$ are submodules of $H^4_{\overline{O}_7}(S)$. 
Again, since $\mathbb{S}_{(-6,-6)}A\otimes\mathbb{S}_{(-6,-6)}B\otimes\mathbb{S}_{(-3,-3,-3,-3)}C$ belongs to the kernel of the multiplication map $D_0\otimes\operatorname{det}\to D_0$, if $D_0$ were a submodule of $H^4_{\overline{O}_7}(S)$, then $\mathbb{S}_{(-6,-6)}A\otimes\mathbb{S}_{(-6,-6)}B\otimes\mathbb{S}_{(-3,-3,-3,-3)}C$ would belong to $\mathbb{R}^2\pi_{\ast}\mathscr{H}^1_{Y_7}(\mathcal{O}_Y)=H^{3}_{\overline{O}_7}(\mathbb{C}[\overline{O}_8])$. By Lemma [Lemma 8](#BottLemma){reference-type="ref" reference="BottLemma"}, this is impossible, as there is no bundle $\mathbb{S}_{\lambda}\mathcal{Q}$ on $\mathbb{G}(3;C)$ with $\mathbb{R}^2\pi_{\ast}\mathbb{S}_{\lambda}\mathcal{Q}\cong \mathbb{S}_{(-3,-3,-3,-3)}C$. By Lemma [Lemma 50](#lem:2notsub7){reference-type="ref" reference="lem:2notsub7"} we have that $D_2$ is not a submodule of $H^4_{\overline{O}_7}(S)$. By Lemma [Lemma 46](#lem:Varbaro){reference-type="ref" reference="lem:Varbaro"}, neither $D_6$ nor $D_6'$ are composition factors of $H^4_{\overline{O}_7}(S)$. We conclude that the only possible simple submodule is $D_1$, and by Theorem [Theorem 30](#thm:quiv4){reference-type="ref" reference="thm:quiv4"}, $H^4_{\overline{O}_7}(S)$ is supported on the quiver $$\xymatrix@C+1.6pc{(0) \ar@<0.5ex>[r] & \ar@<0.5ex>[l] (2) \ar@<0.5ex>[r] & \ar@<0.5ex>[l] (1) }$$ with all paths of length two being zero. Suppose $M$ is a non-zero indecomposable summand of $H^4_{\overline{O}_7}(S)$ (with unique simple submodule $D_1$). By [@lHorincz2019categories Theorem 2.13], the only possible composition factors of $M$ (and hence $H^4_{\overline{O}_7}(S)$) are $D_1$ and $D_2$, and if $D_2$ appeared, it would have to be a quotient. If $D_2$ were a quotient of $H^4_{\overline{O}_7}(S)$ then the cokernel of the multiplication map $D_2\otimes\operatorname{det}\to D_2$ would be a subrepresentation of $H^4_{\overline{O}_7}(\mathbb{C}[\overline{O}_8])$.
Since $\mathbb{S}_{(-6,-6)}A\otimes\mathbb{S}_{(-6,-6)}B \otimes\mathbb{S}_{(-3,-3,-3,-3)}C$ belongs to this cokernel, we would then have that another copy of $\mathbb{S}_{(-6,-6)}A\otimes\mathbb{S}_{(-6,-6)}B \otimes\mathbb{S}_{(-3,-3,-3,-3)}C$ belongs to $H^4_{\overline{O}_7}(\mathbb{C}[\overline{O}_8])$. This is a contradiction, as in part (2) above we showed that this representation appears in $H^4_{\overline{O}_7}(\mathbb{C}[\overline{O}_8])$ with multiplicity one, and comes from the kernel of $D_0\otimes\operatorname{det}\to D_0$. ◻ **Lemma 52**. *The following is true about weights in $D_1$.* 1. *For all $t>0$ the representation $\mathbb{S}_{(t-6,-t-6)}A \otimes\mathbb{S}_{(t-6,-t-6)}B\otimes\mathbb{S}_{(-3,-3,-3,-3)}C$ appears with multiplicity one in the cokernel of the multiplication map $D_1\otimes\operatorname{det}\to D_1$. In particular, it appears in $D_1$.* 2. *We have that $H^4_{\overline{O}_7}(S)=D_1$.* *Proof.* We continue to write $Y=Y_{223}$ and $\pi=\pi_{223}$. We consider the cokernel of the map $H^4_{\overline{O}_7}(S)\otimes \operatorname{det}\to H^4_{\overline{O}_7}(S)$, which is a subrepresentation of $H^4_{\overline{O}_7}(\mathbb{C}[\overline{O}_8])\cong \mathbb{R}^3\pi_{\ast}\mathscr{H}^1_{Y_7}(\mathcal{O}_Y)$. By Lemma [Lemma 51](#lem:reduceD1D5){reference-type="ref" reference="lem:reduceD1D5"} we know that $H^4_{\overline{O}_7}(S)$ is a direct sum of copies of $D_1$. Since the representations $\mathbb{S}_{(t-6,-t-6)}A \otimes\mathbb{S}_{(t-6,-t-6)}B\otimes\mathbb{S}_{(-3,-3,-3,-3)}C$ ($t>0$) do not appear in $D_0\otimes\operatorname{det}$ (Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"}), it suffices to show that they appear with multiplicity one in $\mathbb{R}^3\pi_{\ast}\mathscr{H}^1_{Y_7}(\mathcal{O}_Y)$.
By Lemma [Lemma 8](#BottLemma){reference-type="ref" reference="BottLemma"}, we need to show that $\mathbb{S}_{(t-6,-t-6)}A \otimes\mathbb{S}_{(t-6,-t-6)}B\otimes\mathbb{S}_{(-4,-4,-4)}\mathcal{Q}$ appears with multiplicity one in $\mathscr{H}^1_{Y_7}(\mathcal{O}_Y)$. This is a consequence of Lemma [Lemma 22](#charSfn3){reference-type="ref" reference="charSfn3"}. ◻ **Corollary 53**. *The following is true about subrepresentations of $D_1$ and $D_7$.* 1. *if $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ belongs to $D_1$ then $\gamma_1\geq -3$ and $\gamma_2\leq -3$,* 2. *if $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ belongs to $D_7$ then $\gamma_3\geq -1$ and $\gamma_4\leq -1$.* *Proof.* The upper bounds on $\gamma_i$ follow from Lemma [Lemma 48](#lem:upperBoundsOnWeights){reference-type="ref" reference="lem:upperBoundsOnWeights"} and the fact that $H^2_{\overline{O}_7}(S)=D_7$ and $H^4_{\overline{O}_7}(S)=D_1$. The lower bounds on $\gamma_i$ then follow from applying the Fourier transform, using that $\mathcal{F}(D_7)=D_1$. ◻ Finally, we turn our attention to $H^3_{\overline{O}_7}(S)$, the remaining ingredient to prove the case $n=4$ of Theorem [Theorem 41](#loc07){reference-type="ref" reference="loc07"}. **Lemma 54**. *Let $n=4$. The following is true about the characters of $D_1$ and $D_7$.* 1. *Let $\gamma_1\geq -3$ and $\gamma_2=\gamma_3=\gamma_4\leq -3$, and let $\alpha,\beta \in \mathbb{Z}^2_{\operatorname{dom}}$ with $|\alpha|=|\beta|=|\gamma|$. Representations of the form $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ appear in $D_1$ if and only if $\alpha=\beta$ and $\alpha_2\leq 2\gamma_2-1$. If $\alpha_2= 2\gamma_2-1$, then the multiplicity is one.* 2. *Let $\gamma_4\leq -1$ and $\gamma_1=\gamma_2=\gamma_3\geq -1$, and let $\alpha,\beta \in \mathbb{Z}^2_{\operatorname{dom}}$ with $|\alpha|=|\beta|=|\gamma|$. 
Representations of the form $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ appear in $D_7$ if and only if $\alpha=\beta$ and $\alpha_1\geq 2\gamma_1+1$. If $\alpha_1= 2\gamma_1+1$, then the multiplicity is one.* *Proof.* Using $\mathcal{F}$, it suffices to prove (1). Let $\pi=\pi_{111}$, $Y=Y_{111}$, and $Y_1=\pi^{-1}(\overline{O}_1)$. By Lemma [Lemma 10](#pushdownlocallyclosed1){reference-type="ref" reference="pushdownlocallyclosed1"} we have $\pi_{+}\mathcal{O}_Y(\ast Y_1)=\mathbb{R}\Gamma_{O_1}(S)[10]$. By Theorem [Theorem 25](#BettiThm){reference-type="ref" reference="BettiThm"}, and the long exact sequence $$\cdots \to H^j_{\overline{O}_1}(S)\to H^j_{O_1}(S)\to H^{j+1}_{\overline{O}_0}(S)\to \cdots,$$ we have the following in the Grothendieck group of representations of $\operatorname{GL}$ (see Section [2.6](#sec:relmatrices){reference-type="ref" reference="sec:relmatrices"}): $$\big[\chi(\pi_+ \mathcal{O}_Y(\ast Y_1))\big]=[D_1]-4[D_0].$$ By Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} representations of the form $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ with $\gamma_1\geq -3$ and $\gamma_2=\gamma_3=\gamma_4\leq -3$ do not belong to $D_0$. Thus, for representations of this form, we have $$m_{\alpha,\beta,\gamma}:=\big[ \chi(\pi_+ \mathcal{O}_Y(\ast Y_1)):\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B \otimes\mathbb{S}_{\gamma}C\big]=\big[ D_1:\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B \otimes\mathbb{S}_{\gamma}C\big].$$ For the remainder of the proof, we freely use notation from Section [2.6](#sec:relmatrices){reference-type="ref" reference="sec:relmatrices"}.
By [@raicu2016characters Lemma 2.5, Proposition 2.10] we have that $$\big[\chi(\pi_+ \mathcal{O}_Y(\ast Y_1))\big]=\lim_{k\to \infty} \big[p_{1,k}(A)\otimes p_{1,k}(B)\otimes p_{1,k}(C)\otimes S^{\ast}\otimes \operatorname{det}(A^{\ast}\otimes B^{\ast}\otimes C^{\ast})\big].$$ Thus, by [@raicu2016characters Lemma 2.3] we have that $m_{\alpha,\beta,\gamma}$ is equal to $$\lim_{k\to \infty}\left(\sum_{I,J\in \binom{[2]}{1},K\in \binom{[4]}{1}}\big[S:\mathbb{S}_{((-\alpha_2-8,-\alpha_1-8),k,I)}A\otimes\mathbb{S}_{((-\beta_2-8,-\beta_1-8),k,J)}B \otimes\mathbb{S}_{((-\gamma_4-4, -\gamma_3-4, -\gamma_2-4, -\gamma_1-4),k,K)}C \big]\right).$$ Note $\mathbb{S}_{\lambda}A\otimes\mathbb{S}_{\mu}B\otimes\mathbb{S}_{\nu}C$ appears in $S$ only if $\lambda,\mu,\nu$ are partitions. Since $\gamma_1\geq -3$ we have $-\gamma_1-4\leq -1$, so for $k\gg 0$ we have $((-\gamma_4-4, -\gamma_3-4, -\gamma_2-4, -\gamma_1-4),k,K)$ is a partition if and only if $K=\{4\}$, and $$((-\gamma_4-4, -\gamma_3-4, -\gamma_2-4, -\gamma_1-4),k,\{4\})=(k-\gamma_1-7, -\gamma_4-3, -\gamma_3-3, -\gamma_2-3).$$ Therefore, by the Cauchy Formula applied to $\operatorname{Sym}((A\otimes B)\otimes C)$, we have that $m_{\alpha,\beta,\gamma}$ is the limit as $k$ goes to infinity of the following quantity: $$\big[\mathbb{S}_{(k-\gamma_1-7, -\gamma_4-3, -\gamma_3-3, -\gamma_2-3)}(A\otimes B):\mathbb{S}_{(k-\alpha_2-8,-\alpha_1-8)}A\otimes\mathbb{S}_{(k-\beta_2-8,-\beta_1-8)}B \big]$$ $$-2\big[\mathbb{S}_{(k-\gamma_1-7, -\gamma_4-3, -\gamma_3-3, -\gamma_2-3)}(A\otimes B):\mathbb{S}_{(k-\alpha_2-8,-\alpha_1-8)}A\otimes\mathbb{S}_{(k-\beta_1-9,-\beta_2-7)}B \big]$$ $$+\big[\mathbb{S}_{(k-\gamma_1-7, -\gamma_4-3, -\gamma_3-3, -\gamma_2-3)}(A\otimes B):\mathbb{S}_{(k-\alpha_1-9,-\alpha_2-7)}A\otimes\mathbb{S}_{(k-\beta_1-9,-\beta_2-7)}B \big],$$ where the first term corresponds to $I=J=\{1\}$, the middle term corresponds to $I=\{1\}$ and $J=\{2\}$ (twice, by symmetry), and the third term corresponds to $I=J=\{2\}$.
Since $\gamma_2=\gamma_3=\gamma_4$, we can twist by $\operatorname{det}(A\otimes B)^{\otimes \gamma_2+3}$ and apply the Cauchy Formula. Since $\alpha_1\geq \alpha_2$ and $\beta_1\geq \beta_2$ we have $(k-\alpha_2-8,-\alpha_1-8)\neq (k-\beta_1-9,-\beta_2-7)$, so by the Cauchy Formula we have that the middle term above is zero. Similarly, the first and third term are nonzero only if $\alpha=\beta$. Therefore, $m_{\alpha,\beta,\gamma}$ is nonzero only if $\alpha=\beta$ and is equal to the limit as $k$ goes to infinity of the following quantity: $$\big[\mathbb{S}_{(k+\gamma_2-\gamma_1-4)}(A\otimes B):\mathbb{S}_{(k+2\gamma_2-\alpha_2-2,2\gamma_2-\alpha_1-2)}A\otimes\mathbb{S}_{(k+2\gamma_2-\alpha_2-2,2\gamma_2-\alpha_1-2)}B \big]$$ $$+\big[\mathbb{S}_{(k+\gamma_2-\gamma_1-4)}(A\otimes B):\mathbb{S}_{(k+2\gamma_2-\alpha_2-2,2\gamma_2-\alpha_1-2)}A\otimes\mathbb{S}_{(k+2\gamma_2-\alpha_1-3,2\gamma_2-\alpha_2-1)}B \big].$$ The first term is nonzero (and equal to one) if and only if $\alpha_1\leq 2\gamma_2-2$. Similarly, the second term is nonzero (and equal to one) if and only if $\alpha_2\leq 2\gamma_2-1$. If $\alpha_2=2\gamma_2-1$, then since $|\alpha|=|\gamma|=\gamma_1+3\gamma_2$ we have $\alpha_1=\gamma_1+\gamma_2+1>2\gamma_2-2$, so $m_{\alpha,\beta,\gamma}=1$. ◻ **Lemma 55**. *The following is true about weights in $D_5$ and $D_7$.* 1. *For all $t>0$ the representation $\mathbb{S}_{(t-2,-t-2)}A \otimes\mathbb{S}_{(t-2,-t-2)}B\otimes\mathbb{S}_{(-1,-1,-1,-1)}C$ appears with multiplicity one in the kernel of the multiplication map $D_5\otimes\operatorname{det}\to D_5$.* 2. *For all $t>0$ the representation $\mathbb{S}_{(t-2,-t-2)}A \otimes\mathbb{S}_{(t-2,-t-2)}B\otimes\mathbb{S}_{(-1,-1,-1,-1)}C$ appears in the cokernel of the multiplication map $D_7\otimes\operatorname{det}\to D_7$.* 3. *We have that $D_5$ is not a submodule of $H^3_{\overline{O}_7}(S)$.* *Proof.* (1) Let $Y=Y_{223}$ and $\pi=\pi_{223}$. 
By Lemma [Lemma 61](#lem:charD5n3){reference-type="ref" reference="lem:charD5n3"}(1) we have that $\mathbb{S}_{(t-2,-t-2)}A \otimes\mathbb{S}_{(t-2,-t-2)}B\otimes\mathbb{S}_{(-1,-1,-2)}\mathcal{Q}$ belongs to $D_5^{Y}$ with multiplicity one. It follows from Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} that $\mathbb{S}_{(t-2,-t-2)}A \otimes\mathbb{S}_{(t-2,-t-2)}B\otimes\mathbb{S}_{(-1,-1,-2)}\mathcal{Q}$ does not belong to $D_2^Y$, so by Theorem [Theorem 33](#loc05){reference-type="ref" reference="loc05"}(1) we have that $\mathbb{S}_{(t-2,-t-2)}A \otimes\mathbb{S}_{(t-2,-t-2)}B\otimes\mathbb{S}_{(-1,-1,-2)}\mathcal{Q}$ belongs to $\mathscr{H}^3_{Y_5}(\mathcal{O}_Y)$ with multiplicity one. By the proof of Lemma [Lemma 49](#lem:kerCharlemma){reference-type="ref" reference="lem:kerCharlemma"}, we have that $\mathbb{R}^1\pi_{\ast}\mathscr{H}^3_{Y_5}(\mathcal{O}_Y)$ is the kernel of the multiplication map $D_5\otimes\operatorname{det}\to D_5$. By Bott's Theorem (Lemma [Lemma 8](#BottLemma){reference-type="ref" reference="BottLemma"}) applied to $\mathbb{S}_{(t-2,-t-2)}A \otimes\mathbb{S}_{(t-2,-t-2)}B\otimes\mathbb{S}_{(-1,-1,-2)}\mathcal{Q}$, we obtain (1). \(2\) By Lemma [Lemma 54](#lem:wit17n4){reference-type="ref" reference="lem:wit17n4"} we have that $\mathbb{S}_{(t-2,-t-2)}A \otimes\mathbb{S}_{(t-2,-t-2)}B\otimes\mathbb{S}_{(-1,-1,-1,-1)}C$ appears in $D_7$, and by Corollary [Corollary 53](#cor:weightBoundsn4){reference-type="ref" reference="cor:weightBoundsn4"} we have that $\mathbb{S}_{(t-4,-t-4)}A \otimes\mathbb{S}_{(t-4,-t-4)}B\otimes\mathbb{S}_{(-2,-2,-2,-2)}C$ does not appear in $D_7$. Thus, $\mathbb{S}_{(t-2,-t-2)}A \otimes\mathbb{S}_{(t-2,-t-2)}B\otimes\mathbb{S}_{(-1,-1,-1,-1)}C$ is in the cokernel of the multiplication map $D_7\otimes\operatorname{det}\to D_7$. 
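The determinant twist used in part (2) is worth recording explicitly. As can be read off from the pairs of weights appearing throughout this section (a bookkeeping observation, not a new claim), tensoring with $\operatorname{det}$ shifts a $\operatorname{GL}(A)\times\operatorname{GL}(B)\times\operatorname{GL}(C)$-weight by $((2,2),(2,2),(1,1,1,1))$; for instance, $$\big(\mathbb{S}_{(t-4,-t-4)}A \otimes\mathbb{S}_{(t-4,-t-4)}B\otimes\mathbb{S}_{(-2,-2,-2,-2)}C\big)\otimes \operatorname{det}=\mathbb{S}_{(t-2,-t-2)}A \otimes\mathbb{S}_{(t-2,-t-2)}B\otimes\mathbb{S}_{(-1,-1,-1,-1)}C.$$ In particular, a representation lies in the cokernel of a multiplication map $M\otimes\operatorname{det}\to M$ whenever it appears in $M$ while its shift by $((-2,-2),(-2,-2),(-1,-1,-1,-1))$ does not, which is exactly how part (2) was deduced.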
\(3\) Since $H^2_{\overline{O}_7}(S)=D_7$, by (1) and (2) it suffices to show that $\mathbb{S}_{(-1,-3)}A \otimes\mathbb{S}_{(-1,-3)}B\otimes\mathbb{S}_{(-1,-1,-1,-1)}C$ appears with multiplicity one in $H^2_{\overline{O}_7}(\mathbb{C}[\overline{O}_8])$. By Lemma [Lemma 8](#BottLemma){reference-type="ref" reference="BottLemma"} it suffices to show that $\mathbb{S}_{(-1,-3)}A \otimes\mathbb{S}_{(-1,-3)}B\otimes\mathbb{S}_{(-1,-1,-2)}\mathcal{Q}$ appears with multiplicity one in $\mathcal{O}_Y(\ast Y_7)$. This follows from Lemma [Lemma 22](#charSfn3){reference-type="ref" reference="charSfn3"}. ◻ **Lemma 56**. *Let $i=3,4$. We have that $D_i$ is neither a submodule nor a quotient of $H^3_{\overline{O}_7}(S)$.* *Proof.* We prove the result for $D_3$. The proof for $D_4$ is similar. We continue to write $Y=Y_{223}$ and $\pi=\pi_{223}$. By Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} we have that $\mathbb{S}_{(-1,-7)}A\otimes\operatorname{det}(B\otimes C)=\mathbb{S}_{(-1,-7)}A\otimes\mathbb{S}_{(-4,-4)}B\otimes\mathbb{S}_{(-2,-2,-2,-2)}C$ is a subrepresentation of $D_3$, so that $\mathbb{S}_{(1,-5)}A\otimes\mathbb{S}_{(-2,-2)}B\otimes\mathbb{S}_{(-1,-1,-1,-1)}C$ is a subrepresentation of $D_3\otimes \operatorname{det}$. Since $D_3$ has no subrepresentations $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ satisfying $\alpha_2=-5$, it follows that $\mathbb{S}_{(1,-5)}A\otimes\mathbb{S}_{(-2,-2)}B\otimes\mathbb{S}_{(-1,-1,-1,-1)}C$ belongs to the kernel of the multiplication map $D_3\otimes \operatorname{det}\to D_3$. Since $\mathcal{F}(D_3)=D_3$, it follows also that $\mathbb{S}_{(-1,-7)}A\otimes\mathbb{S}_{(-4,-4)}B\otimes\mathbb{S}_{(-2,-2,-2,-2)}C$ is in the cokernel of the multiplication map $D_3\otimes\operatorname{det}\to D_3$.
We need to show that $\mathbb{S}_{(1,-5)}A\otimes\mathbb{S}_{(-2,-2)}B\otimes\mathbb{S}_{(-1,-1,-1,-1)}C$ does not belong to $H^2_{\overline{O}_7}(\mathbb{C}[\overline{O}_8])\cong \mathbb{R}^1\pi_{\ast}\mathscr{H}^1_{Y_7}(\mathcal{O}_Y)$ and that $\mathbb{S}_{(-1,-7)}A\otimes\mathbb{S}_{(-4,-4)}B\otimes\mathbb{S}_{(-2,-2,-2,-2)}C$ does not belong to $H^3_{\overline{O}_7}(\mathbb{C}[\overline{O}_8])\cong \mathbb{R}^2\pi_{\ast}\mathscr{H}^1_{Y_7}(\mathcal{O}_Y)$. By Bott's Theorem (Lemma [Lemma 8](#BottLemma){reference-type="ref" reference="BottLemma"}), it suffices to show that neither $\mathbb{S}_{(1,-5)}A\otimes\mathbb{S}_{(-2,-2)}B\otimes\mathbb{S}_{(-1,-1,-2)}\mathcal{Q}$ nor $\mathbb{S}_{(-1,-7)}A\otimes\mathbb{S}_{(-4,-4)}B\otimes\mathbb{S}_{(-2,-3,-3)}\mathcal{Q}$ are subbundles of $\mathcal{O}_Y(\ast Y_7)$. Since $\mathcal{F}(\mathcal{O}_Y(\ast Y_7))=\mathcal{O}_Y(\ast Y_7)$ and $\mathcal{F}(\mathbb{S}_{(1,-5)}A\otimes\mathbb{S}_{(-2,-2)}B\otimes\mathbb{S}_{(-1,-1,-2)}\mathcal{Q})=\mathbb{S}_{(-1,-7)}A\otimes\mathbb{S}_{(-4,-4)}B\otimes\mathbb{S}_{(-2,-3,-3)}\mathcal{Q}$, it suffices to show that $\mathbb{S}_{(1,-5)}A\otimes\mathbb{S}_{(-2,-2)}B\otimes\mathbb{S}_{(-1,-1,-2)}\mathcal{Q}$ is not a subbundle of $\mathcal{O}_Y(\ast Y_7)$. Following the argument of Lemma [Lemma 22](#charSfn3){reference-type="ref" reference="charSfn3"}, it suffices to show that $\mathbb{S}_{(k+3,k-3)}A^{\ast}\otimes\mathbb{S}_{(k,k)}B^{\ast}$ is not a subrepresentation of $\mathbb{S}_{(2k-1,1,0,0)}(A^{\ast}\otimes B^{\ast})$ for $k\gg 0$. To prove this, we apply [@raicu2012secant Corollary 4.3b]. Using the notation there $f=k$ and $e=2k-2$, so $e<2f$. Thus, the multiplicity is zero. ◻ **Lemma 57**. *The following is true about $D_6'$.* 1. *We have $H^3_{\overline{O}_7}(S)=D_6'$.* 2. 
*The representation $\mathbb{S}_{(-1,-3)}A\otimes\mathbb{S}_{(-2,-2)}B\otimes\mathbb{S}_{(-1,-1,-1,-1)}C$ appears with multiplicity one in the kernel of the multiplication map $D_6'\otimes\operatorname{det}\to D_6'$. In particular, the representation $\mathbb{S}_{(-3,-5)}A\otimes\mathbb{S}_{(-4,-4)}B\otimes\mathbb{S}_{(-2,-2,-2,-2)}C$ appears in $D_6'$ with multiplicity one.* 3. *If $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ belongs to $D_6'$ then $\gamma_2\geq -2$ and $\gamma_3\leq -2$.* 4. *The following representations do not appear in $D_6'$: $$\mathbb{S}_{(-1,-7)}A\otimes\mathbb{S}_{(-4,-4)}B\otimes\mathbb{S}_{(-2,-2,-2,-2)}C, \quad \mathbb{S}_{(-4,-4)}A\otimes\mathbb{S}_{(-1,-7)}B\otimes\mathbb{S}_{(-2,-2,-2,-2)}C,$$ $$\mathbb{S}_{(-2,-6)}A\otimes\mathbb{S}_{(-4,-4)}B\otimes\mathbb{S}_{(-2,-2,-2,-2)}C.$$* *Proof.* The kernel of the multiplication map $H^3_{\overline{O}_7}(S)\otimes\operatorname{det}\to H^3_{\overline{O}_7}(S)$ is a subrepresentation of $H^2_{\overline{O}_7}(\mathbb{C}[\overline{O}_8])\cong \mathbb{R}^1\pi_{\ast}\mathscr{H}^1_{Y_7}(\mathcal{O}_Y)$. By Lemma [Lemma 8](#BottLemma){reference-type="ref" reference="BottLemma"} we have that all subrepresentations $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ of this kernel satisfy $\gamma_3=-1$. By Lemma [Lemma 49](#lem:kerCharlemma){reference-type="ref" reference="lem:kerCharlemma"}, neither $D_0$, $D_1$, nor $D_2$ are submodules of $H^3_{\overline{O}_7}(S)$. By Lemma [Lemma 50](#lem:2notsub7){reference-type="ref" reference="lem:2notsub7"} and Lemma [Lemma 55](#lem:factsD5lc7){reference-type="ref" reference="lem:factsD5lc7"} we have that neither $D_5$ nor $D_6$ are submodules of $H^3_{\overline{O}_7}(S)$. By Lemma [Lemma 56](#lem:no34in7){reference-type="ref" reference="lem:no34in7"}, neither $D_3$ nor $D_4$ are submodules. We conclude that the only possible simple submodule is $D_6'$. 
By Theorem [Theorem 30](#thm:quiv4){reference-type="ref" reference="thm:quiv4"}, $H^3_{\overline{O}_7}(S)$ must be supported on the quiver $$\xymatrix@C+1.6pc{(3) \ar@<0.5ex>[r] & \ar@<0.5ex>[l] (6') \ar@<0.5ex>[r] & \ar@<0.5ex>[l] (4) }$$ with all 2-cycles being zero. Suppose $M$ is a non-zero indecomposable summand of $H^3_{\overline{O}_7}(S)$ (with unique simple submodule $D_6'$). The only other possible composition factors of $M$ (and hence $H^3_{\overline{O}_7}(S)$) are $D_3$ and $D_4$, and if they appeared, they would have to be quotients, by [@lHorincz2019categories Theorem 2.13]. By Lemma [Lemma 56](#lem:no34in7){reference-type="ref" reference="lem:no34in7"} they cannot be quotients of $H^3_{\overline{O}_7}(S)$, so we conclude that $H^3_{\overline{O}_7}(S)$ is a direct sum of copies of $D_6'$. By Lemma [Lemma 54](#lem:wit17n4){reference-type="ref" reference="lem:wit17n4"} the representation $\mathbb{S}_{(-1,-3)}A\otimes\mathbb{S}_{(-2,-2)}B\otimes\mathbb{S}_{(-1,-1,-1,-1)}C$ does not appear in $D_7$ (since $(-1,-3)\neq (-2,-2)$). Thus, it does not appear in the cokernel of $D_7\otimes\operatorname{det}\to D_7$. Therefore, to complete the proof of (1) and (2), it suffices to show that $\mathbb{S}_{(-1,-3)}A\otimes\mathbb{S}_{(-2,-2)}B\otimes\mathbb{S}_{(-1,-1,-1,-1)}C$ appears in $H^2_{\overline{O}_7}(\mathbb{C}[\overline{O}_8])\cong \mathbb{R}^1\pi_{\ast}\mathscr{H}^1_{Y_7}(\mathcal{O}_Y)$ with multiplicity one. By Bott's Theorem (Lemma [Lemma 8](#BottLemma){reference-type="ref" reference="BottLemma"}) it suffices to show that $\mathbb{S}_{(-1,-3)}A\otimes\mathbb{S}_{(-2,-2)}B\otimes\mathbb{S}_{(-1,-1,-2)}\mathcal{Q}$ appears in $\mathcal{O}_Y(\ast Y_7)$ with multiplicity one. This is a consequence of Lemma [Lemma 22](#charSfn3){reference-type="ref" reference="charSfn3"}.
\(3\) The upper bound on $\gamma_3$ follows from Lemma [Lemma 48](#lem:upperBoundsOnWeights){reference-type="ref" reference="lem:upperBoundsOnWeights"} and the fact that $H^3_{\overline{O}_7}(S)=D_6'$. The lower bound on $\gamma_2$ then follows from applying the Fourier transform, using that $\mathcal{F}(D_6')=D_6'$. \(4\) If either of the first two representations appeared in $D_6'$, then by (3) we would have that $\mathbb{S}_{(1,-5)}A\otimes\mathbb{S}_{(-2,-2)}B\otimes\mathbb{S}_{(-1,-1,-1,-1)}C$ or $\mathbb{S}_{(-2,-2)}A\otimes\mathbb{S}_{(1,-5)}B\otimes\mathbb{S}_{(-1,-1,-1,-1)}C$ is in the kernel of the multiplication map $D_6'\otimes\operatorname{det}\to D_6'$, and hence in $H^2_{\overline{O}_7}(\mathbb{C}[\overline{O}_8])\cong \mathbb{R}^1\pi_{\ast}\mathscr{H}^1_{Y_7}(\mathcal{O}_Y)$. We showed during the proof of Lemma [Lemma 56](#lem:no34in7){reference-type="ref" reference="lem:no34in7"} that this does not happen. Similarly, if $\mathbb{S}_{(-2,-6)}A\otimes\mathbb{S}_{(-4,-4)}B\otimes\mathbb{S}_{(-2,-2,-2,-2)}C$ were in $D_6'$, then $\mathbb{S}_{(0,-4)}A\otimes\mathbb{S}_{(-2,-2)}B\otimes\mathbb{S}_{(-1,-1,-1,-1)}C$ would be in $H^2_{\overline{O}_7}(\mathbb{C}[\overline{O}_8])\cong \mathbb{R}^1\pi_{\ast}\mathscr{H}^1_{Y_7}(\mathcal{O}_Y)$. By Lemma [Lemma 8](#BottLemma){reference-type="ref" reference="BottLemma"} and Lemma [Lemma 22](#charSfn3){reference-type="ref" reference="charSfn3"}(5) this representation does not appear. ◻ ## The case $n\geq 5$ {#sec:lcO7ngeq5} Let $\pi'=\pi_{224}$ and $Y'=Y_{224}$. We set $Y'_7=\pi'^{-1}(\overline{O}_7)$. Then we have $\mathbb{R}\pi'_{\ast}\mathbb{R}\mathscr{H}^0_{Y_7'}(\mathcal{O}_{Y'})=\mathbb{R}\Gamma_{\overline{O}_7}(S)$, so there is a spectral sequence $$\label{localO7ngeq5} E_2^{i,j}=\mathbb{R}^i\pi'_{\ast}\mathscr{H}^j_{Y_7'}(\mathcal{O}_{Y'})\implies H^{i+j}_{\overline{O}_7}(S).$$ We use the case $n=4$ of Theorem [Theorem 41](#loc07){reference-type="ref" reference="loc07"} to prove the case $n\geq 5$.
By Corollary [Corollary 53](#cor:weightBoundsn4){reference-type="ref" reference="cor:weightBoundsn4"}, Lemma [Lemma 57](#lem:D6primeisH3){reference-type="ref" reference="lem:D6primeisH3"} and Lemma [Lemma 8](#BottLemma){reference-type="ref" reference="BottLemma"} the nonzero modules on the $E_2$ page are given by $$E_2^{n-4,2}=\mathbb{R}^{n-4}\pi'_{\ast}D_7^{Y'} ,\quad E_2^{2n-8,3}=\mathbb{R}^{2n-8}\pi'_{\ast}D_6'^{Y'},\quad E_2^{3n-12,4}=\mathbb{R}^{3n-12}\pi'_{\ast}D_1^{Y'},\quad E_2^{4n-16,5}=\mathbb{R}^{4n-16}\pi'_{\ast}D_0^{Y'}.$$ The arrows on the $E_r$ page decrease $j$ by $r-1$, and increase $i$ by $r$, so there are no nonzero differentials in the spectral sequence, and it degenerates on the $E_2$ page. By [@lHorincz2019categories Lemma 3.11] and Theorem [Theorem 31](#quiverN5){reference-type="ref" reference="quiverN5"} we have $\mathbb{R}^{n-4}\pi'_{\ast}D_7^{Y'}\cong D_7$. Since $\mathbb{R}^{4n-16}\pi'_{\ast}D_0^{Y'}\cong D_0$ (see [@socledegs Lemma 3.4]), to complete the proof of Theorem [Theorem 41](#loc07){reference-type="ref" reference="loc07"}, we need to show that $\mathbb{R}^{2n-8}\pi'_{\ast}D_6'^{Y'}=D_6'$ and $\mathbb{R}^{3n-12}\pi'_{\ast}D_1^{Y'}\cong D_1$. We take care of this in Lemma [Lemma 65](#lem:directimD1){reference-type="ref" reference="lem:directimD1"} and Lemma [Lemma 67](#lem:pushD6prime){reference-type="ref" reference="lem:pushD6prime"} below. # Characters and witness weights {#sec:characters} Given a simple module $M\in \textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$, and a triple $(\alpha,\beta,\gamma)\in \mathbb{Z}^2_{\operatorname{dom}}\times \mathbb{Z}^2_{\operatorname{dom}}\times \mathbb{Z}^n_{\operatorname{dom}}$, we say that $\alpha\times \beta\times \gamma$ is a witness weight for $M$ if $\mathbb{S}_{\alpha}A\otimes \mathbb{S}_{\beta}B\otimes \mathbb{S}_{\gamma}C$ appears in $M$ with multiplicity one, and does not appear in any other simple object of $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$. 
Such a weight may be used to identify the multiplicity of $M$ in a composition series of any $\mathcal{D}$-module in $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$. Furthermore, a witness weight for $M$ yields an explicit presentation of its projective cover in $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$, see [@lHorincz2019categories Section 2.1]. We write $\mathcal{W}(M)$ for the set of witness weights of $M$. ## Characters of the simple modules when $n=3$ {#sec:charn3} **Theorem 58**. *Let $n=3$. The following is true about the witness weights for simple modules in $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$: $$(-6,-6)\times (-6,-6)\times (-4,-4,-4)\in \mathcal{W}(D_0),\quad (0,0)\times (0,0)\times (0,0,0)\in \mathcal{W}(D_8),$$ $$(t-6,-t-6)\times (t-6,-t-6)\times (-4,-4,-4)\in \mathcal{W}(D_1)\quad t>0,\quad (t,-t)\times (t,-t)\times (0,0,0)\in \mathcal{W}(D_7)\quad t>0,$$ $$(-4,-4)\times (-4,-4)\times (-2,-3,-3)\in \mathcal{W}(D_2),\quad (-2,-2)\times (-2,-2)\times (-1,-1,-2)\in \mathcal{W}(D_6),$$ $$(-1,-5)\times (-3,-3)\times (-2,-2,-2)\in \mathcal{W}(D_3),\quad (-3,-3)\times (-1,-5)\times (-2,-2,-2)\in \mathcal{W}(D_4),$$ $$(t-2,-t-4)\times (t-3,-t-3)\times (-2,-2,-2),\; (t-3,-t-3)\times (t-2,-t-4)\times (-2,-2,-2)\in \mathcal{W}(D_5)\quad t>0,$$ $$(-3,-3)\times (-3,-3)\times (-2,-2,-2)\in \mathcal{W}(D_6').$$* **Lemma 59**. *Let $A$ and $B$ be two-dimensional complex vector spaces, and let $t\geq 0$. For $k\gg t$ we have that $\mathbb{S}_{(k-t-1,t+1)}A\otimes \mathbb{S}_{(k-t-1,t+1)}B$ appears in $\mathbb{S}_{(k-2,1,1)}(A\otimes B)$ with multiplicity one.* *Proof.* Let $a=k-2$. 
By Pieri's Rule [@weyman Corollary 2.3.5] we have $\bigwedge^2\otimes\operatorname{Sym}^a\cong \mathbb{S}_{(a,1,1)}\oplus \mathbb{S}_{(a+1,1)}$, which implies that we have the following equality in the Grothendieck group of representations of $\operatorname{GL}(A)\times \operatorname{GL}(B)$: $$\label{eqn:GGplethysm} \big[\mathbb{S}_{(a,1,1)}(A\otimes B)\big]=\big[\bigwedge^2(A\otimes B)\otimes S^a(A\otimes B)\big]-\big[\mathbb{S}_{(a+1,1)}(A\otimes B)\big].$$ By the Cauchy formulas we have $$\begin{aligned} \bigwedge^2(A\otimes B)\otimes S^a(A\otimes B) & = \left(\left(\bigwedge^2A\otimes S^2B\right)\oplus \left(S^2A\otimes\bigwedge^2B\right)\right)\otimes S^a(A\otimes B)\\ & = \left(\left(\bigwedge^2A\otimes S^2B\right)\oplus \left(S^2A\otimes\bigwedge^2B\right)\right)\otimes\left(\bigoplus_{\lambda \vdash a} \mathbb{S}_{\lambda} A\otimes \mathbb{S}_{\lambda} B\right)\\ & = \left( \bigoplus_{\lambda \vdash a} \mathbb{S}_{(\lambda_1+1,\lambda_2+1)} A\otimes (S^2B\otimes\mathbb{S}_{\lambda} B)\right)\oplus \left( \bigoplus_{\lambda \vdash a} (S^2A\otimes\mathbb{S}_{\lambda} A)\otimes\mathbb{S}_{(\lambda_1+1,\lambda_2+1)}B\right),\end{aligned}$$ where the last equality follows from the fact that $\dim A =\dim B=2$, so that $\bigwedge^2A=\operatorname{det}A$ and $\bigwedge^2B=\operatorname{det}B$. We are interested in the case when $\lambda_1=a-t$ and $\lambda_2=t$. Since $a \gg t$, Pieri's Rule implies that $$S^2B\otimes\mathbb{S}_{(a-t,t)} B=\mathbb{S}_{(a-t+2,t)}B\oplus \mathbb{S}_{(a-t+1,t+1)}B\oplus \mathbb{S}_{(a-t,t+2)}B,$$ and a similar identity holds for $S^2A\otimes\mathbb{S}_{(a-t,t)} A$. Therefore, for $a\gg t\geq 0$ we have $$\big[\bigwedge^2(A\otimes B)\otimes S^a(A\otimes B) : \mathbb{S}_{(a-t+1,t+1)}A\otimes \mathbb{S}_{(a-t+1,t+1)}B \big] =2.$$ To complete the proof, we need to show that $[\mathbb{S}_{(a+1,1)}(A\otimes B):\mathbb{S}_{(a-t+1,t+1)}A\otimes \mathbb{S}_{(a-t+1,t+1)}B ]=1$ when $a\gg t$.
To do this, we apply [@raicu2012secant Corollary 4.3b], which handles the plethysm $\mathbb{S}_{\mu}(A\otimes B)$ when $A$ and $B$ are two-dimensional, and $\mu$ has at most two parts. Using the notation there, $f=t+1$, $e=2t+3$, and $r=a+2$. Since $a\gg t$ we have $e<r-1$ and $e$ is odd. Therefore, the multiplicity in question is $(e+1)/2-f=(t+2)-(t+1)=1$, as required. ◻

For proofs in the remainder of this section, we will use material from [@raicu2016characters Section 2] to calculate Euler characteristics of $\mathcal{D}$-module pushforwards. See Section [2.6](#sec:relmatrices){reference-type="ref" reference="sec:relmatrices"} for notation used below.

**Lemma 60**. *Let $n=3$. The following is true about the characters of $D_1$ and $D_7$.*

1. *For $t>0$ the representation $\mathbb{S}_{(t-6,-t-6)}A\otimes\mathbb{S}_{(t-6,-t-6)}B \otimes\mathbb{S}_{(-4,-4,-4)}C$ appears in $D_1$ with multiplicity one, and no representation of the form $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B \otimes\mathbb{S}_{(\gamma_1,\gamma_2,\gamma_3)}C$ with $\gamma_2\geq-2$ appears in $D_1$.*

2. *For $t>0$ the representation $\mathbb{S}_{(t,-t)}A\otimes\mathbb{S}_{(t,-t)}B \otimes\mathbb{S}_{(0,0,0)}C$ appears in $D_7$ with multiplicity one, and no representation of the form $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B \otimes\mathbb{S}_{(\gamma_1,\gamma_2,\gamma_3)}C$ with $\gamma_2\leq -2$ appears in $D_7$.*

*Proof.* If we prove (1), then (2) follows from application of the Fourier transform. Let $Y=Y_{111}$ and $\pi=\pi_{111}$. By Lemma [Lemma 10](#pushdownlocallyclosed1){reference-type="ref" reference="pushdownlocallyclosed1"} we have $\pi_{+}\mathcal{O}_Y(\ast Y_1)=\mathbb{R}\Gamma_{O_1}(S)[7]$.
By Theorem [Theorem 25](#BettiThm){reference-type="ref" reference="BettiThm"}, and the long exact sequence $$\cdots \to H^j_{\overline{O}_1}(S)\to H^j_{O_1}(S)\to H^{j-1}_{\overline{O}_0}(S)\to \cdots,$$ we have the following in the Grothendieck group of representations of $\operatorname{GL}$ (see Section [2.6](#sec:relmatrices){reference-type="ref" reference="sec:relmatrices"}): $$\label{eqn:eulerCharinGG} [\chi(\pi_{+}\mathcal{O}_Y(\ast Y_1))]=[H^7_{\overline{O}_1}(S)]+[H^9_{\overline{O}_1}(S)]+[H^{12}_{\overline{O}_0}(S)]=[D_1]+4[D_0].$$ Let $\alpha,\beta\in \mathbb{Z}^2_{\operatorname{dom}}$ and $\gamma \in \mathbb{Z}^3_{\operatorname{dom}}$, and define $$m_{\alpha,\beta,\gamma}=\big[ \chi(\pi_+ \mathcal{O}_Y(\ast Y_1)):\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B \otimes\mathbb{S}_{\gamma}C\big].$$ By [@raicu2016characters Lemma 2.5, Proposition 2.10] we have that $$\big[\chi(\pi_+ \mathcal{O}_Y(\ast Y_1))\big]=\lim_{k\to \infty} \big[p_{1,k}(A)\otimes p_{1,k}(B)\otimes p_{1,k}(C)\otimes S^{\ast}\otimes \operatorname{det}(A^{\ast}\otimes B^{\ast}\otimes C^{\ast})\big].$$ Thus, by [@raicu2016characters Lemma 2.3] we have that $$m_{\alpha,\beta,\gamma}=\lim_{k\to \infty}\left(\sum_{\substack{I,J\in \binom{[2]}{1}\\ K\in \binom{[3]}{1}}}\big[S:\mathbb{S}_{((-\alpha_2-6,-\alpha_1-6),k,I)}A\otimes\mathbb{S}_{((-\beta_2-6,-\beta_1-6),k,J)}B \otimes\mathbb{S}_{((-\gamma_3-4,-\gamma_2-4,-\gamma_1-4),k,K)}C \big]\right).$$ Note $\mathbb{S}_{\lambda}A\otimes\mathbb{S}_{\mu}B\otimes\mathbb{S}_{\nu}C$ appears in $S$ only if $\lambda,\mu,\nu$ are partitions. Note that, for $k\gg 0$ we have $$[\mathbb{S}_{((-\gamma_3-4,-\gamma_2-4,-\gamma_1-4),k,K)}C]=\begin{cases} [\mathbb{S}_{(k-\gamma_3-4,-\gamma_2-4,-\gamma_1-4)}C] & \textnormal{if $K=\{1\}$},\\ -[\mathbb{S}_{(k-\gamma_2-5,-\gamma_3-3,-\gamma_1-4)}C] & \textnormal{if $K=\{2\}$},\\ [\mathbb{S}_{(k-\gamma_1-6,-\gamma_3-3,-\gamma_2-3)}C] & \textnormal{if $K=\{3\}$}. 
\end{cases}$$ If $\gamma_2\geq -2$, then $-\gamma_2\leq 2$ and $-\gamma_1\leq 2$, so $((-\gamma_3-4,-\gamma_2-4,-\gamma_1-4),k,K)$ is not a partition, and $m_{\alpha,\beta,\gamma}=0$. It remains to calculate $m_{(t-6,-t-6),(t-6,-t-6),(-4,-4,-4)}$ for $t>0$. By Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} we have that $\mathbb{S}_{(t-6,-t-6)}A\otimes\mathbb{S}_{(t-6,-t-6)}B \otimes\mathbb{S}_{(-4,-4,-4)}C$ belongs to $D_0$ if and only if $t=0$, in which case, it has multiplicity one. Thus, by ([\[eqn:eulerCharinGG\]](#eqn:eulerCharinGG){reference-type="ref" reference="eqn:eulerCharinGG"}), to complete the proof of (1), we need to show that $$m_{(t-6,-t-6),(t-6,-t-6),(-4,-4,-4)}=1\quad t>0.$$ For $k\gg 0$ we have $[\mathbb{S}_{((t,-t),k,I)}A]=[\mathbb{S}_{(k+t,-t)}A]$ if $I=\{1\}$ and $[\mathbb{S}_{((t,-t),k,I)}A]=-[\mathbb{S}_{(k-t-1,t+1)}A]$ if $I=\{2\}$, and a similar result holds for $\mathbb{S}_{((t,-t),k,J)}B$. Since $t>0$, only the case $I=J=\{2\}$ gives partitions. Note that for $k\gg 0$ we have $$[\mathbb{S}_{((0,0,0),k,K)}C]=\begin{cases} [\mathbb{S}_{(k,0,0)}C] & \textnormal{if $K=\{1\}$},\\ -[\mathbb{S}_{(k-1,1,0)}C] & \textnormal{if $K=\{2\}$},\\ [\mathbb{S}_{(k-2,1,1)}C] & \textnormal{if $K=\{3\}$}. \end{cases}$$ Let $t>0$. By the Cauchy formula applied to $\operatorname{Sym}^{k}((A\otimes B)\otimes C)$, we conclude that $$m_{(t-6,-t-6),(t-6,-t-6),(-4,-4,-4)}=[\mathbb{S}_{(k,0,0)}(A\otimes B):\mathbb{S}_{(k-t-1,t+1)}A\otimes\mathbb{S}_{(k-t-1,t+1)}B]$$ $$-[\mathbb{S}_{(k-1,1,0)}(A\otimes B):\mathbb{S}_{(k-t-1,t+1)}A\otimes\mathbb{S}_{(k-t-1,t+1)}B]+[\mathbb{S}_{(k-2,1,1)}(A\otimes B):\mathbb{S}_{(k-t-1,t+1)}A\otimes\mathbb{S}_{(k-t-1,t+1)}B],$$ where $k\gg t>0$. The first summand is one by the Cauchy formula, and the third summand is one by Lemma [Lemma 59](#lem:helperLemmaWit17){reference-type="ref" reference="lem:helperLemmaWit17"}.
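To make the first of these multiplicities explicit: since $\dim A=\dim B=2$, the Cauchy formula gives $$\mathbb{S}_{(k,0,0)}(A\otimes B)=\operatorname{Sym}^{k}(A\otimes B)=\bigoplus_{\substack{\lambda\vdash k\\ \ell(\lambda)\leq 2}}\mathbb{S}_{\lambda}A\otimes \mathbb{S}_{\lambda}B,$$ in which each summand appears exactly once; taking $\lambda=(k-t-1,t+1)$ yields the claimed multiplicity one.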
We use [@raicu2012secant Corollary 4.3b] to calculate $[\mathbb{S}_{(k-1,1,0)}(A\otimes B):\mathbb{S}_{(k-t-1,t+1)}A\otimes\mathbb{S}_{(k-t-1,t+1)}B]$. Using the notation there, we have $f=t+1$, $e=2t+3$, and $r=k$. Since $k\gg t$ we have $e<r-1$ and $e$ is odd. Thus, $[\mathbb{S}_{(k-1,1,0)}(A\otimes B):\mathbb{S}_{(k-t-1,t+1)}A\otimes\mathbb{S}_{(k-t-1,t+1)}B]=(e+1)/2-f=1$. We conclude that, for $t>0$, we have $m_{(t-6,-t-6),(t-6,-t-6),(-4,-4,-4)}=1-1+1=1$, as desired. ◻

We remark that one can use the techniques above to verify that $\mathbb{S}_{(-6,-6)}A\otimes\mathbb{S}_{(-6,-6)}B \otimes\mathbb{S}_{(-4,-4,-4)}C$ appears in $[\chi(\pi_{+}\mathcal{O}_Y(\ast Y_1))]$ with multiplicity four, which by ([\[eqn:eulerCharinGG\]](#eqn:eulerCharinGG){reference-type="ref" reference="eqn:eulerCharinGG"}) implies that this representation does not appear in $D_1$. This is confirmed using Lemma [Lemma 22](#charSfn3){reference-type="ref" reference="charSfn3"}.

**Lemma 61**. *Let $n=3$. The following is true about the character of $D_5$.*

1. *For $t>0$ the following representations appear in $D_5$ with multiplicity one: $$\mathbb{S}_{(t-2,-t-2)}A\otimes\mathbb{S}_{(t-2,-t-2)}B \otimes\mathbb{S}_{(-1,-1,-2)}C,\quad \mathbb{S}_{(t-4,-t-4)}A\otimes\mathbb{S}_{(t-4,-t-4)}B \otimes\mathbb{S}_{(-2,-3,-3)}C.$$*

2. *For $t>0$ the following representations appear in $D_5$ with multiplicity one: $$\mathbb{S}_{(t-2,-t-4)}A\otimes\mathbb{S}_{(t-3,-t-3)}B \otimes\mathbb{S}_{(-2,-2,-2)}C,\quad \mathbb{S}_{(t-3,-t-3)}A\otimes\mathbb{S}_{(t-2,-t-4)}B \otimes\mathbb{S}_{(-2,-2,-2)}C.$$*

3. *If $\gamma_1\leq -3$ or $\gamma_3\geq -1$, then no representation of the form $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B \otimes\mathbb{S}_{(\gamma_1,\gamma_2,\gamma_3)}C$ appears in $D_5$.*

4.
*The following representations do not appear in $D_5$: $$\mathbb{S}_{(-4,-4)}A\otimes\mathbb{S}_{(-4,-4)}B \otimes\mathbb{S}_{(-2,-3,-3)}C,\quad \mathbb{S}_{(-2,-2)}A\otimes\mathbb{S}_{(-2,-2)}B \otimes\mathbb{S}_{(-1,-1,-2)}C,$$ $$\mathbb{S}_{(-1,-5)}A\otimes\mathbb{S}_{(-3,-3)}B \otimes\mathbb{S}_{(-2,-2,-2)}C,\quad \mathbb{S}_{(-3,-3)}A\otimes\mathbb{S}_{(-1,-5)}B \otimes\mathbb{S}_{(-2,-2,-2)}C,$$ $$\mathbb{S}_{(-3,-3)}A\otimes\mathbb{S}_{(-3,-3)}B \otimes\mathbb{S}_{(-2,-2,-2)}C.$$* We remark that the representations in (1) are used for the local cohomology calculation of $H^i_{\overline{O}_7}(S)$ (in the proof of Lemma [Lemma 55](#lem:factsD5lc7){reference-type="ref" reference="lem:factsD5lc7"}), but are not witness weights. Indeed, one can show using the techniques from the proof of Lemma [Lemma 60](#lem:charD17n3){reference-type="ref" reference="lem:charD17n3"} that $\mathbb{S}_{(t-4,-t-4)}A\otimes\mathbb{S}_{(t-4,-t-4)}B \otimes\mathbb{S}_{(-2,-3,-3)}C$ belongs to $D_1$, and $\mathbb{S}_{(t-2,-t-2)}A\otimes\mathbb{S}_{(t-2,-t-2)}B \otimes\mathbb{S}_{(-1,-1,-2)}C$ belongs to $D_7$. *Proof.* Let $Y=Y_{222}$ and $\pi=\pi_{222}$. 
By Lemma [Lemma 10](#pushdownlocallyclosed1){reference-type="ref" reference="pushdownlocallyclosed1"} we have $\pi_{+}\mathcal{O}_Y(\ast Y_5)=\mathbb{R}\Gamma_{O_6}(S)[2]$, so by Proposition [Proposition 36](#locorbit06n3){reference-type="ref" reference="locorbit06n3"} we have the following in the Grothendieck group of representations of $\operatorname{GL}$ (see Section [2.6](#sec:relmatrices){reference-type="ref" reference="sec:relmatrices"}): $$\label{eqn:gglco6n3} [\chi(\pi_{+}\mathcal{O}_Y(\ast Y_5))]=[D_6]+[D_5]+[D_0].$$ Let $\alpha,\beta\in \mathbb{Z}^2_{\operatorname{dom}}$ and $\gamma \in \mathbb{Z}^3_{\operatorname{dom}}$, and define $$m_{\alpha,\beta,\gamma}=\big[ \chi(\pi_+ \mathcal{O}_Y(\ast Y_5)):\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B \otimes\mathbb{S}_{\gamma}C\big].$$ By [@raicu2016characters Lemma 2.5, Proposition 2.10] we have that $$\big[\chi(\pi_+ \mathcal{O}_Y(\ast Y_5))\big]=\lim_{k\to \infty} \big[\mathbb{S}_{(2k,2k)}A\otimes\mathbb{S}_{(2k,2k)}B\otimes p_{2,2k}(C)\otimes S^{\ast}\otimes \operatorname{det}(A^{\ast}\otimes B^{\ast}\otimes C^{\ast})\big].$$ Thus, by [@raicu2016characters Lemma 2.3] we have that $$m_{\alpha,\beta,\gamma}=\lim_{k\to \infty}\left(\sum_{J\in \binom{[3]}{2}}\big[S:\mathbb{S}_{(2k-\alpha_2-6,2k-\alpha_1-6)}A\otimes\mathbb{S}_{(2k-\beta_2-6,2k-\beta_1-6)}B \otimes\mathbb{S}_{((-\gamma_3-4,-\gamma_2-4,-\gamma_1-4),2k,J)}C \big]\right),$$ where for $k\gg0$ we have $$[\mathbb{S}_{((-\gamma_3-4,-\gamma_2-4,-\gamma_1-4),2k,J)}C]=\begin{cases} [\mathbb{S}_{(2k-\gamma_3-4,2k-\gamma_2-4,-\gamma_1-4)}C] & \textnormal{if }J=\{1,2\},\\ -[\mathbb{S}_{(2k-\gamma_3-4,2k-\gamma_1-5,-\gamma_2-3)}C] & \textnormal{if }J=\{1,3\},\\ [\mathbb{S}_{(2k-\gamma_2-5,2k-\gamma_1-5,-\gamma_3-2)}C] & \textnormal{if }J=\{2,3\}. \end{cases}$$ Note $\mathbb{S}_{\lambda}A\otimes\mathbb{S}_{\mu}B\otimes\mathbb{S}_{\nu}C$ appears in $S$ only if $\lambda,\mu,\nu$ are partitions. If $\gamma_3\geq -1$, then $-\gamma_i\leq 1$ for $i=1,2,3$. 
Thus, $((-\gamma_3-4,-\gamma_2-4,-\gamma_1-4),2k,J)$ is not a partition, so $m_{\alpha,\beta,\gamma}=0$. Since $\mathcal{F}(D_5)=D_5$, we conclude also that, if $\gamma_1\leq -3$, then the multiplicity of $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B \otimes\mathbb{S}_{\gamma}C$ in $D_5$ is zero. This proves (3).

\(1\) We first consider $\mathbb{S}_{(t-2,-t-2)}A\otimes\mathbb{S}_{(t-2,-t-2)}B \otimes\mathbb{S}_{(-1,-1,-2)}C$ for $t\geq 0$. If $J$ is equal to $\{1,2\}$ or $\{1,3\}$, then $((-2,-3,-3),2k,J)$ is not a partition, so $$m_{\alpha,\beta,\gamma}=\lim_{k\to \infty}\big[S:\mathbb{S}_{(2k+t-4,2k-t-4)}A\otimes\mathbb{S}_{(2k+t-4,2k-t-4)}B \otimes\mathbb{S}_{(2k-4,2k-4,0)}C \big].$$ We apply [@raicu2012secant Corollary 4.3b]. Using the notation there, we have $f=2k-4$, $e=6k-2t-12$, and $r=4k-8$. For $k\gg 0$ we have $e\geq r-1$ and $e$ is even, so $m_{\alpha,\beta,\gamma}=\lfloor r/2\rfloor -f+1=1$. By Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} we have that $\mathbb{S}_{(t-2,-t-2)}A\otimes\mathbb{S}_{(t-2,-t-2)}B \otimes\mathbb{S}_{(-1,-1,-2)}C$ belongs to $D_6$ if and only if $t=0$, and it does not belong to $D_0$. By ([\[eqn:gglco6n3\]](#eqn:gglco6n3){reference-type="ref" reference="eqn:gglco6n3"}) we conclude that $\mathbb{S}_{(t-2,-t-2)}A\otimes\mathbb{S}_{(t-2,-t-2)}B \otimes\mathbb{S}_{(-1,-1,-2)}C$ has multiplicity one in $D_5$ if $t>0$, and multiplicity zero if $t=0$. Since $\mathcal{F}(D_5)=D_5$, and $\mathcal{F}$ swaps the representations in (1), we have completed the proof of (1), and proved that the representations in the first row of (4) do not appear in $D_5$.

\(2\) By symmetry, it suffices to prove the result for $\mathbb{S}_{(t-2,-t-4)}A\otimes\mathbb{S}_{(t-3,-t-3)}B \otimes\mathbb{S}_{(-2,-2,-2)}C$.
Again, if $J$ is equal to $\{1,2\}$ or $\{1,3\}$, then $((-2,-2,-2),2k,J)$ is not a partition, so $$m_{\alpha,\beta,\gamma}=\lim_{k\to \infty}\big[S:\mathbb{S}_{(2k+t-2,2k-t-4)}A\otimes\mathbb{S}_{(2k+t-3,2k-t-3)}B \otimes\mathbb{S}_{(2k-3,2k-3,0)}C \big].$$ We apply [@raicu2012secant Corollary 4.3b]. Using the notation there, we have $f=2k-3$, $e=6k-2t-10$, and $r=4k-6$. Thus, for $k\gg 0$ we have $e\geq r-1$ and $e$ is even, so $m_{\alpha,\beta,\gamma}=(2k-3)-(2k-3)+1=1$. Since $\mathbb{S}_{(t-2,-t-4)}A\otimes\mathbb{S}_{(t-3,-t-3)}B \otimes\mathbb{S}_{(-2,-2,-2)}C$ appears in neither $D_6$ nor $D_0$, the result follows from ([\[eqn:gglco6n3\]](#eqn:gglco6n3){reference-type="ref" reference="eqn:gglco6n3"}).

\(4\) It remains to prove the assertion about the second and third rows of (4). For the second row, by symmetry, it suffices to consider $\mathbb{S}_{(-1,-5)}A\otimes\mathbb{S}_{(-3,-3)}B \otimes\mathbb{S}_{(-2,-2,-2)}C$. Again, if $J$ is equal to $\{1,2\}$ or $\{1,3\}$, then $((-2,-2,-2),2k,J)$ is not a partition, so $$m_{\alpha,\beta,\gamma}=\lim_{k\to \infty}\big[S:\mathbb{S}_{(2k-1,2k-5)}A\otimes\mathbb{S}_{(2k-3,2k-3)}B \otimes\mathbb{S}_{(2k-3,2k-3,0)}C \big].$$ By [@raicu2012secant Corollary 4.3b], this is zero. A similar argument can be used to verify that $\mathbb{S}_{(-3,-3)}A\otimes\mathbb{S}_{(-3,-3)}B \otimes\mathbb{S}_{(-2,-2,-2)}C$ does not appear in $D_5$. ◻

*Proof of Theorem [Theorem 58](#thm:witnessn3){reference-type="ref" reference="thm:witnessn3"}.* By Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} we have that $(-6,-6)\times (-6,-6)\times (-4,-4,-4)$ appears with multiplicity one in $D_0$, and does not appear in $D_2$, $D_3$, $D_4$, $D_6$, $D_8$. By Lemma [Lemma 22](#charSfn3){reference-type="ref" reference="charSfn3"}, this weight appears with multiplicity one in $S_f$, so it does not appear in $D_1$, $D_6'$, $D_7$.
By Lemma [Lemma 61](#lem:charD5n3){reference-type="ref" reference="lem:charD5n3"} it does not appear in $D_5$. Therefore, it is a witness weight for $D_0$. Applying $\mathcal{F}$, we conclude that $(0,0)\times (0,0)\times (0,0,0)$ is a witness weight for $D_8$. Let $t>0$. By Lemma [Lemma 60](#lem:charD17n3){reference-type="ref" reference="lem:charD17n3"} we have that $(t-6,-t-6)\times (t-6,-t-6)\times (-4,-4,-4)$ appears with multiplicity one in $D_1$, and does not appear in $D_7$. By Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"}, these weights do not appear in $D_0$, $D_2$, $D_3$, $D_4$, $D_6$, $D_8$, and by Lemma [Lemma 61](#lem:charD5n3){reference-type="ref" reference="lem:charD5n3"} they do not appear in $D_5$. By Lemma [Lemma 22](#charSfn3){reference-type="ref" reference="charSfn3"} they have multiplicity one in $S_f$, so they do not appear in $D_6'$. Therefore, $(t-6,-t-6)\times (t-6,-t-6)\times (-4,-4,-4)$ ($t>0$) are witness weights for $D_1$. Applying $\mathcal{F}$, we conclude also that $(t,-t)\times (t,-t)\times (0,0,0)$ ($t>0$) are witness weights for $D_7$. By Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} we have that $(-4,-4)\times (-4,-4)\times (-2,-3,-3)$ appears with multiplicity one in $D_2$, and does not appear in $D_0$, $D_3$, $D_4$, $D_6$, $D_8$. By Lemma [Lemma 61](#lem:charD5n3){reference-type="ref" reference="lem:charD5n3"}, this weight does not appear in $D_5$. By Lemma [Lemma 22](#charSfn3){reference-type="ref" reference="charSfn3"} it does not appear in $S_f$, so it does not appear in $D_1$, $D_6'$, $D_7$. Therefore, $(-4,-4)\times (-4,-4)\times (-2,-3,-3)$ is a witness weight for $D_2$. Applying $\mathcal{F}$, we conclude also that $(-2,-2)\times (-2,-2)\times (-1,-1,-2)$ is a witness weight for $D_6$. 
By Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} we have that $(-1,-5)\times (-3,-3)\times (-2,-2,-2)$ appears with multiplicity one in $D_3$, and does not appear in $D_0$, $D_2$, $D_4$, $D_6$, $D_8$. By Lemma [Lemma 61](#lem:charD5n3){reference-type="ref" reference="lem:charD5n3"}, this weight does not appear in $D_5$. By Lemma [Lemma 22](#charSfn3){reference-type="ref" reference="charSfn3"}, this weight does not appear in $S_f$, so it does not appear in $D_1$, $D_6'$, $D_7$. Therefore, it is a witness weight for $D_3$. The case of $D_4$ is proved similarly. Let $t>0$. By Lemma [Lemma 61](#lem:charD5n3){reference-type="ref" reference="lem:charD5n3"} we have that $(t-2,-t-4)\times (t-3,-t-3)\times (-2,-2,-2)$ and $(t-3,-t-3)\times (t-2,-t-4)\times (-2,-2,-2)$ appear with multiplicity one in $D_5$. By Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"}, these weights do not appear in $D_0$, $D_2$, $D_3$, $D_4$, $D_6$, $D_8$. By Lemma [Lemma 22](#charSfn3){reference-type="ref" reference="charSfn3"}, these weights do not appear in $S_f$, so they do not appear in $D_1$, $D_6'$, $D_7$. Therefore, they are witness weights for $D_5$. By Lemma [Lemma 22](#charSfn3){reference-type="ref" reference="charSfn3"}, the representation $(-3,-3)\times (-3,-3)\times (-2,-2,-2)$ appears in $S_f$ with multiplicity one (spanned by $1/f$). By Lemma [Lemma 60](#lem:charD17n3){reference-type="ref" reference="lem:charD17n3"}, this weight does not appear in $D_1$ nor $D_7$, and by Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} this representation does not appear in $D_0$, $D_2$, $D_3$, $D_4$, $D_6$, $D_8$. By Lemma [Lemma 61](#lem:charD5n3){reference-type="ref" reference="lem:charD5n3"}, this representation does not appear in $D_5$. 
Therefore, we have that $(-3,-3)\times (-3,-3)\times (-2,-2,-2)$ appears in $D_6'$ with multiplicity one, and does not appear in any other simple module. ◻

## Characters for $n\geq 4$ {#sec:charactersn4}

**Theorem 62**. *Let $n\geq 4$. The following is true about the witness weights for simple modules in $\textnormal{mod}_{\operatorname{GL}}(\mathcal{D}_V)$: $$(-2n,-2n)\times (-2n,-2n)\times (-4^n)\in \mathcal{W}(D_0),\quad (0,0)\times (0,0)\times (0^n)\in \mathcal{W}(D_9),$$ $$(-n-1,-2n+1)\times (-n-1,-2n+1)\times (-3^n)\in \mathcal{W}(D_1),\quad (-1,1-n)\times (-1,1-n)\times (-1^n)\in \mathcal{W}(D_7),$$ $$(-n-2,-2n+2)\times (-n-2,-2n+2)\times (-3^n)\in \mathcal{W}(D_2),\quad (-2,2-n)\times (-2,2-n)\times (-1^n)\in \mathcal{W}(D_8),$$ $$(-1,1-2n)\times (-n,-n)\times (-2^n)\in \mathcal{W}(D_3),\quad (-n,-n)\times (-1,1-2n)\times (-2^n)\in \mathcal{W}(D_4),$$ $$(-2,2-2n)\times (-n,-n) \times (-2^n)\in \mathcal{W}(D_5),$$ $$(-3,3-2n)\times (-n,-n)\times (-2^n)\in \mathcal{W}(D_6'),$$ $$(-4,4-2n)\times (-n,-n)\times (-2^n)\in \mathcal{W}(D_6).$$*

We note that, in addition to the calculations in this subsection, we found information about the characters of $D_1$ and $D_7$ in Lemma [Lemma 54](#lem:wit17n4){reference-type="ref" reference="lem:wit17n4"}. We start by investigating the case $n=4$.

**Lemma 63**. *Let $n=4$ and $p\geq 4$, and let $\alpha,\beta \in \mathbb{Z}^2_{\operatorname{dom}}$ with $|\alpha|=|\beta|=-2p$. The following is true about $D_5$ and $D_6$.*

1. *Representations of the form $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{(-2,-2,2-p,2-p)}C$ appear in $D_6$ if and only if $\alpha_1+\beta_1\leq -4-p$ and $p-\alpha_1-\beta_1$ is even, in which case the multiplicity is one.*

2.
*Representations of the form $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{(-2,-2,2-p,2-p)}C$ appear in $D_5$ if and only if $\alpha_1+\beta_1\geq -3-p$ and $p-\alpha_1-\beta_1$ is even, in which case the multiplicity is one.* *Proof.* (1) By Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} we have that $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{(-2,-2,2-p,2-p)}C$ appears in $D_6$ if and only if $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B$ is a subrepresentation of $\mathbb{S}_{(-2,-2,2-p,2-p)}(A\otimes B)$. Dualizing and twisting by $\operatorname{det}(A^{\ast}\otimes B^{\ast})^{\otimes -2}$, this is equivalent to $\mathbb{S}_{(-\alpha_2-4,-\alpha_1-4)}A^{\ast}\otimes\mathbb{S}_{(-\beta_2-4,-\beta_1-4)}B^{\ast}$ being a subrepresentation of $\mathbb{S}_{(p-4,p-4,0,0)}(A^{\ast}\otimes B^{\ast})$. Using the notation of [@raicu2012secant Corollary 4.3b], since $-\alpha_1\leq p$ and $-\beta_1\leq p$, we have $r=2p-8$, $f=p-4$, and $e=p-\alpha_1-\beta_1-12$. Then $e\geq 2f$ if and only if $\alpha_1+\beta_1\leq -4-p$. Thus, if $\alpha_1+\beta_1\geq -3-p$ then the desired multiplicity is zero. If $\alpha_1+\beta_1\leq -4-p$, then since $r=2f$, we have that the desired multiplicity is $r/2-f+1$ if $e$ is even, and it is $r/2-f$ if $e$ is odd. Thus, the multiplicity is one if and only if $p-\alpha_1-\beta_1$ is even, and zero otherwise. \(2\) Let $\pi=\pi_{222}$, $Y=Y_{222}$, and $Y_5=\pi^{-1}(\overline{O}_5)$. 
By Lemma [Lemma 39](#lem:decompn4){reference-type="ref" reference="lem:decompn4"} we have the following in the Grothendieck group of representations of $\operatorname{GL}$ (see Section [2.6](#sec:relmatrices){reference-type="ref" reference="sec:relmatrices"}): $$\label{eq:ggpusheuler} \big[\chi(\pi_{+}\mathcal{O}_Y(\ast Y_5))\big]=[D_6]+[D_5]+[D_2]+2[D_0].$$ Let $\alpha,\beta\in \mathbb{Z}^2_{\operatorname{dom}}$ and $\gamma=(-2,-2,2-p,2-p)$ with $|\alpha|=|\beta|=-2p$, and define $$m_{\alpha,\beta,\gamma}=\big[ \chi(\pi_+ \mathcal{O}_Y(\ast Y_5)):\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B \otimes\mathbb{S}_{\gamma}C\big].$$ By [@raicu2016characters Lemma 2.5, Proposition 2.10] we have that $$\big[\chi(\pi_+ \mathcal{O}_Y(\ast Y_5))\big]=\lim_{k\to \infty} \big[\mathbb{S}_{(2k,2k)}A\otimes\mathbb{S}_{(2k,2k)}B\otimes p_{2,2k}(C)\otimes S^{\ast}\otimes \operatorname{det}(A^{\ast}\otimes B^{\ast}\otimes C^{\ast})\big].$$ Thus, by [@raicu2016characters Lemma 2.3] we have that $$m_{\alpha,\beta,\gamma}=\lim_{k\to \infty}\left(\sum_{J\in \binom{[4]}{2}}\big[S:\mathbb{S}_{(2k-\alpha_2-8,2k-\alpha_1-8)}A\otimes\mathbb{S}_{(2k-\beta_2-8,2k-\beta_1-8)}B \otimes\mathbb{S}_{((p-6,p-6,-2,-2),2k,J)}C \big]\right).$$ Note $\mathbb{S}_{\lambda}A\otimes\mathbb{S}_{\mu}B\otimes\mathbb{S}_{\nu}C$ appears in $S$ only if $\lambda,\mu,\nu$ are partitions. 
We have for $k\gg 0$ that $((p-6,p-6,-2,-2),2k,J)$ is a partition if and only if $J=\{3,4\}$, in which case $$((p-6,p-6,-2,-2),2k,\{3,4\})=(2k-4,2k-4, p-4, p-4).$$ By the Cauchy formula applied to $S=\operatorname{Sym}((A\otimes B)\otimes C)$ we have $$m_{\alpha,\beta,\gamma}=\lim_{k\to \infty}\big[\mathbb{S}_{(2k-4,2k-4, p-4, p-4)}(A\otimes B):\mathbb{S}_{(2k-\alpha_2-8,2k-\alpha_1-8)}A\otimes\mathbb{S}_{(2k-\beta_2-8,2k-\beta_1-8)}B].$$ Twisting by $\operatorname{det}(A\otimes B)^{\otimes 4-p}$ we want the multiplicity of $\mathbb{S}_{(2k-\alpha_2-2p,2k-\alpha_1-2p)}A\otimes\mathbb{S}_{(2k-\beta_2-2p,2k-\beta_1-2p)}B$ in $\mathbb{S}_{(2k-p,2k-p, 0, 0)}(A\otimes B)$ for $k\gg 0$. We apply [@raicu2012secant Corollary 4.3b], noting that $-\alpha_1\leq p$ and $-\beta_1\leq p$. Using the notation there, for $k\gg 0$ we have $r=4k-2p$, $f=2k-p$, and $e=6k-5p-\alpha_1-\beta_1$. Thus, for $k\gg 0$ we have $e\geq r-1$, $e\geq 2f$, and $r$ is even. Then $e$ is even if and only if $p-\alpha_1-\beta_1$ is even, in which case $m_{\alpha,\beta,\gamma}=r/2-f+1=1$. Otherwise, $m_{\alpha,\beta,\gamma}=r/2-f=0$. By part (1) and ([\[eq:ggpusheuler\]](#eq:ggpusheuler){reference-type="ref" reference="eq:ggpusheuler"}) we obtain (2), using that no representation in the statement of the lemma belongs to $D_0$ or $D_2$ (Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"}). ◻

**Lemma 64**. *Let $n=4$ and $p\geq 4$. The following is true about weights of $D_6'$.*

1. *The representation $\mathbb{S}_{(-3,3-2p)}A\otimes\mathbb{S}_{(-p,-p)}B\otimes\mathbb{S}_{(-2,-2,2-p,2-p)}C$ appears in $D_6'$ with multiplicity one.*

2.
*The following representations do not appear in $D_6'$: $$\mathbb{S}_{(-1,1-2p)}A\otimes\mathbb{S}_{(-p,-p)}B\otimes\mathbb{S}_{(-2,-2,2-p,2-p)}C,\quad \mathbb{S}_{(-p,-p)}A\otimes\mathbb{S}_{(-1,1-2p)}B\otimes\mathbb{S}_{(-2,-2,2-p,2-p)}C,$$ $$\mathbb{S}_{(-2,2-2p)}A\otimes\mathbb{S}_{(-p,-p)}B\otimes\mathbb{S}_{(-2,-2,2-p,2-p)}C,\quad \mathbb{S}_{(-4,4-2p)}A\otimes\mathbb{S}_{(-p,-p)}B\otimes\mathbb{S}_{(-2,-2,2-p,2-p)}C.$$* *Proof.* Let $\pi=\pi_{222}$, $Y=Y_{222}$, and $Y_5=\pi^{-1}(\overline{O}_5)$. Let $\mathcal{L}=\mathbb{S}_{(1,1)}A\otimes\mathbb{S}_{(1,1)}B\otimes\mathbb{S}_{(1,1)}\mathcal{Q}$ and let $f:Y\to \mathbb{G}(2;C)$ be the structure map for the vector bundle $Y$. Then $\mathcal{O}_Y(\ast Y_5)\otimes f^{\ast}(\mathcal{L})$ is the relative version of the $\mathcal{D}$-module $S_h\cdot \sqrt{h}$ studied in [@perlman2020equivariant], so that $\mathcal{O}_Y(\ast Y_5)\otimes f^{\ast}(\mathcal{L})$ has composition factors $D_1^Y$, $D_2^Y$, $D_3^Y$, $D_4^Y$, $D_6'^Y$, each with multiplicity one. Since $D_6'$ is supported on $\overline{O}_6$ and $\pi$ is an isomorphism on $O_6$, we have the following in the Grothendieck group of $\operatorname{GL}$-representations: $$\label{eq:eulerD6prime} \big[\chi(\pi_+(\mathcal{O}_Y(\ast Y_5)\otimes f^{\ast}(\mathcal{L})))\big]=c_0[D_0]+c_1[D_1]+c_2[D_2]+c_3[D_3]+c_4[D_4]+c_5[D_5]+[D_6'],$$ for some $c_0,c_1,c_2,c_3,c_4,c_5\in \mathbb{Z}$. 
Let $\alpha,\beta\in \mathbb{Z}^2_{\operatorname{dom}}$ and $\gamma \in \mathbb{Z}^4_{\operatorname{dom}}$ with $|\alpha|=|\beta|=|\gamma|$, and define $$m_{\alpha,\beta,\gamma}=\big[ \chi(\pi_+(\mathcal{O}_Y(\ast Y_5)\otimes f^{\ast}(\mathcal{L}))):\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B \otimes\mathbb{S}_{\gamma}C\big].$$ By [@raicu2016characters Lemma 2.5, Proposition 2.10] we have that $$\big[\chi(\pi_+(\mathcal{O}_Y(\ast Y_5)\otimes f^{\ast}(\mathcal{L})))\big]=\lim_{k\to \infty} \big[\mathbb{S}_{(2k+1,2k+1)}A\otimes\mathbb{S}_{(2k+1,2k+1)}B\otimes p_{2,2k+1}(C)\otimes S^{\ast}\otimes \operatorname{det}(A^{\ast}\otimes B^{\ast}\otimes C^{\ast})\big].$$ Let $\gamma=(-2,-2,2-p,2-p)$, so that $|\alpha|=|\beta|=-2p$. By [@raicu2016characters Lemma 2.3] we have that $$m_{\alpha,\beta,\gamma}=\lim_{k\to \infty}\left(\sum_{J\in \binom{[4]}{2}}\big[S:\mathbb{S}_{(2k-\alpha_2-7,2k-\alpha_1-7)}A\otimes\mathbb{S}_{(2k-\beta_2-7,2k-\beta_1-7)}B \otimes\mathbb{S}_{((p-6,p-6,-2,-2),2k+1,J)}C \big]\right).$$ Note $\mathbb{S}_{\lambda}A\otimes\mathbb{S}_{\mu}B\otimes\mathbb{S}_{\nu}C$ appears in $S$ only if $\lambda,\mu,\nu$ are partitions. For $k\gg 0$ we have that $((p-6,p-6,-2,-2),2k+1,J)$ is a partition if and only if $J=\{3,4\}$, in which case $$((p-6,p-6,-2,-2),2k+1,\{3,4\})=(2k-3,2k-3, p-4, p-4).$$ By the Cauchy formula applied to $S=\operatorname{Sym}((A\otimes B)\otimes C)$ we have $$m_{\alpha,\beta,\gamma}=\lim_{k\to \infty}\big[\mathbb{S}_{(2k-3,2k-3, p-4, p-4)}(A\otimes B):\mathbb{S}_{(2k-\alpha_2-7,2k-\alpha_1-7)}A\otimes\mathbb{S}_{(2k-\beta_2-7,2k-\beta_1-7)}B].$$ Twisting by $\operatorname{det}(A\otimes B)^{\otimes 4-p}$ we have $$m_{\alpha,\beta,\gamma}=\lim_{k\to \infty}\big[\mathbb{S}_{(2k-p+1,2k-p+1)}(A\otimes B):\mathbb{S}_{(2k-2p-\alpha_2+1,2k-2p-\alpha_1+1)}A\otimes\mathbb{S}_{(2k-2p-\beta_2+1,2k-2p-\beta_1+1)}B].$$ We apply [@raicu2012secant Corollary 4.3b], noting that $-\alpha_1\leq p$ and $-\beta_1\leq p$.
Using the notation there, for $k\gg 0$ we have $r=4k-2p+2$, $e=6k-5p-\alpha_1-\beta_1+3$, $f=2k-p+1$. Thus, $e\geq 2f$, $e\geq r-1$, and $r$ is even. Therefore, $e$ is even if and only if $p-\alpha_1-\beta_1$ is odd, in which case, $m_{\alpha, \beta, \gamma}=r/2-f+1=1$. Otherwise, when $p-\alpha_1-\beta_1$ is even, we have $m_{\alpha, \beta, \gamma}=r/2-f=0$. By Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} we have that none of the representations in the statement of the lemma belong to $D_0$ or $D_2$, and by Corollary [Corollary 53](#cor:weightBoundsn4){reference-type="ref" reference="cor:weightBoundsn4"} none of these representations appear in $D_1$. If $p=4$, $\alpha=(-1,-7)$, and $\beta=(-4,-4)$ (or vice versa), we have that $p-\alpha_1-\beta_1=9$, which is odd, so $m_{\alpha,\beta, \gamma}=1$. By Lemma [Lemma 57](#lem:D6primeisH3){reference-type="ref" reference="lem:D6primeisH3"} the representations $\mathbb{S}_{(-1,-7)}A\otimes\mathbb{S}_{(-4,-4)}B\otimes\mathbb{S}_{(-2,-2,-2,-2)}C$, $\mathbb{S}_{(-4,-4)}A\otimes\mathbb{S}_{(-1,-7)}B\otimes\mathbb{S}_{(-2,-2,-2,-2)}C$ do not appear in $D_6'$, and by Lemma [Lemma 63](#lem:charD5D6ngeq4){reference-type="ref" reference="lem:charD5D6ngeq4"} they do not appear in $D_5$. Therefore, $c_3=c_4=1$ in ([\[eq:eulerD6prime\]](#eq:eulerD6prime){reference-type="ref" reference="eq:eulerD6prime"}). Similarly, by Lemma [Lemma 57](#lem:D6primeisH3){reference-type="ref" reference="lem:D6primeisH3"} and Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} the representation $\mathbb{S}_{(-2,-6)}A\otimes\mathbb{S}_{(-4,-4)}B\otimes\mathbb{S}_{(-2,-2,-2,-2)}C$ ($p=4$) does not appear in $D_6'$, $D_3$, or $D_4$. Since $p-\alpha_1-\beta_1=10$ is even, we conclude that $c_5=0$.
Therefore, given $\alpha,\beta, \gamma$ as in the statement of the lemma, we have $$\big[ D_6':\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C\big]=m_{\alpha,\beta, \gamma}-\big[ D_3:\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C\big]-\big[ D_4:\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C\big].$$ The proof of (1) and (2) then follows from a case-by-case analysis of the parity of $p-\alpha_1-\beta_1$. The representations in the first row of (2) belong to $D_3$ (resp. $D_4$), and the representations in the second row of (2) belong to neither $D_3$ nor $D_4$. ◻ **Lemma 65**. *Let $n\geq 4$. The following is true about $D_1$ and $D_7$.* 1. *$\mathbb{R}\pi'_{\ast}D_1^{Y'}$ has only cohomology in degree $3n-12$, isomorphic as representations of $\operatorname{GL}$ to $D_1$.* 2. *$\mathbb{R}\pi'_{\ast}D_7^{Y'}$ has only cohomology in degree $n-4$, isomorphic as representations of $\operatorname{GL}$ to $D_7$.* 3. *The representation $\mathbb{S}_{(-n-1,-2n+1)}A\otimes\mathbb{S}_{(-n-1,-2n+1)}B \otimes\mathbb{S}_{(-3^n)}C$ appears with multiplicity one in $D_1$, and does not appear in $D_2$.* 4. *The representation $\mathbb{S}_{(-n-2,-2n+2)}A\otimes\mathbb{S}_{(-n-2,-2n+2)}B \otimes\mathbb{S}_{(-3^n)}C$ does not appear in $D_1$, and appears with multiplicity one in $D_2$.* 5. *The representation $\mathbb{S}_{(-1,1-n)}A\otimes\mathbb{S}_{(-1,1-n)}B \otimes\mathbb{S}_{(-1^n)}C$ appears with multiplicity one in $D_7$, and does not appear in $D_8$.* 6.
*The representation $\mathbb{S}_{(-2,2-n)}A\otimes\mathbb{S}_{(-2,2-n)}B \otimes\mathbb{S}_{(-1^n)}C$ does not appear in $D_7$, and appears with multiplicity one in $D_8$.* *Proof.* (1) We consider the spectral sequence $$\mathbb{R}^i\pi'_{\ast}\mathscr{H}^j_{Y_1'}(\mathcal{O}_{Y'})\implies H^{i+j}_{\overline{O}_1}(S).$$ By Theorem [Theorem 25](#BettiThm){reference-type="ref" reference="BettiThm"}(2) for $n=4$, the $E_2$ page of this spectral sequence has only four nonzero columns, corresponding to $j=10$, $11$, $13$, $15$. Note that column ten corresponds to the cohomology of $\mathbb{R}\pi'_{\ast}D_1^{Y'}$, whereas the other columns correspond to direct images of (direct sums of) $D_0^{Y'}$. We recall that $\mathbb{R}\pi'_{\ast}D_0^{Y'}$ has only cohomology in degree $4n-16$, the highest possible degree. Thus, by Theorem [Theorem 25](#BettiThm){reference-type="ref" reference="BettiThm"}(2) for $n>4$, the spectral sequence does not converge properly unless the cohomology of $\mathbb{R}\pi'_{\ast}D_1^{Y'}$ is as claimed. \(2\) Proven in Section [6.2](#sec:lcO7ngeq5){reference-type="ref" reference="sec:lcO7ngeq5"}. By application of the Fourier transform, it remains to prove (3) and (4). Let $\pi'=\pi_{224}$ and $Y'=Y_{224}$. By Lemma [Lemma 54](#lem:wit17n4){reference-type="ref" reference="lem:wit17n4"} we have that $\mathbb{S}_{(-n-1,-2n+1)}A\otimes\mathbb{S}_{(-n-1,-2n+1)}B \otimes\mathbb{S}_{(-3,1-n,1-n,1-n)}\mathcal{Q}$ belongs to $D_1^{Y'}$ and $\mathbb{S}_{(-n-2,-2n+2)}A\otimes\mathbb{S}_{(-n-2,-2n+2)}B \otimes\mathbb{S}_{(-3,1-n,1-n,1-n)}\mathcal{Q}$ does not belong to $D_1^{Y'}$. By (1) and Lemma [Lemma 8](#BottLemma){reference-type="ref" reference="BottLemma"} we have that $\mathbb{S}_{(-n-1,-2n+1)}A\otimes\mathbb{S}_{(-n-1,-2n+1)}B \otimes\mathbb{S}_{(-3^n)}C$ belongs to $D_1$ with multiplicity one and $\mathbb{S}_{(-n-2,-2n+2)}A\otimes\mathbb{S}_{(-n-2,-2n+2)}B \otimes\mathbb{S}_{(-3^n)}C$ does not appear in $D_1$. 
Next, we consider the character of $D_2$. By Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} a representation of the form $\mathbb{S}_{\lambda}(A\otimes B)\otimes\mathbb{S}_{(-3^n)}C$ appears in $D_2$ if and only if $\lambda=(-3,1-n,1-n,1-n)$. By Cauchy's formula applied to $\mathbb{S}_{(-3,1-n,1-n,1-n)}(A\otimes B)=\mathbb{S}_{(n-4,0,0,0)}(A\otimes B)\otimes\operatorname{det}(A\otimes B)^{\otimes 1-n}$ we have that $\mathbb{S}_{(-n-2,-2n+2)}A\otimes\mathbb{S}_{(-n-2,-2n+2)}B$ appears in $\mathbb{S}_{(-3,1-n,1-n,1-n)}(A\otimes B)$ with multiplicity one, and $\mathbb{S}_{(-n-1,-2n+1)}A\otimes\mathbb{S}_{(-n-1,-2n+1)}B$ does not appear. ◻ **Lemma 66**. *Let $n\geq 4$. The following is true about $D_5$ and $D_6$.* 1. *$\mathbb{R}\pi'_{\ast}D_5^{Y'}$ has only cohomology in degree $2n-8$, isomorphic as representations of $\operatorname{GL}$ to $D_5$.* 2. *$\mathbb{R}\pi'_{\ast}D_6^{Y'}$ has only cohomology in degree $2n-8$, isomorphic as representations of $\operatorname{GL}$ to $D_6$.* 3. *Let $\alpha,\beta \in \mathbb{Z}^2_{\operatorname{dom}}$ with $|\alpha|=|\beta|=|\gamma|=-2n$. Representations of the form $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{(-2^n)}C$ appear in $D_6$ if and only if $\alpha_1+\beta_1\leq -4-n$ and $n-\alpha_1-\beta_1$ is even, in which case the multiplicity is one.* 4. *Let $\alpha,\beta \in \mathbb{Z}^2_{\operatorname{dom}}$ with $|\alpha|=|\beta|=|\gamma|=-2n$. Representations of the form $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{(-2^n)}C$ appear in $D_5$ if and only if $\alpha_1+\beta_1\geq -3-n$ and $n-\alpha_1-\beta_1$ is even, in which case the multiplicity is one.* *Proof.* By the discussion in Section [5.3](#sec:o5ngeq5){reference-type="ref" reference="sec:o5ngeq5"} we have (1). Using that $D_6=L_{Z_2}$ we have that (2) can be deduced from [@raicu2016characters; @raicuWeymanLocal] (see [@socledegs Lemma 3.4] for the explicit statement).
By (1), (2), Lemma [Lemma 8](#BottLemma){reference-type="ref" reference="BottLemma"}, and Lemma [Lemma 63](#lem:charD5D6ngeq4){reference-type="ref" reference="lem:charD5D6ngeq4"} we obtain (3) and (4). ◻ **Lemma 67**. *Let $n\geq 4$. The following is true about $D_6'$.* 1. *$\mathbb{R}\pi'_{\ast}D_6'^{Y'}$ has only cohomology in degree $2n-8$, isomorphic as representations of $\operatorname{GL}$ to $D_6'$.* 2. *The representation $\mathbb{S}_{(-3,3-2n)}A\otimes\mathbb{S}_{(-n,-n)}B\otimes\mathbb{S}_{(-2^n)}C$ appears in $D_6'$ with multiplicity one.* 3. *The following representations do not appear in $D_6'$: $$\mathbb{S}_{(-1,1-2n)}A\otimes\mathbb{S}_{(-n,-n)}B\otimes\mathbb{S}_{(-2^n)}C,\quad \mathbb{S}_{(-n,-n)}A\otimes\mathbb{S}_{(-1,1-2n)}B\otimes\mathbb{S}_{(-2^n)}C,$$ $$\mathbb{S}_{(-2,2-2n)}A\otimes\mathbb{S}_{(-n,-n)}B\otimes\mathbb{S}_{(-2^n)}C,\quad \mathbb{S}_{(-4,4-2n)}A\otimes\mathbb{S}_{(-n,-n)}B\otimes\mathbb{S}_{(-2^n)}C.$$* *Proof.* For $n=4$ we have that (2) and (3) are the case $p=4$ of Lemma [Lemma 64](#lem:witD_6primen4){reference-type="ref" reference="lem:witD_6primen4"}, so we assume $n\geq 5$. Let $\pi'=\pi_{224}$ and $Y'=Y_{224}$. By Lemma [Lemma 57](#lem:D6primeisH3){reference-type="ref" reference="lem:D6primeisH3"} we have that if $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}\mathcal{Q}$ belongs to $D_6'^{Y'}$ then $\gamma_2\geq -2$ and $\gamma_3\leq -2$. Thus, by Bott's Theorem (Lemma [Lemma 8](#BottLemma){reference-type="ref" reference="BottLemma"}) we have that $\mathbb{R}\pi'_{\ast}D_6'^{Y'}$ has only cohomology in degree $2n-8$. By the discussion in Section [6.2](#sec:lcO7ngeq5){reference-type="ref" reference="sec:lcO7ngeq5"}, since the spectral sequence $E_r^{i,j}$ there is degenerate, it follows that $\mathbb{R}^{2n-8}\pi'_{\ast}D_6'^{Y'}$ is isomorphic as a representation of $\operatorname{GL}$ to an equivariant holonomic $\mathcal{D}_V$-module.
By Lemma [Lemma 8](#BottLemma){reference-type="ref" reference="BottLemma"} it follows that if $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ belongs to $\mathbb{R}^{2n-8}\pi'_{\ast}D_6'^{Y'}$ then $$\gamma_3=\cdots=\gamma_{n-2}=-2.$$ Thus, by Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} we have that none of $D_0$, $D_2$, $D_8$, $D_9$ are subrepresentations of $\mathbb{R}^{2n-8}\pi'_{\ast}D_6'^{Y'}$. By Lemma [Lemma 65](#lem:directimD1){reference-type="ref" reference="lem:directimD1"}(1)(2) we have that subrepresentations $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ of $D_1$ satisfy $\gamma_{n-2}\leq -3$, and those of $D_7$ satisfy $\gamma_3\geq -1$. Therefore, neither $D_1$ nor $D_7$ are subrepresentations of $\mathbb{R}^{2n-8}\pi'_{\ast}D_6'^{Y'}$. By Lemma [Lemma 64](#lem:witD_6primen4){reference-type="ref" reference="lem:witD_6primen4"} and Bott's Theorem, the representation of Lemma [Lemma 67](#lem:pushD6prime){reference-type="ref" reference="lem:pushD6prime"}(2) appears in $\mathbb{R}^{2n-8}\pi'_{\ast}D_6'^{Y'}$ with multiplicity one, and the representations of Lemma [Lemma 67](#lem:pushD6prime){reference-type="ref" reference="lem:pushD6prime"}(3) do not appear. By Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} the representations in the first row of Lemma [Lemma 67](#lem:pushD6prime){reference-type="ref" reference="lem:pushD6prime"}(3) belong to $D_3$ resp. $D_4$, and by Lemma [Lemma 66](#lem:pushD5){reference-type="ref" reference="lem:pushD5"} the representations in the second row of Lemma [Lemma 67](#lem:pushD6prime){reference-type="ref" reference="lem:pushD6prime"}(3) belong to $D_5$ resp. $D_6$. 
Thus, none of $D_3$, $D_4$, $D_5$, $D_6$ are subrepresentations of $\mathbb{R}^{2n-8}\pi'_{\ast}D_6'^{Y'}$, so by process of elimination and Lemma [Lemma 64](#lem:witD_6primen4){reference-type="ref" reference="lem:witD_6primen4"}, we have $\mathbb{R}^{2n-8}\pi'_{\ast}D_6'^{Y'}\cong D_6'$ and (2) and (3) hold. ◻ **Corollary 68**. *Let $n\geq 4$. The following is true about the characters of $D_1$, $D_5$, $D_6'$, $D_7$.* 1. *if $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ belongs to $D_1$ then $\gamma_1\geq -3$, $\gamma_{n-2}\leq -3$ and $$\gamma_2=\cdots =\gamma_{n-3}=-3.$$* 2. *if $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ belongs to $D_5$ or $D_6'$ then $\gamma_2\geq -2$, $\gamma_{n-1}\leq -2$ and $$\gamma_3=\cdots=\gamma_{n-2}=-2.$$* 3. *if $\mathbb{S}_{\alpha}A\otimes\mathbb{S}_{\beta}B\otimes\mathbb{S}_{\gamma}C$ belongs to $D_7$ then $\gamma_3\geq -1$, $\gamma_n\leq -1$ and $$\gamma_4=\cdots =\gamma_{n-1}=-1.$$* *Proof.* The case $n=4$ follows from Corollary [Corollary 40](#cor:weightboundsD5n4){reference-type="ref" reference="cor:weightboundsD5n4"}, Corollary [Corollary 53](#cor:weightBoundsn4){reference-type="ref" reference="cor:weightBoundsn4"}, and Lemma [Lemma 57](#lem:D6primeisH3){reference-type="ref" reference="lem:D6primeisH3"}. The case $n\geq 5$ follows from Lemma [Lemma 8](#BottLemma){reference-type="ref" reference="BottLemma"}, Lemma [Lemma 65](#lem:directimD1){reference-type="ref" reference="lem:directimD1"}, Lemma [Lemma 66](#lem:pushD5){reference-type="ref" reference="lem:pushD5"}, Lemma [Lemma 67](#lem:pushD6prime){reference-type="ref" reference="lem:pushD6prime"}. 
◻ *Proof of Theorem [Theorem 62](#thm:witnessngeq4){reference-type="ref" reference="thm:witnessngeq4"}.* By Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} the weight $(-2n,-2n)\times (-2n,-2n)\times (-4^n)$ appears with multiplicity one in $D_0$, and does not appear in $D_2$, $D_3$, $D_4$, $D_6$, $D_8$, $D_9$. By Corollary [Corollary 68](#cor:bottcorsimple){reference-type="ref" reference="cor:bottcorsimple"} this weight does not appear in $D_1$, $D_5$, $D_6'$, $D_7$. Therefore, it is a witness weight for $D_0$. Applying $\mathcal{F}$ we conclude that $(0,0)\times (0,0)\times (0^n)$ is a witness weight for $D_9$. By Lemma [Lemma 65](#lem:directimD1){reference-type="ref" reference="lem:directimD1"} the weight $(-n-1,-2n+1)\times (-n-1,-2n+1)\times (-3^n)$ appears with multiplicity one in $D_1$, and does not appear in $D_2$. By Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} this weight does not appear in $D_0$, $D_3$, $D_4$, $D_6$, $D_8$, $D_9$. By Corollary [Corollary 68](#cor:bottcorsimple){reference-type="ref" reference="cor:bottcorsimple"} this weight does not appear in $D_5$, $D_6'$, $D_7$. Therefore, it is a witness weight for $D_1$. Applying $\mathcal{F}$ we conclude that $(-1,1-n)\times (-1,1-n)\times (-1^n)$ is a witness weight for $D_7$. By Lemma [Lemma 65](#lem:directimD1){reference-type="ref" reference="lem:directimD1"}(4) the weight $(-n-2,-2n+2)\times (-n-2,-2n+2)\times (-3^n)$ appears with multiplicity one in $D_2$, and by Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} it does not appear in $D_0$, $D_3$, $D_4$, $D_6$, $D_8$, $D_9$. By Lemma [Lemma 65](#lem:directimD1){reference-type="ref" reference="lem:directimD1"} it does not appear in $D_1$, and by Corollary [Corollary 68](#cor:bottcorsimple){reference-type="ref" reference="cor:bottcorsimple"} it does not appear in $D_5$, $D_6'$, $D_7$. 
Therefore, it is a witness weight for $D_2$. Applying $\mathcal{F}$ we conclude that $(-2,2-n)\times (-2,2-n)\times (-1^n)$ is a witness weight for $D_8$. By Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} the weight $(-1,1-2n)\times (-n,-n)\times (-2^n)$ appears with multiplicity one in $D_3$, and does not appear in $D_0$, $D_2$, $D_4$, $D_8$, $D_9$. By Lemma [Lemma 66](#lem:pushD5){reference-type="ref" reference="lem:pushD5"} it does not appear in $D_5$, $D_6$, and by Corollary [Corollary 68](#cor:bottcorsimple){reference-type="ref" reference="cor:bottcorsimple"} it does not appear in $D_1$, $D_7$. By Lemma [Lemma 67](#lem:pushD6prime){reference-type="ref" reference="lem:pushD6prime"} it does not appear in $D_6'$. Therefore, it is a witness weight for $D_3$. By symmetry, we also obtain the witness weight for $D_4$. By Lemma [Lemma 66](#lem:pushD5){reference-type="ref" reference="lem:pushD5"} the weight $(-2,2-2n)\times (-n,-n) \times (-2^n)$ appears with multiplicity one in $D_5$, and does not appear in $D_6$. By Lemma [Lemma 67](#lem:pushD6prime){reference-type="ref" reference="lem:pushD6prime"} it does not appear in $D_6'$, and by Corollary [Corollary 68](#cor:bottcorsimple){reference-type="ref" reference="cor:bottcorsimple"} it does not appear in $D_1$, $D_7$. By Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} it does not appear in $D_0$, $D_2$, $D_3$, $D_4$, $D_8$, $D_9$. Therefore, it is a witness weight for $D_5$. By an identical argument, $(-4,4-2n)\times (-n,-n)\times (-2^n)$ is a witness weight for $D_6$. By Lemma [Lemma 67](#lem:pushD6prime){reference-type="ref" reference="lem:pushD6prime"} the weight $(-3,3-2n)\times (-n,-n)\times (-2^n)$ appears with multiplicity one in $D_6'$. By Theorem [Theorem 7](#thm:charMatrices){reference-type="ref" reference="thm:charMatrices"} it does not appear in $D_0$, $D_2$, $D_3$, $D_4$, $D_8$, $D_9$. 
By Lemma [Lemma 66](#lem:pushD5){reference-type="ref" reference="lem:pushD5"} it does not appear in $D_5$, $D_6$, and by Corollary [Corollary 68](#cor:bottcorsimple){reference-type="ref" reference="cor:bottcorsimple"} it does not appear in $D_1$, $D_7$. Therefore, it is a witness weight for $D_6'$. ◻ # Lyubeznik numbers and intersection cohomology {#sec:lyub} In this section we explain how to obtain the Lyubeznik numbers of orbit closures, and $H^i_{\{0\}}(M)$ for each simple equivariant $\mathcal{D}$-module $M$. By Proposition [Proposition 1](#prop:intcoho){reference-type="ref" reference="prop:intcoho"}, we thus also obtain all the intersection cohomology groups for each orbit closure. As the determinantal case is done in [@lorcla], we only need to deal with $\overline{O}_i$ for $i=1,5,7$, as well as calculate $H^j_{\{0\}}(D_6')$, for $j\geq 0$. For $i=1,5,7$, let $R_i=\mathbb{C}[\overline{O}_i]_{\mathfrak{m}}$, where $\mathfrak{m}$ denotes the homogeneous maximal ideal. **Proposition 69**. *For all $n\geq 3$ the nonzero local cohomology modules of $D_1$ with support in $\{0\}$ are given by $$H^{n-2}_{\{0\}}(D_1)=E,\quad H^{n}_{\{0\}}(D_1)=E^{\oplus 2},\quad H^{n+2}_{\{0\}}(D_1)=E.$$ The nonzero Lyubeznik numbers of $R_1$ are:* - *$n=3$: $\lambda_{0,3}(R_1)= 2, \, \lambda_{3,5}(R_1)=2, \, \lambda_{5,5}(R_1)=1$;* - *$n\geq 4$: $\lambda_{0,3}(R_1) =2 , \lambda_{0,5}(R_1) = 1, \lambda_{n-2,n+2}(R_1)=1, \lambda_{n,n+2}(R_1)=2, \lambda_{n+2,n+2}(R_1)=1.$* *Proof.* The claim about the Lyubeznik numbers follows readily from Theorem [Theorem 25](#BettiThm){reference-type="ref" reference="BettiThm"} by looking at the spectral sequence $$H^i_{\{0\}} H^j_{\overline{O}_1} (S) \implies H^{i+j}_{\{0\}}(S).$$ When $n\geq 4$, this also yields the first part, while for $n=3$ it follows from the long exact sequence obtained by applying the functor $H_{\{0\}}^0(-)$ to the short exact sequence in Theorem [Theorem 25](#BettiThm){reference-type="ref" reference="BettiThm"}(1).
◻ As $\mathcal{F}(D_1)\cong D_7$, we readily obtain the following by Proposition [Proposition 1](#prop:intcoho){reference-type="ref" reference="prop:intcoho"}. **Corollary 70**. *The nonzero local cohomology modules of $D_7$ with support in $\{0\}$ are given by $$H^{3n-2}_{\{0\}}(D_7)=E,\quad H^{3n}_{\{0\}}(D_7)=E^{\oplus 2},\quad H^{3n+2}_{\{0\}}(D_7)=E.$$* **Theorem 71**. *For all $n\geq 3$ the nonzero local cohomology modules of $D_5$ with support in $\{0\}$ are given by $$H^{2n-3}_{\{0\}}(D_5)=E,\quad H^{2n-1}_{\{0\}}(D_5)=E,\quad H^{2n+1}_{\{0\}}(D_5)=E,\quad H^{2n+3}_{\{0\}}(D_5)=E.$$ The nonzero Lyubeznik numbers of $R_5$ are:* - *$n=3$: $\lambda_{9,9}(R_5)=1$;* - *$n=4$: $\lambda_{3,10}(R_5) = \lambda_{5,10}(R_5) = \lambda_{7,10}(R_5)= \lambda_{5,11}(R_5)=\lambda_{7,11}(R_5)= \lambda_{9,11}(R_5)= \lambda_{11,11}(R_5)=1$;* - *$n\geq 5$: $\lambda_{0,10}(R_5) = \lambda_{n-3,n+6}(R_5) = \lambda_{n-1,n+6}(R_5)= \lambda_{n+1,n+6}(R_5)=\lambda_{n+3,n+6}(R_5)= \lambda_{2n-3,2n+3}(R_5)$ $= \lambda_{2n-1,2n+3}(R_5)=\lambda_{2n+1,2n+3}(R_5)=\lambda_{2n+3,2n+3}(R_5)=1$.* *Proof.* We analyze the spectral sequence $$H^i_{\{0\}} H^j_{\overline{O}_5} (S) \implies H^{i+j}_{\{0\}}(S).$$ First, let $n=3$. By Theorem [Theorem 33](#loc05){reference-type="ref" reference="loc05"}, the spectral sequence degenerates, and the claim about the Lyubeznik numbers follows. By [@lorcla Theorem 1.1], the only non-zero local cohomology modules of $D_2$ supported in $\{0\}$ are in degrees $2, 4, 6$ (in which case they are all equal to $E$). From the long exact sequence obtained by applying the functor $H_{\{0\}}^0(-)$ to the short exact sequence in Theorem [Theorem 33](#loc05){reference-type="ref" reference="loc05"}(1), we obtain the first part of the statement in this case. Next, let $n=4$.
By Theorem [Theorem 33](#loc05){reference-type="ref" reference="loc05"}, the module $H^6_{\overline{O}_5}(S)$ is isomorphic to what is called $Q_1$ in [@lorcla Theorem 1.6], therefore its only non-zero local cohomology modules supported in $\{0\}$ are in degrees $3, 5, 7$ (in which case they are all equal to $D_0$). The spectral sequence now gives the result. Lastly, let $n\geq 5$. Again, by [@lorcla Theorem 1.1], the only non-zero local cohomology modules of $D_2$ supported in $\{0\}$ are in degrees $n-3, n-1, n+1, n+3$ (in which case they are all equal to $E$). In the spectral sequence, the term $H^0_{\{0\}} H^{4n-10}_{\overline{O}_5}(S)$ can be canceled out by either the term $H^{n-3}_{\{0\}} H^{3n-6}_{\overline{O}_5}(S)$ or by the term $H^{2n-6}_{\{0\}} H^{2n-3}_{\overline{O}_5}(S)$, in principle. To see that the latter is not possible, it is enough to see that $H^{2n-6}_{\{0\}}(D_5) = 0$, which follows from Proposition [Proposition 1](#prop:intcoho){reference-type="ref" reference="prop:intcoho"} as $c= 2n-3$. The spectral sequence now gives the required result. ◻ **Theorem 72**. *For $n\geq 3$, the nonzero local cohomology modules of $D_6'$ with support in $\{0\}$ are given by $$H^{2n-2}_{\{0\}}(D_6')=E^{\oplus 2},\quad H^{2n}_{\{0\}}(D_6')=E^{\oplus 2},\quad H^{2n+2}_{\{0\}}(D_6')=E^{\oplus 2}.$$ The nonzero Lyubeznik numbers of $R_7$ are* *$n=3$: $\lambda_{11,11}(R_7)= 1$;* *$n\geq 4$: $\lambda_{0,11}(R_7) =1 , \lambda_{n-2,n+2}(R_7) = 1, \lambda_{n,n+2}(R_7)=2, \lambda_{n+2,n+2}(R_7)=1, \lambda_{2n-2,2n+5}(R_7)=2,$* *$\lambda_{2n,2n+5}(R_7)=2, \lambda_{2n+2,2n+5}(R_7)=2, \lambda_{3n-2,3n+2}(R_7)=1, \lambda_{3n,3n+2}(R_7)=2, \lambda_{3n+2,3n+2}(R_7)=1.$* *Proof.* We analyze the spectral sequence $$H^i_{\{0\}} H^j_{\overline{O}_7} (S) \implies H^{i+j}_{\{0\}}(S).$$ First, let $n=3$. Then the spectral sequence degenerates and the claim on the Lyubeznik numbers is clear.
The claim on the local cohomology modules of $D_6'$ with support in $\{0\}$ follows by tracing the long exact sequences associated to the short exact sequences in Theorem [Theorem 41](#loc07){reference-type="ref" reference="loc07"} (1). Namely, by Proposition [Proposition 69](#prop:loc0D1){reference-type="ref" reference="prop:loc0D1"} we get that the nonzero local cohomology modules of $S_f/\mathcal{D}_Vf^{-1}$ with support in $\{0\}$ are $H^3_{\{0\}}(S_f/\mathcal{D}_Vf^{-1})=E^{\oplus 2}$ and $H^5_{\{0\}}(S_f/\mathcal{D}_Vf^{-1})=E$. Then the nonzero local cohomology modules of $\mathcal{D}_Vf^{-1}/S$ with support in $\{0\}$ are $H^4_{\{0\}}(\mathcal{D}_Vf^{-1}/S)=E^{\oplus 2}$, $H^6_{\{0\}}(\mathcal{D}_Vf^{-1}/S)=H^{11}_{\{0\}}(\mathcal{D}_Vf^{-1}/S)=E$. By the analogue of Proposition [Proposition 1](#prop:intcoho){reference-type="ref" reference="prop:intcoho"} for non-trivial local systems, since $D_6'$ is self-dual and $\mathcal{F}(D_6')=D_6'$, we see that $H^{11}_{\{0\}}(D_6')=0$. The claim now follows from the long exact sequence induced by $0\to D_7 \to \mathcal{D}_Vf^{-1}/S \to D_6'\to 0$ and Corollary [Corollary 70](#cor:loc0D7){reference-type="ref" reference="cor:loc0D7"}. Now let $n\geq 4$. Again, by (the analogue of) Proposition [Proposition 1](#prop:intcoho){reference-type="ref" reference="prop:intcoho"} we see that $H_{\{0\}}^{2n-5}(D_6') = 0$ as $2n-5 < \operatorname{codim}(O_6, V)=2n-4$. Therefore, by Proposition [Proposition 69](#prop:loc0D1){reference-type="ref" reference="prop:loc0D1"} and Corollary [Corollary 70](#cor:loc0D7){reference-type="ref" reference="cor:loc0D7"}, the term $E=H^0_{\{0\}} H^{4n-11}_{\overline{O}_7}(S)$ in the spectral sequence must be canceled out by $E=H^{n-2}_{\{0\}} H^{3n-8}_{\overline{O}_7}(S)$. The rest of the cancellations are straightforward, which yields the local cohomology modules of $D_6'$, and therefore the Lyubeznik numbers of $R_7$.
◻ # Acknowledgments {#acknowledgments .unnumbered} We thank Vic Reiner for a helpful conversation regarding the character calculations in Section [7](#sec:characters){reference-type="ref" reference="sec:characters"}.
{ "id": "2309.07697", "title": "Equivariant D-modules on 2x2xn hypermatrices", "authors": "Andr\\'as C. L\\H{o}rincz, Michael Perlman", "categories": "math.AG math.AC math.RT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | This paper is concerned with the mean curvature flow, which describes the dynamics of a hypersurface whose normal velocity is determined by local mean curvature. We present a Cartesian grid-based method for solving mean curvature flows in two and three space dimensions. The present method embeds a closed hypersurface into a fixed Cartesian grid and decomposes it into multiple overlapping subsets. For each subset, extra tangential velocities are introduced such that marker points on the hypersurface only move along grid lines. By utilizing an alternating direction implicit (ADI)-type time integration method, the subsets are evolved alternately by solving scalar parabolic partial differential equations on planar domains. The method removes the stiffness using a semi-implicit scheme and has no high-order stability constraint on time step size. Numerical examples in two and three space dimensions are presented to validate the proposed method. author: - Han Zhou - Shuwang Li - Wenjun Ying bibliography: - references.bib date: "Received: date / Accepted: date" title: An Alternating Direction Implicit Method for Mean Curvature Flows --- # Introduction {#sec:intro} Geometric evolution of interfaces has attracted much attention in recent decades due to its wide applications in mathematics [@Colding2015], materials science [@Mullins1956], biology [@Nelson2005] and, more recently, image processing [@Malladi1995; @6253258; @Mallat2007]. In a geometric evolution problem, the dynamics of a hypersurface are described by its geometry. Typically, the normal velocity of the hypersurface is prescribed by a geometric law.
In this paper, we are concerned with a representative case of geometric evolution problems, the mean curvature flow, in which the hypersurface evolves such that its normal velocity equals its negative mean curvature. Mean curvature flow was originally proposed by Mullins to model an ideal grain boundary motion [@Mullins1956]. Thereafter, it was also used to model various other physical phenomena [@Nelson2005; @Alias2020]. Numerical methods for mean curvature flows can be classified into three categories according to their representations of hypersurfaces: parametric approaches, the level set method [@osher2004level; @Osher1988; @osher2003geometric] and the phase field method [@Bellettini1996; @Chen1992; @paolini1995efficient]. Representative parametric approaches include the parametric finite element method [@Bao2021; @Barrett2020; @BARRETT20084281; @BARRETT2007441], the graph approach [@deckelnick_dziuk_1999; @Deckelnick2005] and the front tracking method [@Lai2009; @UNVERDI199225]. For a comprehensive survey of numerical methods for mean curvature flows, the interested reader is referred to the review article by Deckelnick et al. [@Deckelnick2005]. Although the level set method and the phase field method have advantages in handling topological changes and ease of implementation, for general moving interface problems parametric approaches provide surprisingly good results, such as accurate computation of curvature and conservation of mass, even with a coarse grid, and are computationally very cheap as well [@Barrett2014; @Tryggvason2001]. Despite these benefits, numerical computation of mean curvature flows with parametric approaches also encounters several difficulties, including the deterioration of mesh quality during the computation and the numerical stiffness induced by the mean curvature term. Because the motion of the surface is purely normal, adjacent mesh nodes may drift closer and closer together, making the computation highly unstable.
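The node-clustering phenomenon can be observed directly. The following Python sketch (our own toy experiment, not the method of this paper; all parameter choices are assumptions) moves markers on a 2:1 ellipse purely in the normal direction with a naive explicit scheme and measures the ratio of largest to smallest marker spacing:

```python
import numpy as np

# Illustrative sketch (not the paper's method): markers on a 2:1 ellipse
# moved purely in the normal direction, x_t = -kappa * n, by explicit Euler.
# Spacing shrinks fastest near the high-curvature ends, so the max/min
# marker-spacing ratio grows, illustrating the mesh-quality problem.
N, dt, steps = 200, 1.0e-4, 200
dth = 2.0 * np.pi / N
theta = np.arange(N) * dth
x, y = 2.0 * np.cos(theta), np.sin(theta)

def spacing(x, y):
    # distances between consecutive (periodic) markers
    return np.hypot(np.roll(x, -1) - x, np.roll(y, -1) - y)

ratio0, length0 = spacing(x, y).max() / spacing(x, y).min(), spacing(x, y).sum()
for _ in range(steps):
    # central differences in the uniform parameter theta
    xp = (np.roll(x, -1) - np.roll(x, 1)) / (2.0 * dth)
    yp = (np.roll(y, -1) - np.roll(y, 1)) / (2.0 * dth)
    xpp = (np.roll(x, -1) - 2.0 * x + np.roll(x, 1)) / dth**2
    ypp = (np.roll(y, -1) - 2.0 * y + np.roll(y, 1)) / dth**2
    s = np.hypot(xp, yp)
    kappa = (xp * ypp - yp * xpp) / s**3   # positive for this convex curve
    nx, ny = yp / s, -xp / s               # outward unit normal
    x, y = x - dt * kappa * nx, y - dt * kappa * ny

ratio1, length1 = spacing(x, y).max() / spacing(x, y).min(), spacing(x, y).sum()
assert length1 < length0   # the curve shortens
assert ratio1 > ratio0     # markers cluster near high-curvature regions
```

Since the local length element shrinks at a rate proportional to $\kappa^2$, markers near the high-curvature ends of the ellipse draw together fastest, and the spacing ratio grows with time.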
The problem is even more severe for the mean curvature flow due to its "curve shortening" property noted by Mullins [@Mullins1956]. In addition, the evolution equation of mean curvature flows contains second-order spatial derivatives in the mean curvature term, which induces numerical stiffness: explicit time integration schemes require severely restricted time steps [@HOU1994312]. A naive discretization with implicit time integration to remove the stiffness leads to a nonlinear system, for which finding a numerical solution is time-consuming, even with advanced iterative solvers. For curves in two space dimensions, these difficulties can be handled very well by the small-scale decomposition method, initially proposed by Hou et al. [@HOU1994312]. The idea was also extended to some special surfaces in three space dimensions [@HOU1998628; @Ambrose2013]. However, for general closed surfaces in three space dimensions, it is still unclear how to apply small-scale decomposition to mean curvature flows. This work proposes a new numerical method for mean curvature flows. The method decomposes a moving hypersurface into multiple overlapping subsets such that each subset can be viewed as a Monge patch, for which the graph approach [@deckelnick_dziuk_1999; @Deckelnick2005] is applicable. The method is based on an overlapping decomposition method for hypersurfaces, initially proposed by Wilson for computing integrals on implicitly defined curves and surfaces [@Jason2010]. A few years later, the method was extended by the second author to incorporate a kernel-free boundary integral method for solving elliptic partial differential equations on irregular domains [@Ying2013]. The decomposition strategy has many advantages in simplicity and efficiency. By representing each subset with its intersection points with grid lines, it is natural to keep marker points quasi-equidistant and maintain mesh quality.
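The quasi-equidistance property admits a quick check. In the sketch below (our own illustration; the grid spacing `h` and angle `alpha` are assumed parameters), the top piece of the unit circle where $|\mathbf{n}\cdot\mathbf{e}_y|>\cos\alpha$ is sampled at its intersections with vertical grid lines, and consecutive markers are verified to have spacing between $h$ and $h/\cos\alpha$:

```python
import numpy as np

# Quick check (illustration only): sample the top piece of the unit circle,
# where |n . e_y| > cos(alpha), at its intersections with the vertical grid
# lines x = i*h.  The slope of the height function is bounded by tan(alpha)
# there, so consecutive markers are quasi-equidistant:
#     h <= ds <= h / cos(alpha).
alpha = np.pi / 3            # must lie in (pi/4, pi/2) when d = 2
h = 0.01                     # Cartesian grid spacing
xmax = np.sin(alpha)         # |n . e_y| > cos(alpha)  <=>  |x| < sin(alpha)
xs = np.arange(-np.floor(xmax / h) * h, xmax, h)   # grid-line abscissas
ys = np.sqrt(1.0 - xs**2)    # height function of the upper subset

ds = np.hypot(np.diff(xs), np.diff(ys))            # marker spacings
assert ds.min() >= h - 1e-12
assert ds.max() <= h / np.cos(alpha) + 1e-12
```

The bound holds because, on the subset, the height function's slope stays below $\tan\alpha$, so each chord between neighboring grid lines has length at most $h\sec\alpha$.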
By reformulating the evolution equation, which is a nonlinear system, into a sequence of scalar PDEs on overlapping subsets, one can evolve the subsets alternately in the spirit of the alternating direction implicit (ADI) method [@Peaceman1955; @Douglas1962; @Douglas1964]. The resulting algorithm is efficient and has no high-order stability constraint. The remainder of the paper is organized as follows. In Section [2](#sec:math){reference-type="ref" reference="sec:math"}, we describe the governing equation of mean curvature flow and its hybrid formulation based on an overlapping surface decomposition method. The numerical methods for solving mean curvature flows are described in Section [3](#sec:method){reference-type="ref" reference="sec:method"}. In Section [4](#sec:algorithm){reference-type="ref" reference="sec:algorithm"}, the numerical algorithm of the proposed method is briefly summarized. Multiple numerical examples are presented to validate the present method in Section [5](#sec:result){reference-type="ref" reference="sec:result"}. In the final Section [6](#sec:discu){reference-type="ref" reference="sec:discu"}, we briefly discuss the present method and some further work. # Mathematical formulation {#sec:math} Let $\Gamma(t)\subset\mathbb{R}^d, d =2, 3$ be a closed moving hypersurface. Consider the mean curvature flow problem in which, for any point $\boldsymbol{x}$ on $\Gamma$, the evolution is given by $$\label{eqn:mcf} \boldsymbol{x}_t = -V \boldsymbol{n}, \quad V = \kappa, \quad \boldsymbol x \in \Gamma,$$ where $\kappa$ is the (mean) curvature and $\boldsymbol{n}$ the unit outward normal. Here, a circle/sphere has positive curvature.
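As a quick sanity check of the flow law (our own illustration, not taken from the paper): for a circle of radius $R(t)$ the curvature is $1/R$, so the normal velocity is $-1/R$ and the radius obeys $R'(t)=-1/R(t)$, with exact solution $R(t)=\sqrt{R_0^2-2t}$. A forward Euler integration reproduces this:

```python
import math

# Sanity check (illustration only): for a circle of radius R(t) under mean
# curvature flow, the normal velocity is -kappa = -1/R, so R' = -1/R with
# exact solution R(t) = sqrt(R0**2 - 2*t); the enclosed area pi*R(t)**2
# therefore decreases at the constant rate 2*pi.
R0, t_end, n_steps = 1.0, 0.25, 100000
dt = t_end / n_steps
R = R0
for _ in range(n_steps):
    R -= dt / R                       # forward Euler for R' = -1/R
exact = math.sqrt(R0**2 - 2.0 * t_end)
assert abs(R - exact) < 1e-4          # Euler agrees with the exact radius
# the enclosed area has decreased by exactly 2*pi*t_end
assert abs((math.pi * R0**2 - math.pi * exact**2) - 2.0 * math.pi * t_end) < 1e-12
```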
By applying the transport theorem for evolving hypersurfaces, it can be shown that the mean curvature flow has length-decreasing and area-decreasing properties in 2D and 3D, respectively, $$\begin{aligned} \dfrac{d}{dt}|\Gamma(t)| &= \dfrac{d}{dt}\int_{\Gamma(t)} 1 \,ds = - \int_{\Gamma(t)} \kappa^2 \, ds < 0, &\text{ if } d = 2,\\ \dfrac{d}{dt}|\Gamma(t)| &= \dfrac{d}{dt}\int_{\Gamma(t)} 1 \,dA = - \int_{\Gamma(t)} \kappa^2 \, dA < 0, &\text{ if } d = 3,\end{aligned}$$ where $|\Gamma(t)|$ is the length of $\Gamma(t)$ in 2D and the area of $\Gamma(t)$ in 3D. In particular, for the case $d=2$, it holds that $$\dfrac{d}{dt} A(t) = \int_{\Gamma(t)} \dfrac{d\boldsymbol{x}}{dt} \cdot \boldsymbol{n}\, ds = -\int_{\Gamma(t)} \kappa\,ds = -2\pi,$$ where $A(t)$ is the area enclosed by $\Gamma(t)$. This shows that the enclosed area of a 2D mean curvature flow always decreases at a constant rate [@Dallaston2016]. The result no longer holds for the enclosed volume of 3D surfaces since the surface integral of mean curvature varies from case to case. If $\Gamma$ is closed and there is no boundary condition, the solution of mean curvature flow is determined by its initial configuration. For certain initial configurations, solutions of mean curvature flows may develop singularities, such as pinch-off and topological changes, in finite time before shrinking to a point. In this paper, we seek well-defined solutions of mean curvature flows, i.e., solutions before singularities occur. To tackle the mean curvature flow problem, the evolution equation [\[eqn:mcf\]](#eqn:mcf){reference-type="eqref" reference="eqn:mcf"} is divided into multiple subproblems with an overlapping surface decomposition of $\Gamma$. For $r = 1,\cdots, d$, let $\mathbf{e}_r$ be the $r^{th}$ unit vector in $\mathbb{R}^d$ and $\alpha\in(\cos^{-1}(1/\sqrt{d}), \pi/2)$ be a fixed angle.
The set $$\label{eqn:sur-decom} \Gamma_r = \{\boldsymbol{x}\in \Gamma :| \mathbf{n}\cdot\mathbf{e}_r|(\boldsymbol{x})>\cos\alpha\}$$ is an open subset of the surface $\Gamma$ for each $r = 1, 2, \cdots, d$. The union of the sets $\{\Gamma_r\}_{r = 1}^{d}$ forms an overlapping surface decomposition of $\Gamma$. Then the surface $\Gamma$ is represented by the overlapping subsets $\Gamma_r$ with a partition of unity.

## Divided problems

Note that the evolution of a hypersurface is determined only by its normal velocity $V$. One is allowed to add an arbitrary tangential velocity to the evolution of $\Gamma$ without altering its shape; the tangential velocity only changes the parametrization of the surface. For arbitrary tangential velocities $T$, $T_1$ and $T_2$, the evolution governed by [\[eqn:mcf\]](#eqn:mcf){reference-type="eqref" reference="eqn:mcf"} is equivalent to $$\label{eqn:mcf-2d-mod} \boldsymbol{x}_t = V\pmb{n} + T \pmb{\tau},\quad \boldsymbol{x}\in\Gamma,$$ in two space dimensions and $$\label{eqn:mcf-3d-mod} \boldsymbol{x}_t = V\pmb{n} + T_1 \pmb{\tau}_1 + T_2 \pmb{\tau}_2,\quad \boldsymbol{x}\in\Gamma,$$ in three space dimensions. Here, $\boldsymbol{\tau}$, $\boldsymbol{\tau}_1$ and $\boldsymbol{\tau}_2$ denote tangent vectors. Consider the evolution of the overlapping subsets $\Gamma_r$ in three space dimensions. Let $\boldsymbol x^{(r)}$ denote a point on the subset $\Gamma_r$. The evolutions of $\Gamma_r$ are given by $$\boldsymbol{x}^{(r)}_t = V^{(r)}\boldsymbol{n}, \quad \boldsymbol{x}^{(r)}\in\Gamma_r , \quad r = 1, \cdots, d,$$ where $V^{(r)}$ are the restrictions of $V$ from $\Gamma$ to $\Gamma_r$.
By adding tangential velocities $T^{(r)}_1$ and $T^{(r)}_2$, one obtains the equivalent evolution equations $$\label{eqn:decom-form} \boldsymbol{x}^{(r)}_t = V^{(r)} \boldsymbol{n} + T^{(r)}_1 \boldsymbol{\tau}_1 + T^{(r)}_2 \boldsymbol{\tau}_2, \quad \boldsymbol{x}^{(r)}\in\Gamma_r , \quad r = 1, \cdots, d.$$ Hence, the evolution equation [\[eqn:mcf\]](#eqn:mcf){reference-type="eqref" reference="eqn:mcf"} of $\Gamma$ is divided into a sequence of evolution equations of the subsets $\Gamma_r$. With the divided formulation [\[eqn:decom-form\]](#eqn:decom-form){reference-type="eqref" reference="eqn:decom-form"}, the tangential velocities $T^{(r)}_1$ and $T^{(r)}_2$ can be chosen independently for each subset $\Gamma_r$. Due to the overlapping surface decomposition [\[eqn:sur-decom\]](#eqn:sur-decom){reference-type="eqref" reference="eqn:sur-decom"}, each subset $\Gamma_r$ can be easily parameterized with Cartesian coordinates in a planar domain $\Omega_r\subset \mathbb{R}^{d-1}$. For example, in three space dimensions, denote by $\Omega_3$ the projection of $\Gamma_3$ onto the $x$-$y$ plane. Then $\Gamma_3$ can be represented by the Monge patch $\boldsymbol{x}(x,y) = (x,y, z(x,y)),(x,y)\in\Omega_3$, in which $z(x, y)$ is a height function. With this understanding, the evolution of $\Gamma_r$ can be described by a time-dependent height function on the base plane $\Omega_r$ in $d-1$ space dimensions. The height function representation is an Eulerian description of the moving hypersurface. Its numerical approximation is much simpler than tracking a moving hypersurface through its Lagrangian motion. With the Eulerian description, it is natural to use a Cartesian grid to approximate the height function.
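As a concrete illustration of the height-function representation (a hypothetical sketch, not the paper's code; the threshold angle $\alpha$ and grid spacing are assumed values), the subset $\Gamma_3$ of the unit sphere is the single-valued graph $z = \sqrt{1-x^2-y^2}$ over a disc in the $x$-$y$ plane, which can be sampled at Cartesian grid nodes:

```python
import math

# Assumed threshold angle: must lie in (arccos(1/sqrt(3)), pi/2) in 3D.
alpha = 0.35 * math.pi
h = 0.05                       # assumed grid spacing
cos_a = math.cos(alpha)

# Height function of Gamma_3 on the unit sphere: z = sqrt(1 - x^2 - y^2).
# The outward normal at (x, y, z) is the point itself, so |n . e_3| = z.
heights = {}
n_grid = int(1.0 / h)
for i in range(-n_grid, n_grid + 1):
    for j in range(-n_grid, n_grid + 1):
        x, y = i * h, j * h
        rho2 = x * x + y * y
        if rho2 < 1.0:
            z = math.sqrt(1.0 - rho2)
            if z > cos_a:      # decomposition rule: the node belongs to Gamma_3
                heights[(i, j)] = z

# The stored nodal values define a single-valued height function on Omega_3.
print(len(heights))
```

Evolving this dictionary of nodal values in time is exactly the Eulerian description: markers move along the vertical grid lines only.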
With the understanding that the Eulerian description is equivalent to moving marker points of the hypersurface along fixed grid lines, the evolution equations of the equivalent Eulerian motion can be derived by carefully choosing the tangential velocities $T^{(r)}$, $T^{(r)}_1$ and $T^{(r)}_2$ such that points on $\Gamma_r$ have only one non-zero velocity component, in the direction $\boldsymbol{e}_r$.

### Two space dimensional case

Let $\Gamma\subset \mathbb{R}^2$ be a time-dependent Jordan curve given by $\boldsymbol{x}(t) = (x(\theta, t), y(\theta, t))$, where $\theta$ parameterizes the curve. Its curvature $\kappa$ and unit outward normal vector are, respectively, given by $$\label{eqn:cur-normal} \kappa = \frac{x_{\theta}y_{\theta\theta} - x_{\theta\theta}y_{\theta}}{(x_{\theta}^{2} + y_{\theta}^{2})^{\frac{3}{2}}},\quad \boldsymbol{n} = \dfrac{1}{(x_{\theta}^{2} + y_{\theta}^{2})^{\frac{1}{2}}} \begin{pmatrix} y_{\theta}\\ -x_{\theta} \end{pmatrix}.$$ Each subset $\Gamma_r, r = 1,2$, can be parameterized by $x$ or $y$ as a height function $y = y(x)$ or $x = x(y)$, depending on its orientation. Suppose $\Gamma_r$ is represented in the form $\eta = \eta(\xi)$, where $(\xi, \eta)$ coincides with $(x,y)$ or $(y, x)$.
After an extra tangential velocity $T$ is added to the original evolution equation [\[eqn:mcf\]](#eqn:mcf){reference-type="eqref" reference="eqn:mcf"}, the evolution of $\Gamma_r$ is equivalent to $$\begin{aligned} \label{eqn:par-form-2d} \frac{d}{dt} \begin{pmatrix} \xi\\ \eta \end{pmatrix} = &- \frac{\eta_{\xi\xi}}{(\eta_{\xi}^{2} + 1)^{2}} \begin{pmatrix} \eta_{\xi}\\ - 1 \end{pmatrix} + T \begin{pmatrix} 1\\ \eta_{\xi} \end{pmatrix}.\end{aligned}$$ It is worth mentioning that equation [\[eqn:par-form-2d\]](#eqn:par-form-2d){reference-type="eqref" reference="eqn:par-form-2d"} does not rely on the orientation of the curve, due to the cancellation of signs in the curvature and normal vector when one reverses the parameterization, namely, from $\xi$ to $-\xi$. To determine a specific tangential velocity $T$ such that marker points on $\Gamma_r$ have a non-zero velocity component only in the $\boldsymbol{e}_r$ direction, one needs to set $\xi_t = 0$, i.e., $$\xi_t = -\frac{\eta_{\xi}\eta_{\xi\xi}}{(\eta_{\xi}^{2} + 1)^{2}} + T = 0,$$ which is easily solved for $T$. Substituting the resulting $T$ into [\[eqn:par-form-2d\]](#eqn:par-form-2d){reference-type="eqref" reference="eqn:par-form-2d"} yields the evolution law for $\Gamma_r$ in terms of the height function, $$\label{eqn:2d-pde} \eta_t = \frac{\eta_{\xi\xi}}{\eta_{\xi}^{2} + 1},$$ which is a scalar parabolic-type partial differential equation.

### Three space dimensional case

The derivation in three space dimensions is similar. Each subset $\Gamma_r, r = 1,2,3$, can be regarded as a Monge patch $\boldsymbol{x}(u,v) = (u,v,w(u, v)), (u, v)\in\Omega_r$, where $\Omega_r$ is the projection of $\Gamma_r$ on its base plane and $w(u,v)$ is the height function. Denote by $\boldsymbol{\tau}_1 = (1, 0, w_u)^T,\boldsymbol{\tau}_2 = (0, 1, w_v)^T$ two tangent vectors of $\Gamma_{r}$.
After adding two tangential velocities $T_1$ and $T_2$, the evolution equation $\eqref{eqn:mcf}$ in three space dimensions becomes $$\label{eqn:par-3d} \frac{d}{dt} \begin{pmatrix} u\\ v\\ w \end{pmatrix} = \frac{(1 + w_u^2)w_{vv} - 2w_{u}w_{v}w_{uv} + (1 + w_{v}^2)w_{uu}}{2(1 + w_{u}^2 + w_{v}^2)^2} \begin{pmatrix} -w_{u}\\ -w_v\\ 1 \end{pmatrix} + T_1 \begin{pmatrix} 1\\ 0\\ w_u \end{pmatrix} + T_2 \begin{pmatrix} 0\\ 1\\ w_v \end{pmatrix}.$$ By setting $u_t = v_t = 0$, one can solve for $T_1$ and $T_2$, $$\begin{aligned} T_1 = w_u \frac{(1 + w_u^2)w_{vv} - 2w_{u}w_{v}w_{uv} + (1 + w_{v}^2)w_{uu}}{2(1 + w_{u}^2 + w_{v}^2)^2}\label{eqn:tan-v1}, \\ T_2 = w_v \frac{(1 + w_u^2)w_{vv} - 2w_{u}w_{v}w_{uv} + (1 + w_{v}^2)w_{uu}}{2(1 + w_{u}^2 + w_{v}^2)^2}\label{eqn:tan-v2}.\end{aligned}$$ Substituting [\[eqn:tan-v1\]](#eqn:tan-v1){reference-type="eqref" reference="eqn:tan-v1"} and [\[eqn:tan-v2\]](#eqn:tan-v2){reference-type="eqref" reference="eqn:tan-v2"} into [\[eqn:par-3d\]](#eqn:par-3d){reference-type="eqref" reference="eqn:par-3d"} gives the evolution of $\Gamma_{r}$ in terms of its height function $w$, $$\label{eqn:3d-pde} w_t = \frac{(1 + w_u^2)w_{vv} - 2w_{u}w_{v}w_{uv} + (1 + w_{v}^2)w_{uu}}{2(1 + w_{u}^2 + w_{v}^2)},$$ which is also a scalar parabolic-type partial differential equation.

## Matching condition

So far, the original evolution equation $\eqref{eqn:mcf}$ has been reformulated as a sequence of scalar partial differential equations $\eqref{eqn:2d-pde}$ and $\eqref{eqn:3d-pde}$.
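The two eliminations of the tangential velocities can be checked numerically at arbitrary sample values of the derivatives (an illustrative sketch; the sample values are made up):

```python
# 2D: from xi_t = 0, T = eta_xi * eta_xixi / (eta_xi^2 + 1)^2.
# The eta-component of (par-form-2d) must then equal the RHS of (2d-pde).
eta_xi, eta_xixi = 0.7, -1.3                       # arbitrary derivative values
T = eta_xi * eta_xixi / (eta_xi**2 + 1)**2
eta_t = eta_xixi / (eta_xi**2 + 1)**2 + T * eta_xi
assert abs(eta_t - eta_xixi / (eta_xi**2 + 1)) < 1e-14

# 3D: T1 = w_u * C and T2 = w_v * C, with C the normal factor in (par-3d).
# The w-component must then equal the RHS of (3d-pde).
wu, wv, wuu, wuv, wvv = 0.4, -0.9, 1.7, 0.3, -0.6  # arbitrary derivative values
S = 1 + wu**2 + wv**2
Nm = (1 + wu**2) * wvv - 2 * wu * wv * wuv + (1 + wv**2) * wuu
C = Nm / (2 * S**2)
T1, T2 = wu * C, wv * C
w_t = C + T1 * wu + T2 * wv
assert abs(w_t - Nm / (2 * S)) < 1e-14
print("reductions to (2d-pde) and (3d-pde) verified")
```

In both cases the tangential contribution exactly supplies the missing factor $1 + |\nabla w|^2$, collapsing the squared denominator to a single power.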
To ensure the well-posedness of the divided problem, we follow the idea of domain decomposition [@mathew2008domain] and add an extra matching condition for the solution of the divided problem at the overlapping zone, so that the equations [\[eqn:2d-pde\]](#eqn:2d-pde){reference-type="eqref" reference="eqn:2d-pde"} and [\[eqn:3d-pde\]](#eqn:3d-pde){reference-type="eqref" reference="eqn:3d-pde"} have boundary conditions. A simple choice of the matching condition is to enforce continuity of the global solution at an overlapping zone with a partition of unity, $$\label{eqn:match-cond} \boldsymbol{x}^{(r)} = \sum_{j \neq r}\chi_j \boldsymbol{x}^{(j)}, \quad \boldsymbol{x}^{(r)}\in \partial\Gamma_r,$$ where $\partial\Gamma_r$ denotes the boundary of $\Gamma_r$. Here, $\chi_j$ is the partition of unity subordinate to the subset $\Gamma_j$, which satisfies $$\left \{ \begin{aligned} &\chi_r(\boldsymbol{x}) \geq 0, \quad &\boldsymbol{x} \in \Gamma_r,\\ &\chi_r(\boldsymbol{x}) = 0, \quad &\boldsymbol{x} \in \Gamma \backslash \Gamma_r, \\ &\sum_{r=1}^d\chi_r(\boldsymbol{x}) = 1, \quad &\boldsymbol{x} \in \Gamma. \end{aligned} \right.$$ The divided problem [\[eqn:decom-form\]](#eqn:decom-form){reference-type="eqref" reference="eqn:decom-form"} together with the matching condition [\[eqn:match-cond\]](#eqn:match-cond){reference-type="eqref" reference="eqn:match-cond"} forms a coupled system, called the hybrid formulation, which is equivalent to equation [\[eqn:mcf\]](#eqn:mcf){reference-type="eqref" reference="eqn:mcf"}; in three space dimensions it reads $$\label{eqn:hybrid} \left \{ \begin{aligned} &\boldsymbol{x}^{(r)}_t = V^{(r)} \boldsymbol{n} + T^{(r)}_1 \boldsymbol{\tau}_1 + T^{(r)}_2 \boldsymbol{\tau}_2, \quad &\boldsymbol{x}^{(r)}\in\Gamma_r, \\ &\boldsymbol{x}^{(r)} = \sum_{j \neq r}\chi_j \boldsymbol{x}^{(j)}, \quad &\boldsymbol{x}^{(r)}\in \partial\Gamma_r.
\end{aligned} \right.$$ Once the hybrid formulation [\[eqn:hybrid\]](#eqn:hybrid){reference-type="eqref" reference="eqn:hybrid"} is solved, the global solution $\boldsymbol{x}$ can be reconstructed with the partition of unity, $$\boldsymbol{x} = \sum_{r = 1}^{d}\chi_r\boldsymbol{x}^{(r)}.$$ Unlike Ambrose's method [@Ambrose2013], which is applicable only to a particular class of surfaces with doubly-periodic boundary conditions, our method can handle more general cases, including closed surfaces, with this hybrid formulation.

# Numerical Methods {#sec:method}

In this section, the numerical methods for mean curvature flow are described, including the discrete representation of a moving hypersurface and the numerical discretizations of the partial differential equations $\eqref{eqn:2d-pde}$ and $\eqref{eqn:3d-pde}$ as well as the matching condition $\eqref{eqn:match-cond}$.

## Hypersurface representation {#sec:osd-dis}

Let $\Gamma$ be a smooth closed hypersurface. It is separately represented by the height functions of its overlapping subsets $\Gamma_r, r = 1,\cdots,d$, which are approximated by nodal values at Cartesian grid nodes in the base domain $\Omega_r$. Equivalently, the nodal values are in fact the intersection points of $\Gamma_r$ with the grid lines aligned with $\boldsymbol{e}_r$. At the implementation level, this is done by selecting, from all intersection points $\boldsymbol{p}$ of $\Gamma$ and grid lines, those which satisfy the decomposition rule $$\label{eqn:decom-rule} |\boldsymbol{n}(\boldsymbol{p})\cdot \boldsymbol{e}_r|>\cos\alpha,$$ where $\boldsymbol{n}(\boldsymbol{p})$ is the unit outward normal at $\boldsymbol p$ and $\alpha$ is a given threshold angle. Those points which satisfy [\[eqn:decom-rule\]](#eqn:decom-rule){reference-type="eqref" reference="eqn:decom-rule"} are called control points for $\Gamma_r$, and the point set is denoted by $\Gamma^h_r$.
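As a minimal illustration of the control-point selection (a hypothetical sketch with assumed values $\alpha = 0.3\pi$ and $h = 0.1$, not the paper's code), consider the unit circle in 2D: the grid lines aligned with $\boldsymbol{e}_1$ are the horizontal lines $y = y_j$, and an intersection point is kept as a control point of $\Gamma_1$ when $|\boldsymbol{n}\cdot\boldsymbol{e}_1| > \cos\alpha$.

```python
import math

alpha = 0.3 * math.pi          # assumed threshold angle (> pi/4 in 2D)
h = 0.1                        # assumed grid spacing
cos_a = math.cos(alpha)

control_points = []            # control points of Gamma_1 on the unit circle
for j in range(-9, 10):
    y = j * h                  # grid line {y = y_j}, aligned with e_1
    x = math.sqrt(1.0 - y * y) # the circle meets the line at (+x, y) and (-x, y)
    for p in ((x, y), (-x, y)):
        # outward normal of the unit circle at p is p itself, so |n . e_1| = |p_x|
        if abs(p[0]) > cos_a:
            control_points.append(p)

print(len(control_points))     # points near the poles (x close to 0) are rejected
```

The kept points are exactly the quasi-equidistant nodal values of the two height functions $x = \pm\sqrt{1-y^2}$ representing $\Gamma_1$.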
We also denote the set of all control points on $\Gamma$ by $\Gamma^h = \cup_{r=1}^d \Gamma^h_r$. The point set $\Gamma^h$ is used to represent $\Gamma$ in terms of its overlapping subsets. We remark that, although points in $\Gamma^h$ are not quasi-equidistant, each local interpolation stencil only involves points in a single subset $\Gamma^h_r$, which are quasi-equidistant. Figures [\[fig:osd-nodes\]](#fig:osd-nodes){reference-type="ref" reference="fig:osd-nodes"} and [1](#fig:osd-nodes-3d){reference-type="ref" reference="fig:osd-nodes-3d"} show the distribution of control points in two and three space dimensions, respectively. ![Control points for the representation of an ellipsoid on $\Gamma_1$.](./pics/node3d-1.pdf){#fig:osd-nodes-3d width="50%"} The advantage of this hypersurface representation is evident: the projections of the control points in $\Gamma^h_r$ coincide with Cartesian grid nodes in $\Omega_r$. Instead of tracking $\Gamma$ by marker points whose velocities are $d$-dimensional, one only needs to solve the evolution of the height functions, i.e., marker points moving along grid lines, whose values change in only one dimension.

### Solving PDEs on hypersurfaces

Generally, for a closed smooth hypersurface $\Gamma$, the subset $\Gamma_r$ consists of several isolated components, which are denoted by $\Gamma_{r,l}, l=1,2,\cdots$. For example, suppose $\Gamma:\boldsymbol{x} = \boldsymbol{x}(\theta),\theta\in[0,2\pi)$ is a circle; then the curve segments $\Gamma_{1,1}:\boldsymbol{x} = \boldsymbol{x}(\theta),\theta\in(2\pi-\alpha,2\pi)\cup[0, \alpha)$ and $\Gamma_{1,2}:\boldsymbol{x} = \boldsymbol{x}(\theta),\theta\in(\pi-\alpha, \pi+\alpha)$ are both subsets of $\Gamma_1$. Let $\Omega_{r,l}$ denote the projection of $\Gamma_{r,l}$ on the base plane. If the $\Omega_{r,l}$ overlap with each other, then $\Gamma_r$ is a multi-valued function on $\Omega_r$, which induces ambiguity.
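To make the ambiguity concrete, a minimal sketch (a hypothetical example, not the paper's code) looks at the unit circle, where the two components of $\Gamma_1$ project onto the same base interval but are easily told apart by their positions and normals:

```python
import math

# Two intersection points of the unit circle with the grid line y = 0.2,
# one on each component of Gamma_1 (right arc and left arc):
y = 0.2
x = math.sqrt(1 - y * y)
p, q = (x, y), (-x, y)
n_p, n_q = p, q                # outward normal of the unit circle at a point is the point

# Both project onto the same base point y = 0.2, so x(y) would be two-valued;
# however, the points are far apart and their normals point in nearly
# opposite directions, so a distance/normal test can separate the components.
dist = math.dist(p, q)
dot = n_p[0] * n_q[0] + n_p[1] * n_q[1]
print(dist, dot)               # dist close to 2, dot negative
```

This is exactly the situation the separation criterion below is designed to detect.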
The remedy is to separate the components $\Gamma_{r,l}$ from each other and solve the PDEs on each component independently. Once the components are separated, the ambiguity is removed, since each $\Gamma_{r,l}$ is a single-valued function on $\Omega_{r,l}$ (see Figure [2](#fig:patch_prj){reference-type="ref" reference="fig:patch_prj"}). Only points on the same component are involved in a local stencil for solving the PDEs. In the implementation, one can check distances and normals to determine whether two points are on the same component $\Gamma_{r,l}$. We identify two points $\boldsymbol{p}$ and $\boldsymbol{q}$ as being on the same component if they satisfy $$\label{eqn:criteria} \Vert \boldsymbol{p} - \boldsymbol{q}\Vert < D_0\text{ and }\boldsymbol{n}(\boldsymbol{p})\cdot \boldsymbol{n}(\boldsymbol{q}) > \cos(\theta_0),$$ where $\Vert\cdot\Vert$ is the Euclidean distance, and $D_0$ and $\theta_0$ are threshold values given in advance. In this work, we set $D_0=5h$ and $\theta_0 = \pi/6$. The separation procedure is meant to find the correct finite difference stencil point from the possible intersection points that lie on the same grid line. The implementation based on the criteria [\[eqn:criteria\]](#eqn:criteria){reference-type="eqref" reference="eqn:criteria"} can naturally handle cases with multiple (more than two) components in each $\Gamma_r$, for example, oscillating curves or surfaces, as long as the grid is fine enough to resolve them. ![A surface component $\Gamma_{r,l}$ and its projection on the base plane $\Omega_{r,l}$.](pics/patch_prj.png){#fig:patch_prj width="50%"}

### Interpolation on hypersurface {#sec:interp}

In the discrete representation $\Gamma^h$, geometric quantities and functions on the hypersurface are evaluated by local interpolation.
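Such a local interpolation can be sketched as follows: a minimal, self-contained example (with hypothetical stencil data; the small dense solver is our own illustration, not the paper's code) determines a bivariate quadratic from six stencil values and evaluates it at a nearby point in the base plane.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            fac = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= fac * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def fit_quadratic(points, values):
    """Coefficients (c1..c6) of c1 + c2 u + c3 v + c4 u^2 + c5 v^2 + c6 uv
    interpolating six stencil values in the local base-plane coordinates."""
    A = [[1.0, u, v, u * u, v * v, u * v] for (u, v) in points]
    return solve(A, values)

# Six unisolvent stencil points, with samples of an exact quadratic.
f = lambda u, v: 1 + 2 * u - v + u * u + 0.5 * v * v - u * v
pts = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0), (0, 2)]
c = fit_quadratic(pts, [f(u, v) for (u, v) in pts])
u, v = 0.3, 0.7
P2 = c[0] + c[1] * u + c[2] * v + c[3] * u * u + c[4] * v * v + c[5] * u * v
assert abs(P2 - f(u, v)) < 1e-10   # a quadratic is reproduced exactly
```

In practice the stencil points are the quasi-equidistant control points of one component, found by searching nearby grid nodes.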
Given a point $\boldsymbol{p}\in\Gamma$, to interpolate the function value at $\boldsymbol{p}$ using function values on $\Gamma_r^h$, one first finds its projection point $\boldsymbol{p}^{\star}\in\Omega_r$, where the choice of $r$ depends on the direction of $\boldsymbol{n}(\boldsymbol{p})$. A quadratic polynomial is locally constructed for interpolation, $$P_2(u,v) = c_1 + c_2 u + c_3 v + c_4 u^2 + c_5 v^2 + c_6 uv,$$ where $(u, v)$ is the local coordinate near $\boldsymbol{p}^{\star}$ in the base plane. With the help of a Cartesian grid, finding interpolation stencils on $\Gamma^h$ is very simple. One can attach control points to their closest grid nodes and find stencil points by searching nearby grid nodes with their indices in the Cartesian grid.

### Evolving the hypersurface

After the PDEs [\[eqn:2d-pde\]](#eqn:2d-pde){reference-type="eqref" reference="eqn:2d-pde"} and [\[eqn:3d-pde\]](#eqn:3d-pde){reference-type="eqref" reference="eqn:3d-pde"} are solved for a time step, the control points are moved to different positions and form a new hypersurface. The old points can in general no longer represent the new hypersurface, since they may fail to satisfy the decomposition rule mentioned before. To represent the new hypersurface, new control points must be added, and some old ones must be deleted. This is achieved by computing all intersection points via local interpolation and selecting the new control points from them. Even in three space dimensions, the intersection points can be computed using one-dimensional polynomial interpolation, since the stencil points and the new intersection points lie in the same plane. Take the surface component $\Gamma_{3, l}$ as an example. The component is discretized by intersection points with coordinates $(x_i, y_j, \eta_{i,j})$, where $(x_i,y_j)\in\Omega^h_{3,l}$.
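One step of this update, computing a new intersection point on a grid line from three neighbouring heights, can be sketched with a local quadratic and bisection (an illustrative stand-alone example; the sample heights are taken from the hypothetical graph $z = x^2$):

```python
# Local quadratic z = P2(x) through three neighbouring intersection points,
# then bisection for the root of P2(x) = z_k.
xs = [0.0, 0.5, 1.0]
zs = [x * x for x in xs]           # hypothetical stencil heights eta_{i,j}

def p2(x):
    # Lagrange form of the quadratic through (xs[m], zs[m]), m = 0, 1, 2
    total = 0.0
    for m in range(3):
        L = 1.0
        for l in range(3):
            if l != m:
                L *= (x - xs[l]) / (xs[m] - xs[l])
        total += zs[m] * L
    return total

z_k = 0.25                          # target grid height
a, b = 0.3, 0.7                     # bracket with a sign change of P2 - z_k
for _ in range(60):
    m = 0.5 * (a + b)
    if (p2(a) - z_k) * (p2(m) - z_k) <= 0:
        b = m
    else:
        a = m
x_star = 0.5 * (a + b)
assert abs(x_star - 0.5) < 1e-9     # the quadratic reproduces z = x^2 exactly
```

Newton's method would converge faster here; bisection is shown because it needs no derivative and cannot leave the bracketing interval.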
In order to find the new intersection points of the component with the grid line $\{(x,y,z) \mid y=y_j, z=z_k\}$, one first identifies the intersection interval $(x_{i_0}, x_{i_1})$ by checking the sides of the grid nodes, and then chooses three intersection points $(x_{i_0-1},y_j,\eta_{i_0-1,j})$, $(x_{i_0},y_j,\eta_{i_0,j})$, and $(x_{i_0+1},y_j,\eta_{i_0+1,j})$ to locally construct a 1D quadratic polynomial $z=P_2(x)$. By solving the equation $P_2(x^*) = z_k$ with either Newton's method or the bisection method, one obtains the new intersection point $(x^*, y_j, z_k)$. After an intersection point is found, following [\[eqn:decom-rule\]](#eqn:decom-rule){reference-type="eqref" reference="eqn:decom-rule"}, one determines whether it is kept or deleted by checking the normal vector, which is also evaluated by locally constructing a parabola. It is worth mentioning that if a new component $\Gamma_{r,l}$ is too small and does not have enough control points for interpolation, the whole component should be deleted.

## Discretization of PDEs

### Temporal discretization

Equations [\[eqn:2d-pde\]](#eqn:2d-pde){reference-type="eqref" reference="eqn:2d-pde"} and [\[eqn:3d-pde\]](#eqn:3d-pde){reference-type="eqref" reference="eqn:3d-pde"} are parabolic PDEs with second-order spatial derivatives. Generally, explicit time integration methods, such as the forward Euler method, for PDEs involving high-order spatial derivatives suffer from high-order stability constraints, and one has to use small time steps to solve the equations. Such problems are known as stiff problems. The stiffness for mean curvature flow is rather severe. Take the two space dimensional case as an example: if one discretizes $\Gamma$ by uniformly partitioning the parameter $\theta$, an explicit time integration suffers from a second-order stability constraint $\Delta t\leq C(\min_{\theta} s_{\theta}h)^2$, where $C$ is a constant, $s$ is the arclength, and $h$ is the grid spacing in $\theta$.
Since 2D mean curvature flow is also known as the curve-shortening flow, $s_{\theta}h$ decreases with time, which makes the situation even worse. Although implicit methods are unconditionally stable for stiff problems, there is another difficulty in implicit time integration for [\[eqn:2d-pde\]](#eqn:2d-pde){reference-type="eqref" reference="eqn:2d-pde"} and [\[eqn:3d-pde\]](#eqn:3d-pde){reference-type="eqref" reference="eqn:3d-pde"}: it results in a nonlinear system in each time step, which is expensive to solve. Notice that equations [\[eqn:2d-pde\]](#eqn:2d-pde){reference-type="eqref" reference="eqn:2d-pde"} and [\[eqn:3d-pde\]](#eqn:3d-pde){reference-type="eqref" reference="eqn:3d-pde"} are quasi-linear equations. The source of stiffness is the highest-order terms, which appear linearly in equations [\[eqn:2d-pde\]](#eqn:2d-pde){reference-type="eqref" reference="eqn:2d-pde"} and [\[eqn:3d-pde\]](#eqn:3d-pde){reference-type="eqref" reference="eqn:3d-pde"}. The stiffness can therefore be removed by treating only the highest-order terms implicitly, with the lower-order terms treated explicitly. The resulting time integration scheme is semi-implicit in time. It only requires solving linear systems, which is much cheaper than solving nonlinear systems. Suppose the time interval $[0, T]$ is uniformly partitioned into $0 = t^0<t^1 <\cdots<t^n<\cdots<t^N = T$ with $t^{n+1}-t^n = \Delta t$.
The semi-implicit schemes for equations $\eqref{eqn:2d-pde}$ and $\eqref{eqn:3d-pde}$, respectively, are given by $$\label{eqn:2d-semi-dis} \dfrac{\eta^{n+1} - \eta^n}{\Delta t} = \frac{\eta_{\xi\xi}^{n+1}}{(\eta_{\xi}^n)^{2} + 1},$$ and $$\label{eqn:3d-semi-dis} \dfrac{w^{n+1} - w^n}{\Delta t} = \frac{(1 + (w_u^n)^2)w_{vv}^{n+1} - 2w_{u}^n w_{v}^n w_{uv}^{n+1} + (1 + (w_{v}^n)^2)w_{uu}^{n+1}}{2(1 + (w_{u}^n)^2 + (w_{v}^n)^2)}.$$ The semi-implicit schemes [\[eqn:2d-semi-dis\]](#eqn:2d-semi-dis){reference-type="eqref" reference="eqn:2d-semi-dis"} and [\[eqn:3d-semi-dis\]](#eqn:3d-semi-dis){reference-type="eqref" reference="eqn:3d-semi-dis"} are still in semi-discrete form.

### Spatial discretization

Generally, physical properties should be encoded into the discretization of spatial derivatives. In the case of mean curvature flow, the evolution of a hypersurface is driven by surface tension, which is diffusive in nature. Since diffusion comes from all directions, it is preferable to adopt central differences for approximating the spatial derivatives in the mean curvature term. Though the tangential velocity terms may introduce a convection effect, which would commonly be discretized with methods for hyperbolic PDEs, we remark that the convection effect is expected to be small compared to the diffusion effect. Hence, for simplicity, we approximate all the spatial derivatives in $\eqref{eqn:2d-pde}$ and $\eqref{eqn:3d-pde}$ with central differences. Suppose $\Gamma\subset\mathbb{R}^d, d =2,3$ is embedded into a bounding box $\mathcal{B}$ which is the tensor product of one dimensional intervals $\mathcal{I}_i, i = 1, 2, \cdots,d$. If each $\mathcal{I}_i$ is uniformly partitioned into a Cartesian grid $\mathcal{G}_i$, then the tensor product of the $\mathcal{G}_i$ forms a natural uniform partition of $\mathcal{B}$, which is also a Cartesian grid and is denoted by $\mathcal{G}$.
Without loss of generality, we assume equal bounding intervals $\mathcal{I}_i = [a,b], i = 1, 2,\cdots,d$, each uniformly partitioned into $N$ intervals. Denote by $h = (b-a)/N$ the mesh parameter. Let $\Delta_{\xi}, \delta_{\xi}^2$ denote the central difference quotients, $$\Delta_{\xi} u_i = \dfrac{u_{i+1} - u_{i-1}}{2h} ,\quad \delta_{\xi}^2 u_i = \dfrac{u_{i+1}-2u_i+u_{i-1}}{h^2}.$$ The fully discrete form of equation $\eqref{eqn:2d-pde}$ is given by $$\label{eqn:2d-fully-dis} \frac{u^{n + 1}_i - u^n_i}{\Delta t} = \frac{\delta^2_{\xi} u_i^{n + 1}}{(\Delta_{\xi} u_i^n)^2 + 1}.$$ Similarly, introduce the central difference quotients, $$\begin{aligned} \Delta_{u} w_{ij} &= \dfrac{w_{i + 1, j} - w_{i - 1, j}}{2h}, \quad \Delta_{v} w_{ij} = \dfrac{w_{i, j + 1} - w_{i, j - 1}}{2h}, \\ \delta_{uu}^2 w_{ij} &= \dfrac{w_{i + 1, j} + w_{i - 1, j} - 2w_{ij}}{h^2},\quad \delta_{vv}^2 w_{ij} = \dfrac{w_{i, j + 1} + w_{i, j - 1} - 2w_{ij}}{h^2}.\end{aligned}$$ The fully discrete form of equation $\eqref{eqn:3d-pde}$ is given by $$\label{eqn:3d-fully-dis} \frac{w^{n + 1}_{ij} - w^n_{ij}}{\Delta t} = C_{ij, 1}^{n}\delta_{vv}^2w_{ij}^{n + 1} + C_{ij, 2}^{n}\Delta_u \Delta_v w_{ij}^{n + 1} + C_{ij, 3}^{n}\delta_{uu}^2w_{ij}^{n + 1},$$ where the coefficients $C_{ij,1}$, $C_{ij,2}$ and $C_{ij,3}$ are specified by $$\begin{aligned} C_{ij, 1}^{n} &= \dfrac{1 + (\Delta_{u}w_{ij}^n)^2}{2(1 + (\Delta_{u}w_{ij}^n)^2 + (\Delta_{v}w_{ij}^n)^2)},\\ C_{ij, 2}^{n} &= \dfrac{- \Delta_{u}w_{ij}^n\Delta_{v}w_{ij}^n}{1 + (\Delta_{u}w_{ij}^n)^2 + (\Delta_{v}w_{ij}^n)^2}, \\ C_{ij, 3}^{n} &= \dfrac{1 + (\Delta_{v}w_{ij}^n)^2 }{ 2(1 + (\Delta_{u}w_{ij}^n)^2 + (\Delta_{v}w_{ij}^n)^2)}.
\end{aligned}$$ ![A schematic of node identification for the nine-point scheme [\[eqn:3d-fully-dis\]](#eqn:3d-fully-dis){reference-type="eqref" reference="eqn:3d-fully-dis"} on a base domain: boundary nodes $\partial\Omega^h_{r,l}$ (marked by red rectangles) and interior nodes $\Omega^h_{r,l}\backslash\partial\Omega^h_{r,l}$ (marked by blue triangles).](pics/grid_nodes.png){#fig:grid-nodes width="40%"}

## Boundary condition

Note that the finite difference schemes [\[eqn:2d-fully-dis\]](#eqn:2d-fully-dis){reference-type="eqref" reference="eqn:2d-fully-dis"} and [\[eqn:3d-fully-dis\]](#eqn:3d-fully-dis){reference-type="eqref" reference="eqn:3d-fully-dis"} are three-point and nine-point schemes, respectively. Denote by $\Omega^h_{r,l}$ all grid nodes in $\Omega_{r,l}$. A grid node is identified as an interior node if all of its stencil nodes belong to $\Omega_{r,l}^h$; otherwise, it is identified as a boundary node. See Figure [3](#fig:grid-nodes){reference-type="ref" reference="fig:grid-nodes"} for an example. The set of boundary nodes is denoted by $\partial\Omega^h_{r,l}$. Further, a control point is identified as a boundary control point if its projection on the base plane belongs to $\partial\Omega_{r,l}^h$. Denote by $\partial\Gamma_{r,l}^h$ the set of boundary control points. It is worthwhile to mention that $\partial\Gamma^h_{r,l}$ serves as the numerical boundary of $\Gamma_{r,l}$ and is not necessarily a subset of $\partial\Gamma_{r,l}$. At the boundary nodes, the matching condition [\[eqn:match-cond\]](#eqn:match-cond){reference-type="eqref" reference="eqn:match-cond"} is utilized to pose Dirichlet boundary conditions for [\[eqn:2d-pde\]](#eqn:2d-pde){reference-type="eqref" reference="eqn:2d-pde"} and [\[eqn:3d-pde\]](#eqn:3d-pde){reference-type="eqref" reference="eqn:3d-pde"}.
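The fully discrete scheme [\[eqn:2d-fully-dis\]](#eqn:2d-fully-dis){reference-type="eqref" reference="eqn:2d-fully-dis"} with Dirichlet boundary data can be illustrated by a minimal, self-contained sketch (not the paper's implementation): it advances the upper arc of a shrinking circle, with boundary values taken, for illustration only, from the exact solution $\eta(\xi,t)=\sqrt{1-2t-\xi^2}$ rather than from the matching condition; each tridiagonal system is solved with the Thomas algorithm.

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main-, c = super-diagonal."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

exact = lambda xi, t: math.sqrt(1 - 2 * t - xi * xi)

N, L, dt = 100, 0.5, 1e-4              # grid over xi in [-L, L], time step
h = 2 * L / N
xi = [-L + i * h for i in range(N + 1)]
eta = [exact(x, 0.0) for x in xi]

t = 0.0
while t < 0.05 - 1e-12:
    a = [0.0] * (N + 1); b = [1.0] * (N + 1); c = [0.0] * (N + 1)
    d = eta[:]
    for i in range(1, N):
        slope = (eta[i + 1] - eta[i - 1]) / (2 * h)   # Delta_xi eta^n
        s = dt / (h * h * (slope * slope + 1))
        a[i], b[i], c[i], d[i] = -s, 1 + 2 * s, -s, eta[i]
    t += dt
    d[0], d[N] = exact(xi[0], t), exact(xi[N], t)     # Dirichlet boundary data
    eta = thomas(a, b, c, d)

err = max(abs(eta[i] - exact(xi[i], t)) for i in range(N + 1))
print(err)   # small: first order in dt, second order in h
```

In the actual method the boundary values at $\partial\Gamma^h_{r,l}$ come from the matching condition instead of an exact solution; the linear algebra per component is the same.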
Numerically, the partition of unity $\chi_r$ is taken as $$\label{eqn:pou-num} \chi_r(\boldsymbol{x}) = \left \{ \begin{aligned} &1, \quad &\text{if } |\boldsymbol{n}(\boldsymbol{x})\cdot \boldsymbol{e}_r| > |\boldsymbol{n}(\boldsymbol{x})\cdot \boldsymbol{e}_j|, \forall j\neq r, \\ &0, \quad &\text{otherwise}. \end{aligned} \right.$$ This simple choice of partition of unity is sufficient for accuracy, though it is not smooth. The partition of unity can be understood as evaluating values at the boundary nodes of $\Gamma_r$ by interpolation from interior nodes of the other subsets $\Gamma_j,j\neq r$. Since the subsets overlap with each other, interpolation stencils always exist. In the following, we introduce two approaches to discretizing the matching condition [\[eqn:match-cond\]](#eqn:match-cond){reference-type="eqref" reference="eqn:match-cond"}.

### Coupled matching condition

The simplest way to discretize [\[eqn:match-cond\]](#eqn:match-cond){reference-type="eqref" reference="eqn:match-cond"} is to enforce it at every time level in a discrete sense, $$\label{eqn:cou-match} \boldsymbol{x}^{(r), n + 1} = \sum_{j \neq r}\chi_j \boldsymbol{x}^{(j), n+1},\quad\boldsymbol{x}^{(r), n + 1}\in\partial\Gamma_{r}^{h,n+1}.$$ This leads to a nonlinear system that couples the solutions on all subsets $\Gamma_r$ in each time step. Let $\mathbf{u}^i$ and $\mathbf{u}^b$ denote the vectors of solutions at the interior and boundary nodes, respectively. The system, which needs to be solved in each time step, is written as $$\label{eqn:sys-1} \begin{aligned} \mathbf{A}\mathbf{u}^i + \mathbf{Q}\mathbf{u}^b &= \mathbf{f},\\ \mathbf{u}^b &= \mathbf{\Pi}\mathbf{u}^i, \end{aligned}$$ where $\mathbf{A}, \mathbf{Q}$ are matrices, $\mathbf{\Pi}$ is the interpolation operator, and $\mathbf{f}$ is the vector containing solutions at previous time levels.
In the system [\[eqn:sys-1\]](#eqn:sys-1){reference-type="eqref" reference="eqn:sys-1"}, the first equation approximates the PDEs and the second approximates the matching condition. Here, the operator $\mathbf{\Pi}$ is essentially nonlinear, since discretizing [\[eqn:cou-match\]](#eqn:cou-match){reference-type="eqref" reference="eqn:cou-match"} involves the root-finding of polynomials. Note that the matrix $\mathbf{A}$ is block-wise diagonal and is invertible. The nonlinear system [\[eqn:sys-1\]](#eqn:sys-1){reference-type="eqref" reference="eqn:sys-1"} can be solved in the spirit of the Schur complement technique. One first solves the lower dimensional system $$\label{eqn:schur} \mathbf{\Pi}\mathbf{A}^{-1}(\mathbf{f} - \mathbf{Q}\mathbf{u}^b) - \mathbf{u}^b = 0,$$ for $\mathbf{u}^b$ and then obtains $\mathbf{u}^i$ by solving $$\mathbf{A}\mathbf{u}^i = \mathbf{f} - \mathbf{Q}\mathbf{u}^b.$$ The system [\[eqn:schur\]](#eqn:schur){reference-type="eqref" reference="eqn:schur"}, which looks like a Schur complement system but is nonlinear, can be solved with a method widely used in domain decomposition, the Schwarz alternating method, which is a block-wise Gauss-Seidel type iteration method [@mathew2008domain; @Saad2003]. The main idea of the method is to solve the problems alternately on each subdomain and to provide boundary conditions for the other subdomains. Generally, the Schwarz alternating method converges geometrically within a few iterations. The blocks in the matrix $\mathbf{A}$ are approximations of elliptic differential operators, which can also be inverted by an iterative method, such as the successive over-relaxation (SOR) method. In particular, in two space dimensions, $\mathbf{A}$ is block-wise tri-diagonal, and the Thomas algorithm is applicable.
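The Schwarz alternating method itself can be illustrated on a generic model problem (a hypothetical stand-alone example, unrelated to the paper's system): a discrete Poisson equation on $[0,1]$ split into two overlapping subdomains, each solved directly with the Thomas algorithm while taking Dirichlet data from the current iterate; the error contracts geometrically with the sweeps.

```python
import math

# -u'' = pi^2 sin(pi x), u(0) = u(1) = 0, exact solution u = sin(pi x).
N = 40
h = 1.0 / N
f = [math.pi**2 * math.sin(math.pi * i * h) for i in range(N + 1)]

def solve_subdomain(u, lo, hi):
    """Thomas solve of the discrete -u'' = f on nodes lo..hi, with Dirichlet
    data u[lo-1] and u[hi+1] taken from the current global iterate."""
    n = hi - lo + 1
    cp, dp = [0.0] * n, [0.0] * n
    d = [f[lo + k] * h * h for k in range(n)]   # scaled right-hand side
    d[0] += u[lo - 1]                           # boundary data moved to the RHS
    d[-1] += u[hi + 1]
    cp[0], dp[0] = -0.5, d[0] / 2.0             # diagonals are (-1, 2, -1)
    for k in range(1, n):
        m = 2.0 + cp[k - 1]
        cp[k] = -1.0 / m
        dp[k] = (d[k] + dp[k - 1]) / m
    u[lo + n - 1] = dp[-1]
    for k in range(n - 2, -1, -1):
        u[lo + k] = dp[k] - cp[k] * u[lo + k + 1]

u = [0.0] * (N + 1)                    # initial guess with exact boundary values
errs = []
for it in range(10):
    solve_subdomain(u, 1, 24)          # subdomain [0, 0.6]
    solve_subdomain(u, 16, N - 1)      # subdomain [0.4, 1], overlap [0.4, 0.6]
    errs.append(max(abs(u[i] - math.sin(math.pi * i * h)) for i in range(N + 1)))
print(errs[0], errs[-1])               # geometric decay down to the O(h^2) floor
```

The larger the overlap, the smaller the contraction factor per sweep; this is the same mechanism that makes the iteration on [\[eqn:schur\]](#eqn:schur){reference-type="eqref" reference="eqn:schur"} converge quickly.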
### ADI method

Instead of directly enforcing the matching condition [\[eqn:match-cond\]](#eqn:match-cond){reference-type="eqref" reference="eqn:match-cond"}, we can also follow the idea of the alternating direction implicit (ADI) method, which is used to solve time-dependent PDEs in multiple space dimensions, and discretize the matching condition with a time-splitting technique. Note that there is no need to enforce [\[eqn:match-cond\]](#eqn:match-cond){reference-type="eqref" reference="eqn:match-cond"} exactly, since the numerical discretization of the PDEs has already introduced numerical errors. One only needs to approximate it with an error on the order of $\mathcal{O}(\tau^p)$, where $\tau$ is the time step and $p$ is the approximation order. We evolve the $\Gamma_r$ alternately and compute boundary conditions with the newest solutions, which also yields an accurate approximation to the matching condition, by $$\label{eqn:adi-match} \boldsymbol{x}^{(r), n + 1} = \sum_{j \neq r}\chi_j \boldsymbol{x}^{(j), n^{\star}},\quad \boldsymbol{x}^{(r), n + 1}\in\partial\Gamma_{r}^{h,n+1},$$ where $\boldsymbol{x}^{(j), n^{\star}}$ is the newest solution on $\Gamma_j$. For example, the boundary conditions for $\Gamma_1$ are interpolated from the newest solutions on $\Gamma_2$ and $\Gamma_3$; then the solution on $\Gamma_1$ is updated and the boundary conditions for $\Gamma_2$ are computed using the newest solutions on $\Gamma_1$ and $\Gamma_3$, and so on. This approach is non-iterative in the sense that the solutions on the subsets $\Gamma_r$ are not coupled, and no Schur complement system needs to be solved in each time step. The ADI method is also a time-splitting strategy with a formal splitting error on the order of $\mathcal{O}(\tau)$. The two approaches only differ in the computation of boundary conditions. Figure [\[fig:adi-swz\]](#fig:adi-swz){reference-type="ref" reference="fig:adi-swz"} presents the numerical solutions obtained by these two approaches.
One can see that the solutions obtained by these two approaches have only subtle differences at several nodes in the overlapping region. In fact, the ADI method and the Schwarz alternating method are closely related for this problem. The ADI method is only a strategy to provide Dirichlet boundary conditions for the PDEs [\[eqn:2d-pde\]](#eqn:2d-pde){reference-type="eqref" reference="eqn:2d-pde"} and [\[eqn:3d-pde\]](#eqn:3d-pde){reference-type="eqref" reference="eqn:3d-pde"}. One can also repeatedly use the ADI method to compute new boundary conditions and update the solution at $t^{n+1}$, which leads exactly to the Schwarz alternating method. Therefore, the Schwarz alternating method reduces to the ADI method if only one iteration is performed in each time step.

# Algorithm summary {#sec:algorithm}

In this section, the algorithm for solving the mean curvature flow [\[eqn:mcf\]](#eqn:mcf){reference-type="eqref" reference="eqn:mcf"} with the proposed ADI method is summarized as follows: Algorithm. : The ADI method for mean curvature flows: Step 1. : Given the initial hypersurface by its parametric form or a level set function, embed it into a bounding box that is uniformly partitioned into a Cartesian grid, and find the control points with the overlapping decomposition strategy described in Subsection [3.1](#sec:osd-dis){reference-type="ref" reference="sec:osd-dis"}. Step 2. : In each time step, evolve the overlapping subsets alternately. For each subset, do the following three procedures: 1. identify all the nodes in each isolated component of the subset with the breadth-first search method; 2. compute the Dirichlet-type boundary condition for boundary nodes with the discrete matching condition [\[eqn:adi-match\]](#eqn:adi-match){reference-type="eqref" reference="eqn:adi-match"}; 3.
evolve the subset to the next time level by solving [\[eqn:2d-fully-dis\]](#eqn:2d-fully-dis){reference-type="eqref" reference="eqn:2d-fully-dis"} or [\[eqn:3d-fully-dis\]](#eqn:3d-fully-dis){reference-type="eqref" reference="eqn:3d-fully-dis"}; Step 4. : Update control points such that they satisfy the decomposition strategy described in Subsection [3.1](#sec:osd-dis){reference-type="ref" reference="sec:osd-dis"}. Step 5. : Repeat steps 2-4 until the final computational time. Procedure (a) in Step 2 is only for matrix assembly such that direct methods, such as the Thomas algorithm, are applicable. Suppose the finite difference equations [\[eqn:2d-fully-dis\]](#eqn:2d-fully-dis){reference-type="eqref" reference="eqn:2d-fully-dis"} and [\[eqn:3d-fully-dis\]](#eqn:3d-fully-dis){reference-type="eqref" reference="eqn:3d-fully-dis"} are solved with iterative methods which only require the matrix-vector product. In that case, one can find the local stencil points on-the-fly instead of finding all the nodes in the components in advance. # Numerical results {#sec:result} This section presents numerical examples in two and three space dimensions to validate the proposed method. Initial hypersurfaces are given in parametric forms or the zero level set of level set functions, which will be prescribed in each example. In all the numerical examples, the bounding box $\mathcal{B}$ is uniformly partitioned into a Cartesian grid $\mathcal{G}$ with $N$ intervals in each direction. For problems with exact solutions, we estimate the numerical error at a surface point by finding its projection on the hypersurface of the exact solution and computing the distance. We take the solution on a fine grid for problems without exact solutions as a reference "exact" solution. Then, we estimate the numerical error at a surface point by finding the closest surface point on the reference solution and computing the distance. 
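A minimal sketch of this closest-point error estimate is given below (brute-force search; using a finely sampled unit circle as the "reference" solution is purely illustrative).

```python
import numpy as np

def pointwise_errors(points, ref_points):
    """Distance from each computed surface point to its closest point on a
    reference solution (brute force; a k-d tree would be preferable for
    large point sets)."""
    d = np.linalg.norm(points[:, None, :] - ref_points[None, :, :], axis=2)
    return d.min(axis=1)

# Sanity check: points at distance 0.1 from a finely sampled unit circle.
theta = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
ref = np.stack([np.cos(theta), np.sin(theta)], axis=1)
pts = np.array([[1.1, 0.0], [0.0, 0.9]])
err = pointwise_errors(pts, ref)   # both entries are close to 0.1
```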
The numerical errors in the maximum norm and $l_2$ norm are computed by $$\Vert \mathbf{e}_h\Vert_{\infty} = \max_{\boldsymbol{x}_i\in\Gamma^h}\left\{\Vert \boldsymbol{x}_i - \boldsymbol{x}^{ref}_i\Vert\right\}, \quad \Vert \mathbf{e}_h\Vert_{2} = \sqrt{\dfrac{1}{N_{\Gamma}} \sum_{\boldsymbol{x}_i\in\Gamma^h}\Vert \boldsymbol{x}_i - \boldsymbol{x}^{ref}_i\Vert^2 },$$ where $\boldsymbol{x}^{ref}_i$ is the exact or reference solution associated with $\boldsymbol{x}_i$ and $N_{\Gamma}$ is the total number of surface points. The following numerical experiments are performed on a personal computer with a 3.80 GHz Intel Core i7 processor. The codes for conducting the numerical experiments are written in C++.

## Two space dimensional examples {#eg:2d-egs}

First, we solve the mean curvature flow for a simple case and compare the numerical solution with the exact solution to verify the convergence of the proposed method. The initial curve is a circle of unit radius, whose radius $r(t)$ under the flow satisfies $$\label{eqn:init-cir} r(t) = \sqrt{1 - 2t}.$$ The bounding box is taken as $[-1.2, 1.2]^2$. Note that the curve eventually shrinks to a point at $T_{end} = 0.5$. We estimate the numerical errors at $T = 0.2$ to ensure that the coarsest grid, $N = 64$, can fully resolve the curve during the computation. On finer grids, the computation can be continued beyond $T$. The time step size is chosen as $\Delta t = 0.1\Delta x$, where $\Delta x$ is the spatial grid size. Numerical results are summarized in Table [1](#tab:cir-err){reference-type="ref" reference="tab:cir-err"}.
   N     $\Vert \boldsymbol{e}_h\Vert_{\infty}$   order   $\Vert \boldsymbol{e}_h\Vert_{2}$   order
  ------ ---------------------------------------- ------- ----------------------------------- -------
   64                  7.99e-03                     \-                 3.99e-03                 \-
   128                 3.80e-03                    1.07                1.79e-03                1.16
   256                 1.88e-03                    1.02                8.42e-04                1.09
   512                 9.45e-04                    0.99                4.12e-04                1.03
   1024                4.72e-04                    1.00                2.05e-04                1.01

  : Numerical error and convergence order of the 2D MCF for a circle-shaped initial curve.

Next, we change the initial shape to an ellipse, which is given by $$\label{eqn:init-ellipse} \left \{ \begin{aligned} x &= a \cos(\theta),\\ y &= b \sin(\theta), \end{aligned} \right. \quad \theta \in[0, 2\pi),$$ with $a = 1.0, b = 0.5$. The problem is solved in the bounding box $\mathcal{B}=[-1.2,1.2]^2$. The time step size is taken as $\Delta t = 0.1\Delta x$. Since there is no exact solution for this configuration, the solution on a fine grid with $N = 2048$ is chosen as a reference solution to estimate numerical errors. The estimated error and the convergence order are summarized in Table [2](#tab:ellipse){reference-type="ref" reference="tab:ellipse"}. It can be observed that the convergence order is a bit larger than $1$. This may be due to the inexact estimation of the numerical error, which is based on the distance between each surface point and its closest point in the reference solution. The evolution history of the curve is presented in Figure [\[fig:ellipse\]](#fig:ellipse){reference-type="ref" reference="fig:ellipse"}. The changes in curve length and enclosed area are computed on the grid with $N=1024$ and shown in Figure [4](#fig:ell-L-A){reference-type="ref" reference="fig:ell-L-A"}. It can be observed that the enclosed area loss rate compares favorably with the theoretical result, whose slope is $m_{ref} = -2\pi$.
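As a quick sanity check of the reference slope $m_{ref} = -2\pi$: the same constant rate governs the exact circle solution, since there $A(t) = \pi r(t)^2 = \pi(1 - 2t)$.

```python
import math

# For the exact shrinking circle r(t) = sqrt(1 - 2t), the enclosed area is
# A(t) = pi * r(t)^2 = pi * (1 - 2t), so the area decreases at the constant
# rate dA/dt = -2*pi, which is the reference slope m_ref quoted above.
def area(t):
    return math.pi * (1.0 - 2.0 * t)

slope = (area(0.2) - area(0.1)) / 0.1   # finite-difference slope
```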
   $N$    $\Vert \boldsymbol{e}_h\Vert_{\infty}$   order   $\Vert \boldsymbol{e}_h \Vert_{2}$   order
  ------ ---------------------------------------- ------- ------------------------------------ -------
   128                 2.45e-02                     \-                  1.84e-02                 \-
   256                 1.04e-02                    1.24                 7.95e-03                1.21
   512                 4.31e-03                    1.27                 3.31e-03                1.26
   1024                1.55e-03                    1.48                 1.15e-03                1.52

  : Numerical error and convergence order of the 2D MCF for an ellipse-shaped initial curve.

\ ![Time evolution of curve length and enclosed area of the ellipse-shaped curve.](./pics/ell1024data2.pdf){#fig:ell-L-A width="80%"}

We also choose a five-fold star-shaped initial curve, which is given by $$\label{eqn:init-star} \left \{ \begin{aligned} x &= a(\kappa + \eta \sin(m\theta))\cos(\theta),\\ y &= b(\kappa + \eta \sin(m\theta))\sin(\theta), \end{aligned} \right. \quad \theta \in[0, 2\pi),$$ with $a = 1.0, b = 1.0, \kappa = 0.8, \eta = 0.2$, and $m = 5$. The bounding box and time step size are the same as those in the previous case. Estimated numerical errors and convergence orders are summarized in Table [3](#tab:star){reference-type="ref" reference="tab:star"}. The evolution history of the curve and the changes in curve length and enclosed area are presented in Figure [\[fig:star\]](#fig:star){reference-type="ref" reference="fig:star"} and [4](#fig:ell-L-A){reference-type="ref" reference="fig:ell-L-A"}, respectively. The numerical results are also consistent with theoretical results.

   $N$    $\Vert \boldsymbol{e}_h\Vert_{\infty}$   order   $\Vert \boldsymbol{e}_h \Vert_{2}$   order
  ------ ---------------------------------------- ------- ------------------------------------ -------
   128                 5.45e-03                     \-                  3.21e-03                 \-
   256                 2.60e-03                    1.07                 1.46e-03                1.14
   512                 1.21e-03                    1.10                 6.60e-04                1.15
   1024                5.00e-04                    1.28                 2.26e-04                1.55

  : Numerical error and convergence order of the 2D MCF for a five-fold star-shaped initial curve.
\ ![Time evolution of curve length and enclosed area of the five-fold star-shaped curve.](./pics/star1024data.pdf){#fig:star-L-A width="80%"} For this case, we compare the present method with an explicit time-advancing scheme for the mean curvature flow, which discretizes the equation $$\frac{d}{dt} \begin{pmatrix} x\\ y \end{pmatrix} = - \frac{x_{\theta}y_{\theta\theta} - x_{\theta\theta}y_{\theta}}{(x_{\theta}^{2} + y_{\theta}^{2})^{2}} \begin{pmatrix} y_{\theta} \\ - x _{\theta} \end{pmatrix},\quad \theta\in[0, 2\pi),$$ with the forward Euler scheme in time and central differences in $\theta$. The time step size for the explicit method is chosen to ensure numerical stability, using the adaptive time step $$\Delta t = 0.8(\min_{\theta}\Delta s)^2,$$ where $\Delta s$ denotes the Euclidean distance between two adjacent grid nodes. We optimize the codes for both methods and collect the CPU times required to solve the mean curvature flow up to the final time $T = 0.1$. The number of control points at time $t$ is denoted by $M_t$. Numerical results are summarized in Table [4](#tab:cpu-time){reference-type="ref" reference="tab:cpu-time"}. It can be observed that while the forward Euler method is faster on coarse grids, the present method becomes more efficient as the point number increases. This can be explained by the computational complexities of the two methods. To compute the solution up to a fixed final time, since the time step size can be chosen linearly proportional to the spatial grid size for the present method, its computational complexity is $\mathcal{O}(N^2)$. In contrast, the computational complexity of the forward Euler scheme is $\mathcal{O}(N^3)$ because of the quadratic stability constraint on the time step size. In fact, the forward Euler method fails for long-time computation, since the required time step size quickly decreases to $10^{-6}$ due to the curve-shortening phenomenon and poor mesh quality.
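For illustration, the explicit comparison scheme just described can be sketched as follows. This is a simplified stand-in, not the authors' code; it applies the quoted parametric equation with central differences and the adaptive step $\Delta t = 0.8(\min \Delta s)^2$, and reproduces the shrinking-circle solution $r(t) = \sqrt{1 - 2t}$ on a toy grid.

```python
import numpy as np

def euler_step(x, y, dtheta):
    """One forward-Euler step of the explicit parametric scheme, with
    central differences in theta and a stability-limited time step."""
    xp, xm = np.roll(x, -1), np.roll(x, 1)
    yp, ym = np.roll(y, -1), np.roll(y, 1)
    xt, yt = (xp - xm) / (2 * dtheta), (yp - ym) / (2 * dtheta)
    xtt, ytt = (xp - 2 * x + xm) / dtheta**2, (yp - 2 * y + ym) / dtheta**2
    coef = -(xt * ytt - xtt * yt) / (xt**2 + yt**2) ** 2
    ds = np.hypot(xp - x, yp - y)        # segment lengths
    dt = 0.8 * ds.min() ** 2             # adaptive stability constraint
    return x + dt * coef * yt, y - dt * coef * xt, dt

# Shrink the unit circle: the mean radius should follow r(t) = sqrt(1 - 2t).
n = 256
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
x, y, t = np.cos(theta), np.sin(theta), 0.0
while t < 0.1:
    x, y, dt = euler_step(x, y, 2.0 * np.pi / n)
    t += dt
r = float(np.sqrt(x**2 + y**2).mean())   # close to sqrt(0.8)
```

Note how the step size scales quadratically with the segment length, which is the origin of the $\mathcal{O}(N^3)$ cost mentioned above.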
   $N$    $M_0$   $M_T$   CPU times (secs)   $M_0=M_T$   CPU times (secs)
  ------ ------- ------- ------------------ ----------- ------------------
   128     392     224        9.66e-03          224          3.73e-03
   256     786     444        3.54e-02          448          1.73e-02
   512    1574     890        1.34e-01          896          1.34e-01
   1024   3150    1776        5.15e-01         1792          1.02e+00
   2048   6300    3548        2.08e+00         3584          8.30e+00

  : CPU time comparison between the present method (columns 2--4) and an explicit time advancing scheme (columns 5--6).

## Three space dimensional examples {#eg:3d-egs}

For three space dimensional mean curvature flows, we test the convergence rate of the method by considering a simple case, a sphere-shaped initial surface. Similar to the two space dimensional case, this configuration has an exact solution: the surface remains a sphere, and the radius $r(t)$ satisfies $$r(t) = \sqrt{1 - 2t}.$$ Numerical errors are estimated at $T=0.2$ and summarized in Table [5](#tab:sph-err){reference-type="ref" reference="tab:sph-err"}.

   N     $\Vert \boldsymbol{e}_h\Vert_{\infty}$   order   $\Vert \boldsymbol{e}_h \Vert_{2}$   order
  ------ ---------------------------------------- ------- ------------------------------------ -------
   64                  4.72e-03                     \-                  1.42e-03                 \-
   128                 3.46e-03                    0.45                 7.46e-04                0.93
   256                 1.45e-03                    1.25                 3.63e-04                1.04
   512                 9.18e-04                    0.66                 1.82e-04                1.00
   1024                3.47e-04                    1.40                 8.81e-05                1.05

  : Numerical error and convergence order of the 3D MCF for a sphere-shaped initial surface.

We also solve the mean curvature flow in three space dimensions for more examples. In the following numerical examples, the bounding box partitioned into a Cartesian grid is chosen as $\mathcal{B} = [-1.2, 1.2]^3$, and the time step is chosen as $\Delta t = 0.05\Delta x$. In the first case, we set the initial shape to be an ellipsoid, which is given by $$\Gamma = \left\{(x,y,z)\Big |\frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2} - 1 = 0\right\},$$ with $a = 1.0, b = 0.7, c = 0.5$.
Numerical error and convergence order estimated at $t = 0.2$ are summarized in Table [6](#tab:ellipsoid){reference-type="ref" reference="tab:ellipsoid"}. The time evolution of the surface and of its area and enclosed volume are presented in Figure [\[fig:ellipsoid\]](#fig:ellipsoid){reference-type="ref" reference="fig:ellipsoid"} and [6](#fig:ell-A-V){reference-type="ref" reference="fig:ell-A-V"}, respectively. One can observe that the major axis of the ellipsoid decreases faster than the other two axes, so the ellipsoid quickly becomes close to a sphere. The surface area decreases with time, which is consistent with the theoretical result.

   $N$    $\Vert \boldsymbol{e}_h\Vert_{\infty}$   order   $\Vert \boldsymbol{e}_h \Vert_{2}$   order
  ------ ---------------------------------------- ------- ------------------------------------ -------
   128                 1.58e-03                     \-                  3.78e-04                 \-
   256                 1.32e-03                    0.26                 1.74e-04                1.12
   512                 3.70e-04                    1.83                 6.36e-05                1.45

  : Numerical error and convergence order of the 3D MCF for an ellipsoid-shaped initial surface.

\ ![Time evolution of the surface area and enclosed volume of the ellipsoid-shaped surface.](./pics/ell3d-data.pdf){#fig:ell-A-V width="80%"}

In the second case, we choose a genus $1$ torus-shaped initial surface. The surface is given by $$\Gamma = \left\{(x,y,z)\Big |\left (c - \sqrt{x^2 + y ^2}\right )^2 + z ^ 2 - a^2=0\right\},$$ with $a = 0.34, c = 0.8$. Numerical error and convergence order are summarized in Table [7](#tab:torus){reference-type="ref" reference="tab:torus"}. The time evolution of the surface and of its area and enclosed volume are presented in Figure [\[fig:torus\]](#fig:torus){reference-type="ref" reference="fig:torus"} and [7](#fig:torus-A-V){reference-type="ref" reference="fig:torus-A-V"}, respectively. Driven by mean curvature, the torus-shaped surface becomes thinner with time.
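The implicit torus initialization can be checked directly from its level-set function; a small sketch (the probe points are arbitrary):

```python
import numpy as np

a, c = 0.34, 0.8   # tube radius and centre-circle radius from the text

def phi(x, y, z):
    """Level-set function whose zero level set is the initial torus."""
    return (c - np.sqrt(x**2 + y**2)) ** 2 + z**2 - a**2

# A point at distance a from the centre circle lies on the surface,
# while the origin lies off the zero level set (phi = c^2 - a^2 > 0).
on_surface = phi(c + a, 0.0, 0.0)   # ~0
at_origin = phi(0.0, 0.0, 0.0)      # positive
```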
   $N$    $\Vert \boldsymbol{e}_h\Vert_{\infty}$   order   $\Vert\boldsymbol{e}_h \Vert_{2}$   order
  ------ ---------------------------------------- ------- ----------------------------------- -------
   128                 1.65e-03                     \-                 3.08e-04                 \-
   256                 1.29e-03                    0.36                1.64e-04                0.91
   512                 8.26e-04                    0.64                5.94e-05                1.47

  : Numerical error and convergence order of the 3D MCF for a torus-shaped initial surface.

![Time evolution of the surface area and enclosed volume of the torus-shaped surface.](./pics/torus-data.pdf){#fig:torus-A-V width="80%"}

In the final case, the initial surface is a four-atom molecular-shaped surface which is given by $$\Gamma = \left\{(x,y,z)\Big |c - \sum_{k = 1}^4 \exp\left (-\frac{|\boldsymbol{x} - \boldsymbol{x}_k|^2}{r^2}\right ) = 0\right\},$$ with $\boldsymbol{x}_1 = (\sqrt{3} / 3, 0, -\sqrt{6}/12)$, $\boldsymbol{x}_2 = (-\sqrt{3} / 6, 0.5, -\sqrt{6}/12)$, $\boldsymbol{x}_3 = (-\sqrt{3} / 6, -0.5, -\sqrt{6}/12)$, $\boldsymbol{x}_4 = (0, 0, \sqrt{6}/4)$ and $c = 0.5$, $r = 0.5$. The numerical error and convergence order are summarized in Table [8](#tab:molecular){reference-type="ref" reference="tab:molecular"}. The time evolution of the surface and of its area and enclosed volume are presented in Figure [\[fig:molecular\]](#fig:molecular){reference-type="ref" reference="fig:molecular"} and [8](#fig:atoms-A-V){reference-type="ref" reference="fig:atoms-A-V"}, respectively.

   $N$    $\Vert \boldsymbol{e}_h\Vert_{\infty}$   order   $\Vert \boldsymbol{e}_h \Vert_{2}$   order
  ------ ---------------------------------------- ------- ------------------------------------ -------
   128                 1.79e-03                     \-                  4.43e-04                 \-
   256                 1.24e-03                    0.53                 2.51e-04                0.82
   512                 5.19e-04                    1.26                 9.12e-05                1.46

  : Numerical error and convergence order of the 3D MCF for a molecular-shaped initial surface.
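The "order" columns in the tables above can be reproduced from successive errors, since $N$ doubles between rows; a minimal helper:

```python
import math

def orders(errors):
    """Observed convergence orders from errors on successively refined
    grids (N doubling each time): p = log2(e_coarse / e_fine)."""
    return [math.log2(e0 / e1) for e0, e1 in zip(errors, errors[1:])]

# Max-norm errors of the molecular-shaped example from the table above.
print([round(p, 2) for p in orders([1.79e-03, 1.24e-03, 5.19e-04])])  # -> [0.53, 1.26]
```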
\ ![Time evolution of the surface area and enclosed volume of the molecular-shaped surface.](./pics/atoms-data.pdf){#fig:atoms-A-V width="80%"}

# Discussion {#sec:discu}

This work presents a Cartesian grid-based alternating direction implicit method for solving mean curvature flows in two and three space dimensions. The method decomposes a hypersurface into multiple overlapping subsets, for which new evolution equations are derived by adding extra tangential velocities. The new formulations for the moving hypersurface only require solving a sequence of scalar quasi-linear parabolic PDEs on planar domains, which is one dimension lower than the original formulation. The overlapping subsets of the hypersurface can be represented in terms of height functions of Monge patches, which are discretized with Cartesian grids. With this representation of the hypersurface, an ADI-type semi-implicit time integration method is proposed such that the subsets can be evolved alternately. The convergence of the proposed method is validated by numerical experiments. The results show that the ADI method is efficient compared with an explicit scheme, since it is free of the severe stability constraint on the time step size. Mean curvature flows for various hypersurfaces in two and three space dimensions are also presented, including one whose initial configuration is a genus $1$ surface. Although the method in this paper is designed for solving mean curvature flows, it is expected to extend to other moving interface problems described by geometric evolution laws, such as the anisotropic mean curvature flow, the surface diffusion flow, and the Willmore flow. Further, for problems that involve moving interfaces and bulk PDEs simultaneously, such as the Stefan problem and two-phase Stokes flow, the method is also applicable when combined with a PDE solver such as the kernel-free boundary integral method [@Ying2013]. **Funding** W. Y.
is financially supported by the National Key R&D Program of China, Project Number 2020YFA0712000, the Strategic Priority Research Program of Chinese Academy of Sciences (Grant No. XDA25010405), the National Natural Science Foundation of China (Grant No. DMS-11771290) and the Science Challenge Project of China (Grant No. TZ2016002). S. L. is partially supported by the U.S. National Science Foundation, Division of Mathematical Sciences grants DMS-1720420 and DMS-2309798. **Data availability** Enquiries about data availability should be directed to the authors. # Declarations {#declarations .unnumbered} **Conflict of interest** We declare that we have no financial or personal relationships with other people or organizations that could inappropriately influence our work. There is no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this manuscript.
{ "id": "2309.05963", "title": "An Alternating Direction Implicit Method for Mean Curvature Flows", "authors": "Han Zhou, Shuwang Li, Wenjun Ying", "categories": "math.NA cs.NA physics.comp-ph", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We implemented active learning pedagogy in teaching and learning an introductory course of linear algebra at the tertiary level. We adopted a flipped classroom approach for several semesters and collected students' perceptions regarding the pedagogy. A questionnaire was distributed at the end of the semester, collecting students' demographics, including gender, year of study, major, and expected grade. The students were asked to respond to ten statements on a five-item Likert scale, five of them gauging their opinion of and attitude toward the flipped classroom pedagogy, and the other five concerning the instructor. Using a machine learning algorithm, the support vector machine (SVM) classifier, we investigated whether there are observable variations in the survey responses when stratified by gender. We seek a hyperplane that best divides this dataset into two distinct classes based on gender differences. After implementing principal component analysis (PCA) to reduce the data dimensionality, we observed that there exists a discernible hyperplane that segregates the responses by gender. This implies that the SVM has learned to capture and represent the underlying gender-related characteristics within the dataset. Furthermore, this finding suggests that we need to tailor our approach when implementing flipped classroom pedagogy, because different genders learn differently.\ Keywords: machine learning, support vector machine, principal component analysis, linear algebra, flipped classroom. author: - S. Laudari  - "N. Karjanto[^1]" date: Last updated title: " **Enhancing Flipped Classroom Pedagogy in Linear Algebra through Machine Learning**" --- # Introduction As educators, we aim to aid students' learning, promoting knowledge retention, problem-solving, and critical thinking skills beyond course completion.
Active learning, including the popular approach of flipped classrooms, enhances instruction and engagement. We have implemented flipped classroom pedagogy in teaching and learning linear algebra for college students from various majors at a private university in South Korea. Research on flipped classrooms in linear algebra is extensive and varied. Various designs, such as flipping single topics, entire courses, and workshops, have been proposed by Talbert [@talbert2014inverting]. The literature consistently demonstrates the effectiveness of flipped classrooms in linear algebra; for example, a study comparing flipped classrooms with traditional lectures showed improved student understanding in the former [@love2014student]. Further evidence by Murphy et al. [@murphy2016student] highlighted the superior comprehension, positive attitudes, enjoyment, and confidence of students in flipped classrooms. Additional support for positive student perspectives, participation, interest, and self-directed learning skills can be found in [@novak2017flip; @nasir2020the; @karjanto2019english; @karjanto2022sustainable]. In this study, we collected students' opinions and perceptions after they experienced flipped classroom pedagogy and analyzed the dataset using a machine learning algorithm called the support vector machine (SVM), a supervised learning algorithm used for classification and regression tasks [@cortes1995support; @noble2006what; @suthaharan2016support]. This involves training the SVM model on the dataset to learn patterns and relationships within the data. During training, the SVM identifies a hyperplane that best separates data points belonging to different classes. The goal is to find the hyperplane with the largest margin between the classes, which allows for better generalization to new, unseen data.
In cases where the data is not linearly separable, the SVM can use kernel functions to transform the data into a higher-dimensional space, making it possible to find a separating hyperplane. Once the SVM is trained, it can be used to predict the class of new, unseen data points based on their features. The model assigns data points to classes depending on which side of the learned hyperplane they fall on. In classification tasks, the SVM provides a decision boundary that maximizes the separation between classes, while in regression tasks, it predicts a continuous value. We are interested in investigating the following research question: Can machine learning algorithms, such as the SVM, identify a hyperplane that separates data points belonging to different genders? If so, what are the implications for our practice and implementation of flipped classroom pedagogy? # Methodology ## Data Collection To understand student perspectives on the flipped classroom method, we crafted a survey comprising 10 thoughtfully designed questions to collect their opinions of and attitudes toward flipped classroom pedagogy in linear algebra. This survey was administered to the students at the semester's end, a time that allowed them to reflect on their entire experience. In the survey, students were prompted to express their level of agreement with each statement by selecting a number from 1 to 5, corresponding to responses ranging from strongly disagree to strongly agree. In addition to gender identification, we also collected their year of study, their affiliated faculty or school, their expected grade, and their current grade point average (GPA). We obtained a total of 108 valid returned questionnaires during Spring 2016. Recognizing the value of a larger dataset for deeper analysis, we also generated synthetic data. For this, we developed a Python script that mimicked the patterns seen in the original student responses.
This approach ensured that our additional data closely mirrored real student feedback. ## Implementation of Machine Learning To derive actionable insights from the survey data and to better understand the underlying patterns within the students' responses, we employed machine learning, specifically the support vector classifier (SVC). SVC is part of the support vector machines (SVM) family of algorithms, which are primarily used for classification tasks. At its core, the idea behind the SVM is to find a hyperplane that best divides a dataset into classes. The mathematical formulation for this classifier can be expressed as follows: $$\min_{\mathbf{w}, b, \varepsilon} \frac{1}{2} \| \mathbf{w} \|^2 + C \sum_{i = 1}^{N} \varepsilon_{i},$$ subject to the following conditions: $$y_i(\mathbf{w} \cdot \mathbf{x}_i + b) \geq 1 - \varepsilon_i, \qquad \text{and} \qquad \varepsilon_i \geq 0, \qquad i = 1, 2, \ldots, N.$$ Here, $\mathbf{w}$ is the normal vector to the hyperplane, $\mathbf{x}_i$ are the training vectors, $y_i$ are the class labels, $b$ is the bias, $\varepsilon_i$ are nonnegative slack variables that allow for misclassification of data points, and the positive constant $C$ is a regularization parameter. In our implementation, the two classes defined by the SVC were 0 and 1, representing male and female students, respectively. Intuitively, the SVM tries to maximize the margin between the closest data points (called support vectors) of the two classes (male and female). The hyperplane is the boundary that separates the classes, and its orientation and position are defined by the normal vector. Before feeding our data into the SVC, we implemented principal component analysis (PCA) for dimensionality reduction. PCA is a technique used to emphasize variation and highlight strong patterns in a dataset.
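A minimal sketch of this PCA-plus-SVC workflow with scikit-learn is shown below. The synthetic Likert data and all of its parameters (`base`, `shift`, sample sizes, noise level) are hypothetical stand-ins for illustration, not the study's actual data or script.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Hypothetical synthetic stand-in: ten Likert items (1-5) for two groups
# whose mean response profiles differ slightly.
rng = np.random.default_rng(0)
n = 200
base = np.array([3.2, 3.4, 3.1, 3.5, 4.0, 3.3, 3.9, 3.2, 3.4, 3.1])
shift = np.array([0.0, 0.1, -0.2, 0.3, 0.6, -0.1, 0.5, 0.2, 0.0, -0.3])
X = np.vstack([
    np.clip(np.rint(rng.normal(base, 0.6, (n, 10))), 1, 5),
    np.clip(np.rint(rng.normal(base + shift, 0.6, (n, 10))), 1, 5),
])
y = np.array([0] * n + [1] * n)   # 0 = male, 1 = female

model = make_pipeline(PCA(n_components=2), SVC(kernel="linear"))
model.fit(X, y)
acc = model.score(X, y)           # training accuracy on the toy data
```

On this toy data the pipeline attains a nontrivial training accuracy; PCA here compresses the ten Likert items to two components before the SVC fits its separating hyperplane.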
It works by calculating the eigenvectors and eigenvalues of the covariance matrix of the data, which are used to determine the principal components (directions of maximum variance). The primary goal is to reduce the number of dimensions without much loss of information. By utilizing PCA, we intended to enhance the computational efficiency and possibly improve the classification performance of our SVC. # Results Our analysis using the SVC yielded a remarkable classification accuracy of $98.9\%$. This high accuracy rate underscores the possibility that the survey responses exhibit inherent patterns or tendencies that differ by gender. With male and female as our target classes, this result offers compelling evidence that there is a significant variance in the way each gender responded to the survey, as shown in Figure [2](#figure){reference-type="ref" reference="figure"}. ![(Left panel) The visualization of the decision boundary clearly marks the distinction between male and female responses to the questionnaire statements. (Right panel) The ranking of the statements in the questionnaire according to their feature importance, indicating the influence of each statement on the model's decision. The fifth and seventh statements were ranked highest ("I understand better the course material when it is delivered in the flipped learning format." and "My instructor successfully implemented flipped classroom pedagogy for this course."), whereas the first (preference) and second (enjoyment) statements were ranked lowest ("I prefer flipped classroom more than the traditional classroom." and "I enjoy flipped classroom more than the traditional classroom.").](Figure1-boundary-plot.png "fig:"){#figure width="45%"} ![](Figure2-feature-importance.png "fig:"){#figure width="45%"} Our machine learning algorithm also identified certain questions that displayed the highest variance. This implies that these questions may have been more divisive or elicited stronger, more varied reactions from the students based on their gender. Such questions can be of immense value when tailoring or improving the educational approach, since they highlight areas where perceptions significantly diverge. # Conclusion Our analysis of survey data related to the flipped classroom approach revealed crucial insights into educational nuances. The variance in responses highlights the necessity for tailored educational strategies. For instance, if some elements of the flipped classroom are found less effective for female students, educators have a clear opportunity to refine their resources to better cater to this group. Furthermore, the data indicates that aspects of this teaching methodology may resonate differently with each gender, urging curriculum designers to ensure inclusivity in their materials. The impressive accuracy of our machine learning model, combined with the variance of specific questions, suggests fertile ground for more in-depth academic investigations. Potential avenues could include qualitative research methods and exploring the root causes of varied responses.
Moreover, institutions can enhance their pedagogical practices by creating a feedback loop with students, emphasizing the value of their opinions and the commitment to continuous educational improvement. This synthesis of educational practices and technological methodologies offers a promising route to tailored, effective learning experiences. ## Conflict of Interest {#conflict-of-interest .unnumbered} The authors declare that they have no conflicts of interest. Talbert, R. (2014). Inverting the linear algebra classroom. *Primus* **24**(5): 361--374. Love, B., Hodge, A., Grandgenett, N., & Swift, A. W. (2014). Student learning and perceptions in a flipped linear algebra course. *International Journal of Mathematical Education in Science and Technology* **45**(3): 317--324. Murphy, J., Chang, J. M., & Suaray, K. (2016). Student performance and attitudes in a collaborative and flipped linear algebra course. *International Journal of Mathematical Education in Science and Technology* **47**(5): 653--673. Novak, J., Kensington-Miller, B., & Evans, T. (2017). Flip or flop? Students' perspectives of a flipped lecture in mathematics. *International Journal of Mathematical Education in Science and Technology* **48**(5): 647--658. Nasir, M. A. M., Alaudin, R. I., Ismail, S., Ali, N. A. T. M., Faudzi, F. N. M., Yusuff, N., & Pozi, M. S. M. (2020). The effectiveness of flipped classroom strategy on self-directed learning among undergraduate mathematics students. *Practitioner Research* **2**: 61--81. Karjanto, N., & Simon, L. (2019). English-medium instruction Calculus in Confucian-Heritage Culture: Flipping the class or overriding the culture? *Studies in Educational Evaluation* **63**: 122--135. Karjanto, N., & Acelajado, M. J. (2022). Sustainable learning, cognitive gains, and improved attitudes in College Algebra flipped classrooms. *Sustainability* **14**(19): 12500. Cortes, C., & Vapnik, V. (1995). Support-vector networks. *Machine Learning* **20**: 273--297.
Noble, W. S. (2006). What is a support vector machine? *Nature Biotechnology* **24**(12): 1565--1567. Suthaharan, S. (2016). Support vector machine. In *Machine Learning Models and Algorithms for Big Data Classification: Thinking with Examples for Effective Learning*. Integrated Series in Information Systems, **36**: 207--235. Boston, MA, US: Springer. ## Appendix: Brief Survey of Flipped Learning in Introductory Linear Algebra {#appendix-brief-survey-of-flipped-learning-in-introductory-linear-algebra .unnumbered} **Instruction:**\ Please respond to the following items by marking the boxes as or . Your current GPA: \ (Please estimate if you do not know; give only one single numeric response: e.g., 3.78.)\ The statements below are designed to collect your opinion and your attitude toward flipped learning in Linear Algebra. Each item has 5 possible responses. The responses range from strongly disagree through strongly agree. If you have no opinion, choose response neutral. Please read each statement. Mark one response that most clearly represents your degree of agreement or disagreement with that statement. Try not to think deeply about each response. Record your answer by or and move quickly to the next item. Please respond to all of the statements. **Your additional comment:**\ ------------------------------------------------------------------------ \ ------------------------------------------------------------------------ \ ------------------------------------------------------------------------ [^1]: : [natanael\@skku.edu](natanael@skku.edu)[![image](orcid.pdf)](https://orcid.org/0000-0002-6859-447X)
--- abstract: | In this article, we establish initially regular sequences on cycles of the form $C_{3n+2}$ for $n\ge 1$, in the sense of [@FHM-ini]. These sequences accurately compute the depth of these cycles, completing the case of finding effective initially regular sequences on cycles. Our approach involves a careful analysis of associated primes of initial ideals of the form $\mathop{\mathrm{in_{>}}}(I,f)$ for an arbitrary monomial ideal $I$ and a linear sum $f$. We describe the minimal associated primes of these ideals in terms of the minimal primes of $I$. Moreover, we obtain a description of the embedded associated primes of arbitrary monomial ideals. Finally, we accurately compute the depth of certain types of unicyclic graphs. address: | Department of Mathematical Sciences\ New Mexico State University\ P.O. Box 30001\ Department 3MB\ Las Cruces, NM 88003 author: - Le Tran title: Initially Regular Sequences on Cycles and Depth of Unicyclic Graphs --- # Introduction {#intro} Let $R$ be a Noetherian local ring, let $\mathfrak{m}$ be the unique maximal ideal of $R$, and let $k = R/\mathfrak{m}$ denote the residue field of $R$. For a finitely generated module $M$, the *depth* of $M$ is an invariant that has been used extensively in the study of rings and modules. In general, $\mathop{\mathrm{depth}}M$ is bounded above by the dimension of $M$, and when equality is achieved, the module is Cohen-Macaulay. This is an important class of modules with many desirable properties. The depth of a module was first introduced in a homological setting, namely $$\mathop{\mathrm{depth}}(M) = \min\left\{i \mid \mathop{\mathrm{Ext}}^i_R(k,M) \ne 0\right\}.$$ In the Noetherian setting, we can also compute the depth of a module using the notion of a maximal $M$-*regular sequence* contained in $\mathfrak{m}$.
A sequence $f_1, \ldots, f_d\in \mathfrak{m}$ is said to be an *$M$-regular sequence* if $f_1$ is a *nonzero divisor* (*regular*) on $M$ and $f_i$ is regular on $M/(f_1,\ldots, f_{i-1})M$ for $2\le i\le d$. An $M$-regular sequence $f_1,\ldots, f_d\in \mathfrak{m}$ is maximal if for any $f_{d+1}\in\mathfrak{m}$, the sequence $f_1,\ldots,f_{d+1}$ is not an $M$-regular sequence. The depth of $M$ is the length of any maximal $M$-regular sequence in $\mathfrak{m}$; in fact, all maximal $M$-regular sequences have the same length, so one can take advantage of this to compute the depth of $M$. However, identifying an $M$-regular sequence is in general difficult. In this work, we focus our attention on the special class of monomial ideals, and in particular squarefree monomial ideals. We can take advantage of the underlying combinatorial structures to determine certain types of sequences, namely initially regular sequences as in [@FHM-ini], that in turn give a lower bound on the depth of modules of the form $R/I$, where $I$ is a monomial ideal in a polynomial ring $R$. Let $R$ be a polynomial ring over a field and $I$ a monomial ideal in $R$. In general, it is difficult to obtain the value of $\mathop{\mathrm{depth}}(R/I)$, or more generally $\mathop{\mathrm{depth}}(R/I^t)$, for $t\ge 1$. The function $F(t)=\mathop{\mathrm{depth}}(R/I^t)$ for $t\ge 1$ is called the *depth function* of $I$. A classic result of Burch [@Burch], later improved by Brodmann [@Brodmann], establishes the limiting behavior of the depth function, namely $\lim_{t\to\infty}\mathop{\mathrm{depth}}(R/I^t) \le \dim R - \ell(I)$, where $\ell(I)$ is the analytic spread of $I$. It was shown by Eisenbud and Huneke [@Eis-Hun] that equality holds if the associated graded ring $gr_R(I) = \displaystyle\bigoplus_{i=0}^\infty I^i/I^{i+1}$ of $I$ is Cohen-Macaulay.
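As a small worked illustration of computing depth from a maximal regular sequence (our example, not taken from the references):

```latex
% Take R = k[x,y] localized at (x,y) and M = R/(xy).
% The zero divisors on M form (x) \cup (y), the union of the associated
% primes of M, so f_1 = x + y is regular on M.  Modding out,
M/(x+y)M \;\cong\; k[x]/(x^2),
% where x \cdot x = 0, so every element of the maximal ideal is a zero
% divisor and the sequence cannot be continued.  Hence
\mathop{\mathrm{depth}} M \;=\; 1 \;=\; \dim M,
% and M is Cohen-Macaulay.
```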
However, the initial behavior of $\mathop{\mathrm{depth}}(R/I^t)$ is still mysterious and is sometimes wild, see [@Bandari; @et.; @al.]. It is natural to focus on finding lower bounds for $\mathop{\mathrm{depth}}(R/I^t)$. Herzog and Hibi showed that $\mathop{\mathrm{depth}}(R/I^t)$ is a decreasing function if all powers of $I$ have a linear resolution, see [@powerofidealHerzog]. In general, edge ideals and their powers do not have linear resolutions. Recently, Hà, Nguyen, Trung, and Trung proved that the depth function of a monomial ideal can be any numerical function that is asymptotically constant, see [@HNTT]. It is known that $\mathop{\mathrm{depth}}(R/I^t)$ is not necessarily a nonincreasing function in the case where $I$ is a squarefree monomial ideal, see [@Kaiser; @et.; @al. Theorem 13]. In general, for a monomial ideal $I$, lower bounds of $\mathop{\mathrm{depth}}(R/I)$ have been studied by various authors, see [@DHS; @DaoSchweig1; @DaoSchweig2; @FHM-ini; @FHM-hyperforest; @FHM-square; @FM-lower; @Lin-Mantero; @squarefreePopescu]. However, the exact value of $\mathop{\mathrm{depth}}(R/I)$ is not known for arbitrary monomial ideals. For an arbitrary homogeneous ideal $I$, one way to determine a lower bound of $\mathop{\mathrm{depth}}(R/I)$ is to pass to an initial ideal using the fact that $\mathop{\mathrm{depth}}(R/I) \ge \mathop{\mathrm{depth}}(R/\mathop{\mathrm{in_{>}}}(I))$, see [@HerzogHibi Theorem 3.3.4]. Moreover, equality holds when $\mathop{\mathrm{in_{>}}}(I)$ is squarefree, see [@Conca Corollary 2.7]. Exploiting this fact, Fouli, Hà, and Morey introduced the notion of *initially regular sequences* on $R/I$, see [@FHM-ini]. We recall the definition below. **Definition 1**. [@FHM-ini Definition 1.1] Let $R$ be a polynomial ring over a field and $I$ a proper ideal of $R$. Let $>$ be a fixed term order and set $I_1 = \mathop{\mathrm{in_{>}}}(I)$.
We say that a sequence of nonconstant polynomials $f_1,\ldots, f_q$ is an *initially regular sequence* on $R/I$ if for each $i$, $1\le i\le q$, $f_i$ is a regular element of $R/I_i$, where $I_i = \mathop{\mathrm{in_{>}}}(I_{i-1},f_{i-1})$ for $2\le i\le q$. These initially regular sequences play a role similar to that of regular sequences and give an effective lower bound for the depth of $R/I$ when $I$ is a homogeneous ideal, see [@FHM-ini Proposition 2.3]. In many instances, their length accurately computes the depth of $R/I$. Even though a priori the definition of initially regular sequences appears rather cumbersome, Fouli, Hà, and Morey show that if $I$ is a monomial ideal one can use the underlying combinatorics to determine such initially regular sequences that compute the depth of $R/I$ and in some cases the depth of $R/I^t$ for $t\ge 1$, see [@FHM-ini; @FHM-hyperforest; @FHM-square]. Recall that for a graph $G$ on $n$ vertices, say $x_1, \ldots, x_n$, we let $R=k[x_1, \ldots, x_n]$ denote the polynomial ring over a field $k$, where $x_1, \ldots, x_n$ are now variables by abuse of notation. The edge ideal of the graph $G$, denoted $I(G)$, is the monomial ideal in $R$ generated by the monomials of the form $x_ix_j$, where $\left\{x_i,x_j\right\}$ is an edge of the graph $G$. We refer the reader to [@graphtheory] for more information on graph theory. One case where the initially regular sequences established in [@FHM-ini Theorem 3.11] do not capture the depth of $R/I$ is the case of the edge ideal of a cycle of length $3n+2$, where $n \in\mathbb{N}$, see Remark [Remark 20](#sequences){reference-type="ref" reference="sequences"}. One of our goals is to determine an initially regular sequence of length $n+1$ on $R/I$, where $I$ is the edge ideal of a cycle of length $3n+2$; this is accomplished in Theorem [Theorem 21](#C_{3n+2}){reference-type="ref" reference="C_{3n+2}"}.
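For concreteness, the generators of the edge ideal of a cycle can be listed directly; the following minimal sketch (function name ours, not from the references) does exactly that:

```python
def cycle_edges(n):
    """Edges {x_i, x_{i+1}} of the cycle C_n (indices mod n); each edge
    corresponds to a monomial generator x_i * x_{i+1} of the edge ideal I(C_n)."""
    assert n >= 3
    return [(f"x{i}", f"x{i % n + 1}") for i in range(1, n + 1)]

# C_{3n+2} with n = 1 is the 5-cycle:
print(cycle_edges(5))
# [('x1', 'x2'), ('x2', 'x3'), ('x3', 'x4'), ('x4', 'x5'), ('x5', 'x1')]
```

For a cycle of length $3n+2$ there are $3n+2$ such generators, and the goal above is an initially regular sequence of length $n+1$ on the corresponding quotient.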
The idea of our approach comes from the fact that for a nonzero module $M$ of a Noetherian ring $R$, the set of zero divisors of $M$ is the union of all associated primes of $M$. We know that an associated prime of $R/I$ corresponds to a vertex cover for the graph of the ideal $I$, when $I$ is the edge ideal of a graph or hypergraph. Recall that for an ideal $I$ in a Noetherian ring $R$, we define the set of associated primes of the ideal $I$ as follows $$\mathop{\mathrm{Ass}}(R/I) = \left\{P\subseteq R \mid P \text{ is prime and }P= (I : c) \text{ for some }c\in R\right\}.$$ The set $\mathop{\mathrm{Ass}}(R/I^t), t\ge 1$ has been studied extensively in the case $I$ is the edge ideal of a graph, see [@Francisco; @et.; @al.], [@asspowerofsquarefree], [@hien2019saturation], [@Martinez]. In [@eardecom] Lam and Trung introduced the notion of *ear decompositions* to describe explicitly $\mathop{\mathrm{Ass}}(R/I^t)$ for any edge ideal $I$ and any $t\ge 1$. However, there is still very little known about $\mathop{\mathrm{Ass}}(R/I^t)$, when $I$ is an arbitrary monomial ideal, even for $t=1$. The paper is organized as follows. In Section [2](#associated primes){reference-type="ref" reference="associated primes"} for a monomial ideal $I$, we describe the minimal associated primes of $R/\mathop{\mathrm{in_{>}}}(I,f)$, where $f$ is a linear sum, in terms of the minimal associated primes of $R/I$, see Lemma [Lemma 5](#min primes leaf pair){reference-type="ref" reference="min primes leaf pair"} and Proposition [Proposition 7](#mainprop1){reference-type="ref" reference="mainprop1"}. Moreover, we show that the embedded associated primes of $R/I$ can be described by taking the union of minimal associated primes of $R/I$ and some appropriate variables coming from the set of *star neighbors* in the ideal $I$, see Definition [Definition 10](#star neighbors){reference-type="ref" reference="star neighbors"} and Theorem [Theorem 12](#mainthm){reference-type="ref" reference="mainthm"}. 
This description allows us to show that certain classes of monomial ideals have no embedded associated primes, Corollaries [Corollary 14](#no embedded){reference-type="ref" reference="no embedded"}, [Corollary 16](#bracket powers){reference-type="ref" reference="bracket powers"}. Using this description we can determine another type of regular element on monomial ideals, Corollary [Corollary 17](#new regular){reference-type="ref" reference="new regular"}. In Section [3](#depth section){reference-type="ref" reference="depth section"} we establish an initially regular sequence that realizes the depth of $R/I(C_{3n+2})$ for any $n\ge 1$, Theorem [Theorem 21](#C_{3n+2}){reference-type="ref" reference="C_{3n+2}"}. Furthermore, we accurately compute the depth of certain unicyclic graphs, Theorem [Theorem 26](#G_nm){reference-type="ref" reference="G_nm"}. # Associated primes of monomial ideals {#associated primes} In this section, we describe the associated primes of monomial ideals in a polynomial ring. Before we proceed we fix some notation. Let $R$ be a polynomial ring and $I$ a monomial ideal. Let $\mathcal{G}(I)$ denote the set of monomial generators of $I$. A variable $a$ is a *leaf* for $I$ if there exists a unique monomial $M \in \mathcal{G}(I)$ such that $a\mid M$. For any variable $x$ and any monomial $M$ in $R$, we let $d_x(M)$ denote the degree of $x$ in $M$. Moreover, $d_x(I)=\max\{d_{x}(M)\mid M\in \mathcal{G}(I)\}$. First, we will examine the minimal associated primes of the initial ideal $\mathop{\mathrm{in_{>}}}(I, f)$, where $f$ is a binomial, for an arbitrary monomial ideal $I$. **Lemma 2**. *Let $R$ be a polynomial ring and $I$ a monomial ideal in $R$. Let $a, b$ be variables in $R$ and suppose that $a$ is a leaf in $I$ and $ab\mid M$ for some $M\in \mathcal{G}(I)$. Let $I_1 = {\mathop{\mathrm{in_{>}}}(I,a+b)}$, where $>$ is an order such that $a>b$. 
If $Q\in\mathop{\mathrm{Min}}(R/I_1)$, then $Q = (P,a)$ for some $P\in \mathop{\mathrm{Min}}(R/I)$.* *Proof.* Suppose that $\mathcal{G}(I) = \left\{M_1,\ldots, M_p\right\}$. Without loss of generality, we may assume that $M_1 = a^{r_1}b^{r_2}x$, for some monomial $x$ with $r_1 = d_a(M_1)$ and $r_2 = d_b(M_1)$. Using [@FHM-ini Lemma 2.4] we see that $I_1=\langle a, \widehat{M} \mid M\in \mathcal{G}(I)\rangle$, where $\widehat{M} = b^{d_{a}(M)}\dfrac{M}{a^{d_{a}(M)}}$. Since $a$ is a leaf in $I$, then $M_1$ is the only generator that $a$ divides and therefore, $\widehat{M_1} = b^{r_1+r_2}x$ and $\widehat{M_i}=M_{i}$ for all $i>1$. Therefore, $$I_1=\langle a, b^{r_1+r_2}x, M_2, \ldots, M_p\rangle.$$ Let $Q$ be a minimal associated prime of $R/I_1$. Then we may write $Q = (a,\,x_1,\,\ldots,\,x_r)$ for some distinct variables $x_1,\ldots,x_r$. Consider $P = (x_1,\,\ldots,\,x_r)$. We claim that $P\in\mathop{\mathrm{Min}}(R/I)$. Indeed, since $Q \in \mathop{\mathrm{Min}}(R/I_1)$, then for each $2\le k \le p$, there exists a variable $x_i$ such that $x_i \mid M_k$ with $1\le i\le r$ and there exists a variable $x_j$ such that $x_j\mid \widehat{M_1}$ with $1\le j\le r$. Thus, $x_j \mid M_1$ as well and hence $I\subseteq P$. Suppose that $P$ is not minimal. Then there exists $P'\in\mathop{\mathrm{Min}}(R/I)$ such that $P'\subsetneq P$. Without loss of generality, we may assume that $x_1 \notin P'$. Since $P'\in\mathop{\mathrm{Min}}(R/I)$, it follows that there exists $x_{\ell}\in P'$ such that $x_{\ell} \mid M_1$ for some $2\le \ell \le r$. Since $x_{\ell}\ne a$, then $x_{\ell}\mid b^{r_2}x$ and hence $x_{\ell}\mid b^{r_1+r_2}x$. Moreover, since $I\subseteq P'$, then $M_2,\ldots, M_p \in P'$. Therefore, $I_1\subseteq (P', a)$. It is clear that $(P',a) \subsetneq Q$, contradicting the minimality of $Q$. Therefore, $P$ is minimal, completing the proof. ◻ Next, we recall the definition of a leaf pair from [@FHM-ini]. **Definition 3**.
[@FHM-ini Definition 4.10] Let $I$ be a monomial ideal in a polynomial ring $R$ and let $x, y$ be two leaves in $I$. If $M_1 \ne M_2$ are the unique monomial generators in $I$ such that $x\mid M_1$ and $y\mid M_2$, and there exist monomials $z, w\in R$ with $\gcd(z,w)=1$ such that $x\nmid z$, $z\mid M_1$, $y\nmid w, w\mid M_2$, and $zw\in I$, then $x$ and $y$ are called a *leaf pair*. If $I$ is the edge ideal of a graph, then a leaf pair is a pair of leaves that are distance $3$ apart. The following lemma establishes a result similar to Lemma [Lemma 2](#leaf){reference-type="ref" reference="leaf"} when $a,b$ is a leaf pair. **Lemma 4**. *Let $R$ be a polynomial ring and $I$ a monomial ideal in $R$. Suppose $a,\,b$ is a leaf pair and let $I_1 = \mathop{\mathrm{in_{>}}}(I, a+b)$ with respect to a term order $>$. If $Q\in\mathop{\mathrm{Min}}(R/I_1)$, then either $Q = (P,a)$ or $Q = (P,b)$ for some $P\in\mathop{\mathrm{Min}}(R/I)$.* *Proof.* Let $\mathcal{G}(I) = \left\{M_1,\ldots,M_p\right\}$. Without loss of generality we may assume that $a>b$, and $M_1$, $M_2$ are the unique monomials that are divisible by $a, b$, respectively. Let $r_1 = d_a(M_1)$ and $r_2 = d_b(M_2)$. Then we can write $M_1 = a^{r_1}zx$ and $M_2 = b^{r_2}wy$, with $z, w,x,y$ monomials, $a\nmid z$, $b\nmid w$, and $zw\in \mathcal{G}(I)$. Without loss of generality, assume $M_3 = zw$. By [@FHM-ini Lemma 2.4], we have $I_1 = (a, b^{r_1}zx, b^{r_2}wy, zw, M_4,\ldots, M_p)$. Let $Q\in \mathop{\mathrm{Min}}(R/I_1)$ and notice that $a\in Q$. We have two cases to consider. First, suppose $b\in Q$. Then we can write $Q=(a, b, x_1, \ldots, x_r)$ with $x_1,\ldots, x_r$ distinct variables. Notice that since $a,\,b\nmid M_i$ for all $i\ge 3$, then $(zw, M_4,\ldots, M_p) \subseteq (x_1, \ldots, x_r)$. Moreover, there exists $x_i$ with $1\le i \le r$ such that $x_i \mid zw$. Then either $x_i \mid z$ or $x_i \mid w$. Suppose first that $x_i \mid z$. Let $P=(b, x_1, \ldots, x_r)$.
As $x_i \mid z$, then $x_j \nmid wy$ for all $j$, since otherwise $b$ would be redundant in $Q$, contradicting the minimality of $Q$. Notice that $x_i \mid a^{r_1}zx$ and thus $I \subseteq P$. We claim that $P\in \mathop{\mathrm{Min}}(R/I)$. Suppose instead that there exists a prime $P'\subsetneq P$ such that $I\subseteq P'$. If $b\not\in P'$, then there exists $x_j \in P'$ such that $x_j \mid wy$, a contradiction since $x_j\nmid wy$ for all $j$. Therefore, $b\in P'$ and, without loss of generality, we may assume that $x_1\notin P'$. Hence $(zw, M_4,\ldots, M_p) \subseteq (x_2, \ldots, x_r)$ and since $b\in P'$, $I\subseteq P'$, and $x_1\neq a$, then $I_1 \subseteq (P',a) \subsetneq (P,a)=Q$, contradicting the minimality of $Q$. Hence $P\in \mathop{\mathrm{Min}}(R/I)$ and $Q=(P,a)$. Next suppose $x_i \mid w$. Then $x_j \nmid zx$ for all $j$, since otherwise $b$ would be redundant in $Q$. Let $P=(a, x_1, \ldots, x_r)$. We claim that $P\in \mathop{\mathrm{Min}}(R/I)$. Suppose instead that there exists a prime $P'$ with $I\subseteq P' \subsetneq P$. Notice that $a \in P'$, since $x_j \nmid zx$ for all $j$. Therefore, without loss of generality, we may assume that $x_1\not\in P'$. Hence $(zw, M_4,\ldots, M_p) \subseteq (x_2, \ldots, x_r)$ and thus $I_1 \subseteq (P', b)\subsetneq (P,b)=Q$, contradicting the minimality of $Q$. Therefore, $P\in \mathop{\mathrm{Min}}(R/I)$ and $Q=(P, b)$ in this case. Finally, it remains to consider the case $b\not \in Q$. We may write $Q=(a, y_1, \ldots, y_t)$ for some variables $y_1, \ldots, y_t$. Let $P=(y_1, \ldots, y_t)$ and notice that since $b\not\in Q$, then there exists $y_j$ with $1\le j \le t$ such that $y_j \mid zx$. Hence $y_j \mid a^{r_1}zx$ and thus $I\subseteq P$. As before, we claim that $P \in \mathop{\mathrm{Min}}(R/I)$. Indeed, if there exists a prime $P' \subsetneq P$ with $I\subseteq P'$, then without loss of generality we may assume that $y_1 \not\in P'$.
Since $a\not\in P'$, we have $zx\in P'$ and thus $I_1 \subseteq (P',a) \subsetneq (P,a)=Q$, contradicting the minimality of $Q$. Thus $P\in \mathop{\mathrm{Min}}(R/I)$ and $Q=(P,a)$ in this case. ◻ The converse of Lemma [Lemma 4](#leafpair){reference-type="ref" reference="leafpair"} is also true in the case of edge ideals of graphs. Recall that if $I$ is the edge ideal of a graph and $x$ is a vertex of the graph, then $N(x)$ denotes the set of neighbors of $x$, that is, $N(x)=\{y\in R\mid y \mbox{ is a variable, } xy\in \mathcal{G}(I)\}$. **Lemma 5**. *Let $R$ be a polynomial ring, $I$ the edge ideal of a graph, and let $P\in\mathop{\mathrm{Min}}(R/I)$. Suppose that $a,\,b$ is a leaf pair in $I$ and let $I_1=\mathop{\mathrm{in_{>}}}(I,a+b)$ with respect to an order such that $a > b$. If $a\not\in P$, then $(P,a)\in \mathop{\mathrm{Min}}(R/I_1)$ and if $a\in P$, then $(P,b)\in \mathop{\mathrm{Min}}(R/I_1)$.* *Proof.* Since $a,\,b$ is a leaf pair, there exist $M_1,\,M_2, M_3\in \mathcal{G}(I)$ such that $M_1 = ax$, $M_2 = by$ and $M_3 = xy$, where $x,\,y$ are distinct variables of $R$. Let $\mathcal{G}(I)=\{M_1, \ldots, M_p\}$ and notice that by construction $a, b \nmid M_i$ for all $i\ge 3$. By [@FHM-ini Lemma 2.4], we have $$I_1 = \mathop{\mathrm{in_{>}}}(I,\,a+b) = \left(a,\,bx,\,M_i\mid 2\le i\le p\right).$$ Let $P\in\mathop{\mathrm{Min}}(R/I)$. Then $P = (x_{1},\ldots, x_{s})$ for some distinct variables $x_{1},\ldots, x_{s}$. We remark that since $M_3 = xy\in\mathcal{G}(I)$, then $a$ and $b$ cannot both be in $P$. Moreover, note that every associated prime of $R/I_1$ contains the variable $a$ by the construction of $I_1$. We have two cases to consider. First, suppose that $a\notin P$. We claim that $Q=(P,a)\in \mathop{\mathrm{Min}}(R/I_1)$. Since $ax\in I$ and $a\not\in P$, then $x\in P$. Hence $I_1 \subseteq (P,a)=Q$. Suppose $Q\not\in \mathop{\mathrm{Min}}(R/I_1)$. Then there exists a prime $Q'$ such that $I_1\subseteq Q'\subsetneq Q$.
Therefore, $a\in Q'$ and thus without loss of generality we may assume that $x_1\not\in Q'$. If $b\notin P$, then $y\in P$. Moreover, we have that $b\notin Q'$ and thus $x, y\in Q'$. Hence $x_{1}\notin \left\{a, b, x, y\right\}$. Now, since $x_{1}\notin Q'$, then $N(x_{1})\subseteq Q'$, and thus $N(x_{1})\subseteq P$. Now, since $x_{1}\notin \left\{a, b, x, y\right\}$, then for all $v\in N(x_{1})$, there exists $M_i\in I_1$ with $i\ge 4$ such that $M_i = vx_{1}$. Notice that $M_i\in I$ as well since $i\ge 4$. Hence, we have that $x_{1}\in P$ and $N(x_{1})\subseteq P$, contradicting the minimality of $P$. Therefore, $Q = (P,a)\in \mathop{\mathrm{Min}}(R/I_1)$. If $b\in P$, then we see that $I_1\subseteq (P,a)$ and $y\notin P$. Since $y\notin P$, then $y\notin Q'$, and hence $x, b\in Q'$ and $x_{1}\ne x, b$. Again, $x_{1}\ne y$ since $y\notin P$ and $x_{1}\in P$. Therefore, $x_{1}\notin\left\{a, b, x, y\right\}$. By a similar argument as above, we arrive at a contradiction. Hence, $Q = (P, a) \in \mathop{\mathrm{Min}}(R/I_1)$. Finally, suppose $a\in P$. Then we claim that $Q=(P,b) \in \mathop{\mathrm{Min}}(R/I_1)$. Since $a\in P$, then $b\not\in P$ and thus $x\notin P$ and $y\in P$. Clearly, we have $I_1 \subseteq (P, b)$. Suppose there exists a prime $Q'$ such that $I_1\subseteq Q'\subsetneq (P, b)$. Then without loss of generality, we may assume that $x_{1}\notin Q'$. Since $x\notin P$, then $x\notin Q'$, and hence $b, y\in Q'$. So $x_{1}\ne b, y$. Moreover, $x_{1}\ne x$ since $x_{1}\in P$ and $x\notin P$. Thus, $x_{1}\notin\left\{a, b, x, y\right\}$. Using a similar argument as above, we have a contradiction again. Therefore, $Q = (P, b)\in \mathop{\mathrm{Min}}(R/I_1)$. ◻ Our next goal is to investigate the minimal associated primes of $\mathop{\mathrm{in_{>}}}(I, f)$, where $f$ is a trinomial. We recall a lemma from [@FHM-square] which gives a description of the initial ideal $\mathop{\mathrm{in_{>}}}(I,f)$, when $f$ is a trinomial. **Lemma 6**.
*[@FHM-square Lemma 4.1][\[lemmatrinomial\]]{#lemmatrinomial label="lemmatrinomial"} Let $I$ be a monomial ideal. Let $a, b, c$ be variables satisfying the following conditions:* 1. *$d_{a}(I), d_b(I), d_c(I) \le 1$ and* 2. *If $M\in\mathcal{G}(I)$ and $a \mid M$, then either $b\mid M$ or $c \mid M$.* *Furthermore, assume that $bc\nmid M$ for any $M\in\mathcal{G}(I)$. If $>$ is a term order such that $a > b > c$, then $$\mathop{\mathrm{in_{>}}}(I, a+b+c) = \langle a, \widehat{M}, \mathrm{lcm}(X,M')c^2 \mid M, bX, acM'\in \mathcal{G}(I)\rangle.$$* The next proposition establishes a connection between the minimal associated primes of an ideal and its initial ideal with a trinomial. **Proposition 7**. *Let $R$ be a polynomial ring, $I$ a monomial ideal in $R$, and $a,b,c$ distinct variables in $R$ such that $\mathcal{G}(I)= \left\{ab, ac, M_1,\ldots, M_p\right\}$ with $a \nmid M_i$ for $1 \le i \le p$. Moreover, suppose that $d_b(I) = d_c(I) = 1$. Let $I_1 = \mathop{\mathrm{in_{>}}}(I, a+b+c)$, where $>$ is an order such that $a > b > c$. If $Q \in\mathop{\mathrm{Min}}(R/I_1)$, then there exists $P\in\mathop{\mathrm{Min}}(R/I)$ such that either $Q = (P,a)$ if $c\in P$ or $Q = (P,b)$ if $c\not\in P$.* *Proof.* Let $Q\in\mathop{\mathrm{Min}}(R/I_1)$. Without loss of generality, let $M_1,\ldots,M_r$ be the monomials such that $b\mid M_i$ for $1 \le i\le r$, that is, $M_i = bM'_i$ for some monomial $M'_i$ and $b\nmid M_j$ for any $j>r$. If there do not exist such monomials, then $r=0$ and the argument still goes through. By Lemma [\[lemmatrinomial\]](#lemmatrinomial){reference-type="ref" reference="lemmatrinomial"}, we have that $$I_1 = (a, b^2, bc, M_1,\ldots, M_p, M'_1c^2, \ldots, M'_rc^2).$$ Notice that from the description of $I_1$, one can see that $a, b\in Q$ for any $Q\in\mathop{\mathrm{Min}}(R/I_1)$. Hence we may write $Q = (a,b,y_1,\ldots,y_t)$ for some distinct variables $y_1, \ldots, y_t$.
We consider two possible cases, according to whether $c\in Q$ or $c\not\in Q$. First, assume $c\notin Q$. Let $P = (a, y_1,\ldots, y_t)$. We will show that $P\in\mathop{\mathrm{Min}}(R/I)$ and thus $Q=(P,b)$ in this case. Since $a\nmid M_i$ for $1\le i\le p$, then $a\nmid M_i'c^2$ for $1\le i\le r$ and hence it follows that $M_1,\ldots, M_p, M'_1c^2,\ldots, M'_rc^2 \in (b, y_1,\ldots, y_t)$. Notice that since $d_b(I)=1$, then for each $1\le i\le r$, $b\nmid M'_ic^2$. Hence for each $1\le i \le r$ there exists $1\le j\le t$ such that $y_j \mid M'_ic^2$. Since $y_j \ne c$, then $y_j\mid M'_i$ and hence $y_j\mid M_i$. Thus $M_1, \ldots, M_r \in (y_1, \ldots, y_t)$. Moreover, since $b\nmid M_i$ for each $r+1 \le i\le p$, then $M_{r+1}, \ldots, M_p\in (y_1, \ldots, y_t)$. Therefore, $I\subseteq P$. Suppose now that $P$ is not minimal, that is, there exists $P'\in\mathop{\mathrm{Min}}(R/I)$ such that $P'\subsetneq P$. Then, without loss of generality, we may assume that $y_1\notin P'$. Notice that $y_1 \ne a$ and hence $I\subseteq P'\subseteq (a, y_2,\ldots, y_t)$. For $1\le i\le r$, there exists $j$ with $2\le j \le t$ such that $y_j\mid M_i = bM'_i$. Since $y_j\ne b$, then $y_j\mid M'_i$ and thus, $y_j\mid M'_ic^2$. For $r+1\le i\le p$, there exists $j$ with $2\le j\le t$ such that $y_j\mid M_i$. Therefore, $M_1,\ldots, M_p, M'_1c^2,\ldots, M'_rc^2 \in (y_2,\ldots, y_t)$. Hence, $I_1\subseteq (a, b, y_2,\ldots, y_t)\subsetneq Q$, contradicting the minimality of $Q$. Therefore, $P\in\mathop{\mathrm{Min}}(R/I)$ and $Q = (P, b)$. For the remaining case, assume $c\in Q$. Then we may write $Q = (a, b, c, y_1,\ldots, y_t)$. Let $P = (b, c, y_1,\ldots, y_t)$. It suffices to show that $P\in\mathop{\mathrm{Min}}(R/I)$. Clearly, $b\mid ab$ and $c\mid ac$. For $1\le i\le r$, by our assumption, $b\mid M_i$. For each $M_i$ with $r+1\le i\le p$ such that $c\nmid M_i$, there exists $y_j\mid M_i$ for some $1\le j\le t$. Thus, $I\subseteq P$.
Suppose there exists $P'\in\mathop{\mathrm{Min}}(R/I)$ such that $P'\subsetneq P$. Notice that since $a\notin P$, then $a\notin P'$, and hence $b, c\in P'$. Then without loss of generality, we may assume that $y_1\notin P'$. We see that $I\subseteq P'\subseteq (b, c, y_2,\ldots, y_t)$. Clearly, one can see that $M_1,\ldots, M_r, M'_1c^2,\ldots, M'_rc^2 \in (b, c)$. Moreover, for each $M_i$ with $r+1\le i\le p$ such that $c\nmid M_i$, there exists $y_j\mid M_i$ for some $2\le j\le t$. Therefore, $I_1\subseteq (a, b, c, y_2,\ldots, y_t)\subsetneq Q$, which contradicts the minimality of $Q$. Therefore, $P\in\mathop{\mathrm{Min}}(R/I)$ and $Q = (P, a)$. ◻ Our next goal is to describe the embedded associated primes of an arbitrary monomial ideal $I$. We will use the concept of polarization, a convenient tool that will help us to work with the minimal associated primes in the polarized ring. We recall the definition of polarization below. **Definition 8**. [@Faradi-polarization Definition 2.1][\[pola-def\]]{#pola-def label="pola-def"} Let $R = k[x_1,\ldots,x_n]$ be a polynomial ring over a field $k$. Suppose $M = x_1^{a_1}\cdots x_n^{a_n}$ is a monomial in $R$ with $a_i \ge 1$ for all $1\le i\le n$. We define the *polarization* of $M$ to be the squarefree monomial $$\mathcal{P}(M) = x_{1,1}x_{1,2}\cdots x_{1,a_1}\cdots x_{n,1}\cdots x_{n,a_n}$$ in the polynomial ring $R^{\mathop{\mathrm{pol}}} = k[x_{i,j} \mid 1\le i \le n, 1\le j \le a_i]$. If $I$ is an ideal of $R$ generated by a set of monomials $M_1,\ldots,M_q$, then the polarization of $I$ is defined as $$I^{\mathop{\mathrm{pol}}} = (\mathcal{P}(M_1),\ldots, \mathcal{P}(M_q)),$$ which is a squarefree monomial ideal in the polynomial ring $R^{\mathop{\mathrm{pol}}}$. The following lemma establishes the connection between the associated primes of an ideal and its polarization. **Lemma 9**.
*[@Faradi-polarization Corollary 2.6][\[polarization primes\]]{#polarization primes label="polarization primes"} Let $I$ be a monomial ideal in a polynomial ring $R = k[x_1,\ldots,x_n]$, and let $I^{\mathop{\mathrm{pol}}}$ be its polarization in $R^{\mathop{\mathrm{pol}}}= k[x_{i,j}]$. Then $(x_{i_1},\ldots, x_{i_r})\in\mathop{\mathrm{Ass}}(R/I)$ if and only if $(x_{i_1,c_1},\ldots, x_{i_r,c_r})\in\mathop{\mathrm{Ass}}(R^{\mathop{\mathrm{pol}}}/I^{\mathop{\mathrm{pol}}})$ for some positive integers $c_1,\ldots, c_r$.* In the setting of Lemma [\[polarization primes\]](#polarization primes){reference-type="ref" reference="polarization primes"}, for an associated prime $$W=(x_{i_1,c_1},\ldots, x_{i_r,c_r})\in\mathop{\mathrm{Ass}}(R^{\mathop{\mathrm{pol}}}/I^{\mathop{\mathrm{pol}}}),$$ we denote by $W^{\mathop{\mathrm{depol}}}$ the depolarization of $W$, that is, $W^{\mathop{\mathrm{depol}}}=(x_{i_1},\ldots, x_{i_r})\in\mathop{\mathrm{Ass}}(R/I)$. We introduce a special set of neighbors that we will use for the next theorem. **Definition 10**. Let $I$ be a monomial ideal in a polynomial ring. For a variable $w$, we define the set $$N^*(w) = \left\{z \in R \mid z \mbox{ is a variable, } z\neq w, \exists M\in\mathcal{G}(I) \mbox{ with } zw\mid M \mbox{ and }d_w(M) < d_w(I)\right\}.$$ We give an example to illustrate these special neighbor sets. **Example 11**. Let $R=k[a,b,c,d]$ be a polynomial ring over a field $k$ and let $I=(a^2bc, ad, b^3cd)$ be a monomial ideal in $R$. Notice that $N^{*}(w)\neq \emptyset$ for $w=a,b$ since $d_{a}(ad)<d_a(a^2bc)$ and $d_b(a^2bc) <d_b(b^3cd)$. Moreover, $N^{*}(a)=\{d\}$ and $N^{*}(b)=\{a,c\}$. We are now ready to prove the main theorem of this section. We show that for an arbitrary monomial ideal $I$, any embedded associated prime of $R/I$ can be expressed as the sum of a minimal associated prime of $R/I$ with an ideal of some additional appropriate variables. **Theorem 12**.
*Let $R$ be a polynomial ring over a field and $I$ be a monomial ideal in $R$. Let $w_1,\ldots, w_n$ be the distinct variables in $R$ such that $N^{*}(w_i)\neq \emptyset$. Let $Q\in\mathop{\mathrm{Ass}}(R/I)$ be an embedded associated prime. Then $Q = (Q', z_1, z_2,\ldots, z_t)$ for some $Q'\in\mathop{\mathrm{Min}}(R/I)$ and $z_j\in N^*(w_i)$ for some $1\le i\le n$ and $1\le j\le t$.* *Proof.* Suppose that $Q$ is an embedded associated prime ideal of $R/I$. Then there exists $Q'\in\mathop{\mathrm{Min}}(R/I)$ such that $Q'\subsetneq Q$. We claim that $$Q = (Q', z_1,\ldots, z_t \mid z_j \in N^*(w_i), 1\le i \le n, 1\le j\le t).$$ Since $Q'\subsetneq Q$, there exists a variable $v\in Q\setminus Q'$. Suppose that $v\notin N^*(w_i)$ for all $1\le i\le n$. Let $$\mathcal{A} = \left\{N\in\mathcal{G}(I) \mid v\mid N\right\}.$$ Since $v\notin Q'$ and $Q'\in\mathop{\mathrm{Min}}(R/I)$, then for each $N\in\mathcal{A}$, there exists a variable $t_N\in Q'$ such that $vt_N \mid N$. Let $N = v^{m_N}t_N^{e_N}N'$ for some monomial $N'$, $m_N, e_N \ge 1$, and $v,t_N\nmid N'$. By Lemma [\[polarization primes\]](#polarization primes){reference-type="ref" reference="polarization primes"} we have that $Q = W^{\text{depol}}$ for some $W\in\mathop{\mathrm{Min}}(R^{\mathop{\mathrm{pol}}}/I^{\mathop{\mathrm{pol}}})$. Let $t_{N, 1},\ldots, t_{N, e_N}$ be the polarizing variables of $t_N$ with $t_{N, 1} = t_N$ and $v_{1},\ldots, v_{m_N}$ be polarizing variables of $v$ with $v_1 = v$ in $R^{\mathop{\mathrm{pol}}}$. We have two cases to consider, whether $v=w_i$ for some $i$ or $v\neq w_i$ for all $i$. First suppose that $v=w_{i_0}$ for some $1\le i_0 \le n$. By our assumption $w_{i_0}\not\in N^{*}(w_i)$ for all $i$. Hence for each $N\in\mathcal{A}$, by our assumption, we have that $t_N\ne w_i$ for all $1\le i\le n$. Since $v\in Q$, there exists $v_q \in W$ for some $1 \le q \le m_N$.
If there exists some $L\in\mathcal{G}(I)$ with $0 < d_{t_N}(L) < e_N$ or $d_{t_N}(L) > e_N$, then $t_N = w_u$ for some $1 \le u \le n$, in other words, $v\in N^{*}(w_u)$, a contradiction. Hence, $d_{t_N}(I) = e_N$ and for each $M\in \mathcal{G}(I)$ such that $t_N \mid M$, then $d_{t_N}(M)=e_N$. Therefore, for each $N\in\mathcal{A}$, there exists $t_N\in Q'$ such that $d_{t_N}(N) = d_{t_N}(I)$. Since $t_N\in Q'$, then $t_N\in Q$. Hence $t_{N,j}\in W$ for some $1\le j\le e_N$. Thus, $v_q$ and $t_{N,j}$ are both in $W$, contradicting the minimality of $W$. Finally, suppose that $v \ne w_i$ for all $1\le i\le n$. Since $v\ne w_i$, then $d_v(I) = d_v(N)$ for all $N\in\mathcal{A}$, that is, $m_N = d_v(I) = m$ for some $m\in\mathbb{N}$. Since $v\in Q$, then there exists $v_q\in W$ for some $1 \le q \le m$. Notice that if $d_{t_N}(N) < d_{t_N}(I)$, then $t_N = w_{i_0}$ for some $1 \le i_0\le n$ and hence $v\in N^*(w_{i_0})$, which is a contradiction. Therefore, $e_N = d_{t_N}(I)$ for all $N\in\mathcal{A}$. Since $t_N \in Q'$, then $t_N\in Q$. Hence $t_{N,j} \in W$ for some $1 \le j\le e_N$ and $v_q$ and $t_{N,j}$ are both in $W$, contradicting the minimality of $W$ as before. ◻ The next example illustrates the statement of Theorem [Theorem 12](#mainthm){reference-type="ref" reference="mainthm"} and how the embedded associated primes arise from minimal associated primes. **Example 13**. Consider the ideal $I = (a^3bc, a^2d,b^2c,ce^2,de,c^2f,eg)$ in the polynomial ring $R=\mathbb{Q}[a,b,c,d,e,f,g]$. Notice that $N^*(w)\neq \emptyset$ for $w\in \{a,b,c,e\}$. Indeed, $N^*(a) = \left\{d\right\}$, $N^*(b) = \left\{a,c\right\}$, $N^*(c) = \left\{a, b, e\right\}$, $N^*(e) = \left\{d, g\right\}$.
Using Macaulay $2$ [@m2], the list of associated primes of $I$ is: $$\begin{aligned} P_1 &= (a, c, e), P_2 = (c, d, e), P_3 = (c, d, g), P_4 = (a, b, e, f), P_5 = (b, d, e, f),\\ Q_1 &= (a, b, c, e), Q_2 = (b, c, d, e), Q_3 = (a, b, c, d, e), Q_4 = (a, b, d, e, f),\\ Q_5 &= (b, c, d, e, g), Q_6 = (b, d, e, f, g), Q_7 = (a, b, c, d, e, g), Q_8 = (a, b, d, e, f, g).\end{aligned}$$ Notice that $P_1, \ldots, P_5 \in \mathop{\mathrm{Min}}(R/I)$ and $Q_1, \ldots, Q_8$ are embedded associated primes of $R/I$. Notice that $Q_1 = (P_1, b)$, $Q_2 = (P_2 , b)$, $Q_3 = (P_1 , b, d) = (P_2, a, b)$, $Q_4 = (P_4, d) = (P_5, a)$, $Q_5 = (P_2, b, g) = (P_3, b, e)$, $Q_6 = (P_5, g)$, $Q_7 = (P_1, b, d, g) = (P_2, a, b, g) = (P_3, a, b, e)$, $Q_8 = (P_4, d, g) = (P_5, a, g)$. In particular, notice that $Q_7=(P_1,b,d,g)$ with $b\in N^{*}(w_3)$ and $d,g\in N^{*}(w_4)$, showing that the variables in an embedded associated prime can come from one or more of the sets $N^{*}(w_i)$. For any monomial $M$, let $\widetilde{M}$ denote the squarefree part of $M$. That is, if $M=x_1^{a_1}\cdots x_n^{a_n}$, then $\widetilde{M}=x_1\cdots x_n$. **Corollary 14**. *Let $I$ be a monomial ideal in a polynomial ring $R$ and suppose that for every variable $x\in R$ we have $d_{x}(M)=d_x(I)$ for every monomial $M \in \mathcal{G}(I)$ such that $x\mid M$. Then $R/I$ has no embedded primes. Furthermore, $\mathop{\mathrm{Min}}(R/I)=\mathop{\mathrm{Min}}(R/J)$, where $J=\langle \widetilde{M}\mid M\in \mathcal{G}(I)\rangle$.* *Proof.* This follows immediately from Theorem [Theorem 12](#mainthm){reference-type="ref" reference="mainthm"} by noticing that $N^*(x_i)=\emptyset$ for all $i$. ◻ The following example illustrates Corollary [Corollary 14](#no embedded){reference-type="ref" reference="no embedded"}. **Example 15**. Let $R=k[x_1,\ldots, x_4]$ and let $I=(x_1^3x_2,x_2x_3^2, x_3^2x_4^4,x_1^3x_4^4)$.
Then $I$ has the same associated primes as $J=(x_1x_2,x_2x_3,x_3x_4,x_1x_4)$, that is $$\mathop{\mathrm{Ass}}(R/I)=\mathop{\mathrm{Min}}(R/I)=\mathop{\mathrm{Ass}}(R/J)=\mathop{\mathrm{Min}}(R/J)=\{(x_1,x_3), (x_2,x_4)\}.$$ Recall that for an ideal $I$ in a Noetherian ring $R$ we have $I^{[n]}=\{f^n \mid f\in I\}$. The following shows that the ideals $I^{[n]}$ have no embedded associated primes when $I$ is squarefree or satisfies the assumptions of Corollary [Corollary 14](#no embedded){reference-type="ref" reference="no embedded"}. **Corollary 16**. *Let $I$ be either a squarefree monomial ideal or a monomial ideal that satisfies the assumptions of Corollary [Corollary 14](#no embedded){reference-type="ref" reference="no embedded"} in a polynomial ring $R$. Then for all $n\ge 1$, $I^{[n]}$ has no embedded associated primes.* Finally, we determine another type of regular element on the quotient by a monomial ideal. This is a generalization of [@FHM-ini Lemma 3.9], where the degree of all but one variable was required to be $1$. **Corollary 17**. *Let $I$ be a monomial ideal in a polynomial ring $R$ and suppose that for every variable $x\in R$ we have $d_{x}(M)=d_x(I)$ for every monomial $M \in \mathcal{G}(I)$ such that $x\mid M$. Let $b_0, b_1, \ldots, b_t$ be distinct variables in $R$ such that if $b_0\mid M$ for some monomial $M\in \mathcal{G}(I)$, then $b_i \mid M$ for some $1\le i\le t$. If $f=b_0+\cdots+b_t$, then $f$ is regular on $R/I$.* *Proof.* First notice that by Corollary [Corollary 14](#no embedded){reference-type="ref" reference="no embedded"} we know $R/I$ has no embedded associated primes and in fact $\mathop{\mathrm{Min}}(R/I)=\mathop{\mathrm{Min}}(R/J)$, where $J=\langle \widetilde{M} \mid M \in \mathcal{G}(I)\rangle$. By [@FHM-ini Lemma 3.9], $f$ is regular on $R/J$ and therefore, $f$ is regular on $R/I$.
◻ We close with an example of the new type of regular elements obtained in Corollary [Corollary 17](#new regular){reference-type="ref" reference="new regular"}. **Example 18**. Let $R=\mathbb{Q}[x_1, x_2, x_3, x_4, x_5]$ and let $I=(x_1^3x_2^4,x_1^3x_5^2, x_2^4x_3^2, x_3^2x_4^5,x_4^5x_5^2)$. Since $d_{x_i}(I)>1$ for all $i$, the results in [@FHM-ini] do not apply here. However, $I$ satisfies the assumptions of Corollary [Corollary 17](#new regular){reference-type="ref" reference="new regular"}. Let $J=\langle \widetilde{M} \mid M \in \mathcal{G}(I)\rangle$ and notice that $J=(x_1x_2,x_1x_5,x_2x_3,x_3x_4,x_4x_5)$, which is the edge ideal of a pentagon. The element $f=x_1+x_2+x_5$ is regular on $R/J$ by [@FHM-ini Lemma 3.9], and since $\mathop{\mathrm{Ass}}(R/J)=\mathop{\mathrm{Ass}}(R/I)$, the element $f$ avoids every prime in $\mathop{\mathrm{Ass}}(R/I)$; in other words, $f$ is regular on $R/I$. # Initially regular sequences on cycles and the depth of unicyclic graphs {#depth section} In this section, we will use the description of the associated primes of monomial ideals from Section [2](#associated primes){reference-type="ref" reference="associated primes"} to construct initially regular sequences on cycles. Moreover, using these sequences we will compute exactly the depth of certain unicyclic graphs. We first recall the following result that gives the depth of any cycle. For $n\ge3$ let $C_n$ denote a cycle on $n$ vertices and let $I(C_n)$ be the edge ideal of the cycle in $R=k[x_1, \ldots, x_n]$, where $k$ is a field, that is $I(C_n)=(x_1x_2, x_2x_3, \ldots, x_{n-1}x_n, x_1x_n)$. Note that $[n]=\{1, \ldots, n\}$ for any $n\in \mathbb{N}$. **Theorem 19**. *[@Jacq Corollary 7.6.30][\[Jacq\]]{#Jacq label="Jacq"} Let $n\ge3$, $C_n$ be a cycle on $n$ vertices, and $I(C_n)$ be the edge ideal of the cycle in the ring $R=k[x_1, \ldots, x_n]$, where $k$ is a field. Then $$\mathop{\mathrm{depth}}(R/I(C_n)) = \lceil\dfrac{n-1}{3}\rceil.$$* **Remark 20**.
Let $n\ge 1$ and let $I_1=I(C_{3n})$ denote the edge ideal of a cycle of length $3n$ in the polynomial ring $R_1=k[x_1, \ldots, x_{3n}]$. Let $I_2=I(C_{3n+1})$ denote the edge ideal of a cycle of length $3n+1$ in $R_2=k[x_1, \ldots, x_{3n+1}]$ and let $I_3=I(C_{3n+2})$ denote the edge ideal of a cycle of length $3n+2$ in $R_3=k[x_1, \ldots, x_{3n+2}]$. According to Theorem [\[Jacq\]](#Jacq){reference-type="ref" reference="Jacq"} we have $\mathop{\mathrm{depth}}R_1/I_1=\mathop{\mathrm{depth}}R_2/I_2=n$, whereas $\mathop{\mathrm{depth}}R_3/I_3=n+1$. Let $h_1 = x_1+x_{3n} + x_2$ and $h_i=x_{3i-2}+x_{3i-3}+x_{3i-1}$ for all $2\le i\le n$ and observe that $h_1, \ldots, h_n$ is an initially regular sequence on $R_1/I_1$ that realizes the depth of $R_1/I_1$, by [@FHM-ini Theorem 3.11]. Similarly, let $g_1 = x_1+x_{3n+1} + x_2$ and $g_i=x_{3i-2}+x_{3i-3}+x_{3i-1}$ for all $2\le i\le n$. Then $g_1, \ldots, g_n$ is an initially regular sequence that realizes the depth of $R_2/I_2$. However, [@FHM-ini Theorem 3.11] does not provide a method to construct an initially regular sequence of length $n+1$ that would realize the depth of $R_3/I_3$. In fact, [@FHM-ini Theorem 3.11] shows that there is an initially regular sequence of length $n$ on $R_3/I_3$; that is, $f_1 = x_1+x_{3n+2} + x_2$, $f_i=x_{3i-2}+x_{3i-3}+x_{3i-1}$ for $2\le i\le n$ is an initially regular sequence on $R_3/I_3$ with respect to an appropriate term order. As noted in Remark [Remark 20](#sequences){reference-type="ref" reference="sequences"} we have initially regular sequences that realize the depth for $I(C_{3n})$ and $I(C_{3n+1})$, for any $n$. Moreover, we have an initially regular sequence of length $n$ on $R_3/I(C_{3n+2})$. The next theorem establishes an initially regular sequence that realizes the depth for the edge ideal of a cycle of length $3n+2$ for $n\ge 1$. **Theorem 21**.
*Let $n\ge 1$, $C_{3n+2}$ be a cycle on $3n+2$ vertices, and let $I=I(C_{3n+2})$ be the edge ideal of the cycle $C_{3n+2}$ in the ring $R=k[x_1, \ldots, x_{3n+2}]$. Let $f_1 = x_1+x_{3n+2} + x_2$, $f_i=x_{3i-2}+x_{3i-3}+x_{3i-1}$ for $2\le i\le n$, and $f_{n+1}=x_{3n} + x_{3n+1} + \sum \limits_{i=1}^{n}x_{3i-1}$. The sequence $f_1, \ldots, f_{n+1}$ is an initially regular sequence on $R/I$ with respect to a term order such that $x_1 > x_{3n+2} > x_2$, and $x_{3i-2}>x_{3i-3}>x_{3i-1}$, for all $2\le i\le n$.* *Proof.* We fix some notation. For all $i\in [n]$ and $2\le j \le n$, let $a_i=x_{3i-2}$, $b_1=x_{3n+2}$, $b_{j}=x_{3j-3}$, $c_i=x_{3i-1}$. Then $f_i=a_i+b_i+c_i$ for all $i\in[n]$. By [@FHM-ini Theorem 3.11], we have that $f_1, f_2,\ldots, f_n$ is an initially regular sequence on $R/I$ with respect to a term order such that $x_1 > x_{3n+2} > x_2$, and $x_{3i-2}>x_{3i-3}>x_{3i-1}$, for all $2\le i\le n$. It then remains to show that $f_{n+1}$ is regular on $R/I_n$. It suffices to show that $f_{n+1}\not\in Q$ for any $Q\in \mathop{\mathrm{Ass}}(R/I_n)$. Applying Lemma [\[lemmatrinomial\]](#lemmatrinomial){reference-type="ref" reference="lemmatrinomial"} repeatedly, we can give a complete description of $I_n$, that is, $$\begin{aligned} I &= (x_1x_2, x_2x_3,\ldots, x_{3n}x_{3n+1}, x_{3n+1}x_{3n+2}, x_1x_{3n+2})\\ I_1 &= \mathop{\mathrm{in_{>}}}(I,f_1)=(x_1, x_{i}x_{i+1}, x_{3n+2}x_2, x_{3n+2}^2,x_{3n+1}x_2^2\mid 2\le i\le 3n+1)\\ &\,\,\,\,\vdots\\ I_n &= \mathop{\mathrm{in_{>}}}(I_{n-1},f_n) \\ &=(x_{3i-2}, x_{3i-1}x_{3i}, x_{3n}x_{3n+1}, x_{3n+1}x_{3n+2}, x_{3n+2}^2, x_{3j}^2, x_{3n+1}x_2^2, x_{3j-1}x_{3j+2}^2, \\ & \hspace{.7cm} x_{2}x_{3n+2}, x_{3j}x_{3j+2} \mid i\in[n], j\in[n-1]).\\ \end{aligned}$$ Let $Q \in \mathop{\mathrm{Min}}(R/I_n)$.
Using Proposition [Proposition 7](#mainprop1){reference-type="ref" reference="mainprop1"} repeatedly, there exists $P\in \mathop{\mathrm{Min}}(R/I)$ such that $$Q = (P, a_{{i_1}},\ldots,a_{{i_k}},b_{{i_{k+1}}},\ldots, b_{{i_n}}),$$ for some $i_j\in [n]$ such that $i_j \ne i_r$ for all $j\ne r$. We make the following observations: - If $a_{{i_j}} = x_1$, for some $i_j$, then $x_1\notin P$ and thus $x_{3n+2}\in P$. Hence either $x_{3n}\not\in P$ or $x_{3n+1}\notin P$ by the minimality of $P$. - If $b_{{i_j}} = x_{3n-3}$, for some $i_j$, then $x_{3n-3}\notin P$ and thus $x_{3n-2}\in P$. Hence either $x_{3n-1}\not\in P$ or $x_{3n}\notin P$ by the minimality of $P$. - If $a_{{i_j}} \ne x_1, b_{{i_j}} \ne x_{3n-3}$ for all $i_j$, then $x_1,x_{3n-3}\in P$. So $x_{3n+2}, x_{3n-2}\notin P$, by Proposition [Proposition 7](#mainprop1){reference-type="ref" reference="mainprop1"}. Thus, $x_{3n+1}, x_{3n-1}\in P$, and hence $x_{3n}\notin P$. Therefore, $f_{n+1}\notin Q$ for any $Q\in\mathop{\mathrm{Min}}(R/I_n)$. Now let $Q$ be an embedded associated prime of $R/I_n$. Then there exists $Q'\in \mathop{\mathrm{Min}}(R/I_n)$ and variables $z_1, \ldots, z_t$ such that $$Q = (Q', z_1,\ldots,z_t).$$ Notice that $x_{3n+2}, x_{3j} \in Q'$ for all $j\in[n-1]$. Following the notation of Theorem [Theorem 12](#mainthm){reference-type="ref" reference="mainthm"}, we have $w_1 = x_{3n+2}$, $w_i=x_{3i}$ for $2\le i\le n$ and for $1\le j\le n$ we have $w_{n+j}=x_{3j-1}$. That is, these are the $w_i$ such that $N^*(w_i)\neq \emptyset$. Consequently, we obtain $N^{*}(x_{3n+2})=\{x_{3n+1}, x_2\}$, $N^{*}(x_{3i})=\{x_{3i-1}, x_{3i+2}\}$, $N^{*}(x_2)=\{x_3, x_{3n+2}\}$, and $N^{*}(x_{3j-1})=\{x_{3j}, x_{3j-3}\}$ for all $2\le i\le n$ and $2\le j \le n$. Therefore, by Theorem [Theorem 12](#mainthm){reference-type="ref" reference="mainthm"}, we have that $z_1,\ldots,z_t\in \{x_{3i-1}, x_{3n}, x_{3n+1} \mid 1\le i\le n\}$.
We write $Q = W^{\text{depol}}$ for some $W\in\mathop{\mathrm{Min}}(R^{\mathop{\mathrm{pol}}}/I_n^{\mathop{\mathrm{pol}}})$. By Lemma [\[lemmatrinomial\]](#lemmatrinomial){reference-type="ref" reference="lemmatrinomial"}, we see that the degrees of $x_1,\ldots, x_{3n-1}, x_{3n+2}$ are $2$, so for each $i\in\left\{1, 2,\ldots, 3n-1, 3n+2\right\}$, by abuse of notation, we let $x_i$ and $x_i'$ denote the polarizing variables of $x_i$ in $R^{\mathop{\mathrm{pol}}}$. Notice that, since $Q'$ is minimal, then not all $x_{3n-1}, x_{3n}, x_{3n+1}$ are in $Q'$. Moreover, there are at most $2$ of them in $Q'$. If one of these is not in $Q$, then we are done. Therefore, we consider the following cases: - If $x_{3n}\in Q \setminus Q'$, then $x_{3n-1}, x_{3n+1}\in Q'$. Since $x_{3n+1}\in Q'\subseteq Q$ and $d_{x_{3n+1}}(I_n)=1$, then $x_{3n+1}\in W$. Thus, $x_{3n-1}\notin W$. So $x_{3n-1}'\in W$ and thus $x_{3n-4}\notin W$. If $x_{3n-4}'\notin W$, then $x_{3n-4}\notin Q$, and we are done. Otherwise, if $x_{3n-4}'\in W$, then $x_{3n-7}\notin W$. If $x_{3n-7}'\notin W$, then $x_{3n-7}\notin Q$, and we are done again. Otherwise, if $x_{3n-7}'\in W$, then $x_{3n-10}\notin W$. Continuing this way, we may assume that $x_{3i-1}'\in W$ for $2\le i\le n$. In particular, $x_5'\in W$ and thus $x_2\notin W$. Since $x_{3n+1}\in W$, then $x_2'\notin W$. Thus, $x_2\notin Q$, as desired. - If $x_{3n+1}\in Q\setminus Q'$, then $\left\{x_2,x_{3n}, x_{3n+2}\right\}\subseteq Q'$. Since $x_{3n}\in Q'\subseteq Q$, then $x_{3n}\in Q$, and hence $x_{3n}\in W$. Since $x_{3n+1}\in Q$, then $x_{3n+1}\in W$. So $x_{3n-1}\notin W$. If $x_{3n-1}\notin Q$ then we are done. Otherwise, we have $x_{3n-1}'\in W$. If $x_{3n-4}\notin Q$, again, we are done. Otherwise, we have $x_{3n-4}'\in W$. Continuing this way, we may assume that $x'_{3i-1}\in W$ for $2\le i\le n$. In particular, $x_5'\in W$ and hence $x_2\notin W$. Since $x_2\in Q'$, then $x_2\in Q$. Hence, $x_2'\in W$.
On the other hand, since $x_{3n+1}\in W$, then $x_2'\notin W$, a contradiction. - If $x_{3n-1}\in Q\setminus Q'$, then $\{x_{3n}, x_{3n-3}, x_{3n-4}\}\subseteq Q'$. Since $x_{3n}\in Q'\subseteq Q$, then $x_{3n}\in Q$ and hence $x_{3n}\in W$. If $x_{3n+1}\notin Q$, then we are done. Otherwise, if $x_{3n+1}\in Q$, then $x_{3n+1}\in W$. Then $x_{3n-1}\notin W$. If $x_{3n-1}\notin Q$, then we are done. Otherwise, we have $x_{3n-1}'\in W$. So, $x_{3n-4}\notin W$. Since $x_{3n-4}\in Q'$, then $x_{3n-4}\in Q$. Hence $x_{3n-4}'\in W$. By a similar argument, we may assume that $x_{3i-1}'\in W$ for $2\le i\le n$. In particular, $x_5'\in W$ and hence $x_2\notin W$. If $x_2\notin Q$, again, we are done. Otherwise, $x_2'\in W$. Then $x_{3n+1}\notin W$, a contradiction. Therefore, $f_{n+1}\notin Q$ for any embedded associated prime $Q$ in $R/I_n$. Hence, $f_{n+1}$ is regular on $R/I_n$, completing the proof. ◻ Next, we will give an explicit formula to calculate the depth of certain unicyclic graphs defined below. **Definition 22**. Let $n\ge 3, m\ge 2$, $A_n = k[x_1,\ldots,x_n]$, $B_m = k[y_1,\ldots,y_m]$, and $R_{n,m} = k[x_1,\ldots,x_n,y_1,\ldots,y_m]$. Let $G_{n,m}$ denote the graph whose edge ideal is $$I(G_{n,m}) = (I(C_n), x_{2}y_1, I(P_m)),$$ where $C_n$ is the $n$-cycle on variables $x_1,\ldots, x_n$ and $P_m$ is a path on variables $y_1,\ldots,y_m$. By convention, we denote $I(G_{n,0}) = I(C_n)$ and $I(G_{n,1}) = (I(C_n), x_2y_1)$. Below is a figure showing the graph $G_{n,m}$ for arbitrary $n, m$. To give a formula for the depth of $R_{n,m}/I(G_{n,m})$ we will make use of induction. The next proposition gives the depth of the edge ideal of the unicyclic graph $G_{n,1}$ for any $n\ge 3$. **Proposition 23**. *For $n\ge 3$, $\mathop{\mathrm{depth}}(R_{n,1}/I(G_{n,1})) = \lceil \dfrac{n}{3}\rceil$.* *Proof.* Let $I = I(G_{n,1})$.
Consider the following short exact sequence $$0 \longrightarrow R_{n,1}/(I : x_2) \longrightarrow R_{n,1}/I \longrightarrow R_{n,1}/(I, x_2) \longrightarrow 0.$$ Let $J_1 = (I : x_2)$ and $J_2 = (I , x_2)$. We see that $J_1 = (I : x_2) = (J_1',x_1,x_3,y_1)$, where $J_1'=(x_4x_5, x_5x_6, \ldots, x_{n-1}x_n)$ is the edge ideal of a path. Hence $$\begin{aligned} \mathop{\mathrm{depth}}(R_{n,1}/J_1) &= \mathop{\mathrm{depth}}\left(\dfrac{k[x_4,\ldots, x_n]}{J_1'}[x_2]\right)\\ &= \lceil\dfrac{n-3}{3}\rceil+ 1= \lceil\dfrac{n}{3}\rceil, \quad \text{by \cite[Lemma~2.8]{M-depthoftree}.} \end{aligned}$$ Next notice that $J_2 = (J_2', x_2)$, where $J_2'$ is the edge ideal of a path on $x_3,x_4,\ldots,x_{n-1},x_n,x_1$. Thus $$\begin{aligned} \mathop{\mathrm{depth}}(R_{n,1}/J_2) &= \mathop{\mathrm{depth}}\dfrac{k[x_1,x_3,x_4,\ldots,x_n]}{J_2'} + 1\\ &= \lceil\dfrac{n - 1}{3}\rceil + 1= \lceil\dfrac{n+2}{3}\rceil, \quad \text{by \cite[Lemma~2.8]{M-depthoftree}.} \end{aligned}$$ Since $\mathop{\mathrm{depth}}(R_{n,1}/J_2) \ge \mathop{\mathrm{depth}}(R_{n,1}/J_1)$, then by [@depthmodulo Theorem 4.3] we have that $$\mathop{\mathrm{depth}}(R_{n,1}/I) = \mathop{\mathrm{depth}}(R_{n,1}/J_1) = \lceil\dfrac{n}{3}\rceil,$$ as claimed. ◻ Before we can prove the formula for the depth of $R_{n,2}/I(G_{n,2})$ we first show a lower bound in a special case. **Lemma 24**. *For $t\in\mathbb{N}$, $\mathop{\mathrm{depth}}(R_{3t+2,2}/I(G_{3t+2,2})) \ge t+2$.* *Proof.* Let $f_1 = x_1+x_{3t+2} + x_2$, $f_i=x_{3i-2}+x_{3i-3}+x_{3i-1}$ for $2\le i\le t$, $f_{t+1}=x_{3t} + x_{3t+1} + \sum \limits_{i=1}^{t}x_{3i-1}$ and $f_{t+2} = y_2 + y_1$. Using an argument similar to the proof of Theorem [Theorem 21](#C_{3n+2}){reference-type="ref" reference="C_{3n+2}"} one can see that $f_1, \ldots, f_{t+1}$ is an initially regular sequence on $R_{3t+2,2}/I(G_{3t+2,2})$ with respect to a term order such that $x_1 > x_{3t+2} > x_2$, and $x_{3i-2}>x_{3i-3}>x_{3i-1}$, for all $2\le i\le t$.
It remains to show that $f_{t+2}$ is regular on $R_{3t+2,2}/I_{t+1}$, where $I_1 = \mathop{\mathrm{in_{>}}}(I, f_1)$ and $I_{i+1} = \mathop{\mathrm{in_{>}}}(I_i, f_{i+1})$ for $1 \le i \le t$. By [@FHM-ini Lemma 3.7], we see that $d_{y_1}(I_{t+1}) = 1$. Next, we will show that $y_1y_2$ is the only monomial generator that $y_2$ divides in $I_{t+1}$. We will proceed by induction. Notice that $y_1y_2 \in I$ and it is the only monomial that $y_2$ divides in $I$. Now, for $1 \le i\le t$, suppose that $y_1y_2\in I_i$ and is the only monomial that $y_2$ divides in $I_i$. Consider $I_{i+1} = \mathop{\mathrm{in_{>}}}(I_i, f_{i+1})$. Let $R_2 = k[x_{3i+1}, x_{3i}, x_{3i+2}]$ if $i<t$ and let $R_2 = k[x_{3t}, x_{3t+1}, x_{3j-1} \mid 1\le j\le t]$ if $i=t$. Let $R_1$ be the polynomial ring such that $R_{3t+2,2} = R_1[x_{3i+1}, x_{3i}, x_{3i+2}]$ if $i<t$ and if $i=t$ let $R_1$ be the polynomial ring such that $R_{3t+2,2}=R_1[x_{3t}, x_{3t+1}, x_{3j-1} \mid 1\le j\le t]$. It is easy to see that $I_{i+1}$ is $(R_1,R_2)$-factorable in the sense of [@FHM-ini Definition 3.2]. By [@FHM-ini Proposition 3.5], the reduced Gröbner basis of $I_{i+1}$ is $(R_1,R_2)$-factorable and consists of elements of the form $\mathrm{lcm}(M_{i_1}, \ldots, M_{i_\ell})g$, where $M_{i_1}, \ldots, M_{i_{\ell}}\in R_1$ are monomials such that $M_{i_j}$ divides a monomial generator of $I_{i}$ and $g\in R_2$. Let $N\in I_{i+1}$ be a monomial such that $y_2 \mid N$. Notice that $N = \mathrm{lcm}(M_1,\ldots, M_p)\mathop{\mathrm{in_{>}}}(g)$ for some monomials $M_1, \ldots, M_p\in R_1$ that divide monomial generators of $I_i$ and some $g\in R_2$. Since $g\in R_2$, then $\mathop{\mathrm{in_{>}}}(g)\in R_2$ and hence $y_2 \mid \mathrm{lcm}(M_1, \ldots, M_p)$. Thus $y_2\mid M_j$ for some monomial $M_j$ in $R_1$. In other words, $M_j$ is a monomial in $I_i$ such that $y_2 \mid M_j$, and hence, by induction, $M_j=y_1y_2$; see also the proof of [@FHM-ini Proposition 3.5].
So, $N$ is a multiple of $y_1y_2 \in I_i$, and thus for $N$ to be in the reduced Gröbner basis of $I_{i+1}$ it must be that $N=y_1y_2$. Since the only monomial that $y_2$ divides in $I_{t+1}$ is $y_1y_2$, it follows by [@FHM-ini Lemma 3.9] that $f_{t+2} = y_2 + y_1$ is regular on $R_{3t+2,2}/I_{t+1}$. Therefore, $f_1,\ldots,f_{t+2}$ is an initially regular sequence on $R_{3t+2,2}/I$, with respect to the aforementioned order, and hence $\mathop{\mathrm{depth}}(R_{3t+2,2}/I)\ge t+2$. ◻ Next, we prove a general formula in the case of the unicyclic graph $G_{n,2}$ for any $n\ge 3$. **Proposition 25**. *For $n\ge 3$, $\mathop{\mathrm{depth}}(R_{n,2}/I(G_{n,2})) = \lceil \dfrac{n-1}{3}\rceil+1$.* *Proof.* Let $I = I(G_{n,2})$ for $n\ge 3$. Consider the following short exact sequence $$0 \longrightarrow R_{n,2}/(I : y_2) \longrightarrow R_{n,2}/I \longrightarrow R_{n,2}/(I, y_2) \longrightarrow 0.$$ Let $J_1 = (I : y_2)$ and $J_2 = (I,y_2)$. Notice that $J_1 = (I:y_2) = (I(C_n) , y_1)$ and hence $$\begin{aligned} \mathop{\mathrm{depth}}(R_{n,2}/J_1) &= \mathop{\mathrm{depth}}\dfrac{k[x_1,\ldots,x_n]}{I(C_n)}[y_2]\\ &= \mathop{\mathrm{depth}}(A_n/I(C_n)) + 1\\ &= \lceil\dfrac{n-1}{3}\rceil + 1= \lceil\dfrac{n+2}{3}\rceil, \quad \text{by Theorem }\ref{Jacq}. \end{aligned}$$ Next, notice that $J_2 = (I,y_2) = (I(G_{n,1}),y_2)$ and thus $$\begin{aligned} \mathop{\mathrm{depth}}(R_{n,2}/J_2) &= \mathop{\mathrm{depth}}\dfrac{k[x_1,\ldots,x_n,y_1]}{I(G_{n,1})}= \lceil\dfrac{n}{3}\rceil, \quad \mbox{by Proposition }~\ref{G_{n,1}}. \end{aligned}$$ If $n\equiv 1 \mod 3$, then $\lceil\dfrac{n+2}{3}\rceil=\lceil\dfrac{n}{3}\rceil=\lceil\dfrac{n-1}{3}\rceil+1$ and in that case $$\mathop{\mathrm{depth}}R_{n,2}/I=\lceil\dfrac{n-1}{3}\rceil+1,$$ by [@depthmodulo Theorem 4.3]. We now handle the remaining two cases.
If $n=3t$ with $t\in\mathbb{N}$, we have $\mathop{\mathrm{depth}}(R_{n,2}/J_1) = t+1$ and $\mathop{\mathrm{depth}}(R_{n,2}/J_2) = t$. Let $f_1 = x_1 + x_{3t} + x_2, f_i =x_{3i-2}+x_{3i-3}+x_{3i-1}$ for all $2\le i\le t$, and $f_{t+1} = y_2+y_1$. By [@FHM-ini Theorem 3.11], $f_1, \ldots, f_{t+1}$ is an initially regular sequence on $R_{n,2}/I$ with respect to an order such that $x_1 > x_{3t} > x_2$, $x_{3i-2}>x_{3i-3}>x_{3i-1}$ for all $2\le i \le t$ and $y_2>y_1$. Thus, $\mathop{\mathrm{depth}}(R_{n,2}/I)\ge t+1$ and therefore $$\mathop{\mathrm{depth}}(R_{n,2}/I) = \mathop{\mathrm{depth}}(R_{n,2}/J_1)=t+1=\lceil\dfrac{n+2}{3}\rceil=\lceil\dfrac{n-1}{3}\rceil+1,$$ by [@depthmodulo Theorem 4.3]. In the remaining case that $n=3t+2$ with $t\in\mathbb{N}$, we have $\mathop{\mathrm{depth}}(R_{n,2}/J_1) = t+2$ and $\mathop{\mathrm{depth}}(R_{n,2}/J_2) = t+1$. Moreover, by Lemma [Lemma 24](#G_{3n+2,2}){reference-type="ref" reference="G_{3n+2,2}"}, we have $\mathop{\mathrm{depth}}(R_{n,2}/I)\ge t+2$. Hence $\mathop{\mathrm{depth}}(R_{n,2}/I)\ne\mathop{\mathrm{depth}}(R_{n,2}/J_2)$ and therefore, $$\mathop{\mathrm{depth}}(R_{n,2}/I) = \mathop{\mathrm{depth}}(R_{n,2}/J_1) = \mathop{\mathrm{depth}}(A_n/I(C_n)) + 1 = \lceil\dfrac{n-1}{3}\rceil + 1,$$ by [@depthmodulo Theorem 4.3]. ◻ In the final result of this paper, we establish the depth of $R_{n,m}/I(G_{n,m})$ for any $m\ge 0$ and any $n\ge 3$. **Theorem 26**. *For $n\ge 3$ and $m\ge 0$, we have that $$\mathop{\mathrm{depth}}(\frac{R_{n,m}}{I(G_{n,m})}) = \left\{ \begin{array}{lc} \mathop{\mathrm{depth}}(\dfrac{A_n}{I(C_{n})}) + \dfrac{m}{3}=\lceil\dfrac{n-1}{3}\rceil + \dfrac{m}{3}, & \quad \mbox{if } m\equiv 0 \mod 3, \\ \mathop{\mathrm{depth}}(\dfrac{R_{n,1}}{I(G_{n,1})}) + \dfrac{m-1}{3}=\lceil\dfrac{n}{3}\rceil + \dfrac{m-1}{3}, & \quad \mbox{if } m\equiv 1\mod 3,\\ \mathop{\mathrm{depth}}(\dfrac{A_n}{I(C_n)}) + \dfrac{m+1}{3}=\lceil\dfrac{n-1}{3}\rceil + \dfrac{m+1}{3}, & \quad \mbox{if } m\equiv 2\mod 3.
\end{array} \right.$$* *Proof.* We proceed by induction on $m$. When $m=0$ there is nothing to show. The case when $m=1$ is shown in Proposition [Proposition 23](#G_{n,1}){reference-type="ref" reference="G_{n,1}"}, whereas the case $m=2$ is shown in Proposition [Proposition 25](#m=2 case){reference-type="ref" reference="m=2 case"}. Let $n\ge 3, m\ge 3$ and let $I = I(G_{n,m})$. Consider the following short exact sequence $$0 \longrightarrow R_{n,m}/(I : y_{m-1}) \longrightarrow R_{n,m}/I \longrightarrow R_{n,m}/(I, y_{m-1}) \longrightarrow 0.$$ Let $I_1 = (I:y_{m-1})$ and $I_2 = (I,y_{m-1})$. Notice that $I_1 = (I(G_{n,m-3}),y_{m-2},y_m)$ and thus $$\begin{aligned} \mathop{\mathrm{depth}}(R_{n,m}/I_1) = \mathop{\mathrm{depth}}(\dfrac{k[x_1,\ldots,x_n,y_1,\ldots,y_{m-3}]}{I(G_{n,m-3})}[y_{m-1}])=\mathop{\mathrm{depth}}(R_{n,m-3}/I(G_{n,m-3})) + 1. \end{aligned}$$ On the other hand, we have $I_2 = (I(G_{n,m-2}),y_{m-1})$ and thus $$\begin{aligned} \mathop{\mathrm{depth}}(R_{n,m}/I_2) =\mathop{\mathrm{depth}}(\dfrac{k[x_1,\ldots,x_n,y_1,\ldots,y_{m-2}]}{I(G_{n,m-2})}[y_m])= \mathop{\mathrm{depth}}(R_{n,m-2}/I(G_{n,m-2})) + 1. \end{aligned}$$ Write $m=3h+r$, where $0\le r\le 2$ and $h\in \mathbb{N}$, and consider all three cases separately. If $r=0$, then $m=3h$ and by the inductive hypothesis $$\begin{aligned} \mathop{\mathrm{depth}}(R_{n,m}/I_1) &= \mathop{\mathrm{depth}}(R_{n,3h-3}/I(G_{n,3h-3})) + 1\\ &= \lceil\dfrac{n-1}{3}\rceil + \dfrac{3h-3}{3} + 1= \lceil\dfrac{n-1}{3}\rceil + h. \end{aligned}$$ Also, $$\begin{aligned} \mathop{\mathrm{depth}}(R_{n,m}/I_2) &= \mathop{\mathrm{depth}}(R_{n,3h-2}/I(G_{n,3h-2})) + 1\\ &= \lceil\dfrac{n}{3}\rceil + \dfrac{3h-2-1}{3} +1= \lceil\dfrac{n}{3}\rceil + h.
\end{aligned}$$ Therefore, $\mathop{\mathrm{depth}}(R_{n,m}/I_2)\ge\mathop{\mathrm{depth}}(R_{n,m}/I_1)$, and hence $$\mathop{\mathrm{depth}}(R_{n,m}/I) =\mathop{\mathrm{depth}}(R_{n,m}/I_1) = \lceil\dfrac{n-1}{3}\rceil+ h = \lceil\dfrac{n-1}{3}\rceil + \dfrac{m}{3},$$ by [@depthmodulo Theorem 4.3]. If $r=1$, then $m=3h+1$ and by the inductive hypothesis $$\begin{aligned} \mathop{\mathrm{depth}}(R_{n,m}/I_1) &= \mathop{\mathrm{depth}}(R_{n,3h-2}/I(G_{n,3h-2})) + 1\\ &= \lceil\dfrac{n}{3}\rceil + \dfrac{3h-2-1}{3} + 1= \lceil\dfrac{n}{3}\rceil + h, \end{aligned}$$ and $$\begin{aligned} \mathop{\mathrm{depth}}(R_{n,m}/I_2) &= \mathop{\mathrm{depth}}(R_{n,3h-1}/I(G_{n,3h-1})) + 1\\ &= \lceil\dfrac{n-1}{3}\rceil + \dfrac{3h-1+1}{3} +1= \lceil\dfrac{n+2}{3}\rceil + h. \end{aligned}$$ Hence, $\mathop{\mathrm{depth}}(R_{n,m}/I) = \mathop{\mathrm{depth}}(R_{n,m}/I_1) = \lceil\dfrac{n}{3}\rceil + h = \lceil\dfrac{n}{3}\rceil + \dfrac{m-1}{3}$, by [@depthmodulo Theorem 4.3]. If $r=2$, then $m=3h+2$ and again by the inductive hypothesis we have $$\begin{aligned} \mathop{\mathrm{depth}}(R_{n,m}/I_1) &= \mathop{\mathrm{depth}}(R_{n,3h-1}/I(G_{n,3h-1})) + 1\\ &= \lceil\dfrac{n-1}{3}\rceil + \dfrac{3h-1+1}{3} + 1= \lceil\dfrac{n-1}{3}\rceil + h + 1, \end{aligned}$$ and $$\begin{aligned} \mathop{\mathrm{depth}}(R_{n,m}/I_2) &= \mathop{\mathrm{depth}}(R_{n,3h}/I(G_{n,3h})) + 1\\ &= \lceil\dfrac{n-1}{3}\rceil + \dfrac{3h}{3} +1= \lceil\dfrac{n-1}{3}\rceil + h + 1. \end{aligned}$$ Therefore, as before, by [@depthmodulo Theorem 4.3] we have $$\mathop{\mathrm{depth}}(R_{n,m}/I) = \mathop{\mathrm{depth}}(R_{n,m}/I_1) = \lceil\dfrac{n-1}{3}\rceil + h + 1= \lceil\dfrac{n-1}{3}\rceil + \dfrac{m+1}{3}.$$ This completes the proof. ◻ # Acknowledgement I would like to express my deepest gratitude to my advisor, Professor Louiza Fouli, for her expert guidance and constant encouragement.
I also want to thank Professor Tài Hà for his suggestion of the question regarding the depth of unicyclic graphs, and Professor Susan Morey for her comments about initially regular sequences on cycles in general. # References - S. Bandari, J. Herzog, T. Hibi, Monomial ideals whose depth function has any given number of strict local maxima. Ark. Mat. **52** (2014), no. 1, 11--19.
- M. Brodmann, The asymptotic nature of the analytic spread. Math. Proc. Cambridge Philos. Soc. **86** (1979), no. 1, 35--39.
- L. Burch, Codimension and analytic spread. Proc. Camb. Philos. Soc. **72** (1972), 369--373.
- G. Caviglia, H.T. Hà, J. Herzog, M. Kummini, N. Terai, N. Trung, Depth and regularity modulo a principal ideal. J. Algebraic Combin. **49** (2019), no. 1, 1--20.
- A. Conca, M. Varbaro, Squarefree Gröbner degenerations. Invent. Math. **221** (2020), no. 3, 713--730.
- H. Dao, C. Huneke, J. Schweig, Bounds on the regularity and projective dimension of ideals associated to graphs. J. Algebraic Combin. **38** (2013), no. 1, 37--55.
- H. Dao, J. Schweig, Projective dimension, graph domination parameters, and independence complex homology. J. Combin. Theory Ser. A **120** (2013), no. 2, 453--469.
- H. Dao, J. Schweig, Bounding the projective dimension of a squarefree monomial ideal via domination in clutters. Proc. Amer. Math. Soc. **143** (2015), no. 2, 555--565.
- D. Eisenbud, C. Huneke, Cohen-Macaulay Rees algebras and their specializations. J. Algebra **81** (1983), no. 1, 202--224.
- S. Faridi, Monomial ideals via squarefree monomial ideals. In: Commutative algebra, 85--114, Lect. Notes Pure Appl. Math., vol. 244, Chapman & Hall/CRC, Boca Raton, FL (2006).
- L. Fouli, H.T. Hà, S. Morey, Initially regular sequences and depths of ideals. J. Algebra **559** (2020), 33--57.
- L. Fouli, H.T. Hà, S. Morey, Depth of powers of squarefree monomial ideals. Advances in mathematical sciences, 161--171, Assoc. Women Math. Ser., 21, Springer, Cham, 2020.
- L. Fouli, H.T. Hà, S. Morey, Regular sequences on squares of monomial ideals. São Paulo J. Math. Sci. **17** (2023), no. 1, 122--146.
- L. Fouli, S. Morey, A lower bound for depths of powers of edge ideals. J. Algebraic Combin. **42** (2015), no. 3, 829--848.
- C. Francisco, H.T. Hà, A. Van Tuyl, Associated primes of monomial ideals and odd holes in graphs. J. Algebraic Combin. **32** (2010), no. 2, 287--301.
- D. R. Grayson, M. E. Stillman, Macaulay2, a software system for research in algebraic geometry, available at https://faculty.math.illinois.edu/Macaulay2/.
- H. T. Hà, H.D. Nguyen, N.V. Trung, T.N. Trung, Depth functions of powers of homogeneous ideals. Proc. Amer. Math. Soc. **149** (2021), no. 5, 1837--1844.
- H. T. Hà, S. Morey, Embedded associated primes of powers of square-free monomial ideals. J. Pure Appl. Algebra **214** (2010), no. 4, 301--308.
- J. M. Harris, J. L. Hirst, M. J. Mossinghoff, Combinatorics and Graph Theory. Second edition, Undergrad. Texts Math., Springer, 2008.
- J. Herzog, T. Hibi, The depth of powers of an ideal. J. Algebra **291** (2005), no. 2, 534--550.
- J. Herzog, T. Hibi, Monomial Ideals. Grad. Texts in Math., 260, Springer-Verlag London, Ltd., London, 2011.
- H. T. T. Hien, H. M. Lam, N. V. Trung, Saturation and associated primes of powers of edge ideals. J. Algebra **439** (2015), 225--244.
- S. Jacques, Betti numbers of graph ideals. Ph.D. Thesis, University of Sheffield, 2004.
- T. Kaiser, M. Stehlík, R. Škrekovski, Replication in critical graphs and the persistence of monomial ideals. J. Combin. Theory Ser. A **123** (2014), 239--251.
- H. M. Lam, N. V. Trung, Associated primes of powers of edge ideals and ear decompositions of graphs. Trans. Amer. Math. Soc. **372** (2019), no. 5, 3211--3236.
- K.-N. Lin, P. Mantero, Projective dimension of string and cycle hypergraphs. Comm. Algebra **44** (2016), no. 4, 1671--1694.
- J. Martínez-Bernal, S. Morey, R. H. Villarreal, Associated primes of powers of edge ideals. Collect. Math. **63** (2012), no. 3, 361--374.
- S. Morey, Depths of powers of the edge ideal of a tree. Comm. Algebra **38** (2010), no. 11, 4042--4055.
- D. Popescu, Graph and depth of a monomial squarefree ideal. Proc. Amer. Math. Soc. **140** (2012), no. 11, 3813--3822.
arXiv:2309.07286, "Initially Regular Sequences on Cycles and Depth of Unicyclic Graphs" by Le Tran (math.AC); license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
--- abstract: | We describe a rigidity phenomenon exhibited by the second Chern Ricci curvature of a Hermitian metric on a compact complex manifold. This yields a characterisation of second Chern Ricci-flat Hermitian metrics on several types of manifolds as well as a range of non-existence results for such metrics. address: - The University of Queensland, St. Lucia, QLD 4067, Australia - The University of Queensland, St. Lucia, QLD 4067, Australia author: - Kyle Broder - Artem Pulemotov title: Hermitian metrics with vanishing second Chern Ricci curvature --- # Introduction and the main results The theory of Ricci-flat Kähler metrics has been a major component of complex geometry for almost half a century, pioneered by Yau's resolution of the Calabi conjecture [@Yau1976]. The past few decades have seen rapidly growing interest in the study of Hermitian generalizations of this theory on complex non-Kähler manifolds, with motivation coming from both mathematics and theoretical physics. Due to the presence of torsion in the Chern connection of a non-Kähler metric, the notions of Ricci curvature and, consequently, Ricci-flatness become ambiguous. In particular, there are four distinct *Chern Ricci curvatures* for a Hermitian metric $\omega$. The first Chern Ricci curvature, denoted $\text{Ric}_{\omega}^{(1)}$, is given by the trace over the third and fourth indices of the Chern curvature tensor and is locally equal to $-\sqrt{-1} \partial \bar{\partial} \log(\omega^n)$. It represents the first (Bott--Chern) class (of the anti-canonical bundle) and is governed by a complex Monge--Ampère equation (see, e.g., [@TosattiWeinkove; @TosattiNonKahlerCY]). Compact non-Kähler Hermitian manifolds with $\text{Ric}_{\omega}^{(1)} =0$, called *non-Kähler Calabi--Yau manifolds*, were systematically studied by Tosatti [@TosattiNonKahlerCY] (see also [@AngellaCalamaiSpotti]). 
Our understanding of the second Chern Ricci curvature $\text{Ric}_{\omega}^{(2)}$, given by the trace over the first and second indices of the Chern curvature tensor, is much less complete. This tensor does not have the cohomological nature that the first Chern Ricci curvature possesses. Its local expression is formidable and is not directly related to a complex Monge--Ampère equation. On the other hand, the second Chern Ricci curvature plays essential roles in the Bochner technique [@LiuYangGeometry; @Lu; @Royden; @BroderSBC; @BroderSBC2] and the Streets--Tian Hermitian curvature flow [@StreetsTian]. Several remarkable properties of this tensor were established in  [@LiuYangGeometry; @LeeLi; @YangSecondChernRicci]. Gauduchon--Ivanov [@GauduchonIvanov] studied second Chern Einstein metrics (i.e., those $\omega$ for which $\text{Ric}_{\omega}^{(2)} = \lambda \omega$ with $\lambda \in \mathbf{R}$) on compact complex surfaces. They showed that all such metrics are Kähler--Einstein with the exception of the standard metric on the primary Hopf surface $\mathbf{S}^1 \times \mathbf{S}^3$, which has $\lambda >0$. As a consequence, second Chern Ricci-flat metrics on compact complex surfaces are Ricci-flat Kähler, and thus only exist on complex tori $\mathbf{C}^2/ \Lambda$, K3, and Enriques surfaces. Angella--Calamai--Spotti [@AngellaCalamaiSpotti] conducted a systematic investigation of second Chern Einstein metrics in all dimensions. They showed, among other things, that the possible signs of the Einstein constant correspond to the canonical bundle $K_X$ or the anti-canonical bundle $-K_X$ being pseudo-effective or unitary flat. Further, second Chern Ricci-flat metrics have nonpositive Kodaira dimension $\text{kod}(X) \leq 0$. The remaining third and fourth Chern Ricci curvatures, $\text{Ric}_{\omega}^{(3)}$ and $\text{Ric}_{\omega}^{(4)}$, are given by the two remaining traces of the Chern curvature tensor. 
They are conjugate to each other, and they do not define real $(1,1)$--forms. In the framework developed by the first-named author and Tang [@BroderTangAltered], such curvatures provide *altered* variants of the Chern Ricci curvature. For a detailed explanation of this terminology, see [@BroderStanfield Remark 3.45]. In the present paper, we describe a rigidity phenomenon exhibited by the second Chern Ricci curvature and, most notably, by second Chern Ricci-flat metrics. This leads to several interesting corollaries, such as a complete understanding of the moduli space of second Chern Ricci-flat metrics on the torus. We also obtain several non-existence results and obstructions. To state our first main theorem, we remind the reader that the *real bisectional curvature* [@YangZhengRBC; @LeeStreets] of a Hermitian metric $\omega$ is given by $$\begin{aligned} \text{RBC}_{\omega}(\xi) &: = & \frac{1}{| \xi |_{\omega}^2} R_{\alpha \overline{\beta} \gamma \overline{\delta}} \xi^{\alpha \overline{\beta}} \xi^{\gamma \overline{\delta}},\end{aligned}$$ where $R$ is the Chern curvature tensor and $\xi$ is a nonnegative Hermitian $(1,1)$--tensor. This notion has played an essential role in the Schwarz lemma for holomorphic maps between Hermitian manifolds  [@YangZhengRBC; @LeeStreets; @BroderSBC; @BroderSBC2].

## Theorem 1.1 {#Theorem_Neg .unnumbered}

Let $(X,\widetilde{\omega})$ be a compact Hermitian manifold with $\text{RBC}_{\widetilde{\omega}} \leq 0$. If the equality $\text{Ric}_{\omega}^{(2)}=0$ holds for some Hermitian metric $\omega$ on $X$, then $\omega$ has the same Chern connection as $\widetilde\omega$.
If there is a point where $\text{RBC}_{\widetilde{\omega}}<0$, then there are no metrics on $X$ with vanishing second Chern Ricci curvature.\ For Kähler metrics, the real bisectional curvature is comparable to (in the sense that it always has the same sign as) the more familiar *holomorphic sectional curvature* $$\text{HSC}_{\omega}(v) \ : = \ \frac{1}{| v |_{\omega}^4} R_{\alpha \overline{\beta} \gamma \overline{\delta}} v^{\alpha} \overline{v}^{\beta} v^{\gamma} \overline{v}^{\delta},$$ where $v \in T^{1,0}X$. This holds more generally for the class of Kähler-like metrics [@YangZhengCurvature], characterised by the Chern curvature tensor having the same symmetries as that of a Kähler metric. Since the Kähler condition $d \omega =0$ is equivalent to the vanishing of the Chern torsion, the property of being Kähler depends only on the Chern connection. The same holds for the property of being Kähler-like. With these observations at hand, we can derive the following from Theorem 1.1.

## Corollary 1.2 {#corollary_met_type .unnumbered}

Assume that a compact complex manifold $X$ admits a Kähler (respectively, Kähler-like) metric $\widetilde{\omega}$ with $\text{HSC}_{\widetilde{\omega}} \leq 0$. Then every Hermitian metric $\omega$ on $X$ satisfying $\text{Ric}_{\omega}^{(2)}=0$ is Kähler (respectively, Kähler-like). If such a metric exists, the manifold $X$ is Calabi--Yau in the sense that the canonical bundle is holomorphically torsion.\ Berger [@BergerHBC] showed that any Ricci-flat Riemannian metric on the torus $\mathbf{T}^4$ is flat. This was extended to all dimensions by Fischer--Wolf [@FischerWolf]. Since complex tori $\mathbf{T}^n$ admit flat Kähler metrics, Corollary 1.2 yields a complete description of the moduli space of Hermitian metrics with vanishing second Chern Ricci curvature on $\mathbf T^n$.
## Corollary 1.3 {#corollary_torus .unnumbered}

Every second Chern Ricci-flat metric on a complex torus is flat.\ The Alekseevsky--Kimelfeld theorem [@AB87 Theorem 7.61] asserts that every homogeneous Ricci-flat Riemannian metric must be flat. It is noteworthy that the result of Fischer--Wolf and Corollary 1.3, while similar to this theorem in spirit, hold without any symmetry assumptions. Kähler metrics being the most prominent example, several of the types of Hermitian metrics that arise in complex geometry can be described exclusively in terms of the Chern connection. For instance, this is the case for balanced metrics. Thus, from Theorem 1.1, we obtain the following analogue of Corollary 1.2.

## Corollary 1.4 {#corollary_met_type2 .unnumbered}

Assume that a compact complex manifold $X$ admits a balanced metric $\widetilde{\omega}$ with $\text{RBC}_{\widetilde{\omega}} \leq 0$. Then every Hermitian metric $\omega$ on $X$ satisfying $\text{Ric}_{\omega}^{(2)}=0$ is balanced.\ We will not endeavor to provide an exhaustive list of the types of metrics determined by the Chern connection that have appeared in the literature. It is obvious, however, that Corollary 1.4 extends to all of them. Our second main theorem is a counterpart to Theorem 1.1 for nonnegatively curved metrics. To state it, we remind the reader of the *Schwarz bisectional curvature* that was introduced by the first-named author  [@BroderSBC; @BroderSBC2]: $$\text{SBC}_{\omega}(\xi) \ : = \ R_{\alpha \overline{\beta} \gamma \overline{\delta}} \xi^{\alpha \overline{\beta}} (\xi^{-1})^{\gamma \overline{\delta}},$$ where $\xi \in \Lambda_X^{1,1}$ is a positive-definite Hermitian $(1,1)$--tensor with inverse $\xi^{-1}$. The Schwarz bisectional curvature provides an analogue of the real bisectional curvature for the Aubin--Yau formulation of the Schwarz lemma.

## Theorem 1.5 {#Theorem_Pos .unnumbered}

Let $(X,\widetilde{\omega})$ be a compact Hermitian manifold with $\text{SBC}_{\widetilde{\omega}} \geq 0$.
If $\text{Ric}_{\omega}^{(2)}=0$ for some Hermitian metric $\omega$ on $X$, then $\omega$ has the same Chern connection as $\widetilde\omega$. If there is a point where $\text{SBC}_{\widetilde{\omega}}>0$, then there are no metrics on $X$ with vanishing second Chern Ricci curvature.\ The strength of the Schwarz bisectional curvature remains unknown, but it is dominated by the holomorphic bisectional curvature. As a consequence, we obtain the following.

## Corollary 1.6 {#Cor17 .unnumbered}

There are no metrics with vanishing second Chern Ricci curvature on compact Hermitian symmetric spaces.\ The proofs of Theorems 1.1 and 1.5 rely on the Chern--Lu  [@Lu; @Royden; @YangZhengRBC] and Aubin--Yau  [@Yau1976; @BroderSBC; @BroderSBC2] incarnations of the Schwarz lemma, respectively. The way we employ these results is inspired by the work of DeTurck--Koiso [@DeTurckKoiso] concerning the prescribed Ricci curvature problem on Riemannian manifolds. In fact, as a byproduct of our techniques, we obtain a uniqueness theorem for Hermitian metrics with prescribed second Chern Ricci curvature; see Section [5](#DTKSection){reference-type="ref" reference="DTKSection"}. Interestingly, thanks to the underlying Hermitian structure, this theorem requires weaker assumptions than its (real) Riemannian counterpart.

## Acknowledgements {#acknowledgements .unnumbered}

The authors would like to thank Ben Andrews, Ramiro Lafuente, James Stanfield, and Wolfgang Ziller for valuable discussions. Part of this work was completed while the authors were visiting Ground Zero. We would like to thank them for their hospitality and express our gratitude to Gürkan Baydogan for stimulating conversations that helped us grapple with some of the ideas in this paper.

# Preliminaries

Throughout we will denote by $X$ a complex manifold of (complex) dimension $n$. Let $\mathcal J$ be the underlying complex structure.
It splits the complexified tangent bundle $T^{\mathbf{C}}X : = TX \otimes_{\mathbf{R}} \mathbf{C}$ into a sum of eigenbundles $T^{1,0}X \oplus T^{0,1}X$, where $T^{1,0}X$ is the $\sqrt{-1}$--eigenbundle of $(1,0)$--tangent vectors $u - \sqrt{-1} \mathcal{J} u$ and $T^{0,1}X$ is the $-\sqrt{-1}$--eigenbundle of $(0,1)$--tangent vectors $u + \sqrt{-1} \mathcal{J} u$. The splitting of $T^{\mathbf{C}} X$ induces a splitting of the bundle of $k$--forms: $\Lambda_X^k \ \simeq \ \bigoplus_{p+q=k} \Lambda_X^{p,q}$. We denote by $K_X : = \Lambda_X^{n,0}$ the canonical bundle, whose dual is the anti-canonical bundle $-K_X$. The exterior derivative splits as $d = \partial + \bar{\partial}$ with $\partial : \Omega_X^{p,q} \to \Omega_X^{p+1, q}$ and $\bar{\partial} : \Omega_X^{p,q} \to \Omega_X^{p,q+1}$. A Riemannian metric $g$ on a complex manifold $(X,\mathcal{J})$ is said to be *Hermitian* if $$g(\mathcal{J}\cdot, \mathcal{J} \cdot) \ = \ g(\cdot, \cdot).$$ We write $\omega=\omega_g(\cdot, \cdot) : = g(\mathcal{J} \cdot, \cdot)$ for the associated $2$--form and follow the general practice of referring to $\omega$ as a Hermitian metric, often suppressing any mention of $g$ or $\mathcal{J}$. Several distinguished metric types are given in terms of the associated $2$--form: a Hermitian metric $\omega$ is *Kähler* if $d\omega=0$, and *balanced* if $d \omega^{n-1}=0$. These structures can be encoded in the properties of the *Chern connection*---the unique Hermitian connection such that $\nabla^{0,1} = \bar{\partial}$. Indeed, the Kähler metrics are characterized by the vanishing of the Chern torsion, while the balanced metrics are those for which the torsion $(1,0)$--form $\tau = \sum_{i,k} T_{ik}^k e^i$ vanishes, where $e^i$ is any local coframe. Let $R$ denote the Chern curvature tensor. In a local frame $e_k$ for $T^{1,0}X$, the only non-zero components of $R$ are $R(e_i, \overline{e}_j) e_k = - \partial_{\overline{j}} \Gamma_{ik}^p e_p =: R_{i \overline{j} k}{}^{p} e_{p}$. 
We use the Hermitian metric to lower the index, writing $R_{i \overline{j} k \overline{\ell}} := R_{i \overline{j} k}{}^p g_{p \overline{\ell}}$. Further, we note that the Chern curvature tensor has the conjugate symmetry $R_{\overline{j} i \overline{\ell} k} = \overline{R_{i \overline{j} k \overline{\ell}}}$. For a general Hermitian non-Kähler metric, the Chern curvature tensor will not possess the symmetries of the Riemannian curvature tensor. While the sectional curvature of the Chern connection is meaningful, the more natural counterpart of the sectional curvature in complex geometry is the *holomorphic bisectional curvature* $$\begin{aligned} \label{HBC} \text{HBC}_{\omega}(u,v) & : = & \frac{1}{| u |_{\omega}^2 | v |_{\omega}^2} R_{\alpha \overline{\beta} \gamma \overline{\delta}} u^{\alpha} \overline{u}^{\beta} v^{\gamma} \overline{v}^{\delta},\end{aligned}$$ where $u,v \in T^{1,0}X$ are $(1,0)$--tangent vectors. Compact Kähler manifolds with $\text{HBC}_{\omega} \geq 0$ are Hermitian symmetric spaces of compact type by Mok's resolution of the generalized Frankel conjecture [@MokFrankel].

# Nonpositively curved metrics

In this section, we prove Theorem 1.1 and Corollary 1.2. We also discuss several additional corollaries of these results. The main technique we use is the Schwarz lemma, which is obtained by combining the maximum principle with the Bochner formula applied to $| \partial f |^2$ for a holomorphic map $f$. We, therefore, start by recalling the main Schwarz lemma calculation that goes back to Lu [@Lu]. Given a holomorphic map $f : (X, \omega) \to (X, \widetilde{\omega})$ and local holomorphic coordinates $(z_1,..., z_n)$ centered at a point $p \in X$, let us write $f_i^{\alpha} : = \frac{\partial f^{\alpha}}{\partial z_i}$, where $f = (f^1, ..., f^n)$. The complex Laplacian $\Delta_{\omega}$ with respect to the metric $\omega$ is defined by $\Delta_{\omega} : = \text{tr}_{\omega} \sqrt{-1} \partial \bar{\partial}$.
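For the reader's convenience, in this index notation the four Chern Ricci curvatures from the introduction are the four possible traces of $R$ (we assume the standard index conventions here; the labelling of the third and fourth traces varies in the literature): $$\begin{aligned} \text{Ric}^{(1)}_{i \overline{j}} \ = \ g^{k \overline{\ell}} R_{i \overline{j} k \overline{\ell}}, \qquad \text{Ric}^{(2)}_{k \overline{\ell}} \ = \ g^{i \overline{j}} R_{i \overline{j} k \overline{\ell}}, \qquad \text{Ric}^{(3)}_{i \overline{\ell}} \ = \ g^{k \overline{j}} R_{i \overline{j} k \overline{\ell}}, \qquad \text{Ric}^{(4)}_{k \overline{j}} \ = \ g^{i \overline{\ell}} R_{i \overline{j} k \overline{\ell}}.\end{aligned}$$ In the same local coordinates, the complex Laplacian acts on functions by $\Delta_{\omega} u = g^{i \overline{j}} \partial_i \partial_{\overline{j}} u$.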
*Proof of Theorem 1.1.* Let $f : (X, \omega) \to (X, \widetilde{\omega})$ be a holomorphic map. We will eventually take $f = \text{id}$, but it is more transparent to work with a general holomorphic map, restricting to the identity map at the end of the proof. Let $g$ and $\widetilde{g}$ be the metrics underlying $\omega$ and $\widetilde{\omega}$, respectively. Denote by $\nabla$ and $\widetilde{\nabla}$ the Chern connections of $\omega$ and $\widetilde{\omega}$. Write $\Gamma_{ij}^k = g^{k \overline{\ell}} \partial_i g_{j \overline{\ell}}$ and $\widetilde{\Gamma}_{\alpha \beta}^{\gamma} = \widetilde{g}^{\gamma \overline{\delta}} \partial_{\alpha} \widetilde{g}_{\beta \overline{\delta}}$ for the corresponding Christoffel symbols. The differential $\partial f$ is a holomorphic section of $\Omega_X^{1,0} \otimes f^{\ast} T^{1,0}X$. Hence, if we write $\widehat{\nabla}$ for the connection on $\Omega_X^{1,0} \otimes f^{\ast} T^{1,0}X$ induced from $\nabla$ and $\widetilde{\nabla}$, then in any local frame, $$\begin{aligned} \label{DiffChristoffel} (\widehat{\nabla}_k \partial f)_{\ell}^{\alpha} &=& f_{k\ell}^{\alpha} + \widetilde{\Gamma}_{\gamma \rho}^{\alpha} f_k^{\gamma} f_{\ell}^{\rho} - \Gamma_{k\ell}^m f_m^{\alpha}.\end{aligned}$$ A calculation going back to Lu  [@Lu] shows that $$\begin{aligned} \label{CoordinateSchwarzFormula} \Delta_{\omega} | \partial f |^2 &=& | \widehat{\nabla} \partial f |^2 + \text{Ric}_{k \overline{\ell}}^{(2)} g^{k \overline{q}} g^{p \overline{\ell}} \widetilde{g}_{\alpha \overline{\beta}} f_p^{\alpha} \overline{f_q^{\beta}} - g^{i \overline{j}} g^{p \overline{q}} \widetilde{R}_{\alpha \overline{\beta} \gamma \overline{\delta}} f_i^{\alpha} \overline{f_j^{\beta}}f_p^{\gamma} \overline{f_q^{\delta}},\end{aligned}$$ where $\text{Ric}^{(2)}_{k \overline{\ell}}$ are the components of the second Chern Ricci curvature of $\omega$ and $\widetilde{R}$ is the Chern curvature tensor of $\widetilde{\omega}$. 
Since $\text{Ric}^{(2)}_{\omega}=0$ and $\text{RBC}_{\widetilde{\omega}} \leq 0$, it follows that $$\begin{aligned} \label{Estimate1} \Delta_{\omega} | \partial f |^2 &=& | \widehat{\nabla} \partial f |^2 - g^{i \overline{j}} g^{p \overline{q}} \widetilde{R}_{\alpha \overline{\beta} \gamma \overline{\delta}} f_i^{\alpha} \overline{f_j^{\beta}}f_p^{\gamma} \overline{f_q^{\delta}} \ \geq \ | \widehat{\nabla} \partial f |^2.\end{aligned}$$ Because $X$ is compact, $$\begin{aligned} \label{IntDiv} 0 \ = \ \int_X \Delta_{\omega} | \partial f |^2 \omega^n & \geq & \int_X | \widehat{\nabla} \partial f |^2 \omega^n, \end{aligned}$$ and therefore, $| \widehat{\nabla} \partial f |^2 =0$. Taking $f = \text{id}$ in [\[DiffChristoffel\]](#DiffChristoffel){reference-type="eqref" reference="DiffChristoffel"}, so that $f_k^{\alpha} = \delta_k^{\alpha}$ and $f_{k\ell}^{\alpha} = 0$, we see that $$\begin{aligned} (\widehat{\nabla}_k \partial f)_{\ell}^{\alpha} &=& \widetilde{\Gamma}_{\gamma \rho}^{\alpha} \delta_k^{\gamma} \delta_{\ell}^{\rho} - \Gamma_{k\ell}^m \delta_m^{\alpha} \ = \ \widetilde{\Gamma}_{k \ell}^{\alpha} - \Gamma_{k\ell}^{\alpha},\end{aligned}$$ and so the Christoffel symbols of $\omega$ and $\widetilde{\omega}$ coincide. It follows from [\[Estimate1\]](#Estimate1){reference-type="eqref" reference="Estimate1"} and [\[IntDiv\]](#IntDiv){reference-type="eqref" reference="IntDiv"} that if $\text{RBC}_{\widetilde{\omega}} < 0$ at some point, then we arrive at a contradiction, showing that there are no second Chern Ricci-flat metrics on compact manifolds with quasi-negative real bisectional curvature. ◻

*Proof of Corollary 1.2.* Let $f : (X, \omega) \to (X, \widetilde{\omega})$ be a holomorphic map. Suppose that $\widetilde{\omega}$ is Kähler-like. By Royden's polarisation argument [@Royden p. 552], $\text{HSC}_{\widetilde{\omega}} \leq 0$ (respectively, $\text{HSC}_{\widetilde{\omega}} <0$) implies that $\text{RBC}_{\widetilde{\omega}} \leq 0$ (respectively, $\text{RBC}_{\widetilde{\omega}} <0$).
Hence, we can apply Theorem 1.1 to obtain the equality of the Chern connections in the $\text{HSC}_{\widetilde{\omega}}\leq 0$ case, and non-existence if $\text{HSC}_{\widetilde{\omega}} <0$. ◻

## Remark 3.1 {#Remark33 .unnumbered}

Instead of assuming that $\text{RBC}_{\widetilde\omega}\le0$ and $\text{Ric}_\omega^{(2)}=0$ in Theorem 1.1, one may assume that $\text{RBC}_{\widetilde{\omega}} \leq c$ and $\text{Ric}_{\omega}^{(2)} = c \widetilde{\omega}$ for some constant $c\in\mathbf R$. Only minor changes to the proof are required. Theorem 1.5 admits a similar generalisation.\ In the case where the real bisectional curvature vanishes identically, Yang--Zheng  [@YangZhengRBC] showed that the metric must be balanced. A metric that is simultaneously pluriclosed and balanced must be Kähler. This is clear from tracing the formula for the second Chern Ricci curvature (see, e.g., [@BroderStanfield Theorem 3.4]).

## Corollary 3.2 {#corollary-3.2 .unnumbered}

Let $(X, \widetilde{\omega})$ be a compact Hermitian manifold with real bisectional curvature vanishing identically. Every second Chern Ricci-flat metric on $X$ is balanced. If $X$ is non-Kähler, there are no second Chern Ricci-flat pluriclosed metrics on $X$.\ It is clear from [@BroderTangAltered Theorem 2.9] that the assertion of this corollary also holds for compact Hermitian manifolds whose altered real bisectional curvature (in the sense of [@BroderTangAltered]) vanishes identically.

# Nonnegatively curved metrics

In this section, we prove Theorem 1.5 and Corollary 1.6. The main technique is the Hermitian Aubin--Yau inequality established by the first-named author in [@BroderSBC] (building from [@Aubin; @Yau1976]).

*Proof of Theorem 1.5.* As in the proofs of Theorem 1.1 and Corollary 1.2, we will work with a general biholomorphic map $f : (X, \omega) \to (X, \widetilde{\omega})$ and restrict to the identity map at the end. We also maintain the notation used in the proof of Theorem 1.1.
From [@BroderSBC], we have $$\begin{aligned} \Delta_{\widetilde\omega} | \partial f |^2 \circ f^{-1} = | \widehat{\nabla} \partial f \circ f^{-1} |^2 &- \widetilde{\text{Ric}}_{\alpha \overline{\beta}}^{(2)} g^{i \overline{j}} f_i^{\alpha} \overline{f_j^{\beta}} \\ &+ R_{k \overline{\ell} p \overline{q}} g^{i \overline{q}} g^{p \overline{j}} \widetilde g^{\gamma \overline{\delta}} \widetilde g_{\alpha \overline{\beta}} f_i^{\alpha} \overline{f_j^{\beta}} (f^{-1})_{\gamma}^k \overline{(f^{-1})_{\delta}^{\ell}}.\end{aligned}$$ The source curvature term $$R_{k \overline{\ell} p \overline{q}} g^{i \overline{q}} g^{p \overline{j}} \widetilde g^{\gamma \overline{\delta}} \widetilde g_{\alpha \overline{\beta}} f_i^{\alpha} \overline{f_j^{\beta}} (f^{-1})_{\gamma}^k \overline{(f^{-1})_{\delta}^{\ell}}$$ is controlled precisely by the Schwarz bisectional curvature, and hence $$\begin{aligned} \Delta_{\widetilde\omega} | \partial f |^2 \circ f^{-1} & \geq & | \widehat{\nabla} \partial f \circ f^{-1} |^2.\end{aligned}$$ Integrating this inequality, we see that $| \widehat{\nabla} \partial f \circ f^{-1} |^2 = 0$. Taking $f$ to be the identity map shows that the Chern connections coincide. Moreover, if $\text{SBC}_{\widetilde{\omega}} >0$ at one point, we get a contradiction in the same manner as in the proof of Theorem 1.1. ◻

The strength of the Schwarz bisectional curvature remains largely unknown. However, it is clear that the holomorphic bisectional curvature dominates the Schwarz bisectional curvature, in the sense that a sign condition on the former implies the same sign condition on the latter. This enables us to prove Corollary 1.6.

*Proof of Corollary 1.6.* A compact Hermitian symmetric space $X$ admits a Kähler metric $\widetilde\omega$ with $\text{HBC}_{\widetilde{\omega}} \geq 0$ (see, e.g., [@MokZhong]). By Theorem 1.5, since the holomorphic bisectional curvature dominates the Schwarz bisectional curvature, every second Chern Ricci-flat metric on $X$ must be Kähler.
If such a metric existed, the canonical bundle would be holomorphically torsion, which is impossible. ◻ Let us remark that a second Chern Ricci-flat metric was found on the (non-compact) Snow manifold $S5$ in [@AngellaCalamaiSpotti $\S 3.3.4$]. # Hermitian metrics with prescribed second Chern Ricci curvature {#DTKSection} In the Kähler category, the prescribed Ricci curvature problem is settled by Yau's resolution of the Calabi conjecture [@Yau1976]. In the Riemannian category, however, this problem is largely unresolved. Numerous results have appeared in the literature over the past four decades (see the survey [@BP19] and more recent works such as [@BK20; @LWa; @PulemotovZiller]). One of the first uniqueness theorems for Riemannian metrics with prescribed Ricci curvature was produced by DeTurck--Koiso [@DeTurckKoiso]. According to this theorem, if $(M, \bar{g})$ is a compact Einstein manifold with nonnegative sectional curvature and $\text{Ric}_{\bar{g}} = \bar{g}$, then every Riemannian metric $g$ on $M$ with $\text{Ric}_g = \text{Ric}_{\bar{g}}$ has the same Levi-Civita connection as $\bar{g}$. Analysis of Kähler metrics on the Wallach space $\text{SU}(3)/\mathbf{T}^2$ demonstrates that the result may fail without the nonnegativity assumption on the sectional curvature (see [@PulemotovZiller Example 1]). A similar uniqueness theorem holds in the Hermitian category for the second Chern Ricci curvature. In this setting, we require only the nonnegativity of the holomorphic bisectional curvature. ## Theorem 5.1 {#DeTurckKoisoTrick .unnumbered} Let $(X, \widetilde{\omega})$ be a compact Hermitian manifold with $\text{Ric}_{\widetilde{\omega}}^{(2)} = \widetilde{\omega}$ and $\text{HBC}_{\widetilde{\omega}} \geq 0$. If $\omega$ is a Hermitian metric on $X$ with $\text{Ric}_{\omega}^{(2)} = \text{Ric}_{\widetilde{\omega}}^{(2)}$, then $\omega$ and $\widetilde{\omega}$ have the same Chern connection. 
*Proof.* The assumptions on $\widetilde{\omega}$ ensure that the real bisectional curvature of $\widetilde{\omega}$ is bounded above. To see this, denote by $\mathcal{H}_X$ the space of Hermitian $(0,2)$--tensors on $X$. Let $\widetilde{g}$ be the metric underlying $\widetilde{\omega}$. The real bisectional curvature $\text{RBC}_{\widetilde{\omega}}$ is the quadratic form associated to the complex curvature operator $$\mathfrak{K} : \mathcal{H}_X \to \mathcal{H}_X, \hspace{1cm} \mathfrak{K}(\xi)_{k \overline{\ell}} \ : = \ \widetilde{R}_{i \overline{j} k \overline{\ell}} \widetilde{g}^{p \overline{j}} \widetilde{g}^{i \overline{q}} \xi_{p \overline{q}},$$ where $\widetilde{R}$ is the Chern curvature tensor of $\widetilde{\omega}$. Hence, an upper bound on $\text{RBC}_{\widetilde{\omega}}$ is given by an upper bound on the largest eigenvalue of $\mathfrak{K}$. Since $\widetilde{g}$ is second Chern Einstein, it is an eigenvector of $\mathfrak{K}$ with unit eigenvalue: $$\begin{aligned} \mathfrak{K}(\widetilde{g})_{k \overline{\ell}} &=& \widetilde{R}_{i \overline{j} k \overline{\ell}} \widetilde{g}^{p \overline{j}} \widetilde{g}^{i \overline{q}} \widetilde{g}_{p\bar{q}} \ = \ \widetilde{g}^{i \overline{j}} \widetilde{R}_{i \overline{j} k \overline{\ell}} \ = \ \widetilde{\text{Ric}}_{k \overline{\ell}}^{(2)} \ = \ \widetilde{g}_{k \overline{\ell}}.\end{aligned}$$ Let $\zeta$ be an eigenvector of $\mathfrak{K}$, distinct from $\widetilde g$, with eigenvalue $\mu$. Since $\mathfrak{K}$ is Hermitian, $\zeta$ may be chosen orthogonal to $\widetilde{g}$; that is, we may assume that $\text{tr}_{\widetilde{g}}(\zeta) =0$. Fix a point $x_0\in X$ and choose coordinates such that $\widetilde{g}_{i \overline{j}} = \delta_{ij}$ and $\zeta_{i \overline{j}} = \lambda_i \delta_{ij}$ at $x_0$; after relabelling, we may further assume that $\lambda_1 = \max_k \lambda_k >0$.
Computing at this point, we find $$\begin{aligned} \mu \lambda_1 \ = \ \mu \zeta_{1 \overline{1}} \ = \ \mathfrak{K}(\zeta)_{1 \overline{1}} &=& \sum \widetilde{R}_{i \overline{j} 1 \overline{1}} \widetilde{g}^{i\overline{q}} \widetilde{g}^{p \overline{j}} \zeta_{p \overline{q}} \ = \ \sum_p \widetilde{R}_{p \overline{p} 1 \overline{1}} \lambda_p.\end{aligned}$$ Hence, since $\sum_p \lambda_p = \text{tr}_{\widetilde{g}}(\zeta) = 0$ and $\sum_p \widetilde{R}_{p \overline{p} 1 \overline{1}} = \widetilde{\text{Ric}}_{1 \overline{1}}^{(2)} = 1$ at $x_0$, $$\begin{aligned} \mu \ = \ \frac{1}{\lambda_1} \sum_p \widetilde{R}_{p \overline{p} 1 \overline{1}} \lambda_p &=& \frac{1}{\lambda_1} \sum_p (\widetilde{R}_{p \overline{p} 1 \overline{1}} - \mathcal{B}_{\min}) \lambda_p \\ & \leq & \frac{1}{\lambda_1} \sum_p (\widetilde{R}_{p \overline{p} 1 \overline{1}} - \mathcal{B}_{\min}) \lambda_1 \ = \ 1 - n \mathcal{B}_{\min},\end{aligned}$$ where $\mathcal{B}_{\min}$ denotes the minimum of the holomorphic bisectional curvature. It follows that the eigenvalues of $\mathfrak{K}$ are bounded above by $\max\{ 1, 1 - n \mathcal{B}_{\min} \}$. Therefore, if $\mathcal{B}_{\min} \geq 0$, we must have $\text{RBC}_{\widetilde{\omega}} \leq 1$. Now let $f : (X, \omega) \to (X, \widetilde{\omega})$ be a holomorphic map.
Since $\text{Ric}_{\omega}^{(2)} = \text{Ric}_{\widetilde{\omega}}^{(2)} = \widetilde{\omega}$, we see that $$\begin{aligned} \text{Ric}_{k \overline{\ell}}^{(2)} g^{k \overline{q}} g^{p \overline{\ell}} \widetilde{g}_{\alpha \overline{\beta}} f_p^{\alpha} \overline{f_q^{\beta}} &=& f_k^{\gamma} \overline{f_{\ell}^{\delta}} \widetilde{g}_{\gamma \overline{\delta}} g^{k \overline{q}} g^{p \overline{\ell}} \widetilde{g}_{\alpha \overline{\beta}} f_p^{\alpha} \overline{f_q^{\beta}} \ = \ | \partial f |^4.\end{aligned}$$ Hence, using the inequality $\text{RBC}_{\widetilde{\omega}} \leq 1$ and the Chern--Lu formula [\[CoordinateSchwarzFormula\]](#CoordinateSchwarzFormula){reference-type="eqref" reference="CoordinateSchwarzFormula"}, we find $$\begin{aligned} \Delta_{\omega} | \partial f |^2 &=& | \widehat{\nabla} \partial f |^2 + | \partial f |^4 - g^{i \overline{j}} g^{p \overline{q}} \widetilde{R}_{\alpha \overline{\beta} \gamma \overline{\delta}} f_i^{\alpha} \overline{f_j^{\beta}}f_p^{\gamma} \overline{f_q^{\delta}} \ \geq \ | \widehat{\nabla} \partial f |^2.\end{aligned}$$ Since $X$ is compact, by integrating the above inequality with $f = \text{id}$ as in the proof of Theorem 1.1, we obtain the desired conclusion. ◻

The main theorem of [@MokZhong] states that a compact Kähler manifold $(X,\widetilde{\omega})$ with $\text{Ric}_{\widetilde{\omega}}>0$ and $\text{HBC}_{\widetilde{\omega}} \geq 0$ must be biholomorphically isometric to a compact Hermitian symmetric space. It is interesting to compare Theorem 5.1 to this result. The conditions on the curvature in Theorem 5.1 are considerably stronger: we require $\text{Ric}_\omega^{(2)}=\text{Ric}_{\widetilde\omega}^{(2)}$ instead of just positivity. However, the gain is that Theorem 5.1 does not require the manifold $X$ to be Kähler.
For instance, in (complex) dimension $2$, there are two metrics $\widetilde{\omega}$ that satisfy the hypotheses: the (Kähler) Fubini--Study metric on $\mathbf{P}^2$ and the (non-Kähler) standard metric on a primary Hopf surface $\mathbf{S}^1 \times \mathbf{S}^3$ (see [@GauduchonIvanov]).

## Corollary 5.2 {#corollary-5.2 .unnumbered}

Let $\mathbf{S}^1 \times \mathbf{S}^3$ be a primary Hopf surface endowed with its standard metric $\omega_0$. If $\omega$ is a Hermitian metric on $\mathbf{S}^1 \times \mathbf{S}^3$ with $\text{Ric}_{\omega}^{(2)} = \text{Ric}_{\omega_0}^{(2)}$, then $\omega$ has the same Chern connection as $\omega_0$.

## Remark 5.3 {#Remark_HermBetter .unnumbered}

It is interesting to compare our use of the Bochner technique to that of DeTurck--Koiso [@DeTurckKoiso]. The identity map $\text{id} : (M,g) \to (M,\widetilde{g})$ of Riemannian manifolds is harmonic only if $g^{jk}(\Gamma_{jk}^i - \widetilde{\Gamma}_{jk}^i)=0$. From the Bianchi identity, this is satisfied by $\text{id} : (M, g) \to (M, \text{Ric}_g)$ if $\text{Ric}_{g}$ is positive definite, and this is the situation that DeTurck--Koiso consider. In the Hermitian category, the identity map is holomorphic, and hence, the Bochner technique can be applied without these cumbersome assumptions on the Christoffel symbols or the positivity of the Ricci curvature.

## Remark 5.4 {#Remark_Ric1_Ric2 .unnumbered}

The first Chern Ricci curvature of a Hermitian metric is determined completely by the Chern connection of the metric. However, since $\text{Ric}^{(2)}_\omega= g^{i \overline{j}} R_{i \overline{j}k \overline{\ell}}$, it appears that two Hermitian metrics may have distinct second Chern Ricci curvatures even if their Chern connections coincide. For this reason, we do not claim that $\text{Ric}^{(2)}_\omega=\text{Ric}^{(2)}_{\widetilde\omega}$ in Theorems 1.1 and 1.5. Similarly, it appears that the pluriclosed condition is not determined entirely by the Chern connection.
Indeed, let $\Gamma_{ij}^k = g^{k \overline{\ell}} \partial_i g_{j \overline{\ell}}$ be the Christoffel symbols of the Chern connection. The pluriclosed condition $\sqrt{-1} \partial \bar{\partial} \omega=0$ is equivalent to $$\begin{aligned} g_{p \overline{\ell}} \left( R_{k \bar{j} i}{}^p - R_{i \bar{j} k}{}^p \right) + g_{p \bar{j}} \left( R_{i \bar{\ell} k}{}^p - R_{k \bar{\ell} i}{}^p \right) + 4 g_{p \bar{q}} \Gamma_{ik}^p \overline{\Gamma_{j\ell}^q} &=& 0.\end{aligned}$$ This cannot be expressed in terms of the Christoffel symbols alone.

Angella, D., Calamai, S., Spotti, C., Remarks on Chern--Einstein Hermitian metrics, Math. Z. **295**, no. 3-4, pp. 1707--1722, (2020). Aubin, T., Équations du type Monge--Ampère sur les variétés kählériennes compactes, C. Rendus Acad. Sci. Paris, **283**, pp. 119--121, (1976). Berger, M., Sur les variétés d'Einstein compactes. Comptes Rendus de la IIIe Réunion du Groupement des Mathématiciens d'Expression Latine (Namur, 1965), pp. 35--55, (1965). Besse, A., Einstein manifolds, Springer-Verlag, Berlin, (1987). Broder, K., The Schwarz lemma in Kähler and non-Kähler geometry, Asian J. Math., **27**, no. 1, pp. 121--134, (2023). Broder, K., The Schwarz lemma: An Odyssey, Rocky Mountain J. Math., **52**, no. 4, pp. 1141--1155, (2022). Broder, K., Stanfield, J., On the Gauduchon Curvature of Hermitian Manifolds, Int. J. Math., **34**, no. 7, 2350039, 37 pages, (2023). Broder, K., Tang, K., On the altered holomorphic curvatures of Hermitian manifolds, arXiv:2201.03666. To appear in the Pacific Journal of Mathematics. Buttsworth, T., Krishnan, A.M., Prescribing Ricci curvature on a product of spheres, Ann. Mat. Pura Appl., **201**, pp. 1--36, (2022). Buttsworth, T., Pulemotov, A., The prescribed Ricci curvature problem for homogeneous metrics, in: O. Dearricott et al. (eds), Differential geometry in the large, Cambridge University Press, pp. 169--192, (2021).
DeTurck, D., Koiso, N., Uniqueness and nonexistence of metrics with prescribed Ricci curvature, Ann. Inst. H. Poincaré Anal. Non Linéaire, **1**, no. 5, pp. 351--359, (1984). Fischer, A. E., Wolf, J. A., The Calabi construction for compact Ricci-flat Riemannian manifolds, Bull. Amer. Math. Soc., **80**, no. 1, (1974). Gauduchon, P., Ivanov, S., Einstein-Hermitian surfaces and Hermitian Einstein-Weyl structures in dimension 4, Math. Z., **226**, pp. 317--326, (1997). Lauret, J., Will, C.E., Prescribing Ricci curvature on homogeneous spaces, J. reine angew. Math., **2022**, pp. 95--133, (2022). Lee, M. C., Li, K.-F., Deformation of Hermitian metrics, Math. Res. Lett., **29**, no. 5, pp. 1485--1497, (2022). Lee, M. C., Streets, J., Complex manifolds with negative curvature operator, Int. Math. Res. Not. IMRN, no. 24, pp. 18520--18528, (2021). Liu, K., Yang, X., Geometry of Hermitian Manifolds, Int. J. Math. **23**, no. 6, 1250055, 40 pages, (2012). Lu, Y.-C., Holomorphic mappings of complex manifolds. J. Diff. Geom., **2**, pp. 299--312, (1968). Mok, N., The uniformization theorem for compact Kähler manifolds of nonnegative holomorphic bisectional curvature. J. Diff. Geom. **27**, no. 2, pp. 179--214, (1988). Mok, N., Zhong, J.-Q., Curvature characterization of compact Hermitian symmetric spaces, J. Diff. Geom., **23**, pp. 15--67, (1986). Pulemotov, A., Ziller, W., On the variational properties of the prescribed Ricci curvature functional, arXiv:2110.14129. Submitted. Royden, H. L., The Ahlfors-Schwarz lemma in several complex variables, Comment. Math. Helvetici, **55**, pp. 547--558, (1980). Streets, J., Tian, G., Hermitian Curvature Flow, J. Eur. Math. Soc. (JEMS), **13**, no. 3, pp. 601--634, (2011). Tosatti, V., Non-Kähler Calabi--Yau Manifolds, Contemp. Math., **644**, pp. 261--277, (2015). Tosatti, V., Weinkove, B., The complex Monge--Ampère equation on compact Hermitian manifolds, J. Amer. Math. Soc., **23**, no. 4, pp. 1187--1195, (2010).
Yang, B., Zheng, F., On curvature tensors of Hermitian manifolds, Communications in Analysis and Geometry, **26**, no. 5, pp. 1195--1222, (2018). Yang, X., Compact Kähler manifolds with quasi-positive second Chern--Ricci curvature, arXiv:2006.13884. Yang, X., Zheng, F., On the real bisectional curvature for Hermitian manifolds, Trans. Amer. Math. Soc., **371**, no. 4, pp. 2703--2718, (2019). Yau, S.-T., On the Ricci curvature of a compact Kähler manifold and the complex Monge--Ampr̀e equation, I, Comm. Pure Appl. Math., **31**, pp. 339--411, (1978).
arxiv_math
{ "id": "2309.10295", "title": "Hermitian metrics with vanishing second Chern Ricci curvature", "authors": "Kyle Broder and Artem Pulemotov", "categories": "math.DG math.CV", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- author: - Tao Hao[^1] ,   Jiaqiang Wen[^2] ,   Jie Xiong[^3]    title: "**Maximum principle for a Markovian regime switching system under model uncertainty** " --- **Abstract: In this paper, we study a stochastic optimal control problem with a Markovian regime switching system, where the coefficients of the state equation and the cost functional are uncertain. First, we obtain the variational inequality by showing the continuity, with respect to the uncertainty parameter, of the solution of the variational equation, which is characterized by a system of forward-backward stochastic differential equations. Second, using the linearization method and a weak convergence technique, we prove the necessary stochastic maximum principle and establish a sufficient condition for the stochastic optimal control. Finally, as an application, a risk-minimizing portfolio selection problem is studied. Meanwhile, the $L^\beta$-solution and $L^\beta$-estimate of stochastic differential equations with regime switching are given for $\beta=2k$ with $k\in \mathbb{N}$.** **Key words: Forward-backward stochastic differential equation, regime switching, model uncertainty, partial information, maximum principle.** **AMS subject classifications. 93E20, 60H10, 60H30.** # Introduction In a real financial market, beyond the regime switching driven by a Markov chain, it is more reasonable and general to allow the model itself to be uncertain, since interest rates, stock returns, and volatilities are all affected by uncertainty; this motivates the present work. We first present the main motivation for the optimal control problem, then review the history of this topic, and finally describe the contributions of our work. ## Motivation Let us explain the motivation by a problem of finding risk-minimizing portfolios under model uncertainty in finance, where the risk is characterized in terms of $g$-expectation (see Peng [@Peng-2004]). **Example 1**.
*Suppose the share market is in one of two regimes: a bull market or a bear market. Let $\theta=1$ and $\theta=2$ denote a bull market and a bear market, respectively, so that $\theta\in\Theta=\{1,2\}$. Let ${\cal Q}=\{Q^\lambda: \lambda\in[0,1]\},$ where $Q^\lambda$ is the probability measure such that $Q^\lambda(\{1\})=\lambda$ and $Q^\lambda(\{2\})=1-\lambda.$* *Consider a market consisting of a bond and a stock whose prices are $S_0(t)$ and $S_1(t)$, respectively, driven by the following equations: $$dS_{0}(t)=S_{0}(t)r(t)dt \quad\hbox{and}\quad dS_{1}(t)=S_{1}(t)\{\mu(t)dt+\sigma(t)d\overline{W}^v_\theta(t)\}\quad (\hbox{\rm under}\ \tilde{\mathbb{P}}_\theta^v), \quad t\in[0,T],$$ where $v(t)$ is the amount invested in the risky asset at time $t$; $r(t)$ is the interest rate; $\mu(t)$ is the appreciation rate with $\mu(t)>r(t)$ for $t\in[0,T]$; $\sigma(t)$ is the volatility; and $\overline{W}^v_\theta$ is a one-dimensional stochastic process depending on $v(\cdot)$ and $\theta$. Under a self-financing portfolio, the wealth process of an agent, starting with an initial wealth $x_0$, satisfies the following wealth equation: $$\label{1.2} x_\theta^v(0)=x_0\quad\hbox{and}\quad dx^v_\theta(t)=\{r(t)x_\theta^v(t)+(\mu(t)-r(t))v(t)\}dt+\sigma(t)v(t)d\overline{W}_\theta^v(t),\quad t\in[0,T].$$ Moreover, we introduce the following cost functional $$\begin{aligned} J_g(v(\cdot))&= \sup_{Q^\lambda\in {\cal Q}}\int_\Theta{\cal E}_{g,\theta}^v[x^v_\theta(T)]Q^\lambda(d\theta)=\max\bigg\{{\cal E}_{g,1}^v[x^v_1(T)], {\cal E}_{g,2}^v[x^v_2(T)]\bigg\}, \end{aligned}$$ where ${\cal E}_{g,\theta}^v[\cdot]$ is the $g$-expectation, which represents a risk measure (see Rosazza and Gianin [@Rosazza-Gianin-2006]). The objective is to minimize $J_g(v(\cdot))$, which can be regarded as a robust optimal control problem (see Hu and Wang [@Hu-Wang-20]).
Meanwhile, according to the definition of $g$-expectation, the above cost functional can be reformulated as $$\begin{aligned} J_g(v(\cdot))&=\sup_{Q^\lambda\in {\cal Q}}\int_\Theta y_\theta^v(0) Q^\lambda(d\theta) =\max\Big\{y_1^v(0), y_2^v(0)\Big\}, \end{aligned}$$ where $y^v_\theta$ is a part of the solution of the following BSDE with regime switching $$\label{1.4} \left\{ \begin{aligned} dy^v_\theta(t) & = -g_\theta(t,y^v_\theta(t),z^v_\theta(t), \bar{z}^v_\theta(t),\sum_{i,j=1,i\neq j}^Ik^v_{\theta,ij}(t) \lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}})dt\\ &\quad+z^v_\theta(t)dW(t) + \bar{z}_\theta^v(t)dG(t) + k^v_\theta(t)\bullet dM(t),\\ y^v_\theta(T)&=x^v_\theta(T). \end{aligned} \right.$$ In [\[1.4\]](#1.4){reference-type="eqref" reference="1.4"}, $G(t)\triangleq\frac{1}{\sigma(t)}\log S_{1}(t)$ can be regarded as an observation process, since the policymaker can obtain information from the stock price; it satisfies $$\label{1.3} dG(t)=\frac{1}{\sigma(t)}\Big\{\mu(t)-\frac{1}{2}\sigma^2(t)\Big\}dt+d\overline W^v_\theta(t)\quad\hbox{and}\quad G(0)=0.$$ Note that, in a real market, it is reasonable to assume that the amount $v(\cdot)$ is adapted to the sub-filtration ${\cal G}_t\triangleq\sigma\{G(r): 0\leqslant r\leqslant t\}$. Thereby, we arrive at a new framework: a risk-minimizing portfolio selection problem under model uncertainty.* ***Problem**. Find a ${\cal G}_t$-adapted process $\widehat{v}(t)$ such that $$J_g(\widehat{v}(\cdot))=\min\limits_{v(\cdot)}J_g(v(\cdot)) =\min\limits_{v(\cdot)}\sup_{Q^\lambda\in {\cal Q}}\int_\Theta y_\theta^v(0) Q^\lambda(d\theta).$$* The above example inspires us to study the optimal control problem of finding risk-minimizing portfolios for a Markovian regime switching system with partial information under model uncertainty. In order to present our work more clearly, we describe the problem in detail.
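For intuition, the wealth equation [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"} can be simulated with an Euler--Maruyama scheme. The sketch below is illustrative only: it uses hypothetical constant coefficients $r,\mu,\sigma$, a constant investment amount $v$, and replaces $\overline{W}^v_\theta$ by a standard Brownian motion. With constant $v$, the mean wealth solves the linear ODE $m'(t)=r\,m(t)+(\mu-r)v$, which provides a sanity check on the simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical constant market coefficients (not taken from the paper).
r, mu, sigma = 0.02, 0.05, 0.2
T, n_steps, n_paths = 1.0, 1000, 20000
dt = T / n_steps

x0, v = 1.0, 0.5            # initial wealth and constant investment amount

x = np.full(n_paths, x0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    # Euler-Maruyama step for dx = (r x + (mu - r) v) dt + sigma v dW
    x += (r * x + (mu - r) * v) * dt + sigma * v * dW

# With constant v, E[x(T)] solves m' = r m + (mu - r) v, m(0) = x0.
m_exact = x0 * np.exp(r * T) + (mu - r) * v * (np.exp(r * T) - 1.0) / r
print(x.mean(), m_exact)
```

The Monte Carlo mean of the simulated terminal wealth agrees with the closed-form mean to within sampling error, which is a cheap consistency check before attacking the full control problem.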
Let $(\Omega,\mathscr{F}, \mathbb{F}, \mathbb{P})$ be a complete filtered probability space on which two standard Brownian motions $W(\cdot)$ and $G(\cdot)$ and a continuous-time, finite-state Markov chain $\alpha(\cdot)$ are defined, where the processes $W(\cdot)$, $G(\cdot)$ and $\alpha(\cdot)$ are independent, and $\mathbb{F}=\{\mathscr{F}_t\}_{t\geqslant 0}$ is their natural filtration, with $\mathscr{F}_0$ containing all $\mathbb{P}$-null sets of $\mathscr{F}$. Now, let us consider the following controlled forward-backward stochastic differential equation (FBSDE, for short): For $t\in[0,T]$, $$\label{4.1} \left\{ \begin{aligned} dX^v_\theta(t) & = b_\theta(t,X^v_\theta(t),v(t),\alpha(t-))dt + \sigma_\theta(t,X^v_\theta(t),v(t),\alpha(t-))dW(t) \\ & \quad+\bar{\sigma}_\theta(t,X^v_\theta(t),v(t),\alpha(t-)) d\overline W^v_{\theta}(t) +\beta_\theta(t,X^v_\theta(t-),v(t),\alpha(t-)) \bullet dM(t),\\ % dY^v_\theta(t) & = -f_\theta(t,X^v_\theta(t),Y^v_\theta(t),Z^v_\theta(t),\bar{Z}^v_\theta(t), \sum_{i,j=1,i\neq j}^IK^v_{\theta,ij}(t) \lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}},v(t),\alpha(t-))dt\\ &\quad+Z^v_\theta(t)dW(t) + \bar{Z}^v_\theta(t)dG(t) + K^v_\theta(t)\bullet dM(t),\\ % X^v_\theta(0)& = x, \quad Y^v_\theta(T) = \Phi_\theta(X^v_\theta(T), \alpha(T)), \end{aligned} \right.$$ where $x\in \mathbb{R}$ and $\theta$ ranges over $\Theta$, a locally compact, complete separable metric space with distance $d$; $M(\cdot)$ is the canonical martingale of the Markov chain $\alpha(\cdot)$, and $\overline W^v_\theta(\cdot)$ is a stochastic process depending on the control process $v(\cdot)$ and the parameter $\theta$. Meanwhile, the observation process $G(\cdot)$ satisfies $$\label{4.2} \left\{ \begin{aligned} dG(t) &=h_\theta(t,X^v_\theta(t), \alpha(t-))dt+d\overline{W}^v_{\theta}(t),\quad t\in[0,T], \\ G(0) &=0, \end{aligned}\right.$$ where $h_\theta(\cdot)$ is a given continuous mapping for each $\theta\in \Theta$.
Note that, on the one hand, the state process $X$ cannot be observed directly, but only through the observation process $G(\cdot)$. On the other hand, since the coefficients $(b_\theta, \sigma_\theta, \bar{\sigma}_\theta, \beta_\theta, f_\theta, \Phi_\theta, h_\theta)$ depend on the parameter $\theta\in \Theta$, the above model is said to be *uncertain* with partial information. Now, let ${\cal Q}$ be a weakly compact and convex set of probability measures on $(\Theta, \mathscr{B}(\Theta))$ and let the cost functional of the optimal control problem be defined by $$\begin{aligned} \label{4.2222} J(v(\cdot))=\sup_{Q\in {\cal Q}}\int_{\Theta} \mathbb{E}^{\tilde{\mathbb{P}}^v_\theta}[\Psi_\theta(X_\theta^v(T))+\Lambda_\theta(Y^v_\theta(0))] Q(d\theta), \end{aligned}$$ where $d\tilde{\mathbb{P}}^v_\theta\triangleq R^v_\theta(t)d\mathbb{P}$ and $R^v_\theta$ satisfies ([\[4.7-1\]](#4.7-1){reference-type="ref" reference="4.7-1"}) below. The optimal control problem is described as follows. **Problem (PRUC)**. Find an optimal control $\widehat{v}(\cdot)$ such that $$\begin{aligned} \nonumber J(\widehat v(\cdot))=\inf_{v(\cdot)\in {\cal V}_{ad}} J(v(\cdot)), \end{aligned}$$ subject to ([\[4.1\]](#4.1){reference-type="ref" reference="4.1"}) and ([\[4.2\]](#4.2){reference-type="ref" reference="4.2"}), where ${\cal V}_{ad}$ is the set of admissible controls defined below. ## Literature review Classical stochastic control theory appeared with the birth of stochastic analysis and has developed rapidly in recent decades owing to its wide range of applications (see Yong and Zhou [@Yong-Zhou-99]). One of the main approaches to solving stochastic optimal control problems is Pontryagin's maximum principle, which makes such problems more tractable. The basic idea of the stochastic maximum principle is to derive a set of necessary and sufficient conditions that must be satisfied by any optimal control.
In the seminal paper [@Peng-1990], Peng established a global maximum principle for the stochastic optimal control problem, building on the nonlinear backward stochastic differential equations introduced by Pardoux and Peng [@Pardoux-Peng-1990]. From then on, the stochastic maximum principle was extensively studied by many researchers using systems of forward-backward stochastic differential equations (FBSDEs, for short). For more details, we refer the interested reader to [@Du-Meng-2013; @Wu-2013; @Buckdahn-Li-Ma-2016; @Hu-Ji-Xue-2018; @Hu-Ji-Xu-2022; @Hu-Wang-20], etc. Recently, there has been rapidly increasing interest in the stochastic maximum principle for optimal control problems with random jumps, such as Poisson jumps (see [@TL]) or regime switching jumps, which are of practical importance in various fields such as economics, financial management, science, and engineering. For example, a financial market may switch between two regimes: a bull market with rising prices and a bear market with falling prices. We call such a formulation a regime switching model, where the market parameters depend on market modes that switch among a finite number of regimes. More recently, applications of the stochastic maximum principle to optimal control problems with regime switching systems or Poisson jumps have been extensively developed. For instance, Zhang, Elliott, and Siu [@Zhang-Elliott-Siu-2012] studied a stochastic maximum principle for a Markovian regime-switching jump-diffusion model and its application to finance. Li and Zheng [@Li-Zheng-15] considered the weak necessary and sufficient stochastic maximum principle for Markovian regime switching systems. Menoukeu-Pamen [@Menoukeu-2017] investigated maximum principles for FBSDEs with Markovian regime switching and partial information.
Zhang, Sun, and Xiong [@Zhang-Sun-Xiong-2018] obtained a general stochastic maximum principle for a mean-field Markovian regime switching system. Zhang, Xiong, and Liu [@Zhang-Xiong-Liu-2018] considered a stochastic maximum principle for partially observed forward-backward stochastic differential equations with jumps and regime switching. Sun, Kemajou-Brown, and Menoukeu-Pamen [@Sun-Kemajou-Menoukeu-2018] investigated a risk-sensitive maximum principle for a Markov regime switching system. Dong, Nie, and Wu [@Dong-Nie-Wu-2022] obtained a maximum principle for the mean-field stochastic control problem with jumps. Wen et al. [@Wen-Li-Xiong-Zhang-2022] studied the related stochastic linear-quadratic optimal control problems with Markovian regime switching systems and their applications to finance. For some other important works, we refer the readers to Donnelly and Heunis [@Donnelly-Heunis-2012], Mei and Yong [@Mei--Yong; @2019], Oksendal and Sulem [@Oksendal-Sulem-2005], Song, Stockbridge, and Zhu [@Song-Stockbridge-Zhu-2011], Zhang and Zhou [@Zhang-Zhou-2009], Zhou and Yin [@Zhou-Yin-2003], and the references therein. As the example above shows, in a real market, besides the Markov chain, it is more reasonable and general to allow the model to be uncertain, since interest rates, stock returns, and volatilities are affected by uncertainty. However, to the best of our knowledge, few results have been published on this topic up to now. In this paper, we study precisely this topic, i.e., the maximum principle for a Markovian regime switching system under model uncertainty. Meanwhile, in order to obtain a general result, we consider the general situation where the state equation is an FBSDE system with partial information, since partial-information scenarios are common in financial models.
See [@Tang-1998; @Xiong-Zhou-2007; @Wang-Wu-2009; @Wang-Wu-Xiong-13; @Shi-Wang-Xiong-2016; @Menoukeu-2017] for optimal control problems with partial information. ## The contribution of this paper We now present our main contributions and the associated difficulties in detail. - The optimal control problem with partial information and regime switching under model uncertainty (**Problem (PRUC)**) is formulated. The key tools in the investigation of the maximum principle are the linearization method and a weak convergence technique. First, notice that the cost functional $J(v(\cdot))$ involves the probability $\tilde{\mathbb{P}}^v_\theta$. In order to study **Problem (PRUC)**, making use of Bayes' formula, we rewrite the cost functional ([\[4.2222\]](#4.2222){reference-type="ref" reference="4.2222"}) in a new form associated with the probability $\mathbb{P}$ (see ([\[4.10\]](#4.10){reference-type="ref" reference="4.10"})), which does not depend on $v$ or $\theta$. Second, in order to obtain the variational inequality, we need to show the continuity with respect to $\theta$ of the solution of the variational equation of FBSDE ([\[4.1\]](#4.1){reference-type="ref" reference="4.1"}). Based on these results, the variational inequality is established. Third, the essential ingredient for establishing the necessary maximum principle, namely the adjoint equation, is introduced. By investigating its boundedness and continuity in the parameter $\theta$, and applying Fubini's theorem, the stochastic maximum principle is proved. Finally, the sufficient condition for the optimal control $\widehat v$ is obtained; its proof mainly depends on certain convexity assumptions on the Hamiltonian. - We propose risk-minimizing portfolio selection problems in a new framework. Applying the above theoretical results to a special risk-minimizing portfolio selection problem, we give its explicit solution under model uncertainty.
- We initiate the study of forward-backward stochastic differential equations driven by a Brownian motion and the canonical martingales of a Markov chain. A useful estimate is introduced, which establishes the relation between $\mathbb{E}[(\int_t^T |\phi(s)|^2\bullet d[M](s))^\frac{\beta}{2}]$ and $\mathbb{E}[(\int_t^T |\phi(s)|^2\bullet d\langle M \rangle(s))^\frac{\beta}{2}]$ for $\beta=2k, k\in \mathbb{N}$. Based on this result, we prove that the SDE ([\[6.111\]](#6.111){reference-type="ref" reference="6.111"}) possesses a unique $L^\beta$-solution for $\beta=2k, k\in \mathbb{N}$, and give its $L^\beta$-estimate. Then, making use of a fixed point argument, the existence and uniqueness result for the BSDE with regime switching ([\[6.1-1\]](#6.1-1){reference-type="ref" reference="6.1-1"}) is shown. Since the Markovian regime switching jumps and partial information appear in our model, compared with Hu and Wang [@Hu-Wang-20], the difficulties of the present work mainly come from the boundedness and continuity of the solution of the variational equation. - First of all, since regime switching appears in the state equations, the existence and uniqueness of square-integrable adapted solutions of FBSDEs with regime switching is needed. Moreover, we need the $L^\beta$-estimate for $\beta=2k, k\in \mathbb{N}$ of SDEs with regime switching. For this $L^\beta$-estimate, the essential obstacle is to establish the relation between $\mathbb{E}[(\int_t^T |\phi(s)|^2\bullet d[M](s))^\frac{\beta}{2}]$ and $\mathbb{E}[(\int_t^T |\phi(s)|^2\bullet d\langle M \rangle(s))^\frac{\beta}{2}]$ for $\beta=2k$ with $k\in \mathbb{N}$. We prove that the former is dominated by the latter.
- Next, since partial information is considered in our model, the original cost functional ([\[4.2222\]](#4.2222){reference-type="ref" reference="4.2222"}) depends on the probability $\tilde{\mathbb{P}}^v_\theta$. It is difficult to obtain the variation of the cost functional ([\[4.2222\]](#4.2222){reference-type="ref" reference="4.2222"}) directly, since the probability $\tilde{\mathbb{P}}^v_\theta$ changes when the control $v$ changes. In order to overcome this obstacle, we apply Bayes' formula to rewrite the above cost functional as a new cost functional ([\[4.10\]](#4.10){reference-type="ref" reference="4.10"}), which depends on the probability $\mathbb{P}$ rather than $\tilde{\mathbb{P}}^v_\theta$. Following this line, **Problem** (**PRUC**) is equivalent to the new optimization problem ([\[4.3\]](#4.3){reference-type="ref" reference="4.3"})-([\[4.7-1\]](#4.7-1){reference-type="ref" reference="4.7-1"})-([\[4.10\]](#4.10){reference-type="ref" reference="4.10"}). - Finally, since the cost functional in Hu and Wang [@Hu-Wang-20] only involves $Y^v_\theta(0)$, the variation of their cost functional follows relatively directly. But since the cost functional ([\[4.10\]](#4.10){reference-type="ref" reference="4.10"}) in the present work contains $R^v_\theta(T), X_\theta^v(T), Y^v_\theta(0)$, the calculation of its variation is more involved: for example, one must establish the boundedness and continuity in $\theta$ of the triple $(R^v_\theta(T), X_\theta^v(T), Y^v_\theta(0))$, of the function $H$ (see ([\[4.9999\]](#4.9999){reference-type="ref" reference="4.9999"})), and of the adjoint equation. Besides, owing to the presence of partial information and Markovian regime switching, we propose a new adjoint equation, which is different from those of Hu and Wang [@Hu-Wang-20], and Wang, Wu, and Xiong [@Wang-Wu-Xiong-13].
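The measure-change step above rests on the identity $\mathbb{E}^{\tilde{\mathbb{P}}}[\xi]=\mathbb{E}^{\mathbb{P}}[R\,\xi]$ for the Radon--Nikodym density $R$. The following toy Monte Carlo check illustrates this reweighting; the constant Girsanov drift $h$ is a hypothetical stand-in for $h_\theta$, and all numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400_000
h = 0.3                        # hypothetical constant drift shift

w = rng.normal(size=n)         # samples of W(T) under P, with T = 1
# Girsanov density R = exp(h W - h^2 / 2): under R dP, W(T) has mean h.
R = np.exp(h * w - 0.5 * h**2)

xi = w**2                      # a functional of the terminal state
# E^{P~}[xi] computed two ways: by reweighting under P, and directly.
reweighted = np.mean(R * xi)
direct = np.mean((rng.normal(size=n) + h) ** 2)
print(reweighted, direct)      # both approximate 1 + h^2
```

Both estimators agree with the closed-form value $1+h^2$ up to Monte Carlo error, which is exactly the mechanism that lets the cost functional be rewritten under the reference probability $\mathbb{P}$.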
The paper is arranged as follows. In Section 2, some basic spaces are introduced, and the existence and uniqueness results of FBSDEs with regime switching are proved. Section 3 is devoted to the formulation of the optimal control problem with partial information and regime switching under model uncertainty. The maximum principle and the sufficient condition are established in Section 4. In Section 5, a risk-minimizing portfolio selection problem is investigated. Some proofs are collected in Section 6. # Preliminaries {#Sec2} ## Basic Spaces Let $T>0$ and $(\Omega,\mathscr{F}, \mathbb{F}, \mathbb{P})$ be a filtered probability space, on which a $d$-dimensional standard Brownian motion $W$ and a continuous-time Markov chain $\alpha(\cdot)$ with a finite state space ${\cal I}=\{1,2,\ldots, I\}$ are defined. The generator of the Markov chain $\alpha(\cdot)$ is denoted by $\Lambda=(\lambda_{ij})_{i,j\in {\cal I}}$, satisfying $\lambda_{ij}\geqslant 0$ for $i\neq j\in {\cal I}$ and $\sum_{j=1}^I\lambda_{ij}=0$ for every $i\in {\cal I}$. For each $s>0$, we set $$\begin{aligned} \mathscr{F}^\alpha_s&=\sigma\{\alpha(r): 0\leqslant r\leqslant s\}\vee \mathscr{N},\quad\mathscr{F}_s=\sigma\{W(r),\alpha(r): 0\leqslant r\leqslant s\}\vee \mathscr{N}, \end{aligned}$$ where $\mathscr{N}$ is the set of all $\mathbb{P}$-null subsets, and denote $\mathbb{F}^\alpha=(\mathscr{F}^\alpha_s)_{s\in[0,T]},\ \mathbb{F}=(\mathscr{F}_s)_{s\in[0,T]}$. By $\mathbb{R}^n$ we denote the $n$-dimensional real Euclidean space, $\mathbb{R}^{n\times d}$ the family of all $n\times d$ real matrices, and $\mathbb{N}$ the set of natural numbers. $\langle C, D\rangle =\hbox{\rm tr$\,$}\{CD^\top \}$ denotes the scalar product of $C=(c_{ij}), D=(d_{ij})\in \mathbb{R}^{n\times d}$, where the superscript $\top$ denotes the transpose of vectors or matrices.
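A chain with generator $\Lambda$ can be simulated by the standard jump-chain/holding-time construction: stay in state $i$ for an exponential time with rate $-\lambda_{ii}$, then jump to $j\neq i$ with probability $\lambda_{ij}/(-\lambda_{ii})$. The minimal sketch below uses a hypothetical two-state generator (rates illustrative, not from the paper) and checks the long-run occupation fraction against the stationary distribution $\pi$ solving $\pi\Lambda=0$, here $\pi=(2/3,1/3)$.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical generator for I = 2 states: rows sum to zero, off-diagonals >= 0.
Lam = np.array([[-1.0,  1.0],
                [ 2.0, -2.0]])

def sample_path(T, i0=0):
    """Return jump times and visited states of the chain on [0, T]."""
    t, i, times, states = 0.0, i0, [0.0], [i0]
    while True:
        rate = -Lam[i, i]
        t += rng.exponential(1.0 / rate)        # holding time in state i
        if t >= T:
            return np.array(times), np.array(states)
        probs = np.clip(Lam[i], 0.0, None)      # off-diagonal jump rates
        probs[i] = 0.0
        i = int(rng.choice(len(probs), p=probs / probs.sum()))
        times.append(t)
        states.append(i)

horizon = 20000.0
times, states = sample_path(T=horizon)
holding = np.diff(np.append(times, horizon))    # time spent in each visited state
frac0 = holding[states == 0].sum() / horizon
print(frac0)                                    # close to pi_0 = 2/3
```

The same construction yields the jump counts $[M_{ij}](t)$ and compensators $\langle M_{ij}\rangle(t)$ used below, since both are functionals of the path $(\alpha(s))_{s\leqslant t}$.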
For each pair $(i,j)\in {\cal I}\times{\cal I}$ with $i\neq j$, we define $$\label{2.4} [M_{ij}](t)=\sum_{0\leqslant r\leqslant t} \mathbf{1}_{\{\alpha(r-)=i\}}\mathbf{1}_{\{\alpha(r)=j\}},\quad \langle M_{ij} \rangle(t)=\int_0^t \lambda_{ij} \mathbf{1}_{\{\alpha(r-)=i\}}dr,$$ where $\mathbf{1}_A$ denotes the indicator function of $A$. From [@Donnelly-Heunis-2012; @Li-Zheng-15; @Nguyen-Yin-Nguyen-2021], the process $M_{ij}(t):=[M_{ij}](t)-\langle M_{ij}\rangle(t)$ is a discontinuous, square integrable martingale with respect to $\mathscr{F}^\alpha_t$ which vanishes at the origin. Furthermore, the process $[M_{ij}]=([M_{ij}](t))_{t\in[0,T]}$ is the optional quadratic variation and $\langle M_{ij}\rangle=(\langle M_{ij}\rangle(t))_{t\in[0,T]}$ is the predictable quadratic variation. According to the definition of optional quadratic covariations, it follows that $$\label{2.6} [M_{ij},W]=0, \quad [M_{ij}, M_{mn}]=0,\ \text{for}\ (i,j)\neq(m,n).$$ For simplicity, we let $M_{ii}(t)=[M_{ii}](t)=\langle M_{ii}\rangle(t)=0$ for each $i\in {\cal I}$, and denote, for $S=M,[M],\langle M\rangle,$ $$\label{2.0} \begin{aligned} \int_0^tK(r)\bullet dS(r)&=\int_0^t\sum_{i,j=1,i\neq j}^IK_{ij}(r)dS_{ij}(r),\\ K(r)\bullet dS(r)&=\sum_{i,j=1,i\neq j}^IK_{ij}(r)dS_{ij}(r). \end{aligned}$$ The following spaces are used frequently.
For $\beta\geqslant 2,$ - $L^\beta(\mathscr{F}_t;\mathbb{R}^n)$ is the family of $\mathbb{R}^n$-valued $\mathscr{F}_t$-measurable random variables $\xi$ with $\mathbb{E}|\xi|^\beta<\infty.$ - ${{\cal S}}_{\mathbb{F}}^{\beta}(0,T;\mathbb{R}^n)$ is the family of $\mathbb{R}^n$-valued $\mathbb{F}$-adapted càdlàg processes $(\psi_t)_{0\leqslant t\leqslant T}$ with $$\|\psi\|_{{\cal S}_\mathbb{F}}=\mathbb{E}\Big[\mathop{\rm sup}\limits_{0\leqslant t\leqslant T}| \psi_{t} |^{\beta}\Big]^\frac{1}{\beta}< +\infty.$$ - ${\cal H}_{\mathbb{F}}^{1, \beta}(0,T;\mathbb{R}^{n})$ is the family of $\mathbb{R}^n$-valued $\mathbb{F}$-progressively measurable processes $(\psi_t)_{0\leqslant t\leqslant T}$ with $$\mathbb{E}\Big[\Big (\int^{T}_{0} |\psi_{t}|dt\Big )^\beta\Big]<+\infty.$$ - ${\cal H}_{\mathbb{F}}^{2, \frac{\beta}{2}}(0,T;\mathbb{R}^{n})$ is the family of $\mathbb{R}^n$-valued $\mathbb{F}$-progressively measurable processes $(\psi_t)_{0\leqslant t\leqslant T}$ with $$\|\psi\|_{{\cal H}_\mathbb{F}}=\mathbb{E}\Big[\Big (\int^{T}_{0} |\psi_{t}|^2dt\Big )^\frac{\beta}{2}\Big]^{\frac{1}{\beta}}<+\infty.$$ - ${\cal K}_{\mathbb{F}}^{\beta,1}(0,T;\mathbb{R}^n)$ is the family of $K=(K_{ij})_{i,j\in {\cal I}}$ such that the $\mathbb{F}$-progressively measurable processes $K_{ij}$ satisfy $K_{ii}=0$ and $$\mathbb{E}\Big[\int^T_0 \sum_{i,j=1, i\neq j}^I |K_{ij}(t)|^\beta\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}}dt \Big]< \infty.$$ - ${\cal K}_{\mathbb{F}}^{2,\frac{\beta}{2}}(0,T;\mathbb{R}^n)$ is the family of $K=(K_{ij})_{i,j\in {\cal I}}$ such that the $\mathbb{F}$-progressively measurable processes $K_{ij}$ satisfy $K_{ii}=0$ and $$\|K\|_{{\cal K}_\mathbb{F}}:=\mathbb{E}\Big[\Big (\int^T_0 \sum_{i,j=1, i\neq j}^I |K_{ij}(t)|^2\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}}dt \Big ) ^\frac{\beta}{2}\Big]^{\frac{1}{\beta}}< \infty.$$ Denote ${\cal M}_{\mathbb{F}}^\beta(0,T;\mathbb{R}^m):={\cal S}^\beta_{\mathbb{F}}(0,T;\mathbb{R}^m) \times{\cal H}^{2,\frac{\beta}{2}}_{\mathbb{F}}(0,T;\mathbb{R}^{m\times d}) \times{{\cal K}}_{\mathbb{F}}^{2,\frac{\beta}{2}}(0,T;\mathbb{R}^m).$ We write ${\cal M}_{\mathbb{F}}^\beta(0,T;\mathbb{R})$ as ${\cal M}_{\mathbb{F}}^\beta(0,T)$ for short. ## FBSDEs with regime switching In this subsection, we set $\beta=2k, k\in \mathbb{N}$. First, we consider the following SDE: $$\begin{aligned} \label{6.111} \begin{aligned} X(t)= \ & x+\int_0^tb(s,X(s),\alpha(s-))ds+\int_0^t\sigma(s,X(s),\alpha(s-))dW(s)\\ & +\int_0^t\beta(s,X(s),\alpha(s-))\bullet d M(s),\ t\in[0,T]. \end{aligned} \end{aligned}$$ Here $W$ is a $d$-dimensional standard Brownian motion and $M$ is the canonical martingale of the Markov chain $\alpha$. We call ([\[6.111\]](#6.111){reference-type="ref" reference="6.111"}) an *SDE with regime switching*. For each pair $(i_0,j_0)\in {\cal I}\times{\cal I}$, let $b:[0,T]\times\Omega\times\mathbb{R}^n \times{\cal I}\rightarrow\mathbb{R}^n,$ $\sigma:[0,T] \times\Omega\times\mathbb{R}^{n} \times{\cal I}\rightarrow\mathbb{R}^{n\times d},$ $\beta_{i_0j_0}:[0,T]\times\Omega\times\mathbb{R}^n \times{\cal I}\rightarrow\mathbb{R}^n$ satisfy **Assumption 1**. - *There exists some $L>0$ such that, for $t\in[0,T]$, $x,x'\in \mathbb{R}^n$, $i\in {\cal I}$, $$\begin{aligned} \nonumber |b(t,x,i)-b(t,x',i)|+|\sigma(t,x,i)-\sigma(t,x',i)|+|\beta_{i_0j_0}(t,x,i)-\beta_{i_0j_0}(t,x',i)|\leqslant L|x-x'|. \end{aligned}$$* - *For $i\in {\cal I}$, $b(\cdot,0,i)\in {\cal H}_{\mathbb{F}}^{1,\beta}(0,T;\mathbb{R}^n),$ $\beta(t,0,i)=(\beta_{i_0j_0}(t,0,i))_{i_0,j_0\in{\cal I}}\in{\cal K}_{\mathbb{F}}^{\beta,1}(0,T;\mathbb{R}^{n}),$ $\sigma(\cdot,0,i) \in{\cal H}_{\mathbb{F}}^{2,\frac{\beta}{2}}(0,T;\mathbb{R}^{n\times d}).$* The following lemma, inspired by [@Li-Wei-14], is useful in investigating the $L^\beta$-estimate of Eq. ([\[6.111\]](#6.111){reference-type="ref" reference="6.111"}). **Lemma 2**.
*For $\phi=(\phi_{ij})_{i,j\in{\cal I}}\in{\cal K}_{\mathbb{F}}^{\beta,1}(0,T;\mathbb{R}^m)$, there exists a constant $C$ depending on $\beta$ and $\sum_{i,j=1,i\neq j}^I\lambda_{ij}$ such that for $0\leqslant t\leqslant T$, $$\begin{aligned} \nonumber \begin{aligned} \mathbb{E}\bigg[\Big (\int_t^T |\phi(s)|^2\bullet d[M](s)\Big )^\frac{\beta}{2}\bigg] \leqslant C \mathbb{E}\bigg[ \int_t^T \sum_{i,j=1,i\neq j}^I|\phi_{ij}(s)|^\beta\lambda_{ij}\mathbf{1}_{\{\alpha(s-)=i\}}ds \bigg]. \end{aligned} \end{aligned}$$* Now we can establish the $L^\beta$-solvability and the $L^\beta$-estimate of Eq. ([\[6.111\]](#6.111){reference-type="ref" reference="6.111"}). **Theorem 3**. *Under the above assumption, the SDE with regime switching ([\[6.111\]](#6.111){reference-type="ref" reference="6.111"}) possesses a unique solution $X\in {\cal S}_{\mathbb{F}}^{\beta}(0,T;\mathbb{R}^n).$ Furthermore, there exists a constant $C>0$ depending on $L,T,\beta,\sum\limits_{i,j=1,i\neq j}^I\lambda_{ij}$ such that $$\begin{aligned} \label{6.112} \begin{aligned} & \mathbb{E}\Big[\sup_{0\leqslant t\leqslant T}|X(t)|^\beta\Big]\leqslant C\mathbb{E}\bigg[|x|^\beta+\Big (\int_0^T|b(t,0,\alpha(t-))|dt\Big )^\beta+\Big (\int_0^T| \sigma(t,0,\alpha(t-))|^2dt\Big )^\frac{\beta}{2}\\ & \qquad\qquad\qquad\qquad +\Big (\int_0^T\sum_{i,j=1,i\neq j}^I|\beta_{ij}(t,0,\alpha(t-))|^\beta\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}}dt \Big )\bigg]. \end{aligned} \end{aligned}$$* Next, let us focus on the following BSDE: $$\label{6.1-1} \begin{aligned} Y(t)&=\xi+\int_t^TF(s,Y(s),Z(s),\sum_{i,j=1,i\neq j}^IK_{ij}(s)\lambda_{ij}\mathbf{1}_{\{\alpha(s-)=i\}},\alpha(s-))ds\\ &\quad-\int_t^TZ(s)dW(s)-\int_t^TK(s)\bullet dM(s), \ t\in[0,T]. \end{aligned}$$ Similar to ([\[6.111\]](#6.111){reference-type="ref" reference="6.111"}), the above equation is called a *BSDE with regime switching*. Assume $F: \Omega\times[0,T] \times\mathbb{R}^m \times\mathbb{R}^{m\times d} \times\mathbb{R}^m \times{\cal I}\rightarrow \mathbb{R}^m$ satisfies **Assumption 2**.
*There exists some constant $L>0$ such that, for $t\in[0,T]$, $\omega\in\Omega$, $y,y',k,k'\in \mathbb{R}^m$, $z,z'\in \mathbb{R}^{m\times d}$, $i\in {\cal I}$, $$|F(t,y,z,k,i)-F(t,y',z',k',i)|\leqslant L(|y-y'|+|z-z'|+|k-k'|)\ \ \hbox{and}\ \ \mathbb{E}\Big[\int_0^T |F(t,0,0,0,i)|^2dt\Big]<\infty.$$* **Theorem 4**. *Under the above assumption, for $\xi\in L^2(\mathscr{F}_T; \mathbb{R}^m)$, the BSDE ([\[6.1-1\]](#6.1-1){reference-type="ref" reference="6.1-1"}) admits a unique solution $(Y(\cdot),Z(\cdot),K(\cdot))\in {\cal M}_{\mathbb{F}}^2(0,T;\mathbb{R}^m).$ Moreover, there exists a constant $C>0$ depending on $L,T,\sum\limits_{i,j=1,i\neq j}^I\lambda_{ij}$ such that $$\label{6.4-1} \begin{aligned} &\mathbb{E}\bigg[\sup_{t\in[0,T]}|Y(t)|^2+\int_0^T(|Z(t)|^2+\sum_{i,j=1,i\neq j}^I|K_{ij}(t)|^2\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}})dt\bigg] \\ &\leqslant C\mathbb{E}\bigg[|\xi|^2+\int_0^T |F(t,0,0,0,\alpha(t-))|^2dt \bigg]. \end{aligned}$$* The proofs of the above results are given in Section [6](#sec 5){reference-type="ref" reference="sec 5"}. # Formulation of the problem In this section, we formulate the optimal control problem with partial information and regime switching under model uncertainty. For simplicity, we only consider the 1-dimensional case; the multi-dimensional case can be handled similarly. Let $V$ be a nonempty convex subset of $\mathbb{R}^l$, and recall that $\Theta$ is a locally compact, complete separable space with distance $d$.
For each pair $(i_0,j_0)\in{\cal I}\times{\cal I}$ and $\theta\in \Theta$, let the mappings $$\begin{aligned} \nonumber \begin{aligned} & b_\theta:[0,T] \times\mathbb{R}\times V \times{\cal I}\rightarrow\mathbb{R},\quad (\sigma_\theta,\bar{\sigma}_\theta):[0,T]\times\mathbb{R}\times V \times{\cal I}\rightarrow\mathbb{R},\\ % & \beta_{\theta,i_0j_0}: [0,T] \times\mathbb{R}\times V \times{\cal I}\rightarrow\mathbb{R}, \quad\Phi_\theta: \mathbb{R}\times{\cal I}\rightarrow\mathbb{R},\\ % & f_\theta:[0,T]\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}\times V \times{\cal I}\rightarrow\mathbb{R}, \quad h_\theta: [0,T]\times\mathbb{R}\times{\cal I}\rightarrow\mathbb{R} \end{aligned} \end{aligned}$$ satisfy **Assumption 3**. - *There exists a constant $L>0$ such that, for $t\in[0,T]$, $x,x'\in \mathbb{R}$, $y,y',k,k'\in \mathbb{R}, z,z',\bar{z},\bar{z}' \in \mathbb{R},\ v,v'\in V,\ i\in {\cal I},\ \psi_\theta=b_\theta,\sigma_\theta,$ $$\begin{aligned} \nonumber \begin{aligned} & |\psi_\theta(t,x,v,i)-\psi_\theta(t,x',v',i)|+|\bar{\sigma}_\theta(t,x,v,i)-\bar{\sigma}_\theta(t,x',v',i)|\leqslant L(|x-x'|+|v-v'|),\\ & |\Phi_\theta(x,i)-\Phi_\theta(x',i)|+|f_\theta(t,x,y,z,\bar{z},k,v,i)-f_\theta(t,x',y',z',\bar{z}',k',v',i)|\\ &\ \leqslant L\Big ( (1+|x|+|x'|+|v|+|v'|) (|x-x'|+|v-v'|)+|y-y'|+|z-z'|+|\bar{z}-\bar{z}'|+|k-k'| \Big ),\\ % & |\psi_\theta(t,0,0,i)|+|f_\theta(t,0,0,0,0,0,i)|+|\Phi_\theta(0,i)|\leqslant L,\\ % &\ |h_\theta(t,x,i)-h_\theta(t,x',i)|\leqslant L|x-x'|, \quad|h_\theta(t,x,i)|+|\bar{\sigma}_\theta(t,x,v,i)|\leqslant L. \end{aligned} \end{aligned}$$* - *There exists a constant $L>0$ such that, for $t\in[0,T],\ x,x'\in \mathbb{R},\ v,v'\in V,\ i \in {\cal I}$, $$\begin{aligned} \nonumber \begin{aligned} & |\beta_{\theta,i_0j_0}(t,x,v,i)-\beta_{\theta,i_0j_0}(t,x',v',i)|\leqslant L(|x-x'|+|v-v'|),\\ & |\beta_{\theta,i_0j_0}(t,0,0,i)|\leqslant L. 
\end{aligned} \end{aligned}$$* - *$b_\theta,\sigma_\theta,\bar{\sigma}_\theta, \beta_{\theta,i_0j_0}, f_\theta,\Phi_\theta,h_\theta$ are continuously differentiable in $(x,y,z,\bar{z},k,v)$; there exists some constant $L>0$ such that for $t\in[0,T]$, $x,x'\in \mathbb{R},$ $y,y',k,k'\in \mathbb{R},\ z,z',\bar{z},\bar{z}' \in \mathbb{R},\ v,v'\in V,\ i\in {\cal I}$, $$\begin{aligned} \nonumber \begin{aligned} & |\phi_\theta(t,x,v,i)-\phi_\theta(t,x',v',i)| \leqslant L(|x-x'|+|v-v'|),\\ & |\varphi_\theta(t,x,y,z,\bar{z},k,v,i)-\varphi_\theta(t,x',y',z',\bar{z}',k',v',i)|\\ & \leqslant L(|x-x'|+|y-y'|+|z-z'|+|\bar{z}-\bar{z}'|+|k-k'|+|v-v'|), \end{aligned} \end{aligned}$$ where $\theta\in \Theta, i\in {\cal I}$, $\phi_\theta$ and $\varphi_\theta$ denote the derivatives of $b_\theta, \sigma_\theta, \bar{\sigma}_\theta, \beta_{\theta,i_0j_0},\Phi_\theta,h_\theta$ and $f_\theta$ with respect to $(x,v)$ and $(x,y,z,\bar{z},$ $k,v)$, respectively.* - *There exists a positive constant $L$ such that, for $t\in[0,T]$, $x\in \mathbb{R},\ y,k\in \mathbb{R},\ z,\bar{z}\in \mathbb{R},$ $v\in V,\ i\in {\cal I},\ \theta,\theta'\in \Theta$, $$\begin{aligned} \nonumber |\psi_\theta(t,x,y,z,\bar{z},k,v,i)- \psi_{\theta'}(t,x,y,z,\bar{z},k,v,i)|\leqslant L d(\theta, \theta'), \end{aligned}$$ where $\psi_\theta$ denotes $b_\theta, \sigma_\theta, \bar{\sigma}_\theta, \beta_{\theta,i_0j_0},\Phi_\theta,f_\theta,h_\theta$ and their derivatives in $(x,y,z,\bar{z},k,v)$.* - *${\cal Q}$ is a weakly compact and convex set of probability measures on $(\Theta, \mathscr{B}(\Theta))$.* Now we introduce the notion of admissible control. **Definition 5**. *A process $v:\Omega\times[0,T]\rightarrow V$ is said to be an admissible control if $v(\cdot)$ is adapted to ${\cal G}_t$, the natural filtration generated by $G(\cdot)$ in ([\[4.2\]](#4.2){reference-type="ref" reference="4.2"}), and satisfies $\sup_{t\in[0,T]}\mathbb{E}[|v(t)|^4]<\infty$.
The set of admissible controls is denoted by ${\cal V}_{ad}$.* Under the above assumption, let us consider the equation ([\[4.1\]](#4.1){reference-type="ref" reference="4.1"}) and the observation process ([\[4.2\]](#4.2){reference-type="ref" reference="4.2"}). By inserting ([\[4.2\]](#4.2){reference-type="ref" reference="4.2"}) into ([\[4.1\]](#4.1){reference-type="ref" reference="4.1"}) we obtain $$\begin{aligned} \label{4.3} \left \{\begin{aligned} dX^v_\theta(t) & =[b_\theta(t,X^v_\theta(t),v(t),\alpha(t-))-\bar{\sigma}_\theta(t,X^v_\theta(t),v(t),\alpha(t-))h_\theta(t,X^v_\theta(t),\alpha(t-))]dt\\ &\quad+\sigma_\theta(t,X^v_\theta(t),v(t),\alpha(t-))dW(t)+\bar{\sigma}_\theta(t,X^v_\theta(t),v(t),\alpha(t-)) dG(t) \\ &\quad+\beta_\theta(t,X^v_\theta(t-),v(t),\alpha(t-))\bullet dM(t),\quad t\in[0,T],\\ % dY^v_\theta(t) &= -f_\theta(t,X^v_\theta(t),Y^v_\theta(t), Z^v_\theta(t),\bar{Z}^v_\theta(t),\sum_{i,j=1,i\neq j}^IK^v_{\theta,ij}(t)\lambda_{ij} \mathbf{1}_{\{\alpha(t-)=i\}},v(t),\alpha(t-))dt\\ &\quad+Z^v_\theta(t)dW(t)+\bar{Z}^v_\theta(t)dG(t)+K^v_\theta(t)\bullet dM(t),\ t\in[0,T],\\ % X^v_\theta(0) &=x,\quad Y^v_\theta(T)=\Phi_\theta(X^v_\theta(T), \alpha(T)). \end{aligned} \right. \end{aligned}$$ **Lemma 6**. 
*Under $\mathrm{(i)}$ and $\mathrm{(ii)}$ of , for each $v(\cdot)\in {\cal V}_{ad}$, the equation ([\[4.3\]](#4.3){reference-type="ref" reference="4.3"}) possesses a unique solution $(X^v(\cdot),Y^v(\cdot),Z^v(\cdot),\bar{Z}^v(\cdot) ,K^v(\cdot)) \in {\cal S}_{\mathbb{F}}^{4}(0,T;\mathbb{R})\times{\cal S}^2_{\mathbb{F}}(0,T;\mathbb{R}) \times{\cal H}^{2,1}_{\mathbb{F}}(0,T;\mathbb{R})\times{\cal H}^{2,1}_{\mathbb{F}}(0,T;\mathbb{R})\times{{\cal K}}_{\mathbb{F}}^{2,1}(0,T;\mathbb{R}).$ Furthermore, there exists a constant $C$ depending on $L,T,\sum\limits_{i,j=1,i\neq j}^I\lambda_{ij}$ such that $$\begin{aligned} \nonumber \begin{aligned} & \mathbb{E}\bigg[\sup_{0\leqslant t\leqslant T}|X^v_\theta(t)|^4+\sup_{0\leqslant t\leqslant T}|Y^v_\theta(t)|^{2} + \int_0^T (|Z^v_\theta(t)|^2+|\overline{Z}^v_\theta(t)|^2)dt\\ & +\int_0^T\sum_{i,j=1,i\neq j}^I|K^v_{\theta,ij}(t)|^2\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}}dt\bigg] \leqslant C\bigg(|x|^4+\mathbb{E}\Big[ \int_0^T|v(t)|^4dt \Big] \bigg). \end{aligned} \end{aligned}$$* *Proof.* According to and , for each $v(\cdot)\in {\cal V}_{ad}$, the equation ([\[4.3\]](#4.3){reference-type="ref" reference="4.3"}) possesses a unique solution $(X_\theta^v(\cdot),Y_\theta^v(\cdot),Z_\theta^v(\cdot),\bar{Z}_\theta^v(\cdot),K_\theta^v(\cdot))\in {\cal S}_{\mathbb{F}}^{4}(0,T;\mathbb{R})\times{\cal S}^2_{\mathbb{F}}(0,T;\mathbb{R}) \times{\cal H}^{2,1}_{\mathbb{F}}(0,T;\mathbb{R})\times{\cal H}^{2,1}_{\mathbb{F}}(0,T;\mathbb{R})\times{{\cal K}}_{\mathbb{F}}^{2,1}(0,T;\mathbb{R}).$ Finally, the above estimate follows directly from ([\[6.112\]](#6.112){reference-type="ref" reference="6.112"}) and ([\[6.4-1\]](#6.4-1){reference-type="ref" reference="6.4-1"}). ◻ The following lemma states the continuity with respect to $\theta$ of $(X^v_\theta, Y^v_\theta, Z^v_\theta,\bar{Z}^v_\theta,K^v_{\theta})$, whose proof is deferred to Section [6](#sec 5){reference-type="ref" reference="sec 5"}. **Lemma 7**.
*Under $\mathrm{(i)}$, $\mathrm{(ii)}$ and $\mathrm{(iv)}$ of , the mappings $X^v_\theta, Y^v_\theta, Z^v_\theta, \bar{Z}^v_\theta,K^v_{\theta}=(K^v_{\theta,ij})_{i,j\in{\cal I}}$ are continuous in $\theta$, i.e., $$\label{5.5} \begin{aligned} & \lim_{\delta\rightarrow 0}\sup_{d(\theta, \bar{\theta})\leqslant\delta}\mathbb{E}\bigg[\sup_{t\in[0,T]} \Big (|X^v_\theta(t)-X^v_{\bar{\theta}}(t)|^4+|Y^v_\theta(t)-Y^v_{\bar{\theta}}(t)|^2\Big )+\int_0^T (|Z^v_\theta(t)-Z^v_{\bar{\theta}}(t)|^2\\ & \qquad\qquad\quad+ |\bar{Z}^v_\theta(t)-\bar{Z}^v_{\bar{\theta}}(t)|^2+\sum_{i,j=1,i\neq j}^I |K^v_{\theta,ij}(t)-K^v_{\bar{\theta},ij}(t)|^2 \lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}})dt\bigg]=0. \end{aligned}$$* Let us now consider a process satisfying $$\begin{aligned} \label{4.7-1} \left\{ \begin{aligned} dR^v_\theta(t)&=R^v_\theta(t)h_\theta(t,X^v_\theta(t), \alpha(t-))dG(t),\ t\in[0,T],\\ R^v_\theta(0)&=1. \end{aligned} \right. \end{aligned}$$ From the boundedness of $h_\theta$, we know that $$\begin{aligned} \label{4.7-2} \begin{aligned} \sup_{\theta\in\Theta}\mathbb{E}\Big[\sup_{0\leqslant t \leqslant T}|R_\theta^v(t)|^l\Big]<\infty,\ \forall\ l>1. \end{aligned} \end{aligned}$$ **Remark 8**. *By Itô's formula, the process $(R^v_\theta(\cdot))^{-1}$ satisfies $$\left\{ \begin{aligned} d(R^v_\theta(t))^{-1} & =(R^v_\theta(t))^{-1}h^2_\theta(t,X^v_\theta(t), \alpha(t-))dt - (R^v_\theta(t))^{-1}h_\theta(t,X^v_\theta(t), \alpha(t-))dG(t),\ t\in[0,T],\\ (R^v_\theta(0))^{-1} & =1. \end{aligned} \right.$$ From the boundedness of $h_\theta$, one has $$\label{5.8} \sup_{\theta\in\Theta}\mathbb{E}\Big[\sup_{0\leqslant t \leqslant T}|(R_\theta^v(t))^{-1}|^l\Big]<\infty,\ \forall\ l>1.$$* The continuity of the process $R^v_\theta$ with respect to $\theta$ is stated in the following lemma. **Lemma 9**.
*Under $\mathrm{(i)}$, $\mathrm{(ii)}$ and $\mathrm{(iv)}$ of , it follows, for $v(\cdot)\in {\cal V}_{ad}$, $$\begin{aligned} & \lim_{\delta\rightarrow 0}\sup_{d(\theta, \bar{\theta})\leqslant\delta}\mathbb{E}\bigg[\sup_{t\in[0,T]}|R^v_\theta(t)-R^v_{\bar{\theta}}(t)|^2\bigg]=0. \end{aligned}$$* The proof is similar to that of and is omitted. In addition, $R^v_\theta(\cdot)$ is an $(\mathbb{F}, \mathbb{P})$-martingale. Recall that we have defined $d\tilde{\mathbb{P}}^v_\theta:=R^v_\theta(t)d\mathbb{P}$ on $\mathcal{F}_t$, for each $t\in[0,T]$. Thanks to Girsanov's theorem, $(W,\overline W^v_\theta)$ is a $2$-dimensional Brownian motion under the probability measure $\tilde{\mathbb{P}}^v_\theta$. Let the mappings $\Psi_\theta:\mathbb{R}\rightarrow\mathbb{R}$ and $\Lambda_\theta:\mathbb{R}\rightarrow\mathbb{R}$ satisfy **Assumption 4**. - *$\Psi_\theta$ and $\Lambda_\theta$ are continuously differentiable with respect to $x$ and $y$, respectively.* - *There exists a constant $L>0$ such that for $x,x'\in \mathbb{R}$, $y,y'\in \mathbb{R}$, $\theta\in \Theta$, $$\begin{aligned} \nonumber \begin{aligned} & |\Lambda_\theta(y)-\Lambda_\theta(y')|\leqslant L|y-y'|, \quad|\Psi_\theta(0)|+|\Lambda_\theta(0)|\leqslant L,\\ & |\Psi_\theta(x)-\Psi_\theta(x')|\leqslant L(1+|x|+|x'|)|x-x'|. \end{aligned} \end{aligned}$$* - *There exists a constant $L>0$ such that for $x\in \mathbb{R}$, $y\in\mathbb{R}$, $\theta,\theta'\in \Theta$, $$\begin{aligned} \nonumber |\Psi_\theta(x)-\Psi_{\theta'}(x)|+| \Lambda_\theta(y)-\Lambda_{\theta'}(y)|\leqslant Ld(\theta,\theta'). \end{aligned}$$* - *There exists a constant $L>0$ such that for $x,x'\in \mathbb{R}$, $y,y'\in \mathbb{R}$, $\theta\in \Theta$, $$\begin{aligned} \nonumber |\partial_x\Psi_\theta(x)-\partial_x\Psi_\theta(x')|\leqslant L|x-x'|,\quad|\partial_y\Lambda_\theta(y)-\partial_y\Lambda_\theta(y')|\leqslant L|y-y'|.
\end{aligned}$$* Recall that the cost functional is defined by $$\begin{aligned} \nonumber J(v(\cdot))=\sup_{Q\in {\cal Q}}\int_{\Theta} \mathbb{E}^{\tilde{\mathbb{P}}^v_\theta}[\Psi_\theta(X_\theta^v(T))+\Lambda_\theta(Y^v_\theta(0))] Q(d\theta). \end{aligned}$$ Thanks to Bayes' formula, the above cost functional can be written as $$\begin{aligned} \label{4.10} J(v(\cdot))=\sup_{Q\in {\cal Q}}\int_{\Theta} \mathbb{E}[R^v_\theta(T)\Psi_\theta(X_\theta^v(T))+\Lambda_\theta(Y^v_\theta(0))] Q(d\theta). \end{aligned}$$ Obviously, **Problem** (**PRUC**) (see Introduction) is equivalent to minimizing ([\[4.10\]](#4.10){reference-type="ref" reference="4.10"}) over $\mathcal{V}_{ad}$ subject to ([\[4.3\]](#4.3){reference-type="ref" reference="4.3"}) and ([\[4.7-1\]](#4.7-1){reference-type="ref" reference="4.7-1"}). The purpose of the present work is to establish necessary and sufficient optimality conditions for the optimal control $\widehat{v}(\cdot).$ # Maximum principle and sufficient condition This section is devoted to the necessary maximum principle and the sufficient condition for optimality. ## Variational equations Let $\widehat v(\cdot)$ be an optimal control and, for each $\theta\in \Theta$, let $(\widehat X_\theta,\widehat Y_\theta,\widehat Z_\theta,\widehat{\bar{Z}}_\theta,\widehat K_\theta)$ and $\widehat R_\theta$ be the solutions of the state equations ([\[4.3\]](#4.3){reference-type="ref" reference="4.3"}) and ([\[4.7-1\]](#4.7-1){reference-type="ref" reference="4.7-1"}) associated with $\widehat v(\cdot)$, respectively. From the convexity of ${\cal V}_{ad}$, for each $v(\cdot)\in{\cal V}_{ad}$ and $\varepsilon>0$, $v^\varepsilon(\cdot):=\widehat v(\cdot)+\varepsilon(v(\cdot)-\widehat v(\cdot))\in{\cal V}_{ad}$.
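The reweighting in ([\[4.10\]](#4.10){reference-type="ref" reference="4.10"}) rests on the density $R^v_\theta(\cdot)$ from ([\[4.7-1\]](#4.7-1){reference-type="ref" reference="4.7-1"}) being a $\mathbb{P}$-martingale with $R^v_\theta(0)=1$, so that $\mathbb{E}[R^v_\theta(T)]=1$. A minimal Euler sketch, assuming for illustration that $G$ is a scalar $\mathbb{P}$-Brownian motion and $h$ is a bounded deterministic function (both hypothetical simplifications of the state-dependent $h_\theta$ in the text):

```python
import math, random

random.seed(7)

# Toy Euler scheme for the density dynamics dR = R h dG, R(0) = 1,
# with G a scalar Brownian motion and h bounded (hypothetical choices).
def simulate_R(T=1.0, n_steps=100, h=lambda t: 0.4 * math.cos(t)):
    R, t, dt = 1.0, 0.0, T / n_steps
    for _ in range(n_steps):
        dG = random.gauss(0.0, math.sqrt(dt))
        R += R * h(t) * dG   # Euler step: R_{k+1} = R_k * (1 + h dG)
        t += dt
    return R

# Martingale property: E[R(T)] = 1. The Euler step preserves this in
# expectation (the increments dG have mean zero), so only Monte Carlo
# error remains in the sample average below.
n_paths = 5000
mean_R = sum(simulate_R() for _ in range(n_paths)) / n_paths
assert abs(mean_R - 1.0) < 0.05
```

The same normalisation is what makes $d\tilde{\mathbb{P}}^v_\theta = R^v_\theta(T)\,d\mathbb{P}$ a probability measure on $\mathscr{F}_T$.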
By $(X^\varepsilon_\theta,Y^\varepsilon_\theta,Z^\varepsilon_\theta, \bar{Z}^\varepsilon_\theta, K^\varepsilon_\theta)$ and $R^\varepsilon_\theta$ we denote the solutions of ([\[4.3\]](#4.3){reference-type="ref" reference="4.3"}) and ([\[4.7-1\]](#4.7-1){reference-type="ref" reference="4.7-1"}) with $v^\varepsilon(\cdot)$, for each $\theta\in \Theta$. Denote for $\psi_\theta=b_\theta,\sigma_\theta,\bar{\sigma}_\theta,\beta_\theta,\Phi_\theta, h_\theta$, $\ell=x,y,z,\bar{z},k,v$, $$\nonumber \begin{aligned} \psi_\theta(t) & :=\psi_\theta(t,\widehat{X}_\theta(t),\widehat v(t),\alpha(t-)), \qquad\ \psi^\varepsilon_\theta(t):=\psi_\theta(t,X^\varepsilon_\theta(t),v^\varepsilon(t),\alpha(t-)), \\ \partial_x\psi_\theta(t) & :=\partial_x\psi_\theta(t,\widehat{X}_\theta(t),\widehat{v}(t),\alpha(t-)),\ \partial_v\psi_\theta(t):=\partial_v\psi_\theta(t,\widehat{X}_\theta(t),\widehat v(t),\alpha(t-)),\\ f^\varepsilon_\theta(t) & :=f_\theta(t,X^\varepsilon_\theta(t),Y^\varepsilon_\theta(t),Z^\varepsilon_\theta(t),\bar{Z}^\varepsilon_\theta(t), \sum_{i,j=1,i\neq j}^IK^\varepsilon_{\theta,ij}(t)\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}},v^\varepsilon(t),\alpha(t-)),\\ f_\theta(t) & :=f_\theta(t,\widehat{X}_\theta(t),\widehat{Y}_\theta(t),\widehat{Z}_\theta(t),\widehat{\bar{Z}}_\theta(t), \sum_{i,j=1,i\neq j}^I\widehat{K}_{\theta,ij}(t)\lambda_{ij} \mathbf{1}_{\{\alpha(t-)=i\}},\widehat{v}(t),\alpha(t-)),\\ \partial_\ell f_\theta(t) & :=\partial_\ell f_\theta(t,\widehat{X}_\theta(t),\widehat{Y}_\theta(t),\widehat{Z}_\theta(t),\widehat{\bar{Z}}_\theta(t), \sum_{i,j=1,i\neq j}^I\widehat{K}_{\theta,ij}(t)\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}}, \widehat{v}(t),\alpha(t-)).
\end{aligned}$$ and $$\nonumber \begin{aligned} \Phi_\theta(T) & :=\Phi_\theta(\widehat{X}_\theta(T),\alpha(T)),\qquad\quad h_\theta(t):=h_\theta(t,\widehat{X}_\theta(t), \alpha(t-)),\\ \Phi^\varepsilon_\theta(T) & :=\Phi_\theta(X^\varepsilon_\theta(T), \alpha(T)),\qquad\quad h^\varepsilon_\theta(t):=h_\theta(t,X^\varepsilon_\theta(t), \alpha(t-)),\\ \partial_x\Phi_\theta(T) & :=\partial_x\Phi_\theta(\widehat{X}_\theta(T),\alpha(T)),\qquad \partial_xh_\theta(t):=\partial_x h_\theta(t,\widehat{X}_\theta(t), \alpha(t-)). \end{aligned}$$ We consider the following variational equations: $$\left\{ \begin{aligned} dR^{1}_\theta(t)&=\Big ( R^{1}_\theta(t)h_\theta(t) +\widehat{R}_\theta(t)\partial_xh_\theta(t)X^{1}_\theta(t)\Big )dG(t),\ t\in[0,T], \\ R^{1}_\theta(0)&=0, \end{aligned} \right.$$ and $$\begin{aligned} \nonumber \left\{ \begin{aligned} dX^{1}_\theta(t) & =\bigg\{\Big[\partial_xb_\theta(t)-\partial_x\bar{\sigma}_\theta(t)h_\theta(t)-\bar{\sigma}_\theta(t)\partial_xh_\theta(t)\Big]X^{1}_\theta(t)\\ & \quad+ \Big[\partial_vb_\theta(t) -\partial_v\bar{\sigma}_\theta(t)h_\theta(t)\Big](v(t)-\widehat{v}(t))\bigg\}dt\\ & \quad+\Big[\partial_x\sigma_\theta(t)X^{1}_\theta(t)+\partial_v\sigma_\theta(t)(v(t)-\widehat{v}(t))\Big]dW(t)\\ & \quad+\Big[\partial_x\bar{\sigma}_\theta(t)X^{1}_\theta(t)+\partial_v\bar{\sigma}_\theta(t)(v(t)-\widehat{v}(t))\Big]dG(t)\\ & \quad+\Big[\partial_x\beta_\theta(t)X^{1}_\theta(t)+\partial_v\beta_\theta(t)(v(t)-\widehat{v}(t))\Big]\bullet dM(t),\quad t\in[0,T],\\ X^{1}_\theta(0)& =0. \end{aligned} \right. \end{aligned}$$ According to , under , the above variational equations admit unique solutions $X^{1}_\theta\in {{\cal S}}_{\mathbb{F}}^{4}(0,T;\mathbb{R})$ and $R^{1}_\theta\in {{\cal S}}_{\mathbb{F}}^{2}(0,T;\mathbb{R})$. Moreover, for some constant $C>0$, the following estimates hold: $$\label{4.12-1} \left\{ \begin{aligned} & \mathbb{E}\bigg[\sup_{0\leqslant t\leqslant T}|X^{1}_\theta(t) |^4\bigg] \leqslant C\mathbb{E}\bigg[\int_0^T(|v(t)|^4+|\widehat{v}(t)|^4)dt\bigg],\\ & \mathbb{E}\bigg[\sup_{0\leqslant t\leqslant T}|R^{1}_\theta(t)|^2 \bigg] \leqslant C\bigg\{\mathbb{E}\Big[\int_0^T(|v(t)|^4+|\widehat{v}(t)|^4)dt\Big]\bigg\}^{\frac{1}{2}}.
\end{aligned} \right.$$ For simplicity of presentation, denote $$\delta^{\varepsilon}X_\theta(t):=\frac{1}{\varepsilon}(X^\varepsilon_\theta(t)-\widehat{X}_\theta(t))-X^{1}_\theta(t),\quad\delta^{\varepsilon}R_\theta(t):=\frac{1}{\varepsilon}(R^\varepsilon_\theta(t)-\widehat{R}_\theta(t))-R^{1}_\theta(t).$$ **Lemma 10**. *Let $\mathrm{(i)}$-$\mathrm{(iii)}$ of hold true. Then, for $\theta\in \Theta$, we have $$\begin{aligned} \mathrm{ (i)}&\ \mathbb{E}\bigg[\sup\limits_{0\leqslant t\leqslant T}|\delta^{\varepsilon}X_\theta(t) |^4 \bigg] \leqslant C\mathbb{E}\bigg[\int_0^T(|v(t)|^4+|\widehat{v}(t)|^4)dt\bigg].\\ \mathrm{(ii)}&\ \lim\limits_{\varepsilon\rightarrow 0}\sup\limits_{\theta\in\Theta}\mathbb{E}\bigg[\sup\limits_{0\leqslant t\leqslant T}|\delta^{\varepsilon}X_\theta(t)|^4\bigg]=0.\\ \mathrm{ (iii)}&\ \mathbb{E}\bigg[\sup\limits_{0\leqslant t\leqslant T}|\delta^{\varepsilon}R_\theta(t)|^2\bigg] \leqslant C\bigg\{\mathbb{E}\bigg[\int_0^T(|v(t)|^4 +|\widehat{v}(t)|^4)dt\bigg]\bigg\}^\frac{1}{2}.\\ \mathrm{(iv)}&\ \lim\limits_{\varepsilon\rightarrow 0}\sup\limits_{\theta\in\Theta} \mathbb{E}\bigg[\sup\limits_{0\leqslant t\leqslant T}|\delta^{\varepsilon}R_\theta(t) |^2 \bigg]=0. \end{aligned}$$* Next, let us investigate the variational BSDE: for each $\theta\in \Theta$, $$\left\{ \begin{aligned} d Y_\theta^{1}(t) & =-\bigg\{ \partial_xf_\theta(t)X_\theta^{1}(t)+ \partial_yf_\theta(t)Y_\theta^{1}(t)+ \partial_zf_\theta(t)Z_\theta^{1}(t) + \partial_{\bar{z}}f_\theta(t)\bar{Z}_\theta^{1}(t)\\ & \quad+ \partial_kf_\theta(t)\sum_{i,j=1,i\neq j}^IK^{1}_{\theta,ij}(t)\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}} +\langle\partial_v f_\theta(t), v(t)-\widehat{v}(t)\rangle \bigg\}dt\\ & \quad+Z_\theta^{1}(t)dW(t)+\bar{Z}_\theta^{1}(t)dG(t)+K_\theta^{1}(t)\bullet dM(t),\ t\in[0,T],\\ Y_\theta^{1}(T) & =\partial_x\Phi_\theta(T)X_\theta^{1}(T).
\end{aligned} \right.$$ From , $\partial_yf_\theta,\partial_zf_\theta,\partial_{\bar{z}}f_\theta,\partial_kf_\theta$ are uniformly bounded and $\partial_xf_\theta, \partial_v f_\theta,\partial_x\Phi_\theta$ are bounded by $L(1+|x|+|v|)$. Thanks to , the above equation admits a unique solution $(Y_\theta^1(\cdot),Z_\theta^1(\cdot),\bar{Z}_\theta^1(\cdot),K_\theta^1(\cdot))\in {\cal S}^2_{\mathbb{F}}(0,T;\mathbb{R}) \times{\cal H}^{2,1}_{\mathbb{F}}(0,T;\mathbb{R})\times{\cal H}^{2,1}_{\mathbb{F}}(0,T;\mathbb{R})\times{{\cal K}}_{\mathbb{F}}^{2,1}(0,T;\mathbb{R})$. Moreover, there exists a constant $C>0$ depending on $L,T,\sum\limits_{i,j=1,i\neq j}^I\lambda_{ij}$ such that $$\label{4.21-1} \begin{aligned} & \mathbb{E}\bigg[\sup_{t\in[0,T]}|Y_\theta^{1}(t)|^2+\int_0^T(|Z_\theta^{1}(t)|^2+|\bar{Z}_\theta^{1}(t)|^2)dt +\int_0^T\sum_{i,j=1,i\neq j}^I|K^{1}_{\theta,ij}(t)|^2\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}}dt\bigg]\\ & \leqslant C\mathbb{E}\bigg[|x|^4+\int_0^T(|v(t)|^4+|\widehat{v}(t)|^4) dt\bigg]. \end{aligned}$$ In addition, $(X^1_\theta,R^1_\theta, Y^1_\theta,Z^1_\theta,\bar{Z}^1_\theta,K^1_\theta)$ are continuous in $\theta$, as stated in the following lemma. **Lemma 11**. *Under $\mathrm{(i)}$-$\mathrm{(iv)}$ of , it follows $$\begin{aligned} & \lim_{\delta\rightarrow 0}\sup_{d(\theta, \bar{\theta})\leqslant\delta}\mathbb{E}\bigg[\sup_{t\in[0,T]}\Big (|X^1_\theta(t)-X^1_{\bar{\theta}}(t)|^4 +|R^1_\theta(t)-R^1_{\bar{\theta}}(t)|^2+|Y^1_\theta(t)-Y^1_{\bar{\theta}}(t)|^2\Big )\\ & \ +\int_0^T(|Z^1_\theta(t)-Z^1_{\bar{\theta}}(t)|^2+|\bar{Z}^1_\theta(t)-\bar{Z}^1_{\bar{\theta}}(t)|^2 +\sum_{i,j=1,i\neq j}^I|K^1_{\theta,ij}(t)-K^1_{\bar{\theta},ij}(t)|^2\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}})dt\bigg]=0.
\end{aligned}$$* Define, for $\Psi_\theta= Y_\theta, Z_\theta, \bar{Z}_\theta, K_{\theta,ij}$, $\delta^{\varepsilon}\Psi_\theta(t):=\frac{1}{\varepsilon}(\Psi^\varepsilon_\theta(t)-\widehat{\Psi}_\theta(t))-\Psi^{1}_\theta(t).$ The following lemma states the boundedness and continuity in $\theta$ of $(\delta^\varepsilon Y_\theta, \delta^\varepsilon Z_\theta, \delta^\varepsilon\overline Z_\theta,\delta^\varepsilon K_{\theta})$. **Lemma 12**. *Under , there exists a constant $C$ depending on $L,T,\sum\limits_{i,j=1,i\neq j}^I\lambda_{ij}$ such that $$\begin{aligned} \mathrm{(i)}\ & \mathbb{E}\bigg[\sup_{t\in[0,T]}|\delta^\varepsilon Y_\theta(t)|^2+\int_0^T(|\delta^\varepsilon Z_\theta(t)|^2+|\delta^\varepsilon\overline Z_\theta(t)|^2 +\sum\limits_{i,j=1,i\neq j}^I|\delta^\varepsilon K_{\theta,ij}(t)|^2\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}})dt\bigg]\\ & \leqslant C\mathbb{E}\bigg[|x|^4+\int_0^T(|v(t)|^4+|\widehat{v}(t)|^4) dt\bigg].\\ \mathrm{(ii)}\ & \lim_{\varepsilon\rightarrow 0}\sup_{\theta\in\Theta} \mathbb{E}\bigg[\sup_{t\in[0,T]}|\delta^\varepsilon Y_\theta(t)|^2 +\int_0^T(|\delta^\varepsilon Z_\theta(t)|^2+|\delta^\varepsilon \bar{Z}_\theta(t)|^2 +\sum\limits_{i,j=1,i\neq j}^I|\delta^\varepsilon K_{\theta,ij}(t)|^2\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}})dt\bigg]=0. \end{aligned}$$* The proofs of , and are deferred to the Appendix. Now, we define $$\begin{aligned} \nonumber {\cal Q}^v=\bigg\{Q\in {\cal Q}\Big|J(v(\cdot))=\int_{\Theta} \mathbb{E}[R^v_\theta(T)\Psi_\theta(X_\theta^v(T))+\Lambda_\theta(Y^v_\theta(0))] Q(d\theta)\bigg\}. \end{aligned}$$ Note that ${\cal Q}^v$ is nonempty.
Indeed, according to the definition of $J(v(\cdot))$, there exists a sequence $Q^L\in {\cal Q}$ such that $\int_{\Theta}\mathbb{E}[R^v_\theta(T)\Psi_\theta(X_\theta^v(T))+\Lambda_\theta(Y^v_\theta(0))] Q^L(d\theta)\geqslant J(v(\cdot))-\frac{1}{L}$. Since ${\cal Q}$ is weakly compact, one can find a $Q^v\in {\cal Q}$ such that a subsequence of $Q^L$ (still denoted by $Q^L$) converges weakly to $Q^v$. Thanks to , and , ([\[4.7-2\]](#4.7-2){reference-type="ref" reference="4.7-2"}) and , the mapping $\theta\rightarrow\mathbb{E}[R^v_\theta(T)\Psi_\theta(X_\theta^v(T))+\Lambda_\theta(Y^v_\theta(0))]$ is bounded and continuous. Hence, we obtain $$\begin{aligned} &J(v(\cdot))\geqslant\int_{\Theta}\mathbb{E}[R^v_\theta(T)\Psi_\theta(X_\theta^v(T))+\Lambda_\theta(Y^v_\theta(0))] Q^v(d\theta)\\ &=\lim_{L\rightarrow\infty}\int_\Theta\mathbb{E}[R^v_\theta(T)\Psi_\theta(X_\theta^v(T))+\Lambda_\theta(Y^v_\theta(0))] Q^L(d\theta)\geqslant J(v(\cdot)). \end{aligned}$$ Therefore, ${\cal Q}^v$ is nonempty. Now we can show the following variational inequality. **Theorem 13**. *Let and hold true. Then $$\begin{aligned} 0&\le \lim_{\varepsilon\rightarrow 0}\frac{1}{\varepsilon}\Big (J(v^\varepsilon(\cdot))-J(\widehat{v}(\cdot)) \Big )\\ &=\sup_{Q\in {\cal Q}^{\widehat{v}}}\int_\Theta\mathbb{E}[R^{1}_\theta(T)\Psi_\theta(\widehat{X}_\theta(T))+\widehat{R}_\theta(T)\partial_x\Psi_\theta(\widehat{X}_\theta(T))X^{1}_\theta(T) +\partial_y\Lambda_\theta(\widehat{Y}_\theta(0)) Y^{1}_\theta(0) ]Q(d\theta). \end{aligned}$$* *Proof.* The proof is split into four steps.
*Step 1.* Notice that for arbitrary $Q\in \mathcal{Q}^{\widehat{v}}$, we have $$\begin{aligned} \nonumber \begin{aligned} J(v^\varepsilon(\cdot)) & \geqslant\int_{\Theta} \mathbb{E}[R^\varepsilon_\theta(T)\Psi_\theta(X_\theta^\varepsilon(T)) +\Lambda_\theta(Y^\varepsilon_\theta(0))] Q(d\theta),\\ J(\widehat{v}(\cdot)) & =\int_{\Theta} \mathbb{E}[\widehat{R}_\theta(T)\Psi_\theta(\widehat{X}_\theta(T))+\Lambda_\theta(\widehat{Y}_\theta(0))] Q(d\theta). \end{aligned} \end{aligned}$$ From the identities $\frac{1}{\varepsilon}\Big (R^\varepsilon_\theta(T)-\widehat R_\theta(T)\Big )=\delta^\varepsilon R_\theta(T)+R^1_\theta(T)$, $\frac{1}{\varepsilon}\Big (X^\varepsilon_\theta(T)-\widehat X_\theta(T)\Big )=\delta^\varepsilon X_\theta(T)+X^1_\theta(T)$ and $\frac{1}{\varepsilon}\Big (Y^\varepsilon_\theta(0)-\widehat Y_\theta(0)\Big )=\delta^\varepsilon Y_\theta(0)+Y^1_\theta(0)$, we have $$\label{4.25-1} \begin{aligned} & \frac{1}{\varepsilon}\Big ( J(v^\varepsilon(\cdot))-J(\widehat{v}(\cdot)) \Big )\\ & \geqslant\frac{1}{\varepsilon}\int_{\Theta} \mathbb{E}[R^\varepsilon_\theta(T)\Psi_\theta(X_\theta^\varepsilon(T))-\widehat{R}_\theta(T)\Psi_\theta(\widehat{X}_\theta(T)) +\Lambda_\theta(Y^\varepsilon_\theta(0))-\Lambda_\theta(\widehat{Y}_\theta(0))] Q(d\theta)\\ & =\int_{\Theta} \mathbb{E}\Big[\Psi_\theta(X_\theta^\varepsilon(T))(\delta^\varepsilon R_\theta(T)+R_\theta^{1}(T)) +\widehat{R}_\theta(T)\partial_x\Psi^\varepsilon_\theta(T)(\delta^\varepsilon X_\theta(T)+X_\theta^{1}(T))\\ & \qquad+\partial_y\Lambda_\theta^\varepsilon(0)(\delta^\varepsilon Y_\theta(0)+Y_\theta^{1}(0))\Big] Q(d\theta)\\ & =\int_{\Theta} \mathbb{E}\Big[\Psi_\theta(X_\theta^\varepsilon(T))\delta^\varepsilon R_\theta(T)+\widehat{R}_\theta(T)\partial_x\Psi^\varepsilon_\theta(T)\delta^\varepsilon X_\theta(T) +\partial_y\Lambda_\theta^\varepsilon(0)\delta^\varepsilon Y_\theta(0)\Big] Q(d\theta)\\ & \quad+\int_{\Theta}
\mathbb{E}\Big[\Psi_\theta(\widehat{X}_\theta(T))R_\theta^{1}(T)+\widehat{R}_\theta(T)\partial_x\Psi_\theta(\widehat{X}_\theta(T))X^{1}_\theta(T) +\partial_y\Lambda_\theta(\widehat{Y}_\theta(0)) Y^{1}_\theta(0) \Big] Q(d\theta)\\ & \quad+\int_{\Theta} \mathbb{E}\Big[(\Psi_\theta(X_\theta^\varepsilon(T))-\Psi_\theta(\widehat{X}_\theta(T))) R_\theta^{1}(T) + \widehat{R}_\theta(T)(\partial_x\Psi^\varepsilon_\theta(T)-\partial_x\Psi_\theta(\widehat{X}_\theta(T)))X_\theta^{1}(T)\\ & \qquad+(\partial_y\Lambda_\theta^\varepsilon(0)-\partial_y\Lambda_\theta(\widehat{Y}_\theta(0)))Y_\theta^{1}(0)\Big] Q(d\theta), \end{aligned}$$ where $$\begin{aligned} \label{6.21} \begin{aligned} \partial_x\Psi^\varepsilon_\theta(T)&=\int_0^1\partial_x\Psi_\theta\Big( \widehat{X}_\theta(T)+\lambda\varepsilon(\delta^\varepsilon X_\theta(T)+X_\theta^{1}(T))\Big)d\lambda,\\ \partial_y\Lambda_\theta^\varepsilon(0)&=\int_0^1 \partial_y\Lambda_\theta\Big(\widehat{Y}_\theta(0)+\lambda\varepsilon(\delta^\varepsilon Y_\theta(0)+Y_\theta^{1}(0))\Big)d\lambda. \end{aligned} \end{aligned}$$ Since $|\Psi_\theta(X_\theta^\varepsilon(T))|\leqslant L(1+|X_\theta^\varepsilon(T)|^2)$, $|\partial_x\Psi^\varepsilon_\theta(T)|\leqslant L(1+|\widehat{X}_\theta(T)|+|\delta^\varepsilon X_\theta(T)|+|X_\theta^{1}(T)|)$ and $\partial_y\Lambda^\varepsilon_\theta(0)$ is uniformly bounded in $\theta$, we have $$\begin{aligned} & \Big|\mathbb{E}\Big[\Psi_\theta(X_\theta^\varepsilon(T))\delta^\varepsilon R_\theta(T)+\widehat{R}_\theta(T)\partial_x\Psi^\varepsilon_\theta(T)\delta^\varepsilon X_\theta(T) +\partial_y\Lambda_\theta^\varepsilon(0)\delta^\varepsilon Y_\theta(0)\Big]\Big|\\ & \leqslant L\mathbb{E}\Big[(1+|X_\theta^\varepsilon(T)|^2)|\delta^\varepsilon R_\theta(T)| +(1+|\widehat{X}_\theta(T)|+|\delta^\varepsilon X_\theta(T)|+|X_\theta^{1}(T)|) |\widehat{R}_\theta(T)||\delta^\varepsilon X_\theta(T)| +|\delta^\varepsilon Y_\theta(0)|\Big].
\end{aligned}$$ It follows from and that $$\label{4.27} \begin{aligned} \lim_{\varepsilon\rightarrow 0}\sup_{\theta\in \Theta}\mathbb{E}\Big[\Psi_\theta(X_\theta^\varepsilon(T))\delta^\varepsilon R_\theta(T) +\widehat R_\theta(T)\partial_x\Psi^\varepsilon_\theta(T)\delta^\varepsilon X_\theta(T) +\partial_y\Lambda_\theta^\varepsilon(0)\delta^\varepsilon Y_\theta(0)\Big]=0. \end{aligned}$$ Thanks to , $$\begin{aligned} \nonumber \begin{aligned} & \Big| \mathbb{E}\Big[(\Psi_\theta(X_\theta^\varepsilon(T))-\Psi_\theta(\widehat{X}_\theta(T)))R_\theta^{1}(T) +\widehat{R}_\theta(T)(\partial_x\Psi^\varepsilon_\theta(T)-\partial_x\Psi_\theta(\widehat{X}_\theta(T)))X_\theta^{1}(T)\\ & \quad+(\partial_y\Lambda_\theta^\varepsilon(0)-\partial_y\Lambda_\theta(\widehat{Y}_\theta(0)))Y_\theta^{1}(0)\Big]\Big|\\ & \leqslant L\varepsilon\mathbb{E}\Big[ (1+|X_\theta^\varepsilon(T)|+|\widehat{X}_\theta(T)|)|X^{1}_\theta(T)+\delta^\varepsilon X_\theta(T)||R^1_\theta(T)|\\ & \quad+|X^{1}_\theta(T)+\delta^\varepsilon X_\theta(T)||\widehat{R}_\theta(T)||X^{1}_\theta(T)| +|Y^{1}_\theta(0)+\delta^\varepsilon Y_\theta(0)||Y^{1}_\theta(0)|\Big], \end{aligned} \end{aligned}$$ which, combined with , , and ([\[4.7-2\]](#4.7-2){reference-type="ref" reference="4.7-2"}), ([\[4.12-1\]](#4.12-1){reference-type="ref" reference="4.12-1"}), ([\[4.21-1\]](#4.21-1){reference-type="ref" reference="4.21-1"}), yields $$\begin{aligned} \label{4.28} \begin{aligned} & \lim_{\varepsilon\rightarrow 0}\sup_{\theta\in\Theta}\mathbb{E}\Big[(\Psi_\theta(X_\theta^\varepsilon(T))-\Psi_\theta(\widehat{X}_\theta(T)))R_\theta^{1}(T) +\widehat{R}_\theta(T)(\partial_x\Psi^\varepsilon_\theta(T)-\partial_x\Psi_\theta(\widehat{X}_\theta(T)))X_\theta^{1}(T)\\ & \qquad\qquad+(\partial_y\Lambda_\theta^\varepsilon(0)-\partial_y\Lambda_\theta(\widehat{Y}_\theta(0)))Y_\theta^{1}(0)\Big]=0.
\end{aligned} \end{aligned}$$ Inserting ([\[4.27\]](#4.27){reference-type="ref" reference="4.27"}) and ([\[4.28\]](#4.28){reference-type="ref" reference="4.28"}) into ([\[4.25-1\]](#4.25-1){reference-type="ref" reference="4.25-1"}), we deduce $$\begin{aligned} \label{4.29} \begin{aligned} & \liminf_{\varepsilon\rightarrow 0}\frac{1}{\varepsilon}\Big ( J(v^\varepsilon(\cdot))-J(\widehat{v}(\cdot)) \Big )\\ & \geqslant\int_{\Theta} \mathbb{E}\Big[\Psi_\theta(\widehat{X}_\theta(T))R_\theta^{1}(T)+\widehat{R}_\theta(T)\partial_x\Psi_\theta(\widehat{X}_\theta(T))X^{1}_\theta(T) +\partial_y\Lambda_\theta(\widehat{Y}_\theta(0)) Y^{1}_\theta(0) \Big] Q(d\theta). \end{aligned} \end{aligned}$$ *Step 2.* Next, let us choose a subsequence $\varepsilon_{_M}\rightarrow 0$ such that $$\limsup_{\varepsilon\rightarrow 0}\frac{1}{\varepsilon}\Big ( J(v^\varepsilon(\cdot))-J(\widehat{v}(\cdot)) \Big )=\lim_{M\rightarrow\infty}\frac{1}{\varepsilon_{_M}}\Big ( J(v^{\varepsilon_{_M}}(\cdot))-J(\widehat{v}(\cdot)) \Big ).$$ For every $M\geqslant 1$, since ${\cal Q}^{v^{\varepsilon_{_M}}}$ is nonempty, there exists a probability measure $Q^{{\varepsilon_{_M}}}\in {\cal Q}^{v^{\varepsilon_{_M}}}$ such that $$\begin{aligned} \label{6.7-1} \begin{aligned} J(v^{\varepsilon_{_M}}(\cdot)) & =\int_{\Theta}\mathbb{E}\Big[R^{{\varepsilon_{_M}}}_\theta(T)\Psi_\theta(X_\theta^{{\varepsilon_{_M}}}(T)) +\Lambda_\theta(Y^{{\varepsilon_{_M}}}_\theta(0))\Big] Q^{{\varepsilon_{_M}}}(d\theta),\\ J(\widehat{v}(\cdot)) & \geqslant\int_{\Theta} \mathbb{E}\Big[\widehat{R}_\theta(T)\Psi_\theta(\widehat{X}_\theta(T))+\Lambda_\theta(\widehat{Y}_\theta(0))\Big] Q^{{\varepsilon_{_M}}}(d\theta).
\end{aligned} \end{aligned}$$ Similar to ([\[4.25-1\]](#4.25-1){reference-type="ref" reference="4.25-1"}), one gets $$\begin{aligned} & \frac{1}{\varepsilon_{_M}}\Big ( J(v^{\varepsilon_{_M}}(\cdot))-J(\widehat{v}(\cdot)) \Big )\\ & \leqslant\frac{1}{\varepsilon_{_M}}\int_{\Theta} \mathbb{E}[R^{{\varepsilon_{_M}}}_\theta(T)\Psi_\theta(X_\theta^{{\varepsilon_{_M}}}(T))-\widehat{R}_\theta(T)\Psi_\theta(\widehat{X}_\theta(T)) % +\Lambda_\theta(Y^{{\varepsilon_{_M}}}_\theta(0))-\Lambda_\theta(\widehat{Y}_\theta(0))] Q^{{\varepsilon_{_M}}}(d\theta)\\ % & \quad=\int_{\Theta} \mathbb{E}[\Psi_\theta(X_\theta^{\varepsilon_{_M}}(T))\delta^{\varepsilon_{_M}} R_\theta(T) +\widehat{R}_\theta(T)\partial_x\Psi^{\varepsilon_{_M}}_\theta(T)\delta^{\varepsilon_{_M}} X_\theta(T)+\partial_y\Lambda_\theta^{\varepsilon_{_M}}(0)\delta^{\varepsilon_{_M}} Y_\theta(0)] Q^{{\varepsilon_{_M}}}(d\theta)\\ % & \quad+\int_{\Theta} \mathbb{E}\Big[\Psi_\theta(\widehat{X}_\theta(T))R_\theta^{1}(T)+\widehat{R}_\theta(T)\partial_x\Psi_\theta(\widehat{X}_\theta(T))X_\theta^{1}(T) +\partial_y\Lambda_\theta(\widehat{Y}_\theta(0))Y_\theta^{1}(0)\Big] Q^{{\varepsilon_{_M}}}(d\theta)\\ % & \quad+\int_{\Theta} \mathbb{E}\Big[(\Psi_\theta(X_\theta^{\varepsilon_{_M}}(T))-\Psi_\theta(\widehat{X}_\theta(T)))R_\theta^{1}(T) +(\partial_x\Psi^{\varepsilon_{_M}}_\theta(T)-\partial_x\Psi_\theta(\widehat{X}_\theta(T)))\widehat{R}_\theta(T)X_\theta^{1}(T)\\ % & \quad+(\partial_y\Lambda_\theta^{\varepsilon_{_M}}(0)-\partial_y\Lambda_\theta(\widehat{Y}_\theta(0)))Y_\theta^{1}(0)\Big] Q^{{\varepsilon_{_M}}}(d\theta), \end{aligned}$$ where $\partial_x\Psi^{\varepsilon_{_M}}_\theta(T), \partial_y\Lambda_\theta^{\varepsilon_{_M}}(0)$ are given in ([\[6.21\]](#6.21){reference-type="ref" reference="6.21"}). We analyse the above terms one by one. 
In analogy with ([\[4.27\]](#4.27){reference-type="ref" reference="4.27"}), $$\begin{aligned} \nonumber \begin{aligned} & \lim_{M\rightarrow\infty}\sup_{\theta\in \Theta}\mathbb{E}[\Psi_\theta(X_\theta^{\varepsilon_{_M}}(T))\delta^{\varepsilon_{_M}} R_\theta(T) +\widehat{R}_\theta(T)\partial_x\Psi^{\varepsilon_{_M}}_\theta(T)\delta^{\varepsilon_{_M}} X_\theta(T)+\partial_y\Lambda_\theta^{\varepsilon_{_M}}(0)\delta^{\varepsilon_{_M}} Y_\theta(0)]=0. \end{aligned} \end{aligned}$$ Since ${\cal Q}$ is weakly compact, there exists a probability measure $Q^{\#}\in{\cal Q}$ such that a subsequence of $(Q^{\varepsilon_{_M}})_{M\geqslant 1}$, still denoted by $(Q^{\varepsilon_{_M}})_{M\geqslant 1}$, converges weakly to $Q^{\#}$. Define $$\label{4.9999} H(\theta):=\mathbb{E}[\Psi_\theta(\widehat{X}_\theta(T))R_\theta^{1}(T)+\widehat{R}_\theta(T)\partial_x\Psi_\theta(\widehat{X}_\theta(T))X_\theta^{1}(T) +\partial_y\Lambda_\theta(\widehat{Y}_\theta(0))Y_\theta^{1}(0)].$$ Since $|\Psi_\theta(\widehat{X}_\theta(T))|\leqslant L(1+|\widehat{X}_\theta(T)|^2), |\partial_x\Psi_\theta(\widehat{X}_\theta(T))|\leqslant L(1+|\widehat{X}_\theta(T)|)$, from , ([\[4.7-2\]](#4.7-2){reference-type="ref" reference="4.7-2"}), ([\[4.12-1\]](#4.12-1){reference-type="ref" reference="4.12-1"}), ([\[4.21-1\]](#4.21-1){reference-type="ref" reference="4.21-1"}), the function $\theta\rightarrow H(\theta)$ is bounded. In addition, according to , ([\[4.12-1\]](#4.12-1){reference-type="ref" reference="4.12-1"}) and , the function $\theta\rightarrow H(\theta)$ is continuous.
Hence, we have $$\begin{aligned} & \lim_{M\rightarrow\infty}\int_{\Theta} \mathbb{E}\Big[\Psi_\theta(\widehat{X}_\theta(T))R_\theta^{1}(T)+\widehat{R}_\theta(T)\partial_x\Psi_\theta(\widehat{X}_\theta(T))X_\theta^{1}(T) +\partial_y\Lambda_\theta(\widehat{Y}_\theta(0))Y_\theta^{1}(0)\Big] Q^{{\varepsilon_{_M}}}(d\theta)\\ % & =\int_{\Theta}\mathbb{E}\Big[\Psi_\theta(\widehat{X}_\theta(T))R_\theta^{1}(T)+\widehat{R}_\theta(T)\partial_x\Psi_\theta(\widehat{X}_\theta(T))X_\theta^{1}(T) +\partial_y\Lambda_\theta(\widehat{Y}_\theta(0))Y_\theta^{1}(0)\Big] Q^{\#}(d\theta). \end{aligned}$$ Finally, it follows from , and , ([\[4.7-2\]](#4.7-2){reference-type="ref" reference="4.7-2"}), ([\[4.12-1\]](#4.12-1){reference-type="ref" reference="4.12-1"}), ([\[4.21-1\]](#4.21-1){reference-type="ref" reference="4.21-1"}) that $$\begin{aligned} & \lim _{M\rightarrow\infty} \Big|\int_{\Theta} \mathbb{E}\Big[(\Psi_\theta(X_\theta^{\varepsilon_{_M}}(T))-\Psi_\theta(\widehat{X}_\theta(T)))R_\theta^{1}(T) +(\partial_x\Psi^{\varepsilon_{_M}}_\theta(T)-\partial_x\Psi_\theta(\widehat{X}_\theta(T)))\widehat{R}_\theta(T)X_\theta^{1}(T)\\ % & \qquad+(\partial_y\Lambda_\theta^{\varepsilon_{_M}}(0)-\partial_y\Lambda_\theta(\widehat{Y}_\theta(0)))Y_\theta^{1}(0)\Big] Q^{{\varepsilon_{_M}}}(d\theta)\Big|\\ % & \leqslant L\lim_{M\rightarrow\infty} \varepsilon_{_M} \int_{\Theta} \mathbb{E}\Big[ (1+|X_\theta^{\varepsilon_{_M}}(T)|+|\widehat{X}_\theta(T)|)|X^{1}_\theta(T)+\delta^{\varepsilon_{_M}} X_\theta(T)| |R^{1}_\theta(T)| \\ & \quad+ |X^{1}_\theta(T)+\delta^{\varepsilon_{_M}} X_\theta(T)| |\widehat{R}_\theta(T)||X^{1}_\theta(T)| +|Y^{1}_\theta(0)+\delta^{\varepsilon_{_M}} Y_\theta(0)||Y^{1}_\theta(0)|\Big]Q^{{\varepsilon_{_M}}}(d\theta)=0. 
\end{aligned}$$ Consequently, we have $$\begin{aligned} & \limsup_{\varepsilon\rightarrow 0}\frac{1}{\varepsilon}\Big ( J(v^\varepsilon(\cdot))-J(\widehat{v}(\cdot)) \Big )=\lim_{M\rightarrow\infty}\frac{1}{\varepsilon_{_M}}\Big ( J(v^{\varepsilon_{_M}}(\cdot))-J(\widehat{v}(\cdot))\Big )\\ % & \leqslant\int_{\Theta}\mathbb{E}\Big[\Psi_\theta(\widehat{X}_\theta(T))R_\theta^{1}(T)+\widehat{R}_\theta(T)\partial_x\Psi_\theta(\widehat{X}_\theta(T))X_\theta^{1}(T) +\partial_y\Lambda_\theta(\widehat{Y}_\theta(0))Y_\theta^{1}(0)\Big] Q^{\#}(d\theta). \end{aligned}$$ *Step 3.* We claim that $Q^{\#}\in {\cal Q}^{\widehat{v}}$. Indeed, on the one hand, from the definition of $J(v(\cdot))$ and ([\[6.7-1\]](#6.7-1){reference-type="ref" reference="6.7-1"}), $$\begin{aligned} & |J(v^{\varepsilon_{_M}}(\cdot))-J(\widehat{v}(\cdot))|=J(v^{\varepsilon_{_M}}(\cdot))-J(\widehat{v}(\cdot))\\ & \leqslant\int_{\Theta}\mathbb{E}\Big[|R^{\varepsilon_{_M}}_\theta(T)\Psi_\theta(X_\theta^{\varepsilon_{_M}}(T))-\widehat{R}_\theta(T)\Psi_\theta(\widehat{X}_\theta(T))| +|\Lambda_\theta(Y^{\varepsilon_{_M}}_\theta(0))-\Lambda_\theta(\widehat{Y}_\theta(0))|\Big]Q^{\varepsilon_{_M}}(d\theta)\\ % & \leqslant\sup_{\theta\in \Theta}\mathbb{E}\Big[|R^{\varepsilon_{_M}}_\theta(T)\Psi_\theta(X_\theta^{\varepsilon_{_M}}(T))-\widehat{R}_\theta(T)\Psi_\theta(\widehat{X}_\theta(T))| +|\Lambda_\theta(Y^{\varepsilon_{_M}}_\theta(0))-\Lambda_\theta(\widehat{Y}_\theta(0))|\Big] \\ % & \leqslant L\varepsilon_{_M}\sup_{\theta\in \Theta}\mathbb{E}\bigg[|R^{1}(T)+\delta^{\varepsilon_M} R_\theta(T)||\Psi_\theta(X_\theta^{\varepsilon_{_M}}(T))|\\ &\quad+(1+|X_\theta^{\varepsilon_{_M}}(T)|+|\widehat{X}_\theta(T)|)|X^{1}_\theta(T)+\delta^{\varepsilon_M} X_\theta(T)| |\widehat{R}_\theta(T)| +|Y^{1}_\theta(0)+\delta^{\varepsilon_M} Y_\theta(0)|\bigg], \end{aligned}$$ which and the assumption $|\Psi_\theta(X_\theta^{\varepsilon_{_M}}(T))|\leqslant L(1+|X_\theta^{\varepsilon_{_M}}(T)|^2)$, , , 
([\[4.7-2\]](#4.7-2){reference-type="ref" reference="4.7-2"}), ([\[4.12-1\]](#4.12-1){reference-type="ref" reference="4.12-1"}) and ([\[4.21-1\]](#4.21-1){reference-type="ref" reference="4.21-1"}) allow us to show $$\label{4.34111} \begin{aligned} \lim_{M\rightarrow\infty}|J(v^{\varepsilon_{_M}}(\cdot))-J(\widehat{v}(\cdot))|=0. \end{aligned}$$ On the other hand, $$\label{4.35112} \begin{aligned} & \lim_{M\rightarrow\infty}\int_{\Theta} \Big|\mathbb{E}[R^{{\varepsilon_{_M}}}_\theta(T)\Psi_\theta(X_\theta^{{\varepsilon_{_M}}}(T))+\Lambda_\theta(Y^{{\varepsilon_{_M}}}_\theta(0)) % -(\widehat{R}_\theta(T)\Psi_\theta(\widehat{X}_\theta(T))+\Lambda_\theta(\widehat{Y}_\theta(0)))]\Big| Q^{{\varepsilon_{_M}}}(d\theta)\\ % & \leqslant L\lim_{M\rightarrow\infty}\varepsilon_{_M} \int_{\Theta} \mathbb{E}\Big[ |R^{1}(T)+\delta^{\varepsilon_{_M}} R_\theta(T)||\Psi_\theta(X_\theta^{\varepsilon_{_M}}(T))| +|Y^{1}_\theta(0)+\delta^{\varepsilon_{_M}} Y_\theta(0)| \\ &\quad+(1+|X_\theta^{\varepsilon_{_M}}(T)|+|\widehat{X}_\theta(T)|)|X^{1}_\theta(T)+\delta^{\varepsilon_{_M}} X_\theta(T)| |\widehat{R}_\theta(T)| \Big]Q^{{\varepsilon_{_M}}}(d\theta)=0.
\end{aligned}$$ Hence, we can derive from ([\[4.34111\]](#4.34111){reference-type="ref" reference="4.34111"}), ([\[4.35112\]](#4.35112){reference-type="ref" reference="4.35112"}) and the boundedness as well as the continuity of the function $\theta\rightarrow\mathbb{E}[\widehat{R}_\theta(T)\Psi_\theta(\widehat{X}_\theta(T))+\Lambda_\theta(\widehat{Y}_\theta(0))]$ that $$\begin{aligned} \nonumber \begin{aligned} J(\widehat{v}(\cdot))&=\lim_{M\rightarrow\infty}J(v^{\varepsilon_{_M}}(\cdot))=\lim_{M\rightarrow\infty}\int_{\Theta}\mathbb{E}[R^{{\varepsilon_{_M}}}_\theta(T)\Psi_\theta(X_\theta^{{\varepsilon_{_M}}}(T)) +\Lambda_\theta(Y^{{\varepsilon_{_M}}}_\theta(0))] Q^{{\varepsilon_{_M}}}(d\theta) \\ & =\lim_{M\rightarrow\infty}\int_{\Theta}\mathbb{E}[\widehat{R}_\theta(T)\Psi_\theta(\widehat{X}_\theta(T))+\Lambda_\theta(\widehat{Y}_\theta(0))] Q^{{\varepsilon_{_M}}}(d\theta)\\ & =\int_{\Theta}\mathbb{E}[\widehat{R}_\theta(T)\Psi_\theta(\widehat{X}_\theta(T))+\Lambda_\theta(\widehat{Y}_\theta(0))] Q^{\#}(d\theta), \end{aligned} \end{aligned}$$ which means $Q^{\#}\in {\cal Q}^{\widehat{v}}$. *Step 4.* Taking $Q=Q^{\#}$ in ([\[4.29\]](#4.29){reference-type="ref" reference="4.29"}), we obtain $$\begin{aligned} & \lim_{\varepsilon\rightarrow 0}\frac{1}{\varepsilon}\Big( J(v^{\varepsilon}(\cdot))-J(\widehat{v}(\cdot)) \Big)\\ &=\int_{\Theta}\mathbb{E}\Big[\Psi_\theta(\widehat{X}_\theta(T))R_\theta^{1}(T)+\widehat{R}_\theta(T)\partial_x\Psi_\theta(\widehat{X}_\theta(T))X_\theta^{1}(T) +\partial_y\Lambda_\theta(\widehat{Y}_\theta(0))Y_\theta^{1}(0)\Big] Q^{\#}(d\theta)\\ % &=\sup_{Q\in {\cal Q}^{\widehat{v}}}\int_{\Theta}\mathbb{E}\Big[\Psi_\theta(\widehat{X}_\theta(T))R_\theta^{1}(T)+\widehat{R}_\theta(T)\partial_x\Psi_\theta(\widehat{X}_\theta(T))X_\theta^{1}(T) +\partial_y\Lambda_\theta(\widehat{Y}_\theta(0))Y_\theta^{1}(0)\Big]Q(d\theta). \end{aligned}$$ ◻ The above variational inequality implies the following result. **Theorem 14**.
*Let and be in force. Then there exists a probability measure $\overline{Q} \in {\cal Q}^{\widehat{v}}$ such that, for all $v(\cdot)\in {\cal V}_{ad}$, $$\begin{aligned} \int_\Theta\mathbb{E}\Big[\Psi_\theta(\widehat{X}_\theta(T))R_\theta^{1;v}(T)+\widehat{R}_\theta(T)\partial_x\Psi_\theta(\widehat{X}_\theta(T))X_\theta^{1;v}(T) +\partial_y\Lambda_\theta(\widehat{Y}_\theta(0))Y_\theta^{1;v}(0)\Big]\overline{Q}(d\theta)\geqslant 0, \end{aligned}$$ where $(X_\theta^{1;v}(\cdot),R^{1;v}_\theta(\cdot),Y_\theta^{1;v}(\cdot))$ denote the solutions of the variational SDE, the variational observation process and the variational BSDE, respectively.* *Proof.* Define $$S(\theta,v):=\mathbb{E}\Big[\Psi_\theta(\widehat{X}_\theta(T))R_\theta^{1;v}(T)+\widehat{R}_\theta(T)\partial_x\Psi_\theta(\widehat{X}_\theta(T))X_\theta^{1;v}(T) +\partial_y\Lambda_\theta(\widehat{Y}_\theta(0))Y_\theta^{1;v}(0)\Big].$$ From , $\lim\limits_{\varepsilon\rightarrow 0}\frac{1}{\varepsilon}\Big ( J(v^\varepsilon(\cdot))-J(\widehat{v}(\cdot))\Big ) =\sup\limits_{Q\in {\cal Q}^{\widehat{v}}} \int_\Theta S(\theta,v)Q(d\theta)\geqslant 0.$ Consequently, we obtain $$\begin{aligned} \inf_{v(\cdot)\in {\cal V}_{ad}}\sup_{Q\in {\cal Q}^{\widehat{v}}}\int_\Theta S(\theta,v)Q(d\theta)\geqslant 0. \end{aligned}$$ Next, we are ready to apply Sion's minimax theorem. To this end, we need to show that $S(\theta,v)$ is convex and continuous with respect to $v$.
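Before verifying these properties, it may help to recall the form of Sion's minimax theorem invoked here; the statement below is the standard one, and the identification of the abstract spaces with our setting is our own illustration, not part of the original argument.

```latex
% Sion's minimax theorem (standard form). Let X be a convex subset of a
% linear topological space and Y a compact convex subset of a linear
% topological space. Suppose f : X \times Y \to \mathbb{R} is quasi-convex and
% lower semicontinuous in x for each fixed y, and quasi-concave and upper
% semicontinuous in y for each fixed x. Then
\inf_{x\in X}\,\sup_{y\in Y} f(x,y) \;=\; \sup_{y\in Y}\,\inf_{x\in X} f(x,y).
% In our setting one takes X = \mathcal{V}_{ad}, Y = \mathcal{Q}^{\widehat{v}}
% (weakly compact and convex), and
% f(v,Q) = \int_\Theta S(\theta,v)\, Q(d\theta),
% which is affine (hence quasi-concave and continuous) in Q and, by the
% properties established in the proof, convex and continuous in v.
```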
In fact, on the one hand, from the fact that, for $0\leqslant\rho\leqslant 1$, $v(\cdot),\tilde{v}(\cdot)\in \mathcal{V}_{ad}$, $$\begin{aligned} & R^{1;\rho v+(1-\rho)\tilde{v}}_\theta(T)=\rho R^{1; v}_\theta(T)+(1-\rho)R^{1; \tilde{v}}_\theta(T),\\ & X^{1;\rho v+(1-\rho)\tilde{v}}_\theta(T)=\rho X^{1; v}_\theta(T)+(1-\rho)X^{1; \tilde{v}}_\theta(T),\\ & Y^{1;\rho v+(1-\rho)\tilde{v}}_\theta(0)=\rho Y^{1; v}_\theta(0)+(1-\rho)Y^{1; \tilde{v}}_\theta(0),\\ \end{aligned}$$ it yields, for all $\theta\in \Theta$, $S(\theta,\rho v+(1-\rho)\tilde{v})=\rho S(\theta, v)+(1-\rho)S(\theta, \tilde{v})$. On the other hand, it follows from that for $v(\cdot),\tilde{v}(\cdot)\in \mathcal{V}_{ad}$, $$\begin{aligned} & \mathbb{E}\Big[|R^{1;v}_\theta(T)-R^{1;\tilde{v}}_\theta(T)|^2\Big]\leqslant C \mathbb{E}\Big[\int_0^T|v(t)-\tilde{v}(t)|^4dt \Big]^\frac{1}{2},\\ & \mathbb{E}\Big[|X^{1;v}_\theta(T)-X^{1;\tilde{v}}_\theta(T)|^4\Big]\leqslant C\mathbb{E}\Big[\int_0^T|v(t)-\tilde{v}(t)|^4dt \Big],\\ \end{aligned}$$ and from that $$\begin{aligned} &|Y^{1;v}_\theta(0)-Y^{1;\tilde{v}}_\theta(0)|\leqslant C \mathbb{E}\Big[\int_0^T|v(t)-\tilde{v}(t)|^4dt\Big]^\frac{1}{2}, \end{aligned}$$ where the constant $C>0$ depends on $L, x,\sum\limits_{i,j=1,i\neq j}^I\lambda_{ij}$. Hence, $S(\theta,v)$ is continuous in $v$, for each $\theta$. Applying Sion's theorem, we derive $$\begin{aligned} \sup_{Q\in {\cal Q}^{\widehat{v}}}\inf_{v(\cdot)\in {\cal V}_{ad}}\int_\Theta S(\theta,v)Q(d\theta) =\inf_{v(\cdot)\in {\cal V}_{ad}}\sup_{Q\in {\cal Q}^{\widehat{v}}}\int_\Theta S(\theta,v)Q(d\theta)\geqslant 0. 
\end{aligned}$$ Then, for arbitrary $\delta>0$, there exists a probability $Q^\delta\in {\cal Q}^{\widehat{v}}$ such that $$\inf_{v(\cdot)\in {\cal V}_{ad}}\int_\Theta S(\theta,v)Q^\delta(d\theta) \geqslant-\delta.$$ From the compactness of ${\cal Q}^{\widehat{v}}$, there exists a subsequence $\delta_M\rightarrow0$ such that $Q^{\delta_M}$ converges weakly to a probability $\overline{Q}\in{\cal Q}^{\widehat{v}}$. Then, one has, $$\int_\Theta S(\theta,v)\overline{Q}(d\theta)=\lim_{\delta_M\rightarrow 0}\int_\Theta S(\theta,v)Q^{\delta_M}(d\theta)\geqslant 0,\quad\forall v(\cdot)\in {\cal V}_{ad}.$$ ◻ ## Adjoint equations and SMP For simplicity, we denote $\overline W_\theta(\cdot)=\overline W_\theta^{\widehat{v}}(\cdot),$ for each $\theta\in \Theta$. Let us introduce the following adjoint equations: $$\label{4.36} \left\{ \begin{aligned} dp^A_\theta(t) & =q^A_\theta(t) dW(t)+\bar{q}^A_\theta(t) d\overline{W}_\theta(t)+k^A_\theta(t)\bullet dM(t),\ t\in[0,T],\\ p^A_\theta(T) & =\Psi_\theta(\widehat{X}_\theta(T)), \end{aligned} \right.$$ and $$\label{4.37} \left\{ \begin{aligned} dp^F_\theta(t) & =\partial_yf_\theta(t)p^F_\theta(t)dt+\partial_zf_\theta(t)p^F_\theta(t)dW(t)+[\partial_{\bar{z}}f_\theta(t)-h_\theta(t)]p^F_\theta(t)d\overline{W}_\theta(t)\\ &\quad+ \partial_kf_\theta(t)p^F_\theta(t)\bullet dM(t),\ t\in[0,T],\\ % dp^B_\theta(t)& =-\bigg\{[\partial_xb_\theta(t)-\bar{\sigma}_\theta(t)\partial_xh_\theta(t)]p^B_\theta(t) +\partial_x\sigma_\theta(t)q^B_\theta(t)+\partial_x\bar{\sigma}_\theta(t)\bar{q}^B_\theta(t)\\ & \quad+\sum_{i,j=1,i\neq j}^I\partial_x\beta_{\theta,ij}(t)k^B_{ij}(t)\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}} +\partial_xh_\theta(t)\bar{q}^A_\theta(t)-\partial_xf_\theta(t)p^F_\theta(t)\bigg\}dt\\ & \quad+q^B_\theta(t)dW(t)+\bar{q}^B_\theta(t)d\overline{W}_\theta(t)+k^B_\theta(t)\bullet dM(t),\ t\in[0,T],\\ % p^F_\theta(0)& =-\partial_x\Lambda_\theta(\widehat Y_\theta(0)),\quad 
p^B_\theta(T)=\partial_x\Psi_\theta(\widehat{X}_\theta(T))-\partial_x\Phi_\theta(\widehat{X}_\theta(T))p^F_\theta(T). \end{aligned} \right.$$ According to and , there exists a constant $C$ such that $$\label{6.44} \begin{aligned} & \mathrm{(i)}\ \mathbb{E}\Big[\sup_{t\in[0,T]}|p^F_\theta(t)|^4\Big]\leqslant C,\\ % & \mathrm{(ii)}\ \mathbb{E}\Big[\sup_{t\in[0,T]}|p^A_\theta(t)|^2+\int_0^T(|q^A_\theta(t)|^2+|\bar{q}^A_\theta(t)|^2 +\sum_{i,j=1,i\neq j}^I|k^A_{\theta,ij}(t)|^2\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}})dt\Big]\\ & \qquad\leqslant C\Big (|x|^4+\mathbb{E}\int_0^T|\widehat v(t)|^4dt\Big ),\\ % & \mathrm{(iii)}\ \mathbb{E}\Big[\sup_{t\in[0,T]}|p^B_\theta(t)|^2+\int_0^T(|q^B_\theta(t)|^2+|\bar{q}^B_\theta(t)|^2 +\sum_{i,j=1,i\neq j}^I|k^B_{\theta,ij}(t)|^2\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}})dt\Big]\\ &\qquad\leqslant C\Big (|x|^4+\mathbb{E}\int_0^T|\widehat v(t)|^4dt\Big ). \end{aligned}$$ In addition, using the method in , we can show that $p^F_\theta, (p^A_\theta,q^A_\theta,\bar{q}^A_\theta,\beta^A_{\theta}), (p^B_\theta,q^B_\theta,\bar{q}^B_\theta,\beta^B_{\theta})$ are continuous in $\theta$. **Lemma 15**. *Under and , we have $$\begin{aligned} & \lim_{\delta\rightarrow 0}\sup_{d(\theta, \bar{\theta})\leqslant\delta}\mathbb{E}\bigg[\sup_{t\in[0,T]}\Big (|p^F_\theta(t)-p^F_{\bar{\theta}}(t)|^4 +|p^A_\theta(t)-p^A_{\bar{\theta}}(t)|^2+|p^B_\theta(t)-p^B_{\bar{\theta}}(t)|^2\Big )\\ & \ +\int_0^T(|q^A_\theta(t)-q^A_{\bar{\theta}}(t)|^2+|\bar{q}^A_\theta(t)-\bar{q}^A_{\bar{\theta}}(t)|^2 +\sum_{i,j=1,i\neq j}^I|\beta^A_{\theta,ij}(t)-\beta^A_{\bar{\theta},ij}(t)|^2\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}})dt \\ & +\int_0^T(|q^B_\theta(t)-q^B_{\bar{\theta}}(t)|^2+|\bar{q}^B_\theta(t)-\bar{q}^B_{\bar{\theta}}(t)|^2 +\sum_{i,j=1,i\neq j}^I|\beta^B_{\theta,ij}(t)-\beta^B_{\bar{\theta},ij}(t)|^2\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}})dt\bigg]=0. 
\end{aligned}$$* Define the Hamiltonian as follows, for $i,j,i_0\in {\cal I}$ and for $t\in[0,T]$, $x\in \mathbb{R}$, $y,k\in \mathbb{R},$ $z,\bar{z}\in \mathbb{R}$, $p^F\in \mathbb{R},$ $p^B\in \mathbb{R},$ $q^B,\bar{q}^B\in \mathbb{R},$ $k^B,$ $\bar{q}^A\in \mathbb{R}$, $v\in V$, $$\label{4.38} \begin{aligned} & H_\theta(t,x,y,z,\bar{z},k,v,i_0;p^F,p^B,q^B,\bar{q}^B,k^B,\bar{q}^A)\\ & = b_\theta(t,x,v,i_0)p^B +\sigma_\theta(t,x,v,i_0) q^{B} +\bar{\sigma}_\theta(t,x,v,i_0)\bar{q}^{B}\\ &\quad+\sum_{i,j=1,i\neq j}^I \beta_{\theta,ij}(t,x,v,i_0)k^B_{ij} \lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}} + h_\theta(t,x,i_0)\bar{q}^A \\ &\quad-\Big (f_\theta(t,x,y,z,\bar{z},\sum_{i,j=1,i\neq j}^Ik_{ij}\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}},v,i_0)-h_\theta(t,x,i_0)\bar{z}\Big )p^F. \end{aligned}$$ Here $k^B=(k^B_{ij})_{i,j\in \mathcal{I}}$ and $k=(k_{ij})_{i,j\in \mathcal{I}}$. We have the following stochastic maximum principle. **Theorem 16**. *Under and , let $\widehat{v}(\cdot)$ be an optimal control and $(\widehat{X}_\theta,\widehat{Y}_\theta, \widehat{Z}_\theta,\widehat{\bar{Z}}_\theta,\widehat{K}_\theta)$ the solution of ([\[4.1\]](#4.1){reference-type="ref" reference="4.1"}) with $\widehat{v}(\cdot)$. 
*Then there exists a probability measure $\overline{Q}\in {\cal Q}^{\widehat{v}}$ such that $dt\times d\mathbb{P}^{\widehat{v},\theta}$-a.s., $$\begin{aligned} \nonumber \begin{aligned} & \int_\Theta\mathbb{E}^{\widehat{v},\theta}\Big[ \langle\partial_vH_\theta(t,\widehat{X}_\theta(t),\widehat{Y}_\theta(t),\widehat{Z}_\theta(t),\widehat{\bar{Z}}_\theta(t),\widehat{K}_{\theta}, \widehat{v}(t),\alpha(t-), \\ & \qquad\qquad\qquad p^F_\theta(t),p^B_\theta(t),q^B_\theta(t),\overline{q}^B_\theta(t),k^B_{\theta}(t),\overline{q}^A_\theta(t)),v-\widehat{v}(t)\rangle\Big|{\cal G}_t\Big]\overline{Q}(d\theta)\geqslant 0, \end{aligned} \end{aligned}$$ where $(p^F_\theta,p^B_\theta,q^B_\theta,\overline{q}^B_\theta,k^B_{\theta})$ and $(p^A_\theta,q^A_\theta,\bar{q}^A_\theta,k^A_\theta)$ are the solutions of the adjoint equations ([\[4.37\]](#4.37){reference-type="ref" reference="4.37"}) and ([\[4.36\]](#4.36){reference-type="ref" reference="4.36"}), respectively.* *Proof.* From Itô's formula, one has $$\left\{ \begin{aligned} d[(\widehat{R} _\theta(t))^{-1}R^{1} _\theta(t)]&=X_\theta^1(t)\partial_xh_\theta(t)d\overline{W}_\theta(t),\ t\in[0,T],\\ [(\widehat{R} _\theta(0))^{-1}R^{1} _\theta(0)]&=0.
\end{aligned} \right.$$ Applying Itô's formula to $[(\widehat{R} _\theta(\cdot))^{-1}R^{1} _\theta(\cdot)]p^A_\theta(\cdot)$, $p^F_\theta(\cdot)Y^{1}_\theta(\cdot)$, $p^B_\theta(\cdot) X^{1}_\theta(\cdot)$, respectively, we have $$\begin{aligned} & \mathbb{E}^{\widehat v,\theta}\Big[[(\widehat{R} _\theta(T))^{-1}R^{1} _\theta(T)]\Psi_\theta(\widehat{X}_\theta(T))\Big] =\mathbb{E}^{\widehat v,\theta}\Big[\int_0^T\bar{q}^A_\theta(t)\partial_xh_\theta(t)X_\theta^1(t)dt\Big],\\ % & \mathbb{E}^{\widehat v,\theta}\Big[p_\theta^F(T)\partial_x\Phi_\theta(\widehat X_\theta(T))X_\theta^1(T)+\partial_x\Lambda_\theta(Y_\theta(0))Y^1_\theta(0)\Big]\\ & =-\mathbb{E}^{\widehat v,\theta}\Big[\int_0^T[\partial_xf_\theta(t)X_\theta^1(t)+\langle\partial_vf_\theta(t),v(t)-\widehat{v}(t)\rangle]p^F_\theta(t)dt\Big],\\ \end{aligned}$$ and $$\begin{aligned} & \mathbb{E}^{\widehat v,\theta}\Big[\partial_x\Psi_\theta(\widehat X_\theta(T))X_\theta^1(T)-p_\theta^F(T)\partial_x\Phi_\theta(\widehat X_\theta(T))X_\theta^1(T)\Big]\\ & = \mathbb{E}^{\widehat v,\theta}\Big[\int_0^T\partial_xf_\theta(t)X_\theta^1(t)p_\theta^F(t)-\bar{q}_\theta^A(t)\partial_xh_\theta(t)X_\theta^1(t)dt\Big]\\ &\quad+\mathbb{E}^{\widehat v,\theta}\Big[\int_0^T\langle \partial_vb_\theta(t)p^B_\theta(t)+\partial_v\sigma_\theta(t)q^B_\theta(t)+\partial_v\bar{\sigma}_\theta(t)\bar{q}^B_\theta(t)\\ &\quad+\sum_{i,j=1,i\neq j}^I\partial_v \beta_{\theta,ij}(t)k^B_{\theta,ij}(t)\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}},v(t)-\widehat v(t)\rangle dt\Big]. 
\end{aligned}$$ Hence, it yields $$\begin{aligned} & \mathbb{E}^{\widehat v,\theta}\Big[[(\widehat{R} _\theta(T))^{-1}R^{1} _\theta(T)]\Psi_\theta(\widehat{X}_\theta(T))+\partial_x\Lambda_\theta(\widehat Y_\theta(0))Y^1_\theta(0) +\partial_x\Psi_\theta(\widehat X_\theta(T))X_\theta^1(T)\Big]\\ % & =\mathbb{E}^{\widehat v,\theta}\bigg[\int_0^T\langle \partial_vb_\theta(t)p^B_\theta(t) + \partial_v\sigma_\theta(t) q^{B}_\theta(t) +\partial_v\bar{\sigma}_\theta(t)\bar{q}^{B}_\theta(t)\\ &\quad+\sum_{i,j=1,i\neq j}^I\partial_v \beta_{\theta,ij}(t)k^B_{\theta,ij}(t) \lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}} -p^F_\theta(t) \partial_v f_\theta(t), v(t)-\widehat{v}(t)\rangle dt\bigg]\\ % & =\mathbb{E}^{\widehat v,\theta}\bigg[\int_0^T\langle \partial_vH_\theta(t,\widehat{X}_\theta(t),\widehat{Y}_\theta(t),\widehat{Z}_\theta(t),\widehat{\bar{Z}}_\theta(t), \widehat{K}_{\theta},\widehat{v}(t),\alpha(t-), p^F_\theta(t),p^B_\theta(t),q^B_\theta(t),\\ & \qquad\qquad\qquad\overline{q}^B_\theta(t),k^B_{\theta}(t),\overline{q}^A_\theta(t)), v(t)-\widehat{v}(t)\rangle dt\bigg].\\ \end{aligned}$$ According to and the above equality, we know, for $v(\cdot)\in \mathcal{V}_{ad}$, $$\begin{aligned} 0 & \leqslant\int_\Theta\mathbb{E}\Big[R_\theta^{1}(T)\Psi_\theta(\widehat{X}_\theta(T))+\widehat{R}_\theta(T)\partial_x\Psi_\theta(\widehat{X}_\theta(T))X_\theta^{1}(T) +\partial_x\Lambda_\theta(\widehat{Y}_\theta(0))Y_\theta^{1}(0)\Big]\overline{Q}(d\theta)\\ & =\int_\Theta\mathbb{E}^{\widehat v,\theta}[(\widehat{R} _\theta(T))^{-1}R^{1} _\theta(T)\Psi_\theta(\widehat{X}_\theta(T))+\partial_x\Psi_\theta(\widehat{X}_\theta(T))X_\theta^{1}(T) +\partial_x\Lambda_\theta(\widehat{Y}_\theta(0))Y_\theta^{1}(0) ]\overline{Q}(d\theta)\\ % & =\int_\Theta\mathbb{E}^{\widehat v,\theta}\bigg[\int_0^T\langle\partial_vH_\theta(t,\widehat{X}_\theta(t),\widehat{Y}_\theta(t),\widehat{Z}_\theta(t),\widehat{\bar{Z}}_\theta(t), \widehat{K}_{\theta}(t),\widehat{v}(t),\alpha(t-), 
p^F_\theta(t),p^B_\theta(t),q^B_\theta(t),\\ & \qquad\qquad\qquad\overline{q}^B_\theta(t),k^B_{\theta}(t),\overline{q}^A_\theta(t)),v(t)-\widehat{v}(t)\rangle dt\bigg]\overline{Q}(d\theta)\\ % & =\int_\Theta\mathbb{E}^{\widehat v,\theta}\bigg[\int_0^T\mathbb{E}^{\widehat v,\theta}\Big[\langle \partial_vH_\theta(t,\widehat{X}_\theta(t),\widehat{Y}_\theta(t), \widehat{Z}_\theta(t),\widehat{\bar{Z}}_\theta(t),\widehat{K}_{\theta},\widehat{v}(t),\alpha(t-), p^F_\theta(t),p^B_\theta(t),q^B_\theta(t),\\ & \qquad\qquad\qquad \overline{q}^B_\theta(t),k^B_{\theta}(t),\overline{q}^A_\theta(t)),v(t)-\widehat{v}(t)\rangle|{\cal G}_t\Big]dt\bigg]\overline{Q}(d\theta). \end{aligned}$$ Note that in the above equality we suppress the superscript $v$ in $R^{1;v}_\theta, X^{1;v}_\theta,Y^{1;v}_\theta$ for convenience. According to , the mapping $$\begin{aligned} & (\theta,t,\omega)\rightarrow\mathbb{E}^{\widehat v,\theta}\Big[\langle \partial_vH_\theta(t,\widehat{X}_\theta(t),\widehat{Y}_\theta(t), \widehat{Z}_\theta(t),\widehat{\bar{Z}}_\theta(t),\widehat{K}_{\theta},\widehat{v}(t),\alpha(t-), p^F_\theta(t),p^B_\theta(t),q^B_\theta(t),\\ & \qquad\qquad\qquad \overline{q}^B_\theta(t),k^B_{\theta}(t),\overline{q}^A_\theta(t)),v(t)-\widehat{v}(t)\rangle|\mathcal{G}_t\Big](\omega) \end{aligned}$$ is a ${\cal G}$-progressively measurable process.
Fubini's theorem allows us to show that, for arbitrary $v(\cdot)\in {\cal V}_{ad}$, $$\begin{aligned} & \mathbb{E}^{\widehat v,\theta}\bigg[\int_0^T\int_\Theta\mathbb{E}^{\widehat v,\theta}\Big[ \langle\partial_vH_\theta(t,\widehat{X}_\theta(t),\widehat{Y}_\theta(t), \widehat{Z}_\theta(t),\widehat{\bar{Z}}_\theta(t),\widehat{K}_{\theta},\widehat{v}(t),\alpha(t-), p^F_\theta(t),p^B_\theta(t),q^B_\theta(t),\\ & \qquad\qquad\qquad\overline{q}^B_\theta(t),k^B_{\theta}(t),\overline{q}^A_\theta(t)),v(t)-\widehat{v}(t)\rangle|{\cal G}_t\Big]\overline{Q}(d\theta)dt\bigg]\geqslant 0, \end{aligned}$$ which implies $dt\times d\mathbb{P}^{\widehat{v},\theta}$-a.s., $$\begin{aligned} & \int_\Theta\mathbb{E}^{\widehat v,\theta}\Big[\langle \partial_vH_\theta(t,\widehat{X}_\theta(t),\widehat{Y}_\theta(t), \widehat{Z}_\theta(t),\widehat{\bar{Z}}_\theta(t),\widehat{K}_{\theta},\widehat{v}(t),\alpha(t-), p^F_\theta(t),p^B_\theta(t),q^B_\theta(t),\\ & \qquad\qquad\qquad\overline{q}^B_\theta(t),k^B_{\theta}(t),\overline{q}^A_\theta(t)),v-\widehat{v}(t)\rangle|{\cal G}_t\Big]\overline{Q}(d\theta)\geqslant 0,\ v(\cdot)\in {\cal V}_{ad}. \end{aligned}$$ ◻ ## Sufficient condition This subsection concerns the sufficient condition for the optimal control $\widehat v(\cdot)$. We continue to use the notation $\overline W_\theta(\cdot)=\overline W_\theta^{\widehat{v}}(\cdot),$ for each $\theta\in \Theta$. Denote $h^v_\theta(t):=h_\theta(t,X^v_\theta(t), \alpha(t-)).$ For $p^B\in \mathbb{R}, h\in \mathbb{R}, p^F\in \mathbb{R}$, let us define $L:[0,T]\times\mathbb{R}\times V\times{\cal I}\rightarrow\mathbb{R},\ S:\mathbb{R}\times{\cal I}\rightarrow\mathbb{R}$ by $$\begin{aligned} L(t,x,v,i_0;h,p^B) & := p^Bh\bar{\sigma}_\theta(t,x,v,i_0),\quad S(x,i_0;p^F) :=p^F\Phi(x,i_0).
\end{aligned}$$ We are now in a position to state the sufficient condition for optimality. **Theorem 17**. *Under and , let $\widehat{v}(\cdot)\in{\cal V}_{ad}$, $\overline{Q}\in {\cal Q}^{\widehat{v}}$, let $(\widehat{X}_\theta(t),\widehat{Y}_\theta(t), \widehat{Z}_\theta(t),$ $\widehat{\bar{Z}}_\theta(t),\widehat{K}_{\theta})$ be the solution of ([\[4.1\]](#4.1){reference-type="ref" reference="4.1"}) with $\widehat{v}(\cdot)$, and let $(p^F_\theta,p^B_\theta, q^B_\theta,\bar{q}^B_\theta,k^B_{\theta})$ and $(p^A_\theta,q^A_\theta,\bar{q}^A_\theta, k^A_\theta)$ be the solutions of the adjoint equations ([\[4.37\]](#4.37){reference-type="ref" reference="4.37"}) and ([\[4.36\]](#4.36){reference-type="ref" reference="4.36"}), respectively. Assume that* - *the function $H_\theta(t,x,y,z,\bar{z},k,v,i_0;p^F,p^B,q^B,\bar{q}^B,k^B,\bar{q}^A)$ defined in ([\[4.38\]](#4.38){reference-type="ref" reference="4.38"}) is convex with respect to $x,y,z,\overline{z},k,v$;* - *$L$ is convex with respect to $x$ and $S$ is concave with respect to $x$;* - *For any $v(\cdot)\in \mathcal{V}_{ad},t\in[0,T],$ $\mathbb{P}^{\hat{v},\theta}$-a.s., $$\label{4.43-2} \left\{ \begin{aligned} & R^v_\theta(T)\Psi_\theta(X_\theta^v(T))-\widehat R_\theta(T)\Psi_\theta(\widehat X_\theta(T))-\Psi_\theta(\widehat X_\theta(T))(R^v_\theta(T)-\widehat R_\theta(T))\\ & \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad-\widehat R_\theta(T)\partial_x\Psi_\theta(\widehat X_\theta (T))(X_\theta^v(T)-\widehat X_\theta(T))\geqslant 0,\\ % & p_\theta^B(t)\Big(\bar{\sigma}^v_\theta(t)h_\theta^v(t)-\bar{\sigma}_\theta(t)h_\theta(t)-h_\theta(t)\partial_x\bar{\sigma}_\theta(t)(X_\theta^v(t)-\widehat X_\theta(t))\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\bar{\sigma}_\theta(t) \partial_xh_\theta(t)(X_\theta^v(t)-\widehat X_\theta(t))\Big)\geqslant 0,\\ % & \bar{q}_\theta^A(t)(\widehat R_\theta(t))^{-1}\Big(R^v_\theta(t)h_\theta^v(t)-\widehat R_\theta(t)h_\theta(t)-h_\theta(t)(R^v_\theta(t)-\widehat{R}_\theta(t))\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\widehat{R}_\theta(t)\partial_xh_\theta(t)(X_\theta^v(t)-\widehat X_\theta(t))\Big)\geqslant 0. \end{aligned} \right.$$* *If $dt\times d\mathbb{P}^{\widehat{v},\theta}$-a.s., $$\begin{aligned} \nonumber \begin{aligned} & \int_\Theta\mathbb{E}^{\widehat{v},\theta}\Big[\langle\partial_vH_\theta(t,\widehat{X}_\theta(t),\widehat{Y}_\theta(t),\widehat{Z}_\theta(t),\widehat{\bar{Z}}_\theta(t),\widehat{K}_{\theta},\widehat{v}(t),\alpha(t-), \\ & \qquad\qquad\qquad p^F_\theta(t),p^B_\theta(t),q^B_\theta(t),\overline{q}^B_\theta(t),k^B_{\theta}(t),\overline{q}^A_\theta(t)),v-\widehat{v}(t)\rangle|{\cal G}_t\Big]\overline{Q}(d\theta)\geqslant 0, \end{aligned} \end{aligned}$$ then the control $\widehat{v}(\cdot)$ is optimal.* **Remark 18**. *If we have full information, i.e., $h(\cdot)=\bar{\sigma}(\cdot)=0$, then $L(\cdot)\equiv0, \widehat{R}_\theta(\cdot)=R^v_\theta(\cdot)\equiv1, v(\cdot)\in {\cal V}_{ad}$. Moreover, ([\[4.43-2\]](#4.43-2){reference-type="ref" reference="4.43-2"}) reduces to the condition that $\Psi_\theta(\cdot)$ is convex with respect to $x$. Furthermore, if $\Lambda_\theta(x)=x,\Psi_\theta=0$, i.e., $J(v(\cdot))=\sup_{Q\in {\cal Q}}\int_{\Theta}Y^v_\theta(0)Q(d\theta),$ and our FBSDE is only driven by Brownian motion (without the canonical martingales for the Markov chain), then $p^F_\theta(\cdot)$ satisfies $$\label{4.44-1} \left\{ \begin{aligned} % dp^F_\theta(t) & =\partial_yf_\theta(t)p^F_\theta(t)dt+\partial_zf_\theta(t)p^F_\theta(t)dW(t),\ t\in[0,T],\\ p^F_\theta(0) & =-1. \end{aligned} \right.$$ Setting $m_\theta(t)=-p^F_\theta(t)$, we see that $m_\theta(t)$ satisfies $$\label{4.44-2} \left\{ \begin{aligned} dm_\theta(t) &=\partial_yf_\theta(t)m_\theta(t)dt+\partial_zf_\theta(t)m_\theta(t)dW(t),\ t\in[0,T],\\ m_\theta(0) &=1, \end{aligned} \right.$$ which is just Eq. (20) of Hu and Wang [@Hu-Wang-20].
Meanwhile, from the relation $S(x,i_0;p^F)=p^F\Phi(x,i_0)=(-m)\cdot\Phi(x,i_0)$, the concavity of $S(\cdot)$ with respect to $x$ is equivalent to the convexity of $\Phi(\cdot)$ with respect to $x$. In this case, our result reduces to the sufficient condition for the optimal control problem with full information under model uncertainty; see Hu and Wang [@Hu-Wang-20 Theorem 3.11].* *Proof of .* For simplicity, for $v(\cdot)\in {\cal V}_{ad}$ and $\theta\in \Theta$, by $(X_\theta^v,Y_\theta^v,Z_\theta^v,\bar{Z}_\theta^v,K_\theta^v)$ and $R_\theta^v$ we denote the solutions of ([\[4.3\]](#4.3){reference-type="ref" reference="4.3"}) and ([\[4.7-1\]](#4.7-1){reference-type="ref" reference="4.7-1"}), respectively. Denote $(\eta_\theta,\xi_\theta,\gamma_\theta,\bar{\gamma}_\theta,\zeta_\theta,\chi_\theta)=(X_\theta^v-\widehat{X}_\theta,Y_\theta^v-\widehat{Y}_\theta,Z_\theta^v -\widehat{Z}_\theta,\bar{Z}_\theta^v-\widehat{\bar{Z}}_\theta, K_\theta^v-\widehat{K}_\theta,R_\theta^v-\widehat{R}_\theta)$.
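The differences $(\eta_\theta,\xi_\theta,\gamma_\theta,\bar{\gamma}_\theta,\zeta_\theta,\chi_\theta)$ are controlled below through first-order Taylor remainders $A_{i,\theta}$, and the convexity assumptions supply the sign of these remainders. As a hedged illustration (the shorthand $H^v_\theta(t)$ and $\widehat H_\theta(t)$ for the Hamiltonian evaluated along the perturbed and the optimal state, respectively, is ours and does not appear in the original argument), the convexity of $H_\theta$ in $(x,y,z,\bar z,k,v)$ gives the standard inequality

```latex
% Convexity of H_\theta in (x,y,z,\bar z,k,v) yields, along the optimal state,
H^{v}_\theta(t)-\widehat H_\theta(t)
\;\geqslant\;
\partial_x\widehat H_\theta(t)\,\eta_\theta(t)
+\partial_y\widehat H_\theta(t)\,\xi_\theta(t)
+\partial_z\widehat H_\theta(t)\,\gamma_\theta(t)
+\partial_{\bar z}\widehat H_\theta(t)\,\bar\gamma_\theta(t)\\
\qquad
+\sum_{i,j=1,\,i\neq j}^{I}\partial_k\widehat H_\theta(t)\,
   \zeta_{\theta,ij}(t)\,\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}}
+\big\langle \partial_v\widehat H_\theta(t),\,v(t)-\widehat v(t)\big\rangle .
```

Integrated against $\overline{Q}(d\theta)\,dt$, this is the mechanism behind the lower bound for $J(v(\cdot))-J(\widehat v(\cdot))$ obtained at the end of the proof.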
Then we have $$\label{4.44-1} \left\{ \begin{aligned} d\eta_\theta(t)& =([\partial_xb_\theta(t)-h_\theta(t)\partial_x\bar{\sigma}_\theta(t)-\bar{\sigma}_\theta(t) \partial_xh_\theta(t) ]\eta_\theta(t)+A_{1,\theta}(t))dt\\ & \quad+(\partial_x\sigma_\theta(t)\eta_\theta(t)+A_{2,\theta}(t))dW(t)+(\partial_x\bar{\sigma}_\theta(t)\eta_\theta(t)+A_{3,\theta}(t))dG(t)\\ & \quad+(\partial_x\beta_\theta(t)\eta_\theta(t)+A_{4,\theta}(t))\bullet dM(t),\\ \eta_\theta(0) & =0, \end{aligned} \right.$$ where $$\begin{aligned} A_{1,\theta}(t) & =b_\theta^v(t)-b_\theta(t)-\partial_xb_\theta(t)\eta_\theta(t)\\ & \quad-\Big (\bar{\sigma}^v_\theta(t)h^v_\theta(t)-\bar{\sigma}_\theta(t)h_\theta(t)-h_\theta(t)\partial_x\bar{\sigma}_\theta(t) \eta_\theta(t)-\bar{\sigma}_\theta(t) \partial_xh_\theta(t)\eta_\theta(t)\Big ),\\ A_{2,\theta}(t)& =\sigma_\theta^v(t)-\sigma_\theta(t)-\partial_x\sigma_\theta(t)\eta_\theta(t),\quad A_{3,\theta}(t) =\bar{\sigma}_\theta^v(t)-\bar{\sigma}_\theta(t)-\partial_x\bar{\sigma}_\theta(t)\eta_\theta(t),\\ A_{4,\theta}(t)& =\beta_\theta^v(t)-\beta_\theta(t)-\partial_x\beta_\theta(t)\eta_\theta(t). \end{aligned}$$ Recall that $dG(t)=h_\theta(t)dt+d\overline W_\theta(t)$, then $$\label{4.44-1} \left\{ \begin{aligned} d\eta_\theta(t) & =([\partial_xb_\theta(t)- \bar{\sigma}_\theta(t) \partial_xh_\theta(t) ]\eta_\theta(t)+A_{1,\theta}(t)+h_\theta(t)A_{3,\theta}(t))dt\\ & \quad+(\partial_x\sigma_\theta(t)\eta_\theta(t)+A_{2,\theta}(t))dW(t)+(\partial_x\bar{\sigma}_\theta(t)\eta_\theta(t)+A_{3,\theta}(t))d\overline{W}_\theta(t)\\ & \quad+(\partial_x\beta_\theta(t)\eta_\theta(t)+A_{4,\theta}(t))\bullet dM(t),\\ \eta_\theta(0)& =0. 
\end{aligned} \right.$$ Applying Itô's formula to $p^B_\theta(\cdot)\eta_\theta(\cdot)$, we derive $$\label{4.45} \begin{aligned} & \mathbb{E}^{\widehat{v},\theta}\Big[\partial_x\Psi_\theta(X_\theta(T))\eta_\theta(T)-\partial_x\Phi_\theta(\widehat{X}_\theta(T))p^F_\theta(T)\eta_\theta(T)\Big]\\ & =\mathbb{E}^{\widehat{v},\theta}\Big[\int_0^T p_\theta^F(t)\partial_xf_\theta(t)\eta_\theta(t) -\bar{q}_\theta^A(t)\partial_xh_\theta(t)\eta_\theta(t) dt\Big]\\ & \quad+\mathbb{E}^{\widehat{v},\theta}\Big[\int_0^T(A_{1,\theta}(t)+h_\theta(t)A_{3,\theta}(t))p_\theta^B(t)+A_{2,\theta}(t)q_\theta^B(t) +A_{3,\theta}(t)\bar{q}_\theta^B(t)\\ & \quad+\sum_{i,j=1,i\neq j}^IA_{4,\theta,ij}(t)k_{\theta,ij}^B(t)\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}}dt\Big]. \end{aligned}$$ Notice $$\label{4.46} \left\{ \begin{aligned} d\xi_\theta(t) & =-\bigg\{\partial_yf_\theta(t)\xi_\theta(t)+\partial_zf_\theta(t)\gamma_\theta(t)+\partial_{\bar{z}}f_\theta(t)\bar{\gamma}_\theta(t) + \sum_{i,j=1,i\neq j}^I\partial_{k}f_\theta(t)\zeta_{\theta,ij}(t)\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}}\\ % & \qquad-\bar{\gamma}_\theta(t)h_\theta(t)+A_{5,\theta}(t)\bigg\}dt+\gamma_\theta(t)dW(t)+\bar{\gamma}_\theta(t)d\overline{W}_\theta(t) +\zeta_\theta(t)\bullet dM(t),\\ \xi_\theta(T) & =\Phi_\theta(X^v_\theta(T), \alpha(T))-\Phi_\theta(\widehat X_\theta(T), \alpha(T)), \end{aligned} \right.$$ where $$\begin{aligned} A_{5,\theta}(t) =f_\theta^v(t)-f_\theta(t)-\partial_yf_\theta(t)\xi_\theta(t)- \partial_zf_\theta(t)\gamma_\theta(t) - \partial_{\bar{z}}f_\theta(t)\bar{\gamma}_\theta(t) - \sum_{i,j=1,i\neq j}^I\partial_{k}f_\theta(t)\zeta_{\theta,ij}(t)\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}}. 
\end{aligned}$$ Applying Itô's formula to $p^F_\theta(t)\xi_\theta(t)$, it follows $$\label{4.46-1} \begin{aligned} \mathbb{E}^{\widehat{v},\theta}[p^F_\theta(T)(\Phi_\theta(X^v_\theta(T), \alpha(T))-\Phi_\theta(\widehat X_\theta(T), \alpha(T))) +\partial_y\Lambda_\theta(Y_\theta(0))\xi_\theta(0)] =-\mathbb{E}^{\widehat{v},\theta}\Big[\int_0^TA_{5,\theta}(t)p_\theta^F(t)dt\Big]. \end{aligned}$$ In addition, $$\begin{aligned} \chi_\theta(0) =0\quad\hbox{and}\quad d\chi_\theta(t) =\Big[h_\theta(t)\chi_\theta(t)+\widehat R_\theta(t)\partial_xh_\theta(t)\eta_\theta(t)+A_{6,\theta}(t) \Big]dG(t), % \end{aligned}$$ where $$A_{6,\theta}(t)=R^v_\theta(t)h^v_\theta(t)-\widehat{R}_\theta(t)h_\theta(t)-h_\theta(t)\chi_\theta(t)-\widehat{R}_\theta(t)\partial_xh_\theta(t)\eta_\theta(t).$$ It follows from Itô's formula to $p^A_\theta(t)[(R_\theta(t))^{-1}\chi_\theta(t)]$ on $[0,T]$ that $$\label{4.47} \begin{aligned} &\mathbb{E}^{\widehat{v},\theta}\Big[ \Psi_\theta(\widehat{X}_\theta(T))[(\widehat{R}_\theta(T))^{-1}\chi_\theta(T)] \Big] =\mathbb{E}^{\widehat{v},\theta}\Big[\int_0^T\bar{q}_\theta^ A(\partial_xh_\theta(t)\eta_\theta(t)+(\widehat{R}_\theta(t))^{-1}A_{6,\theta}(t))dt\Big]. 
\end{aligned}$$ Since $\Lambda_\theta$ is convex with respect to $x$, $S$ is concave with respect to $x$ as well as the first inequality of ([\[4.43-2\]](#4.43-2){reference-type="ref" reference="4.43-2"}), we arrive at from the definition of ${\cal Q}^{\widehat{v}}$ that $$\begin{aligned} & J(v(\cdot))-J(\widehat{v}(\cdot))\\ & \geqslant\int_\Theta\mathbb{E}\Big[R^v_\theta(T)\Psi_\theta(X_\theta^v(T))-\widehat R_\theta(T)\Psi_\theta(\widehat{X}_\theta(T))+\Lambda_\theta(Y^v_\theta(0))-\Lambda_\theta(\widehat{Y}_\theta(0))\Big] \overline{Q}(d\theta)\\ % & \geqslant\int_\Theta\mathbb{E}\Big[\Psi_\theta(\widehat{X}_\theta(T))\chi_\theta(T)\Big] \overline{Q}(d\theta) +\int_\Theta\mathbb{E}[\widehat R_\theta(T)\partial_x\Psi_\theta(\widehat X_\theta(T))\eta_\theta(T)] \overline{Q}(d\theta)\\ & \quad+\int_\Theta\mathbb{E}[\partial_x\Lambda_\theta(\widehat{Y}_\theta(0))\xi_\theta(0)] \overline{Q}(d\theta)\\ % & =\int_\Theta\mathbb{E}^{\widehat{v},\theta}\Big[(\widehat{R}_\theta(T))^{-1}\Psi_\theta(\widehat{X}_\theta(T))\chi_\theta(T)\Big] \overline{Q}(d\theta) +\int_\Theta\mathbb{E}^{\widehat{v},\theta}[\partial_x\Psi_\theta(X_\theta(T))\eta_\theta(T)] \overline{Q}(d\theta)\\ & \quad+\int_\Theta\mathbb{E}^{\widehat{v},\theta}[\partial_x\Lambda_\theta(\widehat{Y}_\theta(0))\xi_\theta(0)] \overline{Q}(d\theta)\\ % & \geqslant\int_\Theta\bigg\{\mathbb{E}^{\widehat{v},\theta}\Big[(\widehat{R}_\theta(T))^{-1}\Psi_\theta(\widehat{X}_\theta(T))\chi_\theta(T)\Big] +\mathbb{E}^{\widehat{v},\theta}\Big[\partial_x\Psi_\theta(X_\theta(T))\eta_\theta(T)-\partial_x\Phi_\theta(\widehat{X}_\theta(T))p^F_\theta(T)\eta_\theta(T)\Big]\\ % &\quad+ \mathbb{E}^{\widehat{v},\theta}\Big[p^F_\theta(T)(\Phi_\theta(X^v_\theta(T), \alpha(T))-\Phi_\theta(\widehat X_\theta(T), \alpha(T)))\Big] +\mathbb{E}^{\widehat{v},\theta}\Big[\partial_y\Lambda_\theta(\widehat{Y}_\theta(0))\xi_\theta(0)\Big]\bigg\} \overline{Q}(d\theta) . 
\end{aligned}$$ Insert ([\[4.45\]](#4.45){reference-type="ref" reference="4.45"}), ([\[4.46-1\]](#4.46-1){reference-type="ref" reference="4.46-1"}) and ([\[4.47\]](#4.47){reference-type="ref" reference="4.47"}) into the above inequality and from the definitions of $A_{i,\theta}, i=1,2,\cdots, 6$, one can see $$\begin{aligned} & J(v(\cdot))-J(\widehat{v}(\cdot))\\ %& \geq\int_\Th\bigg\{\dbE^{\h{v},\th}[\int_0^T\bar{q}_\th^A(\pa_xh_\th(t)\eta_\th(t)+(\widehat{R}_\th(t))^{-1}A_{6,\th}(t))dt)]\\ %% % & -\dbE^{\h{v},\th}[\int_0^TA_{5,\th}(t)p_\th^F(t)dt] %% % +\dbE^{\h{v},\th}[\int_0^T p_\th^F(t)\pa_xf_\th(t)\eta_\th(t) % -\bar{q}_\th^A(t)\pa_xh_\th(t)\eta_\th(t)dt]\\ %% % & +\dbE^{\h{v},\th}[\int_0^T(A_{1,\th}(t)+h_\th(t)A_{3,\th}(t))p_\th^B(t)+A_{2,\th}(t)q_\th^B(t) % +A_{3,\th}(t)\bar{q}_\th^B(t)\\ %% % & +\sum_{i,j=1,i\neq j}^IA_{4,\th,ij}(t)k_{\th,ij}^B(t)\l_{ij}\mathbf{1}_{\{\a(t-)=i\}}dt] \bigg\}\bar{Q}(d\th)\\ % & \geqslant\int_\Theta\Bigg\{ \mathbb{E}^{\widehat{v},\theta}\bigg[\int_0^T\langle\partial_vH_\theta(t,\widehat{X}_\theta(t),\widehat{Y}_\theta(t), \widehat{Z}_\theta(t),\widehat{\bar{Z}}_\theta(t),\widehat{K}_{\theta},\widehat{v}(t),\alpha(t-),\\ & \qquad\qquad\qquad\qquad\qquad\qquad p^F_\theta(t),p^B_\theta(t),q^B_\theta(t),\bar{q}^B_\theta(t),k^B_{\theta}(t),\bar{q}^A_\theta(t)),v(t)-\widehat{v}(t)\rangle dt\bigg]\\ % & \quad+\mathbb{E}^{\widehat{v},\theta}\bigg[\int_0^Tp_\theta^B(t)\Big (\bar{\sigma}^v_\theta(t)h_\theta^v(t)-\bar{\sigma}_\theta(t)h_\theta(t) -h_\theta(t)\partial_x\bar{\sigma}(t)\eta_\theta(t)-\bar{\sigma}_\theta(t)\partial_xh_\theta(t)\eta_\theta(t)\Big )dt\bigg]\\ % & \quad+\mathbb{E}^{\widehat{v},\theta}\bigg[\int_0^T\bar{q}_\theta^A(t)(\widehat R_\theta(t))^{-1}\Big (R^v_\theta(t)h_\theta^v(t)-\widehat R_\theta(t)h_\theta(t) -h_\theta(t)\chi_\theta(t) -\widehat{R}_\theta(t)\partial_xh_\theta(t)\eta_\theta(t)\Big )dt\bigg]\\ % & 
\quad+\mathbb{E}^{\widehat{v},\theta}\bigg[\int_0^T(L(t,X_\theta^v(t),v(t),\alpha(t-);h_\theta(t),p^B_\theta(t))-L_\theta(t)-\partial_xL_\theta(t)\eta_\theta(t) )dt\bigg] \Bigg\}\overline{Q}(d\theta), \end{aligned}$$ where $$L_\theta(t)=L(t,X_\theta(t),\widehat{v}(t),\alpha(t-);h_\theta(t),p^B_\theta(t)),\quad \partial_xL_\theta(t)=\partial_xL(t,X_\theta(t),\widehat{v}(t),\alpha(t-);h_\theta(t),p^B_\theta(t)).$$ According to the item $\mathrm{ii)}$ and item $\mathrm{iii)}$ of ([\[4.43-2\]](#4.43-2){reference-type="ref" reference="4.43-2"}) and the fact that $L$ is convex with respect to $x$, it follows $$\begin{aligned} J(v(\cdot))-J(\widehat{v}(\cdot)) % & \geqslant\mathbb{E}^{\widehat{v},\theta}\bigg[\int_\Theta\mathbb{E}^{\widehat{v},\theta}\Big[\langle\partial_vH_\theta\big(t,\widehat{X}_\theta(t),\widehat{Y}_\theta(t),\widehat{Z}_\theta(t), \widehat{\bar{Z}}_\theta(t),\widehat{K}_{\theta},\widehat{v}(t),\alpha(t-),p^F_\theta(t),p^B_\theta(t),\\ & \qquad\qquad\qquad\qquad\qquad q^B_\theta(t),\bar{q}^B_\theta(t),k^B_{\theta}(t),\bar{q}^A_\theta(t)\big),v(t)-\widehat{v}(t)\rangle |{\cal G}_t\Big] \overline{Q}(d\theta)\bigg]\geqslant 0. \end{aligned}$$ The proof is complete. ◻ **Remark 19**. *Compared to Menoukeu and Pamen [@Menoukeu-2017], our work has four significant differences: 1) the partial information filtration is generated by the process $G(\cdot)$ in ([\[4.2\]](#4.2){reference-type="ref" reference="4.2"}), i.e., $\mathcal{G}_s=\sigma\{G(r): 0\leqslant r\leqslant s\}$; 2) our original cost functional involves probability $\tilde{\mathbb{P}}^v_\theta$, but not $\mathbb{P}$; 3) the adjoint equation is an FBSDE with Markovian regime switching; 4) we adopt the linearization method and weak convergence technique to study the sufficient maximum principle.* # A risk-minimizing portfolio selection problem {#sec 5-0} In the section, we review in introduction and solve it. 
Recall that $\theta\in\Theta=\{1,2\}$ and ${\cal Q}=\{Q^\lambda: \lambda\in[0,1]\}.$ Here $Q^\lambda$ is the probability such that $Q^\lambda(\{1\})=\lambda,\ Q^\lambda(\{2\})=1-\lambda.$ Define $\widetilde{\mu}(t)=\mathbb{E}[\mu(t)|{\cal G}_t]$. Making use of the separation principle to the equation ([\[1.2\]](#1.2){reference-type="ref" reference="1.2"}) (see [@Xiong-Zhou-2007 Theorem 3.1]), we obtain $$\label{5.2} \left\{ \begin{aligned} dx^v_\theta(t)&=r(t)(x_\theta^v(t)-v(t))dt+\widetilde{\mu}(t)v(t)dt+\sigma(t)v(t)d\nu(t),\\ x_\theta^v(0)&=0, \end{aligned} \right.$$ where $\nu(\cdot)$ is the innovation process, which satisfies $$\begin{aligned} d \nu(t)=\frac{1}{\sigma(t)}d \log S_{1}(t)-\frac{1}{\sigma(t)}\Big (\widetilde{\mu}(t)-\frac{1}{2}\sigma^2(t)\Big )dt. \end{aligned}$$ From the definition of $G(t)$ (see ([\[1.3\]](#1.3){reference-type="ref" reference="1.3"})), we know $$\label{5.3} \begin{aligned} d \nu(t)= dG(t)-\frac{1}{\sigma(t)}\Big (\widetilde{\mu}(t)-\frac{1}{2}\sigma^2(t)\Big )dt. \end{aligned}$$ In order to obtain the explicit solution of the optimal control, we consider a special BSDE. Precisely, let $f_{1,\theta}(\cdot), f_{2,\theta}(\cdot):[0,T]\rightarrow\mathbb{R}$ be two bounded functions and let $f_{2,\theta}(t)>0, \theta=1,2$. Consider the following BSDE with regime switching $$\left\{ \begin{aligned} dy^v_\theta(t)&= -\Big (f_{1,\theta}(t)y^v_\theta(t) +\frac{1}{2}f_{2,\theta}(t)(v(t))^2\Big )dt +\bar{z}_\theta^v(t) dG(t)+k^v_{\theta}(t)\bullet dM(t),\\ y_\theta^v(T)&=x_\theta^v(T). \end{aligned} \right.$$ According to , the above equation exists a unique solution $(y^v_\theta,\bar{z}^v_\theta,k^v_\theta)$. The objective is to minimize $$\begin{aligned} J(v(\cdot))& =\sup_{Q^\lambda\in {\cal Q}}\int_\Theta y_\theta^v(0) Q^\lambda(d\theta) =\max\Big\{y_1^v(0), y_2^v(0)\Big\}=\sup_{\lambda\in[0,1]}\Big (\lambda y_1^v(0)+(1-\lambda)y_2^v(0)\Big ). 
\end{aligned}$$ In this case, the Hamiltonian is of the form $$\begin{aligned} H_\theta(t,x,y,\bar{z},v, p^F,p^B,\bar{q}^B) & =\Big[r(t)(x_\theta(t)-v(t))+\mu(t) v(t)\Big]p^B+\sigma(t)v(t)\bar{q}^B\\ &\quad-\Big[f_{1,\theta}(t)y_\theta(t)+\frac{1}{2}f_{2,\theta}(v(t))^2-\frac{1}{\sigma(t)}\Big (\mu(t)-\frac{1}{2}\sigma^2(t)\Big )\bar{z}_\theta(t) \Big]p^F, \end{aligned}$$ and the adjoint equation is $$\left\{ \begin{aligned} dp_\theta^F(t)&=\Big (f_{1,\theta}(t)+\frac{1}{\sigma^2(t)}(\mu(t)-\frac{1}{2}\sigma^2(t))^2\Big )p_\theta^F(t)dt -\frac{1}{\sigma(t)}\Big (\mu(t)-\frac{1}{2}\sigma^2(t)\Big )p_\theta^F(t)dG(t),\\ dp_\theta^B(t)&=-\Big (r(t)p_\theta^B(t)+ \frac{1}{\sigma(t)}\Big (\mu(t)-\frac{1}{2}\sigma^2(t)\Big )\bar{q}_\theta^B(t)\Big )dt+\bar{q}_\theta^B(t)dG(t)+k_\theta^B(t)\bullet dM(t),\\ p_\theta^F(0)&=-1, \quad p_\theta^B(T)=-p_\theta^F(T). \end{aligned} \right.$$ According to ([\[5.3\]](#5.3){reference-type="ref" reference="5.3"}), we can rewrite the above equation as $$\left\{ \begin{aligned} dp_\theta^F(t)&=\Big (f_{1,\theta}(t)+\frac{1}{\sigma^2(t)}(\mu(t)-\frac{1}{2}\sigma^2(t))(\mu(t)-\tilde{\mu}(t))\Big )p_\theta^F(t)dt\\ &\quad -\frac{1}{\sigma(t)}\Big (\mu(t)-\frac{1}{2}\sigma^2(t)\Big )p_\theta^F(t)d\nu(t),\ t\in[0,T],\\ dp_\theta^B(t)&=-\Big (r(t)p_\theta^B(t)-\frac{1}{\sigma(t)}(\mu(t)-\tilde{\mu}(t))\bar{q}_\theta^B(t)\Big )dt+\bar{q}_\theta^B(t)d\nu(t)+k_\theta^B(t)\bullet dM(t),\\ p_\theta^F(0)&=-1, \quad p_\theta^B(T)=-p_\theta^F(T). \end{aligned} \right.$$ Let $\widehat{v}(\cdot)$ be an optimal control. 
According to , there exist a constant $\widehat{\lambda}\in[0,1]$ such that $\max\{\widehat{y}_1(0),\widehat{y}_2(0) \}=\widehat{\lambda}\widehat{y}_1(0)+(1-\widehat{\lambda})\widehat{y}_2(0)$ and $$\begin{aligned} &\widehat{\lambda}\Big[ (r(t)-\widetilde{\mu}(t))\widetilde{p}_1^B(t)-\sigma(t) \widetilde{\bar{q}}_1^B(t)+f_{2,1}(t)\widetilde{p}_1^F(t)\widehat v(t)\Big]\\ &+(1-\widehat{\lambda})\Big[ (r(t)-\widetilde{\mu}(t))\widetilde{p}_2^B(t)-\sigma(t) \widetilde{\bar{q}}_2^B(t)+f_{2,2}(t)\widetilde{p}_2^F(t)\widehat v(t)\Big]=0, \end{aligned}$$ where $\widetilde{p}_\theta^F$ and $(\widetilde{p}_\theta^B, \widetilde{\bar{q}}_\theta^B, \widetilde{k}_\theta^B)$ are the unique solutions of $$\left\{ \begin{aligned} d\widetilde{p}_\theta^F(t)&=\Big (f_{1,\theta}(t)+\frac{1}{\sigma^2(t)}(\mu(t)-\frac{1}{2}\sigma^2(t))(\mu(t)-\tilde{\mu}(t))\Big )\widetilde{p}_\theta^F(t)dt\\ &\quad -\frac{1}{\sigma(t)}(\mu(t)-\frac{1}{2}\sigma^2(t))\widetilde p_\theta^F(t)d\nu(t),\ t\in[0,T],\\ d\widetilde p_\theta^B(t)&=-\Big (r(t)\widetilde p_\theta^B(t)-\frac{1}{\sigma(t)}(\mu(t)-\tilde{\mu}(t) )\widetilde{\bar{q}}_\theta^B(t)\Big )dt+\widetilde{\bar{q}}_\theta^B(t)d\nu(t)+\widetilde k_\theta^B(t)\bullet dM(t),\\ \widetilde p_\theta^F(0)&=-1, \quad\widetilde p_\theta^B(T)=-\widetilde p_\theta^F(T). 
\end{aligned} \right.$$ For simplicity of editing, denote $$\begin{aligned} \widetilde{p}^F(t)&= \begin{pmatrix} \widetilde{p}^F_1(t)\\ % \widetilde{p}^F_2(t) \end{pmatrix},\quad % % H(t)= % \begin{pmatrix} % \mu_1(t)-\frac{1}{2}\si^2_1(t) &0 \\ % % % 0 & \mu_2(t)-\frac{1}{2}\si^2_2(t) % \end{pmatrix}\\ % % % L(t)&= % \begin{pmatrix} % \mu_1(t)-\tilde{\mu}_1(t) &0 \\ % % % 0 & \mu_2(t)-\tilde{\mu}_2(t) % \end{pmatrix}\\ % \si^{-1}(t)&= % \begin{pmatrix} % \frac{1}{\si_1(t)} &0 \\ % % % 0 & \frac{1}{\si_2(t)} % \end{pmatrix}\q % f_1(t)= \begin{pmatrix} f_{1,1}(t) &0 \\ % 0 & f_{1,2}(t) \end{pmatrix}, \quad \widehat{X}(t)= \begin{pmatrix} \widehat x_1(t)\\ % \widehat x_2(t) \end{pmatrix}, \\ % \widetilde{p}^B(t)&= \begin{pmatrix} \widetilde{p}^B_1(t)\\ % \widetilde{p}^B_2(t) \end{pmatrix},\quad % \widetilde{\bar{q}}^B(t)= \begin{pmatrix} \widetilde{\bar{q}}^B_1(t)\\ % \widetilde{\bar{q}}^B_2(t) \end{pmatrix},\quad % \widetilde{\bar{k}}^B(t)= \begin{pmatrix} \widetilde{\bar{k}}^B_1(t)\\ % \widetilde{\bar{k}}^B_2(t) \end{pmatrix}, \\ % % \widetilde{p}^F(t)= % \begin{pmatrix} % \widetilde{p}^F_1(t)\\ % % % \widetilde{p}^F_2(t) % \end{pmatrix},\\ % % \widehat{\Lambda}&= \begin{pmatrix} \widehat{\lambda} &0 \\ % 0 & 1-\widehat{\lambda} \end{pmatrix}, \quad % % % f_2(t)= % \begin{pmatrix} % f_{2,1}(t) &0 \\ % % % 0 & f_{2,2}(t) % \end{pmatrix}, \\ % % r(t)&= % \begin{pmatrix} % r_1(t) &0 \\ % % % 0 & r_2(t) % \end{pmatrix},\q % % % x(t)= % \begin{pmatrix} % x_1(t) \\ % % % x_2(t) % \end{pmatrix},\q % % % \nu(t)= % \begin{pmatrix} % \nu_1(t) \\ % % % \nu_2(t) % \end{pmatrix},\\ % f_2^{\widehat{\lambda}}= \widehat{\lambda}f_{2,1}(t)\widetilde{p}_1^F(t)+(1-\widehat{\lambda})f_{2,2}(t)\widetilde{p}_2^F(t). 
\end{aligned}$$ Consequently, we have $$\label{5.4} \left\{ \begin{aligned} &\max\{\widehat{y}_1(0),\widehat{y}_2(0) \}=\widehat{\lambda}\widehat{y}_1(0)+(1-\widehat{\lambda})\widehat{y}_2(0),\\ & (r(t)-\widetilde{\mu}(t))I_{1\times 2} \widehat{\Lambda}\widetilde{p}^B(t) -\sigma(t)I_{1\times 2}\widehat{\Lambda}\widetilde{\bar{q}}^B(t)+f^{\widehat{\lambda}}_2(t) \widehat{v}(t)=0, \end{aligned} \right.$$ and $$\label{5.11} \left\{ \begin{aligned} d\widehat{X}(t)&=\Big (r(t)\widehat{X}(t)- (r(t)-\widetilde{\mu}(t))I_{2\times 1}\widehat{v}(t)\Big )dt+\sigma(t)\widehat{v}(t)I_{2\times 1}d\nu(t),\\ \widehat{X}(0)&=x_0I_{2\times 1}. \end{aligned} \right.$$ From the second equality of ([\[5.4\]](#5.4){reference-type="ref" reference="5.4"}), one has $$\widehat{v}(t)=\Big (f^{\widehat{\lambda}}_2(t)\Big )^{-1}\Big ( \sigma(t)I_{1\times 2} \widehat{\Lambda}\widetilde{\bar{q}}^B(t) -(r(t)-\widetilde{\mu}(t))I_{1\times 2} \widehat{\Lambda}\widetilde{p}^B(t) \Big ),$$ where $\widetilde{p}^F$ and $(\widetilde{p}^B, \widetilde{\bar{q}}^B, \widetilde{k}^B)$ are the unique solutions of $$\left\{ \begin{aligned} d\widetilde{p}^F(t)&=\Big (f_{1}(t)+\frac{1}{\sigma^2(t)}(\mu(t)-\frac{1}{2}\sigma^2(t))(\mu(t)-\tilde{\mu}(t))I_{2\times 2}\Big ) \widetilde{p}^F(t)dt\\ & - \frac{1}{\sigma(t)}(\mu(t)-\frac{1}{2}\sigma^2(t)) \widetilde{p}^F(t)d\nu(t), \quad t\in[0,T],\\ d\widetilde{p}^B(t)&=-\Big (r(t)\widetilde{p}^B(t)-\frac{1}{\sigma(t)}(\mu(t)-\tilde{\mu}(t) )\widetilde{\bar{q}}^B(t)\Big )dt+\widetilde{\bar{q}}^B(t)d\nu(t)+\widetilde k^B(t)\bullet dM(t),\\ \widetilde{p}^F(0)&=-I_{2\times 1}, \quad\widetilde{p}^B(T)=-\widetilde{p}^F(T). 
\end{aligned} \right.$$ In the meanwhile, the optimal state process $\widehat X$ satisfies $$\label{5.11} \left\{ \begin{aligned} d\widehat{X}(t)&=\Big[r(t)\widehat{X}(t)- (r(t)-\widetilde{\mu}(t))I_{2\times 1} \Big (f^{\widehat{\lambda}}_2(t)\Big )^{-1}\Big ( \sigma(t)I_{1\times 2} \widehat{\Lambda}\widetilde{\bar{q}}^B(t) -(r(t)-\widetilde{\mu}(t))I_{1\times 2} \widehat{\Lambda}\widetilde{p}^B(t) \Big )\Big]dt\\ &\quad+\sigma(t) \Big (f^{\widehat{\lambda}}_2(t)\Big )^{-1}\Big ( \sigma(t)I_{1\times 2} \widehat{\Lambda}\widetilde{\bar{q}}^B(t) -(r(t)-\widetilde{\mu}(t))I_{1\times 2} \widehat{\Lambda}\widetilde{p}^B(t) \Big ) I_{2\times 1} d\nu(t),\\ \widehat{X}(0)&=x_0I_{2\times 1}, \end{aligned} \right.$$ which is a linear SDE. Hence it possesses a unique solution $\widehat{X}\in {{\cal S}}_{\mathbb{G}}^{2}(0,T;\mathbb{R}^2)$, where $\mathbb{G}=({\cal G}_s)_{0\leqslant s \leqslant T}$. # Proof of the auxiliary results {#sec 5} ## Proof of Recall $\beta= 2k,\ k\in \mathbb{N}.$ It follows from Young inequality, for $0\leqslant t\leqslant r\leqslant T$, $$\begin{aligned} \nonumber \begin{aligned} &\bigg(\int_t^r|\phi(s)|^2\bullet d[M](s)\bigg)^k \\ &= \sum_{t\leqslant s\leqslant r}\bigg\{ \bigg( \int_t^s|\phi(u)|^2 \bullet d[M](u) \bigg)^k -\bigg( \int_t^{s-}|\phi(u)|^2 \bullet d[M](u)\bigg)^k\bigg\}\\ & =\sum_{t\leqslant s\leqslant r}\sum_{i,j=1,i\neq j}^I\bigg\{\bigg( \int_t^{s-}|\phi(u)|^2\bullet d[M](u)+|\phi_{ij}(s)|^2\bigg)^k % -\bigg( \int_t^{s-}|\phi(u)|^2\bullet d[M](u)\bigg)^k\bigg\}\mathbf{1}_{\{\alpha(s-)=i\}} \mathbf{1}_{\{\alpha(s)=j\}} \\ % & =\sum_{t\leqslant s\leqslant r}\sum_{i,j=1,i\neq j}^I\sum_{\ell=1}^k\begin{pmatrix}k\\ \ell \end{pmatrix} \bigg( \int_t^{s-}|\phi(u)|^2\bullet d[M](u)\bigg)^{k-\ell} \Big (|\phi_{ij}(s)|^2\Big )^\ell\mathbf{1}_{\{\alpha(s-)=i\}} \mathbf{1}_{\{\alpha(s)=j\}}\\ % & \leqslant C_k \sum_{t\leqslant s\leqslant r}\sum_{i,j=1,i\neq j}^I\bigg(\Big (\int_t^{s-}|\phi(u)|^2\bullet d[M](u)\Big )^{k} + \Big 
(|\phi_{ij}(s)|^2\Big )^k\bigg)\mathbf{1}_{\{\alpha(s-)=i\}} \mathbf{1}_{\{\alpha(s)=j\}}\\ % & \leqslant C_k \int_t^r\bigg(\Big (\int_t^{s-}|\phi(u)|^2\bullet d[M](u)\Big )^{k} + \Big (|\phi(s)|^2\Big )^k\bigg)\bullet d[M](s). \\ \end{aligned} \end{aligned}$$ Consequently, it follows $$\begin{aligned} \nonumber \begin{aligned} & \mathbb{E}\bigg[\bigg(\int_t^r|\phi(s)|^2\bullet d[M](s)\bigg)^k\bigg]\\ & \leqslant C_k \mathbb{E}\bigg[\int_t^r\sum_{i,j=1,i\neq j}^I\bigg(\Big (\int_t^{s-}|\phi(u)|^2\bullet d[M](u)\Big )^{k} + \Big (|\phi_{ij}(s)|^2\Big )^k\bigg)\lambda_{ij}\mathbf{1}_{\{\alpha(s-)=i\}}ds\bigg] \\ &\leqslant C_k \mathbb{E}\bigg[\int_t^r\sum_{i,j=1,i\neq j}^I\Big (\int_t^{s-}|\phi(u)|^2\bullet d[M](u)\Big )^{k}\lambda_{ij}\mathbf{1}_{\{\alpha(s-)=i\}}ds\bigg]\\ &\quad+ C_k \mathbb{E}\bigg[\int_t^r\sum_{i,j=1,i\neq j}^I|\phi_{ij}(s)|^{2k}\lambda_{ij}\mathbf{1}_{\{\alpha(s-)=i\}}ds\bigg] \\ % &\leqslant C_k \Big (\sum_{i,j=1,i\neq j}^I\lambda_{ij}\Big )\mathbb{E}\bigg[\int_t^r \Big (\int_t^{s-}|\phi(u)|^2\bullet d[M](u)\Big )^{k}ds\bigg] + C_k \mathbb{E}\bigg[\int_t^r\sum_{i,j=1,i\neq j}^I|\phi_{ij}(s)|^{2k}\lambda_{ij}\mathbf{1}_{\{\alpha(s-)=i\}}ds\bigg]. \end{aligned} \end{aligned}$$ From Gronwall lemma and $\beta=2k$, we arrive at $$\begin{aligned} \nonumber \begin{aligned} & \mathbb{E}\bigg[\bigg(\int_t^T|\phi(s)|^2\bullet d[M](s)\bigg)^\frac{\beta}{2}\bigg] \leqslant C \mathbb{E}\bigg[ \int_t^T \sum_{i,j=1,i\neq j}^I|\phi_{ij}(s)|^\beta\lambda_{ij}\mathbf{1}_{\{\alpha(s-)=i\}}ds \bigg]. \end{aligned} \end{aligned}$$ ## Proof of *Proof of Theorem 2.1* Notice that $$\begin{aligned} \nonumber \begin{aligned} \mathbb{E}\Big[|X(t)|^\beta\Big]\leqslant&C\bigg\{|x|^\beta+\mathbb{E}\Big[\Big|\int_0^t b(s,X(s),\alpha(s-))ds\Big|^\beta +\Big|\int_0^t\sigma(s,X(s),\alpha(s-))dW(s)\Big|^\beta\\ &+\Big|\int_0^t\beta(s,X(s-),\alpha(s-))\bullet dM(s)\Big|^\beta\Big]\bigg\}. 
\end{aligned} \end{aligned}$$ According to Burkholder-Davis-Gundy's inequality, and $|\beta_{ij}(s,X(s),\alpha(s-))|\leqslant L(|X(s)|+|\beta_{ij}(s,$ $0, \alpha(s-))|),$ we have $$\begin{aligned} \nonumber \begin{aligned} & \mathbb{E}\Big[\Big|\int_0^t\beta(s,X(s-),\alpha(s-))\bullet dM(s)\Big|^\beta\Big]\\ &\leqslant\mathbb{E}\bigg[\bigg(\int_0^T|\beta(s,X(s-),\alpha(s-))|^2\bullet d[M](s)\bigg)^\frac{\beta}{2}\bigg]\\ &\leqslant C\mathbb{E}\bigg[\int_0^T\sum_{i,j=1,i\neq j}^I|\beta_{ij}(s,X(s),\alpha(s-))|^\beta\lambda_{ij}\mathbf{1}_{\{\alpha(s-)=i\}}ds \bigg]\\ % &\leq C\dbE\bigg[\int_0^T\sum_{i,j=1,i\neq j}^I(|\b_{ij}(s,0,\a(s-))|^\b+|X(s)|^\b)\l_{ij}\mathbf{1}_{\{\a(s)=j\}}ds % \bigg]\\ &\leqslant C\sum_{i,j=1,i\neq j}^I\lambda_{ij}\mathbb{E}\bigg[\int_0^T|X(s)|^\beta ds\bigg]+C\mathbb{E}\bigg[\int_0^T\sum_{i,j=1,i\neq j}^I| \beta_{ij}(s,0,\alpha(s-))|^\beta\lambda_{ij}\mathbf{1}_{\{\alpha(s-)=i\}}ds \bigg]. \end{aligned} \end{aligned}$$ We make similar estimate for other terms. Following standard argument for classical SDEs, the equation ([\[6.111\]](#6.111){reference-type="ref" reference="6.111"}) exists a unique solution $X(\cdot)\in{{\cal S}}_{\mathbb{F}}^{\beta}(0,T;\mathbb{R}^n).$ ## Proof of Let $\gamma:=2L^2(2+\sum\limits_{i,j=1,i\neq j}^I|\lambda_{ij}|^2)+\frac{1}{2}$, we define the norm $$\begin{aligned} \|(Y,Z,K)\|^2_\gamma=\mathbb{E}\Big[\int_0^Te^{\gamma s}(|Y(s)|^2+ |Z(s)|^2+\sum_{i,j=1,i\neq j}^I|K_{ij}(s)|^2\lambda_{ij}1_{\{\alpha(s-)=i\}})ds\Big], \end{aligned}$$ which is equivalent to the norm $$||(Y,Z,K)||^2 =\mathbb{E}\Big[\int_0^T(|Y(s)|^2+ |Z(s)|^2+\sum_{i,j=1,i\neq j}^I|K_{ij}(s)|^2\lambda_{ij}1_{\{\alpha(s-)=i\}})ds\Big].$$ From the assumption $\mathbb{E}[(\int_0^T |F(t,0,0,0,i)|dt)^2]<\infty,$ it is easy to know that, for $\xi\in L^2(\mathscr{F}_T;\mathbb{R}^m)$ and the triple $(y(\cdot),z(\cdot),k(\cdot))\in {\cal H}^{2,1}_{\mathbb{F}}(0,T;\mathbb{R}^{m}) \times{\cal H}^{2,1}_{\mathbb{F}}(0,T;\mathbb{R}^{m \times d}) \times{{\cal 
K}}_{\mathbb{F}}^{2,1}(0,T;\mathbb{R}^m)$, $$\begin{aligned} &Y(t)+\int_0^tF(s,y(s),z(s),\sum_{i,j=1,i\neq j}^Ik_{ij}(s)\lambda_{ij}1_{\{\alpha(s-)=i\}},\alpha(s-))ds\\ &=\mathbb{E}\Big[\xi+\int_0^TF(s,y(s),z(s),\sum_{i,j=1,i\neq j}^Ik_{ij}(s)\lambda_{ij}1_{\{\alpha(s-)=i\}},\alpha(s-))ds|\mathscr{F}_t\Big] \end{aligned}$$ is an $(\mathbb{F},\mathbb{P})$-square integrable martingale. From the Martingale Represent Theorem ([@Donnelly-Heunis-2012 Proposition 3.9]), there exists a pair $(Z(\cdot), K(\cdot))\in {\cal H}^{2,1}_{\mathbb{F}}(0,T;\mathbb{R}^{m\times d}) \times{{\cal K}}_{\mathbb{F}}^{2,1}(0,T;\mathbb{R}^m)$ such that $$\begin{aligned} Y(t)&=\xi+\int_t^TF(s,y(s),z(s),\sum_{i,j=1,i\neq j}^Ik_{ij}(s)\lambda_{ij}1_{\{\alpha(s-)=i\}},\alpha(s-))ds\\ &\quad-\int_t^TZ(s)dW(s)-\int_t^TK(s)\bullet dM(s),\quad t\in[0,T]. \end{aligned}$$ This allows to define a mapping $$\begin{aligned} &\Gamma:{\cal H}^{2,1}_{\mathbb{F}}(0,T;\mathbb{R}^{m}) \times{\cal H}^{2,1}_{\mathbb{F}}(0,T;\mathbb{R}^{m \times d}) \times{{\cal K}}_{\mathbb{F}}^{2,1}(0,T;\mathbb{R}^m)\\ &\qquad\rightarrow{\cal H}^{2,1}_{\mathbb{F}}(0,T;\mathbb{R}^{m}) \times{\cal H}^{2,1}_{\mathbb{F}}(0,T;\mathbb{R}^{m \times d}) \times{{\cal K}}_{\mathbb{F}}^{2,1}(0,T;\mathbb{R}^m) \end{aligned}$$ under the norm $\|\cdot\|_\gamma$. For $(y^i(\cdot),z^i(\cdot),k^i(\cdot))\in {\cal H}^{2,1}_{\mathbb{F}}(0,T;\mathbb{R}^{m}) \times{\cal H}^{2,1}_{\mathbb{F}}(0,T;\mathbb{R}^{m \times d}) \times{{\cal K}}_{\mathbb{F}}^{2,1}(0,T;\mathbb{R}^m)$, we define $(Y^i(\cdot),Z^i(\cdot),$ $K^i(\cdot)):=\Gamma((y^i(\cdot),z^i(\cdot),k^i(\cdot))),\ i=1,2.$ Denote $\Psi=Y,Z,K,y,z,k$, $\delta\Psi(t)=\Psi^1(t)-\Psi^2(t)$. 
Applying Itô's formula to $e^{\gamma t} |\delta Y(t)|^2$, it follows $$\begin{aligned} &|e^{\gamma t}\delta Y(t)|^2+\int_t^T e^{\gamma s}(\gamma|\delta Y(s)|^2+|\delta Z(s)|^2+\sum_{i,j=1,i\neq j}^I|\delta K_{ij}(s)|^2\lambda_{ij}\mathbf{1}_{\{\alpha(s-)=i\}})ds\\ % &=2\int_t^Te^{\gamma s}\delta Y(s)\Big (F(s,y^1(s),z^1(s),\sum_{i,j=1,i\neq j}^Ik^1_{ij}(s)\lambda_{ij}\mathbf{1}_{\{\alpha(s-)=i\}},\alpha(s-))\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad-F(s,y^2(s),z^2(s),\sum_{i,j=1,i\neq j}^Ik^2_{ij}(t)\lambda_{ij}\mathbf{1}_{\{\alpha(s-)=i\}},\alpha(s-))\Big )ds\\ % &\quad-\int_t^T2e^{\gamma s}\delta Y(s)\delta Z(s)dW(s)-\int_t^Te^{\gamma s}\sum\limits_{i,j=1,i\neq j}^I(|\delta Y(s-)+\delta K_{ij}(s)|^2-|\delta Y(s-)|^2)dM_{ij}(s).\\ \end{aligned}$$ Taking $t=0$ and making use of the Lipschitz property of $F$, then we have $$\begin{aligned} &\mathbb{E}\Big[\int_0^T e^{\gamma s}(|\delta Y(s)|^2+|\delta Z(s)|^2+\sum_{i,j=1,i\neq j}^I|\delta K_{ij}(s)|^2\lambda_{ij}\mathbf{1}_{\{\alpha(s-)=i\}})ds\Big]\\ &\leqslant\frac{1}{2}\mathbb{E}\Big[\int_0^Te^{\gamma s}(|\delta y(s)|^2+|\delta z(s)|^2+\sum_{i,j=1,i\neq j}^I|\delta k_{ij}(s)|^2\lambda_{ij}\mathbf{1}_{\{\alpha(s-)=i\}})ds\Big]. \end{aligned}$$ It means that $\Gamma$ is a strict contraction mapping under the norm $\|\cdot\|_\gamma$. Thereby, the existence and uniqueness is obtained. The proof of $Y\in {\cal S}^2_{\mathbb{F}}(0,T;\mathbb{R}^m)$ is regular. Hence, we omit it. Finally, applying Itô's formula to $e^{\gamma t}|Y(t)|^2$ on $[0,T]$, one has $$\begin{aligned} &|e^{\gamma t} Y(t)|^2+\int_t^T e^{\gamma s}(\gamma| Y(s)|^2+| Z(s)|^2+\sum_{i,j=1,i\neq j}^I| K_{ij}(s)|^2\lambda_{ij}\mathbf{1}_{\{\alpha(s-)=i\}})ds\\ % &=2\int_t^Te^{\gamma s} Y(s)F(s,Y(s),Z(s),\sum_{i,j=1,i\neq j}^IK_{ij}(s)\lambda_{ij}\mathbf{1}_{\{\alpha(s-)=i\}},\alpha(s-))ds\\ % &\quad-\int_t^T2e^{\gamma s} Y(s) Z(s)dW(s)-\int_t^Te^{\gamma s}\sum\limits_{i,j=1,i\neq j}^I(| Y(s-)+ K_{ij}(s)|^2-| Y(s-)|^2)dM_{ij}(s). 
\end{aligned}$$ Thanks to Burkholder-Davis-Gundy inequality and Hölder inequality, we have the estimate ([\[6.4-1\]](#6.4-1){reference-type="ref" reference="6.4-1"}). ## Proof of The proof is split into two steps. *Step 1 ($X$-estimate).* Denote $\eta(s)=X_\theta^v(s)-X_{\bar{\theta}}^v(s)$ and, for $l=b,\sigma,\bar{\sigma},h,$ $$\begin{aligned} A_l(s)&:=\int_0^1 \partial_xb_{\bar{\theta}}(s,X^v_{\bar{\theta}}(s)+\lambda(X^v_{\theta}(s)-X^v_{\bar{\theta}}(s)), v(s),\alpha(s-))d\lambda,\\ B^{\theta,\bar{\theta}}_l(s)&:=l_\theta(s,X^v_{\theta}(s), v(s),\alpha(s-))-l_{\bar{\theta}}(s,X^v_{\theta}(s), v(s),\alpha(s-)), \\ A_{\beta,ij}(s)&:=\int_0^1 \partial_x\beta_{\bar{\theta},ij}(s,X^v_{\bar{\theta}}(s)+\lambda(X^v_{\theta}(s)-X^v_{\bar{\theta}}(s)), v(s),\alpha(s-))d\lambda,\\ B^{\theta,\bar{\theta}}_{\beta,ij}(s)&:=\beta_{\theta,ij}(s,X^v_{\theta}(s), v(s),\alpha(s-))-\beta_{\bar{\theta},ij}(s,X^v_{\theta}(s), v(s),\alpha(s-)). \end{aligned}$$ Then we have $$\begin{aligned} \eta(t) & =\int_0^t\Big (\Big[A_b(s)+h^v_{\bar{\theta}}(s)A_{\bar{\sigma}}(s)+\bar{\sigma}^v_{\theta}(s)A_h(s)\Big]\eta(s) +\Big[B^{\theta,\bar{\theta}}_b(s)+h^v_{\bar{\theta}}B^{\theta,\bar{\theta}}_{\bar{\sigma}}(s) +\bar{\sigma}^v_{\theta}(s)B^{\theta,\bar{\theta}}_h(s)\Big]\Big )ds\\ & \quad+\int_0^t \Big (A_{\sigma}(s)\eta(s)+B^{\theta,\bar{\theta}}_{\sigma}(s)\Big )dW(s)+\int_0^t \Big (A_{\bar{\sigma}}(s)\eta(s) +B^{\theta,\bar{\theta}}_{\bar{\sigma}}(s)\Big )dG(s) \\ & \quad+ \int_0^t \Big (\sum_{i,j=1,i\neq j}^IA_{\beta,ij}(s) \eta(s)\lambda_{ij} \mathbf{1}_{\{\alpha(s-)=i\}} +\sum_{i,j=1,i\neq j}^IB^{\theta,\bar{\theta}}_{\beta,ij}(s) \lambda_{ij} \mathbf{1}_{\{\alpha(s-)=i\}}\Big ) dM_{ij}(s). 
\end{aligned}$$ It follows from $$\begin{aligned} \mathbb{E}\Big[\sup_{t\in[0,T]}|\eta(t)|^4\Big] & \leqslant C\mathbb{E}\bigg[\Big (\int_0^T|B^{\theta,\bar{\theta}}_b(s)+h^v_{\bar{\theta}}B^{\theta,\bar{\theta}}_{\bar{\sigma}}(s) +\bar{\sigma}^v_{\theta}(s)B^{\theta,\bar{\theta}}_h(s)|ds\Big )^4+\Big (\int_0^T|B^{\theta,\bar{\theta}}_{\sigma}(s)|^2ds\Big )^2\\ & \quad+\Big (\int_0^T|B^{\theta,\bar{\theta}}_{\bar{\sigma}}(s)|^2ds\Big )^2+\int_0^T\sum_{i,j=1,i\neq j}^I|B^{\theta,\bar{\theta}}_{\beta,ij}(s)|^4\lambda_{ij} \mathbf{1}_{\{\alpha(s-)=i\}}ds\bigg]. \end{aligned}$$ Since $|B^{\theta,\bar{\theta}}_b|+|B^{\theta,\bar{\theta}}_{\sigma}|+|B^{\theta,\bar{\theta}}_{\bar{\sigma}}|+|B^{\theta,\bar{\theta}}_{\beta,ij}| +|B^{\theta,\bar{\theta}}_h|\leqslant Ld(\theta,\bar{\theta})$ and $h^v_{\bar{\theta}},\bar{\sigma}^v_{\theta}$ are bounded by $L$, we have $$\begin{aligned} \mathbb{E}\Big[\sup_{t\in[0,T]}|\eta(t)|^4\Big]\leqslant Ld(\theta,\bar{\theta})^4. \end{aligned}$$ *Step 2 ($Y$-estimate).* Denote $\xi(s):=Y_\theta^v(s)-Y_{\bar{\theta}}^v(s),\ \gamma(s):=Z_\theta^v(s)-Z_{\bar{\theta}}^v(s),\ \bar{\gamma}(s):=\bar{Z}_\theta^v(s)-\bar{Z}_{\bar{\theta}}^v(s),\zeta_{ij}(s):=\beta^v_{\theta,ij}(s)-\beta^v_{\bar{\theta},ij}(s)$ and denote, for $l=x,y,z,\bar{z},k$, $$\begin{aligned} C_\Phi(T)&=\int_0^1\partial_x\Phi_{\bar{\theta}}(X^v_{\bar{\theta}}(T)+\lambda(X_\theta^v(T)-X^v_{\bar{\theta}}(T)),\alpha(T))d\lambda,\\ D_\Phi^{\theta,\bar{\theta}}(T)&=\Phi_\theta(X^v_\theta(T),\alpha(T))-\Phi_{\bar{\theta}}(X^v_\theta(T),\alpha(T)),\\ \Pi^v_\theta(s)&=(X^v_\theta(s),Y^v_\theta(s),Z^v_\theta(s),\bar{Z}^v_\theta(s),\sum_{i,j=1,i\neq j}^IK^v_{\theta,ij}(s) \lambda_{ij} \mathbf{1}_{\{\alpha(s-)=i\}} ),\\ C_l(s)&=\int_0^1\partial_lf_{\bar{\theta}}(s,\Pi^v_{\bar{\theta}}(s)+\lambda(\Pi^v_{\theta}(s)-\Pi^v_{\bar{\theta}}(s)),v(s),\alpha(s-))ds,\\ D_f^{\theta,\bar{\theta}}(s)&=f_\theta(s,\Pi^v_\theta(s), v(s),\alpha(s-))-f_{\bar{\theta}}(s,\Pi^v_\theta(s), v(s),\alpha(s-)). 
\end{aligned}$$ Consequently, we have $$\begin{aligned} \xi(t) & =C_\Phi(T)\eta(T)+D_\Phi^{\theta,\bar{\theta}}(T)+\int_t^T (C_x(s)\eta(s)+C_y(s)\xi(s)+C_z(s)\gamma(s)+C_{\bar{z}}(s)\bar{\gamma}(s)\\ & +C_k(s)\sum_{i,j=1,i\neq j}^I\zeta_{ij}(s)\lambda_{ij} \mathbf{1}_{\{\alpha(s-)=i\}})ds-\int_t^T\gamma(s)dW(s)-\int_0^T\bar{\gamma}(s)dG(s)\\ & -\int_s^T \zeta(s)\bullet dM(t). \end{aligned}$$ Thanks to one gets $$\begin{aligned} &\mathbb{E}\bigg[\sup_{t\in[0,T]}|\xi(t)|^2+\int_0^T(|\gamma(t)|^2+|\bar{\gamma}(t)|^2 +\sum_{i,j=1,i\neq j}^I|\zeta_{ij}(t)|^2\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}})dt\bigg] \\ &\leqslant C\mathbb{E}\bigg[|C_\Phi(T)\eta(T)+D_\Phi^{\theta,\bar{\theta}}(T)|^2+\int_0^T |C_x(t)\eta(t)|^2dt \bigg]\leqslant Ld(\theta,\bar{\theta})^2. \end{aligned}$$ ## Proof of According to the definition of $\delta^{\varepsilon}X_\theta$, we have $$\begin{aligned} \label{4.12} \left\{ \begin{aligned} d\delta^{\varepsilon}X_\theta(t) & =\bigg\{\frac{1}{\varepsilon}\Big ([b^\varepsilon_\theta(t)-\bar{\sigma}^\varepsilon_\theta(t)h^\varepsilon_\theta(t)] - [b_\theta(t)-\bar{\sigma}_\theta(t)h_\theta(t)] \Big )\\ % & \quad-\Big (\Big[\partial_xb_\theta(t)-\partial_x\bar{\sigma}_\theta(t)h_\theta(t)-\bar{\sigma}_\theta(t)\partial_xh_\theta(t)\Big]X^{1}_\theta(t)\\ & \qquad\qquad\qquad\qquad\qquad\quad+\Big[\partial_vb_\theta(t)-\partial_v\bar{\sigma}_\theta(t)h_\theta(t)\Big](v(t)-\widehat{v}(t))\Big )\bigg\}dt\\ % &\quad+\bigg\{\frac{1}{\varepsilon}\Big (\sigma^\varepsilon_\theta(t)-\sigma_\theta(t)\Big )-\Big[\partial_x\sigma_\theta(t)X^{1}_\theta(t) +\partial_v\sigma_\theta(t)(v(t)-\widehat{v}(t))\Big]\bigg\}dW(t)\\ % &\quad+\bigg\{\frac{1}{\varepsilon}\Big (\bar{\sigma}^\varepsilon_\theta(t)-\bar{\sigma}_\theta(t)\Big )- \Big[\partial_x\bar{\sigma}_\theta(t)X^{1}_\theta(t)+\partial_v\bar{\sigma}_\theta(t)(v(t)-\widehat{v}(t))\Big]\bigg\}dG(t)\\ % &\quad+\bigg\{\frac{1}{\varepsilon}\Big (\beta^\varepsilon_\theta(t)-\beta_\theta(t)\Big )- 
\Big[\partial_x\beta_\theta(t)X^{1}_\theta(t) +\partial_v\beta_\theta(t)(v(t)-\widehat{v}(t))\Big]\bigg\}\bullet dM(t),\ t\in[0,T],\\ \delta^\varepsilon X_\theta(0) & =0. \end{aligned} \right. \end{aligned}$$ For simplicity, we denote, for $\psi_\theta=b_\theta,\sigma_\theta,\bar{\sigma}_\theta,\beta_\theta,h_\theta$, $$\nonumber \begin{aligned} \partial_x\psi^\varepsilon_\theta(t) & =\int_0^1\partial_x\psi_\theta(t,\widehat{X}_\theta(t)+\lambda\varepsilon(X^1_\theta(t)+\delta^\varepsilon X_\theta(t)), \widehat{v}(t)+\lambda\varepsilon(v(t)-\widehat{v}(t)), \alpha(t-))d\lambda, \\ % A_{1,\theta}^\varepsilon(t): & =\Big[(\partial_xb^\varepsilon_\theta(t)-\partial_xb_\theta(t))+(\partial_x\bar{\sigma}^\varepsilon_\theta(t)-\partial_x\bar{\sigma}_\theta(t))h_\theta(t) +(\partial_xh^\varepsilon_\theta(t)-\partial_xh_\theta(t))\bar{\sigma}_\theta(t)\\ &\quad+\partial_x h_\theta^\varepsilon(t)(\bar{\sigma}_\theta^\varepsilon(t)-\bar{\sigma}_\theta(t))\Big] X^{1}_\theta(t)\\ % & \quad+\Big[(\partial_vb^\varepsilon_\theta(t)-\partial_vb_\theta(t))+(\partial_v\bar{\sigma}^\varepsilon_\theta(t)-\partial_v\bar{\sigma}_\theta(t)) h_\theta(t)\Big](v(t)-\widehat{v}(t)),\\ % A_{2,\theta}^\varepsilon(t): & =\Big[\partial_x\sigma^\varepsilon_\theta(t)-\partial_x\sigma_\theta(t)\Big] X^{1}_\theta(t) +\Big[\partial_v\sigma^\varepsilon_\theta(t)-\partial_v\sigma_\theta(t)\Big](v(t)-\widehat{v}(t)),\\ % A_{3,\theta}^\varepsilon(t): & =\Big[\partial_x\bar{\sigma}^\varepsilon_\theta(t)-\partial_x\bar{\sigma}_\theta(t)\Big] X^{1}_\theta(t) +\Big[\partial_v\bar{\sigma}^\varepsilon_\theta(t)-\partial_v\bar{\sigma}_\theta(t)\Big](v(t)-\widehat{v}(t)),\\ % A_{4,\theta,ij}^\varepsilon(t):& =\Big[\partial_x\beta^\varepsilon_{\theta,ij}(t)-\partial_x\beta_{\theta,ij}(t)\Big] X^{1}_\theta(t) +\Big[\partial_v\beta^\varepsilon_{\theta,ij}(t)-\partial_v\beta_{\theta,ij}(t)\Big](v(t)-\widehat{v}(t)).\\ \end{aligned}$$ ([\[4.12\]](#4.12){reference-type="ref" reference="4.12"}) can be written as the 
following linear SDE $$\left\{ \begin{aligned} d\delta^{\varepsilon}X_\theta(t) & =\Big[\Big (\partial_xb^\varepsilon_\theta(t)+\partial_x\bar{\sigma}^\varepsilon_\theta(t)h_\theta(t)+\partial_x h^\varepsilon_\theta(t)\bar{\sigma}_\theta(t)\\ & \quad+ \partial_x h_\theta^\varepsilon(t)(\bar{\sigma}_\theta^\varepsilon(t)-\sigma_\theta(t))\Big )\delta^{\varepsilon}X_\theta(t) +A_{1,\theta}^\varepsilon(t)\Big]dt\\ % &\quad+\Big[\partial_x\sigma^\varepsilon_\theta(t)\delta^{\varepsilon}X_\theta(t)+A_{2,\theta}^\varepsilon(t)\Big]dW(t) +\Big[\partial_x\bar{\sigma}^\varepsilon_\theta(t)\delta^{\varepsilon}X_\theta(t)+A_{3,\theta}^\varepsilon(t)\Big]dG(t)\\ % &\quad+\Big[\partial_x\beta^\varepsilon_\theta(t)\delta^{\varepsilon}X_\theta(t)+A_{4,\theta}^\varepsilon(t)\Big]\bullet dM(t),\\ \delta^{\varepsilon}X_\theta(0) & =0. \end{aligned} \right.$$ From and ([\[4.12-1\]](#4.12-1){reference-type="ref" reference="4.12-1"}), there exists a constant $C>0$ depending on $L,T,\sum\limits_{i,j=1,i\neq j}^I \lambda_{ij}$ such that $$\begin{aligned} \mathbb{E}\Big[\sup_{0\leqslant t\leqslant T}|\delta^{\varepsilon}X_\theta(t)|^4 \Big] & \leqslant C\mathbb{E}\bigg[\Big (\int_0^T|A_{1,\theta}^\varepsilon(t)|dt\Big )^4+\Big (\int_0^T|A_{2,\theta}^\varepsilon(t)|^2dt\Big )^2+\Big (\int_0^T|A_{3,\theta}^\varepsilon(t)|^2dt\Big )^2 \\ & \quad+\int_0^T\sum_{i,j=1,i\neq j}^I|A_{4,\theta,ij}^\varepsilon(t)|^4 \lambda_{ij}\mathbf{1}_{\{\alpha(t)=i\}}dt\bigg] \\ & \leqslant C\mathbb{E}\bigg[\int_0^T\Big (|A_{1,\theta}^\varepsilon(t)|^4+|A_{2,\theta}^\varepsilon(t)|^4+|A_{3,\theta}^\varepsilon(t)|^4 +\sum_{i,j=1,i\neq j}^I|A_{4,\theta,ij}^\varepsilon(t)|^4\lambda_{ij} \Big )dt\bigg] \\ & \leqslant C\mathbb{E}\bigg[\int_0^T(|v(t)|^4+|\widehat{v}(t)|^4) dt\bigg]. 
\end{aligned}$$ As for item $\mathrm{(ii)}$, it follows from the dominated convergence theorem that $$\begin{aligned} \nonumber \mathbb{E}\bigg[\int_0^T\Big (|A_{1,\theta}^\varepsilon(t)|^4+|A_{2,\theta}^\varepsilon(t)|^4+|A_{3,\theta}^\varepsilon(t)|^4 +\sum_{i,j=1,i\neq j}^I|A_{4,\theta,ij}^\varepsilon(t)|^4\lambda_{ij} \Big )dt\bigg]\rightarrow 0, \ \text{as}\ \varepsilon\rightarrow 0. \end{aligned}$$ Thereby we obtain $\mathrm{(ii)}$. We now focus on item $\mathrm{(iii)}$. From the definition of $\delta^{\varepsilon}R_\theta$, we know $$\label{4.17} \left\{ \begin{aligned} d\delta^{\varepsilon}R_\theta(t) & =\bigg\{\frac{1}{\varepsilon}\Big (R^\varepsilon_\theta(t)h^\varepsilon_\theta(t)-\widehat{R}_\theta(t)h_\theta(t)\Big )- \Big (R^{1}_\theta(t)h_\theta(t) +\widehat{R}_\theta(t)\partial_xh_\theta(t)X^{1}_\theta(t)\Big )\bigg\}dG(t),\\ \delta^{\varepsilon}R_\theta(0) & =0. \end{aligned} \right.$$ Denote $$\begin{aligned} A_{5,\theta}^\varepsilon(t): & =(\partial_xh^\varepsilon_\theta(t)-\partial_xh_\theta(t))\widehat{R}_\theta(t) X^{1}_\theta(t)-\partial_xh^\varepsilon_\theta(t)\widehat{R}_\theta(t)\delta^\varepsilon X_\theta(t) +R_\theta^{1}(t)(h^\varepsilon_\theta(t)-h_\theta(t)).
\end{aligned}$$ We can rewrite ([\[4.17\]](#4.17){reference-type="ref" reference="4.17"}) as $$d\delta^{\varepsilon}R_\theta(t)=\bigg\{h^\varepsilon_\theta(t)\delta^{\varepsilon}R_\theta(t)+A_{5,\theta}^\varepsilon(t)\bigg\}dG(t),\quad\delta^{\varepsilon}R_\theta(0)=0.$$ Thanks to , ([\[4.7-2\]](#4.7-2){reference-type="ref" reference="4.7-2"}), ([\[4.12-1\]](#4.12-1){reference-type="ref" reference="4.12-1"}) and item $\mathrm{(i)}$, it follows from Hölder's inequality that $$\begin{aligned} & \mathbb{E}\bigg[\sup_{0\leqslant t\leqslant T}|\delta^{\varepsilon}R_\theta(t)|^2 \bigg]\leqslant C\mathbb{E}\bigg[\int_0^T|A_{5,\theta}^\varepsilon(t)|^2dt\bigg]\\ & \leqslant C\mathbb{E}\bigg[\int_0^T(|\widehat{R}_\theta(t) X^{1}_\theta(t)|^2+|\widehat{R}_\theta(t)\delta^\varepsilon X_\theta(t)|^2+|R_\theta^{1}(t)|^2)dt\bigg]\\ & \leqslant C\bigg\{\mathbb{E}\bigg[\int_0^T(|v(t)|^4+|\bar{v}(t)|^4)dt\bigg]\bigg\}^\frac{1}{2}. \end{aligned}$$ Finally, item $\mathrm{(iv)}$ is a direct consequence of item $\mathrm{(ii)}$ and the dominated convergence theorem.

## Proof of

The proof is split into three steps.
*Step 1 ($X$-estimate).* Denote $\eta^1=X_\theta^1-X^1_{\bar{\theta}},\xi^1=Y_\theta^1-Y^1_{\bar{\theta}},\gamma^1=Z_\theta^1-Z^1_{\bar{\theta}}, \bar{\gamma}^1=\bar{Z}_\theta^1-\bar{Z}^1_{\bar{\theta}}, \zeta^1_{ij}=K^1_{\theta,ij}- K^1_{\bar{\theta},ij}$ and denote for $l=\sigma,\bar{\sigma},$ $$\begin{aligned} & E_b(t)=\bigg[\Big (\partial_xb_\theta(t)-\partial_x\bar{\sigma}_\theta(t)h_\theta(t)-\bar{\sigma}_\theta(t)\partial_xh_\theta(t)\Big ) -\Big (\partial_xb_{\bar{\theta}}(t)-\partial_x\bar{\sigma}_{\bar{\theta}}(t)h_{\bar{\theta}}(t) -\bar{\sigma}_{\bar{\theta}}(t)\partial_xh_{\bar{\theta}}(t)\Big )\bigg]X_\theta^1(t) \\ & \qquad+\bigg[(\partial_vb_\theta(t) -\partial_v\bar{\sigma}_\theta(t)h_\theta(t))-(\partial_vb_{\bar{\theta}}(t) -\partial_v\bar{\sigma}_{\bar{\theta}}(t)h_{\bar{\theta}}(t))\bigg](v(t)-\bar{v}(t)),\\ & E_l(t)=\bigg[ \partial_x l_\theta(t)-\partial_x l_{\bar{\theta}}(t) \bigg]X_\theta^1(t) +\bigg[ \partial_v l_\theta(t)-\partial_v l_{\bar{\theta}}(t) \bigg](v(t)-\bar{v}(t)),\\ & E_{\beta,ij}(t)=\bigg[ \partial_x\beta_{\theta,ij}(t)-\partial_x\beta_{\bar{\theta},ij}(t) \bigg]X_\theta^1(t) +\bigg[ \partial_v\beta_{\theta,ij}(t)-\partial_v\beta_{\bar{\theta},ij}(t) \bigg](v(t)-\bar{v}(t)). \end{aligned}$$ According to , we obtain $$\begin{aligned} & \mathbb{E}\Big[\sup_{t\in[0,T]}|X^1_\theta(t)-X^1_{\bar{\theta}}(t)|^4 \Big] \leqslant C\mathbb{E}\Big[\int_0^T\Big (|E_b(t)|^4+|E_\sigma(t)|^4+|E_{\bar{\sigma}}(t)|^4 +\sum_{i,j=1,i\neq j}^I|E_{\beta,ij}(t)|^4\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}} \Big )dt\Big].
\end{aligned}$$ Since $\partial_xb_\theta, \partial_x\sigma_\theta,\partial_x\bar{\sigma}_\theta,\partial_x\beta_{\theta,ij},\partial_x h_\theta, h_\theta$ and $\partial_vb_\theta,\partial_v\sigma_\theta,\partial_v\bar{\sigma}_\theta,\partial_v\beta_{\theta,ij}$ are Lipschitz with respect to $(x,\theta)$, and $h_\theta,\bar{\sigma}_\theta,\partial_x\bar{\sigma}_\theta$ are bounded by $L$, we have $\mathbb{E}\Big[\sup\limits_{t\in[0,T]}|X^1_\theta(t)-X^1_{\bar{\theta}}(t)|^4 \Big]\leqslant Ld(\theta,\bar{\theta})^4.$ *Step 2 ($R$-estimate).* Denote $F_R(t)=R^1_{\bar{\theta}}(t)(h_\theta(t)-h_{\bar{\theta}}(t))+\widehat{R}_\theta(t)\partial_xh_\theta(t)X_\theta^1(t) -\widehat{R}_{\bar{\theta}}(t)\partial_xh_{\bar{\theta}}(t)X_{\bar{\theta}}^1(t).$ According to [@Hu-Wang-20 Lemma 2.3], we obtain $\mathbb{E}\Big[\sup\limits_{t\in[0,T]}|R^1_\theta(t)-R^1_{\bar{\theta}}(t)|^2 \Big]\leqslant C\mathbb{E}\Big[\int_0^T|F_R(t)|^2dt\Big].$ Since $$\begin{aligned} &h_\theta(t)-h_{\bar{\theta}}(t)=h_\theta(t,\bar{X}_\theta(t),\alpha(t-))-h_{\bar{\theta}}(t,\bar{X}_{\bar{\theta}}(t),\alpha(t-) )\\ &=h_\theta(t,\bar{X}_\theta(t),\alpha(t-))-h_{\bar{\theta}}(t,\bar{X}_\theta(t),\alpha(t-) )+ h_{\bar{\theta}}(t,\bar{X}_\theta(t),\alpha(t-))-h_{\bar{\theta}}(t,\bar{X}_{\bar{\theta}}(t),\alpha(t-) ), \end{aligned}$$ and $$\begin{aligned} & \widehat{R}_\theta(t)\partial_xh_\theta(t)X_\theta^1(t)-\widehat{R}_{\bar{\theta}}(t)\partial_xh_{\bar{\theta}}(t)X_{\bar{\theta}}^1(t) \\ & =(\widehat{R}_\theta(t)- \widehat{R}_{\bar{\theta}}(t))\partial_xh_\theta(t)X_\theta^1(t)+\widehat{R}_{\bar{\theta}}(t)(\partial_xh_\theta(t) -\partial_xh_{\bar{\theta}}(t) )X_\theta^1(t) +\widehat{R}_{\bar{\theta}}(t)\partial_xh_{\bar{\theta}}(t)(X_\theta^1(t)-X_{\bar{\theta}}^1(t)), \end{aligned}$$ we have from the Lipschitz property of $h_\theta, \partial_x h_\theta$ in $(x,\theta)$ and the boundedness of $\partial_x h_\theta$ as well as , , , ([\[4.7-2\]](#4.7-2){reference-type="ref"
reference="4.7-2"}), ([\[4.12-1\]](#4.12-1){reference-type="ref" reference="4.12-1"}) that $\mathbb{E}\Big[\sup\limits_{t\in[0,T]}|R^1_\theta(t)-R^1_{\bar{\theta}}(t)|^2 \Big]\leqslant Cd(\theta,\bar{\theta})^2.$ *Step 3 ($Y$-estimate).* Denote for $l=x,y,z,\bar{z},k,$ $$\begin{aligned} I^{1,1}& = \partial_x\Phi(\widehat{X}_{\bar{\theta}}(T))\overline{\eta}(T),\ I^{1,2}=\Big (\partial_x\Phi_\theta(\widehat{X}_\theta(T))-\partial_x\Phi_{\bar{\theta}} (\widehat{X}_{\bar{\theta}}(T))\Big )\widehat{X}_\theta(T),\\ \widehat{\Pi}_{\theta}(t)& =(\widehat{X}_\theta(t),\widehat{Y}_\theta(t),\widehat{Z}_\theta(t),\widehat{\bar{Z}}_\theta(t),\sum_{i,j=1,i\neq j}^I\widehat{K}_{\theta,ij}(t)\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}}),\\ G_l(t) & = \partial_lf_{\bar{\theta}}(t,\widehat{\Pi}_{\theta}(t), \widehat{v}(t),\alpha(t-)),\\ H(t) & = \Big[\partial_vf_\theta(t,\widehat{\Pi}_{\theta}(t), \widehat{v}(t),\alpha(t-))-\partial_vf_{\bar{\theta}}(t,\widehat{\Pi}_{\theta}(t), \widehat{v}(t),\alpha(t-))\Big] (v(t)-\bar{v}(t))\\ & +\Big[ \partial_xf_\theta(t,\widehat{\Pi}_{\theta}(t), \widehat{v}(t),\alpha(t-))-G_x(t) \Big]X^1_\theta(t) +\Big[ \partial_yf_\theta(t,\widehat{\Pi}_{\theta}(t), \widehat{v}(t),\alpha(t-))-G_y(t) \Big]Y^1_\theta(t)\\ & +\Big[ \partial_zf_\theta(t,\widehat{\Pi}_{\theta}(t), \widehat{v}(t),\alpha(t-))-G_z(t) \Big]Z^1_\theta(t) +\Big[ \partial_{\bar{z}}f_\theta(t,\widehat{\Pi}_{\theta}(t), \widehat{v}(t),\alpha(t-))-G_{\bar{z}}(t) \Big]\bar{Z}^1_\theta(t)\\ & + \sum_{i,j=1,i\neq j}^I\Big[\partial_kf_\theta(t,\widehat{\Pi}_{\theta}(t), \widehat{v}(t),\alpha(t-))-G_k(t)\Big] K_{\theta,ij}^1(t)\lambda_{ij}(t)\mathbf{1}_{\{\alpha(t-)=i\}}.
\end{aligned}$$ Thanks to , we have $$\begin{aligned} & \mathbb{E}\Big[\sup_{t\in[0,T]}|\xi^1(t)|^2+\int_0^T(|\gamma^1(t)|^2+|\bar{\gamma}^1(t)|^2 +\sum_{i,j=1,i\neq j}^I|\zeta^1_{ij}(t)|^2\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}})dt \Big]\\ & \leqslant C\mathbb{E}\Big[|I^{1,1}|^2+|I^{1,2}|^2+\int_0^T(|G_x(t)\eta(t)|^2+|H(t)|^2)dt\Big]. \end{aligned}$$ On the one hand, allows us to show $\mathbb{E}\Big[|I^{1,1}|^2 +\int_0^T|G_x(t)\eta(t)|^2 dt\Big]\leqslant Cd(\theta, \bar{\theta})^2.$ On the other hand, in analogy to the estimate of $X$, one has $\mathbb{E}\Big[|I^{1,2}|^2+\int_0^T|H(t)|^2dt\Big]\leqslant Cd(\theta, \bar{\theta})^2.$ This finishes the proof.

## Proof of

First, notice that $(\delta^\varepsilon Y_\theta(t),\delta^\varepsilon Z_\theta(t), \delta^\varepsilon\bar{Z}_\theta(t),\delta^\varepsilon K_{\theta}(t))$ solves the following equation $$\label{4.22} \left\{ \begin{aligned} d \delta^\varepsilon Y_\theta(t) & =-\bigg\{ \frac{1}{\varepsilon}(f^\varepsilon_\theta(t)-f_\theta(t))-[\partial_xf_\theta(t)X_\theta^{1}(t)+ \partial_yf_\theta(t)Y_\theta^{1}(t) + \partial_zf_\theta(t)Z_\theta^{1}(t)\\ & \quad+ \partial_{\bar{z}}f_\theta(t)\bar{Z}_\theta^{1}(t) +\partial_kf_\theta(t)\sum_{i,j=1,i\neq j}^IK^{1}_{\theta,ij}(t) \lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}}+\langle \partial_v f_\theta(t), v(t)-\widehat{v}(t)\rangle]\bigg\}dt\\ &\quad+\delta^\varepsilon Z_\theta(t)dW(t)+\delta^\varepsilon\bar{Z}_\theta(t)dG(t)+\delta^\varepsilon K_{\theta}(t)\bullet dM(t),\quad t\in[0,T],\\ \delta^\varepsilon Y_\theta(T) & =\frac{1}{\varepsilon}(\Phi^\varepsilon_\theta(T)-\Phi_\theta(T))-\partial_x\Phi_\theta(\widehat X_\theta(T))X_\theta^{1}(T).
\end{aligned} \right.$$ Denote, for $\ell=x,y,z,\bar{z},k,v$, $$\begin{aligned} \Pi^\varepsilon_\theta(t) & =(\widehat{X}_\theta(t)+\lambda\varepsilon(\delta^\varepsilon X_\theta(t)+X^1_\theta(t)), \widehat{Y}_\theta(t)+\lambda\varepsilon(\delta^\varepsilon Y_\theta(t)+Y^1_\theta(t)), \widehat{Z}_\theta(t)+\lambda\varepsilon(\delta^\varepsilon Z_\theta(t)+Z^1_\theta(t)), \\ & \quad\, \widehat{\bar{Z}}_\theta(t)+\lambda\varepsilon(\delta^\varepsilon\overline{Z}_\theta(t)+\overline{Z}^1_\theta(t)), \sum_{i,j=1,i\neq j}^I\Big (\widehat{K}_{\theta,ij}(t)+\lambda\varepsilon(\delta^\varepsilon K_{\theta,ij}(t)+K^1_{\theta,ij}(t))\Big )\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}},\\ & \quad\widehat v(t)+\lambda\varepsilon(v(t)-\widehat v(t))),\\ \partial_\ell f^\varepsilon_\theta(t) & =\int_0^1\partial_\ell f_\theta(t,\Pi^\varepsilon_\theta(t), \alpha(t-))d\lambda, \end{aligned}$$ and $$\begin{aligned} B_{1,\theta}^\varepsilon(T) & =\int_0^1\partial_x\Phi_\theta(\widehat{X}_\theta(T) +\lambda\varepsilon(X^{1}_\theta(T)+\delta^\varepsilon X_\theta(T)))d\lambda\delta^\varepsilon X_\theta(T),\\ B_{2,\theta}^\varepsilon(T) & =\int_0^1\Big[\partial_x\Phi_\theta(\widehat{X}_\theta(T)+\lambda\varepsilon(X^{1}_\theta(T)+\delta^\varepsilon X_\theta(T)))-\partial_x\Phi_\theta(\widehat{X}_\theta(T))\Big]d\lambda\, X^{1}_\theta(T),\\ C_{1,\theta}^\varepsilon(t) & =[\partial_x f^\varepsilon_\theta(t)-\partial_x f_\theta(t)]X^1_\theta(t)+[\partial_y f^\varepsilon_\theta(t)-\partial_y f_\theta(t)]Y^1_\theta(t) +[\partial_z f^\varepsilon_\theta(t)-\partial_z f_\theta(t)]Z^1_\theta(t)\\ & \quad+[\partial_{\bar{z}} f^\varepsilon_\theta(t)-\partial_{\bar{z}} f_\theta(t)] \overline{Z}^1_\theta(t) +[\partial_k f^\varepsilon_\theta(t)-\partial_k f_\theta(t)] \sum_{i,j=1,i\neq j}^IK^1_{\theta,ij}(t)\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}}\\ & \quad+\langle\partial_v f^\varepsilon_\theta(t)-\partial_v f_\theta(t), v(t)-\widehat{v}(t)\rangle.
\end{aligned}$$ Hence, ([\[4.22\]](#4.22){reference-type="ref" reference="4.22"}) can be rewritten as the following linear BSDE with regime switching $$\left\{ \begin{aligned} d \delta^\varepsilon Y_\theta(t) & =-\bigg\{ \partial_xf^\varepsilon_\theta(t)\delta^\varepsilon X_\theta(t) +\partial_yf^\varepsilon_\theta(t)\delta^\varepsilon Y_\theta(t)+\partial_zf^\varepsilon_\theta(t)\delta^\varepsilon Z_\theta(t)\\ & \quad+\partial_{\bar{z}}f^\varepsilon_\theta(t)\delta^\varepsilon\bar{Z}_\theta(t)+ \partial_kf^\varepsilon_\theta(t)\sum_{i,j=1,i\neq j}^I\delta^\varepsilon K_{\theta,ij}(t) \lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}}+C_{1,\theta}^\varepsilon(t)\bigg\}dt\\ & \quad+\delta^\varepsilon Z_\theta(t)dW(t)+\delta^\varepsilon\bar{Z}_\theta(t)dG(t)+\delta^\varepsilon K_{\theta}(t)\bullet dM(t),\ t\in[0,T],\\ \delta^\varepsilon Y_\theta(T) & =B_{1,\theta}^\varepsilon(T)+B_{2,\theta}^\varepsilon(T). \end{aligned} \right.$$ From , item $\mathrm{(i)}$ of and ([\[4.12-1\]](#4.12-1){reference-type="ref" reference="4.12-1"}), ([\[4.21-1\]](#4.21-1){reference-type="ref" reference="4.21-1"}), item $\mathrm{(i)}$ of , there exists a constant $C>0$ depending on $L,T,\sum_{i,j=1,i\neq j}^I\lambda_{ij}$ such that $$\begin{aligned} & \mathbb{E}\bigg[\sup_{t\in[0,T]}|\delta^\varepsilon Y_\theta(t)|^2+\int_0^T(|\delta^\varepsilon Z_\theta(t)|^2+|\delta^\varepsilon\bar{Z}_\theta(t)|^2+\sum_{i,j=1,i\neq j}^I |\delta^\varepsilon K_{\theta,ij}(t)|^2\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}})dt\bigg]\\ & \leqslant C\mathbb{E}\bigg[|B_{1,\theta}^\varepsilon(T)+B_{2,\theta}^\varepsilon(T)|^2+ \int_0^T\Big (| \partial_xf^\varepsilon_\theta(t)\delta^\varepsilon X_\theta(t) |^2+ |C_{1,\theta}^\varepsilon(t)|^{2}\Big )dt\bigg]\\ & \leqslant C\mathbb{E}\bigg[|x|^4+\int_0^T(|v(t)|^4+|\bar{v}(t)|^4) dt\bigg]. \end{aligned}$$ The proof of item $\mathrm{(ii)}$ follows from the dominated convergence theorem.
## and its proof

Define $$\begin{aligned} \Pi_\theta(t,\omega):=\partial_vH_\theta\big(t,\widehat{X}_\theta(t),\widehat{Y}_\theta(t), \widehat{Z}_\theta(t),\widehat{\bar{Z}}_\theta(t),\widehat{K}_{\theta}(t),\widehat{v}(t),\alpha(t-), p^F_\theta(t),p^B_\theta(t),q^B_\theta(t), \overline{q}^B_\theta(t),k^B_{\theta}(t),\overline{q}^A_\theta(t)\big). \end{aligned}$$ **Lemma 20**. *Under and , the map $(\theta,t,\omega)\mapsto\mathbb{E}\Big[\Pi_\theta(t,\omega)|{\cal G}_t\Big]$ is a ${\cal G}$-progressively measurable process; in other words, for each $t\in[0,T]$, the function $\mathbb{E}\Big[\Pi_\theta(t,\omega)|{\cal G}_t\Big]: \Theta\times[0,t]\times\Omega\rightarrow\mathbb{R}$ is $\mathscr{B}(\Theta)\times \mathscr{B}([0,t])\times{\cal G}_t$-measurable.* *Proof.* Since $\Theta$ is a Polish space, for each $M>1$, there exists a compact subset $C^M\subset \Theta$ such that $\overline Q(\theta\notin C^M)\leqslant\frac{1}{M}$. Then we can find a finite family of open balls $\Big (B(\theta_l, \frac{1}{2M})\Big )_{l=1}^{L_M}$ such that $C^M\subset \bigcup_{l=1}^{L_M}\Big (B(\theta_l, \frac{1}{2M})\Big )$. From the local compactness of $\Theta$ and using partitions of unity, there exists a sequence of continuous functions $\kappa_l:\Theta\rightarrow\mathbb{R}$ with values in $[0,1]$ such that $$\kappa_l(\theta)=0,\ \text{if}\ \theta\notin B\Big (\theta_l, \frac{1}{2M}\Big ),\ l=1, \cdots, L_M,\ \text{and}\ \sum_{l=1}^{L_M}\kappa_l(\theta)=1,\ \text{if}\ \theta\in C^M.$$ We pick $\theta_l^*$ such that $\kappa_l(\theta_l^*)>0$ and define $$\Pi^M_\theta(t):=\sum_{l=1}^{L_M}\Pi_{\theta_l^*}(t)\kappa_l(\theta)\mathbf{1}_{\{\theta\in C^M\}}.$$ Notice that from ([\[4.7-2\]](#4.7-2){reference-type="ref" reference="4.7-2"}) and ([\[6.44\]](#6.44){reference-type="ref" reference="6.44"}), one has $$\begin{aligned} & \mathbb{E}^{\widehat v,\theta}\Big[\int_0^T |\Pi_\theta(t)|dt\Big]=\mathbb{E}\Big[\widehat R_{\theta}(T)\int_0^T |\Pi_\theta(t)|dt\Big]\leqslant L.
\end{aligned}$$ Hence, we have from Bayes's rule that $$\begin{aligned} &\int_\Theta\mathbb{E}^{\widehat v,\theta}\Big[\int_0^T\Big|\mathbb{E}^{\widehat v,\theta}[\Pi_\theta^M(t)-\Pi_\theta(t)|{\cal G}_t ]\Big|dt\Big]\overline Q(d\theta)\\ &\leqslant\int_\Theta\mathbb{E}^{\widehat v,\theta}\Big[\int_0^T\Big|\Pi_\theta^M(t)-\Pi_\theta(t)\Big|dt\Big]\overline Q(d\theta)\\ &\leqslant\int_\Theta\sum_{l=1}^{L_M}\mathbb{E}^{\widehat v,\theta}\bigg[\int_0^T |\Pi_\theta^M(t)-\Pi_\theta(t)|dt\bigg]\kappa_l(\theta) \mathbf{1}_{\{\theta\in C^M\}} +\mathbb{E}^{\widehat v,\theta}\bigg[\int_0^T|\Pi_\theta(t)|dt\bigg]\kappa_l(\theta) \mathbf{1}_{\{\theta\notin C^M\}} \overline Q(d\theta)\\ & \leqslant\sup_{d(\theta,\bar{\theta})\leqslant\frac{1}{M}}\mathbb{E}^{\widehat v,\theta}\bigg[\int_0^T |\Pi_{\bar{\theta}}(t)-\Pi_\theta(t)|dt\bigg] +L\overline Q(\theta\notin C^M) \\ & \leqslant\sup_{d(\theta,\bar{\theta})\leqslant\frac{1}{M}}\mathbb{E}\bigg[\widehat R_{\theta}(T)\cdot\int_0^T |\Pi_{\bar{\theta}}(t)-\Pi_\theta(t)|dt\bigg] +\frac{L}{M}. \end{aligned}$$ Here we use the fact that $\kappa_l(\theta)=0$ whenever $d(\theta,\theta_l)\geqslant\frac{1}{2M}$.
Next, we prove $$\lim_{M\rightarrow\infty} \sup_{d(\theta,\bar{\theta})\leqslant\frac{1}{M}}\mathbb{E}\bigg[\widehat R_{\theta}(T)\cdot \int_0^T |\Pi_{\bar{\theta}}(t)-\Pi_\theta(t)|dt\bigg]=0.$$ Recall that $$\begin{aligned} \Pi_\theta(t) & =\partial_vb_\theta(t,\widehat X_\theta(t),\widehat v(t), \alpha(t-))p_\theta^B(t)+\partial_v\sigma_\theta(t,\widehat X_\theta(t),\widehat v(t), \alpha(t-))q_\theta^B(t)\\ &\quad+\partial_v\bar{\sigma}_\theta(t,\widehat X_\theta(t),\widehat v(t), \alpha(t-))\bar{q}_\theta^B(t)\\ & \quad+\sum_{i,j=1,i\neq j}^I\partial_v\beta_{\theta,ij}(t,\widehat X_\theta(t),\widehat v(t), \alpha(t-))k_{\theta,ij}^B(t)\lambda_{ij}(t)\mathbf{1}_{\{\alpha(t-)=i\}}\\ &\quad-\partial_vf(t,\widehat X_\theta(t), \widehat Y_\theta(t), \widehat Z_\theta(t), \widehat{\bar{Z}}_\theta(t), \sum_{i,j=1,i\neq j}^I \widehat K_{\theta,ij}(t)\lambda_{ij}(t)\mathbf{1}_{\{\alpha(t-)=i\}},\alpha(t-))p_\theta^F(t). \end{aligned}$$ We only show $$\begin{aligned} & \lim_{M\rightarrow\infty}\sup_{d(\theta,\bar{\theta})\leqslant\frac{1}{M}}\mathbb{E}\bigg[\widehat R_{\theta}(T)\cdot \int_0^T \sum_{i,j=1,i\neq j}^I|\partial_v\beta_{\theta,ij}(t,\widehat X_\theta(t))k_{\theta,ij}^B(t)\\ &\qquad\qquad\qquad\qquad\qquad-\partial_v\beta_{\bar{\theta},ij}(t,\widehat X_{\bar{\theta}}(t))k_{\bar{\theta},ij}^B(t)|\lambda_{ij}(t)\mathbf{1}_{\{\alpha(t-)=i\}}dt\bigg]=0, \end{aligned}$$ since the other terms can be estimated similarly. Hereafter, we write $\partial_v\beta_{\theta,ij}(t,\widehat X_\theta(t))$ instead of $\partial_v\beta_{\theta,ij}(t,\widehat X_\theta(t),\widehat v(t),\alpha(t-))$ for short.
Since $$\begin{aligned} & |\partial_v\beta_{\theta,ij}(t,\widehat X_\theta(t))k_{\theta,ij}^B(t)-\partial_v\beta_{\bar{\theta},ij}(t,\widehat X_{\bar{\theta}}(t))k_{\bar{\theta},ij}^B(t)|\\ & \leqslant|\partial_v\beta_{\theta,ij}(t,\widehat X_\theta(t))k_{\theta,ij}^B(t)-\partial_v\beta_{\bar{\theta},ij}(t,\widehat X_\theta(t))k_{\theta,ij}^B(t)|\\ & +|\partial_v\beta_{\bar{\theta},ij}(t,\widehat X_\theta(t))k_{\theta,ij}^B(t)-\partial_v\beta_{\bar{\theta},ij}(t,\widehat X_{\bar{\theta}}(t))k_{\theta,ij}^B(t)|\\ & +|\partial_v\beta_{\bar{\theta},ij}(t,\widehat X_{\bar{\theta}}(t))k_{\theta,ij}^B(t)-\partial_v\beta_{\bar{\theta},ij}(t,\widehat X_{\bar{\theta}}(t))k_{\bar{\theta},ij}^B(t)|, \end{aligned}$$ from we have $$\begin{aligned} & |\partial_v\beta_{\theta,ij}(t,\widehat X_\theta(t))k_{\theta,ij}^B(t)-\partial_v\beta_{\bar{\theta},ij}(t,\widehat X_{\bar{\theta}}(t))k_{\bar{\theta},ij}^B(t)|\\ & \leqslant Ld(\theta,\bar{\theta})|k_{\theta,ij}^B(t)|+L|\widehat X_\theta(t)-\widehat X_{\bar{\theta}}(t)||k_{\theta,ij}^B(t)| +L|k_{\theta,ij}^B(t)-k_{\bar{\theta},ij}^B(t)|. 
\end{aligned}$$ Hence, according to ([\[5.5\]](#5.5){reference-type="ref" reference="5.5"}), ([\[4.7-2\]](#4.7-2){reference-type="ref" reference="4.7-2"}), ([\[6.44\]](#6.44){reference-type="ref" reference="6.44"}) and , it follows from the Cauchy-Schwarz and Hölder inequalities that $$\begin{aligned} & \mathbb{E}\bigg[\widehat R_{\theta}(T)\cdot\int_0^T \sum_{i,j=1,i\neq j}^I|\partial_v\beta_{\theta,ij}(t,\widehat X_\theta(t))k_{\theta,ij}^B(t) -\partial_v\beta_{\bar{\theta},ij}(t,\widehat X_{\bar{\theta}}(t))k_{\bar{\theta},ij}^B(t)|\lambda_{ij}\mathbf{1}_{\{\alpha(t-)=i\}}dt\bigg]\\ & \leqslant L\Big(\sum_{i,j=1,i\neq j}^I\lambda_{ij}\Big) \cdot\Bigg( \mathbb{E}\bigg[\int_0^T \sum_{i,j=1,i\neq j}^I|k_{\theta,ij}^B(t)|^2 \lambda_{ij}(t)\mathbf{1}_{\{\alpha(t-)=i\}}dt \bigg]^\frac{1}{2}\cdot \mathbb{E}\bigg[(\widehat R_{\theta}(T) )^2 \bigg]^\frac{1}{2}\cdot d(\theta,\bar{\theta})\\ & \quad+\mathbb{E}\bigg[\int_0^T \sum_{i,j=1,i\neq j}^I|k_{\theta,ij}^B(t)|^2 \lambda_{ij}(t)\mathbf{1}_{\{\alpha(t-)=i\}}dt \bigg]^\frac{1}{2}\cdot\mathbb{E}\bigg[(\widehat R_{\theta}(T))^4 \bigg]^\frac{1}{4}\cdot \mathbb{E}\bigg[\sup_{t\in[0,T]}|\widehat X_\theta(t)-\widehat X_{\bar{\theta}}(t)|^4\bigg]^\frac{1}{4}\\ & \quad+ \mathbb{E}\bigg[\int_0^T \sum_{i,j=1,i\neq j}^I|k_{\theta,ij}^B(t)-k_{\bar{\theta},ij}^B(t) |^2 \lambda_{ij}(t)\mathbf{1}_{\{\alpha(t-)=i\}}dt \bigg]^\frac{1}{2}\cdot \mathbb{E}\bigg[(\widehat R_{\theta}(T))^2 \bigg]^\frac{1}{2}\Bigg)\\ & \rightarrow 0,\ \text{as}\ d(\theta,\bar{\theta})\rightarrow 0. \end{aligned}$$ ◻

R. Buckdahn, J. Li and J. Ma, *A stochastic maximum principle for general mean-field systems*, Appl. Math. Optim., 74 (2016), 507--534.

B. Dong, T. Nie and Z. Wu, *Maximum principle for discrete-time stochastic control problem of mean-field type*, Automatica J. IFAC, 144 (2022), 110497.

C. Donnelly and A.J. Heunis, *Quadratic risk minimization in a regime-switching model with portfolio constraints*, SIAM J. Control Optim., 50 (2012), 2431--2461.

K. Du and Q. Meng, *A maximum principle for optimal control of stochastic evolution equations*, SIAM J. Control Optim., 51 (2013), 4343--4362.

M. Hu, S. Ji and R. Xu, *A global stochastic maximum principle for forward-backward stochastic control systems with quadratic generators*, SIAM J. Control Optim., 60(3) (2022), 1791--1818.

M. Hu, S. Ji and X. Xue, *A global stochastic maximum principle for fully coupled forward-backward stochastic systems*, SIAM J. Control Optim., 56 (2018), 4309--4335.

M. Hu and F. Wang, *Maximum principle for stochastic recursive utilities optimal control problem under model uncertainty*, SIAM J. Control Optim., 58(3) (2020), 1341--1370.

J. Li and Q. Wei, *$L^p$ estimates for fully coupled FBSDEs with jumps*, Stochastic Process. Appl., 124 (2014), 1582--1611.

Y. Li and H. Zheng, *Weak necessary and sufficient stochastic maximum principle for Markovian regime switching diffusion models*, Appl. Math. Optim., 71 (2015), 39--77.

H. Mei and J. Yong, *Equilibrium strategies for time-inconsistent stochastic switching systems*, ESAIM Control Optim. Calc. Var., 25 (2019), No. 64.

O. Menoukeu-Pamen, *Maximum principles of Markov regime-switching forward-backward stochastic differential equations with jumps and partial information*, J. Optim. Theory Appl., 175(2) (2017), 373--410.

S.L. Nguyen, G. Yin and D.T. Nguyen, *A general stochastic maximum principle for mean-field controls with regime switching*, Appl. Math. Optim., 84 (2021), 3295--3294.

B. Oksendal and A. Sulem, *Applied Stochastic Control of Jump Diffusions*, Springer, New York, 2005.

P. Pardoux and S. Peng, *Adapted solution of a backward stochastic differential equation*, Syst. Control Lett., 14 (1990), 55--61.

S. Peng, *A general stochastic maximum principle for optimal control problems*, SIAM J. Control Optim., 28 (1990), 966--979.

S. Peng, *Backward Stochastic Differential Equations, Nonlinear Expectations and Risk Measures*, Lectures in Chinese Summer School in Mathematics, Shandong University at Weihai, Shandong, China, 2004.

E. Rosazza-Gianin, *Risk measures via g-expectations*, Insurance Math. Econ., 39 (2006), 19--34.

J. Shi, G. Wang and J. Xiong, *Leader-follower stochastic differential game with asymmetric information and applications*, Automatica, 63 (2016), 60--73.

Q. Song, R.H. Stockbridge and C. Zhu, *On optimal harvesting problems in random environments*, SIAM J. Control Optim., 49 (2011), 859--889.

Z. Sun, I. Kemajou-Brown and O. Menoukeu-Pamen, *A risk-sensitive maximum principle for a Markov regime-switching jump-diffusion system and applications*, ESAIM Control Optim. Calc. Var., 24(3) (2018), 985--1013.

S. Tang, *The maximum principle for partially observed optimal control of stochastic differential equations*, SIAM J. Control Optim., 36 (1998), 1596--1617.

S. Tang and X. Li, *Necessary conditions for optimal control of stochastic systems with random jumps*, SIAM J. Control Optim., 32(5) (1994), 1447--1475.

G. Wang and Z. Wu, *The maximum principles for stochastic recursive optimal control problems under partial information*, IEEE T. Automat. Control, 54 (2009), 1230--1242.

G. Wang, Z. Wu and J. Xiong, *Maximum principle for forward-backward stochastic control systems with correlated state and observation noises*, SIAM J. Control Optim., 51(1) (2013), 491--524.

J. Wen, X. Li, J. Xiong and X. Zhang, *Stochastic linear-quadratic optimal control problems with random coefficients and Markovian regime switching system*, SIAM J. Control Optim., to appear, 2022.

Z. Wu, *A general maximum principle for optimal control of forward-backward stochastic systems*, Automatica J. IFAC, 49 (2013), 1473--1480.

J. Xiong and X.Y. Zhou, *Mean-variance portfolio selection under partial information*, SIAM J. Control Optim., 46 (2007), 156--175.

J. Yong and X. Zhou, *Stochastic Controls: Hamiltonian Systems and HJB Equations*, Springer, New York, 1999.

X. Zhang, R. Elliott and T. Siu, *A stochastic maximum principle for a Markov regime-switching jump-diffusion model and its application to finance*, SIAM J. Control Optim., 50(2) (2012), 964--990.

X. Zhang, Z. Sun and J. Xiong, *A general stochastic maximum principle for a Markov regime switching jump-diffusion model of mean-field type*, SIAM J. Control Optim., 56(4) (2018), 2563--2592.

S. Zhang, J. Xiong and X. Liu, *Stochastic maximum principle for partially observed forward-backward stochastic differential equations with jumps and regime switching*, Sci. China Inf. Sci., 61(7) (2018), 070211.

Q. Zhang and X.Y. Zhou, *Valuation of stock loans with regime switching*, SIAM J. Control Optim., 48 (2009), 1229--1250.

X.Y. Zhou and G. Yin, *Markowitz's mean-variance portfolio selection with regime switching: A continuous-time model*, SIAM J. Control Optim., 42 (2003), 1466--1482.

[^1]: School of Statistics and Mathematics, Shandong University of Finance and Economics, Jinan 250014, China (Email: `taohao@sdufe.edu.cn`). TH is partially supported by Natural Science Foundation of Shandong Province (Grant Nos. ZR2020MA032, ZR2022MA029) and National Natural Science Foundation of China (Grant Nos. 12171279, 72171133).

[^2]: Department of Mathematics, Southern University of Science and Technology, Shenzhen 518055, China (Email: `wenjq@sustech.edu.cn`). JW is supported by National Natural Science Foundation of China grant 12101291 and Guangdong Basic and Applied Basic Research Foundation grant 2022A1515012017.

[^3]: Department of Mathematics and SUSTech International Center for Mathematics, Southern University of Science and Technology, Shenzhen 518055, P. R. China (Email: `xiongj@sustc.edu.cn`). JX is supported by National Key R&D Program of China 2022YFA1006102 and National Natural Science Foundation of China grant 11831010.
arxiv_math
{ "id": "2309.10454", "title": "Maximum principle for a Markovian regime switching system under model\n uncertainty", "authors": "Tao Hao, Jiaqiang Wen, Jie Xiong", "categories": "math.OC", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | Let $\overline{M}$ be a compact smoothly stratified pseudo-manifold endowed with a wedge metric $g$. Let $\overline{M}_\Gamma$ be a Galois $\Gamma$-covering. Under additional assumptions on $\overline{M}$, satisfied for example by Witt pseudo-manifolds, we show that the $L^2$-Betti numbers and the Novikov-Shubin invariants are well defined. We then establish their invariance under a smoothly stratified, strongly stratum preserving homotopy equivalence, thus extending results of Dodziuk, Gromov and Shubin to these pseudo-manifolds. address: - Sapienza University, Rome, Italy - Sapienza University, Rome, Italy - Universität Oldenburg, Germany author: - Francesco Bei - Paolo Piazza - Boris Vertman title: Stability of $L^2$-invariants on stratified spaces ---

# Introduction and statement of the main results

## Historical overview {#history-subsection}

Let $(M,g)$ be a compact manifold with an infinite fundamental group. In his seminal paper [@Ati] Atiyah introduced $L^2$-Betti numbers associated to the universal covering of $M$, and conjectured their stability under homotopy equivalences. The conjecture was established shortly after the appearance of Atiyah's paper by Dodziuk in [@Dodziuk]. Later, Novikov and Shubin [@Novikov-Shubin1; @Novikov-Shubin2] introduced new invariants associated to a Galois $\Gamma$-covering $M_\Gamma$ of $M$, with $\Gamma$ a finitely generated discrete group; these invariants measure the density of the continuous spectrum of the differential form Laplacian on $M_\Gamma$ and are nowadays referred to as the Novikov-Shubin invariants. Their stability under homotopy equivalence was proved by Gromov and Shubin in [@Gromov-Shubin; @Gromov-Shubin-err].
*The goal of this article is to address the same stability problems but for singular spaces, namely compact smoothly stratified (Thom-Mather) pseudo-manifolds.*

## Statement of the main results

Consider a compact smoothly stratified (Thom-Mather) pseudo-manifold $\overline{M}$ with an iterated wedge metric $g$ in its open interior $M \subset \overline{M}$. Simply put, the singular neighborhoods of $\overline{M}$ can locally be viewed as fibrations of cones, as in Figure [1](#figure1){reference-type="ref" reference="figure1"}, where their cross sections may be singular again. ![A cone bundle over $B$.](edge.pdf){#figure1} Let $p: \overline{M}_\Gamma \to \overline{M}$ be a Galois $\Gamma$-covering with $\Gamma$ the group of deck transformations. It is again a stratified pseudo-manifold and the metric $g$ lifts to an iterated wedge metric $g_\Gamma$ in the interior $M_\Gamma$ of $\overline{M}_\Gamma$. The corresponding minimal and maximal Hilbert complexes $(\mathscr{D}_{\min}(M_\Gamma), d_\Gamma)$ and $(\mathscr{D}_{\max}(M_\Gamma), d_\Gamma)$ arise from the de Rham complex of compactly supported differential forms on $M_\Gamma$ by choosing minimal and maximal closed extensions of the differential in $L^2$, respectively. Note that in general $$(\mathscr{D}_{\min}(M_\Gamma), d_\Gamma) \neq (\mathscr{D}_{\max}(M_\Gamma), d_\Gamma).$$ The $L^2$-Betti numbers are defined as von Neumann $\Gamma$-dimensions of the corresponding reduced $L^2$-cohomologies $$\begin{split} b^*_{(2),\min}(\overline{M}_\Gamma) := \dim_\Gamma \overline{H}^*_{(2)}(\mathscr{D}_{\min}(M_\Gamma), d_\Gamma), \\ b^*_{(2),\max}(\overline{M}_\Gamma) := \dim_\Gamma \overline{H}^*_{(2)}(\mathscr{D}_{\max}(M_\Gamma), d_\Gamma).
\end{split}$$ The Novikov-Shubin invariants $\alpha_{*,\min}(\overline{M}_\Gamma)$ and $\alpha_{*,\max}(\overline{M}_\Gamma)$ are in turn defined in terms of the Hodge Laplacians associated to the minimal and maximal Hilbert complexes $(\mathscr{D}_{\min}(M_\Gamma), d)$ and $(\mathscr{D}_{\max}(M_\Gamma), d)$, respectively. These invariants will be introduced explicitly in §[3](#finiteness-section){reference-type="ref" reference="finiteness-section"} below. We can now state our first main theorem. **Theorem 1**. *Let $p: \overline{M}_\Gamma \to \overline{M}$ be a Galois $\Gamma$-covering of a compact smoothly stratified pseudo-manifold $\overline{M}$ endowed with an iterated wedge metric. If Assumption [Assumption 9](#fundamental){reference-type="ref" reference="fundamental"} holds true, we have the following properties:* 1. *The $L^2$-Betti numbers $b^*_{(2),\min}(\overline{M}_\Gamma)$ and $b^*_{(2),\max}(\overline{M}_\Gamma)$ are finite.* 2. *The Novikov-Shubin invariants $\alpha_{*,\min}(\overline{M}_\Gamma)$ and $\alpha_{*,\max}(\overline{M}_\Gamma)$ are well-defined.* Assumption [Assumption 9](#fundamental){reference-type="ref" reference="fundamental"} is a technical assumption on the resolvent $(i+D_{{\rm abs/rel}})^{-1}$ on $(M,g)$, with $D_{{\rm abs/rel}}$ being the absolute and relative self-adjoint extensions $$D_{\textup{rel}}:= d_{\min} + d^*_{\min}, \quad \text{and} \quad D_{\textup{abs}}:= d_{\max} + d^*_{\max}\,.$$ It is satisfied on stratified pseudo-manifolds with isolated singularities and on Witt spaces of arbitrary depth, as shown in Proposition [Proposition 10](#witt-examples){reference-type="ref" reference="witt-examples"} below. In order to state our second main result we pause for a moment and explain in detail the result of Gromov and Shubin [@Gromov-Shubin]. Let $X$ and $Y$ be connected smooth compact manifolds without boundary and let $f:X\to Y$ be a homotopy equivalence.
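For orientation, we recall one standard convention (familiar from the closed-manifold setting) by which a Novikov-Shubin invariant is read off from the spectral density function of the Hodge Laplacian; the precise definitions used in this paper, adapted to the minimal and maximal Hilbert complexes, are the ones given in §[3](#finiteness-section){reference-type="ref" reference="finiteness-section"}.

```latex
% Spectral density function of the Hodge Laplacian \Delta_* on the covering,
% measured in von Neumann \Gamma-dimension:
%   N_*(\lambda) := \dim_\Gamma \operatorname{ran} E_{\Delta_*}\big([0,\lambda]\big),
% where E_{\Delta_*} denotes the spectral projection. One common convention sets
\alpha_* := \liminf_{\lambda \to 0^+}
    \frac{\log\bigl(N_*(\lambda) - N_*(0)\bigr)}{\log \lambda}\,,
% provided N_*(\lambda) > N_*(0) for all \lambda > 0 (and \alpha_* := \infty^+
% otherwise); here N_*(0) recovers the corresponding L^2-Betti number.
```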
We now consider two connected Galois $\Gamma$-coverings $X_\Gamma\xrightarrow{p} X$ and $Y_\Gamma\xrightarrow{q} Y$; once we fix base points $x\in X$, $\tilde{x}\in p^{-1}(x)$, $f(x)\in Y$ and $\tilde{y}\in q^{-1}(f(x))$, we have well-defined surjective homomorphisms $$\pi_1 (X,x)\xrightarrow{j_X} \Gamma\,,\quad \pi_1 (Y,f(x))\xrightarrow{j_Y} \Gamma.$$ We shall explain all this later in the paper. Gromov and Shubin require the compatibility condition $j_X=j_Y\circ f_*$, expressed by the commutativity of the corresponding diagram. We shall weaken this assumption and require only that $$\label{comp} f_* ({\rm Ker} (j_X))={\rm Ker} (j_Y)\,.$$ This is condition [\[compatibility\]](#compatibility){reference-type="eqref" reference="compatibility"} appearing in the statement of our next result. In order to state it, we need to introduce one final ingredient: a smoothly stratified, strongly stratum preserving homotopy equivalence between compact smoothly stratified pseudo-manifolds. This is a homotopy equivalence that maps singular strata onto singular strata and preserves codimension. We will be more precise in §[5](#stability-section){reference-type="ref" reference="stability-section"} below. Our second main result now reads as follows. **Theorem 2**. *Let $\overline{M}$ and $\overline{N}$ be two compact smoothly stratified pseudo-manifolds which satisfy Assumption [Assumption 9](#fundamental){reference-type="ref" reference="fundamental"}; assume that there exist two Galois $\Gamma$-coverings $\overline{M}_{\Gamma}$, $\overline{N}_{\Gamma}$ and a smoothly stratified, strongly stratum preserving homotopy equivalence $f: \overline{M} \to \overline{N}$ satisfying condition [\[comp\]](#comp){reference-type="eqref" reference="comp"} (see [\[compatibility\]](#compatibility){reference-type="eqref" reference="compatibility"} for precise notation).
Then $$\begin{split} b^*_{(2),\min / \max}(\overline{M}_{\Gamma}) &= b^*_{(2),\min / \max}(\overline{N}_{\Gamma}), \\ \alpha_{*,\min / \max}(\overline{M}_{\Gamma}) &= \alpha_{*,\min / \max}(\overline{N}_{\Gamma}). \end{split}$$* **Corollary 3**. *Let $\overline{M}$ and $\overline{N}$ be two compact smoothly stratified pseudo-manifolds which satisfy Assumption [Assumption 9](#fundamental){reference-type="ref" reference="fundamental"}; assume that there exists a smoothly stratified, strongly stratum preserving homotopy equivalence $f: \overline{M} \to \overline{N}$ which satisfies the condition [\[compatibility\]](#compatibility){reference-type="eqref" reference="compatibility"}. Then $$\begin{split} b^*_{(2),\min / \max}(\overline{M}) &= b^*_{(2),\min / \max}(\overline{N}), \\ \alpha_{*,\min / \max}(\overline{M}) &= \alpha_{*,\min / \max}(\overline{N}) \end{split}$$ where $b^*_{(2),\min / \max}(\overline{M})$, $b^*_{(2),\min / \max}(\overline{N})$, $\alpha_{*,\min / \max}(\overline{M})$ and $\alpha_{*,\min / \max}(\overline{N})$ denote the minimal/maximal $L^2$-Betti numbers and Novikov-Shubin invariants of $\overline{M}$ and $\overline{N}$ computed with respect to the corresponding universal coverings.* As anticipated in §[1.1](#history-subsection){reference-type="ref" reference="history-subsection"}, this theorem extends the result of Dodziuk [@Dodziuk] for $L^2$-Betti numbers and of Gromov and Shubin [@Gromov-Shubin] for Novikov-Shubin invariants to the setting of smoothly stratified Thom-Mather pseudo-manifolds. ## Background and notation This paper builds strongly upon the classical notions of stratified spaces, their Galois coverings and the corresponding von Neumann theory. A careful introduction of these concepts is beyond the scope of the present note and is presented in full detail in other references. We list the main objects of interest, their notation, and precise references where these objects are introduced. 1.
**stratified space $\overline{M}$, its open interior $M$ and resolution $\widetilde{M}$.** A compact smoothly stratified (Thom-Mather) pseudo-manifold $\overline{M}$ with open interior $M \subset \overline{M}$ is defined, for example, in [@ALMP3 Definition 2.1]. The precise definition is rather involved, due to additional Thom-Mather conditions, see [@Mather], which guarantee that such $\overline{M}$ can be resolved into a compact manifold with corners and boundary fibration structure, see [@package Proposition 2.5] and [@package Definition 2]. Further references are e.g. [@Brasselet], [@Verona], see also [@ALMP2; @Alb]. 2. **iterated incomplete wedge metric $g$ and complete edge metric $\rho_M^{-2}g$.** An (iterated incomplete) wedge metric $g$ on the open interior $M$ is defined in [@package Definition 5] (note that there such a metric is called 'iterated edge'). If $\rho_M$ is a total boundary function on $\widetilde{M}$, i.e. a smooth function that vanishes to first order at all boundary faces, then a complete edge metric is defined by $\rho_M^{-2}g$ as in [@package (3.2)]. 3. **Galois covering $\overline{M}_\Gamma$ and von Neumann theory.** A Galois covering $p: \overline{M}_{\Gamma} \to \overline{M}$ with Galois group $\Gamma$ is again a smoothly stratified (Thom-Mather) pseudo-manifold with the smooth open and dense stratum $M_\Gamma$ being a Galois covering of $M$. Any singular stratum $Y_\Gamma$ of $\overline{M}_{\Gamma}$ is a Galois covering of a singular stratum $Y$ in $\overline{M}$ of the same depth. The lift $g_\Gamma$ of the wedge metric $g$ defines a $\Gamma$-invariant wedge metric on $M_\Gamma$. We refer to e.g. [@PiVe1 §7.1] and [@PiVe2 §2.4] for more details. 4.
**edge and wedge tangent bundles ${}^{e}TM, {}^{e}TM_\Gamma$ and ${}^{w}TM, {}^{w}TM_\Gamma$.** The edge tangent bundle ${}^{e}TM$ is a vector bundle over $\widetilde{M}$, defined in [@package §4.1] (note that there a different notation ${}^{ie}TM$ is used, with the additional superscript 'i' standing for 'iterated'). An efficient definition of the wedge tangent bundle ${}^{w}TM$ is obtained by specifying the smooth sections (of its dual bundle) as $$C^\infty(\widetilde{M}, {}^{w}T^*M) := \rho_M \, C^\infty(\widetilde{M}, {}^{e}T^*M).$$ One defines ${}^{e}TM_\Gamma$ and ${}^{w}TM_\Gamma$ in the same way. Consider the $L^2$-completions $$\begin{split} & L^2\Omega^*(M,g) :=L^2(M, \Lambda^* ({}^{w}T^*M),g), \\ & L^2\Omega^*(M_{\Gamma},g_{\Gamma}):=L^2(M_\Gamma, \Lambda^* ({}^{w}T^*M_\Gamma),g_\Gamma). \end{split}$$ We shall often simply abbreviate the spaces as $L^2\Omega^*(M)$ and $L^2\Omega^*(M_{\Gamma})$, respectively, without keeping track of the metrics. 5. **Minimal and maximal complexes $(\mathscr{D}_{\min}(M_\Gamma), d_\Gamma)$ and $(\mathscr{D}_{\max}(M_\Gamma), d_\Gamma)$.** The notion of Hilbert complexes has been carefully introduced in [@Hilbert]. The minimal and maximal Hilbert complexes are defined e.g. in [@Hilbert Lemma 3.1]. They are defined by the minimal and maximal domains $\mathscr{D}_{\min / \max}(M_\Gamma):= \mathscr{D}(d_{\Gamma, \, \min / \max})$ in $L^2\Omega^*(M_{\Gamma},g_{\Gamma})$ of the de Rham differential on $M_\Gamma$. *Acknowledgements.* The authors would like to acknowledge interesting discussions with Pierre Albin and Matthias Lesch. The third author thanks Sapienza University for hospitality. This work was supported by the \"National Group for the Algebraic and Geometric Structures and their Applications\" (GNSAGA-INDAM).
# Reformulating the problem using Mishchenko bundles {#twist-section} Below, it will become necessary to use an equivalent description of the Hilbert complexes $(\mathscr{D}_{\min}(M_\Gamma), d_\Gamma), (\mathscr{D}_{\max}(M_\Gamma), d_\Gamma)$, and the Hilbert space $L^2\Omega^*(M_{\Gamma},g_{\Gamma})$, working only on the base $\overline{M}$ and using the von Neumann algebra framework. We shall be brief here and refer to [@Lueck-book §1 and §2] and [@Schick] for details. Consider the complex group ring $\mathbb{C}\Gamma$ of complex-valued maps $f: \Gamma \to \mathbb{C}$ with finite support and the Hilbert space of square-summable sequences $$\ell^2\Gamma := \Bigl\{f:\Gamma \to \mathbb{C}\ \bigl| \ \sum_{\gamma \in \Gamma} \bigl|f(\gamma)\bigr|^2 < \infty\Bigr\},$$ which can alternatively be viewed as the Hilbert space completion of $\mathbb{C}\Gamma$. Recall that the von Neumann algebra $\mathscr{N}\Gamma$ is defined as the space of bounded $\Gamma$-equivariant linear operators on the Hilbert space $\ell^2\Gamma$. We recall the notion of $\mathscr{N}\Gamma$-Hilbert modules here; see [@Lueck-book §1 and §2] and [@Gromov-Shubin §4] for more details. **Definition 4**. *A $\mathscr{N}\Gamma$-Hilbert module $H$ is a Hilbert space together with a linear isometric $\Gamma$-action such that there exists a Hilbert space $\mathcal{H}$ and an isometric linear $\Gamma$-embedding of $H$ into the tensor product of Hilbert spaces $\mathcal{H}\otimes \ell^2 \Gamma$ equipped with the obvious $\Gamma$-action.* We now define the so-called Mishchenko bundles $$\label{mi-bundles} \begin{split} \mathscr{E} &:= \overline{M}_\Gamma\times_\Gamma \ell^2\Gamma \;\longrightarrow \overline{M},\\ \mathscr{E}_c &:= \overline{M}_\Gamma\times_\Gamma \mathbb{C}\Gamma \;\longrightarrow \overline{M}. \end{split}$$ The fibres of both bundles are modules, the first one for the natural $\mathscr{N}\Gamma$-action and the second one for the natural $\mathbb{C}\Gamma$-action.
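For orientation, the simplest nontrivial group makes the von Neumann objects above completely explicit; the following standard example is an illustration only and is not used in the sequel.

```latex
% Standard example: Gamma = Z. The Fourier transform gives unitary
% identifications
%   ell^2(Z)  ~=  L^2(S^1),    N(Z)  ~=  L^infty(S^1),
% with L^infty(S^1) acting by pointwise multiplication. For a
% measurable subset A of S^1, the Z-invariant subspace
% chi_A . L^2(S^1) has von Neumann dimension
\[
  \dim_{\mathbb{Z}}\bigl(\chi_A \cdot L^2(S^1)\bigr)
  \;=\; \operatorname{tr}_{\mathbb{Z}}(\chi_A)
  \;=\; \int_{S^1} \chi_A \,\frac{d\theta}{2\pi},
\]
% the normalised measure of A. In particular, Gamma-dimensions, and
% hence L^2-Betti numbers, need not be integers.
```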
In fact, the bundle $\mathscr{E}$ is a bundle of $\mathscr{N}\Gamma$-Hilbert modules in the sense of [@Schick Definition 2.10] and the same is true after twisting $\mathscr{E}$ with the vector bundle $\Lambda^* ({}^{w}T^*M)$. The construction defines natural isomorphisms of vector spaces $$\label{phi-isomorphism} \begin{split} \Phi_c: \, &C^\infty_c (M_\Gamma, \Lambda^* ({}^{w}T^*M_\Gamma)) \to C^\infty_c (M, \Lambda^* ({}^{w}T^*M)\otimes \mathscr{E}_c), \\ \Phi' :\, & C^\infty_{(2)} (M_\Gamma, \Lambda^* ({}^{w}T^*M_\Gamma)) \to C^\infty(M, \Lambda^* ({}^{w}T^*M)\otimes \mathscr{E}), \end{split}$$ with $C^\infty_{(2)} (M_\Gamma, \Lambda^* ({}^{w}T^*M_\Gamma))$ defined as $$\Bigl\{s\in C^\infty (M_\Gamma, \Lambda^* ({}^{w}T^*M_\Gamma)) : \forall_{x \in M_\Gamma}\sum_\gamma |s(\gamma \cdot x)|^2 < \infty \Bigr\},$$ and with $\Phi_c$ and $\Phi'$ written down explicitly in [@Schick §7.5, (1)]. The lower index $c$ indicates compact support. We shall use the simplified notation $$\begin{aligned} &\Omega^*_{c}(M,\mathscr{E}_c) := C^\infty_c (M, \Lambda^* ({}^{w}T^*M)\otimes \mathscr{E}_c), \quad &&\Omega^* (M,\mathscr{E}) := C^\infty (M, \Lambda^* ({}^{w}T^*M)\otimes \mathscr{E}), \\ &\Omega^*_{c}(M_\Gamma) := C^\infty_c (M_\Gamma, \Lambda^* ({}^{w}T^*M_\Gamma)).\end{aligned}$$ We can define a metric $g_{\mathscr{E}}$ on $\Lambda^* ({}^{w}T^*M)\otimes \mathscr{E}$, using the metric $g$ together with the $\ell^2\Gamma$ inner product along the fibers. Noting in the first equality below that $\ell^2\Gamma$ is the Hilbert space completion of $\mathbb{C}\Gamma$, we have $$\begin{split} L^2 (M, \Lambda^* ({}^{w}T^*M)\otimes \mathscr{E}, g_{\mathscr{E}}) &= \overline{\Omega^*_c (M, \mathscr{E}_c)}^{\ g_{\mathscr{E}}} \\ L^2 (M_\Gamma, \Lambda^* ({}^{w}T^*M_\Gamma), g_\Gamma) &= \overline{\Omega^*_c (M_\Gamma)}^{\ g_{\Gamma}} \end{split}$$ where the closures are taken with respect to the corresponding $L^2$ inner products defined by the metrics indicated in the superscripts.
We shall also use the following simplified notation $$L^2 \Omega^* (M, \mathscr{E}) := L^2 (M, \Lambda^* ({}^{w}T^*M)\otimes \mathscr{E}, g_{\mathscr{E}}),$$ together with the one that we have already introduced $$L^2 \Omega^* (M_\Gamma) := L^2(M_\Gamma, \Lambda^* ({}^{w}T^*M_\Gamma),g_\Gamma).$$ In fact, [@Schick Lemma 7.10] proves that these two spaces are $\mathscr{N}\Gamma$-Hilbert modules in the sense of [@Schick Definition 2.1], and as explained in [@Schick §7.5, (2)], the isomorphism $\Phi$ extends to an isometry of $\mathscr{N}\Gamma$-Hilbert modules $$\label{phi-isometry} \Phi: L^2 \Omega^*(M_\Gamma)\to L^2 \Omega^* (M, \mathscr{E})$$ We now study the Hilbert complexes $(\mathscr{D}_{\min}(M_\Gamma), d_\Gamma)$ and $(\mathscr{D}_{\max}(M_\Gamma), d_\Gamma)$ under the isometric identification $\Phi$. There is a canonical connection on $\mathscr{E}$ and its subbundle $\mathscr{E}_c$, over the interior $M$, induced by the exterior differential on $M_\Gamma$. We refer to [@Schick §3] for more details on connections in the von Neumann setting. Hence, there is a natural (twisted) de Rham differential $d_{\mathscr{E}}$ on the twisted differential forms $\Omega^*_{c}(M,\mathscr{E}_c)$. In fact, $d_{\mathscr{E}}$ and the de Rham differential $d_\Gamma$ on $M_\Gamma$ act on larger domains of smooth forms $$\label{smooth-max} \begin{split} \Omega^{*}_{\max}(M,\mathscr{E}) &:= \Bigl\{ \omega \in \Omega^{*}(M,\mathscr{E})\cap L^2 \Omega^* (M, \mathscr{E}) \mid d_{\mathscr{E}} \omega \in \Omega^{*}(M,\mathscr{E})\cap L^2 \Omega^* (M, \mathscr{E}) \Bigr\}, \\ \Omega^{*}_{\max} (M_\Gamma) &:= \Bigl\{ \omega \in \Omega^* (M_\Gamma)\cap L^2 \Omega^* (M_\Gamma) \mid d_\Gamma \omega \in \Omega^* (M_\Gamma)\cap L^2 \Omega^* (M_\Gamma) \Bigr\}. 
\end{split}$$ Recalling the classical definitions of $(\mathscr{D}_{\min}(M_\Gamma), d_\Gamma)$ and $(\mathscr{D}_{\max}(M_\Gamma), d_\Gamma)$, we can define the corresponding minimal and maximal Hilbert complexes in $L^2\Omega^*(M, \mathscr{E})$ in the same way. Define the graph norms $$\begin{split} \| \omega \|_{\mathscr{E}} &:= \| \omega \|_{L^2\Omega^*(M, \mathscr{E})} + \| d_{\mathscr{E}} \omega \|_{L^2\Omega^*(M, \mathscr{E})}, \quad \omega \in \Omega^*_{\max}(M,\mathscr{E}), \\ \| \omega \|_\Gamma &:= \| \omega \|_{L^2\Omega^*(M_\Gamma)} + \| d_\Gamma \omega \|_{L^2\Omega^*(M_\Gamma)}, \quad \omega \in \Omega^*_{\max}(M_\Gamma). \end{split}$$ **Definition 5**. *Consider the graph norms $\| \cdot \|_{\mathscr{E}}$ and $\| \cdot \|_\Gamma$. Then the minimal and maximal Hilbert complexes in $L^2\Omega^*(M, \mathscr{E})$ and $L^2\Omega^*(M_\Gamma)$ are defined as follows.* 1. *$(\mathscr{D}_{\min}(M, \mathscr{E}), d_{\mathscr{E}})$ and $(\mathscr{D}_{\max}(M, \mathscr{E}), d_{\mathscr{E}})$ are defined by $$\mathscr{D}_{\min}(M, \mathscr{E}) := \overline{\Omega^*_c(M,\mathscr{E}_c)}^{\,\| \cdot \|_{\mathscr{E}}}, \quad \mathscr{D}_{\max}(M, \mathscr{E}) := \overline{\Omega^*_{ \max}(M,\mathscr{E})}^{\, \| \cdot \|_{\mathscr{E}}}.$$* 2. *$(\mathscr{D}_{\min}(M_\Gamma), d_\Gamma)$ and $(\mathscr{D}_{\max}(M_\Gamma), d_\Gamma)$ are defined by $$\label{domains-smooth} \mathscr{D}_{\min}(M_\Gamma) := \overline{\Omega_c^*(M_\Gamma)}^{\, \| \cdot \|_{\Gamma}}, \quad \mathscr{D}_{\max}(M_\Gamma) := \overline{\Omega^*_{\max}(M_\Gamma)}^{\, \| \cdot \|_{\Gamma}}.$$* The definition of the maximal domain in [\[domains-smooth\]](#domains-smooth){reference-type="eqref" reference="domains-smooth"} is equivalent to the classical distributional definition, see [@Hilbert (2.31)]. Notice that while the von Neumann theory is carefully developed in e.g.
[@Lueck-book] and [@Schick], the definition of the minimal domain in [\[domains-smooth\]](#domains-smooth){reference-type="eqref" reference="domains-smooth"} is only implicit in the literature. Recall now the notion of $\mathscr{N}\Gamma$-Hilbert complexes and isomorphisms between them: **Definition 6**. *Let $\Gamma$ be a finitely generated discrete group.* 1. *A Hilbert complex $(\mathscr{D}(d_*),d_*), \mathscr{D}(d_*) \subset H_*,$ is an $\mathscr{N}\Gamma$-Hilbert complex, if each Hilbert space $H_*$ is a $\mathscr{N}\Gamma$-Hilbert module and each differential $d_*$ is a closed and densely defined operator commuting with the action of $\Gamma$ and satisfying $d_{*+1}\circ d_*\equiv 0$ on the domain of $d_*$.* 2. *A morphism between $\mathscr{N}\Gamma$-Hilbert complexes $H_\bullet:=(\mathscr{D}(d),d)$ with $\mathscr{D}(d_*) \subset H_*$ and $K_\bullet:=(\mathscr{D}(d'),d')$ with $\mathscr{D}(d'_*) \subset K_*,$ is defined as a sequence of bounded linear operators $f_k:H_k\rightarrow K_k$ for each $k$, commuting with the $\Gamma$-action and satisfying $f_{k+1}\circ d_k=d'_k\circ f_k$ on $\mathscr{D}(d_k)$. We write $f:H_{\bullet}\rightarrow K_{\bullet}$.* 3. *We say that these $\mathscr{N}\Gamma$-Hilbert complexes are isomorphic, if the maps $f_k:H_k\rightarrow K_k$ are isometries satisfying $f_k\,\mathscr{D}(d_k) = \mathscr{D}(d'_k)$ for each $k$.* The minimal and maximal Hilbert complexes above are $\mathscr{N}\Gamma$-Hilbert complexes in the sense of Definition [Definition 6](#gamma-notions2){reference-type="ref" reference="gamma-notions2"}. We arrive at a central observation. **Proposition 7**.
*The isometry $\Phi: L^2\Omega^*(M_\Gamma)\to L^2\Omega^*(M, \mathscr{E})$ of $\mathscr{N}\Gamma$-Hilbert modules in [\[phi-isometry\]](#phi-isometry){reference-type="eqref" reference="phi-isometry"} defines isomorphisms of the $\mathscr{N}\Gamma$-Hilbert complexes $$\begin{split} \Phi: (\mathscr{D}_{\min}(M_\Gamma), d_\Gamma) \to (\mathscr{D}_{\min}(M, \mathscr{E}), d_{\mathscr{E}}), \\ \Phi: (\mathscr{D}_{\max}(M_\Gamma), d_\Gamma)\to (\mathscr{D}_{\max}(M, \mathscr{E}), d_{\mathscr{E}}). \end{split}$$* *Proof.* One checks directly from the construction that on $\Omega^*_c(M,\mathscr{E}_c)$ (in fact also on $\Omega^*_{\max}(M,\mathscr{E})$ by a partition of unity argument) $$d_{\mathscr{E}} = \Phi \circ d_{\Gamma} \circ \Phi^{-1}.$$ Thus, using the isomorphisms [\[phi-isomorphism\]](#phi-isomorphism){reference-type="eqref" reference="phi-isomorphism"} and the fact that the map $\Phi: L^2\Omega^*(M_\Gamma) \to L^2\Omega^*(M, \mathscr{E})$ in [\[phi-isometry\]](#phi-isometry){reference-type="eqref" reference="phi-isometry"} is an isometry, we obtain $$\begin{split} \mathscr{D}_{\min}(M, \mathscr{E}) = \Phi (\mathscr{D}_{\min}(M_\Gamma)), \quad d_{\mathscr{E}} = \Phi \circ d_{\Gamma} \circ \Phi^{-1} \ \textup{on} \ \mathscr{D}_{\min}(M, \mathscr{E}), \\ \mathscr{D}_{\max}(M, \mathscr{E}) = \Phi (\mathscr{D}_{\max}(M_\Gamma)), \quad d_{\mathscr{E}} = \Phi\circ d_{\Gamma} \circ \Phi^{-1} \ \textup{on} \ \mathscr{D}_{\max}(M, \mathscr{E}). \end{split}$$ The statement now follows. ◻ # Existence of $L^2$-Betti numbers and Novikov-Shubin invariants {#finiteness-section} ## Some analytic properties Let $D_{\textup{rel}}$ and $D_{\textup{abs}}$ denote the self-adjoint operators on $L^2\Omega^*(M,g)$ defined as the rolled-up operators of the Hilbert complexes $(\mathscr{D}_{\min}(M), d)$ and $(\mathscr{D}_{\max}(M), d)$, respectively. Let us also introduce the operators $\Delta_{\mathrm{abs}}:=D^2_{\mathrm{abs}}$ and $\Delta_{\mathrm{rel}}:=D^2_{\mathrm{rel}}$.
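To keep the notion of a rolled-up operator concrete, here is a minimal finite-dimensional sketch, with the simplicial coboundary of a hollow triangle standing in for the de Rham differential (an illustration under simplifying assumptions; the matrix `d0` and all names are hypothetical stand-ins, not the operators of the text):

```python
import numpy as np

# Coboundary d0: functions on the vertices -> functions on the edges
# (01, 02, 12) of the hollow triangle; a one-step Hilbert complex.
d0 = np.array([[-1.0, 1.0, 0.0],
               [-1.0, 0.0, 1.0],
               [0.0, -1.0, 1.0]])

# Rolled-up operator D = d + d^* as a self-adjoint block matrix on the
# total space C^0 (+) C^1, and the Hodge Laplacian Delta = D^2.
Z = np.zeros((3, 3))
D = np.block([[Z, d0.T], [d0, Z]])
Delta = D @ D

# By Hodge theory, dim ker(Delta) = dim H^0 + dim H^1 = 1 + 1 = 2
# for the circle (hollow triangle).
nullity = D.shape[0] - np.linalg.matrix_rank(Delta)
print(nullity)  # 2
```

The kernel of the rolled-up Laplacian recovers the cohomology of the complex in all degrees at once, which is the reason for passing from $(d_{\min/\max}, d^*_{\min/\max})$ to the single self-adjoint operator $D_{\mathrm{rel/abs}}$.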
Similarly, let $D_{\Gamma, \textup{rel}}$ and $D_{\Gamma, \textup{abs}}$ denote the self-adjoint operators on $L^2\Omega^*(M_\Gamma,g_{\Gamma})$ defined as the rolled-up operators of the Hilbert complexes $(\mathscr{D}_{\min}(M_\Gamma), d_\Gamma)$ and $(\mathscr{D}_{\max}(M_\Gamma), d_\Gamma)$, respectively. We write $$\begin{aligned} &D_{\Gamma, \textup{rel}}:= d_{\Gamma, \, \min} + d^*_{\Gamma, \, \min}, \quad \Delta_{\Gamma,\mathrm{rel}}:=D^2_{\Gamma,\mathrm{rel}}, \\ &D_{\Gamma, \textup{abs}}:= d_{\Gamma, \, \max} + d^*_{\Gamma, \, \max}, \quad \Delta_{\Gamma,\mathrm{abs}}:=D^2_{\Gamma,\mathrm{abs}}.\end{aligned}$$ We define the following sets of functions $$\begin{aligned} &\mathcal{A}(\overline{M}):=\{f\in C(\overline{M}) : f|_{M}\in C^{\infty}(M),\ df\in L^{\infty}\Omega^1(M,g)\}, \\ &\mathcal{A}(\overline{M}_{\Gamma}):=\{f\in C(\overline{M}_{\Gamma}) : f|_{M_{\Gamma}}\in C^{\infty}(M_{\Gamma}),\ df\in L^{\infty}_{\mathrm{loc}}\Omega^1(M_{\Gamma},g_{\Gamma})\}.\end{aligned}$$ Note that given an open cover $\{U_\alpha\}_{\alpha \in I}$ of $\overline{M}_{\Gamma}$, there exists a partition of unity $\{\phi_{\alpha'}\}_{\alpha'\in J}$ with compact supports, subordinate to $\{U_\alpha\}_{\alpha \in I}$, with $\phi_{\alpha'}\in \mathcal{A}(\overline{M}_{\Gamma})$ for each $\alpha'\in J$; see [@You Proposition 3.2.2]. Obviously the analogous result holds true on $\overline{M}$ with respect to $\mathcal{A}(\overline{M})$. **Lemma 8**. *Let $U\subset \overline{M}_{\Gamma}$ be an open subset such that $p|_U:U\rightarrow V$ is an isomorphism, with $V=p(U)$ and $p:\overline{M}_{\Gamma}\rightarrow \overline{M}$ the covering map. Let $n \in \mathbb{N}$ and consider any collection $\{\phi_1, \ldots, \phi_n\} \subset \mathcal{A}(\overline{M}_{\Gamma})$ with $\mathrm{supp}(\phi_k)\subset U$.
Then $$\label{composition} \prod_{k=1}^n\left(\phi_k(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}\right)$$ is a Hilbert-Schmidt operator for $n$ sufficiently large.* *Proof.* Let us denote by $\Psi:L^2\Omega^*(\mathrm{reg}(U),g_{\Gamma}|_{\mathrm{reg}(U)})\rightarrow L^2\Omega^*(\mathrm{reg}(V),g|_{\mathrm{reg}(V)})$ the isometry induced by $(p|_U)^{-1}:V\rightarrow U$. Here, we write $\mathrm{reg}(U)$ and $\mathrm{reg}(V)$ for the open interior of $U$ and $V$, respectively. Let $\phi\in \mathcal{A}(\overline{M}_{\Gamma})$ with $\mathrm{supp}(\phi)\subset U$ and $\omega\in \mathscr{D}(D_{\Gamma,\mathrm{abs}/\mathrm{rel}})$ be arbitrarily fixed. It is not difficult to show that $$\begin{aligned} &\phi \omega\in \mathscr{D}(D_{\Gamma,\mathrm{abs}/\mathrm{rel}}), \quad \Psi(\phi\omega) \in \mathscr{D}(D_{\mathrm{abs}/\mathrm{rel}}), \\ &D_{\mathrm{abs}/\mathrm{rel}}(\Psi(\phi\omega))=\Psi(D_{\Gamma,\mathrm{abs}/\mathrm{rel}}(\phi \omega)).\end{aligned}$$ Hence we get $$\begin{aligned} \phi(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}&= \Psi^{-1}(i+D_{\mathrm{abs}/\mathrm{rel}})^{-1} \circ (i+D_{\mathrm{abs}/\mathrm{rel}})\Psi \circ \phi(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}\\ &=\Psi^{-1}(i+D_{\mathrm{abs}/\mathrm{rel}})^{-1} \circ \Psi(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}}) \circ \phi(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}. \end{aligned}$$ Note that $(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})\phi(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}$ is a bounded operator and $(i+D_{\mathrm{abs}/\mathrm{rel}})^{-1}$ lies in the $n$-Schatten class for $n$ sufficiently large as a consequence of [@Alvarez Theorem 1.1]. Therefore $\phi(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}$ is $n$-Schatten and so we can conclude that [\[composition\]](#composition){reference-type="eqref" reference="composition"} is Hilbert-Schmidt for $n$ sufficiently large. ◻ Below, we shall impose an analytic assumption. **Assumption 9**.
*For any arbitrarily fixed cutoff functions $\phi$, $\chi\in \mathcal{A}(\overline{M})$ such that $\mathrm{supp}(\phi)\cap \mathrm{supp}(\chi)=\varnothing$, the composition $$\phi(i+D_{\mathrm{abs}/\mathrm{rel}})^{-1}\chi$$ is Hilbert-Schmidt.* We expect Assumption [Assumption 9](#fundamental){reference-type="ref" reference="fundamental"} to hold on any compact smoothly stratified (Thom-Mather) pseudo-manifold with an iterated wedge metric. Currently, we can show that the assumption holds in the following two cases. **Proposition 10**. *The above Assumption [Assumption 9](#fundamental){reference-type="ref" reference="fundamental"} is satisfied in the following two cases:* 1. *$(M,g)$ has an isolated conical singularity.* 2. *$M$ is a Witt pseudo-manifold with (non-isolated) singularities of arbitrary depth and $g$ is an adapted iterated wedge metric as in [@package Proposition 5.4] (notice that it is always possible to endow a Witt pseudo-manifold with such a metric, see again [@package Proposition 5.4]).* *Proof.* Let us abbreviate $D:= D_{\mathrm{abs}/\mathrm{rel}}$. Notice that in the Witt case the operator is essentially self-adjoint, see [@package Section 5.8], so $D_{\mathrm{abs}} =D_{\mathrm{rel}}$ in this case. The proof uses the microlocal analysis of the heat kernel $e^{-tD^2}$ by Mooers [@Moo Theorem 4.1] for the first case, and the microlocal description of the resolvent $(D-\lambda)^{-1}$ under the Witt assumption by Albin and Gell-Redman [@AlGe Theorem 4.3] for the second case. In both cases, the microlocal analysis employs blowup techniques, see for example Melrose [@Mel:TAP] and Mazzeo [@Maz:ETO]. Since the notion of blowups appears only within the limits of this proof, we do not attempt a detailed presentation and instead suggest thinking of blowups in terms of polar coordinates around the corresponding blown-up submanifolds.
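The passage from the heat operator to the resolvent in the first case below rests on the elementary factorisation $\textup{Id}+D^2=(D+i)(D-i)$ for self-adjoint $D$; here is a quick finite-dimensional sanity check with a random Hermitian matrix (an illustration only, since the operators in the text are unbounded):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Hermitian matrix standing in for the self-adjoint operator D.
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
D = (A + A.conj().T) / 2
I = np.eye(5)

# (Id + D^2) = (D + iI)(D - iI), hence
# (D - iI)(Id + D^2)^{-1} = (D + iI)^{-1} = (i + D)^{-1}.
lhs = (D - 1j * I) @ np.linalg.inv(I + D @ D)
rhs = np.linalg.inv(D + 1j * I)
print(np.allclose(lhs, rhs))  # True
```

The same functional-calculus identity, applied spectrally, is what turns estimates on the heat kernel $e^{-tD^2}$ into estimates on $(i+D)^{-1}$.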
[*Proof of the first case:*]{.ul} Since the Assumption [Assumption 9](#fundamental){reference-type="ref" reference="fundamental"} is concerned with the resolvent, when using Mooers [@Moo Theorem 4.1] in the first case we need to pass from the heat operator to the resolvent. We do this using the following relation, valid since $\textup{Id} + D^2 = (D+i)(D-i)$, $$\begin{aligned} (D-i) \int_0^\infty e^{-t(\textup{Id} + D^2)} dt = (D-i) (\textup{Id} + D^2)^{-1} = (i + D)^{-1}.\end{aligned}$$ In particular, we have a relation between the integral kernels $$\begin{aligned} \label{heat-resolvent} \phi(i + D)^{-1} \chi = \int_0^\infty e^{-t} \phi (D-i) e^{-t D^2}\chi \, dt.\end{aligned}$$ The asymptotics of the integrand in [\[heat-resolvent\]](#heat-resolvent){reference-type="eqref" reference="heat-resolvent"} is conveniently described by the heat-space blowup of $\widetilde{M} \times \widetilde{M} \times [0,\infty)_{\sqrt{t}}$. One first blows up the highest codimension corner $\partial M \times \partial M \times \{0\}$, which introduces a new boundary face ff. Then one blows up the temporal diagonal $\textup{diag} (\widetilde{M} \times \widetilde{M}) \times \{0\}$, which introduces the boundary face td. The resulting blowup is referred to as the heat-space blowup $\mathscr{M}^2_h$ and is illustrated in Figure [2](#heat-space){reference-type="ref" reference="heat-space"}. There, $x$ and $\widetilde{x}$ are boundary defining functions of the two copies of $\widetilde{M}$. The boundary faces rf, lf and tf correspond to the boundary faces $\{x=0\}, \{\widetilde{x}=0\}$ and $\{t=0\}$ before the blowup, respectively. There is also a canonical blowdown map $$\beta: \mathscr{M}^2_h \to \widetilde{M} \times \widetilde{M} \times [0,\infty).$$ ![The heat-space blowup $\mathscr{M}^2_h$ and support of $\phi e^{-t D^2} \chi$.](heat-space.pdf){#heat-space} Since $\mathrm{supp}(\phi)\cap \mathrm{supp}(\chi)=\varnothing$, the lift $\beta^*\phi e^{-t D^2} \chi$ is in fact supported away from ff and td.
Figure [2](#heat-space){reference-type="ref" reference="heat-space"} illustrates the support of $\beta^*\phi e^{-t D^2} \chi$, where we consider the generic case that $\mathrm{supp}(\phi) \cap \partial M \neq \varnothing$. In that case $\mathrm{supp}(\chi)$ must be fully contained in the interior of $M$ and hence the support of $\beta^*\phi e^{-t D^2} \chi$ in $\mathscr{M}^2_h$ is of the form indicated by the blue shaded region. It is a central result of Mooers [@Moo Theorem 4.1] that $\beta^*e^{-t D^2}$ is a conormal function on $\mathscr{M}^2_h$, polyhomogeneous at ff and td, vanishing to infinite order at tf, with uniform asymptotics at lf and rf governed by the asymptotics of solutions in $\mathscr{D}(D^2)$. Hence, we conclude that in the generic case illustrated in Figure [2](#heat-space){reference-type="ref" reference="heat-space"} $$\begin{aligned} \phi e^{-t D^2} \chi \in \phi \cdot \mathscr{D}(D^2) \times C^\infty_c(M, \Lambda^* ({}^{w}T^*M)), \end{aligned}$$ uniformly as $t \to \infty$ and $t\to 0$. Consequently $$\phi (D-i) e^{-t D^2}\chi \in L^2(M \times M, \Lambda^* ({}^{w}T^*M) \boxtimes \Lambda^* ({}^{w}T^*M)),$$ uniformly in $t$. Moreover, the integrand in [\[heat-resolvent\]](#heat-resolvent){reference-type="eqref" reference="heat-resolvent"} vanishes exponentially as $t\to \infty$. Hence the right-hand side of [\[heat-resolvent\]](#heat-resolvent){reference-type="eqref" reference="heat-resolvent"} is Hilbert-Schmidt and the claim follows in the first case. [*Proof of the second case:*]{.ul} For the second case, note that if $\widetilde{M}$ is the resolution of a compact space with a wedge singularity, then $\partial M$ is the total space of a fibration $\phi: \partial M \to B$ over a compact base manifold $B$ with fibres $F$ being compact manifolds as well. The wedge metric is conical on the fibres of the boundary collar $\mathscr{U} \cong (0,1) \times \partial M$.
In this setting the resolvent $(i + D)^{-1}$ is conveniently described as a polyhomogeneous (with bounds) distribution on the edge-space blowup of $\widetilde{M} \times \widetilde{M}$, obtained by blowing up the fibre diagonal $$\textup{diag}_\phi (\partial M \times \partial M) = \{(p,p') | \phi(p) = \phi(p')\}.$$ The blowup introduces a new boundary face ff and is illustrated in Figure [3](#edge-space){reference-type="ref" reference="edge-space"}. There, $x$ and $\widetilde{x}$ are boundary defining functions of the two copies of $\widetilde{M}$. Moreover, $(y,\widetilde{y})$ are local coordinates on the two copies of the base of the fibration $\phi: \partial M \to B$, so that locally $\textup{diag}_\phi (\partial M \times \partial M) = \{y=\widetilde{y}\}$. The boundary faces rf and lf correspond to the boundary faces $\{x=0\}$ and $\{\widetilde{x}=0\}$ before the blowup, respectively. There is also a canonical blowdown map $$\beta: \mathscr{M}^2_e \to \widetilde{M} \times \widetilde{M}.$$ ![The edge-space blowup $\mathscr{M}^2_e$ and support of $\phi (i + D)^{-1} \chi$.](edge-space.pdf){#edge-space} Since $\mathrm{supp}(\phi)\cap \mathrm{supp}(\chi)=\varnothing$, the lift $\beta^*\phi (i + D)^{-1} \chi$ is supported away from ff. Figure [3](#edge-space){reference-type="ref" reference="edge-space"} also illustrates the support of $\beta^*\phi (i + D)^{-1} \chi$, where we consider the generic case that $\mathrm{supp}(\phi) \cap \partial M \neq \varnothing$. In that case $\mathrm{supp}(\chi)$ must be fully contained in the interior of $M$ and hence the support of $\beta^*\phi (i + D)^{-1} \chi$ in $\mathscr{M}^2_e$ is of the form indicated by the blue shaded region. Note now that since we are considering an *adapted* iterated wedge metric, the operator $D$ satisfies what Albin and Gell-Redman call the geometric Witt condition; this means that we can indeed apply their results.
The microlocal description of the resolvent in Albin and Gell-Redman [@AlGe Theorem 4.3] asserts that the resolvent $(i + D)^{-1}$ lifts to a polyhomogeneous (with bounds) distribution on $\mathscr{M}^2_e$ with the asymptotics at rf and lf governed by the asymptotics of solutions in $\mathscr{D}(D)$. Consequently $$\begin{aligned} \phi (i + D)^{-1} \chi \in \mathscr{D}(D) \times C^\infty_c(M, \Lambda^* ({}^{w}T^*M)).\end{aligned}$$ In particular, $\phi (i + D)^{-1} \chi \in L^2(M \times M, \Lambda^* ({}^{w}T^*M) \boxtimes \Lambda^* ({}^{w}T^*M))$ and the claim follows in the second case as well, finishing the proof. ◻ Before we continue to the next result, we shall point out that in the Witt case and for an adapted metric as in the second case of Proposition [Proposition 10](#witt-examples){reference-type="ref" reference="witt-examples"}, the Gauss-Bonnet operator is essentially self-adjoint on $(M_\Gamma,g_\Gamma)$, see [@package Proposition 6.3]. Hence in that case there is no point in distinguishing $D_{\Gamma, \mathrm{abs}}$ and $D_{\Gamma, \mathrm{rel}}$. **Proposition 11**. *Let $U\subset \overline{M}_{\Gamma}$ be an open subset, such that $p|_U:U\rightarrow V$ is an isomorphism, with $V=p(U)$ and $p:\overline{M}_{\Gamma}\rightarrow \overline{M}$ the covering map. Let $\overline{\phi}$, $\overline{\chi}\in \mathcal{A}(\overline{M}_{\Gamma})$ with $\mathrm{supp}(\overline{\phi})\cap \mathrm{supp}(\overline{\chi})=\varnothing$ and $\mathrm{supp}(\overline{\phi})\subset U$. If Assumption [Assumption 9](#fundamental){reference-type="ref" reference="fundamental"} holds true then $$\overline{\phi}(i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}$$ is a Hilbert-Schmidt operator.* *Proof.* Let $\Psi$ be the isometry defined in the proof of Lemma [Lemma 8](#lemma-1){reference-type="ref" reference="lemma-1"}. 
Let $\overline{\theta}\in \mathcal{A}(\overline{M}_{\Gamma})$ with $\mathrm{supp}(\overline{\theta})\subset U$, $\mathrm{supp}(\overline{\theta})\cap \mathrm{supp}(\overline{\chi})=\varnothing$ and $\overline{\theta} \equiv 1$ on an open neighbourhood of $\mathrm{supp}(\overline{\phi})$. Since $\overline{\theta}\in \mathcal{A}(\overline{M}_{\Gamma})$, the operator $[D_{\Gamma,\mathrm{abs}/\mathrm{rel}},\overline{\theta}](i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}$ is well-defined and we have $$\label{start} \begin{split} &[D_{\Gamma,\mathrm{abs}/\mathrm{rel}},\overline{\theta}](i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}\\ = \, &D_{\Gamma,\mathrm{abs}/\mathrm{rel}}\overline{\theta}(i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}-\overline{\theta}(-i+i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})(i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}\\ = \, &(i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})\overline{\theta}(i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}-\overline{\theta}\overline{\chi}\\ = \, &(i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})\overline{\theta}(i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}. \end{split}$$ Let us now introduce a second auxiliary function $\overline{\zeta}\in \mathcal{A}(\overline{M}_{\Gamma})$ with $\mathrm{supp}(\overline{\zeta})\subset U$ and $\overline{\zeta}\equiv 1$ on $\mathrm{supp}([D_{\Gamma,\mathrm{abs}/\mathrm{rel}},\overline{\theta}])$. Then $[D_{\Gamma,\mathrm{abs}/\mathrm{rel}},\overline{\theta}]\overline{\zeta} \equiv [D_{\Gamma,\mathrm{abs}/\mathrm{rel}},\overline{\theta}]$.
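The commutator $[D_{\Gamma,\mathrm{abs}/\mathrm{rel}},\overline{\theta}]$ used here is bounded precisely because $d\overline{\theta}$ is (locally) bounded, which is what membership in $\mathcal{A}(\overline{M}_{\Gamma})$ encodes; in one variable the mechanism is just the Leibniz rule (a toy symbolic sketch, not the operators of the text):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)
theta = sp.Function('theta')(x)

# Commutator [d/dx, theta] applied to a test function f:
# d/dx(theta f) - theta d/dx(f) = theta' f, i.e. multiplication by
# the derivative of theta -- a zeroth-order (bounded) operator
# whenever that derivative is bounded.
commutator = sp.diff(theta * f, x) - theta * sp.diff(f, x)
print(sp.simplify(commutator - sp.diff(theta, x) * f))  # 0
```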
Applying $\Psi$ on both sides of [\[start\]](#start){reference-type="eqref" reference="start"} implies $$\begin{aligned} \Psi[D_{\Gamma,\mathrm{abs}/\mathrm{rel}},\overline{\theta}]\overline{\zeta}(i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}&=\Psi[D_{\Gamma,\mathrm{abs}/\mathrm{rel}},\overline{\theta}](i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}\\&=\Psi(i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})\overline{\theta}(i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}. \end{aligned}$$ Let us denote $\theta:=\overline{\theta} \circ (p|_U)^{-1}$. The presence of $\overline{\zeta}$ allows us to commute $\Psi$ with the commutator and yields $$[D_{\mathrm{abs}/\mathrm{rel}},\theta]\Psi\overline{\zeta}(i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}=(i+D_{\mathrm{abs}/\mathrm{rel}})\Psi\overline{\theta}(i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}.$$ Applying the resolvent on both sides of this equality implies $$(i+D_{\mathrm{abs}/\mathrm{rel}})^{-1}[D_{\mathrm{abs}/\mathrm{rel}},\theta]\Psi\overline{\zeta}(i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}=\Psi\overline{\theta}(i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}.$$ Therefore, by multiplying both sides with $\phi:=\overline{\phi} \circ (p|_U)^{-1}$, we get $$\begin{aligned} \phi(i+D_{\mathrm{abs}/\mathrm{rel}})^{-1}[D_{\mathrm{abs}/\mathrm{rel}},\theta]\Psi\overline{\zeta}(i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}=\phi\Psi\overline{\theta}(i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}\\ =\Psi\overline{\phi}\, \overline{\theta}(i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi} =\Psi\overline{\phi}(i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi} \end{aligned}$$ and so we arrive at $$\Psi^{-1}\phi(i+D_{\mathrm{abs}/\mathrm{rel}})^{-1} [D_{\mathrm{abs}/\mathrm{rel}},\theta]\Psi\overline{\zeta} (i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}= 
\overline{\phi}(i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}.$$ Note now that $\mathrm{supp}(\phi)\subset V$, $\phi\in \mathcal{A}(\overline{M})$ and $\mathrm{supp}(\phi)\cap \mathrm{supp}([D_{\mathrm{abs}/\mathrm{rel}},\theta])=\varnothing$. Hence there exists $\gamma\in \mathcal{A}(\overline{M})$ such that $\gamma [D_{\mathrm{abs}/\mathrm{rel}},\theta]=[D_{\mathrm{abs}/\mathrm{rel}},\theta]$ and $\mathrm{supp}(\phi)\cap \mathrm{supp}(\gamma)=\varnothing$. We obtain $$\Psi^{-1}\phi(i+D_{\mathrm{abs}/\mathrm{rel}})^{-1}\gamma [D_{\mathrm{abs}/\mathrm{rel}},\theta]\Psi\overline{\zeta}(i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-1} \overline{\chi}=\overline{\phi}(i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}.$$ Finally, since both $[D_{\mathrm{abs}/\mathrm{rel}},\theta]$ and $(i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-1}$ are bounded operators and $\phi(i+D_{\mathrm{abs}/\mathrm{rel}})^{-1}\gamma$ is a Hilbert-Schmidt operator by Assumption [Assumption 9](#fundamental){reference-type="ref" reference="fundamental"}, we can conclude that $\overline{\phi}(i+D_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}$ is also Hilbert-Schmidt, as required. ◻ **Proposition 12**. *In the setting of Proposition [Proposition 11](#key-trick){reference-type="ref" reference="key-trick"} the operator $$(\textup{Id}+\Delta_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-n}$$ is $\Gamma$-trace class for $n$ sufficiently large.* *Proof.* Since the argument is the same for both $\Delta_{\Gamma,\mathrm{abs}}$ and $\Delta_{\Gamma,\mathrm{rel}}$ we drop the subscript $\mathrm{abs}/\mathrm{rel}$ in the rest of the proof. Now we note that $(i+\lambda)(i-\lambda)=-(1+\lambda^2)$, so that for $n$ even the spectral calculus for $D_{\Gamma}$ yields $(\textup{Id}+\Delta_{\Gamma})^{-n/2}= \pm\,(i+D_{\Gamma})^{-n/2}\circ(i-D_{\Gamma})^{-n/2}$; since $n$ may be taken arbitrarily large, it is enough to prove that both $(i+D_{\Gamma})^{-n/2}$ and $(i-D_{\Gamma})^{-n/2}$ are $\Gamma$-Hilbert-Schmidt. We now show that $(i+D_{\Gamma})^{-n/2}$ is $\Gamma$-Hilbert-Schmidt. The proof of the other case is identical.
According to [@Ati Definition (4.3)'] we need to show that $\theta(i+D_{\Gamma})^{-n/2}$ is Hilbert-Schmidt, with $\theta$ an arbitrarily fixed bounded measurable function with compact support on $\overline{M}_{\Gamma}$. Let $\{U_1,...,U_{\ell}\}$ be a finite open cover of $\mathrm{supp}(\theta)$ such that $p|_{U_k}:U_k\rightarrow V_k=p(U_k)$ is an isomorphism. Thanks to [@You Proposition 3.2.2] we can find $\psi_1,...,\psi_{\ell}\in \mathcal{A}(\overline{M}_{\Gamma})$ such that $\mathrm{supp}(\psi_k)\subset U_k$ for each $k$ and $\sum_{k=1}^{\ell}\psi_k(x)=1$ for each $x\in \mathrm{supp}(\theta)$. Thus we have $$\begin{aligned} \theta(i+D_{\Gamma})^{-n/2}=\theta\left(\sum_{k=1}^{\ell}\psi_k\right)(i+D_{\Gamma})^{-n/2}=\sum_{k=1}^{\ell}\theta\psi_k(i+D_{\Gamma})^{-n/2}. \end{aligned}$$ Since $\theta$ is bounded, it is enough to show that $\psi_k(i+D_{\Gamma})^{-n/2}$ is Hilbert-Schmidt for each $k$. Without loss of generality we can assume that $n$ is even. Let $q=n/2$ and $\phi_1\in \mathcal{A}(\overline{M}_{\Gamma})$ with $\mathrm{supp}(\phi_1)\subset U_k$ and $\mathrm{supp}(\psi_k)\cap \mathrm{supp}(1-\phi_1)=\varnothing$. We have $$\begin{aligned} \psi_k(i+D_{\Gamma})^{-n/2}&=\psi_k(i+D_{\Gamma})^{-1}(i+D_{\Gamma})^{1-q}\\ &=\psi_k(i+D_{\Gamma})^{-1} (\phi_1+1-\phi_1)(i+D_{\Gamma})^{1-q}\\ &= \psi_k(i+D_{\Gamma})^{-1}\phi_1(i+D_{\Gamma})^{1-q}+\psi_k(i+D_{\Gamma})^{-1}(1-\phi_1)(i+D_{\Gamma})^{1-q}. \end{aligned}$$ Note that $\psi_k(i+D_{\Gamma})^{-1}(1-\phi_1)(i+D_{\Gamma})^{1-q}$ is Hilbert-Schmidt, since $(i+D_{\Gamma})^{1-q}$ is bounded and $\psi_k(i+D_{\Gamma})^{-1}(1-\phi_1)$ is Hilbert-Schmidt thanks to Proposition [Proposition 11](#key-trick){reference-type="ref" reference="key-trick"}. Thus $\psi_k(i+D_{\Gamma})^{-n/2}$ is Hilbert-Schmidt if and only if $\psi_k(i+D_{\Gamma})^{-1}\phi_1(i+D_{\Gamma})^{1-q}$ is Hilbert-Schmidt.
By picking further $\phi_2,...,\phi_{q-1}\in \mathcal{A}(\overline{M}_{\Gamma})$ with $\mathrm{supp}(\phi_j)\subset U_k$ and $\mathrm{supp}(\phi_j)\cap \mathrm{supp}(1-\phi_{j+1})=\varnothing$ for each $j=1,...,q-2$ and iterating the above procedure, we get that $\psi_k(i+D_{\Gamma})^{-n/2}$ is Hilbert-Schmidt if and only if the composition $$\label{HScomposition} \psi_k(i+D_{\Gamma})^{-1}\phi_1(i+D_{\Gamma})^{-1}...\phi_{q-1}(i+D_{\Gamma})^{-1}$$ is Hilbert-Schmidt. Thanks to Lemma [Lemma 8](#lemma-1){reference-type="ref" reference="lemma-1"} we know that [\[HScomposition\]](#HScomposition){reference-type="eqref" reference="HScomposition"} is Hilbert-Schmidt. We can thus finally conclude that $\psi_k(i+D_{\Gamma})^{-n/2}$ is also Hilbert-Schmidt. ◻ **Remark 13**. *Note that this statement in the Witt case, where the operator is in fact essentially self-adjoint, is already claimed in [@PiVe1 Proposition 7.4]. However, the argument there is incomplete. Namely, the trace class property of $$\overline{\phi}(\textup{Id}+\Delta_\Gamma)^{-n}\overline{\chi}$$ is presented there as a simple consequence of the corresponding property on the base. This is however only true for isolated conical singularities, since there one may use methods of Lesch [@Lesch-singular Proposition 2.7], where $\mathrm{supp}\, d\overline{\phi}$ is required to be disjoint from the singular strata. This we cannot guarantee unless we only deal with isolated conical singularities.* **Theorem 14**. *Let $f:\mathbb{R}\to \mathbb{R}$ be any rapidly decreasing function. Then $$f(\Delta_{\Gamma, \mathrm{abs}/\mathrm{rel}})$$ is $\Gamma$-trace class. In particular, the corresponding heat operators as well as spectral projections to finite intervals are $\Gamma$-trace class.* *Proof.* Let us consider the function $g(z):= (1+z)^n f(z)$ with $n>0$ sufficiently large. Note that $g$ is a bounded function and thus $g(\Delta_{\Gamma, \mathrm{abs}/\mathrm{rel}})$ is a bounded operator.
Writing $f(z)=(1+z)^{-n} g(z)$ we conclude that $$\begin{aligned} f(\Delta_{\Gamma, \mathrm{abs}/\mathrm{rel}})&= (\textup{Id}+\Delta_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-n} \circ \Bigl((\textup{Id}+\Delta_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{n}f(\Delta_{\Gamma, \mathrm{abs}/\mathrm{rel}})\Bigr)\\ &=(1+\Delta_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-n} g(\Delta_{\Gamma, \mathrm{abs}/\mathrm{rel}}), \end{aligned}$$ is the composition of a bounded operator $g(\Delta_{\Gamma, \mathrm{abs}/\mathrm{rel}})$ with a $\Gamma$-trace class operator $(\textup{Id}+\Delta_{\Gamma, \mathrm{abs}/\mathrm{rel}})^{-n}$. Thus $f(\Delta_{\Gamma, \mathrm{abs}/\mathrm{rel}})$ is $\Gamma$-trace class, as required. ◻ We now have everything in place to define $L^2$-Betti numbers and Novikov-Shubin invariants on $\overline{M}_\Gamma$, thereby proving our first main Theorem [Theorem 1](#main1){reference-type="ref" reference="main1"}. ## Definition of $L^2$-Betti numbers and Novikov-Shubin invariants We begin by recalling the notion of $\Gamma$-dimension of a Hilbert $\mathscr{N}\Gamma$-module. As before, we refer the reader to [@Lueck-book §1 and §2] and [@Schick] for more details. Continuing with the notation of Definition [Definition 4](#gamma-notions1){reference-type="ref" reference="gamma-notions1"}, we have the following. **Definition 15**. *Recall that an $\mathscr{N}\Gamma$-Hilbert module $H$ is a Hilbert space with an isometric linear $\Gamma$-embedding of $H$ into the tensor product $\mathcal{H}\otimes \ell^2 \Gamma$. The $\Gamma$-dimension of a Hilbert $\mathscr{N}\Gamma$-module $H$ is defined as the von Neumann trace of the orthogonal projection $\mathcal{P}:\mathcal{H}\otimes L^2(\Gamma)\rightarrow H$. Explicitly, let $\{b_i\}$ be an arbitrarily fixed Hilbert basis of $\mathcal{H}$ and $e\in \Gamma$ the unit element, see [@Lueck-book Definition 1.8].
Then we have $$\begin{aligned} \label{gamma-dim} \dim_{\Gamma}(H):= \mathrm{Tr}_{\Gamma}(\mathcal{P})= \sum_i\langle \mathcal{P}(b_i\otimes \delta_e),b_i\otimes \delta_e\rangle_{\mathcal{H}\otimes L^2(\Gamma)} \in [0,\infty].\end{aligned}$$* As explained in [@Lueck-book p. 17] the above definition is well-posed and independent of the choices made. Now let us consider an $\mathscr{N}\Gamma$-Hilbert complex $H_\bullet:=(\mathscr{D}(d),d)$ and define in terms of the adjoint $d^*$ the associated Laplace operator $$\begin{aligned} \Delta_k:=d_k^*\circ d_k+d_{k-1}\circ d_{k-1}^*.\end{aligned}$$ It is clear that $\Delta_k$ commutes with the $\Gamma$-action and thus $\ker(\Delta_k)$ has the structure of a Hilbert $\mathscr{N}\Gamma$-module. In view of [\[gamma-dim\]](#gamma-dim){reference-type="eqref" reference="gamma-dim"}, we can therefore make the following definition. **Definition 16**. *The $k$-th $L^2$-Betti number of $H_\bullet:=(\mathscr{D}(d),d)$ is defined as $$\begin{aligned} b^k_{(2),\Gamma}(H_{\bullet}):=\dim_{\Gamma}(\ker(\Delta_k)) \in [0,\infty].\end{aligned}$$* In order to define Novikov-Shubin invariants, consider for each $k$ and $\lambda\geq 0$ $$\begin{split} &d^{\perp}_k:=d_k \restriction \mathscr{D}(d_k)\cap \overline{\mathrm{im}(d_{k-1})}^{\perp}, \\ &\mathcal{L}(d_k^{\perp},\lambda):= \Bigl\{ L\subset H_k \ \textup{Hilbert $\mathscr{N}\Gamma$-submodule} \ | \\ &\qquad \qquad \qquad L\subset \mathscr{D}(d_k^{\perp}), \forall\ u \in L: \|d_k^{\perp}u\|\leq \lambda \|u\|\Bigr\}. \end{split}$$ **Definition 17**. *The Novikov-Shubin invariants are defined in two steps.* 1. *The $k$-th spectral density function of $H_\bullet:=(\mathscr{D}(d),d)$ is defined as $$F_k(\,\cdot\,,H_{\bullet}) : [0,\infty)\rightarrow [0, \infty], \qquad \lambda\mapsto \sup \Bigl\{ \dim_{\Gamma}L \ | \ L\in \mathcal{L}(d_k^{\perp},\lambda) \Bigr\}.$$* 2.
*We say that a $\mathscr{N}\Gamma$-Hilbert complex $H_\bullet:=(\mathscr{D}(d),d)$ is Fredholm if for each $k$ there exists $\lambda_k$ such that $F_k(\lambda_k,H_{\bullet})<\infty$. Note that then $F_k(\lambda,H_{\bullet})<\infty$ for each $\lambda \in (0, \lambda_k)$ as $F_k(\lambda,H_{\bullet})$ is non-decreasing. In that case, we define the $k$-th Novikov-Shubin invariant as $$\alpha_k(H_{\bullet}):=\lim_{\lambda\rightarrow 0^+}\frac{\log \bigl(F_{k-1}(\lambda,H_{\bullet})-F_{k-1}(0,H_{\bullet})\bigr)}{\log(\lambda)}\in [0,\infty],$$ provided that $F_{k-1}(\lambda,H_{\bullet})-F_{k-1}(0,H_{\bullet})>0$ holds for all $\lambda>0.$ Otherwise, we put $\alpha_k(H_{\bullet}):=\infty^{+}$.* In the above definition $\infty^{+}$ denotes a new formal symbol which should not be confused with $+\infty$. We have $\alpha_k(H_{\bullet})=\infty^{+}$ if and only if there is an $\epsilon>0$ such that $F_{k-1}(\epsilon,H_{\bullet})=F_{k-1}(0,H_{\bullet})$. Note that $$F_k(0,H_{\bullet})=b^k_{(2),\Gamma}(H_{\bullet}).$$ Consider now our setting of a compact smoothly (Thom-Mather) stratified pseudomanifold $\overline{M}$ with an iterated wedge metric $g$ on $M$. Let $\overline{M}_{\Gamma}\rightarrow \overline{M}$ be a Galois $\Gamma$-covering and lift $g$ to a metric $g_\Gamma$ in $M_\Gamma$. Consider the corresponding $\mathscr{N}\Gamma$-Hilbert complexes $(\mathscr{D}_{\min}(M_\Gamma), d_\Gamma)$ and $(\mathscr{D}_{\max}(M_\Gamma), d_\Gamma)$ with the corresponding Laplacians $\Delta_{\Gamma, \, \textup{rel}}$ and $\Delta_{\Gamma, \, \textup{abs}}$, respectively. 
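To orient the reader, a standard model computation (illustrative only, not specific to the present setting) shows how Definition 17 extracts the exponent of a polynomially behaved spectral density; the constants $C$ and $\alpha$ below are hypothetical.

```latex
% Illustrative assumption: polynomial behaviour of the spectral density
% near zero, with hypothetical constants C > 0 and \alpha > 0:
\[
  F_{k-1}(\lambda,H_{\bullet})-F_{k-1}(0,H_{\bullet})
  = C\lambda^{\alpha}\bigl(1+o(1)\bigr), \qquad \lambda \to 0^{+}.
\]
% The limit in Definition 17 then recovers the exponent:
\[
  \alpha_k(H_{\bullet})
  =\lim_{\lambda\to 0^{+}}
   \frac{\log C+\alpha\log\lambda+\log\bigl(1+o(1)\bigr)}{\log\lambda}
  =\alpha .
\]
```

If instead $F_{k-1}(\lambda,H_{\bullet})=F_{k-1}(0,H_{\bullet})$ for all sufficiently small $\lambda>0$ (a spectral gap at zero), the convention of Definition 17 assigns the formal value $\alpha_k(H_{\bullet})=\infty^{+}$.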
The maximal and minimal $L^2$-Betti numbers, spectral density functions and Novikov-Shubin invariants with respect to the Galois covering $(\overline{M}_{\Gamma},g_{\Gamma})\rightarrow (\overline{M},g)$ are defined as follows $$\label{invariants} \begin{split} b^k_{(2),\, \min/\max}(\overline{M}_{\Gamma}) &:= \dim_{\Gamma}\Bigl(\ker\bigl(\Delta_{\Gamma,\, k,\, \mathrm{rel}/\mathrm{abs}}\bigr)\Bigr), \\ F_{k,\min/\max}(\lambda,\overline{M}_{\Gamma}) &:= F_{k}\Bigl(\lambda, \bigl(\mathscr{D}_{\min / \max}(M_\Gamma), d_\Gamma\bigr)\Bigr), \\ \alpha_{k,\max/\min}(\overline{M}_{\Gamma}) &:= \alpha_k\bigl(\mathscr{D}_{\min / \max}(M_\Gamma), d_\Gamma\bigr). \end{split}$$ The definition of the Novikov-Shubin invariants $\alpha_{k,\max/\min}(\overline{M}_{\Gamma})$ makes sense only if the corresponding $\mathscr{N}\Gamma$-Hilbert complexes $(\mathscr{D}_{\min / \max}(M_\Gamma), d_\Gamma)$ are Fredholm. This is verified below, in the proof of our first main result, Theorem [Theorem 1](#main1){reference-type="ref" reference="main1"}. *Proof of Theorem [Theorem 1](#main1){reference-type="ref" reference="main1"}.* The first statement follows immediately from Theorem [Theorem 14](#trace-class-bottom-up){reference-type="ref" reference="trace-class-bottom-up"}. Indeed, for each $P \in \{\Delta_{\Gamma,\, k,\, \mathrm{rel}}, \Delta_{\Gamma,\, k,\, \mathrm{abs}}\}$ let $\{E_{\lambda} (P), \lambda \in \mathbb{R}\}$ denote the corresponding spectral family. Theorem [Theorem 14](#trace-class-bottom-up){reference-type="ref" reference="trace-class-bottom-up"} tells us that $E_{\lambda} (P)$ is a $\Gamma$-trace class projection for any $\lambda$. Therefore $$\dim_{\Gamma}(\ker P ) = \mathrm{Tr}_{\Gamma}(E_{0}(P))<\infty.$$ Thus the $L^2$-Betti numbers $b^k_{(2),\, \min/\max}(\overline{M}_{\Gamma})$ are finite.
Concerning the second part, since $E_{\lambda}(P)$ is a $\Gamma$-trace class projection for each $\lambda$, the Hilbert $\mathscr{N}\Gamma$ complexes $(\mathscr{D}_{\min / \max}(M_\Gamma), d_\Gamma)$ are Fredholm, see [@Lueck-book Proposition 2.3] and the argument given in the proof of [@Lueck-book Lemma 2.6]. We can thus conclude that the Novikov-Shubin invariants $\alpha_{k,\max/\min}(\overline{M}_{\Gamma})$ are well-defined, as claimed. ◻ # A Hilsum-Skandalis-type replacement of a pullback {#HS-subsection} ## Topological preliminaries This section contains some topological properties that are well known to people familiar with coverings. However, since we could not pin down a specific reference, we prefer to collect and prove the necessary results in this preliminary subsection. In what follows $X$ and $Y$ stand for path-connected, locally path-connected and semi-locally simply-connected topological spaces. Consider a possibly disconnected Galois covering $p:N\rightarrow Y$ with $\Gamma$ the corresponding group of deck transformations. Let $y\in Y$ and $n\in p^{-1}(y)$. Then we denote by $$\Phi_{N,y,n}:\pi_1(Y,y)\rightarrow \Gamma$$ the homomorphism of groups that assigns to each $[\alpha]\in \pi_1(Y,y)$ the unique element $\gamma\in \Gamma$ such that $\gamma(n)=n[\alpha]$, where $$p^{-1}(y)\times \pi_1(Y,y)\rightarrow p^{-1}(y),\quad (n,[\alpha])\mapsto n[\alpha]$$ denotes the monodromy action, see [@Manetti Ch. 13]. **Lemma 18**. *Let $p:N\rightarrow Y$ be a possibly disconnected Galois covering. Then $N$ is connected if and only if $\Phi_{N,y,n}$ is surjective for any choice of $y\in Y$ and $n\in p^{-1}(y)$.* *Proof.* The surjectivity of $\Phi_{N,y,n}$ provided $N$ is connected is well known, see [@Manetti §13.3]. The converse is an easy exercise that we leave to the reader. ◻ **Lemma 19**. *Let $f:X\rightarrow Y$ be a continuous map. Let $x\in X$ be such that $f(x)=y$ and let $f_*: \pi_1(X,x) \to \pi_1(Y,y)$ be the homomorphism induced by $f$.
Consider the Galois $\Gamma$-covering $f^*N\rightarrow X$. Then for any $(x,n) \in f^*N$ $$\Phi_{f^*N,x,(x,n)}=\Phi_{N,y,n}\circ f_*.$$* *Proof.* Let $r:f^*N\rightarrow X$ be the covering map of $f^*N$ and let $\hat{f}:f^*N\rightarrow N$ be the usual map induced by the right projection. The maps combine into a commutative diagram. First of all we note that $p\circ \hat{f}=f\circ r$ and that $\hat{f}:f^*N\rightarrow N$ is $\Gamma$-equivariant. Let now $[\alpha]\in \pi_1(X,x)$ and let $g=\Phi_{f^*N,x,(x,n)}([\alpha])$. Then we have $g((x,n))=(x,n)[\alpha]$. Consider now $f_*([\alpha])$ and let $g'=\Phi_{N,y,n}(f_*([\alpha]))$. Then $g'(n)=n[f\circ \alpha]$. By [@Manetti §13.1] we know that $n[f\circ\alpha]=\hat{f}((x,n)[\alpha])$. Hence we have $$g'(n)=n[f\circ\alpha]=\hat{f}((x,n)[\alpha])=\hat{f}(g((x,n)))=g(\hat{f}((x,n)))=g(n).$$ Therefore $g'(n)=g(n)$ and thus $g'=g$, as desired. ◻ **Lemma 20**. *In the setting of Lemma [Lemma 19](#lemma){reference-type="ref" reference="lemma"} assume that $f$ is a homotopy equivalence. Then $f^*N$ is connected if and only if $N$ is so.* *Proof.* If $N$ is connected then $\Phi_{N,y,n}$ is surjective and consequently $\Phi_{f^*N,x,(x,n)}$ is also surjective, as $\Phi_{f^*N,x,(x,n)}=\Phi_{N,y,n}\circ f_*$. Thus, thanks to Lemma [Lemma 18](#1lemma){reference-type="ref" reference="1lemma"}, we can conclude that $f^*N$ is connected. Conversely let us assume that $f^*N$ is connected. Then $\Phi_{f^*N,x,(x,n)}$ is surjective and therefore $\Phi_{N,y,n}$ is also surjective as $\Phi_{f^*N,x,(x,n)}=\Phi_{N,y,n}\circ f_*$. We can thus conclude that $N$ is connected. ◻ **Corollary 21**. *Let $f:X\rightarrow Y$ be a homotopy equivalence and let $p:N\rightarrow Y$ be a universal covering of $Y$. Then $r:f^*N\rightarrow X$ is a universal covering of $X$.* *Proof.* Thanks to Lemma [Lemma 20](#connection){reference-type="ref" reference="connection"} we know that $f^*N$ is connected. Let now $x\in X$ and $n\in p^{-1}(f(x))$.
Let $\hat{f}:f^*N\rightarrow N$ be the map induced by the right projection. We have $p\circ \hat{f}=f\circ r$ and thus $(p\circ \hat{f})_*=(f\circ r)_*$ as group homomorphisms from $\pi_1(f^*N,(x,n))$ to $\pi_1(Y,f(x))$. Since $(f\circ r)_*$ is injective and $\pi_1(N,n)$ is trivial we deduce that $\pi_1(f^*N,(x,n))$ is trivial as well. We can thus conclude that $r:f^*N\rightarrow X$ is a universal covering of $X$ since it is a connected and simply connected covering of $X$. ◻ More generally we have the following result. **Lemma 22**. *Let $f:X\rightarrow Y$ be a homotopy equivalence and let $q:M\rightarrow X$ and $p:N\rightarrow Y$ be two path-connected Galois $\Gamma$-coverings. Assume that there exists $x\in X$, $m\in q^{-1}(x)$ and $n\in p^{-1}(f(x))$ such that $$\label{compatibility} f_*(\ker(\Phi_{M,x,m}))=\ker(\Phi_{N,f(x),n}).$$ Then $M$ and $f^*N$ are isomorphic Galois $\Gamma$-coverings.* *Proof.* First we note that both $M$ and $f^*N$ are path-connected. Write $r:f^*N\rightarrow X$ for the covering map of $f^*N$. Thanks to [@Hatcher Proposition 1.37] we know that if $q_*(\pi_1(M,m))=r_*(\pi_1(f^*N,(x,n)))$ then $M$ and $f^*N$ are isomorphic. Thanks to [@Hatcher Proposition 1.39] we know that $q_*(\pi_1(M,m))=\ker(\Phi_{M,x,m})$ and $r_*(\pi_1(f^*N,(x,n)))=\ker(\Phi_{f^*N,x,(x,n)})$. Note that from Lemma [Lemma 19](#lemma){reference-type="ref" reference="lemma"} we have $$\ker(\Phi_{f^*N,x,(x,n)})=(f_*)^{-1}(\ker(\Phi_{N,f(x),n}))= (f_*)^{-1}(f_*(\ker(\Phi_{M,x,m})))=\ker(\Phi_{M,x,m}).$$ We can thus conclude that $M$ and $f^*N$ are isomorphic, as required. ◻ **Remark 23**. *In [@Gromov-Shubin p. 382] it is required that $\Phi_{M,x,m}=\Phi_{N,f(x),n}\circ f_*$.
Obviously their assumption implies [\[compatibility\]](#compatibility){reference-type="eqref" reference="compatibility"}.* ## Construction Consider two compact smoothly stratified spaces $\overline{M}$ and $\overline{N}$ with the respective iterated incomplete wedge metrics $g_M$ and $g_N$ on the open interiors. We repeat the definition of a smoothly stratified stratum preserving map between $\overline{M}$ and $\overline{N}$. **Definition 24**. *[([@package Definition 4 and §9.2])]{.upright}* 1. *A smoothly stratified map between smoothly stratified spaces $\overline{M}$ and $\overline{N}$, is a continuous map $f:\overline{M} \to \overline{N}$ which sends open strata of $\overline{M}$ smoothly into open strata of $\overline{N}$ and lifts to $\widetilde{f}: \widetilde{M} \to \widetilde{N}$, which is a $b$-map of manifolds with corners, preserving the iterated fibration structures.* 2. *A stratum preserving map $f:\overline{M} \to \overline{N}$ is a map with the property that the preimage of any stratum in $\overline{N}$ is a union of strata in $\overline{M}$. We shall say that such a map is codimension preserving if, in addition, for any stratum $S\subset \overline{N}$ $$\textup{codim} \, f^{-1}(S) = \textup{codim} S.$$* 3. *We call a smoothly stratified, stratum and codimension preserving map "smooth strongly stratum preserving". We will always work in this category.* Our setting in this section is as follows. Consider a smooth strongly stratum preserving map $f:\overline{M} \to \overline{N}$. Consider the disc subbundle $\mathbb{B}^N \subset {}^{e}TN$, with fibres given by open discs of radius $\delta > 0$ with respect to the complete edge metric $\rho_N^{-2} g_N$. Here, we obviously assume $\delta>0$ to be a lower bound for the injectivity radius, which exists since $(N, \rho_N^{-2} g_N)$ is of bounded geometry.
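Before turning to the construction itself, we recall the elementary mechanism that the disc bundle serves. The following is a completely standard fact about fibre integration, stated here for a generic oriented disc bundle $\pi\colon \mathbb{B}\to X$ with a Thom form $\tau$ that is compactly supported in the fibres and has fibrewise total integral one; signs depend on orientation conventions.

```latex
% Projection formula for the fibre integration (push-forward) \pi_*:
% pulling back, wedging with the Thom form and integrating out the
% fibre reproduces the original form:
\[
  \pi_{*}\bigl(\tau\wedge\pi^{*}\alpha\bigr)
  =\Bigl(\int_{\mathrm{fibre}}\tau\Bigr)\,\alpha=\alpha .
\]
% Moreover, since d\tau = 0 and \tau has compact fibrewise support,
% Stokes' theorem in the fibres gives the chain map property
\[
  d\,\pi_{*}\bigl(\tau\wedge\omega\bigr)
  =\pm\,\pi_{*}\bigl(\tau\wedge d\omega\bigr).
\]
```

Replacing $\pi^{*}\alpha$ by a pullback along a fibrewise family of maps close to $f$ amounts to averaging $f^{*}$ over nearby maps; it is this averaging, carried out with $\exp\circ\,\mathbb{B}\widetilde{f}$ in the construction below, that produces an $L^2$-bounded operator still commuting with the differentials.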
Let $\pi: \widetilde{f}^*(\mathbb{B}^N) \to \widetilde{M}$ be the pullback bundle with the usual associated bundle map $\mathbb{B}\widetilde{f}: \widetilde{f}^*(\mathbb{B}^N) \to \mathbb{B}^N$, induced by the right projection. Consider the exponential $\exp: \mathbb{B}^N|_N \to N$ with respect to $\rho_N^{-2} g_N$, evaluating tangent vectors at the end points of the corresponding geodesics at time $\delta$. The map extends to $\exp: \mathbb{B}^N \to \widetilde{N}$, mapping $\mathbb{B}^N|_{\partial \widetilde{N}} \to \partial \widetilde{N}$. These maps combine into a diagram as in Figure [\[maps-diagram\]](#maps-diagram){reference-type="ref" reference="maps-diagram"}. **Lemma 25**. *The differential $df$ in the open interior $M$ extends to a fiberwise linear map between edge as well as wedge tangent bundles $${}^{e}df: {}^{e}TM \to {}^{e}TN, \qquad {}^{w}df: {}^{w}TM \to {}^{w}TN.$$ The corresponding pullbacks, defined by ${}^{e}df$ and ${}^{w}df$, consequently act as follows $$\label{pullbacks} \begin{split} {}^{e}f^*: & C^\infty(\widetilde{N}, \Lambda^* ({}^{e}T^*N) \otimes \mathscr{E}) \to C^\infty(\widetilde{M}, \Lambda^* ({}^{e}T^*M) \otimes f^*\mathscr{E}), \\ {}^{w}f^*: & C^\infty(\widetilde{N}, \Lambda^* ({}^{w}T^*N) \otimes \mathscr{E}) \to C^\infty(\widetilde{M}, \Lambda^* ({}^{w}T^*M) \otimes f^*\mathscr{E}), \end{split}$$ where $\mathscr{E}$ is the Mishchenko bundle over $\overline{N}$, as in [\[mi-bundles\]](#mi-bundles){reference-type="eqref" reference="mi-bundles"}, pulled back to $\widetilde{N}$ by the blowdown map $\beta: \widetilde{N} \to \overline{N}$ that is associated with the resolution $\widetilde{N}$.* *Proof.* We define the $e$-differential ${}^{e}df$ as follows. 
By assumption in Definition [Definition 24](#stratified-map-def){reference-type="ref" reference="stratified-map-def"}, $f$ preserves the iterated boundary fibration structures and hence sends curves that are tangent to the fibres at the boundary of $M$ to curves that are tangent to the fibres at the boundary of $N$. Hence we obtain ${}^{e}df: {}^{e}TM \to {}^{e}TN$. For the action of the differential between the wedge tangent bundles, recall that the total boundary functions $\rho_M \in C^\infty(\widetilde{M})$ and $\rho_N \in C^\infty(\widetilde{N})$ are nowhere vanishing in the open interior, and vanish to first order at each boundary face of $\widetilde{M}$ and $\widetilde{N}$, respectively. We write for any $V \in T_pM$ $$\label{d} \begin{split} df (V) &= \frac{\rho_N(f(p))}{\rho_M(p)} \cdot \frac{\rho_M(p)}{\rho_N(f(p))} \cdot df(V) \\ &= \frac{\rho_N(f(p))}{\rho_M(p)} \cdot \rho^{-1}_N(f(p)) \cdot df(\rho_M(p)V). \end{split}$$ If $V$ is now a section of ${}^{w}TM$, $\rho_M V$ extends to a section of ${}^{e}TM$, and $df(\rho_M V)$ extends to a section ${}^{e}df(\rho_M V)$ of ${}^{e}TN$. Consequently, $\rho^{-1}_N\circ f \cdot df(V)$ extends to a section $\widetilde{f}^* \rho^{-1}_N \cdot {}^{e}df(\rho_M V)$ of ${}^{w}TN$. Now, since $f$ is smoothly stratified and stratum preserving, it sends open strata of $\overline{M}$ smoothly into open strata of $\overline{N}$, and $\widetilde{f}$ sends boundary hypersurfaces of $\widetilde{M}$ onto boundary hypersurfaces of $\widetilde{N}$.
Thus $\rho^{-1}_M \cdot \widetilde{f}^* \rho_N$ is smooth and bounded (note that this is not the case for an arbitrary $b$-map), and we can define in view of [\[d\]](#d){reference-type="eqref" reference="d"} for any $V \in {}^{w}TM$ $$\begin{aligned} \label{wedge-diff-def} {}^{w}df (V) := \frac{\widetilde{f}^* \rho_N}{\rho_M} \cdot \widetilde{f}^* \rho^{-1}_N \cdot {}^{e}df(\rho_M V) \in {}^{w}TN.\end{aligned}$$ The action of the pullbacks ${}^{e}f^*$ and ${}^{w}f^*$, defined by ${}^{e}df$ and ${}^{w}df$, respectively, is then obvious. ◻ Now, as it is well known even in the smooth closed case, the pullbacks in [\[pullbacks\]](#pullbacks){reference-type="eqref" reference="pullbacks"} do not necessarily induce bounded maps in $L^2$ and hence do not define morphisms of $\mathscr{N}\Gamma$-Hilbert complexes. This is precisely where the Hilsum-Skandalis type replacement of the pullback is needed. The next proposition is a central element of the construction. **Proposition 26**. *The differential $d(\exp \circ \, \mathbb{B}\widetilde{f})$ in the open interior of $\widetilde{f}^*(\mathbb{B}^N)$ extends to a well-defined surjection between edge and wedge tangent bundles $$\label{differentials} \begin{split} &{}^{e}d (\exp \circ \, \mathbb{B}\widetilde{f}): {}^{e}T \widetilde{f}^*(\mathbb{B}^N) \to {}^{e}T\widetilde{N}, \\ &{}^{w}d (\exp \circ \, \mathbb{B}\widetilde{f}): {}^{w}T \widetilde{f}^*(\mathbb{B}^N) \to {}^{w}T\widetilde{N}. \end{split}$$ Here, ${}^{e}T \widetilde{f}^*(\mathbb{B}^N)$ and ${}^{w}T \widetilde{f}^*(\mathbb{B}^N)$ are defined as follows: the disc subbundle $\mathbb{B}^N \subset {}^{e}TN$, as well as its pullback $\widetilde{f}^*(\mathbb{B}^N)$ are manifolds with corners and we consider the corresponding edge and wedge tangent bundles. 
The pullbacks, defined by ${}^{e}d (\exp \circ \, \mathbb{B}\widetilde{f})$ and ${}^{w}d (\exp \circ \, \mathbb{B}\widetilde{f})$, act as follows $$\label{exp-f-pullback} \begin{split} &{}^{e} (\exp \circ \, \mathbb{B}\widetilde{f})^*: C^\infty(\widetilde{N}, \Lambda^* ({}^{e}T^*\widetilde{N}) \otimes \mathscr{E})\to C^\infty(\widetilde{f}^*(\mathbb{B}^N), \Lambda^* ({}^{e}T^* \widetilde{f}^*(\mathbb{B}^N)) \otimes \mathscr{E}'), \\ &{}^{w} (\exp \circ \, \mathbb{B}\widetilde{f})^*: C^\infty(\widetilde{N}, \Lambda^* ({}^{w}T^*\widetilde{N})\otimes \mathscr{E}) \to C^\infty(\widetilde{f}^*(\mathbb{B}^N), \Lambda^* ({}^{w}T^* \widetilde{f}^*(\mathbb{B}^N)) \otimes \mathscr{E}'), \end{split}$$ where $\mathscr{E}$ is the Mishchenko bundle as in Lemma [Lemma 25](#pullback-lemma){reference-type="ref" reference="pullback-lemma"} and $\mathscr{E}' := (\exp \circ \, \mathbb{B}\widetilde{f})^* \mathscr{E}$.* *Proof.* By [@ALMP3 §4.2], the disc subbundle $\mathbb{B}^N$ is a manifold with iterated boundary fibration structure, where each boundary hypersurface in $\mathbb{B}^N$ is the restriction of ${}^{e}TN$ to a boundary hypersurface of $\widetilde{N}$. Note that the disc fibres of $\mathbb{B}^N$ are open by definition and hence the boundary of the discs is by definition not part of the boundary fibration structure. Since $f:\overline{M} \to \overline{N}$ is a smoothly stratified map, the induced bundle map $\mathbb{B}\widetilde{f}$ is again smoothly stratified. By [@ALMP3 Lemma 4.3] we know that $\exp$ is also smoothly stratified and ${}^{e}d \exp$ is surjective (with respect to edge bundles). Surjectivity is not affected by the rescaling as in [\[wedge-diff-def\]](#wedge-diff-def){reference-type="eqref" reference="wedge-diff-def"}. Hence the statement follows by exactly the same argument as in Lemma [Lemma 25](#pullback-lemma){reference-type="ref" reference="pullback-lemma"}.
◻ Below, it will become necessary to replace $\mathscr{E}' := (\exp \circ \, \mathbb{B}\widetilde{f})^* \mathscr{E}$ by an isometric pullback bundle, where the isometry commutes with the pullback connections. The notion of connections on $\mathscr{N}\Gamma$-Hilbert module bundles is presented e.g. in [@Schick §3]. In fact, [@Schick Definition 3.2] introduces connections on $\mathscr{N}\Gamma$-Hilbert module bundles, [@Schick Definition 3.4] introduces pullback bundles and pullback connections. **Lemma 27**. *Consider the Mishchenko bundle $\mathscr{E} = \widetilde{N}_\Gamma \times_\Gamma \ell^2\Gamma$ with the canonical flat connection, induced by the exterior derivative on $\widetilde{N}_\Gamma$. Consider the pullback bundles $(\exp \circ \, \mathbb{B}\widetilde{f})^* \mathscr{E}$ and $(\widetilde{f} \circ \pi)^* \mathscr{E}$ with the pullback connections, see Figure [\[maps-diagram\]](#maps-diagram){reference-type="ref" reference="maps-diagram"} for the various maps. Then there exists an isometry of $\mathscr{N}\Gamma$-Hilbert module bundles, commuting with the flat pullback connections $$\mathscr{J}: (\exp \circ \, \mathbb{B}\widetilde{f})^* \mathscr{E} \to (\widetilde{f} \circ \pi)^* \mathscr{E}.$$* *Proof.* Consider the continuous family $$r_t: \mathbb{B}^N \to \mathbb{B}^N, \quad \mathbb{B}^N_p \ni v \mapsto tv \in \mathbb{B}^N_p,$$ for $t \in [0,1]$. Pullback by homotopic maps gives isomorphic bundles and hence there exists an isomorphism $\mathscr{J}': r_0^* \exp^* \mathscr{E} \to r_1^* \exp^* \mathscr{E}$. Consider the flat connection $\nabla$ on $\exp^* \mathscr{E}$, obtained as the pullback of the canonical flat connection on the Mishchenko bundle $\mathscr{E}$. We claim that this isomorphism $\mathscr{J}'$ commutes with the pullback connections $r_0^*\nabla \equiv \nabla$ and $r_1^*\nabla$. 
Indeed, consider $$r: [0,1] \times \mathbb{B}^N \to \mathbb{B}^N, \quad (t,v) \mapsto r_t(v).$$ Consider the pullback bundle $r^*\exp^* \mathscr{E}$ and the flat pullback connection $r^*\nabla$ on that bundle. Clearly, we have the following restrictions $$\begin{aligned} \begin{array}{lll} &\Bigl. r^*\exp^* \mathscr{E} \Bigr|_{\{0\} \times \mathbb{B}^N} = r_0^* \exp^* \mathscr{E} \equiv \exp^* \mathscr{E}, \quad &\Bigl. r^*\nabla \Bigr|_{\{0\} \times \mathbb{B}^N} = r_0^*\nabla \equiv \nabla, \\ &\Bigl. r^*\exp^* \mathscr{E} \Bigr|_{\{1\} \times \mathbb{B}^N} = r_1^* \exp^* \mathscr{E}, \quad &\Bigl. r^*\nabla \Bigr|_{\{1\} \times \mathbb{B}^N} = r_1^*\nabla. \end{array}\end{aligned}$$ The isomorphism $\mathscr{J}'$ may be constructed via parallel transport $P$ by $r^*\nabla$: given any $(0,p) \in [0,1] \times \mathbb{B}^N$, consider the path $\gamma(t) := (t,p), t \in [0,1]$ and define $$\mathscr{J}':= P_\gamma: r_0^* \exp^* \mathscr{E} \to r_1^* \exp^* \mathscr{E}.$$ It is an isometry, since the metric in each fibre is given by the $\ell^2\Gamma$ inner product and parallel transport preserves the inner product. The isomorphism commutes with the pullback connections $r_0^*\nabla$ and $r_1^*\nabla$, since it maps parallel sections to parallel sections: indeed, for any path $\gamma'$ between $p,q \in \mathbb{B}^N$ we have as indicated in Figure [\[commuting-pullback-connections\]](#commuting-pullback-connections){reference-type="ref" reference="commuting-pullback-connections"} $$P_\gamma \circ P_{\gamma'} = P_{\gamma'} \circ P_\gamma,$$ since parallel transport for flat connections is trivial along closed paths. Thus, parallel sections along $\gamma'$ with respect to $r_0^*\nabla$ are mapped by $\mathscr{J}'$ to parallel sections along $\gamma'$ with respect to $r_1^*\nabla$. This proves our claim and we can now finish the proof.
We have the following sequence of bundle isomorphisms (note that $r_1 = \textup{id}$) $$\begin{aligned} \exp^* \mathscr{E} = r_1^* \exp^* \mathscr{E} \cong r_0^* \exp^* \mathscr{E} = (\exp \, \circ \, r_0)^* \mathscr{E} = (\pi_N \circ r_0)^* \mathscr{E} = \pi_N^* \mathscr{E}.\end{aligned}$$ From here we conclude, using $\pi_N \circ \mathbb{B}\, \widetilde{f} = \widetilde{f} \circ \pi$ $$\begin{aligned} (\exp \circ \, \mathbb{B}\widetilde{f})^* \mathscr{E} = \mathbb{B}\widetilde{f}^* (\exp^* \mathscr{E}) \cong \mathbb{B}\widetilde{f}^* (\pi_N^* \mathscr{E}) = (\pi_N \circ \mathbb{B}\, \widetilde{f} )^* \mathscr{E} = (\widetilde{f} \circ \pi)^* \mathscr{E}.\end{aligned}$$ This constructs the isomorphism $\mathscr{J}$. Since the only non-trivial identification in $\mathscr{J}$ is the isomorphism $\mathscr{J}'$, it is an isometry and commutes with the pullback connections. ◻ We can now define the Hilsum-Skandalis replacement for $f^*$. **Theorem 28**. *Let $f: \overline{M} \to \overline{N}$ be smooth, strongly stratum preserving. Consider the Thom form $\tau[\widetilde{f}^*(\mathbb{B}^N)]$ of the disc bundle $\widetilde{f}^*(\mathbb{B}^N)$. 
Then the Hilsum-Skandalis maps $$\label{HS} \begin{split} &{}^{e}HS(f): C^\infty(\widetilde{N}, \Lambda^* ({}^{e}T^*N)\otimes \mathscr{E}) \to C^\infty(\widetilde{M}, \Lambda^* ({}^{e}T^*M)\otimes \widetilde{f}^*\mathscr{E}), \\ &{}^{e}HS(f) \omega := \pi_* \Bigl( \tau \bigl[ \widetilde{f}^*(\mathbb{B}^N) \bigr] \wedge \mathscr{J} \circ {}^{e}(\exp \circ \, \mathbb{B}\widetilde{f})^* \omega \Bigr) \\ &\qquad \qquad {}^{w}HS(f): C^\infty(\widetilde{N}, \Lambda^* ({}^{w}T^*N)\otimes \mathscr{E}) \to C^\infty(\widetilde{M}, \Lambda^* ({}^{w}T^*M)\otimes \widetilde{f}^*\mathscr{E}), \\ &\qquad \qquad {}^{w}HS(f) \omega := \pi_* \Bigl( \tau\bigl[\widetilde{f}^*(\mathbb{B}^N)\bigr] \wedge \mathscr{J} \circ {}^{w}(\exp \circ \, \mathbb{B}\widetilde{f})^* \omega \Bigr), \end{split}$$ are well-defined and extend to bounded maps on the corresponding $L^2$ completions $$\label{HSL2} \begin{split} &{}^{e}HS(f): L^2(\widetilde{N}, \Lambda^* ({}^{e}T^*N)\otimes \mathscr{E}, \rho_N^{-2} g_{\mathscr{E}}) \to L^2(\widetilde{M}, \Lambda^* ({}^{e}T^*M)\otimes \widetilde{f}^*\mathscr{E}, \rho_M^{-2}g_{\mathscr{E}}), \\ &{}^{w}HS(f): L^2(\widetilde{N}, \Lambda^* ({}^{w}T^*N) \otimes \mathscr{E}, g_{\mathscr{E}}) \to L^2(\widetilde{M}, \Lambda^* ({}^{w}T^*M)\otimes \widetilde{f}^*\mathscr{E}, g_{\mathscr{E}}). \end{split}$$ The maps commute with the twisted de Rham differentials $d_{\mathscr{E}}$ and $d_{\widetilde{f}^*\mathscr{E}}$ on smooth compactly supported sections. The metrics $g_{\mathscr{E}}$ are induced by the wedge metrics $g_N,g_M$ together with the $\ell^2\Gamma$ inner product along the fibers of $\mathscr{E}$.* *Proof.* The maps are well-defined by Proposition [Proposition 26](#exp-f-prop){reference-type="ref" reference="exp-f-prop"}. We now prove [\[HSL2\]](#HSL2){reference-type="eqref" reference="HSL2"}. Note first that $(M, \rho_M^{-2}g_{M})$ and $(N, \rho_N^{-2}g_{N})$ are of bounded geometry. 
Hence, boundedness of ${}^{e}HS(f)$ is discussed in detail in [@Spess Proposition 4.5], which extends [@Hilsum-Skandalis] to non-compact Riemannian manifolds of bounded geometry. Note here that the construction of ${}^{e}HS(f)$ differs from $T_f$ in [@Spess (112), (147)] only by the boundary isomorphism $\mathscr{J}$. We now establish boundedness of ${}^{w}HS(f)$: note that $(\exp \circ \, \mathbb{B}\widetilde{f})$ and $\pi$ are both smooth, strongly stratum preserving. Consequently, writing $\rho_N$ and $\rho_{\widetilde{f}^*(\mathbb{B}^N)}$ for the total boundary defining functions on $N$ and $\widetilde{f}^*(\mathbb{B}^N)$, respectively, we conclude that the quotients $$\frac{(\exp \circ \, \mathbb{B}\widetilde{f})^*\rho_N}{\rho_{\widetilde{f}^*(\mathbb{B}^N)}}, \quad \frac{\rho_{\widetilde{f}^*(\mathbb{B}^N)}}{\pi^*\rho_M},$$ are uniformly bounded. Given $\omega \in C_c^\infty(N, \Lambda^k (T^*N)\otimes \mathscr{E})$ of any fixed degree $k$, we compute $$\begin{aligned} {}^{w}HS(f) \rho_N^k \omega &\equiv {}^{e}HS(f) \rho_N^k \omega \\ &= \pi_* \Bigl( (\exp \circ \, \mathbb{B}\widetilde{f})^* \rho_N^k \cdot \tau\bigl[\widetilde{f}^*(\mathbb{B}^N)\bigr] \wedge \mathscr{J} \circ (\exp \circ \, \mathbb{B}\widetilde{f})^* \omega \Bigr) \\ &= \rho^k_M \cdot \pi_* \Bigl( \frac{(\exp \circ \, \mathbb{B}\widetilde{f})^*\rho^k_N}{\pi^*\rho^k_M} \cdot \tau\bigl[\widetilde{f}^*(\mathbb{B}^N)\bigr] \wedge \mathscr{J} \circ (\exp \circ \, \mathbb{B}\widetilde{f})^* \omega \Bigr).\end{aligned}$$ The presence of the bounded factor $\frac{(\exp \circ \, \mathbb{B}\widetilde{f})^*\rho^k_N}{\pi^*\rho^k_M}$ does not affect the argument in [@Spess Proposition 4.5] and hence $\rho^{-k}_M \cdot {}^{w}HS(f) \rho_N^k$ is again bounded on the $L^2$ completions of $k$-th degree twisted differential forms with respect to $\rho_N^{-2} g_{\mathscr{E}}$ and $\rho_M^{-2} g_{\mathscr{E}}$. 
This is equivalent to boundedness of ${}^{w}HS(f)$ in [\[HSL2\]](#HSL2){reference-type="eqref" reference="HSL2"} as claimed. The last claim that ${}^{e/w}HS(f)$ commutes with the de Rham differentials is proved by [@package Proposition 9.3 (i)], where Lemma [Lemma 27](#retract-lemma){reference-type="ref" reference="retract-lemma"} is implicitly used. ◻ 

## Hilsum-Skandalis maps for homotopic maps 

Consider a continuous family $f_s: \overline{M} \to \overline{N}$ of smooth strongly stratum preserving maps. Consider the pullback bundles $\widetilde{f}_s^*(\mathbb{B}^N)$. Pullbacks along homotopic maps are isomorphic as vector bundles, and hence there exists a continuous family $A_s: \widetilde{f}_0^*(\mathbb{B}^N) \to \widetilde{f}_s^*(\mathbb{B}^N)$ of vector bundle isomorphisms, leading to the following commutative diagram. Similarly, there exists a continuous family $\mathscr{J}_s: \widetilde{f}_s^* \mathscr{E} \to \widetilde{f}_0^* \mathscr{E}$ of bundle isomorphisms, which by the same argument as in Lemma [Lemma 27](#retract-lemma){reference-type="ref" reference="retract-lemma"} are isometries of $\mathscr{N}\Gamma$-Hilbert module bundles, commuting with the pullback connections. Motivated by Theorem [Theorem 28](#HS-thm){reference-type="ref" reference="HS-thm"}, we set (changing the construction slightly) $$\label{HS-prime} \begin{split} &{}^{w}HS(f_s)': C^\infty(\widetilde{N}, \Lambda^\ell ({}^{w}T^*N) \otimes \mathscr{E}) \to C^\infty(\widetilde{M}, \Lambda^\ell ({}^{w}T^*M)\otimes \widetilde{f}_0^*\mathscr{E}), \\ &{}^{w}HS(f_s)' \omega := (-1)^{\ell} \, {}^{w}\pi_* \Bigl( \tau\bigl[\widetilde{f}_0^*(\mathbb{B}^N)\bigr] \wedge \mathscr{J}_s \circ \mathscr{J} \circ {}^{w}(\exp \circ \, \mathbb{B}\widetilde{f}_s \circ A_s)^* \omega \Bigr), \end{split}$$ where $\mathscr{J}$ is constructed in Lemma [Lemma 27](#retract-lemma){reference-type="ref" reference="retract-lemma"}. As before, ${}^{w}HS(f_s)'$ extends to a bounded map between the $L^2$-completions. 
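For orientation, it may help to recall the basic normalization of fibre integration against a Thom form (a standard fact, stated here with sign conventions suppressed; it is not spelled out in the text): since the Thom form has integral one along each disc fibre, fibre integration against it recovers any form pulled back from the base,

```latex
% Projection formula for fibre integration: for any differential form
% \beta on the base and the Thom form \tau with fibrewise integral one,
\pi_*\Bigl( \tau\bigl[\widetilde{f}_0^*(\mathbb{B}^N)\bigr] \wedge \pi^*\beta \Bigr)
  \;=\; \Bigl( \int_{\mathrm{fibre}} \tau\bigl[\widetilde{f}_0^*(\mathbb{B}^N)\bigr] \Bigr)\, \beta
  \;=\; \beta .
```

In this light, ${}^{w}HS(f_s)'$ may be viewed as a fibrewise average of pullbacks along the maps $\exp \circ \, \mathbb{B}\widetilde{f}_s \circ A_s$, weighted by the Thom form.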
The following proposition explains how these maps define a chain homotopy. **Proposition 29**. *Write $p_s:= \exp \circ \, \mathbb{B}\widetilde{f}_s \circ A_s: \widetilde{f}_0^*(\mathbb{B}^N) \to \widetilde{N}$ and set $$p: \widetilde{f}_0^*(\mathbb{B}^N) \times [0,1] \to \widetilde{N}, \qquad (v,s) \mapsto p_s(v).$$ Then ${}^{w}HS(f_1)' - {}^{w}HS(f_0)' = dK + Kd$ with $K$ bounded in $L^2$ and defined by $$\begin{split} &K: C^\infty(\widetilde{N}, \Lambda^* ({}^{w}T^*N)\otimes \mathscr{E}) \to C^\infty(\widetilde{M}, \Lambda^{*-1} ({}^{w}T^*M)\otimes \widetilde{f}_0^* \mathscr{E}), \\ &K \omega := (-1)^{\ell} \, {}^{w}\pi_* \Bigl( \tau\bigl[\widetilde{f}_0^*(\mathbb{B}^N)\bigr] \wedge \mathscr{J}_s \circ \mathscr{J} \circ \int_0^1 \iota_{\partial_s} \bigl( {}^{w}p^* \omega \bigr) \Bigr), \end{split}$$* *Proof.* Consider $\omega \in C^\infty(\widetilde{N}, \Lambda^\ell ({}^{w}T^*N)\otimes \mathscr{E})$. The statement follows by plugging the next computation into [\[HS-prime\]](#HS-prime){reference-type="eqref" reference="HS-prime"}. 
We adapt the notation for the differentials $d_{\mathscr{E}}$, letting the lower index specify the underlying space, and obtain $$\begin{aligned} d_{\widetilde{f}_0^*(\mathbb{B}^N)} &\int_0^1 \iota_{\partial_s} \bigl( {}^{w}p^* \omega \bigr) = \int_0^1 \iota_{\partial_s} d_{\widetilde{f}_0^*(\mathbb{B}^N)} \bigl( {}^{w}p^* \omega \bigr) \\ = \, &\int_0^1 \iota_{\partial_s} \Bigl( d_{\widetilde{f}_0^*(\mathbb{B}^N)\times [0,1]} \bigl( {}^{w}p^* \omega \bigr) + (-1)^{\ell+1} \frac{\partial \bigl( {}^{w}p^* \omega \bigr)}{\partial s} \wedge ds \Bigr) \\ = \, &\int_0^1 \iota_{\partial_s} \Bigl( {}^{w}p^* \bigl( d_{\widetilde{N}}\omega \bigr) + (-1)^{\ell+1} \frac{\partial \bigl( {}^{w}p^* \omega \bigr)}{\partial s} \wedge ds \Bigr) \\ = \, &\int_0^1 \iota_{\partial_s} \Bigl( {}^{w}p^* \bigl( d_{\widetilde{N}}\omega \bigr)\Bigr) + {}^{w}p_1^* \omega - {}^{w}p_0^* \omega.\end{aligned}$$ ◻ 

# Stability of $L^2$-Betti numbers and Novikov-Shubin invariants {#stability-section} 

Let us first recall the notion of homotopy equivalences between $\mathscr{N}\Gamma$-Hilbert complexes. We refer to [@Lueck-book §1 and §2] and [@Gromov-Shubin §4] for more details. We continue in the notation fixed in Definition [Definition 6](#gamma-notions2){reference-type="ref" reference="gamma-notions2"}. **Definition 30**. *Let $\Gamma$ be a finitely generated discrete group and recall Definition [Definition 6](#gamma-notions2){reference-type="ref" reference="gamma-notions2"}.* 1. *A homotopy between two morphisms of $\mathscr{N}\Gamma$-Hilbert complexes $f,h:H_{\bullet}\rightarrow K_{\bullet}$ is a sequence of bounded linear operators $T_k :H_k\rightarrow K_{k-1}$ commuting with the $\Gamma$-action such that $$f_k-h_k = T_{k+1}\circ d_k+d_{k-1}\circ T_k \ \textup{on} \ \mathscr{D}(d_k) \subset H_k.$$* 2. 
*Two $\mathscr{N}\Gamma$-Hilbert complexes $H_{\bullet}$ and $K_{\bullet}$ are said to be homotopy equivalent if there are morphisms $f:H_{\bullet}\rightarrow K_{\bullet}$ and $h:K_{\bullet}\rightarrow H_{\bullet}$ such that $f\circ h$ and $h\circ f$ are homotopic to the identity morphisms of $K_{\bullet}$ and $H_{\bullet}$ respectively.* We also introduce homotopy equivalence in the smoothly stratified setting. **Definition 31**. *Consider two compact smoothly stratified spaces $\overline{M}$ and $\overline{N}$. We say that $\overline{M}$ and $\overline{N}$ are "smoothly stratified, strongly stratum preserving homotopy equivalent" if there exist two smooth strongly stratum preserving maps $f: \overline{M} \to \overline{N}$ and $h: \overline{N} \to \overline{M}$, such that $f\circ h \sim \textup{id}_{\overline{N}}$ and $h\circ f \sim \textup{id}_{\overline{M}}$ with homotopies given by continuous families of smooth strongly stratum preserving maps.* We show that an equivalence as in Definition [Definition 31](#hom-equiv-def){reference-type="ref" reference="hom-equiv-def"} induces a chain homotopy equivalence of minimal and maximal Hilbert complexes for $\overline{M}_\Gamma$ and $\overline{N}_\Gamma$. **Corollary 32**. *Let $\overline{M}$ and $\overline{N}$ be compact smoothly stratified, strongly stratum preserving homotopy equivalent in the sense of Definition [Definition 31](#hom-equiv-def){reference-type="ref" reference="hom-equiv-def"}. Let $\overline{N}_\Gamma$ be a Galois $\Gamma$-covering of $\overline{N}$. Let $\mathcal{E}_{\overline{N}}$ be the associated Mishchenko bundle over $\overline{N}$. 
Then the $\mathscr{N}\Gamma$-Hilbert complexes $$(\mathscr{D}_{\min / \max}(M, f^* \mathscr{E}_{\overline{N}}), d_{f^* \mathscr{E}_{\overline{N}}}) \ \textup{and} \ (\mathscr{D}_{\min / \max}(N, \mathscr{E}_{\overline{N}}), d_{\mathscr{E}_{\overline{N}}})$$ are chain homotopy equivalent.* *Proof.* Consider the two smooth strongly stratum preserving maps $f: \overline{M} \to \overline{N}$ and $h: \overline{N} \to \overline{M}$ that define the smoothly stratified, strongly stratum preserving homotopy equivalence between $\overline{M}$ and $\overline{N}$. In particular, $f\circ h \sim \textup{id}_{\overline{N}}$ with the homotopy given by a continuous family $u_s: \overline{N} \to \overline{N}, s\in [0,1]$, of smooth strongly stratum preserving maps. Similarly, $h\circ f \sim \textup{id}_{\overline{M}}$ with the homotopy given by a continuous family $v_s: \overline{M} \to \overline{M}, s\in [0,1]$, of smooth strongly stratum preserving maps. We have $$u_1= f\circ h, \quad u_0 = \textup{id}_{\overline{N}}, \qquad v_1= h\circ f, \quad v_0 = \textup{id}_{\overline{M}}.$$ Consider the Hilsum-Skandalis replacements ${}^{w}HS(u_s)'$ and ${}^{w}HS(v_s)'$ of the homotopies. 
By Theorem [Theorem 28](#HS-thm){reference-type="ref" reference="HS-thm"} these maps are bounded on the $L^2$ completions, commute with the differential by [@package Proposition 9.3 (i)] and hence define bounded maps between the minimal and maximal Hilbert complexes $$\begin{aligned} &{}^{w}HS(u_s)': (\mathscr{D}_{\min / \max}(N, \mathscr{E}), d_{\mathscr{E}}) \to (\mathscr{D}_{\min / \max}(N, \mathscr{E}), d_{\mathscr{E}}), \\ &{}^{w}HS(v_s)': (\mathscr{D}_{\min / \max}(M, \mathscr{E}), d_{\mathscr{E}}) \to (\mathscr{D}_{\min / \max}(M, \mathscr{E}), d_{\mathscr{E}}).\end{aligned}$$ Moreover, by Proposition [Proposition 29](#chain-homotopy1){reference-type="ref" reference="chain-homotopy1"}, we find for appropriate $K_u, K_v$ $$\label{HK} \begin{split} &{}^{w}HS(f\circ h)' - {}^{w}HS(\textup{id}_{\overline{N}})' \equiv {}^{w}HS(u_1)' - {}^{w}HS(u_0)' = d_NK_u + K_ud_N, \\ &{}^{w}HS(h \circ f)' - {}^{w}HS(\textup{id}_{\overline{M}})' \equiv {}^{w}HS(v_1)' - {}^{w}HS(v_0)' = d_MK_v + K_vd_M. \end{split}$$ Note that $K_u$ and $K_v$ do not commute with the differential and hence their boundedness in the $L^2$ spaces, as asserted in Proposition [Proposition 29](#chain-homotopy1){reference-type="ref" reference="chain-homotopy1"}, does not yet imply that they preserve the minimal and maximal domains. However, their relation (see [\[HK\]](#HK){reference-type="eqref" reference="HK"}) with $HS(u_s)'$ and $HS(v_s)'$, which do preserve the domains, implies $$\label{K} \begin{split} &K_u: \mathscr{D}^*_{\min / \max}(N, \mathscr{E}) \to \mathscr{D}^{*-1}_{\min / \max}(N, \mathscr{E}),\\ &K_v: \mathscr{D}^*_{\min / \max}(M, \mathscr{E}) \to \mathscr{D}^{*-1}_{\min / \max}(M, \mathscr{E}). 
\end{split}$$ If we had ${}^{w}HS(\textup{id}_{\overline{N}})' = \textup{Id}$ and ${}^{w}HS(\textup{id}_{\overline{M}})' = \textup{Id}$, this would already mean that the $\mathscr{N}\Gamma$-Hilbert complexes $(\mathscr{D}_{\min / \max}(M, \mathscr{E}), d_{\mathscr{E}})$ and $(\mathscr{D}_{\min / \max}(N, \mathscr{E}), d_{\mathscr{E}})$ are chain homotopy equivalent as in Definition [Definition 30](#gamma-notions4){reference-type="ref" reference="gamma-notions4"}. This would finish the proof. However, this is not the case. In order to compute $({}^{w}HS(\textup{id}_{\overline{N}})' - \textup{Id})$ as well as $({}^{w}HS(\textup{id}_{\overline{M}})' - \textup{Id})$, we consider the exponential maps as above (recall that the disc fibres in the bundles $\mathbb{B}^N$ and $\mathbb{B}^M$ are of radius $\delta$, some lower bound for the injectivity radius) $$\begin{aligned} &\exp^N: \mathbb{B}^N |_N \to N, \qquad V \mapsto \gamma_V(\delta), \\ &\exp^M: \mathbb{B}^M |_M \to M, \qquad V \mapsto \gamma_V(\delta),\end{aligned}$$ where $\gamma_V$ denotes the geodesic with respect to the complete edge metrics on either $N$ or $M$. 
Instead of evaluating the geodesics at $s=\delta$, we evaluate them at any $s \in [0,\delta]$ and define $$\begin{aligned} &\exp_s^N: \mathbb{B}^N |_N \to N, \qquad V \mapsto \gamma_V(s), \\ &\exp_s^M: \mathbb{B}^M |_M \to M, \qquad V \mapsto \gamma_V(s).\end{aligned}$$ Applying Proposition [Proposition 29](#chain-homotopy1){reference-type="ref" reference="chain-homotopy1"} to $p_s$, which is defined in terms of the extensions $\exp^N_s: \mathbb{B}^N \to \widetilde{N}$ and $\exp^M_s: \mathbb{B}^M \to \widetilde{M}$, gives $$\label{HK2} \begin{split} &{}^{w}HS(\textup{id}_{\overline{N}})' - \textup{Id} = d_{\mathscr{E}}K'_u + K'_ud_{\mathscr{E}}, \qquad K'_u: \mathscr{D}^*_{\min / \max}(N, \mathscr{E}) \to \mathscr{D}^{*-1}_{\min / \max}(N, \mathscr{E}),\\ &{}^{w}HS(\textup{id}_{\overline{M}})' - \textup{Id} = d_{\mathscr{E}}K'_v + K'_vd_{\mathscr{E}}, \qquad K'_v: \mathscr{D}^*_{\min / \max}(M, \mathscr{E}) \to \mathscr{D}^{*-1}_{\min / \max}(M, \mathscr{E}). \end{split}$$ Combining [\[HK\]](#HK){reference-type="eqref" reference="HK"}, [\[K\]](#K){reference-type="eqref" reference="K"} and [\[HK2\]](#HK2){reference-type="eqref" reference="HK2"} proves the statement. ◻ 

## Proof of stability 

Our proof is now a consequence of the abstract stability result in [@Gromov-Shubin Proposition 4.1], see also [@Lueck-book Theorem 2.19], which we now state for convenience. **Theorem 33**. *Consider two $\mathscr{N}\Gamma$-Hilbert complexes $(\mathscr{D},d)$ and $(\mathscr{D}',d')$. If they are chain homotopy equivalent as $\mathscr{N}\Gamma$-Hilbert complexes, then their spectral density functions $F_*(\lambda, (\mathscr{D},d))$ and $F_*(\lambda, (\mathscr{D}',d'))$ are dilatationally equivalent, cf. [@Lueck-book Definition 2.7], and in particular we have the following.* 1. *The Betti numbers $b^*_{(2),\Gamma}(\mathscr{D},d)$ and $b^*_{(2),\Gamma}(\mathscr{D}',d')$ are equal.* 2. 
*$(\mathscr{D},d)$ is Fredholm iff $(\mathscr{D}',d')$ is so and the Novikov-Shubin invariants $\alpha_*(\mathscr{D},d)$ and $\alpha_*(\mathscr{D}',d')$ are equal.* We can now prove our second main result, Theorem [Theorem 2](#main2){reference-type="ref" reference="main2"}. *Proof of Theorem [Theorem 2](#main2){reference-type="ref" reference="main2"}.* **Step 1)** Using (in both lines below) Proposition [Proposition 7](#minmax-thm){reference-type="ref" reference="minmax-thm"} for the first equality and Corollary [Corollary 32](#chain-corollary){reference-type="ref" reference="chain-corollary"} for the second equality (this is the only place where the Hilsum-Skandalis replacement is used), we know that $$\begin{aligned} b^*_{(2),\mathrm{max}/\mathrm{min}}(\overline{N}_{\Gamma}) &= b^*_{(2),\mathrm{max}/\mathrm{min}}(\overline{N},\mathcal{E}_{\overline{N}_{\Gamma}}) = b^*_{(2),\mathrm{max}/\mathrm{min}}(\overline{M},f^*\mathcal{E}_{\overline{N}_{\Gamma}}), \\ \alpha_{*,\mathrm{max}/\mathrm{min}}(\overline{N}_{\Gamma}) &= \alpha_{*,\mathrm{max}/\mathrm{min}}(\overline{N},\mathcal{E}_{\overline{N}_{\Gamma}}) = \alpha_{*,\mathrm{max}/\mathrm{min}}(\overline{M},f^*\mathcal{E}_{\overline{N}_{\Gamma}}).\end{aligned}$$ **Step 2)** We claim that $f^*\mathcal{E}_{\overline{N}_{\Gamma}} \cong \mathcal{E}_{f^*\overline{N}_{\Gamma}}$ and that the isomorphism commutes with the natural flat pullback connections. The proof amounts to understanding the isomorphism explicitly. Consider an atlas $\{U_\alpha\}_{\alpha}$ on $\overline{N}$, consisting of trivializing neighborhoods for the Galois $\Gamma$-covering $p:\overline{N}_{\Gamma} \to \overline{N}$. The covering comes with local trivializations $$\phi_\alpha: \Bigl. \overline{N}_{\Gamma} \Bigr|_{U_\alpha} \to U_\alpha \times \Gamma, \quad x \mapsto (p(x),\phi_\alpha(x)),$$ and transition functions $\phi_{\alpha, \beta}: U_\alpha \cap U_\beta \to \Gamma$. 
Let us recall the construction of the associated flat bundle $\mathcal{E}_{\overline{N}_{\Gamma}} = \overline{N}_{\Gamma} \times_\Gamma \ell^2\Gamma$ in order to fix notation. The Mishchenko bundle is defined as a set of equivalence classes $$\mathcal{E}_{\overline{N}_{\Gamma}} := \{ [(x,v)] \subset \overline{N}_{\Gamma} \times \ell^2\Gamma \mid \forall_{\gamma \in \Gamma}: (x,v) \sim (\gamma \cdot x, \gamma^{-1} \cdot v)\},$$ where $\Gamma$ acts naturally on $\overline{N}_{\Gamma}$ and $\ell^2\Gamma$. Its bundle structure comes from the local trivializations (the transition functions are given by the action of $\phi_{\alpha, \beta}$ on $\ell^2\Gamma$) $$\Phi_\alpha: \Bigl. \mathcal{E}_{\overline{N}_{\Gamma}} \Bigr|_{U_\alpha} \to U_\alpha \times \ell^2\Gamma, \quad [(x,v)] \mapsto (p(x),\phi_\alpha(x) \cdot v).$$ We now consider the pullbacks of these objects by $f$: the pullback $f^*\overline{N}_{\Gamma}$ is a Galois $\Gamma$-covering over $\overline{M}$, and the pullback $f^*\mathcal{E}_{\overline{N}_{\Gamma}}$ is a bundle of Hilbert modules. Both are defined as sets as follows $$\begin{aligned} &f^*\overline{N}_{\Gamma} := \left\{(q,x) \in \overline{M} \times \Bigl. \overline{N}_{\Gamma} \mid f(q) = p(x) \right\}, \\ &f^*\mathcal{E}_{\overline{N}_{\Gamma}} := \left\{(q; [(x,v)]) \in \overline{M} \times \Bigl. \mathcal{E}_{\overline{N}_{\Gamma}} \mid f(q) = p(x) \right\}.\end{aligned}$$ Their covering and bundle structures, respectively, are given in terms of the atlas $\{f^{-1}(U_\alpha)\}_{\alpha}$ on $\overline{M}$ of trivializing neighborhoods and local trivializations $$\begin{aligned} &f^* \phi_\alpha: \Bigl. f^*\overline{N}_{\Gamma} \Bigr|_{f^{-1}(U_\alpha)} \to f^{-1}(U_\alpha) \times \Gamma, \quad (q,x) \mapsto (q,\phi_\alpha(x)), \\ &f^*\Phi_\alpha: \Bigl. 
f^*\mathcal{E}_{\overline{N}_{\Gamma}} \Bigr|_{f^{-1}(U_\alpha)} \to f^{-1}(U_\alpha) \times \ell^2\Gamma, \quad (q;[(x,v)]) \mapsto (q,\phi_\alpha(x)\cdot v), \end{aligned}$$ and transition functions given in both cases by $f^*\phi_{\alpha, \beta}: f^{-1}(U_\alpha) \cap f^{-1}(U_\beta) \to \Gamma$. Our task is to compare $f^*\mathcal{E}_{\overline{N}_{\Gamma}}$ to the bundle $\mathcal{E}_{f^*\overline{N}_{\Gamma}} := f^*\overline{N}_{\Gamma} \times_\Gamma \ell^2 \Gamma$. The latter bundle is defined as a set of equivalence classes $$\mathcal{E}_{f^*\overline{N}_{\Gamma}} := \Bigl\{ \Bigl[\bigl((q,x); v\bigr)\Bigr] \subset f^*\overline{N}_{\Gamma} \times \ell^2\Gamma \mid \forall_{\gamma \in \Gamma}: \bigl((q,x); v\bigr) \sim \bigl((q,\gamma \cdot x), \gamma^{-1} \cdot v\bigr) \Bigr\}.$$ Its bundle structure is given by the local trivializations $$\begin{aligned} \Psi_\alpha: \Bigl. \mathcal{E}_{f^*\overline{N}_{\Gamma}} \Bigr|_{f^{-1}(U_\alpha)} \to f^{-1}(U_\alpha) \times \ell^2\Gamma, \quad \Bigl[\bigl((q,x); v\bigr)\Bigr] \mapsto (q,\phi_\alpha(x)\cdot v), \end{aligned}$$ with transition functions given as before for $f^*\mathcal{E}_{\overline{N}_{\Gamma}}$ by $f^*\phi_{\alpha, \beta}$. We can now define an isomorphism $f^*\mathcal{E}_{\overline{N}_{\Gamma}} \cong \mathcal{E}_{f^*\overline{N}_{\Gamma}}$, namely $$I: \mathcal{E}_{f^*\overline{N}_{\Gamma}} \to f^*\mathcal{E}_{\overline{N}_{\Gamma}}, \quad \Bigl[\bigl((q,x); v\bigr)\Bigr] \mapsto (q;[(x,v)]).$$ One can easily check that this is well-defined, since $$I \bigl((q,\gamma\cdot x); \gamma^{-1}\cdot v\bigr) = \bigl(q; (\gamma\cdot x, \gamma^{-1}\cdot v)\bigr) \in (q;[(x,v)]).$$ By construction we have a commutative diagram as in Figure [\[iso-pullbacks\]](#iso-pullbacks){reference-type="ref" reference="iso-pullbacks"}. 
Since $I$ is acting as identity in local trivializations, it commutes with the natural flat pullback connections and hence we have a functoriality result $$\begin{aligned} b^*_{(2),\mathrm{max}/\mathrm{min}}(\overline{M},f^*\mathcal{E}_{\overline{N}_{\Gamma}}) &=b^*_{(2),\mathrm{max}/\mathrm{min}}(\overline{M},\mathcal{E}_{f^*\overline{N}_{\Gamma}}), \\ \alpha_{*,\mathrm{max}/\mathrm{min}}(\overline{M},f^*\mathcal{E}_{\overline{N}_{\Gamma}}) &=\alpha_{*,\mathrm{max}/\mathrm{min}}(\overline{M},\mathcal{E}_{f^*\overline{N}_{\Gamma}}).\end{aligned}$$ **Step 3)** Now we have $$\begin{aligned} b^*_{(2),\mathrm{max}/\mathrm{min}}(\overline{M},\mathcal{E}_{f^*\overline{N}_{\Gamma}})=b^*_{(2),\mathrm{max}/\mathrm{min}}(\overline{M},\mathcal{E}_{\overline{M}_{\Gamma}}) &= b^*_{(2),\mathrm{max}/\mathrm{min}}(\overline{M}_{\Gamma}), \\ \alpha_{*,\mathrm{max}/\mathrm{min}}(\overline{M},\mathcal{E}_{f^*\overline{N}_{\Gamma}})=\alpha_{*,\mathrm{max}/\mathrm{min}}(\overline{M},\mathcal{E}_{\overline{M}_{\Gamma}}) &= \alpha_{*,\mathrm{max}/\mathrm{min}}(\overline{M}_{\Gamma}),\end{aligned}$$ since $\overline{M}_{\Gamma}$ and $f^*\overline{N}_{\Gamma}$ are isomorphic as Galois $\Gamma$-coverings. ◻ *Proof of Corollary [Corollary 3](#universal-c){reference-type="ref" reference="universal-c"}.* This follows immediately by the above proof and Corollary [Corollary 21](#pullback-u){reference-type="ref" reference="pullback-u"}. ◻ [Albin, P.]{.smallcaps} *On the Hodge theory of stratified spaces. Hodge theory and $L^2$-analysis*, Adv. Lect. Math (2017), 39, 1--78, Int. Press, Somerville, MA. [Albin, P.]{.smallcaps}, [Gell-Redman, J.]{.smallcaps} *The index formula for families of Dirac type operators on pseudomanifolds*, J. Differential Geom. 125 (2), 207--343. [Albin, P.]{.smallcaps}, [Leichtnam, E.]{.smallcaps}, [Mazzeo, R.]{.smallcaps}, [Piazza, P.]{.smallcaps} *The signature package on Witt spaces*, Ann. Sci. Éc. Norm. Supér. (4) 45 (2012), no. 2, 241--310. 
[Albin, P.]{.smallcaps}, [Leichtnam, E.]{.smallcaps}, [Mazzeo, R.]{.smallcaps}, [Piazza, P.]{.smallcaps} *Hodge theory on Cheeger spaces*, J. Reine Angew. Math. 744 (2018), 29--102. [Albin, P.]{.smallcaps}, [Leichtnam, E.]{.smallcaps}, [Mazzeo, R.]{.smallcaps}, [Piazza, P.]{.smallcaps} *Novikov conjecture on Cheeger spaces*, J. Noncomm. Geom. 11 (2017), 451--506. [Álvarez López, J. A.]{.smallcaps}, [Calaza, M.]{.smallcaps}, *Witten's perturbation on strata*, Asian J. Math. 21 (2017), no. 1, 47--126. [Atiyah, M. F.]{.smallcaps}, *Elliptic operators, discrete groups and von Neumann algebras*, Astérisque 32-33 (1976), 10--16. [Brasselet, J.-P.]{.smallcaps}; [Hector, G.]{.smallcaps}; [Saralegi, M.]{.smallcaps}, *Théorème de de Rham pour les variétés stratifiées.* (French) \[The de Rham theorem for stratified manifolds\] Ann. Global Anal. Geom. 9 (1991), no. 3, 211--243. [Brüning, J.]{.smallcaps}, [Lesch, M.]{.smallcaps}, *Hilbert complexes*, J. Funct. Anal. 108 (1992), 88--132. [Dodziuk, J.]{.smallcaps}, *de Rham-Hodge theory for $L^2$-cohomology of infinite coverings*, Topology 16 (1977), 157--165. [Gromov, M.]{.smallcaps}, [Shubin, M. A.]{.smallcaps}, *von Neumann spectra near zero*, Geom. Funct. Anal. 1 (1991), no. 4, 375--404. [Gromov, M.]{.smallcaps}, [Shubin, M. A.]{.smallcaps}, *Erratum to: "von Neumann spectra near zero"*, Geom. Funct. Anal. 1 (1991), no. 4, 375--404; Geom. Funct. Anal. 5 (1995), no. 4, 729. [Hatcher, A.]{.smallcaps}, *Algebraic Topology*, Cambridge University Press, Cambridge, (2002). [Hilsum, M.]{.smallcaps}, [Skandalis, G.]{.smallcaps}, *Invariance par homotopie de la signature à coefficients dans un fibré presque plat*, J. Reine Angew. Math. 423 (1992), 73--99. [Lesch, M.]{.smallcaps}, *Singular asymptotics lemma*, in "Pseudo-differential Calculus and Mathematical Physics", Akademie Verlag, Berlin, editors M. Demuth, E. Schrohe, B.-W. Schulze (1994). 
[Lück, W.]{.smallcaps}, *$L^2$-invariants: theory and applications to geometry and $K$-theory*, A Series of Modern Surveys in Mathematics 44, Springer-Verlag, Berlin, (2002). [Manetti, M.]{.smallcaps}, *Topology*, Unitext, vol. 91, Springer, Cham, (2015). [Mather, J.]{.smallcaps}, *Stratifications and mappings*, in "Dynamical systems (Proc. Sympos., Univ. Bahia, Salvador, 1971)", (1973), 195--232. [Mazzeo, R.]{.smallcaps}, *Elliptic theory of differential edge operators. I*, Comm. PDE **16** (1991), no. 10, 1615--1664. [Melrose, R.B.]{.smallcaps}, *The Atiyah-Patodi-Singer index theorem*, Research Notes in Mathematics, vol. 4, A K Peters Ltd., Wellesley, MA, (1993). [Mooers, E.]{.smallcaps}, *Heat kernel asymptotics on manifolds with conic singularities*, J. Anal. Math. **78** (1999), 1--36. [Novikov, S.P.]{.smallcaps} and [Shubin, M.A.]{.smallcaps}, *Morse theory and von Neumann $II_1$-factors*, Doklady Akad. Nauk SSSR 289 (1986), 289--292. [Novikov, S.P.]{.smallcaps} and [Shubin, M.A.]{.smallcaps}, *Morse theory and von Neumann invariants on non-simply connected manifolds*, Uspekhi Mat. Nauk 41, 5 (1986), 222--223 (in Russian). [Piazza, P.]{.smallcaps}, [Vertman, B.]{.smallcaps}, *Eta and rho invariants on manifolds with edges*, Ann. Inst. Fourier (Grenoble) 69 (2019), no. 5, 1955--2035. [Piazza, P.]{.smallcaps}, [Vertman, B.]{.smallcaps}, *Signatures on Witt spaces with boundary*, Advances in Mathematics 405 (2022). [Schick, T.]{.smallcaps}, *$L^2$-index theorems, KK-theory, and connections*, New York J. Math. 11 (2005), 387--443. [Spessato, S.]{.smallcaps}, *Pullback functors for reduced and unreduced $L^{q,p}$-cohomology*, Ann. Global Anal. Geom. 62 (2022), 533--578. [Verona, A.]{.smallcaps}, *Stratified mappings---structure and triangulability*, Lecture Notes in Mathematics, 1102, Springer-Verlag, Berlin, (1984). [Youssin, B.]{.smallcaps}, *$L^p$ cohomology of cones and horns*, J. Differential Geom. 39 (1994), no. 3, 559--603.
arxiv_math
{ "id": "2310.05721", "title": "Stability of $L^2-$invariants on stratified spaces", "authors": "Francesco Bei, Paolo Piazza, Boris Vertman", "categories": "math.DG math.GT", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | The interior regularity of area-minimizing integral currents and semi-calibrated currents has been studied extensively in recent decades, with sharp dimension estimates established on their interior singular sets in any dimension and codimension. In stark contrast, the best result in this direction for general almost-minimizing integral currents is due to Bombieri in the 1980's, and demonstrates that the interior regular set is dense. The main results of this article show the sharpness of Bombieri's result by constructing two families of examples of area almost-minimizing integral currents whose flat singular sets contain any closed, empty interior subset $K$ of an $m$-dimensional plane in $\mathbb{R}^{m+n}$. The first family of examples consists of codimension one currents induced by a superposition of $C^{k,\alpha_{*}}$ graphs with $K$ contained in the boundary of their zero set. The second family of examples consists of two dimensional area almost-minimizing integral currents in $\mathbb{R}^4$, whose set of branching singularities contains $K$. address: - Max Planck Institute, Inselstraße 22, 04103 Leipzig, Germany - Department of Mathematics, Fine Hall, Princeton University, Washington Road, Princeton, NJ 08540, USA author: - Max Goering - Anna Skorobogatova bibliography: - references.bib title: | Flat interior singularities for area almost-\ minimizing currents --- 

# Introduction and main results 

Suppose that $T$ is an $m$-dimensional integral current in $\mathbb{R}^{m+n}$. Recall that a point $x \in \mathrm{spt}(T)$ is called an *interior regular point* if there exist a ball $\mathbf{B}_r (x)$ and $\alpha > 0$ such that $\mathrm{spt}(T)$ is a $C^{1,\alpha}$ embedded submanifold of $\mathbb{R}^{m+n}$ without boundary in $\mathbf{B}_r (x)$. A point $x \in \mathrm{spt}(T) \setminus \mathrm{spt}(\partial T)$ is called an *interior singular point* if it is not regular. The set of interior singular points is denoted by $\mathrm{Sing}(T)$. 
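A classical illustration of an interior singular point, standard in the literature though not stated here, is the union of two transverse complex lines in $\mathbb{C}^{2} \cong \mathbb{R}^{4}$:

```latex
% Sum of two complex lines, each taken with multiplicity one:
T \;=\; \llbracket\, \mathbb{C} \times \{0\} \,\rrbracket
   \;+\; \llbracket\, \{0\} \times \mathbb{C} \,\rrbracket .
% T is area-minimizing: it is calibrated by the Kaehler form
%   \omega = dx_1 \wedge dy_1 + dx_2 \wedge dy_2
% via Wirtinger's inequality. Nevertheless, 0 is an interior singular
% point, since near 0 the support is a union of two transverse planes
% and hence not an embedded C^{1,\alpha} submanifold.
```

This shows that, already for minimizers and for $m = n = 2$, $\mathrm{Sing}(T)$ can be nonempty.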
Given an open subset $U \subset \mathbb{R}^{m+n}$, $T$ is called area-minimizing in $U$ if it satisfies $$\|T\|(U) \leq \|T+\partial S\|(U),$$ for any $(m+1)$-dimensional integral current $S$ supported in $U$, where $\|T\|$ denotes the $m$-dimensional mass measure induced by $T$. Questions about the regularity of an $m$-dimensional area-minimizing integral current $T$ have fundamentally shaped the development of geometric measure theory and calculus of variations. In codimension $1$, i.e., when $n = 1$, the works of Allard, Almgren, Bombieri, De Giorgi, Giusti, Federer, Fleming, and Simons show that the Hausdorff dimension of $\mathrm{Sing}(T)$ is at most $m-7$, see [@maggi2012sets] for more detailed references. In higher codimension, the situation is much more delicate. The monograph of Almgren [@Almgren_regularity] showed that the Hausdorff dimension of $\mathrm{Sing}(T)$ is at most $m-2$. This dimension bound is sharp in light of examples arising from holomorphic varieties with branching singularities, such as $$\{w^Q = z^p : (z,w)\in \mathbb C^2\}\, ,$$ where $p>Q\geq 2$ are coprime integers. Almgren's theory has since been simplified and made more transparent in the series of works [@DLS_MAMS; @DLS_multiple_valued; @DLS14Lp; @DLS16centermfld; @DLS16blowup]. In this article we study the regularity of area almost-minimizers. We say that an $m$-dimensional integral current $T$ is $(C_{0},2\alpha, R_{0})$-almost minimizing in an open subset $U\subset\mathbb{R}^{m+n}$ if it has the following property: for all $r \le R_{0}$ and $\mathbf{B}_{r}(x_{0}) \Subset U$, $$\label{e:am} \|T\|(\mathbf{B}_{r}(x_{0})) \le \|T + \partial S\|(\mathbf{B}_{r}(x_{0})) + C_{0} r^{m+2 \alpha},$$ for all $(m+1)$-dimensional integral currents $S$ with $\mathrm{spt}(S) \Subset \mathbf{B}_{r}(x_{0})$. 
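Two elementary observations (ours, not from the text) help calibrate [\[e:am\]](#e:am){reference-type="eqref" reference="e:am"}: taking $C_{0} = 0$ recovers area-minimality, and the error term can be rewritten to exhibit its scale:

```latex
% The almost-minimality error in (e:am), rewritten at scale r:
C_{0}\, r^{m + 2\alpha} \;=\; \bigl( C_{0}\, r^{2\alpha} \bigr)\, r^{m},
% i.e. the allowed mass excess is a factor C_0 r^{2\alpha} (vanishing as
% r -> 0) times r^m, the natural scale of the mass of an m-dimensional
% current in the ball B_r(x_0).
```

In other words, almost-minimality is an asymptotically negligible relaxation of minimality at small scales.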
Extending [@Almgren-elliptic] from almost minimal varifolds to almost area-minimizing currents, Bombieri showed in [@Bombieri82] that the interior regular set is dense for any almost-minimizer with the following property in place of [\[e:am\]](#e:am){reference-type="eqref" reference="e:am"}: given a fixed compact subset $F$ of $U$, for all $r \le R_{0}$ and $\mathbf{B}_{r}(x_{0}) \Subset F$ it holds that $$\label{e:am-Bombieri} \|T\|(\mathbf{B}_{r}(x_{0})) \le \|T + \partial S\|(\mathbf{B}_{r}(x_{0})) + C_{0} r^{2 \alpha}\|T + \partial S\|(F)$$ for all $(m+1)$-dimensional integral currents $S$ with $\mathrm{spt}(S) \Subset \mathbf{B}_{r}(x_{0})$. [^1] This article shows that Bombieri's result cannot generally be strengthened, and demonstrates the impossibility of extending the known regularity theory for area minimizers to the class of $(C_0,2\alpha,R_0)$-almost minimizing currents. This is done via two types of examples for which the collection of flat singularities of multiplicity $Q$, denoted by $\mathfrak{F}_Q(T)$, is large. In general, it appears that the notion of $(C_{0},2\alpha, R_{0})$-almost minimality cannot be immediately compared to that introduced by Bombieri. However, our examples satisfy both notions of almost-minimality (see Remark [Remark 12](#r:Bom){reference-type="ref" reference="r:Bom"}). It is well-known that in the codimension $1$ and multiplicity $1$ setting (i.e., when $n=1=Q$), the singular set for area almost-minimizers satisfies the same dimension bounds as for minimizers [@Tamanini84]; see also [@maggi2012sets] for a nice exposition of this vein of results. The main machinery in those settings is a "flat implies smooth" $\varepsilon$-regularity argument, which breaks down in higher codimension and in the presence of higher multiplicities. So, necessarily, both types of almost-minimizing integral currents constructed here will have a large set of interior flat singularities.
The first type of example, giving rise to Theorem [Theorem 1](#t:graphs){reference-type="ref" reference="t:graphs"}, is a superposition of single-valued $C^{k,\alpha_*}$ graphs $\{f_{i}\}_{i=1}^{Q}$, for any $k\in \mathbb{N}$, $\alpha_{*} \in (0,1]$, inducing a $(C_{0}, 2 \frac{k+\alpha_{*}-1}{k+\alpha_{*}}, R_{0})$-almost minimizing $m$-dimensional integral current $T$ in $\mathbb{R}^{m+1}$. In this example, we prescribe a compact set $K$ with empty interior in the common domain of the $f_{i}$ so that $\mathfrak{F}_Q(T)$ contains the set $K$. Theorem [Theorem 1](#t:graphs){reference-type="ref" reference="t:graphs"} is heavily motivated by the recent work [@david2023branch], which demonstrates that almost-minimizers of the $2$-phase Bernoulli free boundary problem can have a larger set of branch points than that expected for minimizers. In that context, branch points are regular points of the free boundaries, but are points at which the two free boundaries have no better regularity than $C^{1,\alpha}$ and meet tangentially to form a "cusp-like" structure. Bombieri's result [@Bombieri82] can be rephrased as saying that for any area almost-minimizing integral current $T$ (in the sense [\[e:am-Bombieri\]](#e:am-Bombieri){reference-type="eqref" reference="e:am-Bombieri"}), $\mathrm{Sing}(T)$ is a relatively closed subset of $\mathrm{spt}(T)$ with relatively empty interior. Theorem [Theorem 1](#t:graphs){reference-type="ref" reference="t:graphs"} demonstrates that one may prescribe $\mathrm{Sing}(T)$ to contain an arbitrary such set embedded in $\mathbb{R}^{m} \times \{0\}^{n}$. **Theorem 1**. *Let $Q \in \mathbb{N}_{\ge 2}$, $\alpha \in (0,1)$, and $K \subset \mathbb{R}^{m} \times \{0\} \subset \mathbb{R}^{m+1}$ be a closed set with relatively empty interior. Choose $k \in \mathbb{N}$ and $\alpha_{*} \in (0,1]$ so that $\alpha = \frac{k + \alpha_{*}-1}{k+\alpha_{*}}$.
Then there exist $C_{0}, R_{0} > 0$ depending only on $m$, $k$, and $\alpha_{*}$, and there exists an $m$-dimensional integral current $T = \mathbf{G}_F$ where $F=\sum_{i=1}^{Q} \llbracket f_{i} \rrbracket$ with $f_{i} \in C^{k,\alpha_*}(\mathbb{R}^{m};\mathbb{R})$ such that $T$ is $(C_{0}, 2 \alpha, R_{0})$-almost minimizing in $\mathbb{R}^{m+1}$ and has a singular set satisfying $\mathrm{Sing}(T) = \mathfrak{F}_Q(T)\supset K$.* Letting $\alpha=\frac{k+\alpha_*-1}{k+\alpha_*}$ and choosing $k$ arbitrarily large, Theorem [Theorem 1](#t:graphs){reference-type="ref" reference="t:graphs"} produces a $(C_{0}, 2 \alpha, R_{0})$-almost minimizing integral current with a large singular set for $\alpha$ arbitrarily close to $1$. However, Theorem [Theorem 1](#t:graphs){reference-type="ref" reference="t:graphs"} leaves open the question of whether or not there is a $(C_{0}, 2, R_{0})$-almost minimizing integral current with a large singular set. **Remark 2**. Note that when $Q=1$, the integral current induced by *any* $C^{1,\alpha}$ graph of an $\mathbb{R}^n$-valued function is $(C_0,2\alpha, \infty)$-almost minimizing in $\mathbb{R}^{m+n}$ for some $C_0>0$. However, no such current has any singularities, since by definition, there exists a neighborhood of every point where $\mathrm{spt}(T)$ is a $C^{1,\alpha}$ embedded submanifold of $\mathbb{R}^{m+n}$. In light of Remark [Remark 2](#r:Q=1){reference-type="ref" reference="r:Q=1"}, Theorem [Theorem 1](#t:graphs){reference-type="ref" reference="t:graphs"}, and [@Tamanini84], one could reasonably wonder if any area almost-minimizing integral current is a superposition of sheets each with better regularity than the full current, at least up to a small singular set.
In Theorem [Theorem 3](#t:main-branch){reference-type="ref" reference="t:main-branch"}, we show that this need not be the case by constructing a family of area almost-minimizers which patch together re-scaled and translated cut-offs of the current induced by the branched holomorphic variety $\{z^{Q} = w^{Qk+1}\}$ in such a way that the branching singularities accumulate toward an arbitrary relatively closed subset with empty interior. **Theorem 3**. *Let $Q \in \mathbb{N}_{\ge 2}, k \in \mathbb{N}$, and $\alpha \vcentcolon = \frac{Qk+1-Q}{Qk+1}$. There exist $C_{0}, R_{0} > 0$ depending only on $k$ and $Q$, so that for any closed $K \subset \mathbb{R}^{2}\times\{0\}\subset\mathbb{R}^4$ with empty interior, there exists a $(C_{0}, 2 \alpha, R_{0})$-almost minimizing $2$-dimensional integral current $T$ in $\mathbb{R}^{4}$ with a (genuinely) branched singular set $\mathrm{Sing}(T) = \mathfrak{F}_Q(T)\supset K$.* In addition to showing that area almost-minimizing integral currents are not generally superpositions of regular surfaces as in Theorem [Theorem 1](#t:graphs){reference-type="ref" reference="t:graphs"}, this shows that the genuine branch points of a two-dimensional area almost-minimizer can be prescribed to contain any closed subset of a two-dimensional plane with empty interior. In particular, not only is this in sharp contrast with the work of [@DLSS1; @DLSS2; @DLSS3], where it is demonstrated that two-dimensional semi-calibrated currents have isolated singularities, but it also shows the drastic failure of the $(m-2)$-dimension bound [@Spolaor] for the set of genuine branching singularities for semi-calibrated currents. Semi-calibrated currents form a subclass of $(C_{0}, 1, R_{0})$-almost minimizing currents for some $C_0,R_0>0$, but the error from minimality has a very specific structure coming from the semi-calibration, thereby allowing for improved regularity.
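As a heuristic consistency check (ours, not part of the formal proof), the exponent $\alpha = \frac{Qk+1-Q}{Qk+1}$ in Theorem [Theorem 3](#t:main-branch){reference-type="ref" reference="t:main-branch"} can be read off from the model variety: away from the branch point, the sheets of $\{z^{Q} = w^{Qk+1}\}$ are the $Q$ branches $f(w) = \zeta\, w^{(Qk+1)/Q}$ with $\zeta^{Q} = 1$, whose height and gradient scale as

```latex
% Scaling of the sheets of the branched variety {z^Q = w^{Qk+1}}:
|f(w)| = |w|^{\frac{Qk+1}{Q}}, \qquad
|\nabla f(w)| \simeq |w|^{\frac{Qk+1-Q}{Q}}
  = \Bigl( |w|^{\frac{Qk+1}{Q}} \Bigr)^{\frac{Qk+1-Q}{Qk+1}}
  = |f(w)|^{\alpha}.
```

Thus the sheets (and their pairwise differences, which scale the same way) separate tangentially at the rate governed by this $\alpha$, and $\alpha \uparrow 1$ as $k \to \infty$.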
This article confirms that without the additional structure on the permitted error,[^2] one cannot hope for an analogous regularity theory. The following example demonstrates precisely how Theorem [Theorem 1](#t:graphs){reference-type="ref" reference="t:graphs"} and Theorem [Theorem 3](#t:main-branch){reference-type="ref" reference="t:main-branch"} can be used to explicitly create an almost area-minimizing current with a singular set whose mass is arbitrarily close to that of the entire current. **Example 4**. Let $\{x_{i}\} \subset \pi_0\coloneqq \mathbb{R}^{m} \times \{0\}^{n} \subset \mathbb{R}^{m+n}$ enumerate the rational points of $\pi_0$, $Q =2$, and $\alpha \in (0,1)$. For each $\varepsilon> 0$, write $B^{\varepsilon}_{i} \coloneqq B_{\varepsilon 2^{-i}}(x_{i})$. Then the set $K_{\varepsilon} \vcentcolon = \pi_0 \setminus \cup_{i} B^{\varepsilon}_{i}$ is relatively closed with empty interior in $\pi_0$, and for every $x \in \pi_0$ we have the lower estimate $|K_\varepsilon\cap B_{r}(x)| \ge \omega_{m}\big(r^{m} - \frac{\varepsilon^{m}}{2^{m}-1}\big)$ for the $m$-dimensional Lebesgue measure of $K_\varepsilon\cap B_r(x)$. By Theorem [Theorem 1](#t:graphs){reference-type="ref" reference="t:graphs"} (or Theorem [Theorem 3](#t:main-branch){reference-type="ref" reference="t:main-branch"} if $m=2=n$), there exists a current $T_{\varepsilon}$ which is a $(C_{0}, 2 \alpha, R_{0})$ almost-minimizer for area whose singular set contains $K_{\varepsilon}$ and with $C_{0}$ and $R_{0}$ independent of $\varepsilon$. Moreover, $\|T_\varepsilon\|(\mathbf{C}_r(x,\pi_0)\cap K_\varepsilon) = 2|K_\varepsilon\cap B_{r}(x)|$ and $T_\varepsilon$ is induced by the superposition of graphs of two Lipschitz functions whose Lipschitz constants converge to $0$ as $\varepsilon$ converges to zero (see Lemma [Lemma 11](#t:uniformity){reference-type="ref" reference="t:uniformity"}).
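The lower estimate in the example above follows by summing the volumes of the removed balls as a geometric series (assuming, as the notation suggests, that the enumeration starts at $i = 1$):

```latex
% Geometric-series bound for the measure of K_epsilon in B_r(x):
|K_{\varepsilon} \cap B_{r}(x)|
  \ge |B_{r}(x)| - \sum_{i=1}^{\infty} |B^{\varepsilon}_{i}|
  \ge \omega_{m} r^{m} - \omega_{m}\, \varepsilon^{m} \sum_{i=1}^{\infty} 2^{-im}
  = \omega_{m} \Bigl( r^{m} - \frac{\varepsilon^{m}}{2^{m}-1} \Bigr),
```

since $\sum_{i \ge 1} 2^{-im} = \frac{1}{2^{m}-1}$. In particular, $|K_{\varepsilon} \cap B_{r}(x)| \to \omega_{m} r^{m}$ as $\varepsilon \downarrow 0$, uniformly in $x$.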
Thus, the family of $(C_{0}, 2 \alpha, R_{0})$ almost-minimizers of area $\{T_{\varepsilon}\}$ has the property that $$\lim_{r \uparrow \infty} \frac{\|T_{1/2}\| \left( \mathbf{C}_{r}(x, \pi_0) \cap \mathrm{Sing}(T_{1/2}) \right)}{ \|T_{1/2}\| \left( \mathbf{C}_{r}(x, \pi_0) \right) } = \lim_{\varepsilon\downarrow 0} \frac{\|T_{\varepsilon}\| \left( \mathbf{C}_{1}(x, \pi_0) \cap \mathrm{Sing}(T_{\varepsilon}) \right)}{ \|T_{\varepsilon}\| \left( \mathbf{C}_{1}(x, \pi_0)\right)} = 1 \qquad \forall x \in \pi_0.$$ That is, the singular set (branched singular set when $m=2=n$) has mass arbitrarily close to the entirety of $\|T_{\varepsilon}\|$ in $\mathbf{C}_1(x,\pi_0)$ as $\varepsilon \downarrow 0$, and the same holds true for $T_{1/2}$ in $\mathbf{C}_r(x,\pi_0)$ as $r\uparrow \infty$. It is instructive to compare Theorem [Theorem 1](#t:graphs){reference-type="ref" reference="t:graphs"} to the recent work [@Simon-fractal], which shows that for any compact set $K\subset \mathbb{R}^{m-7}\times\{0\}\subset \mathbb{R}^{m+1}$, there exists a smooth metric on $\mathbb{R}^{m+1}$ and an $m$-dimensional stable minimal hypersurface whose singular set is $K$. A similar result in the framework of higher codimension area-minimizing integral currents was demonstrated by Liu in [@liu2021conjecture], where it was shown that for any compact subset $K\subset \mathbb{R}^{m-2}$, there exists a smooth $(m+3)$-dimensional manifold $\Sigma$ and an $m$-dimensional homologically area-minimizing (calibrated) surface contained in $\Sigma$, whose interior singular set is $K$. However, in the former case, the singularities are modeled on Simons' cone, thus lying in the $(m-7)$-stratum, while in the latter case, the singularities are modeled on "crossing-type" singularities formed by transverse self-intersections of the surface, thus lying in the $(m-2)$-stratum (cf. [@liu2021conjecture Remark 5]). In particular, these are not flat singularities.
Moreover, the ambient metric in both [@Simon-fractal] and [@liu2021conjecture] is merely smooth, not real analytic. While stable minimal surfaces are not in general almost area-minimizing, they clearly enjoy significantly better regularity properties, in light of the contrast between [@W14] and this article. Nevertheless, the constructions in [@Simon-fractal; @liu2021conjecture] demonstrate that even under more robust structural assumptions such as being area-minimizing or being a stable minimal hypersurface, fractal singular sets are permitted, albeit lower dimensional and with only a smooth ambient metric. ## Acknowledgments {#acknowledgments .unnumbered} Some of this work was completed while the authors visited SLMath (formerly MSRI), supported by the NSF grant FRG-1853993. A.S. acknowledges further support from FRG-1854147 and thanks Camillo De Lellis for his helpful comments and Reinaldo Resende for his interest and for numerous useful discussions. # Background and preliminaries ## Notation and background {#s:notation} Throughout, we will consider $m$-dimensional integral currents in $\mathbb{R}^{m+n}$ that are induced by graphs of $C^{k,\alpha_{*}}$ functions, for some $m,n,k \in \mathbb{N}$ and $\alpha_{*} \in (0,1]$. We let $C, C_0, C_1, \dots$ denote constants, whose dependencies will be given when they are introduced. We will at times use the notation $\lesssim$ and $\simeq$ to suppress multiplicative constants. If these constants depend only on $m$ and $n$, there will be no subscript, while we include any other dependencies in subscripts. We let $\pi, \pi_{i}, \varpi$ denote $m$-dimensional planes (namely, affine subspaces) in $\mathbb{R}^{m+n}$, while $\pi^{\perp}$ denotes the $n$-dimensional plane orthogonal to $\pi$. We denote by $\mathbf{p}_{\pi}: \mathbb R^{m+n}\to \pi$ the orthogonal projection onto $\pi$, while $\mathbf{p}_{\pi}^\perp$ denotes the orthogonal projection onto $\pi^\perp$.
$\mathbf{B}_r(x)$ and $\overline{\mathbf{B}_r(x)}$ will respectively denote an open and closed $(m+n)$-dimensional Euclidean ball of radius $r$ centered at $x$. We let $B_r(x,\pi)$, $\overline{{B}_r(x,\pi)}$ respectively denote the open and closed $m$-dimensional disks $\mathbf{B}_r(x)\cap \pi$ and $\overline{\mathbf{B}_r (x)} \cap \pi$ of radius $r$ centered at $x$ in a given $m$-dimensional plane $\pi\subset\mathbb{R}^{m+n}$ passing through $x$; the dependency on the plane will be omitted if clear from context. We let $\mathbf{C}_r(x,\pi)$ denote the $(m+n)$-dimensional cylinder $B_r(\mathbf{p}_\pi(x),\pi)\times \pi^\perp$. Given an $m$-dimensional current $T$, $\|T\|$ denotes the mass measure induced by $T$, while $\partial T$ denotes the $(m-1)$-dimensional current which is the boundary of $T$ and $\mathrm{spt}(T)$ denotes the support of the current. We use $\omega_m$ to denote the $m$-dimensional Hausdorff measure of the $m$-dimensional unit disk. As an abuse of notation which should be easily discernible from context, $|\cdot|$ will denote the Euclidean norm of vectors, the norm induced by the metric on the Grassmannian $G(m,m+n)$ of $m$-dimensional linear subspaces of $\mathbb{R}^{m+n}$, and also the $m$-dimensional Lebesgue measure of subsets of a given $m$-dimensional plane. For a matrix $B$, we let $B^t$ denote its transpose. $\mathcal{A}_Q(\pi^\perp)$ denotes the space of $Q$-tuples of vectors in $\pi^\perp$ (cf. [@DLS_MAMS]). We use the notation $\nabla$ for the gradient of a function on an $m$-dimensional plane, with respect to a canonical orthonormal choice of coordinates on that plane. Given Lipschitz functions $f:\Omega\to \pi_0^\perp$ and $F:\Omega\to \mathcal{A}_Q(\pi_0^\perp)$ on an open subset $\Omega$ of an $m$-dimensional plane $\pi_0\subset\mathbb{R}^{m+n}$, $\mathrm{gr}(f)\subset\mathbb{R}^{m+n}$ denotes the graph of $f$, while $\mathbf{G}_F$ denotes the $m$-dimensional current induced by the multi-graph of $F$ (cf. 
[@DLS_multiple_valued Definition 1.10]). Given measurable functions $f_1,\dots,f_Q:\Omega\to \pi_0^\perp$, we write $F=\sum_{i=1}^Q \llbracket f_i \rrbracket$ for the $Q$-valued map with a decomposition into the $f_i$ (cf. [@DLS_MAMS Definition 1.1]). For a constant $\Lambda>0$, we let $\mathop{\mathrm{Lip}}_\Lambda$ denote the space of Lipschitz functions with Lipschitz constant bounded above by $\Lambda$. We use $\mathop{\mathrm{supp}}(f)$ to denote the support of a function $f$. ## Key preliminary results {#s:prelim} In this section we prove an important preliminary result (Proposition [Proposition 6](#c:almostmin){reference-type="ref" reference="c:almostmin"}), demonstrating that controlling pairwise gradient deviations for a superposition of $C^{1,\alpha}$ graphs guarantees a $(C_0, 2\alpha, R_0)$-almost minimality estimate in a ball of any sufficiently small radius in $\mathbb{R}^{m+n}$. This will be a key tool for proving both Theorem [Theorem 1](#t:graphs){reference-type="ref" reference="t:graphs"} and Theorem [Theorem 3](#t:main-branch){reference-type="ref" reference="t:main-branch"}. We first state the following important lemma, which will not only be crucial in the proof of Proposition [Proposition 6](#c:almostmin){reference-type="ref" reference="c:almostmin"} below, but is also a powerful tool in its own right, due to the flexibility inherent in the domain $\Omega^{\prime}$ being any open set and the simplicity of verifying the hypotheses therein for explicitly constructed sets $\Omega^{\prime}$. Indeed, it will also be used independently in the proof of Theorem [Theorem 3](#t:main-branch){reference-type="ref" reference="t:main-branch"}. **Lemma 5**. *Fix $q \in \mathbb{N}$ and $\alpha \in (0,1)$. Let $\pi\subset\mathbb{R}^{m+n}$ be an $m$-dimensional plane. Let $\Omega'$ be an open subset of $\pi$.
Suppose that there exist $g_{1},\dots,g_q \in C^{1}( \Omega' ;\pi^\perp)$, points $\{x_{i}\}_{i=1}^{q} \subset \Omega^{\prime}$ and constants $C_{1},C_{2}, C_{3} > 0$ so that:* - *$|\nabla g_{i_0}(x_{i_0})| \leq C_1 (\mathop{\mathrm{diam}}\Omega')^\alpha$ for some $i_0\in\{1,\dots,q\}$;* - *for all $i\neq j$, $$\label{e:C1alpha} |\nabla g_i(x_i) -\nabla g_j(x_i)| \le C_2 (\mathop{\mathrm{diam}}\Omega')^\alpha;$$* - *and for all $y,z \in \Omega^{\prime}$, $$| \nabla g_{i}(y) - \nabla g_{i}(z)| \le C_{3} (\mathop{\mathrm{diam}}\Omega^{\prime})^\alpha.$$* *Then there exists $\bar{C}=\bar{C}(C_1,C_{2}, C_{3})$ such that $$\label{e:Dir} \sum_{i=1}^{q}\int_{\Omega'} |\nabla g_i|^2 \leq \bar C q (\mathop{\mathrm{diam}}\Omega')^{2\alpha}|\Omega'|.$$* *Proof of Lemma [Lemma 5](#l:almostmin){reference-type="ref" reference="l:almostmin"}.* Let $r\coloneqq \mathop{\mathrm{diam}}\Omega'$ and relabel indices if necessary so that property (i) holds for $i_0=1$. Then, for all $x \in \Omega'$ and all $1 \le i \le q$, we have $$|\nabla g_{i}(x)| \le |\nabla g_{i}(x) - \nabla g_{i}(x_{1})| + |\nabla g_{i}(x_{1}) - \nabla g_{1}(x_{1})| + |\nabla g_1(x_1)| \le (C_{3} + C_{2} + C_{1}) r^\alpha.$$ Squaring, summing over $i$, and integrating over $\Omega'$ yields [\[e:Dir\]](#e:Dir){reference-type="eqref" reference="e:Dir"} with $\bar{C} = (C_{1} + C_{2} + C_{3})^{2}$, completing the proof. ◻ The following proposition uses Lemma [Lemma 5](#l:almostmin){reference-type="ref" reference="l:almostmin"} to provide a sufficient condition for a $q$-valued graph to induce an area almost-minimizing integral current. **Proposition 6**. *Fix $q \in \mathbb{N}$, $\alpha \in(0,1)$, $r > 0$, and $x_0\in\mathbb{R}^{m+n}$. Denote $\mathbb{R}^m \equiv \mathbb{R}^m\times\{0\}^{n} \subset \mathbb{R}^{m+n}$ and $\mathbb{R}^n \equiv (\mathbb{R}^m\times \{0\}^n)^\perp$. Let $x'_0\coloneqq \mathbf{p}_{\mathbb{R}^m}(x_0)$ and $M > 0$. Then there exists a constant $R_{0}>0$ depending on $m$, $n$, and $M$ such that the following holds for each $r \leq R_0$.
Suppose that $f_{1}, \dots, f_{q} \in {C^{1,\alpha}\cap \mathop{\mathrm{Lip}}_{1/4}(B_{2r}(x'_0);\mathbb{R}^n)}$ with $\max_i[ \nabla f_{i}]_{C^{0,\alpha}(B_{2r}(x_{0}^{\prime}))} \leq M$, and $\mathop{\mathrm{gr}}(f_i)\cap\mathbf{B}_r(x_0)\neq \emptyset$ for each $i$. If there is a constant $C_4 > 0$ such that for every pair of indices $i<j$ we have $$\label{e:C1alpha.2} |\nabla f_i(x) -\nabla f_j(x)| \le C_4 |f_i(x)-f_j(x)|^\alpha \qquad \forall x\in B_{2r}(x_0'),$$ then the multivalued function $F_1\coloneqq \sum_{i=1}^q \llbracket f_i \rrbracket$ satisfies $$\label{e:alm-min-ball} \|\mathbf{G}_{F_1}\|(\mathbf{B}_r(x_0)) \leq \|\mathbf{G}_{F_1}+\partial S\|(\mathbf{B}_r(x_0)) + C_0 r^{m+2\alpha}$$ for some constant $C_0 = C_0(C_4,m,n,q,\alpha,M)>0$, for all $(m+1)$-dimensional integral currents $S$ supported in $\mathbf{B}_{r}(x_0)$.* **Remark 7**. If $f_{i} \in C^{1,\alpha_{*}}(\mathbb{R}^{m};\mathbb{R}^{n})$, the requirement $f_{i} \in \mathop{\mathrm{Lip}}_{1/4}(\mathbb{R}^{m};\mathbb{R}^n)$ could be removed. The reason this small Lipschitz requirement appears is that the proposition is formulated in a localized way, so that information on the Hölder semi-norm at scale $2r$ implies a reparametrization at scale $r$. This is done for the sake of exposition in the proof of Theorem [Theorem 3](#t:main-branch){reference-type="ref" reference="t:main-branch"}. At the cost of making the ratio larger (and consequently $R_{0}$ smaller, see the hypothesis [\[e:closeenough\]](#e:closeenough){reference-type="eqref" reference="e:closeenough"} of Proposition [Proposition 13](#p:step1){reference-type="ref" reference="p:step1"}), the Lipschitz constant need only be finite, which is implied by $f_{i} \in C^{1,\alpha_{*}}(\mathbb{R}^{m} ; \mathbb{R}^{n})$.
Although Proposition [Proposition 6](#c:almostmin){reference-type="ref" reference="c:almostmin"} merely requires $f_i \in C^{1,\alpha} ( B_{2r}(x_{0}^{\prime}))$, it is instructive to think of the functions $f_{i}$ in Proposition [Proposition 6](#c:almostmin){reference-type="ref" reference="c:almostmin"} as being in $C^{k,\alpha_{*}}$ for some $\alpha_*\in(0,1]$ and consider $\alpha = \frac{k+\alpha_* - 1}{k+\alpha_*}$. This is the context in which Proposition [Proposition 6](#c:almostmin){reference-type="ref" reference="c:almostmin"} will be applied when proving Theorem [Theorem 1](#t:graphs){reference-type="ref" reference="t:graphs"} and Theorem [Theorem 3](#t:main-branch){reference-type="ref" reference="t:main-branch"}, and helps explain the presence of the exponent $\frac{k + \alpha_{*} - 1}{k+\alpha_{*}}$: if $f \in C^{k,\alpha_{*}}$ and $x'_{0} \in \mathbb{R}^{m}$ is such that $\partial^{\beta}f(x'_{0}) = 0$ for all multi-indices $\beta$ with $|\beta| \le k$, then $$\begin{cases} |f(x) - f(x'_{0})| \lesssim |x-x'_{0}|^{k+\alpha_{*}} \\ |\nabla f(x) - \nabla f(x'_{0})| \lesssim |x-x'_{0}|^{k + \alpha_{*} -1 } = \left( |x-x'_{0}|^{k+\alpha_{*}} \right)^{\frac{k + \alpha_{*} - 1}{k+\alpha_{*}}}. \end{cases}$$ See also Proposition [Proposition 9](#p:dprop){reference-type="ref" reference="p:dprop"} for more insight on the relationship between $k,\alpha,\alpha_{*}$. **Remark 8**. When $q=1$, [\[e:C1alpha.2\]](#e:C1alpha.2){reference-type="eqref" reference="e:C1alpha.2"} is a vacuous hypothesis. So, in that case, the conclusion of Proposition [Proposition 6](#c:almostmin){reference-type="ref" reference="c:almostmin"} follows merely from the $C^{1,\alpha}$-regularity of the single function $f_{1}$. This recovers the sharp exponent relating area almost-minimizing integral currents to $C^{1,\alpha}$ graphs; see [@duzaar2002optimal].
When $q \geq 2$, the hypothesis [\[e:C1alpha.2\]](#e:C1alpha.2){reference-type="eqref" reference="e:C1alpha.2"} assumes roughly that despite being merely $C^{1,\alpha}$-regular, the graphs of $f_{j}$ meet tangentially in a way that their difference tangentially approaches zero like a $C^{1,\alpha_{*}}$ function (cf. Proposition [Proposition 9](#p:dprop){reference-type="ref" reference="p:dprop"} below). When $k = 1$ and $\alpha < 1$, we have $\alpha_{*} = \frac{\alpha}{1-\alpha} > \alpha = \frac{\alpha_{*}}{1+\alpha_{*}}\in (0,\frac{1}{2}]$. This means that in the multi-sheeted setting, to conclude that the corresponding multi-valued graph is an area almost-minimizer with error exponent $2 \alpha$, our techniques require strictly more regularity on the $f_j$ than in the single-sheeted setting. Nonetheless, this gap appears to be necessary in our methods, because it can happen that a ball $\mathbf{B}_{r}(x_0)$ intersects two sheets $\llbracket \mathop{\mathrm{gr}}(f_{1}) \rrbracket, \llbracket \mathop{\mathrm{gr}}(f_2) \rrbracket$ that are mutually disjoint in $\mathbf{B}_r(x_0)$, with $|\nabla f_{1}(x) - \nabla f_{2}(x)| \gg r^{\alpha}$ for all $x \in B_r(\mathbf{p}_{\pi_0}(x_0))$. The following proposition demonstrates that in codimension one, the hypothesis [\[e:C1alpha.2\]](#e:C1alpha.2){reference-type="eqref" reference="e:C1alpha.2"} in Proposition [Proposition 6](#c:almostmin){reference-type="ref" reference="c:almostmin"} follows from the pointwise property of pairwise tangential touching for a collection of $C^{1,\alpha_*}$ graphs. We will not make use of this proposition in the proofs of the main results. However, not only is it instructive in gaining a deeper understanding of the relationship between the exponents $\alpha$ and $\alpha_*$, but it also allows one to conclude that *any* superposition of ordered $C^{1,\alpha_*}$-graphs is area almost-minimizing; see Remark [Remark 10](#r:ordered-am){reference-type="ref" reference="r:ordered-am"} below. **Proposition 9**.
*Fix $Q > 1$ and $\alpha_*\in(0,1]$. Let $\mathbb{R}^m \equiv \mathbb{R}^m\times\{0\}\subset \mathbb{R}^{m+1}$ and let $\mathbb{R}\equiv (\mathbb{R}^m\times \{0\})^\perp$. Suppose that $f_{1},\dots, f_Q \in C^{1,\alpha_*}\cap \mathop{\mathrm{Lip}}_{1/4}( \mathbb{R}^m ;\mathbb{R})$ and that the $f_{i}$ satisfy the following property for all $i \neq j$: $$\label{e:dprop} f_{i}(x) = f_{j}(x) \implies \nabla f_{i}(x) = \nabla f_{j}(x) \qquad \forall x\in \mathbb{R}^m.$$ Choose $C_4 = 2 \max \{1, \max_{i\neq j}[\nabla f_i-\nabla f_j]_{C^{0,\alpha_*}} \}$. Then the functions $\{f_{i}\}$ satisfy [\[e:C1alpha.2\]](#e:C1alpha.2){reference-type="eqref" reference="e:C1alpha.2"} with $\alpha = \frac{\alpha_*}{1+\alpha_*}$ for all $x \in \mathbb{R}^m$, with this choice of $C_4$. In particular, for $F \coloneqq \sum_{i=1}^Q \llbracket f_{i} \rrbracket$, the current $\mathbf{G}_{F}$ is a $(C_0, 2\alpha, R_0)$-almost minimizing current in $\mathbb{R}^{m+1}$, for $C_0$, $R_0$ given by Proposition [Proposition 6](#c:almostmin){reference-type="ref" reference="c:almostmin"}.* We again recall that the assumption $f_i\in\mathop{\mathrm{Lip}}_{1/4}(\mathbb{R}^{m};\mathbb{R})$ could be dropped at the expense of shrinking $R_{0}$; see Remark [Remark 7](#r:smalllip){reference-type="ref" reference="r:smalllip"}. **Remark 10**. If $f_{1} \le \dots \le f_{Q}$ are $C^{1,\alpha_{*}}(\mathbb{R}^{m} ; \mathbb{R})$ functions, then they necessarily satisfy [\[e:dprop\]](#e:dprop){reference-type="eqref" reference="e:dprop"}. In particular, Proposition [Proposition 9](#p:dprop){reference-type="ref" reference="p:dprop"} and Remark [Remark 7](#r:smalllip){reference-type="ref" reference="r:smalllip"} imply that the current $T=\mathbf{G}_F$ associated to the graph of $F = \sum_{i=1}^{Q} \llbracket f_{i} \rrbracket$ is a $(C_{0}, 2 \alpha, R_{0})$-almost minimizing current for some $C_0,R_0>0$.
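A minimal illustration of Remark [Remark 10](#r:ordered-am){reference-type="ref" reference="r:ordered-am"} (our toy example with $Q = 2$, not the construction used in the proofs of the main theorems) is the ordered pair of functions

```latex
% Two ordered C^{1,alpha_*} graphs touching tangentially along {x_1 <= 0}:
f_{1}(x) = -\bigl(\max\{x_{1},0\}\bigr)^{1+\alpha_{*}}, \qquad
f_{2}(x) = +\bigl(\max\{x_{1},0\}\bigr)^{1+\alpha_{*}}, \qquad x \in \mathbb{R}^{m}.
```

Both lie in $C^{1,\alpha_{*}}$, and [\[e:dprop\]](#e:dprop){reference-type="eqref" reference="e:dprop"} holds because $f_{1} = f_{2}$ exactly on $\{x_{1} \le 0\}$, where both gradients vanish. After a cut-off away from a bounded set (so that the small-Lipschitz hypothesis applies), Proposition [Proposition 9](#p:dprop){reference-type="ref" reference="p:dprop"} makes the superposed current area almost-minimizing, while the two sheets separate tangentially across $\{x_{1} = 0\}$, along which the support fails to be an embedded $C^{1}$ submanifold.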
*Proof of Proposition [Proposition 6](#c:almostmin){reference-type="ref" reference="c:almostmin"}.* Fix a ball $\mathbf{B}_r(x_0)$ as in the statement of the proposition. Let $\pi_{1}$ denote the $m$-dimensional tangent plane to $\mathbf{G}_{f_1}$ at some point $(y_0,f_1(y_0))\in\mathbf{B}_{r}(x_0)$. We will apply Proposition [Proposition 13](#p:step1){reference-type="ref" reference="p:step1"} to the plane $\varpi = \pi_{1}$ to reparametrize each $f_{i}$ over $B_{r}(\mathbf{p}_{\pi_{1}}(x_{0});\pi_{1})$ by some functions $g_{i}$, which we will show satisfy the hypotheses of Lemma [Lemma 5](#l:almostmin){reference-type="ref" reference="l:almostmin"}. Since these $g_{i}$ will have the same graph as $f_{i}$ after intersecting with $\mathbf{B}_{r}(x_0)$, the conclusion of Lemma [Lemma 5](#l:almostmin){reference-type="ref" reference="l:almostmin"} will confirm [\[e:alm-min-ball\]](#e:alm-min-ball){reference-type="eqref" reference="e:alm-min-ball"} and complete our proof. We first check the hypotheses of Proposition [Proposition 13](#p:step1){reference-type="ref" reference="p:step1"}, under the assumptions $$\delta= \frac{1}{2}, \quad \sigma = \frac{1}{2}\left[1 + \max_i\mathop{\mathrm{Lip}}(f_i)\right], \quad \max_i \mathop{\mathrm{Lip}}(f_i) \le \frac{1}{4}.$$ Let $A : \mathbb{R}^{m} \to \mathbb{R}^{n}$ be an affine function whose graph is parallel to $\pi_{1}$, so that $\nabla A = \nabla f_{1}(y_0)$. It follows that $\Lambda \coloneqq \max_i\mathop{\mathrm{Lip}}(f_{i} - A) \le 2\max_i\mathop{\mathrm{Lip}}(f_i) \leq \frac{1}{2}$. Moreover, since $\mathop{\mathrm{gr}}(f_{i}) \cap \mathbf{B}_{r}(x_0) \neq \emptyset$, there exists $x_i\in B_r(x'_0)$ with $|(x_{i},f_{i}(x_{i})) - x_{0}| \le r$. Writing $x_{0} = (x'_0, \bar{x})$, we have $|f_{i}(x'_{0}) - \bar{x}| \le |x'_0-x_{i}| \mathop{\mathrm{Lip}}(f_{i}) + |f_{i}(x_i) - \bar{x}| \le (1+ \frac{1}{4}) r$.
So, the choice of $\delta = \frac{1}{2}$ is indeed valid since the above assumptions yield $$\frac{1}{2} \le \frac{1}{\sqrt{1 + \Lambda^{2}}} - \frac{\sigma \max_{i} \mathop{\mathrm{Lip}}(f_i)}{\sqrt{1+\max_{i} \mathop{\mathrm{Lip}}(f_i)^{2}}},$$ confirming [\[e:closeenough\]](#e:closeenough){reference-type="eqref" reference="e:closeenough"}. Since by assumption $f_{i} \in C^{1,\alpha}(B_{2r}(x'_0))$, we can apply Proposition [Proposition 13](#p:step1){reference-type="ref" reference="p:step1"} to construct $g_{i} \in C^{1,\alpha}( B_{r}(\mathbf{p}_{\pi_{1}}(x_0),\pi_{1}); \pi_{1}^{\perp})$ so that $\mathop{\mathrm{gr}}(g_{i}) = \mathop{\mathrm{gr}}(f_{i}) \cap \mathbf{C}_{r}(x_0,\pi_{1})$. In particular, if $G = \sum_{i=1}^{q} \llbracket g_{i} \rrbracket$, then $$\label{e:preconstancy} {\mathbf{G}_G = \mathbf{G}_{F_1} }\mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex}\mathbf{C}_{r}(x_{0},\pi_1).$$ Thus, $\partial \left( \left(\mathbf{p}_{\pi_{1}}\right)_{\sharp} \mathbf{G}_{F_1} \right)\mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex}B_{r}(\mathbf{p}_{\pi_{1}}(x_{0})) = 0$. Hence, by [\[e:preconstancy\]](#e:preconstancy){reference-type="eqref" reference="e:preconstancy"} and the fact that $G$ is $q$-valued, it follows that $$\label{e:0thorder} (\mathbf{p}_{\pi_1})_\sharp\mathbf{G}_{F_1}\mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex}\mathbf{C}_r(x_0,\pi_1) = q\llbracket B_r(\mathbf{p}_{\pi_1}(x_0))\rrbracket.$$ We now claim that for $r$ small enough the $g_{i}$ satisfy the hypotheses of Lemma [Lemma 5](#l:almostmin){reference-type="ref" reference="l:almostmin"}. Indeed, by our choice of $\pi_{1}$, we have $\nabla f_{1}(y_0) = \nabla A(y_0)$, which in particular tells us that $\nabla g_1(\zeta_0)=0$ at some point $\zeta_0\in B_r(\mathbf{p}_{\pi_{1}}(x_0))$.
Proposition [Proposition 13](#p:step1){reference-type="ref" reference="p:step1"}(2), (3), and [\[e:hseminorm\]](#e:hseminorm){reference-type="eqref" reference="e:hseminorm"} and the hypotheses $\max_i\mathop{\mathrm{Lip}}(f_i)\leq \frac{1}{4}$ and $\max_i [\nabla f_i]_{C^{\alpha}}\leq M$ tell us that for each $x\in B_{r}(\mathbf{p}_{\pi_{1}}(x_0))$ we have $$\label{e:lemma2.1(i)} |\nabla g_1(x)| \lesssim_\alpha [\nabla g_1]_{C^{\alpha}} r^\alpha \leq C [\nabla f_1]_{C^{\alpha}} r^\alpha \leq C_1 r^\alpha,$$ where $C_1= C_1(\mathop{\mathrm{Lip}}(f_1),m,n,\alpha,M)$. This confirms hypothesis (i) of Lemma [Lemma 5](#l:almostmin){reference-type="ref" reference="l:almostmin"}. Now, for any $x\in B_{2r}(x'_0)$ we have $$\begin{aligned} |\nabla f_i(x) - \nabla f_j(x)| &\lesssim_{ C_4}|f_i(x) - f_j(x)|^\alpha \\ & \lesssim_{\alpha} |f_i(x) - f_i(x_i)|^\alpha + |f_i(x_i)-f_j(x_j)|^\alpha + |f_j(x_j)-f_j(x)|^\alpha \\ &\le (1+\max_i \mathop{\mathrm{Lip}}(f_i)) r^\alpha \lesssim r^\alpha \end{aligned}$$ which shows that, due to the choice of $\delta=\frac{1}{2}$, $\Lambda\leq \frac{1}{2}$, and $M$, the right-hand side of [\[e:relgrad\]](#e:relgrad){reference-type="eqref" reference="e:relgrad"} can be controlled by $$|f_{i}(y_{i}^{-1}) - f_{j}(y_{j}^{-1})|^{\alpha} + r^{\alpha},$$ with constant depending on $M, m,n,\alpha$. Since each $f_{i}$ is Lipschitz and has graph intersecting $\mathbf{B}_{r}(x_0)$, the difference in values between $f_{i}$ and $f_{j}$ can also be controlled by a constant multiple of $r$. Therefore [\[e:relgrad\]](#e:relgrad){reference-type="eqref" reference="e:relgrad"} guarantees $$\label{e:diff-g} |\nabla g_i(z) - \nabla g_j(z)| \lesssim_{M,m,n} r^\alpha,$$ thus verifying hypothesis (ii) of Lemma [Lemma 5](#l:almostmin){reference-type="ref" reference="l:almostmin"}.
Meanwhile, hypothesis (iii) of Lemma [Lemma 5](#l:almostmin){reference-type="ref" reference="l:almostmin"} follows from [\[e:lemma2.1(i)\]](#e:lemma2.1(i)){reference-type="eqref" reference="e:lemma2.1(i)"}, [\[e:diff-g\]](#e:diff-g){reference-type="eqref" reference="e:diff-g"} with $j=1$, and the triangle inequality. Thus we may apply Lemma [Lemma 5](#l:almostmin){reference-type="ref" reference="l:almostmin"} with $\Omega' = B_r(\mathbf{p}_{\pi_{1}}(x_0))$ to deduce that $$\label{e:dir} \sum_{i=1}^{q}\int_{B_{r}(\mathbf{p}_{\pi_{1}}(x_0))} |\nabla g_i|^2 \leq \bar C q r^{2\alpha}|B_{r}(\mathbf{p}_{\pi_{1}}(x_0))|.$$ Since $\partial S$ has empty boundary, invoking [\[e:0thorder\]](#e:0thorder){reference-type="eqref" reference="e:0thorder"} in turn yields $$\begin{aligned} \|\mathbf{G}_{F_1}\|(\mathbf{C}_{r}(x_0,\pi_1)) & = \sum_{i=1}^{q} \|\mathbf{G}_{g_i}\|(\mathbf{C}_r(x_0,\pi_1)) \\ &\leq q|B_r(\mathbf{p}_{\pi_1}(x_0))| + \frac{1}{2}\sum_{i=1}^{q}\int_{B_{r}(\mathbf{p}_{\pi_{1}}(x_0))} |\nabla g_i|^2 \\ &\qquad+ \sum_{i=1}^{q}\int_{B_{r}(\mathbf{p}_{\pi_{1}}(x_0))} O\big(|\nabla g_i|^4\big) \\ &\leq q(1+Cr^{2\alpha})|B_r(\mathbf{p}_{\pi_1}(x_0))| \\ & \le \|\mathbf{G}_{F_{1}} + \partial S\|(\mathbf{C}_{r}(x_0,\pi_1)) +C_0 r^{2\alpha}|B_r(\mathbf{p}_{\pi_1}(x_0))|. \end{aligned}$$ Subtracting $\| \mathbf{G}_{F_{1}}\|\left(\mathbf{C}_{r}(x_{0}, \pi_{1}) \setminus \mathbf{B}_{r}(x_{0})\right)$ from each side and recalling that $\mathbf{G}_{F} \mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex}\mathbf{B}_{r}(x_{0}) = \mathbf{G}_{F_{1}} \mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex}\mathbf{B}_{r}(x_{0})$ yields $$\label{e:am-concl} \|\mathbf{G}_{F}\|(\mathbf{B}_{r}(x_{0})) \le \| \mathbf{G}_{F} + \partial S\|(\mathbf{B}_{r}(x_{0})) + C_{0} r^{m+2 \alpha},$$ for some $C_0=C_0(m,n,M,\alpha)>0$.
◻ *Proof of Proposition [Proposition 9](#p:dprop){reference-type="ref" reference="p:dprop"}.* Fix two indices $i\neq j$ and let $g\coloneqq f_i - f_j$. We will show that for any $x\in\mathbb{R}^m$ it holds that $$\label{e:C1alpha-scalar} |\nabla g(x)| \le 2\max\left\{1, [\nabla g]_{C^{\alpha_*}(B_\varepsilon(x))}\right\} |g(x)|^{\alpha} \equiv 2\max\left\{1, [\nabla g]_{C^{\alpha_*}(B_\varepsilon(x))}\right\} |g(x)|^{\frac{\alpha_*}{1+\alpha_*}}.$$ If $g$ is constant, there is nothing to prove. Otherwise, suppose to the contrary that, after possibly replacing $g$ by $-g$ and translating, $g(0) =R^{1+\alpha_*}> 0$, but $|\nabla g(0)| > 2 C_{g} R^{\alpha_*}$, where $C_{g} \vcentcolon = \max\{1,[\nabla g]_{C^{\alpha_*}(B_\varepsilon)}\}$. Further, by rotation, we may assume that $|\nabla g(0)| = \partial_1 g(0) > 2 C_g R^{\alpha_*}$. Then for every $\xi \in [-R e_1,0]$, the $C^{1,\alpha_*}$-regularity of $g$ yields $|\partial_1 g(0) - \partial_1 g(\xi)| \le C_{g} R^{\alpha_*}$, which in turn implies $$\label{e:fprimeeps} \partial_1 g(\xi) > C_{g} R^{\alpha_*} \qquad \forall \xi \in [-Re_1,0].$$ If $[-R e_1,0]\subset \mathop{\mathrm{supp}}(g)$, the Fundamental Theorem of Calculus yields $$R^{1+\alpha_*} - g(- R e_1) = \int_{-R}^{0} \partial_{1} g( t e_{1}) dt \overset{\eqref{e:fprimeeps}}{>} C_{g} R^{1+\alpha_*}.$$ Hence $g(-R e_1) < (1- C_{g}) R^{1+\alpha_*} \le 0$. By the mean value theorem there exists $t \in [- R, 0]$ so that $g(t e_1) = 0$. By hypothesis, it follows that $\partial_1 g(t e_1) = 0$, contradicting [\[e:fprimeeps\]](#e:fprimeeps){reference-type="eqref" reference="e:fprimeeps"} in this case. On the other hand, if $[-R e_1,0]$ is not contained in $\mathop{\mathrm{supp}}(g)$, then we can immediately find $t\in [-R,0]$ with $g(te_1)=\partial_1 g(t e_1) = 0$, again contradicting [\[e:fprimeeps\]](#e:fprimeeps){reference-type="eqref" reference="e:fprimeeps"}.
Thus, we conclude that [\[e:C1alpha-scalar\]](#e:C1alpha-scalar){reference-type="eqref" reference="e:C1alpha-scalar"} indeed holds for every $x\in\mathbb{R}^m$. Since by assumption $f_{i} \in C^{1,\alpha}\cap\mathop{\mathrm{Lip}}_{1/4}(\mathbb{R}^{m};\mathbb{R})$, the hypotheses of Proposition [Proposition 6](#c:almostmin){reference-type="ref" reference="c:almostmin"} are satisfied. Applying Proposition [Proposition 6](#c:almostmin){reference-type="ref" reference="c:almostmin"} completes the proof. ◻

# Proof of main Theorems {#s:pf}

We will frequently use the notion of a Whitney decomposition $\mathscr{W}$ of $\mathbb{R}^m\setminus E$, for a given closed set $E\subset \mathbb{R}^m$ (or more generally, for a closed subset of an $m$-dimensional plane $\pi_0\subset \mathbb{R}^{m+n}$). Such a decomposition $\mathscr{W}$ consists of a family of closed dyadic cubes $L \subset \mathbb{R}^m \setminus E$ with

- $\mathop{\mathrm{dist}}(L, E) \simeq \ell(L)$;
- each cube $L$ intersects at most $\Lambda=\Lambda(m)\in \mathbb{N}$ cubes in $\mathscr{W}$;
- there exists $\lambda=\lambda(m)>0$ such that if $L_1,L_2\in \mathscr{W}$ satisfy $L_1\cap L_2\neq \emptyset$ then $\lambda^{-1} \ell(L_2) \leq \ell(L_1)\leq \lambda \ell(L_2)$.

Before starting the proof, given $k \in \mathbb{N}$ and $\alpha_{*} \in (0,1]$, we construct a regularized $(k+\alpha_{*})$-power of the distance to a closed set $E \subset \mathbb{R}^{m}$, denoted by $\eta = \eta_{k,\alpha_{*},E}$. This function, defined below (see [\[e:smoothd\]](#e:smoothd){reference-type="eqref" reference="e:smoothd"}), is a direct analogue of the regularized distance function $\Delta$ in [@SteinSI VI.2, Theorem 2].
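The construction of $\eta$ below (see [\[e:smoothd\]](#e:smoothd){reference-type="eqref" reference="e:smoothd"}) is algorithmic: one sums bump functions over Whitney cubes, weighted by $\ell(L)^{k+\alpha_*}$. The following self-contained one-dimensional sketch, with a hand-built Whitney family for $E=\mathbb{R}\setminus(0,1)$ (the helper names `bump`, `whitney_intervals`, and `eta` are ours, introduced only for illustration), checks numerically that the resulting sum is comparable to $\mathop{\mathrm{dist}}(\cdot,E)^{k+\alpha_*}$, as asserted in [\[e:i\]](#e:i){reference-type="eqref" reference="e:i"}.

```python
import math

def bump(t):
    # Smooth cutoff: 1 for |t| <= 0.5, 0 for |t| >= 0.6, C^infinity in between.
    s = abs(t)
    if s <= 0.5:
        return 1.0
    if s >= 0.6:
        return 0.0
    u = (s - 0.5) / 0.1
    f0 = math.exp(-1.0 / u)
    f1 = math.exp(-1.0 / (1.0 - u))
    return f1 / (f0 + f1)

def whitney_intervals(depth=30):
    # Whitney family for (0,1) relative to E = R \ (0,1): dyadic intervals
    # with dist(L, E) = len(L), returned as (center, length) pairs.
    cubes = [(0.375, 0.25), (0.625, 0.25)]            # [1/4,1/2] and [1/2,3/4]
    for n in range(2, depth):
        l = 2.0 ** (-n - 1)
        cubes.append((2.0 ** (-n - 1) + l / 2, l))        # [2^{-n-1}, 2^{-n}]
        cubes.append((1.0 - 2.0 ** (-n - 1) - l / 2, l))  # mirror image near 1
    return cubes

def eta(x, k, alpha_star, cubes):
    # Regularized (k + alpha_*)-power of dist(x, E), as in (e:smoothd).
    return sum(l ** (k + alpha_star) * bump((x - c) / l) for c, l in cubes)

k, a = 1, 0.5
cubes = whitney_intervals()
ratios = [eta(i / 1000.0, k, a, cubes) / min(i / 1000.0, 1 - i / 1000.0) ** (k + a)
          for i in range(1, 1000)]
print(round(min(ratios), 3), round(max(ratios), 3))
```

Both printed ratios stay within dimensional bounds: the lower bound comes from the cube containing $x$ (where the bump equals $1$), the upper bound from the bounded overlap of the family, matching the proof of Lemma 11 below.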
Crucially, in Lemma [Lemma 11](#t:uniformity){reference-type="ref" reference="t:uniformity"} we show that for any closed set $E$, the function $\eta$ enjoys the following properties $$\label{e:i} \eta(x) \simeq_{m,k,\alpha_*} \mathop{\mathrm{dist}}(x, E)^{k+\alpha_*}, \qquad x\in \mathbb{R}^m,$$ and for any multi-index $\beta$, $$\label{e:ii} |\partial^\beta \eta(x)| \lesssim_{|\beta|,m,k,\alpha_*} \mathop{\mathrm{dist}}(x,E)^{k+\alpha_*-|\beta|}, \qquad x\in \mathbb{R}^m$$ Let $\mathscr{W}$ be a Whitney decomposition of $\mathbb{R}^{m} \setminus E$ and let $\varphi \in C_{c}^{\infty}(\mathbb{R}^{m} ; [0, \infty))$ be a smooth cut-off function satisfying $\varphi \equiv 1$ on $[-0.5,0.5]^m$, $\mathop{\mathrm{supp}}(\varphi) \subset [-0.6,0.6]^m$. Consider $$\label{e:smoothd} \eta_{k,\alpha_*,E}(x) \vcentcolon = \sum_{L \in \mathscr{W}} \ell(L)^{k+\alpha_*} ~ \varphi \left( \frac{x-x_{L}}{\ell(L)} \right),$$ where $x_{L}$ denotes the center of $L$ and $\ell(L)$ is the side-length of $L$. Note that the properties of a Whitney decomposition guarantee that the sum in [\[e:smoothd\]](#e:smoothd){reference-type="eqref" reference="e:smoothd"} is locally finite, with a uniformly bounded number of summands. **Lemma 11**. *Let $k \in \mathbb{N}$, $M>0$, $\alpha_{*} \in (0,1]$, and $\alpha = \frac{k+\alpha_{*}-1}{k+\alpha_{*}}$. Let $E \subset \mathbb{R}^{m}$ be a closed set. If $\eta\equiv \eta_{k,\alpha_*,E}$ is as in [\[e:smoothd\]](#e:smoothd){reference-type="eqref" reference="e:smoothd"}, then $\eta\in C^\infty(\mathbb{R}^m\setminus E)$ satisfies [\[e:i\]](#e:i){reference-type="eqref" reference="e:i"} and [\[e:ii\]](#e:ii){reference-type="eqref" reference="e:ii"}. 
If $E$ additionally satisfies $\mathop{\mathrm{dist}}(x,E) \le M$ for all $x \in \mathbb{R}^{m}$, then $$\|\eta \|_{C^{k,\alpha_{*}}(\mathbb{R}^{m})} \lesssim_{m,k,\alpha_{*},M} 1.$$* *Proof.* Let us first demonstrate the validity of [\[e:i\]](#e:i){reference-type="eqref" reference="e:i"} and [\[e:ii\]](#e:ii){reference-type="eqref" reference="e:ii"}. Note that [\[e:i\]](#e:i){reference-type="eqref" reference="e:i"} follows trivially in the case where $x\in E$. Now fix $x\in \mathbb{R}^m\setminus E$. Observe that if $x\in 1.2 L$ for $L\in \mathscr{W}$, then $$\ell(L) \simeq \mathop{\mathrm{dist}}(L,E) \lesssim \mathop{\mathrm{dist}}(x,E) \lesssim \mathop{\mathrm{dist}}(L,E) + \ell(L) \lesssim \ell(L),$$ and if $x \not \in 1.2 L$, $\varphi \left( \frac{x-x_{L}}{\ell(L)} \right) = 0$. Combining this with the boundedness of $\varphi$ and the local finiteness of the sum in [\[e:smoothd\]](#e:smoothd){reference-type="eqref" reference="e:smoothd"} yields the conclusion of [\[e:i\]](#e:i){reference-type="eqref" reference="e:i"}. The conclusion of [\[e:ii\]](#e:ii){reference-type="eqref" reference="e:ii"} follows by entirely analogous reasoning, combined with the boundedness of $\partial^{\beta} \varphi$ and the fact that $$\label{e:gradd} \partial^\beta \eta(x) = \sum_{L \in \mathscr{W}} \ell(L)^{k+\alpha_*-|\beta|}\partial^\beta\varphi\left(\frac{x-x_L}{\ell(L)}\right),$$ for every multi-index $\beta$. In particular, whenever $|\beta| \le k$, [\[e:ii\]](#e:ii){reference-type="eqref" reference="e:ii"} implies that $$\label{e:ine} \lim_{y \to x} |\partial^{\beta} \eta(y)| = 0 \qquad \forall x \in E,$$ verifying that $\| \eta \|_{C^{k}(\mathbb{R}^{m})} \lesssim_{m,k,\alpha_{*},M} 1$. So, it only remains to check that for any $|\beta| = k$, $[\partial^{\beta} \eta]_{C^{\alpha_{*}}(\mathbb{R}^{m})} \lesssim_{m,k,\alpha_{*},M} 1$. Note that [\[e:ine\]](#e:ine){reference-type="eqref" reference="e:ine"} confirms there is nothing to show when $x,y \in E$. 
So, for the remainder of the proof it suffices to consider, without loss of generality, that $x \in \mathbb{R}^{m} \setminus E$. We turn our attention to the case when $\max \{\mathop{\mathrm{dist}}(x,E), \mathop{\mathrm{dist}}(y,E) \} \le |x-y|$. Since $\partial^{\beta} \varphi$ is bounded, [\[e:gradd\]](#e:gradd){reference-type="eqref" reference="e:gradd"} yields $$\label{e:crude} |\partial^{\beta} \eta(x) - \partial^{\beta} \eta(y)| \lesssim \sum_{L \in\mathscr{W}} \max\{\mathop{\mathrm{dist}}(x,E), \mathop{\mathrm{dist}}(y,E)\}^{\alpha_*} \lesssim |x-y|^{\alpha_*}.$$ When $\min\{\mathop{\mathrm{dist}}(x,E),\mathop{\mathrm{dist}}(y,E)\} \ge |x-y|$, we first note that the definition of a Whitney decomposition guarantees that all the sums are taken over only those cubes $L \in \mathscr{W}_x \cup \mathscr{W}_y$ where $\mathscr{W}_x = \{ L \in \mathscr{W}: x \in 1.2 L \}$ and $\mathscr{W}_y$ is defined analogously. In particular, all sums are finite and the number of summands is bounded uniformly by a dimensional constant independent of $x,y$. 
Exploiting this and the Lipschitz regularity of $\partial^\beta\varphi$, [\[e:gradd\]](#e:gradd){reference-type="eqref" reference="e:gradd"} implies that $$\begin{aligned} \nonumber | \partial^\beta \eta(x) - \partial^\beta \eta(y)| &\lesssim \sum_{L \in \mathscr{W}_x} \ell(L)^{\alpha_*} \left|\frac{x-y}{\ell(L)}\right| + \sum_{L \in \mathscr{W}_y} \ell(L)^{\alpha_*} \left|\frac{x-y}{\ell(L)}\right| \\ \nonumber &\lesssim \sum_{L \in \mathscr{W}_x} \mathop{\mathrm{dist}}(x,E)^{\alpha_*} \left|\frac{x-y}{\mathop{\mathrm{dist}}(x,E)}\right| + \sum_{L \in \mathscr{W}_y} \mathop{\mathrm{dist}}(y,E)^{\alpha_*} \left|\frac{x-y}{\mathop{\mathrm{dist}}(y,E)}\right| \\ \nonumber &= |x-y|^{\alpha_*}\left[\sum_{L\in \mathscr{W}_x} \left|\frac{x-y}{\mathop{\mathrm{dist}}(x,E)}\right|^{1-\alpha_*} + \sum_{L \in \mathscr{W}_y}\left|\frac{x-y}{\mathop{\mathrm{dist}}(y,E)}\right|^{1-\alpha_*}\right] \\ \label{e:care} &\lesssim |x-y|^{\alpha_*}.\end{aligned}$$ It remains to deal with the case where, without loss of generality, $$\mathop{\mathrm{dist}}(y,E) \leq |x-y| \leq \mathop{\mathrm{dist}}(x,E).$$ To this end, we argue as in [\[e:care\]](#e:care){reference-type="eqref" reference="e:care"} for cubes $L \in \mathscr{W}_{x}$ and as in [\[e:crude\]](#e:crude){reference-type="eqref" reference="e:crude"} for cubes $L \in \mathscr{W}_{y}$: $$\begin{aligned} | \partial^\beta \eta(x) - \partial^\beta \eta(y)| &\lesssim \sum_{L\in \mathscr{W}_x} \ell(L)^{\alpha_*} \left|\frac{x-y}{\ell(L)}\right| + \sum_{L\in \mathscr{W}_y} \ell(L)^{\alpha_*} \\ &\lesssim |x-y|^{\alpha_*}.\end{aligned}$$ ◻ **Remark 12** (Validity of Bombieri's almost-minimality definition). 
Observe that in place of the estimate [\[e:am-concl\]](#e:am-concl){reference-type="eqref" reference="e:am-concl"} in Proposition [Proposition 6](#c:almostmin){reference-type="ref" reference="c:almostmin"}, one can use the preceding calculations and [\[e:0thorder\]](#e:0thorder){reference-type="eqref" reference="e:0thorder"} to instead establish the estimate $$\|\mathbf{G}_{F}\|(\mathbf{B}_{r}(x_{0})) \le \| \mathbf{G}_{F} + \partial S\|(\mathbf{B}_{r}(x_{0})) + C r^{2 \alpha}\| \mathbf{G}_{F} + \partial S\|(\mathbf{C}_{r}(x_{0},\pi_1)).$$ Combining this with the property that $\mathop{\mathrm{dist}}(x,E)\leq 1$ for the set $E$ chosen in the proof of Theorem [Theorem 1](#t:graphs){reference-type="ref" reference="t:graphs"} below, if the set $K$ is taken to be compact (rather than merely closed), then the choice of $\mathbf{G}_F$ therein satisfies the property [\[e:am-Bombieri\]](#e:am-Bombieri){reference-type="eqref" reference="e:am-Bombieri"} with the compact set $F$ taken to be $\overline{B_{M}(0,\pi_0)}\times [0,C_{m,k,\alpha_{*}}]\subset \mathbb{R}^{m+1}$, where $C_{m,k,\alpha_{*}}$ is a positive constant depending on $m,k,\alpha_*$ coming from [\[e:i\]](#e:i){reference-type="eqref" reference="e:i"}, with a choice of $M$ sufficiently large so that $K \Subset \overline{B_{M}(0,\pi_0)}$, for example. Likewise, we can establish the almost-minimality property [\[e:am-Bombieri\]](#e:am-Bombieri){reference-type="eqref" reference="e:am-Bombieri"} for the current $T$ in Theorem [Theorem 3](#t:main-branch){reference-type="ref" reference="t:main-branch"}. *Proof of Theorem [Theorem 1](#t:graphs){reference-type="ref" reference="t:graphs"}.* Let $K \subset \mathbb{R}^{m} \times \{0\}$ be as in Theorem [Theorem 1](#t:graphs){reference-type="ref" reference="t:graphs"}, and let $E = K \cup \{ x \in \mathbb{R}^{m} : \mathop{\mathrm{dist}}(x,K) \ge 1 \}$. Then $E$ satisfies the hypotheses of Lemma [Lemma 11](#t:uniformity){reference-type="ref" reference="t:uniformity"} with $M=1$.
Let $\eta\equiv \eta_{k,\alpha_*,E}$ be given by Lemma [Lemma 11](#t:uniformity){reference-type="ref" reference="t:uniformity"} for this choice of $E$. Then, for $1 \le i \le Q$, consider the functions $$\label{e:fi2} f_{i}(x) = \frac{i \eta(x)}{4 Q \mathop{\mathrm{Lip}}(\eta)}.$$ Then, $\max_{i}\mathop{\mathrm{Lip}}(f_{i}) \le \frac{1}{4}$. In addition, since $\alpha < \alpha_{*}$, Lemma [Lemma 11](#t:uniformity){reference-type="ref" reference="t:uniformity"} tells us that $\max_i [\nabla f_i]_{C^{0,\alpha}}$ $\lesssim_{m,k,\alpha_*} 1$. So, Proposition [Proposition 6](#c:almostmin){reference-type="ref" reference="c:almostmin"} can be applied with some $M$ depending on $m,k,\alpha_*$. Moreover, a direct computation shows that for all $x \in \mathbb{R}^m$, $$\begin{aligned} |\nabla f_{i}(x) - \nabla f_{j}(x)| &\overset{\eqref{e:ii}}{\lesssim}_{k,m,\alpha_{*},Q} |i-j|^{\frac{k+\alpha_{*}-1}{k+\alpha_{*}}}\mathop{\mathrm{dist}}(x,E)^{k+\alpha_{*}-1} \\ &\overset{\eqref{e:i}}{\simeq}_{k,m,\alpha_*,Q} |f_{i}(x) - f_{j}(x)|^{\frac{k+\alpha_{*}-1}{k+\alpha_{*}}}.\end{aligned}$$ This demonstrates that the hypotheses of Proposition [Proposition 6](#c:almostmin){reference-type="ref" reference="c:almostmin"} are satisfied with some constant $C_{4}$ depending only on $m,k,\alpha_{*}$, and $Q$. From Proposition [Proposition 6](#c:almostmin){reference-type="ref" reference="c:almostmin"} it follows that there exist $R_{0}, C_{0}$ depending only on $m, k,\alpha_{*},$ and $Q$ so that [\[e:alm-min-ball\]](#e:alm-min-ball){reference-type="eqref" reference="e:alm-min-ball"} holds for any ball $\mathbf{B}_{r}(x_0) \subset \mathbb{R}^{m+1}$ whenever $r \le R_{0}$. Consequently, the current $T = \mathbf{G}_{F}$ with $F = \sum_{i=1}^{Q} \llbracket f_{i} \rrbracket$ is $(C_{0}, 2 \alpha, R_{0})$-almost minimizing with constants depending only on $m,k,\alpha_{*}$, and $Q$.
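The displayed Hölder-type bound $|\nabla f_i - \nabla f_j| \lesssim |f_i - f_j|^{\frac{k+\alpha_*-1}{k+\alpha_*}}$ can be sanity-checked in a simplified one-dimensional model: below we replace $\eta$ by the unregularized power $\mathop{\mathrm{dist}}(x,E)^{k+\alpha_*}$ with $E=\{0\}$ (a simplification of ours, not the construction above; the names `f` and `df` are illustrative) and verify numerically that the ratio of the two sides is bounded independently of $x$.

```python
def f(i, x, Q, k, a):
    # Model sheet: eta replaced by dist(x, {0})^(k+a); cf. (e:fi2).
    return i * abs(x) ** (k + a) / (4 * Q)

def df(i, x, Q, k, a):
    # Derivative of the model sheet (valid for x != 0).
    return i * (k + a) * abs(x) ** (k + a - 1) * (1.0 if x > 0 else -1.0) / (4 * Q)

Q, k, a = 4, 2, 0.5
alpha = (k + a - 1) / (k + a)
ratios = []
for i in range(1, Q + 1):
    for j in range(1, i):
        for m in range(1, 51):
            x = m / 25.0 - 1.02   # sample on both sides of the singularity at 0
            num = abs(df(i, x, Q, k, a) - df(j, x, Q, k, a))
            den = abs(f(i, x, Q, k, a) - f(j, x, Q, k, a)) ** alpha
            ratios.append(num / den)
print(round(min(ratios), 3), round(max(ratios), 3))
```

For each fixed pair $(i,j)$ the ratio is exactly constant in $x$ (the $|x|$ powers cancel, since $(k+a-1) = (k+a)\alpha$), which is the scaling that makes the hypothesis of Proposition 6 dimensionless.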
To see that $K\subset\mathrm{Sing}(T)$, simply notice that $K\subset\partial E$, and for any $x \in \partial E$, $\mathrm{spt}(T)$ is the union of $Q$ distinct $C^{1,\alpha_*}$-manifolds intersecting at $x$, and thus $\mathrm{spt}(T)$ is not an embedded $C^{1,\alpha_*}$-manifold in any neighborhood of $x$. That is, $\partial E = \mathrm{Sing}(T)$. Finally, by [\[e:fi2\]](#e:fi2){reference-type="eqref" reference="e:fi2"} it follows that $\mathrm{Sing}(T) = \mathfrak{F}_Q(T)$; this completes the proof. ◻ *Proof of Theorem [Theorem 3](#t:main-branch){reference-type="ref" reference="t:main-branch"}.* Let $\alpha=\frac{Qk+1-Q}{Qk+1}$, $\pi_0\coloneqq \mathbb{R}^2\times \{0\}^{2} \subset\mathbb{R}^4$, $z=(x,y)$ denote coordinates in $\pi_0\cong \mathbb{R}^2$, and $K\subset \pi_0$ be an arbitrary closed set with empty interior. We proceed to construct a multi-valued function $F: B_1(0,\pi_0) \to \mathcal{A}_Q(\pi_0^\perp)$ whose reparametrization to an appropriate plane will satisfy the assumptions of Lemma [Lemma 5](#l:almostmin){reference-type="ref" reference="l:almostmin"}. Let $\mathscr{W}$ be a Whitney decomposition of $\pi_0\setminus K$ in $\pi_0$, $\{L_l\}_{l\in \mathbb{N}}$ be an enumeration of the cubes in $\mathscr{W}$ with $\ell(L_l) \le 1$, $z_l$ denote the center of $L_l$, and $r_l \coloneqq \frac{\ell(L_l)}{4} \le \frac{1}{4}$. Define $v: \pi_0 \to \mathcal{A}_Q(\pi_0^\perp)$ as $$v(z) = \sum_{\xi^Q = z^{Qk+1}}\llbracket \xi \rrbracket \eqqcolon \sum_{j=1}^Q \llbracket v_j(z)\rrbracket,$$ the $Q$-valued function whose graph is the holomorphic variety $\{w^Q=z^{Qk+1}\}\subset \mathbb{C}^2 \equiv \mathbb{R}^4$. Here, $v_j:\pi_0\to\pi_0^\perp$ are measurable functions representing the $Q$ roots $\xi$ of $\xi^Q = z^{Qk+1}$. Let $\eta\equiv \eta_{k,\frac{1}{Q},E}:\pi_0\to \mathbb{R}$ be the function given by Lemma [Lemma 11](#t:uniformity){reference-type="ref" reference="t:uniformity"} for the closed set $E\coloneqq \pi_0\setminus B_{1/2}(0)$.
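The $Q$-valued function $v$ above can be sampled concretely: for each $z \neq 0$ its values are the $Q$ complex $Q$-th roots of $z^{Qk+1}$. A minimal sketch follows (the function name `v_sheets` is ours, for illustration only); it is useful, e.g., for visualising the branching behaviour near $z=0$.

```python
import cmath

def v_sheets(z, Q, k):
    # The Q solutions xi of xi**Q == z**(Q*k + 1): the sheets of the
    # holomorphic variety {w^Q = z^(Qk+1)} over the point z.
    p = Q * k + 1
    if z == 0:
        return [0j] * Q
    r, theta = cmath.polar(z)
    base = r ** (p / Q) * cmath.exp(1j * p * theta / Q)
    return [base * cmath.exp(2j * cmath.pi * j / Q) for j in range(Q)]

Q, k = 3, 2
z = 0.4 + 0.3j
for xi in v_sheets(z, Q, k):
    print(xi, abs(xi ** Q - z ** (Q * k + 1)))
```

Near $z=0$ all $Q$ sheets collapse at the rate $|z|^{\frac{Qk+1}{Q}}$, which is the same scaling as the $L^\infty$ bound on $w_l$ used at the end of the proof.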
Note that Lemma [Lemma 11](#t:uniformity){reference-type="ref" reference="t:uniformity"} implies that $\eta\in C_c^{k,\frac{1}{Q}}(\pi_0;\mathbb{R})$ with $\|\eta\|_{C^{k,\frac{1}{Q}}}\lesssim_{k,Q} 1$. We note for later use, that $\mathop{\mathrm{supp}}(\eta)= \overline{B_{1/2}(0)}$ and from [\[e:i\]](#e:i){reference-type="eqref" reference="e:i"} and [\[e:ii\]](#e:ii){reference-type="eqref" reference="e:ii"} it follows that $|\nabla \eta(z)| \lesssim_{Q,k} \eta(z)^{\alpha}.$ In particular, $$\label{e:etabounded} \left| \nabla \eta(z) \right| \eta(z)^{-\alpha} \lesssim_{Q,k} 1.$$ Consider $w\coloneqq \eta v : B_1(0) \to \mathcal{A}_Q(\pi_0^\perp)$. Define the measurable functions $w_l : B_{r_{l}}(z_{l}) \to \mathcal{A}_{Q}(\pi_{0}^{\perp})$ by $$w_l(z) \coloneqq \kappa r_l^{\frac{Qk+1}{Q}} w\left(\frac{z - z_l}{r_l}\right) = \sum_{j=1}^Q \left\llbracket \kappa r_l^{\frac{Qk+1}{Q}} \eta\left(\frac{z - z_l}{r_l}\right) v_j\left(\frac{z - z_l}{r_l}\right)\right\rrbracket,$$ where $\kappa$ depends only on $Q,k$ and is chosen so that given any choice of branch cut for the logarithm on $B_{r_{l}}(z_l)$, each branch of $w_{l}$ satisfies $|\nabla w_{l}| \le \frac{1}{4}$, see [\[e:theconstant\]](#e:theconstant){reference-type="eqref" reference="e:theconstant"}. Now let $\mathcal{U}= \bigcup_{l \in \mathbb{N}} B_{r_l}(z_l)$ and define the measurable $Q$-valued function $F:\pi_0 \to \mathcal{A}_Q(\pi_0^\perp)$ via a selection of measurable functions $g_j:\pi_0\to \pi_0^\perp$ as follows: $$\label{e:keytoentiretyofcasec} F(z) = \sum_{j=1}^Q \llbracket g_j(z) \rrbracket \coloneqq \begin{cases*} Q \llbracket 0 \rrbracket & $z\in \pi_0 \setminus \mathcal{U}$ \\ w_l(z) = \sum_{j=1}^Q \left\llbracket \kappa r_l^{\frac{Qk+1}{Q}} \eta\left(\frac{z - z_l}{r_l}\right) v_j\left(\frac{z - z_l}{r_l}\right)\right\rrbracket & $z\in B_{r_l}(z_l)$. 
\end{cases*}$$ We claim that $\mathbf{G}_F$ is a $(C_0,2\alpha,R_0)$-almost minimizer for an appropriate choice of positive constants $C_0,R_0$ and $\kappa$. Let $\mathbf{B}_r(x_0)\subset \mathbb{R}^4$ be such that $\mathbf{B}_r(x_0)\cap \mathrm{spt}\mathbf{G}_F\neq \emptyset$. We subdivide into three cases, based on $I\coloneqq \{ l : z_l\in B_{2r}(\mathbf{p}_{\pi_{0}}(x_0))\}$ and $I^{*} = \{ l : B_{r_{l}}(z_{l}) \cap B_{r}(\mathbf{p}_{\pi_0}(x_0)) \neq\emptyset\}$: (a) $I=I^*=\emptyset$; (b) $I=\emptyset$ and $I^* \neq \emptyset$; (c) $I\neq\emptyset$. In case (a), $\mathbf{G}_{F} \mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex}\mathbf{B}_{r}(x_{0})$ is the current $Q\llbracket \mathbf{B}_r(x_0)\cap \pi_0\rrbracket$, which is clearly area minimizing in $\mathbf{B}_{r}(x_{0})$. We emphasize a key distinction between the remaining cases: in case (b) we will need to reparameterize the sheets of $F$ over a possibly tilted plane (relative to $\pi_0$), and in case (c) we will not need to reparameterize. This is because in case (c), the plane $\pi_0$ is sufficiently close to optimal, while in case (b) it may not be. The need to reparametrize in case (b) crucially requires us to know that the sheets of $F$ are disjoint single-valued graphs in the entirety of $\mathbf{B}_r(x_0)$, which is due to the lack of branch points in $B_{2r}(\mathbf{p}_{\pi_0}(x_0))$, unlike in case (c). Let us begin with case (c). Here, we will proceed to define an open subset $\Omega'\subset B_{2r}(\mathbf{p}_{\pi_0}(x_0))$ such that $|B_{2r}(\mathbf{p}_{\pi_0}(x_0))\setminus \Omega'| = 0$, and for which the hypotheses of Lemma [Lemma 5](#l:almostmin){reference-type="ref" reference="l:almostmin"} hold directly for the functions $g_i$.
Indeed, let $$\Omega' \coloneqq B_{2r}(\mathbf{p}_{\pi_0}(x_0)) \setminus \bigcup_l [z_l,z_l + r_l].$$ In light of the introduction of a branch cut in each $\Omega'\cap B_{r_l}(z_l)$, combined with the pairwise disjointness of the supports of the functions $w_l$, a choice of the complex logarithm can be made to ensure that $$\label{e:rep} g_j(\zeta) = \kappa r_{l}^{\frac{Qk+1}{Q}} \eta \left( \frac{\zeta-z_{l}}{r_l} \right) \left( \frac{\zeta-z_{l}}{r_l} \right)^{\frac{Qk+1}{Q}}e^{\frac{j2\pi i}{Q}} \qquad \forall \zeta \in \Omega' \cap B_{r_l}(z_l),$$ so that each $g_j$ is a $C^1$ function on $\Omega'$. Now fix any index $l\in I$ and any $\zeta\in \Omega'\cap B_{r_l}(z_l)$. We have $$\begin{aligned} |\nabla g_j(\zeta)| &\lesssim_{Q,k} |\zeta - z_l|^{\frac{Qk+1-Q}{Q}} \eta \left(\frac{\zeta - z_l}{r_l}\right) + r_l^{-1}|\zeta-z_l|^{\frac{Qk+1}{Q}}\left|\nabla\eta\left(\frac{\zeta - z_l}{r_l}\right)\right| \\ &= |g_j(\zeta)|^{\frac{Qk+1-Q}{Qk+1}}\left[\eta\left(\frac{\zeta - z_l}{r_l}\right)^{\frac{Q}{Qk+1}} + \left|\frac{\zeta-z_l}{r_l}\right|\left|\nabla\eta\left(\frac{\zeta - z_l}{r_l}\right)\right|\eta\left(\frac{\zeta-z_l}{r_l}\right)^{-\frac{Qk+1-Q}{Qk+1}}\right].\end{aligned}$$ Now, [\[e:etabounded\]](#e:etabounded){reference-type="eqref" reference="e:etabounded"} and $\mathop{\mathrm{supp}}(\eta)\subset \overline{B_{1/2}(0)}$ imply that $$|\zeta||\nabla \eta(\zeta)|\eta(\zeta)^{-\frac{Qk+1-Q}{Qk+1}} \lesssim_{Q,k} 1 \qquad \forall \zeta \in \pi_0.$$ This, combined with the fact that $r_{l} \le \frac{1}{4}$ and $\|\eta\|_{C^0} \lesssim 1$, yields the existence of a choice of $\kappa$ depending only on $Q$ and $k$ so that for every $\zeta \in \Omega' \cap B_{r_l}(z_l)$: $$\begin{aligned} \label{e:theconstant} \nonumber |\nabla g_j(\zeta)| & \lesssim_{Q,k} |g_j(\zeta)|^{\frac{Qk+1-Q}{Qk+1}} \lesssim_{Q,k} \kappa r_l^{\frac{Qk+1-Q}{Q} - \frac{Qk+1-Q}{Qk+1}}|\zeta-z_l|^{\frac{Qk+1-Q}{Qk+1}} \\ & \le \kappa |\zeta-z_l|^{\frac{Qk+1-Q}{Qk+1}} \le
\frac{1}{4}.\end{aligned}$$ We claim that the penultimate inequality in [\[e:theconstant\]](#e:theconstant){reference-type="eqref" reference="e:theconstant"} yields $$\label{e:pre3.1} |\nabla g_{j}(\zeta)| \lesssim (\mathop{\mathrm{diam}}\Omega')^{\frac{Qk+1-Q}{Qk+1}} \qquad \forall ~\zeta \in \Omega'.$$ Assuming [\[e:pre3.1\]](#e:pre3.1){reference-type="eqref" reference="e:pre3.1"} holds, one can readily check the hypotheses (i), (ii), and (iii) of Lemma [Lemma 5](#l:almostmin){reference-type="ref" reference="l:almostmin"} by the triangle inequality. Let us now proceed to verify [\[e:pre3.1\]](#e:pre3.1){reference-type="eqref" reference="e:pre3.1"}. Note that it is trivial whenever $\nabla g_{j}(\zeta) = 0$. So we assume $\zeta \in \Omega' \cap B_{r_l}(z_l)$ for some $l \in I \cup I^*$. If $l \in I$ then $z_l, \zeta \in B_{2r}(\mathbf{p}_{\pi_0}(x_0))$, so $|\zeta - z_l| \le 4r = \mathop{\mathrm{diam}}\Omega'$ and [\[e:theconstant\]](#e:theconstant){reference-type="eqref" reference="e:theconstant"} implies [\[e:pre3.1\]](#e:pre3.1){reference-type="eqref" reference="e:pre3.1"}. On the other hand, if $l \in I^{*} \setminus I$, because we are in case (c) there exists $l'\in I$ such that $z_{l'}\in B_{2r}(\mathbf{p}_{\pi_0}(x_0))$. Since for each $z_{i}$ we have $B_{2r_{i}}(z_{i}) \subset L_{i} \in \mathscr{W}$, it follows that $|\zeta-z_l| \leq r_{l} \le |z_{l'} - \zeta| \le \mathop{\mathrm{diam}}\Omega'$. Thus, again [\[e:theconstant\]](#e:theconstant){reference-type="eqref" reference="e:theconstant"} implies [\[e:pre3.1\]](#e:pre3.1){reference-type="eqref" reference="e:pre3.1"} in this case as claimed.
We therefore apply Lemma [Lemma 5](#l:almostmin){reference-type="ref" reference="l:almostmin"} with this choice of $\Omega'$, to conclude that [\[e:Dir\]](#e:Dir){reference-type="eqref" reference="e:Dir"} holds for some constant $\bar C=\bar{C}(k,Q)>0$.[^3] Since $|B_{r}(\mathbf{p}_{\pi_0}(x_0)) \setminus \Omega'| = 0$ and $\mathop{\mathrm{diam}}(\Omega')=4r$, [\[e:Dir\]](#e:Dir){reference-type="eqref" reference="e:Dir"} implies [\[e:dir\]](#e:dir){reference-type="eqref" reference="e:dir"}, so proceeding as in the proof of Proposition [Proposition 6](#c:almostmin){reference-type="ref" reference="c:almostmin"} from that point on, we conclude that $$\|\mathbf{G}_F\|(\mathbf{B}_r(x_0))\leq \|\mathbf{G}_F + \partial S\|(\mathbf{B}_r(x_0)) + C_0r^{2+2\alpha},$$ where $C_0= C_0(Q,k)>0$. Note there is no restriction on the scale $r$ in this case. We now turn our attention to case (b). In this case, $|\nabla g_{j}|$ could be large relative to $r^\alpha$, so the hypotheses of Lemma [Lemma 5](#l:almostmin){reference-type="ref" reference="l:almostmin"} cannot be satisfied directly for the $g_i$, and we must instead appeal to Proposition [Proposition 6](#c:almostmin){reference-type="ref" reference="c:almostmin"}. Up to relabelling the indices, choose $q\leq Q$ so that $g_1,\dots,g_q$ denote the functions as defined in [\[e:keytoentiretyofcasec\]](#e:keytoentiretyofcasec){reference-type="eqref" reference="e:keytoentiretyofcasec"} whose graphs intersect $\mathbf{B}_r(x_0)$.
In this case, we claim that the functions $g_{j}$ are single-valued $C^{k,\frac{1}{Q}}$ functions on the entirety of $B_{2r}(\mathbf{p}_{\pi_0}(x_0))$ and, further, that [\[e:rep\]](#e:rep){reference-type="eqref" reference="e:rep"} holds with $\Omega' = B_{2r}(\mathbf{p}_{\pi_0}(x_0))$ and $$\label{e:fullreg} \|g_j\|_{C^{1,\alpha}(\Omega')}\lesssim_{Q,k} 1.$$ That each $g_{j}$ is single-valued and that [\[e:rep\]](#e:rep){reference-type="eqref" reference="e:rep"} holds both follow from the assumption $I = \emptyset$ in case (b). Since [\[e:rep\]](#e:rep){reference-type="eqref" reference="e:rep"} holds, so does [\[e:theconstant\]](#e:theconstant){reference-type="eqref" reference="e:theconstant"}. Since $\Omega'$ is a ball, this guarantees that $\mathop{\mathrm{Lip}}(g_{j}) \le \frac{1}{4}$. Since $I = \emptyset$, observe that $B_{2r}(\mathbf{p}_{\pi_0}(x_0)) \cap B_{r_l}(z_l) \subset \mathbb{H}_{l} \cap B_{r_{l}}(z_{l})$, where $\mathbb{H}_{l}$ is the halfspace through $z_l$ with normal in the direction of $\mathbf{p}_{\pi_{0}}(x_0) - z_{l}$. Thus, it follows from Lemma [Lemma 11](#t:uniformity){reference-type="ref" reference="t:uniformity"}, [\[e:rep\]](#e:rep){reference-type="eqref" reference="e:rep"}, and the fact that $z \mapsto z^{\frac{Qk+1}{Q}}$ is $C^{k,\frac{1}{Q}}(\mathbb{H})$ for any halfspace $\mathbb{H}$ through the origin (with norms depending only on $k,Q$ and not the halfspace) that if $\zeta, \xi \in \Omega' \cap B_{r_{l}}(z_{l})$ then $|\nabla g_{j}(\zeta) - \nabla g_{j}(\xi)| \lesssim_{k,Q} |\zeta - \xi|^{\alpha}$. If $\zeta \in B_{r_{l}}(z_{l})$ and $\xi \in B_{r_{l'}}(z_{l'})$ for distinct $l, l'$ then $|\xi - \zeta| \ge r_{l} + r_{l'}$ since $B_{2r_{i}}(z_{i}) \subset L_{i}$ for all $L_{i} \in \mathscr{W}$ with $\ell(L_{i}) \le 1$.
In particular, recalling $\alpha = \frac{Qk+1-Q}{Qk+1}$ and applying [\[e:theconstant\]](#e:theconstant){reference-type="eqref" reference="e:theconstant"} yields $$| \nabla g_{j}(\xi) - \nabla g_{j}(\zeta)| \le |\nabla g_{j}(\xi)| + |\nabla g_{j}(\zeta)| \lesssim_{Q,k} r_{l}^{\alpha} + r_{l'}^{\alpha} \lesssim |\xi - \zeta|^{\alpha}.$$ Together with the estimate when $\xi,\zeta$ are in the same ball and the fact that $g_{j} \equiv 0$ outside $\cup_{l} B_{r_l}(z_{l})$, it follows that [\[e:fullreg\]](#e:fullreg){reference-type="eqref" reference="e:fullreg"} holds. As a consequence, $[\nabla g_{j}]_{C^{\alpha}(\Omega')} \lesssim 1$, so the only hypothesis of Proposition [Proposition 6](#c:almostmin){reference-type="ref" reference="c:almostmin"} left to check is [\[e:C1alpha.2\]](#e:C1alpha.2){reference-type="eqref" reference="e:C1alpha.2"}, which we now verify. By [\[e:rep\]](#e:rep){reference-type="eqref" reference="e:rep"}, we have $$g_{j}(\zeta) - g_{j'}(\zeta) = g_{j}(\zeta) \left(1 - e^{\frac{2 \pi i (j'-j)}{Q}} \right)$$ for each $\zeta\in B_{2r}(\mathbf{p}_{\pi_0}(x_0))$. Thus, following the computations leading to the first inequality in [\[e:theconstant\]](#e:theconstant){reference-type="eqref" reference="e:theconstant"} (except now for any point $z\in B_{2r}(\mathbf{p}_{\pi_0}(x_0))$), we deduce that for each $j\neq j'$, $$|\nabla g_j(\zeta)-\nabla g_{j'}(\zeta)| \lesssim_{Q,k} |g_j(\zeta) - g_{j'}(\zeta)|^{\frac{Qk+1-Q}{Qk+1}} \qquad \forall \zeta \in B_{2r}(\mathbf{p}_{\pi_0}(x_0)).$$ It follows from Proposition [Proposition 6](#c:almostmin){reference-type="ref" reference="c:almostmin"} that in case (b) there exists $R_0=R_0(k,Q) >0$ such that whenever $r\leq R_0$, $$\|\mathbf{G}_F\|(\mathbf{B}_r(x_0))\leq \|\mathbf{G}_F + \partial S\|(\mathbf{B}_r(x_0)) + C_0r^{2+2\alpha},$$ for some positive constant $C_{0}=C_0(k,Q)$.
Since case (b) is the only case restricting $R_{0}$, we may take this choice of $R_0$ for the conclusion of the theorem. Choosing $C_{0}$ to be the largest constant from the above cases then verifies that $\mathbf{G}_{F}$ is a $(C_{0}, 2\alpha, R_{0})$-almost minimizer in $\mathbb{R}^{4}$. It remains to check that for $T = \mathbf{G}_{F}$ we have $K \subset \mathrm{Sing}(T) = \mathfrak{F}_{Q}(T)$. The fact that $\mathrm{Sing}(T) \supset K$ follows since the singular set is always closed and $\overline{ \cup_{l} \{z_{l}\}} \supset K$, while each $z_l$ is a branching singularity of $\mathbf{G}_{F}$. To see that $\mathrm{Sing}(T)=\mathfrak{F}_Q(T)$, we argue as follows. First of all, clearly $\{z_l\}_{l\in\mathbb{N}}\subset \mathfrak{F}_Q(T)$. It therefore remains to check that $K\subset \mathfrak{F}_Q(T)$. Since $$\| w_{l}\|_{L^\infty} \lesssim_{k,Q} r_{l}^{\frac{Qk+1}{Q}} \simeq \mathop{\mathrm{dist}}( \mathop{\mathrm{supp}}(w_{l}), K)^{\frac{Qk+1}{Q}},$$ and the $w_{l}$ have disjoint support, it follows that for each $x\in K$, $$\mathop{\mathrm{dist}}( \mathbf{B}_{r}(x) \cap \mathrm{spt}(T) , \mathbf{B}_{r}(x) \cap \pi_{0}) \lesssim r^{\frac{Qk+1}{Q}}.$$ This, together with the $Q$-valued graphicality of $T$, in turn yields that for any such $x$, the rescalings[^4] $T_{x,r}\coloneqq (\iota_{x,r})_\sharp T$ satisfy $$T_{x,r}\mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex}\mathbf{B}_1 \overset{*}\rightharpoonup Q\llbracket \pi_0 \cap \mathbf{B}_1 \rrbracket \qquad \text{along any subsequence as $r\downarrow 0$.}$$ ◻

# Reparameterizations

Here, we introduce a key reparameterization result which is used frequently throughout this article. We first recall the following elementary fact.
For an affine function $A$, note that $\frac{1}{\sqrt{1 + \mathop{\mathrm{Lip}}(A)^{2}}}$ and $\frac{\mathop{\mathrm{Lip}}(A)}{\sqrt{1 + \mathop{\mathrm{Lip}}(A)^{2}}}$ are respectively the cosine and sine of the (maximal one-dimensional) angle between the domain of $A$ and the graph of $A$. **Proposition 13**. *Fix $r,\sigma,\Lambda>0$ and set $\tau = (1 + \Lambda^{2})^{-1/2}$. Let $\pi_0$ be an $m$-dimensional plane in $\mathbb{R}^{m+n}$, $x_0=(x', \bar{x}) \in \pi_0\times \pi_0^\perp \equiv \mathbb{R}^{m+n}$, $A : \pi_0 \to \pi_0^{\perp}$ a linear function with graph parallel to some $\varpi \in G(m,m+n)$, and $\delta > 0$ satisfy $$\label{e:closeenough} \delta + \frac{ \mathop{\mathrm{Lip}}(A) \sigma}{\sqrt{1+\mathop{\mathrm{Lip}}(A)^{2}}} \le \tau.$$ Suppose that $f \in \mathop{\mathrm{Lip}}\left( B_{\delta^{-1} r}(x',\pi_0); \pi_0^\perp \right)$. To each $x \in B_{\delta^{-1} r}(x')$ define $y = y(x) \in \varpi$ by $y = \mathbf{p}_{\varpi}(x, f(x))$. In particular, $y' = y(x')$.*

1. *If $\mathop{\mathrm{Lip}}(f - A) \le \Lambda < \infty,$ then there exists $g : B_{\tau \delta^{-1} r}(y') \to \varpi^{\perp}$ so that $\mathop{\mathrm{gr}}(g)= \mathop{\mathrm{gr}}(f) \cap \mathbf{C}_{\tau \delta^{-1} r}(y',\varpi)$.*

2. *If $| f(x') - \bar{x}| \le \sigma \delta^{-1} r$ then it follows that $B_{r}(\mathbf{p}_{\varpi}(x_0)) \subset B_{\tau \delta^{-1} r}(y').$*

3. *If $f \in C^{1,\alpha}(B_{\delta^{-1} r}(x'),\pi_0^\perp)$, there exists $R_{2} = R_{2}([\nabla f]_{C^{\alpha}(B_{\delta^{-1} r}(x'))},m,n,\Lambda)>0$ such that if $\delta^{-1} r \le R_{2}$ then $g \in C^{1,\alpha}(B_{\tau \delta^{-1} r}(y'),\varpi^\perp)$ and there exists a constant $C = C(\mathop{\mathrm{Lip}}(f),m,n)$ so that $$\begin{aligned} \label{e:hseminorm} [\nabla g]_{C^{\alpha}(B_{\tau \delta^{-1} r}(y'))} &\le C [\nabla f]_{C^{\alpha}(B_{\delta^{-1} r}(x'))} \equiv [\nabla (f-A)]_{C^{\alpha}(B_{\delta^{-1} r}(x'))}. \end{aligned}$$*

4.
*If $f_{1}, f_{2} \in \mathop{\mathrm{Lip}}\left( B_{\delta^{-1} r}(x') ; \pi_0^\perp \right)$ satisfy the same hypotheses as $f$ in (1) and (2), with $\delta$ as in (2), and if $g_{1}, g_{2}$ are the corresponding functions from (1) for $f_{1}, f_{2}$ respectively, then $$B_{r}(\mathbf{p}_\varpi(x_0)) \subset B_{\tau\delta^{-1}r}(\mathbf{p}_\varpi(x',f_1(x')))\cap B_{\tau\delta^{-1}r}(\mathbf{p}_\varpi(x',f_2(x')))$$ and for any $z\in B_{r}(\mathbf{p}_\varpi(x_0))$, we have $$\begin{aligned} \nonumber |\nabla g_{1}(z) - \nabla g_{2}(z)|&\le \frac{\mathop{\mathrm{Lip}}(A)^\alpha}{(1 + \mathop{\mathrm{Lip}}(A)^{2})^{\frac{\alpha}{2}}} \min_{i=1,2}[\nabla f_{i}]_{C^{\alpha}(B_{\delta^{-1} r}(x'))} |f_1(y_1^{-1}(z)) - f_2(y_2^{-1}(z))|^\alpha \\ \label{e:relgrad} &\qquad+ \|\nabla f_1 - \nabla f_2\|_{C^0(B_{\delta^{-1} r}(x'))}. \end{aligned}$$* *Proof.* We may without loss of generality assume that $\pi_0 = \mathbb{R}^m \equiv \mathbb{R}^m\times \{0\}^{n}$. To prove (1), note that $\mathop{\mathrm{Lip}}(f-A) \le \Lambda$ implies that for $z_{1}, z_{2} \in \mathop{\mathrm{gr}}(f)$, $$|z_{1} - z_{2}|^{2} = |\mathbf{p}_{\varpi}(z_1) - \mathbf{p}_{\varpi}(z_2)|^{2} + |\mathbf{p}_{\varpi^{\perp}}(z_{1})- \mathbf{p}_{\varpi^{\perp}}(z_{2})|^{2} \le (1+\Lambda^{2})|\mathbf{p}_{\varpi}(z_1) - \mathbf{p}_{\varpi}(z_2)|^{2},$$ so $\mathbf{p}_{\varpi}|_{\mathop{\mathrm{gr}}(f)}$ is invertible on its image. We claim this image contains $B_{\tau \delta^{-1} r}(\mathbf{p}_{\varpi}(x',f(x')))$. Indeed, let $\pi$ denote the translate of $\varpi$ passing through $(x', f(x'))$. 
Then, $\mathop{\mathrm{Lip}}(f - A) \le \Lambda$ implies $$\mathop{\mathrm{gr}}(f) \cap \mathbf{B}_{\delta^{-1} r}((x',f(x'))) \subset \left\{ z \in \mathbf{B}_{\delta^{-1} r}((x', f(x'))) : \mathop{\mathrm{dist}}(z, \pi) \le \Lambda \delta^{-1} r \right\}.$$ Hence, the graphicality of $f$ ensures that $\mathbf{p}_{\varpi}\left(\mathop{\mathrm{gr}}(f) \cap \mathbf{B}_{\delta^{-1} r}((x',f(x')))\right)$ contains the $\mathbf{p}_{\varpi}$-image of the set $$\{ z \in \mathbf{B}_{\delta^{-1} r}((x',f(x'))) : \mathop{\mathrm{dist}}(z, \pi) = \Lambda \tau \delta^{-1} r \}.$$ By the Pythagorean theorem, this image is precisely the disc $B_{\tau \delta^{-1} r}(\mathbf{p}_{\varpi}(x',f(x')))$. Meanwhile, (2) follows since $|f(x') - \bar{x}| \le \sigma \delta^{-1} r$ implies $$\begin{aligned} |\mathbf{p}_{\varpi}( x', f(x')) & - \mathbf{p}_{\varpi}(x_0)| \le \frac{\mathop{\mathrm{Lip}}(A)}{\sqrt{1+\mathop{\mathrm{Lip}}(A)^{2}}} |(x', f(x')) - x_0| \\ & = \frac{\mathop{\mathrm{Lip}}(A)}{\sqrt{1+\mathop{\mathrm{Lip}}(A)^{2}}} |f(x') - \bar{x}| \le \frac{\sigma \delta^{-1} r \mathop{\mathrm{Lip}}(A)}{\sqrt{1+ \mathop{\mathrm{Lip}}(A)^{2}}}. \end{aligned}$$ In particular, if $y \in B_{r}(\mathbf{p}_{\varpi}(x_0))$ then $$|y - \mathbf{p}_{\varpi}( x', f(x'))| \le |y- \mathbf{p}_{\varpi}(x_0)| + \frac{\sigma \delta^{-1} r \mathop{\mathrm{Lip}}(A)}{\sqrt{1+ \mathop{\mathrm{Lip}}(A)^{2}}} \le r + \frac{\sigma \delta^{-1} r\mathop{\mathrm{Lip}}(A)}{\sqrt{1+\mathop{\mathrm{Lip}}(A)^{2}}}$$ so that [\[e:closeenough\]](#e:closeenough){reference-type="eqref" reference="e:closeenough"} implies $y \in B_{\tau \delta^{-1} r}(y')$ as desired. To prove (3), consider the map $x \mapsto y(x)$ and note that it has the explicit form $$y(x) = \mathbf{p}_{\varpi}( x,f(x)).$$ Let $\{e_{i}\}_{i=1}^{m+n}$ be an orthonormal basis for $\mathbb{R}^{m + n}$ adjusted to the decomposition $\mathbb{R}^{m} \times \mathbb{R}^{n}$. 
Since $A$ graphs $\varpi$, it follows that for $i=1, \dots, m$ the vectors $$\xi_{i} \vcentcolon = \frac{e_{i} + A e_{i}}{|e_{i} + A e_{i}|}$$ form an orthonormal basis for the embedding of $\varpi$ in $\mathbb{R}^{m+n}$. Choose $\xi_{m+j}$ for $j=1, \dots, n$ so that $\{\xi_{i}\}_{i=1}^{m+n}$ is an orthonormal basis for $\mathbb{R}^{m+n}$, and write $F(x) \coloneqq \mathbf{p}_{\varpi}(x, A(x))$ for the map corresponding to the affine part $A$. With respect to these bases, it follows that $(\nabla F)^{i}_{j} = \nabla (F \cdot \xi_{i}) \cdot e_{j}= |e_{i} + A(e_{i})| \delta^i_j$ for $i,j =1, \dots, m$, where $\delta^i_{j}$ is the Kronecker delta. In particular, with respect to these coordinates, $(\nabla F)_{1 \le i,j \le m}$ is a diagonal matrix with entries at least $1$. On the other hand, for $i,j = 1, \dots, m$, $$|\nabla (y-F)^{i}_{j}| = |\nabla( (y-F) \cdot \xi_{i} ) \cdot e_{j} |= |\xi_{i}^{t} \left( \nabla \left( f - A \right) \right) e_{j}| \le [\nabla f]_{C^{\alpha}(B_{\delta^{-1} r}(x'))} (\delta^{-1} r)^{\alpha}.$$ Therefore, for $R_{2}$ small enough depending on $\delta, m,n$, and $[\nabla f]_{C^{\alpha}(B_{\delta^{-1} r}(x'))}$, and hence depending on $\Lambda, m,n$, and $[\nabla f]_{C^{\alpha}(B_{\delta^{-1} r}(x'))}$, all eigenvalues of[^5] $(\nabla y) \equiv (\nabla y)_{1\leq i,j \leq m}$ are at least $\frac{1}{2}$, so we may apply the inverse function theorem. Let $y^{-1}$ denote its inverse, which satisfies $$\label{e:invft} \nabla y^{-1}(z) = (\nabla y( y^{-1}(z)))^{-t}.$$ Hence, we can bound $[ \nabla y^{-1}]_{C^{\alpha}(B_{\tau \delta^{-1} r}(y'))}$ in terms of $[\nabla y]_{C^{\alpha}(B_{\delta^{-1} r}(x'))}$. 
Indeed, since $(\nabla y)^{-1}$ is a product of $\det( \nabla y)^{-1}$ and an alternating sum of determinants of the principal minors of $\nabla y$, it follows from the fact that $\det \in \mathop{\mathrm{Lip}}(\mathbb{R}^{m \times m} ; \mathbb{R})$, $\det( \nabla y) \ge 2^{-m}$, and $t \mapsto t^{-1} \in \mathop{\mathrm{Lip}}( [2^{-m},\infty), \mathbb{R})$ that $[\nabla y^{-1}]_{C^{\alpha}(B_{\tau \delta^{-1} r}(y'))}$ is bounded by $[\nabla y]_{C^{\alpha}(B_{\delta^{-1} r}(x'))}$ with only dimensional dependencies. We may analogously bound $\|\nabla y^{-1}\|_{C^0(B_{\tau \delta^{-1}r}(y'))}$ by $\|\nabla y\|_{C^0(B_{\delta^{-1}r}(x'))}$. Observe that for $\Phi(x)\coloneqq \mathbf{p}_{\varpi}^\perp(x,f(x))$, we have $g(z) = \Phi\circ y^{-1}(z)$. The chain rule, together with the regularity of $f$, implies that $g\in C^{1,\alpha}(B_{\delta^{-1} \tau r}(y');\varpi^\perp)$. It remains to check the Hölder estimate. For this, we again use the chain rule to compute $$\begin{aligned} [\nabla g]_{C^{\alpha}} &= [\nabla(\Phi\circ y^{-1})]_{C^{\alpha}} \\ &= [(\nabla y^{-1})^t (\nabla \Phi\circ y^{-1})]_{C^{\alpha}} \\ &\leq \|\nabla \Phi\circ y^{-1}\|_{C^0} [\nabla y^{-1}]_{C^{\alpha}} + [\nabla \Phi\circ y^{-1}]_{C^{\alpha}} \| \nabla y^{-1}\|_{C^0}, \end{aligned}$$ where the domain for all the norms and seminorms is $B_{\tau \delta^{-1} r}(y')$. Combining this with the above bounds on $[ \nabla y^{-1}]_{C^{\alpha}(B_{\tau \delta^{-1} r}(y'))}$ and $\|\nabla y^{-1}\|_{C^0(B_{\tau \delta^{-1}r}(y'))}$, the conclusion follows. Finally, we prove (4). Let $y_i(x) \coloneqq \mathbf{p}_{\varpi}(x,f_i(x))$ for $i=1,2$ (in analogy with (3)), let $\Phi_i$ be the respective functions such that $g_i=\Phi_i\circ y_i^{-1}$, and suppose $z \in B_{r}(\mathbf{p}_{\varpi}(x_0))$. Without loss of generality, suppose that $[\nabla f_1]_{C^{\alpha}(B_{\delta^{-1} r}(x'))}\leq [\nabla f_2]_{C^{\alpha}(B_{\delta^{-1} r}(x'))}$. 
We have $$\begin{aligned} |\nabla g_1(z) - \nabla g_2(z)| &= |\nabla \Phi_1(y_1^{-1}(z)) - \nabla \Phi_2(y_2^{-1}(z))| \\ &\le|\nabla \Phi_1(y_1^{-1}(z)) - \nabla \Phi_1(y_2^{-1}(z))| + |\nabla \Phi_1(y_2^{-1}(z)) - \nabla \Phi_2(y_2^{-1}(z))| \\ & \le [\nabla \Phi_{1}]_{C^{\alpha}} |y_{1}^{-1}(z) - y_{2}^{-1}(z)|^\alpha + \|\nabla \Phi_1 - \nabla \Phi_2\|_{C^0(B_{\delta^{-1} r}(x'))} \\ &\le \frac{\mathop{\mathrm{Lip}}(A)^\alpha}{(1 + \mathop{\mathrm{Lip}}(A)^{2})^{\frac{\alpha}{2}}} [\nabla \Phi_{1}]_{C^{\alpha}(B_{\delta^{-1} r}(x'))} |f_1(y_1^{-1}(z)) - f_2(y_2^{-1}(z))|^\alpha \\ &\qquad+ \|\nabla \Phi_1 - \nabla \Phi_2\|_{C^0(B_{\delta^{-1} r}(x'))} \\ &\le \frac{\mathop{\mathrm{Lip}}(A)^\alpha}{(1 + \mathop{\mathrm{Lip}}(A)^{2})^{\frac{\alpha}{2}}} [\nabla f_1]_{C^{\alpha}(B_{\delta^{-1} r}(x'))} |f_1(y_1^{-1}(z)) - f_2(y_2^{-1}(z))|^\alpha \\ &\qquad+ \|\nabla f_1 - \nabla f_2\|_{C^0(B_{\delta^{-1} r}(x'))}. \end{aligned}$$ ◻ [^1]: In fact, Bombieri demonstrated this for a larger class of gauge functions $r\mapsto\omega(r)$ that satisfy a suitable Dini condition, which in particular includes $r\mapsto r^{2\alpha}$ for $\alpha>0$. This is however compensated by his definition of the regular set being that where $\mathrm{spt}(T)$ is locally merely $C^1$ in place of $C^{1,\alpha}$. [^2]: For example, having an elliptic PDE constraint on the current, like in the semi-calibrated case. [^3]: In particular, $\bar C$ is independent of the specific choice of $\Omega'$ (which here depends on the positioning of the disk $B_{2r}(\mathbf{p}_{\pi_0}(x_0))$). [^4]: Here $\iota_{x,r}(y)\coloneqq \frac{y-x}{r}$ and we let $(\iota_{x,r})_\sharp T$ denote the pushforward current under the map $\iota_{x,r}$. [^5]: At the risk of abusing notation, we are henceforth identifying the $m\times n$ matrix $\nabla y$ with the square $m\times m$ matrix consisting of its non-zero entries.
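Part (1) of Proposition 13 can be sanity-checked numerically in one dimension. The following sketch (all parameter values are ours, not from the proposition) takes $\pi_0$ the $x$-axis in $\mathbb{R}^2$, $A(x)=x$, and a function $f$ with $\mathop{\mathrm{Lip}}(f-A)\le\Lambda=0.3$, and verifies both that the projection onto the tilted line $\varpi$ is injective on $\mathop{\mathrm{gr}}(f)$ (so the reparameterized $g$ is well defined) and the key inequality $|z_1-z_2|^2\le(1+\Lambda^2)|\mathbf{p}_\varpi(z_1)-\mathbf{p}_\varpi(z_2)|^2$ from the proof.

```python
import math

# Illustrative setting (ours): pi_0 = x-axis in R^2, A(x) = x, so the tilted
# line varpi is spanned by u = (1, 1)/sqrt(2).  Take f(x) = x + 0.3*sin(x),
# which satisfies Lip(f - A) <= 0.3 =: Lam.
Lam = 0.3
u = (1 / math.sqrt(2), 1 / math.sqrt(2))

def f(x):
    return x + 0.3 * math.sin(x)

def proj(x):
    # component of the graph point (x, f(x)) along varpi
    return x * u[0] + f(x) * u[1]

xs = [i / 100 for i in range(-200, 201)]
pts = [(x, f(x)) for x in xs]
s = [proj(x) for x in xs]

# p_varpi restricted to gr(f) is injective: the projected coordinate is
# strictly increasing along the graph.
assert all(s2 > s1 for s1, s2 in zip(s, s[1:]))

# The key inequality from the proof of (1):
# |z1 - z2|^2 <= (1 + Lam^2) * |p_varpi(z1) - p_varpi(z2)|^2.
for (x1, y1), (x2, y2), s1, s2 in zip(pts, pts[1:], s, s[1:]):
    assert (x1 - x2) ** 2 + (y1 - y2) ** 2 <= (1 + Lam**2) * (s1 - s2) ** 2 + 1e-12
```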
[arXiv:2309.09634, Max Goering and Anna Skorobogatova, *Flat interior singularities for area almost-minimizing currents*]
On the diophantine equation $An!+Bm!=f(x,y)$\ Saša Novaković\ September 2023\ **Abstract**. Erdös and Obláth proved that the equation $n!\pm m!=x^p$ has only finitely many integer solutions. More generally, under the ABC-conjecture, Luca showed that $P(x)=An!+Bm!$ has finitely many integer solutions for polynomials of degree $\geq 3$. For certain polynomials of degree $\geq 2$, this result holds unconditionally. We consider irreducible homogeneous $f(x,y)\in \mathbb{Q}[x,y]$ of degree $\geq 2$ and show that there are only finitely many $n,m$ such that $An!+Bm!$ is represented by $f(x,y)$. As corollaries we get alternative proofs for the unconditional results of Luca. We also discuss the case of certain reducible $f(x,y)$. Furthermore, we study equations of the form $n!!m!!=f(x,y)$ and $n!!m!!=f(x)$. # Introduction Diophantine equations involving factorials have a long and rich history. For example, Brocard [@BR] and independently Ramanujan [@RA] asked to find all integer solutions of $n!=x^2-1$. Up to now this is an open problem, known as Brocard's problem. It is believed that the equation has only the three solutions $(x,n)=(5,4), (11,5)$ and $(71,7)$. Overholt [@O] observed that a weak form of Szpiro's conjecture implies that Brocard's equation has finitely many integer solutions. Using the ABC-conjecture, Luca [@L] proved that diophantine equations of the form $n!=P(x)$ with $P(x)\in \mathbb{Z}[x]$ of degree $d\geq 2$ have only finitely many integer solutions with $n>0$. If $P(x)$ is irreducible, Berend and Harmse [@BH1] showed unconditionally that $P(x)=H_n$ has finitely many integer solutions, where $H_n$ are highly divisible sequences which also include $n!$. Furthermore, they proved that the same is true for certain reducible polynomials. 
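The three known solutions of Brocard's equation are easy to verify by brute force; a minimal sketch (the search bound $n\le 100$ is ours):

```python
import math

# Search n! = x^2 - 1 for small n; the conjecture is that
# (x, n) = (5, 4), (11, 5), (71, 7) are the only solutions.
solutions = []
fact = 1
for n in range(1, 101):
    fact *= n  # fact = n!
    x = math.isqrt(fact + 1)
    if x * x == fact + 1:
        solutions.append((x, n))
```

Within this range the search returns exactly the three pairs listed above; by the computations of Berndt–Galway, Matson, and Epstein–Glickman cited below, no further solutions exist up to $n<10^{15}$.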
Without assuming the ABC-conjecture, Berend and Osgood [@BO] showed that for arbitrary $P(x)$ of degree $\geq2$ the density of the set of positive integers $n$ for which there exists an integer $x$ such that $P(x)=n!$ is zero. Further progress in this direction was obtained by Bui, Pratt and Zaharescu [@BPZ], where the authors give an upper bound on integer solutions $n\leq N$ to $n!=P(x)$. Of course, there are several polynomials $P(x)$ for which $P(x)=n!$ is known to have either very few integer solutions or none (see for instance [@EO], [@DAB] and [@PS]). Berndt and Galway [@BG] showed that the largest value of $n$ in the range $n<10^9$ for which Brocard's equation $x^2-1=n!$ has a positive integer solution is $n=7$. Matson [@MA] extended the range to $n<10^{12}$ and Epstein and Glickman [@EG] to $n<10^{15}$. Starting from Brocard's problem, variations and generalizations of $x^2-1=n!$ have also been studied (see for instance [@DMU], [@KL], [@MU], [@MUT], [@TY]). For instance, Ulas [@MU] studied, among others, diophantine equations of the form $2^nn!=y^2$ and proved that the Hall conjecture (which is a special case of the ABC-conjecture) implies that the equation has only finitely many integer solutions. Note that $2^nn!$ can also be written in the notation of the Bhargava factorial: $2^nn!=n!_S$, with $S=\{2n+b| n\in \mathbb{Z}\}$. We do not recall the definition of the Bhargava factorial and refer to [@BH] or [@WT] instead. Other diophantine equations involving factorials have proved more tractable. For example, Erdös and Obláth [@EO] proved that $n!\pm m!=x^p$ has only finitely many integer solutions. This result has been generalized to equations of the form $An!+Bm!=P(x)$ where $P(x)$ is a polynomial with rational coefficients of degree $\geq 2$. Luca [@L2] proved that if the degree is $\geq 3$, then the ABC-conjecture implies that $An!+Bm!=P(x)$ has finitely many integer solutions, except when $A+B=0$. 
In this case there are only finitely many solutions with $n\neq m$. For polynomials of the form $P(x)=a_dx^d+a_{d-3}x^{d-3}+\cdots +a_0$ with $a_d\neq 0$ and $d\geq 2$, Gawron [@MG] showed that there are finitely many integer solutions to $n!+m!=P(x)$, provided ABC holds. In [@NO] the author started to study diophantine equations of the following type: let $g(x_1,...,x_r)\in\mathbb{Z}[x_1,...,x_r]$ be a polynomial and let $f$ be a polynomial with either $f\in\mathbb{Q}[x]$ or $f\in\mathbb{Q}[x,y]$. Consider the equation $g(x_1,...,x_r)=f$, where for any of the $x_i$ in $g$ we may also plug in $A^n$ or $n!_S$. Several results concerning the existence of finitely many integer solutions were proved by the author [@NO] for $g(x_1,...,x_r)=bx_1\cdots x_r$ and certain $f$. For details we refer to *loc.cit.* In the above equation, we are mainly interested in $f\in\mathbb{Q}[x]$ or $f\in\mathbb{Q}[x,y]$. The reason for this is the following: it is known that a positive integer is represented by $f(x,y,z)=x^2+y^2+z^2$ if and only if it is not of the form $4^l(8k+7)$, where $l,k$ are non-negative integers. So one can check for instance that there are infinitely many $n$ such that $n!$ or $A^nn!$ are not of the form $4^l(8k+7)$. So for certain $f\in \mathbb{Z}[x,y,z]$ we have an easy strategy to check whether there are infinitely many solutions. In four variables, it is known that $x^2+y^2+z^2+w^2$ represents any positive integer. Some interesting (exponential) diophantine equations are of the above form. Below we give only a few examples: - *superelliptic equations*. - *Erdös-Obláth type equations*: take $g(x_1)=x_1$ and plug in $n!$ and let $g(x,y)=x^p\pm y^p$ or take $g(x_1,x_2)=x_1\pm x_2$ and plug in $n!$ respectively $m!$ and let $f(x)=x^p$. - *Thue-Mahler equation*: take $g(x_1,...,x_r)=x_1\cdot x_2\cdots x_r$ and let $x_i=p_i^{n_i}$. - *Thue-equation*: take $g(x_1,...,x_r)=m$ and let $f(x,y)$ be a homogeneous polynomial of degree $\geq 3$. 
- *Brocard's problem*: take $g(x_1)=x_1$ and plug in $n!$ and let $f(x)=x^2-1$. More generally, let $f(x)$ be any polynomial of degree $\geq 2$ and we get the diophantine equation considered in [@L]. - *Ramanujan-Nagell type equation*: take $g(x_1)=bx_1+D$ and plug in $A^n$ for some positive fixed $A$ and let $f(x)=x^2$. - *Fermat equation*: take $g(x_1,x_2)=x_1^n+x_2^n$ and let $f(x)=x^n$. - *generalizations of Brocard's problem*, see [@WT]: take $g(x_1)=x_1$ and plug in $n!$ and let $f(x,y)$ be any irreducible binary form of degree $\geq 2$. - *generalizations of Brocard's problem*, see [@DA]: take $g(x_1,x_2)=-x_1^2+x_2$ and plug in $n!$ and let $f(x,y)=x^2+y^2-A$. In the examples (i) to (viii) from above, conditionally or unconditionally there are only finitely many integer solutions. In some situations the exact number of integer solutions is known. In this note we want to study two types of equations: first, we want to study the case $g(x_1,x_2)=Ax_1+Bx_2$, where we plug in $n!$ and $m!$ respectively. Formulated in the notation of the Bhargava factorial, we plug in $n!_{\mathbb{Z}}$ respectively $m!_{\mathbb{Z}}$. Therefore, we want to study diophantine equations of the form $An!+Bm!=f(x)$ and $An!+ Bm!=f(x,y)$, where $f(x)$ and $f(x,y)$ are polynomials with rational coefficients and $A,B$ non-zero fixed integers. If $n=m$ and $A+B=0$, then $f(x)=0$ respectively $f(x,y)=0$. Hence, we get infinitely many solutions. They all have $n=m$ and $f(x)=0$ respectively $f(x,y)=0$. If $n=m$ and $A+B\neq 0$, we have the diophantine equations $(A+B)n!=f(x)$ [and]{.nodecor} $(A+B)n!=f(x,y)$. Now these equations have been studied in [@NO]. So we may assume $n>m$. The second type of equations we are interested in involves the double factorial $n!!$, which is closely related to a certain Bhargava factorial. Equations involving the double factorial have been studied by Ulas [@MU]. 
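Two identities for the double factorial, $(2m)!!=2^mm!$ and $(2m+1)!!=(2m+1)!/(2^mm!)$, are used repeatedly in what follows; they are easy to check numerically:

```python
from math import factorial, prod

def double_factorial(n):
    # n!! = n * (n-2) * (n-4) * ... down to 1 or 2 (empty product = 1)
    return prod(range(n, 0, -2))

for m in range(0, 15):
    # even case: (2m)!! = 2^m * m!
    assert double_factorial(2 * m) == 2**m * factorial(m)
    # odd case: (2m+1)!! = (2m+1)! / (2^m * m!)
    assert double_factorial(2 * m + 1) * 2**m * factorial(m) == factorial(2 * m + 1)
```

Both identities follow by separating the even factors $2\cdot 4\cdots 2m=2^mm!$ from the odd ones inside $(2m+1)!$.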
In the present note we also want to generalize some of the results in [@MU] by considering $g(x_1,...,x_r)=bx_1\cdots x_r$ and plugging in the double factorial. Here $b$ is a non-zero integer. More precisely, we study the equations: $bn_1!!\cdots n_r!!=f(x)$ [and]{.nodecor} $bn_1!!\cdots n_r!!=f(x,y)$. Note that if $n=2m$ we have $n!!=2^mm!=m!_S$ where $S=\{2m+b | m \in \mathbb{Z}\}$. And if $n=2m+1$ we have $n!!=\frac{(2m+1)!}{2^mm!}=\frac{(2m+1)!_{\mathbb{Z}}}{m!_S}$.\ The proofs of Theorems 1.9 and 1.10 below need the ABC-conjecture. So we recall its content (see [@LA]). For a non-zero integer $a$, let $N(a)$ be the *algebraic radical*, namely $N(a)=\prod_{p|a}{p}$. Note that $$\begin{aligned} N(a)=\prod_{p|a}{p}\leq \prod_{p\leq a}{p}< 4^a,\end{aligned}$$ where the last inequality follows from a Chebyshev-type result in elementary prime number theory and is called the Finsler inequality. **Conjecture 1** (ABC-conjecture). *For any $\epsilon >0$ there is a constant $K(\epsilon)$ depending only on $\epsilon$ such that whenever $A,B$ and $C$ are three coprime and non-zero integers with $A+B=C$, then $$\begin{aligned} \mathrm{max}\{|A|,|B|,|C|\}<K(\epsilon)N(ABC)^{1+\epsilon}\end{aligned}$$ holds.* We have the following results: **Theorem 1**. *Let $f(x,y)\in\mathbb{Q}[x,y]$ be an irreducible binary form of degree $\geq 2$. Then there are only finitely many $n,m$ such that $An!+ Bm!$ is represented by $f(x,y)$, except when $A+B=0$. In this case there are only finitely many $n,m$ with $n\neq m$. If the degree is $\geq 3$, then the equation $$\begin{aligned} An!+Bm!=f(x,y) \end{aligned}$$ has only finitely many integer solutions, except when $A+B=0$. In this case there are only finitely many integer solutions with $n\neq m$.* **Theorem 2**. *Let $f(x,y)\in\mathbb{Q}[x,y]$ be a polynomial with factorization* *$f(x,y)=f_1(x,y)^{e_1}\cdots f_u(x,y)^{e_u}$,* *where the $f_i(x,y)$ are distinct irreducible homogeneous polynomials of degree $d_i$. 
If $\mathrm{min}\{d_1e_1,...,d_ue_u\}>1$, then there are only finitely many $n,m$ such that $An!+Bm!$ is represented by $f(x,y)$, except when $A+B=0$. In this case there are only finitely many $n,m$ with $n\neq m$. If the degree is $\geq 3$, then the equation $$\begin{aligned} An!+Bm!=f(x,y)\end{aligned}$$ has only finitely many integer solutions, except when $A+B=0$. In this case there are only finitely many integer solutions with $n\neq m$.* Theorems 1.1 and 1.2 generalize the main result of Luca [@L2] to certain binary forms. Consequently, we recover Luca's unconditional results as corollaries: **Corollary 3**. *Let $f(x)\in\mathbb{Q}[x]$ be an irreducible polynomial of degree $\geq 2$. Then the diophantine equation $An!+Bm!=f(x)$ has only finitely many integer solutions, except when $A+B=0$. In this case there are only finitely many integer solutions with $n\neq m$.* **Corollary 4**. *Let $f(x)\in\mathbb{Q}[x]$ be a polynomial with factorization* *$f(x)=f_1(x)^{e_1}\cdots f_u(x)^{e_u}$,* *where the $f_i(x)$ are distinct irreducible polynomials of degree $d_i$. If $\mathrm{min}\{d_1e_1,...,d_ue_u\}>1$, then the equation $An!+ Bm!=f(x)$ has only finitely many integer solutions, except when $A+B=0$. In this case there are only finitely many integer solutions with $n\neq m$.* **Theorem 5**. *Let $f(x,y)\in\mathbb{Q}[x,y]$ be an irreducible binary form of degree $d>r$ and $b$ a non-zero integer. Then there are only finitely many $(n_1,...,n_r)$ such that $bn_1!!\cdots n_r!!$ is represented by $f(x,y)$. If the degree is $\geq 3$, then the equation $$\begin{aligned} bn_1!!\cdots n_r!!=f(x,y)\end{aligned}$$ has only finitely many integer solutions.* **Theorem 6**. *Let $b$ be a non-zero integer and $f(x,y)\in\mathbb{Q}[x,y]$ a polynomial with factorization* *$f(x,y)=f_1(x,y)^{e_1}\cdots f_u(x,y)^{e_u}$,* *where the $f_i(x,y)$ are distinct irreducible homogeneous polynomials of degree $d_i$. 
Now if $d_i\geq2$ and $d_1e_1+\cdots +d_ue_u>r$ or if $\mathrm{min}\{d_1e_1,...,d_ue_u\}>r$, then there are only finitely many $(n_1,...,n_r)$ such that $bn_1!!\cdots n_r!!$ is represented by $f(x,y)$. If the degree is $\geq 3$, then the equation $$\begin{aligned} bn_1!!\cdots n_r!!=f(x,y) \end{aligned}$$ has only finitely many integer solutions.* Note that if $d\leq r$ one usually has infinitely many solutions. Indeed, let $f(x,y)=a_dx^d+a_{d-1}x^{d-1}y+\cdots + a_1xy^{d-1}+a_0y^d$ and assume $a:=a_d+\cdots +a_0\neq 0$. If $x=y$, we obtain the equation $bn_1!!\cdots n_r!!=f(x,y)=ax^d$. Now if all $n_i$ are even then [@NO], Theorem 1.1 yields that there are infinitely many solutions for $b\neq 0$. **Corollary 7**. *Let $f(x)\in\mathbb{Q}[x]$ be an irreducible polynomial of degree $d>r$ and $b$ a non-zero integer. Then the diophantine equation $bn_1!!\cdots n_r!!=f(x)$ has only finitely many integer solutions.* **Corollary 8**. *Let $b$ be a non-zero integer and $f(x)\in\mathbb{Q}[x]$ a polynomial with factorization* *$f(x)=f_1(x)^{e_1}\cdots f_u(x)^{e_u}$,* *where the $f_i(x)$ are irreducible polynomials of degree $d_i$. If $d_i\geq2$ and $d_1e_1+\cdots +d_ue_u>r$ or if $\mathrm{min}\{d_1e_1,...,d_ue_u\}>r$ then the equation $bn_1!!\cdots n_r!!=f(x)$ has only finitely many integer solutions.* Assuming the ABC-conjecture we show: **Theorem 9**. *Let $f(x)\in\mathbb{Q}[x]$ be a polynomial of degree $\geq 2$. Then the ABC-conjecture implies that $n!!=f(x)$ has only finitely many integer solutions.* **Theorem 10**. *Let $f(x)\in\mathbb{Q}[x]$ be a polynomial of degree $\geq 2$ which is not a monomial and has at least two distinct roots and let $b$ be a non-zero integer. Then the ABC-conjecture implies that $bn_1!!\cdots n_r!!=f(x)$ has only finitely many integer solutions.* **Problem 11**. *[Let $A_i$ be fixed non-zero integers. Study the diophantine equations]{.nodecor}* *$\sum_{i=1}^rA_in_i!=f(x)$ [and]{.nodecor} $\sum_{i=1}^rA_in_i!=f(x,y)$.* **Problem 12**. 
*[Let $A_i$ be fixed non-zero integers and $S_i\subset \mathbb{Z}$ infinite subsets. Study the diophantine equations]{.nodecor}* *$\sum_{i=1}^rA_in_{S_i}!=f(x)$ [and]{.nodecor} $\sum_{i=1}^rA_in_{S_i}!=f(x,y)$.* **Problem 13**. *[Let $A_i$ be fixed non-zero integers and $S_i\subset \mathbb{Z}$ infinite subsets. Study the diophantine equations]{.nodecor}* *$\prod_{i=1}^rA_in_{S_i}!=f(x)$ [and]{.nodecor} $\prod_{i=1}^rA_in_{S_i}!=f(x,y)$.* **Acknowledgement**. I am grateful to Wataru Takeda for some helpful explanations. # Proof of Theorems 1.1 and 1.2 First note that after multiplying the equation $An!+Bm!=f(x,y)$ by a certain integer we can assume $f(x,y)\in\mathbb{Z}[x,y]$. To prove the statement of Theorem 1.1 we actually go through the proof of Theorem 4.1 in [@WT] and through the proof of [@EO], Satz 1.4 and adapt both to our situation. We follow the notation of [@WT]. Let $f(x,y)=a_dx^d+a_{d-1}x^{d-1}y+\cdots + a_1xy^{d-1}+a_0y^d$ be an irreducible polynomial and let $K_F$ be the splitting field of $f(x,1)$. Denote by $C_F$ the set of conjugacy classes of the Galois group $G_F=\mathrm{Gal}(K_F/\mathbb{Q})$ whose cycle type $[h_1,...,h_s]$ satisfies $h_i\geq 2$. For a permutation $\sigma$, the cycle type is defined as the ascending ordered list $[h_1,...,h_s]$ of the sizes of the cycles in the cycle decomposition of $\sigma$. For further details we refer to Chapters 2, 3 and 4 in [@WT]. Of particular interest are the proofs of Lemma 2.1, Lemma 3.1, Theorem 3.6 and Theorem 4.1. We proceed with our proof. Since $d\geq 2$, we conclude from [@WT], Lemma 2.1 that $C_F\neq \emptyset$. Now let $C\in C_F$ be a fixed conjugacy class of $G_F$. We say that a prime $p$ corresponds to $C$ if the Frobenius map $(p,K_F/\mathbb{Q})$ belongs to $C$ (see [@WT], chapter 2 for details). 
Let $g=\mathrm{gcd}(a_d,...,a_0)$ and $N=gp_1\cdots p_uq_1^{l_1}\cdots q_v^{l_v}$ where $q_i$ are primes corresponding to a conjugacy class in $C_F$ satisfying $\mathrm{gcd}(q,a\Delta_{mod})=1$ where $a\in\{a_d,a_0\}$ and $p_j$ are the other primes (see [@WT], Lemma 3.1 for details). Here $\Delta_{mod}$ denotes a certain modified discriminant of $f(x,1)$ and is defined by $\Delta_{mod}=\frac{\Delta_{f(x,1)}}{\mathrm{gcd}(a_d,...,a_0)^{2d-2}}$. The assumption $d\geq 2$ and [@WT], Lemma 3.1 then imply that if $N$ is represented by $f(x,y)$ and $q|N$ for a prime $q$ corresponding to $C$ satisfying $\mathrm{gcd}(q,a\Delta_{mod})=1$, then $N$ is divisible by $q^d$ at least. Assume $m>2\mathrm{max}\{|A|,|B|\}$. Now if $q<m<2q$ and if $n>2m$, then there is no solution to the equation $An!+Bm!=f(x,y)$. Indeed, $q<m<2q$ implies $\frac{m}{2}<q<m$. Therefore, $q$ appears with exponent exactly one in $Bm!$. Now if $n>2m$, the prime $q$ appears in $An!$ with exponent at least two. This shows that $An!+Bm!$ is divisible by $q$ but not by $q^2$. And since $d\geq 2$, we conclude that at least $q^2$ divides $f(x,y)$. Now apply [@WT], Theorem 3.6 (as in the proof of Theorem 4.1 in *loc.cit.*) to conclude that there exists a prime $q'$ corresponding to $C$ with $q'\in (q,2q)$ and satisfying $\mathrm{gcd}(q',a\Delta_{mod})=1$. Therefore, by the same argument as before, if $q'<m<2q'$ and $n>2m$ there are no integer solutions for $An!+Bm!=f(x,y)$. By induction we conclude that whenever $m>2\mathrm{max}\{|A|,|B|\}$ and $m>q$ and $n>2m$ there are no integer solutions. So there can be solutions only if $m\leq 2\mathrm{max}\{|A|,|B|\}$, or $m\leq q$, or $n\leq 2m$. We look at each of the seven possible cases. In all the cases where $n\leq 2m$ and $m$ is bounded, that is, $m\leq q$ or $m\leq 2\mathrm{max}\{|A|,|B|\}$, we are done since then $n,m$ are bounded. - The case $m>2\mathrm{max}\{|A|,|B|\}$, $m>q$ and $n\le 2m$: suppose we have infinitely many integer solutions $(n,m,x,y)$ to $An!+ Bm!=f(x,y)$. 
If there are only finitely many such $n,m$ but infinitely many $x,y$ we are done. So we assume there are infinitely many $m$. In this case we can argue as in the proof of Satz 4 in [@EO]. One can use [@WT], Theorem 3.6 to conclude that\ **claim**:\ for large $m$ there is always a prime $p$ corresponding to $C$ and satisfying $\mathrm{gcd}(p,a\Delta_{mod})=1$ such that $\frac{m}{2}<p<\frac{m}{2}+\frac{m}{12\mathrm{log}m}$. **proof of claim**:\ As above, [@WT], Theorem 3.6 implies that there is a prime $q'$ corresponding to $C$ with $q'\in (q,2q)$ and satisfying $\mathrm{gcd}(q',a\Delta_{mod})=1$. By induction, we can assume that there is a prime $q_0$ corresponding to $C$ and satisfying $\mathrm{gcd}(q_0,a\Delta_{mod})=1$ such that $m/2\geq q_0$. Applying [@WT], Lemma 3.5 and Theorem 3.6 again, we conclude that for any $D>1$ there is a prime $p$ corresponding to $C$ with $p\in (q_0,Dq_0)$ and satisfying $\mathrm{gcd}(p,a\Delta_{mod})=1$. As long as $q_0\leq m/2<p$, we see that $\frac{m}{2}<p<\frac{Dm}{2}<\frac{m}{2}+\frac{m}{12\mathrm{log}m}$. Note that $D>1$ can be chosen close enough to $1$ such that the last inequality holds. If $m/2\geq p$, we use the same argument to produce a prime $p'$ corresponding to $C$ with $p'\in (p,Dp)$ and satisfying $\mathrm{gcd}(p',a\Delta_{mod})=1$. Again, as long as $p\leq m/2<p'$, we have $\frac{m}{2}<p'<\frac{Dm}{2}<\frac{m}{2}+\frac{m}{12\mathrm{log}m}$. Now by an induction argument, we get the claim.\ With the help of the claim, we see that $m<2p<m+\frac{m}{6\mathrm{log}m}$. As above, if $n>m+\frac{m}{6\mathrm{log}m}$ we can conclude that $p$ divides $An!+Bm!$ but $p^2$ does not. Since $p^2$ divides $f(x,y)$, there can be no solutions in this case. Therefore, we assume $n\leq m+\frac{m}{6\cdot \mathrm{log}m}$. Since we assumed $n>m$, we can consider $Bm!\bigl(\frac{An!}{Bm!} + 1 \bigr)=f(x,y)$. Then $Bm!$ contains any prime $p$ with $\frac{m}{2}<p\leq m$ with exponent exactly one. 
Therefore, $\frac{An!}{Bm!} + 1$ must contain these primes, too. Using the prime number theorem one can show that for $m$ big enough the product of these primes is bigger than $2^{m/2}+1$. Hence $|\frac{An!}{Bm!}|>2^{m/2}$. From $n\leq m+\frac{m}{6\cdot \mathrm{log}m}<2m$, it follows that $|\frac{An!}{Bm!}|<\frac{|A|}{|B|}n^{(n-m)}<\frac{|A|}{|B|}(2m)^{\frac{m}{6\cdot \mathrm{log}m}}<\frac{|A|}{|B|}2^{\frac{m}{6\cdot \mathrm{log}m}}e^{\frac{m}{6}}$. But for large $m$ this contradicts $|\frac{An!}{Bm!}|>2^{m/2}$. This implies that there can be only finitely many $m$ and hence finitely many $n$. - The case $m>2\mathrm{max}\{|A|,|B|\}$, $m\leq q$ and $n>2m$: Assume $n\geq 2q$, otherwise $n$ and $m$ are bounded. We see that the exponent of $q$ in $Bm!$ is at most one, whereas the exponent of $q$ in $An!$ is at least two. The exponent of $q$ in $f(x,y)$ is also at least two. Hence there are no solutions in this case. - The case $m\leq 2\mathrm{max}\{|A|,|B|\}$, $m\leq q$ and $n>2m$: As above, we can find a prime $q'$ corresponding to $C$ with $q'\in (q,2q)$ and satisfying $\mathrm{gcd}(q',a\Delta_{mod})=1$. We can repeat that argument until we find a prime $q_0>2\mathrm{max}\{|A|,|B|\}$ corresponding to $C$, satisfying $\mathrm{gcd}(q_0,a\Delta_{mod})=1$. Now we assume $n>2q_0$, otherwise $n$ and $m$ are bounded. Now since $m<q_0$ and $q_0>2\mathrm{max}\{|A|,|B|\}$ and $n>2q_0$, we see that the exponent of $q_0$ in $Bm!$ is zero whereas the exponent in $An!$ and $f(x,y)$ is at least two. Consequently, there are no solutions in this case. - The case $m\leq 2\mathrm{max}\{|A|,|B|\}$, $m>q$ and $n>2m$: we argue as in case 3). Thus we can find a prime $q_0$ corresponding to $C$ and satisfying $\mathrm{gcd}(q_0,a\Delta_{mod})=1$ such that $m\leq q_0$. Then assume $n>2q_0$ and continue as in 3). Summarizing, we see that there are only finitely many $n,m$ such that $An!+ Bm!$ is represented by $f(x,y)$. 
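The exponent comparisons in the case analysis above are instances of Legendre's formula $v_p(n!)=\sum_{i\ge 1}\lfloor n/p^i\rfloor$. The following sketch illustrates the key dichotomy (the sample values $q=13$, $m=20$, $n=41$ are ours; as in the proof, $q$ is assumed coprime to $A$ and $B$):

```python
def legendre(n, p):
    # exponent of the prime p in n!, by Legendre's formula
    e = 0
    pk = p
    while pk <= n:
        e += n // pk
        pk *= p
    return e

q, m, n = 13, 20, 41  # q < m < 2q and n > 2m
# q divides m! exactly once ...
assert legendre(m, q) == 1
# ... but divides n! at least twice, so q^2 divides An! while q^2 does not
# divide Bm!, forcing the exact exponent of q in An! + Bm! to be one.
assert legendre(n, q) >= 2
```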
Now we use Thue's theorem to conclude that $An!+Bm!=f(x,y)$ has finitely many integer solutions if the degree of $f(x,y)$ is $\geq 3$.\ Proof of Theorem 1.2:\ If all $d_i\geq 2$, then proceed as follows: let $K_{F_j}$ be the splitting field of $f_j(x,1)$ and denote by $C_{F_j}$ the set of conjugacy classes whose cycle types have sizes $\geq 2$. Now let $q$ be a prime corresponding to a conjugacy class $C\in \cap_{j=1}^uC_{F_j}$ satisfying $\mathrm{gcd}(q,a\Delta_{mod})=1$. Assume $N$ is represented by $f(x,y)$. Then there are $x_0,y_0$ such that $q|f(x_0,y_0)$. Hence there exists a polynomial $f_j(x,y)=a_{j,d_j}x^{d_j}+\cdots + a_{j,0}y^{d_j}$ such that $q|f_j(x_0,y_0)$. Now $\mathrm{gcd}(q,a\Delta_{mod})=1$ with $a\in\{a_d,a_0\}$ implies $\mathrm{gcd}(q,a(j)\Delta_{j, mod})=1$ where $a(j)\in\{a_{j,d_j},a_{j,0}\}$. Here $\Delta_{j, mod}$ denotes the modified discriminant of $f_j(x,1)$. It follows from [@WT], Lemma 3.1 that $q^d|f(x,y)$. The rest of the proof is the same as in the proof of Theorem 1.1. Now let $\mathrm{min}\{d_1e_1,...,d_ue_u\}=d_{i_0}e_{i_0}$ and assume $d_{i_0}=1$ and $e_{i_0}>1$. Then one can argue as in [@WT], Theorem 4.7 to conclude that, let's say, $q$ divides $f(x,y)$ at least $e_{i_0}>1$ times. Again, the rest of the proof is as in the proof of Theorem 1.1 above. # Proof of Theorems 1.5 and 1.6 For better readability, to keep the arguments clear, and to avoid a lot of indices, we formulate the proof only for $r=3$. First note that after multiplying the equation $bn_1!!\cdots n_r!!=f(x,y)$ by a certain integer we can assume that $f(x,y)\in\mathbb{Z}[x,y]$. Secondly, note that if $n=2m$ is even, we have $n!!=2^mm!$. So in the case where $n_1,n_2,n_3$ are all even, we conclude from [@NO], Theorem 1.5 that there are finitely many $(n_1,n_2,n_3)$ such that $bn_1!!n_2!!n_3!!$ is represented by $f(x,y)$. Now consider the case where $n_1,n_2,n_3$ are all odd. 
Note that for $n_i=2m_i+1$ one has $(2m_i+1)!!=\frac{(2m_i+1)!}{2^{m_i}m_i!}$. We rewrite our diophantine equation and consider $b(2m_1+1)!(2m_2+1)!(2m_3+1)!=2^{m_1}m_1!2^{m_2}m_2!2^{m_3}m_3!f(x,y)$. As in the proof of Theorem 1.1, let $K_F$ be the splitting field of $f(x,1)$ and denote by $C_F$ the set of conjugacy classes of the Galois group $G_F=\mathrm{Gal}(K_F/\mathbb{Q})$ whose cycle type $[h_1,...,h_s]$ satisfies $h_i\geq 2$. Since $d\geq 2$, we conclude from [@WT], Lemma 2.1 that $C_F\neq \emptyset$. Now let $C\in C_F$ be a fixed conjugacy class of $G_F$. Again, let $g=\mathrm{gcd}(a_d,...,a_0)$ and $N=gp_1\cdots p_uq_1^{l_1}\cdots q_v^{l_v}$ where $q_i$ are primes corresponding to a conjugacy class in $C_F$ satisfying $\mathrm{gcd}(q,a\Delta_{mod})=1$ where $a\in\{a_d,a_0\}$ and $p_j$ are the other primes (see [@WT], Lemma 3.1 for details). The assumption $d>r>2$ and [@WT], Lemma 3.1 then imply that if $N$ is represented by $f(x,y)$ and $q|N$ for a prime $q$ corresponding to $C$ satisfying $\mathrm{gcd}(q,a\Delta_{mod})=1$, then $N$ is divisible by $q^d$ at least. Let us assume $m_1\geq m_2\geq m_3$ and $m_1>2|b|$. Now if $q<2m_1+1<2q$, then there is no solution to the equation $b(2m_1+1)!(2m_2+1)!(2m_3+1)!=2^{m_1}m_1!2^{m_2}m_2!2^{m_3}m_3!f(x,y)$. Indeed, $q<2m_1+1<2q$ implies $\frac{2m_1+1}{2}<q<2m_1+1$. Therefore, $q$ appears with exponent exactly one in $(2m_1+1)!$. We can assume $m_1>2$. Since $m_3\leq m_2\leq m_1<q$ we see that $q$ does not divide $2^{m_i}m_i!$. Therefore, $q$ must divide $f(x,y)$. As mentioned above, this implies that $q^d$ divides $f(x,y)$. Since $d>r$ by assumption, we conclude that there are no solutions if $q<2m_1+1<2q$. Now by the same induction argument as in the proof of Theorem 1.1 we find that there are no solutions to the above equation whenever $2m_1+1>q$. Now let $n_1,n_2$ be odd and $n_3$ even. Then the diophantine equation becomes $b(2m_1+1)!(2m_2+1)!2^{m_3}m_3!=2^{m_1}m_1!2^{m_2}m_2!f(x,y)$. 
Because of the symmetry with respect to $m_1$ and $m_2$, we need only consider three cases. First assume $m_1\geq m_2\geq m_3$. Then one can apply the arguments from before. The same can be done if $m_1\geq m_3\geq m_2$. The last case is $m_3\geq m_1\geq m_2$. Here we assume $m_3\geq 2|b|$. Now if $q<2m_3+1<2q$, we argue as before. The case where two of the numbers $n_1,n_2,n_3$ are even and one is odd is similar and left to the reader.\ Proof of Theorem 1.6:\ If all $d_i\geq 2$, then proceed as follows: let $K_{F_j}$ be the splitting field of $f_j(x,1)$ and denote by $C_{F_j}$ the set of conjugacy classes whose cycle type has sizes $\geq 2$. Now let $q$ be a prime corresponding to a conjugacy class $C\in \cap_{j=1}^uC_{F_j}$ satisfying $\mathrm{gcd}(q,a\Delta_{mod})=1$. Assume $N$ is represented by $f(x,y)$. Then there are $x_0,y_0$ such that $q|f(x_0,y_0)$. Hence there exists a polynomial $f_j(x,y)=a_{j,d_j}x^{d_j}+\cdots + a_{j,0}y^{d_j}$ such that $q|f_j(x_0,y_0)$. Now $\mathrm{gcd}(q,a\Delta_{mod})=1$ with $a\in\{a_d,a_0\}$ implies $\mathrm{gcd}(q,a(j)\Delta_{j, mod})=1$, where $a(j)\in\{a_{j,d_j},a_{j,0}\}$. Here $\Delta_{j, mod}$ denotes the modified discriminant of $f_j(x,1)$. It follows from [@WT], Lemma 3.1, that $q^d|f(x,y)$. The rest of the proof is the same as in the proof of Theorem 1.5. Now let $\mathrm{min}\{d_1e_1,...,d_ue_u\}=d_{i_0}e_{i_0}$ and assume $d_{i_0}=1$ and $e_{i_0}>1$. Then one can argue as in [@WT], Theorem 4.7, to conclude that, say, $q$ divides $f(x,y)$ at least $e_{i_0}>1$ times. Again, the rest of the proof is as in the proof of Theorem 1.5 above.

# Proof of Theorems 1.9 and 1.10

We can assume $f(x)\in\mathbb{Z}[x]$. First, if $n=2m$ is even we have $n!!=2^mm!$ and the result follows from [@NO], Theorems 1.1 and 1.2. So let $n=2m+1$. Notice that $(2m+1)!!=\frac{(2m+1)!}{2^mm!}$. Let us consider a polynomial $$\begin{aligned} f(x)=a_0x^d+a_1x^{d-1}+\cdots +a_d\end{aligned}$$ with $a_i\in \mathbb{Z}$.
Now multiply the equation $\frac{(2m+1)!}{2^mm!}=f(x)$ by $d^da_0^{d-1}$. We obtain $$\begin{aligned} y^d+b_1y^{d-1}+\cdots +b_d=c(\frac{(2m+1)!}{2^mm!})\end{aligned}$$ for a constant $c$, where $c=bd^da_0^{d-1}$ and $y:=a_0dx$. Notice that $b_i=d^ia_ia_0^{i-1}$, so that we can make the change of variable $z:=y+\frac{b_1}{d}$. Therefore we get the following equation $$\begin{aligned} z^d+c_2z^{d-2}+\cdots +c_d=c(\frac{(2m+1)!}{2^mm!}).\end{aligned}$$ Notice that the $c_i$ are integer coefficients which can be computed in terms of the $a_i$ and $d$. Now let $Q(X)=X^d+c_2X^{d-2}+\cdots +c_d$ and notice that when $|z|$ is large one has $$\begin{aligned} \frac{|z|^d}{2}<|Q(z)|<2|z|^d.\end{aligned}$$ For the rest of the proof we denote by $C_1,C_2,...$ computable positive constants depending on the coefficients $a_i$ and possibly on some small $\epsilon >0$ which comes into play later when applying the ABC-conjecture. Whenever $(m,z)$ is a solution to $\frac{(2m+1)!}{2^mm!}=f(x)$, we conclude from (3) and (4) that there exist constants $C_1$ and $C_2$ such that $$\begin{aligned} |d\cdot\mathrm{log}|z|-\mathrm{log}(\frac{(2m+1)!}{2^mm!})|<C_1,\end{aligned}$$ for $|z|>C_2$ (see [@L], equation (10)). Now let $R(X)\in \mathbb{Z}[X]$ be such that $Q(X)=X^d+R(X)$. If $R(X)=0$, the diophantine equation becomes $c(\frac{(2m+1)!}{2^mm!})=x^d$, which is equivalent to $c(2m+1)!=2^mm!x^d$. Let us assume $m>2|c|$. Then there is a prime $p$ in the interval $((2m+1)/2,2m+1)$ that divides $(2m+1)!$ with exponent one. Since $p>m$ and since $m>2|c|$, we see that $p$ does not divide $2^mm!$. Therefore the exponent of $p$ in $2^mm!x^d$ is either zero or at least two, while its exponent on the left-hand side is exactly one. Hence there are no solutions if $m>2|c|$. So let us assume that $R(X)$ is non-zero and let $j\leq d$ be the largest integer with $c_j\neq 0$.
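The change of variables above (multiplying by $d^da_0^{d-1}$, setting $y=a_0dx$ and $z=y+\frac{b_1}{d}$, which removes the degree-$(d-1)$ term) can be checked with exact rational arithmetic. A sketch, where `depressed_coeffs` is our own illustrative helper, not notation from the paper:

```python
from fractions import Fraction

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def depressed_coeffs(a):
    """For f(x) = a[0] x^d + a[1] x^(d-1) + ... + a[d] with a[0] != 0, return the
    coefficients (lowest degree first) of Q(z) = d^d a[0]^(d-1) f((z - a[1]) / (a[0] d)),
    i.e. the substitution y = a0*d*x, z = y + b1/d from the text."""
    d = len(a) - 1
    a0, a1 = a[0], a[1]
    lin = [Fraction(-a1, a0 * d), Fraction(1, a0 * d)]   # x expressed in terms of z
    q = [Fraction(0)] * (d + 1)
    power = [Fraction(1)]                                # holds lin^e, built incrementally
    for e in range(d + 1):                               # coefficient of x^e is a[d-e]
        for k, ck in enumerate(power):
            q[k] += a[d - e] * ck
        power = poly_mul(power, lin)
    scale = d**d * a0**(d - 1)
    return [scale * c for c in q]
```

For $f(x)=2x^3-3x^2+5x-7$ this produces a monic polynomial in $z$ with integer coefficients and no $z^2$ term, as claimed in the text.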
We rewrite (3) as $$\begin{aligned} z^j+c_2z^{j-2}+\cdots +c_j=\frac{c(\frac{(2m+1)!}{2^mm!})}{z^{d-j}}.\end{aligned}$$ Let $R_1(X)$ be the polynomial $$\begin{aligned} R_1(X):= \frac{R(X)}{X^{d-j}}=c_2X^{j-2}+\cdots +c_j.\end{aligned}$$ It is shown in [@L] that there are constants $C_3$ and $C_4\geq C_2$ such that $$\begin{aligned} 0<|R_1(z)|< C_3|z|^{j-2},\end{aligned}$$ for $|z|> C_4$. So let us write $z^j+R_1(z)=\frac{c(\frac{(2m+1)!}{2^mm!})}{z^{d-j}}$. For $D=\mathrm{gcd}(z^j, R_1(z))$ we get $$\begin{aligned} \frac{z^j}{D}+\frac{R_1(z)}{D}=\frac{c(\frac{(2m+1)!}{2^mm!})}{Dz^{d-j}}.\end{aligned}$$ Applying the ABC-conjecture to $A=\frac{z^j}{D}$, $B=\frac{R_1(z)}{D}$ and $C=\frac{c(2m+1)!}{z^{d-j}D2^mm!}$, we find $$\begin{aligned} \frac{|z|^j}{D}< C_5N(\frac{z^jR_1(z)c(2m+1)!}{D^3z^{d-j}2^mm!})^{1+\epsilon},\end{aligned}$$ where $C_5$ depends only on $\epsilon$. It is shown in [@L], p. 272, that $$\begin{aligned} N(\frac{|z|^j}{D})\leq |z|,\\ N(\frac{R_1(z)}{D})<\frac{C_3|z|^{j-2}}{D}.\end{aligned}$$ Moreover, we have $$\begin{aligned} N(\frac{c(2m+1)!}{z^{d-j}D2^mm!})\leq N(c)N((2m+1)!).\end{aligned}$$ From (1) it follows that $$\begin{aligned} N(\frac{c(2m+1)!}{z^{d-j}D2^mm!})<C_64^{2m+1},\end{aligned}$$ and from (7), (8) and (9) we get $$\begin{aligned} N(\frac{|z|^j}{D})N(\frac{R_1(z)}{D})N(\frac{c(2m+1)!}{z^{d-j}D2^mm!})<\frac{C_3C_6|z|^{j-1}4^{(2m+1)}}{D}.\end{aligned}$$ From inequalities (6) and (10), we obtain $$\begin{aligned} \frac{|z|^j}{D}<C_7\bigl(\frac{|z|^{j-1}4^{(2m+1)}}{D}\bigr)^{(1+\epsilon)}.\end{aligned}$$ If we choose $\epsilon =\frac{1}{2d}\leq \frac{1}{2j}$, inequality (14) implies that $$\begin{aligned} |z|^{1/2}<|z|^{1+\epsilon -\epsilon j}< C_84^{(2m+1)(1+\epsilon)},\end{aligned}$$ or simply $$\begin{aligned} \mathrm{log}|z|<C_9(2m+1)+C_{10}.\end{aligned}$$ Thus $$\begin{aligned} d\cdot\mathrm{log}|z|<C_{11}m+C_{12}.\end{aligned}$$ This gives $$\begin{aligned} \mathrm{log}|(\frac{(2m+1)!}{2^mm!})|<C_1+d\cdot \mathrm{log}|z|<
C_{11}m+C_{13}.\end{aligned}$$ Now we can conclude that only finitely many $m$ satisfy (12). For the convenience of the reader we give an argument. So let us consider an inequality of the form $\mathrm{log}|\frac{(2m+1)!}{2^mm!}|<A'm+C'$ where $A'$ and $C'$ are positive integer constants. We first rewrite the inequality by applying the exponential function. This gives $\frac{(2m+1)!}{2^mm!}<E\cdot e^{A'm}$. The Stirling formula then yields $\frac{(\frac{2m+1}{e})^{2m+1}}{2^mm^m}<\frac{(\frac{2m+1}{e})^{2m+1}}{2^mm!}<E\cdot e^{A'm}$. Since $\frac{(\frac{2m+1}{e})^{2m+1}}{2^mm^m}=\frac{(2m+1)^{2m+1}}{2^me^{2m+1}m^m}$, we get the inequality $\frac{(2m+1)^{2m+1}}{m^m}<E\cdot e^{A'm}e^{2m+1}2^m=E'\cdot e^{(A'+2+\mathrm{ln}(2))m}=E'\cdot e^{B'm}$. But this inequality is satisfied only for finitely many $m$. We finally conclude from (5) that $|z|< C_{15}$. This completes the proof of Theorem 1.9.\ Proof of Theorem 1.10:\ We formulate the proof only for $r=2$ since the proof for arbitrary $r>2$ is analogous. So let us consider the equation $bn!!m!!=f(x)$. Note that if $d=2$ and $b\neq 0$ there could be infinitely many solutions. This happens for instance if $f(x)=x^2$ and follows from [@NO], Theorem 1.1. Assuming that $f(x)$ is not a monomial and has at least two distinct roots enables us to make a change of variables as in the proof of Theorem 1.9. Therefore, we can assume $f(z)=z^d+c_2z^{d-2}+\cdots +c_d$, with $c_2z^{d-2}+\cdots +c_d$ not identically zero. If $n=2n_1,m=2m_1$ are both even, we actually have the equation $c2^{n_1}n_1!2^{m_1}m_1!=f(z)$ for some integer constant $c$ and the result follows from [@NO], Theorem 1.2. Now let us consider the case where $n=2n_1+1$ and $m=2m_1+1$ are both odd. Then we have the equation $f(z)=c\frac{(2n_1+1)!}{2^{n_1}n_1!}\frac{(2m_1+1)!}{2^{m_1}m_1!}$. Let $R_1(X), A, B, C$ and $D$ be as in the proof of Theorem 1.9.
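Two quantitative ingredients of the argument above can be checked directly: the algebraic radical $N(\cdot)$ (the product of the distinct prime divisors) of $(2m+1)!$ is the product of the primes up to $2m+1$, bounded by $4^{2m+1}$, and $\mathrm{log}\,\frac{(2m+1)!}{2^m m!}$ grows like $m\,\mathrm{log}\,m$, so it eventually exceeds any linear bound $A'm+C'$. A numerical sketch (the constants `A_prime`, `C_prime` are arbitrary illustrative choices, not the ones in the proof):

```python
from math import factorial, log

def radical(k: int) -> int:
    """Algebraic radical N(k): product of the distinct primes dividing k."""
    k = abs(k)
    rad, p = 1, 2
    while p * p <= k:
        if k % p == 0:
            rad *= p
            while k % p == 0:
                k //= p
        p += 1
    return rad * (k if k > 1 else 1)

def log_odd_double_factorial(m: int) -> float:
    """log((2m+1)!!) = log((2m+1)! / (2^m m!))."""
    return log(factorial(2 * m + 1) // (2**m * factorial(m)))

# Radical bound: N((2m+1)!) < 4^(2m+1), the Chebyshev-type primorial estimate.
for m in range(1, 9):
    assert radical(factorial(2 * m + 1)) < 4 ** (2 * m + 1)

# Growth: any fixed linear bound A'*m + C' fails for all sufficiently large m.
A_prime, C_prime = 3.0, 10.0   # illustrative constants only
```

A small $m$ can satisfy the linear bound, but a large $m$ cannot, which is exactly the finiteness conclusion drawn in the proof.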
For the algebraic radical we then find $N(\frac{c}{z^{d-j}D}\frac{(2n_1+1)!}{2^{n_1}n_1!}\frac{(2m_1+1)!}{2^{m_1}m_1!})<\tilde{C}4^{2(n_1+m_1)+2}$, where $\tilde{C}$ is a certain positive integer constant. As in the proof of Theorem 1.9, we finally get $\mathrm{log}(|\frac{(2n_1+1)!}{2^{n_1}n_1!}\frac{(2m_1+1)!}{2^{m_1}m_1!}|)<A^*n_1+B^*m_1+C^*$, where $A^*, B^*$ and $C^*$ are positive integer constants. Applying the exponential function and Stirling's formula finally yields $\frac{(2n_1+1)^{2n_1+1}}{n_1^{n_1}}\frac{(2m_1+1)^{2m_1+1}}{m_1^{m_1}}<E^*e^{A^*n_1}e^{B^*m_1}$. This inequality, however, is satisfied only for finitely many $(n_1,m_1)$. The case where $n$ is odd and $m$ is even is analogous. We omit the proof.

# References

M. Bhargava: P-orderings and polynomial functions on arbitrary subsets of Dedekind rings. J. Reine Angew. Math. 490 (1997), 101-127.

D. Berend and C.F. Osgood: On the equation $P(x)=n!$ and a question of Erdős. J. Number Theory 42 (1992), 189-193.

D. Berend and J.E. Harmse: On polynomial-factorial diophantine equations. Trans. Amer. Math. Soc. 358 (2006), 1741-1779.

B.C. Berndt and W.F. Galway: On the Brocard-Ramanujan diophantine equation $n!+1=m^2$. The Ramanujan J. 4 (2000), 41-42.

H. Brocard: Question 1532. Nouv. Corresp. Math. 2 (1876); Nouv. Ann. Math. 4 (1885), 391.

P. Erdős and R. Obláth: Über diophantische Gleichungen der Form $n!=x^p \pm y^p$ und $n!\pm m!= x^p$. Acta Szeged. 8 (1937), 241-255.

H.M. Bui, K. Pratt and A. Zaharescu: Power savings for counting solutions to polynomial-factorial equations. arXiv:2204.08423.

A. Dabrowski: On the equation $n!+A=y^2$. Nieuw Arch. Wisk. 14 (1996), 321-324.

A. Dabrowski: On the Brocard--Ramanujan problem and generalizations. Colloq. Math. 126 (2012), 105-110.

A. Dabrowski and M. Ulas: Variations on the Brocard--Ramanujan equation. J. Number Theory 133 (2013), 1168-1185.

A. Epstein and J. Glickman (2020), https://github.com/jhg023/brocard

M. Gawron: A note on the diophantine equation $P(z)=n!+m!$. Colloq. Math.
131 (2013), 53-58.

O. Kihel and F. Luca: Variants of the Brocard--Ramanujan equation. J. Théor. Nombres Bordeaux 20 (2008), 353-363.

S. Lang: Old and new conjectured diophantine inequalities. Bull. Amer. Math. Soc. 23 (1990), 37-75.

F. Luca: The diophantine equation $P(x)=n!$ and a result of M. Overholt. Glasnik Matematički 37 (2002), 269-273.

F. Luca: On the diophantine equations $f(n)=u!+v!$. Glasnik Matematički 48 (2013), 31-48.

R. Matson: Brocard's problem 4th solution search utilizing quadratic residues. Unsolved Problems in Number Theory, Logic and Cryptography (2017), available at http://unsolvedproblems.org/S99.pdf

S. Novaković: A note on some polynomial-factorial diophantine equations. arXiv:2308.11002

M. Overholt: The diophantine equation $n!+1=m^2$. Bull. London Math. Soc. 42 (1993), 104.

R.M. Pollack and H.N. Shapiro: The next to last case of a factorial diophantine equation. Comm. Pure Appl. Math. 25 (1973), 313-325.

S. Ramanujan: Question 469. J. Indian Math. Soc. 5 (1913), 59.

W. Takeda: On the finiteness of solutions for polynomial-factorial diophantine equations. Forum Math. 33 (2021), 361-374.

M. Ulas: Some observations on the diophantine equation $y^2=x!+A$ and related results. Bull. Aust. Math. Soc. 86 (2012), 377-388.

M. Ulas: Some experiments with Ramanujan--Nagell type diophantine equations. Glasnik Matematički 49 (2014), 287-302.

T. Yamada: A generalization of the Ramanujan--Nagell equation. Glasgow Math. J. 61 (2019), 535-544.

HOCHSCHULE FRESENIUS UNIVERSITY OF APPLIED SCIENCES, 40476 DÜSSELDORF, GERMANY. E-mail address: sasa.novakovic\@hs-fresenius.de\ MATHEMATISCHES INSTITUT, HEINRICH--HEINE--UNIVERSITÄT, 40225 DÜSSELDORF, GERMANY. E-mail address: novakovic\@math.uni-duesseldorf.de
--- abstract: | This work is devoted to generating optimal guidance commands in real time for attitude-constrained solar sailcrafts in coplanar circular-to-circular interplanetary transfers. Firstly, a nonlinear optimal control problem is established, and necessary conditions for optimality are derived via Pontryagin's Minimum Principle. Under some assumptions, the attitude constraints are rewritten as control constraints, which are replaced by a saturation function so that a parameterized system is formulated to generate an optimal trajectory via solving an initial value problem. This approach allows for the efficient generation of a dataset containing optimal samples, which are essential for training Neural Networks (NNs) to achieve real-time implementation. However, the optimal guidance command may suddenly change from one extreme to another, resulting in discontinuous jumps that generally impair the NN's approximation performance. To address this issue, we use the two co-states on which the optimal guidance command depends to detect discontinuous jumps. A procedure for preprocessing these jumps is then established, thereby ensuring that the preprocessed guidance command remains smooth. Meanwhile, the sign of one co-state is found to be sufficient to revert the preprocessed guidance command back into the original optimal guidance command. Furthermore, three NNs are built and trained offline, and they cooperate together to precisely generate the optimal guidance command in real time. Finally, numerical simulations are presented to demonstrate the developments of the paper.
author: - Kun Wang - Fangmin Lu - Zheng Chen - Jun Li bibliography: - bibfile.bib title: Real-time optimal control for attitude-constrained solar sailcrafts via neural networks --- Solar sailing, Attitude constraints, Real-time optimal control, Neural networks

# Introduction {#intro}

Unlike conventional spacecrafts that use chemical or electric propellant to produce thrust during a flight mission, solar sailing exploits the solar radiation pressure (SRP) to propel the solar sailcraft. Although the resulting propulsive force is smaller than that of chemical- or electric-based propulsion systems, the SRP offers an almost "infinite" energy source that can be used to orient the solar sailcraft [@tsu1959interplanetary]. This makes it a very promising technology, especially for long-duration interplanetary transfer missions [@fu2016solar; @gong2019review]. While the study on solar sailing dates back to the 1920s, substantial progress in the development of solar sailing has not been achieved until recently. Following the first demonstrations of solar sailcraft in orbit by JAXA's IKAROS [@mori2010first] and NASA's NanoSail-D2 [@johnson2011nanosail] in 2010, the successes of LightSail 1 and 2 [@spencer2021lightsail] have sparked renewed interest in this technology. These achievements have led to the planning of numerous solar sailcraft-based missions, including Solar Cruiser [@spencer2021lightsail] and OKEANOS [@takao2021solar]. Designing the transfer trajectory for solar sailcrafts is one of the most fundamental problems. As the acceleration provided by the SRP is extremely small, the transfer time of a solar sailcraft is usually very long. Thus, it is fundamentally important to find the time-optimal control so that the solar sailcraft can be steered to its target with minimum time. This is essentially equivalent to addressing a minimum-time control problem, which is a conventional Optimal Control Problem (OCP).
Numerical methods for OCPs include indirect and direct methods [@rao2009survey]. The indirect method, based on the calculus of variations or Pontryagin's Minimum Principle (PMP), transforms the OCP into a Two-Point Boundary-Value Problem (TPBVP) according to the necessary conditions for optimality. The resulting TPBVP is then typically solved by Newton-like iterative methods. The indirect method has been applied to solve the time-optimal orbital transfer problem for solar sailcraft in Refs. [@sauer1976optimum; @wood1982comment; @kim2005symmetries]. Mengali and Quarta [@mengali2005optimal] employed the indirect method to solve the three-dimensional time-optimal transfer problem for a non-ideal solar sailcraft with optical and parametric force models. Sullo *et al.* [@sullo2017low] embedded continuation methods into the indirect method, so that the time-optimal transfer of solar sailcraft was found from a previously designed low-thrust transfer trajectory. Recently, the solar sail primer vector theory, combined with the indirect method, was employed to design the flight trajectory that minimizes the solar angle over a transfer with fixed time-of-flight [@oguri2023indirect]. On the other hand, direct methods reformulate the OCP (usually via direct shooting or pseudospectral methods) as a nonlinear programming problem, which can be solved using interior-point or sequential quadratic programming methods [@rao2009survey]. Solar sailcraft transfers to hybrid pole and quasi-pole sitters on Mars and Venus were studied using a direct pseudospectral algorithm in Ref. [@vergaaij2019solar]. In Ref. [@fitzgerald2021characterizing], the Gaussian quadrature pseudospectral optimization was combined with a slerped control parameterization method, and the transfer time was drastically reduced. Furthermore, a comparison study on the application of indirect and direct methods to time-optimal transfer for solar sailcrafts was conducted in Ref. [@caruso2019comparison].
In addition to these conventional methods, heuristic methods have also been utilized in the literature to solve the time-optimal transfer problem; see, e.g., Ref. [@dachwald2005optimization]. Although the techniques mentioned in the previous paragraph are widely used, they are usually time-consuming and require good initial guesses of the co-state or state vector. Therefore, these methods are typically not suitable for onboard systems with limited computational resources. To overcome this issue, shape-based methods have been developed. The basic idea of the shape-based methods is to describe the geometric shape of the trajectory using a set of mathematical expressions with some tunable parameters. The parameters are usually adjusted so that the mathematical expressions fit the required boundary conditions well. In this regard, Peloni *et al.* [@peloni2016solar] expressed the trajectory as a function of four parameters for a multiple near-Earth-asteroid mission. A Radau pseudospectral method was leveraged to solve the resulting problem in terms of these four parameters. Since then, different shaping functions, such as Bézier curves [@huo2019electric] and Fourier series [@taheri2012shape; @caruso2021optimal], have been proposed for such problems. In order to further cope with constraints on the propulsive acceleration, some additional shape-based functions have been developed [@peloni2018automated; @caruso2020shape]. It is worth mentioning that shape-based methods, albeit computationally efficient, often provide suboptimal solutions. Consequently, they are usually employed to provide initial guesses for direct methods. According to the preceding review, existing approaches for solar sailcraft transfer suffer from computational burden, convergence issues, or solution suboptimality. In fact, solution optimality plays an important role in space exploration missions [@izzo2023optimality], and it is demanding for onboard systems to generate real-time solutions.
Therefore, it is worth exploiting more effective methods capable of providing optimal solutions in real time. Thanks to the emerging achievements of artificial intelligence in recent decades, machine learning techniques have become a viable approach. Generally, there are two quite distinct machine learning based approaches. The first class, known as Reinforcement Learning (RL), involves training an agent to behave in a potentially changing environment via repeated interactions, with the goal of maximizing a reward function. Owing to the excellent generalization abilities of deep Neural Networks (NNs), deep RL algorithms have demonstrated great success in learning policies to guide spacecrafts in transfer missions; see, e.g., Refs. [@zavoli2021reinforcement; @federici2021autonomous; @holt2021optimal]. Although the trained agent is able to cover a wide range of operations and is robust to system uncertainties and non-nominal conditions, significant effort may be required to design an appropriate reward function, in order to derive a solution that is very close to optimal [@gaudet2020deep]. In contrast to RL, Behavioral Cloning (BC) aims to clone the observed expert behavior through training an NN on optimal state-action pairs. Such pairs are usually obtained, either via indirect or direct methods, as the solution of deterministic OCPs. Recently, BC has been widely used to generate optimal guidance commands in real time, or to reduce the computational time for the optimal solution in aerospace applications, such as spacecraft landing [@sanchez2018real; @cheng2020real], hypersonic vehicle reentry [@chai2019six; @shi2021onboard], missile interception [@wang2022nonlinear], low-thrust transfer [@izzo2021real], end-to-end human-Mars entry, powered-descent, and landing problem [@you2022onboard], minimum-fuel geostationary station-keeping [@zhang2023minimum], time-optimal interplanetary rendezvous [@izzo2023neural], and solar sailcraft transfer [@cheng2018real]. Specifically, in Ref.
[@cheng2018real], some minimum-time orbital transfer problems with random initial conditions were solved by the indirect method to formulate the dataset for the state-guidance command pairs. Then, the dataset was used to train NNs within a supervised learning framework. Finally, the trained NNs were able to generate the optimal guidance command in real time. However, constraints on the solar sailcraft attitude were not considered in that paper. In practice, factors such as power generation and thermal control generally reduce the admissible variation range of the cone angle, which is equal to the sail attitude for a perfectly flat ideal sail [@he2014time; @caruso2020effects; @oguri2022solar]. In addition, the optimal guidance command for the solar sailcraft transfer problem may be discontinuous, which generally degrades the approximation performance of NNs. In fact, approximating discontinuous controls via NNs, even with many hidden layers and/or neurons, can be quite challenging, as shown by the numerical simulations in Refs. [@li2019neural; @george2022use; @origer2023guidance]. As a continuation of the work in Ref. [@cheng2018real], this paper considers that the solar sailcraft's attitude is constrained, and an NN-based method will be developed for real-time generation of optimal guidance command. Firstly, the time-optimal problem for the solar sailcraft with constraints on attitude is formulated. Then, necessary conditions for optimal trajectories are established by employing the PMP. These conditions allow using a saturation function to approximate the optimal guidance law. Unlike conventional optimization-based approaches, such as indirect methods and direct methods which often suffer from convergence issues [@chen2019nonlinear], we formulate a parameterized system to facilitate the generation of datasets for training NNs. In this method, an optimal trajectory can be generated by solving an initial value problem instead of a TPBVP.
As a consequence, a large number of optimal trajectories can be readily obtained. However, discontinuous jumps in the optimal guidance command may prevent one from obtaining a perfect approximation of the optimal guidance command, leading to a critical challenge. To resolve this issue, one viable method is to use NNs to approximate the smooth co-states instead, as was done in Ref. [@parrish2018low]. Unfortunately, using co-states to train NNs is not reliable as the relationship between co-states and optimal guidance command is highly nonlinear. This may result in magnified propagation errors even for small errors from the predicted co-states [@rubinsztejn2020neural]. To avoid this difficulty, we propose to employ two co-states to preprocess the guidance command, smoothing out the discontinuous jumps. After preprocessing, the guidance command can be reverted to its original optimal form by examining the sign of one specific co-state. To achieve real-time generation of the optimal guidance command, we employ a system comprising three NNs that predict the optimal time of flight, the preprocessed guidance command, and one co-state. These three NNs cooperate together to generate the optimal guidance command. The remainder of the paper is structured as follows. The OCP is formulated in Section [2](#Problem){reference-type="ref" reference="Problem"}. Section [3](#Characterizations){reference-type="ref" reference="Characterizations"} presents the optimality conditions derived from the PMP, and a parameterized system for optimal solutions is established. In Section [4](#OptimalSteering){reference-type="ref" reference="OptimalSteering"}, procedures for generating the dataset and preprocessing discontinuous jumps are introduced, and the scheme for generating the optimal guidance command in real time is detailed. Section [5](#NumericalSimulations){reference-type="ref" reference="NumericalSimulations"} provides some numerical examples to demonstrate the developments of the paper.
Finally, Section [6](#Colus){reference-type="ref" reference="Colus"} concludes the paper.

# Problem Formulation {#Problem}

## Solar Sailcraft Dynamics

We consider a two-dimensional interplanetary transfer of an ideal solar sailcraft. Before proceeding, we present some units widely used in astrodynamics for better numerical conditioning. The distance is in units of Astronomical Unit (AU, mean distance between the Earth and the Sun), and time is in units of Time Unit (TU, a time unit such that an object in a circular orbit at 1 AU would have a speed of 1 AU/TU). For clearer presentation, a heliocentric--ecliptic inertial frame $(O,X,Y)$ and a polar reference frame $(O,r,\theta)$ are used, as shown in Fig. [1](#Fig:polar_reference){reference-type="ref" reference="Fig:polar_reference"}. The origin $O$ is located at the Sun's center; the $Y$ axis points in the direction opposite to the vernal equinox, and the $X$ axis is specified by rotating the $Y$ axis 90 degrees clockwise in the ecliptic plane. $r \in \mathbb{R}_0^+$ denotes the distance from the origin to the solar sailcraft, and $\theta \in [0,2\pi]$ is the rotation angle of the Sun-solar sailcraft line measured counterclockwise in the polar reference frame from the positive $X$ axis. ![Dynamics of an ideal solar sailcraft.](Capture_solarframe.png){#Fig:polar_reference width="0.4\\linewidth"} In the context of preliminary mission design, it is reasonable to make some assumptions. The Sun and solar sailcraft are treated as point masses. The motion of the solar sailcraft is only influenced by the Sun's gravitational attraction and the propulsive acceleration due to the SRP acting on the solar sailcraft. Other disturbing factors, such as the solar wind and light aberration, are neglected.
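To make the canonical units concrete: using the standard value of the Sun's gravitational parameter $\mu_\odot \approx 1.327\times 10^{20}\,\mathrm{m^3/s^2}$ (a textbook constant, not quoted in the paper), one TU equals $\sqrt{\mathrm{AU}^3/\mu_\odot}$, and 1 AU/TU recovers the circular orbital speed at 1 AU. A quick sketch:

```python
from math import sqrt

AU = 1.495978707e11          # Astronomical Unit [m]
MU_SUN = 1.32712440018e20    # Sun's gravitational parameter [m^3/s^2], standard value

# Time Unit: with this choice, a circular orbit at 1 AU has speed exactly 1 AU/TU.
TU = sqrt(AU**3 / MU_SUN)    # [s]
TU_days = TU / 86400.0       # roughly 58 days

speed_unit = AU / TU         # 1 AU/TU in [m/s]; close to Earth's orbital speed
```

This normalization is what keeps the state variables in Eq. (1) of order one.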
Then, the dimensionless dynamics of the solar sailcraft is given by [@kim2005symmetries] $$\begin{aligned} \begin{cases} \dot{r}(t) = u(t),\\ \dot{\theta}(t) = \frac{v(t)}{r(t)},\\ \dot{u}(t) = \frac{\beta \cos^3\alpha(t)}{r^2(t)} + \frac{v^2(t)}{r(t)} - \frac{1}{r^2(t)},\\ \dot{v}(t) = \frac{\beta \sin\alpha(t) \cos^2\alpha(t)}{r^2(t)} - \frac{u(t)v(t)}{r(t)} \label{EQ:dyna_equation} \end{cases}\end{aligned}$$ where $t \geq 0$ is the time; $u \in \mathbb{R}$ and $v \in \mathbb{R}$ are the radial and transverse speeds in units of AU/TU, respectively; $\beta$ is the normalized characteristic acceleration; $\alpha$ represents the cone angle, which is the angle between the normal direction of the sail plane $\hat{n}$ and the direction of incoming rays $\hat{i}_u$, measured positive in the clockwise direction from $\hat{n}$.

## Attitude Constraints

Note that the radial component $a_u$ and transverse component $a_v$ of the ideal propulsive acceleration vector acting on the solar sailcraft are given by [@caruso2020effects] $$\begin{aligned} \begin{cases} a_u(t) := \frac{\beta \cos^3\alpha(t)}{r^2(t)}, \\ a_v(t) := \frac{\beta \sin\alpha(t) \cos^2\alpha(t)}{r^2(t)} \label{EQ:acc_equation} \end{cases}\end{aligned}$$ Based on the assumption that the ideal sail is approximated with a flat and rigid reflecting surface, the attitude constraint can be rewritten in terms of the cone angle as $$\begin{aligned} \alpha \in [-\phi_{max},\phi_{max}] \label{EQ:att_constraints}\end{aligned}$$ in which $\phi_{max} \in (0,\frac{\pi}{2})$ is a given parameter depending on the minimum acceptable level of the electric power generated by the solar sailcraft [@caruso2020effects]. It essentially sets the limits for the solar sailcraft's orientation with respect to the incoming solar rays. The "force bubble" of the ideal solar sail mentioned in Ref. [@mengali2007refined] due to attitude constraints is displayed in Fig.
[2](#Fig:force_bubble){reference-type="ref" reference="Fig:force_bubble"}. The colormap represents the value of $\beta$ in $[0.01892, 0.3784]$ (corresponding to a characteristic acceleration domain of $\rm {[0.1, 2]~mm/s^2}$). Then, for a given $r$, it is clear that $\beta$ defines the size of the force bubble, whereas $\phi_{max}$ determines its actual shape, as shown by the pink-dashed lines. Specifically, for $\phi_{max}$ to be $90$ deg, the SRP can only provide transverse acceleration, and the propulsive acceleration is constrained to the radial direction if $\phi_{max} = 0$. ![Shape of the ideal sail force bubble.](bubble2-eps-converted-to.pdf){#Fig:force_bubble width="0.8\\linewidth"} ## Formulation of the OCP Without loss of generality, consider an initial condition for the solar sailcraft at the initial time $t_0 = 0$ given by $$\begin{aligned} r(0) = r_0, \theta(0) = \theta_0, u(0) = u_0, v(0) = v_0 \label{EQ:initial_condition}\end{aligned}$$ The mission purpose is, through orientating the cone angle $\alpha$ subject to constraints in Eq. ([\[EQ:att_constraints\]](#EQ:att_constraints){reference-type="ref" reference="EQ:att_constraints"}), to drive the solar sailcraft governed by Eq. ([\[EQ:dyna_equation\]](#EQ:dyna_equation){reference-type="ref" reference="EQ:dyna_equation"}), into a coplanar circular orbit of radius $r_f$ with the terminal condition given by $$\begin{aligned} r(t_f) = r_f, u(t_f) = 0, v(t_f) = \frac{1}{\sqrt{r_f}} \label{EQ:final_condition}\end{aligned}$$ The functional cost $J$ is to minimize the arrival time (time of flight) $t_f$, i.e., $$\begin{aligned} J = \int_{0}^{t_f}1~dt \label{EQ:cost_function}\end{aligned}$$ # Parameterization of Optimal Trajectories {#Characterizations} In this section, we first derive the necessary conditions for optimal trajectories. Then, we formulate a parameterized system that enables the generation of an optimal trajectory via solving an initial value problem. 
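Before deriving the optimality conditions, note that the dimensionless dynamics in Eq. (1) and the SRP acceleration components in Eq. (2) translate directly into code; a minimal sketch (the function and variable names are ours, chosen for illustration):

```python
from math import cos, sin

def sail_dynamics(state, alpha, beta):
    """Right-hand side of the dimensionless dynamics in Eq. (1).

    state = (r, theta, u, v); alpha = cone angle [rad];
    beta = normalized characteristic acceleration."""
    r, theta, u, v = state
    a_u = beta * cos(alpha)**3 / r**2               # radial SRP acceleration, Eq. (2)
    a_v = beta * sin(alpha) * cos(alpha)**2 / r**2  # transverse SRP acceleration, Eq. (2)
    return (
        u,                              # dr/dt
        v / r,                          # dtheta/dt
        a_u + v**2 / r - 1.0 / r**2,    # du/dt
        a_v - u * v / r,                # dv/dt
    )
```

As a sanity check, on a circular orbit ($u=0$, $v=1/\sqrt{r}$) with the sail edge-on to the Sun ($\alpha=\pi/2$, so both SRP components vanish), the radial and transverse accelerations are zero, consistent with the terminal condition in Eq. (5).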
For simplicity of presentation, important results and claims are written in lemmas with their proofs postponed to the appendix. ## Optimality Conditions Denote by $\boldsymbol{\lambda} = [\lambda_r, \lambda_\theta, \lambda_{u},\lambda_{v}]$ the co-state vector of the state vector $\boldsymbol{x} = [r, \theta, u,v]$. Then, the Hamiltonian for the OCP is expressed as $$\begin{aligned} \mathscr H = 1 + \lambda_r u + \lambda_\theta \frac{v}{r} + \lambda_{u}(\frac{\beta \cos^3\alpha}{r^2} + \frac{v^2}{r} - \frac{1}{r^2}) + \lambda_{v} (\frac{\beta \sin\alpha \cos^2\alpha}{r^2} - \frac{u v}{r}) \label{EQ:Ham_function}\end{aligned}$$ According to the PMP [@Pontryagin], we have $$\begin{aligned} \begin{cases} \dot{\lambda}_r(t) = \lambda_\theta(t)\frac{v(t)}{r^2(t)}+\lambda_{u}(t)[2\beta\frac{\cos^3\alpha(t)}{r^3(t)}+\frac{{v}^2(t)}{r^2(t)}-\frac{2}{r^3(t)}]+\lambda_{v}(t)[2\beta \cos^2 \alpha(t) \frac{\sin \alpha(t)}{r^3(t)}-\frac{u(t)v(t)}{r^2(t)}],\\ \dot{\lambda}_\theta(t) = 0,\\ \dot{\lambda}_{u}(t) = -\lambda_r(t) + \frac{\lambda_{v}(t) v(t)}{r(t)},\\ \dot{\lambda}_{v}(t) = -\frac{\lambda_\theta(t)}{r(t)} - 2\frac{\lambda_{u}(t)v(t)}{r(t)} + \frac{\lambda_{v}(t)u(t)}{r(t)} \label{EQ:costate_function} \end{cases}\end{aligned}$$ Because $t_f$ is free, it holds $$\begin{aligned} \mathscr H(t) \equiv 0, ~\rm{for}~\it {t \in [0,t_f]} \label{EQ:tf_law}\end{aligned}$$ In addition, the terminal rotation angle $\theta(t_f)$ is not constrained, leading to $$\begin{aligned} \lambda_\theta(t_f) = 0 \label{EQ:theta_law}\end{aligned}$$ By the following lemma, we present the optimal guidance law. **Lemma 1**. *The optimal guidance law that minimizes $\mathscr H$ in Eq. 
([\[EQ:Ham_function\]](#EQ:Ham_function){reference-type="ref" reference="EQ:Ham_function"}) is $$\begin{aligned} \alpha^*=\rm{Median} {[-\phi_{max},\bar{\alpha},\phi_{max}]}, \rm{where}~\bar{\alpha} = \arctan~\it{\frac{-3 \lambda_{u}-\sqrt{9\lambda^2_{u}+8\lambda^2_{v}}}{4\lambda_{v}}} \label{EQ:optimal_control}\end{aligned}$$* The proof of this lemma is postponed to [7](#Appendix:A){reference-type="ref" reference="Appendix:A"}. Notice that the optimal guidance law in Eq. ([\[EQ:optimal_control\]](#EQ:optimal_control){reference-type="ref" reference="EQ:optimal_control"}) may not be smooth at some isolated points. In fact, the optimal guidance law in Eq. ([\[EQ:optimal_control\]](#EQ:optimal_control){reference-type="ref" reference="EQ:optimal_control"}) can be approximated in a compact form as  [@avvakumov2000boundary] $$\begin{aligned} \alpha^* \approx \alpha^*(\delta)= \frac{1}{2}~[\sqrt{\left(\bar{\alpha}+\phi_{max}\right)^2+\delta}-\sqrt{\left(\bar{\alpha}-\phi_{max}\right)^2+\delta}] \label{EQ:optimal_control_smooth}\end{aligned}$$ where $\delta$ is a small non-negative number. It is clear that Eq. ([\[EQ:optimal_control_smooth\]](#EQ:optimal_control_smooth){reference-type="ref" reference="EQ:optimal_control_smooth"}) is equivalent to Eq. ([\[EQ:optimal_control\]](#EQ:optimal_control){reference-type="ref" reference="EQ:optimal_control"}) if $\delta=0$. If $\delta>0$ is sufficiently small, Eq. ([\[EQ:optimal_control_smooth\]](#EQ:optimal_control_smooth){reference-type="ref" reference="EQ:optimal_control_smooth"}) acts like a smoothing filter function. Note that the initial rotation angle $\theta_0$ has no effect on the solutions because the transfer trajectory is rotatable [@cheng2018real]. For brevity, a triple $(r(t),u(t),v(t))$ for $t \in [0,t_f]$ is said to be an optimal trajectory if all the necessary conditions in Eqs. 
([\[EQ:costate_function\]](#EQ:costate_function){reference-type="ref" reference="EQ:costate_function"}), ([\[EQ:tf_law\]](#EQ:tf_law){reference-type="ref" reference="EQ:tf_law"}), ([\[EQ:theta_law\]](#EQ:theta_law){reference-type="ref" reference="EQ:theta_law"}), and ([\[EQ:optimal_control\]](#EQ:optimal_control){reference-type="ref" reference="EQ:optimal_control"}) are met. In order to generate the optimal guidance command in real time via NNs, a training dataset containing a large number of optimal trajectories is required. In this regard, one viable approach is to employ some root-finding algorithms to solve the shooting equation of the TPBVP $$\begin{aligned} \boldsymbol{\Phi}(\lambda_r(0),\lambda_{u}(0),\lambda_{v}(0),t_f) := [r(t_f) - r_f,~ u(t_f),~ v(t_f) - \frac{1}{\sqrt{r_f}},~ \mathscr H(t_f)]^{T} =\boldsymbol{0} \label{EQ:TPBVP_law}\end{aligned}$$ Nevertheless, this procedure is usually time-consuming and suffers from convergence issues. In the next subsection, we present a parameterized system so that an optimal trajectory can be readily obtained by solving an initial value problem. ## Parameterized System Define a new independent variable $\tau$ as follows $$\begin{aligned} \tau = t_f - t, t \in [0,t_f]\end{aligned}$$ Let us establish a first-order ordinary differential system $$\begin{aligned} \begin{cases} \dot{R}(\tau) = -U(\tau),\\ \dot{U}(\tau) = -\frac{\beta \cos^3\hat{\alpha}(\tau)}{R^2(\tau)} - \frac{V^2(\tau)}{R(\tau)} + \frac{1}{R^2(\tau)},\\ \dot{V}(\tau) = -\frac{\beta \sin\hat{\alpha}(\tau) \cos^2\hat{\alpha}(\tau)}{R^2(\tau)} + \frac{U(\tau)V(\tau)}{R(\tau)},\\ \dot{\lambda}_R(\tau) = -\lambda_{U}(\tau)[2\beta\frac{\cos^3\hat{\alpha}(\tau)}{R^3(\tau)}+\frac{{V}^2(\tau)}{R^2(\tau)}-\frac{2}{R^3(\tau)}]-\lambda_{V}(\tau)[2\beta \cos^2 \hat{\alpha}(\tau) \frac{\sin \hat{\alpha}(\tau)}{R^3(\tau)}-\frac{U(\tau)V(\tau)}{R^2(\tau)}],\\ \dot{\lambda}_{U}(\tau) = \lambda_R(\tau) - \frac{\lambda_{V}(\tau) V(\tau)}{R(\tau)},\\ \dot{\lambda}_{V}(\tau) = 2\frac{\lambda_{U}(\tau)V(\tau)}{R(\tau)} - \frac{\lambda_{V}(\tau)U(\tau)}{R(\tau)} \label{EQ:new_system} \end{cases}\end{aligned}$$ where $(R, U, V, \lambda_R,
\lambda_{U}, \lambda_{V}) \in \mathbb{R}_0^+ \times \mathbb{R}^5$, and $\hat{\alpha}$ is defined as $$\begin{aligned} \hat{\alpha}=\frac{1}{2}~[\sqrt{\left(\bar{\alpha}+\phi_{max}\right)^2+\delta}-\sqrt{\left(\bar{\alpha}-\phi_{max}\right)^2+\delta}]~\rm{with}~ \bar{\alpha} = \arctan~\it{\frac{-3 \lambda_{U}-\sqrt{9\lambda^2_{U}+8\lambda^2_{V}}}{4\lambda_{V}}} \label{EQ:new_control}\end{aligned}$$ The initial value at $\tau = 0$ for the system in Eq. ([\[EQ:new_system\]](#EQ:new_system){reference-type="ref" reference="EQ:new_system"}) is set as $$\begin{aligned} R(0) = R_0, U(0) = 0, V(0) = \frac{1}{\sqrt{R_0}}, \lambda_{R}(0) = \lambda_{R_0}, \lambda_{U}(0) = \lambda_{U_0}, \lambda_{V}(0) = \lambda_{V_0} \label{para_initial}\end{aligned}$$ The value of $\lambda_{V_0}$ satisfies the following equation $$\begin{aligned} 1 + \lambda_{R}(0) U(0) + \lambda_{U}(0)(\frac{\beta \cos^3\hat{\alpha}}{R^2(0)} + \frac{V^2(0)}{R(0)} - \frac{1}{R^2(0)}) + \lambda_{V}(0) (\frac{\beta \sin\hat{\alpha} \cos^2\hat{\alpha}}{R^2(0)} - \frac{U(0) V(0)}{R(0)})= 0 \label{EQ:solve_equation}\end{aligned}$$ Substituting Eq. ([\[para_initial\]](#para_initial){reference-type="ref" reference="para_initial"}) into Eq. ([\[EQ:solve_equation\]](#EQ:solve_equation){reference-type="ref" reference="EQ:solve_equation"}) leads to $$\begin{aligned} 1 + \lambda_{U_0}\frac{\beta \cos^3\hat{\alpha}}{R^2_0} + \lambda_{V_0} \frac{\beta \sin\hat{\alpha} \cos^2\hat{\alpha}}{R^2_0} = 0 \label{EQ:solve_equation_simple}\end{aligned}$$ By the following lemma, we shall show that an optimal trajectory can be generated by solving an initial value problem based on the parameterized system in Eq. ([\[EQ:new_system\]](#EQ:new_system){reference-type="ref" reference="EQ:new_system"}). **Lemma 2**. *For any given pair $(\lambda_{R_0},\lambda_{U_0})$, a fixed time $t_f$ and $\lambda_{V_0}$ determined by Eq. 
([\[EQ:solve_equation_simple\]](#EQ:solve_equation_simple){reference-type="ref" reference="EQ:solve_equation_simple"}), denote by $$\begin{aligned} \mathcal{F}:= (R(\tau,\lambda_{R_0},\lambda_{U_0}),U(\tau,\lambda_{R_0},\lambda_{U_0}),V(\tau,\lambda_{R_0},\lambda_{U_0}),\\ \lambda_R(\tau,\lambda_{R_0},\lambda_{U_0}),\lambda_{U}(\tau,\lambda_{R_0},\lambda_{U_0}), \lambda_{V}(\tau,\lambda_{R_0},\lambda_{U_0})) \in \mathbb{R}^+ \times \mathbb{R}^5\end{aligned}$$ the solution of the $(\lambda_{R_0},\lambda_{U_0})$-parameterized system in Eq. ([\[EQ:new_system\]](#EQ:new_system){reference-type="ref" reference="EQ:new_system"}) with the initial value specified in Eq. ([\[para_initial\]](#para_initial){reference-type="ref" reference="para_initial"}). Define $\mathcal{F}_p$ as $$\begin{aligned} \mathcal{F}_p:= \{R(\tau,\lambda_{R_0},\lambda_{U_0}),U(\tau,\lambda_{R_0},\lambda_{U_0}),V(\tau,\lambda_{R_0},\lambda_{U_0}),\tau |\\ (R(\tau,\lambda_{R_0},\lambda_{U_0}),U(\tau,\lambda_{R_0},\lambda_{U_0}),V(\tau,\lambda_{R_0},\lambda_{U_0}),\tau) \in \mathcal{F}\}\end{aligned}$$ Then $\mathcal{F}_p$ represents the solution space of an optimal trajectory starting from a circular orbit with a radius of $R_0$.* The proof of this lemma is postponed to [7](#Appendix:A){reference-type="ref" reference="Appendix:A"}. With the parameterized system established, obtaining an optimal trajectory becomes a straightforward process that involves solving an initial value problem, rather than tackling a TPBVP. Once the optimal trajectory $\mathcal{F}_p$ is obtained, the corresponding optimal guidance command $\hat{\alpha}$ can be determined through the definition of $\mathcal{F}$ and reference to Eq. ([\[EQ:new_control\]](#EQ:new_control){reference-type="ref" reference="EQ:new_control"}). 
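As a sketch of this process (with $\delta = 0$; the values $\lambda_{R_0} = 1$, $\lambda_{U_0} = 5$ and the bisection bracket are illustrative choices, not taken from the paper's dataset), one can solve Eq. ([\[EQ:solve_equation_simple\]](#EQ:solve_equation_simple){reference-type="ref" reference="EQ:solve_equation_simple"}) for $\lambda_{V_0}$ and then propagate Eq. ([\[EQ:new_system\]](#EQ:new_system){reference-type="ref" reference="EQ:new_system"}) with a fixed-step RK4 integrator. Since $\bar{\alpha}$ is a stationary point of $\mathscr H$ and the clamp freezes $\hat{\alpha}$ otherwise, $\mathscr H$ should stay at zero along the solution, which gives a convenient consistency check:

```python
import math

BETA, R0, PHI_MAX = 0.16892, 1.0, math.pi / 3    # values used in Section 4

def alpha_hat(lu, lv, delta=0.0):
    """Guidance law of Eq. (EQ:new_control); delta = 0 recovers the exact
    clamp Median[-phi_max, alpha_bar, phi_max].  Assumes lv != 0."""
    a_bar = math.atan((-3.0 * lu - math.sqrt(9.0 * lu**2 + 8.0 * lv**2))
                      / (4.0 * lv))
    return 0.5 * (math.sqrt((a_bar + PHI_MAX)**2 + delta)
                  - math.sqrt((a_bar - PHI_MAX)**2 + delta))

def hamiltonian(s):
    """Eq. (EQ:Ham_function) with lambda_theta = 0, cf. Eq. (EQ:theta_law)."""
    R, U, V, lR, lU, lV = s
    a = alpha_hat(lU, lV)
    return (1.0 + lR * U
            + lU * (BETA * math.cos(a)**3 / R**2 + V**2 / R - 1.0 / R**2)
            + lV * (BETA * math.sin(a) * math.cos(a)**2 / R**2 - U * V / R))

def rhs(s):
    """Right-hand side of the backward-time system, Eq. (EQ:new_system)."""
    R, U, V, lR, lU, lV = s
    a = alpha_hat(lU, lV)
    ca, sa = math.cos(a), math.sin(a)
    return (-U,
            -BETA * ca**3 / R**2 - V**2 / R + 1.0 / R**2,
            -BETA * sa * ca**2 / R**2 + U * V / R,
            -lU * (2.0 * BETA * ca**3 / R**3 + V**2 / R**2 - 2.0 / R**3)
            - lV * (2.0 * BETA * sa * ca**2 / R**3 - U * V / R**2),
            lR - lV * V / R,
            2.0 * lU * V / R - lV * U / R)

def solve_lv0(lu0, lo=-100.0, hi=-1e-9):
    """Bisection on Eq. (EQ:solve_equation_simple): pick lambda_V0 so that
    H = 0 at tau = 0 (U(0) = 0, V(0) = 1/sqrt(R0); bracket assumes lu0 > 0)."""
    res = lambda lv0: hamiltonian((R0, 0.0, 1.0 / math.sqrt(R0), 0.0, lu0, lv0))
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if res(lo) * res(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def propagate(lr0, lu0, tau_end=0.5, h=5e-4):
    """RK4 integration of the IVP given by Eqs. (EQ:new_system)-(para_initial)."""
    s = (R0, 0.0, 1.0 / math.sqrt(R0), lr0, lu0, solve_lv0(lu0))
    for _ in range(round(tau_end / h)):
        k1 = rhs(s)
        k2 = rhs(tuple(x + 0.5 * h * k for x, k in zip(s, k1)))
        k3 = rhs(tuple(x + 0.5 * h * k for x, k in zip(s, k2)))
        k4 = rhs(tuple(x + h * k for x, k in zip(s, k3)))
        s = tuple(x + h / 6.0 * (a + 2 * b + 2 * c + d)
                  for x, a, b, c, d in zip(s, k1, k2, k3, k4))
    return s
```

Note that no boundary-value iteration is involved: once $\lambda_{V_0}$ is fixed by the scalar root-find, the whole trajectory follows from a single forward sweep in $\tau$.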
To maintain clarity and prevent any ambiguity in notation, we continue to employ $(r,u,v)$ to represent the flight state and $t_f$ to represent the optimal time of flight along the optimal trajectory $\mathcal{F}_p$. Since the optimal time of flight is of great importance for preliminary mission design, evaluating it without the need for the exact guidance law is quite attractive, as was done in Refs. [@cheng2020real; @izzo2021real]. In this paper, we include $t_f$ in the flight state, and it can be evaluated via a trained NN, as shown by our scheme in Subsection [4.3](#SchemeGuidanceCommands){reference-type="ref" reference="SchemeGuidanceCommands"}. Likewise, the optimal guidance command is again denoted by $\alpha$. Then we shall define $f$ as the nonlinear mapping from the flight state $(r,u,v,t_f)$ to the optimal guidance command $\alpha$, i.e., $$\begin{aligned} f\colon (r,u,v,t_f) \mapsto \alpha\end{aligned}$$ According to the universal approximation theorem [@hornik1989multilayer], employing a large number of sampled data points that map the flight state $(r,u,v,t_f)$ to the optimal guidance command $\alpha$ to train feedforward NNs enables the trained networks to accurately approximate the nonlinear mapping $f$. Before generating the dataset, a *nominal* trajectory is typically required as a baseline. Then, by applying perturbations to $\lambda_{R_0}$ and $\lambda_{U_0}$ of the *nominal* trajectory, many optimal trajectories containing optimal state-guidance command pairs can be aggregated. With this dataset, the optimal guidance command can be generated in real time within the supervised learning framework, as we will detail in the next section. # Generation of Optimal Guidance Commands in Real Time via NNs {#OptimalSteering} In this section, we begin by outlining the procedure for generating the training dataset.
As our analysis progresses, we observe a notable characteristic of the optimal guidance command: it may abruptly change from one extreme to another, resulting in discontinuous jumps. This behavior poses a challenge, as the approximation performance of the NN typically deteriorates under such circumstances, a phenomenon seen in Refs. [@li2019neural; @george2022use; @origer2023guidance]. To address this issue, we introduce a preprocessing method that employs two co-states to detect the discontinuous jump. With the saturation function defined in Eq. ([\[EQ:optimal_control_smooth\]](#EQ:optimal_control_smooth){reference-type="ref" reference="EQ:optimal_control_smooth"}), the preprocessed guidance command becomes smooth everywhere. Following this procedure, we present a scheme for generating optimal guidance commands in real time via NNs. ## Generation of the Training Dataset ### Nominal Trajectory {#nominal} Consider a solar sailcraft subject to attitude constraints $\alpha \in [-\frac{\pi}{3}, \frac{\pi}{3}]$, starting from Earth's orbit around the Sun, which is a circular orbit with a radius of 1 AU (1 AU = $1.4959965\times 10^{11}$ m), i.e., $$\begin{aligned} r(0) = 1, \theta(0) = 0, u(0) = 0, v(0) = 1 \label{EQ:initial_condition_nominal}\end{aligned}$$ and the mission is to fly into Mars' orbit around the Sun, which is a circular orbit with a radius of 1.524 AU, i.e., $$\begin{aligned} r(t_f) = 1.524, u(t_f) = 0, v(t_f) = \frac{1}{\sqrt {1.524}} \label{EQ:final_condition_nominal}\end{aligned}$$ The normalized characteristic acceleration $\beta$ is set to the constant value 0.16892, corresponding to a characteristic acceleration of 1 $\rm{mm/s^2}$. To demonstrate the effects of the saturation function in Eq. ([\[EQ:optimal_control_smooth\]](#EQ:optimal_control_smooth){reference-type="ref" reference="EQ:optimal_control_smooth"}), the optimal guidance laws in Eqs.
([\[EQ:optimal_control\]](#EQ:optimal_control){reference-type="ref" reference="EQ:optimal_control"}) and ([\[EQ:optimal_control_smooth\]](#EQ:optimal_control_smooth){reference-type="ref" reference="EQ:optimal_control_smooth"}) are embedded into the shooting function defined by Eq. ([\[EQ:TPBVP_law\]](#EQ:TPBVP_law){reference-type="ref" reference="EQ:TPBVP_law"}). To enhance the convergence of the indirect method, we initially consider cases without attitude constraints and subsequently apply a homotopy technique to accommodate attitude constraints. Fig. [3](#Fig:norm_angle_two){reference-type="ref" reference="Fig:norm_angle_two"} ![Optimal guidance command profiles for the unconstrained and constrained cases.](norm_angle-eps-converted-to.pdf){#Fig:norm_angle_two width="0.7\\linewidth"} shows the optimal guidance command profiles for the unconstrained and constrained cases with two smoothing constants. It is evident that the constrained cases contain rapid changes in the guidance command. Additionally, the guidance command with $\delta = 1 \times 10^{-3}$ displays smoother behavior compared to the case with $\delta = 0$. Moreover, the arrival time for the unconstrained case is 7.01204 TU (1 TU $=5.0226757\times 10^{6}$ s), whereas this time span extends to $7.01818$ TU for the constrained case with $\delta = 0$. Regarding $\delta = 1 \times 10^{-3}$, the arrival time is $7.01838$ TU, indicating that a smoothing constant of $\delta = 1 \times 10^{-3}$ leads to a minor functional penalty of merely $2 \times 10^{-4}$ TU. Consequently, we designate the transfer trajectory with $\delta=1 \times 10^{-3}$ as the *nominal* trajectory henceforth. ### Dataset Denote by $\lambda^*_{r_f}$, $\lambda^*_{{u}_f}$ the final values along the *nominal* trajectory of the co-states $\lambda_{r}$ and $\lambda_{u}$, respectively.
Consider a new set of values given by $$\begin{aligned} \lambda'_{R_0} = \lambda^*_{r_f} + \delta \lambda_R, \lambda'_{U_0} = \lambda^*_{{u}_f} + \delta \lambda_{U} \label{EQ:newset}\end{aligned}$$ where the perturbations $\delta \lambda_R$ and $\delta \lambda_{U}$ are uniformly selected from the intervals $[-20,23]$ and $[5,11]$, respectively. Set $\lambda_{R_0} = \lambda'_{R_0}$, $\lambda_{U_0} = \lambda'_{U_0}$, and calculate the value of $\lambda_{V_0}$ by solving Eq. ([\[EQ:solve_equation_simple\]](#EQ:solve_equation_simple){reference-type="ref" reference="EQ:solve_equation_simple"}). Then, $\mathcal F$ can be obtained by propagating the $(\lambda_{R_0},\lambda_{U_0})$-parameterized system with the initial value specified in Eq. ([\[para_initial\]](#para_initial){reference-type="ref" reference="para_initial"}). Define an empty set $\mathcal D$, and insert $\mathcal {F}_p$, which depends on the pair $(\lambda_{R_0},\lambda_{U_0})$, into $\mathcal D$ until the perturbation process described in Eq. ([\[EQ:newset\]](#EQ:newset){reference-type="ref" reference="EQ:newset"}) ends. In this way, we can amass a dataset $\mathcal D$ comprising the essential optimal state-guidance command pairs required for NN training. Because we are specifically dealing with orbit transfer missions from Earth to Mars in this study, any optimal trajectory with $R(\tau) > 1.54$ AU for $\tau \in [0,t_f]$ ($t_f$ is fixed as 10) is classified as beyond the region of interest and excluded from $\mathcal D$. Moreover, to explore the NN's generalization ability, the *nominal* trajectory is excluded from $\mathcal D$, as was done in Ref. [@izzo2021real]. Ultimately, a total of $18,324$ optimal trajectories are acquired, and they are evenly sampled, resulting in $6,416,613$ pairs. ## Preprocessing Discontinuous Jumps {#Jumps} While generating the dataset, we encounter instances where the optimal guidance command displays discontinuous jumps.
Recall that these jumps can be detected using $\lambda_{u}$ and $\lambda_{v}$, as elaborated in [7](#Appendix:A){reference-type="ref" reference="Appendix:A"}. For a representative optimal trajectory defined in $[0,t_f]$, spotting a discontinuous jump point $t_j$ is straightforward, as illustrated in Fig. [4](#Fig:costate_ex){reference-type="ref" reference="Fig:costate_ex"}. ![Two co-state profiles along an optimal trajectory.](jump_costate-eps-converted-to.pdf){#Fig:costate_ex width="0.7\\linewidth"} The optimal trajectory is then split into two segments: $[0,t_j]$ and $[t_j,t_f]$. Assuming that there is only one jump in this scenario, we can derive a preprocessed smooth guidance command, denoted by $\alpha_{pre}$, using the following approach $$\begin{aligned} \alpha_{pre} = \begin{cases} -\alpha, \rm{for}~ \it{t \in [0, t_j]}\\ ~~\alpha, \rm{for}~ \it{t \in [t_j, t_f]} \end{cases} \label{regulaeise}\end{aligned}$$ A representative example is provided in Fig. [5](#Fig:alpha_ex){reference-type="ref" reference="Fig:alpha_ex"}. ![Preprocessed and original optimal guidance command profiles along an optimal trajectory.](alpha_reg-eps-converted-to.pdf){#Fig:alpha_ex width="0.7\\linewidth"} It is evident that, thanks to the saturation function in Eq. ([\[EQ:optimal_control_smooth\]](#EQ:optimal_control_smooth){reference-type="ref" reference="EQ:optimal_control_smooth"}) and the preprocessing procedure outlined in Eq. ([\[regulaeise\]](#regulaeise){reference-type="ref" reference="regulaeise"}), $\alpha_{pre}$ now demonstrates smooth behavior along the entire optimal trajectory. Conversely, if the discontinuous jump does not appear along the optimal trajectory, it is clear that $\alpha_{pre} = \alpha$ for $t \in [0,t_f]$. Moreover, we observe that multiple discontinuous jumps are rare and primarily occur in orbital transfers with unusually extended time of flight.
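In code, the sign-flip of Eq. ([\[regulaeise\]](#regulaeise){reference-type="ref" reference="regulaeise"}) and its inverse are a few lines each; the sketch below (helper names are illustrative) operates on sampled profiles and relies on the sign relation between $\alpha$ and $\lambda_{v}$ established in the next paragraph:

```python
def preprocess(alpha, j):
    """Eq. (regulaeise): negate the guidance command before the jump index j
    so that the stored profile is continuous."""
    return [-a if i < j else a for i, a in enumerate(alpha)]

def revert(alpha_pre, lam_v):
    """Restore the original command: alpha and lambda_v have opposite signs,
    so any sample where alpha_pre and lambda_v agree in sign was flipped."""
    return [-a if a * lv > 0.0 else a for a, lv in zip(alpha_pre, lam_v)]
```

A useful property of this pair is that `revert(preprocess(alpha, j), lam_v)` recovers the original profile exactly, regardless of where the single jump sits.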
Hence, in this study, we limit our consideration to orbital transfers with a maximum of one discontinuous jump. With the parameterized system, a sample of dataset $\mathcal{D}_{\alpha}:=\left\{(r,u,v,t_f),\alpha \right\}$ can be extracted from the dataset $\mathcal{D}$. Thanks to the preprocessing procedure aforementioned, we can transform the dataset $\mathcal{D}_{\alpha}$, which could contain discontinuous jumps, into a refined dataset $\mathcal{D}_{\alpha_{pre}}:=\left\{(r,u,v,t_f),\alpha_{pre} \right\}$ consisting exclusively of smooth guidance commands. Consequently, a well-trained NN $\mathcal{N}_{\alpha_{pre}}$ based on $\mathcal{D}_{\alpha_{pre}}$ is anticipated to yield reduced errors compared to an NN $\mathcal{N}_\alpha$ trained on $\mathcal{D}_{\alpha}$ [@izzo2023optimality]. For the sake of completeness, we will elucidate the process of reverting the output $\alpha_{pre}$ from $\mathcal{N}_{\alpha_{pre}}$ back into the original optimal guidance command $\alpha$ for practical implementation. It can be demonstrated that the sign of the original optimal guidance command $\alpha$ is always opposite to that of $\lambda_{v}$; indeed, $$\begin{aligned} 4 \lambda_{v} \cdot \tan \bar{\alpha} = -3\lambda_{u} - \sqrt{9\lambda_{u}^2 + 8\lambda_{v}^2} < 0 \end{aligned}$$ and the clamping in Eq. ([\[EQ:optimal_control\]](#EQ:optimal_control){reference-type="ref" reference="EQ:optimal_control"}) preserves the sign of $\bar{\alpha}$. Hence, the original optimal guidance command $\alpha$ can be obtained by simply comparing the sign of the preprocessed guidance command $\alpha_{pre}$ and $\lambda_{v}$, i.e., $$\begin{aligned} \alpha = \begin{cases} -\alpha_{pre}, \rm{if}~\alpha_{pre}\cdot \lambda_{v} >0\\ \alpha_{pre},~~\text{otherwise} \end{cases}\end{aligned}$$ ## Scheme for Generating Optimal Guidance Commands in Real Time {#SchemeGuidanceCommands} To facilitate real-time implementation, a system composed of three NNs is established, as depicted in Fig. [6](#Fig:dnn_three){reference-type="ref" reference="Fig:dnn_three"}.
To elaborate, $\mathcal{D}$ is divided into three samples, i.e., $\mathcal{D}_{t_f}:=\left\{(r,u,v),t_f \right\}$, $\mathcal{D}_{\alpha_{pre}}:=\left\{(r,u,v,t_f),\alpha_{pre} \right\}$, and $\mathcal{D}_{\lambda_{v}}:=\left\{(r,u,v,t_f), \lambda_{v} \right\}$; and the corresponding NN is denoted by $\mathcal{N}_{t_f}$, $\mathcal{N}_{\alpha_{pre}}$, and $\mathcal{N}_{\lambda_{v}}$, respectively. $\mathcal{N}_{t_f}$ is designed to forecast the optimal time of flight $t_f$ given a flight state $(r, u, v)$. Additionally, this network facilitates preliminary mission design by providing time of flight evaluations without necessitating an exact solution. The output of $\mathcal{N}_{t_f}$ is subsequently used as part of the input for $\mathcal{N}_{\alpha_{pre}}$, which yields the preprocessed guidance command $\alpha_{pre}$. Concurrently, $\mathcal{N}_{\lambda_{v}}$ predicts the co-state $\lambda_{v}$, whose sign $\mathrm{sgn}(\lambda_{v})$ is used to revert the preprocessed guidance command $\alpha_{pre}$ back into the original guidance command $\alpha$. Once these three NNs are trained offline, they enable the generation of the optimal guidance command $\alpha$ given an initial condition $(r_0, u_0, v_0)$. Consequently, the trained NNs offer closed-form solutions to the OCP. This endows the proposed method with robustness and generalization abilities, as shown in Subsections [5.2](#robus){reference-type="ref" reference="robus"} and [5.3](#gener){reference-type="ref" reference="gener"}. ![Generation of optimal guidance commands in real time via NNs.](nn_structure.pdf){#Fig:dnn_three width="0.8\\linewidth"} ## NN Training Now, our focus shifts to the implementation of the NN training algorithm. In addition to the three aforementioned NNs, i.e., $\mathcal{N}_{t_f}$, $\mathcal{N}_{\lambda_{v}}$, and $\mathcal{N}_{\alpha_{pre}}$, we also include the training of $\mathcal{N}_{\alpha}$ based on the sample $\mathcal{D}_{\alpha}=\left\{(r,u,v,t_f),\alpha \right\}$.
This is done to highlight the enhancement in approximation resulting from the preprocessing procedure. All the networks considered are feedforward NNs with multiple hidden layers. Prior to training, the dataset samples undergo shuffling to establish a split of $70\%$ for training, $15\%$ for validation, and $15\%$ for testing sets. All input and output data are normalized by subtracting the mean and dividing by the standard deviation. Selecting an appropriate NN structure, particularly the number of hidden layers and neurons per layer, is a non-trivial task. A structure that is too simplistic tends to lead to underfitting. Conversely, overfitting often arises when the structure is overly complex. Striking a balance between time consumption and the overfitting issue, we adopt a structure with three hidden layers, each containing 20 neurons. The sigmoid function serves as the activation function for the hidden layers. For the output layer, a linear function is utilized. The crux of the training lies in minimizing the loss function, quantified as the mean squared error (MSE) between the predicted values from the trained NNs and the actual values within the dataset samples. We employ the Levenberg-Marquardt algorithm in Ref. [@hagan1994training] for training NNs, halting the training after 1,000 epochs or when the loss function drops below $1\times 10^{-8}$. To prevent overfitting, we employ an early stopping criterion based on the concept of patience. Other hyperparameters utilized in training are set to their default values. Fig.
[\[Fig:Training\]](#Fig:Training){reference-type="ref" reference="Fig:Training"} ![$\mathcal{N}_{t_f}$](Loss_tf-eps-converted-to.pdf){#Fig:loss_tf width="8cm"}       ![$\mathcal{N}_{\lambda_{v}}$](loss_pvtheta-eps-converted-to.pdf){#Fig:loss_pvtheta width="8cm"} \ ![$\mathcal{N}_{\alpha_{pre}}$](loss_reg_u-eps-converted-to.pdf){#Fig:loss_reg_u width="8cm"}       ![$\mathcal{N}_{\alpha}$](loss_alpha-eps-converted-to.pdf){#Fig:loss_alpha width="8cm"} illustrates the training progression of four NNs. As seen in Fig. [7](#Fig:loss_tf){reference-type="ref" reference="Fig:loss_tf"}, it is evident that the MSEs for the training, validation, and test sets decrease to $1\times 10^{-8}$ in less than $300$ epochs. Table [1](#Table:control_effort_25_50){reference-type="ref" reference="Table:control_effort_25_50"} displays the MSEs of the four NNs upon completion of the training process. Notably, the validation and test errors of $\mathcal{N}_{\alpha_{pre}}$ are smaller than those of $\mathcal{N}_\alpha$, indicating that our preprocessing procedure enhances the approximation accuracy. This is further confirmed through simulations detailed in the next section.
|            | $\mathcal N_{t_f}$ | $\mathcal{N}_{\lambda_{v}}$ | $\mathcal{N}_{\alpha_{pre}}$ | $\mathcal{N}_{\alpha}$ |
|:-----------|:------------------:|:---------------------------:|:----------------------------:|:----------------------:|
| Training   | $1\times 10^{-8}$ | $4.94\times 10^{-6}$ | $7.53 \times 10^{-7}$ | $1.72\times 10^{-6}$ |
| Validation | $1\times 10^{-8}$ | $5.02\times 10^{-6}$ | $6.82\times 10^{-7}$ | $3.53\times 10^{-6}$ |
| Test       | $1\times 10^{-8}$ | $5.07\times 10^{-6}$ | $7.70\times 10^{-7}$ | $2.55\times 10^{-6}$ |

[\[Table:control_effort_25_50\]]{#Table:control_effort_25_50 label="Table:control_effort_25_50"}

: MSEs of four NNs

# Numerical Simulations {#NumericalSimulations} In this section, we present some numerical simulations to demonstrate the efficacy and performance of the proposed methodology. We commence by showcasing its optimality performance. Subsequently, robustness against perturbations and generalization ability of the proposed method are investigated. In addition, we evaluate the real-time execution of the command generation process. It is worth noting that the proposed method has been implemented on a desktop equipped with an Intel Core i9-10980XE CPU \@3.00 GHz and 128 GB of RAM. ## Optimality Performance {#firstSim} This subsection is devoted to examining whether or not the proposed approach can produce optimal solutions compared with existing methods. Table [2](#Table:Initialconditions){reference-type="ref" reference="Table:Initialconditions"} outlines the initial conditions for two orbital transfers. For comparison, two strategies are employed to steer the solar sailcraft. The first strategy employs three NNs, i.e., $\mathcal{N}_{t_f}$, $\mathcal{N}_{\alpha_{pre}}$, and $\mathcal{N}_{\lambda_{v}}$, while the second one uses two NNs, $\mathcal{N}_{t_f}$ and $\mathcal{N}_{\alpha}$. The indirect method, resolving the shooting function in Eq.
([\[EQ:TPBVP_law\]](#EQ:TPBVP_law){reference-type="ref" reference="EQ:TPBVP_law"}), will be used for comparison.

|        | $r_0$ | $\theta_0$ | $u_0$ | $v_0$ |
|:-------|:-----:|:----------:|:-----:|:-----:|
| Case 1 | $1$ | $0$ | $0$ | $1$ |
| Case 2 | $1.05$ | $0$ | $0.15$ | $1.03$ |

[\[Table:Initialconditions\]]{#Table:Initialconditions label="Table:Initialconditions"}

: Initial conditions for two orbital transfer cases

For Case 1, the solar sailcraft embarks on a journey from Earth's orbit around the Sun, and the mission is to steer the solar sailcraft into Mars' orbit around the Sun. Fig. [\[Fig:cooperative_control_profile\]](#Fig:cooperative_control_profile){reference-type="ref" reference="Fig:cooperative_control_profile"} presents the outcomes yielded by the trained NNs and the indirect method. The optimal time of flight obtained via the indirect method is $7.0184$ TU, while the prediction from $\mathcal{N}_{t_f}$ is $7.0192$ TU for the given initial condition. ![Time of flight profile](Nominal_tf-eps-converted-to.pdf){#Fig:Time_of_flight width="8cm"}       ![Profile for co-state of the transverse speed](nominal_pvtheta-eps-converted-to.pdf){#Fig:Costate_oftangential width="8cm"} \ ![Guidance command profile derived from $\mathcal{N}_{\alpha_{pre}}$, $\mathcal{N}_{\lambda_{v}}$ and $\mathcal{N}_{t_f}$.](nominal_u-eps-converted-to.pdf){#Fig:cooperativesteering width="8cm"}       ![Guidance command profile derived from $\mathcal{N}_{\alpha}$ and $\mathcal{N}_{t_f}$.](nomimal_jump_u-eps-converted-to.pdf){#Fig:cooperativeVelocity width="8cm"} Remarkably, $\mathcal{N}_{t_f}$ exhibits precise approximation of the optimal time of flight throughout the entire transfer, as shown by Fig. [11](#Fig:Time_of_flight){reference-type="ref" reference="Fig:Time_of_flight"}. On the other hand, Fig.
[12](#Fig:Costate_oftangential){reference-type="ref" reference="Fig:Costate_oftangential"} shows a less accurate approximation of the optimal co-state $\lambda_{v}$, especially at the end of the transfer. Fortunately, the sign of the predicted $\lambda_{v}$, rather than its exact value, is sufficient for reverting the preprocessed guidance command back to the original guidance command. This observation is depicted in Fig. [13](#Fig:cooperativesteering){reference-type="ref" reference="Fig:cooperativesteering"}, where the guidance commands derived from the first strategy and the indirect method closely align, except at the end of the transfer. In contrast, the guidance command derived from the second strategy exhibits lower accuracy, especially in the presence of rapid changes in the optimal guidance command. The degeneration of approximation performance, caused by the second strategy, is demonstrated in Fig. [14](#Fig:cooperativeVelocity){reference-type="ref" reference="Fig:cooperativeVelocity"}. It is worth emphasizing that this problem of performance degeneration may not be easily resolved by simply adjusting the hyperparameters for training NNs, as shown by the numerical simulations in Ref. [@li2019neural]. In that work, a random search for the hyperparameters was employed, and the trained NNs, even with the best approximation performance, were not able to capture the rapid change in the guidance command. In contrast, thanks to the preprocessing procedure in this paper, the well-trained NNs are capable of precisely generating the optimal guidance command. To assess the performance of fulfilling the terminal condition, we define the flight error $\Delta \Phi$ as the difference between the NN-driven terminal flight state $(r^{\mathcal{N}}(t_f),u^{\mathcal{N}}(t_f),v^{\mathcal{N}}(t_f))$ and the desired terminal flight state $(1.524,0,\frac{1}{\sqrt{1.524}})$.
This error is computed as follows $$\begin{aligned} \Delta \Phi = ( \lvert r^{\mathcal{N}}(t_f)-1.524 \rvert,\lvert u^{\mathcal{N}}(t_f)\rvert,\lvert v^{\mathcal{N}}(t_f)-\frac{1}{\sqrt{1.524}}\rvert)\end{aligned}$$ Then, the flight error caused by the first strategy is $\Delta \Phi = (5.3703 \times 10^{-7}, 3.0121 \times 10^{-5}, 3.6943 \times 10^{-5} )$, which is lower than that of the second strategy with $\Delta \Phi =( 2.2220 \times 10^{-5}, 4.1852 \times 10^{-4}, 2.5968 \times 10^{-4})$. Although both strategies can guide the solar sailcraft into Mars' orbit with negligible flight errors, the guidance command derived from the first strategy is more feasible and accurate than that from the second strategy, as shown in Figs. [13](#Fig:cooperativesteering){reference-type="ref" reference="Fig:cooperativesteering"} and [14](#Fig:cooperativeVelocity){reference-type="ref" reference="Fig:cooperativeVelocity"}. In addition, the transfer trajectories derived from the first strategy and the indirect method are illustrated in Fig. [15](#Fig:tra_nominalcase1){reference-type="ref" reference="Fig:tra_nominalcase1"}. ![Transfer trajectories derived from the first strategy and the indirect method for Case 1.](transfer_case_nominal-eps-converted-to.pdf){#Fig:tra_nominalcase1 width="0.7\\linewidth"} Given the superiority of the first strategy over the second, our focus will exclusively be on the first strategy, which will be called the NN-based approach hereafter. Shifting to Case 2, the output of $\mathcal{N}_{t_f}$ for the initial condition stands at 6.1643 TU, while the optimal solution attained via the indirect method is 6.1552 TU, resulting in a marginal functional penalty of merely 0.5290 days. Furthermore, Fig.
[16](#Fig:control_nominalcase2){reference-type="ref" reference="Fig:control_nominalcase2"} illustrates that the NN-based approach precisely captures the discontinuous jump and effectively approximates the optimal guidance command, except at the end of the transfer. The transfer trajectories derived from the NN-based approach and the indirect method are depicted in Fig. [17](#Fig:transfer_nominal){reference-type="ref" reference="Fig:transfer_nominal"}. As a consequence, the flight error caused by the NN-based approach is $\Delta \Phi = (5.2011 \times 10^{-6}, 3.3526 \times 10^{-4}, 1.5393 \times 10^{-4})$, implying that the NN-based approach capably steers the solar sailcraft into Mars' orbit with acceptable errors. ![Guidance command profiles derived from the NN-based approach and the indirect method for Case 2.](Sim_a_case2-eps-converted-to.pdf){#Fig:control_nominalcase2 width="0.7\\linewidth"} ![Transfer trajectories derived from the NN-based approach and the indirect method for Case 2.](Sim_a_case2_tra-eps-converted-to.pdf){#Fig:transfer_nominal width="0.7\\linewidth"} ## Robustness Analysis {#robus} In reality, the normalized characteristic acceleration $\beta$ is influenced by diverse factors, such as solar sailing efficiency and solar radiation pressure [@spencer2019solar]. Consequently, the solar sailcraft must continually adjust its flight trajectory based on its current flight state to rectify any flight errors. Unfortunately, this process is quite challenging for onboard systems with limited computational resources. Additionally, solving OCPs featuring parameters under perturbations tends to be problematic for conventional direct and indirect methods that rely on precise system models. For this reason, we evaluate the proposed method's robustness against perturbations concerning $\beta$.
The initial condition for the solar sailcraft is set as $$\begin{aligned} r_0 = 1.1, \theta_0 = \frac{\pi}{2}, u_0 = 0.18, v_0 = 0.93 %\label{EQ:initial_condition_robust}\end{aligned}$$ and the mission is to fly into Mars' orbit. The normalized characteristic acceleration $\beta$ is perturbed by relative errors drawn from a uniform distribution over the range $(-15\%, 15\%)$. It is worth mentioning that the simulation stops once the predicted time of flight $t_f$ from $\mathcal{N}_{t_f}$ drops below 0.01 TU (equivalent to 0.580 days). Figs. [18](#Fig:transfer_noise){reference-type="ref" reference="Fig:transfer_noise"} and [19](#Fig:control_noise){reference-type="ref" reference="Fig:control_noise"} depict the transfer trajectory and the corresponding guidance command derived from the NN-based approach, respectively. ![NN-based transfer trajectory with $\beta$ under perturbations.](nose_tra-eps-converted-to.pdf){#Fig:transfer_noise width="0.7\\linewidth"} It can be seen that the colormap-identified speed exhibits a gradual reduction for most of the transfer, followed by a gradual increase towards the end of the transfer, as depicted in Fig. [18](#Fig:transfer_noise){reference-type="ref" reference="Fig:transfer_noise"}. Regarding Fig. [19](#Fig:control_noise){reference-type="ref" reference="Fig:control_noise"}, the guidance command is generally smooth during the initial phase of the transfer. Subsequently, to uphold precise flight, the NN-based approach acts like an error corrector by dynamically adapting the guidance command in the latter phase of the transfer. A remarkable observation is that even when the solar sailcraft's dynamics is under perturbations, the attitude constraints remain unviolated throughout the entire transfer. Furthermore, it takes 5.9152 TU for the solar sailcraft to accomplish its journey with a minor flight error of $\Delta \Phi = (2.0160 \times 10^{-5}, 9.8708 \times 10^{-5}, 8.4738 \times 10^{-5})$.
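The closed-loop setup of this robustness test can be sketched in a few lines. The dynamics below are the normalized polar equations underlying the Hamiltonian of Appendix A; the trained guidance NN is replaced by a placeholder callable, and the nominal $\beta$, integration step, and fixed horizon are illustrative assumptions rather than the exact values used above:

```python
import math
import random

def sail_dynamics(state, alpha, beta):
    # Normalized polar dynamics consistent with the Hamiltonian in Appendix A:
    # r' = u, theta' = v/r, u' = beta*cos^3(a)/r^2 + v^2/r - 1/r^2,
    # v' = beta*sin(a)*cos^2(a)/r^2 - u*v/r.
    r, theta, u, v = state
    ca, sa = math.cos(alpha), math.sin(alpha)
    return (u,
            v / r,
            beta * ca**3 / r**2 + v**2 / r - 1.0 / r**2,
            beta * sa * ca**2 / r**2 - u * v / r)

def rk4_step(state, alpha, beta, h):
    # One classical Runge-Kutta step of size h with the guidance held fixed.
    k1 = sail_dynamics(state, alpha, beta)
    k2 = sail_dynamics(tuple(s + 0.5 * h * k for s, k in zip(state, k1)), alpha, beta)
    k3 = sail_dynamics(tuple(s + 0.5 * h * k for s, k in zip(state, k2)), alpha, beta)
    k4 = sail_dynamics(tuple(s + h * k for s, k in zip(state, k3)), alpha, beta)
    return tuple(s + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def flight_error(state):
    # Delta Phi w.r.t. Mars' circular orbit, as defined in the text.
    r, _, u, v = state
    return (abs(r - 1.524), abs(u), abs(v - 1.0 / math.sqrt(1.524)))

def simulate(guidance, state, beta_nom, t_f, h=1e-3, rel_pert=0.15, seed=0):
    # Closed loop: beta is redrawn each step from a uniform +/-15% band.
    rng = random.Random(seed)
    t = 0.0
    while t < t_f:
        beta = beta_nom * (1.0 + rng.uniform(-rel_pert, rel_pert))
        state = rk4_step(state, guidance(state), beta, h)
        t += h
    return state
```

With zero sail acceleration and a circular initial orbit the integrator reproduces the unperturbed Keplerian motion, which is a convenient sanity check; plugging in the trained guidance networks and the $\mathcal{N}_{t_f}$-based stopping rule recovers the experiment reported here.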
![NN-based guidance command profile with $\beta$ under perturbations.](noise_control-eps-converted-to.pdf){#Fig:control_noise width="0.7\\linewidth"} ## Generalization Analysis {#gener} The assessment of generalization ability pertains to evaluating the predictive performance for inputs that lie outside the training dataset. Define a state space $\mathcal{A}$, not contained in $\mathcal{D}$, as follows: $$\begin{aligned} r_0 \in [1,1.15], \theta_0 \in [0, 2\pi], u_0 \in [0,0.1], v_0 \in [0.8,1.2] %\label{EQ:initial_condition_Generalization}\end{aligned}$$ Within this space, the solar sailcraft's initial conditions are randomly chosen. Subsequently, the proposed approach is employed to guide the solar sailcraft from the selected initial condition to Mars' orbit. A total of 30 tests are conducted, resulting in 20 successful transfers, as depicted in Fig. [20](#Fig:generalisation){reference-type="ref" reference="Fig:generalisation"}, in which the small circles denote the solar sailcraft's initial positions. Among these successful cases, the average flight error is $\Delta \Phi = ( 4.5091 \times 10^{-5}, 1.4726 \times 10^{-4},1.0147 \times 10^{-4})$. This indicates that the proposed method is able to steer the spacecraft to Mars' orbit accurately, although there are some states that are not in the training set. ![NN-based transfer trajectories with initial conditions randomly chosen in $\mathcal{A}$.](generalise_tra-eps-converted-to.pdf){#Fig:generalisation width="0.7\\linewidth"} ## Real-Time Performance Since the output of a feedforward NN is essentially a fixed composition of affine maps and activation functions applied to the input vector, a trained NN can produce its output within a constant time. Recall that our method requires three trained NNs with simple structures. 10,000 trials of the proposed method across various flight states are run in a C-based computational environment, and the mean execution time for generating a guidance command is 0.0177 ms.
This translates to approximately 0.5310 ms on a typical flight processor operating at 100 MHz [@gankidi2017fpga]. Conversely, indirect methods typically necessitate time-consuming processes of integration and iteration. We utilize the *fsolve* function to solve the relevant TPBVP and observe a convergence time of approximately 1.22 seconds, even when an appropriate initial guess is provided. Consequently, the computational burden associated with optimization-based indirect methods can be substantially alleviated by adopting the proposed method. # Conclusions {#Colus} Real-time optimal control for attitude-constrained solar sailcrafts in coplanar circular-to-circular interplanetary transfers was studied in this paper. The necessary conditions governing optimal trajectories were deduced through the PMP. Different from the conventional methods that rely on optimization-based solvers to construct a training dataset, we formulated a parameterized system. This enabled us to generate an optimal trajectory by simply solving an initial value problem, and the optimal state-guidance command pairs could be readily obtained. Regarding the challenge posed by potential discontinuities in the optimal guidance law, which could undermine the NN's performance, we developed a technique to preprocess the guidance command. With the saturation function introduced in the optimal guidance law, this preprocessing technique smoothed the guidance command. Subsequently, we trained three NNs, endowing them with the capability to generate optimal guidance commands in real time. Finally, all the theoretical developments were verified by numerical simulations. # Acknowledgement {#acknowledgement .unnumbered} This research was supported by the National Natural Science Foundation of China under Grant No. 62088101.
# Proof for Lemmas in Section [2](#Problem){reference-type="ref" reference="Problem"} {#Appendix:A} Proof for Lemma [Lemma 1](#LE:optimal_control_law){reference-type="ref" reference="LE:optimal_control_law"}. In view of $\dot{\lambda}_{\theta}(t) = 0$ in Eq. ([\[EQ:costate_function\]](#EQ:costate_function){reference-type="ref" reference="EQ:costate_function"}) and Eq. ([\[EQ:theta_law\]](#EQ:theta_law){reference-type="ref" reference="EQ:theta_law"}), it is easy to see that $\lambda_{\theta}(t)$ remains zero for $t \in [0,t_f]$ along an optimal trajectory. Then, Eq. ([\[EQ:Ham_function\]](#EQ:Ham_function){reference-type="ref" reference="EQ:Ham_function"}) reduces to $$\begin{aligned} \mathscr H = 1 + \lambda_r u + \lambda_{u}(\frac{\beta \cos^3\alpha}{r^2} + \frac{v^2}{r} - \frac{1}{r^2}) + \lambda_{v} (\frac{\beta \sin\alpha \cos^2\alpha}{r^2} - \frac{u v}{r}) \label{A1_H}\end{aligned}$$ To obtain stationary values of $\mathscr H$, we have $$\begin{aligned} \frac{\partial \mathscr H}{\partial \alpha} = \frac{\beta}{r^2}\cos ^3 \alpha [-3\lambda_{u} \tan \alpha + \lambda_{v}(1-2 \tan ^2 \alpha)] = 0 \label{A1_stationary}\end{aligned}$$ Assume that $\lambda_{u}$ and $\lambda_{v}$ are not both zero [@wood1982comment]. Because $\beta > 0$, and $\cos \alpha \neq 0$ for $\alpha \in [-\phi_{max},\phi_{max}]$, two different roots, denoted by $\alpha_1$ and $\alpha_2$, can be obtained by solving Eq. ([\[A1_stationary\]](#A1_stationary){reference-type="ref" reference="A1_stationary"}) as $$\begin{aligned} \begin{cases} \alpha_{1} = \arctan ~\frac{-3\lambda_{u} + \sqrt{9 \lambda^2_{u} + 8 \lambda^2_{v}}}{4 \lambda_{v}},\\ \alpha_{2} = \arctan ~ \frac{-3\lambda_{u} - \sqrt{9 \lambda^2_{u} + 8 \lambda^2_{v}}}{4 \lambda_{v}} \label{A1_alpha} \end{cases}\end{aligned}$$ Now we proceed to check for the Hessian's positive definiteness. According to Eq. 
([\[A1_stationary\]](#A1_stationary){reference-type="ref" reference="A1_stationary"}), taking the second partial derivative of $\mathscr H$ w.r.t. $\alpha$ yields $$\begin{aligned} \frac{\partial^2 \mathscr H}{\partial \alpha^2} = \frac{\beta}{r^2}\{-3 \cos^2\alpha \sin \alpha[-3\lambda_{u} \tan \alpha + \lambda_{v}(1-2 \tan^2 \alpha)] + \cos^3 \alpha[-3 \lambda_{u} \sec^2 \alpha - 4 \lambda_{v} \tan \alpha \sec^2\alpha]\} \label{A1_Hess}\end{aligned}$$ Substituting Eq. ([\[A1_alpha\]](#A1_alpha){reference-type="ref" reference="A1_alpha"}) into Eq. ([\[A1_Hess\]](#A1_Hess){reference-type="ref" reference="A1_Hess"}) leads to $$\begin{aligned} \begin{cases} \frac{\partial^2 \mathscr H}{\partial \alpha_1^2} = \frac{\beta}{r^2} \cos \alpha (-\sqrt{9 \lambda^2_{u} + 8 \lambda^2_{v}}),\\ \frac{\partial^2 \mathscr H}{\partial \alpha_2^2} = \frac{\beta}{r^2} \cos \alpha (+\sqrt{9 \lambda^2_{u} + 8 \lambda^2_{v}}) \label{A1_Hess_alpha12} \end{cases}\end{aligned}$$ Clearly, $\frac{\partial^2 \mathscr H}{\partial \alpha^2} > 0$ holds iff $\alpha = \alpha_2$. Therefore, the local minimum solution $\bar{\alpha}$ is given by $$\begin{aligned} \bar{\alpha} = \alpha_2 = \arctan~\frac{-3\lambda_{u} - \sqrt{9 \lambda^2_{u} + 8 \lambda^2_{v}}}{4 \lambda_{v}} \label{Eq:optimal_alpha}\end{aligned}$$ Notice that Eq. ([\[Eq:optimal_alpha\]](#Eq:optimal_alpha){reference-type="ref" reference="Eq:optimal_alpha"}) will become singular if $\lambda_{v}(t) = 0$. Then there are two cases. The first case is that $\lambda_{v}(t) = 0$ at some isolated points for $t \in [0,t_f]$. The second one is that $\lambda_{v}(t) \equiv 0$ in a time interval $[t_a,t_b] \subset [0,t_f]$. We now analyze the first case.
If $\lambda_{u}(t) < 0$ for $t\in [t_a,t_b]$, then $$\begin{aligned} \begin{gathered} \lim_{\lambda_{u} < 0, \lambda_{v} \to 0} \frac{-3\lambda_{u} - \sqrt{9 \lambda^2_{u} + 8 \lambda^2_{v}}}{4 \lambda_{v}} = \lim_{\lambda_{u} < 0, \lambda_{v} \to 0} \frac{-3\lambda_{u} +3\lambda_{u} \sqrt{1+ \frac{8 \lambda^2_{v}}{9 \lambda^2_{u}}}}{4 \lambda_{v}}\\ \approx \lim_{\lambda_{u} < 0, \lambda_{v} \to 0} \frac{-3\lambda_{u} +3\lambda_{u}(1+\frac{1}{2}\frac{8 \lambda^2_{v}}{9 \lambda^2_{u}})}{4 \lambda_{v}} = \lim_{\lambda_{u} < 0, \lambda_{v} \to 0} \frac{\lambda_{v}}{3 \lambda_{u}} = 0 \end{gathered} \label{Eq:jump1} \end{aligned}$$ Analogously, if $\lambda_{u}(t) > 0$ for $t\in [t_a,t_b]$, we have $$\begin{aligned} \begin{gathered} \lim_{\lambda_{u} > 0, \lambda_{v} \to 0} \frac{-3\lambda_{u} - \sqrt{9 \lambda^2_{u} + 8 \lambda^2_{v}}}{4 \lambda_{v}} = \lim_{\lambda_{u} > 0, \lambda_{v} \to 0} \frac{-3\lambda_{u} - 3\lambda_{u} \sqrt{1+ \frac{8 \lambda^2_{v}}{9 \lambda^2_{u}}}}{4 \lambda_{v}}\\ \approx \lim_{\lambda_{u} > 0, \lambda_{v} \to 0} \frac{-3\lambda_{u} - 3\lambda_{u}(1+\frac{1}{2}\frac{8 \lambda^2_{v}}{9 \lambda^2_{u}})}{4 \lambda_{v}} = \lim_{\lambda_{u} > 0, \lambda_{v} \to 0} -(\frac{3 \lambda_{u}}{2\lambda_{v}}+ \frac{\lambda_{v}}{3 \lambda_{u}}) = \lim_{\lambda_{u} > 0, \lambda_{v} \to 0} -\frac{3 \lambda_{u}}{2\lambda_{v}} \end{gathered} \label{Eq:jump2} \end{aligned}$$ Therefore, from Eq. ([\[Eq:jump1\]](#Eq:jump1){reference-type="ref" reference="Eq:jump1"}), it is clear that the local minimum solution $\bar{\alpha}$ in Eq. ([\[Eq:optimal_alpha\]](#Eq:optimal_alpha){reference-type="ref" reference="Eq:optimal_alpha"}) will automatically reduce to zero if $\lambda_{u}(t) < 0$ and $\lambda_{v}(t) = 0$ at some isolated points, indicating that Eq. ([\[Eq:optimal_alpha\]](#Eq:optimal_alpha){reference-type="ref" reference="Eq:optimal_alpha"}) still holds in such case. On the other hand, from Eq. 
([\[Eq:jump2\]](#Eq:jump2){reference-type="ref" reference="Eq:jump2"}), the sign of the local minimum solution $\bar{\alpha}$ will change if $\lambda_{v}(t)$ crosses zero and $\lambda_{u}(t) > 0$ at the isolated points, which will result in the appearance of a discontinuous jump. As a result, a discontinuous jump can be detected using the signs of $\lambda_{v}$ and $\lambda_{u}$. Now we prove that the second case, that is, $\lambda_{v}(t) \equiv 0$ in a time interval $t \in [t_a,t_b]$, does not hold along an optimal trajectory. By contradiction, we assume that $\lambda_{v}(t) \equiv 0$ for $t \in [t_a,t_b]$. Recall that $\lambda_\theta(t)\equiv 0$ along an optimal trajectory. Then, in view of Eq. ([\[EQ:costate_function\]](#EQ:costate_function){reference-type="ref" reference="EQ:costate_function"}), we have $$\begin{aligned} \dot\lambda_{v}(t)=-2\frac{\lambda_{u}v(t)}{r(t)} \equiv 0, t \in [t_a,t_b] \label{Eq:optimal_lambda}\end{aligned}$$ Note that if $v(t)\equiv 0$ in any time interval, it will result in $\theta$ being constant in that time interval, as shown by Eq. ([\[EQ:dyna_equation\]](#EQ:dyna_equation){reference-type="ref" reference="EQ:dyna_equation"}). This is obviously impossible during an orbital transfer. Thus, to make Eq. ([\[Eq:optimal_lambda\]](#Eq:optimal_lambda){reference-type="ref" reference="Eq:optimal_lambda"}) true, $\lambda_{u}(t)$ must vanish for $t \in [t_a,t_b]$. This in turn implies $$\begin{aligned} \lambda_{r}(t)\equiv 0, t \in [t_a,t_b] \label{Eq:optimal_r}\end{aligned}$$ Clearly, if $\lambda_{v}(t) \equiv 0$ did hold for $t \in [t_a,t_b]$, then the following equation would be valid $$\begin{aligned} \boldsymbol{\lambda}(t) = [\lambda_{r}(t), \lambda_{\theta}(t), \lambda_{u}(t),\lambda_{v}(t)] \equiv \boldsymbol{0}, t \in [t_a,t_b] \label{Eq:optimal_00}\end{aligned}$$ which contradicts the PMP. Thus, $\lambda_{v}(t)$ can only be zero at some isolated points.
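The limiting behaviour in Eqs. ([\[Eq:jump1\]](#Eq:jump1){reference-type="ref" reference="Eq:jump1"}) and ([\[Eq:jump2\]](#Eq:jump2){reference-type="ref" reference="Eq:jump2"}) is easy to confirm numerically; the following minimal check (function name illustrative) evaluates the stationary solution near the singular locus $\lambda_v = 0$:

```python
import math

def alpha_bar(lam_u, lam_v):
    # Stationary solution of Eq. (optimal_alpha); singular at lam_v = 0.
    return math.atan((-3.0 * lam_u - math.sqrt(9.0 * lam_u**2 + 8.0 * lam_v**2))
                     / (4.0 * lam_v))

small = 1e-8

# lam_u < 0: alpha_bar tends to 0 from either side of lam_v = 0 (no jump).
assert abs(alpha_bar(-1.0, small)) < 1e-6
assert abs(alpha_bar(-1.0, -small)) < 1e-6

# lam_u > 0: alpha_bar tends to -pi/2 as lam_v -> 0+ and to +pi/2 as
# lam_v -> 0-, so the sign flips when lam_v crosses zero: the jump.
assert abs(alpha_bar(1.0, small) + math.pi / 2) < 1e-4
assert abs(alpha_bar(1.0, -small) - math.pi / 2) < 1e-4
```

This matches the analysis: a smooth passage through $\bar{\alpha}=0$ when $\lambda_u<0$, and a jump between the two attitude extremes when $\lambda_u>0$.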
Recall that the optimal guidance law is still ambiguous for the case that $\lambda_{v}(t)$ crosses zero and $\lambda_{u}(t) > 0$, as shown in Eq. ([\[Eq:jump2\]](#Eq:jump2){reference-type="ref" reference="Eq:jump2"}). Because a continuous function attains its global minimum over a closed interval either at a local minimum or at an endpoint [@oguri2022solar], we have $$\begin{aligned} \alpha^* = \mathop {\rm{argmin}}\limits_{\alpha \in \{-\phi_{max},\bar{\alpha}, \phi_{max}\}} \mathscr H \label{Eq:global_optimal_Ham}\end{aligned}$$ To further resolve the ambiguity in terms of $\alpha$, rewrite Eq. ([\[A1_H\]](#A1_H){reference-type="ref" reference="A1_H"}) as $$\begin{aligned} \mathscr H(\alpha,\boldsymbol{\lambda},\boldsymbol{x}) = \mathscr H_1(\alpha,\boldsymbol{\lambda},\boldsymbol{x}) + \mathscr H_2(\boldsymbol{\lambda},\boldsymbol{x}) \label{Eq:optimal_Ham}\end{aligned}$$ where $\mathscr H_1$ is the part of $\mathscr H$ depending on $\alpha$, and $\mathscr H_2$ is the rest of $\mathscr H$, independent of $\alpha$. Then, $\mathscr H_1$ satisfies $$\begin{aligned} \mathscr H_1(\alpha,\boldsymbol{\lambda},\boldsymbol{x}) = \frac{\beta}{r^2}(\lambda_{u}\cos^3 \alpha + \lambda_{v} \sin \alpha \cos ^2 \alpha) \label{Eq:optimal_Ham_1}\end{aligned}$$ Recalling Eq. ([\[EQ:att_constraints\]](#EQ:att_constraints){reference-type="ref" reference="EQ:att_constraints"}) and $\phi_{max} \in (0,\frac{\pi}{2})$, we obtain $$\begin{aligned} \mathscr H_1(\frac{\pi}{2},\boldsymbol{\lambda},\boldsymbol{x}) = \mathscr H_1(-\frac{\pi}{2},\boldsymbol{\lambda},\boldsymbol{x}) = 0 \label{Eq:optimal_Ham_2}\end{aligned}$$ Substituting Eq. ([\[A1_alpha\]](#A1_alpha){reference-type="ref" reference="A1_alpha"}) into Eq.
([\[Eq:optimal_Ham_1\]](#Eq:optimal_Ham_1){reference-type="ref" reference="Eq:optimal_Ham_1"}) yields $$\begin{aligned} \begin{cases} \mathscr H_{1}(\alpha_{1}) = \frac{\beta}{4r^{2}}\cos^{3}\alpha_{1}\left( \lambda_{u}+\sqrt{9\lambda_{u}^{2}+8\lambda_{v}^{2}} \right)\geq 0 \\ \mathscr H_{1}(\alpha_{2}) = \frac{\beta}{4r^{2}}\cos^{3}\alpha_{2}\left( \lambda_{u} - \sqrt{9\lambda_{u}^{2}+8\lambda_{v}^{2}} \right)\leq 0 \end{cases}\end{aligned}$$ where the inequalities follow from $\sqrt{9\lambda_{u}^{2}+8\lambda_{v}^{2}}\geq 3\lvert\lambda_{u}\rvert$. This indicates that $\alpha_1$ and $\alpha_2$ are the global maximum and global minimum points for $\mathscr H$ in $[-\frac{\pi}{2}, \frac{\pi}{2}]$, respectively. Define a variable $\Delta \mathscr H$ as $$\begin{aligned} \Delta \mathscr H = \mathscr H_1(\phi_{max},\boldsymbol{\lambda},\boldsymbol{x}) - \mathscr H_1(-\phi_{max},\boldsymbol{\lambda},\boldsymbol{x}) = \frac{2\beta}{r^2}\cos^2 \phi_{max} \sin \phi_{max} \lambda_{v} \label{Eq:optimal_Ham_3}\end{aligned}$$ Without loss of generality, we consider the case that $\lambda_{v}<0$, which leads to a negative local maximum point $\alpha_1$ and a positive local minimum point $\alpha_2$. If $\alpha_2 < \phi_{max}$, then $\alpha_2$ is the global minimum point in $[-\phi_{max}, \phi_{max}]$. If $\alpha_2 > \phi_{max}$, then $\phi_{max}$ will be the global minimum point in $[-\phi_{max}, \phi_{max}]$ since $\Delta \mathscr H < 0$, as shown in Fig. [21](#Fig:polar_reference_app){reference-type="ref" reference="Fig:polar_reference_app"}. ![A schematic view of $\mathscr H_1(\alpha, \boldsymbol{\lambda}, \boldsymbol{x})$ with $\lambda_{u}>0$ and $\lambda_{v}<0$.](proof.pdf){#Fig:polar_reference_app width="0.65\\linewidth"} Hence, in view of Eq. ([\[Eq:optimal_alpha\]](#Eq:optimal_alpha){reference-type="ref" reference="Eq:optimal_alpha"}), we have that the optimal guidance law $\alpha^*$ minimizing the Hamiltonian $\mathscr H$ w.r.t.
$\alpha$ is $$\begin{aligned} \alpha^*=\mathrm{Median}\,[-\phi_{max},\bar{\alpha},\phi_{max}], \ \text{where}\ \bar{\alpha} = \arctan\frac{-3 \lambda_{u}-\sqrt{9\lambda^2_{u}+8\lambda^2_{v}}}{4\lambda_{v}}\end{aligned}$$ Proof for Lemma [Lemma 2](#LE:optimal_trajectory_lemma){reference-type="ref" reference="LE:optimal_trajectory_lemma"}. Taking Eq. ([\[para_initial\]](#para_initial){reference-type="ref" reference="para_initial"}) into consideration, it is easy to see that the solution to the parameterized system in Eq. ([\[EQ:new_system\]](#EQ:new_system){reference-type="ref" reference="EQ:new_system"}) at $\tau = 0$ represents a circular orbit with a radius of $R_0$. Moreover, regarding Eq. ([\[EQ:theta_law\]](#EQ:theta_law){reference-type="ref" reference="EQ:theta_law"}), we have that Eq. ([\[EQ:tf_law\]](#EQ:tf_law){reference-type="ref" reference="EQ:tf_law"}) holds at $\tau = 0$ for the pair $(\lambda_{R_0},\lambda_{U_0})$ and $\lambda_{V_0}$ determined by Eq. ([\[EQ:solve_equation_simple\]](#EQ:solve_equation_simple){reference-type="ref" reference="EQ:solve_equation_simple"}). By propagating the parameterized system in Eq. ([\[EQ:new_system\]](#EQ:new_system){reference-type="ref" reference="EQ:new_system"}) for $\tau \in [0,t_f]$, $\mathcal{F}$ meets all the necessary conditions for an optimal trajectory. By the definition of $\mathcal F_p$, it is obvious that $\mathcal F_p$ represents the solution space of an optimal trajectory, and $\tau \in [0,t_f]$ defines the time of flight. In other words, an optimal trajectory can be readily obtained simply by choosing the two parameters $\lambda_{R_0}$ and $\lambda_{U_0}$ arbitrarily and solving an initial value problem governed by Eqs. ([\[EQ:new_system\]](#EQ:new_system){reference-type="ref" reference="EQ:new_system"}) and ([\[para_initial\]](#para_initial){reference-type="ref" reference="para_initial"}).
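A compact implementation of this guidance law can be sketched as follows; the positive factor $\beta/r^2$ is dropped since it does not affect the minimiser, and the singular case $\lambda_v=0$ is resolved according to the limit analysis above (function names are illustrative):

```python
import math

def optimal_guidance(lam_u, lam_v, phi_max):
    # alpha* = Median[-phi_max, alpha_bar, phi_max], cf. Lemma 1.
    if lam_v == 0.0:
        # Singular case: alpha_bar -> 0 when lam_u <= 0; for lam_u > 0 the
        # minimum sits at an attitude bound, and Delta H = 0 at lam_v = 0,
        # so both bounds are equally optimal -- pick +phi_max.
        alpha_bar = 0.0 if lam_u <= 0.0 else phi_max
    else:
        alpha_bar = math.atan((-3.0 * lam_u - math.sqrt(9.0 * lam_u**2 + 8.0 * lam_v**2))
                              / (4.0 * lam_v))
    return sorted([-phi_max, alpha_bar, phi_max])[1]  # median = clamping

def H1(alpha, lam_u, lam_v):
    # alpha-dependent part of the Hamiltonian, H_1 up to the beta/r^2 factor.
    c = math.cos(alpha)
    return lam_u * c**3 + lam_v * math.sin(alpha) * c**2
```

The clamped stationary point agrees with a brute-force grid minimisation of $\mathscr H_1$ over $[-\phi_{max},\phi_{max}]$, including the endpoint cases where $\bar{\alpha}$ falls outside the attitude bound.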
--- abstract: | It is known that an automorphism of $F_g$, the oriented closed surface of genus $g$, is extendable over the 4-sphere $S^4$ if and only if it has a bounding invariant spin structure [@WsWz]. We show that each automorphism of $F_g$ has an invariant spin structure, and obtain a stable extendability result: Each automorphism of $F_g$ is extendable over $S^4$ after a connected sum with the identity map on the torus. Then each automorphism of an oriented once punctured surface is extendable over $S^4$. For each $g\neq 4$, we construct a periodic map on $F_g$ that is not extendable over $S^4$, and we prove that every periodic map on $F_4$ is extendable over $S^4$, answering a question in [@WsWz]. We illustrate, for an automorphism $f$ of $F_g$, how to find its invariant spin structures, bounding or not; and once $f$ has a bounding invariant spin structure, how to construct an embedding $F_g\hookrightarrow S^4$ so that $f$ is extendable with respect to this embedding. address: - Morningside Center of Mathematics, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, 100190, China - School of Mathematical Sciences, Peking University, Beijing, 100871, China author: - Weibiao Wang - Zhongzi Wang title: Extendability over the $4$-sphere and invariant spin structures of surface automorphisms --- # Introduction {#sect:introduction} We will work in the category of oriented smooth manifolds. For an oriented compact manifold $M$, we use ${\rm Aut}(M)$ to denote the group of its automorphisms, i.e., orientation-preserving self-diffeomorphisms. Let $F_g$ be the oriented closed surface of genus $g\geq 1$, and let ${\rm MCG}(F_g)$ be the mapping class group of $F_g$, i.e., the group of smooth isotopy classes of automorphisms of $F_g$. For $f\in{\rm Aut}(F_g)$, denote its mapping class by $[f]$.
An element $f$ in ${\rm Aut}(F_g)$ is *extendable over $M$ with respect to an embedding $e:F_g\hookrightarrow M$* if there exists an element $\tilde f$ in ${\rm Aut}(M)$ such that $$\tilde f\circ e=e\circ f.$$ We call a surface map $f\in{\rm Aut}(F_g)$ *extendable over $M$* if $f$ is extendable over $M$ with respect to some embedding $e$, and call $[f]$ extendable if $f$ is extendable. For an automorphism of a general manifold $N$, we can similarly talk about its extendability over another manifold $M$. It is known that each $f\in{\rm Aut}(F_g)$ is extendable over $S^5$, thus is extendable over any $n$-dimensional manifold with $n\geq 5$ [@Hir]. On the other hand, the codimension-$1$ extendability is an active topic, see [@NWW] and the references therein. We will focus on the codimension-$2$ case. An embedding $e:F_g\hookrightarrow S^4$ is said to be *trivial* if $e(F_g)$ bounds an embedded handlebody. For a trivial embedding of the torus $F_1$, Montesinos showed in 1983 that $f\in{\rm Aut}(F_1)$ is extendable over $S^4$ with respect to it if and only if $f$ preserves the induced Rohlin form [@Mon]. In 2002 the same criterion for $F_g$ was proved by Hirose in a comprehensive work [@Hi1]. In 2012, it was proved that if $N$ is an $n$-dimensional manifold and $f\in{\rm Aut}(N)$ is extendable over $S^{n+2}$ with respect to $e: N\hookrightarrow S^{n+2}$, then $f$ preserves the spin structure induced by $e$ [@DLWY]. In the case of $F_g\hookrightarrow S^4$, the induced spin structure and the induced Rohlin form coincide [@WsWz Lemma 3.2]. Based on [@Hi1] and [@DLWY], as well as the fact that ${\rm Aut}(F_g)$ acts transitively on the set of bounding spin structures on $F_g$ [@Jo §5] (also in [@DLWY Lemma 4.2]), it was proved in 2021 that **Theorem 1**.
*[@WsWz Theorem 3.1] An automorphism $f$ of $F_g$ is extendable over $S^4$ if and only if $f$ has an invariant bounding spin structure on $F_g$.* Theorem [Theorem 1](#WW3.1){reference-type="ref" reference="WW3.1"} transfers a problem of geometric flavor to one of a more algebraic flavor (for details see Theorem [\[thm:extendability\]](#thm:extendability){reference-type="ref" reference="thm:extendability"}), in particular, when $f$ is given by a product of Dehn twists along standard generators of $H_1(F_g; \mathbb{Z}_2)$ (for details see Section [5](#sect:low-genus){reference-type="ref" reference="sect:low-genus"}). It is natural to ask whether every $f\in{\rm Aut}(F_g)$ has an invariant spin structure. The answer is YES. **Theorem 2**. *Every automorphism of $F_g$ has an invariant spin structure.* This was first proved by Atiyah [@Ati Proposition 5.2]. We will reprove it using quadratic forms, and indeed obtain a stronger result: The number of $f$-invariant spin structures equals $2^d$, where $d$ is the dimension of the kernel of $f_*-id: H_1(F_g;\mathbb{Z}_2)\to H_1(F_g;\mathbb{Z}_2)$ (Theorem [Theorem 9](#SIS){reference-type="ref" reference="SIS"}). Moreover, to check whether there is a bounding one among those $2^d$ elements, one needs only check $\frac{d^2+d+2}{2}$ elements among them (Proposition [Proposition 12](#prop:Arf){reference-type="ref" reference="prop:Arf"}). Based on Theorem [Theorem 1](#WW3.1){reference-type="ref" reference="WW3.1"} and Theorem [Theorem 2](#IS){reference-type="ref" reference="IS"}, we have the following stable extendability result. (The connected sum of two mapping classes will be defined in Section [4](#sect:applications){reference-type="ref" reference="sect:applications"}.) **Theorem 3**. *Let $I_T$ be the identity on the torus $T$. For any $[f]\in {\rm MCG}(F_g)$, the connected sum $[f]\# [I_T]\in {\rm MCG}(F_{g+1})$ is extendable over $S^4$.* Let $F_{g,k}$ be the compact oriented surface of genus $g$ and with $k$ boundary components.
**Theorem 4**. *Every automorphism of $F_{g,k}$ is extendable over $S^4$ when $k$ is odd.* It is an application of Theorem [Theorem 1](#WW3.1){reference-type="ref" reference="WW3.1"} that for $g\geq 1$, a Wiman map on $F_g$ (i.e., a periodic map of the maximum order $4g+2$) is extendable over $S^4$ if and only if $g\equiv 0\text{ or }3\,({\rm mod}\,4)$ [@WsWz Theorem 1.2 (1)]. Note that for each $g\ge 1$, there exists a mapping class on $F_g$ that is not extendable over $S^4$ [@DLWY Corollary 1.4], but an explicit such map was previously unknown for $g>1$. The following question is raised in [@WsWz Remark 1 (4)]. *Question 5*. For each $g\geq 1$, is there a periodic automorphism of $F_g$ that is not extendable over $S^4$? We give a complete answer: **Theorem 6**. *Suppose that $g\geq 1$ and $g\neq 4$; then there exists a periodic map on $F_g$ that is not extendable over $S^4$. Indeed the map can be of period $8$ except when $g=1$. In contrast, every periodic map on $F_4$ is extendable over $S^4$.* This paper is organized as follows. In Section [2](#sect:quadratic){reference-type="ref" reference="sect:quadratic"} we introduce the basic tools, spin structures and quadratic forms on surfaces. In Section [3](#sect:IS){reference-type="ref" reference="sect:IS"} we prove Theorem [Theorem 2](#IS){reference-type="ref" reference="IS"} and get more information about the invariant spin structures. In Section [4](#sect:applications){reference-type="ref" reference="sect:applications"} we introduce a method of constructing surface maps on connected sums, and then prove Theorem [Theorem 3](#stable){reference-type="ref" reference="stable"} and Theorem [Theorem 4](#puncture){reference-type="ref" reference="puncture"}. We also provide explicit examples, from which the first half of Theorem [Theorem 6](#non-extendable){reference-type="ref" reference="non-extendable"} follows.
The detailed calculations needed in the proof of Theorem [Theorem 6](#non-extendable){reference-type="ref" reference="non-extendable"} are left to Section [5](#sect:low-genus){reference-type="ref" reference="sect:low-genus"}, which also provides a general way to find invariant spin structures for an automorphism $f$ of $F_g$, bounding or not, especially when $f$ is given by a product of Dehn twists along standard generators of $H_1(F_g; \mathbb{Z}_2)$. In Section [6](#sect:embedding){reference-type="ref" reference="sect:embedding"}, we illustrate, once $f$ has a bounding invariant spin structure $\sigma$ (i.e., $f$ is extendable over $S^4$), how to construct an embedding $e: F_g\hookrightarrow S^4$ so that the induced spin structure is $\sigma$ (i.e., $f$ is extendable with respect to $e$). *Remark 7*. Let $\mathcal{T}_g$ be the finite index subgroup of ${\rm MCG}(F_g)$ which acts trivially on $H_1(F_g;\mathbb{Z}_2)$. Then for any $f\in {\rm Aut}(F_g)$ and $[h]\in \mathcal{T}_g$, $h \circ f$ is extendable over $S^4$ if and only if $f$ is extendable over $S^4$. Each $[h]\in \mathcal{T}_g$ is extendable over $S^4$ with respect to any trivial embedding $F_g\hookrightarrow S^4$. # Spin structures and quadratic forms on surfaces {#sect:quadratic} Spin structures on surfaces form a subject with many applications as well as many faces. One may refer to [@Ki Chapter IV and Appendix] for an introduction. Take the first homology group $H_1(F_g;\mathbb{Z}_2)$ of $F_g$ with coefficients in $\mathbb{Z}_2$. For $x,y\in H_1(F_g;\mathbb{Z}_2)$, denote by $x\cdot y$ their mod-2 intersection number. A *quadratic form* on $H_1(F_g;\mathbb{Z}_2)$ is a function $q:H_1(F_g;\mathbb{Z}_2)\to\mathbb{Z}_2$ satisfying $$q(x+y)=q(x)+q(y)+x\cdot y,\, \forall\,x,y\in H_1(F_g;\mathbb{Z}_2).$$ For convenience, we use $\mathcal{Q}(F_g)$ to denote the set of quadratic forms on $H_1(F_g;\mathbb{Z}_2)$.
An element $q\in\mathcal{Q}(F_g)$ is determined by its values on a basis of $H_1(F_g;\mathbb{Z}_2)$, so $\mathcal{Q}(F_g)$ is isomorphic to $H^1(F_g;\mathbb{Z}_2)$ as a set. Suppose that $\{a_1,b_1,\cdots,a_g,b_g\}$ is a *symplectic basis* of $H_1(F_g;\mathbb{Z}_2)$ with respect to the mod-$2$ intersection form, i.e., $a_i\cdot b_j=1$ if and only if $i=j$, and $a_i\cdot a_j=b_i\cdot b_j=0$ for any $i,j$. See Fig. [\[fig:symplectic\]](#fig:symplectic){reference-type="ref" reference="fig:symplectic"} for a typical example. The *Arf invariant* of a quadratic form $q$ on $H_1(F_g;\mathbb{Z}_2)$ is defined by $${\rm Arf}(q)=\sum_{i=1}^{g}q(a_i)q(b_i)\in\mathbb{Z}_2,$$ which is independent of the choice of the symplectic basis. Spin structures on $F_g$ are in 1-1 correspondence with quadratic forms on $H_1(F_g;\mathbb{Z}_2)$, and bounding spin structures correspond exactly to those quadratic forms with Arf invariant $0$ (cf. [@Ki p. 36]). For $f\in{\rm Aut}(F_g)$ and $q\in\mathcal{Q}(F_g)$, naturally we have the pull-back quadratic form $f^*q\in\mathcal{Q}(F_g)$: $$f^*q(x)=q(f_*(x)),\,\forall\,x\in H_1(F_g;\mathbb{Z}_2).$$ Now we restate Theorem [Theorem 1](#WW3.1){reference-type="ref" reference="WW3.1"}. **Theorem 8**. *[@WsWz Theorem 3.1][\[thm:extendability\]]{#thm:extendability label="thm:extendability"} For any $f\in{\rm Aut}(F_g)$, $f$ is extendable over $S^4$ if and only if there exists a quadratic form $q$ on $H_1(F_g;\mathbb{Z}_2)$ such that ${\rm Arf}(q)=0$ and $f^*q=q$.* According to the above theorem, the extendability over $S^4$ of $f\in{\rm Aut}(F_g)$ is totally determined by the induced isomorphism $f_*$ on $H_1(F_g;\mathbb{Z}_2)$. Indeed, once $f_*$ is given, there is an algorithm to determine whether $f$ is extendable over $S^4$, see Section [5](#sect:low-genus){reference-type="ref" reference="sect:low-genus"}.
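Such an algorithm can be realised by brute force over $\mathbb{Z}_2$, which is feasible for small genus: a quadratic form is stored via its values on a symplectic basis $a_1,b_1,\dots,a_g,b_g$, and $f_*$ as a mod-$2$ matrix assumed to preserve the intersection form (as the induced map of any automorphism does). The function names below are illustrative:

```python
from itertools import product

def q_value(qb, x, g):
    # q(sum x_i e_i) = sum x_i q(e_i) + sum_{i<j} x_i x_j (e_i . e_j) mod 2,
    # in the ordered symplectic basis a_1, b_1, ..., a_g, b_g.
    n = 2 * g
    val = sum(x[i] * qb[i] for i in range(n))
    for i in range(0, n, 2):          # the only nonzero pairings: a_k . b_k = 1
        val += x[i] * x[i + 1]
    return val % 2

def arf(qb, g):
    # Arf(q) = sum q(a_i) q(b_i) mod 2.
    return sum(qb[2 * i] * qb[2 * i + 1] for i in range(g)) % 2

def invariant_forms(fstar, g):
    # All quadratic forms with f^* q = q; it suffices to test the basis,
    # since q o f_* is again a quadratic form when f_* is symplectic.
    n = 2 * g
    cols = [tuple(fstar[r][i] % 2 for r in range(n)) for i in range(n)]
    return [qb for qb in product((0, 1), repeat=n)
            if all(q_value(qb, cols[i], g) == qb[i] for i in range(n))]

def extendable_over_S4(fstar, g):
    # Theorem 8: extendable iff some invariant form has Arf invariant 0.
    return any(arf(qb, g) == 0 for qb in invariant_forms(fstar, g))
```

For instance, for the genus-one map with $f_*(a_1)=a_1$, $f_*(b_1)=a_1+b_1$ (induced by a Dehn twist), exactly $2=2^{\dim\ker(f_*-id)}$ invariant forms are found, one of which has Arf invariant $0$, so such a map is extendable over $S^4$ by Theorem [\[thm:extendability\]](#thm:extendability){reference-type="ref" reference="thm:extendability"}.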
# Every surface automorphism has an invariant spin structure {#sect:IS} We will prove the following stronger version of Theorem [Theorem 2](#IS){reference-type="ref" reference="IS"}. **Theorem 9**. *For every $f\in {\rm Aut}(F_g)$ with $f_*: H_1(F_g;\mathbb{Z}_2)\rightarrow H_1(F_g;\mathbb{Z}_2)$,* *(1) there exists a quadratic form $q$ on $H_1(F_g;\mathbb{Z}_2)$ such that $f^*q=q$;* *(2) the number of $f$-invariant spin structures equals $$|ker(f_*-id)|=2^{{\rm dim}ker(f_*-id)}.$$* Logically, (1) follows from (2); but to prove (2), we first need to prove (1). For any $\mathbb{Z}_2$-linear map $\xi:H_1(F_g;\mathbb{Z}_2)\rightarrow \mathbb{Z}_2$ and any quadratic forms $q,q'$ on $H_1(F_g;\mathbb{Z}_2)$, we denote by $\xi+q$ the function defined by $x\mapsto \xi(x)+q(x)$ for all $x\in H_1(F_g;\mathbb{Z}_2)$, and by $q'-q$ the map defined by $x\mapsto q'(x)-q(x)$ for all $x\in H_1(F_g;\mathbb{Z}_2)$. **Lemma 10**. *[@Jo Lemma 2][\[lem:linear\]]{#lem:linear label="lem:linear"} With the above assumption, $\xi+q$ is a quadratic form on $H_1(F_g;\mathbb{Z}_2)$ and $q'-q$ is $\mathbb{Z}_2$-linear.* We also need the following known fact from algebra. **Lemma 11**. *Suppose $V,W$ are $\mathbb{Z}_2$-vector spaces, and $\psi:V\rightarrow \mathbb{Z}_2$ and $\phi:V\rightarrow W$ are $\mathbb{Z}_2$-linear. Then there exists a $\mathbb{Z}_2$-linear map $\xi:W\rightarrow \mathbb{Z}_2$ such that $\xi\circ\phi=\psi$ if and only if $ker\phi\subseteq ker\psi$.* *Proof of Theorem [Theorem 9](#SIS){reference-type="ref" reference="SIS"}.* (1) Pick an arbitrary quadratic form $q_0\in \mathcal{Q}(F_g)$ and fix it. For $q\in\mathcal{Q}(F_g)$, let $\xi=q-q_0$ be the $\mathbb{Z}_2$-linear map such that $q=\xi+q_0$. Transform the equation $f^*q=q$: $$\begin{split} f^*q=q & \Leftrightarrow f^*(\xi+q_0)=\xi+q_0\\ &\Leftrightarrow f^*\xi+f^*q_0=\xi+q_0\\ & \Leftrightarrow f^*\xi-\xi=q_0-f^*q_0\\ & \Leftrightarrow \xi\circ(f_*-id)=q_0-f^*q_0.
\end{split}$$ By Lemma [\[lem:linear\]](#lem:linear){reference-type="ref" reference="lem:linear"}, $q_0-f^*q_0$ is a $\mathbb{Z}_2$-linear map. Then by Lemma [Lemma 11](#lem:factor){reference-type="ref" reference="lem:factor"}, we only need to verify $ker(f_*-id)\subseteq ker(q_0-f^*q_0)$. It is straightforward: for any $x\in ker(f_*-id)$, $f_*(x)=x$, so $$(q_0-f^*q_0)(x)=q_0(x)-q_0(f_*(x))=q_0(x)-q_0(x)=0.$$ \(2\) By (1) we may fix $q_0\in\mathcal{Q}(F_g)$ with $f^*q_0=q_0$. Then $$\begin{split} f^*q=q & \Leftrightarrow \xi\circ(f_*-id)=q_0-f^*q_0 \\ & \Leftrightarrow \xi|_{Im(f_*-id)}=0. \end{split}$$ Let $W$ be a direct sum complement of the image $Im(f_*-id)$ in $H_1(F_g;\mathbb{Z}_2)$. Then the function $$q\mapsto \xi|_W,$$ where $\xi=q-q_0$ as above, defines a bijection from the set of $f$-invariant quadratic forms to the hom-space ${\rm Hom}(W,\mathbb{Z}_2)$. So the number of $f$-invariant quadratic forms equals the cardinality of ${\rm Hom}(W,\mathbb{Z}_2)$. Note that $${\rm Hom}(W,\mathbb{Z}_2)\cong W$$ as $\mathbb{Z}_2$-vector spaces, so we have $$\begin{split} {\rm dim}\,{\rm Hom}(W,\mathbb{Z}_2)={\rm dim}W &={\rm dim}H_1(F_g;\mathbb{Z}_2)-{\rm dim}Im(f_*-id)\\ &={\rm dim}ker(f_*-id). \end{split}$$ Thus $|{\rm Hom}(W, \mathbb Z_2)|=|ker(f_*-id)|$, and the theorem follows. ◻ **Proposition 12**. *Suppose that $f$ is an automorphism of $F_g$ and $q_0$ is an $f$-invariant quadratic form on $H_1(F_g;\mathbb{Z}_2)$.
Let $\{\xi_1,\xi_2,\cdots,\xi_d\}$ be a basis of the $\mathbb{Z}_2$-linear space $$\{\xi:H_1(F_g;\mathbb{Z}_2)\to\mathbb{Z}_2\,|\,\xi\circ(f_*-id)=0\}.$$ Then there exists an $f$-invariant quadratic form with Arf invariant $0$ if and only if $0$ belongs to the set $$\{{\rm Arf}(q_0),{\rm Arf}(q_0+\xi_i),{\rm Arf}(q_0+\xi_i+\xi_j)\,|\,i,j=1,2,\cdots,d,i\neq j\}.$$* *Proof.* By the proof of Theorem [Theorem 9](#SIS){reference-type="ref" reference="SIS"} (2), $q\in\mathcal{Q}(F_g)$ is $f$-invariant if and only if $q-q_0$ belongs to $$\{\xi:H_1(F_g;\mathbb{Z}_2)\to\mathbb{Z}_2\,|\,\xi\circ(f_*-id)=0\},$$ i.e., $q$ has the form $$q_0+\sum_{i=1}^{d}t_i\xi_i,\,t_1,t_2,\cdots,t_d\in\mathbb{Z}_2.$$ So it suffices to prove the "only if" part. Take a symplectic basis $\{a_1,b_1,\cdots,a_g,b_g\}$ of $H_1(F_g;\mathbb{Z}_2)$. Expanding $$\begin{split} {\rm Arf}\left(q_0+\sum_{i=1}^{d}t_i\xi_i\right) & =\sum_{j=1}^{g}\left(q_0+\sum_{i=1}^{d}t_i\xi_i\right)(a_j)\cdot\left(q_0+\sum_{i=1}^{d}t_i\xi_i\right)(b_j)\\ &=\sum_{j=1}^{g}\left(q_0(a_j)+\sum_{i=1}^{d}t_i\xi_i(a_j)\right)\left(q_0(b_j)+\sum_{i=1}^{d}t_i\xi_i(b_j)\right), \end{split}$$ we see that it can be expressed as a polynomial in $t_1,t_2,\cdots,t_d$: $${\rm Arf}\left(q_0+\sum_{i=1}^{d}t_i\xi_i\right) =A+\sum_{i=1}^{d}B_it_i+\sum_{1\leq i<j\leq d}C_{ij}t_it_j.$$ The coefficients $A,B_i,C_{ij}$ can be determined as follows: - Take $t_i=0$, $\forall i$, then we see $A={\rm Arf}(q_0);$ - Fix $i$ and take $t_i=1,t_j=0,\forall j\neq i$, then we see $B_i={\rm Arf}(q_0+\xi_i)-A$; - Fix $i<j$ and take $t_i=t_j=1,t_k=0,\forall k\neq i,j$, then we see $$C_{ij}={\rm Arf}(q_0+\xi_i+\xi_j)-A-B_i-B_j.$$ If $0$ does not belong to the set $$\{{\rm Arf}(q_0),{\rm Arf}(q_0+\xi_i),{\rm Arf}(q_0+\xi_i+\xi_j)\,|\,i,j=1,2,\cdots,d,i\neq j\},$$ then $A=1$ and $B_i=C_{ij}=0$, which makes the Arf invariant of every $f$-invariant quadratic form $q_0+\sum_{i=1}^{d}t_i\xi_i$ identically equal to $1$. This completes our proof. ◻ **Corollary 13**.
*Suppose that $f$ is an automorphism of $F_g$ and $f_*-id:H_1(F_g;\mathbb{Z}_2)\rightarrow H_1(F_g;\mathbb{Z}_2)$ is bijective. Then there exists a unique $f$-invariant quadratic form $q$ on $H_1(F_g;\mathbb{Z}_2)$. Moreover, $q$ is explicitly given by $$q(x)=x\cdot (f_*-id)^{-1}(x),\,\forall\,x\in H_1(F_g;\mathbb{Z}_2).$$* *Proof.* The existence and uniqueness follow immediately from Theorem [Theorem 9](#SIS){reference-type="ref" reference="SIS"}, so we only need to prove the "moreover" part. For any $x\in H_1(F_g;\mathbb{Z}_2)$, as $f_*-id$ is bijective, we can take $y$ with $x=(f_*-id)(y)=f_*(y)-y.$ For the $f$-invariant quadratic form $q$, we have $q(f_*(y))=q(y)$; moreover signs are immaterial mod $2$, and $y\cdot y=0$, so $$\begin{split} q(x)=q(f_*(y)-y)=q(f_*(y))+q(y)+f_*(y)\cdot y=f_*(y)\cdot y \\ =f_*(y)\cdot y-y\cdot y=(f_*(y)-y)\cdot y=x\cdot (f_*-id)^{-1}(x). \end{split}$$ ◻ # Applications to extending surface automorphisms over $S^4$ {#sect:applications} For any two mapping classes $[f_1]\in{\rm MCG}(F_{g_1}), [f_2]\in{\rm MCG}(F_{g_2})$, we can always choose representatives $f_1$ and $f_2$ such that each $f_i\,(i=1,2)$ coincides with the identity map on a closed disk $D_i$ in $F_{g_i}$. When we glue $F_{g_1}-{\rm Int}(D_1)$ with $F_{g_2}-{\rm Int}(D_2)$ along their oriented boundaries via an orientation-reversing homeomorphism, $f_1|_{F_{g_1}-{\rm Int}(D_1)}$ and $f_2|_{F_{g_2}-{\rm Int}(D_2)}$ together provide an automorphism $f$ of $F_{g_1+g_2}=F_{g_1}\# F_{g_2}$. We call $[f]$ a *connected sum* of $[f_1]$ and $[f_2]$, denoted by $[f]=[f_1]\#[f_2]$. Similarly we can construct a connected sum of finitely many mapping classes on surfaces. **Proposition 14**.
*Suppose that $g=g_1+g_2+\cdots +g_k\,(k>0)$, and $[f]\in{\rm MCG}(F_g)$ is a connected sum of $[f_i]\in{\rm MCG}(F_{g_i})\,(i=1,2,\cdots,k)$.* *(1) If some $f_i$ has both bounding and unbounding invariant spin structures, then $f$ is extendable over $S^4$.* *(2) Otherwise $f$ is extendable if and only if the number of those $f_i$ with only unbounding invariant spin structures is even.* *Proof.* (1) By Theorem [Theorem 2](#IS){reference-type="ref" reference="IS"}, each $f_i$ has an invariant quadratic form $q_i$ on $H_1(F_{g_i};\mathbb{Z}_2)$. They together give an $f$-invariant quadratic form $q$ on $$H_1(F_g;\mathbb{Z}_2)\cong\bigoplus_{i=1}^{k}H_1(F_{g_i};\mathbb{Z}_2),$$ which restricts to $q_i$ on $H_1(F_{g_i};\mathbb{Z}_2)$. Without loss of generality, assume that $f_1$ has both bounding and unbounding invariant spin structures. Then we can choose an $f_1$-invariant quadratic form $q_1$ such that ${\rm Arf}(q_1)=\sum\limits_{i=2}^{k}{\rm Arf}(q_i)$, thus $${\rm Arf}(q)=\sum_{i=1}^{k}{\rm Arf}(q_i)=0.$$ By Theorem [\[thm:extendability\]](#thm:extendability){reference-type="ref" reference="thm:extendability"}, $f$ is extendable over $S^4$. \(2\) Otherwise, let $j$ be the number of $f_i$ with only unbounding invariant spin structures; then the number of $f_i$ with only bounding invariant spin structures is $k-j$. Note that $f$ acts on $$F_g=F_{g_1}\# F_{g_2}\#\cdots\# F_{g_k}$$ as $f_i$ on each $F_{g_i}$ part, so any $f$-invariant quadratic form $q$ induces an $f_i$-invariant quadratic form $q_i$ on $H_1(F_{g_i};\mathbb{Z}_2)$. It follows that $${\rm Arf}(q)=\sum_{i=1}^{k}{\rm Arf}(q_i)=j\times 1 + (k-j)\times 0=j.$$ By Theorem [\[thm:extendability\]](#thm:extendability){reference-type="ref" reference="thm:extendability"}, $f$ is extendable over $S^4$ if and only if $j$ is even.
◻ *Proof of Theorem [Theorem 3](#stable){reference-type="ref" reference="stable"}.* According to [@Ati Theorem 3'], the number of bounding spin structures on $F_g$ is $2^{g-1}(2^g+1)$, while the number of unbounding spin structures is $2^{g-1}(2^g-1)$. In particular, there are three bounding spin structures and one unbounding spin structure on the torus $T$. Clearly the identity map $I_T$ keeps each spin structure on $T$ invariant, so it has both bounding and unbounding invariant spin structures. Then the conclusion follows from Proposition [Proposition 14](#prop:connectedsum){reference-type="ref" reference="prop:connectedsum"} (1). ◻ Theorem [Theorem 4](#puncture){reference-type="ref" reference="puncture"} can be stated in a stronger version as follows. **Proposition 15**. *Suppose $f$ is an automorphism of $F_{g, k}$ and there exists a boundary component whose $f$-orbit is of odd length. Then $f$ is extendable over $S^4$.* *Proof.* Suppose the $k$ boundary components of $F_{g,k}$ form $m$ $f$-orbits, denoted by $$\{b_{i,j}:j=1,2,\cdots,k_i\},\,i=1,2,\cdots,m$$ with $b_{i,j+1}=f^j(b_{i,1})\,(1\leq j<k_i)$ and $f^{k_i}(b_{i,1})=b_{i,1}$. By isotopy, we may assume that the $f$-action on the union of any orbit $$\bigcup_{j=1}^{k_i}b_{i,j}\,(i=1,2,\cdots,m)$$ is periodic, i.e., $f^{k_i}$ coincides with the identity on it. Take $k$ copies $X_{i,j}\,(1\leq i\leq m,1\leq j\leq k_i)$ of $F_{1,1}$. Glue them to the corresponding $b_{i,j}$ to obtain a closed surface $F_{g+k}$. The gluing can be made $f$-equivariant, so $f$ extends to an automorphism $\hat{f}$ of $F_{g+k}$ which acts on each $$\bigcup_{j=1}^{k_i}X_{i,j}\,(i=1,2,\cdots,m)$$ periodically.
For any $\hat{f}$-invariant quadratic form $\hat{q}$ on $$H_1(F_{g+k};\mathbb{Z}_2)\cong H_1(F_{g,k};\mathbb{Z}_2)\oplus\left(\bigoplus_{i=1}^{m}\bigoplus_{j=1}^{k_i}H_1(X_{i,j};\mathbb{Z}_2)\right),$$ $\hat{q}$ induces invariant quadratic forms $q$ on $H_1(F_{g,k};\mathbb{Z}_2)$ and $q_i$ on $\bigoplus\limits_{j=1}^{k_i}H_1(X_{i,j};\mathbb{Z}_2)$. Assume $k_1$ is odd. Fix invariant quadratic forms $q$ and $q_2,q_3,\cdots,q_m$. Then choose a quadratic form $q_{1,1}$ on $H_1(X_{1,1};\mathbb{Z}_2)$ such that $${\rm Arf}(q_{1,1})={\rm Arf}(q)+\sum_{i=2}^{m}{\rm Arf}(q_i).$$ Let $q_{1,j}\,(j=1,\cdots,k_1)$ be the pull-back $(\hat{f}^{1-j})^*q_{1,1}$ on $H_1(X_{1,j};\mathbb{Z}_2)$. Then $q_{1,1},q_{1,2},\cdots,q_{1,k_1}$ and $q,q_2,q_3,\cdots,q_m$ together provide an $\hat{f}$-invariant quadratic form $\hat{q}'$ on $H_1(F_{g+k};\mathbb{Z}_2)$. As ${\rm Arf}(q_{1,j})={\rm Arf}(q_{1,1})$ and $k_1$ is odd, we have $$\begin{split} {\rm Arf}(\hat{q}')& =\sum_{j=1}^{k_1}{\rm Arf}(q_{1,j})+{\rm Arf}(q)+\sum_{i=2}^{m}{\rm Arf}(q_i)\\ & =k_1\times {\rm Arf}(q_{1,1})+{\rm Arf}(q)+\sum_{i=2}^{m}{\rm Arf}(q_i)=0. \end{split}$$ By Theorem [\[thm:extendability\]](#thm:extendability){reference-type="ref" reference="thm:extendability"}, $\hat{f}$ is extendable over $S^4$, thus so is $f$. ◻ Below we focus on periodic automorphisms of surfaces. Suppose that $f_1\in{\rm Aut}(F_{g_1}),f_2\in{\rm Aut}(F_{g_2})$ are periodic maps of the same order $n$ and each has an isolated fixed point. For each $i=1,2$, let $x_i\in F_{g_i}$ be an isolated fixed point of $f_i$, and take an $f_i$-invariant closed neighborhood $U_i\subset F_{g_i}$ with $\varphi_i:U_i\xrightarrow{\cong}D^2$, such that $\varphi_i\circ f_i\circ\varphi_i^{-1}$ is an order-$n$ rotation of the standard closed unit disk $D^2$ (Fig. [\[fig:periodic-connected-sum\]](#fig:periodic-connected-sum){reference-type="ref" reference="fig:periodic-connected-sum"}).
By replacing $f_2$ with some power of it, we may assume that $\varphi_1\circ f_1\circ\varphi_1^{-1}=\varphi_2\circ f_2\circ\varphi_2^{-1}$. Glue $F_{g_1}-{\rm Int}(U_1)$ with $F_{g_2}-{\rm Int}(U_2)$ by identifying every point $x\in\partial U_1$ with $\varphi_2^{-1}\circ\varphi_1(x)\in\partial U_2$. We obtain $F_{g_1+g_2}\cong F_{g_1}\#\overline{F_{g_2}}$, where $\overline{F_{g_2}}$ is $F_{g_2}$ with the orientation reversed. Then there is a map $f$ of period $n$ on $F_{g_1+g_2}$, which coincides with $f_i$ on $F_{g_i}-{\rm Int}(U_i)\,(i=1,2)$. We call $f$ a *periodic connected sum* of $f_1,f_2$. Similarly, we can construct a periodic connected sum of several periodic maps if they are of the same period and have enough fixed points for connecting. With the same argument as for Proposition [Proposition 14](#prop:connectedsum){reference-type="ref" reference="prop:connectedsum"}, we have the following conclusion. **Proposition 16**. *Suppose that $g=g_1+g_2+\cdots+g_k\,(k>0)$, and $f\in{\rm Aut}(F_g)$ is a periodic connected sum of $f_i\in{\rm Aut}(F_{g_i})\,(i=1,2,\cdots,k)$.* *(1) If some $f_i$ has both bounding and unbounding invariant spin structures, then $f$ is extendable over $S^4$.* *(2) Otherwise $f$ is extendable if and only if the number of those $f_i$ with only unbounding invariant spin structures is even.* We now prove the first half of Theorem [Theorem 6](#non-extendable){reference-type="ref" reference="non-extendable"}. According to [@WsWz Theorem 1.2 (1)], a Wiman map $w_1$ on $F_1$ of period $6$ is an example that is not extendable. So we only need to prove the following proposition. **Proposition 17**.
*Suppose that $g\geq 2$ and $g\neq 4$. Then there exists a map of period $8$ on $F_g$ that is not extendable over $S^4$.* The proof of Proposition [Proposition 17](#prop:period8){reference-type="ref" reference="prop:period8"} relies on Proposition [Proposition 16](#prop:periodicconnectedsum){reference-type="ref" reference="prop:periodicconnectedsum"} (2), as well as the constructions based on Example [Example 18](#ex:f233){reference-type="ref" reference="ex:f233"} and Lemma [Lemma 19](#lem:f233){reference-type="ref" reference="lem:f233"} below. **Example 18**. (1) The surface $F_2$ can be obtained by gluing a regular octagon with each pair of opposite edges identified as in Fig. [\[fig:f_22\]](#fig:f_22){reference-type="ref" reference="fig:f_22"}. The $\frac{\pi}{4}$-rotation of the octagon induces a map of period $8$ on $F_2$, denoted by $f_2$. \(2\) The surface $F_3$ can be obtained by gluing the edges of one regular octagon and two squares according to the labels shown in Fig. [\[fig:f_37\]](#fig:f_37){reference-type="ref" reference="fig:f_37"}. The $\frac{\pi}{4}$-rotation of the octagon induces a periodic map on the union of the squares, and then induces a map of period $8$ on $F_3$, denoted by $f_3$. \(3\) The surface $F_3$ can also be obtained as in Fig. [\[fig:f_33\]](#fig:f_33){reference-type="ref" reference="fig:f_33"}. Similarly, the $\frac{\pi}{4}$-rotation of the octagon induces a map of period $8$ on $F_3$, denoted by $f_3'$. Each map above has two isolated fixed points, one corresponding to the center of the octagon, and the other corresponding to the vertices. **Lemma 19**. *(1) $f_2$ has only unbounding invariant spin structures;* *(2) $f_3$ has only unbounding invariant spin structures;* *(3) $f_3'$ has only bounding invariant spin structures.* *Proof.* (1) follows from [@WsWz Proposition 5.2 (2)].
(2) and (3) will be proved in Lemma [Lemma 22](#lem:3337){reference-type="ref" reference="lem:3337"} and Lemma [Lemma 23](#lem:periodic){reference-type="ref" reference="lem:periodic"}. ◻ *Proof of Proposition [Proposition 17](#prop:period8){reference-type="ref" reference="prop:period8"}.* Since $f_2$, $f_3$ and $f'_3$ are all of period $8$ and each has two isolated fixed points, we can form periodic connected sums of them to construct periodic maps $h_g$ of order 8 on $F_g$ for $g\ge 2$, $g\ne 4$, which are not extendable over $S^4$. The constructions are as follows: \(1\) If $g\equiv 0\,({\rm mod}\,4)$ and $g\geq 8$, let $h_g\in{\rm Aut}(F_g)$ be a periodic connected sum of $$f_3,f_3,\underbrace{f_2,f_2,\cdots,f_2}_{(g-6)/2}.$$ \(2\) If $g\equiv 1\,({\rm mod}\,4)$ and $g\geq 9$, let $h_g\in{\rm Aut}(F_g)$ be a periodic connected sum of $$f_3,f_3,f_3,\underbrace{f_2,f_2,\cdots,f_2}_{(g-9)/2}.$$ \(3\) If $g\equiv 2\,({\rm mod}\,4)$, let $h_g\in{\rm Aut}(F_g)$ be a periodic connected sum of $$\underbrace{f_2,f_2,\cdots,f_2}_{g/2}.$$ \(4\) If $g\equiv 3\,({\rm mod}\,4)$, let $h_g\in{\rm Aut}(F_g)$ be a periodic connected sum of $$f_3,\underbrace{f_2,f_2,\cdots,f_2}_{(g-3)/2}.$$ \(5\) If $g=5$, let $h_g\in{\rm Aut}(F_5)$ be a periodic connected sum of $f_2$ and $f'_3$. In each case, the total number of copies of $f_2$ and $f_3$ in the connected sum of $h_g$ is odd. By Lemma [Lemma 19](#lem:f233){reference-type="ref" reference="lem:f233"} and Proposition [Proposition 16](#prop:periodicconnectedsum){reference-type="ref" reference="prop:periodicconnectedsum"}, $h_g$ is not extendable over $S^4$. ◻ **Example 20**. A Wiman map $w_1$ on the torus is not extendable over $S^4$ and thus has only an unbounding invariant spin structure. So a connected sum of $w_1$ and $f_3'\in{\rm Aut}(F_3)$ is an example for $F_4$ that is not extendable.
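The extendability criterion of Theorem 8 reduces to finite linear algebra over $\mathbb{Z}_2$, and for small genus it can simply be brute-forced. A minimal sketch, assuming (purely for illustration) a chain basis $c_1,\cdots,c_6$ of $H_1(F_3;\mathbb{Z}_2)$ with $c_i\cdot c_j=1$ iff $|i-j|=1$, and an action sending $c_i\mapsto c_{i+1}$ ($i<6$) and $c_6\mapsto c_1+c_3+c_5$; any other matrix over $\mathbb{Z}_2$ could be substituted:

```python
from itertools import product

g, n = 3, 6
# Mod-2 intersection form in the chain basis c_1,...,c_6 (c_i . c_j = 1 iff |i-j| = 1)
B = [[1 if abs(i - j) == 1 else 0 for j in range(n)] for i in range(n)]
# Hypothetical action on homology: c_i -> c_{i+1} (i < 6), c_6 -> c_1 + c_3 + c_5
F = [[1 if j == i + 1 else 0 for j in range(n)] for i in range(n - 1)]
F.append([1, 0, 1, 0, 1, 0])

def q_eval(q, x):
    # q(sum t_i c_i) = sum t_i q(c_i) + sum_{i<j} t_i t_j (c_i . c_j)  (mod 2)
    v = sum(q[i] * x[i] for i in range(n))
    v += sum(x[i] * x[j] * B[i][j] for i in range(n) for j in range(i + 1, n))
    return v % 2

def arf(q):
    # In this basis: Arf(q) = sum_{i=1}^3 q(c_{2i}) * sum_{j<=i} q(c_{2j-1})  (mod 2)
    return sum(q[2 * i + 1] * sum(q[2 * j] for j in range(i + 1)) for i in range(g)) % 2

# Invariance of a quadratic form only needs to be checked on the basis
invariant = [q for q in product((0, 1), repeat=n)
             if all(q[i] == q_eval(q, F[i]) for i in range(n))]
arfs = [arf(q) for q in invariant]
```

This sketch finds exactly two invariant forms, matching the count $2^{\dim\ker(f_*-id)}$ of Theorem 9, and both have Arf invariant $0$, so by Theorem 8 such a map would be extendable over $S^4$.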
# Invariant quadratic forms of torsion mapping classes on low genus surfaces {#sect:low-genus} According to Theorem [\[thm:extendability\]](#thm:extendability){reference-type="ref" reference="thm:extendability"}, to tell whether $f\in {\rm Aut}(F_g)$ is extendable, we only need to: choose a basis $\{a_1,a_2,\cdots,a_{2g}\}$ of $H_1(F_g;\mathbb{Z}_2)$ and calculate $f_*(a_i)$ (say, $f_*(a_i)=\sum_{j=1}^{2g}t_{ij}a_j$, $t_{ij}\in\mathbb{Z}_2$), then solve the equation system $$\label{eq:5.0} q(a_i)=q(f_*(a_i)),\,i=1,2,\cdots,2g$$ for quadratic forms $q$ on $H_1(F_g;\mathbb{Z}_2)$, and finally check whether ${\rm Arf}(q)$ is $0$. Using the following lemma, which can be verified directly, we can transform the equation system [\[eq:5.0\]](#eq:5.0){reference-type="eqref" reference="eq:5.0"} for $q\in\mathcal{Q}(F_g)$ into a $\mathbb{Z}_2$-linear equation system for $q(a_1),q(a_2),\cdots,q(a_{2g})\in\mathbb{Z}_2$: $$q(a_i)=\sum_{j=1}^{2g}t_{ij}q(a_j)+\sum_{1\leq j<k\leq 2g}t_{ij}t_{ik}a_j\cdot a_k.$$ **Lemma 21**. *For $q\in\mathcal{Q}(F_g)$ and $x_1,x_2,\cdots,x_m\in H_1(F_g;\mathbb{Z}_2)$, we have $$q\left(\sum_{i=1}^{m}x_i\right)=\sum_{i=1}^{m}q(x_i)+\sum_{1\leq i<j\leq m}x_i\cdot x_j.$$* Once $f\in{\rm Aut}(F_g)$ is given as a product of Dehn twists, it is convenient to compute $f_*(a_i)$ and thus to decide whether $f$ is extendable over $S^4$. The famous Dehn-Lickorish theorem states that each mapping class on $F_g$ can be written as a product of Dehn twists. In [@Hi2], Hirose gave such presentations up to conjugacy for all torsion mapping classes on $F_g$ with $g\leq 4$. Our calculations in this section rely on his work. We use $T_c$ to denote the mapping class of a right Dehn twist along a simple closed curve $c$ on $F_g$. When writing a composition of mapping classes, we always compose from right to left, e.g., $T_{c}T_{c'}$ acts on $F_g$ as $T_{c'}$ first and then $T_{c}$. Take curves $c_1,\cdots,c_7$ and $d_1$ on $F_3$ as in Fig.
[\[fig:Dehn3\]](#fig:Dehn3){reference-type="ref" reference="fig:Dehn3"}. According to [@Hi2 Proposition 3.1, Theorem 3.2], the mapping classes $$T_{c_1}T_{c_2}T_{c_3}T_{c_4}T_{c_5}T_{c_6}T_{c_7}$$ and $$T_{d_1}T_{c_3}T_{c_4}T_{c_5}T_{c_2}T_{c_3}T_{c_4}T_{c_5}T_{c_6}$$ are of order 8 in ${\rm MCG}(F_3)$. Hirose denoted them by $f_{3,3}$ and $f_{3,7}$ respectively. **Lemma 22**. *(1) Each invariant quadratic form of $f_{3,3}$ has Arf invariant $0$.* *(2) Each invariant quadratic form of $f_{3,7}$ has Arf invariant $1$.* *Proof.* Abusing notation, for a simple closed curve $c$, we still denote by $c$ the $\mathbb{Z}_2$-homology class it represents. Since $\{c_1,c_2,\cdots,c_6\}$ is a basis of $H_1(F_3;\mathbb{Z}_2)$, a quadratic form $q\in\mathcal{Q}(F_3)$ is preserved by $f\in{\rm MCG}(F_3)$ if and only if the following equation system holds: $$\label{eq:5.1} q(c_i)=q(f(c_i)),\,i=1,2,\cdots,6.$$ \(1\) Recall that a Dehn twist $T_c\in {\rm MCG}(F_3)$ acts on $H_1(F_3;\mathbb{Z}_2)$ as $$T_c(c_i)=c_i+(c\cdot c_i)c,$$ and that the mod-2 intersection number of $c_i,c_j\,(i,j\in\{1,2,\cdots,7\})$ is $$c_i\cdot c_j= \begin{cases} 1,\text{ if }|i-j|=1;\\ 0,\text{ otherwise}.
\end{cases}$$ We can calculate $f_{3,3}(c_i)\in H_1(F_3;\mathbb{Z}_2)\,(i=1,2,\cdots,6)$ as follows: $$\setlength{\arraycolsep}{0.5pt} \begin{array}{llllllll} &T_{c_7} &T_{c_6} &T_{c_5} &T_{c_4} &T_{c_3} &T_{c_2} &T_{c_1}\\ c_1&\longrightarrow c_1&\longrightarrow c_1&\longrightarrow c_1&\longrightarrow c_1&\longrightarrow c_1&\longrightarrow c_1+c_2&\longrightarrow c_2\\ c_2&\longrightarrow c_2&\longrightarrow c_2&\longrightarrow c_2&\longrightarrow c_2&\longrightarrow c_2+c_3&\longrightarrow c_3&\longrightarrow c_3\\ c_3&\longrightarrow c_3&\longrightarrow c_3&\longrightarrow c_3&\longrightarrow c_3+c_4&\longrightarrow c_4&\longrightarrow c_4&\longrightarrow c_4\\ c_4&\longrightarrow c_4&\longrightarrow c_4&\longrightarrow c_4+c_5&\longrightarrow c_5&\longrightarrow c_5&\longrightarrow c_5&\longrightarrow c_5\\ c_5&\longrightarrow c_5&\longrightarrow c_5+c_6&\longrightarrow c_6&\longrightarrow c_6&\longrightarrow c_6&\longrightarrow c_6&\longrightarrow c_6\\ c_6&\longrightarrow c_6+c_7&\longrightarrow c_7&\longrightarrow c_7&\longrightarrow c_7&\longrightarrow c_7&\longrightarrow c_7&\longrightarrow c_7 \end{array}$$ In summary, $$f_{3,3}(c_i)=c_{i+1}\in H_1(F_3;\mathbb{Z}_2),\,i=1,2,\cdots,6.$$ Note that $$c_7=c_1+c_3+c_5\in H_1(F_3;\mathbb{Z}_2)$$ because the curves $c_1,c_3,c_5,c_7$ on $F_3$ together bound a subsurface. The equation system [\[eq:5.1\]](#eq:5.1){reference-type="eqref" reference="eq:5.1"} for $f_{3,3}$ now becomes $$\left\{ \begin{array}{l} q(c_1)=q(c_2), \\ q(c_2)=q(c_3), \\ q(c_3)=q(c_4), \\ q(c_4)=q(c_5), \\ q(c_5)=q(c_6), \\ q(c_6)=q(c_1)+q(c_3)+q(c_5). \end{array} \right.$$ It has two solutions: $$q(c_1)=q(c_2)=q(c_3)=q(c_4)=q(c_5)=q(c_6)=0;$$ or $$q(c_1)=q(c_2)=q(c_3)=q(c_4)=q(c_5)=q(c_6)=1.$$ That is to say, there are exactly two $f_{3,3}$-invariant quadratic forms on $H_1(F_3;\mathbb{Z}_2)$. Comparing Fig. [\[fig:Dehn3\]](#fig:Dehn3){reference-type="ref" reference="fig:Dehn3"} with Fig.
[\[fig:symplectic\]](#fig:symplectic){reference-type="ref" reference="fig:symplectic"}, we see that in $\mathbb{Z}_2$-homology, $$a_i=c_{2i},\,b_i=\sum_{j=1}^{i}c_{2j-1},\,i=1,2,3.$$ So for $q\in\mathcal{Q}(F_3)$, the Arf invariant is $${\rm Arf}(q)=\sum_{i=1}^{3}q(a_i)q(b_i)=\sum_{i=1}^{3}q(c_{2i})\sum_{j=1}^{i}q(c_{2j-1}).$$ It follows that each $f_{3,3}$-invariant quadratic form has Arf invariant $0$. \(2\) For $f_{3,7}=T_{d_1}T_{c_3}T_{c_4}T_{c_5}T_{c_2}T_{c_3}T_{c_4}T_{c_5}T_{c_6}$, with similar calculations we obtain the following results in $H_1(F_3;\mathbb{Z}_2)$ (noting that $d_1=c_1+c_3$): $$\begin{split} f_{3,7}(c_1) &=c_1+c_2+c_3,\\ f_{3,7}(c_2) &=c_4+d_1=c_1+c_3+c_4,\\ f_{3,7}(c_3) &=c_5,\\ f_{3,7}(c_4) &=c_3+c_4+c_5+d_1=c_1+c_4+c_5,\\ f_{3,7}(c_5) &=c_3+c_4+c_5+c_6+d_1=c_1+c_4+c_5+c_6,\\ f_{3,7}(c_6) &=c_2+c_3+c_4+c_5+c_6+d_1=c_1+c_2+c_4+c_5+c_6.\\ \end{split}$$ Using Lemma [Lemma 21](#lem:quadratic-sum){reference-type="ref" reference="lem:quadratic-sum"}, we see that the equation system [\[eq:5.1\]](#eq:5.1){reference-type="eqref" reference="eq:5.1"} for $q\in\mathcal{Q}(F_3)$ to be $f_{3,7}$-invariant is $$\left\{ \begin{array}{l} q(c_1)=q(c_1)+q(c_2)+q(c_3),\\ q(c_2)=q(c_1)+q(c_3)+q(c_4)+1,\\ q(c_3)=q(c_5),\\ q(c_4)=q(c_1)+q(c_4)+q(c_5)+1,\\ q(c_5)=q(c_1)+q(c_4)+q(c_5)+q(c_6),\\ q(c_6)=q(c_1)+q(c_2)+q(c_4)+q(c_5)+q(c_6)+1. \end{array} \right.$$ It also has two solutions: $$q(c_1)=0,\,q(c_2)=q(c_3)=q(c_4)=q(c_5)=1,\,q(c_6)=1;$$ or $$q(c_1)=1,\,q(c_2)=q(c_3)=q(c_4)=q(c_5)=0,\,q(c_6)=1.$$ Each has Arf invariant $1$. ◻ **Lemma 23**. *In ${\rm MCG}(F_3)$, $f_{3,3}$ is conjugate to $[f_{3}']$, and $f_{3,7}$ is conjugate to $[f_{3}]$, where $f_3,f_3'\in{\rm Aut}(F_3)$ are the maps of period $8$ in Example [Example 18](#ex:f233){reference-type="ref" reference="ex:f233"}.* To prove this, we first introduce Nielsen's classification of periodic maps on oriented closed surfaces. Suppose $f\in{\rm Aut}(F_g)$ is of period $n$.
Let $\Sigma_f$ be the set $$\{x\in F_g\,|\, \exists\,k,\text{ s.t. } 1\leq k<n,\,f^k(x)=x\}.$$ Then the quotient map $$F_g-\Sigma_f\to (F_g-\Sigma_f)/f$$ is a covering of surfaces, corresponding to an epimorphism $$\pi_1((F_g-\Sigma_f)/f)\twoheadrightarrow\langle f\rangle.$$ Identify $\langle f\rangle$ with the additive group $\mathbb{Z}_n$ by identifying $f$ with $1$; then the epimorphism factors through homology as $$\psi_f: H_1((F_g-\Sigma_f)/f;\mathbb{Z})\twoheadrightarrow\mathbb{Z}_n.$$ Suppose the surface $(F_g-\Sigma_f)/f$ has $s$ punctures, each corresponding to an $f$-orbit of points in $\Sigma_f$. For each puncture $p_i\,(1\leq i\leq s)$, take a closed curve $\xi_i:[0,1]\to(F_g-\Sigma_f)/f$ surrounding $p_i$ with orientation induced by the surface. Let $\tilde{\xi}_i:[0,1]\to F_g-\Sigma_f$ be a lift of $\xi_i$; then $\psi_f(\xi_i)$ is the number satisfying $$f^{\psi_f(\xi_i)}(\tilde{\xi}_i(0))=\tilde{\xi}_i(1).$$ **Theorem 24**. *[@Ni] A periodic map $f$ on an oriented closed surface $F_g$ is uniquely determined up to conjugacy by the following invariants: its period $n$, the number $s$ of punctures on $(F_g-\Sigma_f)/f$, and the multiset $\{\psi_f(\xi_1),\psi_f(\xi_2),\cdots,\psi_f(\xi_s)\}$.* *Proof of Lemma [Lemma 23](#lem:periodic){reference-type="ref" reference="lem:periodic"}.* The invariants of $f_{3,3}, f_{3,7}$ are given in [@Hi2 Proposition 3.1]: $$\begin{split} &\text{for}\, f_{3,3},\,n=8,\,s=3,\,\{\psi_f(\xi_1),\psi_f(\xi_2),\psi_f(\xi_3)\}=\{1,1,6\};\\ &\text{for}\, f_{3,7},\,n=8,\,s=3,\,\{\psi_f(\xi_1),\psi_f(\xi_2),\psi_f(\xi_3)\}=\{1,2,5\}. \end{split}$$ From Fig. [\[fig:f_37\]](#fig:f_37){reference-type="ref" reference="fig:f_37"}, we see that $f_3$ has $3$ singular orbits: the center of the octagon, the centers of the squares, and all the vertices. Take $\tilde{\xi}_1,\tilde{\xi}_2,\tilde{\xi}_3$ as in Fig. [\[fig:f_3\]](#fig:f_3){reference-type="ref" reference="fig:f_3"}, then it is clear that the multiset invariant for $f_3$ is $\{1,2,5\}$.
By Theorem [Theorem 24](#thm:classification){reference-type="ref" reference="thm:classification"}, $f_3$ is conjugate to a periodic map in the isotopy class of $f_{3,7}$. Similarly one can verify that $[f_3']$ is conjugate to $f_{3,3}$. ◻ **Lemma 25**. *[@Hi2 Theorem 3.2] Take curves $c_1,\cdots,c_9$ and $d_1,d_2,d_1',d_2',e$ on $F_4$ as in Fig. [\[fig:Dehn4\]](#fig:Dehn4){reference-type="ref" reference="fig:Dehn4"}. Each torsion in ${\rm MCG}(F_4)$ is conjugate to a power of one of the following 12 mapping classes: $$\begin{split} f_{4,1} &=T_{c_1}T_{c_2}T_{c_3}T_{c_4}T_{c_5}T_{c_6}T_{c_7}T_{c_8}\,(\text{of order }18),\\ f_{4,2} &=f_{4,1}T_{c_8}\,(\text{of order }16),\\ f_{4,3} &=f_{4,1}T_{c_9}\,(\text{of order }10),\\ f_{4,4} &=f_{4,3}^5T_{c_9}T_{c_8}T_{c_7}T_{c_6}T_{c_5}T_{c_4}T_{c_3}T_{c_2}T_{c_1}\,(\text{of order }10),\\ f_{4,5} &=T_{d_2}T_{c_2}T_{c_3}T_{c_4}T_{c_5}T_{c_6}T_{c_7}T_{c_8}\,(\text{of order }15),\\ f_{4,6} &=f_{4,5}T_{c_5}T_{c_6} \,(\text{of order }12),\\ f_{4,7} &=T_{d_2}T_{c_4}T_{c_5}T_{c_6}T_{c_7}T_{c_2}T_{c_3}T_{c_4}T_{c_5}T_{c_6}T_{c_7}T_{c_8}\,(\text{of order }10),\\ f_{4,8} &=T_{e}T_{d_2}T_{c_6}T_{c_8}T_{c_7}T_{c_6}T_{c_5}T_{c_4}T_{c_3}\,(\text{of order }12),\\ f_{4,9}& =T_{d_2}^{-1}T_{c_6}^{-1}T_{c_7}^{-1}T_{c_8}^{-1}T_{c_9}^{-1}T_{d_1}T_{c_4}T_{c_3}T_{c_2}T_{c_1}\,(\text{of order }6),\\ f_{4,10} &=T_{c_9}T_{c_8}T_{d_2}^{-1}T_{c_6}^{-1}T_{c_5}^{-1}T_{c_4}^{-1}T_{d'_1}^{-1}T_{c_2}T_{c_1}\,(\text{of order }6),\\ f_{4,11} &=T_{c_9}^{-1}(T_{c_7}T_{d'_2}T_{c_6}T_{c_5}T_{c_4}T_{c_3}T_{c_2})^2\,(\text{of order }6),\\ f_{4,12} &=T_{c_7}^{-1}T_{d_2}^{-1}T_{c_6}^{-1}T_{c_7}^{-1}T_{d'_2}^{-1}T_{c_6}^{-1}T_{c_7}^{-1}T_{c_8}^{-1}T_{c_3}T_{d_1}T_{c_4}T_{c_3}T_{d'_1}T_{c_4}T_{c_3}T_{c_2}\,(\text{of order }5). \end{split}$$* With the following proposition, we finish the proof of Theorem [Theorem 6](#non-extendable){reference-type="ref" reference="non-extendable"}. **Proposition 26**. 
*Every torsion mapping class on $F_4$ is extendable over $S^4$.* *Proof.* It suffices to verify that the 12 mapping classes above all have invariant quadratic forms with Arf invariant $0$. We list each $f_{4,i}(c_j)$ in Table [1](#table:genus4){reference-type="ref" reference="table:genus4"}. Note that in $H_1(F_4;\mathbb{Z}_2)$ we have $$\begin{split} d_1 &=d'_1=c_1+c_3,\\ d_2 &=d'_2=c_1+c_3+c_5,\\ e\ &=c_1+c_2+c_3+c_4+c_6,\\ c_9 &=c_1+c_3+c_5+c_7. \end{split}$$

| $c_i$ | $c_1$ | $c_2$ | $c_3$ | $c_4$ | $c_5$ | $c_6$ | $c_7$ | $c_8$ |
|---|---|---|---|---|---|---|---|---|
| $f_{4,1}(c_i)$ | $c_2$ | $c_3$ | $c_4$ | $c_5$ | $c_6$ | $c_7$ | $c_8$ | $\sum\limits_{j=1}^{8}c_j$ |
| $f_{4,2}(c_i)$ | $c_2$ | $c_3$ | $c_4$ | $c_5$ | $c_6$ | $c_7$ | $\sum\limits_{j=1}^{7}c_j$ | $\sum\limits_{j=1}^{8}c_j$ |
| $f_{4,3}(c_i)$ | $c_2$ | $c_3$ | $c_4$ | $c_5$ | $c_6$ | $c_7$ | $c_8$ | $c_1+c_3+c_5+c_7$ |
| $f_{4,4}(c_i)$ | $c_5$ | $c_6$ | $c_7$ | $c_8$ | $c_1+c_3+c_5+c_7$ | $c_2+c_4+c_6+c_8$ | $c_1$ | $c_2$ |
| $f_{4,5}(c_i)$ | $c_1+c_2$ | $c_3$ | $c_4$ | $c_5$ | $c_1+c_3+c_5+c_6$ | $c_7$ | $c_8$ | $c_1+c_2+c_4+c_6+c_7+c_8$ |
| $f_{4,6}(c_i)$ | $c_1+c_2$ | $c_3$ | $c_4$ | $c_1+c_3+c_6$ | $c_7$ | $c_1+c_3+c_5+c_6+c_7$ | $c_1+c_3+\sum\limits_{j=5}^{8}c_j$ | $c_1+c_2+c_4+c_6+c_7+c_8$ |
| $f_{4,7}(c_i)$ | $c_1+c_2$ | $c_3+c_4$ | $c_5$ | $c_1+c_3+c_5+c_6$ | $c_7$ | $c_1+c_3+c_4+c_6+c_7$ | $c_1+c_3+c_4+c_6+c_7+c_8$ | $c_1+c_2+c_4+c_6+c_7+c_8$ |
| $f_{4,8}(c_i)$ | $c_2+c_3+c_4+c_6$ | $c_3+c_7+c_8$ | $c_2+c_3+c_7+c_8$ | $c_3$ | $c_1+c_2+c_3+c_6$ | $c_2+c_4$ | $c_2+c_4+c_5$ | $c_1+c_3+c_5+c_6+c_7$ |
| $f_{4,9}(c_i)$ | $c_2+c_4$ | $c_1$ | $c_2$ | $c_3$ | $c_4+c_6$ | $c_7$ | $c_8$ | $c_1+c_3+c_5+c_7$ |
| $f_{4,10}(c_i)$ | $c_1+c_2$ | $c_1$ | $c_1+c_2+c_4+c_6$ | $c_1+c_3$ | $c_4$ | $c_5$ | $c_6+c_8$ | $c_1+c_3+c_5+c_7+c_8$ |
| $f_{4,11}(c_i)$ | $c_2+c_4+c_7$ | $c_6$ | $c_1+c_2+c_4+c_6+c_7$ | $c_2$ | $c_3$ | $c_4$ | $c_5+c_6+c_7$ | $c_6+c_8$ |
| $f_{4,12}(c_i)$ | $\sum\limits_{j=1}^{4}c_j$ | $c_2+c_3+c_4$ | $c_1+c_2+c_4$ | $c_1+c_3+c_4$ | $c_3+c_7$ | $c_1+c_3+c_5+c_6$ | $c_1+c_3+\sum\limits_{j=5}^{8}c_j$ | $c_6+c_7+c_8$ |

: The actions of $f_{4,i}$'s on $H_1(F_4;\mathbb{Z}_2)$. []{#table:genus4 label="table:genus4"}

It can be verified that $f_{4,1},f_{4,2},f_{4,5},f_{4,6},f_{4,7},f_{4,9}$ preserve the quadratic form $q_1\in\mathcal{Q}(F_4)$ with $$q_1(c_i)=1\,(1\leq i\leq 8);$$ $f_{4,3},f_{4,4},f_{4,11}$ preserve $q_2$ with $$q_2(c_i)=0\,(1\leq i\leq 8);$$ $f_{4,8}$ preserves $q_3$ with $$q_3(c_1)=q_3(c_2)=\cdots=q_3(c_5)=q_3(c_8)=1,\,q_3(c_6)=q_3(c_7)=0;$$ $f_{4,10}$ preserves $q_4$ with $$q_4(c_1)=q_4(c_2)=q_4(c_3)=q_4(c_7)=q_4(c_8)=1,\,q_4(c_4)=q_4(c_5)=q_4(c_6)=0;$$ and $f_{4,12}$ preserves $q_5$ with $$q_5(c_1)=q_5(c_5)=0,\,q_5(c_2)=q_5(c_3)=q_5(c_4)=q_5(c_6)=q_5(c_7)=q_5(c_8)=1.$$ Again, the Arf invariant of $q\in\mathcal{Q}(F_4)$ can be computed by $${\rm Arf}(q)=\sum_{i=1}^{4}q(c_{2i})\sum_{j=1}^{i}q(c_{2j-1}).$$ So each ${\rm Arf}(q_i)\,(i=1,2,3,4,5)$ turns out to be $0$. ◻ *Remark 27*. According to [@NWW Theorem 1.2, Theorem 1.3] and the data given in [@Hi2 Proposition 3.1], $f_{4,i}\in {\rm Aut}(F_4)$ is extendable over the $3$-sphere if and only if $i=4,9,11,12$. Moreover, $f_3'\in{\rm Aut}(F_3)$ is not extendable over the $3$-sphere, while it is extendable over $S^4$. # Constructing extendable embeddings $F_g\hookrightarrow S^4$ {#sect:embedding} Now we illustrate how to construct a (trivial) embedding $e: F_g\hookrightarrow S^4$ for an automorphism $f$ of $F_g$ which has an invariant quadratic form $q$ with Arf invariant $0$, so that $f$ is extendable with respect to $e$. Take a basis $\{a_1,b_1,\cdots,a_g,b_g\}$ of $H_1(F_g;\mathbb{Z}_2)$ as in Fig. [\[fig:symplectic\]](#fig:symplectic){reference-type="ref" reference="fig:symplectic"}. Divide $\{1,2,\cdots,g\}$ into $4$ subsets $$Q_{r,s}=\{i\,|\,q(a_i)=r,q(b_i)=s\},\,r,s\in\mathbb{Z}_2.$$ As ${\rm Arf}(q)=\sum_{i=1}^{g}q(a_i)q(b_i)=0$, the cardinality of $Q_{1,1}$ is even, and we may write $$Q_{1,1}=\{i_1,j_1,i_2,j_2,\cdots,i_t,j_t\}$$ with $1\leq i_1<j_1<i_2<j_2<\cdots<i_t<j_t\leq g$.
For each $k=1,2,\cdots,t$, take two simple closed curves $c_k$ and $d_k$ on $F_g$ representing $a_{i_k}+a_{j_k}$ and $b_{i_k}+b_{j_k}$ respectively (Fig. [\[fig:curves\]](#fig:curves){reference-type="ref" reference="fig:curves"}). Let $h$ be a map representing the product $$\prod_{i\in Q_{0,1}}T_{a_i}\prod_{j\in Q_{1,0}}T_{b_j}\prod_{k=1}^{t}T_{c_k}T_{d_k}$$ of Dehn twists. If we view Fig. [\[fig:symplectic\]](#fig:symplectic){reference-type="ref" reference="fig:symplectic"} as an obvious embedding of $F_g$ in $S^3$ and view $S^3$ as the equator of $S^4$, then we have a trivial embedding, denoted by $e_0$. **Claim 28**. *The map $f$ is extendable over $S^4$ with respect to the embedding $e_0\circ h$.* Moreover, for an automorphism of $F_{g,k}$ which satisfies the condition in Proposition [Proposition 15](#prop:finitetype){reference-type="ref" reference="prop:finitetype"}, we can extend it to an automorphism of a closed surface as in the proof there, and then construct an embedding in $S^4$ for its extension. To prove Claim [Claim 28](#claim){reference-type="ref" reference="claim"}, we first introduce the induced Rohlin form $q_e$ for an embedding $e:F_g\hookrightarrow S^4$ as follows. Suppose $P\subset S^4$ is an embedded surface with $\partial P\subset e(F_g)$ and $P-\partial P$ intersects $e(F_g)$ transversely. Perturb $P$ to obtain $P'$ such that $P'$ is transverse to both $P,e(F_g)$, and $\partial P'\subset e(F_g)$ is parallel to $\partial P$. Then the value of $q_e$ on $[e^{-1}(\partial P)]\in H_1(F_g;\mathbb{Z}_2)$ equals $|P\cap P'|\,({\rm mod}\,2)$. This defines a quadratic form $q_e:H_1(F_g;\mathbb{Z}_2)\to\mathbb{Z}_2$ [@Roh]. **Theorem 29**. *[@Hi1 Theorem 1.2][\[thm:Hirose\]]{#thm:Hirose label="thm:Hirose"} Suppose $e:F_g\hookrightarrow S^4$ is a trivial embedding.
An automorphism $f$ of $F_g$ is extendable over $S^4$ with respect to $e$ if and only if $f^*q_e=q_e$.* *Proof of Claim [Claim 28](#claim){reference-type="ref" reference="claim"}.* In $H_1(F_g;\mathbb{Z}_2)$ we have $$\begin{split} T_{c_k}T_{d_k}(a_{i_k}) & =a_{i_k}+b_{i_k}+b_{j_k},\\ T_{c_k}T_{d_k}(b_{i_k}) & =a_{i_k}+b_{i_k}+a_{j_k},\\ T_{c_k}T_{d_k}(a_{j_k}) & =a_{j_k}+b_{i_k}+b_{j_k},\\ T_{c_k}T_{d_k}(b_{j_k}) & =a_{i_k}+a_{j_k}+b_{j_k},\\ T_{a_j}(b_j) & = a_j+b_j,\\ T_{b_i}(a_i) & = a_i+b_i, \end{split}$$ while the other generators are preserved by these Dehn twists respectively. Let $q_{e_0}$ be the induced Rohlin form of $e_0$. For $i,j=1,2,\cdots,g$, we see that $a_i,b_j$ and their parallel copies bound disjoint disks in $S^3$ (Fig. [\[fig:disks\]](#fig:disks){reference-type="ref" reference="fig:disks"}), so according to the definition, $q_{e_0}(a_i)=q_{e_0}(b_j)=0$. Therefore, $$\begin{split} h^*q_{e_0}(a_i) &=q_{e_0}(h(a_i))=\left\{ \begin{array}{ll} q_{e_0}(a_i)=0 & \text{ if } i\in Q_{0,0}\cup Q_{0,1}\\ q_{e_0}(a_i+b_i)=1 & \text{ if } i\in Q_{1,0}\\ q_{e_0}(a_{i_k}+b_{i_k}+b_{j_k})=1 & \text{ if } i=i_k\in Q_{1,1}\\ q_{e_0}(a_{j_k}+b_{i_k}+b_{j_k})=1 & \text{ if } i=j_k\in Q_{1,1} \end{array} \right.\\ & =q(a_i), \end{split}$$ and similarly $h^*q_{e_0}(b_i)=q(b_i)$ for each $i=1,2,\cdots,g$. Thus $h^*q_{e_0}=q$. Denote $e=e_0\circ h$. According to the definition of induced Rohlin form, $q_{e}(x)$ equals $q_{e_0}(h(x))$ for any $x\in H_1(F_g;\mathbb{Z}_2)$. As $q_{e_0}(h(x))=h^*q_{e_0}(x)=q(x)$, we have $q_e=q$, thus $f^*q_e=q_e$. It follows from Theorem [\[thm:Hirose\]](#thm:Hirose){reference-type="ref" reference="thm:Hirose"} that $f$ is extendable over $S^4$ with respect to $e$. ◻ **Example 30**. Below we work on a concrete example which provides intuition of both Theorem [Theorem 3](#stable){reference-type="ref" reference="stable"} and Theorem [Theorem 4](#puncture){reference-type="ref" reference="puncture"}. 
The map $f_2\in{\rm Aut}(F_2)$ in Example [Example 18](#ex:f233){reference-type="ref" reference="ex:f233"} has only unbounding invariant spin structures, and thus is not extendable over $S^4$. According to Theorem [Theorem 3](#stable){reference-type="ref" reference="stable"}, it becomes extendable after a connected sum with the identity $I_T$ on the torus $T$. See Fig. [\[fig:stable\]](#fig:stable){reference-type="ref" reference="fig:stable"}. View $F_3$ as a connected sum of $T$ and $F_2$ along the blue curve in the middle. Take a basis $\{a_1,b_1,a_2,b_2,a_3,b_3\}$ of $H_1(F_3;\mathbb{Z}_2)$ as in Fig. [\[fig:symplectic\]](#fig:symplectic){reference-type="ref" reference="fig:symplectic"}; then in $H_1(F_3;\mathbb{Z}_2)$ we have $$a=b_2,\,b=a_2+a_3+b_3,\,c=a_2+b_3,\,d=a_2+b_2.$$ Note that $\{a,b,c,d\}$ is a basis of $H_1(F_2;\mathbb{Z}_2)$ and is an $f_2$-orbit, so a quadratic form on $F_3$ is invariant under $[I_T]\#[f_2]$ if and only if it is constant on $a,b,c,d$. Now fix an invariant quadratic form $q$ with $q(a_1)=q(b_1)=q(a)=q(b)=q(c)=q(d)=1$. Then $$\begin{split} q(a_2)=q(a+d)=1,\, & q(b_2)=q(a)=1,\\ q(a_3)=q(b+c)=1,\,& q(b_3)=q(a+c+d)=0. \end{split}$$ Take the trivial embedding $e_0:F_3\hookrightarrow S^3\subset S^4$ as Fig. [\[fig:symplectic\]](#fig:symplectic){reference-type="ref" reference="fig:symplectic"} presents and let $h=T_{b_3}T_{c_1}T_{d_1}$, where $b_3,c_1,d_1$ are the black curves in the middle of Fig. [\[fig:stable\]](#fig:stable){reference-type="ref" reference="fig:stable"}. Then by Claim [Claim 28](#claim){reference-type="ref" reference="claim"}, $[I_T]\#[f_2]$ is extendable over $S^4$ with respect to the embedding $e_0\circ h$ (the bottom of Fig. [\[fig:stable\]](#fig:stable){reference-type="ref" reference="fig:stable"}). Related to Theorem [Theorem 4](#puncture){reference-type="ref" reference="puncture"}, the blue curve, which is a figure-eight knot in $S^3$, bounds an embedded $F_{2,1}$ on the right side.
The restriction of $f_2$ to $F_{2,1}$ (obtained from $F_2$ by removing an open neighborhood of a fixed point of $f_2$) is also extendable over $S^4$ with respect to this embedding.

Atiyah, M. F., *Riemann surfaces and spin structures*, Annales scientifiques de l'École Normale Supérieure 4.1 (1971): 47-62.

Ding, F., Liu, Y., Wang, S. C. and Yao, J. G., *Spin structures and codimension-two homeomorphism extensions*, Mathematical Research Letters 19.2 (2012): 345-357.

Hirose, S., *On diffeomorphisms over surfaces trivially embedded in the 4-sphere*, Algebraic & Geometric Topology 2.2 (2002): 791-824.

Hirose, S., *Presentations of periodic maps on oriented closed surfaces of genera up to 4*, Osaka Journal of Mathematics 47.2 (2010): 385-421.

Hirsch, M. W., *The imbedding of bounding manifolds in euclidean space*, Annals of Mathematics 74.3 (1961): 494-497.

Johnson, D., *Spin structures and quadratic forms on surfaces*, J. London Math. Soc. 2.2 (1980): 365-373.

Kirby, R. C., *The topology of 4-manifolds*, Lecture notes in mathematics 1374, Springer, 1989.

Montesinos, J. M., *On twins in the four-sphere I*, Quart. J. Math. 34.2 (1983): 171-199.

Nielsen, J., *Die Struktur periodischer Transformationen von Flächen*, Levin & Munksgaard, 1937.

Ni, Y., Wang, C. and Wang, S. C., *Extending periodic automorphisms of surfaces to 3-manifolds*, Communications in Analysis and Geometry (to appear), arXiv:2003.11773 \[math.GT\].

Rohlin, V. A., *Proof of a conjecture of Gudkov*, Funkcional. Anal. i Prilozen. 6.2 (1972): 62-64.

Wang, S. C. and Wang, Z. Z., *Extending periodic maps on surfaces over 4-sphere*, Journal of Topology and Analysis (to appear), https://doi.org/10.1142/S1793525322500108, arXiv:2212.13050 \[math.GT\].
{ "id": "2310.05783", "title": "Extendability over the $4$-sphere and invariant spin structures of\n surface automorphisms", "authors": "Weibiao Wang and Zhongzi Wang", "categories": "math.GT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
---
abstract: |
  In this paper, by using the spectral theory of functions and properties of evolution semigroups, we establish conditions for the existence and uniqueness of asymptotic 1-periodic solutions to a class of abstract differential equations with infinite delay of the form $$\frac{d u(t)}{d t}=A u(t)+L(u_t)+f(t)$$ where $A$ is the generator of a strongly continuous semigroup of linear operators, $L$ is a bounded linear operator from a phase space $\mathscr{B}$ to a Banach space $X$, $u_t$ is the element of $\mathscr{B}$ defined by $u_t(\theta)=u(t+\theta)$ for $\theta \leq 0$, and $f$ is asymptotic 1-periodic in the sense that $\lim\limits_{t \rightarrow \infty}(f(t+1)-f(t))=0$. A Lotka-Volterra model with diffusion and infinite delay is considered to illustrate our results.
address:
- Nguyen Duc Huy VNU University of Education, Vietnam National University, Hanoi; Xuan Thuy, Cau Giay, Hanoi, Vietnam
- Anh Minh Le Department of Mathematical Analysis, Hong Duc University Quang Trung, Dong Ve, Thanh Hoa, Vietnam
- VNU University of Education, Vietnam National University at Hanoi, Xuan Thuy, Cau Giay, Hanoi, Vietnam
- Faculty of Foundations, Hai Duong University, Hai Duong City, Vietnam
author:
- Nguyen Duc Huy
- Anh Minh Le
- Vu Trong Luong
- Nguyen Ngoc Vien
title: Asymptotic periodic solutions of differential equations with infinite delay
---

[^1]

# Introduction

The famous Massera Theorem (see [@M50]) on the connection between the boundedness and periodicity of solutions of ordinary differential equations has been extended to many classes of abstract differential equations and difference equations in general Banach spaces (see [@MMHL22; @HNM00; @N04; @Liu04; @Liu08; @Nai02; @Zi13]).
In particular, in [@L22] the authors presented an analog of the Massera Theorem for asymptotic periodic solutions of linear equations $$\label{intro.1.06} x^{\prime}(t)= A(t) x(t)+f(t), \quad t \geq 0,$$ where the family of linear operators $A(t)$ generates a 1-periodic process in a Banach space $X$ and $f$ is asymptotic 1-periodic, i.e., $\lim \limits_{t \rightarrow \infty}(f(t+1)-f(t))=0$. Appropriate conditions are given to ensure that [\[intro.1.06\]](#intro.1.06){reference-type="eqref" reference="intro.1.06"} has an asymptotic 1-periodic mild solution if and only if it has an asymptotic mild solution that is bounded, uniformly continuous and has precompact range. More precisely, they showed that a bounded and continuous function $g: \mathbb{R} \rightarrow X$ is asymptotic 1-periodic if and only if its circular spectrum $\sigma(g)$ (see [@Minh09]) satisfies $\sigma(g) \subset\{1\}$. Therefore, the existence of asymptotic 1-periodic solutions is reduced to that of solutions $x$ such that $\sigma(x) \subset\{1\}$. Motivated by these results, in this paper we investigate the existence and uniqueness of asymptotic 1-periodic mild solutions to a class of abstract differential equations with infinite delay of the form $$\label{intro.2} \frac{d u(t)}{d t}=A u(t)+L(u_t)+f(t)$$ where $A$ is the generator of a strongly continuous semigroup of bounded linear operators $(T(t))_{t \geq 0}$ on $X$, $L:\mathscr{B}\to X$ is a bounded linear operator from an axiomatically defined phase space $\mathscr{B}$ to $X$, $u_t$ is the element of $\mathscr{B}$ defined by $u_t(\theta)=u(t+\theta)$ for $\theta \leq 0$, and $f$ is asymptotic 1-periodic.
To begin, let us recall that a function $u(\cdot)$ is an asymptotic solution to Equation [\[intro.2\]](#intro.2){reference-type="eqref" reference="intro.2"} if there is a continuous function $\epsilon(\cdot)$ such that $\lim\limits _{t \rightarrow \infty} \epsilon(t)=0$ and $$\dfrac{d u(t)}{d t}=A u(t)+L(u_t)+f(t) + \epsilon (t), \ \forall t \in \mathbb{R}.$$ We will find conditions so that if Equation [\[intro.2\]](#intro.2){reference-type="eqref" reference="intro.2"} has a bounded uniformly continuous solution, then it has an asymptotic periodic solution. To do this, we first recall some facts on abstract functional differential equations with infinite delay in a uniform fading memory phase space. Then, under appropriate assumptions, there exists an evolutionary process $\mathscr{U}(t,s)$ defined by $\mathscr{U}(t,s)\phi = u_t$, where $u$ is the mild solution of $$\label{homo} \frac{d u(t)}{d t}=A u(t)+L (u_t), \ \ u_s= \phi.$$ Finally, we make use of a variation of constants formula in the phase space \[17\] combined with the spectral decomposition technique developed in \[29\] to obtain our main results: if 1 is an isolated point of $\sigma(\mathscr{U}(1,0))$ on the unit circle $\Gamma$ of the complex plane, then Equation [\[intro.2\]](#intro.2){reference-type="eqref" reference="intro.2"} has an asymptotic 1-periodic mild solution if and only if it has an asymptotic mild solution that is bounded and uniformly continuous with precompact range; and if $1 \notin \sigma(\mathscr{U}(1,0)) \cap \Gamma$, then such an asymptotic 1-periodic mild solution always exists and is unique up to a function $g(t)$ on $\mathbb{R}$ with $\lim\limits_{t \rightarrow \infty} g(t)=0$. The main results of this paper are stated in Theorems [Theorem 13](#theo.main.1){reference-type="ref" reference="theo.main.1"}, [Theorem 19](#theo.main.2){reference-type="ref" reference="theo.main.2"} and [Theorem 20](#theo.main.3){reference-type="ref" reference="theo.main.3"}.
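The Massera-type phenomenon described above can be seen numerically in a toy scalar example of our own (not the paper's construction): for $u'(t) = -u(t) + f(t)$ with $f(t) = \sin(2\pi t) + 1/(1+t)$, which is asymptotic 1-periodic, the bounded solution is again asymptotic 1-periodic. A short explicit-Euler check:

```python
import math

# Toy scalar example (illustrative only): u'(t) = -u(t) + f(t), where
# f(t) = sin(2*pi*t) + 1/(1+t) satisfies f(t+1) - f(t) -> 0, i.e. f is
# asymptotic 1-periodic. We integrate with explicit Euler and observe
# that u(t+1) - u(t) -> 0 as well.
def f(t):
    return math.sin(2 * math.pi * t) + 1.0 / (1.0 + t)

def solve(n_steps, h, u0=0.0):
    """Return samples u(k*h), k = 0..n_steps, computed by explicit Euler."""
    u, traj = u0, [u0]
    for k in range(n_steps):
        u += h * (-u + f(k * h))
        traj.append(u)
    return traj

h = 1e-3
steps_per_unit = 1000          # integer steps per unit time, so indices are exact
u = solve(31 * steps_per_unit, h)
diff = abs(u[31 * steps_per_unit] - u[30 * steps_per_unit])  # |u(31) - u(30)|
print(diff)  # small: the bounded solution is asymptotic 1-periodic
```

The transient $e^{-t}u(0)$ dies out, the sinusoidal forcing contributes an exactly 1-periodic steady state, and the $1/(1+t)$ part contributes a term that decays, so the sampled difference is of order $10^{-3}$.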
# Preliminaries

## Notations

In this paper, we denote by $\mathbb{N}, \mathbb{R}$ and $\mathbb{C}$ the sets of natural, real and complex numbers, respectively. Let $X$ be a general (complex) Banach space. Then, we denote by $\mathscr L(X)$ the space of all bounded linear operators on $X$. The spectrum of a linear operator $T$ in a Banach space is denoted by $\sigma(T)$, and $\rho(T):=\mathbb{C} \backslash \sigma(T)$. As usual, we denote by $B U C(\mathbb{R}, X)$ and $B C(\mathbb{R}, X)$ the spaces of all $X$-valued bounded uniformly continuous functions and bounded continuous functions on $\mathbb{R}$, respectively. We also use $BUC_C\left(\mathbb{R}, X\right)$ and $C_0\left(\mathbb{R}, X\right)$ to denote $\{g \in BUC\left(\mathbb{R}, X\right): \text{range of } g \text{ is precompact} \}$ and $\{g \in BUC (\mathbb{R}, X): \lim\limits_{t \rightarrow \infty} g(t)= 0\}$, respectively. Finally, $\Gamma$ will stand for the unit circle in $\mathbb C$, i.e., $\Gamma:= \{z \in \mathbb{C}:|z|=1\}$. Now, we give a precise axiomatic definition of the notion of uniform fading memory space, which was introduced by Hale and Kato in [@halk]; we follow the terminology used in [@HMN91].

## Uniform fading memory phase spaces

Let $\left(\mathscr{B},\|\cdot\|_{\mathscr{B}}\right)$ be a Banach space, consisting of functions mapping $(-\infty, 0]$ into $X$, such that - There exist a positive constant $N$ and locally bounded functions $K(\cdot)$ and $M(\cdot)$ on $\mathbb{R}_{+}$ with the property that if $x:(-\infty, a) \to X$ is continuous on $[\sigma, a)$ with $x_\sigma \in \mathscr{B}$ for some $\sigma<a$, then for all $t \in[\sigma, a)$, - $x_t \in \mathscr{B}$, - $x_t$ is continuous in $t$ (w.r.t.
$\|\cdot\|_{\mathscr{B}}$ ), - $N\|x(t)\| \leq\left\|x_t\right\|_{\mathscr{B}} \leq K(t-\sigma) \sup _{\sigma \leq s \leq t}\|x(s)\| +M(t-\sigma)\left\|x_\sigma\right\|_{\mathscr{B}}$, - If $\left\{\phi^k\right\}, \phi^k \in \mathscr{B}$, converges to $\phi$ uniformly on any compact set in $\mathbb{R}_{-}$ and if $\left\{\phi^k\right\}$ is a Cauchy sequence in $\mathscr{B}$, then $\phi \in \mathscr{B}$ and $\phi^k \rightarrow \phi$ in $\mathscr{B}$.

**Definition 1** ([@MNM04]). The space $\mathscr{B}$ is called a uniform fading memory space if it satisfies $(\mathbf{A}_1)$ and $(\mathbf{A}_2)$ with $K(\cdot) \equiv K$ and $M(t) \rightarrow 0$ as $t\rightarrow \infty$ in $(\mathbf{A}_1)$.

**Example 2**. For $\gamma>0$, we define $$C_\gamma=\left\{\phi:(-\infty, 0] \rightarrow X \text{ continuous such that } \lim _{\theta \rightarrow-\infty} {e}^{\gamma \theta} \phi(\theta) \text{ exists in } X\right\}$$ endowed with the norm $$\|\phi\|_\gamma=\sup _{\theta \leq 0} {e}^{\gamma \theta}\|\phi(\theta)\| \quad \text { for } \phi \in C_\gamma .$$ Then, $C_\gamma$ is a uniform fading memory space (see [@HMN91] for the proof).

## Circular spectra of functions

Now, in $B C\left(\mathbb{R}, X\right)$ we consider the translation operator $S$ defined by $$[S x](\xi):=x(1+\xi), \quad \xi \in \mathbb{R}, \ x \in B C\left(\mathbb{R}, X\right).$$ One can see that $S$ induces operators in the quotient spaces $$\begin{aligned} &\mathbb{Y}:=B C\left(\mathbb{R}, X\right) / C_0\left(\mathbb{R}, X\right), \ \ \mathbb{Y}^C:= BUC_C\left(\mathbb{R}, X\right) / C_0\left(\mathbb{R}, X\right)\end{aligned}$$ respectively, both of which will be denoted by $\bar{S}$. Moreover, $\bar{S}$ is an isometry, so that $\sigma(\bar{S}) \subset \Gamma$.
Then, for each $x \in B C\left(\mathbb{R}, X\right)$ we define the complex function $\mathcal{S} x(\lambda)$ in $\lambda \in \mathbb{C} \backslash \Gamma$ as $$\mathcal{S} x(\lambda):=R(\lambda, \bar{S}) \bar{x}, \quad \text{for} \quad \lambda \in \mathbb{C} \backslash \Gamma,$$ where $\bar{x}$ is the equivalence class in $\mathbb{Y}$ (or $\mathbb{Y}^C$, respectively) which contains $x$.

**Lemma 3**. *We have the following estimate: $$\|\mathcal{S} x(\lambda)\| \leq \frac{\|\bar{x}\|}{|1-| \lambda||}, \quad|\lambda| \neq 1 .$$*

**Definition 4** ([@Minh09]). The *circular spectrum* of a function $x \in B C\left(\mathbb{R}, X\right)$ is defined to be the set of all $\xi_0 \in \Gamma$ such that $\mathcal{S} x(\lambda)$ has no analytic extension into any neighborhood of $\xi_0$ in the complex plane. This spectrum of $x$ is denoted by $\sigma(x)$, and we denote by $\rho(x)$ the set $\Gamma \backslash \sigma(x)$.

**Lemma 5**. *For each $x \in B C\left(\mathbb{R}, X\right)$, $$\sigma(Q x) \subset \sigma(x),$$ provided that $Q$ is an operator in $B C\left(\mathbb{R}, X\right)$ leaving $C_0\left(\mathbb{R}, X\right)$ invariant such that $\bar{Q}$ commutes with $\bar{S}$.*

## Asymptotic periodic functions and their spectral characterization

We begin this subsection with the concept of asymptotic periodic functions. It is worth noting that our definition of asymptotic periodicity is slightly different from the concept used in many previous works, and the choice of period 1 is not a restriction but is made for the reader's convenience; all results can easily be stated for a general period.

**Definition 6**. A function $f \in BUC\left(\mathbb{R}, X\right)$ is said to be *asymptotic 1-periodic* if $$\lim _{t \rightarrow + \infty}(f(t+1)-f(t))=0.$$ If $\bar{x} \in \mathbb{Y}$, then we define its spectrum in the same way as in Definition 4.
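As a concrete scalar illustration (our own toy example, not from the paper): $x(t)=\cos(2\pi t)+t/(1+t)$ is asymptotic 1-periodic, while $y(t)=\cos(\pi t)$ is not, although $y(t+1)+y(t)=0$, so its circular spectrum sits at $e^{i\pi}=-1$ rather than at $1$. A quick numerical check:

```python
import math

# Numerical illustration of Definition 6 and of the spectral characterization
# sigma(x) in {e^{ip}}  <=>  x(t+1) - e^{ip} x(t) -> 0  (here with p = 0, pi).
def x(t):
    # asymptotic 1-periodic: the tail t/(1+t) converges to 1
    return math.cos(2 * math.pi * t) + t / (1.0 + t)

def y(t):
    # 2-periodic, not asymptotic 1-periodic; satisfies y(t+1) = -y(t)
    return math.cos(math.pi * t)

t = 1000.0
print(abs(x(t + 1) - x(t)))   # tends to 0: asymptotic 1-periodic
print(abs(y(t + 1) - y(t)))   # stays of size 2|cos(pi t)|, does not vanish
print(abs(y(t + 1) + y(t)))   # tends to 0: circular spectrum in {-1}
```

This is exactly the dichotomy the circular spectrum records: both functions have one-point spectrum, but only the first has it at $1$.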
For every element $\bar{x} \in \mathbb{Y}$ we denote by $\mathcal{M}_{\bar{x}}$ the closed subspace spanned by $\left\{\bar{S}^n \bar{x}, n \in \mathbb{Z}\right\}$. Note that $\mathcal{M}_{\bar{x}}$ is invariant under $\bar{S}$.

**Proposition 7** ([@L22 Proposition 3.4]). *The following assertions are valid:* - *Let $x \in B C\left(\mathbb{R}, X\right)$. Then, $\sigma(x)=\emptyset$ if and only if $x \in C_0\left(\mathbb{R}, X\right)$;* - *Let $p \in \mathbb{R}$ and $x \in B C\left(\mathbb{R}, X\right)$. Then, $\sigma(x) \subset\left\{e^{i p}\right\}$ if and only if $$\lim _{t \rightarrow \infty}\left(x(t+1)-e^{i p} x(t)\right)=0 ;$$* - *Let $\Lambda$ be a closed subset of $\Gamma$, and $\mathbb{Y}_{\Lambda}:=\left\{\bar{x} \in \mathbb{Y}^C: \sigma(\bar{x}) \subset \Lambda\right\}$. Then, $\mathbb{Y}_{\Lambda}$ is a closed subspace of $\mathbb{Y}^C$;* - *If $\sigma(\bar{x}) \neq \emptyset$, then $\sigma(\bar{x})=\sigma\left(\left.\bar{S}\right|_{\mathcal{M}_{\bar{x}}}\right)$;* - *Let $\Lambda=\Lambda_1 \sqcup \Lambda_2$, where $\Lambda_1, \Lambda_2$ are disjoint closed subsets of $\Gamma$. Then, $$\mathbb{Y}_{\Lambda}=\mathbb{Y}_{\Lambda_1} \oplus \mathbb{Y}_{\Lambda_2}.$$ Moreover, the projection associated with this direct sum commutes with any operator that commutes with the shift operator $\bar{S}$;* - *Let $f \in B U C\left(\mathbb{R}, X\right)$. Assume that $f^{\prime}$ exists and also belongs to $B U C\left(\mathbb{R}, X\right)$. Then, $$\sigma\left(f^{\prime}\right) \subset \sigma(f).$$*

# Main results

## The existence of asymptotic periodic solutions

To begin, we assume that - The phase space $\mathscr{B}$ is a uniform fading memory space. Then, we have the following result.

**Lemma 8**. *Let $(\mathbf M_0)$ be satisfied and let $z \in BUC( \mathbb{R},X)$ be an asymptotic 1-periodic function. Let $v: \mathbb{R}\to \mathscr{B}$ be the function defined by $v(t)= z_t$.
Then* - *$v$ is a $\mathscr{B}$-valued asymptotic 1-periodic function, and* - *$\sigma (z)= \sigma (v)$.*

*Proof.* a) is obvious.\
b) By assumption $(\mathbf{M}_{0})$, since $z \in BUC(\mathbb{R}, X)$, there exist positive constants $M, K$, which do not depend on $z$, such that $$M\|z(t)\| \leq\left\|z_t\right\|_{\mathscr{B}} \leq K \sup _{t \in \mathbb{R}}\|z(t)\|,$$ which implies $$v \in BUC (\mathbb{R}, \mathscr{B}) \ \Leftrightarrow z \in BUC (\mathbb{R}, X).$$ Moreover, $$\begin{aligned} \left( (\lambda - S)v(t) \right)(\theta)& =\left( (\lambda - S)z_t \right)(\theta) \notag\\ & = \left( \lambda z_t - z_{t+1}\right) (\theta)\notag\\ &= \lambda z(t + \theta) - z(t+1+ \theta)\notag\\ &=\left( \lambda - S\right) z(t + \theta), \ \ \forall t \in \mathbb{R}, \ \theta\leq 0.\label{eq.10.06.23}\end{aligned}$$ Now, let $\lambda_0 \in \rho (v)$, i.e., there exists a small neighborhood $V$ of $\lambda_0$ such that $R(\lambda, \bar S)\bar v$ is analytic in $V$. Then, by [\[eq.10.06.23\]](#eq.10.06.23){reference-type="eqref" reference="eq.10.06.23"}, $R(\lambda, \bar S)\bar z$ is also analytic in $V$. This gives $\lambda_0 \in \rho (z)$. Thus, $\sigma (z) \subset \sigma (v)$. By the same argument one has $\sigma (v) \subset \sigma (z)$, and therefore $\sigma (z) = \sigma (v)$. The proof is complete. ◻

Now, we consider the abstract functional differential equation [\[intro.2\]](#intro.2){reference-type="eqref" reference="intro.2"} and make the following assumptions. - The operator $A$ is the generator of a strongly continuous semigroup of linear operators $(T(t))_{t \geq 0}$ on a Banach space $X$. - $L:\mathscr B \to X$ is a bounded linear operator.
Then, for any $(\sigma, \phi) \in \mathbb{R}\times \mathscr{B}$, there exists a function $u: \mathbb{R}\to X$ such that $u_\sigma=\phi$, $u$ is continuous on $[\sigma, \infty)$ and the following relation holds: $$u(t)=T(t-\sigma) \phi(0)+\int_\sigma^t T(t-\xi)\left[ L( u_\xi)+f(\xi)\right] d \xi, \quad t \geq \sigma.$$ The function $u$ is called a *mild solution* of Equation [\[intro.2\]](#intro.2){reference-type="eqref" reference="intro.2"} through $(\sigma, \phi)$ on $[\sigma, \infty)$, and is denoted by $u(\cdot, \sigma, \phi ; f)$. Also, a function $v \in C(\mathbb{R}, X)$ is called a mild solution of Equation [\[intro.2\]](#intro.2){reference-type="eqref" reference="intro.2"} on $\mathbb{R}$ if $v_t \in \mathscr{B}$ for all $t \in \mathbb{R}$ and it satisfies $u\left(t, \sigma, v_\sigma ; f\right)=v(t)$ for all $t$ and $\sigma$ with $t \geq \sigma$.

**Definition 9**. A function $u\in BUC\left(\mathbb{R}, X\right)$ is said to be an *asymptotic mild solution* of Equation [\[intro.2\]](#intro.2){reference-type="eqref" reference="intro.2"} if $u_\sigma =\phi \in \mathscr{B}$ and there exists a function $\epsilon \in C_0\left(\mathbb{R}, X\right)$ such that $$u(t)=T(t-\sigma) \phi(0)+\int_\sigma^t T(t-\xi)\left[L( u_{\xi})+f(\xi)+\epsilon(\xi)\right] d \xi, \ \forall t \geq \sigma.$$ Now, for any $t \geq s$, we define an operator $\mathscr{U}(t, s)$ on $\mathscr{B}$ by $$\mathscr{U}(t, s) \phi=u_t(s, \phi ; 0), \quad \phi \in \mathscr{B} .$$ Then, the two-parameter family $(\mathscr{U}(t, s))_{t \geq s \geq 0}$ is a strongly continuous evolutionary process on $\mathscr{B}$ (see [@MNM04 Section 4.2]), which is called the *solution process* of [\[intro.2\]](#intro.2){reference-type="eqref" reference="intro.2"}.
Here, by a strongly continuous evolutionary process in a Banach space $\boldsymbol{Y}$ we mean a two-parameter family of bounded linear operators $(V(t, s))_{t \geq s}$ from $\boldsymbol{Y}$ to $\boldsymbol{Y}$ such that the following conditions are satisfied: - $V(t, t)=I, \forall t \in \mathbb{R}$, - $V(t, s) V(s, r)=V(t, r), \forall t \geq s \geq r$, - For every fixed $y \in \boldsymbol{Y}$ the map $$\left\{(\eta, \xi) \in \mathbb{R}^2: \eta \geq \xi\right\} \ni(t, s) \rightarrow V(t, s) y$$ is continuous, - There exist positive constants $N, \omega$ such that $$\|V(t, s)\| \leq N e^{\omega(t-s)}, \quad \forall t \geq s.$$ Moreover, $(\mathscr{U}(t, s))_{t \geq s}$ is 1-periodic in the sense that $$\mathscr{U}(t+1, s+1)=\mathscr{U}(t, s), \text { for all } t \geq s,$$ which enables us to define a *monodromy operator* $\mathscr{P}(t): \mathscr{B}\to \mathscr{B}$ by $$\mathscr{P}(t):=\mathscr{U}(t+1, t), \ t \in \mathbb{R}.$$ For the sake of simplicity, we will denote $\mathscr{P}:= \mathscr{P}(0)$. The nonzero eigenvalues of $\mathscr{P}(t)$ are called *characteristic multipliers*. The following lemma gives important properties of monodromy operators, which can be proved by modifying similar results in [9, 11].

**Lemma 10**. *The following assertions hold:* - *$\mathscr{P}(t+1)=\mathscr{P}(t)$ for all $t$; characteristic multipliers are independent of time, i.e.
the nonzero eigenvalues of $\mathscr{P}(t)$ coincide with those of $\mathscr{P}$,* - *$\sigma(\mathscr{P}(t)) \backslash\{0\}=\sigma(\mathscr{P}) \backslash\{0\}$, i.e., it is independent of $t$,* - *If $\lambda \in \rho(\mathscr{P})$, then the resolvent $R(\lambda, \mathscr{P}(t))$ is strongly continuous,* - *If $\mathcal{P}$ denotes the operator of multiplication by $\mathscr{P}(t)$ in the function space $BUC\left(\mathbb{R}, \mathscr{B}\right)$, then $$\sigma(\mathcal{P}) \backslash\{0\} \subset \sigma(\mathscr{P}) \backslash\{0\}.$$*

Now, we introduce a function $G^n$ defined by $$G^n(\theta)= \begin{cases}(n \theta+1) I, & -\dfrac{1}{n} \leq \theta \leq 0, \\ 0, & \theta<- \dfrac{1}{n},\end{cases}$$ where $n$ is any positive integer and $I$ is the identity operator on $X$. Then, for $x \in X$ one has $$G^n x \in \mathscr{B} \ \text{and} \ \left\|G^n x\right\|_{\mathscr{B}} \leq K(1)\|x\|.$$ Moreover, since the process $(\mathscr{U}(t, s))_{t \geq s}$ is strongly continuous, the $\mathscr{B}$-valued function $\mathscr{U}(t, s) G^n f(s)$ is continuous in $s \in [0, t]$ whenever $f \in BUC(\mathbb{R}, X)$. The following theorem yields a representation formula for solutions of [\[intro.2\]](#intro.2){reference-type="eqref" reference="intro.2"} in the phase space:

**Theorem 11** ([@MNM04]). *The segment $u_t(\sigma, \phi ; f)$ of the solution $u(\cdot, \sigma, \phi; f)$ of [\[intro.2\]](#intro.2){reference-type="eqref" reference="intro.2"} satisfies the following relation in $\mathscr{B}$: $$\label{vi.23.5} u_t(\sigma, \phi ; f)=\mathscr{U}(t, \sigma) \phi+\lim _{n \rightarrow \infty} \int_\sigma^t \mathscr{U}(t, s) G^n f(s) d s, \quad t \geq \sigma.$$ Moreover, the above limit exists uniformly for bounded $|t-\sigma|$.*

**Lemma 12**. *Let $f \in BUC\left(\mathbb{R}, X\right)$ and let $u \in BUC (\mathbb{R}, X)$ be a mild solution to Equation [\[intro.2\]](#intro.2){reference-type="eqref" reference="intro.2"}.
Then $$\label{vi.23.10} \sigma (u ) \subset \sigma_{\Gamma} (\mathscr{P}) \cup \sigma (f).$$*

*Proof.* By applying [\[vi.23.5\]](#vi.23.5){reference-type="eqref" reference="vi.23.5"} one gets $$\label{vi.23.6} u_{t+1}=\mathscr{U}(t+1, t) u_t+\lim _{n \rightarrow \infty} \int_t^{t+1} \mathscr{U}(t+1, s) G^n f(s) d s,$$ where the limit on the right-hand side exists uniformly in $t$ on bounded sets.\
For each $n \in \mathbb{N}$, we define $$\label{vi.23.7} \left[H_n f\right](t):=\int_t^{t+1} \mathscr{U}(t+1, s) G^n f(s) d s, \ \ f \in BUC\left(\mathbb{R} , X\right), \ t \in \mathbb{R}.$$ Then, by Theorem [Theorem 11](#theo.limit){reference-type="ref" reference="theo.limit"}, $$\label{vi.23.8} \left[H_n f\right](t) \rightrightarrows \left[H f\right](t):=\lim\limits_{n \to \infty}\int_t^{t+1} \mathscr{U}(t+1, s) G^n f(s) d s.$$ Since each $H_n$ commutes with $S$, so does $H$, and hence $$\sigma (H_n f) \subset \sigma (f) \ \text{and} \ \sigma (H f) \subset \sigma (f).$$ Now, for each $\lambda \in \mathbb{C}$ such that $|\lambda| \neq 1$ we have $$\begin{aligned} {[(\lambda-S) v](t) } & =\lambda u_t-\mathscr{U}(t+1, t) u_t+\lim _{n \rightarrow \infty} \left[H_n f\right](t) \\ & =\lambda u_t-\mathscr{P}(t) u_t+ \left[H f\right](t). \end{aligned}$$ Here $v$ is the mapping defined in Lemma [Lemma 8](#lem.v){reference-type="ref" reference="lem.v"}. Moreover, as in Lemma [Lemma 10](#lem.P){reference-type="ref" reference="lem.P"} we denote by $\mathcal{P}$ the operator of multiplication by $\mathscr{P}(t)$. Since $(\mathscr{U}(t, s))_{t \geq s}$ is a 1-periodic strongly continuous evolutionary process, $\mathscr{P}(t)$ is 1-periodic and commutes with the translation operator $S$.
Hence, $$\label{vi.23.9} (\lambda-\bar{S}) \bar{v} =(\lambda-\bar{\mathcal{P}}) \bar{v}+ \bar{H} \bar f .$$ Let $0 \neq \lambda_0 \notin \sigma_{\Gamma}(\mathscr{P}) \cup \sigma(f)$ and let $V$ be a fixed small open neighborhood of $\lambda_0$ such that $$V \cap\left(\sigma_{\Gamma}(\mathscr{P}) \cup \sigma(f)\right)=\emptyset.$$ Using [\[vi.23.9\]](#vi.23.9){reference-type="eqref" reference="vi.23.9"} we obtain $$R(\lambda, \bar{S}) \bar{v} =R(\lambda, \bar{\mathcal{P}}) \bar{v} -R(\lambda, \bar{\mathcal{P}}) R(\lambda, \bar{S}) \bar{H} \bar f.$$ This shows that $R(\lambda, \bar{S}) \bar{v}$ has an analytic extension in $V$, i.e., $\lambda_0 \not \in \sigma (v)$. Thus $$\sigma (v) \subset \sigma_{\Gamma}(\mathscr{P}) \cup \sigma (f).$$ Finally, by applying Lemma [Lemma 8](#lem.v){reference-type="ref" reference="lem.v"} one gets $$\sigma (u ) \subset \sigma_{\Gamma} (\mathscr{P}) \cup \sigma (f).$$ The lemma is proved. ◻

Now, we present the main result of this section on the existence of asymptotic 1-periodic solutions to Equation [\[intro.2\]](#intro.2){reference-type="eqref" reference="intro.2"} emanating from asymptotic mild ones.

**Theorem 13**. *Let $\left(\mathbf{M}_0\right),\left(\mathbf{M}_1\right)$ and $\left(\mathbf{M}_2\right)$ be satisfied and let $\sigma_{\Gamma}(\mathscr{P}) \subset\{1\}$. Assume that $u \in BUC\left(\mathbb{R}, X\right)$ is an asymptotic mild solution of [\[intro.2\]](#intro.2){reference-type="eqref" reference="intro.2"} and that $f \in BUC (\mathbb{R}, X)$ is asymptotic 1-periodic.
Then, $u$ is asymptotic 1-periodic, i.e., $$\lim _{t \rightarrow \infty}(u(t+1)-u(t))=0 .$$*

*Proof.* Since $f \in BUC(\mathbb{R}, X)$ is asymptotic 1-periodic, $$\sigma(f) \subset\{1\} .$$ By Lemma [Lemma 12](#lem.spec){reference-type="ref" reference="lem.spec"}, $$\sigma(u) \subset \sigma_{\Gamma}(\mathscr{P}) \cup \sigma(f) \subset\{1\} .$$ Combining this with Proposition [Proposition 7](#prop.m1){reference-type="ref" reference="prop.m1"}, we obtain that $u$ is asymptotic 1-periodic. ◻

## The uniqueness of asymptotic periodic solution

Now let $A$ be a linear operator satisfying $(\mathbf{M}_{1})$ and let $(T(t))_{t \geqslant 0}$ be the strongly continuous semigroup generated by $A$. Then, in $BUC(\mathbb{R}, X)$ we can define the *evolution semigroup* $(\mathbf{T}^h)_{h \geq 0}$ associated with $(T(t))_{t \geqslant 0}$ by $$\left[\mathbf{T}^h g\right](\xi):= T(h)g(\xi-h), \ \forall \xi \in \mathbb{R}, \ \forall g \in B U C\left(\mathbb{R}, X\right).$$ By a simple argument one can show that $\left(\mathbf{T}^h\right)_{h \geq 0}$ leaves $B U C_C\left(\mathbb{R}, X\right)$ invariant. Besides, we consider the autonomous equation $$\label{N17.2.72} \dfrac{dx}{dt}=Ax+f(t)$$ where $f \in BUC_C(\mathbb{R}, X)$. By a mild solution to [\[N17.2.72\]](#N17.2.72){reference-type="eqref" reference="N17.2.72"} we mean a continuous function $x: \mathbb{R}\to X$ which satisfies the following integral equation $$x(t)=T(t-s) x(s) +\int_s^t T(t-\xi) f(\xi) d \xi, \quad \forall t \geq s.$$ With this notion, we define an operator $\mathcal{L}:BUC_C\left(\mathbb{R}, X\right)\to BUC_C\left(\mathbb{R}, X\right)$ as follows: $$\begin{aligned} & \mathscr{D}(\mathcal{L}): = \left\lbrace x \in BUC_C\left(\mathbb{R}, X\right): \exists f \in B U C_C(\mathbb{R}, X) \right. \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \left.
\ \text{ such that } x \text{ is a mild solution to \eqref{N17.2.72}} \right\rbrace \end{aligned}$$ and $\mathcal{L}x:=f$ for $x \in \mathscr{D}(\mathcal{L})$.\
Further, we define an operator $\bar{\mathcal{L}}: \mathbb{Y}^C \to \mathbb{Y}^C$ as follows: $$\begin{aligned} & \mathscr{D}(\bar{\mathcal{L}}): = \left\lbrace [x(\cdot)] : \exists \text{ a representative } \bar{x}(\cdot) \in[x(\cdot)] \text{ such that } \bar{x}(\cdot) \in \mathscr{D}(\mathcal{L}) \right\rbrace.\end{aligned}$$ Then, for $[x(\cdot)] \in \mathscr{D}(\bar{\mathcal{L}})$, we define $\bar{\mathcal{L}}[x(\cdot)]=[\mathcal{L}\bar{x}(\cdot)]$ if $\bar{x}(\cdot)$ is a representative of the class $[x(\cdot)]$. As a consequence of [@L22 Lemma 3.8] with $U(t,s) = T(t-s)$, one has

**Lemma 14**. *Let $(T(t))_{t \geq 0}$ be strongly continuous. Then* - *$\mathcal{L}$ and $\bar{\mathcal{L}}$ are well-defined linear operators;* - *The evolution semigroup $\left(\mathbf{T}^h\right)_{h \geq 0}$ is a $C_0$-semigroup in $B U C_C\left(\mathbb{R}, X\right)$ that leaves $C_0\left(\mathbb{R}, X\right)$ invariant. Moreover, $-\bar{\mathcal{L}}$ is the generator of its induced semigroup $\left(\bar{\mathbf{T}}^h\right)_{h \geq 0}$ in $\mathbb{Y}^C$.*

Assume that $(\mathbf{M}_{2})$ is satisfied. Then we can define $\mathscr{F}: BUC(\mathbb{R},X) \to BUC(\mathbb{R},X)$ by the formula $$(\mathscr{F} u)(t):= L( u_{t}), \ \forall u \in BUC(\mathbb{R},X).$$

**Theorem 15** ([@N17 Theorem 2.17]). *A function $u \in BUC_{C}(\mathbb{R},X)$ is a mild solution of Equation [\[intro.2\]](#intro.2){reference-type="eqref" reference="intro.2"} if and only if $$\mathcal{L} u =\mathscr{F} u +f.$$*

**Lemma 16**. *Let $\left(\mathbf{T}^h\right)_{h \geq 0}$ be the evolution semigroup associated with a given strongly continuous semigroup $(T(t))_{t \geq 0}$, and let $\mathcal{G}$ be the infinitesimal generator of $\left(\mathbf{T}^h\right)_{h \geq 0}$ in $BUC_C(\mathbb{R},X)$.
Then $$\mathcal{G} g=-\mathcal{L} g \ \text{ for } g \in \mathscr{D}(\mathcal G).$$* *Proof.* See [@N17 Lemma 2.1]. ◻ **Remark 17**. By [@L22 Lemma 3.9], for each $h \geq 0$ and $x \in B C\left(\mathbb{R}, X\right)$ the following assertion is valid: $$\label{sT.3.12} \sigma\left(\mathbf{T}^h x \right) \subset \sigma(x )$$ or, equivalently, $$\label{sT.3.13} \sigma\left(\bar{\mathbf{T}}^h \bar{x} \right) \subset \sigma(\bar{x}).$$ As shown below, in this case we can refine the spectral decomposition technique to obtain stronger assertions, whose usefulness will be shown in the next subsection when we deal with asymptotic periodic solutions. **Lemma 18**. *Let $\left(\mathbf{M}_0\right),\left(\mathbf{M}_1\right)$ and $\left(\mathbf{M}_2\right)$ be satisfied and let $f \in BUC_C(\mathbb{R}, X)$. Moreover, let $u \in BUC (\mathbb{R}, X)$ be a mild solution to Equation [\[intro.2\]](#intro.2){reference-type="eqref" reference="intro.2"}. Then $$\label{vi.23.a.1} \sigma (f) \subset \sigma (u).$$* *Proof.* We have $$\begin{aligned} S \mathscr{F} u(\xi) =\mathscr{F} u(\xi+1) = L u_{\xi+1}=L S u_{\xi}=(\mathscr{F} S u)(\xi). \end{aligned}$$ Hence, $\mathscr{F}$ induces operators in $\mathbb{Y}$ and $\mathbb{Y}^C$, both of which will be denoted by $\bar{\mathscr{F}}$.
Furthermore, $\bar{\mathscr{F}}$ commutes with $\bar{S}$, so that $$\label{m.1} \sigma(\bar{\mathscr{F}} \bar{u}) \subset \sigma(\bar{u}).$$ By [\[sT.3.13\]](#sT.3.13){reference-type="eqref" reference="sT.3.13"}, for each $h>0$ we have $\sigma\left(\bar{\mathbf{T}}^h \bar{u} \right) \subset \sigma(\bar{u})$ and therefore $$\label{m.2} \sigma\left(\frac{\bar{\mathbf{T}}^h \bar{u} - \bar{u}}{h}\right) \subset \sigma(\bar{u}).$$ Now, by Lemma [Lemma 14](#lem.m.3.8){reference-type="ref" reference="lem.m.3.8"} one has $\bar{u} \in \mathscr{D}(\mathcal G)$ and $$\label{m.3} \lim _{h \downarrow 0} \frac{\bar{\mathbf{T}}^h \bar{u}-\bar{u}}{h}=-\bar{\mathscr{F}} \bar{u}-\bar{f}.$$ Using [\[m.1\]](#m.1){reference-type="eqref" reference="m.1"}, [\[m.2\]](#m.2){reference-type="eqref" reference="m.2"}, [\[m.3\]](#m.3){reference-type="eqref" reference="m.3"} and Proposition [Proposition 7](#prop.m1){reference-type="ref" reference="prop.m1"}, we obtain that $$\sigma(\bar{f})=\sigma(-\bar{f}) =\sigma\left(\lim _{h \downarrow 0} \frac{\bar{\mathbf{T}}^h \bar{u}-\bar{u}}{h}+\bar{\mathscr{F}} \bar{u}\right) \subset \sigma(\bar{u}) .$$ This implies [\[vi.23.a.1\]](#vi.23.a.1){reference-type="eqref" reference="vi.23.a.1"}. The proof is completed. ◻ **Theorem 19**. *Let $\left(\mathbf{M}_0\right),\left(\mathbf{M}_1\right)$ and $\left(\mathbf{M}_2\right)$ be satisfied, and let $f \in BUC_C(\mathbb{R}, X)$ be asymptotic 1-periodic. If 1 is either not in, or is an isolated point of, $\sigma_{\Gamma}(\mathscr{P})$, then Equation [\[intro.2\]](#intro.2){reference-type="eqref" reference="intro.2"} has an asymptotic mild solution that is asymptotic 1-periodic whenever it has an asymptotic mild solution in $BUC_C\left(\mathbb{R}, X\right)$.
Moreover, if $\sigma_{\Gamma}(\mathscr{P}) \subset\{1\}$, then every asymptotic mild solution $u \in BUC_{C}\left(\mathbb{R}, X\right)$ of Equation [\[intro.2\]](#intro.2){reference-type="eqref" reference="intro.2"} is asymptotic 1-periodic.* *Proof.* By Lemma [Lemma 12](#lem.spec){reference-type="ref" reference="lem.spec"} one has $$\sigma(u) \subset \sigma_{\Gamma}(\mathscr{P}) \cup \sigma(f).$$ Since $f$ is asymptotic 1-periodic, $\sigma(f) \subset\{1\}.$ We set $$\Lambda:=\sigma_{\Gamma}(\mathscr{P}), \ \Lambda_1:=\{1\}, \ \text{ and } \ \Lambda_2:=\sigma_{\Gamma}(\mathscr{P}) \backslash\{1\}.$$ It is obvious that $\Lambda_1$ and $\Lambda_2$ are closed and disjoint. Then, the proof can be completed in the same way as that of [@L22 Theorem 3.15], with slight changes. ◻ **Theorem 20**. *Assume that* - *The conditions $\left(\mathbf{M}_0\right),\left(\mathbf{M}_1\right)$ and $\left(\mathbf{M}_2\right)$ are satisfied;* - *The monodromy operator $\mathscr{P}$ and $f$ satisfy $$\sigma_{\Gamma}(\mathscr{P}) \cap \sigma(f)=\emptyset.$$* *Then, Equation [\[intro.2\]](#intro.2){reference-type="eqref" reference="intro.2"} has an asymptotic mild solution $u$ such that $\sigma(u) \subset \sigma(f)$. Moreover, this solution is unique up to a function in $C_0\left(\mathbb{R}, X\right)$.* *Proof.* Again, by Lemma [Lemma 12](#lem.spec){reference-type="ref" reference="lem.spec"} one has $$\sigma(u) \subset \sigma_{\Gamma}(\mathscr{P}) \cup \sigma(f).$$ Now, we define $$\Lambda:=\sigma_{\Gamma}(\mathscr{P}) \cup \sigma(f), \ \Lambda_1:=\sigma(f), \ \text{ and } \ \Lambda_2:=\sigma_{\Gamma}(\mathscr{P}) \backslash \sigma(f).$$ Then, $\Lambda_1$ and $\Lambda_2$ are closed and disjoint. Let $\left(\bar{\mathbf{T}}^h\right)_{h \geq 0}$ be the induced evolution semigroup in $\mathbb{Y}$.
By Proposition [Proposition 7](#prop.m1){reference-type="ref" reference="prop.m1"}, $\left(\bar{\mathbf{T}}^h\right)_{h \geq 0}$ leaves $\mathbb{Y}_{\Lambda}, \mathbb{Y}_{\Lambda_1}$ and $\mathbb{Y}_{\Lambda_2}$ invariant. Moreover, $$\bar{u} \in \mathbb{Y}_{\Lambda}=\mathbb{Y}_{\Lambda_1} \oplus \mathbb{Y}_{\Lambda_2}.$$ Let us denote by $\mathbf Q$ the projection of $\mathbb{Y}_{\Lambda}$ onto $\mathbb{Y}_{\Lambda_1}$. Then, $\mathbf Q$ commutes with the evolution semigroup $\left(\bar{\mathbf{T}}^h\right)_{h \geq 0}$ and with $\mathscr{F}$. By Theorem [Theorem 15](#theo.X.1){reference-type="ref" reference="theo.X.1"} and Lemma [Lemma 16](#lem.X.2){reference-type="ref" reference="lem.X.2"}, $u(\cdot) \in BUC_C\left(\mathbb{R}, X\right)$ is a mild solution of Equation [\[intro.2\]](#intro.2){reference-type="eqref" reference="intro.2"} if and only if $u \in \mathscr{D}(\mathcal G)$ and $\mathcal{L}u=\mathscr F u+f$, where $\mathcal{L}u=-\mathcal{G} u$. Then, $$\begin{aligned} \mathbf Q \mathcal{L}u & =-\mathbf Q \mathcal G u =-\mathbf Q \lim _{h \rightarrow 0^{+}} \frac{\mathbf{T}^h u-u}{h} =-\lim _{h \rightarrow 0^{+}} \mathbf Q \frac{\mathbf{T}^h u-u}{h} \\ & =-\lim _{h \rightarrow 0^{+}} \frac{\mathbf{T}^h \mathbf Q u-\mathbf Q u}{h} =-\mathcal G \mathbf Q u =\mathcal{L}\mathbf Q u .\end{aligned}$$ Since $\mathbf Q f=f$ and $\mathbf Q$ commutes with $\mathscr{F}$, $$\begin{aligned} \mathbf Q \mathcal{L}u & =\mathbf Q \mathscr{F} u+ \mathbf Q f \\ \mathcal{L}\mathbf Q u & =\mathscr{F} \mathbf Q u+f .
\end{aligned}$$ By the same argument as in the proof of Theorem [Theorem 19](#theo.main.2){reference-type="ref" reference="theo.main.2"} one can see that $$\bar{f}=\mathbf Q \bar{f}=-\bar{\mathcal{G}} \mathbf Q \bar{u}-\bar{\mathscr{F}} \mathbf Q \bar{u} = (-\bar{\mathcal{G}} - \bar{\mathscr{F}})\mathbf Q \bar{u}.$$ Thus, there is a representative $w \in \mathscr{D}(\mathcal{G})$ of the class $\mathbf Q \bar{u}$ such that $-\mathcal G w- \mathscr F w \in \bar{f}$. This means that $w$ is an asymptotic mild solution of Equation [\[intro.2\]](#intro.2){reference-type="eqref" reference="intro.2"} with $$\sigma(w)= \sigma(\mathbf Q \bar{u}) \subset \Lambda_1=\sigma(f).$$ Finally, let $y \in B U C_C\left(\mathbb{R}, X\right)$ be another asymptotic mild solution of Equation [\[intro.2\]](#intro.2){reference-type="eqref" reference="intro.2"} with $\sigma(y) \subset \sigma(f)$. Then, $w-y$ is an asymptotic mild solution of the homogeneous equation and $$\sigma(w-y) \subset \sigma_{\Gamma}(\mathscr{P}) \cap \sigma(f) = \emptyset.$$ Therefore $w-y \in C_0\left(\mathbb{R}, X\right)$. The theorem is proved. ◻ As a simple consequence of this theorem we have **Corollary 21**. *Assume that* - *The conditions $\left(\mathbf{M}_0\right),\left(\mathbf{M}_1\right)$ and $\left(\mathbf{M}_2\right)$ are satisfied;* - *The monodromy operator $\mathscr{P}$ satisfies $1 \notin \sigma_{\Gamma}(\mathscr{P})$.* *Then, for each asymptotic 1-periodic function $f$, Equation [\[intro.2\]](#intro.2){reference-type="eqref" reference="intro.2"} has an asymptotic mild solution $u$ that is asymptotic 1-periodic.
Moreover, this solution is unique up to a function in $C_0\left(\mathbb{R}, X\right)$.* # Example To illustrate our previous results, we consider the following Lotka-Volterra model with diffusion and infinite delay $$\label{ex.1} \left\lbrace \begin{aligned} \frac{\partial w(t,\xi)}{\partial t} & =\frac{\partial^2 w(t, \xi)}{\partial \xi^2}+ \dfrac{1}{2}\int\limits_{-\infty}^{0}e^{\theta}w(t+\theta, \xi)d\theta + h(t,\xi) , \ t \geq 0, \ \xi \in[0, \pi], \\ w( t,0) & =w(t,\pi)=0, \quad t \geq 0, \\ w(\theta, \xi) & =w_0(\theta, \xi), \quad \theta \leq 0, \ \xi \in[0, \pi]. \end{aligned} \right.$$ To model this system in abstract form, we consider $X=L^2([0, \pi])$. The operator $A$ is defined by $$(A z)(\xi)=\frac{d^2 z(\xi)}{d \xi^2}$$ with domain $\mathscr D(A)=H_0^1(0, \pi) \cap H^2(0, \pi).$ It is well known that $A$ is the infinitesimal generator of a strongly continuous semigroup of operators $(T(t))_{t \geq 0}$. Moreover, the spectrum of $A$ consists of the eigenvalues $-n^2$ for $n \in \mathbb{N}$. We model [\[ex.1\]](#ex.1){reference-type="eqref" reference="ex.1"} in the phase space $C_{\frac{1}{2}}$ (see Example [Example 2](#ex.gam){reference-type="ref" reference="ex.gam"}) as follows. For $t \geq 0, \theta \leq 0, \xi \in[0, \pi]$ we define $$u(t)(\xi)=w(t, \xi), \quad \phi(\theta)(\xi)=w_0(\theta, \xi) \text { and } f(t)(\xi) = h(t, \xi).$$ Now, for $\phi \in C_{\frac{1}{2}}$ we put $$L\phi :=\dfrac{1}{2} \int_{-\infty}^{0}e^{\theta}\phi(\theta) d\theta.$$ Then, [\[ex.1\]](#ex.1){reference-type="eqref" reference="ex.1"} takes the following form $$\label{ex.2} \begin{cases}\dfrac{du(t)}{dt}=A u(t)+L(u_t) +f(t) & \text { for } t \geq 0 \\ u_0=\phi \in C_{\frac{1}{2}}.
\end{cases}$$ Notice that $$\begin{aligned} \Vert L \phi \Vert & = \Big\Vert \dfrac{1}{2} \int\limits_{-\infty}^{0}e^{\theta}\phi(\theta) d\theta \Big\Vert \leq \dfrac{1}{2} \int\limits_{-\infty}^{0} e^{\frac{\theta}{2}}\cdot \sup\limits_{\theta \leq 0} e^{\frac{\theta}{2}}\Vert \phi (\theta) \Vert d\theta\\ & \leq \dfrac{1}{2} \int\limits_{-\infty}^{0}e^{\frac{\theta}{2}}d\theta \cdot \Vert \phi \Vert_{\frac{1}{2}}\leq \Vert \phi \Vert_{\frac{1}{2}},\end{aligned}$$ i.e., $L$ is a bounded linear operator. Now we consider the corresponding homogeneous equation of [\[ex.1\]](#ex.1){reference-type="eqref" reference="ex.1"}, which has the form $$\label{ex.3} \dfrac{du(t)}{dt} = Au(t) + \dfrac{1}{2}\int\limits_{-\infty}^{0}e^{\theta}u(t+\theta)d\theta.$$ It is not easy to describe the evolution family $\mathscr{U}(t,s)$ explicitly, but by means of the spectral mapping theorem we can compute the intersection of the spectrum of the monodromy operator, $\sigma(\mathscr{U}(1,0))$, with $\Gamma$. By [@NS97 Section 4], the characteristic operator $\Delta(\lambda)$ is given by $$\Delta(\lambda) v=\lambda v-A v- \dfrac{1}{2} \int\limits_{-\infty}^0 e^{(1+\lambda) \theta} d \theta \, v, \quad v \in \mathscr{D}(A), \quad \text{Re} \lambda> -\dfrac{1}{2}> -1 .$$ Let $\lambda$ be a characteristic root. Then $$\lambda - \dfrac{1}{2(1+\lambda)} = -n^2 , \text{ for some } -n^2 \in \sigma_{p}(A).$$ We have $$\begin{aligned} \lambda - \dfrac{1}{2(1+\lambda)} =-n^2 \Leftrightarrow \lambda^2 + (1+n^2)\lambda + n^2 - \dfrac{1}{2} =0.\end{aligned}$$ Since $1+ n^2 >0$ and $n^2 - \dfrac{1}{2} >0$ for all $n \in \mathbb{N}$, and since $$\Delta = (1+ n^2)^2 - 4 \left( n^2 - \dfrac{1}{2} \right) = (1 - n^2)^2 + 2 > 0,$$ the characteristic equation has two distinct negative real roots: the roots are real because $\Delta > 0$, and both are negative because their sum $-(1+n^2)$ is negative while their product $n^2 - \dfrac{1}{2}$ is positive.
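The sign of the roots can also be checked numerically; the following is an illustrative sketch only, not part of the argument, verifying that both roots of $\lambda^2+(1+n^2)\lambda+n^2-\tfrac{1}{2}=0$ are real and negative for the first few eigenvalue indices $n$:

```python
import math

# Roots of lambda^2 + (1 + n^2)*lambda + (n^2 - 1/2) = 0 for the first
# few eigenvalue indices n of A.  The discriminant equals
# (1 - n^2)^2 + 2 > 0, so the roots are real; both should be negative.
for n in range(1, 8):
    b, c = 1 + n**2, n**2 - 0.5
    disc = b**2 - 4 * c
    assert disc == (1 - n**2)**2 + 2
    r1 = (-b + math.sqrt(disc)) / 2
    r2 = (-b - math.sqrt(disc)) / 2
    assert r1 < 0 and r2 < r1   # two distinct negative real roots
```

For $n = 1$ the roots are $-1 \pm \tfrac{1}{\sqrt{2}}$, both of which lie in the left half-plane.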
Thus, $$\sigma_{\Gamma} (\mathscr{P}) = \emptyset.$$ Now, if we take $f(t) = \sin \sqrt{t}$, then $$\begin{aligned} \lim\limits_{t \to \infty}\left[ f(t+1) - f(t) \right] & = \lim\limits_{t \to \infty} \left[ \sin \sqrt{t+1} - \sin \sqrt{t} \right] \\ & = \lim\limits_{t \to \infty} \left[ 2 \sin \dfrac{\sqrt{t+1} - \sqrt{t}}{2}\cdot \cos \dfrac{\sqrt{t+1} + \sqrt{t}}{2}\right] \\ & = 0,\end{aligned}$$ that is, $f$ is an asymptotic 1-periodic function. We have $\sigma (f) \subset \{1\}$ and $$\sigma_{\Gamma}(\mathscr{P}) \cap \sigma (f) = \emptyset.$$ Thus, by applying Corollary [Corollary 21](#fin.col){reference-type="ref" reference="fin.col"} one can see that Equation [\[ex.2\]](#ex.2){reference-type="eqref" reference="ex.2"} (or equivalently, Equation [\[ex.1\]](#ex.1){reference-type="eqref" reference="ex.1"}) has an asymptotic mild solution $u$ that is asymptotic 1-periodic. Moreover, this solution is unique up to a function in $C_0\left(\mathbb{R}, X\right)$. ## Acknowledgment {#acknowledgment .unnumbered} This research is funded by Vietnam National University, Hanoi (VNU) under project number QG.23.47. We also would like to thank the Vietnam Institute of Advanced Study in Mathematics (VIASM). Murakami S., Naito T. and N. V. Minh, Massera's theorem for almost periodic solutions of functional differential equations, *J. Math. Soc. Japan* **56** (1), 2004, pp. 247-268. N'Guerekata G.M. (2017), *Spectral Theory for Bounded Functions and Applications to Evolution Equations*, Nova Science Pub Inc, New York, United States. Nguyen Van Minh, Hideaki Matsunaga, Nguyen Duc Huy and Vu Trong Luong, A Katznelson-Tzafriri type theorem for difference equations and applications, *Proc. Amer. Math. Soc.* **150** (2022), 1105-1114. Luong, Vu Trong; Van Loi, Do; Van Minh, Nguyen; Matsunaga, Hideaki, A Massera theorem for asymptotic periodic solutions of periodic evolution equations, *J. Differential Equations* **329** (2022), 371--394. Hale J., Kato J.
(1978), Phase space for retarded equations with infinite delay, *Funkcial. Ekvac.* **21**, pp. 11-41. Hino Y., Murakami S., Naito T. (1991), *Functional Differential Equations with Infinite Delay*, Springer. Y. Hino, T. Naito, N. Van Minh and J. Son Shin (2001), *Almost Periodic Solutions of Differential Equations in Banach Spaces*, CRC Press, Taylor & Francis Group. Nishikawa, Takeshi; Nguyen, Van Minh; Naito, Toshiki, On the asymptotic periodic solutions of abstract functional differential equations, *Funkcial. Ekvac.* **47** (2004), no. 2, 307--327. J. Massera (1950), The existence of periodic solutions of systems of differential equations, *Duke Math. J.* **17**, 457-475. Liu, James; N'Guérékata, Gaston; Nguyen van Minh, A Massera type theorem for almost automorphic solutions of differential equations, *J. Math. Anal. Appl.* **299** (2004), no. 2, 587--599. Liu, Qing; Van Minh, Nguyen; Nguerekata, G.; Yuan, Rong, Massera type theorems for abstract functional differential equations, *Funkcial. Ekvac.* **51** (2008), no. 3, 329--350. Naito, Toshiki; Minh, Nguyen Van; Shin, Jong Son, A Massera type theorem for functional differential equations with infinite delay, *Japan. J. Math. (N.S.)* **28** (2002), no. 1, 31--49. Zitane Mohamed, Bensouda Charaf, Massera problem for non-autonomous retarded differential equations, *J. Math. Anal. Appl.* **402** (2013), no. 2, 453--462. Nguyen, Van Minh; N'Guerekata, Gaston; Siegmund, Stefan, Circular spectrum and bounded solutions of periodic evolution equations, *J. Differential Equations* **246** (2009), no. 8, 3089--3108. Naito, Toshiki; Shin, Jong Son, Evolution equations with infinite delay, Structure of functional equations and mathematical methods (Japanese) (Kyoto, 1996), *Sūrikaisekikenkyūsho Kōkyūroku* No. 984 (1997), 147--160. [^1]: The author gratefully acknowledges the many helpful suggestions of Prof. Nguyen Van Minh during the preparation of the paper.
--- abstract: | We study the four plactic-like monoids that arise by taking the meets and joins of stalactic and taiga congruences. We obtain the combinatorial objects associated with the meet monoids, establishing Robinson--Schensted-like correspondences and giving extraction and iterative insertion algorithms for these objects. We then obtain results on the sizes of classes of words equal in plactic-like monoids, show that some of these monoids are syntactic, and characterise their equational theories. address: - Department of Mathematics, The University of Manchester, Alan Turing Building, Oxford Rd, Manchester, M13 9PL, UK. - Center for Mathematics and Applications (NOVA Math), FCT NOVA, 2829-516 Caparica, Portugal. author: - Thomas Aird - Duarte Ribeiro bibliography: - Meet_and_join_plactic_like_monoids.bib date: September 18, 2023 title: Plactic-like monoids arising from meets and joins of stalactic and taiga congruences --- # Introduction {#section:introduction} Plactic-like monoids are quotients of the free monoid sharing the interesting property that their elements are uniquely identified with combinatorial objects, thus giving these monoids a fundamental role in algebraic combinatorics. The namesake of this family is the plactic monoid ${\mathsf{plac}}$, also known as the monoid of Young tableaux [@lothaire_2002]. Defined by Lascoux and Schützenberger [@LS1978], it has been widely applied in several areas of mathematics such as representation theory [@green2006polynomial], symmetric functions [@macdonald_symmetric] and crystal bases [@bump_crystalbases]. In the last decade, the equational theories of the plactic monoids have received great attention [@aird2023semigroup; @ckkmo_placticidentity; @izhakian_cloaktic; @kubat_identities]. In particular, Johnson and Kambites [@johnson_kambites_tropical_plactic] gave faithful tropical representations of finite-rank plactic monoids, thus showing that they all satisfy non-trivial identities. 
Other plactic-like monoids arise in the context of combinatorial Hopf algebras whose bases are indexed by combinatorial objects. In particular, the hypoplactic monoid ${\mathsf{hypo}}$ [@novelli_hypoplactic], the sylvester monoid ${\mathsf{sylv}}$ [@hivert_sylvester], the Baxter monoid ${\mathsf{baxt}}$ [@giraudo_baxter], the (left) stalactic monoid ${\mathsf{lSt}}$ [@hnt_stalactic] and the (right) taiga monoid ${\mathsf{rTg}}$ [@priez_binary_trees] have recently been studied with regards to their connections with crystal graphs [@cm_sylv_crystal; @cm_hypo_crystal; @cain_gray_malheiro_rewriting], and their equational theories [@cain_johnson_kambites_malheiro_representations_2022; @cm_identities; @cain_malheiro_ribeiro_sylvester_baxter_2023; @cain_malheiro_ribeiro_hypoplactic_2022; @han2021preprint]. The Baxter monoid was introduced by Giraudo [@giraudo_baxter] as the analogue of the plactic monoid for Hopf algebras indexed by Baxter permutations. Its corresponding congruence was shown to be the meet of the sylvester congruence and its dual. Thus, its combinatorial objects are pairs of twin binary search trees, that is, pairs of combinatorial objects of the sylvester monoid and its dual that satisfy certain properties. From this connection, a Robinson--Schensted-like correspondence is established, along with several combinatorial properties. On the other hand, the (right) taiga monoid was defined by Priez [@priez_binary_trees] as the quotient of the free monoid by the join of the sylvester and (right) stalactic congruences. Its associated combinatorial objects are binary search trees with multiplicities, and a Robinson--Schensted-like correspondence and 'hook-length'-like formula are given. In this paper, we introduce four plactic-like monoids by factoring the free monoid over an ordered alphabet by the meets and joins of stalactic and taiga congruences, in the vein of Giraudo and Priez. 
It is organised as follows: In Section [2](#section:preliminaries){reference-type="ref" reference="section:preliminaries"}, we first give the necessary background on words. Then, for completeness, we present a detailed description of the combinatorial objects of the stalactic and taiga monoids, giving formal proofs of Robinson--Schensted-like correspondences. We end this section with left and right-oriented definitions of the stalactic and taiga monoids. We define the meet and join plactic-like monoids in Section [3](#section:meets_and_joins_of_stalactic_and_taiga_congruences){reference-type="ref" reference="section:meets_and_joins_of_stalactic_and_taiga_congruences"}, for which we give presentations, characterise when words are equal in them, and study their compatibility properties. Section [4](#section:Robinson_Schensted_like_correspondences){reference-type="ref" reference="section:Robinson_Schensted_like_correspondences"} focuses on the combinatorial objects associated with the meet-stalactic and meet-taiga monoids. We define $\mathrm{P}$-symbols by taking pairs of stalactic tableaux and binary search trees with multiplicities, respectively, which satisfy certain properties. We similarly define $\mathrm{Q}$-symbols and establish Robinson--Schensted-like correspondences. These symbols are analogues of semistandard Young tableaux and recording tableaux, respectively, in the case of the plactic monoid. We then give algorithms to extract words from $\mathrm{P}$-symbols. Finally, we give iterative versions of the insertion algorithms, allowing us to easily compute the concatenation of words under these congruences. In Section [5](#section:Counting_in_the_stalactic_and_taiga_monoids){reference-type="ref" reference="section:Counting_in_the_stalactic_and_taiga_monoids"}, we provide explicit formulas for the sizes of congruence classes in stalactic and taiga monoids, with the exception of the meet-stalactic and meet-taiga monoids. 
For these, we show that computing the size of congruence classes is equivalent to counting the number of linear extensions in specific posets, for which we give a recursive formula. Then, we provide a 'hook-length'-like formula to compute the number of stalactic tableaux which form a pair of twin stalactic tableaux with a given one, and give bounds for the corresponding number for binary search trees with multiplicities. We study the syntacticity of plactic-like monoids in Section [6](#section:Syntacticity_of_plactic_like_monoids){reference-type="ref" reference="section:Syntacticity_of_plactic_like_monoids"}, with respect to the shape functions of combinatorial objects. Finally, in Section [7](#section:Equational_theories_of_meet_and_join_stalactic_and_taiga_monoids){reference-type="ref" reference="section:Equational_theories_of_meet_and_join_stalactic_and_taiga_monoids"}, we give a characterisation of the identities satisfied by the meet and join plactic-like monoids, which can be verified in polynomial time. We then produce a finite basis for each of these monoids and compute their axiomatic ranks. # Preliminaries {#section:preliminaries} We first recall the necessary definitions and notations. For the necessary background on semigroups and monoids, see [@howie1995fundamentals]; for presentations, see [@higgins1992techniques]. Let $\mathbb{N}= \{1,2,\dots\}$ denote the set of natural numbers, without zero. For $a,b \in \mathbb{N}$, we denote by $[a]$ the set $\{1 < \cdots < a\}$, and by $[[a,b]]$ the set $\{ k \in \mathbb{N}\colon \min{(a,b)} \leq k \leq \max{(a,b)} \}$, or simply $[a,b]$ when $a<b$. Permutations are written in one-line notation, so they can be viewed as words of length $n$ over $[n]$, with no repetitions. We denote the inverse of a permutation $\sigma$ of $[n]$ by $\sigma^{-1}$. 
## Words {#subsection:words} For any non-empty set $\mathcal{X}$, we denote by $\mathcal{X}^*$ the free monoid generated by $\mathcal{X}$, that is, the set of all words over $\mathcal{X}$ under concatenation. We denote the empty word by $\varepsilon$, and refer to $\mathcal{X}$ as an *alphabet*, and to its elements as *letters*. For a word $\mathbf{w}\in \mathcal{X}^*$, we denote its length by $|\mathbf{w}|$ and, for each $x \in \mathcal{X}$, we denote the number of occurrences of $x$ in $\mathbf{w}$ by $|\mathbf{w}|_x$. If $|\mathbf{w}|_x = 1$, we say $x$ is a *simple letter* of $\mathbf{w}$, and we denote the restriction of $\mathbf{w}$ to its simple letters by $\overline{\mathbf{w}}$. For words $\mathbf{u},\mathbf{v}\in \mathcal{X}^*$ we say that $\mathbf{u}$ is a *factor* of $\mathbf{v}$ if there exist $\mathbf{v}_1,\mathbf{v}_{2} \in \mathcal{X}^*$ such that $\mathbf{v}= \mathbf{v}_1 \mathbf{u}\mathbf{v}_2$. The subset of $\mathcal{X}$ of letters that occur in $\mathbf{w}$ is called the *support* of $\mathbf{w}$, denoted by $\mathrm{supp}\parens{\mathbf{w}}$, and the function from $\mathcal{X}$ to $\mathbb{N}_0$ given by $x \mapsto |\mathbf{w}|_x$ is called the *content* of $\mathbf{w}$, denoted by $\mathrm{cont}\parens{\mathbf{w}}$. Clearly, two words that share the same content also share the same support. We denote by $\overrightarrow{\rho_\mathbf{w}}$ (resp. $\overleftarrow{\rho_\mathbf{w}}$) the bijection from $\mathrm{supp}\parens{\mathbf{w}}$ to $[|\mathrm{supp}\parens{\mathbf{w}}|]$ mapping each letter $x \in \mathrm{supp}\parens{\mathbf{w}}$ to its position in the ordering of $\mathrm{supp}\parens{\mathbf{w}}$ according to its first (resp. last) occurrence, when reading $\mathbf{w}$ from left-to-right. One can extend this bijection to an isomorphism from $\mathrm{supp}\parens{\mathbf{w}}^*$ to $[|\mathrm{supp}\parens{\mathbf{w}}|]^*$. *Remark 1*. 
The definition of $\overleftarrow{\rho_\mathbf{w}}$ might be slightly counterintuitive since we order by last occurrences from left-to-right instead of first occurrences from right-to-left, but this definition greatly simplifies the notation and arguments used throughout this paper. For a word $\mathbf{w}$ over a totally ordered alphabet $\mathcal{X}$, the *standardised word* of $\mathbf{w}$, denoted by $\mathrm{Std}\parens{\mathbf{w}}$, is the unique permutation of size $|\mathbf{w}|$ given by reading $\mathbf{w}$ from left-to-right, attaching the subscript $i$ to the $i$-th occurrence of a letter, then replacing each letter by its rank with regards to the lexicographic order $$a_i < b_j \Leftrightarrow (a < b) \vee ((a = b) \wedge (i < j)).$$ For $\mathrm{supp}\parens{\mathbf{w}} = \{ a_1 < \cdots < a_k\}$, the *packed* word of $\mathbf{w}$ is the unique word obtained by replacing each $a_i$ in $\mathbf{w}$ with $i$. The *restriction* of $\mathbf{w}$ to a subset $\mathcal{S}$ of $\mathcal{X}$, denoted by $\mathbf{w}[S]$, is the word obtained from $\mathbf{w}$ by removing any letter not in $\mathcal{S}$. The *reverse* of $\mathbf{w}$ is the word whose $i$-th letter is the $(|\mathbf{w}|+1-i)$-th letter of $\mathbf{w}$. For an alphabet $\mathcal{X}$ with $n$ letters, the *Schützenberger involution* is the unique antiautomorphism of $\mathcal{X}^*$ induced by the map sending the $i$-th letter of $\mathcal{X}$ to its $(n+1-i)$-th letter. 
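For illustration, the standardisation and packing maps are straightforward to compute; a small sketch follows (the function names are ours, not standard notation):

```python
def standardize(w):
    """Standardised word Std(w): pair each position with (letter, occurrence
    number), sort those pairs lexicographically (earlier occurrences of equal
    letters rank smaller), and replace each letter by its rank."""
    seen = {}
    keyed = []
    for pos, a in enumerate(w):
        seen[a] = seen.get(a, 0) + 1
        keyed.append((a, seen[a], pos))
    ranks = {pos: r + 1 for r, (_, _, pos) in enumerate(sorted(keyed))}
    return [ranks[pos] for pos in range(len(w))]

def pack(w):
    """Packed word: replace the i-th smallest letter of supp(w) by i."""
    support = sorted(set(w))
    return [support.index(a) + 1 for a in w]
```

For instance, `standardize("baab")` gives the permutation `[3, 1, 2, 4]`, and `pack("bdb")` gives `[1, 2, 1]`.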
We now define a variation of the standardised word: For a word $\mathbf{w}$ over a totally ordered alphabet $\mathcal{X}$, the *reverse-standardised word* of $\mathbf{w}$, denoted by $\mathrm{rStd}\parens{\mathbf{w}}$, is the unique permutation of size $|\mathbf{w}|$ given by reading $\mathbf{w}$ from left-to-right, attaching the subscript $i$ to the $i$-th occurrence of a letter, then replacing each letter by its rank with regards to the lexicographic order $$a_i < b_j \Leftrightarrow (a < b) \vee ((a = b) \wedge (i > j)).$$ Notice that, if two letters in a word are different, they are ordered in the same way as when computing the standardised word, but if they are the same, the order is reversed. This will be useful to distinguish strictly decreasing sequences. Let $\mathcal{X}$ be an alphabet and let $\equiv$ be a congruence on $\mathcal{X}^*$. We say the factor monoid $\mathcal{X}^*/{\equiv}$ is compatible with: - *standardisation* when $\mathbf{u}$ and $\mathbf{v}$ are congruent if and only if they share the same content and their standardised words are congruent, for any $\mathbf{u}, \mathbf{v}\in \mathcal{X}^*$; - *reverse-standardisation* when $\mathbf{u}$ and $\mathbf{v}$ are congruent if and only if they share the same content and their reverse-standardised words are congruent, for any $\mathbf{u}, \mathbf{v}\in \mathcal{X}^*$; - *packing* when $\mathbf{u}$ and $\mathbf{v}$ are congruent if and only if they share the same content and their packed words are congruent, for any $\mathbf{u}, \mathbf{v}\in \mathcal{X}^*$; - *restriction to alphabet subsets* when $\mathbf{u}$ and $\mathbf{v}$ are congruent only if their restrictions to $\mathcal{S}$ are congruent, for any $\mathbf{u}, \mathbf{v}\in \mathcal{X}^*$ and any subset $\mathcal{S}$ of $\mathcal{X}$; - *restriction to alphabet intervals* when $\mathbf{u}$ and $\mathbf{v}$ are congruent only if their restrictions to $\mathcal{I}$ are congruent, for any $\mathbf{u}, \mathbf{v}\in \mathcal{X}^*$ and any interval
$\mathcal{I}$ of $\mathcal{X}$; - *word reversals* when $\mathbf{u}$ and $\mathbf{v}$ are congruent if and only if their reverses are congruent, for any $\mathbf{u}, \mathbf{v}\in \mathcal{X}^*$; - the *Schützenberger involution* when $|\mathcal{X}| \in \mathbb{N}$, and $\mathbf{u}$ and $\mathbf{v}$ are congruent if and only if their images under the Schützenberger involution are congruent, for any $\mathbf{u}, \mathbf{v}\in \mathcal{X}^*$. ## Combinatorial objects and insertion algorithms {#subsection:combinatorial_objects_and_insertion_algorithms} We now give the necessary background on the combinatorial objects needed to define the stalactic and taiga monoids. For completeness, we give a full proof of the Robinson--Schensted-like correspondences for these objects, which we will require for the following sections. For background on right strict, left strict and pairs of twin binary search trees, see [@giraudo_baxter]. We denote any empty combinatorial object by $\perp$. We introduce unlabelled and labelled combinatorial objects, and we refer to the underlying unlabelled objects of labelled objects as *shapes*. ### Composition diagrams, stalactic tableaux and patience-sorting tableaux {#subsubsection:composition_diagrams_stalactic_tableaux_and_patience_sorting_tableaux} A *composition diagram* is a top-aligned finite collection of cells, with no order imposed on the length of the columns. A *stalactic tableau* is an $\mathbb{N}$-labelled composition diagram where all cells in a column share the same label, and no two columns share the same label. An example of a stalactic tableau is the following: $$\label{example:stalactic_tableau} \tikz[tableau]\matrix{ 2 \& 1 \& 3 \& 5 \& 4\\ 2 \& 1 \& \& 5 \& \\ \& 1 \& \& \& \\ };$$ We say that a column in a composition diagram or stalactic tableau is *simple* if it only has one cell. 
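Concretely, since every cell in a column carries the same label and labels are pairwise distinct, a stalactic tableau is determined by its sequence of columns together with their heights. A minimal sketch of this encoding (our own representation, not notation from the text):

```python
# A stalactic tableau encoded as an ordered list of (label, height) pairs,
# read left to right; the stalactic conditions amount to pairwise distinct
# labels and heights >= 1.
def is_stalactic(cols):
    labels = [a for a, _ in cols]
    return len(set(labels)) == len(labels) and all(h >= 1 for _, h in cols)

# The tableau displayed above: columns 2, 1, 3, 5, 4 of heights 2, 3, 1, 2, 1.
T = [(2, 2), (1, 3), (3, 1), (5, 2), (4, 1)]
supp = {a for a, _ in T}                          # supp(T)
rho = {a: i + 1 for i, (a, _) in enumerate(T)}    # rho_T: position in top row
```

Here `supp` and `rho` realise $\mathrm{supp}(T)$ and $\rho_T$ for the displayed tableau.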
A remark about an abuse of language: we refer to a column whose cells are labelled by some letter $a$ by 'the column labelled $a$', if the context is clear. The subset of $\mathbb{N}$ of letters that label columns of a stalactic tableau $T$ is called the *support* of $T$, denoted by $\mathrm{supp}\parens{T}$. We define $\rho_T \colon \mathrm{supp}\parens{T} \to [|\mathrm{supp}\parens{T}|]$ as the function which gives the position of a letter in the top row of $T$, when reading it from left-to-right. If the context is clear, we denote by $\rho_T$ the top row of $T$. Algorithm [[rStLI]{.sans-serif}](#alg:RightStalacticLeftInsertion) allows one to insert a letter into a stalactic tableau, either at the bottom of a previously existing column, or by creating a new column to the left of the tableau: [\[alg:RightStalacticLeftInsertion\]]{#alg:RightStalacticLeftInsertion label="alg:RightStalacticLeftInsertion"} the resulting stalactic tableau $a \rightarrow T$. We can define a *Left-Stalactic Right-Insertion* ([[lStRI]{.sans-serif}](#alg:RightStalacticLeftInsertion)) algorithm by replacing "right" with "left" and "left" with "right", respectively, in the previous algorithm. Using algorithm [[rStLI]{.sans-serif}](#alg:RightStalacticLeftInsertion) (resp. [[lStRI]{.sans-serif}](#alg:RightStalacticLeftInsertion)), one can compute a unique stalactic tableau from a word $\mathbf{w}\in \mathbb{N}^*$: Starting from the empty tableau, read $\mathbf{w}$ from right-to-left (resp. left-to-right) and insert its letters one-by-one into the tableau. The resulting tableau is denoted by $\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}}$ (resp. $\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}$). **Example 2**. 
Computing $\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{212511354}$: $$\begin{gathered} \tikz[tableau,]\matrix{ 4 \\ }; \quad\xleftarrow{5}\quad \tikz[tableau,]\matrix{ 5 \& 4 \\ }; \quad\xleftarrow{3}\quad \tikz[tableau,]\matrix{ 3 \& 5 \& 4 \\ }; \quad\xleftarrow{1}\quad \tikz[tableau,]\matrix{ 1 \& 3 \& 5 \& 4 \\ }; \quad\xleftarrow{1}\\[10pt] \xleftarrow{1}\quad \tikz[tableau,]\matrix{ 1 \& 3 \& 5 \& 4 \\ 1 \\ }; \quad\xleftarrow{5}\quad \tikz[tableau,]\matrix{ 1 \& 3 \& 5 \& 4 \\ 1 \& \& 5 \\ }; \quad\xleftarrow{2}\quad \tikz[tableau,]\matrix{ 2 \& 1 \& 3 \& 5 \& 4 \\ \& 1 \& \& 5 \\ }; \quad\xleftarrow{1}\\[10pt] \xleftarrow{1}\quad \tikz[tableau,]\matrix{ 2 \& 1 \& 3 \& 5 \& 4 \\ \& 1 \& \& 5 \\ \& 1 \\ }; \quad\xleftarrow{2}\quad \tikz[tableau,]\matrix{ 2 \& 1 \& 3 \& 5 \& 4 \\ 2 \& 1 \& \& 5 \\ \& 1 \\ }; \end{gathered}$$ The *column reading* of a stalactic tableau $T$ is the word $\mathrm{read}\parens{T}$ given by reading the labels of each column from top to bottom, and from leftmost column to rightmost column. As such, the column reading of a stalactic tableau is a product of words which are powers of letters, where the $i$-th word corresponds to the $i$-th column of the tableau. Notice that reading columns from bottom to top gives the same output. It is also clear that applying algorithm [[rStLI]{.sans-serif}](#alg:RightStalacticLeftInsertion) or [[lStRI]{.sans-serif}](#alg:RightStalacticLeftInsertion) to a column reading of a tableau gives back the same tableau. Thus, we have the following: **Lemma 3**. *For $\mathbf{w}\in \mathbb{N}^*$, the column reading of $\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}$ (resp. $\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}}$) is the product of powers of the letters of $\mathbf{w}$, where the exponents are the number of occurrences of the letters and the order of the letters is given by $\overrightarrow{\rho_\mathbf{w}}$ (resp. 
$\overleftarrow{\rho_\mathbf{w}}$).* A *(standard) increasing* (resp. *decreasing*) *patience-sorting tableau* is a composition diagram, labelled by a permutation of $[n]$, where $n$ is the number of cells, such that the top row is strictly increasing from left-to-right and the columns are strictly increasing (resp. strictly decreasing) from top-to-bottom. Examples of increasing and decreasing patience-sorting tableaux, respectively, are the following: $$\label{example:patience_sorting_tableaux} \tikz[tableau]\matrix{ 1 \& 2 \& 4 \& 7 \& 9\\ 3 \& 5 \& 8 \& \& \\ \& 6 \& \& \& \\ }; \quad \text{and} \quad \tikz[tableau]\matrix{ 3 \& 6 \& 7 \& 8 \& 9\\ 1 \& 5 \& \& 4 \& \\ \& 2 \& \& \& \\ };$$ *Remark 4*. The given definitions are variations of the usual definition of patience-sorting tableaux (or piles). Notice that the tableaux defined here are top-aligned. The following insertion algorithm will also differ from the usual patience-sorting algorithm. To obtain an increasing patience-sorting tableau from a permutation, we use Algorithm [[iPsRI]{.sans-serif}](#alg:IncreasingPatienceSortingRightInsertion) iteratively: [\[alg:IncreasingPatienceSortingRightInsertion\]]{#alg:IncreasingPatienceSortingRightInsertion label="alg:IncreasingPatienceSortingRightInsertion"} the resulting increasing patience-sorting tableau $T \leftarrow a$. We can define a *Decreasing Patience-Sorting Left-Insertion* ([[dPsLI]{.sans-serif}](#alg:IncreasingPatienceSortingRightInsertion)) algorithm by replacing "right" with "left", "left" with "right", and "greater" with "less", respectively, in the previous algorithm. Using algorithm [[iPsRI]{.sans-serif}](#alg:IncreasingPatienceSortingRightInsertion) (resp. [[dPsLI]{.sans-serif}](#alg:IncreasingPatienceSortingRightInsertion)), one can compute a unique increasing (resp. 
decreasing) patience-sorting tableau from a word $\mathbf{w}\in \mathbb{N}^*$: Starting from the empty tableau, read $\mathrm{rStd}\parens{\overrightarrow{\rho_{\mathbf{w}}}(\mathbf{w})}^{-1}$ (resp. $\mathrm{rStd}\parens{\overleftarrow{\rho_{\mathbf{w}}}(\mathbf{w})}^{-1}$) from left-to-right (resp. right-to-left) and insert its letters one-by-one into the tableau. The resulting tableau is denoted by $\mathrm{Q}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}$ (resp. $\mathrm{Q}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}}$). In other words, to compute $\mathrm{Q}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}$ (resp. $\mathrm{Q}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}}$), we first replace each letter in $\mathbf{w}$ by its image under $\overrightarrow{\rho_{\mathbf{w}}}$ (resp. $\overleftarrow{\rho_{\mathbf{w}}}$), reverse-standardise it, and then compute the inverse of the resulting permutation. **Lemma 5**. *Let $\mathbf{w}\in \mathbb{N}^*$. Then, $\mathrm{rStd}\parens{\overrightarrow{\rho_{\mathbf{w}}}(\mathbf{w})}^{-1}$ (resp. $\mathrm{rStd}\parens{\overleftarrow{\rho_{\mathbf{w}}}(\mathbf{w})}^{-1}$) is a product of strictly decreasing words, with the $i$-th word giving the positions of the letter $\overrightarrow{\rho_{\mathbf{w}}}^{-1}(i)$ (resp. $\overleftarrow{\rho_{\mathbf{w}}}^{-1}(i)$) occurring in $\mathbf{w}$, in decreasing order.* *Proof.* The result follows from the fact that reverse-standardisation orders the letters of a word by their natural order first, and then by the reverse of the order of their occurrences. ◻ Notice that the last letter of each strictly decreasing word is less than the first letters of words to its right, since the former corresponds to the first occurrence of a letter in $\mathbf{w}$ and the latter corresponds to the last occurrences of letters that only appear after it in $\mathbf{w}$. **Example 6**. Computing $\mathrm{Q}^\leftarrow_{{\mathsf{rSt}}}\parens{212511354}$: Let $\mathbf{w}= 212511354$. 
Then, $$\mathrm{rStd}\parens{\overleftarrow{\rho_{\mathbf{w}}}(\mathbf{w})}^{-1} = \mathrm{rStd}\parens{121422345}^{-1} = 251843679^{-1} = 316527849.$$ $$\begin{gathered} \tikz[tableau,]\matrix{ 9 \\ }; \quad\xleftarrow{4}\quad \tikz[tableau,]\matrix{ 4 \& 9 \\ }; \quad\xleftarrow{8}\quad \tikz[tableau,]\matrix{ 8 \& 9 \\ 4 \\ }; \quad\xleftarrow{7}\quad \tikz[tableau,]\matrix{ 7 \& 8 \& 9 \\ \& 4 \\ }; \quad\xleftarrow{2}\\[10pt] \xleftarrow{2}\quad \tikz[tableau,]\matrix{ 2 \& 7 \& 8 \& 9 \\ \& \& 4 \\ }; \quad\xleftarrow{5}\quad \tikz[tableau,]\matrix{ 5 \& 7 \& 8 \& 9 \\ 2 \& \& 4 \\ }; \quad\xleftarrow{6}\quad \tikz[tableau,]\matrix{ 6 \& 7 \& 8 \& 9 \\ 5 \& \& 4 \\ 2 \\ }; \quad\xleftarrow{1}\\[10pt] \xleftarrow{1}\quad \tikz[tableau,]\matrix{ 1 \& 6 \& 7 \& 8 \& 9 \\ \& 5 \& \& 4 \\ \& 2 \\ }; \quad\xleftarrow{3}\quad \tikz[tableau,]\matrix{ 3 \& 6 \& 7 \& 8 \& 9 \\ 1 \& 5 \& \& 4 \\ \& 2 \\ }; \end{gathered}$$ The *column reading* of an increasing (resp. decreasing) patience-sorting tableau $T$ is the word $\mathrm{read}\parens{T}$ given by reading the labels of each column from bottom to top (resp. from top to bottom), and from leftmost column to rightmost column. As such, the column reading of a patience-sorting tableau will be a product of strictly decreasing words, with the $i$-th word corresponding to the $i$-th column of the tableau. Furthermore, applying algorithm [[iPsRI]{.sans-serif}](#alg:IncreasingPatienceSortingRightInsertion) and [[dPsLI]{.sans-serif}](#alg:IncreasingPatienceSortingRightInsertion) to a column reading of a tableau gives back the same tableau. Thus, we can conclude that: **Lemma 7**. 
*For any $\mathbf{w}\in \mathbb{N}^*$, we have $$\mathrm{read}\parens{\mathrm{Q}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}} = \mathrm{rStd}\parens{\overrightarrow{\rho_{\mathbf{w}}}(\mathbf{w})}^{-1} \quad \text{and} \quad \mathrm{read}\parens{\mathrm{Q}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}}} = \mathrm{rStd}\parens{\overleftarrow{\rho_{\mathbf{w}}}(\mathbf{w})}^{-1}.$$* As such, we have that the stalactic $\mathrm{P}$ and $\mathrm{Q}$-symbols have the same structure, in the following sense: **Lemma 8**. *Let $\mathbf{w}\in \mathbb{N}^*$. Then, $\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}$ (resp. $\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}}$) and $\mathrm{Q}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}$ (resp. $\mathrm{Q}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}}$) have the same shape. Furthermore, the letters of $\mathbf{w}$ given by the $i$-th column of $\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}$ (resp. $\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}}$) have their position given by the letters of the $i$-th column of $\mathrm{Q}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}$ (resp. $\mathrm{Q}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}}$).* *Proof.* The result follows from Lemmas [Lemma 3](#lemma:column_reading_stalactic_tableau){reference-type="ref" reference="lemma:column_reading_stalactic_tableau"}, [Lemma 5](#lemma:rstd_rho_inverse_positions_of_word){reference-type="ref" reference="lemma:rstd_rho_inverse_positions_of_word"}, and [Lemma 7](#lemma:column_reading_patience_sorting_tableau){reference-type="ref" reference="lemma:column_reading_patience_sorting_tableau"}. ◻ Now, we show the analogue of the Robinson--Schensted correspondence for pairs of stalactic and patience-sorting tableaux. We show the result for the left-insertion algorithm case: **Theorem 9**. 
*The map $\mathbf{w}\mapsto (\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}, \mathrm{Q}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}})$ is a bijection between the elements of $\mathbb{N}^*$ and the set formed by the pairs $(T,S)$ where* (i) *$T$ is a stalactic tableau.* (ii) *$S$ is an increasing patience-sorting tableau.* (iii) *$T$ and $S$ have the same shape.* *Proof.* Let $\mathbf{w}\in \mathbb{N}^*$. We can see that the map is well-defined by the definition of $\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}$ and $\mathrm{Q}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}$ and Lemma [Lemma 8](#lemma:pxst_qxst_same_shape){reference-type="ref" reference="lemma:pxst_qxst_same_shape"}. Furthermore, it is possible to reconstruct $\mathbf{w}$ from $(\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}, \mathrm{Q}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}})$ by taking their column readings: the column reading of $\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}$ gives $\mathrm{cont}\parens{\mathbf{w}}$ and $\overrightarrow{\rho_{\mathbf{w}}}$ by Lemma [Lemma 3](#lemma:column_reading_stalactic_tableau){reference-type="ref" reference="lemma:column_reading_stalactic_tableau"}, while that of $\mathrm{Q}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}$, by Lemmas [Lemma 5](#lemma:rstd_rho_inverse_positions_of_word){reference-type="ref" reference="lemma:rstd_rho_inverse_positions_of_word"} and [Lemma 7](#lemma:column_reading_patience_sorting_tableau){reference-type="ref" reference="lemma:column_reading_patience_sorting_tableau"}, gives the positions in $\mathbf{w}$ of the letters recovered from $\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}$.
In other words, for $\mathbf{u}= \mathrm{read}\parens{\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}}$ and $\sigma = \mathrm{read}\parens{\mathrm{Q}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}}$, we have that the $\sigma(i)$-th letter of $\mathbf{w}$ is given by the $i$-th letter of $\mathbf{u}$, for each $1 \leq i \leq |\mathbf{w}|$. As such, the map is injective. For surjectivity, let $(T,S)$ satisfy conditions *(i)--(iii)*. Let $\mathbf{u}= \mathrm{read}\parens{T}$ and $\sigma = \mathrm{read}\parens{S}$, then define $\mathbf{w}\in \mathbb{N}^*$ such that the $\sigma(i)$-th letter of $\mathbf{w}$ is given by the $i$-th letter of $\mathbf{u}$, for each $1 \leq i \leq |\mathbf{u}|$. It is clear that $\mathrm{cont}\parens{\mathbf{w}} = \mathrm{cont}\parens{\mathbf{u}}$. On the other hand, since $T$ and $S$ have the same shape, it follows from the observations made on column readings and the definition of $\mathbf{w}$ that the letters of $\mathbf{w}$ given by the $i$-th column of $T$ have their position given by the letters of the $i$-th column of $S$. In particular, the position of the first occurrence of a letter is given by the topmost letter in its column. Since $S$ is an increasing patience-sorting tableau, the topmost letter in a column is less than every letter in any column to the right. As such, we have $\overrightarrow{\rho_\mathbf{w}} = \overrightarrow{\rho_\mathbf{u}} = \rho_T$ and therefore $\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}} = T$. On the other hand, $\mathrm{rStd}\parens{\overrightarrow{\rho_{\mathbf{w}}}(\mathbf{w})}^{-1}$ is the unique product of strictly decreasing words, where the $i$-th word gives the positions of the letter $\overrightarrow{\rho_{\mathbf{w}}}^{-1}(i)$ in $\mathbf{w}$, which, by the previous paragraph, corresponds to the $i$-th column of $T$. 
As such, by Lemma [Lemma 7](#lemma:column_reading_patience_sorting_tableau){reference-type="ref" reference="lemma:column_reading_patience_sorting_tableau"}, we have that $\mathrm{read}\parens{\mathrm{Q}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}} = \sigma$, and hence, since $\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}$ and $S$ have the same shape, $\mathrm{Q}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}} = S$. Thus, the map is surjective. ◻ By symmetrical reasoning, we obtain the result for the right-insertion algorithm case: **Theorem 10**. *The map $\mathbf{w}\mapsto (\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}}, \mathrm{Q}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}})$ is a bijection between the elements of $\mathbb{N}^*$ and the set formed by the pairs $(T,S)$ where* (i) *$T$ is a stalactic tableau.* (ii) *$S$ is a decreasing patience-sorting tableau.* (iii) *$T$ and $S$ have the same shape.* ### Binary trees with multiplicities and binary search trees with multiplicities {#subsubsection:binary_trees_with_multiplicities_and_binary_search_trees_with_multiplicities} A *binary tree with multiplicities* (BTM) is a rooted planar binary tree where each node is labelled by a positive integer, called a *multiplicity*. A *binary search tree with multiplicities* (BSTM) is a BTM where each node is further labelled by a letter of $\mathbb{N}$, such that the letter of the left (resp. right) child of any node is strictly less than (resp. strictly greater than) the letter of said node. In other words, if we remove the multiplicities of the nodes in a BSTM, we obtain a binary search tree. For ease of notation, we write multiplicities as superscripts. 
An example of a BSTM is the following: $$\label{example:bstm} \begin{tikzpicture}[tinybst,baseline=-4mm] \node {$2^2$} child { node {$1^2$} } child { node {$4^3$} child { node {$3^1$} } child { node {$5^1$} } }; \end{tikzpicture}$$ To be clear on terminology, by *label* of a node of a BSTM, we mean the pair consisting of the letter and multiplicity that labels the node. If we are just referring to the letter, we simply say the *letter* of a node. We say a node in a BSTM is *simple* if it has multiplicity $1$. Algorithm [[TgLI]{.sans-serif}](#alg:TaigaLeafInsertion) allows one to insert a letter into a BSTM, either by adding a new leaf labelled by it or by increasing the multiplicity of a node labelled by it, and still obtain a BSTM: [\[alg:TaigaLeafInsertion\]]{#alg:TaigaLeafInsertion label="alg:TaigaLeafInsertion"} the resulting tree $T \uparrow a$. Using Algorithm [[TgLI]{.sans-serif}](#alg:TaigaLeafInsertion), one can compute a unique BSTM from a word $\mathbf{w}\in \mathbb{N}^*$: Starting from the empty tree, read $\mathbf{w}$ from right-to-left (resp. left-to-right) and insert its letters one-by-one into the tree. The resulting tree is denoted by $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}$ (resp. $\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{w}}$). **Example 11**.
Computing $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{451423412}$: $$\begin{gathered} \begin{tikzpicture}[tinybst, baseline=-1mm] \node {$2^1$}; \end{tikzpicture} \quad\xleftarrow{1}\quad \begin{tikzpicture}[tinybst, baseline=-4mm] \node {$2^1$} child { node {$1^1$} } child[missing]; \end{tikzpicture} \quad\xleftarrow{4}\quad \begin{tikzpicture}[tinybst, baseline=-4mm] \node {$2^1$} child { node {$1^1$} } child { node {$4^1$} }; \end{tikzpicture} \quad\xleftarrow{3}\quad \begin{tikzpicture}[tinybst, baseline=-6mm] \node {$2^1$} child { node {$1^1$} } child { node {$4^1$} child { node {$3^1$} } child [missing] }; \end{tikzpicture} \quad\xleftarrow{2}\quad \begin{tikzpicture}[tinybst, baseline=-6mm] \node {$2^2$} child { node {$1^1$} } child { node {$4^1$} child { node {$3^1$} } child [missing] }; \end{tikzpicture} \quad\xleftarrow{4}\\[10pt] \xleftarrow{4}\quad \begin{tikzpicture}[tinybst, baseline=-6mm] \node {$2^2$} child { node {$1^1$} } child { node {$4^2$} child { node {$3^1$} } child [missing] }; \end{tikzpicture} \quad\xleftarrow{1}\quad \begin{tikzpicture}[tinybst, baseline=-6mm] \node {$2^2$} child { node {$1^2$} } child { node {$4^2$} child { node {$3^1$} } child [missing] }; \end{tikzpicture} \quad\xleftarrow{5}\quad \begin{tikzpicture}[tinybst, baseline=-6mm] \node {$2^2$} child { node {$1^2$} } child { node {$4^2$} child { node {$3^1$} } child { node {$5^1$} } }; \end{tikzpicture} \quad\xleftarrow{4}\quad \begin{tikzpicture}[tinybst, baseline=-6mm] \node {$2^2$} child { node {$1^2$} } child { node {$4^3$} child { node {$3^1$} } child { node {$5^1$} } }; \end{tikzpicture} \end{gathered}$$ The *inorder traversal* of a labelled rooted binary tree is the sequence of nodes obtained by recursively computing the inorder traversal of the left subtree of the root node, then adding the root node to the sequence and then recursively computing the inorder traversal of the right subtree of the root node. 
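Algorithm [[TgLI]{.sans-serif}](#alg:TaigaLeafInsertion) and the inorder traversal just described are straightforward to implement; the following minimal Python sketch (the class and function names are our own, hypothetical choices) rebuilds the final tree of Example 11 and then traverses it in inorder, repeating each letter according to its multiplicity:

```python
class Node:
    """A BSTM node: a letter with a multiplicity and two subtrees."""
    def __init__(self, letter):
        self.letter, self.mult = letter, 1
        self.left = self.right = None

def taiga_insert(root, a):
    """Insert letter a into a BSTM: bump the multiplicity of an
    existing node with letter a, or attach a new leaf where the
    binary-search order dictates."""
    if root is None:
        return Node(a)
    if a == root.letter:
        root.mult += 1
    elif a < root.letter:
        root.left = taiga_insert(root.left, a)
    else:
        root.right = taiga_insert(root.right, a)
    return root

def inorder_letters(root):
    """Inorder traversal: left subtree, then the root's letter
    repeated according to its multiplicity, then the right subtree."""
    if root is None:
        return []
    return (inorder_letters(root.left)
            + [root.letter] * root.mult
            + inorder_letters(root.right))

# Rebuild the tree of Example 11: read 451423412 from right to left.
root = None
for a in reversed([4, 5, 1, 4, 2, 3, 4, 1, 2]):
    root = taiga_insert(root, a)
print(inorder_letters(root))  # [1, 1, 2, 2, 3, 4, 4, 4, 5]
```

As the output illustrates, traversing a BSTM in inorder with multiplicities recovers the sorted content of the word.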
We say a node of a tree is its $i$-th node if it is the $i$-th node in the inorder traversal of the tree. The *inorder reading* of a BSTM $T$ is the word $\mathrm{read}\parens{T}$ given by the product of powers of letters labelling the nodes of the inorder traversal, where the exponents are given by the multiplicities. Notice that the inorder reading of a BSTM is a weakly increasing sequence. Notice that a BSTM cannot have two nodes with the same letter. As such, given a BTM and a subset of $\mathbb{N}$ with size equal to the number of nodes of the tree, there is only one way to label the tree with the letters of the subset and obtain a BSTM (see Definition 8 and the following remark in [@priez_binary_trees]). An *increasing* (resp. *decreasing*) *binary tree* is an $\mathbb{N}$-labelled rooted planar binary tree, such that the label of each node is less than (resp. greater than) the label of its children, and no two nodes have the same label. For $\mathbf{w}\in \mathbb{N}^*$ with no repeated letters, algorithm [[RiBTA]{.sans-serif}](#alg:RecursiveIncreasingBinaryTreeAlgorithm) allows one to compute an increasing binary tree $\mathrm{incr}\parens{\mathbf{w}}$ from $\mathbf{w}$ in the following way: [\[alg:RecursiveIncreasingBinaryTreeAlgorithm\]]{#alg:RecursiveIncreasingBinaryTreeAlgorithm label="alg:RecursiveIncreasingBinaryTreeAlgorithm"} let $\mathrm{incr}\parens{\mathbf{w}} = \perp$;\ the resulting tree $\mathrm{incr}\parens{\mathbf{w}}$. We can define a *Recursive Decreasing Binary Tree Algorithm* ([[RdBTA]{.sans-serif}](#alg:RecursiveIncreasingBinaryTreeAlgorithm)) by replacing "[incr]{.roman}" with "[decr]{.roman}" and "least" with "greatest" in the previous algorithm. We say that a binary tree whose labels are non-intersecting subsets of $\mathbb{N}$ is an *increasing* (resp. *decreasing*) *binary tree over sets* (BTS) if the binary tree obtained by replacing each set with its minimum (resp. maximum) element is an increasing (resp.
decreasing) binary tree. For ease of notation, we label the nodes of these trees with the elements of the subsets only. Examples of increasing and decreasing binary trees over sets are the following: $$\label{example:increasing_decreasing_binary_trees_over_sets} \begin{tikzpicture}[microbst,baseline=-8mm] \node {$1,4,7$} child { node {$3,8$} child[missing] child { node {$5,9$} child[missing] child { node {$6$} } } } child[missing] child { node {$2$} }; \end{tikzpicture} \quad \text{and} \quad \begin{tikzpicture}[microbst, baseline=-6mm] \node {$5,9$} child { node {$3,8$} } child { node {$1,4,7$} child { node {$6$} } child { node {$2$} } }; \end{tikzpicture}$$ Using Algorithm [[RiBTA]{.sans-serif}](#alg:RecursiveIncreasingBinaryTreeAlgorithm) (resp. [[RdBTA]{.sans-serif}](#alg:RecursiveIncreasingBinaryTreeAlgorithm)), one can compute a unique increasing (resp. decreasing) BTS from a word $\mathbf{w}\in \mathbb{N}^*$: Let $\mathrm{supp}\parens{\mathbf{w}} = \{x_1 < \dots < x_k\}$. For each $i \in [k]$, consider the set $W_i$ of all $j \in [|\mathbf{w}|]$ such that $x_i$ is the $j$-th letter of $\mathbf{w}$, when reading $\mathbf{w}$ from left-to-right. Let $\mathbf{v}= v_1\dots v_k \in \mathbb{N}^*$ be the word where $v_i = \min(W_i)$ (resp. $v_i = \max(W_i)$). Now, apply [[RiBTA]{.sans-serif}](#alg:RecursiveIncreasingBinaryTreeAlgorithm) (resp. [[RdBTA]{.sans-serif}](#alg:RecursiveIncreasingBinaryTreeAlgorithm)) to $\mathbf{v}$ to obtain an increasing (resp. decreasing) binary tree. Finally, for each $i \in [k]$, replace the label $v_i$ in the tree with the set $W_i$. The resulting tree is denoted $\mathrm{Q}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{w}}$ (resp. $\mathrm{Q}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}$). **Example 12**. Computing $\mathrm{Q}^\leftarrow_{{\mathsf{rTg}}}\parens{451423412}$: Let $\mathbf{w}= 451423412$. Then, $W_1 = \{3,8\}$, $W_2 = \{5,9\}$, $W_3 = \{6\}$, $W_4 = \{1,4,7\}$, and $W_5 = \{2\}$; as such, $\mathbf{v}= 89672$.
$$\begin{gathered} \begin{tikzpicture}[microbst, baseline=-1mm] \node {$9$}; \end{tikzpicture} \quad\xleftarrow{8,7}\quad \begin{tikzpicture}[microbst, baseline=-4mm] \node {$9$} child { node {$8$} } child { node {$7$} }; \end{tikzpicture} \quad\xleftarrow{6,2}\quad \begin{tikzpicture}[microbst, baseline=-6mm] \node {$9$} child { node {$8$} } child { node {$7$} child { node {$6$} } child { node {$2$} } }; \end{tikzpicture} \quad\leftarrow\quad \begin{tikzpicture}[microbst, baseline=-6mm] \node {$5,9$} child { node {$3,8$} } child { node {$1,4,7$} child { node {$6$} } child { node {$2$} } }; \end{tikzpicture} \end{gathered}$$ **Lemma 13**. *Let $\mathbf{w}\in \mathbb{N}^*$ and $\mathrm{supp}\parens{\mathbf{w}} = \{x_1 < \dots < x_k\}$. Then, $\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{w}}$ (resp. $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}$) and $\mathrm{Q}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{w}}$ (resp. $\mathrm{Q}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}$) have the same underlying binary tree shape. Furthermore, for each $i \in [k]$, the letters $x_i$ have their positions in $\mathbf{w}$ given by the label of the $i$-th node of $\mathrm{Q}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{w}}$ (resp. $\mathrm{Q}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}$).* *Proof.* We prove the case for $\mathrm{Q}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}$ and $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}$. The proof for the remaining case is symmetrical. By the definition of $\mathrm{Q}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}$, its underlying binary tree shape is given by $\mathrm{decr}(\mathbf{v})$, where $\mathbf{v}= v_1 \cdots v_k$ is such that $v_i$ is the position of the last occurrence of $x_i$ in $\mathbf{w}$.
Furthermore, the $i$-th node of $\mathrm{Q}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}$ is the unique node whose label is a set containing the label of the $i$-th node of $\mathrm{decr}(\mathbf{v})$. Notice that $\mathbf{v}$ is a word where no letter is repeated. Thus, $\mathrm{decr}(\mathbf{v})$ has the same shape as $\mathrm{decr}(\mathrm{Std}\parens{\mathbf{v}})$ and there is an order-preserving bijection taking the label of the $i$-th node of $\mathrm{decr}(\mathbf{v})$ to that of the $i$-th node of $\mathrm{decr}(\mathrm{Std}\parens{\mathbf{v}})$. By [@hivert_sylvester Theorem 13], which gives the sylvester Robinson--Schensted correspondence, we have that $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathrm{Std}\parens{\mathbf{v}}^{-1}}$ has the same shape as $\mathrm{decr}(\mathrm{Std}\parens{\mathbf{v}})$, as Algorithm [[TgLI]{.sans-serif}](#alg:TaigaLeafInsertion) outputs the same result as the algorithm given in [@hivert_sylvester Definition 7] for right strict binary search trees, if the input is a permutation and one views a BSTM with only simple nodes as a right strict binary search tree. Moreover, the label of the $i$-th node of $\mathrm{decr}(\mathrm{Std}\parens{\mathbf{v}})$ gives the position of $i$ in $\mathrm{Std}\parens{\mathbf{v}}^{-1}$. Notice that the binary tree shape of $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}$ is determined by the last occurrences of letters in $\mathbf{w}$, and that $\mathrm{Std}\parens{\mathbf{v}}^{-1}$ is the word obtained from $\mathbf{w}$ by only keeping the last occurrence of each letter and replacing each $x_i$ with $i$. As the map $x_i \mapsto i$ is an order-preserving bijection, we conclude that $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathrm{Std}\parens{\mathbf{v}}^{-1}}$ has the same shape as $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}$.
In conclusion, the $i$-th node of $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}$ corresponds to the $i$-th node of $\mathrm{Q}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}$, thus, by the definition of $\mathrm{Q}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}$, the positions of $x_i$ in $\mathbf{w}$ are given by the label of the $i$-th node of $\mathrm{Q}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}$. ◻ Now, we show the analogue of the Robinson--Schensted correspondence for pairs of BSTMs and BTSs. We show the result for the right-insertion algorithm case: **Theorem 14**. *The map $\mathbf{w}\mapsto (\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}, \mathrm{Q}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}})$ is a bijection between the elements of $\mathbb{N}^*$ and the set formed by the pairs $(T,S)$ where* (i) *$T$ is a BSTM.* (ii) *$S$ is a decreasing BTS such that the union of the sets labelling $S$ is the interval $[m]$, where $m$ is the sum of the multiplicities of $T$.* (iii) *$T$ and $S$ have the same underlying binary tree shape.* (iv) *the multiplicity of the $i$-th node of $T$ is the cardinality of the set labelling the $i$-th node of $S$.* *Proof.* Let $\mathbf{w}\in \mathbb{N}^*$ and $\mathrm{supp}\parens{\mathbf{w}} = \{x_1,\dots,x_k\}$. We can see that the map is well-defined by the definition of $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}$ and $\mathrm{Q}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}$ and Lemma [Lemma 13](#lemma:pxtg_qxtg_same_shape){reference-type="ref" reference="lemma:pxtg_qxtg_same_shape"}. 
Furthermore, it is possible to reconstruct $\mathbf{w}$ from $(\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}, \mathrm{Q}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}})$ as by Lemma [Lemma 13](#lemma:pxtg_qxtg_same_shape){reference-type="ref" reference="lemma:pxtg_qxtg_same_shape"} the $i$-th node of $\mathrm{Q}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}$ gives the positions of $x_i$ in $\mathbf{w}$. As such, the map is injective. For surjectivity, let $(T,S)$ satisfy conditions *(i)--(iv)*. Let $\mathrm{supp}\parens{\mathrm{read}\parens{T}} = \{x_1,\dots,x_k\}$. Now, define $\mathbf{w}\in \mathbb{N}^*$ as the rearrangement of $\mathrm{read}\parens{T}$ where the positions of the letter $x_i$ are given by the set labelling the $i$-th node of $S$. By conditions *(ii)* and *(iv)*, $\mathbf{w}$ is well-defined. We aim to show $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}} = T$ and $\mathrm{Q}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}} = S$. By definition, we have $\mathrm{cont}\parens{\mathbf{w}} = \mathrm{cont}\parens{\mathrm{read}\parens{T}}$. Since $S$ is a decreasing BTS and has the same underlying binary tree shape as $T$, then, by Algorithm [[TgLI]{.sans-serif}](#alg:TaigaLeafInsertion), the node labelled $x_i$ is an ancestor of the node labelled $x_j$ in $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}$ if and only if the same occurs in $T$. Thus, $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}} = T$. On the other hand, by the definition of $\mathbf{w}$ and Lemma [Lemma 13](#lemma:pxtg_qxtg_same_shape){reference-type="ref" reference="lemma:pxtg_qxtg_same_shape"}, we have that $\mathrm{Q}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}} = S$. Hence, the map is surjective. ◻ By symmetrical reasoning, we obtain the result for the left-insertion algorithm case: **Theorem 15**. 
*The map $\mathbf{w}\mapsto (\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{w}}, \mathrm{Q}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{w}})$ is a bijection between the elements of $\mathbb{N}^*$ and the set formed by the pairs $(T,S)$ where* (i) *$T$ is a BSTM.* (ii) *$S$ is an increasing BTS such that the union of the sets labelling $S$ is the interval $[m]$, where $m$ is the sum of the multiplicities of $T$.* (iii) *$T$ and $S$ have the same underlying binary tree shape.* (iv) *the multiplicity of the $i$-th node of $T$ is the cardinality of the set labelling the $i$-th node of $S$.* ## The stalactic and taiga monoids {#subsection:the_stalactic_and_taiga_monoids} We now give the definitions and required properties of the (right and left) stalactic and taiga monoids. For background on the sylvester monoid, see [@hivert_sylvester]; on the Baxter monoid, see [@giraudo_baxter]; on the identities satisfied by these monoids, see [@cain_malheiro_ribeiro_sylvester_baxter_2023]. We define the *right stalactic congruence* $\equiv_{\mathsf{rSt}}$ on $\mathbb{N}^*$ in the following way: for $\mathbf{u},\mathbf{v}\in \mathbb{N}^*$, $$\mathbf{u}\equiv_{\mathsf{rSt}}\mathbf{v}\Leftrightarrow \mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{u}} = \mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{v}}$$ and is generated by the relations $$\mathcal{R}_{\mathsf{rSt}}= \{(ba \mathbf{u}b,ab \mathbf{u}b)\colon a,b \in \mathbb{N}, \mathbf{u}\in \mathbb{N}^*\}.$$ The factor monoid $\mathbb{N}^*/{\equiv_{\mathsf{rSt}}}$ is the infinite-rank *right stalactic monoid*, denoted by ${\mathsf{rSt}}$. The congruence naturally restricts to a congruence on $[n]^*$, for each $n \in \mathbb{N}$, and the corresponding factor monoid $[n]^*/{\equiv_{\mathsf{rSt}}}$ is the right stalactic monoid of rank $n$. Notice that any right stalactic monoid of rank $n$ is (isomorphic to) a submonoid of all right stalactic monoids of rank greater than $n$. 
It follows from the definition of ${\mathsf{rSt}}$ that any element $[\mathbf{u}]_{\mathsf{rSt}}$ of ${\mathsf{rSt}}$ can be uniquely identified with the stalactic tableau $\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{u}}$. All words in each $\equiv_{\mathsf{rSt}}$-class share the same content (and therefore the same support), hence, we extend the definition of content and support to $\equiv_{\mathsf{rSt}}$-classes and stalactic tableaux in a natural way. On the other hand, it follows from the defining relations of the right stalactic congruence that two words $\mathbf{u}, \mathbf{v}\in \mathbb{N}^*$ are $\equiv_{\mathsf{rSt}}$-congruent if and only if $\mathrm{cont}\parens{\mathbf{u}} = \mathrm{cont}\parens{\mathbf{v}}$ and $\overleftarrow{\rho_\mathbf{u}} = \overleftarrow{\rho_\mathbf{v}}$. From this, it is immediate that ${\mathsf{rSt}}$ is left-cancellative. The left stalactic monoids ${\mathsf{lSt}}$, right taiga monoids ${\mathsf{rTg}}$ and left taiga monoids ${\mathsf{lTg}}$ are defined in a similar manner. 
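This characterisation makes $\equiv_{\mathsf{rSt}}$ easy to test mechanically: one compares contents and the left-to-right orders of last occurrences. A minimal Python sketch (the encoding of the invariant as a pair, and the function name, are our own, hypothetical choices):

```python
from collections import Counter

def rst_invariant(word):
    """The pair (content, order of last occurrences) characterising
    the right stalactic class of a word; the second component is the
    top row of the associated stalactic tableau, left to right."""
    last = {a: i for i, a in enumerate(word)}  # last occurrence of each letter
    top_row = sorted(last, key=last.get)       # ascending last-occurrence position
    return Counter(word), tuple(top_row)

# A defining relation (ba u b, ab u b) with a = 1, b = 2, u = 33:
assert rst_invariant("21332") == rst_invariant("12332")
# Without a later occurrence of a or b, the words are not congruent:
assert rst_invariant("213") != rst_invariant("123")
```

For the word $212511354$ of Example 2, the second component of the invariant is $(2, 1, 3, 5, 4)$, matching the top row of the tableau computed there.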
Recall that the sylvester and \#-sylvester congruences are generated, respectively, by the relations $$\begin{aligned} \mathcal{R}_{\mathsf{sylv}}= \{(ca \mathbf{u}b,ac \mathbf{u}b) \colon a \leq b < c, \mathbf{u}\in \mathbb{N}^*\} &\quad \text{and} \\ \mathcal{R}_{\mathsf{sylv}^{\#}}= \{(b \mathbf{u}ac,b \mathbf{u}ca) \colon a < b \leq c, \mathbf{u}\in \mathbb{N}^*\}&.\end{aligned}$$ The left-stalactic congruence is generated by the relations $$\mathcal{R}_{\mathsf{lSt}}= \{(b \mathbf{u}ab,b \mathbf{u}ba)\colon a,b \in \mathbb{N}, \mathbf{u}\in \mathbb{N}^*\}.$$ The right-taiga and left-taiga congruences are generated, respectively, by the relations $$\mathcal{R}_{\mathsf{rTg}}= \mathcal{R}_{\mathsf{sylv}}\cup \mathcal{R}_{\mathsf{rSt}}\quad \text{and} \quad \mathcal{R}_{\mathsf{lTg}}= \mathcal{R}_{\mathsf{sylv}^{\#}}\cup \mathcal{R}_{\mathsf{lSt}}.$$ It follows from the defining relations of the left stalactic congruence that two words $\mathbf{u}, \mathbf{v}\in \mathbb{N}^*$ are $\equiv_{\mathsf{lSt}}$-congruent if and only if $\mathrm{cont}\parens{\mathbf{u}} = \mathrm{cont}\parens{\mathbf{v}}$ and $\overrightarrow{\rho_\mathbf{u}} = \overrightarrow{\rho_\mathbf{v}}$. From this, it is immediate that ${\mathsf{lSt}}$ is right-cancellative. The proof of [@cm_identities Lemma 17] can be easily adapted to show that ${\mathsf{rTg}}$ is left-cancellative and ${\mathsf{lTg}}$ is right-cancellative. # Meets and joins of stalactic and taiga congruences {#section:meets_and_joins_of_stalactic_and_taiga_congruences} In this section, we define four plactic-like monoids arising from the meets and joins of the stalactic and taiga congruences, and study their compatibility properties. ## Monoids arising from meets and joins of stalactic and taiga congruences {#subsection:monoids_arising_from_meets_and_joins_of_stalactic_and_taiga_congruences} We first define the meets and joins of the stalactic and taiga congruences, respectively, and then their corresponding monoids. 
We then give presentations for the meet monoids and characterise words that are equal in the join monoids. ### The meet-stalactic and join-stalactic monoids {#subsubsection:the_meet_stalactic_and_join_stalactic_monoids} We define the *meet-stalactic* congruence $\equiv_{{\mathsf{mSt}}}$ and the *join-stalactic* congruence $\equiv_{{\mathsf{jSt}}}$ as, respectively, the meet and join of the right-stalactic and left-stalactic congruences, and define the infinite-rank *meet-stalactic* monoid ${\mathsf{mSt}}$ and *join-stalactic* monoid ${\mathsf{jSt}}$ as quotients of $\mathbb{N}^*$ by their respective congruences. These congruences naturally restrict to congruences on $[n]^*$, for each $n \in \mathbb{N}$, and the corresponding factor monoids $[n]^*/{\equiv_{\mathsf{mSt}}}$ and $[n]^*/{\equiv_{\mathsf{jSt}}}$ are the meet-stalactic and join-stalactic monoids of rank $n$. Notice that two words $\mathbf{u},\mathbf{v}\in \mathbb{N}^*$ are $\equiv_{\mathsf{mSt}}$-congruent if and only if they share the same content, $\overleftarrow{\rho_\mathbf{u}} = \overleftarrow{\rho_\mathbf{v}}$ and $\overrightarrow{\rho_\mathbf{u}} = \overrightarrow{\rho_\mathbf{v}}$. Furthermore, it is clear that the join-stalactic monoid is presented by $\pres{\mathbb{N}}{\mathcal{R}_{\mathsf{jSt}}}$, where $\mathcal{R}_{\mathsf{jSt}}:= \mathcal{R}_{\mathsf{rSt}}\cup \mathcal{R}_{\mathsf{lSt}}$. On the other hand, we have the following results: **Proposition 16**. *The meet-stalactic monoid ${\mathsf{mSt}}$ is presented by $\pres{\mathbb{N}}{\mathcal{R}_{\mathsf{mSt}}}$, where $$\begin{aligned} \mathcal{R}_{\mathsf{mSt}}:=& \{ (b \mathbf{u}ba \mathbf{v}b,b \mathbf{u}ab \mathbf{v}b)\colon a,b \in \mathbb{N}, \mathbf{u},\mathbf{v}\in \mathbb{N}^* \} \\ & \cup \{ (a \mathbf{u}ab \mathbf{v}b,a \mathbf{u}ba \mathbf{v}b)\colon a,b \in \mathbb{N}, \mathbf{u},\mathbf{v}\in \mathbb{N}^* \}. 
\end{aligned}$$* *Proof.* Consider the congruence $\equiv$ generated by the relations $\mathcal{R}_{\mathsf{mSt}}$, as given in the statement. It is clear that, for any words $\mathbf{u},\mathbf{v}\in \mathbb{N}^*$, if $\mathbf{u}\equiv \mathbf{v}$, then $\mathbf{u}\equiv_{\mathsf{rSt}}\mathbf{v}$ and $\mathbf{u}\equiv_{\mathsf{lSt}}\mathbf{v}$, hence $\mathbf{u}\equiv_{{\mathsf{mSt}}} \mathbf{v}$. Suppose now that $\mathbf{u}\equiv_{\mathsf{rSt}}\mathbf{v}$ and $\mathbf{u}\equiv_{\mathsf{lSt}}\mathbf{v}$. Let $$\mathbf{u}= \mathbf{w}_1 \mathbf{u}' \mathbf{w}_2 \quad \text{and} \quad \mathbf{v}= \mathbf{w}_1 \mathbf{v}' \mathbf{w}_2,$$ where $\mathbf{u}' \neq \mathbf{v}'$ and $\mathbf{w}_1, \mathbf{w}_2 \in \mathbb{N}^*$ are the common prefix and suffix of maximum length, respectively. Notice that the first (resp. last) letter of $\mathbf{u}'$ is different from the first (resp. last) letter of $\mathbf{v}'$. On the other hand, since $\mathbf{u}$ and $\mathbf{v}$ have the same content, $\mathbf{u}'$ and $\mathbf{v}'$ have the same content as well. In particular, they have the same length; hence, we will prove by induction on the length of $\mathbf{u}'$ and $\mathbf{v}'$ that $\mathbf{u}\equiv \mathbf{v}$. The base case for the induction is when $|\mathbf{u}'|=|\mathbf{v}'|=2$. In this case, we have $\mathbf{u}' = ab$ and $\mathbf{v}' = ba$, for some $a,b \in \mathbb{N}$. Since $\mathbf{u}\equiv_{\mathsf{rSt}}\mathbf{v}$, $a$ or $b$ must occur in $\mathbf{w}_2$, and since $\mathbf{u}\equiv_{\mathsf{lSt}}\mathbf{v}$, $a$ or $b$ must occur in $\mathbf{w}_1$. Therefore, if the same symbol occurs in both $\mathbf{w}_1$ and $\mathbf{w}_2$, we use the first defining relation of $\equiv$, and if different symbols occur, we use the second defining relation to show that $\mathbf{u}\equiv \mathbf{v}$. Suppose now that $|\mathbf{u}'|=|\mathbf{v}'|>2$. 
Write $\mathbf{u}' = a \mathbf{u}_1'$ and $\mathbf{v}' = \mathbf{v}_1' a \mathbf{v}_2'$, for some $a \in \mathbb{N}$ and $\mathbf{v}_1' \in \mathbb{N}^+$ such that $a \notin \mathrm{supp}\parens{\mathbf{v}_1'}$. Consider the following cases: - $a \notin \mathrm{supp}\parens{\mathbf{w}_1} \cup \mathrm{supp}\parens{\mathbf{v}_2' \mathbf{w}_2}$; - $a \in \mathrm{supp}\parens{\mathbf{w}_1}$ and $a \notin \mathrm{supp}\parens{\mathbf{v}_2' \mathbf{w}_2}$; - $a \in \mathrm{supp}\parens{\mathbf{v}_2' \mathbf{w}_2}$ and $a \notin \mathrm{supp}\parens{\mathbf{w}_1}$; - $a \in \mathrm{supp}\parens{\mathbf{w}_1} \cap \mathrm{supp}\parens{\mathbf{v}_2' \mathbf{w}_2}$. If $a \notin \mathrm{supp}\parens{\mathbf{w}_1}$, then the first occurrence of $a$ is after the first occurrence of every letter of $\mathbf{v}_1'$, in the word $\mathbf{v}$. Therefore, since $\mathbf{u}\equiv_{\mathsf{lSt}}\mathbf{v}$, we have that $\mathrm{supp}\parens{\mathbf{v}_1'} \subseteq \mathrm{supp}\parens{\mathbf{w}_1}$. On the other hand, if $a \notin \mathrm{supp}\parens{\mathbf{v}_2' \mathbf{w}_2}$, then the last occurrence of $a$ is before the last occurrence of every letter of $\mathbf{v}_1'$, in the word $\mathbf{u}$. Therefore, since $\mathbf{u}\equiv_{\mathsf{rSt}}\mathbf{v}$, we have that $\mathrm{supp}\parens{\mathbf{v}_1'} \subseteq \mathrm{supp}\parens{\mathbf{v}_2' \mathbf{w}_2}$. As such, using the first defining relation of $\equiv$ in the first and fourth cases and the second defining relation of $\equiv$ in the second and third cases, finitely many times, we have that $\mathbf{v}\equiv \mathbf{w}_1 a \mathbf{v}'' \mathbf{w}_2$, for some $\mathbf{v}'' \in \mathbb{N}^*$. Hence, by the induction hypothesis, $\mathbf{u}\equiv \mathbf{v}$. Thus, we have shown that $\mathbf{u}\equiv_{\mathsf{lSt}}\mathbf{v}$ and $\mathbf{u}\equiv_{\mathsf{rSt}}\mathbf{v}$ imply that $\mathbf{u}\equiv \mathbf{v}$, and as such, the congruences $\equiv$ and $\equiv_{\mathsf{mSt}}$ are the same. 
◻ **Proposition 17**. *Let $\mathbf{u},\mathbf{v}\in \mathbb{N}^*$. Then, $\mathbf{u}\equiv_{\mathsf{jSt}}\mathbf{v}$ if and only if $\mathrm{cont}\parens{\mathbf{u}} = \mathrm{cont}\parens{\mathbf{v}}$ and $\overline{\mathbf{u}} = \overline{\mathbf{v}}$.* *Proof.* Consider the congruence $\equiv$ given by $\mathbf{u}\equiv \mathbf{v}$ if and only if $\mathrm{cont}\parens{\mathbf{u}} = \mathrm{cont}\parens{\mathbf{v}}$ and $\overline{\mathbf{u}} = \overline{\mathbf{v}}$, for $\mathbf{u},\mathbf{v}\in \mathbb{N}^*$. We first show that $\mathbf{u}\equiv \mathbf{v}$ implies $\mathbf{u}\equiv_{\mathsf{jSt}}\mathbf{v}$. Let $\overline{\mathbf{u}} = \overline{\mathbf{v}} = a_1 \cdots a_k$, where $a_1, \dots, a_k$ denote the simple letters, and let $x_1, \dots, x_l$ denote the non-simple letters, with $k + l = |\mathrm{supp}\parens{\mathbf{u}}|$. Ranging $1 \leq j \leq l$, we can repeatedly use $\equiv_{\mathsf{rSt}}$ to shift the leftmost $x_j$ to the beginning of both $\mathbf{u}$ and $\mathbf{v}$; and then repeatedly use $\equiv_{\mathsf{lSt}}$ to obtain a word with the prefix $x_j^{|\mathbf{u}|_{x_j}}$. Inductively, this allows us to show that $$\mathbf{u}\equiv_{\mathsf{jSt}}x_l^{|\mathbf{u}|_{x_l}} \cdots x_1^{|\mathbf{u}|_{x_1}} a_1 \cdots a_k \equiv_{\mathsf{jSt}}\mathbf{v}.$$ On the other hand, notice that $\mathbf{u}\equiv_{\mathsf{lSt}}\mathbf{v}$ implies $\overline{\mathbf{u}} = \overline{\mathbf{v}}$ since the order of occurrences of simple letters can be deduced from the order of the first occurrences of all letters. Similarly, $\mathbf{u}\equiv_{\mathsf{rSt}}\mathbf{v}$ implies $\overline{\mathbf{u}} = \overline{\mathbf{v}}$. Thus, since the join-stalactic congruence is the congruence join of $\equiv_{\mathsf{lSt}}$ and $\equiv_{\mathsf{rSt}}$, $\mathbf{u}\equiv_{\mathsf{jSt}}\mathbf{v}$ also implies $\overline{\mathbf{u}} = \overline{\mathbf{v}}$. Hence, $\mathbf{u}\equiv_{\mathsf{jSt}}\mathbf{v}$ implies $\mathbf{u}\equiv \mathbf{v}$. 
◻ ### The meet-taiga and join-taiga monoids {#subsubsection:the_meet_taiga_and_join_taiga_monoids} As with the case of the stalactic monoids, we now consider the meet and join of the right-taiga and left-taiga congruences, which we respectively call the *meet-taiga* congruence $\equiv_{{\mathsf{mTg}}}$ and the *join-taiga* congruence $\equiv_{{\mathsf{jTg}}}$, and define the *meet-taiga* monoid ${\mathsf{mTg}}$ and *join-taiga* monoid ${\mathsf{jTg}}$ as quotients of $\mathbb{N}^*$ by their respective congruences. These congruences naturally restrict to congruences on $[n]^*$, for each $n \in \mathbb{N}$, and the corresponding factor monoids $[n]^*/{\equiv_{\mathsf{mTg}}}$ and $[n]^*/{\equiv_{\mathsf{jTg}}}$ are the meet-taiga and join-taiga monoids of rank $n$. Notice that the meet-taiga congruence contains both the Baxter congruence and the meet-stalactic congruence. As in the case of the join-stalactic monoid, it is clear that the join-taiga monoid is presented by $\pres{\mathbb{N}}{\mathcal{R}_{\mathsf{jTg}}}$, where $\mathcal{R}_{\mathsf{jTg}}:= \mathcal{R}_{\mathsf{rTg}}\cup \mathcal{R}_{\mathsf{lTg}}$. On the other hand, we have the following results: **Proposition 18**. *The meet-taiga monoid ${\mathsf{mTg}}$ is presented by $\pres{\mathbb{N}}{\mathcal{R}_{\mathsf{mTg}}}$, where $$\begin{aligned} \mathcal{R}_{\mathsf{mTg}}:=& \{ (b \mathbf{u}ad \mathbf{v}c,b \mathbf{u}da \mathbf{v}c)\colon a \leq d, \; b,c \in [a,d], \; \mathbf{u},\mathbf{v}\in \mathbb{N}^* \}. \end{aligned}$$* *Proof.* Consider the congruence $\equiv$ generated by the relations $\mathcal{R}_{\mathsf{mTg}}$, as given in the statement. It is clear that, for any words $\mathbf{u},\mathbf{v}\in \mathbb{N}^*$, if $\mathbf{u}\equiv \mathbf{v}$, then $\mathbf{u}\equiv_{\mathsf{rTg}}\mathbf{v}$ and $\mathbf{u}\equiv_{\mathsf{lTg}}\mathbf{v}$, hence $\mathbf{u}\equiv_{{\mathsf{mTg}}} \mathbf{v}$. 
Suppose now that $\mathbf{u}\equiv_{\mathsf{rTg}}\mathbf{v}$ and $\mathbf{u}\equiv_{\mathsf{lTg}}\mathbf{v}$. Let $$\mathbf{u}= \mathbf{w}_1 \mathbf{u}' \mathbf{w}_2 \quad \text{and} \quad \mathbf{v}= \mathbf{w}_1 \mathbf{v}' \mathbf{w}_2,$$ where $\mathbf{u}' \neq \mathbf{v}'$ and $\mathbf{w}_1, \mathbf{w}_2 \in \mathbb{N}^*$ are the common prefix and suffix of maximum length, respectively. Notice that the first (resp. last) letter of $\mathbf{u}'$ is different from the first (resp. last) letter of $\mathbf{v}'$. On the other hand, since $\mathbf{u}$ and $\mathbf{v}$ have the same content, $\mathbf{u}'$ and $\mathbf{v}'$ have the same content as well. In particular, they have the same length; hence, we will prove by induction on the length of $\mathbf{u}'$ that $\mathbf{u}\equiv \mathbf{v}$. The base case for the induction is when $|\mathbf{u}'|=|\mathbf{v}'|=2$. In this case, we have $\mathbf{u}' = ad$ and $\mathbf{v}' = da$, for some $a,d \in \mathbb{N}$. Assume, without loss of generality, that $a \leq d$. Since $\mathbf{u}\equiv_{\mathsf{lTg}}\mathbf{v}$, there exists $b \in \mathbb{N}$ such that $a \leq b \leq d$ and $b$ occurs in $\mathbf{w}_1$. Similarly, since $\mathbf{u}\equiv_{\mathsf{rTg}}\mathbf{v}$, there exists $c \in \mathbb{N}$ such that $a \leq c \leq d$ and $c$ occurs in $\mathbf{w}_2$. Therefore, we use the defining relation of $\equiv$ to show that $\mathbf{u}\equiv \mathbf{v}$. Suppose now that $|\mathbf{u}'|=|\mathbf{v}'|>2$. Write $\mathbf{u}' = a \mathbf{u}_1'$ and $\mathbf{v}' = \mathbf{v}_1' b a \mathbf{v}_2'$, for some $a,b \in \mathbb{N}$ and $\mathbf{v}_1' \in \mathbb{N}^*$ such that $a \notin \mathrm{supp}\parens{\mathbf{v}_1'}$. Notice that $b \in \mathrm{supp}\parens{\mathbf{u}_1'}$. If $\mathbf{u}$ factorises as $\mathbf{u}= \mathbf{u}_1 b \mathbf{u}_2 a \mathbf{u}_3$, for some $\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3 \in \mathbb{N}^*$ such that $\mathrm{supp}\parens{\mathbf{u}_1} \cap [[a,b]] = \emptyset$, then $b \in \mathrm{supp}\parens{\mathbf{w}_1}$. 
On the other hand, if $\mathbf{u}$ does not factorise in such a way, then there exists $c \in \mathrm{supp}\parens{\mathbf{w}_1 \mathbf{v}_1'} \cap [[a,b]]$, since $\mathbf{u}\equiv_{\mathsf{lTg}}\mathbf{v}$. Similarly, if $\mathbf{u}$ factorises as $\mathbf{u}= \mathbf{u}_1 b \mathbf{u}_2 a \mathbf{u}_3$, for some $\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3 \in \mathbb{N}^*$ such that $\mathrm{supp}\parens{\mathbf{u}_3} \cap [[a,b]] = \emptyset$, then there is an occurrence of $a$ in $\mathbf{u}' \mathbf{w}_2$ to the right of the last occurrence of $b$ in $\mathbf{u}$. As such, we have that $a \in \mathrm{supp}\parens{\mathbf{v}_2'\mathbf{w}_2}$. On the other hand, if $\mathbf{u}$ does not factorise in such a way, then there exists $c \in \mathrm{supp}\parens{\mathbf{v}_2'\mathbf{w}_2} \cap [[a,b]]$, since $\mathbf{u}\equiv_{\mathsf{rTg}}\mathbf{v}$. Thus, we can use the defining relation of $\equiv$ to show that $\mathbf{v}\equiv \mathbf{w}_1 \mathbf{v}_1' a b \mathbf{v}_2' \mathbf{w}_2$. Notice that this reasoning can be applied finitely many times to show that $\mathbf{v}\equiv \mathbf{w}_1 a \mathbf{v}'' \mathbf{w}_2$, for some $\mathbf{v}'' \in \mathbb{N}^*$. Hence, by the induction hypothesis, $\mathbf{u}\equiv \mathbf{v}$. Thus, we have shown that $\mathbf{u}\equiv_{\mathsf{lTg}}\mathbf{v}$ and $\mathbf{u}\equiv_{\mathsf{rTg}}\mathbf{v}$ imply that $\mathbf{u}\equiv \mathbf{v}$, and as such, the congruences $\equiv$ and $\equiv_{\mathsf{mTg}}$ are the same. ◻ **Proposition 19**. *Let $\mathbf{u},\mathbf{v}\in \mathbb{N}^*$. Let $A_1,\dots,A_k$ be all the intervals of $\mathrm{supp}\parens{\mathbf{u}}$ such that $A_i$ only contains simple letters and $A_i \cup \{a\}$ is not an interval, for any simple letter $a \notin A_i$. 
Then, $\mathbf{u}\equiv_{\mathsf{jTg}}\mathbf{v}$ if and only if $\mathrm{cont}\parens{\mathbf{u}} = \mathrm{cont}\parens{\mathbf{v}}$ and $\mathbf{u}[A_i] \equiv_{\mathsf{hypo}}\mathbf{v}[A_i]$ for all $1 \leq i \leq k$.* *Proof.* Consider the congruence $\equiv$ given by $\mathbf{u}\equiv \mathbf{v}$ if and only if $\mathrm{cont}\parens{\mathbf{u}} = \mathrm{cont}\parens{\mathbf{v}}$ and $\mathbf{u}[A_i] \equiv_{\mathsf{hypo}}\mathbf{v}[A_i]$ for all $1 \leq i \leq k$, where $A_1,\dots,A_k$ are as given in the statement of the proposition. We first show that $\mathbf{u}\equiv \mathbf{v}$ implies $\mathbf{u}\equiv_{\mathsf{jTg}}\mathbf{v}$. Let $x_1, \dots, x_l$ denote the non-simple letters of $\mathbf{u}$ and $\mathbf{v}$. Ranging $1 \leq j \leq l$, we can repeatedly use $\equiv_{\mathsf{rSt}}$ to shift the leftmost $x_j$ to the beginning of both words, and then repeatedly use $\equiv_{\mathsf{lSt}}$ to obtain words with the prefix $x_j^{|\mathbf{u}|_{x_j}}$. Inductively, this allows us to show that $$\begin{aligned} \mathbf{u}\equiv_{\mathsf{jSt}}x_l^{|\mathbf{u}|_{x_l}} &\cdots x_1^{|\mathbf{u}|_{x_1}}\overline{\mathbf{u}}, \quad \text{and} \\ \mathbf{v}\equiv_{\mathsf{jSt}}x_l^{|\mathbf{u}|_{x_l}} &\cdots x_1^{|\mathbf{u}|_{x_1}}\overline{\mathbf{v}}.\end{aligned}$$ Now, by applying the ${\mathsf{sylv}^{\#}}$ relations, we can order the simple letters so that the letters in the interval $A_i$ appear to the left of the simple letters in $A_{i+1}$ for each $1 \leq i < k$. 
Thus, $$\begin{aligned} \mathbf{u}\equiv_{\mathsf{jTg}}x_l^{|\mathbf{u}|_{x_l}} &\cdots x_1^{|\mathbf{u}|_{x_1}}\mathbf{u}[A_1]\cdots \mathbf{u}[A_k], \quad \text{and} \\ \mathbf{v}\equiv_{\mathsf{jTg}}x_l^{|\mathbf{u}|_{x_l}} &\cdots x_1^{|\mathbf{u}|_{x_1}}\mathbf{v}[A_1]\cdots \mathbf{v}[A_k].\end{aligned}$$ Now, as $\mathcal{R}_{{\mathsf{hypo}}} = \mathcal{R}_{{\mathsf{sylv}}} \cup \mathcal{R}_{{\mathsf{sylv}^{\#}}} \subset \mathcal{R}_{{\mathsf{jTg}}}$, by the definition of $\equiv$, we have that $\mathbf{u}[A_i] \equiv_{\mathsf{jTg}}\mathbf{v}[A_i]$. Thus, $\mathbf{u}\equiv_{{\mathsf{jTg}}} \mathbf{v}$. On the other hand, first note that all the relations in $\mathcal{R}_{{\mathsf{jTg}}}$ are content preserving, thus, if $\mathbf{u}\equiv_{\mathsf{jTg}}\mathbf{v}$, then $\mathrm{cont}\parens{\mathbf{u}} = \mathrm{cont}\parens{\mathbf{v}}$. Let $\mathbf{r},\mathbf{s}\in \mathbb{N}^*$. If $\mathbf{r}\equiv_{\mathsf{jSt}}\mathbf{s}$, then $\overline{\mathbf{r}} = \overline{\mathbf{s}}$ by Proposition [Proposition 17](#prop:jst_characterisation){reference-type="ref" reference="prop:jst_characterisation"}, and if $\mathbf{r}\equiv_{\mathsf{hypo}}\mathbf{s}$ then, as ${\mathsf{hypo}}$ is compatible with restriction to alphabet intervals [@novelli_hypoplactic Theorem 5.6], $\mathbf{r}[A_i] \equiv_{\mathsf{hypo}}\mathbf{s}[A_i]$ for all $1 \leq i \leq k$. Thus, since the join-taiga congruence is the congruence join of $\equiv_{\mathsf{jSt}}$ and $\equiv_{\mathsf{hypo}}$, $\mathbf{u}\equiv_{\mathsf{jTg}}\mathbf{v}$ implies $\mathbf{u}[A_i] \equiv_{\mathsf{hypo}}\mathbf{v}[A_i]$ for all $1 \leq i \leq k$. Hence, $\mathbf{u}\equiv_{\mathsf{jTg}}\mathbf{v}$ if and only if $\mathbf{u}\equiv \mathbf{v}$. ◻ ## Compatibility properties {#subsection:compatibility_properties} Recall that the Baxter and hypoplactic congruences are defined, respectively, as the meet and join of the sylvester and \#-sylvester congruences [@giraudo_baxter; @hnt_stalactic]. 
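Several of the compatibility arguments below hinge on concrete (reverse-)standardised words. These operations can be sketched as follows; this is a minimal Python illustration with our own function names, assuming the usual conventions that equal letters are numbered left-to-right under standardisation and right-to-left under reverse-standardisation.

```python
def standardise(w):
    """Standardisation: relabel the letters of w by 1..len(w) so the result is a
    permutation; smaller letters get smaller numbers, with ties between equal
    letters broken left-to-right. Works for words given as strings of
    single-character letters or as lists of comparable letters."""
    positions = sorted(range(len(w)), key=lambda i: (w[i], i))
    out = [0] * len(w)
    for rank, i in enumerate(positions, start=1):
        out[i] = rank
    return out

def reverse_standardise(w):
    """Reverse-standardisation: as above, but ties between equal letters are
    broken right-to-left."""
    positions = sorted(range(len(w)), key=lambda i: (w[i], -i))
    out = [0] * len(w)
    for rank, i in enumerate(positions, start=1):
        out[i] = rank
    return out
```

For example, `reverse_standardise("2121")` gives $[4,2,3,1]$ and `reverse_standardise("2211")$ gives $[4,3,2,1]$, i.e. the words $4231$ and $4321$ used below for reverse-standardisation counterexamples.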
Some of the compatibility properties (see Subsection [2.1](#subsection:words){reference-type="ref" reference="subsection:words"}) of the Baxter and hypoplactic monoids have already been studied. We state the known results here and prove the remaining ones: **Proposition 20**. *The Baxter monoid is compatible with standardisation, packing, restriction to alphabet intervals and the Schützenberger involution, but not with reverse-standardisation, restriction to alphabet subsets and word reversal.* *Proof.* Compatibility with standardisation was shown in [@giraudo_baxter Proposition 3.2], from which compatibility with packing follows. Compatibility with restriction to alphabet intervals was shown in [@giraudo_baxter Proposition 3.3], and compatibility with the Schützenberger involution was shown in [@giraudo_baxter Proposition 3.4]. On the other hand, notice that $2121$ and $2211$ are ${\mathsf{baxt}}$-congruent, but $4231$ and $4321$ are not, hence ${\mathsf{baxt}}$ is not compatible with reverse-standardisation. Furthermore, $2132$ and $2312$ are ${\mathsf{baxt}}$-congruent, but $13$ and $31$ are not, hence ${\mathsf{baxt}}$ is not compatible with restriction to alphabet subsets. Similarly, notice that $2131$ and $2311$ are ${\mathsf{baxt}}$-congruent, but $1312$ and $1132$ are not, hence ${\mathsf{baxt}}$ is not compatible with word reversal. ◻ **Proposition 21**. *The hypoplactic monoid is compatible with standardisation, packing, restriction to alphabet intervals and the Schützenberger involution, but not with reverse-standardisation, restriction to alphabet subsets and word reversal.* *Proof.* Compatibility with standardisation follows from [@novelli_hypoplactic Lemma 4.13 and Theorem 4.18], from which compatibility with packing follows. Compatibility with restriction to alphabet intervals was shown in [@novelli_hypoplactic Theorem 5.6], and compatibility with the Schützenberger involution was shown in [@novelli_hypoplactic Theorem 5.4]. 
On the other hand, as with the Baxter monoid, notice that $2121$ and $2211$ are ${\mathsf{hypo}}$-congruent, but $4231$ and $4321$ are not, hence ${\mathsf{hypo}}$ is not compatible with reverse-standardisation. Furthermore, notice that $213$ and $231$ are ${\mathsf{hypo}}$-congruent, but $13$ and $31$ are not, hence ${\mathsf{hypo}}$ is not compatible with restriction to alphabet subsets. Similarly, notice that $221$ and $212$ are ${\mathsf{hypo}}$-congruent, but $122$ and $212$ are not, hence ${\mathsf{hypo}}$ is not compatible with word reversal. ◻ We now study the compatibility properties of the monoids introduced in this section, and compare them with the Baxter and hypoplactic monoids: **Proposition 22**. *The meet-stalactic monoid is compatible with packing, restriction to alphabet subsets, word reversal and the Schützenberger involution, but not with standardisation and reverse-standardisation.* *Proof.* It is immediate that ${\mathsf{mSt}}$ is compatible with packing and restriction to alphabet subsets since there is no restriction placed on the letters and words occurring in its defining relations. Furthermore, ${\mathsf{mSt}}$ is compatible with both word reversal and the Schützenberger involution, since if two words are ${\mathsf{lSt}}$-congruent (resp. ${\mathsf{rSt}}$-congruent), then their images under word reversal or the Schützenberger involution are ${\mathsf{rSt}}$-congruent (resp. ${\mathsf{lSt}}$-congruent), once again due to the lack of restrictions on the defining relations. On the other hand, notice that while the words $1211$ and $1121$ are ${\mathsf{mSt}}$-congruent, their standardised words $1423$ and $1243$ are not, hence ${\mathsf{mSt}}$ is not compatible with standardisation. Similarly, their reverse-standardised words $3421$ and $3241$ are also not ${\mathsf{mSt}}$-congruent, hence ${\mathsf{mSt}}$ is not compatible with reverse-standardisation. 
◻ An immediate consequence is that ${\mathsf{mSt}}$ is compatible with restriction to alphabet intervals. Analogously, one can prove that: **Proposition 23**. *The join-stalactic monoid is compatible with packing, restriction to alphabet subsets, word reversal and the Schützenberger involution, but not with standardisation and reverse-standardisation.* **Proposition 24**. *The meet-taiga monoid is compatible with packing, restriction to alphabet intervals, word reversal and the Schützenberger involution, but not with standardisation, reverse-standardisation, and restriction to alphabet subsets.* *Proof.* It is immediate that ${\mathsf{mTg}}$ is compatible with packing since packing can be viewed as an isomorphism that preserves the order on the support of a word, hence the defining relations of ${\mathsf{mTg}}$ still hold under it. Similarly, ${\mathsf{mTg}}$ is compatible with restriction to alphabet intervals, since its defining relations are of the form $$b \mathbf{u}ad \mathbf{v}c \equiv_{\mathsf{mTg}}b \mathbf{u}da \mathbf{v}c,$$ where $a \leq d$, $b,c \in [a,d]$ and $\mathbf{u},\mathbf{v}\in \mathbb{N}^*$, hence restricting the alphabet to an interval that includes $a$ and $d$ still allows equality under $\equiv_{\mathsf{mTg}}$ (excluding $a$ or $d$ trivially gives equality). To show that ${\mathsf{mTg}}$ is compatible with word reversal and the Schützenberger involution, it suffices to see that the defining relations still hold under these operations, which is clear from their definition. On the other hand, notice that $1122$ and $1212$ are ${\mathsf{mTg}}$-congruent; however, their standardised words $1234$ and $1324$ are not, hence ${\mathsf{mTg}}$ is not compatible with standardisation. Similarly, notice that $2211$ and $2121$ are ${\mathsf{mTg}}$-congruent; however, their reverse-standardised words $4321$ and $4231$ are not, hence ${\mathsf{mTg}}$ is not compatible with reverse-standardisation. 
Furthermore, notice that $2132$ and $2312$ are ${\mathsf{mTg}}$-congruent, but $13$ and $31$ are not, hence ${\mathsf{mTg}}$ is not compatible with restriction to alphabet subsets. ◻ Analogously, one can prove that: **Proposition 25**. *The join-taiga monoid is compatible with packing, restriction to alphabet intervals, word reversal and the Schützenberger involution, but not with standardisation, reverse-standardisation, and restriction to alphabet subsets.* In Table [1](#table:compatibilities){reference-type="ref" reference="table:compatibilities"}, we give a summary of the results on compatibilities of the hypoplactic, Baxter, meet and join-stalactic and meet and join-taiga monoids. 

| **Monoid** | **Std** | **rStd** | **Pack** | **RAS** | **RAI** | **WR** | **SI** | **Result** |
|------------|---------|----------|----------|---------|---------|--------|--------|------------|
| ${\mathsf{hypo}}$ | ✓ | ✗ | ✓ | ✗ | ✓ | ✗ | ✓ | Proposition [Proposition 21](#prop:hypo_compatible){reference-type="ref" reference="prop:hypo_compatible"} |
| ${\mathsf{baxt}}$ | ✓ | ✗ | ✓ | ✗ | ✓ | ✗ | ✓ | Proposition [Proposition 20](#prop:baxt_compatible){reference-type="ref" reference="prop:baxt_compatible"} |
| ${\mathsf{mSt}}$ | ✗ | ✗ | ✓ | ✓ | ✓ | ✓ | ✓ | Proposition [Proposition 22](#prop:mst_compatible){reference-type="ref" reference="prop:mst_compatible"} |
| ${\mathsf{jSt}}$ | ✗ | ✗ | ✓ | ✓ | ✓ | ✓ | ✓ | Proposition [Proposition 23](#prop:jst_compatible){reference-type="ref" reference="prop:jst_compatible"} |
| ${\mathsf{mTg}}$ | ✗ | ✗ | ✓ | ✗ | ✓ | ✓ | ✓ | Proposition [Proposition 24](#prop:mtg_compatible){reference-type="ref" reference="prop:mtg_compatible"} |
| ${\mathsf{jTg}}$ | ✗ | ✗ | ✓ | ✗ | ✓ | ✓ | ✓ | Proposition [Proposition 25](#prop:jtg_compatible){reference-type="ref" reference="prop:jtg_compatible"} |

: Compatibility of plactic-like monoids, where **Std** stands for 'standardisation'; **rStd** for 'reverse-standardisation'; **Pack** for 'packing'; **RAS** for 'restriction to alphabet subsets'; **RAI** for 'restriction to alphabet intervals'; **WR** for 'word reversal'; **SI** for 
'Schützenberger involution'. # Robinson--Schensted-like correspondences {#section:Robinson_Schensted_like_correspondences} In this section, we introduce analogues of the Robinson--Schensted correspondence for the meet-stalactic and meet-taiga monoids. We then show how to extract representatives of the congruence classes from their $\mathrm{P}$-symbols and give iterative versions of the insertion algorithms. ## Definitions and correctness of the correspondences {#subsection:Definitions_and_correctness_of_the_correspondences} We begin by defining the $\mathrm{P}$ and $\mathrm{Q}$-symbols and stating their properties, which then allow us to prove the correctness of the correspondences. ### The meet-stalactic correspondence {#subsubsection:The_meet_stalactic_correspondence} **Definition 26**. For a word $\mathbf{w}\in \mathbb{N}^*$, the *meet-stalactic $\mathrm{P}$-symbol* of $\mathbf{w}$ is the pair $\mathrm{P}_{{\mathsf{mSt}}}\parens{\mathbf{w}}:=(\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}, \mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}})$ of stalactic tableaux, while the *meet-stalactic $\mathrm{Q}$-symbol* of $\mathbf{w}$ is the pair $\mathrm{Q}_{{\mathsf{mSt}}}\parens{\mathbf{w}}:=(\mathrm{Q}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}, \mathrm{Q}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}})$ of (respectively) increasing and decreasing patience-sorting tableaux. **Example 27**. The meet-stalactic $\mathrm{P}$-symbol of $212511354$ is $$\left(\tikz[tableau]\matrix{ 2 \& 1 \& 5 \& 3 \& 4\\ 2 \& 1 \& 5 \& \& \\ \& 1 \& \& \& \\ };, \tikz[tableau]\matrix{ 2 \& 1 \& 3 \& 5 \& 4\\ 2 \& 1 \& \& 5 \& \\ \& 1 \& \& \& \\ };\right),$$ and its meet-stalactic $\mathrm{Q}$-symbol is $$\left(\tikz[tableau]\matrix{ 1 \& 2 \& 4 \& 7 \& 9\\ 3 \& 5 \& 8 \& \& \\ \& 6 \& \& \& \\ };, \tikz[tableau]\matrix{ 3 \& 6 \& 7 \& 8 \& 9\\ 1 \& 5 \& \& 4 \& \\ \& 2 \& \& \& \\ };\right).$$ **Proposition 28**. *Let $\mathbf{u},\mathbf{v}\in \mathbb{N}^*$. 
Then $\mathbf{u}\equiv_{\mathsf{mSt}}\mathbf{v}$ if and only if $\mathrm{P}_{{\mathsf{mSt}}}\parens{\mathbf{u}} = \mathrm{P}_{{\mathsf{mSt}}}\parens{\mathbf{v}}$.* *Proof.* By definition, $\mathbf{u}\equiv_{\mathsf{mSt}}\mathbf{v}$ if and only if $\mathbf{u}\equiv_{\mathsf{lSt}}\mathbf{v}$ and $\mathbf{u}\equiv_{\mathsf{rSt}}\mathbf{v}$. This is equivalent to $\mathbf{u}$ and $\mathbf{v}$ having the same left-stalactic and right-stalactic $\mathrm{P}$-symbol, that is, $\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{u}} = \mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{v}}$ and $\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{u}} = \mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{v}}$. In turn, by the definition of the [meet-stalactic $\mathrm{P}$-symbol](#definition:mstpandqsymbols), this is equivalent to $\mathrm{P}_{{\mathsf{mSt}}}\parens{\mathbf{u}} = \mathrm{P}_{{\mathsf{mSt}}}\parens{\mathbf{v}}$. ◻ **Definition 29**. Let $T_L,T_R$ be stalactic tableaux. We say $(T_L,T_R)$ are a *pair of twin stalactic tableaux* if $\mathrm{cont}\parens{T_R} = \mathrm{cont}\parens{T_L}$ and for each simple column labelled $c$ and any other column labelled $d$, if $\rho_{T_L}(c) < \rho_{T_L}(d)$, then $\rho_{T_R}(c) < \rho_{T_R}(d)$. The condition on the order of columns is equivalent to the following: if $\rho_{T_R}(d) < \rho_{T_R}(c)$, then $\rho_{T_L}(d) < \rho_{T_L}(c)$. Notice that, in particular, the order of simple columns, from left-to-right, is the same in pairs of twin stalactic tableaux. The $\mathrm{P}$-symbol given in Example [Example 27](#example:meet_stalactic_p_and_q_symbols){reference-type="ref" reference="example:meet_stalactic_p_and_q_symbols"} is a pair of twin stalactic tableaux. In fact, we have the following: **Proposition 30**. 
*For any $\mathbf{w}\in \mathbb{N}^*$, the meet-stalactic $\mathrm{P}$-symbol $(\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}},\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}})$ of $\mathbf{w}$ is a pair of twin stalactic tableaux.* *Proof.* Notice that, since $\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}$ and $\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}}$ are computed from the same word $\mathbf{w}$, then $\mathrm{cont}\parens{\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}} = \mathrm{cont}\parens{\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}}}$. Let $a,x \in \mathrm{supp}\parens{\mathbf{w}}$ be such that $a$ is simple. Suppose that $\rho_{\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}}(a) < \rho_{\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}}(x)$, which implies that the only occurrence of $a$ in $\mathbf{w}$ is to the left of the leftmost and, therefore, all occurrences of $x$ in $\mathbf{w}$. In particular, $a$ only occurs to the left of the rightmost occurrence of $x$ in $\mathbf{w}$, which, by Algorithm [[rStLI]{.sans-serif}](#alg:RightStalacticLeftInsertion), implies $\rho_{\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}}}(a) < \rho_{\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}}}(x)$. Thus, $(\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}},\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}})$ is a pair of twin stalactic tableaux. ◻ **Definition 31**. Let $(S_L,S_R)$ be a pair of (respectively) increasing and decreasing patience-sorting tableaux. 
We say $(S_L,S_R)$ are a *pair of twin patience-sorting tableaux* if (i) The columns of $S_L$ are in bijection with the columns of $S_R$, with the content of the columns being preserved by the bijection; (ii) for each pair of corresponding columns $(c_L,c_R)$ and $(d_L,d_R)$ of $(S_L,S_R)$ such that $c_L$ and $c_R$ are simple, if $c_L$ appears to the left of $d_L$ in $S_L$ then $c_R$ appears to the left of $d_R$ in $S_R$. The condition on the order of columns is equivalent to the following: if $c_R$ appears to the right of $d_R$ in $S_R$ then $c_L$ appears to the right of $d_L$ in $S_L$. The $\mathrm{Q}$-symbol given in Example [Example 27](#example:meet_stalactic_p_and_q_symbols){reference-type="ref" reference="example:meet_stalactic_p_and_q_symbols"} is a pair of twin patience-sorting tableaux. In fact, we have the following: **Proposition 32**. *For any $\mathbf{w}\in \mathbb{N}^*$, the meet-stalactic $\mathrm{Q}$-symbol $(\mathrm{Q}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}},\mathrm{Q}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}})$ of $\mathbf{w}$ is a pair of twin patience-sorting tableaux.* *Proof.* For each $x \in \mathrm{supp}\parens{\mathbf{w}}$, by Lemma [Lemma 8](#lemma:pxst_qxst_same_shape){reference-type="ref" reference="lemma:pxst_qxst_same_shape"}, there exists a column in $\mathrm{Q}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}$ and a column in $\mathrm{Q}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}}$, each containing all the positions where $x$ appears in $\mathbf{w}$, when read left-to-right. These columns must therefore have the same content, and hence the first property holds. The second property then follows from Proposition [Proposition 30](#prop:pmst_pair_twin){reference-type="ref" reference="prop:pmst_pair_twin"} and Lemma [Lemma 8](#lemma:pxst_qxst_same_shape){reference-type="ref" reference="lemma:pxst_qxst_same_shape"}. ◻ **Theorem 33**. 
*The map $\mathbf{w}\mapsto (\mathrm{P}_{{\mathsf{mSt}}}\parens{\mathbf{w}}, \mathrm{Q}_{{\mathsf{mSt}}}\parens{\mathbf{w}})$ is a bijection between the elements of $\mathbb{N}^*$ and the set formed by the pairs $\bigl((T_L,T_R),(S_L,S_R)\bigr)$ where* (i) *$(T_L,T_R)$ is a pair of twin stalactic tableaux.* (ii) *$(S_L,S_R)$ is a pair of twin patience-sorting tableaux.* (iii) *$(T_L,T_R)$ and $(S_L,S_R)$ have the same shape.* (iv) *The $i$-th column of $S_L$ is in bijection with the $\rho_{T_R}(\rho_{T_L}^{-1}(i))$-th column of $S_R$.* *Proof.* By Propositions [Proposition 30](#prop:pmst_pair_twin){reference-type="ref" reference="prop:pmst_pair_twin"} and [Proposition 32](#prop:qmst_pair_twin){reference-type="ref" reference="prop:qmst_pair_twin"}, we have that the output of the map satisfies the first two conditions. Moreover, we can see that it also satisfies the third condition as, by Lemma [Lemma 8](#lemma:pxst_qxst_same_shape){reference-type="ref" reference="lemma:pxst_qxst_same_shape"}, $T_L$ and $S_L$ have the same shape and $T_R$ and $S_R$ have the same shape. Again by Lemma [Lemma 8](#lemma:pxst_qxst_same_shape){reference-type="ref" reference="lemma:pxst_qxst_same_shape"}, we have that the $i$-th column of $S_L$ (resp. $S_R$) gives the positions of the letter $\rho_{T_L}^{-1}(i)$ (resp. $\rho_{T_R}^{-1}(i)$) in $\mathbf{w}$, and hence the $i$-th column of $S_L$ is in bijection with the $\rho_{T_R}(\rho_{T_L}^{-1}(i))$-th column of $S_R$. Thus, the map is well-defined.
The map is injective due to the stalactic Robinson--Schensted correspondences (Theorems [Theorem 9](#theorem:robinson_left_stalactic){reference-type="ref" reference="theorem:robinson_left_stalactic"} and [Theorem 10](#theorem:robinson_right_stalactic){reference-type="ref" reference="theorem:robinson_right_stalactic"}), as from either of the pairs $(\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}},\mathrm{Q}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}})$ or $(\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}},\mathrm{Q}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}})$ we can reconstruct $\mathbf{w}$. For surjectivity, let $\bigl((T_L,T_R),(S_L,S_R)\bigr)$ satisfy conditions *(i)--(iv)*. Using the stalactic Robinson--Schensted correspondences, we have that there exist unique $\mathbf{u},\mathbf{v}\in \mathbb{N}^*$ such that $\bigl(\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{u}},\mathrm{Q}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{u}}\bigr) = (T_L,S_L)$ and $\bigl(\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{v}},\mathrm{Q}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{v}}\bigr) = (T_R,S_R)$. Clearly, by condition *(i)*, we have $\mathrm{cont}\parens{\mathbf{u}} = \mathrm{cont}\parens{\mathbf{v}}$. Suppose $a = \overrightarrow{\rho_\mathbf{u}}^{-1}(i)$ for some $1 \leq i \leq |\mathrm{supp}\parens{\mathbf{u}}|$. Then, by Lemma [Lemma 8](#lemma:pxst_qxst_same_shape){reference-type="ref" reference="lemma:pxst_qxst_same_shape"}, the $i$-th column of $S_L$ is the unique column of $S_L$ which gives the positions of $a$ in $\mathbf{u}$. Moreover, by condition *(iv)*, the $i$-th column of $S_L$ is also the unique column of $S_L$ which gives the positions of $\overleftarrow{\rho_\mathbf{v}}^{-1}(\rho_{T_R}(\rho_{T_L}^{-1}(i))) = \rho_{T_L}^{-1}(i) = a$ in $\mathbf{v}$. 
As such, $\overrightarrow{\rho_\mathbf{v}}$ is completely determined by $(T_L,S_L)$, which implies $\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{v}} = T_L$, and, by Lemma [Lemma 8](#lemma:pxst_qxst_same_shape){reference-type="ref" reference="lemma:pxst_qxst_same_shape"}, $\mathrm{Q}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{v}} = S_L$. Thus, $\mathbf{u}= \mathbf{v}$ and $\bigl(\mathrm{P}_{{\mathsf{mSt}}}\parens{\mathbf{u}},\mathrm{Q}_{{\mathsf{mSt}}}\parens{\mathbf{u}}\bigr) = \bigl((T_L,T_R),(S_L,S_R)\bigr)$. ◻ ### The meet-taiga correspondence {#subsubsection:The_meet_taiga_correspondence} **Definition 34**. For a word $\mathbf{w}\in \mathbb{N}^*$, the *meet-taiga $\mathrm{P}$-symbol* of $\mathbf{w}$ is the pair $\mathrm{P}_{{\mathsf{mTg}}}\parens{\mathbf{w}}:=(\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{w}}, \mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}})$ of BSTMs, while the *meet-taiga $\mathrm{Q}$-symbol* of $\mathbf{w}$ is the pair $\mathrm{Q}_{{\mathsf{mTg}}}\parens{\mathbf{w}}:=(\mathrm{Q}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{w}}, \mathrm{Q}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}})$ of (respectively) increasing and decreasing binary trees. **Example 35**. 
The meet-taiga $\mathrm{P}$-symbol of $451423412$ is $$\left(\begin{tikzpicture}[tinybst,baseline=-8mm] \node {$4^3$} child { node {$1^2$} child[missing] child { node {$2^2$} child[missing] child { node {$3^1$} } } } child[missing] child { node {$5^1$} }; \end{tikzpicture}\;,\; \begin{tikzpicture}[tinybst, baseline=-6mm] \node {$2^2$} child { node {$1^2$} } child { node {$4^3$} child { node {$3^1$} } child { node {$5^1$} } }; \end{tikzpicture}\right),$$ and its meet-taiga $\mathrm{Q}$-symbol is $$\left(\begin{tikzpicture}[microbst,baseline=-8mm] \node {$1,4,7$} child { node {$3,8$} child[missing] child { node {$5,9$} child[missing] child { node {$6$} } } } child[missing] child { node {$2$} }; \end{tikzpicture}\;,\; \begin{tikzpicture}[microbst, baseline=-6mm] \node {$5,9$} child { node {$3,8$} } child { node {$1,4,7$} child { node {$6$} } child { node {$2$} } }; \end{tikzpicture}\right).$$ **Proposition 36**. *Let $\mathbf{u},\mathbf{v}\in \mathbb{N}^*$. Then $\mathbf{u}\equiv_{\mathsf{mTg}}\mathbf{v}$ if and only if $\mathrm{P}_{{\mathsf{mTg}}}\parens{\mathbf{u}} = \mathrm{P}_{{\mathsf{mTg}}}\parens{\mathbf{v}}$.* *Proof.* By definition, $\mathbf{u}\equiv_{\mathsf{mTg}}\mathbf{v}$ if and only if $\mathbf{u}\equiv_{\mathsf{lTg}}\mathbf{v}$ and $\mathbf{u}\equiv_{\mathsf{rTg}}\mathbf{v}$. This is equivalent to $\mathbf{u}$ and $\mathbf{v}$ having the same left-taiga and right-taiga $\mathrm{P}$-symbol, that is, $\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{u}} = \mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{v}}$ and $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{u}} = \mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{v}}$. In turn, by the definition of the [meet-taiga $\mathrm{P}$-symbol](#definition:mtgpandqsymbols), this is equivalent to $\mathrm{P}_{{\mathsf{mTg}}}\parens{\mathbf{u}} = \mathrm{P}_{{\mathsf{mTg}}}\parens{\mathbf{v}}$. ◻ **Definition 37**. Let $T_L,T_R$ be BTMs, with the same number of nodes. 
We say $(T_L,T_R)$ is a *pair of twin binary trees with multiplicities* (pair of twin BTMs), if for all $i$, (i) the $i$-th node of $T_L$ and the $i$-th node of $T_R$ have the same multiplicity. (ii) if the $i$-th node of $T_L$ has multiplicity 1 and has a left (resp. right) child then the $i$-th node of $T_R$ does not have a left (resp. right) child. Notice that, writing $l_i$ and $r_i$ for the $i$-th nodes of $T_L$ and $T_R$ respectively, condition (ii) is equivalent to the following: if $r_i$ has multiplicity 1 and has a left (resp. right) child, then $l_i$ does not have a left (resp. right) child. **Definition 38**. Let $T_L,T_R$ be BSTMs. We say $(T_L,T_R)$ is a *pair of twin binary search trees with multiplicities* (pair of twin BSTMs) if $\mathrm{cont}\parens{T_R} = \mathrm{cont}\parens{T_L}$ and the shape of $(T_L,T_R)$ is a pair of twin BTMs. The $\mathrm{P}$-symbol given in Example [Example 35](#example:meet_taiga_p_and_q_symbols){reference-type="ref" reference="example:meet_taiga_p_and_q_symbols"} is a pair of twin BSTMs. In fact, we have the following: **Proposition 39**. *For any $\mathbf{u}\in \mathbb{N}^*$, the $\mathrm{P}_{\mathsf{mTg}}$-symbol $(\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{u}},\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{u}})$ of $\mathbf{u}$ is a pair of twin BSTMs.* *Proof.* As $\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{u}}$ and $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{u}}$ are both computed from the same word $\mathbf{u}$, $\mathrm{cont}\parens{\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{u}}} = \mathrm{cont}\parens{\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{u}}}$. Moreover, suppose $a$ is a simple letter of $\mathbf{u}$ and that $a$ has a left child in $\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{u}}$. Let $x \in \mathrm{supp}\parens{\mathbf{u}}$ be the rightmost descendant of the left child of $a$ in $\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{u}}$; then $z \leq x < a$ for all $z \in \mathrm{supp}\parens{\mathbf{u}}$ with $z < a$.
As $x$ is a descendant of $a$, we have that the leftmost appearance of $x$ appears to the right of $a$ in $\mathbf{u}$ and as $a$ is simple, the rightmost appearance of $x$ appears to the right of $a$ in $\mathbf{u}$. Therefore, $a$ is a right descendant of $x$ in $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{u}}$ as there does not exist a $z$ such that $x < z < a$. Thus, $a$ does not have a left child in $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{u}}$, as if $z$ is a descendant of $x$ and $z < a$, then $z < x$ and hence $z$ is a left descendant of $x$ and therefore not a left descendant of $a$. The proof for right children is similar. ◻ **Definition 40**. Let $(S_L,S_R)$ be a pair of (respectively) increasing and decreasing binary trees over sets, with the same number of nodes. We say $(S_L,S_R)$ are a *pair of twin binary trees over sets* (pair of twin BTSs) if, for all $i$, (i) the $i$-th node of $S_L$ has the same label as the $i$-th node of $S_R$. (ii) if the $i$-th node of $S_L$ is labelled by a set of cardinality 1 and has a left (resp. right) child, then the $i$-th node of $S_R$ does not have a left (resp. right) child. The $\mathrm{Q}$-symbol given in Example [Example 35](#example:meet_taiga_p_and_q_symbols){reference-type="ref" reference="example:meet_taiga_p_and_q_symbols"} is a pair of twin BTSs. In fact, we have the following: **Proposition 41**. *For any $\mathbf{w}\in \mathbb{N}^*$, the meet-taiga $\mathrm{Q}$-symbol $(\mathrm{Q}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{w}},\mathrm{Q}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}})$ of $\mathbf{w}$ is a pair of twin BTSs.* *Proof.* Let $\mathrm{supp}\parens{\mathbf{w}} = \{x_1 < \dots < x_k\}$. 
For each $x_i \in \mathrm{supp}\parens{\mathbf{w}}$, by Lemma [Lemma 13](#lemma:pxtg_qxtg_same_shape){reference-type="ref" reference="lemma:pxtg_qxtg_same_shape"}, the $i$-th node of $\mathrm{Q}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{w}}$ and the $i$-th node of $\mathrm{Q}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}$ each contain all the positions where $x_i$ appears in $\mathbf{w}$, when read left-to-right. Thus, the $i$-th nodes of $\mathrm{Q}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{w}}$ and $\mathrm{Q}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}$ share the same label. The second property then follows from Proposition [Proposition 39](#prop:pmtg_pair_twin){reference-type="ref" reference="prop:pmtg_pair_twin"} and Lemma [Lemma 13](#lemma:pxtg_qxtg_same_shape){reference-type="ref" reference="lemma:pxtg_qxtg_same_shape"}. ◻ **Theorem 42**. *The map $\mathbf{w}\mapsto (\mathrm{P}_{{\mathsf{mTg}}}\parens{\mathbf{w}}, \mathrm{Q}_{{\mathsf{mTg}}}\parens{\mathbf{w}})$ is a bijection between the elements of $\mathbb{N}^*$ and the set formed by the pairs $\bigl((T_L,T_R),(S_L,S_R)\bigr)$ where* (i) *$(T_L,T_R)$ is a pair of twin BSTMs.* (ii) *$(S_L,S_R)$ is a pair of twin BTSs such that the union of the sets labelling $S_L$, and therefore $S_R$, is the interval $[m]$, where $m$ is the sum of the multiplicities of $T_L$ (or $T_R$).* (iii) *$(T_L,T_R)$ and $(S_L,S_R)$ have the same underlying pair of binary tree shapes.* (iv) *the multiplicity of the $i$-th node of $T_L$ (resp. $T_R$) is the cardinality of the set labelling the $i$-th node of $S_L$ (resp. $S_R$).* *Proof.* Let $\mathrm{supp}\parens{\mathbf{w}} = \{x_1<\dots<x_k\}$. By Propositions [Proposition 39](#prop:pmtg_pair_twin){reference-type="ref" reference="prop:pmtg_pair_twin"} and [Proposition 41](#prop:qmtg_pair_twin){reference-type="ref" reference="prop:qmtg_pair_twin"}, we have that the output of the map satisfies the first two conditions.
By Lemma [Lemma 13](#lemma:pxtg_qxtg_same_shape){reference-type="ref" reference="lemma:pxtg_qxtg_same_shape"}, the third and fourth conditions are satisfied, as $T_L$ (resp. $T_R$) and $S_L$ (resp. $S_R$) have the same shape, and furthermore, the $i$-th node of $S_L$ (resp. $S_R$) gives the positions of the letter $x_i$ in $\mathbf{w}$, and hence the multiplicity of the $i$-th node of $T_L$ (resp. $T_R$) is the cardinality of the set labelling the $i$-th node of $S_L$ (resp. $S_R$). Thus, the map is well-defined. The map is injective due to the taiga Robinson--Schensted correspondences (Theorems [Theorem 15](#theorem:robinson_left_taiga){reference-type="ref" reference="theorem:robinson_left_taiga"} and [Theorem 14](#theorem:robinson_right_taiga){reference-type="ref" reference="theorem:robinson_right_taiga"}), as from either of the pairs $(\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{w}},\mathrm{Q}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{w}})$ or $(\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}},\mathrm{Q}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}})$ we can reconstruct $\mathbf{w}$. For surjectivity, let $\bigl((T_L,T_R),(S_L,S_R)\bigr)$ satisfy conditions *(i)--(iv)*. Using the taiga Robinson--Schensted correspondences, there exist unique $\mathbf{u},\mathbf{v}\in \mathbb{N}^*$ such that $\bigl(\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{u}},\mathrm{Q}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{u}}\bigr) = (T_L,S_L)$ and $\bigl(\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{v}},\mathrm{Q}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{v}}\bigr) = (T_R,S_R)$. Clearly, by condition *(i)*, we have $\mathrm{cont}\parens{\mathbf{u}} = \mathrm{cont}\parens{\mathbf{v}}$. Let $\mathrm{supp}\parens{\mathbf{u}} = \mathrm{supp}\parens{\mathbf{v}} = \{x_1 < \dots < x_k\}$. 
Then, by Lemma [Lemma 13](#lemma:pxtg_qxtg_same_shape){reference-type="ref" reference="lemma:pxtg_qxtg_same_shape"}, for all $i \in [k]$, the label of the $i$-th node of $S_L$ gives the positions of $x_i$ in $\mathbf{u}$. By condition *(ii)*, the $i$-th node of $S_L$ has the same label as the $i$-th node of $S_R$, thus the label of the $i$-th node of $S_R$ gives the positions of $x_i$ in $\mathbf{u}$. As such, $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{u}} = T_R$ and $\mathrm{Q}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{u}} = S_R$. Therefore, $\mathbf{u}= \mathbf{v}$ and $\bigl(\mathrm{P}_{{\mathsf{mTg}}}\parens{\mathbf{u}},\mathrm{Q}_{{\mathsf{mTg}}}\parens{\mathbf{u}}\bigr) = \bigl((T_L,T_R),(S_L,S_R)\bigr)$. ◻ ## Extraction algorithms {#subsection:Extraction_algorithms} In the previous subsection, we have shown how to obtain a word from its meet-stalactic or meet-taiga $\mathrm{P}$ and $\mathrm{Q}$-symbols. We now show how to obtain words only from the $\mathrm{P}$-symbols, without the need to compute the $\mathrm{Q}$-symbols. We give a deterministic algorithm for the meet-stalactic case and a non-deterministic one for the meet-taiga case. ### The meet-stalactic extraction algorithm {#subsubsection:The_meet_stalactic_extraction_algorithm} Algorithm [[[EmSt]{.sans-serif}]{.upright}](#ExtractMeetStalactic) takes a pair of twin stalactic tableaux $(T_L,T_R)$ and outputs a word in the $\equiv_{\mathsf{mSt}}$-class corresponding to $(T_L,T_R)$. [\[ExtractMeetStalactic\]]{#ExtractMeetStalactic label="ExtractMeetStalactic"} let $\mathbf{u}:= \varepsilon$;\ let $\mathbf{w}$ be the reading of the top row of $T_R$ from left-to-right;\ the word $\mathbf{u}\mathbf{w}$. **Proposition 43**.
*For any pair of twin stalactic tableaux $(T_L, T_R)$ as input, Algorithm [[[EmSt]{.sans-serif}]{.upright}](#ExtractMeetStalactic) computes a word belonging to the $\equiv_{\mathsf{mSt}}$-equivalence class encoded by $(T_L, T_R )$.* *Proof.* The key to this proof is to show that the order of first and last occurrences of letters in the output of [[EmSt]{.sans-serif}](#ExtractMeetStalactic) is given, respectively, by the row readings of $T_L$ and $T_R$. Notice that, when computing [[EmSt]{.sans-serif}](#ExtractMeetStalactic), if one encounters a simple column, then it follows from the definition of [pairs of twin stalactic tableaux](#definition:TwinStalacticTableauxCondition) that the columns to be removed from $T_R$, in the simple column subroutine, have already been removed from $T_L$. As such, $\overrightarrow{\rho_{\mathbf{u}\mathbf{w}}} = \rho_{T_L}$. Recall that $T_L$ has a simple column labelled by $a$ if and only if $T_R$ does as well, hence they have the same simple columns. If $T_L$ and $T_R$ have no simple columns, then $\mathrm{supp}\parens{\mathbf{w}} = \mathrm{supp}\parens{\mathbf{u}}$, hence $\overleftarrow{\rho_{\mathbf{u}\mathbf{w}}} = \overleftarrow{\rho_{\mathbf{w}}}$. Furthermore, $\mathbf{w}$ is the row reading of $T_R$, hence $\overleftarrow{\rho_{\mathbf{u}\mathbf{w}}} = \rho_{T_R}$. Suppose $T_L$ has $n$ simple columns, that is, $$\mathrm{col}(T_L) = \mathbf{v}_1 a_1 \mathbf{v}_2 a_2 \cdots a_n \mathbf{v}_{n+1},$$ where $\mathbf{v}_i = \prod_{j=1}^{\alpha_i} b_{i,j}^{|T_L|_{b_{i,j}}}$, with $a_1, \dots, a_n, b_{i,j} \in \mathbb{N}$ all distinct, $\alpha_i \in \mathbb{N}$. Informally, $a_1, \dots, a_n$ correspond to the simple columns of $T_L$ and $\mathbf{v}_i$ corresponds to the block of non-simple columns between $a_{i-1}$ and $a_i$ (with the exception of $\mathbf{v}_1$ and $\mathbf{v}_{n+1}$, which are before $a_1$ and after $a_n$, respectively). 
For each $\mathbf{v}_i$, $\alpha_i$ is its number of columns and $b_{i,j}$ is the label of the cells of its $j$-th column (counting from left-to-right). By the definition of [pairs of twin stalactic tableaux](#definition:TwinStalacticTableauxCondition), the top row, read from left-to-right, of $T_R$ is of the form $\mathbf{r}_1 a_1 \mathbf{r}_2 a_2 \cdots a_n \mathbf{r}_{n+1}$, where $\mathrm{supp}\parens{\mathbf{r}_i} \subseteq \mathrm{supp}\parens{\mathbf{v}_1 \cdots \mathbf{v}_i}$. Informally, the $i$-th non-simple column block of $T_R$ has columns from the blocks of non-simple columns of $T_L$ of index less than or equal to $i$ (but not necessarily from every single one of those blocks). Thus, the columns of each block of non-simple columns of $T_L$ are distributed throughout the blocks of non-simple columns of $T_R$ of higher or equal index. Notice that, by the algorithm, the word $\mathbf{w}$ is equal to $\mathbf{r}_{n+1}$. Furthermore, the word $\mathbf{u}$ is of the form $$\prod_{i=1}^{n+1} \left(\prod_{j=1}^{\alpha_i} b_{i,j}^{|T_L|_{b_{i,j}}-1}\right) \mathbf{s}_i$$ where $\mathbf{s}_i = \mathbf{r}_i a_i$ for $1 \leq i \leq n$ and $\mathbf{s}_{n+1} = \varepsilon$. In other words, $\mathbf{u}$ is obtained by reading, from left-to-right, each non-simple column block of $T_L$ while excluding the last cell of each column, then reading the top row of the corresponding non-simple column block of $T_R$ and then reading the following simple column. As each $\mathrm{supp}\parens{\mathbf{v}_i} \subseteq \mathrm{supp}\parens{\mathbf{r}_i \cdots \mathbf{r}_{n+1}}$, we have that $\overleftarrow{\rho_{\mathbf{u}\mathbf{w}}} = \rho_{T_R}$. The result follows from the fact that $\overrightarrow{\rho_{\mathbf{u}\mathbf{w}}} = \rho_{\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{u}\mathbf{w}}}$ and $\overleftarrow{\rho_{\mathbf{u}\mathbf{w}}} = \rho_{\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{u}\mathbf{w}}}$. 
As such, we have that $\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{u}\mathbf{w}} = T_L$ and $\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{u}\mathbf{w}} = T_R$, and hence $\mathrm{P}_{{\mathsf{mSt}}}\parens{\mathbf{u}\mathbf{w}} = (T_L,T_R)$. ◻ As a consequence of Propositions [Proposition 28](#prop:pmst_pair_plst_prst){reference-type="ref" reference="prop:pmst_pair_plst_prst"}, [Proposition 30](#prop:pmst_pair_twin){reference-type="ref" reference="prop:pmst_pair_twin"} and [Proposition 43](#ExtractTwinToMst){reference-type="ref" reference="ExtractTwinToMst"}, we have the following: **Corollary 44**. *For any $n,m \geq 0$, there is a bijection between the set of ${\mathsf{mSt}}$-equivalence classes of words of length $m$ over $[n]$ and pairs of twin stalactic tableaux with $m$ blocks labelled by letters from $[n]$.* ### The meet-taiga extraction algorithm {#subsubsection:The_meet_taiga_extraction_algorithm} Algorithm [[EmTg]{.sans-serif}](#ExtractMeetTaiga) takes a pair of twin BSTMs $(T_L,T_R)$ and outputs a word in the $\equiv_{\mathsf{mTg}}$-class corresponding to $(T_L,T_R)$. [\[ExtractMeetTaiga\]]{#ExtractMeetTaiga label="ExtractMeetTaiga"} let $\mathbf{u}:= \varepsilon$;\ set $(U_L,U_R) = (T_L,T_R)$;\ the word $\mathbf{u}$. Notice that, in contrast with [[EmSt]{.sans-serif}](#ExtractMeetStalactic), [[EmTg]{.sans-serif}](#ExtractMeetTaiga) is a non-deterministic algorithm, since there may be several choices for $a$ in steps 5 and 14. **Proposition 45**. *For any pair of twin BSTMs $(T_L, T_R)$ as input, Algorithm [[[EmTg]{.sans-serif}]{.upright}](#ExtractMeetTaiga) computes a word belonging to the $\equiv_{\mathsf{mTg}}$-equivalence class encoded by $(T_L, T_R)$.* *Proof.* First, we show that [[EmTg]{.sans-serif}](#ExtractMeetTaiga) always terminates. It is clear that, in each step of the algorithm, a node is removed from $U_L$ or $U_R$.
Thus, it suffices to show that while running [[EmTg]{.sans-serif}](#ExtractMeetTaiga), if all leaves in $U_R$ share letters with nodes in $U_L$, and all root nodes of trees in $U_L$ are simple, then at least one root node in $U_L$ shares a letter with a leaf in $U_R$. For contradiction, suppose that all the root nodes of the trees in $U_L$ are simple, and no root node in $U_L$ shares a letter with a leaf in $U_R$. Let $\{s_1 < \cdots < s_k\}$ be the letters of the root nodes of the trees in $U_L$. It follows from the definition of [pairs of twin BSTMs](#definition:TwinBSTMCondition) that the node labelled $s_i$ cannot have left (resp. right) subtrees in both $U_L$ and $U_R$. Therefore, the node labelled $s_1$ cannot have a left child in $U_R$, since the letter labelling this child must be less than $s_1$ and, in $U_L$, all nodes with letters less than $s_1$ must be in the left subtree of the root node labelled $s_1$. By symmetric reasoning, one can show that $s_k$ does not have a right child in $U_R$. Let $s_i$ be the least letter such that the node labelled $s_i$ has a left subtree in $U_R$, and let $a$ be the greatest letter labelling a node of said subtree. Therefore, the node labelled $a$ does not have a right child, hence, by hypothesis, it is not the node labelled $s_{i-1}$. Thus we have $s_{i-1} < a < s_i$ and, as such, in $U_L$, the node labelled $a$ must be a right descendant of $s_{i-1}$ or a left descendant of $s_i$. By the definition of [pairs of twin BTMs](#definition:TwinBTMCondition), this contradicts our hypothesis. As such, we can conclude that at least one root node in $U_L$ shares a letter with a leaf in $U_R$. Now, we show that the meet-taiga $\mathrm{P}$-symbol of the output $\mathbf{u}$ of [[EmTg]{.sans-serif}](#ExtractMeetTaiga) coincides with its input. Clearly, $T_L$, $T_R$, $\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{u}}$ and $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{u}}$ all share the same content. 
Furthermore, the root node of $T_L$ has the same label as the root node of $\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{u}}$, which is the first letter of $\mathbf{u}$, and therefore read from the root node of $T_L$ by the algorithm. Similarly, the root node of $T_R$ has the same label as the root node of $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{u}}$, which is the last letter of $\mathbf{u}$, and therefore read from the last leaf of $U_R$ by the algorithm, that is, the root node of $T_R$. Suppose, in order to obtain a contradiction, that $T_L$ and $\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{u}}$ are different, and that they are equal up to depth $k \geq 1$. As such, they share their binary tree shape (without multiplicities) up to depth $k+1$, since $T_L$ and $\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{u}}$ have the same inorder reading. Hence, there must be a node of depth $k$ common to both $T_L$ and $\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{u}}$ such that at least one of its children is labelled differently in each tree. Assume, without loss of generality, that said node is labelled $i$ in $T_L$ and labelled $j$ in $\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{u}}$. Then, by the definition of a BSTM, there must be a descendant of node labelled $i$ (resp. $j$) in $T_L$ (resp. $\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{u}}$) labelled by $j$ (resp. $i$). On one hand, we can deduce that $\overrightarrow{\rho_\mathbf{u}}(i) < \overrightarrow{\rho_\mathbf{u}}(j)$, since [[EmTg]{.sans-serif}](#ExtractMeetTaiga) will always read first any ancestor of the node labelled $j$. On the other hand, we can deduce that $\overrightarrow{\rho_\mathbf{u}}(i) > \overrightarrow{\rho_\mathbf{u}}(j)$, since the insertion of $i$ as the label of a descendant of the node labelled $j$ can only be done after reading $j$. 
We obtain a contradiction, hence $T_L = \mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{u}}$. The case of $T_R$ and $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{u}}$ is proven similarly. By the same reasoning as before, if $T_R$ and $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{u}}$ are different, then there exists a node common to both trees, with at least one differently labelled child, $i$ in $T_R$ and $j$ in $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{u}}$. On one hand, we can conclude that $\overleftarrow{\rho_\mathbf{u}}(i) > \overleftarrow{\rho_\mathbf{u}}(j)$, since [[EmTg]{.sans-serif}](#ExtractMeetTaiga) will output a letter if and only if it labels a node in $U_R$, and it will only delete the node labelled $i$ when it is a leaf in $U_R$, so it will delete the node labelled $j$ first. On the other hand, we can conclude that $\overleftarrow{\rho_\mathbf{u}}(i) < \overleftarrow{\rho_\mathbf{u}}(j)$, since by [[TgLI]{.sans-serif}](#alg:TaigaLeafInsertion), we read the word $\mathbf{u}$ from right-to-left, and insert $i$ as a descendant of the node labelled $j$ only after reading $j$. We obtain a contradiction, hence $T_R = \mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{u}}$. ◻ As a consequence of Propositions [Proposition 36](#MtgClassesPsymbolCorr){reference-type="ref" reference="MtgClassesPsymbolCorr"}, [Proposition 39](#prop:pmtg_pair_twin){reference-type="ref" reference="prop:pmtg_pair_twin"} and [Proposition 45](#ExtractTwinToMtg){reference-type="ref" reference="ExtractTwinToMtg"}, we have the following: **Corollary 46**.
*For any $n,m \geq 0$, there is a bijection between the set of ${\mathsf{mTg}}$-equivalence classes of words of length $m$ over $[n]$ and pairs of twin BSTMs, labelled by letters from $[n]$ and whose sum of multiplicities is $m$.* ## Iterative insertion algorithms {#subsection:Iterative_insertion_algorithms} We now introduce iterative versions of our insertion algorithms which allow us to compute a pair of twin stalactic tableaux from a word, while reading it only in one direction. Thus, we can easily compute the concatenation of two words under the meet-stalactic and meet-taiga congruences. Furthermore, we can compute the product of two pairs of twin stalactic tableaux (resp. pairs of twin BSTMs) $(T_L,T_R)$ and $(T'_L,T'_R)$ by applying [[EmSt]{.sans-serif}](#ExtractMeetStalactic) (resp. [[EmTg]{.sans-serif}](#ExtractMeetTaiga)) to the second pair and inserting the letters of the resulting word, from left-to-right, into the first pair. ### The meet-stalactic iterative insertion algorithm {#subsubsection:The_meet_stalactic_iterative_insertion_algorithm} Algorithm [[rStRI]{.sans-serif}](#alg:RightStalacticRightInsertion) allows one to insert a letter into the top row of a stalactic tableau: [\[alg:RightStalacticRightInsertion\]]{#alg:RightStalacticRightInsertion label="alg:RightStalacticRightInsertion"} the resulting tableau $T \leftarrow a$. We can define a *Left-Stalactic Left-Insertion* ([[lStLI]{.sans-serif}](#alg:RightStalacticRightInsertion)) algorithm in an analogous way. As before in Subsubsection [2.2.1](#subsubsection:composition_diagrams_stalactic_tableaux_and_patience_sorting_tableaux){reference-type="ref" reference="subsubsection:composition_diagrams_stalactic_tableaux_and_patience_sorting_tableaux"}, using Algorithm [[rStRI]{.sans-serif}](#alg:RightStalacticRightInsertion) (resp.
[[lStLI]{.sans-serif}](#alg:RightStalacticRightInsertion)), one can compute a unique stalactic tableau from a word $\mathbf{w}\in \mathbb{N}^*$: Starting from the empty tableau, read $\mathbf{w}$ from left-to-right (resp. right-to-left) and insert its letters one-by-one into the tableau. The resulting tableau is denoted by $\mathrm{P}^\rightarrow_{\mathsf{rSt}}(\mathbf{w})$ (resp. $\mathrm{P}^\leftarrow_{\mathsf{lSt}}(\mathbf{w})$). **Lemma 47**. *Let $\mathbf{w}\in \mathbb{N}^*$. Then, $\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}} = \mathrm{P}^\rightarrow_{\mathsf{rSt}}(\mathbf{w})$ and $\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}} = \mathrm{P}^\leftarrow_{\mathsf{lSt}}(\mathbf{w})$.* *Proof.* We prove the first equality, as the proof for the second equality is analogous. The proof is done by induction on the length of $\mathbf{w}$. If $\mathbf{w}$ is the empty word, then the result is trivially satisfied. Suppose $\mathbf{w}= \mathbf{u}a$, for some $a \in \mathbb{N}$ and $\mathbf{u}\in \mathbb{N}^*$. Recall that $\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}}$ is obtained by reading $\mathbf{w}$ from right-to-left and inserting its letters using Algorithm [[rStLI]{.sans-serif}](#alg:RightStalacticLeftInsertion), and as such, its rightmost column is labelled by $a$ and of height $|\mathbf{w}|_a = |\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{u}}|_a + 1$, and the remaining columns form the tableau $(\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{u}})_{\neq a}$. By induction, we have that $\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{u}} = \mathrm{P}^\rightarrow_{\mathsf{rSt}}(\mathbf{u})$. As such, $\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}}$ is obtained in the same way as $\mathrm{P}^\rightarrow_{\mathsf{rSt}}(\mathbf{w})$. 
◻ As such, we can compute the $\mathrm{P}$-symbol of a meet-stalactic class by reading a word in only one direction and applying Algorithms [[rStLI]{.sans-serif}](#alg:RightStalacticLeftInsertion) and [[lStLI]{.sans-serif}](#alg:RightStalacticRightInsertion) (or Algorithms [[lStRI]{.sans-serif}](#alg:RightStalacticLeftInsertion) and [[rStRI]{.sans-serif}](#alg:RightStalacticRightInsertion)) at the same time: **Corollary 48**. *For any $\mathbf{w}\in \mathbb{N}^*$, we have that $$\mathrm{P}_{{\mathsf{mSt}}}\parens{\mathbf{w}} = (\mathrm{P}^\leftarrow_{\mathsf{lSt}}(\mathbf{w}), \mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{w}}) = (\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}, \mathrm{P}^\rightarrow_{\mathsf{rSt}}(\mathbf{w})).$$* We define the *left-to-right iterative meet-stalactic $\mathrm{P}$-symbol* of a word $\mathbf{w}\in \mathbb{N}^*$ as the pair $\mathrm{P}_{{\mathsf{mSt}}}\parens{\mathbf{w}} = (\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}, \mathrm{P}^\rightarrow_{\mathsf{rSt}}(\mathbf{w}))$, obtained by reading $\mathbf{w}$ from left-to-right and iteratively inserting its letters into $(\perp,\perp)$, using Algorithms [[lStRI]{.sans-serif}](#alg:RightStalacticLeftInsertion) and [[rStRI]{.sans-serif}](#alg:RightStalacticRightInsertion) for $\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}$ and $\mathrm{P}^\rightarrow_{\mathsf{rSt}}(\mathbf{w})$, respectively. The *left-to-right iterative meet-stalactic $\mathrm{Q}$-symbol* of $\mathbf{w}$ is the pair $\mathrm{Q}_{{\mathsf{mSt}}}\parens{\mathbf{w}} = (\mathrm{Q}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}, \mathrm{Q}^\rightarrow_{\mathsf{rSt}}(\mathbf{w}))$ of the same shape as $\mathrm{P}_{{\mathsf{mSt}}}\parens{\mathbf{w}}$, where each cell is labelled by the position in $\mathbf{w}$ of the letter in its corresponding cell in $\mathrm{P}_{{\mathsf{mSt}}}\parens{\mathbf{w}}$.
In other words, each cell in $\mathrm{Q}_{{\mathsf{mSt}}}\parens{\mathbf{w}}$ is labelled by the step in which its corresponding cell in $\mathrm{P}_{{\mathsf{mSt}}}\parens{\mathbf{w}}$ was created, when applying the left-to-right iterative insertion algorithm. The correctness of the iterative insertion algorithm follows from Corollary [Corollary 48](#corollary:mst_root_insertion){reference-type="ref" reference="corollary:mst_root_insertion"}. Thus, we obtain an insertion algorithm in line with the usual Robinson--Schensted correspondence algorithms. Furthermore, notice that we can also define a right-to-left version of the iterative insertion algorithm. From these, we also obtain iterative versions of insertion algorithms for the left and right-stalactic $\mathrm{P}$ and $\mathrm{Q}$-symbols. ### The meet-taiga iterative insertion algorithm {#subsubsection:The_meet_taiga_iterative_insertion_algorithm} Algorithm [[TgRI]{.sans-serif}](#alg:TaigaRootInsertion) allows one to obtain a new BSTM from another one, with the root node labelled by the letter of our choice: [\[alg:TaigaRootInsertion\]]{#alg:TaigaRootInsertion label="alg:TaigaRootInsertion"} let $T_{<a}$ (resp. $T_{>a}$) be the tree of all nodes of $T$ with letter $<a$ (resp. $>a$), such that a node $x$ is a descendant of a node $y$ in $T_{<a}$ (resp. $T_{>a}$) only if $x$ is a descendant of $y$ in $T$;\ let $a \downarrow T$ be the tree with root node labelled $a$ with multiplicity $|T|_a + 1$, with left subtree $T_{<a}$ and right subtree $T_{>a}$;\ the resulting tree $a \downarrow T$.
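The root insertion above can be sketched concretely. In the sketch below, a BSTM is modelled (a modelling assumption, not the authors' data structure) as a nested tuple `(letter, multiplicity, left, right)`, with `None` for the empty tree; since the input is a binary search tree, $T_{<a}$ and $T_{>a}$ can be obtained by a standard binary-search-tree split.

```python
def split(tree, a):
    """Return (T_<a, multiplicity of a in T, T_>a) for a BSTM `tree`,
    via the standard binary-search-tree split, which keeps a node a
    descendant of another in the output only if it already was in T."""
    if tree is None:
        return None, 0, None
    letter, mult, left, right = tree
    if letter == a:
        return left, mult, right
    if letter < a:
        mid, count, big = split(right, a)
        return (letter, mult, left, mid), count, big
    small, count, mid = split(left, a)
    return small, count, (letter, mult, mid, right)


def root_insert(tree, a):
    """Algorithm TgRI: new root labelled `a` with multiplicity
    |T|_a + 1, left subtree T_<a and right subtree T_>a."""
    small, count, big = split(tree, a)
    return (a, count + 1, small, big)


def p_rtg(word):
    """Read `word` left-to-right, root-inserting each letter."""
    tree = None
    for a in word:
        tree = root_insert(tree, a)
    return tree
```

For example, root-inserting the letters of $aba$ left-to-right yields the tree with root $a$ of multiplicity $2$ and right child $b$.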
As before in Subsubsection [2.2.2](#subsubsection:binary_trees_with_multiplicities_and_binary_search_trees_with_multiplicities){reference-type="ref" reference="subsubsection:binary_trees_with_multiplicities_and_binary_search_trees_with_multiplicities"}, using Algorithm [[TgRI]{.sans-serif}](#alg:TaigaRootInsertion), one can compute a unique BSTM from a word $\mathbf{w}\in \mathbb{N}^*$: Starting from the empty tree, read $\mathbf{w}$ from left-to-right (resp. right-to-left) and insert its letters one-by-one into the tree. The resulting BSTM is denoted by $\mathrm{P}^\rightarrow_{\mathsf{rTg}}(\mathbf{w})$ (resp. $\mathrm{P}^\leftarrow_{\mathsf{lTg}}(\mathbf{w})$). The proof of [@giraudo_baxter Lemma 4.18] can be easily adapted to show the following: **Lemma 49**. *Let $\mathbf{w}\in \mathbb{N}^*$. Then $\mathrm{P}^\rightarrow_{\mathsf{rTg}}(\mathbf{w}) = \mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}$ and $\mathrm{P}^\leftarrow_{\mathsf{lTg}}(\mathbf{w}) = \mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{w}}$.* As such, we can compute the $\mathrm{P}$-symbol of a meet-taiga class by reading a word in only one direction and applying Algorithms [[TgLI]{.sans-serif}](#alg:TaigaLeafInsertion) and [[TgRI]{.sans-serif}](#alg:TaigaRootInsertion) at the same time: **Corollary 50**. 
*For any $\mathbf{w}\in \mathbb{N}^*$, we have that $$\mathrm{P}_{{\mathsf{mTg}}}\parens{\mathbf{w}} = (\mathrm{P}^\leftarrow_{\mathsf{lTg}}(\mathbf{w}), \mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}}) = (\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{w}}, \mathrm{P}^\rightarrow_{\mathsf{rTg}}(\mathbf{w})).$$* We define the *left-to-right iterative meet-taiga $\mathrm{P}$-symbol* of a word $\mathbf{w}\in \mathbb{N}^*$ as the pair $\mathrm{P}_{{\mathsf{mTg}}}\parens{\mathbf{w}} = (\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{w}}, \mathrm{P}^\rightarrow_{\mathsf{rTg}}(\mathbf{w}))$, obtained by reading $\mathbf{w}$ from left-to-right and iteratively inserting its letters into $(\perp,\perp)$, using Algorithms [[TgLI]{.sans-serif}](#alg:TaigaLeafInsertion) and [[TgRI]{.sans-serif}](#alg:TaigaRootInsertion) for $\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{w}}$ and $\mathrm{P}^\rightarrow_{\mathsf{rTg}}(\mathbf{w})$, respectively. The *left-to-right iterative meet-taiga $\mathrm{Q}$-symbol* of $\mathbf{w}$ is the pair $\mathrm{Q}_{{\mathsf{mTg}}}\parens{\mathbf{w}} = (\mathrm{Q}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{w}}, \mathrm{Q}^\rightarrow_{\mathsf{rTg}}(\mathbf{w}))$ of the same shape as $\mathrm{P}_{{\mathsf{mTg}}}\parens{\mathbf{w}}$, where each node is labelled by the positions in $\mathbf{w}$ of the letter in its corresponding node in $\mathrm{P}_{{\mathsf{mTg}}}\parens{\mathbf{w}}$. In other words, each node in $\mathrm{Q}_{{\mathsf{mTg}}}\parens{\mathbf{w}}$ is labelled by the steps in which its corresponding node in $\mathrm{P}_{{\mathsf{mTg}}}\parens{\mathbf{w}}$ was created or had its multiplicity incremented, when applying the left-to-right iterative insertion algorithm. The correctness of the iterative insertion algorithm follows from Corollary [Corollary 50](#corollary:mtg_root_insertion){reference-type="ref" reference="corollary:mtg_root_insertion"}. 
Thus, as in the previous Subsubsection, we obtain an insertion algorithm in line with the usual Robinson--Schensted correspondence algorithms. We can also define a right-to-left version of the iterative insertion algorithm. From these, we also obtain iterative versions of insertion algorithms for the left and right-taiga $\mathrm{P}$ and $\mathrm{Q}$-symbols. # Counting in the stalactic and taiga monoids {#section:Counting_in_the_stalactic_and_taiga_monoids} Throughout this section, for any permutation $\sigma$ of $[n]$, we denote by $\sigma_i$ the image of $i$ by $\sigma$, to simplify the notation. ## Sizes of classes of words under stalactic and taiga congruences {#subsection:Size_of_equivalence_classes_in_stalactic_and_taiga_monoids} We now obtain formulas for the sizes of classes of words under the stalactic and taiga congruences. The meet-stalactic and meet-taiga cases are treated separately from the other cases, due to the complexity of the arguments we use. ### The left, right, and join cases {#subsubsection:The_left_right_and_join_cases} We first give non-recursive formulas for the left, right and join cases. **Proposition 51**. *Let $\mathbf{w}\in \mathbb{N}^*$. Let $\mathrm{supp}\parens{\mathbf{w}} = \{x_1,\dots,x_k\}$ be such that $\overrightarrow{\rho_{\mathbf{w}}}(x_i) < \overrightarrow{\rho_{\mathbf{w}}}(x_{i+1})$ for all $1 \leq i < k$. Then, there are $$\frac{|\mathbf{w}|!}{\prod_{i=1}^k \left((|\mathbf{w}|_{x_i}-1)! 
\sum_{j=i}^k |\mathbf{w}|_{x_j}\right)}$$ words over $\mathbb{N}$ in the $\equiv_{\mathsf{lSt}}$-class of $\mathbf{w}$.* *Proof.* By the left-stalactic Robinson--Schensted correspondence (Theorem [Theorem 9](#theorem:robinson_left_stalactic){reference-type="ref" reference="theorem:robinson_left_stalactic"}), the size of the $\equiv_{\mathsf{lSt}}$-class of $\mathbf{w}$ is exactly the number of increasing patience-sorting tableaux of the same shape as $\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{\mathbf{w}}$ with entries in $[|\mathbf{w}|]$. We can obtain the unique column reading of an increasing patience-sorting tableau by choosing $W_1,\dots,W_k \subseteq [|\mathbf{w}|]$ such that $|W_i| = |\mathbf{w}|_{x_i}$ and, for each $1 \leq i < j \leq k$, we have $\min(W_i) < \min(W_j)$ and $W_i \cap W_j = \emptyset$. Hence, we can see that $\min(W_i)$ is the least letter not contained in $\bigcup_{j=1}^{i-1} W_j$ but the remaining letters in $W_i$ can be any letters from $[|\mathbf{w}|]$ not in $\bigcup_{j=1}^{i-1} W_j$. Thus, there are $$\prod_{i=1}^{k}\begin{pmatrix} |\mathbf{w}| - 1 - \sum_{j=1}^{i-1} |\mathbf{w}|_{x_{j}} \\ |\mathbf{w}|_{x_i}-1 \end{pmatrix}$$ possible column readings of increasing patience-sorting tableaux, and therefore, words over $\mathbb{N}$ in the same $\equiv_{\mathsf{lSt}}$-class of $\mathbf{w}$. Furthermore, by expanding the product, we can see that $$\prod_{i=1}^{k}\begin{pmatrix} |\mathbf{w}| - 1 - \sum_{j=1}^{i-1} |\mathbf{w}|_{x_j} \\ |\mathbf{w}|_{x_i}-1 \end{pmatrix} = \frac{(|\mathbf{w}|-1)!}{\left(\prod_{i=1}^k(|\mathbf{w}|_{x_i}-1)!\right)\prod_{t=2}^k(|\mathbf{w}| - \sum_{j=1}^{t-1}|\mathbf{w}|_{x_j})}.$$ Finally, by noting that $|\mathbf{w}| - \sum_{j=1}^{t-1}|\mathbf{w}|_{x_j} = \sum_{j=t}^k|\mathbf{w}|_{x_j}$ and that $|\mathbf{w}| = \sum_{j=1}^k|\mathbf{w}|_{x_j}$, we have the result. ◻ **Proposition 52**. *Let $\mathbf{w}\in \mathbb{N}^*$. 
Let $\mathrm{supp}\parens{\mathbf{w}} = \{x_1,\dots,x_k\}$ and suppose $\mathbf{w}$ has $m$ simple letters. Then, there are $$\frac{|\mathbf{w}|!}{m! \cdot \prod_{i=1}^k|\mathbf{w}|_{x_i}!}$$ words over $\mathbb{N}$ in the same $\equiv_{\mathsf{jSt}}$-class of $\mathbf{w}$.* *Proof.* By Proposition [Proposition 17](#prop:jst_characterisation){reference-type="ref" reference="prop:jst_characterisation"}, $\mathbf{u}\equiv_{{\mathsf{jSt}}} \mathbf{v}$ if and only if $\mathrm{cont}\parens{\mathbf{u}} = \mathrm{cont}\parens{\mathbf{v}}$ and $\overline{\mathbf{u}} = \overline{\mathbf{v}}$. Thus, we aim to count the number of rearrangements of $\mathbf{w}$ which preserve the order of the $m$ simple letters, that is, rearrangements which are equal to $\overline{\mathbf{w}}$ when restricted to the simple letters. There are $$\frac{|\mathbf{w}|!}{\prod_{i=1}^k|\mathbf{w}|_{x_i}!}$$ distinct rearrangements of $\mathbf{w}$ and $m!$ rearrangements of $\overline{\mathbf{w}}$. Thus, there are $$\frac{|\mathbf{w}|!}{m! \cdot \prod_{i=1}^k|\mathbf{w}|_{x_i}!}$$ words over $\mathbb{N}$ in the same $\equiv_{\mathsf{jSt}}$-class of $\mathbf{w}$. ◻ We prove by different means the following known proposition: **Proposition 53** (Proposition 5, [@priez_binary_trees]). *Let $\mathbf{w}\in \mathbb{N}^*$. Let $\mathrm{supp}\parens{\mathbf{w}} = \{x_1 < \dots < x_k\}$ and for all $1 \leq i \leq k$, let $m_i$ be the sum of the multiplicities of the node labelled $x_i$ and its descendants in $\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{w}}$. Then, there are $$\frac{|\mathbf{w}|!}{\prod_{i=1}^k(|\mathbf{w}|_{x_i}-1)! \cdot m_i}$$ words over $\mathbb{N}$ in the same $\equiv_{\mathsf{lTg}}$-class of $\mathbf{w}$.* *Proof.* We prove this by induction on the size of $\mathrm{supp}\parens{\mathbf{w}}$. If $|\mathrm{supp}\parens{\mathbf{w}}| = 1$, then there is a unique word in the $\equiv_{\mathsf{lTg}}$-class of $\mathbf{w}$, so the formula holds.
For the induction step, suppose the formula holds when $|\mathrm{supp}\parens{\mathbf{w}}| < k$. Now, let $|\mathrm{supp}\parens{\mathbf{w}}| = k$ and $\mathbf{w}= x_l\mathbf{w}'$ for some $l \in [k]$ and $\mathbf{w}' \in \mathrm{supp}\parens{\mathbf{w}}^*$. We define the sets $X = \{x_1,\dots,x_{l-1}\}$ and $Y = \{x_{l+1},\dots,x_k\}$. Now, we may apply the inductive hypothesis to $\mathbf{w}[X]$ and $\mathbf{w}[Y]$, from which we obtain that there are $$\frac{|\mathbf{w}[X]|!}{\prod_{i=1}^{l-1}(|\mathbf{w}|_{x_i}-1)!m_i} \quad \text{and} \quad \frac{|\mathbf{w}[Y]|!}{\prod_{i=l+1}^k(|\mathbf{w}|_{x_i}-1)!m_i}$$ words over $\mathbb{N}$ in the same $\equiv_{\mathsf{lTg}}$-class as $\mathbf{w}[X]$ and $\mathbf{w}[Y]$ respectively. We aim to show that there are $$\begin{pmatrix} |\mathbf{w}|-1 \\ |\mathbf{w}|_{x_l}-1 \end{pmatrix} \begin{pmatrix} |\mathbf{w}[X]| + |\mathbf{w}[Y]| \\ |\mathbf{w}[X]| \end{pmatrix} \frac{|\mathbf{w}[X]|!|\mathbf{w}[Y]|!}{\prod_{i\neq l}(|\mathbf{w}|_{x_i}-1)!m_i}$$ words over $\mathbb{N}$ in the same $\equiv_{\mathsf{lTg}}$-class as $\mathbf{w}$. The first factor gives the number of choices for the positions of $x_l$ in a word in $[\mathbf{w}]_{\mathsf{lTg}}$ as, by the characterisation of equality in ${\mathsf{lTg}}$, its first letter has to be $x_l$, but there are no restrictions on the remaining occurrences of $x_l$ in the word. The second factor then gives the number of choices for the positions of letters $x_1, \dots, x_{l-1}$ in the word, without distinction between the letters, after choosing the positions of $x_l$. Notice that these choices determine the positions of letters $x_{l+1}, \dots, x_k$ in the word, without distinction between the letters. The final factor gives us the number of possible rearrangements of $\mathbf{w}[X]$ and $\mathbf{w}[Y]$, as given by the induction hypothesis.
Remark that a word chosen in this way is equal to $\mathbf{w}$ in ${\mathsf{lTg}}$ since any letter from $X$ can commute with any letter from $Y$ as the word begins with $x_l$. Thus, the above formula counts the number of words over $\mathbb{N}$ in the same $\equiv_{\mathsf{lTg}}$-class as $\mathbf{w}$. Finally, by noting that $|\mathbf{w}| = m_l$ and that $$\begin{pmatrix} |\mathbf{w}[X]| + |\mathbf{w}[Y]| \\ |\mathbf{w}[X]| \end{pmatrix}|\mathbf{w}[X]|!|\mathbf{w}[Y]|! = (|\mathbf{w}|-|\mathbf{w}|_{x_l})!,$$ the result follows. ◻ The following proposition shows that we are able to obtain the size of a $\equiv_{\mathsf{jTg}}$-class, by computing the size of multiple $\equiv_{\mathsf{hypo}}$-classes. By a result of Cain and Malheiro [@cm_hypo_crystal Theorem 3], these are known to be efficiently computable. **Proposition 54**. *Let $\mathbf{w}\in \mathbb{N}^*$. Let $\mathrm{supp}\parens{\mathbf{w}} = \{x_1 < \dots < x_k\}$ and $A_1,\dots,A_l$ be all the intervals of $\mathrm{supp}\parens{\mathbf{w}}$ such that $A_j$ only contains simple letters and $A_j \cup \{a\}$ is not an interval, for any simple letter $a \notin A_j$. Let $h(\mathbf{w}[A_j])$ be the size of the hypoplactic class of $\mathbf{w}[A_j]$. Then, there are $$\frac{|\mathbf{w}|!}{\prod_{i=1}^{k}|\mathbf{w}|_{x_i}!} \prod_{j=1}^{l} \frac{h(\mathbf{w}[A_j])}{|A_j|!}$$ words over $\mathbb{N}$ in the same $\equiv_{\mathsf{jTg}}$-class of $\mathbf{w}$.* *Proof.* Let $\mathbf{v}\in \mathbb{N}^*$. By Proposition [Proposition 19](#prop:jtg_characterisation){reference-type="ref" reference="prop:jtg_characterisation"}, $\mathbf{w}\equiv_{{\mathsf{jTg}}} \mathbf{v}$ if and only if $\mathrm{cont}\parens{\mathbf{w}} = \mathrm{cont}\parens{\mathbf{v}}$ and $\mathbf{w}[A_j] \equiv_{\mathsf{hypo}}\mathbf{v}[A_j]$ for all $1 \leq j \leq l$.
We now aim to show that there are $$\left(\prod_{j=1}^{l}\begin{pmatrix} |\mathbf{w}| - \sum_{r=1}^{j-1} |A_{r}| \\ |A_{j}| \end{pmatrix}h(\mathbf{w}[A_j])\right)\frac{(|\mathbf{w}| - \sum_{s=1}^l|A_s|)!}{\prod_{i=1}^{k}|\mathbf{w}|_{x_i}!}$$ words over $\mathbb{N}$ in the same $\equiv_{\mathsf{jTg}}$-class as $\mathbf{w}$. The first factor gives the product of the number of possible positions of the letters from $A_j$ in any rearrangement of $\mathbf{w}$, without distinction between the letters, by the number of rearrangements of $\mathbf{w}[A_j]$ which are $\equiv_{\mathsf{hypo}}$-congruent to $\mathbf{w}[A_j]$. The second factor is the number of positions of the remaining letters in any rearrangement of $\mathbf{w}$, as there is no restriction on their positions. Note that if $x \in A_j$ for some $j$, then $|\mathbf{w}|_{x} = 1$ and therefore $|\mathbf{w}|_{x}! = 1$. As such, we do not need to exclude $|\mathbf{w}|_{x}!$ from the product $\prod_{i=1}^{k}|\mathbf{w}|_{x_i}!$ in the final factor. Finally, by noting that $$\prod_{j=1}^{l}\begin{pmatrix} |\mathbf{w}| - \sum_{r=1}^{j-1} |A_r| \\ |A_{j}| \end{pmatrix} = \frac{|\mathbf{w}|!}{(|\mathbf{w}| - \sum_{s=1}^l|A_s|)! \cdot \prod_{j=1}^l|A_j|!},$$ the result follows. ◻ ### The meet cases {#subsubsection:The_meet_cases} We now give recursive formulas for the meet cases. In order to compute the size of $\equiv_{\mathsf{mSt}}$ and $\equiv_{\mathsf{mTg}}$-classes, it is easier to visualise a word of said classes as a linear extension of a partially ordered set (poset for short). Let $P = (I,\leq_P)$ be a poset. Then, a linearly ordered set $Q = (I, \leq_Q)$ is a *linear extension* of $P$ if $i \leq_P j$ implies $i \leq_Q j$, for all $i,j \in I$.
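Counting linear extensions directly is feasible for the small posets used in examples, and gives a brute-force check for the formulas of this subsection. A minimal sketch (the encoding is an assumption: elements $0,\dots,n-1$ with the order generated by a set of cover pairs), counting extensions by repeatedly removing a minimal element:

```python
from functools import lru_cache

def linear_extensions(n, covers):
    """Count linear extensions of the poset on {0, ..., n-1} whose
    order is generated by the cover pairs (x, y), each meaning x < y."""
    # direct predecessors of each element
    preds = {x: frozenset(y for (y, z) in covers if z == x) for x in range(n)}

    @lru_cache(maxsize=None)
    def count(remaining):
        if not remaining:
            return 1
        # an element may come next once none of its predecessors remain
        return sum(count(remaining - {x}) for x in remaining
                   if preds[x].isdisjoint(remaining))

    return count(frozenset(range(n)))
```

For instance, two $2$-element chains $0 < 1$ and $2 < 3$ together with the relations $0 < 2$ (least below least) and $1 < 3$ (greatest below greatest), the shape that recurs in the posets of this subsection, admit exactly two linear extensions.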
In the following, we define a class of posets: for $k \in \mathbb{N}$, $X = (X_1,\dots, X_k) \in \mathbb{N}^k$, and a permutation $\tau$ of $[k]$, let $P[X;\tau]$ be the poset built by taking $k$ chains, where the $i$-th chain has $X_i$ elements, and requiring the least (resp. greatest) element of the $i$-th (resp. $\tau_i$-th) chain to be less than the least (resp. greatest) element of the $(i+1)$-th (resp. $\tau_{i+1}$-th) chain. We represent these posets by graphs and refer to their elements as 'nodes'. The order relations are represented by arrows, in the sense that an arrow starts in one node and ends in another if the element corresponding to the former node is less than the element corresponding to the latter node. **Proposition 55**. *Let $\mathbf{w}\in \mathbb{N}^*$. Then, the words in $[\mathbf{w}]_{\mathsf{mSt}}$ are in bijection with the linear extensions of $P[X;\tau]$, where $k = |\mathrm{supp}\parens{\mathbf{w}}|$, $X_i = |\mathbf{w}|_{{\overrightarrow{\rho_\mathbf{w}}}^{-1}(i)}$ and $\tau_i = {\overrightarrow{\rho_\mathbf{w}}}({\overleftarrow{\rho_\mathbf{w}}}^{-1}(i))$, for $1 \leq i \leq k$.* *Proof.* Let $\mathbf{u}\in [\mathbf{w}]_{\mathsf{mSt}}$. From $\mathbf{u}$ we define a chain $P'$, with $|\mathbf{u}|$ elements, by letting the $i$-th least element in $P'$ be the $j$-th node in the $l$-th chain of $P[X;\tau]$ if the $i$-th letter of $\mathbf{u}$ is the $j$-th appearance of ${\overrightarrow{\rho_\mathbf{u}}}^{-1}(l)$. We want to show that $P'$ is a linear extension of $P[X;\tau]$. First, note that every node of $P[X;\tau]$ appears in $P'$ as $\overrightarrow{\rho_\mathbf{w}} = \overrightarrow{\rho_\mathbf{u}}$ and $\mathrm{cont}\parens{\mathbf{u}} = \mathrm{cont}\parens{\mathbf{w}}$, therefore $X_l = |\mathbf{w}|_{{\overrightarrow{\rho_\mathbf{w}}}^{-1}(l)} = |\mathbf{u}|_{{\overrightarrow{\rho_\mathbf{u}}}^{-1}(l)}$. Next, we show that if a node is less than another node in $P[X;\tau]$ then the same happens in $P'$.
By the definition of $P'$, it is clear that, in $P'$, the $j$-th node in the $l$-th chain of $P[X;\tau]$ is less than the $(j+1)$-th node in the $l$-th chain, for all $1 \leq l \leq k$ and $1 \leq j < X_l$. For each $1 \leq l < k$, in $P'$, the least element in the $l$-th chain of $P[X;\tau]$ is less than the least element in the $(l+1)$-th chain as $$l = {\overrightarrow{\rho_\mathbf{u}}}({\overrightarrow{\rho_\mathbf{u}}}^{-1}(l)) < {\overrightarrow{\rho_\mathbf{u}}}({\overrightarrow{\rho_\mathbf{u}}}^{-1}(l+1)) = l+1,$$ and hence the first occurrence of ${\overrightarrow{\rho_\mathbf{u}}}^{-1}(l)$ appears to the left of the first occurrence of ${\overrightarrow{\rho_\mathbf{u}}}^{-1}(l+1)$ in $\mathbf{u}$. By its definition, in $P'$, the greatest element of the $\tau_l$-th chain of $P[X;\tau]$ is less than the greatest element of the $\tau_{l+1}$-th chain if the last occurrence of ${\overrightarrow{\rho_\mathbf{u}}}^{-1}(\tau_l)$ appears to the left of the last occurrence of ${\overrightarrow{\rho_\mathbf{u}}}^{-1}(\tau_{l+1})$ in $\mathbf{u}$, that is, if ${\overleftarrow{\rho_\mathbf{u}}}{\overrightarrow{\rho_\mathbf{u}}}^{-1}(\tau_l) < {\overleftarrow{\rho_\mathbf{u}}}{\overrightarrow{\rho_\mathbf{u}}}^{-1}(\tau_{l+1})$. As $\overrightarrow{\rho_\mathbf{w}} = \overrightarrow{\rho_\mathbf{u}}$, $\overleftarrow{\rho_\mathbf{w}} = \overleftarrow{\rho_\mathbf{u}}$ and $\tau_l = {\overrightarrow{\rho_\mathbf{w}}}({\overleftarrow{\rho_\mathbf{w}}}^{-1}(l))$, we have that ${\overleftarrow{\rho_\mathbf{u}}}({\overrightarrow{\rho_\mathbf{u}}}^{-1}(\tau_l)) = l$ for each $1 \leq l \leq k$. Therefore, ${\overleftarrow{\rho_\mathbf{u}}}{\overrightarrow{\rho_\mathbf{u}}}^{-1}(\tau_l) < {\overleftarrow{\rho_\mathbf{u}}}{\overrightarrow{\rho_\mathbf{u}}}^{-1}(\tau_{l+1})$.
Now, let $P'$ be a linear extension of $P[X;\tau]$ and define $\mathbf{u}\in \mathbb{N}^*$ by letting the $i$-th letter of $\mathbf{u}$ be ${\overrightarrow{\rho_\mathbf{w}}}^{-1}(l)$ if the $i$-th least element of $P'$ is in the $l$-th chain of $P[X;\tau]$. By the definition of $\mathbf{u}$ and $P[X;\tau]$, $\mathrm{cont}\parens{\mathbf{u}} = \mathrm{cont}\parens{\mathbf{w}}$ and, by considering the least elements of each chain in $P[X;\tau]$, we have that $\overrightarrow{\rho_\mathbf{w}} = \overrightarrow{\rho_\mathbf{u}}$. To see that $\overleftarrow{\rho_\mathbf{w}} = \overleftarrow{\rho_\mathbf{u}}$, note that, for any $a,b \in \mathrm{supp}\parens{\mathbf{u}}$, we have that $\overleftarrow{\rho_\mathbf{u}}(a) < \overleftarrow{\rho_\mathbf{u}}(b)$ if and only if the greatest node in the $\overrightarrow{\rho_\mathbf{w}}(a)$-th chain of $P[X,\tau]$ is less than the greatest node in the $\overrightarrow{\rho_\mathbf{w}}(b)$-th chain. This happens if and only if there exist $1 \leq s < t \leq k$ such that $\overrightarrow{\rho_\mathbf{w}}(a) = \tau_s$ and $\overrightarrow{\rho_\mathbf{w}}(b) = \tau_t$, by the definition of $P[X,\tau]$. Furthermore, as $\tau_s = {\overrightarrow{\rho_\mathbf{w}}}({\overleftarrow{\rho_\mathbf{w}}}^{-1}(s))$, we have $a = {\overleftarrow{\rho_\mathbf{w}}}^{-1}(s)$ and, similarly, $b = {\overleftarrow{\rho_\mathbf{w}}}^{-1}(t)$. Therefore, $$\overleftarrow{\rho_\mathbf{w}}(a) = s < t = \overleftarrow{\rho_\mathbf{w}}(b),$$ and hence $\mathbf{u}\in [\mathbf{w}]_{\mathsf{mSt}}$. ◻ To improve the readability of the following theorem, we introduce the following function: let $k,n,l \in \mathbb{N}$, such that $k \geq 2$, and $1 \leq n < l \leq k$. Let $M = (m_1,\dots,m_{l-n}) \in \mathbb{N}_0^{l-n}$ and $\lVert M \rVert_1 := \sum_{i=1}^{l-n} m_i$. 
We define the function $f_{k,n,l,M} \colon \mathbb{N}_0^k \times \mathbb{N}_0^{k-1} \to \mathbb{N}_0^{k-1} \times \mathbb{N}_0^{k-2}$, where for $V = (v_1,\dots,v_k) \in \mathbb{N}_0^k$, $B = (b_1,\dots, b_{k-1}) \in \mathbb{N}_0^{k-1}$, the first coordinate of $f_{k,n,l,M}(V,B)$ is given by $$(v_1,\dots,v_{n-1},v_{n+1},\dots,v_{l-1},v_{l}+v_{n}-\lVert M \rVert_1,v_{l+1},\dots,v_k),$$ and the second coordinate is given by $$(b_2 + m_2,\dots,b_{l-1} + m_{l-1},b_{l},\dots,b_{k-1})$$ when $n = 1$, and by $$(b_1, \dots, b_{n-2}, b_{n-1} + 1 + b_n + m_1, b_{n+1} + m_2,\dots,b_{l-1} + m_{l-n}, b_{l},\dots,b_{k-1})$$ when $n > 1$. Let $V \in \mathbb{N}_0^k$, $B \in \mathbb{N}_0^{k-1}$, and $\sigma$ be a permutation of $[k]$ such that if $\sigma_i > \sigma_j$, then $v_{\sigma_j} \neq 0$, for any $1 \leq i < j \leq k$. Define $G[V;B;\sigma]$ to be the poset built by taking $k$ chains, where the $i$-th chain has $v_i + 1$ elements, and requiring the least element in the $i$-th chain to be less than the least element in the $(i+1)$-th chain, and to have a chain of $b_i$ nodes between them. Moreover, we require the greatest element in the $\sigma_i$-th chain to be less than the greatest element in the $\sigma_{i+1}$-th chain. The following theorem follows reasoning similar to that given by Pan in [@ranpan]. In the following, we define $\mathcal{L}(P)$ to be the number of linear extensions of the poset $P$ and, for ease of notation, $\begin{pmatrix} -1 \\ 0 \end{pmatrix} := 1$. By inserting a node in an edge, we mean that we remove said edge and add two extra edges, one starting in the initial node and ending in the inserted node, and the other starting in the inserted node and ending in the terminal node. **Theorem 56**. *Let $k \geq 2$, $V = (v_1, \dots, v_k) \in \mathbb{N}_0^k$, $B = (b_1, \dots, b_{k-1}) \in \mathbb{N}_0^{k-1}$ and $\sigma$ be a permutation of $[k]$.
Then, when $\sigma_1 < \sigma_2$, $\mathcal{L}(G[V;B;\sigma])$ is equal to $$\sum_{\substack{M\in \mathbb{N}_0^{\sigma_2-\sigma_1} \\ 0 \leq \lVert M \rVert_1 \leq v_{\sigma_1}}} \mathcal{L}_M \cdot \begin{pmatrix} v_{\sigma_2} - 1 + v_{\sigma_1} - \lVert M \rVert_1 \\ v_{\sigma_1} - \lVert M \rVert_1 \end{pmatrix} \prod_{i=1}^{\sigma_2-\sigma_1} \begin{pmatrix} b_{\sigma_1 - 1 + i} + m_i \\ m_i \end{pmatrix},$$ where $\mathcal{L}_M = 1$ if $k=2$ and $\mathcal{L}_M = \mathcal{L}(G[f_{k,\sigma_1,\sigma_2,M}(V,B);\mathrm{Std}\parens{\sigma_2\sigma_3\cdots \sigma_k}])$ otherwise, and, when $\sigma_2 < \sigma_1$, is equal to $$\sum_{\substack{M \in \mathbb{N}_0^{\sigma_1-\sigma_2} \\ 0 \leq \lVert M \rVert_1 \leq v_{\sigma_2}-1}} \mathcal{L}'_M \cdot \begin{pmatrix} v_{\sigma_1} + v_{\sigma_2} - 1 - \lVert M \rVert_1 \\ v_{\sigma_2} - 1 - \lVert M \rVert_1 \end{pmatrix} \prod_{i=1}^{\sigma_1-\sigma_2} \begin{pmatrix} b_{\sigma_2 - 1 + i} + m_i \\ m_i \end{pmatrix},$$ where $\mathcal{L}_M' = 1$ if $k = 2$ and $\mathcal{L}'_M = \mathcal{L}(G[f_{k,\sigma_2,\sigma_1,M}(V,B);\mathrm{Std}\parens{\sigma_1\sigma_3\cdots \sigma_k}])$ otherwise.* *Proof.* In this proof, we show that we are able to take any poset of the form $G[V;B;\sigma]$ and create a new set of (sometimes smaller) posets in which the sum of the number of linear extensions of the new posets is equal to the number of linear extensions of the original poset. We begin with the $\sigma_1 < \sigma_2$ case. To create a new poset, we take the nodes in the $\sigma_1$-th chain, excluding the least node, and insert each of these $v_{\sigma_1}$ nodes into either an edge between the least node of the $\sigma_1$-th chain and the least node of the $\sigma_2$-th chain, or an edge in the $\sigma_2$-th chain. In the case where $v_{\sigma_1} = 0$, we just remove the edge from the only node in the $\sigma_1$-th chain to the greatest node of the $\sigma_2$-th chain. 
Let $M = (m_1,\dots, m_{\sigma_2 - \sigma_1})$ be such that $0 \leq \lVert M \rVert_1 \leq v_{\sigma_1}$ where $m_i$ is the number of nodes inserted into the edges between the least nodes of the $(\sigma_1 + i - 1)$-th chain and the $(\sigma_1 + i)$-th chain. Then, $\lVert M \rVert_1$ nodes are inserted into the bottom edges and $v_{\sigma_1}-\lVert M \rVert_1$ nodes are inserted into the $\sigma_2$-th chain. By the definition of $f_{k,\sigma_1,\sigma_2,M}(V,B)$, the poset obtained from this process is given by $$G[f_{k,\sigma_1,\sigma_2,M}(V,B);\mathrm{Std}\parens{\sigma_2\sigma_3\cdots \sigma_k}],$$ with the exception of the case where $\sigma_1 = 1$, where we obtain a poset of this form with a chain of least nodes before the first chain. As these nodes are the least in the poset and already linearly ordered, we can remove them without changing the number of linear extensions. As such, in all cases, we remove the $\sigma_1$-th chain and relabel the remaining chains to start at 1 and end at $k-1$. For each $M$, there are $\left(\begin{smallmatrix} v_{\sigma_2} -1 + v_{\sigma_1} - \lVert M \rVert_1 \\ v_{\sigma_1} - \lVert M \rVert_1 \end{smallmatrix}\right)$ ways to insert the $v_{\sigma_1} -\lVert M \rVert_1$ nodes into the $\sigma_2$-th chain and, for each $1 \leq i \leq \sigma_2 -\sigma_1$, there are $\left(\begin{smallmatrix} b_{\sigma_1 - 1 + i} + m_i \\ m_i \end{smallmatrix}\right)$ ways to insert the $m_i$ nodes into the edges between the least nodes of the $(\sigma_1 - 1 + i)$-th chain and $(\sigma_1 + i)$-th chain. In the case where $v_{\sigma_1} = 0$, we only have one possible choice of $M$, so the number of linear extensions of the new poset is the same as that of $G[V;B;\sigma]$. For the $\sigma_1 > \sigma_2$ case, first notice that $v_{\sigma_2} > 0$ by the definition of $G[V;B;\sigma]$. 
To create a new poset, we instead take the nodes in the $\sigma_2$-th chain, excluding the least node, move the greatest node of the $\sigma_2$-th chain to the top of the $\sigma_1$-th chain, and then insert each of the remaining $v_{\sigma_2} - 1$ nodes into either an edge between the least nodes of the $\sigma_2$-th chain and the $\sigma_1$-th chain or an edge in the $\sigma_1$-th chain (which has now been extended by a node). Let $M = (m_1,\dots, m_{\sigma_1 - \sigma_2})$ be such that $0 \leq \lVert M \rVert_1 \leq v_{\sigma_2}-1$ where $m_i$ is the number of nodes inserted into an edge between the least nodes of the $(\sigma_2 - 1 + i)$-th chain and the $(\sigma_2 + i)$-th chain. Then, $\lVert M \rVert_1$ nodes are inserted into the bottom edges and $v_{\sigma_2} - 1 -\lVert M \rVert_1$ nodes are inserted into the $\sigma_1$-th chain. Notice that we do not count the greatest node of the $\sigma_2$-th chain moving to the top of the $\sigma_1$-th chain. By the definition of $f_{k,\sigma_2,\sigma_1,M}(V,B)$, the poset obtained from this process is given by $$G[f_{k,\sigma_2,\sigma_1,M}(V,B);\mathrm{Std}\parens{\sigma_1\sigma_3\cdots \sigma_k}]$$ with the exception of the case where $\sigma_2 = 1$, where, as in the previous case, we obtain a poset of this form with a chain of least nodes before the first chain, which we can remove without changing the number of linear extensions. As such, in all cases, we remove the $\sigma_2$-th chain and relabel the remaining chains to start at 1 and end at $k-1$.
Similarly to above, for each $M$, there are $\left(\begin{smallmatrix} v_{\sigma_1} + v_{\sigma_2} - 1 - \lVert M \rVert_1 \\ v_{\sigma_1} \end{smallmatrix}\right)$ ways to insert the $v_{\sigma_2} -1 - \lVert M \rVert_1$ nodes into the $\sigma_1$-th chain and, for $1 \leq i \leq \sigma_1 - \sigma_2$, there are $\left(\begin{smallmatrix} b_{\sigma_2 - 1 + i} + m_i \\ m_i \end{smallmatrix}\right)$ ways to insert the $m_i$ nodes into the edges between the least nodes of the $(\sigma_2 - 1 + i)$-th chain and $(\sigma_2 + i)$-th chain. The result follows from the fact that each choice of inserting nodes results in a different poset, uniquely characterised by the ordering of specific elements, namely the total ordering of the nodes in the $\sigma_1$-th and $\sigma_2$-th chains and the nodes between the least nodes of these two chains. As such, no double-counting occurs when summing the number of linear extensions of the posets obtained using this method. ◻ We are able to count the number of linear extensions of a poset of the above form by recursively applying the above theorem until each poset is a chain. The time complexity of such a process is as follows: **Proposition 57**. *The algorithm given in Theorem [Theorem 56](#theorem:recursive_formula_linear_extensions_mst_poset){reference-type="ref" reference="theorem:recursive_formula_linear_extensions_mst_poset"} to compute the number of linear extensions of a poset has time complexity $\mathcal{O}(n^{2k-2}k!)$ where $n = \lVert V \rVert_1 + \lVert B \rVert_1 + k$.* *Proof.* We prove this result by induction on $k \geq 2$. First of all, notice that each binomial coefficient in the formulas can be computed in $\mathcal{O}(n)$ operations. Furthermore, each term of the sum is a product of at most $k$ binomial coefficients. Note that $\lVert M \rVert_1 \leq n - k$ in either formula, and $|\sigma_1 - \sigma_2| \leq k-1$.
Therefore, there are at most $\left(\begin{smallmatrix} n-k+k-1 \\ n-k \end{smallmatrix}\right) = \left(\begin{smallmatrix} n-1 \\ n-k \end{smallmatrix}\right)$ choices for $M$, and thus the sums have $\mathcal{O}(n)$ summands, in the worst-case scenario. In the $k=2$ case, we have that $\mathcal{L}_M = \mathcal{L}'_M = 1$. As such, with the previously given information, we have that the algorithm performs $$\mathcal{O}(n) \cdot \mathcal{O}(2) \cdot \mathcal{O}(n) = \mathcal{O}(n^2)$$ operations. As the induction hypothesis, assume that the formula holds up to the case $k=m-1$, for some $m > 2$. As such, in the $k=m$ case, we have that $\mathcal{L}_M$ and $\mathcal{L}'_M$ can be computed by performing $\mathcal{O}(n^{2m-4}(m-1)!)$ operations. Thus, the algorithm performs $$\mathcal{O}(n) \cdot \mathcal{O}(n^{2m-4}(m-1)!) \cdot \mathcal{O}(m) \cdot \mathcal{O}(n) = \mathcal{O}(n^{2m-2} m!)$$ operations. As such, the result follows. ◻ We can use Proposition [Proposition 55](#prop:mst_poset_bijection){reference-type="ref" reference="prop:mst_poset_bijection"} and Theorem [Theorem 56](#theorem:recursive_formula_linear_extensions_mst_poset){reference-type="ref" reference="theorem:recursive_formula_linear_extensions_mst_poset"} to compute the size of $\equiv_{\mathsf{mSt}}$-classes since, for any $X = (X_1,\dots,X_k) \in \mathbb{N}^k$ and any permutation $\tau$ of $[k]$, we have that $$P[X;\tau] = G[(X_1-1,\dots,X_k-1);(0,\dots,0);\tau].$$ The time complexity to perform such a computation is $\mathcal{O}(n^{2k-2}k!)$, by Proposition [Proposition 57](#prop:recursive_formula_complexity){reference-type="ref" reference="prop:recursive_formula_complexity"}, where $n$ is the length of the words in the $\equiv_{\mathsf{mSt}}$-class. We now look at the case of $\equiv_{\mathsf{mTg}}$-classes. First, for any binary tree $T$, we define $\Delta(T)$ (resp.
$\nabla(T)$) to be the poset $(N, \leq)$, where $N$ is the set of nodes of $T$ and, for all $i,j \in N$, $i \leq j$ if the $i$-th node is an ancestor (resp. descendant) of the $j$-th node in $T$. For $k \in \mathbb{N}$, $X = (X_1,\dots, X_k) \in \mathbb{N}^k$, and $(T_L,T_R)$ a pair of twin BTMs, define $Q[X;T_L;T_R]$ to be the poset built by taking $k$ chains of nodes, where the $i$-th chain has $X_i$ elements, and requiring the least (resp. greatest) element of the $i$-th chain to be less than the least (resp. greatest) element of the $j$-th chain if $i \leq j$ in $\Delta(T_L)$ (resp. $\nabla(T_R)$). **Proposition 58**. *Let $\mathbf{w}\in \mathbb{N}^*$. Then, each word in $[\mathbf{w}]_{\mathsf{mTg}}$ is in bijection with a linear extension of $Q[X;T_L;T_R]$ where $(T_L,T_R) = \mathrm{P}_{{\mathsf{mTg}}}\parens{\mathbf{w}}$, $\mathrm{supp}\parens{\mathbf{w}} = \{x_1 < \cdots < x_k\}$, and $X \in \mathbb{N}^k$ is such that $X_i = |\mathbf{w}|_{x_i}$.* *Proof.* Let $\mathbf{u}\in [\mathbf{w}]_{\mathsf{mTg}}$. From $\mathbf{u}$ we define a chain $Q'$ with $|\mathbf{u}|$ elements, by letting the $i$-th least node of $Q'$ be the $j$-th node in the $l$-th chain of $Q[X;T_L;T_R]$ if the $i$-th letter of $\mathbf{u}$ is the $j$-th appearance of $x_l$. We want to show that $Q'$ is a linear extension of $Q[X;T_L;T_R]$. First, note that every node of $Q[X;T_L;T_R]$ appears in $Q'$ as $\mathrm{cont}\parens{\mathbf{u}} = \mathrm{cont}\parens{\mathbf{w}}$ and $X_i = |\mathbf{w}|_{x_i}$. Next, we show that if a node is less than another in $Q[X;T_L;T_R]$ then the same happens in $Q'$. By the definition of $Q'$, it is clear that, in $Q'$, the $j$-th node in the $l$-th chain of $Q[X;T_L;T_R]$ is less than the $(j+1)$-th node in the $l$-th chain, for all $1 \leq l \leq k$ and $1 \leq j < X_l$. Suppose that the least node in the $i$-th chain of $Q[X;T_L;T_R]$ is less than the least node in the $j$-th chain, that is, that $i < j$ in $\Delta(T_L)$.
We can see that this also holds in $Q'$ as $\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{u}} = \mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{w}} = T_L$. We can similarly show that if the greatest node in the $i$-th chain of $Q[X;T_L;T_R]$ is less than the greatest node in the $j$-th chain, the same happens in $Q'$. Therefore, $Q'$ is a linear extension of $Q[X;T_L;T_R]$. Now, let $Q'$ be a linear extension of $Q[X;T_L;T_R]$ and define $\mathbf{u}\in \mathbb{N}^*$ by letting the $i$-th letter of $\mathbf{u}$ be $x_l$ if the $i$-th least element of $Q'$ is in the $l$-th chain of $Q[X;T_L;T_R]$. Then, $\mathbf{u}\in [\mathbf{w}]_{\mathsf{mTg}}$, as, by its definition and that of $Q[X;T_L;T_R]$, $\mathrm{cont}\parens{\mathbf{u}} = \mathrm{cont}\parens{\mathbf{w}}$ and, from the least and greatest elements of each chain in $Q[X;T_L;T_R]$, we obtain the shape of $\mathrm{P}_{{\mathsf{mTg}}}\parens{\mathbf{u}}$, which coincides with that of $(T_L,T_R)$. ◻ Proposition [Proposition 58](#prop:mtg_poset_bijection){reference-type="ref" reference="prop:mtg_poset_bijection"} allows us to view the problem of computing the size of $\equiv_{\mathsf{mTg}}$-classes as a problem of counting linear extensions of posets. With the following theorem, we can count these linear extensions by splitting the problem into multiple $\equiv_{\mathsf{mSt}}$ cases and counting linear extensions of these instead. We first require the following notation: Let $(T_L,T_R)$ be a pair of twin BSTMs. Then, for a poset $P$ with the same underlying set as $\nabla(T_R)$, let $\nabla(T_R,P)$ be the poset $\nabla(T_R)$ with the extra condition that if $i \leq j$ in $P$ then $i \leq j$ in $\nabla(T_R,P)$. Moreover, for any chain $P' = (I, \leq)$, let $\lambda_{P'} \colon I \to \{1,\dots,|I|\}$ be the function that sends each $x \in I$ to $m$ if $x$ is the $m$-th least element in $P'$. **Theorem 59**. 
*Let $(T_L,T_R)$ be a pair of twin BSTMs with $k$ nodes and $X \in \mathbb{N}^k$ such that $X_i$ is the multiplicity of the $i$-th node of the trees. Then, $$\mathcal{L}(Q[X;T_L;T_R]) = \sum_{L} \sum_{R} \mathcal{L}(P[Y_{L};\lambda_L \lambda_R^{-1}])$$ where the first sum is over all linear extensions $L$ of $\Delta(T_L)$ and the second sum is over all linear extensions $R$ of $\nabla(T_R,p_L)$ where $a < x$ in $p_L$ if and only if $a$ is simple in $(T_L,T_R)$ and $a < x$ in $L$, and $(Y_{L})_i = X_{\lambda_L^{-1}(i)}$.* *Proof.* Throughout this proof, let $Q = Q[X;T_L;T_R]$. First, we show that given linear extensions $L$ of $\Delta(T_L)$, $R$ of $\nabla(T_R,p_L)$ and $P'$ of $P[Y_{L};\lambda_L \lambda_R^{-1}]$, we are able to construct a unique linear extension $Q'$ of $Q$. We then show that given a linear extension of $Q$ we are able to construct linear extensions $L$ of $\Delta(T_L)$, $R$ of $\nabla(T_R,p_L)$ and $P'$ of $P[Y_{L};\lambda_L \lambda_R^{-1}]$, and we show that this triple is uniquely determined by $Q'$. Let $L$ be a linear extension of $\Delta(T_L)$, $R$ be a linear extension of $\nabla(T_R,p_L)$, and $P = P[Y_{L};\lambda_L\lambda_R^{-1}]$. Now, suppose $P'$ is a linear extension of $P$ and define a chain $Q'$ by letting the $i$-th least element of $Q'$ be the $j$-th node in the $l$-th chain of $Q$ if the $i$-th least element of $P'$ is the $j$-th node in the $\lambda_L(l)$-th chain of $P$. We now show that $Q'$ satisfies the conditions of being a linear extension of $Q$. First, note that the $l$-th chain of $Q$ and the $\lambda_L(l)$-th chain of $P$ are of the same length as $X_l = {(Y_L)}_{\lambda_L(l)}$. Thus, $Q$ and $Q'$ have the same underlying set. We now aim to show that if a node is less than another in $Q$ then the same happens in $Q'$. By the definition of $Q'$, in $Q'$, the $j$-th node in the $l$-th chain of $Q$ is less than the $(j+1)$-th node in the $l$-th chain.
Suppose that the least node in the $i$-th chain of $Q$ is less than the least node in the $j$-th chain, that is, that $i < j$ in $\Delta(T_L)$. By the definition of $Q'$, this happens in $Q'$ if, in $P'$, the least node in the $\lambda_L(i)$-th chain of $P$ is less than the least node in the $\lambda_L(j)$-th chain, that is, if $\lambda_L(i) < \lambda_L(j)$, which is true as $i < j$ in $\Delta(T_L)$ and $L$ is a linear extension of $\Delta(T_L)$. Next, suppose that the greatest node in the $i$-th chain of $Q$ is less than the greatest node in the $j$-th chain, that is, that $i < j$ in $\nabla(T_R)$. Again, by the definition of $Q'$, this happens in $Q'$ if, in $P'$, the greatest node in the $\lambda_L(i)$-th chain of $P$ is less than the greatest node in the $\lambda_L(j)$-th chain of $P$, that is, by the definition of $P$, if there exist $s < t$ such that $\lambda_L(i) = \lambda_L\lambda_R^{-1}(s)$ and $\lambda_L(j) = \lambda_L\lambda_R^{-1}(t)$. So it suffices to show that $\lambda_R(i) < \lambda_R(j)$, which is true as $i < j$ in $\nabla(T_R)$ and $R$ is a linear extension of $\nabla(T_R,p_L)$, and hence a linear extension of $\nabla(T_R)$. Therefore, $Q'$ is a linear extension of $Q$, and as such, this method gives a well-defined function. We now show that this function is injective. Given linear extensions $P_1' \neq P_2'$ of $P$, it follows from the definition of $Q'$ that linear extensions $Q_1'$ and $Q_2'$ of $Q$, obtained respectively from $P_1'$ and $P_2'$ by the method given above, must be different. Furthermore, notice that $L$ (resp. $R$) determines the order, in $Q'$, of the least (resp. greatest) nodes of the chains of $Q$. Thus, different choices of $L$ or $R$ result in different linear extensions of $Q$. Hence, this function is injective. Now, let $Q'$ be a linear extension of $Q$. Let $L$ (resp. $R$) be the chain on $[k]$ where $i < j$ if, in $Q'$, the least (resp. greatest) node in the $i$-th chain of $Q$ is less than the least (resp.
greatest) node in the $j$-th chain. Clearly, $L$ and $R$ are linear extensions of $\Delta(T_L)$ and $\nabla(T_R)$, respectively. Furthermore, for any chain of $Q$ with only one node, if the node is less than the least node in the $j$-th chain of $Q$, then it is less than the greatest node in the $j$-th chain. Thus, $R$ is a linear extension of $\nabla(T_R,p_L)$ as well. Let $P = P[Y_{L};\lambda_L\lambda_R^{-1}]$. Define a chain $P'$ by letting the $i$-th least element of $P'$ be the $j$-th node in the $l$-th chain of $P$ if the $i$-th least element of $Q'$ is the $j$-th node in the $\lambda_L^{-1}(l)$-th chain of $Q$. We now show that $P'$ satisfies the conditions of being a linear extension of $P$. First, note that the $l$-th chain of $P$ and the $\lambda^{-1}_L(l)$-th chain of $Q$ are of the same length as $(Y_L)_{l} = X_{\lambda_L^{-1}(l)}$. Thus, $P$ and $P'$ have the same underlying set. Furthermore, by the definition of $P'$, in $P'$, the $j$-th node in the $l$-th chain of $P$ is less than the $(j+1)$-th node in the $l$-th chain. Suppose $i < j$. We now show that, in $P'$, the least node in the $i$-th chain of $P$ is less than the least node in the $j$-th chain. By the definition of $P'$, this happens in $P'$ if, in $Q'$, the least node in the $\lambda_L^{-1}(i)$-th chain of $Q$ is less than the least node in the $\lambda_L^{-1}(j)$-th chain, that is, if $i = \lambda_L\lambda_L^{-1}(i) < \lambda_L\lambda_L^{-1}(j) = j$, which holds. Finally, we show that, in $P'$, the greatest element in the $\lambda_L\lambda_R^{-1}(i)$-th chain of $P$ is less than the greatest element in the $\lambda_L\lambda_R^{-1}(j)$-th chain. 
By the definition of $P'$, this happens in $P'$ if, in $Q'$, the greatest node in the $\lambda_L^{-1}\lambda_L\lambda_R^{-1}(i)$-th chain of $Q$ is less than the greatest node in the $\lambda_L^{-1}\lambda_L\lambda_R^{-1}(j)$-th chain, that is, if $$\lambda_R\lambda_L^{-1}\lambda_L\lambda_R^{-1}(i) = i < j = \lambda_R\lambda_L^{-1}\lambda_L\lambda_R^{-1}(j),$$ which holds. Therefore, $P'$ is a linear extension of $P$, and as such, this method gives a well-defined function. We now show that this function is injective. Suppose that, given linear extensions $Q_1' \neq Q_2'$ of $Q$, we obtain, by the method above, the same linear extensions $L$ of $\Delta(T_L)$ and $R$ of $\nabla(T_R,p_L)$. As such, the posets $P_1$ and $P_2$ given by the method above are equal. Let $P_1'$ and $P_2'$ be linear extensions of $P_1$, obtained by the method above from $Q_1'$ and $Q_2'$, respectively. By the definition of $P'$, it follows that $P_1' \neq P_2'$. Thus, different choices of $Q'$ which give the same $L$ and $R$ result in different linear extensions of $P$. Hence, this function is injective. As such, we can conclude that the cardinality of the set of linear extensions of $Q$ is the sum of cardinalities of the sets of linear extensions of posets of the form $P[Y_L;\lambda_L\lambda_R^{-1}]$, summing over all linear extensions $L$ of $\Delta(T_L)$ and $R$ of $\nabla(T_R,p_L)$. ◻ The number of linear extensions of $\Delta(T_L)$ and $\nabla(T_R,p_L)$ depends on the number of nodes of $T_L$ and $T_R$, but not on the sum of multiplicities of the nodes. By [@atkinson_linear_extensions_trees], computing each of these numbers takes $\mathcal{O}(k^2)$ operations. As such, computing $\mathcal{L}(Q[X;T_L;T_R])$ takes $\mathcal{O}(n^{2k-2}k!k^4)$ operations, where $n$ is the sum of the multiplicities of the nodes.
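For small instances, counts of linear extensions such as those above can be cross-checked by exhaustive enumeration. The following sketch is our own illustration (the function `count_linear_extensions` is not from the text, and it enumerates all permutations, so it is exponential in the poset size, unlike the polynomial-time recursions above):

```python
from itertools import permutations

def count_linear_extensions(n, relations):
    """Count linear extensions of a poset on {0, ..., n-1} by brute force.

    `relations` lists pairs (i, j) meaning element i must precede j.
    Intended only as a sanity check on small examples.
    """
    count = 0
    for perm in permutations(range(n)):
        pos = {v: idx for idx, v in enumerate(perm)}
        if all(pos[i] < pos[j] for i, j in relations):
            count += 1
    return count

# Two 2-element chains 0 < 1 and 2 < 3, with no cross-relations:
free = count_linear_extensions(4, [(0, 1), (2, 3)])
# Adding the cross-relation 0 < 2, as when the least nodes of two
# chains are ordered by the tree structure:
tied = count_linear_extensions(4, [(0, 1), (2, 3), (0, 2)])
print(free, tied)  # 6 3
```

The first count is the number of shuffles of two $2$-chains, $\binom{4}{2} = 6$; constraining the least elements cuts it to $3$.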
As such, from Proposition [Proposition 58](#prop:mtg_poset_bijection){reference-type="ref" reference="prop:mtg_poset_bijection"}, we can conclude that using the algorithm given in Theorem [Theorem 59](#theorem:recursive_formula_linear_extensions_mtg_poset){reference-type="ref" reference="theorem:recursive_formula_linear_extensions_mtg_poset"} allows us to compute the size of $\equiv_{\mathsf{mTg}}$-classes in $\mathcal{O}(n^{2k-2}k!k^4)$ operations, where $n$ is the length of the words and $k$ is the size of their support. Notice that, for a fixed support, as the length of a word increases, the behaviour of the time-complexity of counting the linear extensions of the $\equiv_{\mathsf{mTg}}$-class of the word is the same as that of counting the linear extensions of $\equiv_{\mathsf{mSt}}$-class of the same word. ## 'Hook-length'-like formulas for pairs of twin stalactic tableaux and binary search trees with multiplicities {#subsection:Pairs_of_twin_stalactic_tableaux_and_binary_search_trees_with_multiplicities} For the rest of this section, we focus on 'hook-length'-like formulas for the meet-stalactic and meet-taiga $\mathrm{P}$-symbols. Given a stalactic tableau, we determine how many distinct stalactic tableaux form a pair of twin stalactic tableaux with it. We then consider the case of BSTMs and find bounds for the corresponding question. **Proposition 60**. *Let $T_L$ be a stalactic tableau. Let $a_1,\dots,a_k$ be the labels of the simple columns in $T_L$, ordered from left-to-right. For $1 \leq i \leq k$, let $m_i$ be the number of columns to the right of and including the column labelled $a_i$. Then, there are $$\frac{|\mathrm{supp}\parens{T_L}|!}{m_1\cdots m_k}$$ distinct stalactic tableaux $T_R$ such that $(T_L,T_R)$ is a pair of twin stalactic tableaux.* *Proof.* Recall that stalactic tableaux are uniquely determined by their top row and content. Thus, there are $|\mathrm{supp}\parens{T_L}|!$ distinct stalactic tableaux with the same content as $T_L$. 
By the definition of [pairs of twin stalactic tableaux](#definition:TwinStalacticTableauxCondition), we need to count the number of stalactic tableaux $T_R$ such that for every simple column labelled $c$ and column labelled $d$, if $\rho_{T_L}(c) < \rho_{T_L}(d)$, then $\rho_{T_R}(c) < \rho_{T_R}(d)$. Let $\mathbf{w}$ be the reading of the top row of $T_L$. Then, there exist words $\mathbf{w}_i$ over $\mathrm{supp}\parens{T_L}$ such that $\mathbf{w}= \mathbf{w}_1a_1\mathbf{w}_2\cdots \mathbf{w}_k a_k\mathbf{w}_{k+1}$. A rearrangement $\mathbf{u}$ of $\mathbf{w}$ is the reading of the top row of a stalactic tableau $T_R$, where $(T_L,T_R)$ is a pair of twin stalactic tableaux, if and only if $\mathbf{u}[X_i]$ begins with $a_i$ for each $1 \leq i \leq k$, where $X_i = \mathrm{supp}\parens{a_i \mathbf{w}_{i+1} \cdots \mathbf{w}_k a_k \mathbf{w}_{k+1}}$. We now recursively define any such rearrangement $\mathbf{u}$. View $\mathbf{u}$ as an empty $|\mathbf{w}|$-tuple. For $1 \leq i \leq k$, reading $\mathbf{w}_i$ from left-to-right, choose any unoccupied positions in $\mathbf{u}$ to be the positions of the letters of $\mathbf{w}_i$, then choose the least remaining unoccupied position to be the position of $a_i$. Finally, add the letters of $\mathbf{w}_{k+1}$ to the remaining unoccupied positions of $\mathbf{u}$. Note that there are $|\mathrm{supp}\parens{T_L}|+1-i$ choices for the position of the $i$-th letter of $\mathbf{w}$ in $\mathbf{u}$, unless it is simple, in which case there is only one option. Hence, as each rearrangement $\mathbf{u}$ corresponds to a distinct stalactic tableau $T_R$, there are $$\frac{|\mathrm{supp}\parens{T_L}|!}{m_1\cdots m_k}$$ distinct stalactic tableaux $T_R$ such that $(T_L,T_R)$ is a pair of twin stalactic tableaux, where $m_i = |X_i|$.
◻ Notice that, given a stalactic tableau $T_R$, one can obtain a similar formula for the number of distinct stalactic tableaux $T_L$ such that $(T_L,T_R)$ is a pair of twin stalactic tableaux, by symmetrical reasoning and considering the number of columns to the left of and including the simple columns instead. **Proposition 61**. *Let $T_L$ be a BSTM with $n$ nodes. Suppose there are $k$ simple nodes in $T_L$ that are not leaves. Let $R(T_L)$ denote the number of distinct BSTMs $T_R$ such that $(T_L,T_R)$ is a pair of twin BSTMs. Then, $$C_{n-k} \leq R(T_L) \leq C_n$$ where $C_m = \frac{(2m)!}{(m+1)!m!}$ is the $m$-th Catalan number.* *Proof.* It is a well-known fact that there are exactly $C_n$ binary trees with $n$ nodes, thus $R(T_L) \leq C_n$. We aim to show that $C_{n-k} \leq R(T_L)$. First note that, as there are $k$ nodes which are simple non-leaf nodes, there are $n-k$ nodes which are either leaves or non-simple. Let $\mathbf{w}$ be such that $\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{w}} = T_L$. Then, let $\mathbf{w}_1$ (resp. $\mathbf{w}_2$) be the word obtained from $\mathbf{w}$ by removing (resp. keeping) all letters except the first occurrence of each letter which does not label a leaf in $T_L$. Note that for any rearrangement $\mathbf{v}$ of $\mathbf{w}_2$, $\mathrm{P}^\rightarrow_{{\mathsf{lTg}}}\parens{\mathbf{w}_1 \mathbf{v}} = T_L$ as $\mathrm{supp}\parens{\mathbf{w}_1}$ contains every letter that does not label a leaf in $T_L$. On the other hand, for each pair of rearrangements $\mathbf{v}$, $\mathbf{v}'$ of $\mathbf{w}_2$ with $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{v}} \neq \mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{v}'}$, we have that $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}_1\mathbf{v}} \neq \mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}_1\mathbf{v}'}$, since ${\mathsf{rTg}}$ is left-cancellative. 
As such, $R(T_L) \geq C_{n-k}$ as there are $C_{n-k}$ BSTMs with the same labels as $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{w}_2}$, since $|\mathrm{supp}\parens{\mathbf{w}_2}| = n-k$. ◻ # Syntacticity of plactic-like congruences {#section:Syntacticity_of_plactic_like_monoids} Recall that, for each $\mathbb{N}$-labelled combinatorial object which identifies classes of a plactic-like monoid, its *shape* is the unlabelled object, that is, unlabelled by letters of $\mathbb{N}$. For each plactic-like monoid $\mathsf{M}$ with associated $\mathrm{P}$-symbol $\mathrm{P}_\mathsf{M}$, let $\mathrm{Sh}_\mathsf{M}$ be the function which maps a word $\mathbf{w}\in \mathbb{N}^*$ to the shape of its combinatorial object $\mathrm{P}_\mathsf{M}{(\mathbf{w})}$. We now study the syntacticity of plactic-like congruences with regard to their corresponding shape functions. *Remark 62*. We consider the shape functions of BSTMs and pairs of twin BSTMs as giving combinatorial objects still labelled by multiplicities. The following is a generalisation of the usual definition of syntactic congruences, extended to functions from a free monoid to any set: Let $S$ be any set and let $f \colon \mathbb{N}^* \to S$. We say a congruence $\equiv$ of $\mathbb{N}^*$ is the (two-sided) *syntactic congruence* of $f$ if $$\mathbf{u}\equiv \mathbf{v}\Leftrightarrow (\forall \mathbf{r},\mathbf{s}\in \mathbb{N}^*, f(\mathbf{r}\mathbf{u}\mathbf{s}) = f(\mathbf{r}\mathbf{v}\mathbf{s})).$$ This is the coarsest congruence of $\mathbb{N}^*$ which is compatible with $f$. Similarly, a left congruence $\equiv$ is the *left syntactic congruence* of $f$ if $$\mathbf{u}\equiv \mathbf{v}\Leftrightarrow (\forall \mathbf{r}\in \mathbb{N}^*, f(\mathbf{r}\mathbf{u}) = f(\mathbf{r}\mathbf{v})).$$ This is the coarsest left congruence of $\mathbb{N}^*$ which is compatible with $f$. One defines a *right syntactic congruence* in a parallel way.
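The definitions above quantify over all contexts $\mathbf{r}$ and $\mathbf{s}$, so they cannot be checked exhaustively; truncating the context length, however, gives a necessary condition that is convenient for small experiments. The following sketch is our own illustration (the function `syntactic_test` and the bound `max_len` are assumptions, not from the text):

```python
from itertools import product

def syntactic_test(u, v, f, alphabet, max_len=3):
    """Check f(r + u + s) == f(r + v + s) for all contexts r, s over
    `alphabet` of length at most max_len.

    This is a necessary (not sufficient) condition for u and v to be
    identified by the two-sided syntactic congruence of f; the true
    definition quantifies over all contexts.
    """
    contexts = [list(c) for k in range(max_len + 1)
                for c in product(alphabet, repeat=k)]
    return all(f(r + u + s) == f(r + v + s)
               for r in contexts for s in contexts)

# With f = sorted, i.e. the content of the word, rearrangements of the
# same letters pass the test, while words of different content fail:
print(syntactic_test([1, 2], [2, 1], sorted, [1, 2]))  # True
print(syntactic_test([1, 1], [1, 2], sorted, [1, 2]))  # False
```

Here $f$ is the content map (realised by `sorted`), whose syntactic congruence identifies exactly the words with equal content.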
Remark that, for congruences, left or right syntacticity implies syntacticity. Lascoux and Schützenberger stated without proof [@LS1978 Théorème 2.15] that the plactic congruence is the syntactic congruence of the shape function of (semi-standard) Young tableaux. Recently, Abram and Reutenauer have shown a generalisation of this result, namely that the plactic congruence is also the left and right syntactic congruence of said shape function [@abram_reutenauer_stylic_monoid_2021 Theorem 13.3]. On the other hand, Novelli has shown that the hypoplactic congruence is both the left and right syntactic congruence of the shape function of quasi-ribbon tableaux [@novelli_hypoplactic Subsection 5.4]. We now look at the sylvester, Baxter, stalactic and taiga cases. Notice that, since we are already working with two-sided congruences, we only need to prove the necessary condition of the definition of (left or right) syntacticity. Recall the right strict insertion algorithm given in [@hivert_sylvester Definition 7], which computes a unique right strict binary search tree $\mathrm{P}^\leftarrow_{{\mathsf{sylv}}}\parens{\mathbf{w}}$ from a word $\mathbf{w}\in \mathbb{N}^*$. **Proposition 63**. *The sylvester (resp. \#-sylvester) congruence is both the left and right syntactic congruence of $\mathrm{Sh}_{\mathsf{sylv}}$ (resp. $\mathrm{Sh}_{\mathsf{sylv}^{\#}}$).* *Proof.* We prove the result for the sylvester monoid only, as the case of the \#-sylvester monoid is analogous. Let $\mathbf{u},\mathbf{v}\in \mathbb{N}^*$ be such that $\mathbf{u}\not\equiv_{\mathsf{sylv}}\mathbf{v}$ and $\mathrm{Sh}_{{\mathsf{sylv}}}\parens{\mathbf{u}} = \mathrm{Sh}_{{\mathsf{sylv}}}\parens{\mathbf{v}}$. Then, since there is at most one way to label a binary tree with the letters of a word and obtain a right strict binary search tree, we must have $\mathrm{cont}\parens{\mathbf{u}} \neq \mathrm{cont}\parens{\mathbf{v}}$.
Let $a \in \mathbb{N}$ be the least letter such that $|\mathbf{u}|_a \neq |\mathbf{v}|_a$. Assume, without loss of generality, that $|\mathbf{u}|_a > |\mathbf{v}|_a$. Then, $\mathrm{P}^\leftarrow_{{\mathsf{sylv}}}\parens{\mathbf{u}a}$ and $\mathrm{P}^\leftarrow_{{\mathsf{sylv}}}\parens{\mathbf{v}a}$ will both have a root node labelled $a$. Notice that the number of nodes of the left subtree of the root node in $\mathrm{P}^\leftarrow_{{\mathsf{sylv}}}\parens{\mathbf{u}a}$ (resp. $\mathrm{P}^\leftarrow_{{\mathsf{sylv}}}\parens{\mathbf{v}a}$) is given by $\sum_{c \in [a]} |\mathbf{u}|_c$ (resp. $\sum_{c \in [a]} |\mathbf{v}|_c$). By hypothesis, if $c < a$, then $|\mathbf{u}|_c = |\mathbf{v}|_c$. As such, the left subtree of the root node in $\mathrm{P}^\leftarrow_{{\mathsf{sylv}}}\parens{\mathbf{u}a}$ will have more nodes than the left subtree of the root node in $\mathrm{P}^\leftarrow_{{\mathsf{sylv}}}\parens{\mathbf{v}a}$, and thus $\mathrm{Sh}_{{\mathsf{sylv}}}\parens{\mathbf{u}a} \neq \mathrm{Sh}_{{\mathsf{sylv}}}\parens{\mathbf{v}a}$. Therefore, $\equiv_{\mathsf{sylv}}$ is the right syntactic congruence of $\mathrm{Sh}_{{\mathsf{sylv}}}$. Now, let $b \in \mathbb{N}$ be the greatest letter such that $|\mathbf{u}|_b \neq |\mathbf{v}|_b$. Assuming, in order to obtain a contradiction, that $\mathrm{P}^\leftarrow_{{\mathsf{sylv}}}\parens{b\mathbf{u}}$ and $\mathrm{P}^\leftarrow_{{\mathsf{sylv}}}\parens{b\mathbf{v}}$ have the same shape, then we can conclude that $\mathrm{P}^\leftarrow_{{\mathsf{sylv}}}\parens{b\mathbf{u}}$ and $\mathrm{P}^\leftarrow_{{\mathsf{sylv}}}\parens{b\mathbf{v}}$ both have a leaf node labelled $b$, which has the same position in the inorder traversals of the trees.
Recall that the inorder reading of a right strict binary search tree gives the (unique) weakly increasing reading of the tree, and that the first occurrence of $b$ in the inorder reading corresponds to the first node labelled $b$ in the inorder traversal, which is always the node labelled $b$ of greatest depth. Since $\mathrm{P}^\leftarrow_{{\mathsf{sylv}}}\parens{b\mathbf{u}}$ and $\mathrm{P}^\leftarrow_{{\mathsf{sylv}}}\parens{b\mathbf{v}}$ have different content, their inorder reading is different, and in particular, the shortest suffix containing all the letters $b$ is of greater length in one reading than in the other. As such, the leaf node labelled $b$ is in a different position in the inorder traversals of the trees, which contradicts our hypothesis, and hence implies that $\mathrm{Sh}_{{\mathsf{sylv}}}\parens{b\mathbf{u}} \neq \mathrm{Sh}_{{\mathsf{sylv}}}\parens{b\mathbf{v}}$. Therefore, $\equiv_{\mathsf{sylv}}$ is the left syntactic congruence of $\mathrm{Sh}_{{\mathsf{sylv}}}$. ◻ Notice that $\mathrm{Sh}_{{\mathsf{sylv}}}$ is not the same function as $\mathrm{Sh}_{{\mathsf{sylv}^{\#}}}$: they are both functions from $\mathbb{N}^*$ to the set of binary trees, however, for $\mathbf{w}\in \mathbb{N}^*$, $\mathrm{Sh}_{{\mathsf{sylv}}}(\mathbf{w})$ (resp. $\mathrm{Sh}_{{\mathsf{sylv}^{\#}}}(\mathbf{w})$) is defined as the shape of the right (resp. left) strict binary search tree obtained from $\mathbf{w}$ by the right (resp. left) strict insertion algorithm. For example, $$\mathrm{Sh}_{{\mathsf{sylv}^{\#}}}(12) = \begin{tikzpicture}[tinybst,baseline=-3mm] \node {} child[missing] child { node {}} ; \end{tikzpicture} \quad \text{and} \quad \mathrm{Sh}_{{\mathsf{sylv}}}(12) = \begin{tikzpicture}[tinybst,baseline=-3mm] \node {} child { node {}} child[missing] ; \end{tikzpicture}$$ **Proposition 64**. 
*The Baxter congruence is both the left and right syntactic congruence of $\mathrm{Sh}_{\mathsf{baxt}}$.* *Proof.* Let $\mathbf{u},\mathbf{v}\in \mathbb{N}^*$ be such that $\mathbf{u}\not\equiv_{\mathsf{baxt}}\mathbf{v}$. Then, $\mathbf{u}\not\equiv_{\mathsf{sylv}}\mathbf{v}$ or $\mathbf{u}\not\equiv_{\mathsf{sylv}^{\#}}\mathbf{v}$. Assume, without loss of generality, that $\mathbf{u}\not\equiv_{\mathsf{sylv}}\mathbf{v}$. Since $\equiv_{\mathsf{sylv}}$ is both the left and right syntactic congruence of $\mathrm{Sh}_{{\mathsf{sylv}}}$, there exist $\mathbf{r},\mathbf{s}\in \mathbb{N}^*$ such that $\mathrm{Sh}_{{\mathsf{sylv}}}\parens{\mathbf{r}\mathbf{u}} \neq \mathrm{Sh}_{{\mathsf{sylv}}}\parens{\mathbf{r}\mathbf{v}}$ and $\mathrm{Sh}_{{\mathsf{sylv}}}\parens{\mathbf{u}\mathbf{s}} \neq \mathrm{Sh}_{{\mathsf{sylv}}}\parens{\mathbf{v}\mathbf{s}}$. Since the shape of a pair of twin binary search trees is determined by the shapes of its left and right binary search trees, we have that $\mathrm{Sh}_{{\mathsf{baxt}}}\parens{\mathbf{r}\mathbf{u}} \neq \mathrm{Sh}_{{\mathsf{baxt}}}\parens{\mathbf{r}\mathbf{v}}$ and $\mathrm{Sh}_{{\mathsf{baxt}}}\parens{\mathbf{u}\mathbf{s}} \neq \mathrm{Sh}_{{\mathsf{baxt}}}\parens{\mathbf{v}\mathbf{s}}$. ◻ **Proposition 65**. *The right-stalactic (resp. left-stalactic) congruence is the left (resp. right) syntactic congruence of $\mathrm{Sh}_{\mathsf{rSt}}$ (resp. $\mathrm{Sh}_{\mathsf{lSt}}$), but not its right (resp. left) syntactic congruence.* *Proof.* We prove the result for the right-stalactic monoid only, as the case of the left-stalactic monoid is analogous. Let $\mathbf{u},\mathbf{v}\in \mathbb{N}^*$ be such that $\mathbf{u}\not\equiv_{\mathsf{rSt}}\mathbf{v}$ and $\mathrm{Sh}_{{\mathsf{rSt}}}\parens{\mathbf{u}} = \mathrm{Sh}_{{\mathsf{rSt}}}\parens{\mathbf{v}}$.
Notice that, if $\mathrm{cont}\parens{\mathbf{u}} \neq \mathrm{cont}\parens{\mathbf{v}}$, then, for $a \in \mathbb{N}$ such that $|\mathbf{u}|_a > |\mathbf{v}|_a$, we have $\mathrm{Sh}_{{\mathsf{rSt}}}\parens{a\mathbf{u}} \neq \mathrm{Sh}_{{\mathsf{rSt}}}\parens{a\mathbf{v}}$, since these stalactic tableaux are obtained by adding a new $a$-cell to the column of $a$-cells, which has a different height in $\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{u}}$ than in $\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{v}}$. On the other hand, if $\mathrm{cont}\parens{\mathbf{u}} = \mathrm{cont}\parens{\mathbf{v}}$, then $\rho_{\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{u}}} \neq \rho_{\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{v}}}$. Choosing $b \in \mathbb{N}$ such that $\rho_{\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{u}}}(b) \neq \rho_{\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{v}}}(b)$, we have that $\mathrm{Sh}_{{\mathsf{rSt}}}\parens{b\mathbf{u}} \neq \mathrm{Sh}_{{\mathsf{rSt}}}\parens{b\mathbf{v}}$, since we are increasing the heights of different columns in the shapes of the stalactic tableaux. Therefore, $\equiv_{\mathsf{rSt}}$ is the left syntactic congruence of $\mathrm{Sh}_{{\mathsf{rSt}}}$. On the other hand, consider the words $1221$ and $1122$. Notice that $\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{1221} = \tikz[tableau]\matrix{ 2 \& 1 \\ 2 \& 1 \\ }; \quad \text{and} \quad \mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{1122} = \tikz[tableau]\matrix{ 1 \& 2 \\ 1 \& 2 \\ };$ and in particular, $\mathrm{Sh}_{{\mathsf{rSt}}}\parens{1221} = \mathrm{Sh}_{{\mathsf{rSt}}}\parens{1122}$ and $\mathrm{cont}\parens{1221} = \mathrm{cont}\parens{1122}$. 
Furthermore, notice that, for any $\mathbf{u}\in \mathbb{N}^*$, we have that $\mathrm{Sh}_{{\mathsf{rSt}}}\parens{1221\mathbf{u}} = \mathrm{Sh}_{{\mathsf{rSt}}}\parens{1122\mathbf{u}}$, since $\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{1221\mathbf{u}}$ (resp. $\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{1122\mathbf{u}}$) is obtained from $\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{u}}$ by attaching to the left of it all columns of $\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{1221}$ (resp. $\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{1122}$) labelled by letters which do not occur in $\mathbf{u}$, followed by attaching the remaining columns to the bottom of their respectively labelled columns in $\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{u}}$. Therefore, $\equiv_{\mathsf{rSt}}$ is not the right syntactic congruence of $\mathrm{Sh}_{{\mathsf{rSt}}}$. ◻ **Proposition 66**. *The meet-stalactic congruence is neither the left nor the right syntactic congruence of $\mathrm{Sh}_{\mathsf{mSt}}$, but it is still the (two-sided) syntactic congruence of $\mathrm{Sh}_{\mathsf{mSt}}$.* *Proof.* Consider the words $1122$, $1221$ and $2112$, whose pairs of twin stalactic tableaux are, respectively, $\left(\tikz[tableau]\matrix{ 1 \& 2 \\ 1 \& 2 \\ }; , \tikz[tableau]\matrix{ 1 \& 2 \\ 1 \& 2 \\ };\right), \quad \left(\tikz[tableau]\matrix{ 1 \& 2 \\ 1 \& 2 \\ }; , \tikz[tableau]\matrix{ 2 \& 1 \\ 2 \& 1 \\ };\right), \quad \text{and} \quad \left(\tikz[tableau]\matrix{ 2 \& 1 \\ 2 \& 1 \\ }; , \tikz[tableau]\matrix{ 1 \& 2 \\ 1 \& 2 \\ };\right)$. Notice that these pairs of twin stalactic tableaux all have the same content and shape, and that $\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{1122} = \mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{1221}$ and $\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{1122} = \mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{2112}$. 
By the reasoning given in the previous proof, for any $\mathbf{u}\in \mathbb{N}^*$, we have that $\mathrm{Sh}_{{\mathsf{rSt}}}\parens{1122\mathbf{u}} = \mathrm{Sh}_{{\mathsf{rSt}}}\parens{1221\mathbf{u}}$ and, by parallel reasoning, $\mathrm{Sh}_{{\mathsf{lSt}}}\parens{\mathbf{u}1122} = \mathrm{Sh}_{{\mathsf{lSt}}}\parens{\mathbf{u}2112}$. Furthermore, since $\equiv_{\mathsf{lSt}}$ and $\equiv_{\mathsf{rSt}}$ are congruences, we also have $\mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{1122 \mathbf{u}} = \mathrm{P}^\rightarrow_{{\mathsf{lSt}}}\parens{1221 \mathbf{u}}$ and $\mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{u}1122} = \mathrm{P}^\leftarrow_{{\mathsf{rSt}}}\parens{\mathbf{u}2112}$. As such, we have that $\mathrm{Sh}_{{\mathsf{mSt}}}\parens{1122\mathbf{u}} = \mathrm{Sh}_{{\mathsf{mSt}}}\parens{1221\mathbf{u}}$ and $\mathrm{Sh}_{{\mathsf{mSt}}}\parens{\mathbf{u}1122} = \mathrm{Sh}_{{\mathsf{mSt}}}\parens{\mathbf{u}2112}$. Therefore, $\equiv_{\mathsf{mSt}}$ is neither the left nor the right syntactic congruence of $\mathrm{Sh}_{\mathsf{mSt}}$. Let $\mathbf{u},\mathbf{v}\in \mathbb{N}^*$ be such that $\mathbf{u}\not\equiv_{\mathsf{mSt}}\mathbf{v}$. Then, $\mathbf{u}\not\equiv_{\mathsf{rSt}}\mathbf{v}$ or $\mathbf{u}\not\equiv_{\mathsf{lSt}}\mathbf{v}$. Assume, without loss of generality, that $\mathbf{u}\not\equiv_{\mathsf{rSt}}\mathbf{v}$. Since $\equiv_{\mathsf{rSt}}$ is the left syntactic congruence of $\mathrm{Sh}_{{\mathsf{rSt}}}$, there exists $\mathbf{w}\in \mathbb{N}^*$ such that $\mathrm{Sh}_{{\mathsf{rSt}}}\parens{\mathbf{w}\mathbf{u}} \neq \mathrm{Sh}_{{\mathsf{rSt}}}\parens{\mathbf{w}\mathbf{v}}$. Since the shape of a pair of twin stalactic tableaux is determined by the shapes of its left and right stalactic tableaux, we have that $\mathrm{Sh}_{{\mathsf{mSt}}}\parens{\mathbf{w}\mathbf{u}} \neq \mathrm{Sh}_{{\mathsf{mSt}}}\parens{\mathbf{w}\mathbf{v}}$.
Similarly, one can show that if $\mathbf{u}\not\equiv_{\mathsf{lSt}}\mathbf{v}$, then there exists $\mathbf{w}\in \mathbb{N}^*$ such that $\mathrm{Sh}_{{\mathsf{mSt}}}\parens{\mathbf{u}\mathbf{w}} \neq \mathrm{Sh}_{{\mathsf{mSt}}}\parens{\mathbf{v}\mathbf{w}}$. Therefore, $\equiv_{\mathsf{mSt}}$ is the syntactic congruence of $\mathrm{Sh}_{\mathsf{mSt}}$. ◻ **Proposition 67**. *The right-taiga (resp. left-taiga) congruence is both the left and right syntactic congruence of $\mathrm{Sh}_{\mathsf{rTg}}$ (resp. $\mathrm{Sh}_{\mathsf{lTg}}$).* *Proof.* We prove the result for the right-taiga monoid only, as the case of the left-taiga monoid is analogous. Let $\mathbf{u},\mathbf{v}\in \mathbb{N}^*$ be such that $\mathbf{u}\not\equiv_{\mathsf{rTg}}\mathbf{v}$ and $\mathrm{Sh}_{{\mathsf{rTg}}}\parens{\mathbf{u}} = \mathrm{Sh}_{{\mathsf{rTg}}}\parens{\mathbf{v}}$. Then, by the observation given after [@priez_binary_trees Definition 8], we must have $\mathrm{supp}\parens{\mathbf{u}} \neq \mathrm{supp}\parens{\mathbf{v}}$. Let $a \in \mathbb{N}$ be any letter such that $|\mathbf{u}|_a \neq 0$ and $|\mathbf{v}|_a = 0$. Then, $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{u}a}$ and $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{v}a}$ will have different shapes, since $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{v}a}$ will have one more node than $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{\mathbf{u}a}$. For the same reason, $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{a\mathbf{u}}$ and $\mathrm{P}^\leftarrow_{{\mathsf{rTg}}}\parens{a\mathbf{v}}$ will have different shapes. Therefore, $\equiv_{\mathsf{rTg}}$ is both the right and left syntactic congruence of $\mathrm{Sh}_{{\mathsf{rTg}}}$. 
◻ As with the case of $\mathrm{Sh}_{{\mathsf{sylv}}}$ and $\mathrm{Sh}_{{\mathsf{sylv}^{\#}}}$, notice that $\mathrm{Sh}_{{\mathsf{rTg}}}$ is not the same function as $\mathrm{Sh}_{{\mathsf{lTg}}}$: they are both functions from $\mathbb{N}^*$ to the set of BTMs; however, for $\mathbf{w}\in \mathbb{N}^*$, $\mathrm{Sh}_{{\mathsf{rTg}}}(\mathbf{w})$ (resp. $\mathrm{Sh}_{{\mathsf{lTg}}}(\mathbf{w})$) is defined as the shape of the BSTM obtained from $\mathbf{w}$ by reading it from right-to-left (resp. left-to-right) and inserting letters using Algorithm [[TgLI]{.sans-serif}](#alg:TaigaLeafInsertion). **Proposition 68**. *The meet-taiga congruence is both the left and right syntactic congruence of $\mathrm{Sh}_{\mathsf{mTg}}$.* *Proof.* The proof is analogous to the proof of Proposition [Proposition 64](#BaxterLeftRightSyntactic){reference-type="ref" reference="BaxterLeftRightSyntactic"}. ◻ In Table [2](#table:syntacticity){reference-type="ref" reference="table:syntacticity"}, we give a summary of the results on syntacticity of plactic-like congruences. 
| **Congruence** | **Syntacticity** | **Left Syn.** | **Right Syn.** | **Result** |
|----|----|----|----|----|
| ${\mathsf{plac}}$ |  |  |  | [@abram_reutenauer_stylic_monoid_2021 Theorem 13.3] |
| ${\mathsf{hypo}}$ |  |  |  | [@novelli_hypoplactic Subsection 5.4] |
| ${\mathsf{sylv}}$ |  |  |  | Proposition [Proposition 63](#SylvesterLeftRightSyntactic){reference-type="ref" reference="SylvesterLeftRightSyntactic"} |
| ${\mathsf{sylv}^{\#}}$ |  |  |  | Proposition [Proposition 63](#SylvesterLeftRightSyntactic){reference-type="ref" reference="SylvesterLeftRightSyntactic"} |
| ${\mathsf{baxt}}$ |  |  |  | Proposition [Proposition 64](#BaxterLeftRightSyntactic){reference-type="ref" reference="BaxterLeftRightSyntactic"} |
| ${\mathsf{lSt}}$ |  |  |  | Proposition [Proposition 65](#StalacticSyntactic){reference-type="ref" reference="StalacticSyntactic"} |
| ${\mathsf{rSt}}$ |  |  |  | Proposition [Proposition 65](#StalacticSyntactic){reference-type="ref" reference="StalacticSyntactic"} |
| ${\mathsf{mSt}}$ |  |  |  | Proposition [Proposition 66](#MeetStalacticSyntactic){reference-type="ref" reference="MeetStalacticSyntactic"} |
| ${\mathsf{lTg}}$ |  |  |  | Proposition [Proposition 67](#TaigaSyntactic){reference-type="ref" reference="TaigaSyntactic"} |
| ${\mathsf{rTg}}$ |  |  |  | Proposition [Proposition 67](#TaigaSyntactic){reference-type="ref" reference="TaigaSyntactic"} |
| ${\mathsf{mTg}}$ |  |  |  | Proposition [Proposition 68](#MeetTaigaSyntactic){reference-type="ref" reference="MeetTaigaSyntactic"} |

: Syntacticity of plactic-like congruences with regard to shape functions.

# Equational theories of the meet and join-stalactic and meet and join-taiga monoids {#section:Equational_theories_of_meet_and_join_stalactic_and_taiga_monoids}

We now characterise the identities satisfied by the monoids we defined. For a general background on universal algebra, see [@Bergman_universal_algebra; @bs_universal_algebra]. The following background is given in the context of monoids. 
An *identity* over an alphabet of variables $\mathcal{X}$ is a formal equality $\mathbf{u}\approx \mathbf{v}$, where $\mathbf{u}$ and $\mathbf{v}$ are words over $\mathcal{X}$, said to be *non-trivial* if $\mathbf{u}\neq \mathbf{v}$. A variable $x$ *occurs* in $\mathbf{u}\approx \mathbf{v}$ if $x$ occurs in $\mathbf{u}$ or $\mathbf{v}$, and $\mathbf{u}\approx \mathbf{v}$ is said to be *balanced* if $\mathbf{u}$ and $\mathbf{v}$ have the same content. Since two words with the same content must have the same length, we say the *length* of a balanced identity is the length of its left or right-hand side. Two identities are equivalent if one can be obtained from the other by renaming variables or swapping both sides of the identity. A monoid $M$ *satisfies* the identity $\mathbf{u}\approx \mathbf{v}$ if for every morphism $\psi\colon \mathcal{X}^* \to M$, referred to as an *evaluation*, we have $\psi(\mathbf{u}) = \psi(\mathbf{v})$. The *identity checking problem* of a monoid $M$ is the combinatorial decision problem $\textsc{Check-Id}{(M)}$ of deciding whether an identity is satisfied or not by $M$. The input of this problem is simply the identity, hence its time complexity is measured in terms of the size of the identity, that is, the sum of the lengths of each side of the formal equality. Given a class of monoids $\mathbb{K}$, the set of all identities simultaneously satisfied by all monoids in $\mathbb{K}$ is called its *equational theory*. On the other hand, given a set of identities $\Sigma$, the class of all monoids that satisfy all identities in $\Sigma$ is called its *variety*. By Birkhoff's $HSP$-theorem, a class of monoids is a variety if and only if it is closed under taking homomorphic images, submonoids and direct products. A *subvariety* is a subclass of a variety which is itself a variety. We say a variety is *generated* by a monoid $M$ if it is the least variety containing $M$, and denote it by $\mathbb{V}(M)$. 
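To illustrate these notions concretely: for a *finite* monoid given by its multiplication table, satisfaction of an identity can be decided by brute force over all evaluations of its variables. The following is a minimal sketch of ours, not taken from the text; the example monoid (a left-zero band with an adjoined identity, hence a left regular band) is likewise our own illustration.

```python
from itertools import product

def satisfies(mult, identity, u, v):
    """Decide whether the finite monoid with multiplication table `mult`
    (elements 0..len(mult)-1, identity element `identity`) satisfies the
    identity u ~ v, by checking every evaluation of the variables."""
    size = len(mult)
    variables = sorted(set(u) | set(v))

    def evaluate(word, psi):
        result = identity
        for x in word:
            result = mult[result][psi[x]]
        return result

    # The identity holds iff every evaluation agrees on both sides.
    return all(
        evaluate(u, dict(zip(variables, vals)))
        == evaluate(v, dict(zip(variables, vals)))
        for vals in product(range(size), repeat=len(variables))
    )

# Left-zero band with adjoined identity: 0 is the identity, and a*b = a
# for nonidentity a, b.  This is a left regular band.
M = [[0, 1, 2],
     [1, 1, 1],
     [2, 2, 2]]

print(satisfies(M, 0, "xyx", "xxy"))  # True
print(satisfies(M, 0, "xyx", "yxx"))  # False
print(satisfies(M, 0, "xy", "yx"))    # False: the monoid is not commutative
```

For the infinite monoids studied here this brute-force search is of course unavailable, which is why the combinatorial characterisations of their equational theories below are what make $\textsc{Check-Id}$ tractable.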
A congruence $\equiv$ on a monoid $M$ is *fully invariant* if $a \mathrel{\equiv} b$ implies $f(a) \mathrel{\equiv} f(b)$, for every $a,b \in M$ and every endomorphism $f$ of $M$. Equational theories can be viewed as fully invariant congruences on $\mathcal{X}^*$ (see, for example, [@bs_universal_algebra II§14]). The *free object* of rank $n$ of a variety $\mathbb{V}$ is the quotient of the free monoid over an $n$-letter alphabet by the equational theory of $\mathbb{V}$. The equational theory of the varietal join (resp. meet) of varieties $\mathbb{V}_1$ and $\mathbb{V}_2$ is the meet (resp. join) of the equational theories of $\mathbb{V}_1$ and $\mathbb{V}_2$. An identity $\mathbf{u}\approx \mathbf{v}$ is a *consequence* of a set of identities $\Sigma$ if it is in the equational theory generated by $\Sigma$. An *equational basis* $\mathcal{B}$ of a variety $\mathbb{V}$ is a subset of its equational theory such that each identity of the equational theory is a consequence of $\mathcal{B}$. An equational theory is *finitely based* if it admits a finite equational basis. The *axiomatic rank* of $\mathbb{V}$ is the least natural number such that $\mathbb{V}$ admits a basis where the number of distinct variables occurring in each identity of the basis does not exceed said number. Notice that if a variety is finitely based, then it has finite axiomatic rank. The identities satisfied by ${\mathsf{rSt}}$ have been independently studied by Cain *et al.* [@cain_johnson_kambites_malheiro_representations_2022] and Han and Zhang [@han2021preprint]: **Theorem 69** ([@cain_johnson_kambites_malheiro_representations_2022 Corollary 4.6],[@han2021preprint Lemma 2.1 and Theorem 2.3]). 
*The identities satisfied by ${\mathsf{rSt}}$ are precisely the balanced identities of the form $\mathbf{u}\approx \mathbf{v}$, where $\overleftarrow{\rho_\mathbf{u}} = \overleftarrow{\rho_\mathbf{v}}$.* By symmetric reasoning, we have that the identities satisfied by ${\mathsf{lSt}}$ are precisely the balanced identities of the form $\mathbf{u}\approx \mathbf{v}$, where $\overrightarrow{\rho_\mathbf{u}} = \overrightarrow{\rho_\mathbf{v}}$. **Theorem 70** ([@cain_johnson_kambites_malheiro_representations_2022 Corollary 4.6],[@han2021preprint Theorem 2.3]). *The variety generated by ${\mathsf{rSt}}$ admits a finite basis, consisting of the identity $$xyx \approx yxx.$$* By symmetric reasoning, we have that the variety generated by ${\mathsf{lSt}}$ admits a finite equational basis, consisting of the identity $$xyx \approx xxy.$$ From these results, and the fact that identities satisfied by either $\mathbb{V}({\mathsf{lSt}})$ or $\mathbb{V}({\mathsf{rSt}})$ must be balanced, it follows that both varieties have axiomatic rank $2$. The following has not been stated in the literature, but it is an immediate consequence of the previous result: **Proposition 71**. *The left-stalactic (resp. right-stalactic) monoid of rank $n$ is the free object of rank $n$ of $\mathbb{V}({\mathsf{lSt}})$ (resp. $\mathbb{V}({\mathsf{rSt}})$), for countable $n$.* *Proof.* We prove the case of ${\mathsf{lSt}}$, since the case of ${\mathsf{rSt}}$ is analogous. The left-stalactic congruence can also be defined as the congruence on $\mathbb{N}^*$ generated by the relations $(a \mathbf{w}a, a a \mathbf{w})$, for all $a \in \mathbb{N}$ and $\mathbf{w}\in \mathbb{N}^*$ [@hnt_stalactic Section 3.7]. On the other hand, the equational theory of ${\mathsf{lSt}}$ can be viewed as a congruence on $\mathcal{X}^*$ generated by the relations $(xyx,xxy)$, for all $x,y \in \mathcal{X}$. 
Since it is fully invariant, it is also generated by the same relations as the left-stalactic congruence, hence they are the same. As such, ${\mathsf{lSt}}$ is isomorphic to the free object of countably infinite rank of $\mathbb{V}({\mathsf{lSt}})$. Since ${\mathsf{lSt}}$ is compatible with restriction to alphabet intervals, the result also holds for finite rank $n$. ◻ As a consequence, the left-stalactic and right-stalactic congruences can be viewed as equational theories of $\mathbb{V}({\mathsf{lSt}})$ and $\mathbb{V}({\mathsf{rSt}})$. As such, the meet-stalactic congruence, given as the meet of $\equiv_{\mathsf{lSt}}$ and $\equiv_{\mathsf{rSt}}$, can also be viewed as the equational theory of the varietal join $\mathbb{V}({\mathsf{rSt}})\vee \mathbb{V}({\mathsf{lSt}})$. Hence, the variety generated by ${\mathsf{mSt}}$, denoted by $\mathbb{V}({\mathsf{mSt}})$, is equal to $\mathbb{V}({\mathsf{rSt}})\vee \mathbb{V}({\mathsf{lSt}})$, and we have the following: **Corollary 72**. *The meet-stalactic monoid of rank $n$ is the free object of rank $n$ of $\mathbb{V}({\mathsf{mSt}})$, for countable $n$.* Therefore, the identities satisfied by the meet-stalactic monoid are exactly those identities that are simultaneously satisfied by ${\mathsf{rSt}}$ and ${\mathsf{lSt}}$: **Corollary 73**. *For $\mathbf{u},\mathbf{v}\in \mathcal{X}^*$, the identity $\mathbf{u}\approx \mathbf{v}$ is satisfied by ${\mathsf{mSt}}$ if and only if it is balanced, $\overrightarrow{\rho_\mathbf{u}} = \overrightarrow{\rho_\mathbf{v}}$ and $\overleftarrow{\rho_{\mathbf{u}}} = \overleftarrow{\rho_{\mathbf{v}}}$.* In other words, the identities satisfied by ${\mathsf{mSt}}$ are those identities where the left and right-hand sides have the same content, and, when reading them from left-to-right, the order of the first and last occurrences of the variables is the same on both sides. Clearly, this condition is verifiable in polynomial time: **Corollary 74**. 
*The decision problem $\textsc{Check-Id}({\mathsf{mSt}})$ belongs to the complexity class $\mathsf{P}$.* From the presentation of ${\mathsf{mSt}}$ given in Proposition [Proposition 16](#prop:MeetStalacticPresentation){reference-type="ref" reference="prop:MeetStalacticPresentation"}, we obtain the following: **Corollary 75**. *$\mathbb{V}({\mathsf{mSt}})$ admits a finite equational basis $\mathcal{B}_{{\mathsf{mSt}}}$, consisting of the identities $$\begin{aligned} xzxyty \approx xzyxty, \label{id22}\\ xzxytx \approx xzyxtx. \label{id31} \end{aligned}$$* *Proof.* Since $\equiv_{\mathsf{mSt}}$ is a fully invariant congruence, we can see that the relations given in Proposition [Proposition 16](#prop:MeetStalacticPresentation){reference-type="ref" reference="prop:MeetStalacticPresentation"} can be deduced from the identities [\[id22\]](#id22){reference-type="eqref" reference="id22"} and [\[id31\]](#id31){reference-type="eqref" reference="id31"}. The result follows. ◻ The identity [\[id31\]](#id31){reference-type="eqref" reference="id31"} can be deduced from other non-equivalent identities satisfied by ${\mathsf{mSt}}$, such as, for example, the identity $xyxzx \approx xxyzx$. Such is not the case with the identity [\[id22\]](#id22){reference-type="eqref" reference="id22"}, or any equivalent one, which must be in any equational basis for $\mathbb{V}({\mathsf{mSt}})$ where only up to four different variables occur in each identity. To show this, we need the following lemma: **Lemma 76**. *The shortest non-trivial identity in $n$ variables satisfied by ${\mathsf{mSt}}$ has length $n+2$.* *Proof.* Let $\mathbf{u}\approx \mathbf{v}$ be a non-trivial identity, with $|\mathrm{supp}\parens{\mathbf{u}\approx \mathbf{v}}|=n$, satisfied by ${\mathsf{mSt}}$. Since it is a non-trivial identity, there exist $x,y \in \mathcal{X}$ and $\mathbf{w},\mathbf{u}',\mathbf{v}' \in \mathcal{X}^*$ such that $\mathbf{u}= \mathbf{w}x \mathbf{u}'$ and $\mathbf{v}=\mathbf{w}y \mathbf{v}'$. 
Notice that, by Corollary [Corollary 73](#corollary:mst_identities){reference-type="ref" reference="corollary:mst_identities"}, $\mathbf{u}\approx \mathbf{v}$ is a balanced identity, therefore $x$ occurs in $\mathbf{v}'$ and $y$ occurs in $\mathbf{u}'$. Furthermore, since $\overrightarrow{\rho_\mathbf{u}} = \overrightarrow{\rho_\mathbf{v}}$, either $x$ occurs in $\mathbf{w}$ before the first occurrence of $y$, or $y$ occurs in $\mathbf{w}$ before the first occurrence of $x$. Similarly, since $\overleftarrow{\rho_{\mathbf{u}}} = \overleftarrow{\rho_{\mathbf{v}}}$, either $x$ occurs in $\mathbf{u}'$ after the last occurrence of $y$, or $y$ occurs in $\mathbf{v}'$ after the last occurrence of $x$. Thus, we can conclude that the length of $\mathbf{u}\approx \mathbf{v}$ is at least $n+2$. On the other hand, by Corollary [Corollary 73](#corollary:mst_identities){reference-type="ref" reference="corollary:mst_identities"}, for variables $x,y,a_1, \dots, a_{n-2} \in \mathcal{X}$, the identity $$x y x a_1 \cdots a_{n-2} x \approx x x y a_1 \cdots a_{n-2} x,$$ of length $n+2$, is satisfied by ${\mathsf{mSt}}$. The result follows. ◻ **Proposition 77**. *The identity [\[id22\]](#id22){reference-type="eqref" reference="id22"} is not a consequence of the set of non-trivial identities, satisfied by ${\mathsf{mSt}}$, over an alphabet with four variables, excluding [\[id22\]](#id22){reference-type="eqref" reference="id22"} itself and equivalent identities. Therefore, any equational basis for $\mathbb{V}({\mathsf{mSt}})$ with only identities over an alphabet with four variables must contain the identity [\[id22\]](#id22){reference-type="eqref" reference="id22"}, or an equivalent identity.* *Proof.* Let $\mathcal{S}$ be the set of all non-trivial identities, satisfied by ${\mathsf{mSt}}$, over an alphabet with four variables. By definition, [\[id22\]](#id22){reference-type="eqref" reference="id22"} is a consequence of $\mathcal{S}$. 
As such, there exists a non-trivial identity $\mathbf{u}\approx \mathbf{v}$ in $\mathcal{S}$, and a substitution $\psi$, such that $$xzxyty = \mathbf{w}_1 \psi(\mathbf{u}) \mathbf{w}_2,$$ where $\mathbf{w}_1, \mathbf{w}_2$ are words over the four-letter alphabet, and $\psi(\mathbf{u}) \neq \psi(\mathbf{v})$. We can assume, without loss of generality, that $\psi$ does not map any variable to the empty word. By Lemma [Lemma 76](#lemma:mst_shortest){reference-type="ref" reference="lemma:mst_shortest"}, no proper factor of $xzxyty$ is the left-hand side of a non-trivial identity satisfied by ${\mathsf{mSt}}$, hence $\mathbf{w}_1$ and $\mathbf{w}_2$ are the empty word, that is, $xzxyty = \psi(\mathbf{u})$. Since $\mathrm{cont}\parens{xzxyty}=\bigl(\begin{smallmatrix} x & y & z & t \\ 2 & 2 & 1 & 1 \end{smallmatrix}\bigr)$ and, by Lemma [Lemma 76](#lemma:mst_shortest){reference-type="ref" reference="lemma:mst_shortest"}, there is a lower bound on the length of identities satisfied by ${\mathsf{mSt}}$, we can conclude that, up to renaming of variables, $x$ and $y$ occur exactly twice, and $z$ and $t$ can each occur at most once in $\mathbf{u}\approx \mathbf{v}$. Notice that, since all factors of length $2$ of $xzxyty$ are distinct, $\psi(x)$ and $\psi(y)$ must be single variables, in particular, up to renaming, $x$ and $y$. Furthermore, since $zt$ is not a factor of $xzxyty$, $\psi(z)$ and $\psi(t)$ must also be single variables, in particular, up to renaming, $z$ and $t$. As such, the length of $\mathbf{u}$ is the same as that of $\psi(\mathbf{u})$, which implies that exactly four distinct variables occur in $\mathbf{u}\approx \mathbf{v}$. But, since we are considering only substitutions which do not map variables to the empty word, this implies that $\psi$ is only a renaming of variables. 
Notice that $xzxyty$ is the left-hand side of a non-trivial identity satisfied by ${\mathsf{mSt}}$ if and only if the right-hand side is $xzyxty$, by Corollary [Corollary 73](#corollary:mst_identities){reference-type="ref" reference="corollary:mst_identities"}. Hence $\mathbf{u}\approx \mathbf{v}$ is equivalent to [\[id22\]](#id22){reference-type="eqref" reference="id22"}. ◻ As a consequence, we have the following: **Corollary 78**. *The axiomatic rank of ${\mathsf{mSt}}$ is $4$.* Recall that the meet-taiga monoid ${\mathsf{mTg}}$ is given by the meet of the left and right-taiga congruences. Therefore, in parallel with the meet-stalactic case, we can show that the variety generated by ${\mathsf{mTg}}$ is the varietal join of the varieties generated, respectively, by ${\mathsf{lTg}}$ and ${\mathsf{rTg}}$. By [@cain_johnson_kambites_malheiro_representations_2022 Corollary 5.8] and [@han2021preprint Theorem 2.3], ${\mathsf{lTg}}$ (resp. ${\mathsf{rTg}}$) generates the same variety as ${\mathsf{lSt}}$ (resp. ${\mathsf{rSt}}$). Hence, we have the following: **Corollary 79**. *${\mathsf{mTg}}$ generates the same variety as ${\mathsf{mSt}}$.* Since the equational theory of ${\mathsf{baxt}}$, given in [@cain_malheiro_ribeiro_sylvester_baxter_2023 Theorem 4.3] and [@han2021preprint Lemma 3.3], is a strict subset of the equational theory of ${\mathsf{mSt}}$, we have the following: **Corollary 80**. *$\mathbb{V}({\mathsf{mSt}})$ is strictly contained in $\mathbb{V}({\mathsf{baxt}})$.* On the other hand, by [@cain_johnson_kambites_malheiro_representations_2022 Corollary 4.6], $\mathbb{V}({\mathsf{rSt}})$ is the varietal join of the variety $\mathbb{COM}$ of all commutative monoids and the variety $\mathbb{RRB}$ of all right regular bands (restricted to monoids). By a parallel argument, $\mathbb{V}({\mathsf{lSt}})$ is the varietal join of $\mathbb{COM}$ and the variety $\mathbb{LRB}$ of all left regular bands (restricted to monoids). 
Therefore, since $\mathbb{V}({\mathsf{mSt}})= \mathbb{V}({\mathsf{rSt}})\vee \mathbb{V}({\mathsf{lSt}})$, and the varietal join of $\mathbb{LRB}$ and $\mathbb{RRB}$ is the variety $\mathbb{RB}$ of all regular bands (restricted to monoids), we can conclude that: **Corollary 81**. *$\mathbb{V}({\mathsf{mSt}})$ is the varietal join of $\mathbb{COM}$ and $\mathbb{RB}$.* Now, we consider the case of the join-stalactic monoid. We write $\mathbb{V}({\mathsf{jSt}})$ to denote the variety generated by ${\mathsf{jSt}}$. As in the meet-stalactic case, the join-stalactic congruence can be viewed as the equational theory of the varietal meet $\mathbb{V}({\mathsf{lSt}})\wedge \mathbb{V}({\mathsf{rSt}})$, hence $\mathbb{V}({\mathsf{lSt}})\wedge \mathbb{V}({\mathsf{rSt}})$ is equal to $\mathbb{V}({\mathsf{jSt}})$, and we have the following: **Corollary 82**. *The join-stalactic monoid of rank $n$ is the free object of rank $n$ of $\mathbb{V}({\mathsf{jSt}})$, for countable $n$.* This stands in contrast with the hypoplactic monoid: the hypoplactic congruence is the join of the sylvester and \#-sylvester congruences; however, the hypoplactic monoid does not generate the varietal meet of the varieties generated by the sylvester and \#-sylvester monoids, respectively. From Proposition [Proposition 17](#prop:jst_characterisation){reference-type="ref" reference="prop:jst_characterisation"}, we also obtain the following: **Corollary 83**. *The equational theory of ${\mathsf{jSt}}$ is the set of balanced identities $\mathbf{u}\approx \mathbf{v}$ over the alphabet of variables $\mathcal{X}$ such that $\overline{\mathbf{u}} = \overline{\mathbf{v}}$.* It follows that checking if an identity holds in ${\mathsf{jSt}}$ can be done in polynomial time: **Corollary 84**. 
*The decision problem $\textsc{Check-Id}({\mathsf{jSt}})$ belongs to the complexity class $\mathsf{P}$.* Recall that $\equiv_{\mathsf{lSt}}$ and $\equiv_{\mathsf{rSt}}$ are generated, respectively, by the relations $(a \mathbf{w}a, a a \mathbf{w})$ and $(a \mathbf{w}a, \mathbf{w}a a)$, for all $a \in \mathbb{N}$ and $\mathbf{w}\in \mathbb{N}^*$. As such, $\equiv_{\mathsf{jSt}}$ is generated by these relations, from which we obtain the following: **Corollary 85**. *$\mathbb{V}({\mathsf{jSt}})$ admits a finite equational basis $\mathcal{B}_{{\mathsf{jSt}}}$, consisting of the identities $$\begin{aligned} xxy \approx xyx \approx yxx. \end{aligned}$$* Since the only identities satisfied by ${\mathsf{jSt}}$ where only one variable occurs are trivial, we can conclude the following: **Corollary 86**. *The axiomatic rank of ${\mathsf{jSt}}$ is $2$.* While ${\mathsf{jSt}}$ satisfies the identities $xxy \approx xyx \approx yxx$, it is not commutative. As such, it is natural to ask which other monoids in $\mathbb{V}({\mathsf{jSt}})$ are non-commutative. **Proposition 87**. *$\mathbb{V}({\mathsf{jSt}})$ is the unique cover of $\mathbb{COM}$ in the lattice of all varieties of monoids.* *Proof.* Recall that any over-commutative variety satisfies only balanced identities. Let $\mathbf{u}\approx \mathbf{v}$ be a balanced identity not satisfied by ${\mathsf{jSt}}$. Then, $\overline{\mathbf{u}} \neq \overline{\mathbf{v}}$, that is, there exist $x,y \in \mathcal{X}$ such that $|\mathbf{u}|_x = |\mathbf{u}|_y = 1$ and, without loss of generality, $x$ occurs before $y$ in $\mathbf{u}$ but after it in $\mathbf{v}$. Consider the substitution $\phi$ that keeps $x$ and $y$, and maps all other variables to the empty word. Then, $xy = \phi(\mathbf{u}) \approx \phi(\mathbf{v}) = yx$ can be deduced from $\mathbf{u}\approx \mathbf{v}$. Thus, any monoid in $\mathbb{V}({\mathsf{jSt}})$ that does not generate it must be commutative. 
As such, $\mathbb{V}({\mathsf{jSt}})$ admits no proper over-commutative subvarieties (other than $\mathbb{COM}$), hence it is the unique cover of $\mathbb{COM}$ in the lattice of all varieties of monoids, by [@gusev_lee_vernikov_lattice_variety_monoids Remark 3.10]. ◻ Thus, any non-commutative monoid in $\mathbb{V}({\mathsf{jSt}})$ generates the variety. From this, it is immediate that $\mathbb{V}({\mathsf{jSt}})$ is generated by ${\mathsf{jTg}}$, a non-commutative homomorphic image of ${\mathsf{jSt}}$: **Corollary 88**. *${\mathsf{jTg}}$ generates the same variety as ${\mathsf{jSt}}$.* On the other hand, since the equational theory of ${\mathsf{hypo}}$, given in [@cain_malheiro_ribeiro_hypoplactic_2022 Theorem 4.1], is a strict subset of the equational theory of ${\mathsf{jSt}}$, we have the following: **Corollary 89**. *$\mathbb{V}({\mathsf{jSt}})$ is strictly contained in $\mathbb{V}({\mathsf{hypo}})$.* # Acknowledgements {#acknowledgements .unnumbered} The authors would like to thank Alan Cain and António Malheiro for their suggestions, in particular the suggestion to study the results in Section [6](#section:Syntacticity_of_plactic_like_monoids){reference-type="ref" reference="section:Syntacticity_of_plactic_like_monoids"}, and many helpful comments.
{ "id": "2309.10184", "title": "Plactic-like monoids arising from meets and joins of stalactic and taiga\n congruences", "authors": "Thomas Aird and Duarte Ribeiro", "categories": "math.RA math.CO", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | In this paper, we study a tumor growth model where the growth is driven by nutrient availability and the tumor expands according to Darcy's law with a mechanical pressure resulting from the incompressibility of the cells. Our focus is on the free boundary regularity of the tumor patch that holds beyond topological changes. A crucial element in our analysis is establishing the regularity of the *hitting time* $T(x)$, namely the first time the tumor patch reaches a given point. We achieve this by introducing a novel Hamilton-Jacobi-Bellman (HJB) interpretation of the pressure, which is of independent interest. The HJB structure is obtained by viewing the model as a limit of the Porous Media Equation (PME) and building upon a new variant of the AB estimate. Using the HJB structure, we establish a new Hopf-Lax type formula for the pressure variable. Combined with barrier arguments, the formula allows us to show that $T$ is $C^{\alpha}$ with $\alpha=\alpha(d)$, which translates into a mild nondegeneracy of the tumor patch evolution. Building on this and obstacle problem theory, we show that the tumor patch boundary is regular in ${ \mathbb{R}}^d\times (0,\infty)$ except on a set of Hausdorff dimension at most $d-\alpha$. On the set of regular points, we further show that the tumor patch is locally $C^{1,\alpha}$ in space-time. This conclusively establishes that instabilities in the boundary evolution do not amplify arbitrarily high frequencies. 
author: - Carson Collins, Matt Jacobs and Inwon Kim bibliography: - ref_CJK.bib title: Free boundary regularity for tumor growth with nutrients and diffusion --- # Introduction In this paper, we consider the following tumor growth model: $$\label{first} \partial_t \rho - \nabla \cdot (\rho \nabla p ) = n\rho, \quad p(1-\rho)=0, \quad \rho\leq 1,$$ where $\rho$ denotes the density of tumor cells, $p$ denotes the pressure, and $n$ is a nutrient variable that evolves according to the diffusion equation $$\label{second} \partial_t n - \Delta n = -n \rho.$$ The form of the pressure-density relation reflects the incompressibility of the tumor cells, namely the pressure variable $p$ acts as the *Lagrange multiplier* for the constraint $\rho\leq 1$. In short, the system ([\[first\]](#first){reference-type="ref" reference="first"}-[\[second\]](#second){reference-type="ref" reference="second"}) describes a cell growth system where the growth rate is mediated by nutrient availability and the tumor region expands according to Darcy's law with a mechanical pressure driven by the incompressibility of the cells. Models of the form ([\[first\]](#first){reference-type="ref" reference="first"}-[\[second\]](#second){reference-type="ref" reference="second"}) have been extensively studied by both the mathematical and biological communities with various different assumptions on the growth term and density pressure coupling [@BYRNE2003567; @Preziosi2008; @Ranft20863; @maury14; @pqv], to name just a few. Nonetheless, many mathematical questions remain outstanding, in particular, those regarding the long-time behavior of the tumor boundary region. Our focus on the specific source term $n\rho$ is due to the fact that the model ([\[first\]](#first){reference-type="ref" reference="first"}-[\[second\]](#second){reference-type="ref" reference="second"}) generates particularly interesting behavior of the tumor patch despite the apparent simplicity of the coupling between the tumor and nutrient. 
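To make the dynamics of ([\[first\]](#first){reference-type="ref" reference="first"}-[\[second\]](#second){reference-type="ref" reference="second"}) concrete, here is a minimal one-dimensional numerical sketch of our own, not taken from the paper: the constraint $\rho\leq 1$ is relaxed through a stiff pressure law $p=\rho^{\gamma}$ (the porous-medium approximation recalled later in the introduction), and the coupled system is advanced by a naive explicit finite-difference scheme. All grid sizes and parameters below are arbitrary illustrative choices.

```python
from math import isfinite

# Relaxed system (illustrative discretisation, not the paper's):
#   d/dt rho = d/dx( rho * d/dx p ) + n*rho,   p = rho^gamma,
#   d/dt n   = d2/dx2 n - n*rho,
# using rho * d/dx(rho^gamma) = d/dx( c * rho^(gamma+1) ), c = gamma/(gamma+1).
L, N = 1.0, 41                 # domain [-L, L], grid spacing dx = 0.05
dx = 2 * L / (N - 1)
gamma = 4                      # stiffness of the pressure law p = rho^gamma
c = gamma / (gamma + 1.0)
dt, steps = 5e-5, 4000         # explicit time stepping up to t = 0.2

xs = [-L + i * dx for i in range(N)]
rho = [1.0 if abs(x) < 0.3 else 0.0 for x in xs]  # patch initial density
n = [1.0] * N                                     # uniformly positive nutrient

def laplacian(f):
    # Centred second difference with reflecting (zero-flux) boundaries.
    out = []
    for i in range(N):
        left = f[i - 1] if i > 0 else f[1]
        right = f[i + 1] if i < N - 1 else f[N - 2]
        out.append((left - 2.0 * f[i] + right) / dx ** 2)
    return out

mass0, nut0 = sum(rho) * dx, sum(n) * dx
for _ in range(steps):
    lap_p = laplacian([c * r ** (gamma + 1) for r in rho])
    lap_n = laplacian(n)
    rho = [r + dt * (lp + ni * r) for r, lp, ni in zip(rho, lap_p, n)]
    # n is updated with the freshly updated rho (a simple splitting).
    n = [ni + dt * (ln - ni * r) for ni, ln, r in zip(n, lap_n, rho)]

assert all(isfinite(r) for r in rho)
print(sum(rho) * dx > mass0)   # True: tumor mass grows, the source n*rho >= 0
print(sum(n) * dx < nut0)      # True: nutrient is consumed where rho > 0
```

Even this crude sketch displays the qualitative features discussed below: the density stays close to a patch, the patch expands, and the nutrient is depleted inside it; resolving the fingering instability quantitatively of course requires far more careful numerics.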
It is well-known in the biology literature (through numerical and physical experiments) that the tumor patch generated by this model exhibits a fingering instability (c.f. the discussion in [@kitsunezaki97], [@MVS02], [@maury14], [@golden22], [@jkt_nutrient]). In particular, it has been unclear whether this fingering phenomenon occurs at some discrete scale or whether it leads to an immediate or eventual loss of regularity in the tumor boundary. Investigating this behavior will be the main goal of this paper. Although the tumor system nearly corresponds to that of the classical *Hele-Shaw flow*, a mathematically rigorous study of the boundary behavior has remained elusive, due to the difficulties presented by the source term $\rho n$. In the classical setting, which we will call the *injection problem*, the Hele-Shaw flow is given with no source (namely $n=0$) and with a fixed boundary from which the flow is injected at a given rate. For the injection problem, the global structure of the boundary $\partial \{\rho=1\}$ is well understood by now, mainly through comparison principle type arguments [@CJK; @CJK2; @dong21] or via connections to the obstacle problem [@Baiocchi1973; @monneau; @figalli_serra; @figalli_generic]. For our problem, the comparison approach is immediately ruled out, as the full system ([\[first\]](#first){reference-type="ref" reference="first"}-[\[second\]](#second){reference-type="ref" reference="second"}) does not have comparison (though note that the individual equations when considered separately do have comparison principles). As such, we shall proceed via the obstacle problem analysis. However, there is a highly nontrivial roadblock that must be overcome. Indeed, the source term $\rho n$ necessarily depends on the space-time geometry of the free boundary, while for the injection case, the source is concentrated at a fixed boundary that is safely away from the free boundary. 
This makes the analysis of the tumor system considerably more difficult, as the influence of the source term cannot be ignored when blowing up the problem at free boundary points (the fundamental technique for the obstacle problem approach). In particular, to use the obstacle problem toolbox, one must first establish the regularity of the *hitting time* $T(x)$, which records the first time that the tumor patch reaches the point $x$ (ignoring the regularity issues, one can formulate $T(x)$ as $\inf\{t>0: \rho(t,x)=1\}$, see equation ([\[eq:hitting_time\]](#eq:hitting_time){reference-type="ref" reference="eq:hitting_time"}) for a more careful definition). This is essentially equivalent to establishing a quantitative non-degeneracy property for the tumor expansion speed, a highly nontrivial task. To establish the regularity of $T(x)$ we first derive a novel Hopf-Lax type estimate for the pressure (c.f. Theorem [Theorem 1](#thm:main_1){reference-type="ref" reference="thm:main_1"}). To the best of our knowledge, such Hopf-Lax type formulas have not previously appeared in the Hele-Shaw literature, perhaps in part due to the difficulty of controlling the time derivative of $p$. We get around this by viewing equation ([\[first\]](#first){reference-type="ref" reference="first"}) as the incompressible limit of the Porous Media Equation (PME). Given some parameter $\gamma\in (1,\infty)$, the PME analogue of ([\[first\]](#first){reference-type="ref" reference="first"}) is the equation $$\label{eq:pme_intro} \partial_t \rho_{\gamma}-\nabla \cdot (\rho_{\gamma}\nabla p_{\gamma})=\rho_{\gamma} n_{\gamma}, \quad p_{\gamma}=\rho_{\gamma}^{\gamma},$$ where $n_{\gamma}$ will solve ([\[second\]](#second){reference-type="ref" reference="second"}) with $\rho$ replaced by $\rho_{\gamma}$, and ([\[first\]](#first){reference-type="ref" reference="first"}) can be recovered by sending $\gamma\to\infty$ (see for instance [@pqv; @perthame_david; @jacobs_2021]). 
Since the pressure-density coupling $p_{\gamma}=\rho_{\gamma}^{\gamma}$ is explicit for PME, one can rewrite ([\[eq:pme_intro\]](#eq:pme_intro){reference-type="ref" reference="eq:pme_intro"}) solely in terms of the pressure, namely, $$\label{eq:pme_pressure_intro} \partial_t p_{\gamma}-|\nabla p_{\gamma}|^2-\gamma p_{\gamma}(\Delta p_{\gamma}+n_{\gamma})=0.$$ Interestingly, we ignore the parabolic structure of this equation and instead focus on the Hamilton-Jacobi-Bellman (HJB) structure of the first two terms. We then build upon the recent improved versions of the Aronson-Bénilan estimate introduced in [@jacobs_lagrangian] to show that the positive part of $u_{\gamma}:=-\gamma(\Delta p_{\gamma}+n_{\gamma})$ is uniformly bounded with respect to $\gamma$ in a BMO type space, implying that our limiting $p$ must be a supersolution to the HJB equation $$\label{eq:p_super} \partial_t p-|\nabla p|^2+ pu_+\geq 0$$ where $u:=\lim_{\gamma\to\infty} u_{\gamma}.$ From here we finally obtain the Hopf-Lax formula by adapting the techniques of [@cardaliaguet_graber_mfg] for HJB equations with unbounded coefficients. It is highly intriguing to speculate whether it is possible to obtain ([\[eq:p_super\]](#eq:p_super){reference-type="ref" reference="eq:p_super"}) or Hopf-Lax estimates directly from ([\[first\]](#first){reference-type="ref" reference="first"}); however, we will not consider this line of inquiry further in this work. Once we have the Hopf-Lax formula, we combine this with a powerful barrier-type argument to prove that for any point $x\notin \textup{spt}(\rho_0)$ and any sufficiently small radius $r>0$ there exists an explicit time $t_r(x)<T(x)$ such that the tumor patch does not occupy any point in $B_r(x)$ before time $t_r(x)$. From here, it will follow that the hitting time is Hölder continuous with an exponent that depends on the dimension only. With the Hölder continuity of $T$ in hand, we can turn to the obstacle problem formulation to address the regularity of the free boundary. 
Here, the novelty in our analysis lies in establishing the global space-time regularity of the free boundary, with data that is far less regular than in the typical injection problems that have previously been considered. Ultimately, through the obstacle problem analysis, we are able to show that the free boundary is regular except at topological singularities, which are unavoidable for general initial data. This conclusively demonstrates that the observed instabilities for the system ([\[first\]](#first){reference-type="ref" reference="first"}-[\[second\]](#second){reference-type="ref" reference="second"}) do not amplify arbitrarily high frequencies and must occur at some fixed scale. In particular, we show that the tumor patch boundary is regular in ${ \mathbb{R}}^d\times (0,\infty)$ except on a relatively closed set of Hausdorff dimension at most $d-\alpha$ for some $\alpha\in (0,1)$ depending only on the dimension. On the set of regular points, we further show that the tumor patch is $C^{1,\alpha}$ in space, locally uniformly in time. It then follows that the associated pressure gradient at regular boundary points is well-defined and uniformly positive in space-time. Moreover, the direction of the pressure gradient on the set of regular points is continuous in space-time. In the remainder of the introduction, we give a more complete explanation of the obstacle formulation of our problem and the connection to the hitting time. We then summarize our main results and give a roadmap for the rest of the paper.

## The obstacle problem and the hitting time

To better understand the aforementioned difficulties and the importance of the hitting time, let us describe some properties of the tumor patch and formally introduce the obstacle problem associated to ([\[first\]](#first){reference-type="ref" reference="first"}-[\[second\]](#second){reference-type="ref" reference="second"}).
Since our main interest is the regularity properties of the tumor patch, throughout the paper, we will assume that $$\label{initial} \rho(x,0)\hbox{ is a characteristic function and } n(x,0) \hbox{ is uniformly positive.}$$ Under these assumptions, $\rho$ will remain a characteristic function for all times and $t\mapsto \rho(x,t)$ will be nondecreasing for a.e. $x\in \mathbb{R}^d$. Transitioning to the obstacle problem formulation, if we integrate the pressure variable in time, $$\label{w_def} w(x,t):=\int_0^t p(x,s) ds,$$ the new variable $w$, the so-called *Baiocchi transform*, will satisfy an obstacle problem [@Baiocchi1973]. Since the density is nondecreasing in time, the relation $(1-\rho)p=0$ implies that $(1-\rho)w=0$. Using the patch property for the density, this coupling can be upgraded to the even stronger relation that the sets $\{w>0\}$ and $\{\rho=1\}$ coincide almost everywhere in space-time (c.f. Lemma [Lemma 10](#lem:rho_w_positive_sets_agree){reference-type="ref" reference="lem:rho_w_positive_sets_agree"}). This key relation can then be combined with the time integral of ([\[first\]](#first){reference-type="ref" reference="first"}) to see that $w$ solves the elliptic obstacle problem $$\label{eq:w_obst_eqn} \Delta w = (1 - \rho_0 -\eta)\chi_{\{w>0\}},$$ where $\eta(x,t) :=\int_{0}^t\rho(x,s) n(x,s)\, ds$ (c.f. Lemma [Lemma 12](#obstacle){reference-type="ref" reference="obstacle"}). The main challenge in analyzing ([\[eq:w_obst_eqn\]](#eq:w_obst_eqn){reference-type="ref" reference="eq:w_obst_eqn"}) is the presence of the term $\eta$, which is absent in the obstacle formulation of the classical injection case (due to local regularity results, $\rho_0$ does not affect the free boundary regularity at positive times away from the support of $\rho_0$). Since $\rho$ is a characteristic function, it is not clear whether $\eta$ has any nice regularity.
This is crucial, as obstacle problem regularity theory breaks down without Dini continuity of the coefficients (see [@blank]). Hence, one must hope that the time integral induces some smoothing effect. At the very least, this can only happen if the tumor boundary is strictly expanding. Indeed, if any part of the free boundary stagnates in time, then $\eta$ will become discontinuous across that portion of the boundary. Note that such stagnation would correspond to a jump in the values of the hitting time function $T$ introduced earlier. Hence, the smoothness of $\eta$ and $T$ are highly intertwined. In fact, it will turn out that we can express $\eta$ solely in terms of the hitting time $T$ and $n$. To see the connection between $\eta$ and $T$, we need to first give a proper definition of the hitting time. Recall that the hitting time $T(x)$ records the first time that the tumor patch arrives at a point $x$. We will formally define it using $w$, the most regular variable at our disposal. Given a point $x\in \mathbb{R}^d$ we set $$\label{eq:hitting_time} T(x):= \inf \{t>0: w(x,t)>0\}.$$ Since the positivity set of $w$ coincides almost everywhere with the tumor patch, we have $\rho(x,t)=\mathop{\mathrm{sgn}}_+(t-T(x))$ almost everywhere. Hence, $\eta$ can be rewritten in terms of $T$ and $n$ as $$\label{eq:eta_alternate} \eta(x,t)=\mathop{\mathrm{sgn}}_+(t-T(x))\int_{T(x)}^t n(x,s)\, ds.$$ From the above formula, we now see that the spatial regularity of $\eta$ is more or less equivalent to the regularity of $T$ and $n$. Note that generically $T$ is at best Lipschitz continuous, as it is easy to cook up a scenario where two different parts of the tumor patch collide with different velocities. In addition, topological changes of the tumor boundary can cause the pressure to suddenly jump with highly nonlocal effects. For instance, the merger of two portions of the boundary can cause far away parts of the boundary to instantaneously start moving faster. 
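The identity ([\[eq:eta_alternate\]](#eq:eta_alternate){reference-type="ref" reference="eq:eta_alternate"}) is easy to sanity-check numerically. The snippet below is a toy example of our own in which the hitting time and nutrient are prescribed ($T(x)=|x|$ and $n(x,t)=e^{-t}$, so $\rho(x,t)=1$ exactly when $t\geq T(x)$): it compares the direct definition of $\eta$ with the closed form in terms of $T$, and recovers $T$ from the patch up to one time step.

```python
import numpy as np

# Synthetic monotone patch: T(x) = |x|, n(x,t) = exp(-t), rho = 1_{t >= T(x)}.
# These choices are illustrative only; in the paper T, n, rho come from the PDE.
x = np.linspace(-1.0, 1.0, 201)
t = np.linspace(0.0, 1.5, 1501)
dt = t[1] - t[0]
T = np.abs(x)
rho = (t[None, :] >= T[:, None]).astype(float)       # monotone in time
n = np.exp(-t)[None, :] * np.ones((x.size, 1))

# direct definition: eta(x,t) = int_0^t rho(x,s) n(x,s) ds  (Riemann sum)
eta_direct = np.cumsum(rho * n, axis=1) * dt
# closed form: sgn_+(t - T(x)) * int_{T(x)}^t n(x,s) ds = e^{-T(x)} - e^{-t}
eta_formula = np.maximum(np.sign(t[None, :] - T[:, None]), 0.0) * \
              (np.exp(-T)[:, None] - np.exp(-t)[None, :])
err = np.abs(eta_direct - eta_formula).max()

# recover the hitting time from the patch: first time index with rho = 1
T_est = t[np.argmax(rho, axis=1)]
```

The two expressions for $\eta$ agree up to the $O(\Delta t)$ quadrature error, and the recovered hitting time matches $T$ to within one time step; in particular, the spatial regularity of the computed $\eta$ is exactly that of $T$ and $n$, as claimed.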
Since $n$ is much better than Lipschitz continuous, it is $T$ that will determine the regularity of $\eta$. While we are inclined to believe that $T$ is in fact Lipschitz continuous, our methods are only able to show that $T$ is Hölder continuous with a dimensionally dependent exponent. Nonetheless, the Hölder continuity is sufficient for us to deduce free boundary regularity using the obstacle problem approach. However, let us note that we are forced to work in a much lower regularity regime than what is typically considered in the obstacle problem literature, requiring us to develop new arguments.

## Main results

We are now ready to present the main results of our paper. All of our results will use the following mild assumptions on the initial data.

(A1) $\rho(\cdot,0)\in L^1(\mathbb{R}^d)\cap \textup{BV}(\mathbb{R}^d)$ and $\rho(x,0)\in \{0,1\}$ for almost every $x\in \mathbb{R}^d$.

(A2) $n(\cdot,0)\in W^{1,\infty}(\mathbb{R}^d)$ and there exists $c>0$ such that $n(x,0)\geq c$ for all $x\in \mathbb{R}^d$.

The main results of the first half of the paper are the HJB structure and Hopf-Lax formula for the pressure, along with the Hölder continuity of the hitting time. **Theorem 1**.
*The following holds for the unique weak solution $p$ to the system [\[first\]](#first){reference-type="eqref" reference="first"}-[\[second\]](#second){reference-type="eqref" reference="second"}.*

- *$p$ solves, in the sense of weak solutions, $$\partial_t p - |\nabla p|^2 +pu_+ \geq 0,$$ where for any $\tau > 0$ there exists $b=b(\tau, d) > 0$ such that $bu_+ e^{bu_+} \in L^1([0, \tau ] ; { \mathbb{R}}^d)$.\ *

- *Given points $(x_1, t_1)$, $(x_0, t_0)$ with $t_0<t_1$ and any decreasing function $\lambda\in L^1([0, t_1-t_0])$, there exist constants $C=C(t_1,d)$ and $b=b(t_1, d)$ such that $$p(x_0, t_0)\leq e^{\Lambda(t_1-t_0)}\Big( p(x_1, t_1)+\frac{|x_1-x_0|^2}{4\int_{0}^{t_1-t_0} e^{\Lambda(s)}\, ds} +C(t_1-t_0)^{7/10}e^{-\lambda(t_1-t_0)}\Big)$$ where $\Lambda(t):=\frac{5}{4b}\int_0^t \lambda(s)\, ds+\frac{t}{b}\log(1+\frac{C}{t})$.\ *

- *$T$ is locally Hölder continuous on the set $\{x\in \mathbb{R}^d: 0<T(x)<\infty\}$ with an exponent that depends only on the dimension.*

Let us note that Theorem [Theorem 1](#thm:main_1){reference-type="ref" reference="thm:main_1"} parts (a) and (b) represent a significant advance in our understanding of the Hele-Shaw equation. In particular, any control on the time derivative of the pressure has previously been missing from the literature. Furthermore, the delicate control that we obtain from the Hopf-Lax formula in part (b) is completely new and unexpected. As we mentioned earlier, we establish the HJB structure by first going through the PME ([\[eq:pme_gamma\]](#eq:pme_gamma){reference-type="ref" reference="eq:pme_gamma"}). For the classic PME without a source term, bounds on the negative part of $\Delta p_{\gamma}$ are known through the celebrated Aronson-Bénilan estimate [@ab].
In the presence of a source term, AB-type bounds on quantities taking a similar form to $u_{\gamma}=-\gamma(\Delta p_{\gamma}+n)$ have been studied in the literature [@pqv; @gpsg; @perthame_david; @Bevilacqua2022; @jacobs_lagrangian]; however, except for [@jacobs_lagrangian], these bounds do not scale well with respect to $\gamma$. We adapt the arguments from [@jacobs_lagrangian] to show that $[u_{\gamma}]_+$ can be bounded uniformly with respect to $\gamma$ in a BMO-type space. Once we have the uniform control on $[u_{\gamma}]_+$ we can pass to the limit in ([\[eq:pme_pressure\]](#eq:pme_pressure){reference-type="ref" reference="eq:pme_pressure"}) to obtain part (a). Both a direct derivation of (a) from the Hele-Shaw flow and the interpretation of the singular limit $u=\lim_{\gamma\to\infty}u_{\gamma}$ in terms of the Hele-Shaw flow remain open. To obtain (b), we cannot take the usual approach to proving Hopf-Lax type formulas (i.e. differentiating $p$ along paths) due to the potential unboundedness of $u_+$. To overcome this, we adapt the approach developed in [@cardaliaguet_graber_mfg], which handles unbounded coefficients by averaging over paths indexed by the unit ball. Our calculation is somewhat different, however, as we can exploit the specific structure of $pu_+$ to decompose $pu_+\leq \lambda p+p(u-\lambda)_+$ for some scalar $\lambda\geq 0$. By choosing $\lambda$ appropriately we can force $p(u-\lambda)_+$ to be small while using a Gronwall argument to handle $\lambda p$. This allows us to obtain a much more favorable error term in our Hopf-Lax formula compared to [@cardaliaguet_graber_mfg]. Although Theorem 1.1 (a) and (b) are stated for our particular system ([\[first\]](#first){reference-type="ref" reference="first"}-[\[second\]](#second){reference-type="ref" reference="second"}), equivalent results can be proved for more general tumor growth models where the growth term $\rho n$ is replaced by $\rho G$ for some general growth rate $G$.
In particular, our arguments only need $G\geq c(\tau)>0$ along with some control on $[\partial_t G]_-$. As a consequence of the Hopf-Lax formula, we obtain the $C^{\alpha}$ regularity of $T$ where $\alpha=\alpha(d)$. We do this by combining the formula with a novel barrier-type argument. Given a point $(x, T(x))$ on the free boundary and some time $t_0\in (0,T(x))$, we use the Hopf-Lax formula and the values of $p$ at time $T(x)$ to construct an explicit supersolution $\psi$ that dominates $p$ on $\mathbb{R}^d\times(t_0, T(x))$. The key is that the Hopf-Lax formula allows us to choose the values in such a way that $\psi$ is zero in a neighborhood of $x$ up until the hitting time $T(x)$. Since we are able to explicitly calculate and invert $t(r)=\inf\{t\in (t_0, T(x)): \sup_{y\in B_r(x)} \psi(y, t)>0\}$, we obtain an upper bound on $\sup_{y\in B_r(x)} T(x)-T(y)$, which implies the Hölder continuity of $T$. Some remarks on the previous literature for hitting times are in order. Quantitative regularity of the hitting time has been obtained in [@CF80] for the classical PME and in [@paul21] for the PME with a source term and drift. Nevertheless, both of these results obtain estimates that blow up as $\gamma$ tends to infinity, due to the lack of a uniform AB estimate on $u_{\gamma}$. As a result, their approaches are not suitable for our problem. Let us also note that these papers used rather different methods that did not involve the Hopf-Lax formulas that we use here. Estimates on the hitting time for a simpler version of ([\[first\]](#first){reference-type="ref" reference="first"}), where $n$ is replaced by a decreasing function of $p$, were obtained in [@pqm] for dimensions $d\leq 3$. Their proof strongly relies on the specific structure of their growth term, which allows them to relate the Hölder continuity of $T$ to that of the pressure through a clever trick. Again, this approach is not applicable to our problem.
Although we also focus on a specific source term, our method is much more general and can be applied to other instances of the Hele-Shaw or Porous Media equation. The remaining analysis in the paper is devoted to the study of the obstacle problem [\[eq:w_obst_eqn\]](#eq:w_obst_eqn){reference-type="eqref" reference="eq:w_obst_eqn"}, based on the $C^{\alpha}$ regularity of $T$. We build on the low-regularity obstacle problem analysis of Blank [@blank] to establish the space-time regularity of the tumor patch. A crucial fact we use is that the solution of the obstacle problem with $C^{\alpha}$ data has a unique blow-up limit at each point, allowing us to decompose the boundary into a regular part and a singular part (the regular points have blow-up limits that look like half-planes). A direct application of this dichotomy yields that the boundary has locally finite $H^{d-1}$ measure for each time, as mentioned for instance in [@pqm]. However, this standard description lacks the geometric information of the free boundary over time. Indeed, the main novelty of our obstacle problem analysis is that we are able to stitch together information from each time $t$ to obtain regularity of the full space-time boundary $\Gamma:=\{(x, T(x)): x\in \mathbb{R}^d\}\subset \mathbb{R}^d\times (0,\infty)$. In particular, we show that $\Gamma$ is regular in space-time outside of a set of Hausdorff dimension at most $d-\alpha$, and its outward normal is Hölder continuous in space-time. While space-time analysis of the singular set has been carried out before for the injection problem ([@monneau], [@figalli_serra], [@figalli_generic]), these results have utilized smoothness (at least $C^4$) of the fixed boundary data in an essential way. A more general time-varying source term was considered in [@serfaty_serra], but only for a short range of time that ensures that no topological singularity occurs during the evolution. Our results are summarized in the following theorem. **Theorem 2**.
*Let $\Gamma$ denote the space-time boundary set of the tumor region, i.e., $\Gamma=\{(x,T(x)): x\in \mathbb{R}^d\}$.*

- *The set $\{ 0 < T(x) < \infty \}\subset { \mathbb{R}}^d$ decomposes as $R\cup \Sigma$, where the set $R$ of regular points is open in ${ \mathbb{R}}^d$ and the set $\Sigma$ of singular points is locally contained in a $C^1$ manifold of dimension $d-1$.*

- *At any $x\in R$, the free boundary near $(x,T(x))\in\Gamma$ can be locally represented as a graph $\{x_n = f(x', t)\}$ where $f$ is $C^{1,\alpha}$ in $x'$ and Lipschitz in time.*

- *$p(\cdot,T(x))$ has linear growth at $x\in R$, with locally uniform growth rates. In particular $T$ is Lipschitz in $R$.*

- *The map $\nu: R\to \mathbb{S}^{d-1}$, where $\nu(x)$ denotes the spatial outward normal of $\Gamma$ at $(x, T(x))$, is Hölder continuous. In particular $\nabla p(x, T(x))$ is well-defined for $x\in R$ and has continuous direction.*

Note that, while Theorem 1.2 (d) yields the continuity of the direction of $\nabla p$ on $\Gamma$, we cannot expect the same for $|\nabla p|$: this can be easily seen from examples where a topological change occurs far away from the given free boundary point. In terms of the quadratic blow-up limit of $w$, Theorem 1.2 (a) and (d) yield its continuity at free boundary points, along $\Sigma$ and along $R$. As a consequence, $D^2 w(x, T(x))$ exists on $\Sigma$ and exists in a one-sided sense on $R$, and is continuous on each set (though not necessarily on their union). Our last theorem discusses the Hausdorff measure of the free boundary in space-time coordinates. Let us introduce the notation $$\label{domain} \Omega_t := \{w(\cdot,t)>0\}, \quad \Gamma_t := \partial\Omega_t.$$ **Theorem 3** (Corollary [Corollary 42](#cor:hausdorff_measure){reference-type="ref" reference="cor:hausdorff_measure"}).
- *The free boundary $\partial\{w>0\}$ has Hausdorff dimension $d$ in $(x,t)$-coordinates.\ *

- *$Graph(R) = \{(x,t): x\in \Gamma_t, x\in R\}$ is relatively open with locally finite $H^d$ measure.\ *

- *$Graph(\Sigma)= \{(x,t): x\in \Gamma_t, x\in \Sigma\}$ has locally finite $H^{d-\alpha}$ measure.*

Let us mention that we expect $T$ to be Lipschitz at all points, not just in $R$. For instance, in the classical setting with constant Dirichlet fixed boundary data, the Lipschitz continuity of $T$ was shown in [@monneau] via a simple comparison principle. The remaining challenge in our setting lies in the analysis of the singular points. This is an intriguing question, as the blow-up profile of the tumor patch at these points suggests that the evolution at these points should be non-degenerate in general. In fact, one might even expect the gradient of $T$ to vanish at these points. Nonetheless, accurately capturing the hitting time behavior near singular points appears to be out of reach for the moment. It would also be interesting to improve upon our estimate of the singular set for generic initial data. While this seems plausible, new ideas appear to be necessary to obtain such a result. The rest of the paper is organized as follows. In Section 2, we review the basic properties of the system ([\[first\]](#first){reference-type="ref" reference="first"}-[\[second\]](#second){reference-type="ref" reference="second"}) and the connection to the obstacle problem. In Section 3, we establish the HJB structure of the pressure along with the Hopf-Lax formula. In Section 4, we construct a barrier supersolution using the Hopf-Lax formula which allows us to establish the Hölder regularity of the hitting time map $T(x)$. Section 5 builds on the regularity of $T$ and the existing obstacle theory to investigate the global regularity of the free boundary $\Gamma:= \partial\{w(x,t)>0\}$.
# Basic properties of the system

Here we recall the notion of weak solutions to [\[first\]](#first){reference-type="eqref" reference="first"}-[\[second\]](#second){reference-type="eqref" reference="second"} and establish some of their basic properties. We first introduce our notion of weak solution, parallel to the one introduced in [@jkt_nutrient] for a similar model. **Definition 4**. A triple $(\rho, p, n)$ is a weak solution to [\[first\]](#first){reference-type="eqref" reference="first"}-[\[second\]](#second){reference-type="eqref" reference="second"} for initial data $\rho_0\in L^1({ \mathbb{R}}^d)\cap BV({ \mathbb{R}}^d)$ and $n_0\in L^\infty({ \mathbb{R}}^d)\cap BV({ \mathbb{R}}^d)$ if for any $\tau > 0$,

(i) $p(1-\rho) = 0$ in $\mathcal{D}'({ \mathbb{R}}^d\times [0, \tau])$

(ii) For any $\psi\in H^1({ \mathbb{R}}^d\times [0,\tau])$ vanishing at time $\tau$, $$\label{eq:first_dist_eqn} \int_0^{\tau} \int_{{ \mathbb{R}}^d} \nabla \psi\cdot \nabla p - \rho \partial_t \psi\,dx\,dt = \int_{{ \mathbb{R}}^d} \psi(x, 0)\rho_0(x)\,dx + \int_0^{\tau} \int_{{ \mathbb{R}}^d} \psi n \rho \,dx\,dt.$$

(iii) $$\partial_t n - \Delta n = -\rho n\hbox{ in }\mathcal{D}'({ \mathbb{R}}^d\times[0, \tau]),\quad n(x, 0) = n_0(x)$$

(iv) We have $\rho\in C([0, \tau]; L^1({ \mathbb{R}}^d))\cap L^\infty([0, \tau]; BV({ \mathbb{R}}^d))$, $p\in L^2([0, \tau]; H^1({ \mathbb{R}}^d))$, and $n\in L^\infty(Q_{\tau})\cap L^\infty([0, \tau]; BV({ \mathbb{R}}^d))$.

Here, $BV({ \mathbb{R}}^d)$ is the space of (not necessarily integrable) functions with finite total variation. We also record a few useful properties of a weak solution. **Lemma 5**. *[\[lem:n_regularity\]]{#lem:n_regularity label="lem:n_regularity"}[\[lem:p_dist_eqn\]]{#lem:p_dist_eqn label="lem:p_dist_eqn"} For any $\tau > 0$, $\varepsilon\in (0,1)$, we have*

(i) *$\rho\in C^{0,1}_t L^1_x([0,\tau]; { \mathbb{R}}^d)$.*

(ii) *The support of $\rho$ in ${ \mathbb{R}}^d\times [0, \tau]$ is compact, and $0\leq \rho \leq 1$.*

(iii) *For a.e.
$x\in { \mathbb{R}}^d$, $\rho(x, \cdot)$ is nondecreasing in time.*

(iv) *$n\in L^{\infty}([0,\tau]\times\mathbb{R}^d),\; \partial_t n \in \textup{BMO}([0,\tau]\times\mathbb{R}^d), \;D^2 n \in \textup{BMO}([0,\tau]\times\mathbb{R}^d).$*

(v) *We have $$\label{eq:p_dist_eqn} p(\Delta p + n) = 0\hbox{ in }\mathcal{D}'({ \mathbb{R}}^d\times [0, \tau]).$$ Also, $p\in L^\infty({ \mathbb{R}}^d\times [0, \tau])$.*

*Proof.* Statements (i) and (ii) are proved in [@jkt_nutrient], Theorem 2.2 and Proposition 3.6. \(iii\) follows from Lemma 3.11 of [@jkt_nutrient], which provides comparison for the equation $$\partial_t \rho^i - \nabla\cdot(\rho^i \nabla p^i) = f^i.$$ Namely, if $\rho^0(x,0) \leq \rho^1(x,0)$, $f^0 \leq f^1$, and $p^i (1 - \rho^i) = 0$, then $\rho^0(x,t)\leq \rho^1(x,t)$. For our system, since $n\rho \geq 0$, comparison to the system with $p^0 = f^0 = 0$ implies that $\rho(x, t_0) \leq \rho(x, t_1)$ for any $t_0 \leq t_1$ and a.e. $x$. The measure zero set where this fails depends on $t_0, t_1$, so we conclude by applying this with a countable basis of intervals. Item (iv) follows from parabolic estimates for the heat equation with $L^{\infty}$ coefficients (see e.g. [@ogawa22]). The distributional equation $p(\Delta p + n) = 0$ is also proved in [@jkt_nutrient], Theorem 2.2. The $L^\infty$ bound for $p$ follows from the compact support of $\rho$, and the boundedness of $n$; one can take a sufficiently large paraboloid supersolution to $\Delta p = -n$ to get an upper bound. ◻ From the definition of the weak solution, we can derive that $w$ satisfies an elliptic equation at each time. **Lemma 6**. *[\[lem:w_regularity\]]{#lem:w_regularity label="lem:w_regularity"} We have $w(1 - \rho) = 0$ a.e. in space-time.
For each $t > 0$, $w$ solves $$\label{eqn:simple_w_eqn} \Delta w(x, t) = \rho(x,t) - \rho(x,0) - \int_0^t n(x,s)\rho(x,s)\,ds\hbox{ in }\mathcal{D}'({ \mathbb{R}}^d).$$ In particular, for any $\tau > 0$ and $\varepsilon \in (0,1)$, we have $w\in C^{0,1}({ \mathbb{R}}^d\times [0,\tau])\cap L^\infty_t C^{1,1-\varepsilon}([0, \tau], { \mathbb{R}}^d)$.* *Proof.* For the first statement, we simply note that the monotonicity of $\rho$ in time from Lemma [Lemma 5](#lem:rho_regularity){reference-type="ref" reference="lem:rho_regularity"} implies $$0 \leq w(x,t)(1 - \rho(x, t)) = \int_0^t p(x,s)(1 - \rho(x,t))\,ds \leq \int_0^t p(x,s)(1-\rho(x,s))\,ds \equiv 0$$ For the second statement, we first note that both $\rho$ and the function $\int_0^t n\rho\,ds$ are continuous in time into any $L^p$ with $p < \infty$; this follows from the weak solution definition, since we have $\rho\in C_t L^1$ with values in $[0, 1]$ and compact support for bounded time intervals, while $n$ is space-time continuous from Lemma [\[lem:n_regularity\]](#lem:n_regularity){reference-type="ref" reference="lem:n_regularity"}. Then to derive a distributional equation for $w$, we consider Definition [Definition 4](#def:weak_sln){reference-type="ref" reference="def:weak_sln"}(ii) with $\psi$ of the form $\psi(x, t) = \varphi(x)\chi(t)$, where $\varphi\in C^\infty_c({ \mathbb{R}}^d)$ and $\chi\in C^\infty_c([0, \tau))$ with $\chi\equiv 1$ near 0.
Using that $w = \int p\,dt$, we obtain $$\int_{{ \mathbb{R}}^d} \nabla \varphi\cdot \Big(\int_0^{\tau} \chi \nabla p \,dt\Big)\,dx - \int_{{ \mathbb{R}}^d}\int_0^{\tau} \partial_t \chi \varphi \rho \,dx\,dt = \int_{{ \mathbb{R}}^d}\varphi(x)\rho_0(x)\,dx + \int_{{ \mathbb{R}}^d} \varphi \int_0^{\tau} \chi n \rho \,dt\,dx$$ If we take for $\chi$ a sequence of cutoffs valued in $[0,1]$ and converging pointwise to the indicator of $[0, \tau)$, then we can apply the aforementioned time continuity of $\rho$ and $\int_0^t n\rho\,ds$ to obtain the limiting equation $$\int_{{ \mathbb{R}}^d} \nabla \varphi \cdot \nabla w(\cdot, \tau) + \varphi \rho(\cdot, \tau)\, dx = \int_{{ \mathbb{R}}^d} \varphi (\rho_0 + \eta(\cdot, \tau))\,dx$$ from which we conclude [\[eqn:simple_w\_eqn\]](#eqn:simple_w_eqn){reference-type="eqref" reference="eqn:simple_w_eqn"}. Since $w(x, t) = \int_0^t p(x, s)\,ds$, the upper bound for $p$ implies that $w$ is Lipschitz in time uniformly in space. This will improve to Lipschitz in spacetime once we have $w\in L^\infty_t C^{1,1-}_x$. Since $n$ is bounded on ${ \mathbb{R}}^d\times [0,\tau]$, $\eta$ is bounded on ${ \mathbb{R}}^d\times [0,\tau]$. Then since $\Delta w(\cdot, t) = \rho(\cdot, t) - \rho_0 - \eta(\cdot, t)$ is uniformly bounded in $L^\infty$, and up to time $\tau$, $w(\cdot, t)$ is compactly supported in $\overline{\Omega_\tau}$, it follows that for any $p\in (1, \infty)$, $\Delta w(\cdot, t)\in L^p({ \mathbb{R}}^d)$. Calderón-Zygmund estimates then give $w(\cdot, t)\in W^{2,p}({ \mathbb{R}}^d)$, and thus $w(\cdot, t)\in C^{1,1-\varepsilon}({ \mathbb{R}}^d)$ for any $\varepsilon > 0$, uniformly in $t\in [0, \tau]$. ◻ From now on we make the assumptions (A1) and (A2) on our initial data $\rho_0, n_0$. **Lemma 7**.
*Let $\bar{n}(t):=\inf_{x\in \mathbb{R}^d} n(t,x)$. For any $t>0$, $\bar{n}(t)\geq e^{-t} \bar{n}(0).$* *Proof.* Suppose that $\tilde{n}$ satisfies the equation $\partial_t \tilde{n}-\Delta \tilde{n}=-\tilde{n}$ with constant initial data $\tilde{n}(0,x)=\bar{n}(0).$ The comparison principle for the heat equation implies that $\tilde{n}\leq n$ almost everywhere. If we define $\tilde{N}=e^{t}\tilde{n}$, then $\tilde{N}$ satisfies $\partial_t \tilde{N}-\Delta\tilde{N}=0$ with initial data $\tilde{N}(0,x)=\bar{n}(0)$. Hence, $\tilde{N}(t,x)=\tilde{N}(0,x)=\bar{n}(0)$ and thus it follows that $\tilde{n}(t,x)=e^{-t}\bar{n}(0)$, which implies the result. ◻ The following characterization of the pressure variable replaces the formal description of $p$ solving the elliptic problem $-\Delta p = n$ in $\{\rho=1\}$ with zero Dirichlet data, to avoid ambiguity arising from the potentially irregular boundary of $\{\rho=1\}$. The argument is similar to ones that have previously appeared in the literature [@pqv; @santambrogio_crowd_motion; @guillen_kim_mellet; @jacobs_2021]; we include a proof here since our setting is slightly different. **Lemma 8**. *For almost every time $t$, the pressure is a solution to the variational problem $$p(\cdot,t)=\mathop{\mathrm{argmin}}_{\varphi(\cdot)(1-\rho(\cdot,t))=0, \; \varphi\geq 0} \, \int_{\mathbb{R}^d} \frac{1}{2}|\nabla \varphi(x)|^2-\varphi(x) n(x,t)\, dx.$$* *Proof.* Given $\epsilon>0$ define $p_{\epsilon}(x,t):=\frac{1}{\epsilon}\int_{t-\epsilon}^t p(x,s)\, ds$ where we set $p(x,s)=p(x,0)$ if $s<0$ and $p^{\epsilon}(x,t):=\frac{1}{\epsilon}\int_{t}^{t+\epsilon} p(x,s)\, ds$.
Fix a time $t_0$ such that $p_{\epsilon}(\cdot,t_0), p^{\epsilon}(\cdot,t_0)$ converge to $p(\cdot,t_0)$ in $H^1(\mathbb{R}^d).$ Choose some nonnegative function $\varphi\in H^1(\mathbb{R}^d)$ such that $\varphi(x)(1-\rho(x,t_0))=0$ for almost every $x\in \mathbb{R}^d$ (note that space integrals of $\rho(\cdot,t_0)$ against functions in $H^1(\mathbb{R}^d)$ are well-defined at any time $t_0$ since $\partial_t \rho\in L^2([0,T];H^{-1}(\mathbb{R}^d))$, which itself is a consequence of the continuity equation and $p\in L^2([0,T];H^1(\mathbb{R}^d))$). Integrating equation ([\[first\]](#first){reference-type="ref" reference="first"}) from time $t_0-\epsilon$ to $t_0$, dividing by $\epsilon$, and integrating against $\varphi$ we see that $$\int_{\mathbb{R}^d} \varphi(x)\frac{\rho(x,t_0)-\rho(x,t_0-\epsilon)}{\epsilon}+\nabla \varphi(x)\cdot \nabla p_{\epsilon}(x,t_0)\, dx=\int_{\mathbb{R}^d} \varphi(x) \frac{1}{\epsilon}\int_{t_0-\epsilon}^{t_0} \rho(x,s) n(x,s)\, ds\, dx$$ The condition $\varphi(x)(1-\rho(x,t_0))=0$ implies that $\varphi(x)\frac{\rho(x, t_0)-\rho(x, t_0-\epsilon)}{\epsilon}= \varphi(x)\frac{1-\rho(x, t_0-\epsilon)}{\epsilon}$. Combined with the constraint $\rho\leq 1$, we can conclude that $$\int_{\mathbb{R}^d}\nabla \varphi(x)\cdot \nabla p_{\epsilon}(x,t_0)\, dx\leq \int_{\mathbb{R}^d} \varphi(x) \frac{1}{\epsilon}\int_{t_0-\epsilon}^{t_0} \rho(x,s)n(x,s)\, ds\, dx.$$ Applying the same logic to the time integral over the interval $[t_0,t_0+\epsilon]$, we find that $$\int_{\mathbb{R}^d}\nabla \varphi(x)\cdot \nabla p^{\epsilon}(x,t_0)\, dx\geq \int_{\mathbb{R}^d} \varphi(x) \frac{1}{\epsilon}\int^{t_0+\epsilon}_{t_0}\rho(x,s) n(x,s)\, ds\, dx.$$ Sending $\epsilon\to 0$ we can conclude that $$\int_{\mathbb{R}^d} \nabla \varphi(x)\cdot \nabla p(x,t_0)\, dx=\int_{\mathbb{R}^d} \varphi(x)\rho(x, t_0)n(x,t_0)\, dx=\int_{\mathbb{R}^d} \varphi(x)n(x, t_0)\, dx$$ where the final equality follows from the fact that $\varphi(x)(1-\rho(x,t_0))=0$ almost everywhere.
The above equation is the Euler-Lagrange equation for the variational problem; thus, combined with the strong convexity of the variational problem, we see that $p$ solves the variational problem at every time $t_0$ where $p_{\epsilon}(\cdot, t_0), p^{\epsilon}(\cdot, t_0)$ converge to $p(\cdot, t_0)$ in $H^1(\mathbb{R}^d).$ Since this must hold for almost every $t_0\in [0,T]$, we are done. ◻ A straightforward consequence of the previous lemma is the following result, which gives a crude comparison between the pressure values at different times. We will obtain a much sharper comparison property in Section 3 when we establish the Hopf-Lax type formula for the pressure. **Lemma 9**. *Fix some time $\tau>0$. There exists a constant $C(\tau)$ such that for almost every pair of times $s,t\in [0,\tau]$ with $s<t$, $$p(s,x)\leq C(\tau) p(t,x).$$ In particular, this implies $$\{x\in \mathbb{R}^d: w(s,x)>0\}\subset \{x\in \mathbb{R}^d: p(s,x)>0\}\subset \{x\in \mathbb{R}^d: w(t,x)>0\}.$$* *Proof.* By Lemma [Lemma 7](#lem:nutrient_lower_bound){reference-type="ref" reference="lem:nutrient_lower_bound"} and the upper bound $n\leq \|n_0\|_{L^\infty}$ (a consequence of the comparison principle), there exists a constant $C(\tau)>0$ such that $n(x,s)< C(\tau) n(x,t)$ for all $x\in \mathbb{R}^d$. Since $\rho$ is nondecreasing with respect to time and $\rho\leq 1$, we know that $\varphi(x)(1-\rho(x,t))=0$ for any nonnegative function $\varphi(x)$ such that $\varphi(x)(1-\rho(x,s))=0$. Let us choose $\varphi(x)=(p(s,x)-C(\tau)p(t,x))_+$.
It then follows from Lemma [Lemma 8](#lem:variational){reference-type="ref" reference="lem:variational"} that $$\int_{\mathbb{R}^d} \nabla (p(x,s)-C(\tau)p(x,t))_+ \cdot \nabla p(x,t)\, dx=\int_{\mathbb{R}^d} (p(x,s)-C(\tau)p(x,t))_+n(x,t)\, dx,$$ and $$\int_{\mathbb{R}^d} \nabla (p(x,s)-C(\tau)p(x,t))_+ \cdot \nabla p(x,s)\, dx=\int_{\mathbb{R}^d} (p(x,s)-C(\tau)p(x,t))_+n(x,s)\, dx.$$ Hence, $$\int_{\mathbb{R}^d} \nabla (p(x,s)-C(\tau)p(x,t))_+ \cdot \nabla (p(x,s)-C(\tau)p(x,t))\, dx=\int_{\mathbb{R}^d} (p(x,s)-C(\tau)p(x,t))_+(n(x,s)-C(\tau)n(x,t))\, dx.$$ The left-hand side of the above equation is nonnegative while the right-hand side of the equation is nonpositive. This is only possible if $(p(x,s)-C(\tau)p(x,t))_+=0$ almost everywhere. ◻ **Lemma 10**. *Up to a set of measure zero, for any $t > 0$ we have $\{x\in \mathbb{R}^d: \rho(x,t)=1\}=\{x\in \mathbb{R}^d: w(x,t)>0\}$.* *Proof.* This is nearly Lemma 4.6 of [@jkt_nutrient], except that in the diffusion case we lack an explicit formula for the nutrient. Nevertheless, we proceed along the same lines. From Lemma [Lemma 6](#lem:simple_w_eqn){reference-type="ref" reference="lem:simple_w_eqn"}, we have $w(1-\rho) = 0$, and thus $$\{x\in \mathbb{R}^d: w(x,t)>0\}\subset \{x\in \mathbb{R}^d: \rho(x,t)=1\}$$ Thus, we must show that the set $A_t := \{ x : \rho(x, t) = 1, w(x,t) = 0 \}$ has measure zero. For this, we observe that $\Delta w$ vanishes a.e. where $w$ vanishes, and thus [\[eqn:simple_w\_eqn\]](#eqn:simple_w_eqn){reference-type="eqref" reference="eqn:simple_w_eqn"} implies that $$\rho(x, 0) + \int_0^t n(x,s)\rho(x, s)\,ds = \rho(x,t) = 1 \hbox{ a.e. on }A_t$$ From the pressure equation [\[eq:p_dist_eqn\]](#eq:p_dist_eqn){reference-type="eqref" reference="eq:p_dist_eqn"}, any interior point of $\{ \rho(x, 0) = 1 \}$ has positive pressure at every positive time, and our assumptions on the initial data provide that the boundary of this set has zero measure. 
Thus, we need only consider the case where $\int_0^t n\rho\,ds = 1$. Since the nutrient is uniformly positive due to Lemma [Lemma 7](#lem:nutrient_lower_bound){reference-type="ref" reference="lem:nutrient_lower_bound"}, this occurs for at most one time for a given $x$. On the other hand, since the nutrient is uniformly bounded, this function is continuous in time, and $x$ must be in $A_t$ for an open set of times before $\int_0^t n\rho\,ds = 1$ is satisfied. It follows directly that $A_t$ (and, in fact, $\bigcup_t A_t$) is null. ◻ **Lemma 11**. *$$T(x) := \inf \{ t \geq 0 : w(x, t) > 0 \} = \inf \{ t \geq 0 : \rho(x, t) = 1\} \hbox{ for a.e. } x\in { \mathbb{R}}^d$$* *Proof.* For conciseness, write $\widetilde{T}(x) = \inf \{ t\geq 0 : \rho(x, t) = 1 \}$. Suppose that for some $x$, we have $\widetilde{T}(x) < T(x)$. Then we have $w(x, t) = 0$ for all $t < T(x)$. Since Lemma [Lemma 5](#lem:rho_regularity){reference-type="ref" reference="lem:rho_regularity"} gives that $\rho$ is monotone in time except for a null set of $x$ which we ignore, we have $x\in \{ y : w(y, t) = 0, \rho(y, t) = 1 \}$ for all $t\in (\widetilde{T}(x), T(x))$. Then any such $x$ is contained in $\bigcup_{t\in \mathbb{Q}\cap (0, \infty)} \{ y : w(y,t) = 0, \rho(y, t) = 1 \}$, which is null by Lemma [Lemma 10](#lem:rho_w_positive_sets_agree){reference-type="ref" reference="lem:rho_w_positive_sets_agree"}. The other direction is similar. Suppose instead that for some $x$, we have $T(x) < \widetilde{T}(x)$. Then $w(x, t) > 0$ for all $t > T(x)$, while the definition of $\widetilde{T}$ together with monotonicity implies that for a.e. $x$ we have $\rho(x, t) < 1$ for all $t < \widetilde{T}(x)$. Then any such $x$ is contained in $\bigcup_{t\in \mathbb{Q}\cap (0, \infty)} \{ y : w(y, t) > 0, \rho(y, t) < 1 \}$, which is null by Lemma [Lemma 10](#lem:rho_w_positive_sets_agree){reference-type="ref" reference="lem:rho_w_positive_sets_agree"}. ◻ **Lemma 12**.
*$w(\cdot,t)$ solves the obstacle problem [\[eq:w_obst_eqn\]](#eq:w_obst_eqn){reference-type="eqref" reference="eq:w_obst_eqn"}.* *Proof.* This follows from applying Lemma [Lemma 11](#lem:alt_hitting_time){reference-type="ref" reference="lem:alt_hitting_time"} to Lemma [Lemma 6](#lem:simple_w_eqn){reference-type="ref" reference="lem:simple_w_eqn"}. We have $$\rho(x, t) = \chi_{\{ T < t\}}(x) = \chi_{ \{ w(\cdot, t) > 0\} }(x)$$ for a.e. $x$ and all $t\neq T(x)$. Thus, to obtain [\[eq:w_obst_eqn\]](#eq:w_obst_eqn){reference-type="eqref" reference="eq:w_obst_eqn"} we modify [\[eqn:simple_w\_eqn\]](#eqn:simple_w_eqn){reference-type="eqref" reference="eqn:simple_w_eqn"} by replacing the occurrence of $\rho$ in $\int_0^t n\rho\,ds$ with $\chi_{\{ T < t\}}$ and the other occurrence of $\rho$ with $\chi_{\{w(\cdot, t) > 0\}}$. ◻ Recalling the notation of [\[domain\]](#domain){reference-type="eqref" reference="domain"}, we additionally define $$\label{domain:2} \Omega_\infty := \{0\leq T(x) <\infty\}=\bigcup_{t > 0} \Omega_t, \quad \mathcal{O} :=\{0<T(x)<\infty\} = \Omega_{\infty}\setminus \overline{\Omega_0}.$$ We now prove that the hitting time $T$ is continuous. While this justifies the characterization of the level sets of $T$ as the free boundaries $\partial\Omega_t$, it is also an important first step that initiates the regularity analysis of $T$ in Section 4. The main idea will be to show that a discontinuity must result in a point $x_0$ and times $t_0 < t_1$ such that $x_0$ is in $\partial \Omega_t$ for $t_0 \leq t \leq t_1$. Then $w(\cdot, t_1) - w(\cdot, t_0)$ is a positive superharmonic function on $\Omega_{t_0}$, so one would like to apply the Hopf lemma to draw a contradiction between $w(x_0, t_0) = w(x_0, t_1) = 0$ and $\nabla w(x_0, t_0) = \nabla w(x_0, t_1) = 0$. Unfortunately, $\Omega_{t_0}$ does not a priori have the regularity needed to apply the Hopf lemma, so we must first use obstacle problem techniques to shift to a setting where we do have such regularity.
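The contradiction scheme just described rests on the classical Hopf lemma, which we recall for convenience (standard statement, with notation of our choosing):

```latex
% Classical Hopf lemma: let U satisfy the interior ball condition at
% x_0 \in \partial U, and let h be superharmonic and positive in U with
% h(x_0) = 0. Then the outward normal derivative at x_0 is strictly
% negative; in particular, if h is differentiable at x_0, then
% \nabla h(x_0) \neq 0.
\Delta h \le 0 \ \text{in } U, \qquad h > 0 \ \text{in } U, \qquad h(x_0) = 0
\quad \Longrightarrow \quad \partial_{\nu} h(x_0) < 0 .
```

In the proof of Proposition 15 this is applied to $h = v - u$ at blowup scale, where the interior ball condition is supplied by the convexity of the coincidence set $\{u = 0\}$ from Lemma 14.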
The key tool in doing so will be the quadratic blowup of $w$ at free boundary points: **Lemma 13** ([@blank] Corollary 2.5). *Let $u$ be a nonnegative solution to $\Delta u = f\chi_{\{u > 0\}}$, for some $f$ which is strictly positive and bounded near the free boundary $\partial \{ u > 0\}$. If $x_0$ is a free boundary point, then the quadratic blowup sequence $r^{-2}u(r(x - x_0) + x_0)$ is compact in $C^{1,\alpha}(B_1(x_0))$ as $r\to 0^+$. Moreover, if $f$ is continuous at $x_0$, then the subsequential limits solve $\Delta v = f(x_0)\chi_{\{ v > 0\}}$.* The subsequential limit enjoys better geometry, due to the following property of global solutions to the constant-source obstacle problem: **Lemma 14** ([@caffarelli98] Corollary 7). *A nonnegative solution to $\Delta u = \chi_{\{ u > 0\}}$ on ${ \mathbb{R}}^d$ is convex.* **Proposition 15**. - *$T$ is continuous.* - *$x\in \partial\Omega_t$ if and only if $x\in \mathcal{O}$ and $t = T(x)$, for all $x\in{ \mathbb{R}}^d$ and $t>0$.* *Proof.* First, we verify that the $\overline{\Omega_t}$ are continuous from above, in the sense that for any $t$ we have: $$\label{eq:set_continuity} \overline{\Omega_t} = \bigcap_{\varepsilon > 0} \overline{\Omega_{t + \varepsilon}}$$ The forward inclusion is trivial by the monotonicity of $w$. For the reverse inclusion, we suppose for contradiction that there exists $x \in \bigcap_{\varepsilon > 0} \overline{\Omega_{t + \varepsilon}} \setminus \overline{\Omega_t}$. Let $r$ be sufficiently small that $B_r(x)\subset \{ w(\cdot, t) = 0 \}$. From [\[eqn:simple_w\_eqn\]](#eqn:simple_w_eqn){reference-type="eqref" reference="eqn:simple_w_eqn"}, $\Delta w(\cdot, t + \varepsilon) \geq 1 - \varepsilon \|n_0\|_{\infty}$ on $B_r(x)\cap \{ w(\cdot, t + \varepsilon) > 0 \}$, and by assumption we have $x\in \overline{\Omega_{t + \varepsilon}}$ for all $\varepsilon > 0$.
It follows by quadratic nondegeneracy for the obstacle problem (Lemma [Lemma 46](#lem:obst_nondegen){reference-type="ref" reference="lem:obst_nondegen"}) that if $\varepsilon < \frac{1}{2\|n_0\|_{\infty}}$, then $$\sup_{B_r(x)} w(\cdot, t + \varepsilon) \geq Cr^2$$ uniformly in $\varepsilon$. Since $w(\cdot, t) \equiv 0$ on $B_r(x)$, we get a contradiction with the Lipschitz continuity of $w$ in time by shrinking $\varepsilon$. Now, we introduce $$\label{eq:t_0_defn} T_0(x) := \inf \{ t > 0 : x\in \overline{\Omega_t} \}$$ It is immediate that $T_0(x) \leq T(x) = \inf \{ t > 0 : x\in \Omega_t \}$. We claim that $x\in \partial \Omega_t$ if and only if $t\in [T_0(x), T(x)]$. It is clear that for $t < T_0(x)$, $x\notin \overline{\Omega_t}$, and for $t > T(x)$, $x\in \Omega_t$. Since $x\in \Omega_{T(x) + \varepsilon}$ for every $\varepsilon > 0$, [\[eq:set_continuity\]](#eq:set_continuity){reference-type="eqref" reference="eq:set_continuity"} gives $x\in \overline{\Omega_{T(x)}}$. On the other hand, by continuity of $w$ and minimality of $T$, we have $x\notin \Omega_{T(x)}$, so $x\in \partial \Omega_{T(x)}$. Monotonicity implies that $x$ is a boundary point for all $t \leq T(x)$ for which $x\in \overline{\Omega_t}$. We have $x\in \overline{\Omega_{T_0(x)}}$ by using [\[eq:set_continuity\]](#eq:set_continuity){reference-type="eqref" reference="eq:set_continuity"} with the definition of $T_0$, so we get the claim. For purely topological reasons related to how each is defined, $T_0$ is lower semicontinuous and $T$ is upper semicontinuous. To check lower semicontinuity of $T_0$, let $(x_n)$ be a sequence converging to $x$ with $t := \liminf T_0(x_n)$. Then for any $\varepsilon > 0$, the $x_n$ are eventually in $\overline{\Omega_{t + \varepsilon}}$, and thus $x\in \overline{\Omega_{t + \varepsilon}}$. It follows that $T_0(x) \leq t$. To check upper semicontinuity of $T$, we note that $x\in \Omega_{T(x) + \varepsilon}$ for all $\varepsilon > 0$.
Any sequence $x_n$ converging to $x$ eventually has $T(x_n) \leq T(x) + \varepsilon$ since $\Omega_{T(x) + \varepsilon}$ is open, so we conclude that $\limsup T(x_n) \leq T(x)$. Therefore, both parts of the proposition will follow if we can show that $T_0 \equiv T$. For this, we will need the following useful property: $$\label{eq:t_t0_continuity} \lim_{\Omega_{T_0(x)}\ni x_n \to x} T(x_n) = T_0(x)$$ To see this, we note that if $(x_n)$ is such a sequence, then $T(x_n) \leq T_0(x)$ for each $n$. On the other hand, by minimality of $T_0$, $x\notin \overline{\Omega_{T_0(x) - \varepsilon}}$, and so we eventually have $T(x_n) \geq T_0(x) - \varepsilon$ for any $\varepsilon > 0$. Finally, we proceed to the proof that $T_0 = T$. Suppose that $x_0\in \partial\Omega_t$ with $T_0(x_0) < T(x_0)$. We will use obstacle problem theory to compare blowups of $w$ at $(x_0, T_0(x_0))$ and at $(x_0,t)$ with $t>T_0(x_0)$ to derive a contradiction. First let us ensure that the blowup profiles are well-defined. Due to ([\[eq:w_obst_eqn\]](#eq:w_obst_eqn){reference-type="ref" reference="eq:w_obst_eqn"}) it follows that $$0\leq \eta(x,t) \leq (t - T(x))_+\|n_0\|_\infty$$ We also get a continuity estimate.
Assuming $T(y) \leq T(x)$, we have either $T(y) \leq t \leq T(x)$, giving $$|\eta(x, t) - \eta(y, t)| = \eta(y, t) \leq \|n_0\|_\infty |T(x) - T(y)|$$ or else $T(y) \leq T(x) \leq t$, giving $$\begin{aligned} |\eta(x,t) - \eta(y,t)| &= \left|\int_{T(x)}^t n(x,s) - n(y,s)\,ds - \int_{T(y)}^{T(x)} n(y,s)\,ds\right| \\&\leq |t - T(x)|\sup_{T(x)\leq s\leq t} |n(x,s) - n(y,s)| + \|n_0\|_\infty |T(x) - T(y)|\end{aligned}$$ Thus, using the nutrient regularity from Lemma [\[lem:n_regularity\]](#lem:n_regularity){reference-type="ref" reference="lem:n_regularity"} and the result of [\[eq:t_t0_continuity\]](#eq:t_t0_continuity){reference-type="eqref" reference="eq:t_t0_continuity"}, we conclude that $\eta(\cdot, T_0(x_0))$ restricted to $\Omega_{T_0(x_0)}$ is continuous at $x_0$, and in a sufficiently small neighborhood of $x_0$, we can ensure that it is less than $\frac{1}{2}$. Then Lemma [Lemma 13](#lem:quad_blowup_cpt){reference-type="ref" reference="lem:quad_blowup_cpt"} gives that the family of rescalings $x\mapsto r^{-2}w(r(x - x_0) + x_0, T_0(x_0))$ is compact as $r\to 0$ in $C^{1,\alpha}_{loc}$, and its subsequential limits are nonzero global solutions of $$\label{obstacle_00} \Delta u = \chi_{\{u > 0\}}.$$ Now, choose $\tau\in (T_0(x_0), T(x_0))$ sufficiently close to $T_0(x_0)$ that we still have $\eta(x, \tau) < \frac{1}{2}$ in some neighborhood of $x_0$. By taking a further subsequence, the discussion above yields a sequence $r_n\to 0$ such that $$r_n^{-2}w(r_n(x - x_0) + x_0, T_0(x_0))\to u \hbox{ and } r_n^{-2}w(r_n(x - x_0) + x_0, \tau)\to v,$$ for some $u, v$ in $C^{1,\alpha}_{loc}({ \mathbb{R}}^d)$. Unlike with $u$, $\eta(\cdot, \tau)$ restricted to $\Omega_\tau$ is not known to be continuous at $x_0$, since we do not yet know that $T$ is continuous. In particular, we do not know that $v$ solves a constant Laplacian obstacle problem.
However, we do get $u(x_0) = v(x_0) = 0$ and $\nabla u(x_0) = \nabla v(x_0) = 0$ from the convergence, and we also have that $v\geq u$ since $w(\cdot, \tau) \geq w(\cdot, T_0(x_0))$. We will apply the Hopf lemma to $v-u$ in the domain $U := \{ u > 0\}$. First observe that from the definition of $u$, we have $r_n(x - x_0) + x_0 \in \Omega_{T_0(x_0)}$ if $x\in U$ and if $n$ is sufficiently large depending on $x$. We have checked above that $\eta(\cdot, \tau)$ restricted to $\Omega_{T_0(x_0)}$ is continuous at $x_0$. Thus, for any $x\in U$, we have $\eta(r_n(x - x_0) + x_0, \tau) \to \eta(x_0, \tau)$, and so we conclude that $$\Delta v = 1 - \eta(x_0, \tau) \hbox{ in } U.$$ Comparing this equation to [\[obstacle_00\]](#obstacle_00){reference-type="eqref" reference="obstacle_00"}, it follows that $v - u$ satisfies $$\Delta (v - u) = - \eta(x_0, \tau) \leq - (\tau - T_0(x_0))\inf_{t\in [T_0(x_0), \tau]}n(x_0, t) < 0$$ with the last inequality following from the nutrient lower bound in Lemma [Lemma 7](#lem:nutrient_lower_bound){reference-type="ref" reference="lem:nutrient_lower_bound"} and the assumption that the initial nutrient is bounded away from 0. This implies that $v - u$ is strictly superharmonic inside $U$, and so our previous observation that $v - u\geq 0$ by the monotonicity in time of $w$ improves to $v - u > 0$ inside $U$. Lastly, let us observe that, from Lemma [Lemma 14](#lem:global_obst_sln_cvx){reference-type="ref" reference="lem:global_obst_sln_cvx"}, the complement of $U$ is convex and so $U$ satisfies the interior ball condition at $x_0$. Putting together the above information, the Hopf lemma applied at $x_0$ implies that $\nabla v(x_0) - \nabla u(x_0) \neq 0$, which is a contradiction. It follows that $T_0 = T$, which completes the proof. ◻ *Remark 16*.
An important consequence of Proposition [Proposition 15](#prop:hitting_time_continuous){reference-type="ref" reference="prop:hitting_time_continuous"} is that the spacetime interface is exactly the graph of $T$ on $\mathcal{O}$. In other words, $$\label{eq:graph} \{ (x, t) : t\in (0, \infty), x\in \partial \Omega_t \} = \mathrm{Graph}_T(\mathcal{O}) := \{ (x, T(x)) : x\in \mathcal{O} \}$$ This also means that the interface is a $d$-dimensional topological manifold, and the regularity of its parametrization in $d$ spatial variables is exactly that of $T$. We will use the notation $\mathrm{Graph}_T$ with subsets of $\mathcal{O}$, which may be understood in this light as projections of the spacetime interface into ${ \mathbb{R}}^d$. Finally, in light of the regularity of $w$ and $T$, we note a natural way to standardize $\rho$ on measure zero sets. A corresponding standardization of the pressure will need to wait until the next section, due to the need to preserve certain delicate structures. **Lemma 17**. *$\partial \Omega_t$ has zero measure in ${ \mathbb{R}}^d$ for all $t > 0$. The weak solution $(\rho, p, n)$ can be taken such that $\rho$ is upper semicontinuous in space and time, with $\{ \rho(\cdot, t) = 1 \} = \overline{\Omega_t}$ for each $t$. In particular, the support of $p$ is then contained in $\{ \rho = 1 \}$.* *Proof.* By Proposition [Proposition 15](#prop:hitting_time_continuous){reference-type="ref" reference="prop:hitting_time_continuous"}, for any $t >0$ and $\varepsilon\in (0, t)$, we have $\partial \Omega_t\subset \Omega_{t + \varepsilon}\setminus \Omega_{t - \varepsilon}$. By Lemma [Lemma 10](#lem:rho_w_positive_sets_agree){reference-type="ref" reference="lem:rho_w_positive_sets_agree"}, up to measure zero sets we can replace the right-hand side with $\{ \rho(\cdot, t + \varepsilon) = 1 \}\setminus \{ \rho(\cdot, t - \varepsilon) = 1 \}$, and by time continuity of $\rho$ in $L^1$, the measure of this set goes to 0 with $\varepsilon$.
Thus, $\partial \Omega_t$ has zero measure. Then we claim that $(\rho, p, n)$ with $\rho$ redefined as $\chi_{\overline{\{ w > 0\}}}$ and $p$ redefined to vanish outside $\overline{\{ w > 0\}}$ remains a weak solution as defined in Definition [Definition 4](#def:weak_sln){reference-type="ref" reference="def:weak_sln"}. Indeed, since this changes $\rho, p$ by measure zero sets for each time, equation [\[eq:first_dist_eqn\]](#eq:first_dist_eqn){reference-type="eqref" reference="eq:first_dist_eqn"} is unaffected. On the other hand, we have $p (1 - \rho) = 0$ by construction. To check that $\rho$ is spacetime upper semicontinuous, we only need to show that the set $\{ \rho = 1 \} := \{ (x, t) : t\geq 0, x\in \overline{\Omega_t}\}$ is closed. Let $(x_n, t_n)$ be a sequence in this set converging to $(x, t)$. Since the $t_n$ converge, they are bounded, and so the sequence is contained in an $\overline{\Omega_\tau}$ for $\tau$ sufficiently large. Then $T(x) < \infty$, so we have $T(x_n)\to T(x)$ by continuity, and since $T(x_n) \leq t_n$ for each $n$ by Proposition [Proposition 15](#prop:hitting_time_continuous){reference-type="ref" reference="prop:hitting_time_continuous"}, we have $T(x) \leq t$. It follows that $x\in \overline{\Omega_t}$, so we conclude. ◻ # AB estimates and the Hopf-Lax bound {#sec:hjb} In this section, we will show that there exists a nonnegative function $u_+$ such that the pressure is a supersolution to the following Hamilton-Jacobi inequality $$\label{eq:hjb} \partial_t p-|\nabla p|^2+pu_+\geq 0.$$ We will then use ([\[eq:hjb\]](#eq:hjb){reference-type="ref" reference="eq:hjb"}) to obtain a Hopf-Lax type formula for the pressure. In particular, given a fixed time $t_0$, this will allow us to give lower bounds for the pressure at times $t>t_0$ and upper bounds for the pressure at times $t<t_0$ in terms of $p(t_0,\cdot)$.
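For orientation, when $u_+\equiv 0$ the inequality above reduces to $\partial_t p\geq|\nabla p|^2$, for which the Hopf-Lax type bound follows from a classical computation along straight lines (a sketch for smooth supersolutions, recorded here for later comparison):

```latex
% Take the straight path x(t) from x(t_0) = x_0 to x(t_1) = x_1, so that
% |x'(t)| = |x_1 - x_0| / (t_1 - t_0). Then
\frac{d}{dt}\, p(x(t),t)
  = \partial_t p + x'(t)\cdot\nabla p
  \;\ge\; |\nabla p|^2 + x'(t)\cdot\nabla p
  \;\ge\; -\tfrac{1}{4}\,|x'(t)|^2 ,
% and integrating from t_0 to t_1 gives the classical bound
p(x_0, t_0) \;\le\; p(x_1, t_1) + \frac{|x_1 - x_0|^2}{4\,(t_1 - t_0)} .
```

The zeroth-order term $pu_+$, together with the lack of an $L^{\infty}$ bound on $u_+$, is what forces the exponential weights appearing in the bounds proved later in this section.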
This will give us a very precise way of constructing pressure supersolutions that lead to powerful barrier-type arguments and eventually Hölder regularity of the hitting times (cf. Section [4](#sec:barrier){reference-type="ref" reference="sec:barrier"}). Let us emphasize that to the best of our knowledge, the Hopf-Lax type bounds we obtain have not previously appeared in the literature for Hele-Shaw type equations, and they require significant effort to obtain. First, to establish ([\[eq:hjb\]](#eq:hjb){reference-type="ref" reference="eq:hjb"}), we go through the Porous Media Equation (PME) and use the fact that our solution $(\rho, p)$ can be obtained as the incompressible limit of solutions $(\rho_{\gamma}, p_{\gamma}, n_{\gamma})$ of the PME-nutrient system $$\label{eq:pme_gamma} \partial_t \rho_{\gamma}-\nabla \cdot (\rho_{\gamma}\nabla p_{\gamma})=\rho_{\gamma}n_{\gamma}, \quad p_{\gamma}=\rho_{\gamma}^{\gamma},$$ $$\label{eq:nutrient_gamma} \partial_t n_{\gamma} -\Delta n_{\gamma}=-\rho_{\gamma} n_{\gamma}$$ as the scalar parameter $\gamma$ is sent to infinity, and where the nutrient variable from our original system is held fixed. The advantage of the PME system is that it is possible to use the relation $p_{\gamma}=\rho_{\gamma}^{\gamma}$ to rewrite ([\[eq:pme_gamma\]](#eq:pme_gamma){reference-type="ref" reference="eq:pme_gamma"}) solely in terms of the pressure variable $p_{\gamma}$, which yields the equation $$\label{eq:pme_pressure} \partial_t p_{\gamma}-|\nabla p_{\gamma}|^2-\gamma p_{\gamma}(\Delta p_{\gamma}+n_{\gamma})=0.$$ The main difficulty in obtaining ([\[eq:hjb\]](#eq:hjb){reference-type="ref" reference="eq:hjb"}) is to show that as $\gamma\to\infty$, $u_{\gamma}=-\gamma(\Delta p_{\gamma}+n_{\gamma})$ converges to a meaningful limit object $u$, whose positive part can be controlled. For the classic PME without a source term, bounds on the negative part of $\Delta p_{\gamma}$ are known through the celebrated Aronson-Bénilan estimate [@ab].
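For comparison, we recall the shape of the classical estimate in its standard pressure normalization $v=\frac{m}{m-1}\rho^{m-1}$ (which differs from our normalization $p_{\gamma}=\rho_{\gamma}^{\gamma}$):

```latex
% Aronson--Benilan estimate for the PME \partial_t \rho = \Delta \rho^m,
% m > 1: the pressure v = \frac{m}{m-1}\rho^{m-1} satisfies, in the
% sense of distributions,
\Delta v \;\ge\; -\frac{k}{t}, \qquad k = \frac{d}{d(m-1) + 2},
% a one-sided bound that degenerates only as t \to 0^+.
```

Absent the source terms, the relation $p_{\gamma}=\rho_{\gamma}^{\gamma}$ turns the density equation into the PME with exponent $m=\gamma+1$ up to a constant time rescaling, since $\rho_{\gamma}\nabla p_{\gamma}=\frac{\gamma}{\gamma+1}\nabla \rho_{\gamma}^{\gamma+1}$.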
In the presence of a source term, AB-type bounds on quantities taking a similar form to $\gamma(\Delta p_{\gamma}+n_{\gamma})$ have been studied in the literature [@pqv; @gpsg; @perthame_david; @jacobs_lagrangian]; however, except for [@jacobs_lagrangian], these bounds do not scale well with respect to $\gamma$. We adapt the arguments from [@jacobs_lagrangian] to show that $[u_{\gamma}]_+$ can be bounded uniformly with respect to $\gamma$ in BMO-type spaces. Note that we are unable to get $L^{\infty}$ bounds on $u_{\gamma}$ essentially because two key quantities in the estimate, $\partial_t n$ and $\nabla n\cdot \nabla p$, are not in general bounded in $L^{\infty}$. It would be interesting to see whether equation ([\[eq:hjb\]](#eq:hjb){reference-type="ref" reference="eq:hjb"}) could be obtained directly from the original system without going through PME, but we leave this question to future work. Once we have obtained equation ([\[eq:hjb\]](#eq:hjb){reference-type="ref" reference="eq:hjb"}), there is still significant work required to obtain a Hopf-Lax type control for $p$. Here the difficulty is that $u_+$ is not bounded in $L^{\infty}$. Noting that $p$ should satisfy $$\partial_t p-|\nabla p|^2+pu_+\geq 0,$$ differentiating $p$ along an arbitrary path $x(t)$ gives $$\begin{aligned} \frac{d}{dt} p(x(t),t)&=\partial_t p(x(t), t)+x'(t)\cdot \nabla p(x(t),t)\\ &\geq |\nabla p(x(t),t)|^2-p(x(t),t)u_+(x(t),t)+x'(t)\cdot \nabla p(x(t),t)\\ &\geq -p(x(t),t)u_+(x(t),t)-\frac{1}{4}|x'(t)|^2.\end{aligned}$$ Unfortunately, without $L^{\infty}$ control on $u_+$ it is not clear that time integrals of the final quantity will be well-defined. This prevents the usual approach to proving Hopf-Lax type bounds. To overcome this, we adapt the approach developed in [@cardaliaguet_graber_mfg], which handles unbounded coefficients by instead considering an average over paths indexed by the unit ball.
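The final lower bound in the path computation above is the elementary completing-the-square inequality, which we record for convenience:

```latex
% For any vectors a, b \in \mathbb{R}^d,
0 \;\le\; \bigl| a + \tfrac{1}{2} b \bigr|^2
  \;=\; |a|^2 + a\cdot b + \tfrac{1}{4}|b|^2
\quad \Longrightarrow \quad
|a|^2 + a\cdot b \;\ge\; -\tfrac{1}{4}|b|^2 ,
% applied with a = \nabla p(x(t),t) and b = x'(t).
```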
Our calculation is somewhat different however, as we can exploit the specific structure of $pu_+$ to bound $pu_+\leq \lambda p+p(u-\lambda)_+$ for some scalar $\lambda\geq 0$. By choosing $\lambda$ appropriately we can force $p(u-\lambda)_+$ to be small while using a Gronwall argument to handle $\lambda p$. This allows us to obtain a much more favorable error term in our Hopf-Lax formula compared to [@cardaliaguet_graber_mfg] (cf. Proposition [Proposition 24](#prop:hj_estimate){reference-type="ref" reference="prop:hj_estimate"}). We begin with the aforementioned well-known result, which says that we can approximate our system ([\[first\]](#first){reference-type="ref" reference="first"}-[\[second\]](#second){reference-type="ref" reference="second"}) with a sequence of smooth solutions to PME. **Proposition 18** (see e.g. [@pqv; @gpsg; @jacobs_lagrangian]). *There exists a sequence of smooth solutions $(\rho_{\gamma}, p_{\gamma}, n_{\gamma})$ to the PME-nutrient system ([\[eq:pme_gamma\]](#eq:pme_gamma){reference-type="ref" reference="eq:pme_gamma"}-[\[eq:nutrient_gamma\]](#eq:nutrient_gamma){reference-type="ref" reference="eq:nutrient_gamma"}) with initial data $(\rho_{0,\gamma}, n_{0,\gamma})$ such that for any $\tau>0$, $\rho_{\gamma}$ converges strongly in $L^1([0,\tau]\times\mathbb{R}^d)$ and $p_{\gamma}, n_{\gamma}$ converge strongly in $L^2([0,\tau];H^1(\mathbb{R}^d))$ to the unique solution $(\rho, p, n)$ to the system ([\[first\]](#first){reference-type="ref" reference="first"}-[\[second\]](#second){reference-type="ref" reference="second"}) with initial data $(\rho_0, n_0)$ as $\gamma\to\infty$. Furthermore, one may choose $\rho_{0,\gamma}$ such that $u_{\gamma,+}(\cdot,0)$ is bounded in $L^{\infty}(\mathbb{R}^d)$ uniformly in $\gamma$.* Next, we record the following simple result for solutions to the heat equation with $L^{\infty}$ source. **Lemma 19**.
*For any $\tau>0$, there exists some $b_0>0$ such that $n$ satisfies the bound $$\exp(b_0\frac{|\partial_t n|+|\Delta n|}{n})-1\in L^1([0,\tau]\times\mathbb{R}^d)$$* *Proof.* Thanks to Lemma [\[lem:n_regularity\]](#lem:n_regularity){reference-type="ref" reference="lem:n_regularity"} we know that $\partial_t n$ and $D^2 n$ are bounded in BMO. Since we also have $$\frac{1}{2}\lVert \partial_t n \rVert_{L^2([0,\tau]\times\mathbb{R}^d)}^2+\lVert \nabla n \rVert_{L^2(\{\tau\}\times\mathbb{R}^d)}^2\leq \lVert \nabla n \rVert_{L^2(\{0\}\times\mathbb{R}^d)}^2+\frac{1}{2}\lVert n \rVert_{L^{\infty}([0,\tau]\times\mathbb{R}^d)}\lVert \rho \rVert_{L^{2}([0,\tau]\times\mathbb{R}^d)},$$ the BMO bound implies the existence of a constant $c>0$ such that $\exp(c|\partial_t n|)-1, \exp(c|\Delta n|)-1 \in L^1([0,\tau]\times\mathbb{R}^d)$. Following the logic of Lemma [Lemma 7](#lem:nutrient_lower_bound){reference-type="ref" reference="lem:nutrient_lower_bound"}, we see that $n$ is uniformly bounded from below on $[0,\tau]$. Hence, there must be an appropriate choice of $b_0$ where the result holds. ◻ **Proposition 20**.
*If $(p_{\gamma}, n_{\gamma})$ is a smooth solution to the system ([\[eq:nutrient_gamma\]](#eq:nutrient_gamma){reference-type="ref" reference="eq:nutrient_gamma"}-[\[eq:pme_pressure\]](#eq:pme_pressure){reference-type="ref" reference="eq:pme_pressure"}) for some $\gamma\in (1,\infty)$, then for any $\tau>0$ there exists $b>0$ that only depends on $\tau$ such that $(b[u_{\gamma}]_+-1)\exp(b[u_{\gamma}]_+)+1$ is uniformly bounded in $L^1([0,\tau]\times\mathbb{R}^d)$ with respect to $\gamma$, where $u_{\gamma} := -\gamma(\Delta p_{\gamma} + n_{\gamma})$.* *Proof.* If we differentiate $\frac{1}{\gamma} u_{\gamma}$ with respect to time, we get $$\partial_t \frac{1}{\gamma} u_{\gamma}=-\partial_t n_{\gamma}-\Delta \partial_t p_{\gamma}= -\partial_t n_{\gamma}-\Delta(|\nabla p_{\gamma}|^2-p_{\gamma}u_{\gamma})$$ Expanding the Laplacian, we see that $$\partial_t \frac{1}{\gamma} u_{\gamma}=-\partial_t n_{\gamma}-2|D^2p_{\gamma}|^2-2\nabla \Delta p_{\gamma}\cdot \nabla p_{\gamma}+2\nabla p_{\gamma}\cdot \nabla u_{\gamma}+p_{\gamma}\Delta u_{\gamma}+u_{\gamma}\Delta p_{\gamma}.$$ Noting that $-\Delta p_{\gamma}=n_{\gamma}+\frac{1}{\gamma} u_{\gamma}$ we can rewrite the previous line as $$\partial_t \frac{1}{\gamma} u_{\gamma}=2\nabla n_{\gamma}\cdot \nabla p_{\gamma}-\partial_t n_{\gamma}-2|D^2p_{\gamma}|^2+2(1+\frac{1}{\gamma})\nabla p_{\gamma}\cdot \nabla u_{\gamma}+p_{\gamma}\Delta u_{\gamma}-n_{\gamma} u_{\gamma}-\frac{1}{\gamma} u_{\gamma}^2.$$ Hence, discarding the nonpositive term $-2|D^2p_{\gamma}|^2$, we can conclude that $$\label{eq:u_ineq} n_{\gamma}u_{\gamma} + \frac{1}{\gamma} (\partial_t u_{\gamma}+u_{\gamma}^2)\leq 2\nabla n_{\gamma}\cdot \nabla p_{\gamma}-\partial_t n_{\gamma}+2(1+\frac{1}{\gamma})\nabla p_{\gamma}\cdot \nabla u_{\gamma}+p_{\gamma}\Delta u_{\gamma}.$$ Now let $f:\mathbb{R}\to\mathbb{R}$ be a $C^2$ convex function such that $f'\geq 0$ everywhere and $f=0$ on $(-\infty,0]$.
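One admissible choice of $f$, with the exponential growth that will be imposed later in the proof, is the following (an illustration on our part; the argument only uses the stated properties):

```latex
% A C^2 convex function with f' \ge 0 everywhere, f \equiv 0 on
% (-\infty, 0], and f(u) ~ e^{bu} as u \to +\infty:
f(u) =
\begin{cases}
e^{bu} - 1 - bu - \tfrac{1}{2} b^2 u^2 , & u > 0 , \\
0 , & u \le 0 ,
\end{cases}
% for which f'(u) = b ( e^{bu} - 1 - bu ) and f''(u) = b^2 ( e^{bu} - 1 )
% for u > 0; both vanish as u \to 0^+, so f is C^2 across the origin,
% and f^*(f'(u)) = u f'(u) - f(u) grows like b u e^{bu}.
```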
If we integrate ([\[eq:u_ineq\]](#eq:u_ineq){reference-type="ref" reference="eq:u_ineq"}) against $f'(u_{\gamma})$ on $[0,\tau]\times \mathbb{R}^d$ we find that $$\begin{gathered} \label{eq:f_u_1} \int_{\mathbb{R}^d\times \{\tau\}} \frac{1}{\gamma} f(u_{\gamma})+\int_{\mathbb{R}^d\times [0,\tau]} n_{\gamma}u_{\gamma}f'(u_{\gamma})+\frac{1}{\gamma}u_{\gamma}^2f'(u_{\gamma})\leq \\ \int_{\mathbb{R}^d\times \{0\}} \frac{1}{\gamma}f(u_{\gamma})+\int_{\mathbb{R}^d\times [0,\tau]} f'(u_{\gamma})\big(2\nabla n_{\gamma}\cdot \nabla p_{\gamma}-\partial_t n_{\gamma}\big)+2(1+\frac{1}{\gamma})\nabla p_{\gamma}\cdot \nabla \big(f(u_{\gamma})\big)+p_{\gamma}f'(u_{\gamma})\Delta u_{\gamma} .\end{gathered}$$ Noting that $\Delta \big(f(u_{\gamma})\big)=f'(u_{\gamma})\Delta u_{\gamma} +f''(u_{\gamma})|\nabla u_{\gamma}|^2$, we can integrate by parts in ([\[eq:f_u\_1\]](#eq:f_u_1){reference-type="ref" reference="eq:f_u_1"}) to obtain $$\begin{gathered} \label{eq:f_u_2} \int_{\mathbb{R}^d\times \{\tau\}} \frac{1}{\gamma} f(u_{\gamma})+\int_{\mathbb{R}^d\times [0,\tau]} n_{\gamma}u_{\gamma}f'(u_{\gamma})+\frac{1}{\gamma}u_{\gamma}^2f'(u_{\gamma})+p_{\gamma}f''(u_{\gamma})|\nabla u_{\gamma}|^2\leq \\ \int_{\mathbb{R}^d\times \{0\}} \frac{1}{\gamma}f(u_{\gamma})+\int_{\mathbb{R}^d\times [0,\tau]} f'(u_{\gamma})\big(2\nabla n_{\gamma}\cdot \nabla p_{\gamma}-\partial_t n_{\gamma}\big)-(1+\frac{2}{\gamma})f(u_{\gamma})\Delta p_{\gamma} \end{gathered}$$ We then integrate by parts in $\nabla n_{\gamma}\cdot \nabla p_{\gamma}$ to get $$\begin{gathered} \label{eq:f_u_3} \int_{\mathbb{R}^d\times \{\tau\}} \frac{1}{\gamma} f(u_{\gamma})+\int_{\mathbb{R}^d\times [0,\tau]} n_{\gamma}u_{\gamma}f'(u_{\gamma})+\frac{1}{\gamma}u_{\gamma}^2f'(u_{\gamma})+p_{\gamma}f''(u_{\gamma})|\nabla u_{\gamma}|^2\leq \\ \int_{\mathbb{R}^d\times \{0\}} \frac{1}{\gamma}f(u_{\gamma})-\int_{\mathbb{R}^d\times [0,\tau]} f'(u_{\gamma})\big(2p_{\gamma}\Delta n_{\gamma}+\partial_t n_{\gamma}\big)+p_{\gamma} 
f''(u_{\gamma})\nabla u_{\gamma}\cdot \nabla n_{\gamma}+(1+\frac{2}{\gamma})f(u_{\gamma})\Delta p_{\gamma} .\end{gathered}$$ Once again using $-\Delta p_{\gamma}=n_{\gamma}+\frac{1}{\gamma} u_{\gamma}$ and applying the quadratic Young's inequality on $\nabla u_{\gamma}\cdot \nabla n_{\gamma}$, we find that $$\begin{gathered} \label{eq:f_u_4} \int_{\mathbb{R}^d\times \{\tau\}} \frac{1}{\gamma} f(u_{\gamma})+\int_{\mathbb{R}^d\times [0,\tau]} n_{\gamma}u_{\gamma}f'(u_{\gamma})+\frac{1}{\gamma}u_{\gamma}^2f'(u_{\gamma})+\frac{1}{2}p_{\gamma}f''(u_{\gamma})|\nabla u_{\gamma}|^2\leq \\ \int_{\mathbb{R}^d\times \{0\}} \frac{1}{\gamma}f(u_{\gamma})+\int_{\mathbb{R}^d\times [0,\tau]} (1+\frac{2}{\gamma})f(u_{\gamma})(n_{\gamma}+\frac{1}{\gamma}u_{\gamma}) -f'(u_{\gamma})\big(2p_{\gamma}\Delta n_{\gamma}+\partial_t n_{\gamma}\big)+\frac{1}{2}p_{\gamma} f''(u_{\gamma})| \nabla n_{\gamma}|^2.\end{gathered}$$ Next, to help compare the left and right hand sides, we divide and multiply by multiples of $n_{\gamma}$ to get $$\begin{gathered} \label{eq:f_u_5} \int_{\mathbb{R}^d\times \{\tau\}} \frac{1}{\gamma} f(u_{\gamma})+\int_{\mathbb{R}^d\times [0,\tau]} n_{\gamma}u_{\gamma}f'(u_{\gamma})+\frac{1}{\gamma}u_{\gamma}^2f'(u_{\gamma})+\frac{1}{2}p_{\gamma}f''(u_{\gamma})|\nabla u_{\gamma}|^2\leq \\ \int_{\mathbb{R}^d\times \{0\}} \frac{1}{\gamma}f(u_{\gamma})+\int_{\mathbb{R}^d\times [0,\tau]} (1+\frac{2}{\gamma})f(u_{\gamma})(n_{\gamma}+\frac{1}{\gamma}u_{\gamma}) -\frac{n_{\gamma}}{2}f'(u_{\gamma})\big(\frac{4p_{\gamma}\Delta n_{\gamma}+2\partial_t n_{\gamma}}{n_{\gamma}}\big)+n_{\gamma} f''(u_{\gamma})\frac{p_{\gamma}| \nabla n_{\gamma}|^2}{2n_{\gamma}}.\end{gathered}$$ Using the identity $uf'(u)-f(u)=f^*(f'(u))$ and applying Young's inequality to $-\frac{n_{\gamma}}{2}f'(u_{\gamma})\big(\frac{4p_{\gamma}\Delta n_{\gamma}+2\partial_t n_{\gamma}}{n_{\gamma}}\big)$, we get $$\begin{gathered} \label{eq:f_u_6} \int_{\mathbb{R}^d\times \{\tau\}} \frac{1}{\gamma}
f(u_{\gamma})+\int_{\mathbb{R}^d\times [0,\tau]} (n_{\gamma}+\frac{1}{\gamma}u_{\gamma})f^*(f'(u_{\gamma}))+\frac{1}{2}p_{\gamma}f''(u_{\gamma})|\nabla u_{\gamma}|^2\leq \\ \int_{\mathbb{R}^d\times \{0\}} \frac{1}{\gamma}f(u_{\gamma})+\int_{\mathbb{R}^d\times [0,\tau]} \frac{2}{\gamma}f(u_{\gamma})(n_{\gamma}+\frac{1}{\gamma}u_{\gamma}) +\frac{n_{\gamma}}{2}f^*(f'(u_{\gamma}))+\frac{n_{\gamma}}{2}f\big(\frac{4p_{\gamma}\Delta n_{\gamma}+2\partial_t n_{\gamma}}{n_{\gamma}}\big)+n_{\gamma} f''(u_{\gamma})\frac{p_{\gamma}| \nabla n_{\gamma}|^2}{2n_{\gamma}},\end{gathered}$$ which is finally in a form that will allow us to estimate. Fix some $b\leq \frac{b_0}{4\max\big(1,\sup_{\gamma}\lVert p_{\gamma} \rVert_{L^{\infty}([0,\tau]\times\mathbb{R}^d)}\big)}$ where $b_0$ is the constant from Lemma [Lemma 19](#lem:dt_n_d2_n){reference-type="ref" reference="lem:dt_n_d2_n"}. If we choose $f$ such that $f$ grows like $\exp(b u)$ at infinity, then $f^*(f'(u))$ grows like $b u\exp(bu)$ at infinity, and hence $f^*(f'(u))$ dominates both $f(u)$ and $f''(u)$ at infinity. Solutions of the PME have finite speed of propagation, uniformly in $\gamma$ [@vazquez_book]; thus, there exists a radius $R=R_{\tau}>0$ sufficiently large such that $(\rho_{\gamma}, p_{\gamma})$ is supported in $B_R$ independently of $\gamma$. Recalling that $u_{\gamma}=-\gamma(\Delta p_{\gamma}+n_{\gamma})$ and $n_{\gamma}\geq 0$, it follows that $f(u_{\gamma}), f'(u_{\gamma}), f''(u_{\gamma})$ are all supported on $B_R$ independently of $\gamma$ as well. Since we are integrating functions with uniformly bounded support and $f\big(\frac{4p_{\gamma}\Delta n_{\gamma}+2\partial_t n_{\gamma}}{n_{\gamma}}\big)$ is bounded in $L^1$ by Lemma [Lemma 19](#lem:dt_n_d2_n){reference-type="ref" reference="lem:dt_n_d2_n"} for our choice of $b$, it follows that the left-hand side of ([\[eq:f_u\_6\]](#eq:f_u_6){reference-type="ref" reference="eq:f_u_6"}) dominates the right-hand side and so the result follows.
◻ It follows essentially immediately that $p$ is a weak supersolution to the appropriate HJB equation. **Corollary 21**. *Given any $L^2_{\mathop{\mathrm{\textup{loc}}}}([0,\infty); L^2(\mathbb{R}^d))$ weak limit point $u_+$ of the family $u_{\gamma, +}$, the pressure $p$ solves, in the sense of weak solutions, $$\label{eq:p_hjb} \partial_t p - |\nabla p|^2 +u_+p\geq 0,$$ where for any $\tau > 0$ there exists $b=b(\tau,d) > 0$ such that $(bu_+-1) e^{bu_+} + 1 \in L^1([0, \tau ]\times{ \mathbb{R}}^d)$.* Although we now know that $p$ is a supersolution to an HJB equation, it is somewhat awkward to obtain the Hopf-Lax formula directly from ([\[eq:p_hjb\]](#eq:p_hjb){reference-type="ref" reference="eq:p_hjb"}), due to the fact that $p$ is not continuous. Instead, we will work towards the Hopf-Lax formula by once again going through the $\gamma$ limit. Here, we will still need to deal with the difficulty that $u_{\gamma, +}$ is not uniformly bounded in $L^{\infty}$. We proceed by adapting an argument from [@cardaliaguet_graber_mfg], which provides a method to obtain Hopf-Lax type formulas for Hamilton-Jacobi equations with unbounded coefficients. A key difference in our setting is that the right-hand side has the specific form $p_{\gamma} u_{\gamma, +}$. This structure allows us to combine their approach with Gronwall-type estimates to obtain much stronger bounds. **Lemma 22**. *Choose a decreasing nonnegative function $\lambda\in L^1([0,t_1-t_0])$.
Given any $\gamma\in (1,\infty)$ and any points $(x_1,t_1), (x_0, t_0)$ with $t_0<t_1$, there exists a constant $C=C(t_1,d)$ such that $$\label{eq:hj_bound} p_{\gamma}(x_0, t_0)\leq e^{\Lambda_{\gamma}(t_1-t_0)}\Big( p_{\gamma}(x_1, t_1)+\frac{|x_1-x_0|^2}{4\int_0^{t_1-t_0}e^{\Lambda_{\gamma}(s)}\, ds}+C(t_1-t_0)^{7/10}e^{-\lambda(t_1-t_0)}\Big)$$* where $b$ is the constant from Proposition [Proposition 20](#prop:ab){reference-type="ref" reference="prop:ab"} and $$\Lambda_{\gamma}(t) := \frac{5}{4b}\int_{0}^t \lambda(a)\, da+\frac{1}{b}\int_{0}^t\log(1+\lVert \exp(b u_{\gamma,+})-1 \rVert_{L^1(\{t_1-a\}\times\mathbb{R}^d)}) \, da .$$ *Proof.* Define $\varphi_{\gamma}(x,t):=p_{\gamma}(x, t_1-t)$. It then follows that $\varphi_{\gamma}$ satisfies the differential inequality $$\partial_t \varphi_{\gamma}(x,t)+|\nabla \varphi_{\gamma}(x,t)|^2\leq \varphi_{\gamma}(x,t) u_{\gamma,+}(x,t_1-t)$$ almost everywhere. Define $$\bar{\lambda}_{\gamma}(s):=\lambda(s)+\frac{1}{b}\log(\lVert \exp(b u_{\gamma,+})-1 \rVert_{L^1(\{t_1-s\}\times\mathbb{R}^d)})$$ and split $$\varphi_{\gamma}(x,t) u_{\gamma,+}(x,t_1-t)\leq \varphi_{\gamma}(x,t)\bar{\lambda}_{\gamma}(t)+ \varphi_{\gamma}(x,t)(u_{\gamma}(x,t_1-t)-\bar{\lambda}_{\gamma}(t))_+.$$ Multiplying both sides of the differential inequality by $e^{-\Lambda_{\gamma}(t)}$ we see that $$\partial_t (e^{-\Lambda_{\gamma}(t)} \varphi_{\gamma})+e^{-\Lambda_{\gamma}(t) }|\nabla \varphi_{\gamma}|^2\leq e^{-\Lambda_{\gamma}(t)}\varphi_{\gamma}(u_{\gamma}-\bar{\lambda}_{\gamma}(t))_+.$$ Letting $q_{\gamma}(t,x):=e^{-\Lambda_{\gamma}(t)}\varphi_{\gamma}(t,x)$, we then have $$\partial_t q_{\gamma}+e^{\Lambda_{\gamma}(t)}|\nabla q_{\gamma}|^2\leq q_{\gamma}(u_{\gamma}-\bar{\lambda}_{\gamma}(t) )_+.$$ Fix any two points $x_1, x_0\in \mathbb{R}^d$. We now introduce a family of paths $x_{\sigma}$ in the spirit of the path optimization argument introduced in [@cardaliaguet_graber_mfg].
For each $\sigma$ in the unit ball $B_1$ let $x_{\sigma}:[0,t_1-t_0]\to \mathbb{R}^d$ be a path such that $x_{\sigma}(0)=x_1$ and $x_{\sigma}(t_1-t_0)=x_0$. Consider $$\begin{gathered} \frac{d}{dt}\big[ q_{\gamma}(x_{\sigma}(t),t)-\frac{1}{4}\int_{0}^t e^{-\Lambda_{\gamma}(s)}|x_{\sigma}'(s)|^2\, ds\big]=\\ \partial_t q_{\gamma}(x_{\sigma}(t),t)+\nabla q_{\gamma}(x_{\sigma}(t),t)\cdot x_{\sigma}'(t)-\frac{e^{-\Lambda_{\gamma}(t)}}{4}|x_{\sigma}'(t)|^2\leq q_{\gamma}(x_{\sigma}(t),t)(u_{\gamma}(x_{\sigma}(t),t_1-t)-\bar{\lambda}_{\gamma}(t) )_+.\end{gathered}$$ Thus, $$q_{\gamma}(x_0, t_1-t_0)\leq q_{\gamma}(x_1,0)+\frac{1}{4}\int_{0}^{t_1-t_0} e^{-\Lambda_{\gamma}(s)}|x_{\sigma}'(s)|^2+ q_{\gamma}(x_{\sigma}(s),s)(u_{\gamma}(x_{\sigma}(s),t_1-s)-\bar{\lambda}_{\gamma}(s))_+\, ds.$$ It then follows that $$\label{eq:hj_1} \varphi_{\gamma}(x_0, t_1-t_0)e^{-\Lambda_{\gamma}(t_1-t_0)}\leq \varphi_{\gamma}(x_1, 0)+\frac{1}{4}\int_{0}^{t_1-t_0} e^{-\Lambda_{\gamma}(s)}\Big(|x_{\sigma}'(s)|^2+ \varphi_{\gamma}(x_{\sigma}(s),s)(u_{\gamma}(x_{\sigma}(s),t_1-s)-\bar{\lambda}_{\gamma}(s))_+\Big)\, ds.$$ We now assume that $x_{\sigma}$ has the form $$x_{\sigma}(s)=\sigma \xi(s)+x_0+z(s)(x_1-x_0),$$ where $\xi:[0,t_1-t_0]\to[0,1]$ satisfies $\xi(0)=\xi(t_1-t_0)=0$ and $z:[0,t_1-t_0]\to [0,1]$ is a decreasing function such that $z(0)=1$ and $z(t_1-t_0)=0$. For notational simplicity, we will write $\alpha_{\gamma}=\varphi_{\gamma}(u_{\gamma}-\bar{\lambda}_{\gamma})_+$.
Averaging ([\[eq:hj_1\]](#eq:hj_1){reference-type="ref" reference="eq:hj_1"}) over $B_1$ we see that $$\varphi_{\gamma}(x_0, t_1-t_0)e^{-\Lambda_{\gamma}(t_1-t_0)}\leq \varphi_{\gamma}(x_1, 0)+\frac{1}{4|B_1|}\int_{0}^{t_1-t_0} \int_{B_1} e^{-\Lambda_{\gamma}(s)}\Big(|\sigma|^2|\xi'(s)|^2+|x_1-x_0|^2z'(s)^2+ \alpha_{\gamma}(s, x_{\sigma}(s))\Big)\, d\sigma\, ds.$$ The optimality condition for $z$ implies that $(z'(s)e^{-\Lambda_{\gamma}(s)})'=0$; therefore $z'(s)=-\frac{e^{\Lambda_{\gamma}(s)}}{\int_{0}^{t_1-t_0} e^{\Lambda_{\gamma}(a)}\, da}$. Thus, making this choice we see that $$\varphi_{\gamma}(x_0, t_1-t_0)e^{-\Lambda_{\gamma}(t_1-t_0)}\leq \varphi_{\gamma}(x_1, 0)+\frac{|x_1-x_0|^2}{4\int_0^{t_1-t_0} e^{\Lambda_{\gamma}(s)}\, ds}+\frac{1}{4|B_1|}\int_{0}^{t_1-t_0} \int_{B_1} e^{-\Lambda_{\gamma}(s)}\Big(|\sigma|^2|\xi'(s)|^2+ \alpha_{\gamma}(s, x_{\sigma}(s))\Big)\, d\sigma\, ds.$$ Changing variables $y=x_{\sigma}(s)$, it follows that $$\frac{1}{|B_1|}\int_{B_1} \alpha_{\gamma}(s,x_{\sigma}(s))d\sigma =\frac{\xi(s)^{-d}}{|B_1|}\int_{B_{\xi(s)}(x_0+z(s)(x_1-x_0))} \alpha_{\gamma}(s,y)dy$$ where $B_{\xi(s)}(x_0+z(s)(x_1-x_0))$ is the ball of radius $\xi(s)$ centered at $x_0+z(s)(x_1-x_0)$. Using Hölder's inequality with exponent $2d$, it follows that the above quantity is bounded above by $\xi(s)^{-1/2}\lVert \alpha_{\gamma} \rVert_{L^{2d}(\{s\}\times\mathbb{R}^d)}.$ Hence, after dropping the good term $e^{-\Lambda_{\gamma}(s)}$ in the last integral we see that $$\varphi_{\gamma}(x_0, t_1-t_0)e^{-\Lambda_{\gamma}(t_1-t_0)}\leq \varphi_{\gamma}(x_1, 0)+\frac{|x_1-x_0|^2}{4\int_0^{t_1-t_0} e^{\Lambda_{\gamma}(s)}\, ds} +\frac{1}{4}\int_{0}^{t_1-t_0} \big(|\xi'(s)|^2+ \xi^{-1/2}(s)\lVert \alpha_{\gamma}(s,\cdot) \rVert_{L^{2d}(\mathbb{R}^d)}\big)\, ds.$$ Fix some $a>0$ and set $$\xi(s):=\begin{cases} as^{3/4} & \textup{if}\;\; 0\leq s<(t_1-t_0)/2,\\ a(t_1-t_0-s)^{3/4} & \textup{if}\;\; (t_1-t_0)/2\leq s\leq t_1-t_0.
\end{cases}$$ Using Hölder's inequality with exponent $2$ on $\int_0^{t_1-t_0} \xi^{-1/2}(s)\lVert \alpha_{\gamma}(s,\cdot) \rVert_{L^{2d}(\mathbb{R}^d)}\, ds$, we see that $$\lVert \xi' \rVert_{L^2([0,t_1-t_0])}^2\leq C (t_1-t_0)^{1/2} a^2, \quad \lVert \xi^{-1/2} \rVert_{L^{2}([0,t_1-t_0])}\leq Ca^{-1/2} (t_1-t_0)^{1/8},$$ thus, $$\varphi_{\gamma}(x_0, t_1-t_0)e^{-\Lambda_{\gamma}(t_1-t_0)}\leq \varphi_{\gamma}(x_1, 0)+\frac{|x_1-x_0|^2}{4\int_0^{t_1-t_0} e^{\Lambda_{\gamma}(s)}\, ds} +C((t_1-t_0)^{1/8}a^{-1/2}\lVert \alpha_{\gamma} \rVert_{L^2([0,t_1-t_0];L^{2d}(\mathbb{R}^d))}+(t_1-t_0)^{1/2}a^2).$$ Optimizing over $a>0$, we obtain $$\label{eq:hj_almost} \varphi_{\gamma}(x_0, t_1-t_0)e^{-\Lambda_{\gamma}(t_1-t_0)}\leq \varphi_{\gamma}(x_1, 0)+\frac{|x_1-x_0|^2}{4\int_0^{t_1-t_0} e^{\Lambda_{\gamma}(s)}\, ds} +C(t_1-t_0)^{\frac{3}{10}}\lVert \alpha_{\gamma} \rVert_{L^2([0,t_1-t_0];L^{2d}(\mathbb{R}^d))}^{4/5}$$ for a potentially different constant $C>0$. Finally, it remains to estimate $\lVert \alpha_{\gamma} \rVert_{L^2([0,t_1-t_0];L^{2d}(\mathbb{R}^d))}$.
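The optimal reparametrization chosen above, with $|z'(s)|$ proportional to $e^{\Lambda_{\gamma}(s)}$, is exactly the Cauchy-Schwarz equality case: any admissible $z$ satisfies $\int_0^{T}e^{-\Lambda}|z'|^2\, ds\geq \big(\int_0^{T}|z'|\, ds\big)^2/\int_0^{T}e^{\Lambda}\, ds=1/\int_0^{T}e^{\Lambda(s)}\, ds$ with $T=t_1-t_0$. A small numerical sanity check (the function `Lam` below is an arbitrary stand-in, not the $\Lambda_{\gamma}$ of the proof):

```python
import numpy as np

T = 1.0
N = 100_000
s = np.linspace(0.0, T, N + 1)
ds = s[1] - s[0]
Lam = 0.5 * s + np.sin(3 * s)      # arbitrary stand-in for Lambda_gamma
w = np.exp(Lam)

def cost(z):
    # discrete version of \int_0^T e^{-Lambda(s)} |z'(s)|^2 ds
    dz = np.diff(z) / ds
    return np.sum(np.exp(-Lam[:-1]) * dz**2) * ds

Z = np.sum(w[:-1]) * ds            # discrete \int_0^T e^{Lambda}
# optimal path: |z'(s)| proportional to e^{Lambda(s)}, normalized to run from 0 to 1
z_opt = np.concatenate(([0.0], np.cumsum(w[:-1]) * ds)) / Z
z_lin = s / T                      # competitor: constant-speed path

assert abs(cost(z_opt) - 1.0 / Z) < 1e-8   # attains the Cauchy-Schwarz bound
assert cost(z_lin) > cost(z_opt)           # any other admissible path costs more
```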
Recalling that $\alpha_{\gamma}=\varphi_{\gamma} (u_{\gamma}-\bar{\lambda}_{\gamma})_+$, we may write $$\lVert \alpha_{\gamma} \rVert_{L^{2d}(\{s\}\times \mathbb{R}^d)}^{2d}\leq \lVert \varphi_{\gamma} \rVert_{L^{\infty}([t_0,t_1]\times\mathbb{R}^d)}^2 \int_{0}^{\infty} 2dv^{2d-1}|\{x\in \mathbb{R}^d: u_{\gamma,+}(t_1-s,x)>v+\bar{\lambda}_{\gamma}(s)\}|\, dv.$$ By Chebyshev's inequality, for any strictly increasing function $f:\mathbb{R}\to\mathbb{R}$, the right-hand side is bounded above by $$\lVert p_{\gamma} \rVert_{L^{\infty}([t_0,t_1]\times\mathbb{R}^d)}^2\int_{0}^{\infty} 2dv^{2d-1}\frac{\lVert f(u_{\gamma,+})-f(0) \rVert_{L^1(\{t_1-s\}\times\mathbb{R}^d)}}{f(\bar{\lambda}_{\gamma}(s)+v)-f(0)}\, dv.$$ If we choose $f(a)=\exp(b a)-1$, then we see that $$\lVert \alpha_{\gamma} \rVert_{L^{2d}(\{s\}\times \mathbb{R}^d)}\leq C e^{-b\bar{\lambda}_{\gamma}(s)}\lVert \exp(b u_{\gamma,+})-1 \rVert_{L^1(\{t_1-s\}\times \mathbb{R}^d)}=Ce^{-b\lambda(s)},$$ where one should note carefully that now $\bar{\lambda}_{\gamma}$ has been replaced by $\lambda$ in the last right-hand term. Thus, we have the estimate $$\lVert \alpha_{\gamma} \rVert_{L^2([0,t_1-t_0];L^{2d}(\mathbb{R}^d))}^2\lesssim \int_{0}^{t_1-t_0}e^{-\frac{5}{2}\lambda(s)}\, ds\leq (t_1-t_0)e^{-\frac{5}{2}\lambda(t_1-t_0)},$$ hence, $$\lVert \alpha_{\gamma} \rVert_{L^2([0,t_1-t_0];L^{2d}(\mathbb{R}^d))}^{4/5}\lesssim (t_1-t_0)^{2/5}e^{-\lambda(t_1-t_0)}.$$ Combining our work, we now have $$\label{eq:hj_almost_2} \varphi_{\gamma}(x_0, t_1-t_0)e^{-\Lambda_{\gamma}(t_1-t_0)}\leq \varphi_{\gamma}(x_1, 0)+\frac{|x_1-x_0|^2}{4\int_0^{t_1-t_0} e^{\Lambda_{\gamma}(s)}\, ds} +C(t_1-t_0)^{7/10}e^{-\lambda(t_1-t_0)},$$ for some possibly new constant $C$. The result follows after replacing $\varphi_{\gamma}$ with $p_{\gamma}$ and multiplying both sides by $e^{\Lambda_{\gamma}(t_1-t_0)}$. ◻ Before we can show that the Hopf-Lax formula also holds for the limiting pressure, we first need a lemma that gives us a pointwise well-defined representative of our weak solution $p$.
The argument is a simple adaptation of a result from [@pqm]. **Lemma 23**. *Suppose that $(\rho, p, n)$ is a weak solution to ([\[first\]](#first){reference-type="ref" reference="first"}-[\[second\]](#second){reference-type="ref" reference="second"}). Then $p$ can be redefined on a set of measure zero so that $$\label{eq:p_pointwise} p(x,t)=\lim_{r\to 0} \frac{1}{r^2|B_r|}\int_{B_r(x)}\int_{t}^{t+r^2} p(y,s)\, ds\, dy$$ for all $(x,t)$. With this definition, $p$ is spacetime upper semicontinuous and for all $x\in \mathbb{R}^d$ the mapping $t\mapsto p(x,t)$ is continuous from the right.* *Proof.* From our control on $u_{\gamma}$ and the relation $\Delta p_{\gamma}=-n_{\gamma}-\frac{1}{\gamma}u_{\gamma}$, it follows that after taking limits, we have $$\Delta p\geq -n\geq -n_0$$ in the sense of spacetime distributions. Therefore, for any $\epsilon>0$, $$\Delta \Big(\frac{1}{\epsilon^2}\int_{0}^{\epsilon^2} p(\cdot,t+s)\, ds\Big) \geq -n_0$$ in the sense of space distributions. Hence, the mean value property for subharmonic functions implies that for all $x\in \mathbb{R}^d$ and $t>0$ the function $$\phi(r,\epsilon):=\frac{1}{\epsilon^2|B_r|}\int_{B_r(x)}\int_{0}^{\epsilon^2} p(y,t+s)+\frac{n_0}{2d}|y-x|^2\, ds\, dy$$ is non-decreasing with respect to $r$. We also note that for $0\leq r'\leq r$ we have $$\phi(r,r)-\phi(r,r')=\frac{1}{r^2|B_r|}\int_{B_r(x)}\int_{0}^{r^2} p(y,t+s)-p(y,t+(\frac{r'}{r})^2s ) \,ds\, dy$$ $$\geq -\frac{1}{r^2|B_r|}\int_{B_r(x)}\int_{0}^{r^2} \int_{(\frac{r'}{r})^2s}^s [\partial_t p(y,t+a)]_-\, da\, ds\, dy.$$ From Corollary [Corollary 21](#cor:p_hjb){reference-type="ref" reference="cor:p_hjb"}, it follows that $[\partial_t p]_-$ is bounded in $L^q(\mathbb{R}^d\times [0,\tau])$ for any $q\in [1,\infty)$. Therefore, $$\phi(r,r)-\phi(r,r')\geq -|B_r|^{-1/q}(r^2-r'^2)^{1-1/q}\lVert [\partial_t p]_- \rVert_{L^q([0,\tau]\times\mathbb{R}^d)}\geq -Cr^{1-(d+1)/q} (r-r')^{1-1/q}$$ for some constant $C>0$.
By choosing $q>d+1$, we can conclude that there exists a Hölder continuous function $g$ such that $r\mapsto \phi(r,r)+g(r)$ is nondecreasing and $g(0)=0$. As a result, $\lim_{r\to 0^+} \phi(r,r)$ must exist for all $(x,t)$. Hence, ([\[eq:p_pointwise\]](#eq:p_pointwise){reference-type="ref" reference="eq:p_pointwise"}) is well defined everywhere. The Lebesgue differentiation theorem also implies that our redefinition only changes $p$ on a set of measure zero. Finally, to see that $p$ is upper semicontinuous, we note that $\lim_{r\to 0} \phi(r,r)=\lim_{r\to 0} \big(\phi(r,r)+g(r)\big)=\inf_{r>0} \big(\phi(r,r)+g(r)\big)$. Thus, after absorbing the $|y-x|^2$ correction into $g$, we may write $$p(x,t)=\inf_{r>0 }\Big( g(r)+ \frac{1}{r^2|B_r|}\int_{B_r(x)}\int_{t}^{t+r^2} p(y,s)\, ds\, dy\Big).$$ The infimum of a family of continuous functions is always upper semicontinuous; hence, $p$ is upper semicontinuous. ◻ At last we obtain the main result of this section, the Hopf-Lax formula for our limit pressure $p$. **Proposition 24**. *Given any points $(x_1,t_1), (x_0, t_0)$ with $t_0<t_1$ and a decreasing function $\lambda\in L^1([0,t_1-t_0])$, there exists a constant $C=C(t_1,d)$ such that $$\label{eq:hj_bound_limit} p(x_0, t_0)\leq e^{\Lambda(t_1-t_0)}\Big( p(x_1, t_1)+\frac{|x_1-x_0|^2}{4\int_0^{t_1-t_0}e^{\Lambda(s)}\, ds}+C(t_1-t_0)^{7/10} e^{-\lambda(t_1-t_0)}\Big)$$* where $b$ is the constant from Proposition [Proposition 20](#prop:ab){reference-type="ref" reference="prop:ab"} and $$\Lambda(t):=\frac{5}{4b}\int_0^t \lambda(s)\, ds+\frac{t}{b}\log(1+\frac{C}{t}).$$ *Proof.* Using the formula from Lemma [Lemma 23](#lem:p_pointwise){reference-type="ref" reference="lem:p_pointwise"}, we have $$p(x_0,t_0)=\lim_{r\to 0^+} \frac{1}{r^2|B_r|}\int_{B_r(x_0)}\int_{t_0}^{t_0+r^2} p(y,s)\, ds\, dy.$$ Choose a point $x_2\in \mathbb{R}^d$ and $t_2>t_0$ such that $p_{\gamma}(x_2,t_2)$ converges to $p(x_2,t_2)$ along some subsequence $\gamma_k$.
Using the $L^2_tH^1_x$ strong convergence of $p_{\gamma}$ to $p$ and then applying Lemma [Lemma 22](#lem:hj_estimate){reference-type="ref" reference="lem:hj_estimate"}, we have $$p(x_0,t_0)=\lim_{r\to 0^+}\lim_{k\to \infty} \frac{1}{r^2|B_r|}\int_{B_r(x_0)}\int_{t_0}^{t_0+r^2} p_{\gamma_k}(y,s)\, ds\, dy\leq$$ $$\lim_{r\to 0^+}\lim_{k\to \infty} \frac{1}{r^2|B_r|}\int_{B_r(x_0)}\int_{t_0}^{t_0+r^2} e^{\Lambda_{\gamma_k}(t_2-s)}\Big( p_{\gamma_k}(x_2, t_2)+\frac{|x_2-x_0|^2}{4\int_0^{t_2-s}e^{\Lambda_{\gamma_k}(a)}\, da}+C(t_2-t_0)^{7/10} e^{-\lambda(t_2-s)}\Big) \, ds\, dy.$$ Recall that $$\Lambda_{\gamma}(t) := \frac{5}{4b}\int_{0}^t \lambda(a)\, da+\frac{1}{b}\int_{0}^t\log(1+\lVert \exp(b u_{\gamma,+})-1 \rVert_{L^1(\{t_1-a\}\times\mathbb{R}^d)}) \, da .$$ Applying Jensen's inequality, we have the bound $$\Lambda_{\gamma}(t) \leq \frac{5}{4b}\int_{0}^t \lambda(a)\, da+\frac{t}{b}\log(1+\frac{1}{t}\lVert \exp(b u_{\gamma,+})-1 \rVert_{L^1([t_1-t, t_1]\times\mathbb{R}^d)}) .$$ Hence, we can find a potentially new constant $C=C(\tau,d)>0$ such that $$\Lambda_{\gamma}(t) \leq \frac{5}{4b}\int_{0}^t \lambda(a)\, da+\frac{t}{b}\log(1+\frac{C}{t})=\Lambda(t)$$ for all $\gamma$. Therefore, $$p(x_0,t_0)\leq e^{\Lambda(t_2-t_0)}\Big( p(x_2, t_2)+\frac{|x_2-x_0|^2}{4\int_0^{t_2-t_0}e^{\Lambda(s)}\, ds}+C(t_2-t_0)^{7/10} e^{-\lambda(t_2-t_0)}\Big).$$ Since $p_{\gamma}$ converges pointwise almost everywhere to $p$ along appropriate subsequences, it follows that $$p(x_0,t_0)\leq e^{\Lambda(t_2-t_0)}\Big( p(x_2, t_2)+\frac{|x_2-x_0|^2}{4\int_0^{t_2-t_0}e^{\Lambda(a)}\, da}+C(t_2-t_0)^{7/10} e^{-\lambda(t_2-t_0)}\Big)$$ for a dense set of $(x_2, t_2)$ with $t_2>t_0$. The result now follows from the upper semicontinuity of $p$.
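The Jensen step above uses only the concavity of $x\mapsto \log(1+x)$, which gives $\frac{1}{t}\int_0^t\log(1+g(a))\, da\leq \log\big(1+\frac{1}{t}\int_0^t g(a)\, da\big)$ for any nonnegative $g$. A throwaway numerical check (the integrand below is an arbitrary nonnegative sample):

```python
import numpy as np

t = 2.0
N = 200_000
a = np.linspace(0.0, t, N, endpoint=False) + t / (2 * N)   # midpoint grid on [0, t]
g = np.abs(np.sin(5 * a)) + a**2                           # arbitrary nonnegative integrand

lhs = np.mean(np.log(1.0 + g))     # (1/t) * int_0^t log(1 + g(a)) da
rhs = np.log(1.0 + np.mean(g))     # log(1 + (1/t) * int_0^t g(a) da)
assert lhs <= rhs                  # Jensen, by concavity of log(1 + x)
```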
◻ # Hölder continuity of the Hitting time {#sec:barrier} We are now going to construct a radial supersolution which will give an upper bound on the rate of expansion of the tumor, and thus will yield a lower bound on the arrival times. Given a point of interest $x_0\in \mathbb{R}^d$, our supersolution will be defined on the time-dependent annulus $$A(t):=\{t\}\times \{ x: r(t) \leq |x-x_0| \leq m r(t)\} , \quad A=\bigcup_{t\in [0,\epsilon]} A(t),$$ for some $m>1$ and a function $r(t)\geq 0$ that we will define shortly. Given some starting time $t_0$, for each $t, r\geq 0$ we define $$\bar{p}(t,r):=\sup_{x\in B_{r}(x_0)} p(t+t_0,x),$$ where the sup is well defined since $p$ is upper semicontinuous in space. Now we will construct our supersolution $\psi(t,x)$ by solving $$\begin{cases} -\Delta \psi(t,x)=\bar{n}_0& \textup{if} \; r(t)<|x-x_0|<mr(t),\\ \psi(t,x)=0 & \textup{if} \;|x-x_0|\leq r(t),\\ \psi(t,x)=\bar{p}(t,|x-x_0|) & \textup{if} \; |x-x_0|\geq mr(t).\\ \end{cases}$$ On $A(t)$, the equation admits the explicit radial solution $$\label{eq:supersolution_psi} \psi(t,x)=h(t)\Gamma_d(|x-x_0|)-\frac{\bar{n}_0}{2d}|x-x_0|^2+g(t),$$ where $\Gamma_d$ is the fundamental solution of the Laplace equation in dimension $d$, normalized so that $\Gamma_d'(r)=r^{1-d}$, $$\label{eq:super_solution_h} h(t):=\frac{\bar{p}(t,mr(t))+(2d)^{-1}\bar{n}_0(m^2-1)r(t)^2}{\Gamma_d(mr(t))-\Gamma_d(r(t))},$$ and $$g(t):=\frac{\bar{n}_0}{2d}r(t)^2-h(t)\Gamma_d(r(t)).$$ Finally, we define $r(t)$ by choosing some initial data $r(0)$ and then solving the ODE $$\label{eq:radius_ode} r'(t)=-|\nabla \psi(t,y)|,$$ where the right-hand side is evaluated at any point $y$ such that $|y-x_0|=r(t)$. We now show that $\psi$ is indeed a supersolution as long as comparison holds at the initial time. Due to the lack of regularity for the pressure variable, we establish comparison using the time-integrated versions of $\psi$ and $p$.
This creates an annoying issue where it is difficult to establish that the boundary data stays ordered as the annulus moves. To avoid this problem, we establish comparison by first going through a sequence of supersolutions $\psi_k$, where the $\psi_k$ are defined on modified annuli whose outer radii are taken to be piecewise constant in time. **Lemma 25**. *Let $\mu(t,x)$ be the characteristic function of the set $\{x\in \mathbb{R}^d: |x-x_0|\geq r(t)\}$. If $\rho(t_0,x)\leq \mu(0,x)$ for almost every $x\in \mathbb{R}^d$, then $p(t_0+t,x)\leq \psi(t,x)$ for almost every $x\in \mathbb{R}^d$ and almost every time $t\geq 0$.* *Proof.* As we noted above, we will first prove the comparison for a modified sequence of supersolutions $\psi_k$. The $\psi_k$ will be defined in precisely the same way as $\psi$, except that we will modify the construction of the moving annulus. Hence, given radii $r_k(t)< R_k(t)$, we define $\psi_k$ by solving $$\begin{cases} -\Delta \psi_k(t,x)=\bar{n}_0& \textup{if} \; r_k(t)<|x-x_0|<R_k(t),\\ \psi_k(t,x)=0 & \textup{if} \;|x-x_0|\leq r_k(t),\\ \psi_k(t,x)=\bar{p}(t,|x-x_0|) & \textup{if} \; |x-x_0|\geq R_k(t).\\ \end{cases}$$ The inner radius $r_k(t)$ is defined as before via the ODE $r_k'(t)=-|\nabla \psi_k(t,y)|$, where $y$ is any point satisfying $|y-x_0|=r_k(t)$.
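The explicit radial profile used for $\psi$ (and below for $\psi_k$) can be spot-checked numerically: with $g=\frac{\bar{n}_0}{2d}r^2-h\Gamma_d(r)$ chosen so that the profile vanishes on the inner sphere, it matches both boundary conditions and satisfies $-\Delta \psi=\bar{n}_0$ on the annulus. A sketch in $d=3$ at a frozen time (all numerical values below are arbitrary):

```python
# snapshot at a fixed time: d = 3, inner radius r, outer boundary at m*r
d, m, r, pbar, n0 = 3, 2.0, 1.0, 0.7, 0.3        # arbitrary sample values
Gamma = lambda s: -1.0 / s                       # Gamma_3, normalized so Gamma_3'(s) = s**(1 - d)

h = (pbar + n0 * (m**2 - 1) * r**2 / (2 * d)) / (Gamma(m * r) - Gamma(r))
g = n0 * r**2 / (2 * d) - h * Gamma(r)           # forces psi = 0 at |x - x0| = r
psi = lambda s: h * Gamma(s) - n0 * s**2 / (2 * d) + g

assert abs(psi(r)) < 1e-12                       # inner boundary condition
assert abs(psi(m * r) - pbar) < 1e-12            # outer boundary condition

# radial Laplacian psi'' + (d-1)/s * psi' should equal -n0 on the annulus
eps, s0 = 1e-4, 1.4
lap = (psi(s0 + eps) - 2 * psi(s0) + psi(s0 - eps)) / eps**2 \
    + (d - 1) / s0 * (psi(s0 + eps) - psi(s0 - eps)) / (2 * eps)
assert abs(lap + n0) < 1e-5
```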
We then define $R_k$ by setting $$R_k(t):=mr_k(t_{k,j}), \quad \textup{if} \; t\in [t_{k,j}, t_{k, j+1}),$$ where we inductively define the points $t_{k,j}$ by setting $t_{k,0}=0$ and then taking $$t_{k,j+1}:=\inf \{ t\geq t_{k,j}: r_k(t)<(1-\frac{1}{k+1})r_k(t_{k,j})\}.$$ As before, on the annulus $r_k(t)\leq |x-x_0|\leq R_k(t)$, the $\psi_k$ will admit the explicit radial solutions $$\label{eq:supersolution_psi_k} \psi_k(t,x)=h_k(t)\Gamma_d(|x-x_0|)-\frac{\bar{n}_0}{2d}|x-x_0|^2+g_k(t),$$ where $$\label{eq:super_solution_h_k} h_k(t):=\frac{\bar{p}(t,R_k(t))+(2d)^{-1}\bar{n}_0(R_k(t)^2-r_k(t)^2)}{\Gamma_d(R_k(t))-\Gamma_d(r_k(t))},$$ and $$\label{eq:super_solution_g_k} g_k(t):=\frac{\bar{n}_0}{2d}r_k(t)^2-h_k(t)\Gamma_d(r_k(t)).$$ Let $\Psi_k(t,x)=\int_0^t \psi_k(s,x)\, ds$. Since $\psi_k$ is clearly Lipschitz in space on $|x-x_0|\leq R_k(t)$, it follows that $\Psi_k$ is Lipschitz in space on $|x-x_0|\leq R_k(t)$ and $$\nabla \Psi_k(t,x)=\int_0^t \nabla \psi_k(s,x)\, ds$$ almost everywhere on $|x-x_0|\leq R_k(t)$. Define $\tilde{t}_k(r)$ to be the inverse function of $r_k(t)$. From the definition of $\psi_k$, it follows that $$\nabla \Psi_k(t,x)=\int_{\min(t, \tilde{t}_k(|x-x_0|))}^t \nabla \psi_k(s,x)\, ds.$$ Now if $x$ is a point such that $|x-x_0|<R_k(t)$ and $|x-x_0|<r(0)$, then for each fixed $s\in (\tilde{t}_k(|x-x_0|), t]$, there exists a neighborhood of $x$ on which $\psi_k(s,\cdot)$ is differentiable and $-\Delta \psi_k(s,x)=\bar{n}_0$.
Thus, it follows that $$-\Delta \Psi_k(t,x)=\mathop{\mathrm{sgn}}_+\big(t-\tilde{t}_k(|x-x_0|)\big)\tilde{t}_k'(|x-x_0|)|\nabla \psi_k(\tilde{t}_k(|x-x_0|),x)|+ \int_{\min(t, \tilde{t}_k(|x-x_0|))}^t \bar{n}_0\, ds.$$ Since $r'_k(\tilde{t}_k(|x-x_0|))=-|\nabla \psi_k(\tilde{t}_k(|x-x_0|), x)|$ and $\tilde{t}_k(r)$ is the inverse of $r_k(t)$, we see that $$-\Delta \Psi_k(t,x)=-\mathop{\mathrm{sgn}}_+\big(t-\tilde{t}_k(|x-x_0|)\big)+ (t- \tilde{t}_k(|x-x_0|))_+\bar{n}_0.$$ On the other hand, if $x$ is a point such that $r(0)<|x-x_0|<R_k(t)$, then $$-\Delta \Psi_k(t,x)=t \bar{n}_0.$$ Let $\mu_k(t,x)$ be the characteristic function of the set $\{(t,x): |x-x_0|\geq r_k(t)\}$ and note that $\mu_k(t,x)=\mathop{\mathrm{sgn}}_+(t-\tilde{t}_k(|x-x_0|))$ and $\int_0^t \mu_k(s,x)\, ds=(t- \tilde{t}_k(|x-x_0|))_+.$ Combining our work from above, we can conclude that for almost every $x$ satisfying $|x-x_0|<R_k(t)$ we have $$-\Delta \Psi_k(t,x)=\mu_k(0,x)-\mu_k(t,x)+\int_0^t \mu_k(s,x) \bar{n}_0\, ds,$$ and for almost all $x\in \mathbb{R}^d$ we have $\Psi_k(t,x)(1-\mu_k(t,x))=0,$ as well as $\psi_k(1-\mu_k(t,x))=0$. Now let us define the time shifted variables $\tilde{w}(t,x):=\int_0^t p(s+t_0,x)\, ds$, $\tilde{\rho}(t,x):=\rho(t_0+t,x)$, and $\tilde{n}(t,x):=n(t_0+t,x)$. It then follows that $-\Delta \tilde{w}(t,x)=\tilde{\rho}(0,x)-\tilde{\rho}(t,x)+\int_{0}^t \tilde{\rho}(s,x) \tilde{n}(s,x)\, ds$ and $(1-\tilde{\rho}(t,x))\tilde{w}(t,x)=0$ almost everywhere. For any time $t\in [0,t_{k,1})$, the definition of $\psi_k$ guarantees that $\Psi_k(t,x)\geq \tilde{w}(t,x)$ for all $x$ satisfying $|x-x_0|=R_k(t)=mr(0)$.
Hence, for any $t\in [0, t_{k,1})$ and any increasing $C^1$ function $\eta:\mathbb{R}\to\mathbb{R}$ such that $\eta(a)=0$ if $a\leq 0$, we have $$\int_{\{|x-x_0|\leq R_k(0)\}} (\tilde{\rho}-\mu_k)\eta(\tilde{w}-\Psi_k)+\eta'(\tilde{w}-\Psi_k)|\nabla (\tilde{w}-\Psi_k)|^2\leq \int_{\{|x-x_0|\leq R_k(0)\}}\int_0^t \eta(\tilde{w}-\Psi_k)(\tilde{\rho}\tilde{n}-\mu_k\bar{n}_0).$$ Letting $\eta$ approach $\mathop{\mathrm{sgn}}_+$ and using the fact that $\mathop{\mathrm{sgn}}_+(\tilde{w}-\Psi_k)=\mathop{\mathrm{sgn}}_+(\tilde{\rho}-\mu_k)$, we can conclude that $$\int_{\{|x-x_0|\leq R_k(0)\}} (\tilde{\rho}-\mu_k)_+\leq \int_{\{|x-x_0|\leq R_k(0)\}}\bar{n}_0\int_0^t(\tilde{\rho}-\mu_k)_+.$$ Hence, Gronwall's inequality now implies that $\tilde{\rho}(t,x)\leq \mu_k(t,x)$ for all $t\in [0,t_{k,1})$ and almost all $x\in \mathbb{R}^d$ (recall that it is immediate that $\tilde{\rho}\leq \mu_k$ on $|x-x_0|\geq R_k(0)$ from the definition of $\mu_k$). The masses of the differences $\mu_k(t,x)-\mu_k(0,x)$ and $\tilde{\rho}(t,x)-\tilde{\rho}(0,x)$ are continuous functions of time; therefore, the ordering $\tilde{\rho}\leq \mu_k$ must hold at time $t_{k,1}$. This allows us to run the above argument on $[t_{k,1}, t_{k,2})$. Iterating, we conclude that the ordering $\tilde{\rho}\leq \mu_k$ must hold for all times $t$ when $r_k(t)>0$. Now we wish to argue that $\liminf_{k\to\infty} r_k(t)\geq r(t)$. Let $$t_*=\inf\{t>0: \liminf_{k\to\infty} r_k(t)<r(t)\},$$ and note that $\liminf_{k\to\infty} r_k(t_*)=r(t_*)$. Using the explicit formulas ([\[eq:supersolution_psi\]](#eq:supersolution_psi){reference-type="ref" reference="eq:supersolution_psi"}) and ([\[eq:supersolution_psi_k\]](#eq:supersolution_psi_k){reference-type="ref" reference="eq:supersolution_psi_k"}), as well as the upper semicontinuity of $r\mapsto \bar{p}(t,r)$, it follows that $$\liminf_{k\to\infty} r_k'(t_*)\geq r'(t_*)$$ whenever $r(t_*)>0$. Hence, $r(t)\leq \liminf_{k\to\infty} r_k(t)$ for all times where $r(t)>0$.
This implies that $\tilde{\rho}(t,x)\leq \mu(t,x)$ for all $t$ and almost all $x$. Finally, we note that the ordering $\tilde{\rho}(t,x)\leq \mu(t,x)$ implies that for almost every time $t$, $$(p-\psi)_+(\Delta p+n)=0, \quad (p-\psi)_+(\Delta \psi+\bar{n}_0)=0$$ in the sense of distributions. Thus, for any $T>0$, $$\int_{Q_T} |\nabla (p-\psi)_+|^2=\int_{Q_T} (p-\psi)_+(n-\bar{n}_0),$$ which is only possible if $(p-\psi)_+=0$ almost everywhere. ◻ We can now use this barrier supersolution to get bounds on the Hölder continuity of the hitting time. The key is to use our Hopf-Lax estimate from [Proposition 24](#prop:hj_estimate){reference-type="ref" reference="prop:hj_estimate"} to ensure that the supersolution arrives at the point of interest at the correct time. **Theorem 26**. *$T$ is locally Hölder continuous on the set $\{x\in \mathbb{R}^d: 0<T(x)<\infty\}$. In particular, for any $x_1\in \mathbb{R}^d$ such that $T(x_1)\in (0,\infty)$, we have $$\label{eq:holder_bound} \sup_{y\in B_R(x_1)} T(x_1)-T(y)\lesssim R^{\alpha_d}$$ for all $R>0$ sufficiently small, where $$\label{eq:holder_exponent} \alpha_d:=\begin{cases} \frac{2}{e} & \textup{if}\; d=2,\\ 2(\frac{2}{d})^{\frac{d}{d-2}} & \textup{if}\; d>2.\\ \end{cases}$$* *Proof.* Let $\epsilon>0$ be a small value to be chosen later. Let $$\delta=\delta(\epsilon):=\inf\{R>0: \sup_{y\in B_R(x_1)} T(x_1)- T(y)\geq \epsilon\}.$$ Since $T$ is continuous at $x_1$, it follows that $\lim_{\epsilon\to 0}\delta(\epsilon)=0$. Let $t_0=T(x_1)-\epsilon$ and $t_1=T(x_1)$. Thanks to the supersolution that we have constructed above, we know that $$\inf_{y\in B_{r(t)}(x_1)} T(y)\geq t_0+t,$$ which implies $$\sup_{y\in B_{r(t)}(x_1)} T(x_1)-T(y)\leq (t_1-t_0-t).$$ Hence, if we can provide lower bounds on $r(t)$ in terms of $t$, we can get a Hölder estimate for $T$ at $x_1$. In particular, a bound of the form $(t_1-t_0-t)^{1/\alpha}\lesssim r(t)$ will imply that $\sup_{y\in B_R(x_1)} T(x_1)-T(y)\lesssim R^{\alpha}$.
To bound $r(t)$ from below, we must consider the ODE ([\[eq:radius_ode\]](#eq:radius_ode){reference-type="ref" reference="eq:radius_ode"}), which can be simplified to $$r'(t)=-|h(t)||\Gamma_d'(r(t))|-\frac{\bar{n}_0}{d}r(t).$$ Noting that in any dimension there exists a function $\xi_d(m)$ such that $\frac{|\Gamma_d'(r(t))|}{|\Gamma_d(mr(t))-\Gamma_d(r(t))|}=r(t)^{-1}\xi_d(m)$, it follows from the structure of $h$ and the ODE that there exists some constant $K>0$ such that $$\label{eq:ode_1} r'(t)+Kr(t)\geq -\frac{\bar{p}(t, m r(t))\xi_d(m)}{r(t)}.$$ Now we want to estimate $\bar{p}(t, m r(t))$. To do so, we will apply the bounds from Lemma [Lemma 22](#lem:hj_estimate){reference-type="ref" reference="lem:hj_estimate"}, choosing to evaluate $p$ at $(x_1, t_1)$ (note that $p(x_1, t_1)=0$, since $t_1=T(x_1)$ is precisely the arrival time) and leaving the choice of $\lambda\in L^1([0,t_1-t_0])$ until later. With these choices, we see that $$\begin{array}{lll} \bar{p}(t,mr(t))&=&\sup_{x\in B_{mr(t)}(x_1)} p(t+t_0,x)\\ &\leq& \sup_{x\in B_{mr(t)}(x_1)} H(t)|x-x_1|^2 +F(t)=m^2r(t)^2 H(t)+F(t),\label{eq:p_upper_bound} \end{array}$$ where we have defined $$\label{eq:H_and_F} H(t):=e^{\Lambda(t_1-t_0-t)}(4\int_{0}^{t_1-t_0-t} e^{\Lambda(s)}\, ds)^{-1}, \quad F(t):=C(t_1-t_0-t)^{7/10}e^{-\lambda(t_1-t_0-t)+\Lambda(t_1-t_0-t)}$$ for notational convenience. Returning to equation ([\[eq:ode_1\]](#eq:ode_1){reference-type="ref" reference="eq:ode_1"}) and applying the upper bound on $\bar{p}$ obtained above, we have $$r'(t)+r(t)(K+m^2\xi_d(m) H(t))\geq -\xi_d(m)\frac{F(t)}{r(t)}.$$ Multiplying both sides by $2r(t)$ and defining $z(t)=r(t)^2$, we get $$\label{eq:z_ode_1} z'(t)+z(t)(2K+2m^2\xi_d(m) H(t))\geq -\xi_d(m)F(t).$$ Now we choose $m$ by optimizing $m^2\xi_d(m)$. Define $$\xi_d:=\inf_{m>1}\frac{m^2}{2}\xi_d(m).$$ One can then check that $\xi_d=(\frac{d}{2})^{\frac{d}{d-2}}$ and $\mathop{\mathrm{argmin}}\frac{m^2}{2}\xi_d(m)=(\frac{d}{2})^{\frac{1}{d-2}}$ (where these should be understood in a limiting sense when $d=2$).
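This optimization can be confirmed numerically: the normalization $\Gamma_d'(r)=r^{1-d}$ produces the explicit form $\xi_d(m)=\frac{d-2}{1-m^{2-d}}$ for $d>2$, and $\xi_2(m)=1/\log m$. The check below also verifies the $d\to 2$ limiting value $\xi_2=e$, so that $\alpha_2=2/e$ in the theorem (the grid bounds are arbitrary):

```python
import numpy as np

def xi_d_of_m(d, m):
    # xi_d(m) = r |Gamma_d'(r)| / |Gamma_d(m r) - Gamma_d(r)|, with Gamma_d'(r) = r**(1 - d)
    if d == 2:
        return 1.0 / np.log(m)
    return (d - 2) / (1.0 - m**(2.0 - d))

m = np.linspace(1.0 + 1e-6, 20.0, 2_000_000)
for d in [3, 4, 5, 10]:
    vals = 0.5 * m**2 * xi_d_of_m(d, m)
    assert abs(vals.min() - (d / 2) ** (d / (d - 2))) < 1e-4        # xi_d
    assert abs(m[vals.argmin()] - (d / 2) ** (1 / (d - 2))) < 1e-2  # argmin

# limiting case d -> 2: (d/2)^(d/(d-2)) -> e, matching alpha_2 = 2/e
d = 2.0001
assert abs((d / 2) ** (d / (d - 2)) - np.e) < 1e-3
```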
Thus, we have $$\label{eq:z_ode_2} z'(t)+z(t)(2K+4\xi_d H(t))\geq -dF(t).$$ Let $\bar{H}(t)=\int_0^t 4H(s)\, ds$. Multiplying both sides of ([\[eq:z_ode_2\]](#eq:z_ode_2){reference-type="ref" reference="eq:z_ode_2"}) by $\exp(2Kt+\xi_d\bar{H}(t))$ and integrating in time, we can conclude that $$\label{eq:z_ode_3} z(t)e^{2Kt+\xi_d\bar{H}(t)}\geq z(0) -d\int_0^t F(s)e^{2Ks+\xi_d \bar{H}(s)}\, ds.$$ Now we need to provide upper bounds on $\exp(\xi_d\bar{H}(t))$. To do so, we will need to make a choice for $\lambda$. Fix some $\theta>0$ and set $$\lambda(s)=\theta+s^{-1/2}.$$ We then have $$\Lambda(t)=\frac{5}{4b}(\theta t+2t^{1/2})+\frac{t}{b}\log(1+\frac{C}{t}).$$ Using the above estimates, we see that $$4H(t)\leq \frac{\exp\Big(\frac{5}{4b}\big(\theta (t_1-t_0-t)+2(t_1-t_0-t)^{1/2}\big)+\frac{t_1-t_0-t}{b}\log(1+C/(t_1-t_0-t))\Big)}{\int_0^{t_1-t_0-t} e^{\frac{5}{4b}\theta s}\, ds}=$$ $$\frac{\frac{5\theta}{4b}\exp\Big(\frac{5}{2b}(t_1-t_0-t)^{1/2}+\frac{t_1-t_0-t}{b}\log(1+C/(t_1-t_0-t))\Big)}{1- e^{-\frac{5}{4b}\theta (t_1-t_0-t)}}.$$ Since $(t_1-t_0-t)\leq (t_1-t_0)=\epsilon$, we can assume that $\epsilon$ is sufficiently small that $$4H(t)\leq \frac{5\theta}{4b(1- e^{-\frac{5}{4b}\theta (t_1-t_0-t)})}+\frac{20\theta(t_1-t_0-t)^{1/2}}{4b(1- e^{-\frac{5}{4b}\theta (t_1-t_0-t)})}.$$ Hence, for some possibly new constant $C>0$ independent of $\epsilon$ and $\theta$ we get $$\bar{H}(t)\leq \log\big(\frac{e^{\frac{5\theta}{4b}\epsilon}-1}{e^{\frac{5\theta}{4b}(t_1-t_0-t)}-1}\big)+C(1+\epsilon^{3/2}\theta).$$ Thus, $$\exp(\xi_d\bar{H}(t))\lesssim \big(\frac{e^{\frac{5\theta}{4b}\epsilon}-1}{e^{\frac{5\theta}{4b}(t_1-t_0-t)}-1}\big)^{\xi_d}\exp(C\epsilon^{3/2}\theta).$$ Now we move to estimating $F(s)\exp(2Ks+\xi_d \bar{H}(s))$. From our choice of $\lambda$, it is clear that $\lambda(t)\geq 2\Lambda(t)$ for all $t$ sufficiently small.
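The logarithmic term in the bound on $\bar{H}(t)$ above comes from the exact primitive $\int_0^t \frac{c\, ds}{1-e^{-c(\tau-s)}}=\log\frac{e^{c\tau}-1}{e^{c(\tau-t)}-1}$, applied with $c=\frac{5\theta}{4b}$ and $\tau=t_1-t_0$. A quadrature check (the parameter values below are arbitrary):

```python
import numpy as np

c, tau, t = 1.7, 1.0, 0.6          # c plays the role of 5*theta/(4b); values arbitrary
N = 1_000_000
s = np.linspace(0.0, t, N, endpoint=False) + t / (2 * N)   # midpoint rule on [0, t]

numeric = np.sum(c / (1.0 - np.exp(-c * (tau - s)))) * (t / N)
closed = np.log((np.exp(c * tau) - 1.0) / (np.exp(c * (tau - t)) - 1.0))
assert abs(numeric - closed) < 1e-8
```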
Thus, it follows that $$F(s)\lesssim (t_1-t_0-s)^{7/10}e^{-\lambda(t_1-t_0-s)/2},$$ hence, we have the bound $$F(s)\exp(2Ks+\xi_d \bar{H}(s))\lesssim \big(\frac{e^{\frac{5\theta}{4b}\epsilon}-1}{e^{\frac{5\theta}{4b}(t_1-t_0-s)}-1}\big)^{\xi_d}(t_1-t_0-s)^{7/10}\exp\Big(\frac{1}{2}\big(4Ks+(2C\epsilon^{3/2}-1)\theta-(t_1-t_0-s)^{-1/2}\big)\Big).$$ Therefore, once $\epsilon$ is small enough that $2C\epsilon^{3/2}+\frac{5}{4b}\epsilon<1$, we can choose $\theta$ large enough that $$z(0) -d\int_0^{\epsilon} F(s)e^{2Ks+\xi_d \bar{H}(s)}\, ds\geq z(0)/2,$$ and from there we can conclude that $$z(0)\lesssim z(t)e^{2Kt+\xi_d\bar{H}(t)}$$ for all $t\in [0, \epsilon]$. This implies that $$z(0)(\theta(t_1-t_0-t))^{\xi_d}\lesssim z(t)=r(t)^2.$$ Hence, $r(0)(t_1-t_0-t)^{\xi_d/2}\lesssim r(t)$. The result now follows from the fact that $\alpha_d=2/\xi_d$. ◻ # Results from Obstacle Problem Theory In this section, we use techniques from the theory of the obstacle problem to study the local behavior of the interface. The main technique here is the quadratic blowup, which classifies free boundary points into regular points, where the zero set is asymptotically a half-space, and singular points, where the zero set is asymptotically lower dimensional. With a sufficiently regular source term, the blowup limit approximates the solution at a uniform scale, and we can use this to extract information on the local geometry of the positive set. The regularity of the source term in the equation satisfied by $w$ is governed by the regularity of the nutrient and the regularity of the hitting time. Since the nutrient enjoys parabolic regularity as in Lemma [\[lem:n_regularity\]](#lem:n_regularity){reference-type="ref" reference="lem:n_regularity"}, Hölder continuity of the hitting time leads to a Hölder continuous source, which is enough to control the blowup limit at both types of free boundary points.
Thus, using that $T\in C^{0,\alpha}_{\mathop{\mathrm{\textup{loc}}}}(\mathcal{O})$ for the $\alpha\in (0,1)$ from Theorem [Theorem 26](#thm:hitting_time_holder){reference-type="ref" reference="thm:hitting_time_holder"}, we show that the regular points form an open set of full measure in the spacetime interface, on which $T$ improves to locally Lipschitz and the spatial interface evolves as a locally $C^{1,1-}$ graph. The scale and bounds for which this regularity is achieved can be quantified in terms of the Hölder seminorm of $T$ and the scale at which the zero set achieves sufficiently large density near the regular point. We also show Hölder regularity of the unit normal to the interface in spacetime, using the spatial regularity of the interface and the monotonicity of its expansion. Under the stronger assumption that no singular points occur for some time interval, this lets us improve $T$ to $C^{1,1/2-}_{\mathop{\mathrm{\textup{loc}}}}$ on the corresponding region. As for singular points, we show that they form a relatively closed set in $\mathcal{O}$ contained in a $C^1$ manifold of dimension $d-1$. This improves on the standard obstacle problem result that the singular points at a fixed time are contained in a $C^1$ manifold of dimension $d-1$, and it implies that the worst case, where the singular points have positive $(d-1)$-dimensional Hausdorff measure for some time, can occur for at most countably many times. Under the additional assumption that $T$ is Lipschitz up to the singular set, we show a stronger generic regularity result which gives that for a.e. time the singular points have zero $(d-2)$-dimensional Hausdorff measure. In dimension 2, this would imply that the set of times with singular points has zero measure, as a relatively closed subset of $(0,\infty)$. We note that we are not currently able to prove that $T$ is Lipschitz up to singular points.
It is not clear to what extent the obstacle problem can be leveraged to understand the geometry of the patch at times just before a singular point occurs, in order to prove nondegeneracy of the pressure. Nondegeneracy at later times is also uncertain, but appears more tractable since the blowup is available. For example, suppose one knew, for a singular point $x_0$ and all sufficiently small $r$, that the set $\{ w(\cdot, T(x_0)) = 0 \}\cap B_r(x_0)$ is contained in a strip of width $Cr^{1 + \alpha}$, for some $C, \alpha$ depending on the source term. Under such a strip condition, the Hopf lemma could be applied to establish nondegeneracy of $p$ near $x_0$ at times $t\geq T(x_0)$. This strip condition was proven in [@figalli_serra] for singular points in the $(d-1)$-dimensional stratum, albeit using methods which require much stronger regularity than $C^{0,\alpha}$ source. A related result on the rate of convergence of the quadratic blowup at singular points in dimension 2 was proven in [@colombo]. As far as we are aware, it is not currently known whether or in what sense this strip condition may hold for the obstacle problem with $C^{0,\alpha}$ source. Nevertheless, we expect that $T$ is indeed Lipschitz, as it seems unlikely that the pressure occasionally becomes degenerate, and then only at instances of merging or topological change. It is clear that we cannot hope for better than Lipschitz, since $T$ cannot be differentiable when two pieces of the boundary collide while traveling at different speeds. We also discuss examples in Remark [Remark 33](#rem:regular_points_nondifferentiable){reference-type="ref" reference="rem:regular_points_nondifferentiable"} which show that Lipschitz continuity of $T$ is sharp at regular points without the additional assumption giving global control over singular points.
For the obstacle problem, the $C^{1,\alpha}$ regularity of the free boundary at regular points and the $C^1$ manifold covering singular points are well-known for Hölder source (see appendix for more details). Thus, the main challenge lies in analyzing the time-indexed family of obstacle problems satisfied by $w(\cdot, t)$ to control these properties in time. We note that such parameterized families have now been studied extensively for the constant source obstacle problem with varying fixed boundary data, mainly with the goal of understanding generic behavior of singular points ([@monneau], [@figalli_generic]). Our problem differs in that we must contend with a varying low regularity source and no fixed boundary data, which rules out many of the techniques typically used. In particular, the results of [@figalli_generic], including that the singular set has $(d-4)$-dimensional Hausdorff measure zero, do not appear to be within reach even with Lipschitz source. Our approach draws from arguments in [@monneau] to establish the $C^1$ manifold property for the singular set. However, whereas comparison arguments with the fixed boundary data allow Monneau to prove directly that the hitting time is Lipschitz, we must work harder to get lower regularity for $T$. Finally, the analysis of the regular set for the time-parameterized family, to our knowledge, is new. The main facts we make use of are the Hölder continuity of $T$, the spacetime continuity of $w$, the $L^1$ time-continuity of $\rho$, and the monotonicity of $\rho$ and $w$ in time. We remark that Proposition [Proposition 15](#prop:hitting_time_continuous){reference-type="ref" reference="prop:hitting_time_continuous"} shows that the interface strictly expands, and in space-time is exactly the graph of $T$. Therefore, regularity improvements to $T$ correspond exactly to regularity of the space-time interface as a $d$-dimensional manifold. 
As a result, we will generally not consider the space-time perspective directly, preferring to work with the subsets of ${ \mathbb{R}}^d$ traced out by the moving interface. Now, we proceed to study the local situation at the free boundary. As we noted above, the new regularity of $T$ from Theorem [Theorem 26](#thm:hitting_time_holder){reference-type="ref" reference="thm:hitting_time_holder"} feeds back into the obstacle problem satisfied by $w$ through the dependence of $\eta$, as defined in ([\[eq:w_obst_eqn\]](#eq:w_obst_eqn){reference-type="ref" reference="eq:w_obst_eqn"}), on $T$. We state this precisely below: **Lemma 27**. *Up to $C^{1,1-}$, $\eta(\cdot, t)$ has the same spatial regularity as $T$ on $\overline{\{ w(\cdot, t) > 0\}}$. In particular, for any $\tau > 0$, we have $\eta\in L^\infty_t C^{0,\alpha}_x([0, \tau]; { \mathbb{R}}^d)$.* *Proof.* The first part follows immediately from the $L^\infty_t C^{1,1-}$ regularity of $n$, from Lemma [\[lem:n_regularity\]](#lem:n_regularity){reference-type="ref" reference="lem:n_regularity"}. The second part follows from the $C^{0,\alpha}$ regularity of $T$. ◻ The exact regularity of $\eta$ is relevant for determining the spatial regularity of the free boundary near regular points. However, for most results in this section, we only require Hölder continuity to give uniqueness of the quadratic blowup limit introduced in Lemma [Lemma 13](#lem:quad_blowup_cpt){reference-type="ref" reference="lem:quad_blowup_cpt"}, and to give spatial equicontinuity of $\eta(\cdot, t)$. The uniqueness of the blowup limit and its subsequent characterization is best expressed as the following dichotomy, originally due to Caffarelli: **Lemma 28**. *Let $u$ be a solution of the obstacle problem $\Delta u = f\chi_{\{u>0\}}$ in ${ \mathbb{R}}^d$ with $f$ positive and $C^{0,\alpha}$ near 0. If $0\in \partial \{ u > 0 \}$, then one of the following holds:* 1. 
*$\{ u = 0\}$ has density $\frac{1}{2}$ at 0, and the quadratic rescalings $r^{-2}u(rx)$ converge in $C^{1,1-}(B_1)$ to $\frac{f(0)}{2}(x\cdot e)^2_+$ for some unit vector $e$.* 2. *$\{ u = 0\}$ has density 0 at 0, and the quadratic rescalings $r^{-2}u(rx)$ converge in $C^{1,1-}(B_1)$ to $\frac{1}{2}x\cdot D^2 u(0) x$, where $D^2 u(0)$ exists in the classical sense and is a positive semidefinite matrix with trace $f(0)$.* *Points of the first type are called regular points, and points of the second type are called singular points.* This dichotomy was proven in [@caffarelli98] for the constant source obstacle problem, with the note that minor modifications could extend the proof to the Hölder continuous case. An energetic criterion for the dichotomy appears in [@weiss]. Careful proofs for the uniqueness of the blowup limit in the Hölder continuous case are given in [@blank] for regular points and [@monneau] for singular points. The dichotomy applies to the free boundary of $w(\cdot, t)$ for each $t$. We let $R_t$ denote the regular points of $\partial \Omega_t$, and $\Sigma_t$ denote the singular points of $\partial \Omega_t$, for the obstacle problem solved by $w(\cdot, t)$ at each time. Subsequently, we take $$R := \bigcup_{t>0} R_t \hbox{ and } \Sigma := \bigcup_{t>0} \Sigma_t,$$ so that $$\mathcal{O} =\{0<T(x)<\infty\} = R \cup \Sigma.$$ Let us also mention that $\{R_t\}_{t>0}$ is a foliation of $R$, and so is $\{\Sigma_t\}_{t>0}$ for $\Sigma$, due to Proposition [Proposition 15](#prop:hitting_time_continuous){reference-type="ref" reference="prop:hitting_time_continuous"}. 
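As a quick consistency check on the two model profiles (with the source frozen at its value $f(0)$), we have $$\Delta\left(\frac{f(0)}{2}(x\cdot e)^2_+\right) = f(0)\,\chi_{\{x\cdot e > 0\}} \quad \hbox{ and } \quad \Delta\left(\frac{1}{2}\,x\cdot A x\right) = \operatorname{tr} A \ \hbox{ for } A \succeq 0,$$ so the half-space profile solves the obstacle problem with zero set a half-space (density $\frac{1}{2}$ at 0), while a nonnegative quadratic vanishes exactly on $\ker A$, a subspace of dimension at most $d - 1$ (density 0 at 0), provided the trace matches the frozen source. 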
We further subdivide singular points into strata by the dimensionality of the zero set; specifically, for $0\leq k\leq d-1$ we denote $$\Sigma^k_t := \{x\in \partial \Omega_t : \dim \ker D^2 w(x, t) = k\} \hbox{ and } \Sigma^k := \bigcup_t \Sigma^k_t.$$ For the obstacle problem, regular points are relatively open in the free boundary ([@blank], Corollary 4.8), and thus singular points form a closed set. This topological control is lost in the union over all times, so a first step is to reestablish that control for our $R$ and $\Sigma$. For this, we use a lemma due to Blank, which allows us to identify regular points by finite-scale behavior. **Lemma 29** ([@blank] Theorem 4.5). *Let $u \geq 0$ solve $\Delta u = f\chi_{\{u > 0\}}$ in $B_1(0)$, with $0\in \partial \{ u > 0\}$ and $0 < f\leq 1$. Then there exist universal parameters $\lambda_0, r_0, \tau\in (0,1)$ such that if $\lambda_0 < f$ in $B_1$ and $$\frac{|\{x : u(x) = 0\}\cap B_r(0)|}{|B_r|} \geq \frac{1}{8}\hbox{ for some } r < r_0,$$ then $$\frac{|\{x : u(x) = 0\}\cap B_s(0)|}{|B_s|} \geq \frac{3}{8}\hbox{ for all } s < \tau r.$$* In particular, if the hypothesis of the lemma holds, then $\{ u = 0\}$ has positive density at 0, so 0 is a regular point. This lemma is applicable even when $f$ is less regular than Hölder, and results in a modified regular-singular dichotomy in that case. Essentially, one may take away that nonuniqueness of the blowup limit in the low regularity setting can occur due to infinite rotation, but not due to any sort of mixing of regular and singular point behavior at different scales. Since the following result only relies on the previous lemma and continuity of $T$, it also holds when $T$ is less regular than Hölder. **Proposition 30**. *$R$ is open. 
Thus, $\Sigma$ is relatively closed in $\mathcal{O}$.* *Proof.* Let $\lambda_0, r_0, \tau \in (0,1)$ be the parameters given by Lemma [Lemma 29](#lem:modified_regular_pt_criterion){reference-type="ref" reference="lem:modified_regular_pt_criterion"}. Suppose $x_0\in R$, so that $\Omega_{T(x_0)}$ has density $\frac{1}{2}$ at $x_0$, and thus there exists $r > 0$ such that $$\frac{|\{x : w(x, T(x_0)) = 0\}\cap B_r(x_0)|}{|B_r|} \geq \frac{1}{8}$$ Then the result of the lemma is that for all $s < \tau r$, $$\frac{|\{x : T(x) \geq T(x_0)\}\cap B_s(x_0)|}{|B_s|} \geq \frac{3}{8}$$ Fix $s_0 = \tau r / 2$. By Lemma [Lemma 5](#lem:rho_regularity){reference-type="ref" reference="lem:rho_regularity"}, $|\Omega_t|$ is continuous in $t$, so we can choose $\delta > 0$ such that $$\frac{|\{x : T(x) > T(x_0) + \delta\}\cap B_{s_0}(x_0)|}{|B_{s_0}|} \geq \frac{5}{16}$$ Let $s_1 < s_0$ be such that if $|x - x_0| < s_1$, then $|T(x) - T(x_0)| < \delta$. Then for $x_1\in B_{s_1}(x_0)$, we compute $$\begin{aligned} \frac{|\{x : T(x) > T(x_1)\}\cap B_{s_0 + |x_1 - x_0|}(x_1)|}{|B_{s_0 + |x_1 - x_0|}|} &\geq \frac{|\{x : T(x) > T(x_0) + \delta\}\cap B_{s_0}(x_0)|}{|B_{s_0 + |x_1 - x_0|}|} \\&\geq \left(\frac{5}{16}\right)\left(\frac{s_0}{s_0 + s_1}\right)^d \end{aligned}$$ Thus, if we take $s_1$ sufficiently small, this last quantity is greater than $\frac{1}{8}$, and all points in $B_{s_1}(x_0)$ are regular. ◻ ## Regular points We now turn toward understanding the behavior of the interface near regular points. Standard obstacle problem theory ([@caffarelli98], [@blank]) gives that for $C^{0,\alpha}$ source, the interface is locally $C^{1,\alpha}$ at regular points, with the scale at which the regularity is achieved depending on the scale at which the zero set is sufficiently large. We discuss the dependence of this regularity in greater detail in the appendix. In particular, for this problem we have: **Proposition 31**. 
*$R$ can be covered by open neighborhoods $V$, each with the property that there exist constants $C, r > 0$ such that for each $x\in V$, $B_r(x)\cap\Omega_{T(x)}$ is the intersection of $B_r(x)$ with the lower graph of a $C^{1,\alpha}$ function in some coordinate system (depending on $x$) with seminorm bounded by $C$.* *Proof.* By Lemma [Lemma 54](#lem:regular_point_boundary_estimate){reference-type="ref" reference="lem:regular_point_boundary_estimate"}, we need only show that if the zero set reaches density sufficiently close to $\frac{1}{2}$ at scale $r$ near $x$, then it does so at the same scale at all points near $x$. This is essentially immediate from the fact that $\Omega_t$ expands monotonically in $t$, with the measure $|\Omega_t|$ Lipschitz as a function of $t$ by Lemma [Lemma 5](#lem:rho_regularity){reference-type="ref" reference="lem:rho_regularity"}. ◻ The dependence of the coordinate system in Proposition [Proposition 31](#prop:regular_point_standard_boundary_regularity){reference-type="ref" reference="prop:regular_point_standard_boundary_regularity"} is only a minor inconvenience, and we will eventually remove it in Proposition [Proposition 38](#prop:regular_final_boundary_regularity){reference-type="ref" reference="prop:regular_final_boundary_regularity"}. To better understand this dependence, we introduce $\nu(x)$, defined for $x\in R$ as the outward unit normal to $\Omega_{T(x)}$ at $x$. We have spatial regularity of $\nu$ from the obstacle problem; namely, $\nu\in C^{0,\alpha}_{\mathop{\mathrm{\textup{loc}}}}(R_t)$ for each $t$. Our goal will be to improve this to regularity of $\nu$ on $R$. The key ingredients will be the regularity of $\Omega_t$ near regular points, and the strictly monotonic expansion of the $\Omega_t$. 
The essential idea will be that if the tangent planes to $\Omega_{T(x)}$ at $x$ and to $\Omega_{T(y)}$ at $y$ intersect for some points $x,y$ with different hitting times, then they must intersect well away from $x$ and $y$; otherwise, we could use the regularity of the interfaces to show that $\partial \Omega_{T(x)}$ and $\partial \Omega_{T(y)}$ intersect, which contradicts monotonicity. This then gives control over the angle at which the tangent planes may intersect in terms of the distance between $x$ and $y$. **Proposition 32**. *The outward unit normal vector $\nu$ to $\Omega_{T(x)}$ at $x$ satisfies $\nu\in C^{0,\alpha/(1 + \alpha)}_{\mathop{\mathrm{\textup{loc}}}}(R)$.* *Proof.* By Proposition [Proposition 31](#prop:regular_point_standard_boundary_regularity){reference-type="ref" reference="prop:regular_point_standard_boundary_regularity"}, we may cover $R$ with neighborhoods $V$ such that for each $x\in V$ uniformly, $\partial\Omega_{T(x)}\cap V$ is the lower graph in some coordinate system of a function $f_{T(x)}$, with the $f_t$ uniformly bounded in $C^{1,\alpha}$. We will restrict to such a $V$ for the remainder of the proof. Then as a preliminary step, we can observe continuity of $\nu$ from the regularity and monotonicity of the interface by a purely geometrical argument. Namely, a $C^{1,\alpha}$ domain satisfies a uniform interior and exterior cone condition, where the angle of the cone improves toward $\pi$ as we allow its height to approach 0; specifically, the cone in $B_r$ can be taken with angle $2\arccos(C r^\alpha)$, when the $C^{1,\alpha}$ seminorm is $C$. Thus, if $\nu$ were discontinuous at some $x\in R$, we could use compactness to find a sequence $(y_n)$ converging to $x$ with $T(y_n)$ either increasing or decreasing to $T(x)$ and $\nu(y_n)$ converging to some unit vector distinct from $\nu(x)$. 
Then for $n$ sufficiently large, at a sufficiently small scale, the interior cone at $x$ will intersect with the exterior cone of a $y_n$, or vice versa, and we draw a contradiction with the monotonic expansion of the $\Omega_t$ depending on whether the $T(y_n)$ are decreasing or increasing. This shows that $x\mapsto \nu(x)$ is continuous. Now, to obtain a quantitative local continuity estimate in view of the cone regularity we described above, we may restrict attention to $x,y\in V$ with $\frac{1}{2} < \nu(x)\cdot \nu(y) < 1$. Moreover, since the case $T(x) = T(y)$ is managed by the spatial regularity of the interface, we may assume that $T(x) > T(y)$. For notation, we let $r = \nu(x) \cdot \nu(y)$ and use $P_x, P_y$ to refer to the tangent planes to $\Omega_{T(x)}$ at $x$ and $\Omega_{T(y)}$ at $y$ respectively. Let $v := \frac{\nu(y) - r\nu(x)}{\sqrt{1 - r^2}}$ be the projection of $\nu(y)$ into $\nu(x)^\perp$, scaled to unit norm. Considering the point $x - hv$ for $h > 0$, we compute that its $\nu(y)$ component is $x\cdot \nu(y) - h\sqrt{1 - r^2}$. Thus $x - hv$ reaches $P_y$ precisely when $h = \frac{(x - y)\cdot \nu(y)}{\sqrt{1 - r^2}}$, and in general we have $$\label{eq:dist_Py} d(x-hv, P_y) \geq h\sqrt{1 - r^2} - \delta \quad \hbox{ where } \delta:=|x-y|$$ Now, we apply the $C^{1,\alpha}$ regularity of the interface in $V$. 
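Explicitly, since $\nu(y)\cdot \nu(y) = 1$ and $\nu(x)\cdot \nu(y) = r$, the component computation is the one-line verification $$v\cdot \nu(y) = \frac{\nu(y)\cdot \nu(y) - r\,\nu(x)\cdot \nu(y)}{\sqrt{1 - r^2}} = \frac{1 - r^2}{\sqrt{1 - r^2}} = \sqrt{1 - r^2},$$ which gives $(x - hv)\cdot \nu(y) = x\cdot \nu(y) - h\sqrt{1 - r^2}$, as used above. 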
For all $h$ sufficiently small, this regularity implies that $\partial \Omega_{T(x)}$ in $B_h(x)$ is contained in a $Ch^{1 + \alpha}$-neighborhood of $P_x$; in other words, $\Omega_{T(x)}$ and its exterior contain the following halfspaces: $$\label{eq:px_lower_halfspace} \{ z\in B_h(x) : (z-x+Ch^{1 + \alpha}\nu(x))\cdot \nu(x)\leq 0\} \subset \Omega_{T(x)}\cap B_h(x)$$ $$\label{eq:px_upper_halfspace} \{ z\in B_h(x) : (z-x-Ch^{1 + \alpha}\nu(x))\cdot \nu(x)\geq 0\} \subset ({ \mathbb{R}}^d\setminus\Omega_{T(x)})\cap B_h(x)$$ In particular, since $x - hv\in P_x$, it follows that there is a point $\tilde{x}\in \partial \Omega_{T(x)}$ with $$\label{eq:dist_x_tilde} |\tilde{x} - (x - hv)| < Ch^{1 + \alpha}.$$ Let $y_1$ be the nearest point in $P_y$ to $\tilde{x}$. We illustrate this with the figure below. ![](figures/unit_normal_proof_diagram.pdf){#fig:unit_normal_proof} As the figure may suggest, $\tilde{x}$ cannot be too far below $y_1$, or else it falls into $\Omega_{T(y)}$, contradicting that $T(x) > T(y)$. Specifically, as in ([\[eq:px_lower_halfspace\]](#eq:px_lower_halfspace){reference-type="ref" reference="eq:px_lower_halfspace"}), we can apply the $C^{1,\alpha}$ regularity to get that for all sufficiently small $r$, $\Omega_{T(y)} \cap B_r(y)$ contains the halfspace $\{ z : (z - y + Cr^{1 + \alpha}\nu(y))\cdot \nu(y) \leq 0 \}\cap B_r(y)$. Since $\tilde{x}\notin \overline{\Omega_{T(y)}}$, $\tilde{x}$ must not be contained in that halfspace, and we have $$\label{eq:halfspace_ineq_x_tilde} (\tilde{x} - y_1)\cdot \nu(y) > -C |y_1 - y|^{1 + \alpha}$$ The left side here is the signed distance of $\tilde{x}$ to $P_y$. 
From ([\[eq:dist_Py\]](#eq:dist_Py){reference-type="ref" reference="eq:dist_Py"}), the signed distance of $x - hv$ to $P_y$ is bounded above by $-(h\sqrt{1 - r^2} - \delta)$, so using ([\[eq:dist_x\_tilde\]](#eq:dist_x_tilde){reference-type="ref" reference="eq:dist_x_tilde"}), we conclude that the left side above is bounded above by $Ch^{1 + \alpha} - (h\sqrt{1 - r^2} - \delta)$. On the other hand, we have $$|y_1 - y| \leq |\tilde{x} - y| \leq |\tilde{x} - (x - hv)| + |(x - hv) - x| + |x - y| \leq C_0 h^{1 + \alpha} + h + \delta$$ When $\delta < h < 1$, this is $O(h)$, and so $|y_1 - y|^{1 + \alpha} \leq C h^{1 + \alpha}$ for some $C$. Thus, the inequality ([\[eq:halfspace_ineq_x\_tilde\]](#eq:halfspace_ineq_x_tilde){reference-type="ref" reference="eq:halfspace_ineq_x_tilde"}) becomes $$Ch^{1 + \alpha} - (h\sqrt{1 - r^2} - \delta) > - C h^{1 + \alpha}$$ Rearranging and absorbing constants, this means $$h\sqrt{1 - r^2} - \delta \leq Ch^{1 + \alpha}$$ so that $$|\nu(x) - \nu(y)| \leq \sqrt{2}\sqrt{1 - r^2} \leq C(h^{\alpha} + \delta h^{-1})$$ Choosing $h = \delta^{1/(1+\alpha)}$ to optimize in $h$, we get $$|\nu(x) - \nu(y)| \leq C\delta^{\frac{\alpha}{1 + \alpha}}$$ where the constant depends only on the uniform bound for the $C^{1,\alpha}$ seminorms of the graphs, and on $\alpha$. Hence we conclude. ◻ As a result of the regularity of $\nu$, we can improve Proposition [Proposition 31](#prop:regular_point_standard_boundary_regularity){reference-type="ref" reference="prop:regular_point_standard_boundary_regularity"} to also have the coordinate system chosen locally uniformly. In other words, near regular points, one can fix a local coordinate system in which the free boundary evolves as a $C^{1,\alpha}$ graph over some time interval. We now turn toward applying the improved geometry of the patch at regular points to the pressure. 
Elliptic regularity for $C^{1,\alpha}$ domains implies that the pressure $p(\cdot, t)$ has a well-defined gradient on $R_t$, and the Hopf lemma for $C^{1,\alpha}$ domains implies that $\nabla p(\cdot, t)$ is nonvanishing on $R_t$. However, there is an important limitation here: $x\mapsto \nabla p(x, T(x))$ is not necessarily continuous on $R$, complicating our analysis. This is illustrated with the following example: *Remark 33*. Singular points can exert a nonlocal effect on the pressure gradient. For example, if we consider a pressure supported on a strip of width $h$ with zero boundary conditions and constant Laplacian $-1$, then we can see that $|\nabla p| = \frac{h}{2}$ on the boundary, since the solution to the one-dimensional problem with $p(0) = p(h) = 0$ is $p(x) = \frac{1}{2}x(h-x)$. In particular, it follows that if we have a patch which consists of two strips, and those strips merge along a hyperplane at some time, then $\nabla p$ has a jump discontinuity in time at every regular point at the time those strips merge. Similar examples can be considered for singular points in each stratum $\Sigma^k$ by examining a cylindrical patch with a cylindrical hole as the radius of the cylinder shrinks to 0. Here, by cylinder we mean the product of ${ \mathbb{R}}^k$ and a $(d-k)$-dimensional ball, for $0\leq k \leq d-1$. We note that the discontinuity in the gradient in Remark [Remark 33](#rem:regular_points_nondifferentiable){reference-type="ref" reference="rem:regular_points_nondifferentiable"} is a jump in magnitude, not in direction. Indeed, the zero boundary condition implies that $\frac{\nabla p(x, T(x))}{|\nabla p(x, T(x))|} = -\nu(x)$ on $R_t$, so the regularity of $\nu$ from Proposition [Proposition 32](#prop:unit_normal_holder){reference-type="ref" reference="prop:unit_normal_holder"} rules out such discontinuities. 
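The one-dimensional computation behind the strip example is immediate: with $p(0) = p(h) = 0$ and $p'' = -1$, $$p(x) = \frac{1}{2}x(h - x), \qquad p'(x) = \frac{h}{2} - x, \qquad |p'(0)| = |p'(h)| = \frac{h}{2}.$$ In particular, when two strips of width $h$ merge into a single strip of width $2h$, the boundary gradient at the surviving outer faces jumps from $\frac{h}{2}$ to $h$. 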
This example also proves to be an obstacle to higher regularity of $T$; we will later see in Lemma [Lemma 44](#lem:hitting_time_differentiable){reference-type="ref" reference="lem:hitting_time_differentiable"} that the derivative of $T$ closely depends on $\nabla p(x, T(x))$ when it exists. As a result, we can hope for $T$ to be at best Lipschitz on $R$. The key to establishing this regularity will be to obtain a quantitative estimate from the Hopf lemma, to get a locally uniform lower bound for $|\nabla p(x, T(x))|$ on $R$. For this, we require an a priori estimate for the growth of the solution, in addition to control over the geometry. We can obtain this growth estimate from the strict superharmonicity of $p(\cdot, t)$, and so we have the following statement: **Lemma 34**. *Let $r_0, c_0, C_0 > 0$ and $\alpha\in (0,1)$. Suppose $u$ is a positive $C^2$ solution to $\Delta u \leq -c_0 < 0$ on $B_{r_0}(0)\cap \{ x : x_d > C_0|x'|^{1 + \alpha} \}$ with $u(0) = 0$, where we write $x = (x', x_d)$. Then there exist $\varepsilon, \delta > 0$ depending only on $d, \alpha, r_0, c_0, C_0$ such that for all $h\in (0,\delta)$, we have $u(h e_d) \geq \varepsilon h$.* *Proof.* First, we let $C_1 = C_0 + 1$, to give additional separation from the boundary when we are away from $0$, and let $U_r = B_r(0)\cap \{ x : x_d > C_1 |x'|^{1 + \alpha}\}$ for $0 < r < r_0$. For a given $r$, we will decompose $\partial U_r$ as $\Gamma_1 \cup \Gamma_2$, where $\Gamma_2 := \partial B_r(0)\cap \{ x : x_d \geq C_1|x'|^{1 + \alpha} \}$ is a spherical cap, and $\Gamma_1 = B_r(0)\cap \{ x : x_d = C_1 |x'|^{1 + \alpha} \}$. ![the region $U_r$ (shaded)](figures/hopf_drawing.pdf){#fig:hopf} The proof of the Hopf lemma proceeds by perturbing $u$ by a function $v$ constructed specially for the domain and applying the comparison to the result. 
In particular, if for some $v, \varepsilon > 0, \delta > 0$ we have $$\begin{cases} \Delta(u - \varepsilon v) \leq 0 \hbox{ on } U_\delta \\ u - \varepsilon v \geq 0 \hbox{ on } \partial U_\delta \\ \partial_d v(0) = 1 \end{cases}$$ then the comparison principle implies that $u - \varepsilon v \geq 0$ on $U_\delta$, and the control over the derivative of $v$ at 0 implies the result. Borrowing from the proof of the Hopf lemma for $C^{1,\alpha}$ domains in [@li2007hopf], we let $$v(x) = x_d + \frac{2C_1}{\alpha}(\alpha + d - 1)x_d^{1 + \alpha} - 2C_1|x|^{1 + \alpha}$$ We note by direct computation that $\partial_d v(0) = 1$ and $$\Delta v(x) = 2C_1(1 + \alpha)(\alpha + d - 1)\left(x_d^{\alpha - 1} - |x|^{\alpha - 1}\right)$$ In particular, $\Delta v(x) \geq 0$ on $U_r$ for any $r$. Thus, we reduce to verifying the boundary condition, which requires using the particular behavior of $v$ on $\Gamma_1$, and choosing $\delta$ and $\varepsilon$ appropriately to control $v$ on $\Gamma_1$ and $\Gamma_2$. Following this plan, we first check the boundary inequality on $\Gamma_1$, defined above, where we have $x_d = C_1 |x'|^{1 + \alpha}$. Along a curve $x(t) = (te', C_1 t^{1 + \alpha})$ for a unit vector $e'\in { \mathbb{R}}^{d-1}$, we have $$v(x(t)) = C_1 t^{1 + \alpha} + \frac{2C_1^{2 + \alpha}}{\alpha}(\alpha + d - 1)t^{(1 + \alpha)^2} - 2C_1(t^2 + C_1^2 t^{2(1 + \alpha)})^{(1 + \alpha)/2}$$ For small $t > 0$, we can drop the higher order terms and see that $v$ grows like $C_1 t^{1 + \alpha} - 2C_1 t^{1 + \alpha} = -C_1 t^{1 + \alpha}$. In particular, there exists $\delta_0 = \delta_0(d, \alpha, C_0)$ such that if $\delta \leq \delta_0$, then $v(x) \leq 0$ on $B_\delta(0) \cap\{ x : x_d = C_1 |x'|^{1 + \alpha} \}$. We will set $\delta = \min(\delta_0, r_0/2)$ for the rest of the proof. Next, we handle the boundary inequality on $\Gamma_2$, the spherical cap defined by $\Gamma_2 = \partial B_\delta(0)\cap \{ x : x_d \geq C_1 |x'|^{1 + \alpha} \}$. 
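The computation of $\Delta v$ above follows from the radial identity $\Delta |x|^s = s(s + d - 2)|x|^{s - 2}$ applied with $s = 1 + \alpha$, together with the one-dimensional derivative in $x_d$: $$\Delta\left(\frac{2C_1}{\alpha}(\alpha + d - 1)\,x_d^{1 + \alpha}\right) = 2C_1(1 + \alpha)(\alpha + d - 1)\,x_d^{\alpha - 1}, \qquad \Delta\left(2C_1|x|^{1 + \alpha}\right) = 2C_1(1 + \alpha)(\alpha + d - 1)\,|x|^{\alpha - 1}.$$ Since $0 < x_d \leq |x|$ on $U_r$ and $\alpha - 1 < 0$, we have $x_d^{\alpha - 1} \geq |x|^{\alpha - 1}$, confirming that $\Delta v \geq 0$. 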
By compactness, $\Gamma_2$ has positive distance to $\{ x : x_d \leq C_0 |x'|^{1 + \alpha} \}$. Let $D = D(d, \alpha, \delta, C_0)$ denote this distance. By shrinking $D$ to at most $\frac{r_0}{2}$ if necessary, we get that $u$ is defined on a ball of radius $D$ at each point of $\Gamma_2$. The paraboloid on the ball of radius $D$ with zero boundary data and Laplacian $-c_0$, namely $\phi(z) = \frac{c_0}{2d}(D^2 - |z - x|^2)$ centered at $x\in \Gamma_2$, lies below $u$ by the comparison principle, so we conclude that $u \geq \frac{c_0}{2d}D^2$ on $\Gamma_2$. Let $M = \max(1, \max_{\Gamma_2} v)$, and then we can take $$\varepsilon = \frac{c_0 D^2}{2dM}$$ to get $u - \varepsilon v \geq 0$ on $\Gamma_2$, with $\varepsilon$ depending on all of the parameters in the statement of the lemma. On $\Gamma_1$, we have $u\geq 0$ and $v\leq 0$, so $u - \varepsilon v \geq 0$. On $\Gamma_2$, we chose $\varepsilon$ so that $u - \varepsilon v \geq 0$. Thus, $u - \varepsilon v \geq 0$ on $\partial U_\delta$, so we conclude. ◻ With the Hopf lemma estimate and the regularity of the boundary near regular points, we can conclude that $p(\cdot, T(x))$ has linear growth at $x$, locally uniformly on $R$. **Proposition 35**. *$R$ can be covered with neighborhoods $V$ with the following property: there exist parameters $C, c, r_0 > 0$ such that for any $x\in V$, $$c \leq r^{-1}\sup_{B_r(x)} p(\cdot, T(x)) \leq C$$ In particular, $|\nabla p(x, T(x))| \sim 1$ on $V$, for implicit constants depending on $V$.* *Proof.* To obtain the lower bound, we apply Lemma [Lemma 34](#lem:quantitative_hopf_lemma){reference-type="ref" reference="lem:quantitative_hopf_lemma"}. Thus, we must show that the parameters $r_0, c_0, C_0$ from the statement of the lemma can be chosen locally uniformly on $R$. By Lemma [Lemma 7](#lem:nutrient_lower_bound){reference-type="ref" reference="lem:nutrient_lower_bound"}, we can choose $c_0 > 0$ locally uniformly in time so that $\Delta p = -n \leq -c_0 < 0$, using the assumption that $n_0$ is bounded away from 0. 
By Proposition [Proposition 31](#prop:regular_point_standard_boundary_regularity){reference-type="ref" reference="prop:regular_point_standard_boundary_regularity"}, for any $x_0\in R$, we can find a neighborhood $V$ of $x_0$ in which we have a uniform $C_0$ so that each $R_t$ which intersects the neighborhood does so as a graph with $C^{1,\alpha}$ seminorm controlled by $C_0$. By taking $r_0$ so that $B_{2r_0}(x_0)\subset V$, we can use $r_0, C_0$ as the parameters for all points in $B_{r_0}(x_0)$. To obtain the upper bound, we first use Proposition [Proposition 31](#prop:regular_point_standard_boundary_regularity){reference-type="ref" reference="prop:regular_point_standard_boundary_regularity"}. This gives us a neighborhood $V$ and a parameter $r_1 > 0$ for which $B_{r_1}(x)\cap \partial \Omega_{T(x)}$ is a $C^{1,\alpha}$ graph, with uniform control over the $C^{1,\alpha}$ seminorm. In particular, this regularity implies that there is an $r_2 > 0$ such that for all $x\in V$ and all $r\in (0, r_2]$, we have $x - r\nu(x)\in \Omega_{T(x)}$. Then, since $p(x, T(x)) = 0$, the mean value theorem along the segment from $x$ to $x - r_2\nu(x)$ implies that there exists some $r\in (0, r_2)$ for which $$\nabla p(x - r\nu(x), T(x))\cdot (-\nu(x)) = \frac{p(x - r_2 \nu(x), T(x))}{r_2}$$ We recall from the proof of Lemma [\[lem:w_regularity\]](#lem:w_regularity){reference-type="ref" reference="lem:w_regularity"} that $p\in L^\infty({ \mathbb{R}}^d\times [0, \tau])$, for any $\tau\in (0,\infty)$, using that the patch has bounded support and comparing to a sufficiently large paraboloid supersolution. Thus, we have $$|\nabla p(x - r\nu(x), T(x))\cdot \nu(x)| \leq Cr_2^{-1}$$ Using the regularity of the boundary, the $L^\infty$ bound on the pressure, and the $L^\infty$ bound on the nutrient from Lemma [\[lem:n_regularity\]](#lem:n_regularity){reference-type="ref" reference="lem:n_regularity"}, we can invoke boundary Schauder estimates to have $p(\cdot, T(x))$ uniformly $C^{1,\alpha}$ on $B_{r_1}(x)\cap \Omega_{T(x)}$ for each $x\in V$. 
Thus, we can transfer our bound to the boundary: $$|\nabla p(x, T(x))| = |\nabla p(x, T(x))\cdot \nu(x)| \leq |(\nabla p(x, T(x)) - \nabla p(x - r\nu(x), T(x)))\cdot\nu(x)| + |\nabla p(x - r\nu(x), T(x))\cdot \nu(x)| \leq Cr^\alpha + Cr_2^{-1}$$ This also gives the upper bound on the linear growth of $p(\cdot, T(x))$ near $x$, so we conclude. ◻ Having established nondegeneracy of the pressure, we get improved regularity of $T$ on $R$. **Corollary 36**. *$T\in C^{0,1}_{\mathop{\mathrm{\textup{loc}}}}(R)$. In other words, $T$ attains its optimal regularity on $R$ in light of Remark [Remark 33](#rem:regular_points_nondifferentiable){reference-type="ref" reference="rem:regular_points_nondifferentiable"}, barring additional assumptions on $\Sigma$.* *Proof.* This follows from the boundary regularity from Proposition [Proposition 31](#prop:regular_point_standard_boundary_regularity){reference-type="ref" reference="prop:regular_point_standard_boundary_regularity"} and the linear nondegeneracy of the pressure from Proposition [Proposition 35](#prop:regular_linear_growth){reference-type="ref" reference="prop:regular_linear_growth"}, via a method similar to the comparison arguments of Section 4. Using these properties, we can construct a radial subsolution initially supported on an annulus in $\Omega_t$ which expands at a constant rate, near any $x_0\in R$. From this argument, we get $$\label{eq:t_one_sided_lip} (T(x) - T(x_0))_+ \leq C|x - x_0|$$ for $x_0\in R$ and $x$ sufficiently close to $x_0$. Since the boundary regularity and linear growth rate are uniform for $x_0$ restricted to a compact $K\subset R$, the one-sided bound [\[eq:t_one_sided_lip\]](#eq:t_one_sided_lip){reference-type="eqref" reference="eq:t_one_sided_lip"} holds uniformly for $x, x_0\in K$, and so we conclude that $T$ is locally Lipschitz on $R$. 
◻ Finally, we turn to refining the statement of Proposition [Proposition 31](#prop:regular_point_standard_boundary_regularity){reference-type="ref" reference="prop:regular_point_standard_boundary_regularity"}. The first step will be to use the linear growth of the pressure gradient to control the boundary in the Hausdorff metric. **Lemma 37**. *For any $x\in R$, there exist parameters $r, \delta > 0$ such that for $t_1, t_2\in (T(x) - \delta, T(x) + \delta)$, we have $$\sup_{y_1\in R_{t_1}\cap B_r(x)} \inf_{y_2\in R_{t_2}\cap B_r(x)} |y_1 - y_2| \sim |t_1 - t_2|$$ In other words, $D(R_{t_1}\cap B_r(x), R_{t_2}\cap B_r(x))\sim |t_1 - t_2|$, where $D$ denotes Hausdorff distance. Here, all parameters and implicit constants depend on $x$.* *Proof.* Let $B$ be a ball compactly contained in $R$, and let $y_1, y_2\in B$ with $T(y_1) < T(y_2)$. Since $T\in C^{0,1}_{\mathop{\mathrm{\textup{loc}}}}(R)$ by Corollary [Corollary 36](#cor:regular_points_t_lipschitz){reference-type="ref" reference="cor:regular_points_t_lipschitz"}, we have $|T(y_1) - T(y_2)| \leq C(B)|y_1 - y_2|$. We get the reverse bound by an analogous argument using the boundary regularity of Proposition [Proposition 31](#prop:regular_point_standard_boundary_regularity){reference-type="ref" reference="prop:regular_point_standard_boundary_regularity"} and using Proposition [Proposition 35](#prop:regular_linear_growth){reference-type="ref" reference="prop:regular_linear_growth"}'s upper bound on the linear growth of $p(\cdot, t_1)$ away from $y_1$. That is, following the approach of Section 4, we can construct a supersolution supported outside a ball in the exterior of $\Omega_{T(y_1)}$ near $y_1$, such that the supersolution expands at a constant rate. From that argument, we get that there exists a point $\tilde{y_2}\in R_{t_2}$ with $|y_1 - \tilde{y_2}| \leq C(B)|T(y_1) - T(y_2)|$. 
Letting $t_1 = T(y_1), t_2 = T(y_2)$, and restricting to the case where $|t_1 - t_2|$ is sufficiently small to guarantee that the $\tilde{y_2}$ from before is in $B$, we have shown that $D(R_{t_1}\cap B, R_{t_2}\cap B) \sim |t_1 - t_2|$ for implicit constants depending on $B$, from which we can obtain the original statement. ◻ Using the previous result, we can now state our final improved form of Proposition [Proposition 31](#prop:regular_point_standard_boundary_regularity){reference-type="ref" reference="prop:regular_point_standard_boundary_regularity"}. **Proposition 38**. *$R$ is covered by neighborhoods $V$ with the following property: there exists $r > 0$, a coordinate system $(x', x_n)$, and a locally defined function $f(x', t)$ such that for each $x\in V$, $\Omega_{T(x)}\cap B_r(x)$ is the lower graph $\{ y\in B_r(x) : y_n \leq f(y', T(x)) \}$. Moreover, $f$ is uniformly $C^{1,1-}$ in space and $C^{0, 1}$ in time.* *Proof.* First, by Proposition [Proposition 32](#prop:unit_normal_holder){reference-type="ref" reference="prop:unit_normal_holder"}, we note that the coordinate system in Proposition [Proposition 31](#prop:regular_point_standard_boundary_regularity){reference-type="ref" reference="prop:regular_point_standard_boundary_regularity"} can be chosen locally uniformly, so that we get $r > 0$ and the family $f(x', t)$ which are uniformly $C^{1,1-}$ in space by the Lipschitz regularity of $T$. Thus, it remains only to check that the regularity in time follows from our control over the Hausdorff distance. Fix $x'$ and $t_1, t_2$, and write $x_1 = (x', f(x', t_1))$, $x_2 = (x', f(x', t_2))$. Then for $\varepsilon = |f(x', t_1) - f(x', t_2)| = |x_1 - x_2|$, $B_\varepsilon(x_1)$ contains the point $\tilde{x}\in \partial \Omega_{T(x_2)}$ which minimizes the distance to $x_1$. 
By Lemma [Lemma 37](#lem:regular_hausdorff_distance){reference-type="ref" reference="lem:regular_hausdorff_distance"}, after possibly shrinking our neighborhood, $|x_1 - \tilde{x}| \leq C|t_1 - t_2|$. In particular, $|\tilde{x}' - x'| \leq C|t_1 - t_2|$, so $|f(\tilde{x}', t_2) - f(x', t_2)| \leq C|t_1 - t_2|$, for some larger $C$ given by the $C^1$ spatial regularity of $f$. Then $$|f(x', t_1) - f(x', t_2)| \leq |f(x', t_1) - f(\tilde{x}', t_2)| + |f(\tilde{x}', t_2) - f(x', t_2)| \leq C|t_1 - t_2|$$ which completes the proof. ◻ ## Singular points Now we proceed to the analysis of the singular set, with the goal of controlling singular points in dimension. First, we will show that the blowup profile at singular points varies continuously along the spacetime interface. The main tool will be a uniform approximation result that we prove in the appendix, which will allow us to make use of the uniform-in-time spatial continuity of $\eta$. **Proposition 39**. *$x\mapsto D^2w(x, T(x))$ is continuous on $\Sigma$.* *Proof.* First, by Lemma [\[lem:w_regularity\]](#lem:w_regularity){reference-type="ref" reference="lem:w_regularity"} and Lemma [Lemma 27](#lem:eta_regularity){reference-type="ref" reference="lem:eta_regularity"}, we have that $w$ is locally spacetime Lipschitz and $\eta$ is $C^{0,\alpha}$ in space locally uniformly in time. We also have that $T$ is locally $C^{0,\alpha}$ on $\mathcal{O}$. Thus, for some $C > 0$, we can restrict to a spacetime neighborhood of the interface where all of these norms are bounded by $C$ (in the case of $T$, in the sense of the neighborhood's projection into space). Let $\varepsilon > 0$. 
By Lemma [Lemma 58](#lem:uniform_sing_blowup_scale){reference-type="ref" reference="lem:uniform_sing_blowup_scale"}, there exists a scale depending only on the modulus of continuity of $1 - \eta$ in ([\[eq:w_obst_eqn\]](#eq:w_obst_eqn){reference-type="ref" reference="eq:w_obst_eqn"}), such that the quadratic blowup uniformly approximates $w$ near singular points at that scale; concretely, there is a $\delta = \delta(\varepsilon / 3, C) > 0$ such that if $x_0$ is a singular point in our neighborhood with blowup $q_0(x) = \frac{1}{2}x\cdot D^2w(x_0)x$ (recentered at 0), then $$\|\delta^{-2}w(x_0 + \delta x, T(x_0)) - q_0\|_{C^1(B_1)} < \frac{\varepsilon}{3}$$ In particular, if $x_1$ is another singular point in the same neighborhood with blowup $q_1$, then we have $$\|q_0 - q_1\|_{L^\infty(B_1)} \leq \frac{2\varepsilon}{3} + \delta^{-2}\|w(x_0 + \delta x, T(x_0)) - w(x_1 + \delta x, T(x_1))\|_{L^\infty(B_1)}$$ From the regularity of $T$ and $w$ on our chosen neighborhood, it follows that $$\|q_0 - q_1\|_{L^\infty(B_1)} \leq \frac{2\varepsilon}{3} + C_1\delta^{-2}|x_1 - x_0|^\alpha$$ for some $C_1$ depending only on $C$. Then it is clear that for $|x_1 - x_0|$ sufficiently small, we have $\|q_0 - q_1\|_{L^\infty(B_1)} < \varepsilon$. By equivalence of norms on ${ \mathbb{R}}^{d\times d}$, this gives $|D^2 w(x_0) - D^2 w(x_1)| = O(\varepsilon)$, and we conclude. ◻ We remark that $D^2 w(x, T(x))$ also exists in a one-sided sense for $x\in R$, with $D^2 w(x, T(x)) = \nu(x)\nu(x)^T$ for $\nu$ as in Proposition [Proposition 32](#prop:unit_normal_holder){reference-type="ref" reference="prop:unit_normal_holder"}. Thus, in light of that proposition, $D^2 w(x, T(x))$ is continuous on $R$. However, due to the jump in rank, there is no possibility of continuity from $R$ to $\Sigma^{k}$ when $k < d-1$. Using the continuous dependence of the blowup, we can subsequently apply a Whitney extension argument to obtain that singular points are contained in $C^1$ manifolds. 
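To make the rank jump behind this remark explicit, we can record the two blowup profiles side by side (a sketch; as elsewhere, $\Sigma^k$ denotes the singular points whose blowup Hessian has a $k$-dimensional kernel): $$\begin{aligned} x_0\in R &: & r^{-2}w(x_0 + rx, T(x_0)) &\to \tfrac{1}{2}\big(x\cdot \nu(x_0)\big)_+^2 \ \text{(one-sided)}, & D^2w(x_0, T(x_0)) &= \nu(x_0)\nu(x_0)^T,\\ x_0\in \Sigma^k &: & r^{-2}w(x_0 + rx, T(x_0)) &\to \tfrac{1}{2}\,x\cdot D^2w(x_0, T(x_0))\,x, & \operatorname{rank} D^2w(x_0, T(x_0)) &= d - k. \end{aligned}$$ Since a limit of rank-one matrices has rank at most one, the Hessian cannot extend continuously from $R$ into a stratum where it has rank $d - k \geq 2$, which is exactly the case $k < d - 1$. 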
Following the approach in [@petrosyan], we introduce the following lemma: **Lemma 40** (Whitney's extension theorem). *Let $K\subset { \mathbb{R}}^d$ be compact, and suppose we have a function $f:K\to { \mathbb{R}}$ and a family of degree $m$ polynomials $p_x$ indexed over $K$. Suppose that* (i) *$p_{x_0}(x_0) = f(x_0)$ for each $x_0\in K$* (ii) *$|D^k p_{x_0}(x_1) - D^kp_{x_1}(x_1)| = o(|x_0 - x_1|^{m-k})$ for $x_0, x_1\in K$ and $0\leq k \leq m$.* *Then $f$ extends to a $C^m$ function on ${ \mathbb{R}}^d$ such that $f(x) = p_{x_0}(x) + o(|x - x_0|^m)$ for all $x_0\in K$.* **Proposition 41**. *Near a point in $\Sigma^k$, $\Sigma$ is locally contained in a $C^1$ manifold of dimension $k$. In particular, $\Sigma$ is contained in countably many $C^1$ submanifolds of dimension $d-1$.* *Proof.* We fix a compact $K\subset \Sigma$ for the proof. We will apply Lemma [Lemma 40](#lem:whitney){reference-type="ref" reference="lem:whitney"} to extend the zero function on $K$, with second order Taylor polynomial $q_{x_0}$ at $x_0\in K$ given by the quadratic blowup of $w$ at each point. Specifically, this results in $$q_{x_0}(x) = \frac{1}{2}(x-x_0)\cdot D^2 w(x_0, T(x_0))(x-x_0) \hbox{ where } x_0\in K, x\in { \mathbb{R}}^d$$ The extension will give us a $C^2$ function $f$ on ${ \mathbb{R}}^d$ such that $\nabla f \equiv 0$ on $K$, after which the implicit function theorem will imply that in a neighborhood of $x_0\in K$, the set $\{ \nabla f = 0 \}$ is contained in a $C^1$ manifold of dimension $\dim\ker D^2 f(x_0) = \dim\ker D^2 w(x_0, T(x_0))$. Thus, we proceed to verify the assumptions of the lemma. First, we note that by Lemma [Lemma 27](#lem:eta_regularity){reference-type="ref" reference="lem:eta_regularity"}, $\eta$ is uniformly $C^{0,\alpha}$ in space in a spacetime neighborhood of the interface as it passes through $K$ in space. 
It follows by our uniform approximation result for the quadratic blowup at singular points, Lemma [Lemma 58](#lem:uniform_sing_blowup_scale){reference-type="ref" reference="lem:uniform_sing_blowup_scale"}, that there is a modulus of continuity $\sigma$ such that $$\label{eq:singular_blowup_convergence} \|r^{-2}w(x_0 + r(x - x_0), T(x_0)) - \frac{1}{2}(x - x_0)\cdot D^2w(x_0, T(x_0)) (x - x_0)\|_{C^1_x(B_1)} \leq \sigma(r)$$ for any $x_0\in K$. Then, if we apply this estimate to $x_0, x_1\in K$ with $T(x_0) \leq T(x_1)$, we get $$|q_{x_0}(x_1) - q_{x_1}(x_1)| = |\frac{1}{2}(x_1 - x_0)\cdot D^2w(x_0, T(x_0))(x_1 - x_0)| \leq |x_0 - x_1|^2\sigma(|x_0 - x_1|)$$ directly from ([\[eq:singular_blowup_convergence\]](#eq:singular_blowup_convergence){reference-type="ref" reference="eq:singular_blowup_convergence"}) and the fact that $w(x_1, T(x_0)) = 0$. On the other hand, if $T(x_0) > T(x_1)$, we have that $|q_{x_1}(x_0)| \leq |x_0 - x_1|^2 \sigma(|x_0 - x_1|)$ from the above, and $$|q_{x_1}(x_0) - q_{x_0}(x_1)| = |\frac{1}{2}(x_1 - x_0)\cdot(D^2w(x_1,T(x_1)) - D^2 w(x_0, T(x_0)))(x_1 - x_0)| \leq o(|x_1 - x_0|^2)$$ by the continuity of the Hessian from Proposition [Proposition 39](#prop:singular_hessian_continuous){reference-type="ref" reference="prop:singular_hessian_continuous"}. This completes the $k = 0$ case of (ii) in Lemma [Lemma 40](#lem:whitney){reference-type="ref" reference="lem:whitney"}. The verification of the $k = 1$ case of (ii) is similar, using the derivative bound from ([\[eq:singular_blowup_convergence\]](#eq:singular_blowup_convergence){reference-type="ref" reference="eq:singular_blowup_convergence"}). In general, we have $\nabla q_{x_0}(x) = D^2 w(x_0, T(x_0))(x - x_0)$. 
Then, when $T(x_0) \leq T(x_1)$, we get $$|\nabla q_{x_0}(x_1) - \nabla q_{x_1}(x_1)| = |D^2 w(x_0, T(x_0))(x_1 - x_0)| \leq |x_1 - x_0|\sigma(|x_1 - x_0|)$$ directly from ([\[eq:singular_blowup_convergence\]](#eq:singular_blowup_convergence){reference-type="ref" reference="eq:singular_blowup_convergence"}) and the fact that $\nabla w(x_1, T(x_0)) = 0$. On the other hand, if $T(x_0) > T(x_1)$, we have that $$|\nabla q_{x_1}(x_0) + \nabla q_{x_0}(x_1)| = |(D^2w(x_0, T(x_0)) - D^2w(x_1, T(x_1)))(x_1 - x_0)| \leq o(|x_1 - x_0|)$$ again by Proposition [Proposition 39](#prop:singular_hessian_continuous){reference-type="ref" reference="prop:singular_hessian_continuous"}, giving that $$|\nabla q_{x_0}(x_1) - \nabla q_{x_1}(x_1)| = |\nabla q_{x_0}(x_1)| \leq |\nabla q_{x_0}(x_1) + \nabla q_{x_1}(x_0)| + |\nabla q_{x_1}(x_0)| \leq o(|x_1 - x_0|)$$ Finally, the $k = 2$ case of (ii) in Lemma [Lemma 40](#lem:whitney){reference-type="ref" reference="lem:whitney"} is exactly continuity of the Hessian, from Proposition [Proposition 39](#prop:singular_hessian_continuous){reference-type="ref" reference="prop:singular_hessian_continuous"}. Thus, the conditions of the lemma are satisfied, which completes the proof. ◻ ![A cylindrical patch with a nearly cylindrical hole. As the hole contracts, singular points are expected to occur near the axis (dotted). Due to variation in the diameter of the hole, however, singular points may occur at different times. 
Proposition [Proposition 41](#prop:singular_manifold){reference-type="ref" reference="prop:singular_manifold"} confirms that we have the expected spatial regularity for the set $\Sigma\subset { \mathbb{R}}^d$ of points which are singular at any time.](figures/almost_cylinder.pdf){#fig:almost_cylinder} We stress that this result is for $\Sigma$, and not for the corresponding subset $\mathrm{Graph}_T(\Sigma)$ of the spacetime interface, where we use the notation introduced in [\[eq:graph\]](#eq:graph){reference-type="eqref" reference="eq:graph"}. Since $T$ is only known to be Hölder continuous on $\Sigma$, we obtain the weaker result that $\mathrm{Graph}_T(\Sigma)$ is locally contained in $C^{0,\alpha}$ manifolds of dimension $d-1$. Nevertheless, we are able to apply this to establish control in Hausdorff dimension over the interface. The Hausdorff dimension of the spatial interface has previously been studied in [@pqm] for a similar problem, where the obstacle problem satisfied by $w$ was applied to conclude that $\partial \Omega_t$ has locally finite $(d-1)$-dimensional Hausdorff measure for each $t$. Using the Lipschitz regularity of $T$ near regular points and our spatial control over singular points, we can study the spacetime interface $\mathrm{Graph}_T(\mathcal{O})$ for the first time and show that it has the expected Hausdorff dimension $d$. We summarize the consequences of Proposition [Proposition 41](#prop:singular_manifold){reference-type="ref" reference="prop:singular_manifold"} with the following statements. **Corollary 42**. *We have, for $\alpha$ as in Theorem [Theorem 26](#thm:hitting_time_holder){reference-type="ref" reference="thm:hitting_time_holder"}:* (i) *$\partial \Omega_t$ has finite $(d-1)$-dimensional Hausdorff measure.* (ii) *$\Sigma$ has locally finite $(d-1)$-dimensional Hausdorff measure. In particular, $\Sigma_t$ has zero $(d-1)$-dimensional Hausdorff measure for all but countably many $t$ in $(0,\infty)$, and for a.e. 
$t\in (0, \infty)$, $\Sigma_t$ has Hausdorff dimension at most $d - 1 - \alpha$.* (iii) *$\mathrm{Graph}_T(\mathcal{O})$ has Hausdorff dimension $d$, and decomposes as $\mathrm{Graph}_T(R)\cup \mathrm{Graph}_T(\Sigma)$, where the first set is relatively open with locally finite $d$-dimensional Hausdorff measure, and the second set has locally finite $(d - \alpha)$-dimensional Hausdorff measure.* *Proof.* We use the fact that $\rho\in L^\infty_t BV_x([0, \tau]; { \mathbb{R}}^d)$, and thus $\Omega_t$ is a set of finite perimeter. In particular, we can consider the reduced boundary $\partial^* \Omega_t$, which has finite $(d-1)$-dimensional Hausdorff measure and contains $R_t$, by the local regularity of the boundary at those points. On the other hand, $\Sigma_t$ is locally contained in a $C^1$ manifold of dimension $(d-1)$. In fact, since $\Sigma_t$ is compact, $\Sigma_t$ is contained in a bounded $C^1$ manifold, which then also has finite $(d-1)$-dimensional measure, so we have $$\mathcal{H}^{d-1}(\partial \Omega_t) \leq \mathcal{H}^{d-1}(\partial^* \Omega_t) + \mathcal{H}^{d-1}(\Sigma_t) < \infty$$ Since $\Sigma$ is locally contained in $C^1$ manifolds of dimension $d-1$, the $(d-1)$-dimensional Hausdorff measure on $\Sigma$ is locally finite. In particular, it is also $\sigma$-finite, which implies that there cannot be uncountably many $t$ for which $\Sigma_t$ has positive $(d-1)$-dimensional measure. The improvement to $(d-1-\alpha)$ dimension at a.e. time follows from a geometric measure theory lemma of [@figalli_generic]. Since $\Sigma$ has dimension $d-1$ and $T\in C^{0,\alpha}_{\mathop{\mathrm{\textup{loc}}}}(\mathcal{O})$, Corollary 7.8 of that paper directly gives the result. For the final statement, we use the general result that the graph of a $C^{0,\alpha}$ function on a set of Hausdorff dimension $s$, for $\alpha\in (0, 1]$ and $s \geq 0$, has Hausdorff dimension at most $s + 1 - \alpha$. 
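The graph bound just quoted follows from a short covering computation, which we sketch for completeness: let $E$ have Hausdorff dimension $s$, let $T$ be $C^{0,\alpha}$ on $E$, and fix $\varepsilon > 0$. Since $\mathcal{H}^{s+\varepsilon}(E) = 0$, we may cover $E$ by balls $B_{r_i}(x_i)$ with $\sum_i r_i^{\,s+\varepsilon}$ arbitrarily small; the oscillation of $T$ on $E\cap B_{r_i}(x_i)$ is at most $Cr_i^{\alpha}$, so the graph over each ball is covered by $\lesssim r_i^{\alpha - 1}$ balls of radius $2r_i$. Hence $$\mathcal{H}^{\,s + 1 - \alpha + \varepsilon}\big(\mathrm{Graph}_T(E)\big) \lesssim \sum_i r_i^{\alpha - 1}\,(2r_i)^{\,s + 1 - \alpha + \varepsilon} \lesssim \sum_i r_i^{\,s + \varepsilon},$$ which can be made arbitrarily small, so $\dim_{\mathcal{H}} \mathrm{Graph}_T(E) \leq s + 1 - \alpha + \varepsilon$; sending $\varepsilon\to 0$ gives the claim. 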
In particular, since $T$ is locally Lipschitz on $R$, which is open in ${ \mathbb{R}}^d$, and locally $C^{0,\alpha}$ on $\Sigma$, which has Hausdorff dimension at most $d-1$, we get the local control in Hausdorff measure for the graphs of those sets. Then we can write $\mathrm{Graph}_T(\mathcal{O})$ as a countable union of sets of Hausdorff dimension at most $d$, so we conclude. ◻ We remark that a natural open question is whether the space-time interface has locally finite $d$-dimensional Hausdorff measure near singular points. This would follow, for example, if $T$ were known to be uniformly Lipschitz near $\Sigma$. ## Speculative results We finish our treatment of the singular set by noting that stronger generic control over $\Sigma_t$ is possible with slightly stronger regularity than currently known: namely, when $T$ is Lipschitz. **Proposition 43**. *If $T\in C^{0,1}_{\mathop{\mathrm{\textup{loc}}}}(\mathcal{O})$, then $\Sigma_t$ has $(d-2)$-Hausdorff measure 0 for a.e. $t\in (0,\infty)$.* *Proof.* Here we follow an argument from [@monneau], originally applied to a hitting time for the constant Laplacian obstacle problem with a time-varying condition on the fixed boundary. Since $T$ is Lipschitz, Proposition 4.6 in [@monneau] implies that for any compact subset $K$ of $\Sigma$, $$\limsup_{x,y\in K, |x-y|\to 0} \frac{|T(x) - T(y)|}{|x - y|} = 0$$ Then from the coarea formula, we have $$\int_{\Sigma^{d-1}} |\nabla T_{|_{\Sigma^{d-1}}}|\,d\mathcal{H}^{d-1} = \int_0^\infty \mathcal{H}^{d-2}(T_{|_{\Sigma^{d-1}}}^{-1}(t))\,dt = \int_0^\infty \mathcal{H}^{d-2}(\Sigma^{d-1}_t)\,dt$$ By the vanishing limsup above, the integrand on the left is 0, so $\Sigma^{d-1}_t$ has $(d-2)$-Hausdorff measure 0 for a.e. $t$. On the other hand, $\Sigma^k_t$ has $(d-2)$-Hausdorff measure 0 for $k < d-2$ and all $t$, while $\Sigma^{d-2}_t$ has positive $(d-2)$-Hausdorff measure for at most countably many $t$, so the result follows. 
◻ We finish our treatment of the regular set by investigating the regularity improvement possible under the assumption that no singular points occur at some time. As suggested by Remark [Remark 33](#rem:regular_points_nondifferentiable){reference-type="ref" reference="rem:regular_points_nondifferentiable"}, an assumption of this form is required to go beyond the regularity established in Corollary [Corollary 36](#cor:regular_points_t_lipschitz){reference-type="ref" reference="cor:regular_points_t_lipschitz"}. The idea here will be to apply the regularity of $\nu$ from Proposition [Proposition 32](#prop:unit_normal_holder){reference-type="ref" reference="prop:unit_normal_holder"} in conjunction with global Schauder estimates to prove time regularity of $p$ and higher spatial regularity of $T$. As a preliminary step, we show the relationship between $\nabla p$ and $\nabla T$. **Lemma 44**. *Suppose that for some open $U\subset R$, we have that $\nabla p$ is continuous in spacetime on $(U\times (t_0, t_1))\cap \overline{\{ (x,t) : \rho(x,t) = 1 \}}$ for some $t_0, t_1$ with $\inf T(U) < t_0 < t_1 < \sup T(U)$. Then $T$ is continuously differentiable on $U\cap T^{-1}((t_0, t_1))$ with $\nabla T(x) = -\frac{\nabla p(T(x), x)}{|\nabla p(T(x), x)|^2}$.* *Proof.* Let $e$ be a vector with positive component in the inward normal direction to $\Omega_{T(x)}$ at $x$; that is, with $e\cdot \nu(x) < 0$. Then, we have $$\nabla w(T(x), x + he) = \mathop{\mathrm{sgn}}_+(T(x) - T(x + he))\int_{T(x + he)}^{T(x)} \nabla p(t, x + he)\,dt$$ If we divide both sides by $h$ and let $h\to 0$, then the left side converges to $(\nu(x)\cdot e)\nu(x)$, from the quadratic blowup. 
If the right side is nonzero, we rewrite it as $$\frac{T(x) - T(x + he)}{h}\nabla p(T(x), x) + \frac{1}{h}\int_{T(x + he)}^{T(x)} \nabla p(t, x + he) - \nabla p(T(x), x)\,dt$$ Using the Lipschitz continuity of $T$ from Corollary [Corollary 36](#cor:regular_points_t_lipschitz){reference-type="ref" reference="cor:regular_points_t_lipschitz"} and the spacetime continuity of $\nabla p$, the second term vanishes as $h\to 0$. As we have already seen, $\nabla p$ cannot vanish on the interface due to the Hopf lemma, so in the limit, we get $$(\nu(x)\cdot e)\nu(x) = -\partial_e T(x)\nabla p(T(x), x)$$ Here, $\partial_e T(x)$ refers to the one-sided derivative of $T$ at $x$ in direction $e$. Since $\nabla p(T(x), x)$ has the same direction as $-\nu(x)$, we get that $\partial_e T(x) = \frac{\nu(x)\cdot e}{|\nabla p(T(x), x)|}$. Then, it is an elementary result that a continuous function on ${ \mathbb{R}}$ with a continuous left derivative is differentiable. Applying it here, we get that $T$ has all two-sided directional derivatives, and we can read from the formula that we must have $$\nabla T(x) = \frac{\nu(x)}{|\nabla p(T(x), x)|} = -\frac{\nabla p(T(x), x)}{|\nabla p(T(x),x)|^2}$$ ◻ **Proposition 45**. *If for some interval $(t_0, t_1)$, we have that $\Sigma_t$ is empty for every $t\in (t_0, t_1)$, then $T\in C^{1,1/2 - \varepsilon}_{\mathop{\mathrm{\textup{loc}}}}(\Omega_{t_1}\setminus \overline{\Omega_{t_0}})$ for every $\varepsilon\in (0, \frac{1}{2})$.* *Proof.* From the previous lemma, we need to show spacetime continuity of $\nabla p$ to establish differentiability of $T$. We do this using the $C^{1,\alpha}$ global Schauder estimates, which, applied to a function $u$ on a $C^{1,\alpha}$ domain $\Omega$, give that $$\|u\|_{C^{1,\alpha}(\overline{\Omega})} \leq C(\|\Delta u\|_{L^\infty(\Omega)} + \|u\|_{C^{1,\alpha}(\partial \Omega)})$$ for some $C$ which depends only on $\alpha$ and $\Omega$. 
In our case, $C$ will actually be locally uniform in $t$ for $\Omega_t$, since the global Schauder estimates are proved by patching interior and boundary estimates, and we can cover the boundary with finitely many balls in which it evolves as a uniformly $C^{1,1}$ graph for some time interval. Then, specifically, we will apply the Schauder estimate to $p(t) - p(s)$ on $\Omega_s$, for $t > s$, to bound $\|\nabla p(t) - \nabla p(s)\|_{L^\infty(\Omega_s)}$ in terms of $|t - s|$. Since $n\in C^{0,1-}_t L^\infty_x$ by Lemma [\[lem:n_regularity\]](#lem:n_regularity){reference-type="ref" reference="lem:n_regularity"}, the work will be in controlling $p(t)$ on $\partial \Omega_s$. Intuitively, since $p(t)$ and its tangential derivative vanish on $\partial\Omega_t$, we expect that if the free boundary has not rotated too much between times $s$ and $t$, then these should be close to 0 on $\partial \Omega_s$. We make this quantitative using Proposition [Proposition 32](#prop:unit_normal_holder){reference-type="ref" reference="prop:unit_normal_holder"}. First, since we have a locally uniform in time bound on $\nabla p$, we get by the radial supersolution that $D(\partial \Omega_s, \partial \Omega_t) \leq C|t - s|$ for some locally uniform in time constant, where $D$ denotes Hausdorff distance as in Definition [Definition 50](#def:reifenberg_vanishing){reference-type="ref" reference="def:reifenberg_vanishing"}. Then, since $p(t)$ vanishes on $\partial \Omega_t$, we integrate along shortest-distance paths and use the gradient bound again to conclude that $\|p(t) - p(s)\|_{L^\infty(\partial \Omega_s)} = \|p(t)\|_{L^\infty(\partial \Omega_s)} \leq C|t-s|$ for some uniform $C$. Next, we bound the tangential part of $\nabla p(t)$ on $\partial \Omega_{s}$. Recalling our definition of $\nu(x)$ as the outward unit normal to $\Omega_{T(x)}$ at $x$, we denote the projection onto the tangential part as $P^\perp_{\nu(x)}$. 
Then for $x\in \partial \Omega_{s}$, we let $\tilde{x}\in \partial \Omega_{t}$ be the distance minimizer so that $|x - \tilde{x}| \leq C|t-s|$, and we have $$\begin{aligned} |P^\perp_{\nu(x)}\nabla p(x)| &\leq |(P^\perp_{\nu(\tilde{x})} - P^\perp_{\nu(x)})\nabla p(\tilde{x})| + |P^\perp_{\nu(x)}(\nabla p(\tilde{x}) - \nabla p(x))| \end{aligned}$$ where all pressures are at time $t$, and we use that the tangential derivative of $p(t)$ on $\partial \Omega_{t}$ vanishes. The first term is controlled by the continuity of $\nu$ and our uniform bound on the pressure gradient, so by Proposition [Proposition 32](#prop:unit_normal_holder){reference-type="ref" reference="prop:unit_normal_holder"}, it contributes $C|t - s|^{1/2}$. The second term is controlled by the $C^{1,\alpha}$ regularity of $p$, so it contributes $C|t - s|^\alpha$. Thus, choosing $\alpha > \frac{1}{2}$, we have $\|p(t) - p(s)\|_{C^1(\partial \Omega_{s})} \leq C|t - s|^{1/2}$. Finally, we improve this to Hölder by interpolation. Specifically, since $p(\tau)$ is $C^{1,\alpha}$ on $\Omega_{\tau}$, uniformly in $\tau$, for any $\alpha\in (0,1)$, we have $$\frac{|\nabla p(t,x) - \nabla p(t,y)|}{|x-y|^\alpha} \leq C(\alpha)$$ We want the Hölder seminorm of $\nabla p(t)$ on $\partial \Omega_s$ to be small, so at small scales, we rearrange this to $$\frac{|\nabla p(t,x) - \nabla p(t,y)|}{|x-y|^\beta} \leq C(\alpha)|x-y|^{\alpha - \beta}$$ for $\beta < \alpha$ to be chosen. For large scales, we use the $L^\infty$ bound for the tangential part of $\nabla p(t)$ on $\partial \Omega_s$, which gives $\frac{C|t - s|^{1/2}}{|x-y|^\beta}$ on the right hand side. Optimizing, the critical scale is $|x-y|\sim |t - s|^{1/2\alpha}$, and the $C^{1,\beta}$ seminorm will scale as $C(\alpha)|t-s|^{\frac{\alpha - \beta}{2\alpha}}$. 
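To spell out the optimization here, write $\rho = |x - y|$; for each $\rho$, the two available bounds give $$\frac{|\nabla p(t,x) - \nabla p(t,y)|}{\rho^{\beta}} \lesssim \min\Big( C(\alpha)\,\rho^{\alpha - \beta},\ C\,|t-s|^{1/2}\rho^{-\beta} \Big),$$ an increasing and a decreasing function of $\rho$, so the supremum of the minimum occurs where the branches cross: $C(\alpha)\rho_*^{\alpha - \beta} = C|t-s|^{1/2}\rho_*^{-\beta}$ gives $\rho_* \sim |t-s|^{1/(2\alpha)}$, and substituting back, $$\sup_{\rho > 0}\ \min\Big( C(\alpha)\,\rho^{\alpha - \beta},\ C\,|t-s|^{1/2}\rho^{-\beta} \Big) \sim C(\alpha)\,\rho_*^{\alpha - \beta} \sim C(\alpha)\,|t-s|^{\frac{\alpha - \beta}{2\alpha}},$$ matching the scaling of the $C^{1,\beta}$ seminorm stated above. 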
In particular, this shows that by choosing $\alpha$ close to 1 and $\beta$ close to 0, we can get arbitrarily close to $\frac{1}{2}$, so for each $\varepsilon > 0$, we have some $\beta > 0$ and some $C$ such that $$\|p(t) - p(s)\|_{C^{1,\beta}(\partial \Omega_s)} < C|t - s|^{1/2 - \varepsilon}$$ Then, combining this with the regularity of the nutrient from Lemma [\[lem:n_regularity\]](#lem:n_regularity){reference-type="ref" reference="lem:n_regularity"}, we get $\|p(t) - p(s)\|_{C^{1,\beta}(\overline{\Omega}_{s})} \leq C|t - s|^{1/2 - \varepsilon}$ from the boundary Schauder estimate. We conclude that for points $(x,t)$ and $(y,s)$ in the region with $t > s$, we have $$\label{eq:grad_p_spacetime_cts} |\nabla p(x,t) - \nabla p(y,s)| \leq |\nabla p(x,t) - \nabla p(y,t)| + |\nabla p(y,t) - \nabla p(y,s)| \leq C|x-y|^{\alpha} + C|t-s|^{1/2 - \varepsilon}$$ for any fixed $\alpha\in (0,1)$ and $\varepsilon > 0$, which proves the spacetime continuity of $\nabla p$. Then, by the lemma, $\nabla T(x) = -\frac{\nabla p(x, T(x))}{|\nabla p(x, T(x))|^2}$, where we have $|\nabla p(x, T(x))|$ locally uniformly bounded away from 0 by the Hopf lemma. In particular, it follows from ([\[eq:grad_p\_spacetime_cts\]](#eq:grad_p_spacetime_cts){reference-type="ref" reference="eq:grad_p_spacetime_cts"}) and the Lipschitz continuity of $T$ that $$|\nabla p(x, T(x)) - \nabla p(y, T(y))| \leq C|x - y|^{1/2 - \varepsilon}$$ so we obtain that $\nabla T\in C^{0,1/2 - \varepsilon}_{\mathop{\mathrm{\textup{loc}}}}$ directly from its formula. ◻ This implies that the free boundary has $C^{2, 1/2 - \varepsilon}$ regularity at the relevant times. Subsequently, the regularity of $\nu$ in Proposition [Proposition 32](#prop:unit_normal_holder){reference-type="ref" reference="prop:unit_normal_holder"} can be improved using second order approximations to the free boundary, leading to a minor improvement in the Hölder exponent. 
# Appendix: Obstacle problem with $C^{0,\alpha}$ source In this section, we collect several known facts about obstacle problems with Hölder continuous data. Many results for the model obstacle problem with constant source carry over to the Hölder continuous case with minor modifications, as noted in [@caffarelli98] and [@weiss]. We will cite several results from [@blank], [@monneau], and [@colombo], which offer careful treatments of this topic. Let us consider solutions $u$ to the obstacle problem $$\label{eq:obst} \Delta u = (1 + f)\chi_{\{u > 0\}}$$ on $B_1$, where $f$ is known a priori to vanish on the free boundary $\Gamma(u) = \partial \{ u > 0\}$. When $f$ is sufficiently regular, this equation has similar local behavior to the model case where $f\equiv 0$; in particular, we make the assumption $f\in C^{0,\alpha}(B_1)$ for some $\alpha\in (0,1)$. Regarding notation, we write $\Omega(u) = \{ u > 0 \}$, $\Lambda(u) = \{ u = 0\}$, and in many cases we will refer to a tuple $(u,f)$ as the solution to ([\[eq:obst\]](#eq:obst){reference-type="ref" reference="eq:obst"}). We take $\lambda = \inf_{B_1}(1 + f)$ and $\mu = \sup_{B_1}(1 + f)$, and we will have the standing assumption that $\lambda > \frac{1}{2}$, which holds if $0\in \Gamma(u)$ and $[f]_{C^{0,\alpha}(B_1)}$ is sufficiently small. On the free boundary, we have $u = 0$ and $\nabla u = 0$, so we should expect $u$ to have quadratic growth away from the free boundary in the positive set. Of course, $u$ is not regular enough to admit a second order Taylor expansion due to the jump in the second derivatives along the free boundary, but nevertheless, we recover several results to the same effect: **Lemma 46** (Quadratic nondegeneracy, [@blank] Thm. 2.1). *If $0\in \overline{\Omega(u)}$, then for all $r < 1$, $$\sup_{B_r} u \geq \frac{\lambda}{2d}r^2$$* **Lemma 47** (Quadratic bound, [@blank] Thm. 2.4). 
*If $0\in \Gamma(u)$, then for all $r < \frac{1}{2}$, $$\sup_{B_r} u \leq C(d)\mu r^2$$* **Lemma 48** (Regularity up to the free boundary, [@blank] Thm. 2.3). *If $0\in \Gamma(u)$, then $\|u\|_{C^{1,\beta}(B_{1})} \leq C(d, \beta) \mu$ for all $\beta\in (0, 1)$.* We adopt the following notation for the quadratic rescalings: $$\label{eq:quadratic_rescaling} u_r(x) = r^{-2}u(rx)$$ $$u_0(x) = \lim_{r\to 0^+} u_r(x), \hbox{ provided this limit exists }$$ The combined results of Lemmas [Lemma 46](#lem:obst_nondegen){reference-type="ref" reference="lem:obst_nondegen"}, [Lemma 47](#lem:obst_quadratic_bound){reference-type="ref" reference="lem:obst_quadratic_bound"}, and [Lemma 48](#lem:obst_regularity){reference-type="ref" reference="lem:obst_regularity"} can be used to derive Lemma [Lemma 13](#lem:quad_blowup_cpt){reference-type="ref" reference="lem:quad_blowup_cpt"}: the $C^{1,\beta}$ compactness of the quadratic blowup sequence $(u_r)$. As we discuss in the main paper with Lemma [Lemma 28](#lem:caffarelli_dichotomy){reference-type="ref" reference="lem:caffarelli_dichotomy"}, this compactness improves to convergence of the blowup sequence when $f\in C^{0,\alpha}$, and we classify points as regular or singular based on the blowup profile. In general, regular points can be identified at finite scales, using criteria such as Lemma [Lemma 29](#lem:modified_regular_pt_criterion){reference-type="ref" reference="lem:modified_regular_pt_criterion"}. The quadratic blowup proves to be a key tool in understanding local behavior of the free boundary. For regular points, we use comparison and stability results to show flatness of the free boundary, which leads to regularity of the boundary. For singular points, we use monotonicity formulas and compactness results to show that they can be locally contained in $C^1$ manifolds. Since the treatment of these cases diverges considerably, we will split them into the next two sections. 
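Before turning to the two cases, we record the elementary computation behind the blowup analysis (a sketch; it uses only that $f$ vanishes on the free boundary): under the rescaling [\[eq:quadratic_rescaling\]](#eq:quadratic_rescaling){reference-type="eqref" reference="eq:quadratic_rescaling"}, $u_r$ solves an obstacle problem of the same form, $$\Delta u_r(x) = (\Delta u)(rx) = \big(1 + f(rx)\big)\chi_{\{u_r > 0\}}(x), \qquad \sup_{B_1}\,|f(r\,\cdot)| \leq [f]_{C^{0,\alpha}(B_1)}\,r^{\alpha} \ \text{ when } 0\in \Gamma(u),$$ so the source oscillation vanishes as $r\to 0^+$, and any $C^1$ limit of the rescalings solves the constant-source model problem $\Delta u_0 = \chi_{\{u_0 > 0\}}$. 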
## Regular points In this section, we review several results from [@blank] which connect the regularity of the free boundary at regular points and the scale at which this regularity is achieved with the regularity of the source term and the scale at which the zero set becomes large. The regularity of the free boundary can be summarized as follows: **Lemma 49** ([@blank] Thm. 7.2). *If $f\in C^{0,\alpha}$ with $\alpha\in (0, 1]$, then in a neighborhood of a regular point, the free boundary is a $C^{1,\alpha}$ graph.* In light of this result, we allow the case $\alpha = 1$ for the rest of this subsection. For the main paper, we require a quantified version of this lemma, in order to apply it uniformly to the family of obstacle problems $w(\cdot, t)$. Thus, we will retrace Blank's approach in this section while keeping track of its dependencies. **Definition 50**. Let $S\subset { \mathbb{R}}^d$ be a compact set. We define the modulus of flatness, $$\theta(r) = \sup_{0 < \rho \leq r} \sup_{x\in S} \inf_L \frac{D(L\cap B_\rho(x), S\cap B_\rho(x))}{\rho}$$ where the inner infimum is over all hyperplanes $L$ containing $x$, and $D$ denotes Hausdorff distance: $$D(A,B) = \max(\sup_{x\in A} \mathop{\mathrm{dist}}(x, B), \sup_{y\in B} \mathop{\mathrm{dist}}(y, A))$$ We say that $S$ is $\delta$-Reifenberg flat if there exists $R$ such that $\theta(r) \leq 2\delta$ for all $r < R$, and Reifenberg vanishing if $\theta(r)\to 0$. **Lemma 51** ([@blank] Theorem 6.7). *Let $S$ be a compact Reifenberg vanishing set with modulus of flatness $\theta$ satisfying $\int_0^1 \frac{\theta(r)}{r}\,dr < \infty$. Then there exist constants $C_0, C_1$ such that if $\int_0^\rho \frac{\theta(r)}{r}\,dr < C_0$, then there exists a coordinate system in which $S\cap B_{\rho/2}$ is the graph of a $C^1$ function $g$, such that $\nabla g$ is continuous with modulus of continuity $C_1 \int_0^r \frac{\theta(s)}{s}\,ds$.* **Lemma 52** ([@blank] Theorem 7.1). 
*Suppose that $u$ solves the obstacle problem $\Delta u = f\chi_{\{u > 0\}}$ in $B_1$ with $\lambda \leq f \leq \mu$ and $f$ Dini continuous with modulus $\sigma$. If $0$ is a regular point and the free boundary is $\delta$-Reifenberg flat in $B_{3/4}$ for some sufficiently small $\delta$, then the modulus of flatness of the free boundary inside $B_{1/2}$ is controlled by $C\sigma(r)$.* In particular, these two results imply the $C^{1,\alpha}$ regularity of the free boundary near regular points when $f\in C^{0,\alpha}$ with $f(0) = 1$, at a scale depending on $[f]_{C^{0,\alpha}(B_1)}$ and the scale at which the $\delta$-Reifenberg flatness is achieved. For this, we have another result from Blank: **Lemma 53** ([@blank] Theorem 6.4). *Let $\varepsilon\in (0, \frac{1}{4})$, and suppose we have $f$ with $\lambda \leq f \leq \mu$ and $u$ a solution to the obstacle problem $\Delta u = f\chi_{\{u > 0\}}$ in $B_1$. If $\mu - \lambda$ is sufficiently small, then there exist constants $r_0, \tau, \delta\in (0,1)$ depending on $d, \mu, \lambda, \varepsilon$ for which the following holds:* *If for some $t \leq r_0$, we have $$\frac{|B_t\cap \{u = 0\}|}{|B_t|} > \varepsilon,$$ then $\overline{B_{\tau t}}\cap \partial\{u > 0\}$ is $\delta$-Reifenberg flat.* *Moreover, as $\mu - \lambda \to 0$, $\delta\to 0$. In particular, if $f$ is continuous, $\delta$ can be taken to be arbitrarily small (with all parameters now also depending on the modulus of continuity of $f$).* To be more precise, suppose $u$ solves the obstacle problem $\Delta u = f\chi_{\{u > 0\}}$ in $B_1$ with $f$ taking values in $[\lambda, \mu]$, and write $u_c$ for the solution to the obstacle problem $\Delta u_c = c\chi_{\{u_c > 0\}}$ such that $u_c|_{\partial B_1} = u$. Then $\{ u_\lambda = 0\} \subset \{ u = 0\} \subset \{u_\mu = 0 \}$. Moreover, if $0$ is a regular point for $u$, then there exists $c\in [\lambda, \mu]$ such that $0$ is a regular point for $u_c$. 
Then Blank's uniform stability theorem for regular points ([@blank] Theorem 5.4) gives that in $B_{1/2}$, there is a universal $C$ such that for any $c'$, we have $d(FB(u_c), FB(u_{c'})) \leq C|c - c'|$, where $FB(v)$ denotes the free boundary of $v$. Then we get flatness of $FB(u)$ by trapping it between $FB(u_\lambda)$ and $FB(u_\mu)$ and using the stability and $C^{1,\alpha}$ regularity of the constant-source free boundaries. As we zoom in, the $C^{1,\alpha}$ seminorm goes to 0, and if $f$ is continuous, $|\mu - \lambda|\to 0$, so we can get $\delta$-Reifenberg flatness with arbitrarily small $\delta$. In particular, the $C^{1,\alpha}$ seminorm is uniformly bounded, depending only on the scale at which the density of the zero set is sufficiently large, and the rate at which $|\mu - \lambda|\to 0$ depends only on the modulus of continuity of $f$. Thus, we can replace the hypothesis of $\delta$-Reifenberg flatness in Lemma [Lemma 51](#lem:flat_implies_smooth){reference-type="ref" reference="lem:flat_implies_smooth"} to conclude: **Lemma 54**. *Suppose that $(u, f)$ solve ([\[eq:obst\]](#eq:obst){reference-type="ref" reference="eq:obst"}) with $f\in C^{0,\alpha}$ for $\alpha\in (0, 1]$. Then there exist $r_0, \varepsilon_0$ such that if $0$ is a free boundary point and $\frac{|B_t\cap \{u = 0\}|}{|B_t|} > \varepsilon_0$ for some $t \leq r_0$, then there exists $r = r(t,[f]_{C^{0,\alpha}})$ such that the free boundary is a $C^{1,\alpha}$ graph in $B_r$, with $C^{1,\alpha}$ seminorm controlled by $[f]_{C^{0,\alpha}(B_1)}$ and $r$.* ## Singular points In this section, we use monotonicity formulas to study the continuity of the blowup limit at singular points and the rate of convergence for the blowup sequence. First, we have the following result: **Lemma 55** ([@colombo], Thm 5). *Restricted to the singular set, $D^2u$ is continuous with a logarithmic modulus of continuity. 
In particular, in a neighborhood of a singular point, the singular set is contained in a $C^{1,\log}$ manifold of dimension $\dim \ker D^2 u$.* The result of [@colombo] is obtained using an epiperimetric inequality to control the Weiss monotonicity formula, introduced in [@weiss]. For simplicity, we will instead consider the related Monneau monotonicity formula, at the cost of the explicit logarithmic modulus of continuity. The following result is part of the proof of [@monneau] Theorem 1.9. **Lemma 56**. *Define $$\label{eq:monneau_mf} \Xi_u^q(r) = r^{-(d+3)}\int_{\partial B_r} (u-q)^2 = \int_{\partial B_1} (u_r - q)^2$$ where $u$ solves ([\[eq:obst\]](#eq:obst){reference-type="ref" reference="eq:obst"}) with $0$ as a singular point, and $q$ is a quadratic form $q(x) = \frac{1}{2}x\cdot Qx$ with $Q\geq 0$, $\operatorname{tr} Q = 1$. Then $$\frac{d}{dr}\Xi_u^q(r)\geq -Cr^{\alpha - 1}$$ where $C = C([f]_{C^{0,\alpha}})$. In particular, the limit $\Xi_u^q(0+)$ exists.* As a corollary, we can show that near a singular point, $u$ approximates a global solution at a uniform scale. This extends Lemma 13 of [@caffarelli98] to the $C^{0,\alpha}$ source case. We break the proof into two steps. First, we will prove the following slightly weaker claim: **Lemma 57**. *Let $u$ solve [\[eq:obst\]](#eq:obst){reference-type="eqref" reference="eq:obst"} with 0 as a singular point. Then for every $\varepsilon > 0$, there exists $\delta = \delta(\varepsilon, [f]_{C^\alpha})$ such that $\|u_{\delta} - q\|_{C^1(B_1)} < \varepsilon$, for some $q$ of the form $q(x) = \frac{1}{2}x\cdot Qx$ with $Q$ a positive semidefinite matrix of trace 1. Here, we use the notation defined in [\[eq:quadratic_rescaling\]](#eq:quadratic_rescaling){reference-type="eqref" reference="eq:quadratic_rescaling"}.* *Proof.* Suppose for contradiction that we can find a sequence $v^k$ solving $\Delta v^k = (1 + f^k)\chi_{\{v^k > 0\}}$ on $B_1$ such that the result fails along the sequence $v^k_{1/k}$. 
That is, for each $k$, and every $q$ of the form in the statement of the lemma, $$\label{eq:sing_approximation_failure} \|v^k_{1/k} - q\|_{C^1(B_1)} > \varepsilon.$$ Then the sequence $v^k_{1/k}$, defined on the expanding balls $B_k$, converges along a subsequence, in $C^1$ on all compact sets, to some $v^\infty$. Since the $f^k$ are uniformly $C^\alpha$, the sequence $f^k(\frac{x}{k})$ converges locally uniformly to 0. It follows that $v^\infty$ is a nonnegative solution to $\Delta v^\infty = \chi_{\{v^\infty > 0\}}$ on ${ \mathbb{R}}^d$ with $v^\infty(0) = 0$. Next, we show that the zero set of $v^\infty$ has empty interior. Suppose otherwise, so that there is some $B_r(x)\subset B_1$ such that $v^\infty\equiv 0$ on $B_r(x)$. This implies that $v^k_{1/k}$ is $o(1)$ on $\partial B_r(x)$ as $k\to \infty$, and an application of the nondegeneracy bound Lemma [Lemma 46](#lem:obst_nondegen){reference-type="ref" reference="lem:obst_nondegen"} yields that $v^k_{1/k}\equiv 0$ on $B_{r/2}(x)$ for $k$ sufficiently large. Then, since the zero set of $v^\infty$ has empty interior, all points in the zero set are singular free boundary points, and $v^\infty$ is twice differentiable at those points. In particular, we get that $v^\infty$ solves $\Delta v^\infty = 1$. As noted in the remarks after Lemma 13 of [@caffarelli98], this, along with the quadratic growth estimate, Lemma [Lemma 47](#lem:obst_quadratic_bound){reference-type="ref" reference="lem:obst_quadratic_bound"}, implies that $v^\infty$ is a quadratic polynomial of the type in the statement above. Thus, taking $q = v^\infty$, we get a contradiction to [\[eq:sing_approximation_failure\]](#eq:sing_approximation_failure){reference-type="eqref" reference="eq:sing_approximation_failure"} for $k$ sufficiently large, which completes the proof. ◻ **Lemma 58**. 
*For every $\varepsilon > 0$, there exists $\delta = \delta(\varepsilon, [f]_{C^{0,\alpha}(B_1)})$ such that $\|u_\delta - u_0\|_{C^1(B_1)} < \varepsilon$, where we use the notation of [\[eq:quadratic_rescaling\]](#eq:quadratic_rescaling){reference-type="eqref" reference="eq:quadratic_rescaling"}. Equivalently, we have $\|u - u_0\|_{C^1(B_\delta)} = o(\delta^2)$ as $\delta\to 0$, where the little $o$ depends only on $[f]_{C^{0,\alpha}(B_1)}$.* *Proof.* Let $q$ be the quadratic form given by the previous lemma. We apply Monneau's monotonicity formula, $\Xi_u^q$, defined in [\[eq:monneau_mf\]](#eq:monneau_mf){reference-type="eqref" reference="eq:monneau_mf"}. The derivative bound from Lemma [Lemma 56](#lem:monneau_mf){reference-type="ref" reference="lem:monneau_mf"} gives $$\Xi_u^q(0+) - \Xi_u^q(\delta) \leq C\delta^\alpha.$$ We have $\|u_\delta - q\|_{C^1(B_1)} < \varepsilon$ from the lemma, so it follows that $\|u_\delta - q\|_{L^2(\partial B_1)} < C\varepsilon$ for some dimensional constant $C$. Then $\|u_0 - q\|_{L^2(\partial B_1)}^2 \leq C\varepsilon^2 + C\delta^\alpha$. But now we recall that $u_0, q$ are both quadratic forms, and so by equivalence of norms on ${ \mathbb{R}}^{d\times d}$, there exists a dimensional constant for which $\|u_0 - q\|_{C^1(B_1)} \leq C\|u_0 - q\|_{L^2(\partial B_1)}$. Combining our estimates for $\|u_0 - q\|_{C^1(B_1)}$ and $\|u_{\delta} - q\|_{C^1(B_1)}$, we conclude. ◻
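For completeness, we note how the derivative bound of Lemma [Lemma 56](#lem:monneau_mf){reference-type="ref" reference="lem:monneau_mf"} yields the existence of the limit $\Xi_u^q(0+)$ used above: integrating over $[r_1, r_2]$ with $0 < r_1 < r_2$,

```latex
\Xi_u^q(r_2) - \Xi_u^q(r_1)
  \;=\; \int_{r_1}^{r_2} \frac{d}{dr}\,\Xi_u^q(r)\,dr
  \;\geq\; -C \int_{r_1}^{r_2} r^{\alpha-1}\,dr
  \;=\; -\frac{C}{\alpha}\bigl(r_2^{\alpha} - r_1^{\alpha}\bigr),
```

so $r \mapsto \Xi_u^q(r) + \frac{C}{\alpha}r^{\alpha}$ is nondecreasing; since $\Xi_u^q \geq 0$, this function is bounded below, and hence its limit, and therefore $\Xi_u^q(0+)$, exists as $r \downarrow 0$.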
{ "id": "2309.05971", "title": "Free boundary regularity for tumor growth with nutrients and diffusion", "authors": "Carson Collins, Matt Jacobs, Inwon Kim", "categories": "math.AP", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | The quadratic penalty alternating minimization (AM) method is widely used for solving the convex $\ell_1$ total variation (TV) image deblurring problem. However, quadratic penalty AM for solving the nonconvex nonsmooth $\ell_p$, $0 < p < 1$ TV image deblurring problems is less studied. In this paper, we propose two algorithms, namely proximal iterative re-weighted $\ell_1$ AM (PIRL1-AM) and its accelerated version, accelerated proximal iterative re-weighted $\ell_1$ AM (APIRL1-AM), for solving the nonconvex nonsmooth $\ell_p$ TV image deblurring problem. The proposed algorithms are derived from the proximal iterative re-weighted $\ell_1$ (IRL1) algorithm and the proximal gradient algorithm. Numerical results show that PIRL1-AM is effective in retaining sharp edges in image deblurring while APIRL1-AM can further provide convergence speed-up in terms of the number of algorithm iterations and computational time. author: - - - - - bibliography: - ICIP2023.bib title: Accelerated Proximal Iterative re-Weighted $\ell_1$ Alternating Minimization for Image Deblurring --- *Index terms*: total variation, convex optimization, deblurring, nonconvex optimization, alternating minimization # Introduction In this paper, we are interested in the image deblurring problem obtained from the following image degradation model $$\label{eq:DegMod} \mathbf{f} = \mathbf{Ku} + \mathbf{n},$$ where $\mathbf{f}\in \mathbb{R}^n$ is the observed noisy and blurred image, $\mathbf{K} \in \mathbb{R}^{n \times n}$ is a blur kernel, $\mathbf{u} \in \mathbb{R}^n$ is the uncorrupted image to be estimated, and $\mathbf{n} \in \mathbb{R}^n$ is additive Gaussian noise. 
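For concreteness, a minimal sketch of generating the observation $\mathbf{f} = \mathbf{Ku} + \mathbf{n}$, assuming the blur acts as a periodic (circular) convolution so that $\mathbf{K}$ is applied via the 2-D FFT (the paper's $\mathbf{K}$ is a general blur matrix; the function and variable names here are our own):

```python
import numpy as np

def degrade(u, kernel, noise_std, rng):
    """Apply the degradation model f = K u + n: circular convolution of the
    image u with the blur kernel, plus i.i.d. Gaussian noise (a sketch)."""
    K_hat = np.fft.fft2(kernel, s=u.shape)          # transfer function of the blur
    blurred = np.real(np.fft.ifft2(K_hat * np.fft.fft2(u)))
    return blurred + noise_std * rng.standard_normal(u.shape)
```

With a delta kernel and zero noise the model returns the image unchanged, which is a convenient sanity check.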
One way to solve ([\[eq:DegMod\]](#eq:DegMod){reference-type="ref" reference="eq:DegMod"}) for the image $\mathbf{u}$ is by minimizing the nonconvex nonsmooth composite optimization problem $$\label{eq:compMin} \underset{\mathbf{u} \in \mathbb{R}^n}{\text{min}}\,\frac{1}{2} \| \mathbf{Ku} - \mathbf{f} \|_2^2 + \mu \| \nabla \mathbf{u} \|_p^p,$$ where $0 < p < 1$ and $\nabla \in \mathbb{R}^{n \times n}$ is the discrete difference operator [@lu2016implementation]. If $p = 1$, problem ([\[eq:compMin\]](#eq:compMin){reference-type="ref" reference="eq:compMin"}) reduces to the convex $\ell_1$-norm total variation (TV) image restoration [@rudin1992nonlinear]. Operator splitting methods such as the alternating direction method of multipliers (ADMM) and the alternating minimization (AM) [@wang2008new; @boyd2011distributed] for solving the convex $\ell_1$-norm TV problem produce a sublinear convergence rate of $\mathcal{O}\left( \frac{1}{k} \right)$, which is quite slow in practice [@he20121; @beck2015convergence]. This has motivated researchers to accelerate these methods to improve the convergence rate to $\mathcal{O}\left( \frac{1}{k^2} \right)$. However, the majority of the work in this direction is focused on convex optimization. This paper focuses on the model ([\[eq:compMin\]](#eq:compMin){reference-type="ref" reference="eq:compMin"}) when $0 < p < 1$, i.e., the $\ell_p$ quasi-norm, and hence the nonconvex nonsmooth $\ell_p$-norm TV. Our motivation mainly stems from the advantage of nonconvex nonsmooth penalties in restoring even sharper image quality compared to the $\ell_1$-norm TV [@zhang2020tv]. A further motivation is to apply the quadratic penalty AM, an operator splitting method, to the nonconvex nonsmooth image deblurring problem ([\[eq:compMin\]](#eq:compMin){reference-type="ref" reference="eq:compMin"}). 
To this end, we propose a proximal iterative re-weighted $\ell_1$ alternating minimization (AM) algorithm along with its accelerated version. The proposed algorithm combines the ideas of proximal operators and the iterative re-weighted $\ell_1$ (IRL1) method [@candes2008enhancing] with the alternating minimization method [@wang2008new]. To accelerate the convergence of the proposed method, Nesterov acceleration is also used [@nesterov1983method; @beck2009fast]. # Related Works The alternating minimization (AM) method for solving the $\ell_1$ TV image deblurring problem was initially proposed in [@wang2008new]. An accelerated version employing acceleration techniques from [@beck2009fast] was further investigated in [@xie2018new]. Specifically, it was shown that the $\ell_1$-norm sub-minimization problem of the AM can be seen as a proximal gradient step and is hence amenable to acceleration via the proximal gradient method [@beck2009fast]. However, the accelerated AM assumes the minimization problem to be convex. The interesting link between AM and the accelerated proximal gradient method shown in [@xie2018new] suggests that AM may have links with nonconvex proximal gradient type methods, for example the re-weighted $\ell_1$ method originally proposed in [@candes2008enhancing] for nonconvex $\ell_p$ sparse recovery problems, along with its proximal version [@lu2014proximal; @sun2017global]. The proximal re-weighted $\ell_1$ algorithm with acceleration is relatively new; it has been studied in [@wang2022; @yu2019iteratively; @phan2021accelerated] and has shown promising results in minimizing problems of the form ([\[eq:compMin\]](#eq:compMin){reference-type="ref" reference="eq:compMin"}). However, their applications have been mainly restricted to sparse and low-rank matrix recovery problems. Furthermore, their connections with operator splitting methods such as AM for image deblurring have, to our knowledge, not been explored. 
# Problem Formulation and Algorithm In this paper, we focus on image deblurring by minimizing the nonconvex nonsmooth optimization problem ([\[eq:compMin\]](#eq:compMin){reference-type="ref" reference="eq:compMin"}). However, we restrict our results and discussion to the value $p = 0.1$. By adding a quadratic penalty term to ([\[eq:compMin\]](#eq:compMin){reference-type="ref" reference="eq:compMin"}), we have $$\label{eq:prob2} \underset{\mathbf{u}, \mathbf{z}}{\text{min}}\, \frac{1}{2} \| \mathbf{Ku} - \mathbf{f} \|_2^2 + \mu \| \mathbf{z} \|^p_p + \frac{\beta}{2}\| \mathbf{z} - \nabla \mathbf{u} \|_2^2.$$ Indeed, problem ([\[eq:prob2\]](#eq:prob2){reference-type="ref" reference="eq:prob2"}) is equivalent to problem ([\[eq:compMin\]](#eq:compMin){reference-type="ref" reference="eq:compMin"}) with the constraint $\mathbf{z} = \nabla \mathbf{u}$ [@tan2014smoothing]. To minimize ([\[eq:prob2\]](#eq:prob2){reference-type="ref" reference="eq:prob2"}), we alternately fix $\mathbf{u}$ at its current value and minimize with respect to $\mathbf{z}$, and vice-versa: $$\label{eq:AMitr} \begin{cases} \mathbf{z}_{k+1}= \underset{\mathbf{z}}{\operatorname{arg\,min}} \, \frac{\beta}{2}\| \mathbf{z} - \nabla \mathbf{u}_k \|_2^2 + \mu \| \mathbf{z} \|^p_p, \\ \mathbf{u}_{k+1}= \underset{\mathbf{u}}{\operatorname{arg\,min}} \, \frac{1}{2} \| \mathbf{Ku} - \mathbf{f} \|_2^2 + \frac{\beta}{2}\| \mathbf{z}_{k+1} - \nabla \mathbf{u} \|_2^2. \\ \end{cases}$$ The quadratic penalty AM scheme ([\[eq:AMitr\]](#eq:AMitr){reference-type="ref" reference="eq:AMitr"}) is a classic method commonly used in the image and signal processing literature [@tan2014smoothing; @wang2008new]. Note that the minimization problem concerning $\mathbf{z}$ in ([\[eq:AMitr\]](#eq:AMitr){reference-type="ref" reference="eq:AMitr"}) is a nonconvex nonsmooth $\ell_p$ minimization problem and is hence amenable to the iterative re-weighted $\ell_1$ minimization algorithm [@candes2008enhancing]. 
If $p=1$ in ([\[eq:AMitr\]](#eq:AMitr){reference-type="ref" reference="eq:AMitr"}), this minimization problem is convex and solvable using soft thresholding [@wright2022optimization]. Consequently, it was shown to be equivalent to the proximal gradient update [@xie2018new] $$\begin{aligned} \mathbf{z}_{k+1} = \underset{\mathbf{z} \in \mathbb{R}^n}{\text{argmin}}\, f\left( \mathbf{y}_k\right) + &\langle \nabla f\left( \mathbf{y}_k \right),\, \mathbf{z} - \mathbf{y}_k \rangle \\ &+ \frac{L}{2} \| \mathbf{z} - \mathbf{y}_k \|_2^2 + \mu \| \mathbf{z} \|_1, \end{aligned} \label{eq:pgm}$$ where $\mathbf{y}_k = \nabla \mathbf{u}_k$, $L$ is the Lipschitz constant of the gradient of $f\left( \mathbf{z} \right)=\frac{\beta}{2}\|\mathbf{z} - \mathbf{y}_k \|_2^2$, and $\nabla f\left( \mathbf{z} \right) = \beta \left(\mathbf{z} - \nabla \mathbf{u}_k \right)$. Due to this equivalence, the $\mathbf{z}$ subproblem can be accelerated by the fast iterative shrinkage and thresholding algorithm (FISTA) [@beck2009fast]. ## Proximal Iterative re-Weighted $\ell_1$ Alternating Minimization With the previous foundations in place, we come back to the nonconvex nonsmooth $\ell_p$ minimization subproblem for $\mathbf{z}$ in ([\[eq:AMitr\]](#eq:AMitr){reference-type="ref" reference="eq:AMitr"}): $$\mathbf{z}_{k+1}= \underset{\mathbf{z}}{\operatorname{arg\,min}} \, \frac{1}{2}\| \mathbf{z} - \nabla \mathbf{u}_k\|_2^2 + \frac{\mu}{\beta} \| \mathbf{z} \|^p_p, \label{eq:subZa}$$ after a simple re-arrangement of $\beta$. The IRL1 method approximately solves ([\[eq:subZa\]](#eq:subZa){reference-type="ref" reference="eq:subZa"}) by [@candes2008enhancing; @wang2021relating] $$\mathbf{z}_{k+1} =\underset{\mathbf{z}\in \mathbb{R}^n}{\operatorname{arg\,min}} \, \frac{1}{2}\| \mathbf{z} - \nabla \mathbf{u}_k \|_2^2 +\frac{\mu}{\beta} \sum_i w^i | z^i |, \label{eq:sumIrl1}$$ where $w^i$ and $z^i$ are the weights and the entries of the vector $\mathbf{z}$, respectively, with $w^i=p\left( |z^i_k| + \epsilon \right)^{p-1}$. 
Note that in ([\[eq:sumIrl1\]](#eq:sumIrl1){reference-type="ref" reference="eq:sumIrl1"}) the nonconvex nonsmooth $\ell_p$ minimization problem is approximated by a convex weighted $\ell_1$-norm minimization; hence, IRL1 is a convex relaxation method for nonconvex optimization problems. By introducing a diagonal weight matrix $\mathbf{W}_k = \text{diag}\left( w^1_k, \cdots, w^n_k \right)$, the IRL1 problem ([\[eq:sumIrl1\]](#eq:sumIrl1){reference-type="ref" reference="eq:sumIrl1"}) can be written as $$\mathbf{z}_{k+1} =\underset{\mathbf{z}\in \mathbb{R}^n}{\operatorname{arg\,min}} \, \frac{1}{2}\| \mathbf{z} - \nabla \mathbf{u}_k \|_2^2 + \frac{\mu}{\beta}\| \mathbf{W}_k \mathbf{z} \|_1. \label{eq:matIrl1}$$ Problem ([\[eq:matIrl1\]](#eq:matIrl1){reference-type="ref" reference="eq:matIrl1"}) can be interpreted as solving a proximal linearization of the term $f\left( \mathbf{z} \right)=\frac{1}{2}\| \mathbf{z} - \bar{\mathbf{z}}_k \|_2^2$ at $\bar{\mathbf{z}}_k= \nabla \mathbf{u}_k$, i.e., $$\begin{aligned} \mathbf{z}_{k+1} = \underset{\mathbf{z} \in \mathbb{R}^n}{\operatorname{arg\,min}}\, f\left( \bar{\mathbf{z}}_k\right) + &\langle \nabla f\left( \bar{\mathbf{z}}_k \right),\, \mathbf{z} - \bar{\mathbf{z}}_k \rangle \\ &+ \frac{L}{2} \| \mathbf{z} - \bar{\mathbf{z}}_k \|_2^2 + \frac{\mu}{\beta} \| \mathbf{W}_k \mathbf{z} \|_1, \end{aligned} \label{eq:NvxPgm}$$ hence, ignoring constant terms, it can be written as [@beck2017first; @beck2009fast] $$\mathbf{z}_{k+1} =\underset{\mathbf{z}\in \mathbb{R}^n}{\operatorname{arg\,min}} \, \frac{1}{2}\| \mathbf{z} - \mathbf{v}_k \|_2^2 + \lambda \| \mathbf{W}_k \mathbf{z} \|_1,$$ with $\mathbf{v}_k = \bar{\mathbf{z}}_k - \frac{1}{L}\nabla f\left(\bar{\mathbf{z}}_k \right)$ and $\lambda = \frac{\mu}{L\beta}$. 
Furthermore, by combining the weight entries with $\lambda$ in the weight matrix $\mathbf{W}_k$ we finally arrive at $$\mathbf{z}_{k+1} =\underset{\mathbf{z}\in \mathbb{R}^n}{\operatorname{arg\,min}} \, \frac{1}{2}\| \mathbf{z} - \mathbf{v}_k \|_2^2 + \frac{p\lambda}{\left( |\bar{z}^i_k| + \epsilon \right)^{1-p}} \| \mathbf{z} \|_1, \label{eq:FinUpd}$$ where $\bar{z}^i_k$ is the $i^\text{th}$ entry of $\bar{\mathbf{z}}_k$ at the $k^\text{th}$ iteration and $\epsilon$ is a very small number to avoid division by zero. Equation ([\[eq:FinUpd\]](#eq:FinUpd){reference-type="ref" reference="eq:FinUpd"}) can be solved in closed form via the soft-thresholding operation as $$\mathbf{z}_{k+1} = \text{sgn}\left(\bar{z}_k^i \right) \text{max} \left( 0,\, |\bar{z}_k^i| - \frac{p\lambda}{\left( |\bar{z}^i_k| + \epsilon \right)^{1-p}} \right). \label{eq:lpShrink}$$ From ([\[eq:lpShrink\]](#eq:lpShrink){reference-type="ref" reference="eq:lpShrink"}), it can be seen that solving the original nonconvex subproblem ([\[eq:subZa\]](#eq:subZa){reference-type="ref" reference="eq:subZa"}) boils down to a series of proximal weighted $\ell_1$ minimization problems. The idea of proximal re-weighted $\ell_1$ minimization has been proposed in [@lu2014proximal] and its convergence behavior analyzed in [@sun2017global]. However, the applications are mainly confined to sparse signal recovery, and its use as a sub-minimization problem in the AM scheme ([\[eq:AMitr\]](#eq:AMitr){reference-type="ref" reference="eq:AMitr"}) for image deblurring has, to our knowledge, not been studied. 
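The shrinkage step above can be sketched in a few lines; this is a minimal illustration with our own variable names (`lam` stands for $\lambda = \frac{\mu}{L\beta}$):

```python
import numpy as np

def reweighted_shrink(z_bar, p, lam, eps=1e-8):
    """Closed-form reweighted l1 step: elementwise soft-thresholding of
    z_bar with the adaptive threshold p*lam / (|z_bar| + eps)^(1-p),
    as in the shrinkage formula above (a sketch)."""
    thresh = p * lam / (np.abs(z_bar) + eps) ** (1.0 - p)
    return np.sign(z_bar) * np.maximum(0.0, np.abs(z_bar) - thresh)
```

Note that for $p = 1$ the threshold reduces to the constant $\lambda$, recovering plain soft thresholding, while for $p < 1$ large entries are penalized less than small ones, which is what preserves sharp edges.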
Next, the sub-minimization problem for $\mathbf{u}$ $$\mathbf{u}_{k+1}= \underset{\mathbf{u}}{\operatorname{arg\,min}} \, \frac{1}{2} \| \mathbf{Ku} - \mathbf{f} \|_2^2 + \frac{\beta}{2}\| \mathbf{z}_{k+1} - \nabla \mathbf{u} \|_2^2,$$ is a convex quadratic problem whose solution is given by the linear system $$\mathbf{u}_{k+1} = \left( \mathbf{K}^\top \mathbf{K} + \beta \nabla^\top \nabla \right)^{-1} \left( \mathbf{K}^\top \mathbf{f} + \beta \nabla^\top \mathbf{z}_{k+1} \right). \label{eq:linSyst}$$ Taking into account equations ([\[eq:lpShrink\]](#eq:lpShrink){reference-type="ref" reference="eq:lpShrink"}) and ([\[eq:linSyst\]](#eq:linSyst){reference-type="ref" reference="eq:linSyst"}), the complete procedure for the proximal iterative re-weighted $\ell_1$ alternating minimization (PIRL1-AM) is listed as Algorithm [\[al:am1\]](#al:am1){reference-type="ref" reference="al:am1"}. $\mathbf{z}_{k+1} = \text{sgn}\left(\bar{z}_k^i \right) \text{max} \left( 0,\, |\bar{z}_k^i| - \frac{p\lambda}{\left( |z^i_k| + \epsilon \right)^{1-p}} \right)$,\ $\mathbf{u}_{k+1} = \left( \mathbf{K}^\top \mathbf{K} + \beta \nabla^\top \nabla \right)^{-1} \left( \mathbf{K}^\top \mathbf{f} + \beta \nabla^\top \mathbf{z}_{k+1} \right)$,\ $k= k+1$ ## Accelerated Proximal Iterative re-Weighted $\ell_1$ Alternating Minimization To accelerate the proximal iterative re-weighted $\ell_1$ AM for the nonconvex nonsmooth $\ell_p$ TV image deblurring problem, acceleration techniques from [@nesterov1983method; @beck2009fast] can be used. 
Consider the sub-minimization problem ([\[eq:FinUpd\]](#eq:FinUpd){reference-type="ref" reference="eq:FinUpd"}), due to the convexity of ([\[eq:FinUpd\]](#eq:FinUpd){reference-type="ref" reference="eq:FinUpd"}) and its equivalence to minimizing a proximal linearization of $\frac{1}{2}\| \mathbf{z} - \mathbf{v}_k \|_2^2$ as discussed earlier, we have $$\begin{aligned} \label{eq:ProxACCIR} \mathbf{z}_{k+1} = \underset{\mathbf{z} \in \mathbb{R}^n}{\text{argmin}}\, f\left( \mathbf{v}_k\right) + &\langle \nabla f\left( \mathbf{v}_k \right),\, \mathbf{z} - \mathbf{v}_k \rangle \\ &+\frac{L}{2} \| \mathbf{z} - \mathbf{v}_k \|_2^2 + \tau \| \mathbf{z} \|_1, \end{aligned}$$ where $\tau = \frac{p\lambda}{\left( |\bar{z}^i_k| + \epsilon \right)^{1-p}}$. Then, the acceleration strategies along the lines of [@beck2009fast; @nesterov1983method] can be employed, which involves the following iterative scheme $$\begin{aligned} \begin{cases} \mathbf{z}_{k+1} = \underset{\mathbf{z} \in \mathbb{R}^n}{\text{argmin}}\, f\left( \mathbf{y}_k\right) + \langle \nabla f\left( \mathbf{y}_k \right),\, \mathbf{z} - \mathbf{y}_k \rangle \\ \qquad \qquad \qquad \qquad \qquad +\frac{L}{2} \| \mathbf{z} - \mathbf{y}_k \|_2^2 + \tau \| \mathbf{z} \|_1,\\ t_k = \frac{k-1}{k + 2}, \\ \mathbf{y}_{k+1} = \mathbf{z}_{k+1} + t_k \left( \mathbf{z}_{k+1} - \mathbf{z}_k \right).\\ \end{cases} \end{aligned} \label{eq:itrScmACCIRL1}$$ The $\mathbf{z}$ minimization in ([\[eq:itrScmACCIRL1\]](#eq:itrScmACCIRL1){reference-type="ref" reference="eq:itrScmACCIRL1"}), as discussed previously, has a closed-form solution of $$\mathbf{z}_{k+1} = \text{sgn}\left(\bar{z}_k^i \right) \text{max} \left( 0,\, |\bar{z}_k^i| - \tau \right).$$ In the iterative scheme ([\[eq:itrScmACCIRL1\]](#eq:itrScmACCIRL1){reference-type="ref" reference="eq:itrScmACCIRL1"}), the scalar $t_k$ is known as the Nesterov momentum coefficient and changes in each iteration $k$. The step $\mathbf{y}_k$ is called the extrapolation step. 
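A minimal sketch of one $\mathbf{z}$-update of this accelerated scheme, assuming for simplicity that the threshold $\tau$ is held fixed within the step (variable names are our own):

```python
import numpy as np

def accelerated_z_step(y, z_old, tau, k):
    """One iteration of the scheme above: soft-threshold the extrapolated
    point y_k, then form the next extrapolation y_{k+1} with the Nesterov
    momentum coefficient t_k = (k-1)/(k+2)."""
    z_new = np.sign(y) * np.maximum(0.0, np.abs(y) - tau)   # shrinkage step
    t = (k - 1.0) / (k + 2.0)                               # momentum coefficient
    y_new = z_new + t * (z_new - z_old)                     # extrapolation step
    return z_new, y_new
```

In a full solver this step would be interleaved with the $\mathbf{u}$-update, with $\tau$ recomputed from the current iterate as in the reweighting formula.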
Also, recall that $\bar{\mathbf{z}}_k = \nabla\mathbf{u}_k$. Acceleration techniques of this form are shown to match the theoretical lower bound of $\mathcal{O}\left( \frac{1}{k^2} \right)$ for smooth first-order convex optimization. Having shown the equivalence between ([\[eq:FinUpd\]](#eq:FinUpd){reference-type="ref" reference="eq:FinUpd"}) and the proximal linearization step ([\[eq:ProxACCIR\]](#eq:ProxACCIR){reference-type="ref" reference="eq:ProxACCIR"}) along with its acceleration, the iterative scheme of the accelerated proximal iterative re-weighted $\ell_1$ AM (APIRL1-AM), taking into account the sub-minimization problem for $\mathbf{u}$, is as follows $$\begin{cases} \mathbf{z}_{k+1} = \text{sgn}\left(\bar{z}_k^i \right) \text{max} \left( 0,\, |\bar{z}_k^i| - \tau \right),\\ t_k = \frac{k-1}{k + 2}, \\ \mathbf{y}_{k+1} = \mathbf{z}_{k+1} + t_k \left( \mathbf{z}_{k+1} - \mathbf{z}_k \right),\\ \mathbf{u}_{k+1} = \left( \mathbf{K}^\top \mathbf{K} + \beta \nabla^\top \nabla \right)^{-1} \left( \mathbf{K}^\top \mathbf{f} + \beta \nabla^\top \mathbf{z}_{k+1} \right).\\ \end{cases} \label{eq:PropAcc}$$ The complete algorithm is listed as Algorithm [\[al:amProp\]](#al:amProp){reference-type="ref" reference="al:amProp"}. $\mathbf{z}_{k+1} = \text{sgn}\left(\bar{z}_k^i \right) \text{max} \left( 0,\, |\bar{z}_k^i| - \tau \right)$\ $t_k = \frac{k-1}{k + 2}$\ $\mathbf{y}_{k+1} = \mathbf{z}_{k+1} + t_k \left( \mathbf{z}_{k+1} - \mathbf{z}_k \right)$\ $\mathbf{u}_{k+1} = \left( \mathbf{K}^\top \mathbf{K} + \beta \nabla^\top \nabla \right)^{-1} \left( \mathbf{K}^\top \mathbf{f} + \beta \nabla^\top \mathbf{z}_{k+1} \right)$.\ $k= k+1$ In Algorithms [\[al:am1\]](#al:am1){reference-type="ref" reference="al:am1"} and [\[al:amProp\]](#al:amProp){reference-type="ref" reference="al:amProp"}, the Lipschitz constant $L$ is fixed to 1. In real applications, the value of $L$ is usually unknown. The values of $\mu$ and $\beta$ are used for computing $\lambda = \frac{\mu}{L\beta}$. 
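The $\mathbf{u}$-update in both algorithms requires inverting $\mathbf{K}^\top\mathbf{K} + \beta\nabla^\top\nabla$. A minimal sketch of the standard Fourier-domain solve, assuming periodic boundary conditions so that the blur and difference operators are diagonalized by the 2-D FFT (the transfer-function arguments `K_hat`, `Dx_hat`, `Dy_hat` and the function name are our own):

```python
import numpy as np

def u_update_fft(K_hat, Dx_hat, Dy_hat, f, zx, zy, beta):
    """Solve (K^T K + beta D^T D) u = K^T f + beta D^T z in the Fourier
    domain: under periodic boundaries, matrix-vector products become
    pointwise multiplications by transfer functions (a sketch)."""
    rhs = np.conj(K_hat) * np.fft.fft2(f) \
        + beta * (np.conj(Dx_hat) * np.fft.fft2(zx)
                  + np.conj(Dy_hat) * np.fft.fft2(zy))
    denom = np.abs(K_hat) ** 2 + beta * (np.abs(Dx_hat) ** 2 + np.abs(Dy_hat) ** 2)
    return np.real(np.fft.ifft2(rhs / denom))
```

Each update then costs two FFT passes instead of a dense linear solve, which is what makes the per-iteration cost of both algorithms low.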
Additionally, the linear system for solving $\mathbf{u}$ can be solved very fast using the two-dimensional fast Fourier transform (FFT) with complexity $\mathcal{O}\left(n \log n \right)$ [@wang2008new], while the $\mathbf{z}$ problem is only of linear complexity $\mathcal{O}\left( n \right)$. # Results In this section, we apply the two proposed algorithms on the nonconvex nonsmooth image deblurring problem ([\[eq:compMin\]](#eq:compMin){reference-type="ref" reference="eq:compMin"}) and discuss the results of the proposed algorithms without and with acceleration. ## Experimental Setup For the experiments, the blurred and noisy images are obtained by the model ([\[eq:DegMod\]](#eq:DegMod){reference-type="ref" reference="eq:DegMod"}). The blur kernel used is a Gaussian blur of size $17\times 17$ pixels with $\sigma = 7$. Two noise levels were used, namely blur signal-to-noise ratios (BSNR) of 30 and 20 [@banham1997digital]. The values for $\beta$ are $\beta = 0.01$ and $\beta = 0.009$ for BSNR 20 and 30, respectively. The deblurring on each image was done 10 times and the average was taken. All images are of size $512\times 512$ pixels. Experiments were conducted using MATLAB[^1] on an Intel Core i3-10105 CPU operating at 3.70 GHz with 4 GB of memory. 
The results are summarized in the following table (for each algorithm: PSNR, SSIM, number of iterations (Itr), and CPU time T in seconds).

| BSNR lvl | Image       | APIRL1-AM PSNR | SSIM  | Itr | T     | PIRL1-AM PSNR | SSIM  | Itr | T     |
|----------|-------------|----------------|-------|-----|-------|---------------|-------|-----|-------|
| 30       | *Peppers*   | 27.39          | 0.759 | 269 | 11.20 | 27.39         | 0.759 | 364 | 14.71 |
| 30       | *Cameraman* | 26.03          | 0.763 | 228 | 9.50  | 26.04         | 0.762 | 324 | 13.12 |
| 20       | *Peppers*   | 25.52          | 0.625 | 141 | 6.03  | 25.52         | 0.625 | 186 | 7.63  |
| 20       | *Cameraman* | 24.49          | 0.570 | 225 | 9.64  | 24.49         | 0.570 | 244 | 10.02 |

[\[tab: gausRes\]]{#tab: gausRes label="tab: gausRes"} ![](Pep_BSNR_30.eps){height="5.1cm" width="8.3cm"} ![](Cam_BSNR_20.eps){height="5.1cm" width="8.3cm"} ![](Box_pep_BSNR30_APIRL.png){height="2.3cm" width="2.5cm"} ![](Crop_pepBSNR30.png){height="2.3cm" width="2.5cm"} \ ![](Crop_pepper512.png){height="2.3cm" width="2.5cm"} ![](Crop_pep_BSNR30_APIRL.png){height="2.3cm" width="2.5cm"} ![](Box_camera512.png){height="2.3cm" width="2.5cm"} ![](Crop_cam_BSNR20.png){height="2.3cm" width="2.5cm"} \ ![](Crop_camera512.png){height="2.3cm" width="2.5cm"} ![](Crop_cam_BSNR20_APRIL1.png){height="2.3cm" width="2.5cm"} ## Results and discussion Table [\[tab: gausRes\]](#tab: gausRes){reference-type="ref" reference="tab: gausRes"} shows the results of the two proposed algorithms. In terms of PSNR and SSIM image quality [@wang2004image], the two algorithms gave nearly identical results. Some[^2] of the deblurred images are shown in Figures [\[fig:test2\]](#fig:test2){reference-type="ref" reference="fig:test2"} and [\[fig:test1\]](#fig:test1){reference-type="ref" reference="fig:test1"}. For BSNR 30 (less noise corruption), Figure [\[fig:test2\]](#fig:test2){reference-type="ref" reference="fig:test2"} shows the ability of the methods to preserve sharp edges. However, there is a difference in terms of the number of iterations to converge to the required relative error value. For both BSNR levels tested, the acceleration technique used in APIRL1-AM manages to decrease the number of iterations to converge. Additionally, the time to converge also improves for APIRL1-AM compared to PIRL1-AM. 
Figure [\[fig:convergencePep30\]](#fig:convergencePep30){reference-type="ref" reference="fig:convergencePep30"} compares the convergence between APIRL1-AM and PIRL1-AM. It can be seen that APIRL1-AM converges faster than PIRL1-AM and exhibits acceleration ripples akin to first-order accelerated techniques [@o2015adaptive]. Since the two algorithms produce similar PSNR and SSIM results, acceleration offers the additional advantage of reaching the same deblurring quality with fewer iterations and less CPU time. # Conclusion In this paper, two algorithms, PIRL1-AM and APIRL1-AM, were proposed for nonconvex nonsmooth $\ell_p$ TV image deblurring. The algorithms were derived by showing the links between the proximal gradient method and the proximal iterative re-weighted $\ell_1$ method. Both algorithms were able to retain sharp edges in image deblurring. In terms of algorithm convergence, APIRL1-AM empirically exhibits behavior consistent with the optimal $\mathcal{O}\left( \frac{1}{k^2}\right)$ rate and improves the CPU time and the number of iterations needed to converge. Future work includes establishing the convergence rate of both algorithms and applying them to different problems. [^1]: Codes at https://github.com/tarmiziAdam2005/PIRL1-AM [^2]: Only deblurred images of APIRL1-AM are shown due to both algorithms giving very similar PSNR and SSIM values.
{ "id": "2309.05204", "title": "Accelerated Proximal Iterative re-Weighted $\\ell_1$ Alternating\n Minimization for Image Deblurring", "authors": "Tarmizi Adam, Alexander Malyshev, Mohd Fikree Hassan, Nur Syarafina\n Mohamed, Md Sah Hj Salam", "categories": "math.OC", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | In this paper, we study the optimal dividend problem under the continuous time diffusion model with the dividend rate being restricted in a given interval $[0,a]$. Unlike the standard literature, we shall particularly be interested in the case when the parameters (e.g. drift and diffusion coefficients) of the model are not specified so that the optimal control cannot be explicitly determined. We therefore follow the recently developed *Reinforcement Learning (RL)* method to find the optimal strategy. Specifically, we shall design a corresponding RL-type entropy-regularized exploratory control problem, which randomizes the control actions and balances exploitation and exploration. We shall first carry out a theoretical analysis of the new relaxed control problem and prove that the value function is the unique bounded classical solution to the corresponding HJB equation. We will then use a *policy improvement* argument, along with *policy evaluation* devices (e.g., Temporal Difference (TD)-based and Martingale Loss (ML)-based algorithms) to construct approximating sequences of the optimal strategy. We present some numerical results using different parametrization families for the cost functional to illustrate the effectiveness of the approximation schemes. author: - "Lihua Bai[^1],  Thejani Gamage[^2],  Jin Ma[^3],  Pengxu Xie[^4]" bibliography: - references.bib title: "**Reinforcement Learning for optimal dividend problem under diffusion model** " --- **Keywords.** Optimal dividend problem, entropy-regularized exploratory control problem, policy improvement, policy evaluation, temporal difference (TD) algorithm, martingale loss (ML). 93E20,35; 91G50; 93B47. # Introduction The problem of maximizing the cumulative discounted dividend payment can be traced back to the work of de Finetti [@de]. 
Since then, the problem has been widely studied in the literature under different models, and in many cases the problem can be explicitly solved when the model parameters are known. In particular, for the optimal dividend problem and its many variations in continuous time under diffusion models, we refer to the works of, among others, [@asmussen; @BP; @CGY; @fh2; @DTE; @TA; @fh4] and the references cited therein. The main motivation of this paper is to study the optimal dividend problems in which the model parameters are not specified, so the optimal control cannot be explicitly determined. Following the recently developed method using *Reinforcement Learning* (RL), we shall try to find the optimal strategy for a corresponding entropy-regularized control problem and solve it along the lines of both policy improvement and evaluation schemes. The method of using reinforcement learning to solve discrete Markov decision problems has been well studied, but the extension of these concepts to the continuous time and space setting is still fairly new. Roughly speaking, in RL the learning agent uses a process of *trial and error* to simultaneously identify the environment or the model parameters, and to determine the optimal control and optimal value function. Such a learning process is characterized by a mixture of *exploration* and *exploitation*, repeatedly trying new actions to improve the collective outcomes. A critical point in this process is to balance the exploration and exploitation levels, since the former is usually computationally expensive and time consuming, while the latter may lead to sub-optimal outcomes. In RL theory a typical idea to balance the exploration and exploitation in an optimal control problem is to "randomize\" the control action and add a (Shannon's) entropy term in the cost function, weighted by a temperature parameter. 
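Schematically, writing $\boldsymbol{\pi}_t(\cdot)$ for the density of the randomized dividend rate over the action interval $[0,a]$, $\lambda>0$ for the temperature parameter, $c$ for the discount rate, and $\tau$ for the ruin time, the entropy-regularized objective takes the generic form used in this line of work (the precise functional for our model is set up later in the paper):

```latex
J(x;\boldsymbol{\pi}) \;=\; \mathbb{E}\left[\int_0^{\tau} e^{-ct}
  \left(\int_0^a u\,\boldsymbol{\pi}_t(u)\,du
  \;-\; \lambda \int_0^a \boldsymbol{\pi}_t(u)\ln \boldsymbol{\pi}_t(u)\,du \right) dt \right],
```

where the second inner integral is minus Shannon's differential entropy of $\boldsymbol{\pi}_t$, so a larger $\lambda$ rewards more spread-out (exploratory) action densities.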
By maximizing the entropy one encourages exploration, and by decreasing the temperature parameter one gives more weight to exploitation. The resulting optimal control problem is often referred to as the *entropy-regularized exploratory optimal control problem*, which will be the starting point of our investigation. As in any reinforcement learning scheme, we shall solve the entropy-regularized exploratory optimal dividend problem via a sequence of *Policy Evaluation* (PE) and *Policy Improvement* (PI) procedures. The former evaluates the cost functional for a given policy, and the latter produces a new policy that is "better\" than the current one. We note that the idea of Policy Improvement Algorithms (PIA), as well as their convergence analysis, is not new in the numerical optimal control literature (see, e.g., [@PIA1; @PIA2; @PIA3; @PIA4]). The main difference in the current RL setting is the involvement of the entropy regularization, which causes some technical complications in the convergence analysis. In the continuous time entropy-regularized exploratory control problem with diffusion models, a successful convergence analysis of the PIA was first established for a particular Linear-Quadratic (LQ) case in [@zhou2], in which the exploratory HJB equation (i.e., the HJB equation corresponding to the entropy-regularized problem) can be directly solved, and the Gaussian nature of the optimal exploratory control is known. A more general case was recently investigated in [@HWZcvg], in which the convergence of the PIA is proved in a general infinite horizon setting, without requiring the knowledge of the explicit form of the optimal control. The problem studied in this paper is very close to the one in [@HWZcvg], but not identical.
While some of the analysis in this paper benefits from the fact that the spatial variable is one-dimensional, there are particular technical subtleties because of the presence of the ruin time, although the problem is essentially an infinite horizon one, like the one studied in [@HWZcvg]. There are two main issues that this paper will focus on. The first is to design PE and PI algorithms that are suitable for continuous time optimal dividend problems. We shall follow some of the "popular\" schemes in RL, such as the well-understood *Temporal Difference* (TD) methods, combined with the so-called *martingale approach*, to design the PE learning procedure. Two technical points are worth noting: 1) since the cost functional involves the ruin time, and the observation of the ruin time of the state process is sometimes practically impossible (especially in the cases where the ruin time actually occurs beyond the time horizon we can practically observe), we shall propose algorithms that are insensitive to the ruin time; 2) although the infinite horizon nature of the problem somewhat prevents the use of the so-called "batch\" learning method, we shall nevertheless study the temporally "truncated\" problem so that the batch learning method can be applied. It should also be noted that one of the main difficulties in PE methods is to find an effective parametrization family of functions from which the best approximation for the cost functional is chosen, and the choice of the parametrization family directly affects the accuracy of the approximation. Since there are no proven standard methods of finding a suitable parametrization family, except for the LQ (Gaussian) case when the optimal value function is explicitly known, we shall use the classical "barrier\"-type (restricted) optimal dividend strategy in [@asmussen] to propose the parametrization family, and carry out numerical experiments using the corresponding families.
The second main issue is the convergence analysis of the PIA. Similar to [@HWZcvg], in this paper we focus on the regularity analysis of the solution to the exploratory HJB equation and some related PDEs. Compared to the heavy PDE arguments in [@HWZcvg], we shall take advantage of the fact that in this paper the state process is one-dimensional and takes nonnegative values, so that some stability arguments for two-dimensional first-order nonlinear systems can be applied to conclude that the exploratory HJB equation has a concave, bounded classical solution, which coincides with the viscosity solution (of class (L)) of the HJB equation and the value function of the optimal dividend problem. With the help of these regularity results, we prove the convergence of the PIA to the value function along the lines of [@HWZcvg], but with a much simpler argument. The rest of the paper is organized as follows. In §2 we give a preliminary description of the problem and all the necessary notations, definitions, and assumptions. In §3 we study the value function and its regularity, and prove that it is a concave, bounded classical solution to the exploratory HJB equation. In §4 we study the issue of policy update. We shall introduce our PIA and prove its convergence. In §5 and §6 we discuss the methods of Policy Evaluation, that is, the methods for approximating the cost functional for a given policy, using a martingale loss function based approach and (online) CTD($\gamma$) methods, respectively. In §7 we propose parametrization families for PE and present numerical experiments using the proposed PI and PE methods. # Preliminaries and Problem Formulation Throughout this paper we consider a filtered probability space $(\Omega, {\cal F}, \{{\cal F}_t\}_{t\geq0}, \mathbb{P})$ on which is defined a standard Brownian motion $\{W_t,t\geq 0\}$. We assume that the filtration $\mathbb{F}:=\{{\cal F}_t\}=\{{\cal F}^W_t\}$, with the usual augmentation so that it satisfies the usual conditions.
For any metric space $\mathbb{X}$ with topological Borel sets $\mathscr{B}(\mathbb{X})$, we denote $\mathbb{L}^0(\mathbb{X})$ to be all $\mathscr{B}(\mathbb{X})$-measurable functions, and $\mathbb{L}^p(\mathbb{X})$, $p\ge 1$, to be the space of $p$-integrable functions. The spaces $\mathbb{L}^0_\mathbb{F}([0,T];\mathbb{R})$ and $\mathbb{L}^p_\mathbb{F}([0,T];\mathbb{R})$, $p\ge1$, etc., are defined in the usual ways. Furthermore, for a given domain $\mathbb{D}\subset \mathbb{R}$, we denote $\mathbb{C}^k(\mathbb{D})$ to be the space of all $k$-th order continuously differentiable functions on $\mathbb{D}$, and $\mathbb{C}(\mathbb{D})=\mathbb{C}^0(\mathbb{D})$. In particular, for $\mathbb{R}_+:=[0,\infty)$, we denote $\mathbb{C}^k_b(\mathbb{R}_+)$ to be the space of all bounded, $k$-times continuously differentiable functions on $\mathbb{R}_+$ with all derivatives being bounded. Consider the simplest diffusion approximation of a Cramér-Lundberg model with dividend: $$\begin{aligned} \label{X0} dX_t=(\mu-\alpha_t)dt+\sigma dW_t,\quad t>0,\quad X_0=x\in\mathbb{R},\end{aligned}$$ where $x$ is the initial state, $\mu$ and $\sigma$ are constants determined by the premium rate and the claim frequency and size (cf., e.g., [@asmussen]), and $\alpha_t$ is the dividend rate at time $t\geq0$. We denote $X=X^\alpha$ if necessary, and say that $\alpha=\{\alpha_t,t\geq0\}$ is *admissible* if it is $\mathbb{F}$-adapted and takes values in a given "action space\" $[0,a]$. Furthermore, let us define the *ruin time* to be $\tau^\alpha_x:=\inf\{t>0:X^\alpha_t<0\}$. Clearly, $X^\alpha_{\tau^\alpha_x}=0$, and the problem is considered "ruined\" as no dividend will be paid after $\tau^\alpha_x$.
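For intuition, the dynamics ([\[X0\]](#X0){reference-type="ref" reference="X0"}) and the ruin time are easy to simulate. The following sketch (not part of the paper; the function name and parameter values are illustrative) uses a plain Euler-Maruyama discretization with a constant dividend rate and stops at the first time the surplus becomes negative:

```python
import numpy as np

def simulate_surplus(x0, mu, sigma, alpha, T=50.0, dt=1e-3, rng=None):
    """Euler-Maruyama discretization of dX_t = (mu - alpha) dt + sigma dW_t
    with a constant dividend rate alpha, stopped at the ruin time
    tau = inf{t > 0 : X_t < 0}.  Returns (path, ruin_time)."""
    rng = np.random.default_rng(rng)
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] + (mu - alpha) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if x[k + 1] < 0:
            return x[: k + 2], (k + 1) * dt   # ruined before the horizon T
    return x, np.inf                          # no ruin observed on [0, T]
```

With a positive net drift $\mu-\alpha$ the simulated ruin time typically falls beyond any finite horizon, while a negative net drift leads to ruin almost surely, which is the role $\tau^\alpha_x$ plays in the cost functional below.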
Our aim is to maximize the expected total discounted dividend given the initial condition $X^\alpha_0=x\in\mathbb{R}$: $$\begin{aligned} \label{value} \qquad V(x):=\sup_{\alpha\in \mathscr{U}[0,a]}\mathbb{E}_x\Big[\int_{0}^{\tau_x^\alpha} e^{-ct} \alpha_tdt\Big]:=\sup_{\alpha\in \mathscr{U}[0,a]}\mathbb{E}\Big[\int_{0}^{\tau^\alpha_x} e^{-ct} \alpha_tdt\Big|X_0=x\Big],\end{aligned}$$ where $c>0$ is the discount rate, and $\mathscr{U}[0,a]$ is the set of *admissible* dividend rates taking values in $[0,a]$. The problem ([\[X0\]](#X0){reference-type="ref" reference="X0"})-([\[value\]](#value){reference-type="ref" reference="value"}) is often referred to as the *classical optimal restricted dividend* problem, meaning that the dividend rate is restricted in a given interval $[0,a]$. It is well-understood that when the parameters $\mu$ and $\sigma$ are known, the optimal control is of the "feedback\" form: $\alpha_t^*=\boldsymbol{a}^*(X_t^*)$, where $X_t^*$ is the corresponding state process and $\boldsymbol{a}^*(\cdot)$ is a deterministic function taking values in $[0,a]$, often in the form of a *threshold control* (see, e.g., [@asmussen]). However, in practice the exact form of $\boldsymbol{a}^*(\cdot)$ is not implementable since the model parameters are usually not known; thus the "parameter insensitive\" method through *Reinforcement Learning* (RL) becomes a much more desirable alternative, which we now elaborate. In the RL formulation, the agent follows a process of exploration and exploitation via a sequence of trial and error evaluations. A key element is to randomize the control action as a probability distribution over $[0,a]$, similar to the notion of *relaxed control* in control theory, and the classical control is considered as a special point-mass (or Dirac $\delta$-measure) case.
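To make the cost functional in ([\[value\]](#value){reference-type="ref" reference="value"}) concrete, the following sketch (illustrative, not from the paper) estimates $\mathbb{E}_x\big[\int_0^{\tau\wedge T}e^{-ct}\alpha_t dt\big]$ by Monte Carlo for the simple constant strategy $\alpha_t\equiv a$; it only illustrates the quantity being maximized, not the optimal threshold strategy of [@asmussen]:

```python
import numpy as np

def discounted_dividends(x0, mu, sigma, a, c, T=100.0, dt=1e-2, n_paths=2000, seed=0):
    """Monte Carlo estimate of E_x[ int_0^{tau ^ T} e^{-ct} a dt ] for the
    constant dividend rate alpha_t = a, where tau is the ruin time of
    dX_t = (mu - a) dt + sigma dW_t, X_0 = x0."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.full(n_paths, x0)
    alive = np.ones(n_paths, dtype=bool)      # paths not yet ruined
    total = np.zeros(n_paths)
    for k in range(n):
        total[alive] += np.exp(-c * k * dt) * a * dt   # dividends while alive
        x[alive] += (mu - a) * dt + sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
        alive &= x >= 0                                 # ruin freezes the path
    return total.mean()

est = discounted_dividends(x0=2.0, mu=0.6, sigma=0.5, a=1.3, c=0.1)
# Any admissible strategy pays at most a/c in discounted value.
assert 0.0 < est < 1.3 / 0.1
```

The upper bound $a/c$ used in the assertion is the value of paying the maximal rate forever without ruin, which foreshadows the bound in Proposition 3 below for the entropy-regularized problem.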
To make the idea mathematically precise, let us denote $\mathscr{B}([0,a])$ to be the Borel field on $[0,a]$, and $\mathscr{P}([0,a])$ to be the space of all probability measures on $([0,a], \mathscr{B}([0,a]))$, endowed with, say, the Wasserstein metric. A "relaxed control\" is a randomized policy defined as a measure-valued progressively measurable process $(t, \omega)\mapsto { \pi}(\cdot; t, \omega)\in \mathscr{P}([0,a])$. Assuming that ${ \pi}(\cdot;t, \omega)$ has a density, denoted by $\pi_t(\cdot, \omega)\in\mathbb{L}^1_+([0,a])\subset \mathbb{L}^1([0,a])$, $(t, \omega)\in[0,T]\times\Omega$, we can write $${ \pi}(A; t, \omega)=\int_A \pi_t(w, \omega)dw, \qquad A\in \mathscr{B}([0,a]), \quad(t, \omega)\in[0,T]\times \Omega.$$ In what follows we shall often identify a relaxed control with its density process $\pi=\{\pi_t,t\geq 0\}$. Now, for $t\in[0,T]$, we define a probability measure on $([0,a]\times\Omega, \mathscr{B}([0,a])\otimes{\cal F})$ as follows: for $A\in\mathscr{B}([0,a])$ and $B\in{\cal F}$, $$\begin{aligned} \label{Qt} \mathbb{Q}_t(A\times B):=\int_A\int_B\pi(dw; t, \omega)\mathbb{P}(d\omega)=\int_A\int_B\pi_t( w, \omega)dw\mathbb{P}(d\omega).\end{aligned}$$ We call a function $A^\pi: [0,T]\times[0,a]\times\Omega\mapsto [0,a]$ the "canonical representation\" of a relaxed control $\pi=\{\pi(\cdot, t, \cdot)\}_{t\ge0}$, if $A^\pi_t(w, \omega)=w$. Then, for $t\ge 0$ we have $$\begin{aligned} \label{EQtA} \mathbb{E}^{\mathbb{Q}_t}[A^\pi_t]=\int_\Omega\int_0^a A^\pi_t(w, \omega)\pi(dw;t, \omega)\mathbb{P}(d\omega)=\mathbb{E}^{\mathbb{P}}\Big[\int_0^a w\pi_t(w)dw\Big].\end{aligned}$$ We can now derive the exploratory dynamics of the state process $X$ along the lines of entropy-regularized relaxed stochastic control arguments (see, e.g., [@zhou1]).
Roughly speaking, consider the discrete version of the dynamics ([\[X0\]](#X0){reference-type="ref" reference="X0"}): for small $\Delta t>0$, $$\begin{aligned} \label{zengliang} \Delta x_t :=x_{t+\Delta t} -x_t \approx (\mu-a_t )\Delta t+\sigma (W_{t+\Delta t} -W_t ),\qquad t\geq0.\end{aligned}$$ Let $\{a_t^i\}_{i=1}^N$ and $\{(x^i_t,W_t^i)\}_{i=1}^N$ be $N$ independent samples of $(a_t)$ under the distribution $\pi_t$, and the corresponding samples of $(X_t^\pi,W_t)$, respectively. Then, the law of large numbers and [\[EQtA\]](#EQtA){reference-type="eqref" reference="EQtA"} imply that $$\begin{aligned} \label{yczl} \qquad\sum_{i=1}^{N}\frac{\Delta x_t^i}{N}\approx \sum_{i=1}^{N} (\mu-a^i_t)\frac{\Delta t}{N}\approx \mathbb{E}^{\mathbb{Q}_t}[\mu-A^\pi_t]\Delta t=\mathbb{E}^\mathbb{P}\Big[\mu\negthinspace-\negthinspace\int_{0}^{a}w\pi_t (w, \cdot)dw\Big]\Delta t, \end{aligned}$$ as $N\to\infty$. This, together with the fact that $\frac{1}{N}\sum_{i=1}^{N}(\Delta x_t^i)^2\approx\sigma^2\Delta t$, leads to the following form of the exploratory version of the state dynamics: $$\begin{aligned} \label{xfc} dX_t=\Big(\mu-\int_{0}^{a}w\pi_t(w, \cdot)dw\Big)dt+\sigma dW_t, \quad X_0=x,\end{aligned}$$ where $\{\pi_t(w, \cdot)\}$ is the (density of) relaxed control process, and we shall often denote $X=X^{\pi, 0,x}=X^{\pi, x}$ to specify its dependence on the control $\pi$ and the initial state $x$. To formulate the *entropy-regularized* optimal dividend problem, we first give a heuristic argument.
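The averaging step ([\[yczl\]](#yczl){reference-type="ref" reference="yczl"}) and the quadratic-variation matching behind ([\[xfc\]](#xfc){reference-type="ref" reference="xfc"}) can be sanity-checked by direct sampling. In the sketch below the relaxed control is taken, purely for illustration, to be the uniform density $\pi_t\equiv 1/a$ on $[0,a]$, so that $\int_0^a w\pi_t(w)dw=a/2$:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, a, dt, N = 0.5, 0.3, 1.0, 1e-3, 200_000

# N sampled actions a_t^i from the uniform relaxed control pi_t = 1/a on [0, a]
actions = rng.uniform(0.0, a, N)
# N sampled increments Delta x_t^i = (mu - a_t^i) dt + sigma (W_{t+dt} - W_t)
dx = (mu - actions) * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)

# Law of large numbers: (1/N) sum_i Delta x_t^i -> (mu - a/2) dt
assert abs(dx.mean() - (mu - a / 2) * dt) < 2e-4
# Quadratic variation: (1/N) sum_i (Delta x_t^i)^2 -> sigma^2 dt
assert abs((dx**2).mean() - sigma**2 * dt) < 1e-5
```

The two assertions mirror exactly the two matching conditions that lead to the exploratory dynamics: the drift sees only the mean of the relaxed control, while the diffusion coefficient is unchanged.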
Similar to ([\[yczl\]](#yczl){reference-type="ref" reference="yczl"}), for $N$ large and $\Delta t$ small we should have $$\begin{aligned} \frac{1}{N}\sum_{i=1}^{N}e^{-ct}a_t^i{\bf 1}_{[t\leq \tau^i]}\Delta t \approx\mathbb{E}^{\mathbb{Q}_t}\Big[e^{-ct}A_t^\pi {\bf 1}_{[t\leq \tau^\pi_x]}\Delta t\Big] = \mathbb{E}^\mathbb{P}\Big[{\bf 1}_{[t\leq \tau^\pi_x]}e^{-ct}\int_{0}^{a}w\pi_t(w)dw\Delta t\Big].\end{aligned}$$ Therefore, in light of [@zhou1] we shall define the entropy-regularized cost functional of the optimal expected dividend control problem under the relaxed control ${\pi}$ as $$\begin{aligned} \label{Jxpi} && J(x, {\pi})=\mathbb{E}_x\Big[\int_0^{\tau_x^{{\pi}}}e^{-ct} {\cal H}_\lambda^\pi(t) dt \Big],\end{aligned}$$ where ${\cal H}_\lambda^\pi(t):= \int_{0}^{a}(w-\lambda\ln\pi_t(w))\pi_t(w)dw$, ${\tau^{\pi}_x} = \inf \{t > 0: X_t^{\pi,x} < 0 \}$, and $\lambda>0$ is the so-called *temperature parameter* balancing the exploration and exploitation. We now define the set of *open-loop admissible controls* as follows. **Definition 1**. *A measurable (density) process ${ \pi}=\{\pi_t(\cdot,\cdot)\}_{t\ge0}\in \mathbb{L}^0([0, \infty)\times[0,a]\times\Omega)$ is called an open-loop admissible relaxed control if* *1. ${{\pi}_t}(\cdot;\omega)\in\mathbb{L}^1([0,a])$, for $dt\otimes d\mathbb{P}$-a.e. $(t, \omega)\in[0,\infty)\times \Omega$;* *2. for each $w\in [0,a]$, the process $(t, \omega)\mapsto {{\pi}_t}(w, \omega)$ is $\mathbb{F}$-progressively measurable;* *3. $\mathbb{E}_x\big[\int_{0}^{\tau^\pi_x}e^{-ct} |{\cal H}_\lambda^\pi (t) |dt\big]<+\infty$.* *We shall denote $\mathscr{A}(x)$ to be the set of open-loop admissible relaxed controls.
$\square$* Consequently, the value function ([\[value\]](#value){reference-type="ref" reference="value"}) now reads $$\begin{aligned} \label{v} && V(x)= \sup_{{\pi}\in\mathscr{A}(x)}\mathbb{E}_x\Big\{\int_{0}^{\tau_x^\pi}e^{-ct}{\cal H}_\lambda^\pi(t) dt\Big\}, \qquad x\ge 0.\end{aligned}$$ An important type of $\pi\in \mathscr{A}(x)$ is of the "feedback\" nature, that is, $\pi_t(w, \omega)=\pmb\pi(w, X^{\pmb\pi,x}_t(\omega))$ for some deterministic function $\pmb\pi$, where $X^{\pmb\pi,x}$ satisfies the SDE: $$\begin{aligned} \label{sde2} dX_t=\Big(\mu-\int_{0}^{a}w {\pmb\pi} (w, X_t)dw\Big)dt+\sigma dW_t, \quad t\ge 0; \quad X_0=x.\end{aligned}$$ **Definition 2**. *A function $\pmb \pi\in\mathbb{L}^0([0,a]\times\mathbb{R})$ is called a closed-loop admissible relaxed control if, for every $x>0$,* *1. The SDE $(\ref{sde2})$ admits a unique strong solution $X^{\pmb{\pi},x}$,* *2. The process $\pi=\{{{\pi}_t}(\cdot;\omega):= {\pmb\pi}(\cdot,X^{\pmb\pi,x}_t(\omega)); (t,\omega)\in[0, T]\times\Omega\} \in \mathscr{A}(x)$.* *We denote $\mathscr{A}_{cl}\subset \mathscr{A}(x)$ to be the set of closed-loop admissible relaxed controls. $\square$* The following properties of the value function are straightforward. **Proposition 3**. *Assume $a>1$. Then the value function $V$ satisfies the following properties:* *(1) $V(x)\ge V(y)$, if $x\ge y>0$;* *(2) $0\leq V(x)\leq \frac{\lambda \ln a+a}{c}$, $x\in\mathbb{R}_+$.* *Proof.* (1) Let $x\ge y$, and $\pi\in\mathscr{A}(y)$. Consider $\hat \pi_t(w, \omega):=\pi_t(w, \omega){\bf 1}_{\{t<\tau^\pi_y(\omega)\}} +\frac{e^{\frac{w}{\lambda}}} {\lambda(e^{\frac{a}{\lambda}}-1)} {\bf 1}_{\{t\ge \tau^\pi_y(\omega)\}}$, $(t,w, \omega)\in[0, \infty)\times[0,a]\times\Omega$. Then, it is readily seen that $J(x, \hat \pi)\ge J(y, \pi)$, for $a>1$. Thus $V(x)\ge J(x, \hat\pi)\ge V(y)$, proving (1), as $\pi\in\mathscr{A}(y)$ is arbitrary.
\(2\) By definition, $\int_{0}^{a} w\pi_t(w)dw \leq a$, and $-\int_{0}^{a} \ln(\pi_t(w))\pi_t(w)dw \leq \ln a$ by the well-known Kullback-Leibler divergence property. Thus, ${\cal H}_\lambda^\pi(t) \leq \lambda\ln a +a$, and then $V(x) \leq \frac{\lambda\ln a +a}{c}$. On the other hand, since $\hat{\pmb \pi}(w,x) \equiv\frac{1}{a}$, $(x,w) \in \mathbb{R}_+ \times [0,a]$, is admissible and $J(x,\hat{\pmb \pi}) \geq 0$ for $a>1$, the conclusion follows. $\square$ We remark that in optimal dividend control problems it is often assumed that the maximal dividend rate is greater than the average return rate (that is, $a>2\mu$), and that the average return of a surplus process $X$, including the safety loading, is higher than the interest rate $c$. These, together with Proposition [Proposition 3](#assl){reference-type="ref" reference="assl"}, lead to the following standing assumption that will be used throughout the paper. **Assumption 4**. *(i) The maximal dividend rate $a$ satisfies $a>\max\left\{1,2\mu\right\}$; and* *(ii) the average return $\mu$ satisfies $\mu >\max\left\{c,\sigma^2/2\right\}$. $\square$* # The Value Function and Its Regularity In this section we study the value function of the relaxed control problem ([\[v\]](#v){reference-type="ref" reference="v"}). We note that while most of the results are well-understood, some details still require justification, especially concerning the regularity, due particularly to the non-smoothness of the exit time $\tau_x$. We begin by recalling the Bellman optimality principle (cf.
e.g., [@YongZhou]): $$\begin{aligned} V(x)&=&\sup_{{\pi}(\cdot)\in\mathscr{A}(x)}\mathbb{E}_x\Big[\int_0^{s\wedge\tau^\pi}e^{-ct} {\cal H}_\lambda^\pi(t) dt +e^{-c(s\wedge\tau^\pi)}V(X_{s\wedge\tau^\pi}^\pi)\Big], \quad s>0.\end{aligned}$$ Noting that $V(0)=0$, we can (formally) argue that $V$ satisfies the HJB equation: $$\begin{aligned} \label{hjbfc} \qquad\quad\begin{cases} \displaystyle cv(x)\negthinspace=\negthinspace\sup_{{\pi}\in\mathbb{L}^1[0,a]}\int_0^a\negthinspace\Big[w\negthinspace-\negthinspace\lambda\ln\pi(w)+\frac{1}{2}\sigma^2v''(x)\negthinspace+\negthinspace(\mu-w)v'(x)\Big]\pi(w)dw;\\ v(0)=0. \end{cases}\end{aligned}$$ Next, by a Lagrange multiplier argument and the calculus of variations (see [@bff]), we can find the maximizer of the right-hand side of ([\[hjbfc\]](#hjbfc){reference-type="ref" reference="hjbfc"}) and obtain the optimal feedback control, which has the following *Gibbs form*, assuming all derivatives exist: $$\begin{aligned} \label{zykz} {\pmb\pi}^*(w,x)= G(w, 1-v'(x)),\end{aligned}$$ where $G(w,y) = \frac{y}{\lambda [e^{\frac{a}{\lambda}y}-1]}\cdot e^{\frac{w}{\lambda}y}{\bf 1}_{\{y \neq 0\}}+ \frac{1}{a}{\bf 1}_{\{ y=0\}}$ for $y \in \mathbb{R}$. Plugging [\[zykz\]](#zykz){reference-type="eqref" reference="zykz"} into ([\[hjbfc\]](#hjbfc){reference-type="ref" reference="hjbfc"}), we see that the HJB equation ([\[hjbfc\]](#hjbfc){reference-type="ref" reference="hjbfc"}) becomes the following second order ODE: $$\begin{aligned} \label{ode} \frac{1}{2}\sigma^2v''(x)+f(v'(x))-cv(x)=0, \quad x\ge 0;\qquad v(0)=0,\end{aligned}$$ where the function $f$ is defined by $$\begin{aligned} \label{fdxx} f(z):=\Big\{ \mu z +\lambda \ln\Big[\frac{\lambda(e^{\frac{a}{\lambda}(1-z)}-1 )}{1-z}\Big]\Big\}{\bf 1}_{\{z\neq1\}}+[ \mu+\lambda\ln a]{\bf 1}_{\{z=1\}}.\end{aligned}$$ The following result regarding the function $f$ is important in our discussion. **Proposition 5**.
*The function $f$ defined by [\[fdxx\]](#fdxx){reference-type="eqref" reference="fdxx"} enjoys the following properties:* *(1) $f(z)= \mu z + \lambda\ln w(z)$ for all $z \in \mathbb{R}$, where $w(z) = a+ \sum_{n=2}^\infty \frac{ a^n (1-z)^{n-1}}{n! \lambda^{n-1}}$, $z\in\mathbb{R}$. In particular, $f \in \mathbb{C}^\infty(\mathbb{R})$;* *(2) the function $f(\cdot)$ is convex and has a unique intersection point with $k(x)=\mu x$, $x\in\mathbb{R}$. Moreover, the abscissa $H$ of the intersection point satisfies $H\in (1,1+\lambda)$.* *Proof.* (1) Since the function $w(z)$ is an entire function and $w(z)>0$ for $z \ne 1$, $\ln[w(z)]$ is infinitely differentiable for all $z \ne 1$. On the other hand, since $w(1)=a>1$, by the continuity of $w(z)$, there exists $r>0$ such that $w(z) \in (\frac{a}{2}, \frac{3a}{2})$, whenever $|z -1|<r$. Thus $\ln[w(z)]$ is infinitely differentiable for $|z -1|<r$ as well. Consequently, $\ln[w(z)] \in \mathbb{C}^\infty(\mathbb{R})$, whence $f \in \mathbb{C}^\infty(\mathbb{R})$. \(2\) The convexity of the function $f$ follows from a direct calculation of $f''(z)$ for $z \in \mathbb{R}$. Define $\tilde w(z) := \lambda\ln[ w(z)]=f(z)-\mu z$. It is straightforward to show that $\tilde w(1) > 0$, $\tilde w(1+\lambda) <0$, and $\tilde w'(z) <0$. Thus $\tilde w(H)=0$ for some (unique) $H\in(1, 1+\lambda)$, proving (2). $\square$ We should note that ([\[ode\]](#ode){reference-type="ref" reference="ode"}) can be viewed as either a boundary value problem of an elliptic PDE with unbounded domain $[0, \infty)$ or a second order ODE defined on $[0,\infty)$. But in either case, there is missing information on boundary/initial conditions. Therefore the well-posedness of the classical solution is actually not trivial. Let us first consider the equation ([\[ode\]](#ode){reference-type="ref" reference="ode"}) as an ODE defined on $[0,\infty)$.
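Before turning to the ODE analysis, two quick numerical checks related to ([\[zykz\]](#zykz){reference-type="ref" reference="zykz"}) and Proposition [Proposition 5](#fdxz){reference-type="ref" reference="fdxz"} (a sketch with illustrative parameter values and an arbitrary truncation of the series for $w$): the Gibbs density $G(\cdot,y)$ integrates to one on $[0,a]$ for every $y$, and a bisection on $\tilde w(z)=f(z)-\mu z$ locates the intersection abscissa $H$ inside $(1,1+\lambda)$:

```python
import math
import numpy as np

a, lam, mu = 2.0, 0.8, 0.5                 # illustrative parameter values

# --- Check 1: G(., y) from the Gibbs form is a probability density on [0, a].
def G(w, y):
    if y == 0:
        return np.full_like(w, 1.0 / a)    # uniform branch at y = 0
    return y * np.exp(w * y / lam) / (lam * (np.exp(a * y / lam) - 1.0))

w = np.linspace(0.0, a, 100_001)
for y in (-1.0, 0.0, 0.7):
    dens = G(w, y)
    mass = np.trapezoid(dens, w) if hasattr(np, "trapezoid") else np.trapz(dens, w)
    assert dens.min() >= 0.0 and abs(mass - 1.0) < 1e-8

# --- Check 2: f(z) = mu*z + lam*ln(w(z)) via the series for w(z); the
#     intersection abscissa H of f with k(z) = mu*z lies in (1, 1 + lam).
def w_series(z, terms=80):
    return a + sum(a**n * (1 - z)**(n - 1) / (math.factorial(n) * lam**(n - 1))
                   for n in range(2, terms))

def f(z):
    return mu * z + lam * math.log(w_series(z))

assert abs(f(1.0) - (mu + lam * math.log(a))) < 1e-12   # series collapses at z = 1
lo, hi = 1.0, 1.0 + lam
assert f(lo) - mu * lo > 0 > f(hi) - mu * hi            # sign change => root H
for _ in range(60):                                      # bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) - mu * mid > 0 else (lo, mid)
H = 0.5 * (lo + hi)
assert 1.0 < H < 1.0 + lam
```

Both checks only use the closed forms stated above; in particular the series truncation level is a numerical convenience, not part of the analysis.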
Since the value function is non-decreasing by Proposition [Proposition 3](#assl){reference-type="ref" reference="assl"}, for the sake of argument let us first consider ([\[ode\]](#ode){reference-type="ref" reference="ode"}) as an ODE with initial condition $v(0)=0$ and $v'(0)=\alpha>0$. By denoting $X_1(x) = v(x)$ and $X_2(x) = v'(x)$, we see that ([\[ode\]](#ode){reference-type="ref" reference="ode"}) is equivalent to the following system of first order ODEs: for $x\in[0,\infty)$, $$\begin{aligned} \label{OODE} \begin{cases} X_1' = X_2, \qquad\qquad\qquad\qquad&X_1(0)=v(0)=0;\\ X_2'= \frac{2c}{\sigma^2} X_1-\frac{2}{\sigma^2}f(X_2), &X_2(0)=v'(0). \end{cases}\end{aligned}$$ Here $f$ is real-analytic on $\mathbb{R}$, since $w$ is entire and strictly positive on $\mathbb{R}$. Let us define $\tilde X_1:= X_1 - \frac{f(0)}{c}$, $X:=(\tilde X_1, X_2)^T$, $A := \begin{bmatrix} 0 & 1 \\ \frac{2c}{\sigma^2} & -\frac{2}{\sigma^2} h'(0) \end{bmatrix}$ and $q(X) = \begin{bmatrix} 0 \\ -\frac{2}{\sigma^2} k(X_2) \end{bmatrix}$ where $h(y): = f(y)-f(0) = yh'(0)+ \sum_{n=2}^\infty \frac{h^{(n)}(0)y^n}{n!} = yh'(0)+k(y)$. Then, $X$ satisfies the following system of ODEs: $$\begin{aligned} \label{ODEN} X'=AX+q(X), \qquad X(0)=(- f(0)/c, v'(0))^T.\end{aligned}$$ It is easy to check that $A$ has eigenvalues $\lambda_{1,2} = \frac{-h'(0) \mp \sqrt{2c\sigma^2 + h'(0)^2}}{\sigma^2}$, with $\lambda_1<0<\lambda_2$. Now, let $Y=PX$, where $P$ is such that $PAP^{-1} = \mbox{\rm diag}[ \lambda_1, \lambda_2 ]:= B$. Then $Y$ satisfies $$\begin{aligned} \label{ODEP} Y' = BY+g(Y), \quad Y(0)=PX(0),\end{aligned}$$ where $g(Y) = P q(P^{-1}Y)$.
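The saddle structure $\lambda_1<0<\lambda_2$, which is what permits selecting a decaying solution below, can be confirmed numerically; in this sketch the parameter values, including the number standing in for $h'(0)$, are arbitrary illustrations:

```python
import numpy as np

c, sigma2, hp0 = 0.05, 0.25, 0.3          # illustrative: sigma2 = sigma^2, hp0 = h'(0)

A = np.array([[0.0, 1.0],
              [2 * c / sigma2, -2 * hp0 / sigma2]])
eig = np.sort(np.linalg.eigvals(A).real)

# Closed form: lambda_{1,2} = (-h'(0) -/+ sqrt(2 c sigma^2 + h'(0)^2)) / sigma^2
disc = np.sqrt(2 * c * sigma2 + hp0**2)
lam1, lam2 = (-hp0 - disc) / sigma2, (-hp0 + disc) / sigma2

assert np.allclose(eig, [lam1, lam2])
assert lam1 < 0 < lam2                    # saddle: one stable, one unstable direction
```

Since $2c\sigma^2+h'(0)^2>h'(0)^2$, the discriminant always dominates $|h'(0)|$, so the sign pattern holds for any $c,\sigma^2>0$ regardless of the sign of $h'(0)$.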
Since $\nabla_Yg(Y)$ exists and tends to 0 as $|Y| \to 0$, and $\lambda_1<0<\lambda_2$, we can follow the argument of [@CoLe Theorem 13.4.3] to construct a solution $\tilde \phi$ to [\[ODEP\]](#ODEP){reference-type="eqref" reference="ODEP"} such that $|\tilde \phi(x)| \leq Ce^{-\alpha x}$ for some constant $C>0$, so that $|\tilde \phi(x)|\to 0$, as $x \to \infty$. Consequently, the function $\phi(x):=P^{-1} \tilde \phi(x)$ is a solution to [\[ODEN\]](#ODEN){reference-type="eqref" reference="ODEN"} satisfying $|\phi(x)|\to0$, as $x \to \infty$. In other words, [\[OODE\]](#OODE){reference-type="eqref" reference="OODE"} has a solution such that $(X_1(x), X_2(x)) \to (0+\frac{f(0)}{c}, 0) = (\frac{f(0)}{c},0)$ as $x \to \infty$. We summarize the discussion above as the following result. **Proposition 6**. *The differential equation ([\[ode\]](#ode){reference-type="ref" reference="ode"}) has a classical solution $v$ that enjoys the following properties:* *(i) $\lim_{x\to\infty} v(x)=\frac{f(0)}{c}$;* *(ii) $v'(0)>0$ and $\lim_{x\to\infty} v'(x)=0$;* *(iii) $v$ is increasing and concave.* *Proof.* Following the discussion preceding the proposition, we know that the classical solution $v$ to ([\[ode\]](#ode){reference-type="ref" reference="ode"}) satisfying (i) and (ii) exists. We need only check (iii). To this end, we shall follow an argument of [@shreve]. Let us first formally differentiate [\[ode\]](#ode){reference-type="eqref" reference="ode"} to get $v'''(x)=\frac{2c}{\sigma^2}v'(x)-\frac{2}{\sigma^2}f'(v'(x))v''(x)$, $x\in[0,\infty)$.
Since $v\in\mathbb{C}^2_b([0,\infty))$, denoting $m(x):=v'(x)$, we can write $$\begin{aligned} m''(x)=\frac{2c}{\sigma^2}m(x)-\frac{2}{\sigma^2}f'(m(x))m'(x)\qquad x\in[0,\infty).\end{aligned}$$ Now, noting Proposition [Proposition 5](#fdxz){reference-type="ref" reference="fdxz"}, we define a change of variables such that for $x\in[0,\infty)$, $\varphi(x):=\int_{0}^{x}\exp\left[\int_{0}^{v}-\frac{2}{\sigma^2}f'(m(w))dw\right]dv$, and denote $l(y)=m(\varphi^{-1}(y))$, $y\in(0, \infty)$. Since $\varphi(0)=0$, and $\varphi'(0)=1$, we can define $\varphi^{-1}(0)=0$ as well. Then we see that, $$\begin{aligned} \label{ly} \quad\quad\quad l''(y)=[\varphi'(\varphi^{-1}(y))]^{-2}\frac{2c}{\sigma^2} l(y),~ y\in(0,\infty); \quad l(0)=m(0)=v'(0)=\alpha>0.\end{aligned}$$ Since ([\[ly\]](#ly){reference-type="ref" reference="ly"}) is a homogeneous linear ODE, by uniqueness $l(0)=\alpha>0$ implies that $l(y)>0$, $y\ge 0$. That is, $m(x)=v'(x)> 0$, $x\ge 0$, and $v$ is (strictly) increasing. Finally, from ([\[ly\]](#ly){reference-type="ref" reference="ly"}) we see that $l(y)>0$, $y\in[0,\infty)$ also implies that $l''(y)> 0$, $y\in[0,\infty)$. Thus $l(\cdot)$ is convex on $[0,+\infty)$, and hence would be unbounded unless $l'(y)\le 0$ for all $y\in[0,\infty)$. This, together with the fact that $v(x)$ is a bounded and increasing function, shows that $l(\cdot)$ (i.e., $v'(\cdot)$) can only be decreasing and convex, thus $v''(x)$ (i.e., $l'(y)$) $\leq 0$, proving the concavity of $v$, whence the proposition. $\square$ **Viscosity Solution of [\[ode\]](#ode){reference-type="eqref" reference="ode"}.** We note that Proposition [Proposition 6](#classic){reference-type="ref" reference="classic"} requires that $v'(0)$ exists, which is not a priori known. We now consider ([\[ode\]](#ode){reference-type="ref" reference="ode"}) as an elliptic PDE defined on $[0,\infty)$, and argue that it possesses a unique bounded *viscosity solution*.
We will then identify its value $v'(0)$ and argue that it must coincide with the classical solution identified in Proposition [Proposition 6](#classic){reference-type="ref" reference="classic"}. To begin with, let us first recall the notion of viscosity solution to ([\[ode\]](#ode){reference-type="ref" reference="ode"}). For $\mathbb{D}\subseteq \mathbb{R}$, we denote the set of all upper (resp. lower) semicontinuous functions in $\mathbb{D}$ by USC$(\mathbb{D})$ (resp. LSC$(\mathbb{D})$). **Definition 7**. *We say that $u\in USC([0,+\infty))$ is a viscosity sub-(resp. super-)solution of [\[ode\]](#ode){reference-type="eqref" reference="ode"} on $[0,+\infty)$, if $u(0)=0$ and for any $x\in(0,+\infty)$ and $\varphi\in \mathbb{C}^2(\mathbb{R})$ such that $0=[u-\varphi](x)=\max_{y\in(0,+\infty)}$ (resp. $\min_{y\in(0,+\infty)}$)$[u-\varphi](y)$, it holds that $$\begin{aligned} \frac{1}{2}\sigma^2\varphi''(x)+f(\varphi'(x))-cu(x)\ge (\mbox{resp.}~\le)\ 0.\end{aligned}$$ We say that $u\in \mathbb{C}([0,+\infty))$ is a viscosity solution of [\[ode\]](#ode){reference-type="eqref" reference="ode"} on $[0,+\infty)$ if it is both a viscosity subsolution and a viscosity supersolution of [\[ode\]](#ode){reference-type="eqref" reference="ode"} on $[0,+\infty)$.* We first show that both a viscosity subsolution and a viscosity supersolution to ([\[ode\]](#ode){reference-type="ref" reference="ode"}) exist.
To see this, for $x\in[0,\infty)$, consider the following two functions: $$\begin{aligned} \label{supsolution} \underline{\psi}(x):=1-e^{-x},\quad \overline{\psi}(x):= \frac{A}{M}(1-e^{-M(x\wedge b)}),\end{aligned}$$ where $A$, $M$, $b>0$ are constants satisfying $M>2\mu/\sigma^2$ and the following constraints: $$\begin{aligned} \label{AMbound} \left\{\begin{array}{lll} \frac{1}{M}\Big\{\ln\Big(\frac{A}{A-M}\Big)\vee \ln\Big(\frac{A}{A-\frac{f(0)}{c}M}\Big)\Big\}< b<\frac{1}{M}\Big\{\ln\frac{A}{H}\wedge \ln\Big(\frac{\sigma^2}{2\mu}M\Big)\Big\}; \medskip\\ A>\max\left\{M+H,~\frac{f(0)}{c}M+H,~\frac{\sigma^2M^2}{\sigma^2M-2\mu},~\frac{f(0)}{c}\cdot\frac{\sigma^2M^2}{\sigma^2M-2\mu}\right\}. \end{array}\right.\end{aligned}$$ **Proposition 8**. *Assume that Assumption [Assumption 4](#jc){reference-type="ref" reference="jc"} holds, and let $\underline{\psi}$, $\overline{\psi}$ be defined by ([\[supsolution\]](#supsolution){reference-type="ref" reference="supsolution"}). Then $\underline{\psi}(\cdot)$ is a viscosity subsolution of [\[ode\]](#ode){reference-type="eqref" reference="ode"} on $[0,\infty)$, and $\overline{\psi}(\cdot)$ is a viscosity supersolution of [\[ode\]](#ode){reference-type="eqref" reference="ode"} on $[0,\infty)$. Furthermore, it holds that $\underline{\psi}(x)\leq\overline{\psi}(x)$ on $[0,\infty)$.* *Proof.* We first show that $\underline{\psi}$ is a viscosity subsolution. To see this, note that $\underline{\psi}'(x)=e^{-x}$ and $\underline{\psi}''(x)=-e^{-x}$ on $(0,+\infty)$.
By Assumption [Assumption 4](#jc){reference-type="ref" reference="jc"}, Proposition [Proposition 5](#fdxz){reference-type="ref" reference="fdxz"}, and the fact that $f'(1)<0$, we have $$\begin{aligned} &&\frac{1}{2}\sigma^2\underline{\psi}''(x)+f(\underline{\psi}'(x))-c(\underline{\psi}(x)) =-\frac{1}{2}\sigma^2e^{-x}+f(e^{-x})-c(1-e^{-x})\\ &\geq&-\frac{1}{2}\sigma^2e^{-x}+\mu+\lambda\ln a-c(1-e^{-x})=(c-\frac{1}{2}\sigma^2)e^{-x}+\mu+\lambda\ln a-c> 0.\end{aligned}$$ That is, $\underline{\psi}(x)$ is a viscosity subsolution of [\[ode\]](#ode){reference-type="eqref" reference="ode"} on $[0,\infty)$. To prove that $\overline{\psi}$ is a supersolution of [\[ode\]](#ode){reference-type="eqref" reference="ode"}, we take the following three steps: \(i\) Note that $\overline{\psi}'(x)=Ae^{-Mx}$ and $\overline{\psi}''(x)=-AMe^{-Mx}$ for all $x\in(0,b)$. Let $H$ be the abscissa of the intersection point of $f$ and $k$; then $H\in(1,1+\lambda)$, thanks to Proposition [Proposition 5](#fdxz){reference-type="ref" reference="fdxz"}. Since $A>H$ and $b<(1/M)\ln(A/H)$ (i.e. $Ae^{-Mb}>H$), we have $f(A)<\mu A$ and $f(Ae^{-Mb})<\mu Ae^{-Mb}$. Also, since $M>(2\mu/\sigma^2)$ and $b<(1/M)\ln(\sigma^2M/2\mu)$, we have $f(A)<\frac{1}{2}\sigma^2AMe^{-Mb}$ and $f(Ae^{-Mb})<\frac{1}{2}\sigma^2AMe^{-Mb}$. By Assumption [Assumption 4](#jc){reference-type="ref" reference="jc"}, $A>\max\left\{M,H\right\}$, $M>\frac{2\mu}{\sigma^2}$ and $b<\min\left\{\frac{\ln \left(\frac{A}{H}\right)}{M},\frac{\ln\left(\frac{\sigma^2}{2\mu}M\right)}{M}\right\}$, we have $-\frac{1}{2}\sigma^2AMe^{-Mx}+f(Ae^{-Mx})-c\cdot\frac{A}{M}(1-e^{-Mx})<0$ for $x\in(0,b)$. That is, $\overline{\psi}(x)$ is a viscosity supersolution of [\[ode\]](#ode){reference-type="eqref" reference="ode"} on $[0,b)$.
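As an aside, the final lower bound in the subsolution computation above, $(c-\frac{1}{2}\sigma^2)e^{-x}+\mu+\lambda\ln a-c>0$, uses only Assumption [Assumption 4](#jc){reference-type="ref" reference="jc"}; a quick grid spot-check with illustrative parameters satisfying $\mu>\max\{c,\sigma^2/2\}$ and $a>\max\{1,2\mu\}$:

```python
import numpy as np

mu, c, sigma, lam = 0.6, 0.1, 0.5, 0.4    # mu > max{c, sigma^2/2}
a = max(1.0, 2 * mu) + 0.5                # a > max{1, 2 mu}

x = np.linspace(0.0, 50.0, 100_001)
lhs = (c - sigma**2 / 2) * np.exp(-x) + mu + lam * np.log(a) - c

# When c < sigma^2/2 the worst case is at x = 0; still strictly positive here.
assert lhs.min() > 0
```

The bound is in fact strict uniformly in $x$: at $x=0$ it reduces to $\mu-\frac{1}{2}\sigma^2+\lambda\ln a>0$, and as $x\to\infty$ to $\mu+\lambda\ln a-c>0$, both guaranteed by Assumption 4.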
\(ii\) Next, since $A>\frac{f(0)M}{c}$ and $b>\frac{1}{M}\ln\big( \frac{A}{A-\frac{f(0)M}{c}} \big)$, we see that $f(0)-c\cdot\frac{A}{M}(1-e^{-Mb})<0$ for $x\in (b,\infty)$, and it follows that $\overline{\psi}(x)$ is a viscosity supersolution of [\[ode\]](#ode){reference-type="eqref" reference="ode"} on $(b,\infty)$. \(iii\) Finally, for $x=b$, it is clear that there is no test function satisfying the definition of supersolution. We thus conclude that $\overline{\psi}(x)$ is a viscosity supersolution of [\[ode\]](#ode){reference-type="eqref" reference="ode"} on $[0,\infty)$. Furthermore, noting Assumption [Assumption 4](#jc){reference-type="ref" reference="jc"}, $A>M$ and $b>\frac{\ln\big(\frac{A}{A-M}\big)}{M}$, some direct calculations show that $\underline{\psi}(x)\leq\overline{\psi}(x)$ on $[0,\infty)$, proving the proposition. $\square$ We now follow Perron's method to prove the existence of the (bounded) viscosity solution for [\[ode\]](#ode){reference-type="eqref" reference="ode"}. We first recall the following definition (see, e.g., [@BM]). **Definition 9**. *A function $\varphi\in\mathbb{L}^1(\mathbb{R}_+)$ is said to be of class $(L)$ if* *(1) $\varphi$ is increasing with respect to $x$ on $[0,+\infty)$;* *(2) $\varphi$ is bounded on $[0,+\infty)$. $\square$* Now let $\underline{\psi}$ and $\overline{\psi}$ be defined by ([\[supsolution\]](#supsolution){reference-type="ref" reference="supsolution"}), and consider the set $$\begin{aligned} \label{FF} \qquad\quad\mathfrak{F}:=\{u\in \mathbb{C}(\mathbb{R}_+)\ | \ \underline{\psi}\le u\le\overline{\psi};~ \mbox{$u$ is a class ($L$) vis. subsolution to \eqref{ode}}\}.\end{aligned}$$ Clearly, $\underline{\psi}\in \mathfrak{F}$, so $\mathfrak{F}\neq\emptyset$. Define $\hat v(x)=\sup_{u\in\mathfrak{F}}u(x)$, $x\in[0,+\infty)$, and let $v^*$ (resp. $v_*$) be the USC (resp. LSC) envelope of $\hat v$, defined respectively by $$\begin{aligned} v^*(x) (\mbox{resp.
$v_*(x)$}):=\mathop{\overline{\rm lim}}_{r\downarrow0} (\mbox{resp. $\displaystyle\mathop{\underline{\rm lim}}_{r\downarrow0}$})\{\hat v(y):y\in(0,+\infty), |y-x|\leq r\}.\end{aligned}$$ **Theorem 1**. *$v^*$ (resp. $v_*$) is a viscosity sub-(resp. super-)solution of class $(L)$ to [\[ode\]](#ode){reference-type="eqref" reference="ode"} on $\mathbb{R}_+$.* *Proof.* The proof is identical to that of a similar result in [@BM]; we omit it here. $\square$ Note that by definition we have $v_*(x)\le v^*(x)$, $x\in[0,\infty)$. Thus, given Theorem [Theorem 1](#perron){reference-type="ref" reference="perron"}, one can derive the existence and uniqueness of the viscosity solution to ([\[ode\]](#ode){reference-type="ref" reference="ode"}) of class (L) from the following *comparison principle*, which can be argued along the lines of [@user Theorem 5.1]; we omit the proof. **Theorem 2** (Comparison Principle). *Let $\bar{v}$ be a viscosity supersolution and $\underline{v}$ a viscosity subsolution of [\[ode\]](#ode){reference-type="eqref" reference="ode"}, both of class $(L)$. Then $\underline{v}\leq\bar{v}$. Consequently, $v^*=v_*=\hat v$ is the unique viscosity solution of class $(L)$ to [\[ode\]](#ode){reference-type="eqref" reference="ode"} on $[0,+\infty)$.* Building on this discussion, we can easily raise the regularity of the viscosity solution. **Corollary 10**. *Let $v$ be a viscosity solution of class (L) to the HJB equation ([\[hjbfc\]](#hjbfc){reference-type="ref" reference="hjbfc"}). Then $v$ has a right-derivative $v'(0+)>0$, and consequently $v \in \mathbb{C}^2_b([0,\infty))$. Furthermore, $v$ is concave and satisfies $\lim_{x\to\infty}v(x)=f(0)/c$ and $\lim_{x\to +\infty} v'(x)=0$.* *Proof.* Let $v$ be a viscosity solution of class (L) to ([\[hjbfc\]](#hjbfc){reference-type="ref" reference="hjbfc"}). We first claim that $v'(0+)>0$ exists.
Indeed, consider the subsolution $\underline{\psi}$ and supersolution $\overline{\psi}$ defined by ([\[supsolution\]](#supsolution){reference-type="ref" reference="supsolution"}). Applying Theorem [Theorem 2](#Comp){reference-type="ref" reference="Comp"}, for any sufficiently small $x>0$ we have $$\frac{1-e^{-x}}{x}=\frac{\underline{\psi}(x)}{x}\le \frac{v(x)}{x}\le \frac{\overline{\psi}(x)}{x}=\frac{A}{M} \frac{(1-e^{-Mx})}{x}.$$ Sending $x\searrow 0$ we obtain that $1\le \mathop{\underline{\rm lim}}_{x\searrow0}\frac{v(x)}{x}\le \mathop{\overline{\rm lim}}_{x\searrow0} \frac{v(x)}{x}\le A$. Since $v$ is of class (L), and hence increasing, $v'(0+)$ exists, and $v'(0+)=\alpha\ge 1>0$. It then follows from Proposition [Proposition 6](#classic){reference-type="ref" reference="classic"} that the ODE ([\[ode\]](#ode){reference-type="ref" reference="ode"}) has a bounded classical solution in $\mathbb{C}^2_b([0,\infty))$ satisfying $v'(0+)=\alpha$, which is increasing and concave. Hence it is also a viscosity solution to ([\[ode\]](#ode){reference-type="ref" reference="ode"}) of class ($L$). But by Theorem [Theorem 2](#Comp){reference-type="ref" reference="Comp"}, the bounded viscosity solution to ([\[ode\]](#ode){reference-type="ref" reference="ode"}) of class ($L$) is unique; thus the viscosity solution satisfies $v\in\mathbb{C}^2_b([0,\infty))$. The remaining properties are consequences of Proposition [Proposition 6](#classic){reference-type="ref" reference="classic"}. $\square$ **Verification Theorem and Optimal Strategy.** Having argued the well-posedness of the ODE ([\[ode\]](#ode){reference-type="ref" reference="ode"}) in both the classical and the viscosity sense, we now look at its connection to the value function. We have the following *Verification Theorem*. **Theorem 3**. *Assume that Assumption [Assumption 4](#jc){reference-type="ref" reference="jc"} is in force.
Then, the value function $V$ defined in [\[v\]](#v){reference-type="eqref" reference="v"} is a viscosity solution of class ($L$) to the HJB equation [\[ode\]](#ode){reference-type="eqref" reference="ode"}. More precisely, it holds that $$\begin{aligned} \label{Vsupv} V(x)=\sup_{v\in\mathfrak{F}}v(x):=\hat v(x), \qquad x\in[0,+\infty),\end{aligned}$$ where the set $\mathfrak{F}$ is defined by ([\[FF\]](#FF){reference-type="ref" reference="FF"}). Moreover, $V$ coincides with the classical solution of ([\[ode\]](#ode){reference-type="ref" reference="ode"}) described in Proposition [Proposition 6](#classic){reference-type="ref" reference="classic"}, and the optimal control has the following form: $$\begin{aligned} \label{zycl} \pi^*_t(w)= G(w,1-V'(X^{\pi^*}_t)).\end{aligned}$$* *Proof.* The proof that $V$ is a viscosity solution satisfying ([\[Vsupv\]](#Vsupv){reference-type="ref" reference="Vsupv"}) is more or less standard (see, e.g., [@YongZhou]), and Proposition [Proposition 3](#assl){reference-type="ref" reference="assl"} shows that $V$ must be of class ($L$). It then follows from Corollary [Corollary 10](#regu){reference-type="ref" reference="regu"} that $V'(0+)$ exists and $V$ is the (unique) classical solution of [\[ode\]](#ode){reference-type="eqref" reference="ode"}. It remains to show that $\pi^*$ defined by ([\[zycl\]](#zycl){reference-type="ref" reference="zycl"}) is optimal. To this end, note that $|{\cal H}_\lambda^{\pi^*}(t)| =\left|\int_{0}^{a}\bar{f}(w,V'(X_t^*))\pi_t^*(w)dw\right|$, where $\bar{f}(w,z):= \big\{wz+\lambda \ln\big[\frac{\lambda (e^{\frac{a}{\lambda}(1-z)}-1 )}{1-z}\big]\big\}{\bf 1}_{\{z\neq1\}}+[w+\lambda\ln a]{\bf 1}_{\{z=1\}}$.
Thus $$\begin{aligned} \mathbb{E}_x\Big[\int_{0}^{\tau^{\pi^*}} e^{-ct}|{\cal H}_\lambda^{\pi^*}(t)|dt\Big] =\mathbb{E}_x\Big[\int_0^{\tau^{\pi^*}}\negthinspace\negthinspace\negthinspace e^{-ct}\Big|\int_{0}^{a}\bar{f}(V'(X_t^*))\pi_t^*(w)dw\Big|dt\Big]<+\infty,\end{aligned}$$ as $V'(X_t^*)\in(0,V'(0+)]$, thanks to the concavity of $V$. Consequently $\pi^*\in\mathcal{A}(x)$. Finally, since $V\in\mathbb{C}^2_b([0,\infty))$ and $\pi^*$, defined by ([\[zycl\]](#zycl){reference-type="ref" reference="zycl"}), is obviously the maximizer of the Hamiltonian in the HJB equation ([\[hjbfc\]](#hjbfc){reference-type="ref" reference="hjbfc"}), the optimality of $\pi^*$ follows from a standard argument via Itô's formula, which we omit. $\square$

# Policy Update

We now turn to an important step in the RL scheme, the so-called *Policy Update*. More precisely, we prove a *Policy Improvement Theorem*, which states that for any closed-loop policy $\pmb\pi\in \mathscr{A}_{cl}(x)$ we can construct another policy $\tilde{\pmb\pi}\in \mathscr{A}_{cl}(x)$ such that $J(x, \tilde{\pmb\pi})\ge J(x, \pmb\pi)$. Furthermore, we argue that such a policy-updating procedure can be carried out without using the system parameters, and we shall discuss the convergence of the iterations to the optimal policy. To begin with, for $x\in\mathbb{R}$ and $\pmb\pi \in \mathscr{A}_{cl}(x)$, let $X^{\pmb\pi, x}$ be the unique strong solution to the SDE ([\[sde2\]](#sde2){reference-type="ref" reference="sde2"}). For $t>0$, we consider the process $\hat W_s:=W_{s+t}-W_t$, $s>0$. Then $\hat W$ is an $\hat{\mathbb{F}}$-Brownian motion, where $\hat{\cal F}_s={\cal F}_{s+t}$, $s>0$.
Since the SDE ([\[sde2\]](#sde2){reference-type="ref" reference="sde2"}) is time-homogeneous, pathwise uniqueness then renders the *flow property*: $X_{r+t}^{\pmb\pi, x}=\hat X_r^{\pmb\pi,X_t^{\pmb\pi,x} }$, $r\ge 0$, where $\hat X$ satisfies the SDE $$\begin{aligned} \label{SDE0} d\hat X_s=\Big(\mu-\int_{0}^{a}w \pmb{\pi} (w, \hat X_s)dw\Big)ds+\sigma d\hat W_s, \quad s\ge 0; \quad\hat X_0= X_t^{\pmb\pi,x}.\end{aligned}$$ Now let $\hat \pi:=\pmb\pi(\cdot, \hat X_\cdot)\in\mathscr{A}_{ol}(X_t^{\pmb\pi, x})$ denote the open-loop strategy induced by the closed-loop control $\pmb\pi$. Then the corresponding cost functional can be written as (denoting $X^{\pmb\pi}=X^{\pmb\pi,x}$) $$\begin{aligned} \label{Jpmbpi} \quad J (X_t^{\pmb\pi};\pmb\pi) = \mathbb{E}_{X_t^{\pmb\pi}} \Big[ \int_{0}^{\tau ^ {\pmb{\pi}}_{X_t^{\pmb\pi}}} e^{-cr} \Big[\int_{0}^{a} ( w - \lambda \ln \hat{\pi}_r(w)) \hat{\pi}_r(w) dw \Big] dr\Big], \quad t\ge 0,\end{aligned}$$ where ${\tau ^ {\pmb\pi}_{X_t^{\pmb\pi, x}}} = \inf \{r>0: \hat X_r^{\pmb\pi,X_t^{\pmb\pi,x} } <0 \}$. It is clear that, by the flow property, we have ${\tau ^{\pmb\pi}_x} = {\tau^{\pmb\pi}_{X_t^{\pmb\pi,x}}}+t$, $\mathbb{P}$-a.s. on $\{\tau^{\pmb\pi}_x>t\}$. Next, for any admissible policy $\pmb{\pi}\in \mathscr{A}_{cl}$, we formally define a new feedback control policy as follows: for $(w,x)\in [0,a]\times \mathbb{R}^+$, $$\begin{aligned} \label{pitilde} \pmb{\tilde \pi}(w,x) := G(w, 1- J^{'}(x;\pmb{\pi}) ), \end{aligned}$$ where $G(\cdot, \cdot)$ is the Gibbs function defined by ([\[zykz\]](#zykz){reference-type="ref" reference="zykz"}). We would like to emphasize that the new policy $\pmb{\tilde\pi}$ in ([\[pitilde\]](#pitilde){reference-type="ref" reference="pitilde"}) depends on $J$ and $\pmb{\pi}$, but is independent of the coefficients $(\mu, \sigma)$(!). To facilitate the argument we introduce the following definition. **Definition 11**.
*A function $x\mapsto {\pmb \pi}(\cdot,x)\in\mathscr{P}([0,a])$ is called "Strongly Admissible\" if its density function enjoys the following properties:* *(i) there exist $u,l > 0$ such that $l \leq \pmb \pi (w,x) \leq u$, $x \in \mathbb{R}^+$ and $w \in [0,a]$;* *(ii) there exists $K > 0$ such that $|\pmb \pi(w,x) - \pmb \pi (w,y)| \leq K |x-y|$, $x,y \in \mathbb{R}^+$, uniformly in $w$.* *The set of strongly admissible controls is denoted by $\mathscr{A}^s_{cl}$. $\square$* The following lemma justifies Definition [Definition 11](#strongadm){reference-type="ref" reference="strongadm"}. **Lemma 1**. *Suppose that $x\mapsto {\pmb \pi}(\cdot,x)\in\mathscr{P}([0,a])$ has a density of the form $\pmb\pi(w,x) = G(w,c(x))$, where $c\in \mathbb{C}^1_b(\mathbb{R}_+)$. Then $\pmb\pi\in\mathscr{A}^s_{cl}$.* *Proof.* Since $c\in \mathbb{C}^1_b(\mathbb{R}_+)$, let $K>0$ be such that $|c(x)|+|c'(x)|\le K$, $x\in\mathbb{R}_+$. Next, since $G$ is positive and continuous, and for fixed $w$, $G(w,\cdot) \in\mathbb{C}^\infty(\mathbb{R})$, there exist constants $0<l<u$ such that $l \le G(w,y) \le u$ and $|G_y(w, y)|\le u$, $(w,y) \in [0,a] \times[-K,K]$. Consequently, we have $l \le \pmb\pi(w,x)= G(w,c(x)) \le u$, $(w,x) \in [0,a] \times \mathbb{R}^+$, and $\pmb\pi(w,\cdot) = G(w,c(\cdot))$ is uniformly Lipschitz on $\mathbb{R}^+$, uniformly in $w\in[0,a]$, proving the lemma. $\square$ In what follows we shall use the following notations. For any $\pmb\pi\in\mathscr{A}_{cl}$, $$\begin{aligned} \label{ABr} \quad r^{\pmb\pi}(x) := \int_0^a (w -\lambda\ln \pmb\pi(w,x)) \pmb\pi(w,x) dw; \quad b^{\pmb\pi}(x)=\mu- \int_0^a w \pmb\pi(w,x) dw.\end{aligned}$$ Clearly, for $\pmb\pi\in\mathscr{A}_{cl}^s$, $b^{\pmb \pi}$ and $r^{\pmb \pi}$ are bounded and Lipschitz continuous.
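To make the quantities in ([\[ABr\]](#ABr){reference-type="ref" reference="ABr"}) concrete, the sketch below evaluates $r^{\pmb\pi}(x)$ and $b^{\pmb\pi}(x)$ by trapezoidal quadrature. For the uniform density $\pmb\pi(w,x)\equiv 1/a$ (a strongly admissible choice), both integrals have closed forms, $r = a/2+\lambda\ln a$ and $b=\mu-a/2$, which serve as a check; the numerical values of $a$, $\lambda$, $\mu$ are purely illustrative.

```python
import math

def r_pi(pi, x, a, lam, n=10_000):
    """r^pi(x) = ∫_0^a (w - lam*ln(pi(w,x))) * pi(w,x) dw, via the trapezoidal rule."""
    h = a / n
    f = lambda w: (w - lam * math.log(pi(w, x))) * pi(w, x)
    return h * (0.5 * f(0.0) + sum(f(i * h) for i in range(1, n)) + 0.5 * f(a))

def b_pi(pi, x, a, mu, n=10_000):
    """b^pi(x) = mu - ∫_0^a w * pi(w,x) dw, via the trapezoidal rule."""
    h = a / n
    f = lambda w: w * pi(w, x)
    return mu - (h * (0.5 * f(0.0) + sum(f(i * h) for i in range(1, n)) + 0.5 * f(a)))

# Illustrative constants and the uniform density pi(w, x) = 1/a.
a, lam, mu = 2.0, 0.5, 1.0
uniform = lambda w, x: 1.0 / a

# Compare against the closed forms r = a/2 + lam*ln(a) and b = mu - a/2.
print(abs(r_pi(uniform, 0.0, a, lam) - (a / 2 + lam * math.log(a))) < 1e-8)
print(abs(b_pi(uniform, 0.0, a, mu) - (mu - a / 2)) < 1e-8)
```

Since both integrands are linear in $w$ for the uniform density, the trapezoidal rule is exact here up to rounding.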
We denote by $X:=X^{\pmb\pi, x}$ the solution to the SDE ([\[sde2\]](#sde2){reference-type="ref" reference="sde2"}), and rewrite the cost functional ([\[Jxpi\]](#Jxpi){reference-type="ref" reference="Jxpi"}) as $$\begin{aligned} \label{Jxpi1} \qquad J(x, \pmb\pi)=\mathbb{E}_x\Big[\int_0^{\tau_x^{\pmb\pi}} e^{-cs}r^{\pmb\pi}(X^{\pmb\pi, x}_s)ds\Big],\end{aligned}$$ where $\tau^{\pmb\pi}_x = \inf \{t>0: X^{\pmb\pi,x}_t <0 \}$. Thus, in light of the Feynman-Kac formula, for any $\pmb\pi \in \mathscr{A}^s_{cl}$, $J(\cdot, \pmb\pi)$ is the *probabilistic solution* to the following ODE on $\mathbb{R}_+$: $$\begin{aligned} \label{fkpde} \qquad L^{\pmb\pi}[u](x)+r^{\pmb\pi}(x)\negthinspace:= \negthinspace\frac{1}{2} \sigma^2 u_{xx}(x) + b^{\pmb\pi}(x)u_x(x) -cu(x) + r^{\pmb\pi}(x) = 0, ~ u(0) =0.\end{aligned}$$ Now let $u^{\pmb\pi}_R$ denote the solution to the linear elliptic equation ([\[fkpde\]](#fkpde){reference-type="ref" reference="fkpde"}) on the finite interval $[0,R]$ with boundary conditions $u(0)=0$ and $u(R)=J(R, \pmb\pi)$. Then, by the regularity and boundedness of $b^{\pmb\pi}$ and $r^{\pmb\pi}$, and using only interior-type Schauder estimates (cf. [@GT]), one can show that $u^{\pmb\pi}_R\in\mathbb{C}^2_b([0,R])$, and that the bounds of $(u^{\pmb\pi}_R)'$ and $(u^{\pmb\pi}_R)''$ depend only on those of the coefficients $b^{\pmb\pi}$, $r^{\pmb\pi}$, and $J(\cdot, \pmb\pi)$, uniformly in $R>0$. Sending $R\to\infty$ and applying the standard diagonalization argument (cf. e.g., [@Lady]), one shows that $\lim_{R\to\infty}u^{\pmb\pi}_R(\cdot)=J(\cdot, \pmb\pi)$, which satisfies ([\[fkpde\]](#fkpde){reference-type="ref" reference="fkpde"}). We summarize the above discussion in the following proposition for ready reference. **Proposition 12**. *If $\pmb{\pi} \in\mathscr{A}^s_{cl}$, then $J(\cdot, \pmb{\pi})\in \mathbb{C}^2_b(\mathbb{R}^+)$, and the bounds of $J'$ and $J''$ depend only on those of $b^{\pmb\pi}$, $r^{\pmb\pi}$, and $J(\cdot, \pmb\pi)$.
$\square$* Our main result of this section is the following *Policy Improvement Theorem*. **Theorem 4**. *Assume that Assumption [Assumption 4](#jc){reference-type="ref" reference="jc"} is in force. Let $\pmb\pi\in\mathscr{A}^s_{cl}$ and let $\tilde{\pmb\pi}$ be defined by ([\[pitilde\]](#pitilde){reference-type="ref" reference="pitilde"}) associated with $\pmb\pi$. Then it holds that $J(x,\pmb{ \tilde\pi} ) \geq J(x,\pmb{ \pi})$, $x \in \mathbb{R}_+$.* *Proof.* Let $\pmb\pi\in\mathscr{A}^s_{cl}$ be given, and let $\pmb{\tilde\pi }$ be the corresponding control defined by ([\[pitilde\]](#pitilde){reference-type="ref" reference="pitilde"}). Since $\pmb\pi\in\mathscr{A}^s_{cl}$, $b^{\pmb\pi}$ and $r^{\pmb\pi}$ are uniformly bounded, and by Proposition [Proposition 12](#propDB){reference-type="ref" reference="propDB"}, $(1-J'(\cdot, \pmb \pi)) \in \mathbb{C}^1_b(\mathbb{R}^+)$. Thus Lemma [Lemma 1](#lemma4){reference-type="ref" reference="lemma4"} (with $c(x) = 1 - J'(x , \pmb\pi)$) implies that $\tilde{\pmb\pi}\in\mathscr{A}^s_{cl}$ as well. Moreover, since $\pmb\pi\in\mathscr{A}^s_{cl}$, $J(\cdot, \pmb\pi)$ is a $\mathbb{C}^2$-solution to the ODE ([\[fkpde\]](#fkpde){reference-type="ref" reference="fkpde"}). Now, since $\tilde{\pmb\pi}\in\mathscr{A}^s_{cl}$ is the maximizer of $\sup_{\widehat{\pmb\pi}\in\mathscr{A}^s_{cl}} [b^{\widehat{\pmb\pi}}(x)J'(x, \pmb\pi)+r^{\widehat{\pmb\pi}}(x)]$, we have $$\begin{aligned} \label{Jpitilde} L^{\tilde{\pmb\pi}}[J(\cdot, \pmb\pi)](x)+ r^{\tilde{\pmb\pi}}(x) \ge 0, \quad x\in\mathbb{R}_+.\end{aligned}$$ Now, let us consider the process $X^{\tilde{\pmb\pi}}$, the solution to ([\[SDE0\]](#SDE0){reference-type="ref" reference="SDE0"}) with $\pmb\pi$ replaced by $\tilde{\pmb{\pi}}$.
Applying Itô's formula to $e^{-ct}J(X^{\tilde{\pmb\pi}}_t, {\pmb{\pi}})$ from $0$ to $\tau^{\tilde{\pmb\pi}}_x\wedge T$, for any $T>0$, and noting the definitions of $b^{\tilde{\pmb\pi}}$ and $r^{\tilde{\pmb\pi}}$, we deduce from ([\[Jpitilde\]](#Jpitilde){reference-type="ref" reference="Jpitilde"}) that $$\begin{aligned} \label{fgh} && e^{-c(\tau^{\tilde{\pmb\pi}}_x\wedge T)}J (X_{\tau^{\tilde{\pmb\pi}}_x\wedge T}^{\tilde{\pmb\pi}} ,\pmb{ \pi}) \\ & =& J (x,\pmb{ \pi}) + \int_0^{\tau^{\tilde{\pmb\pi}}_x\wedge T} \negthinspace\negthinspace e^{-cr}L^{\tilde{\pmb\pi}}[J(\cdot, \pmb\pi)](X_r^{\tilde{\pmb\pi}})dr + \int_0^{\tau^{\tilde{\pmb\pi}}_x\wedge T} e^{-cr} J^{'}(X_r^{\tilde{\pmb\pi}},\pmb{\pi}) {\sigma} dW_r \nonumber \\ &\ge& J (x,\pmb{ \pi}) - \int_0^{\tau^{\tilde{\pmb\pi}}_x\wedge T} e^{-cr}r^{\tilde{\pmb\pi}} (X_r^{\tilde{\pmb\pi}}) dr + \int_0^{\tau^{\tilde{\pmb\pi}}_x\wedge T} e^{-cr} J^{'}(X_r^{\tilde{\pmb\pi}},\pmb{\pi}) {\sigma} dW_r. \nonumber \end{aligned}$$ Taking expectations on both sides above, sending $T\to \infty$, and noting that $J (X_{\tau^{\tilde{\pmb\pi}}_x}^{\tilde{\pmb\pi}} ,\pmb{ \pi})=J(0, \pmb\pi)=0$, we obtain that $J(x,\pmb{ \pi}) \le J(x, \tilde{\pmb\pi})$, $x \in \mathbb{R}^+$, proving the theorem. $\square$ In light of Theorem [Theorem 4](#PIthm){reference-type="ref" reference="PIthm"} we can naturally define a "learning sequence\" as follows. We start with $c_0\in \mathbb{C}^1_b(\mathbb{R}^+)$, define $\pmb\pi_0(x,w):=G(w, c_0(x))$ and $v_0(x) := J(x, \pmb\pi_0)$, and set $$\begin{aligned} \label{pin} \quad\quad\pmb\pi_n(x,w) := G(w, 1- J'(x, \pmb\pi_{n-1})), \qquad(w,x)\in [0,a]\times \mathbb{R}^+, \quad n \geq 1. \end{aligned}$$ Also, for each $n \geq 1$, let $v_{n}(x) := J(x, \pmb\pi_{n})$.
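One step of the update ([\[pin\]](#pin){reference-type="ref" reference="pin"}) can be sketched as follows. Here we *assume* the Gibbs density has the exponential form $G(w,z)\propto e^{w(1-z)/\lambda}$ on $[0,a]$ (the form suggested by the normalizing constant appearing in $\bar{f}$ in the proof of the Verification Theorem); the stand-in for the derivative estimate $J'(\cdot,\pmb\pi_{n-1})$ and all constants are hypothetical. Observe that $(\mu,\sigma)$ never enter the update.

```python
import math

def G(w, z, a=2.0, lam=0.5):
    """Hypothetical Gibbs density on [0, a]: G(w, z) ∝ exp(w*(1-z)/lam),
    normalized so that it integrates to 1 in w."""
    if abs(1.0 - z) < 1e-12:
        return 1.0 / a                      # degenerate case z = 1: uniform
    s = (1.0 - z) / lam
    return s * math.exp(s * w) / (math.exp(s * a) - 1.0)

def policy_update(J_prime, a=2.0, lam=0.5):
    """One step of (pin): pi_n(x, w) = G(w, 1 - J'(x, pi_{n-1})).
    Note that mu and sigma do not appear -- the update is model-free."""
    return lambda x, w: G(w, 1.0 - J_prime(x), a, lam)

# Stand-in for J'(x, pi_{n-1}): any bounded C^1 function works (cf. Lemma 1).
J_prime = lambda x: math.exp(-x)            # hypothetical derivative estimate
pi_next = policy_update(J_prime)

# The updated policy is a probability density on [0, a] for every x
# (checked here by midpoint-rule quadrature at x = 1).
n, a = 100_000, 2.0
h = a / n
mass = h * sum(pi_next(1.0, (i + 0.5) * h) for i in range(n))
print(round(mass, 4))
```

By Lemma 1, any update of this Gibbs form with a bounded $\mathbb{C}^1$ input is again strongly admissible, which is what the iteration ([\[pin\]](#pin){reference-type="ref" reference="pin"}) exploits.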
The natural question is whether this learning sequence is actually a "maximizing sequence\", that is, whether $v_n(x)\nearrow v(x)$ as $n\to\infty$. Such a result would obviously justify the *policy improvement* scheme, and was proved in the LQ case in [@zhou2]. Before we proceed, we note that by Proposition [Proposition 12](#propDB){reference-type="ref" reference="propDB"} the learning sequence satisfies $v_n =J(\cdot, \pmb{\pi}^n)\in \mathbb{C}^2_b(\mathbb{R}_+)$, $n \geq 1$, but the bounds may depend on the coefficients $b^{\pmb\pi^n}$, $r^{\pmb\pi^n}$, and thus may not be uniform in $n$. But by the definition of $b^{\pmb\pi^n}$ and Proposition [Proposition 3](#assl){reference-type="ref" reference="assl"}, we see that $\sup_n\|b^{\pmb\pi^n}\|_{\mathbb{L}^\infty(\mathbb{R}_+)}+\|V\|_{\mathbb{L}^\infty(\mathbb{R}_+)}\le C$ for some $C>0$. Moreover, since $J(\cdot, \pmb\pi^0) \le J(\cdot, \pmb\pi^n) \le V(\cdot)$ for each $n \geq 1$, if we choose $\pmb\pi^0\in\mathscr{A}^s_{cl}$ to be such that $J(x,\pmb\pi^0)\ge 0$ (e.g., $\pmb\pi^0_t\equiv \frac1a$), then we have $\|J(\cdot, \pmb\pi^n) \|_{\mathbb{L}^\infty(\mathbb{R}_+)}\le\|V\|_{\mathbb{L}^\infty(\mathbb{R}_+)}\le C$ for all $n \ge 1$. That is, the $v_n$'s are bounded uniformly in $n$, provided that the $r^{\pmb\pi^n}$'s are. The following result, based on the recent work [@HWZcvg], is thus crucial. **Proposition 13**. *The functions $r^{\pmb\pi_n}$, $n\ge 1$, are uniformly bounded, uniformly in $n$. Consequently, the learning sequence satisfies $v_n =J(\cdot, \pmb{\pi}^n)\in \mathbb{C}^2_b(\mathbb{R}_+)$, $n\ge 1$, and the bounds of the $v_n$'s, up to their second derivatives, are uniform in $n$. $\square$* Our main result of this section is the following. **Theorem 5**. *Assume that Assumption [Assumption 4](#jc){reference-type="ref" reference="jc"} is in force. Then the sequence $\{v_n\}_{n \geq 0}$ is a maximizing sequence.
Furthermore, the sequence $\{\pmb\pi_n\}_{n \geq 0}$ converges to the optimal policy $\pmb\pi^*$.* *Proof.* We first observe that by Lemma [Lemma 1](#lemma4){reference-type="ref" reference="lemma4"} the sequence $\{\pmb\pi_n\}\subset \mathscr{A}_{cl}^s$, provided $\pmb\pi_0\in\mathscr{A}_{cl}^s$. Since $v_n=J(\cdot, \pmb\pi_n)$, Proposition [Proposition 13](#assumAP){reference-type="ref" reference="assumAP"} guarantees that $v_n\in \mathbb{C}^2_b(\mathbb{R}_+)$, with bounds independent of $n$. Thus a simple application of the Arzelà-Ascoli theorem shows that there exist subsequences $\{n_k\}_{k\ge1}$ and $\{n'_k\}_{k\ge1}$ such that $\{v_{n_k}\}_{k \geq 0}$ and $\{v'_{n'_k}\}_{k \geq 0}$ converge uniformly on compacts. Let us fix any compact set $E\subset \mathbb{R}_+$, and assume $\lim_{k\to\infty}v_{n_k}(\cdot)=v^*(\cdot)$, uniformly on $E$, for some function $v^*$. By the definition of the $\pmb\pi_n$'s, the sequence $\{v_n\}$ is monotonically increasing, thanks to Theorem [Theorem 4](#PIthm){reference-type="ref" reference="PIthm"}; thus the whole sequence $\{v_n\}_{n\ge0}$ must converge uniformly on $E$ to $v^*$. Next, let us assume that $\lim_{k\to\infty}v'_{n'_k}(\cdot)=v^{**}(\cdot)$, uniformly on $E$, for some function $v^{**}$. Since obviously $\lim_{k\to\infty}v_{n'_k}(\cdot)=v^{*}(\cdot)$ as well, and since the derivative operator is a closed operator, it follows that $v^{**}(x)= (v^*)'(x)$, $x \in E$. Applying the same argument, one shows that every subsequence of $\{v'_n\}$ has a further subsequence that converges uniformly on $E$ to the same limit $(v^*)'$; we conclude that the sequence $\{v'_n\}$ itself converges uniformly on $E$ to $(v^*)'$. Since $E$ is arbitrary, this shows that $\{(v_n, v'_n)\}_{n\ge0}$ converges uniformly on compacts to $(v^*, (v^*)')$.
Since $\pmb \pi_n$ is a continuous function of $v'_n$, we see that $\{\pmb \pi_n\}_{n \geq 0}$ converges uniformly to $\pmb\pi^*\in\mathscr{A}_{cl}$, defined by $\pmb\pi^*(x,w) := G(w, 1- (v^*)'(x))$. Finally, applying Lemma [Lemma 1](#lemma4){reference-type="ref" reference="lemma4"} we see that $\pmb\pi^* \in\mathscr{A}^s_{cl}$, and the structure of $\pmb\pi^*(\cdot,\cdot)$ guarantees that $v^*$ satisfies the HJB equation ([\[hjbfc\]](#hjbfc){reference-type="ref" reference="hjbfc"}) on the compact set $E$. Since $E$ is arbitrary, it follows that $v^*$ satisfies the HJB equation ([\[hjbfc\]](#hjbfc){reference-type="ref" reference="hjbfc"}) (or equivalently ([\[ode\]](#ode){reference-type="ref" reference="ode"})) on $\mathbb{R}^+$. Now, using a slightly modified version of the verification argument of Theorem 4.1 in [@HWZcvg], we conclude that $v^* = V$ is the unique solution to the HJB equation ([\[hjbfc\]](#hjbfc){reference-type="ref" reference="hjbfc"}), and thus $\pmb\pi^*$ is by definition the optimal control. $\square$ **Remark 14**. * An alternative policy improvement method is the so-called *Policy Gradient* (PG) method introduced in [@PGJZ], applicable to both finite- and infinite-horizon problems. Roughly speaking, a PG method parametrizes the policies $\pmb \pi ^ \phi\in{\cal A}_{cl}^s$ and then solves for $\phi$ via the equation $\nabla_\phi J(x,\pi ^ \phi ) =0$, using a stochastic approximation method. The advantage of a PG method is that it does not depend on the system parameters, whereas in theory Theorem [Theorem 4](#PIthm){reference-type="ref" reference="PIthm"} is based on finding the maximizer of the Hamiltonian, and thus the learning strategy ([\[pin\]](#pin){reference-type="ref" reference="pin"}) may depend on the system parameters.
However, a closer look at the learning parameters $c_n$ and $d_n$ in ([\[pin\]](#pin){reference-type="ref" reference="pin"}) shows that they depend only on $v_n$, but not on $(\mu, \sigma)$ directly. In fact, we believe that in our case the PG method would not be advantageous, especially given the convergence result in Theorem [Theorem 5](#convergence){reference-type="ref" reference="convergence"} and the fact that the PG method also requires a proper choice of the parameterization family which, to the best of our knowledge, remains a challenging issue in practice. We shall therefore content ourselves with algorithms using the learning strategy ([\[pin\]](#pin){reference-type="ref" reference="pin"}) for our numerical analysis in §7. $\square$*

# Policy Evaluation --- A Martingale Approach

Having proved the policy improvement theorem, we turn our attention to an equally important issue in the learning process, namely the *evaluation* of the cost (value) functional, or *Policy Evaluation*. In the reinforcement learning literature, policy evaluation usually refers to the process of approximating the cost functional $J(\cdot,\pmb\pi)$, for a given feedback control $\pmb\pi$, by a parametric family of functions $J^\theta$, where $\theta \in \Theta \subseteq \mathbb{R}^l$. Throughout this section, we shall consider a fixed feedback control policy $\pmb\pi\in\mathscr{A}^s_{cl}$. Thus, for simplicity of notation, we shall drop the superscript $\pmb\pi$ and write $r(x) = r^{\pmb\pi}(x)$, $b(x) = b^{\pmb\pi}(x)$, $J(x)=J(x, \pmb\pi)$, and $\tau_x = \tau^{\pmb\pi}_x$. We note that for $\pmb\pi\in\mathscr{A}^s_{cl}$, the functions $r,b \in \mathbb{C}_b(\mathbb{R}_+)$ and $J \in \mathbb{C}_b^2(\mathbb{R}_+)$.
Now let $X^x=X^{\pmb\pi, x}$ be the solution to the SDE ([\[Xpi\]](#Xpi){reference-type="ref" reference="Xpi"}), and recall that $J(\cdot)$ satisfies the ODE ([\[fkpde\]](#fkpde){reference-type="ref" reference="fkpde"}). Then, applying Itô's formula, we see that $$\begin{aligned} \label{M} M^x_t: = e^{-ct}J(X^x_t) + \int_0^t e^{-cs}r(X^x_s) ds, \qquad t\ge 0,\end{aligned}$$ is an $\mathbb{F}$-martingale. Furthermore, the following result is more or less standard. **Proposition 15**. *Assume that Proposition [Proposition 13](#assumAP){reference-type="ref" reference="assumAP"} holds, and suppose that $\tilde J(\cdot)\in\mathbb{C}_b(\mathbb{R}_+)$ is such that $\tilde J (0) = 0$ and, for all $x \in \mathbb{R}_+$, the process $\tilde M^x:= \{ \tilde M^x_t = e^{-ct}\tilde J(X^x_t) + \int_0^te^{-cs} r(X^x_s) ds ;~ t\geq 0\}$ is an $\mathbb{F}$-martingale. Then $J \equiv \tilde J$.* *Proof.* First note that $J(0)=\tilde{J}(0)=0$, and $X^x_{\tau_x}=0$. By ([\[M\]](#M){reference-type="ref" reference="M"}) and the definition of $\tilde M^x$ we have $\tilde M^x_{\tau_x}=\int_0^{\tau_x} e^{-cs}r(X^x_s) ds =M^x_{\tau_x}$. Now, since $r$, $J$, and $\tilde J$ are bounded, both $\tilde M^x$ and $M^x$ are uniformly integrable $\mathbb{F}$-martingales, and by optional sampling it holds that $$\tilde J(x)=\tilde M^x_0=\mathbb{E}[\tilde M^x_{\tau_x}|{\cal F}_0]=\mathbb{E}[M^x_{\tau_x}|{\cal F}_0]=M^x_0 = J(x), \qquad x\in\mathbb{R}_+.$$ The result follows. $\square$ We now consider a family of functions $\{J^\theta(x): (x, \theta)\in\mathbb{R}_+\times \Theta\}$, where $\Theta\subseteq\mathbb{R}^l$ is a certain index set. For the sake of argument, we shall assume further that $\Theta$ is compact. Moreover, we shall make the following assumptions on the parameterized family $\{J^\theta\}$. **Assumption 16**.
*(i) The mapping $(x, \theta)\mapsto J^\theta(x)$ is sufficiently smooth, so that all the derivatives required exist in the classical sense.* *(ii) For all $\theta \in \Theta$, $\varphi^\theta(X^x_\cdot)$ are square-integrable continuous processes, and the mappings $\theta\mapsto \|\varphi^\theta\|_{\mathbb{L}^2_{{\cal F}}([0,T])}$ are continuous, where $\varphi^\theta=J^\theta, (J^\theta)', (J^\theta)''$.* *(iii) There exists a continuous function $K(\cdot)>0$ such that $\|J^\theta\|_\infty \leq K(\theta)$. $\square$* In what follows we shall often drop the superscript $x$ from the processes $X^x$, $M^x$, etc., if there is no danger of confusion. Also, for practical purposes we shall consider a finite time horizon $[0,T]$, for an arbitrarily fixed and sufficiently large $T>0$. Denoting the stopping time $\tilde\tau_x =\tau^T_x:= \tau_x\wedge T$, by the optional sampling theorem we know that $\tilde M_t:=M_{\tilde\tau_x\wedge t}=M_{\tau_x\wedge t}$, for $t\in[0, T]$, is an $\tilde{\mathbb{F}}$-martingale on $[0,T]$, where $\tilde{\mathbb{F}}=\{{\cal F}_{\tilde\tau_x\wedge t}\}_{t\in[0, T]}$. Let us also denote $\tilde{M}^\theta_t:=M^\theta_{\tau_x\wedge t}$, $t\in[0,T]$. We now follow the idea of [@Zhou] to construct the so-called *Martingale Loss Function*.
For any $\theta\in\Theta$, consider the parametrized approximation of the process $M=M^x$: $$\begin{aligned} \label{Mth} M^\theta_t=M^{\theta, x}_t:=e^{-ct}J^\theta(X^x_t)+\int_0^t e^{-cs}r(X^x_s)ds, \qquad t\in[0,T].\end{aligned}$$ In light of the *Martingale Loss function* introduced in [@Zhou], we denote $$\begin{aligned} \label{ML} {ML}(\theta)\negthinspace=\negthinspace\frac{1}{2} \mathbb{E}\Big[\negthinspace\int_0^{\tilde\tau_x}\negthinspace\negthinspace|M_{\tau_x}\negthinspace\negthinspace-\negthinspace\tilde{M}_t^\theta|^2 dt\Big]\negthinspace=\negthinspace \frac{1}{2} \mathbb{E}\Big[\negthinspace\int_0^{\tilde\tau_x}\negthinspace\negthinspace\negthinspace\big|e^{-ct} J^\theta(X_t)\negthinspace-\negthinspace\negthinspace\int_{t}^{\tau_x}\negthinspace\negthinspace\negthinspace e^{-cs}r(X_s) ds\big| ^2 dt\Big].\end{aligned}$$ We should note that the last equality above indicates that the martingale loss function is actually independent of the function $J$, which is one of the main features of this algorithm. Furthermore, inspired by the *mean-squared* and *discounted mean-squared* value errors, we define $$\begin{aligned} \label{MSVE} \mbox{\it MSVE}(\theta) &=& \frac{1}{2} \mathbb{E}\Big[ \int_0^{\tilde\tau_x} |J^\theta(X_t) - J(X_t) |^2 dt\Big], \\ \label{DMSVE} \mbox{\it DMSVE}(\theta) &=& \frac{1}{2} \mathbb{E}\Big[ \int_0^{\tilde\tau_x} e^{-2ct} |J^\theta(X_t) - J(X_t) |^2 dt\Big]. \end{aligned}$$ The following result shows the connection between the minimizers of $ML(\cdot)$ and $DMSVE(\cdot)$. **Theorem 6**. *Assume that Assumption [Assumption 16](#ape1){reference-type="ref" reference="ape1"} is in force.
Then, it holds that $$\begin{aligned} \label{equivLoss} \arg \min_ {\theta \in \Theta} \mbox{\it ML}(\theta) = \arg \min_ {\theta \in \Theta} \mbox{\it DMSVE}(\theta).\end{aligned}$$* *Proof.* First, noting that $J(0)=J^\theta(0)=0$ and $X_{\tau_x}=0$, we see that $$\tilde{M}^\theta_t=M^\theta_{\tau_x}=M_{\tau_x}=\int_0^{\tau_x}e^{-cs}r(X_s)ds, \qquad t\in(\tilde{\tau}_x, T).$$ Here we use the convention that $(\tilde{\tau}_x, T)=\emptyset$ if $\tilde{\tau}_x=T$, in which case the identities become trivial. Consequently, by definition ([\[ML\]](#ML){reference-type="ref" reference="ML"}) and noting that $\tilde M^\theta_t=M^\theta_t$ for $t\in [0,\tilde\tau_x]$, we can write $$\begin{aligned} \label{2ML} \qquad 2\mbox{\it ML}(\theta)\negthinspace\negthinspace&\negthinspace\negthinspace=\negthinspace\negthinspace&\negthinspace\negthinspace\mathbb{E}\Big[ \int_0^{\tilde \tau_x} |M_{\tau_x} -\tilde M_t^\theta|^2 dt\Big]= \mathbb{E}\Big[ \int_0^{\tilde\tau_x} |M_{\tau_x} - M_t^\theta|^2 dt \Big] \\ \negthinspace\negthinspace&\negthinspace\negthinspace=\negthinspace\negthinspace&\negthinspace\negthinspace\mathbb{E}\Big[ \int_0^{\tilde\tau_x} \big[ |M_{\tau_x} - M_t|^2 + | M_t - M_t^\theta|^2 +2(M_{\tau_x} - M_t)( M_t - M_t^\theta) \big] dt\Big].
\nonumber\end{aligned}$$ Next, noting ([\[M\]](#M){reference-type="ref" reference="M"}) and ([\[Mth\]](#Mth){reference-type="ref" reference="Mth"}), we see that $$\begin{aligned} \mathbb{E}\Big[\negthinspace\int_0^{\tilde \tau_x}\negthinspace| M_t - M_t^\theta|^2 dt\Big] =\mathbb{E}\Big[\negthinspace\int_0^{\tilde\tau_x}\negthinspace\negthinspace e^{-2ct} |J(X_t) - J^\theta(X_t) |^2 dt\Big] = 2\text{DMSVE}(\theta).\end{aligned}$$ Also, applying optional sampling we see that $$\begin{aligned} &&\mathbb{E}\Big[\negthinspace\int_0^{\tilde\tau_x} \negthinspace\negthinspace(M_{\tau_x}\negthinspace-\negthinspace M_t)( M_t \negthinspace-\negthinspace M_t^\theta)dt\Big] = \int_0^T\negthinspace\negthinspace\mathbb{E}\big[ \mathbb{E}\big[({M}_{\tau_x}\negthinspace-\negthinspace{M}_t) | {\cal F}_{ t} \big] {\bf 1}_{\{\tau_x\ge t\}}( {M}_t - {M}_t^\theta)\big] dt \nonumber \\ &&= \mathbb{E}\Big[ \int_0^T \mathbb{E}\big[({M}_{\tau_x} - {M}_t) | {\cal F}_{t\wedge \tau_x} \big] {\bf 1}_{\{\tau_x\ge t\}}(\tilde{M}_t - \tilde{M}_t^\theta) dt\Big] =0.\end{aligned}$$ Combining the above, we see that ([\[2ML\]](#2ML){reference-type="ref" reference="2ML"}) becomes $2ML(\theta)= 2\text{DMSVE}(\theta)+\mathbb{E}\big[ \int_0^{\tilde\tau_x} |M_{\tau_x} - M_t|^2 dt\big]$. Since $\mathbb{E}[ \int_0^{\tilde\tau_x} |M_{\tau_x} - M_t|^2 dt]$ is independent of $\theta$, we conclude the result. $\square$ **Remark 17**. * Since the minimizers of MSVE$(\theta)$ and DMSVE$(\theta)$ are obviously identical, Theorem [Theorem 6](#equiv){reference-type="ref" reference="equiv"} suggests that if $\theta^*$ is a minimizer of any one of $ML(\cdot)$, $MSVE(\cdot)$, $DMSVE(\cdot)$, then $J^{\theta^*}$ would be an acceptable approximation of $J$. In the rest of the section we shall therefore focus on the identification of $\theta^*$.
$\square$* We now propose an algorithm that provides a numerical approximation of the policy evaluation $J(\cdot)$ (or equivalently the martingale $M^x$) by discretizing the integrals in the loss functional $ML(\cdot)$. To this end, let $T>0$ be an arbitrary but fixed time horizon, consider the uniform partition $0=t_0<\cdots<t_n=T$, and denote $\Delta t=t_i-t_{i-1}$, $i=1, \cdots, n$. Now for $x\in\mathbb{R}_+$, we define $K_x = \min \{l \in \mathbb{N}: \exists t\in[l\Delta t, (l+1)\Delta t): X^x_{ t} <0\}$ and $\lfloor \tau_x \rfloor:=K_x \Delta t$, so that $\tau_x\in [K_x\Delta t, (K_x+1)\Delta t)$. Finally, we define $N_x= \min\{ K_x, n\}$. Clearly, both $K_x$ and $N_x$ are integer-valued random variables, and we shall often drop the subscript $x$ if there is no danger of confusion. In light of ([\[ML\]](#ML){reference-type="ref" reference="ML"}), let us define $$\begin{aligned} \label{MLDt} \mbox{\it ML}_{\Delta t}(\theta) = \frac{1}{2} \mathbb{E}\Big[ \sum_{i=0}^{N-1} \Big| e^{-c{t_{i}}}J^\theta(X_{t_{i}}) - \sum_{j=i}^{K-1} e^{-c{{t_j}} } r(X_{t_j})\Delta t \Big|^2 \Delta t \Big] =: \frac{1}{2} \mathbb{E}\Big[ \sum_{i=0}^{N-1}|\Delta \tilde M ^ \theta _{t_{i}}|^2 \Delta t \Big],\nonumber\end{aligned}$$ where $\Delta \tilde M ^ \theta _{t_i}$, $i=0, \cdots, n-1$, are defined in the obvious way. Furthermore, for $t\in [0, \tau_x]$, we define $m(t,\theta) := -e^{-ct} J^\theta (X_t)+ \int_t^{\tau_x} e^{-cs}r(X_s)ds$. Now note that $\{{\tau_x}\ge T\} =\{\lfloor \tau_x \rfloor < T \le \tau_x \} \cup \{\lfloor \tau_x \rfloor\ge T\}$ and $\{\lfloor \tau_x \rfloor \ge T\} = \{N=n\}$.
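To make the discretization concrete, the quantities $K_x$, $N_x$ and a single-path estimate of $2\,\mbox{\it ML}_{\Delta t}(\theta)$ can be computed as in the following sketch. The interface is ours (the paper does not fix one): `J_theta` and `r` are hypothetical callables for $J^\theta$ and the running reward, and `xs` is one sampled grid path of $X$.

```python
import math

def ruin_index(xs, n):
    """K = first grid index at which the sampled path goes below zero
    (len(xs) - 1 if ruin is never observed, a truncation we impose for
    this sketch), and N = min(K, n)."""
    K = next((l for l, x in enumerate(xs) if x < 0), len(xs) - 1)
    return K, min(K, n)

def two_ml_loss_path(J_theta, r, xs, c, dt, n):
    """Single-path estimate of 2*ML_{Delta t}(theta):
    sum_{i<N} |e^{-c t_i} J^theta(X_{t_i})
               - sum_{i<=j<K} e^{-c t_j} r(X_{t_j}) dt|^2 dt."""
    K, N = ruin_index(xs, n)
    rew = [math.exp(-c * j * dt) * r(xs[j]) * dt for j in range(K)]
    tail, total = sum(rew), 0.0       # tail = sum_{j>=i} rew[j]
    for i in range(N):
        m_i = math.exp(-c * i * dt) * J_theta(xs[i]) - tail
        total += m_i * m_i * dt
        tail -= rew[i]                # advance the running tail sum
    return total
```

Averaging `two_ml_loss_path` over many simulated paths gives a Monte Carlo estimate of $2\,\mbox{\it ML}_{\Delta t}(\theta)$.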
Denoting $\tilde F_1 = \mathbb{E}\Big[ \int_0^{T} |m(t,\theta)|^2 dt {\bf 1}_{\{\lfloor \tau_x \rfloor < T \le \tau_x \}} \Big]$, we have $$\begin{aligned} \label{F1} \mathbb{E}\Big[ \int_0^{T} |m(t,\theta)|^2 dt {\bf 1}_{\{{\tau_x}\ge T\}} \Big] &=& \mathbb{E}\Big[ \int_0^{T} |m(t,\theta)|^2 dt \big( {\bf 1}_{\{\lfloor \tau_x \rfloor < T \le \tau_x \}} +{\bf 1}_{\{\lfloor \tau_x \rfloor\ge T\}} \big) \Big]\nonumber\\ &=& \tilde F_1 + \mathbb{E}\Big[ \sum_{i=0}^{N-1} \int_{t_i}^{t_{i+1}} |m(t,\theta)|^2 dt {\bf 1}_{\{N=n\} }\Big].\end{aligned}$$ Since $\{{\lfloor \tau_x \rfloor } < T\}= \{N<n\} =\{{\tau_x}< T\}\cup \{\lfloor \tau_x \rfloor < T \le \tau_x\}$, denoting $\tilde F_2 = \mathbb{E}\Big[ \int_0^{\tau_x} |m(t,\theta)|^2 dt {\bf 1}_{\{\lfloor \tau_x \rfloor < T \le \tau_x \}} \Big]$ and $\tilde F_3 = \mathbb{E}\Big[ \int_{ \lfloor \tau_x \rfloor}^{\tau_x} |m(t,\theta)|^2 dt {\bf 1}_{ \{\lfloor \tau_x \rfloor < T \} }\Big]$, we obtain $$\begin{aligned} \label{F23} \mathbb{E}\Big[ \int_0^{\tau_x} |m(t,\theta)|^2 dt {\bf 1}_{\{{\tau_x}< T\}}\Big]&=&\mathbb{E}\Big[ \int_0^{ \lfloor \tau_x \rfloor} |m(t,\theta)|^2 dt {\bf 1}_{\{N < n\} }\Big] + \tilde F_3 - \tilde F_2 \nonumber \\ &=& \mathbb{E}\Big[ \sum_{i=0}^{N-1} \int_{t_i}^{t_{i+1}}|m(t,\theta)|^2 dt {\bf 1}_{\{N < n\} }\Big]+ \tilde F_3 - \tilde F_2. \nonumber\end{aligned}$$ Combining [\[F1\]](#F1){reference-type="eqref" reference="F1"} and [\[F23\]](#F23){reference-type="eqref" reference="F23"}, similar to [\[2ML\]](#2ML){reference-type="eqref" reference="2ML"} we can now rewrite [\[ML\]](#ML){reference-type="eqref" reference="ML"} as $$\begin{aligned} \label{F123} 2\mbox{\it ML}(\theta) &=& \mathbb{E}\Big[ \int_0^{\tilde\tau_x} |m(t,\theta)|^2 dt \Big] = \mathbb{E}\Big[ \int_0^{T} |m(t,\theta)|^2 dt\, {\bf 1}_{\{{\tau_x}\ge T\}}\Big] + \mathbb{E}\Big[ \int_0^{\tau_x} |m(t,\theta)|^2 dt \big( {\bf 1}_{\{{\tau_x}< T\}}
\big) \Big]\nonumber\\ &=&\mathbb{E}\Big[ \sum_{i=0}^{N-1} \int_{t_i}^{t_{i+1}}|m(t,\theta)|^2 dt \big( {\bf 1}_{\{N=n\}} + {\bf 1}_{\{N < n\} }\big) \Big]+ \tilde F_1 - \tilde F_2 + \tilde F_3 \nonumber\\ &=&\mathbb{E}\Big[ \sum_{i=0}^{N-1} \int_{t_i}^{t_{i+1}} |m(t,\theta)|^2 dt \Big] + \tilde F_1 - \tilde F_2 +\tilde F_3.\end{aligned}$$ We are now ready to state the main result of this section. **Theorem 7**. *Let Assumptions [Proposition 13](#assumAP){reference-type="ref" reference="assumAP"} and [Assumption 16](#ape1){reference-type="ref" reference="ape1"} be in force. Then it holds that $$\lim_{\Delta t\to 0}\mbox{\it ML}_{\Delta t}(\theta)=\mbox{\it ML}(\theta), \quad\mbox{\rm uniformly in $\theta$ on compacta.}$$* *Proof.* Fix a partition $0=t_0<\cdots<t_n=T$. By ([\[F123\]](#F123){reference-type="ref" reference="F123"}) and ([\[MLDt\]](#MLDt){reference-type="ref" reference="MLDt"}) we have, for $\theta\in\Theta$, $$\begin{aligned} \label{DMLDt} 2|\mbox{\it ML}(\theta)- \mbox{\it ML}_{\Delta t}(\theta)| &=& \bigg|\mathbb{E}\Big[ \sum_{i=0}^{N-1} \int_{t_i}^{t_{i+1}} |m(t,\theta)|^2 dt -\sum_{i=0}^{N-1} |\Delta \tilde M ^\theta_ {t_i}|^2 \Delta t \Big] + \tilde F_1 - \tilde F_2 + \tilde F_3 \bigg |\nonumber\\ &\le &\mathbb{E}\Big[\sum_{i=0}^{N-1}\int_{t_i}^{t_{i+1}}\big||m(t,\theta)|^2-|\Delta\tilde{M}_{t_i}^\theta|^2\big|dt\Big]+\sum_{i=1}^3 |\tilde{F}_i| .\end{aligned}$$ Let us first estimate $|\tilde{F}_i|$, $i=1,2,3$. By Assumption [Assumption 16](#ape1){reference-type="ref" reference="ape1"}, we see that $$|m(t, \theta)|\le |J^\theta(X_t)|+\int_0^{\infty} e^{-cs}|r(X_s)|ds\le K(\theta)+\frac{R}{c}=:C_1(\theta), \qquad\text{ for } t>0, ~\mathbb{P}\hbox{\rm-a.s.{ }},$$ where $C_1(\cdot)$ is a continuous function and $R$ is the bound of $r(\cdot)$.
Thus we have $$\begin{aligned} \label{limF3} |\tilde F_3| = \mathbb{E}\Big[ \int_{ \lfloor \tau_x \rfloor}^{\tau_x} |m(t,\theta)|^2 dt {\bf 1}_{\{N <n\} }\Big] \leq |C_1(\theta)|^2 \Delta t.\end{aligned}$$ Next, note that $\lfloor \tau_x \rfloor \leq T$ implies $\tau_x \leq \lfloor \tau_x \rfloor + \Delta t \le T + \Delta t$; since we are considering the limit $\Delta t\to 0$, we may assume $\Delta t<1$. Thus by the definitions of $\tilde F_1$ and $\tilde F_2$ we have $$\begin{aligned} \label{limF12} |\tilde F_1| + |\tilde F_2| &\leq & 2 \mathbb{E}\Big[ \int_0^{T+1} |m(t,\theta)|^2 dt {\bf 1}_{\{\lfloor \tau_x \rfloor \leq T \le \tau_x \}} \Big] \leq 2|C_1(\theta)|^2 (T+1) \mathbb{P}\{ \lfloor \tau_x \rfloor \leq T \le \tau_x \}\nonumber\\ & \le & 2|C_1(\theta)|^2 (T+1) \mathbb{P}\{ |T-\tau_x| \le \Delta t\}.\end{aligned}$$ Since $X$ is a diffusion, one can easily check that $\lim_{ \Delta t \to 0} \mathbb{P}\{ |T-\tau_x| \le \Delta t\}= \mathbb{P}\{ T = \tau_x \} \le \mathbb{P}\{X_T=0\}= 0$.
Furthermore, noting that $|C_1(\theta)|^2$ is uniformly bounded for $\theta$ in any compact set, from [\[limF3\]](#limF3){reference-type="eqref" reference="limF3"} and [\[limF12\]](#limF12){reference-type="eqref" reference="limF12"} we conclude that $$\begin{aligned} \label{limF123} \lim_{\Delta t\to 0}(|\tilde F_1|+|\tilde F_2|+|\tilde F_3|)=0, \qquad\mbox{uniformly in $\theta$ on compacta.}\end{aligned}$$ It remains to show that $$\begin{aligned} \label{limDMth} \lim_{\Delta t\to 0}\mathbb{E}\Big[\sum_{i=0}^{N-1}\int_{t_i}^{t_{i+1}}\big||m(t,\theta)|^2-|\Delta\tilde{M}_{t_i}^\theta|^2\big|dt\Big]=0,\end{aligned}$$ uniformly in $\theta$ on compacta. To this end, we first note that $$\begin{aligned} \label{EDt12} \mathbb{E}\Big[\sum_{i=0}^{N-1} \int_{t_i}^{t_{i+1}}\big| |m(t,\theta)|^2 - |\Delta \tilde M^\theta_ {t_i}|^2\big|dt \Big]\le \tilde E^{\Delta t}_1+\tilde E^{\Delta t}_2,\end{aligned}$$ where $\tilde E^{\Delta t}_1 := \mathbb{E}\Big[ \sum_{i=0}^{N-1} \int_{t_i}^{t_{i+1}}\big||m(t,\theta)|^2 - |m(t_i,\theta)|^2\big|dt \Big]$ and $\tilde E^{\Delta t}_2 := \mathbb{E}\Big[ \sum_{i=0}^{N-1}\Big| |m(t_i,\theta)|^2 -|\Delta \tilde M ^\theta_ {t_i}|^2\Big|\Delta t \Big]$. From definition ([\[MLDt\]](#MLDt){reference-type="ref" reference="MLDt"}) we see that under Assumption [Assumption 16](#ape1){reference-type="ref" reference="ape1"} it holds that $|\Delta\tilde{M}^\theta_{t_i}|\le C_1(\theta)$, $i=0, \cdots, n-1$.
Furthermore, we denote $$\begin{aligned} D_i &:=& m(t_i, \theta) - \Delta \tilde M ^\theta_ {t_i} = \int_{t_i}^{\tau_x} e^{-cs} r(X_s) ds - \sum_{j=i}^{K-1} e^{-ct_j}r(X_{t_j}) \Delta t \\ &=& \sum_{j=i}^{K-1}\int_{t_j}^{t_{j+1}} [e^{-cs}r(X_s)- e^{-ct_j} r(X_{t_j})]ds+\int_{\lfloor \tau_x\rfloor}^{\tau_x} e^{-cs}r(X_s)ds.\end{aligned}$$ Since $r(\cdot)$ is bounded, we see that $|\mathbb{E}(\int_{\lfloor\tau_x \rfloor}^{\tau_x } e^{-cs} r(X_s) ds)| \leq K_1 \Delta t$ for some constant $K_1>0$. Then it holds that $$\begin{aligned} \mathbb{E}|D_i| \le \mathbb{E}\Big[\sum_{j=i}^{K-1} \int_{t_j}^{t_{j+1}} |\tilde r_s- \tilde r_{t_{j}}| ds \Big] + K_1 \Delta t \le \mathbb{E}\Big[\sum_{j=i}^{\infty} \int_{t_j}^{t_{j+1}} |\tilde r_s- \tilde r_{t_{j}}| ds \Big] + K_1 \Delta t, \end{aligned}$$ where $\tilde r_t:=e^{-ct}r(X_t)$, $t\ge 0$, is a bounded and continuous process. Now for any $\varepsilon> 0$, choose $\tilde M \in \mathbb{Z}^+$ so that $e ^ {-ct} \le \frac{\varepsilon c}{4R}$ for $t \ge \tilde M$, and define $\rho^{\tilde M}_2(\tilde r,\Delta t):=\sup_{|t-s|\le \Delta t, t, s \in [0, \tilde M] }\| \tilde r_t-\tilde r_s\|_{\mathbb{L}^2(\Omega)}$. Then we have $$\begin{aligned} &&\mathbb{E}\Big[\sum_{j=i}^{\infty} \int_{t_j}^{t_{j+1}} |\tilde r_s- \tilde r_{t_{j}}| ds \Big] \le \sum_{j=i}^{\tilde M-1} \int_{t_j}^{t_{j+1}} \mathbb{E}|\tilde r_s- \tilde r_{t_{j}}| ds + \sum_{j=\tilde M}^{\infty} \int_{t_j}^{t_{j+1}} \mathbb{E}|\tilde r_s- \tilde r_{t_{j}}| ds \\ &\le& \sum_{j=i}^{\tilde M-1} \int_{t_j}^{t_{j+1}} \rho^{\tilde M}_2(\tilde r,\Delta t) ds + 4R \int_{\tilde M}^{\infty} e^{-cs} ds= \Delta t \tilde M \rho^{\tilde M}_2(\tilde r,\Delta t) + 4R \frac{ e^{-c\tilde M}}{c} \\ &\le& \Delta t \tilde M \rho^{\tilde M}_2(\tilde r,\Delta t) + \varepsilon.\end{aligned}$$ Sending $\Delta t\to 0$ we obtain that $\mathop{\overline{\rm lim}}_{\Delta t \to 0} \mathbb{E}\Big[\sum_{j=i}^{\infty} \int_{t_j}^{t_{j+1}} |\tilde r_s- \tilde r_{t_{j}}| ds \Big] \le \varepsilon$.
Since $\varepsilon> 0$ is arbitrary, we conclude that $\mathbb{E}\Big[\sum_{j=i}^{\infty} \int_{t_j}^{t_{j+1}} |\tilde r_s- \tilde r_{t_{j}}| ds \Big] \to 0$ as $\Delta t \to 0$. Since the argument above is uniform in $i$, it follows that $\sup_{i\ge 0}\mathbb{E}|D_i| \to 0$ as $\Delta t \to 0$. Consequently, we have $$\begin{aligned} \label{limE2} \tilde E^{\Delta t}_2 &=& \mathbb{E}\Big[ \sum_{i=0}^{N-1} |m(t_i,\theta)+\Delta \tilde M ^\theta_ {t_i}|\, |D_i| \Delta t\Big] \le 2\Delta t|C_1(\theta)| \mathbb{E}\Big[ \sum_{i=0}^{N-1} |D_i|\Big]\\ &\leq& 2n\Delta t|C_1(\theta)|\sup_{i\ge 0 } \mathbb{E}|D_i|\to 0, \quad\mbox{as $\Delta t\to0$.} \nonumber \end{aligned}$$ Since $C_1(\cdot)$ is continuous in $\theta$, we see that the convergence above is uniform in $\theta$ on compacta. Similarly, noting that by Assumption [Assumption 16](#ape1){reference-type="ref" reference="ape1"} the process $m(\cdot, \theta)$ is also a square-integrable continuous process, uniformly in $\theta$, we have $$\begin{aligned} \label{limE1} \tilde E^{\Delta t}_1 &\le & \mathbb{E}\Big[\sum_{i=0}^{N-1} \int_{t_i}^{t_{i+1}}\big| |m(t,\theta)|^2 - |m(t_i,\theta)|^2\big|dt \Big]\nonumber\\ &\le& 2C_1(\theta)\mathbb{E}\Big[\sum_{i=0}^{n-1} \int_{t_i}^{t_{i+1}} |m(t,\theta)- m(t_i,\theta)| dt \Big]\\ &\leq & 2 C_1(\theta) \sum_{i=0}^{n-1} \int_{t_i}^{t_{i+1}} \rho(m(\cdot, \theta), \Delta t) dt = 2 C_1(\theta) T\rho (m(\cdot, \theta), \Delta t), \nonumber \end{aligned}$$ where $\rho(m(\cdot, \theta), \Delta t) = \sup_{|t-s|\le \Delta t, t, s
\in [0, T] }\| m(t,\theta)-m(s,\theta) \|_{\mathbb{L}^2(\Omega)}$ is the modulus of continuity of $m(\cdot, \theta)$ in $\mathbb{L}^2(\Omega)$. Therefore $\rho(m(\cdot, \theta), \Delta t)\to 0$ as $\Delta t\to 0$, uniformly in $\theta$ on compacta. Finally, combining ([\[EDt12\]](#EDt12){reference-type="ref" reference="EDt12"})--([\[limE1\]](#limE1){reference-type="ref" reference="limE1"}) we obtain ([\[limDMth\]](#limDMth){reference-type="ref" reference="limDMth"}). This, together with ([\[limF123\]](#limF123){reference-type="ref" reference="limF123"}) as well as ([\[DMLDt\]](#DMLDt){reference-type="ref" reference="DMLDt"}), proves the theorem. $\square$ Now let us denote $h=\Delta t$, and consider the functions $$\begin{aligned} f(\theta):=\mbox{\it ML}(\theta), \qquad f_h(\theta):=\mbox{\it ML}_{h}(\theta), \qquad r_{h}(\theta):=\mbox{\it ML}_h(\theta)-\mbox{\it ML}(\theta).\end{aligned}$$ Then $f_h(\theta) = f(\theta) + r_h(\theta)$, and by Assumption [Assumption 16](#ape1){reference-type="ref" reference="ape1"} we can easily check that the mappings $\theta\mapsto f_{h}(\theta)$ and $\theta\mapsto r_h(\theta)$ are continuous. Applying Theorem [Theorem 7](#conv){reference-type="ref" reference="conv"} we see that $r_h(\theta) \to 0$, uniformly in $\theta$ on compacta, as $h \to 0$. Note that if $\Theta$ is compact, then for any $h>0$ there exists $\theta^*_{h} \in \arg \min_{\theta \in \Theta} f_h(\theta)$. In general, we have the following corollary of Theorem [Theorem 7](#conv){reference-type="ref" reference="conv"}. **Corollary 18**. *Assume that all assumptions in Theorem [Theorem 7](#conv){reference-type="ref" reference="conv"} are in force.
If there exists a sequence $\{h_n\}_{n \ge 0} \searrow 0$ such that $\Theta_{n}:= \arg \min_{\theta \in \Theta} f_{h_n}(\theta) \neq\emptyset$, then any limit point $\theta^*$ of the sequence $\{\theta^*_n\}_{\theta^*_n\in\Theta_n}$ must satisfy $\theta^* \in \arg \min_{\theta \in \Theta} f(\theta)$.* *Proof.* This is a direct consequence of [@Zhou Lemma 1.1]. $\square$ **Remark 19**. * We should note that, by Remark [Remark 17](#remark5){reference-type="ref" reference="remark5"}, the set of minimizers of the martingale loss function *ML*$(\theta)$ is the same as that of DMSVE$(\theta)$. Thus Corollary [Corollary 18](#convth){reference-type="ref" reference="convth"} indicates that we have a reasonable approach for approximating the unknown function $J$. Indeed, if $\{ \theta^*_n\}$ has a convergent subsequence that converges to some $\theta^*\in\Theta$, then $J^{\theta^*}$ is the best approximation of $J$ with respect to either the MSVE or the DMSVE measure. $\square$* To end this section we discuss how to fulfill our last task: finding the optimal parameter $\theta^*$. There are usually two learning methods for this task in RL, often referred to as *online* and *batch learning*, respectively. Roughly speaking, batch learning methods use multiple sample trajectories of $X$ over a given finite time horizon $[0,T]$ to update the parameter $\theta$ at each step, whereas in online learning one observes only a single sample trajectory of $X$ and continuously updates the parameter $\theta$ until it converges. Clearly, online learning is particularly suitable for infinite horizon problems, whereas the ML function is by definition better suited for batch learning. Although our problem is by nature an infinite horizon one, we shall first create a batch learning algorithm via the ML function by restricting ourselves to an arbitrarily fixed finite horizon $T>0$, so as to convert it to a finite time horizon problem.
To this end, we note that $$\begin{aligned} 2 ML_{\Delta t }(\theta) & =&\mathbb{E}\Big[ \sum_{i=0}^{N-1} \Big(e^{-c{t_{i}}}J^\theta(X_{t_{i}}) - \sum_{j=i}^{K-1} e^{-c{{t_j}} } r(X_{t_j})\Delta t \Big)^2 \Delta t \Big]. \end{aligned}$$ However, since $K_x$ may be unbounded, we shall consider instead the function $$\begin{aligned} 2 \widetilde{ML}_{\Delta t }(\theta) & =&\mathbb{E}\Big[ \sum_{i=0}^{N-1} \Big(e^{-c{t_{i}}}J^\theta(X_{t_{i}}) - \sum_{j=i}^{N-1} e^{-c{{t_j}} } r(X_{t_j})\Delta t \Big)^2 \Delta t \Big].\end{aligned}$$ We observe that the difference satisfies $$\begin{aligned} && |2 ML_{\Delta t }(\theta) - 2 \widetilde{ML}_{\Delta t }(\theta)| \\ &=&\Big|\mathbb{E}\Big[ \sum_{i=0}^{N-1} \Big[ \Big(e^{-c{t_i}}J^\theta(X_{t_i}) - \sum_{j=i}^{N-1} \tilde r_{t_j}\Delta t \Big)^2 - \Big(e^{-c{t_i}} J^\theta(X_{t_{i}}) - \sum_{j=i}^{K-1} \tilde r_{t_j}\Delta t \Big)^2\Big]\Delta t\Big]\Big| \\ &=& \Big |\mathbb{E}\Big[ \sum_{i=0}^{N-1} \Big(\sum_{j=N}^{K-1} \tilde r_{t_j}\Delta t \Big) \Big( 2e^{-c{t_i}} J^\theta(X_{t_{i}})-\sum_{j=i}^{N-1} \tilde r_{t_j}\Delta t - \sum_{j=i}^{K-1} \tilde r_{t_j}\Delta t \Big)\Delta t \Big]\Big| \\ &\leq& K(\theta)\Delta t\, \mathbb{E}\Big| \sum_{i=0}^{N-1} \Big(\sum_{j=N}^{K-1} \tilde r_{t_j}\Delta t \Big) \Big| \le \widetilde K(\theta) \Delta t\,\mathbb{E}\Big| \sum_{i=0}^{N-1} \Big(\sum_{j=N}^{K-1} e^{-c{t_j} } \Delta t \Big) \Big| \le \widetilde K(\theta) e^{-cT} T\Delta t,\end{aligned}$$ for some continuous function $\widetilde K(\theta)$. Thus if $\Theta$ is compact, for $T$ large enough or $\Delta t$ small enough, the difference between $ML_{\Delta t }(\theta)$ and $\widetilde{ML}_{\Delta t }(\theta)$ is negligible.
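The truncated loss $\widetilde{ML}_{\Delta t}$ is then minimized by stochastic gradient descent. The following generic sketch is ours, not the paper's algorithm: `grad` stands for a (sampled) gradient supplied by the caller, and the learning-rate schedule $\alpha_{(k)}=\alpha_0/(k+1)^{3/4}$ is one illustrative choice satisfying $\sum_k\alpha_{(k)}=\infty$ and $\sum_k\alpha_{(k)}^2<\infty$.

```python
def sgd_minimize(grad, theta0, alpha0=0.1, n_iter=2000):
    """Iterate theta_{k+1} = theta_k - alpha_k * grad(theta_k) with
    alpha_k = alpha0/(k+1)**0.75: sum(alpha_k) diverges while
    sum(alpha_k**2) converges, the standard step-size conditions."""
    theta = list(theta0)
    for k in range(n_iter):
        alpha = alpha0 / (k + 1) ** 0.75
        g = grad(theta)
        theta = [t - alpha * gi for t, gi in zip(theta, g)]
    return theta

# toy sanity check: minimise f(theta) = (theta_1 - 1)^2 + (theta_2 + 2)^2
theta_star = sgd_minimize(lambda th: [2 * (th[0] - 1), 2 * (th[1] + 2)], [0.0, 0.0])
```

With a noisy `grad` (one simulated trajectory per call), the same loop becomes the batch-learning iteration used below.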
Furthermore, noting that $$\begin{aligned} 2 \widetilde{ML}_{\Delta t }(\theta) & =&\mathbb{E}^{\mathbb{Q}}\Big[ \sum_{i=0}^{N-1} \Big(e^{-c{t_{i}}}J^\theta(\tilde X_{t_{i}}) - \sum_{j=i}^{N-1} e^{-c{{t_j}} } (a^{\pmb \pi}_{t_j} - \lambda\ln \pi (a^{\pmb \pi}_{t_j},\tilde X_{t_{j}}))\Delta t \Big)^2 \Delta t \Big],\end{aligned}$$ we can now follow the method of *Stochastic Gradient Descent* (SGD) to minimize $\widetilde{ML}_{\Delta t}( \theta)$ and obtain the updating rule $$\begin{aligned} \label{MLA} \theta^{(k+1)} \leftarrow \theta^{(k)} - \alpha_{(k)} \nabla_\theta\widetilde{ML}_{\Delta t}(\theta^{(k)}), \end{aligned}$$ where $\alpha_{(k)}$ denotes the learning rate for the $k^{\text{th}}$ iteration (using the $k^{\text{th}}$ simulated sample trajectory). Here, in light of the literature on the convergence of SGD, $\alpha_{(k)}$ is chosen so that $\sum_{k=0}^\infty \alpha_{(k)} = \infty$ and $\sum_{k=0}^\infty \alpha_{(k)}^2 < \infty$, to help guarantee the convergence of the algorithm.

# Temporal Difference (TD) Based Online Learning

In this section we consider another policy evaluation method utilizing the parametric family $\{J^\theta\}_{\theta\in \Theta}$. The starting point of this method is Proposition [Proposition 15](#tildeM){reference-type="ref" reference="tildeM"}, which states that the best approximation $J^\theta$ is one whose corresponding approximating process $M^\theta$ defined by ([\[Mth\]](#Mth){reference-type="ref" reference="Mth"}) is a martingale (in which case $J^\theta=J$). Next, we recall the following simple fact (see, e.g., [@Zhou] for a proof). **Proposition 20**. *An Itô process $M^\theta\in \mathbb{L}^2_{\mathbb{F}}([0,T])$ is a martingale if and only if $$\begin{aligned} \label{moc} \mathbb{E}\Big[ \int_0^T \xi_t dM^\theta_t \Big] =0 , \quad\mbox{for any} ~\xi \in \mathbb{L}^2_{\mathbb{F}}([0,T];M^\theta).\end{aligned}$$ The processes $\xi \in \mathbb{L}^2_{\mathbb{F}}([0,T];M^\theta)$ are called test functions.
$\square$* Proposition [Proposition 20](#ocp){reference-type="ref" reference="ocp"} suggests that a reasonable approach for approximating the optimal $\theta^*$ could be to solve the martingale orthogonality condition ([\[moc\]](#moc){reference-type="ref" reference="moc"}). However, since ([\[moc\]](#moc){reference-type="ref" reference="moc"}) involves infinitely many equations, for numerical approximations we should choose only a finite number of test functions, often referred to as *moment conditions*. There are many ways to choose test functions. In the finite horizon case, [@Zhou] proposed some algorithms for solving equation [\[moc\]](#moc){reference-type="eqref" reference="moc"} with certain test functions. By using the well-known Robbins-Monro stochastic approximation (1951), they suggested *continuous* analogs of the well-known discrete time Temporal Difference (TD) algorithms, such as the TD$(\gamma)$, $\gamma \in [0,1]$, method and the (linear) least square TD(0) (or LSTD$(0)$) method, which are, for obvious reasons, often referred to as the CTD$(\gamma)$, $\gamma \in [0,1]$, method and the CLSTD$(0)$ method, respectively. We should note that although our problem is essentially an infinite horizon one, we could consider a sufficiently large truncated time horizon $[0,T]$, as we did in the previous section, so that offline CTD methods similar to [@Zhou] can also be applied. However, in what follows we shall focus only on an online version of the CTD method that is more suitable for the infinite horizon case.
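As a small illustration, the expectation in ([\[moc\]](#moc){reference-type="ref" reference="moc"}) for one chosen test process can be estimated by Monte Carlo over sampled paths; the interface below is our own sketch (the paper does not fix one):

```python
def moment_condition(xi_paths, M_paths):
    """Monte Carlo estimate of E[ sum_i xi_{t_i} (M^theta_{t_{i+1}} - M^theta_{t_i}) ],
    the discretized orthogonality condition for a single test process xi.
    Each xi path has one fewer entry than its corresponding M path."""
    total = 0.0
    for xi, M in zip(xi_paths, M_paths):
        total += sum(x * (M[i + 1] - M[i]) for i, x in enumerate(xi))
    return total / len(xi_paths)
```

Driving a finite family of such estimates to zero in $\theta$ is precisely the moment-condition approach described above.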
We begin by recalling the following fact: $$\begin{aligned} \label{dMth} \mathbb{E}[dM^\theta_t] & =& e^{-ct}\mathbb{E}[ dJ^\theta(X_t) - c J^\theta(X_t) dt + r_t dt] \\ &=& e^{-ct} \mathbb{E}^{\mathbb{Q}} [ dJ^\theta(\tilde X_t) - c J^\theta(\tilde X_t) dt +( a_t - \lambda\ln \pmb \pi (a_t ,\tilde X_t))dt], \nonumber\end{aligned}$$ where $\mathbb{Q}$ is the probability measure defined in Remark [\[remark2.0\]](#remark2.0){reference-type="ref" reference="remark2.0"}, and $\tilde X$ is the trajectory corresponding to the action $a=\{a_t\}$ "sampled" from the policy distribution $\{\pmb \pi (\cdot,\tilde X_{t} )\}$. Now let $t_0=0$ and $t_{i+1}=t_i+\Delta t$, $i=0,1,2,\cdots$, be a sequence of discrete time points, and $a_{t_i}$ the action sampled at time $t_i$. Denote $$\left\{\begin{array}{lll} \Delta J^\theta(\tilde X_{t_i}):=J^\theta(\tilde X_{t_{i+1}})-J^\theta(\tilde X_{t_i});\\ \Delta_i := \Delta J^\theta(\tilde X_{t_i})+ [-cJ^\theta(\tilde X_{t_i})+ (a_{t_i}- \lambda\ln \pmb \pi (a_{t_i} ,\tilde X_{t_i} ))] \Delta t, \end{array}\right. \quad i=0,1,2, \cdots.$$ By the same argument as in [@Zhou], we have the discrete time approximation of ([\[dMth\]](#dMth){reference-type="ref" reference="dMth"}): $$\begin{aligned} \label{DMth} \qquad M_{t_{i+1}}^\theta- M_{t_i}^\theta\approx e^{-ct_i}\big\{\Delta J^\theta(\tilde X_{t_i})-[cJ^\theta(\tilde X_{t_i})-(a_{t_i}-\lambda\ln \pmb \pi (a_{t_i},\tilde X_{t_i} ))]\Delta t \big\}= e^{-ct_i}\Delta_i.\end{aligned}$$ In what follows we summarize the updating rules for the CTD($\gamma$) method using [\[DMth\]](#DMth){reference-type="eqref" reference="DMth"}. **CTD(0)** ($\gamma=0$).
In this case we let $\xi_t = \nabla_\theta J^\theta(\tilde X_t)$, with the updating rule $$\begin{aligned} \label{CTD0} \theta^{(i+1)} =\theta^{(i)} + \alpha_{(i)} \big( \nabla_\theta J^{\theta^{(i)}}(\tilde X_{t_i}) \big) e^{-ct_i}\Delta_i.\end{aligned}$$ **CTD($\gamma$)** ($\gamma\in(0,1]$). In this case we choose $\xi_t = \int_0^t \gamma^{t-s}\nabla_\theta J^\theta(\tilde X_s) ds$, with the updating rule $$\begin{aligned} \label{CTDg} \theta^{(i+1)} = \theta^{(i)}+ \alpha_{(i)} \Big[\sum_{j=1}^{i} \gamma^{\Delta t (i-j)} \big( \nabla_\theta J^{\theta^{(i)}}(\tilde X_{t_j})\big) \Delta t \Big] e^{-ct_i} \Delta_i . \end{aligned}$$ **Remark 21**. *(i) We note that the updating rule ([\[CTD0\]](#CTD0){reference-type="ref" reference="CTD0"}) can be viewed as the special case of ([\[CTDg\]](#CTDg){reference-type="ref" reference="CTDg"}) with $\gamma=0$ if we make the convention that $0^\alpha:={\bf 1}_{\{\alpha=0\}}$, for $\alpha\ge 0$.* *(ii) Although we are considering the infinite horizon case, in practice, in order to prevent an infinite loop, one always has to stop the iteration at a finite time. In other words, for both CTD(0) and CTD($\gamma$), we assume that $t_n = T$ for a large $T>0$.* *(iii) The constants $\alpha_{(i)}$ are often referred to as the *learning rates* for the $i$-th iteration. In light of the convergence conditions of stochastic approximation methods discussed at the end of the previous section, we shall choose the $\alpha_{(i)}$'s so that $\sum_{i=0}^\infty \alpha_{(i)} = \infty$ and $\sum_{i=0}^\infty \alpha_{(i)}^2 < \infty$, to help guarantee the convergence of the algorithm. $\square$* We observe that the convergence analysis of the above method, for each fixed $\Delta t$, coincides with that of the stochastic approximation methods. It would naturally be interesting to see the convergence with respect to $\Delta t$.
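In code, the CTD($\gamma$) rule amounts to maintaining the discretized test process as a running, eligibility-trace-like sum. The sketch below is ours (scalar $\theta$, hypothetical `grad_J`, and the temporal differences $\Delta_i$ supplied by the caller); it is not the paper's pseudo-code.

```python
import math

def ctd_gamma_episode(grad_J, xs, deltas, alphas, theta0, gamma, c, dt):
    """One episode of theta <- theta + alpha_i * xi_i * e^{-c t_i} * Delta_i,
    where xi_i = sum_{j<=i} gamma^{dt*(i-j)} * grad_J(theta, x_j) * dt is
    maintained recursively as xi <- gamma**dt * xi + grad_J(theta, x_i) * dt."""
    theta, xi = theta0, 0.0
    for i, (x, delta, alpha) in enumerate(zip(xs, deltas, alphas)):
        xi = gamma ** dt * xi + grad_J(theta, x) * dt   # discounted trace update
        theta += alpha * xi * math.exp(-c * dt * i) * delta
    return theta
```

Since `0 ** dt == 0` for `dt > 0`, taking `gamma = 0` keeps only the current gradient term (up to the $\Delta t$ factor, which can be absorbed into the learning rate), in the spirit of the convention $0^\alpha={\bf 1}_{\{\alpha=0\}}$ for recovering CTD(0).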
To study the convergence with respect to $\Delta t$, let us first define a special subspace of $\mathbb{L}^2_{\mathbb{F}}([0,T], M^\theta)$: $$H^{2, \alpha}_\mathbb{F}([0,T], M^\theta):=\{\xi \in \mathbb{L}^2_{\mathbb{F}}([0,T],M^\theta): ~\mathbb{E}|\xi_t - \xi_s|^2 \leq C(\theta)|t-s|^ \alpha, ~C(\cdot)\in C(\Theta)\},$$ where $\alpha\in(0,1]$, $\theta\in\Theta$, and $T>0$. The following result is adapted from [@Zhou §4.2 Theorem 3] with an almost identical proof; we thus only state it and omit the proof. **Proposition 22**. *Assume that $\xi\in H^{2, \alpha}_\mathbb{F}([0,T], M^\theta)$ for some $\alpha\in (0,1]$, and that $\theta^*$ (resp. $\theta^*_{\Delta t}$) are such that $\mathbb{E}[\int_0^T \xi_t d M_t^\theta] =0$ (resp. $\mathbb{E}[\sum_{i=0}^{n-1} \xi_{t_i} (M^\theta_{t_{i+1}}-M^\theta_{t_i})]=0$). Then for any sequence $[\Delta t]^{(n)}\to 0$ such that $\lim_{n\to\infty}\theta^*_{[\Delta t]^{(n)} }=\bar\theta$ exists, it must hold that $\bar\theta=\theta^*$. Furthermore, there exists $C>0$ such that $\mathbb{E}[ \int_0^T \xi_t dM_t^{\theta^*_{\Delta t}}]\leq C[\Delta t]^{\frac{\alpha}{2}}$. $\square$* Finally, we remark that although there are other PE methods analogous to well-known TD methods (e.g., CLSTD(0)), which are particularly well suited for linear parameterization families, in this paper we are interested in parameterized families that are nonlinear in nature. Thus we shall focus only on the *CTD*$(\gamma)$ methods and the *Martingale Loss Function* based PE method developed in the previous section (which will be referred to as the *ML-algorithm* in what follows), and present the detailed algorithms and numerical results in the next section.

# Numerical Results

In this section we present numerical results along the lines of the PE and PI schemes discussed in the previous sections.
In particular, we shall consider the *CTD*$(\gamma)$ methods and the *ML* algorithm, with some special parametrizations based on the knowledge of the explicit solution of the original optimal dividend problem (with $\lambda=0$), but without specifying the market parameters $\mu$ and $\sigma$. To test the effectiveness of the learning procedure, we shall use the so-called *environment simulator* $x' = ENV_{\Delta t} (x,a)$, which takes the current state $x$ and action $a$ as inputs and generates the state $x'$ at time $t+ \Delta t$, and we shall use the outcome of the simulator as the dynamics of $X$. We note that the environment simulator is problem specific, and should be created using historic data pertaining to the problem, without using the environment coefficients, which are considered *unknown* in the RL setting. But in our testing procedure, we shall use some dummy values of $\mu$ and $\sigma$, along with the following Euler--Maruyama discretization of the SDE ([\[sde2\]](#sde2){reference-type="ref" reference="sde2"}): $$\begin{aligned} \label{Euler} x_{t_{i+1}}^{(k)} = x_{t_i}^{(k)} + (\mu - a_{t_i} ^{(k)} ) \Delta t + \sigma Z , \qquad i=1,2,\cdots,\end{aligned}$$ where $Z\sim N(0, \Delta t)$ is a normal random variable with mean zero and variance $\Delta t$, and at each time $t_i$, $x_{t_{i+1}}^{(k)}$ is calculated by the environment simulator recursively from the given $x_{t_i}^{(k)}$ and $a_{t_i} ^{(k)}$, which is to be specified below. We recall from ([\[optimalpi\]](#optimalpi){reference-type="ref" reference="optimalpi"}) that the optimal policy function has the form $\pmb\pi^*(x,w) = G(w, \tilde c(x))$, where $\tilde c(x)$ is a continuous function.
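Before turning to the sampling of actions, the environment simulator step ([\[Euler\]](#Euler){reference-type="eqref" reference="Euler"}) can be sketched as follows; the dummy drift and volatility defaults are ours, mirroring the test values used later in this section (a real simulator would hide them behind historic data):

```python
import random

def env_step(x, a, dt, mu=0.4, sigma=0.8, rng=random):
    """One Euler-Maruyama step: x' = x + (mu - a)*dt + sigma*Z,
    with Z ~ N(0, dt), i.e. standard deviation sqrt(dt)."""
    z = rng.gauss(0.0, dt ** 0.5)
    return x + (mu - a) * dt + sigma * z
```

Calling `env_step` recursively along a trajectory, with actions drawn from the current policy, produces the sample paths consumed by the learning algorithms.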
It can be easily calculated that the inverse of the cumulative distribution function, denoted by $\big(F^ {\pmb \pi} \big)^{-1}$, is of the form $$\begin{aligned} \big(F^ {\pmb \pi} \big)^{-1} (x,w)= \frac{ \lambda\ln \Big( w(e^{\frac{a \tilde c(x)}{\lambda}}-1)+1\Big)}{\tilde c(x)} {\bf 1}_{\{ \tilde c(x)\neq 0\}}+ aw {\bf 1}_{\{\tilde c(x) = 0\}}, \qquad (w,x)\in [0,1]\times \mathbb{R}^+.\end{aligned}$$ Thus, by the *inversion method*, if ${\bf U}\sim U[0,1]$, the uniform distribution on $[0,1]$, then the random variable $\hat{\pmb \pi}({\bf U},x) := \big(F^ {\pmb \pi} \big)^{-1} (x,{\bf U})\sim\pmb\pi^*(\cdot, x)$, and we need only sample ${\bf U}$, which is much simpler. The next step is to choose the parametrization of $J^\theta$. In light of the well-known result (cf. e.g., [@asmussen]), we know that if $(\mu, \sigma)$ are given and $\beta=\frac{a}{c} -\frac{1}{\beta_3}>0$ (thanks to Assumption [Assumption 4](#jc){reference-type="ref" reference="jc"}), the classical solution of the optimal dividend problem is given by $$\begin{aligned} \label{classicalV} V(x) = K(e^{\beta_1x}-e^{-\beta_2x}){\bf 1}_{\{x \leq m \}}+\Big[\frac{a}{c} - \frac{e^{-\beta_3(x-m)}}{\beta_3}\Big]{\bf 1}_{\{x>m\}},\end{aligned}$$ where $K = \frac{\beta}{e^{\beta_1m}-e^{-\beta_2m}}$, $\beta_{1,2}= \frac{\mp\mu + \sqrt{2c\sigma^2+\mu^2}}{\sigma^2}$, $\beta_3 = \frac{\mu-a + \sqrt{2c\sigma^2+(a-\mu)^2}}{\sigma^2}$, and $m = \frac{\ln\big(\frac{1+\beta \beta_2}{1-\beta \beta_1}\big)}{\beta_1+\beta_2}$. We should note that the threshold $m>0$ in ([\[classicalV\]](#classicalV){reference-type="ref" reference="classicalV"}) is the most critical element of the value function, as it determines the switching barrier of the optimal dividend rate. That is, the optimal dividend rate is of the "bang-bang" form $\alpha_t=a{\bf 1}_{\{X(t)>m\}}$, where $X$ is the reserve process (see, e.g., [@asmussen]). We therefore consider the following two parametrizations based on the initial state $x=X_0$. \(i\) $x<m$.
By ([\[classicalV\]](#classicalV){reference-type="ref" reference="classicalV"}) we use the approximation family $$\begin{aligned} \label{family1} && J^\theta(x) = \theta_3 ( e^{\theta_1x} -e^{-\theta_2x}), \nonumber \\ &&\theta_1 \in \Big[\frac{4c}{(1+\sqrt{5})a}, 1\Big]; ~\theta_2 \in \Big[1+\frac{4c}{(1+\sqrt{5})a}, 2\Big];~ \theta_3 \in [15,16], \end{aligned}$$ where $\theta_1,\theta_2,\theta_3$ represent $\beta_1, \beta_2$ and $K$ of the classical solution, respectively. In particular, the bounds for $\theta_1$ and $\theta_2$ are due to the fact that $\beta_1 \in [\frac{4c}{(1+\sqrt{5})a}, 1]$ and $\beta_2 \in [1+\frac{4c}{(1+\sqrt{5})a}, \infty)$ under Assumption [Assumption 4](#jc){reference-type="ref" reference="jc"}. We should note that these bounds alone are not sufficient for the algorithms to converge, and we actually enforced some additional bounds. In practice, the ranges of $\theta_2$ and $\theta_3$ should be obtained from historical data for this method to be effective in real-life applications. Finally, it is worth noting that ([\[classicalV\]](#classicalV){reference-type="ref" reference="classicalV"}) actually implies that $\mu=\frac{c(\beta_2-\beta_1)}{\beta_2 \beta_1}$ and $\sigma^2 = \frac{2c}{\beta_2 \beta_1}$. We can therefore approximate $\mu$ and $\sigma$ by $\frac{c(\theta_2^*-\theta_1^*)}{\theta_2^* \theta_1^*}$ and $\sqrt{\frac{2c}{\theta_2^* \theta_1^*}}$, respectively, whenever the limit $\theta^*$ can be obtained. The threshold $m$ can then be approximated via $\mu$ and $\sigma$ as well. \(ii\) $x>m$.
Again, in this case by ([\[classicalV\]](#classicalV){reference-type="ref" reference="classicalV"}) we choose $$\begin{aligned} \label{family2} J^\theta(x) = \frac{a}{c} - \frac{ \theta_1}{\theta_2} e^{-\theta_2x}, \quad\theta_1 \in [1,2], ~\theta_2 \in \Big[\frac{c}{a}, \frac{2c}{a}\Big], \end{aligned}$$ where $\theta_1, \theta_2$ represent $(e^m)^{\beta_3}$ and $\beta_3$ respectively, and the bounds for $\theta_2$ are the bounds of the parameter $\beta_3$ in ([\[classicalV\]](#classicalV){reference-type="ref" reference="classicalV"}). To obtain an upper bound of $\theta_1$, we note that $\theta_1 \leq \frac{\theta_2a}{c}$ is necessary to ensure $J^\theta(x)>0$ for each $x>0$, and thus the upper bound of $\theta_2$ leads to that of $\theta_1$. For the lower bound of $\theta_1$, note that $e^m >1$ and hence $(e^m)^{\beta_3}>1$ as well. Using $J^{\theta^*}_x(m)=1$, we approximate $m$ by $\frac{\ln(\theta_1)}{\theta_2}$. **Remark 23**. *The parametrization above depends heavily on the knowledge of the explicit solution for the classical optimal control problem. In general, it is natural to consider using the viscosity solution of the entropy regularized relaxed control problem as the basis for the parameterization family. However, although we did identify both viscosity super- and sub-solutions in ([\[supsolution\]](#supsolution){reference-type="ref" reference="supsolution"}), we found that the specific super-solution does not work effectively due to the computational complexity resulting from the piecewise nature of the function, as well as the complicated nature of the bounds of the parameters involved (see ([\[AMbound\]](#AMbound){reference-type="ref" reference="AMbound"})); whereas the viscosity sub-solution, being a simple function independent of all the parameters we consider, does not seem to be an effective choice for a parameterization family in this case either. 
We shall leave the study of the effective parametrization using viscosity solutions to our future research.* In the following two subsections we summarize our numerical experiments following the analysis so far. For testing purposes, we choose "dummy\" parameters $a=3, \mu=0.4, \sigma=0.8$ and $c=0.02$, so that Assumption [Assumption 4](#jc){reference-type="ref" reference="jc"} holds. We use $T=10$ to limit the number of iterations, and we observe that on average the ruin time of the path simulations occurs in the interval $[0,10]$. We also use the error bound $\epsilon = 10^{-8}$, and make the convention that $d\sim 0$ whenever $|d|<\epsilon$. 

## $CTD(\gamma)$ methods 

**Learning Procedure**

Initialize $\theta$, $m=1$ and set $Var=\theta$.\
Set $\theta=Var$.\
Compute and store $A_m = Average(\theta^*)$ over the last $sz$ iterations.\
Set $Var = A_m$.\
End iteration if the absolute difference $DA = |A_m-A_{(m-sz)}|<\epsilon$.\
Initialize $j=0$.\
Observe $x_0$ and store $x_{t_j} \Leftarrow x_0$.\
$\lambda= \lambda l(j)$.\
Compute $\pmb \pi (\cdot,x_{t_j}) = G(\cdot,1-J_x^{Var}(x_{t_j}))$ and generate action $a_{t_j} \sim \pmb \pi (\cdot,x_{t_j})$.\
Apply $a_{t_j}$ to $ENV_{\Delta t} (t_j, x_{t_j}, a_{t_j})$ to observe and store $x_{t_{j+1}}$.\
**end iteration** if $x_{t_{j+1}}<\epsilon$.\
Compute $\Delta\theta= \big( \nabla_\theta J^{\theta^{(i)}}(\tilde X_{t_i}) \big) e^{-ct_i}\Delta_i$.\
**end iteration** if $\|\Delta\theta\|_2<\epsilon$.\
Update $\theta\leftarrow \theta+ \alpha p(j) \Delta\theta$.\
Update $j \leftarrow j+1$.\
Set $\theta^*=\theta$ and update $m \leftarrow m+1$.\
Set $\theta^*= A_m$.

In Algorithm [\[alg:one\]](#alg:one){reference-type="ref" reference="alg:one"} below we carry out the PE procedure using the $CTD(0)$ method. We choose $\lambda=\lambda(j)$ as a function of the iteration number: $\lambda(j)=2l(j) =2 (0.2)^{j\Delta t} \geq 2\cdot 0.2^T = 2.048 \times 10^{-7}$. 
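The "generate action" step of the learning procedure can be realized with the inversion method described earlier: push a single $U[0,1]$ draw through $\big(F^{\pmb\pi}\big)^{-1}$. The sketch below is ours, not the authors' implementation; the coefficient function `c_tilde` (standing for $\tilde c(x)$) and the parameter values are placeholders, and `temperature` implements the schedule $\lambda(j)=2(0.2)^{j\Delta t}$ quoted above.

```python
import math
import random

def inv_cdf(w, x, lam, a, c_tilde):
    """(F^pi)^{-1}(x, w) for the Gibbs policy pi*(., x); w in [0, 1]."""
    c = c_tilde(x)
    if c == 0.0:
        return a * w  # pi*(., x) degenerates to the uniform law on [0, a]
    return lam * math.log(w * (math.exp(a * c / lam) - 1.0) + 1.0) / c

def sample_action(x, lam, a, c_tilde, rng=random):
    """Inversion method: a ~ pi*(., x) from a single U[0,1] draw."""
    return inv_cdf(rng.random(), x, lam, a, c_tilde)

def temperature(j, dt):
    """Schedule lambda(j) = 2 * 0.2**(j * dt); for j*dt <= T = 10 it stays
    above 2 * 0.2**10 = 2.048e-7, keeping the policy well defined."""
    return 2.0 * 0.2 ** (j * dt)
```

Note that `inv_cdf` maps $w=0$ to $0$ and $w=1$ to $a$, so every sampled action lies in the admissible range $[0,a]$.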
This particular function is chosen so that $\lambda\to0$ and the entropy regularized control problem converges to the classical problem, but $\lambda$ is still bounded away from $0$ so as to ensure that $\pmb\pi$ is well defined. We shall initialize the learning rate at 1 and decrease it using the function $p(j) = 1/j$ so as to ensure that the conditions $\sum_{i=0}^\infty \alpha_i = \infty$ and $\sum_{i=0}^\infty (\alpha_i)^2 < \infty$ are satisfied. We note that Algorithm [\[alg:one\]](#alg:one){reference-type="ref" reference="alg:one"} is designed as a combination of *online learning* and the so-called *batch learning*, which updates the parameter $\theta$ at each temporal partition point, but only updates the policy after a certain number (the parameter "$sz$\" in Algorithm [\[alg:one\]](#alg:one){reference-type="ref" reference="alg:one"}) of path simulations. This particular design is to allow the PE method to better approximate $J(\cdot, \pmb \pi)$ before updating $\pmb\pi$. **Convergence Analysis.** To analyze the convergence as $\Delta t \to 0$, we consider $\Delta t =0.005$, 0.001, 0.0005, 0.0001, 0.00005, respectively. We take $M=40000$ path simulations and $sz=250$ in the implementation. Note that with the choice of dummy parameters $a, \mu$ and $\sigma$, the classical solution is given by $m=4.7797$, $V(3) = 17.9522$ and $V(10)=24.9940$. We thus consider two parameterization families, for initial values $x = 3 <m$ and $x=10>m$, respectively. 
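The benchmark values $m=4.7797$, $V(3)=17.9522$, $V(10)=24.9940$ follow directly from ([\[classicalV\]](#classicalV){reference-type="ref" reference="classicalV"}). The short script below (ours, for reference only) reproduces them and also checks the round-trip formulas $\mu = c(\beta_2-\beta_1)/(\beta_1\beta_2)$ and $\sigma^2 = 2c/(\beta_1\beta_2)$ noted earlier:

```python
import math

def classical_solution(a, mu, sigma, c):
    """Threshold m and value function V from the explicit formula (classicalV)."""
    s2 = sigma ** 2
    root = math.sqrt(2 * c * s2 + mu ** 2)
    beta1, beta2 = (-mu + root) / s2, (mu + root) / s2
    beta3 = (mu - a + math.sqrt(2 * c * s2 + (a - mu) ** 2)) / s2
    beta = a / c - 1 / beta3
    m = math.log((1 + beta * beta2) / (1 - beta * beta1)) / (beta1 + beta2)
    K = beta / (math.exp(beta1 * m) - math.exp(-beta2 * m))

    def V(x):
        if x <= m:
            return K * (math.exp(beta1 * x) - math.exp(-beta2 * x))
        return a / c - math.exp(-beta3 * (x - m)) / beta3

    return V, m, beta1, beta2

V, m, b1, b2 = classical_solution(a=3, mu=0.4, sigma=0.8, c=0.02)
# With the dummy parameters: m ~ 4.7797, V(3) ~ 17.95, V(10) ~ 24.99.
mu_hat = 0.02 * (b2 - b1) / (b1 * b2)        # recovers mu = 0.4
sigma_hat = math.sqrt(2 * 0.02 / (b1 * b2))  # recovers sigma = 0.8
```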
  | $\Delta t$ | $J^{\theta^*}$ (i, $x=3$) | $m$ | $J^{\theta^*}$ (i, $x=10$) | $m$ | $J^{\theta^*}$ (ii, $x=3$) | $m$ | $J^{\theta^*}$ (ii, $x=10$) | $m$ |
  |------------|--------|-------|--------|-------|--------|--------|--------|--------|
  | $0.01$     | 15.49  | 5.383 | 31.276 | 3.476 | 32.489 | 19.359 | 55.667 | 9.635  |
  | $0.005$    | 17.188 | 4.099 | 22.217 | 4.292 | 31.108 | 18.532 | 53.262 | 11.942 |
  | $0.001$    | 16.68  | 4.474 | 23.082 | 3.931 | 37.58  | 11.925 | 60.948 | 10.691 |
  | $0.0005$   | 16.858 | 4.444 | 23.079 | 4.049 | 40.825 | 11.797 | 65.179 | 10.5   |
  | $0.0001$   | 17.261 | 4.392 | 23.094 | 4.505 | 38.899 | 18.994 | 55.341 | 18.142 |

  : Results for the $CTD(0)$ method 

  $J^{\theta^*}$ w.r.t. $\Delta t$                 $m$ w.r.t. $\Delta t$
  ------------------------------------------------ -------------------------------------------------
  ![image](x3F1.png){width="40%" height="25mm"}    ![image](mx3F1.png){width="40%" height="25mm"}
  Results obtained using family (i) for $x=3$      
  ![image](x10F1.png){width="40%" height="25mm"}   ![image](mx10F1.png){width="40%" height="25mm"}
  Results obtained using family (i) for $x=10$     
  ![image](x3F2.png){width="40%" height="25mm"}    ![image](mx3F2.png){width="40%" height="25mm"}
  Results obtained using family (ii) for $x=3$     
  ![image](x10F2.png){width="40%" height="25mm"}   ![image](mx10F2.png){width="40%" height="25mm"}
  Results obtained using family (ii) for $x=10$    

  : Convergence results for the $CTD(0)$ method 

*Case 1. $x=3<m$.* As we can observe from Tables [1](#Numericalvalues01){reference-type="ref" reference="Numericalvalues01"} and [2](#Numericalvalues02){reference-type="ref" reference="Numericalvalues02"}, in this case using the approximation ([\[family1\]](#family1){reference-type="ref" reference="family1"}) (family (i)) shows a reasonably satisfactory level of convergence towards the known classical solution values of $J(x_0)$ and $m$ as $\Delta t \to 0$, despite some mild fluctuations. 
We believe that such fluctuations are due to the randomness of the data we observe, and that averaging over the $sz$ paths in our algorithm reduces the occurrence of these fluctuations to a satisfactory level. As we can see, despite the minor anomalies, the general trajectory of these graphs tends towards the classical solution as $\Delta t \to 0$. We should also observe that using family (ii) ([\[family2\]](#family2){reference-type="ref" reference="family2"}) does not produce satisfactory convergence results. This is as expected, however, since the function ([\[family2\]](#family2){reference-type="ref" reference="family2"}) is based on the classical solution for $x>m$. *Case 2. $x=10>m$.* Even though the family ([\[family1\]](#family1){reference-type="ref" reference="family1"}) is based on the classical solution for $x<m$, as we can see from Tables [1](#Numericalvalues01){reference-type="ref" reference="Numericalvalues01"} and [2](#Numericalvalues02){reference-type="ref" reference="Numericalvalues02"}, the algorithm using family ([\[family1\]](#family1){reference-type="ref" reference="family1"}) converges to the values of the classical solution even in the case $x=10>m$, whereas the algorithm using family (ii) ([\[family2\]](#family2){reference-type="ref" reference="family2"}) does not. While a bit counterintuitive, this is actually not entirely unexpected, since the state process can in general be seen to reach $0$ in the considered time interval, but the parameterization ([\[family2\]](#family2){reference-type="ref" reference="family2"}) is not suitable when the value of the state falls below $m$. Consequently, the parameterization ([\[family1\]](#family1){reference-type="ref" reference="family1"}) seems better suited for the $CTD(0)$ method, regardless of the initial value. Finally, we would like to point out that the case of $CTD(\gamma)$ methods for $\gamma \neq 0$ is much more complicated, and the algorithms are computationally much slower than the $CTD(0)$ method. 
We believe that the proper choice of the learning rate in this case warrants some deeper investigation, but we prefer not to discuss these issues in this paper. 

## The ML Algorithm 

In Algorithm [\[alg:two\]](#alg:two){reference-type="ref" reference="alg:two"} we present the so-called ML-algorithm, in which we use a batch learning approach: we update the parameters $\theta$ by $\tilde \theta$ at the end of each simulated path, using the information from the time interval $[0, T \wedge \tau_x]$. We use $M=40000$ path simulations and initial temperature $\lambda=2$. In the $m$-th simulated path we decrease the temperature parameter by $\lambda= 2 \times (0.9)^{m}$. We also initialize the learning rate at 1 and decrease it using the function $p(m) = 1/m$. Finally, we consider $\Delta t=0.005, 0.001,0.0005,0.0001$, respectively, to represent the convergence as $\Delta t \to 0$. 

**Learning Procedure**

Initialize $\theta$, $m=1$.\
$\lambda= \lambda l(m)$.\
Compute and store $A_m = Average(\theta)$ over the last $sz$ iterations.\
End iteration if the absolute difference $DA = |A_m-A_{(m-sz)}|<\epsilon$.\
Initialize $k=0$, observe $x_0$ and store $x_{t_k} \Leftarrow x_0$.\
Compute $\pmb \pi (\cdot,x_{t_k}) = G(\cdot,1-J_x^{\theta}(x_{t_k}))$ and generate action $a_{t_k} \sim \pmb \pi (\cdot,x_{t_k})$.\
Apply $a_{t_k}$ to $ENV_{\Delta t} (t_k, x_{t_k}, a_{t_k})$ to observe and store $x_{t_{k+1}}$.\
**end iteration** if $x_{t_{k+1}}<\epsilon$.\
Observe and store $J^{\theta}(x_{t_{k+1}}), \nabla_\theta J^{\theta}(x_{t_{k+1}})$.\
Update $k \leftarrow k+1$.\
Compute $\Delta\theta$ using the *ML* algorithm and update $\theta\leftarrow \theta- \alpha p(m) \Delta\theta$.\
Update $m \leftarrow m+1$.\
Set $\theta^*= A_m$.

Using parameterized family (i) with both the initial values $x=3$ and $x=10$, we obtain the optimal $\theta^*_i$ as the lower bound of each parameter $\theta_i$, for $i=1,2,3$. 
Using parameterized family (ii) with both the initial values $x=3$ and $x=10$, we obtain the optimal $\theta^*_i$ as the average of the lower and upper bounds of each parameter $\theta_i$ for $i=1,2$, since in each iteration $\theta_i$ is updated to the upper and lower boundaries alternately. This is due to the fact that the learning rate $\frac{1}{m}$ is too large for this particular algorithm. Decreasing the size of the learning rate results in optimal $\theta$ values that are away from the boundaries, but the algorithms in these cases were shown empirically not to converge, and thus the final result depends on the number of iterations used ($M$). In general, the reason for this could be the loss of efficiency caused by decreasing the learning rates, since gradient descent algorithms are generally sensitive to learning rates. Specific to our problem, among many possible reasons, we believe that the limiting behavior of the optimal strategy when $\lambda\to 0$ is a serious issue, as $\pmb \pi$ is not well defined when $\lambda=0$ and a Dirac $\delta$-measure is supposed to be involved. Furthermore, the \"bang-bang\" nature and the jump of the optimal control could also affect the convergence of the algorithm. Finally, the algorithm seems to be quite sensitive to the value of $m$, since the value function $V(x)$ is a piecewise smooth function depending on $m$. Thus, to rigorously analyze the effectiveness of the ML-algorithm with parameterization families (i) and (ii), further empirical analysis is needed, which involves finding effective learning rates. All these issues call for further investigation, but based on our numerical experiments we can nevertheless conclude that the $CTD(0)$ method using the parameterization family (i) is effective in finding the values $m$ and $V(x)$, provided that effective upper and lower bounds for the parameters can be identified using historical data. [^1]: School of Mathematics, Nankai University, Tianjin, 300071, China. 
Email: lbai\@nankai.edu.cn. This author is supported in part by Chinese NSF grants \#11931018, \#12272274, and \#12171257. [^2]: Department of Mathematics, University of Southern California, Los Angeles, CA 90089. Email: gamage\@usc.edu. [^3]: Department of Mathematics, University of Southern California, Los Angeles, CA 90089. Email: jinma\@usc.edu. This author is supported in part by NSF grants \#DMS-1908665 and 2205972. [^4]: School of Mathematics, Nankai University, Tianjin, 300071, China. Email: 1120180026\@mail.nankai.edu.cn.
--- abstract: | We consider a lattice polytope in the $xy$-plane such that each edge is parallel to the $x$-axis or the $y$-axis. In [@N], we investigated transformations of certain lattice polytopes, and we considered the reduced graph that is obtained from deformations of a graph associated with a lattice polytope. In this paper, we refine the notion of the reduced graph by introducing the notion of a dotted graph. A lattice polytope is presented by an admissible dotted graph. We investigate deformations of dotted graphs, and we investigate the relation between deformations of admissible dotted graphs and transformations of lattice polytopes, giving results that are refined and corrected versions of [@N Lemma 6.2, Theorem 6.3]. address: Department of Mathematics, Information Science and Engineering, Saga University, Honjomachi, Saga, 840-1153, Japan author: - Inasa Nakamura title: Transformations of lattice polytopes and their associated dotted graphs --- # Introduction {#sec1} Let $\mathbb{R}^2$ be the $xy$-plane. We say that a segment is in the *$x$-direction* or in the *$y$-direction* if it is parallel to the $x$-axis or the $y$-axis, respectively. A *lattice polytope in the lattice*, or simply a *lattice polytope*, is a polytope $P$ in $\mathbb{R}^2$ consisting of a finite number of vertices of degree zero (called *isolated vertices*) with multiplicity $2$, vertices of degree $2$ with multiplicity $1$, edges, and faces satisfying the following conditions. 1. Each edge is either in the $x$-direction or in the $y$-direction. 2. The $x$-components (respectively $y$-components) of isolated vertices and edges in the $x$-direction (respectively $y$-direction) are distinct. 3. The boundary $\partial P$ is equipped with a coherent orientation as an immersion of a union of several circles, where we call the union of edges of $P$ the *boundary* of $P$, denoted by $\partial P$. 
The set of vertices is divided into two sets $X_0$ and $X_1$ such that each edge in the $x$-direction is oriented from a vertex of $X_0$ to a vertex of $X_1$, and isolated vertices are both in $X_0$ and $X_1$ with multiplicity 1. We denote $X_0$ (respectively $X_1$) by $\mathrm{Ver}_0(P)$ (respectively $\mathrm{Ver}_1(P)$), and we call it the set of *initial vertices* (respectively *terminal vertices*). When we draw figures, we denote an initial vertex (respectively a terminal vertex) by a small black disk (respectively an $X$ mark). And in our figures, the $x$-direction is the vertical direction, in order to be coherent with figures in [@N]. ![A lattice polytope in the lattice (left figure) and the result of a transformation along a rectangle (right figure). The rectangle is denoted by the shadowed area. We remark that the $x$-direction is the vertical direction, and we denote an initial vertex (respectively a terminal vertex) by a small black disk (respectively an $X$ mark). ](A-fig1.pdf){#Fig1 height="4cm"} See the left figure in Figure [1](#Fig1){reference-type="ref" reference="Fig1"} for an example of a lattice polytope. We remark that in graph theory, a lattice polytope is defined as a polytope whose vertices are lattice points, where a *lattice point* is a point $(x,y)$ of $\mathbb{R}^2$ such that $x$ and $y$ are integers [@Barvinok2; @Diestel]. However, here, though we don't require that the vertices are lattice points, we call our polytope a lattice polytope. In [@N], we investigated partial matchings, using certain lattice polytopes whose vertices are lattice points. A partial matching is a finite graph consisting of edges and vertices of degree zero or one, with the vertex set $\{1,2,\ldots,m\}$ for some positive integer $m$; see [@Reidys] for research on structures of partial matchings using chord diagrams. 
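The degree condition defining a partial matching is easy to state programmatically. The following is our own illustration (not from [@N]), assuming edges are given as unordered pairs on $\{1,\ldots,m\}$:

```python
def is_partial_matching(m, edges):
    """A partial matching on {1, ..., m}: every vertex has degree 0 or 1,
    i.e. the edges are pairwise disjoint and have no repeated endpoints."""
    seen = set()
    for u, v in edges:
        if u == v or not (1 <= u <= m and 1 <= v <= m):
            return False
        if u in seen or v in seen:  # some vertex would have degree >= 2
            return False
        seen.update((u, v))
    return True

# For example, {(1,4), (2,5)} on {1,...,5} is a partial matching
# (vertex 3 is isolated), while {(1,2), (2,3)} is not.
```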
In [@N], with a motivation to investigate partial matchings, we introduced the lattice presentation of a partial matching, and the lattice polytope associated with a pair of partial matchings. Further, in [@N], we treated transformations of lattice presentations and lattice polytopes, and the area of a transformation, and the reduced graph of a lattice polytope. A transformation of a lattice polytope is a sequence of transformations of lattice polytopes such that each is described by a rectangle as in Figure [1](#Fig1){reference-type="ref" reference="Fig1"}. In this paper, we refine the notion of a reduced graph by introducing the notion called a dotted graph (Definition [\[def2-1\]](#def2-1){reference-type="ref" reference="def2-1"}). A lattice polytope is presented by an admissible dotted graph. We introduce a refined version of deformations of dotted graphs, and show that a reduced graph of a dotted graph obtained by a certain sequence of deformations is uniquely determined up to certain local moves (Theorem [Theorem 3](#prop3-5){reference-type="ref" reference="prop3-5"}), and that for a dotted graph, any sequence consisting of certain deformations is finite (Theorem [Theorem 4](#r-prop3-5){reference-type="ref" reference="r-prop3-5"}). These theorems provide a refined and corrected version of [@N Lemma 6.2]. Further, we give a refined and corrected version of [@N Theorem 6.3]: for an admissible dotted graph $\Gamma$, its associated lattice polytope $P$ admits a transformation with minimal area if it is deformed to the empty graph by specific deformations, and if $P$ admits a transformation with minimal area, then $\Gamma$ is deformed to the empty graph by certain deformations (Theorem [Theorem 6](#thm3-7){reference-type="ref" reference="thm3-7"}); see Remark [\[rem0904\]](#rem0904){reference-type="ref" reference="rem0904"}. 
Further, we show that if a dotted graph has "many" dots, then it is admissible (Proposition [Proposition 7](#prop3-8){reference-type="ref" reference="prop3-8"}), and a dotted graph with "many" dots is reduced to the empty graph (Proposition [Proposition 8](#prop3-9){reference-type="ref" reference="prop3-9"}). The paper is organized as follows. In Section [2](#sec1){reference-type="ref" reference="sec1"}, we review transformations of lattice polytopes and the areas of transformations. In Section [3](#sec2){reference-type="ref" reference="sec2"}, we give the definitions of a dotted graph and the deformations of dotted graphs, and Theorems [Theorem 3](#prop3-5){reference-type="ref" reference="prop3-5"} and [Theorem 4](#r-prop3-5){reference-type="ref" reference="r-prop3-5"}. In Section [4](#sec-proof){reference-type="ref" reference="sec-proof"}, we show Theorems [Theorem 3](#prop3-5){reference-type="ref" reference="prop3-5"} and [Theorem 4](#r-prop3-5){reference-type="ref" reference="r-prop3-5"}. In Section [5](#deformation){reference-type="ref" reference="deformation"}, we investigate the relation between deformations of admissible dotted graphs and transformations of lattice polytopes, and we give Theorem [Theorem 6](#thm3-7){reference-type="ref" reference="thm3-7"} and Propositions [Proposition 7](#prop3-8){reference-type="ref" reference="prop3-8"} and [Proposition 8](#prop3-9){reference-type="ref" reference="prop3-9"}. Section [6](#sec-lemma){reference-type="ref" reference="sec-lemma"} is devoted to showing lemmas. # Transformations of lattice polytopes and the areas of transformations {#sec1} In this section, we review transformations of lattice polytopes and the areas of transformations [@N]. ## Transformations of lattice polytopes For a point $v=(x_1,y_1)$ of $\mathbb{R}^2$, we call $x_1$ (respectively $y_1$) the *$x$-component* (respectively the *$y$-component*) of $v$. 
For a set of points of $\mathbb{R}^2$, $\{ v_1, \ldots, v_n\}$ with $v_j=(x_j, y_j)$ ($j=1,\dots, n$), we call the set $\{ x_1, \ldots, x_n, y_1,\ldots, y_n\}$ the *set of $x$, $y$ components* of $\{ v_1,\ldots, v_n\}$. Let $\Delta$ be a set of points in $\mathbb{R}^2$. For a point $u$ of $\mathbb{R}^2$, we denote by $x(u)$ and $y(u)$ the $x$ and $y$-components of $u$, respectively. For two distinct points $v, w$ of $\Delta$, let $R(v, w)$ be the rectangle one pair of whose diagonal vertices are $v$ and $w$. Put $\tilde{v}=(x(w), y(v))$ and $\tilde{w}=(x(v), y(w))$, which form the other pair of diagonal vertices of $R(v,w)$. Then, consider a new set of points obtained from $\Delta$ by removing $v,w$ and adding $\tilde{v}, \tilde{w}$. We call the new set of points the result of the *transformation of $\Delta$ by the rectangle* $R(v, w)$, denoted by $t( \Delta; R(v, w))$. For two sets of points $\Delta$ and $\Delta'$ in $\mathbb{R}^2$ with the same set of $x, y$-components, we define a *transformation from $\Delta$ to $\Delta'$*, denoted by $\Delta \to \Delta'$, as a sequence of transformations by rectangles such that the initial and the terminal points are $\Delta$ and $\Delta'$, respectively. For a lattice polytope $P$, we define a *transformation of $P$* as a transformation from $\mathrm{Ver}_0(P)$ to $\mathrm{Ver}_1(P)$. We recall that $\mathrm{Ver}_0(P)$ and $\mathrm{Ver}_1(P)$ are the set of initial vertices and terminal vertices of $P$, respectively. For a lattice polytope $P'$, let $R$ be a rectangle that contains vertices of $\mathrm{Ver}_0(P')$ as a diagonal pair. We define the *transformation of $P'$ along the rectangle $R$* as the transformation from $P'$ to the lattice polytope whose initial vertices are $t(\mathrm{Ver}_0(P'); R)$ and terminal vertices are $\mathrm{Ver}_1(P')$, respectively. 
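The transformation by a rectangle is a purely combinatorial operation on point sets: the chosen diagonal pair is replaced by the opposite diagonal pair of the same rectangle. A minimal sketch (our illustration, not from [@N]):

```python
def transform(delta, v, w):
    """t(Delta; R(v, w)): replace the diagonal pair v, w of the rectangle
    R(v, w) by the opposite pair v~ = (x(w), y(v)), w~ = (x(v), y(w))."""
    assert v in delta and w in delta and v != w
    new = set(delta) - {v, w}
    new.add((w[0], v[1]))  # v~
    new.add((v[0], w[1]))  # w~
    return new

# Swapping the diagonal of the unit square:
transform({(0, 0), (1, 1)}, (0, 0), (1, 1))  # -> {(1, 0), (0, 1)}
```

Note that the operation preserves the set of $x$, $y$ components of $\Delta$, which is why a transformation $\Delta \to \Delta'$ only makes sense between sets sharing those components.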
A transformation of a lattice polytope $P$ is described as a sequence of transformations of lattice polytopes along rectangles such that each rectangle contains initial vertices of each lattice polytope as a diagonal pair. See Figure [1](#Fig1){reference-type="ref" reference="Fig1"} for an example of a transformation of a lattice polytope along a rectangle. ## Areas of transformations {#sec4-2} For a lattice polytope $P$, we define the area of $P$ and the area of $|P|$, denoted by $\mathrm{Area}(P)$ and $\mathrm{Area}|P|$, respectively, as follows. Recall that for a lattice polytope $P$, we give an orientation of $\partial P$ by giving each edge in the $x$-direction (respectively in the $y$-direction) the orientation from a vertex of $\mathrm{Ver}_0(P)$ to a vertex of $\mathrm{Ver}_1(P)$ (respectively from a vertex of $\mathrm{Ver}_1(P)$ to a vertex of $\mathrm{Ver}_0(P)$). Then, $\partial P$ has a coherent orientation as an immersion of a union of several circles. The space $\mathbb{R}^2$, which contains $P$, is divided into several regions $A_1,\ldots, A_m$ by $\partial P$. For each region $A_i$ ($i=1,\dots,m$), let $\omega(A_i)$ be the *rotation number* of $P$ with respect to $A_i$, which is the sum of rotation numbers of the connected components (as immersed circles) of $P$, with respect to $A_i$. Here, the *rotation number* of a connected lattice polytope $Q$ (that is regarded as an immersed circle) with respect to a region $A$ of $\mathbb{R}^2 \backslash\partial Q$ is the rotation number of a map $f$ from $\partial Q=\mathbb{R}/2\pi \mathbb{Z}$ to $\mathbb{R}/2\pi\mathbb{Z}$ which maps $x \in \partial Q$ to the argument of the vector from a fixed interior point of $A$ to $x$. Here, the *rotation number* of the map $f$ is defined by $(F(x+2\pi)-F(x))/2\pi$, where $F: \mathbb{R} \to \mathbb{R}$ is a lift of $f$; this is an integer independent of the choice of $x$ and of the lift. 
We define $\mathrm{Area}(A_i)$ for a region $A_i$ by the area induced from $\mathrm{Area}(R)=|x_2-x_1||y_2-y_1|$ for a rectangle $R$ whose diagonal vertices are $(x_1, y_1)$ and $(x_2, y_2)$. Then, we define the *area* of $P$, denoted by $\mathrm{Area}(P)$, by $\mathrm{Area}(P)=\sum_{i=1}^m \omega(A_i)\mathrm{Area}(A_i)$, and the *area* of $|P|$, denoted by $\mathrm{Area}|P|$, by $\mathrm{Area}|P|=\sum_{i=1}^m |\omega(A_i)\mathrm{Area}(A_i)|$. Let $P$ be a lattice polytope. We consider a transformation of $P$, $\mathrm{Ver}_0(P)=\Delta_0 \to \Delta_1\to \cdots \to \Delta_k=\mathrm{Ver}_1(P)$, such that each $\Delta_{j-1} \to \Delta_j$ is a transformation by a rectangle $R_j$ ($j=1,2,\ldots,k$). Then, we call $\sum_{j=1}^k |\mathrm{Area}(R_j)|$ the *area of the transformation of $P$*. Theorem 5.9 in [@N] implies the following. **Theorem 1**. *Let $P$ be a lattice polytope. We consider a transformation of $P$, $\mathrm{Ver}_0(P)=\Delta_0 \to \Delta_1\to \cdots \to \Delta_k=\mathrm{Ver}_1(P)$, such that each $\Delta_{j-1} \to \Delta_j$ is a transformation by a rectangle $R_j$ ($j=1,2,\ldots, k)$.* *Then $$\label{eq0} \sum _{j=1}^k \mathrm{Area}(R_j) =\mathrm{Area}(P)$$ and $$\label{eq2} \sum_{j=1}^k |\mathrm{Area}(R_j)|\geq \mathrm{Area}|P|.$$* *Further, when $P$ satisfies either the condition $(1)$ or $(2)$ in [@N Theorem 5.9], there exist transformations which realize the equality of $(\ref{eq2})$.* We call a transformation which realizes the equality of $(\ref{eq2})$ a transformation *with minimal area*. # Dotted graphs and their reduced graphs {#sec2} In this section, we introduce the notion of reduced graphs of a lattice polytope. 
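Before doing so, we note that the identity ([\[eq0\]](#eq0){reference-type="ref" reference="eq0"}) in Theorem [Theorem 1](#thm){reference-type="ref" reference="thm"} admits a quick numerical illustration: the total signed rectangle area is the same along any two transformation sequences with the same endpoints. The sign convention below, $\mathrm{Area}(R(v,w)) = (x(w)-x(v))(y(w)-y(v))$ (symmetric in $v, w$), is our assumption for this sketch; [@N] fixes the precise convention.

```python
def apply_seq(points, pairs):
    """Apply transformations by rectangles; return the final point set
    and the accumulated signed rectangle area."""
    pts, total = set(points), 0
    for v, w in pairs:
        assert v in pts and w in pts
        total += (w[0] - v[0]) * (w[1] - v[1])  # assumed sign convention
        pts -= {v, w}
        pts |= {(w[0], v[1]), (v[0], w[1])}
    return pts, total

start = {(0, 0), (1, 1), (2, 2)}
# Two different sequences from the same initial set to the same terminal set:
end_a, area_a = apply_seq(start, [((0, 0), (1, 1)), ((1, 0), (2, 2))])
end_b, area_b = apply_seq(start, [((1, 1), (2, 2)), ((0, 0), (2, 1))])
# end_a == end_b and area_a == area_b, consistent with Theorem 1.
```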
And we give theorems that a reduced graph of a dotted graph obtained by a certain sequence of deformations is uniquely determined up to certain local moves (Theorem [Theorem 3](#prop3-5){reference-type="ref" reference="prop3-5"}), and that for a dotted graph, any sequence consisting of certain deformations is finite (Theorem [Theorem 4](#r-prop3-5){reference-type="ref" reference="r-prop3-5"}). [\[rem0904\]]{#rem0904 label="rem0904"} Theorems [Theorem 3](#prop3-5){reference-type="ref" reference="prop3-5"} and [Theorem 4](#r-prop3-5){reference-type="ref" reference="r-prop3-5"} provide a refined and corrected version of [@N Lemma 6.2]; we remark that the set of deformations given in [@N] does not include the cases that have "overlapping regions", and for deformations that delete circle/loop components, the condition for the label of the complement region is necessary to show the uniqueness; the original version of [@N Lemma 6.2] holds true when we add the condition of labels for deformations that delete circle/loop components, and any sequence of deformations satisfies the condition (A) given in Theorem [Theorem 3](#prop3-5){reference-type="ref" reference="prop3-5"}. Theorem [Theorem 6](#thm3-7){reference-type="ref" reference="thm3-7"} is a refined and corrected version of [@N Theorem 6.3]; we remark that [@N Theorem 6.3] does not hold for lattice polytopes that admit transformations other than those described by the given deformations. We reformulate the notion of a reduced graph. [\[def2-1\]]{#def2-1 label="def2-1"} Let $\Gamma$ be a finite graph in $\mathbb{R}^2$. Then $\Gamma$ is a *dotted graph* if each edge has an orientation and each vertex is of degree 2 or degree 4, satisfying the following conditions. 1. Around each vertex of degree 2, the edges have a coherent orientation. We denote the vertex by a small black disk, called a *dot*. 2. Around each vertex of degree 4, each pair of diagonal edges has a coherent orientation. We call the vertex a *crossing*. 
We regard edges connected by vertices of degree 2 as an edge equipped with several dots. We call an edge or a part of an edge an *arc* (with/without dots). The complement of a dotted graph $\Gamma \subset \mathbb{R}^2$ consists of a finite number of connected components. We call each component a *region*. We regard $\Gamma$ as a finite number of immersed oriented circles with transverse intersection points, and we assign each region with an integral label denoting the rotation number. We also call $\Gamma$ equipped with integral labels on regions a *dotted graph*. Two dotted graphs are the *same* if they are related by an ambient isotopy of $\mathbb{R}^2$. Let $P$ be a lattice polytope in the lattice. Let $\partial P$ be the boundary of $P$ equipped with the initial vertices $\mathrm{Ver}_0(P)$, which are one of the two types of vertices $\mathrm{Ver}_0(P)$ and $\mathrm{Ver}_1(P)$. Then $\partial P$ is a dotted graph, which will be called the *dotted graph associated with a lattice polytope $P$*; see Figure [2](#Fig2){reference-type="ref" reference="Fig2"}. We say that a dotted graph $\Gamma$ is *admissible* if there exists a lattice polytope $P$ with which $\Gamma$ is associated. ![(a) A lattice polytope and (b) the associated dotted graph.](A-fig2.pdf){#Fig2 height="4cm"} Let $\Gamma$ be a dotted graph. We call a set of arcs with dots and connecting crossings a *circle component* if it bounds an embedded disk and any pair of arcs connected by a crossing is a pair of diagonal arcs. And we call a set of arcs with dots and connecting crossings a *loop component* if it bounds an embedded disk and one pair of arcs connected by a crossing $c$ is a pair of adjacent arcs and any other pair of arcs connected by a crossing is a pair of diagonal arcs, and moreover around the crossing $c$ connecting the adjacent arcs forming the component, the bounded disk does not contain the other two arcs. 
Let $\Gamma_0, \Gamma_1$ be dotted graphs such that $\Gamma_0 \cap \Gamma_1$ consists of a finite number of transverse intersection points of arcs, avoiding dots. Regarding the intersection points as crossings, we have a dotted graph $\Gamma_0 \cup \Gamma_1$. Each region $R$ of $\Gamma_0 \cup \Gamma_1$ is described as $R_0 \cap R_1$, where $R_0$ (respectively $R_1$) is the region of $\Gamma_0$ (respectively $\Gamma_1$) containing an inner point of $R$, and the label of $R$ is the sum of the labels of $R_0$ and $R_1$. Let $X_1$ be the union of regions of $\Gamma_1$ such that any region in $\mathbb{R}^2\backslash X_1$ has the label zero. We call the dotted graph $\Gamma_0 \cup \Gamma_1$ the dotted graph $\Gamma_0$ *overlapped by the regions $X_1$*, and we call $X_1$ *overlapping regions*. And for a region $R_0$ of $\Gamma_0$ such that $R_0 \cap X_1 \neq \emptyset$, we say that *$R_0$ is overlapped by $X_1$*, and we call $R_0$ the *overlapped region*. Similarly, by induction, we call the dotted graph $\Gamma=\Gamma_0 \cup \Gamma_1 \cup \cdots \cup \Gamma_m$ the dotted graph $\Gamma_0$ *overlapped by regions $X_1, \ldots, X_m$*, where $X_i$ is the union of regions of $\Gamma_i$ such that any region in $\mathbb{R}^2\backslash X_i$ has the label zero $(i=1, \ldots, m)$. The label of a region $R$ of $\Gamma$ is the sum of the labels of the corresponding regions in the overlapped/overlapping regions. When we want to distinguish the label of $R$ of $\Gamma$ from that of an overlapped/overlapping region, we call the former the *total label* of $R$. Let $\Gamma$ be a dotted graph. Let $D$ be a disk in $\mathbb{R}^2$. If there exists a dotted graph $\Gamma_0$ overlapped by some regions $X$, denoted by $\Gamma'$, such that $\Gamma \cap D=\Gamma' \cap D$ and $X \cap D \neq \emptyset$, then we say that $\Gamma \cap D$ is *overlapped by regions $X \cap D$*. And we call the part of regions $X \cap D$ the *overlapping regions*. 
[\[def3-2\]]{#def3-2 label="def3-2"} Let $\Gamma$ and $\Gamma'$ be dotted graphs. Then, $\Gamma'$ is obtained from $\Gamma$ by a *dotted graph deformation*, or simply a *deformation*, if $\Gamma$ is changed to $\Gamma'$ by a local move in a disk $D\subset \mathbb{R}^2$ satisfying the following. 1. The boundary $\partial D$ contains no vertices (dots or crossings) of $\Gamma$ or $\Gamma'$, and $\Gamma \cap \partial D$ (respectively $\Gamma' \cap \partial D$) consists of transverse intersection points of edges of $\Gamma$ (respectively $\Gamma'$) with $\partial D$. 2. The dotted graphs are identical in the complement of $D$: $\Gamma \cap (\mathbb{R}^2\backslash D)=\Gamma' \cap (\mathbb{R}^2\backslash D)$. 3. The dotted graph $\Gamma \cap D$ in $D$ is deformed to $\Gamma' \cap D$ by one of the following. 1. Reduce several dots on an arc to one dot on the arc; see Figure [3](#Fig3){reference-type="ref" reference="Fig3"} (I). 2. Remove a circle component whose bounding disk has a nonzero label $\epsilon i$, where $\epsilon \in \{+1, -1\}$ and $i$ is a positive integer, and in a neighborhood of the disk the complement has the label $\epsilon (i-1)$; we impose no condition on the number of dots in the circle component (it may have dots or none); see Figure [3](#Fig3){reference-type="ref" reference="Fig3"} (II). 3. Remove a loop component whose bounding disk has a nonzero label $\epsilon i$, where $\epsilon \in \{+1, -1\}$ and $i$ is a positive integer, and in a neighborhood of the disk the region in the complement whose closure's boundary contains the loop component has the label $\epsilon (i-1)$; we deform the pair of adjacent arcs to one arc with a dot (respectively, with no dots) if the loop component has dots (respectively, has no dots); see Figure [3](#Fig3){reference-type="ref" reference="Fig3"} (III). 4.
A local move as illustrated in Figure [3](#Fig3){reference-type="ref" reference="Fig3"} (IV), where we require that there is a dot on each arc, the label $\epsilon i$ is not zero, and the arcs admit induced orientations. We call the part of the region before the deformation with the label $\epsilon i$ the *middle region*. Further, a deformation II/III/IV can be applied when the regions concerned are overlapped by several regions such that each overlapping region has the label $\epsilon$. See Remark [\[rem902\]](#rem902){reference-type="ref" reference="rem902"}. Further, we say that $\Gamma'$ is obtained from $\Gamma$ by a *local move* or a *deformation* $\mathcal{E}$ if $\Gamma$ is changed to $\Gamma'$ by a local move in a disk $D\subset \mathbb{R}^2$ satisfying the following. Conditions (1) and (2) above hold, and $\Gamma \cap D$ is an arc as in the left figure of Figure [3](#Fig3){reference-type="ref" reference="Fig3"} (I), where we impose no condition on the number of dots. Further, we assume that the regions concerned are overlapped by several regions $R$ such that each overlapping region has the label $\epsilon$. Then, $\Gamma \cap D$ is deformed to $\Gamma'\cap D$ by moving the arc by an ambient isotopy of $D$, ignoring $R$, such that the result does not create loop components. 1. Move an arc with/without dots by an ambient isotopy of $D$, ignoring overlapping regions $R$ satisfying the condition given above, such that the result does not create loop components. In particular, a dot may be moved ignoring $R$. ![Local deformations I--IV, where $\epsilon \in \{+1, -1\}$ and $i$ is a positive integer; we omit the orientations of the arcs and some of the labels of the regions. A deformation IV is applicable when the arcs admit induced orientations.
Deformations II, III, and IV can also be applied when the regions are overlapped by several regions such that each overlapping region has the label $\epsilon$.](B-fig16-3.pdf){#Fig3 height="5cm"} [\[def3-3\]]{#def3-3 label="def3-3"} Let $\Gamma$ be a dotted graph. We say that $\Gamma$ is *reducible* if some deformation I--IV can be applied to $\Gamma$. We call a dotted graph $\Gamma'$ a *reduced graph* of $\Gamma$ if $\Gamma'$ is not reducible and $\Gamma'$ is obtained from $\Gamma$ by a finite sequence of deformations I--IV. Further, we consider a specific deformation IV, denoted by $\mathcal{R}$ and called a *deformation IVa*, that satisfies one of the following conditions (a1) and (a2): 1. The arcs involved in $\mathcal{R}$ are adjacent arcs of a crossing, where we ignore overlapping regions, such that $\mathcal{R}$ creates a loop component to which a deformation III is applicable. 2. The deformation $\mathcal{R}$ creates a circle component $C$ from two concentric circle components such that a deformation II is applicable to $C$. We call deformations I, II, III, and IVa *good deformations*. We say that a sequence of deformations I, II, III, IVa is *in good order* if it satisfies the following rule: 1. After a deformation IVa satisfying (a1), next comes the deformation III that deletes the created loop component. 2. After a deformation IVa satisfying (a2), next comes the deformation II that deletes the created circle component. We call a dotted graph $\Gamma'$ a *good reduced graph* of $\Gamma$ if no good deformation I, II, III, IVa is applicable to $\Gamma'$ and $\Gamma'$ is obtained from $\Gamma$ by a finite sequence of good deformations I, II, III, IVa in good order. See [@N Figure 6.2] for an example of a non-empty reduced graph. **Proposition 2**. *Let $P$ be a lattice polytope, and let $\Gamma$ be the dotted graph associated with $P$.
Then, if $\Gamma$ is deformed to a dotted graph $\Gamma'$ by a sequence of good deformations I, II, III, IVa in good order, then there exists a sequence of normal/reversed transformations (see Definition [\[def818\]](#def818){reference-type="ref" reference="def818"}) of lattice polytopes from $P$ to a lattice polytope $P'$ whose associated dotted graph is $\Gamma'$, such that each transformation along a rectangle changes the label of the rectangle from $\epsilon i$ to $\epsilon (i-1)$ for some positive integer $i$ and $\epsilon\in \{+1, -1\}$.* *Proof.* The argument in the proof of Theorem [Theorem 6](#thm3-7){reference-type="ref" reference="thm3-7"} implies the required result. ◻ We remark that transformations of a lattice polytope along rectangles satisfying the condition given in Proposition [Proposition 2](#prop919){reference-type="ref" reference="prop919"} form a transformation of a lattice polytope with minimal area.\ Let $\mathcal{R}$ be a deformation IV and let $p_1$ and $p_2$ be the dots involved. The result of $\mathcal{R}$ is described as the result of band surgery [@Kawauchi] along an untwisted band, and the band is presented by a core between $p_1$ and $p_2$, that is, a simple arc in the middle region connecting $p_1$ and $p_2$; see the proof of Lemma [Lemma 13](#lem826){reference-type="ref" reference="lem826"} for the precise definition of a core. When the middle region has overlapping regions, we have choices of the band and the core, but the result of $\mathcal{R}$ is determined up to local moves $\mathcal{E}$. For a dotted graph $\Gamma$, we consider the following condition. Let $p_1$ and $p_2$ be a pair of dots of $\Gamma$ between which a deformation IV is applicable. We say that $\Gamma$ *satisfies the condition $\mathrm{(A)}$ with respect to $(p_1, p_2)$* if it satisfies the following condition: 1.
Let $\rho$ and $\rho'$ be cores of bands connecting $p_1$ and $p_2$, and let $\Gamma'$ (respectively $\Gamma''$) be the dotted graph obtained from $\Gamma$ by the deformation IV associated with $\rho$ (respectively $\rho'$). Then, $\Gamma'$ and $\Gamma''$ are related by local moves $\mathcal{E}$, for all possible cores $\rho, \rho'$. We say that $\Gamma$ *satisfies the condition* (A) if it satisfies the condition (A) with respect to all pairs of dots. Further, we say that a sequence of deformations *satisfies the condition* (A) if every dotted graph appearing in it satisfies the condition (A). We remark, informally, that if the "middle region" contains connected components that cannot be regarded as overlapping regions, then the dotted graph does not satisfy the condition (A). **Theorem 3**. *Let $\Gamma$ be a dotted graph, and let $\Gamma'$ be a reduced graph of $\Gamma$ whose associated sequence of deformations satisfies the condition $\mathrm{(A)}$ and does not contain a deformation IV applied between a circle/loop component $C$, to which a deformation II/III is applicable, and an arc of an overlapping region of $C$. Then, $\Gamma'$ is uniquely determined up to local moves $\mathcal{E}$ and deformations I.* Let $\mathcal{R}$ be a deformation consisting of deformations I--IV satisfying the following property. 1. Let $\Gamma$ be a dotted graph to which $\mathcal{R}$ is applicable, and let $\Gamma'$ be the dotted graph obtained by $\mathcal{R}$. Then, the number of dots of $\Gamma'$ is equal to or less than that of $\Gamma$, and the number of crossings of $\Gamma'$ is equal to or less than that of $\Gamma$. We call a deformation satisfying property (X) a *deformation X*. Then we have the following theorem; see Remark [\[rem920\]](#rem920){reference-type="ref" reference="rem920"}. **Theorem 4**. *For a dotted graph $\Gamma$, any sequence of deformations consisting of deformations X is finite.
In particular, there exists a good reduced graph of $\Gamma$.* # Proofs of Theorems [Theorem 3](#prop3-5){reference-type="ref" reference="prop3-5"} and [Theorem 4](#r-prop3-5){reference-type="ref" reference="r-prop3-5"} {#sec-proof} We modify the proof of Lemma 6.2 in [@N]. See also Remark [\[rem902\]](#rem902){reference-type="ref" reference="rem902"}. * Proof of Theorem [Theorem 3](#prop3-5){reference-type="ref" reference="prop3-5"}.* By Lemma [Lemma 10](#rem915){reference-type="ref" reference="rem915"}, it suffices to show the following: (1) when we have graphs $\Gamma_1$ and $\Gamma_2$ obtained from $\Gamma$ by a finite sequence of deformations I--IV and local moves $\mathcal{E}$ satisfying the condition (A), $\Gamma_1$ and $\Gamma_2$ can be deformed by deformations I--IV and local moves $\mathcal{E}$ to the same dotted graph, where we do not apply a deformation IV between the arc used in a local move $\mathcal{E}$ and an arc of its overlapping region. If there are two dots $p_1$ and $p_2$ on an arc $\alpha$, we can apply a deformation IV between $p_1$ and $p_2$. By the condition on the labels of the overlapping regions of the middle region and the condition (A), we see that any deformation IV between $p_1$ and $p_2$ deforms $\alpha$ into an arc $\alpha'$ and a circle component $C$, where $\alpha'$ is an arc with a dot obtained from $\alpha$ by a deformation $\mathcal{E}$ and a deformation I, and $C$ is a circle component with a dot to which a deformation II is applicable; we remark that by the former condition, any core associated with the deformation IV, which is an arc connecting $p_1$ and $p_2$ with no self-intersections, does not intersect $\alpha$ except at the endpoints $p_1$ and $p_2$.
If there is a pair of arcs $\alpha, \alpha'$ with dots between which deformations IV are applicable, then, no matter how many times we apply deformations IV, the resulting dotted graphs can be deformed to the same dotted graph as when we consider $\alpha, \alpha'$ with one dot on each arc; see the left figure of Figure [4](#B-fig11){reference-type="ref" reference="B-fig11"}. Similarly, if there is an arc $\alpha$ with dots and several arcs such that deformations IV are applicable between $\alpha$ and the other arcs, then, no matter how many times we apply deformations IV between $\alpha$ and the other arcs, the resulting dotted graphs can be deformed to the same dotted graph as when we consider $\alpha$ with one dot; see the right figure of Figure [4](#B-fig11){reference-type="ref" reference="B-fig11"}. Further, deformations II and III are not affected by the number of dots. So we can assume that there is at most one dot on each arc. See also the proof of Lemma [Lemma 13](#lem826){reference-type="ref" reference="lem826"}. If we have arcs to which both deformations II and IV are applicable, then, since by assumption we do not have a deformation IV applied between a circle component $C$ to which a deformation II is applicable and an arc of an overlapping region of $C$, the resulting dotted graphs can be deformed to the same dotted graph; see Figure [5](#B-fig12){reference-type="ref" reference="B-fig12"}. We remark that we do not have a deformation IV between the arc used in the local move $\mathcal{E}$ and an arc of its overlapping region. By Lemma [Lemma 11](#lem913){reference-type="ref" reference="lem913"}, there do not exist arcs to which both deformations III and IV are applicable.
When a deformation II or III can be applied, and the interior of the disk $D$ bounded by the circle/loop component of the deformation II or III contains arcs to which a deformation IV is applicable, then, since any region in $D$ has a nonzero total label, by Claim [Claim 5](#rem2-6){reference-type="ref" reference="rem2-6"}, after the deformation IV the regions in $D$ still have nonzero total labels, and the deformation II or III is applicable after the deformation IV. Further, by Lemma [Lemma 12](#lem912){reference-type="ref" reference="lem912"}, when we have a circle/loop component $C$ to which a deformation II/III is applicable such that the interior of the disk bounding $C$ contains another circle/loop component $C'$ to which a deformation II or III is applicable, the result of applying the deformations II/III to $C$ and then to $C'$ is the same as that of applying them to $C'$ and then to $C$, up to local moves $\mathcal{E}$. Hence we see the following. Consider a dotted graph $\Gamma$ that deforms to $\Gamma'$ by a deformation II or III. We denote by $G$ the circle/loop components and arcs of $\Gamma$ to which deformations I--IV are applicable in $\Gamma$ and which satisfy $G \cap \Gamma'=G$. Then the same deformations I--IV are applicable to $G$ in $\Gamma'$, where two deformations are the *same* if they are the same as deformations of graphs; we remark that the labels of the regions might differ. Further, if there are several circle/loop components to which deformations II or III are applicable and which overlap one another, then, since the regions of the loop/circle components have all positive or all negative labels, we can delete all the components by applying deformations II or III, and the result is independent of the order of the deformations. Next we consider deformations IV. By taking the middle region to be sufficiently thin, we may assume that the middle region does not contain circle/loop components. We recall that the sequence of deformations satisfies the condition (A).
We consider the case when we have several choices of deformations IV. Suppose that there are three dots $p_1, p_2, p_3$ and we have two choices of applying a deformation IV: between $(p_1, p_2)$ or $(p_1, p_3)$. Then we can also apply a deformation IV to the other pair of dots $(p_2, p_3)$, and the result of each choice can be deformed to the same dotted graph (up to local moves $\mathcal{E}$), as shown in the top and middle rows of Figure [6](#C-fig2){reference-type="ref" reference="C-fig2"}; we remark that in the middle figure of the top row in Figure [6](#C-fig2){reference-type="ref" reference="C-fig2"}, by inspecting the labels of the regions, we cannot apply a deformation IV to any pair of dots other than those given in the figure. Thus, we see that if a dotted graph $\Gamma$ has several parts to each of which a deformation I--IV, denoted by $\mathcal{R}_i$, is applicable, then, for any $i$, the dotted graph $\Gamma'_i$ obtained by $\mathcal{R}_i$ can be deformed to the same dotted graph by deformations I--IV and $\mathcal{E}$. This implies (1), and we have the required result. ◻ ![If there is a pair of arcs $\alpha, \alpha'$ with dots between which deformations IV are applicable, then, no matter how many times we apply deformations IV, the resulting dotted graphs can be deformed to the same dotted graph as when we consider $\alpha, \alpha'$ with one dot on each arc (left figure). If there is an arc $\alpha$ with dots and several arcs such that deformations IV are applicable between $\alpha$ and the other arcs, then, no matter how many times we apply deformations IV between $\alpha$ and the other arcs, the resulting dotted graphs can be deformed to the same dotted graph as when we consider $\alpha$ with one dot (right figure).
We omit orientations of arcs and labels of regions.](E-fig3.pdf){#B-fig11 height="4.5cm"} ![If we have arcs to which both deformations II and IV are applicable, then the resulting dotted graphs can be deformed to the same dotted graph, where we omit the orientations of arcs and labels of regions; we remark that in the lower figure, the shaded region is the overlapping region. We also remark that the upper figure is [@N Figure 6.4].](F-fig3.pdf){#B-fig12 height="6.5cm"} ![A deformation IV can be regarded as the result of band surgery along an untwisted band (see the proof of Lemma [Lemma 13](#lem826){reference-type="ref" reference="lem826"}). In the top and bottom rows of figures, we indicate cores of bands by dotted arcs. We assume that each dotted graph appearing satisfies the condition (A) (see Theorem [Theorem 3](#prop3-5){reference-type="ref" reference="prop3-5"}). If there are $n$ dots between which there are several possible sequences of $n-1$ deformations IV, then the result is independent of the choice of sequence up to local moves $\mathcal{E}$ (the middle row). The deformation sequence can be regarded as band surgery along mutually disjoint bands such that the endpoints of the cores are in the neighborhoods of the dots, where we omit dots in the figures (the bottom row). ](C-fig2.pdf){#C-fig2 height="7cm"} * Proof of Theorem [Theorem 4](#r-prop3-5){reference-type="ref" reference="r-prop3-5"}.* Let $\Gamma=\Gamma_0, \Gamma_1, \Gamma_2, \ldots, \Gamma_n$ be the dotted graphs that appear in a sequence of deformations X. We denote by $\mathcal{R}_i$ the deformation such that $\Gamma_{i+1}$ is obtained from $\Gamma_i$ by $\mathcal{R}_i$. When $\mathcal{R}_i$ is a deformation I, the number of dots of ${\Gamma}_{i+1}$ is less than that of ${\Gamma}_{i}$, and the number of crossings of ${\Gamma}_{i+1}$ equals that of ${\Gamma}_{i}$.
When $\mathcal{R}_i$ is a deformation III, the number of dots of ${\Gamma}_{i+1}$ is equal to or less than that of ${\Gamma}_{i}$, and the number of crossings of ${\Gamma}_{i+1}$ is less than that of ${\Gamma}_{i}$. Thus, in any sequence of deformations I--IV, the number of deformations I and III is finite. Since a deformation II decreases the number of circle components, any sequence consisting only of deformations II is finite. If a deformation IV creates a circle component to which a deformation II is applicable, then the circle component has a dot; hence the resulting graph can be deformed by a deformation II to a dotted graph whose number of dots is less than that of the initial dotted graph and whose number of crossings is equal to or less than that of the initial dotted graph. We have a similar situation if a deformation III creates a circle component to which a deformation II is applicable. Thus, in any sequence of deformations I--IV, the number of deformations II is also finite. Hence, since a deformation X consists of deformations I--IV, it suffices to show that for an arbitrary dotted graph $\Gamma$, there does not exist a sequence of deformations containing an infinite number of deformations IV. This is shown in Lemma [Lemma 13](#lem826){reference-type="ref" reference="lem826"}. Thus, for a dotted graph $\Gamma$, any sequence of deformations X applicable to $\Gamma$ is finite. Now we show that good deformations I, II, III, and IVa satisfy the property (X). Deformations I, II, and III clearly satisfy the property (X). From now on, we show that a good deformation IVa satisfies the property (X). We call a deformation IVa satisfying the condition (a1) (respectively (a2)) a deformation IV (a1) (respectively IV (a2)). For a deformation IV (a1), we take the core sufficiently near the adjacent edges of the crossing.
When $\mathcal{R}_i$ is a deformation IV (a1), the number of dots of $\Gamma_{i+2}$ is equal to or less than that of ${\Gamma}_{i}$, and the number of crossings of $\Gamma_{i+2}$ is less than that of $\Gamma_i$. When $\mathcal{R}_i$ is a deformation IV (a2), the number of dots of $\Gamma_{i+2}$ is less than that of ${\Gamma}_{i}$, and the number of crossings of $\Gamma_{i+2}$ is less than or equal to that of $\Gamma_i$. Thus a deformation IVa is a deformation X. Hence, for a dotted graph $\Gamma$, there exists a good reduced graph of $\Gamma$, which is the required result. ◻ [\[rem920\]]{#rem920 label="rem920"} We remark that by the same argument as in the proof of Theorem [Theorem 4](#r-prop3-5){reference-type="ref" reference="r-prop3-5"}, we can see the following. We denote by a deformation IVb a deformation IV that does not have overlapping regions. Then, for a dotted graph $\Gamma$, we can construct a sequence consisting of a finite number of deformations I, II, III, and IVb, so that the result is a dotted graph to which we cannot apply deformations I, II, III, or IVb. # Deformations of admissible dotted graphs and transformations of lattice polytopes {#deformation} In this section, we investigate the relation between deformations of admissible dotted graphs and transformations of lattice polytopes. We show that if an admissible dotted graph is deformed to the empty graph by a sequence of good deformations in good order, then its associated lattice polytope has a transformation with minimal area, and we obtain deformations of dotted graphs describing a transformation of lattice polytopes with minimal area (Theorem [Theorem 6](#thm3-7){reference-type="ref" reference="thm3-7"}).
Further, we show that if a dotted graph has "many" dots, then it is admissible (Proposition [Proposition 7](#prop3-8){reference-type="ref" reference="prop3-8"}), and that a dotted graph with "many" dots is reduced to the empty graph (Proposition [Proposition 8](#prop3-9){reference-type="ref" reference="prop3-9"}). We give deformations I\*--IV\* with a weaker condition for application than that given in Definition [\[def3-2\]](#def3-2){reference-type="ref" reference="def3-2"}, as follows. [\[def3-7\]]{#def3-7 label="def3-7"} We use the notation of Definition [\[def3-2\]](#def3-2){reference-type="ref" reference="def3-2"}. For a dotted graph, we define *deformations I\*, II\*, III\*, IV\** as deformations I, II, III, IV, respectively, such that we allow overlapping regions for deformation I\*, and for the other deformations we change the condition on overlapping regions as follows. 1. A deformation Y\* (Y=I, II, III, IV) can be applied when the regions concerned are overlapped by some regions such that each region in the region with the label $\epsilon i$ in Figure [3](#Fig3){reference-type="ref" reference="Fig3"} has a nonzero total label. Further, we impose the additional condition that for a deformation II\*/III\* (respectively IV\*), the union of the closures of the overlapping regions does not contain in its interior the disk bounded by the circle/loop component (respectively the middle region). **Claim 5**.
*Let $D$ be the disk bounded by a circle/loop component, or the middle region, where we can apply a deformation II\*, III\*, IV\*: before applying the deformation, among the overlapping regions in $D$ there does not exist a region with the total label zero; further, the union of the overlapping regions does not contain in its interior the disk or the middle region $D$.* *Then, the total labels of the regions in the intersection of the overlapping regions and $D$ are all positive or all negative, and the total label of a region does not change (respectively changes from $\epsilon j$ to $\epsilon (j-1)$ for some positive integer $j$ and $\epsilon \in \{+1, -1\}$) under the deformation if it lies in the overlapped region that does not change (respectively changes from $\epsilon i$ to $\epsilon (i-1)$ for some positive integer $i$) under the deformation.* *Proof.* We show the case of a deformation IV; the other cases are shown similarly. Let $\epsilon i$ be the label of $D$, and let $x$ be a point in $D$ such that $x$ is not in the overlapping regions. Let $\rho$ be a path in $D$ connecting $x$ and another point in $D$. Then, the labels of the regions crossed by $\rho$ form a sequence beginning with $\epsilon i$ such that adjacent labels differ by $\pm 1$. Since there does not exist a region with the total label zero among the overlapping regions in $D$, and the total label of the region containing $x$ is $\epsilon i$, the sequence does not contain zero; hence its entries are all positive if $\epsilon=+1$, or all negative if $\epsilon=-1$. Since we can choose $\rho$ to intersect any given region in $D$, we have the result for the signs of the labels of the regions in $D$. Further, we can see that the total labels satisfy the required condition. Hence we have the required result.
◻ Besides deformations I\*--IV\* given in Definition [\[def3-7\]](#def3-7){reference-type="ref" reference="def3-7"}, we consider other new types of deformations: we call a deformation as in Figure [8](#C-fig9){reference-type="ref" reference="C-fig9"} (V) a *deformation V*, and we define a *local move $\mathcal{E}^*$* as the local move induced from the definition of $\mathcal{E}$ by changing the condition on overlapping regions to that of I\* and, moreover, removing the condition that it does not create loop components. See Remark [\[rem902\]](#rem902){reference-type="ref" reference="rem902"} (4). **Theorem 6**. *Let $\Gamma$ be an admissible dotted graph associated with a lattice polytope $P$. Then we have the following.* 1. *If $\Gamma$ has a good reduced graph that is the empty graph, then there exists a transformation of $P$ with minimal area, that is, a transformation satisfying the equality of ([\[eq2\]](#eq2){reference-type="ref" reference="eq2"}).* 2. *If there exists a transformation of $P$ with minimal area, then $\Gamma$ is deformed to the empty graph by a sequence of deformations II\*, III\*, IV\*, V and $\mathcal{E}^*$.* *Proof.* First we show (1). Suppose that $\Gamma$ has a good reduced graph that is the empty graph. By an argument as in the proof of [@N Theorem 5.9], for each deletion of a circle/loop component, there exists a corresponding transformation of the lattice polytope realizing the minimal area; here we remark that in the case of a loop component, the transformation of the lattice polytope might consist of normal transformations and reversed transformations; see Definition [\[def818\]](#def818){reference-type="ref" reference="def818"} for these notions.
When we have a deformation IVa satisfying the condition (a2) that creates a circle component $C$ from two concentric circle components, followed by a deformation II that deletes $C$, the lattice polytope satisfies the condition (2) in [@N Theorem 5.9], and hence there exists a transformation of the lattice polytope that realizes these deformations. For each deformation IVa satisfying the condition (a1), applied between adjacent arcs of a crossing, there exists a normal/reversed transformation of the lattice polytope that realizes the deformation IVa by Lemma [Lemma 15](#lem-822){reference-type="ref" reference="lem-822"}, namely the transformation shown in Figure [7](#F-fig1){reference-type="ref" reference="F-fig1"}. We denote the resulting lattice polytope by $P'$. Then, as shown in Figure [7](#F-fig1){reference-type="ref" reference="F-fig1"}, the result of the transformation of the dotted graph, denoted by $G$, is not the dotted graph associated with $P'$; but since the deformation sequence is in good order, we delete the loop component of $G$ by a deformation III, and the result is the dotted graph associated with $P'$. Hence, by Lemma [Lemma 16](#lem-821){reference-type="ref" reference="lem-821"}, we have a transformation of the lattice polytope with minimal area, consisting only of normal transformations. Thus we have the required result. Next we show (2). Suppose that there exists a transformation of $P$ with minimal area. We translate the transformation into a sequence of its associated dotted graphs. Since the transformation of $P$ has minimal area, each transformation along a rectangle $R$ satisfies the following: any region in $R$ has a nonzero label, and in the resulting lattice polytope, each region in $R$ has a label whose absolute value is less than that of the former region by one.
We see that each transformation of a lattice polytope along a rectangle is presented by one of the deformations I\*, II\*, III\*, IV\*, V, or by a deformation as in Figure [7](#F-fig1){reference-type="ref" reference="F-fig1"} or Figure [8](#C-fig9){reference-type="ref" reference="C-fig9"} (VI) (or their mirror images). We draw in Figure [9](#B-fig17){reference-type="ref" reference="B-fig17"} transformations of a lattice polytope whose associated dotted graphs are related by deformations I\*--IV\*. Since we have a sequence of dotted graphs from $\Gamma$ to the empty graph, we can ignore deformations I\*; the dots will be deleted by deformations II\* and III\*. A deformation as in Figure [7](#F-fig1){reference-type="ref" reference="F-fig1"} is realized by a sequence of deformations IV\* and III\*, and a deformation VI is realized by a sequence of deformations I\*, III\* and $\mathcal{E}^*$; see Figure [10](#B-fig18){reference-type="ref" reference="B-fig18"}. Thus, we have a sequence of deformations from $\Gamma$ to the empty graph consisting of deformations II\*, III\*, IV\*, V and $\mathcal{E}^*$, which is the required result. ◻ ![Transformations of a lattice polytope along a rectangle and the corresponding sequence of deformations IVa and III in good order, where we omit orientations of edges and labels of regions. In the figure, (IV) can be (IVa) or (IV\*), and (III) can be (III\*).](F-fig1.pdf){#F-fig1 height="4cm"} ![Transformations V and VI of lattice polytopes along a rectangle, where we omit orientations of edges and labels of regions; in the bracket we draw a deformation V of a dotted graph, where $\epsilon \in \{+1, -1\}$ and $i$ is a positive integer.
](C-fig9.pdf){#C-fig9 height="5cm"} ![Transformations of a lattice polytope along a rectangle corresponding to deformations I\*--IV\*, where in the figures we omit the index "$*$", and we omit orientations of edges and labels of regions.](C-fig7.pdf){#B-fig17 height="4cm"} ![A transformation VI is represented by a sequence consisting of deformations I\*, III\*, $\mathcal{E}^*$, where in the figures we omit the index "$*$" and we omit orientations of edges and labels of regions. ](E-fig1-2.pdf){#B-fig18 height="6cm"} A dotted graph is admissible when it has "many" dots. **Proposition 7**. *Let $\Gamma$ be a dotted graph such that each arc has at least two dots. Then, $\Gamma$ is admissible.* *Proof.* For a lattice polytope $P$, we call the part of $P$ corresponding to an arc (respectively a crossing) of its associated dotted graph an *arc* (respectively a *crossing*) of $P$. When we consider an arc of a lattice polytope with at least two initial vertices, we can construct an arc $\alpha$ connecting a given pair of distinct points such that the edges meeting the points are in given directions; that is, $\alpha$ consists of edges such that the initial edge is in the $a$-direction with orientation coherent/incoherent with the $a$-axis, and the terminal edge is in the $b$-direction with orientation coherent/incoherent with the $b$-axis, for any given pair $(a,b)$ and orientations ($a,b \in \{x,y\}$). Hence, by fixing each crossing of $\Gamma$ to be a crossing consisting of edges in the $x$-direction and the $y$-direction, we can construct the corresponding lattice polytope. Thus $\Gamma$ is admissible. ◻ **Proposition 8**. *Let $\Gamma$ be a dotted graph such that each arc has at least one dot. Then, the reduced graph of $\Gamma$ is the empty graph.
*Further, if $\Gamma$ is admissible, then any lattice polytope associated with $\Gamma$ has a transformation with minimal area.* *Proof.* We show that (1) we can deform $\Gamma$ by good deformations I, II, III, IVa in good order to a dotted graph $\Gamma'$ consisting only of mutually disjoint circle components such that each circle component has at least one dot, and (2) we can deform the dotted graph $\Gamma'$ by deformations I--IV to the empty graph, and (3) if $\Gamma$ is admissible, then, for a lattice polytope whose associated dotted graph is $\Gamma'$, there exists a transformation with minimal area. Then, together with Proposition [Proposition 2](#prop919){reference-type="ref" reference="prop919"}, we have the required result. We show (1). Since each arc has a dot, we see that around each crossing we have a pair of adjacent arcs between which a deformation IVa is applicable, if they do not form a loop component. So we apply deformations IVa for such arcs, and then we apply deformations III to delete the created loop components. Then we have a dotted graph consisting of circle components that are mutually disjoint such that each circle component has at least one dot. We show (2). From now on, we call a circle component simply a *circle*. We consider a set of concentric circles in $\Gamma'$ from an outermost circle to an innermost circle. Then, let $\rho$ be a path from the outermost region to the innermost disk such that $\rho$ crosses each concentric circle exactly once. Then, the labels of the regions crossed by $\rho$ form a sequence of integers from zero to some $\epsilon n$ for a positive integer $n$ and $\epsilon \in \{+1, -1\}$. If the sequence does not contain zero other than the initial zero, then $n$ is not zero and the sequence ends with $\epsilon (n-1), \epsilon n$; hence we apply a deformation II and delete the innermost circle. 
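A minimal illustrative instance of this first case (our own toy example, not taken from the construction above): suppose $\rho$ crosses four nested, coherently oriented circles.

```latex
% Labels read off along \rho for four coherently oriented nested
% circles (\epsilon = +1, n = 4); the sequence never returns to 0
% and ends with \epsilon(n-1), \epsilon n, so deformation II applies:
\[
  0,\ 1,\ 2,\ 3,\ 4 \;=\; 0,\ \epsilon 1,\ \epsilon 2,\ \epsilon(n-1),\ \epsilon n,
\]
% deleting the innermost circle shortens the sequence to 0, 1, 2, 3.
```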
If the sequence contains zero other than the initial zero, then we have a subsequence $0, \epsilon 1, \epsilon 2, \ldots, \epsilon (n-1), \epsilon n, \epsilon (n-1), \ldots, \epsilon 2, \epsilon 1, 0$, for a positive integer $n$ and $\epsilon \in \{+1, -1\}$. Let $C_1$ and $C_2$ be the concentric circles whose arcs bound the region with the label $\epsilon n$ such that $C_1$ is the outer circle. We denote by $R$ the region with the label $\epsilon n$. The boundary of the closure of $R$, denoted by $\partial R$, consists of $C_1$, $C_2$ and several circles, denoted by $C_3, \ldots, C_m$. (Case 1) If the circles other than $C_1$ have the same orientations, then the regions adjacent to $R$ in the disks bounded by $C_2, \ldots, C_m$ have the label $\epsilon (n-1)$, and we apply deformations IV between $C_1, \ldots, C_m$ to make $\partial R$ into one circle and then we apply a deformation II to delete $R$. (Case 2) If there exists a circle $C_j$ $(j=3, \ldots, m)$ with orientation opposite to that of $C_2$, then the region adjacent to $R$ in the disk bounded by $C_j$ has the label $\epsilon(n+1)$, and we take a new path $\rho$ from $R$ to an innermost circle in the disk bounded by $C_j$. If the sequence of the labels does not contain $\epsilon n$ other than the initial $\epsilon n$, then we can delete the innermost circle by a deformation II. If the sequence of the labels contains $\epsilon n$ other than the initial $\epsilon n$, then we consider the circles bounding the region whose label has the largest absolute value, and repeat the same process as above. Since the circles of $\Gamma'$ are finite, by repeating this process, we can delete circles by the method described in (Case 1) or by a deformation II, until we have the empty graph. We show (3). Suppose that $\Gamma'$ is admissible. 
We recall that for a lattice polytope $P$ whose dotted graph contains a circle component $C$ applicable of a deformation II, there exists a sequence of transformations of the lattice polytopes from $P$ to $P\backslash C$ realizing the minimal area, where we denote by the same notation $C$ the part of $P$ corresponding to the circle component $C$. We consider the deformation described in (Case 1) in the above argument. Let $P$ be a lattice polytope whose dotted graph is $\Gamma'$. We denote by the same notation $\partial R$ the part of $P$ corresponding to $\partial R$ in $\Gamma'$. Then, by [@N Theorem 5.9], there exists a sequence of transformations of the lattice polytopes from $P$ to $P\backslash \partial R$ realizing the minimal area. Thus, by the argument concerning (2), we see that there exists a transformation of $P$ with minimal area. Hence we have the required result. ◻ By the proof of Proposition [Proposition 8](#prop3-9){reference-type="ref" reference="prop3-9"}, we have the following. **Proposition 9**. *Let $\Gamma$ be a dotted graph such that each arc has at least one dot. Then, $\Gamma$ is deformed by a sequence of good deformations I, II, III, IVa in good order to a dotted graph $\Gamma'$ consisting only of mutually disjoint circle components such that each circle component has at least one dot.* # Remark and Lemmas {#sec-lemma} [\[rem902\]]{#rem902 label="rem902"} Let $\Gamma$ be a dotted graph. (1) If we use deformations II\* and III\* instead of deformations II and III in Theorem [Theorem 3](#prop3-5){reference-type="ref" reference="prop3-5"}, then deformations $\mathcal{E}$ might affect the results of deformations II/III and IV. Similarly, if we use deformations IV\* instead of deformations IV in Theorem [Theorem 3](#prop3-5){reference-type="ref" reference="prop3-5"}, then the commutativity of the order of deformations IV and Lemma [Lemma 14](#lem0901){reference-type="ref" reference="lem0901"} do not hold. 
(2) Suppose that we introduce a deformation III' that deletes a loop component with no condition for the labels of regions. Then, an application of a deformation IV to a loop component $C$ is possible where $C$ is applicable of a deformation III', and the result of a deformation IV is the result of a deformation III' and a deformation V as in Figure [8](#C-fig9){reference-type="ref" reference="C-fig9"}; see Figure [11](#B-fig14){reference-type="ref" reference="B-fig14"}, and we cannot use the argument in the proof of Theorem [Theorem 3](#prop3-5){reference-type="ref" reference="prop3-5"}. (3) If we include deformations V and $\mathcal{E}$ besides deformations I--IV, there exists a dotted graph $\Gamma$ and a sequence of deformations whose result is $\Gamma$ itself; there exists an infinite loop of deformations, and we cannot have a reduced graph, and we cannot use the argument in the proof of Theorem [Theorem 4](#r-prop3-5){reference-type="ref" reference="r-prop3-5"}. For example, see Figure [12](#C-fig4){reference-type="ref" reference="C-fig4"}. (4) Concerning Theorem [Theorem 6](#thm3-7){reference-type="ref" reference="thm3-7"} (2), by a more refined argument, we can take a new deformation sequence consisting of deformations I\*--IV\*, V, $\mathcal{E}^{**}$ such that each local move $\mathcal{E}^{**}$ satisfies the condition that it does not create loop components. ![If we have arcs where both deformations III' and IV are applicable, then the resulting dotted graphs need a deformation V to be deformed locally to the same dotted graph, where we omit the orientations of arcs. We remark that by the condition for labels of regions, a deformation III is not applicable. ](E-fig2.pdf){#B-fig14 height="4.5cm"} ![There exists a dotted graph $\Gamma$ admitting a sequence of deformations V and $\mathcal{E}$ whose result is $\Gamma$ itself.](C-fig4.pdf){#C-fig4 height="3.5cm"} **Lemma 10**. 
*In the proof of Theorem [Theorem 3](#prop3-5){reference-type="ref" reference="prop3-5"}, (1) implies the uniqueness of the reduced graph up to local moves $\mathcal{E}$ and deformations I.* *Proof.* It suffices to show that the possibilities of application of deformations II--IV are the same before and after the application of a local move $\mathcal{E}$, where we do not apply deformations IV between the arc used in the local move $\mathcal{E}$ and an arc of its overlapping region. Let $\Gamma$ be a dotted graph and let $\Gamma'$ be the dotted graph obtained from $\Gamma$ by applying a local move $\mathcal{E}$. We denote the local move $\mathcal{E}$ by $\mathcal{R}$, and we denote the arcs concerned before and after the application by $\alpha$ and $\alpha'$, respectively. Further, we denote by $R_0$ the region with the label $\epsilon i$ in Figure [3](#Fig3){reference-type="ref" reference="Fig3"} (I) (with/without dots) in Definition [\[def3-2\]](#def3-2){reference-type="ref" reference="def3-2"}, and we denote by $X_1, \ldots, X_m$ the overlapping regions such that each $X_i$ $(i=1,\ldots,m)$ has the label $\epsilon$, and we put $R=R_0\cap (X_1 \cup \ldots \cup X_m)$. By inspecting the labels of the regions, we see that if a deformation IV is applicable to the arc $\alpha$ (respectively $\alpha'$) and some arc $\alpha''$, then a corresponding deformation IV between the arc $\alpha'$ and $\alpha''$ (respectively $\alpha$ and $\alpha''$) is also applicable. If $\mathrm{Int}(R)$ contains a middle region applicable of a deformation IV, denoted by $D$, then we can also apply a deformation IV to $D$ after the application of $\mathcal{R}$. Thus, $\Gamma'$ (respectively $\Gamma$) does not admit a deformation IV that is not induced from one of those applicable to $\Gamma$ (respectively $\Gamma'$). Further, $\mathcal{R}$ does not create a circle component, and by assumption, $\mathcal{R}$ does not create a loop component. 
Thus, $\mathcal{R}$ does not create a circle/loop component, so $\Gamma'$ does not admit deformations II/III that are not induced from those applicable to $\Gamma$. If the closure of $R$ intersects a circle/loop component applicable of a deformation II/III, denoted by $C$, then we can also apply a deformation II/III to $C$ after the application of $\mathcal{R}$. Thus, $\Gamma'$ (respectively $\Gamma$) does not admit a deformation II/III that is not induced from one of those applicable to $\Gamma$ (respectively $\Gamma'$). Thus, local moves $\mathcal{E}$ do not affect deformations II, III and IV. Local moves $\mathcal{E}$ might change arcs with dots to one arc with dots (or move a dot into another arc with a dot). Hence we see that (1) in the proof of Theorem [Theorem 3](#prop3-5){reference-type="ref" reference="prop3-5"} implies the uniqueness of the reduced graph up to local moves $\mathcal{E}$ and deformations I. ◻ **Lemma 11**. *There do not exist arcs where both deformations III and IV are applicable, under the assumption that we do not apply a deformation IV between a loop component $C$ applicable of a deformation III and an arc of an overlapping region of $C$.* *Proof.* By inspecting the labels of the regions, we have the required result. See Figure [11](#B-fig14){reference-type="ref" reference="B-fig14"}. ◻ **Lemma 12**. *When we have a circle/loop component $C$ applicable of a deformation II/III such that the interior of the disk bounded by $C$ contains another circle/loop component $C'$ applicable of a deformation II/III, the result of the deformations II/III to $C$ and then $C'$ is the same as that of the deformations II/III to $C'$ and then $C$, up to local moves $\mathcal{E}$.* *Proof.* Let $D$ and $D'$ be the disks bounded by $C$ and $C'$, respectively. By inspecting the labels of the regions, we see that the total labels in $D$ are all positive or all negative, and $C$ and $C'$ have coherent orientations as concentric circles. 
Hence the result of the deformations II/III to $C$ and then $C'$ is the same as that of the deformations II/III to $C'$ and then $C$, up to local moves $\mathcal{E}$. ◻ **Lemma 13**. *Let $\Gamma$ be a dotted graph. Then, there does not exist a sequence of deformations consisting of infinitely many deformations IV.* *Proof.* Assume that there is a sequence of deformations IV such that the number of deformations is greater than an arbitrarily large number $n$. A deformation IV can be regarded as the result of a band surgery along an untwisted band [@Kawauchi], as follows. Let $\Gamma'$ be a dotted graph where a deformation IV is applied, and let $\Gamma''$ be the resulting dotted graph. Let $B$ be the disk that is the intersection of the region with the label $\epsilon i$ of $\Gamma'$ and the region with the label $\epsilon (i-1)$ of $\Gamma''$. We assume that $B$ does not contain the pair of dots. Then, $B$ is a band attached to $\Gamma'$, and the deformation IV is the result of the band surgery along $B$. The *core* of the band $B$ is a simple arc connecting points of arcs of $\Gamma'$ that is a retraction of $B$. By the condition for the labels of the overlapping regions, the core is determined up to local moves $\mathcal{E}$. With the same notation, let $B$ be the band attached to $\Gamma'$ such that $\Gamma''$ is the result of band surgery along $B$. We call the arcs $\Gamma' \cap B$ the *ends* of the band $B$ attaching to $\Gamma'$. We often assume that the ends are in the neighborhoods of the pair of dots, and we present the core by an arc connecting the dots. We remark that when we deform the band (and the core) by local moves $\mathcal{E}$, the result of the band surgery along the deformed band is related to the result of the original deformation IV by local moves $\mathcal{E}$. 
Thus, a deformation IV is determined by the core of the band, up to local moves $\mathcal{E}$. When we have a sequence of deformations IV from $\Gamma$ to some dotted graph, the sequence is presented as a set of the bands of the deformations attached to $\Gamma$. By applying local moves $\mathcal{E}$ if necessary, we assume that the bands are sufficiently thin, and hence they are presented by the cores. For each dot $p_i$ in $\Gamma$, we denote by the same notation $p_i$ the dot coming from $p_i$ in the resulting dotted graphs of the deformations IV; we remark that we have two choices of $p_i$ for each deformation IV. Let $n_i$ be the number of deformations IV applied to $p_i$. Let $N$ be the number of dots $p_i$ with $n_i>0$. By changing the indices, we arrange that $p_1, \ldots, p_N$ are the dots with $n_i>0$ $(i=1, \ldots, N)$; see Figure [6](#C-fig2){reference-type="ref" reference="C-fig2"}. Then, the cores of bands form a graph $G$ in $\mathbb{R}^2$ consisting of edges and vertices of degree one called *endpoints*, and vertices $v_i$ of degree $n_i+1$ such that the endpoints are dots $p_i$ of $\Gamma$ $(i=1, \ldots, N)$. The result of the deformations IV is obtained from $\Gamma$ by the surgery along the regular neighborhood $N(G)$ of $G$, which is defined as the closure of $(\Gamma \cup \partial N(G))\backslash (a_1 \cup \cdots \cup a_N)$, where $N(G)$ is the union of bands associated with cores, and $a_i$ is the end of the band containing $p_i$ in $\Gamma$ $(i=1, \ldots, N)$. By regarding the vertex $v_i$ of degree $n_i+1$ as the dot $p_i$, we may regard the result of deformations IV as the result of band surgery along bands such that for each $p_i$, the associated cores are mutually disjoint and the endpoints are $n_i$ points in the neighborhood of the dot $p_i$. We call these endpoints the endpoints *associated with $p_i$*. 
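The finiteness argument that follows rests on a pigeonhole count: once the number of deformations IV exceeds every bound, some fixed pair of dots must be shared by arbitrarily many cores, and $m$ parallel cores create at least $m-2$ extra dotted circle components. A minimal sketch of that count, with function names of our own choosing (purely illustrative):

```python
from math import ceil, comb

def min_shared_cores(num_cores: int, num_dots: int) -> int:
    """Pigeonhole bound: if num_cores band cores have their endpoint
    pairs drawn from the C(num_dots, 2) unordered pairs of dots, then
    some pair of dots is shared by at least this many cores."""
    pairs = comb(num_dots, 2)
    return ceil(num_cores / pairs)

def cores_forcing(m: int, num_dots: int) -> int:
    """Smallest number of cores guaranteeing that some pair of dots
    carries at least m parallel cores; m parallel untwisted bands then
    create at least m - 2 new dotted circle components."""
    return comb(num_dots, 2) * (m - 1) + 1
```

For instance, with $N = 4$ dots there are six pairs, so $6 \cdot 6 + 1 = 37$ cores already force seven of them to share one pair of dots, creating at least five new dotted circles.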
We remark that since a deformation IV satisfies the condition of labels for overlapping regions, together with Lemma [Lemma 14](#lem0901){reference-type="ref" reference="lem0901"}, we see that the cores whose endpoints are associated with $p_i$ do not intersect. Since we can take the number of deformations to be greater than an arbitrarily large number $n$, there are arbitrarily many cores whose endpoints are associated with the same fixed pair of dots. The cores are mutually disjoint, and in the neighborhood of the associated dot, near the endpoints, the cores are parallel by Lemma [Lemma 14](#lem0901){reference-type="ref" reference="lem0901"}. Since the bands have no twists, when we have $m$ such cores, the resulting dotted graph has at least $m-2$ circle components more than before the deformations, and each circle component has a dot. Since the number $m$ can be made arbitrarily large, the resulting graph has more dots than $\Gamma$. Since deformations IV do not change the number of dots, this is a contradiction. Thus a dotted graph does not admit a sequence containing an arbitrarily large number of deformations IV. ◻ **Lemma 14**. *Let $\Gamma$ be a dotted graph and let $p$ be a dot of $\Gamma$. Let $D$ be a small disk around $p$ such that the arc of $\Gamma$ divides $D$ into two disks $D_1$ and $D_2$. Let $\mathcal{X}$ be the set of deformations IV applicable between $p$ and other dots. Then, the middle regions of the deformations in $\mathcal{X}$ all intersect $D$ in the same one of the disks $D_1$ and $D_2$.* *Proof.* Assume that we can apply a deformation IV, denoted by $\mathcal{R}_1$ (respectively $\mathcal{R}_2$), between $p$ and a dot $p_1$ (respectively $p$ and $p_2$) such that the middle region intersects with $D$ in $D_1$ (respectively $D_2$). 
Since $\mathcal{R}_1$ is applicable, we see that when $D_1$ is not overlapped, the label of $D_1$ (respectively $D_2$) is $\epsilon i$ (respectively $\epsilon (i-1)$), where $\epsilon \in \{+1, -1\}$ and $i$ is a positive integer: the absolute value of the label of $D_1$ is greater than that of $D_2$. When the middle region containing $D_1$ is overlapped by some regions, since the total label of each region in the middle region is $\epsilon j$ for some $j\geq i$, it cannot happen that the absolute value of the total label of a region in $D_1$ is smaller than that of $D_2$. By the same argument, since $\mathcal{R}_2$ is applicable, we see that the absolute value of the total label of a region in $D_2$ is greater than that of $D_1$, which is a contradiction. Hence we have the required result. ◻ When we have a lattice polytope $P$ with initial vertices $\mathrm{Ver}_0(P)$ and terminal vertices $\mathrm{Ver}_1(P)$, we have given a transformation along a rectangle $R$ that has vertices in $\mathrm{Ver}_0(P)$ as its diagonal vertices. The transformation of $P$ along the rectangle $R$ is the transformation from $P$ to the lattice polytope whose initial vertices are $t(\mathrm{Ver}_0(P); R)$ and terminal vertices are $\mathrm{Ver}_1(P)$. We call such a transformation of $P$ a *normal transformation*. Now, we define a reversed transformation as follows. [\[def818\]]{#def818 label="def818"} Let $P$ be a lattice polytope, and let $R$ be a rectangle that has vertices in $\mathrm{Ver}_1(P)$ as its diagonal vertices. Then, we define the *reversed transformation* of $P$ along the rectangle $R$ as the transformation from $P$ to the lattice polytope whose initial vertices are $\mathrm{Ver}_0(P)$ and terminal vertices are $t(\mathrm{Ver}_1(P); R)$. **Lemma 15**. *Let $P$ be a lattice polytope and let $\Gamma$ be its associated dotted graph. 
If we can apply a deformation IVa that satisfies the condition (a1) given in Definition [\[def3-3\]](#def3-3){reference-type="ref" reference="def3-3"}, then there exists a normal/reversed transformation of $P$ that realizes the deformation IVa.* *Proof.* Suppose that we can apply to $\Gamma$ the deformation IVa in question. We recall that the condition (a1) is that the arcs involved in the deformation are adjacent arcs of a crossing. We denote by $D$ the region of $P$ associated with the middle region. Then, the pair of dots involved in the deformation are on adjacent arcs connected by a crossing such that around the crossing, the other arcs are in the complement of $D$. Let $e$ and $e'$ be the pair of edges of $P$ in the $x$-direction and $y$-direction that form the crossing, and let $v$ and $v'$ be the pair of vertices that are endpoints of $e$ and $e'$ contained in the boundary of the closure of $D$. Let $R$ be the rectangle whose pair of diagonal vertices are $v$ and $v'$. We can see that $v$ and $v'$ are both initial vertices or both terminal vertices of $P$. The transformation of $P$ along $R$ is the required transformation: it realizes the deformation IVa, and it is a normal (respectively reversed) transformation if $v$ and $v'$ are the initial (respectively terminal) vertices. ◻ **Lemma 16**. *Let $P$ be a lattice polytope. 
If $P$ admits a sequence of transformations consisting of normal transformations and reversed transformations realizing minimal area, then there is a sequence of transformations consisting only of normal transformations realizing minimal area.* *Proof.* We denote the sequence of transformations by $$(\mathcal{R}^1_1 \mathcal{R}^1_2 \cdots \mathcal{R}^1_{m_1}) (\mathcal{L}^1_1 \mathcal{L}^1_2 \cdots \mathcal{L}^1_{n_1}) (\mathcal{R}^2_1 \mathcal{R}^2_2 \cdots \mathcal{R}^2_{m_2}) \cdots (\mathcal{L}^r_1 \mathcal{L}^r_2 \cdots \mathcal{L}^r_{n_r}),$$ where $\mathcal{R}_s^t$ (respectively $\mathcal{L}_s^t$) is a normal transformation (respectively a reversed transformation) along a rectangle, and we apply transformations from left to right. For a reversed transformation $\mathcal{L}$ along a rectangle $R$ from a lattice polytope $P_1$ to a lattice polytope $P_2$, we denote by $\mathcal{\bar{L}}$ the normal transformation along $R$ from $P_2$ to $P_1$. Then, the sequence of transformations $$\begin{aligned} && (\mathcal{R}^1_1 \mathcal{R}^1_2 \cdots \mathcal{R}^1_{m_1}) (\mathcal{R}^2_1 \mathcal{R}^2_2 \cdots \mathcal{R}^2_{m_2}) (\mathcal{\bar{L}}^1_{n_1} \mathcal{\bar{L}}^1_{n_1-1} \cdots \mathcal{\bar{L}}^1_{1})\\ && \ \cdot (\mathcal{R}^3_1 \mathcal{R}^3_2 \cdots \mathcal{R}^3_{m_3}) \cdots\\ && \ \cdot (\mathcal{R}^r_1 \mathcal{R}^r_2 \cdots \mathcal{R}^r_{m_r}) (\mathcal{\bar{L}}^r_{n_r} \mathcal{\bar{L}}^r_{n_r-1} \cdots \mathcal{\bar{L}}^r_{1})\end{aligned}$$ is the required sequence of transformations of $P$. ◻ # Acknowledgements {#acknowledgements .unnumbered} The author was partially supported by JST FOREST Program, Grant Number JPMJFR202U. Barvinok, A. *Integer Points in Polyhedra*; Zurich Lectures in Advanced Mathematics, European Mathematical Society, 2008. Diestel, R. 
*Graph Theory*; Graduate Texts in Mathematics 173, Springer, 2010. Kawauchi, A. *A Survey of Knot Theory*; Birkhäuser Verlag, Basel, 1996. Nakamura, I. *Transformations of partial matchings*; Kyungpook Math. J. 61 (2021), No. 2, 409--439. Reidys, C. M. *Combinatorial Computational Biology of RNA. Pseudoknots and neutral networks*; Springer, New York, 2011.
--- abstract: | In this paper, we give an algorithm for describing the Weinstein presentation of Weinstein subdomains obtained by carving out regular Lagrangians. Our work generalizes previous work in dimension three and requires a novel Legendrian isotopy move (the "boat move") that changes the local index of Reeb chords in a front projection. As applications, we describe presentations for certain exotic Weinstein subdomains and give explicit descriptions of $P$-loose Legendrians. address: - Department of Mathematics, ETH Zürich, Zürich, Switzerland - Department of Mathematics, University of Massachusetts Boston, Boston, MA, USA - Mathematics Department, Fordham University, New York, NY, USA - Department of Mathematics, Louisiana State University, Baton Rouge, LA, USA author: - Ipsita Datta - Oleg Lazarev - Chindu Mohanakumar - Angela Wu bibliography: - references.bib title: Weinstein presentations for high-dimensional antisurgery --- # Introduction {#sec: introduction} Weinstein domains [@Weinstein_91_CSSH] are exact symplectic manifolds equipped with symplectic handlebody decompositions, analogous to CW complexes in topology. These domains are relatively easy to construct by consecutively attaching handles along isotropic spheres in contact manifolds. Weinstein presentations or diagrams keep track of these isotropic spheres and their interactions with each other and make computation of invariants, like the wrapped Fukaya category, tractable via surgery formulas and gluing formulas [@Bourgeois_Ekholm_Eliashberg_surgery; @Ganatra_Pardon_Shende_descent]. A wealth of symplectically exotic Weinstein domains can be constructed as *subdomains* of more standard Weinstein domains, obtained by *carving out* Lagrangian disks. For example, Sylvan and the second author [@Lazarev_Sylvan_2023_PLWS] showed that if $n \ge 5$, the standard cotangent bundle $T^*S^n$ has infinitely many Weinstein subdomains that are diffeomorphic to $T^*S^n$ but pairwise non-symplectomorphic. 
They also constructed $P$-loose Legendrians as subdomains of the sector $T^*D^n$ and showed that these $P$-loose Legendrians are smoothly isotopic but not Legendrian isotopic. The contact analog of carving out Lagrangian disks---*contact antisurgery*---is important for the construction of contact structures; for example, any contact structure on $S^{2n-1}$ is obtained by doing a single contact surgery and antisurgery on the standard contact structure $(S^{2n-1}, \xi_{std})$ [@Lazarev_2020_MCSS]. Weinstein subdomains are also attractive from the point of view of categorical invariants; their wrapped Fukaya categories are localizations of the Fukaya category of the ambient domain by the localization formula in [@Ganatra_Pardon_Shende_descent]. Finally, any Weinstein domain deformation retracts to its singular Lagrangian skeleton; therefore the question of studying Weinstein subdomains of a fixed domain $X$ is precisely the question of finding singular Lagrangian skeleta in $X$. Weinstein subdomains also arise naturally when relating complements of toric divisors and their (partial) smoothings. That is, $X\setminus D$ is a Weinstein subdomain of $X\setminus \tilde{D}$ for a Weinstein domain $X$, divisor $D \subset X$, and smoothing $\tilde{D}$ of $D$. In dimension 4, the Weinstein presentations of such manifolds have been related in this context by Acu, Capovilla-Searle, Gadbled, Marinkovic, Starkston, and the fourth author [@ACSGNNSW_22_WHCSTD]. They define a necessary condition on a Delzant polytope of the toric manifold, which ensures that the complement of a corresponding partial smoothing of the toric divisor supports a Weinstein structure. They give an algorithm to construct an explicit Weinstein presentation for the complement of such a partially smoothed toric divisor. 
On the other hand, Weinstein presentations have not been described yet for the constructions in [@Lazarev_Sylvan_2023_PLWS]; nor does there exist a general procedure for describing explicit Weinstein presentations for general Weinstein subdomains. For instance, the construction of $P$-flexible Weinstein manifolds was relatively inexplicit because it was not clear how the carving out/antisurgery modified the front of the original Legendrian. In particular, the front projection of these $P$-loose Legendrians was not known. In this paper, our goal is to remedy this situation. We focus on the problem of constructing explicit Weinstein presentations of exotic Weinstein subdomains constructed by carving out Lagrangian disks. We want the presentation of such a subdomain to be in terms of a Weinstein presentation of the original Weinstein domain, which is compatible with the Lagrangian disk. In the process, we introduce a new Legendrian isotopy move, called the boat move. As a concrete application, we give an explicit front projection for the $P$-loose Legendrians constructed indirectly in [@Lazarev_Sylvan_2023_PLWS]. For any collection of integers $P$, a Legendrian $\Lambda$ is said to be *$P$-loose* if it is isotopic to $\Lambda \sharp \Lambda_P$, the connected sum of $\Lambda$ and $\Lambda_P$, where $\Lambda_P$ is a $P$-loose Legendrian unknot, defined in $\mathbb{R}^{2n+1}$ for $n \ge 4$. This operation of taking connected sum with $\Lambda_P$ can be used to make $P$-loose Legendrian representatives of any smooth $n$-dimensional knot type. A Weinstein manifold constructed via handle attachments along $P$-loose Legendrians is called *$P$-flexible*. In [@Lazarev_Sylvan_2023_PLWS], it was shown that $P$-loose Legendrians have properties that generalize those of loose Legendrians, which were introduced by Murphy in [@Murphy_12_LLEHD]. If $0 \in P$, then $\Lambda_P$ is a loose Legendrian unknot. 
In general, $\Lambda_P$ is not necessarily loose but has Legendrian dga (with loop space coefficients) equal to $\mathbb{Z}[\frac{1}{P}]$, see Section [4](#sec: main construction){reference-type="ref" reference="sec: main construction"}. Furthermore, for any Legendrian $\Lambda \subset Y$, the Chekanov-Eliashberg dga satisfies $$CE(\Lambda \sharp \Lambda_P) \cong CE(\Lambda)[P^{-1}],$$ and hence vanishes with $\mathbb{Z}/P\mathbb{Z}$ coefficients. ## Main results For any Weinstein subdomain $X_0$ of a domain $X$, the complement $X\setminus X_0$ admits the structure of a Weinstein cobordism $C$. This cobordism has a subcritical part $C_{sub}$ that does not change invariants like the Fukaya category, and some critical handles $H_i^n$, $i = 1, \dots, l$, with Lagrangian co-core disks $L_i^n \subset X$. Hence $X_0$ can be described, up to subcritical cobordism, as $X\setminus \left( \cup_{i=1}^l L_i \right)$. Conversely, given any Lagrangian disk $L \subset X$, $X\setminus L$ is an exact subdomain of $X$. The contact boundary $\partial(X\setminus L)$ of $X \setminus L$ is obtained from the contact boundary $\partial X$ of $X$ by doing *antisurgery*, or $(+1)$-contact surgery, along the Legendrian sphere $\partial L \subset \partial X$. To ensure that $X\setminus L$ is Weinstein, we assume that $L$ is *regular* [@Eliashberg_Flexible_Lagrangians], which implies that our starting Weinstein presentation for $X$ is compatible with $L$, as we describe later. The main goal of this paper is to give explicit constructions---Weinstein presentations and handlebody decompositions---for Weinstein manifolds obtained via antisurgery. To do this, we introduce a new family of $n$-dimensional Legendrian moves in the front projection. We construct Legendrian isotopies called *$D^k$-suspensions* that consist of a Legendrian isotopy $\psi$ of an $(n-k)$-dimensional slice suspended over a $k$-dimensional disk. **Proposition 1**. 
*Given a Legendrian isotopy $\psi: D^{n-k} \times [0,1]_t \to \mathbb{R}^{2(n-k) + 1}$ which is the identity near $\partial D^{n-k}$ and $t$-independent near $\partial [0,1]$, its $D^k$-suspension $\Sigma_{D^k} \{\psi\}$ is a Legendrian in $\mathbb{R}^{2n+1}$, and is Legendrian isotopic to $D^k \times D^{n-k}$ relative to the boundary.* We call the $D^k$-suspension of an $(n-k)$-dimensional Reidemeister 1 move an *$(n,k)$-boat move*; see, for instance, Figure [6](#fig:surface_boat_move){reference-type="ref" reference="fig:surface_boat_move"}. Using these boat moves, we can construct our desired presentations for Weinstein domains obtained via antisurgery. In dimension 3, there are several existing results, for example [@Ding_Geiges_09_HMCSD], explaining how to do antisurgery along Legendrian circles that admit Lagrangian disk fillings. Our main contribution is that we are able to replicate such explicit constructions in higher dimensions by first using $(n,k)$-boat moves. Next we state our main result. Let $L \subset X$ be a regular Lagrangian disk in a Weinstein domain $X^{2n}$. By [@Eliashberg_Flexible_Lagrangians], any regular Lagrangian disk $L \subset X$ in a Weinstein domain $X$ can be presented as $$D^n \subset T^*D^n \cup \left( \cup_i H_i\right),$$ where $H_i$ are Weinstein handles attached to $T^*D^n$ in the complement of $\partial D^n \subset T^*D^n$. For simplicity, we assume here that all of these handles have index $n$, although this assumption can be removed. Let $$\Lambda_\emptyset \cup \left( \cup_i \Lambda_i \right) \subset \mathbb{R}^{2n-1}$$ be the Legendrian link formed by the following Legendrians. First, $\Lambda_\emptyset = \partial D^n$ is the standard Legendrian unknot with front projection in $\mathbb{R}^{n}$ given by the "flying saucer" (with an $S^{n-2}$-family of cusps and no other singularities). Second, $\Lambda_i$ are the attaching Legendrians of $H_i$. 
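For concreteness, one rotationally symmetric model of such a flying-saucer front (our illustrative choice of model; the arguments here do not depend on it) is the semicubical profile

```latex
% Two sheets over the closed unit ball in \mathbb{R}^{n-1},
% meeting in a semicubical cusp edge along S^{n-2} = \{ |x| = 1 \}:
\[
  \pi(\Lambda_\emptyset)
  = \bigl\{ (x, z) \in \mathbb{R}^{n-1} \times \mathbb{R} \;:\;
      z^2 = \bigl(1 - |x|^2\bigr)^{3},\ |x| \le 1 \bigr\}.
\]
% Near the cusp edge, with u = 1 - |x|, each radial slice looks like
% z = \pm (2u)^{3/2} to leading order, the standard semicubical cusp.
```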
Let $C_i$ denote the set of Reeb chords from $\Lambda_i$ to $\Lambda_\emptyset$ with front projection contained in the subset of $\mathbb{R}^{n}$ bounded by the flying saucer front projection $\pi(\Lambda_\emptyset) \subset \mathbb{R}^{n}$, and let $C = \cup_i C_i$; see Figure [\[fig:acceptable_Reeb_chords\]](#fig:acceptable_Reeb_chords){reference-type="ref" reference="fig:acceptable_Reeb_chords"}. We assume all such chords are non-degenerate, and furthermore correspond to critical points of a local Morse function, whose index we call the local index of the Reeb chord; in particular, $C$ is finite. **Theorem 2**. *There is a Weinstein presentation for the Weinstein subdomain $X \setminus L \subset X$ with the following properties:* - *The Weinstein presentation of $X \setminus L$ has one more $(n-1)$-handle than the Weinstein presentation for $X$.* - *The $n$-handles for $X\setminus L$ are in one-to-one correspondence with the $n$-handles of $X$.* - *The attaching sphere $\Lambda_i'$ of the $n$-handle $H'_i$ of $X\setminus L$ is obtained from the attaching sphere $\Lambda_i$ of the corresponding handle $H_i$ of $X$ in the following way: for each Reeb chord $\gamma$ in $C_i$ with local index $k$, we apply an $(n, n-k)$-boat move to $\Lambda_i$ and do a cusp connected sum with a Legendrian that goes through the new $(n-1)$-handle once.* *Remark 3*. The assumptions for Theorem [Theorem 2](#thm: general_antisurgery_intro){reference-type="ref" reference="thm: general_antisurgery_intro"} can be weakened. For example, as explained in Lemma [Lemma 23](#lemma:non-degenerate_Reeb_chords){reference-type="ref" reference="lemma:non-degenerate_Reeb_chords"}, any Legendrian link in $\mathbb{R}^{2n+1}$ can be perturbed by a $C^0$-small isotopy so that all Reeb chords in $C$ are non-degenerate and correspond to critical points of a Morse function; however, perturbing a degenerate Reeb chord may result in a larger (finite) number of non-degenerate Reeb chords.
Also, the assumption that $X$ takes the form $T^*D^n \cup (\cup_i H^n_i)$ can be generalized to allow subcritical handles. Generically, there will be no Reeb chords between $\Lambda_\emptyset$ and the attaching spheres of these subcritical handles, but the attaching spheres of the critical handles may interact with these subcritical handles. Hence, only a portion of these attaching spheres maps to the contact boundary $\partial_\infty T^*D^n$, resulting in a Legendrian subset $\Lambda_i \subset \partial_\infty T^*D^n$ (which is not necessarily a sphere but a manifold with boundary). In that case, a generalization of Theorem [Theorem 2](#thm: general_antisurgery_intro){reference-type="ref" reference="thm: general_antisurgery_intro"} holds by considering Reeb chords between $\Lambda_\emptyset$ and the subset $\Lambda_i$. Additionally, as a main application of Theorem [Theorem 2](#thm: general_antisurgery_intro){reference-type="ref" reference="thm: general_antisurgery_intro"}, we explicitly construct the $P$-loose Legendrians in the Weinstein presentations of $P$-flexible Weinstein domains. **Corollary 4**. *For integers $p \ge 0$ and $n \ge 2$, the $P$-loose Legendrian unknot, $\Lambda_P \subset \mathbb{R}^{2n+1}$, for $P = \{p\}$, is a Legendrian submanifold consisting of four loose Legendrian unknots which are completely parallel away from a bounded region. Within this bounded region they may be linked (in a way that depends on $p$) and are connected via three boat moves and cusp connected sum gluings (see Figure [1](#fig:p_loose){reference-type="ref" reference="fig:p_loose"}); the front projection in this bounded region has no singularities except for the singularities in the local boat moves.* ![A cartoon depicting a low-dimensional slice of a $P$-loose Legendrian which cuts through the three boat move cusp connect sums, represented by the pink regions.
Away from a neighbourhood of this slice, the black Legendrians may be linked in the yellow region but are parallel outside of it.](figures/p_loose_leg.pdf){#fig:p_loose width="6.35 cm"} ## Structure of the paper In Section [2](#sec: background){reference-type="ref" reference="sec: background"}, we give some background on antisurgery and $P$-loose Legendrians. In Section [3](#sec: Legendrian_moves){reference-type="ref" reference="sec: Legendrian_moves"}, we discuss the boat move and prove Proposition [Proposition 1](#prop: boat_move_intro){reference-type="ref" reference="prop: boat_move_intro"}. In Section [4](#sec: main construction){reference-type="ref" reference="sec: main construction"}, we present our construction for producing Weinstein presentations for subdomains and prove Theorem [Theorem 2](#thm: general_antisurgery_intro){reference-type="ref" reference="thm: general_antisurgery_intro"} and Corollary [Corollary 4](#cor: explicit_P_loose_intro){reference-type="ref" reference="cor: explicit_P_loose_intro"}. We then use our construction to give explicit examples of Weinstein antisurgery manifolds and conclude with some open questions. ## Acknowledgements The authors would like to thank the organizers (Orsola Capovilla-Searle, Roger Casals, and Caitlin Leverson) of the SYNC Early Career Workshop at the University of California, Davis where this project was started in August, 2022. The authors would also like to thank Hiro Lee Tanaka, Lea Kenigsberg, Tonie Scroggin, Georgios Dimitroglou Rizell, Jonathan Michala, and Josh Sabloff for helpful discussions. ID was supported by NSF grant DMS-1926686. OL was supported by NSF grant DMS-2305392. CM was supported by NSF Grant DMS-2003404. AW was supported by NSF grant DMS-2238131. # Background {#sec: background} ## Contact surgery and Weinstein manifolds Let $M$ be an $n$-dimensional manifold. 
In the topological setting, we perform $k$-surgery on $M$, for $k \leq n$, by first removing a tubular neighborhood $N(S^k) \cong S^k \times D^{n-k}$ of a $k$-sphere, and then gluing back a copy of $D^{k+1} \times S^{n-k-1}$ along the boundary $S^{k} \times S^{n-k-1}$. A choice of framing, that is, the particular identification $\varphi: N(S^k) \cong S^k \times D^{n-k}$, specifies our gluing, which then determines our surgered manifold up to diffeomorphism. When the $k$-sphere is a Legendrian sphere $\Lambda$ inside a contact manifold $M$, there is a canonical framing, provided an identification $\Lambda \cong S^k \subset \mathbb{R}^{k+1}$. To see this, note that $TM|_\Lambda = T\Lambda \oplus \langle R_{\alpha}\rangle \oplus J(T\Lambda)$, and so $N(\Lambda) \cong \langle R_{\alpha}\rangle \oplus J(T\Lambda)$ for a contact form $\alpha$ and corresponding Reeb vector field $R_\alpha$. The bundle $\langle R_{\alpha}\rangle \oplus J(T\Lambda)$ can be identified with the stabilized tangent bundle of $\Lambda$, which carries a canonical trivialization after an identification of $\Lambda$ with $S^k \subset \mathbb{R}^{k+1}$ --- this trivialization is the canonical framing. At this point, we drop the dimension $k$ of our sphere from the notation, and refer to such a surgery by $(\pm \frac{m}{k})$-surgery, where $\pm \frac{m}{k}$ is a fraction relating our choice of framing to the canonical framing. For example, attaching a critical Weinstein handle affects the contact boundary by a $(-1)$-surgery. ## Legendrian moves {#sec:legendrian moves} Legendrian moves refer to the replacement of a Legendrian by a Legendrian isotopic one that differs from it only within a Darboux neighbourhood. These are often depicted via front diagrams. Legendrian Reidemeister moves are analogous to knot diagram moves which preserve the topological knot type.
They usually refer to replacements in the front diagrams of Legendrian knots in a contact $3$-manifold depicted in Figure [2](#fig:R-moves){reference-type="ref" reference="fig:R-moves"}. These fully characterize Legendrian isotopies for $1$-dimensional Legendrians in contact $3$-manifolds. **Theorem 5** (See [@Swiatowski_1992]). *Two front diagrams represent Legendrian isotopic Legendrian knots if and only if they are related by regular homotopy and a sequence of the moves shown in Figure [2](#fig:R-moves){reference-type="ref" reference="fig:R-moves"}.* ![The Legendrian Reidemeister moves.](figures/Reidemeister_moves.pdf){#fig:R-moves width="5.3 cm"} ![A Legendrian Reidemeister 1 and isotopy for a knot (top) and a surface (bottom).](figures/min_to_max.pdf){#fig:min_to_max width="14.7 cm"} We will also be using higher dimensional first Reidemeister moves. For any Legendrian submanifold $\Lambda \subset (Y, \xi)$, the Legendrian submanifold $\Lambda'$ obtained by replacing a graphical portion of the front of $\Lambda$ (with respect to a particular Darboux chart) by the rightmost front depicted in Figure [3](#fig:min_to_max){reference-type="ref" reference="fig:min_to_max"} is Legendrian isotopic to $\Lambda$. To see this, note that the first arrow is the same as one of the higher dimensional first Reidemeister moves described in [@Bourgeois_Sabloff_Traynor_2015_LCGFCG]. The subsequent two arrows of Figure [3](#fig:min_to_max){reference-type="ref" reference="fig:min_to_max"} are front diagrams of Legendrian isotopies where the domed part of the front is pushed inwards via an isotopy of the $(x_1, \dots, x_n, z)$-plane that at no time creates vertical tangencies. We refer to this replacement within a Darboux chart as a **$k$-dimensional first Reidemeister move**.[^1] Let $$R1_k : [-1,1] \times D^k \to \mathbb{R}^{2k+1}$$ denote the associated Legendrian isotopy.
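For readers less familiar with front projections, the cusp singularities appearing in these moves already show up in the standard 1-dimensional model, which can be checked symbolically. The following is our own illustrative sketch (not part of the argument): the parametrized curve below is Legendrian for $\alpha = dz - y\,dx$, and its front projection $(x, z)$ traces a semicubical cusp.

```python
import sympy as sp

t = sp.symbols("t", real=True)

# A standard Legendrian parametrization whose front has a cusp at t = 0:
x, y, z = t**2, t, sp.Rational(2, 3) * t**3

# Legendrian condition for alpha = dz - y dx: dz/dt - y * dx/dt must vanish.
assert sp.simplify(sp.diff(z, t) - y * sp.diff(x, t)) == 0

# The front projection (x, z) = (t^2, 2t^3/3) satisfies z^2 = (4/9) x^3,
# a semicubical cusp; away from t = 0 the slope dz/dx recovers y = t.
assert sp.simplify(z**2 - sp.Rational(4, 9) * x**3) == 0
```

The second assertion makes explicit why the front alone determines the Legendrian away from singularities: the missing $y$-coordinate is the slope of the front.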
Note that the fronts depicted in Figure [3](#fig:min_to_max){reference-type="ref" reference="fig:min_to_max"} have $S^{k-2}$-symmetry about the $z$-axis passing through the point of singularity. *Remark 6*. The fronts depicted in Figure [3](#fig:min_to_max){reference-type="ref" reference="fig:min_to_max"}, and therefore the fronts obtained whenever we apply the $n$-dimensional R1 move, are not generic [@Dimitroglou_Rizell_11_KLSFRC], as there is a singularity at the cone point. This does not matter for our purposes but is important to keep in mind, especially when computing Legendrian Contact Homology. *Remark 7*. Higher dimensional versions of the second and third Legendrian Reidemeister moves also exist; see, for instance, their depictions in Section 2.4 of [@Casals_Murphy_2019_LFAV]. In higher dimensions, these three moves in the front do not fully characterize all Legendrian isotopies. Generally, we can isotope Legendrian fronts past each other by the Reeb flow, as long as there are no Reeb chords between them. In addition to Reidemeister moves, we will use Legendrian handle slides and the addition or removal of a cancelling pair of Weinstein handles in our diagrammatic calculus. These are moves that allow us to pass between different Weinstein handle diagrams of equivalent Weinstein domains. In other words, before and after these moves, the Legendrians depicted are isotopic in the surgered contact boundaries (but maybe not isotopic in the original contact boundaries). Following Casals and Murphy [@Casals_Murphy_2019_LFAV], we depict the effect of Legendrian handle slides in Figure [\[fig:handleslides\]](#fig:handleslides){reference-type="ref" reference="fig:handleslides"}. A handle slide over a $(+1)$-surgery curve produces a cone singularity, while sliding over a $(-1)$-surgery produces a circle of cusp singularities.
Meanwhile, for a Weinstein manifold of dimension $2n$, the addition or removal of a cancelling pair refers to adding in an $n$-handle and an $(n-1)$-handle such that the attaching sphere of the $n$-handle and the belt sphere of the $(n-1)$-handle intersect exactly once. ## Loose and $P$-loose Legendrians, flexible and $P$-flexible domains {#sec:flexibility} In [@Murphy_12_LLEHD], Murphy introduced a class of Legendrians called loose Legendrians. These Legendrians are characterized by an explicit local model. **Definition 8**. A loose unknot $\Lambda_l \subset (\mathbb{R}^{2n+1}, \xi_{std})$ for $n \ge 2$ is a Legendrian that is formally isotopic to the Legendrian unknot and for which there is an $\mathbb{R}^3$ slice intersecting $\Lambda_l$ transversely such that the front projection of $\Lambda_l \cap \mathbb{R}^3$ is the one-dimensional stabilized arc (see Figure [4](#fig:loose_chart){reference-type="ref" reference="fig:loose_chart"}). A Legendrian $\Lambda \subset (Y, \xi)$ is loose if it is Legendrian isotopic to $\Lambda \sharp \Lambda_l \subset Y \sharp \mathbb{R}^{2n+1} \cong Y$. ![A one-dimensional stabilized Legendrian arc.](figures/zigzag.pdf){#fig:loose_chart width="1.5 cm"} *Remark 9*. Since we prefer to work with closed Legendrians, the above definition involves loose Legendrian unknots (which are spheres) instead of the original definition, which involves loose Legendrian *charts*, which are Legendrian disks; these definitions are equivalent. We emphasize that there is not a canonical model for the loose Legendrian unknot (except in dimension 1, which must be excluded for reasons explained below). For example, we can take any codimension zero subdomain $U$ near the cusp of the Legendrian unknot $\Lambda_\emptyset$ and push through $U$ past the cusp to create $\Lambda_U$ (see Figure [5](#fig:push_to_loose){reference-type="ref" reference="fig:push_to_loose"}).
Then $\Lambda_U$ is always loose, and if $U$ has Euler characteristic zero, then $\Lambda_U$ is Legendrian isotopic to the loose Legendrian unknot. Alternatively, one can take any *closed* codimension $1$ submanifold $V$ of $\Lambda_\emptyset$ and add a $1$-dimensional zig-zag spun by $V$ to create $\Lambda_V$. Again, if $V$ has Euler characteristic zero, $\Lambda_V$ is a Legendrian unknot. Another way to see the non-existence of a canonical model is to observe that there is no canonical way to extend the one-dimensional stabilized arc to a higher dimensional Legendrian disk such that the extension is standard near the boundary. ![Near the cusp of the black Legendrian, we push through the blue subdomain which perturbs the knot into becoming loose.](figures/push_loose.pdf){#fig:push_to_loose width="9.8 cm"} This indicates a proliferation of loose Legendrian unknots, making it impossible to speak of *the* loose Legendrian unknot. However, the main result proven in [@Murphy_12_LLEHD] about loose Legendrians is that they satisfy an h-principle, which essentially says that, if loose Legendrians $\Lambda_1, \Lambda_2$ in $(Y, \xi)$ are smoothly isotopic (and satisfy additional tangential data), then they are Legendrian isotopic. This means that the symplectic geometry of loose Legendrians reduces just to classical differential topology. In particular, all loose Legendrian unknots are Legendrian isotopic. For $n = 1$, the local model still makes sense but the h-principle does not hold, and therefore we speak of loose Legendrians only for $n \ge 2$. Using loose Legendrians, Cieliebak and Eliashberg defined the class of flexible Weinstein domains [@Cieliebak_Eliashberg_2012_FSWB]. **Definition 10**. A Weinstein domain is *flexible* if the attaching spheres for its half-dimensional handles are loose Legendrians. More generally, we can consider flexible Weinstein sectors.
A sector is equivalently a Weinstein domain with the extra data of a Weinstein hypersurface in its contact boundary. A flexible sector is one for which the ambient domain is flexible and the Weinstein hypersurface is loose (in the sense that all core disks of its critical handles are loose Legendrians). We point out that, in the work of Murphy and Siegel [@MurSie18], these domains are called *explicitly* flexible and Weinstein domains are called flexible if they admit a Weinstein homotopy to an explicitly flexible one; this notion of flexibility is tautologically preserved under Weinstein homotopy. ## Antisurgery construction of $P$-loose Legendrian unknots {#sec: antisurgery construction of $P$-loose legendrians} In this section, we review the construction of $P$-loose Legendrian unknots via antisurgery in [@Lazarev_Sylvan_2023_PLWS]. There, the authors considered Lagrangian disks $D_P \subset T^*D^n$ (described below) with Legendrian boundary $\partial D_P\subset \partial_\infty T^*D^n$ disjoint from the boundary of the zero-section $\partial D^n$. Then, they carved out these disks to obtain subdomains $T^*D^n \setminus D_P$ of $T^*D^n$. To construct the $P$-loose Legendrian unknot $\Lambda_P$, we view $T^*D^n\setminus D_P$ as a Weinstein sector, or equivalently as a Weinstein domain $B^{2n}\setminus D_P$ with a Legendrian stop in its contact boundary. This can be done as follows. As a Weinstein domain, $B^{2n} \setminus D_P$ is just $B^{2n} \cup H^{n-1}$. Since $\partial D^n$ and $\partial D_P$ are disjoint, $\partial D^n$ can be viewed as a Legendrian in the carved out domain $B^{2n} \cup H^{n-1} = B^{2n} \setminus D_P$. Next, we recover $B^{2n}$ by attaching a flexible handle along any Legendrian that is loose in the complement of $\partial D^n \subset \partial (B^{2n}\cup H^{n-1})$. The image of $\partial D^n$ in this new $B^{2n}$ is denoted $\Lambda_P$, the $P$-loose unknot.
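As an informal sanity check on this handle bookkeeping (a toy computation of ours, not part of the construction in [@Lazarev_Sylvan_2023_PLWS]): each handle of index $j$ contributes $(-1)^j$ to the Euler characteristic, so carving out $D_P$ adds one $(n-1)$-handle, and re-attaching the flexible $n$-handle restores $\chi(B^{2n}) = 1$.

```python
# Each handle of index j contributes (-1)**j to the Euler characteristic
# of the resulting handlebody.
def euler_char(handle_indices):
    return sum((-1) ** j for j in handle_indices)

for n in range(2, 10):
    chi_ball = euler_char([0])                # B^{2n}: a single 0-handle
    chi_carved = euler_char([0, n - 1])       # B^{2n} \ D_P = B^{2n} + one (n-1)-handle
    chi_refilled = euler_char([0, n - 1, n])  # re-attach a flexible n-handle
    assert chi_carved == 1 + (-1) ** (n - 1)
    assert chi_refilled == chi_ball == 1      # recovering B^{2n}
```

This is purely homotopy-theoretic bookkeeping, of course; the symplectic content lies in the looseness of the re-attaching Legendrian.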
To complete the construction of $\Lambda_P$, we need to describe the construction of the Lagrangian disks $D_P \subset T^*D^n$. These disks were originally introduced by Abouzaid and Seidel in Section 3b of [@Abouzaid_Seidel_2010_ASMHR]. Let $U \subset S^{n-1}$ be a compact codimension zero submanifold with smooth boundary. Let $$f: S^{n-1} \rightarrow \mathbb{R}$$ be a $C^1$-small function with zero as a regular value, so that $f$ is strictly negative in the interior of $U$, zero on $\partial U$, and strictly positive on $S^{n-1} \setminus \overline{U}$. Next, we consider $S^{n-1}$ as the $\frac{1}{2}$-radius sphere $S^{n-1}_\frac{1}{2} \subset D^n$ and extend $f$ to a smooth Morse function (again called) $$f: D^n \rightarrow \mathbb{R}$$ so that $f$ is $C^0$-small in the $\frac{1}{2}$-radius disk and satisfies $$f(tq) = |t|^2 f(q), \quad \text{for} \quad q \in S^{n-1}_\frac{1}{2}, \quad 1\leq t\leq 2.$$ Let $\Gamma(df)$ be the graph of $df$ in $T^*D^n$ and let $$D_U = \Gamma(df) \cap B^{2n}.$$ Since $f$ is homogeneous for $|q| \ge \frac{1}{2}$ and $0$ is a regular value of $f$, $D_U$ has Legendrian boundary (with respect to the standard radial Liouville vector field on $B^{2n}$) that is disjoint from $\partial D^n$. Furthermore, there is a Lagrangian isotopy $\Gamma(d(sf))_{s \in [0,1]}$ from the zero-section $D^n \subset T^*D^n$ to $D_U$ (that intersects the stop $\partial D^n$ precisely when $s=0$). Now, let $U$ be a $P$-Moore space: a CW complex whose reduced singular cohomology is isomorphic to $\mathbb{Z}/P\mathbb{Z}$ in some degree. For example, one can take the mapping cone of the degree $P$ map $$\varphi_P: S^k \rightarrow S^k.$$ Then the resulting Lagrangian disk $D_U$ is called $D_P$, and is the Lagrangian disk used earlier to construct $\Lambda_P$. *Remark 11*. Note that the above construction of $D_U \subset T^*D^n$ requires a smooth embedding $U \subset S^{n-1}$.
If $U$ is a $P$-Moore space, this is possible only if $n \ge 5$; hence $P$-loose Legendrians exist only in dimension at least four. The sector $T^*D^n \setminus D_U$ is obtained by doing antisurgery on the boundary of $D_U$, while keeping track of the original stop $\partial D^n$ of $T^*D^n$. In particular, the Legendrian boundaries $$\Lambda_U := \partial D_U \text{ and } \Lambda^{n-1} := \partial D^n$$ are linked in a way we now describe. Note that we can view $\Lambda^{n-1} \subset \partial T^*D^n$ as the Legendrian unknot in $\mathbb{R}^{2n-1}$. **Proposition 12**. *$\Lambda_U$ is contained in a small neighborhood of $\partial D^n$ and is given by the 1-jet of $f|_{\partial D^n}$; its front projection in $\partial D^n\times \mathbb{R}$ is given by the graph of $f|_{\partial D^n}$. In particular, $\Lambda_U$ can be isotoped so that its front is obtained from $\partial D^n$ by a negative Reeb pushoff on $U'$, a smaller open set $U' \subset U$, and a positive Reeb pushoff in $\partial D^n \setminus U''$ for some larger open set $U'' \supset U$. See Figure [\[fig:D_U boundary\]](#fig:D_U boundary){reference-type="ref" reference="fig:D_U boundary"}.* *Proof.* As stated in the construction of $D_U = \Gamma(df) \cap B^{2n}$ above, to ensure that $D_U$ has Legendrian boundary, we consider $T^*D^n$ as the Liouville domain $$(B^{2n}, \frac{1}{2}(\sum_{i=1}^n x_i dy_i - y_i dx_i))$$ equipped with a stop $\partial D^n$, which we identify with the Legendrian unknot. The Liouville vector field associated to the Liouville form $\frac{1}{2}(\sum_{i=1}^n x_i dy_i - y_i dx_i)$ is the radial vector field $\frac{1}{2}\sum_{i=1}^n (x_i \partial_{ x_i} + y_i \partial_{y_i})$. So the fact that $f$ has the form $t^2 f(\theta)$ near $\partial D^n = S^{n-1}$, with coordinate $\theta$, implies that this Liouville vector field is tangent to $\Gamma(df)$ and hence $\Gamma(df)$ has Legendrian boundary.
Near $\partial D^n$, $(B^{2n}, \frac{1}{2}(\sum_{i=1}^n x_i dy_i - y_i dx_i))$ equals $$(T^*S^{n-1}, \lambda_{T^*S^{n-1}}) \times (T^*[1-\varepsilon, 1], \frac{1}{2}(tdp - p dt)) = (T^*S^{n-1}\times T^*[1-\varepsilon, 1], \lambda_{T^*S^{n-1}} + \frac{1}{2}(tdp - p dt)).$$ To see this, note that both of these Liouville structures have the radial vector field as their Liouville vector field, which determines the Liouville form. The (convex) contact boundary of this domain is $T^*S^{n-1} \times T^*_1 [1-\varepsilon, 1]$, the restriction to the cotangent fiber at $1$. The induced contact form is $$\lambda_{T^*S^{n-1}} + \frac{1}{2}dp$$ given by restricting the Liouville form $\lambda_{T^*S^{n-1}} + \frac{1}{2}(tdp - p dt)$ to the cotangent fiber at $1$. Furthermore, this contact boundary is contactomorphic to $$(J^1(S^{n-1}) = T^*S^{n-1}\times \mathbb{R}_z, \lambda_{T^*S^{n-1}} + dz)$$ via the map $z = \frac{1}{2}p$. Near its boundary, our Lagrangian is given by $$d(t^2 f(\theta)) = t^2 d_\theta f + 2t f(\theta) dt$$ and hence its Legendrian restriction to the contact boundary $T^*S^{n-1} \times T^*_1 [1-\varepsilon, 1]$ is $(d_\theta f, 2 f(\theta))$. Under the contactomorphism $z = \frac{1}{2}p$, this Legendrian maps to $(d_\theta f, f(\theta))$, the 1-jet of the function $f(\theta)$. Furthermore, we can isotope $f$ through functions vanishing precisely on $\partial U$ so that $f$ is equal to $-1$ on $U'$ for a smaller open set $U' \subset U$ and equal to $+1$ on $S^{n-1}\setminus U''$, for a larger open neighborhood $U'' \supset U$, giving us the claimed result. ◻ One of our goals is to do antisurgery on $\partial D_U$ and find a presentation for $\partial D^n$ in the resulting contact manifold. # Constructions of Higher Dimensional Legendrian Isotopy Moves {#sec: Legendrian_moves} In this section, we describe constructions of some higher dimensional Legendrian isotopies.
These isotopies are compactly supported, so we may view these as higher dimensional Legendrian moves. In the subsequent section, we will use these Legendrian moves in the construction of handle diagrams for Weinstein manifolds obtained via antisurgery. ## Suspensions of Legendrian isotopies Consider a Darboux chart with coordinates $x_1, \dots, x_n, y_1, \dots, y_n, z$ and contact form $\alpha = dz - \sum_{i = 1}^n y_i dx_i$. Let us use $\mathbb{R}^{2k}$ to refer to the span of $x_1, \dots, x_k, y_1, \dots, y_k$ and $\mathbb{R}^{2(n-k) + 1}$ for the span of $x_{k+1}, \dots, x_n, y_{k+1}, \dots, y_n, z$. Then, with the contact form $$\alpha_{n-k} := dz - \sum_{i = k+1}^n y_i dx_i,$$ $(\mathbb{R}^{2(n-k) + 1}, \alpha_{n-k})$ is a contact manifold. With this notation in mind, we may view $$\begin{aligned} \mathbb{R}^{2n + 1} = \mathbb{R}^{2k} \times \mathbb{R}^{2(n-k) + 1} \cong T^* \mathbb{R}^k \times \mathbb{R}^{2(n-k) +1}.\end{aligned}$$ Consider the Legendrian (with boundary) in this Darboux chart given by $$\begin{aligned} D^k \times \{0\} \times D^{n-k} \times \{0\} \subset \mathbb{R}^k_{x_1, \dots, x_k} \times \mathbb{R}^k_{y_1, \dots, y_k} \times \mathbb{R}^{n-k}_{x_{k+1}, \dots, x_n} \times \mathbb{R}^{n-k + 1}_{y_{k+1}, \dots, y_n, z}.\end{aligned}$$ For ease of notation, we write $D^k \times D^{n-k} : = D^k \times \{0\} \times D^{n-k} \times \{0\}$. We view $D^k \times D^{n-k}$ as a $D^k$-parameter family of Legendrian $D^{n-k}$'s. Let $s \in D^k$ and $\theta \in D^{n-k}$ denote arbitrary elements. Consider a Legendrian isotopy $$\begin{aligned} \psi: D^{n-k} \times [0,1]_t \to \mathbb{R}^{2(n-k) + 1}.\end{aligned}$$ We use the same notation $\psi: \mathbb{R}^{2(n-k)+1} \times [0,1] \to \mathbb{R}^{2(n-k) + 1}$ to denote a contact isotopy that extends this Legendrian isotopy.
Assume that $\psi$ is the identity near $\partial D^{n-k}$ and is $t$-independent near $\partial[0,1]$, i.e., $$\begin{aligned} \frac{\partial\psi}{\partial t}(\theta, 0) = \frac{\partial\psi}{\partial t}(\theta, 1) = 0 \quad \text{ for all } \theta \in D^{n-k}.\end{aligned}$$ Fix a smooth "bump function\" $\beta_k: D^k \to [0,1]$ on the parameter space, such that - $\beta_k$ has a unique critical point, which is a maximum at $0$ with $\beta_k(0) = 1$, - $\beta_k$ has radial symmetry, that is, $\beta_k(s) = \beta_k(s')$ whenever $|s| = |s'|$, and - $\beta_k|_{\partial D^k} \equiv 0$. **Definition 13**. Define $\Sigma_{D^k}\{\psi\}$, the **$D^k$-suspension of a Legendrian isotopy**, to be the unique Legendrian lift of $$\begin{aligned} \left\{ \left( s, \psi(\theta, \beta_k(s))\right) \quad \big| \quad s \in D^k, \theta \in D^{n-k} \right\} \end{aligned}$$ under the projection $$\begin{aligned} \Pi : T^* \mathbb{R}^k \times \mathbb{R}^{2(n-k) + 1} &\to \mathbb{R}^k \times \mathbb{R}^{2(n-k) + 1},\\ (x_1, \dots, x_k, y_1, \dots, y_k, x_{k+1}, \dots, y_n, z) &\mapsto (x_1, \dots, x_k, x_{k+1}, \dots, y_n, z). \end{aligned}$$ The Legendrian condition implies that the momentum coordinates $y_1, \dots, y_k$ of $T^* \mathbb{R}^k$, and thus the Legendrian submanifold itself, can be uniquely recovered from its projection $\Pi(\Sigma_{D^k}\{\psi\})$. The boundary of $D^k \times D^{n-k}$ is equal to $\partial D^k \times D^{n-k} \cup D^k \times \partial D^{n-k}.$ By our assumptions on $\psi$ and $\beta_k$, and uniqueness of the Legendrian lift, $$\begin{aligned} \partial\Sigma_{D^k} \{\psi\} = \partial\left(D^k \times D^{n-k}\right).\end{aligned}$$ We now prove Proposition [Proposition 1](#prop: boat_move_intro){reference-type="ref" reference="prop: boat_move_intro"} from the introduction, which states that $\Sigma_{D^k} \{\psi\}$ is Legendrian isotopic to $D^k \times D^{n-k}$ relative to the boundary.
*Proof of Proposition [Proposition 1](#prop: boat_move_intro){reference-type="ref" reference="prop: boat_move_intro"}.* We can define the isotopy by setting, for every $\tau \in [0,1]$, $\varphi_\tau(D^k\times D^{n-k})$ to be the unique Legendrian lift of $$\begin{aligned} \left\{ \left( s, \psi_{\tau \beta_k(s)}(\theta)\right) \quad \big| \quad s \in D^k, \theta \in D^{n-k} \right\} \end{aligned}$$ under the canonical projection $\Pi$. Therefore, $\varphi_\tau$ is a Legendrian embedding and $\varphi$ is a Legendrian isotopy from $D^k\times D^{n-k}$ to $\Sigma_{D^k}\{\psi\}$. As $\partial\Sigma_{D^k} \{\psi\} = \partial\left(D^k \times D^{n-k}\right)$, for every $\tau \in [0,1]$, $\varphi_\tau$ is the identity on the boundary $\partial\left(D^k \times D^{n-k}\right)$. ◻ *Remark 14*. The word "suspension\" in the context of Legendrians has been used to denote suspensions when the parameter space is $S^k$. In [@Dimitroglou_Rizell_Golovko_2021_OLPTS] the suspension $\Sigma_{S^k}\{\Lambda_\theta\}$ is defined similarly to Definition [Definition 13](#defn: suspension of an isotopy){reference-type="ref" reference="defn: suspension of an isotopy"} for $S^k$-parameterized families of (closed) Legendrian embeddings. If the $S^k$-family is taken to be a constant family, then one recovers the $S^k$-spun of Legendrians which first appeared in [@Ekholm_Etnyre_Sullivan_2005_CHLSR] for $k = 1$ and then was extended to general $k$ in [@Golovko_2022_NINELFSS]. In this paper, our construction is a "local\" construction. We obtain a Legendrian with boundary within a Darboux chart. This is in contrast to the previous constructions where the obtained Legendrian is closed, that is, has no boundary. Another small point of difference from earlier constructions is that instead of beginning with a $D^k$-parametrized family of Legendrian embeddings, we first create such a family from a 1-parameter Legendrian isotopy.
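For concreteness, the bulleted properties required of $\beta_k$ are easy to verify for one standard choice of bump function; the construction only uses the listed properties, and the particular formula below is our choice, not specified in the text. A symbolic check for $k = 1$:

```python
import sympy as sp

s = sp.symbols("s", real=True)

# One standard bump function on D^1 = [-1, 1]; any function with the three
# listed properties would do equally well.
beta = sp.exp(1 - 1 / (1 - s**2))

assert beta.subs(s, 0) == 1                       # maximum value 1 at s = 0
assert sp.limit(beta, s, 1, dir="-") == 0         # vanishes at the boundary
assert sp.simplify(beta.subs(s, -s) - beta) == 0  # radial (even) symmetry

# The only interior critical point is s = 0: beta' vanishes there and has a
# definite sign on each side, so beta decreases away from the centre.
dbeta = sp.diff(beta, s)
assert dbeta.subs(s, 0) == 0
assert dbeta.subs(s, sp.Rational(1, 2)).is_negative
assert dbeta.subs(s, -sp.Rational(1, 2)).is_positive
```

For $k > 1$, one composes the same profile with $|s|$, which preserves all three properties.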
We may generalize the suspension construction to the case when the initial Legendrian has a graphical front for a nonzero function. That is, instead of $D^k \times D^{n-k}$, consider a Legendrian $N$ contained in a Darboux chart so that the front of $N$ in $D^k\times D^{n-k}\times \mathbb{R}$ is given by $$\begin{aligned} \pi(N) = \Gamma(f_1 + f_2) = \{(x_1, \dots, x_k, x_{k+1}, \dots, x_n, f_1(x_1, \dots, x_k) + f_2(x_{k+1}, \dots, x_n)) \}\end{aligned}$$ for two smooth functions $$\begin{aligned} f_1: D^k \to \mathbb{R}, \quad f_2: D^{n-k} \to \mathbb{R}.\end{aligned}$$ Let $\Lambda_{f_2}$ be the unique Legendrian lift in $\mathbb{R}^{2(n-k) + 1}$ of the front $\Gamma(f_2)$. Again consider a Legendrian isotopy $$\begin{aligned} \psi : \Lambda_{f_2} \times [0,1] \to \mathbb{R}^{2(n-k) + 1}\end{aligned}$$ which is the identity near the boundary of $\Lambda_{f_2}$ and satisfies $$\begin{aligned} \frac{\partial\psi}{\partial t}(\theta, 0) = \frac{\partial\psi}{\partial t}(\theta, 1) = 0 \quad \text{ for all } \theta \in \Lambda_{f_2}.\end{aligned}$$ **Definition 15**. Define $\Sigma_{\Gamma(f_1)}\{\psi\}$, the **$\Gamma(f_1)$-suspension of a Legendrian isotopy**, to be the unique Legendrian lift of $$\begin{aligned} \Pi (\Sigma_{\Gamma(f_1)}\{\psi\}) = \left\{ \left( s, (0, 0, f_1(s)) + \psi_{\beta_k(s)} (\theta) \right) \in D^k \times \mathbb{R}^{2(n-k)+1} \quad \big| \quad s \in D^k, \theta \in \Lambda_{f_2} \right\}. \end{aligned}$$ Here, $s \in D^k$ is in the first $k$ position coordinates and $(0,0,f_1(s)) \in \mathbb{R}^{n-k} \times \mathbb{R}^{n-k} \times \mathbb{R}$ and $\psi_{\beta_k(s)}(\theta) \in \mathbb{R}^{2(n-k)+1}$. Analogous to Proposition [Proposition 1](#prop: boat_move_intro){reference-type="ref" reference="prop: boat_move_intro"}, we have the following proposition.
We do not include a detailed proof as it would merely be a slight tweak of the proof of Proposition [Proposition 1](#prop: boat_move_intro){reference-type="ref" reference="prop: boat_move_intro"}. **Proposition 16**. *The $\Gamma(f_1)$-suspension, $\Sigma_{\Gamma(f_1)}\{\psi\}$, is Legendrian isotopic to $N$ relative to the boundary.* ## Boat move In this section, we introduce a new Legendrian move that can be viewed as a generalization of the Reidemeister $1$ move. Introducing a boat is a way of swapping a non-maximum critical point in a graphical front for a maximum and some cusp singularities. Consider a Legendrian $\Lambda \subset (Y, \xi)$ with front equal to the graph of a Morse function, i.e., the front in some Darboux chart is $\Gamma(f)$ for $f: D^n \to \mathbb{R}$ a Morse function. Suppose there exists a minimum. Then the $n$-dimensional Reidemeister $1$ move swaps in a maximum for the minimum, as seen in Figure [3](#fig:min_to_max){reference-type="ref" reference="fig:min_to_max"}. We want to convert critical points of other indices to maxima as well. **Definition 17**. Suppose the front of an open subset $\Lambda_0 \subset \Lambda$ locally is given as the graph of a Morse function $f: D^n \to \mathbb{R}$ with an index $k$ critical point at $\mathbf{0}$. By the Morse lemma, there exist local coordinates $x_1, \dots, x_n$ such that $$\begin{aligned} f(x_1, \dots, x_n) = -x_1^2 - \dots - x_k^2 + x_{k+1}^2 + \dots + x_n^2.\end{aligned}$$ Locally we can extend $(x_1, \dots, x_n)$ to Darboux coordinates $(x_1, \dots, x_n, y_1, \dots, y_n, z)$, as any diffeomorphism of $\mathbb{R}^n$ can be extended to a contactomorphism of the $1$-jet space, $J^1(\mathbb{R}^n)$.
If we set $$\begin{aligned} f_1: \mathbb{R}^k \to \mathbb{R}\quad &f_1(x_1, \dots, x_k) = -x_1^2 - \dots - x_k^2, \text{ and }\\ f_2: \mathbb{R}^{n-k} \to \mathbb{R}\quad &f_2(x_{k+1}, \dots, x_n) = x_{k+1}^2 + \dots + x_n^2,\end{aligned}$$ we are in the setup of Definition [Definition 15](#defn:graphical suspension of isotopy){reference-type="ref" reference="defn:graphical suspension of isotopy"}. We define the **$(n,k)$-boat move** to be the replacement of the graphical open subset $\Lambda_0$ by the $\Gamma(f_1)$-suspension of $R1_{n-k}$, $\Sigma_{\Gamma(f_1)} \{R1_{n-k}\}$. Let $B_{n,k}$ denote the resulting Legendrian (locally). We refer to $B_{n,k}$ as the **$(n,k)$-boat**. ![A (2,1)-boat move which changes a saddle point into a maximum.](figures/surface_boat_move.pdf){#fig:surface_boat_move width="12 cm"} *Remark 18*. Note that if $k=n$, the boat move does not change anything. If $k = 0$, the boat move is equal to the $n$-dimensional first Reidemeister move. **Proposition 19**. *The $(n,k)$-boat $B_{n,k}$ is Legendrian isotopic to $\Lambda_0$ relative boundary.* *Proof.* As the boat move is a special case of the suspension defined in Definition [Definition 15](#defn:graphical suspension of isotopy){reference-type="ref" reference="defn:graphical suspension of isotopy"}, $B_{n,k}$ is Legendrian isotopic to $\Lambda_0$ relative to boundary directly from Proposition [Proposition 1](#prop: boat_move_intro){reference-type="ref" reference="prop: boat_move_intro"}. ◻ *Remark 20*. The boat move gets its name because the $(2,1)$-boat looks like an overturned boat or canoe as seen in Figure [6](#fig:surface_boat_move){reference-type="ref" reference="fig:surface_boat_move"}. The $(2,1)$-boat is very similar to the uni-germ $A^{e, \pm}_3$, birth of cuspidal lips, see [@Arnold_1990_SCWF] or [@Goryunov_Alsaeed_2015_LIFF3M]. The main difference is the parametrization of the Legendrian front before the move.
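As a toy check of this local model (our illustration for $n = 2$, $k = 1$, not a computation from the text), one can confirm the index bookkeeping with Hessians: before the move the model function is a saddle of index $k$, and trading the minimum in the last $n-k$ coordinates for a maximum leaves a genuine maximum.

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2", real=True)

def morse_index_at_origin(f, variables):
    """Number of negative Hessian eigenvalues (with multiplicity) at 0."""
    H = sp.hessian(f, variables).subs({v: 0 for v in variables})
    return sum(m for ev, m in H.eigenvals().items() if ev < 0)

# Local model before a (2,1)-boat move: a saddle, f = f1 + f2.
f1, f2 = -x1**2, x2**2
assert morse_index_at_origin(f1 + f2, (x1, x2)) == 1   # index k = 1 at the origin

# Schematically, the suspended Reidemeister 1 move replaces the minimum in the
# x2-direction by a maximum, so the graphical component carrying the critical
# point looks like f1 - x2**2, a genuine maximum (index n = 2).
assert morse_index_at_origin(f1 - x2**2, (x1, x2)) == 2
```

The second assertion mirrors the statement of Proposition 21 below: after the boat move the unique critical point of the critical graphical component is a maximum.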
Our goal for introducing the boat was that we wanted all the Reeb chords to correspond to maxima of the front. Of course, the new front has non-smooth points. **Proposition 21**. *Decompose the front of $B_{n,k}$ into a disjoint union of a finite number of graphical components away from non-smooth points. All but one of these graphical components are graphs of functions with no critical points. Further, the unique component with critical points has a unique critical point that is a maximum. Moreover, the tangent planes at the cusps are not parallel to the $(x_1, \dots, x_n)$-plane.* *Proof.* The front of $B_{n,k}$ has a critical point only at points where both the front of $R1_{n-k}$ and $f_1$ have critical points. This happens at only one point, namely, $x_1 = \dots = x_n = 0$. By construction of $f_1$, $x_1 = \dots = x_n =0$ is a maximum in the first $k$ coordinates. The Reidemeister move then converts the critical point to a maximum in the last $n-k$ coordinates, thus proving the proposition. We get that the cusps are not parallel to the $(x_1, \dots, x_n)$-plane as the $(n-k)$-dimensional Reidemeister front does not have cusps parallel to the $(x_{k+1}, \dots, x_n)$-plane. ◻ *Remark 22*. Consider a Reeb chord with one end on the pink dot in the left figure of Figure [6](#fig:surface_boat_move){reference-type="ref" reference="fig:surface_boat_move"}. This Reeb chord survives the boat move. One may be confused about how the grading of this Reeb chord as an element of the Legendrian dga changes, since the local Morse index is changing. However, note that the relationship between the grading and local Morse index is not the typical one [@Ekh_Etn_Sul_non_isotopic], as the cusps we use do not rotate the Lagrangian planes the way typical cusps in the literature do. As such, one can check that the associated Maslov indices of loops of tangent vectors from the top to the bottom of the Reeb chord actually stay the same under our boat moves, preserving the grading.
# Weinstein presentations of subdomains {#sec: main construction} In this section, we prove Theorem [Theorem 2](#thm: general_antisurgery_intro){reference-type="ref" reference="thm: general_antisurgery_intro"} and Corollary [Corollary 4](#cor: explicit_P_loose_intro){reference-type="ref" reference="cor: explicit_P_loose_intro"}. We then apply this theorem to construct several explicit examples of handle decompositions for Weinstein manifolds obtained via antisurgery. We conclude with some open questions. ## Proof of Main Theorem [Theorem 2](#thm: general_antisurgery_intro){reference-type="ref" reference="thm: general_antisurgery_intro"} {#proof-of-main-theorem-thm-general_antisurgery_intro} Recall that Theorem [Theorem 2](#thm: general_antisurgery_intro){reference-type="ref" reference="thm: general_antisurgery_intro"} required that certain Reeb chords be non-degenerate and correspond to critical points of Morse functions. The following result shows that this condition can always be achieved. **Lemma 23**. *Let $\Lambda_1$ and $\Lambda_2$ be two Legendrians of a contact manifold $M$ which are $C^0$-close to one another; hence we may assume that $\Lambda_2$ lies in the $1$-jet space $J^1(\Lambda_1)$ of $\Lambda_1$. Then, we may perturb $\Lambda_2$ by a $C^0$-small isotopy so that there are finitely many Reeb chords from $\Lambda_1$ to $\Lambda_2$ and so that in a neighborhood of each Reeb chord endpoint on $\Lambda_2$, $\Lambda_2$ looks like $(x,df,f)$ for $f$ a local Morse function on $\Lambda_1$.* *Proof.* By definition, $J^{1}(\Lambda_1) = T^{*}\Lambda_1 \times \mathbb{R}$ and Reeb chords between $\Lambda_1, \Lambda_2$ project to double points under the Lagrangian projection $\Pi: J^1 (\Lambda_1) \to T^{*}(\Lambda_1)$. We may perturb $\Lambda_2$ by a $C^{\infty}$-small Legendrian isotopy so that these double points are transverse in $T^*\Lambda_1$. Let $q \in \Lambda_1 \subset T^*\Lambda_1$ denote one of these double points.
Then we may further perturb $\Lambda_2$ so that $\Pi(\Lambda_2)$ is also transverse to $T^*_q \Lambda_1$ at $q$ (in addition to being transverse to $\Lambda_1$), without creating any new double points. Since $\Pi(\Lambda_2)$ is transverse to $T^*_q \Lambda_1$, we have that $\Pi(\Lambda_2)$ is given by $(x_1, \cdots, x_n, \partial_1 f(x), \cdots, \partial_n f(x))$ for some function $f$ with a critical point at $q$. Furthermore, since the projection $\Pi(\Lambda_2)$ is transverse to $\Lambda_1$, this critical point is non-degenerate and hence is a Morse critical point. In particular, $q$ corresponds to a Reeb chord between $\Lambda_1, \Lambda_2$ with the desired property. See Figure [7](#fig:not_at_cusp){reference-type="ref" reference="fig:not_at_cusp"} for an example of this perturbation. ◻ ![Front projections of two Legendrians, $\Lambda_1$ (the zero-section) and $\Lambda_2$ (with the cusp). Left figure: there is a Reeb chord (depicted by the dotted line) between the Legendrian $\Lambda_2$ and $\Lambda_1$ with endpoint on the cusp of $\Lambda_2$. Right figure: after Legendrian isotopy of $\Lambda_2$, the Reeb chord between $\Lambda_1, \Lambda_2$ has endpoint on a smooth branch of $\Lambda_2$ which is locally described by the graph of a Morse function on $\Lambda_1$ (with minimum corresponding to the Reeb chord endpoint). ](figures/not_at_cusp.pdf){#fig:not_at_cusp width="5.95 cm"} *Remark 24*. In the following Lemma [Lemma 25](#lem:one surgery on lambda f is enough){reference-type="ref" reference="lem:one surgery on lambda f is enough"}, the Reeb chords we investigate may be interpreted as lying between $\Lambda$ and $\Lambda_-$ or between $\Lambda$ and $\Lambda_+$. This is relevant when we refer to the Reeb chords as being represented as maxima or minima of a Morse function in a suitable neighborhood; what is a maximum with respect to a function on $\Lambda_-$ will be a minimum with respect to a function on $\Lambda_+$.
Here, it should be understood that we can only handleslide $\Lambda$ past $\Lambda_-$ at Reeb chords represented by maxima with respect to a locally-defined Morse function on $\Lambda_+$ (alternatively, minima with respect to a locally-defined Morse function on $\Lambda_-$). The following lemma uses suitable boat moves to turn all critical points into such maxima (resp. minima). **Lemma 25**. *Suppose $\Lambda$ is a Legendrian submanifold in a contact manifold $(M_0,\xi_0)$, and that $(-1)$-surgery on $\Lambda$ produces the contact manifold $(M',\xi')$. Suppose $\Lambda_+$ and $\Lambda_-$ are a pair of $n$-dimensional Legendrian submanifolds which are completely parallel (so that one is a Reeb $\varepsilon$-pushoff of the other) in $(M_0,\xi_0)$ but not necessarily parallel in $(M',\xi')$, i.e., they may be distinctly linked with $\Lambda$.* *Let $(M,\xi)$ be the $(2n+1)$-dimensional contact manifold obtained from $(M_0, \xi_0)$ by $(-1)$-surgeries along $\Lambda$ and $\Lambda_-$, and a $(+1)$-surgery along $\Lambda_+$. Then, there exists a Legendrian submanifold $\Lambda_f \subset (M_0, \xi_0)$ such that $(M,\xi)$ can be obtained by $(-1)$-surgeries alone, along the components of $\Lambda_f$.* *Proof.* To obtain $\Lambda_f$, our goal will be to cancel $\Lambda_{+}$ with $\Lambda_{-}$ in a surgery diagram of $(M,\xi)$. In order to do so, we need $\Lambda_{+}$ and $\Lambda_{-}$ to be completely parallel in $(M', \xi')$, i.e., they must be identically linked with $\Lambda$ in $(M_0, \xi_0)$. We will parallelize $\Lambda_{+}$ and $\Lambda_{-}$ by performing a sequence of Legendrian isotopies that preserve the resulting surgered contact manifold $(M, \xi)$, see the example of Figure [\[fig:recipe_slides\]](#fig:recipe_slides){reference-type="ref" reference="fig:recipe_slides"}.
By Lemma [Lemma 23](#lemma:non-degenerate_Reeb_chords){reference-type="ref" reference="lemma:non-degenerate_Reeb_chords"}, we may perturb $\Lambda$ by a Legendrian isotopy so that we have finitely many Reeb chords, and in a neighborhood of each Reeb chord, the height difference is a Morse function. If we attempt to isotope $\Lambda_{-}$ towards $\Lambda_{+}$ by pushing $\Lambda_{-}$ in the $z$-coordinate in the front, we are obstructed whenever $f$ has a Morse critical point on a part of $\Lambda$ between $\Lambda_{-}$ and $\Lambda_{+}$. These critical points correspond to non-degenerate Reeb chords between $\Lambda_-$ and $\Lambda$ and must be removed. We do so one at a time: we always work with the smallest remaining critical value, that is, the shortest remaining Reeb chord. Consider the critical point, say $q$, with least critical value. If $q$ has Morse index 0, then in the front projection, we see a local maximum (see Remark [Remark 24](#rem: maximas are minimas){reference-type="ref" reference="rem: maximas are minimas"}). We remove it by performing a handleslide of $\Lambda$ over $\Lambda_{-}$ along the Reeb chord between $\Lambda_-$ and $\Lambda$ corresponding to $q$. We then isotope $\Lambda_{-}$ towards $\Lambda_{+}$. See Figure [\[fig:recipe_slides\]](#fig:recipe_slides){reference-type="ref" reference="fig:recipe_slides"}. We note that no new Reeb chords are created in the process. If $q$ has Morse index $n-k$, for $0 \leq k < n$, we first perform an $(n, k)$-boat move to $\Lambda$, as defined in Definition [Definition 17](#defn:boat move){reference-type="ref" reference="defn:boat move"}. (If the critical point has Morse index $n$, then it is a minimum in the front projection, so we perform an $(n, 0)$-boat move, which is just an $n$-dimensional first Reidemeister move).
By Proposition [Proposition 21](#prop:boat_move_makes_max){reference-type="ref" reference="prop:boat_move_makes_max"}, the $(n,k)$-boat move is a Legendrian isotopy that converts the index $n-k$ critical point $q$ to a maximum, say $q'$, in the front projection. We now have a Reeb chord, say $\gamma_{q'}$, between $\Lambda$ and $\Lambda_{-}$ corresponding to $q'$. Note that Proposition [Proposition 21](#prop:boat_move_makes_max){reference-type="ref" reference="prop:boat_move_makes_max"} also implies no other Reeb chords are created in this process. We isotope $\Lambda_{-}$ towards $q'$ until $\gamma_{q'}$ is uninterrupted. Next, we handleslide $\Lambda$ over $\Lambda_{-}$ along the Reeb chord $\gamma_{q'}$ at $q'$. We can then isotope $\Lambda_{-}$ further towards $\Lambda_{+}$. This process is fully illustrated in Figure [\[fig:no_new_Reeb\]](#fig:no_new_Reeb){reference-type="ref" reference="fig:no_new_Reeb"} and Figure [\[fig:boat_handleslide\]](#fig:boat_handleslide){reference-type="ref" reference="fig:boat_handleslide"}. We repeat this process until no obstructing non-degenerate Reeb chords remain and $\Lambda_{+}$ and $\Lambda_{-}$ are in cancelling position. Once cancelled, we are left with a single Legendrian $\Lambda_f$ in the surgery diagram of $(M,\xi)$. In summary, $\Lambda_f$ corresponds to $\Lambda$ in the following way: for each Reeb chord $\gamma$ from $\Lambda$ to $\Lambda_-$ with local index $n-k$, we applied an $(n, k)$-boat move to $\Lambda$ and did a cusp connected sum with $\Lambda_-$. ◻ *Remark 26*. We note that if we handleslide over a Reeb chord corresponding to an index $k$ critical point without first doing a boat move, then new Reeb chords are created; see the top row of Figure [\[fig:no_new_Reeb\]](#fig:no_new_Reeb){reference-type="ref" reference="fig:no_new_Reeb"}.
By first doing the boat move, which does not create any new Reeb chords but only changes the local index of the existing Reeb chord, we ensure that handlesliding removes that Reeb chord without creating any new chords. *Remark 27*. Although Lemma [Lemma 25](#lem:one surgery on lambda f is enough){reference-type="ref" reference="lem:one surgery on lambda f is enough"} is stated in the language of contact manifolds and contact $(\pm 1)$-surgeries, the result also holds up to Weinstein homotopy. We recall that $(-1)$-surgeries correspond to Weinstein handle attachment while $(+1)$-surgeries correspond to handle removal, and in general do not produce fillable contact manifolds. However, in our case, all our surgery moves are handleslides over the $(-1)$-Legendrians, and hence we can view the $(+1)$-Legendrian as a placeholder for the boundary of the Lagrangian disk, which will be removed at the last step. Hence all our contact moves are really Weinstein homotopies. *Remark 28*. In the next two proofs, we abuse terminology and refer interchangeably to Weinstein diagrams and Weinstein domains. So, we sometimes say "attach a handle to the Weinstein diagram". We will also refer to the carving out of a Lagrangian disk as "antisurgery" along its boundary Legendrian. We hope this abuse of terminology will not confuse the reader but, in fact, make the proof easier to read. *Proof of Theorem [Theorem 2](#thm: general_antisurgery_intro){reference-type="ref" reference="thm: general_antisurgery_intro"}.* [\[prf: main construction\]]{#prf: main construction label="prf: main construction"} Recall that $X$ and $L \subset X$ are of the form $X = T^*D^n \cup H_i^n$ and $L = D^n \subset T^*D^n$. Then the Weinstein sector $X \setminus L$ is obtained by antisurgery along the Legendrian knot $\partial L$ which corresponds to the unknot $\partial D^n \subset T^*D^n$. From now on, we will denote the knot $\partial L$ by $\Lambda_+$, and depict it in red in all figures.
Let $\Lambda=\cup_i\Lambda_i$ denote the link consisting of all the attaching spheres of the $H^n_i$. We may isotope $\Lambda_+$ so that the front projection of $\Lambda$ is disjoint from the cusps of $\Lambda_+$, giving us a surgery diagram, say "Diagram A", for $\partial(X \setminus L)$, as depicted in the top left of Figure [\[fig:unknot_antisurgery\]](#fig:unknot_antisurgery){reference-type="ref" reference="fig:unknot_antisurgery"}. We introduce a cancelling pair to Diagram A---an $(n-1)$-handle and a critical handle $\Lambda_-$ so that $\Lambda_-$ is a parallel pushoff of the bottom arc of $\Lambda_+$---to obtain a new surgery diagram, say "Diagram B", again for $\partial(X \setminus L)$, as depicted in the top right of Figure [\[fig:unknot_antisurgery\]](#fig:unknot_antisurgery){reference-type="ref" reference="fig:unknot_antisurgery"}. We observe that Diagram B is in turn equivalent to a "Diagram C" of the form of the bottom right of Figure [\[fig:unknot_antisurgery\]](#fig:unknot_antisurgery){reference-type="ref" reference="fig:unknot_antisurgery"} where $\Lambda_+$ and $\Lambda_-$ both traverse across the $(n-1)$-handle exactly once each. We note that, in Diagram C, $\Lambda_+$ and $\Lambda_-$ are parallel except in how they are linked with $\Lambda$. So, we can apply Lemma [Lemma 25](#lem:one surgery on lambda f is enough){reference-type="ref" reference="lem:one surgery on lambda f is enough"} to obtain a surgery presentation that contains only $(-1)$-surgery along Legendrian submanifolds, say "Diagram D". Then, this Diagram D corresponds to a Weinstein handle diagram where each $(-1)$-surgery corresponds to a critical handle attachment. Note that the Weinstein diagram Diagram D has exactly one more index $(n-1)$ handle than the Weinstein presentation for $X$.
Further, if we denote by $\Lambda_i'$ the attaching spheres of the index $n$ handles $H_i$ in Diagram D, the construction in Lemma [Lemma 25](#lem:one surgery on lambda f is enough){reference-type="ref" reference="lem:one surgery on lambda f is enough"} implies that the $\Lambda_i'$ are exactly as described in the theorem statement. ◻ As a corollary to this theorem, we explain how to describe the $P$-loose Legendrian unknot in $\mathbb{R}^{2n-1} \subset S^{2n-1} = \partial B^{2n}$. To do so, we first describe a front diagram of the knot in $S^{n-1} \times \mathbb{R}^{n} \subset \partial(B^{2n} \cup H^{n-1})$ and then attach a flexible handle to make the ambient space $\partial B^{2n}$. *Proof of Corollary [Corollary 4](#cor: explicit_P_loose_intro){reference-type="ref" reference="cor: explicit_P_loose_intro"}.* Recall from Section [2.4](#sec: antisurgery construction of $P$-loose legendrians){reference-type="ref" reference="sec: antisurgery construction of $P$-loose legendrians"} and [@Lazarev_Sylvan_2023_PLWS] that the $P$-loose Legendrian unknot, $$\Lambda_P \subset S^{2n-1} = \partial B^{2n}$$ is obtained as follows. First, to construct a $P$-Moore space, consider the CW complex $S^1 \cup_P D^2$, the result of attaching $D^2$ to $S^1$ along the degree $p$ map $\partial D^2 = S^1 \rightarrow S^1$. If $n \ge 5$, then this CW complex embeds into $S^{n-1}$, as observed in [@Abouzaid_Seidel_2010_ASMHR], and we let $U$ be a neighborhood of this CW complex. Next, we observe that $U$ has a Morse function (with gradient outward pointing near the boundary of $U$) that has three critical points, one each of index 0 and 1 for the $S^1$ and one of index 2 for the $D^2$. Next, we carve out a disk $D_U \subset T^*D^n$ from $T^* D^n$ to obtain $T^*D^n \setminus D_U$, which is an unstopped Weinstein domain equivalent to $B^{2n} \cup H^{n-1}$.
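As a quick sanity check (a standard cellular computation, included here only for the reader's convenience), the complex $S^1 \cup_P D^2$ above is indeed a Moore space in degree one: its cellular chain complex is $$\begin{aligned} 0 \longrightarrow \mathbb{Z} \xrightarrow{\ \cdot p\ } \mathbb{Z} \xrightarrow{\ 0\ } \mathbb{Z} \longrightarrow 0,\end{aligned}$$ so $H_1(S^1 \cup_P D^2) \cong \mathbb{Z}/p$ and $H_2(S^1 \cup_P D^2) = 0$ when $p \neq 0$, while for $p = 0$ one gets $H_1 \cong \mathbb{Z}$, matching the identification $\mathbb{Z} \cong \mathbb{Z}/\{0\}$ used in Example 31 below.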
Then, $\Lambda_P$ is $$\partial D\subset\partial (B^{2n} \cup H^{n-1}\cup H_{flex}),$$ where $H_{flex}$ is a flexible Weinstein handle attached along a Legendrian which is loose in the complement of $\partial D \subset B^{2n} \cup H^{n-1}$. Additionally, $H_{flex}$ is in cancelling position with $H^{n-1}$. We will explicitly construct the Weinstein handle decomposition of this $B^{2n} \cup H^{n-1}$ to obtain an explicit front diagram for $\Lambda_P$. We begin by using the construction from Proposition [Proposition 12](#prop: D_U Legendrian){reference-type="ref" reference="prop: D_U Legendrian"}. In the boundary $S^{2n-1} = \partial B^{2n}$, we consider a pair of cancelling contact surgeries along parallel $(n-1)$-dimensional Legendrian unknots. We label the Legendrian unknot corresponding to the $(-1)$-surgery $\Lambda$ and the Legendrian unknot corresponding to the $(+1)$-surgery $\Lambda_+$. $\Lambda_+$ and $\Lambda$ represent the Legendrian boundaries $\partial D_U$ and $\partial D$, respectively. Note that there exists a $P$-Moore space $U \subset \Lambda_+$ by our assumption. We perturb $\Lambda_+$ past $\Lambda$ by "pushing" the subdomain $U$ so that $\Lambda_+$ and $\Lambda$ are no longer parallel. Let us refer to this surgery diagram as "Diagram E". To Diagram E, we insert a cancelling pair of handles consisting of an $(n-1)$-handle, $H^{n-1}$, and an $n$-handle, $H^n$, just below the cusps of $\Lambda_{+}$ and $\Lambda$, see Figure [\[fig:insert_cancelling_handles\]](#fig:insert_cancelling_handles){reference-type="ref" reference="fig:insert_cancelling_handles"}. Note that as a contact surgery curve on the boundary, the $n$-handle attachment corresponds to a $(-1)$-surgery along a Legendrian knot. Let us denote the Legendrian knot corresponding to this $(-1)$-surgery by $\Lambda_{-}$, and refer to this Weinstein diagram as "Diagram F".
We see from Figure [\[fig:insert_cancelling_handles\]](#fig:insert_cancelling_handles){reference-type="ref" reference="fig:insert_cancelling_handles"} that Diagram F is equivalent to "Diagram G" after some handleslides and Legendrian isotopies. We are now set up exactly as in the proof of Theorem [Theorem 2](#thm: general_antisurgery_intro){reference-type="ref" reference="thm: general_antisurgery_intro"}, namely Diagram G is of the form of Diagram C in the proof of Theorem [Theorem 2](#thm: general_antisurgery_intro){reference-type="ref" reference="thm: general_antisurgery_intro"}. So, we similarly apply Lemma [Lemma 25](#lem:one surgery on lambda f is enough){reference-type="ref" reference="lem:one surgery on lambda f is enough"} to obtain the required Weinstein diagram, say "Diagram H". Since the Morse function on the $P$-Moore space has three critical points, $\Lambda$ in Diagram G also has three critical points, of index 0, 1, and 2, when considering the Morse function $f$ of Lemma [Lemma 23](#lemma:non-degenerate_Reeb_chords){reference-type="ref" reference="lemma:non-degenerate_Reeb_chords"}. Thus, in applying the lemma, we perform three instances of a boat move and a cusp connected sum to $\Lambda$ in Diagram G to obtain Diagram H, see Figure [\[fig:3_boats\]](#fig:3_boats){reference-type="ref" reference="fig:3_boats"}. Finally, to obtain $\Lambda_P$ in $B^{2n}$, that is, to make the ambient manifold $B^{2n}$, we attach a flexible handle, $H_{flex}$, to cancel out $H^{n-1}$ in Diagram H. This amounts to attaching a loose Legendrian $\Lambda_{flex}$ that winds around the $(n-1)$-handle once, and is in the complement of $\Lambda$ in Diagram H. This gives us "Diagram I" which is depicted in the top left of Figure [\[fig:flexible_step\]](#fig:flexible_step){reference-type="ref" reference="fig:flexible_step"}, with $\Lambda_{flex}$ denoted in pink. Next, in Diagram I, we slide $\Lambda$ repeatedly over $H_{flex}$ until $\Lambda$ no longer passes through the $H^{n-1}$. We then cancel $H^{n-1}$ with $H_{flex}$.
We are left with a surgery diagram, say "Diagram J", that consists of a single Legendrian $\Lambda_P$ in $B^{2n}$. This process is illustrated in Figure [\[fig:flexible_step\]](#fig:flexible_step){reference-type="ref" reference="fig:flexible_step"}. We see that $\Lambda_P$ consists of four loose Legendrian unknots which are completely parallel away from a bounded region where they are linked (in a way that depends on $p$) and are connected via three boat moves and cusp connected sum gluings. As a Weinstein diagram, Diagram J depicts the Weinstein sector $(B^{2n}, \Lambda_P)$, where $\Lambda_P$ is a $P$-loose Legendrian unknot. ◻ ## Explicit examples We now construct several exotic Weinstein manifolds as applications of the construction from Theorem [Theorem 2](#thm: general_antisurgery_intro){reference-type="ref" reference="thm: general_antisurgery_intro"} and Corollary [Corollary 4](#cor: explicit_P_loose_intro){reference-type="ref" reference="cor: explicit_P_loose_intro"}. In all these examples, we consider antisurgery on the Lagrangian disk obtained by perturbing the boundary $S^{n-1} = \partial D^n$ of the zero section $D^n \subset T^*D^n$ in a neighborhood $U \subset S^{n-1}$, as in the construction of $P$-loose Legendrians (see Section [2.4](#sec: antisurgery construction of $P$-loose legendrians){reference-type="ref" reference="sec: antisurgery construction of $P$-loose legendrians"}). **Example 29**. Suppose $U= D^{n-1} \subset S^{n-1}$ is a disk. Then, we may choose the Morse function $g : U \to \mathbb{R}$ to have a single critical point of index 0. So, we obtain $\Lambda_{U}$ with a single maximum. Applying the construction from Theorem [Theorem 2](#thm: general_antisurgery_intro){reference-type="ref" reference="thm: general_antisurgery_intro"}, we obtain a Legendrian $\Lambda_{f}$ by a single handleslide (see Figure [\[fig:example_disk\]](#fig:example_disk){reference-type="ref" reference="fig:example_disk"}).
The resulting Legendrian, $\Lambda_f$, is a standard Legendrian unknot in the complement of the $(n-1)$-handle, $H^{n-1}$. Here, $\Lambda_{f}$ is not loose, and is the Legendrian unknot in the subcritical domain $B^{2n} \cup H^{n-1}$. Indeed, we have constructed the Weinstein diagram of $T^*D^n \cup H^{n-1}$, i.e. $T^*D^n$ with a subcritical handle attached along a subcritical isotropic sphere in a Darboux chart. $\Lambda_+$ is the Legendrian unknot and bounds the Lagrangian unknot; carving out the Lagrangian unknot is equivalent to attaching an index $n-1$ handle. **Example 30**. Let $U \subset S^{n-1}$ be the disjoint union of a codimension zero submanifold $U'$ and a disk $D^{n-1}$. In this case, after cancellation, we expect the remaining Legendrian to be loose. This is because $D_{U' \coprod D^{n-1}}^n$ is Lagrangian isotopic to $D_{U'}^n \natural T^*_0 D^n,$ and by [@Lazarev_grothendieck_group], $T^*D^n \setminus (D_{U'} \natural T^*_0 D^n)$ is obtained from the subcritical sector $T^*D^n \setminus (D_{U'} \coprod T^*_0 D^n)$ by attaching a flexible handle. We can also see this from our explicit construction as follows. Since $U$ is disconnected, we may isotope $\Lambda_{U}$ so that the Reeb chords coming from $g|_{U'}$, the $U'$-perturbation, are shorter than the Reeb chord coming from the maximum of $g|_{D^{n-1}}$, the $D^{n-1}$-perturbation (see Figure [\[fig:disconnected_example\]](#fig:disconnected_example){reference-type="ref" reference="fig:disconnected_example"}). Next, we apply the construction of Theorem [Theorem 2](#thm: general_antisurgery_intro){reference-type="ref" reference="thm: general_antisurgery_intro"} to obtain the Legendrian attaching sphere $\Lambda_{f}$ as follows. A combination of boat moves and handleslides will first remove all the critical points coming from $U'$.
Then, a single handleslide will remove the critical point coming from $D^{n-1}$ (see Figure [\[fig:disconnected_example\]](#fig:disconnected_example){reference-type="ref" reference="fig:disconnected_example"}). We are now in position to cancel $\Lambda_-$ and $\Lambda_+$ as in the proof of Theorem [Theorem 2](#thm: general_antisurgery_intro){reference-type="ref" reference="thm: general_antisurgery_intro"}. After this cancellation, we see that the resulting Legendrian knot $\Lambda_f$ has a loose fishtail chart in a transverse slice. By [@Murphy_12_LLEHD], this implies that $\Lambda_f$ is loose. **Example 31**. When $0\in P$, the $P$-loose Legendrian $\Lambda_P$ is loose. To see this, first recall that $$D_U = \Gamma(df) \cap B^{2n} \text{ for a function } f: D^n \rightarrow \mathbb{R}$$ which is an extension of a Morse function $f: S^{n-1} \to \mathbb{R}$ that is negative on $U$ and positive on the closure of the complement. It is enough to prove that $\Lambda_P$ is loose for $P = \{0\}$; as discussed in Section 2.2.2 of [@Lazarev_Sylvan_2023_PLWS], $\Lambda_{P \coprod Q}$ is isotopic to $\Lambda_P \sharp \Lambda_Q$ and the connected sum of any Legendrian with a loose Legendrian (in a separate Darboux chart) is loose. Let $U = S^k$, that is, $U$ is a $P$-Moore space for $P=\{0\}$ since its relative cohomology is $\mathbb{Z} \cong \mathbb{Z}/\{0\}$ in positive degree. Consider $U$ to be embedded in $S^{n-1}$ as the intersection $D^{k+1} \cap S^{n-1}$. Then, one can take the function $f: D^n \rightarrow \mathbb{R}$ to be a perturbation of $$-(x^2_1 + \cdots + x^2_{k+1}) + x_{k+2}^2 + \cdots + x_{n}^2.$$ Then, $D_U = \Gamma(df)$ is Hamiltonian isotopic to the cotangent fiber, $T^*_0 D^n$, in $T^*D^n$. So, $T^*D^n \setminus T^*_0 D^n$ is just $B^{2n} \cup H^{n-1}$. Further, the stop $\partial D_U$ is a Legendrian that passes through the $H^{n-1}$ exactly one time, i.e., it is a loose Legendrian.
Next, we will explain how to see this explicitly by following the construction in the proof [\[prf: main construction\]](#prf: main construction){reference-type="ref" reference="prf: main construction"}. We use the notation from the proof [\[prf: main construction\]](#prf: main construction){reference-type="ref" reference="prf: main construction"}. On $U = S^k$, $f$ has two critical points, namely, a maximum and a saddle point. Following the construction, to get an explicit Weinstein diagram we perform a single boat move and two handleslides. After cancelling $\Lambda_-$ and $\Lambda_+$, the resulting Legendrian (Diagram H) intersects the subcritical $(n-1)$-handle $H^{n-1}$ three times. This is because initially $\partial D^n$ intersected $H^{n-1}$ once, and each handleslide introduces one new intersection. For $n=3$, this process is illustrated in Figure [\[fig:example\]](#fig:example){reference-type="ref" reference="fig:example"}. In higher dimensions, the construction follows analogously. With Diagram H, instead of continuing with the construction in the proof [\[prf: main construction\]](#prf: main construction){reference-type="ref" reference="prf: main construction"} as is, we first do an additional step. We perform a Legendrian isotopy that passes the circle of cusps over $H^{n-1}$. The resulting Legendrian, $\Lambda_f = \partial D_U$, passes through $H^{n-1}$ exactly one time. To conclude, note that $\Lambda_f$ is now in cancelling position with $H^{n-1}$, see Figure [\[fig:example2\]](#fig:example2){reference-type="ref" reference="fig:example2"}. Hence, it is loose, and remains loose when the flexible handle $H_{flex}$ is attached alongside it to cancel $H^{n-1}$. Thus, we obtain that $\Lambda_0$ is loose. ## Questions One can construct a loose Legendrian unknot by pushing through any codimension zero subdomain $U$ (with boundary) past the Legendrian unknot *near a cusp*.
As observed in [@Murphy_12_LLEHD], $\Lambda_U$ is always loose, see Figure [5](#fig:push_to_loose){reference-type="ref" reference="fig:push_to_loose"}. If the Euler characteristic of $U$ is 0, then $\Lambda_U$ is formally Legendrian isotopic to $\Lambda$ (but not genuinely Legendrian isotopic) and hence called the loose Legendrian unknot. So by the h-principle for loose Legendrians, $\Lambda_U$ and $\Lambda_V$ are isotopic if $\chi(U) = \chi(V)$. Our construction of the $P$-loose Legendrians also involves pushing through certain codimension zero subdomains (neighborhoods of $P$-Moore spaces). However, here the construction is less concrete; one must first push through to create the Lagrangian disk $D_U$, then carve out $D_U$, and then attach a flexible handle, ultimately resulting in our recipe above. Hence, it is natural to ask whether there is a more direct route towards the construction of these $P$-loose Legendrians, analogous to Murphy's construction of loose Legendrians. **Question 32**. *Can a $P$-loose Legendrian unknot be constructed more directly by pushing through a $P$-Moore space past a region of the Legendrian unknot (not near a cusp), after Legendrian isotopy of the unknot?* For example, this pushing operation, if it exists, must have the property that if $U$ is disconnected, then $\Lambda_U$ must be loose, as discussed in Example 30. Another line of inquiry is to see whether our algorithm can provide an alternative proof of the Ganatra-Pardon-Shende [@Ganatra_Pardon_Shende_descent] localization formula from the point of view of Legendrian invariants. The localization formula [@Ganatra_Pardon_Shende_descent] computes the wrapped Fukaya category of the subdomain $X\setminus D$ as $$\mathcal{W}(X\setminus D) \cong \mathcal{W}(X)/D,$$ where $\mathcal{W}(X)/D$ is the algebraic localization of $\mathcal{W}(X)$ by $D$.
They describe a concrete formula computing the morphism chain complexes, $\mathrm{Hom}_{\mathcal{W}(X)/D}(L, K)$, via a dg bar construction that depends on the morphism complex $\mathrm{Hom}_{\mathcal{W}(X)}(L, K)$ as well as $\mathrm{Hom}_{\mathcal{W}(X)}(L, D)$ and $\mathrm{Hom}_{\mathcal{W}(X)}(D, K)$. For certain Lagrangians, like the co-cores of $X$, these complexes can all be computed using the Legendrian dga's of the attaching spheres of $X$ and $\partial D$. Hence, the Ganatra-Pardon-Shende surgery formula can be used a priori to describe the Legendrian dga's of the attaching spheres of $X\setminus D$. On the other hand, the current paper gives an explicit geometric Weinstein presentation for $X\setminus D$ and an explicit depiction of its Legendrian attaching spheres. **Question 33**. *Can one compute the Legendrian dga's of the attaching spheres of $X\setminus D$ produced by Theorem [Theorem 2](#thm: general_antisurgery_intro){reference-type="ref" reference="thm: general_antisurgery_intro"} directly, giving an alternative direct proof of the Ganatra-Pardon-Shende localization formula?* [^1]: Replacing the figures in Figure [3](#fig:min_to_max){reference-type="ref" reference="fig:min_to_max"} by their reflections about the $(x_1, \dots, x_n)$-plane gives analogous Legendrian moves. We omit these in the discussion for simplicity.
{ "id": "2310.03133", "title": "Weinstein presentations for high-dimensional antisurgery", "authors": "Ipsita Datta, Oleg Lazarev, Chindu Mohanakumar, and Angela Wu", "categories": "math.SG", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Conformal dimension of a metric space $X$, denoted by $\dim_C X$, is the infimum of the Hausdorff dimension among all its quasisymmetric images. If conformal dimension of $X$ is equal to its Hausdorff dimension, $X$ is said to be *minimal for conformal dimension*. In this paper we show that the graph of the one-dimensional Brownian motion is almost surely minimal for conformal dimension. We also give many other examples of minimal sets for conformal dimension, which we call *Bedford-McMullen type sets*. In particular we show that Bedford-McMullen self-affine sets with uniform fibers are minimal for conformal dimension. The main technique in the proofs is the construction of "rich families of minimal sets of conformal dimension one". The latter concept is quantified using Fuglede's modulus of measures. address: - Department of Mathematics, University of Toronto, 40 St. George Street, Toronto, Ontario, Canada M5S2E4 - Department of Mathematics, Kansas State University, Manhattan, KS, 66506 - Beijing International Center for Mathematical Research, Peking University, No. 5 Yiheyuan Road, Haidian District, Beijing, China, 100871 author: - Ilia Binder - Hrant Hakobyan - Wen-Bo Li title: Conformal Dimension of the Brownian Graph --- [^1] # Introduction {#Introduction} Let $\eta:[0, \infty) \longrightarrow [0, \infty)$ be an increasing continuous function such that $\eta(0)=0$ and $\eta(t)\underset{t\to\infty}{\longrightarrow}\infty$. A homeomorphism $f: X \to Y$ between metric spaces is called *$\eta$-quasisymmetric* if for all $x, y, z \in X$ with $x \neq z$ and $t>0$ we have $$\begin{aligned} \label{def:quasisymmetry} \frac{d_Y(f(x),f(y))}{d_Y(f(x),f(z))} \leq \eta \left(t\right),\end{aligned}$$ whenever ${d_X(x,y)} \leq t {d_X(x,z)}$. A mapping $f$ is called *quasisymmetric* if it is $\eta$-quasisymmetric for some *distortion function* $\eta$.
Informally, quasisymmetries can be thought of as deformations which distort approximate shapes of objects by a bounded amount, cf. [@Hei01]. Quasisymmetries were introduced by Tukia and Väisälä as a generalization of conformal and quasiconformal mappings to the setting of general metric spaces [@Tukia-Vaisala]. In fact, for $n\geq 2$ a self map of $\mathbb{R}^n$ is quasiconformal if and only if it is quasisymmetric, see e.g. [@Vaisala Section 34]. This equivalence holds in much greater generality, namely for the so-called Loewner spaces introduced by Heinonen and Koskela in [@HK98], the paper which also laid the foundations of the field of analysis on metric spaces. ## Conformal dimension A central problem in geometric mapping theory and analysis on metric spaces is the classification of metric spaces up to quasisymmetries. The study of quasisymmetric invariants thus plays an important role in this context. One such invariant is obtained by studying how the Hausdorff dimension of $X$, denoted by $\dim_H X$, varies under quasisymmetries. It is well known that the Hausdorff dimension of any metric space $(X,d_X)$ can be arbitrarily increased by quasisymmetries. Indeed, if $p<1$ then the identity mapping of $(X,d_X)$ to $(X,d_X^p)$, also known as "snowflaking", is a quasisymmetry which increases the Hausdorff dimension of $X$ by a factor of $1/p>1$. On the other hand, for some spaces, such as $\mathbb{R}^n$, the Hausdorff dimension cannot be made smaller by any quasisymmetry. *Conformal dimension* of a metric space $X$ is the infimum of the Hausdorff dimension among all quasisymmetric images of $X$: $$\begin{aligned} \dim_C(X) = \inf \{\dim_H f(X) : \mbox{$f$ is a quasisymmetry} \}.\end{aligned}$$ Clearly, $\dim_C X\leq \dim_H X$. 
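The effect of snowflaking on dimension can be checked directly from the definitions; the following short computation is ours, included for orientation, and is not part of the original argument.

```latex
% The identity map (X,d_X) -> (X,d_X^p), 0 < p < 1, is eta-quasisymmetric
% with eta(t) = t^p: whenever d_X(x,y) <= t d_X(x,z),
\[
\frac{d_X(x,y)^{p}}{d_X(x,z)^{p}}
  = \left(\frac{d_X(x,y)}{d_X(x,z)}\right)^{p} \le t^{p}.
\]
% Diameters transform as diam_{d^p}(E) = diam_d(E)^p, so covering sums satisfy
\[
\sum_i \operatorname{diam}_{d_X^{p}}(E_i)^{\alpha/p}
  = \sum_i \operatorname{diam}_{d_X}(E_i)^{\alpha},
\]
% and letting the mesh tend to zero gives
\[
\mathcal{H}^{\alpha/p}(X,d_X^{p}) = \mathcal{H}^{\alpha}(X,d_X),
\qquad
\dim_H(X,d_X^{p}) = \tfrac{1}{p}\,\dim_H(X,d_X).
\]
```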
A metric space $X$ is said to be *minimal for conformal dimension* or *minimal* for short if $$\begin{aligned} \dim_C X = \dim_H X.\end{aligned}$$ Since its introduction in [@Pan89], conformal dimension has had a profound influence on analysis on metric spaces and also found important applications in geometric group theory and dynamics, see e.g. [@Bonk-Kleiner:confdim; @Bonk-Meyer; @Kleiner:icm; @MT10]. The problem of finding or estimating conformal dimension has been studied for many general metric spaces and concrete deterministic fractal sets, see [@BT01; @DS97; @KOR18; @Kov06; @Kwa20; @Mac11; @MT10; @TW06]. Nevertheless, many interesting questions remain wide open. For instance, the precise value of conformal dimension of the standard Sierpiński carpet $S_3$ is not known, nor is it known if $\dim_C S_3$ is achieved for any specific quasisymmetric mapping, cf. [@MT10]. Perhaps more strikingly, it is not even known whether there is a quasisymmetric mapping $f$ of the complex plane such that $\dim_H f(S_3)< \dim_H S_3$. The fact that $\dim_C S_3$ is strictly smaller than $\dim_H S_3$ follows from the work of Keith and Laakso [@Keith-Laakso]. ## Main result Given the progress in understanding the quasisymmetric geometry of many deterministic fractals, a very natural and interesting further step is to study quasiconformal geometry of stochastic objects and, in particular, their conformal dimension. Namely, given a random space $X$ with almost sure Hausdorff dimension $d$, it is natural to ask whether the conformal dimension of $X$ is almost surely constant and, if so, whether $X$ is minimal almost surely. In [@RS21], it was shown that random sets known as fractal percolation are not minimal almost surely, though no explicit values for the conformal dimension were obtained. In this paper, we denote by $W(t)$ the $1$-dimensional Brownian motion, and by $\Gamma(W)$ its graph. It is well known that the Hausdorff dimension of the graph $\Gamma(W)$ is ${3}/{2}$ a.s., see e.g. 
[@MP10 Theorem $4.29$]. The following is the main result of this paper. **Theorem 1**. *The graph $\Gamma(W)$ of the $1$-dimensional Brownian motion is minimal for conformal dimension almost surely.* As far as we know, Theorem [Theorem 1](#bg){reference-type="ref" reference="bg"} is the first result giving minimality and an explicit value for conformal dimension of continuous random spaces and "natural" stochastic processes. In what follows we briefly describe the usual tools needed to obtain lower bounds for conformal dimension, and how we generalize them so that they apply to the graph of Brownian motion. ## Lower bounds and modulus Lower bounds for the conformal dimension, and therefore also proofs of minimality of a metric space $X$, are usually obtained by utilizing various concepts of "sufficiently rich families of rectifiable curves" in $X$. The latter can be quantified through the concepts of modulus of path families, the validity of Poincaré inequalities, or the existence of certain diffuse enough measures on families of curves, see [@MT10 Chapter 4] and references therein. The most relevant to the present work is the classical concept of the *modulus of a curve family* $\Gamma$ in $X$, a fundamental tool in geometric function theory [@HK98; @Hei01]. Recall that if $p\geq 1$, $(X,d,\mu)$ is a metric measure space, and $\Gamma$ is a family of curves in $X$, then the $p$-modulus of $\Gamma$ is defined by $$\begin{aligned} \mathrm{mod}_p(\Gamma) = \inf_{\rho} \int_X \rho^p d \mu,\end{aligned}$$ where the infimum is over all Borel functions $\rho$ such that $\int_{\gamma} \rho ds\geq 1$ for all $\gamma\in\Gamma$. Sometimes we will denote the modulus by $\mathrm{mod}_p(\Gamma,\mu)$ to emphasize the dependence on the background measure $\mu$. 
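To fix ideas, here is a standard worked example (ours, not from the original text): on the unit square, the family of vertical segments has $p$-modulus exactly $1$.

```latex
% X = [0,1]^2 with Lebesgue measure, Gamma = { {x} x [0,1] : x in [0,1] }.
% Upper bound: rho = 1 is admissible, so mod_p(Gamma) <= L^2([0,1]^2) = 1.
% Lower bound: if rho is admissible, then for each x, Jensen's inequality gives
\[
1 \le \left(\int_0^1 \rho(x,y)\,dy\right)^{p} \le \int_0^1 \rho(x,y)^{p}\,dy ,
\]
% and integrating over x in [0,1] yields
\[
\int_{[0,1]^2} \rho^{p}\, d\mathcal{L}^2 \ge 1 ,
\qquad\text{hence}\qquad
\operatorname{mod}_p(\Gamma) = 1 .
\]
```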
An important result of Tyson [@Tyson], see Theorem [Theorem 10](#modest){reference-type="ref" reference="modest"}, states that under quite general conditions one can estimate $\dim_C X$ from below provided $X$ contains a curve family of positive modulus. The prototypical example in this context is the product space $X=E\times [0,1]$, where $E$ is a compact set in $\mathbb{R}^n$. The curve family $\{ \{x\} \times [0,1]: x\in E\}$ in $X$ then has positive modulus and hence $X$ is minimal, see [@BT01]. Since the graph of the $1$-dimensional Brownian motion contains no rectifiable curves, the modulus of any curve family in it vanishes, and one cannot use Tyson's theorem to prove Theorem [Theorem 1](#bg){reference-type="ref" reference="bg"}. In [@Hak09] the second named author showed that an analogue of Tyson's theorem still holds if $X$ contains a "sufficiently rich family $\mathcal{E}$ of minimal sets of conformal dimension one". To quantify the "richness" of such families of minimal sets, Fuglede's notion of modulus of families of measures [@Fug57] was used. Specifically, Theorem 5.5 in [@Hak09] states that if $\mathcal{E}=\{E\}$ is a family of subsets of $X$, each of conformal dimension one and each supporting a measure $\lambda_E$, such that for every ball $B(x,r)\subset X$ we have $$\begin{aligned} \label{ineq:growth} \begin{split} \mu(B(x,r))&\lesssim r^d, \\ \lambda_{E}(B(x,r))&\gtrsim r \quad \mbox{for all } E\in\mathcal{E} \mbox{ and } x\in E, \end{split}\end{aligned}$$ then a lower bound $\dim_C X\geq d$ can be obtained if $$\begin{aligned} \label{ineq:modpositive} \mathrm{Mod}_1(\{\lambda_E\})&>0.\end{aligned}$$ One of our main results, Theorem [Theorem 11](#fmodest){reference-type="ref" reference="fmodest"}, generalizes the minimality theorems from [@Tyson] and [@Hak09] by requiring that conditions ([\[ineq:growth\]](#ineq:growth){reference-type="ref" reference="ineq:growth"}) hold only locally. 
This allows us to obtain many new examples of minimal spaces, including minimal sets which are graphs of continuous functions of any dimension $d\in[1,2]$, self-affine sets, and the Brownian graph. Next we describe these applications in more detail. ## Minimality of Bedford-McMullen type sets We say a set $K\subset[0,1]^2$ is of *Bedford-McMullen type with uniform fibers* if it can be constructed as follows. Suppose $k,\ell,m$ are positive integers such that ${\ell <m}$ and $1\leq k\leq m$. Choose a pattern/subset $D_1\subset \{1,\ldots,m\}\times\{1,\ldots,\ell\}$ so that every row of $D_1$ has $k$ rectangles (thus $\#(D_1)=k\ell$). Divide the unit square $[0,1]^2$ into $m\times \ell$ congruent closed rectangles of width $1/m$ and height $1/\ell$ with disjoint interiors. Keep the rectangles corresponding to the pattern $D_1$ and remove the rest. Divide each of the $k\ell$ rectangles remaining after the first step into $m\times\ell$ rectangles of width $1/m^2$ and height $1/\ell^2$. Choose a (possibly different) pattern $D_2\subset \{1,\ldots,m\}\times\{1,\ldots,\ell\}$ so that every row of $D_2$ has $k$ elements. In each rectangle remaining after the first step keep the $m^{-2}$ by $\ell^{-2}$ rectangles corresponding to the pattern $D_2$ and remove the rest. Continuing by induction we suppose that the set $K_n$ obtained at step $n$ is a union of $(k\ell)^n$ rectangles of width $1/m^n$ and height $1/\ell^n$. Each of these rectangles is then divided into $m\times \ell$ rectangles and $k \ell$ are kept within each according to a pattern $D_{n+1}\subset\{1,\ldots,m \}\times\{1,\ldots,\ell\}$ which contains $k$ elements in each of the $\ell$ rows. By induction we define $$K=K(\{D_i\})= \bigcap_{n=0}^\infty K_n.$$ See Figure [1](#minimalgraph){reference-type="ref" reference="minimalgraph"} for an example with $m=4$ and $k=\ell=2$. In Section [4](#MSS){reference-type="ref" reference="MSS"} we prove the following. 
![The first three generations of a Bedford-McMullen type set with uniform fibers, with $m=4$, $k=\ell=2$. At each step one chooses $4$ rectangles out of $8$ possible ones, so that $2$ are chosen from each row. The resulting set $K$ has Hausdorff dimension $3/2$, and *every* nontrivial intersection of $K$ with a horizontal line is of Hausdorff dimension $1/2$. By Theorem [Theorem 2](#thm:minimalBM){reference-type="ref" reference="thm:minimalBM"}, the conformal dimension of $K$ is also $3/2$. ](MinimalGraph){#minimalgraph height="1.2in"} **Theorem 2**. *If $K\subset[0,1]^2$ is constructed as above then it is minimal for conformal dimension, and $$\begin{aligned} \label{eqn:dimK} \dim_C K = \dim_H K = 1+\log_m k. \end{aligned}$$* If $D_i=D$ for every $i\geq 1$, we denote the sets constructed above by $K(D)$ and call them Bedford-McMullen self-affine sets, cf. [@BP17]. The following is an immediate consequence of Theorem [Theorem 2](#thm:minimalBM){reference-type="ref" reference="thm:minimalBM"}. **Corollary 3**. *Every Bedford-McMullen self-affine set $K(D)$ with uniform fibers is minimal for conformal dimension.* It was shown in [@Mac11] that all Bedford-McMullen self-affine sets are either minimal for *conformal Assouad dimension* (when the projection in one direction covers an interval) or have conformal dimension $0$. In the case of the Bedford-McMullen self-affine sets with uniform fibers, the Hausdorff dimension is equal to the Minkowski and Assouad dimensions. It then follows from Corollary [Corollary 3](#BMSM){reference-type="ref" reference="BMSM"} that the conformal dimension and the conformal Assouad dimension of the Bedford-McMullen self-affine sets with uniform fibers are equal to each other. It is not known if this is true in the case of non-uniform fibers. Since the patterns $D_i$ can vary from step to step in the construction of $K$, we can make Bedford-McMullen sets which are graphs of continuous functions. 
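The dimension formula of Theorem 2 is easy to evaluate numerically. A minimal sketch (the helper name `bm_dimension` is ours, not the authors'), checked against the example of Figure 1, where $m=4$ and $k=2$ give dimension $3/2$:

```python
import math

def bm_dimension(m: int, k: int) -> float:
    """Dimension 1 + log_m(k) of a Bedford-McMullen type set with
    uniform fibers (Theorem 2); requires 1 <= k <= m."""
    if not 1 <= k <= m:
        raise ValueError("need 1 <= k <= m")
    return 1.0 + math.log(k) / math.log(m)

# Example of Figure 1: m = 4, k = l = 2.
print(bm_dimension(4, 2))  # 1.5
```

Note the extreme cases: `k = 1` gives dimension `1`, while `k = m` (every rectangle kept) gives dimension `2`.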
In fact, in Section [4](#MSS){reference-type="ref" reference="MSS"} we prove the following consequence of Theorem [Theorem 2](#thm:minimalBM){reference-type="ref" reference="thm:minimalBM"}. **Corollary 4**. *For every $d\in[1,2]$ there is a continuous function $f:[0,1]\to\mathbb{R}$ such that the graph $\Gamma(f)$ of $f$ is minimal for conformal dimension, and $\dim_C \Gamma(f) = d$.* ![To obtain the family $\mathcal{E}$ of "vertical" Cantor sets in $K$ one has to choose one rectangle from each row at every step. Every vertical Cantor set $E$ is minimal, has conformal dimension $1$, and is equipped with the measure $\lambda_E$, which is just the pullback of Lebesgue measure from $\{0\}\times [0,1]$ under the orthogonal projection onto the $y$-axis. The family $\{\lambda_E\}$ has positive modulus with respect to the natural measure $\mu$ on $K$.](VerticalCantor){#VC height="1.2 in"} To prove minimality of the sets $K$ in Section [4](#MSS){reference-type="ref" reference="MSS"} we observe that they contain large collections of minimal sets of conformal dimension $1$. Indeed, choosing *one* rectangle (out of $k$ possible) of generation $n$ from each row at every step one obtains an uncountable collection of *vertical Cantor sets* in $K$. Each such vertical Cantor set $E\subset K$ has conformal dimension $1$. Moreover, $E$ is equipped with a measure $\lambda_E$, which is the pullback of the Lebesgue measure under the projection $\pi_y$ onto the $y$-axis (or equivalently the $\mathcal{H}^1$ measure restricted to $E$). To apply Theorem [Theorem 11](#fmodest){reference-type="ref" reference="fmodest"} it is enough to show that there is a measure $\mu$ on $K$ such that $\mu(B_r)\lesssim r^d$, with $d=\dim_H K$, so that inequality ([\[ineq:modpositive\]](#ineq:modpositive){reference-type="ref" reference="ineq:modpositive"}) holds. 
Equivalently, one needs to show that if $\rho$ is a Borel function on $K$ such that $\int \rho d\lambda_E\geq 1$ for every vertical Cantor set $E$ in $K$, then $$\begin{aligned} \label{ineq:intpositive} \int_K \rho d\mu>0.\end{aligned}$$ It is not hard to see that if $\pi_y:(x,y)\mapsto(0,y)$, then for ([\[ineq:intpositive\]](#ineq:intpositive){reference-type="ref" reference="ineq:intpositive"}) to hold it is necessary for the push-forward measure $(\pi_y)_{*}(\mu)$ to be absolutely continuous with respect to the Lebesgue measure $\mathcal{L}^1$ on the vertical interval $\{0\}\times [0,1]$. Such a $\mu$ can be obtained easily by setting the measure of every $n$-block $Q\subset K_n$ to be the same, i.e., $\mu(Q)=(k\ell)^{-n}$; each level-$n$ horizontal strip then carries mass $k^n(k\ell)^{-n}=\ell^{-n}$, which is exactly the length of its projection onto the $y$-axis. It turns out that with this choice of $\mu$ and the family $\{\lambda_E\}$ as above, the conditions of Theorem [Theorem 11](#fmodest){reference-type="ref" reference="fmodest"} are satisfied and hence $K$ is minimal. ## Conformal dimension of the Brownian graph {#CDBG} Minimal sets of the previous section have the special property that every nontrivial intersection of $K$ with a horizontal line has Hausdorff dimension equal to $\log_m k=\dim_H(K)-1$. This can be thought of as some type of product-like structure on $K$ (at least at the level of Hausdorff dimensions). It turns out that the graph of Brownian motion has this property as well almost surely. Indeed, from the well-known dimension doubling theorem of Kaufman [@MP10 Theorem $9.28$] it follows that almost surely for *every* $a\in\mathbb{R}$ we have $\dim_H \left( \Gamma(W) \cap Z_a \right) = {1}/{2}$, where $Z_a$ is the horizontal line through $(0,a)$, see [@MP10 Corollary $9.30$]. On the other hand, $\dim_H\Gamma(W)=3/2$ almost surely, see [@MP10 Theorem $4.29$]. Therefore, almost surely we have $$\begin{aligned} \label{equation:BMslices} \dim_H (\Gamma(W)\cap Z_a) = \dim_H \Gamma(W)-1\end{aligned}$$ for every $a\in\mathbb{R}$. 
Equation ([\[equation:BMslices\]](#equation:BMslices){reference-type="ref" reference="equation:BMslices"}) is one of the motivations to suspect that $\Gamma(W)$ could be minimal for conformal dimension as well. However, there are immediate questions that have to be addressed in order to apply Theorem [Theorem 11](#fmodest){reference-type="ref" reference="fmodest"}. Namely, are there "vertical" subsets $E$, measures $\lambda_E$, and a measure $\mu$ on $\Gamma(W)$ so that ([\[ineq:growth\]](#ineq:growth){reference-type="ref" reference="ineq:growth"}) and ([\[ineq:modpositive\]](#ineq:modpositive){reference-type="ref" reference="ineq:modpositive"}) hold? ![ The family $\mathcal{E}$ of "vertical" subsets of the Brownian graph $\Gamma(W)$ is constructed by intersecting $\Gamma(W)$ with horizontal dyadic strips as in the picture above and carefully selecting those parts on which the Brownian sample path "grows fast". ](BrownianMotion){#VC height="1.2 in"} To mimic what was done for the sets $K$ above we construct the subsets of $\Gamma(W)$ by looking at the part of the graph obtained by intersecting it with "dyadic strips", i.e., $$\Gamma(W)\cap\left\{(x,y) : y \in\left(\frac{m}{2^n},\frac{m+1}{2^n}\right]\right\}.$$ The resulting subsets are not as nicely behaved as in the case of $K$ above. In fact, we do not choose all such intersections, but only those on which Brownian motion "increases fast"; see Section [6](#BGMAS){reference-type="ref" reference="BGMAS"} for details. It turns out that it is possible to do this so that the resulting "vertical" subsets of $\Gamma(W)$ are minimal. For this reason we provide quite general criteria for minimality of Cantor-like sets. See Theorems [Theorem 13](#cd1){reference-type="ref" reference="cd1"}, [Theorem 15](#unioncd1){reference-type="ref" reference="unioncd1"}, [Theorem 16](#unioncd1new){reference-type="ref" reference="unioncd1new"} and Appendix [8](#appendix){reference-type="ref" reference="appendix"}. 
These criteria allow us to construct "vertical" subsets $E$ of the Brownian graph $\Gamma(W)$ which carry a family of measures $\mathbf{E}=\{\lambda_E\}_{E\in\mathcal{E}}$ satisfying the second condition in ([\[ineq:growth\]](#ineq:growth){reference-type="ref" reference="ineq:growth"}). Perhaps the most interesting property of the Brownian graph $\Gamma(W)$ studied in this paper, from the point of view of metric geometry, is the fact that there exists a measure $\mu$ on $\Gamma(W)$ which "sees" the family $\mathbf{E}$ supported on "vertical" minimal subsets of $\Gamma(W)$, i.e., for which ([\[ineq:modpositive\]](#ineq:modpositive){reference-type="ref" reference="ineq:modpositive"}) holds. This measure $\mu$ is constructed using the so-called *local time of $W(t)$ at level $a$*, denoted by $L^{a}(t)$, "which measures the amount of time Brownian motion spends at $a$". Local time $L^{a}(t)$ gives rise to a Borel measure on $\Gamma(W) \cap Z_a$ denoted by $l^{a}$, and integrating the latter with respect to $a$ gives the desired measure $\mu$ on $\Gamma(W)$. That is, for every measurable set $A \subseteq \Gamma(W)$, we let $$\label{spacemeasure} \mu\left( A \right) = \int_{-\infty}^{\infty} l^a(A \cap Z_a) \ da.$$ Thus, the "product-like structure" of $\mu$ is built into its definition. The most technical part of the paper, Section [6](#BGMAS){reference-type="ref" reference="BGMAS"}, is devoted to proving various properties of the families of sets $\mathcal{E}$, families of measures $\mathbf{E}=\{\lambda_E\}_{E\in\mathcal{E}},$ and the measure $\mu$. We are able to show that these quantities satisfy the conditions of Theorem [Theorem 11](#fmodest){reference-type="ref" reference="fmodest"}, and applying it we prove Theorem [Theorem 1](#bg){reference-type="ref" reference="bg"}. The paper is organized as follows. 
In Section [2](#Preliminary){reference-type="ref" reference="Preliminary"}, we introduce basic notation, definitions, and properties that will be used in this paper. In Section [3](#CDM){reference-type="ref" reference="CDM"}, we define some of the main tools of this paper. Section [4](#MSS){reference-type="ref" reference="MSS"} is devoted to the proof of Theorem [Theorem 2](#thm:minimalBM){reference-type="ref" reference="thm:minimalBM"}. We introduce essential properties of Brownian motion for the proof of Theorem [Theorem 1](#bg){reference-type="ref" reference="bg"} in Section [5](#BM){reference-type="ref" reference="BM"}. In Section [6](#BGMAS){reference-type="ref" reference="BGMAS"}, we prove Theorem [Theorem 1](#bg){reference-type="ref" reference="bg"}: the graph of $1$-dimensional Brownian motion is minimal for conformal dimension almost surely. In Section [7](#Conclusion){reference-type="ref" reference="Conclusion"} we record some observations and formulate several open problems about conformal dimension. Finally, Appendix $\mathrm{A}$ contains generalizations of Theorem [Theorem 13](#cd1){reference-type="ref" reference="cd1"} and gives more general criteria for minimality of Cantor sets. # Preliminaries {#Preliminary} For the convenience of the reader, in this section we collect some of the basic notation, conventions, and elementary results used in the paper. These are quite standard and experts may want to skip this section without much harm. ## Notation and definitions In this paper we write $C = C(a_1, \ldots, a_n)$ if a constant $C$ can be chosen to depend only on the parameters $a_1, \ldots, a_n$. By $\mathrm{Card}(A)$ we denote the cardinality of a set $A$. If there is a constant $C > 0$ such that $f(x) \leq C g(x)$ or $f(x) \geq C g(x)$, we may write $f(x) \lesssim g(x)$ or $f(x) \gtrsim g(x)$, respectively, if the value of $C$ is not important. If both inequalities hold we may write $f(x)\asymp g(x)$. 
By $\mathbb{R}^n$ we will denote the $n$-dimensional Euclidean space, which will be equipped with the Lebesgue measure $\mathcal{L}^n$ and the standard Euclidean metric denoted by $| \ \cdot \ |$. We call an element of the family of intervals $\left\{\left[\frac{i}{2^n}, \frac{i+1}{2^n} \right] \right\}_{i \in \mathbb{Z}}$ an *$n^{\mathrm{th}}$-generation dyadic interval*. Similarly, an element of $\left\{\left[\frac{i}{2^n}, \frac{i+1}{2^n} \right] \times \left[\frac{j}{2^n}, \frac{j+1}{2^n} \right] \right\}_{i,j \in \mathbb{Z}}$ will be called an *$n^{\mathrm{th}}$-generation dyadic square*. The projections of $\mathbb{R}^2$ onto the $x$ and $y$ axes will be denoted by $\pi_x$ and $\pi_y$, respectively. More precisely, $\pi_x((x,y)) = (x,0)$ and $\pi_y((x,y)) = (0,y).$ For $a\in\mathbb{R}$ we will denote by $Z_a$ the horizontal line passing through $(0,a)$, that is, $Z_a = \{(x,a): x \in \mathbb{R}\}$. Let $(X, d)$ be a metric space. We will denote by $B(x,r)$ the open ball in $X$ of radius $r>0$ centered at $x\in X$, i.e., $B(x,r) = \{y \in X: d(x,y) < r \}.$ For a ball $B=B(x,r)\subseteq X$ and $\lambda>0$ we let $\lambda B = B(x,\lambda r)$. Given subsets $E$ and $F$ of $X$ and $r>0$, we define the diameter of $E$, the distance between $E$ and $F$, and the $r$-neighborhood of $E$ in $X$ as follows: $$\begin{aligned} \textrm{diam}(E) &=\sup\{d(x,y) : x,y \in E\}, \\ \textrm{dist}(E,F) &= \inf\{d(x,y): x \in E, y \in F\}, \\ N_r(E) &= \{x \in X: \exists \ y \in E \ \textrm{such that} \ d(x,y) < r\}.\end{aligned}$$ If $\textrm{diam}(E), \textrm{diam}(F) > 0$, the *relative distance between $E$ and $F$* is the following quantity $$\begin{aligned} \Delta(E, F) = \frac{\textrm{dist}(E, F)}{\min\{\textrm{diam}(E), \textrm{diam}(F)\}}.\end{aligned}$$ We will denote by $\mathcal{H}^\alpha$ the *Hausdorff $\alpha$-measure* on $X$ for $\alpha > 0$. 
More specifically, $\mathcal{H}^\alpha(X)=\lim_{\epsilon\to0} \mathcal{H}^\alpha_{\epsilon}(X)$, where $$\begin{aligned} \mathcal{H}^\alpha_{\epsilon}(X) = \inf \left\{ \sum_{i=1}^{\infty} \mathrm{diam}(E_i)^\alpha : X \subseteq \bigcup_{i=1}^{\infty} E_i, \, \mathrm{diam}(E_i) \leq \epsilon\right\}.\end{aligned}$$ The Hausdorff dimension of $X$ is defined to be $$\dim_H(X) = \inf \{\alpha: \mathcal{H}^\alpha(X) = 0 \}.$$ The following theorem is often used to obtain lower bounds for the Hausdorff dimension of a metric space, see [@BP17 Lemma $1.2.8$]. **Theorem 5** (Mass Distribution Principle). *Let $(X, d_X, \mu)$ be a metric measure space where $\mu(X) > 0$. If there exists a constant $C = C(X)>0$ such that for any $U \subseteq X$, $\mu(U) \leq C (\mathrm{diam}(U))^\alpha$ for some $\alpha>0$, then $\dim_H(X) \geq \alpha$.* If $(X, \mu)$ is a measure space with $0 < \mu(X) < \infty$, we write $$\intbar_{X} f d\mu = \frac{1}{\mu(X)}\int_X f d \mu.$$ We say a sequence of measures $\mu_n$ *converges weakly* to a measure $\mu$ on $X$ if for every continuous compactly supported function $f : X \to \mathbb{R}$, we have $$\int_X f d\mu_n \to \int_X f d\mu.$$ A metric measure space $(X, d, \mu)$ is called *doubling* if there exists a constant $C \geq 1$ such that for any $x \in X$ and $r > 0$, $\mu(B(x,r)) \leq C \mu(B(x, r/2))$. ## Covering lemma and measure theory The reader may refer to Theorems $1.2$ and $1.16$ in [@Hei01] for a proof of the following lemma. **Lemma 6**. 
*Every family $\mathcal{B}$ of balls of uniformly bounded diameter in a metric space $X$ contains a pairwise disjoint subfamily $\mathcal{B}_1$ such that $$\bigcup_{B \in \mathcal{B}}B \subseteq \bigcup_{B_1 \in \mathcal{B}_1}5B_1.$$* *If $X$ is bounded, then there exists a subcollection $\mathcal{B}_2$ of $\mathcal{B}$ such that $\mathcal{B}_2$ covers $X$ and $$\frac{1}{5}\mathcal{B}_2 = \left\{\frac{1}{5}B_2: B_2 \in \mathcal{B}_2 \right\}$$ is pairwise disjoint.* The following result is known as the *Bojarski lemma* and is often used to estimate modulus from above, cf. [@Boj88 Lemma $4.2$]. **Theorem 7** (Bojarski). *Suppose that $\mathcal{B} = \{B_i\}_{i=1}^\infty$ is a countable collection of balls in a doubling metric measure space $(X ,\mu)$ and that $a_i \geq 0$ are real numbers. Then $$\int_X\left( \sum_{\mathcal{B}} a_i \chi_{\lambda B_i} \right)^p d \mu \leq C(\lambda, p, \mu) \int_X\left( \sum_{\mathcal{B}} a_i \chi_{B_i} \right)^p d \mu$$ for $1 < p < \infty$ and $\lambda > 1$.* Let $f$ be a real-valued function on a topological space $X$. If $\{x : f(x) > a\}$ is open for every $a$, $f$ is said to be *lower semicontinuous*. Equivalently, $f$ is lower semicontinuous if and only if for any $x_0 \in X$, $$\liminf _{x\to x_0}f(x) \geq f(x_0).$$ If $\{x : f(x) < a\}$ is open for every $a$, $f$ is said to be *upper semicontinuous*. Equivalently, $f$ is upper semicontinuous if and only if for any $x_0 \in X$, $$\limsup _{x\to x_0}f(x)\leq f(x_0).$$ The following theorem states that any real-valued integrable function can be approximated by an upper semicontinuous and a lower semicontinuous function. See [@Rud86 Theorem $2.25$]. **Theorem 8** (Vitali--Carathéodory). 
*Suppose $f: X \to \mathbb{R}$ is a real-valued integrable function on a measure space $(X, \mu)$. Then for any $\epsilon> 0$ there exist functions $u$ and $v$ on $X$ such that $u \leq f \leq v$, where $u$ is upper semicontinuous and $v$ is lower semicontinuous, and $$\int_X (v-u) d\mu < \epsilon.$$* ## Quasisymmetries and their basic properties A homeomorphism $f: X \to Y$ is called *$\eta$-quasisymmetric*, where $\eta:[0, \infty) \to [0, \infty)$ is a given homeomorphism, if $$\frac{d_Y(f(x),f(y))}{d_Y(f(x),f(z))} \leq \eta \left(\frac{d_X(x,y)}{d_X(x,z)} \right)$$ for all $x, y, z \in X$ with $x \neq z$. The map $f$ is called *quasisymmetric* if it is $\eta$-quasisymmetric for some *distortion function* $\eta$. Some of the properties of quasisymmetric maps used below are summarized in the following proposition. We refer to [@Hei01] and [@HK98] for these and other properties of quasisymmetric maps in metric spaces. **Proposition 9**. *Suppose $f:X\to Y$ and $g:Y\to Z$ are $\eta$ and $\eta'$-quasisymmetric mappings, respectively.* - *The composition $g\circ f : X\to Z$ is an $\eta' \circ \eta$-quasisymmetric map.* - *The inverse $f^{-1}:Y\to X$ is a $\theta$-quasisymmetric map, where $\theta(t) = 1/\eta^{-1}(1/t)$.* - *Quasisymmetries map bounded spaces to bounded spaces. If $A$ and $B$ are subsets of $X$ and $A\subseteq B$, then $$\begin{aligned} \frac{1}{2\eta\left( \frac{\mathrm{diam}(B)}{\mathrm{diam}(A)} \right)} \leq \frac{\mathrm{diam}(f(A))}{\mathrm{diam}(f(B))} \leq \eta\left( \frac{2\mathrm{diam}(A)}{\mathrm{diam}(B)} \right). \end{aligned}$$* # Conformal dimension and modulus {#CDM} Let $(X, d, \mu)$ be a metric measure space and $p \geq 1$. 
The *$p$-modulus* of a family of curves $\Gamma$ is defined as $$\mathrm{mod}_p (\Gamma) = \inf \int_{X} \rho^p d\mu$$ where the infimum is taken over all Borel functions $\rho:X\to[0,\infty)$ such that $$\label{admissible} \int_\gamma \rho \ ds \geq 1, \ \forall \ \gamma \in \Gamma.$$ Functions $\rho$ that satisfy inequality [\[admissible\]](#admissible){reference-type="eqref" reference="admissible"} are called *admissible functions* for $\Gamma$. The following theorem asserts that if a metric space satisfies an upper mass bound and has a sufficiently rich family of curves, then a lower bound for its conformal dimension can be obtained. See [@MT10 Proposition $4.1.8$]. **Theorem 10**. *Let $(X, d, \mu)$ be a compact, doubling metric measure space satisfying the upper mass bound $$\mu(B(x,r)) \le C \cdot r^q$$ for some constant $C = C(X) > 0$ and all balls $B(x,r)$ in $X$. If $X$ contains a curve family $\Gamma$ such that $\mathrm{mod}_p (\Gamma) > 0$ for some $1 < p \leq q$, then $\dim_C(X) \geq q$.* In [@Hak09], lower bounds for conformal dimension were obtained for spaces that have "few" or no curve families at all. The idea was to use Fuglede modulus of families of measures introduced in [@Fug57] rather than the classical modulus of path families. Next we recall the definition of Fuglede modulus. Let $(X, d, \mu)$ be a metric measure space and $p \geq 1$. Suppose $\mathbf{E}=\{\lambda\}_{\lambda\in \mathbf{E}}$ is a family of measures on $X$ such that the domain of each $\lambda\in\mathbf{E}$ contains the domain of $\mu$. 
The *$p$-modulus* of $\mathbf{E}$ is defined as $$\mathrm{Mod}_p(\mathbf{E}) = \inf \int_{X} \rho^p d\mu,$$ where the infimum is taken over all Borel functions $\rho:X\to[0,\infty)$ such that $$\label{admissiblem} \int \rho \ d \lambda \geq 1, \ \forall \ \lambda \in \mathbf{E}.$$ Functions $\rho$ that satisfy inequality [\[admissiblem\]](#admissiblem){reference-type="eqref" reference="admissiblem"} are called *admissible functions* for $\mathbf{E}$. The following result is a generalization of Theorem $5.5$ of [@Hak09]. The main difference is that the assumptions here are local in nature. **Theorem 11**. *Let $(X, d, \mu)$ be a bounded, doubling metric measure space. Suppose there is a constant $C = C(X) > 0$ such that for every $x \in X$, there exists $r_0 = r_0(x) > 0$ such that $$\label{um} \mu(B(x,r)) \leq C r^q,$$ whenever $B(x,r) \subseteq X$ and $r < r_0$.* *Let $\mathcal{E}$ be a family of subsets of $X$ and $\mathbf{E} = \{\lambda_E\}_{E \in \mathcal{E}}$ be a collection of measures associated to $\mathcal{E}$, where $\lambda_E$ is supported on $E$, such that* 1. *$\dim_C(E) \geq 1$ for all $E \in \mathcal{E}$;* 2. *for any $s > 1$, $E \in \mathcal{E}$ and $x \in E$ there exist constants $C_1 = C_1(\mathcal{E}, s)$, $C_2 = C_2(\mathcal{E}, s)$ and $r_1 = r_1(\mathcal{E}, s, x)>0$ such that for any $r < r_1$, we have $$\label{linear} \lambda_{E}(B(x,r) \cap E) \geq C_1r^s, \ \textrm{when} \ \frac{1}{C_2}B(x,r) \cap E \neq \emptyset.$$* *If $\mathrm{Mod}_{p}(\mathbf{E}) > 0$ for some $1 \leq p < q$, then $\dim_C(X) \geq q$.* *Proof of Theorem [Theorem 11](#fmodest){reference-type="ref" reference="fmodest"}.* Suppose, for the sake of contradiction, that $f$ is a quasisymmetry on $X$ such that $\dim_H f(X) < q$. Choose $\alpha < q$ so that $\alpha > \max(p, \dim_H f(X))$. Choose $t < 1$ such that $\alpha > \frac{1}{t} \dim_H f(X)$. For any $E \in \mathcal{E}$, we have $\mathcal{H}^t_{1/k}(f(E)) \to \infty$ as $k \to \infty$ since $t < 1 \leq \dim_C(E) \leq \dim_H f(E)$. 
Let $k_E \in \mathbb{N}$ denote the smallest integer such that $\mathcal{H}^t_{1/k_E}(f(E)) \geq 1$. Let $\mathcal{E}_i = \left\{E \in \mathcal{E}: k_E \leq i \right\}$ and $\mathbf{E}_i= \left\{ \lambda_E \in \mathbf{E}: E \in \mathcal{E}_i \right\}$. Then $\mathbf{E} = \bigcup_{j=1}^\infty \mathbf{E}_j$ and, by subadditivity of modulus, see [@Fug57], we have $$\mathrm{Mod}_\alpha \left( \mathbf{E} \right) \leq \sum_{j=1}^{\infty}\mathrm{Mod}_\alpha \left( \mathbf{E}_j \right).$$ Note that by Hölder's inequality if $\mathrm{Mod}_\alpha \left( \mathbf{E}_j \right) = 0$ then $\mathrm{Mod}_p \left( \mathbf{E}_j \right) = 0$, since $\alpha>p$. Therefore, to prove the theorem it is enough to show that $\mathrm{Mod}_\alpha \left( \mathbf{E}_j \right) = 0$ for every $j\in\mathbb{N}$. Since $\dim_H(f(X)) < \alpha t$, for any $\epsilon, \delta > 0$ there exists a collection of balls $\mathcal{B}' = \{B'_i\}$ on $f(X)$ such that $\mathcal{B}'$ covers $f(X)$, $\mathrm{diam}(B'_i) < \delta$ and $$\sum_i \mathrm{diam}(B'_i)^{\alpha t} < \epsilon.$$ Let $\epsilon, \delta$ be fixed. We choose some fixed $s > 1$ with $q > \alpha s$. Next, using $\mathcal{B}'$ we will construct a cover $\mathcal{F}'$ of $f(X)$, such that for every $F' \in \mathcal{F'}$ and a circumscribing ball $F$ of $f^{-1}(F')$ (i.e., $F$ is a metric ball that covers $f^{-1}(F')$ and has the same diameter as $f^{-1}(F')$), the following conditions are satisfied: 1. $\mu(F) \leq C\mathrm{diam}(F)^q$ for some constant $C = C(X) > 0$;[\[sum\]]{#sum label="sum"} 2. $\lambda_{E}(F \cap E) \geq C_1\mathrm{diam}(F)^s$ when $\frac{1}{C_2}F \cap E \neq \emptyset$ for some $C_1 = C_1(s)$ and $C_2 = C_2(s)$;[\[slinear\]]{#slinear label="slinear"} 3. $\sum_{F' \in \mathcal{F}'} \mathrm{diam}(F')^{\alpha t} < \epsilon$, where the constants $C, C_1, C_2$ are the same as those in ([\[um\]](#um){reference-type="ref" reference="um"}) and ([\[linear\]](#linear){reference-type="ref" reference="linear"}). 
To construct $\mathcal{F}'$ we start by picking any $B'\in\mathcal{B}'$. If the circumscribing ball of $f^{-1}(B')$ satisfies conditions [\[sum\]](#sum){reference-type="eqref" reference="sum"} and [\[slinear\]](#slinear){reference-type="eqref" reference="slinear"}, we include $B'$ in $\mathcal{F}'$. Otherwise, we define a cover $\mathcal{F}_{B'}$ of $B'$ as follows. Let $$B'(n) = \left\{x \in B': r_0\left(f^{-1}(x)\right) \geq \frac{1}{n} \ \textrm{and} \ r_1\left(f^{-1}(x)\right) \geq \frac{1}{n}\right\},$$ where $r_0, r_1$ are as in the statement of the theorem. Since $f^{-1}$ is uniformly continuous there exists $\sigma_n>0$ such that $\mathrm{diam}\left( f^{-1}\left( B\left(x, \sigma_n\right) \right) \right) < {1}/{n}$ for any ball $B\left(x, \sigma_n\right) \subseteq B'$ with radius $\sigma_n$. Since $\dim_H(B'(n) \cap f(X)) < \alpha t$ there exists a sequence of balls $\{F'_{n,i}\}$ centered at points of $B'(n)$ with diameters smaller than $\sigma_n$ such that $\{F'_{n,i}\}$ covers $B'(n)$ and $\sum_{i} \mathrm{diam}(F'_{n,i})^{\alpha t} < {\mathrm{diam}(B')^{\alpha t}}/{2^n}.$ It then follows from inequalities [\[um\]](#um){reference-type="eqref" reference="um"} and [\[linear\]](#linear){reference-type="eqref" reference="linear"} that for any $F_{n,i}'$, every circumscribing ball of $f^{-1}\left(F'_{n,i}\right)$ satisfies conditions [\[sum\]](#sum){reference-type="eqref" reference="sum"} and [\[slinear\]](#slinear){reference-type="eqref" reference="slinear"}.
Collecting all such sets $\{F'_{n,i}\}$ for each $n$, we obtain a cover $\mathcal{F}_{B'}$ of $B'$ such that for any $F' \in \mathcal{F}_{B'}$, every circumscribing ball of $f^{-1}(F')$ satisfies conditions [\[sum\]](#sum){reference-type="eqref" reference="sum"} and [\[slinear\]](#slinear){reference-type="eqref" reference="slinear"}, and $$\sum_{F' \in \mathcal{F}_{B'}} \mathrm{diam}(F')^{\alpha t} < \mathrm{diam}(B')^{\alpha t}.$$ Applying the procedure above to every $B' \in \mathcal{B}'$ for which either [\[sum\]](#sum){reference-type="eqref" reference="sum"} or [\[slinear\]](#slinear){reference-type="eqref" reference="slinear"} fails gives the desired cover $\mathcal{F}'$ of $f(X)$. Moreover, by Lemma [Lemma 6](#cl){reference-type="ref" reference="cl"}, without loss of generality we may assume that for any $F'_i, F'_j \in \mathcal{F}'$, we have $\frac{1}{5}F'_i \cap \frac{1}{5}F'_j = \emptyset.$ We enumerate $\mathcal{F}' = \{F'_i\}$. Recall that $F_i$ is a circumscribing ball of $f^{-1}(F_i')$ when $F'_i \in \mathcal{F}'$. We also enumerate $\mathcal{F} = \{F_i\}$. It follows from Proposition [Proposition 9](#diam){reference-type="ref" reference="diam"} that there exists $H = H(X, f) > 1$ such that for any $F_i \in \mathcal{F}$, $$\frac{1}{H}F_i \subseteq f^{-1}\left( \frac{1}{5}F'_i \right) \subseteq f^{-1}\left(F'_i \right) \subseteq F_i.$$ Therefore for all $F_i, F_j \in \mathcal{F}$ with $i \neq j$, we have $$\begin{aligned} \label{disjoint} \frac{1}{H}F_i \cap \frac{1}{H}F_j = \emptyset. \end{aligned}$$ Next we construct an admissible function $\rho_j$ for $\mathbf{E}_j$ and show that $\int_X \rho_j^\alpha d \mu \lesssim \epsilon$. Without loss of generality, we may assume that $\mathrm{diam}(F'_i) < \delta < \frac{1}{j}$ for any $F'_i \in \mathcal{F}'$.
Define $$\rho_j(x) = \sum_i \frac{\mathrm{diam}(F'_i)^t}{\mathrm{diam}(F_i)^s}\frac{\chi_{C_2F_i}(x)}{C_1 C_2^s}.$$ Then for any $E \in \mathcal{E}_j$, $$\begin{aligned} \int_E \rho_j d \lambda_{E} & = \int_E \sum_i \frac{\mathrm{diam}(F'_i)^t}{\mathrm{diam}(F_i)^s}\frac{\chi_{C_2F_i}(x)}{C_1 C_2^s} d \lambda_{E} \geq \int_E \sum_{F_i \cap E \neq \emptyset} \frac{\mathrm{diam}(F'_i)^t}{\mathrm{diam}(F_i)^s}\frac{\chi_{C_2F_i}(x)}{C_1 C_2^s} d \lambda_{E} \\ & = \frac{1}{C_1 C_2^s} \sum_{F_i \cap E \neq \emptyset} \frac{\mathrm{diam}(F'_i)^t}{\mathrm{diam}(F_i)^s} \int_{E \cap C_2 F_i}d \lambda_{E} = \frac{1}{C_1 C_2^s} \sum_{F_i \cap E \neq \emptyset} \frac{\mathrm{diam}(F'_i)^t}{\mathrm{diam}(F_i)^s} \lambda_{E}\left( E \cap C_2 F_i \right) \\ & \geq \sum_{F'_i \cap f(E) \neq \emptyset} \mathrm{diam}(F'_i)^t \geq 1. \end{aligned}$$ The last inequality comes from the fact that $\mathcal{H}^{t}_{1/j}(f(E)) \geq 1$ and the sets $F'_i$ with $F'_i \cap f(E) \neq \emptyset$ form a cover of $f(E)$ by sets of diameter less than $1/j$. Finally we will show that $\int_X \rho_j^\alpha d \mu \lesssim \epsilon$. Recall that $s > 1$ and $q > \alpha s$. $$\begin{aligned} \int_X \rho_j^\alpha d \mu & = \int_X \left( \sum_i \frac{\mathrm{diam}(F'_i)^t}{\mathrm{diam}(F_i)^s}\frac{\chi_{C_2F_i}(x)}{C_1 C_2^s} \right)^\alpha d \mu \\ & \leq A(H, \alpha, \mu, C_1, C_2) \int_X \left( \sum_i \frac{\mathrm{diam}(F'_i)^t}{\mathrm{diam}(F_i)^s}\chi_{\frac{1}{H}F_i}(x) \right)^\alpha d \mu \tag{Theorem \ref{Lpinequality}} \\ & = A(H, \alpha, \mu, C_1, C_2) \sum_i \left( \frac{\mathrm{diam}(F'_i)^t}{\mathrm{diam}(F_i)^s} \right)^\alpha \mu\left( \frac{1}{H}F_i \right) \tag{by (\ref{disjoint})} \\ & \leq A_1(H, \alpha, \mu, C, C_1, C_2) \sum_i \mathrm{diam}(F'_i)^{\alpha t} \mathrm{diam}(F_i)^{q-\alpha s} \tag{by (\ref{um})} \\ & \leq A_1(H, \alpha, \mu, C, C_1, C_2) \sum_i \mathrm{diam}(F'_i)^{\alpha t} \\ & \leq A_1(H, \alpha, \mu, C, C_1, C_2) \epsilon. \end{aligned}$$ Since $\epsilon > 0$ was arbitrary, this implies that $\mathrm{Mod}_\alpha \left( \mathbf{E}_j \right) = 0$ for every $j$ and finishes the proof. ◻ *Remark 12*.
Using the techniques from [@BHW16] the assumptions that $X$ is doubling and bounded in Theorem [Theorem 11](#fmodest){reference-type="ref" reference="fmodest"} could be dropped, and one could instead require only that $X$ is a separable metric space. However, in our applications the extra assumptions are satisfied and we do not need this more general version of the theorem. ## Conformal dimension of hierarchical spaces {#section:cantor} We say that a metric space $E$ has a *hierarchical structure* if it can be written as $E = \bigcap_{i=1}^\infty \bigcup_{j=1}^{2^i} E_{i,j}$ where each $E_{i,j}$ is a locally compact Hausdorff metric space and $E_{i,j} \cap E_{i,k} = \emptyset$ for any $j \neq k$. We denote by $$\mathcal{E}_i = \left\{E_{i,j} \right\}_{j=1}^{2^i}$$ and will refer to it as the collection of the *$i^{\mathrm{th}}$-generation* elements. Moreover, we denote by $\mathcal{E} = \bigcup_{i=1}^\infty \mathcal{E}_i$. For example, the classical middle-thirds Cantor set clearly has a hierarchical structure. We will use the following terminology to simplify our notation. Each $E_{i,j}$ has: - two children spaces: $E_{i+1, 2j-1}, E_{i+1, 2j} \in \mathcal{E}_{i+1}$; - one parent space, containing $E_{i,j}$, denoted by $\tilde{E}_{i,j} \in \mathcal{E}_{i-1}$; - one sibling space which has the same parent as $E_{i,j}$, denoted by $E'_{i,j} \in \mathcal{E}_i$. For any $x \in E$, we denote by $E_i(x)$ the unique $i^{\mathrm{th}}$-generation element that contains $x$. We say that a metric space $E$ with hierarchical structure is *flat* if for any $x \in E$ and any $r > 0$, there exists $A(x) > 0$ such that $$\mathrm{Card}\left( \left\{ E_{i,j} : E_{i,j} \subseteq B(x,r) \ \mathrm{and} \ \tilde{E}_{i,j} \nsubseteq B(x,r) \right\} \right) \leq A(x).$$ The reason to focus on flatness is that we need some restriction on the local geometry of $E$. Intuitively, a flat space does not "oscillate" too much in any small neighborhood of any point.
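The parent/child/sibling bookkeeping above can be made concrete on the middle-thirds Cantor set, whose level-$i$ intervals realize the $i^{\mathrm{th}}$-generation elements $E_{i,j}$. The sketch below (a numerical illustration of the indexing conventions only; the function names are ad hoc) checks that the conventions are mutually consistent:

```python
from fractions import Fraction

def children(i, j):
    """Children of E_{i,j}: E_{i+1,2j-1} and E_{i+1,2j}."""
    return (i + 1, 2 * j - 1), (i + 1, 2 * j)

def parent(i, j):
    """Parent of E_{i,j}, an element of generation i-1."""
    return i - 1, (j + 1) // 2

def sibling(i, j):
    """The other child of parent(i, j)."""
    return (i, j + 1) if j % 2 == 1 else (i, j - 1)

def interval(i, j):
    """E_{i,j} realized as the j-th level-i interval of the middle-thirds Cantor set."""
    left, length = Fraction(0), Fraction(1)
    bits = format(j - 1, "0{}b".format(i)) if i > 0 else ""
    for bit in bits:  # binary digits of j-1, most significant first
        length /= 3
        if bit == "1":
            left += 2 * length
    return left, left + length

for i in range(1, 6):
    for j in range(1, 2 ** i + 1):
        for c in children(i, j):
            assert parent(*c) == (i, j)                # a child's parent is E_{i,j}
        assert parent(*sibling(i, j)) == parent(i, j)  # siblings share a parent
        a, b = interval(i, j)
        pa, pb = interval(*parent(i, j))
        assert pa <= a and b <= pb                     # E_{i,j} lies inside its parent
        if j < 2 ** i:
            assert b < interval(i, j + 1)[0]           # same-generation pieces are disjoint
print("hierarchical indexing consistent through generation 5")
```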
In the following we slightly abuse notation and use $E_{i,j}$ to denote $E_{i,j} \cap E$; the meaning will be clear from context. **Theorem 13**. *Let $E = \bigcap_{i=1}^\infty \bigcup_{j=1}^{2^i} E_{i,j}$ be a flat metric space with hierarchical structure. Suppose that for any $x \in E$, we have $$\label{rdist} \lim_{i \to \infty}\Delta(E_{i}(x), E'_{i}(x)) = 0$$ and there exists $L = L(x)$ such that $$\label{cdiam} \frac{1}{L} \leq \frac{\mathrm{diam}(E_{i}(x))}{\mathrm{diam}(E'_{i}(x))} \leq L.$$ Then $\dim_C(E) \geq 1$.* Theorem [Theorem 13](#cd1){reference-type="ref" reference="cd1"} is a generalization of Theorem $3.2$ in [@Hak09] and the proof follows the same reasoning. First, we need the following lemma. **Lemma 14**. *Let $E$ be a metric space with hierarchical structure satisfying condition [\[rdist\]](#rdist){reference-type="eqref" reference="rdist"}. Then for any $\eta$-quasisymmetry $f: E \to I$ and any $0 < \alpha < 1$, there exists a measure $\mu$ on $I$ such that for any $x \in E$, $$\label{msubset} \mu(f(E_i(x))) \leq C \mathrm{diam}(f(E_i(x)))^{\alpha}$$ for some constant $0 < C < \infty$ depending on $x,\eta$ and $\alpha$.* *Proof.* Since $f$ is a homeomorphism and $E$ has a hierarchical structure, $f(E)$ has a hierarchical structure. The notation system used for $E$ will also be used for $f(E)$. In particular, for any $I_{i,j} \subseteq I$ of the form $I_{i,j} = f(E_{i,j})$, we denote by $\tilde{I}_{i,j} = f(\tilde{E}_{i,j})$ and $I'_{i,j} = f(E'_{i,j})$ the parent and the sibling of $I_{i,j}$, respectively. The image of any $n^{\mathrm{th}}$-generation element of $E$ under $f$ is an $n^{\mathrm{th}}$-generation element of $f(E)$. Moreover, we denote by $I_i(x) = f(E_i(x))$. Without loss of generality, we let $\mathrm{diam}(I) = 1$. We define $\mu$ as follows. Let $\mu(I)= 1$.
Then for any $I_{i,j} = f(E_{i,j})$, we define $$\label{measure} \mu(I_{i,j}) = \frac{\mathrm{diam}(I_{i,j})^{\alpha}}{\mathrm{diam}(I_{i,j})^{\alpha} + \mathrm{diam}(I'_{i,j})^{\alpha}} \mu(\tilde{I}_{i,j}).$$ Since each $E_{i,j}$ is locally compact and Hausdorff, $\mu$ is a Borel measure. For any $n^{\mathrm{th}}$-generation element $I_n \subseteq I$, there exists a unique sequence of nested subsets $$I_n \subseteq I_{n-1} \subseteq I_{n-2} \subseteq \ldots \subseteq I_1 \subseteq I_0 = I$$ containing it so that $I_i = \tilde{I}_{i+1}$. Therefore, we have $$\begin{aligned} \frac{\mu(I_n)}{\mathrm{diam}(I_n)^{\alpha}} & = \frac{1}{\mathrm{diam}(I_n)^{\alpha} +\mathrm{diam}(I'_n)^{\alpha}} \cdots \frac{\mathrm{diam}(I_{1})^{\alpha}}{\mathrm{diam}(I_{1})^{\alpha} +\mathrm{diam}(I'_{1})^{\alpha}}\mu(I) \\ & = \frac{\mathrm{diam}(I_{n-1})^{\alpha}}{\mathrm{diam}(I_n)^{\alpha} +\mathrm{diam}(I'_n)^{\alpha}} \cdots \frac{\mu(I)}{\mathrm{diam}(I_{1})^{\alpha} +\mathrm{diam}(I'_{1})^{\alpha}} \\ & = \prod_{i=1}^{n} \frac{\mathrm{diam}(\tilde{I}_i)^{\alpha}}{\mathrm{diam}(I_i)^{\alpha} +\mathrm{diam}(I'_i)^{\alpha}}. \end{aligned}$$ Therefore, to prove inequality [\[msubset\]](#msubset){reference-type="eqref" reference="msubset"}, it is sufficient to show that for a fixed $f(x) \in \bigcap_{i=1}^\infty I_i$, $$\prod_{i=1}^{\infty} \frac{\mathrm{diam}(\tilde{I}_i)^{\alpha}}{\mathrm{diam}(I_i)^{\alpha} +\mathrm{diam}(I'_i)^{\alpha}} < \infty.$$ We claim that for $i$ sufficiently large, $$\label{gdc} \frac{\mathrm{diam}(\tilde{I}_i)^{\alpha}}{\mathrm{diam}(I_i)^{\alpha} +\mathrm{diam}(I'_i)^{\alpha}} \leq 1.$$ Let $I_i = f(E_i)$, $I'_i = f(E'_i)$ and $\tilde{I}_i = f(\tilde{E}_i)$. Since $\Delta(E_i, E'_i) \to 0$ as $i \to \infty$, we have $\Delta(I_i, I'_i) \to 0$ by Proposition [Proposition 9](#diam){reference-type="ref" reference="diam"} and the definition of relative distance. Without loss of generality, we assume that $\mathrm{diam}(I_i) \geq \mathrm{diam}(I'_i)$.
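As an aside, the telescoping computation above is easy to sanity-check numerically: building $\mu$ from rule [\[measure\]](#measure){reference-type="eqref" reference="measure"} along a randomly generated nested chain, the ratio $\mu(I_n)/\mathrm{diam}(I_n)^{\alpha}$ agrees with the displayed product (an illustration only, not part of the proof):

```python
import random

random.seed(1)
alpha, n = 0.7, 12

diam_I = [1.0]     # diam_I[i] = diam(I_i); I_0 = I has diameter 1
diam_sib = [None]  # diam_sib[i] = diam(I'_i), the sibling of I_i
mu = [1.0]         # mu[i] = mu(I_i); mu(I) = 1
for i in range(1, n + 1):
    d = diam_I[i - 1]  # diameter of the parent, since the parent of I_i is I_{i-1}
    diam_I.append(random.uniform(0.2, 0.8) * d)
    diam_sib.append(random.uniform(0.2, 0.8) * d)
    # rule (measure): the parent's mass is split in diam^alpha proportions
    mu.append(mu[i - 1] * diam_I[i] ** alpha
              / (diam_I[i] ** alpha + diam_sib[i] ** alpha))

ratio = mu[n] / diam_I[n] ** alpha
product = 1.0
for i in range(1, n + 1):
    product *= diam_I[i - 1] ** alpha / (diam_I[i] ** alpha + diam_sib[i] ** alpha)

assert abs(ratio - product) <= 1e-9 * max(ratio, product)
print("telescoping identity verified")
```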
Since $\Delta(I_i, I'_i) \to 0$, we may assume that $\mathrm{diam}(I_i) \geq \frac{1}{3}\mathrm{diam}(\tilde{I}_i)$ for sufficiently large $i$, and $$\mathrm{diam}(I_i) + \mathrm{diam}(I'_i)/(1-\epsilon) > \mathrm{diam}(\tilde{I}_i)$$ for some $\epsilon> 0$. Let $p = \frac{\mathrm{diam}(I_i)}{\mathrm{diam}(\tilde{I}_i)}$; then $\frac{\mathrm{diam}(I'_i)}{\mathrm{diam}(\tilde{I}_i)} > (1-p)(1-\epsilon)$. It is then sufficient to prove that $$\label{mde} p^{\alpha} + (1-p)^{\alpha}(1-\epsilon)^{\alpha} \geq 1$$ for $p \in \left(\frac{1}{3}, 1\right)$. The left hand side of inequality [\[mde\]](#mde){reference-type="eqref" reference="mde"} is concave in $p$, and hence achieves its minimum at either $p=\frac{1}{3}$ or $p = 1$. When $i$ is sufficiently large, we may let $\epsilon$ be small enough such that $$\left( \frac{1}{3} \right)^{\alpha} + \left( \frac{2}{3} \right)^{\alpha}(1-\epsilon)^{\alpha} \geq 1.$$ Then $p^{\alpha} + (1-p)^{\alpha}(1-\epsilon)^{\alpha} \geq 1$ for $p=\frac{1}{3}$ or $1$. This finishes the proof. ◻ We are now ready to prove Theorem [Theorem 13](#cd1){reference-type="ref" reference="cd1"}. *Proof of Theorem [Theorem 13](#cd1){reference-type="ref" reference="cd1"}.* Let $f: E \to I$ be an $\eta$-quasisymmetry and $0 < \alpha < 1$. Define $$E^*(N) = \{x \in E: C(x, \eta, \alpha) \leq N \} \cap \{x \in E: A(x) \leq N, \delta(x) \leq N, L(x) \leq N \}$$ where $A(x)$ is the flatness constant of $E$, $C(x,\eta, \alpha)$ is the constant defined in [\[msubset\]](#msubset){reference-type="eqref" reference="msubset"}, $L(x)$ is the constant defined in [\[cdiam\]](#cdiam){reference-type="eqref" reference="cdiam"} and $\delta(x) = \max_{i}\Delta(E_{i}(x), E'_{i}(x))$. Since $f$ is a quasisymmetry and $\Delta(E_i(x), E'_i(x)) \to 0$ for any $x \in E$, it follows from Proposition [Proposition 9](#diam){reference-type="ref" reference="diam"} that $\delta(x) < \infty$ for any $x \in E$.
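Returning briefly to inequality [\[mde\]](#mde){reference-type="eqref" reference="mde"}: the function $g(p) = p^{\alpha} + (1-p)^{\alpha}(1-\epsilon)^{\alpha}$ is concave in $p$, so its minimum over $\left[\frac{1}{3}, 1\right]$ is attained at an endpoint, and for small $\epsilon$ both endpoint values are at least $1$. A quick numerical check with sample parameters $\alpha = 0.9$ and $\epsilon = 0.01$ (an illustration only, not part of the proof):

```python
alpha, eps = 0.9, 0.01  # sample parameters: any 0 < alpha < 1 and small eps > 0

def g(p):
    return p ** alpha + (1 - p) ** alpha * (1 - eps) ** alpha

assert g(1 / 3) >= 1  # (1/3)^a + (2/3)^a (1 - eps)^a >= 1 for eps small enough
assert g(1.0) >= 1    # g(1) = 1 exactly

# g is concave on [1/3, 1], so its minimum there is attained at an endpoint;
# confirm the inequality (mde) on a fine grid of points p = k/30000
grid = [k / 30000 for k in range(10000, 30001)]
assert all(g(p) >= 1 - 1e-12 for p in grid)
print("inequality (mde) verified on [1/3, 1] for alpha = 0.9, eps = 0.01")
```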
Let $$E(N) = \bigcap_{n=1}^\infty \bigcup_{x \in E^*(N)} B\left(x,\frac{1}{n}\right).$$ Then $E(N)$ is a Borel measurable set containing $E^*(N)$. To simplify notation, we write $I^*(N) = f(E^*(N))$ and $I(N) = f(E(N))$. Notice that $I = \bigcup_{N=1}^\infty I(N)$ since $E = \bigcup_{N=1}^\infty E(N)$. We now prove that for any $x \in E(N)$, there exists $G = G(\eta, N)$ such that for any $r > 0$, $$\label{umb} \mu\left(B(f(x),r) \cap I(N)\right) \leq Gr^{\alpha}.$$ Fix $x \in E(N)$ and $r > 0$, and consider the ball $B(f(x), r)$. We denote by $\mathcal{I}(x,r,N)$ the collection of all the $i^{\mathrm{th}}$-generation elements $I_i$ of $I$ satisfying the following conditions: - $I_{i} \subseteq B(f(x),r)$, - $\tilde{I}_{i} \nsubseteq B(f(x),r)$, - $I_{i} \cap I^*(N) \neq \emptyset$. We observe that if $I_i, I_j \in \mathcal{I}(x,r,N)$, then either $I_i \cap I_{j} = \emptyset$ or $I_i = I_{j}$. We claim that $$B(f(x), r) \cap I(N) \subseteq \bigcup_{I_i \in \mathcal{I}(x,r,N)}\tilde{I}_i.$$ If not, there exists a point $p \in E$ such that $f(p) \in B(f(x), r) \cap I(N)$ but $f(p) \notin \bigcup_{I_i \in \mathcal{I}(x,r,N)}\tilde{I}_i$. Since $f(p) \in B(f(x),r)$, there exists $n_0 \in \mathbb{N}$ such that $I_n(p) \subseteq B(f(x),r)$ when $n \geq n_0$ and $I_n(p) \nsubseteq B(f(x),r)$ when $n < n_0$. Then $I_{n_0}(p) \in \mathcal{I}(x,r, N)$, which implies that $f(p) \in \bigcup_{I_i \in \mathcal{I}(x,r,N)}\tilde{I}_i$, a contradiction. Since $E$ is flat, $\mathrm{Card}\left( \mathcal{I}(x,r,N) \right) \leq N$ by the definition of $\mathcal{I}(x,r,N)$. Since $I_i \cap f(E^*(N)) \neq \emptyset$ for any $I_i \in \mathcal{I}(x,r,N)$, there exists $M = M(\eta, N)$ such that $$\frac{1}{M} \leq \frac{\mathrm{diam}(I_i)}{\mathrm{diam}(I'_i)} \leq M$$ and $$\Delta(I_i, I'_i) \leq M$$ by Proposition [Proposition 9](#diam){reference-type="ref" reference="diam"}.
Then we have $$\begin{aligned} \mathrm{diam}(\tilde{I}_i) & \leq \mathrm{diam}(I_i) + \mathrm{diam}(I'_i) + \mathrm{dist}(I_i, I'_i) \\ & \leq \mathrm{diam}(I_i) + \mathrm{diam}(I'_i) + M\mathrm{diam}(I_i) \\ & \leq (2M+1) \mathrm{diam}(I_i). \end{aligned}$$ Notice that $\bigcup_{I_i \in \mathcal{I}(x,r,N)}\tilde{I}_i$ covers $B(f(x),r) \cap I^*(N)$ and $\bigcup_{I_i \in \mathcal{I}(x,r,N)}\tilde{I}_i$ is closed; thus, by the definition of $E(N)$, we have $$B(f(x),r) \cap I(N) \subseteq \bigcup_{I_i \in \mathcal{I}(x,r,N)}\tilde{I}_i.$$ Then $$\begin{aligned} \mu(B(f(x),r) \cap I(N)) & \leq \sum_{I_i \in \mathcal{I}(x,r,N)}\mu(\tilde{I}_i) \\ & \leq \sum_{I_i \in \mathcal{I}(x,r,N)}N \mathrm{diam}(\tilde{I}_i)^\alpha \\ & \leq \sum_{I_i \in \mathcal{I}(x,r,N)}N \left( (2M+1) \mathrm{diam}(I_i)\right)^\alpha \\ & \leq 2^\alpha N^2 (2M+1)^\alpha r^\alpha. \end{aligned}$$ The second inequality comes from Lemma [Lemma 14](#mla){reference-type="ref" reference="mla"}. The third inequality comes from the fact that $\mathrm{diam}(\tilde{I}_i) \leq (2M+1) \mathrm{diam}(I_i)$. The last inequality comes from the fact that $\mathrm{Card}\left( \mathcal{I}(x,r,N) \right) \leq N$ and $I_{i} \subseteq B(f(x),r)$ for any $I_i \in \mathcal{I}(x,r,N)$. This finishes the proof of inequality [\[umb\]](#umb){reference-type="eqref" reference="umb"}. Since $I = \bigcup_{N=1}^\infty I(N)$ and $I(N-1) \subseteq I(N)$ for any $N$, it is clear that $\mu(I(N)) > 0$ when $N$ is sufficiently large. It then follows from the Mass Distribution Principle [Theorem 5](#mdp){reference-type="ref" reference="mdp"} that $\dim_H(I) \geq \dim_H(I(N)) \geq \alpha$ for sufficiently large $N$. Since $0 < \alpha < 1$ was arbitrary, we conclude that $\dim_H(I) \geq 1$. This finishes the proof. ◻ One may wonder if conditions [\[rdist\]](#rdist){reference-type="eqref" reference="rdist"} and [\[cdiam\]](#cdiam){reference-type="eqref" reference="cdiam"} of Theorem [Theorem 13](#cd1){reference-type="ref" reference="cd1"} can be weakened. The following theorem provides one answer to this question.
**Theorem 15**. *Let $\mathcal{E} = \{E^n\}$ be a collection of flat metric spaces with hierarchical structure, where $E^n = \bigcap_{i=1}^\infty \bigcup_{j=1}^{2^i} E^n_{i,j}$ and $E^n_{i,j} \subseteq E^{n+1}_{i,j}$ for any $i,j$. Assume that for any $x \in E^n$, the flatness constants $A^n(x)$ are uniformly bounded, i.e., $A^n(x) \leq A(x)$ for some $A(x) < \infty$.* *Let $\widetilde{E} = \bigcup_{n=1}^\infty E^n$. Suppose that for any $E^n \in \mathcal{E}$ and for any $x \in E^n$, we have $$\label{ssdist} \lim_{i \to \infty}\lim_{n \to\infty}\Delta\left(E^n_{i}(x), (E^n_{i})'(x)\right) = 0$$ and there exists $L = L(x) < \infty$ such that $$\label{ssudiam} \frac{1}{L} \leq \lim_{n \to \infty}\frac{\mathrm{diam}(E^n_{i}(x))}{\mathrm{diam}((E^n_{i})'(x))} \leq L.$$ Then $\dim_C\left(\widetilde{E}\right) \geq 1$.* *Proof.* The proof shares the same idea as the proofs of Lemma [Lemma 14](#mla){reference-type="ref" reference="mla"} and Theorem [Theorem 13](#cd1){reference-type="ref" reference="cd1"}. We first construct a sequence of measures $\mu_n$, analogous to Equation [\[measure\]](#measure){reference-type="eqref" reference="measure"}, on each $E^n$, and then show that $\mu_n$ subconverges weakly to a measure $\mu$ on $\widetilde{E}$. We finally finish the proof by applying the Mass Distribution Principle [Theorem 5](#mdp){reference-type="ref" reference="mdp"} to $\mu$. Let $f: \widetilde{E} \to \widetilde{I}$ be an $\eta$-quasisymmetry and $0 < \alpha < 1$. Fix $E \in \mathcal{E}$ and let $I = f(E)$. The notational conventions in this proof follow those in the proofs of Lemma [Lemma 14](#mla){reference-type="ref" reference="mla"} and Theorem [Theorem 13](#cd1){reference-type="ref" reference="cd1"}. Without loss of generality, we let $\mathrm{diam}(I) = \mathrm{diam}(f(E)) = 1$.
We construct a measure on $E$ in the same way as Equation [\[measure\]](#measure){reference-type="eqref" reference="measure"}, i.e., $$\mu(I_{i,j}) = \frac{\mathrm{diam}(I_{i,j})^{\alpha}}{\mathrm{diam}(I_{i,j})^{\alpha} + \mathrm{diam}(I'_{i,j})^{\alpha}} \mu(\tilde{I}_{i,j}).$$ Notice that $\mathrm{diam}(I_{i,j})$ could be $0$ here; however, the definition is still valid. In this way we obtain a sequence of measures $\mu_n$, where $\mu_n$ is supported on $f(E^n)$. Fix $x \in \widetilde{E}$; then $x \in E^n$ for sufficiently large $n$. We claim that there exists $N = N(x, i)$ such that when $n \geq N(x, i)$, $$\label{einterval} \mu_n(f(E^n_i(x))) \leq C \mathrm{diam}(f(E^n_i(x)))^{\alpha}$$ for some constant $C > 0$ which depends on $x,\eta$ and $\alpha$. We write $\sigma(x, i) = \lim_{n \to\infty}\Delta(E^n_{i}(x), (E^n_{i})'(x))$, which is clearly finite for $i$ sufficiently large. Since $E^n_i(x) \subseteq E^{n+1}_i(x)$, there exists $N(x,i)$ such that $\Delta(E^n_{i}(x), (E^n_{i})'(x)) < 2\sigma(x,i)$ when $n \geq N(x,i)$. Since $\sigma(x,i) \to 0$ when $i \to \infty$ by equation [\[ssdist\]](#ssdist){reference-type="eqref" reference="ssdist"}, there exists an $M= M(x)$ such that when $i \geq M(x)$ and $n \geq N(x,i)$, the following conditions are satisfied: for any $I^n_i = f(E^n_i)$ and $(I^n_i)' = f((E^n_i)')$ with $\mathrm{diam}(I^n_i) \geq \mathrm{diam}((I^n_i)')$ and $f(x) \in \tilde{I}^n_i$, 1. $\mathrm{diam}(I^n_i) \geq \frac{1}{3}\mathrm{diam}(\tilde{I}^n_i)$, 2. $\mathrm{diam}(I^n_i) + \mathrm{diam}((I^n_i)')/(1-\epsilon) > \mathrm{diam}(\tilde{I}^n_i)$ for some $\epsilon> 0$, 3. $\left( \frac{1}{3} \right)^{\alpha} + \left( \frac{2}{3} \right)^{\alpha}(1-\epsilon)^{\alpha} \geq 1$. Notice that these are all the conditions we need to prove inequality [\[gdc\]](#gdc){reference-type="eqref" reference="gdc"}.
It then follows from the proof of Lemma [Lemma 14](#mla){reference-type="ref" reference="mla"} that for any $x \in E$ and any $i \in \mathbb{N}$, there exists $N(x,i)$ such that when $n > N(x,i)$, $$\mu_n(f(E^n_i(x))) \leq C \mathrm{diam}(f(E^n_i(x)))^{\alpha}$$ for some constant $C > 0$ which depends on $x,\eta$ and $\alpha$. To simplify the notation, we define $\widetilde{E}_{i,j} = \bigcup_{n=1}^\infty E^n_{i,j}$. Then $\widetilde{E} = \bigcap_{i=1}^\infty \bigcup_{j=1}^{2^i} \widetilde{E}_{i,j}$ and we have $$\label{gcdist} \lim_{i \to \infty}\Delta\left(\widetilde{E}_{i}(x), \widetilde{E}'_{i}(x)\right) = 0$$ and there exists $L = L(x) < \infty$ such that $$\label{gcdiam} \frac{1}{L} \leq \frac{\mathrm{diam}(\widetilde{E}_{i}(x))}{\mathrm{diam}(\widetilde{E}'_{i}(x))} \leq L.$$ Since $\mu_n$ is a finite measure for any $n \in \mathbb{N}$, there exists a subsequence of $\mu_n$ that converges weakly to a measure $\mu$ on $\widetilde{E}$ by [@DS97 Lemmas $8.24$ and $8.26$]. Moreover, we have $$\label{gcmdp} \mu(f(\widetilde{E}_i(x))) \leq C \mathrm{diam}(f(\widetilde{E}_i(x)))^{\alpha}$$ for any $\widetilde{E}_i(x)$ by the proof of [@DS97 Lemma 8.28]. Recall that $$\label{gcfc} A(x) < \infty.$$ Given equation [\[gcdist\]](#gcdist){reference-type="eqref" reference="gcdist"} and inequalities [\[gcdiam\]](#gcdiam){reference-type="eqref" reference="gcdiam"}, [\[gcmdp\]](#gcmdp){reference-type="eqref" reference="gcmdp"}, [\[gcfc\]](#gcfc){reference-type="eqref" reference="gcfc"}, it then follows from the proof of Theorem [Theorem 13](#cd1){reference-type="ref" reference="cd1"} verbatim that $\dim_H(f(\widetilde{E})) \geq \alpha$; since $\alpha < 1$ was arbitrary, the theorem follows. ◻ **Theorem 16**. *Let $\mathcal{E} = \{E^n\}$ be a collection of flat metric spaces with hierarchical structure and $\widetilde{E} = \bigcup_{n=1}^\infty E^n$.
Suppose that for any $E^n \in \mathcal{E}$ and for any $x \in E^n$, we have $$\label{srdist} \lim_{n \to \infty}\lim_{i \to \infty}\Delta(E^n_{i}(x), (E^n_{i})'(x)) = 0$$ and there exists $L = L(x, n)$ such that $$\label{scdiam} \frac{1}{L} \leq \frac{\mathrm{diam}(E^n_{i}(x))}{\mathrm{diam}((E^n_{i})'(x))} \leq L.$$ Then $\dim_C\left(\widetilde{E}\right) \geq 1$.* *Proof.* The proof shares the same idea as the proofs of Lemma [Lemma 14](#mla){reference-type="ref" reference="mla"} and Theorem [Theorem 13](#cd1){reference-type="ref" reference="cd1"}. We construct a measure $\mu$, analogous to Equation [\[measure\]](#measure){reference-type="eqref" reference="measure"}, on a specific $E \in \mathcal{E}$ and then apply the Mass Distribution Principle [Theorem 5](#mdp){reference-type="ref" reference="mdp"} to $\mu$ to finish the proof. Let $f: \widetilde{E} \to \widetilde{I}$ be an $\eta$-quasisymmetry and $0 < \alpha < 1$. Using [\[srdist\]](#srdist){reference-type="eqref" reference="srdist"}, we fix $E = E^n \in \mathcal{E}$ with $n$ large enough that $\lim_{i \to \infty}\Delta(E_{i}(x), E'_{i}(x))$ is sufficiently small, and let $I = f(E)$. The notational conventions in this proof follow those in the proofs of Lemma [Lemma 14](#mla){reference-type="ref" reference="mla"} and Theorem [Theorem 13](#cd1){reference-type="ref" reference="cd1"}. Without loss of generality, we let $\mathrm{diam}(I) = \mathrm{diam}(f(E)) = 1$. Then for any $I_i, I'_i \subseteq I$ with $\mathrm{diam}(I_i) \geq \mathrm{diam}(I'_i)$, we have for sufficiently large $i$, 1. $\mathrm{diam}(I_i) \geq \frac{1}{3}\mathrm{diam}(\tilde{I}_i)$, 2. $\mathrm{diam}(I_i) + \mathrm{diam}(I'_i)/(1-\epsilon) > \mathrm{diam}(\tilde{I}_i)$ for some $\epsilon> 0$, 3. $\left( \frac{1}{3} \right)^{\alpha} + \left( \frac{2}{3} \right)^{\alpha}(1-\epsilon)^{\alpha} \geq 1$. Notice that these are all the conditions we need to prove inequality [\[gdc\]](#gdc){reference-type="eqref" reference="gdc"}.
It then follows from the proof of Lemma [Lemma 14](#mla){reference-type="ref" reference="mla"} that there exists a measure $\mu$ on $I$ such that for any $x \in E$, $$\label{smsubset} \mu(f(E_i(x))) \leq C \mathrm{diam}(f(E_i(x)))^{\alpha}$$ for some constant $C > 0$, which depends on $x,\eta$ and $\alpha$. Given equation [\[srdist\]](#srdist){reference-type="eqref" reference="srdist"} and inequalities [\[scdiam\]](#scdiam){reference-type="eqref" reference="scdiam"}, [\[smsubset\]](#smsubset){reference-type="eqref" reference="smsubset"}, it then follows from the proof of Theorem [Theorem 13](#cd1){reference-type="ref" reference="cd1"} verbatim that $\dim_H(I) \geq \alpha$; since $\alpha < 1$ was arbitrary, the theorem follows. ◻ # Bedford-McMullen type sets with uniform fibers are minimal {#MSS} In this section we construct subsets of the plane which are minimal for conformal dimension and prove Theorem [Theorem 18](#thm:uniformfibers){reference-type="ref" reference="thm:uniformfibers"}, which implies Corollary [Corollary 4](#thm:mg){reference-type="ref" reference="thm:mg"}. The minimal subsets constructed below include many self-affine sets and graphs of functions (continuous or not), and can attain any Hausdorff dimension $d$ in $[1,2]$. ## Bedford-McMullen type sets with uniform fibers We recall the construction of the sets $K$ as described in the introduction, while generalizing it and introducing some notation needed for the proofs below. Let $\ell, m\in\mathbb{N}$ be positive integers so that $\ell<m$. Suppose we are given a sequence $\{D_i\}_{i=1}^{\infty}$ of subsets (which we also call *patterns*) of $\{1,\ldots,m\}\times \{1,\ldots,\ell\}$. For every $n\geq 1$ we let $N_n=\mathrm{Card}(D_1)\cdot\ldots\cdot\mathrm{Card}(D_n)$. In particular, $N_1=\mathrm{Card}(D_1)$. Define a compact set $K=K(\{D_i\})\subset[0,1]^2$ as follows. Divide the unit square into $m \ell$ closed and congruent $m^{-1}\times \ell^{-1}$ rectangles with pairwise disjoint interiors.
Keep the rectangles corresponding to the pattern $D_1$, and delete the rest. Let $\mathcal{K}_1=\{K_{1,1},\ldots,K_{1,N_1}\}$ be the collection of retained rectangles of *generation $1$*, and let $K_1=\cup_j K_{1,j}$. Continuing by induction, suppose that a collection $\mathcal{K}_n=\{K_{n,1},\ldots,K_{n,N_n}\}$ of *generation $n$* rectangles has been defined, and its elements are closed and congruent $m^{-n}\times \ell^{-n}$ rectangles with disjoint interiors. Let $K_n = \cup_{j} K_{n,j}$. Divide each $K_{n,j}\in\mathcal{K}_n$ into $m \ell$ congruent, closed $m^{-(n+1)}\times \ell^{-(n+1)}$ rectangles with disjoint interiors. In each $K_{n,j}\in\mathcal{K}_n$ keep the rectangles corresponding to the pattern $D_{n+1}$, and delete the rest. The resulting collection $\mathcal{K}_{n+1}=\{K_{n+1,1},\ldots,K_{n+1,N_{n+1}}\}$ is the family of generation $(n+1)$ rectangles, and their union will be denoted by $K_{n+1}$. We define the set $K$ as follows: $$\begin{aligned} \label{def:det-set} K=K(\{D_n\}_{n=1}^{\infty})=\bigcap_{n=1}^{\infty} K_n = \bigcap_{n=1}^{\infty} \bigcup_{j=1}^{N_n}K_{n,j}=\bigcap_{n=1}^{\infty} \bigcup_{K_{n,j}\in\mathcal{K}_n}K_{n,j}.\end{aligned}$$ Note that if all the patterns in the construction above are the same, i.e., there is a $D\subset\{1,\ldots,m\}\times \{1,\ldots,\ell\}$ such that $D_i=D$ for all $i\geq 1$, then the set ([\[def:det-set\]](#def:det-set){reference-type="ref" reference="def:det-set"}) is an example of a Bedford-McMullen self-affine set, denoted by $K(D)$ in [@BP17]. The construction above can be generalized by dividing each rectangle of generation $i$ into $m_i \ell_i$ congruent rectangles, where $\{\ell_i\}$ and $\{m_i\}$ are arbitrary sequences of integers such that $1<\ell_i<m_i$. We do not consider this more general situation, since our main concern is the analogy with the Brownian graph, for which the motivating example is the case $\ell_i=2$ and $m_i=4$ for $i\geq 1$.
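The construction is straightforward to implement. The sketch below (an illustration with an ad hoc sample pattern, not part of the paper) builds the generation-$n$ rectangles for a single repeated pattern $D \subset \{1,\ldots,m\}\times\{1,\ldots,\ell\}$ with $m = 4$ and $\ell = 2$, and confirms the counts $N_n = \mathrm{Card}(D)^n$, the $m^{-n}\times\ell^{-n}$ side lengths, and that every $\ell$-adic row of generation $n$ carries the same number $k^n$ of rectangles (here $k = 2$):

```python
from collections import Counter
from fractions import Fraction

m, l = 4, 2
# sample pattern (an ad hoc choice): 2 entries in every row, so Card(D) = 4
D = {(1, 1), (3, 1), (2, 2), (4, 2)}

def refine(rects):
    """One construction step: subdivide each rectangle m x l ways, keep pattern D."""
    out = []
    for (x, y, w, h) in rects:  # rect = lower-left corner + side lengths
        for (a, b) in D:
            out.append((x + (a - 1) * w / m, y + (b - 1) * h / l, w / m, h / l))
    return out

rects = [(Fraction(0), Fraction(0), Fraction(1), Fraction(1))]  # the unit square
for n in range(1, 4):
    rects = refine(rects)
    assert len(rects) == len(D) ** n                     # N_n = Card(D)^n
    assert all(w == Fraction(1, m ** n) and h == Fraction(1, l ** n)
               for (_, _, w, h) in rects)                # m^{-n} x l^{-n} sides
    rows = Counter(y for (_, y, _, _) in rects)          # rectangles per l-adic row
    assert len(rows) == l ** n and set(rows.values()) == {2 ** n}
print("generation counts and row counts verified through n = 3")
```

Exact rational arithmetic (`Fraction`) is used so that the side-length and row checks hold without floating-point slack.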
Computing conformal dimension of sets defined by ([\[def:det-set\]](#def:det-set){reference-type="ref" reference="def:det-set"}) can be challenging for general sequences of patterns $\{D_i\}$. For instance, even if $K=K(D)$ is a self-affine set, the conformal dimension of $K$ is unknown unless we make the extra assumptions below. Inspired by the terminology in [@BP17] we will say that a set $K$ constructed as above has *uniform fibers* if there exists $1\leq k < m$ such that for every $i\geq 1$ the pattern $D_i$ has $k$ elements in *every* row. Therefore, for every $n\geq 1$ we have $\mathrm{Card}(D_n)=k\ell$ and $N_n=(k\ell)^n$. Note that for every $n\geq1$ and every $\ell$-adic interval $J$ of generation $n$ on the $y$-axis there are exactly $k^n$ rectangles $K_{n,j}$ such that $\pi_y(K_{n,j})=J$, where $\pi_y$ is the orthogonal projection onto the second coordinate. We will say that two rectangles $K_{n,j_1}$ and $K_{n,j_2}$ are *at the same height* if they have the same projections: $\pi_y(K_{n,j_1})=\pi_y(K_{n,j_2})$. The following proposition can be proved like the analogous property for self-affine sets in [@BP17]. We provide a slightly different proof for the convenience of the reader. **Proposition 17**. *Let $1\leq k<m$ and $d=\log_mk$. If $K=K(\{D_i\})$ has uniform fibers then* 1. *$\dim_H K = 1+d$, [\[spacedim\]]{#spacedim label="spacedim"}* 2. *$\dim_H (K\cap Z_a) = d$, for all $a\in[0,1]$,[\[fiberdim\]]{#fiberdim label="fiberdim"}* *where $Z_a$ denotes the horizontal line passing through $(0,a)$.* *Proof.* We first prove [\[fiberdim\]](#fiberdim){reference-type="eqref" reference="fiberdim"}. For this, observe that because of the uniform fiber condition, for every $a\in[0,1]$, the set $K_a:=K\cap Z_a$ is a Cantor set obtained by dividing $[0,1]$ into $m$ equal intervals, selecting $k$ of them, and then repeating this procedure ad infinitum.
Thus, for every $a$ there is a set of indices $J_n(a)\subset \{1,\ldots,N_n\}$ such that $\mathrm{Card}(J_n(a))=k^n$, $\mathrm{diam}(K_{n,j}\cap Z_a)=m^{-n}$ for $j\in J_n(a)$, and $$K_a=\bigcap_{n=1}^{\infty} \bigcup_{j\in J_n(a)} K_{n,j}\cap Z_a.$$ Therefore, $\dim_H K_a \leq \overline{\dim}_M K_a \leq \lim_{n\to\infty} {\log k^n }/({\log m^n})=\log_m k.$ To show the opposite inequality we define a probability measure $\mu_a$ on $K_a$, by setting $$\begin{aligned} \label{def:conditional-measures} \mu_a(K_{n,j}\cap Z_a) = k^{-n}=(m^{-n})^d=(\mathrm{diam}(K_{n,j}\cap Z_a))^d, \end{aligned}$$ for every $j\in J_n(a)$. Therefore, if $I\subset K\cap Z_a$ is such that $I=K_{n,j}\cap Z_a$ for some $j\in J_n(a)$ then $\mu_a(I) =k^{-n}=(m^{-n})^d=(\mathrm{diam}I)^d$. More generally, for every $I\subset K\cap Z_a$ such that $m^{-(n+1)}<\mathrm{diam}I\leq m^{-n}$ there are at most two "adjacent" $m$-adic intervals $I'$ and $I''$ in $Z_a$ such that $I\subset I'\cup I''$ and $\mathrm{diam}I'=\mathrm{diam}I''=m^{-n}$. Therefore, we have $$\begin{aligned} \label{ineq:growth1} \mu_a (I) \leq \mu_a (I')+\mu_a (I'') \leq 2 \mathrm{diam}(I')^d < 2 m^d (\mathrm{diam}I)^d, \end{aligned}$$ where the last inequality holds since $\mathrm{diam}I > 1/m^{n+1} = (\mathrm{diam}I') /m$. By the Mass Distribution Principle we have $\dim_H K_a\geq d$, which finishes the proof of [\[fiberdim\]](#fiberdim){reference-type="eqref" reference="fiberdim"}. To prove [\[spacedim\]](#spacedim){reference-type="eqref" reference="spacedim"} first note that from [\[fiberdim\]](#fiberdim){reference-type="eqref" reference="fiberdim"} and the Marstrand slicing theorem [@BP17] it follows that $\dim_H K\geq 1+\log_mk$. To get the upper bound in [\[spacedim\]](#spacedim){reference-type="eqref" reference="spacedim"}, note that for every $n\geq 1$ the set $K$ can be covered by $N_n=(k\ell)^n$ generation $n$ rectangles of width $1/m^n$ and height $1/\ell^n$.
Dividing each such rectangle into $\lesssim (m/\ell)^n$ squares of side-length $1/m^n$, we obtain that $$\begin{aligned} \dim_H K\leq\overline{\dim}_M K \leq \lim_{n\to\infty} \frac{\log((k\ell)^n \cdot (m/\ell)^n)}{\log m^n} = 1+\log_m k, \end{aligned}$$ which completes the proof. ◻ The following is the main result of this section. **Theorem 18**. *Every set $K = K(\{D_i\})$ that has uniform fibers is minimal for conformal dimension.* Observe that Theorem [Theorem 2](#thm:minimalBM){reference-type="ref" reference="thm:minimalBM"} follows from Theorem [Theorem 18](#thm:uniformfibers){reference-type="ref" reference="thm:uniformfibers"} and Proposition [Proposition 17](#prop:dim){reference-type="ref" reference="prop:dim"}. To prove Theorem [Theorem 18](#thm:uniformfibers){reference-type="ref" reference="thm:uniformfibers"} we would like to use Theorem [Theorem 11](#fmodest){reference-type="ref" reference="fmodest"}. For this we will construct the measure $\mu$, the family $\mathcal{E}$ and the measures $\{\lambda_E\}_{E\in\mathcal{E}}$ so that the conditions of that theorem are satisfied. ## Constructing the measure $\mu$ {#section:mu} The measure $\mu$ is defined by equidistributing the mass among all the rectangles of the same generation. More specifically, let $\mu(K)=1$. For every rectangle $K_{n,j}\in\mathcal{K}_n$ of generation $n$ let $$\begin{aligned} \label{def:mu} \mu(K_{n,j}\cap K) = (k\ell)^{-n}.\end{aligned}$$ This defines an outer regular probability measure $\mu$ on $K$. Note that if $I$ is an $\ell$-adic interval in $\{0\}\times [0,1]$ then there are $k^n$ generation $n$ rectangles in $\mathcal{K}_n$ which project under $\pi_y$ to $I$.
Therefore we have $$\begin{aligned} \mu(\pi^{-1}_y(I)) = k^n (1/(k\ell)^n) = 1/\ell^n = \mathcal{L}^1(I).\end{aligned}$$ This implies that the pushforward of $\mu$ onto the $y$-axis is the restriction of the Lebesgue measure to $\{0\}\times[0,1]$, that is, $(\pi_y)_*(\mu)=\mathcal{L}^1.$ By the disintegration theorem [@AGS05 Theorem $5.3.1$] there is a family of probability measures $\{\nu_a\}_{a\in[0,1]}$, which is $\mathcal{L}^1$-a.e. uniquely determined and is supported on the sets $K_a$, such that $$\begin{aligned} \mu(A) = \int_0^1 \nu_a(A\cap \pi^{-1}_y(a)) \, da.\end{aligned}$$ It turns out that in this case the family $\{\nu_a\}$ a.e. coincides with the family $\{\mu_a\}$ defined above. **Lemma 19**. *For every Borel set $A\subset K$ we have $$\begin{aligned} \label{eq:disint} \mu(A) = \int_0^1 \mu_a(A\cap \pi^{-1}_y(a)) \, da, \end{aligned}$$ where $\mu_a$ satisfies ([\[def:conditional-measures\]](#def:conditional-measures){reference-type="ref" reference="def:conditional-measures"}).* *Proof.* If $A=K_{n,j}\in\mathcal{K}_n$ for some $n\geq 1$, then by ([\[def:mu\]](#def:mu){reference-type="ref" reference="def:mu"}) and ([\[def:conditional-measures\]](#def:conditional-measures){reference-type="ref" reference="def:conditional-measures"}) we have $$\begin{aligned} \int_{0}^1 \mu_a (A\cap\pi_y^{-1}(a)) da = \int_{\pi_{y}(A)}k^{-n} da = k^{-n} \mathcal{L}^1(\pi_{y}(A)) = k^{-n}\ell^{-n}=\mu(A). \end{aligned}$$ Therefore, ([\[eq:disint\]](#eq:disint){reference-type="ref" reference="eq:disint"}) holds if $A=\bigcup_{j\in J} K_{n,j}$, whenever $J$ is any subset of $\{1,\ldots,N_n\}$. By outer regularity of $\mu$, ([\[eq:disint\]](#eq:disint){reference-type="ref" reference="eq:disint"}) then holds for all Borel sets $A\subset K$ as well. 
◻ Equation ([\[eq:disint\]](#eq:disint){reference-type="ref" reference="eq:disint"}) implies that for every $x\in I$ and $r>0$ the measure $\mu$ satisfies $$\begin{aligned} \mu(B(x,r))\lesssim r^{1+d},\end{aligned}$$ where, as in Proposition [Proposition 17](#prop:dim){reference-type="ref" reference="prop:dim"}, $d=\log_m k$. Indeed, using ([\[eq:disint\]](#eq:disint){reference-type="ref" reference="eq:disint"}) and ([\[ineq:growth1\]](#ineq:growth1){reference-type="ref" reference="ineq:growth1"}), for any $B=B(x,r)\subset K$ we have $$\begin{aligned} \mu(B)=\int_{\pi_{y}(B)} \mu_a(B\cap Z_a) da \lesssim \mathcal{L}^1(\pi_y(B)) \mathrm{diam}(B\cap Z_a)^d \lesssim\mathrm{diam}(B)^{1+d}.\end{aligned}$$ ## Constructing the "vertical" Cantor sets and the associated measures {#section:vertical} We construct a family of Cantor sets $\mathcal{E}=\{E\}$ in $K$ as follows. From each of the $\ell$ rows choose *one* $m^{-1}\times \ell^{-1}$ rectangle of generation $1$ out of $k$ possible ones. Thus we choose $\ell$ rectangles $E_{1,1},\ldots,E_{1,\ell}$ from $\mathcal{K}_1$, so that their $\pi_y$ projections cover the entire vertical interval $\{0\}\times [0,1]$. Next, from every row of every rectangle $E_{1,j}$, choose *one* rectangle of generation $2$ from $\mathcal{K}_2$. The resulting rectangles will be denoted $E_{2,j}$ with $j=1,\ldots, \ell^2$. Continuing by induction, in the $n$-th step we obtain rectangles $E_{n,j}, j=1,\ldots,\ell^n$ which have pairwise disjoint interiors. Moreover, for every $E_{n,j}$ we have that the projection $\pi_y(E_{n,j})$ is an $\ell$-adic interval of length $\ell^{-n}$, and the union of these projections is $\{0\}\times [0,1]$. A "vertical" Cantor set $E\subset I$ can then be defined as usual by $E=\bigcap_{n=1}^{\infty} \bigcup_{j=1}^{\ell^n}E_{n,j}$. The family of all vertical Cantor sets in $I$ will be denoted by $\mathcal{E}$. 
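The inductive selection of one rectangle per row can be sketched numerically. The pattern parameters `m`, `ell`, `k`, the number of generations, and the random choices below are illustrative assumptions, not data from the paper; the check confirms that after $n$ steps there are $\ell^n$ selected rectangles whose $y$-projections tile $[0,1)$ by $\ell$-adic intervals.

```python
import random
from fractions import Fraction

# Illustrative parameters (assumptions): m-by-ell subdivision, k rectangles
# kept per row of the pattern.
random.seed(1)
m, ell, k = 8, 2, 3
generations = 5

# Record each selected rectangle by the left endpoint of its y-projection.
selected = [Fraction(0)]
for n in range(1, generations + 1):
    nxt = []
    for y in selected:
        for row in range(ell):        # one choice per row of the parent
            _ = random.randrange(k)   # which of the k rectangles (x-index);
                                      # irrelevant for the y-projection check
            nxt.append(y + Fraction(row, ell ** n))
    selected = nxt

# ell^n selected rectangles; their y-projections tile [0,1) by ell-adic
# intervals, exactly as claimed in the construction above.
assert len(selected) == ell ** generations
assert sorted(selected) == [Fraction(j, ell ** generations)
                            for j in range(ell ** generations)]
```

Exact rational arithmetic (`Fraction`) is used so the tiling check is not affected by floating-point rounding.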
It follows from the definitions that the restriction of $\pi_y$ onto every "vertical" Cantor set $E$ is injective except at the preimages of the $\ell$-adic points, at which it is two-to-one. Hence we can define a measure $\lambda_{E}$ by pushing forward the Lebesgue measure from $\{0\}\times [0,1]$ to $E$. That is, $\lambda_{E}=(\pi_y^{-1})_*(\mathcal{L}^1)$, or more concretely, for every $A\subset E$ we have $$\begin{aligned} \lambda_{E}(A)=\mathcal{L}^1(\pi_y(A)).\end{aligned}$$ We will denote $\mathbf{E}=\{\lambda_E\}_{E\in\mathcal{E}}$, i.e., the family of pullbacks of the Lebesgue measure under $\pi_y$ to all the "vertical" Cantor sets in $I$. Next, we observe that every $\lambda_{E}\in\mathbf{E}$ satisfies ([\[linear\]](#linear){reference-type="ref" reference="linear"}). Indeed, let $x \in E$ and $r > 0$. Then there exists $n \in \mathbb{N}$ such that $1/\ell^{n+1} < r \leq 1/\ell^{n}$. Choose $j$ so that $x\in K_{n+2,j} \in \mathcal{K}_{n+2}$. Since $\mathrm{diam}(K_{n+2,j}) < 1/\ell^{n+1}$, it follows that $K_{n+2,j} \subseteq B(x,r)$. Therefore, $$\lambda_E(B(x,r) \cap E) \geq \lambda_E\left( K_{n+2,j} \cap E\right) \geq r/\ell^2,$$ and ([\[linear\]](#linear){reference-type="ref" reference="linear"}) holds with $s=1$. Finally, we would like to apply Theorem [Theorem 13](#cd1){reference-type="ref" reference="cd1"} to show that every $E\in\mathcal{E}$ has conformal dimension $1$. From the definition of $E$ above it is clear that the conditions of the theorem are satisfied. Indeed, for every $E_{n,j}$ we have that the diameters of the siblings of $E_{n,j}$ are equal to $\mathrm{diam}E_{n,j}$. Moreover, the relative distance between "adjacent" generation $n$ rectangles tends to $0$, as it is comparable to $\ell^n/m^n$. ## Conclusion of the proof With $\mu$ and $\mathbf{E}$ constructed as above, we would like to show that $\mathrm{Mod}_1(\mathbf{E})>0$. 
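The elementary estimate $\lambda_E(B(x,r)\cap E)\geq r/\ell^2$ used above can be checked with exact rational arithmetic; the value of $\ell$ below is an illustrative assumption.

```python
from fractions import Fraction

# Exact-arithmetic check of the bound lambda_E(B(x,r) ∩ E) >= r / ell^2:
# for ell^-(n+1) < r <= ell^-n, a generation-(n+2) piece of E sitting inside
# B(x,r) carries lambda_E-mass ell^-(n+2).  ell is an illustrative assumption.
ell = 3
for n in range(1, 30):
    r = Fraction(1, ell ** n)           # worst case: r at the top of its range
    mass = Fraction(1, ell ** (n + 2))  # lambda_E-mass of the chosen piece
    assert mass >= r / ell ** 2         # equality exactly at r = ell^-n
print("bound verified for n = 1, ..., 29")
```

The bound is sharp: at $r=\ell^{-n}$ the two sides coincide, which is why the exponent $s=1$ in ([\[linear\]](#linear){reference-type="ref" reference="linear"}) cannot be improved by this argument.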
To that end, let $\rho:K\to[0,\infty)$ be admissible for $\mathbf{E}$, that is, $\int_E \rho \, d \lambda_E\geq 1$ for every $E\in\mathcal{E}$. By the Vitali-Carathéodory theorem [Theorem 8](#VCT){reference-type="ref" reference="VCT"} we can assume that $\rho$ is in fact lower semicontinuous. We will need the following result. **Lemma 20**. *For every lower semicontinuous function $\rho:K\to[0,\infty)$ there is a Borel function $\rho_\infty:K\to[0,\infty)$ such that* 1. *$\int \rho d\mu \geq \int \rho_\infty d \mu$,[\[smeasure\]]{#smeasure label="smeasure"}* 2. *$\rho_\infty(x_1,y_1)=\rho_\infty(x_2,y_2)$ if $y_1=y_2$,[\[emeasure\]]{#emeasure label="emeasure"}* 3. *$\exists F\in\mathcal{E}$ s.t. $\rho(x)\leq \rho_\infty(x)$ for $x\in F$.[\[admissiblemeasure\]]{#admissiblemeasure label="admissiblemeasure"}* Before proving Lemma [Lemma 20](#lemma:construction){reference-type="ref" reference="lemma:construction"} we observe that it implies the inequality $\mathrm{Mod}_{1}(\mathbf{E})>0$. Since the other conditions of Theorem [Theorem 11](#fmodest){reference-type="ref" reference="fmodest"} have been verified in Subsections [4.2](#section:mu){reference-type="ref" reference="section:mu"} and [4.3](#section:vertical){reference-type="ref" reference="section:vertical"} above, this would complete the proof of Theorem [Theorem 18](#thm:uniformfibers){reference-type="ref" reference="thm:uniformfibers"}. *Proof of Theorem [Theorem 18](#thm:uniformfibers){reference-type="ref" reference="thm:uniformfibers"}.* Note that by [\[emeasure\]](#emeasure){reference-type="eqref" reference="emeasure"} there exists a unique $\bar{\rho}: [0,1] \to \mathbb{R}$ such that for $\mathcal{L}^1$-a.e. $a \in [0,1]$ we have $\bar{\rho}(a) = \rho_\infty(z),$ whenever $\pi_y(z) = a$. 
Then, by [\[admissiblemeasure\]](#admissiblemeasure){reference-type="eqref" reference="admissiblemeasure"}, for any $E \in \mathcal{E}$ we have $$\int_E \rho_\infty(z) \ d \lambda_E = \int_0^1 \bar{\rho}(a) \ da = \int_F \rho_\infty(x) \ d \lambda_F \geq \int_F \rho(x) \ d \lambda_F \geq 1.$$ Thus $\rho_\infty$ is also admissible for $\mathbf{E}$. Finally, using [\[smeasure\]](#smeasure){reference-type="eqref" reference="smeasure"}, the Disintegration Theorem [@AGS05 Theorem $5.3.1$], and the inequalities above, we obtain $$\begin{aligned} \int_K \rho d\mu \geq \int_K \rho_\infty d\mu = \int_0^1 \left[ \int_{\pi_y^{-1}(a)}\rho_\infty(z) \ d \mu_a(z) \right] \ da &= \int_0^1 \bar{\rho}(a) \mu_a(K \cap Z_a)\ da \\ & =\int_0^1 \bar{\rho}(a) \ da \geq 1, \end{aligned}$$ where the last equality holds because $\mu_a\left(K \cap Z_a\right)=1$ for almost every $a \in [0,1]$. ◻ *Proof of Lemma [Lemma 20](#lemma:construction){reference-type="ref" reference="lemma:construction"}.* To construct the function $\rho_\infty$ we will first construct a sequence of piecewise constant functions $\rho_n$ from $\rho$ as follows. From each of the $\ell$ rows of $\mathcal{K}_1$ choose one rectangle with the smallest average of $\rho$. Specifically, choose rectangles $R_{1,1},\ldots,R_{1,\ell}\in\mathcal{K}_1$ so that if $R\in\mathcal{K}_1$ is such that $\pi_y(R)=\pi_y (R_{1,j})$ for some $j$, then $$\begin{aligned} \intbar_{R_{1,j}} \rho d\mu \leq \intbar_{R} \rho d\mu. \end{aligned}$$ Define $\rho_1:K\to[0,\infty]$ by $$\begin{aligned} \rho_1\big\rvert_{R} \equiv \intbar_{R_{1,j}}\rho d\mu, \mbox{ if } \pi_y(R)=\pi_y (R_{1,j}). \end{aligned}$$ That is, $\rho_1$ takes the same constant value on every first generation rectangle at the same height. Hence, $\rho_1$ is also constant on almost every fiber $\pi^{-1}_y(a)$, and $\int_K \rho d\mu \geq \int_K \rho_1 d\mu$. 
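The effect of one averaging step $\rho\mapsto\rho_1$ can be illustrated with a discrete toy model (an illustrative assumption, not the paper's construction verbatim): $\rho$ is constant on each of the $\ell k$ generation-$1$ rectangles, all of equal $\mu$-mass, and $\rho_1$ replaces every rectangle in a row by the smallest average in that row, so the integral against $\mu$ can only decrease.

```python
import random

# Toy model: vals[r][c] is the constant value of rho on the c-th rectangle
# of row r; all ell*k rectangles carry equal mu-mass 1/(ell*k).
random.seed(2)
ell, k = 4, 3
vals = [[random.uniform(0, 10) for _ in range(k)] for _ in range(ell)]

integral_rho = sum(sum(row) for row in vals) / (ell * k)
# rho_1 is row-constant, equal to the smallest value in each row.
integral_rho1 = sum(min(row) for row in vals) / ell

assert integral_rho1 <= integral_rho
```

The same comparison, applied within each previously chosen rectangle, is exactly what drives the monotone chain of integrals in the inductive step below.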
Continuing by induction, suppose that at step $n-1$ we have chosen rectangles $R_{n-1,1},\ldots,R_{n-1,\ell^{n-1}}$ and defined $\rho_{n-1}$ so that it is constant on all rectangles of generation $n-1$ in $\mathcal{K}_{n-1}$ and the constants are the same on two rectangles at the same height. ![The first three generations of $F$ on $I$.](Minelement){#ME height="1.2 in"} To construct $\rho_{n}$, from each of the $\ell$ rows of generation $n$ rectangles within $R_{n-1,j}$ for $j\in\{1,\ldots,\ell^{n-1}\}$ chosen above, we will select one with the smallest average of $\rho$. That is, the rectangles $R_{n,j'}$ with $j'\in\{1,\ldots,\ell^{n}\}$ are such that $$\begin{aligned} \intbar_{R_{n,j'}} \rho d\mu \leq \intbar_{R} \rho d\mu, \end{aligned}$$ whenever $R, R_{n,j'}\subset R_{n-1,j}$ for some $j\in\{1,\ldots,\ell^{n-1}\}$ and $\pi_y(R)=\pi_y (R_{n,j'})$. Define $\rho_n$ by setting $$\begin{aligned} \rho_n\big\rvert_{R} \equiv \intbar_{R_{n,j}}\rho d\mu, \mbox{ if } \pi_y(R)=\pi_y (R_{n,j}). \end{aligned}$$ Then we have a sequence of Borel measurable functions $\{\rho_n \}$. It follows from the construction above that we have $$\int_K \rho \ d\mu \geq \int_K \rho_1 \ d\mu \geq \ldots\geq \int_K \rho_n \ d\mu\geq \int_K \rho_{n+1} \ d\mu\geq \ldots.$$ For every $z=(x,y)\in K$ we define $$\begin{aligned} \label{def:trho} \rho_\infty(z) := \liminf_{n\to\infty} \rho_n(z). \end{aligned}$$ Condition [\[smeasure\]](#smeasure){reference-type="eqref" reference="smeasure"} then follows from the definition of $\rho_\infty$ and Fatou's Lemma. Equality [\[emeasure\]](#emeasure){reference-type="eqref" reference="emeasure"} follows from ([\[def:trho\]](#def:trho){reference-type="ref" reference="def:trho"}) and the fact that for every $n$ the function $\rho_n$ is constant on almost all (in fact all, except for finitely many) horizontal slices $K\cap Z_a$. 
To prove [\[admissiblemeasure\]](#admissiblemeasure){reference-type="eqref" reference="admissiblemeasure"} define $F=\bigcap_{n=1}^{\infty}\bigcup_{j=1}^{\ell^n} R_{n,j}.$ For every $z\in F$ and every $n\geq 1$ let $R_{n}(z)$ be the rectangle among $R_{n,1},\ldots,R_{n,\ell^n}$ that contains $z$. Pick $z_{n}'\in R_{n}(z)$ so that $$\begin{aligned} \label{trho-adm} \rho(z_{n}')\leq \intbar_{R_{n}(z)} \rho d\mu = \rho_n(z_{n}')=\rho_n(z). \end{aligned}$$ Note that $z_n'\to z$ as $n\to\infty$, since $\mathrm{diam}R_n(z) \asymp \ell^{-n}$. Recalling that $\rho$ is lower semicontinuous, we also have that $\rho(z) \leq \liminf _{z'\to z}\rho(z')$ for any $z \in K$. Therefore, from ([\[def:trho\]](#def:trho){reference-type="ref" reference="def:trho"}) and ([\[trho-adm\]](#trho-adm){reference-type="ref" reference="trho-adm"}) it follows that $$\begin{aligned} \rho(z) & \leq \liminf_{n\to\infty} \rho(z_n') \leq \liminf_{n\to\infty} \rho_n (z)=\rho_\infty(z), \end{aligned}$$ for every $z \in F$. This proves [\[admissiblemeasure\]](#admissiblemeasure){reference-type="eqref" reference="admissiblemeasure"} since $F$ is a vertical Cantor set in $K$ by definition. ◻ *Proof of Corollary [Corollary 4](#thm:mg){reference-type="ref" reference="thm:mg"}.* Note that by ([\[eqn:dimK\]](#eqn:dimK){reference-type="ref" reference="eqn:dimK"}) the values of $\dim_H K$ are dense in $[1,2]$. Therefore, if $d\in[1,2]$ then there is a sequence of minimal sets $K_i'$ constructed as above so that $\dim_H K_i'=d_i$, where $d_i\nearrow d$ as $i\to\infty$. Moreover, by choosing the patterns $\{D_{i,n}\}_n$ when defining $K_{i}'$ appropriately, each $K'_i$ can be made to be the graph of a *continuous* function $f_i$ such that $f_i(0)=f_i(1)=0$. Then, denoting $aK+b=\{az+b : z \in K\}\subset\mathbb{C}$, for $a,b\in\mathbb{R}$, the set $$\bigcup_{i=1}^{\infty}\left(\frac{1}{2^{i}}K_i'+1-\frac{2}{2^i}\right)$$ will be the graph of a continuous function, and the conformal dimension of that set will be $\sup_{i}\dim_C K_i'=\lim _i d_i=d$. 
◻ # Brownian motion and local time {#BM} In this section, we investigate linear Brownian motion and introduce properties that are needed for what follows. We usually denote the $1$-dimensional Brownian motion, or linear Brownian motion, by $W(t)$, and when it is necessary to emphasize the sample point, we write $W(t,\omega)$. Let $W(t)$ be a Brownian motion defined on a probability space $\left( \Omega, \mathcal{F}, \mathbb{P}\right)$ and $\{\mathcal{F}(t): t\geq 0\}$ be a family of $\sigma$-algebras such that $\mathcal{F}(t)$ is generated by the random variables $W(s)$, for $0 \leq s \leq t$. A random variable $T$ with values in $[0, \infty]$ is called a *stopping time* with respect to $\{\mathcal{F}(t): t\geq 0\}$ if $\{T \leq t\} \in \mathcal{F}(t)$, for every $t \geq 0$. We define $\mathcal{F}^+(s) = \bigcap_{t>s} \mathcal{F}(t)$. If $T$ is a stopping time, define $$\mathcal{F}^+(T) = \left\{ A \in \mathcal{F}: A \cap \{T \leq t\} \in \mathcal{F}^+(t) \ \textnormal{for all} \ t \geq 0\right\}.$$ One of the most important properties of $W(t)$ is the strong Markov property. See [@MP10 Theorem $2.16$] or [@LeG16 Theorem $2.20$ and $6.17$] for a general version. **Theorem 21** (Strong Markov Property). *For every almost surely finite stopping time $T$, $$\{W(T + t) - W(T): t \geq 0\}$$ is a standard Brownian motion independent of $\mathcal{F}^+(T)$.* We denote by $T_a$ the *first hitting time* of $W(t)$ at level $a$. Namely, $$T_a = \inf\{ t \geq 0: W(t) = a \}.$$ Notice that $T_a$ is a stopping time. In the following, we denote by $\Gamma(W)$ the graph of $W(t)$, i.e., $$\Gamma(W) = \left\{(t,W(t)): t \geq 0\right\}.$$ More generally, for $I \subseteq \mathbb{R}$ we write $$\Gamma(W(I)) = \left\{(t,W(t)): t \in I\right\}.$$ The following proposition contains standard results about Brownian motion, and the reader may refer to [@MP10 Theorem $4.29$ and Corollary $9.30$] for more details. **Proposition 22**. 
*Almost surely $\dim_H(\Gamma(W)) = \frac{3}{2}$ and $\dim_H \left( \Gamma(W) \cap Z_a \right) = \frac{1}{2}$ for any $a \in \mathbb{R}$.* In order to measure $\Gamma(W) \cap Z_a$, we need to introduce the local time of $W(t)$. The classical local time $L^0(t)$ of a linear Brownian motion $W$ is a nondecreasing stochastic process that measures the amount of time $W$ spends at $0$. In general, we can define $L^a(t)$, the local time of $W$ at $a$, which measures the amount of time $W$ spends at $a$. It is shown in [@BP17 Theorem $6.43$] that the local time $L^0(t)$ induces a Hausdorff measure on the real line. We now define the local time rigorously. Let $W(t)$ be a linear Brownian motion with arbitrary starting point. Given $a < b$, we define the stopping times $\tau_0 = 0$ and, for $j \geq 1$, $$\sigma_j = \inf\{t>\tau_{j-1}: W(t) = b\}, \ \tau_j = \inf\{t>\sigma_{j}: W(t) = a\}.$$ We call $W([\sigma_i, \tau_i])$ a *downcrossing* of the interval $[a,b]$. For every $t > 0$ we denote by $$D(a,b,t)= \max\{j\in \mathbb{N}: \tau_j \leq t\}.$$ This is the number of downcrossings of the interval $[a, b]$ before time $t$. Let $a_n < 0 < b_n$ be such that $a_n \to 0$ and $b_n \to 0$. The *local time* of $W$ at zero is a stochastic process defined in the following way: $$L^0(t) = \lim_{n \to \infty}2(b_n-a_n)D(a_n, b_n,t).$$ The definition of $L^0(t)$ is independent of the choice of $\{a_n \}$ and $\{ b_n\}$. In general, we define $L^a(t)$ to be the local time of $W(t)$ at level $a$, i.e., $$\begin{aligned} \label{localtime} L^a(t) = \lim_{n \to \infty} 2^{-n+1}D_n(a,t)\end{aligned}$$ where $D_n(a,t)$ is the number of downcrossings before time $t$ of the unique $n^{\mathrm{th}}$-dyadic interval $(i2^{-n} , (i+ 1)2^{-n} ]$ containing $a$. Refer to [@MP10 Chapter $6$] for more details. The definition of $L^a(t)$ in [@MP10 Chapter $6$] is slightly different from the one used here, but the two are equivalent. 
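The downcrossing definition lends itself to simulation. The following Monte Carlo sketch approximates a Brownian path on $[0,1]$ by a scaled random walk and counts downcrossings of $[-\varepsilon,\varepsilon]$; the step count, seed, and $\varepsilon$ are illustrative assumptions, so the output is only a rough estimate of $L^0(1)$.

```python
import random

# Monte Carlo sketch of L^0(1) via downcrossings of [a, b] = [-eps, eps];
# N, eps, and the seed are illustrative assumptions.
random.seed(0)
N = 200_000
dt = 1.0 / N
eps = 0.01

w = 0.0
state = None          # becomes 'up' after hitting +eps, 'down' after -eps
downcrossings = 0
for _ in range(N):
    w += random.gauss(0.0, dt ** 0.5)   # Brownian increment over one step
    if w >= eps:
        state = 'up'
    elif w <= -eps and state == 'up':
        downcrossings += 1              # completed a crossing from b to a
        state = 'down'

# L^0(1) is approximately 2 * (b - a) * D(a, b, 1) with a = -eps, b = eps.
local_time_estimate = 2 * (2 * eps) * downcrossings
print(local_time_estimate)
```

On average the estimate should be near $\mathbb{E}\,L^0(1) = \mathbb{E}|W(1)| = \sqrt{2/\pi}$, though any single run fluctuates considerably.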
The following theorem shows that the local time exists and is locally $\gamma$-Hölder continuous with respect to $a$ and $t$. See [@MP10 Theorem $6.19$]. **Theorem 23** (Trotter). *Let $W(t)$ be a linear Brownian motion and let $D_n(a,t)$ be the number of downcrossings before time $t$ of the $n^{\mathrm{th}}$-dyadic interval $(i2^{-n} , (i+ 1)2^{-n} ]$ containing $a$. Then, almost surely, $$L^a(t) = \lim_{n \to \infty} 2^{-n+1}D_n(a,t)$$ exists for all $a \in \mathbb{R}$ and $t \geq 0$. Moreover, for every $\gamma < \frac{1}{2}$, the random field $\{L^a(t): a \in\mathbb{R}, t \geq 0\}$ is almost surely locally $\gamma$-Hölder continuous.* The following theorem studies the local time before a hitting time. See [@MP10 Theorem $6.38$]. **Theorem 24** (Ray). *Suppose $b > 0$ and $\{W(t): 0 \leq t \leq T_b\}$ is a linear Brownian motion started at zero and stopped at time $T_b$. Then, almost surely, $L^a(T_b) > 0$ for all $0 \leq a < b$.* In [@Law99 Lemma $1$], a standard technique is presented for estimating an upper bound on the Hausdorff dimension of random subsets of the interval $[0, 1]$. The main strategy is to divide the unit interval into a finite number of smaller subintervals and then analyze how many of these subintervals intersect the given random subset. In the subsequent discussion, we use the notation $[a]$ to represent the greatest integer that is less than or equal to $a$. Let $\{A_n\}$ be a collection of random subsets of the interval $[0, 1]$, and define $A = \bigcap_{i=1}^\infty \bigcup_{n=i}^\infty A_n$, i.e., $A = \limsup_{n \to \infty} A_n$. Lastly, let $s$ be a nonnegative real number. The following lemma is [@Law99 Lemma $1$]. **Lemma 25**. *Let $p > 1$. 
If there exists a $C > 0$ such that $$\sum_{i=1}^{[p^n]+1}\mathbb{P}\left(\left[\frac{i-1}{p^{n}}, \frac{i}{p^n}\right] \cap A_n \neq \emptyset\right) \leq Cp^{sn},$$ then almost surely $\dim\left(A \right) \leq s$.* *Moreover, for any $\epsilon> 0$, almost surely, $$\mathrm{Card}\left( \left\{ \left[\frac{i-1}{p^{n}}, \frac{i}{p^n}\right] : A_n \cap \left[\frac{i-1}{p^{n}}, \frac{i}{p^n}\right] \neq \emptyset \right\} \right) \leq p^{(s+\epsilon)n}$$ for sufficiently large $n$.* We define $$M(t) = \max_{0 \leq s \leq t}W(s)$$ to be the maximum value of $W(s)$ from $0$ to $t$. Notice that for any $a >0$, $\mathbb{P}(M(t) > a) = 2\mathbb{P}(W(t) > a) = \mathbb{P}(|W(t)| > a)$ by [@MP10 Theorem $2.21$]. Recall that $T_a = \inf\{t > 0: W(t) = a \}$. Then for any $a > 0$, $$\label{hitdist} \mathbb{P}\left( T_a > t \right) = \mathbb{P}\left( M(t) < a \right) = \mathbb{P}\left( |W(t)| < a \right).$$ Let $a > 0$. We define $$T_{\scriptscriptstyle \pm a}(t_0) = \inf\{t - t_0: |W(t) - W(t_0)| = a, t > t_0 \},$$ i.e., the amount of time after $t_0$ that the Brownian motion needs to first hit either $W(t_0)+ a$ or $W(t_0)-a$. This clearly implies that $$\mathbb{P}\left( T_{\scriptscriptstyle \pm a}(t_0) > t \right) \leq \mathbb{P}\left( |W(t)| < a \right)$$ by equation [\[hitdist\]](#hitdist){reference-type="eqref" reference="hitdist"} and the Strong Markov Property [Theorem 21](#Strong Markov){reference-type="ref" reference="Strong Markov"}. **Lemma 26**. *For any $\frac{2}{3} < \alpha \leq 2$, almost surely, $$\dim\left\{t \in [0,1]: \limsup_{a \to 0^+} \frac{T_{\scriptscriptstyle \pm a}(t)}{a^\alpha} > 1 \right\} \leq \frac{3}{2} - \frac{1}{\alpha}.$$* *Proof.* Let $$A_n = \left\{t \in [0, 1]: T_{\scriptscriptstyle \pm \frac{1}{2^n}}(t) > \frac{1}{2^{\alpha(n+1)}} \right\}.$$ Let $t \in [0,1]$ such that $\limsup_{a \to 0^+} \frac{T_{\scriptscriptstyle \pm a}(t)}{a^\alpha} > 1$. 
Then there exists a sequence $a_n > 0$ such that $a_n \to 0$ and $T_{\scriptscriptstyle \pm a_n}(t) > (a_n)^\alpha$. Notice that for any $a_n$, there exists an $N$ such that $\frac{1}{2^{N+1}} \leq a_n \leq \frac{1}{2^N}$. Thus $$T_{\scriptscriptstyle \pm \frac{1}{2^N}}(t) > \frac{1}{2^{\alpha(N+1)}}.$$ Then $t \in \bigcap_{i=1}^\infty \bigcup_{n=i}^\infty A_n$. This implies that $$\left\{t \in [0,1]: \limsup_{a \to 0^+} \frac{T_{\scriptscriptstyle \pm a}(t)}{a^\alpha} > 1\right\} \subseteq \bigcap_{i=1}^\infty \bigcup_{n=i}^\infty A_n = \limsup_{n \to \infty} A_n.$$ Assume that $A_n \cap \left[0, \frac{1}{2^{\alpha (n+1)+1}}\right] \neq \emptyset$; then there exists a point $t_0 \in \left[0, \frac{1}{2^{\alpha (n+1)+1}}\right]$ such that $T_{\scriptscriptstyle \pm \frac{1}{2^n}}(t_0) > \frac{1}{2^{\alpha (n+1)}}$. This implies that $$\left| W(t_0) - W\left( \frac{1}{2^{\alpha (n+1)+1}} \right) \right| < \frac{1}{2^{n}}$$ and $$\max \left\{\left| W(s)- W(t_0) \right|: \frac{1}{2^{\alpha (n+1)+1}} \leq s \leq \frac{1}{2^{\alpha (n+1)}} \right\}< \frac{1}{2^{n}}.$$ Thus $$T_{\scriptscriptstyle \pm \frac{1}{2^{n-1}}} \left(\frac{1}{2^{\alpha (n+1)+1}}\right) > \frac{1}{2^{\alpha (n+1)+1}} .$$ Since $$\begin{aligned} \mathbb{P}\left(A_n \cap \left[0, \frac{1}{2^{\alpha (n+1)+1}} \right] \neq \emptyset \right) & \leq \mathbb{P}\left( T_{\scriptscriptstyle \pm \frac{1}{2^{n-1}}} \left(\frac{1}{2^{\alpha (n+1)+1}}\right) > \frac{1}{2^{\alpha (n+1)+1}} \right) \\ & \leq \mathbb{P}\left( \left| W\left(\frac{1}{2^{\alpha (n+1)+1}}\right) \right| < \frac{1}{2^{ n-1}} \right), \end{aligned}$$ then we have $$\begin{aligned} \mathbb{P}\left(A_n \cap \left[0, \frac{1}{2^{\alpha (n+1)}} \right] \neq \emptyset \right) &\leq \mathbb{P}\left(A_n \cap \left[0, \frac{1}{2^{\alpha (n+1)+1}} \right] \neq \emptyset \right) + \mathbb{P}\left(A_n \cap \left[\frac{1}{2^{\alpha (n+1)+1}}, \frac{1}{2^{\alpha (n+1)}} \right] \neq \emptyset \right) \\ & \leq 2 \mathbb{P}\left( \left| W\left(\frac{1}{2^{\alpha (n+1)+1}}\right) \right| < \frac{1}{2^{ n-1}} \right) \\ & \leq C\frac{1}{2^{\left(1-\frac{\alpha}{2}\right)(n+1)}} \end{aligned}$$ for some $C > 0$. The first inequality comes from the Strong Markov Property [Theorem 21](#Strong Markov){reference-type="ref" reference="Strong Markov"} and the second inequality comes from the density function of the Brownian motion. Moreover, for any $i \in \mathbb{N}$, $$\mathbb{P}\left(A_n \cap \left[\frac{i-1}{2^{\alpha (n+1)}}, \frac{i}{2^{\alpha (n+1)}}\right] \neq \emptyset\right) \leq C\frac{1}{2^{\left(1-\frac{\alpha}{2}\right)(n+1)}}$$ by the Strong Markov Property [Theorem 21](#Strong Markov){reference-type="ref" reference="Strong Markov"}. Then $$\sum_{i=1}^{[2^{\alpha (n+1)}]+1}\mathbb{P}\left(A_n \cap \left[\frac{i-1}{2^{\alpha (n+1)}}, \frac{i}{2^{\alpha (n+1)}}\right] \neq \emptyset\right) \leq (C+1) 2^{\left(\frac{3}{2}\alpha - 1 \right) (n+1)}.$$ Thus a.s. $$\dim \left( \limsup_{n \to \infty} A_n\right) \leq \frac{3}{2} - \frac{1}{\alpha}$$ by Lemma [Lemma 25](#sdim){reference-type="ref" reference="sdim"}. Then a.s. $$\dim\left\{t \in [0,1]: \limsup_{a \to 0^+} \frac{T_{\scriptscriptstyle \pm a}(t)}{a^\alpha} > 1\right\} \leq \frac{3}{2} - \frac{1}{\alpha}.$$ ◻ *Remark 27*. Since $$\sum_{i=1}^{[2^{\alpha (n+1)}]+1}\mathbb{P}\left(A_n \cap \left[\frac{i-1}{2^{\alpha (n+1)}}, \frac{i}{2^{\alpha (n+1)}}\right] \neq \emptyset\right) \leq (C+1) 2^{\left(\frac{3}{2}\alpha - 1 \right) (n+1)},$$ it follows from Lemma [Lemma 25](#sdim){reference-type="ref" reference="sdim"} that for any $\epsilon> 0$, a.s. $$\mathrm{Card}\left( \left\{ \left[\frac{i}{2^{\alpha(n+1)}}, \frac{i+1}{2^{\alpha(n+1)}}\right] \subseteq [0,1]: A_n \cap \left[\frac{i}{2^{\alpha(n+1)}}, \frac{i+1}{2^{\alpha(n+1)}}\right] \neq \emptyset \right\} \right) \leq 2^{\left(\frac{3}{2}\alpha - 1 + \epsilon\right)(n+1)}$$ for sufficiently large $n$. # The Brownian graph is minimal almost surely {#BGMAS} We prove Theorem [Theorem 1](#bg){reference-type="ref" reference="bg"} in this section. 
## Constructing the measure $\mu$ {#constructing-the-measure-mu} Our first step is to construct a measure on $\Gamma(W)$ as described in equation [\[spacemeasure\]](#spacemeasure){reference-type="eqref" reference="spacemeasure"}. For any $a \in \mathbb{R}$, we first define a Borel measure $l^a$ on $\Gamma(W) \cap Z_a$ by $$l^a\left( \Gamma(W([0,t])) \cap Z_a \right) = L^a(t).$$ Then the measure $\mu$ on $\Gamma(W)$ is defined in the following way. For any measurable subset $A \subseteq \Gamma(W)$, $$\mu\left( A \right) = \int_{-\infty}^{\infty} l^a(A \cap Z_a) \ da.$$ We claim that for any $\epsilon> 0$, there exist $C = C(x, \epsilon)$ and $r_0 = r_0(x)$ such that when $r < r_0$, $$\label{umassbound} \mu\left( B(x,r) \cap \Gamma(W) \right) \leq C \cdot r^{\frac{3}{2}-\epsilon}.$$ Recall that, by Theorem [Theorem 23](#lf){reference-type="ref" reference="lf"}, $L^a(t)$ is a.s. locally $\gamma$-Hölder continuous for any $\gamma < \frac{1}{2}$. Then for any $x \in \Gamma(W)$ and $\epsilon> 0$, there exist $C_1 = C_1(x, \epsilon)$ and $r_0 = r_0(x)$ such that whenever $r < r_0$ we have $$|L^{a_1}(t_1) - L^{a_2}(t_2)| \leq C_1 \left( |a_1-a_2|^2 + |t_1-t_2|^2 \right)^{\frac{1}{2}\left(\frac{1}{2}-\epsilon\right)}$$ for any $(t_1,a_1),(t_2,a_2) \in B(x,r) \cap \Gamma(W)$. Thus for any $a \in \mathbb{R}$, $$l^a(B(x,r) \cap Z_a) \leq \sup\{|L^a(s) - L^a(t)| : (s,a), (t,a) \in B(x,r) \cap \Gamma(W) \}\leq 2C_1 r^{\frac{1}{2}-\epsilon}.$$ Then $$\begin{aligned} \mu\left( B(x,r) \cap \Gamma(W) \right) & = \int_{-\infty}^{\infty} l^a(B(x,r) \cap Z_a) da \\ & \leq \int_{\pi_y(x)-r}^{\pi_y(x)+r} 2C_1 r^{\frac{1}{2}-\epsilon} da \\ & \leq C r^{\frac{3}{2}-\epsilon}\end{aligned}$$ for some $C = C(x, \epsilon) > 0$ a.s. *Remark 28*. 
If $K$ is a compact subset of $\Gamma(W)$, the inequality [\[umassbound\]](#umassbound){reference-type="eqref" reference="umassbound"} is valid for $K$ in a more general form: for any $\epsilon>0$ there exists $C = C(\epsilon, K)$ such that $\mu\left( B(x,r) \cap K \right) \leq C \cdot r^{\frac{3}{2}-\epsilon}$ for all $x \in K$ and $r > 0$. ## Constructing the "vertical" Cantor sets and associated measures We set $$X = \Gamma(W) \cap \left([0, T_6] \times [0,1]\right)$$ and we will restrict our proofs and constructions to $X$ in the remainder of this section. Recall that $T_6 = \inf\{ t \geq 0: W(t) = 6 \}$ and notice that $T_6 < \infty$ a.s.; thus $X$ is compact a.s. Let $\sigma_1= 0$ and $\upsilon_1 = \inf\left\{t > 0: W(t)= \frac{1}{2}\right\}$, then we define $$\begin{aligned} \sigma_{i+1} &= \inf\left\{t: t > \varsigma_i \ \textrm{and} \ W(t) = 0\right\}; \\ \varsigma_i&= \inf\left\{t: t > \sigma_{i} \ \textrm{and} \ W(t) = \frac{1}{2}\right\}; \\ \upsilon_{i+1} &= \inf\left\{t: t > \tau_{i} \ \textrm{and} \ W(t) = \frac{1}{2}\right\}; \\ \tau_i &= \inf\left\{t: t > \upsilon_{i} \ \textrm{and} \ W(t) = 1\right\}.\end{aligned}$$ To simplify notation, we denote $W\left(\left(\sigma_i, \varsigma_i\right]\right) \cap X$ by $W\left(\sigma_i, \varsigma_i, X\right)$ for any $i \in \mathbb{N}$, and we keep the same principle for the notations that represent $W\left(\left[\varsigma_i, \sigma_{i+1}\right)\right) \cap X$, $W\left(\left(\upsilon_i, \tau_i\right]\right) \cap X$ and $W\left(\left[\tau_i, \upsilon_{i+1}\right)\right) \cap X$, respectively. ![An illustration of $\mathcal{E}_n$ for three generations.](BrownianMotion){#BMCS height="1.2 in"} We first focus on $\left\{ W\left(\sigma_i, \varsigma_i, X\right) \right\}$ and $\left\{ W\left(\varsigma_i, \sigma_{i+1}, X\right) \right\}$. 
Notice that $\left\{ W\left(\sigma_i, \varsigma_i, X\right) \right\} \cup \left\{ W\left(\varsigma_i, \sigma_{i+1}, X\right) \right\}$ is composed of finitely many disjoint elements, and they cover $X \cap \left(\mathbb{R} \times \left[0, \frac{1}{2}\right]\right)$. Without loss of generality, we may assume that $W\left(\varsigma_N, \sigma_{N+1}, X\right)$ is the last nonempty element, i.e., $W\left(\varsigma_N, \sigma_{N+1}, X\right) \neq \emptyset$, and $W\left(\sigma_{N+1}, \varsigma_{N+1}, X\right) = \emptyset$. Thus $\pi_y\left(W\left(\varsigma_N, \sigma_{N+1}, X\right)\right)$ may not cover $\left(0, \ \frac{1}{2}\right]$. We merge it with the preceding element, i.e., we regard $W\left(\sigma_N, \varsigma_N, X\right) \cup W\left(\varsigma_N, \sigma_{N+1}, X\right)$ as one element. We finally enumerate all the elements by $\mathcal{E}^1_1 = \left\{E^1_{1,j}\right\}$. More precisely, $$\mathcal{E}^1_1 = \left\{W\left(\sigma_1, \varsigma_1, X\right), W\left(\varsigma_1, \sigma_{2}, X\right), \dots, W\left(\sigma_N, \varsigma_N, X\right) \cup W\left(\varsigma_N, \sigma_{N+1}, X\right) \right\}.$$ A similar, symmetric procedure applies if the last nonempty element is $W\left(\sigma_M, \varsigma_M, X\right)$ for some $M \in \mathbb{N}$. Notice that $\pi_y\left(E^1_{1,j} \right) = \left(0, \ \frac{1}{2}\right]$ for any $E^1_{1,j} \in \mathcal{E}^1_1$. Similarly, we have finitely many disjoint $W\left(\upsilon_i, \tau_i, X\right)$'s and $W\left(\tau_i, \upsilon_{i+1}, X\right)$'s that cover $X \cap \left(\mathbb{R} \times \left(\frac{1}{2}, \ 1\right]\right)$. Again we append the last one to the preceding element, then enumerate them by $\mathcal{E}^2_1 = \left\{E^2_{1,j}\right\}$. It is clear that $\pi_y\left(E^2_{1,j} \right) = \left(\frac{1}{2},\ 1\right]$ for any $E^2_{1,j} \in \mathcal{E}^2_1$. Suppose an $E_{n,j}^m \subset \Gamma(W)$ is chosen. Then we have $\pi_y\left(E_{n,j}^m \right) =\left(\frac{m-1}{2^n}, \frac{m}{2^n}\right]$. 
Let $\sigma_1\left(E_{n,j}^m\right) = \inf\left\{t: (t, W(t)) \in E_{n,j}^m\right\}$, so that $$W\left(\sigma_1\left(E_{n,j}^m\right) \right) = \frac{m-1}{2^n} \ \mathrm{or} \ \frac{m}{2^n}.$$ Without loss of generality, we may assume that $W\left(\sigma_1\left(E_{n,j}^m\right) \right) = \frac{m-1}{2^n}$. Otherwise a symmetric construction could be taken. Let $\upsilon_1\left(E_{n,j}^m\right)= \inf\left\{t > \sigma_1\left(E_{n,j}^m\right): \ W(t) = \frac{2m-1}{2^{n+1}}\right\}$. Then define $$\begin{aligned} \sigma_{i+1}\left(E_{n,j}^m\right) &= \inf\left\{t: t > \varsigma_i\left(E_{n,j}^m\right) \ \textrm{and} \ W\left(t\right) = \frac{m-1}{2^n}\right\}; \\ \varsigma_i\left(E_{n,j}^m\right) &= \inf\left\{t: t > \sigma_{i}\left(E_{n,j}^m\right) \ \textrm{and} \ W\left(t\right) = \frac{2m-1}{2^{n+1}}\right\}; \\ \upsilon_{i+1}\left(E_{n,j}^m\right) &= \inf\left\{t: t > \tau_{i}\left(E_{n,j}^m\right) \ \textrm{and} \ W\left(t\right) = \frac{2m-1}{2^{n+1}}\right\}; \\ \tau_i\left(E_{n,j}^m\right) &= \inf\left\{t: t > \upsilon_{i}\left(E_{n,j}^m\right) \ \textrm{and} \ W\left(t\right) = \frac{m}{2^n}\right\}.\end{aligned}$$ The notational conventions used here follow the same principle as above. There are only finitely many elements in $\left\{ W\left(\sigma_i, \varsigma_i, E_{n,j}^m\right) \right\} \cup \left\{W\left(\varsigma_i, \sigma_{i+1}, E_{n,j}^m\right) \right\}$; these elements are disjoint and they cover $E_{n,j}^m \cap \left(\mathbb{R} \times \left(\frac{m-1}{2^n}, \frac{2m-1}{2^{n+1}}\right]\right)$. We append the last one to the preceding element. Then we enumerate them by $\mathcal{E}^{2m-1}_{n+1}=\left\{E^{2m-1}_{n+1,j}\right\}$. Similarly, there are finitely many elements in $\left\{W\left(\upsilon_i, \tau_i, E_{n,j}^m \right)\right\} \cup \left\{W\left(\tau_i, \upsilon_{i+1}, E_{n,j}^m\right) \right\}$; they are disjoint and they cover $E_{n,j}^m \cap \left(\mathbb{R} \times \left(\frac{2m-1}{2^{n+1}},\frac{m}{2^n}\right]\right)$. 
After appending the last one to the preceding element, we enumerate them by $\mathcal{E}^{2m}_{n+1} = \left\{E^{2m}_{n+1,j}\right\}$. In the remainder of this section, we denote by $\mathcal{E}^m_n = \left\{E_{n,j}^m\right\}$ and $\mathcal{E}_n = \bigcup_{m=1}^{2^n}\mathcal{E}^m_n$ the collection of all the *$n^{\mathrm{th}}$-generation elements at height $m$* of $X$ and the collection of *all the $n^{\mathrm{th}}$-generation elements* of $X$, respectively. Moreover, we define $$E^m_n = \bigcup_{E_{n,j}^m \in \mathcal{E}^m_n} E_{n,j}^m.$$ Let $$\hat{\mathcal{E}}^m_n = \left\{E_{n,j}^m \in \mathcal{E}^m_n: \mathcal{L}^1\left(\pi_x(E_{n,j}^m)\right) \geq \frac{1}{2^n}\frac{1}{n \log 2}\right\}$$ and $$\mathcal{G}^m_n = \mathcal{E}^m_n \setminus \hat{\mathcal{E}}^m_n.$$ Intuitively, $\hat{\mathcal{E}}^m_n$ is the collection of $n^{\mathrm{th}}$-generation elements at height $m$ with "flat slope", while $\mathcal{G}^m_n$ consists of the ones with "steep slope". Similarly, we denote by $$\hat{\mathcal{E}}_n = \bigcup_{m=1}^{2^n} \hat{\mathcal{E}}^m_n, \ \mathcal{G}_n = \bigcup_{m=1}^{2^n} \mathcal{G}^m_n$$ and $$\hat{E}_n = \bigcup_{E_{n,j} \in \hat{\mathcal{E}}_n} E_{n,j}, \ G_n = \bigcup_{E_{n,j} \in \mathcal{G}_n} E_{n,j}.$$ Moreover, we define $$A' = \bigcap_{i=1}^\infty \bigcup_{n=i}^\infty \hat{E}_n = \limsup_{n \to \infty} \hat{E}_n$$ and $$A = \bigcup_{i=1}^\infty \bigcap_{n=i}^\infty G_n = \liminf_{n \to \infty} G_n.$$ Thus $A \cup A' = X$ and $A \cap A' = \emptyset$. We define $$S' = \left\{t \in [0, T_6]: \limsup_{a \to 0^+} \frac{T_{\scriptscriptstyle \pm a}(t)|\log a|}{a} > 1\right\}$$ and $$S'_n = \left\{t \in [0, T_6]: T_{\scriptscriptstyle \pm \frac{1}{2^n}}(t) > \frac{1}{2^{n+1}} \frac{1}{|(n+1)\log 2|} \right\}.$$ Then $\dim_H(S') \leq \frac{1}{2}$ a.s. by Lemma [Lemma 26](#hitdim){reference-type="ref" reference="hitdim"}. Moreover, for any $\epsilon> 0$, a.s. 
$$\label{bji} \mathrm{Card}\left( \left\{i: S'_n \cap \left[\frac{i}{2^{n+1}}, \frac{i+1}{2^{n+1}}\right] \neq \emptyset \right\}\right) \leq 2^{\left(\frac{1}{2}+\epsilon\right)(n+1)}$$ for sufficiently large $n$ by Remark [Remark 27](#sufficientcover){reference-type="ref" reference="sufficientcover"}. The following results provide essential properties for our forthcoming constructions of the "vertical" Cantor sets. **Proposition 29**. *Almost surely, $$\dim_H(A') \leq \frac{1}{2} \ \mathrm{and} \ \mu(A') = 0.$$* *Proof.* We first prove the following statements: Let $E_{n,j} \in \hat{\mathcal{E}}_n$. Then - $\pi_x(E_{n,j}) \cap S'_n \neq \emptyset$; - there exists a $C_1 > 0$ such that for any $\left[\frac{i}{2^n}, \frac{i+1}{2^n}\right]$ where $\pi_x(E_{n,j}) \cap \left[\frac{i}{2^n}, \frac{i+1}{2^n}\right] \neq \emptyset$, we can cover $E_{n,j} \cap \left[\frac{i}{2^n}, \frac{i+1}{2^n}\right] \times \mathbb{R}$ by at most $C_1$ many $n^{\mathrm{th}}$-dyadic squares; - there exists a $C_2 > 0$ such that $$\mathrm{Card}\left(\left\{ \left[\frac{i}{2^n}, \frac{i+1}{2^n}\right]: \pi_x(E_{n,j}) \cap \left[\frac{i}{2^n}, \frac{i+1}{2^n}\right] \neq \emptyset, S'_n \cap \left[\frac{i}{2^n}, \frac{i+1}{2^n}\right] = \emptyset \right\}\right) \leq C_2.$$ Without loss of generality, we may assume that $$E_{n,j} = W\left(\sigma_N, \varsigma_N, E_{n-1,j}\right) \cup W\left(\varsigma_N, \sigma_{N+1}, E_{n-1,j}\right)$$ for some $E_{n-1,j} \in \mathcal{E}_{n-1}$ and $N \in \mathbb{N}$. Otherwise, the symmetric case follows by the same argument. Here we allow $W\left(\varsigma_N, \sigma_{N+1}, E_{n-1,j}\right)$ to be empty. Since $E_{n,j} \in \hat{\mathcal{E}}_n$, either $$\pi_x\left( W\left(\sigma_N, \varsigma_N, E_{n-1,j}\right) \right) \geq \frac{1}{2^{n+1}}\frac{1}{n \log 2}$$ or $$\pi_x\left(W\left(\varsigma_N, \sigma_{N+1}, E_{n-1,j}\right) \right) \geq \frac{1}{2^{n+1}}\frac{1}{n \log 2}.$$ This implies that $\pi_x(E_{n,j}) \cap S'_n \neq \emptyset$.
Let $\left[\frac{i}{2^n}, \frac{i+1}{2^n}\right]$ be an $n^{\mathrm{th}}$-dyadic interval where $\pi_x(E_{n,j}) \cap \left[\frac{i}{2^n}, \frac{i+1}{2^n}\right] \neq \emptyset$. Recall that $\pi_y(E_{n,j})$ is an $n^{\mathrm{th}}$-dyadic interval (with one end open). This implies that we can cover $E_{n,j} \cap \left[\frac{i}{2^n}, \frac{i+1}{2^n}\right] \times \mathbb{R}$ by two $n^{\mathrm{th}}$-dyadic squares. Let $\left[\frac{k}{2^n}, \frac{k+1}{2^n}\right]$ be an $n^{\mathrm{th}}$-dyadic interval such that $\pi_x(E_{n,j}) \cap \left[\frac{k}{2^n}, \frac{k+1}{2^n}\right] \neq \emptyset$ and $S'_n \cap \left[\frac{k}{2^n}, \frac{k+1}{2^n}\right] = \emptyset$. Then $$\max \left\{\left|W(t)-W(s)\right|: t,s \in \left[\frac{k}{2^n}, \frac{k+1}{2^n}\right] \right\} > \frac{1}{2^n}.$$ Since $\mathcal{L}^1\left(\pi_y(E_{n,j}) \right) = \frac{1}{2^n}$, it follows that $\left[\frac{k}{2^n}, \frac{k+1}{2^n}\right] \nsubseteq \pi_x(E_{n,j})$. Intuitively, the $x$-projection of any connected component of $E_{n,j}$ should start or stop in this interval, since the Brownian motion there oscillates more than the "height" of $E_{n,j}$. Since $\pi_x(E_{n,j})$ is one interval or the union of two intervals, we conclude that $$\mathrm{Card}\left(\left\{ \left[\frac{k}{2^n}, \frac{k+1}{2^n}\right]: \pi_x(E_{n,j}) \cap \left[\frac{k}{2^n}, \frac{k+1}{2^n}\right] \neq \emptyset, S'_n \cap \left[\frac{k}{2^n}, \frac{k+1}{2^n}\right] = \emptyset \right\}\right) \leq 4.$$ Let $\mathcal{D}_n$ be the collection of all the random $n^{\mathrm{th}}$-dyadic squares that intersect $\hat{E}_n$. Combining all of the above, we conclude that $$\mathrm{Card}\left(\mathcal{D}_n\right) \leq C \cdot \mathrm{Card}\left( \left\{i: S'_n \cap \left[\frac{i}{2^{n+1}}, \frac{i+1}{2^{n+1}}\right] \neq \emptyset \right\}\right)$$ for some $C = C(C_1, C_2)> 0$. Then, by inequality [\[bji\]](#bji){reference-type="eqref" reference="bji"}, for any $\epsilon> 0$, a.s.
$$\label{sje} \mathrm{Card}\left( \mathcal{D}_n\right) \leq 2^{\left(\frac{1}{2}+\epsilon\right)(n+1)}$$ for sufficiently large $n$. Thus $$\mathcal{H}^{\frac{1}{2}+2\epsilon}\left(A'\right) \leq \lim_{n \to \infty} \sum_{l=n}^{\infty}2^{\left(\frac{1}{2}+\epsilon\right)(l+1)}2^{-(\frac{1}{2}+2\epsilon)l} = 0.$$ This implies that $\dim_H(A') \leq \frac{1}{2}$ a.s. Since $\dim_H(A') \leq \frac{1}{2}$ a.s., we have $\mathcal{H}^1(A') = 0$ a.s., so for any $\delta > 0$ there exists a countable sequence of balls $\left\{B(x_i,r_i) \right\}$ covering $A'$ such that $\sum_{i=1}^{\infty} r_i < \delta$. It follows from Remark [Remark 28](#smallradius){reference-type="ref" reference="smallradius"} that $$\mu\left( A' \right) \leq \sum_{i=1}^{\infty}\mu\left( B(x_i,r_i) \right) \lesssim \sum_{i=1}^{\infty} r_i \lesssim \delta.$$ This finishes the proof. ◻ *Remark 30*. Since $\pi_x(E_{n,j}) \geq \frac{1}{2^n}\frac{1}{n \log 2}$ for any $E_{n,j} \in \hat{\mathcal{E}}_n$, any $n^{\mathrm{th}}$-dyadic square can intersect at most $[n \log 2]+1$ many elements from $\hat{\mathcal{E}}_n$. It then follows from inequality [\[sje\]](#sje){reference-type="eqref" reference="sje"} that for any $\epsilon> 0$, a.s. $$\mathrm{Card}\left( \hat{\mathcal{E}}_n \right) \leq 2^{\left(\frac{1}{2}+\epsilon\right)(n+1)}$$ for sufficiently large $n$. **Lemma 31**. *Almost surely, for any $E_{n,j} \in \mathcal{E}_n$, $$\pi_y(A \cap E_{n,j}) = \pi_y\left(E_{n,j}\right)$$ a.e.* *Proof.* It is sufficient to prove that a.s. $$\pi_y\left(A \cap \Gamma(W) \cap [0, T_1] \times (0,1]\right) \supset (0,1).$$ The remainder of the proof directly follows from the Strong Markov Property [Theorem 21](#Strong Markov){reference-type="ref" reference="Strong Markov"}. We first show that a.s.
for any $a \in (0,1)$, $$\label{goodmeasure} l^a(A \cap \Gamma(W) \cap [0, T_1] \times (0,1] \cap Z_a) \geq \frac{1}{2} l^a(\Gamma(W) \cap [0, T_1] \times (0,1] \cap Z_a).$$ We denote by $\mathcal{E}_n(a) = \{E_{n,j}(a)\}$ the subcollection of $\mathcal{E}_n$ such that 1. $E_{n,j}(a) \subset \Gamma(W) \cap [0, T_1] \times [0,1]$; 2. $a \in \pi_y\left( E_{n,j}(a) \right)$. Put simply, $\mathcal{E}_n(a)$ is the collection of all the $n^{\mathrm{th}}$-generation elements in $[0, T_1] \times [0,1]$ that intersect $Z_a$. Thus we have $\pi_y\left( E_{n,i}(a) \right) = \pi_y\left( E_{n,j}(a) \right)$ for any $E_{n,i}(a), E_{n,j}(a) \in \mathcal{E}_n(a)$. Recall that $D_n(a, T_1)$ is the number of downcrossings before time $T_1$ of the unique $n^{\mathrm{th}}$-dyadic interval $(i2^{-n} , (i+ 1)2^{-n}]$ containing $a$; thus $$\label{elementlt} \frac{1}{2}D_n\left( a, T_1\right) \leq \mathrm{Card}\left( \mathcal{E}_n(a)\right) \leq 2 D_n\left( a, T_1\right).$$ For sufficiently large $n$, a.s. $$\mathrm{Card}\left( \hat{\mathcal{E}}_n \right) \leq 2^{\left(\frac{2}{3} \right)n}$$ by Remark [\[secontrol\]](#secontrol){reference-type="eqref" reference="secontrol"}. Theorem [Theorem 24](#ltv){reference-type="ref" reference="ltv"} implies that a.s. for any $a \in (0,1)$, $l^a(\Gamma(W) \cap [0, T_1] \times [0,1] \cap Z_a) > \delta$ for some $\delta = \delta(a, \omega)$. Thus there exists an $N = N(a, \omega)>0$ such that $$\sum_{n=N}^\infty\frac{\mathrm{Card}\left( \hat{\mathcal{E}}_n \cap \mathcal{E}_n(a) \right)}{2^{n-1}} \leq \sum_{n=N}^\infty\frac{\mathrm{Card}\left( \hat{\mathcal{E}}_n\right)}{2^{n-1}} \leq \sum_{n=N}^\infty\frac{2^{\left(\frac{2}{3} \right)n}}{2^{n-1}} < \frac{1}{4}\delta.$$ Recall that the definition of local time implies $$l^a(\Gamma(W) \cap [0, T_1] \times (0,1] \cap Z_a) = \lim_{n \to \infty}\frac{D_n(a, T_1)}{2^{n-1}} > \delta.$$ Since $A' \subset \bigcup_{n = N}^\infty \bigcup_{E_{n,j} \in \hat{\mathcal{E}}_n}E_{n,j}$, we have a.s.
for any $a \in (0,1)$, $$l^a(A' \cap \Gamma(W) \cap [0, T_1] \times [0,1] \cap Z_a) < \frac{1}{2} l^a(\Gamma(W) \cap [0, T_1] \times [0,1] \cap Z_a)$$ by the definition of local time. This implies that inequality [\[goodmeasure\]](#goodmeasure){reference-type="eqref" reference="goodmeasure"} is valid and thus $$\pi_y\left(A \cap \Gamma(W) \cap [0, T_1] \times (0,1]\right) \supset (0,1).$$ ◻ *Remark 32*. Almost surely for any $E_{n,j} \in \mathcal{E}_n$, $$l^a(A \cap E_{n,j} \cap Z_a) \geq \frac{1}{2} l^a(\Gamma(W) \cap E_{n,j} \cap Z_a)$$ for a.e. $a \in \pi_y(E_{n,j})$ by inequality [\[goodmeasure\]](#goodmeasure){reference-type="eqref" reference="goodmeasure"}. In the next step, we will construct a family $\mathcal{E}$ of "vertical" Cantor sets from $X$. A "vertical" Cantor set $E$ is constructed in the following way. We choose one element in $\mathcal{E}^1_1$ and another element in $\mathcal{E}^2_1$. The union of these two elements is the *first generation* of $E$ and we denote it by $E_1$. Suppose the $n^{\mathrm{th}}$-generation $E_n$ of $E$ is constructed and it is a union of $2^n$ disjoint elements from $\mathcal{E}_n$ whose projection to the $y$-axis covers $(0,1]$. For each $E_{n,j}$ in $E_n$, we choose two disjoint elements from $\mathcal{E}_{n+1}$ that are subsets of $E_{n,j}$ and such that the union of their projections to the $y$-axis equals $\pi_y(E_{n,j})$. The union of all the $2^{n+1}$ elements selected in the above way forms the *$(n+1)^{\mathrm{th}}$-generation* $E_{n+1}$ of $E$. Then $E$ is defined as $E= A \cap \left(\bigcap_{n=1}^\infty E_n\right) = A \cap \left(\bigcap_{n=1}^{\infty} \bigcup_{j=1}^{2^n} E_{n,j}\right)$. Notice that we define such a Cantor set inside $A$, since restricting to $A$ yields good properties that will be proved in the next steps. We collect all the "vertical" Cantor sets in $A$ that can be produced by the above method and denote the whole collection by $\mathcal{E}$.
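The generation-by-generation bookkeeping behind this construction can be sketched in a few lines. The toy model below is ours, not the paper's: an element is identified with its $y$-projection only, and each parent's projection is split between its two chosen children, so the selected elements of generation $n$ tile $(0,1]$ as described above.

```python
# Hedged toy model of the nested selection defining a "vertical" Cantor set:
# each n-th generation element is replaced by two children whose y-projections
# split the parent's. The real elements live on the Brownian graph; here an
# element is just its y-projection interval, which is all the bookkeeping needs.

def refine(generation):
    """One selection step: split every element's y-projection in half."""
    children = []
    for (lo, hi) in generation:
        mid = (lo + hi) / 2
        children.append((lo, mid))   # one chosen child over the lower half
        children.append((mid, hi))   # one chosen child over the upper half
    return children

gens = [[(0.0, 1.0)]]
for _ in range(3):
    gens.append(refine(gens[-1]))

# Generation n has 2^n elements and their projections tile (0, 1].
for n, g in enumerate(gens):
    assert len(g) == 2 ** n
    assert abs(sum(hi - lo for lo, hi in g) - 1.0) < 1e-12
print(len(gens[-1]))  # 8
```

The intersection over all generations of the corresponding elements, restricted to $A$, is the Cantor set; the code only tracks the projections, which is what Lemmas 31 and 33 use.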
The reader should be aware that $E \in \mathcal{E}$ may not be a metric space with hierarchical structure as defined in Section [3](#CDM){reference-type="ref" reference="CDM"}, since $A \cap E_{n,j}$ may not be a locally compact metric space for some $E_{n,j}$. **Lemma 33**. *For any $E \in \mathcal{E}$, $E \cap Z_a$ contains only one point for a.e. $a \in [0,1]$.* *Proof.* We first notice that $E \cap Z_a \neq \emptyset$ for any $E \in \mathcal{E}$ and a.e. $a \in (0,1]$ by Lemma [Lemma 31](#gecei){reference-type="ref" reference="gecei"}. Let $a \in (0,1]$ and assume that $p,q \in E \cap Z_a$ with $p \neq q$. Then there exists a number $n \in \mathbb{N}$ such that $p \in E_{n,j}$ for some $E_{n,j} \in \mathcal{E}_n$ and $\mathrm{diam}\left(E_{n,j}\right) < |p-q|$ by our construction. This implies that $q \in E_{n,i}$ for some $E_{n,i} \in \mathcal{E}_{n}$ with $E_{n,j} \neq E_{n,i}$. Recall that $\pi_y\left(E_{n,j}\right) \cap \pi_y\left(E_{n,i}\right) = \emptyset$ when $E_{n,j} \neq E_{n,i}$ and both of them are in the $n^{\mathrm{th}}$-generation of $E$. This contradicts $p,q \in E \cap Z_a$ and thus finishes the proof. ◻ Lemma [Lemma 33](#vcurve){reference-type="ref" reference="vcurve"} implies that the restriction of $\pi_y$ onto every "vertical" Cantor set $E$ is injective, and it is surjective on a full measure subset of $[0,1]$ by Lemma [Lemma 31](#gecei){reference-type="ref" reference="gecei"}. Hence we can define a measure $\lambda_{E}$ by pushing forward the Lebesgue measure from $\{0\}\times [0,1]$ to $E$, i.e., $\lambda_{E}=(\pi_y^{-1})_*(\mathcal{L}_1)$; more concretely, for every $S\subset E$ we have $$\begin{aligned} \lambda_{E}(S)=\mathcal{L}_1(\pi_y(S)).\end{aligned}$$ We will denote $\mathbf{E}=\{\lambda_E\}_{E\in\mathcal{E}}$, i.e., the family of pullbacks of the Lebesgue measure under $\pi_y$ to all the "vertical" Cantor sets in $\mathcal{E}$. **Lemma 34**.
*$\dim_C\left(E\right) \geq 1$ for any $E \in \mathcal{E}$.* *Proof.* Let $E \in \mathcal{E}$ where $E = A \cap \left( \bigcap_{n=1}^\infty \bigcup_{j=1}^{2^n} E_{n,j} \right)$. Recall that $A = \bigcup_{i=1}^\infty \bigcap_{n=i}^\infty G_n$, so we denote by $A^N = \bigcup_{i=1}^N \bigcap_{n=i}^\infty G_n$. We then define $E^N = A^N \cap \left( \bigcap_{n=1}^\infty \bigcup_{j=1}^{2^n} E_{n,j} \right)$ and thus $E = \bigcup_{N=1}^\infty E^N$. Notice that each $E^N$ is a flat metric space with hierarchical structure and their flatness constants are uniformly bounded. Let $x \in E^N$ and denote by $E^N_i(x)$ the unique element in $\mathcal{E}_i(E^N)$ that contains $x$. We have $$\lim_{i \to \infty}\lim_{N \to \infty}\Delta(E^N_{i}(x), (E^N_{i}(x))') \leq \lim_{i \to \infty} \frac{2}{i \log 2} = 0$$ and for $i$ sufficiently large, $$\frac{1}{2} \leq \lim_{N \to \infty}\frac{\mathrm{diam}(E^N_i(x))}{\mathrm{diam}((E^N_i(x))')} \leq 2.$$ Applying Theorem [Theorem 15](#unioncd1){reference-type="ref" reference="unioncd1"} to $E$ finishes the proof. ◻ **Lemma 35**. *Let $E \in \mathcal{E}$ and let $\lambda_{E} \in \mathbf{E}$ be equipped on $E$. For any $x \in E$, there exists $r_1 = r_1(x)$ such that whenever $r < r_1$, $$\lambda_E\left(B(x,r) \cap E \right) \geq \frac{1}{3}r.$$* *Proof.* We denote by $E_n(x)$ the unique element in $\mathcal{E}_n(E)$ that contains $x$. Since $x \in A$, we have that $E_n(x) \in \mathcal{G}_n$ for sufficiently large $n$. This implies that there exists an $N = N(x)$ such that when $r < \mathrm{diam}\left( E_N(x) \right)$, $$\lambda_E\left(B(x,r) \cap E \right) \geq \frac{1}{3}r.$$ ◻ ## The proof of Theorem [Theorem 1](#bg){reference-type="ref" reference="bg"} {#the-proof-of-theorem-bg} We are now ready to prove Theorem [Theorem 1](#bg){reference-type="ref" reference="bg"}.
*Proof of Theorem [Theorem 1](#bg){reference-type="ref" reference="bg"}.* We list the conditions that are already known: Almost surely - for any $\epsilon>0$, there exist $C = C(x, \epsilon)$ and $r_0 = r_0(x)$ such that when $r < r_0$, $$\mu\left( B(x,r) \cap \Gamma(W) \right) \leq C \cdot r^{\frac{3}{2}-\epsilon}.$$ See inequality [\[umassbound\]](#umassbound){reference-type="eqref" reference="umassbound"}. - for any $E \in \mathcal{E}$ with $\lambda_E \in \mathbf{E}$ equipped on $E$, and $x \in E$, there exists an $r_1 = r_1(x)$ such that when $r < r_1$, $$\lambda_E(B(x,r) \cap E) \geq \frac{1}{3}r.$$ See Lemma [Lemma 35](#linear1){reference-type="ref" reference="linear1"}. - $\dim_C(E) \geq 1$ for any $E \in \mathcal{E}$. See Lemma [Lemma 34](#dimcgood){reference-type="ref" reference="dimcgood"}. - $\mu(A') = 0$. See Proposition [Proposition 29](#sdc){reference-type="ref" reference="sdc"}. With Theorem [Theorem 11](#fmodest){reference-type="ref" reference="fmodest"} and all the conditions listed above, it is sufficient to prove that $\mathrm{Mod}_1(\mathbf{E}) > 0$ a.s. To that end, let $\rho:X\to[0,\infty)$ be admissible for $\mathbf{E}$, that is, $\int_E \rho \ d \lambda_E \geq 1$ for every $E\in\mathcal{E}$. By the Vitali-Carathéodory theorem [Theorem 8](#VCT){reference-type="ref" reference="VCT"} we may assume that $\rho$ is in fact lower semicontinuous. Analogously to Lemma [Lemma 20](#lemma:construction){reference-type="ref" reference="lemma:construction"}, we will construct a new measurable function $\rho_\infty: X \to [0, \infty)$ such that 1. $\int \rho d\mu \geq \int \rho_\infty d \mu$,[\[srandommeasure\]]{#srandommeasure label="srandommeasure"} 2. $\rho_\infty(x_1,y_1)=\rho_\infty(x_2,y_2)$ if $y_1=y_2$,[\[equalrandommeasure\]]{#equalrandommeasure label="equalrandommeasure"} 3. $\exists F\in\mathcal{E}$ such that $\rho(x)\leq \rho_\infty(x)$ for $x\in F$.[\[arandommeasure\]]{#arandommeasure label="arandommeasure"} We need the following notation to assist our proof.
$$\mathcal{A}^m_n = \left\{A \cap E^m_{n,j}: E^m_{n,j} \in \mathcal{E}^m_n \right\} \ \textnormal{and} \ \mathcal{A}_n = \bigcup_{m=1}^{2^n} \mathcal{A}^m_n.$$ ![An illustration of $F$ for three generations.](BrownianMotionMod){#SBM height="1.2 in"} We will first construct a sequence of "piecewise constant" functions $\rho_n$ from $\rho$ as follows. From each of $\mathcal{A}_1^m$, $m=1,2$, choose one element with the smallest average of $\rho$. Specifically, let $F_{1,1} \in \mathcal{A}^1_1$ be such that $$\intbar_{F_{1,1}} \rho \ d\mu \leq \intbar_{A^1_{1,j}} \rho \ d\mu$$ for any $A^1_{1,j} \in \mathcal{A}_1^1$. Similarly, let $F_{1,2} \in\mathcal{A}_1^2$ be such that $$\intbar_{F_{1,2}} \rho \ d\mu \leq \intbar_{A^2_{1,j}} \rho \ d\mu$$ for any $A^2_{1,j} \in\mathcal{A}_1^2$. Such elements exist since $\mathrm{Card}\left(\mathcal{A}_1^m \right) < \infty$ for $m=1,2$. Define $\rho_1: X \to [0, \infty)$ by $$\rho_1\big\rvert_{A^m_{1,j}} \equiv \intbar_{F_{1,m}}\rho d\mu, \mbox{ if } A^m_{1,j} \in \mathcal{A}^m_1,$$ for $m =1,2$, and $\rho_1 = 0$ on $X \setminus A$. Thus $\rho_1$ is the same constant on every element of $\mathcal{A}_1^m$, $m=1,2$. Hence, $\rho_1$ is also constant on $A \cap \pi^{-1}_y(a)$ for any $a \in [0,1]$, and $\int_X \rho d\mu \geq \int_X \rho_1 d\mu$. Continuing by induction, suppose that at step $n-1$ we have chosen elements $F_{n-1,1},\ldots,F_{n-1,2^{n-1}}$ and defined $\rho_{n-1}$ so that it is constant on all elements of $\mathcal{A}^m_{n-1}$, $m=1, \ldots, 2^{n-1}$, and $0$ otherwise.
To construct $\rho_{n}$, from each of the elements from $\mathcal{A}^m_n$ that is also within $F_{n-1,j}$ for $j\in\{1,\ldots,2^{n-1}\}$ chosen above, we will select one with the smallest average of $\rho$; i.e., the elements $F_{n,l}$, $l \in\{1,\ldots,2^{n}\}$, are such that $$\begin{aligned} \intbar_{F_{n,l}} \rho d\mu \leq \intbar_{A_{n,k}} \rho d\mu, \end{aligned}$$ whenever $A_{n,k}, F_{n,l}\subset F_{n-1,j}$ for some $j\in\{1,\ldots,2^{n-1}\}$, $A_{n,k} \in \mathcal{A}_n$ and $\pi_y(A_{n,k})=\pi_y (F_{n,l})$ a.e. Define $\rho_n$ by setting $$\begin{aligned} \rho_n\big\rvert_{A_{n,k}} \equiv \intbar_{F_{n,l}}\rho d\mu, \mbox{ if } \pi_y(A_{n,k})=\pi_y (F_{n,l}) \end{aligned}$$ and $\rho_n = 0$ on $X \setminus A$. Then we have a sequence of Borel measurable functions $\{\rho_n \}$. It follows from the construction above that $$\int_X \rho \ d\mu \geq \int_X \rho_1 \ d\mu \geq \ldots\geq \int_X \rho_n \ d\mu \geq \ldots.$$ For every $z=(x,y)\in X$ we define $$\begin{aligned} \label{definenewrho} \rho_\infty(z) := \liminf_{n\to\infty} \rho_n(z). \end{aligned}$$ Condition [\[srandommeasure\]](#srandommeasure){reference-type="eqref" reference="srandommeasure"} then follows from the definition of $\rho_\infty$ and Fatou's Lemma. Equality [\[equalrandommeasure\]](#equalrandommeasure){reference-type="eqref" reference="equalrandommeasure"} follows from [\[definenewrho\]](#definenewrho){reference-type="eqref" reference="definenewrho"} and the fact that for every $n$ the function $\rho_n$ is constant on all horizontal slices $A \cap Z_a$. Define $$F=\bigcap_{n=1}^{\infty}\bigcup_{j=1}^{2^n} F_{n,j};$$ then $F$ is clearly an element of $\mathcal{E}$. The proof of [\[arandommeasure\]](#arandommeasure){reference-type="eqref" reference="arandommeasure"} follows verbatim the end of the proof of Lemma [Lemma 20](#lemma:construction){reference-type="ref" reference="lemma:construction"} and we omit the details here.
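The smallest-average selection driving the construction of $\rho_n$ can be illustrated by a minimal toy sketch. The helper name and the sample values below are ours, purely for illustration: candidates play the role of the pieces of $\mathcal{A}_n$ sharing a common $y$-projection.

```python
# Hedged sketch of one refinement step in the construction of rho_n: among
# candidate pieces sharing the same y-projection, pick the one with the
# smallest average, and assign that average as the new constant value.

def refine_density(candidates):
    """candidates: lists of sample values of rho over pieces with a common
    y-projection. Returns the smallest average, the new constant value."""
    averages = [sum(c) / len(c) for c in candidates]
    return min(averages)

# Two candidate pieces over the same projection, as in the choice of F_{n,l}.
pieces = [[2.0, 4.0], [1.0, 3.0]]
best = refine_density(pieces)
print(best)  # 2.0

# Replacing every candidate by the smallest average can only decrease the
# total integral, which is the monotone chain of integrals in the proof.
old_integral = sum(sum(c) for c in pieces) / sum(len(c) for c in pieces)
assert best <= old_integral
```

Iterating this step over nested generations produces the monotone sequence $\int \rho \, d\mu \geq \int \rho_1 \, d\mu \geq \int \rho_2 \, d\mu \geq \cdots$ described above.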
Note that by [\[equalrandommeasure\]](#equalrandommeasure){reference-type="eqref" reference="equalrandommeasure"} there exists a unique $\bar{\rho}: [0,1] \to \mathbb{R}$ such that for $\mathcal{L}^1$-a.e. $a \in [0,1]$ we have $\bar{\rho}(a) = \rho_\infty(z)$ whenever $\pi_y(z) = a$. Then, by [\[arandommeasure\]](#arandommeasure){reference-type="eqref" reference="arandommeasure"}, for any $E \in \mathcal{E}$ we have $$\label{newadm} \int_E \rho_\infty(z) \ d \lambda_E = \int_0^1 \bar{\rho}(a) \ da = \int_F \rho_\infty(x) \ d \lambda_F \geq \int_F \rho(x) \ d \lambda_F \geq 1.$$ Thus $\rho_\infty$ is also admissible for $\mathbf{E}$. Recall that for any $a \in [0,1]$, we have $L^a(T_6) > 0$ a.s. by Theorem [Theorem 24](#ltv){reference-type="ref" reference="ltv"}. Since $L^a(T_6)$ is continuous a.s. by Theorem [Theorem 23](#lf){reference-type="ref" reference="lf"}, a.s. there exists a $\delta= \delta(\omega) > 0$ such that $L^a(T_6) > \delta$ for any $a \in [0,1]$. This implies that a.s. $$\label{hitmeasure} l^a\left(\Gamma(W) \cap [0, T_6] \times [0,1] \cap Z_a\right) > \delta(\omega)$$ for any $a \in [0,1]$. It then follows from Remark [Remark 32](#cover){reference-type="ref" reference="cover"} that a.s. $$\label{hestimate} l^a(A \cap Z_a) \geq \frac{1}{2}\delta(\omega)$$ for a.e. $a \in [0,1]$. Finally, using [\[srandommeasure\]](#srandommeasure){reference-type="eqref" reference="srandommeasure"}, the Disintegration Theorem [@AGS05 Theorem $5.3.1$], and the inequalities above, we obtain $$\begin{aligned} \int_X \rho d\mu & \geq \int_X \rho_\infty d\mu = \int_A \rho_\infty d\mu \\ &= \int_0^1 \left[ \int_{A \cap \pi_y^{-1}(a)}\rho_\infty(z) \ d l^a(z) \right] \ da \\ &= \int_0^1 \bar{\rho}(a) l^a(A \cap Z_a)\ da \\ & \geq \frac{\delta(\omega)}{2}\int_0^1 \bar{\rho}(a) \ da \geq \frac{\delta(\omega)}{2}, \end{aligned}$$ where the last two inequalities follow from [\[hestimate\]](#hestimate){reference-type="eqref" reference="hestimate"} and [\[newadm\]](#newadm){reference-type="eqref" reference="newadm"}. This implies that $\mathrm{Mod}_1(\mathbf{E}) > 0$ a.s. and thus finishes the proof. ◻ # Remarks and open problems
{#Conclusion} One of the most widely studied stochastic objects in recent years is the *Schramm Loewner Evolution (SLE)*. SLE was introduced by Oded Schramm in [@Sch00], and there has been significant progress in the last decade. It is a random growth process generated from a family of Riemann mappings $\{g_t(z)\}$, where $\{g_t(z)\}$ is the solution of the following ordinary differential equation: For any $z \in \{\omega \in \mathbb{C}: \Im\omega \geq 0 \} \setminus \{0\}$, $$\label{Loewner} \partial_t g_t(z) = \frac{2}{g_t(z) - \sqrt{\kappa}B(t)}, \ g_0(z) = z.$$ Equation [\[Loewner\]](#Loewner){reference-type="ref" reference="Loewner"} is the Loewner differential equation driven by a standard one-dimensional Brownian motion run at speed $\kappa$. The process can also be viewed as a family of conformally invariant random curves in simply connected planar domains. It has been proven to be the scaling limit of a variety of two-dimensional lattice models in statistical mechanics. Readers may refer to [@Law05] for more information. It is proved in [@RS05] that for all $\kappa \neq 8$ the SLE$_{\kappa}$ trace is a continuous path. It is a simple path for $\kappa \in [0,4]$, a self-intersecting path for $\kappa \in (4,8)$ and space-filling for $\kappa \geq 8$. Thus it is only interesting to study SLE$_{\kappa}$ for $\kappa \in [0,8]$. Beffara proved in [@Bef08] that the Hausdorff dimension of the SLE$_{\kappa}$ trace is $1+ \frac{\kappa}{8}$ for $\kappa \in [0,8]$ almost surely. The papers [@AK08] and [@AS08] study the intersection of the SLE trace with $\mathbb{R}$ or with semi-circles, and their results show that the SLE trace has a product-like structure. Quasisymmetric deformations of SLE appear naturally as the scaling limits of critical models in statistical physics on skewed lattices.
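For readers who wish to experiment, the Loewner equation above admits a standard discretization: freeze the driving term on each time step, so that the incremental maps and their inverses are explicit, and compose the inverses to locate the tip of the trace. The following is a hedged, textbook-style sketch under these assumptions, not code from any of the cited works.

```python
# Hedged sketch of the standard Loewner discretization: over each step the
# driving term is frozen at c, so the incremental map is g(z) = c +
# sqrt((z - c)^2 + 4*dt) and its inverse is explicit. Composing the inverse
# maps in reverse order approximates the tip gamma(t) of the trace.
import cmath
import random

def sle_tip(driving, dt):
    """Approximate tip of the trace after len(driving) steps of size dt."""
    w = complex(driving[-1], 0.0)
    for c in reversed(driving):
        s = cmath.sqrt((w - c) ** 2 - 4 * dt)
        if s.imag < 0:          # pick the branch mapping into the upper half-plane
            s = -s
        w = c + s
    return w

# Sanity check: zero driving (kappa = 0) grows the vertical slit gamma(t) = 2i*sqrt(t).
n, dt = 100, 0.01
tip = sle_tip([0.0] * n, dt)
print(tip)  # close to 2i, since 2i*sqrt(n*dt) = 2i

# A random driving sqrt(kappa)*B(t), kappa = 2, via a Gaussian random walk.
random.seed(0)
kappa, b, drive = 2.0, 0.0, []
for _ in range(n):
    drive.append(b)
    b += (kappa * dt) ** 0.5 * random.gauss(0.0, 1.0)
print(sle_tip(drive, dt).imag > 0)  # True: the tip stays in the upper half-plane
```

With constant driving the scheme is exact, which is why the zero-driving check recovers $2i\sqrt{t}$; for genuinely random driving it is only a first-order approximation of the trace.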
The question of the conformal dimension of SLE can be heuristically formulated in physical terms as follows: Can we decrease the scale of interactions in a critical lattice model by perturbing the lattice? In an effort to answer this question, we pose the following conjecture: **Conjecture 36**. *The $SLE_\kappa$ trace is minimal for conformal dimension almost surely, i.e., the conformal dimension of the $SLE_\kappa$ curve is $1+\frac{\kappa}{8}$ almost surely for any $\kappa \in [0,8]$.* # Generalization of Theorem [Theorem 13](#cd1){reference-type="ref" reference="cd1"} {#appendix} In this appendix, we collect several results that generalize Theorem [Theorem 13](#cd1){reference-type="ref" reference="cd1"}. These results, together with Theorems [Theorem 15](#unioncd1){reference-type="ref" reference="unioncd1"} and [Theorem 16](#unioncd1new){reference-type="ref" reference="unioncd1new"}, could be used to compute the conformal dimension of Cantor sets in a variety of situations. We follow the same notation as defined in Section [3](#CDM){reference-type="ref" reference="CDM"}. Let $X$ be a totally bounded metric space, and let $N(X,\epsilon)$ be the minimal number of sets of diameter at most $\epsilon$ needed to cover $X$. We define the *upper Minkowski dimension* as $$\overline{\dim}_M(X) = \limsup_{\epsilon\to 0}\frac{\log N(X, \epsilon)}{\log (1/\epsilon)}$$ and the *lower Minkowski dimension* as $$\underline{\dim}_M(X) = \liminf_{\epsilon\to 0}\frac{\log N(X, \epsilon)}{\log (1/\epsilon)}.$$ If the two values agree, the common value is simply called the *Minkowski dimension* of $X$ and denoted by $\dim_M(X)$. The following result is a quantitative generalization of Theorem [Theorem 13](#cd1){reference-type="ref" reference="cd1"} in the sense that conditions [\[rdist\]](#rdist){reference-type="eqref" reference="rdist"} and [\[cdiam\]](#cdiam){reference-type="eqref" reference="cdiam"} may fail on a quantitatively small portion of $E$. **Theorem 37**.
*Let $E = \bigcap_{i=1}^\infty \bigcup_{j=1}^{2^i} E_{i,j}$ be a flat metric space with hierarchical structure and let $\hat{E} \subseteq E$ be such that $\overline{\dim}_M(\hat{E}) < \underline{\dim}_M(E)$. Suppose that for any $x \in E \setminus \hat{E}$, we have $$\lim_{i \to \infty}\Delta(E_{i}(x), E'_{i}(x)) = 0$$ and there exists $L = L(x)$ such that $$\frac{1}{L} \leq \frac{\mathrm{diam}(E_{i}(x))}{\mathrm{diam}(E'_{i}(x))} \leq L.$$ Then $\dim_C(E \setminus \hat{E}) \geq 1$.* *Proof.* The only difference between Theorem [Theorem 37](#mcd1){reference-type="ref" reference="mcd1"} and Theorem [Theorem 13](#cd1){reference-type="ref" reference="cd1"} is that we have a subset of exceptional points with a smaller Minkowski dimension. Since $\overline{\dim}_M(\hat{E}) < \underline{\dim}_M(E)$, we claim that there exist an $i \in \mathbb{N}$ and an $E_{i,j} \in \mathcal{E}_i$ such that $\hat{E} \cap E_{i,j} = \emptyset$. Otherwise, $\hat{E}$ would be dense in $E$, so $\hat{E}$ and $E$ would have the same upper and lower Minkowski dimensions, contradicting our assumption. Thus applying Theorem [Theorem 13](#cd1){reference-type="ref" reference="cd1"} to $E_{i,j}$ finishes the proof. ◻ The next theorem is another quantitative generalization of Theorem [Theorem 13](#cd1){reference-type="ref" reference="cd1"}. Intuitively, the result remains valid even after omitting the limsup of a quantitatively small collection of exceptional elements. In the following theorem, we let $\hat{\mathcal{E}}_i \subseteq \mathcal{E}_i$ be a collection of $i^{\mathrm{th}}$-generation elements of $E$ and denote by $\hat{E}_i = \bigcup_{\hat{E}_{i,j} \in \hat{\mathcal{E}}_i } \hat{E}_{i,j}$. We define $$\label{eCantorSet} \hat{E} = \bigcap_{n=1}^\infty \bigcup_{i=n}^\infty \hat{E}_i.$$ **Theorem 38**.
*Let $E = \bigcap_{i=1}^\infty \bigcup_{j=1}^{2^i} E_{i,j}$ be a flat metric space with hierarchical structure and let $\hat{E}$ be defined as in [\[eCantorSet\]](#eCantorSet){reference-type="eqref" reference="eCantorSet"}.* *Suppose that for any $\epsilon>0$, $$\mathrm{Card}(\hat{\mathcal{E}}_i)< 2^{\epsilon i}$$ when $i$ is sufficiently large.* *Suppose moreover that for any $\tilde{E}_{i,j} \in \mathcal{E}_{i-1}\setminus \hat{\mathcal{E}}_{i-1}$, we have $$\label{sdist} \Delta(E_{i,j}, E'_{i,j}) \leq \delta(i)$$ where $\delta(i) \to 0$ as $i \to \infty$, and $$\label{sdiam} \frac{1}{L} \leq \frac{\mathrm{diam}(E_{i,j})}{\mathrm{diam}(E'_{i,j})} \leq L$$ for some $L > 0$ depending only on $E \setminus \hat{E}$.* *Then $\dim_C(E \setminus \hat{E}) \geq 1$.* *Proof.* We first claim that for any $\epsilon> 0$ and any $N \in \mathbb{N}$, there exists an $E_{n,l} \in \mathcal{E}_n$ such that $$\begin{aligned} &\mathrm{Card}\left(\left\{\hat{E}_{i,j} : \hat{E}_{i,j} \subseteq E_{n,l}\right\}\right) = 0 \ \textnormal{for any} \ n \leq i \leq n+N, \label{sbg} \\ &\mathrm{Card}\left(\left\{\hat{E}_{i,j} : \hat{E}_{i,j} \subseteq E_{n,l}\right\}\right) < 2^{\epsilon(i-n)} \ \textnormal{for any} \ i > n+N. \label{sgc} \end{aligned}$$ Fix $\epsilon> 0$ and $N \in \mathbb{N}$. We pick a sufficiently small $\epsilon_1 > 0$ and a sufficiently large $n \in \mathbb{N}$ such that $\mathrm{Card}(\hat{\mathcal{E}}_i) < 2^{\epsilon_1 i}$ when $i \geq n$ and $$\sum_{i=n}^{n+N} 2^{\epsilon_1 i} < 2^{n-1}.$$ This implies that $\mathrm{Card}\left( \bigcup_{i=n}^{n+N} \hat{\mathcal{E}}_i \right) < 2^{n-1}$. Since $\mathrm{Card}\left( \mathcal{E}_n \right) = 2^n$, $$\label{goodsetest} \mathrm{Card}\left(\left\{ E_{n,j} : E_{n,j} \cap \hat{E}_i = \emptyset \ \textnormal{for any} \ n \leq i \leq n+N \right\} \right) > 2^{n-1}.$$ Put simply, the number of $E_{n,j}$'s that contain no element from $\hat{\mathcal{E}}_i$ for any $n \leq i \leq n+N$ is larger than $2^{n-1}$. Fix an $m > n+N$.
Since $\mathrm{Card}(\hat{\mathcal{E}}_m) < 2^{\epsilon_1 m}$ and $$\frac{2^{\epsilon_1 m}}{2^{\epsilon(m-n)}} = 2^{n\epsilon- (\epsilon- \epsilon_1)m},$$ the number of elements $E_{n,k}$ in $\mathcal{E}_n$ that do not satisfy $$\mathrm{Card}\left(\left\{\hat{E}_{m,j} : \hat{E}_{m,j} \subseteq E_{n,k}\right\}\right) < 2^{\epsilon(m-n)}$$ is at most $2^{n\epsilon- (\epsilon- \epsilon_1)m}$. Then the number of elements $E_{n,k}$ that do not satisfy $$\mathrm{Card}\left(\left\{\hat{E}_{i,j} : \hat{E}_{i,j} \subseteq E_{n,k}\right\}\right) < 2^{\epsilon(i-n)}$$ for some $i \in \mathbb{N}$ is bounded from above by $$\label{esub} \sum_{i=n+N}^\infty 2^{n\epsilon- (\epsilon- \epsilon_1)i} = 2^{n\epsilon}\sum_{i=n+N}^\infty \frac{1}{2^{(\epsilon- \epsilon_1)i}} = \frac{2^{n \epsilon_1 + N \epsilon_1 - N \epsilon}}{1-2^{\epsilon_1 - \epsilon}} < 2^{n-2}.$$ The last inequality holds because $\epsilon_1$ is sufficiently small and $n$ is sufficiently large. Combining inequalities [\[goodsetest\]](#goodsetest){reference-type="eqref" reference="goodsetest"} and [\[esub\]](#esub){reference-type="eqref" reference="esub"} proves our claim that there exists an $E_{n,l} \in \mathcal{E}_n$ satisfying conditions [\[sbg\]](#sbg){reference-type="eqref" reference="sbg"} and [\[sgc\]](#sgc){reference-type="eqref" reference="sgc"}. Without loss of generality, we may replace $E$ by $E_{n,l}$. Namely, given some fixed $\epsilon> 0$ and $N \in \mathbb{N}$, $E$ satisfies $$\begin{aligned} &\mathrm{Card}\left(\hat{\mathcal{E}}_{i}\right) = 0 \ \textnormal{for any} \ i \leq N, \\ &\mathrm{Card}\left(\hat{\mathcal{E}}_{i}\right) < 2^{\epsilon i} \ \textnormal{for any} \ i > N. \end{aligned}$$ Moreover, we let $\delta(i) < 1$ for any $i \in \mathbb{N}$. In the following proof, we let $\epsilon$ be sufficiently small and $N$ be sufficiently large; the precise meaning of sufficiency will be explained later in the proof.
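The geometric-series identity used in the counting bound above can be checked numerically. The parameter values in the sketch below are illustrative only, not the ones required by the proof.

```python
# Numerical sanity check of the geometric-series identity
#   sum_{i=n+N}^inf 2^{n*eps - (eps - eps1)*i}
#     = 2^{n*eps1 + N*eps1 - N*eps} / (1 - 2^{eps1 - eps}),
# which holds whenever eps1 < eps. Parameter values are illustrative.

def partial_sum(eps, eps1, n, N, terms):
    return sum(2.0 ** (n * eps - (eps - eps1) * i)
               for i in range(n + N, n + N + terms))

def closed_form(eps, eps1, n, N):
    return 2.0 ** (n * eps1 + N * eps1 - N * eps) / (1.0 - 2.0 ** (eps1 - eps))

eps, eps1, n, N = 0.1, 0.01, 100, 50
s = partial_sum(eps, eps1, n, N, terms=4000)
c = closed_form(eps, eps1, n, N)
print(abs(s - c) / c < 1e-9)   # True: the tail beyond 4000 terms is negligible
print(c < 2.0 ** (n - 2))      # True: the bound 2^{n-2} holds for these parameters
```

The closed form follows from summing the geometric series with ratio $2^{-(\epsilon-\epsilon_1)} < 1$ starting at $i = n+N$, exactly as in the displayed computation.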
Let $f: E \to I$ be an $\eta$-quasisymmetry, let $0 < \alpha < 1$, and assume $\mathrm{diam}(I) = 1$. The notational conventions used here are the same as in Lemma [Lemma 14](#mla){reference-type="ref" reference="mla"} and Theorem [Theorem 13](#cd1){reference-type="ref" reference="cd1"}. In particular, $\hat{I}_{i,j} = f(\hat{E}_{i,j})$, $\hat{I}_i = f(\hat{E}_i)$ and $\hat{I} = f(\hat{E})$. The main idea of this proof is to construct a nonzero measure on $f(E \setminus \hat{E})$ with a suitable upper mass bound for every ball, and then apply the Mass Distribution Principle [Theorem 5](#mdp){reference-type="ref" reference="mdp"} to $f(E \setminus \hat{E})$ to finish the proof. Since $\hat{E} = \limsup \hat{E}_i$, for any $x \in E \setminus \hat{E}$ and sufficiently large $i$, $$\label{sldist} \Delta(E_{i}(x), E'_{i}(x)) \leq \delta(i)$$ and $$\label{sldiam} \frac{1}{L} \leq \frac{\mathrm{diam}(E_{i}(x))}{\mathrm{diam}(E'_{i}(x))} \leq L.$$ Applying Proposition [Proposition 9](#diam){reference-type="ref" reference="diam"} to inequalities [\[sldist\]](#sldist){reference-type="eqref" reference="sldist"} and [\[sldiam\]](#sldiam){reference-type="eqref" reference="sldiam"}, we have that for any $x \in E \setminus \hat{E}$ and for all $i \in \mathbb{N}$, $$\label{qsdist} \Delta(I_{i}(x), I'_{i}(x)) \leq \sigma(i)$$ where $\sigma(i) \to 0$ as $i \to \infty$ and $$\label{qsdiam} \frac{1}{M} \leq \frac{\mathrm{diam}(I_{i}(x))}{\mathrm{diam}(I'_{i}(x))} \leq M$$ for some $M = M(x, L, \sigma, \eta)$. We now define a measure $\mu$ on $I$ in the same way as Equation [\[measure\]](#measure){reference-type="eqref" reference="measure"} in Lemma [Lemma 14](#mla){reference-type="ref" reference="mla"}. First we let $\mu(I)= 1$. Then for any $I_{i,j} = f(E_{i,j})$, we define $$\label{newmeasure} \mu(I_{i,j}) = \frac{\mathrm{diam}(I_{i,j})^{\alpha}}{\mathrm{diam}(I_{i,j})^{\alpha} + \mathrm{diam}(I'_{i,j})^{\alpha}} \mu(\tilde{I}_{i,j}).$$ Thus $\mu$ is a Borel measure.
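The splitting rule defining $\mu$ above is a binary multiplicative cascade: each piece passes to its two children masses in the ratio $\mathrm{diam}^\alpha : \mathrm{diam}'^\alpha$. A hedged toy version with synthetic diameters and our own function names checks the basic consistency of such a measure:

```python
# Hedged toy version of the measure construction: mass splits between the two
# children of each piece in the ratio diam^alpha : diam'^alpha. The diameters
# here are synthetic (fixed ratios), chosen only to verify that the masses are
# consistent: children of each piece sum to the parent's mass, total mass 1.

def split_mass(mass, diam, diam_sib, alpha):
    return mass * diam ** alpha / (diam ** alpha + diam_sib ** alpha)

def cascade(depth, alpha, mass=1.0, diam=1.0):
    """Leaf masses of a depth-`depth` binary cascade with shrinking diameters."""
    if depth == 0:
        return [mass]
    d1, d2 = 0.6 * diam, 0.4 * diam   # unequal child diameters, bounded ratio
    m1 = split_mass(mass, d1, d2, alpha)
    m2 = mass - m1                    # the sibling receives the complement
    return cascade(depth - 1, alpha, m1, d1) + cascade(depth - 1, alpha, m2, d2)

leaves = cascade(depth=5, alpha=0.8)
print(len(leaves))                     # 32 leaves at depth 5
print(abs(sum(leaves) - 1.0) < 1e-12)  # True: mass is conserved at every split
```

Because the two splitting factors sum to one, every leaf mass lies between $p^{\,\mathrm{depth}}$ and $q^{\,\mathrm{depth}}$ once the diameter ratios are bounded, mirroring the bound on the splitting factor used below.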
Let $x \in E \setminus \hat{E}$ with $f(x) \in I_{i,j}$ for some $I_{i,j}$. Then, by Lemma [Lemma 14](#mla){reference-type="ref" reference="mla"}, $$\mu(I_{i,j}) \leq C \mathrm{diam}(I_{i,j})^{\alpha}$$ for some constant $C > 0$ depending on $x$, $\eta$ and $\alpha$. Assume for the moment that $\mu\left(I \setminus \hat{I}\right) > 0$. Following the proof of Theorem [Theorem 13](#cd1){reference-type="ref" reference="cd1"}, there exists a measurable set $I' \subset I \setminus \hat{I}$ such that $\mu(I') > 0$ and, for any $p \in I'$ and $r > 0$, $$\mu\left(B(p,r) \cap I'\right) \leq G r^\alpha$$ for some constant $G > 0$ depending on $I'$, $\eta$ and $\alpha$. Applying the Mass Distribution Principle [Theorem 5](#mdp){reference-type="ref" reference="mdp"} to $I'$ yields $\dim_H(I \setminus \hat{I}) \geq \alpha$, which finishes the proof. It therefore suffices to prove that $\mu\left(I \setminus \hat{I}\right) > 0$. Let $\tilde{I}_{i,j} \subseteq I \setminus \hat{I}_{i-1}$. Recall that $\delta(i) < 1$; then by Proposition [Proposition 9](#diam){reference-type="ref" reference="diam"}, $$\begin{aligned} \frac{\mathrm{diam}(I_{i,j})}{\mathrm{diam}(I'_{i,j})} & = \frac{\mathrm{diam}(I_{i,j})}{\mathrm{diam}(\tilde{I}_{i,j})} \cdot \frac{\mathrm{diam}(\tilde{I}_{i,j})}{\mathrm{diam}(I'_{i,j})} \\ & \leq 2\eta\left( \frac{L}{L+1} \right) \eta\left( L+2 \right).
\end{aligned}$$ Let $$\lambda = 2\eta\left( \frac{L}{L+1} \right) \eta\left( L+2 \right), \quad p = \frac{1}{\lambda^\alpha + 1} \quad \textrm{and} \quad q = \frac{\lambda^\alpha}{\lambda^\alpha + 1}.$$ This implies that $$\label{binodiam} p \leq \frac{\mathrm{diam}(I_{i,j})^{\alpha}}{\mathrm{diam}(I_{i,j})^{\alpha} + \mathrm{diam}(I'_{i,j})^{\alpha}} \leq q.$$ Define $$\hat{\mathcal{E}}'_i = \left\{\hat{E}_{i,j} \in \hat{\mathcal{E}}_i : \hat{E}_{i,j} \nsubseteq \bigcup_{l=1}^{i-1}\hat{E}_l \right\}$$ and $$\hat{E}'_i = \bigcup_{\hat{E}_{i,j} \in \hat{\mathcal{E}}'_i} \hat{E}_{i,j}.$$ Clearly $\mathrm{Card}\left(\hat{\mathcal{E}}'_n\right) \leq \mathrm{Card}(\hat{\mathcal{E}}_n)$ and $\bigcup_{i=1}^{n}\hat{E}'_i = \bigcup_{i=1}^{n}\hat{E}_i$ for any $n \in \mathbb{N}$. Let $\hat{I}_{n,j} \in \hat{\mathcal{I}}'_n = f\left(\hat{\mathcal{E}}'_n\right)$. Since $\hat{I}_{n,j} \cap \bigcup_{l=1}^{n-1}\hat{I}_l = \emptyset$, we have $\hat{I}_{n,j} \nsubseteq \hat{I}_i$ for any $i < n$. Thus by equation [\[newmeasure\]](#newmeasure){reference-type="eqref" reference="newmeasure"} and inequality [\[binodiam\]](#binodiam){reference-type="eqref" reference="binodiam"}, we have $$\mu\left( \hat{I}_{n,j} \right) \leq q^n.$$ This implies that $$\mu\left( \hat{I}'_n \right) = \sum_{\hat{I}_{n,j} \in \hat{\mathcal{I}}'_n} \mu\left( \hat{I}_{n,j} \right) \leq \mathrm{Card}\left(\hat{\mathcal{I}}'_n\right) q^n \leq \mathrm{Card}(\hat{\mathcal{E}}_n) q^n.$$ Then $$\mu\left( \bigcup_{i=1}^{\infty}\hat{I}_i \right) = \mu\left( \bigcup_{i=1}^{\infty}\hat{I}'_i \right) = \mu\left( \bigcup_{i=N+1}^{\infty}\hat{I}'_i \right) \leq \sum_{i=N+1}^\infty \mathrm{Card}(\hat{\mathcal{E}}_i) q^i \leq \sum_{i=N+1}^\infty 2^{\epsilon i} q^i.$$ Since $q < 1$, we may take $\epsilon$ small enough that $2^{\epsilon} q < 1$ and $N$ large enough that this geometric tail is less than $1$; hence $$\mu(I \setminus \hat{I}) = \mu\left(I \setminus \bigcup_{i=1}^{\infty}\hat{I}_i \right) = 1- \mu\left( \bigcup_{i=1}^{\infty}\hat{I}_i \right) \geq 1- \sum_{i=N+1}^\infty 2^{\epsilon i} q^i > 0.$$ ◻

[^1]: Key words: Brownian motion, quasisymmetry, conformal dimension